# Multiset Transformer: Advancing Representation Learning In Persistence Diagrams

Anonymous authors

Paper under double-blind review

## Abstract

To improve persistence diagram representation learning, we propose the Multiset Transformer. This is the first neural network that utilizes attention mechanisms specifically designed for multisets as inputs and offers rigorous theoretical guarantees of permutation invariance. The architecture integrates multiset-enhanced attention with a pool-decomposition scheme, allowing multiplicities to be preserved across equivariant layers. This capability enables full leverage of multiplicities while significantly reducing both computational and spatial complexity compared to the Set Transformer. Additionally, our method can greatly benefit from clustering as a preprocessing step to further reduce complexity, an advantage the Set Transformer does not possess. Experimental results demonstrate that the Multiset Transformer outperforms existing neural network methods in the realm of persistence diagram representation learning.

## 1 Introduction

In recent years, the field of machine learning has seen the growing importance of multisets, often referred to as bags. Multisets extend traditional sets by allowing multiple instances of identical elements. Descriptors based on histograms and the bag-of-words model, both examples of multisets, are not only common in Multiple Instance Learning (Dietterich et al., 1997; Babenko et al., 2010; Quellec et al., 2017), but also in natural language processing and text mining. Furthermore, they are recognized as among the primary representation techniques for tasks such as object categorization, as well as image and video recognition (Dalal & Triggs, 2005; Zhang et al., 2010; Welleck et al., 2018). This widespread use highlights the increasing importance and versatility of multisets in machine learning. Given their prevalence and critical role, learning effective representations of multisets becomes essential.

In the realm of topological data analysis (TDA), persistence diagrams (PDs) stand out as a pivotal descriptor for persistent homology (PH), offering a multi-scale depiction of a space's intrinsic topological attributes (Edelsbrunner et al., 2002). Such topological insights have proven instrumental in deciphering complex biological structures, such as neural connections (Giusti et al., 2016), and have found applications in material science for analyzing porosity and nanomaterial structures (Nakamura et al., 2015). The recent fusion of PH-derived features with machine learning has enhanced both model performance and interpretability (Hofer et al., 2017; Horn et al., 2021). Nevertheless, the intrinsic multiset characteristics of PDs, as illustrated in Figure 1, present significant challenges for their direct integration into traditional machine learning models. These models predominantly necessitate inputs in a vector format, thus complicating the utilization of PDs. The transformation of PDs into vectors is known as vectorization (Ali et al., 2023), a task that our MST architecture is designed to address.

Our contributions in this paper can be outlined in two primary areas: the introduction of a novel Multiset Transformer (MST) and its subsequent application in PD representation learning. The MST architecture is uniquely tailored to accept multisets as input, leveraging multiplicities to allocate greater attention to items with higher frequencies.
This innovative design allows MST to utilize the multiplicities in multiset inputs while significantly reducing spatial and computational complexity compared to the Set Transformer. When applied to PD representation learning, MST consistently outperforms existing methodologies across a majority of the datasets we evaluated. Furthermore, our experimental results indicate that MST admits an effective approximation obtained by clustering the multiset prior to processing.

![1_image_0.png](1_image_0.png)

Figure 1: Persistence diagram examples. PDs are represented as point sets in $\mathbb{R}^2$ above the diagonal. Each point's size indicates its multiplicity, and its color reflects the distance from the diagonal. The left PD contains 184 points, with 183 being distinct. After applying DBSCAN clustering, the diagram is reduced to 54 distinct points, as depicted in the right figure. Both PDs are characterized as multisets.

## 2 Related Works

The transformer model, as introduced by Vaswani et al. (2017), has revolutionized the field of deep learning, particularly within natural language processing (NLP). Its inception has spurred the development of numerous variants tailored to various data structures and applications, as outlined in the extensive surveys by Lin et al. (2022) and Khan et al. (2022). A prominent example is the Set Transformer (Lee et al., 2019), which offers a novel approach to processing unordered sets. However, to the best of our knowledge, this work presents the first transformer variant specifically designed for handling multisets.

Persistent homology and its PD vectorization have played a pivotal role in the integration of TDA into machine learning, as evidenced by works such as Dey & Wang (2022) and Ali et al. (2023). Traditional vectorization methods, including persistence landscapes (Bubenik et al., 2015) and persistence images (Adams et al., 2017), have garnered significant attention. The comprehensive survey by Ali et al. (2023) offers a structured framework, clarifying the various vectorization techniques available. In addition to these foundational methods, there has been a growing interest in adapting machine learning architectures to this domain. For instance, Hofer et al. (2017) introduced an innovative input layer for deep neural networks that processes topological signatures, computing a learnable parameterized projection. This idea was further refined by the same authors, who developed a neural network layer adept at handling barcodes by projecting points using parameterized functionals, as detailed in Hofer et al. (2019). Another notable contribution is PersLay, a specialized neural network layer for processing PDs, proposed by Carrière et al. (2020). The Persformer architecture, introduced by Reinauer et al. (2021), exemplifies the application of transformers to PD vectorization, presenting a transformer-centric methodology for PDs.

Notably, our contribution diverges from the aforementioned works in a distinctive manner. The MST uniquely treats PDs as multisets rather than viewing them as lists of points that may be duplicated. This approach enables the model to incorporate a clustering phase before vectorization, effectively tackling the inherent computational challenges within this domain.

## 3 Preliminaries

## 3.1 Multisets And Permutation Properties

A multiset can be viewed as an extension of the conventional set, permitting the inclusion of multiple occurrences of identical elements.
Formally, a multiset is defined by a *base set* $X$ accompanied by its multiplicity map $M : X \to \mathbb{Z}^{+}$. To illustrate, consider the multiset $\{a, b, b\}$. Here, the base set $X$ is $\{a, b\}$, and the associated multiplicity map $M$ is such that $M(a) = 1$ and $M(b) = 2$.

Inherent to their definitions, both sets and multisets are indifferent to the order of their elements. Consequently, functions defined on sets or multisets inherently possess the property of permutation invariance.

Definition 3.1 (Permutation Invariance). A function $f$ is said to be permutation invariant if the order of the components in its input vector does not influence its output. Formally, given any permutation $\sigma$ of the set of indices $\{1, 2, \ldots, n\}$ corresponding to a vector $\mathbf{x}$, $f$ is permutation invariant if and only if:

$$f\left(x_{1},x_{2},\ldots,x_{n}\right)=f\left(x_{\sigma(1)},x_{\sigma(2)},\ldots,x_{\sigma(n)}\right),\tag{1}$$

for all such permutations $\sigma$.

Conversely, while the concept of permutation invariance revolves around the indifference to order, there exists a contrasting property where the order is preserved, termed permutation equivariance.

Definition 3.2 (Permutation Equivariance). A vector-valued function $f = (f_1, \ldots, f_n)$ is characterized as permutation equivariant if any permutation applied to the input vector's elements results in a corresponding permutation of the output components, ensuring that the functional relationship remains intact. Formally, for every permutation $\sigma$ of the set of indices $\{1, 2, \ldots, n\}$:

$$f\left(x_{\sigma(1)},\ldots,x_{\sigma(n)}\right)=\left(f_{\sigma(1)}(\mathbf{x}),\ldots,f_{\sigma(n)}(\mathbf{x})\right),\tag{2}$$

for all such permutations $\sigma$.

## 3.2 Pool-Decomposition Scheme

To achieve permutation invariance, the pool-decomposition scheme is commonly employed. Given a set $X = \{x_1, x_2, \ldots, x_n\} \in \mathcal{X}$, we define transformation functions $\phi : \mathcal{X} \to \mathbb{R}^{h}$ and $\rho : \mathbb{R}^{h} \to \mathbb{R}^{d}$, where $h$ and $d$ denote the hidden and output dimensions, respectively. The operator pool signifies a permutation-invariant pooling operation, such as sum, average, or max. Thus, a function $f$ on $X$ is expressed as:

$$f(X)=\rho\left(\operatorname*{pool}_{i=1}^{n}\phi\left(x_{i}\right)\right)\tag{3}$$

This approach is referenced in multiple studies, including Ravanbakhsh et al. (2016); Qi et al. (2017); Zaheer et al. (2017); Carrière et al. (2020); Reinauer et al. (2021). In our work, we adopt the pool-decomposition scheme as a cornerstone for our MST design.
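For concreteness, the following is a minimal NumPy sketch of Equation (3); the particular $\phi$ and $\rho$ (small affine maps here) are illustrative placeholders, not the layers used in our experiments. It also checks permutation invariance numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder transformations phi: R^2 -> R^8 and rho: R^8 -> R^4.
W_phi = rng.normal(size=(2, 8))
W_rho = rng.normal(size=(8, 4))

def f(X):
    """Pool-decomposition: f(X) = rho(pool_i phi(x_i)) with sum pooling."""
    H = np.tanh(X @ W_phi)        # phi applied row-wise to each element
    pooled = H.sum(axis=0)        # permutation-invariant pooling
    return pooled @ W_rho         # rho on the pooled vector

X = rng.normal(size=(5, 2))       # a set of five 2-d points
perm = rng.permutation(5)
assert np.allclose(f(X), f(X[perm]))  # output is invariant to element order
```

Any permutation-invariant pooling operator (sum, average, max) could replace the sum here without affecting the invariance property.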
## 4 Problem Setup

From a machine learning perspective, the problem can be formally described as follows: Let $\mathcal{D}$ denote a bounded, finite multiset space representing the data space, let $\mathcal{H}$ denote a Hilbert space serving as the feature space, and let $\mathcal{Y}$ represent the target space. Consider a learning system characterized by the mappings

$${\mathcal{D}}\ {\xrightarrow{f_{\theta}}}\ {\mathcal{H}}\ {\xrightarrow{g_{\phi}}}\ {\mathcal{Y}},$$

where the function $f_\theta$, parameterized by $\theta$, maps elements of $\mathcal{D}$ to representations in $\mathcal{H}$, and the function $g_\phi$, parameterized by $\phi$, predicts outcomes in $\mathcal{Y}$ based on these representations. The objective is to develop and refine the representation $f_\theta$ such that $g_\phi$ can achieve better predictions by minimizing the loss function $\mathcal{L}(g_\phi(f_\theta(x)), y)$, where $(x, y) \in \mathcal{D} \times \mathcal{Y}$. To illustrate, in this paper we employ MST as the model for the representation function $f_\theta$.

From an application perspective, the problem centers on acquiring representations of PDs suitable for classification tasks. These PDs, which are fundamentally multisets as depicted in Figure 1, are derived from graphs. Inherently, PDs do not possess intrinsic bounds or finite limits; yet, for practical purposes, they are typically preprocessed into bounded, finite multisets. For the purposes of this discussion, it is presupposed that PDs are treated as bounded and finite multisets. This representation learning framework adheres to the key criteria outlined subsequently:

1. **Permutation Invariance**: Given the fundamental properties of multisets, the representation must be order-agnostic. This ensures a consistent representation, irrespective of the ordering of elements.
2. **Practical Feasibility**: The training process for representations should be computationally feasible, emphasizing efficiency and scalability, particularly for large-scale datasets.
3. **Feature Retention**: It is imperative for the representation to capture and retain key features. This enables subsequent machine learning tasks to effectively exploit the data's underlying structure.

By addressing these priorities, we aim to bridge the gap between multiset representation learning and its applications, ensuring that the complex structure of the data remains informative throughout the representation process.

## 5 Multiset Transformer

The Multiset Transformer (MST) is based on the fundamental idea of leveraging the multiplicity inherent in multisets to influence the attention score function. Essentially, the objective is to steer the model's attention towards items with relatively higher multiplicities. This is grounded in the belief that items with higher multiplicities may carry more importance or relevance in certain contexts, and thus should be given more weight during the attention process. To provide a comprehensive explanation of the MST, it is imperative to first explain the underlying mechanisms of attention.

## 5.1 Scaled Dot-Product Attention Mechanism

The attention mechanism we employ is consistent with the one presented by Vaswani et al. (2017). Given $n$ query vectors $Q \in \mathbb{R}^{n \times d_q}$, $m$ key vectors $K \in \mathbb{R}^{m \times d_k}$, and $m$ value vectors $V \in \mathbb{R}^{m \times d_v}$, where $d_q$, $d_k$, and $d_v$ represent their respective dimensions, the attention function $\operatorname{Att}(Q, K, V)$ is designed to map queries $Q$ to outputs using the key-value pairs. This is mathematically represented as:

$$\operatorname{Att}(Q,K,V;\omega)=\omega(QK^{\top})V.\tag{4}$$

The pairwise dot product $QK^{\top} \in \mathbb{R}^{n \times m}$ serves as a measure of similarity between each pair of query and key vectors. The weights for this dot product are computed using the activation function $\omega$. Consequently, the output $\omega(QK^{\top})V$ is essentially a weighted sum of $V$. Here, a value vector from $V$ receives a higher weight if its associated key vector has a larger dot product with the query. Note that the dot product $QK^{\top}$ necessitates $d_q = d_k$ for dimensional consistency. Building upon this, we specifically utilize the scaled dot-product attention in our construction, which is given by:

$$\operatorname{Att}(Q,K,V)=\operatorname{softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V.\tag{5}$$

This formulation ensures that the attention weights are appropriately normalized, and the scaling factor $\sqrt{d_k}$ aids in stabilizing the magnitudes of the dot products, especially when $d_k$ is large.
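For reference, a minimal NumPy sketch of Equation (5); shapes follow the notation above, and the dimensions chosen are illustrative.

```python
import numpy as np

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Att(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in Equation (5)."""
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (n, m) attention weights
    return weights @ V                         # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))   # n = 4 queries, d_q = d_k = 16
K = rng.normal(size=(6, 16))   # m = 6 keys
V = rng.normal(size=(6, 32))   # m = 6 values, d_v = 32
out = scaled_dot_product_attention(Q, K, V)    # shape (4, 32)
```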
Furthermore, our architecture incorporates multihead attention, a technique that facilitates capturing a diverse range of relationships and dependencies within the input data. Each attention head operates independently, allowing the model to focus on different aspects of the input simultaneously. Additional details on multihead attention can be found in the Appendix for clarity.

## 5.2 Multiset Attention Mechanisms In The Transformer

Within the framework of the MST, attention is employed in two distinct manners: self-attention and attention with learnable queries. To clarify these mechanisms, we first establish the necessary notations and their interpretations. A multiset is denoted as $(X, M_X)$, where $X$, the base set, is represented as an array of points in $\mathbb{R}^{n \times d}$. The multiplicities associated with these points are specified by $M_X \in \mathbb{R}^{n \times 1}$, where each element of $M_X$ corresponds to the multiplicity of the respective element in $X$. It is important to note that a multiset reduces to a conventional set when all elements of $M_X$ are equal to unity, that is, $(M_X)_i = 1$ for each $i$.

## 5.2.1 Multiset-Enhanced Attention Mechanism

In this section, we introduce the concept of multiset attention, a novel approach designed to handle input multisets and their associated multiplicities. The primary motivation behind this mechanism is to incorporate the multiplicity information into the traditional attention mechanism, thereby enhancing its ability to focus on elements with larger multiplicities. Given input base sets $Q \in \mathbb{R}^{n \times d}$ and $X \in \mathbb{R}^{m \times d}$, along with their respective multiplicities $M_Q \in \mathbb{R}^{n \times 1}$ and $M_X \in \mathbb{R}^{m \times 1}$, we define the multiset-enhanced attention weights as:

$$A(Q,X):=\left(\text{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+\alpha B\right)X,\tag{6}$$
$$B:=\frac{(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}}{\|(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}.\tag{7}$$

Here, $\|\cdot\|_F$ denotes the Frobenius norm, which is utilized to normalize this new term. The parameter $\alpha$ is learnable, allowing the model to adjust its influence during training. $\varepsilon$ is a small constant used to avoid division by zero. Intuitively, the introduction of the multiplicity term (7) enables the attention mechanism to allocate greater emphasis to elements with higher multiplicities. This enhancement augments the mechanism's ability to selectively focus on the most salient elements within a multiset.

## 5.2.2 Multiset Self-Attention

Self-attention, as described by Vaswani et al. (2017), is characterized by the attention scores being derived directly from the input. This mechanism inherently captures the intra-correlation present within the input data. Building upon the multiset attention framework discussed in the previous sections, we extend this concept to introduce multiset self-attention. Given an input matrix $X \in \mathbb{R}^{m \times d}$ with multiplicity $M_X \in \mathbb{R}^{m \times 1}$, multiset self-attention is achieved by setting $Q = K = V = X$, leading to the self-attention $A(X, X)$. This self-attention $A(X, X)$ is notable for its property of permutation equivariance, which we formalize in the following theorem.

Theorem 5.1. *The multiset self-attention, represented as $A(X, X)$, is permutation equivariant.*
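As a concrete sketch of Equations (6) and (7) and of Theorem 5.1, the following minimal NumPy implementation builds the biased attention and checks equivariance numerically. The values of $\alpha$ and $\varepsilon$ are illustrative; in the model, $\alpha$ is learnable.

```python
import numpy as np

def multiset_attention(Q, X, M_Q, M_X, alpha=0.5, eps=1e-8):
    """A(Q, X) of Equations (6)-(7): scaled dot-product attention plus a
    Frobenius-normalized multiplicity bias B."""
    d = X.shape[-1]
    logits = Q @ X.T / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)            # row-wise softmax
    outer = (M_Q - 1.0) @ (M_X - 1.0).T              # multiplicity interactions
    B = outer / (np.linalg.norm(outer) + eps)        # Frobenius normalization
    return (w + alpha * B) @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                          # base set: 6 distinct points
M_X = rng.integers(1, 5, size=(6, 1)).astype(float)  # their multiplicities
perm = rng.permutation(6)
out = multiset_attention(X, X, M_X, M_X)             # multiset self-attention A(X, X)
out_p = multiset_attention(X[perm], X[perm], M_X[perm], M_X[perm])
assert np.allclose(out[perm], out_p)                 # equivariance (Theorem 5.1)
```

Note that when all multiplicities equal one, $B$ vanishes and the mechanism reduces to the scaled dot-product attention of Equation (5).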
## 5.2.3 Multiset Attention With Learnable Queries

In the previous section, we introduced a permutation equivariant multiset self-attention. However, to ensure the permutation invariance property in an MST, it is necessary to incorporate a permutation invariant operator, i.e., the pool operator in Equation (3). The core of our approach lies in the multiset attention with learnable queries. Specifically, we set $K = V = X$ and introduce a learnable matrix $Q \in \mathbb{R}^{n \times d}$, where $n$ is a user-defined parameter. It is worth noting that the query is independent of the input and is shared across all input instances, enabling the capture of common features during training. To account for multiplicity, we define the multiset attention with learnable queries as:

$$A_{Q}(X):=\left(\text{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+B\right)X,\tag{8}$$
$$B:=\frac{M_{\alpha}(M_{X}-\mathbf{1})^{\top}}{\|M_{\alpha}(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon},\tag{9}$$

where $M_\alpha \in \mathbb{R}^{n \times 1}$ are learnable parameters. Similarly, we formalize its permutation invariance in the following theorem.

Theorem 5.2. *The multiset attention with learnable queries $A_Q(X)$ is permutation invariant.*

The terms introduced in Equations (7) and (9) are not the only possible choices, yet they are carefully selected for their alignment with several key aspects of our framework. Firstly, these multiplicity bias terms successfully incorporate biases associated with multiplicities into the attention weights, aligning with our philosophy of assigning attention based on multiplicities. Moreover, they are key in maintaining the permutation equivariant or invariant properties, which are important for our model's integrity. Finally, they demonstrate mathematical consistency by reducing to zero for set inputs: when $M_Q = \mathbf{1}$ and $M_X = \mathbf{1}$, these attention mechanisms degenerate to the original ones.

## 5.3 Multiset Attention Blocks

Multiset self-attention and multiset attention with learnable queries are crucial in the construction of the MST. To ensure clarity and maintain consistency with existing literature, we adopt and slightly modify terminologies introduced by Lee et al. (2019). The cornerstone of our proposed architecture is the Multiset Attention Block (MAB). This block can be described using the following formulation:

$$\mathrm{MAB}(Q,X):=\mathrm{LN}(H+\mathrm{FFN}(H)),\tag{10}$$
$$H:=\mathrm{LN}(Q+A(Q,X)).\tag{11}$$

Here, FFN denotes a position-wise (specifically, row-wise) feedforward layer. The symbol LN represents layer normalization, as introduced by Ba et al. (2016). Based on the MAB, we construct several foundational components important for the MST. The multiset attention block with learnable queries, denoted as MAB$_Q$, is formulated for a given set $X$. Mathematically, it can be expressed as:

$$\mathrm{MAB}_{Q}(X):=\mathrm{LN}(H+\mathrm{FFN}(H)),\tag{12}$$
$$H:=\mathrm{LN}(Q+A_{Q}(X)).\tag{13}$$

The Multiset Self-Attention Block (SAB) is a specialized case of the MAB wherein the input set serves as both the key and value. This block can be represented as:

$$\mathrm{SAB}(X):=\mathrm{MAB}(X,X).\tag{14}$$

Inspired by the Induced Set Attention Block (ISAB) as presented in Lee et al. (2019), we also introduce the Induced Multiset Attention Block (IMAB). This block is designed to provide similar functionality but with reduced computational demands. It is defined as:

$$\mathrm{IMAB}(X):=\mathrm{MAB}\left(X,\mathrm{MAB}_{Q}(X)\right).\tag{15}$$

It is evident that both SAB and IMAB maintain permutation equivariance. In contrast, MAB$_Q$ exhibits permutation invariance.

## 5.4 Multiset Transformer Overall Architecture

The MST is formulated on the foundation of a pool-decomposition scheme, as articulated in Equation (3).
The rationale for this architectural decision is twofold. First, it guarantees that the model attains a high level of expressiveness, facilitating the capture of complex patterns and interrelationships within the data. Second, it ensures that the model is permutation invariant, a requirement dictated by the inherent characteristics of a multiset. The overall architecture of the MST, detailing its various components and their interconnections, is illustrated in Figure 2.

![6_image_0.png](6_image_0.png)

Figure 2: MST architecture. Base set $X$ with multiplicities $M_X$ is processed by equivariant layers, which preserve permutation equivariance. The representation output $R$ is generated by an invariant layer, which also takes the multiplicities $M_X$ as input.

Within the given figure, a multiset is denoted as $(X, M_X)$, where $X$ represents the base set, and $M_X$ indicates its corresponding multiplicities. The architecture consists of two main components: the Equivariant Layers and an Invariant Layer. The Equivariant Layers are structured as a sequence of permutation equivariance blocks, which can be either SABs or IMABs. The outputs from these layers, denoted as $H$, possess permutation equivariance. Given that $H$ maintains permutation equivariance in relation to $X$, it follows that the multiplicities $M_X$ are intrinsically the multiplicities of $H$. This architectural design ensures that the multiplicities not only enrich the Equivariant Layers but also strengthen the subsequent Invariant Layer. The Invariant Layer can be constructed either by employing invariant operators or by applying the MAB$_Q$ operation. This procedure ultimately results in the final output $R$, serving as the representation of the multiset. Therefore, a conventional architecture wherein the Equivariant Layers are constructed via IMAB and the Invariant Layer is realized by MAB$_Q$ can be expressed as:

$$H=\mathrm{IMAB}(\mathrm{IMAB}(\ldots\mathrm{IMAB}(X)\ldots)),\tag{16}$$
$$R=\mathrm{MAB}_{Q}(H).\tag{17}$$

This architecture ensures a balance between expressiveness and complexity, making it suitable for a wide range of applications.

## 5.5 Complexity Analysis Of Multiset Transformer And Set Transformer

To thoroughly highlight the advantages of the MST, we conduct a complexity analysis comparing it to the Set Transformer (ST), particularly focusing on scenarios involving multiset inputs. Notably, the attention mechanism significantly influences the complexity of these architectures, forming the core of our analysis. For a multiset $X \in \mathbb{R}^{n \times d}$, where $d$ is fixed, accompanied by its multiplicity information $M_X \in \mathbb{R}^{n \times 1}$, it is important to acknowledge that the ST processes inputs in the form of lists. When the multiset is expanded into such a list according to $M_X$, the size of the list can scale to $O(nm)$, with $m$ representing the maximum multiplicity within $M_X$. Consequently, the complexities of the Set Attention Block (SAB) and Induced Set Attention Block (ISAB) operations in the Set Transformer are $O(n^2 m^2)$ and $O(nmq)$, respectively. Here, $q$ denotes the number of inducing points in ISAB. In contrast, the MST, designed to handle multiplicities without duplicating elements, exhibits complexities of $O(n^2)$ and $O(nq)$ for the SAB and IMAB, respectively. This comparison underscores the inherent lower complexity of the MST when processing multiset inputs. The space complexity exhibits the same favorable behavior, with $d$ held constant.
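To make the data flow of Figure 2 and Equations (16) and (17) explicit, the following is a schematic NumPy sketch of the attention skeleton only: layer normalization, the feedforward sublayers, and the per-layer projection weights of the full MAB blocks (Equations (10)-(13)) are omitted for brevity, and all parameters are randomly initialized purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_induce, n_seeds = 4, 8, 2

# "Learnable" parameters, randomly initialized here for illustration.
Q_induce = rng.normal(size=(n_induce, d))    # inducing queries inside IMAB
M_alpha_i = rng.normal(size=(n_induce, 1))   # learnable multiplicity queries (Eq. 9)
Q_seed = rng.normal(size=(n_seeds, d))       # queries of the invariant MAB_Q layer
M_alpha_s = rng.normal(size=(n_seeds, 1))

def A_Q(X, M_X, Q, M_alpha, eps=1e-8):
    """Attention with learnable queries, Equations (8)-(9)."""
    logits = Q @ X.T / np.sqrt(X.shape[-1])
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    outer = M_alpha @ (M_X - 1.0).T
    B = outer / (np.linalg.norm(outer) + eps)
    return (w + B) @ X

def imab(X, M_X):
    """IMAB skeleton: attend from X to a small induced summary (Eq. 15)."""
    induced = A_Q(X, M_X, Q_induce, M_alpha_i)       # invariant summary, n_induce rows
    logits = X @ induced.T / np.sqrt(X.shape[-1])
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ induced                               # equivariant: one row per input

X = rng.normal(size=(50, d))
M_X = rng.integers(1, 6, size=(50, 1)).astype(float)
H = imab(imab(X, M_X), M_X)           # equivariant layers (Eq. 16); M_X carries over
R = A_Q(H, M_X, Q_seed, M_alpha_s)    # invariant layer (Eq. 17): fixed-size output
perm = rng.permutation(50)
R2 = A_Q(imab(imab(X[perm], M_X[perm]), M_X[perm]), M_X[perm], Q_seed, M_alpha_s)
assert np.allclose(R, R2)             # the representation is permutation invariant
```

Note that the induced summary has a fixed number of rows, so each IMAB layer costs $O(nq)$ rather than the $O(n^2)$ of full self-attention, mirroring the analysis above.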
## 6 Experiments

In the experiments section, our studies are divided into two primary parts: synthetic experiments and persistence diagram (PD) representation learning. The goal of the synthetic experiments is to provide a preliminary study confirming that the framework operates as expected. The PD representation learning experiments, in turn, aim to demonstrate the effectiveness and relevance of our framework on real-world datasets. Through these experiments, we attempt to substantiate both the theoretical foundations and the practical usefulness of our proposed approach.

## 6.1 Synthetic Experiments

In the synthetic experiments, our goal was to demonstrate the ability of the MST to highlight elements that appear with the highest frequency within a multiset. We employed synthetic samples composed of multisets of 2-dimensional points. The true label is determined by the element that appears with the highest frequency. For the purpose of learning representations of these multisets, we employed the MST, both with and without multiplicity inputs. Subsequently, these representations were fed into a fully connected layer for prediction, as outlined in Figure 3.

Figure 3: Synthetic data classification pipeline. A multiset sample $(X, M_X)$ is processed by a Multiset Transformer (MST) to generate its representation $R$. This representation is then used by a fully connected (FC) classifier to make predictions $Y$. In the context of the problem setup (Section 4), MST corresponds to the representation function $f_\theta$ and FC corresponds to the task-specific function $g_\phi$.

With the given architecture, we utilized a ten-fold cross-validation method, conducted over 10 separate iterations. To elaborate, the entire dataset was partitioned into 10 equal folds. For each of these iterations, our model was trained using nine of these folds and subsequently tested on the one remaining fold. This ensured that every individual fold had a turn as the test set. The accuracy was averaged over these 10 iterations to yield the result for a single run. To enhance the robustness of our results, this entire process was repeated for a total of 5 runs, with each run shuffling the data randomly. The cumulative results, which include both the mean and standard deviation from all 5 runs, are presented in Table 1.

Table 1: Results on synthetic datasets. In this table, '# Classes' refers to the total number of class labels in the dataset. 'Ratio' is the data ratio, calculated as $|X| / \|M_X\|_1$. 'MST (w/o mult.)' represents the Multiset Transformer without multiplicity inputs. 'MST' denotes the Multiset Transformer. Note that all ratios are presented in decimal form, and prediction accuracies are expressed as percentages.

| # Classes | Ratio | MST (w/o mult.) | MST |
|-------------|---------|-------------------|-------------|
| 2 | 0.03 | 55.94±0.50 | 100.00±0.00 |
| 3 | 0.03 | 39.92±0.58 | 99.88±0.15 |
| 5 | 0.04 | 26.72±0.72 | 88.86±2.07 |
| 11 | 0.05 | 15.06±0.24 | 41.14±2.24 |

To ensure consistency in our experiments, hyperparameters were kept constant across all configurations. These hyperparameters are detailed in the Appendix.
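For concreteness, one way such a sample could be generated is sketched below. This is a hypothetical generator consistent with the description above; the exact construction and hyperparameters used in our experiments may differ.

```python
import numpy as np

def make_sample(rng, n_classes=3, n_distinct=40):
    """A synthetic multiset of 2-d points; the label is the class of the
    point with the highest multiplicity, per the task description above."""
    X = rng.normal(size=(n_distinct, 2))                     # distinct 2-d points
    M_X = rng.integers(1, 60, size=(n_distinct, 1))          # their multiplicities
    point_class = rng.integers(0, n_classes, size=n_distinct)
    y = point_class[int(np.argmax(M_X))]                     # most frequent point wins
    return X, M_X.astype(float), y

rng = np.random.default_rng(42)
X, M_X, y = make_sample(rng)
ratio = len(X) / M_X.sum()     # the data ratio |X| / ||M_X||_1 reported in Table 1
```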
In the table, we use the data ratio $|X| / \|M_X\|_1$, which represents the ratio of the count of unique items to the sum of their multiplicities, to highlight the size difference between set and multiset representations. Table 1 demonstrates the enhanced performance of the MST when equipped with multiplicity. Specifically, this advantage is clearly shown in scenarios involving 2 or 3 classes, where MST achieves near-perfect or perfect accuracies, significantly surpassing its counterpart. Intriguingly, the relative improvements for datasets with 5 or 11 classes, measured at approximately 232% and 173%, respectively, are higher than those observed in the 2 and 3 class cases (79% and 150%). This suggests that with an increasing number of classes, MST not only maintains its lead but does so with a more pronounced relative margin, despite a seemingly narrower absolute margin of improvement. These findings highlight the successful achievement of MST's design objectives, validating its effectiveness in leveraging the attributes of multiplicity in multisets.

## 6.2 Persistence Diagram Representation Learning

In this segment of our study, we concentrate on validating the efficacy of the MST as *a neural network* vectorization method for PDs. To our knowledge, PERSLAY (Carrière et al., 2020) currently maintains state-of-the-art (SOTA) benchmarks on the majority of the datasets within this specialized field. Consequently, we designate PERSLAY as our comparative baseline.

## 6.2.1 Dataset Selection And Evaluation Metrics

Adapting the approach proposed by Carrière et al. (2020), this study harnesses PD representation learning for graph datasets. Specifically, our focus gravitates towards chemoinformatics network datasets such as MUTAG, COX2, DHFR, NCI1, and NCI109 (Debnath et al., 1991; Sutherland et al., 2003; Wale et al., 2008; Shervashidze et al., 2011). Beyond this, we delve into the bioinformatics dataset PROTEIN (Dobson & Doig, 2003) and larger real-world social networks, which include movie (IMDB-BINARY, IMDB-MULTI) and scientific collaboration networks (COLLAB) (Yanardag & Vishwanathan, 2015). As classification is the primary goal for all datasets in focus, we utilize classification accuracy as the evaluation criterion. An extensive analysis and quantitative overview of these datasets can be found in the Appendix.

## 6.2.2 Persistence Diagrams

In our experimental framework, we primarily utilize two types of topological features: ordinary PDs and extended PDs. These diagrams are represented as multisets in $\mathbb{R}^2$, denoted as $X_1, X_2, X_3, \ldots$, where the subscripts indicate different types of PDs resulting from various filtrations. When combined with their corresponding multiplicities, we obtain pairs of PDs and their multiplicities, such as $(X_1, M_{X_1}), (X_2, M_{X_2}), (X_3, M_{X_3}), \ldots$. The computation of persistence values relies on filtrations, which are in turn generated by specific functions. In the context of this research, we primarily employ heat kernel signatures (HKS) as these functions, to maintain consistency with the settings in Carrière et al. (2020). A more comprehensive explanation of these features, along with the selection of filtration methods, is provided in the Appendix for further clarity.

## 6.2.3 Graph Classification Architecture

To ensure a rigorous and unbiased comparison, we adopt the experimental settings described in Carrière et al. (2020).
Our network comprises two main components: first, we employ MSTs to effectively learn representations of PDs, and subsequently, we use a fully connected layer for prediction tasks. A visual representation of this architecture can be found in Figure 4.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 4: Graph classification architecture. Given a graph $G$, it is encoded into various ordinary or extended PDs, i.e., $(X_i, M_{X_i})$. Here, we show $i \in \{1, 2, 3\}$ as a demonstration; in experimentation, we can have $i \in \{1, 2, \ldots, n\}$ for some finite integer $n$. Each diagram is processed by an independent instance of the Multiset Transformer (MST$_i$), yielding its representation $R_i$. These representations are concatenated, $\mathrm{Concat}(R_1, R_2, R_3)$, to form the complete feature set of the graph. The classifier, represented by a fully connected layer (FC), then makes predictions $Y$ based on this comprehensive representation.

## 6.2.4 Main Results

Similar to the experiments in Subsection 6.1, we applied ten-fold cross-validation over 10 iterations on our dataset. Each iteration trained the model on nine folds and tested on one, ensuring all folds served as the test set. After averaging the accuracy over the 10 iterations for a single run, we repeated this process for 5 runs with random shuffling of the data each time. The aggregated results, including mean and standard deviation from the 5 runs, are detailed in Table 2.

Table 2: PD representation learning results. This table uses the abbreviation 'PSL' to denote PERSLAY, 'R' to signify the data ratio, computed as $|X| / \|M_X\|_1$, and 'MST' to represent the Multiset Transformer; 'cl.' abbreviates clustering preprocessing. The first five result columns correspond to Ordinary PDs and the last five to Extended PDs. Data ratios are displayed in decimal form, whereas all accuracy measures are provided in percentage format.

| Datasets | Ord. PSL | Ord. R (w/o cl.) | Ord. MST (w/o cl.) | Ord. R (w/ cl.) | Ord. MST (w/ cl.) | Ext. PSL | Ext. R (w/o cl.) | Ext. MST (w/o cl.) | Ext. R (w/ cl.) | Ext. MST (w/ cl.) |
|----------|----------|------------------|--------------------|-----------------|-------------------|----------|------------------|--------------------|-----------------|-------------------|
| MUTAG | 70.2 | 0.91 | 89.25±0.77 | 0.14 | 87.03±1.13 | 85.1 | 1.00 | 89.54±1.32 | 0.51 | 90.50±0.67 |
| COX2 | 79.0 | 0.92 | 81.20±1.04 | 0.04 | 80.67±0.27 | 81.5 | 1.00 | 81.61±0.51 | 0.47 | 79.96±1.18 |
| DHFR | 71.8 | 0.90 | 74.12±0.37 | 0.04 | 76.37±0.18 | 78.2 | 0.99 | 73.18±0.49 | 0.48 | 71.55±0.70 |
| NCI1 | 68.9 | 0.98 | 69.12±0.17 | 0.22 | 68.65±1.23 | 72.3 | 0.99 | 70.28±0.12 | 0.69 | 69.57±0.19 |
| NCI109 | 66.2 | 0.97 | 66.38±0.40 | 0.22 | 64.87±0.51 | 67.0 | 0.99 | 68.75±0.27 | 0.68 | 67.11±0.45 |
| PROTEIN | 69.7 | 0.99 | 72.92±0.47 | 0.37 | 72.89±0.26 | 72.2 | 0.94 | 75.49±0.44 | 0.21 | 74.16±0.22 |
| IMDB-B | 64.7 | 0.99 | 70.76±0.47 | 0.70 | 70.66±0.42 | 68.8 | 0.49 | 75.40±0.14 | 0.06 | 74.72±0.42 |
| IMDB-M | 42.0 | 0.99 | 45.44±0.35 | 0.78 | 44.95±0.48 | 48.2 | 0.44 | 51.46±0.29 | 0.06 | 50.33±0.17 |
| COLLAB | 69.2 | 0.99 | 69.90±0.32 | 0.61 | 70.42±0.26 | 71.6 | 0.52 | 74.18±0.38 | 0.01 | 72.26±0.21 |

In our results, we primarily showcase two variations of the model: the MST applied directly and the MST preceded by a clustering preprocess. Detailed comparisons and visualizations of the MST model with the clustering preprocess can be found in Figure 5. In our experiments, we utilize the DBSCAN clustering algorithm, with the hyperparameters detailed in the Appendix.
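For reference, a minimal sketch of this clustering preprocessing step using scikit-learn's DBSCAN (the `eps` and `min_samples` values here are illustrative, not the tuned settings from the Appendix): each cluster is collapsed to a representative point whose multiplicity is the sum of the member multiplicities.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_multiset(X, M_X, eps=0.05, min_samples=1):
    """Approximate (X, M_X) by a smaller multiset (X', M_X'): each DBSCAN
    cluster collapses to its multiplicity-weighted centroid, and the member
    multiplicities are summed."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    X_new, M_new = [], []
    for lbl in np.unique(labels):
        mask = labels == lbl
        w = M_X[mask]
        X_new.append((w * X[mask]).sum(axis=0) / w.sum())  # weighted centroid
        M_new.append(w.sum())
    return np.stack(X_new), np.array(M_new).reshape(-1, 1)

rng = np.random.default_rng(0)
X = rng.random(size=(184, 2))          # a PD-sized multiset, as in Figure 1
M_X = np.ones((184, 1))
X_c, M_c = cluster_multiset(X, M_X)    # the reduced multiset fed to the MST
```

With `min_samples=1`, every point belongs to some cluster, so no diagram points are discarded; their mass is merely aggregated into the representatives.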
Our primary goal was to assess the effectiveness of the MST in the context of PD representation learning, and the results yielded valuable insights on multiple fronts.

![9_image_0.png](9_image_0.png)

Figure 5: Multiset Transformer architecture with clustering preprocessing. In this architecture, 'CL' represents the clustering preprocessing step. Initially, the input multiset $(X, M_X)$ undergoes clustering to approximate its structure. The output of this preprocessing stage is a clustered multiset, denoted by $(X', M_{X'})$, which then serves as the input for the Multiset Transformer.

Firstly, the results demonstrate that, across a majority of datasets, the performance of the MST without clustering preprocessing exceeds that of PERSLAY. This trend holds for both Extended and Ordinary PDs, thereby highlighting the robustness and superiority of the MST in the task of PD vectorization using neural networks.

In comparing the performance of the MST with and without clustering, it may initially be hypothesized that the incorporation of clustering could detrimentally impact model performance due to data simplification. Contrary to this intuition, our empirical results reveal that the integration of clustering with the MST generally results in only a marginal decrease in accuracy. This is particularly notable in the case of Extended PDs for the COLLAB dataset, where even when reduced to just 0.01 of the original input size, the performance still surpasses that of the baseline model. Given the substantial benefits in terms of computational complexity reduction and improved scalability, this slight trade-off in accuracy is deemed justifiable. More interestingly, in specific scenarios such as Extended PDs for MUTAG and Ordinary PDs for DHFR and COLLAB, clustering not only mitigates the anticipated performance degradation but also actively boosts accuracy. This nuanced observation suggests that clustering might capture intrinsic data structures beneficial for representation in certain contexts.

## 6.2.5 Ablation Study

In this section, we conduct an ablation study to examine the entries presented in Table 2. The primary aim of this analysis is to demonstrate the performance improvements attributable to the introduction of multiplicity terms. To thoroughly investigate the influence of multiplicities on our model, we specifically focus on entries characterized by lower data ratios in Table 2. We present the results in Table 3.

Table 3: Ablation analysis of PD representation learning. This table employs the abbreviation 'PD' to indicate the type of Persistence Diagram, either Ordinary or Extended, 'R' to denote the data ratio, expressed in decimal form, and 'MST' to represent the Multiset Transformer. The results in the MST column are the same as those in Table 2. The variants of MST are described as 'MST (w/o mult.)', indicating the Multiset Transformer excluding multiplicities, and 'MST (w/o PD)', denoting the Multiset Transformer excluding persistence diagrams. Accuracy measures are displayed in percentage format.

| Dataset | PD | R | MST | MST (w/o mult.) | MST (w/o PD) |
|-----------|------|------|------------|-------------------|----------------|
| MUTAG | Ord. | 0.14 | 87.03±1.13 | 85.63±1.16 | 80.96±0.78 |
| COX2 | Ord. | 0.04 | 80.67±0.27 | 80.63±0.37 | 73.92±0.65 |
| DHFR | Ord. | 0.04 | 76.37±0.18 | 76.19±0.24 | 66.11±0.56 |
| NCI1 | Ord. | 0.22 | 68.65±0.14 | 67.28±0.18 | 57.11±0.33 |
| NCI109 | Ord. | 0.22 | 64.87±0.51 | 63.35±0.58 | 56.06±0.16 |
| PROTEIN | Ext. | 0.21 | 74.16±0.22 | 72.76±0.35 | 63.74±0.46 |
| IMDB-B | Ext. | 0.06 | 74.72±0.42 | 71.10±0.63 | 62.72±0.75 |
| IMDB-M | Ext. | 0.06 | 50.33±0.17 | 50.64±0.41 | 43.24±0.39 |
| COLLAB | Ext. | 0.01 | 72.26±0.21 | 71.58±0.26 | 40.30±0.34 |

Across nearly all datasets, the MST model, when fully equipped with PD and its associated multiplicities, demonstrates superior performance over its counterparts. A direct comparison between the complete MST model and its variant, MST (w/o mult.), provides insights into the pivotal role of the multiplicity terms within the MST framework. Notably, the exclusion of these terms from the model's architecture leads to a decrease in accuracy on most datasets. To illustrate, datasets such as MUTAG, NCI1, NCI109, PROTEIN, and IMDB-B exhibit an accuracy reduction exceeding 1%. As anticipated, the omission of the entire PD results in a more pronounced decline in accuracy than merely excluding its multiplicities. Nevertheless, the recurrent performance dip across datasets when solely removing multiplicities accentuates their integral role in PD representation learning. For instance, in the IMDB-B dataset, the performance disparity between the full MST and MST (w/o mult.) stands at 3.62%, underscoring the multiplicities' influence on the efficacy of PD representations.

To summarize, this ablation analysis emphasizes the criticality of the multiplicity bias within the MST framework. Its consistent and positive impact on the MST's performance across various datasets is a testament to its importance. The enhanced performance due to multiplicities underscores the MST's robustness, as evidenced not only through synthetic experiments but also in real-world applications.

## 6.3 Limitations And Further Works

While the MST effectively meets our objectives by exhibiting strong performance on both synthetic and real-world datasets, there are certain limitations to our study that warrant attention. Firstly, the improved accuracies achieved through clustering merit a deeper exploration. A thorough investigation is essential to determine the underlying mechanisms contributing to this improvement. Furthermore, it is widely acknowledged that the significance of a feature, specifically a point in the PD, is determined by its lifespan, which is schematically represented as the distance to the diagonal in Figure 1. The greater the lifespan, the more significant the feature. In our experiments, we did not employ this assumption. However, we suggest that this property could be more comprehensively integrated into representation learning. Such integration may pave the way for the development of more refined and effective models in the future.

## 7 Conclusion

In this paper, we present the Multiset Transformer, the first neural network based on attention mechanisms designed specifically for multisets as inputs, with rigorous theoretical guarantees of permutation invariance. This model leverages the multiplicities in a multiset by allocating more attention to elements with larger multiplicities. The synthetic experiments demonstrate that our model achieves the intended design goals. In the domain of persistence diagram vectorization using neural networks, our approach surpasses existing state-of-the-art methods across most datasets. Furthermore, with its inherent ability to approximate large multisets by smaller ones via clustering, the Multiset Transformer offers a viable solution for datasets of *any size*, provided they can be efficiently clustered.
## References

Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. *Journal of Machine Learning Research*, 18, 2017.

Dashti Ali, Aras Asaad, Maria-Jose Jimenez, Vidit Nanda, Eduardo Paluzo-Hidalgo, and Manuel Soriano-Trigueros. A survey of vectorization methods in topological data analysis. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2023.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Boris Babenko, Ming-Hsuan Yang, and Serge Belongie. Robust object tracking with online multiple instance learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 33(8):1619–1632, 2010.

Peter Bubenik et al. Statistical topological data analysis using persistence landscapes. *J. Mach. Learn. Res.*, 16(1):77–102, 2015.

Gunnar Carlsson and Vin De Silva. Zigzag persistence. *Foundations of Computational Mathematics*, 10:367–405, 2010.

Mathieu Carrière, Frédéric Chazal, Yuichi Ike, Théo Lacombe, Martin Royer, and Yuhei Umeda. Perslay: A neural network layer for persistence diagrams and new graph topological signatures. In *International Conference on Artificial Intelligence and Statistics*, pp. 2786–2796. PMLR, 2020.

David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Extending persistence using Poincaré and Lefschetz duality. *Foundations of Computational Mathematics*, 9(1):79–103, 2009.

Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In *2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)*, volume 1, pp. 886–893. IEEE, 2005.

Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. *Journal of Medicinal Chemistry*, 34(2):786–797, 1991.

Tamal Krishna Dey and Yusu Wang. *Computational Topology for Data Analysis*. Cambridge University Press, 2022.

Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. *Artificial Intelligence*, 89(1-2):31–71, 1997.

Paul D Dobson and Andrew J Doig. Distinguishing enzyme structures from non-enzymes without alignments. *Journal of Molecular Biology*, 330(4):771–783, 2003.

Edelsbrunner, Letscher, and Zomorodian. Topological persistence and simplification. *Discrete & Computational Geometry*, 28:511–533, 2002.

Herbert Edelsbrunner and John L Harer. *Computational Topology: An Introduction*. American Mathematical Society, 2022.

Chad Giusti, Robert Ghrist, and Danielle S Bassett. Two's company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data. *Journal of Computational Neuroscience*, 41:1–14, 2016.

Christoph Hofer, Roland Kwitt, Marc Niethammer, and Andreas Uhl. Deep learning with topological signatures. *Advances in Neural Information Processing Systems*, 30, 2017.

Christoph D Hofer, Roland Kwitt, and Marc Niethammer. Learning representations of persistence barcodes. *J. Mach. Learn. Res.*, 20(126):1–45, 2019.

Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, and Karsten Borgwardt. Topological graph neural networks. *arXiv preprint arXiv:2102.07835*, 2021.
Nan Hu, Raif M Rustamov, and Leonidas Guibas. Stable and informative spectral signatures for graph matching. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2305–2312, 2014.

Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. *ACM Computing Surveys (CSUR)*, 54(10s):1–41, 2022.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In *International Conference on Machine Learning*, pp. 3744–3753. PMLR, 2019.

Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. A survey of transformers. *AI Open*, 2022.

Takenobu Nakamura, Yasuaki Hiraoka, Akihiko Hirata, Emerson G Escolar, and Yasumasa Nishiura. Persistent homology and many-body atomic structure for medium-range order in the glass. *Nanotechnology*, 26(30):304001, 2015.

Steve Y Oudot. *Persistence Theory: From Quiver Representations to Data Analysis*, volume 209. American Mathematical Society, 2017.

Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 652–660, 2017.

Gwenolé Quellec, Guy Cazuguel, Béatrice Cochener, and Mathieu Lamard. Multiple-instance learning for medical image and video analysis. *IEEE Reviews in Biomedical Engineering*, 10:213–234, 2017.

Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Deep learning with sets and point clouds. *arXiv preprint arXiv:1611.04500*, 2016.

Raphael Reinauer, Matteo Caorsi, and Nicolas Berkouk. Persformer: A transformer architecture for topological machine learning. *arXiv preprint arXiv:2112.15210*, 2021.

Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-Lehman graph kernels. *Journal of Machine Learning Research*, 12(9), 2011.

Jian Sun, Maks Ovsjanikov, and Leonidas Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In *Computer Graphics Forum*, volume 28, pp. 1383–1392. Wiley Online Library, 2009.

Jeffrey J Sutherland, Lee A O'Brien, and Donald F Weaver. Spline-fitting with a genetic algorithm: A method for developing classification structure-activity relationships. *Journal of Chemical Information and Computer Sciences*, 43(6):1906–1915, 2003.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017.

Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. *Knowledge and Information Systems*, 14:347–375, 2008.

Sean Welleck, Zixin Yao, Yu Gai, Jialin Mao, Zheng Zhang, and Kyunghyun Cho. Loss functions for multiset prediction. *Advances in Neural Information Processing Systems*, 31, 2018.

Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. On layer normalization in the transformer architecture. In *International Conference on Machine Learning*, pp. 10524–10533. PMLR, 2020.

Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In *Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1365–1374, 2015.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in Neural Information Processing Systems*, 30, 2017.

Yin Zhang, Rong Jin, and Zhi-Hua Zhou. Understanding bag-of-words model: A statistical framework. *International Journal of Machine Learning and Cybernetics*, 1:43–52, 2010.

## A Comparative Analysis Of Multiset Transformer And Set Transformer

In this section, we present a detailed performance comparison between the MST and the Set Transformer (ST), with the results detailed in Table 4. It is important to clarify that the MST column in Table 1 corresponds to the column labeled MST (inv.).

Table 4: Comprehensive results on synthetic datasets. In this table, '# Classes' denotes the total number of class labels in the dataset. 'Ratio' is the data ratio, calculated as $|X| / \|M_X\|_1$. 'MST (w/o mult.)' signifies the MST excluding multiplicity inputs. Additional variants include 'MST (equiv. & inv.)', with multiplicities in both equivariant and invariant layers, and 'MST (inv.)', with multiplicities only in the invariant layer. 'ST' refers to the Set Transformer applied to full multisets. All ratios are presented in decimal form, and prediction accuracies are expressed as percentages.

| # Classes | Ratio | Random | MST (w/o mult.) | MST (equiv. & inv.) | MST (inv.) | ST |
|-------------|---------|----------|-------------------|-----------------------|--------------|-------------|
| 2 | 0.03 | 50.00 | 55.94±0.50 | 100.00±0.00 | 100.00±0.00 | 100.00±0.00 |
| 3 | 0.03 | 33.33 | 39.92±0.58 | 98.74±0.36 | 99.88±0.15 | 99.74±0.15 |
| 5 | 0.04 | 20.00 | 26.72±0.72 | 74.28±1.19 | 88.86±2.07 | 85.82±1.73 |
| 11 | 0.05 | 9.09 | 15.06±0.24 | 34.60±2.07 | 41.14±2.24 | 42.02±1.85 |

There are two variants of MST: MST (equiv. & inv.) and MST (inv.). The primary distinction between these variants lies in whether multiplicities are incorporated into the equivariant layers. Specifically, MST (equiv. & inv.) takes the multiplicities into account in those layers, whereas MST (inv.) does not. In contrast, the input for ST consists of the complete enumeration of elements in a multiset, including repeated elements. For instance, consider a multiset with base set $\{a, b\}$ and multiplicities $M(a) = 2$ and $M(b) = 4$. In this case, the input for ST would be the sequence $\{a, a, b, b, b, b\}$, reflecting each instance of the elements in the multiset. This difference in handling input data between MST and ST is a crucial factor in their performance comparison.

Results from Table 4 indicate that the Set Transformer demonstrates comparable, and in certain instances superior, performance relative to various configurations of the MST. Notably, in the context of 11 classes, the ST marginally outperforms the MST, achieving a prediction accuracy of 42.02%±1.85% as opposed to MST's 41.14%±2.24% in its best configuration. This enhanced performance of the ST can be attributed to its higher model complexity. Unlike MST, the ST processes the entire multiset as a full list with duplicated elements, thereby incorporating additional nodes and connections to accommodate those duplicates. This leads to a considerable augmentation in model complexity for the ST, potentially tens to hundreds of times greater than that of MST. Such a complexity advantage appears to be particularly beneficial in handling tasks of higher complexity.

It is crucial to highlight that the ST results were not included in Table 2 for real-world datasets.
This exclusion stems from the extensive number of points in the PDs of these datasets, surpassing our available computational resources. This limitation emphasizes the necessity for more efficient computational strategies or the development of optimized models to make the ST applicable to larger, real-world datasets. Despite this, we hypothesize that the ST, owing to its higher model complexity, is likely to outperform the MST in most real-world scenarios. Conversely, the MST offers a more adaptable approach, capable of handling datasets of any size, provided they can be clustered effectively. This flexibility is a notable advantage, especially when dealing with datasets that are too large or complex for the ST to process.

## B Missing Proofs

## B.1 Proof Of Theorem 5.1

Proof. To prove that $A(Q, X)$ is permutation equivariant, we need to show that permuting the rows of $Q$ and $X$ in the same manner results in the same permutation of the rows of $A(Q, X)$. Let $P$ be an arbitrary permutation matrix. Consider the transformation:

$$A(PQ,PX)=\left(\operatorname{softmax}\left(\frac{PQ(PX)^{\top}}{\sqrt{d}}\right)+\alpha\,\frac{(M_{PQ}-\mathbf{1})(M_{PX}-\mathbf{1})^{\top}}{\|(M_{PQ}-\mathbf{1})(M_{PX}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX.$$

Since $M$ can be seen as a function that operates on the multiset elements, i.e., the rows of $Q$ and $X$, the order of the rows in $PQ$ and $PX$ determines the order of the rows in $M_{PQ}$ and $M_{PX}$, giving:

$$M_{PQ}=PM_{Q}\quad\text{and}\quad M_{PX}=PM_{X}.$$

Substituting these into our equation:

$$A(PQ,PX)=\left(\operatorname{softmax}\left(\frac{PQX^{\top}P^{\top}}{\sqrt{d}}\right)+\alpha\,\frac{(PM_{Q}-\mathbf{1})(PM_{X}-\mathbf{1})^{\top}}{\|(PM_{Q}-\mathbf{1})(PM_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX.$$

Given that the softmax function is applied row-wise and retains the order of elements, we can establish:

$$\operatorname{softmax}\left(PHP^{\top}\right)=P\operatorname{softmax}\left(H\right)P^{\top}.$$

Furthermore, permuting the rows or columns of a matrix only rearranges its elements without changing their values, so the sum of their squared magnitudes, and hence the Frobenius norm, remains unchanged. Since $\mathbf{1} = P\mathbf{1}$, we have

$$\|(PM_{Q}-\mathbf{1})(PM_{X}-\mathbf{1})^{\top}\|_{F}=\|(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}\|_{F}.$$

Substituting these into our equation, we have

$$\begin{aligned}A(PQ,PX)&=\left(P\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)P^{\top}+\alpha\,\frac{P(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}P^{\top}}{\|(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX\\&=P\left(\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+\alpha\,\frac{(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}}{\|(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)P^{\top}PX\\&=P\left(\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+\alpha\,\frac{(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}}{\|(M_{Q}-\mathbf{1})(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)X\\&=P\,A(Q,X).\end{aligned}$$

Therefore, $A(PQ, PX) = PA(Q, X)$, which proves that $A(Q, X)$ is permutation equivariant.

## B.2 Proof Of Theorem 5.2

Proof. As in the proof of Theorem 5.1, we have

$$\begin{aligned}A_{Q}(PX)&=\left(\operatorname{softmax}\left(\frac{Q(PX)^{\top}}{\sqrt{d}}\right)+\frac{M_{\alpha}(M_{PX}-\mathbf{1})^{\top}}{\|M_{\alpha}(M_{PX}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX\\&=\left(\operatorname{softmax}\left(\frac{QX^{\top}P^{\top}}{\sqrt{d}}\right)+\frac{M_{\alpha}(PM_{X}-\mathbf{1})^{\top}}{\|M_{\alpha}(PM_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX\\&=\left(\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)P^{\top}+\frac{M_{\alpha}(M_{X}-\mathbf{1})^{\top}P^{\top}}{\|M_{\alpha}(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)PX\\&=\left(\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+\frac{M_{\alpha}(M_{X}-\mathbf{1})^{\top}}{\|M_{\alpha}(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)P^{\top}PX\\&=\left(\operatorname{softmax}\left(\frac{QX^{\top}}{\sqrt{d}}\right)+\frac{M_{\alpha}(M_{X}-\mathbf{1})^{\top}}{\|M_{\alpha}(M_{X}-\mathbf{1})^{\top}\|_{F}+\varepsilon}\right)X\\&=A_{Q}(X).\end{aligned}$$

Therefore, $A_Q(PX) = A_Q(X)$, which proves that $A_Q(X)$ is permutation invariant.

## C More On Topological Features

## C.1 Ordinary Persistence And Extended Persistence In Graphs

Given a graph $G = (V, E)$, where $V$ represents the vertices and $E$ denotes the non-oriented edges, and given a function $f : V \to \mathbb{R}$ defined on the vertices of $G$, we can construct sublevel graphs $G_\alpha = (V_\alpha, E_\alpha)$ for each $\alpha \in \mathbb{R}$, such that $V_\alpha = \{v \in V : f(v) \leq \alpha\}$ and $E_\alpha = \{(v_1, v_2) \in E : v_1, v_2 \in V_\alpha\}$. As $\alpha$ increases, we observe a sequence of these sublevel graphs, referred to as the filtration induced by $f$. This filtration commences with an empty graph and culminates in the entirety of graph $G$.
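A small sketch of this sublevel-graph construction follows; networkx is used for convenience, and the graph and vertex function are toy choices for illustration.

```python
import networkx as nx

def sublevel_graph(G, f, alpha):
    """G_alpha = (V_alpha, E_alpha): keep the vertices with f(v) <= alpha
    and the edges among them."""
    return G.subgraph(v for v in G.nodes if f[v] <= alpha)

G = nx.cycle_graph(6)                         # a toy graph containing one loop
f = {v: float(v) for v in G.nodes}            # an illustrative vertex function
levels = sorted(set(f.values()))
filtration = [sublevel_graph(G, f, a) for a in levels]
n_components = [nx.number_connected_components(H) for H in filtration]
# Tracking when connected components and loops appear and merge across this
# sequence yields the ordinary persistence diagram of (G, f).
```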
A key aspect of persistence is its ability to track the emergence and dissolution of topological features, such as connected components and loops, throughout this filtration. For example, the birth time, denoted as $\alpha_b$, marks the value at which a new connected component appears in $G_{\alpha_b}$. This component will eventually amalgamate with another at a subsequent value $\alpha_d \geq \alpha_b$, termed the death time. The lifespan of this component is captured by the interval $[\alpha_b, \alpha_d]$. In a similar vein, the birth and death times of loops in specific sublevel graphs are recorded. The aggregation of these intervals forms what is known as the barcode or ordinary PD of $(G, f)$, which can be pictorially represented as a multiset in $\mathbb{R}^2$.

Beyond the conventional scope of persistence, which primarily focuses on sublevel graphs, the concept of extended persistence introduces an additional dimension by considering superlevel graphs. Specifically, for a given $\alpha \in \mathbb{R}$, the superlevel graph $G^{\alpha} = (V^{\alpha}, E^{\alpha})$ is defined such that $V^{\alpha} = \{v \in V : f(v) \geq \alpha\}$ and $E^{\alpha} = \{(v_1, v_2) \in E : v_1, v_2 \in V^{\alpha}\}$. As $\alpha$ decreases, these superlevel graphs offer an alternative perspective, akin to viewing the same graph from a different direction when juxtaposed with sublevel graphs. This dual perspective ensures a more holistic capture of the topological intricacies inherent in a graph. Extended PDs, based on the juxtaposition of birth and death points, can be categorized into distinct types. As shown in the main paper, we represent these diagrams as multisets in $\mathbb{R}^2$, denoted as $X_1, X_2, \ldots$, where the subscripted indices signify the different types of PDs.

While the above exposition offers a high-level introduction for clarity, readers seeking a deeper understanding are directed to seminal works in the field. Cohen-Steiner et al. (2009) pioneered the concept of extended persistence with rigorous algebraic formulations. Dey & Wang (2022) drew parallels between these types and those found in zigzag persistence (Carlsson & De Silva, 2010). Carrière et al. (2020) provided insights into these types by interpreting graphs as 1-simplices. For readers with an inclination towards the theoretical underpinnings of persistent homology, we further recommend Edelsbrunner & Harer (2022); Oudot (2017) for an in-depth exploration.

## C.2 Spectral Analysis With Heat Kernel Signature

As alluded to in Subsection 6.2.2, we employ the Heat Kernel Signature (HKS) as the filtration function to derive the extended PDs used in our experiments. For the sake of completeness, we provide a brief introduction to HKS here. For an in-depth exploration, we direct readers to seminal works by Carrière et al. (2020), Sun et al. (2009), and Hu et al. (2014). HKS is a spectral descriptor, originating from the spectral decomposition of graph Laplacians, and has proven to be a powerful tool for graph analysis. Consider a graph $G$ with its vertex set represented as $V = \{v_1, \ldots, v_n\}$. The adjacency matrix of this graph is denoted by $A$. The degree matrix $D$ is a diagonal matrix where each entry $D_{i,i}$ is the sum of the $i$-th row of $A$. The normalized graph Laplacian, $L_n$, is given by the equation

$$L_{n}=I-D^{-\frac{1}{2}}AD^{-\frac{1}{2}},\tag{18}$$

where $I$ stands for the identity matrix. This Laplacian possesses an orthonormal basis of eigenfunctions, denoted as $\Psi = \{\psi_1, \ldots, \psi_n\}$, and the associated eigenvalues adhere to the inequality $0 \leq \lambda_1 \leq \cdots \leq \lambda_n \leq 2$.
and the associated eigenvalues adhere to the inequality $0 \leq \lambda_1 \leq \ldots \leq \lambda_n \leq 2$. The HKS, parameterized by $t$, is defined as the function $\mathrm{hks}_t$ on the vertices of $G$ by the relation:

$$\mathrm{hks}_{t}:v\mapsto\sum_{k=1}^{n}\exp(-t\lambda_{k})\,\psi_{k}(v)^{2}.\tag{19}$$

It is noteworthy that $\mathrm{hks}_t$ maps vertices to the real line $\mathbb{R}$, thereby inducing a natural filtration for the graph. The parameter $t$ serves as a hyperparameter, the specifics of which are provided in Table 6. The spectral features are a synthesis of the eigenvalues of the normalized graph Laplacian and the deciles of the HKS. Importantly, these features are consistent with those used in the experiments by Carrière et al. (2020).

## D More On The Architecture

This section includes omitted details on the MST.

## D.1 Multihead Attention Mechanism

The multihead attention mechanism, as proposed in (Vaswani et al., 2017), is also a pivotal component of the Transformer. The essence of multihead attention lies in its ability to allow different heads to focus on diverse segments of the input data. This is achieved by linearly projecting the $Q, K, V$ vectors into dimensions $d_i$, $d_i$, $p_i$ respectively, as outlined by (Vaswani et al., 2017). An added advantage of this projection is the relaxation of the constraint that necessitated $Q$ and $K$ to possess identical dimensions. To formalize the operation of the multihead attention mechanism, consider the $i$-th head out of a total of $h$ heads. The output is computed as:

$$\mathrm{MultiHead}(Q,K,V):=\mathrm{Concat}\left(\mathrm{h}_{1},\ldots,\mathrm{h}_{h}\right)W^{O},\tag{20}$$
$$\mathrm{h}_{i}:=\mathrm{Att}\left(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}\right).\tag{21}$$

In the above equations, the projections are characterized by the parameter matrices $W_{i}^{Q} \in \mathbb{R}^{d_q \times d_i}$, $W_{i}^{K} \in \mathbb{R}^{d_k \times d_i}$, $W_{i}^{V} \in \mathbb{R}^{d_v \times p_i}$, and $W^{O} \in \mathbb{R}^{(\sum_i p_i) \times q}$.

This multihead attention mechanism ensures that the model can capture a richer set of relationships and dependencies in the data by allowing each head to focus on different aspects of the input. For the sake of clarity, the discussion in the paper focuses on the single-head attention mechanism. However, it is worth noting that the multihead version can be readily inferred from our descriptions, leveraging the principles delineated in the preceding section.

## D.2 Pre-LN And Post-LN MAB

In orchestrating with layer normalization (LN), multiple versions of the multiset attention block (MAB) can be derived. The main paper presents the following formulation:

$$\mathrm{MAB}(Q,X):=\mathrm{LN}(H+\mathrm{FFN}(H)),\tag{22}$$
$$H:=\mathrm{LN}(Q+A(Q,X)).\tag{23}$$

In an alternative formulation, we have:

$$\mathrm{MAB}(Q,X):=H+\mathrm{FFN}(\mathrm{LN}(H)),\tag{24}$$
$$H:=Q+A\left(\mathrm{LN}(Q),\mathrm{LN}(X)\right).\tag{25}$$

The former is denominated the post-LN MAB while the latter is termed the pre-LN MAB. A salient observation is that the application of LN retains the multiplicities $M_X$ in $\mathrm{LN}(X)$. According to the study by Xiong et al. (2020), pre-LN Transformers exhibit superior stability relative to the post-LN variants.
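As a concrete reference, the two LN placements can be written as a small PyTorch sketch. This is a minimal illustration only: the multiplicity-aware attention $A(Q, X)$ is replaced by a standard multihead-attention placeholder (an assumption made here for brevity), since only the LN placement differs between the two variants.

```python
import torch
import torch.nn as nn

class MABSketch(nn.Module):
    """Post-LN (Eqs. 22-23) vs. pre-LN (Eqs. 24-25) variants of the MAB.
    nn.MultiheadAttention stands in for the multiset attention A(Q, X)."""

    def __init__(self, dim, heads, pre_ln=False):
        super().__init__()
        self.pre_ln = pre_ln
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, Q, X):
        if self.pre_ln:
            # H := Q + A(LN(Q), LN(X));  MAB := H + FFN(LN(H))
            H = Q + self.attn(self.ln1(Q), self.ln1(X), self.ln1(X))[0]
            return H + self.ffn(self.ln2(H))
        # H := LN(Q + A(Q, X));  MAB := LN(H + FFN(H))
        H = self.ln1(Q + self.attn(Q, X, X)[0])
        return self.ln2(H + self.ffn(H))

# Usage: queries of size m attend over a multiset of size n.
mab = MABSketch(dim=64, heads=4, pre_ln=True)
out = mab(torch.randn(2, 5, 64), torch.randn(2, 9, 64))  # -> (2, 5, 64)
```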
For the sake of clarity and to ensure consistency in the ensuing discussions, we will denote both formulations under the generic term MAB due to their functional similarities within the overarching framework.

## D.3 Motivation Of Multiset Transformer With Clustering Preprocessing

The primary motivation behind incorporating clustering preprocessing in the MST framework is to balance scalability with the preservation of data details. A critical insight from the complexity analysis presented in the main paper is the direct correlation between the size of the MST model and the multiset size $n$. Recognizing that neighboring elements in a multiset often hold similar information, clustering these elements before deploying the MST emerges as a strategic move. This preprocessing step is particularly significant for the MST, more so than for the Set Transformer, due to its unique handling of data points. In the MST, the information inherent in the elements is not simply discarded; rather, it undergoes a transformation where it is incorporated into the multiplicities of a representative point. This nuanced approach ensures that crucial data characteristics are retained, albeit in an aggregated form. Consequently, we adopt a preprocessing strategy where multisets are clustered prior to their engagement with the MST (a code sketch of this step is given after Table 6). This methodology not only maintains comparable performance levels but also leads to a significant reduction in computational complexity, thereby enhancing the overall efficiency of the model.

## E Experimental Details

This section provides further details regarding these experiments, including information on datasets and hyperparameters.

## E.1 Datasets

Table 5 provides a summary of the information for each dataset.

Table 5: Dataset descriptions. Here, β0 and β1 represent the 0th and 1st Betti numbers, indicating the number of connected components and cycles in a graph, respectively. Specifically, an average β0 = 1.0 denotes that all graphs in the dataset are connected. In these cases, β1 = #{edges} − #{nodes} + 1. Source: Carrière et al. (2020).

| Dataset | Nb graphs | Nb classes | Av. Nodes | Av. Edges | Av. β0 | Av. β1 |
|-----------|-------------|--------------|-------------|-------------|----------|----------|
| MUTAG | 188 | 2 | 17.93 | 19.79 | 1.0 | 2.86 |
| COX2 | 467 | 2 | 41.22 | 43.45 | 1.0 | 3.22 |
| DHFR | 756 | 2 | 42.43 | 44.54 | 1.0 | 3.12 |
| NCI1 | 4,110 | 2 | 29.87 | 32.30 | 1.19 | 3.62 |
| NCI109 | 4,127 | 2 | 29.68 | 32.13 | 1.20 | 3.64 |
| PROTEIN | 1,113 | 2 | 39.06 | 72.82 | 1.08 | 34.84 |
| IMDB-B | 1,000 | 2 | 19.77 | 96.53 | 1.0 | 77.76 |
| IMDB-M | 1,500 | 3 | 13.00 | 65.94 | 1.0 | 53.93 |
| COLLAB | 5,000 | 3 | 74.5 | 2457.5 | 1.0 | 2383.7 |

## E.2 Devices

We utilized both an RTX 4090 (24 GB) and an RTX A6000 (48 GB) GPU for our experiments.

## E.3 Hyperparameters

The experiments in this study consistently employed specific hyperparameters to guarantee replicability. Details of these hyperparameters can be found in Table 6. In the table, $\mathrm{hks}_t$ denotes the functions used to create filtrations. These filtrations are essential as they are later used to derive PDs from graphs. Additionally, an initial random seed of 42 was used throughout the experiments.

| Dataset | HKS | H | E | E. Q | I. Q | Pre-LN | Eps | Hidden | LR | Epochs | Batch |
|-----------|---------------|-----|-----|--------|--------|----------|-------|----------|------|----------|---------|
| SYNTHETIC | - | 2 | 2 | 1 | 4 | False | - | 64 | 0.01 | 100 | 128 |
| MUTAG | hks10 | 2 | 2 | 2 | 4 | True | 0.5 | 64 | 0.01 | 150 | 128 |
| COX2 | hks0.1, hks10 | 2 | 2 | 2 | 8 | False | 0.5 | 64 | 0.02 | 200 | 128 |
| DHFR | hks0.1, hks10 | 2 | 2 | 4 | 8 | True | 0.5 | 64 | 0.01 | 200 | 128 |
| NCI1 | hks0.1, hks10 | 2 | 2 | 8 | 16 | True | 0.1 | 256 | 0.06 | 300 | 128 |
| NCI109 | hks0.1, hks10 | 2 | 2 | 8 | 16 | True | 0.1 | 64 | 0.1 | 100 | 128 |
| PROTEIN | hks10 | 2 | 2 | 2 | 8 | True | 0.01 | 64 | 0.01 | 200 | 128 |
| IMDB-B | hks0.1, hks10 | 2 | 2 | 2 | 8 | False | 0.04 | 64 | 0.01 | 100 | 128 |
| IMDB-M | hks0.1, hks10 | 2 | 2 | 2 | 8 | True | 0.04 | 64 | 0.01 | 100 | 128 |
| COLLAB | hks0.1, hks10 | 2 | 2 | 1 | 8 | True | 0.01 | 64 | 0.01 | 100 | 128 |

Table 6: Hyperparameters for various datasets. H: Number of attention heads in MABs; E: Equivariant layers; E. Q: Queries in IMABs for equivariant layers; I. Q: Queries in MABQ for the invariant layer; Pre-LN: MAB version (True/False for using pre-layer normalization); Eps: Maximum sample distance in the DBSCAN neighborhood; Hidden: Hidden units; LR: Learning rate; Epochs: Training epochs; Batch: Training batch size.
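For concreteness, the clustering preprocessing of Section D.3, whose Eps parameter appears in Table 6, can be sketched as follows. This is a minimal sketch under stated assumptions: `min_samples=1` and the cluster mean as the representative point are our illustrative choices, which are not pinned down here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_pd(points, eps):
    """Collapse nearby PD points into representatives and multiplicities.
    points: (n, 2) array of (birth, death) pairs; eps: the Eps of Table 6."""
    labels = DBSCAN(eps=eps, min_samples=1).fit(points).labels_
    reps, mults = [], []
    for lab in np.unique(labels):
        members = points[labels == lab]
        reps.append(members.mean(axis=0))   # representative point (assumption)
        mults.append(len(members))          # its multiplicity
    return np.stack(reps), np.asarray(mults)

# Hypothetical toy diagram: two nearly coincident points and one isolated one.
pd_points = np.array([[0.10, 0.50], [0.12, 0.52], [0.40, 0.90]])
reps, mults = cluster_pd(pd_points, eps=0.05)
print(reps)   # two representatives
print(mults)  # multiplicities [2, 1]
```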
# The Inverse Problem For Kernel Means

Anonymous authors Paper under double-blind review

## Abstract

We discuss the inverse problem for the kernel embedding of measures. We identify the elements of a reproducing kernel Hilbert space that lie in the cone generated by some set of kernel functions via polar duality with the Herglotz-type functions, the functions with nonnegative real part. Over certain spaces, such as Sobolev spaces, the duality with Herglotz functions reduces to a classical multivariate moment problem, and, over analytic spaces, we see more complex analytic-type conditions. We give conditions, in terms of reflexive reproducing kernel Hilbert spaces, for when Herglotz functions have representations in terms of kernel functions. We identify the orbits of a dynamical system in terms of the Koopmanism philosophy: we give a way to decide when there is an orbit contained in some compact subset of the domain.

## 1 Introduction

Kernel methods offer powerful techniques for dealing with complex and high-dimensional data by implicitly mapping the data into a high-dimensional feature space using kernel functions; see, for example, Schölkopf et al. (1998); Muandet et al. (2017); Sriperumbudur et al. (2011). The kernel embedding of measures, also known as the kernel mean, allows us to represent probability distributions as functions in some reproducing kernel Hilbert space. The kernel embedding of measures has proven to be a useful tool for various tasks in machine learning, such as distribution comparison in Gretton et al. (2012), generative modeling in Li et al. (2015), and density estimation in Liu et al. (2016), due to its ability to leverage the Euclidean geometry of reproducing kernel Hilbert spaces.

Let Ω be a set. A **reproducing kernel Hilbert space** is a Hilbert space $\mathcal{H}$ of functions $f : \Omega \to \mathbb{C}$ such that point evaluation is a bounded linear functional. In reproducing kernel Hilbert spaces, for each $\omega \in \Omega$ there is a **kernel function** $k_\omega \in \mathcal{H}$ such that $\langle f, k_\omega \rangle = f(\omega)$. (Note that the kernel functions thus induce a natural metrizable topology on Ω, which we will assume is Hausdorff for ease of discussion.) For more information on the basic theory of reproducing kernel Hilbert spaces, see Paulsen & Raghupathi (2016); Berlinet & Thomas-Agnan (2011). As aforementioned, an apparently somewhat useful technique in various contexts is to consider the kernel embedding of measures (or **kernel mean** in the case of distributions), taking a measure µ to an element ι(µ) defined by

$$\iota(\mu)(z)=\int k_{\omega}(z)\,\mathrm{d}\mu(\omega).$$

Note that

$$\langle f,\iota(\mu)\rangle=\int f(\omega)\,\mathrm{d}\mu(\omega).$$

That is, ι(µ) behaves like integration against µ as a linear functional on $\mathcal{H}$. See Muandet et al. (2017); Sriperumbudur et al. (2011) for a comprehensive review establishing theoretical foundations and exploring applications. The utility of the kernel embeddings of measures has proven significant recently for its ability to represent probability distributions in a reproducing kernel Hilbert space, which in principle may be more desirable to manipulate, calculate with, or compare because of its Euclidean geometry, as seen in various references such as Alvarez-Melis & Jaakkola (2018); Gretton et al. (2012); Sejdinovic et al. (2014); Balasubramanian et al. (2021). Each reproducing kernel Hilbert space thus gives a metric on measures defined by $d(\mu_1, \mu_2) = \|\iota(\mu_1) - \iota(\mu_2)\|$, known as the **maximum mean discrepancy**.
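To make these objects concrete, here is a minimal numeric sketch of the empirical kernel mean and the induced metric; the Gaussian kernel and the toy samples are illustrative assumptions only.

```python
import numpy as np

def gram(xs, ys, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_mean(sample, zs, sigma=1.0):
    """Evaluate iota(mu)(z) = (1/n) sum_i k(x_i, z) at the points zs."""
    return gram(sample, zs, sigma).mean(axis=0)

def mmd(s1, s2, sigma=1.0):
    """||iota(mu1) - iota(mu2)||, expanded via the reproducing property."""
    kxx = gram(s1, s1, sigma).mean()
    kyy = gram(s2, s2, sigma).mean()
    kxy = gram(s1, s2, sigma).mean()
    return np.sqrt(max(kxx + kyy - 2 * kxy, 0.0))

rng = np.random.default_rng(0)
mu1 = rng.normal(0.0, 1.0, (200, 2))
mu2 = rng.normal(0.5, 1.0, (200, 2))
print(mmd(mu1, mu2))  # small but nonzero distance between the empirical means
```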
Such metrics are also desirable over other metrics on measures, such as the Wasserstein metric or other transport-based metrics, due to ease of calculation. Note that on a compact space Ω, with a reproducing kernel Hilbert space such that the kernel embedding of measures is injective, the Wasserstein distance is equal to zero if and only if the kernel mean metric is zero; that is, the two distances induce the same metric topology.

Our goal will be to discuss the inverse problem for the kernel embedding of measures: given an element ν of a reproducing kernel Hilbert space, when does there exist a measure µ such that $\nu(z) = \int k_\omega(z)\,\mathrm{d}\mu(\omega)$? We identify the elements of a reproducing kernel Hilbert space that correspond to measures (and other measure-like elements) supported on a given set A, which provides insight into the data distribution and could potentially facilitate various learning tasks. We will treat several classes of spaces in detail, some of which are mostly of theoretical interest, such as the Hardy space or other spaces of analytic functions, but may apply to some applied topics such as learning partial differential equations along the lines of Stepaniants (2023), and others which are more concrete, such as Sobolev spaces. Spaces of analytic functions serve as important examples where the kernel embedding of measures is not injective, and thus where complete information may not be recoverable; for example, it is certainly plausible that there are multiple theories describing the same real objects, but which disagree on objects which do not correspond to anything remotely observable in reality, however troubling it may be to our innate longing for one to be right. Regardless, the problem of whether or not an element of a reproducing kernel Hilbert space is the embedding of a measure remains tractable in terms of duality with the so-called Herglotz functions. Note that, by Carathéodory's theorem, over an N-dimensional reproducing kernel Hilbert space any such embedded measure can be expressed as a convex combination of at most N + 1 kernel functions, which may or may not be supported in the support of the original.

We identify the orbits of a dynamical system using the framework of Koopmanism, enabling us to determine when an orbit is contained within a specific compact subset of the domain. Our results offer theoretical foundations and likely practical implications for tasks such as distribution modeling, manifold learning along the lines of Belkin & Niyogi (2003), and anomaly detection (see the survey Chandola et al. (2009)).

## 2 The Cone Of A**-Measures**

Let $A \subseteq \Omega$. We define the cone $C_A$ of A**-measures over** $\mathcal{H}$ to be the closed cone generated by $\{k_\omega \mid \omega \in A\}$; that is, the closure of the set of finite nonnegative combinations $\sum_i c_i k_{\omega_i}$ with $c_i \geq 0$ and $\omega_i \in A$. Observe that we can view the membership problem in $C_A$ as the inverse problem for the kernel embedding of measures, although in the case where A is not compact we may not obtain a *bona fide* measure *per se*, but something in their weak closure. Call a reproducing kernel Hilbert space **uniform** if there exists a point $\omega_0$ such that $k_{\omega_0}(z)$ is positive for every $z \in \Omega$. Note that if A is compact and the reproducing kernel Hilbert space is uniform, then if $\nu \in C_A$, there is a positive measure µ such that $\langle f, \nu \rangle = \int f(z)\,\mathrm{d}\mu(z)$. We note that in a reproducing kernel Hilbert space on a compact set, one can reconstruct the measure by iteratively taking convex combinations with kernel functions, which are the images of point masses under the kernel embedding of measures; a numeric sketch of this scheme follows.
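Concretely, assuming a Gaussian kernel on a finite candidate grid (both illustrative assumptions), the scheme looks like this; Theorem 2.1 below makes the improving choice of α precise, while here we simply search exhaustively.

```python
import numpy as np

# Represent elements of the cone as weight vectors over a finite candidate set A,
# so that <u, v>_H = u^T K v with K the Gram matrix of the kernel on A.
rng = np.random.default_rng(0)
A = np.linspace(-1.0, 1.0, 40)
K = np.exp(-(A[:, None] - A[None, :]) ** 2)      # Gaussian-kernel Gram matrix

def norm2(w):
    return float(w @ K @ w)                      # squared RKHS norm

mu = rng.dirichlet(np.ones(len(A)))              # target measure (weights on A)
est = np.eye(len(A))[0]                          # start from a single point mass

for step in range(200):
    rho = est - mu                               # current defect in the RKHS
    best = None
    for a in range(len(A)):
        d = np.eye(len(A))[a] - est              # move towards the point mass at a
        t = np.clip(-(rho @ K @ d) / max(norm2(d), 1e-12), 0.0, 1.0)
        err = norm2(rho + t * d)
        if best is None or err < best[0]:
            best = (err, t, a)
    err, t, a = best
    est = (1 - t) * est + t * np.eye(len(A))[a]  # optimal convex combination

print(np.sqrt(norm2(est - mu)))                  # the defect shrinks slowly
```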
More precisely, if one picks optimal convex combinations with random kernel functions iteratively, the result converges to the desired kernel mean in countably many steps. If our set A is compact, then the space of kernel means is compact, so we indeed get convergence. (Note that if one picks random kernel functions with different distributions, the constructed measures may be different if the problem is not uniquely determined.)

Theorem 2.1 (Update inequality). Let $\mathcal{H}$ be a reproducing kernel Hilbert space on Ω. Let $A \subseteq \Omega$. Let µ, ν be A-measures with total variation 1. If µ is not equal to ν, there exists $\alpha \in A$ such that

$$\|\nu\|^{2}>\mathrm{Re}[\langle\mu,\nu\rangle+\mu(\alpha)-\nu(\alpha)].$$

Moreover, for

$$t=\operatorname*{min}\left\{\frac{\|\nu\|^{2}+\mathrm{Re}[\nu(\alpha)-\langle\mu,\nu\rangle-\mu(\alpha)]}{\|\nu-k_{\alpha}\|^{2}},\,1\right\},$$

the quantity $\|\mu - [(1-t)\nu + tk_\alpha]\|/\|\nu - \mu\|$ is minimized and equal to

$$\sqrt{1-\left[\mathrm{Re}\left\langle\frac{\nu-\mu}{\|\nu-\mu\|},\frac{\nu-k_{\alpha}}{\|\nu-k_{\alpha}\|}\right\rangle\right]^{2}}$$

when $t < 1$, or $\|\mu - k_\alpha\|/\|\mu - \nu\|$ when $t = 1$.

Proof. Note that for any $\gamma \in \mathcal{H}$,

$$-\frac{d}{dt}\|\mu-[(1-t)\nu+t\gamma]\|^{2}\Big|_{t=0}=\mathrm{Re}\langle\nu-\mu,\nu-\gamma\rangle.$$

Thus, if γ were an A-measure with total variation 1 such that $\mathrm{Re}\langle\nu-\mu,\nu-\gamma\rangle > 0$, there must be some α such that $\|\nu\|^{2}+\mathrm{Re}[\nu(\alpha)-\langle\mu,\nu\rangle-\mu(\alpha)]=\mathrm{Re}\langle\nu-\mu,\nu-k_{\alpha}\rangle>0$, as γ is a limit of convex combinations of kernel functions. (Note that taking γ = µ gives such a positive choice.) Solving for the vertex of the quadratic $\|\mu-[(1-t)\nu+tk_{\alpha}]\|^{2}$ gives the desired result.

Note that, for optimal α,

$$\sqrt{1-\left[\mathrm{Re}\left\langle\frac{\nu-\mu}{\|\nu-\mu\|},\frac{\nu-k_{\alpha}}{\|\nu-k_{\alpha}\|}\right\rangle\right]^{2}}\leq\sqrt{1-\frac{\|\nu-\mu\|^{2}}{\|\nu-k_{\alpha}\|^{2}}},$$

so we expect quadratically slow convergence, along the lines of the law of large numbers, whenever $\|\mu - k_\alpha\|$ is bounded above. (The recurrence $t_{n+1} = t_n - \alpha t_n^3$ for $t, \alpha < 1$ goes to 0 quadratically slowly; that is, the sequence is $O(1/\sqrt{n})$.) For comparable results on the Wasserstein distance, see Del Barrio et al. (1999).

## 3 Herglotz Duality

Given a cone C in some locally convex topological vector space V, we define the **polar dual** $C^*$ to be the cone of linear functionals with nonnegative real part on C. Note that if C is a closed cone, the Hahn-Banach theorem implies that $v \in C$ if and only if, for every $\lambda \in C^*$, $\mathrm{Re}\,\lambda(v) \geq 0$. Taking the cone $C_A$, the dual cone is exactly the set of $h \in \mathcal{H}$ such that $\mathrm{Re}\langle h, k_\omega\rangle = \mathrm{Re}\,h(\omega) \geq 0$ for all $\omega \in A$. In analogy with classical complex analysis and operator theory, we call the polar dual of $C_A$ the cone of A**-Herglotz functions over** $\mathcal{H}$, denoted $\mathcal{H}_A$. We call $\mathcal{G} \subseteq \mathcal{H}_A$ an A**-test set** if the smallest closed cone containing $\mathcal{G}$ is all of $\mathcal{H}_A$, that is, if $\mathcal{G}$ generates $\mathcal{H}_A$ as a closed cone.

Viewing the above discussion (or, perhaps more accurately, annotated derivation) as a proof, we have the following result.

Theorem 3.1. Let $\mathcal{H}$ be a reproducing kernel Hilbert space on some domain Ω. Let $\nu \in \mathcal{H}$. The following are equivalent:

1. ν is an A-measure over $\mathcal{H}$,
2. For every $h \in \mathcal{H}_A$, we have $\mathrm{Re}\langle\nu, h\rangle \geq 0$,
3. Given an A-test set $\mathcal{G} \subseteq \mathcal{H}_A$, we have $\mathrm{Re}\langle\nu, h\rangle \geq 0$ for every $h \in \mathcal{G}$.

Note also that the solution to the inverse problem for the kernel embedding of measures is often highly nonunique; for example, for $\mathcal{H}$ a space of analytic functions, the problem for compact A is equivalent to the problem for ∂A by the maximum modulus principle, as they have the same cone of Herglotz functions.
Uniqueness problems for the kernel embedding of measures have been analyzed extensively; see the survey Sriperumbudur et al. (2011).

## 4 Over Various Spaces

## 4.1 The Global Case Over Analytic Function Spaces On The Unit Disk And The Hardy Space In Particular

Let $\mathcal{H}$ be a reproducing kernel Hilbert space over the unit disk $\mathbb{D} \subseteq \mathbb{C}$ such that all bounded analytic functions defined on a neighborhood of the unit disk are in our space. The space of Herglotz functions $\mathcal{H}_{\mathbb{D}}$ is exactly the space of $h \in \mathcal{H}$ such that $\mathrm{Re}\,h \geq 0$. Of course, Herglotz himself classified the cone of all analytic functions with nonnegative real part, Herglotz (1911); Lax (2002): they are functions of the form

$$ia+\int_{|\omega|=1}\frac{1+\overline{\omega}z}{1-\overline{\omega}z}\,\mathrm{d}\mu(\omega)$$

where a is some real number and µ is a finite positive measure on the unit circle. Thus, the cone of relevant Herglotz functions is generated by the elements of the form $\frac{1+\overline{\omega}z}{1-\overline{\omega}z}$ where $|\omega| < 1$, together with $\pm i$, which exactly says such functions are a $\mathbb{D}$-test set. The reproducing kernel for the **Hardy space** $H^2(\mathbb{D})$ is given by the **Szegő kernel** $k_\omega(z) = \frac{1}{1-\overline{\omega}z}$. Note that $\frac{1+\overline{\omega}z}{1-\overline{\omega}z} = 2k_\omega(z) - 1$. Thus, if $\nu \in C_{\mathbb{D}}$, we have that $2\nu(z) - \nu(0)$ is a Herglotz function such that $\pm i\nu(0)$ has nonnegative real part. That is, we have the following result.

Theorem 4.1. Let $\nu \in H^2(\mathbb{D})$. The following are equivalent:

1. ν is a $\mathbb{D}$-measure over $H^2(\mathbb{D})$,
2. $$\nu(z)=\frac{\nu(0)}{2}+\int_{|\omega|=1}\frac{1+\overline{\omega}z}{1-\overline{\omega}z}\,\mathrm{d}\mu(\omega)$$ where µ is a finite positive measure on the unit circle with total mass $\frac{\nu(0)}{2}$,
3. ν maps $\mathbb{D}$ into the half plane $\{z \in \mathbb{C} \mid \mathrm{Re}\,z \geq \frac{\nu(0)}{2}\}$.

Note that the standing assumption $\nu \in H^2(\mathbb{D})$ matters here: condition 3 alone does not force a function to lie in the Hardy space.

## 4.2 The Global Case For Entire Functions

Suppose $\mathcal{H}$ is a space of entire analytic functions on $\mathbb{C}^n$. Liouville's theorem implies that the only Herglotz functions possibly in our reproducing kernel Hilbert space are the constants $a + ib$ where $a \geq 0$ and b is real. Thus, the cone of $\mathbb{C}^n$-measures over $\mathcal{H}$ is either a space of codimension one plus a ray (explicitly, $\{\nu \in \mathcal{H} : \langle\nu, 1\rangle = 0\}$ plus the nonnegative ray spanned by the constant direction) or the whole space.

Theorem 4.2. Suppose $\mathcal{H}$ is a space of entire analytic functions on $\mathbb{C}^n$. Then, either the cone of $\mathbb{C}^n$-measures over $\mathcal{H}$ is all of $\mathcal{H}$ (when there are no constant functions) or a space of codimension one plus a ray. If we additionally assume $\mathcal{H}$ is uniform, and thus contains a constant function, we merely need the value of ν at $\omega_0$ to be positive.

Examples of such spaces include the **Taylor-type spaces**, which arise from taking an entire analytic function g with nonnegative power series coefficients and setting $k_\omega(z) = g(\langle z, \omega\rangle)$. If $g(0) > 0$, the space is uniform. The classical Fock space or **Segal-Bargmann space** (as defined in Bargmann (1961)) is given by $k_\omega(z) = e^{\langle z, \omega\rangle}$. (Note there are other objects called the Fock space in noncommutative operator theory; see Fock (1932).) Note that the choice of g polynomial gives rise to finite dimensional examples, although one then has a paucity of important operators, such as multipliers, composition operators and so on, and thus one must use their truncations.

## 4.3 Real Domains And Real Algebraic Geometry

Let Ω be a subset of $\mathbb{R}^n$. Let $\mathcal{H}$ be a real reproducing kernel Hilbert space. (For example, we could take certain Sobolev spaces, or the space of functions on finitely many points, important for discretizing the problem for applications, among other examples.) In such a case, all the $k_\omega$ must be real-valued functions.
Given $A \subseteq \Omega$, the cone of real-valued Herglotz functions is exactly the cone of nonnegative functions. Thus, the cone of A-measures over $\mathcal{H}$ consists exactly of the ν such that for every nonnegative $h \in \mathcal{H}$, $\langle\nu, h\rangle \geq 0$. In Putinar (1993), the following Positivstellensatz, or positivity locus theorem, was obtained, which itself builds on the foundational work of Schmüdgen (1987).

Theorem 4.3 (Putinar's Positivstellensatz). Let $g_1, g_2, \ldots$ be real polynomials (either a finite or infinite list) in n variables such that at least one of $G_i = \{z \in \mathbb{R}^n \mid g_i(z) \geq 0\}$ is compact. Let $g_0 = 1$. Let $A = \bigcap_i G_i$. For any polynomial p such that $p > 0$ on A,

$$p=\sum_{k=1}^{N}q_{k}^{2}g_{i_{k}},$$

where the $q_k$ are some real polynomials, the $i_k$ are some nonnegative integers, and N is finite.

We see the following immediate corollary.

Corollary 4.4. Let Ω be a subset of $\mathbb{R}^n$. Let $\mathcal{H}$ be a Hilbert space of functions which is closed under complex conjugation such that the polynomials are contained in $\mathcal{H}$ and dense. Let $g_1, g_2, \ldots$ be real polynomials (either a finite or infinite list) in n variables such that at least one of $G_i = \{z \in \mathbb{R}^n \mid g_i(z) \geq 0\}$ is compact. Let $g_0 = 1$. For $A = \bigcap_i \{z \in \mathbb{R}^n \mid g_i(z) \geq 0\} \subseteq \Omega$, we have that the collection of polynomials of the form $q^2 g_i$ and $q^2$ forms an A-test set.

Define the $g_i$**-localizing matrix** to be the infinite matrix $A_i = \left[\langle g_i x^{\alpha+\beta}, \nu\rangle\right]_{\alpha,\beta}$, where α, β run over all multi-indices. We see that $\nu \in C_A$ if and only if all the $A_i$ are positive semidefinite. (By which we mean all of their finite principal submatrices are positive semidefinite.) Thus, we recover exactly the classical conditions on moment sequences as in Schmüdgen (1987); Curto & Fialkow (2005).

## 4.4 Reflexive Kernels

Say a real reproducing kernel Hilbert space is **reflexive** if $\mathcal{H}_\Omega \subseteq \mathcal{H}_\Omega^*$ and $k_\omega \in \mathcal{H}_\Omega$. Note, for example, this is the case for the Poisson kernel, $k_\omega(z) = \frac{1}{1-\overline{\omega}z} + \frac{1}{1-\omega\overline{z}} - 1 = \mathrm{Re}\,\frac{1+\overline{\omega}z}{1-\overline{\omega}z}$. In such a case, by Theorem 3.1, we have that $\mathcal{H}_\Omega = C_\Omega$. That is, any Herglotz function on a reflexive space is in the cone generated by the kernels. Note also, in a reflexive space, that the kernels are positive functions, and the inner product of two Herglotz functions is positive. The coincidental duality can also be used to do finite interpolation.

Theorem 4.5. Let $\mathcal{H}$ be reflexive. Then, $\mathcal{H}_\Omega = C_\Omega$. Moreover, given points $x_i \in \Omega$ and real targets $y_i$, there exists $f \in \mathcal{H}_\Omega^{\mathrm{ptwise}}$ such that $f(x_i) = y_i$ if and only if for every sequence of $a_i$ such that $\sum a_i k_{x_i} \in \mathcal{H}_\Omega$, we have that $\sum a_i y_i \geq 0$. (Here $\mathcal{H}_\Omega^{\mathrm{ptwise}}$ denotes the closure of the Herglotz functions in the topology of pointwise convergence.)

Note that, over the disk with the Poisson kernel structure, we obtain exactly the Herglotz representation formula from the above equivalence. Note that, with that structure, the Pick interpolation theorem gives that the above interpolation problem is solvable if and only if there exist $\tilde{y}_i$ such that the matrix

$$\left[\frac{y_{i}+y_{j}+i(\tilde{y}_{i}-\tilde{y}_{j})}{1-x_{i}\overline{x_{j}}}\right]_{i,j}$$

is positive semidefinite. It would be valuable to give a direct proof of the equivalence. Note that if the kernel functions are nonnegative, then either there is a pair of Herglotz functions with negative inner product or the space is reflexive. For example, the pluriharmonic Drury-Arveson space (kernel $\mathrm{Re}\,\frac{1+\langle z,w\rangle}{1-\langle z,w\rangle}$ on the Euclidean ball in $\mathbb{C}^d$) in more than one variable is not reflexive, so there must be a pair of Herglotz functions with negative inner product. (We leave the process of finding such a pair to the reader.) In general, the space of harmonic functions on an Ω with smooth enough boundary is reflexive.
Note also that given a space where the kernel functions are nonnegative, one can add some fictional points to the space Ω to make the space reflexive. (These correspond to a maximal cone on which $\langle\nu, \mu\rangle \geq 0$. The utility of such a construction is questionable, especially as the resulting space is unlikely to have nice features, such as a lot of multipliers, but it is at least worth mentioning. Such maximal cones are the so-called self-dual cones, which are classified abstractly for general Hilbert spaces in terms of a direct integral theory as described by Penney (1976). Note there are a lot of these, even in finite dimensions: for example, the nonnegative orthant, right circular cones in $\mathbb{R}^3$, and so on.)

Finally, we point out that a fundamental problem along these lines is: given a bounded $\Omega \subseteq \mathbb{C}^d$, when is there a reflexive reproducing kernel Hilbert space on Ω containing the pluriharmonic functions defined on a neighborhood of Ω as a dense subspace? More generally, one would like to classify all reflexive reproducing kernel Hilbert space structures on a given Ω.

## 5 Koopmanism And An Application Of The Inverse Problem For The Kernel Embedding Of Measures

Let Ω be some domain. Let $F : \Omega \to \Omega$. Koopmanism is a popular technique in dynamical systems which (among other things) seeks to find the orbits of a discrete or continuous time dynamical system by linearizing the problem via the theory of composition operators, $K_F h = h \circ F$, which are known as **Koopman operators** in the dynamical context, especially with respect to the dynamic mode decomposition. See Budišić et al. (2012); Brunton et al. (2022). Their adjoints are the Perron-Frobenius operators, which act on kernels by $P_F k_\omega = k_{F(\omega)}$. The Koopmanism way of doing things (in the discrete time case) should ask when there is a measure supported in some set A which is invariant under composition, which in our context translates to finding eigenvectors ν such that $P_F \nu = \nu$ and ν is an A-measure over $\mathcal{H}$. Such ν are called F**-invariant** A-measures over $\mathcal{H}$. We say a compact set A is **Urysohn** if there exists an A-Herglotz function $h \in \mathcal{H}_A$ such that $h|_A = 0$ and $h|_{A^c} > 0$.

Corollary 5.1. Let Ω be some domain. Let $F : \Omega \to \Omega$ be continuous. Let $\mathcal{H}$ be a reproducing kernel Hilbert space on Ω. Let $A \subseteq \Omega$. The following are equivalent:

1. There exists an F-invariant A-measure over $\mathcal{H}$,
2. There is a ν such that $P_F \nu = \nu$ and $\langle\nu, h\rangle \geq 0$ for every h an A-Herglotz function.

Moreover, if A is Urysohn, then such a ν corresponds to a *bona fide* measure µ supported on A, and thus A contains an orbit of F.

Proof. It is worth explaining why ν corresponds to such a proper measure µ. Pick a measure $\mu_0$ supported on A representing ν. Now take $\mu_k = \mu_{k-1} \circ F$. Note $\mu_k$ must be supported in A since it also corresponds to ν. So the support of the measure $\mu = \lim_{N\to\infty} \frac{1}{N}\sum_{k=1}^{N} \mu_k$ is an orbit of F.

The case of the disk was extremely tractable as we had existing Herglotz representations on the disk. One wonders if one could adapt our framework to be compatible with Agler models (see Agler & McCarthy (2002)) as adapted to Herglotz functions on the bidisk and study dynamics on the bidisk, with a goal of generalizing existing work such as Sola & Tully-Doyle; Jury & Tsikalas (2023).

Theorem 5.2 (Operator update inequality). Let Ω be some domain. Let $\mathcal{H}$ be a reproducing kernel Hilbert space on Ω. Let T be a bounded linear operator on $\mathcal{H}$. Let $A \subseteq \Omega$. Let $\nu \in \mathcal{H}$. Suppose there is $\mu \in C_A$ such that $T\mu = \mu$.
If $T\nu \neq \nu$, there exists a point $\alpha \in A$ such that the derivative of $f(t) = \|(T-1)((1-t)\nu + tk_\alpha)\|^2$ at $t = 0$ is negative and, moreover, $f'(0) \leq -2\|(T-1)\nu\|^2$.

Proof. Take $\nu \in \mathcal{H}$ and a point $\alpha \in A$ with corresponding kernel function $k_\alpha$. Taking the best linear approximation of $\|(T-1)((1-t)\nu + tk_\alpha)\|^2$ with respect to t at $t = 0$ gives the slope $2\,\mathrm{Re}\langle(T-1)k_\alpha, (T-1)\nu\rangle - 2\|(T-1)\nu\|^2$. Note there is always an α such that $\mathrm{Re}\langle(T-1)k_\alpha, (T-1)\nu\rangle \leq 0$ if there exists such a measure µ with $(T-1)\mu = 0$.

Thus, we again see roughly quadratically slow convergence on compact A if one takes optimal updates by convexly combining with random kernel functions iteratively. Taking $T = P_F$ for some $F : \Omega \to \Omega$ gives the update inequality for the Koopman orbit-detection case. Conceptually, one can view this as a shift in perspective: instead of iterating to find a steady state, we are picking a point, asking how much mass is missing there, and adding it back. Note also that it is statistically feasible to decide when the orbit of every point of A eventually leaves A: if one stops seeing the norm of the defect $\|(P_F - 1)\nu\|$ decrease as in Theorem 5.2, then with more and more confidence we can conclude it will never decrease.

## References

Jim Agler and John Edward McCarthy. *Pick interpolation and Hilbert function spaces*, volume 44. American Mathematical Soc., 2002.

David Alvarez-Melis and Tommi Jaakkola. Gromov-Wasserstein alignment of word embedding spaces. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 1881–1890, 2018.

Krishnakumar Balasubramanian, Tong Li, and Ming Yuan. On the optimality of kernel-embedding based goodness-of-fit tests. *The Journal of Machine Learning Research*, 22(1):1–45, 2021.

Valentine Bargmann. On a Hilbert space of analytic functions and an associated integral transform part I. *Communications on Pure and Applied Mathematics*, 14(3):187–214, 1961.

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. *Neural Computation*, 15(6):1373–1396, 2003.

Alain Berlinet and Christine Thomas-Agnan. *Reproducing kernel Hilbert spaces in probability and statistics*. Springer Science & Business Media, 2011.

Steven L Brunton, Marko Budišić, Eurika Kaiser, and J Nathan Kutz. Modern Koopman theory for dynamical systems. *SIAM Review*, 64(2), 2022.

Marko Budišić, Ryan Mohr, and Igor Mezić. Applied Koopmanism. *Chaos: An Interdisciplinary Journal of Nonlinear Science*, 22(4):047510, 2012.

Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. *ACM Computing Surveys (CSUR)*, 41(3):1–58, 2009.

R. Curto and L. Fialkow. Truncated K-moment problems in several variables. *J. Operator Theory*, 54(1):189–226, 2005.

Eustasio Del Barrio, Evarist Giné, and Carlos Matrán. Central limit theorems for the Wasserstein distance between the empirical and the true distributions. *Annals of Probability*, pp. 1009–1071, 1999.

Vladimir Fock. Konfigurationsraum und zweite Quantelung. *Zeitschrift für Physik*, 75(9-10):622–647, 1932.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.

G. Herglotz. Über Potenzreihen mit positivem, reellen Teil im Einheitskreis. *Ber. Verh. Sachs. Akad. Wiss. Leipzig*, 63:501–511, 1911.

Michael T Jury and Georgios Tsikalas. Denjoy-Wolff points on the bidisk via models. *arXiv preprint arXiv:2304.13171*, 2023.

Peter D Lax. *Functional analysis*, volume 55. John Wiley & Sons, 2002.
Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In *International Conference on Machine Learning*, pp. 1718–1727. PMLR, 2015.

Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In *International Conference on Machine Learning*, pp. 276–284. PMLR, 2016.

Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf, et al. Kernel mean embedding of distributions: A review and beyond. *Foundations and Trends® in Machine Learning*, 10(1-2):1–141, 2017.

Vern I Paulsen and Mrinal Raghupathi. *An introduction to the theory of reproducing kernel Hilbert spaces*, volume 152. Cambridge University Press, 2016.

Richard C Penney. Self-dual cones in Hilbert space. *Journal of Functional Analysis*, 21(3):305–315, 1976.

Mihai Putinar. Positive polynomials on compact semi-algebraic sets. *Indiana University Mathematics Journal*, 42(3):969–984, 1993.

Konrad Schmüdgen. On a generalization of the classical moment problem. *Journal of Mathematical Analysis and Applications*, 125(2):461–470, 1987.

Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. *Neural Computation*, 10(5):1299–1319, 1998.

Dino Sejdinovic, Heiko Strathmann, Maria Lomeli Garcia, Christophe Andrieu, and Arthur Gretton. Kernel adaptive Metropolis-Hastings. In *International Conference on Machine Learning*, pp. 1665–1673. PMLR, 2014.

Alan Sola and Ryan Tully-Doyle. Dynamics of low-degree rational inner skew-products on $\mathbb{T}^2$. In *Annales Polonici Mathematici*, pp. 1–25. Instytut Matematyczny Polskiej Akademii Nauk.

Bharath Sriperumbudur, Kenji Fukumizu, and Gert R. G. Lanckriet. Universality, characteristic kernels and the RKHS embedding of measures. *Journal of Machine Learning Research*, 12:2389–2410, 2011.

George Stepaniants. Learning partial differential equations in reproducing kernel Hilbert spaces. *Journal of Machine Learning Research*, 24(86):1–72, 2023.
Review 1:

Summary: This is a functional analysis paper on a topic tangential to some challenges within kernel methods in machine learning. In machine learning, we consider data X or measures from which data is drawn, and consider operating on them with a bi-variate, often positive-definite kernel K (e.g., a Gaussian kernel). Each data point x_i in X, together with the positive-definite kernel K, encodes a univariate function K(x_i, .), which is an element of a reproducing kernel Hilbert space (RKHS), H_K. One can then operate on these functions using linear techniques (often implicitly via the kernel trick) to perform various non-linear ML tasks while retaining convexity, etc. A fundamental object to consider in this context is the kernel mean, which is simply $(1/|X|) \sum_{x_i \in X} K(x_i, .)$. It can be viewed as a kernel density estimate defined on X. This all generalizes to continuous measures $\mu$ as well. Typically in ML the data X is considered in Euclidean space, R^d. In this case, the reproducing property ensures that for *any* element of H_K there exists some measure $\mu$ (allowing negative weights, or if preferred the difference of two measures $\mu-\nu$) on R^d so the kernel mean of $\mu$ is that element of H_K. This paper examines more exotic domains, and their corresponding RKHSs, and examines for which elements of those RKHSs there exists an element of the domain which can generate them. It turns out that, unlike in the common Euclidean case, this is not always the case, but it can be characterized for a few cases.

Strengths and Weaknesses: These results seem neat, but the article is not written for an ML audience, and so is not a good fit for TMLR. This manifests in two ways:
1. the connection to ML implications is very tenuous, and only made with some citations in the opening paragraph. The results on the more exotic spaces are not connected to ML.
2. the language and detail of the proofs is not written for what an ML audience would expect. As someone who has written proofs about RKHSs in various applications, I found the arguments too terse and with too many elements missing. For example, in trying to understand Theorem 3.1, there were two (possibly related) parts where I got stuck.
- In defining the $\mathcal{A}$-test set, I don't understand what "generates" means in "$\mathcal{G} \subseteq \mathcal{H_A}$ which generates $\mathcal{H_A}$".
- in the third equivalent statement, it uses $\mathcal{G}$ and $h$ but the relation between them is not defined, so it's not clear what that third statement means.
I had similar trouble following both the details of the arguments, and the relation to ML for the other sections.

Requested Changes: For me to feel this would be a good fit for TMLR, I would expect to see a description of how the specific results have potential application to machine learning. I would also hope that the proofs would be written with more details, so as to be more accessible to a typical ML audience.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: In this article, the authors propose to study the inverse problem for RKHSs, that is, whether it is possible to build a measure on the base space whose kernel mean embedding is exactly equal to a given, pre-defined RKHS element. In particular, they discuss how this question can be tackled through Herglotz duality, and then identify several specific cases and kernels (such as Hardy spaces and reflexive kernels) where conditions can be stated for finding appropriate measures.
Strengths and Weaknesses: While the question under study is definitely interesting from a theoretical perspective, I think the article suffers from several major weaknesses:

---The problem is not sufficiently motivated. I understand that the inverse problem for RKHSs is mostly theoretical, but in a contribution to TMLR, there needs to be at least some sort of motivation for applications: why is finding a measure on the data that corresponds to some RKHS point interesting from a practical point of view? Can it help, e.g., to get a better understanding of the structure of RKHSs? Or can it be useful for data visualization perhaps?

---The writing can be greatly improved. There are numerous typos and confusing sentences, which make the article not so easy to follow. In particular, it would really help to add more connections and transitions between the sections, as it often feels that the article is jumping from a setting to a drastically different one when going from section to section. I would recommend smoothing the flow of the article by explaining why a given section or result naturally leads to the next one. Moreover, some statements include variables that have not been defined (if I understand correctly); e.g., the x_i and y_i in Theorem 4.5.

---It is not clear at all (at least from a non-expert point of view) where the contribution actually is. The article looks like a summary of known results and theorems, with the original ones being very incremental (as all can be proved in a few lines). It would greatly help to emphasize what the goal of the article is, whether it aims at gathering a few connected results from the literature, or at bringing new ideas to the question.

Requested Changes: A major rewriting of the article is required in my opinion (see suggestions above).

Broader Impact Concerns: No concern.

==================================================

Review 3:

Summary: The paper deals with the inverse problem for kernel mean embedding. That is, given an element in a reproducing kernel Hilbert space, does it correspond to the kernel embedding of a measure? The authors approach this by reformulating the problem as whether an element in an RKHS belongs to the so-called "cone of A-measures" over the RKHS. In addition, they discuss an alternative characterisation in terms of its polar dual. They conclude by providing examples of RKHSs and conditions for which an element in the RKHS is a kernel embedding, in addition to providing an application to Koopmanism.

Strengths and Weaknesses:

__Strengths:__
- The introduction is fairly clear compared to the remaining parts, including a good explanation of kernel embedding and the problem the paper is trying to solve.
- There is a strong display of mathematical knowledge and the insights are quite interesting (especially the way the authors interpret the inverse problem as a membership problem for some cone). However, this can also risk losing the target ML audience.

__Weakness:__
- In general the paper is difficult to comprehend, not only because of the technicalities outside of the standard ML toolbox but also because some parts are (in my opinion) carelessly written. For example, some results assume too much knowledge from the reader, making them seem to appear out of nowhere, and some objects are not clearly defined. I believe this can be improved by streamlining the text better, including additional details and providing clearer explanations.
- The applications and significance to machine learning are not very clear (except for the obvious connection with kernel machines, but it is not sure whether the results in the paper give any useful insights for practical machine learning).

Requested Changes:
- The following objects are not clearly defined, making the paper hard to understand:
  - (page 2) "We define the cone ... to be the closed cone generated by $\\{k_\omega | \omega \in \mathcal{A}\\}$". More details would be helpful. For example by stating more explicitly $\mathcal{C}_{A} = \\{h \in \mathcal{H} | h(z) = \int_A k_\omega (z) d\mu(\omega) \\}$.
  - (page 3): "We call $\mathcal{G} \subseteq \mathcal{H}_A$ which generates $\mathcal{H}_A$ as a closed cone an $\mathcal{A}$-test set". What does "generate" mean in this context?
- Missing phrase in "We say a compact set $\mathcal{A}$ is Urysohn on [?] if there exists..." (page 6).
- I don't understand the purpose of Theorem 2.1. I believe it has something to do with reconstructing measures from an element in the RKHS (guessing from the context)? This needs to be explained much better and the implications of this result should be stated more clearly.
- Please state where $h$ belongs in the third bullet point of Theorem 3.1.
- The phrase "Note that the theorem assumes $\nu \in \mathcal{H}^2(D)$" (page 4) seems a bit redundant given that the assumption is already included in the statement of the Theorem. Instead of this, it would be more useful for the readers to have a small summary of this subsection (that any element in the Hardy space $\mathcal{H}^2(D)$ is a kernel embedding provided ... is satisfied).
- Sections 4.2 -- 4.4 assume too much from the readers and I believe need more details. For example, in section 4.2, explain in more detail why "the cone of $\mathbb{C}^n$-measures over $\mathcal{H}$ is a space of codimension one plus a ray or the whole space". Can you also be more specific about "a space of codimension one plus a ray"?
- Please clarify what is the main purpose of Corollary 5.1. I believe that this is to show that the whole preceding discussion about finding a function in the RKHS corresponding to the kernel mean of a measure can be used to prove the existence of an orbit of $F$ in $\mathcal{A}$? It would be useful to have this intention more clearly expressed, and further, it would be better if you could make a clearer connection to machine learning. At the moment, this looks more like an application to dynamical systems theory.
- Including some kind of conclusion or discussion section would be useful for the readers.

Broader Impact Concerns: The paper is purely theoretical and there are no ethical concerns as far as I can tell.

==================================================

Metareview:

Recommendation: Reject

Comment: Two of the three reviewers gave reject recommendations. The third reviewer did not provide a recommendation, but with agreement between the other two reviewers (and the AC), this was enough to recommend rejecting the paper. There was no author rebuttal on this paper, so the reviewers did not change their original opinion of the paper, which was generally in agreement that the paper is not ready for publication. In particular, see the comments above on suitability for the TMLR audience and the level of detail of the theoretical results in the paper.
Though kernel methods are certainly important for machine learning, one would like to see an actual direct connection to existing ML methods or some kind of application of the results that would be relevant to an ML audience. All three reviewers noted clear deficiencies along these two directions. See the individual reviews for further details, including concerns about the writing of the paper. ==================================================
# Analysis Of Classifier-Free Guidance Weight Schedulers

Anonymous authors Paper under double-blind review

## Abstract

Classifier-Free Guidance (CFG) enhances the quality and condition adherence of text-to-image diffusion models. It operates by combining the conditional and unconditional predictions using a fixed weight. However, recent works vary the weights throughout the diffusion process, reporting superior results but without providing any rationale or analysis. By conducting comprehensive experiments, this paper provides insights into CFG weight schedulers. Our findings suggest that simple, monotonically increasing weight schedulers consistently lead to improved performances, requiring merely a single line of code. In addition, more complex parametrized schedulers can be optimized for further improvement, but do not generalize across different models and tasks.

## 1 Introduction

Diffusion models have demonstrated prominent generative capabilities in various domains, e.g. images (Ho et al., 2020), videos (Luo et al., 2023), acoustic signals (Kang et al., 2023b), or 3D avatars (Chen et al., 2023). Conditional generation with diffusion (e.g. text-conditioned image generation) has been explored in numerous works (Saharia et al., 2022; Ruiz et al., 2023; Balaji et al., 2022), and is achieved in its simplest form by adding an extra condition input to the model (Nichol & Dhariwal, 2021). To increase the influence of the condition on the generation process, Classifier Guidance (Dhariwal & Nichol, 2021) proposes to linearly combine the gradients of a separately trained image classifier with those of a diffusion model. Alternatively, Classifier-Free Guidance (CFG) (Ho & Salimans, 2021) simultaneously trains conditional and unconditional models, and exploits a Bayesian implicit classifier to condition the generation without an external classifier. In both cases, a weighting parameter ω controls the importance of the generative and guidance terms and is applied identically at all timesteps. Varying ω trades off fidelity against condition reliance, as an increase in condition reliance often results in a decline in both fidelity and diversity.

In some recent literature, the concept of dynamic guidance instead of a constant one has been mentioned: MUSE (Chang et al., 2023) observed that a linearly increasing guidance weight could enhance performance and potentially increase diversity. This approach has been adopted in subsequent works, such as in Stable Video Diffusion (Blattmann et al., 2023), and further mentioned in Gao et al. (2023) through an exhaustive search for a parameterized cosine-based curve (pcs4) that performs very well on a specific pair of model and task. Intriguingly, despite the recent appearance of this topic in the literature, none of the referenced studies has conducted any empirical experiments or analyses to substantiate the use of a guidance weight scheduler. For instance, the concept of linear guidance is briefly mentioned in MUSE (Chang et al., 2023), around Eq. 1: "we reduce the hit to diversity by linearly increasing the guidance scale t [...] allowing early tokens to be sampled more freely". Similarly, the pcs4 approach of Gao et al. (2023) is only briefly discussed in the appendix, without any detailed ablation or comparison to static guidance baselines. Thus, to the best of our knowledge, a comprehensive guide to dynamic guidance weight schedulers does not exist at the moment.
In this paper, we bridge this gap by delving into the behavior of guidance and systematically examining its influence on the generation, discussing the mechanism behind dynamic schedulers and the rationale for their enhancement. We explore various heuristic dynamic schedulers and present a comprehensive benchmark of both heuristic and parameterized dynamic schedulers across different tasks, focusing on fidelity, diversity, and textual adherence. Our analysis is supported by quantitative and qualitative results and user studies.

Low static guidance: w = 2.0

```python
for t in range(1, T):
    eps_c = model(x, T-t, c)
    eps_u = model(x, T-t, 0)
    eps = (w+1)*eps_c - w*eps_u
    x = denoise(x, eps, T-t)
```

✗ Fuzzy images, but **many details and textures**

High static guidance: w = 14.0

```python
for t in range(1, T):
    eps_c = model(x, T-t, c)
    eps_u = model(x, T-t, 0)
    eps = (w+1)*eps_c - w*eps_u
    x = denoise(x, eps, T-t)
```

✗ Sharp images, but **lack of details and solid colors**

Dynamic guidance: w0 = 14.0

```python
for t in range(1, T):
    eps_c = model(x, T-t, c)
    eps_u = model(x, T-t, 0)
    # clamp-linear scheduler
    w = max(1, w0*2*t/T)
    eps = (w+1)*eps_c - w*eps_u
    x = denoise(x, eps, T-t)
```

✓ **Sharp images with many details and textures,** without extra cost.

![1_image_0.png](1_image_0.png)

Figure 1: Classifier-Free Guidance introduces a trade-off between detailed but fuzzy images (low guidance, top) and sharp but simplistic images (high guidance, middle). Using a guidance scheduler (bottom) is simple yet very effective in improving this trade-off.

Our findings are the following: First, we show that too much guidance at the beginning of the denoising process is harmful and that monotonically increasing guidance schedulers perform the best. Second, we show that a simple linearly increasing scheduler always improves the results over the basic static guidance, while incurring no additional computational cost, requiring no additional tuning, and being extremely simple to implement. Third, a parameterized scheduler, like clamping a linear scheduler below a carefully chosen threshold (Figure 1), can significantly further improve the results, but the choice of the optimal parameter does not generalize across models and tasks and thus has to be carefully tuned for the target model and task. All our findings serve as a guide to CFG schedulers that will benefit and improve works relying on CFG.

## 2 Related Work

**Generative and Diffusion Models.** Before the advent of diffusion models, several generative models were developed to create new data that mimics a given dataset, either unconditionally or with conditional guidance. Notable achievements include Variational AutoEncoders (VAEs) (Kingma & Welling, 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which have recorded significant progress in various generative tasks (Brock et al., 2018; Kang et al., 2023a; Dufour et al., 2022; Donahue et al., 2018). Recently, diffusion models have demonstrated a remarkable capacity to produce high-quality and diverse samples. They have achieved state-of-the-art results in several generation tasks, notably in image synthesis (Song et al., 2020; Ho et al., 2020), text-to-image applications (Dhariwal & Nichol, 2021; Rombach et al., 2022; Podell et al., 2023; Pernias et al., 2023) and text-to-motion (Chen et al., 2023).

![2_image_0.png](2_image_0.png)

Figure 2: **Examples of all heuristics** on SDXL. Increasing ones (*linear* and *cosine*) enhance fidelity, textual adherence and diversity.

**Guidance in Diffusion and Text-to-Image.**
Making generative models controllable and capable of producing user-aligned outputs requires making the generation conditional on a given input. Conditioned diffusion models have been vastly explored (Saharia et al., 2022; Ruiz et al., 2023; Balaji et al., 2022). The conditioning is achieved in its simplest form by adding an extra input, typically with residual connections (Nichol & Dhariwal, 2021). To reinforce the model's fidelity to specific conditions, two main approaches prevail: Classifier Guidance (CG) (Dhariwal & Nichol, 2021), which involves training an image classifier externally, and Classifier-Free Guidance (CFG) (Ho & Salimans, 2021), which relies on an implicit classifier through joint training of conditional and unconditional models (using dropout on the condition). Particularly, CFG has catalyzed advancements in text-conditional generation, a domain where training a noisy text classifier is less convenient and performs worse. This approach breathed new life into the text-to-image application, initially proposed in several works such as Reed et al. (2016) and Mansimov et al. (2015). Numerous works (Rombach et al., 2022; Ramesh et al., 2022; Nichol et al., 2022; Avrahami et al., 2022) have leveraged text-to-image generation with CFG diffusion models conditioned on text encoders like CLIP (Radford et al., 2021), showcasing significant progress in the field; e.g., the Latent Diffusion Model (Dhariwal & Nichol, 2021) and Stable Diffusion (Rombach et al., 2022) employ VAE latent-space diffusion with CFG and a CLIP encoder. SDXL, an enhanced version, leverages a larger model and an additional text encoder for high-resolution synthesis.

**Improvements on Diffusion Guidance.** Noticing that in *Classifier Guidance (CG)* the classifier's gradient tends to vanish towards the early and final stages due to overconfidence, Zheng et al. (2022) leverages the entropy of the output distribution as an indication of the vanishing gradient and rescales the gradient accordingly. To prevent such adversarial behaviours, Dinh et al. (2023b) explored using multiple class conditions, guiding the image generation from a noise state towards an average of image classes before focusing on the desired class with an empirical scheduler. Subsequently, Dinh et al. (2023a) identified and quantified gradient conflicts emerging from the guidance and suggested gradient projection as a solution.

![3_image_0.png](3_image_0.png)

Figure 3: **Qualitative results of fidelity** for different guidance schedulers compared with the static baseline. *linear* and *cosine* schedulers show better image details (flower petal, figurine engraving), more natural colour (pink corridor), and better textual adherence (bad weather for the two-birds image, key-chain of the figurine).

In *Classifier-Free Guidance (CFG)*, Li et al. (2023) used CFG to recover a zero-shot classifier by sampling across timesteps and averaging the guidance magnitude for different labels, with the lowest magnitude corresponding to the most probable label. However, they observed a discrepancy in performance across timesteps, with early stages yielding lower accuracy than intermediate ones. Chang et al. (2023) observed that a linear increase in guidance scale enhances diversity. Similarly, Gao et al. (2023) developed a parameterized power-cosine-like curve, optimizing a specific parameter for their dataset and method. However, these linear and power-cosine schedulers have been suggested as improvements over constant static guidance without rigorous analysis or testing.
To this end, we provide an extensive study of dynamic guidance for both heuristic and parametrized schedulers across several tasks. Concurrently, Kynkäänniemi et al. (2024) proposes empirically removing the initial and final timesteps of the classifier-free guidance (CFG) for improved generation. Similarly, Zhang et al. (2024); Castillo et al. (2023) observe that the conditional and unconditional responses of some models may converge to similar behaviours at certain timesteps, particularly towards the ending stage.

## 3 Background On Guidance

Following DDPM (Ho et al., 2020), diffusion consists in training a network $\epsilon_\theta$ to denoise a noisy input to recover the original data at different noise levels. More formally, the goal is to recover $x_0$, the original image, from $x_t = \sqrt{\gamma(t)}\,x_0 + \sqrt{1-\gamma(t)}\,\epsilon$, where $\gamma(t) \in [0, 1]$ is a noise schedule of the timestep $t$, applied to a standard Gaussian noise $\epsilon \sim \mathcal{N}(0, 1)$. In practice, Ho et al. (2020) find that predicting the noise $\epsilon_\theta$ instead of $x_0$ yields better performance, leading to the training loss

$$\mathcal{L}_{\mathrm{simple}}=\mathbb{E}_{x_0 \sim p_{\mathrm{data}},\,\epsilon \sim \mathcal{N}(0,1),\,t \sim \mathcal{U}[0,1]}\left[\|\epsilon_\theta(x_t)-\epsilon\|\right]$$

based on the target image distribution $p_{\mathrm{data}}$, with $\mathcal{U}$ the uniform distribution.

![4_image_0.png](4_image_0.png)

Figure 4: **Qualitative results of diversity** of different guidance schedulers compared with the static baseline. Heuristic schedulers show better diversity: more varied compositions and richer background types for the teddy bear example, as well as in the gesture, lighting, colour and compositions of the astronaut image.

Once the network is trained, we can sample from $p_{\mathrm{data}}$ by setting $x_T = \epsilon \sim \mathcal{N}(0, 1)$ (with $\gamma(T) = 0$), and gradually denoising to reach the data point $x_0 \sim p_{\mathrm{data}}$ with different types of samplers, e.g., DDPM (Ho et al., 2020) or DDIM (Song et al., 2020). To leverage a condition $c$ and instead sample from $p(x_t|c)$, Dhariwal & Nichol (2021) propose *Classifier Guidance (CG)* that uses a pretrained classifier $p(c|x_t)$, forming:

$$\nabla_{x_{t}}\log p(x_{t}|c)=\nabla_{x_{t}}\log p(x_{t})+\nabla_{x_{t}}\log p(c|x_{t}),\tag{1}$$

according to Bayes rule. This leads to the following *Classifier Guidance* equation, with a scalar $\omega > 0$ controlling the amount of guidance towards the condition $c$:

$$\hat{\epsilon}_{\theta}(x_{t},c)=\epsilon_{\theta}(x_{t})+(\omega+1)\nabla_{x_{t}}\log p(c|x_{t}).\tag{2}$$

However, this requires training a noise-dependent classifier externally, which can be cumbersome and impractical for novel classes or more complex conditions, e.g. textual prompts. For this reason, with an implicit classifier from Bayes rule, $\nabla_{x_t}\log p(c|x_t) = \nabla_{x_t}\log p(x_t, c) - \nabla_{x_t}\log p(x_t)$, Ho & Salimans (2021) propose to train a diffusion network on the joint distribution of data and condition by replacing $\epsilon_\theta(x_t)$ with $\epsilon_\theta(x_t, c)$ in $\mathcal{L}_{\mathrm{simple}}$. By dropping the condition during training, they employ a single network for both $\nabla_{x_t}\log p(x_t, c)$ and $\nabla_{x_t}\log p(x_t)$. This gives the *Classifier-Free Guidance (CFG)*, also controlled by $\omega$:

$$\hat{\epsilon}_{\theta}(x_{t},c)=\epsilon_{\theta}(x_{t},c)+\omega\left(\epsilon_{\theta}(x_{t},c)-\epsilon_{\theta}(x_{t})\right).\tag{3}$$

We can reformulate the above two equations into two terms: a *generation* term $\epsilon_\theta(x_t) \propto \nabla_{x_t}\log p(x_t)$ and a *guidance* term $\nabla_{x_t}\log p(c|x_t)$. The guidance term can be derived either from a pre-trained classifier or an implicit one, with $\omega$ balancing between generation and guidance.

## 4 Towards Dynamic Guidance: Should Guidance Be Constant?
## 4 Towards Dynamic Guidance: Should Guidance Be Constant?

Our initial experiments show that removing guidance at certain timesteps can improve performance, in line with concurrent work (Kynkäänniemi et al., 2024). To further investigate this, we conducted a negative perturbation analysis to determine the impact of the guidance across all timesteps.

Negative Perturbation Analysis. This analysis is performed on the CIFAR-10 dataset: 60,000 images at 32 × 32 resolution, distributed across 10 classes. We choose the original DDPM method (Ho et al., 2020), denoising in pixel space, as the backbone, with class-conditional guidance. To investigate the importance of guidance across different timestep intervals, we first employ static guidance of ω = 1.15, then independently set the guidance to zero across different 50-timestep intervals (20 intervals in total, spanning all timesteps), and compute the FID over 50,000 generated images for each of these piece-wise zeroed guidance schedulers. Mathematically, this removal scheme approximates a family of parameterized gate/inverse-gate functions with two parameters defining the starting point s and the size of the kernel d: g(t) = 1 − (H(t − s) − H(t − (s + d))), where H is the Heaviside step function (a code sketch is given at the end of this section). The results are illustrated in Figure 5a. For example, the second data point on the left of the curve represents the FID performance when guidance is removed only in the interval t = [50, 100] while remaining static elsewhere. We observe multiple phenomena: (1) a non-constant guidance curve can outperform static guidance in terms of FID; (2) zeroing the guidance at earlier stages improves the FID performance; (3) zeroing the guidance at later stages significantly degrades it. However, this removal scheme is not practical for real usage for several reasons: (i) grid-searching two parameters requires generating a prohibitively costly number of images; (ii) as demonstrated in Section 6, parameterized methods often fail to generalize across models and tasks; (iii) further investigation, detailed in Appendix Section A, demonstrates that instead of completely removing CFG at some timesteps, keeping it with lower values increases the performance.

Dynamic guidance. Having observed that removing the guidance at certain timesteps improves the performance over using a *static* weight ω for CFG as in Ho & Salimans (2021); Dhariwal & Nichol (2021), we ask whether we can replace static guidance with other options. We therefore investigate a dynamic guidance scheduler ω(t) that evolves throughout the generation process, in line with some empirical schemes mentioned in recent literature (Blattmann et al., 2023; Chang et al., 2023; Donahue et al., 2018). In that case, the CFG Equation 3 is rewritten as follows:

$$\hat{\epsilon}_{\theta}(x_{t},c)=\epsilon_{\theta}(x_{t},c)+\omega(t)\left(\epsilon_{\theta}(x_{t},c)-\epsilon_{\theta}(x_{t})\right)\quad.\tag{4}$$

To identify an effective dynamic scheduler ω(t), we analyse two types of functions in the subsequent sections: parameter-free heuristic schedulers in Section 5 and single-parameter parameterized ones in Section 6.
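As an illustration, here is a minimal sketch of the gate scheduler g(t) above applied to a static weight; the interval and ω value mirror the experiment, but the code is a restatement of the formula rather than the experiment's actual implementation.

```python
import numpy as np

def heaviside(x):
    return (x >= 0).astype(float)

def gate(t, s, d):
    # g(t) = 1 - (H(t - s) - H(t - (s + d))):
    # guidance is switched off on [s, s + d) and kept static elsewhere.
    return 1.0 - (heaviside(t - s) - heaviside(t - (s + d)))

T = 1000
t = np.arange(T)
omega = 1.15
omega_t = omega * gate(t, s=50, d=50)  # zero guidance only on t in [50, 100)
print(omega_t[40], omega_t[60], omega_t[150])  # 1.15, 0.0, 1.15
```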
## 5 Dynamic Guidance: Heuristic Schedulers

We first use six simple heuristic schedulers as dynamic guidance ω(t), split into three groups depending on the shape of their curve: (a) increasing functions (linear, cosine); (b) decreasing functions (inverse linear, sine); (c) non-monotonic functions (linear V-shape, linear Λ-shape), defined as:

linear: ω(t) = 1 − t/T,
invlinear: ω(t) = t/T,
cosine: ω(t) = cos(πt/T) + 1,
sine: ω(t) = sin(πt/T − π/2) + 1,
V-shape: ω(t) = invlinear(t) if t < T/2, linear(t) else,
Λ-shape: ω(t) = linear(t) if t < T/2, invlinear(t) else.

To allow for a direct comparison between the effect of these schedulers and the static guidance ω, we normalize each scheduler by the area under the curve. This ensures that the same *amount of total guidance* is applied over the entire denoising process, and allows users to rescale the scheduler to obtain a behaviour similar to that of increasing ω in static guidance. More formally, this corresponds to the constraint $\int_{0}^{T}\omega(t)\,dt=\omega T$. For example, this normalization leads to the corresponding normalized linear scheduler ω(t) = 2(1 − t/T)ω. We show in Figure 5b (left) the different normalized curves of the 6 schedulers (see the code sketch below).

![6_image_0.png](6_image_0.png)

Figure 5: **Preliminary Analysis on CIFAR-10. (a) Negative perturbation**: setting the guidance scale to 0 across distinct intervals while preserving static guidance for the rest. Eliminating the weight at the **initial stage** (e.g., T = 800) lowers the FID, i.e., an improvement, whereas removing guidance at the later stages of generation leads to worse FID. **(b) The various heuristic curves** (left) with their FID vs. IS performances (right).

## 5.1 Class-Conditional Image Generation With Heuristic Schedulers

Heuristic Schedulers Analysis. We first study the 6 previously defined heuristic schedulers ω(t) on the CIFAR-10-DDPM setting for class-conditional synthesis, same as in Section 4. To assess the performance, we use the Fréchet Inception Distance (FID) and Inception Score (IS) metrics over 50,000 samples from 50-step DDIM (Song et al., 2020). In this experiment, we evaluate a range of total guidance weights, [1.1, 1.15, 1.2, 1.25, 1.3, 1.35], to study their influence on the image quality vs. class adherence trade-off. We show the results in Figure 5b, right panel, and observe that both increasing schedulers (linear and cosine) significantly improve over the static baseline, whereas decreasing schedulers (invlinear and sine) are significantly worse than the static one. The V-shape and Λ-shape schedulers perform respectively better and worse than the static baseline, but only marginally.

Preliminary Conclusion. Combined with the observation from Section 4 that removing guidance at the beginning stage improves the performance, these results point to the same conclusion: monotonically increasing guidance schedulers achieve improved performances, revealing that static CFG may primarily overshoot the guidance in the initial stages. In the remainder of this work, we only consider monotonically increasing schedulers, as we consider these findings sufficient to avoid examining all other schedulers on other tasks (more details in the Appx.; Figure 2 shows qualitative results on SDXL).
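The following is a minimal sketch of the six heuristic curves and the area normalization above; the grid size T and the weight ω are illustrative, and the V/Λ definitions follow the piecewise forms given in this section.

```python
import numpy as np

T = 1000
t = np.arange(T)  # timestep grid; sampling runs from t = T - 1 down to t = 0

# The six heuristic curves, as defined above (un-normalized).
schedulers = {
    "linear":       1 - t / T,
    "invlinear":    t / T,
    "cosine":       np.cos(np.pi * t / T) + 1,
    "sine":         np.sin(np.pi * t / T - np.pi / 2) + 1,
    "V-shape":      np.where(t < T / 2, t / T, 1 - t / T),
    "Lambda-shape": np.where(t < T / 2, 1 - t / T, t / T),
}

def normalize(curve, omega):
    # Enforce the area constraint int_0^T omega(t) dt = omega * T,
    # approximated on the unit-step grid by curve.sum().
    return curve * (omega * T / curve.sum())

for name, curve in schedulers.items():
    w_t = normalize(curve, omega=1.15)
    print(f"{name:>12}: mean weight = {w_t.mean():.3f}")  # = 1.15 for all
```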
Experiments on ImageNet. On ImageNet, we explore the linear and cosine schedulers that performed best on CIFAR-10. In Figure 6d, we observe that the linear and cosine schedulers lead to a significant improvement over the baseline, especially at higher guidance weights, enabling a better FID/Inception Score trade-off. More experiments in the Appx. lead to a similar conclusion.

## 5.2 Text-To-Image Generation With Heuristic Schedulers

We study the linear and the cosine scheduler on text-to-image generation. The results for all proposed heuristics are in Appx. Tables 12 and 14, where we observe a similar trend as before: heuristic functions with increasing shape report the largest gains on both SD1.5 and SDXL.

Dataset and Metrics. We use text-to-image models pre-trained on LAION (Schuhmann et al., 2022), which contains 5B high-quality images with paired textual descriptions. For evaluation, we use the COCO (Lin et al., 2014) val set with 30,000 text-image paired data. We use three metrics: (i) *Fréchet inception distance (FID)* for the fidelity of generated images; (ii) *CLIPScore (CS)* (Radford et al., 2021) to assess the alignment between the images and their corresponding text prompts; (iii) *Diversity (Div)* to measure the model's capacity to yield varied content. For this, we compute the standard deviation of image embeddings via Dino-v2 (Oquab et al., 2023) from multiple generations of the same prompt (more details on Diversity in the Appx.). We compute FID and CS for a sample set of 10,000 images against the COCO dataset in a zero-shot fashion (Rombach et al., 2022; Saharia et al., 2022). For diversity, we resort to two text-description subsets from COCO of 1,000 captions each, the *longest captions* and the *shortest captions* (-L and -S in Figure 6a), to represent varying descriptiveness levels; longer captions provide more specific conditions than shorter ones, presumably leading to less diversity. We produce 10 images for each prompt using varied sampling noise.

![7_image_0.png](7_image_0.png)

Figure 6: Class-conditioned and text-to-image generation results of monotonically-increasing heuristic schedulers (linear and cosine). **(a) FID and Div vs. CS for SD1.5** (Rombach et al., 2022). We highlight the gains in FID and CS compared with the default ω=7.5 with black arrows; the diversity panel on the right shows that heuristic guidance performs better than static baseline guidance. **(b) Our user study** also reveals that images generated with schedulers are consistently preferred over the baseline in realism, diversity and text alignment. **(c) Results for SDXL (Podell et al., 2023) on FID and Div vs. CS**, with a setup similar to (a). **(d) CIN-256 LDM** (Rombach et al., 2022), assessed with FID vs. IS. Heuristic schedulers outperform the baseline static guidance on fidelity and diversity across multiple models.

Model. We experiment with two models: (1) Stable Diffusion (SD) (Rombach et al., 2022), which uses the CLIP (Radford et al., 2021) text encoder to transform text inputs into embeddings. We use the public checkpoint of SD v1.5¹ and employ the DDIM sampler with 50 steps. (2) SDXL (Podell et al., 2023), a larger, advanced version of SD (Rombach et al., 2022), generating images with resolutions up to *1024* pixels. It leverages latent diffusion (Rombach et al., 2022) with larger U-Net architectures, an additional text encoder (OpenCLIP ViT-bigG), and other conditioning enhancements. We use the SDXL-base-1.0² (SDXL) version without refiner, sampling with DPM-Solver++ (Lu et al., 2022b) with *25 steps*.

Results. We show the FID vs. CS curves in Figures 6a and 6c for SD and SDXL, respectively (more tables in Appx. Section F).
We expect an optimal balance of a high CS and a low FID (i.e., the bottom-right corner).

¹https://huggingface.co/runwayml/stable-diffusion-v1-5
²https://github.com/Stability-AI/generative-models

Analysis on SD (Figure 6a). For FID vs. CS, the baseline (Rombach et al., 2022) yields inferior results compared to the linear and cosine heuristics, with linear recording the lower FID. The baseline's FID regresses quickly when CS is high, but is best when CS is low, i.e., at a low condition level. This regime, however, is usually not used in real applications; e.g., the recommended ω value is 7.5 for SD1.5, highlighted by the dotted line in Figure 6a, with the black solid arrows representing the gains of heuristic schedulers on FID and CS, respectively. For Div vs. CS, heuristic schedulers outperform the baseline (Rombach et al., 2022) on both short (S) and long (L) captions at different guidance scales. Also, cosine shows superiority across the majority of the CS range. Overall, heuristic schedulers achieve improved performance in FID and Diversity, recording a 2.71 (17%) gain on FID and a 0.004 (16% of the baseline's max CS − min CS range) gain on CS over the default ω=7.5 guidance in SD. Note that this gain is achieved *without* hyperparameter tuning or retraining.

Analysis on SDXL (Figure 6c). In FID, both the linear and cosine schedulers achieve a better FID-CS trade-off than the baseline (Podell et al., 2023). In Diversity, linear is slightly lower than cosine, and both are better than the static baseline. Additionally, unlike the baseline (blue curves), where higher guidance typically results in compromised FID, heuristic schedulers counter this.

User study. We present users with a pair of mosaics of 9 generated images and ask them to vote for the best in terms of realism, diversity and text-image alignment. Each pair compares static baseline generations against the cosine and linear schedulers. Figure 6b reports the results. We observe that over 60% of users consider scheduler-generated images more realistic and better aligned with the text prompt, while approximately 80% find the guidance schedulers' results more diverse. This corroborates our hypothesis that static weighting is perceptually inferior to dynamic weighting. More details in the Appx.

Qualitative results. Figure 3 depicts the fidelity of various sets of text-to-image generations from SD and SDXL. Heuristic schedulers (linear and cosine) enhance image fidelity: better details in the petals and leaves of the flower images, as well as in the texture of bird feathers. In the arches example, we observe more natural colour shading as well as more detailed figurines with reflective effects. Figure 4 showcases the diversity of outputs in terms of composition, colour palette, art style and image quality, refining shades and enriching textures. Notably, the teddy bear shows more varied compositions and better-coloured results than the baseline, which collapsed into similar compositions. Similarly, in the astronaut example, the baseline generates similar images while heuristic schedulers reach more diverse character gestures, lighting and compositions.

## 5.3 Findings With Heuristic Schedulers

In summary, we make the following observations: monotonically increasing heuristic schedulers (e.g., linear and cosine) (a) improve generation performances (IS/CS vs.
FID) over the static baseline on different models; (b) improve image fidelity (texture, details), diversity (composition, style) and quality (lighting, gestures). We note that this gain is achieved without hyperparameter tuning, retraining or extra computational cost.

## 6 Dynamic Guidance: Parametrized Schedulers

We investigate two parameterized schedulers with a tunable parameter to maximize performance: a power-cosine curve family (introduced in MDT (Gao et al., 2023)) and two clamping families (linear and cosine). The parameterized family of powered-cosine curves (pcs) and the clamping family (clamp) are defined by the controllable parameters s and c, respectively:

$$\omega_{t}=\frac{1-\cos\pi\left(\frac{T-t}{T}\right)^{s}}{2}\,\omega\quad\text{(pcs)}\tag{5}$$

$$\omega_{t}=\max\left(c,\ \omega(t)\right)\quad\text{(clamp)}\tag{6}$$

where ω(t) in Eq. (6) is an underlying heuristic scheduler (e.g., linear). In our work, we use clamp-linear, but this family can be extended to other schedulers (more in sup. mat.). Our motivation lies in our observation that excessive muting of guidance weights at the initial stages can compromise the structural integrity of prominent features. This contributes to bad FID at lower values of ω in Figure 6a, suggesting a trade-off between model guidance and image quality. However, reducing guidance intensity early in the diffusion process is also the origin of enhanced performances, as shown in Section 5. This family thus represents a trade-off between diversity and fidelity while giving users precise control (see the code sketch below).
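A minimal sketch of the two families in Eqs. (5) and (6); the grid T, total weight ω and parameter values are illustrative, and the clamp is applied to the area-normalized linear curve from Section 5, as in our clamp-linear variant.

```python
import numpy as np

T = 1000
t = np.arange(T)
omega = 7.5  # total guidance weight, illustrative

def pcs(t, T, omega, s):
    # Eq. (5): omega_t = (1 - cos(pi * ((T - t) / T)^s)) / 2 * omega.
    return (1 - np.cos(np.pi * ((T - t) / T) ** s)) / 2 * omega

def clamp_linear(t, T, omega, c):
    # Eq. (6) applied to the area-normalized linear curve of Section 5:
    # the weight is floored at c, so early steps keep a minimum guidance level.
    linear = 2 * (1 - t / T) * omega
    return np.maximum(c, linear)

print(pcs(t, T, omega, s=1)[[0, T - 1]])           # ~omega at t=0, ~0 at t=T-1
print(clamp_linear(t, T, omega, c=4)[[0, T - 1]])  # 2*omega at t=0, c at t=T-1
```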
![9_image_0.png](9_image_0.png)

Figure 7: **Qualitative comparison** of baseline, linear and clamp-linear on SDXL. Both dynamic schedulers are better than the baseline; clamp-linear with c=4 outperforms all, with better details and higher fidelity.

![9_image_4.png](9_image_4.png)

Figure 8: **Class-conditioned generation results of parameterized clamp-linear and pcs** on (a) CIFAR-10-DDPM and (b) CIN-256-LDM. Optimising the parameters improves performances, but these parameters do not generalize across models and datasets.

## 6.1 Class-Conditional Image Generation With Parametrized Schedulers

We experiment with two parameterized schedulers, clamp-linear and pcs, on CIFAR10-DDPM (Figure 8a) and ImageNet(CIN)-256-LDM (Figure 8b). We observe that, for both families, tuning the parameters improves results over the baseline and heuristic schedulers. The optimal parameters are c=1.1 for clamp-linear and s=4 for pcs on CIFAR10-DDPM, vs. c=1.005 for clamp-linear and s=2 for pcs on CIN-256. Overall, parameterized schedulers improve performances; however, the optimal parameters do not transfer across datasets and models.

![10_image_0.png](10_image_0.png)

Figure 9: **Text-to-image performance for two parameterized schedulers: clamp-linear and pcs**. For clamp-linear, (a) shows the guidance curves for different parameters and **(b, c)** display the FID vs. CS for SD1.5 and SDXL, respectively. For pcs, (d) shows the guidance curves and **(e, f)** depict the FID vs. CS. Optimal parameters for either clamp or pcs outperform the static baseline for both SD1.5 and SDXL.

## 6.2 Text-To-Image Generation With Parametrized Schedulers

We experiment with two parameterized schedulers, clamp-linear (clamp-cosine in sup. mat.) and pcs, with their guidance curves in Figures 9a and 9d, respectively. For SD1.5 (Rombach et al., 2022), the FID vs. CS results are depicted in Figures 9b and 9e. The pcs family struggles to achieve low FID, except when s = 1. Conversely, the clamp family exhibits optimal performance around c=2, achieving the best FID and CLIP-Score balance while outperforming all pcs values. For SDXL (Podell et al., 2023), the FID vs. CS results are depicted in Figures 9c and 9f. The pcs family shows its best performance at s = 0.1. Clamp-linear achieves its optimum at c = 4 (FID 18.2), largely improving FID across the entire CS range compared to the baseline (FID 24.9, about a 30% gain) and the linear scheduler. The optimal parameters of clamp-linear (resp. pcs) are not the same for both models, i.e., c=2 for SD1.5 and c=4 for SDXL (resp. s=1 and s=0.1 for pcs). This reveals the lack of generalizability of this family.

Qualitative results. The results in Figure 7 further underscore the significance of choosing the right clamping parameter. This choice markedly enhances generation performance, as evidenced by improved fidelity (e.g., in the images of a dog eating ice cream and a squirrel), textual comprehension (e.g., in the 'French Fries Big Ben' image), and attention to detail (e.g., in the 'Pikachu' image). Figure 10 compares the two parameterized families, clamp and pcs (Gao et al., 2023), where clamp performs best at c = 4 and pcs at s = 1. We observe that clamp-linear with c = 4 demonstrates better details (e.g., mug, alien), realism (e.g., car, storm), and more textured backgrounds (e.g., mug, car). Although s = 4 leads to the best pcs results for class-conditioned generation, in the text-to-image task pcs tends to over-simplify content, producing fuzzy images (e.g., mug) and deconstructed compositions. This highlights our argument that optimal parameters do not generalize across datasets or tasks.

![11_image_0.png](11_image_0.png)

Figure 10: **Qualitative results** for the parametrized schedulers clamp-linear and pcs. Overall, c=4 for clamp-linear gives the most visually pleasing results.

## 6.3 Findings With Parametrized Schedulers

Our observations are: (a) tuning the parametrized functions improves the performance on both generation tasks; (b) tuning the clamp family seems easier than the pcs family, as its performance shows fewer variations; and (c) the optimal parameters for one method do not generalize across different settings. Thus, specialized tuning is required for each model and task, leading to extensive grid searches and increased computational load.

## 7 Conclusion

We analyzed dynamic schedulers for the weight parameter in Classifier-Free Guidance by systematically comparing heuristic and parameterized schedulers. We experiment on two tasks (class-conditioned generation and text-to-image generation), several models (DDPM, SD1.5 and SDXL) and various datasets.

Discussion. Our findings are: (1) a simple monotonically increasing scheduler systematically improves the performance compared to a constant static guidance, at no extra computational cost and with no hyperparameter search; (2) parameterized schedulers, with tuned parameters per task, model and dataset, improve the results. They, however, do not generalize well to other models and datasets, as there is no universal parameter that suits all tasks. For practitioners who target state-of-the-art performances, we recommend searching or optimizing for the best clamping parameter. For those not willing to manually tune parameters per case, we suggest using heuristics, specifically linear or cosine.
## References

Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2022.

Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. eDiffi: Text-to-image diffusion models with an ensemble of expert denoisers. *arXiv preprint arXiv:2211.01324*, 2022.

Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. *arXiv preprint arXiv:2311.15127*, 2023.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In *Int. Conf. Learn. Represent.*, 2018.

Angela Castillo, Jonas Kohler, Juan C Pérez, Juan Pablo Pérez, Albert Pumarola, Bernard Ghanem, Pablo Arbeláez, and Ali Thabet. Adaptive guidance: Training-free acceleration of conditional diffusion models. *arXiv preprint arXiv:2312.12487*, 2023.

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. In *Int. Conf. on Machine Learning*, 2023.

Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2023.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. *Adv. Neural Inform. Process. Syst.*, 2021.

Anh-Dung Dinh, Daochang Liu, and Chang Xu. PixelAsParam: A gradient view on diffusion sampling with guidance. In *Int. Conf. on Machine Learning*. PMLR, 2023a.

Anh-Dung Dinh, Daochang Liu, and Chang Xu. Rethinking conditional diffusion sampling with progressive guidance. In *Adv. Neural Inform. Process. Syst.*, 2023b.

Chris Donahue, Julian McAuley, and Miller Puckette. Adversarial audio synthesis. In *Int. Conf. Learn. Represent.*, 2018.

Nicolas Dufour, David Picard, and Vicky Kalogeiton. SCAM! Transferring humans between images with semantic cross attention modulation. In *Eur. Conf. Comput. Vis.* Springer, 2022.

Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan. Masked diffusion transformer is a strong image synthesizer. In *Int. Conf. Comput. Vis.*, 2023.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Adv. Neural Inform. Process. Syst.*, 2014.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In *NeurIPS-W on Deep Generative Models and Downstream Applications*, 2021.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Adv. Neural Inform. Process. Syst.*, 2020.

Nandakishore Kambhatla and Todd K Leen. Dimension reduction by local principal component analysis. *Neural Computation*, 1997.

Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up GANs for text-to-image synthesis. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2023a.

Minki Kang, Dongchan Min, and Sung Ju Hwang. Grad-StyleSpeech: Any-speaker adaptive text-to-speech synthesis with diffusion models. In *IEEE Int. Conf. on Acoustics, Speech and Signal Processing*, 2023b.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes.
*stat*, 2014.

Tuomas Kynkäänniemi, Miika Aittala, Tero Karras, Samuli Laine, Timo Aila, and Jaakko Lehtinen. Applying guidance in a limited interval improves sample and distribution quality in diffusion models. *arXiv preprint arXiv:2404.07724*, 2024.

Alexander C Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, and Deepak Pathak. Your diffusion model is secretly a zero-shot classifier. In *Int. Conf. Comput. Vis.*, 2023.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In *Eur. Conf. on Comput. Vis.* Springer, 2014.

Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. *Adv. Neural Inform. Process. Syst.*, 2022a.

Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. *arXiv preprint arXiv:2211.01095*, 2022b.

Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. VideoFusion: Decomposed diffusion models for high-quality video generation. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2023.

Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. *arXiv preprint arXiv:1511.02793*, 2015.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In *Int. Conf. on Machine Learning*, 2021.

Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. In *Int. Conf. on Machine Learning*. PMLR, 2022.

Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning robust visual features without supervision. *arXiv preprint arXiv:2304.07193*, 2023.

Pablo Pernias, Dominic Rampas, and Marc Aubreville. Wuerstchen: Efficient pretraining of text-to-image models. *arXiv preprint arXiv:2306.00637*, 2023.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. *arXiv preprint arXiv:2307.01952*, 2023.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *Int. Conf. on Machine Learning*. PMLR, 2021.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Pranav Shyam, Pamela Mishkin, Bob McGrew, and Ilya Sutskever. Hierarchical text-conditional image generation with CLIP latents. *arXiv preprint arXiv:2204.06125*, 2022.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In *Int. Conf. on Machine Learning*. PMLR, 2016.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2022.

Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.
DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In *IEEE Conf. Comput. Vis. Pattern Recog.*, 2023.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. *Adv. Neural Inform. Process. Syst.*, 2022.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. LAION-5B: An open large-scale dataset for training next generation image-text models. *Adv. Neural Inform. Process. Syst.*, 2022.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *Int. Conf. Learn. Represent.*, 2020.

Wentian Zhang, Haozhe Liu, Jinheng Xie, Francesco Faccio, Mike Zheng Shou, and Jürgen Schmidhuber. Cross-attention makes inference cumbersome in text-to-image diffusion models. *arXiv preprint arXiv:2404.02747*, 2024.

Guangcong Zheng, Shengming Li, Hui Wang, Taiping Yao, Yang Chen, Shouhong Ding, and Xi Li. Entropy-driven sampling and training scheme for conditional diffusion generation. In *Eur. Conf. on Comput. Vis.* Springer, 2022.

## Appendix

In this appendix, we provide additional content covering: (A) an ablation study demonstrating the necessity of CFG at all time intervals; (B) a toy example explaining the mechanism and rationale of the dynamic weighted scheduler; (C) an additional comparison of parameterized function-based dynamic schedulers; (D) more qualitative results; (E) ablation experiments on different aspects of dynamic weighting schedulers; (F) tables listing all results; (G) the detailed design of the user study.

## A The Necessity Of CFG At All Time Intervals

Recent concurrent work (Kynkäänniemi et al., 2024; Zhang et al., 2024; Castillo et al., 2023) has suggested that partially removing the classifier-free guidance (CFG) (at the beginning or ending) could enhance generation performance or accelerate sampling with minimal performance impact. For instance, Kynkäänniemi et al. (2024) proposes removing the initial and final timesteps of CFG, retaining only a middle interval for the guidance process. In this section, we conduct two ablation studies on SD1.5 to confirm that CFG can be required across all time intervals.

![15_image_0.png](15_image_0.png)

Figure 11: **Comparison between guidance removal (labelled as z) and clamping (labelled as c)** on SD1.5. The CFG removal scheme shows improvement compared to the static guidance baseline (black) but is worse than the linear guidance scheduler (blue solid) and the clamping schemes (red dotted).

The necessity of CFG at the beginning stage. As demonstrated in Figure 5a, we explore the impact of negative perturbation, and we ablate all heuristic functions in Figure 10. Generally, employing a lower guidance level in the initial stages can enhance performance compared to static guidance. To analyse the effectiveness of guidance removal (setting the weight to zero), static guidance, the linear scheduler, and the clamping scheme, we conducted experiments on SD1.5.
Guidance is removed at the same timestep as the clamping transition point; rather than clamping the guidance to a hyperparameter constant, we reduce it completely to zero (Figure 11, left two panels). The results, depicted in Figure 11, right panel, show that while guidance removal at the beginning stage indeed improves performance compared to the static baseline, both the linear scheduler and the clamping schemes achieve a better balance of FID vs. CLIP-Score (CS).

The necessity of CFG at the ending stage. Zhang et al. (2024); Castillo et al. (2023) suggest that removing the final stage of guidance could accelerate generation by directly replacing the CFG output with the conditional or unconditional outputs. However, as shown in Figure 5a, our analysis indicates that removing this stage can reduce performance for specific tasks. Despite this, the possibility of safely removing the ending-stage guidance does not contradict our argument that **enhancing the end could improve performance**. To further confirm this, we conducted an ablation experiment comparing the effects of removing versus boosting the final guidance intervals (see the code sketch below). In this experiment, 10%, 20%, and 30% of the ending guidance were either removed or increased by a factor of 1.5. The results, presented in Table 1, reveal: (i) removing or boosting the ending guidance has a marginal impact on the CLIP-Score; (ii) eliminating the guidance can lead to a regression in performance; and (iii) boosting the guidance can significantly enhance FID, with gains of 0.54 and 0.8 in FID when boosting the final 30% of the guidance by 1.5×.

| Guidance scale | ω=8 | | ω=11 | |
| Method | Clip-Score | FID | Clip-Score | FID |
| static | 0.2775 | 16.78 | 0.2792 | 19.03 |
| remove final 10% | 0.2777 | 17.61 | 0.2792 | 19.94 |
| remove final 20% | 0.2777 | 18.33 | 0.2792 | 20.68 |
| remove final 30% | 0.2777 | 19.18 | 0.2792 | 21.65 |
| boost (1.5×) final 10% | 0.2773 | 16.75 | 0.2789 | 18.39 |
| boost (1.5×) final 20% | 0.2772 | 16.51 | 0.2787 | 18.96 |
| boost (1.5×) final 30% | 0.2772 | 16.24 | 0.2790 | 18.23 |

Table 1: Impact of removing/boosting CFG at the end with SD1.5.

In conclusion, based on the results of the two previous ablation studies, we confirm that **an adequate level of guidance is necessary at all intervals of the generation process**. While removing parts of the guidance can accelerate the process, it results in underperformance compared to the monotonically increasing heuristic guidance schedulers we analysed, such as linear, and also compared to well-tuned parameterized functions, such as the clamping method.
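For clarity, a minimal sketch of the remove-vs-boost edit used in Table 1; the fraction and factor values mirror the ablation, while the static weight and grid size are illustrative.

```python
import numpy as np

def edit_final_fraction(omega_t, frac, factor):
    # Scale the final `frac` of the generation (lowest timesteps) by `factor`:
    # factor = 0 removes the guidance there, factor = 1.5 boosts it.
    out = omega_t.copy()
    k = int(len(out) * frac)
    out[:k] *= factor  # index 0 corresponds to t = 0, the end of sampling
    return out

T = 1000
static = np.full(T, 8.0)  # static guidance omega = 8
removed = edit_final_fraction(static, frac=0.30, factor=0.0)  # remove final 30%
boosted = edit_final_fraction(static, frac=0.30, factor=1.5)  # boost (1.5x) final 30%
print(removed[0], boosted[0], boosted[-1])  # 0.0, 12.0, 8.0
```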
## B A Toy Example Of Fidelity Vs Condition Adherence

We know that the equation of CFG can be written as a combination of a *generation* term and a *guidance* term, with the second term controlled by the guidance weight ω:

$$\hat{\epsilon}_{\theta}(x_{t},c)=\epsilon_{\theta}(x_{t},c)+\omega\left(\epsilon_{\theta}(x_{t},c)-\epsilon_{\theta}(x_{t})\right)\quad.\tag{7}$$

To better understand the problems in diffusion guidance, we present a toy example, where we first train a diffusion model on a synthetic dataset of 50,000 images (32 × 32) drawn from two distinct Gaussian distributions: one sampled with low intensity values (dark noisy images in the bottom-left of Figure 12), and the other with high intensities (bright noisy images). The top-left part of Figure 12 shows the PCA (Kambhatla & Leen, 1997)-visualised distribution of the two sets, and the bottom-left part shows some ground-truth images. To fit these two labelled distributions, we employ DDPM (Ho et al., 2020) with CFG (Ho & Salimans, 2021) conditioned on intensity labels.

![16_image_0.png](16_image_0.png)

Figure 12: **Two-Gaussians Example.** We employ DDPM with CFG to fit two Gaussian distributions, a bright one (red) and a darker one (blue). The middle panel showcases samples of generation trajectories at different guidance scales ω, using PCA visualization. Increasing the guidance scale ω raises two issues: repeated trajectory (when ω=50, the generation diverges from its expected direction before converging again) and shaky motion (when ω=100, some trajectories wander aimlessly).

Upon completion of the training, we can adjust the guidance scale ω to balance between sample fidelity and condition adherence, as illustrated in the right part of Figure 12. The first row depicts the variations in the generated distributions for different ω (from 0 to 100), visualized with the same PCA parameters. The second row shows the entire diffusion trajectory for sampled data points (same seeds across different ω): progressing from a random sample (*i.e.*, standard Gaussian) when t = T to the generated data (blue or red in Figure 12) when t = 0.

Emerging issues and explainable factors. As ω increases, the two generated distributions diverge due to the *guidance* term in Eq. 7 shifting the generation towards different labels at a fidelity cost (see Figure 12, first row). As shown in Figure 12 (second row), two issues arise: (i) *repeated trajectories* that diverge from the expected convergence path before redirecting to it; and (ii) *shaky motions* that wander along the trajectory.

These two issues can be attributed to two factors: (1) incorrect classification predictions, and (2) conflicts between the *guidance* and *generation* terms in Eq. 7. For the former, optimal guidance requires a *flawless* classifier, whether explicit for CG or implicit for CFG. In reality, discerning between two noisy data points is challenging, and incorrect classification may steer the generation in the wrong direction, generating shaky trajectories. A similar observation is reported in Zheng et al. (2022); Dinh et al. (2023b) for CG and in Li et al. (2023) for CFG. For the latter, due to the strong incentive of the classifier to increase the distance with respect to the other classes, trajectories often show a U-turn before gravitating to convergence (the repeated trajectory in Figure 12). We argue that this anomaly is due to the conflict between the *guidance* and *generation* terms in Eq. 7. In conclusion, along the generation, the guidance can steer suboptimally (especially when t → T), and even impede generation. We argue that these **erratic behaviours** contribute to the performance dichotomy between fidelity and condition adherence (Ho & Salimans, 2021; Dhariwal & Nichol, 2021).

![17_image_0.png](17_image_0.png)

Figure 13: **Class-conditioned image generation results of the two parameterized families (clamp-linear, clamp-cosine and pcs) on CIFAR-10 and CIN-256**. Optimising the parameters of the guidance results in performance gains; however, these parameters do not generalize across models and datasets.
## C Comparison Of Parameterized Schedulers

## C.1 Parameterized Comparison On Class-Conditioned Generation

For CIFAR-10-DDPM, we show in Figure 13, upper panels, the comparison of two parameterized function families: (i) the clamp family on linear and cosine, and (ii) the pcs family of Gao et al. (2023) (see all data in Tables 5, 6 and 7). The ImageNet-256 Latent Diffusion Model (LDM) results are presented in Figure 13, lower panels (data in Tables 9, 10 and 11).

![18_image_0.png](18_image_0.png)

Figure 14: **Text-to-image generation FID and diversity of the two parameterized families (clamp, with clamp-linear and clamp-cosine, and pcs) on SD1.5** (left to right): **(a) parameterized scheduler curves**; **(b) FID vs. CS of SD1.5**; and **(c) FID vs. Div. of SD1.5.** In terms of diversity, the clamp family still achieves more diverse results than the baseline, though diversity decreases as the clamping parameter grows, as the beginning stage of the diffusion is muted.

The conclusions of this part are as follows: (i) optimising both groups of parameterized functions helps improve the FID-CS performance; (ii) the optimal parameters for different models are very different and fail to generalize across models and datasets.

## C.2 Parameterized Comparison On Text-To-Image Generation

We then show the FID vs. CS and Diversity vs. CS performance of the parameterized methods in Figure 14. The conclusion is coherent with the main paper: all parameterized functions can enhance performance on both FID and diversity, provided that the parameters are well selected. Moreover, for the clamp family, the clamp parameter also adjusts the degree of diversity of the generated images: lowering the clamp parameter increases the diversity. We recommend that users tune this parameter according to the specific model and task. For SDXL, clamp-cosine is shown in Figure 15 and reaches a similar conclusion.

![19_image_0.png](19_image_0.png)

Figure 15: **Text-to-image generation results of the two parameterized families (clamp-linear, clamp-cosine and pcs) on SDXL.** Both clamps reach their best FID-CS at c = 4 vs. s = 0.1 for pcs, which differs from the optimal parameters for SD1.5.

## D Qualitative Results

More Results of Parameterized Functions on SDXL. In Figure 16, we show more examples of different parameterized functions. Carefully selecting the parameter (c = 4), especially for the clamp-linear method, achieves improvements in image quality in terms of composition (e.g., template and guitar), detail (e.g., cat), and realism (e.g., dog statue). However, for SDXL, the pcs family shows only marginal improvements and tends to produce images with incorrect structures and compositions, leading to fuzzy images.

Stable Diffusion v1.5. Figure 17 shows qualitative results of the increasing-shaped methods, linear and cosine, compared against the baseline. It clearly shows that the increasing-shaped heuristic guidance generates more diversity, whereas the baseline suffers from a collapsing problem, i.e., different samplings of the same prompt tend to generate similar results. In some figures, e.g., the mailbox example in Figure 17, we can see that the baseline ignores the graffiti, while the increasing heuristic guidance methods correctly retrieve this information and illustrate it in the generated images. We also see in the M&M's example that the heuristic guidance methods show more diversity in terms of colour and materials, with much richer variance and image composition.
However, some negative examples can also be found in Figure 17, in particular the feet of the horses for the prompt *a person riding a horse while the sun sets*. We posit that these artefacts are due to over-muting the initial stage and overshooting the final stage during the generation, which can be rectified by the clamping method.

SDXL. SDXL (Podell et al., 2023) shows better diversity and image quality compared to its predecessor. However, some repetitive concepts are still present in the generated results: in the first row of Figure 18, *"A single horse leaning against a wooden fence"*, the baseline method generates only brown horses, whereas all heuristic methods give a variety of horse colours. A similar repetitive concept can also be found in *"A person stands on water skies in the water"* with the colour of the character. For the diversity of spatial composition, please refer to the example in Figure 19, *"A cobble stone courtyard surrounded by buildings and clock tower"*, where we see that the heuristic methods yield more varied view angles and spatial compositions. Similar behaviour can be found in the *"bowl shelf"* and *"teddy bear"* examples in Figure 18.

![20_image_0.png](20_image_0.png)

Figure 16: Qualitative comparison of the clamp vs. pcs families; we see clearly that clamping at c = 4 gives the best visual qualitative results.

![21_image_0.png](21_image_0.png)

Figure 17: Qualitative results on SD1.5.

![22_image_0.png](22_image_0.png)

Figure 18: Qualitative results on SDXL (1).

![23_image_0.png](23_image_0.png)

Figure 19: Qualitative results on SDXL (2).

## E Ablation On Robustness And Generalization

Different DDIM steps. The DDIM sampler allows for accelerated sampling (e.g., 50 steps as opposed to 1000) with only a marginal compromise in generation performance. In this ablation study, we evaluate the effectiveness of our dynamic weighting schedulers across different numbers of sampling steps. We use the CIN-256-LDM codebase, with the same configuration as our prior class-conditioned generation experiments. We conduct tests with 50, 100, and 200 steps, for the baseline and two heuristics (linear and cosine), all operating at their optimal guidance scale from Table 8. The results, FID vs. IS for each sampling step, are presented in Table 2. We observe that the performance of dynamic weighting schedulers remains stable across different numbers of steps.

| | baseline (static) | | linear | | cosine | |
| steps | FID↓ | IS↑ | FID↓ | IS↑ | FID↓ | IS↑ |
| 50 | 3.393 | 220.6 | 3.090 | 225.0 | 2.985 | 252.4 |
| 100 | 3.216 | 229.8 | 2.817 | 225.2 | 2.818 | 255.3 |
| 200 | 3.222 | 229.5 | 2.791 | 223.2 | 2.801 | 254.3 |

Table 2: **Ablation on DDIM sampling steps.** Experiment on CIN-256 with the Latent Diffusion Model.

Different Solvers. To validate the generalizability of our proposed method beyond the DDIM (Song et al., 2020) sampler used in the experiments section, we further evaluate its performance using the more advanced DPM-Solver (Lu et al., 2022a) sampler (3rd order). This sampler facilitates diffusion generation with fewer steps and enhanced efficiency compared to DDIM. The experimental setup is similar to the text-to-image generation setting using Stable Diffusion (Rombach et al., 2022) v1.5. The results of this experiment are reported in Table 3 and visually illustrated in Figure 20.
![24_image_0.png](24_image_0.png)

Figure 20: **FID vs. CLIP-Score** generated by SD1.5 (Rombach et al., 2022) with DPM-Solver (Lu et al., 2022a).

| Method | Metric | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 20 |
| baseline (static) | clip-score | 0.2287 | 0.2692 | 0.2746 | 0.2767 | 0.2782 | 0.2791 | 0.2797 | 0.2802 | 0.2805 |
| baseline (static) | FID | 28.188 | 10.843 | 13.696 | 16.232 | 17.933 | 19.136 | 19.930 | 20.538 | 21.709 |
| linear | clip-score | 0.2287 | 0.2646 | 0.2713 | 0.2743 | 0.2762 | 0.2774 | 0.2785 | 0.2792 | 0.2813 |
| linear | FID | 28.188 | 13.032 | 11.826 | 12.181 | 12.830 | 13.461 | 13.984 | 14.541 | 15.943 |
| cosine | clip-score | 0.2287 | 0.2643 | 0.2712 | 0.2741 | 0.2762 | 0.2778 | 0.2789 | 0.2797 | 0.2812 |
| cosine | FID | 28.188 | 12.587 | 11.810 | 12.400 | 13.197 | 13.968 | 14.717 | 15.366 | 16.901 |

Table 3: FID and CLIP-Score (for guidance weights w) generated by Stable Diffusion v1.5 with DPM-Solver (Lu et al., 2022a).

As depicted in Figure 20, our proposed methods continue to outperform the baseline (static guidance) approach, with substantial improvements in both FID and CLIP-Score compared to, e.g., the baseline at w=7.5. Notably, these gains become more pronounced as the guidance weight increases, a trend that remains consistent with all other experiments in the paper.

Diversity. Diversity plays a pivotal role in text-based generation tasks. Given similar text-image matching levels (usually indicated by CLIP-Score), higher diversity gives users more choices of generated content. Most applications require higher diversity to prevent the undesirable phenomenon of content collapsing, where multiple samplings of the same prompt yield nearly identical or very similar results. We utilize the standard deviation within the image embedding space as a measure of diversity. This metric can be derived using models such as Dino-v2 (Oquab et al., 2023) or CLIP (Radford et al., 2021). Figure 21 provides a side-by-side comparison of diversities computed using both Dino-v2 and CLIP; numerical results are also reported in Table 16. It is evident that Dino-v2 yields more discriminative results than the CLIP embedding. While both embeddings exhibit similar trends, we notice that CLIP occasionally produces a narrower gap between long captions (-L) and short captions (-S). In some instances, as depicted in Figure 21, CLIP even reverses the order, an observation not apparent with the Dino-v2 model. In both cases, our methods consistently outperform the baseline on both metrics.

![25_image_0.png](25_image_0.png)

Figure 21: **Experiment on Stable Diffusion with two types of diversity.** Zero-shot COCO 10k CLIP-Score vs. Diversity computed by CLIP and Dino-v2, respectively.
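A minimal sketch of this diversity measure; `embed` is a hypothetical stand-in for a real feature extractor and is not part of any library API, and the embedding dimension is illustrative.

```python
import numpy as np

def diversity(embeddings):
    # Diversity of one prompt: standard deviation of the image embeddings
    # across generations, averaged over the embedding dimensions.
    return np.std(embeddings, axis=0).mean()

def embed(images):
    # Hypothetical stand-in for a real feature extractor (e.g., a DINOv2 or
    # CLIP image encoder applied to the generated images).
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(images), 768))

images_for_prompt = [f"img_{i}" for i in range(10)]  # 10 generations per prompt
print(f"diversity = {diversity(embed(images_for_prompt)):.3f}")
```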
## F Detailed Table Of Experiments

In this section, we show the detailed tables of all experiments relevant to the paper:

- **CIFAR-10-DDPM:** results of the different heuristic shapes (Table 4) and of the parameterized methods (Tables 5, 6 and 7);
- **CIN (ImageNet) 256-LDM:** results of the different heuristic shapes (Table 8) and of the parameterized methods (Tables 9, 10 and 11);
- **Stable Diffusion 1.5:** results of the different heuristic shapes in Table 12 and of the parameterized methods in Table 13;
- **Stable Diffusion XL:** results of the different heuristic shapes in Table 14 and of the parameterized methods in Table 15.

Table 4: **Experiment of different Heuristics on CIFAR-10 DDPM.** We evaluate the FID and IS results of 50K images for the baseline and all heuristic methods (green for increasing, red for decreasing and purple for non-linear). The best FID and IS are highlighted. The increasing shapes clearly outperform all the others.

| | baseline (static) | | linear | | cos | | invlinear | | sin | | Λ-shape | | V-shape | |
| Guidance Scale | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS |
| 1.10 | 2.966 | 9.564 | 2.893 | 9.595 | 2.875 | 9.606 | 3.033 | 9.554 | 3.068 | 9.550 | 3.017 | 9.615 | 3.005 | 9.550 |
| 1.15 | 2.947 | 9.645 | 2.853 | 9.666 | 2.824 | 9.670 | 3.050 | 9.628 | 3.086 | 9.612 | 3.040 | 9.698 | 2.954 | 9.596 |
| 1.20 | 2.971 | 9.690 | 2.854 | 9.729 | 2.813 | 9.726 | 3.106 | 9.643 | 3.149 | 9.645 | 3.119 | 9.738 | 2.928 | 9.644 |
| 1.25 | 3.025 | 9.733 | 2.897 | 9.799 | 2.850 | 9.794 | 3.192 | 9.675 | 3.261 | 9.660 | 3.251 | 9.746 | 2.930 | 9.677 |
| 1.30 | 3.111 | 9.764 | 2.968 | 9.833 | 2.933 | 9.838 | 3.311 | 9.689 | 3.389 | 9.664 | 3.407 | 9.774 | 2.951 | 9.725 |
| 1.35 | 3.233 | 9.787 | 3.062 | 9.872 | 3.026 | 9.882 | 3.460 | 9.700 | 3.543 | 9.678 | 3.606 | 9.804 | 2.985 | 9.763 |

Table 5: **Experiment of clamp-linear on CIFAR-10 DDPM.** We evaluate the FID and IS results of 50K images for the baseline and the parameterized clamp-linear method. The best FID and IS are highlighted; the optimal parameter appears to be c = 1.1.

| | baseline (static) | | linear | | linear (c=1.05) | | linear (c=1.1) | | linear (c=1.15) | |
| Guidance Scale | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS |
| 1.10 | 2.966 | 9.564 | 2.893 | 9.595 | 2.852 | 9.622 | 2.856 | 9.638 | 2.876 | 9.647 |
| 1.15 | 2.947 | 9.645 | 2.853 | 9.666 | 2.816 | 9.693 | 2.793 | 9.696 | 2.832 | 9.693 |
| 1.20 | 2.971 | 9.690 | 2.854 | 9.729 | 2.822 | 9.757 | 2.820 | 9.755 | 2.834 | 9.750 |
| 1.25 | 3.025 | 9.733 | 2.897 | 9.799 | 2.863 | 9.809 | 2.863 | 9.809 | 2.863 | 9.809 |
| 1.30 | 3.111 | 9.764 | 2.968 | 9.833 | 2.929 | 9.870 | 2.922 | 9.863 | 2.929 | 9.867 |
| 1.35 | 3.233 | 9.787 | 3.062 | 9.872 | 3.025 | 9.913 | 3.021 | 9.910 | 3.018 | 9.908 |

Table 6: **Experiment of clamp-cosine on CIFAR-10 DDPM.** We evaluate the FID and IS results of 50K images for the baseline and the parameterized clamping of the cosine increasing heuristic (clamp-cosine). The best FID and IS are highlighted. Optimising the clamping parameter helps improve the FID-IS performance; the optimal parameter appears to be c = 1.05.
baseline (static) cos cos (c=1.05) cos (c=1.1) cos (c=1.15) Guidance Scale FID IS FID IS FID IS FID IS FID IS 1.10 2.966 9.564 2.875 9.606 2.824 9.632 2.839 9.651 2.963 9.633 1.15 **2.947** 9.645 2.824 9.670 2.781 9.712 2.794 9.710 2.917 9.689 1.20 2.971 9.690 **2.813** 9.726 **2.771** 9.781 **2.786** 9.774 **2.901** 9.753 1.25 3.025 9.733 2.850 9.794 2.810 9.828 2.819 9.823 2.913 9.821 1.30 3.111 9.764 2.933 9.838 2.880 9.884 2.888 9.885 2.976 9.865 1.35 3.233 9.787 3.026 9.882 2.963 9.933 2.969 9.941 3.052 9.923 | baseline (static) | pcs (s=4) | pcs (s=2) | | pcs (s=1) | pcs (s=0.1) | | | | | | |---------------------|-------------|-------------|-------|-------------|---------------|-------|-------|-------|-------|-------| | Guidance Scale | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | | 1.10 | 2.966 | 9.564 | 2.920 | 9.600 | 2.969 | 9.614 | 2.875 | 9.606 | 3.010 | 9.572 | | 1.15 | 2.947 | 9.645 | 2.818 | 9.663 | 2.886 | 9.670 | 2.824 | 9.670 | 2.983 | 9.657 | | 1.20 | 2.971 | 9.690 | 2.748 | 9.726 | 2.844 | 9.729 | 2.813 | 9.726 | 3.010 | 9.706 | | 1.25 | 3.025 | 9.733 | 2.714 | 9.782 | 2.839 | 9.782 | 2.850 | 9.794 | 3.065 | 9.733 | | 1.30 | 3.111 | 9.764 | 2.700 | 9.834 | 2.858 | 9.847 | 2.933 | 9.838 | 3.157 | 9.770 | | 1.35 | 3.233 | 9.787 | 2.711 | 9.885 | 2.902 | 9.889 | 3.026 | 9.882 | 3.276 | 9.786 | | baseline | linear | cos | invlinear | sin | Λ-shape | V-shape | | | | | | | | | |------------|----------|-------|-------------|-------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|-------| | guidance | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | | 1.4 | 4.117 | 181.2 | 4.136 | 178.3 | 4.311 | 175.4 | 4.323 | 180.7 | 4.405 | 180.2 | 3.444 | 207.8 | 6.118 | 146.2 | | 1.6 | 3.393 | 225.0 | 3.090 | 220.6 | 3.083 | 216.2 | 3.974 | 222.7 | 4.176 | 221.7 | 3.694 | 256.5 | 4.450 | 176.8 | | 1.8 | 3.940 | 260.8 | 3.143 | 257.5 | 2.985 | 252.4 | 4.797 | 257.3 | 5.087 | 254.8 | 4.922 | 294.9 | 3.763 | 206.1 | | 2.0 | 5.072 | 291.4 | 3.858 | 288.9 | 3.459 | 283.3 | 6.085 | 284.2 | 6.398 | 281.2 | 6.517 | 324.8 | 3.806 | 232.2 | | 2.2 | 6.404 | 315.8 | 4.888 | 315.1 | 4.256 | 310.1 | 7.517 | 306.9 | 7.835 | 303.4 | 8.164 | 346.2 | 4.293 | 255.7 | | 2.4 | 8.950 | 335.9 | 6.032 | 336.5 | 5.215 | 331.2 | 8.978 | 325.5 | 9.291 | 321.3 | 9.664 | 362.9 | 5.051 | 277.0 | | | baseline | linear | linear (c=1.005) | linear (c=1.1) | linear (c=1.3) | | | | | | |----------|------------|----------|--------------------|------------------|------------------|-------|------|-------|------|-------| | guidance | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | | 1.4 | 4.12 | 181.2 | 4.14 | 178.3 | 4.16 | 177.8 | 4.18 | 178.1 | 3.95 | 184.6 | | 1.6 | 3.39 | 225.0 | 3.09 | 220.6 | 3.06 | 219.6 | 3.13 | 219.2 | 3.14 | 222.7 | | 1.8 | 3.94 | 260.8 | 3.14 | 257.5 | 3.18 | 255.9 | 3.18 | 257.2 | 3.24 | 259.0 | | 2.0 | 5.07 | 291.4 | 3.86 | 288.9 | 3.88 | 287.0 | 3.86 | 288.7 | 3.92 | 289.6 | | 2.2 | 6.40 | 315.8 | 4.89 | 315.1 | 4.91 | 312.4 | 4.87 | 313.8 | 4.92 | 314.9 | | 2.4 | 8.95 | 335.9 | 6.03 | 336.5 | 6.00 | 334.3 | 5.97 | 336.8 | 6.01 | 337.2 | Table 7: **Experiment of pcs family on CIFAR-10 DDPM.** We evaluate the FID and IS results for the baseline, parameterized pcs method of 50K image FID. Best FID and IS are highlighted. It sees the optimising clamping parameter helps to improve the FID-IS performance, the optimal parameter seems at s = 4. 
Table 8: **Experiment of different Heuristics on CIN-256-LDM.** We evaluate the FID and IS results for the baseline, all heuristic methods (green for increasing, red for decreasing and purple for non-linear) of 50K images FID. Best FID and IS are highlighted. We see clearly that the increasing shapes outperform all the others. Table 9: **Experiment of clamp-linear family on CIN-256-LDM.** We evaluate the FID and IS results for the baseline, parameterized clamp-linear on 50K images FID. Best FID and IS are highlighted. It sees the optimising parameter helps to improve the FID-IS performance, the optimal parameter seems at c = 1.005. Table 10: **Experiment of clamp-cosine family on CIN-256-LDM.** We evaluate the FID and IS results for the baseline, parameterized method of clamp-cosine method on 50K images. Best FID and IS are highlighted. It sees the optimising parameter helps to improve the FID-IS performance, the optimal parameter seems at c = 1.005. | baseline | | cosine | cosine (c=1.005) | | cosine (c=1.1) | cosine (c=1.3) | | | | | |------------|------|----------|--------------------|-------|------------------|------------------|------|-------|------|-------| | guidance | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | | 1.4 | 4.12 | 181.24 | 4.31 | 175.4 | 4.24 | 176.0 | 4.24 | 177.1 | 3.82 | 188.2 | | 1.6 | 3.39 | 224.96 | 3.08 | 216.2 | 3.06 | 217.0 | 3.08 | 217.1 | 3.09 | 224.6 | | 1.8 | 3.94 | 260.85 | 2.98 | 252.4 | 2.91 | 251.8 | 3.01 | 253.2 | 3.13 | 258.4 | | 2.0 | 5.07 | 291.37 | 3.46 | 283.3 | 3.47 | 282.5 | 3.48 | 284.1 | 3.67 | 288.2 | | 2.2 | 6.40 | 315.84 | 4.26 | 310.1 | 4.27 | 307.9 | 4.28 | 310.5 | 4.49 | 313.1 | | 2.4 | 8.95 | 335.86 | 5.22 | 331.2 | 5.23 | 329.7 | 5.24 | 331.3 | 5.44 | 334.1 | | | baseline | pcs (s=4) | pcs (s=2) | | pcs (s=1) | pcs (s=0.1) | | | | | |----------|------------|-------------|-------------|--------|-------------|---------------|------|--------|------|--------| | guidance | FID | IS | FID | IS | FID | IS | FID | IS | FID | IS | | 1.4 | 4.12 | 181.24 | 6.94 | 144.98 | 6.10 | 150.49 | 4.31 | 175.40 | 4.09 | 181.00 | | 1.6 | 3.39 | 224.96 | 5.69 | 162.99 | 4.27 | 180.52 | 3.08 | 216.21 | 3.43 | 225.31 | | 1.8 | 3.94 | 260.85 | 4.80 | 179.71 | 3.29 | 208.86 | 2.98 | 252.37 | 3.96 | 264.03 | | 2.0 | 5.07 | 291.37 | 4.18 | 195.75 | 2.88 | 234.09 | 3.46 | 283.32 | 5.08 | 294.77 | | 2.2 | 6.40 | 315.84 | 3.73 | 210.60 | 2.81 | 257.22 | 4.26 | 310.14 | 6.44 | 319.97 | | 2.4 | 8.95 | 335.86 | 3.457 | 224.4 | 2.98 | 278.14 | 5.22 | 331.17 | 7.85 | 339.05 | | w | 2 | 4 | 6 | 8 | 10 | 12 | 14 | | |------------|--------|--------|--------|--------|--------|--------|--------|--------| | clip-score | 0.2593 | 0.2719 | 0.2757 | 0.2775 | 0.2790 | 0.2796 | 0.2803 | | | baseline | FID | 11.745 | 11.887 | 14.639 | 16.777 | 18.419 | 19.528 | 20.462 | | clip-score | 0.2565 | 0.2697 | 0.2741 | 0.2763 | 0.2780 | 0.2788 | 0.2799 | | | linear | FID | 14.649 | 11.260 | 12.056 | 13.147 | 14.179 | 15.032 | 15.663 | | clip-score | 0.2553 | 0.2686 | 0.2728 | 0.2751 | 0.2770 | 0.2782 | 0.2793 | | | cos | FID | 15.725 | 11.846 | 12.009 | 12.796 | 13.629 | 14.282 | 15.058 | | clip-score | 0.261 | 0.272 | 0.2754 | 0.2773 | 0.2780 | 0.2787 | 0.2793 | | | sin | FID | 10.619 | 14.618 | 18.323 | 20.829 | 22.380 | 23.534 | 24.561 | | clip-score | 0.2608 | 0.2723 | 0.2757 | 0.2773 | 0.2781 | 0.2789 | 0.2793 | | | invlinear | FID | 10.649 | 14.192 | 17.810 | 20.206 | 21.877 | 22.962 | 24.128 | | clip-score | 0.2603 | 0.2719 | 0.2756 | 0.2774 | 0.2785 | 0.2794 | 0.2802 | | | 
| | FID | 11.940 | 12.106 | 14.183 | 16.100 | 17.530 | 18.663 | 19.723 |
| V-shape | clip-score | 0.2569 | 0.2706 | 0.2747 | 0.2764 | 0.2773 | 0.2783 | 0.2789 |
| | FID | 11.790 | 12.407 | 15.912 | 18.220 | 19.796 | 20.992 | 21.905 |

Table 11: **Experiment of the pcs family on CIN-256-LDM.** We report FID and IS for the baseline and the parameterized pcs family, computed on 50K images. The best FID and IS are highlighted. Optimizing the parameter helps to improve the FID-IS performance; the best FID is obtained at s = 2. Interestingly, the pcs family presents a worse IS metric than the baseline and the clamp-linear/cosine methods.

Table 12: **Different heuristic modes of SD1.5.** We show FID vs. CLIP-score on 10K images. We highlight different CLIP-score ranges, low (∼ 0.272), mid (∼ 0.277), and high (∼ 0.280), in pink, orange, and blue. Increasing modes demonstrate the best performance at high w, whereas decreasing modes regress in performance. Non-linear modes, especially Λ-shape, also improve over the baseline but remain worse than the increasing shapes.

Table 13: **Different parameterized functions of SD1.5.** We show FID vs. CLIP-score on 10K images. We highlight different CLIP-score ranges, low (∼ 0.272), mid (∼ 0.277), and high (∼ 0.280), in pink, orange, and blue. For the pcs family the optimum is at s = 1, whereas for the clamp-linear and clamp-cosine methods it is at c = 2.

| w | | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
| baseline | clip-score | 0.2593 | 0.2719 | 0.2757 | 0.2775 | 0.2790 | 0.2796 | 0.2803 |
| | FID | 11.745 | 11.887 | 14.639 | 16.777 | 18.419 | 19.528 | 20.462 |
| pcs (s=4) | clip-score | 0.2453 | 0.2582 | 0.2637 | 0.2668 | 0.2691 | 0.2706 | 0.2720 |
| | FID | 23.875 | 19.734 | 19.167 | 19.627 | 20.513 | 22.022 | 23.585 |
| pcs (s=2) | clip-score | 0.2591 | 0.2642 | 0.2691 | 0.2720 | 0.2740 | 0.2754 | 0.2766 |
| | FID | 18.026 | 14.414 | 13.503 | 13.652 | 14.175 | 14.806 | 15.480 |
| pcs (s=1) | clip-score | 0.2553 | 0.2686 | 0.2728 | 0.2751 | 0.2770 | 0.2782 | 0.2793 |
| | FID | 15.725 | 11.846 | 12.009 | 12.796 | 13.629 | 14.282 | 15.058 |
| pcs (s=0.1) | clip-score | 0.2507 | 0.2642 | 0.2755 | 0.2772 | 0.2785 | 0.2796 | 0.2800 |
| | FID | 19.532 | 14.414 | 14.770 | 16.901 | 18.312 | 19.349 | 20.271 |
| linear (c=1) | clip-score | 0.2613 | 0.2705 | 0.2745 | 0.2766 | 0.2781 | 0.2790 | 0.2798 |
| | FID | 11.4448 | 11.011 | 12.130 | 13.211 | 14.219 | 15.129 | 15.888 |
| linear (c=2) | clip-score | 0.2679 | 0.2717 | 0.2751 | 0.2769 | 0.2783 | 0.2795 | 0.2800 |
| | FID | 10.7382 | 11.169 | 12.168 | 13.211 | 14.166 | 14.946 | 16.041 |
| linear (c=3) | clip-score | 0.2719 | 0.2732 | 0.2756 | 0.2771 | 0.2783 | 0.2798 | 0.2800 |
| | FID | 12.1284 | 12.328 | 13.019 | 13.916 | 14.701 | 16.109 | 16.420 |
| linear (c=4) | clip-score | 0.2742 | 0.2746 | 0.2761 | 0.2775 | 0.2786 | 0.2794 | 0.2802 |
| | FID | 13.768 | 13.813 | 14.213 | 14.765 | 15.311 | 15.834 | 16.422 |
| cos (c=1) | clip-score | 0.2618 | 0.2703 | 0.2740 | 0.2762 | 0.2775 | 0.2787 | 0.2795 |
| | FID | 11.386 | 10.986 | 11.732 | 12.608 | 13.460 | 14.288 | 14.978 |
| cos (c=2) | clip-score | 0.2682 | 0.2722 | 0.2751 | 0.2769 | 0.2780 | 0.2789 | 0.2800 |
| | FID | 10.816 | 11.309 | 12.055 | 12.908 | 13.602 | 14.326 | 15.008 |
| cos (c=3) | clip-score | 0.2719 | 0.2736 | 0.2757 | 0.2772 | 0.2792 | 0.2792 | 0.2800 |
| | FID | 12.121 | 12.363 | 12.956 | 13.631 | 14.263 | 14.869 | 15.385 |
| cos (c=4) | clip-score | 0.2742 | 0.2748 | 0.2764 | 0.2776 | 0.2786 | 0.2795 | 0.2802 |
| | FID | 13.734 | 13.827 | 14.222 | 14.690 | 15.090 | 15.560 | 15.916 |

| w | | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 20 |
| baseline | clip-score | 0.2248 | 0.2712 | 0.2767 | 0.2791 | 0.2806 | 0.2817 | 0.2826 | 0.2832 | 0.2836 |
| | FID | 59.2480 | 24.3634 | 24.9296 | 25.7080 | 26.1654 | 27.2308 | 27.4628 | 28.0538 | 29.6868 |
| linear | clip-score | 0.2248 | 0.2653 | 0.2732 | 0.2773 | 0.2798 | 0.2810 | 0.2821 | 0.2828 | 0.2840 |
| | FID | 59.2480 | 29.0917 | 25.0276 | 24.4500 | 24.6705 | 25.1286 | 25.5488 | 25.8457 | 26.5993 |
| cosine | clip-score | 0.2248 | 0.2621 | 0.2708 | 0.2751 | 0.2776 | 0.2794 | 0.2803 | 0.2817 | 0.2830 |
| | FID | 59.2480 | 32.8264 | 27.0004 | 25.5468 | 25.4331 | 25.5244 | 25.7375 | 25.8758 | 26.8427 |
| invlinear | clip-score | 0.2248 | 0.2739 | 0.2783 | 0.2800 | 0.2814 | 0.2826 | 0.2823 | 0.2807 | 0.2730 |
| | FID | 59.2480 | 23.8196 | 25.4335 | 26.1458 | 27.8969 | 29.6194 | 31.8970 | 35.2600 | 47.8467 |
| sin | clip-score | 0.2248 | 0.2741 | 0.2786 | 0.2803 | 0.2816 | 0.2823 | 0.2816 | 0.2794 | 0.2713 |
| | FID | 59.2480 | 23.9147 | 25.4203 | 26.3137 | 28.1756 | 29.3571 | 30.5314 | 36.3049 | 51.6672 |
| Λ-shape | clip-score | 0.2248 | 0.2721 | 0.2782 | 0.2809 | 0.2826 | 0.2831 | 0.2837 | 0.2846 | 0.2849 |
| | FID | 59.2480 | 22.3927 | 24.0785 | 25.6845 | 26.7019 | 27.5095 | 28.2058 | 32.1870 | 34.9896 |
| V-shape | clip-score | 0.2248 | 0.2688 | 0.2747 | 0.2770 | 0.2785 | 0.2793 | 0.2795 | 0.2786 | 0.2736 |
| | FID | 59.2480 | 21.6560 | 22.7042 | 23.6659 | 24.0550 | 25.4073 | 26.2993 | 27.6580 | 35.2935 |

Table 14: **Different heuristic modes of SDXL.** We show FID vs. CLIP-score on 10K images. We highlight different CLIP-score ranges, low (∼ 0.2770), mid (∼ 0.280), and high (∼ 0.2830), in pink, orange, and blue. Increasing modes demonstrate the best performance at high w, whereas decreasing modes regress in performance. Non-linear modes, especially Λ-shape, also improve over the baseline but regress quickly when w is high.

Table 15: **Different parameterized results on SDXL.** We show FID vs. CLIP-score for the pcs family and the clamp family on 10K images: the pcs family records its best performance at s = 0.1, while the clamp-linear and clamp-cosine strategies both record their best performance at c = 4.
| w | | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 20 |
| baseline | clip-score | 0.2248 | 0.2712 | 0.2767 | 0.2791 | 0.2806 | 0.2817 | 0.2826 | 0.2832 | 0.2836 |
| | FID | 59.2480 | 24.3634 | 24.9296 | 25.7080 | 26.1654 | 27.2308 | 27.4628 | 28.0538 | 29.6868 |
| pcs (s=4) | clip-score | 0.2248 | 0.2336 | 0.2396 | 0.2440 | 0.2470 | 0.2494 | 0.2513 | 0.2527 | 0.2549 |
| | FID | 59.2480 | 55.2402 | 52.0731 | 50.3335 | 48.9980 | 48.4516 | 48.0146 | 47.7025 | 48.9481 |
| pcs (s=2) | clip-score | 0.2248 | 0.2486 | 0.2581 | 0.2638 | 0.2673 | 0.2704 | 0.2722 | 0.2738 | 0.2765 |
| | FID | 59.2480 | 35.2002 | 28.7500 | 24.8120 | 22.8518 | 21.7098 | 22.1061 | 23.0833 | 23.5282 |
| pcs (s=1) | clip-score | 0.2248 | 0.2621 | 0.2708 | 0.2751 | 0.2776 | 0.2794 | 0.2803 | 0.2817 | 0.2830 |
| | FID | 59.2480 | 32.8264 | 27.0004 | 25.5468 | 25.4331 | 25.5244 | 25.7375 | 25.8758 | 26.8427 |
| pcs (s=0.1) | clip-score | 0.2248 | 0.2710 | 0.2769 | 0.2798 | 0.2812 | 0.2823 | 0.2830 | 0.2836 | 0.2844 |
| | FID | 59.2480 | 18.5894 | 18.8975 | 19.8658 | 20.5433 | 21.1257 | 21.6248 | 21.9118 | 23.7671 |
| linear (c=2) | clip-score | 0.2248 | 0.2717 | 0.2752 | 0.2781 | 0.2798 | 0.2810 | 0.2822 | 0.2830 | 0.2840 |
| | FID | 59.2480 | 24.3084 | 23.8361 | 24.0241 | 24.4806 | 24.6759 | 24.9336 | 25.6498 | 26.6398 |
| linear (c=4) | clip-score | 0.2248 | 0.2773 | 0.2778 | 0.2792 | 0.2805 | 0.2818 | 0.2827 | 0.2831 | 0.2845 |
| | FID | 59.2480 | 18.2321 | 18.2517 | 18.2678 | 18.3675 | 18.5902 | 18.8356 | 19.1395 | 19.9400 |
| linear (c=6) | clip-score | 0.2248 | 0.2798 | 0.2799 | 0.2803 | 0.2811 | 0.2819 | 0.2825 | 0.2832 | 0.2846 |
| | FID | 59.2480 | 19.3309 | 19.3295 | 19.2716 | 19.2801 | 19.2955 | 19.4298 | 19.5635 | 20.1577 |
| cosine (c=2) | clip-score | 0.2248 | 0.2720 | 0.2748 | 0.2775 | 0.2794 | 0.2806 | 0.2816 | 0.2822 | 0.2836 |
| | FID | 59.2480 | 24.2768 | 23.9367 | 23.8442 | 24.1493 | 24.3516 | 24.6917 | 25.0779 | 25.8126 |
| cosine (c=4) | clip-score | 0.2248 | 0.2773 | 0.2780 | 0.2793 | 0.2806 | 0.2816 | 0.2825 | 0.2832 | 0.2843 |
| | FID | 59.2480 | 18.2321 | 18.2336 | 18.2764 | 18.2364 | 18.3372 | 18.5678 | 18.8925 | 19.6065 |
| cosine (c=6) | clip-score | 0.2248 | 0.2798 | 0.2799 | 0.2805 | 0.2813 | 0.2821 | 0.2826 | 0.2830 | 0.2843 |
| | FID | 59.2480 | 19.2943 | 19.2701 | 19.2261 | 19.2656 | 19.2711 | 19.2743 | 19.2670 | 19.7355 |

Table 16: **Experiment on SD1.5 with diversity measures** on 10K images; comparison between the baseline and two increasing heuristic shapes, linear and cosine.
| w | | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 20 | 25 |
| baseline | clip-score | 0.2593 | 0.2719 | 0.2757 | 0.2775 | 0.2790 | 0.2796 | 0.2803 | 0.2813 | 0.2817 |
| | FID | 11.745 | 11.887 | 14.639 | 16.777 | 18.419 | 19.528 | 20.462 | 22.463 | 23.810 |
| | Div-CLIP-L | 0.315 | 0.289 | 0.275 | 0.267 | 0.260 | 0.257 | 0.254 | 0.250 | 0.251 |
| | Div-Dinov2-L | 1.188 | 1.083 | 1.033 | 1.007 | 0.987 | 0.976 | 0.967 | 0.951 | 0.948 |
| | Div-CLIP-S | 0.317 | 0.288 | 0.273 | 0.263 | 0.256 | 0.252 | 0.249 | 0.246 | 0.246 |
| | Div-Dinov2-S | 1.241 | 1.131 | 1.082 | 1.051 | 1.031 | 1.019 | 1.006 | 0.992 | 0.986 |
| linear | clip-score | 0.2565 | 0.2697 | 0.2741 | 0.2763 | 0.2780 | 0.2788 | 0.2799 | 0.2817 | 0.2826 |
| | FID | 14.649 | 11.260 | 12.056 | 13.147 | 14.179 | 15.032 | 15.663 | 17.478 | 18.718 |
| | Div-CLIP-L | 0.320 | 0.300 | 0.289 | 0.281 | 0.275 | 0.271 | 0.268 | 0.262 | 0.259 |
| | Div-Dinov2-L | 1.209 | 1.119 | 1.076 | 1.048 | 1.030 | 1.016 | 1.006 | 0.986 | 0.979 |
| | Div-CLIP-S | 0.324 | 0.302 | 0.291 | 0.282 | 0.277 | 0.271 | 0.270 | 0.263 | 0.261 |
| | Div-Dinov2-S | 1.262 | 1.172 | 1.129 | 1.099 | 1.082 | 1.060 | 1.057 | 1.038 | 1.027 |
| cos | clip-score | 0.2553 | 0.2686 | 0.2728 | 0.2751 | 0.2770 | 0.2782 | 0.2793 | 0.2812 | 0.2821 |
| | FID | 15.725 | 11.846 | 12.009 | 12.796 | 13.629 | 14.282 | 15.058 | 16.901 | 18.448 |
| | Div-CLIP-L | 0.322 | 0.304 | 0.293 | 0.287 | 0.282 | 0.278 | 0.275 | 0.268 | 0.265 |
| | Div-Dinov2-L | 1.215 | 1.134 | 1.092 | 1.068 | 1.051 | 1.039 | 1.030 | 1.008 | 1.001 |
| | Div-CLIP-S | 0.326 | 0.307 | 0.296 | 0.290 | 0.285 | 0.282 | 0.278 | 0.272 | 0.269 |
| | Div-Dinov2-S | 1.266 | 1.186 | 1.145 | 1.120 | 1.104 | 1.093 | 1.081 | 1.063 | 1.054 |

| w | | 3 | 5 | 7 | 8 | 9 | 11 | 13 | 15 | 20 |
| baseline | clip-score | 0.2712 | 0.2767 | 0.2791 | 0.2799 | 0.2806 | 0.2817 | 0.2826 | 0.2832 | 0.2836 |
| | FID | 24.36 | 24.93 | 25.71 | 26.06 | 26.17 | 27.23 | 27.46 | 28.05 | 29.69 |
| | Div-Dinov2-L | 0.951 | 0.886 | 0.857 | 0.850 | 0.841 | 0.831 | 0.827 | 0.829 | 0.853 |
| | Div-Dinov2-S | 1.052 | 0.985 | 0.950 | 0.940 | 0.934 | 0.920 | 0.916 | 0.912 | 0.927 |
| linear | clip-score | 0.2653 | 0.2732 | 0.2773 | 0.2789 | 0.2798 | 0.2810 | 0.2821 | 0.2828 | 0.2840 |
| | FID | 29.09 | 25.03 | 24.45 | 24.52 | 24.67 | 25.13 | 25.55 | 25.85 | 26.60 |
| | Div-Dinov2-L | 0.999 | 0.949 | 0.916 | 0.904 | 0.897 | 0.881 | 0.873 | 0.863 | 0.854 |
| | Div-Dinov2-S | 1.123 | 1.064 | 1.030 | 1.018 | 1.007 | 0.989 | 0.980 | 0.973 | 0.956 |
| cosine | clip-score | 0.2621 | 0.2708 | 0.2751 | 0.2764 | 0.2776 | 0.2794 | 0.2803 | 0.2817 | 0.2830 |
| | FID | 32.83 | 27.00 | 25.55 | 25.41 | 25.43 | 25.52 | 25.74 | 25.88 | 26.84 |
| | Div-Dinov2-L | 1.017 | 0.969 | 0.941 | 0.932 | 0.922 | 0.908 | 0.899 | 0.893 | 0.879 |
| | Div-Dinov2-S | 1.143 | 1.095 | 1.066 | 1.056 | 1.045 | 1.031 | 1.020 | 1.008 | 0.994 |

Table 17: **Experiment on SDXL with diversity.** We present FID vs. CLIP-score (CS) for SDXL on 10K images; we see a similar trend to Table 16: the heuristic methods outperform the baseline on both FID and diversity.

## G User Study

In this section, we elaborate on the specifics of our user study setup corresponding to Figure 3(b) in our main manuscript. For the evaluation, each participant was presented with a total of 10 image sets.
Each set comprised 9 images. Within each set, pairwise comparisons were made: linear vs. baseline and cosine vs. baseline. Throughout the study, two distinct image sets (20 images for each method) were utilized. We carried out two tests on results generated with Stable Diffusion v1.5, and the images were generated so that their CLIP-scores are similar. Subsequently, participants were prompted with three questions for each comparison:

1. *Which set of images is more realistic or visually appealing?*
2. *Which set of images is more diverse?*
3. *Which set of images aligns better with the provided text description?*

In total, we recorded 54 participants, each responding to 90 questions. We analyzed the results by examining the responses to each question individually and summarizing the collective feedback.
# Fast And Effective Weight Update For Pruned Large Language Models

Vladimír Boža boza@fmph.uniba.sk Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia

Reviewed on OpenReview: *https://openreview.net/forum?id=1hcpXd9Jir*

## Abstract

Pruning large language models (LLMs) is a challenging task due to their enormous size. The primary difficulty is fine-tuning the model after pruning, which is needed to recover the lost performance caused by dropping weights. Recent approaches have either ignored fine-tuning entirely, focusing on efficient pruning criteria, or attempted layer-wise weight updates, preserving the behavior of each layer. However, even layer-wise weight updates can be costly for LLMs, and previous works have resorted to various approximations. In our paper, we propose a fast and effective weight update algorithm for pruned layers based on the Alternating Direction Method of Multipliers (ADMM). We further extend it with a simple gradual pruning mask selection and achieve state-of-the-art pruning performance across a wide range of LLMs. The code is available at https://github.com/fmfi-compbio/admm-pruning.

## 1 Introduction

Large language models (LLMs) (Brown et al., 2020; Zhang et al., 2022; Touvron et al., 2023a;b) have displayed impressive performance in different tasks, but deploying them can be challenging due to their large size and high memory demands. In this work, we introduce a one-shot pruning and weight update algorithm for LLMs that is both fast and effective. Our algorithm produces state-of-the-art results for LLM pruning while imposing minimal computational overhead (Table 1).

Neural networks are usually compressed by either quantization or weight pruning. LLM quantization (Dettmers et al., 2022; Dettmers & Zettlemoyer, 2023; Ahmadian et al., 2023; Xiao et al., 2023) compresses LLMs by storing weights using a small number of bits. On the other hand, pruning compresses models by dropping irrelevant weights (LeCun et al., 1989; Han et al., 2015; Zhu & Gupta, 2018). Pruning can be helpful for LLMs since, during inference, the main bottleneck is the memory bandwidth needed to load weights into the processing unit (Xia et al., 2023). However, the main challenge in deploying LLM pruning is that the network needs to be fine-tuned (Blalock et al., 2020; Liu et al., 2018), which is not feasible with LLMs due to their extensive computational and memory footprint. For example, Agarwalla et al. (2024) needed retraining on 45 to 100 billion tokens to recover the performance lost by pruning. Also, memory-efficient fine-tuning like LoRA (Hu et al., 2021) is not applicable to LLM weight pruning since we cannot easily merge the low-rank update with the sparsified matrix.

A feasible alternative is one-shot pruning, where one is given a trained model and a small amount of calibration data and has to compress the model in a single forward pass using limited computational resources. This is typically done via layer-wise pruning, where the pruning problem is split into layer-wise subproblems. In each layer, one aims to select a pruning mask and update weights to minimize the reconstruction error. Adaprune (Hubara et al., 2021) solves layer-wise reconstruction by updating weights directly via gradient descent (using the Adam optimizer). However, it needs many iterations to achieve convergence. Optimal brain compression (OBC) (Frantar & Alistarh, 2022) removes weights one by one. In each step, it calculates the optimal weight to remove and also the optimal update.
However, this approach is also very time-consuming for pruning LLMs. The first practical approach applicable to LLMs was SparseGPT (Frantar & Alistarh, 2023), which uses approximations on top of the OBC approach to make the problem feasible, albeit at the cost of decreased reconstruction quality. Recently, Wanda (Sun et al., 2023) showed that LLMs can be pruned by removing weights with the smallest product of weight magnitude and corresponding input activation norm. This selection approach without the weight update is competitive with SparseGPT at lower sparsities (up to 60%).

Table 1: WikiText perplexity of pruned LLaMA-7B. Our ADMM-based methods are superior to previous ones.

| Method | Sparsity | Perplexity |
| Dense | 0% | 5.68 |
| Wanda | 50% | 7.26 |
| SparseGPT | 50% | 7.22 |
| ADMM1 | 50% | 7.20 |
| ADMM-Grad | 50% | **7.06** |
| Wanda | 60% | 10.66 |
| SparseGPT | 60% | 10.51 |
| ADMM1 | 60% | 9.96 |
| ADMM-Grad | 60% | **9.22** |
| Wanda | 70% | 85.77 |
| SparseGPT | 70% | 26.30 |
| ADMM1 | 70% | 26.31 |
| ADMM-Grad | 70% | **18.66** |
| Wanda | 80% | 5e3 |
| SparseGPT | 80% | 154.75 |
| ADMM1 | 80% | 202.04 |
| ADMM-Grad | 80% | **69.46** |
| Wanda | 2:4 | 11.53 |
| SparseGPT | 2:4 | 11.00 |
| ADMM1 | 2:4 | 10.38 |
| ADMM-Grad | 2:4 | **9.90** |

**Our results.** In this paper, we introduce an efficient layer-wise weight update algorithm based on the alternating direction method of multipliers (ADMM) (Boyd et al., 2011). Our algorithm sidesteps all of the problems of previous solutions. We do not need many gradient descent iterations, nor do we need any heuristic approximation for calculating the weight update. We only need a single inversion of a matrix similar in size to the weight matrix and very few simple iterations to achieve accurate weight updates for a given pruning mask. Furthermore, we extend our algorithm with gradual pruning (Zhu & Gupta, 2018), where in each step we prune more and more weights. This simple extension allows us to obtain state-of-the-art pruning results at a very low additional cost.

## 2 Preliminaries

## 2.1 Large Language Models And Transformers

Large language models (like LLaMA) use the transformer architecture (Vaswani et al., 2017) and are trained to predict the next word in the text. A transformer consists of multiple repeating blocks. Each block has a multi-head attention subblock and a feed-forward subblock, both of which contain multiple linear transformations. Our work focuses on pruning weights in these linear transformations.

## 2.2 One-Shot And Layer-Wise Pruning

We consider a scenario of post-training pruning, where we prune an already trained model to a desired sparsity (we assume that the sparsity is the same in each pruned layer). Since manipulating the whole LLM at once leads to huge computational and memory requirements, we follow the works of Hubara et al. (2021); Frantar & Alistarh (2022; 2023). We prune the LLM during one forward pass (one-shot pruning) and split pruning into multiple layer-wise subproblems. During the forward pass, we capture the calibration inputs for each layer and then prune and update each layer accordingly. More specifically, for each block in the model, we run a forward pass through it, capture inputs for each layer, prune and update weights, and then rerun a forward pass through the whole block to get outputs after pruning.

We are given the original weights $W_\ell$ for each layer and calibration inputs $X_\ell$. Our goal is to find a binary weight mask $M_\ell$ and updated weights $\widehat{W}_\ell$ such that the following reconstruction error is minimized:

$$||X_\ell W_\ell - X_\ell(M_\ell \odot \widehat{W}_\ell)||_2^2$$

For now, we assume that the pruning mask $M_\ell$ was found via a separate method and focus only on finding the updated weights $\widehat{W}_\ell$.
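To make this objective concrete, here is a minimal NumPy sketch (our own illustration; names such as `recon_error` are not from the paper's codebase) that evaluates the reconstruction error for a given mask and candidate update:

```python
import numpy as np

def recon_error(X, W, W_hat, M):
    """Layer-wise reconstruction error ||X W - X (M * W_hat)||_F^2.

    X: (batch, m) calibration inputs, W: (m, n) original weights,
    W_hat: (m, n) updated weights, M: (m, n) binary mask (1 = keep).
    """
    diff = X @ W - X @ (M * W_hat)
    return np.sum(diff ** 2)

# toy usage: pruning half the weights without updating them
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
W = rng.normal(size=(16, 8))
M = (rng.random(W.shape) > 0.5).astype(W.dtype)
print(recon_error(X, W, W.copy(), M))  # error caused by masking alone
```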
Assuming that our layer has $n$ output neurons and $m$ inputs, one can solve $n$ independent linear regressions to solve the problem optimally. Since the mask for each output is different, each one of the $n$ outputs requires a separate matrix inversion of the relevant submatrix of $X_\ell^T X_\ell$, which in total takes $O(m^3 n)$ time. This is infeasible even for small neural networks.

It is possible to use various approximations to compute updates faster, as done in SparseGPT (Frantar & Alistarh, 2023). However, we demonstrate in our experiments that this compromises the quality of the solution. Another approximation is to not update the weights at all and prune the weights with the lowest product of magnitude and input activation norm, as done in Wanda (Sun et al., 2023).

Another possible solution is to update weights iteratively via gradient descent as in Adaprune (Hubara et al., 2021). Here, one update step is proportional to $X_\ell^T X_\ell(M_\ell \odot \widehat{W}_\ell - W_\ell)$. Assuming $X_\ell^T X_\ell$ is precomputed, one update step takes $O(m^2 n)$ time. While this looks much better than solving $n$ linear regressions, Frantar & Alistarh (2023) as well as our own experiments show that one needs many iterations to achieve reasonable convergence.

## 2.3 Alternating Direction Method Of Multipliers

The alternating direction method of multipliers (ADMM) (Boyd et al., 2011) is an optimization method for solving problems of the form:

$$\begin{array}{rl}{\mathrm{minimize}}&{f(x)+g(z)}\\{\mathrm{subject~to}}&{Ax+Bz=c}\end{array}$$

where $f(x)$ and $g(z)$ are convex functions. ADMM forms the augmented Lagrangian with dual variables $y$ and penalty factor $\rho$:

$$L_{\rho}(x,z,y)=f(x)+g(z)+y^{T}(Ax+Bz-c)+{\frac{\rho}{2}}||Ax+Bz-c||_{2}^{2}$$

Typically, ADMM is given using the scaled dual variable $u = y/\rho$ in the form:

$$L_{\rho}(x,z,u)=f(x)+g(z)+\frac{\rho}{2}(||Ax+Bz-c+u||_{2}^{2}-||u||_{2}^{2})$$

The Lagrangian is then optimized via the following iterations:

$$\begin{array}{l}{{x^{k+1}=\operatorname*{arg\,min}_{x}f(x)+(\rho/2)||Ax+Bz^{k}-c+u^{k}||_{2}^{2}}}\\{{z^{k+1}=\operatorname*{arg\,min}_{z}g(z)+(\rho/2)||Ax^{k+1}+Bz-c+u^{k}||_{2}^{2}}}\\{{u^{k+1}=u^{k}+Ax^{k+1}+Bz^{k+1}-c}}\end{array}$$

It can be shown that ADMM converges to the optimal solution when $f$ and $g$ are convex and some other mild assumptions hold (Boyd et al., 2011).

Assumption 1. *The (extended-real-valued) functions $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ and $g : \mathbb{R}^m \to \mathbb{R} \cup \{+\infty\}$ are closed, proper, and convex.*

Assumption 2. *The unaugmented Lagrangian $L_0$ has a saddle point, i.e. there exists $(x^*, z^*, y^*)$ such that for all $(x, z, y)$:*

$$L_0(x^*, z^*, y) \le L_0(x^*, z^*, y^*) \le L_0(x, z, y^*)$$

Theorem 1. *Let Assumptions 1 and 2 hold. Then:*

- $Ax^k + Bz^k - c \to 0$ as $k \to \infty$, i.e. the iterates approach feasibility.
- $f(x^k) + g(z^k)$ approaches the optimal value as $k \to \infty$.

One application of ADMM is solving constrained optimization over a convex set $C$, i.e.:

$$\begin{array}{rl}{\mathrm{minimize}}&{f(x)}\\{\mathrm{subject~to}}&{x\in C}\end{array}$$

This problem can be rewritten into ADMM form using the indicator function $g$, where $g(z) = 0$ if $z \in C$, and $g(z) = \infty$ otherwise:

$$\begin{array}{rl}{\mathrm{minimize}}&{f(x)+g(z)}\\{\mathrm{subject~to}}&{x-z=0}\end{array}\qquad(1)$$

In this case, the ADMM update becomes:

$$\begin{array}{l}{{x^{k+1}=\operatorname*{arg\,min}_{x}f(x)+(\rho/2)||x-z^{k}+u^{k}||_{2}^{2}}}\\{{z^{k+1}=\Pi_{C}(x^{k+1}+u^{k})}}\\{{u^{k+1}=u^{k}+x^{k+1}-z^{k+1}}}\end{array}\qquad(2)$$

Here, $\Pi_C$ is the Euclidean projection onto the set $C$.
Also, note that the $x$ update is just the original unconstrained problem with a simple quadratic penalty term.

## 3 Methods

Here, we propose an alternative solution for finding the updated weights in the layer-wise pruning problem. Our solution has the same per-iteration complexity as gradient descent but converges much faster.

Recall that we are given a set of calibration inputs $X_\ell$ and a mask $M_\ell$ and are looking for $\widehat{W}_\ell$ such that the reconstruction error $||X_\ell W_\ell - X_\ell(M_\ell \odot \widehat{W}_\ell)||_2^2$ is minimized. We observe that when the set of zeroed weights is fixed, the valid weight matrices form a convex set $C$. In other words, we are solving the following constrained optimization problem (we omit the $\ell$ subscript for clarity):

$$\begin{array}{rl}{\operatorname*{min}_{\widehat{W}}}&{||XW-X{\widehat{W}}||_{2}^{2}}\\{\mathrm{subject~to}}&{{\widehat{W}}\odot(1-M)=0}\end{array}$$

Our objective is also convex, and thus we can use ADMM to solve our optimization problem. We denote our objective as $f(\widehat{W}) = ||XW - X\widehat{W}||_2^2$ and use the indicator function $g(Z)$, which takes value $0$ if $Z \odot (1 - M) = 0$ and $\infty$ otherwise. Using formulation (1) and updates (2), we get the following iterations:

$$\begin{array}{l}{{\widehat{W}^{k+1}=\operatorname*{arg\,min}_{\widehat{W}}f(\widehat{W})+(\rho/2)||\widehat{W}-Z^k+U^k||_2^2}}\\{{Z^{k+1}=\Pi_C(\widehat{W}^{k+1}+U^k)}}\\{{U^{k+1}=U^k+\widehat{W}^{k+1}-Z^{k+1}}}\end{array}\qquad(3)$$

In our case, the $Z$-update is just a projection onto the set of valid matrices, thus:

$$Z^{k+1}=({\widehat W}^{k+1}+U^{k})\odot M\qquad(4)$$

Updating $\widehat{W}$ is very similar to ridge regression and can be computed as:

$$\widehat{W}^{k+1}=(X^{T}X+\rho I)^{-1}(X^{T}XW+\rho(Z^{k}-U^{k}))\qquad(5)$$

Corollary 1. *For a fixed calibration input $X$ and mask $M$, iterates (3) (with updates (4), (5)) converge to the optimal solution for the weight update.*

Proof. Since our algorithm uses ADMM iterations, we only need to prove that Assumptions 1 and 2 hold. $f$ and $g$ are clearly closed, proper, and convex functions; thus, Assumption 1 holds. To show that Assumption 2 holds, we need to prove that there exists $(\widehat{W}^*, Z^*, y^*)$ such that for all $(\widehat{W}, Z, y)$:

$$L_0(\widehat{W}^*, Z^*, y) \le L_0(\widehat{W}^*, Z^*, y^*) \le L_0(\widehat{W}, Z, y^*)$$

where $L_0(\widehat{W}, Z, y) = f(\widehat{W}) + g(Z) + y^T(\widehat{W} - Z)$.

There is a globally optimal solution $\widehat{W}^* = Z^*$ (it can be found by independent linear regressions), where $L_0(\widehat{W}^*, Z^*, y) = f(\widehat{W}^*) + g(Z^*)$, and thus $L_0(\widehat{W}^*, Z^*, y) \le L_0(\widehat{W}^*, Z^*, y^*)$.

If $M_{ij} = 1$ ($\widehat{W}_{ij}$ is unmasked and can have any value), we set $y^*_{ij} = 0$. If $M_{ij} = 0$, then $Z^*_{ij} = \widehat{W}^*_{ij} = 0$ and we set $y^*_{ij} = -\frac{\partial f}{\partial \widehat{W}_{ij}}(\widehat{W}^*)$. Then all $\widehat{W}$ and $Z$ derivatives of $L_0$ are zero (or $Z_{ij}$ must be $0$ due to masking) at $L_0(\widehat{W}^*, Z^*, y^*)$, and since $L_0$ is convex in $\widehat{W}$ and $Z$, we have a global optimum for the given $y^*$, and thus $L_0(\widehat{W}^*, Z^*, y^*) \le L_0(\widehat{W}, Z, y^*)$. Thus, Assumption 2 holds.

We can precompute and cache $(X^TX + \rho I)^{-1}$ and $X^TXW$, and then one update iteration has $O(m^2n)$ complexity, which is the same as the complexity of gradient descent. Note that these theoretical results do not say anything about the speed of convergence. In the experimental section, we show that, in practice, we get high-quality solutions after very few iterations.

One can also view the $\widehat{W}$ update as a way of pulling pruned weights towards zero. For unpruned weights, the penalty term $(\rho/2)||\widehat{W}-(Z^k-U^k)||_2^2$ only limits the step size, but for pruned weights, the value of $Z^k - U^k$ has the opposite sign to the value of the weight, and thus they are strongly pulled towards zero.
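For illustration, a minimal NumPy sketch of iterations (3)-(5) for a fixed mask might look as follows (our own code, not the author's released implementation; `rho` is the ADMM penalty factor):

```python
import numpy as np

def admm_update(X, W, M, rho=1.0, iters=20):
    """ADMM weight update for a fixed pruning mask M.

    The W-hat step is the ridge-like solve
    (X^T X + rho I)^{-1} (X^T X W + rho (Z - U)),
    the Z step projects onto the mask, and U accumulates residuals.
    """
    m = W.shape[0]
    XtX = X.T @ X
    XtXW = XtX @ W                               # cached right-hand side
    inv = np.linalg.inv(XtX + rho * np.eye(m))   # cached, computed once
    Z = W * M
    U = np.zeros_like(W)
    for _ in range(iters):
        W_hat = inv @ (XtXW + rho * (Z - U))
        Z = (W_hat + U) * M                      # projection onto valid matrices
        U = U + W_hat - Z
    return Z                                     # Z satisfies the sparsity constraint
```

Each pass costs one dense matrix multiply, matching the $O(m^2 n)$ per-step complexity noted above, while the matrix inverse is computed only once.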
## 3.1 Mask Selection And Preconditioning

In the previous section, we described how to update the weights when we are given the sparsity mask. Now, we discuss how to select the mask for pruning.

Wanda (Sun et al., 2023) is a simple rule for selecting a high-quality mask for pruning LLMs. Instead of selecting weights with the largest magnitude (magnitude pruning), it selects weights with the highest product of weight absolute value and input neuron norm, i.e. $|W_{ij}| \cdot ||X_j||_2$. In our implementation, we follow this selection rule, but we use the norm of the inputs as preconditioning. We multiply the weight matrix by the feature norms and divide the calibration inputs by their feature norms, run the ADMM algorithm, and then normalize the weight matrix back. Note that after the preconditioning, selecting the mask by weight magnitude is equivalent to the Wanda algorithm, and the diagonal of $X^TX$ contains only ones.

The Wanda paper also suggests keeping a constant number of weights per output. We found that in our case, with the weight update, this constraint is actually slightly detrimental, and in our work, we select the top weights for the whole layer.

## 3.2 Gradual Pruning

Until now, we considered a scenario where one selects the pruning mask first and then updates the weights. Here, we propose a simple extension of our algorithm, which progressively prunes more and more weights and simultaneously computes the weight update. Note that this still happens during one forward pass; we just apply multiple iterations to one layer-wise problem.

We adopt the cubic sparsity schedule from (Zhu & Gupta, 2018), where the sparsity at step $t$ is computed as $s_t = s_f \left(\frac{t}{k_s}\right)^3$, where $s_f$ is the final sparsity and $k_s$ is the number of sparsification steps. In each step, we set an $s_t$ fraction of the weights to zero and then proceed with the ADMM update. Note that the only overhead of gradual pruning is the mask selection added to each step. While $Z^k$ represents the current valid solution, we found that it is slightly better to use $\widehat{W}^{k+1} + U^k$ for selecting the weights to prune. This would actually be the optimal choice if our constraint (function $g$) were a specific sparsity level, not a predefined mask. We summarize our pruning algorithm in Algorithm 1; a runnable sketch is also given below.

We also extend gradual pruning to structured 2:4 sparsity using the following straightforward idea. Our final sparsity is $s_f = 0.5$. If in step $t$ our target sparsity is $s_t$, then we always keep the two highest elements from each group of four and then prune a $2s_t$ fraction of the remaining ones.

Algorithm 1: Layerwise gradual pruning with ADMM. Given a weight matrix $W$, calibration input $X$, desired sparsity $s_f$, number of iterations $k$, number of sparsification steps $k_s$, dampening factor $\lambda$ (usually 0.1), and penalty factor $\rho$ (usually 1), we prune matrix $W$ to the desired sparsity and accurately update the weights for the selected mask.

$norm \leftarrow ||X||_2 + \epsilon$
$W \leftarrow W * norm$
$X \leftarrow X / norm$
$XX \leftarrow X^T X + \lambda I$ // $\lambda$ is usually 0.1
$XXW \leftarrow XX \cdot W$ // precomputation
$XX^{-1} \leftarrow (XX + \rho I)^{-1}$
$U \leftarrow 0$
for $step = 1..k$ do
&nbsp;&nbsp;if $step \le k_s$ then
&nbsp;&nbsp;&nbsp;&nbsp;$s_{step} \leftarrow s_f \left(\frac{step}{k_s}\right)^3$
&nbsp;&nbsp;&nbsp;&nbsp;$M \leftarrow$ mask zeroing the lowest $s_{step}$ fraction of $|W + U|$
&nbsp;&nbsp;end if
&nbsp;&nbsp;$Z \leftarrow (W + U) * M$
&nbsp;&nbsp;$U \leftarrow U + (W - Z)$
&nbsp;&nbsp;$W \leftarrow XX^{-1}(XXW + \rho(Z - U))$
end for
$W \leftarrow (W + U) * M / norm$

## 3.3 Comparison With SparseGPT And Wanda

Compared to SparseGPT (Frantar & Alistarh, 2023), our algorithm performs a more accurate weight update since it does not rely on approximations (we also verify this later in the experimental section). It is difficult to say which mask selection algorithm is better in theory.
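The gradual-pruning loop of Algorithm 1 can be sketched in NumPy as follows (our own illustration with our variable names; the cubic schedule is written as reconstructed above, and the dual variable is initialized to zero):

```python
import numpy as np

def admm_gradual_prune(X, W, s_f=0.5, iters=20, k_s=15, rho=1.0, lam=0.1, eps=1e-8):
    """Layer-wise gradual pruning with ADMM (sketch of Algorithm 1).

    Preconditions by input-feature norms, grows the sparsity along a
    cubic schedule while running ADMM updates, then rescales back.
    X: (batch, m) calibration inputs, W: (m, n) weights.
    """
    norm = np.linalg.norm(X, axis=0) + eps       # per-feature norms of X
    W = W * norm[:, None]                        # precondition weights
    X = X / norm[None, :]
    m = W.shape[0]
    XX = X.T @ X + lam * np.eye(m)               # dampened Gram matrix
    XXW = XX @ W                                 # precomputation
    inv = np.linalg.inv(XX + rho * np.eye(m))
    U = np.zeros_like(W)
    M = np.ones_like(W)
    for step in range(1, iters + 1):
        if step <= k_s:                          # grow sparsity cubically
            s = s_f * (step / k_s) ** 3
            n_drop = int(s * W.size)
            scores = np.abs(W + U).ravel()
            M = np.ones(W.size)
            M[np.argsort(scores)[:n_drop]] = 0.0  # zero the lowest scores
            M = M.reshape(W.shape)
        Z = (W + U) * M
        U = U + (W - Z)
        W = inv @ (XXW + rho * (Z - U))
    return ((W + U) * M) / norm[:, None]         # undo preconditioning
```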
We gradually prune the whole weight matrix, while SparseGPT iteratively makes an optimal selection over groups of columns of the weight matrix. In our experiments, however, our mask selection leads to better results. Our algorithm can also be thought of as Wanda (Sun et al., 2023) with added weight updates and gradual pruning.

## 3.4 Note On Using ADMM With The L0 **Penalty**

It is possible to use ADMM to heuristically optimize functions under an L0 constraint. This was previously done by Zhang et al. (2018); Ye et al. (2019); Gui et al. (2019). While some of these papers claim that this approach is "systematic", in reality, using ADMM with an L0 constraint is just a heuristic, since the constraint is not convex. Moreover, in our preliminary experiments, we found that ADMM with an L0 constraint is very sensitive to the choice of $\rho$, and for some choices, it will actually run in cycles and not converge.

## 4 Experiments

**General setup.** We implement our algorithms by extending the Wanda (Sun et al., 2023) codebase, which relies on PyTorch and the Hugging Face library. Similarly to Wanda and SparseGPT, we use 128 calibration samples from the C4 training dataset (Raffel et al., 2020). We run pruning on a machine with two Quadro RTX 5000 GPUs (each with 16 GB of GPU memory). Since we prune layers sequentially, we only need to load one layer into GPU memory at a time. This allows us to prune 70B-parameter LLaMA models using a relatively small GPU. Unless stated otherwise, we prune for $k = 20$ iterations using $k_s = 15$ sparsification steps, and set the dampening factor to $\lambda = 0.1$ and the ADMM penalty factor to $\rho = 1$.

We compare our methods to Wanda (Sun et al., 2023), which does no weight update and just prunes the weights with the lowest product of magnitude and activation norm, and SparseGPT (Frantar & Alistarh, 2023), which uses multiple approximations for selecting pruned weights and calculating weight updates. For both methods, we use their public implementations and default hyperparameter settings.

**Models and evaluation.** We test our methods on LLaMA (Touvron et al., 2023a) and LLaMA-2 (Touvron et al., 2023b) models. Similarly to previous works (Frantar & Alistarh, 2023; Sun et al., 2023), we measure the performance of pruned models on language modeling and zero-shot tasks. Our main focus is perplexity on held-out WikiText (Merity et al., 2016), considered a go-to metric for evaluating language model compression (Frantar & Alistarh, 2023). As additional verification and testing, we use the same seven tasks as Wanda from the EleutherAI LM Harness (Gao et al., 2021).

## 4.1 Reconstruction Error Convergence

As a first experiment, we study the quality of our update algorithm. We use a fixed sparsity mask derived using Wanda with 50% sparsity and observe the reconstruction error convergence in one layer. We compare our algorithm to gradient-based approaches using the Adam and SGD optimizers with varying learning rates. We also compare it to the SparseGPT update (without mask selection) used in the Wanda paper.

The results for selected layers of LLaMA-7B are presented in Figure 1. Our ADMM-based algorithm is superior to both gradient-based algorithms and SparseGPT, as it converges almost instantly after computing the initial $X^TX$ matrix inverse. We also note that ADMM works well with the default setting of $\rho = 1$ and does not require learning rate tuning, in stark contrast with SGD and Adam, which have different optimal learning rates in different layers.
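As a side note on the setup: capturing per-layer calibration inputs during the single forward pass, as described in Section 2.2, is commonly done in PyTorch with forward hooks. A generic sketch (not the paper's code; it assumes the block can be called on a single hidden-state tensor):

```python
import torch
import torch.nn as nn

def capture_layer_inputs(block, sample_batches):
    """Collect the inputs seen by every nn.Linear inside one transformer
    block during a forward pass over the calibration batches."""
    captured = {name: [] for name, mod in block.named_modules()
                if isinstance(mod, nn.Linear)}
    hooks = []
    for name, mod in block.named_modules():
        if isinstance(mod, nn.Linear):
            def hook(module, inputs, output, name=name):
                # forward hooks receive the input as a tuple
                captured[name].append(inputs[0].detach().float().cpu())
            hooks.append(mod.register_forward_hook(hook))
    with torch.no_grad():
        for batch in sample_batches:
            block(batch)
    for h in hooks:
        h.remove()
    return {name: torch.cat(xs) for name, xs in captured.items()}
```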
## 4.2 Weight Update Quality Comparison

In this experiment, we first prune each layer of LLaMA-7B to 60% or 80% sparsity using Wanda mask selection and then update the weights using either a gradient-based (Adam) or ADMM update. We select the pruning mask in a single step, i.e., we do not do any gradual mask selection. We test using 1, 10, 20, 50, and 100 update steps. We also test the performance of the SparseGPT weight update and, for reference, include results of running SparseGPT with its own gradual mask selection. We measure perplexity on WikiText and the time overhead (over a forward pass) for each update option.

Figure 1: Reconstruction error over time (in seconds) during optimization of weights in selected layers of LLaMA-7B. The mask was derived by Wanda using 50% sparsity. We compare our proposed ADMM algorithm to SGD with momentum and Adam using various learning rates. We also compare to the SparseGPT update. Our ADMM update converges much faster than the other methods and is better than the SparseGPT update.

Table 2: Comparison of weight update quality between ADMM and SparseGPT on LLaMA-7B using 60% sparsity.

| Mask selection | Weight update | Perplexity |
| Wanda | SparseGPT | 10.86 |
| Wanda | ADMM | 9.96 |
| SparseGPT | SparseGPT | 10.51 |
| SparseGPT | ADMM | 9.92 |

Using just one update step, we can almost beat SparseGPT and all gradient-based algorithms (Figure 2). The ADMM update almost converges within ten update steps, while the gradient-based algorithms need more than 100 steps. ADMM is thus clearly a faster and superior weight update algorithm compared to the gradient-based update. Our algorithm also provides a better weight update than the SparseGPT update, and at 60% sparsity, it is even better than SparseGPT with its own iterative mask selection.

Furthermore, we explicitly compare the SparseGPT and ADMM weight updates over different weight masks. We select either the Wanda or the SparseGPT mask and apply the SparseGPT or ADMM weight update (in the case of the SparseGPT mask, the SparseGPT update is a no-op, and for the ADMM update, we rewind the weights and keep the selected mask). The results are summarized in Table 2. Our ADMM weight update is always better than the SparseGPT update. Note that our mask selection is also better than the SparseGPT one (9.22 vs. 9.92 perplexity).

## 4.3 Pruning LLaMA-7B

Based on the previous observations, we set the number of update iterations to 20, which provides a pruning overhead similar to SparseGPT (Table 3) and also guarantees reasonable convergence of the weight updates. We compare our weight update after mask selection without gradual pruning (ADMM1) and our gradual pruning algorithm, which computes the mask over 15 iterations (ADMM-Grad), with Wanda and SparseGPT pruning. We prune LLaMA-7B to various sparsities and also with 2:4 structured sparsity.

Figure 2: WikiText perplexity vs. time overhead for the ADMM, Adam, and SparseGPT weight updates on LLaMA-7B. We run ADMM and Adam for 1, 10, 20, 50, and 100 update steps and test Adam with various learning rates. The top plot shows 60% sparsity; the bottom one uses 80% sparsity. SparseGPT full refers to normal SparseGPT, which also selects the pruning mask gradually. All other options just update weights over a fixed mask selected by Wanda. Our weight update is better than the one in SparseGPT and better than the gradient-based methods.

Table 3: Total pruning time for LLaMA-7B.

| Method | Total seconds |
| Wanda | 245 |
| SparseGPT | 850 |
| ADMM1 | 832 |
| ADMM-Grad | 869 |
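For the 2:4 structured variant, Section 3.2's rule (always keep the top two entries in each group of four, then prune a $2s_t$ fraction of the rest) can be sketched as follows (our own illustration, not the paper's code):

```python
import numpy as np

def mask_2_to_4(scores, s_t):
    """Structured 2:4 mask at intermediate sparsity s_t (final s_f = 0.5).

    scores: flat score array with length divisible by 4. The two
    highest-scoring entries in each group of four are always kept;
    a 2*s_t fraction of the remaining candidates is pruned, so the
    overall sparsity is s_t and reaches exact 2:4 when s_t = 0.5.
    """
    groups = scores.reshape(-1, 4)
    order = np.argsort(-groups, axis=1)                  # descending per group
    protected = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(protected, order[:, :2], True, axis=1)
    mask = np.ones(scores.size, dtype=bool)
    cand = np.nonzero(~protected.ravel())[0]             # prunable entries
    n_prune = int(round(2 * s_t * cand.size))
    drop = cand[np.argsort(scores[cand])[:n_prune]]      # lowest scores first
    mask[drop] = False
    return mask
```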
First, we measure WikiText perplexity (Table 1). We see that our weight update over a fixed Wanda mask (ADMM1) produces better results than any other algorithm at 50%, 60%, and 2:4 sparsities. Note that SparseGPT generates the pruning mask iteratively, which gives it a slight edge at higher sparsities. When selecting the mask gradually, we are superior to all previously developed algorithms, especially at higher sparsities.

Finally, we measure performance on seven zero-shot tasks (we use the same selection as the authors of Wanda): BoolQ (Clark et al., 2019), RTE (Wang et al., 2018), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC easy and challenge (Clark et al., 2018), and OpenbookQA (Mihaylov et al., 2018). Our results (Table 4) show that our algorithm is superior to the previous ones except on the RTE task. We note that results on the RTE task are slightly erratic (e.g., performance is better at 60% sparsity than at 50%). We attribute this to the small size of the RTE dataset (277 samples). Notably, we recover 30-40% of SparseGPT's performance drop on the BoolQ task at 50-70% sparsities and also on the WinoGrande task at 50-60% sparsities. When using 2:4 sparsity, we recover 20-25% of the performance drop on the WinoGrande and ARC-e tasks.

Table 4: Zero-shot accuracies on various tasks for pruned LLaMA-7B.

| Sparsity | Method | BoolQ | RTE | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Mean |
| 0% | Dense | 75.05 | 66.43 | 56.92 | 69.93 | 75.34 | 41.89 | 34.40 | 59.99 |
| 50% | Wanda | 71.22 | 55.60 | 51.85 | 66.06 | 69.11 | 36.86 | 28.80 | 54.21 |
| 50% | SparseGPT | 73.05 | 52.34 | 51.21 | 68.42 | 70.70 | 36.43 | 28.60 | 54.39 |
| 50% | ADMM-Grad | 73.63 | 52.34 | 52.33 | 69.13 | 70.74 | 37.88 | 30.20 | 55.18 |
| 60% | Wanda | 69.26 | 59.56 | 43.76 | 62.35 | 62.58 | 30.29 | 25.20 | 50.43 |
| 60% | SparseGPT | 70.7 | 62.09 | 44.84 | 65.58 | 64.14 | 30.97 | 25.20 | 51.93 |
| 60% | ADMM-Grad | 72.41 | 58.84 | 46.61 | 66.77 | 64.52 | 31.65 | 26.20 | 52.43 |
| 70% | Wanda | 59.78 | 58.12 | 28.81 | 50.82 | 32.40 | 18.85 | 14.20 | 37.57 |
| 70% | SparseGPT | 62.35 | 55.95 | 33.77 | 59.35 | 45.70 | 23.97 | 17.20 | 42.61 |
| 70% | ADMM-Grad | 66.05 | 53.79 | 36.29 | 59.74 | 50.84 | 25.50 | 18.60 | 44.40 |
| 80% | Wanda | 37.82 | 48.37 | 26.29 | 48.77 | 27.23 | 20.56 | 13.00 | 31.72 |
| 80% | SparseGPT | 41.89 | 52.70 | 27.83 | 48.38 | 30.30 | 18.77 | 13.40 | 33.32 |
| 80% | ADMM-Grad | 56.14 | 52.70 | 28.75 | 50.74 | 31.56 | 18.94 | 12.40 | 35.89 |
| 2:4 | Wanda | 69.3 | 51.99 | 42.06 | 62.75 | 60.94 | 28.07 | 24.60 | 48.53 |
| 2:4 | SparseGPT | 70.46 | 60.65 | 42.99 | 64.88 | 61.49 | 30.12 | 23.60 | 50.60 |
| 2:4 | ADMM-Grad | 70.27 | 55.59 | 44.88 | 66.14 | 64.18 | 30.97 | 25.20 | 51.03 |

Table 5: Perplexity of pruned LLaMA-2 variants on WikiText.

| Method | Sparsity | 7B | 13B | 70B |
| Dense | 0% | 5.12 | 4.57 | 3.12 |
| Wanda | 50% | 6.42 | 5.56 | 3.98 |
| SparseGPT | 50% | 6.51 | 5.63 | 3.98 |
| ADMM-Grad | 50% | 6.33 | 5.52 | 3.95 |
| Wanda | 60% | 9.71 | 7.75 | 4.98 |
| SparseGPT | 60% | 9.58 | 7.80 | 4.98 |
| ADMM-Grad | 60% | 8.70 | 7.09 | 4.81 |
| Wanda | 80% | 5e3 | 2e3 | 1e2 |
| SparseGPT | 80% | 108.87 | 94.23 | 25.86 |
| ADMM-Grad | 80% | 55.93 | 43.58 | 18.84 |
| Wanda | 2:4 | 11.02 | 8.27 | 5.16 |
| SparseGPT | 2:4 | 10.17 | 8.32 | 5.40 |
| ADMM-Grad | 2:4 | 9.74 | 7.78 | 5.19 |

## 4.4 Pruning LLaMA-2 Variants
Our algorithm generalizes and scales to bigger LLMs. We test it on variants of LLaMA-2 at various sparsity levels. Table 5 shows that our method is superior to previous ones, except at 2:4 sparsity on LLaMA2-70B. We note a quite substantial improvement of our algorithm over previous ones at 60% sparsity, and also at 2:4 sparsity on the 7B and 13B models.

## 5 Related Work

**General neural network pruning.** Post-training network pruning aims to compress neural networks by removing some of their parts (weights, neurons, layers) (LeCun et al., 1989; Han et al., 2015; Blalock et al., 2020; Liu et al., 2018). Pruning criteria vary from simple magnitude pruning (Zhu & Gupta, 2018) to sophisticated second-order approximations (Singh & Alistarh, 2020). Nowadays, there is also a focus on methods that use limited calibration data and do very little fine-tuning (Frantar & Alistarh, 2022; Hubara et al., 2021).

**LLM pruning algorithms.** Due to the sheer size of LLMs, weight pruning algorithms have focused mainly on pruning with limited calibration data and fine-tuning. SparseGPT (Frantar & Alistarh, 2023) solves the layer-wise pruning problem using multiple approximations. Wanda (Sun et al., 2023) shows that the simple product of weight magnitude and input activation norm provides a competitive pruning criterion. DS⊘T (Zhang et al., 2023) provides an iterative mask improvement algorithm. Another possibility for LLM pruning is structured pruning. One can either remove individual neurons (Ma et al., 2023; Ashkboos et al., 2024) or remove whole layers (Men et al., 2024; Gromov et al., 2024).

**Target-specific distillation and tuning.** One can also make neural networks smaller by using knowledge distillation (Hinton et al., 2015). In the LLM context, this is usually done with a specific task in mind (Hsieh et al., 2023; Fu et al., 2023; Gu et al., 2023; Ko et al., 2024), where the knowledge (logits) of a large general model is distilled into a smaller task-specific model. This is in contrast with our method, which aims to preserve the general ability of the original LLM.

## 6 Conclusions And Future Work

In this work, we presented a simple, fast, and effective post-pruning weight update algorithm based on the alternating direction method of multipliers. We showed that our algorithm converges much faster than any previously available option. Our weight update method is also theoretically sound and does not rely on any heuristic decisions or approximations. We further improved the pruning performance by doing gradual mask selection with weight updates. This achieves state-of-the-art performance in the layer-wise pruning setting, much better than previous solutions like Wanda or SparseGPT.

Our main limitation is that our update rule runs over dense matrices, and thus, during update computation, we gain no time or space savings from the potential sparsity. We hope to address this in future work. Another limitation is that one-shot pruned large models are still inferior to smaller dense ones. The pruning results can certainly be improved by using nonuniform sparsity across layers (Yin et al., 2023); for now, we leave this as future work. Another option for improvement is to use a more accurate mask selection rule, such as the one in Optimal Brain Surgeon (Hassibi et al., 1993). Finally, our algorithm provides an efficient update rule for sparse matrices and could be used in advanced optimizers like FOOF (Benzing, 2022).

## Acknowledgments

This research was supported by grant 1/0538/22 from the Slovak research grant agency VEGA.
## References

Abhinav Agarwalla, Abhay Gupta, Alexandre Marques, Shubhra Pandit, Michael Goin, Eldar Kurtic, Kevin Leong, Tuan Nguyen, Mahmoud Salem, Dan Alistarh, et al. Enabling high-sparsity foundational llama models with efficient pretraining and deployment. *arXiv preprint arXiv:2405.03594*, 2024.

Arash Ahmadian, Saurabh Dash, Hongyu Chen, Bharat Venkitesh, Stephen Gou, Phil Blunsom, Ahmet Üstün, and Sara Hooker. Intriguing properties of quantization at scale. *arXiv preprint arXiv:2305.19268*, 2023.

Saleh Ashkboos, Maximilian L Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. SliceGPT: Compress large language models by deleting rows and columns. *arXiv preprint arXiv:2401.15024*, 2024.

Frederik Benzing. Gradient descent on neurons and its link to approximate second-order optimization. In *International Conference on Machine Learning*, pp. 1817-1853. PMLR, 2022.

Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. What is the state of neural network pruning? *Proceedings of Machine Learning and Systems*, 2:129-146, 2020.

Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. *Foundations and Trends in Machine Learning*, 3(1):1-122, 2011.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877-1901, 2020.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint arXiv:1905.10044*, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. *arXiv preprint arXiv:1803.05457*, 2018.

Tim Dettmers and Luke Zettlemoyer. The case for 4-bit precision: k-bit inference scaling laws. In *International Conference on Machine Learning*, pp. 7750-7774. PMLR, 2023.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. *arXiv preprint arXiv:2208.07339*, 2022.

Elias Frantar and Dan Alistarh. Optimal brain compression: A framework for accurate post-training quantization and pruning. *Advances in Neural Information Processing Systems*, 35:4475-4488, 2022.

Elias Frantar and Dan Alistarh. SparseGPT: Massive language models can be accurately pruned in one-shot. In *International Conference on Machine Learning*, pp. 10323-10337. PMLR, 2023.

Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towards multi-step reasoning. In *International Conference on Machine Learning*, pp. 10421-10430. PMLR, 2023.

Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, et al. A framework for few-shot language model evaluation. Version v0.0.1, September 2021.

Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts. The unreasonable ineffectiveness of the deeper layers. *arXiv preprint arXiv:2403.17887*, 2024.

Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. MiniLLM: Knowledge distillation of large language models.
In *The Twelfth International Conference on Learning Representations*, 2023.

Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, and Ji Liu. Model compression with adversarial robustness: A unified optimization framework. *Advances in Neural Information Processing Systems*, 32, 2019.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. *Advances in Neural Information Processing Systems*, 28, 2015.

Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In *IEEE International Conference on Neural Networks*, pp. 293-299. IEEE, 1993.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.

Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes. *arXiv preprint arXiv:2305.02301*, 2023.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021.

Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accelerated sparse neural training: A provable and efficient method to find n:m transposable masks. *Advances in Neural Information Processing Systems*, 34:21099-21111, 2021.

Jongwoo Ko, Sungnyun Kim, Tianyi Chen, and Se-Young Yun. DistiLLM: Towards streamlined distillation for large language models. *arXiv preprint arXiv:2402.03898*, 2024.

Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. *Advances in Neural Information Processing Systems*, 2, 1989.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. *arXiv preprint arXiv:1810.05270*, 2018.

Xinyin Ma, Gongfan Fang, and Xinchao Wang. LLM-Pruner: On the structural pruning of large language models. *Advances in Neural Information Processing Systems*, 36:21702-21720, 2023.

Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen. ShortGPT: Layers in large language models are more redundant than you expect. *arXiv preprint arXiv:2403.03853*, 2024.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. *arXiv preprint arXiv:1609.07843*, 2016.

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. *arXiv preprint arXiv:1809.02789*, 2018.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*, 21(1):5485-5551, 2020.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. *Communications of the ACM*, 64(9):99-106, 2021.

Sidak Pal Singh and Dan Alistarh. WoodFisher: Efficient second-order approximation for neural network compression. *Advances in Neural Information Processing Systems*, 33:18098-18109, 2020.

Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. A simple and effective pruning approach for large language models. *arXiv preprint arXiv:2306.11695*, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023a.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. *arXiv preprint arXiv:2307.09288*, 2023b.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018.

Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, and Shuaiwen Leon Song. Flash-LLM: Enabling cost-effective and highly-efficient large generative model inference with unstructured sparsity. *arXiv preprint arXiv:2309.10285*, 2023.

Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. SmoothQuant: Accurate and efficient post-training quantization for large language models. In *International Conference on Machine Learning*, pp. 38087-38099. PMLR, 2023.

Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, and Xue Lin. Adversarial robustness vs. model compression, or both? In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 111-120, 2019.

Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, and Shiwei Liu. Outlier weighed layerwise sparsity (OWL): A missing secret sauce for pruning LLMs to high sparsity. *arXiv preprint arXiv:2310.05175*, 2023.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? *arXiv preprint arXiv:1905.07830*, 2019.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*, 2022.

Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, and Yanzhi Wang. A systematic DNN weight pruning framework using alternating direction method of multipliers. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 184-199, 2018.

Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, and Rongrong Ji. Dynamic sparse no training: Training-free fine-tuning for sparse LLMs. *arXiv preprint arXiv:2310.08915*, 2023.

Michael Zhu and Suyog Gupta. To prune, or not to prune: Exploring the efficacy of pruning for model compression. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings*. OpenReview.net, 2018. URL https://openreview.net/forum?id=Sy1iIDkPM.
Review 1:

Summary: The authors propose the use of the Alternating Direction Method of Multipliers (ADMM) for layerwise pruning of large language models (LLMs). They show that their method is more effective at pruning than previous state-of-the-art approaches like SparseGPT and Wanda, giving models with better text perplexity and zero-shot accuracies on various tasks at different levels of sparsity.

Strengths and Weaknesses:
- The proposed algorithm works very well, outperforming state-of-the-art pruning algorithms like Wanda and SparseGPT in producing models with better text perplexity and zero-shot prediction accuracies at different levels of sparsity.
- In terms of speed, the proposed method is also fast, on par with the layerwise optimization approach SparseGPT but not as fast as Wanda.
- One potential weakness of the paper is its novelty. The paper largely follows the optimization framework of SparseGPT, replacing the inner optimizer with ADMM. The mask selection heuristic largely follows the Wanda algorithm. There are relatively few new technical contributions, even though the method is very effective.
- The title of the paper says fast and optimal weight updates, but in what sense are the weight updates "optimal"? I don't see any explanation of this in the paper.

Requested Changes:
- Explanation or modification of the term "optimal" in the title of the paper.

Broader Impact Concerns: None

==================================================

Review 2:

Summary: This paper adopted ADMM for weight pruning of LLMs. ADMM has advantages over existing methods in terms of both speed and accuracy. For example, ADMM has the same per-step complexity as gradient descent-based methods, but it converges much faster in practice. The optimization problem is cast as an ADMM problem, and the update equations and algorithm are presented. The experimental results show performance superior to other existing weight pruning methods in a similar setting.

Strengths and Weaknesses:
Strengths
This paper shows how to adopt ADMM for weight pruning of LLMs and provides experimental results showing its effectiveness compared to other approaches. It is a theoretically more sound approach compared to other iterative methods such as gradient-based methods. It also works very well in practice -- faster and more accurate. It is tested on several benchmarks, and the results support the claims. A simple extension to ADMM (gradual pruning) improves the naive application of ADMM significantly. It is simple and effective.
Weaknesses
Although it is clearly better than the existing methods evaluated in the paper, the gap to the original dense model seems to be large. It is not clear if the numbers are good enough and the resulting model is useful for some real applications. Not all details of the experiments are explained in the paper. For example, how to obtain the calibration input for each weight is not very clear. The architecture of LLaMA is not explained at all. Readers could go to the original paper to understand it, but I think it is an important detail to be explained in this paper, too.

Requested Changes: It would be good to mention if the performance degradation caused by the pruning method is acceptable for downstream applications. It would be good to have discussions on the comparison with distillation/SFT. There are methods to distill and fine-tune LLMs, and they could potentially provide a better model with a smaller size.
It is probably the case that the proposed method is best-in-class in this particular setting, but it would also be important to compare it with methods that are applicable under milder conditions. It would be good to have more detailed explanations of the baseline methods to further contrast the differences between methods and to make sure that the comparisons are apples-to-apples. It would also be good to have more detailed explanations of the experimental conditions, including the points mentioned in the weaknesses section above.

Broader Impact Concerns: No concerns.

==================================================

Review 3:

Summary: The paper provides an algorithm to perform weight updates after pruning a large language model, in order to maintain performance while reducing the effective model size. The method does not require fine-tuning or continual training, and is evaluated on the Llama model at three different sizes (7B, 13B, 70B) and compared to other methods. The results show that the method is effective at maintaining performance while reducing the model size, and outperforms other methods on evaluation datasets.

Strengths and Weaknesses:

## Strengths
- The evaluations of the method are comprehensive, done on several model sizes (7B, 13B, 70B), and show that the method is effective at performing weight updates after pruning LLMs while maintaining decent performance.
- The method outperforms similar methods in terms of performance (as measured by perplexity) on evaluation datasets, and seems to generalize well as model size increases.
- The method is intuitive and well described, including an algorithm in pseudo-code presented in the main body.

## Weaknesses
- The paper claims that fine-tuning an LLM after pruning is not feasible due to the high computational cost. However, the paper does not provide any empirical evidence to support this claim, nor cite any work that has shown this to be the case. Contrary to this claim, there is work that shows how to fine-tune LLMs efficiently, e.g., [1, 2, 3].
- The work continues to use the term "optimal" to describe its post-pruning method. There is no evidence provided that the method is optimal. Optimal in this setting would mean that it is the best possible method.
- The paper could be better structured and more concise. For instance, providing the tables after the conclusion makes it harder to follow the results. Additionally, I found the conclusion section to be a bit verbose; it could be shortened. The paper also does not provide a related work section; however, the authors do discuss related work in the introduction and throughout the paper. A related work section would help to summarize the related works in one place.

### References
- [1] "LIMA: Less Is More for Alignment": https://arxiv.org/abs/2305.11206
- [2] "LoRA: Low-Rank Adaptation of Large Language Models": https://arxiv.org/abs/2106.09685
- [3] "LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning": https://arxiv.org/abs/2305.18403

Requested Changes:
- Please either soften the claim that fine-tuning an LLM after pruning is not feasible or provide empirical evidence to support this claim.
- There are several other reasons for not fine-tuning after pruning, for instance, catastrophic forgetting after fine-tuning [4]. It would be beneficial to discuss these reasons in the paper.
- Please refrain from using the term "optimal" to describe the method unless there is evidence to support this claim.
- Some of the results (Table 5) suggest that this method, and other post-pruning weight update methods, lead to inferior performance compared to using a smaller unpruned model. It would be beneficial to discuss this in the paper, and why pruning is still a useful technique.
- While the supplemental material provides all the details needed to reproduce the results, it would be beneficial to include some of the details in the main body of the paper, such as the hyperparameters used for inference in the experiments.

### References
- [4] "Understanding Catastrophic Forgetting in Language Models via Implicit Inference": https://arxiv.org/abs/2309.10105

### Nitpicks
- "showed that LLMs can pruned by removing weights" -> "showed that LLMs can be pruned by removing weights"
- "We prune LLM during one forward pass" -> "We prune the LLM during one forward pass"
- "state-of-the-art performance in layer-wise pruning setting" -> "state-of-the-art performance in the layer-wise pruning setting"

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The reviewers agree that the accuracy and efficiency claims are supported with strong empirical evidence. The authors already made some changes to the paper based on the feedback from the reviewers (e.g., better related work, changing some terms around optimality). Below I summarize some additional updates that are important to implement for the final version of the paper to improve clarity and readability. As Sections 2.3 and 3 are currently written, it is difficult for the reader to separate precise mathematical statements from informal interpretation and discussion:
- Section 2.3 quotes results that are not directly used: only the key results from Boyd et al. that you rely on should be quoted. For example, there is no point in introducing the function g(x) since it just maps to zero. I would suggest having a proposition statement in the form that you use (you are welcome to include the full theorem in the appendix). It is also crucial to state the exact conditions under which the algorithm converges to the optimum.
- The application of the result by Boyd et al. to pruning, as in Section 3, should be stated as a precise corollary (with a proof). Then the discussion can follow.

As mentioned by other reviewers, it would be great in Section 3 to explicitly contrast your proposed method with SparseGPT and Wanda (explaining how similar it is to SparseGPT and which component is being replaced).

Regarding the optimality and accuracy statements: from what I can tell, there is no “accuracy” guarantee (no theorem saying that within k updates, one will be within epsilon of the most “accurate” solution, where the latter implicitly implies connections to generalization). Thus I do not think that replacing “optimal” with “accurate” is any better. Going back to optimality, even ignoring convergence, the optimal solution might only be reached for the objective based on the subsample of the training data, and also only for the convex per-layer subproblem, which is not the same as optimality of pruning and updating the entire network. So I do agree with the reviewers that saying that the proposed algorithm is optimal in any sense is not accurate.

Some additional minor comments: (1) the way that the abstract is currently written makes it sound like you are proposing a better fine-tuning technique only (one that should be applied to an already-sparse model).
(2) In the introduction, under “Our results”, you say “Our algorithm sidesteps all of the problems of previous solutions.”. Please list these problems explicitly, as they are entangled in the previous paragraph and hard to separate. ==================================================
# Some Remarks On Identifiability Of Independent Component Analysis In Restricted Function Classes

Simon Buchholz *sbuchholz@tue.mpg.de*
Max Planck Institute for Intelligent Systems, Tübingen

Reviewed on OpenReview: *https://openreview.net/forum?id=REtKapdkyI*

## Abstract

In this short note, we comment on recent results on identifiability of independent component analysis. We point out an error in earlier works and clarify that this error cannot be fixed, as the chosen approach is not sufficiently powerful to prove identifiability results. In addition, we explain the necessary ingredients to prove stronger identifiability results. Finally, we discuss and extend the flow-based technique to construct spurious solutions for independent component analysis problems and provide a counterexample to an earlier identifiability result.

## 1 Introduction

Independent Component Analysis (ICA) is a principled framework for representation learning. The goal is to recover independent factors of variation from an observed mixture of the sources. It is well known that ICA is not identifiable without additional assumptions on the mixing function (Hyvärinen & Pajunen, 1999). On the other hand, it is known that for a linear mixing function the independent components are identifiable (Comon, 1994). So, a natural question is whether there are larger function classes such that ICA is nevertheless identifiable within this function class. Recent works investigated the question of identifiability for volume preserving functions (Yang et al., 2022) (in the auxiliary variable setting) and volume preserving orthogonal coordinate transformations (Zheng et al., 2022a;b).

Unfortunately, proofs of identifiability results are error-prone. Given the recent surge of interest in identifiability results for latent variable models, we think that it becomes increasingly important to gain a clear understanding of the available proof strategies and their limitations. In this regard, the purpose of this note is twofold. First, we point out a mistake in Lemma 1 of Yang et al. (2022), which is also used crucially in the proof of Zheng et al. (2022a), invalidating the main theoretical results of those two works. Second, we review more generally techniques to prove identifiability or non-identifiability of ICA in restricted function classes. In particular, we show that their proof strategy is probably too restrictive to show identifiability in interesting function classes. We also review some rigidity results that characterize function classes defined by a local condition on the gradient, and explain how such an approach could be used to prove identifiability for ICA. Finally, we discuss and extend a recent construction of counterexamples for identifiability of ICA based on the flow generated by suitable vector fields (Buchholz et al., 2022). This allows us to construct a counterexample to the main result, Theorem 1, in Yang et al. (2022). We do not resolve whether a variant of the main result (Proposition 2.1) of Zheng et al. (2022a), which also appears as Proposition 2 in Zheng et al. (2022b), holds true, i.e., whether ICA with volume preserving orthogonal coordinate transformations is identifiable. However, this note clarifies that a complete proof requires more involved techniques.

This work is structured as follows. In Section 2 we introduce the setting of ICA in restricted function classes and define identifiability in the context of ICA.
Then we discuss the limitations of the proofs in the earlier works (Zheng et al., 2022a; Yang et al., 2022) and sketch potentially more general approaches in Section 3. In Section 4 we discuss how the flows of suitable vector fields can be used to establish non-identifiability for ICA problems, which generalizes earlier results.

**Notation** We denote the set of skew-symmetric matrices and diagonal matrices by

$$\mathrm{Skew}(d)=\{A\in\mathbb{R}^{d\times d}\,|\,A^{\top}+A=0\},\quad\mathrm{Diag}(d)=\{A\in\mathbb{R}^{d\times d}\,|\,A\text{ is a diagonal matrix}\}.\tag{1}$$

By $\mathrm{Perm}_{\pm}(d)$ we denote the set of signed permutation matrices, i.e.,

$$\mathrm{Perm}_{\pm}(d)=\{A\in\mathbb{R}^{d\times d}\,|\,B\text{ with }b_{ij}=|a_{ij}|\text{ is a permutation matrix}\}.\tag{2}$$

We denote the orthogonal and special orthogonal group as usual by

$$\mathrm{O}(d)=\{Q\in\mathbb{R}^{d\times d}\,|\,Q^{\top}Q=\mathrm{Id}_{d}\},\quad\mathrm{SO}(d)=\{Q\in\mathbb{R}^{d\times d}\,|\,Q^{\top}Q=\mathrm{Id}_{d},\ \det Q=1\}.\tag{3}$$

We use the language of differential geometry freely throughout the text and refer to any standard textbook for definitions of, e.g., the tangent space of a manifold.

## 2 Setting

To fix notation and set the stage, we briefly review identifiability of ICA. This material is standard and close to Buchholz et al. (2022). ICA deals with data generated according to

$$x=f(s),\quad s\sim p(s)=\prod_{i}p_{i}(s_{i}),\tag{4}$$

where $f:\mathbb{R}^{d}\to\mathbb{R}^{d}$ is an invertible mixing and $p$ is a probability density with independent components $s_i$ (also called factors of variation). Without further indicating this, we will always assume that all considered functions $f$ are bijective on their image and sufficiently smooth with smooth inverse. The goal is to reconstruct the sources $s$ given only the observations $x$ up to some tolerable ambiguities, i.e., one tries to learn $f^{-1}$. It is well known that identification in any meaningful sense is not possible for general nonlinear functions $f$. There are two roads that can lead to identifiability. One is the restriction of the function class, i.e., it is possible to obtain (partial) identifiability results by restricting the mixing to be in some function class $\mathcal{F}$. The alternative is to consider multi-view settings introduced in Hyvärinen et al. (2019), where we have an auxiliary variable $u$ such that conditional on $u$

$$x_{u}=f(s_{u}),\quad s_{u}\sim p(s|u)=\prod_{i}p_{i}(s_{i}|u).\tag{5}$$

Then it can be shown that given a sufficiently diverse set of conditional distributions $p(s|u)$ the mixing $f$ can be identified (Hyvärinen et al., 2019; Khemakhem et al., 2020). In all ICA problems we can only hope to identify the mixing up to certain ambiguities, e.g., for linear mixings we cannot identify the scale and the order of the latent variables.
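To make the generative model in equation 4 concrete, the following minimal Python sketch samples from it; the Laplace source density and the volume preserving shear mixing are our own illustrative assumptions, not choices taken from the works discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 2

# independent non-Gaussian sources, s ~ p(s) = prod_i p_i(s_i)
s = rng.laplace(size=(n, d))

# an assumed volume preserving nonlinear mixing: a shear has det Df = 1 everywhere
def f(s):
    return np.stack([s[:, 0] + np.sin(s[:, 1]), s[:, 1]], axis=-1)

x = f(s)  # observations; ICA asks to recover s from x up to S_max
```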
We now define the maximal possible ambiguity set that still allows us to identify the factors of variation. For this we consider the set of 1-dimensional diffeomorphisms that reparametrize one coordinate,

$$\mathcal{F}_{1d\text{-reparam}}=\{f:\mathbb{R}\to\mathbb{R}\,|\,f\text{ is bijective and }f^{\prime}>0\}.\tag{6}$$

Then we define coordinate-wise reparametrizations by

$$\mathcal{F}_{\text{reparam}}=\{f:\mathbb{R}^{d}\to\mathbb{R}^{d}\,|\,f(s)=(h_{1}(s_{1}),\ldots,h_{d}(s_{d})),\ h_{i}\in\mathcal{F}_{1d\text{-reparam}}\}.\tag{7}$$

Now we define the maximal ambiguity set of coordinate-wise reparametrizations and coordinate permutations,

$$\mathcal{S}_{\mathrm{max}}=\{f:\mathbb{R}^{d}\to\mathbb{R}^{d}\,|\,f=A\circ\bar{f}\text{ where }A\in\mathrm{Perm}_{\pm}(d),\ \bar{f}\in\mathcal{F}_{\mathrm{reparam}}\}.\tag{8}$$

Now that we have defined the maximal possible ambiguity set that allows identification of the individual factors of variation (although relabelled and rescaled), we can define identifiability of ICA in some function class $\mathcal{F}$. It is convenient to use the pushforward of measures: if $x=f(s)$ and $s\sim\mathbb{P}$ then the distribution of $x$ is given by the pushforward measure $f_*\mathbb{P}$.

**Definition 1.** *We call ICA identifiable in some function class $\mathcal{F}$ for some class of admissible base distributions $\mathcal{P}$ if the relation*

$$f_{*}\mathbb{P}=g_{*}\mathbb{Q}\tag{9}$$

*for $f,g\in\mathcal{F}$ and $\mathbb{P},\mathbb{Q}\in\mathcal{P}$ implies that $g^{-1}\circ f|_{\Omega}=h|_{\Omega}$ for some $h\in\mathcal{S}_{\mathrm{max}}$ where $\Omega=\mathrm{Supp}(\mathbb{P})$.*

In other words, two mechanisms generating the same observational distribution agree up to relabelling and reparametrizing the coordinates. This can be extended to the auxiliary variable case by requiring the same conclusion under the assumption that $f_*\mathbb{P}_u=g_*\mathbb{Q}_u$ for all $u$ where $\mathbb{P}_u=p(\cdot|u)\in\mathcal{P}$ and $\mathbb{Q}_u=q(\cdot|u)\in\mathcal{P}$. For our construction in Section 4 below we show that even a weaker form of identifiability does not hold. Namely, we assume that the distribution of the sources is known.

**Definition 2.** *We call ICA in some function class $\mathcal{F}$ identifiable given the source distributions $\mathbb{P}_u=p(\cdot|u)$ if the relation*

$$f_{*}\mathbb{P}_{u}=g_{*}\mathbb{P}_{u}\tag{10}$$

*for all $u$ implies that $(g^{-1}\circ f)|_{\Omega}=h|_{\Omega}$ for some $h\in\mathcal{S}_{\mathrm{max}}$ where $\Omega=\bigcup_{u}\mathrm{Supp}(\mathbb{P}_{u})$.*

We remark that equation 10 implies $h_*\mathbb{P}_u=\mathbb{P}_u$ and thus restricts $h$ substantially. Now we introduce the function classes of interest. We consider function classes that are characterized by a pointwise restriction of their gradient. For $\Omega\subset\mathbb{R}^{d}$ open and connected we define

$$\mathcal{F}_{\mathrm{OCT}}=\{f:\Omega\to\mathbb{R}^{d}\,|\,Df^{\top}Df\in\mathrm{Diag}(d)\},\tag{11}$$
$$\mathcal{F}_{\mathrm{vol}}=\{f:\Omega\to\mathbb{R}^{d}\,|\,|\det Df|=1\}.\tag{12}$$

Those are the function classes considered in Zheng et al. (2022b); Yang et al. (2022); Buchholz et al. (2022); Gresele et al. (2021) and we refer to those works for motivation. The function class $\mathcal{F}_{\mathrm{OCT}}$ has the property that $f\circ h\in\mathcal{F}_{\mathrm{OCT}}$ whenever $h\in\mathcal{F}_{\mathrm{reparam}}$ and $f\in\mathcal{F}_{\mathrm{OCT}}$. Therefore, up to regularity issues, identifiability given the source distribution (Definition 2) is equivalent to identifiability in the unconditional case (Definition 1). This is not true for the auxiliary variable case and other function classes.
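Since both classes are defined by pointwise conditions on the Jacobian, membership is easy to probe numerically. The following sketch is our own illustration (the finite-difference helper and the example map are assumptions, not part of the works discussed); it checks the conditions $Df^{\top}Df\in\mathrm{Diag}(d)$ and $|\det Df|=1$ at a sample point.

```python
import numpy as np

def jacobian(f, s, eps=1e-6):
    # central finite-difference approximation of Df(s)
    d = s.size
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        J[:, j] = (f(s + e) - f(s - e)) / (2 * eps)
    return J

def in_F_OCT(f, s, tol=1e-5):
    J = jacobian(f, s)
    G = J.T @ J
    return np.abs(G - np.diag(np.diag(G))).max() < tol  # Df^T Df diagonal?

def in_F_vol(f, s, tol=1e-5):
    return abs(abs(np.linalg.det(jacobian(f, s))) - 1) < tol  # |det Df| = 1?

# a rotation composed with a coordinate-wise reparametrization lies in F_OCT
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
f = lambda s: R @ np.array([np.tanh(s[0]), s[1] ** 3 + s[1]])
s0 = np.array([0.4, -0.2])
print(in_F_OCT(f, s0), in_F_vol(f, s0))  # True, False (generically)
```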
## 3 Proof Techniques For Identifiability

The main goal of this section is to point out an error in the recent works Yang et al. (2022); Zheng et al. (2022a;b)¹ and, in addition, to discuss more generally the limitations of their approach. We structure this section in two parts. First, we discuss a folklore technique to exploit the independence assumption. Then we discuss how the proof proceeds in the mentioned works, what is really shown, and finally, we explain why this cannot be used to give a full identifiability result without further ingredients. For readers interested in the details, we provide an extended discussion in Appendix A.

¹ The paper Zheng et al. (2022b) is an extension of the earlier workshop paper Zheng et al. (2022a). We will only refer to the work Zheng et al. (2022a) but our remarks apply equally to Proposition 2 in Zheng et al. (2022b). We emphasize that this note only refers to the results in Section 4 of that paper and does not comment on the results on sparsity.

## 3.1 A Relation For Jacobian And Hessian

Let us fix a function class $\mathcal{F}$ for which we want to show identifiability. Suppose we have $\mathbb{P},\mathbb{Q}$ and $f,g\in\mathcal{F}$ such that

$$f(s)\stackrel{\mathcal{D}}{=}g(s^{\prime}),\quad s\sim\mathbb{P},\ s^{\prime}\sim\mathbb{Q},\tag{13}$$

and $p$ and $q$ have $C^2$ densities. Then the standard approach to exploit the independence (manifest in the factorization of the densities) is to use that for $i\neq j$

$$\partial_{i}\partial_{j}\ln(p(x))=\partial_{i}\partial_{j}\sum_{i}\ln(p_{i}(x_{i}))=0.\tag{14}$$

We introduce the notation $\tilde{p}=\ln(p)$ and $\tilde{q}=\ln(q)$ for the log-densities. We write $h=g^{-1}\circ f$ and then the condition equation 13 can be expressed as

$$s^{\prime}=h(s).\tag{15}$$

Then we can conclude that the following relation holds.

**Lemma 1.** *Assume that $h$ is defined as above and volume preserving. Then the relation*

$$Dh^{\top}\Omega Dh+\sum_{k}v_{k}D^{2}h_{k}=\Lambda\in\mathrm{Diag}(d)\tag{16}$$

*holds, where $\Omega=\mathrm{Diag}((\partial_{1}^{2}\tilde{q})\circ h,\ldots,(\partial_{d}^{2}\tilde{q})\circ h)$ and $\Lambda=\mathrm{Diag}(\partial_{1}^{2}\tilde{p},\ldots,\partial_{d}^{2}\tilde{p})$ are diagonal matrices at each point and $v=(\nabla\tilde{q})\circ h$.*

The proof follows by direct calculation and can be found in Appendix C.

**Remark 1.** Note that both works Yang et al. (2022); Zheng et al. (2022a) use the assumption that the mixing functions $f,g$ are volume preserving. Moreover, adding this assumption makes the considered function class $\mathcal{F}$ smaller, so identifiability becomes easier. In particular, the argument extends to larger function classes without this restriction; e.g., showing that the argument does not allow to prove identifiability for $\mathcal{F}_{\mathrm{OCT}}\cap\mathcal{F}_{\mathrm{vol}}$ implies that the same is true for $\mathcal{F}_{\mathrm{OCT}}$.
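The identity behind Lemma 1 (the relation between the Hessian of the log-density of the pushforward and $Dh$, $D^2h$, before imposing that $p$ factorizes) can be checked numerically. The sketch below is our own illustration under assumed choices: $q$ a standard Gaussian product density and $h$ a volume preserving shear; it compares a finite-difference Hessian of $\ln p$ with the left-hand side of equation 16.

```python
import numpy as np

# assumed example: q = standard Gaussian product, h a shear with det Dh = 1
h = lambda s: np.array([s[0] + np.sin(s[1]), s[1]])
log_p = lambda s: -0.5 * np.sum(h(s) ** 2)  # ln p = ln q(h(s)) since |det Dh| = 1

def hessian(f, s, eps=1e-4):
    d = s.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (f(s + ei + ej) - f(s + ei - ej)
                       - f(s - ei + ej) + f(s - ei - ej)) / (4 * eps ** 2)
    return H

s = np.array([0.7, -0.3])
# for standard Gaussian q: Omega = -Id and v = -h(s)
Dh = np.array([[1.0, np.cos(s[1])], [0.0, 1.0]])
D2h1 = np.array([[0.0, 0.0], [0.0, -np.sin(s[1])]])  # Hessian of h_1; h_2 is linear
lhs = Dh.T @ (-np.eye(2)) @ Dh + (-h(s)[0]) * D2h1
print(np.allclose(lhs, hessian(log_p, s), atol=1e-5))  # True
```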
## 3.2 Proofs In Yang et al. (2022); Zheng et al. (2022a)

The main problem of the proofs in Yang et al. (2022); Zheng et al. (2022a) is that they both rely on Lemma 1 in Yang et al. (2022), which is an erroneous version of Lemma 1 above. Essentially they claim, rephrased in our notation, that

$$(Dh(s)^{\top})\Omega(Dh(s))\in\mathrm{Diag}(d),\tag{17}$$

i.e., when comparing with equation 16 we see that the $v$ term is missing and $\Omega$ is constant (because they assume a Gaussian density for $q$). Indeed, the last relation is the matrix form of Equation (20) in Yang et al. (2022), which states (slightly adapted to our notation)

$$\sum_{i=1}^{d}\Theta_{i}(u)\frac{\partial h_{i}}{\partial s_{j}}(s)\frac{\partial h_{i}}{\partial s_{k}}(s)=0.\tag{18}$$

To bridge the different notations, we pinpoint the problem with their lemma in Appendix D. Given that both proofs rely on the wrong relation equation 17, it is clear that they are not valid as stated. Nevertheless, we want to clarify here in a bit more detail that it is highly likely that the proofs cannot be fixed by using similar arguments together with the (correct) relation equation 16, and that more involved arguments are required.

Let us first explain how the proofs in the aforementioned works are finished based on the relation equation 17. In both works we can find matrices $\Lambda_1,\Lambda_2,\Omega_1,\Omega_2$ such that

$$Dh^{\top}\Omega_{i}Dh=\Lambda_{i}\tag{19}$$

for $i=1$ and $i=2$. Indeed, in the auxiliary variable case in Yang et al. (2022) they conclude that the relation equation 17 holds for the two pairs of densities $(p^{u_1},q^{u_1})$ and $(p^{u_2},q^{u_2})$. In Zheng et al. (2022a) it is assumed that $h\in\mathcal{F}_{\mathrm{OCT}}$, i.e., $Dh^{\top}Dh\in\mathrm{Diag}(d)$. By defining $\Omega_2=\mathrm{Id}$ and $\Lambda_2=Dh^{\top}Dh$ (and $\Omega_1=\Omega$, $\Lambda_1=\Lambda$ with $\Omega,\Lambda$ from equation 16) we obtain that equation 19 also holds in this case. In both papers, they assume in addition that the ratios $(\Omega_1)_{ii}/(\Omega_2)_{ii}$ are pairwise distinct, and their assumptions entail that the diagonal entries of $\Omega_i$ are positive. Then the following lemma is used.

**Lemma 2** (Lemma 2 in Yang et al. (2022)). *Suppose that there are diagonal matrices $\Omega_1,\Omega_2,\Lambda_1,\Lambda_2$ with positive diagonal entries such that the $(\Omega_1)_{ii}/(\Omega_2)_{ii}$ are pairwise different and*

$$X^{\top}\Omega_{i}X=\Lambda_{i}\tag{20}$$

*for $i=1,2$ and some $X$. Then $X=DP$ for a diagonal matrix $D$ and a permutation matrix $P$.*

Thus we conclude that for all points $s$ such that equation 17 holds, $Dh(s)$ is a scaled permutation. Let us now investigate under which conditions the general relation equation 16 simplifies to equation 17. There are two cases of interest:

**The function $h$ is linear.** We observe that if $D^2h_i(s)=0$ for all $i$ and $s$ then equation 16 simplifies to equation 17. The second derivative vanishes iff $h$ is linear. Thus we recover the well known identifiability result for linear ICA, but no more general result.

**The vector $v$ vanishes.** When $v=0$, which happens whenever $\nabla q(h(s))=0$, the relation equation 17 also holds. In particular, when $q$ is a Gaussian density this is true exactly at the mean of the Gaussian. So from the results in Yang et al. (2022); Zheng et al. (2022a) we can conclude that in both considered settings the gradient $Dh(s)$ of the mixing function at the mean of the Gaussian prior is a scaled permutation, which is a non-trivial observation. However, we cannot conclude anything from the true relation equation 16 as soon as $v=\nabla q(h(s))\neq0$² when we do not assume additional restrictions on $D^2h$.

² Note that considering a constant density $q$ does not help. While this ensures $v=0$, it also implies $\Omega=0$ as $\Omega$ collects the second derivatives of $\tilde{q}$, so the relation equation 16 is trivially satisfied.

We note that when not assuming that $h$ is linear, and away from the points where $\nabla q(h(s))$ vanishes, we cannot infer any information on $Dh(s)$ from equation 16 because we have not derived any restriction on the term involving the Hessian of $h$. Next, we show that this remains true even when carefully using all available restrictions on $D^2h$. Indeed, in the settings considered in Yang et al. (2022); Zheng et al. (2022a), $D^2h$ cannot be arbitrary since we assume $h\in\mathcal{F}$ where $\mathcal{F}$ is characterized by $Dh\in M$ for some manifold $M$; e.g., $Dh$ satisfies $Dh^{\top}Dh\in\mathrm{Diag}(d)$ in Zheng et al. (2022a). Then we can conclude that $\partial_{i}Dh(s)\in T_{Dh(s)}M$, the tangent space of the manifold $M$ at $Dh(s)$, and this is the tightest restriction we can obtain by looking just at $s$. In Appendix A we discuss that even when using the additional condition $\partial_{i}Dh(s)\in T_{Dh(s)}M$ for all $i$ it is not possible to extract any useful restriction from the relation equation 16 for points with $v\neq0$. This clarifies that any identifiability proof based on the relation equation 16 cannot just argue locally by considering a fixed point $s$ but has to exploit the global partial differential equation that the relation entails.
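As a quick numerical sanity check of the easy direction of Lemma 2 (that scaled permutations satisfy equation 20; the substance of the lemma is the converse implication), one can verify, e.g.:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
P = np.eye(d)[rng.permutation(d)]            # permutation matrix
D = np.diag(rng.uniform(0.5, 2.0, size=d))   # positive diagonal scaling
X = D @ P                                    # scaled permutation X = DP

for _ in range(2):                           # two diagonal Omega_i with positive entries
    Omega = np.diag(rng.uniform(0.5, 2.0, size=d))
    L = X.T @ Omega @ X
    print(np.allclose(L, np.diag(np.diag(L))))  # True: X^T Omega X is diagonal
```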
## 3.3 Takeaway

We conclude that for settings of interest it is not possible to show identifiability based on exploiting the relation in Lemma 1 at a single point. Instead, we need to exploit this relation globally, i.e., the equation must be viewed as a Partial Differential Equation (PDE) and identifiability amounts to showing certain properties of all solutions of this PDE. This approach can provide additional insights into the considered function class, and in Appendix B we briefly review general results in this direction. Note that we do not rule out that $Dh$ can be restricted by considering higher order derivatives of the log-density, e.g., by considering $\partial_i\partial_j\partial_k\tilde{q}(h(s))$. In fact, when restricting to analytic functions it is clear that it is (in theory) sufficient to consider derivatives of all orders at a single point. In addition, interesting relations for higher order derivatives were shown (Yang, 2022), e.g., for $i\neq j$ and $v=0$

$$\partial_{i}^{r}h_{j}=0.\tag{21}$$

However, we think it is unlikely that this is a promising road, for the same reason as above: the highest order derivatives $D^{k}h$ have sufficiently many free parameters ($O(d^{k+1})$) to satisfy the resulting $O(d^{k})$ equations for every value of $Dh$.

## 4 Counterexamples For Identifiability Based On Flows

In this section we show how flows can be used to construct spurious solutions in ICA settings, and we use this to provide a counterexample to Theorem 1 in Yang et al. (2022). This extends and elaborates on results in Buchholz et al. (2022), where it was shown that ICA with volume preserving maps is not identifiable. The main idea is, given a data generating mechanism $x\sim f(s)$ where $s\sim\mathbb{P}$, to construct a family $\Phi_t:\mathbb{R}^{d}\to\mathbb{R}^{d}$ such that $(\Phi_t)_*\mathbb{P}=\mathbb{P}$, $f\circ\Phi_t\in\mathcal{F}$, and $\Phi_t$ is the flow of a possibly time-dependent vector field $X_t:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}$ such that

$$\partial_{t}\Phi_{t}(s)=X_{t}(\Phi_{t}(s)).\tag{22}$$

This procedure allows us to construct spurious solutions in certain cases. It blends particularly well with volume preserving transformations because the density $p_t$ of $(\Phi_t)_*\mathbb{P}$ evolves according to the equation

$$\partial_{t}p_{t}+\mathrm{Div}(X_{t}p_{t})=0.\tag{23}$$

In particular, the flow of $X_t$ preserves the measure $\mathbb{P}$ if and only if $\mathrm{Div}(X_t p)=0$. To illustrate this construction, we consider the case of general nonlinear ICA. Suppose $\mathbb{P}$ is the uniform measure on $[0,1]^d$. Then any divergence free vector field $X$ with compact support in $[0,1]^d$ yields a family of spurious solutions $f\circ\Phi_t$. Note that there are many such divergence free vector fields; one construction was given in Hyvärinen & Pajunen (1999) based on radius dependent rotations, and this construction can also be phrased in terms of flows of suitable vector fields. A simple general construction of divergence free vector fields with compact support is to take any smooth function $\varphi$ with compact support and then consider the vector field $X$ such that $X_1=\partial_2\varphi$, $X_2=-\partial_1\varphi$, and $X_i=0$ for $i>2$. Then it is easy to see that $X$ is divergence free. This also shows that it is possible to mix just two arbitrary factors of variation.
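A minimal numerical illustration of this stream-function construction (our own sketch; for simplicity $\varphi$ below is a Gaussian bump, which is rapidly decaying rather than compactly supported):

```python
import numpy as np

phi = lambda s: np.exp(-np.sum(s ** 2))                # smooth "stream function"
X = lambda s: np.array([-2 * s[1], 2 * s[0]]) * phi(s)  # (d2 phi, -d1 phi)

def div_X(s, eps=1e-5):
    # finite-difference divergence d1 X_1 + d2 X_2
    e1, e2 = np.array([eps, 0.0]), np.array([0.0, eps])
    return ((X(s + e1)[0] - X(s - e1)[0]) + (X(s + e2)[1] - X(s - e2)[1])) / (2 * eps)

rng = np.random.default_rng(2)
print(max(abs(div_X(s)) for s in rng.standard_normal((5, 2))))  # ~ 0
```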
In Buchholz et al. (2022) it was also discussed how this type of construction leads to spurious solutions for the class of volume preserving maps. This requires us to construct a divergence free vector field $X$ (ensuring that $f\circ\Phi_t$ is volume preserving) that is orthogonal to $\nabla p$, ensuring that $(\Phi_t)_*\mathbb{P}=\mathbb{P}$, because for divergence free vector fields

$$\mathrm{Div}(Xp)=p\,\mathrm{Div}(X)+X\nabla p=X\nabla p.\tag{24}$$

Then it is easy to show (see Buchholz et al. (2022)) that the vector field given by

$$X_{1}=-\partial_{2}p,\ X_{2}=\partial_{1}p,\ X_{i}=0,\ i>2\tag{25}$$

is divergence free and orthogonal to $\nabla p$. We now show that this construction can be generalized to the case of two auxiliary variables.

## 4.1 A Counterexample To Theorem 1 In Yang et al. (2022)

We now show that a similar construction can be extended to the auxiliary variable setting. This gives a counterexample to the main result, Theorem 1, in Yang et al. (2022). Consider the following two densities

$$p_{1}(s)=p(s|u^{(1)})\sim\mathcal{N}(0,\mathrm{Id}_{3}),\tag{26}$$
$$p_{2}(s)=p(s|u^{(2)})\sim\mathcal{N}(0,\Sigma),\tag{27}$$

where $\mathcal{N}$ denotes a normal distribution and $\Sigma=\mathrm{Diag}(\sigma_{1}^{2},\sigma_{2}^{2},\sigma_{3}^{2})$. We use $\mathbb{P}_1$ and $\mathbb{P}_2$ to denote the measures with densities $p_1$ and $p_2$. We write $a=\sigma_{1}^{-2}$, $b=\sigma_{2}^{-2}$, $c=\sigma_{3}^{-2}$. We assume that the $\sigma_i$, and thus $a,b,c$, are pairwise different. It can be checked that all assumptions of Theorem 1 in Yang et al. (2022) are satisfied. Thus the theorem claims that if $f_1,f_2\in\mathcal{F}_{\mathrm{vol}}$ and $(f_1)_*\mathbb{P}_j=(f_2)_*\mathbb{P}_j$ for $j=1,2$ then $f_1^{-1}\circ f_2\in\mathcal{S}_{\mathrm{max}}$, i.e., it is a concatenation of a permutation and a coordinate-wise transformation. We now construct a flow $\Phi_t$ such that $(\Phi_t)_*\mathbb{P}_j=\mathbb{P}_j$ for $j=1,2$, $\Phi_t$ is volume preserving, and $\Phi_t\notin\mathcal{S}_{\mathrm{max}}$. This provides a generic counterexample to the theorem as $f\circ\Phi_t$ generates the same observational distribution as $f$. The map $\Phi_t$ will be the flow of the following vector field (denoting $s=(s_1,s_2,s_3)$)

$$X(s)=\begin{pmatrix}(c-b)s_{2}s_{3}\\(a-c)s_{1}s_{3}\\(b-a)s_{1}s_{2}\end{pmatrix}.\tag{28}$$

This means $\Phi:[0,T]\times\mathbb{R}^{3}\to\mathbb{R}^{3}$ is characterized by

$$\partial_{t}\Phi_{t}(s)=X(\Phi_{t}(s)),\quad\Phi_{0}(s)=s.\tag{29}$$

An illustration of the resulting map $\Phi_t$ can be found in Figure 1; note that the standard unit vectors are fixed points of $\Phi_t$ but otherwise the dynamics is highly mixing.

Figure 1: Illustration of the flow $\Phi_t$ with $a=1$, $b=2$, $c=3$ by a frontal view on the sphere $\{s:|s|=1\}$. (a) color encoding by $s_3$ coordinate for $t=0$, (b) color map of $\Phi_{t=1}$, (c) color map of $\Phi_{t=10}$, (d) color map of $\Phi_{t=100}$.

**Lemma 3.** *For the flow $\Phi_t$ defined above, the following results hold:*
1. *The flow exists for all times $t$ and all $s$.*
2. *The flow is analytic.*
3. *The map $\Phi_t$ is volume preserving for any $t$.*
4. *The map $\Phi_t$ preserves $p_1$ and $p_2$ for all $t$, i.e., $(\Phi_t)_*\mathbb{P}_j=\mathbb{P}_j$.*
5. *The map $\Phi_t$ mixes the coordinates.*

The proof of this Lemma can be found in Appendix C.
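The counterexample can also be probed empirically. The following sketch (our own illustration, using a basic RK4 integrator) pushes samples from $\mathbb{P}_1$ and $\mathbb{P}_2$ through the flow and checks that the empirical covariances, and hence the Gaussian laws, are preserved, while the map visibly mixes the coordinates:

```python
import numpy as np

a, b, c = 1.0, 2.0, 3.0

def X(s):
    s1, s2, s3 = s[:, 0], s[:, 1], s[:, 2]
    return np.stack([(c - b) * s2 * s3, (a - c) * s1 * s3, (b - a) * s1 * s2], axis=-1)

def flow(s, t=1.0, n=500):
    # classical RK4 integration of ds/dt = X(s)
    dt = t / n
    for _ in range(n):
        k1 = X(s); k2 = X(s + dt / 2 * k1); k3 = X(s + dt / 2 * k2); k4 = X(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

rng = np.random.default_rng(0)
for cov in (np.eye(3), np.diag([1 / a, 1 / b, 1 / c])):   # covariances under P_1, P_2
    s = rng.multivariate_normal(np.zeros(3), cov, size=50_000)
    z = flow(s)
    print(np.round(np.cov(z.T), 2))                          # ~ cov: both laws preserved
    print(np.round(np.corrcoef(z[:, 0], s[:, 0])[0, 1], 2))  # < 1: coordinates mix
```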
## 4.2 A General Construction

Finally, we consider a more general construction that allows us to construct local deformations but requires some differential geometry. Let us first give some intuition about the following result. We want to create a vector field $X$ such that its flow $\Phi_t$ is volume preserving and satisfies $(\Phi_t)_*\mathbb{P}_u=\mathbb{P}_u$ for a collection of measures $\mathbb{P}_u$. The equivalent conditions for $X$ are $\mathrm{Div}(X)=0$ and $X\nabla p_u=0$. In other words, the vector field must be divergence free and must preserve the level sets of each $p_u$. As each level set is a codimension 1 submanifold, the intersection of $k$ such level sets is a submanifold of dimension $d-k$ (under some genericity condition). If $d-k\geq2$ we can pick a divergence free vector field with compact support on those submanifolds consistently, thus showing that identifiability fails. The following lemma makes those statements precise.

**Lemma 4.** *Consider $k$ smooth densities $p_1,\ldots,p_k$ with $k\leq d-2$. Suppose $s_0$ is a point such that the $\nabla p_i(s_0)$ are linearly independent. Then there is a non-vanishing vector field $X$ with compact support such that*

$$\mathrm{Div}(X)=\mathrm{Div}(p_{j}X)=0.\tag{30}$$

The proof is based on standard techniques from differential geometry and can be found in Appendix C. This lemma has the following corollary.

**Corollary 1.** *Consider ICA of volume preserving functions with $k\leq d-2$ auxiliary variables and densities $p_i$ with $1\leq i\leq k$. Assume that there is a point $s_0$ such that the $\nabla p_i(s_0)$ are independent vectors. Then ICA is not identifiable given the source distribution.*

*Proof.* This is a direct consequence of the lemma above. Note that $X$ cannot be aligned with one coordinate axis, i.e., of the form $X=e_i f$, because it is divergence free with compact support. Thus its flow mixes the coordinates, i.e., $\Phi_t\notin\mathcal{S}_{\mathrm{max}}$.

## 4.3 Takeaway

Let us briefly put this result into context using some heuristics. Assume we know the densities $p_u$ and the mixing is volume preserving. Parameter counting suggests that if there are $d$ values of $u$ we can infer $s$ from the vector $(p_1(s),\ldots,p_d(s))$. Then the relation $q_u(x)=q_u(f(s))=p_u(s)$ implies that volume preserving ICA with $d$ auxiliary variables should be identifiable given the source distribution. Similarly, if $f$ is not assumed to be volume preserving, we can use that $\ln(q_{u_1}(x))-\ln(q_{u_2}(x))=\ln(p_{u_1}(s))-\ln(p_{u_2}(s))$ because the Jacobian determinant cancels. Thus ICA with $d+1$ auxiliary variables should be identifiable for arbitrary nonlinear mixings. This suggests that the bound above is rather tight, and restricting mixings to be volume preserving does not lead to substantially stronger identifiability results than for general nonlinear functions. Since volume preserving maps are characterized by a single pointwise condition ($\dim(\{A:|\det A|=1\})=d^{2}-1$) this is not very surprising. Those heuristics can be compared to the rigorous results in Hyvärinen et al. (2019), where it was shown that under some non-degeneracy condition nonlinear ICA is identifiable given $2d+1$ values of the auxiliary variable, and to Khemakhem et al. (2020), where a similar result with again an $O(d)$ scaling was shown for a parametric family of source distributions. It is currently an open question whether combining the auxiliary variable setting with stronger restrictions on the function class gives new identifiability results. In particular, results for $\mathcal{F}_{\mathrm{OCT}}$ in the auxiliary variable setting are of great interest.
This contrasts with the identifiability results for the auxiliary variable setting, which allow for mostly algebraic arguments when the number of auxiliary variables is sufficiently large in comparison to the number of free parameters. In addition, we described new constructions of suitable flows, which are a powerful technique to show non-identifiability results.

Acknowledgements We thank Alexander Hägele for helpful comments and suggestions. This work was supported by the Tübingen AI Center.

## References

Simon Buchholz, Michel Besserve, and Bernhard Schölkopf. Function classes for identifiable nonlinear independent component analysis, 2022. URL https://arxiv.org/abs/2208.06406.

Philippe G. Ciarlet. *Mathematical Elasticity, Volume I: Three-Dimensional Elasticity*. Classics in Applied Mathematics Series. Society for Industrial and Applied Mathematics, 2021. ISBN 9781611976779.

Pierre Comon. Independent component analysis, a new concept? *Signal Processing*, 36(3):287–314, 1994. ISSN 0165-1684. doi: https://doi.org/10.1016/0165-1684(94)90029-9. URL https://www.sciencedirect.com/science/article/pii/0165168494900299. Higher Order Statistics.

Harley Flanders. Liouville's theorem on conformal mapping. *Journal of Mathematics and Mechanics*, 15(1):157–161, 1966. ISSN 00959057, 19435274. URL http://www.jstor.org/stable/24901333.

Luigi Gresele, Julius von Kügelgen, Vincent Stimper, Bernhard Schölkopf, and Michel Besserve. Independent mechanism analysis, a new concept? In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual*, pp. 28233–28248, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/edc27f139c3b4e4bb29d1cdbc45663f9-Abstract.html.

Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. *Neural Networks*, 12(3):429–439, 1999. doi: 10.1016/S0893-6080(98)00140-3. URL https://doi.org/10.1016/S0893-6080(98)00140-3.

Aapo Hyvärinen, Hiroaki Sasaki, and Richard E. Turner. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan*, volume 89 of *Proceedings of Machine Learning Research*, pp. 859–868. PMLR, 2019. URL http://proceedings.mlr.press/v89/hyvarinen19a.html.

Aapo Hyvärinen, Ilyes Khemakhem, and Ricardo Monti. Identifiability of latent-variable and structural-equation models: from linear to nonlinear, 2023.

Ilyes Khemakhem, Diederik P. Kingma, Ricardo Pio Monti, and Aapo Hyvärinen. Variational autoencoders and nonlinear ICA: A unifying framework. In Silvia Chiappa and Roberto Calandra (eds.), *The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy]*, volume 108 of *Proceedings of Machine Learning Research*, pp. 2207–2217. PMLR, 2020. URL http://proceedings.mlr.press/v108/khemakhem20a.html.

Xiaojiang Yang. Personal communication. 2022.

Xiaojiang Yang, Yi Wang, Jiacheng Sun, Xing Zhang, Shifeng Zhang, Zhenguo Li, and Junchi Yan. Nonlinear ICA using volume-preserving transformations. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=AMpki9kp8Cn.

Yujia Zheng, Ignavier Ng, and Kun Zhang.
On the identifiability of nonlinear ICA with unconditional priors. In *ICLR2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality*, 2022a. URL https://openreview.net/forum?id=BW44SrOU9g5.

Yujia Zheng, Ignavier Ng, and Kun Zhang. On the identifiability of nonlinear ICA: Sparsity and beyond. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022b. URL https://openreview.net/forum?id=Wo1HF2wWNZb.

## A Obstacles To Pointwise Proofs Of Identifiability

The purpose of this appendix is to show that the relation equation 16, applied pointwise, cannot be sufficient to show identifiability even when we use all restrictions available for $D^2h$. While this section is slightly technical, we think that it is of interest to everyone who wants to gain a deeper understanding of the difficulties of identifiability proofs in restricted function classes. Here, we focus on the setting of identifiability in some restricted function class $\mathcal{F}$, but similar reasoning can be applied to settings with few auxiliary variable values as in Yang et al. (2022). We assume that the function class $\mathcal{F}$ is characterized by the condition that $Dh\in M$ pointwise for some (nonlinear) manifold $M$. In addition, we assume that the directional derivatives $\partial_i Dh(s)\in T_{Dh(s)}M$ are in the tangent space of $M$, which is implied by $Dh(s)\in M$ for all $s$. We now show that these assumptions combined with equation 16 are not sufficient to restrict $Dh$ substantially for many typical manifolds $M$ as soon as $(\nabla\tilde{q})(s^{\prime})$ is not the null vector.

To make this precise, we assume that the reference distribution $q$ and the points $s$, $s^{\prime}=h(s)$ are fixed and investigate the implications of the relation equation 16 for the Jacobian $Dh(s)$. Above we discussed how $\nabla\tilde{q}(h(s))=0$ implies that $Dh(s)$ is a scaled permutation matrix. We now show that similarly strong conclusions are not possible in the general case. We introduce the notation $\partial_i Dh(s)=A^i\in\mathbb{R}^{d\times d}$ for $1\leq i\leq d$. Then equation 16 can be expressed as

$$(Dh^{\top}\Omega Dh)_{ij}+\sum_{k}v_{k}A_{kj}^{i}=\Lambda_{ij}.\tag{31}$$

We now assume that $v\neq0$ is a fixed vector and show that under very mild conditions on $Dh$ there are matrices $A^i\in T_{Dh}M$ such that the equation above holds. This implies that we cannot restrict $Dh$ substantially based on equation 16. Thus we assume for now that $Dh\in M$ is a fixed matrix, and we derive under which conditions on $Dh$ the system equation 31 has a solution for the $A^i$. For this it is convenient to rewrite the condition for the matrix $A^i$ that follows from equation 31. We assume that $i$ is fixed. We consider the vector $\beta^i$ with entries $\beta_{j}^{i}=-(Dh^{\top}\Omega Dh)_{ij}$, i.e., the $\beta^i$ denote the columns of the matrix. Moreover, we write $\lambda^i=\Lambda_{ii}\in\mathbb{R}$. Then the equation equation 31 simplifies to

$$v^{\top}A^{i}=(\beta^{i})^{\top}+\lambda^{i}e_{i}^{\top}.\tag{32}$$

To illustrate and clarify this approach, we first consider two concrete examples where the restriction on $Dh$ can be characterized explicitly. Later we give a more general argument based on parameter counting.

**Example I: $M=\mathrm{O}(d)$** First, we assume that $M=\mathrm{O}(d)$ (below we will discuss that this is not a particularly interesting function class). Then

$$T_{Dh}M=\{A\in\mathbb{R}^{d\times d}:(Dh)^{\top}A+A^{\top}Dh=0\},\tag{33}$$

i.e., $A^{\top}Dh$ is skew-symmetric.
For readers not so familiar with this, we note that it can be seen by observing that for tangent directions $A$ the relation $(Dh+\epsilon A)^{\top}(Dh+\epsilon A)=\mathrm{Id}+O(\epsilon^{2})$ holds, which implies $Dh^{\top}A+A^{\top}Dh=0$. We can show the following lemma.

**Lemma 5.** *Let $M=\mathrm{O}(d)$. Let $v\in\mathbb{R}^{d}$ and $Dh\in\mathbb{R}^{d\times d}$ be such that $(Dh)^{-1}v$ has all entries different from 0. Then the system equation 32 has a solution $(A^i,\lambda^i)$ for any $\beta^i$.*

**Remark 2.** Note that $\beta^i$ in equation 32 depends on $Dh$. However, we show that under the stated assumptions there is a solution for any vector $\beta$. We also remark that for $v\neq0$ and a uniformly random $Dh\in\mathrm{SO}(d)$ the vector $(Dh)^{-1}v$ has all entries different from 0 with probability 1. This approach thus allows us to *exclude* only a subset of measure 0 of all possible values $Dh(s)$. In contrast, for $v=0$ it is possible to show that $Dh(s)$ is a scaled permutation, so we *constrain* $Dh(s)$ to a subset of measure 0 of all possible rotations.

The proof can be found in Appendix C. To summarize, we have established that for $M=\mathrm{O}(d)$ all we can infer from the relation equation 16 is that for a fixed $v\neq0$ the only restriction on $Dh$ is that $(Dh)^{-1}v$ has no zero entry.

**Example II: $M=\{A\in\mathbb{R}^{d\times d}:A^{\top}A\in\mathrm{Diag}(d),\ \mathrm{Det}(A)=1\}$** Now we consider the case where $M=\{A\in\mathbb{R}^{d\times d}:A^{\top}A\in\mathrm{Diag}(d),\ \mathrm{Det}(A)=1\}$, which corresponds to $\mathcal{F}_0=\mathcal{F}_{\mathrm{OCT}}\cap\mathcal{F}_{\mathrm{vol}}$. Let us add a remark regarding this choice.

**Remark 3.** We emphasize that $h\in\mathcal{F}_0$ is not the right condition if we want to prove identifiability for this function class, because $\mathcal{F}_{\mathrm{OCT}}$ is not a group, so we cannot conclude that $h=g^{-1}f$ is in $\mathcal{F}_{\mathrm{OCT}}$ if $g,f\in\mathcal{F}_{\mathrm{OCT}}$; actually, we can only conclude that $Dh$ is in some larger space than $\mathcal{F}_{\mathrm{OCT}}$ (this is a misconception in Zheng et al. (2022a), based on a too restrictive estimation model). Let us nevertheless investigate the implications of $h\in\mathcal{F}_0=\mathcal{F}_{\mathrm{OCT}}\cap\mathcal{F}_{\mathrm{vol}}$.

In this case we can even assume that the right-hand side of equation 16 is a fixed given matrix $\Lambda$ (i.e., we assume that we also know the ground truth source density $p$). Then we can absorb the now fixed vector $\lambda^{i}e_i$ as $\tilde{\beta}^{i}=\beta^{i}+\lambda^{i}e_{i}$. So the equation we now consider is

$$v^{\top}A^{i}=(\tilde{\beta}^{i})^{\top}.\tag{34}$$

We note that the tangent space $T_{Dh}M$ is characterized by

$$T_{Dh}M=\{A\in\mathbb{R}^{d\times d}:A^{\top}Dh+(Dh)^{\top}A\in\mathrm{Diag}(d),\ \mathrm{Tr}((Dh)^{-1}A)=0\}.\tag{35}$$

To clarify the second condition, we note that $\mathrm{Det}(Dh+\epsilon A)=\mathrm{Det}(Dh)\,\mathrm{Det}(\mathrm{Id}+\epsilon(Dh)^{-1}A)=1+\epsilon\,\mathrm{Tr}((Dh)^{-1}A)+O(\epsilon^{2})$. For the next proof it is helpful to use that for any matrix $B\in M$ there is a unique decomposition $B=OD$ where $D\in\mathrm{Diag}(d)$ has all diagonal entries positive and $O\in\mathrm{O}(d)$. For existence, just set $D=\sqrt{B^{\top}B}$, and for uniqueness note that $D^{2}=(OD)^{\top}(OD)=B^{\top}B=(O^{\prime}D^{\prime})^{\top}(O^{\prime}D^{\prime})=D^{\prime2}$. We can show the following result.

**Lemma 6.** *Assume that $v\neq0$. There is a subset $\mathcal{O}\subset\mathrm{O}(d)$ of measure zero (w.r.t. the Haar measure) such that for $O\in\mathrm{O}(d)$ with $O\notin\mathcal{O}$ and $Dh=OD$ for $D\in\mathrm{Diag}(d)$ the equation equation 34 has a solution $A^i\in T_{Dh}M$.*

**Remark 4.** The approach again only allows excluding at most a set of measure zero of all possible values of $Dh$. So no useful restriction on $Dh$ can be extracted based on this approach.

The proof of the Lemma can be found in Appendix C. Now we analyze the equation equation 32 more generally based on parameter counting. Note that it is typically an underdetermined linear equation for $A^i$ and $\lambda^i$. Indeed, the condition $A^i\in T_{Dh}M$ enforces that $A^i$ is contained in a $\dim(M)$ dimensional subspace, i.e., $A^i$ has $\dim(M)$ free parameters.
As soon as $\dim(M)\geq d-1$ the equation equation 32 generically has a solution for a fixed $v$ and any $\beta$, because then there are $d$ equations and at least $d$ degrees of freedom ($\lambda^i$ is also a variable). This suggests that this approach is not powerful enough to prove identifiability except for very low dimensional $M$. For example, the orthogonal group considered above satisfies $\dim(\mathrm{SO}(d))=d(d-1)/2$ and, as discussed in Appendix B, this function space is too small to be of interest. Note that in the auxiliary variable setting we get $U$ equations like equation 32 for each $A^i$. Then typically those $U$ equations will have a solution if $\dim(M)\geq U(d-1)$. For unrestricted function classes (i.e., $\dim(M)=d^{2}$) we then obtain the condition $U>d+1$ to ensure identifiability. Note that Theorem 1 in Hyvärinen et al. (2019) proves identifiability for $2d+1$ auxiliary variables under a non-degeneracy condition. This is, up to a constant factor, the same scaling as parameter counting suggests. It is not clear whether their result is optimal, and it is also not clear that the strategy discussed here can be extended to an identifiability proof.

## B Rigidity As An Alternative Path To Identifiability

We briefly review results showing that certain function classes characterized by a restriction on the Jacobian are substantially more restricted than directly apparent from the local condition. This observation is called rigidity and is crucial in the mathematical treatment of elasticity, where it is natural to consider pointwise restrictions on the gradient as this encodes the local deformability of materials (see, e.g., Ciarlet (2021) for an overview). Rigidity results provide additional information beyond the local gradient structure; e.g., they might provide additional restrictions on $D^2h$ beyond the trivial inclusion $\partial_iDh\in T_{Dh}M$, and this information can be used to prove identifiability results. To clarify this, we consider a simple and well-known example that dates back at least to Liouville, who showed a more general result for conformal maps in 1850. Suppose that $f:\Omega\to\mathbb{R}^{d}$ is a $C^{2}$ function defined on an open connected set $\Omega\subset\mathbb{R}^{d}$ such that $Df(s)\in\mathrm{SO}(d)$ for all $s\in\Omega$. Naively, this only implies that $\partial_iDf(s)\in T_{Df(s)}\mathrm{SO}(d)$. So we can conclude that $A_i=\partial_iDf(s)$ satisfies the equation $A_i^{\top}Df(s)+Df(s)^{\top}A_i=0$; e.g., if $Df(s)=\mathrm{Id}$ then $A_i$ is skew-symmetric. However, $Df$ is also the gradient of a function and this might restrict the values $A_i$ further than just being in the tangent space. In fact, in this example it can be shown that $f(s)=As+b$ for some $A\in\mathrm{SO}(d)$. For clarity, we state this as a theorem.

**Theorem 1** (Liouville). *Assume $f:\Omega\to\mathbb{R}^{d}$ is a $C^{2}$ function and $\Omega\subset\mathbb{R}^{d}$ is open and connected. If $Df(s)\in\mathrm{SO}(d)$ for all $s\in\Omega$ then*

$$f(s)=As+b\tag{36}$$

*for some $A\in\mathrm{SO}(d)$, $b\in\mathbb{R}^{d}$.*

A simple proof of this result can be found in the recent review Hyvärinen et al. (2023).³ So we conclude that $f$ is necessarily affine, $Df$ is constant, and $A_i=0$. In this case, the rigidity result implies a strong restriction on $D^2f$ (it vanishes). This additional restriction might be helpful when analyzing equation 16 and might imply identifiability. Let us also emphasize that while the additional restrictions help to prove identifiability, they also show that the function class is not as expressive as naively expected.

³ Actually this proof shows that in this simple case it is sufficient to carefully consider higher order derivatives, but this does not extend to more general function classes, e.g., conformal maps, where a (very simple) PDE has to be integrated, see Flanders (1966).
To the best of our knowledge, it is currently unknown whether there are rigidity results for larger function classes than conformal maps; in particular, no results for the function class $\mathcal{F}_{\mathrm{OCT}}$ seem to be known. Nevertheless, one possible road to identifiability is to show that for a certain function class $\mathcal{F}$ characterized by the pointwise condition $Df(s)\in M$ we can, in fact, conclude that $\partial_iDf(s)\in U\subset T_{Df(s)}M$ where $U$ is a small subset, and then use this additional restriction in equation 16. Alternatively, one could try to directly consider the system of PDEs given by

$$Dh^{\top}\Omega Dh+\sum_{k}v_{k}D^{2}h_{k}=\Lambda,\tag{37}$$
$$Dh\in M.\tag{38}$$

This is essentially the road taken in Buchholz et al. (2022) to show partial identifiability results in $\mathcal{F}_{\mathrm{OCT}}$.

## C Proofs

## C.1 Proof For Section 3.1

*Proof of Lemma 1.* The standard transformation formula for densities reads

$$p(s)=q(h(s))\cdot|\det Dh(s)|.\tag{39}$$

We now apply equation 14 to this equation to conclude that (denoting $\tilde{q}=\ln(q)$)

$$\begin{aligned}0&=\partial_{i}\partial_{j}\left(\ln(q(h(s)))+\ln|\det Dh(s)|\right)\\&=\partial_{i}\sum_{k}(\partial_{k}\tilde{q})(h(s))\,\partial_{j}h_{k}(s)+\partial_{i}\partial_{j}\ln|\det Dh(s)|\\&=\sum_{k}\sum_{l}(\partial_{l}\partial_{k}\tilde{q})(h(s))\cdot\partial_{j}h_{k}(s)\cdot\partial_{i}h_{l}(s)+\sum_{k}(\partial_{k}\tilde{q})(h(s))\,\partial_{i}\partial_{j}h_{k}(s)+\partial_{i}\partial_{j}\ln|\det Dh(s)|.\end{aligned}\tag{40}$$

Since $h$ is volume preserving, $\det Dh(s)=1$ is constant and the last term in the display above vanishes. Using that $\tilde{q}(s^{\prime})=\sum_{i}\tilde{q}_{i}(s^{\prime}_{i})$ for some functions $\tilde{q}_i$, the off-diagonal terms of the first sum vanish. Thus, we end up with the condition

$$0=\sum_{k}(\partial_{k}^{2}\tilde{q})(h(s))\cdot\partial_{j}h_{k}(s)\cdot\partial_{i}h_{k}(s)+\sum_{k}(\partial_{k}\tilde{q})(h(s))\,\partial_{i}\partial_{j}h_{k}(s)\quad\text{for }i\neq j.\tag{41}$$

We now fix a point $s_0$ and set $s^{\prime}_0=h(s_0)$. Then we write $\Omega=\mathrm{Diag}((\partial_{1}^{2}\tilde{q})(s^{\prime}_{0}),\ldots,(\partial_{d}^{2}\tilde{q})(s^{\prime}_{0}))$ and $v\in\mathbb{R}^{d}$ with $v_{k}=\partial_{k}\tilde{q}(s^{\prime}_{0})$. Then equation 41 can be written concisely as (dropping the argument $s$)

$$Dh^{\top}\Omega Dh+\sum_{k}v_{k}D^{2}h_{k}=\Lambda\in\mathrm{Diag}(d).\tag{42}$$

The matrix $\Lambda$ has diagonal entries $\Lambda_{ii}=\partial_{i}^{2}\tilde{p}(s_{0})$. This ends the proof. Note that in the auxiliary variable case with a finite number of auxiliary variables $u_m$ with $1\leq m\leq U$ we get $U$ equations of this type that must be satisfied simultaneously.

## C.2 Proofs For Appendix A

*Proof of Lemma 5.* Let $O$ be an invertible matrix such that

$$O(Dh)^{-1}v=e_{1}\tag{43}$$

(such an $O$ exists if $v\neq0$). Now we show that the first entry of $O^{-\top}e_{i}$ is non-zero, i.e., $e_{1}^{\top}O^{-\top}e_{i}\neq0$. Note that

$$e_{1}^{\top}O^{-\top}e_{i}=e_{i}^{\top}O^{-1}e_{1}=e_{i}^{\top}(Dh)^{-1}v\neq0\tag{44}$$

by assumption. Thus we can find $\lambda^{i}\in\mathbb{R}$ such that

$$e_{1}^{\top}\left(O^{-\top}\beta^{i}+\lambda^{i}O^{-\top}e_{i}\right)=0.\tag{45}$$

Now we set

$$T=(O^{-\top}\beta^{i}+\lambda^{i}O^{-\top}e_{i})\otimes e_{1}-e_{1}\otimes(O^{-\top}\beta^{i}+\lambda^{i}O^{-\top}e_{i}).\tag{46}$$

Clearly $T\in\mathrm{Skew}(d)$. Moreover,

$$Te_{1}=O^{-\top}\beta^{i}+\lambda^{i}O^{-\top}e_{i}\tag{47}$$

by equation 45.
Finally we set

$$A^{i}=-(Dh)^{-\top}O^{\top}TO.\tag{48}$$

Then $A^{i}\in T_{Dh}M$ because

$$(Dh)^{\top}A^{i}+(A^{i})^{\top}Dh=-O^{\top}(T+T^{\top})O=0.\tag{49}$$

We calculate (using $T^{\top}=-T$)

$$(A^{i})^{\top}v=O^{\top}TO(Dh)^{-1}v=O^{\top}Te_{1}=O^{\top}(O^{-\top}\beta^{i}+\lambda^{i}O^{-\top}e_{i})=\beta^{i}+\lambda^{i}e_{i}.\tag{50}$$

This ends the proof. □

*Proof of Lemma 6.* We first show the following fact: suppose $w\in\mathbb{R}^{d}$ is a vector such that not all its entries have equal absolute value and $b\in\mathbb{R}^{d}$ is any vector; then there is $T\in\mathrm{Diag}(d)+\mathrm{Skew}(d)$ with $\mathrm{Tr}\,T=0$ such that

$$Tw=b.\tag{51}$$

It is sufficient to show the result for $d=2$, as the general case follows by linearity and the $d=2$ case applied to suitable subblocks of $T$. Note that the conditions on $T$ imply that we can write it as

$$T=\begin{pmatrix}t_{1}&t_{2}\\-t_{2}&-t_{1}\end{pmatrix}\in\mathrm{Diag}(2)+\mathrm{Skew}(2),\tag{52}$$

and all such matrices satisfy all the requirements for $T$. With this we get

$$Tw=\begin{pmatrix}t_{1}&t_{2}\\-t_{2}&-t_{1}\end{pmatrix}\begin{pmatrix}w_{1}\\w_{2}\end{pmatrix}=\begin{pmatrix}w_{1}&w_{2}\\-w_{2}&-w_{1}\end{pmatrix}\begin{pmatrix}t_{1}\\t_{2}\end{pmatrix}.\tag{53}$$

If $|w_{1}|\neq|w_{2}|$ then the determinant of the matrix on the right-hand side is $w_{2}^{2}-w_{1}^{2}\neq0$. Thus we can find a solution $(t_{1},t_{2})^{\top}$ of the linear equation $Tw=b$ for any vector $b$.

We now define $\mathcal{O}\subset\mathrm{O}(d)$ to be the set of orthogonal matrices such that $O^{\top}v$ has all entries of equal absolute value. For $v\neq0$ this is indeed a set of measure 0 because by applying a suitable rotation we can assume $v=e_{1}$ and then there are only $2^{d}$ possible first columns of $O^{\top}$, namely $d^{-1/2}(\pm1,\pm1,\ldots,\pm1)$. Suppose now that $O\in\mathrm{O}(d)$ satisfies $O\notin\mathcal{O}$ and $Dh=OD$ for some $D\in\mathrm{Diag}(d)$. Let $T\in\mathrm{Diag}(d)+\mathrm{Skew}(d)$ with $\mathrm{Tr}\,T=0$ be a solution of the equation

$$TO^{\top}v=D^{-1}\tilde{\beta}^{i}.\tag{54}$$

By the assumptions on $O$ and $v$ such a solution $T$ exists. We now define

$$(A^{i})^{\top}=DTO^{\top}\tag{55}$$

so that equation 34 holds. It remains to be shown that $A^{i}\in T_{Dh}M$. We find

$$(A^{i})^{\top}Dh+(Dh)^{\top}A^{i}=DTO^{\top}OD+DO^{\top}OT^{\top}D=D(T+T^{\top})D\in\mathrm{Diag}(d).\tag{56}$$

Moreover, we have

$$\mathrm{Tr}((Dh)^{-1}A^{i})=\mathrm{Tr}(D^{-1}O^{\top}OT^{\top}D)=\mathrm{Tr}\,T^{\top}=\mathrm{Tr}\,T=0.\tag{57}$$

We conclude that $A^{i}\in T_{Dh}M$. This ends the proof. □
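The explicit construction in the proof of Lemma 5 is easy to verify numerically. The following sketch is our own illustration (in particular, the way we complete $(Dh)^{-1}v$ to a basis in order to build $O$ is one arbitrary choice); it checks equation 32 and the tangency condition for a random rotation $Dh$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, i = 4, 2
Dh, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random Dh in O(d)
v = rng.standard_normal(d)
beta = rng.standard_normal(d)                      # an arbitrary beta^i

w = np.linalg.solve(Dh, v)         # (Dh)^{-1} v, generically no zero entries
B = np.eye(d); B[:, 0] = w         # invertible since w[0] != 0 generically
O = np.linalg.inv(B)               # satisfies O (Dh)^{-1} v = e_1, eq. (43)

u1 = np.linalg.solve(O.T, beta)          # O^{-T} beta^i
u2 = np.linalg.solve(O.T, np.eye(d)[i])  # O^{-T} e_i, first entry nonzero (eq. 44)
lam = -u1[0] / u2[0]                     # choose lambda^i as in eq. (45)
wvec = u1 + lam * u2
e1 = np.eye(d)[0]
T = np.outer(wvec, e1) - np.outer(e1, wvec)  # skew-symmetric, eq. (46)
A = -np.linalg.inv(Dh).T @ O.T @ T @ O       # eq. (48)

print(np.allclose(A.T @ v, beta + lam * np.eye(d)[i]))  # eq. (32) holds
print(np.allclose(Dh.T @ A + A.T @ Dh, 0))              # A is tangent to O(d) at Dh
```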
## C.3 Proofs For Section 4

*Proof of Lemma 3.* To show the first point, we note that the flow exists locally by standard ODE results. For global existence, we note that

$$\begin{aligned}\partial_{t}|\Phi_{t}(s)|^{2}&=2\Phi_{t}(s)\cdot X(\Phi_{t}(s))\\&=2(c-b)\Phi_{t}(s)_{1}\Phi_{t}(s)_{2}\Phi_{t}(s)_{3}+2(a-c)\Phi_{t}(s)_{1}\Phi_{t}(s)_{2}\Phi_{t}(s)_{3}+2(b-a)\Phi_{t}(s)_{1}\Phi_{t}(s)_{2}\Phi_{t}(s)_{3}\\&=0.\end{aligned}\tag{58}$$

This implies that no blow-up occurs. The flow is analytic because the vector field is analytic, by the Cauchy–Kowalevski theorem. To show the next statement, we note that by equation 24 it is sufficient to show that $\mathrm{Div}(X)=\mathrm{Div}(p_{1}X)=\mathrm{Div}(p_{2}X)=0$ (the first condition ensures that the Lebesgue measure is preserved, i.e., the flow is volume preserving). We calculate

$$\mathrm{Div}\,X=\partial_{1}((c-b)s_{2}s_{3})+\partial_{2}((a-c)s_{1}s_{3})+\partial_{3}((b-a)s_{1}s_{2})=0\tag{59}$$

and similarly (using $\partial_{1}p_{1}=-s_{1}p_{1}$, $\partial_{2}p_{1}=-s_{2}p_{1}$, $\partial_{3}p_{1}=-s_{3}p_{1}$)

$$\mathrm{Div}(Xp_{1})=-(c-b)s_{1}s_{2}s_{3}\cdot p_{1}(s)-(a-c)s_{1}s_{2}s_{3}\cdot p_{1}(s)-(b-a)s_{1}s_{2}s_{3}\cdot p_{1}(s)=0\tag{60}$$

and (using $\partial_{1}p_{2}=-as_{1}p_{2}$, $\partial_{2}p_{2}=-bs_{2}p_{2}$, and $\partial_{3}p_{2}=-cs_{3}p_{2}$)

$$\begin{aligned}\mathrm{Div}(Xp_{2})&=-(c-b)s_{3}s_{2}as_{1}\cdot p_{2}(s)-(a-c)s_{3}s_{1}bs_{2}\cdot p_{2}(s)-(b-a)s_{1}s_{2}cs_{3}\cdot p_{2}(s)\\&=s_{1}s_{2}s_{3}\cdot p_{2}(s)(ab-ca+bc-ab+ac-bc)=0.\end{aligned}\tag{61}$$

To show the last point we note that the first order expansion for small $\varepsilon$ gives

$$\Phi_{\varepsilon}(s)\approx\begin{pmatrix}s_{1}+\varepsilon(c-b)s_{2}s_{3}\\s_{2}+\varepsilon(a-c)s_{1}s_{3}\\s_{3}+\varepsilon(b-a)s_{1}s_{2}\end{pmatrix},\tag{62}$$

which cannot be written as a coordinate-wise transformation as $a$, $b$, and $c$ are pairwise different. □
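These three divergence computations can also be confirmed symbolically. A minimal sketch using sympy (our own illustration; the normalization constants of the Gaussian densities are dropped since constant factors do not affect the divergence conditions):

```python
import sympy as sp

s1, s2, s3 = sp.symbols('s1 s2 s3', real=True)
a, b, c = sp.symbols('a b c', positive=True)

X = sp.Matrix([(c - b) * s2 * s3, (a - c) * s1 * s3, (b - a) * s1 * s2])
p1 = sp.exp(-(s1**2 + s2**2 + s3**2) / 2)              # density of P_1 (unnormalized)
p2 = sp.exp(-(a * s1**2 + b * s2**2 + c * s3**2) / 2)  # density of P_2 (unnormalized)

div = lambda F: sum(sp.diff(F[i], x) for i, x in enumerate((s1, s2, s3)))
print(sp.simplify(div(X)), sp.simplify(div(p1 * X)), sp.simplify(div(p2 * X)))
# 0 0 0, matching equations (59), (60), and (61)
```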
Note that
$$\mathrm{d}z_{1}=\sum_{i=1}^{d-k}\frac{\partial z_{1}}{\partial y_{i}}\mathrm{d}y_{i}+\sum_{i=1}^{k}\frac{\partial z_{1}}{\partial p_{i}}\mathrm{d}p_{i}.\tag{67}$$
We now calculate
$$\mathrm{d}z_{1}\wedge\mathrm{d}y_{2}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}=\frac{\partial z_{1}}{\partial y_{1}}\mathrm{d}y_{1}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}.\tag{68}$$
Note that all further terms involving $\mathrm{d}p_{i}$ or $\mathrm{d}y_{i}$ vanish by antisymmetry. We evaluate, using the definition of directional derivatives with respect to a tangent vector and equation 65,
$$\frac{\partial z_{1}}{\partial y_{1}}(s)=\left(\frac{\partial}{\partial y_{1}}z_{1}\circ\psi^{-1}\right)(\psi(s))=\left(\frac{\partial}{\partial y_{1}}\tilde{z}_{1}\right)(\psi(s))=(h\circ\psi^{-1})(\psi(s))=h(s).\tag{69}$$
We can therefore conclude
$$\begin{aligned}\mathrm{d}z_{1}\wedge\mathrm{d}y_{2}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}&=\frac{\partial z_{1}}{\partial y_{1}}\mathrm{d}y_{1}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}\\ &=h\,\mathrm{d}y_{1}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}\\ &=\mathrm{d}s_{1}\wedge\ldots\wedge\mathrm{d}s_{d}.\end{aligned}\tag{70}$$
Now the map $\varphi:U\to\mathbb{R}^{d}$ given by $\varphi(s)=(z_{1}(s),\ldots,z_{d-k}(s),p_{1}(s),\ldots,p_{k}(s))$ defines a local chart, where we define $z_{1}$ as above and $z_{i}=y_{i}$ for $i>1$ (it is easy to check from the definition of $z_{1}$ that $\varphi$ is injective, but one could also shrink the domain for the following argument). Let $g:U\to\mathbb{R}$ be given by the root of the determinant of the matrix representation of the (standard) metric tensor for the coordinates $\varphi$. The relation
$$g\,\mathrm{d}z_{1}\wedge\mathrm{d}y_{2}\wedge\ldots\wedge\mathrm{d}y_{d-k}\wedge\mathrm{d}p_{1}\wedge\ldots\wedge\mathrm{d}p_{k}=\mathrm{d}s_{1}\wedge\ldots\wedge\mathrm{d}s_{d}\tag{71}$$
implies $g=1$. Let $\tilde{X}:\varphi(U)\to\mathbb{R}^{d}$ be a non-zero divergence-free vector field with compact support such that $\tilde{X}_{i}=0$ for $i>2$. Consider the vector field $X:M\to TM=\mathbb{R}^{d}$ defined by
$$X(s)=\tilde{X}_{1}(\varphi(s))\frac{\partial}{\partial z_{1}}+\tilde{X}_{2}(\varphi(s))\frac{\partial}{\partial z_{2}}\tag{72}$$
for $s\in U$ and extended by 0 for $s\notin U$. Then we calculate, using the coordinate formula for the divergence and using $g=1$,
$$\operatorname{Div}_{M}X=\sum_{i}\frac{1}{g}\partial_{i}(gX_{i})=\frac{\partial}{\partial z_{1}}\tilde{X}_{1}\circ\varphi+\frac{\partial}{\partial z_{2}}\tilde{X}_{2}\circ\varphi=\partial_{1}\tilde{X}_{1}+\partial_{2}\tilde{X}_{2}=\operatorname{Div}_{\mathbb{R}^{d}}\tilde{X}=0.\tag{73}$$
Similarly we also get
$$\operatorname{Div}_{M}(Xp_{i})=\frac{\partial}{\partial z_{1}}((\tilde{X}_{1}\circ\varphi)p_{i})+\frac{\partial}{\partial z_{2}}((\tilde{X}_{2}\circ\varphi)p_{i})=p_{i}\partial_{1}\tilde{X}_{1}+p_{i}\partial_{2}\tilde{X}_{2}=p_{i}\operatorname{Div}_{\mathbb{R}^{d}}\tilde{X}=0.\tag{74}$$
Here we used that
$$\frac{\partial}{\partial z_{1}}p_{i}=0,\tag{75}$$
which follows from the relation $p_{i}\circ\varphi^{-1}(x)=\pi_{d-k+i}(x)=x_{r}$, where $\pi_{r}$ denotes the projection on the $r$-th coordinate and $\partial_{i}x_{r}=\delta_{ir}$.

## D Detailed Comment On Lemma 1 In Yang Et Al. (2022)

Here we discuss in a bit more detail the errors in Lemma 1 in Yang et al. (2022). We adopt their notation but simplify it slightly by removing $u$, which is not necessary in the context of the lemma.
We assume that $f:\mathbb{R}^{d}\to\mathbb{R}^{d}$ is volume preserving and consider
$$p(s)=\prod_{i=1}^{d}p_{i}(s_{i}),\tag{76}$$
$$q(z)\propto\prod_{i=1}^{d}e^{-\theta_{i,1}z_{i}-\theta_{i,2}z_{i}^{2}/2}.\tag{77}$$
Note that
$$\theta_{i,1}=-\mu_{i}/\sigma_{i}^{2},\quad\theta_{i,2}=\sigma_{i}^{-2},\tag{78}$$
where $\mu_{i}$ and $\sigma_{i}^{2}$ are the mean and variance of the Gaussian distribution. Lemma 1 in Yang et al. (2022) claims that if $q(f(s))=p(s)$ then for $j\neq k$ (see Eq. (15))
$$\sum_{i}\theta_{i,1}\frac{\partial^{2}f_{i}}{\partial s_{j}\partial s_{k}}(s)+\theta_{i,2}\frac{\partial f_{i}}{\partial s_{j}}(s)\frac{\partial f_{i}}{\partial s_{k}}(s)=0.\tag{79}$$
Moreover, they claim in Equation (20) below the lemma that $\theta_{i,1}=0$ can be assumed, giving the simpler relation
$$\sum_{i}\theta_{i,2}\frac{\partial f_{i}}{\partial s_{j}}(s)\frac{\partial f_{i}}{\partial s_{k}}(s)=0.\tag{80}$$
The proof strategy is to consider $\ln(p(s))=\ln(q(f(s)))$ and apply $-\partial_{j}\partial_{k}$ to this relation, i.e., they consider
$$\partial_{j}\partial_{k}\left(\sum_{i=1}^{d}\theta_{i,1}f_{i}(s)+\theta_{i,2}f_{i}(s)^{2}/2\right).\tag{81}$$
The error is that below (17) they state that $f(s_{0})=0$ can be assumed without loss of generality, which is not true. Without this assumption, we obtain an additional term when both derivatives hit the same factor of $f_{i}(s)^{2}$, i.e., we get
$$\begin{aligned}0&=\sum_{i}\theta_{i,1}\frac{\partial^{2}f_{i}}{\partial s_{j}\partial s_{k}}(s)+\theta_{i,2}\left(\frac{\partial f_{i}}{\partial s_{j}}(s)\frac{\partial f_{i}}{\partial s_{k}}(s)+\frac{\partial^{2}f_{i}}{\partial s_{j}\partial s_{k}}(s)f_{i}(s)\right)\\ &=\sum_{i}(\theta_{i,1}+f_{i}(s)\theta_{i,2})\frac{\partial^{2}f_{i}}{\partial s_{j}\partial s_{k}}(s)+\theta_{i,2}\frac{\partial f_{i}}{\partial s_{j}}(s)\frac{\partial f_{i}}{\partial s_{k}}(s).\end{aligned}\tag{82}$$
Now we see that the claimed relation equation 80 only holds if $\theta_{i,1}+f_{i}(s)\theta_{i,2}=0$, which in light of equation 78 is true iff $f_{i}(s)=\mu_{i}$ for all $i$. This finding agrees with the relation equation 16, because for Gaussian densities $q$ the mean has maximal density, which implies that $\nabla q(\mu)=0$ and thus $v=0$ in the notation of Section 3.1.
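The extra term in equation 82 can be confirmed mechanically. A minimal symbolic sketch (one generic component $f$ suffices, since the sum over $i$ is linear):

```python
# Symbolic check of the corrected expansion (82): applying d_j d_k to
# theta1*f + theta2*f^2/2 produces the extra f * d^2 f term that
# Lemma 1 of Yang et al. (2022) drops.
import sympy as sp

sj, sk = sp.symbols("s_j s_k")
t1, t2 = sp.symbols("theta1 theta2")
f = sp.Function("f")(sj, sk)

lhs = sp.diff(t1 * f + t2 * f**2 / 2, sj, sk)
rhs = (t1 + t2 * f) * sp.diff(f, sj, sk) + t2 * sp.diff(f, sj) * sp.diff(f, sk)
assert sp.simplify(lhs - rhs) == 0
print("d_j d_k (theta1*f + theta2*f^2/2) matches equation (82).")
```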
Review 1:
Summary: The paper points out that the proof in Yang et al. (2022) used an assumption that does not hold in general ($f_i ( s_0 ) = 0$) and hence missed a term in their Lemma 1. Then, this paper points out that the result of Zheng et al. (2022b) is restrictive. Finally, the paper provides counterexamples to Yang et al. (2022).

Strengths and Weaknesses:
### Strengths
This paper provides a more thorough discussion of the identifiability issue of nonlinear ICA.
### Weaknesses
This paper does not provide essentially new ideas but rather a more thorough discussion based on the existing framework. The correction to Yang et al. (2022) does not require any novel ideas and is based on direct calculations, which makes the submitted paper appear weak from the beginning. It might be better to present the counterexamples before Section 3. Providing some numerical evidence would significantly improve the paper. As a layperson in regard to the identifiability issue in nonlinear ICA, I found it difficult to understand the problem formulation and definitions. This submission became readable to me only after I skimmed over the work of Buchholz et al. I suggest the authors find some laypeople to read the submission and revise the paper according to their feedback. Isn't $\Lambda$ in (24) simply zero? Is $s = s_0$ in the definition of $\Lambda_{i i}$ after (24)? There are some missing citations or proofs and a number of typos. See below.

Requested Changes:
Missing citations or proofs:
- A citation or proof is needed for Theorem 1.
- A citation or proof is needed for the claim that only linear Möbius transformations are volume preserving.
- A citation or proof is needed for Footnote 3.

Missing definitions:
- The probability distribution $p$ is not defined after (1).
- The definition of $v_k$ does not follow (20) but is hidden in the proof.
- 2nd line, Section 3.2: Lemma 5 isn't presented "above."
- (38) contains a typo.

Typos:
- 3rd line, 1st paragraph: independent component analysis -> ICA
- 1st line, 3rd paragraph: proof-strategy -> proof strategy
- 2nd line, 1st paragraph, Sec. 2: independent component analysis -> ICA
- p. 3: ... $f \in \mathcal{F}_{\text{OCT}}$ *implies that* the relation...
- p. 3: We denote *the* orthogonal and special group*s* as usual...
- p. 3: We structure this *section* in two parts.
- p. 6: ... *at* a single point...
- p. 6: ... *necessarily* affine...
- p. 7: ... rather small*.* *In* this case, it is just a subclass of *affine* functions.
- p. 7: Louiville -> Liouville

Broader Impact Concerns: N/A.

==================================================

Review 2:
Summary: This manuscript examines a specific proof technique employed for the identifiability of nonlinear ICA, underscoring its limitations. In particular, it has been demonstrated that there may be a need to utilize the global uniqueness attributes of PDEs to address the problem.

Strengths and Weaknesses:
Strengths:
1. The manuscript is well-written, and the reference to relevant work is useful.
2. The author provides a strategy for constructing distinct flows as a type of spurious solution for nonlinear ICA, assuming volume preservation.

Comments:
1. The scope of this paper seems a little bit narrow. Its main focus is on the construction of a specific spurious solution for nonlinear ICA, a situation that seems to refute a theorem in Yang et al.'s previous work. It could be more beneficial if the paper offered more on potential alternative techniques or assumptions that could be instrumental in resolving the problem.
At present, the author merely notes the potential necessity of considering global uniqueness properties, whose solutions could be intriguing.
2. The author notes that the volume-preserving orthogonal coordinate transformation mentioned in Zheng et al. 2022a may require more complex techniques. To ensure clarity, I suggest the author refer to Zheng et al. 2022a consistently throughout the paper, instead of alternating with Zheng et al. 2022b. The main result of Zheng et al. 2022b, which focuses on proving identifiability through sparsity, is substantially different from that of Zheng et al. 2022a.

Requested Changes: Please refer to the comments.
Broader Impact Concerns: N/A

==================================================

Review 3:
Summary: This paper comments on some recent identifiability results for nonlinear independent component analysis (ICA). It is suggested that the identifiability proofs in Yang et al. (2022) and Zheng et al. (2022) rely on a critical but flawed lemma, and hence are problematic. The paper further discusses an alternative proof strategy and presents a counterexample to the identifiability result in Yang et al. (2022).

Strengths and Weaknesses:
Note: I edited my review after seeing the authors' response and rechecking Yang et al.'s proofs.

Strengths: The paper meets the renewed interest in nonlinear ICA, which plays an increasingly important role in generative and latent variable models. The paper provides some useful insights into the problem and suggests the need to exploit global restrictions. At first sight, eq. (20) in Lemma 1 of the current paper is quite similar to eq. (19) in Lemma 1 of Yang et al. (2022). The problem in the latter is that the coefficients should also depend on $s$ rather than only on $u$. The problem arises when collecting the cross terms on the right-hand side of (18): those in the remainder term of the Taylor expansion are missing. This error is indeed fatal and affects Lemma 2 and the subsequent proofs.

Weaknesses: The main purpose of this "short note" (not really so for a 16-page paper) is to point out an error in Yang et al. (2022). Besides that, the new results and contributions are relatively weak and do not seem to warrant a full-length paper.

Requested Changes: I see two ways of rewriting the paper: 1) If the paper is intended as a "Comments" paper, it should address directly the issues of previous work and point out exactly where the error occurs. There is no need to introduce new notation or develop the theory from scratch. 2) As a full-length paper, after briefly pointing out the error, it should make substantial new contributions and concentrate more on those new results. The paper contains some typos that might affect understanding. For example, in the statement of Lemma 1, $v_k$ is not defined, and there are extra parentheses in the definitions of $\Omega$ and $\Lambda$.

Broader Impact Concerns: None

==================================================

Review 4:
Summary: The topic of this paper is Independent Component Analysis (ICA). An invertible mixing function f is applied to sources s with independent components s1, . . . , sd. The problem consists in recovering the sources from observing f(s). For a linear mixing function the problem is solved, while this task is known to be infeasible for general f. The present work is part of a growing literature attempting to analyze properties that the class of functions should possess for the problem to be solvable.
1/ As their main contribution, the authors pinpoint a technical error in previous work in the literature, Yang et al. [2022, Lemma 1]. The claim is that both of the works Yang et al. [2022] and Zheng et al. [2022] suffer from this inaccuracy, thus the results therein are (at least partially) invalidated. The authors propose a corrected version of the erroneous lemma (Lemma 1 in the text), and highlight some limitations of the approach proposed by Yang et al. [2022].
2/ The authors then build upon tools from rigidity theory developed by Buchholz et al. [2022] to discuss that in some cases, local conditions on the Jacobian can lead to very strong global restrictions on functions, helping to prove identifiability.
3/ Finally, the authors further strengthen their argument by constructing a counterexample to Yang et al. [2022, Theorem 1] using flows.

Strengths and Weaknesses:
* Strengths
1/ The authors discuss the reported errors in Yang et al. [2022, Lemma 1] both using their own notation, as well as the notation of Yang et al. [2022] (in Appendix C). This is much appreciated.
2/ The fact that the authors also provide a (pessimistic) assessment of the proof strategy of Yang et al. [2022] is also very valuable for people working in this area.
* Weaknesses
3/ This reviewer is less convinced by the necessity of including Section 4. (See also 4/.)
4/ At first glance, the submitted paper shares some similarities with the work of Buchholz et al. [2022]. Looking closer, the two papers appear to be mostly complementary, but it would be better to add a clear reference for Theorem 1 and a comparison with existing work for Theorem 2 of Section 4.

Requested Changes:
-- Clarify the distinction between the results presented in Section 4 and those of Buchholz et al. [2022].
-- Below is a list of minor issues.
* After equation (20), the expressions of Ω and Λ should be revised. Some parentheses are missing. The same issue arises before (24).
* On p. 5, "which is an erroneous version of Lemma 5 above" should read "Lemma 1" instead.
* On p. 6, a "well-known example". Please provide a reference.
* On p. 8, check equation (38).

Broader Impact Concerns: N/A

==================================================

Metareview:
Recommendation: Accept as is
Comment: All 4 reviewers agreed that the claim of the paper on the mistake in (Yang et al., 2022) is correct. This, together with the notation clarification and the counterexamples in Section 5, is enough to justify publication in TMLR in my opinion. This discussion is clear and easy to connect to the previous works [Reviewer uWec: "The authors discuss the reported errors in Yang et al. [2022, Lemma 1] both using their own notation, as well as the notation of Yang et al. [2022] (in Appendix C). This is much appreciated."] [vvSK: "The manuscript is well-written, and the reference to relevant work is useful."]. A recurring criticism is that in Section 4, the author defends the ideas he introduced in a previous paper (Buchholz et al., 2022), but does not provide any new idea: [uWec: "This reviewer is less convinced by the necessity of including Section 4 (...) At first glance, the submitted paper shares some similarities with the work of Buchholz et al. [2022]. Looking closer, the two papers appear to be mostly complementary, but it would be better to add a clear reference for Theorem 1 and a comparison with existing work for Theorem 2 of Section 4."]
[ejAJ: "This paper does not provide essentially new ideas but a more thorough discussion based on the existing framework."] In response to this, the author provided adequate citations in Section 4, and shortened it. All reviewers required some minor clarifications (see the list of minor comments / typos [vvSK, ejAJ]). This was addressed by the author. However, there remains a strong disagreement among the reviewers on the final decision. [uWec, ejAJ] supported the publication of the paper while [6ko9, vvSK] recommended to reject it, based on the lack of novelty: [6ko9: "The main purpose of this “short note” (not really so for a 16-page paper) is to point out an error in Yang et al. (2022). Besides that, the new results and contributions are relatively weak and do not seem to warrant a full-length paper."] [vvSK: "The scope of this paper seems a little bit narrow."]. From the abstract it is clear that this paper is not meant to introduce any new technique. Rather, it is meant to point out a (propagating) mistake in previous works, and to bring some clarification on the problems that led to this mistake. As such, the paper is kept short (8.5 pages including references, with more technical details in the appendices). The comments from all reviewers clearly show that the paper is successfull in its clarification task. I agree with them. The only debatable point was to defend in Section 4 as an alternative to (Yang et al., 2022) ideas that were already introduced in a previous paper. In the revised version, the author shortened this section and clarified the link to (Buchholz et al., 2022). The recommendation based on the lack of novelty would be appropriate for a top-tier conference. In its current state, the paper brings useful clarifications and new challenges to the ICA community, and as it falls perfectly within the scope of TMLR. I will thus support its publication. ==================================================
# A Distance-Based Anomaly Detection Framework For Deep Reinforcement Learning

Anonymous authors
Paper under double-blind review

## Abstract

In deep reinforcement learning (RL) systems, abnormal states pose significant risks by potentially triggering unpredictable behaviors and unsafe actions, thus impeding the deployment of RL systems in real-world scenarios. It is crucial for reliable decision-making systems to be able to raise an alert whenever they encounter unfamiliar observations that they are not equipped to handle. In this paper, we propose a novel Mahalanobis distance-based (MD) anomaly detection framework, called MDX, for deep RL algorithms. MDX simultaneously addresses random, adversarial, and out-of-distribution (OOD) state outliers in both offline and online settings. It utilizes the Mahalanobis distance within class-conditional distributions for each action and operates within a statistical hypothesis testing framework under the Gaussian assumption. We further extend it to robust and distribution-free versions by incorporating Robust MD and conformal inference techniques. Through extensive experiments on both Atari games and autonomous driving scenarios, we demonstrate the effectiveness of our MD-based detection framework. MDX offers a simple, unified, and practical tool for enhancing the safety and reliability of RL systems in real-world applications.

## 1 Introduction

Deep reinforcement learning (RL) algorithms vary considerably in their performance and are highly sensitive to a wide range of factors, including the environment, state observations, and hyper-parameters (Jordan et al., 2020; Patterson et al., 2020). The lack of robustness of RL algorithms hinders their deployment in real-world scenarios, particularly in safety-critical applications such as autonomous driving (Kiran et al., 2021). Recently, the reliability of RL algorithms has garnered substantial attention (Chan et al., 2020; Gu et al., 2024), emphasizing the need for anomaly detection-based strategies to build trustworthy RL systems (Haider et al., 2023; Danesh & Fern, 2021; Sedlmeier et al., 2020).

Practical Scenarios. Observed states often contain natural measurement errors (random noise), adversarial perturbations, and out-of-distribution (OOD) observations. For instance, consider an autonomous vehicle with malfunctioning or unreliable sensors or cameras. Under such circumstances, the collected data, such as the vehicle's observed location, can be contaminated by random measurement errors. Furthermore, an autonomous car can encounter adversarially manipulated sensory inputs, such as traffic signs: a stop sign maliciously altered to be misclassified as a speed limit sign (Chen et al., 2019) increases the risk of traffic accidents. Regarding OOD samples, an RL policy trained to drive only on sunny days will struggle with observations from rainy days, which are beyond its training experience. Such OOD observations can lead to safety violations, performance degradation, and potentially catastrophic failures. All these scenarios highlight the necessity of detecting inaccurate sensor signals from noisy state observations to ensure a vehicle's accurate and reliable operation. Beyond autonomous driving, anomaly detection is critical in many other applications involving sequential decision-making. In healthcare, an RL agent might adjust treatment recommendations if it detects a sudden change in the patient's health condition (Hu et al., 2022).
Similarly, detecting fraud and anomalous market states in financial systems is becoming increasingly instrumental in preventing substantial financial losses from market manipulation and fraudulent activities (Hilal et al., 2022).

Figure 1: (a) An autonomous car navigates using location data observed from sensors such as GPS. Without an effective anomaly detection mechanism, inaccuracies or malfunctions in these sensors can cause the car to prematurely turn right, leading to a collision. (b) and (c): Performance degradation occurs when noisy states are observed in the Breakout environment. Gaussian noises with increasing standard deviations are injected into the state observations during policy deployment (b) and policy learning (c).

Motivating Examples. Figure 1(a) illustrates a potential collision scenario where an autonomous car, relying on noisy location data in the red region (such as GPS coordinate errors), turns right prematurely, risking an accident. Without anomaly detection, the car reacts incorrectly due to the location error. Figure 1(b) highlights how increasing measurement errors, represented by the standard deviation of Gaussian noises, dramatically degrade policy performance. For instance, autonomous cars with RL systems may take sub-optimal or unsafe actions when processing noisy sensory signals in deployment. In addition, incorporating excessive noise during online training (Figure 1(c)) can severely impair policy learning and diminish performance. These motivating examples underscore the importance of detecting different types of abnormal states for developing trustworthy RL systems in real-world applications.

Our research aims to provide a general framework for applying anomaly detection to deep RL problems, including problem formulation, detection algorithms, and evaluation scenarios. Specifically, we strive to develop an effective and unified anomaly detection framework for deep RL in *both offline and online settings*.

1. **Offline Setting.** In this setting, a dataset is fixed without additional online data collection. Given a pre-trained policy, our objective is to utilize the fixed dataset to develop a distance-based anomaly detector tailored to this policy. The detector aims to effectively identify whether a state is an outlier.¹
2. **Online Setting.** In this setting, the RL agent interacts with a noisy environment and continuously updates its policy. Our goal is to develop a detection strategy that identifies state outliers, which lie outside the RL system's training experience. Removing these outliers can prevent them from interfering with policy training.

Methodologically, we first design an RL outlier detection approach using the Mahalanobis Distance (MD) (De Maesschalck et al., 2000) within a statistical hypothesis test framework and extend it to a robust MD version (Butler et al., 1993). These strategies are applied *in a parametric manner* under the Gaussian assumption for state features in each class, which may not always be accurate in practice. To address this limitation, we introduce a *non-parametric conformal version* of MD detection to relax the Gaussian assumption.
We empirically investigate the effectiveness of these proposed detection approaches in both offline and online settings across a representative set of RL environments, including Atari games and autonomous driving. Our contributions can be summarized as follows:

- Our primary technical contribution is the design of RL outlier detection strategies based on the concepts of Mahalanobis Distance (MD), robust MD, and conformal inference. The anomaly detection strategies are specially developed for deep RL within a hypothesis test framework, accommodating both parametric (Gaussian assumption) and non-parametric (conformal calibration) approaches.
- Secondly, in our online setting, our anomaly detection can be applied to a dynamic dataset, where the RL policy continually improves while interacting with the environment. This dynamic setting contrasts with the simpler anomaly detection in supervised learning with a static dataset. To address this challenge, we develop *moving window estimation* and *double self-supervised detectors* for anomaly detection in the online RL setting.
- To the best of our knowledge, we are the first to conduct a comprehensive study on distance-based anomaly detection in deep RL, covering all typical types of outliers. Our anomaly detectors can simultaneously identify random, adversarial, and out-of-distribution state outliers. We perform extensive experiments to verify the effectiveness of our proposed methods in both offline and online settings.

¹ Compared with the classical tasks of policy evaluation and learning in offline RL, our offline setting also utilizes a fixed dataset but specifically focuses on developing detection methods given a fixed policy.

## 2 Related Work

Anomaly Detection in Reinforcement Learning. Anomaly detection has yet to be extensively explored in RL. The connection between anomaly detection and RL was first established in (Müller et al., 2022); however, their work is mainly conceptual and does not propose practical detection algorithms. Change point detection has been investigated in the tabular setting of RL, particularly in environments described as doubly inhomogeneous under temporal non-stationarity and subject heterogeneity (Hu et al., 2022). They focus on identifying the "best data chunks" within the environment that exhibit similar dynamics for policy learning, while we focus on anomaly detection in *deep* RL scenarios. Prior studies have also probed anomaly detection in specific RL contexts, such as offline imitation learning with a transformer-based policy network (Wang et al., 2024) and detecting adversarial attacks within cooperative multi-agent RL (Kazari et al., 2023). However, these studies are limited to specific scenarios and do not address general anomaly detection, even in single-agent RL. Haider et al. (2023) proposed a model-based method using probabilistic dynamics models and bootstrapped ensembles, but this approach is computationally expensive. Our research aims to develop a unified and practical anomaly detection framework that applies to general RL scenarios.

Distance-based Anomaly Detection. Recently, there has been a growth of interest in developing anomaly detection strategies in deep learning scenarios (Pang et al., 2021; Elmrabit et al., 2020). In image classification, the Mahalanobis distance (MD) was effectively applied by Lee et al. (2018), who constructed a Mahalanobis confidence score by training a logistic regression detector using validation samples.
This score was evaluated in a supervised way, relying on *a validation set*, and it is thus unsuitable for the RL setting. The "tied" covariance assumption used by Lee et al. (2018), where class-conditional distributions of pre-trained features share the same covariance, was criticized as implausible by Kamoi & Kobayashi (2020) based on Gaussian discriminant analysis (Klecka et al., 1980). In contrast, our detection framework MDX avoids the unrealistic "tied covariance" assumption by estimating the variance for each class using quadratic discriminant analysis. This approach extends linear boundaries to quadratic ones between classes, offering more flexible and accurate detection (Hastie et al., 2009).

Robust Statistics for RL. Deep RL algorithms inherently face challenges related to instability and divergence due to the use of function approximation, bootstrapping, and off-policy learning (Sutton & Barto, 2018). Employing the Mahalanobis distance (MD) for anomaly detection can be particularly sensitive during unstable learning phases. The computation of MD is based on the Maximum Likelihood Estimate (MLE), which is susceptible to outliers or noisy data (Rousseeuw & Van Zomeren, 1990). Robust statistics (Huber, 2004) have been developed to address these robustness problems, especially by leveraging robust estimation techniques that are not unduly affected by outliers. For example, Robust MD is a robust version of MD that employs robust estimators, e.g., the Minimum Covariance Determinant (MCD) (Rousseeuw, 1984; Grübel, 1988), for location and covariance estimation (Maronna & Yohai, 2014).

Conformal Prediction and Conformal Anomaly Detection. Conformal anomaly detection (Ishimtsev et al., 2017; Laxhammar & Falkman, 2011) is based on conformal prediction (Teng et al., 2023; Angelopoulos et al., 2021), a popular, modern technique for providing valid prediction intervals for arbitrary machine learning models. Conformal prediction has garnered increasing attention as it provides a simple, distribution-free, and computationally effective way of tuning the distribution threshold. Its validity relies on the data exchangeability condition (Shafer & Vovk, 2008), where different orderings of samples are equally likely, but recent studies have verified its applicability in scenarios involving distribution shift (Tibshirani et al., 2019; Barber et al., 2023) and off-policy evaluation (Zhang et al., 2023). These examples justify the potential of using conformal inference to detect outliers in the context of RL.

## 3 Background

Markov Decision Process. The interaction of an agent with its environment can be modeled as a Markov Decision Process (MDP), a 5-tuple $(\mathcal{S}, \mathcal{A}, R, P, \gamma)$. $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $P: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]$ is the environment transition dynamics, $R: \mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}$ is the reward function, and $\gamma\in(0,1)$ is the discount factor. The policy $\pi$ is continually updated in this online interaction paradigm. Compared to the online setting, a recently popular paradigm for reinforcement learning is offline RL (Levine et al., 2020). In the offline setting, RL algorithms utilize previously collected data to extract policies without additional online data collection.

Proximal Policy Optimization (PPO). The policy gradient algorithm Proximal Policy Optimization (PPO) (Schulman et al., 2017) has achieved state-of-the-art or competitive performance on Atari games (Bellemare et al., 2013) and MuJoCo robotic tasks (Todorov et al., 2012).
Typical policy gradient algorithms optimize the expected reward function $\rho(\theta, s_0)=\mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_t)\mid s_0\right]$ by using the policy gradient theorem (Sutton & Barto, 2018), where $\pi_\theta$ is the $\theta$-parameterized policy function. Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and PPO (Schulman et al., 2017) utilize constraints and advantage estimation to perform the update by reformulating the original optimization problem with the surrogate loss $L(\theta)$ as:
$$L(\theta)=\mathbb{E}_{t}\left[\frac{\pi_{\theta}\left(s_{t},a_{t}\right)}{\pi_{\theta_{\text{old}}}\left(s_{t},a_{t}\right)}A_{\pi_{\theta_{\text{old}}}}\left(s_{t},a_{t}\right)\right],\tag{1}$$
where $A_{\pi_{\theta_{\text{old}}}}$ is the generalized advantage estimate (GAE) (Schulman et al., 2018). PPO introduces clipping in the objective function in order to penalize changes to the policy that make $\pi_\theta$ vastly different from $\pi_{\theta_{\text{old}}}$:
$$L^{\text{CLIP}}(\theta)=\mathbb{E}_{t}\left[\min\left(\frac{\pi_{\theta}\left(s_{t},a_{t}\right)}{\pi_{\theta_{\text{old}}}\left(s_{t},a_{t}\right)}A_{\pi_{\theta_{\text{old}}}}\left(s_{t},a_{t}\right),\operatorname{clip}\left(\frac{\pi_{\theta}\left(s_{t},a_{t}\right)}{\pi_{\theta_{\text{old}}}\left(s_{t},a_{t}\right)},1-\epsilon,1+\epsilon\right)A_{\pi_{\theta_{\text{old}}}}\left(s_{t},a_{t}\right)\right)\right],\tag{2}$$
where $\epsilon$ is a hyperparameter. We use PPO as the algorithmic testbed to examine the efficacy of our anomaly detection framework. However, our detection methods are general and can easily be applied to other RL algorithms (Zhang & Yu, 2020) such as DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DDPG (Lillicrap et al., 2016).

Conformal Prediction. Conformal anomaly detection (Laxhammar & Falkman, 2011; Ishimtsev et al., 2017) is grounded in conformal prediction (Shafer & Vovk, 2008; Angelopoulos et al., 2021), which aims to construct a confidence band $\mathcal{C}_{1-\alpha}(X)$ for $Y$ given a random data pair $(X,Y)\sim P$ and a confidence level $1-\alpha$. Suppose we have a pre-trained model $\widehat{\mu}$ and a calibration dataset $(X_1,Y_1),\ldots,(X_n,Y_n)$ for conformal prediction. We can then compute a predictive interval for the new sample $X_{n+1}$ that covers the unseen response $Y_{n+1}$ by leveraging the empirical quantiles of the residuals $|Y_i-\widehat{\mu}(X_i)|$ on the calibration dataset. This leads to valid prediction intervals such that:
$$\mathbb{P}(Y_{n+1}\in\mathcal{C}_{1-\alpha}(X_{n+1}))\geq 1-\alpha,\tag{3}$$
where the confidence band is expected to be as small as possible while maintaining the desired coverage. A fundamental quantity in conformal prediction is the *non-conformity measure*, e.g., the residual $|Y_i-\widehat{\mu}(X_i)|$, which measures how "different" an example is relative to a set of examples (Vovk et al., 2005).
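As a concrete illustration of this background, the following minimal sketch computes a split-conformal band of the form in Eq. (3). The regressor `mu_hat` and the synthetic calibration data are placeholders, not part of the paper's setup.

```python
# Split-conformal sketch: the band C_{1-alpha}(x) is mu_hat(x) +/- the
# empirical quantile of the non-conformity scores on the calibration set.
import numpy as np

def mu_hat(x):                                    # any pre-trained regressor
    return 2.0 * x

rng = np.random.default_rng(0)
x_cal = rng.uniform(size=500)
y_cal = 2.0 * x_cal + rng.normal(scale=0.1, size=500)

alpha = 0.1
scores = np.abs(y_cal - mu_hat(x_cal))            # residual non-conformity scores
n = len(scores)
# Finite-sample level: the ceil((n+1)(1-alpha))/n empirical quantile.
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q_hat = np.quantile(scores, level, method="higher")

x_new = 0.3
lo, hi = mu_hat(x_new) - q_hat, mu_hat(x_new) + q_hat
print(f"C_0.9({x_new}) = [{lo:.3f}, {hi:.3f}]")   # covers Y_new w.p. >= 0.9
```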
## 4 Anomaly Detection in the Offline RL Setting

In this section, we design our MD-based detection framework MDX in the offline setting, where a fixed dataset collected from the environment and a pre-trained RL policy are provided. Based on the Gaussian assumption, we introduce the basic Mahalanobis Distance (MD) detection strategy. We then extend it to the robust MD and conformal MD-based detection methods. Finally, we present the deployment of MDX in a potentially noisy environment.

Figure 2: **The detection pipeline of MDX**. We feed the state into the policy network to extract the feature vector and identify its class. For each class, we estimate (µ, Σ) and establish a detection threshold depicted as a dashed ellipse. To determine whether a new state is an outlier, we evaluate its features and compute the distance to the class centers. If the distance falls below the set threshold, the state is classified as an inlier (green points). Conversely, the state is marked as an outlier (red points).

Description of Detection Framework. Our detection framework is structured around two core components: feature extraction and detector estimation. The process begins by assessing whether a state is anomalous, which crucially depends on the associated policy. A state that prompts the policy to initiate a potentially unsafe action is labeled an outlier. Specifically, we input the state into the policy network and access the feature vector extracted from the penultimate layer of this network. We categorize states according to the actions determined by the policy; the underlying intuition is that states associated with the same action share similar features. To ascertain whether a new state is an outlier, we compute its distance from the established class centroids based on its feature vector. A state is deemed an outlier if the distance surpasses a set threshold. Figure 2 illustrates the operational flow of MDX. By ensuring that only states within the policy's capability are considered valid, MDX thereby enhances the safety and reliability of the RL system.

## 4.1 Mahalanobis Distance (MD)-Based Detection

Gaussian Assumption. The given pre-trained parameterized RL policy $\pi_\theta$ is a discriminative softmax classifier, $\pi(a_t=c\mid s_t)=\exp\left(w_c^\top f(s_t)+b_c\right)/\sum_{c'}\exp\left(w_{c'}^\top f(s_t)+b_{c'}\right)$, where $w_c$ and $b_c$ are the weight and bias of the policy classifier for action class $c$. The function $f(\cdot)$ represents the output of the penultimate layer of the policy network $\pi_\theta$, serving as the state feature vector. Here, $C=|\mathcal{A}|$ is the size of the action space, and $\mu_c$ is the mean vector of $f(s)$ corresponding to the action class $c$.² If we assume that the class-conditional distribution follows a multivariate Gaussian distribution sharing a single covariance $\Sigma$ (tied covariance) in a generative classifier, i.e., $\pi(f(s)\mid a=c)=\mathcal{N}(f(s)\mid\mu_c,\Sigma)$, then the posterior distribution of $f(s)$ matches the form of a discriminative softmax classifier (Lee et al., 2018). This equivalence implies that $f(s)$ fits a Gaussian distribution under $\pi_\theta$. We approximate state feature vectors with a class-conditional Gaussian distribution with $\mu_c$ and $\Sigma_c$ *for each action class*, rather than using a single "tied" covariance $\Sigma$ across all action classes (Kamoi & Kobayashi, 2020). An MD-based detection method can then be developed immediately from the mean vectors $\mu_c$ and covariance matrices $\Sigma_c$ calculated from $f(s)$ for each action class $c$. We first collect $N_c$ state-action pairs $\{(s_i,a_i)\}$ separately for each action class $c$, and compute the empirical class mean and covariance of $c$:
$$\widehat{\mu}_{c}=\frac{1}{N_{c}}\sum_{i:a_{i}=c}f\left(s_{i}\right),\quad\widehat{\Sigma}_{c}=\frac{1}{N_{c}}\sum_{i:a_{i}=c}\left(f\left(s_{i}\right)-\widehat{\mu}_{c}\right)\left(f\left(s_{i}\right)-\widehat{\mu}_{c}\right)^{\top}.\tag{4}$$
In distance-based detection, a straightforward metric is Euclidean distance (ED).
However, MD generally outperforms ED in many tasks (Lee et al., 2018; Ren et al., 2021; Kamoi & Kobayashi, 2020), as it incorporates the additional data covariance information to normalize the distance scales. Following the estimation in Eq. 4, we derive the class-conditional Gaussian distribution to characterize the data structure within the state representation space for each action class. For each state $s$ observed by the agent, we compute its Detection Mahalanobis Distance $M(s)$ between $s$ and the nearest class-conditional Gaussian distribution by:
$$M(s)=\min_{c}\left(f(s)-\widehat{\mu}_{c}\right)^{\top}\widehat{\Sigma}_{c}^{-1}\left(f(s)-\widehat{\mu}_{c}\right).\tag{5}$$
Unlike the previous work of Lee et al. (2018), which defined a Mahalanobis confidence score based on a binary classifier on a validation dataset, we utilize $M(s)$ as the detection metric within a statistical hypothesis test framework. Proposition 1 demonstrates that $M(s)$ follows a Chi-squared distribution under the Gaussian assumption.

Proposition 1. (Test Distribution of Detection Mahalanobis Distance $M(s)$) Let $f(s)$ be the $p$-dimensional state random vector for action class $c$. *Under the Gaussian assumption* $P(f(s)\mid a=c)=\mathcal{N}(f(s)\mid\mu_c,\Sigma_c)$, *the Detection Mahalanobis Distance* $M(s)$ *in Eq. 5 is Chi-squared distributed:* $M(s)\sim\chi^2_p$.

Please refer to Appendix A for the proof. Based on Proposition 1, we can define a threshold $\Theta=\chi^2_p(1-\alpha)$ by selecting an $\alpha$ value for the specified Chi-squared distribution to distinguish normal states from outliers. Given a new state observation $s$ and a confidence level $1-\alpha$, if $M(s)>\Theta$, $s$ is detected as an outlier.

## 4.2 Robust MD-Based Detection

Motivation. The estimation of $\mu_c$ and $\Sigma_c$ in Eq. 4 relies on the Maximum Likelihood Estimate (MLE), which is sensitive to the presence of outliers in the dataset (Rousseeuw & Van Zomeren, 1990). As the offline data collected from the environment tends to be noisy, directly introducing MD for outlier detection in RL easily results in a less statistically effective estimation of $\mu_c$ and $\Sigma_c$, thus undermining the detection accuracy for outliers. This vulnerability of the MD-based detector against noisy states prompts us to instantiate MDX with a more robust estimator (Huber, 2004). To this end, we apply the Minimum Covariance Determinant (MCD) estimator (Hubert & Debruyne, 2010) to estimate $\mu_c$ and $\Sigma_c$ using only a subset of all collected samples: the observations for which the determinant of the covariance matrix is as small as possible. Concretely, MCD determines the subset $J$ of observations with size $h$ that minimizes the determinant of the sample covariance matrix calculated solely from these $h$ points. The choice of $h$ determines the trade-off between the robustness and efficiency of the estimator. The robust MCD mean vector $\widehat{\mu}^{\text{rob}}_c$ and covariance matrix $\widehat{\Sigma}^{\text{rob}}_c$ in the action class $c$ are computed as
$$\widehat{\mu}_{c}^{\text{rob}}=\frac{1}{h}\sum_{i\in J,\,a_{i}=c}f\left(s_{i}\right),\quad J=\left\{\text{set of }h\text{ points}:\left|\widehat{\Sigma}_{J}\right|\leq\left|\widehat{\Sigma}_{K}\right|\text{ for all subsets }K\right\},\tag{6}$$
where we set $h$ as (number_of_samples + number_of_features + 1)/2 (Rousseeuw, 1984) and $K$ ranges over all subsets that contain $h$ points.

² For continuous action spaces, we can discretize the actions into several bins and then follow the same detection pipeline.
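A minimal sketch of the estimation and thresholding steps in Eqs. (4)-(6): per-action Gaussian fits (optionally the MCD estimator via scikit-learn's `MinCovDet`, which implements FAST-MCD), the nearest-class distance of Eq. (5), and the $\chi^2_p(1-\alpha)$ cutoff from Proposition 1. The feature and action arrays here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def fit_detector(features, actions, robust=False):
    """features: (N, p) penultimate-layer outputs; actions: (N,) action ids."""
    params = {}
    for c in np.unique(actions):
        f_c = features[actions == c]
        if robust:
            # MCD estimator of Eq. (6); sklearn solves it with FAST-MCD.
            mcd = MinCovDet().fit(f_c)
            mu, cov = mcd.location_, mcd.covariance_
        else:
            # MLE of Eq. (4) (np.cov uses N-1; immaterial for a sketch).
            mu, cov = f_c.mean(axis=0), np.cov(f_c, rowvar=False)
        params[c] = (mu, np.linalg.inv(cov))
    return params

def md_score(f_s, params):
    """Squared distance to the nearest class-conditional Gaussian, Eq. (5)."""
    return min((f_s - mu) @ prec @ (f_s - mu) for mu, prec in params.values())

# Usage: flag a state as an outlier at confidence level 1 - alpha = 0.95.
rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 8))
actions = rng.integers(0, 4, size=2000)
params = fit_detector(features, actions)
theta = chi2.ppf(0.95, df=features.shape[1])     # Theta = chi^2_p(1 - alpha)
print("outlier:", md_score(5 * rng.normal(size=8), params) > theta)
```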
In practice, the MCD estimator can be efficiently solved by the FAST-MCD algorithm (Hubert & Debruyne, 2010) instead of performing a brute-force search over ![5_image_0.png](5_image_0.png) Figure 3: Contours under the estimation based on MD and Robust MD across different outlier types on Breakout. Black and red points denote inliers and outliers, respectively. The dimension of state feature vectors after a pre-trained PPO policy is reduced by t-SNE (Van der Maaten & Hinton, 2008). all possible subsets. Akin to Mahalanobis Distance, we define the *Detection Robust Mahalanobis Distance* Mrob(s) as robust detection metric: $$M_{\rm rob}(s)=\min_{c}\left(f(s)-\widehat{\mu}_{c}^{\rm rob}\right)^{\top}\widehat{\Sigma}_{c}^{\rm rob-1}\left(f(s)-\widehat{\mu}_{c}^{\rm rob}\right).\tag{7}$$ Since the robust Mahalanobis distance can still approximate the true Chi-squared distribution (Hardin & Rocke, 2005), we still employ the threshold value Θ = χ 2 p (1 − α) for detecting outliers as in the MD case. Potential Advantages of Robust MD on Real Data. As a motivating example, Figure 3 displays contours computed by both MD and Robust MD detection methods for state feature vectors in the Breakout game from the popular Atari benchmark (Bellemare et al., 2013; Brockman et al., 2016) with different types of outliers. These results demonstrate that estimation based on Robust MD is less vulnerable to outlying states (red points) and better fits inliers (black points) than MD. This robust parameter estimation highlights the potential advantage of Robust MD for RL outlier detection, where the data used for estimation tends to be noisy. ## 4.3 Conformal Md-Based Detection Motivation. Although robust MD-based detection is less vulnerable to noise in RL environments, both MD and robust MD strategies heavily rely on the Gaussian assumption to construct the detection thresholds based on Proposition 1. This distribution assumption is often violated in practice, diminishing the effectiveness of MD and robust MD. In contrast, conformal prediction offers a mathematical framework that provides valid and rigorous prediction distribution without assuming a specific underlying data distribution. The resulting conformal anomaly detection circumvents the limitation of the distribution assumption, potentially improving the detection efficacy. In the context of RL, conformal anomaly detection evaluates how a state conforms to a model's current prediction distribution, thereby discriminating abnormal states. As a distribution-free detection approach, conformal anomaly detection can enhance the distance-based detectors by additionally tuning the anomaly threshold in the calibration dataset. To design the conformal anomaly detection method, we leverage the Detection Mahalanoibis Distance M(s) as the *non-conformity score*, which measures how dissimilar a state is from the instances in the calibration set. Following split conformal inference (Papadopoulos et al., 2002; Shafer & Vovk, 2008), we split the the previously collected offline dataset into the the calibration set Dcal and the evaluation set. A simple way is to evaluate the quantiles of the resulting empirical distribution to create the corresponding confidence band. 
Using the calibration set Dcal, we define the fitted quantiles Qbc1−α of the conformity scores for the action class c as follows: $$\widehat{Q}_{1-\alpha}^{c}=\inf\left\{q:\left(\frac{1}{N_{c}}\sum_{s_{i}\in\mathcal{D}_{\text{vol}},a_{i}=c}\mathbf{1}_{\{M^{c}(s_{i})\leq q\}}\right)\geq1-\alpha\right\},\tag{8}$$ where each (si, ai) is drawn from the calibration set Dcal and c is calculated by c = arg min Mc(si) in Mc(si) among all action classes. Finally, we use the class-dependent and well-calibrated detection thresholding Θ = Qbc1−α in conformal MD-based detection instead of χ 2 p (1 − α) used in MD and Robust MD strategies. ## 4.4 Md-Based Detection Algorithm In The Offline Setting Algorithm 1 summarizes the instantiation of MDX in the offline setting. We compute the (robust) mean vector and covariance matrix among the state feature vectors in the penultimate layer of πθ for each action class. Next, given a state observation s, we compute the detection Mahalanobis distance d = M(s) or d = Mrob(s). And compare it with the threshold Θ = χ 2 p (1−α) under the Gaussian assumption or Θ = Qbc1−α from distribution-free conformal quantiles. If d > Θ, s is detected as an outlier. Conversely, if d ≤ Θ, s is deemed as an inlier. Algorithm 1 MDX Detection Framework in the Offline Setting 1: **Input**: The given policy πθ, the dimension of state feature vectors p, and a confidence level 1 − α. 2: **Output**: Detection labels {ys} for each s in the evaluation trajectory. 3: / * Step 1: Detection Design by Estimating Mean and Covariance * / 4: Given state action pairs {(si, ai)} where ai ∼ πθ(·|si). 5: for each action class c do 6: if we choose MD detection **then** 7: Estimate µbc and Σbc via Eq. (4). / * Approach 1: MD Detection * / 8: **else if** we choose Robust MD detection **then** 9: Estimate µb rob c and Σbrob c via Eqs. (6) and (7). / * Approach 2: Robust MD Detection * / 10: **else** 11: Estimate µbc, Σbc via Eq. (4), calibrate Qbc1−α via Eq. (8) / * Approach 3: Conformal MD-based Detection * / 12: **end if** 13: **end for** 14: / * Step 2: Detection Deployment * / 15: for s in the noisy environment do 16: Compute distance d = M(s) or d = Mrob(s), and threshold Θ = χ 2 p (1 − α) or Θ = Qbc1−α. 17: Set Detection label ys = 1 if d > Θ else ys = −1. 18: **end for** ## 5 Anomaly Detection In The Online Rl Setting In the online RL setting (Sutton & Barto, 2018; Dong et al., 2020), a policy is updated continuously, unlike the fixed pre-trained policy used in our offline setting. Robust policy training with noisy states is crucial in safe RL, as the agents are more likely to encounter state outliers during training. In this section, we extend MDX to the online RL training scenario. Unlike the offline setting, the challenge here stems from the dynamic nature of policy updates, requiring our detector to adapt to the evolving distribution of feature vector outputs. The complexity increases when the improved policy starts gathering new samples through exploration, posing a fundamental challenge in an online RL framework. An effective detection system must differentiate between actual noisy observations and newly collected data through exploration. Training the RL agent and estimating the detector are interleaved in a noisy online environment. Various options for managing detected outliers during training include removing or denoising the outlier states. 
## 5 Anomaly Detection in the Online RL Setting

In the online RL setting (Sutton & Barto, 2018; Dong et al., 2020), a policy is updated continuously, unlike the fixed pre-trained policy used in our offline setting. Robust policy training with noisy states is crucial in safe RL, as agents are more likely to encounter state outliers during training. In this section, we extend MDX to the online RL training scenario. Unlike the offline setting, the challenge here stems from the dynamic nature of policy updates, requiring our detector to adapt to the evolving distribution of feature vector outputs. The complexity increases when the improved policy starts gathering new samples through exploration, posing a fundamental challenge in an online RL framework. An effective detection system must differentiate between actual noisy observations and newly collected data obtained through exploration. Training the RL agent and estimating the detector are interleaved in a noisy online environment. Various options exist for managing detected outliers during training, including removing or denoising the outlier states. In our detection framework, we focus on direct removal and assess the resulting learning curves in the presence of noisy states during the training process. To address the challenges of detecting abnormal states in the online training setting, we propose *Moving Window Estimation* and *Double Self-Supervised Detectors*, both of which are pivotal for the empirical success of our anomaly detection approach.

Moving Window Estimation. In the online setting, improving the policy $\pi_\theta$ causes a shift in the data distribution within the replay buffer as the agent interacts with the environment (Rolnick et al., 2019; Xiao et al., 2019). To effectively utilize information from the updated data distribution, we maintain a moving window to store experiences throughout the interaction steps. The window size can be adjusted to either prioritize a long historical context with a larger window or more recent experiences with a smaller one. Based on the constantly updated state feature vectors, $\mu_c$ and $\Sigma_c$ ($\mu^{\text{rob}}_c$ and $\Sigma^{\text{rob}}_c$) are continually re-estimated. This continuous updating allows us to accurately track the state feature distribution, ensuring that our detector remains sensitive to both recent and historical data shifts. A minimal sketch of this estimator is given after Algorithm 2 below.

Double Self-Supervised Detectors. Our primary detector is continually refined using self-detected inliers, while any detected outliers are promptly discarded. However, a more practical approach is to leverage these outliers to build a complementary detector for outliers. This secondary self-supervised detector validates the detection results from the primary detector. For example, if the primary detector classifies a state as an inlier and the secondary detector agrees that it is not an outlier, the state is confidently classified as such. Conversely, if the two detectors disagree, the state is randomly classified as either an outlier or an inlier.

Algorithm 2 MDX Detection Framework in the Online Setting, PPO Style
1: Initialize the policy network $\pi_\theta$ and the estimators $\widehat{\mu}_c$ and $\widehat{\Sigma}_c$ (or $\widehat{\mu}^{\text{rob}}_c$ and $\widehat{\Sigma}^{\text{rob}}_c$).
2: Initialize the confidence level $1-\alpha$, the window size $m$, and inlier and outlier buffers $B_I$, $B_O$.
3: for iteration = 1, 2, ..., K do
4:   for actor = 1, 2, ..., N do
5:     Run policy $\pi_\theta$ in the environment for $T$ timesteps.
6:     Compute the distance $d=M(s)$ or $d=M_{\text{rob}}(s)$, and the threshold $\Theta=\chi^2_p(1-\alpha)$ or $\Theta=\widehat{Q}^{c}_{1-\alpha}$.
7:     if $d\le\Theta$ then
8:       Add $s$ to $B_I$.
9:     else
10:      Add $s$ to $B_O$.
11:    end if
12:  end for
13:  Optimize the policy $\pi_\theta$ using inlier trajectories.
14:  Update $\widehat{\mu}_c$ and $\widehat{\Sigma}_c$ (or $\widehat{\mu}^{\text{rob}}_c$ and $\widehat{\Sigma}^{\text{rob}}_c$) of the two detectors based on $B_I$ and $B_O$, respectively, every $N_c$ samples.
15: end for
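A minimal sketch of the moving-window estimation used in Algorithm 2: per-action deques of recent feature vectors whose $(\widehat{\mu}_c,\widehat{\Sigma}_c^{-1})$ are refreshed every `refresh` new samples. The window size, refresh rate, and the small diagonal regularizer are illustrative assumptions, not values from the paper.

```python
from collections import defaultdict, deque
import numpy as np

class MovingWindowEstimator:
    """Tracks per-class (mu_c, Sigma_c^-1) over a sliding window of features."""

    def __init__(self, m=5000, refresh=256):
        self.windows = defaultdict(lambda: deque(maxlen=m))  # class -> features
        self.counts = defaultdict(int)
        self.refresh = refresh
        self.params = {}  # class -> (mean, inverse covariance)

    def add(self, feature, action):
        self.windows[action].append(feature)
        self.counts[action] += 1
        # Re-estimate after every `refresh` new samples for this class,
        # so the detector tracks the shifting feature distribution.
        if self.counts[action] % self.refresh == 0:
            f_c = np.asarray(self.windows[action])
            cov = np.cov(f_c, rowvar=False) + 1e-6 * np.eye(f_c.shape[1])
            self.params[action] = (f_c.mean(axis=0), np.linalg.inv(cov))
```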
To update our double detectors, inliers and outliers are stored in buffers BI and BO, respectively. For each class, a window size m is specified. Within each class, the state-action pairs in the window are used to estimate µbc and Σbc (µb rob c and Σbrob c). These parameters are updated after every Nc newly collected data points in the window for action class c. This adaptive updating mechanism ensures that the detectors remain responsive to evolving data distributions. Online Anomaly Detection Procedure. In a real-time scenario like a recommendation system, we typically first deploy a pre-trained policy as a warm start in the online system to provide initial recommendations for each user. Feedback from users, such as the click-through rate (CTR), is then observed to update the online policy iteratively. Similarly, within our online detection algorithm, we pre-train a policy as a warm start. After pre-training, the policy is introduced to the noisy environment for further online learning. Throughout this process, our MDX framework is used to identify outliers in the subsequent training phases. We then evaluate the training performance of algorithms equipped with these detection mechanisms. This systematic approach facilitates the gradual refinement of the policy while concurrently integrating outlier detection to enhance robustness in real-world settings. ## 6 Experiments We first conduct experiments on six typical Atari games (Bellemare et al., 2013) to verify the effectiveness of our MDX framework in both offline and online settings. The Atari games are divided into two different groups. The first group includes Breakout, Asterix, and SpaceInvaders, which feature nearly static backgrounds. Enduro, FishingDerby, and Tutankham in the second group have time-changing or dramatically different backgrounds, presenting more challenging scenarios. We further conduct experiments on autonomous driving environments (Dosovitskiy et al., 2017) as one potential application. We select Proximal Policy Optimization (PPO) (Schulman et al., 2017) as our baseline RL algorithm. Three Types of Outliers. (1) Random Outliers. We generate random outliers by adding Gaussian noise with zero mean and different standard deviations on state observations, simulating natural measurement errors. **(2) Adversarial Outliers.** We perform white-box adversarial perturbations (Goodfellow et al., 2014b; Szegedy et al., 2013; Cao et al., 2020) on state observations for the current policy, following the strategy proposed in (Huang et al., 2017; Pattanaik et al., 2017). Particularly, we denote a t w as the "worst" action, with the lowest probability from the current policy πt(a|s). The optimal adversarial perturbation ηt, constrained in an ϵ-ball, can be derived by minimizing the objective function J: minη J(st + *η, π*t) = −Pn i=1 p t i log πt(ai|st + η), s.t.∥η∥ ≤ ϵ, where p t w = 1 and p t i = 0 for i ̸= w. We solve this minimization problem with the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014b), a typical adversarial attack method in the deep learning literature. The resulting adversarial outliers st + η ∗ t force the policy to choose a t w. **(3) Out-of-Distribution (OOD) outliers.** OOD outliers arise from the disparity in data distribution across different environments. To simulate them, we randomly select states from other environments and introduce them to the current environment. 
Baseline Methods. A fundamental obstacle in assessing anomaly detection strategies in RL lies in the scarcity of baselines in deep RL settings, as introduced in Section 2. To rigorously substantiate the effectiveness of MDX, we initiate our evaluation by comparing it with the foundational baselines we have developed ourselves. (1) **Euclidean distance (ED)** assumes that all features are independent under the Gaussian assumption with unit standard deviation, which can be considered a simplified version of our MD method with an identity covariance matrix. (2) **MD with Tied covariance (TMD)** follows the tied covariance assumption in (Lee et al., 2018), where features among all action classes share a single covariance matrix estimate. (3) **MD** is our first proposed method with the class-conditional Gaussian assumption. (4) **Robust MD (RMD)** is the robust variant of MD under the Gaussian assumption. (5) **MD+C** uses well-calibrated conformality scores to construct a valid empirical distance distribution instead of relying on the Chi-Squared distribution established upon the Gaussian assumption.

| Detection Accuracy (%) | Outliers | ED | TMD | MD | RMD | MD+C |
|---|---|---|---|---|---|---|
| Breakout | Random | 53.2 | 60.0 | 64.0 | 71.2 | 62.8 |
| | Adversarial | 83.8 | 89.1 | 91.0 | 80.4 | 92.3 |
| | OOD | 50.0 | 47.6 | 50.5 | 78.7 | 51.5 |
| Asterix | Random | 44.3 | 46.0 | 59.6 | 71.2 | 54.8 |
| | Adversarial | 84.2 | 85.5 | 91.3 | 75.8 | 93.7 |
| | OOD | 40.1 | 40.8 | 45.9 | 65.2 | 49.7 |
| SpaceInvader | Random | 52.1 | 66.2 | 72.3 | 79.2 | 70.4 |
| | Adversarial | 72.4 | 91.2 | 95.9 | 83.4 | 96.4 |
| | OOD | 45.2 | 56.6 | 51.4 | 83.2 | 50.2 |
| Enduro | Random | 49.0 | 51.6 | 60.2 | 78.5 | 51.6 |
| | Adversarial | 93.9 | 90.8 | 95.2 | 80.4 | 97.5 |
| | OOD | 57.0 | 62.8 | 69.8 | 80.3 | 53.2 |
| FishingDerby | Random | 49.1 | 66.3 | 69.2 | 85.6 | 65.3 |
| | Adversarial | 85.3 | 92.9 | 97.5 | 87.4 | 97.4 |
| | OOD | 51.1 | 55.9 | 59.2 | 75.7 | 57.9 |
| Tutankham | Random | 50.0 | 47.9 | 49.2 | 77.0 | 52.2 |
| | Adversarial | 61.1 | 88.9 | 94.2 | 78.7 | 87.2 |
| | OOD | 55.0 | 86.6 | 92.0 | 78.7 | 77.8 |
| Average | Random | 49.6 | 56.3 | 62.4 | 77.1 | 59.5 |
| | Adversarial | 80.1 | 89.7 | 94.2 | 81.0 | 94.0 |
| | OOD | 49.7 | 58.4 | 61.5 | 76.9 | 56.7 |
| | Average | 59.8 | 68.1 | 72.7 | 78.3 | 70.1 |

Table 1: **Average detection accuracy** of MD, RMD, and MD+C compared with baselines across different outlier types in all six environments in the **offline** setting. The averages are computed across environments and outlier types. Accuracy is determined by applying detection techniques to balanced data composed equally of clean and noisy states.

## 6.1 Anomaly Detection in the Offline Setting

In the offline setting, we randomly split the states from the given dataset into calibration and evaluation sets, each containing 50% of the data. The calibration set is used to construct our detectors, and the evaluation set is used for testing. We first use PCA to reduce the state feature vectors to a 50-dimensional space.
Main results. Table 1 shows the detection accuracy of MDX instantiated with MD, robust MD, and conformal MD with α = 0.05 across a wide range of outlier types on each game. A higher accuracy indicates a more successful identification of anomalies by the evaluated detection method. Detailed results for settings with different noise strengths are provided in Table 4 of Appendix B.1. We conclude that: (1) All MD-based methods, i.e., TMD, MD, RMD, and MD+C, outperform ED, confirming the usefulness of covariance matrix information in RL outlier detection. (2) Robust MD generally performs the best, significantly surpassing MD and other methods in detecting random and OOD outliers. Nonetheless, robust MD is not effective enough at detecting adversarial outliers. (3) MD+C excels at identifying adversarial outliers and performs similarly to MD in other scenarios.

Sensitivity Analysis on Feature Dimension Reduction. We provide a sensitivity analysis regarding the number of feature dimensions retained by PCA, showing that the detection accuracy for all considered outliers tends to improve as the number of principal components increases. This indicates that better detection performance can be achieved with higher feature dimensions. The detailed results are presented in Appendix B.4.

Effectiveness of Robust MD. In robust statistics, it is typically concluded that robust estimation separates outlier states more distinctly from inlier states. By comparing the Mahalanobis distance distributions of inliers and outliers under both MD and Robust MD, we show that this conclusion also applies to the RL anomaly detection scenario. This effect explains the detection advantage of robust MD in RL. Detailed results are provided in Appendix B.3.

## 6.2 Anomaly Detection In The Online Setting

The PPO agent, utilizing multiple processes as detailed in the original PPO algorithm (Schulman et al., 2017), runs eight independent environments in parallel, and we introduce state outliers into four of these environments. For random and adversarial outliers, actions are determined by the PPO policy network πθ. For OOD outliers, due to the potential differences in action spaces between the original environment and the OOD environment, we select OOD states from the OOD environment by taking random actions within its own action space. For the Robust MD method, we use PCA to reduce state feature vectors into a 50-dimensional space due to its expensive computation. For the other methods, we use the original feature vectors output from the penultimate layer of πθ. Results are averaged over three seeds with hyperparameters given in Table 5 of Appendix C.1. When our detectors identify an outlier, it is removed from training. We compare the resulting learning curves for the different detection methods.

Additional Baselines. We add two further baselines serving as performance upper and lower bounds. (1) For an ideal baseline, the method **Auto** automatically deletes true state outliers, showing the optimal training performance of algorithms *without the interruption from outliers*.
(2) At the other extreme, **Random** uses a totally random detector that labels a state as an inlier or outlier with a probability of 0.5.

Main Results. Figure 4 presents learning curves of cumulative rewards (first row) based on the PPO algorithm and the corresponding detection F1 scores (second row) for all tested detection methods across three types of outliers on the Tutankham game. To better highlight their differences, we omit the confidence bands in Figure 4, while providing similar results on all six Atari games with confidence bands in Figures 13 to 18 of Appendix C.1 for reference.

![11_image_0.png](11_image_0.png)

Figure 4: **Learning curves and detection performance** across various state outliers in online learning on Tutankham. "Mean Score" in the first row indicates the cumulative rewards, and "F1 Score" in the second row evaluates the detection performance during training. We present the average results over three random seeds while omitting confidence bands for a clearer comparison.

For each outlier type in Table 2, we evaluate the **superiority rank** of all detectors regarding the F1 score and policy performance, where rank 1 indicates the best performance; a sketch of this computation is given after Table 2. A smaller superiority rank implies more effective detection. Our conclusions are as follows: (1) Conformal MD (MD+C) generally achieves the best detection performance across all considered baselines (except Auto). The superiority of MD+C over MD highlights the crucial role of accurately calibrated thresholds in the online RL detection setting. (2) RMD is less effective than MD and MD+C, performing only on par with TMD and ED. This degradation is due to *the information loss in the dimension reduction of the feature vectors* for reducing the computational cost. Thus, MD+C and MD are preferable to RMD in the computationally demanding online RL setting. (3) The average superiority ranks of all considered detectors are similar in terms of performance and F1 score, verifying the consistency of our results.

Ablation Study on Double Self-Supervised Detectors. We conduct an ablation study of double self-supervised detectors on Breakout with random and OOD outliers. Results in Figure 19 of Appendix C.2 show that double self-supervised detectors reduce detection errors and improve detection accuracy.

Ablation Study on Outlier Proportions. We also demonstrate robust detection performance across different proportions of outliers encountered by the agent during training. We conduct experiments on Breakout, and the results are provided in Figure 20 of Appendix C.3.

| Metric | Outlier Type | Random | ED | TMD | MD | RMD | MD+C |
|---|---|---|---|---|---|---|---|
| Performance | Random | 5.7 | 2.9 | 4.1 | 2.5 | 3.9 | **1.9** |
| | OOD | 5.8 | 4.5 | 2.0 | 3.2 | 4.1 | **1.4** |
| | Adversarial | 5.7 | 3.3 | 4.5 | 3.0 | 3.2 | **1.3** |
| | Average (All) | 5.7 | 3.6 | 3.4 | 2.8 | 3.8 | **1.6** |
| F1 Score | Random | 6.0 | 3.1 | 4.0 | 3.3 | 3.4 | **1.2** |
| | OOD | 6.0 | 4.8 | 2.0 | 2.5 | 4.1 | **1.6** |
| | Adversarial | 6.0 | 4.2 | 3.3 | 2.8 | 3.7 | **1.0** |
| | Average (All) | 6.0 | 3.9 | 3.1 | 2.9 | 3.7 | **1.3** |

Table 2: The average superiority rank (1 is best) of anomaly detection methods across all types of outliers in all six environments. Numbers in bold represent the best method.
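For concreteness, the superiority ranks in Table 2 amount to ranking detectors within each game and averaging the ranks across games; a minimal sketch (the array layout is an assumption for illustration):

```python
import numpy as np
from scipy.stats import rankdata

def superiority_rank(scores):
    """Average superiority rank as reported in Table 2 (a sketch).

    scores: (n_games, n_detectors) array holding, for one outlier type, either
    the detection F1 scores or the final mean scores; higher is better.
    The best detector in each game receives rank 1, and ranks are then
    averaged across games.
    """
    ranks = rankdata(-scores, method="average", axis=1)  # negate so best gets rank 1
    return ranks.mean(axis=0)
```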
![12_image_0.png](12_image_0.png) (a) Original state (b) Random outlier (c) Adversarial outlier (d) OOD outlier (rainy) (e) OOD outlier (night) Figure 5: The clean and noisy state observations in autonomous driving experiments. ## 6.3 Autonomous Driving Environment To verify the broader applicability of our method, we perform experiments on autonomous driving environments and introduce practical scenarios in which all three types of anomalies commonly occur. Random Noise. Malfunctioning sensors or cameras can introduce random noise into signal observations. For instance, a faulty camera lens may produce distorted images, while a malfunctioning LiDAR sensor might generate erroneous depth measurements. Such random noise can impair the reliability of perception systems in autonomous vehicles. Adversarial Attacks. Adversarial attacks involve intentionally manipulating input signals to disrupt the functioning of RL systems. In the context of autonomous driving, an attacker might tamper with sensor data or traffic signs, resulting in misleading observations and potentially hazardous driving behavior. Adversarial states thus pose a significant threat to the robustness and safety of autonomous driving systems. Out-of-Distribution (OOD) States. Consider a scenario where an RL policy is trained exclusively under sunny weather. Encountering rainy weather poses a challenge, as the observations captured under these conditions deviate from the training data distribution. Such observations are therefore considered Out-ofDistribution (OOD) states. Experimental Setup. We conduct experiments using the CARLA environment (Dosovitskiy et al., 2017). CARLA is an open-source simulator for autonomous driving research known for its high-quality rendering and realistic physics. The environment includes 3D models of static objects, such as buildings, vegetation, traffic signs, and infrastructure, as well as dynamic objects, such as vehicles and pedestrians. The task is to drive safely through the town. In each episode, the vehicle must reach a given goal without collision. The episode ends when the vehicle reaches the goal, collides with an obstacle, or exceeds the time limit. Noisy State Observations. Following the approach used in Atari game settings, we introduce Gaussian noise to simulate random outliers and generate adversarial outliers using adversarial perturbations. For OOD outliers, we leverage CycleGAN-Turbo (Zhu et al., 2017; Parmar et al., 2024), a technique designed for adapting a single-step diffusion model (Ho et al., 2020) to new tasks and domains through adversarial learning (Goodfellow et al., 2014a). This method can perform various image-to-image translation tasks and outperforms existing GAN-based and diffusion-based methods for various scene translation tasks, such as day-to-night conversion and adding/removing weather effects like fog, snow, and rain (Parmar et al., 2024). Specifically, we use CycleGAN-Turbo to create **rainy** and **nighttime** outliers. Examples of different anomaly states are presented in Figure 5. Main Results. Given a fixed dataset and a pre-trained policy, we assess our detection methods across the three types of outliers. 
Table 3 shows the average accuracy, with MD+C achieving the highest performance in most scenarios, while RMD performs best in the presence of small random noises. These results suggest that our proposed method effectively detects outliers in realistic problems, such as autonomous driving.

| Detection Accuracy (%) | ED | TMD | MD | RMD | MD+C |
|---|---|---|---|---|---|
| Random (std ∈ [0.005, 0.06]) | 50.0 | 60.8 | 69.8 | 72.1 | 61.7 |
| Random (std ∈ (0.06, 0.3]) | 50.0 | 95.0 | 95.2 | 73.6 | 96.0 |
| Adversarial | 50.0 | 96.7 | 95.3 | 73.8 | 97.5 |
| OOD (Rain) | 50.0 | 96.5 | 95.5 | 74.4 | 97.5 |
| OOD (Night) | 50.0 | 96.5 | 95.5 | 74.3 | 97.5 |

Table 3: Detection accuracy on the CARLA *town* environment over three types of outliers.

## 7 Discussions And Conclusion

In this paper, we present the first detailed study of a distance-based anomaly detection framework in deep RL, considering random, adversarial, and OOD state outliers in both offline and online settings. The primary detection backbone is based on the Mahalanobis distance, and we extend it to robust and distribution-free versions by leveraging robust estimation and conformal prediction techniques. Experiments on Atari games and the autonomous driving environment demonstrate the effectiveness of our proposed methods in detecting the three types of outliers. The conformal MD method achieves the best detection performance in most scenarios, especially in the online setting. Our research contributes to developing safe and trustworthy RL systems for real-world applications.

Limitations and Future Work. In the online setting, especially with a high proportion of outliers, it may be preferable to denoise the detected state outliers via some neighboring smoothing techniques, e.g., mixup (Wang et al., 2020; Zhang et al., 2018), rather than deleting them directly as performed in this paper. To relax the Gaussian assumption in the hypothesis test of our detection, we can consider other non-parametric methods, such as one-class support vector machines (Choi, 2009) or isolation forests (Liu et al., 2008). A substantial challenge that remains for future work is to devise a more informed detector to distinguish between real "bad" outliers that can cause truly misleading actions and "good" *new samples* collected through exploration, which can potentially benefit the policy learning, especially for image inputs (Zhang & Ranganath, 2023).

## References

Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In *International Conference on Learning Representations*, 2021.

Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Conformal prediction beyond exchangeability. *The Annals of Statistics*, 51(2):816–845, 2023.

M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, June 2013.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016. URL https://arxiv.org/abs/1606.01540.

RW Butler, PL Davies, and M Jhun. Asymptotics for the minimum covariance determinant estimator. *The Annals of Statistics*, pp. 1385–1400, 1993.

Yuanjiang Cao, Xiaocong Chen, Lina Yao, Xianzhi Wang, and Wei Emma Zhang.
Adversarial attacks and detection on reinforcement learning-based interactive recommender systems. In *Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 1669–1672, 2020.

Stephanie C.Y. Chan, Samuel Fishman, Anoop Korattikara, John Canny, and Sergio Guadarrama. Measuring the reliability of reinforcement learning algorithms. In *International Conference on Learning Representations*, 2020.

Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng Chau. ShapeShifter: Robust physical adversarial attack on faster R-CNN object detector. In *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part I 18*, pp. 52–68. Springer, 2019.

Young-Sik Choi. Least squares one-class support vector machine. *Pattern Recognition Letters*, 30(13):1236–1240, 2009.

Mohamad H Danesh and Alan Fern. Out-of-distribution dynamics detection: RL-relevant benchmarks and results. *ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning*, 2021.

Roy De Maesschalck, Delphine Jouan-Rimbaud, and Désiré L Massart. The Mahalanobis distance. *Chemometrics and Intelligent Laboratory Systems*, 50(1):1–18, 2000.

Hao Dong, Zihan Ding, and Shanghang Zhang. *Deep Reinforcement Learning: Fundamentals, Research and Applications*. Springer Nature, 2020.

Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In *Conference on Robot Learning*, pp. 1–16. PMLR, 2017.

Nebrase Elmrabit, Feixiang Zhou, Fengyin Li, and Huiyu Zhou. Evaluation of machine learning algorithms for anomaly detection. In *2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security)*, pp. 1–8. IEEE, 2020.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in Neural Information Processing Systems*, 27, 2014a.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *International Conference on Learning Representations (ICLR)*, 2014b.

R Grübel. A minimal characterization of the covariance matrix. *Metrika*, 35(1):49–52, 1988.

Shangding Gu, Long Yang, Yali Du, Guang Chen, Florian Walter, Jun Wang, and Alois Knoll. A review of safe reinforcement learning: Methods, theory and applications, 2024. URL https://arxiv.org/abs/2205.10330.

Tom Haider, Karsten Roscher, Felippe Schmoeller da Roza, and Stephan Günnemann. Out-of-distribution detection for reinforcement learning agents with probabilistic dynamics models. In *Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems*, pp. 851–859, 2023.

Johanna Hardin and David M Rocke. The distribution of robust distances. *Journal of Computational and Graphical Statistics*, 14(4):928–946, 2005.

Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. *The elements of statistical learning: data mining, inference, and prediction*, volume 2. Springer, 2009.

Waleed Hilal, S Andrew Gadsden, and John Yawney. Financial fraud: a review of anomaly detection techniques and recent advances. *Expert Systems With Applications*, 193:116429, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. *Advances in Neural Information Processing Systems*, 33:6840–6851, 2020.
Liyuan Hu, Mengbing Li, Chengchun Shi, Zhenke Wu, and Piotr Fryzlewicz. Doubly inhomogeneous reinforcement learning, 2022. URL https://arxiv.org/abs/2211.03983.

Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. *International Conference on Learning Representations (ICLR) workshop*, 2017.

Peter J Huber. *Robust statistics*, volume 523. John Wiley & Sons, 2004.

Mia Hubert and Michiel Debruyne. Minimum covariance determinant. *Wiley Interdisciplinary Reviews: Computational Statistics*, 2(1):36–43, 2010.

Vladislav Ishimtsev, Alexander Bernstein, Evgeny Burnaev, and Ivan Nazarov. Conformal k-NN anomaly detector for univariate data streams. In *Conformal and Probabilistic Prediction and Applications*, pp. 213–227. PMLR, 2017.

Scott Jordan, Yash Chandak, Daniel Cohen, Mengxue Zhang, and Philip Thomas. Evaluating the performance of reinforcement learning algorithms. In *International Conference on Machine Learning*, pp. 4962–4973. PMLR, 2020.

Ryo Kamoi and Kei Kobayashi. Why is the Mahalanobis distance effective for anomaly detection?, 2020. URL https://arxiv.org/abs/2003.00402.

Kiarash Kazari, Ezzeldin Shereen, and György Dán. Decentralized anomaly detection in cooperative multiagent reinforcement learning. In *IJCAI*, pp. 162–170, 2023.

B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A Al Sallab, Senthil Yogamani, and Patrick Pérez. Deep reinforcement learning for autonomous driving: A survey. *IEEE Transactions on Intelligent Transportation Systems*, 23(6):4909–4926, 2021.

William R Klecka, Gudmund R Iversen, and William R Klecka. *Discriminant analysis*, volume 19. Sage, 1980.

Rikard Laxhammar and Göran Falkman. Sequential conformal anomaly detection in trajectories based on Hausdorff distance. In *14th International Conference on Information Fusion*, pp. 1–8. IEEE, 2011.

Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. *Advances in Neural Information Processing Systems*, 31, 2018.

Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020. URL https://arxiv.org/abs/2005.01643.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *International Conference on Learning Representations (ICLR)*, 2016.

Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In *2008 Eighth IEEE International Conference on Data Mining*, pp. 413–422. IEEE, 2008.

Ricardo A Maronna and Víctor J Yohai. Robust estimation of multivariate location and scatter. *Wiley StatsRef: Statistics Reference Online*, pp. 1–12, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International Conference on Machine Learning*, pp. 1928–1937. PMLR, 2016.

Robert Müller, Steffen Illium, Thomy Phan, Tom Haider, and Claudia Linnhoff-Popien. Towards anomaly detection in reinforcement learning.
In *Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems*, pp. 1799–1803, 2022.

Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review. *ACM Computing Surveys (CSUR)*, 54(2):1–38, 2021.

Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive confidence machines for regression. In *Machine Learning: ECML 2002: 13th European Conference on Machine Learning, Helsinki, Finland, August 19–23, 2002, Proceedings 13*, pp. 345–356. Springer, 2002.

Gaurav Parmar, Taesung Park, Srinivasa Narasimhan, and Jun-Yan Zhu. One-step image translation with text-to-image models, 2024. URL https://arxiv.org/abs/2403.12036.

Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks. *Advances in Neural Information Processing Systems*, 2017.

Andrew Patterson, Samuel Neumann, Martha White, and Adam White. Draft: Empirical design in reinforcement learning. *Journal of Artificial Intelligence Research*, 1, 2020.

Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. A simple fix to Mahalanobis distance for improving near-OOD detection. *International Conference on Machine Learning (ICML) workshop*, 2021.

David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.

Peter J Rousseeuw. Least median of squares regression. *Journal of the American Statistical Association*, 79(388):871–880, 1984.

Peter J Rousseeuw and Bert C Van Zomeren. Unmasking multivariate outliers and leverage points. *Journal of the American Statistical Association*, 85(411):633–639, 1990.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *Proceedings of The 32nd International Conference on Machine Learning*, pp. 1889–1897. PMLR, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/1707.06347.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. *International Conference on Learning Representations*, 2018.

Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, and Claudia Linnhoff-Popien. Uncertainty-based out-of-distribution classification in deep reinforcement learning. In *Proceedings of the 12th International Conference on Agents and Artificial Intelligence*. SCITEPRESS - Science and Technology Publications, 2020. doi: 10.5220/0008949905220529.

Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. *Journal of Machine Learning Research*, 9(3), 2008.

Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT Press, 2018.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *International Conference on Learning Representations (ICLR)*, 2013.

Jiaye Teng, Chuan Wen, Dinghuai Zhang, Yoshua Bengio, Yang Gao, and Yang Yuan. Predictive inference with feature conformal prediction.
In *The Eleventh International Conference on Learning Representations*, 2023.

Ryan J Tibshirani, Rina Foygel Barber, Emmanuel Candes, and Aaditya Ramdas. Conformal prediction under covariate shift. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In *2012 IEEE/RSJ International Conference on Intelligent Robots and Systems*, pp. 5026–5033, 2012. doi: 10.1109/IROS.2012.6386109.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. *Journal of Machine Learning Research*, 9(11), 2008.

Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. *Algorithmic learning in a random world*, volume 29. Springer, 2005.

Chen Wang, Sarah Erfani, Tansu Alpcan, and Christopher Leckie. OIL-AD: An anomaly detection framework for sequential decision sequences, 2024. URL https://arxiv.org/abs/2402.04567.

Kaixin Wang, Bingyi Kang, Jie Shao, and Jiashi Feng. Improving generalization in reinforcement learning with mixture regularization. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 7968–7978. Curran Associates, Inc., 2020.

Edwin B Wilson and Margaret M Hilferty. The distribution of chi-square. *Proceedings of the National Academy of Sciences of the United States of America*, 17(12):684, 1931.

Wei Xiao, Xiaolin Huang, Fan He, Jorge Silva, Saba Emrani, and Arin Chaudhuri. Online robust principal component analysis with change point detection. *IEEE Transactions on Multimedia*, 22(1):59–68, 2019.

Hongming Zhang and Tianyang Yu. Taxonomy of reinforcement learning algorithms. *Deep Reinforcement Learning: Fundamentals, Research and Applications*, pp. 125–133, 2020.

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In *International Conference on Learning Representations*, 2018.

Lily H Zhang and Rajesh Ranganath. Robustness to spurious correlations improves semantic out-of-distribution detection. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 15305–15312, 2023.

Yingying Zhang, Chengchun Shi, and Shikai Luo. Conformal off-policy prediction. In *International Conference on Artificial Intelligence and Statistics*, pp. 2751–2768. PMLR, 2023.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2223–2232, 2017.

## A Proof Of Proposition 1

Proof. We show that for each action class $c$, the squared Mahalanobis distance $d$ follows the same Chi-squared distribution under the Gaussian assumption. Without loss of generality, we denote $\mu$ and $\Sigma$ as the mean and covariance matrix of the closest class-conditional Gaussian distribution. We need to show that $d = (f(\mathbf{s}) - \mu)^{\top}\Sigma^{-1}(f(\mathbf{s}) - \mu)$ is Chi-squared distributed. Firstly, by eigenvalue decomposition, we have

$$\Sigma^{-1}=\sum_{k=1}^{p}\lambda_{k}^{-1}u_{k}u_{k}^{\top},\tag{9}$$

where $\lambda_k$ and $u_k$ are the $k$-th eigenvalue and eigenvector of $\Sigma$. Plugging it into the form of $d$, we immediately obtain
$$\begin{aligned}
d&=(f(\mathbf{s})-\mu)^{\top}\Sigma^{-1}(f(\mathbf{s})-\mu)\\
&=(f(\mathbf{s})-\mu)^{\top}\Big(\sum_{k=1}^{p}\lambda_{k}^{-1}u_{k}u_{k}^{\top}\Big)(f(\mathbf{s})-\mu)\\
&=\sum_{k=1}^{p}\lambda_{k}^{-1}(f(\mathbf{s})-\mu)^{\top}u_{k}u_{k}^{\top}(f(\mathbf{s})-\mu)\\
&=\sum_{k=1}^{p}\left[\lambda_{k}^{-\frac{1}{2}}u_{k}^{\top}(f(\mathbf{s})-\mu)\right]^{2}=\sum_{k=1}^{p}\mathbf{X}_{k}^{2},
\end{aligned}\tag{10}$$

where $\mathbf{X}_{k}=\lambda_{k}^{-\frac{1}{2}}u_{k}^{\top}(f(\mathbf{s})-\mu)$ is a new Gaussian variable that results from a linear transform of the Gaussian variable $f(\mathbf{s})\sim\mathcal{N}(\mu,\Sigma)$. Therefore, the resulting variance $\sigma_{k}^{2}$ can be derived as

$$\sigma_{k}^{2}=\lambda_{k}^{-\frac{1}{2}}u_{k}^{\top}\Sigma\lambda_{k}^{-\frac{1}{2}}u_{k}=\lambda_{k}^{-1}u_{k}^{\top}\Big(\sum_{j=1}^{p}\lambda_{j}u_{j}u_{j}^{\top}\Big)u_{k}=\sum_{j=1}^{p}\lambda_{k}^{-1}\lambda_{j}u_{k}^{\top}u_{j}u_{j}^{\top}u_{k}.\tag{11}$$

As $u_{j}$ and $u_{k}$ are orthogonal for $j\neq k$, the variance $\sigma_{k}^{2}$ further reduces to

$$\sigma_{k}^{2}=\lambda_{k}^{-1}\lambda_{k}u_{k}^{\top}u_{k}u_{k}^{\top}u_{k}=\|u_{k}\|^{2}\|u_{k}\|^{2}=1.\tag{12}$$

Hence each $\mathbf{X}_{k}$ is a standard Gaussian variable, and $d$, the squared Mahalanobis distance, is Chi-squared distributed, i.e., $d\sim\chi^{2}(p)$, independent of the action class $c$. Consequently, the smallest $d$ among all action classes, i.e., $M(\mathbf{s})$, is also Chi-squared distributed, that is, $M(\mathbf{s})\sim\chi^{2}(p)$. The proposition can also be checked numerically, as sketched below.
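A minimal simulation of the claim (Gaussian features yield χ²(p)-distributed squared Mahalanobis distances), under the idealized assumption that the true mean and covariance are known:

```python
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(0)
p = 50                                    # feature dimension (e.g., after PCA)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)           # a well-conditioned covariance matrix
mu = rng.normal(size=p)

# Draw Gaussian features f(s) ~ N(mu, Sigma) and compute squared Mahalanobis distances.
f = rng.multivariate_normal(mu, Sigma, size=20000)
d = np.einsum("ni,ij,nj->n", f - mu, np.linalg.inv(Sigma), f - mu)

# Per Proposition 1, d should follow chi2(p): a large KS p-value is consistent with this.
print(kstest(d, chi2(df=p).cdf))
```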
## B Results In Offline Setting

## B.1 Results Across Different Noise Strengths

We provide detailed detection accuracy of various detection methods across different noise strengths in Table 4.

## B.2 Visualization Of Outlier States On Six Games

We plot the outlier states on the Breakout, Asterix, and SpaceInvaders games in Figure 6 and the outlier states on Enduro, FishingDerby, and Tutankham in Figure 7.

Table 4: Detection accuracy (%) of our MD, Robust MD, and conformal MD strategies compared with other baseline methods on six Atari games with α = 0.05.

| Game | Outliers | Perturbation | ED | TMD | MD | RMD | MD+C |
|---|---|---|---|---|---|---|---|
| Breakout | Random | std=0.02 | 50.0 | 52.2 | 56.5 | 66.1 | 55.5 |
| | Random | std=0.04 | 56.4 | 67.8 | 71.4 | 76.3 | 70.0 |
| | Adversarial | ϵ=0.001 | 80.4 | 87.4 | 89.4 | 80.0 | 90.0 |
| | Adversarial | ϵ=0.01 | 87.2 | 90.7 | 92.5 | 80.7 | 94.5 |
| | OOD | Asterix | 53.5 | 47.7 | 51.2 | 81.3 | 51.8 |
| | OOD | SpaceInvaders | 46.5 | 47.4 | 49.8 | 76.1 | 51.2 |
| Asterix | Random | std=0.1 | 42.8 | 42.9 | 48.0 | 66.2 | 49.1 |
| | Random | std=0.2 | 45.8 | 49.1 | 71.1 | 76.1 | 60.4 |
| | Adversarial | ϵ=0.001 | 83.9 | 85.2 | 91.1 | 75.7 | 93.3 |
| | Adversarial | ϵ=0.01 | 84.5 | 85.8 | 91.5 | 75.9 | 94.0 |
| | OOD | Breakout | 42.3 | 43.0 | 48.1 | 76.1 | 51.9 |
| | OOD | SpaceInvaders | 37.9 | 38.6 | 43.7 | 54.2 | 47.5 |
| SpaceInvaders | Random | std=0.02 | 50.1 | 49.7 | 55.1 | 74.7 | 53.8 |
| | Random | std=0.04 | 54.0 | 82.6 | 89.5 | 83.6 | 86.9 |
| | Adversarial | ϵ=0.001 | 68.6 | 90.5 | 95.5 | 83.2 | 95.9 |
| | Adversarial | ϵ=0.01 | 76.2 | 91.8 | 96.3 | 83.6 | 96.8 |
| | OOD | Breakout | 45.7 | 52.7 | 52.6 | 82.9 | 50.6 |
| | OOD | Asterix | 44.7 | 60.4 | 50.2 | 83.4 | 49.7 |
| Enduro | Random | std=0.1 | 49.4 | 44.7 | 48.2 | 76.6 | 48.6 |
| | Random | std=0.2 | 48.6 | 58.4 | 72.2 | 80.3 | 54.5 |
| | Adversarial | ϵ=0.001 | 93.1 | 90.8 | 95.2 | 80.5 | 97.4 |
| | Adversarial | ϵ=0.01 | 94.6 | 90.8 | 95.2 | 80.3 | 97.5 |
| | OOD | FishingDerby | 63.7 | 72.9 | 78.5 | 80.3 | 53.9 |
| | OOD | Tutankham | 50.2 | 52.6 | 61.0 | 80.2 | 52.5 |
| FishingDerby | Random | std=0.2 | 49.0 | 48.7 | 51.9 | 83.5 | 51.0 |
| | Random | std=0.3 | 49.1 | 83.8 | 86.4 | 87.6 | 79.5 |
| | Adversarial | ϵ=0.001 | 82.4 | 92.9 | 97.5 | 87.3 | 97.3 |
| | Adversarial | ϵ=0.01 | 88.2 | 92.9 | 97.5 | 87.5 | 97.4 |
| | OOD | Enduro | 49.0 | 56.7 | 60.6 | 75.6 | 59.9 |
| | OOD | Tutankham | 53.1 | 55.1 | 57.8 | 75.8 | 55.8 |
| Tutankham | Random | std=0.04 | 50.0 | 48.5 | 49.3 | 75.7 | 51.6 |
| | Random | std=0.06 | 50.0 | 47.2 | 49.0 | 78.3 | 52.7 |
| | Adversarial | ϵ=0.001 | 60.0 | 88.3 | 93.7 | 78.8 | 81.2 |
| | Adversarial | ϵ=0.01 | 62.1 | 89.5 | 94.7 | 78.6 | 93.1 |
| | OOD | Enduro | 60.0 | 89.6 | 94.7 | 78.9 | 90.9 |
| | OOD | FishingDerby | 50.0 | 83.5 | 89.2 | 78.4 | 64.7 |

![19_image_0.png](19_image_0.png)
![19_image_1.png](19_image_1.png)
![19_image_2.png](19_image_2.png)

Figure 6: Visualization of various state outliers on Breakout, Asterix, and SpaceInvaders games. Panels: (a) Breakout: Clean; (b) Breakout: Random; (c) Breakout: Adversarial; (d) Breakout: Asterix; (e) Breakout: Space; (f) Asterix: Clean; (g) Asterix: Random; (h) Asterix: Adversarial; (i) Asterix: Breakout; (j) Asterix: Space; (k) Space: Clean; (l) Space: Random; (m) Space: Adversarial; (n) Space: Breakout; (o) Space: Asterix.

![20_image_0.png](20_image_0.png)
![20_image_1.png](20_image_1.png)
![20_image_2.png](20_image_2.png)

Figure 7: Visualization of various state outliers on Enduro, FishingDerby, and Tutankham games. Panels: (a) Enduro: Clean; (b) Enduro: Random; (c) Enduro: Adversarial; (d) Enduro: Fishing; (e) Enduro: Tutankham; (f) Fishing: Clean; (g) Fishing: Random; (h) Fishing: Adversarial; (i) Fishing: Enduro; (j) Fishing: Tutankham; (k) Tutankham: Clean; (l) Tutankham: Random; (m) Tutankham: Adv; (n) Tutankham: Enduro; (o) Tutankham: Fishing.
## B.3 Effectiveness Of Robust Md

We take the cube root of the Mahalanobis distances, yielding approximately normal distributions (Wilson & Hilferty, 1931). In this experiment, 250 clean states are drawn from the replay buffer, and 50 abnormal states are drawn from each of the three types of outliers. We reduce the state feature dimension to 2 via t-SNE and compute the Mahalanobis distances of these two kinds of states to their class centers within each action class, under the estimation based on MD or Robust MD, respectively. Figure 8 suggests that Robust MD separates inliers and outliers better than MD on Breakout within a randomly chosen action class, indicating its effectiveness in the RL anomaly detection setting. Similar results are observed in other games: we plot the distributions of inliers and the three types of outliers on the SpaceInvaders and Asterix games in Figures 9 and 10, respectively. It is worth noting that Robust MD is also capable of enlarging the separation between the distributions of inliers and both random and adversarial outliers on the SpaceInvaders game, while its benefit seems negligible for OOD outliers (Breakout) on SpaceInvaders as well as on Asterix. We speculate that this is determined by the game's difficulty. Specifically, the PPO algorithm can achieve desirable performance on the simple Breakout game, thus yielding informative feature-space vectors. By contrast, there is room for the generalization of PPO on both SpaceInvaders and Asterix, such that Robust MD might not help when handling the less meaningful state feature vectors in these two games.

![21_image_0.png](21_image_0.png)

Figure 8: Boxplot of distributions between inliers and three types of outliers in an action class on the Breakout game.

![21_image_1.png](21_image_1.png)

Figure 9: Boxplot of distributions between inliers and three types of outliers in an action class on the SpaceInvaders game.

![21_image_2.png](21_image_2.png)

Figure 10: Boxplot of distributions between inliers and three types of outliers in an action class on the Asterix game.

## B.4 Sensitivity Analysis

We provide a sensitivity analysis of Robust MD with respect to the number of principal components retained by PCA in Figure 11. The detection accuracy over all considered outliers improves as the number of principal components increases, except for a slight decline for random and adversarial outliers (red and blue lines) on the Breakout game. The increase implies that the subspace spanned by principal components with small explained variance also contains valuable information for separating anomalous states from in-distribution states, which coincides with the conclusion in (Kamoi & Kobayashi, 2020). The corresponding result for the MD estimation is shown in Figure 12, which suggests a similar ascending tendency of detection accuracy as the number of principal components increases.

![22_image_0.png](22_image_0.png)

Figure 11: Detection performance under **Robust MD** as the number of principal components increases.

![22_image_1.png](22_image_1.png)

Figure 12: Detection performance under MD as the number of principal components increases.

## C Results In Online Setting

## C.1 Setup And Full Main Results

As a supplement to the results on the main pages, we provide the complete results on all six Atari games in Figures 13 to 18.
The "Mean Score" in the first row indicates the accumulated rewards of PPO, and the "F1 Score" in the second row shows the detection performance during RL training. The F1 score is computed based on precision and recall. We also find that the cumulative reward is not strongly correlated with detection ability in some games. A high detection accuracy may only improve the cumulative reward to a small degree. This suggests that we need more metrics to measure the effect of our detection performance more effectively. Hyperparameters in our methods are shown in Table 5. | Hyperparameter | Value | |------------------------|------------------------------| | Confidence level (1-α) | 1-0.05 | | Moving window size (m) | 5120 | | Sample size (Nc) | 2560 | | Iteration (K) | ≈ 10000 (1e7 steps in total) | | Environment number (N) | 8 | | Horizon (T) | 128 | Table 5: Hyper-parameters in the training phase. RL-related parameters are the same as those of the PPO algorithm. ![23_image_0.png](23_image_0.png) Figure 13: Detection performance across various state outliers in the online training on Breakout. "Mean Score" in the first row indicates the accumulated rewards, "accuracy" and "F1 Score" evaluate the detection performance during training. ![24_image_0.png](24_image_0.png) Figure 14: Detection performance across various state outliers in the online training on Asterix. ![24_image_1.png](24_image_1.png) Figure 15: Detection performance across various state outliers in the online training on SpaceInvaders. ![25_image_0.png](25_image_0.png) Figure 16: Detection performance across various state outliers in the training phase on Enduro. ![25_image_1.png](25_image_1.png) Figure 17: Detection performance across various state outliers in the online training on FishingDerby. ![26_image_0.png](26_image_0.png) Figure 18: Detection performance across various state outliers in the online training on Tutankham. ## C.2 Ablation Study On Double Anomaly Detectors Figure 19 reveals that double self-supervised detectors can help adjust the detection errors and improve the detection accuracy compared with the single detector. MD with double detectors outperforms MD with a single detector significantly, although RMD with double detectors is comparable to RMD with a single detector. ![27_image_0.png](27_image_0.png) Figure 19: The detection accuracy with and without double self-supervised detectors on Breakout with random and OOD outliers on Breakout. ## C.3 Ablation Study On Number Of Noisy Environments We train PPO in two, four, or six noisy environments with random and OOD outliers among all eight parallel environments. We use PCA to reduce the feature vectors to 50 dimensions and estimate the detector using Robust MD. Figure 20 illustrates that compared with the **Auto** baseline, our RMD method is robust when encountering different ratios of outliers, especially with a higher contamination ratio. The dashed lines in different colors represent **Auto** baselines that correspond to the different number of noisy environments. The training performance with our detection method gradually approaches the ideal baselines, i.e., **Auto**. ![27_image_1.png](27_image_1.png) Figure 20: Training performance under Robust MD detection under different proportions of outlier exposure on Breakout (2, 4, 6 out of 8 environments).
# Extracting Local Reasoning Chains Of Deep Neural Networks

Haiyan Zhao *Haiyan.Zhao-2@student.uts.edu.au* University of Technology Sydney

Tianyi Zhou *zhou@umiacs.umd.edu* University of Maryland

Guodong Long *guodong.long@uts.edu.au* University of Technology Sydney

Jing Jiang *jing.jiang@uts.edu.au* University of Technology Sydney

Chengqi Zhang *Chengqi.Zhang@uts.edu.au* University of Technology Sydney

Reviewed on OpenReview: *https://openreview.net/forum?id=RP6G787uD8*

## Abstract

We study how to explain the main steps of inference that a pre-trained deep neural net (DNN) relies on to produce predictions for a (sub)task and its data. This problem is related to network pruning and interpretable machine learning with the following highlighted differences: (1) fine-tuning of any neurons/filters is forbidden; (2) we target a very high pruning rate, e.g., ≥ 95%, for better interpretability; (3) the interpretation is for the whole inference process on a few data of a task rather than for individual neurons/filters or a single sample. In this paper, we introduce NeuroChains to extract the local inference chains by optimizing differentiable sparse scores for the filters and layers, which reflect their importance in preserving the outputs on a few data drawn from a given (sub)task. Thereby, NeuroChains can extract an extremely small sub-network composed of critical filters exactly copied from the original pre-trained DNN by removing the filters/layers with small scores. For samples from the same class, we can then visualize the inference pathway in the pre-trained DNN by applying existing interpretation techniques to the retained filters and layers. It reveals how the inference process stitches and integrates the information layer by layer and filter by filter. We provide detailed and insightful case studies together with several quantitative analyses over thousands of trials to demonstrate the quality, sparsity, fidelity and accuracy of the interpretation. In extensive empirical studies on VGG, ResNet, and ViT, NeuroChains significantly enriches the interpretation and makes the inner mechanism of DNNs more transparent.

## 1 Introduction

Deep neural networks (DNNs) have greatly reshaped a variety of tasks - object classification, semantic segmentation, natural language processing, speech recognition, robotics, etc. Despite their success on a vast majority of clean data, DNNs are also well known to be sensitive to small amounts of adversarial noise. The lack of sufficient interpretability of their success or failure is one major bottleneck of applying DNNs to important areas such as medical diagnosis, public health, transportation systems, and financial analysis.

Interpretable machine learning has attracted growing interest in a variety of areas. The forms of interpretation vary across different methods. For example, attribution methods (Bach et al., 2015; Sundararajan et al., 2017; Shrikumar et al., 2017; Montavon et al., 2017; Kindermans et al., 2017; Smilkov et al., 2017) produce the importance score of each input feature to the output prediction for a given sample, while some other methods (Zeiler & Fergus, 2014; Simonyan et al., 2013; Erhan et al., 2009) aim to explain the general functionality of each neuron/filter or an individual layer regardless of the input sample.
Another line of works (Ribeiro et al., 2016; Wu et al., 2018; Hou & Zhou, 2018) explains DNNs in a local region of the data space by training a shallow (e.g., linear) and easily interpretable model to approximate the original pre-trained DNN on some locally similar samples, thereby reducing the problem to explaining the shallow model. These methods essentially reveal neuron-to-neuron correlations (e.g., input to output, intermediate layer/neuron to output, etc.), but they cannot provide an overview of the whole inference process occurring inside the complicated structure of DNNs.

In this paper, we study a more challenging problem: Can we unveil the major hidden steps of inference in DNNs and present them in a succinct and human-readable form? Solving this problem helps to answer many significant questions, e.g., which layer(s)/neuron(s) play the most/least important role in the inference process? Can two similar samples share most inference steps? Do all samples need the same number of neurons to locate the key information leading to their correct predictions? Do DNNs rely on entirely different neurons or filters for different (sub)tasks? Some of these questions are related to other problems such as network pruning (Han et al., 2015; Li et al., 2016) and neural architecture search (NAS) (Zoph & Le, 2017). For example, are winning tickets (Frankle & Carbin, 2018; Liu et al., 2018) universal over different tasks or classes? Does the weight sharing scheme in recent NAS methods (Pham et al., 2018; Liu et al., 2019a; Ying et al., 2019) limit the search space or quality?

We develop an efficient tool called NeuroChains to extract the underlying inference chains of a DNN for a subtask. A subtask of a classification task can be defined as classification on a small subset of classes, which in this paper refers to a few data drawn from a subset of the 1000 classes in ImageNet. NeuroChains aims to extract a much smaller sub-network composed of a subset of neurons/filters exactly copied from the original pre-trained DNN, whose output for data from the subtask is consistent with that of the original pre-trained DNN. While the selected filters explain the key information captured by the original pre-trained DNN when applied to data from the same task/class, the architecture of the sub-network stitches this information sequentially, i.e., step by step and layer by layer, and recovers the major steps of inference that lead to the final outputs.

We parameterize the sub-network as the original DNN with an additional score multiplied with each filter/layer's output featuremap. Thereby, we formulate the above problem of sub-network extraction as optimizing differentiable sparse scores of all the filters and layers to preserve the outputs on all given samples. The above problem can be solved by an efficient back-propagation that only updates the scores with fixed filter parameters. The objective is built upon the Kullback–Leibler (KL) divergence between the sub-network's output distribution and that of the original pre-trained DNN, along with an $\ell_1$ regularization for sparse scores over filters. We further use a sigmoid gate per layer to choose whether to remove the entire layer or block. The gate plays an important role in reducing the sub-network size since many data and subtasks do not rely on all the layers. We extract chains from the sub-network and visualize their filters and featuremaps of the original network by existing methods (Zeiler & Fergus, 2014; Erhan et al., 2009).
NeuroChains is a novel technique specifically designed for interpreting the local inference chains of DNNs. As aforementioned, it potentially provides an efficient tool to study other problems in related tasks. However, it has several **fundamental differences to network pruning and existing interpretation tasks**, which prevent methods developed for these two problems from addressing ours. **Compared to network pruning**: (1) fine-tuning is not allowed in NeuroChains; (2) it targets a much larger pruning rate for succinct visualization, e.g., ≥ 95% for VGG-19 (Simonyan & Zisserman, 2014), ≥ 99% for ResNet-50 (He et al., 2016) and ≥ 92% for ViT (Dosovitskiy et al., 2021; Touvron et al., 2021) on ImageNet, with ≤ 200 filters or patches retained; (3) it is applied to a few samples drawn from a task instead of the whole data distribution. **Compared to mainstream interpretation tasks**: (1) NeuroChains produces an interpretation of the entire inference process for a specified task (e.g., a subset of classes) rather than of one neuron/filter or of the output on/around a single sample; (2) NeuroChains provides complementary information to the importance of individual neurons/filters for a task.

## 2 Related Works

Interpretable machine learning methods can be mainly categorized into those aiming to evaluate the importance of each input feature of a single sample and those explaining individual neurons/filters. Approaches in the first category usually rely on certain back-propagation from the DNN's output to derive an importance score for each input feature or hidden node. Earlier works are based on the back-propagated gradients, e.g., deconvolution (Zeiler & Fergus, 2014), back-propagation (Simonyan et al., 2013) and guided back-propagation (Springenberg et al., 2014). Sundararajan et al. (2017) proposed to (approximately) calculate the integral of the gradients along a path between a baseline point and the input sample, which ensures the sensitivity and implementation invariance lacking in some previous methods. More recent methods propose novel back-propagation rules to directly derive the attribution scores of neurons from output to input, e.g., DeepLIFT (Shrikumar et al., 2017), deep Taylor decomposition (Montavon et al., 2017), and layer-wise relevance propagation (LRP) (Bach et al., 2015).

Methods in the second class treat DNNs as black boxes and seek simple models to explain how the DNN's output changes in a local region. For example, Ancona et al. (2017) add perturbations to different parts of the input to evaluate how the perturbations change the output, which reflects the importance of different parts. Zeiler & Fergus (2014) covered different parts of the input with a gray square, which led to different prediction probabilities on the true class. Instead, Zintgraf et al. (2017) replaced each patch in the input image with the surrounding patch and tracked the induced changes in the output. In LIME, Ribeiro et al. (2016) trained a sparse linear model on noisy input-output pairs as a local surrogate approximating the original pre-trained DNN, where the sparse weights are used to explain the importance of input features. As mentioned before, our main differences to the above methods are that we explain DNNs for a (sub)task and its data, and that we further explain how DNNs step by step integrate the information of important filters/neurons.

Network pruning. Han et al. (2015) and Li et al. (2016) remove redundant neurons/nodes or connections/weights from a pre-trained DNN and fine-tune the sub-network.
Structural pruning removes whole layers/channels/filters/neurons according to a certain norm of the associated weights (Li et al., 2016) or sparsity (Hu et al., 2016). In contrast, Frankle & Carbin (2018) and Liu et al. (2018) prune a DNN during its training. Luo et al. (2017) apply pruning to two adjacent convolution layers at a time to take the dependency between the two layers into account. Liu et al. (2019b) and Guo et al. (2020) train sub-networks of different sizes in a large DNN at the same time to satisfy various constraints. Several recent works (Su et al., 2020; Evci et al., 2020) empirically verify the "lottery ticket hypothesis", i.e., there exist sub-networks (i.e., winning tickets) that can reach comparable generalization performance to the original pre-trained DNN if re-trained. In contrast, the sub-network extracted by NeuroChains cannot be fully re-trained since it has to preserve the original pre-trained DNN's filters, and our goal is to retain the generalization performance only for a task with few data.

## 3 NeuroChains

## 3.1 Problem: Extract Sub-Networks

Although the DNNs widely used nowadays are usually composed of hundreds of layers and millions to billions of hidden nodes, when applied to samples from a subtask (e.g., one composed of two classes), it is plausible that their inference process mainly relies on a small subset of layers and filters. In this paper, we verify this conjecture by developing an efficient and practical algorithm, NeuroChains, to extract this subset and its underlying architecture as a sub-network whose filters are selected and exactly copied from the original pre-trained DNN and whose outputs for a given subset of data or classes retain the ones produced by the original pre-trained DNN. Although DNNs are usually non-smooth in definition when using a non-smooth piecewise activation such as ReLU, when trained with the commonly used techniques, e.g., data augmentation, mix-up, and dropout, the resulting DNNs are relatively smooth in a sufficiently small local region.

In order to preserve the original inference chains, we do not allow any fine-tuning or re-training of any filter or of the weight vector corresponding to any neuron: they can only be exactly copied from the original pre-trained DNN. Let $F(\cdot;\{W^{\ell}\}_{\ell=1:L})$ (a mapping from input to output) denote the original pre-trained DNN, where $W^{\ell}$ represents the set of filters/weight vectors in layer-$\ell$, and $W^{\ell}[i]$ represents the $i$th filter/weight vector in layer-$\ell$. Any sub-network fulfilling the above requirement can be defined and parameterized by an indicator vector $M^{\ell}$ per layer, whose entries are {0, 1} values indicating whether the associated filter/neuron in $W^{\ell}$ is retained. We further define the operator ◦ as

$$(W^{\ell}\circ M^{\ell})[i]\triangleq\left\{\begin{array}{cc}W^{\ell}[i],&M^{\ell}[i]=1;\\ \vec{0},&M^{\ell}[i]=0.\end{array}\right.\tag{1}$$

Thereby, $\{M^{\ell}\}_{\ell=1:L}$ defines a qualified sub-network for an inference chain and its weights are $\{W^{\ell}\circ M^{\ell}\}_{\ell=1:L}$, where we extend the operator ◦ such that $W\circ M=\{W^{\ell}\circ M^{\ell}\}_{\ell=1:L}$ given the original pre-trained DNN's weights $W=\{W^{\ell}\}_{\ell=1:L}$.
Given a set of samples $\mathcal{X}$ drawn from a specific subtask, we can formulate the problem of finding an inference chain as the following combinatorial optimization, which aims to find the sparsest indicator $M$ (i.e., the sub-network with the fewest filters retained) that does not change the outputs of the original pre-trained DNN for $\forall x \in \mathcal{X}$, i.e.,

$$\min_{\{M^{\ell}\}_{\ell=1:L}}\sum_{\ell=1}^{L}\|M^{\ell}\|_{1}\ \text{s.t.}\ F(x;W)=F(x;W\circ M),\ \forall x\in\mathcal{X}.\tag{2}$$

However, it is impractical to directly solve this combinatorial optimization since the number of possible choices for $M^{\ell}$ is exponential. We relax the 0-1 indicator vector $M^{\ell}$ to a nonnegative-valued score vector $S^{\ell}$ of the same size. We define an operator ⊙ applied to $W^{\ell}$ and its associated scores $S^{\ell}$ as

$$(W^{\ell}\odot S^{\ell})[i]\triangleq S^{\ell}[i]\cdot W^{\ell}[i].\tag{3}$$

Note we do not limit entries in $S^{\ell}$ to [0, 1], due to the possible redundancy among filters in the original pre-trained DNN: there might be filters of similar functionality for the given samples, and a preferred pruning should be able to preserve only one of them and multiply it by the number of those redundant filters in the sub-network. In addition, fewer constraints are easier to handle in optimization and helpful for finding a sub-network whose outputs are closer to those of the original pre-trained DNN, since the class of sub-networks with parameters $W\odot S$ includes all the sub-networks with parameters $W\circ M$. Hence, we relax the challenging combinatorial optimization to the following optimization, i.e.,

$$\min_{\{S^{\ell}\}_{\ell=1:L}}\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}l(F(x;W),F(x;W\odot S))+\lambda\sum_{\ell=1}^{L}\|S^{\ell}\|_{1},\tag{4}$$

where $l(\cdot, \cdot)$ is a loss function aiming to minimize the distance between the original pre-trained DNN's output $F(x;W)$ and the sub-network's output $F(x;W\odot S)$. In our experiments, for classification, we use the KL-divergence between the output distributions over classes, where the two output distributions are computed by applying softmax to $F(x;W)$ and $F(x;W\odot S)$, respectively, i.e.,

$$l(F(x;W),F(x;W\odot S))=D_{KL}\big(\mathrm{softmax}(F(x;W))\,\|\,\mathrm{softmax}(F(x;W\odot S))\big).\tag{5}$$

In addition, empirical evidence (Krueger et al., 2017; Singh et al., 2016) shows that for most samples there exist some layers that can be entirely removed without changing the final prediction. Hence, only a few hard and confusing samples need more delicate features, while most other samples can be correctly classified based on simple patterns from shallower layers. Therefore, in NeuroChains, we apply a sigmoid function with input score $\alpha^{\ell}$ as a gate $G^{\ell}$ determining whether to remove the entire layer-$\ell$ during pruning, i.e.,

$$G^{\ell}=1/\left[1+\exp(-\alpha^{\ell}/T)\right],\tag{6}$$

where $T$ is a temperature parameter. With a gate $G^{\ell}$ applied after each layer-$\ell$ whose input and output have the same size (which is common in many DNNs), we can recursively define the input $H^{\ell+1}(\cdot)$ to the next layer-$(\ell+1)$, i.e.,

$$H^{\ell+1}\big(x;\{W^{\ell'}\odot S^{\ell'},\alpha^{\ell'}\}_{\ell'=1:\ell}\big)=\left\{\begin{array}{ll}G^{\ell}\cdot F^{\ell}(H^{\ell};W^{\ell}\odot S^{\ell})+(1-G^{\ell})\cdot H^{\ell}\big(x;\{W^{\ell'}\odot S^{\ell'},\alpha^{\ell'}\}_{\ell'=1:\ell-1}\big)&\text{if input size}=\text{output size},\\ F^{\ell}(H^{\ell};W^{\ell}\odot S^{\ell})&\text{otherwise},\end{array}\right.\tag{7}$$

where $F^{\ell}(H^{\ell};W^{\ell}\odot S^{\ell})$ denotes the output of layer-$\ell$.
The reason to use a gate here is that we expect to either remove the whole layer or retain it without adding an extra shortcut (which would change the original pre-trained DNN's architecture). Since we prefer to remove non-informative layers, we add to the objective another regularization term on $\alpha^{\ell}$ to encourage the removal of entire layers (because decreasing $\alpha^{\ell}$ reduces $G^{\ell}$ and thus increases the chance of layer removal). Therefore, the final optimization for NeuroChains is

$$\min_{\{S^{\ell},\alpha^{\ell}\}_{\ell=1:L}}\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}l\Big(F(x;W),\,H^{L+1}\big(x;\{W^{\ell}\odot S^{\ell},\alpha^{\ell}\}_{\ell=1:L}\big)\Big)+\lambda\sum_{\ell=1}^{L}\|S^{\ell}\|_{1}+\lambda_{g}\sum_{\ell=1}^{L}\alpha^{\ell}.\tag{8}$$

Our objective above is similar to the one used in Network Slimming (Liu et al., 2017), but we optimize it for a subtask (so we can consider removing layers) and we do not allow fine-tuning of the weights $W$.

## 3.2 Algorithm

Our algorithm is simply a standard back-propagation for the optimization problem in Eq. (8), which produces sparse scores for filters and gate values for layers. Note the weights in $W$ are fixed and the back-propagation only updates the scores $S$ and $\alpha$. We initialize the filter scores $S=\vec{1}$ so that $W\odot S=W$ at the beginning of the optimization. We initialize the gate score $\alpha^{\ell}=0$ for all $\ell=1:L$ so that $G^{\ell}=0.5$ at the beginning, i.e., the probabilities of removing and of retaining a layer are equal. For classification, we set the loss $l(\cdot,\cdot)$ to be the KL-divergence between the output distributions of the original pre-trained DNN and the sub-network. After convergence of the optimization, we then apply a simple thresholding to these scores to further remove more filters and layers: (1) we remove the filters with score under a threshold τ; (2) we remove layer-$\ell$ if $G^{\ell}<0.5$. This yields a sufficiently small sub-network architecture. A minimal sketch of this optimization loop is given below. Given a sub-network produced by NeuroChains, we then visualize its architecture and scores as the structure of the inference chains. Moreover, for samples from the same region, we visualize their inference pathway in the original pre-trained DNN by their activation patterns and featuremaps, respectively, using existing interpretation methods (Zeiler & Fergus, 2014; Erhan et al., 2009).
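A minimal sketch of this optimization, assuming a plain chain of frozen convolutional layers with equal input and output sizes followed by a frozen classifier head; the function and module names are illustrative, and architectures with shortcuts (ResNet) or patches (ViT) instead require the case split in Eq. (7) and block-level gating:

```python
import torch
import torch.nn.functional as F

def run(convs, classifier, x, S=None, alpha=None, T=0.2):
    """Forward pass; when S/alpha are given, applies scores (Eq. (3)) and gates (Eqs. (6)-(7))."""
    h = x
    for i, conv in enumerate(convs):
        out = F.relu(conv(h))
        if S is not None:
            out = out * S[i].clamp(min=0).view(1, -1, 1, 1)  # W^l (.) S^l, nonnegative scores
            g = torch.sigmoid(alpha[i] / T)                  # layer gate G^l, Eq. (6)
            out = g * out + (1 - g) * h                      # Eq. (7), equal-size case
        h = out
    return classifier(h.flatten(1))

def extract_subnetwork(convs, classifier, X, lam=0.005, lam_g=2.0,
                       T=0.2, tau=0.1, steps=300, lr=0.005):
    """Optimize filter scores S and gate scores alpha (Eq. (8)); all weights stay frozen."""
    with torch.no_grad():
        target = F.softmax(run(convs, classifier, X), dim=-1)  # original outputs F(x; W)

    S = [torch.ones(c.out_channels, requires_grad=True) for c in convs]  # W (.) S = W at init
    alpha = [torch.zeros(1, requires_grad=True) for _ in convs]          # G^l = 0.5 at init
    opt = torch.optim.Adam(S + alpha, lr=lr)

    for _ in range(steps):
        log_q = F.log_softmax(run(convs, classifier, X, S, alpha, T), dim=-1)
        kl = F.kl_div(log_q, target, reduction="batchmean")              # Eq. (5)
        reg = lam * sum(s.abs().sum() for s in S) + lam_g * sum(a.sum() for a in alpha)
        opt.zero_grad()
        (kl + reg).backward()
        opt.step()

    # Thresholding: drop filters with score < tau and layers with G^l < 0.5.
    keep_filters = [s.detach() >= tau for s in S]
    keep_layers = [torch.sigmoid(a.detach() / T).item() >= 0.5 for a in alpha]
    return keep_filters, keep_layers
```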
In the following, we will present **two quantitative analyses over hundreds of case studies**, which show that (1) NeuroChains is capable of producing sub-networks that retain only < 5% of filters while preserving the outputs of the original pre-trained DNN in most cases; and (2) the filters selected by NeuroChains with high scores are important for preserving the outputs, since removing even one leads to a considerable drop in performance. We also compare the capability of preserving the original neural network's outputs between NeuroChains and magnitude-based and random pruning methods. We then provide several detailed and insightful case studies and visualizations of extracted sub-networks for different subtasks.

## 4.1 Implementation Details

We implement NeuroChains in PyTorch (Paszke et al., 2017). In every case study, we first randomly sample 2 classes in ImageNet and then randomly sample 10 images from each class. Note that the sampled images may be wrongly classified to other classes by the original DNN. We apply inference on those 20 images, and their outputs are used in solving the optimization of Eq. (8) in order to extract the local inference chain in the form of a sub-network.

![5_image_0.png](5_image_0.png)

Figure 1: *Size and fidelity (how well the sub-networks preserve the original pre-trained DNN's outputs)* of 1500 sub-networks extracted by NeuroChains for VGG-19 (left) and ResNet-50 (right) in different case studies (tasks). The x-axis in the top plot refers to the number of retained filters, while the x-axis in the bottom plot is the decrease of probability on the original-DNN predicted class. It shows that NeuroChains can usually find very small sub-networks while preserving the original pre-trained DNN's outputs.

For models with shortcuts, e.g., ResNet-50, the sigmoid gate is applied to prune a bottleneck block rather than a layer. A layer inside a block is removed if the scores of all filters in the layer are nearly 0. We use the Adam optimizer for the optimization of Eq. (8) for filter/layer scores, with a fixed learning rate of 0.005. We set the temperature $T=0.2$ in the sigmoid gate (Eq. (6)) to encourage the value $G^{\ell}$ to be close to either 0 or 1, and the threshold $\tau$ on the filter scores is set to 0.1 so that the outputs of sub-networks remain consistent. We only tried a limited number of choices across tens of experiments, chose the best combination balancing fidelity and sub-network size, and then applied it to all other experiments without further tuning. In particular, we tried τ ∈ {0.01, 0.1, 0.5}, λ ∈ {0.001, 0.005, 0.01, 0.1}, and λg ∈ {1, 2, 5}. For different models, the weights of the two penalties in Eq. (8) are different: for VGG-19, we use λ = 0.005 and λg = 2, while for ResNet-50 we choose λ = 0.005 and λg = 1. This choice performs consistently well and robustly in all other experiments. We train for 300 iterations and stop training when the loss difference is quite small, i.e., less than 0.05. It costs only ∼90s for VGG-19 and ∼55s for ResNet-50 to extract a sub-network on a single RTX 6000 GPU, since we only optimize a small number of scores.

## 4.2 Quantitative Analyses

Melis & Jaakkola (2018) propose some criteria to evaluate interpretation methods for DNNs. In this paper, we extend some of their notations and present two quantitative analyses of NeuroChains over 1500 case studies for different subtasks, i.e., (1) **Fidelity**: does the sub-network preserve the original pre-trained DNN's outputs on the given samples, and how does it change for sub-networks of different sizes?
(2) **Faithfulness**: how well does the importance score of a filter reflect the degradation in fidelity caused by removing the filter from the sub-network? In this paper, we evaluate the fidelity and faithfulness by the decrease in probability on the original pre-trained DNN's predicted class when using the sub-network for inference. All the above metrics are averaged over 1500 sub-networks and across all the images used to extract each sub-network.

Figure 1 shows the statistics of the fidelity for sub-networks of different sizes (measured by the number of filters) that are extracted by NeuroChains. Most sub-networks retain only ≤ 1% (≤ 5%) of the filters of ResNet-50 (VGG-19) for succinct visualization, yet they preserve the outputs of ResNet-50 (VGG-19) with high fidelity. Figure 2 reports the faithfulness of extracted sub-networks, i.e., how much a sub-network's performance in preserving the original pre-trained DNN's output degrades when one filter is removed, and what the relationship is between this degradation and the score of the removed filter.

![6_image_0.png](6_image_0.png)

Figure 2: *Barplot of the faithfulness (filter score vs. fidelity degradation caused by removing the filter)* of 783 sub-networks, each extracted by NeuroChains on 20 uniform samples randomly drawn from two classes, for VGG-19 (left) and ResNet-50 (right). The x-axis refers to the interval of filter scores, while the y-axis denotes the decrease of the sub-network's probability on the original-DNN predicted class after removing a filter from the sub-network. It shows that the sub-networks suffer more degradation when a filter with a higher score is removed. Hence, the scores faithfully reflect the importance of filters in explaining the original pre-trained DNNs.

The statistics on 1500 sub-networks in Figure 2 show that as the scores of filters increase, the fidelity degrades more. The degradation and the score are strongly and positively correlated, indicating that our optimized scores faithfully reflect the importance of filters in explaining the original pre-trained DNN. Moreover, removing even a single highly scored filter from the sub-network can significantly degrade the explanation performance. Hence, NeuroChains usually finds the smallest sub-networks without redundancy among retained filters/layers, i.e., every critical inference step is retained.

We also present another faithfulness study based on a quotient metric defined below. Let $p,q\in\Delta^{c}$ ($\Delta^{c}$ is the probability simplex for $c$ classes) be the output probability vectors of the original neural net and the extracted sub-network, respectively, for the same input. We define a quotient metric to measure the change of class prediction between $p$ and $q$, i.e.,

$$Q(p,q)={\frac{q[y]-\operatorname*{max}_{z\in[c],z\neq y}q[z]}{p[y]-\operatorname*{max}_{z\in[c],z\neq y}p[z]}},\quad y\in\operatorname*{arg\,max}_{z\in[c]}p[z],\tag{9}$$

where $y$ is the class predicted by the original neural net, and $Q(p,q)$ is the quotient of two probability differences computed respectively on the original neural net and the sub-network. In particular, it computes the difference of probabilities for class $y$ and the highest-rated other class. The sign of $Q(p,q)$ indicates whether the predicted class changes (e.g., it changes if $Q(p,q)<0$), while the magnitude of $Q(p,q)$ measures the change in prediction confidence. The result is consistent with our above observations and is given in Figure 11 of the Appendix.
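The quotient metric (9) is straightforward to compute; the following is a minimal sketch (our illustration, assuming `p` and `q` are NumPy probability vectors over the $c$ classes):

```python
import numpy as np

def quotient_metric(p: np.ndarray, q: np.ndarray) -> float:
    """Eq. (9): ratio of prediction margins of the sub-network (q) and the original net (p)."""
    y = int(np.argmax(p))               # class predicted by the original net
    others = np.arange(p.size) != y
    margin_p = p[y] - p[others].max()   # margin of the original net
    margin_q = q[y] - q[others].max()   # margin of the sub-network
    return margin_q / margin_p          # negative value => predicted class changed
```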
We compare the capability of preserving the original neural network's outputs between NeuroChains, magnitude-based pruning (removing the filters whose output featuremaps' average magnitude (L2 norm) over all considered samples is small), and random pruning ("by chance") in Figure 3. In particular, under the same setting as each experiment in the paper, we prune the original VGG-19 and retain the filters with the largest featuremap magnitude in each layer, or randomly, 180 in total (more than the 157 (mean) ± 43 (std) filters for sub-networks extracted by NeuroChains), and we then fine-tune the filters' scores. For Figure 3, the percentages of cases with KL-divergence ≤ 0.8 for NeuroChains, magnitude-based pruning, and random pruning are **54.3%, 38.8%, 21.1%**, respectively, while **9.3%, 19.4%, 38.8%** of cases have KL-divergence ≥ 1.5. Figure 3 shows the histogram of the KL-divergence between the original output class distribution and the one produced by the sub-networks. For sub-networks generated by NeuroChains, the KL-divergence in most cases stays close to 0, while the output-preserving capability of simple pruning is much worse. More complete quantitative analyses for both VGG-19 and ResNet-50 can be found in the Appendix.

![7_image_0.png](7_image_0.png)

Figure 3: Comparison of different pruning methods on the capability of preserving the original network's output distribution (smaller KL-divergence means better preservation) over 783×20 uniform samples.

## 4.3 Case Studies

We present three case studies of the sub-networks extracted by NeuroChains. For Figure 4 and Figure 5, the data points are from classes that are easy to tell apart, while in Figure 6 images are sometimes mis-classified. A case study of a multi-class classification task is shown in Figure 17 in the Appendix. The visualization in each case study is composed of two parts: (1) the sub-network's architecture and filter scores; and (2) original images from each class and the visualization of each image's NeuroChains extracted from the original pre-trained model. The true class and predicted class of the sample are shown above the image. They show that NeuroChains considerably enriches the explanation details of the DNN's inference process. By connecting the important filters from different layers, the extracted sub-network highlights the main steps leading to the output prediction. On the sub-network's architecture, we use "L0" to denote the corresponding convolution layer in VGG-19 and "L0_1" to denote the first filter from this layer. For ResNet-50, we further use "L1B1" to denote the first sub-block in the first bottleneck block, "SC" for the shortcut connection, and "C1" for the first convolution layer in the sub-block. The redder a node in the sub-network, the larger the scaling score; conversely, the bluer the node, the lower the score. NeuroChains are stitched together by filters step by step, and the visualization of each filter and the corresponding featuremap generated by the original pre-trained model are displayed at each step. The visualization of each selected filter is obtained by maximizing its activation w.r.t. the input; this shows the patterns that the filter aims to detect, which are independent of the input image. More case studies are given in the Appendix.

## 4.4 Detailed Analysis Of Case Studies

In Figure 4 and Figure 5, the case studies of VGG-19 and ResNet-50 are shown. For each sample from the subtask, two NeuroChains that present the inference process of the original pre-trained model are displayed.
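The filter visualizations referenced in these analyses are obtained by activation maximization; the following is a minimal sketch of the idea (our illustration, assuming a `feature_extractor` that returns the featuremap of the target layer, not the authors' exact procedure):

```python
import torch

def visualize_filter(feature_extractor, filter_idx, size=224, steps=200, lr=0.1):
    """Gradient-ascend a random input so that it maximizes one filter's mean activation."""
    img = torch.randn(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        fmap = feature_extractor(img)       # (1, C, H, W) featuremap of the target layer
        loss = -fmap[0, filter_idx].mean()  # maximize activation of the chosen filter
        opt.zero_grad(); loss.backward(); opt.step()
    return img.detach()
```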
In Figure 4, features extracted by the chains of strawberry gradually evolve from low-level to high-level. For example, in the first chain, L1B1SC_177, L2B1SC_317, and L3B1C1_23 extract the low-level patterns of red-green color and texture; L3B1C2_18, L3B1C3_818, and L4B1C1_417 look like the abstract pattern of a strawberry; and L4B1C2_328, L4B1C3_511, and L4B1SC_511 capture the pattern of the skin of a strawberry. While the first chain of strawberry focuses on the skin, the second chain extracts the pattern of the leaves, i.e., L3B1C1_154 and L3B1C3_5. The leaves in the featuremaps of L4B1C1_127 and L4B1C3_2004 are highlighted. L4B3C3_1760 extracts the global patterns of strawberry. Both chains of dalmatian extract the global patterns of the dalmatian's black and white fur. In shallow layers, L3B1C1_97, L3B1C1_238, L3B1C2_4, and L3B1C2_119 capture a more local black-spot pattern of the dalmatian, while L4B1C1_289, L4B1C2_48, L4B1C3_769, and L4B3C3_524 reveal the global patterns.

![8_image_0.png](8_image_0.png)

Figure 4: Inference chain by NeuroChains for ResNet-50 (pre-trained on ImageNet) when applied to 20 test images of "strawberry" and "dalmatian". The sub-network retains only 13/67 layers and 77/26560 filters of the ResNet-50. Refer to Section 4.4 for detailed analysis of extracted chains.

In Figure 5, the featuremaps show that the first chain of kangaroo extracts the pattern of eyes and noses, while the second chain pays more attention to the fur of kangaroos. L21_296, L23_393, L32_453, and L34_66 turn from a round pattern to an eye-and-nose pattern. L21_24, L32_463, and L34_188 look like the fur pattern of animals. In the first chain of bananas, L19_335, L21_111, L23_470, and L32_465 turn from a curved rectangle pattern to a hand of bananas. L32_458 and L34_79 in the second chain of bananas seem like the pattern of the toes and roots of bananas. The corresponding areas in the featuremaps are also highlighted. The two chains focus on bananas' global and local patterns, respectively.

![9_image_0.png](9_image_0.png)

Figure 5: Inference chain by NeuroChains for VGG-19 when applied to images of "kangaroo" and "banana". The sub-network retains only 8/16 layers and 116/4480 filters of the VGG-19. The scores for selected filters are represented using the colormap on the top-right. Refer to Section 4.4 for detailed analysis of extracted chains.

In Figure 6, we first show the NeuroChains of a correctly predicted castle and stone wall. Then, using the same filters from the extracted chains, we draw the featuremaps of a stone wall that is wrongly predicted as a castle by the original pre-trained model, to uncover the inner mechanism of the false prediction. The featuremaps of the wrongly predicted stone wall from filters activated by the castle show that the featuremaps in the mid-part of the chains are rarely activated, like L21_300 and L23_336, which is in line with our expectations. But in the deep layers, L32_15 and L32_45, which activate on the roof and body of the castle, also highlight a wide area in the wrongly predicted stone wall image. This indicates that the wrongly predicted stone wall sample contains features similar to those of a castle. The featuremaps of the wrongly predicted stone wall from filters activated by the correctly predicted stone wall also show the anomaly: L21_197, L25_26, and L32_260, which are critical to the correctly predicted stone wall, extract little information from the wrongly predicted stone wall.
We find that L21_124 and L32_15 are both important for identifying the castle and the stone wall, which implies that castle and stone wall have similar patterns and are easily confused.

## 4.5 Applying NeuroChains To Vision Transformer

To show that NeuroChains can be applied to different types of neural networks for comparing their reasoning chains, we additionally apply NeuroChains to the vision transformer (ViT). Since recent work (Tang et al., 2022) discovered the effectiveness and advantages of patch pruning for ViT, our method produces a score for each patch in different blocks of ViT. In particular, NeuroChains freezes all the parameters in ViT and only optimizes the scores for each patch. At the end of patch pruning, patches with small scores are removed. The patch scores are not shared among training samples, i.e., each input sample has its own patch scores, so the critical patches for each sample can be preserved.

In Fig. 7, we provide a case study of NeuroChains on ViT. The top figure shows the patches (red squares) retained in each ViT block. In block 1, the whole image is taken as the input. As the blocks get deeper, more patches are removed, and only a few patches are retained in the last few blocks. The pruning results imply that the patch preserved in the last block contains the most critical information for classifying the input sample. To study the inference process of ViT on the given sample, we generate the attribution heatmaps for both the class token and the patch token in different blocks. The attribution scores are computed as follows: for a ViT with $L$ blocks and $N$ patches, the attention score between patches in block-$\ell$ can be calculated by $\mathbf{Attention}^{\ell}=\mathrm{softmax}\big(Q^{\ell}(K^{\ell})^{\mathrm{T}}/\sqrt{d_{k}}\big)V^{\ell}$, where $Q^{\ell}$, $K^{\ell}$, and $V^{\ell}$ are the queries, keys, and values in the multi-head attention module of block-$\ell$. The attribution of each patch in block-$\ell$ to the original patches in the input sample is computed as

$$\mathbf{Attribution}^{\ell}=\mathbf{Attention}^{\ell}\times\mathbf{Attribution}^{\ell-1}+\mathbf{Attribution}^{\ell-1},\tag{10}$$

with $\mathbf{Attribution}^{0}=I_{N}$, indicating that in the input all patches attribute only to themselves. Since $\mathbf{Attention}^{\ell}$ is calculated based on the output patches of the previous block, in Eq. (10) we multiply $\mathbf{Attention}^{\ell}$ by $\mathbf{Attribution}^{\ell-1}$ to include the attribution to patches in the current block. We further add $\mathbf{Attribution}^{\ell-1}$ to reflect the shortcut operation in ViT blocks: the output patches of block-$(\ell-1)$, which encode the attribution of patches in block-$(\ell-1)$, are added directly to the output of block-$\ell$.

In the middle row of Fig. 7, we show the attribution heatmaps of the class token in different blocks of the pruned ViT. In the shallow blocks close to the input, it attends to all patches with similar attribution scores. As the blocks get deeper, the class token gradually focuses on the critical patches essential to classifying the input image. In this case study, only one patch token is retained in the last block of the pruned ViT. The attribution heatmaps of this patch are shown in the bottom row of Fig. 7. The patch token pays more attention to itself in the first few blocks, but in later blocks it starts to attend to more important patches that are more relevant to the predicted class.

![11_image_0.png](11_image_0.png)

Figure 6: Inference chain by NeuroChains for VGG-19 when applied to images of "Castle" and "Stone Wall", which confuse DNN models and may be mis-classified. The sub-network retains only 10/16 layers and 121/4480 filters of the VGG-19. Due to space limitation, the subnet can be found in the Appendix.
Refer to Section 4.4 for detailed analysis of extracted chains.

The results of applying NeuroChains to ViT in Fig. 7 show that the inference processes of ViT and CNN are very different. In a CNN, from the shallow layers to the deep ones, the model first extracts the low-level local patterns of the input image and then aggregates them into high-level global patterns step by step towards classifying the input image. In ViT, the model can capture both the local and global patterns of the input starting from the very bottom (early) blocks. During inference, the model gradually identifies patches important to the classification task and mainly extracts patterns relevant to the class token.

![12_image_0.png](12_image_0.png)

Figure 7: A case study of applying NeuroChains to the vision transformer (ViT). Top: the retained patches (red squares) in each block. Middle: attribution of the class token to the original input patches. Bottom: attribution of the patch token (retained in the last block) to the original input patches.

## 5 Conclusion

We propose an efficient approach, NeuroChains, to extract the main inference chains/steps in a DNN when applied to a subtask. We learn a sparse scoring of filters/layers to extract a sub-network retaining a subset of filters/layers from the original pre-trained DNN. We provide case studies and quantitative analyses to demonstrate that NeuroChains produces more informative and reliable explanations of the inner inference process of DNNs. Since local reasoning chain extraction is a novel and challenging problem even for classification, we will investigate more complicated tasks like regression and detection in future work.

## References

Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. A unified view of gradient-based attribution methods for deep neural networks. In *NIPS 2017 Workshop on Interpreting, Explaining and Visualizing Deep Learning*. ETH Zurich, 2017.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7):e0130140, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. *University of Montreal*, 1341(3):1, 2009.

Utku Evci, T. Gale, Jacob Menick, P. S. Castro, and E. Elsen. Rigging the lottery: Making all tickets winners. *ArXiv*, abs/1911.11134, 2020.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *arXiv preprint arXiv:1803.03635*, 2018.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Zichao Guo, X. Zhang, H. Mu, Wen Heng, Z. Liu, Y. Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In *ECCV*, 2020.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In *Advances in neural information processing systems*, pp. 1135–1143, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.

Bo-Jian Hou and Zhi-Hua Zhou. Learning with interpretable structure from RNN. *arXiv preprint arXiv:1810.10708*, 2018.

Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. *arXiv preprint arXiv:1607.03250*, 2016.

Pieter-Jan Kindermans, Kristof T Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: PatternNet and PatternAttribution. *arXiv preprint arXiv:1705.05598*, 2017.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In *International Conference on Learning Representations*, 2017.

Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. *arXiv preprint arXiv:1608.08710*, 2016.

Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In *International Conference on Learning Representations*, 2019a. URL https://openreview.net/forum?id=S1eYHoC5FX.

Z. Liu, H. Mu, X. Zhang, Zichao Guo, X. Yang, K. Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 3295–3304, 2019b.

Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2736–2744, 2017.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. *arXiv preprint arXiv:1810.05270*, 2018.

Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. ThiNet: A filter level pruning method for deep neural network compression. In *Proceedings of the IEEE international conference on computer vision*, pp. 5058–5066, 2017.

A. Madry, Aleksandar Makelov, Ludwig Schmidt, D. Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *ArXiv*, abs/1706.06083, 2018.

David Alvarez Melis and Tommi Jaakkola. Towards robust interpretability with self-explaining neural networks. In *Advances in Neural Information Processing Systems*, pp. 7775–7784, 2018.

Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. *Pattern Recognition*, 65:211–222, 2017.

T Nathan Mundhenk, Barry Y Chen, and Gerald Friedland. Efficient saliency maps for explainable AI. *arXiv preprint arXiv:1911.11293*, 2019.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In *NIPS-W*, 2017.

Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In *Proceedings of the 35th International Conference on Machine Learning*, volume 80, pp. 4095–4104, 2018.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should I trust you?: Explaining the predictions of any classifier.
In *Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining*, pp. 1135–1144. ACM, 2016.

Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 3145–3153, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/shrikumar17a.html.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. *arXiv preprint arXiv:1312.6034*, 2013.

Saurabh Singh, Derek Hoiem, and David Forsyth. Swapout: Learning an ensemble of deep architectures. In *Advances in Neural Information Processing Systems 29*, pp. 28–36, 2016.

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *arXiv preprint arXiv:1706.03825*, 2017.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. *arXiv preprint arXiv:1412.6806*, 2014.

Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, L. Wang, and J. Lee. Sanity-checking pruning methods: Random tickets can win the jackpot. *ArXiv*, abs/2009.11094, 2020.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *Proceedings of the 34th International Conference on Machine Learning - Volume 70*, pp. 3319–3328. JMLR.org, 2017.

Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, and Dacheng Tao. Patch slimming for efficient vision transformers. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12165–12174, 2022.

Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herve Jegou. Training data-efficient image transformers & distillation through attention. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 10347–10357. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/touvron21a.html.

Mike Wu, Michael C Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, and Finale Doshi-Velez. Beyond sparsity: Tree regularization of deep models for interpretability. In *Thirty-Second AAAI Conference on Artificial Intelligence*, 2018.

Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. NAS-Bench-101: Towards reproducible neural architecture search. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97, pp. 7105–7114, 2019.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In *European conference on computer vision*, pp. 818–833. Springer, 2014.

Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network decisions: Prediction difference analysis. *arXiv preprint arXiv:1702.04595*, 2017.

Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. 2017. URL https://arxiv.org/abs/1611.01578.

## A Appendix

## A.1 Ablation Studies Of Layer Pruning
In Fig. 8, we compare the number of filters in sub-networks extracted with and without layer pruning. When layer pruning is applied, for both VGG-19 and ResNet-50, most sub-network sizes lie in a small range of 0 to 200. Without layer pruning, many more redundant filters are preserved in the sub-networks. Hence, layer pruning is essential to building small, human-interpretable, and effective sub-networks.

![15_image_0.png](15_image_0.png)

![15_image_1.png](15_image_1.png)

Figure 8: Comparison between the number of filters in sub-networks extracted with or without layer pruning on VGG-19 (left) and ResNet-50 (right).

## A.2 Effect Of The Number Of Classes

In Fig. 17, we show a case study of applying NeuroChains to a 3-class classification sub-task, demonstrating the capability and effectiveness of NeuroChains on multi-class problems. However, as the number of classes per subtask increases, the number of preserved filters in the sub-networks also increases, which may weaken the interpretability. We test NeuroChains on 10-class classification sub-tasks. The average number of filters in the extracted sub-networks is 795 (1206) for VGG-19 (ResNet-50). Though much smaller than the original network, the extracted sub-networks are too large to explain. That being said, NeuroChains can explain the correlation among multiple classes by identifying the chains important to distinguishing a class from the others.

We conduct a binary-classification experiment shown in Fig. 10. In this sub-task, samples in one class are all eagles, while samples in the other class are randomly selected from different classes, as shown in the top right of the figure. From the figure, the sub-network extracts two characteristic patterns for eagles. The first chain detects the pattern of feathers, while the second chain extracts the pattern of eyes and heads. These two patterns are enough to recognize eagles. For images from other classes (a horse image in this case study), the key pattern of the object will be identified. In the figure, the first chain of the horse locates the elbow part. Filter L30_286 contains both the patterns of eyes and elbows; however, in the second chain of the eagle it only activates on the eyes, while in the chain of the horse the elbow of the horse is highlighted. The unique patterns in other objects will be extracted in order to distinguish them from eagles. In the second chain of the horse image, we use the filters in the first chain of the eagle to extract patterns for the horse image. The featuremaps show that the eagle filters are not activated on the horse image, so it can be distinguished from eagles.

![16_image_0.png](16_image_0.png)

Figure 9: The validation-set accuracy of sub-networks vs. the number of training samples per class in each sub-task.

## A.3 Effect Of The Number Of Training Data

Fig. 9 shows how the sub-networks' validation-set accuracy changes when increasing the number of training samples per class in each sub-task. For both pre-trained VGG-19 and ResNet-50, when the number of samples per class is too small (≤ 5), the sub-network tends to overfit and cannot generalize well to unseen validation data. However, when the number of samples per class increases to 10, the sub-network starts to achieve promising validation-set accuracy. Further increasing the samples per class quickly saturates the validation-set accuracy.
Therefore, in the experiments, 10 samples per class turns out to be the sweet spot for extracting sub-networks with sufficiently good generalization performance so that they can be interpreted as the inference chains of the pre-trained model on given sub-tasks.

## A.4 Details Of The Heatmaps Of Features

To obtain the heatmaps, each filter applied in the sub-network produces a featuremap for an input image. By applying an activation function to the featuremap, only a few pixels are activated, and they highlight the regions in the featuremap that represent what features/patterns of the input sample are detected by the filter. Since the featuremap size is always smaller than the input image, we apply bilinear interpolation to upsample the featuremap to the input image size. Finally, we overlap the resized featuremap with the input image, which results in a heatmap highlighting the regions of the input image represented by the corresponding filter.

## A.5 More Quantitative Analysis

We performed 783 experiments, each using 20 samples uniformly drawn from two classes. We evaluated these newly generated sub-networks using the quotient metric of Eq. (9), i.e., the quotient of the "difference to the highest-scoring other class" for the extracted sub-network over that for the original network. We visualize the result in Figure 11.

![17_image_0.png](17_image_0.png)

Figure 10: Inference chain by NeuroChains for VGG-19 when applied to tell apart images of "eagle" and other randomly selected samples. The sub-network retains only 7/16 layers and 110/4480 filters of the original VGG-19.

The left plot of Figure 11 is the histogram of the quotient computed over all the 783×20 samples. The histogram shows that most samples keep the original predicted label after pruning, i.e., NeuroChains can preserve the original pre-trained DNN's outputs in most cases. Moreover, the number of filters preserved in these sub-networks is 157 (mean) ± 43 (std), which is small enough to explain.

![18_image_0.png](18_image_0.png)

Figure 11: Left: Histogram of the quotient metric in Eq. (9) computed over all the 783×20 samples. Right: Faithfulness of NeuroChains in terms of the quotient's sign.

The right plot reports the faithfulness of NeuroChains in terms of the quotient's sign. We remove each filter from each sub-network and report how many samples' predicted labels are changed after the removal, i.e., the quotient is negative. Each point in the scatter plot corresponds to a sub-network, the x-axis is the score of the removed filter given by NeuroChains, and the y-axis is the proportion of samples with negative quotients. The plot shows a strong linear correlation between the score of the removed filter and the degradation of faithfulness. Since removing filters with high scores results in more samples whose predicted class changes after pruning, the score given by NeuroChains measures the importance of filters in DNN inference.

![18_image_1.png](18_image_1.png)

Figure 12: Robustness of the original model (Left) vs. extracted sub-networks (Right) under different attacks.

In order to evaluate NeuroChains on subtasks in the raw input space, we extract sub-networks for samples drawn uniformly at random and then evaluate the sub-networks on these samples' adversarial examples generated by two types of attacks: the fast gradient sign method (FGSM) (Goodfellow et al., 2014) and projected gradient descent (PGD) (Madry et al., 2018).
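For reference, FGSM perturbs an input along the sign of the loss gradient. The following one-step sketch is our illustration (assuming a differentiable `model`, a cross-entropy loss, and inputs scaled to [0, 1]); PGD iterates a similar projected step:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM (Goodfellow et al., 2014): x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep the adversarial image in a valid range
```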
Figure 12 compares the robustness of the original neural net (left plot) and the extracted sub-networks (right plot) under different attacks: each of the plots shows the histogram of the output probability for the ground-truth class on those samples (original and adversarial). The left plot shows that the two types of adversarial attack are very effective on the original neural net in reducing the probability of the ground-truth class. In contrast, the right plot shows that the NeuroChains-extracted sub-networks are much more robust to the attacks, because the optimization in NeuroChains not only removes the irrelevant filters but also strengthens the important filters by assigning them weights > 1. This demonstrates the effectiveness of NeuroChains when applied to subtasks with a non-smooth raw-input space, and the extracted sub-networks in this case significantly improve the robustness of the original model in defending against adversarial attacks.

![19_image_0.png](19_image_0.png)

Figure 13: Case studies of SMOE-generated heatmaps for the original network and the NeuroChains-extracted sub-network.

In Figure 13, we show two case studies comparing SMOE (Mundhenk et al., 2019) generated heatmaps for the original network and the NeuroChains-extracted sub-network. We can see that the patterns extracted by the two networks are consistent and are all critical patterns for the class, e.g., the eyes and fists of kangaroos and the feet and face of the horse. However, compared with the original network, these patterns are strengthened in much shallower layers of the sub-network, producing better interpretations. This observation is also consistent with the result of the analysis on adversarial attacks in Figure 12.

![19_image_1.png](19_image_1.png)

Figure 14: The 20 images from 2 classes used to extract the sub-network of the case study in Figure 4.

![20_image_0.png](20_image_0.png)

![20_image_1.png](20_image_1.png)

Figure 15: The extracted sub-network and 20 images from 2 classes used to extract the sub-network of the case study in Figure 6.

## A.6 Case Studies

In Figure 14, the 20 images from 2 classes used to extract the sub-network of the case study in Figure 4 are displayed. All the NeuroChains are extracted from the original pre-trained model based on a few data samples from a subtask. The extracted sub-network and the 20 images from 2 classes used to extract the sub-network of the case study in Figure 6 are shown in Figure 15.

NeuroChains can also be used to extract sub-networks for multi-class tasks, and the extracted sub-network is still small enough to be analyzed and explained. In Figure 16 and Figure 17, the sub-network and the NeuroChains of a 3-class subtask are presented. The sub-network contains 8 layers and 158 filters, which is easy to visualize. The NeuroChains corresponding to the images in the three classes are displayed. The chains of orangutans extract features from different parts of the orangutan. The top chain of the orangutan extracts the pattern of the face: L23_82 and L25_27 show a round and black node pattern; then, in L30_42 and L32_453, these nodes evolve into the pattern of eyes and noses; and in L34_112, these features are integrated into the entire face of the orangutan. The second chain of the orangutan mainly focuses on the fur and the shape of the arms. In L25_220, L30_439, and L32_376, the elbow part in the featuremap is highlighted, and the visualization of the filters also shows the curved shape of the joint part.
In the first chain of the tram, the pattern of the arranged windows is extracted in L25_202, L30_329, and L32_491, and the windows in the featuremap are highlighted. In the featuremap of the second chain, the bottom part of the tram is highlighted, and the pattern in L32_5 looks like wheels; however, the visualization of the other filters is not readable, which indicates that this visualization method still has limitations. In the chains of broccoli, the head part is extracted by the first chain while the stem part is captured by the second chain. From layer 10 to layer 34 of the first chain, L19_482, L23_83, and L25_247 show a pattern of dense semi-spheres, and L30_440, L32_281, and L34_428 look like the pattern of the head of broccoli. In the second chain, a pattern of closely arranged columns is shown in L30_421, L32_388, and L34_323, which is consistent with the shape of the stem of broccoli. From these analyses, we find that despite the small number of filters in the sub-network, for samples from different classes, more than one critical pattern is captured to identify the samples. This shows the strong expressive ability of neural networks and the interpretation ability of NeuroChains.

![21_image_0.png](21_image_0.png)

Figure 16: The extracted sub-network and 30 images from 3 classes used to extract the sub-network of the case study in Figure 17.

In Figure 18, another case study, classifying pineapple and leopard on ResNet-50, is shown. In the first chain, the shallow layers extract low-level patterns like the edges in L3B1C2_162, and L4B1C2_485, L4B1C3_384, and L4B3SC_925 from deep layers evolve these low-level patterns into the crown. In the second chain, the mid-part of the network, like L3B1C1_45, L3B1C2_10, L3B1C3_541, and L4B1C1_308, shows the pattern of the skin of the pineapple. Filters in the last block, i.e., L4B1C2_4, L4B1C3_1010, and L4B3SC_1106, extract the global pattern, which looks like the main body of pineapples. In the chains of leopard, the body and head of the leopard are lit up, respectively, but the visualizations of the filters in the two chains seem similar: the skin marked with black spots is its most obvious pattern, as in L3B1C2_221, L4B1C1_274, L3B1C3_857, L4B1C3_462, and L4B1C3_769.

In Figure 19, a case study distinguishing indigo bunting from horse on VGG-19 is displayed. The chains of indigo bunting extract the local head pattern and the global pattern of the indigo bunting, respectively. In the first chain, the beak and eye are lit up by L19_497, L23_375, and L25_199, and L32_352 and L34_140 integrate the previous patterns into the whole head. L19_44, L23_497, and L25_81 in the second chain extract the pattern of feathers. In L32_178 and L34_483, patterns from the first chain may also be incorporated, and the global pattern including the head is lit up. Similar to the indigo bunting, the chains of the horse extract the head pattern and the body pattern of the horse, respectively. L32_94 and L34_509 show a pattern like striped muscles, and the whole back of the horse is highlighted. The second chain focuses on the head part of the horse: L25_317 captures the mouth, while L32_30 and L34_156 capture the eye. This case study shows that for some samples, like the indigo bunting, the global pattern is learned by the model to identify the samples, while for samples like the horse, some local patterns like the body and eyes are enough.

![22_image_0.png](22_image_0.png)

Figure 17: Inference chain by NeuroChains for VGG-19 when applied to images of "orangutan", "tram", and "broccoli".
The sub-network, which retains only 8/16 layers and 158/4480 filters of the VGG-19, is shown in Figure 16. Refer to Section A.6 for detailed analysis of extracted chains.

![23_image_0.png](23_image_0.png)

Figure 18: Inference chain by NeuroChains for ResNet-50 (pre-trained on ImageNet) when applied to 20 test images of "pineapple" and "leopard". The sub-network retains only 16/67 layers and 133/26560 filters of the ResNet-50. Refer to Section A.6 for detailed analysis of extracted chains.

![24_image_0.png](24_image_0.png)

Figure 19: Inference chain by NeuroChains for VGG-19 when applied to images of "indigo bunting" and "horse". The sub-network retains only 8/16 layers and 107/4480 filters of the VGG-19. Refer to Section A.6 for detailed analysis of extracted chains.
Review 1:

Summary: This paper presents a new setting for explainable AI. Concretely, the authors propose to explain the inference process of pretrained deep learning models via network pruning: pruning the network down to a tiny subnetwork whose intermediate feature maps can be used to study and explain the reasoning process of the original network. Different from compression, this paper 1) does not allow fine-tuning and only matches the output distribution between the original and sub-networks on subtasks, defined as a subset of labels in this work. The specific pruning method is to optimize a learnable mask on the model weights and perform pruning based on the resulting scores. Leveraging the proposed tool, the authors provide case studies on ResNet and VGG to visualize their inference processes.

Strengths and Weaknesses:

Strengths:
- The paper presents an interesting alternative setting (and method) for explainable AI. Specifically, it studies explaining network predictions on a subtask, rather than on a single example as in prior works.
- Detailed analyses on common architectures are presented.

Weaknesses / Concerns:
- My main concern is that, since the pruning objective is defined on a subtask, is it possible that the pruning process "overfits" to the subtask? If so, this might affect the reliability of the resulting subnetworks in representing the model's actual reasoning chain. To what extent can we measure and trust the representability of these subnetworks? Although the weights of these subnetworks come directly from the original model, the pruning process alone might still induce biases.

Requested Changes: N/A

Broader Impact Concerns: Since the subnetworks will be used to analyze and interpret the original model, perhaps the authors could discuss any potential biases that might be introduced in the process of generating these subnetworks.

==================================================

Review 2:

Summary: NeuroChains optimizes differentiable scores on filter outputs and neural network layers to select a sub-network of a neural network whose predictions match those of the original network. There is no fine-tuning of the weights of the neural net. The hypothesis of the paper is that while neural networks can be large, when applied to a sample, the "inference" relies on only a small subset of layers and filters. The authors confirm that hypothesis by evaluating trained VGG-19 and ResNet-50 neural networks. The authors show the importance of filters; for example, removing only one highly scored filter from the sub-network can significantly degrade performance. The authors then confirm with analysis that neural networks gradually build more and more concrete features for a particular class.

Strengths and Weaknesses:

Strengths:
1. NeuroChains is a well-executed and documented methodology for finding sparse subnetworks that can elucidate the inference which a neural network performs during a forward pass.
2. The approach could potentially be applied to any neural network. Thus, it is a general method in spirit.
3. While simple, the method is distinguished from the existing literature.

Weaknesses:
1. The paper only studies ResNet-50 and VGG-19. I would argue that for a modern analysis of neural networks, it would also be interesting to study transformers, e.g., ViT. Trained ViTs are publicly accessible, so I think a study on ViTs (and a comparison with ResNet-50 and VGG-19) should be feasible.
2. The Introduction should be more specific about what is meant by (sub)tasks.
3. Right now the study is focused on particular neural networks in isolation. It would have been more interesting to consider comparisons between different architectures using your tool.

Requested Changes:
1. In general, I think it would be interesting to see more discussion about what are meaningful tasks to be studied with NeuroChains.
2. Right now your method is focused on categorical datasets. Could you elaborate on other potential types of datasets?
3. Please make a comparison between the inferences of different types of neural networks (in the way they are designed or trained). I think the neural networks should not be studied in isolation when NeuroChains is applied.
4. How does your method relate to the Lottery Ticket Hypothesis, which asks you to fine-tune the weights from the original initialization? Do you think you are finding new types of Lottery Tickets here? And if so, how are they different from the tickets we are familiar with? Could you elaborate?

Minor:
* in abstract: extra space before a period
* in first paragraph of Intro: despite its <- despite theirs
* 2nd paragraph of Intro: an given <- a given
* whether removing the entire layer <- whether to remove the entire layer

There are more grammatical errors and typos throughout the text. Please proofread the text carefully.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: The authors propose to learn masks to remove either filters or whole layers in pre-trained deep convolutional neural networks. This is done on a small subset of a two-way classification problem. They find that with their method only a low percentage of filters and layers is needed to retain accuracy. Given these very small subnetworks, they visualise the features and aim to interpret the neural network's computation throughout the layers, for this "subtask".

Strengths and Weaknesses: I think the paper is quite well written and easy to understand. The method is simple, and it is probably easy to implement for a large portion of deep learning researchers and practitioners. I think it is quite surprising that only such a small subnetwork suffices to retain accuracy on a two-way classification problem. My problem with this is that I did not gain too much out of the visualisations / case studies proposed by the authors; on the other hand, it is possibly of limited interest how a large network distinguishes only two classes of the large dataset it was trained on, although I imagine it being very difficult to do this for a larger number of classes. See requested changes. I am not familiar with the related work, but apart from insights through the case studies, which were very limited for me, I think the proposed method is "the first thing" that people would try when given a couple of training points available for pruning, in contrast to unsupervised methods. Nevertheless, I think the study is quite well executed and I believe the results. I also think the study is valuable for a larger audience and of interest to the community.

Requested Changes:
1. I would propose to include "convolutional" in the title, since the study, for now, is limited to this architecture type.
2. Can you include ablation studies on how much your whole-layer pruning adds to the sparsity, and vice versa?
3. Can you show what happens to your algorithm if you prune on more classes? I think it is interesting how the NeuroChains change when doing so; identifying chains that are important to distinguish classes from all the others, for example.
4. What happens to the algorithm if you add (or remove) more data? It is very surprising to me that you need so little data to train the subnetworks.
5. Can you include a detailed description of how you compute the feature scores and the heatmaps in the appendix? Or did I miss this? It is not clear to me how these are computed.

Broader Impact Concerns: Does not apply here, I think.

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper proposes a framework for interpreting deep neural networks focused on extracting a very small sub-network (e.g., 150 filters out of the 4480 filters of VGG-19) critical to some aspect of the training task, such as distinguishing between two selected classes from the dataset. After discussions, the reviewers were unanimously supportive of accepting the paper. They highlighted several strong aspects of the submission:

1. It has the potential to become a new useful tool in the interpretability toolbox. Importantly, the experiments are thorough enough to warrant optimism about this direction. As written by Reviewer nEr7: "I think this paper presents a novel aspect to the XAI community. The experiments and discussions are sufficient to justify the objective of this paper." Reviewer JDVr also appreciated that the framework seems to be novel.

2. As highlighted by Reviewer pnfj, the result that only a very small network is sufficient for distinguishing between two classes is quite interesting on its own.

3. The method can in principle be applied to any architecture. In particular, the authors have added experiments on ViT in the revision. The experiments suggest that ViT and CNN arrive at the final decision using significantly different reasoning mechanisms, which again is an interesting result on its own (even if likely known in the field).

Weaker sides of the submission include the lack of a head-to-head comparison to other interpretability methods. While it is challenging to compare interpretability methods, there are certain experiments one could do. For example, one could use the interpretability method in some downstream task (e.g., identifying issues in a dataset). The experiments are mostly narrowed down to CNNs (the exception being the ViT experiments in the Appendix).

All in all, it is my pleasure to recommend acceptance of the paper. Thank you for submitting your work to TMLR. Please make sure you address all comments by the reviewers. I would also encourage the authors, though it is of course not required, to promote the ViT experiments to the main text, as these experiments highlight the strong sides of the framework.

==================================================
# Distributed Stochastic Algorithms For High-Rate Streaming Principal Component Analysis

Haroon Raja *haroon.raja@rutgers.edu*
Department of Electrical and Computer Engineering
Rutgers University–New Brunswick, Piscataway, NJ 08854 USA

Waheed U. Bajwa *waheed.bajwa@rutgers.edu*
Department of Electrical and Computer Engineering
Department of Statistics
Rutgers University–New Brunswick, Piscataway, NJ 08854 USA

Reviewed on OpenReview: *https://openreview.net/forum?id=CExeD0jpB6*

## Abstract

This paper considers the problem of estimating the principal eigenvector of a covariance matrix from independent and identically distributed data samples in streaming settings. The streaming rate of data in many contemporary applications can be high enough that a single processor cannot finish an iteration of existing methods for eigenvector estimation before a new sample arrives. This paper formulates and analyzes a distributed variant of the classical Krasulina's method (D-Krasulina) that can keep up with the high streaming rate of data by distributing the computational load across multiple processing nodes. The analysis improves upon the one in (Balsubramani et al., 2013) for the original Krasulina's method and shows that, under appropriate conditions, D-Krasulina converges to the principal eigenvector in an order-wise optimal manner; i.e., after receiving M samples across all nodes, its estimation error can be O(1/M). In order to reduce the network communication overhead, the paper also develops and analyzes a mini-batch extension of D-Krasulina, which is termed DM-Krasulina. The analysis of DM-Krasulina shows that it can also achieve order-optimal estimation error rates under appropriate conditions, even when some samples have to be discarded within the network due to communication latency. Finally, experiments are performed over synthetic and real-world data to validate the convergence behaviors of D-Krasulina and DM-Krasulina in high-rate streaming settings.

## 1 Introduction

Dimensionality reduction and feature learning methods such as *principal component analysis* (PCA), sparse PCA, independent component analysis, and autoencoders form an important component of any machine learning pipeline. For data lying in a d-dimensional space, such methods try to find the k ≪ d variables/features that are most relevant for solving an application-specific task (e.g., classification, regression, estimation, data compression, etc.). The focus of this work is on PCA, where the objective is to compute k features that capture most of the variance in the data. The proliferation of *big data* (both in terms of dimensionality and number of samples) has resulted in an increased interest in developing new algorithms for PCA due to the fact that classical numerical solutions (e.g., power iteration and the Lanczos method (Golub & Van Loan, 2012)) for computing eigenvectors of symmetric matrices do not scale well with high dimensionality and large sample sizes. The main interest in this regard has been in developing algorithms that are cheap in terms of both memory and computational requirements as a function of dimensionality and the number of data samples.

In addition to high dimensionality and a large number of samples, another defining characteristic of modern data is its streaming nature in many applications; examples of such applications include the internet-of-things, high-frequency trading, meteorology, video surveillance, autonomous vehicles, social media analytics, etc.
Several stochastic methods have been developed in the literature to solve the PCA problem in streaming settings (Krasulina, 1969; Oja & Karhunen, 1985; Sanger, 1989; Warmuth & Kuzmin, 2007; Zhang & Balzano, 2016). These methods operate under the *implicit* assumption that the data arrival rate is slow enough that each sample can be processed before the arrival of the next one. But this may not be true for many modern applications involving high-rate streaming data. To overcome this obstacle corresponding to high-rate streaming data, this paper proposes and analyzes distributed and distributed mini-batch variants of the classical Krasulina's method (Krasulina, 1969). Before providing details of the proposed methods and their relationship to prior work, we provide a brief overview of the streaming PCA problem.

## 1.1 Principal Component Analysis (PCA) From Streaming Data

For data lying in $\mathbb{R}^d$, PCA learns a $k$-dimensional subspace with maximum data variance. Let $\mathbf{x}\in\mathbb{R}^d$ be a random vector drawn from some unknown distribution $\mathcal{P}_x$ with zero mean and covariance matrix $\mathbf{\Sigma}$. For the constraint set $\mathcal{V}:=\{\mathbf{V}\in\mathbb{R}^{d\times k}:\mathbf{V}^{\mathrm{T}}\mathbf{V}=\mathbf{I}\}$, we can pose PCA as the following constrained optimization problem:

$$\mathbf{Q}^{*}:=\operatorname*{arg\,max}_{\mathbf{V}\in\mathcal{V}}\mathbb{E}_{\mathcal{P}_{x}}\Big\{\mathrm{Tr}(\mathbf{V}^{\mathrm{T}}\mathbf{x}\mathbf{x}^{\mathrm{T}}\mathbf{V})\Big\},\tag{1}$$

where $\mathrm{Tr}(\cdot)$ denotes the trace operator. The solution to the *statistical risk maximization* problem (1) is the matrix $\mathbf{Q}^{*}$ whose columns are the top $k$ eigenvectors of $\mathbf{\Sigma}$. In practice, however, (1) cannot be solved in its current form since $\mathcal{P}_x$ is unknown. But if we have $T$ data samples, $\{\mathbf{x}_t\}_{t=1}^{T}$, drawn independently from $\mathcal{P}_x$, then we can accumulate these data samples to calculate the sample covariance matrix as:

$$\bar{\mathbf{A}}_{T}:={\frac{1}{T}}\sum_{t=1}^{T}\mathbf{A}_{t},\tag{2}$$

where $\mathbf{A}_t:=\mathbf{x}_t\mathbf{x}_t^{\mathrm{T}}$. Instead of solving (1), we can now solve an *empirical risk maximization* problem

$$\mathbf{Q}:={\underset{\mathbf{V}\in\mathcal{V}}{\operatorname{arg\,max}}}\,\operatorname{Tr}(\mathbf{V}^{\mathrm{T}}\bar{\mathbf{A}}_{T}\mathbf{V})={\underset{\mathbf{V}\in\mathcal{V}}{\operatorname{arg\,max}}}\,{\frac{1}{T}}\sum_{t=1}^{T}\operatorname{Tr}(\mathbf{V}^{\mathrm{T}}\mathbf{A}_{t}\mathbf{V}).\tag{3}$$

In principle, we can solve (3) by computing the *singular value decomposition* (SVD) of the sample covariance $\bar{\mathbf{A}}_T$. But this is a computationally intensive task that requires $O(d^3)$ multiplications and has a memory overhead of $O(d^2)$. In contrast, the goal in high-dimensional PCA problems is often to have $O(d^2k)$ computational complexity and $O(dk)$ memory complexity (Li et al., 2016). More efficient (and hence popular) approaches for PCA use methods such as the power/orthogonal iteration and the Lanczos method (Golub & Van Loan, 2012, Chapter 8). Although these methods improve the overall computational complexity of PCA to $O(d^2k)$, they still have memory requirements on the order of $O(d^2)$. In addition, these are *batch* methods that require computing the sample covariance matrix $\bar{\mathbf{A}}_T$, which results in $O(d^2T)$ multiplication operations. Further, in streaming settings where the goal is real-time decision making from data, it is infeasible to compute $\bar{\mathbf{A}}_T$. Because of these reasons, stochastic approximation methods such as Krasulina's method (Krasulina, 1969) and Oja's rule (Oja & Karhunen, 1985) are often favored for the PCA problem.
Because of these reasons, stochastic approximation methods such as Krasulina's method (Krasulina, 1969) and Oja's rule (Oja & Karhunen, 1985) are often favored for the PCA problem. Both these are simple and extremely efficient algorithms, achieving O(d) computational and memory complexity per iteration, for computing the principal eigenvector (i.e., k = 1) of a covariance matrix in streaming settings. Recent years in particular have seen an increased popularity of these algorithms and we will discuss these recent advances in Section 1.3. Both Oja's rule and Krasulina's method share many similarities. In this paper, we focus on Krasulina's method with the understanding that our findings can be mapped to Oja's rule through some tedious but straightforward calculations. Using t for the algorithmic iteration index, Krasulina's method estimates the top eigenvector by processing one data sample in each iteration as follows:¹

$$\mathbf{v}_{t}=\mathbf{v}_{t-1}+\gamma_{t}\Bigg(\mathbf{x}_{t}\mathbf{x}_{t}^{\mathrm{T}}\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{x}_{t}\mathbf{x}_{t}^{\mathrm{T}}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\Bigg),\tag{4}$$

where $\gamma_t$ denotes the step size. Going forward, we will be using $\mathbf{A}_t$ in place of $\mathbf{x}_t\mathbf{x}_t^{\mathrm{T}}$ in expressions such as (4) for notational compactness. In practice, however, one should neither explicitly store $\mathbf{A}_t$ nor explicitly use it for calculation purposes.

¹In contrast, the iterate of Oja's rule is given by $\mathbf{v}_t = \mathbf{v}_{t-1} + \gamma_t\left(\mathbf{x}_t\mathbf{x}_t^{\mathrm{T}}\mathbf{v}_{t-1} - \mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{x}_t\mathbf{x}_t^{\mathrm{T}}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}\right)$.

Note that one can interpret Krasulina's method as a solution to an optimization problem. Using the Courant–Fischer Minimax Theorem (Golub & Van Loan, 2012, Theorem 8.1.2), the top eigenvector computation (i.e., 1-PCA, which is the k = 1 version of (1)) can be posed as the following optimization problem:

$$\mathbf{q}_{1}:=\arg\min_{\mathbf{v}\in\mathbb{R}^{d}}f(\mathbf{v})=\arg\min_{\mathbf{v}\in\mathbb{R}^{d}}\frac{-\mathbf{v}^{\mathrm{T}}\mathbf{A}_{t}\mathbf{v}}{\|\mathbf{v}\|_{2}^{2}}.\tag{5}$$

In addition, the gradient of the function $f(\mathbf{v})$ defined in (5) is:

$$\nabla f(\mathbf{v})=\frac{1}{\|\mathbf{v}\|_{2}^{2}}\Bigg(-\mathbf{A}_{t}\mathbf{v}+\frac{(\mathbf{v}^{\mathrm{T}}\mathbf{A}_{t}\mathbf{v})\mathbf{v}}{\|\mathbf{v}\|_{2}^{2}}\Bigg).\tag{6}$$

Looking at (4)–(6), we see that (4) is very similar to applying *stochastic gradient descent* (SGD) to the nonconvex problem (5), with the only difference being the scaling factor of $1/\|\mathbf{v}\|_2^2$. Nonetheless, since (5) is a nonconvex problem and we are interested in the global convergence behavior of Krasulina's method, existing tools for analysis of the standard SGD problem (Bottou, 2010; Recht et al., 2011; Dekel et al., 2012; Reddi et al., 2016b;c) do not lend themselves to the fastest convergence rates for Krasulina's method. Despite its nonconvexity, however, (5) has a somewhat benign optimization landscape, and a whole host of algorithmic techniques and analytical tools have been developed for such structured nonconvex problems in recent years that guarantee fast convergence to a global solution. In this paper, we leverage some of these recent developments to guarantee near-optimal global convergence of two variants of Krasulina's method in the case of high-rate streaming data.
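For concreteness, the following is a minimal numpy sketch (ours, not the authors' implementation) of the basic iteration (4); note that $\mathbf{A}_t = \mathbf{x}_t\mathbf{x}_t^{\mathrm{T}}$ is never formed explicitly, so each iteration costs only O(d) operations and memory.

```python
import numpy as np

def krasulina_step(v, x, gamma):
    """One Krasulina iteration, eq. (4), in O(d) time without forming x x^T.

    v : current eigenvector estimate (d,);  x : new sample (d,);  gamma : step size
    """
    xv = x @ v                                         # scalar x_t^T v_{t-1}
    return v + gamma * (xv * x - (xv * xv / (v @ v)) * v)

# Usage with a step-size sequence of the form gamma_t = c / (L + t):
rng = np.random.default_rng(1)
d, c, L = 20, 1.0, 100.0
v = rng.standard_normal(d)
for t in range(1, 5001):
    x = rng.standard_normal(d) * np.concatenate(([3.0], np.ones(d - 1)))
    v = krasulina_step(v, x, gamma=c / (L + t))        # v aligns with e_1 over time
```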
Before proceeding further, it is worth noting that while Krasulina's method primarily focuses on the 1-PCA problem, it can be used to solve the k-PCA problem. But such an *indirect* approach, which involves repeated use of Krasulina's method k times, can be inefficient in terms of sample complexity (Allen-Zhu & Li, 2017a, Section 1). We leave investigation of a near-optimal direct method for the k-PCA problem involving high-rate streaming data for future work.

## 1.2 Our Contributions

In this paper, we propose and analyze two distributed variants of Krasulina's method for estimating the top eigenvector of a covariance matrix from fast-streaming, independent and identically distributed (i.i.d.) data samples. Our theoretical analysis, as well as numerical experiments on synthetic and real data, establish near-optimality of the proposed algorithms. In particular, our analysis shows that the proposed algorithms can achieve the optimal convergence rate of O(1/M) for 1-PCA after processing a total of O(M) data samples (see (Jain et al., 2016, Theorem 1.1) and (Allen-Zhu & Li, 2017a, Theorem 6)). In terms of details, the following are our key contributions:

1. Our first contribution corresponds to the scenario in which there is a mismatch factor of $N \in \mathbb{Z}_+$, N > 1, between the data streaming rate and the processing capability of a single processor, i.e., one iteration of Krasulina's method on one processor takes as long as N data arrival epochs. Our solution to this problem, which avoids discarding of samples, involves splitting the data stream into N parallel streams that are then input to N interconnected processors. Note that this splitting effectively reduces the streaming rate at each processor by a factor of N. We then propose and analyze a distributed variant of Krasulina's method—termed D-Krasulina—that solves the 1-PCA problem for this distributed setup consisting of N processing nodes. Our analysis substantially improves the one in (Balsubramani et al., 2013) for Krasulina's method and shows that D-Krasulina can result in an improved convergence rate of O(1/Nt) after t iterations (Theorem 1), as opposed to the O(1/t) rate for the classical Krasulina's method at any one of the nodes seen in isolation. Establishing this result involves a novel analysis of Krasulina's method that brings out the dependence of its convergence rate on the variance of the sample covariance matrix; this analysis, coupled with a variance reduction argument, leads to the convergence rate of O(1/Nt) for D-Krasulina under appropriate conditions.

2. Mini-batching of data samples has long been promoted as a strategy in stochastic methods to reduce the wall-clock time. Too large of a mini-batch, however, can have an adverse effect on algorithmic performance; see, e.g., (Shamir & Srebro, 2014, Sec. VIII). One of the challenges in mini-batched stochastic methods, therefore, is characterizing the mini-batch size that leads to near-optimal convergence rates in terms of the number of processed samples. In (Agarwal & Duchi, 2011; Cotter et al., 2011; Dekel et al., 2012; Shamir & Srebro, 2014; Ruder, 2016; Golmant et al., 2018; Goyal et al., 2017), for example, the authors have focused on this challenge for the case of mini-batch SGD for convex and nonconvex problems. In the case of nonconvex problems, however, the guarantees only hold for convergence to first-order stationary points. In contrast, our second contribution is providing a comprehensive understanding of the global convergence behavior of mini-batched Krasulina's method.
In fact, our analysis of D-Krasulina is equivalent to that of a mini-batched (centralized) Krasulina's method that uses a mini-batch of N samples in each iteration. This analysis, therefore, guarantees a near-optimal convergence rate with arbitrarily high probability for an appropriately mini-batched Krasulina's method in a centralized setting. This is in contrast to (Jain et al., 2016; Yang et al., 2018), where the focus is on Oja's rule and the probability of success is upper bounded by 3/4 for a single algorithmic run.² In addition, in the case of high-rate streaming data that requires splitting the data stream into N parallel ones, we characterize the global convergence behavior of a mini-batch generalization of D-Krasulina—termed DM-Krasulina—in terms of the mini-batch size. This involves specifying the conditions under which mini-batches of size B/N per node can lead to a near-optimal convergence rate of O(1/Bt) after t iterations of DM-Krasulina (Theorem 2). An implication of this analysis is that for a fixed (network-wide) sample budget of T samples, DM-Krasulina can achieve the O(1/T) rate after t := T/B iterations provided the (network-wide) mini-batch size B satisfies $B = O(T^{1-\frac{2}{c_0}})$ for some constant $c_0 > 2$ (Corollary 1).

3. Our next contribution is an extended analysis of DM-Krasulina that concerns the scenario where (computational and/or communication) resource constraints translate into individual nodes still receiving more data samples than they can process in one iteration of DM-Krasulina. This resource-constrained setting necessitates DM-Krasulina dropping $\mu \in \mathbb{Z}_+$ samples across the network in each iteration. Our analysis in this setting shows that such loss of samples need not result in sub-optimal performance. In particular, DM-Krasulina can still achieve a near-optimal convergence rate as a function of the number of samples arriving in the network—for both infinite-sample and finite-sample regimes—as long as µ = O(B) (Corollary 2).

4. We provide numerical results involving both synthetic and real-world data to establish the usefulness of the proposed algorithms, validate our theoretical analysis, and understand the impact of the number of dropped samples per iteration of DM-Krasulina on the convergence rate. These results in particular corroborate our finding that increasing the mini-batch size improves the performance of DM-Krasulina up to a certain point, after which the convergence rate starts to decrease.

Since the focus of this work is on systems theory, the reported results do not focus on some of the large-scale system implementation issues such as unexpected processor failures, high network coordination costs, etc. Such large-scale implementation issues, while relevant from a practical perspective, are beyond the scope of this paper and provide interesting research directions for future work.

## 1.3 Related Work

Solving the PCA problem efficiently in a number of settings has been an active area of research for decades. (Krasulina, 1969; Oja & Karhunen, 1985) are among the earliest and most popular methods to solve PCA in streaming data settings. Several variants of these methods have been proposed over the years, including (Bin Yang, 1995; Chatterjee, 2005; Doukopoulos & Moustakides, 2008). Like earlier developments in stochastic approximation methods (Robbins & Monro, 1951), such variants were typically shown to converge asymptotically.

²Note that it is possible to improve the probability of success for Oja's rule, as derived in (Jain et al., 2016; Yang et al., 2018), to 1 − δ, δ ∈ (0, 1), by running O(log(1/δ)) instances of the algorithm and combining their outcomes. But such a strategy, which also adds to the computational and storage overhead because of the 'combine' step, is impractical for the streaming settings considered in this paper. Additional differences between the results of this paper pertaining to the centralized PCA problem for streaming data and those of (Jain et al., 2016; Yang et al., 2018) are discussed in Section 1.3. It is, however, important to point out that this work and (Jain et al., 2016; Yang et al., 2018) analyze two related, *yet different*, algorithmic approaches that are based on Krasulina's method and Oja's rule, respectively, and their results are therefore complementary in nature.

Convergence rate analyses for stochastic optimization in finite-sample settings (Shapiro & Homem-de Mello, 2000; Linderoth et al., 2006) paved the way for non-asymptotic convergence analysis of different variants of the stochastic PCA problem, which is fundamentally a nonconvex optimization problem. Within the context of such works, the results that are the most relevant to the algorithmic strategy devised in this paper can be found in (Jain et al., 2016) and (Yang et al., 2018). The authors in these two papers provide variance-dependent convergence guarantees for Oja's rule in the finite-sample regime, thereby making their results also translatable to the algorithmic framework being considered in this paper for high-rate streaming PCA. However, as noted earlier, the results derived in (Jain et al., 2016; Yang et al., 2018) only hold with probability 3/4, which is in contrast to the high-probability results of this paper. And while one could increase the probability of success in (Jain et al., 2016; Yang et al., 2018) through multiple algorithmic runs, this is not a feasible strategy in streaming settings. In order to complement the pioneering results of (Jain et al., 2016; Yang et al., 2018) in streaming settings, we shift our focus away from Oja's rule as the base algorithm and develop a different proof strategy that substantially extends and generalizes the analysis in (Balsubramani et al., 2013) for Krasulina's method.

The analysis in (Balsubramani et al., 2013) assumes that the $\ell_2$ norm of the data samples is bounded by a positive constant, but it does not take into account the variance of the sample covariance matrices. Such an analysis leads to convergence results that are independent of the variance and hence are unable to capture any improvement in the convergence rate due to mini-batching and/or distributed implementations for computing the top eigenvector of a covariance matrix. We overcome this limitation of the analysis in (Balsubramani et al., 2013) by developing a proof in this work that explicitly accounts for the variance of the sample covariance matrices. Note that this extension/generalization of the analysis in (Balsubramani et al., 2013)—despite the intuitive nature of our final set of results—is a nontrivial task.
There are in particular two main technical challenges that are addressed in this paper: (i) using concentration of measure results that allow for incorporation of the variance within the analysis, as opposed to the Hoeffding inequality (Boucheron et al., 2013) utilized in (Balsubramani et al., 2013); and (ii) utilizing the variance-dependent concentration guarantees within two terms in Lemma 1, one of which depends on the norm bound on data samples and the other on the variance—a careful decoupling of these two quantities being critical to obtain convergence results in which the dominant term depends only on the variance.

Finally, another aspect that distinguishes this paper from prior works such as (Balsubramani et al., 2013; Jain et al., 2016; Yang et al., 2018) is that it provides a formal framework for studying the communications and computation tradeoffs involved in solving the 1-PCA problem in distributed streaming settings. This framework is described in detail in Section 3.2, with theoretical characterization of the proposed framework in terms of the communications and computation costs described in Section 4.2.

Because of the vastness of literature on (stochastic) PCA, this work is also tangentially or directly related to a number of additional prior works. We review some of these works in the following under the umbrellas of different problem setups, with the understanding that the resulting lists of works are necessarily incomplete. Much of our discussion in the following focuses on solving the PCA problem in (fast) streaming and distributed data settings, which is the main theme in this paper.

**Sketching for PCA.** Sketching methods have long been studied in the literature for solving problems involving matrix computations; see (Woodruff, 2014) for a review of such methods. The main idea behind these methods is to compress data using either randomized or deterministic sketches and then perform computations on the resulting low-dimensional data. While sketching has been used as a tool to solve the PCA problem in an efficient manner (see, e.g., (Warmuth & Kuzmin, 2007; Halko et al., 2011; Liberty, 2013; Leng et al., 2015; Karnin & Liberty, 2015)), the resulting methods cannot be used to *exactly* solve (1) in the fast streaming settings of this paper.

**Online PCA.** The PCA problem has also been extensively studied in online settings. While such settings also involve streaming data, the main goal in online PCA is to minimize the cumulative subspace estimation error over the entire time horizon of the algorithm. The online PCA framework, therefore, is especially useful in situations where either the underlying subspace changes over time or there is some adversarial noise in the sampling process. Some of the recent works in this direction include (Garber et al., 2015; Allen-Zhu & Li, 2017b; Garber, 2018; Marinov et al., 2018; Kotłowski & Neu, 2019).

**Stochastic convex optimization for PCA.** One approach towards solving (3) in streaming settings is to relax the PCA problem to a convex optimization problem and then use SGD to solve the resulting stochastic convex optimization problem (Arora et al., 2013; Garber & Hazan, 2015; Nie et al., 2016). The benefit of this approach is that one can then rely on the rich literature for solving stochastic convex problems using SGD. But the tradeoff is that one now needs to store an iterate of dimension $\mathbb{R}^{d\times d}$, as opposed to an iterate of dimension $\mathbb{R}^{d\times k}$ when we solve the PCA problem in its original nonconvex form.
Due to these high memory requirements of $O(d^2)$, we limit ourselves to solving PCA in the nonconvex form.

**Streaming PCA and nonconvex optimization.** The PCA problem in the presence of streaming data can also be tackled as an explicit constrained nonconvex optimization program (Zhang & Balzano, 2016; De Sa et al., 2015). In (Zhang & Balzano, 2016), for instance, the problem is solved as an optimization program over the Grassmannian manifold. The resulting analysis, however, relies on the availability of a good initial guess. In contrast, the authors in (De Sa et al., 2015) analyze the use of SGD for solving certain nonconvex problems that include PCA. The resulting approach, however, requires the step size to be a very small constant for eventual convergence (e.g., $10^{-12}$ for the Netflix Prize dataset); this translates into slower convergence in practice.

**Classical stochastic approximation methods for PCA.** Recent years have seen an increased interest in understanding the global convergence behavior of classical stochastic approximation methods such as Krasulina's method (Krasulina, 1969) and Oja's rule (Oja & Karhunen, 1985) for the PCA problem in nonasymptotic settings (Allen-Zhu & Li, 2017a; Chatterjee, 2005; Hardt & Price, 2014; Shamir, 2015; 2016; Jain et al., 2016; Li et al., 2016; Tang, 2019; Henriksen & Ward, 2019; Amid & Warmuth, 2019). Some of these works, such as (Shamir, 2015) and (Shamir, 2016), use variance reduction techniques to speed up the algorithmic convergence. Such works, however, require multiple passes over the data, which makes them ill-suited for fast streaming settings. The analysis in (Shamir, 2015) and (Shamir, 2016) also requires an initialization close to the true subspace, which is somewhat unlikely in practice. Among other works, the authors in (Allen-Zhu & Li, 2017a) provide eigengap-free convergence guarantees for Oja's rule. Since the results in this work do not take into account the variance of data samples, they do not generalize to mini-batch/distributed streaming settings. As stated earlier, the authors in (Jain et al., 2016) do provide variance-dependent convergence guarantees for Oja's rule, which makes this work the most relevant to ours. In particular, the authors in (Yang et al., 2018) have extended the initial analysis in (Jain et al., 2016) to mini-batch settings. But a limitation of the analysis in (Jain et al., 2016; Yang et al., 2018) is that it guarantees convergence of Oja's rule only up to a probability of 3/4. And while the probability of success can be increased to 1 − δ by running and combining the outcomes of O(log(1/δ)) instances of Oja's rule, such an approach has two major drawbacks in streaming settings. First, since new data samples arrive continuously in a streaming setting, multiple runs of an algorithm in this case can only be achieved through multiple replicas of the processing system. Such a strategy, therefore, leads to a substantial increase in system costs. Second, the outcomes of the multiple runs need to be appropriately combined. In (Jain et al., 2016), it is suggested this be done by computing the geometric median of the multiple outcomes, which requires solving an additional optimization problem. This then adds to the computational and storage overhead for the PCA problem. We conclude by remarking on two key distinctions between our results and those in (Jain et al., 2016; Yang et al., 2018).
First, the arbitrarily high probability of success in our analysis requires the initial step size in Krasulina's method to decrease with an increase in the probability; we refer the reader to the discussion following Theorem 1 in this paper for further details on this point. Second, our convergence guarantees have the flavor of 'convergence in mean' as opposed to the 'convergence in probability' nature of the results in (Jain et al., 2016; Yang et al., 2018). A straightforward application of Markov's inequality, however, leads to results that are directly comparable to the ones in (Jain et al., 2016; Yang et al., 2018); we refer the reader to Corollary 3 in Appendix D as an illustrative example of this.

**Distributed PCA and streaming data.** Several recent works such as (Balcan et al., 2016; Boutsidis et al., 2016; Garber et al., 2017; De Sa et al., 2018) have focused on the PCA problem in distributed settings. Among these works, the main focus in (Balcan et al., 2016; Boutsidis et al., 2016; Garber et al., 2017) is on improving the communications efficiency. This is accomplished in (Balcan et al., 2016; Boutsidis et al., 2016) by sketching the local iterates and communicating the resulting compressed iterates to a central server in each iteration. In contrast, (Garber et al., 2017) provides a batch solution in which every node in the network first computes the top eigenvector of its local (batch) covariance matrix and then, as a last step of the algorithm, all the local eigenvector estimates are summed up at a central server to provide an eigenvector estimate for the global covariance matrix. In contrast to these works, our focus in this paper is on establishing that distributed (mini-batch) variants of stochastic approximation methods such as Oja's rule and Krasulina's method can lead to improved convergence rates, as a function of the number of samples, for the PCA problem in fast streaming settings. In this regard, our work is more closely related to (De Sa et al., 2018), where the authors use the momentum method to accelerate convergence of the power method and further extend their work to stochastic settings. However, the approach of (De Sa et al., 2018) relies on a variance reduction technique that requires a pass over the complete dataset every once in a while; this is impractical in streaming settings. In addition, theoretical guarantees in (De Sa et al., 2018) are based on the assumption of a "good" initialization; further, an implicit assumption in (De Sa et al., 2018) is that inter-node communication is fast enough that there are no communication delays.

**Decentralized PCA.** Another important practical setting under which the PCA problem has been studied is when the data are distributed across an interconnected set of nodes that form a connected graph but not necessarily a complete one. We refer the reader to (Blondel et al., 2005; Aysal et al., 2009; Khan et al., 2009; Dimakis et al., 2010) for a general understanding of this decentralized setting. Some contributions to the PCA problem in this setting are (Kempe et al., 2008; Li et al., 2011; Korada et al., 2011; Wu et al., 2017; 2018; Gang et al., 2021; Gang & Bajwa, 2021; 2022). Most of these contributions consider batch data settings and their extensions to the streaming setting are not evident.
And while contributions such as (Li et al., 2011) do consider the streaming data setting, the convergence rates established in such works are asymptotic in nature and the theoretical analyses cannot be used to derive conditions for a linear speed-up in convergence rates in the mini-batch setting.

**Connections to low-rank matrix approximation.** The *low-rank matrix approximation* problem (Frieze et al., 2004; Clarkson & Woodruff, 2017), which involves computing a low-rank approximation of a given matrix, is closely related to the PCA problem. The overarching goal in both problems is the same: *find a subspace that best approximates the data samples / given matrix*. In the case of PCA, however, the focus is fundamentally on finding an orthogonal basis for the subspace that is precisely given by the top eigenvectors of the data covariance matrix. Notwithstanding this difference between the two problems, (Yun et al., 2015; Tropp et al., 2019) are two works within the low-rank matrix approximation literature that are the most related to this paper. The setting in (Yun et al., 2015) corresponds to a large-scale but fixed matrix whose columns are presented to a low-rank approximation algorithm in a streaming manner. This is in contrast to the streaming setting of this work that is akin to having a matrix with infinitely many columns. In addition, the algorithm being studied in (Yun et al., 2015) requires computing the top principal component directions of a random submatrix of the larger matrix in a batch setting as its first step, which is again a departure from the streaming data setting of this work. Next, the mathematical model in (Tropp et al., 2019) corresponds to a matrix that is given by the sum of a long sequence of low-rank and sparse matrices that are presented to a low-rank approximation algorithm in a streaming manner. This summation aspect of the data model in (Tropp et al., 2019) does not coincide with the mathematical model of the streaming data samples in this work. Finally, and in stark contrast to this paper, neither of these works is concerned with the interplay between the streaming data rate, data processing rate, and the number of interconnected processors.

**Connections to stochastic nonconvex optimization.** Recent years have also seen an increased focus on understanding (variants of) SGD for general (typically unconstrained) stochastic nonconvex optimization problems. Among such works, some have focused on classical SGD (Ge et al., 2015; Hazan et al., 2016; 2017; Li et al., 2021; Mokhtari et al., 2017), some have studied variance-reduction variants of SGD (Reddi et al., 2016a;b; Qureshi et al., 2021), and some have investigated accelerated variants of stochastic nonconvex optimization (Allen-Zhu, 2018b;a). In particular, works such as (Reddi et al., 2016a; Allen-Zhu & Hazan, 2016) are directly relevant to this paper since these works also use mini-batches to reduce sample variance and improve on SGD convergence rates. While mini-batching (implicit, through the distributed framework, as well as explicit) is also one of the key ingredients of our work, this paper differs from such related works because of its ability to prove convergence to a global optimum of the 1-PCA problem. In contrast, the aforementioned works only provide guarantees for convergence to first-order stationary points of (typically unconstrained) stochastic nonconvex optimization problems.
## 1.4 Notational Convention And Paper Organization

We use lower-case ($a$), bold-faced lower-case ($\mathbf{a}$), and bold-faced upper-case ($\mathbf{A}$) letters to represent scalars, vectors, and matrices, respectively. Given a scalar $a$ and a vector $\mathbf{a}$, $\lceil a \rceil$ denotes the smallest integer greater than or equal to $a$, while $\|\mathbf{a}\|_2$ denotes the $\ell_2$-norm of $\mathbf{a}$. Given a matrix $\mathbf{A}$, $\|\mathbf{A}\|_2$ denotes its spectral norm and $\|\mathbf{A}\|_F$ denotes its Frobenius norm. In addition, assuming $\mathbf{A} \in \mathbb{R}^{d\times d}$ to be a positive semi-definite matrix, $\lambda_i(\mathbf{A})$ denotes its i-th largest eigenvalue, i.e., $\|\mathbf{A}\|_2 := \lambda_1(\mathbf{A}) \ge \lambda_2(\mathbf{A}) \ge \cdots \ge \lambda_d(\mathbf{A}) \ge 0$. Whenever obvious from the context, we drop $\mathbf{A}$ from $\lambda_i(\mathbf{A})$ for notational compactness. Finally, $\mathbb{E}\{\cdot\}$ denotes the expectation operator, where the underlying probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is either implicit from the context or is explicitly pointed out in the body.

The rest of this paper is organized as follows. We first provide a formal description of the problem and the system model in Section 2. The two proposed variants of Krasulina's method that can be used to solve the 1-PCA problem in fast streaming settings are then presented in Section 3. In Section 4, we provide theoretical guarantees for the proposed algorithms, while proofs / outlines of the proofs of the main theoretical results are provided in Section 5. Finally, numerical results using both synthetic and real-world data are presented in Section 6, while appendices are used for detailed proofs of some of the theoretical results.

## 2 Problem Formulation And System Model

Our goal is to use some variants of Krasulina's method (cf. (4)) in order to obtain an estimate of the top eigenvector of a covariance matrix from independent and identically distributed (i.i.d.) data samples that are fast streaming into a system. The algorithms proposed in this regard and their convergence analysis rely on the following sets of assumptions concerning the data and the system.

## 2.1 Data Model

We consider a streaming data setting where a new data sample $\mathbf{x}_{t'} \in \mathbb{R}^d$, independently drawn from an unknown distribution $\mathcal{P}_x$, arrives at the system at each sampling time instance $t'$. We assume a uniform data arrival rate of $R_s$ samples per second and, without loss of generality, take the data arrival index $t' \ge 1$ to be an integer. We also make the following assumptions concerning our data, which aid in our convergence analysis.

[A1] *(Zero-mean, norm-bounded samples)* Without loss of generality, the data samples have zero mean, i.e., $\mathbb{E}_{\mathcal{P}_x}\{\mathbf{x}_{t'}\} = \mathbf{0}$. In addition, the data samples are almost surely bounded in norm, i.e., $\|\mathbf{x}_{t'}\|_2 \le r$, where we let the bound $r \ge 1$ without loss of generality.

[A2] *(Spectral gap of the covariance matrix)* The largest eigenvalue of $\mathbf{\Sigma} := \mathbb{E}_{\mathcal{P}_x}\{\mathbf{x}_{t'}\mathbf{x}_{t'}^{\mathrm{T}}\}$ is strictly greater than the second largest eigenvalue, i.e., $\lambda_1(\mathbf{\Sigma}) > \lambda_2(\mathbf{\Sigma}) \ge \lambda_3(\mathbf{\Sigma}) \ge \cdots \ge \lambda_d(\mathbf{\Sigma}) \ge 0$.

Note that both Assumptions [A1] and [A2] are standard in the literature for convergence analysis of Krasulina's method and Oja's rule (cf. (Balsubramani et al., 2013; Oja & Karhunen, 1985; Allen-Zhu & Li, 2017a; Jain et al., 2016)). We also associate with each data sample $\mathbf{x}_{t'}$ a rank-one random matrix $\mathbf{A}_{t'} := \mathbf{x}_{t'}\mathbf{x}_{t'}^{\mathrm{T}}$, which is a trivial unbiased estimate of the population covariance matrix $\mathbf{\Sigma}$. We then define the variance of this unbiased estimate as follows.

Definition 1 (Variance of sample covariance matrix).
*We define the variance of the sample covariance matrix* $\mathbf{A}_{t'} := \mathbf{x}_{t'}\mathbf{x}_{t'}^{\mathrm{T}}$ *as follows:*

$$\sigma^{2}:=\mathbb{E}_{\mathcal{P}_{x}}\left\{\left\|\mathbf{A}_{t^{\prime}}-\mathbf{\Sigma}\right\|_{F}^{2}\right\}.$$

Note that all moments of the probability distribution $\mathcal{P}_x$ exist by virtue of the norm boundedness of $\mathbf{x}_{t'}$ (cf. Assumption [A1]). The variance $\sigma^2$ of the sample covariance matrix $\mathbf{A}_{t'}$ as defined above, therefore, exists and is finite.

Figure 1: The distributed PCA problem, which involves distributed processing of data over a network of N processors, can arise in two contexts. (a) A data splitter can split a data stream into N parallel streams, one for each processor in the network. In relation to the original data stream, this effectively reduces the data arrival rate for each parallel stream by a factor of N. (b) Data can be inherently distributed, as in Internet-of-Things systems, and can arrive at N different processing nodes as N separate data streams.

The two algorithms proposed in this paper, namely, D-Krasulina and DM-Krasulina, are initialized with a vector $\mathbf{v}_0 \in \mathbb{R}^d$ that is randomly generated over the unit sphere in $\mathbb{R}^d$ with respect to the uniform (Haar) measure. All analysis in this paper is with respect to the natural probability space $(\Omega, \mathcal{F}, \mathbb{P})$ given by the stochastic process $(\mathbf{v}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots)$ and filtered versions of this probability space.

## 2.2 System Model

Let $R_p$ denote the number of data samples that a single processing node in the system can process in one second using an iteration of the form (4).³ The focus of this paper is on the high-rate streaming setting, which corresponds to the setup in which the data arrival rate $R_s$ is strictly greater than the data processing rate $R_p$. A naive approach to deal with this computation–streaming mismatch is to discard samples in the system so that only a fraction $1/\alpha$ of them, where $\alpha := R_s/R_p$, is processed. Such an approach, however, leads to an equivalent reduction in the convergence rate by a factor of α. We pursue an alternative to this approach in the paper that involves the simultaneous use of $N \ge \lceil \alpha \rceil$ interconnected processors, each individually capable of processing $R_p$ samples per second, within the system. In particular, we advocate the use of such a network of N processors in the following two manners to achieve near-optimal convergence rates (as a function of the number of samples arriving at the system) for estimates of the top eigenvector of $\mathbf{\Sigma}$ in high-rate streaming settings.

## 2.2.1 Distributed Processing Over A Network Of Processors

We assume the fast data stream terminates into a data splitter, which splits the original stream with data rate $R_s$ samples per second into N parallel streams, each with data rate $R_s/N$ samples per second, that are then synchronously input to the interconnected network of N processors; see Figure 1(a) for a schematic rendering of such splitting. In order to simplify notation in this setting, we reindex the data samples associated with the i-th processor / data stream in the following as $\{\mathbf{x}_{i,t}\}_{t\in\mathbb{Z}_+}$, where the reindexing map $(i, t) \mapsto t'$ is simply defined as $t' = i + (t-1)N$.

³Note that the parameter $R_p$, among other factors, is a function of the problem dimension d; this dependence is being suppressed in the notation for ease of exposition.

We also assume the network of processors implements some message passing protocol that allows it to compute sums of locally stored vectors, i.e., $\sum_{i=1}^{N}\mathbf{a}_i$ for the set of local vectors $\{\mathbf{a}_i\}_{i=1}^{N}$, within the network.
This could, for instance, be accomplished using either Reduce or AllReduce primitives within most message passing implementations. We let $R_c(N)$ denote the number of these primitive (sum) operations that the message passing protocol can carry out per second in the network of N processors. Note that, in addition to the number of nodes N, this parameter also depends upon the problem dimension d, the message passing implementation, the network topology, and the inter-node communication bandwidth, all of which are being abstracted here through $R_c(N)$. We will also be suppressing the dependence of the parameter $R_c$ on N within the notation beyond this section for ease of exposition.

Data splitting among this network of N processors effectively slows down the data streaming rate at each processing node by a factor of N. It is under this system model that we present a distributed variant of Krasulina's method, termed D-Krasulina, in Section 3.1 that involves two key operations after each round of splitting of the N samples: (i) per-processor computation of the form (4), which requires $1/R_p$ seconds for completion, and (ii) a network-wide vector-sum operation, which requires an additional $1/R_c(N)$ seconds for completion. Under such an algorithmic framework, we can express the *effective* network-wide data processing rate, denoted by $R_e$, in terms of the per-node data processing rate $R_p$ and the network-wide sum rate $R_c(N)$ by noticing that the *effective* time it takes to process one sample within the network is given by

$$T_{e}={\frac{1}{N R_{p}}}+{\frac{1}{R_{c}(N)}}{\text{ seconds/sample}}.$$

Here, the first algorithmic operation gives rise to the first term in the above expression, since it takes $1/R_p$ seconds for computation of the form (4) for N samples, and the network-wide vector-sum operation gives rise to the second term. It then follows that the effective data processing rate is $R_e = 1/T_e = \frac{NR_pR_c(N)}{NR_p + R_c(N)}$. In the high-rate streaming setting, D-Krasulina can only operate as long as the streaming rate $R_s$ does not exceed the effective processing rate $R_e$, i.e., $R_s \le R_e$. This gives rise to the following condition on the number of processors N that enables our algorithmic framework to cope with the high-rate streaming data:

$$N\left(R_{c}(N)-R_{s}\right)\geq\frac{R_{s}R_{c}(N)}{R_{p}}\quad\implies\quad N\geq\frac{R_{s}R_{c}(N)}{R_{p}\left(R_{c}(N)-R_{s}\right)};\quad\left(R_{c}(N)-R_{s}\right)>0.\tag{7}$$

Our developments in the following will be based on the assumption that the condition on N in (7) is met. The main analytical challenge for us is understanding the scenarios under which distributed processing over a network of N processors using D-Krasulina still yields near-optimal performance; we address this challenge in Section 4.1 by improving on the analysis in (Balsubramani et al., 2013) for Krasulina's method.

We conclude by remarking on the nature of the lower bound in (7) on the number of processors N, given that the bound itself is a function of N through its dependence on the parameter $R_c(N)$, which is expected to decrease with an increase in N. The high-performance computing community has access to copious amounts of trace data for different parallel computing environments that allows one to relate the parameter $R_c(N)$ for a given dimension d to the number of processors N; see, e.g., (Kavulya et al., 2010). One such relationship could be, for instance, that $R_c(N) \propto 1/N^{\kappa}$ for some $\kappa \in (0, 1]$.
The final bound on the number of processors N can then be obtained by plugging such a relationship into the provided bound.⁴

Remark 1. *It is straightforward to see that our developments in this paper are also applicable to the setting in which data naturally arrives in a distributed manner at N different nodes, as in Figure 1(b). In addition, our analysis of D-Krasulina is equivalent to that of a mini-batch Krasulina's method running on a powerful-enough single processor that uses a mini-batch of N samples in each iteration.*

⁴As an illustrative toy example, let us consider a scenario for a fixed d in which $R_s = 10^3$ sec⁻¹, $R_p = 50$ sec⁻¹, and the parameter $R_c(N)$ scales as $R_c(N) = C_R/N$ sec⁻¹ for a constant $C_R$ that depends on the message passing implementation, network topology, and inter-node communication bandwidth. In a "slow" network, corresponding to $C_R \le 10^3$, there exists no N that satisfies the condition (7). In a "faster" network, corresponding to $C_R > 10^3 N$, a necessary condition on the number of processors is $N < 10^{-3}C_R$. But while this necessary condition is satisfied for $1 \le N < 80$ for $C_R$ as high as $8 \times 10^4$, there still does not exist any N that satisfies (7). In the case of even faster networks, however, we are able to find feasible values of N for running D-Krasulina without discarding any samples; e.g., any $28 \le N \le 72$ satisfies the condition (7) for $C_R = 10^5$.

## 2.2.2 Distributed Processing Coupled With Mini-Batching

Mini-batching in (centralized) stochastic methods, as discussed in Section 1.2, helps reduce the wall-clock time by reducing the number of read operations per iteration. Mini-batching of samples in distributed settings has the added advantage of a reduction in the average number of primitive (sum) operations per processed sample, which further reduces the wall-clock time. It is in this vein that we put forth a mini-batched variant of D-Krasulina, which is termed DM-Krasulina, in Section 3.2. Similar to the case of D-Krasulina (cf. Figure 1), there are several equivalent system models that can benefit from the DM-Krasulina framework. In keeping with our theme of fast streaming data, as well as for the sake of concreteness, we assume the system buffers (i.e., mini-batches) $B := bN \ge \lceil R_s/R_p \rceil$ samples of the incoming data stream every $B/R_s$ seconds for some parameter $b \in \mathbb{Z}_+$. This *network-wide* mini-batch of B samples is then split into N parallel (local) mini-batches, each comprising $b = B/N$ samples, which are then synchronously input to the interconnected network of N processors at a rate of $R_s/N$ samples per second and collaboratively processed by DM-Krasulina. In each iteration t of DM-Krasulina, therefore, the network processes a total of $B \ge N$ samples, as opposed to N samples for D-Krasulina. In order to simplify notation in this mini-batched distributed setting, we reindex the b data samples in the mini-batch associated with the i-th processor in iteration t of DM-Krasulina as $\{\mathbf{x}_{i,j,t}\}_{j=1}^{b}$, $t \in \mathbb{Z}_+$, where the reindexing map $(i, j, t) \mapsto t'$ is defined as $t' = j + (i-1)b + (t-1)B$.

It is straightforward to see from the discussion surrounding (7) that the DM-Krasulina framework can process all data samples arriving at the system as long as $N \ge \frac{bR_cR_s}{R_p(bR_c - R_s)}$ and $(bR_c - R_s) > 0$. However, when this condition is violated due to a faster streaming rate $R_s$, a slower processing rate $R_p$, a slower summation rate $R_c$, or any combination thereof, it becomes necessary for DM-Krasulina to discard $\mu := \frac{bR_s}{R_p} + \frac{NR_s}{R_c} - B$ samples at the splitter per iteration; a small numerical sketch of these feasibility conditions is given below.
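The hypothetical helper below (ours, not part of the paper) makes these conditions concrete: it checks the no-loss condition and evaluates µ when the condition fails, reusing the rates $R_s = 10^3$ and $R_p = 50$ from the toy example in the footnote together with an assumed value of $R_c$.

```python
def samples_dropped_per_iter(R_s, R_p, R_c, N, b):
    """Check the no-loss condition for DM-Krasulina and return the per-iteration
    loss mu = b*R_s/R_p + N*R_s/R_c - B (zero whenever the condition holds).

    R_s : streaming rate (samples/sec);  R_p : per-node processing rate;
    R_c : network-wide primitive-sum rate;  N : nodes;  b : per-node batch size.
    """
    B = b * N
    no_loss = (b * R_c - R_s) > 0 and N >= (b * R_c * R_s) / (R_p * (b * R_c - R_s))
    mu = 0.0 if no_loss else b * R_s / R_p + N * R_s / R_c - B
    return no_loss, mu

print(samples_dropped_per_iter(1e3, 50, 2e3, N=40, b=1))  # (True, 0.0): no samples lost
print(samples_dropped_per_iter(1e3, 50, 2e3, N=10, b=4))  # (False, 45.0): 45 dropped/iter
```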
The main analytical challenges for DM-Krasulina are, therefore, twofold: first, assuming µ = 0, characterize the mini-batch size B that leads to near-optimal convergence rates for DM-Krasulina in terms of the total number of samples arriving at the system; second, when discarding of samples becomes necessary, characterize the interplay between B and µ that allows DM-Krasulina to still achieve (order-wise) near-optimal convergence rates. We address both these challenges in Section 4.2.

## 3 Proposed Distributed Stochastic Algorithms

We now formally describe the two stochastic algorithms, termed D-Krasulina and DM-Krasulina, that can be used to solve the 1-PCA problem from high-rate streaming data under the two setups described in Section 2.2.1 and Section 2.2.2, respectively.

## 3.1 Distributed Krasulina's Method (D-Krasulina) For High-Rate Streaming Data

Recall from the discussion in Section 2.2.1 that each node i in the network receives data sample $\mathbf{x}_{i,t}$ in iteration t of the distributed implementation, which comprises N processing nodes. Unlike the centralized Krasulina's method (cf. (4)), therefore, any distributed variant of Krasulina's method needs to process N samples in every iteration t. Using $\mathbf{A}_{i,t}$ as a shorthand for $\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\mathrm{T}}$, one natural extension of (4) that processes N samples in each iteration is as follows:

$$\mathbf{v}_{t}=\mathbf{v}_{t-1}+\gamma_{t}\bigg(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\mathbf{v}_{t-1}-\frac{1}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\Big(\mathbf{v}_{t-1}^{\mathrm{T}}\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}\Big)\bigg)=\mathbf{v}_{t-1}+\gamma_{t}\boldsymbol{\xi}_{t}.\tag{8}$$

One natural question here is whether (8) can be computed within our distributed framework. The answer to this is in the affirmative under the assumption $N \ge \frac{R_sR_c}{R_p(R_c - R_s)}$, with the implementation (termed D-Krasulina) formally described in Algorithm 1.⁵

Algorithm 1 Distributed Krasulina's Method (D-Krasulina)

Input: Incoming data streams at N processors, expressed as $\big\{\mathbf{x}_{i,t} \overset{\text{i.i.d.}}{\sim} \mathcal{P}_x\big\}_{i=1,\,t\in\mathbb{Z}_+}^{N}$, and a step-size sequence $\{\gamma_t \in \mathbb{R}_+\}_{t\in\mathbb{Z}_+}$
Initialize: All processors initialize with $\mathbf{v}_0 \in \mathbb{R}^d$ randomly generated over the unit sphere
1: for t = 1, 2, . . . do
2: **(In Parallel)** Processor i receives data sample $\mathbf{x}_{i,t}$ and updates $\boldsymbol{\xi}_{i,t}$ locally as follows:
$$\forall i\in\{1,\ldots,N\},\quad\boldsymbol{\xi}_{i,t}\leftarrow\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\mathrm{T}}\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\mathrm{T}}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}$$
3: Compute $\boldsymbol{\xi}_t \leftarrow \frac{1}{N}\sum_{i=1}^{N}\boldsymbol{\xi}_{i,t}$ in the network using a distributed vector-sum subroutine
4: Update the eigenvector estimate in the network as follows: $\mathbf{v}_t \leftarrow \mathbf{v}_{t-1} + \gamma_t\boldsymbol{\xi}_t$
5: end for
Return: An estimate $\mathbf{v}_t$ of the eigenvector $\mathbf{q}^*$ of $\mathbf{\Sigma}$ associated with $\lambda_1(\mathbf{\Sigma})$

⁵Here, and in the following, the implicit assumption is that the quantity $R_c - R_s$ (resp., $bR_c - R_s$) is strictly positive for D-Krasulina (resp., DM-Krasulina).

Notice that unlike the classical Krasulina's method, which processes a total of t samples after t iterations, D-Krasulina processes a total of Nt samples after t iterations in order to provide an estimate $\mathbf{v}_t$ of the top eigenvector $\mathbf{q}^*$ of $\mathbf{\Sigma}$; a minimal simulation sketch of Algorithm 1 follows.
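The following is a minimal single-process simulation of Algorithm 1 (ours, for intuition only): the distributed vector-sum of Step 3 is emulated by an in-memory average over the N local updates.

```python
import numpy as np

def d_krasulina(streams, c=1.0, L=100.0):
    """Simulate D-Krasulina (Algorithm 1); streams[t, i] is node i's sample in iteration t+1."""
    T, N, d = streams.shape
    rng = np.random.default_rng(0)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)                     # v_0 drawn uniformly over the unit sphere
    for t in range(1, T + 1):
        X = streams[t - 1]                     # the N local samples of iteration t
        Xv = X @ v                             # x_{i,t}^T v_{t-1} for all i, shape (N,)
        # local updates xi_{i,t} (Step 2), followed by their network average (Step 3):
        xi = (Xv[:, None] * X - (Xv**2 / (v @ v))[:, None] * v).mean(axis=0)
        v = v + (c / (L + t)) * xi             # Step 4 with gamma_t = c / (L + t)
    return v

# Example: N = 8 nodes, T = 2000 iterations, d = 30; top eigenvector of Sigma is e_1.
rng = np.random.default_rng(2)
scales = np.concatenate(([3.0], np.ones(29)))
v = d_krasulina(rng.standard_normal((2000, 8, 30)) * scales)
print(1 - v[0]**2 / (v @ v))                   # squared-sine error w.r.t. q* = e_1
```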
Another natural question, therefore, is whether the estimate $\mathbf{v}_t$ returned by D-Krasulina can converge to $\mathbf{q}^*$ at the near-optimal rate of O(1/number of processed samples). Convergence analysis of D-Krasulina in Section 4 establishes that the answer to this is also in the affirmative under appropriate conditions that are specified in Theorem 1. An important interpretation of this result is that our proposed distributed implementation of Krasulina's method can lead to a linear speed-up as a function of the number of processing nodes N in the network.

## 3.2 Mini-Batched D-Krasulina (DM-Krasulina) For High-Rate Streaming Data

The distributed, mini-batched setup described in Section 2.2.2 entails each node i receiving a mini-batch of $b = B/N$ data samples, $\{\mathbf{x}_{i,j,t}\}_{j=1}^{b}$, in each iteration t, for a total of $B = bN$ samples across the network in every iteration. Similar to (8), these B samples can in principle be processed by the following variant of the original Krasulina's iteration:

$$\mathbf{v}_{t}=\mathbf{v}_{t-1}+\gamma_{t}\left({\frac{1}{B}}\sum_{i=1}^{N}\sum_{j=1}^{b}\mathbf{A}_{i,j,t}\mathbf{v}_{t-1}-{\frac{1}{\|\mathbf{v}_{t-1}\|_{2}^{2}}}\Big(\mathbf{v}_{t-1}^{\mathrm{T}}{\frac{1}{B}}\sum_{i=1}^{N}\sum_{j=1}^{b}\mathbf{A}_{i,j,t}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}\Big)\right),\tag{9}$$

where $\mathbf{A}_{i,j,t}$ is a shorthand for $\mathbf{x}_{i,j,t}\mathbf{x}_{i,j,t}^{\mathrm{T}}$. Practical computation of (9) within our distributed framework, however, requires consideration of two different scenarios.

- *Scenario 1:* The mini-batched distributed framework satisfies $N \ge \frac{bR_cR_s}{R_p(bR_c - R_s)}$. This enables incorporation of every sample arriving at the system into the eigenvector estimate.

- *Scenario 2:* The mini-batched distributed framework leads to the condition $N < \frac{bR_cR_s}{R_p(bR_c - R_s)}$. This necessitates discarding of $\mu = \frac{bR_s}{R_p} + \frac{NR_s}{R_c} - B$ samples per iteration in the system. Stated differently, the system receives B + µ samples per iteration in this scenario, but only B samples per iteration are incorporated into the eigenvector estimate.

We now formally describe the algorithm (termed DM-Krasulina) that implements (9) under both these scenarios in Algorithm 2. Speaking strictly in terms of implementation, the mini-batched setup of DM-Krasulina allows one to relax the condition $N \ge \frac{R_sR_c}{R_p(R_c - R_s)}$ associated with D-Krasulina to either $N \ge \frac{bR_cR_s}{R_p(bR_c - R_s)}$, which still incorporates all samples into the eigenvector estimate, or $N < \frac{bR_cR_s}{R_p(bR_c - R_s)}$, which involves discarding of µ > 0 samples per algorithmic iteration.

Algorithm 2 Distributed Mini-batch Krasulina's Method (DM-Krasulina)

Input: Incoming streams of mini-batches $\big\{\mathbf{x}_{i,j,t} \overset{\text{i.i.d.}}{\sim} \mathcal{P}_x\big\}_{i,j=1,\,t\in\mathbb{Z}_+}^{N,b}$ at N processors, size of the network-wide mini-batch $B := bN$, and a step-size sequence $\{\gamma_t \in \mathbb{R}_+\}_{t\in\mathbb{Z}_+}$
Initialize: All processors initialize with $\mathbf{v}_0 \in \mathbb{R}^d$ randomly generated over the unit sphere
1: for t = 1, 2, . . . do
2: **(In Parallel)** $\forall i \in \{1, \ldots, N\}$, $\boldsymbol{\xi}_{i,t} \leftarrow \mathbf{0}$
3: for j = 1, . . . , b do
4: **(In Parallel)** Processor i receives data sample $\mathbf{x}_{i,j,t}$ and updates $\boldsymbol{\xi}_{i,t}$ locally as follows:
$$\forall i\in\{1,\ldots,N\},\quad\boldsymbol{\xi}_{i,t}\leftarrow\boldsymbol{\xi}_{i,t}+\mathbf{x}_{i,j,t}\mathbf{x}_{i,j,t}^{\mathrm{T}}\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{x}_{i,j,t}\mathbf{x}_{i,j,t}^{\mathrm{T}}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}$$
5: end for
6: Compute $\boldsymbol{\xi}_t \leftarrow \frac{1}{B}\sum_{i=1}^{N}\boldsymbol{\xi}_{i,t}$ in the network using a distributed vector-sum subroutine
7: Update the eigenvector estimate in the network as follows: $\mathbf{v}_t \leftarrow \mathbf{v}_{t-1} + \gamma_t\boldsymbol{\xi}_t$
8: if $N < \frac{bR_cR_s}{R_p(bR_c - R_s)}$ then
9: The system (e.g., data splitter/buffer) receives (B + µ) additional samples during execution of Steps 2–7, out of which $\mu \in \mathbb{Z}_+$ samples are discarded
10: end if
11: end for
Return: An estimate $\mathbf{v}_t$ of the eigenvector $\mathbf{q}^*$ of $\mathbf{\Sigma}$ associated with $\lambda_1(\mathbf{\Sigma})$

While this makes DM-Krasulina particularly attractive for systems with slower communication links, the major analytical hurdle here is understanding the interplay between the different problem parameters that still allows DM-Krasulina to achieve near-optimal convergence rates in terms of the number of samples received at the system. We tease out this interplay as part of the convergence analysis of DM-Krasulina in Section 4.

## 3.3 A Note On The Processing Of Non-Centered And Non-I.I.D. Data

Both D-Krasulina and DM-Krasulina have been developed under the assumptions of zero-mean (i.e., *centered*) and i.i.d. data samples. In this section, we discuss one possible approach to handling non-centered data using the two algorithms and also provide a rationale for the applicability of D-Krasulina and DM-Krasulina in the face of any shifts in the data distribution.

In the case of non-centered data, one simple strategy that works for D-Krasulina and DM-Krasulina is to maintain a (network-wide) running average of the non-centered data samples, and then use it to center the data samples at each processor before applying Step 2 (resp., Step 4) in Algorithm 1 (resp., Algorithm 2); a minimal sketch of this strategy is given at the end of this subsection. While such a modification requires an extension of the convergence analysis presented in the next section, this can be accomplished in a manner similar to the analytical extension in (Zhou & Bai, 2021) for the centralized Oja's rule with non-centered data.

Next, while the forthcoming convergence analysis for D-Krasulina and DM-Krasulina has been provided under the assumption of i.i.d. data samples, the two algorithms are expected to remain effective in the non-i.i.d. data setting. This is because D-Krasulina and DM-Krasulina first essentially compute a new gradient-like quantity using the latest batch of data samples at each time t (cf. Step 2 in Algorithm 1 and Step 4 in Algorithm 2), and then update their respective eigenvector estimates using this quantity. In particular, any shifts in the data distribution can be tracked by the two algorithms because of such an update rule. It is because of this reason that algorithms such as Oja's rule and Krasulina's method are also often employed for the problem of *subspace tracking* (see, e.g., (Bin Yang, 1995; Chatterjee, 2005; Doukopoulos & Moustakides, 2008)). Since providing a formal analysis of such tracking capabilities of D-Krasulina and DM-Krasulina for non-i.i.d. data is beyond the scope of this paper, we leave it for future work.
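A minimal sketch (ours; `RunningMeanCenterer` is a hypothetical helper, not part of Algorithms 1–2) of the centering strategy described above:

```python
import numpy as np

class RunningMeanCenterer:
    """Maintain a network-wide running mean of the raw samples and use it to center
    each incoming batch before the local Krasulina update (cf. Section 3.3)."""

    def __init__(self, d):
        self.mean = np.zeros(d)
        self.count = 0

    def center(self, X):
        """X: (N, d) raw samples of one iteration; returns the centered samples."""
        self.count += X.shape[0]
        # incremental update of the mean over all samples observed so far
        self.mean += (X.sum(axis=0) - X.shape[0] * self.mean) / self.count
        return X - self.mean

# Usage: center samples before forming xi_{i,t} in Algorithm 1 or Algorithm 2.
centerer = RunningMeanCenterer(d=30)
raw = np.random.default_rng(3).standard_normal((8, 30)) + 5.0   # non-centered data
X_centered = centerer.center(raw)
```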
## 4 Convergence Analysis Of D-Krasulina And DM-Krasulina

Our convergence analysis of D-Krasulina and DM-Krasulina is based on understanding the rate at which the so-called *potential function* $\Psi_t$ of these methods converges to zero as a function of the number of algorithmic iterations t. Formally, this potential function $\Psi_t$ is defined as follows.

Definition 2 (Potential function). *Let* $\mathbf{q}^*$ *be the eigenvector of* $\mathbf{\Sigma}$ *associated with* $\lambda_1(\mathbf{\Sigma})$ *and let* $\mathbf{v}_t$ *be an estimate of* $\mathbf{q}^*$ *returned by an iterative algorithm in iteration* t. *Then the quality of the estimate* $\mathbf{v}_t$ *can be measured in terms of the potential function* $\Psi_t : \mathbf{v}_t \mapsto [0, 1]$ *that is defined as*

$$\Psi_{t}:=1-{\frac{(\mathbf{v}_{t}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t}\|_{2}^{2}}}.\tag{10}$$

Notice that $\Psi_t$ is a measure of estimation error, which approaches 0 as $\mathbf{v}_t$ converges to any scalar multiple of $\mathbf{q}^*$. This measure, which essentially computes the sine squared of the angle between $\mathbf{q}^*$ and $\mathbf{v}_t$, is frequently used in the literature to evaluate the performance of PCA algorithms. In particular, when one initializes an algorithm with a random vector $\mathbf{v}_0$ uniformly distributed over the unit sphere in $\mathbb{R}^d$, then it can be shown that $\mathbb{E}\{\Psi_0\} \le 1 - 1/d$ (Balsubramani et al., 2013). While this is a statement in expectation for t = 0, our analysis relies on establishing such a statement in probability for any $t \ge 0$ for both D-Krasulina and DM-Krasulina. Specifically, we show in Theorem 3 that $\sup_{t\ge 0} \Psi_t \le 1 - O(1/d)$ with high probability as long as $\gamma_t = c/(L+t)$ for any constant c and a large-enough constant L.

All probabilistic analysis in the following uses a filtration $(\mathcal{F}_t)_{t\ge 0}$ of sub σ-algebras of $\mathcal{F}$ on the sample space Ω, where the σ-algebra $\mathcal{F}_t$ captures the progress of the iterates of the two proposed stochastic algorithms up to iteration t. Mathematically, let us define the sample covariance matrix $\mathbf{A}_t$ as $\mathbf{A}_t := \frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}$ and $\mathbf{A}_t := \frac{1}{B}\sum_{i=1}^{N}\sum_{j=1}^{b}\mathbf{A}_{i,j,t}$ for D-Krasulina and DM-Krasulina, respectively. In order to simplify notation and unify some of the analysis of D-Krasulina and DM-Krasulina, we will be resorting to the use of the random matrices $\mathbf{A}_t$, as opposed to $\mathbf{x}_{i,t}$ and $\mathbf{x}_{i,j,t}$, in the following. We then have the following definition of σ-algebras in the filtration.

Definition 3 (σ-algebra $\mathcal{F}_t$). *The σ-algebra* $\mathcal{F}_t \subseteq \mathcal{F}$ *on sample space* Ω *for both D-Krasulina and DM-Krasulina is defined as the σ-algebra generated by the vector-/matrix-valued random variables* $(\mathbf{v}_0, \mathbf{A}_1, \ldots, \mathbf{A}_t)$, *i.e.,* $\mathcal{F}_t := \sigma(\mathbf{v}_0, \mathbf{A}_1, \ldots, \mathbf{A}_t)$.

In addition to the filtration $(\mathcal{F}_t)_{t\ge 0}$, the forthcoming analysis also uses a sequence of nested sample spaces that is defined as follows.

Definition 4 (Nested sample spaces). *Let* $(t_0, \epsilon_0), (t_1, \epsilon_1), (t_2, \epsilon_2), \ldots, (t_J, \epsilon_J)$ *be a sequence of pairs such that* $0 = t_0 < t_1 < t_2 < \ldots < t_J$ *and* $\epsilon_0 > \epsilon_1 > \epsilon_2 > \ldots > \epsilon_J > 0$ *for any non-negative integer* J. *We then define a sequence* $(\Omega'_t)_{t\in\mathbb{Z}_+}$ *of nested sample spaces such that* $\Omega \supset \Omega'_1 \supset \Omega'_2 \supset \ldots$, *each* $\Omega'_t$ *is* $\mathcal{F}_{t-1}$*-measurable, and*

$$\Omega'_{t}:=\left\{\omega\in\Omega:\forall\,0\leq j\leq J,\;\sup_{t_{j}\leq l<t}\Psi_{l}(\omega)\leq1-\epsilon_{j}\right\}.\tag{11}$$

Here, ω denotes an outcome within the sample space Ω and $\Psi_l(\omega)$ is the (random) potential function after the l-th iteration of D-Krasulina / DM-Krasulina that is being explicitly written as a function of the outcomes ω in the sample space.

In words, the sample space $\Omega'_t$ corresponds to that subset of the original sample space for which the error $\Psi_l$ in all iterations $l \in \{t_j, \ldots, t-1\}$ is below $1 - \epsilon_j$, where $j \in \{0, \ldots, J\}$. In the following, we use the notation $\mathbb{E}_t\{\cdot\}$ and $\mathbb{P}_t(\cdot)$ to denote conditional expectation and conditional probability, respectively, with respect to $\Omega'_t$. An immediate implication of Definition 4 is that, for appropriate choices of the $\epsilon_j$'s, it allows us to focus on those subsets of the original sample space that ensure convergence of iterates of the proposed algorithms to the top eigenvector $\mathbf{q}^*$ at the desired rates. The main challenge here is establishing that such subsets have high probability measure, i.e., $\mathbb{P}\big(\cap_{t>0}\Omega'_t\big) \ge 1 - \delta$ for any δ > 0. We obtain such a result in Theorem 4 in the following. We are now ready to state our main results for D-Krasulina and DM-Krasulina.

## 4.1 Convergence Of D-Krasulina (Algorithm 1)

The first main result of this paper shows that D-Krasulina results in a linear speed-up in convergence rate as a function of the number of processing nodes, i.e., the potential function for D-Krasulina converges to 0 at a rate of O(1/Nt). Since the system receives a total of Nt samples at the end of t iterations of D-Krasulina, this result establishes that D-Krasulina is order-wise near-optimal in terms of sample complexity for the streaming PCA problem. The key to proving this result is characterizing the convergence behavior of D-Krasulina in terms of the variance of the sample covariance matrix $\mathbf{A}_t := \frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}$ that is implicitly computed within D-Krasulina. We denote this variance as $\sigma_N^2$, which has the form

$$\sigma_{N}^{2}:=\mathbb{E}_{\mathcal{P}_{x}}\left\{\left\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\mathrm{T}}-\mathbf{\Sigma}\right\|_{F}^{2}\right\}.\tag{12}$$

It is straightforward to see from Definition 1 and (12) that $\sigma_N^2 = \sigma^2/N$. This reduction in the variance of the sample covariance matrix within D-Krasulina essentially enables the linear speed-up in convergence.
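This variance reduction is easy to verify numerically; the following Monte Carlo sketch (ours, with a made-up diagonal $\mathbf{\Sigma}$) estimates $\sigma_N^2$ for two values of N:

```python
import numpy as np

def cov_variance(N, d=10, trials=2000, rng=np.random.default_rng(4)):
    """Monte Carlo estimate of sigma_N^2 = E || (1/N) sum_i x_i x_i^T - Sigma ||_F^2."""
    Sigma = np.diag(np.linspace(2.0, 0.5, d))
    sq_errs = []
    for _ in range(trials):
        X = rng.standard_normal((N, d)) @ np.sqrt(Sigma)   # samples with covariance Sigma
        A = X.T @ X / N                                    # averaged sample covariance
        sq_errs.append(np.sum((A - Sigma) ** 2))           # squared Frobenius error
    return np.mean(sq_errs)

print(cov_variance(1), cov_variance(10))   # the ratio is close to 10, i.e., sigma^2 / N
```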
In terms of specifics, we have the following convergence result for D-Krasulina.

Theorem 1. *Fix any* δ ∈ (0, 1) *and pick* $c := \frac{c_0}{2(\lambda_1 - \lambda_2)}$ *for any* $c_0 > 2$. *Next, define*

$$L_{1}:=\frac{64edr^{4}\max(1,c^{2})}{\delta^{2}}\ln\frac{4}{\delta},\quad L_{2}:=\frac{512e^{2}d^{2}\sigma_{N}^{2}\max(1,c^{2})}{\delta^{4}}\ln\frac{4}{\delta},\tag{13}$$

*pick any* $L \ge L_1 + L_2$, *and choose the step-size sequence as* $\gamma_t := c/(L+t)$. *Then, as long as Assumptions [A1] and [A2] hold, we have for D-Krasulina that there exists a sequence* $(\Omega'_t)_{t\in\mathbb{Z}_+}$ *of nested sample spaces such that* $\mathbb{P}\big(\cap_{t>0}\Omega'_t\big) \ge 1 - \delta$ *and*

$$\mathbb{E}_{t}\left\{\Psi_{t}\right\}\leq C_{1}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}+C_{2}\Big(\frac{\sigma_{N}^{2}}{t+L+1}\Big),\tag{14}$$

*where* $C_1$ *and* $C_2$ *are constants defined as*

$$C_{1}:=\frac{1}{2}\bigg(\frac{4ed}{\delta^{2}}\bigg)^{\frac{5}{2\ln 2}}e^{2c^{2}\lambda_{1}^{2}/L}\quad\text{and}\quad C_{2}:=\frac{8c^{2}e^{(c_{0}+2c^{2}\lambda_{1}^{2})/L}}{(c_{0}-2)}.$$

Remark 2. *While we can obtain a similar result for the case of* $c_0 \le 2$, *that result does not lead to any convergence speed-up. In particular, the convergence rate in that case becomes* $O(t^{-c_0/2})$, *which matches the one in (Balsubramani et al., 2013).*

**Discussion.** A proof of Theorem 1, which is influenced by the proof technique employed in (Balsubramani et al., 2013), is provided in Section 5. Here, we discuss some of the implications of this result, especially in relation to (Balsubramani et al., 2013) and (Jain et al., 2016).
Discussion. A proof of Theorem 1, which is influenced by the proof technique employed in (Balsubramani et al., 2013), is provided in Section 5. Here, we discuss some of the implications of this result, especially in relation to (Balsubramani et al., 2013) and (Jain et al., 2016). The different problem parameters affecting the performance of stochastic methods for streaming PCA include: (i) the dimensionality of the ambient space, $d$; (ii) the eigengap of the population covariance matrix, $(\lambda_1-\lambda_2)$; (iii) the upper bound on the norm of the received data samples, $r$; and (iv) the variance of the sample covariance matrix, $\sigma^2$ and/or $\sigma_N^2$. Theorem 1 characterizes the dependence of D-Krasulina on all these parameters and significantly improves on the related result provided in (Balsubramani et al., 2013).

First, Theorem 1 establishes that D-Krasulina can achieve the convergence rate $O(\sigma_N^2/t) \equiv O(\sigma^2/Nt)$ with high probability (cf. (14)). This is in stark contrast to the result in (Balsubramani et al., 2013), which is independent of the variance of the sample covariance matrix, thereby only guaranteeing a convergence rate of $O(r^4/t)$ for D-Krasulina and its variants. This ability of variants of Krasulina's method to achieve faster convergence through variance reduction is arguably one of the most important aspects of our analysis.

Second, in comparison with (Balsubramani et al., 2013), Theorem 1 also results in an improved lower bound on the choice of $L$ by splitting it into two quantities, viz., $L_1$ and $L_2$ (cf. (13)). This improved bound allows larger step sizes, which also results in faster convergence. In terms of specifics, $L_1$ in the theorem is on the order of $\Omega(r^4 d/\delta^2)$, which is an improvement over the $\Omega(r^4 d^2/\delta^4)$ bound of (Balsubramani et al., 2013). On the other hand, while $L_2$ has the same dependence on $\delta$ and $d$ as (Balsubramani et al., 2013), it depends on $\sigma_N^2$ instead of $r^4$ and, therefore, it reduces with an increase in $N$.

Third, the improved lower bound on $L$ also allows for an improved dependence on the dimensionality $d$ of the problem. Specifically, for large enough $t$ and $N$, the dependence on $d$ in (14) is due to the higher-order (first) term and is of the order $O\big(d^{\frac{5}{2\ln 2}+\frac{c_0}{2}}\big)$, as opposed to $O\big(d^{\frac{5}{2\ln 2}+c_0}\big)$ for (Balsubramani et al., 2013). It is worth noting here, however, that this is still looser than the result in (Jain et al., 2016), which has only a $\log^2(d)$ dependence on $d$ in the higher-order error terms.

Fourth, in terms of the eigengap, our analysis has the optimal dependence of $1/(\lambda_1-\lambda_2)^2$, which also matches the dependence in (Balsubramani et al., 2013) and (Jain et al., 2016). It is important to note here, however, that knowledge of the eigengap $(\lambda_1-\lambda_2)$ is not necessary to run Oja's rule, Krasulina's method, D-Krasulina, or any of the related stochastic methods in a practical setting. Specifically, it can be seen from Theorem 1 that the eigengap is only needed to set the step size in D-Krasulina for the optimal convergence rate. In practice, however, step sizes of the form $\tilde{c}/t$ work well for D-Krasulina and the related methods, and a simple yet highly effective strategy for setting the step size in these methods is to estimate the parameter $\tilde{c}$ by running multiple instances of the method during a warm-up phase (a sketch of this strategy follows this discussion). Such an approach is akin to approximating several problem-related parameters using a single parameter $\tilde{c}$, and is the one we have followed for the numerical experiments discussed in Section 6.
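One way to instantiate the warm-up strategy just mentioned is sketched below; the selection criterion (largest empirical Rayleigh quotient on the warm-up data, so that $\mathbf{q}^*$ itself is never needed) and the name `run_warmup` are our own choices rather than details taken from the paper. Here `run_warmup(c)` stands for a short run of D-Krasulina or DM-Krasulina with step sizes $c/t$, e.g., the `d_krasulina` sketch above on a few warm-up batches.

```python
import numpy as np

def pick_step_size(candidates, run_warmup, X_warmup):
    # run_warmup(c) -> final iterate of a short run with step sizes c/t;
    # the winner maximizes the empirical Rayleigh quotient on X_warmup,
    # i.e., the variance its direction explains in the warm-up samples.
    def rayleigh(v):
        return (v @ (X_warmup.T @ (X_warmup @ v))) / (len(X_warmup) * (v @ v))
    return max(candidates, key=lambda c: rayleigh(run_warmup(c)))
```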
Finally, we compare the recommended step-size sequence $\gamma_t = c/(L+t)$ in Theorem 1 to the ones in (Balsubramani et al., 2013) and (Jain et al., 2016). Since the step sizes in these two prior works also take the form $\gamma_t = c/(L+t)$, all three works are equivalent to each other in terms of scaling of the step size as a function of $t$. But in terms of the initial step size, and assuming a small enough $\delta$ in Theorem 1, $\gamma_1$ is the largest for (Jain et al., 2016), second-largest for this work, and the smallest for (Balsubramani et al., 2013). In relation to our work, this difference in the initial step size in the case of (Balsubramani et al., 2013) is due to the improved lower bound on $L$ in Theorem 1. In the case of (Jain et al., 2016), this difference is attributable to the fact that the parameter $L$ is independent of $\delta$ in that work. Stated differently, we are able to vary the probability of success $1-\delta$ in this work by making the parameter $L$ be a function of $\delta$, with the caveat being that the initial step size $\gamma_1$ gets smaller as $\delta$ decreases. In contrast, a fixed $L$ in (Jain et al., 2016) can be thought of as one of the reasons the probability of success is fixed at $3/4$ in that work. We conclude by noting that this dependence of the performance of D-Krasulina on different problem parameters is further highlighted through numerical experiments in Section 6.

Remark 3. While Theorem 1 is for (a distributed variant of) Krasulina's method, Oja's rule can also be analyzed using similar techniques; see, e.g., the discussion in (Balsubramani et al., 2013).

Remark 4. Recall from the discussion in Section 1.1 that an iteration of Krasulina's method is similar to that of SGD applied to the optimization problem (5). A natural question to ask then is whether D-Krasulina can be "accelerated" in much the same way SGD can be accelerated by adding a momentum term to its iteration expression. The authors in (De Sa et al., 2018), however, argue that naively applying momentum to Oja's rule or the power iteration, both of which are closely related to Krasulina's method, results in worse performance since it increases the effect of the noise within the iterates. And while the noise within the iterates can be controlled through variance reduction techniques, as done in (De Sa et al., 2018) to accelerate the power iteration for eigenvector computation, such techniques typically require multiple data passes and are therefore not suited for the setting in which data samples continuously stream into the system.

## 4.2 Convergence Of Dm-Krasulina (Algorithm 2)

The convergence analysis of DM-Krasulina follows from slight modifications of the proof of Theorem 1 for D-Krasulina. The final set of results, which covers the two scenarios of zero data loss ($\mu=0$) and some data loss ($\mu>0$) in each iteration, is characterized in terms of the variance of the (mini-batched) sample covariance $\mathbf{A}_t := \frac{1}{B}\sum_{i=1}^{N}\sum_{j=1}^{b}\mathbf{A}_{i,j,t}$ associated with DM-Krasulina. We denote this variance as $\sigma_B^2$, which is given by

$$\sigma_{B}^{2}:=\mathbb{E}_{\mathcal{P}_{x}}\left\{\left\|\frac{1}{B}\sum_{i=1}^{N}\sum_{j=1}^{b}\mathbf{x}_{i,j,t}\mathbf{x}_{i,j,t}^{\mathrm{T}}-\mathbf{\Sigma}\right\|_{F}^{2}\right\}.\tag{15}$$

It is once again straightforward to see that $\sigma_B^2 = \sigma^2/B$. We now split our discussion of the convergence of DM-Krasulina according to the two scenarios discussed in Section 3.2.

## 4.2.1 Scenario 1—Dm-Krasulina With No Data Loss: $N \geq \frac{bR_cR_s}{R_p(bR_c-R_s)} \Longrightarrow \mu = 0$

Analytically, this scenario is similar to D-Krasulina, with the only difference being that we are now incorporating an average of $B$ sample covariances $\mathbf{x}_{i,j,t}\mathbf{x}_{i,j,t}^{\mathrm{T}}$ in the estimate in each iteration (as opposed to $N$ sample covariances for D-Krasulina). We therefore have the following generalization of Theorem 1 in this scenario.
Theorem 2. Let the parameters and constants be as specified in Theorem 1, except that the parameter $L_2$ is now defined as $L_2 := \frac{512e^2d^2\sigma_B^2\max(1,c^2)}{\delta^4}\ln\frac{4}{\delta}$. Then, as long as Assumptions [A1] and [A2] hold, we have for DM-Krasulina that $\mathbb{P}\left(\cap_{t>0}\Omega_t'\right)\ge 1-\delta$ and

$$\mathbb{E}_{t}\left\{\Psi_{t}\right\}\leq C_{1}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}+C_{2}\Big(\frac{\sigma_{B}^{2}}{t+L+1}\Big).\tag{16}$$

The proof of this theorem can be obtained from that of Theorem 1 by replacing $1/N$ and $\sigma_N^2$ in there with $1/B$ and $\sigma_B^2$, respectively. Similar to the case of D-Krasulina, this theorem establishes that DM-Krasulina can also achieve linear speed-up in convergence as a function of the network-wide mini-batch size $B$ with very high probability, i.e., $\mathbb{E}_t\{\Psi_t\} = O(\sigma_B^2/t) \equiv O(\sigma^2/Bt)$.

Our discussions of D-Krasulina and DM-Krasulina have so far been focused on the infinite-sample regime, in which the number of algorithmic iterations $t$ for both algorithms can grow unbounded. We now focus on the implications of our results for the finite-sample regime, in which a final estimate is produced at the end of arrival of a total of $T \gg 1$ samples.⁶ This finite-sample regime leads to an interesting interplay between $N$ (resp., $B$) and the total number of samples $T$ for linear speed-up of D-Krasulina (resp., DM-Krasulina). We describe this interplay in the following for DM-Krasulina; the corresponding result for D-Krasulina follows by simply replacing $B$ with $N$ in this result.

Corollary 1. Let the parameters and constants be as specified in Theorem 2. Next, pick parameters $(L_1', L_2')$ such that $L_1' \ge L_1$ and $L_2' \ge L_2/\sigma_B^2$, and define the final number of algorithmic iterations for DM-Krasulina as $T_B := T/B$. Then, as long as Assumptions [A1] and [A2] hold and the network-wide mini-batch size satisfies $B \le T^{1-\frac{2}{c_0}}$, we have that $\mathbb{P}\left(\cap_{0<t\le T_B}\Omega_t'\right)\ge 1-\delta$ and

$$\mathbb{E}_{T_{B}}\left\{\Psi_{T_{B}}\right\}\leq c_{0}C_{1}\frac{{L_{1}'}^{c_{0}/2}}{T}+c_{0}C_{1}\left(\frac{\sigma^{2}L_{2}'}{T}\right)^{c_{0}/2}+\frac{C_{2}\sigma^{2}}{T}.\tag{17}$$

Proof. Substituting $t = T_B$ in (16) and using simple upper bounds yields

$$\mathbb{E}_{T_{B}}\left\{\Psi_{T_{B}}\right\}\leq C_{1}\Big(\frac{L+1}{L+T_{B}}\Big)^{\frac{c_{0}}{2}}+C_{2}\Big(\frac{\sigma_{B}^{2}}{T_{B}}\Big)\leq2C_{1}\Big(\frac{L}{T_{B}}\Big)^{\frac{c_{0}}{2}}+C_{2}\Big(\frac{\sigma_{B}^{2}}{T_{B}}\Big).$$

Next, substituting $L = L_1' + \sigma_B^2 L_2'$ in this expression gives us

$$\mathbb{E}_{T_{B}}\left\{\Psi_{T_{B}}\right\}\leq c_{0}C_{1}\Big(\frac{L_{1}'}{T_{B}}\Big)^{\frac{c_{0}}{2}}+c_{0}C_{1}\Big(\frac{\sigma_{B}^{2}L_{2}'}{T_{B}}\Big)^{\frac{c_{0}}{2}}+C_{2}\Big(\frac{\sigma_{B}^{2}}{T_{B}}\Big).\tag{18}$$

Since $\sigma_B^2 = \sigma^2/B$ and $T_B = T/B$, (18) reduces to the following expression:

$$\mathbb{E}_{T_{B}}\left\{\Psi_{T_{B}}\right\}\leq c_{0}C_{1}\Bigg(\frac{B L_{1}'}{T}\Bigg)^{c_{0}/2}+c_{0}C_{1}\Bigg(\frac{\sigma^{2}L_{2}'}{T}\Bigg)^{c_{0}/2}+\frac{C_{2}\sigma^{2}}{T}.$$

The proof now follows from the assumption that $B \le T^{1-\frac{2}{c_0}}$.

⁶An implicit assumption here is that $T$ is large enough that it precludes the use of a batch PCA algorithm.

Discussion. Corollary 1 dictates that linear convergence speed-up for DM-Krasulina (resp., D-Krasulina) occurs in the finite-sample regime provided the network-wide mini-batch size $B$ (resp., number of processing nodes $N$) scales sublinearly with the total number of samples $T$. In particular, the proposed algorithms achieve the best (order-wise) convergence rate of $O(1/T)$ for appropriate choices of system parameters. We also corroborate this theoretical finding with numerical experiments involving synthetic and real-world data in Section 6.
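As a quick numerical reading of Corollary 1 (with illustrative numbers of our own choosing): with $T = 10^6$ samples and, say, $c_0 = 4$, the condition $B \le T^{1-2/c_0}$ admits network-wide mini-batches of size up to $10^3$.

```python
# Largest admissible network-wide mini-batch size for the O(1/T) rate of
# Corollary 1, for illustrative values of T and c0.
T, c0 = 10**6, 4.0
B_max = T ** (1.0 - 2.0 / c0)
print(B_max)  # 1000.0 -- consistent with Figure 2(a), where B = 2000 degrades
```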
## 4.2.2 Scenario 2—Dm-Krasulina With Data Loss: $N < \frac{bR_cR_s}{R_p(bR_c-R_s)} \Longrightarrow \mu > 0$

The statement of Theorem 2 for DM-Krasulina in the lossless setting immediately carries over to the resource-constrained setting that causes loss of $\mu\,(>0)$ samples per iteration. The implication of this result is that DM-Krasulina can achieve a convergence rate of $O(1/Bt)$ in the infinite-sample regime after receiving a total of $(B+\mu)t$ samples. Therefore, it trivially follows that DM-Krasulina can achieve an order-wise near-optimal convergence rate in the infinite-sample regime as long as $\mu = O(B)$. We now turn our attention to understanding the interplay between $\mu$, $B$, and the total number of samples $T$ arriving at the system for the resource-constrained finite-sample setting for DM-Krasulina. To this end, we have the following generalization of Corollary 1.

Corollary 2. Let the parameters and constants be as specified in Corollary 1, and define the final number of algorithmic iterations for DM-Krasulina as $T_B^\mu := T/(B+\mu)$. Then, as long as Assumptions [A1] and [A2] hold, we have that $\mathbb{P}\left(\cap_{t>0}\Omega_t'\right)\ge 1-\delta$ and

$$\mathbb{E}_{T_{B}^{\mu}}\left\{\Psi_{T_{B}^{\mu}}\right\}\leq c_{0}C_{1}\bigg(\frac{(B+\mu)L_{1}'}{T}\bigg)^{c_{0}/2}+c_{0}C_{1}\bigg(\frac{(B+\mu)\sigma^{2}L_{2}'}{BT}\bigg)^{c_{0}/2}+\frac{C_{2}\sigma^{2}(B+\mu)}{BT}.\tag{19}$$

Proof. The proof of this corollary follows from replacing $T_B$ with $T_B^\mu$ in (18) and subsequently substituting the values of $T_B^\mu$ and $\sigma_B^2$ in there.

Discussion. Recall that since the distributed framework receives a total of $T$ samples, it is desirable to achieve a convergence rate of $O(1/T)$. It can be seen from Corollary 2 that the first and the third terms in (19) are the ones that dictate whether DM-Krasulina can achieve the (order-wise) optimal rate of $O(1/T)$. To this end, the first term in (19) imposes the condition $(B+\mu) \le T^{1-2/c_0}$, i.e., the total number of samples received at the system (both processed and discarded) per iteration must scale sublinearly with the final number of samples $T$. In addition, the third term in (19) imposes the condition $\mu = O(B)$, i.e., the number of samples discarded by the system in each iteration must scale no faster than the number of samples processed by the system in each iteration. Once these two conditions are satisfied, Corollary 2 guarantees near-optimal convergence for DM-Krasulina.
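The two conditions just identified are easy to check numerically; the snippet below uses illustrative values of our own choosing rather than parameters from the paper.

```python
# Corollary 2's requirements for the order-wise optimal O(1/T) rate
# (illustrative numbers only): per-iteration intake (B + mu) must scale
# sublinearly with T, and the discarded samples must satisfy mu = O(B).
T, c0, B, mu = 10**6, 4.0, 100, 25
print((B + mu) <= T ** (1 - 2 / c0))  # True: B + mu = 125 <= T**0.5 = 1000
print(mu <= B)                        # True: loss is dominated by processing
```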
## 5 Proof Of The Main Result

The main result of this paper is given by Theorem 1, which can then be applied to any algorithm that (implicitly or explicitly) involves an iteration of the form (8). We develop a proof of this result in this section, which—similar to the approach taken in (Balsubramani et al., 2013) for the analysis of Krasulina's method—consists of characterizing the behavior of D-Krasulina in three different algorithmic epochs. The result concerning the *initial epoch* is described in terms of Theorem 3 in the following, the behavior of the intermediate epoch, which comprises multiple *sub-epochs*, is described through Theorem 4, while the behavior of D-Krasulina in the *final epoch* is captured through a formal proof of Theorem 1 at the end of this section.

Before proceeding, recall that our result requires the existence of a sequence $(\Omega_t')_{t\in\mathbb{Z}_+}$ of nested sample spaces that are defined in terms of a sequence of pairs $(t_0\equiv 0,\epsilon_0),(t_1,\epsilon_1),\ldots,(t_J,\epsilon_J)$. Our analysis of the initial epoch involves showing that for the step size $\gamma_t$ chosen as in Theorem 1, the error for all $t\ge 0$ will be less than $(1-\epsilon_0)$ with high probability for some constant $\epsilon_0$. We then define the remaining $\epsilon_j$'s as $\epsilon_j = 2^j\epsilon_0$, $j=1,\ldots,J$, where $J$ is defined as the smallest integer satisfying $\epsilon_J \ge 1/2$. Our analysis in the intermediate epoch then focuses on establishing lower bounds on the number of iterations $t_j$ for which D-Krasulina is guaranteed to have error less than $1-\epsilon_j$ with high probability. Stated differently, the intermediate epoch characterizes the sub-epochs $\{1+t_{j-1},\ldots,t_j\}$ during which the error is guaranteed to decrease from $(1-\epsilon_{j-1})$ to $(1-\epsilon_j)$ with high probability.

## 5.1 Initial Epoch

Our goal for the initial epoch is to show that if we pick the step size appropriately, i.e., we set $L$ to be large enough (cf. (13)), then the error $\Psi_t$ will not exceed a certain value with high probability. This is formally stated in the following result.

Theorem 3. Fix any $\delta\in(0,1)$, define $\epsilon\in(0,1)$ as $\epsilon := \delta^2/8e$, and let

$$L\geq\frac{8dr^{4}\max(1,c^{2})}{\epsilon}\ln\frac{4}{\delta}+\frac{8d^{2}\sigma_{N}^{2}\max(1,c^{2})}{\epsilon^{2}}\ln\frac{4}{\delta}.\tag{20}$$

Then, if Assumptions [A1] and [A2] hold and we choose the step size to be $\gamma_t = c/(L+t)$, we have

$$\mathbb{P}\Bigl(\sup_{t\geq0}\Psi_{t}\geq1-\frac{\epsilon}{d}\Bigr)\leq\sqrt{2e\epsilon}\equiv\frac{\delta}{2}.\tag{21}$$

In order to prove Theorem 3 we need several helping lemmas that are stated in the following. We only provide lemma statements in this section and move the proofs to Appendix A. We start by writing the recursion of the error metric $\Psi_t$ in the following lemma.

Lemma 1. Defining a scalar random variable

$$z_{t}:=2\gamma_{t}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})}{\|\mathbf{v}_{t-1}\|_{2}^{2}},\tag{22}$$

we get the following recursion:

* (i) $\Psi_{t}\leq\Psi_{t-1}+4\gamma_{t}^{2}\Big(\Big\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\Big\|_{F}^{2}+\lambda_{1}^{2}\Psi_{t-1}\Big)-z_{t}$, and
* (ii) $\Psi_{t}\leq\Psi_{t-1}+\gamma_{t}^{2}r^{4}-z_{t}$.

Proof. See Appendix A.1.

Part (i) of this lemma will be used to analyze the algorithm in the final epoch for the proof of Theorem 1, while Part (ii) will be used to prove Theorem 3 for this initial epoch and Theorem 4 for the intermediate phase. Next we will bound the moment generating function of $\Psi_t$ conditioned on $\mathcal{F}_{t-1}$ (Definition 3). For this, we need an upper bound on the conditional variance of $z_t$, which is given below.

Lemma 2. The conditional variance of the random variable $z_t$ satisfies

$$\mathbb{E}\{(z_{t}-\mathbb{E}\{z_{t}\})^{2}|\mathcal{F}_{t-1}\}\leq16\gamma_{t}^{2}\sigma_{N}^{2}.\tag{23}$$

Proof. See Appendix A.2.

Using this upper bound on the conditional variance of $z_t$ we can now upper bound the conditional moment generating function of $\Psi_t$. In order to simplify notation, much of our discussion in the following will revolve around the moment generating function with parameter $s \in \mathbb{S} := \left[d/4\epsilon,\ (2/\epsilon_0)\ln(4/\delta)\right]$. Note, however, that similar results can be derived for any positive-valued parameter $s\in\mathbb{R}$.
Lemma 3. The conditional moment generating function of $\Psi_t$ for $s\in\mathbb{S}$ is upper bounded as

$$\mathbb{E}\{\exp(s\Psi_{t})|\mathcal{F}_{t-1}\}\leq\exp\left(s\Psi_{t-1}-s\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}+s\gamma_{t}^{2}r^{4}+s^{2}\gamma_{t}^{2}\sigma_{N}^{2}\right).\tag{24}$$

Proof. See Appendix A.3.

Note that this result is similar to (Balsubramani et al., 2013, Lemma 2.3), with the difference being that the last term here involves the sample variance, $\sigma_N^2$, as opposed to the upper bound $\|\mathbf{x}_{t'}\|_2\le r$ on the input in (Balsubramani et al., 2013, Lemma 2.3). This difference prompts changes in the next steps of the analysis of D-Krasulina and it also enables us to characterize improvements in the convergence rate of Krasulina's method using iterations of the form (8). We are now ready to prove the statement of Theorem 3, which is based on Lemmas 1 and 3.

Proof of Theorem 3. We start by constructing a supermartingale from the sequence of errors $\Psi_t$. First, restricting ourselves to $s\in\mathbb{S}$, we define the quantities

$$\beta_t:=\gamma_t^2 r^4,\quad \zeta_t:=s\gamma_t^2\sigma_N^2,\quad \tau_t:=\sum_{l\geq t}\left(\beta_l+\zeta_l\right),\quad\text{and}\quad M_t:=\exp\left(s\Psi_t+s\tau_t\right).$$

Now, taking expectation of $M_t$ conditioned on the filtration $\mathcal{F}_{t-1}$ we get

$$\mathbb{E}\{M_{t}|\mathcal{F}_{t-1}\}=\mathbb{E}\{\exp\left(s\Psi_{t}\right)|\mathcal{F}_{t-1}\}\exp\left(s\tau_{t}\right)\stackrel{\text{(a)}}{\leq}\exp\left(s\Psi_{t-1}+s\beta_{t}+s\zeta_{t}+s\tau_{t}\right)=\exp\left(s\Psi_{t-1}+s\tau_{t-1}\right)=M_{t-1}.$$

Here, (a) is due to Lemma 3 and the fact that $\mathbb{E}\{z_t|\mathcal{F}_{t-1}\}\ge 0$ (Balsubramani et al., 2013, Theorem 2.1). These calculations show that the sequence $\{M_t\}$ forms a supermartingale. Using the sequence $M_t$, we can now use Doob's martingale inequality (Durrett, 2010, pg. 231) to show that $\Psi_t$ will be bounded away from 1 with high probability. Specifically, for any $\Delta\in(0,1)$, we have

$$\mathbb{P}\Big(\sup_{t\geq0}\Psi_{t}\geq\Delta\Big)\leq\mathbb{P}\Big(\sup_{t\geq0}\Psi_{t}+\tau_{t}\geq\Delta\Big)=\mathbb{P}\Big(\sup_{t\geq0}\exp\left(s\Psi_{t}+s\tau_{t}\right)\geq e^{s\Delta}\Big)=\mathbb{P}\Big(\sup_{t\geq0}M_{t}\geq e^{s\Delta}\Big)\leq\frac{\mathbb{E}\{M_{0}\}}{e^{s\Delta}}=\exp\left(-s(\Delta-\tau_{0})\right)\mathbb{E}\{e^{s\Psi_{0}}\}.$$

Substituting $\Delta = 1-\epsilon/d$ and using (Balsubramani et al., 2013, Lemma 2.5) to bound $\mathbb{E}\{e^{s\Psi_0}\}$, we get

$$\mathbb{P}\Big(\sup_{t\geq0}\Psi_{t}\geq1-\frac{\epsilon}{d}\Big)\leq\exp{\big(-s(1-(\epsilon/d)-\tau_{0})\big)}\,e^{s}\sqrt{\frac{d}{2s}}.\tag{25}$$

Next we need to bound $\sum_{l>0}\beta_l$ and $\sum_{l>0}\zeta_l$. First, we get

$$\sum_{l>0}\beta_{l}=\sum_{l>0}\gamma_{l}^{2}r^{4}=r^{4}\sum_{l>0}\gamma_{l}^{2}=r^{4}\sum_{l>0}\frac{c^{2}}{(l+L)^{2}}\leq\frac{r^{4}c^{2}}{L}.\tag{26}$$

Using a similar procedure, we get

$$\sum_{l>0}\zeta_{l}\leq\frac{s\sigma_{N}^{2}c^{2}}{L}.\tag{27}$$

Combining (26) and (27), along with the definition of $\tau_t$ at the beginning, we get

$$\tau_{0}\leq\frac{c^{2}}{L}\left(r^{4}+s\sigma_{N}^{2}\right).\tag{28}$$

Now using the lower bound on $L$, we get $\tau_0\le\epsilon/d$ for $s=d/4\epsilon$, as shown in Proposition 4 in Appendix D.
Substituting this in (25), we get

$$\mathbb{P}\Big(\sup_{t\geq0}\Psi_t\geq1-\frac{\epsilon}{d}\Big)\leq\exp\big(-s(1-\epsilon/d-\epsilon/d)\big)\,e^s\sqrt{\frac{d}{2s}}=\exp\big(2s\epsilon/d\big)\sqrt{\frac{d}{2s}}.$$

Finally, substituting $s=d/4\epsilon$, we get $\mathbb{P}\Big(\sup_{t\geq0}\Psi_t\geq1-\frac{\epsilon}{d}\Big)\leq\sqrt{2e\epsilon}$.

## 5.2 Intermediate Epoch

In Theorem 3 we have shown that if we choose $L$ such that it satisfies the lower bound given in Theorem 3, then the error $\Psi_t$ exceeds $1-\epsilon_0$ (here, $\epsilon_0 = \delta^2/8ed$) with probability at most $\delta/2$. Next, our aim is to show that if we perform enough iterations $t_J$ of D-Krasulina, then for any $t\ge t_J$ the error in the iterate will be bounded by $\Psi_t\le 1/2$ with high probability. In order to prove this, we divide our analysis into different sub-epochs that are indexed by $j\in\{1,\ldots,J\}$. Starting from $1-\epsilon_0$, we provide a lower bound on the number of iterations $t_j$ such that we progressively increase $\epsilon_j$ in each sub-epoch until we reach $\epsilon_J$.

Theorem 4. Fix any $\delta\in(0,1)$ and pick $c := c_0/2(\lambda_1-\lambda_2)$ for any $c_0>2$. Next, let the number of processing nodes $N>1$, the parameter $L \geq \frac{8r^4\max(1,c^2)}{\epsilon_0}\ln\frac{4}{\delta}+\frac{8\sigma_N^2\max(1,c^2)}{\epsilon_0^2}\ln\frac{4}{\delta}$, and the step size $\gamma_t := c/(L+t)$. Finally, select a schedule $(0,\epsilon_0),(t_1,\epsilon_1),\ldots,(t_J,\epsilon_J)$ such that the following conditions are satisfied:

**[C1]** $\epsilon_{0}=\frac{\delta^{2}}{8ed}$, $\frac{3}{2}\epsilon_{j}\leq\epsilon_{j+1}\leq2\epsilon_{j}$ for $0\leq j<J$, and $\epsilon_{J-1}\leq\frac{1}{4}$, and

**[C2]** $\left(t_{j+1}+L+1\right)\geq e^{5/c_{0}}\left(t_{j}+L+1\right)$ for $0\leq j<J$.

Then $\mathbb{P}\left(\cap_{t>0}\Omega_{t}'\right)\geq1-\delta$.

In order to prove this theorem, we need Lemmas 4–7, which are stated as follows.

Lemma 4. For $t>t_j$, the moment generating function of $\Psi_t$ for $s\in\mathbb{S}$ conditioned on $\Omega_t'$ satisfies

$$\mathbb{E}_{t}\Big\{e^{s\Psi_{t}}\Big\}\leq\exp\Bigg(s\Bigg(\Psi_{t-1}\Big(1-\frac{c_{0}\epsilon_{j}}{t+L}\Big)+\frac{c^{2}r^{4}}{(t+L)^{2}}+\frac{sc^{2}\sigma_{N}^{2}}{(t+L)^{2}}\Bigg)\Bigg).$$

Proof. See Appendix B.1.

Lemma 5. For $t>t_j$ and $s\in\mathbb{S}$, we have

$$\mathbb{E}_{t}\{e^{s\Psi_{t}}\}\leq\exp\Bigg(s(1-\epsilon_{j})\Bigg(\frac{t_{j}+L+1}{t+L+1}\Bigg)^{c_{0}\epsilon_{j}}+\Bigg(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\Bigg)\Bigg(\frac{1}{t_{j}+L}-\frac{1}{t+L}\Bigg)\Bigg).\tag{29}$$

Proof. See Appendix B.2.

Using Lemma 5, our next result deals with a specific value of $t$, namely, $t=t_{j+1}$.

Lemma 6. Suppose Conditions [C1]–[C2] are satisfied. Then for $0\leq j<J$ and $s\in\mathbb{S}$, we get

$$\mathbb{E}_{t_{j+1}}\big\{e^{s\Psi_{t_{j+1}}}\big\}\leq\exp\left(s(1-\epsilon_{j+1})-s\epsilon_{j}+\Big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\Big)\Big(\frac{1}{t_{j}+L}-\frac{1}{t_{j+1}+L}\Big)\right).$$

Proof. See Appendix B.3.

Lemma 7. Suppose Conditions [C1]–[C2] are satisfied. Then picking any $0<\delta<1$, we have

$$\sum_{j=1}^{J}\mathbb{P}_{t_{j}}\left(\sup_{t\geq t_{j}}\Psi_{t}>1-\epsilon_{j}\right)\leq\frac{\delta}{2}.$$

Proof. See Appendix B.4.

Proof. (Proof of Theorem 4) Using the results of Lemma 7 and Theorem 3 and applying the union bound, we get the statement of Theorem 4.
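For readers who want a concrete schedule, the following sketch (our own construction) builds a sequence of pairs satisfying Conditions [C1]–[C2] by exact doubling of the $\epsilon_j$'s, which is also the choice used in the proof of Theorem 1 below.

```python
import math

def epoch_schedule(d, delta, c0, L):
    # [C1]: eps_0 = delta^2/(8 e d) and eps_{j+1} = 2*eps_j, which satisfies
    # 3/2*eps_j <= eps_{j+1} <= 2*eps_j; we stop at the smallest J with
    # eps_J >= 1/2 (as in the proof of Theorem 1, this implicitly assumes
    # 1/(2*eps_0) is a power of two so that eps_J = 1/2 exactly).
    # [C2]: t_{j+1} + L + 1 >= e^{5/c0} * (t_j + L + 1).
    eps = delta ** 2 / (8.0 * math.e * d)
    t = 0
    schedule = [(t, eps)]
    while eps < 0.5:
        eps *= 2.0
        t = math.ceil(math.exp(5.0 / c0) * (t + L + 1)) - L - 1
        schedule.append((t, eps))
    return schedule
```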
## 5.3 Final Epoch

Now that we have shown that $\Psi_t \le 1/2$ with probability $1-\delta$ for all $t\ge t_J$, we characterize in the final epoch how $\Psi_t$ decreases further as a function of algorithmic iterations. The following result captures the rate at which $\Psi_t$ decreases during this final epoch.

Lemma 8. For any $t>t_J$ and $c := c_0/2(\lambda_1-\lambda_2)$, the (conditional) expected error $\Psi_t$ satisfies

$$\mathbb{E}_{t}\{\Psi_{t}\}\leq\left(1+\frac{c_{0}^{2}\lambda_{1}^{2}}{2(t+L)^{2}(\lambda_{1}-\lambda_{2})^{2}}-\frac{c_{0}}{2(t+L)}\right)\mathbb{E}_{t-1}\{\Psi_{t-1}\}+\frac{4c^{2}\sigma_{N}^{2}}{(t+L)^{2}}.$$

Proof. See Appendix C.

We are now ready to prove our main result, which is given by Theorem 1.

Proof. (Proof of Theorem 1) Recall the definitions of the sub-epochs corresponding to the pairs $(t_j,\epsilon_j)$'s that satisfy the two conditions in Theorem 4. Following the same procedure as in the proof of (Balsubramani et al., 2013, Theorem 1.1), notice that $J=\log_2\big(1/(2\epsilon_0)\big)$ (since $\epsilon_J = 2\epsilon_{J-1} = \cdots = 2^J\epsilon_0 \Rightarrow 2^J = \epsilon_J/\epsilon_0 = 1/2\epsilon_0$) and therefore Condition [C2] implies

$$t_{J}+L+1=(L+1)\exp\left(\frac{5J}{c_{0}}\right)=(L+1)\Big(\frac{1}{2\epsilon_{0}}\Big)^{5/(c_{0}\ln2)}=(L+1)\Big(\frac{4ed}{\delta^{2}}\Big)^{5/(c_{0}\ln2)}.\tag{30}$$

Defining $a_1 := c_0^2\lambda_1^2/2(\lambda_1-\lambda_2)^2$, $a_2 := c_0/2$, $b := 4c^2\sigma_N^2$, and using Lemma 8 for $t>t_J$, we have

$$\mathbb{E}_{t}\{\Psi_{t}\}\leq\Big(1+\frac{a_{1}}{(t+L)^{2}}-\frac{a_{2}}{t+L}\Big)\mathbb{E}_{t-1}\{\Psi_{t-1}\}+\frac{b}{(t+L)^{2}}.$$

Now using Proposition 1 from Appendix C with $c_0>2$, we get

$$\mathbb{E}_{t}\{\Psi_{t}\}\leq\Big(\frac{t_{J}+L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}\exp\Big(\frac{a_{1}}{t_{J}+L+1}\Big)\mathbb{E}_{t_{J}}\{\Psi_{t_{J}}\}+\frac{b}{a_{2}-1}\Big(1+\frac{1}{t_{J}+L+1}\Big)^{2}\exp\Big(\frac{a_{1}}{t_{J}+L+1}\Big)\frac{1}{t+L+1}$$
$$\stackrel{\text{(a)}}{\leq}\frac{1}{2}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}\Big(\frac{4ed}{\delta^{2}}\Big)^{\frac{5a_{2}}{c_{0}\ln2}}\exp\Big(\frac{a_{1}}{t_{J}+L+1}\Big)+\frac{b}{a_{2}-1}\exp\Big(\frac{2}{t_{J}+L+1}\Big)\exp\Big(\frac{a_{1}}{t_{J}+L+1}\Big)\frac{1}{t+L+1}$$
$$=\frac{1}{2}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}\Big(\frac{4ed}{\delta^{2}}\Big)^{\frac{5}{2\ln2}}\exp\Bigg(\frac{a_{1}}{(L+1)(4ed/\delta^{2})^{5/(c_{0}\ln2)}}\Bigg)+\frac{8c^{2}\sigma_{N}^{2}}{c_{0}-2}\exp\Bigg(\frac{2+a_{1}}{(L+1)(4ed/\delta^{2})^{5/(c_{0}\ln2)}}\Bigg)\frac{1}{t+L+1}.$$

Here, the inequality in (a) is due to (30), the fact that $\mathbb{E}_{t_J}\{\Psi_{t_J}\}\le 1/2$, and the fact that $(1+x)^a\le\exp(ax)$ for $x<1$. In addition, since $(4ed/\delta^{2})^{5/(c_{0}\ln2)}\geq1$, we get

$$\mathbb{E}_{t}\{\Psi_{t}\}\leq\frac{1}{2}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}\Big(\frac{4ed}{\delta^{2}}\Big)^{\frac{5}{2\ln2}}e^{a_{1}/L}+\frac{8c^{2}\sigma_{N}^{2}\,e^{(a_{1}+2)/L}}{c_{0}-2}\,\frac{1}{t+L+1}=C_{1}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}+C_{2}\,\frac{\sigma_{N}^{2}}{t+L+1}.\tag{31}$$

This completes the proof of the theorem.

Figure 2: Convergence behavior of DM-Krasulina for the case of synthetic data under two scenarios: (a) no data loss ($\mu = 0$), showing the impact of the mini-batch size on the convergence rate in the resourceful regime (the $B = 1$ plot is effectively Krasulina's method); and (b) loss of $\mu > 0$ samples per algorithmic iteration in a resource-constrained regime (i.e., $N < \frac{bR_cR_s}{R_p(bR_c-R_s)}$), with $(N, B) = (10, 100)$.

## 6 Numerical Results

In this section, we utilize numerical experiments to validate the theoretical findings of this work in terms of the ability of implicit/explicit mini-batched variants of the original Krasulina's method (Krasulina, 1969) to estimate the top eigenvector of a covariance matrix from (fast) streaming data. Instead of repeating the same set of experiments for the original Krasulina's method, D-Krasulina, and DM-Krasulina, we present results that are parameterized by the network-wide mini-batch size $B \in \{1\}\cup\{bN : b\in\mathbb{Z}_+\}$ that appears in DM-Krasulina. This is because $B = 1$ trivially corresponds to the original Krasulina's iterations, while $B = N$ corresponds to iterations that characterize D-Krasulina.
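A minimal serial harness for this single parameterization might look as follows; it is a sketch under the same iterate-form assumptions as the D-Krasulina sketch in Section 4.1, with the pooling across nodes simulated by slicing a sample array into network-wide mini-batches of size $B$.

```python
import numpy as np

def dm_krasulina(X, B, c, L, v0):
    # Serially simulates DM-Krasulina on the (T, d) sample array X using
    # T/B algorithmic iterations; B = 1 recovers Krasulina's original
    # method and B = N recovers D-Krasulina. The experiments in this
    # section use gamma_t = c/t, i.e., L = 0.
    v = v0.copy()
    for t in range(1, len(X) // B + 1):
        batch = X[(t - 1) * B : t * B]
        Av = batch.T @ (batch @ v) / B
        v = v + c / (L + t) * (Av - (v @ Av) / (v @ v) * v)
    return v / np.linalg.norm(v)
```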
Our goals for the numerical experiments are threefold: (i) showing the impact of (implicit/explicit) mini-batching on the convergence rate of DM-Krasulina; (ii) establishing robustness of DM-Krasulina against the loss of $\mu > 0$ samples per iteration for the case when $N < \frac{bR_cR_s}{R_p(bR_c-R_s)}$; and (iii) experimental validation of the scaling of the convergence rate in terms of the problem parameters as predicted by our theoretical findings, namely, the eigengap $(\lambda_1-\lambda_2)$, the dimensionality $d$, and the upper bound on the input samples ($\|\mathbf{x}_{t'}\|_2\le r$). In the following, we report results of experiments on both synthetic and real-world data to highlight these points. Since the main purpose is to corroborate the scaling behaviors within the main results—and not to investigate additional system-related issues concerned with large-scale implementations—the real-world datasets are chosen to facilitate their processing on low-cost compute machines.

## 6.1 Experiments On Synthetic Data

In the following experiments we generate $T = 10^6$ samples from some probability distribution (specified for each experiment later) and for each experiment we perform 200 Monte-Carlo trials. In all of the following experiments we use a step size of the form $\gamma_t = c/t$. We performed experiments with multiple values of $c$, and we report the results for the value of $c$ that achieves the best convergence rate. Further details about each experiment are provided in the following sections.

## 6.1.1 Impact Of Mini-Batch Size On The Performance Of Dm-Krasulina

For a covariance matrix $\mathbf{\Sigma}\in\mathbb{R}^{5\times 5}$ with $\lambda_1 = 1$ and eigengap $\lambda_1-\lambda_2 = 0.2$, we generate $T = 10^6$ samples from the $\mathcal{N}(\mathbf{0}, \mathbf{\Sigma})$ distribution. The first set of experiments here deals with the resourceful regime, i.e., $N \ge \frac{bR_cR_s}{R_p(bR_c-R_s)}$, with mini-batches of sizes $B \in \{1, 10, 100, 500, 1000, 2000\}$. Note that these values of $B$ can be factored into any positive integers $b$ and $N$ as long as the condition $N \ge \frac{bR_cR_s}{R_p(bR_c-R_s)}$ that is governed by the application scenario and the physical system is satisfied. It is, therefore, unnecessary to specify $b$ and $N$ for these experiments, whose results are shown in Figure 2(a). These results are obtained for the step-size parameter $c \in \{70, 80, 80, 90, 110, 100\}$, which are the values of $c$ resulting in the best convergence rate. As predicted by Corollary 1, we can see that after $T/B$ iterations of DM-Krasulina, the error $\Psi_{T/B}$ is on the order of $O(1/T)$ for $B \in \{1, 10, 100, 500, 1000\}$, while for $B = 2000$ the error $\Psi_{T/B}$ is no longer optimal.

Next, we demonstrate the performance of DM-Krasulina for resource-constrained settings, i.e., $N < \frac{bR_sR_c}{R_p(bR_c-R_s)}$, which causes the algorithm to discard $\mu := \frac{bR_s}{R_p} + \frac{NR_s}{R_c} - B$ samples per iteration. Using the same data generation setup as before, we run DM-Krasulina for a network of 10 nodes ($N = 10$) with a network-wide mini-batch of size $B = 100$ (i.e., $b = 10$). We consider different mismatch factors between streaming, processing, and communication rates in this experiment, which result in the number of discarded samples being $\mu \in \{0, 10, 100, 200\}$. The results are plotted in Figure 2(b), which shows that the error $\Psi_{T/(B+\mu)}$ for $\mu = 10$ is comparable to that for $\mu = 0$, but the error for $\mu = 200$ is an order of magnitude worse than the nominal error.
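The reported values of $\mu$ follow from the rate bookkeeping of Section 3.2 (not reproduced here); under our reading of $R_s$, $R_p$, and $R_c$ as the streaming, per-node processing, and communication rates, the per-iteration loss can be computed as in the sketch below, with the rate values chosen purely for illustration.

```python
def discarded_per_iteration(Rs, Rp, Rc, b, N):
    # mu = b*Rs/Rp + N*Rs/Rc - B samples are lost per iteration whenever
    # N < b*Rc*Rs / (Rp*(b*Rc - Rs)); otherwise the system keeps up (mu = 0).
    B = b * N
    mu = b * Rs / Rp + N * Rs / Rc - B
    return max(0.0, mu)

# Illustrative rates only (not taken from the paper): here mu = 125.
print(discarded_per_iteration(Rs=1000.0, Rp=50.0, Rc=400.0, b=10, N=10))
```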
Figure 3: Understanding the impact of (a) the eigengap $(\lambda_1-\lambda_2)$ and (b) the dimensionality $d$ on the convergence behavior of DM-Krasulina, corresponding to $B = 1000$ and $\mu = 0$.

## 6.1.2 Impact Of The Eigengap On The Performance Of Dm-Krasulina

For this set of experiments, we again generate data in $\mathbb{R}^5$ from a normal distribution $\mathcal{N}(\mathbf{0}, \mathbf{\Sigma})$, where the covariance matrix $\mathbf{\Sigma}$ has the largest eigenvalue $\lambda_1 = 1$. We then vary the remaining eigenvalues to ensure an eigengap that takes values from the set $\{0.1, 0.2, 0.3, 0.4, 0.5\}$. The corresponding values of $c$ that give the best convergence rate for each unique eigengap satisfy $c \in \{180, 110, 90, 70, 60\}$. The final results for these experiments are plotted in Figure 3(a) for the case of $B = 1000$ and $\mu = 0$. These results establish that the final gap in error after observing $T = 10^6$ data samples is indeed on the order of $O(1/(\lambda_1-\lambda_2)^2)$, as suggested by the theoretical analysis.

## 6.1.3 Impact Of Dimensionality On The Performance Of Dm-Krasulina

For this set of experiments, we generate data in $\mathbb{R}^d$ from a normal distribution $\mathcal{N}(\mathbf{0}, \mathbf{\Sigma})$ whose dimensionality is varied such that $d \in \{5, 10, 15, 20\}$. In addition, we fix the largest eigenvalue of $\mathbf{\Sigma}$ to be $\lambda_1 = 1$ and its eigengap to be 0.2. The values of $c$ corresponding to each unique value of $d$ that provide the best convergence rate in these experiments satisfy $c \in \{110, 110, 100, 100\}$; contrary to our theoretical analysis, this seems to suggest that the optimal step-size sequence does not have a strong dependence on $d$, at least for small values of $d$. We also plot the potential function for each $d$ as a function of the number of received samples in Figure 3(b) for the case of $B = 1000$ and $\mu = 0$. Once again, we observe little dependence of the performance of DM-Krasulina on $d$. Both these observations suggest that our theoretical analysis is not tight in terms of its dependence on the dimensionality $d$ of the streaming data.

Figure 4: Performance of DM-Krasulina for varying upper bound on the norm of the streaming data.

## 6.1.4 Impact Of Upper Bound On The Performance Of Dm-Krasulina

In order to understand the impact of the upper bound $\|\mathbf{x}_{t'}\|_2 \le r$ on the convergence behavior of DM-Krasulina, we generate $\mathbf{x}_{t'} \in \mathbb{R}^5$ as $\mathbf{x}_{t'} = C\mathbf{u}_{t'}$, with $\mathbf{u}_{t'} \in \mathbb{R}^5$ having independent entries drawn from the uniform distribution $\mathcal{U}(-a, a)$ and $C$ chosen to ensure an eigengap of 0.2 for the covariance matrix. As we vary the value of $a$ within the set $\{1, 2, 3, 10\}$, we generate four different datasets of $T = 10^6$ samples for which the resulting $r \in \{1.45, 2.9, 4.5, 14.5\}$. The values of $c$ that provide the best convergence for these values of $r$ satisfy $c \in \{8, 2, 1, 0.08\}$. The final set of results is displayed in Figure 4 for $B = 1$ and $\mu = 0$. It can be seen from this figure that changing $r$ does not affect the convergence behavior of DM-Krasulina. This behavior can be explained by noticing that the parameter $r$ appears in our convergence results in terms of a lower bound on $L$ (cf. (13)) and within the non-dominant term in the error bound. The dependence of $L$ on the parameter $r$ is already reflected here in our choice of the step-size parameter $c$ that results in the best convergence result. In addition, we hypothesize that the non-dominant error term in our experiments is sufficiently small, compared to the dominant one, that it masks the dependence of the final error on $r$.
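For reproducibility, the two synthetic data mechanisms used in Sections 6.1.2–6.1.4 can be sketched as follows; the choice of setting all trailing eigenvalues to $1-\text{gap}$ and of drawing a random eigenbasis is ours, since the text only pins down $\lambda_1$ and the eigengap.

```python
import numpy as np

def gaussian_stream(d, gap, T, rng):
    # N(0, Sigma) samples with lambda_1 = 1 and eigengap `gap`
    # (Sections 6.1.2-6.1.3); trailing eigenvalues all equal 1 - gap,
    # which is one concrete choice among many consistent with the text.
    lam = np.full(d, 1.0 - gap)
    lam[0] = 1.0
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random eigenbasis
    return rng.standard_normal((T, d)) * np.sqrt(lam) @ Q.T

def bounded_stream(d, gap, a, T, rng):
    # x = C u with u having i.i.d. U(-a, a) entries (Section 6.1.4);
    # since Var(u_i) = a^2/3, scaling the eigenbasis columns by
    # sqrt(3*lam)/a gives Cov(x) the prescribed spectrum while keeping
    # ||x||_2 bounded.
    lam = np.full(d, 1.0 - gap)
    lam[0] = 1.0
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    C = Q * (np.sqrt(3.0 * lam) / a)
    return rng.uniform(-a, a, size=(T, d)) @ C.T
```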
## 6.2 Experiments On Real-World Datasets

In this section, we evaluate the performance of DM-Krasulina on two real-world datasets, namely, the MNIST dataset (LeCun, 1998) and the Higgs dataset (Baldi et al., 2014). The MNIST dataset corresponds to $d = 784$ and has a total of $T = 6\times10^4$ samples, while the Higgs dataset is $d = 28$ dimensional and comprises $1.1\times10^7$ samples. It is worth noting here that since it is straightforward to store all the samples in these datasets at a single machine, one can always solve the 1-PCA problem for these datasets without resorting to the utilization of a distributed streaming framework. Nonetheless, it is still possible to utilize these datasets in a simulated distributed streaming setting in order to highlight the agreement between the scaling behavior predicted by our theoretical results and the scaling behavior observed using real-world datasets; this is indeed the purpose of the following sets of experiments.

Our first set of experiments is for the MNIST dataset, in which we use the step size $\gamma_t = c/t$ with $c \in \{0.6, 0.9, 1.1, 1.5, 1.6\}$ for network-wide mini-batch sizes $B \in \{1, 10, 100, 300, 1000\}$ in the resourceful regime ($\mu = 0$). The results, which are averaged over 200 random initializations and random shufflings of data, are given in Figure 5(a). It can be seen from this figure that the final error stays relatively the same as $B$ increases from 1 to 100, but it starts getting affected significantly as the network-wide mini-batch size is further increased to $B = 300$ and $B = 1000$.

Figure 5: Performance of DM-Krasulina for the MNIST dataset under two scenarios: (a) no data loss ($\mu = 0$), showing the impact of the network-wide mini-batch size $B$ in the resourceful regime; and (b) loss of $\mu > 0$ samples per algorithmic iteration in a resource-constrained regime with $(N, B) = (10, 100)$.

Figure 6: Performance of DM-Krasulina for the Higgs dataset under two scenarios: (a) no data loss ($\mu = 0$), showing the impact of the network-wide mini-batch size $B$ in the resourceful regime; and (b) loss of $\mu > 0$ samples per algorithmic iteration in a resource-constrained regime with $(N, B) = (10, 1000)$.

Our second set of experiments for the MNIST dataset corresponds to the resource-constrained regime with $(N, B) = (10, 100)$ and step-size parameter $c \in \{0.6, 0.9, 1.1, 1.5, 1.6\}$ for the number of discarded samples $\mu \in \{0, 10, 20, 40, 100\}$. The results, averaged over 200 trials and given in Figure 5(b), show that the system can tolerate loss of some data samples per iteration without significant increase in the final error; the increase in error, however, becomes noticeable as $\mu$ approaches $B$. Both these observations are in line with the insights of our theoretical analysis.

We now turn our attention to the Higgs dataset. Our results for this dataset, averaged over 200 trials and using $c = 0.07$, for the resourceful and resource-constrained settings are given in Figure 6(a) and Figure 6(b), respectively. In the former setting, corresponding to $B \in \{1, 10^2, 10^3, 10^4, 2\times10^4\}$, we once again see that the error stays relatively the same for values of $B$ that are significantly smaller than $T$; in particular, since $T$ for the Higgs dataset is larger than for the MNIST dataset, it can accommodate a larger value of $B$ without significant loss in performance.
In the latter resource-constrained setting, corresponding to $N = 10$, $B = 1000$, and $\mu \in \{0, 10, 100, 1000, 2000\}$, we similarly observe that small (relative to $B$) values of $\mu$ do not impact the performance of DM-Krasulina in a significant manner. Once again, these results corroborate our research findings.

## 7 Conclusion

In this paper, we studied the problem of estimating the principal eigenvector of a covariance matrix from independent and identically distributed data samples. Our particular focus here was on developing and analyzing two variants, termed D-Krasulina and DM-Krasulina, of a classical stochastic algorithm that can estimate the top eigenvector in a near-optimal fashion from fast streaming data that overwhelms the processing capabilities of a single processor. Unlike the classical algorithm, which must discard data samples in high-rate streaming settings and thus sacrifice the convergence rate, the proposed algorithms manage the high-rate streaming data by trading off processing capabilities with computational resources and communications infrastructure. Specifically, both D-Krasulina and DM-Krasulina virtually slow down the rate of streaming data by spreading the processing of data samples across a network of processing nodes. In addition, DM-Krasulina can overcome slower communication links and/or the lack of a sufficient number of processing nodes through a network-wide mini-batching strategy, coupled with discarding a small number of data samples per iteration.

Our theoretical analysis, which fundamentally required a characterization of the error incurred by the proposed algorithms as a function of the variance of the sample covariance matrix, substantially improved the variance-agnostic analysis in (Balsubramani et al., 2013) and established the conditions under which a near-optimal convergence rate is achievable in the fast streaming setting, even when some data samples need to be discarded due to lack of sufficient computational and/or communication resources. We also carried out numerical experiments on both synthetic and real-world data to validate our theoretical findings.

In terms of future work, extension of our algorithmic and analytical framework to estimation of the principal subspace comprising multiple eigenvectors remains an open problem. In addition, tightening our theoretical analysis to better elucidate the role of the dimensionality of data in the performance of the proposed algorithmic framework is an interesting problem. Finally, investigation of additional practical issues (e.g., processor failures, variable compute costs, and network coordination costs) concerning processing of data in large-scale systems provides another avenue for future research.

## Funding Acknowledgements

This work has been supported in part by the National Science Foundation under Awards CCF-1907658, OAC-1940074, and CNS-2148104, and by the Army Research Office under Awards W911NF-17-1-0546 and W911NF-21-1-0301.

## References

Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, pp. 873–881, 2011.

Zeyuan Allen-Zhu. How to make the gradients small stochastically: Even faster convex and nonconvex SGD. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, pp. 1157–1167, 2018a.

Zeyuan Allen-Zhu. Natasha 2: Faster non-convex optimization than SGD. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, pp. 2680–2691, 2018b.

Zeyuan Allen-Zhu and Elad Hazan.
Variance reduction for faster non-convex optimization. In Intl. Conf. Mach. Learning (ICML), pp. 699–707, 2016. Zeyuan Allen-Zhu and Yuanzhi Li. First efficient convergence for streaming k-PCA: a global, gap-free, and near-optimal rate. In *Proc. IEEE 58th Annu. Symp. Found. Comput. Sci. (FOCS)*, pp. 487–492. IEEE, 2017a. Zeyuan Allen-Zhu and Yuanzhi Li. Follow the compressed leader: Faster online learning of eigenvectors and faster MMWU. In *Proc. 34th Int. Conf. Mach. Learning. (ICML)*, pp. 116–125. JMLR. org, 2017b. Ehsan Amid and Manfred K Warmuth. An implicit form of Krasulina's k-PCA update without the orthonormality constraint. *arXiv preprint arXiv:1909.04803*, 2019. Raman Arora, Andy Cotter, and Nati Srebro. Stochastic optimization of PCA with capped MSG. In Proc. Advances Neural Inform. Process. Syst. (NeurIPS), pp. 1815–1823, 2013. Tuncer Can Aysal, Mehmet Ercan Yildiz, Anand D Sarwate, and Anna Scaglione. Broadcast gossip algorithms for consensus. *IEEE Trans. Signal Process.*, 57(7):2748–2761, 2009. Maria Florina Balcan, Yingyu Liang, Le Song, David Woodruff, and Bo Xie. Communication efficient distributed kernel principal component analysis. In *Proc. 22nd ACM SIGKDD Int. Conf. Knowledge* Discovery and Data Mining, pp. 725–734. ACM, 2016. Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. *Nature communications*, 5:4308, 2014. Akshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental PCA. In Proc. Advances Neural Inform. Process. Syst. (NeurIPS), pp. 3174–3182, 2013. Bin Yang. Projection approximation subspace tracking. *IEEE Trans. Signal Process.*, 43(1):95–107, 1995. Vincent D Blondel, Julien M Hendrickx, Alex Olshevsky, and John N Tsitsiklis. Convergence in multiagent coordination, consensus, and flocking. In *Proc. 44th IEEE Conf. Decision and Control*, pp. 2996–3000. IEEE, 2005. Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proc. 19th Intl. Conf. Computational Statistics (COMPSTAT'10), pp. 177–186, 2010. doi: 10.1007/978-3-7908-2604-3_16. Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013. Christos Boutsidis, David P Woodruff, and Peilin Zhong. Optimal principal component analysis in distributed and streaming models. In *Proc. 48th Annu. ACM Symp. Theory Computing (STOC)*, pp. 236–249. ACM, 2016. Chanchal Chatterjee. Adaptive algorithms for first principal eigenvector computation. *Neural Networks*, 18: 145–159, 2005. ISSN 08936080. doi: 10.1016/j.neunet.2004.11.004. Kenneth L Clarkson and David P Woodruff. Low-rank approximation and regression in input sparsity time. J. of the ACM, 63(6):1–45, 2017. Andrew Cotter, Ohad Shamir, Nati Srebro, and Karthik Sridharan. Better mini-batch algorithms via accelerated gradient methods. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, pp. 1647–1655, 2011. Christopher De Sa, Kunle Olukotun, and Christopher Ré. Global convergence of stochastic gradient descent for some non-convex matrix problems. In *Proc. 32nd Int. Conf. Machine Learning (ICML)*, pp. 2332–2341, July 2015. URL http://arxiv.org/abs/1411.1134. Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, and Peng Xu. Accelerated stochastic power iteration. *Proc. Mach. Learning Res.*, 84:58, 2018. Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. *J. 
Mach. Learning Res. (JMLR)*, 13(Jan):165–202, 2012. Alexandros G Dimakis, Soummya Kar, José MF Moura, Michael G Rabbat, and Anna Scaglione. Gossip algorithms for distributed signal processing. *Proc. IEEE*, 98(11):1847–1864, 2010. Xenofon G. Doukopoulos and George V. Moustakides. Fast and stable subspace tracking. IEEE Trans. Signal Process., 56(4):1452–1465, 2008. ISSN 1053587X. doi: 10.1109/TSP.2007.909335. Rick Durrett. *Probability: theory and examples*. Cambridge University Press, 2010. Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. *J. of the ACM*, 51(6):1025–1041, 2004. Arpita Gang and Waheed U Bajwa. FAST-PCA: A fast and exact algorithm for distributed principal component analysis. *arXiv preprint arXiv:2108.12373*, 2021. Arpita Gang and Waheed U Bajwa. A linearly convergent algorithm for distributed principal component analysis. *Signal Process.*, 193:108408, 2022. Arpita Gang, Bingqing Xiang, and Waheed U Bajwa. Distributed principal subspace analysis for partitioned big data: Algorithms, analysis, and implementation. *IEEE Trans. Signa Inform. Process. over Netw.*, 7: 699–715, 2021. Dan Garber. On the regret minimization of nonconvex online gradient ascent for online PCA. arXiv preprint arXiv:1809.10491, 2018. Dan Garber and Elad Hazan. Fast and simple PCA via convex optimization. In Proc. 32nd Int. Conf. Machine Learning (ICML), 2015. Dan Garber, Elad Hazan, and Tengyu Ma. Online learning of eigenvectors. In *Proc. 32nd Int. Conf. Machine* Learning. (ICML), pp. 560–568, 2015. Dan Garber, Ohad Shamir, and Nathan Srebro. Communication-efficient algorithms for distributed stochastic principal component analysis. *arXiv preprint arXiv:1702.08169*, 2017. Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In *Proc. Conf. Learning Theory (COLT)*, pp. 797–842, 2015. Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W Mahoney, and Joseph Gonzalez. On the computational inefficiency of large batch sizes for stochastic gradient descent. *arXiv preprint arXiv:1811.12941*, 2018. Gene H. Golub and Charles F. Van Loan. *Matrix computations*. Johns Hopkins University Press, Baltimore, MD, third edition, 2012. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. *SIAM review*, 53(2):217–288, 2011. Moritz Hardt and Eric Price. The noisy power method: A meta algorithm with applications. In *Proc.* Advances Neural Inform. Process. Systs (NeurIPS), pp. 2861–2869, 2014. Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic nonconvex problems. In *Proc. Int. Conf. Mach. Learn. (ICML)*, pp. 1833–1841, 2016. Elad Hazan, Satyen Kale, and Shai Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. SIAM J. Computing, 46(2):744–773, 2017. Amelia Henriksen and Rachel Ward. AdaOja: Adaptive learning rates for streaming PCA. *arXiv preprint* arXiv:1905.12115, 2019. Prateek Jain, Chi Jin, Sham M Kakade, Praneeth Netrapalli, and Aaron Sidford. 
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. In *Proc. Conf. Learning Theory (COLT)*, pp. 1147–1164, 2016.

Zohar Karnin and Edo Liberty. Online PCA with spectral bounds. In *Proc. Conf. Learning Theory (COLT)*, pp. 1129–1140, 2015.

Soila Kavulya, Jiaqi Tan, Rajeev Gandhi, and Priya Narasimhan. An analysis of traces from a production MapReduce cluster. In *Proc. 10th IEEE/ACM Int. Conf. Cluster, Cloud and Grid Comput.*, pp. 94–103, 2010.

David Kempe and Frank McSherry. A decentralized algorithm for spectral analysis. *J. Comput. and Syst. Sci.*, 74(1):70–83, 2008.

Usman A Khan, Soummya Kar, and José MF Moura. Distributed sensor localization in random environments using minimal number of anchor nodes. *IEEE Trans. Signal Process.*, 57(5):2000–2016, 2009.

Satish Babu Korada, Andrea Montanari, and Sewoong Oh. Gossip PCA. *ACM SIGMETRICS Performance Evaluation Review*, 39(1):169–180, 2011.

Wojciech Kotłowski and Gergely Neu. Bandit principal component analysis. In *Proc. Conf. Learning Theory (COLT)*, 2019.

TP Krasulina. The method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix. *USSR Comput. Mathematics and Mathematical Physics*, 9(6):189–195, 1969.

Yann LeCun. The MNIST database of handwritten digits. *http://yann.lecun.com/exdb/mnist/*, 1998.

Cong Leng, Jiaxiang Wu, Jian Cheng, Xiao Bai, and Hanqing Lu. Online sketching hashing. In *Proc. IEEE Conf. Comput. Vision and Pattern Recognition (CVPR)*, pp. 2503–2511, 2015.

Chun-Liang Li, Hsuan-Tien Lin, and Chi-Jen Lu. Rivalry of two families of algorithms for memory-restricted streaming PCA. In *Proc. Int. Conf. Artificial Intell. and Statis. (AISTATS)*, pp. 473–481, 2016.

Lin Li, Anna Scaglione, and Jonathan H Manton. Distributed principal subspace estimation in wireless sensor networks. *IEEE J. Sel. Topics Signal Process.*, 5(4):725–738, 2011.

Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In *Int. Conf. Mach. Learn.*, pp. 6286–6295. PMLR, 2021.

Edo Liberty. Simple and deterministic matrix sketching. In *Proc. 19th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining*, pp. 581–588. ACM, 2013.

Jeff Linderoth, Alexander Shapiro, and Stephen Wright. The empirical behavior of sampling methods for stochastic programming. *Ann. Operations Research*, 142(1):215–241, 2006.

Teodor Vanislavov Marinov, Poorya Mianjy, and Raman Arora. Streaming principal component analysis in noisy settings. In *Proc. 35th Int. Conf. Mach. Learning (ICML)*, pp. 3410–3419, 2018.

Aryan Mokhtari, Alec Koppel, Gesualdo Scutari, and Alejandro Ribeiro. Large-scale nonconvex stochastic optimization by doubly stochastic successive convex approximation. In *IEEE Int. Conf. Acoustics, Speech and Signal Process.*, pp. 4701–4705, 2017.

Jiazhong Nie, Wojciech Kotlowski, and Manfred K Warmuth. Online PCA with optimal regret. *J. Mach. Learning Res. (JMLR)*, 17(173):1–49, 2016.

Erkki Oja and Juha Karhunen. On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. *J. Math. Anal. and Applicat.*, 106(1):69–84, 1985.

Muhammad I Qureshi, Ran Xin, Soummya Kar, and Usman A Khan. Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs. *IEEE Control Syst. Letters*, 6:1202–1207, 2021.

Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu.
Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, pp. 693–701, 2011.

Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In *Proc. Int. Conf. Mach. Learning (ICML)*, pp. 314–323, 2016a.

Sashank J Reddi, Suvrit Sra, Barnabás Póczos, and Alex Smola. Fast stochastic methods for nonsmooth nonconvex optimization. In *Proc. Advances Neural Inform. Process. Syst. (NeurIPS)*, 2016b.

Sashank J Reddi, Suvrit Sra, Barnabás Póczos, and Alex Smola. Fast incremental method for smooth nonconvex optimization. In *IEEE 55th Conf. Decision and Control (CDC)*, pp. 1971–1977. IEEE, 2016c.

Herbert Robbins and Sutton Monro. A stochastic approximation method. *Ann. Math. Stat.*, pp. 400–407, 1951.

Sebastian Ruder. An overview of gradient descent optimization algorithms. *arXiv preprint arXiv:1609.04747*, 2016.

Terence D Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. *Neural Networks*, 2(6):459–473, 1989.

Ohad Shamir. A stochastic PCA algorithm with an exponential convergence rate. In *Proc. Int. Conf. Mach. Learning (ICML)*, 2015.

Ohad Shamir. Fast stochastic algorithms for SVD and PCA: Convergence properties and convexity. In *Proc. 33rd Int. Conf. Mach. Learning*, 2016.

Ohad Shamir and Nathan Srebro. Distributed stochastic optimization and learning. In *52nd Annu. Allerton Conf. Commun., Control, and Comput.*, pp. 850–857. IEEE, 2014.

Alexander Shapiro and Tito Homem-de Mello. On the rate of convergence of optimal solutions of Monte Carlo approximations of stochastic programs. *SIAM J. Optimization*, 11(1):70–86, 2000.

Cheng Tang. Exponentially convergent stochastic k-PCA without variance reduction. *arXiv preprint arXiv:1904.01750*, 2019.

Joel A Tropp, Alp Yurtsever, Madeleine Udell, and Volkan Cevher. Streaming low-rank matrix approximation with an application to scientific simulation. *SIAM J. Scientific Comput.*, 41(4):A2430–A2463, 2019.

Manfred K Warmuth and Dima Kuzmin. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In *Proc. Advances Neural Inform. Process. Syst. (NIPS)*, volume 19, pp. 1481–1488, 2007. ISBN 9780262195683. doi: 10.1.1.133.8332.

David P Woodruff. Sketching as a tool for numerical linear algebra. *Foundations and Trends® in Theoretical Computer Science*, 10(1–2):1–157, 2014.

Sissi Xiaoxiao Wu, Hoi-To Wai, Anna Scaglione, and Neil A Jacklin. The Power-Oja method for decentralized subspace estimation/tracking. In *Proc. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3524–3528. IEEE, 2017.

Sissi Xiaoxiao Wu, Hoi-To Wai, Lin Li, and Anna Scaglione. A review of distributed algorithms for principal component analysis. *Proc. IEEE*, 106(8):1321–1340, 2018.

Puyudi Yang, Cho-Jui Hsieh, and Jane-Ling Wang. History PCA: A new algorithm for streaming PCA. *arXiv preprint arXiv:1802.05447*, 2018.

Se-Young Yun, Alexandre Proutiere, et al. Fast and memory optimal low-rank matrix approximation. *Advances in Neural Information Processing Systems*, 28, 2015.

Dejiao Zhang and Laura Balzano. Global convergence of a Grassmannian gradient descent algorithm for subspace estimation. In *Proc. Int. Workshop Artificial Intell. and Statist. (AISTATS)*, pp. 1460–1468, 2016.

Siyun Zhou and Yanqin Bai. Convergence analysis of Oja's iteration for solving online PCA with nonzero-mean samples.
*Science China Mathematics*, 64(4):849–868, 2021.

## Appendix A Proofs Of Lemmas For The Initial Epoch

## A.1 Proof Of Lemma 1

In order to prove Lemma 1, we first need the following result.

Lemma 9. The second moment of the update vector $\boldsymbol{\xi}_t$ in D-Krasulina is upper bounded as

$$\mathbb{E}\left\{\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right\}\leq\frac{\mathbb{E}\left\{\|\boldsymbol{\xi}_{t}-\mathbb{E}\{\boldsymbol{\xi}_{t}\}\|_{2}^{2}\right\}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}+2\lambda_{1}^{2}\Psi_{t-1}.$$

Proof. We start by writing $\mathbb{E}\{\|\boldsymbol{\xi}_t-\mathbb{E}\{\boldsymbol{\xi}_t\}\|_2^2\}$ in terms of $\mathbb{E}\{\|\boldsymbol{\xi}_t\|_2^2\}$ as follows:

$$\mathbb{E}\left\{\|\boldsymbol{\xi}_{t}-\mathbb{E}\{\boldsymbol{\xi}_{t}\}\|_{2}^{2}\right\}=\mathbb{E}\Big\{\boldsymbol{\xi}_{t}^{\mathrm{T}}\boldsymbol{\xi}_{t}+(\mathbb{E}\{\boldsymbol{\xi}_{t}\})^{\mathrm{T}}\mathbb{E}\{\boldsymbol{\xi}_{t}\}-\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbb{E}\{\boldsymbol{\xi}_{t}\}-(\mathbb{E}\{\boldsymbol{\xi}_{t}\})^{\mathrm{T}}\boldsymbol{\xi}_{t}\Big\}=\mathbb{E}\{\|\boldsymbol{\xi}_{t}\|_{2}^{2}\}-\mathbb{E}\{\boldsymbol{\xi}_{t}^{\mathrm{T}}\}\mathbb{E}\{\boldsymbol{\xi}_{t}\}.$$

Now defining $C_t := \mathbb{E}\{\boldsymbol{\xi}_t^{\mathrm{T}}\}\mathbb{E}\{\boldsymbol{\xi}_t\}$ and rearranging the above equation, we get

$$\mathbb{E}\{\|\boldsymbol{\xi}_{t}\|_{2}^{2}\}=\mathbb{E}\{\|\boldsymbol{\xi}_{t}-\mathbb{E}\{\boldsymbol{\xi}_{t}\}\|_{2}^{2}\}+C_{t}.$$

Next, substituting the value of $\boldsymbol{\xi}_t$ from (8), we get

$$\frac{C_{t}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}=\frac{\mathbb{E}\{\boldsymbol{\xi}_{t}^{\mathrm{T}}\}\mathbb{E}\{\boldsymbol{\xi}_{t}\}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}=\frac{1}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\left\|\boldsymbol{\Sigma}\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{v}_{t-1}\,\mathbf{v}_{t-1}}{\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{v}_{t-1}}\right\|_{2}^{2}=\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\boldsymbol{\Sigma}^{2}\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\left(\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right)^{2}.\tag{32}$$

Since $\boldsymbol{\Sigma}$ is a positive semi-definite matrix, we can write its eigenvalue decomposition as $\boldsymbol{\Sigma}=\sum_{i=1}^{d}\lambda_{i}\mathbf{q}_{i}\mathbf{q}_{i}^{\mathrm{T}}$, where $\lambda_1>\lambda_2\geq\cdots\geq\lambda_d\geq 0$ and $\mathbf{q}_1(\equiv\mathbf{q}^*),\mathbf{q}_2,\ldots,\mathbf{q}_d$ are the eigenvalues and corresponding eigenvectors of $\boldsymbol{\Sigma}$, respectively. It follows that

$$\frac{C_{t}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}=\sum_{i=1}^{d}\lambda_{i}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}_{i})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\left(\sum_{i=1}^{d}\lambda_{i}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}_{i})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right)^{2}$$
$$=\lambda_{1}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}+\sum_{i=2}^{d}\lambda_{i}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}_{i})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\left(\lambda_{1}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}+\sum_{i=2}^{d}\lambda_{i}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}_{i})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right)^{2}$$
$$\leq\lambda_{1}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}+\lambda_{2}^{2}\sum_{i=2}^{d}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}_{i})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\lambda_{1}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{4}}{\|\mathbf{v}_{t-1}\|_{2}^{4}}$$
$$=\lambda_{1}^{2}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\left(1-\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right)+\lambda_{2}^{2}\left(1-\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\right).$$

Finally, we get from the definition of $\Psi_{t-1}$ that

$$\frac{C_{t}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\leq\Psi_{t-1}\big((1-\Psi_{t-1})\lambda_{1}^{2}+\lambda_{2}^{2}\big)\leq\Psi_{t-1}\big(\lambda_{1}^{2}+\lambda_{2}^{2}\big)\leq2\lambda_{1}^{2}\Psi_{t-1}.$$

This completes the proof of the lemma.

Using Lemma 9, we can now prove Lemma 1 in the following.

Proof of Lemma 1. From (10), we have $\Psi_t = \frac{\|\mathbf{v}_t\|_2^2-(\mathbf{v}_t^{\mathrm{T}}\mathbf{q}^*)^2}{\|\mathbf{v}_t\|_2^2}$. Substituting $\mathbf{v}_t$ from (8), we get

$$\Psi_{t}=\frac{\|\mathbf{v}_{t-1}+\gamma_{t}\boldsymbol{\xi}_{t}\|_{2}^{2}-((\mathbf{v}_{t-1}+\gamma_{t}\boldsymbol{\xi}_{t})^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t}\|_{2}^{2}}\stackrel{\text{(a)}}{=}\frac{\|\mathbf{v}_{t-1}\|_{2}^{2}+\gamma_{t}^{2}\|\boldsymbol{\xi}_{t}\|_{2}^{2}-((\mathbf{v}_{t-1}+\gamma_{t}\boldsymbol{\xi}_{t})^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t}\|_{2}^{2}}$$
$$\stackrel{\text{(b)}}{\leq}1+\gamma_{t}^{2}\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}+\gamma_{t}^{2}(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})^{2}+2\gamma_{t}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})}{\|\mathbf{v}_{t-1}\|_{2}^{2}}$$
$$=\Psi_{t-1}+\gamma_{t}^{2}\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}-(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-2\gamma_{t}\frac{(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\leq\Psi_{t-1}+\gamma_{t}^{2}\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-z_{t}.\tag{33}$$
Here (a) and (b) are due to (Balsubramani et al., 2013, Lemma A.1), where (a) is true because vt−1 is perpendicular to ξt and (b) is true because ∥vt−1∥2 ≤ ∥vt∥2. The second term in the above inequality can be bounded as

$$\mathbb{E}\bigg\{\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\bigg\}\overset{(c)}{\leq}\frac{\mathbb{E}\big\{\|\boldsymbol{\xi}_{t}-\mathbb{E}\{\boldsymbol{\xi}_{t}\}\|_{2}^{2}\big\}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}+2\lambda_{1}^{2}\Psi_{t-1}$$
$$=\frac{1}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\mathbb{E}\Bigg\{\bigg\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\mathbf{v}_{t-1}-\boldsymbol{\Sigma}\mathbf{v}_{t-1}+\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\boldsymbol{\Sigma}\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\mathbf{v}_{t-1}\bigg\|_{2}^{2}\Bigg\}+2\lambda_{1}^{2}\Psi_{t-1}$$
$$=\frac{1}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\mathbb{E}\Bigg\{\bigg\|\Big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\Big)\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\mathbf{v}_{t-1}\bigg\|_{2}^{2}\Bigg\}+2\lambda_{1}^{2}\Psi_{t-1}$$
$$\leq4\,\mathbb{E}\Bigg\{\Big\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\Big\|_{2}^{2}\Bigg\}+2\lambda_{1}^{2}\Psi_{t-1}\leq4\,\mathbb{E}\Bigg\{\Big\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\Big\|_{F}^{2}\Bigg\}+2\lambda_{1}^{2}\Psi_{t-1},\tag{34}$$

where (c) is due to Lemma 9. Substituting (34) in (33) completes the proof of Part (i) of Lemma 1.

Next, we prove Part (ii) of the lemma by defining $\widehat{\mathbf{v}}_{t-1}=\mathbf{v}_{t-1}/\|\mathbf{v}_{t-1}\|_{2}$ and noting that

$$\frac{\|\boldsymbol{\xi}_{t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}=\frac{\frac{1}{N^{2}}\big\|\sum_{i=1}^{N}\boldsymbol{\xi}_{i,t}\big\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\overset{(d)}{\leq}\frac{\frac{1}{N^{2}}\sum_{i=1}^{N}N\|\boldsymbol{\xi}_{i,t}\|_{2}^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}=\frac{\sum_{i=1}^{N}(\mathbf{x}_{i,t}^{\mathrm{T}}\mathbf{v}_{t-1})^{2}\|\mathbf{x}_{i,t}-(\mathbf{x}_{i,t}^{\mathrm{T}}\widehat{\mathbf{v}}_{t-1})\widehat{\mathbf{v}}_{t-1}\|_{2}^{2}}{N\|\mathbf{v}_{t-1}\|_{2}^{2}}$$
$$\leq\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{x}_{i,t}\|_{2}^{2}\,\|\mathbf{x}_{i,t}-(\mathbf{x}_{i,t}^{\mathrm{T}}\widehat{\mathbf{v}}_{t-1})\widehat{\mathbf{v}}_{t-1}\|_{2}^{2}=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{x}_{i,t}\|_{2}^{2}\big(\|\mathbf{x}_{i,t}\|_{2}^{2}-(\mathbf{x}_{i,t}^{\mathrm{T}}\widehat{\mathbf{v}}_{t-1})^{2}\big)\leq\frac{\sum_{i=1}^{N}\|\mathbf{x}_{i,t}\|_{2}^{4}}{N}\leq\max_{i}\|\mathbf{x}_{i,t}\|_{2}^{4}\leq r^{4}.\tag{35}$$

Here, (d) is by using the Cauchy–Schwarz inequality and the last inequality is due to Assumption **[A1]**. Now substituting this in (33) completes the proof. □

## A.2 Proof Of Lemma 2

We begin by writing

$$\mathbb{E}\{(z_{t}-\mathbb{E}\{z_{t}\})^{2}|\mathcal{F}_{t-1}\}=\mathbb{E}\Bigg\{\bigg(\frac{2\gamma_{t}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})}{\|\mathbf{v}_{t-1}\|_{2}^{2}}-\mathbb{E}\bigg\{\frac{2\gamma_{t}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*})}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\bigg\}\bigg)^{2}\,\Bigg|\,\mathcal{F}_{t-1}\Bigg\}=\frac{4\gamma_{t}^{2}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{4}}\,\mathbb{E}\Big\{\big(\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*}-\mathbb{E}\{\boldsymbol{\xi}_{t}^{\mathrm{T}}\mathbf{q}^{*}\}\big)^{2}\Big\}.$$

Substituting the value of ξt and using the fact that $\mathbb{E}\big\{\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\big\}$ is the covariance matrix Σ, we get

$$\mathbb{E}\{(z_{t}-\mathbb{E}\{z_{t}\})^{2}|\mathcal{F}_{t-1}\}=\frac{4\gamma_{t}^{2}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{4}}\,\mathbb{E}\Bigg\{\bigg(\mathbf{q}^{*\mathrm{T}}\Big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\Big)\mathbf{v}_{t-1}-\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}\,\big(\mathbf{q}^{*\mathrm{T}}\mathbf{v}_{t-1}\big)}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\bigg)^{2}\Bigg\}$$
$$\leq\frac{8\gamma_{t}^{2}(\mathbf{v}_{t-1}^{\mathrm{T}}\mathbf{q}^{*})^{2}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\,\mathbb{E}\Bigg\{\bigg(\frac{\mathbf{q}^{*\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}}\bigg)^{2}+\bigg(\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\bigg)^{2}\bigg(\frac{\mathbf{q}^{*\mathrm{T}}\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}}\bigg)^{2}\Bigg\}$$
$$\leq8\gamma_{t}^{2}\,\mathbb{E}\Bigg\{\bigg(\frac{\mathbf{q}^{*\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}}\bigg)^{2}+\bigg(\frac{\mathbf{v}_{t-1}^{\mathrm{T}}\big(\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\boldsymbol{\Sigma}\big)\mathbf{v}_{t-1}}{\|\mathbf{v}_{t-1}\|_{2}^{2}}\bigg)^{2}\Bigg\},\tag{36}$$

where the last inequality in (36) is due to the fact that $\big(\mathbf{q}^{*\mathrm{T}}\mathbf{v}_{t-1}/\|\mathbf{v}_{t-1}\|_{2}\big)^{2}\leq1$. We can see that both the remaining terms in (36) are Rayleigh quotients of the matrix $\big(\boldsymbol{\Sigma}-\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\big)$ and hence the largest eigenvalue of $\big(\boldsymbol{\Sigma}-\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\big)$ maximizes both the terms.
Using this fact we get

$$\mathbb{E}\{(z_{t}-\mathbb{E}\{z_{t}\})^{2}|\mathcal{F}_{t-1}\}\leq16\gamma_{t}^{2}\,\mathbb{E}\Big\{\Big\|\boldsymbol{\Sigma}-\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\Big\|_{2}^{2}\Big\}\leq16\gamma_{t}^{2}\,\mathbb{E}\Big\{\Big\|\boldsymbol{\Sigma}-\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}\Big\|_{F}^{2}\Big\}.$$

Using (12), we get $\mathbb{E}\{(z_{t}-\mathbb{E}\{z_{t}\})^{2}|\mathcal{F}_{t-1}\}\leq16\gamma_{t}^{2}\sigma_{N}^{2}$, which completes the proof. □

## A.3 Proof Of Lemma 3

Using Lemma 1, we can write the moment generating function of Ψt as follows:

$$\mathbb{E}\big\{\exp(s\Psi_{t})|\mathcal{F}_{t-1}\big\}\leq\mathbb{E}\Big\{\exp\Big(s\Psi_{t-1}+s\gamma_{t}^{2}r^{4}-sz_{t}\Big)\Big|\mathcal{F}_{t-1}\Big\}=\exp(s\Psi_{t-1}+s\gamma_{t}^{2}r^{4})\,\mathbb{E}\Big\{\exp\big(-sz_{t}\big)\Big|\mathcal{F}_{t-1}\Big\}$$
$$=\exp\big(s\Psi_{t-1}+s\gamma_{t}^{2}r^{4}-s\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}\big)\,\mathbb{E}\Big\{\exp\big(-s(z_{t}-\mathbb{E}\{z_{t}\})\big)\Big|\mathcal{F}_{t-1}\Big\}.\tag{37}$$

We can bound this using Bennett's inequality (Proposition 2 in Appendix D), which requires the variance and range of the random variable zt. We have already computed the variance of zt in Lemma 2. Next we compute the boundedness of (zt − E{zt}) as follows:

$$\left|z_{t}-\mathbb{E}\{z_{t}\}\right|\leq2|z_{t}|\leq2\gamma_{t}\|\mathbf{x}_{i,t}\|_{2}^{2}\leq2\gamma_{t}r^{2}=:h.\tag{38}$$

Here, the last inequality is due to Assumption [A1]. Using parameters σ²N and h with Bennett's inequality, we get

$$\mathbb{E}\{\exp(s\Psi_{t})|\mathcal{F}_{t-1}\}\leq\exp\Bigg(s\Psi_{t-1}-s\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}+s\gamma_{t}^{2}r^{4}+s^{2}\gamma_{t}^{2}\sigma_{N}^{2}\Bigg(\frac{e^{sh}-1-sh}{(sh)^{2}}\Bigg)\Bigg).\tag{39}$$

For L ≥ L1 + L2, where L1 and L2 are given by (13), we show in Proposition 3 in Appendix D that $\big(\frac{e^{sh}-1-sh}{(sh)^{2}}\big)\leq1$ for s ∈ S. This implies

$$\mathbb{E}\{\exp(s\Psi_{t})|\mathcal{F}_{t-1}\}\leq\exp\left(s\Psi_{t-1}-s\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}+s\gamma_{t}^{2}r^{4}+s^{2}\gamma_{t}^{2}\sigma_{N}^{2}\right),$$

which completes the proof of the lemma. □

## Appendix B Proofs Of Lemmas For The Intermediate Epoch

## B.1 Proof Of Lemma 4

Using Lemma 3, we have

$$\mathbb{E}\{e^{s\Psi_{t}}|\mathcal{F}_{t-1}\}\leq\exp\Big(s\big(\Psi_{t-1}+\gamma_{t}^{2}r^{4}-\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}+s\gamma_{t}^{2}\sigma_{N}^{2}\big)\Big)\overset{(a)}{\leq}\exp\Big(s\big(\Psi_{t-1}-2\gamma_{t}(\lambda_{1}-\lambda_{2})\Psi_{t-1}(1-\Psi_{t-1})+\gamma_{t}^{2}r^{4}+s\gamma_{t}^{2}\sigma_{N}^{2}\big)\Big)$$
$$\overset{(b)}{\leq}\exp\Bigg(s\Bigg(\Psi_{t-1}-\frac{c_{0}\Psi_{t-1}(1-\Psi_{t-1})}{t+L}+\frac{c^{2}r^{4}}{(t+L)^{2}}+\frac{sc^{2}\sigma_{N}^{2}}{(t+L)^{2}}\Bigg)\Bigg).\tag{40}$$

Here, (a) is due to (Balsubramani et al., 2013, Lemma A.3) and (b) is by substituting γt = c/(t + L) = c0/(2(λ1 − λ2)(t + L)). Finally, for ω ∈ Ω′t we have Ψt−1(ω) ≤ 1 − ϵj. Now taking expectation over Ω′t, we get the desired result. □

## B.2 Proof Of Lemma 5

Define $\alpha_{t}:=1-\frac{c_{0}\epsilon_{j}}{t+L}$ and $\zeta_{t}(s):=\frac{sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}}{(t+L)^{2}}$. Substituting αt and ζt(s) in Lemma 4, we get

$$\mathbb{E}_{t}\{e^{s\Psi_{t}}\}\leq\mathbb{E}_{t}\{e^{s\alpha_{t}\Psi_{t-1}}\}\exp\big(\zeta_{t}(s)\big)\leq\mathbb{E}_{t-1}\big\{e^{s\alpha_{t}\Psi_{t-1}}\big\}\exp\big(\zeta_{t}(s)\big).\tag{41}$$

Note that the second inequality in (41) is due to (Balsubramani et al., 2013, Lemma 2.8).
Applying this procedure repeatedly yields

$$\mathbb{E}_{t}\big\{e^{s\Psi_{t}}\big\}\leq\mathbb{E}_{t_{j}+1}\big\{\exp\big(s\Psi_{t_{j}}\alpha_{t}\dots\alpha_{t_{j}+1}\big)\big\}\exp\big(\zeta_{t}(s)\big)\dots\exp\big(\zeta_{t_{j}+1}\big(s\alpha_{t}\dots\alpha_{t_{j}+1}\big)\big)$$
$$\leq\mathbb{E}_{t_{j}+1}\big\{\exp\big(s\Psi_{t_{j}}\alpha_{t}\dots\alpha_{t_{j}+1}\big)\big\}\exp\big(\zeta_{t}(s)\big)\dots\exp\big(\zeta_{t_{j}+1}(s)\big).$$

Substituting the values of αt and ζt(s) in the above, we get

$$\mathbb{E}_{t}\big\{e^{s\Psi_{t}}\big\}\leq\mathbb{E}_{t_{j}+1}\Big\{\exp\Big(s\Psi_{t_{j}}\Big(1-\frac{c_{0}\epsilon_{j}}{t+L}\Big)\cdots\Big(1-\frac{c_{0}\epsilon_{j}}{t_{j}+L+1}\Big)\Big)\Big\}\exp\Bigg(\big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\big)\Big(\frac{1}{(t+L)^{2}}+\cdots+\frac{1}{(t_{j}+L+1)^{2}}\Big)\Bigg)$$
$$\leq\exp\Bigg(s(1-\epsilon_{j})\exp\Big(-c_{0}\epsilon_{j}\Big(\frac{1}{t+L}+\cdots+\frac{1}{t_{j}+L+1}\Big)\Big)\Bigg)\exp\Bigg(\big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\big)\Big(\frac{1}{(t+L)^{2}}+\cdots+\frac{1}{(t_{j}+L+1)^{2}}\Big)\Bigg).\tag{42}$$

Here, the last inequality is true because Ψtj(ω) ≤ 1 − ϵj for ω ∈ Ω′tj+1 and 1 − x ≤ e−x for x ≤ 1. Next we bound the summations in (42) as follows:

$$\frac{1}{t+L}+\cdots+\frac{1}{t_{j}+L+1}\geq\int_{t_{j}+1}^{t+1}\frac{dx}{x+L}=\ln\frac{t+L+1}{t_{j}+L+1},$$
$$\frac{1}{(t+L)^{2}}+\cdots+\frac{1}{(t_{j}+L+1)^{2}}\leq\int_{t_{j}}^{t}\frac{dx}{(x+L)^{2}}=\frac{1}{t_{j}+L}-\frac{1}{t+L}.$$

Substituting these bounds in (42), we get the desired result. □

## B.3 Proof Of Lemma 6

This lemma uses Lemma 5 and deals with the specific value t = tj+1. For t = tj+1, (29) gives

$$\mathbb{E}_{t_{j+1}}\{e^{s\Psi_{t_{j+1}}}\}\leq\exp\Bigg(s(1-\epsilon_{j})\Bigg(\frac{t_{j}+L+1}{t_{j+1}+L+1}\Bigg)^{c_{0}\epsilon_{j}}+\big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\big)\Bigg(\frac{1}{t_{j}+L}-\frac{1}{t_{j+1}+L}\Bigg)\Bigg).\tag{43}$$

Using conditions [C1] and [C2] and the fact that e−2x ≤ 1 − x for 0 ≤ x ≤ 3/4, we get

$$(1-\epsilon_{j})\Bigg(\frac{t_{j}+L+1}{t_{j+1}+L+1}\Bigg)^{c_{0}\epsilon_{j}}\leq e^{-\epsilon_{j}}\big(e^{-5/c_{0}}\big)^{c_{0}\epsilon_{j}}=e^{-6\epsilon_{j}}\leq1-3\epsilon_{j}\leq1-\epsilon_{j+1}-\epsilon_{j}.$$

Substituting this in (43), we obtain the desired result. □

## B.4 Proof Of Lemma 7

Constructing a supermartingale sequence Mt in the same way as we did in Theorem 3 for s ∈ S and applying Doob's martingale inequality, we get

$$\mathbb{P}_{t_{j}}\Big(\sup_{t\geq t_{j}}\Psi_{t}\geq1-\epsilon_{j}\Big)\leq\mathbb{P}_{t_{j}}\Big(\sup_{t\geq t_{j}}M_{t}\geq e^{s(1-\epsilon_{j})}\Big)\leq\frac{\mathbb{E}\{M_{t_{j}}\}}{e^{s(1-\epsilon_{j})}}=\frac{\mathbb{E}\big\{\exp\big(s\Psi_{t_{j}}+s\tau_{t_{j}}\big)\big\}}{e^{s(1-\epsilon_{j})}}=\frac{\mathbb{E}\big\{\exp\big(s\Psi_{t_{j}}\big)\big\}\exp\big(s\tau_{t_{j}}\big)}{e^{s(1-\epsilon_{j})}}.$$

Using Lemma 6 then results in

$$\mathbb{P}_{t_{j}}\Big(\sup_{t\geq t_{j}}\Psi_{t}\geq1-\epsilon_{j}\Big)\leq\frac{1}{e^{s(1-\epsilon_{j})}}\exp\left(s(1-\epsilon_{j})-s\epsilon_{j-1}+\Big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\Big)\Big(\frac{1}{t_{j-1}+L}-\frac{1}{t_{j}+L}\Big)+s\tau_{t_{j}}\right).$$

Substituting a bound on τtj from Theorem 3 (see, e.g., the discussion around (28)), we get

$$\mathbb{P}_{t_{j}}\left(\sup_{t\geq t_{j}}\Psi_{t}\geq1-\epsilon_{j}\right)\leq\exp\left(-s\epsilon_{j-1}+\big(sc^{2}r^{4}+s^{2}c^{2}\sigma_{N}^{2}\big)\Big(\frac{1}{t_{j-1}+L}-\frac{1}{t_{j}+L}\Big)+s\big(c^{2}r^{4}+sc^{2}\sigma_{N}^{2}\big)\frac{1}{t_{j}+L}\right)$$
$$=\exp\left(-s\epsilon_{j-1}+s\big(c^{2}r^{4}+sc^{2}\sigma_{N}^{2}\big)\frac{1}{t_{j-1}+L}\right).$$

Substituting s = (2/ϵ0) ln(4/δ) and using the lower bound on L, we get (see Proposition 5 in Appendix D for formal verification)

$$\mathbb{P}_{t_{j}}\left(\sup_{t\geq t_{j}}\Psi_{t}\geq1-\epsilon_{j}\right)\leq\exp\left(-\frac{s\epsilon_{j-1}}{2}\right)=\left(\frac{\delta}{4}\right)^{\epsilon_{j-1}/\epsilon_{0}}\leq\frac{\delta}{2^{j+1}}.$$
Summing over j completes the proof of the lemma. □

## Appendix C Proofs For The Final Epoch

Proof of Lemma 8. From Lemma 1, Part (i), we have

$$\Psi_{t}\leq\Psi_{t-1}+4\gamma_{t}^{2}\Big(\Big\|\frac{1}{N}\sum_{i=1}^{N}\mathbf{A}_{i,t}-\mathbf{\Sigma}\Big\|_{F}^{2}+\lambda_{1}^{2}\Psi_{t-1}\Big)-z_{t}.$$

Taking expectation conditioned on Ft−1, we get

$$\mathbb{E}\{\Psi_{t}|\mathcal{F}_{t-1}\}\leq\Psi_{t-1}(1+\gamma_{t}^{2}\lambda_{1}^{2})+4\gamma_{t}^{2}\sigma_{N}^{2}-\mathbb{E}\big\{z_{t}\big|\mathcal{F}_{t-1}\big\},$$

where the second term is due to Lemma 9. Now using the upper bound on $-\mathbb{E}\{z_{t}|\mathcal{F}_{t-1}\}$ from (Balsubramani et al., 2013, Lemma A.4), we get the following:

$$\mathbb{E}\{\Psi_{t}|\mathcal{F}_{t-1}\}\leq\Psi_{t-1}(1+\gamma_{t}^{2}\lambda_{1}^{2})+4\gamma_{t}^{2}\sigma_{N}^{2}-2\gamma_{t}(\lambda_{1}-\lambda_{2})\Psi_{t-1}(1-\Psi_{t-1})=\Psi_{t-1}\Big(1+\gamma_{t}^{2}\lambda_{1}^{2}-2\gamma_{t}(\lambda_{1}-\lambda_{2})(1-\Psi_{t-1})\Big)+4\gamma_{t}^{2}\sigma_{N}^{2}.$$

Finally, taking expectation over Ω′t, substituting γt = c0/(2(t + L)(λ1 − λ2)), and using the facts that Ω′t is Ft−1-measurable and that for t > tJ, Ψt−1 ≤ 1/2 and we lie in the sample space Ω′t with probability greater than 1 − δ (Theorem 3), we obtain

$$\mathbb{E}_{t}\{\Psi_{t}\}\leq\mathbb{E}_{t}\Bigg\{\Psi_{t-1}\Bigg(1+\frac{c_{0}^{2}\lambda_{1}^{2}}{4(t+L)^{2}(\lambda_{1}-\lambda_{2})^{2}}-\frac{c_{0}}{2(t+L)}\Bigg)\Bigg\}+\frac{4c^{2}\sigma_{N}^{2}}{(t+L)^{2}}$$
$$=\Bigg(1+\frac{c_{0}^{2}\lambda_{1}^{2}}{4(t+L)^{2}(\lambda_{1}-\lambda_{2})^{2}}-\frac{c_{0}}{2(t+L)}\Bigg)\mathbb{E}_{t}\{\Psi_{t-1}\}+\frac{4c^{2}\sigma_{N}^{2}}{(t+L)^{2}}$$
$$\leq\Bigg(1+\frac{c_{0}^{2}\lambda_{1}^{2}}{4(t+L)^{2}(\lambda_{1}-\lambda_{2})^{2}}-\frac{c_{0}}{2(t+L)}\Bigg)\mathbb{E}_{t-1}\{\Psi_{t-1}\}+\frac{4c^{2}\sigma_{N}^{2}}{(t+L)^{2}}.$$

This completes the proof of the lemma. □

Proposition 1. Let a1, b > 0 and a2 > 1 be some constants. Consider a nonnegative sequence (ut : t > tJ) that satisfies

$$u_{t}\leq\Big(1+\frac{a_{1}}{(t+L)^{2}}-\frac{a_{2}}{t+L}\Big)u_{t-1}+\frac{b}{(t+L)^{2}}.$$

Then we have:

$$u_{t}\leq\left({\frac{L+1}{t+L+1}}\right)^{a_{2}}\exp\left({\frac{a_{1}}{L+1}}\right)u_{0}+{\frac{1}{(t+L+1)}}\exp\left({\frac{a_{1}}{L+1}}\right)\left({\frac{L+2}{L+1}}\right)^{2}{\frac{b}{a_{2}-1}}.$$

Proof. Recursive application of the bound on ut gives:

$$u_{t}\leq\Bigg(\prod_{i=t_{J}+1}^{t}\Big(1+\frac{a_{1}}{(i+L)^{2}}-\frac{a_{2}}{i+L}\Big)\Bigg)u_{t_{J}}+\sum_{i=t_{J}+1}^{t}\frac{b}{(i+L)^{2}}\Bigg(\prod_{j=i+1}^{t}\Big(1+\frac{a_{1}}{(j+L)^{2}}-\frac{a_{2}}{j+L}\Big)\Bigg).\tag{44}$$

Using (Balsubramani et al., 2013, Lemma D.1) we can bound the product terms as

$$\prod_{j=i+1}^{t}\Big(1+\frac{a_{1}}{(j+L)^{2}}-\frac{a_{2}}{j+L}\Big)\leq\exp\left(\sum_{j=i+1}^{t}\frac{a_{1}}{(j+L)^{2}}-\sum_{j=i+1}^{t}\frac{a_{2}}{j+L}\right)\leq\left(\frac{i+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\sum_{j=i+1}^{t}\frac{a_{1}}{(j+L)^{2}}\right).\tag{45}$$

Next, we bound the last term here as

$$\exp\left(\sum_{j=i+1}^{t}\frac{a_{1}}{(j+L)^{2}}\right)\leq\exp\left(\int_{i+1}^{t+1}\frac{a_{1}}{(x+L)^{2}}\,dx\right)=\exp\left(\frac{a_{1}}{i+L+1}-\frac{a_{1}}{t+L+1}\right)\leq\exp\left(\frac{a_{1}}{i+L+1}\right).$$

Substituting this in (45) we get

$$\prod_{j=i+1}^{t}\Big(1+\frac{a_{1}}{(j+L)^{2}}-\frac{a_{2}}{j+L}\Big)\leq\left(\frac{i+L+1}{t+L+1}\right)^{a_{2}}\exp\Big(\frac{a_{1}}{i+L+1}\Big).$$

Substituting this in (44) we get

$$u_{t}\leq\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\sum_{i=t_{J}+1}^{t}\frac{b}{(i+L)^{2}}\prod_{j=i+1}^{t}\Big(1+\frac{a_{1}}{(j+L)^{2}}-\frac{a_{2}}{j+L}\Big)$$
$$\leq\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\sum_{i=t_{J}+1}^{t}\frac{b}{(i+L)^{2}}\left(\frac{i+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{i+L+1}\right)$$
$$\leq\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)\frac{b}{(t+L+1)^{a_{2}}}\sum_{i=t_{J}+1}^{t}\frac{(i+L+1)^{a_{2}}}{(i+L)^{2}}$$
$$\leq\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)\frac{b}{(t+L+1)^{a_{2}}}\left(\frac{L+2}{L+1}\right)^{2}\sum_{i=t_{J}+1}^{t}(i+L+1)^{a_{2}-2}.$$

Again applying (Balsubramani et al., 2013, Lemma D.1), we get the final result as follows:

$$u_{t}\leq\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)\frac{b}{(t+L+1)^{a_{2}}}\left(\frac{L+2}{L+1}\right)^{2}\frac{(t+L+1)^{a_{2}-1}}{a_{2}-1}$$
$$=\left(\frac{t_{J}+L+1}{t+L+1}\right)^{a_{2}}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)u_{t_{J}}+\frac{1}{(t+L+1)}\exp\left(\frac{a_{1}}{t_{J}+L+1}\right)\left(\frac{L+2}{L+1}\right)^{2}\frac{b}{a_{2}-1}.$$

This completes the proof of the proposition. □

## Appendix D Other Auxiliary Results

Proposition 2 (Bennett's Inequality (Boucheron et al., 2013)). Consider a zero-mean, bounded random variable Xi ∈ R (i.e., |Xi| ≤ h *almost surely) with variance* σ²i. Then for any s ∈ R*, we have*

$$\mathbb{E}\big\{e^{sX_{i}}\big\}\leq\exp\left(\sigma_{i}^{2}s^{2}\Big(\frac{e^{sh}-1-sh}{(sh)^{2}}\Big)\right).$$

Proposition 3. Let h := 2γtr² and s ∈ {d/4ϵ, (2/ϵ0) ln(4/δ)}. *It then follows that* $\frac{e^{sh}-1-sh}{(sh)^{2}}\leq1$.

Proof. It is straightforward to see that $\frac{e^{sh}-1-sh}{(sh)^{2}}\leq1$ as long as sh ≤ 7/4. Therefore, in order to prove this proposition, it suffices to show that the lower bound on L implies sh ≤ 7/4 for s ∈ {d/4ϵ, (2/ϵ0) ln(4/δ)}. We establish this claim as two separate cases for the two values of s.

Case I: For s = d/4ϵ, substituting the value of h gives us

$$sh=\frac{d\gamma_{t}r^{2}}{2\epsilon}=\frac{dcr^{2}}{2(t+L)\epsilon}\leq\frac{dcr^{2}}{2L\epsilon}\leq\frac{dcr^{2}}{2\epsilon L_{1}}\leq\frac{dcr^{2}}{2\epsilon}\cdot\frac{\epsilon}{8dr^{4}\operatorname*{max}(1,c^{2})\ln(4/\delta)}\leq\frac{1}{16\ln(4/\delta)}\leq\frac{7}{4}.$$

Case II: For s = (2/ϵ0) ln(4/δ), we obtain

$$sh=\frac{2\ln(4/\delta)cr^{2}}{\epsilon_{0}(t+L)}\leq\frac{2\ln(4/\delta)cr^{2}}{\epsilon_{0}L_{1}}\leq\frac{2\ln(4/\delta)cr^{2}}{\epsilon_{0}}\cdot\frac{\epsilon_{0}}{8r^{4}\operatorname*{max}(1,c^{2})\ln\frac{4}{\delta}}\leq\frac{1}{4}\leq\frac{7}{4}.$$

This completes the proof of the proposition. □

Proposition 4. *Assuming* $L\geq\frac{8dr^{4}\max(1,c^{2})}{\epsilon}\ln\frac{4}{\delta}+\frac{8d^{2}\sigma_{N}^{2}\max(1,c^{2})}{\epsilon^{2}}\ln\frac{4}{\delta}$ *and the parameter* s = d/4ϵ*, we have*

$$\frac{c^{2}}{L}\big(r^{4}+s\sigma_{N}^{2}\big)\leq\frac{\epsilon}{d}.$$

Proof. We prove this by proving the following two statements:

$$\frac{c^{2}r^{4}}{L}\leq\frac{c^{2}r^{4}}{L_{1}}\leq\frac{\epsilon}{2d}\quad\mathrm{and}\quad\frac{sc^{2}\sigma_{N}^{2}}{L}\leq\frac{sc^{2}\sigma_{N}^{2}}{L_{2}}\leq\frac{\epsilon}{2d}.$$

We start by proving the first statement:

$$\frac{c^{2}r^{4}}{L_{1}}\leq c^{2}r^{4}\cdot\frac{\epsilon}{8dr^{4}\operatorname*{max}(1,c^{2})\ln\frac{4}{\delta}}\leq\frac{\epsilon}{2d}.$$

Next, we prove the second statement as follows:

$$\frac{sc^{2}\sigma_{N}^{2}}{L_{2}}\leq\frac{c^{2}d\sigma_{N}^{2}}{4\epsilon}\cdot\frac{\epsilon^{2}}{8d^{2}\sigma_{N}^{2}\operatorname*{max}(1,c^{2})\ln\frac{4}{\delta}}\leq\frac{\epsilon}{2d}.$$

This completes the proof. □

Proposition 5. For $L\geq\frac{8r^{4}\max(1,c^{2})}{\epsilon_{0}}\ln\frac{4}{\delta}+\frac{8\sigma_{N}^{2}\max(1,c^{2})}{\epsilon_{0}^{2}}\ln\frac{4}{\delta}$, we have

$$(i)\ \frac{c^{2}r^{4}}{(t_{j-1}+L)}\leq\frac{\epsilon_{0}}{4},\quad and\quad(ii)\ \frac{2c^{2}\sigma_{N}^{2}}{\epsilon_{0}(t_{j-1}+L)}\ln\frac{4}{\delta}\leq\frac{\epsilon_{0}}{4}.$$

Proof.
We begin by noting that

$${\frac{c^{2}r^{4}}{(t_{j-1}+L)}}\leq{\frac{2c^{2}r^{4}}{L}}\leq{\frac{2c^{2}r^{4}}{L_{1}}}\leq2c^{2}r^{4}\cdot{\frac{\epsilon_{0}}{8r^{4}\operatorname*{max}(1,c^{2})\ln{\frac{4}{\delta}}}}\leq{\frac{\epsilon_{0}}{4}}.$$

Next we prove the second statement as follows:

$$\frac{2c^{2}\sigma_{N}^{2}}{\epsilon_{0}(t_{j-1}+L)}\ln\frac{4}{\delta}\leq\frac{2c^{2}\sigma_{N}^{2}}{\epsilon_{0}L}\ln\frac{4}{\delta}\leq\frac{2c^{2}\sigma_{N}^{2}}{\epsilon_{0}L_{2}}\ln\frac{4}{\delta}\leq\frac{2c^{2}\sigma_{N}^{2}}{\epsilon_{0}}\ln\frac{4}{\delta}\cdot\frac{\epsilon_{0}^{2}}{8\sigma_{N}^{2}\operatorname*{max}(1,c^{2})\ln(4/\delta)}\leq\frac{\epsilon_{0}}{4}.$$

This completes the proof of the proposition. □

Corollary 3 (Restatement of the Main Result (Theorem 1): Convergence in Probability). *Fix any* δ ∈ (0, 1) *and pick* c := c0/(2(λ1 − λ2)) *for any* c0 > 2*. Next, define*

$$L_{1}:=\frac{64edr^{4}\operatorname*{max}(1,c^{2})}{\delta^{2}}\ln\frac{4}{\delta},\quad L_{2}:=\frac{512e^{2}d^{2}\sigma_{N}^{2}\operatorname*{max}(1,c^{2})}{\delta^{4}}\ln\frac{4}{\delta},$$

pick any L ≥ L1 + L2, and choose the step-size sequence as γt := c/(L + t)*. Then, as long as Assumptions* [A1] and [A2] *hold, we have for D-Krasulina that there exists a sequence* (Ω′t)t∈Z+ *of nested sample spaces such that* $\mathbb{P}\big(\cap_{t>0}\Omega_{t}^{\prime}\big)\geq1-\delta$ *and*

$$\mathbb{P}\left\{\Psi_{t}\geq C_{1}^{\prime}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}+C_{2}^{\prime}\Big(\frac{\sigma_{N}^{2}}{t+L+1}\Big)\right\}\leq2\delta,\tag{46}$$

where C′1 and C′2 *are constants defined as*

$$C_{1}^{\prime}:=\frac{1}{2\delta}\left(\frac{4ed}{\delta^{2}}\right)^{\frac{\delta}{2\ln2}}e^{2c^{2}\lambda_{1}^{2}/L}\quad\mathrm{and}\quad C_{2}^{\prime}:=\frac{8c^{2}e^{(c_{0}+2c^{2}\lambda_{1}^{2})/L}}{\delta(c_{0}-2)}.$$

Proof. The proof follows by first using Markov's inequality to bound the conditional probability

$$\mathbb{P}_{t}\left\{\Psi_{t}\geq C_{1}^{\prime}\Big(\frac{L+1}{t+L+1}\Big)^{\frac{c_{0}}{2}}+C_{2}^{\prime}\Big(\frac{\sigma_{N}^{2}}{t+L+1}\Big)\right\},$$

then using the bound on Et{Ψt} in Theorem 1 to further bound the conditional probability by δ, and finally removing the conditioning on the nested sample spaces by using the facts that (i) P(A) ≤ P(A|B) + P(Bc) for any two events A and B, and (ii) $\mathbb{P}\big(\cup_{t>0}(\Omega_{t}^{\prime})^{c}\big)\leq\delta$.
Review 1: Summary: This paper studies a decentralized version of Krasulina's method for streaming PCA (i.e., eigenvector estimation), with and without mini-batching. The paper provides conditions under which D-Krasulina achieves a linear speedup in the number of distributed workers. In the mini-batch setting, conditions are also provided where a linear speedup is achieved. For the extreme case where, even with mini-batching, the system of N workers is not able to process samples faster than they are arriving, further analysis quantifies the rate of convergence when some samples are dropped, with an assumption that all samples are iid. In this case, DM-Krasulina still achieves near-optimal convergence rates as long as the fraction of dropped samples is not too large. These results are validated with numerical experiments. Strengths and Weaknesses: ## Strengths * The paper is generally well-written and easy to follow * The related work is fairly comprehensive, nicely framing the contribution of this paper * Theoretical guarantees are rigorous and establish strong performance bounds for the proposed approach ## Weaknesses * The system timing model considered in the paper has some limitations. The processing and communication times $R_p$ and $R_c$ implicitly depend on the problem dimension $d$. The communication time $R_c$ also depends very explicitly on $N$, the number of processors aggregating results. Although this is briefly mentioned in Sec 2.2.1 when $R_c$ is introduced, this brings into question the way that it is used to obtain conditions on $N$ relative to $R_s$, $R_p$, $R_c$, and $b$ throughout the paper. * A few older relevant related works are missing (see below). * Some aspects of how the main theoretical results are presented could be slightly improved/clarified (see below). * Setting a schedule to satisfy the conditions of Theorem 4 requires knowing the gap $\lambda_1 - \lambda_2$, among other problem-dependent constants, which may not be possible in some applications. * The introduction motivates the problem considered by discussing challenges with high-dimensional data, while the experiments only consider problems with dimension at most 784. The paper would be stronger if the method were demonstrated on much higher-dimensional datasets. * The experimental section does not include comparisons to any other method from the literature. Requested Changes: ## Requested changes These are the main points I view as critical. * Please clarify/justify the timing model used in the paper, given that $R_c$ depends on $N$ and that this is also used to determine a bound on the number of processors $N$. * While the related work section covers most relevant related work, it would be good to also discuss the relationship to [Yun et al., "Fast and memory optimal low-rank matrix approximation," NeurIPS 2015.](https://proceedings.neurips.cc/paper/2015/hash/21be9a4bd4f81549a9d1d241981cec3c-Abstract.html) * In the statement of Corollary 1, please clarify whether the event occurring with probability $1 - \delta$ involves the intersection over $0 < t \le T_B$, or all $t > 0$ (i.e., the probability $1-\delta$ is still only achieved asymptotically). * Setting a schedule to satisfy the conditions of Theorem 4 requires knowing the gap $\lambda_1 - \lambda_2$, among other problem-dependent constants. Please add a discussion of how one might approach this in practice when applying the method. 
* One typical characteristic of streaming data settings (in practice) is that the data distribution may not be iid; rather over time, there may be some distribution shift. It would be good to add some discussion about this iid assumption and ways it may be relaxed in future work. ## Additional questions * Given the relationship of Krasulina's method to SGD, might one expect to achieve a superior method by incorporating some sort of momentum or acceleration term? Broader Impact Concerns: None noted ================================================== Review 2: Summary: This paper studies the PCA problem with streaming data. It focuses on a setting where the streaming data arrives at a higher rate than the processing power of a single processor. Therefore it utilizes multiple processors that communicate via a server-worker structure or a decentralized general network structure to process the streaming data in parallel. Theoretical analysis of the Krasulina method-based algorithms in the distributed setting are presented, and they show the convergence rate of $O(1/T)$ when the number of samples observed is $T$. Strengths and Weaknesses: **Strengths**: The proof seems to be correct and the paper has provided a good discussion for the proof technique used (which is a key contribution). Overall, the paper has the following strengths: 1. It provides a tighter convergence analysis than (Balsubramani et al., 2013) and show that the asymptotic convergence rate of the Krasulina's algorithm is $O(\sigma_N^2/t)$ rather than $O(r^2/t)$ as in (Balsubramani et al., 2013), where $t$ is the iteration number. As such, the result shows that the distributed Krasulina's algorithm is able to take advantage of parallel processing to speed up computation of PCA since the simple aggregation of samples allows one to reduce variance of the estimate. 2. It analyzed the effects of skipping data in the multi-batch Krasulina method and show that as long as the skipping data rate does not exceed $\mu = O(B)$, then the optimal asymptotic convergence rate of $O(1/T)$ can be maintained. **Weaknesses**: There are some weaknesses with the current submission as listed below 1. The algorithms analyzed are standard and can be considered as the direct extension of classical Krasulina method to the distributed setting. Moreover, important aspects to distributed algorithms such as communication efficiency of the distributed algorithms has not been addressed. In this regard, the paper seems to be better positioned as presenting an improved analysis of Krasulina over (Balsubramani et al., 2013). 2. Compared to prior work (Jain et al. 2016) which has a variance dependent bound unlike (Balsubramani et al., 2013), one of the main claims in this paper is that it provides a high probability bound ($1-\delta$) to arbitrary $\delta$. However, the claim is slightly unfair: - (a) as mentioned in (Jain et al. 2016), it is possible to run $O( log(1/\delta) )$ of their algorithms to obtain convergence w.p. at least $1-\delta$; - (b) to ensure convergence for the Krasulina algorithm in this paper with probability at least $1-\delta$, it requires setting the step size parameter to be $L \asymp \ln(1/\delta)$ - in other words, to ensure convergence with an arbitrarily high probability, it is required to set an arbitrarily small step size. - Moreover, it should be noted that the derived convergence results are **expected** convergence rate which holds with high probability, while (Jain et al. 2016) derived a high probability convergence rate. 
Requested Changes: The paper is in general OK but the authors should address the issues mentioned in "Weaknesses" and make corresponding revisions. There are some additional (but perhaps minor) issues: 1. In (10), the random variable $\Psi(w)$ is not defined. 2. In Lemma 8, what is the constant $c$? 3. It seems strange to impose a new definition in Definition 5 for the variance of distributed samples, while it is clear that the variances are related by $\sigma_N^2 = \sigma^2/N$. 4. At the end of p.17, the upper bound should be $M_{0}$ instead of $M_{t_0}$? 
Broader Impact Concerns: N/A 
================================================== 
Review 3: Summary: The paper looks at computing the top singular vector from a set of iid vector samples arriving at a very high rate. In particular, it is concerned with when the rate is so high that directly adapting the updates via Krasulina's (or Oja's) algorithm is not possible. The main result of the paper is that this computation can be spread across distributed machines, and the updates averaged still achieve the optimal linear convergence rate. Batch variants and when a limited amount of data must be dropped, even in the distributed setting, are also analyzed with similar results. The main technical idea is revising existing results showing the convergence based on worst case bound on each update to one that achieves the same linear convergence rate using a variance bound. This analysis can then be adapted to distributed and batch analysis where averaging estimates across sets decreases variance at the desired linear rate.
Strengths and Weaknesses: The paper's writing takes great care to discuss extensive related work, explain the computational model and when and why they are appropriate, and to provide intuition for how the analysis fits together and the main ideas. The topic is well-motivated, as dealing with massive data is an important challenge and the streaming model considered is aligned with how such data is dealt with. Moreover, estimating the top singular vector is an important challenge in data analysis. Hence, this paper is appropriate for TMLR, and I recommend acceptance. Given that, I have a couple critiques. The experimental section does a fine job of illustrating how well the bounds and analysis of parameters in the data assumptions and algorithms align with empirical performance. However, the data sets are significantly smaller than the sizes needed to motivate the sort of model studied. Nevertheless, I think this is fine, since one needs to verify the accuracy, and running on such motivating sizes would require extensive computing cost. Also the main claims are still well illustrated. There are caveats that other unforeseen issues may occur when scaling to sizes discussed in the paper, such as node failure or larger network cost than expected. However, I rate this concern as minor, and do not think the authors need to extend the empirical study in this way. Second, the paper uses the term PCA, and finding the top principal component, to mean finding the vector with maximum variance. However, it assumes the data has mean zero. Typically PCA is more general, and does not require the data to have zero mean, and often the process is to center the data before finding the top singular vector. As such, the problem the paper actually solves is finding the top singular vector, which does not require centering. That said, the paper does not hide this assumption, and this assumed equivalence between PCA and SVD (by assuming the data is centered) is not uncommon in this sort of analysis. However, I would appreciate if the authors spent a bit more time discussing this distinction, and the challenges in addressing the full case where the data cannot be assumed to be centered. 
Requested Changes: Please discuss challenges associated with not assuming the data is centered. 
Broader Impact Concerns: This is analysis and efficient algorithm design for a very standard and generic technique. There are no significant broader impact concerns. 
================================================== 
Metareview: Recommendation: Accept as is 
Comment: The writing looks nice. The reviewers' comments were all addressed, and I'm pleased to recommend Acceptance. 
==================================================
# Private GANs, Revisited∗

Alex Bie† *yabie@uwaterloo.ca*
University of Waterloo

Gautam Kamath *g@csail.mit.edu*
University of Waterloo

Guojun Zhang *guojun.zhang@huawei.com*
Huawei Noah's Ark Lab

Reviewed on OpenReview: *https://openreview.net/forum?id=9sVCIngrhP*

## Abstract

We show that the canonical approach for training differentially private GANs - updating the discriminator with differentially private stochastic gradient descent (DPSGD) - can yield significantly improved results after modifications to training. Specifically, we propose that existing instantiations of this approach neglect to consider how adding noise *only to* discriminator updates inhibits discriminator training, disrupting the balance between the generator and discriminator necessary for successful GAN training. We show that a simple fix - taking more discriminator steps between generator steps - restores parity between the generator and discriminator and improves results. Additionally, with the goal of restoring parity, we experiment with other modifications – namely, large batch sizes and adaptive discriminator update frequency - to improve discriminator training and see further improvements in generation quality. Our results demonstrate that on standard image synthesis benchmarks, DPSGD outperforms all alternative GAN privatization schemes. Code: https://github.com/alexbie98/dpgan-revisit.

## 1 Introduction

Differential privacy (DP) (Dwork et al., 2006b) has emerged as a compelling approach for training machine learning models on sensitive data. However, incorporating DP requires changes to the training process. Notably, it prevents the modeller from working directly with sensitive data, complicating debugging and exploration. Furthermore, upon exhausting their allocated privacy budget, the modeller is restricted from interacting with sensitive data. One approach to alleviate these issues is by producing *differentially private synthetic data*, which can be plugged directly into existing machine learning pipelines, without further concern for privacy.

Towards generating high-dimensional, complex data (such as images), a line of work has examined privatizing generative adversarial networks (GANs) (Goodfellow et al., 2014) to produce DP synthetic data. Initial efforts proposed to use differentially private stochastic gradient descent (DPSGD) (Abadi et al., 2016) as a drop-in replacement for SGD to update the GAN discriminator - an approach referred to as *DPGAN* (Xie et al., 2018; Beaulieu-Jones et al., 2019; Torkzadehmahani et al., 2019). However, follow-up work (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020; Wang et al., 2021) departs from this approach: they propose alternative privatization schemes for GANs, and report significant improvements over the DPGAN baseline. Other methods for generating DP synthetic data diverge from GAN-based architectures, yielding improvements to utility in most cases (Table 2). This raises the question of whether GANs are suitable for DP training, or if bespoke architectures are required for DP data generation.

∗Authors GK and GZ are listed in alphabetical order. †Work performed in part while interning at Huawei.

![1_image_0.png](1_image_0.png)

![1_image_1.png](1_image_1.png)

(a) MNIST FID over a training run    (b) Corresponding images at (10, 10−5)-DP

Figure 1: DPGAN results on MNIST synthesis at (10, 10−5)-DP. (a) We run 3 seeds, plotting mean, min, and max FID along the runs.
We find that increasing nD, the number of discriminator steps taken between generator steps, significantly improves image synthesis. Increasing nD = 1 → nD = 50 improves FID from 205.3 ± 0.9 → 18.5 ± 0.9. (b) Corresponding synthesized images (each is trained with the same privacy budget). We observe that large nD improves visual quality, and low nD leads to mode collapse.

| Privacy ε | Method | MNIST FID | MNIST Acc. (%) | FashionMNIST FID | FashionMNIST Acc. (%) | CelebA-Gender FID | CelebA-Gender Acc. (%) |
|---|---|---|---|---|---|---|---|
| ε = ∞ | Real Data | 1.0 | 99.2 | 1.5 | 92.5 | 1.1 | 96.6 |
| ε = ∞ | GAN | 3.4 ± 0.1 | 97.0 ± 0.1 | 16.5 ± 1.7 | 79.5 ± 0.8 | 30.0 ± 1.6 | 92.0 ± 0.4 |
| ε = 10 | Best Private GAN | 61.34 | 80.92 | 131.34 | 70.61 | - | 70.72 |
| ε = 10 | DPGAN | 179.16 | 80.11 | 243.80 | 60.98 | - | 54.09 |
| ε = 9.32 | Our DPGAN | 12.8 ± 0.3 | 95.1 ± 0.1 | 62.3 ± 8.7 | 74.7 ± 0.4 | 170.8 ± 20.3 | 82.4 ± 4.4 |

Table 1: A summary of our results compared to results reported in previous work on private GANs. For our results, we run 3 seeds and report mean ± std. *Acc. (%)* refers to downstream classification accuracy of CNN models trained with generated data. The middle two rows are a composite of the best results reported in the literature for DPGAN and alternative GAN privatization schemes (*"Best Private GAN"*); see Tables 2 and 3 for correspondences. Here we use Gopi et al. (2021) privacy accounting for our results. We find significant improvement over all previous GAN-based methods for DP image synthesis.

Our contributions. We show that DPGANs give far better utility than previously demonstrated, and compete with or outperform almost all other methods for DP image synthesis.1 Hence, we conclude that previously demonstrated deficiencies of DPGANs should not be attributed to inherent limitations of the framework, but rather, training issues. Specifically, we propose that the *asymmetric* noise addition in DPGANs (adding noise to discriminator updates only) inhibits discriminator training while leaving generator training untouched, disrupting the balance necessary for successful GAN training. Indeed, the seminal study of Goodfellow et al. (2014) points to the challenge of "synchronizing the discriminator with the generator" in GAN training, suggesting that, "G must not be trained too much without updating D, in order to avoid 'the Helvetica scenario' [*mode collapse*]". Prior DPGAN implementations in the literature do not take this into consideration in the process of porting over non-private GAN training recipes. We propose that taking more discriminator steps between generator updates addresses the imbalance introduced by noise. With this change, DPGANs improve significantly (see Figure 1 and Table 1). Furthermore, we show this perspective on DPGAN training ("restoring parity to a discriminator weakened by DP noise") can be applied to improve training. We make other modifications to discriminator training - larger batch sizes and adaptive discriminator update frequency - to improve discriminator training and further improve upon the aforementioned results.

1A notable exception is diffusion models, discussed further in Section 2.

In summary, we make the following contributions:

- We find that taking more discriminator steps between generator steps significantly improves DPGANs. Contrary to previous results in the literature, DPGANs outperform alternative GAN privatization schemes.
- We present empirical findings towards understanding why more frequent discriminator steps help. We propose an explanation based on *asymmetric noise addition* for why vanilla DPGANs do not perform well, and why taking more frequent discriminator steps helps.

- We employ our explanation as a principle for designing better private GAN training recipes - incorporating larger batch sizes and adaptive discriminator update frequency - and indeed are able to improve over the aforementioned results.

## 2 Related Work

Private GANs. The baseline DPGAN that employs a DPSGD-trained discriminator was introduced in Xie et al. (2018), and studied in follow-up work of Torkzadehmahani et al. (2019); Beaulieu-Jones et al. (2019). Despite significant interest in the approach (≈400 citations at time of writing), we were unable to find studies that explore the modifications we perform or uncover similar principles for improving DPGAN training. We note that the number of discriminator steps taken per generator step, nD, appears as a hyperparameter in the framework outlined by the seminal study of Goodfellow et al. (2014), and in follow-up work such as WGAN (Arjovsky et al., 2017). Xie et al. (2018) privatizes WGAN, adopting its imbalanced stepping strategy of nD = 5; however, it makes no mention of the importance of the parameter (along with Torkzadehmahani et al. (2019), which uses nD = 1). As we show in Figure 1a, ensuring that nD lies within a critical range (as determined by DPSGD hyperparameters) is key to adapting a GAN training recipe to DP; selection of nD is the difference between state-of-the-art-competitive performance and something that is entirely not working.2 As a consequence, subsequent work has departed from DPGANs, examining alternative privatization schemes for GANs (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020; Wang et al., 2021). Broadly speaking, these approaches employ subsample-and-aggregate (Nissim et al., 2007) via the PATE approach (Papernot et al., 2017), dividing the data into ≥ 1K disjoint partitions and training teacher discriminators separately on each one. Our work shows that these privatization schemes are outperformed by DPSGD.

DP generative models. Other generative modelling frameworks have been applied to generate DP synthetic data: VAEs (Chen et al., 2018), maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022; Harder et al., 2022), Sinkhorn divergences (Cao et al., 2021), normalizing flows (Waites & Cummings, 2021), and diffusion models (Dockhorn et al., 2022). In a different vein, Chen et al. (2022) avoids learning a generative model, and instead generates a coreset of examples (≈ 20 per class) for the purpose of training a classifier. These approaches fall into two camps: applications of DPSGD to existing, highly-performant generative models; or custom approaches designed specifically for privacy which fall short of GANs when evaluated at their non-private limits (ε → ∞).

Concurrent work on DP diffusion models. Simultaneous and independent work by Dockhorn et al. (2022) is the first to investigate DP training of diffusion models. They achieve impressive state-of-the-art results for DP image synthesis in a variety of settings, in particular, outperforming our results for DPGANs reported in this paper. We consider our results to still be of significant interest to the community, as we challenge the conventional wisdom regarding deficiencies of DPGANs, showing that they give much better utility than previously thought.
Indeed, GANs are still one of the most popular and well-studied generative models, and consequently, there are many cases where one would prefer a GAN over an alternative approach. By revisiting several of the design choices in DPGANs, we give guidance on how to seamlessly introduce differential privacy into such pipelines. Furthermore, both our work and the work of Dockhorn et al. (2022) are aligned in supporting a broader message: training conventional machine learning architectures with DPSGD frequently achieves state-of-the-art results under differential privacy. Indeed, both our results and theirs outperform almost all custom methods designed for DP image synthesis. This reaffirms a similar message recently demonstrated in other private ML settings, including image classification (De et al., 2022) and NLP (Li et al., 2022; Yu et al., 2022).

2For further discussion on the role of hyperparameters in DP machine learning, see Appendix F.

## 3 Preliminaries

Our goal is to train a generative model on sensitive data that is safe to release, that is, it does not leak the secrets of individuals in the training dataset. We do this by ensuring the training algorithm A - which takes as input the sensitive dataset D and returns the parameters of a trained (generative) model θ - satisfies differential privacy.

Definition 1 (Differential Privacy, Dwork et al. 2006b). A randomized algorithm A : U → Θ is (ε, δ)-differentially private if for every pair of neighbouring datasets D, D′ ∈ U, we have P{A(D) ∈ S} ≤ exp(ε) · P{A(D′) ∈ S} + δ for all (measurable) S ⊆ Θ.

In this work, we adopt the add/remove definition of DP, and say two datasets D and D′ are neighbouring if they differ in at most one entry, that is, D = D′ ∪ {x} or D′ = D ∪ {x}.

One convenient property of DP is *closure under post-processing*, which says that further outputs computed from the output of a DP algorithm (without accessing private data by any other means) are safe to release, satisfying the same DP guarantees as the original outputs. In our case, this means that interacting with a privatized model (e.g., using it to compute gradients on non-sensitive data, generate samples) does not lead to any further privacy violation.

DPSGD. A gradient-based learning algorithm can be privatized by employing differentially private stochastic gradient descent (DPSGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) as a drop-in replacement for SGD. DPSGD involves clipping per-example gradients and adding Gaussian noise to their sum, which effectively bounds and masks the contribution of any individual point to the final model parameters. Privacy analysis of DPSGD follows from several classic tools in the DP toolbox: Gaussian mechanism, privacy amplification by subsampling, and composition (Dwork et al., 2006a; Dwork & Roth, 2014; Abadi et al., 2016; Wang et al., 2019). In our work, we use two different privacy accounting methods for DPSGD: (a) the classical approach of Mironov et al. (2019), implemented in Opacus (Yousefpour et al., 2021), and (b) the recent exact privacy accounting of Gopi et al. (2021). By default, we use the former technique for a closer direct comparison with prior works (though we note that some prior works use even looser accounting techniques). However, the latter technique gives tighter bounds on the true privacy loss, and for all practical purposes, is the preferred method of privacy accounting. We use Gopi et al. (2021) accounting only where indicated in Tables 1, 2, and 3.
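To make the DPSGD update concrete, the following is a minimal PyTorch-style sketch of a single privatized step: per-example gradients are clipped to norm C, summed, and masked with Gaussian noise before the optimizer step. This is an illustrative sketch only - the naive per-example loop, function names, and default values are our assumptions, not the actual implementation used in this paper (which relies on Opacus's vectorized per-example gradients):

```python
import torch

def dpsgd_step(model, loss_fn, batch, optimizer, C=1.0, sigma=1.0, expected_batch_size=128):
    """One DPSGD update: clip each example's gradient to L2 norm C, sum the clipped
    gradients, add N(0, C^2 sigma^2 I) noise, and normalize by the expected batch size."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in batch:  # naive per-example loop; practical implementations vectorize this
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, C / (norm.item() + 1e-12))  # clip: each contribution has norm <= C
        for s, g in zip(grad_sum, grads):
            s.add_(g, alpha=scale)
    optimizer.zero_grad()
    for p, s in zip(params, grad_sum):
        noise = torch.randn_like(s) * (C * sigma)    # Gaussian mechanism
        p.grad = (s + noise) / expected_batch_size
    optimizer.step()
```

Normalizing by the expected (rather than realized) batch size keeps the sensitivity of the summed gradient bounded under Poisson subsampling, which is why the expected batch size B appears in Algorithm 1 below.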
DPGANs. Algorithm 1 details the training algorithm for DPGANs, which is effectively an instantiation of DPSGD. Note that only gradients for the discriminator D must be privatized (via clipping and noise), and not those for the generator G. This is a consequence of closure under post-processing - the generator only interacts with the sensitive dataset indirectly via discriminator parameters, and therefore does not need further privatization.

## 4 Frequent Discriminator Steps Improves Private GANs

In this section, we discuss our main finding: nD, the number of discriminator steps taken between each generator step (see Algorithm 1), plays a significant role in the success of DPGAN training.

Algorithm 1 TrainDPGAN(D; ϕ0, θ0, OptD, OptG, nD, T, B, C, σ, δ)

1: **Input:** Labelled dataset D = {(xj, yj)}nj=1. Discriminator D and generator G initializations ϕ0 and θ0. Optimizers OptD, OptG. Hyperparameters: nD (D steps per G step), T (total number of D steps), B (expected batch size), C (clipping norm), and σ (noise level). Privacy parameter δ.
2: q ← B/|D| and t, k ← 0 ▷ Calculate sampling rate q, initialize counters.
3: **while** t < T **do** ▷ Update D with DPSGD.
4:  St ∼ PoissonSample(D, q) ▷ Sample a real batch St by including each (x, y) ∈ D w.p. q.
5:  S̃t ∼ G(·; θk)^B ▷ Sample fake batch S̃t.
6:  gϕt ← Σ(x,y)∈St clip(∇ϕt(−log(D(x, y; ϕt))); C) + Σ(x̃,ỹ)∈S̃t clip(∇ϕt(−log(1 − D(x̃, ỹ; ϕt))); C) ▷ Clip per-example gradients.
7:  ĝϕt ← (1/2B)(gϕt + zt), where zt ∼ N(0, C²σ²I) ▷ Add Gaussian noise.
8:  ϕt+1 ← OptD(ϕt, ĝϕt)
9:  t ← t + 1
10:  **if** nD divides t **then** ▷ Perform G update every nD steps.
11:   S̃′t ∼ G(·; θk)^B
12:   gθk ← (1/B) Σ(x̃,ỹ)∈S̃′t ∇θk(−log(D(x̃, ỹ; ϕt)))
13:   θk+1 ← OptG(θk, gθk)
14:   k ← k + 1
15:  **end if**
16: **end while**
17: ε ← PrivacyAccountant(T, σ, q, δ) ▷ Compute privacy budget spent.
18: **Output:** Final G parameters θk and (ε, δ)-DP guarantee.

Fixing a setting of DPSGD hyperparameters, there is an optimal range of values for nD that maximizes generation quality, in terms of both visual quality and utility for downstream classifier training. This value can be quite large (nD ≈ 100 in some cases).

## 4.1 Experimental Details

Setup. We focus on labelled generation of MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017), both of which are comprised of 60K 28 × 28 grayscale images divided into 10 classes. To build a strong baseline, we begin from an open source PyTorch (Paszke et al., 2019) implementation3 of DCGAN (Radford et al., 2016) that performs well non-privately, and copy their training recipe. We then adapt their architecture to our purposes: removing BatchNorm layers (which are not compatible with DPSGD) and adding label embedding layers to enable labelled generation. Training this configuration non-privately yields labelled generation that achieves FID scores of 3.4±0.1 on MNIST and 16.5±1.7 on FashionMNIST. D and G have 1.72M and 2.27M trainable parameters respectively. For further details, please see Appendix B.1.

Privacy implementation. To privatize training, we use Opacus (Yousefpour et al., 2021) which implements per-example gradient computation. As discussed before, we use the Rényi differential privacy (RDP) accounting of Mironov et al. (2019) (except in a few noted instances, where we instead use the tighter Gopi et al. (2021) accounting). For our baseline setting, we use the following DPSGD hyperparameters: we keep the non-private (expected) batch size B = 128, and use a noise level σ = 1 and clipping norm C = 1. Under these settings, we have the budget for T = 450K discriminator steps when targeting (10, 10−5)-DP.
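As a companion to the pseudocode, here is a condensed training-loop sketch reusing the `dpsgd_step` helper sketched in Section 3. Labels and architecture details are omitted, and all names and values here are illustrative assumptions rather than our exact implementation:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_dpgan(D, G, data, T=450_000, n_D=50, B=128, C=1.0, sigma=1.0, z_dim=100):
    """Sketch of Algorithm 1: DPSGD on the discriminator D only; the generator G is
    updated every n_D steps with ordinary (non-noisy) gradients, by post-processing."""
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    q = B / len(data)  # Poisson sampling rate
    for t in range(1, T + 1):
        # Discriminator step (lines 4-9 of Algorithm 1): the only privatized update.
        real = data[torch.rand(len(data)) < q]        # Poisson-sampled real batch
        fake = G(torch.randn(B, z_dim)).detach()      # fake batch from the current G
        batch = [(x, torch.ones(1)) for x in real] + [(x, torch.zeros(1)) for x in fake]
        dpsgd_step(D, lambda out, y: bce(out, y.view_as(out)), batch, opt_D,
                   C=C, sigma=sigma, expected_batch_size=2 * B)
        # Generator step every n_D discriminator steps (lines 10-15): no noise needed.
        if t % n_D == 0:
            opt_G.zero_grad()
            out = D(G(torch.randn(B, z_dim)))
            bce(out, torch.ones_like(out)).backward()  # non-saturating generator loss
            opt_G.step()
```

Note that n_D appears nowhere in the privatized computation: it only controls how often the post-processed generator update fires.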
Evaluation. We evaluate our generative models by examining the *visual quality* and *utility for downstream tasks* of generated images. Following prior work, we measure visual quality by computing the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 60K generated images and the entire test set.4 To measure downstream task utility, we again follow prior work, and train a CNN classifier on 60K generated image-label pairs and report its accuracy on the real test set.

3Courtesy of Hyeonwoo Kang (https://github.com/znxlwm). Code available at this link.

4We use an open source PyTorch implementation to compute FID: https://github.com/mseitzer/pytorch-fid.

![5_image_0.png](5_image_0.png)

Figure 2: DPGAN results over training runs using different discriminator update frequencies nD, targeting (10, 10−5)-DP. Each plotted line indicates the mean, min, and max utility of 3 training runs with different seeds, as the privacy budget is expended. (a) We plot the test set accuracy of a CNN trained on generated data only. Accuracy mirrors the FID scores from Figure 1a. Going from nD = 1 to nD = 50 improves accuracy from 40.3 ± 6.3% → 93.0 ± 0.6%. Further nD increases hurt accuracy. **(b) and (c)** We obtain similar results for FashionMNIST. Note that the optimal nD is higher (around nD ≈ 100). At nD = 100, we obtain an FID of 85.9 ± 6.4 and accuracy of 71.7 ± 1.0%.

![5_image_1.png](5_image_1.png)

Figure 3: Evolution of samples drawn during training with nD = 10, when targeting (10, 10−5)-DP. This setting reports its best FID and downstream accuracy at t = 50K iterations (ε ≈ 2.85). As training progresses beyond this point, we observe mode collapse for several classes (e.g., the 6's and 7's, particularly at t = 150K), co-occurring with the deterioration in evaluation metrics (these samples correspond to the first 4 data points in the nD = 10 line in Figures 1a and 2a).

## 4.2 Results

More frequent discriminator steps improve generation. We plot in Figures 1a and 2 the evolution of FID and accuracy during DPGAN training for both MNIST and FashionMNIST, under varying discriminator update frequencies nD. The effect of this parameter has outsized impact on the final results. For MNIST, nD = 50 yields the best results; on FashionMNIST, nD = 100 is the best. We emphasize that increasing the *frequency* of discriminator steps, relative to generator steps, does not affect the privacy cost of Algorithm 1. For any setting of nD, we perform the same number of noisy gradient queries on real data - what changes is the total number of generator steps taken over the course of training, which is reduced by a factor of nD.

Private GANs are on a path to mode collapse. For our MNIST results, we observe that at low discriminator update frequencies (nD = 10), the best FID and accuracy scores occur early in training, well before the privacy budget we are targeting is exhausted.5 Examining Figures 1a and 2a at 50K discriminator steps (the leftmost points on the charts; ε ≈ 2.85), the nD = 10 runs (in orange) have better FID and accuracy than both: (a) later checkpoints of the nD = 10 runs, after training longer and spending *more* privacy budget; and (b) other settings of nD at that stage of training. We are not aware of any mentions of this aspect of DPGAN training in papers reporting DPGAN baselines for labelled image synthesis.

5This observation has been reported in Neunhoeffer et al. (2021), serving as motivation for their remedy of taking a mixture of intermediate models encountered in training.
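The claim above - that nD does not affect the privacy cost - can also be sanity-checked numerically, since the accounting depends only on (T, σ, q, δ). A short sketch, assuming the `RDPAccountant` interface of recent Opacus versions (treat the exact signatures as an assumption):

```python
from opacus.accountants import RDPAccountant

def epsilon_spent(T, sigma, q, delta=1e-5):
    """Privacy budget consumed by T noisy discriminator steps; n_D never enters."""
    accountant = RDPAccountant()
    for _ in range(T):
        accountant.step(noise_multiplier=sigma, sample_rate=q)
    return accountant.get_epsilon(delta=delta)

# Baseline setting of Section 4.1: B = 128, |D| = 60K, sigma = 1, T = 450K.
# The result is the same whether those T discriminator steps are interleaved with
# T generator steps (n_D = 1) or T/50 generator steps (n_D = 50):
# print(epsilon_spent(450_000, sigma=1.0, q=128 / 60_000))
```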
![6_image_0.png](6_image_0.png)

Figure 4: Exponential moving average (β = 0.95) of GAN discriminator accuracy on mini-batches, immediately before each generator step. While non-privately the discriminator maintains a 60% accuracy, the private discriminator with nD = 1 is effectively a random guess. Increasing the number of discriminator steps recovers the discriminator's advantage early on, leading to generator improvement. As the generator improves, the discriminator's task is made more difficult, driving down accuracy.

We attribute generator deterioration with more training to *mode collapse*: a known failure mode of GANs where the generator resorts to producing a small set of examples rather than representing the full variation present in the underlying data distribution. In Figure 3, we plot the evolution of generated images for an nD = 10 run over the course of training and observe qualitative evidence of mode collapse: at 50K steps, all generated images are varied, whereas at 150K steps, many of the columns (in particular the 6's and 7's) are slight variations of the same image. In contrast, successfully trained GANs do not exhibit this behaviour (see the nD = 50 images in Figure 1b). Mode collapse co-occurs with the deterioration in FID and accuracy observed in the first 4 data points of the nD = 10 runs (in orange) in Figures 1a and 2a.

An optimal discriminator update frequency. These results suggest that *fixing other DPSGD hyperparameters, there is an optimal setting for the discriminator step frequency* nD that strikes a balance between: (a) being too low, causing the generation quality to peak early in training and then undergo mode collapse, resulting in all subsequent training consuming additional privacy budget *without improving the model*; and (b) being too high, preventing the generator from taking enough steps to converge before the privacy budget is exhausted (an example of this is the nD = 200 run in Figure 2a). Striking this balance results in the most effective utilization of privacy budget towards improving the generator.

## 5 Why Does Taking More Steps Help?

In this section, we present empirical findings towards understanding why more frequent discriminator steps improve DPGAN training. We propose an explanation that is consistent with our findings.

How does DP affect GAN training? Figure 4 compares the accuracy of the GAN discriminator on held-out real and fake examples immediately before each generator step, between private and non-private training with different settings of nD. We observe that non-privately at nD = 1, discriminator accuracy stabilizes at around 60%. Naively introducing DP (nD = 1) leads to a qualitative difference: DP causes discriminator accuracy to drop to 50% (i.e., comparable accuracy to randomly guessing) immediately at the start of training, to never recover.6 For other settings of nD, we make the following observations: (1) larger nD corresponds to higher discriminator accuracy in early training; (2) in a training run, discriminator accuracy decreases throughout as the generator improves; (3) after discriminator accuracy falls below a certain threshold, the generator degrades or sees limited improvement.7

6Our plot only shows the first 15K generator steps, but we remark that this persists until the end of training (450K steps).

7For nD = 10, accuracy falls below 50% after 5K G steps (= 50K D steps), which corresponds to the first point in the nD = 10 line in Figures 1a and 2a. For nD = 50, accuracy falls below 50% after 5K G steps (= 250K D steps), which corresponds to the 5th point in the nD = 50 line in Figures 1a and 2a.
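The diagnostic behind Figure 4 (and Figure 5 below) - discriminator accuracy on held-out real and fake mini-batches immediately before each generator step, smoothed with an exponential moving average - is simple to reproduce. A minimal sketch; β = 0.95 matches the figure captions, while everything else is an illustrative assumption:

```python
import torch

@torch.no_grad()
def disc_accuracy(D, real, fake):
    """Fraction of held-out real/fake examples the discriminator classifies correctly,
    assuming D outputs logits (positive means 'real')."""
    correct = (D(real) > 0).float().sum() + (D(fake) <= 0).float().sum()
    return (correct / (len(real) + len(fake))).item()

class EMA:
    def __init__(self, beta=0.95):
        self.beta, self.value = beta, None
    def update(self, x):
        self.value = x if self.value is None else self.beta * self.value + (1 - self.beta) * x
        return self.value

# Construct once: ema = EMA(beta=0.95). Then, immediately before each generator step:
#   acc = disc_accuracy(D, held_out_real, G(torch.randn(B, z_dim)).detach())
#   smoothed = ema.update(acc)  # values near 0.5 indicate a random-guessing discriminator
```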
![7_image_0.png](7_image_0.png)

(a) Exponential moving average (β = 0.95) of discriminator accuracy on mini-batches, after checkpoint restarts

![7_image_1.png](7_image_1.png)

![7_image_2.png](7_image_2.png)

(b) FID after checkpoint restarts (c) Accuracy after checkpoint restarts

Figure 5: We restart training under various privacy and nD settings at 3 checkpoints taken at 1K, 2K, and 3K generator steps into non-private training. We plot the progression of discriminator accuracy, FID, and downstream classification accuracy. The black dots correspond to the initial values of a checkpoint. We observe that low nD settings do not achieve comparable discriminator accuracy to non-private training (a), and result in degradation of utility ((b) and (c)). Discriminator accuracy for nD = 50 tracks non-private training, and we observe utility improvement throughout training, as in the non-private setting.

Based on these observations, we propose the following explanation for why more frequent discriminator steps help:

- Generator improvement occurs when the discriminator is effective at distinguishing between real and fake data.

- The *asymmetric noise addition* introduced by DP to the discriminator makes such a task difficult, resulting in limited generator improvement.

- Allowing the discriminator to train longer on a fixed generator improves its accuracy, recovering the non-private case where the generator and discriminator are balanced.

Checkpoint restarting experiment. We perform a checkpoint restarting experiment to examine this explanation in a more controlled setting. We train a non-private GAN for 3K generator steps, and save checkpoints of D and G (and their respective optimizers) at 1K, 2K, and 3K generator steps. We restart training from each of these checkpoints for 1K generator steps under different nD and privacy settings. We plot the progression of discriminator accuracy, FID, and downstream classification accuracy. Results are pictured in Figure 5. Broadly, our results corroborate the observations that discriminator accuracy improves with larger nD and decreases with better generators, and that generator improvement occurs when the discriminator has sufficiently high accuracy.

7For nD = 10, accuracy falls below 50% after 5K G steps (= 50K D steps), which corresponds to the first point in the nD = 10 line in Figures 1a and 2a. For nD = 50, accuracy falls below 50% after 5K G steps (= 250K D steps), which corresponds to the 5th point in the nD = 50 line in Figures 1a and 2a.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

(a) Varying σ only vs. FID at nD = 1 (b) Varying σ only vs. accuracy at nD = 1

Figure 6: On MNIST, we fix nD = 1 and report results for various settings of the DPSGD noise level σ, where the number of iterations T is chosen for each σ to target (10, 10−5)-DP. The gap between the dashed lines represents the advancement of the utility frontier by incorporating the choice of nD into our design space.

Does reducing noise accomplish the same thing? In light of the above explanation, we ask if reducing the noise level σ can offer the same improvement as taking more steps, as reducing σ should also improve discriminator accuracy before a generator step. To test this: starting from our setting in Section 4, fixing nD = 1, and targeting MNIST at ε = 10, we search over a grid of noise levels σ (the lowest of which, σ = 0.4, admits a budget of only T = 360 discriminator steps). Results are pictured in Figure 6.
We obtain a best FID of 127.1 and best accuracy of 57.5% at noise level σ = 0.45. Hence we can conclude that, in this experimental setting, incorporating the discriminator update frequency into our design space allows for more effective use of the privacy budget for improving generation quality.

Does taking more discriminator steps always help? As we discuss in more detail in Section 6.1, when we are able to find other means to improve the discriminator beyond taking more steps, tuning the discriminator update frequency may not yield improvements. To illustrate with an extreme case, consider eliminating the privacy constraint. In non-private GAN training, taking more steps is known to be unnecessary. We corroborate this result: we run our non-private baseline from Section 4 with the same number of generator steps, but opt to take 10 discriminator steps between each generator step instead of 1. FID worsens from 3.4 ± 0.1 → 8.3, and accuracy worsens from 97.0 ± 0.1% → 91.3%.

## 6 Better Generators Via Better Discriminators

Our proposed explanation in Section 5 provides a concrete suggestion for improving GAN training: effectively use our privacy budget to maximize the number of generator steps taken when the discriminator has sufficiently high accuracy. We experiment with modifications to the private GAN training recipe towards these ends, which translate to improved generation.

## 6.1 Larger Batch Sizes

Several recent works have demonstrated that for classification tasks, DPSGD achieves higher accuracy with larger batch sizes, after tuning the noise level σ accordingly (Tramèr & Boneh, 2021; Anil et al., 2022; De et al., 2022). On simpler, less diverse datasets (such as MNIST, CIFAR-10, and FFHQ), GAN training is typically conducted with small batch sizes (for example, DCGAN uses B = 128 (Radford et al., 2016), which we adopt; StyleGAN(2|3) uses B = 32/64 (Karras et al., 2019; 2020; 2021)).8 Therefore, it is interesting to see if large batch sizes help in our setting. We corroborate that on MNIST, larger batch sizes do not significantly improve our non-private baseline from Section 4: when we go up to B = 2048 from B = 128, FID goes from 3.4 ± 0.1 → 3.2 and accuracy goes from 97.0 ± 0.1% → 97.5%.

8However, on more complex, diverse datasets (such as ImageNet), it has been found that larger batch sizes help: this is the conclusion from BigGAN (Brock et al., 2019). Recent work scaling up StyleGAN to diverse datasets - StyleGAN-XL (Sauer et al., 2022) and GigaGAN (Kang et al., 2023) - corroborates this result, seeing improvements from scaling up batch sizes to 2048 and 1024 respectively.

| Privacy ε | Method | Reported in | MNIST FID | MNIST Acc. (%) | FashionMNIST FID | FashionMNIST Acc. (%) |
|---|---|---|---|---|---|---|
| ε = ∞ | Real data | This work | 1.0 | 99.2 | 1.5 | 92.5 |
| ε = ∞ | GAN | This work | 3.4 ± 0.1 | 97.0 ± 0.1 | 16.5 ± 1.7 | 79.5 ± 0.8 |
| ε = 10 | DP-MERF | Cao et al. (2021) | 116.3 | 82.1 | 132.6 | 75.5 |
| ε = 10 | DP-Sinkhorn | Cao et al. (2021) | 48.4 | 83.2 | 128.3 | 75.1 |
| ε = 10 | PSG9 | Chen et al. (2022) | - | 95.6 | - | 77.7 |
| ε = 10 | DPDM | Dockhorn et al. (2022) | 5.01 | 97.3 | 18.6 | 84.9 |
| ε = 10 | DPGAN10 | Chen et al. (2020) | 179.16 | 63 | 243.80 | 50 |
| ε = 10 | DPGAN10 | Long et al. (2021) | 304.86 | 80.11 | 433.38 | 60.98 |
| ε = 10 | GS-WGAN | Chen et al. (2020) | 61.34 | 80 | 131.34 | 65 |
| ε = 10 | PATE-GAN | Long et al. (2021) | 253.55 | 66.67 | 229.25 | 62.18 |
| ε = 10 | G-PATE | Long et al. (2021) | 150.62 | 80.92 | 171.90 | 69.34 |
| ε = 10 | DataLens | Wang et al. (2021) | 173.50 | 80.66 | 167.68 | 70.61 |
| ε = 10 (9.32*) | Our DPGAN | This work | 18.5 ± 0.9 | 93.0 ± 0.6 | 85.9 ± 6.4 | 71.7 ± 1.0 |
| ε = 10 (9.32*) | + large batches | This work | 13.2 ± 1.1 | 94.0 ± 0.6 | 70.9 ± 6.3 | 73.0 ± 1.1 |
| ε = 10 (9.32*) | + adaptive nD | This work | 12.8 ± 0.3 | 95.1 ± 0.1 | 62.3 ± 8.7 | 74.7 ± 0.4 |
| ε = 1 | DP-MERF11 | Vinaroz et al. (2022) | - | 80.7 | - | 73.9 |
| ε = 1 | DP-HP | Vinaroz et al. (2022) | - | 81.5 | - | 72.3 |
| ε = 1 | PSG | Chen et al. (2022) | - | 80.9 | - | 70.2 |
| ε = 1 | DPDM | Dockhorn et al. (2022) | 23.4 | 95.3 | 37.8 | 79.4 |
| ε = 1 | DPGAN | Long et al. (2021) | 470.20 | 40.36 | 472.03 | 10.53 |
| ε = 1 | GS-WGAN | Long et al. (2021) | 489.75 | 14.32 | 587.31 | 16.61 |
| ε = 1 | PATE-GAN | Long et al. (2021) | 231.54 | 41.68 | 253.19 | 42.22 |
| ε = 1 | G-PATE | Long et al. (2021) | 153.38 | 58.80 | 214.78 | 58.12 |
| ε = 1 | DataLens | Wang et al. (2021) | 186.06 | 71.23 | 194.98 | 64.78 |
| ε = 1 (0.912*) | Our DPGAN | This work | 111.1 ± 17.9 | 76.9 ± 0.6 | 155.3 ± 7.1 | 64.9 ± 0.8 |
| ε = 1 (0.912*) | + large batches | This work | 106.2 ± 64.0 | 67.5 ± 7.8 | 158.9 ± 6.0 | 67.2 ± 1.0 |
| ε = 1 (0.912*) | + adaptive nD | This work | 52.6 ± 3.2 | 81.3 ± 0.8 | 126.4 ± 4.1 | 69.1 ± 0.1 |

Table 2: We gather previously reported results in the literature on the performance of various methods for labelled generation of MNIST and FashionMNIST, compared with our results. For our results, we run 3 seeds and report mean ± std. Note that *Reported in* refers to the source of the numerical result, not the originator of the approach. For downstream accuracy, we report the best accuracy among the classifiers they use, and compare against our CNN classifier accuracy. (*) For our results, we target ε = 10/ε = 1 with Opacus accounting and additionally report ε using the improved privacy accounting of Gopi et al. (2021).

9Since PSG produces a coreset of only 200 examples (20 per class), the covariance of its InceptionNet-extracted features is singular, and therefore it is not possible to compute an FID score.

10We group per-class unconditional GANs together with conditional GANs under the DPGAN umbrella.

11Results from Vinaroz et al. (2022) are presented graphically in the paper. Exact numbers can be found in their code.

Results. We scale up batch sizes, considering B ∈ {128, 512, 2048}, and search for the optimal noise level σ and nD (details in Appendix B.2). We target both ε = 1 and ε = 10. We report the best results from our hyperparameter search in Table 2. We find that larger batch sizes lead to improvements: for ε = 1 and ε = 10, the best results are achieved at B = 512 and B = 2048 respectively. We also note that for large batch sizes, the optimal discriminator update frequency can be quite small. For B = 2048, σ = 4.0, targeting MNIST at ε = 10, nD = 5 is the optimal discriminator update frequency, and improves over our best B = 128 setting employing nD = 50. For full results, see Appendix D.3.

| Privacy ε | Method | Reported in | FID | Acc. (%) |
|---|---|---|---|---|
| ε = ∞ | Real data | This work | 1.1 | 96.6 |
| ε = ∞ | GAN | This work | 30.0 ± 1.6 | 92.0 ± 0.4 |
| ε = 10 | DP-MERF | Cao et al. (2021) | 274.0 | 65 |
| ε = 10 | DP-Sinkhorn | Cao et al. (2021) | 189.5 | 76.3 |
| ε = 10 | DPDM | Dockhorn et al. (2022) | 21.1 | - |
| ε = 10 | DPGAN | Long et al. (2021) | - | 54.09 |
| ε = 10 | GS-WGAN | Long et al. (2021) | - | 63.26 |
| ε = 10 | PATE-GAN | Long et al. (2021) | - | 58.70 |
| ε = 10 | G-PATE | Long et al. (2021) | - | 70.72 |
| ε = 10 (9.39*) | Our DPGAN | This work | 170.8 ± 20.3 | 82.4 ± 4.4 |
| ε = 10 (64 × 64) | DPGAN | Long et al. (2021) | 485.41 | 52.11 |
| ε = 10 (64 × 64) | GS-WGAN | Long et al. (2021) | 432.58 | 61.36 |
| ε = 10 (64 × 64) | PATE-GAN | Long et al. (2021) | 424.60 | 65.35 |
| ε = 10 (64 × 64) | G-PATE | Long et al. (2021) | 305.92 | 68.97 |
| ε = 10 (64 × 64) | DataLens | Wang et al. (2021) | 320.84 | 72.87 |

Table 3: **Top section of the table:** Comparison to state-of-the-art results on 32 × 32 CelebA-Gender, targeting (ε, 10−6)-DP (except for the results reported in Long et al. (2021), which target a weaker (ε, 10−5)-DP). We run 3 seeds and report the mean ± std. (*) For our results, we target ε = 10 with Opacus accounting and additionally report ε using the improved privacy accounting of Gopi et al. (2021). DPDM reports a much better FID score than our DPGAN (which is itself an improvement over previous results). Our DPGAN achieves the best reported accuracy score. **Bottom section of the table (64 × 64 rows):** Results for GAN-based approaches reported in Long et al. (2021) and Wang et al. (2021), which are not directly comparable because they target (10, 10−5)-DP and use 64 × 64 CelebA-Gender.

## 6.2 Adaptive Discriminator Step Frequency

Our observations from Sections 4 and 5 motivate us to consider *adaptive* discriminator step frequencies. As pictured in Figures 4 and 5a, discriminator accuracy drops during training as the generator improves. In this scenario, we want to take more steps to improve the discriminator, in order to further improve the generator. However, using a large discriminator update frequency right from the beginning of training is wasteful - as evidenced by the fact that low nD achieves the best FID and accuracy early in training. Hence we propose to start at a low discriminator update frequency (nD = 1), and ramp up when our discriminator is performing poorly.

Accuracy on real data must be released with DP. While this is feasible, it introduces the additional problem of having to find the right split of privacy budget for the best performance. We observe that overall discriminator accuracy is correlated with discriminator accuracy on fake samples only (which are free to evaluate on, by post-processing). Hence we use the latter as a proxy to assess discriminator performance.

We propose an *adaptive step frequency*, parameterized by β and d. β is the decay parameter used to compute the exponential moving average (EMA) of discriminator accuracy on fake batches before each generator update. d is the accuracy floor: when the EMA falls to d, we move to the next update frequency nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, ...}. Additionally, we promise a grace period of 2/(1 − β) generator steps before moving on to the next update frequency - motivated by the fact that a β-EMA's value is primarily determined by its last 2/(1 − β) observations. We use β = 0.99 in all settings, and try d = 0.6 and d = 0.7.

The additional benefit of the adaptive step frequency is that we do not have to search for the optimal update frequency. Although the adaptive step frequency introduces the extra hyperparameter of the threshold d, we found that these two settings (d = 0.6 and d = 0.7) were sufficient to improve over the results of a much more extensive hyperparameter search over nD (whose optimal value varied significantly based on the noise level σ and expected batch size B).
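To make the schedule above concrete, the following is a minimal Python sketch of the adaptive step frequency as described in this section. The class structure, names, and the optimistic EMA initialization at 1.0 are our own assumptions for illustration, not details taken from a released implementation.

```python
class AdaptiveStepFrequency:
    """Escalate n_D when the EMA of discriminator accuracy on fake batches
    falls below the floor d, with a grace period of 2 / (1 - beta)
    generator steps between escalations."""

    LADDER = (1, 2, 5, 10, 20, 50, 100, 200, 500, 1000)

    def __init__(self, beta: float = 0.99, d: float = 0.6):
        self.beta, self.d = beta, d
        self.ema = 1.0                       # optimistic start (an assumption)
        self.grace = round(2 / (1 - beta))   # 200 generator steps for beta = 0.99
        self.level = 0                       # index into LADDER
        self.since_escalation = 0            # generator steps at current level

    @property
    def n_d(self) -> int:
        return self.LADDER[self.level]

    def update(self, fake_batch_accuracy: float) -> int:
        """Call once per generator step with D's accuracy on the fake half of
        the batch (free to evaluate under DP, by post-processing)."""
        self.ema = self.beta * self.ema + (1 - self.beta) * fake_batch_accuracy
        self.since_escalation += 1
        if (self.ema < self.d and self.since_escalation >= self.grace
                and self.level + 1 < len(self.LADDER)):
            self.level += 1
            self.since_escalation = 0
        return self.n_d
```

In a training loop, one would take `schedule.n_d` discriminator steps, then a single generator step, and then call `schedule.update(...)` with the discriminator's measured accuracy on the most recent fake batch.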
## 6.3 Comparison With Previous Results In The Literature

## 6.3.1 MNIST and FashionMNIST

Table 2 summarizes our best experimental settings for MNIST and FashionMNIST, and situates them in the context of previously reported results for the task. We also present a visual comparison in Figure 7. We provide some examples of generated images in Figures 9 and 10 for ε = 10, and Figures 11 and 12 for ε = 1.

![11_image_0.png](11_image_0.png)

Figure 7: MNIST and FashionMNIST results at (10, 10−5)-DP for different methods (DP-Sinkhorn, DPDM, DP-CGAN, GS-WGAN, and our DPGAN). Images of other methods are from Cao et al. (2021) and Dockhorn et al. (2022).

![11_image_1.png](11_image_1.png)

Figure 8: 32 × 32 CelebA-Gender at (10, 10−6)-DP. **From top to bottom:** DPDM (unconditional generation), DP-Sinkhorn, and our DPGAN. Images of other methods are from Cao et al. (2021) and Dockhorn et al. (2022).

Plain DPSGD beats all alternative GAN privatization schemes. Our baseline DPGAN from Section 4, with the appropriate choice of nD (and without the modifications described in this section yet), outperforms all other GAN-based approaches proposed in the literature (GS-WGAN, PATE-GAN, G-PATE, and DataLens) *uniformly* across both metrics, both datasets, and both privacy levels.

Large batch sizes and adaptive discriminator step frequency improve GAN training. Broadly speaking, across both privacy levels and both datasets, we see an improvement from taking larger batch sizes, and then another with the adaptive step frequency.

Comparison with state-of-the-art. With the exception of DPDM, our best DPGANs are competitive with state-of-the-art approaches for DP synthetic data, especially in terms of FID scores.

## 6.3.2 CelebA-Gender

We also report results on generating 32 × 32 CelebA, conditioned on gender at (10, 10−6)-DP. For these experiments, we employed large batches (B = 2048) and adaptive discriminator step frequency with threshold d = 0.6. Full implementation details can be found in Appendix C. Results are summarized in Table 3 and visualized in Figure 8. For more example generations, see Figure 13.

## 7 Conclusion

We revisit differentially private GANs and show that, with appropriate tuning of the training procedure, they can perform dramatically better than previously thought. Some crucial modifications include increasing discriminator step frequency, increasing the batch size, and introducing adaptive discriminator step frequency. We explore the hypothesis that the previous deficiencies of DPGANs were due to poor classification accuracy of the discriminator. More broadly, our work supports the recurring finding that carefully-tuned DPSGD on conventional architectures can yield strong results for differentially private machine learning.

## Acknowledgements

AB is supported by an NSERC Discovery Grant, a David R. Cheriton Graduate Scholarship, and an Ontario Graduate Scholarship. GK is supported by an NSERC Discovery Grant, an unrestricted gift from Google, and a University of Waterloo startup grant. We would like to thank the TMLR anonymous reviewers and action editor for providing constructive feedback.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *CCS'16: 2016 ACM SIGSAC Conference on Computer and Communications Security*, 2016.

Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. Large-scale differentially private BERT.
In *Findings of the Association for Computational Linguistics: EMNLP'22*, 2022. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning (ICML'17), 2017. Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In *55th Annual IEEE Symposium on Foundations of Computer Science* (FOCS'14), 2014. Brett K. Beaulieu-Jones, Zhiwei Steven Wu, Chris Williams, Ran Lee, Sanjeev P. Bhavnani, James Brian Byrd, and Casey S. Greene. Privacy-preserving generative deep neural networks support clinical data sharing. *Circulation: Cardiovascular Quality and Outcomes*, 12(7), 2019. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In *7th International Conference on Learning Representations (ICLR'19)*, 2019. Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, and Karsten Kreis. Don't generate me: Training differentially private generative models with Sinkhorn divergence. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021. Tatjana Chavdarova, Matteo Pagliardini, Sebastian U. Stich, François Fleuret, and Martin Jaggi. Taming GANs with lookahead-minmax. In *9th International Conference on Learning Representations (ICLR'21)*, 2021. Dingfan Chen, Tribhuvanesh Orekondy, and Mario Fritz. GS-WGAN: A gradient-sanitized approach for learning differentially private generators. In *Advances in Neural Information Processing Systems 33* (NeurIPS'20), 2020. Dingfan Chen, Raouf Kerkouche, and Mario Fritz. Private set generation with discriminative information. In *Advances in Neural Information Processing Systems 35 (NeurIPS'22)*, 2022. Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, and Haojin Zhu. Differentially private data generative models. *CoRR*, abs/1812.02274, 2018. Soumith Chintala, Emily Denton, Martin Arjovsky, and Michael Mathieu. How to train a GAN? Tips and tricks to make GANs work. https://github.com/soumith/ganhacks, 2016. Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale. *CoRR*, abs/2204.13650, 2022. Tim Dockhorn, Tianshi Cao, Arash Vahdat, and Karsten Kreis. Differentially private diffusion models. CoRR, abs/2210.09929, 2022. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Compututer Science, 9(3-4):211–407, 2014. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT'06), 2006a. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In *Proceedings of the 3rd Conference on Theory of Cryptography (TCC'06)*, 2006b. Tanner Fiez and Lillian J. Ratliff. Local convergence analysis of gradient descent ascent with finite timescale separation. In *9th International Conference on Learning Representations (ICLR'21)*, 2021. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing* Systems 27 (NIPS'14), 2014. 
Sivakanth Gopi, Yin Tat Lee, and Lukas Wutschitz. Numerical composition of differential privacy. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021. Frederik Harder, Kamil Adamczewski, and Mijung Park. DP-MERF: Differentially private mean embeddings with random features for practical privacy-preserving data generation. In *24th International Conference* on Artificial Intelligence and Statistics (AISTATS'21), 2021. Frederik Harder, Milad Jalali Asadabadi, Danica J. Sutherland, and Mijung Park. Differentially private data generation needs better features. *CoRR*, abs/2205.12900, 2022. Moritz Hardt, Katrina Ligett, and Frank Mcsherry. A simple and practical algorithm for differentially private data release. In *Advances in Neural Information Processing Systems 25 (NIPS'12)*, 2012. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In *Advances in Neural* Information Processing Systems 30 (NIPS'17), 2017. James Jordon, Jinsung Yoon, and Mihaela van der Schaar. PATE-GAN: Generating synthetic data with differential privacy guarantees. In *7th International Conference on Learning Representations (ICLR'19)*, 2019. Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up GANs for text-to-image synthesis. In *Proceedings of the 2023 IEEE/CVF Conference on* Computer Vision and Pattern Recognition (CVPR'23), 2023. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR'19), 2019. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR'20), 2020. Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR'15), 2015. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proc. IEEE*, 86(11):2278–2324, 1998. Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. Large language models can be strong differentially private learners. In *10th International Conference on Learning Representations (ICLR'22)*, 2022. Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC'19), 2019. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV'15), 2015. Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, and Bo Li. GPATE: Scalable differentially private data generator via private aggregation of teacher discriminators. In Advances in Neural Information Processing Systems 34 (NeurIPS'21), 2021. Ryan McKenna, Daniel Sheldon, and Gerome Miklau. Graphical-model based estimation and inference for differential privacy. 
In *Proceedings of the 36th International Conference on Machine Learning (ICML'19)*, 2019.

Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the sampled Gaussian mechanism. *CoRR*, abs/1908.10530, 2019.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In *6th International Conference on Learning Representations (ICLR'18)*, 2018.

Shubhankar Mohapatra, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. The role of adaptive optimizers for honest private hyperparameter selection. In *Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)*, 2022.

Marcel Neunhoeffer, Steven Wu, and Cynthia Dwork. Private post-GAN boosting. In *9th International Conference on Learning Representations (ICLR'21)*, 2021.

Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In *Proceedings of the 39th Annual ACM Symposium on the Theory of Computing (STOC'07)*, 2007.

Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In *5th International Conference on Learning Representations (ICLR'17)*, 2017.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32 (NeurIPS'19)*, 2019.

Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, and Abhradeep Guha Thakurta. How to DP-fy ML: A practical guide to machine learning with differential privacy. *J. Artif. Intell. Res.*, 77:1113–1201, 2023.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In *4th International Conference on Learning Representations (ICLR'16)*, 2016.

Axel Sauer, Katja Schwarz, and Andreas Geiger. StyleGAN-XL: Scaling StyleGAN to large diverse datasets. In *Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH'22)*, 2022.

Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with differentially private updates. In *2013 IEEE Global Conference on Signal and Information Processing*, 2013.

Yuchao Tao, Ryan McKenna, Michael Hay, Ashwin Machanavajjhala, and Gerome Miklau. Benchmarking differentially private synthetic data generation algorithms. *CoRR*, abs/2112.09238, 2021.

Reihaneh Torkzadehmahani, Peter Kairouz, and Benedict Paten. DP-CGAN: Differentially private synthetic data and label generation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops'19)*, 2019.

Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In *9th International Conference on Learning Representations (ICLR'21)*, 2021.

Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, and Mi Jung Park. Hermite polynomial features for private data generation. In *Proceedings of the 39th International Conference on Machine Learning (ICML'22)*, 2022.
Chris Waites and Rachel Cummings. Differentially private normalizing flows for privacy-preserving density estimation. *CoRR*, abs/2103.14068, 2021.

Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, and Bo Li. DataLens: Scalable privacy preserving training via gradient compression and aggregation. In *CCS'21: 2021 ACM SIGSAC Conference on Computer and Communications Security*, 2021.

Yu-Xiang Wang, Borja Balle, and Shiva Prasad Kasiviswanathan. Subsampled Rényi differential privacy and analytical moments accountant. In *22nd International Conference on Artificial Intelligence and Statistics (AISTATS'19)*, 2019.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *CoRR*, abs/1708.07747, 2017.

Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network. *CoRR*, abs/1802.06739, 2018.

Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, and Ilya Mironov. Opacus: User-friendly differential privacy library in PyTorch. *CoRR*, abs/2109.12298, 2021.

Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang. Differentially private fine-tuning of language models. In *10th International Conference on Learning Representations (ICLR'22)*, 2022.

Jun Zhang, Graham Cormode, Cecilia M. Procopiuc, Divesh Srivastava, and Xiaokui Xiao. PrivBayes: Private data release via Bayesian networks. *ACM Trans. Database Syst.*, 42(4), 2017.

## A Generated Samples

We provide a few non-cherrypicked samples for MNIST and FashionMNIST at ε = 10 and ε = 1, as well as 32 × 32 CelebA-Gender at ε = 10.

![17_image_0.png](17_image_0.png)

Figure 9: Some non-cherrypicked MNIST samples from our method, ε = 10.

![17_image_1.png](17_image_1.png)

Figure 10: Some non-cherrypicked FashionMNIST samples from our method, ε = 10.

![18_image_0.png](18_image_0.png)

Figure 11: Some non-cherrypicked MNIST samples from our method, ε = 1.

![18_image_1.png](18_image_1.png)

Figure 12: Some non-cherrypicked FashionMNIST samples from our method, ε = 1.

![19_image_0.png](19_image_0.png)

Figure 13: Some non-cherrypicked CelebA samples from our method, ε = 10.

## B MNIST And FashionMNIST Implementation Details

## B.1 Training Recipe

For MNIST and FashionMNIST, we begin from an open source PyTorch implementation of DCGAN (Radford et al., 2016) (available at this link) that performs well non-privately, and copy their training recipe. This includes: batch size B = 128, the Adam optimizer (Kingma & Ba, 2015) with parameters (α = 0.0002, β1 = 0.5, β2 = 0.999) for both G and D, the non-saturating GAN loss (Goodfellow et al., 2014), and a 5-layer fully convolutional architecture with width parameter d = 128. To adapt it to our purposes, we make three architectural modifications: in both G and D we (1) remove all BatchNorm layers (which are not compatible with DPSGD); (2) add label embedding layers to enable labelled generation; and (3) adjust convolutional/transpose convolutional stride lengths and kernel sizes as well as remove the last layer, in order to process 1 × 28 × 28 images without having to resize. Finally, we remove their custom weight initialization, opting for PyTorch defaults. Our baseline non-private GANs are trained for 45K steps.
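As a companion to this recipe, the following is a minimal sketch of the update schedule of Algorithm 1: nD private discriminator steps per generator step. It assumes the discriminator optimizer has been privatized (e.g., wrapped with Opacus's privacy engine), so that each `d_opt.step()` applies per-example gradient clipping and Gaussian noise; the real batches are assumed to come from the Poisson sampling procedure described next. All names (`train_dpgan`, `latent_dim`, the conditional `G(z, y)`/`D(x, y)` signatures) are illustrative placeholders, not our released code.

```python
import torch
import torch.nn.functional as F

def train_dpgan(G, D, g_opt, d_opt, real_loader, n_d, total_d_steps,
                B, n_classes, latent_dim, device):
    """Sketch of Algorithm 1's schedule. d_opt is assumed privatized (e.g.,
    via Opacus): each d_opt.step() clips per-example gradients and adds noise.
    Clipping fake-example gradients alongside real ones mirrors the design
    choice discussed below. The generator step touches no real data, so it
    incurs no additional privacy cost (post-processing)."""
    real_iter = iter(real_loader)  # Poisson-sampled; re-create when exhausted
    ones = torch.ones(B, 1, device=device)
    zeros = torch.zeros(B, 1, device=device)
    for _ in range(total_d_steps // n_d):
        for _ in range(n_d):  # --- discriminator phase (private) ---
            x_real, y_real = next(real_iter)
            x_real, y_real = x_real.to(device), y_real.to(device)
            z = torch.randn(B, latent_dim, device=device)
            y_fake = torch.randint(0, n_classes, (B,), device=device)
            x_fake = G(z, y_fake).detach()  # B fake examples, no grad to G
            d_opt.zero_grad()
            # D is assumed to output (N, 1) logits for a conditional input
            loss_d = (F.binary_cross_entropy_with_logits(
                          D(x_real, y_real),
                          torch.ones(len(x_real), 1, device=device))
                      + F.binary_cross_entropy_with_logits(D(x_fake, y_fake), zeros))
            loss_d.backward()
            d_opt.step()  # clipped + noised update
        # --- generator phase: one step, non-saturating loss ---
        z = torch.randn(B, latent_dim, device=device)
        y_fake = torch.randint(0, n_classes, (B,), device=device)
        g_opt.zero_grad()
        loss_g = F.binary_cross_entropy_with_logits(D(G(z, y_fake), y_fake), ones)
        loss_g.backward()
        g_opt.step()
```

Note that under Poisson sampling the real batch size varies per step, which is why the real-label target is sized with `len(x_real)` rather than `B`.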
We train our non-private GANs with Poisson sampling as well: for each step of discriminator training, we sample real examples by including each element of our dataset independently with probability B/n, where n is the size of our dataset. We then add B fake examples sampled from G to form our fake/real combined batch.

Clipping fake sample gradients. When training the discriminator privately with DPSGD, we draw B fake examples and compute clipped per-example gradients on the entire combined batch of real and fake examples (see Algorithm 1). This is the approach taken in the prior work of Torkzadehmahani et al. (2019). We remark that this is purely a design choice - it is not necessary to clip the gradients of the fake samples, nor to process them together in the same batch. So long as we preserve the sensitivity of gradient queries with respect to the real data, the same amount of noise will suffice for privacy.

## B.2 Large Batch Size Hyperparameter Search

We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise level σ and nD. For B = 128 targeting ε = 10, we search over three noise levels $\Sigma_{128}^{10} = \{0.6, 1.0, 1.4\}$. We choose candidate noise levels for other batch sizes as follows: when considering a batch size B = 128n, we search over

$$\Sigma_{128n}^{10} := \{\sqrt{n}\cdot\sigma : \sigma\in\Sigma_{128}^{10}\}.$$

We also target the high privacy (ε = 1) regime. For ε = 1, we multiply all noise levels by 5,

$$\Sigma_{B}^{1} = \{5\sigma : \sigma\in\Sigma_{B}^{10}\}.$$

For each setting of (*B, σ*), we search over a grid of nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. Due to compute limitations, we omit some values that we are confident will fail (e.g., trying nD = 1 when mode collapse occurs for nD = 5).

## C CelebA Implementation Details

The CelebA dataset (Liu et al., 2015) consists of 202,599 178 × 218 RGB images of celebrity faces, each labelled with 40 binary attributes. The version of the dataset we work with, 32 × 32 CelebA-Gender (a benchmark reported in Cao et al. (2021)), is obtained by resizing to 32 × 32 and labelling with the gender attribute. The 202,599 images are partitioned into a training set of size 182,637 and a test set of size 19,962.

We use essentially the same model architectures we used for MNIST and FashionMNIST for CelebA: 4-layer fully convolutional networks with label embedding layers for both D and G. We adjust convolutional/transpose convolutional stride lengths and kernel sizes to process 3 × 32 × 32 images without having to resize. D and G for CelebA are slightly larger, having 2.64M and 3.16M trainable parameters respectively.

Drawing from the results of our MNIST and FashionMNIST experiments, we used a large batch size (B = 2048) and adaptive discriminator updates, with threshold d = 0.6. We experimented with a few settings for the noise level σ ∈ {2, 3, 4}. Our best results were with the largest noise σ = 4, which gave us 385K discriminator steps when targeting ε = 10.

## D Ablations

## D.1 Varying Discriminator Size

We train DPGANs on MNIST under the setting of Section 4: using noise level σ = 1, batch size B = 128, and targeting ε = 10, which yields 450K discriminator steps. By adjusting dD (the number of filter banks in the first convolutional layer of the discriminator, which controls width throughout), we can obtain discriminators with roughly 0.25×, 0.5×, and 2× the parameter count (Table 4). For these experiments, we vary discriminator size while keeping the generator size (2.27M parameters) fixed.
| dD | D parameter count | Ratio |
|------|---------------------|---------|
| 64 | 0.44M | 0.26× |
| 96 | 0.97M | 0.57× |
| 128 | 1.72M | 1× |
| 196 | 3.86M | 2.24× |

Table 4: Number of trainable parameters in discriminator size variants.

Results. In Figure 14 we plot the progression of FID and downstream classifier accuracy of generated MNIST samples during non-private training with discriminators of varying size. We observe that, non-privately, larger discriminators do better in terms of FID early on, and converge to slightly worse accuracies. In Figures 15 and 16, we plot the progression of FID and accuracy (respectively) for DPGANs trained on MNIST (targeting ε = 10) at different discriminator update frequencies nD. In each plot, we compare the dD = 128 runs (in green), which correspond to results from Figures 1a and 2a, to the results of training with discriminators with 0.26–2.24× as many trainable parameters. These additional settings mostly track the dD = 128 runs. Larger discriminators appear to perform slightly better, especially in terms of accuracy early in training. Larger discriminators also use significantly more compute.

![21_image_0.png](21_image_0.png)

Figure 14: MNIST FID and downstream classifier accuracy for non-private GAN training with various discriminator sizes. The green (dD = 128) line corresponds to the 1.72M parameter discriminator used in previous experiments.

![21_image_1.png](21_image_1.png)

Figure 15: MNIST FID for DPGAN training (targeting ε = 10) at various discriminator update frequencies nD. In each plot, we present results from training with various discriminator sizes. The green (dD = 128) lines correspond to the results pictured in Figure 1a. Discriminators with 0.26–2.24× as many trainable parameters track the results of the original dD = 128 setting.

## D.2 Varying Learning Rate

We train DPGANs on MNIST under the setting of Section 4: using noise level σ = 1, batch size B = 128, and targeting ε = 10, which yields 450K discriminator steps. Here, we keep discriminator size dD = 128 fixed, and vary the learning rates of G and D, while keeping the other Adam parameters β1 and β2 for both G and D fixed. Table 5 lists the learning rate settings we consider.

![22_image_0.png](22_image_0.png)

![22_image_1.png](22_image_1.png)

Figure 16: MNIST downstream classifier accuracy for DPGAN training (targeting ε = 10) at various discriminator update frequencies nD. In each plot, we present results from training with various discriminator sizes. The green (dD = 128) lines correspond to the results pictured in Figure 2a. Discriminators with 0.26–2.24× as many trainable parameters track the results of the original dD = 128 setting.

| Setting | G LR | D LR |
|-----------|---------|---------|
| Base | 0.0002 | 0.0002 |
| 5× LR | 0.001 | 0.001 |
| 0.2× LR | 0.00004 | 0.00004 |
| 5× D LR | 0.0002 | 0.001 |
| 0.2× D LR | 0.0002 | 0.00004 |

Table 5: Learning rate settings.

Results. In Figure 17 we plot the progression of FID and downstream classifier accuracy of generated MNIST samples during non-private training under various learning rate settings. We observe FID and accuracy degradation near the end of training for the 5× (D) LR settings. The 0.2× LR setting converges much slower. This is remedied when we adjust *only the* D LR by 0.2× and leave G LR unchanged (comparing the green line to the purple line in Figure 17). Figures 18 and 19 examine the case where we adjust both G and D learning rates by 5× and 0.2× respectively.
Broadly, we see the same behaviour as in Section 4: FID and downstream classification accuracy improve significantly as we take nD >> 1, up until the point where nD is too high, preventing the generator from taking enough steps to converge. However, we note some differences: (1) the performance of the best settings for nD is reduced across the board (most prominently in the case of accuracy in the 0.2× LR setting; see Figure 19b); and (2) the nD which results in the best performance is different - while nD = 50 leads to the best results for MNIST at ε = 10 in Section 4, nD = 200 performs the best for 5× LR and nD = 100 is the best for 0.2× LR. Note that these two differences are *not observed* in the experiments where we vary discriminator size dD in Appendix D.1: all runs with different dD track the dD = 128 run closely.

In Figures 20 and 21, we examine the case where we only adjust D LR, by 5× and 0.2× respectively, and keep G LR fixed at 0.0002. Again, we observe large improvements in utility as we take nD >> 1, up to the point where nD is too high. We note that when keeping G LR fixed, the 0.2× setting gets much closer to the level of improvement from varying nD observed in the base LR setting.

In summary: changing learning rates while keeping other hyperparameters fixed still exhibits the benefit of increasing nD, but compared to the base setting, does not recover: (1) the scale of the improvement, and (2) the precise behaviour of the phenomenon, i.e., the same optimal nD. We leave open the question of understanding more precisely how the phenomenon changes under different learning rates: it may be fruitful to investigate how Adam's momentum parameters (β1, β2) and the DPSGD noise level σ impact the results, and also perhaps the degradation of the non-private GAN results for large D LR.

![23_image_0.png](23_image_0.png)

Figure 17: MNIST FID and downstream classifier accuracy for non-private GAN training with various learning rate settings. The blue (dD = 128) line corresponds to the base learning rate setting used in previous experiments.

![23_image_1.png](23_image_1.png)

Figure 18: MNIST FID and downstream classifier accuracy for DPGAN training runs targeting ε = 10 and using 5× the base learning rate for both G and D, under various settings of nD.

![24_image_0.png](24_image_0.png)

Figure 19: MNIST FID and downstream classifier accuracy for DPGAN training runs targeting ε = 10 and using 0.2× the base learning rate for both G and D, under various settings of nD.

![24_image_1.png](24_image_1.png)

![24_image_2.png](24_image_2.png)

Figure 20: MNIST FID and downstream classifier accuracy for DPGAN training runs targeting ε = 10 and using 5× the base learning rate for D only (G LR unchanged), under various settings of nD.

Figure 21: MNIST FID and downstream classifier accuracy for DPGAN training runs targeting ε = 10 and using 0.2× the base learning rate for D only (G LR unchanged), under various settings of nD.

## D.3 Varying Batch Size And Noise Level

Fixing a batch size B and a noise level σ yields a total discriminator step budget T allowed under our privacy budget ε. For example, the results from Section 4 and Appendices D.1 and D.2 use B = 128 and σ = 1, which allows for T = 450K when targeting ε = 10 on MNIST. Again targeting MNIST at ε = 10, we take various combinations of (*B, σ*), and plot the final FID and accuracy of DPGANs trained at such a setting, over a spectrum of nD. Results are pictured in Figures 22, 23, and 24.

![25_image_0.png](25_image_0.png)

![25_image_1.png](25_image_1.png)
Figure 22: MNIST FID and downstream classifier accuracy for B = 128 runs targeting ε = 10, with σ ∈ {0.6, 1, 1.4}. We report final utility over a range of nD for the 3 noise levels. The x-axis is log-scaled.

![25_image_2.png](25_image_2.png)

![25_image_3.png](25_image_3.png)

Figure 23: MNIST FID and downstream classifier accuracy for B = 512 runs targeting ε = 10, with σ ∈ {1.2, 2, 2.8}. We report final utility over a range of nD for the 3 noise levels. The x-axis is log-scaled.

![25_image_4.png](25_image_4.png)

![25_image_5.png](25_image_5.png)

Figure 24: MNIST FID and downstream classifier accuracy for B = 2048 runs targeting ε = 10, with σ ∈ {2.4, 4, 5.6}. We report final utility over a range of nD for the 3 noise levels. The x-axis is log-scaled.

Results. At all batch sizes and noise levels, we observe the same U-shaped utility curve described in Section 4, which predicts the existence of an optimal nD for any fixed setting of (*σ, B*). For fixed B, the optimal nD is lower for smaller σ. We also see that for settings with low σ and large B, the optimal nD can be quite low. For all batch sizes, choosing noise levels that achieve their optimal nD at fairly large values (>> 1) tends to outperform smaller noise levels which achieve their optimal nD early.

## E Additional Results

## E.1 Wall Clock Times

We report wall clock times for training runs under various hyperparameter settings, which are executed on 1× NVIDIA A40 card setups running PyTorch 1.11.0+CUDA 11.3.1 and Opacus 1.1.3. Table 6 presents results on MNIST, in particular comparing the effect of nD on training time. The total number of discriminator steps, T, is determined by the privacy budget and DPSGD hyperparameters. Hence, increasing nD results in fewer total G steps and faster training time. Table 7 presents training times under adaptive discriminator step frequency for various datasets.

All private settings are much slower than non-private training. Indeed, the best DP results tend to come from training long with large noise levels, trading off computation for utility (De et al., 2022). For example, the best DP diffusion models (Dockhorn et al., 2022) use 8 V100's for 1 day to train their best MNIST models. Although not directly comparable, we note that our best ε = 10 results train in 7.5 hours on 1 A40.

| Privacy | B | σ | T | nD | FID | Wall clock time |
|---|---|---|---|---|---|---|
| ε = ∞ | 128 | - | 45K | 1 | 3.4 ± 0.1 | 44m |
| ε = 10 | 128 | 1 | 450K | 1 | 205.3 ± 0.9 | 11h 03m |
| ε = 10 | 128 | 1 | 450K | 10 | 103.4 ± 5.8 | 6h 33m |
| ε = 10 | 128 | 1 | 450K | 50 | 18.5 ± 0.9 | 5h 56m |
| ε = 10 | 128 | 1 | 450K | 100 | 21.0 ± 1.6 | 5h 57m |
| ε = 10 | 128 | 1 | 450K | 200 | 26.6 ± 2.2 | 5h 54m |
| ε = 10 | 2048 | 5.6 | 98K | 20 | 13.2 ± 1.0 | 16h 54m |
| ε = 1 | 128 | 5 | 325K | 200 | 111.1 ± 17.9 | 4h 40m |
| ε = 1 | 512 | 14 | 165K | 50 | 106.2 ± 64.0 | 7h 31m |

Table 6: Wall clock times on MNIST for various settings. The privacy level ε, batch size B, and noise level σ determine the total number of D steps, T, taken during training. Given T, the discriminator update frequency nD determines the number of G steps taken during training.

| Privacy | Dataset | B | σ | T | FID | Wall clock time |
|---|---|---|---|---|---|---|
| ε = ∞ | MNIST | 128 | - | 45K | 3.4 ± 0.1 | 44m |
| ε = ∞ | FashionMNIST | 128 | - | 45K | 16.5 ± 1.7 | 42m |
| ε = ∞ | CelebA | 128 | - | 45K | 30.0 ± 1.6 | 47m |
| ε = 10 | MNIST | 512 | 2 | 174K | 12.8 ± 0.3 | 7h 35m |
| ε = 10 | FashionMNIST | 512 | 2 | 174K | 62.3 ± 8.7 | 7h 35m |
| ε = 10 | CelebA | 2048 | 4 | 385K | 170.8 ± 20.3 | 3d 17h 49m |
| ε = 1 | MNIST | 512 | 14 | 165K | 52.6 ± 3.2 | 7h 43m |
| ε = 1 | FashionMNIST | 512 | 14 | 165K | 126.4 ± 4.1 | 7h 26m |

Table 7: Wall clock times on MNIST, FashionMNIST, and 32 × 32 CelebA for runs using adaptive discriminator step frequency (with the exception of the ε = ∞ results, which use nD = 1).

## E.2 Increasing Discriminator Learning Rate Instead Of Step Frequency

From the experimental setting in Section 4 targeting MNIST at ε = 10, we consider the nD = 1 setting, and increase the discriminator learning rate instead of nD. G LR is kept fixed at 0.0002. Results are in Table 8. We do not observe the same level of improvement obtained by increasing nD and keeping D LR/G LR at 1×.

| D LR/G LR | FID | Acc. (%) |
|-------------|-------|------------|
| 1× | 205.9 | 33.7 |
| 5× | 251.8 | 45.0 |
| 10× | 228.1 | 37.5 |
| 50× | 237.2 | 54.5 |

Table 8: Results on MNIST at ε = 10 for increasing D LR while keeping nD = 1. For reference, using nD = 50 while keeping D LR/G LR at 1× yields an FID score of 18.5 ± 0.9 and accuracy of 93.0 ± 0.6%.

## F Additional Discussion

DP tabular data synthesis. Our investigation focuses on image datasets, while many important applications of private data generation involve tabular data. In these settings, marginal-based approaches (Hardt et al., 2012; Zhang et al., 2017; McKenna et al., 2019) perform the best. While Tao et al. (2021) find that private GAN-based approaches fail to preserve even basic statistics in these settings, we speculate that our techniques may yield improvements.

Taking multiple discriminator steps. The original GAN formulation of Goodfellow et al. (2014) has nD, the number of discriminator steps taken between generator steps, as a tunable hyperparameter. In the WGAN framework (Arjovsky et al., 2017), it is suggested to train discriminators as much as possible between generator steps, i.e., to optimality, for best performance. In practice, WGAN implementations use nD = 5 to save on computation. Several studies empirically explore the effect of taking multiple discriminator steps (Miyato et al., 2018; Brock et al., 2019), finding that searching for an optimal nD can improve results. Similar imbalanced setups, such as lookahead and imbalanced learning rates, have been analyzed theoretically (Chavdarova et al., 2021; Fiez & Ratliff, 2021).

In the private setting, applying such strategies to improve DPGAN training has been relatively unexplored. DPGAN training recipes are largely ports of non-private approaches - inheriting many parameter choices designed for performant non-private training which are sub-optimal under DP. Guidance in the non-private setting (tip 14 of Chintala et al. (2016)) prescribes training the discriminator for more steps in the presence of noise (a regularization approach used in non-private GANs). Noise is precisely what DP introduces, and taking more discriminator steps is our core strategy, yielding the most significant gains in utility. We were not aware of this tip when we discovered this phenomenon, but it serves as validation of our finding. While Chintala et al. (2016) provides little elaboration, looking at further explorations of this principle (and other strategies) in the non-private setting may offer guidance for improving DPGANs. For instance, Chavdarova et al. (2021) propose a lookahead update rule that enables fast convergence in the presence of noise, without using large batches - such techniques may help in the private setting as well.
Hyperparameter tuning in DP machine learning. Hyperparameter tuning is crucial to getting deep learning to work. The same is true under privacy, with two additional concerns: (1) tuning is not free - naive composition says privacy loss scales with the number of runs; and (2) DPSGD alters the hyperparameter landscape - introducing extra hyperparameters, and also *changing the relative importance of existing hyperparameters*. (1) is addressed by Liu & Talwar (2019) (although composition is competitive in settings where adaptive selection is important (Mohapatra et al., 2022)). On (2), Ponomareva et al. (2023) offers a comprehensive guide on DP hyperparameter tuning; for GAN training, our work identifies an important parameter with outsized impact in the DP setting.

To compare with prior work, we report our best hyperparameter settings. Indeed, introducing a highly dataset-dependent parameter can result in worse performance overall when accounting for the cost of search in a real deployment setting. Our adaptive discriminator update frequency is motivated by this concern, and our use of MNIST hyperparameters directly for FashionMNIST is a brittleness sanity-check. The aspect of our work that identifies the importance of the discriminator update frequency in private GAN training is unaffected by concerns regarding private hyperparameter search. Evaluation approaches that take into account the cost of search when comparing algorithms are an important direction, which we do not address in this work. For benchmark datasets, the problem is complicated by implicit knowledge encoded in various algorithms' design choices and "default" hyperparameter ranges.
Review 1:

Summary: This paper focuses on improving existing differentially private (DP) GAN models. It documents that using a different update ratio---that is, updating the discriminator multiple times versus the generator---significantly improves the quality of the generated samples. My main concern is the limited novelty of this paper: in particular, the change relative to prior work in the DP context is that the discriminator is trained multiple times, that is, only line 10 in Alg. 1. Moreover, it is well-known in the GAN community that multiple discriminator updates significantly help the optimization, and almost all publicly available GAN implementations use this; see references below. Moreover, given that the paper has empirical contributions, these are not thoroughly carried out (with seeds and standard deviation, reporting performances over wall clock time or gradient queries, an additional metric that the generated data is differentially private, etc.; see below), and the question of how to select the update ratio in this DP context remains open.

Strengths and Weaknesses:

# Strengths

This paper improves the performances of previous implementations of DP GANs, by using multiple discriminator updates per one generator update. It also studies if larger batch sizes and adaptive discriminator steps improve the performances for such training with multiple discriminator updates.

# Weaknesses

**Novelty/Relevance/Contribution.** The fact that multiple updates for the discriminator help is a well-known observation in the GAN community, both as a theoretical and empirical observation; see, for example, [1-4]. It is related to using larger step sizes for the discriminator (instead of updating it multiple times). Thus, I am surprised that the first differentially private GAN did not use this because all standard GAN setups do that. That said, this paper tries to interpret this using an argument based on "asymmetric noise addition", which is only argued, but does not provide theoretical insights. Given that this fact is well-known (from an optimization perspective), this perspective of asymmetric noise addition should be more thoroughly carried out for it to offer a different/deeper understanding in the DP context.

**Misleading discussion on batch sizes.** It is known that noisy gradient estimates can "harm" the optimization of GANs [5], and this is true for any min-max optimization, thus also for DP training. Now, for simple datasets such as MNIST, there is no/little gain in increasing the batch size, but for more complex datasets such as ImageNet, this is necessary in order to ensure that the optimization method does not diverge away rapidly [6]. Now, this paper notes that for non-private GANs on MNIST, increasing the batch size does not help, but for private GANs, where noise is added, it does help. This is very much expected/well-known because the added noise increases the complexity of the dataset, and from an optimization perspective, controlling for the variance of the noisy gradient estimates helps the convergence. Thus, the discussion that this is something specific to DP GANs is in my opinion very misleading (and the conclusion that for non-private GANs increased batch size does not help is incorrect).

**Writing.** The writing is often informal, which makes the reading ambiguous. For example, phrases like "striking a balance" or "careful balance" are often used without being clear about what that means; see further examples below.

**Experiments.** The experiments do not report standard deviation.
Moreover, they do not take into account the added computational expense since now, at each generator step, there are $n_D$ more gradient queries, and this number can be large. Furthermore, it is unconvincing to me that FID is a good measure since the primary goal is to have a differentially private generative model, and FID reports how similar the generated samples are to the dataset samples.

### Refs

- [1] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
- [2] Gauthier Gidel, Hugo Berard, Gaetan Vignoud, Pascal Vincent, Simon Lacoste-Julien, A Variational Inequality Perspective on Generative Adversarial Networks, In ICLR, 2019.
- [3] Tatjana Chavdarova, Matteo Pagliardini, Sebastian U Stich, François Fleuret, and Martin Jaggi. Taming GANs with Lookahead-Minmax. In ICLR, 2021.
- [4] Tanner Fiez, Lillian J Ratliff. Local Convergence Analysis of Gradient Descent Ascent with Finite Timescale Separation. In ICLR, 2021.
- [5] T. Chavdarova, G. Gidel, F. Fleuret, and Simon Lacoste-Julien. Reducing noise in GAN training with variance reduced extragradient. In NeurIPS, 2019.
- [6] A. Brock, J. Donahue, and K. Simonyan. Large-scale GAN training for high-fidelity natural image synthesis. In ICLR, 2019.

Requested Changes:

**Experiments.**

- Please report the standard deviation over at least 3 seeds.
- Please depict a plot where the x-axis is wall clock time or gradient queries.

**Motivation.** Please better explain the overall motivation: it is surprising to me that the overall goal is not to use

## Questions:

1. Increasing $n_D$ also increases the computation a lot. Have you tried increasing the step size for the discriminator?
2. What would be a recommended way to select $n_D$ after this study?

## Minor / Typos

- Abstract: it would be helpful to make the contributions more precise. For example:
  - "after modifications to training" -- it would be clearer to list these
  - "a careful balance between the generator and discriminator" -- it is unclear what this means, balance in what?
  - "restores parity" -- it is unclear what this means
  - "we experiment with other modifications" -- these should be listed
  - "improvements in generation quality" -- give a precise relative improvement in FID.
- Sec 1: "weakens the discriminator relative to the generator"--while I understand the authors' point, this is very informal phrasing; moreover, since this part is the main motivation for the manuscript, it may be helpful to define things or alternatively to explain something like "X happens, thus herein we refer to it as Y".
- Sec 1: "Disrupting the careful balance necessary for successful GAN training" -- this is unclear. What is meant by careful balance?
- Sec 1: similarly, "addresses the imbalance introduced by noise"
- Sec 1, first contribution: do you mean "outperform" instead of "compete"?
- Sec 1, third contribution: it is unclear how the third contribution differs from the previous two contributions

Broader Impact Concerns: /

==================================================

Review 2:

Summary: The paper studies differentially private training of generative adversarial networks (GAN). The authors show that for a given hyperparameter configuration, taking more discriminator steps per generator step may improve performance (though there is a point at which further increases lead to performance degradation).
Authors also study additional modifications such as large-batch training and an adaptive discriminator step frequency, and show further improvements in results.

Strengths and Weaknesses:

**Strengths**

The authors revisit differentially private training of GANs and perform a more detail-oriented empirical analysis of this framework. By carefully understanding how different design choices of training affect performance, the authors show that better performance can be obtained.

**Weaknesses**

- The authors claim to have proposed the scheme of taking more discriminator steps per generator step. However, this imbalanced stepping scheme was first presented in the original GAN paper and prominent follow-up papers such as WGAN [1]. To be clear, the cited reference (Xie et al., 2018) already employed the imbalanced strategy (5 D steps per G step). Authors should clarify the present work's relationship with prior work. The present work (at least the content related to imbalanced stepping) feels more like an empirical analysis of an existing technique than the proposal of a new one.
- Authors claim that increasing the number of D steps per G step can lead to improved performance and justify this claim by showing experiments with other hyperparameters fixed. However, authors only present results for one set of fixed (other) hyperparameters. For a more compelling argument, authors ideally should repeat the same set of experiments (e.g., those in Figure 2) while varying other hyperparameters, such as the learning rates of the two optimizers, to show that the phenomenon is robust to changes in the overall setup.
- Authors claim that the technique is general but only perform experiments on image generation. To make a more compelling argument, authors should either clarify that the scope is image generation or present additional results for other modalities of data (e.g., tabular data generation).

[1] Arjovsky, Martin, Soumith Chintala, and Léon Bottou. "Wasserstein generative adversarial networks." International conference on machine learning. PMLR, 2017.

**Side notes**

- Have authors explored other loss functions (e.g., those in the WGAN paper)?

Requested Changes: Those listed in the "Weaknesses" section.

Broader Impact Concerns: NA

==================================================

Review 3: Summary: This paper proposes a training algorithm for private GANs that significantly improves over existing private GAN methods. In private GANs, the discriminator is updated using Langevin-type stochastic gradients (clipped stochastic gradients with additive Gaussian noise), resulting in a potentially weak discriminator. This observation motivates the algorithm in the paper, which aims at "balancing" the effective training of the generator and discriminator. In particular, the discriminator is trained for multiple iterations before the generator is updated once. This method is intuitive and relatively simple, yet this should not be perceived as a weakness. In fact, the performance improvement of the proposed algorithm is statistically significant compared to methods in the same class (except diffusion models). Therefore, the empirical evaluation justifies the contribution of the paper.

Strengths and Weaknesses: The authors keep a clean writing style and the paper is easy to follow. I appreciate the comprehensive experimental study of allowing multiple discriminator steps.
In addition to supporting the superior performance of the proposed algorithm, there is ample discussion regarding the tradeoff in the number of discriminator steps, mode collapse, and adaptive discriminator steps.

As mentioned by the authors, the performance of DPGAN does not match that of diffusion models. This could undermine the value of the paper. However, GANs often allow fast generation due to the simple generator structure, while diffusion models are slow in generation because of the iterative backward simulation. In order to strengthen the result, it might be helpful to argue that DPGANs generate much faster while achieving performance competitive with diffusion models.

(Minor) The generated images in Figure 8 and Figure 13 seem not very appealing compared to those of diffusion models.

Requested Changes: Definition 1 and Proposition 2 are slightly distracting. I understand the intention of a rigorous exposition; however, these results are not needed to appreciate the strength of the paper.

Mode collapse in Figure 3 needs a better explanation. Looking at the figures, I suspect 7 and 9 are mixed up and look similar.

It appears that the discriminative power of the discriminator is highly relevant to the performance of DPGANs. Does varying the size of the discriminator influence the performance?

Broader Impact Concerns: No concerns.

==================================================

Metareview:

Recommendation: Accept as is

Comment: Reviewers initially found that the significance of the work was low, as it is just about one hyperparameter whose effects were previously well known. The authors did a good job discussing the non-private GAN work, and clearly discussed their contribution in a revision. Other reviewers had detailed pointers on improving the experiments and presentation. After (essentially) a major revision, the quality of the paper seems to have improved a lot. Reviewers are now leaning towards accepting the paper. I agree with the reviewers. I recommended "accept as is", but since Reviewer UGFA did not have time to respond, the authors should do their best to go over the comments of UGFA again and make sure that every point in the review is satisfactorily addressed in the rebuttal.

Other comments
- Tuning one more hyperparameter clearly helps, but I feel that it is time for the empirical DP community to start doing honest hyperparameter tuning with DP. It is fine to argue that existing work does not report such results, but the problem and the rationale behind the choice to continue this practice should be explained in the paper.
- Please align the color choices for the different n_D parameters in Figure 2; e.g., blue is n_D = 1 in (a) but 10 in (b) and (c), which may cause confusion.

==================================================
# Learning Online Data Association

Anonymous authors Paper under double-blind review

## Abstract

When an agent interacts with a complex environment, it receives a stream of percepts in which it may detect entities, such as objects or people. To build up a coherent, low-variance estimate of the underlying state, it is necessary to fuse information from multiple detections over time. To do this fusion, the agent must decide which detections to associate with one another. We address this data-association problem in the setting of an online filter, in which each observation is processed by aggregating it into an existing object hypothesis. Classic methods with strong probabilistic foundations exist, but they are computationally expensive and require models that can be difficult to acquire. In this work, we use the deep-learning tools of sparse attention and representation learning to learn a machine that processes a stream of detections and outputs a set of hypotheses about objects in the world. We evaluate this approach on simple clustering problems, problems with dynamics, and complex image-based domains. We find that it generalizes well from short to long observation sequences and from a few to many hypotheses, outperforming other learning approaches and classical non-learning methods.

## 1 **Introduction**

Consider a robot operating in a household, making observations of multiple objects as it moves around over the course of days or weeks. The objects may be moved by the inhabitants, even when the robot is not observing them, and we expect the robot to be able to find any of the objects when requested. We will call this type of problem *entity monitoring*. It occurs in many applications, but we are particularly motivated by robotics applications in which the observations are very high dimensional, such as images. Such systems need to perform online *data association*, determining which individual objects generated each observation, and *state estimation*, aggregating the observations of each individual object to obtain a representation that is lower variance and more complete than any individual observation. This problem can be addressed by an online recursive *filtering* algorithm that receives a stream of object detections as input and generates, after each input observation, a set of hypotheses corresponding to the actual objects observed by the agent.

When observations are closely spaced in time, the entity monitoring problem becomes one of *tracking*, and it can be constrained by knowledge of the object dynamics. In many important domains, such as the household domain, temporally dense observations are not available, and so it is important to have systems that do not depend on continuous visual tracking. A classical solution to the entity monitoring problem, developed for the tracking case but extensible to other dynamic settings, is a *data association filter* (daf) (the tutorial of Bar-Shalom et al. (2009) provides a good introduction). A Bayes-optimal solution to this problem can be formulated, but it requires representing a number of possible hypotheses that grows exponentially with the number of observations.
A much more practical, though much less robust, approach is a maximum likelihood daf (ml-daf), which commits, on each step, to a maximum likelihood data association: the algorithm maintains a set of object hypotheses, one for each object (generally starting with the empty set), and for each observation it decides to either: (a) associate the observation with an existing object hypothesis and perform a Bayesian update on that hypothesis with the new data, (b) start a new object hypothesis based on this observation, or (c) discard the observation as noise.

The engineering approach to constructing an ml-daf requires many design choices, including the specification of a latent state space for object hypotheses, a generative model relating observations to objects, and thresholds or other decision rules for choosing, for a new observation, whether to associate it with an existing hypothesis, use it to start a new hypothesis, or discard it. In any particular application, the engineer must tune all of these models and parameters to build a daf that performs well. This is a time-consuming process that must be repeated for each new application, and it is effectively impossible to do by hand when observations and hypotheses are high dimensional.

A special case of entity monitoring is one in which the objects' state is static and does not change over time. In this case, a classical solution is online (robust) clustering. Online clustering algorithms perform data association (cluster assignment) and state estimation (computing a cluster center). In this paper we explore dafs both for dynamic entity monitoring and as online clustering methods for static entity monitoring.

Although it is possible to train an unstructured RNN to solve these problems, we believe that building in some aspects of the structure of the daf will allow faster learning with less data and allow the system to address problems with a longer horizon. We begin by briefly surveying the related literature, particularly focused on learning-based approaches. We then describe a neural-network architecture that uses self-attention as a mechanism for data association, and demonstrate its effectiveness in several illustrative problems. We find that it outperforms a raw RNN as well as domain-agnostic online clustering algorithms, and that it performs competitively with batch clustering strategies that can see all available data at once and with state-of-the-art dafs for tracking that use hand-built dynamics and observation models. Finally, we illustrate its application to problems with images as observations, in which both data association and the use of an appropriate latent space are critical.

## 2 **Related Work**

Online clustering methods The typical setting for clustering problems is *batch*, where all the data is presented to the algorithm at once, and it computes either an assignment of data points to clusters or a set of cluster means, centers, or distributions. We are interested in the *online* setting, with observations arriving sequentially and a cumulative set of hypotheses output after each observation. One of the most basic online clustering methods is *vector quantization*, articulated originally by Gray (1984) and understood as a stochastic gradient method by Kohonen (1995). It initializes cluster centers at random, assigns each new observation to the closest cluster center, and updates that center to be closer to the observation (a minimal sketch of this update appears below). Methods with stronger theoretical guarantees, and methods that handle unknown numbers of clusters, have also been developed.
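For concreteness, the following is a minimal sketch of the vector-quantization update just described. It is illustrative only: the learning rate, initialization range, and function name are our assumptions, not choices made in this paper.

```python
import numpy as np

def vq_online(observations, k, lr=0.1, seed=0):
    """Minimal online vector quantization in the style of Gray (1984).

    Centers are initialized at random; each arriving observation is
    assigned to the closest center, which is then moved toward it.
    """
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-1.0, 1.0, size=(k, observations.shape[1]))
    for z in observations:  # observations arrive one at a time (online)
        nearest = np.argmin(np.linalg.norm(centers - z, axis=1))
        centers[nearest] += lr * (z - centers[nearest])  # move center toward z
    return centers
```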
Charikar et al. (2004) formulate the problem of online clustering and present several algorithms with provable properties. Liberty et al. (2016) explore online clustering in terms of the facility location problem, using a probabilistic threshold to allocate new clusters in data. Choromanska & Monteleoni (2012) formulate online clustering as a mixture of separate expert clustering algorithms. Online clustering has also been studied in the data-stream community. Silva et al. (2013) provide a survey of clustering approaches for data streams, which sometimes allow multiple passes through the data. Interesting variations construct a core-set of points to be clustered (Ackermann et al., 2012) and maintain balanced trees online (Kobren et al., 2017).

Learning for clustering There is a great deal of work using deep-learning methods to find latent spaces for clustering complex objects, particularly images. Min et al. (2018) provide an excellent survey, including methods with auto-encoders, GANs, and VAEs. Relevant to our approach are *amortized inference* methods, including *set transformers* (Lee et al., 2018) and their specialization to *deep amortized clustering* (Lee et al., 2019), in which a neural network is trained to map directly from data to be clustered into cluster assignments or centers. A related method is *neural clustering processes* (Pakman et al., 2019), which includes an online version and focuses on generating samples from a distribution on cluster assignments, including an unknown number of clusters.

Dynamic domains In the setting where the underlying entities have dynamics, such as airplanes observed via radar, a large number of dafs have been developed. The most basic filter, for the case of a single entity and no data association problem, is the Kalman filter (Welch & Bishop, 2006). In the presence of data-association uncertainty, the Kalman filter can be extended by considering assignments of observations to multiple existing hypotheses under the multiple hypothesis tracking (MHT) filter. A more practical approach that does not suffer from the combinatorial explosion of the MHT is the joint probabilistic data association (JPDA) filter, which keeps only one hypothesis but explicitly reasons about the most likely assignment of observations to hypotheses. Bar-Shalom et al. (2009) provide a detailed overview and comparison of these approaches, all of which require hand-tuned transition and observation models.

Visual data-association methods Data association has been explored in the context of visual object tracking (Luo et al., 2014; Xiang et al., 2015; Brasó & Leal-Taixé, 2020; Ma et al., 2019; Sun et al., 2019; Zhang et al., 2022). In these problems, there is typically a fixed visual field populated with many smoothly moving objects. This is an important special case of the general data-association problem. It enables some specialized techniques that take advantage of the fact that the observations of each object typically vary smoothly in space-time, and that incorporate additional visual appearance cues. In contrast, we are interested in exploring the general data-association problem, where observations are not necessarily temporally correlated.

Learning for data association There is relatively little work in the area of generalized data association, but Liu et al.
(2019) provide a recent application of LSTMs to a rich version of the data association problem, in which batches of observations arrive simultaneously, with a constraint that each observation can be assigned to at most one object hypothesis. The sequential structure of the LSTM is used here not for recursive filtering, but to handle the variable numbers of observations and hypotheses. It is assumed that Euclidean distance is an appropriate metric and that the observation and state spaces are the same. Milan et al. (2017) combine a similar use of an LSTM for data association with a recurrent network that learns to track multiple targets. It learns a dynamics model for the targets, including birth and death processes, but operates in simple state and observation spaces.

Slot Based and Object Centric Learning Our approach to the dynamic entity monitoring task relies on the use of attention over a set of object hypothesis slots. Generic architectures for processing such slots can be found in (Vinyals et al., 2015; Lee et al., 2018), where we use (Lee et al., 2018) as a point of comparison for DAF-Net. We note that these architectures provide generic mechanisms to process sets of inputs, and lack the explicit daf structure we build into our model. Our individual hypothesis slots correspond to beliefs over object hypotheses, and thus also relate to existing work in object-centric scene learning. Such work has explored the discovery of factorized objects from static scenes (Burgess et al., 2019; Greff et al., 2019; Locatello et al., 2020; Du et al., 2021; Kipf et al., 2022), but does not focus on filtering and updating existing object hypotheses.

Algorithmic priors for neural networks One final comparison is to other methods that integrate algorithmic structure with end-to-end neural network training. This approach has been applied to sequential decision making by Tamar et al. (2016), particle filters by Jonschkowski et al. (2018), and Kalman filters by Krishnan et al. (2015), as well as to a complex multi-module robot control system by Karkus et al. (2019). The results generally are much more robust than completely hand-built models and much more sample-efficient than completely unstructured deep learning. We view our work as an instance of this general approach.

## 3 **Problem Formulation**

The problem of learning to perform online data association requires careful formulation. When the daf is executed online, it will receive a stream of input detections $z_1, \ldots, z_T$ where $z_t \in \mathbb{R}^{d_z}$, and after each input $z_t$, it will output two vectors, $y_t = [y_{tk}]_{k\in(1..K)}$ and $c_t = [c_{tk}]_{k\in(1..K)}$, where $y_{tk} \in \mathbb{R}^{d_y}$, $c_{tk} \in (0, 1)$, and $\sum_k c_{tk} = 1$. The $y$ values in the output represent the predicted properties of the hypothesized objects, and the $c$ values represent a measure of confidence in the hypotheses, in terms of the proportion of data that each one has accounted for. The maximum number of hypothesis "slots" is limited in advance to $K$. In some applications, the $z$ and $y$ values will be in the same space with the same representation, but this is not necessary.

We have training data representing $N$ different data-association problems, $\mathcal{D} = \{(z_t^{(i)}, m_t^{(i)})_{t\in(1..L_i)}\}_{i\in(1..N)}$, where each training example is an input/output sequence of length $L_i$, each element of which consists of a pair of an input $z$ and $m = \{m_j\}_{j\in(1..J_t^{(i)})}$, which is a set of nominal object hypotheses representing the true current state of objects that have actually been observed so far in the sequence.
It will always be true that $m_t^{(i)} \subseteq m_{t+1}^{(i)}$ and $J_t^{(i)} \leq K$. Our objective is to train a recurrent computational model to perform daf effectively in problems that are drawn from the same distribution as those in the training set. To do so, we formulate a model (described in Section 4) with parameters $\theta$, which transduces the input sequence $z_1, \ldots, z_L$ into an output sequence $(y_1, c_1), \ldots, (y_L, c_L)$, and train it to minimize the following loss function:

$${\mathcal{L}}(\theta;{\mathcal{D}})=\sum_{i=1}^{N}\sum_{t=1}^{L_{i}}{\mathcal{L}}_{\mathrm{obj}}(y_{t}^{(i)},m_{t}^{(i)})+{\mathcal{L}}_{\mathrm{slot}}(y_{t}^{(i)},c_{t}^{(i)},m_{t}^{(i)})+{\mathcal{L}}_{\mathrm{sparse}}(c_{t}^{(i)})\;\;.$$

The $\mathcal{L}_{\mathrm{obj}}$ term is a *chamfer loss* (Barrow et al., 1977), which looks for the predicted $y_k$ that is closest to each actual $m_j$ and sums their distances, making sure the model has found a good, high-confidence representation for each true object:

$${\mathcal{L}}_{\mathrm{obj}}(y,m)=\sum_{j}\operatorname*{min}_{k}{\frac{1}{c_{k}+\epsilon}}\|y_{k}-m_{j}\|\ \ .$$

The $\mathcal{L}_{\mathrm{slot}}$ term is similar, but makes sure that each object the model has found is a true object, where we multiply by $c_k$ so as not to penalize predicted objects in which we have low confidence:

$${\mathcal{L}}_{\mathrm{slot}}(y,c,m)=\sum_{k}\operatorname*{min}_{j}c_{k}\|y_{k}-m_{j}\|\ \ .$$

The sparsity loss discourages the model from using multiple outputs to represent the same true object:

$${\mathcal{L}}_{\mathrm{sparse}}(c)=-\log\lVert c\rVert\ \ ,$$

and we show theoretically in Section 5 how this induces sparsity in the confidences.

## 4 **Daf-Nets**

Inspired by the basic form of classic daf algorithms and the ability of modern neural-network techniques to learn complex models, we have designed the DAF-Net architecture for learning dafs, together with a customized procedure for training it from data, guided by several design considerations. First, because object hypotheses must be available after each individual input, and because observations will generally be too large and the problem too difficult to solve from scratch each time, the network has the structure of a recursive filter, with new memory values computed on each observation and then fed back for the next. Second, because the loss function is *set based*, that is, it does not matter in what order the object hypotheses are delivered, our memory structure should also be permutation invariant, and so the memory processing is in the style of an attention mechanism. Finally, because in some applications the observations $z$ may be in a representation not well suited for hypothesis representation and aggregation, the memory operates on a latent representation that is related to observations and hypotheses via encoder and decoder modules.

Figure 1 shows the architecture of the DAF-Net model and an illustration of its similarity to existing daf approaches. The memory of the system is stored in $s$, which consists of $K$ elements, the $K$ hypotheses in a daf, each in $\mathbb{R}^{d_s}$; the length-$K$ vector $n$ of positive values encodes how many observations have been assigned to each slot during the execution so far. New observations are combined with the memory state, and the state is updated to reflect the passage of time, by a neural network constructed from seven modules with trainable weights. When an observation $z$ arrives, it is immediately **encode**d into a vector $e \in \mathbb{R}^{d_s}$, which is fed into subsequent modules.
First, **attention** weights $w$ are computed for each hypothesis slot, using the encoded input and the existing content of that slot, representing the degree to which the current input "matches" the current value of each hypothesis in memory, mirroring the hypothesis-matching procedure in dafs.

Figure 1: **Architecture and pseudocode of DAF-Net.** DAF-Net serves as a learned analogue of a daf filter. The traditional hypothesis representation $h_k$ in a daf filter is replaced with a latent representation $s_k$. Hypothesis matching is replaced by the sparse attention operators **suppress** and **attend**. Hypothesis updating is replaced by an **update** operator, and dynamics simulation is replaced by a learned **transition** operator. Output decoding is replaced by a learned **decode** operator.

Since an observation typically matches only a limited number of hypotheses in dafs, we force the network to commit to a sparse assignment of observations to object hypotheses while retaining the ability to effectively train with gradient descent: the **suppress** module sets all but the top $M$ values in $w$ to 0 and renormalizes, to obtain the vector $a$, whose $M$ nonzero values sum to 1:

$$w_{k}={\frac{\exp(\mathbf{attend}(s_{k},n_{k},e))}{\sum_{j=1}^{K}\exp(\mathbf{attend}(s_{j},n_{j},e))}}\ ;\ \ a=\mathbf{suppress}(w)$$

The $a$ vectors are integrated to obtain $n$, which is normalized to obtain the output confidence $c$. Mirroring hypothesis updates in dafs, the **update** module also operates on the encoded input and the contents of each hypothesis slot, producing a hypothetical update of the hypothesis in that slot under the assumption that the current $z$ is an observation of the object represented by that slot; so for all slots $k$,

$$u_{k}=\mathbf{update}(s_{k},n_{k},e)\;\;.$$

To enable the rejection of outlier observations, a scalar **relevance** value, $r \in (0, 1)$, is computed from $s$ and $e$; this value modulates the degree to which slot values are updated, and gives the machine the ability to ignore or downweight an input. It is computed as

$$r=\mathbf{relevance}(e,s,n)=\mathrm{NN}_{2}(\operatorname{avg}_{k}\mathrm{NN}_{1}(e,s_{k},n_{k}))\;\;,$$

where $\mathrm{NN}_1$ is a fully connected network with the same input and output dimensions and $\mathrm{NN}_2$ is a fully connected network with a single sigmoid output unit.

The attention output $a$ and relevance $r$ are now used to decide how to combine all possible slot updates $u$ with the old slot values $s_t$ using the following fixed formula for each slot $k$:

$$s'_{tk} = (1 - r a_{k})\,s_{tk} + r a_{k}\,u_{k}\;\;.$$

Because most of the $a_k$ values have been set to 0, this results in a sparse update which will ideally concentrate on a single slot to which this observation is being "assigned". To obtain outputs, the slot values $s'_t$ are then **decode**d into the outputs $y$ using a fully connected network:

$$y_{k}=\mathbf{decode}(s'_{tk})\;\;.$$

Finally, to simulate transition updates in dafs and to handle the setting in which object state evolves over time, we add a **transition** module, which computes the state $s_{t+1}$ from the new slot values $s'_t$ using an additional neural network:

$$s_{t+1,k}=\mathbf{transition}(s'_{t})_{k}\;\;.$$

These values are then fed back, recurrently, as inputs to the overall system. Given a data set $\mathcal{D}$, we train the DAF-Net model end-to-end to minimize the loss function $\mathcal{L}$, with a slight modification.
We find that including the $\mathcal{L}_{\mathrm{sparse}}$ term from the beginning of training results in poor learning, but that adopting a training scheme in which $\mathcal{L}_{\mathrm{sparse}}$ is first omitted and then reintroduced over training epochs results in reliable training that is efficient in both time and data.

## 5 **Theoretical Analysis**

In this section, we study the extent to which DAF-Net may learn to construct an optimal daf. First, we illustrate how our underlying slot-based architecture enables more efficient learning under the framework of algorithmic alignment introduced by Xu et al. (2019). We further illustrate how $\mathcal{L}_{\mathrm{sparse}}$ induces sparsity across slot values.

First we analyze the underlying sample complexity of learning DAF-Net, which represents a set of belief states as a set of slots, compared to a network which explicitly represents intermediate beliefs as a single flattened vector. We use the notion of sample complexity analysis introduced by Xu et al. (2019). For simplicity, we consider the online data-association problem of clustering, where each network takes as input an observation $z$ and a set of $K$ previously predicted cluster centers $y_k$, and must correctly predict the updated state of each observed cluster center. In addition, we assume, for purposes of applying existing theoretical results, that the system is supervised with the ground-truth clusters at each step.

Proposition 1. *Consider the problem of performing one step of an online clustering algorithm, in which $K$ current cluster centers $y_1, \ldots, y_K$ and a new element $x$ are the inputs, and the updated cluster centers $y'_1, \ldots, y'_K$ are the outputs. We consider two different architectures: (1) a generic MLP, which we denote as $f(x)$, in which the inputs $y_1, \ldots, y_K, x$ are simply concatenated into an input vector and the outputs $y'_1, \ldots, y'_K$ form an output vector, and (2) an instance of DAF-Net with no transition module, which we denote as $g(x)$. The sample complexity of learning the MLP for $K$ clusters is $K$ times the sample complexity of learning the DAF-Net.*

Proof. The sample complexity of learning to approximate a function $h : \mathbb{R}^d \to \mathbb{R}^m$, where $h^{(i)}(x) = \sum_j \alpha_j^{(i)} (\beta_j^{(i)} \cdot x)^{p_j^{(i)}}$, with an MLP to error less than $\epsilon$ with probability $1-\delta$ is asymptotically given by (Xu et al., 2019):

$${\mathcal{C}}(h,\epsilon,\delta)=O\left(\frac{\operatorname*{max}_{i}\sum_{j=1}^{K}p_{j}^{(i)}\|\alpha_{j}^{(i)}\|\|\beta_{j}^{(i)}\|_{2}^{p_{j}^{(i)}}+\log(m/\delta)}{(\epsilon/m)^{2}}\right).\qquad(1)$$

The above expression implies that the sample complexity of learning a neural network to approximate a function $h(x)$ is proportional to the underlying number of polynomial terms needed to represent $h(x)$. A sketch of our proof is that the number of polynomial terms necessary to accurately represent the clustering procedure using an MLP $f(x)$ is $K$ times the number of terms using DAF-Net $g(x)$. This statement is true because $g(x)$ only needs to learn the clustering operation *per slot*, as the underlying computation is replicated across slots, while $f(x)$ needs to learn the clustering operation for all slots simultaneously.

Concretely, consider learning a simplified cluster-update function, $h(z, y_k) = (1 - w_k)\,y_k + w_k\,z$, for each cluster center $k$, where $w_k = \frac{1/\|z-y_k\|}{\sum_{k'} 1/\|z-y_{k'}\|}$ is a constant predicted by a fixed network, determining the extent to which $z$ should be assigned to $y_k$. The DAF-Net architecture operates independently on each cluster, and is required to approximate a function $h$ consisting of two polynomial terms.
However, the MLP architecture operates jointly on all of the clusters, without any parameter tying, and would require $2K$ polynomial terms, as it needs to jointly approximate the function $h$ across all cluster centers. Thus it is more sample-efficient to learn DAF-Net.

Our previous result illustrates the benefits of learning filters using the architecture of DAF-Net. Next, we illustrate that our proposed sparsity loss induces sparsity in the assignment of each observation to potential clusters.

Proposition 2. *The sparsity loss*

$$\mathcal{L}_{\mathrm{sparse}}(c)=-\log\lVert c\rVert\;,\qquad(2)$$

*where $\sum_i c_i = 1$ and $c_i \geq 0$, is minimized when a particular element $c_k = 1$ and maximized when the individual confidences are equal.*

Proof. Recall that $\lVert c\rVert$ is a convex function. The constraints on the confidence vector $c$ define a simplex: each element is non-negative and the elements sum to one. The maximum of a convex function over a simplex must occur at the vertices. The vertices of this simplex are symmetric, and each corresponds to a value of $c$ with one hypothesis $k$ having confidence $c_k = 1$ and all other confidences equal to 0. In contrast, the minimum corresponds to a stationary point of the Lagrangian of the loss. The Lagrangian $L(c, \lambda)$ is

$$L(c,\lambda)=\sum_{i}c_{i}^{2}+\lambda\Big(\sum_{i}c_{i}-1\Big).\qquad(3)$$

By taking the gradient of the above expression, we find that the stationary point corresponds to all $c_i$ being equal. Since the function is convex, this corresponds to the minimum of $\lVert c\rVert$. Thus $\mathcal{L}_{\mathrm{sparse}}(c)$ is maximized when the individual confidences are equal and minimized when the individual confidences are sparse.

## 6 **Empirical Results**

We evaluate DAF-Net on several different *data association* tasks. First, we consider a simple online clustering task and validate the underlying machinery of DAF-Net as well as its ability to generalize at inference time to differences in (a) the number of actual objects, (b) the number of hypothesis slots, and (c) the number of observations. Next, we evaluate the performance of DAF-Net on dynamic domains. Finally, we evaluate the performance of DAF-Net on an image domain in which the underlying observation space is substantially different from the hypothesis space. Our experiments support the following conclusions:

- DAF-Net outperforms non-learning clustering methods, even those that operate in batch mode rather than online, because those methods cannot learn from experience to take advantage of information about the distribution of observations and true object properties (Tables 1 and 6).
- DAF-Net outperforms clustering methods that can learn from previous example problems when data is limited, because it provides useful structural bias for learning (Tables 1 and 6).
- DAF-Net generalizes to differences between training and testing in (a) the numbers of actual objects (Figures 6 and 3), (b) the numbers of hypothesis slots (Table 2), and (c) the number of observations (Figure 3).
- DAF-Net works when significant encoding and decoding are required (Table 6).
- DAF-Net is able to learn dynamics models and observation functions for the setting where the entities are moving over time (Table 5), nearly matching the performance of strong data association filters with known ground-truth models.

Baselines and Metrics In each domain, we compare DAF-Net to the online learned baselines of an LSTM (Hochreiter & Schmidhuber, 1997) and the Set Transformer (Lee et al., 2018) (details in A.3), as well as to task-specific baselines. All learned network architectures are structured to use ∼50000 parameters.
Unless otherwise noted, models other than DAF-Net are given, and asked to predict, the ground-truth number of components $K$, while DAF-Net uses 10 hypothesis slots. Results are reported in terms of the MSE error $\frac{1}{K}\sum_{k}\min_{j}\|y_{k}-m_{j}\|$ (computed with respect to the most confident $K$ hypotheses for DAF-Net).

## 6.1 **Online Clustering**

Setup. To check the basic operation of the model and understand the types of problems for which it performs well, we tested on simple clustering problems with the same input and output spaces, but different types of data distributions, each a mixture of three components. We train on 1000 problems with observation sequences of length 30 drawn from each problem distribution and test on 5000 from the same distribution. In every case, the means of the three components are drawn at random for each problem. We provide precise details about the distributions in Section A.1.

| Model | Online | Learned | Normal | Elongated | Mixed | Angular | Noise |
|-----------------|--------|---------|--------|-----------|-------|---------|-------|
| DAF-Net | + | + | 0.157 | 0.191 | 0.184 | 0.794 | 0.343 |
| Set Transformer | + | + | 0.407 | 0.395 | 0.384 | 0.794 | 0.424 |
| LSTM | + | + | 0.256 | 0.272 | 0.274 | 0.799 | 0.408 |
| VQ | + | - | 0.173 | 0.195 | 0.191 | 0.992 | 0.947 |
| Set Transformer | - | + | 0.226 | 0.248 | 0.274 | 0.816 | 0.406 |
| Slot Attention | - | - | 0.254 | 0.267 | 0.268 | 0.823 | 0.504 |
| K-means++ | - | - | 0.103 | 0.139 | 0.135 | 0.822 | 1.259 |
| GMM | - | - | 0.113 | 0.141 | 0.136 | 0.865 | 1.207 |

Table 1: **Quantitative Results on Online Clustering.** Comparison of clustering performance across different distributions. Reported error is the L2 distance between predicted and ground-truth means. Methods in the bottom half of the table operate on observations in a single batch and thus are not directly comparable.

Figure 2: **Qualitative Visualization of DAF-Net.** Illustration of DAF-Net execution on the *Normal* distribution setting. Decoded value of each hypothesis (with size corresponding to confidence) shown in red, with ground-truth clusters in black. Observations are shown in blue.

1. *Normal*: Each component is a 2D Gaussian with fixed identical variance across each individual dimension and across distributions. This is a basic "sanity check."
2. *Elongated*: Each component is a 2D Gaussian, where the variance along each dimension is drawn from a uniform distribution, but fixed across distributions.
3. *Mixed*: Each component is a 2D Gaussian, with fixed identical variance across each individual dimension, but with the variance of each distribution drawn from a uniform distribution.
4. *Angular*: Each component is a 2D Gaussian with identical variance across dimensions and distributions, but points above π are wrapped around to −π and points below −π are wrapped to π.
5. *Noise*: Each component has 2 dimensions parameterized by Gaussian distributions, but with the values of the remaining 30 dimensions drawn from a uniform distribution centered at 0.

Results. We compare our approach to each of the baselines for the five problem distributions in Table 1.
The results in this table show that on the *Normal*, *Mixed*, and *Elongated* tasks, DAF-Net performs better than learned and constructed online clustering algorithms, but does slightly worse than offline clustering algorithms. Such a discrepancy in performance is to be expected, given that DAF-Net is running and being evaluated online. On the *Angular* and *Noise* tasks, DAF-Net is able to learn a useful metric for clustering and outperforms both offline and online alternatives.

| Model | Slots | 3 | 5 | 7 |
|---------------------|-------|-------|-------|-------|
| DAF-Net | 10 | 0.162 | 0.214 | 0.242 |
| DAF-Net | 20 | 0.175 | 0.195 | 0.213 |
| DAF-Net | 30 | 0.188 | 0.197 | 0.205 |
| Set Transformer | - | 0.261 | 0.279 | 0.282 |
| Vector Quantization | - | 0.171 | 0.199 | 0.205 |

Figure 3: **Generalization with Increased Observations.** Plot of LSTM, Set Transformer, and DAF-Net errors when executed *at test time* on different numbers of observations from the *Normal* distribution. With increased observations, DAF-Net error continues to decrease while the other approaches obtain higher error.

Table 2: **Generalization with Different Hypothesis Slots.** Error of DAF-Net when executed *at test time* with a different number of hypothesis slots (rows), on test distributions with different numbers of ground-truth components (columns). In all cases, DAF-Net is trained on 3-component problems with 10 slots. DAF-Net achieves good performance with novel numbers of hypothesis slots, and outperforms different instances of the Set Transformer trained with the ground-truth number of cluster components, as well as vector quantization.

Figure 4: **Visualization of Attention Weights.** Plot of decoded values of slots (in red), with confidence shown by the size of the dot (left), and the slot to which each input assigns the highest attention (right); each slot is colored differently, with assigned inputs colored in the same way. Note the alignment of regions on the right with the decoded value of a slot on the left.

Figure 5: **Visualization of Relevance Weights.** Plots of the magnitude of relevance weights with increased observation number on different distributions with differing standard deviation (noise).

Next, we provide a qualitative illustration of the execution of DAF-Net on the *Normal* clustering task in Figure 2 as a trajectory of observations is seen. We plot the decoded values of hypothesis slots in red, with size scaled according to confidence, and ground-truth cluster locations in black. DAF-Net is able to selectively refine slot clusters to be close to ground-truth cluster locations, even with much longer observation sequences than it was trained on.

Baselines. In addition to the baselines discussed earlier, we further compare with clustering-specific baselines: **Batch, non-learning**: K-means++ (Arthur & Vassilvitskii, 2007) and expectation maximization (EM) (Dempster et al., 1977) on a Gaussian mixture model (scikit-learn implementation); **Online, non-learning**: vector quantization (Gray, 1984). We further provide a comparison to recent concurrent work (Locatello et al., 2020), which also utilizes attention to obtain slots of objects.

Generalization. We next assess the ability of DAF-Net to generalize at inference time to differences in the number of input observations, as well as to differences in the underlying number of hypothesis slots used, on the *Normal* distribution.
In Figure 3, we plot the error of the LSTM, Set Transformer, and DAF-Net as a function of the number of observations seen at inference time. We find that when DAF-Net is given more observations than seen during training (all models are trained with observation sequences of length 30), it is able to further improve its performance, while both the LSTM and Set Transformer results suffer. We believe that this generalization ability is due to the inductive bias added to DAF-Net. We provide additional analysis of this generalization across all distributions in Table 7 and find similar results.

In Table 2, we investigate the ability of DAF-Net to generalize at *inference time* to increases in both the number of hypothesis slots and the underlying number of mixture components from which observations are drawn. We compare to the Set Transformer and to VQ, both of which know the correct number of components at inference time. We find that DAF-Net generalizes well to increases in hypothesis slots, and exhibits improved performance with larger numbers of underlying components, performing comparably to or better than the VQ algorithm. We further note that none of the *learning* baselines can adapt to different numbers of cluster components at inference time, but find that DAF-Net outperforms the Set Transformer even when the latter is trained on the ground-truth number of clusters in the test distribution.

Submodule Visualization. We find that the individual modules learned by DAF-Net are interpretable. We visualize the attention weights of each hypothesis slot in Figure 4 and find that each hypothesis slot learns to attend to a local region near the value it decodes to. We further visualize a plot of relevance weights in Figure 5 across an increasing number of observations, where individual observations are drawn from distributions with different levels of noise with respect to cluster centers. We find that as DAF-Net sees more observations, the relevance weight of new observations decreases over time, indicating that DAF-Net learns to pay the most attention to the first set of observations it sees.

| Sparsity | Learned Memory | Suppression | Relevance | 10 | 30 | 50 | 100 |
|----------|----------------|-------------|-----------|-------|-------|-------|-------|
| - | - | - | - | 0.382 | 0.452 | 0.474 | 0.487 |
| + | - | - | - | 0.384 | 0.412 | 0.423 | 0.430 |
| + | + | - | - | 0.335 | 0.357 | 0.366 | 0.387 |
| + | + | + | - | 0.279 | 0.274 | 0.278 | 0.282 |
| + | + | + | + | 0.238 | 0.157 | 0.137 | 0.131 |

Table 3: **Ablation Analysis.** We ablate each component of DAF-Net on the *Normal* distribution; the four rightmost columns report error for 10, 30, 50, and 100 observations. When learned memory is ablated, DAF-Net updates states based on observed values (appropriate in the *Normal* distribution dataset).

Figure 6: **Generalization to Increased Cluster Number.** Plots of the inferred number of components, using a confidence threshold in DAF-Net, compared to the ground-truth number of clusters (DAF-Net is trained on only 3 clusters). We consider two scenarios: a noisy scenario where cluster centers are randomly drawn from -1 to 1 (left), and a scenario where all added cluster components are well separated from each other (right). DAF-Net is able to infer more clusters in both scenarios, with better performance when cluster centers are more distinct from each other.
In addition, we find that in distributions with higher variance, the relevance weight decreases more slowly, as later observations are then more informative in determining cluster centers.

Ablation. We ablate each component of DAF-Net and present results on the *Normal* distribution in Table 3. We test removing $\mathcal{L}_{\mathrm{sparse}}$ (sparsity); removing learned slot embeddings (learned memory), where instead we store the explicit values of inputs in the individual hypothesis slots; removing the **suppress** module (suppression); and removing the **relevance** module (relevance). We find that each of our proposed components enables better performance on the underlying clustering task. Interestingly, we further find that the addition of **relevance** enables our approach to generalize at test time to larger numbers of observations.

Inferring Object Number. In contrast to other algorithms, DAF-Net learns to predict both a set of object properties $y_k$ and a set of confidences $c_k$ for each object. This corresponds to the task of predicting both the number of objects in a set of observations and the associated object properties. We evaluate the ability of DAF-Net to regress the object number at *test time* in scenarios where the number of objects (underlying clusters) is different from that seen during training. We evaluate on the *Normal* distribution with a variable number of component distributions, and measure the number of inferred components through a confidence threshold. DAF-Net is trained on a dataset with 3 underlying components. We find in Figure 6 that DAF-Net is able to infer the presence of more component distributions (as they vary from 3 to 6), with improved performance when cluster centers are sharply separated (right panel of Figure 6).

Performance on More Clusters. We find that DAF-Net also exhibits good performance when trained and tested on domains with a larger number of slots/clusters. To test this, we utilize the *Normal* distribution setting, but generate the underlying training input observations from a total of 30 different components, and train DAF-Net with a total of 30 slots. We train DAF-Net with 50 observations, and measure performance at inferring cluster centers with between 50 and 100 observations. We report performance in Table 4 and find that DAF-Net obtains good performance in this setting, outperforming the strong online baseline VQ, and performing similarly to K-means++, which operates on all input observations at once.

| Model | Online | 50 | 65 | 80 | 100 |
|-----------|--------|-------|-------|-------|-------|
| DAF-Net | + | 0.158 | 0.154 | 0.151 | 0.147 |
| VQ | + | 0.162 | 0.157 | 0.153 | 0.148 |
| K-means++ | - | 0.155 | 0.151 | 0.148 | 0.146 |
| GMM | - | 0.156 | 0.151 | 0.149 | 0.147 |

Table 4: **Performance on Large Number of Clusters.** Comparison of performance on the *Normal* distribution when the underlying distributions have a large number of components; columns report error for 50, 65, 80, and 100 observations. We use 30 components and train models with 50 observations. Each cluster observation and center is drawn between -1 and 1, with the reported error being the L2 distance between predicted and ground-truth means.

| Model | 10 | 20 | 30 | 40 |
|---------------------|-----------|-----------|-----------|-----------|
| DAF-Net | **0.415** | 0.395 | 0.382 | 0.394 |
| Set Transformer | 0.699 | 0.701 | 0.854 | 1.007 |
| LSTM | 0.422 | 0.400 | 0.445 | 0.464 |
| JPDA (ground truth) | 0.683 | **0.372** | **0.362** | **0.322** |

Table 5: **Performance on Dynamic Objects.** Comparison of different methods on estimating the state of 3 dynamically moving objects; columns report error for 10, 20, 30, and 40 observations.
All learning models are trained with 1000 sequences of 30 observations. We report MSE error. JPDA uses the ground-truth observation and dynamics models. JPDA is outperformed by the learned approaches at 10 observations, as these approaches are able to average over possible outputs to minimize MSE error.

## 6.2 **Dynamic Domains**

Now we study the ability of DAF-Net to perform data association in a dynamic setting and compare its performance with that of a classical data-association filter.

Setup. We evaluate entity-monitoring performance using moving 2D objects. A *problem* involves a trajectory of observations $z$ of the $K$ dynamically moving objects, with the desired $y$ values being the underlying object positions. Objects evolve under a linear Gaussian dynamics model, with a noisy observation of a single object observed at each step (details in Appendix A.1). This task is typical of tracking problems considered by dafs. All learning-based models are trained on observation sequences of length 30. To perform well in this task, a model must discover that it needs to estimate the latent velocity of each object, as well as learn the underlying dynamics and observation models. We utilize $K = 3$ for our experiments.

Baselines. We compare with the de-facto standard method, Joint Probabilistic Data Association (JPDA) (Bar-Shalom et al., 2009), which uses hand-built ground-truth models (serving as an oracle). We further compare with our learned online baselines, the Set Transformer (Lee et al., 2018) and LSTM (Hochreiter & Schmidhuber, 1997) methods.

Results. Quantitative performance, measured in terms of prediction error on true object locations, is reported in Table 5. We can see that the Set Transformer cannot learn a reasonable model at all. The LSTM performs reasonably well for short (length 30) sequences but quickly degrades relative to DAF-Net and JPDA as sequence length increases. We note that DAF-Net performs comparably to, but just slightly worse than, JPDA. *This is strong performance, because DAF-Net is generic and can be adapted to new domains given training data, without the need to hand-design the models in JPDA.* We believe that these gains are due to the inductive biases built into our architecture.

## 6.3 **Image-Based Domains**

Finally, we validate the ability of DAF-Net to perform data association on image inputs, which requires DAF-Net to synthesize a latent representation for slots and learn to perform association, update, and transition operations in that space.

Setup. We experiment with two separate image-based domains, each consisting of a set of similar entities (2D digits or 3D airplanes). We construct data-association *problems* by selecting $K$ objects in each domain, with the desired $y$ values being images of those objects in a canonical viewpoint.

Figure 7: **Qualitative Visualization of DAF-Net Execution on Images.** Qualitative visualization of two image-based association tasks (left: MNIST, right: airplanes). At the top of each is an example training problem, illustrated by the true objects and an observation sequence. Each of the next rows shows an example test problem, with the ground-truth objects and decoded slot values. The three highest-confidence hypotheses for each problem are highlighted in red, and correspond to ground-truth objects.
| Model | Learned | MNIST (10) | MNIST (30) | MNIST (50) | MNIST (100) | Airplanes (10) | Airplanes (30) | Airplanes (50) | Airplanes (100) |
|---------|---------|--------|--------|--------|--------|-------|-------|-------|-------|
| DAF-Net | + | 7.143 | 5.593 | 5.504 | 5.580 | 4.558 | 4.337 | 4.331 | 4.325 |
| LSTM | + | 9.980 | 9.208 | 9.166 | 9.267 | 5.106 | 4.992 | 4.983 | 4.998 |
| K-means | + | 13.596 | 12.505 | 12.261 | 12.021 | 7.246 | 6.943 | 6.878 | 6.815 |

Table 6: **Quantitative Results on Image Domain.** Comparison of entity-monitoring performance on the MNIST and Airplane datasets across 10, 30, 50, and 100 observations (in parentheses in the column headers). For DAF-Net, LSTM, and K-means we use a convolutional encoder/decoder trained on the data. We train models with 30 observations and report MSE error.

An input observation sequence is generated by randomly selecting one of those $K$ objects and generating an observation $z$ corresponding to a random viewpoint of the object. Our two domains are: (1) **MNIST**: Each object is a random image in MNIST, with observations corresponding to rotated images and the desired outputs being the un-rotated images; (2) **Airplane**: Each object is a random object from the Airplane class in ShapeNet (Chang et al., 2015), with observations corresponding to airplane renderings (using Blender) at different viewpoints and the desired outputs being the objects rendered in a canonical viewpoint. For MNIST, we use the training set to construct the training problems, and the images in the non-overlapping test set to construct the test problems. For the Airplane dataset, we use 1895 airplanes to construct the training problems, and 211 different airplanes to construct the test problems. Each airplane is rendered from 300 viewpoints.

Baselines. In addition to our learned baselines, we compare with a task-specific baseline, batch K-means, in a latent space that is learned by training an autoencoder on the observations. In this setting, we were unable to train the Set Transformer stably and do not report results for it.

Results. In Table 6, we find that our approach significantly outperforms the other comparable baselines in both accuracy and generalization. We further visualize qualitative predictions from our model in Figure 7. We find that our highest-confidence decoded slots correspond to ground-truth objects.

## 7 **Conclusion**

This work has demonstrated that, by coupling algorithmic bias inspired by a classical solution to the problem of filtering to estimate the state of multiple objects simultaneously with modern machine-learning techniques, we can arrive at solutions that learn to perform well and generalize from a comparatively small amount of training data.

## References

Marcel R Ackermann, Marcus Märtens, Christoph Raupach, Kamil Swierkot, Christiane Lammersen, and Christian Sohler. StreamKM++: A clustering algorithm for data streams. *Journal of Experimental Algorithmics (JEA)*, 17:2–1, 2012.

David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In *Symposium on Discrete Algorithms '07*, 2007.

Yaakov Bar-Shalom, Fred Daum, and Jim Huang. The probabilistic data association filter. *IEEE Control Systems Magazine*, December 2009.

Harry G. Barrow, Jay M. Tenenbaum, Robert C. Bolles, and Helen C. Wolf. Parametric correspondence and chamfer matching: Two new techniques for image matching. In *IJCAI*, 1977.

Guillem Brasó and Laura Leal-Taixé. Learning a neural solver for multiple object tracking.
In *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2020.

Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised scene decomposition and representation. *arXiv preprint arXiv:1901.11390*, 2019.

Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. *arXiv preprint arXiv:1512.03012*, 2015.

Moses Charikar, Chandra Chekuri, Tomás Feder, and Rajeev Motwani. Incremental clustering and dynamic information retrieval. *SIAM Journal on Computing*, 33(6):1417–1440, 2004.

Anna Choromanska and Claire Monteleoni. Online clustering with experts. In *Artificial Intelligence and Statistics*, pp. 227–235, 2012.

Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society: Series B (Methodological)*, 39(1):1–22, 1977.

Yilun Du, Shuang Li, Yash Sharma, B. Joshua Tenenbaum, and Igor Mordatch. Unsupervised learning of compositional energy concepts. In *Advances in Neural Information Processing Systems*, 2021.

R. Gray. Vector quantization. *IEEE ASSP Magazine*, 1(2):4–29, 1984.

Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-object representation learning with iterative variational inference. In *International Conference on Machine Learning*, 2019.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997.

Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. *ArXiv*, abs/1805.11122, 2018.

Peter Karkus, Xiao Ma, David Hsu, Leslie Pack Kaelbling, Wee Sun Lee, and Tomas Lozano-Perez. Differentiable algorithm networks for composable robot learning. *ArXiv*, abs/1905.11602, 2019.

Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, and Klaus Greff. Conditional object-centric learning from video. In *International Conference on Learning Representations (ICLR)*, 2022.

Ari Kobren, Nicholas Monath, Akshay Krishnamurthy, and Andrew McCallum. A hierarchical algorithm for extreme clustering. In *Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 255–264, 2017.

Teuvo Kohonen. *Self-Organizing Maps*. Springer-Verlag, 1995.

Rahul G. Krishnan, Uri Shalit, and David A Sontag. Deep Kalman filters. *ArXiv*, abs/1511.05121, 2015.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R Kosiorek, Seungjin Choi, and Yee Whye Teh. Set Transformer: A framework for attention-based permutation-invariant neural networks. *arXiv preprint arXiv:1810.00825*, 2018.

Juho Lee, Yoonho Lee, and Yee Whye Teh. Deep amortized clustering. *ArXiv*, abs/1909.13433, 2019.

Edo Liberty, Ram Sriharsha, and Maxim Sviridenko. An algorithm for online k-means clustering. In *2016 Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments (ALENEX)*, pp. 81–89. SIAM, 2016.

Huajun Liu, Hui Zhang, and Christoph Mertz. DeepDA: LSTM-based deep data association network for multi-targets tracking in clutter. *2019 22nd International Conference on Information Fusion (FUSION)*, pp. 1–8, 2019.
Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. In *Advances in Neural Information Processing Systems*, 2020.

Wenhan Luo, Junliang Xing, Anton Milan, Xiaoqin Zhang, Wei Liu, Xiaowei Zhao, and Tae-Kyun Kim. Multiple object tracking: A literature review. *arXiv preprint arXiv:1409.7618*, 2014.

Cong Ma, Yuan Li, Fan Yang, Ziwei Zhang, Yueqing Zhuang, Huizhu Jia, and Xiaodong Xie. Deep association: End-to-end graph-based learning for multiple object tracking with conv-graph neural network. In *International Conference on Multimedia Retrieval (ICMR)*, June 2019.

Anton Milan, Seyed Hamid Rezatofighi, Anthony R. Dick, Ian Reid, and Konrad Schindler. Online multi-target tracking using recurrent neural networks. *arXiv preprint arXiv:1604.03635*, 2017.

Erxue Min, Xifeng Guo, Qiang Liu, Gen Zhang, Jianjing Cui, and Jun Long. A survey of clustering with deep learning: From the perspective of network architecture. *IEEE Access*, 6:39501–39514, 2018.

Ari Pakman, Yueqi Wang, Catalin Mitelut, Jinhyung Lee, and Liam Paninski. Neural clustering processes. *arXiv preprint*, 2019.

Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. In *Advances in Neural Information Processing Systems*, pp. 7299–7310, 2018.

Jonathan A Silva, Elaine R Faria, Rodrigo C Barros, Eduardo R Hruschka, André CPLF de Carvalho, and João Gama. Data stream clustering: A survey. *ACM Computing Surveys (CSUR)*, 46(1):1–31, 2013.

ShiJie Sun, Naveed Akhtar, HuanSheng Song, Ajmal Mian, and Mubarak Shah. Deep affinity network for multiple object tracking. *TPAMI*, June 2019.

Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. *arXiv preprint arXiv:1602.02867*, 2016.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. *arXiv preprint arXiv:1511.06391*, 2015.

Greg Welch and Gary Bishop. An introduction to the Kalman filter. 2006.

Yu Xiang, Alexandre Alahi, and Silvio Savarese. Learning to track: Online multi-object tracking by decision making. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 4705–4713, 2015.

Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? *arXiv preprint arXiv:1905.13211*, 2019.

Yuang Zhang, Tiancai Wang, and Xiangyu Zhang. MOTRv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors. *arXiv preprint arXiv:2211.09791*, 2022.

We provide experimental and architecture details of DAF-Net in Section A, and the full clustering results in Table 7.

## A **Experimental Details**

In this section, we provide details of our experimental approach. We first discuss the details of the datasets used in Section A.1. Next, we provide the model architectures used in Section A.2. Finally, we provide details on the baselines we compare with in Section A.3.

## A.1 **Dataset Details**

We first provide detailed experimental settings for each of the datasets considered in the paper.

Online Clustering. In online clustering, we utilize observations drawn from the following distributions, where Gaussian centers are drawn uniformly from -1 to 1.

1. *Normal*: Each 2D Gaussian has standard deviation 0.2. The normal setting is illustrated in Figure ??.
2. *Mixed*: Each distribution is a 2D Gaussian, with fixed identical variance across each individual dimension, but with the standard deviation of each distribution drawn from a uniform distribution on (0.04, 0.4).
3. *Elongated*: Each distribution is a 2D Gaussian, where the standard deviation along each dimension is drawn from a uniform distribution on (0.04, 0.4), but fixed across distributions.
4. *Angular*: Each distribution is a 2D Gaussian with identical standard deviation across dimensions and distributions, but points above π are wrapped around to −π and points below −π are wrapped to π. Gaussian means are selected between (−π, −2π/3) and between (2π/3, π). The standard deviation of the distributions is 0.3π.
5. *Noise*: Each distribution has 2 dimensions parameterized by Gaussian distributions with standard deviation 0.5, with the values of the remaining 30 dimensions drawn from a uniform distribution on (−1, 1).

Dynamic Domains. Next, in the dynamics domain, we implement our dataset using the StoneSoup library∗. We initialize the location of each cluster from a Gaussian distribution with standard deviation 1.5 and initialize the velocity in each direction from a Gaussian distribution with standard deviation 0.02. At each timestep, Gaussian noise of magnitude 1e-4 is added to the velocities. Our JPDA implementation is also from the StoneSoup library.

∗https://stonesoup.readthedocs.io/en/v0.1b3/stonesoup.html

Image Domains. In the image domain, for MNIST, we use the 50000 images in the training set to construct the training problems, and the 10000 images in the non-overlapping test set to construct the test problems. For the Airplane dataset, we use 1895 airplanes to construct the training problems, and 211 different airplanes to construct the test problems. Each airplane is rendered with 300 viewpoints.

## A.2 **Model/Baseline Architectures**

We provide the overall architecture details for the LSTM in Figure 8a, for the Set Transformer in Figure 8b, and for DAF-Net in Figure 9a. For image experiments, we provide the architecture of the encoder in Figure 10a and the decoder in Figure 10b. The LSTM, DAF-Net, and autoencoding baselines all use the same image encoder and decoder. For robotics experiments, we provide the architecture of the shape decoder in Figure ??.

In the DAF-Net memory, the function update(sk, nk, e) is implemented by applying a 2-layer MLP with hidden size h, which takes the concatenation of the vectors sk, nk, e as input and outputs a new state uk of dimension h. The value nk is encoded using the function 1/(1 + nk), to normalize the range of the input to be between 0 and 1. The function attend(sk, nk, e) is implemented analogously to update, using a 2-layer MLP that outputs a single real value for each hypothesis slot. For the function relevance(sk, nk, e), we apply NN1 per hypothesis slot, implemented as a 2-layer MLP with hidden size h that outputs an intermediate state of dimension h; (sk, nk, e) is fed into NN1 in a manner analogous to update. NN2 is applied to the average of the intermediate representations of the hypothesis slots and is also implemented as a 2-layer MLP with hidden size h, followed by a sigmoid activation. We use the ReLU activation for all MLPs. NN3 is a GRU, which operates on the previous slot value.
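To make the preceding description concrete, the following is a minimal PyTorch sketch of these memory functions. The module wiring, tensor shapes, and the softmax over slot scores are our assumptions; the text above only fixes the 2-layer MLPs with hidden size h, the 1/(1 + nk) count encoding, the sigmoid on NN2, and the ReLU activations.

```python
# Minimal sketch of the DAF-Net memory functions; wiring details are assumptions.
import torch
import torch.nn as nn

def mlp2(in_dim, h, out_dim):
    # Two-layer MLP with ReLU, as used for update/attend/relevance.
    return nn.Sequential(nn.Linear(in_dim, h), nn.ReLU(), nn.Linear(h, out_dim))

class DAFMemorySketch(nn.Module):
    def __init__(self, h):
        super().__init__()
        # Each function sees slot state s_k (h), encoded count (1), encoded obs e (h).
        self.update_net = mlp2(2 * h + 1, h, h)   # new slot state u_k
        self.attend_net = mlp2(2 * h + 1, h, 1)   # scalar match score per slot
        self.nn1 = mlp2(2 * h + 1, h, h)          # per-slot intermediate state
        self.nn2 = nn.Sequential(nn.Linear(h, h), nn.ReLU(),
                                 nn.Linear(h, 1), nn.Sigmoid())

    def forward(self, s, n, e):
        # s: (K, h) slot states, n: (K,) assignment counts, e: (h,) encoded observation.
        K, h = s.shape
        n_enc = (1.0 / (1.0 + n)).unsqueeze(-1)     # normalize counts into (0, 1]
        x = torch.cat([s, n_enc, e.expand(K, h)], dim=-1)
        scores = self.attend_net(x).squeeze(-1)     # attention logits over slots
        weights = torch.softmax(scores, dim=0)
        updates = self.update_net(x)                # candidate new state per slot
        relevance = self.nn2(self.nn1(x).mean(dim=0))  # scalar relevance of observation
        return weights, updates, relevance
```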
## A.3 **Baseline Details**

All baseline models are trained with the number of prediction slots equal to the ground-truth number of components. To train the Set Transformer to act in an online manner, we follow the approach in (Santoro et al., 2018) and apply the Set Transformer sequentially on the concatenation of an input observation with the current set of hypothesis slots. Hypothesis slots are updated based on the new values of the slots after applying self-attention (Set Transformer Encoder). We use the Chamfer loss to train the baseline models, with confidence set to 1.

Figure 8: Architecture of baseline models. (a) The LSTM baseline: Dense → h; Dense → h; LSTM(h); Dense → h; Dense → output. The hidden dimension h used is 96 for synthetic Gaussian distributions and 128 for image datasets. For image experiments, the first 2 and last 2 fully connected layers are replaced with image encoders and decoders. (b) The Set Transformer baseline: Dense → h; Dense → h; Set Transformer Encoder; Set Transformer Decoder. The hidden dimension h is 48 for the synthetic Gaussian distributions. We use the encoder and decoder defined in (Lee et al., 2018) with 4 heads and hidden dimension h.

Figure 9: Architecture of DAF-Net. (a) Dense → h; Dense → h; DAF-Net Memory; Dense → h; Dense → output. The hidden dimension h is 64 for synthetic Gaussian distributions and 128 for the image/robotics experiments. For image experiments, the first and last 2 linear layers are replaced with convolutional encoders and decoders.

Figure 10: The model architecture of the convolutional encoder and decoder for image experiments. (a) Encoder: 5x5 Conv2d, 32, stride 2, padding 2; 3x3 Conv2d, 64, stride 2, padding 1; 3x3 Conv2d, 64, stride 2, padding 1; 3x3 Conv2d, 64, stride 2, padding 1; 3x3 Conv2d, 128, stride 2, padding 1; Flatten; Dense → h. (b) Decoder: Dense → 4096; Reshape (256, 4, 4); 4x4 Conv2dTranspose, 128, stride 2, padding 1; 4x4 Conv2dTranspose, 64, stride 2, padding 1; 4x4 Conv2dTranspose, 64, stride 2, padding 1; 4x4 Conv2dTranspose, 64, stride 2, padding 1; 3x3 Conv2d, 3, stride 1, padding 1.
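For reference, below is a minimal NumPy sketch of the Chamfer objective used in Section A.3 above. Whether the confidences weight one or both directions of the matching is our assumption; with confidence fixed to 1, as used for the baselines, this reduces to the plain symmetric Chamfer loss.

```python
# Minimal sketch of a confidence-weighted symmetric Chamfer loss (assumed form).
import numpy as np

def chamfer_loss(pred, target, conf=None):
    # pred: (K, d) decoded slot states, target: (M, d) ground-truth object states,
    # conf: (K,) per-slot confidences (all ones for the baselines).
    if conf is None:
        conf = np.ones(len(pred))
    d = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(-1)  # (K, M) pairwise sq. dists
    gt_to_pred = d.min(axis=0).mean()           # every object matched by some slot
    pred_to_gt = (conf * d.min(axis=1)).mean()  # every (confident) slot matches an object
    return gt_to_pred + pred_to_gt
```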
| Type | Model | Online | 10 | 30 | 50 | 100 |
|---|---|---|---|---|---|---|
| Normal | DAF-Net | + | 0.235 | 0.162 | 0.146 | 0.128 |
| Normal | Set Transformer | + | 0.390 | 0.388 | 0.388 | 0.389 |
| Normal | LSTM | + | 0.288 | 0.260 | 0.269 | 0.288 |
| Normal | VQ | + | 0.246 | 0.172 | 0.147 | 0.122 |
| Normal | Set Transformer | + | 0.295 | 0.261 | 0.253 | 0.247 |
| Normal | K-means++ | - | 0.183 | 0.107 | 0.086 | 0.066 |
| Normal | GMM | - | 0.189 | 0.118 | 0.087 | 0.067 |
| Mixed | DAF-Net | + | 0.255 | 0.184 | 0.164 | 0.147 |
| Mixed | LSTM | + | 0.306 | 0.274 | 0.284 | 0.290 |
| Mixed | Set Transformer | + | 0.415 | 0.405 | 0.407 | 0.408 |
| Mixed | VQ | + | 0.262 | 0.192 | 0.169 | 0.145 |
| Mixed | Set Transformer | - | 0.309 | 0.274 | 0.266 | 0.261 |
| Mixed | K-means++ | - | 0.206 | 0.135 | 0.105 | 0.088 |
| Mixed | GMM | - | 0.212 | 0.136 | 0.105 | 0.079 |
| Elongated | DAF-Net | + | 0.258 | 0.192 | 0.173 | 0.161 |
| Elongated | LSTM | + | 0.314 | 0.274 | 0.288 | 0.300 |
| Elongated | Set Transformer | + | 0.394 | 0.391 | 0.394 | 0.394 |
| Elongated | VQ | + | 0.265 | 0.194 | 0.172 | 0.149 |
| Elongated | Set Transformer | - | 0.309 | 0.244 | 0.240 | 0.232 |
| Elongated | K-means++ | - | 0.213 | 0.139 | 0.113 | 0.092 |
| Elongated | GMM | - | 0.214 | 0.141 | 0.112 | 0.086 |
| Rotation | DAF-Net | + | 0.892 | 0.794 | 0.749 | 0.736 |
| Rotation | LSTM | + | 0.799 | 0.796 | 0.795 | 0.794 |
| Rotation | Set Transformer | + | 0.793 | 0.794 | 0.782 | 0.782 |
| Rotation | VQ | + | 0.956 | 1.000 | 1.000 | 0.984 |
| Rotation | Set Transformer | - | 0.815 | 0.784 | 0.779 | 0.772 |
| Rotation | K-means++ | - | 0.827 | 0.834 | 0.823 | 0.802 |
| Rotation | GMM | - | 0.842 | 0.875 | 0.867 | 0.848 |
| Noise | DAF-Net | + | 0.375 | 0.343 | 0.338 | 0.334 |
| Noise | LSTM | + | 0.419 | 0.406 | 0.405 | 0.407 |
| Noise | Set Transformer | + | 0.434 | 0.424 | 0.425 | 0.424 |
| Noise | VQ | + | 1.479 | 0.948 | 0.826 | 0.720 |
| Noise | Set Transformer | - | 0.436 | 0.407 | 0.398 | 0.394 |
| Noise | K-means++ | - | 1.836 | 1.271 | 1.091 | 0.913 |
| Noise | GMM | - | 1.731 | 1.215 | 1.056 | 0.856 |

Table 7: **Generalization with Increased Observations.** Error of different models when executed at test time on different numbers of observations (10, 30, 50, 100) across different distributions. We train models with 3 components and 30 observations.
Review 1: Summary: This paper proposes a learnt online clustering algorithm, which leverages a learnt encoder and uses Transformers to match inputs to potential clusters. It uses some ad-hoc algorithmic modifications to make this process work more stably, without requiring a-priori knowledge of the number of clusters. It is evaluated on toy synthetic data and toy image datasets and outperforms appropriately selected baselines. Overall, this is an interesting paper which tackles a niche but relevant problem. I have some issues with the overall framing of the paper and there are some sections which do not support claims as well as I’d like, but I’d tend towards acceptance already in this state, given TMLR’s reviewing guidelines. Strengths and Weaknesses: 1. The paper introduces the problem well and assesses existing literature well enough. The actual problem of online clustering is complex, and any learnable solution is valuable. Overall, I feel like the results are trustworthy and demonstrate the validity of the approach, however this is still only on very toy data and I do not know how confident I would be for this method to scale up to complex, multi-object scenes clustering. 2. I found the use of “Object discovery” unhelpful and not strictly correct. 1. You are doing online clustering, there is no benefit and no evidence from the current results that this is/would correspond to object detection (unless one takes a very specific semantic definition of what “an object” is, which you do not make clear). 2. This introduces quite a lot of confusion as some sections use “hypothese”, some “slots”, some “objects”, even though really it’s “just” clusters. 3. There are some clarity issues with some of the sections: 1. Section 3, Problem formulation, was overly confusing to me, especially given it introduces a set of symbols which are directly modified in Section 4. It would have been more helpful to take an actual example throughout (e.g. from Figure 7), and run through it comparatively. 2. Section 4 spells the model choices quite well, but it would be valuable to stick to one term as explained above (i.e. “slot”, “cluster”), instead of using “objects” or memory in various. 3. Please write the full loss used in terms of symbols of Section 4, and not by just referring to Section 3. 4. Section 5, Theoretical analysis, does not provide enough evidence to the claim that “DAF-Net may learn to construct an optimal DAF”, and in my opinion can be relegated to the Appendix. 1. Proposition 1 is assessing sample complexity, which I do not see how this relates to finding “the correct hypothesis assignment after seeing N samples” 2. Proposition 2 feels trivial and I do not really see what value it adds? 5. Some model choices are questionable and aren’t ablated: 1. Why do you need s_k to have this particular form? What happens if you directly write to s? 2. Why do you use r=relevance as this particular extra DeepSet (also, please call it a DeepSet…)? Why didn’t you use w_k directly? 3. Why do you use n_k as the integral of a_k? 4. These all feel like they could be simplified further? If that is what the Set Transformer does, please flag differences more precisely, and explain why you came to this formulation. 5. The transition function feels both overkill in simple situations and not powerful enough when dynamics will actually be necessary (i.e. it is not action-conditioned and hence can only represent self-dynamics). 6. There is no mechanism in the model to “remove” unused cluster centers? 1. 
This feels like a clear disadvantage of this method. The confidences would obviously help to prune unused centers, but it feels like the method likes to use and keep too many currently? (Figures 2, 4 and 7).
2. Figure 6 is too noisy and actually feels like a negative result? At best it shows that the model does not adapt that well to many components?
Requested Changes:
1. Remove mentions of "object hypotheses" and unify on using an online clustering nomenclature.
2. Remove/rework Section 3 to keep Section 4 as the relevant mathematical formulation.
3. Remove the theoretical analysis or tone down its proposed contribution.
4. Clarify model choices and why the algorithm couldn't be modified to handle removing cluster centers.
Broader Impact Concerns: N/A
==================================================
Review 2:
Summary: The paper proposes DAF-net - a neural architecture inspired by data association filters (DAFs) and capable of solving data association problems like online clustering or tracking. The model is evaluated on three domains: 1) online clustering with fairly simple synthetic data, 2) a type of an object tracking problem, with noisy coordinates of a single object as observation at each time step, and true positions of all objects as outputs, 3) a synthetic image-based task with images as inputs and outputs. The method outperforms the provided baselines on all three, especially for longer observation sequences.
Strengths and Weaknesses:
Strengths:
1. A reasonable architecture, with inductive biases inspired by some classical methods
2. Good empirical performance on the provided tasks
3. Mostly good presentation (although see some notes below)
Weaknesses/questions:
1. I found the choice of the tasks used for evaluation somewhat unconvincing - they all come across as somewhat toy/artificial and created anew by the authors, which means there are no prior results reported on them. While it is great that the proposed method works well on these tasks, how much do these results really tell us about the performance of the proposed method on practical problems it is intended to solve? Would it be possible to directly evaluate on some practical problems of interest, where established baselines exist? The dynamic task potentially may fall in this category, but it still looks a bit toy/simplistic. And the image task is very artificial - why not do MNIST clustering, which seems like a much more usual/natural task? Or unsupervised object discovery, although not sure if it fits in the DAF formalism. Tracking (vision-based), at least in some toy/simple setup, could be another interesting task.
2. Comparison to Set Transformer seems unfair - the number of slots is set to a smaller number (maybe more slots would help it?) and also the loss used is different - IIUC, one-sided Chamfer loss, while the proposed method uses more or less "symmetric" Chamfer loss additionally weighted using confidences. The Set Transformer baseline should be tuned at least by trying the symmetric Chamfer loss and different numbers of slots. Same for Slot Attention, potentially the other baselines too, but I'm less familiar with those (note: as mentioned, I'm not sure how the proposed model handles having more slots than objects, so I don't know if the same approach is applicable to the baselines, but maybe it is or maybe one could come up with some ad-hoc replacement - like at the end cluster the predicted K hypotheses with k-means or so to the desired number of objects)
3.
On dynamic objects, the gap between the proposed method and LSTM is pretty small and could potentially be explained by, say, different amounts of hyperparameter tuning put into the different models. It would be more convincing to see wider gaps - perhaps for longer observation sequences? 4. On dynamic objects, why would Set Transformer not be able to learn to do the task? Can it be a matter of tuning or is there some fundamental limitation that doesn’t allow it to do it? Seems strange that it would fail completely, while being a good general function approximator. 5. In “Performance on More Clusters.”, how exactly are the clusters sampled? Something is said in the caption of Table 4, but I do not understand it. I generally find the setting quite weird - training with 30 clusters and 50 observations, meaning that on average there are fewer than 2 points per cluster? Seems difficult, potentially somewhat ill-posed (depending on the distribution of the observations) and perhaps an unnecessarily large jump compared to the previous tasks. 6. For Proposition 1 there seems to be only a sketch of the proof, I didn’t find the proof in the appendix. This is not ok: if there is a proposition, there should be a complete proof. 6a. Moreover - although that is a bit of a matter of taste - I think the proposition (or at least its proof) could be moved to the appendix. Its message that MLP is not going to be efficient at learning the task is fairly obvious practically speaking, so it does not seem to add much to the flow of the paper, but it is difficult to understand, since it heavily builds on prior work. 6b. The proof of Proposition 2 I believe could also safely be moved to the appendix, since it does not add to the flow of the paper, but rather distracts the reader 7. I’m not sure what happens when there are more slots in the model than objects/clusters in the input data. For instance, Figure 1 shows several red dots per cluster. IIUC each red dot corresponds to a “slot” in the model? So a few slots can be assigned to one cluster? And the model does not predict how many clusters there are? How is the error on cluster means (as reported in Table 1) computed, then? The section “Inferring Object Number.” seems related, but it comes later and doesn’t seem to exactly clarify everything. I guess the top hypotheses are selected based on confidence, but it somehow wasn’t clear from the text (maybe I missed it) 8. Smaller presentation questions/issues: 8a. Beginning of section 3 - do T and L_i have the same meaning? Then why different letters? 8b.It’s strange that the losses are discussed in the “Problem formulation” section - they seem to be a part of the solution, not the problem 8c. “Chamfer distance/loss” in computer vision often refers to the “symmetric” version of the distance, see for instance https://graphics.stanford.edu/courses/cs468-17-spring/LectureSlides/L14%20-%203d%20deep%20learning%20on%20point%20cloud%20representation%20(analysis).pdf or https://www.tensorflow.org/graphics/api_docs/python/tfg/nn/loss/chamfer_distance/evaluate (as well as many papers). Perhaps worth pointing out. Also, since the weights derived from confidences are applied, perhaps it should be called “weighted chamfer loss”? 8d. Encoding step is not illustrated in Figure 1, makes it a bit more difficult to match the figure to the text 8e. In the formula on top of page 5, defining w_k , it’s not clear what kind of an operation “attend” is - learned/not, deep/not. 
Will probably be clearer later, but it would be nice to hint already here. Also, typically the whole thing, including the softmax, would be called attention. 8f. Same about “update”, “decode”, and “transition” - I guess they are neural networks and about “transition” it’s even mentioned, but only somehow in passing - would be great if the paper was clearer on this. (this is mentioned in the appendix, but should be mentioned in the main paper) 8g. “The a vectors are integrated to obtain n, which is normalized to obtain the output confidence c.” - integrated over what and normalized by what? It’s clear from the algorithm in Figure 1, but it would be nice if the text would be more self-contained too. 8h. “Because most of the ak values have been set to 0, this results in a sparse update which will ideally concentrate on a single slot to which this observation is being “assigned”.” - I’m a bit confused, does this suggest there’s only one object per observation - that is, the observation can/should be assigned to one slot? I thought that’s not necessarily the case? 8i. “slot-based architectures enables” (beginning of Section 5) 8j. “(Tables 1, 1 and 6).” (bullet point list in the beginning of Section 6) 8k. Figure 3 -> “OBM-Net”? Requested Changes: Please address the points made in "weaknesses "above, in particular: 1. More convincing evaluation - or at the very least a clear explanation of why the provided evaluation is sufficient to show the practical value of the proposed method 2. Better tuning of the baselines, in particular Set Transformer 3. Clarify and fix the various presentation issues mentioned in "weaknesses" Broader Impact Concerns: No particular concerns. The method proposed here can be used for questionable purposes (for instance, unethical applications of tracking), but so is basically any technology. ================================================== Review 3: Summary: This paper proposes a method for “dynamic entity monitoring”. Given a sequence of (noisy) observations in the form of object detections, the goal is to determine which individual objects generated each observation and estimating the state of each object based on its observations. A special case of this problem occurs when the objects’ state is static (relative to the observer), in which case this can be treated as an online clustering problem. The proposed method (DAF-nets) is loosely inspired by the probabilistic data association filter (DAF; Bar-Shalom et al., 2009), which is described as being cumbersome to apply in practice. The DAF-net works by maintaining a set of K latent representations as its state: one for each object. At each time-step, when a new observation comes in, it is encoded into a latent vector of the same dimension as each of the latent states. In this way, the encoded observation and the latent states can be “matched” via attention to determine which object is expected to belong to (this basically implements an E-step in an EM approach to clustering). Some post-processing is used to “suppress” the resulting attention weights, such that only the top M matches will end up being matched to the encoded observation. For these states, an update is computed based on the encoded observation and the state is updated. This is a fairly sophisticated process, which additionally includes computing a relevance value to mitigate the effect of outliers. Outputs are computed from the updated states using a decoder while the state is further updated using a transition module to capture object dynamics. 
Confidence scores for each state are derived from the proportion of observations that each state accounts for. DAF-nets are trained using a 3-part loss in a supervised manner based on the true current state of objects at each timestep in the sequence. The first term ensures that each ground-truth object is well represented by at least one of the estimated states. The second, complimentary, term ensures that each estimated state captures one of the ground-truth object states. Finally, a sparsity loss (which is only activated during the later stages of training) is used to discourage the model from using different states to describe the same ground-truth object. A brief theoretical analysis is included to motivate the choice of using a single network operating on each individual state, as opposed to having a single network simultaneously updating all states at once. In particular, it is argued that the former choice yields to lower sample complexity. Similarly, it is shown how the sparsity loss used here will lead to sparsity. DAF-nets are evaluated in an online clustering setting and compared to online and batch (learned) clustering approaches, where it is shown to perform better. In the dynamic entity monitoring setting it is shown how it is competitive to a hand-designed probabilistic DAF method. Finally it is shown how DAF-nets can perform this task for simple images like MNIST images. Strengths and Weaknesses: **Strengths:** * The strongest comparison in this work is to an online formulation of the Set Transformer, where DAF-nets are shown to compare favorably. At the same time, the Set Transformer is a more general architecture that can be adapted to a variety of settings, while DAF-nets were specifically designed for these types of problems. * Within the online clustering setting (with dynamics) DAF-nets are shown to be quite versatile, and the slot-based design offers it flexibility to adapt at test time to cases when more objects are present. **Weaknesses:** * I think the paper is quite poorly written and it took me quite a while to understand how the proposed method works. While this paper spends a lot of time talking about the similarities to DAF, these are never made explicit. Figure 1 offers a generic comparison in terms of how the object states are updated iteratively, yet for other supposed similarities like the update module, it is unclear how DAF and DAF-nets compare. A step-by-step comparison of the DAF estimation algorithm (either using the fully Bayesian update or the maximum likelihood ML-DAF approach) to DAF-net in Algorithm 1 is crucial. If this connection is indeed important then the reader should not have to first read the 19-page tutorial from 2009 that is currently cited instead of this. * Critical details about the proposed method and the baselines are missing, to the extent that the results presented here can not be reproduced. * About DAF-Nets: How is M determined, which is used for suppression? What is the functional form of the transition module, is this the same as NN3? How are DAF-nets trained, i.e. using what optimizer? For how long is it trained and after how many epochs is the sparsity loss introduced? * About the baselines broadly: What does it mean that the baselines are “given and asked to predict the ground-truth number of components K”? Appendix A.3 states that “We use the Chamfer loss to train baseline models, with confidence set to 1”, which is more inline with the reported MSE error. 
In terms of the input, what does it mean to “give the ground-truth number of components K”? Is this a numeric value provided as an additional input? Are the baselines trained with the first and the second loss term or only the first term? * About the LSTM baseline: It is unclear what the output of the LSTM looks like. Is it trained to predict the ground-truth state of all objects simultaneously at each step? * About Vector Quantization: No details about any of the hyper-parameters are provided. * About the GMM: No details about any of the hyper-parameters are provided. * About Slot Attention: No details are provided at all, even though it includes many hyper-params like the choice of initialization, the number of iterations, etc. Since Slot Attention is also a slot-based method, I would expect it to perform quite well, though it is reported to perform poorly. The appendix does not include any mention of the architecture used. * The proposed method is rather complex and it is unclear why certain design choices are made. For example, the “attend”, “update”, and “relevance” functions operate both on the state, the encoded observation and the length K vector of counts n. Why are the counts included in this part of the computation? The sparsity regularizer is only evaluated in the absence of the learned memory, suppression and relevance, but not when these components are present. It is natural to assume these other components have a bigger effect on performance, which leaves it unclear how much this term actually contributes in the end. Using an MLP to compute attention as opposed to using cross-attention based on cosine similarity, which is highly prevalent these days, is another strange design choice that is not evaluated. * The paper makes false claims in several places. For example it is written that “[...] none of the \emph{learning} baselines can adapt to different numbers cluster components at inference time, …”, which is not true since Slot Attention can be adapted to using different number of slots at inference time. It is also written that “DAF-Net outperforms non-learning clustering methods, even those that operate in batch mode rather than online, because those methods cannot learn from experience to take advantage of information about the distribution of observations and true object properties (Tables 1, 1 and 6).” However, in Table 1 DAF-Net frequently performs worse to batch mode baselines, especially on “Normal”, “Elongated” and “Mixed”. The claim about the performance of DAF-Net in Table 4 is also a misrepresentation. If the authors wish to claim that DAF-Net is “ [...] performing similarly to K-means++ [...]” while it consistently performs slightly worse then they should not also claim that “DAF-Net [is] out-performing the strong online baseline VQ” when it performs slightly better by the same margin. More generally, it is unreasonable to claim that one method outperforms the other here for such margins, especially when only considering a single seed. * For the majority of the experiments only a single seed is provided. Requested Changes: In my view this paper has too many issues to be considered for publication in this shape and form. This is disappointing, since this is a second attempt already (and the second time I invest time) after I previously provided feedback that the paper hadn’t been updated since 2020 (though it now includes 3 citations for the period 2021 - 2022). 
That aside, there are a number of changes that could be made to make the paper stronger:
* Currently the baselines are trained differently, using K slots where this is applicable, as opposed to the 10 (or more) used by DAF-Net. I would like to see an experiment where for Slot Attention and Set Transformer the same number of 10 components is used (and a top-K selection at inference based on the components that have the lowest loss), since the additional components may also give it additional flexibility, especially when not having access to confidence scores.
* SAVi (Kipf et al., 2022) is an extension of slot attention that includes a dynamics model, and which could be used as a baseline in the dynamic setting.
* It was not clear to me how the confidence scores can be regularized to become sparse. My understanding from Algorithm 1 is that they denote the fraction of observations that have been assigned to a particular slot. Thus why would we ever expect this to become 0 or 1? Having a per-example confidence across slots would make more sense, though in that case one would need a length-K confidence vector for each observation, which doesn't seem to be the case.
* While this paper positions DAF-net for the problem of dynamic entity monitoring, only a single experiment is conducted in this setting and no analysis is provided. For example, a good baseline in Table 5 would be having DAF-Net without the dynamics model, which simply replaces the new state with the update. It would also be interesting to see how the state evolves in the absence of any input, to validate that the object dynamics are tracked correctly. Conversely, is the transition module needed for the online clustering experiments?
Broader Impact Concerns: N/A.
==================================================
# Revisiting Generalized p-Laplacian Regularized Framelet GCNs: Convergence, Energy Dynamic and as Non-Linear Diffusion

Dai Shi∗ 20195423@student.westernsydney.edu.au
Western Sydney University

Zhiqi Shao∗ zhiqi.shao@sydney.edu.au
University of Sydney

Yi Guo y.guo@westernsydney.edu.au
Western Sydney University

Qibin Zhao qibin.zhao@riken.jp
AIP RIKEN

Junbin Gao junbin.gao@sydney.edu.au
University of Sydney

∗Dai Shi and Zhiqi Shao contributed equally.

Reviewed on OpenReview: https://openreview.net/forum?id=q4iSLPoFe7

## Abstract

This paper presents a comprehensive theoretical analysis of the graph p-Laplacian regularized framelet network (pL-UFG) to establish a solid understanding of its properties. We conduct a convergence analysis on pL-UFG, addressing the gap in understanding its asymptotic behaviors. Further, by investigating the generalized Dirichlet energy of pL-UFG, we demonstrate that the Dirichlet energy remains non-zero throughout convergence, ensuring the avoidance of over-smoothing issues. Additionally, we elucidate the energy dynamic perspective, highlighting the synergistic relationship between the implicit layer in pL-UFG and graph framelets. This synergy enhances the model's adaptability to both homophilic and heterophilic data. Notably, we reveal that pL-UFG can be interpreted as a generalized non-linear diffusion process, thereby bridging the gap between pL-UFG and differential equations on the graph. Importantly, these multifaceted analyses lead to unified conclusions that offer novel insights for understanding and implementing pL-UFG, as well as other graph neural network models. Finally, based on our dynamic analysis, we propose two novel pL-UFG models with manually controlled energy dynamics. We demonstrate empirically and theoretically that our proposed models not only inherit the advantages of pL-UFG but also significantly reduce computational costs for training on large-scale graph datasets.

## 1 Introduction

Graph neural networks (GNNs) have emerged as a popular tool for representation learning on graph-structured data (Wu et al., 2020). To enhance the learning power of GNNs, many attempts have been made to study GNN propagation from different perspectives, such as optimization (Zhu et al., 2021; Wei et al., 2022), statistical tests (Xu et al., 2019) and gradient flow (Bodnar et al., 2022; Di Giovanni et al., 2023). In particular, treating GNN propagation as an optimization process allows one to assign different types of regularizers to the GNN's output so that the variation of the node features, usually measured by the so-called Dirichlet energy, can be properly constrained (Zhu et al., 2021; Chen et al., 2023). The underlying reason for this regularization is the recently identified computational issue of GNNs on different types of graphs, namely homophilic and heterophilic graphs (Zheng et al., 2022a). In the former, most nodes are connected to nodes with identical labels, while in the latter they are not (Pei et al., 2019). Accordingly, an ideal GNN should produce smoother node features when the input graph is homophilic and more distinguishable node features when it is heterophilic (Pei et al., 2019; Bi et al., 2022). Based on the above, a proper design of a regularizer that is flexible enough to let a GNN fit both types of graphs naturally becomes the next challenge. Recent research by Fu et al.
(2022) proposes a new energy-based regularizer, namely the p-Laplacian based regularizer, for the optimization underlying GNNs, resulting in an iterative algorithm that approximates the so-called implicit layer induced by the solution of the regularized optimization problem. To enable a more flexible design of p-Laplacian GNNs, Shao et al. (2023) further proposed the p-Laplacian based graph framelet GNN (pL-UFG), which applies the p-Laplacian based regularization to multiscale GNNs (e.g., graph framelets). While remarkable learning accuracy has been observed empirically, the underlying properties of the models proposed in (Shao et al., 2023) remain unclear.

In this paper, our primary focus is on pL-UFG (see Section 2 for the formulation). Our objective is to analyze pL-UFG from various perspectives, including the convergence of its implicit layer, the model's asymptotic energy behavior, the changes in the model's dynamics due to the implicit layer, and its relationship with existing diffusion models. To the best of our knowledge, these aspects have not been thoroughly explored in the context of p-Laplacian based GNNs, leaving notable knowledge gaps. Accordingly, we summarize our contributions as follows:

- We rigorously prove the convergence of pL-UFG, providing insights into the asymptotic behavior of the model. This analysis addresses a crucial gap in the understanding of GNN models regularized with p-Laplacian based energy regularizers.
- We show that by assigning proper values to two key model parameters (denoted as µ and p) of pL-UFG based on our theoretical analysis, the (generalized) Dirichlet energy of the node features produced by pL-UFG never converges to 0; thus the inclusion of the implicit layer prevents the model (graph framelet) from the potential over-smoothing issue.
- We demonstrate how the implicit layer in pL-UFG interacts with the energy dynamics of the graph framelet. Furthermore, we show that pL-UFG can adapt to both homophilic and heterophilic graphs, enhancing its versatility and applicability.
- We establish that the propagation mechanism within pL-UFG amounts to a discrete generalized non-linear graph diffusion. The conclusions based on our analysis from different perspectives are unified at the end of the paper, suggesting a promising framework for evaluating other GNNs.
- Based on our theoretical results, we propose two generalized pL-UFG models with controlled model dynamics, namely the pL-UFG low-frequency dominant model (pL-UFG-LFD) and the pL-UFG high-frequency dominant model (pL-UFG-HFD). We further show that with controllable model dynamics, the computational cost of pL-UFG is largely reduced, making our proposed models capable of handling large-scale graph datasets.
- We conduct extensive experiments to validate our theoretical claims. The empirical results not only confirm pL-UFG's capability to handle both homophilic and heterophilic graphs but also demonstrate that our proposed models achieve comparable or superior classification accuracy at reduced computational cost. These findings are consistent across commonly tested and large-scale graph datasets.

The remaining sections of this paper are structured as follows. Section 2 presents fundamental notations related to graphs, GNN models, graph framelets, and pL-UFG. In Section 3, we conduct a theoretical analysis of pL-UFG, focusing on the aforementioned aspects.
Specifically, Section 3.1 presents the convergence analysis, while Section 3.2 examines the behavior of the p-Laplacian based implicit layer through a generalized Dirichlet energy analysis. Furthermore, Section 3.3 demystifies the interaction between the implicit layer and graph framelets from an energy dynamic perspective. We provide our proposed models (pL-UFG-LFD and pL-UFG-HFD) in Section 3.4. Lastly, in Section 3.5, we demonstrate that the iterative algorithm derived from the implicit layer is equivalent to a generalized non-linear diffusion process on the graph. Additionally, in Section 4, we further verify our theoretical claims through comprehensive numerical experiments. Finally, in Section 5, we summarize the findings of this paper and provide suggestions for future research directions.

## 2 Preliminaries

In this section, we provide the necessary notations and formulations utilized in this paper. We list the necessary notations and their meanings in Table 1.

Table 1: Necessary Notations

| Notation | Brief Interpretation |
|---|---|
| $\mathcal{H}(\mathcal{G})$ | Homophily index of a given graph $\mathcal{G}$ |
| $\mathbf{X}$ | Initial node feature matrix |
| $\mathbf{F}^{(k)}$ | Feature representation at the $k$-th layer of a GNN model |
| $\mathbf{f}_i$ | Individual row of $\mathbf{F}$ |
| $\mathbf{F}_{i,:}$ | One or more operations acting on each row of $\mathbf{F}$ |
| $\widehat{\mathbf{A}}$ | Normalized adjacency matrix |
| $\widetilde{\mathbf{L}}$ | Normalized Laplacian matrix |
| $\mathbf{W}$ | Graph weight matrix |
| $\mathcal{W}$ | Framelet decomposition matrix |
| $\mathcal{I}$ | Index set of all framelet decomposition matrices |
| $\widehat{\mathbf{W}}$ | Learnable weight matrix in GNN models |
| $\mathbf{W}_f, \Omega, \widehat{\mathbf{W}}$ | Learnable weight matrices in defining the generalized Dirichlet energy |
| $\mathbf{Y}$ | Feature propagation result for the pL-UFG defined in (Shao et al., 2023) |
| $\theta$ | $N$-dimensional vector for diagonal scaling ($\mathrm{diag}(\theta)$) in framelet models |
| $E_{PF}(\mathbf{F})$ | Generalized Dirichlet energy for node features induced from the implicit layer |

We also provide essential background information on the developmental history behind the formulation of certain models, serving as a concise introduction to the related works.

Graph, Graph Convolution and Graph Consistency. We denote a weighted graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{W})$ with node set $\mathcal{V} = \{v_1, v_2, \cdots, v_N\}$ of $N$ nodes in total, edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ and graph adjacency matrix $\mathbf{W} = [w_{i,j}] \in \mathbb{R}^{N \times N}$, where $w_{i,j} = 1$ if $(v_i, v_j) \in \mathcal{E}$ and $w_{i,j} = 0$ otherwise. The node feature matrix for $\mathcal{G}$ is $\mathbf{X} \in \mathbb{R}^{N \times c}$, with each row $\mathbf{x}_i \in \mathbb{R}^c$ the feature vector associated with node $v_i$. For a matrix $\mathbf{A}$, we denote its transpose as $\mathbf{A}^\top$, and we use $[N]$ for the set $\{1, 2, \ldots, N\}$. Throughout this paper, we focus only on undirected graphs and use the matrices $\mathbf{A}$ and $\mathbf{W}$ interchangeably for the graph adjacency matrix¹. The normalized graph Laplacian is defined as $\widetilde{\mathbf{L}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}(\mathbf{W} + \mathbf{I})\mathbf{D}^{-\frac{1}{2}}$, where $\mathbf{D} = \mathrm{diag}(d_{1,1}, \ldots, d_{N,N})$ is a diagonal degree matrix with $d_{i,i} = \sum_{j=1}^N w_{i,j}$ for $i = 1, \ldots, N$, and $\mathbf{I}$ is the identity matrix. Let $\{\lambda_i\}_{i=1}^N$ in decreasing order be all the eigenvalues of $\widetilde{\mathbf{L}}$, also known as the graph spectra, with $\lambda_i \in [0, 2]$. For any given graph, we let $\rho_{\widetilde{L}}$ be the largest eigenvalue of $\widetilde{\mathbf{L}}$. Lastly, for any vector $\mathbf{x} = [x_1, \ldots, x_c] \in \mathbb{R}^c$, $\|\mathbf{x}\|_2 = (\sum_{i=1}^c x_i^2)^{\frac{1}{2}}$ is the $L_2$-norm of $\mathbf{x}$, and similarly, for any matrix $\mathbf{M} = [m_{i,j}]$, we denote by $\|\mathbf{M}\| := \|\mathbf{M}\|_F = (\sum_{i,j} m_{i,j}^2)^{\frac{1}{2}}$ the matrix Frobenius norm.

¹We initially set W as the graph adjacency matrix while W is a generic edge weight matrix, in line with the notations used in (Fu et al., 2022; Shao et al., 2023).
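A minimal NumPy sketch of these operators follows. Note that we compute the degrees from the self-loop-augmented adjacency $\mathbf{W} + \mathbf{I}$, an assumption on our part chosen so that the spectrum of $\widetilde{\mathbf{L}}$ stays within $[0, 2]$ as stated above.

```python
# Minimal sketch of the normalized adjacency and normalized Laplacian.
import numpy as np

def normalized_operators(W):
    N = W.shape[0]
    A = W + np.eye(N)                        # self-loop-augmented adjacency W + I
    d = A.sum(axis=1)                        # degrees (our convention, see lead-in)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # assumes no isolated nodes
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt      # \hat{A} = D^{-1/2}(W + I)D^{-1/2}
    L_tilde = np.eye(N) - A_hat              # \tilde{L} = I - \hat{A}
    return A_hat, L_tilde

# Example on a 3-node path graph.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat, L_tilde = normalized_operators(W)
print(np.linalg.eigvalsh(L_tilde))  # the graph spectra {lambda_i}, within [0, 2]
```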
Graph convolution network (GCN) (Kipf & Welling, 2016) produces a layer-wise (node feature) propagation rule based on the information from the normalized adjacency matrix:

$$\mathbf{F}^{(k+1)} = \sigma\big(\widehat{\mathbf{A}}\mathbf{F}^{(k)}\widehat{\mathbf{W}}^{(k)}\big), \tag{1}$$

where $\mathbf{F}^{(k)}$ is the embedded node feature, $\widehat{\mathbf{W}}^{(k)}$ the weight matrix for channel mixing (Bronstein et al., 2021), and $\sigma$ any activation function such as the sigmoid. The superscript $(k)$ indicates the quantity associated with layer $k$, and $\mathbf{F}^{(0)} = \mathbf{X}$. We write $\widehat{\mathbf{A}} = \mathbf{D}^{-\frac{1}{2}}(\mathbf{W} + \mathbf{I})\mathbf{D}^{-\frac{1}{2}}$ for the normalized adjacency matrix of $\mathcal{G}$. The operation conducted in GCN before activation can be interpreted as a localized filter under the graph Fourier transform, i.e., $\mathbf{F}^{(k+1)} = \mathbf{U}(\mathbf{I}_N - \boldsymbol{\Lambda})\mathbf{U}^\top\mathbf{F}^{(k)}$, where $\mathbf{U}$ and $\boldsymbol{\Lambda}$ are from the eigendecomposition $\widetilde{\mathbf{L}} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top$. In fact, $\mathbf{U}\mathbf{F}$ is known as the Fourier transform of the graph signals in $\mathbf{F}$.

Over the development of GNNs, most GNNs have been designed under the homophily assumption, in which connected (neighbouring) nodes are more likely to share the same label. The recent work by Zhu et al. (2020) identifies that general-topology GNNs fail to obtain outstanding results on graphs whose connected nodes have different class labels and dissimilar features, such as the so-called heterophilic graphs. Homophilic and heterophilic graphs are defined as follows:

Definition 1 (Homophily and Heterophily). *The homophily or heterophily of a network describes the relationship between the labels of connected nodes. The level of homophily of a graph can be measured by the positive score $\mathcal{H}(\mathcal{G}) = \mathbb{E}_{v_i \in \mathcal{V}}\big[|\{v_j\}_{j \in \mathcal{N}_i, y_j = y_i}|/|\mathcal{N}_i|\big]$, where $|\{v_j\}_{j \in \mathcal{N}_i, y_j = y_i}|$ denotes the number of neighbours of $v_i \in \mathcal{V}$ that share the same label as $v_i$, i.e., $y_i = y_j$. A score $\mathcal{H}(\mathcal{G})$ close to 1 corresponds to stronger homophily, while a score $\mathcal{H}(\mathcal{G})$ near 0 indicates stronger heterophily. We say that a graph is a homophilic (heterophilic) graph if it has stronger homophily (heterophily), or simply strong homophily (heterophily).*
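The homophily score in Definition 1 admits a direct implementation; below is a minimal sketch (the skipping of isolated nodes is our assumption, since Definition 1 leaves it unspecified).

```python
# Minimal sketch of the homophily score H(G) from Definition 1.
import numpy as np

def homophily_score(W, labels):
    # W: (N, N) adjacency matrix, labels: (N,) integer class labels.
    N = W.shape[0]
    scores = []
    for i in range(N):
        nbrs = np.nonzero(W[i])[0]
        if len(nbrs) == 0:
            continue  # isolated nodes are skipped (our assumption)
        scores.append(np.mean(labels[nbrs] == labels[i]))
    return float(np.mean(scores))

# Example: a 4-cycle with alternating labels is perfectly heterophilic (H = 0).
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(homophily_score(W, np.array([0, 1, 0, 1])))  # -> 0.0
```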
Graph Framelet. The main object of study in this paper is the pL-UFG defined in (Shao et al., 2023), in which a p-Laplacian based implicit layer is combined with the so-called graph framelet, or framelets for short. Framelets are a type of wavelet frame arising from signal processing that can be extended to the analysis of graph signals. Dong (2017) developed tight framelets on graphs by approximating smooth functions with filtered Chebyshev polynomials. Graph framelets decompose graph signals and re-aggregate them effectively, as shown in the study on graph noise reduction by Zhou et al. (2021). Recently, Yang et al. (2022) suggested a simple method for building more versatile and stable framelet families, known as Quasi-Framelets. In this study, we introduce graph framelets using the same architecture described in (Yang et al., 2022). To begin, we define the filtering functions for Quasi-Framelets.

Definition 2. *A set of $R + 1$ positive functions $\mathcal{F} = \{g_0(\xi), g_1(\xi), \ldots, g_R(\xi)\}$ defined on the interval $[0, \pi]$ is considered as (a set of) Quasi-Framelet scaling functions if these functions adhere to the following identity condition:*

$$g_0(\xi)^2 + g_1(\xi)^2 + \cdots + g_R(\xi)^2 \equiv 1, \quad \forall \xi \in [0, \pi]. \tag{2}$$

The identity condition eq. (2) ensures a perfect reconstruction of a signal from its spectral space to the spatial space; see (Yang et al., 2022) for a proof. In particular, we are interested in scaling function sets in which $g_0$ descends from 1 to 0, i.e., $g_0(0) = 1$ and $g_0(\pi) = 0$, and $g_R$ ascends from 0 to 1, i.e., $g_R(0) = 0$ and $g_R(\pi) = 1$. The purpose of setting these conditions is for $g_0$ to regulate the highest frequency and for $g_R$ to control the lowest frequency, while the remaining functions govern the frequencies lying between them.

With a given set of framelet scaling functions, the so-called Quasi-Framelet signal transformation can be defined by the following transformation matrices:

$$\mathcal{W}_{0,J} = \mathbf{U}\, g_0\Big(\frac{\boldsymbol{\Lambda}}{2^{m+J}}\Big)\cdots g_0\Big(\frac{\boldsymbol{\Lambda}}{2^{m}}\Big)\mathbf{U}^\top, \tag{3}$$

$$\mathcal{W}_{r,0} = \mathbf{U}\, g_r\Big(\frac{\boldsymbol{\Lambda}}{2^{m}}\Big)\mathbf{U}^\top, \quad \text{for } r = 1, \ldots, R, \tag{4}$$

$$\mathcal{W}_{r,\ell} = \mathbf{U}\, g_r\Big(\frac{\boldsymbol{\Lambda}}{2^{m+\ell}}\Big)\, g_0\Big(\frac{\boldsymbol{\Lambda}}{2^{m+\ell-1}}\Big)\cdots g_0\Big(\frac{\boldsymbol{\Lambda}}{2^{m}}\Big)\mathbf{U}^\top, \quad \text{for } r = 1, \ldots, R,\ \ell = 1, \ldots, J, \tag{5}$$

where $\mathcal{F}$ is a given set of Quasi-Framelet functions satisfying eq. (2) and $J \geq 0$ is a given level on a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with normalized graph Laplacian $\widetilde{\mathbf{L}} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^\top$. Here $\mathcal{W}_{0,J}$ is defined as the product of $J + 1$ Quasi-Framelet scaling functions $g_0$ applied to the Laplacian spectra $\boldsymbol{\Lambda}$ at different scales, and $\mathcal{W}_{r,0}$ is defined as $g_r(\frac{\boldsymbol{\Lambda}}{2^m})$ applied to the spectra $\boldsymbol{\Lambda}$, where $m$ is the coarsest scale level, i.e., the smallest value satisfying $2^{-m}\lambda_N \leq \pi$. For $1 \leq r \leq R$ and $1 \leq \ell \leq J$, $\mathcal{W}_{r,\ell}$ is defined as the product of $\ell$ Quasi-Framelet scaling functions $g_0$ and one Quasi-Framelet scaling function $g_r$ applied to the spectra $\boldsymbol{\Lambda}$ at the relevant scales. Let $\mathcal{W} = [\mathcal{W}_{0,J}; \mathcal{W}_{1,0}; \ldots; \mathcal{W}_{R,0}]$ be the stacked matrix. It can be proven that $\mathcal{W}^\top\mathcal{W} = \mathbf{I}$, see (Yang et al., 2022), which provides a signal decomposition and reconstruction process based on $\mathcal{W}$. This is referred to as the graph Quasi-Framelet transformation.

Since the computation of the Quasi-Framelet transformation matrices requires the eigendecomposition of the graph Laplacian, Chebyshev polynomials are used to approximate them in order to reduce the computational cost. The approximated transformation matrices are defined by replacing $g_r(\xi)$ in eq. (3)-eq. (5) with Chebyshev polynomials $\mathcal{T}_r(\xi)$ of a fixed degree, typically set to 3. The Quasi-Framelet transformation matrices defined in eq. (3)-eq. (5) can then be approximated by:

$$\mathcal{W}_{0,J} \approx \mathcal{T}_0\Big(\frac{1}{2^{m+J}}\widetilde{\mathbf{L}}\Big)\cdots\mathcal{T}_0\Big(\frac{1}{2^{m}}\widetilde{\mathbf{L}}\Big), \tag{6}$$

$$\mathcal{W}_{r,0} \approx \mathcal{T}_r\Big(\frac{1}{2^{m}}\widetilde{\mathbf{L}}\Big), \quad \text{for } r = 1, \ldots, R, \tag{7}$$

$$\mathcal{W}_{r,\ell} \approx \mathcal{T}_r\Big(\frac{1}{2^{m+\ell}}\widetilde{\mathbf{L}}\Big)\mathcal{T}_0\Big(\frac{1}{2^{m+\ell-1}}\widetilde{\mathbf{L}}\Big)\cdots\mathcal{T}_0\Big(\frac{1}{2^{m}}\widetilde{\mathbf{L}}\Big), \quad \text{for } r = 1, \ldots, R,\ \ell = 1, \ldots, J. \tag{8}$$
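For intuition, the following is a minimal NumPy sketch of the exact transforms in eqs. (3)-(5) with $R = 1$ and $J = 1$, using the pair $g_0(\xi) = \cos(\xi/2)$, $g_1(\xi) = \sin(\xi/2)$ as one example satisfying eq. (2); the function name and the toy input are ours. It also numerically checks the perfect-reconstruction identity $\mathcal{W}^\top\mathcal{W} = \mathbf{I}$. The Chebyshev version in eqs. (6)-(8) would simply replace each $g_r$ by a degree-3 polynomial in $\widetilde{\mathbf{L}}$.

```python
# Minimal sketch of the exact Quasi-Framelet transform (R = 1), eqs. (3)-(5).
import numpy as np

def framelet_matrices(L, J=1):
    lam, U = np.linalg.eigh(L)
    g0 = lambda x: np.cos(x / 2.0)   # g0(0) = 1, g0(pi) = 0
    g1 = lambda x: np.sin(x / 2.0)   # g1(0) = 0, g1(pi) = 1
    lam_max = max(lam.max(), 1e-9)
    m = int(np.ceil(np.log2(lam_max / np.pi)))  # smallest m with 2^{-m} lam_max <= pi
    spec = lambda g, s: U @ np.diag(g(lam / s)) @ U.T
    W_low = spec(g0, 2.0 ** m)
    Ws = [spec(g1, 2.0 ** m)]                   # W_{1,0}, eq. (4)
    for ell in range(1, J + 1):                 # eqs. (3) and (5)
        Ws.append(spec(g1, 2.0 ** (m + ell)) @ W_low)
        W_low = spec(g0, 2.0 ** (m + ell)) @ W_low
    return [W_low] + Ws                         # [W_{0,J}, W_{1,0}, ..., W_{1,J}]

L = np.diag([2.0, 1.0, 0.0])  # toy symmetric spectrum standing in for \tilde{L}
Ws = framelet_matrices(L, J=1)
print(np.allclose(sum(W.T @ W for W in Ws), np.eye(3)))  # perfect reconstruction -> True
```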
Based on the approximated Quasi-Framelet transformation defined above, two types of graph framelet convolutions have been developed recently:

1. **The Spectral Framelet Models** (Zheng et al., 2021; 2022b; Yang et al., 2022; Shi et al., 2023a):

$$\mathbf{F}^{(k+1)} = \sigma\left(\mathcal{W}^\top\mathrm{diag}(\theta)\mathcal{W}\mathbf{F}^{(k)}\widehat{\mathbf{W}}^{(k)}\right) := \sigma\left(\sum_{(r,\ell)\in\mathcal{I}}\mathcal{W}_{r,\ell}^\top\mathrm{diag}(\theta_{r,\ell})\mathcal{W}_{r,\ell}\mathbf{F}^{(k)}\widehat{\mathbf{W}}^{(k)}\right), \tag{9}$$

where $\theta_{r,\ell} \in \mathbb{R}^N$, $\widehat{\mathbf{W}}^{(k)}$ are learnable matrices for channel/feature mixing, and $\mathcal{I} = \{(r, \ell) : r = 1, \ldots, R,\ \ell = 0, 1, \ldots, J\} \cup \{(0, J)\}$ is the index set for all framelet decomposition matrices.

2. **The Spatial Framelet Models** (Chen et al., 2023):

$$\mathbf{F}^{(k+1)} = \sigma\left(\mathcal{W}_{0,J}^\top\widehat{\mathbf{A}}\mathcal{W}_{0,J}\mathbf{F}^{(k)}\widehat{\mathbf{W}}_{0,J}^{(k)} + \sum_{r,\ell}\mathcal{W}_{r,\ell}^\top\widehat{\mathbf{A}}\mathcal{W}_{r,\ell}\mathbf{F}^{(k)}\widehat{\mathbf{W}}_{r,\ell}^{(k)}\right). \tag{10}$$

The spectral framelet models conduct framelet decomposition and reconstruction on the spectral domain of the graph. Clearly, $\theta_{r,\ell} \in \mathbb{R}^N$ can be interpreted as frequency filters, given that a framelet system provides a perfect reconstruction of the input graph signal (i.e., $\mathcal{W}^\top\mathcal{W} = \mathbf{I}$). Instead of frequency-domain filtering, the spatial framelet models implement the framelet-based propagation via the spatial (graph adjacency) domain. There is a major difference between the two schemes. In the spectral framelet methods, the weight matrix $\widehat{\mathbf{W}}^{(k)}$ is shared across the different (filtered) frequency domains, while in the spatial framelet methods, an individual weight matrix $\widehat{\mathbf{W}}_{r,\ell}^{(k)}$ is applied to each (filtered) spatial domain to produce the graph convolution. It is worth noting that the theoretical exploration of the learning advantage of graph framelet models, as well as the difference between the two types of framelet models, has been studied in (Han et al., 2022; Zheng et al., 2021; Chen et al., 2023; Shi et al., 2023b).

Generalized p-Laplacian Regularized Framelet GCN. In this part, we provide several additional definitions to formulate the model (pL-UFG) that we are interested in analyzing. As a generalized framelet model incorporating the so-called p-Laplacian energy regularizer, pL-UFG, as defined later, has shown great flexibility in adapting to different types of graphs (i.e., homophilic and heterophilic) by efficiently adjusting the penalty strength of the regularizer, resulting in superior learning performance across various benchmark datasets (Shao et al., 2023). We start by defining the p-Laplace operator on the graph as follows.

Definition 3 (Graph p-Laplacian). *Given a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{W})$ and a multiple-channel signal function $\mathbf{F} \in \mathcal{F}^{\mathcal{V}}$, the graph p-Laplacian is an operator $\Delta_p : \mathcal{F}^{\mathcal{V}} \to \mathcal{F}^{\mathcal{V}}$, defined by:*

$$\Delta_p\mathbf{F} := -\frac{1}{2}\mathrm{div}(\|\nabla\mathbf{F}\|^{p-2}\nabla\mathbf{F}), \quad \text{for } p \geq 1, \tag{11}$$

*where $\mathrm{div}$ and $\nabla$ are the graph divergence and gradient operators (Fu et al., 2022), and $\|\cdot\|^{p-2}$ is an element-wise power over the node gradient $\nabla\mathbf{F}$.*²

²We provide a detailed formulation of the graph p-Laplacian in Appendix A.1.1.

The corresponding p-Dirichlet form can be denoted as:

$$\mathcal{S}_p(\mathbf{F}) = \frac{1}{2}\sum_{(v_i,v_j)\in\mathcal{E}}\left\|\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}_j - \sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}_i\right\|^p, \tag{12}$$

where we adopt the definition of the p-norm from (Fu et al., 2022).
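A minimal sketch of eq. (12) follows; iterating over ordered node pairs (so that each undirected edge is visited twice) and its interaction with the leading 1/2 reflect our reading of the convention.

```python
# Minimal sketch of the p-Dirichlet form S_p(F), eq. (12).
import numpy as np

def p_dirichlet_energy(F, W, p=2.0):
    # F: (N, c) node features, W: (N, N) adjacency; d_{i,i} = sum_j w_{i,j}.
    d = W.sum(axis=1)
    total = 0.0
    for i, j in zip(*np.nonzero(W)):  # ordered pairs: each undirected edge twice
        diff = np.sqrt(W[i, j] / d[j]) * F[j] - np.sqrt(W[i, j] / d[i]) * F[i]
        total += np.linalg.norm(diff) ** p
    return 0.5 * total

# Example on a single edge with differing node features.
W = np.array([[0., 1.], [1., 0.]])
F = np.array([[0., 0.], [1., 1.]])
print(p_dirichlet_energy(F, W, p=2.0))  # the Dirichlet energy for p = 2
```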
It is not difficult to verify that once we set $p = 2$, we recover the graph Dirichlet energy (Zhou & Schölkopf, 2005), which is widely used to measure the difference between node features along the GNN propagation process.

Remark 1 (Dirichlet Energy, Graph Homophily and Heterophily). Graph Dirichlet energy (Fu et al., 2022; Bronstein et al., 2021) has become a commonly applied measure of the variation between node features in GNNs. It has been shown that once the graph is highly heterophilic, where connected nodes are unlikely to share identical labels, one may prefer GNNs that exhibit a node-feature sharpening effect (thus increasing the Dirichlet energy), so that the final classification outputs of connected nodes tend to differ. Conversely, when the graph is highly homophilic, a smoothing effect (and thus a decrease of the Dirichlet energy) is preferred.

Shao et al. (2023) further generalized the p-Dirichlet form in eq. (12) as:

$$\mathcal{S}_p(\mathbf{F}) = \frac{1}{2}\sum_{(v_i,v_j)\in\mathcal{E}}\|\nabla_W\mathbf{F}([i,j])\|^p = \frac{1}{2}\sum_{v_i\in\mathcal{V}}\left[\left(\sum_{v_j\sim v_i}\|\nabla_W\mathbf{F}([i,j])\|^p\right)^{\frac{1}{p}}\right]^p = \frac{1}{2}\sum_{v_i\in\mathcal{V}}\|\nabla_W\mathbf{F}(v_i)\|_p^p, \tag{13}$$

where $v_j \sim v_i$ stands for a node $v_j$ connected to node $v_i$, $\nabla_W\mathbf{F}(v_i) = (\nabla_W\mathbf{F}([i,j]))_{v_j:(v_i,v_j)\in\mathcal{E}}$ is the node gradient vector for each node $v_i$, and $\|\cdot\|_p$ is the vector p-norm. Moreover, we can further generalize the regularizer $\mathcal{S}_p(\mathbf{F})$ by considering any positive convex function $\phi$ as:

$$\mathcal{S}_p^{\phi}(\mathbf{F}) = \frac{1}{2}\sum_{v_i\in\mathcal{V}}\phi(\|\nabla_W\mathbf{F}(v_i)\|_p). \tag{14}$$

There are many choices of $\phi$ and $p$. When $\phi(\xi) = \xi^p$, we recover the p-Laplacian regularizer. Interestingly, by setting $\phi(\xi) = \xi^2$, we recover the so-called Tikhonov regularization, which is frequently applied in image processing. When $\phi(\xi) = \xi$, i.e., the identity map written as $\mathrm{id}$, and $p = 1$, $\mathcal{S}_1^{\mathrm{id}}(\mathbf{F})$ becomes the classic total variation regularization. Last but not least, $\phi(\xi) = r^2\log(1 + \xi^2/r^2)$ gives a non-linear diffusion. We note that there are many other choices for the form of $\phi$; in this paper we focus only on those mentioned in (Shao et al., 2023) (i.e., the smooth ones). As a result, the flexible design of the **p-Laplacian based energy regularizer** in eq. (14) provides different penalty strengths for regularizing the node features propagated from GNNs. Accordingly, the regularization problem proposed in (Shao et al., 2023) is:

$$\mathbf{F} = \operatorname*{arg\,min}_{\mathbf{F}}\ \mathcal{S}_p^{\phi}(\mathbf{F}) + \mu\|\mathbf{F} - \mathcal{W}^\top\mathrm{diag}(\theta)\mathcal{W}\mathbf{F}\|_F^2, \tag{15}$$

where we let $\mathbf{Y} := \mathcal{W}^\top\mathrm{diag}(\theta)\mathcal{W}\mathbf{F}$ stand for the node features generated by the spectral framelet model (9) without the activation $\sigma$. This is the implicit layer proposed in (Shao et al., 2023). As the optimization problem defined in eq. (15) does not have a closed-form solution when $p \neq 2$, an iterative algorithm was developed in (Shao et al., 2023) to address this issue. The justification is summarized by the following proposition (Theorem 1 in (Shao et al., 2023)):
*For a given positive convex function* $\phi(\xi)$ ($\phi(\xi)\geq0$ *on the domain* $\mathbb{R}_{+}$*), define*

$$M_{i,j}=\frac{w_{i,j}}{2}\left\|\nabla_{W}\mathbf{F}([i,j])\right\|^{p-2}\cdot\left[\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}(v_{i})\|_{p})}{\|\nabla_{W}\mathbf{F}(v_{i})\|_{p}^{p-1}}+\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}(v_{j})\|_{p})}{\|\nabla_{W}\mathbf{F}(v_{j})\|_{p}^{p-1}}\right],\tag{16}$$

$$\alpha_{ii}=1\Big/\left(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}}{d_{i,i}}+2\mu\right),\qquad\beta_{ii}=2\mu\alpha_{ii},\tag{17}$$

*and denote the matrices* $\mathbf{M}=[M_{i,j}]$, $\alpha=\mathrm{diag}(\alpha_{11},...,\alpha_{NN})$ *and* $\beta=\mathrm{diag}(\beta_{11},...,\beta_{NN})$*. Then problem (15) can be solved by the following message passing process*

$$\mathbf{F}^{(k+1)}=\alpha^{(k)}\mathbf{D}^{-1/2}\mathbf{M}^{(k)}\mathbf{D}^{-1/2}\mathbf{F}^{(k)}+\beta^{(k)}\mathbf{Y},\tag{18}$$

*with an initial value, e.g.,* $\mathbf{F}^{(0)}=\mathbf{0}$ *or* $\mathbf{Y}$*. Here* $k$ *denotes the discrete time index (iteration), and* $\mathbf{M}^{(k)}$ *is calculated according to eq. (16) with* $\mathbf{F}^{(k)}$.

Finally, in this paper we call the iterative algorithm eq. (18), which realizes the implicit layer defined by the solution of problem (15), the **pL-UFG** model. Although remarkable performance has been observed for pL-UFG, several key properties of the model still require theoretical exploration: the convergence of the iterative algorithm eq. (18) for the implicit layer in eq. (15); how the implicit layer changes and interacts with the energy dynamics of the original framelet; and the relationship between the propagation within the implicit layer and other propagations, such as diffusion, on node features. We explicitly present our theoretical results in the coming sections.

## 3 Theoretical Analysis of the pL-UFG

In this section, we present a detailed analysis of the convergence (Section 3.1) and energy behavior (Section 3.2) of the iterative algorithm realizing the implicit layer in eq. (18). In addition, in Section 3.3 we present results on the interaction between the implicit layer and the graph framelet from the energy dynamics perspective, based on the conclusions of Section 3.2. Lastly, in Section 3.5, we verify that the iterative algorithm induced from the p-Laplacian implicit layer admits a discretized non-linear diffusion process, thereby connecting the discrete iterative algorithm to differential equations on the graph. First, we consider the form of the matrix $\mathbf{M}$ in eq. (18). Write

$$\zeta_{i,j}^{\phi}(\mathbf{F})=\frac{1}{2}\left[\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}(v_{i})\|_{p})}{\|\nabla_{W}\mathbf{F}(v_{i})\|_{p}^{p-1}}+\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}(v_{j})\|_{p})}{\|\nabla_{W}\mathbf{F}(v_{j})\|_{p}^{p-1}}\right].\tag{19}$$

Then $M_{i,j}$ can be simplified as

$$M_{i,j}=\zeta_{i,j}^{\phi}(\mathbf{F})\,w_{i,j}\,\|\nabla_{W}\mathbf{F}([i,j])\|^{p-2}.\tag{20}$$

$\zeta_{i,j}^{\phi}(\mathbf{F})$ is bounded, as shown in the following lemma.

Lemma 1. *For the given* p *used in* $\mathcal{S}_p^{\phi}$ *(see eq. (14)), assume that*

$$\frac{\phi^{\prime}(\xi)}{\xi^{p-1}}\leq C\tag{21}$$

*for a suitable constant* C*; then* $|\zeta_{i,j}^{\phi}(\mathbf{F})|\leq C$.

The proof is trivial and thus omitted. In the sequel, we write $\zeta_{i,j}(\mathbf{F})$ for $\zeta_{i,j}^{\phi}(\mathbf{F})$.

Remark 2. The condition (21) assumed in Lemma 1 is appropriate to ensure that $\zeta_{i,j}(\mathbf{F})$ is bounded. This condition is satisfied for all $\phi$ functions used in (Shao et al., 2023):
- When $\phi(\xi)=\xi^{p}$ ($1\leq p\leq\infty$) with the same $p$ used in $\mathcal{S}_p^{\phi}(\mathbf{F})$, the resulting $\zeta_{i,j}(\mathbf{F})$ is bounded for all such $p$;
- When $\phi(\xi)=\xi^{2}$, for any $0<p\leq2$ in $\mathcal{S}_p^{\phi}(\mathbf{F})$ the resulting $\zeta_{i,j}(\mathbf{F})$ is bounded whenever $\mathbf{F}$ is bounded, because $\frac{\phi'(\xi)}{\xi^{p-1}}=\frac{2\xi}{\xi^{p-1}}=2\,\frac{\xi^{2}}{\xi^{p}}$, which is bounded for finite $\xi>0$;
- When $\phi(\xi)=\xi$, then $\frac{\phi'(\xi)}{\xi^{p-1}}=\frac{1}{\xi^{p-1}}$, indicating that $\zeta_{i,j}(\mathbf{F})$ is bounded for all $0<p\leq1$;
- When $\phi(\xi)=\sqrt{\xi^{2}+\epsilon^{2}}-\epsilon$, we have $\frac{\phi'(\xi)}{\xi^{p-1}}=\frac{\xi}{(\xi^{2}+\epsilon^{2})^{1/2}\,\xi^{p-1}}\leq\frac{C\xi}{\xi^{p-1}}$; therefore $\zeta_{i,j}(\mathbf{F})$ is bounded for all $0<p\leq2$;
- When $\phi(\xi)=r^{2}\log\big(1+\frac{\xi^{2}}{r^{2}}\big)$, the ratio $\frac{\phi'(\xi)}{\xi^{p-1}}$ yields $r^{2}\,\frac{1}{1+\xi^{2}/r^{2}}\cdot\frac{2\xi}{r^{2}\,\xi^{p-1}}\leq\frac{2\xi}{\xi^{p-1}}$; hence $\zeta_{i,j}(\mathbf{F})$ remains bounded for all $0<p\leq2$.

In summary, for all forms of $\phi$ included in the model, $\zeta_{i,j}(\mathbf{F})$ is bounded with an appropriate choice of $p$. The boundedness of $\zeta_{i,j}(\mathbf{F})$ from Lemma 1 is useful in the following convergence analysis.

## 3.1 Convergence Analysis of pL-UFG

We show that the iterative algorithm presented in eq. (18) converges with a suitable choice of $\mu$. We further note that, although the format of Theorem 1 is similar to Theorem 2 in (Fu et al., 2022), our message passing scheme in eq. (18) differs from the one defined in (Fu et al., 2022) through the forms of $\mathbf{M}$, $\alpha$ and $\beta$. In fact, the model defined in (Fu et al., 2022) can be considered a special case with $\phi(\xi)=\xi^{p}$. As a generalization of that model, we provide a uniform convergence analysis for pL-UFG.

Theorem 1 (Shrinking Property of the Proposed Model Iteration). *Given a graph* $\mathcal{G}(\mathcal{V},\mathcal{E},W)$ *with node features* $\mathbf{X}$*, if* $\alpha^{(k)}$, $\beta^{(k)}$, $\mathbf{M}^{(k)}$ *and* $\mathbf{F}^{(k)}$ *are updated according to eq. (18), then there exists some real positive value* $\mu$*, which depends on the input graph* $(\mathcal{G},\mathbf{X})$ *and the quantity of* $p$*, updated in each iteration, such that:*

$$\mathcal{L}_{p}^{\phi}(\mathbf{F}^{(k+1)})\leq\mathcal{L}_{p}^{\phi}(\mathbf{F}^{(k)}),\quad\text{where }\mathcal{L}_{p}^{\phi}(\mathbf{F}):=\mathcal{S}_{p}^{\phi}(\mathbf{F})+\mu\|\mathbf{F}-\mathbf{Y}\|_{F}^{2}.$$

We include a detailed proof in Appendix A.2.1. Theorem 1 shows that, with an appropriately chosen value of $\mu$, the iteration scheme eq. (18) for the implicit layer eq. (15) is guaranteed to decrease (or at least not increase) the objective function value. This motivates us to further explore the asymptotic variation of the node features produced by the implicit layer. Recall that a common way to measure the difference between node features is the Dirichlet energy, obtained by setting p = 2 in eq. (12). It is known that in many GNN models the Dirichlet energy of the node features tends to 0 after a sufficiently large number of iterations (Kipf & Welling, 2016; Wu et al., 2020; Chamberlain et al., 2021a; Di Giovanni et al., 2023), known as the over-smoothing problem. However, as we will show in the next section, by taking a large $\mu$ or a small $p$, the iteration of the implicit layer always lifts the Dirichlet energy of the node features, and the over-smoothing issue is resolved completely in pL-UFG.

Remark 3. Additionally, Theorem 1 guarantees that the objective $\mathcal{L}_p^{\phi}$ is bounded, which implies that all iterates generated by eq. (18) are bounded (as $\|\mathbf{F}^{(k)}-\mathbf{Y}\|_F^{2}$ is bounded). Hence the sequence $\{\mathbf{F}^{(k)}\}_{k=1}^{\infty}$ must have an accumulation point $\mathbf{F}^{*}$, i.e., there is a convergent subsequence $\{\mathbf{F}^{(k_n)}\}_{n=1}^{\infty}$. Accordingly, the iteration eq. (18) produces a weakly convergent sequence in the sense of subsequential convergence.

Remark 4. In practice, the implicit layer eq. (15) is replaced by a fixed number (e.g., 20 in our experiments) of iterations of eq. (18); a minimal sketch of one such iteration is given below.
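The following NumPy sketch implements a single iteration of eq. (18) for the special case $\phi(\xi)=\xi^{p}$, for which $\zeta_{i,j}\equiv p$ and hence $M_{i,j}=p\,w_{i,j}\|\nabla_W\mathbf{F}([i,j])\|^{p-2}$ by eq. (20). The dense-matrix setting and the small `eps` guarding zero gradients are our own illustrative assumptions, not part of the reference implementation of (Shao et al., 2023).

```python
import numpy as np

def pl_ufg_iteration(F, Y, W, mu, p, eps=1e-8):
    """One step of the message passing eq. (18) with phi(xi) = xi^p.
    W: dense symmetric adjacency (N x N); F: current features (N x c);
    Y: framelet output serving as the source term in eq. (15)."""
    d = W.sum(axis=1)                        # degrees d_{i,i}
    Dinv = 1.0 / np.sqrt(d)
    # edge gradients: grad[i, j] = sqrt(w/d_j) f_j - sqrt(w/d_i) f_i (eq. (30))
    S = np.sqrt(W)
    G = (S * Dinv[None, :])[:, :, None] * F[None, :, :] \
        - (S * Dinv[:, None])[:, :, None] * F[:, None, :]
    gnorm = np.linalg.norm(G, axis=2) + eps  # ||grad_W F([i, j])||
    M = p * W * gnorm ** (p - 2.0)           # eq. (16)/(20) with zeta = p
    alpha = 1.0 / (M.sum(axis=1) / d + 2.0 * mu)   # eq. (17)
    beta = 2.0 * mu * alpha
    # F^{(k+1)} = alpha D^{-1/2} M D^{-1/2} F^{(k)} + beta Y, eq. (18)
    prop = (Dinv[:, None] * M * Dinv[None, :]) @ F
    return alpha[:, None] * prop + beta[:, None] * Y
```

Repeating this map for a fixed number of steps (20 in our experiments, cf. Remark 4) realizes the implicit layer; the update can also be read as one explicit Euler step of the non-linear diffusion discussed later in Section 3.5.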
Theorem 1 ensures that the objective value is non-increasing; in practice, this shrinking property is therefore more important than convergence itself.

## 3.2 Energy Behavior of the pL-UFG

In this section, we examine the energy behavior of the p-Laplacian based implicit layer. Specifically, we are interested in analyzing the generalized Dirichlet energy defined in (Bronstein et al., 2021). We start by denoting the generalized graph convolution as follows:

$$\mathbf{F}^{(k+\tau)}=\mathbf{F}^{(k)}+\tau\sigma\left(-\mathbf{F}^{(k)}\Omega^{(k)}+\widehat{\mathbf{A}}\mathbf{F}^{(k)}\widehat{\mathbf{W}}^{(k)}-\mathbf{F}^{(0)}\widetilde{\mathbf{W}}^{(k)}\right),\tag{22}$$

where $\Omega^{(k)}$, $\widehat{\mathbf{W}}^{(k)}$ and $\widetilde{\mathbf{W}}^{(k)}\in\mathbb{R}^{c\times c}$ act on each node feature vector independently and perform channel mixing. When $\tau=1$ and $\Omega^{(k)}=\widetilde{\mathbf{W}}^{(k)}=\mathbf{0}$, eq. (22) reduces to GCN (Kipf & Welling, 2016). Additionally, by setting $\Omega^{(k)}\neq\mathbf{0}$, we obtain the anisotropic instance of GraphSAGE (Hamilton et al., 2017). To quantify the quality of the node features generated by eq. (22), Bronstein et al. (2021) considered a new class of energies defined below:

$$\mathbf{E}(\mathbf{F})=\frac{1}{2}\sum_{i=1}^{N}\langle\mathbf{f}_{i},\Omega\mathbf{f}_{i}\rangle-\frac{1}{2}\sum_{i,j=1}^{N}\widehat{\mathbf{A}}_{i,j}\langle\mathbf{f}_{i},\widehat{\mathbf{W}}\mathbf{f}_{j}\rangle+\varphi^{(0)}(\mathbf{F},\mathbf{F}^{(0)}),\tag{23}$$

in which $\varphi^{(0)}(\mathbf{F},\mathbf{F}^{(0)})$ serves as a function that induces the source term from $\mathbf{F}$ or $\mathbf{F}^{(0)}$. It is worth noting that by setting $\Omega=\widehat{\mathbf{W}}=\mathbf{I}_c$ and $\varphi^{(0)}=0$, we recover the classic Dirichlet energy obtained by setting p = 2 in eq. (12), that is, $\mathbf{E}(\mathbf{F})=\frac{1}{2}\sum_{(v_i,v_j)\in\mathcal{E}}\big\|\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}_j-\sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}_i\big\|^{2}$. Additionally, when we set $\varphi^{(0)}(\mathbf{F},\mathbf{F}^{(0)})=\sum_i\langle\mathbf{f}_i,\widetilde{\mathbf{W}}\mathbf{f}_i^{(0)}\rangle$, eq. (23) can be rewritten as:

$$\mathbf{E}(\mathbf{F})=\left\langle\operatorname{vec}(\mathbf{F}),\frac{1}{2}(\Omega\otimes\mathbf{I}_{N}-\widehat{\mathbf{W}}\otimes\widehat{\mathbf{A}})\operatorname{vec}(\mathbf{F})+(\widetilde{\mathbf{W}}\otimes\mathbf{I}_{N})\operatorname{vec}(\mathbf{F}^{(0)})\right\rangle.\tag{24}$$

Recall that eq. (18) produces the node features $\mathbf{F}^{(k+1)}$ via the edge diffusion $\alpha\mathbf{D}^{-1/2}\mathbf{M}\mathbf{D}^{-1/2}$ on $\mathbf{F}^{(k)}$ and the scaled source term $2\mu\alpha\mathbf{F}^{(0)}$, where $\mathbf{F}^{(0)}$ can be set to $\mathbf{Y}$. Specifically, in eq. (24) we set $\Omega=\widehat{\mathbf{W}}=\widetilde{\mathbf{W}}=\mathbf{I}_c$, replace the edge *diffusion* $\widehat{\mathbf{A}}$ with $\alpha\mathbf{D}^{-1/2}\mathbf{M}\mathbf{D}^{-1/2}$, and set the identity matrix $\mathbf{I}_N$ in the residual term to the diagonal matrix $2\mu\alpha$. Finally, we propose:

Definition 4 (The Generalized Dirichlet Energy). *With the notation above, the generalized Dirichlet energy of the node features* $\mathbf{F}^{(k+1)}$ *in eq. (18) is:*

$$\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})=\left\langle\operatorname{vec}(\mathbf{F}^{(k+1)}),\ \frac{1}{2}\left(\mathbf{I}_{c}\otimes\mathbf{I}_{N}-\mathbf{I}_{c}\otimes\big(\alpha^{(k+1)}\mathbf{D}^{-1/2}\mathbf{M}^{(k+1)}\mathbf{D}^{-1/2}\big)\right)\operatorname{vec}(\mathbf{F}^{(k+1)})+(\mathbf{I}_{c}\otimes2\mu\alpha^{(k+1)})\operatorname{vec}(\mathbf{F}^{(0)})\right\rangle,\tag{25}$$

*where the superscript "*PF*" is short for p-Laplacian based framelet models.*

It is worth noting that the generalized Dirichlet energy defined in eq. (25) is dynamic along the iterative layers due to the non-linear nature of the implicit layer defined in eq. (15). We are now able to analyze the energy behavior $\mathbf{E}^{PF}(\mathbf{F})$ of pL-UFG, concluded in the following proposition.

Proposition 2 (Energy Behavior). *Assume* $\mathcal{G}$ *is connected, unweighted, and undirected.*
*There exists a sufficiently large value of* $\mu$ *or a sufficiently small value of* $p$ *such that* $\mathbf{E}^{PF}(\mathbf{F})$ *stays bounded away from* 0 *at each iterative layer* $k$*, and it increases as* $\mu$ *increases or* $p$ *decreases.*

We leave the detailed proof to Appendix A.2.2. Proposition 2 shows that, for any of our framelet convolution models, the p-Laplacian based implicit layer will not generate identical node features across graph nodes, and thus the so-called over-smoothing issue will not appear. Furthermore, the result of Proposition 2 provides theoretical justification for the empirical observations in (Shao et al., 2023), where a large value of $\mu$ or a small value of $p$ is suitable for fitting heterophilic datasets, which commonly require the GNN output to have higher Dirichlet energy.

Remark 5 (On the quantity of p). The conclusion of Proposition 2 holds for sufficiently large $\mu$ or small $p$. However, it is well known that $p$ cannot take arbitrary values; in fact, $p\geq1$ is necessary for the iteration solving the optimization problem defined in eq. (15) to converge. It is therefore not hard to see that the effect of $p$ is weaker than that of $\mu$ in analyzing the asymptotic energy behavior of the model (i.e., via eq. (46) in Appendix A.2.2). Accordingly, in practice one should treat $\mu$ as the primary parameter and $p$ as the secondary parameter when adapting pL-UFG to different types of graphs. Without loss of generality, when we analyze model properties under conditions on $\mu$ and $p$ in the sequel, we mainly target the effect of $\mu$; one can check that $\mu$ and $p$ have opposite effects on the model.

## 3.3 Discussion on Dynamic Interactions

The results on the energy behavior of the p-Laplacian implicit layer, discussed in the last section, encourage further exploration of the interactions between the implicit layer and graph framelets. The energy dynamics of graph framelets have become a popular topic in recent works (Shi et al., 2023a; Han et al., 2022; 2023). The studies by Han et al. (2022) and Shi et al. (2023a) demonstrate that, under mild conditions, the propagation dynamics of graph framelets can be dominated by one of the frequency domains, formalized as low-frequency-dominance (LFD) and high-frequency-dominance (HFD). Taking the spectral framelet convolution eq. (9) with a Haar-type filter (i.e., R = 1 in the case of the scaling function set) as an example, one can manually set $\theta_{0,1}=\mathbf{1}_N$ and $\theta_{1,1}=\theta\mathbf{1}_N$; if $\theta\in[0,1)$, the spectral framelet convolution is LFD, meaning that although feature propagation through the high-frequency domain still sharpens node features, the mainstream of the framelet is dominated by the dynamics of the low-frequency domain, in which node features are homogenized by the low-pass filter. Asymptotically, all node features therefore still become identical, resulting in the over-smoothing issue. This situation can be reversed by setting $\theta>1$: the resulting HFD dynamics of the framelet ensure that all features are sharpened during the whole propagation process, so the over-smoothing issue does not appear. It is worth noting that many other settings can also induce LFD/HFD dynamics of the framelet; our example here serves illustration purposes (a numerical sketch of this θ-controlled behavior is given below).
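The following sketch illustrates the θ-controlled LFD/HFD behavior numerically on a toy graph. The Haar-type masks $g_0(\xi)=\cos(\xi/2)$, $g_1(\xi)=\sin(\xi/2)$ with $\xi=\pi\lambda/2$ (which satisfy $g_0^2+g_1^2=1$, i.e., perfect reconstruction), the omission of the activation σ and of the channel mixing $\widehat{\mathbf{W}}$, and the normalization step are all simplifying assumptions made for illustration.

```python
import numpy as np

def haar_framelet_step(F, L, theta):
    """One spectral framelet convolution with controlled dynamics:
    theta_{0,1} = 1_N (low-pass) and theta_{1,1} = theta * 1_N (high-pass).
    Channel mixing and the nonlinearity are omitted for simplicity."""
    lam, U = np.linalg.eigh(L)              # eigenpairs of the normalized Laplacian
    xi = np.pi * np.clip(lam, 0, 2) / 2
    g0, g1 = np.cos(xi / 2), np.sin(xi / 2)
    W0 = (U * g0) @ U.T                     # low-pass  W_{0,1}
    W1 = (U * g1) @ U.T                     # high-pass W_{1,1}
    return W0.T @ (1.0 * (W0 @ F)) + W1.T @ (theta * (W1 @ F))

def dirichlet_energy(L, F):
    return 0.5 * np.trace(F.T @ L @ F)

# toy graph: path on 4 nodes
A = np.diag(np.ones(3), 1); A = A + A.T
d = A.sum(1)
L = np.eye(4) - (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
F = np.random.randn(4, 2)
for theta, name in [(0.2, "LFD"), (2.0, "HFD")]:
    G = F.copy()
    for _ in range(50):
        G = haar_framelet_step(G, L, theta)
        G = G / np.linalg.norm(G)           # normalize to compare energy ratios
    print(name, dirichlet_energy(L, G))
```

With θ = 0.2 the normalized features collapse toward the kernel of the Laplacian (Dirichlet energy near 0), while θ = 2 drives them toward the highest-frequency eigenvector (energy near $\rho_{\widetilde{L}}/2$), matching the LFD/HFD notions formalized in Appendix A.1.2.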
Now recall that, based on Definition 1, if $\mathcal{G}$ is homophilic, connected nodes are more likely to share the same label; in this case, one may prefer the framelet model to induce LFD dynamics to enhance smoothing. On the other hand, if $\mathcal{G}$ is heterophilic, the model is expected to induce HFD dynamics so that the predicted labels of adjacent nodes still tend to differ. Based on the aforementioned settings of $\theta$, one can observe that the graph framelet can induce both LFD and HFD dynamics, and it is thus automatically capable of fitting both types of graphs. We rigorously formulate the LFD and HFD dynamics of the framelet in Appendix A.1.2. Together with the result of Proposition 2, it can be verified that if the framelet is LFD, the additional p-Laplacian implicit layer ensures that the Dirichlet energy of the node features remains nonzero, thus preventing the model from the over-smoothing issue. On the other hand, if the framelet is HFD, the node features are initially sharpened by the framelet and further serve as a source term in the implicit layer (i.e., $2\mu\alpha\mathbf{F}$). This setting provides the potential to induce further sharpening of the node features via the implicit layer, resulting in an "enhanced" HFD dynamic of the framelet to better fit heterophilic graphs. We empirically verify this observation in Section 4.2.

## 3.4 Proposed Model with Controlled Dynamics

Based on the aforementioned conclusions regarding the energy behavior and the interaction between the implicit layer and the framelet's energy dynamics, it becomes evident that, irrespective of the homophily index of the input graph, one can readily apply the conditions on $\theta$ in Proposition 3 to adapt the pL-UFG model to the input graph by simply adjusting the quantities of $\mu$ and $p$. This adjustment significantly reduces the training cost of the graph framelet. For instance, consider a Haar-type frame with $\ell=1$, where there is only one low-pass and one high-pass domain. In this scenario, the trainable matrices of the model are $\theta_{0,1}$, $\theta_{1,1}$, and $\widehat{\mathbf{W}}$. Based on our conclusions, we can manually set both $\theta_{0,1}$ and $\theta_{1,1}$ to the requested quantities, thereby inducing either LFD or HFD. Consequently, the only remaining training cost is associated with $\widehat{\mathbf{W}}$, leading to a large reduction in the overall training cost while preserving the model's capability of handling both types of graphs. Accordingly, we propose two additional pL-UFG variants with controlled model dynamics, namely **pL-UFG-LFD** and **pL-UFG-HFD**.

Figure 1: Illustration of the workflow of pL-UFG-LFD and pL-UFG-HFD under the Haar-type frame with $\ell=1$. The input graph features are first decomposed onto two frequency domains and further filtered by the diagonal matrices $\theta_{0,1}$ and $\theta_{1,1}$. With the controlled model dynamics from Proposition 3, i.e., $\theta_{0,1}=\mathbf{1}_N$ and $\theta_{1,1}=\theta\theta_{0,1}$, the framelet can induce both LFD and HFD dynamics, resulting in different levels of Dirichlet energy in the produced node features. It is straightforward to check that when the framelet is LFD, the level of node Dirichlet energy is lower than in its HFD counterpart. The node features generated by the graph framelet are then input to the p-Laplacian based implicit layer (with the graph gradient as one component).
More explicitly, the propagation of the graph framelet with controlled dynamics takes the form:

$$\mathbf{F}^{(k+1)}=\sigma\left(\mathcal{W}_{0,1}^{\top}\mathrm{diag}(\mathbf{1}_{N})\mathcal{W}_{0,1}\mathbf{F}^{(k)}\widehat{\mathbf{W}}+\mathcal{W}_{1,1}^{\top}\mathrm{diag}(\theta\mathbf{1}_{N})\mathcal{W}_{1,1}\mathbf{F}^{(k)}\widehat{\mathbf{W}}\right).$$

After that, the output node features are propagated through a number of iterative layers as defined in eq. (18) for the implicit layer eq. (15), and the resulting node features are forwarded to the next graph framelet convolution and implicit-layer propagation. To visually represent the Dirichlet energy of the node features, we borrow the concept of electronic orbital energy levels in Figure 1: the shaded outermost electrons correspond to higher energy levels, which can be analogously interpreted as higher variations of node features; conversely, the closer the electrons are to the nucleus, the lower their energy levels, indicating lower variations of node features.

## 3.5 Equivalence to Discrete Non-Linear Diffusion

Diffusion on graphs has recently gained popularity (Chamberlain et al., 2021b; Thorpe et al., 2022; Han et al., 2023) by providing a framework (i.e., PDEs) for understanding GNN architectures and a principled way to develop a broad class of new methods. To the best of our knowledge, although GNNs induced from discretized linear diffusion on graphs (Chamberlain et al., 2021b;a; Thorpe et al., 2022; Shi et al., 2024) have been intensively explored, models built from non-linear diffusion have not attracted much attention in general.³ In this section, we aim to verify that the iteration eq. (18) admits a (discretized) non-linear diffusion with a source term. To see this, recall that the p-Laplacian operator defined in eq. (11) has the form:

$$\Delta_{p}\mathbf{F}:=-\frac{1}{2}\mathrm{div}(\|\nabla\mathbf{F}\|^{p-2}\nabla\mathbf{F}),\quad\text{for }p\geq1.\tag{26}$$

Plugging the definitions of the graph gradient and divergence (specifically defined in eq. (30) and eq. (32)) into the above equation, one can compactly write out the form of the p-Laplacian as:

$$(\Delta_{p}\mathbf{F})(i)=\sum_{v_{j}\sim v_{i}}\sqrt{\frac{w_{i,j}}{d_{i,i}}}\left\|\nabla_{W}\mathbf{F}([i,j])\right\|^{p-2}\left(\sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}_{i}-\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}_{j}\right).\tag{27}$$

If we treat the iteration eq. (18) as the Euler discretization of a continuous non-linear diffusion process on the graph, we have

$$\frac{\mathbf{F}^{(k+1)}-\mathbf{F}^{(k)}}{\tau}=\alpha^{(k)}\mathbf{D}^{-1/2}\mathbf{M}^{(k)}\mathbf{D}^{-1/2}\mathbf{F}^{(k)}-\mathbf{F}^{(k)}+\beta^{(k)}\mathbf{Y}=\left(\alpha^{(k)}\mathbf{D}^{-1/2}\mathbf{M}^{(k)}\mathbf{D}^{-1/2}-\mathbf{I}\right)\mathbf{F}^{(k)}+\beta^{(k)}\mathbf{Y}.\tag{28}$$

We set $\tau=1$ for the rest of the analysis for convenience. With this setup, we summarize our result as follows:

³For more details, including the relationship between GNN dynamics and continuous diffusion processes, see (Chamberlain et al., 2021a; Han et al., 2023).

Lemma 2 (Non-Linear Diffusion). *Assuming* $\mathcal{G}$ *is connected, the forward Euler scheme presented in eq. (28) admits a generalized non-linear diffusion on the graph.*
*Specifically, we have:*

$$\left(\alpha^{(k)}\mathbf{D}^{-1/2}\mathbf{M}^{(k)}\mathbf{D}^{-1/2}-\mathbf{I}\right)\mathbf{F}^{(k)}+\beta^{(k)}\mathbf{Y}=\alpha^{(k)}\,\mathrm{div}\left(\|\nabla\mathbf{F}^{(k)}\|^{p-2}\nabla\mathbf{F}^{(k)}\right)+2\mu\alpha^{(k)}\mathbf{D}\mathbf{F}^{(k)}+2\mu\alpha^{(k)}\mathbf{F}^{(0)}.\tag{29}$$

We leave the proof of the lemma to Appendix A.2.3. Based on the conclusion of Lemma 2, it is clear that the propagation via the p-Laplacian implicit layer admits a scaled non-linear diffusion with two source terms. We note that the form of our non-linear diffusion coincides with the one developed in (Chen et al., 2022); however, in Chen et al. (2022) the linear operator is assigned via the calculation of the graph Laplacian, whereas in our model the transformation acts over the whole p-Laplacian. Finally, it is worth noting that the conclusion of Lemma 2 can also be transferred to implicit schemes;⁴ we omit the details here.

⁴With a duplication of terminology, the term "implicit" here refers to the implicit scheme (i.e., backward propagation) in the training of the diffusion model.

Remark 6. With sufficiently large $\mu$ or small $p$, one can check that the strength of the diffusion, i.e., $\mathrm{div}(\|\nabla\mathbf{F}^{(k)}\|^{p-2}\nabla\mathbf{F}^{(k)})$, is diluted. Once the two source terms $2\mu\alpha^{(k)}\mathbf{D}\mathbf{F}^{(k)}+2\mu\alpha^{(k)}\mathbf{F}^{(0)}$ dominate the whole process, the generated node features approach $\mathbf{D}\mathbf{F}^{(k)}+\mathbf{F}^{(0)}$, which suggests a framelet together with two source terms: the first term can be treated as a degree normalization of the node features from the last layer, and the second term simply maintains the initial feature embedding. Furthermore, this observation suggests that our conclusion on the energy behavior of pL-UFG (Proposition 2), the interaction within pL-UFG described in Section 3.3, and the conclusion of Lemma 2 can be unified, eventually forming a well-defined framework for assessing and understanding the properties of pL-UFG.

## 4 Experiment

**Experiment outline.** In this section, we present comprehensive experimental results supporting the claims made in the theoretical part of the paper. All experiments were conducted in PyTorch on an NVIDIA Tesla V100 GPU with 5,120 CUDA cores and 16GB HBM2, mounted on an HPC cluster. For convenience, we summarize each experimental section as follows:

- In Section 4.1, we show how a sufficiently large/small $\mu$ affects the model's performance on heterophilic/homophilic graphs, with results that are almost invariant to the choice of $p$.
- In Section 4.2, we present tests regarding the model's dynamics. Specifically, we verify the observations of Section 3.3 under controlled model dynamics (i.e., a fixed quantity of $\theta$) of the framelet, to illustrate how the p-Laplacian based implicit layer interacts with the framelet model.
- In Section 4.3, we test the performance of pL-UFG-LFD and pL-UFG-HFD on real-world graph benchmarks against various baseline models. Furthermore, as these two controllable pL-UFG models largely reduce the computational cost (as claimed in Section 3.4), we show that pL-UFG-LFD and pL-UFG-HFD can even handle large-scale graph datasets and achieve remarkable learning accuracy.

**Hyper-parameter tuning.** We applied exactly the same hyper-parameter tuning strategy as (Shao et al., 2023) to make a fair comparison. In terms of the settings for the graph framelets, the framelet type is fixed as Haar (Yang et al., 2022), and the level J is set to 1.
The dilation scale is $s\in\{1,1.5,2,3,6\}$, and the degree $n$ of the Chebyshev polynomial approximation is drawn from $\{2,3,7\}$. It is worth noting that in graph framelets the Chebyshev polynomial is utilized to approximate the spectral filtering of the Laplacian eigenvalues; a d-degree polynomial thus approximates the d-hop neighbourhood information of each node of the graph. Therefore, when the input graph is heterophilic, one may prefer a relatively large d, as node labels tend to differ between directly connected (1-hop) nodes.

## 4.1 Synthetic Experiment on the Variation of µ

**Setup.** In this section, we show how a sufficiently large/small $\mu$ affects the model's performance on heterophilic/homophilic graphs. To make a fair comparison, all parameters of pL-UFG followed the settings included in (Shao et al., 2023). For this test, we selected two datasets from https://www.pyg.org/: Cora (homophily index: 0.825; 2708 nodes and 5278 edges) and Wisconsin (homophily index: 0.15; 499 nodes and 1703 edges). We assigned $p\in\{1,1.5,2,2.5\}$ combined with $\mu\in\{0.1,0.5,1,5,10,20,30,50,70\}$. The number of epochs was set to 200, and the test accuracy (in %) was obtained as the average over 10 runs.

**Results and discussion.** The experimental results are presented in Figure 2. For the homophilic graph (Figure 2a), it is apparent that all variants of pL-UFG achieved the best performance at $\mu=0.1$, the minimum value of $\mu$. As the value of $\mu$ increased, the learning accuracy decreased. This suggests that a larger sharpening effect was induced by the model, as stated in Section 3.3 and Proposition 2, causing pL-UFG to incorporate a higher amount of Dirichlet energy into the generated node features; consequently, pL-UFG becomes better suited to heterophilic graphs. This observation is further supported by the results in Figure 2b, where all pL-UFG variants achieved their optimal performance with a sufficiently large $\mu$ when the input graph is heterophilic. Another interesting observation is that, although all model variants demonstrated superior learning outcomes on homophilic and heterophilic graphs when assigned sufficiently small or large values of $\mu$ respectively, when the quantity of $p$ is small, pL-UFG requires a smaller value of $\mu$ to fit the heterophilic graph (blue line in Figure 2b). On the other hand, when the models have a relatively large value of $p$ (i.e., p = 2.5), they yield the most robust results under increases of $\mu$ (red line in Figure 2a). These phenomena further support the claim that $p$ and $\mu$ exert opposite effects on the model's energy behavior as well as on its adaptation to homophilic and heterophilic graphs.

## 4.2 Synthetic Experiment on the Model's Dynamics

We now take one step further. Based on the observations in Section 3.3, the inclusion of the p-Laplacian based implicit layer has the potential to further enhance the framelet's LFD and HFD dynamics. This suggests that one can fix the entries of $\theta$ according to the conditions provided in Proposition 3 and change only the quantities of $\mu$ and $p$ to test the model's power of adaptation on both homophilic and heterophilic graphs. Therefore, in this section we show how a (dynamics-)controlled framelet model can be further enhanced with the assistance of the p-Laplacian regularizer.
Similarly, we applied the same settings as the experiments in (Shao et al., 2023).

Figure 2: Performance of pL-UFG with various combinations of the values of $\mu$ and $p$. (a) Accuracy on Cora; (b) accuracy on Wisconsin.

**Setup and results.** To verify the claims in Section 3.3, we deployed the settings stated in Proposition 3. Specifically, we utilized the Haar frame with $\ell=1$ and set $\theta_{0,1}=\mathbf{1}_N$, $\theta_{1,1}=\theta\mathbf{1}_N$. For the heterophilic graph (Wisconsin), $\theta=2$, and for the homophilic graph (Cora), $\theta=0.2$. The results of the experiment are presented in Figure 3. Similar to the results observed in Section 4.1, when a relatively large $\mu$ is assigned, the model's capability of adapting to homophilic/heterophilic graphs decreases/increases. This directly verifies that the p-Laplacian based implicit layer interacts with and further enhances the (controlled) dynamics of the framelet through the values of $p$ and $\mu$, in terms of adaptation.

Figure 3: Average accuracy (%) with changing $\mu$ and $p$ under (manually fixed) LFD/HFD framelet models. All framelet models in Figure 3a are LFD with $\theta_{0,1}=\mathbf{1}_N$, $\theta_{1,1}=\theta\mathbf{1}_N$, $\theta=0.2$; in Figure 3b, all framelet models are HFD with $\theta_{0,1}=\mathbf{1}_N$, $\theta_{1,1}=\theta\mathbf{1}_N$, $\theta=2$.

## 4.3 Real-World Node Classification and Scalability

The previous synthetic numerical results show the predictable performance of both pL-UFG-LFD and pL-UFG-HFD. In this section, we present the learning accuracy of our proposed models on real-world homophilic and heterophilic graphs, again deploying the experimental settings of (Shao et al., 2023). In addition, to verify the claim in Section 3.4, we tested our proposed model on a large-scale graph dataset (ogbn-arxiv) to show its scalability, which is rarely explored. We include the summary statistics of the datasets in Table 2; all datasets are split according to (Hamilton et al., 2017).

Table 2: Statistics of the datasets; H(G) represents the level of homophily of each benchmark dataset.

| Datasets | Class | Feature | Node | Edge | H(G) |
|-----------|-------|---------|--------|---------|-------|
| Cora | 7 | 1433 | 2708 | 5278 | 0.825 |
| CiteSeer | 6 | 3703 | 3327 | 4552 | 0.717 |
| PubMed | 3 | 500 | 19717 | 44324 | 0.792 |
| Computers | 10 | 767 | 13381 | 245778 | 0.802 |
| Photo | 8 | 745 | 7487 | 119043 | 0.849 |
| CS | 15 | 6805 | 18333 | 81894 | 0.832 |
| Physics | 5 | 8415 | 34493 | 247962 | 0.915 |
| Arxiv | 23 | 128 | 169343 | 1166243 | 0.681 |
| Chameleon | 5 | 2325 | 2277 | 31371 | 0.247 |
| Squirrel | 5 | 2089 | 5201 | 198353 | 0.216 |
| Actor | 5 | 932 | 7600 | 26659 | 0.221 |
| Wisconsin | 5 | 251 | 499 | 1703 | 0.150 |
| Texas | 5 | 1703 | 183 | 279 | 0.097 |
| Cornell | 5 | 1703 | 183 | 277 | 0.386 |

For the settings of $\mu$, $p$ and $\theta$ in pL-UFG-LFD and pL-UFG-HFD, we assigned $\mu\in\{0.1,0.5,1,2.0\}$, $p\in\{1,1.5,2,2.5\}$ and $\theta\in\{0.2,0.5,0.8\}$ for pL-UFG-LFD in order to fit the homophilic graphs, and for pL-UFG-HFD we assigned $\mu\in\{10,20,30\}$, $p\in\{1,1.5,2,2.5\}$ and $\theta\in\{5,7.5,10\}$ for the heterophilic graphs. The learning accuracies are presented in Tables 3 and 4. For reference, a minimal sketch of how the homophily level H(G) in Table 2 can be computed is given below.
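The following sketch assumes the edge-level homophily convention (the fraction of edges joining same-label endpoints); Definition 1 in the main text may use a node-level variant, so treat this as an illustration rather than the exact metric.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label.
    edges: iterable of (i, j) index pairs; labels: array of node labels."""
    same = sum(labels[i] == labels[j] for i, j in edges)
    return same / len(edges)

# toy example: a 4-cycle with two classes
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = np.array([0, 0, 1, 1])
print(edge_homophily(edges, labels))  # 0.5
```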
Furthermore, rather than only reporting the average accuracy and the related standard deviation, to further verify the significance of the improvements we also computed the 95% confidence interval under the t-distribution for the highest learning accuracy among the baselines, and we mark our model's learning accuracy with ∗ when it lies outside this confidence interval. We include a brief introduction to the baseline models used in this experiment:

- **MLP**: Standard feed-forward multi-layer perceptron.
- **GCN** (Kipf & Welling, 2016): GCN is the first of its kind to implement a linear approximation to spectral graph convolutions.
- **SGC** (Wu et al., 2019): SGC reduces the complexity of GCNs by removing nonlinearities and collapsing the weight matrices between consecutive layers, thus serving as a more efficient yet powerful GNN baseline.
- **GAT** (Veličković et al., 2018): At each layer, GAT generates an attention coefficient matrix that is element-wise multiplied with the graph adjacency matrix according to a node-feature-based attention mechanism, so that node features are propagated via their relative importance.
- **JKNet** (Xu et al., 2018): JKNet offers the capability of adaptively exploiting diverse neighbourhood ranges, facilitating enhanced structure-aware representations for individual nodes.
- **APPNP** (Gasteiger et al., 2019): APPNP leverages personalized PageRank to disentangle the neural network from the propagation scheme.
- **GPRGNN** (Chien et al., 2021): GPRGNN dynamically learns Generalized PageRank (GPR) weights to optimize the extraction of node features and topological information from a graph, irrespective of the level of homophily present.
- **p-GNN** (Fu et al., 2022): p-GNN is a p-Laplacian based graph neural network model that incorporates a message-passing mechanism derived from a discrete regularization framework. To make a fair comparison, we test p-GNN with different quantities of p.
- **UFG** (Zheng et al., 2022b): UFG, a class of GNNs built upon framelet transforms, utilizes framelet decomposition to effectively merge graph features into low-pass and high-pass spectra.
- **pL-UFG** (Shao et al., 2023): pL-UFG employs a p-Laplacian based implicit layer to enhance the adaptability of multi-scale graph convolution networks (i.e., UFG) to filter-based domains, effectively improving the model's adaptation to both homophilic and heterophilic graphs. Furthermore, as two types of pL-UFG models are proposed in (Shao et al., 2023), we test both variants as baseline models. For more details, including the precise formulation of each model, please refer to (Shao et al., 2023).

Table 3: Test accuracy (%) on homophilic graphs; the top learning accuracy is highlighted in **bold** and the second best is underlined. OOM means out of memory.
| Method | Cora | CiteSeer | PubMed | Computers | Photos | CS | Physics | Arxiv |
|--------|------|----------|--------|-----------|--------|----|---------|-------|
| MLP | 66.04±1.11 | 68.99±0.48 | 82.03±0.24 | 71.89±5.36 | 86.11±1.35 | 93.50±0.24 | 94.56±0.11 | 55.50±0.78 |
| GCN | 84.72±0.38 | 75.04±1.46 | 83.19±0.13 | 78.82±1.87 | 90.00±1.49 | 93.00±0.12 | 95.55±0.09 | 70.07±0.79 |
| SGC | 83.79±0.37 | 73.52±0.89 | 75.92±0.26 | 77.56±0.88 | 86.44±0.35 | 92.18±0.22 | 94.99±0.13 | 71.01±0.30 |
| GAT | 84.37±1.13 | 74.80±1.00 | 83.92±0.28 | 78.68±2.09 | 89.63±1.75 | 92.57±0.14 | 95.13±0.15 | OOM |
| JKNet | 83.69±0.71 | 74.49±0.74 | 82.59±0.54 | 69.32±3.94 | 86.12±1.12 | 91.11±0.22 | 94.45±0.33 | OOM |
| APPNP | 83.69±0.71 | 75.84±0.64 | 80.42±0.29 | 73.73±2.49 | 87.03±0.95 | 91.52±0.14 | 94.71±0.11 | OOM |
| GPRGNN | 83.79±0.93 | 75.94±0.65 | 82.32±0.25 | 74.26±2.94 | 88.69±1.32 | 91.89±0.08 | 94.85±0.23 | OOM |
| UFG | 80.64±0.74 | 73.30±0.19 | 81.52±0.80 | 66.39±6.09 | 86.60±4.69 | 95.27±0.04 | 95.77±0.04 | 71.08±0.49 |
| p-GNN (p=1.0) | 84.21±0.91 | 75.38±0.82 | 84.34±0.33 | 81.22±2.62 | 87.64±5.05 | 94.88±0.12 | 96.15±0.12 | OOM |
| p-GNN (p=1.5) | 84.42±0.71 | 75.44±0.98 | 84.48±0.21 | 82.68±1.15 | 91.83±0.77 | 94.13±0.08 | 96.14±0.08 | OOM |
| p-GNN (p=2.0) | 84.74±0.67 | 75.62±1.07 | 84.25±0.35 | 83.40±0.68 | 91.71±0.93 | 94.28±0.10 | 96.03±0.07 | OOM |
| p-GNN (p=2.5) | 84.48±0.77 | 75.22±0.73 | 83.94±0.47 | 82.91±1.34 | 91.41±0.66 | 93.40±0.07 | 95.75±0.05 | OOM |
| pL-UFG1 (p=1.0) | 84.54±0.62 | 75.88±0.60 | 85.56±0.18 | 82.07±2.78 | 85.57±19.92 | 95.03±0.22 | 96.19±0.06 | 70.28±9.13 |
| pL-UFG1 (p=1.5) | 84.96±0.38 | 76.04±0.85 | 85.59±0.18 | 85.04±1.06 | 92.92±0.37 | 95.03±0.22 | 96.27±0.06 | 71.25±8.37 |
| pL-UFG1 (p=2.0) | 85.20±0.42 | 76.12±0.82 | 85.59±0.17 | 85.26±1.15 | 92.65±0.65 | 94.77±0.27 | 96.04±0.07 | OOM |
| pL-UFG1 (p=2.5) | 85.30±0.60 | 76.11±0.82 | 85.54±0.18 | 85.18±0.88 | 91.49±1.29 | 94.86±0.14 | 95.96±0.11 | OOM |
| pL-UFG2 (p=1.0) | 84.42±0.32 | 74.79±0.62 | 85.45±0.18 | 84.88±0.84 | 85.30±19.50 | 95.03±0.19 | 96.06±0.11 | 71.01±7.28 |
| pL-UFG2 (p=1.5) | 85.60±0.36 | 75.61±0.60 | 85.59±0.18 | 84.55±1.57 | 93.00±0.61 | 95.03±0.19 | 96.14±0.09 | 71.21±6.19 |
| pL-UFG2 (p=2.0) | 85.20±0.42 | 76.12±0.82 | 85.59±0.17 | 85.27±1.15 | 92.50±0.40 | 94.77±0.27 | 96.05±0.07 | OOM |
| pL-UFG-LFD | 85.64±1.36 | 77.39*±1.59 | 85.08±1.33 | 85.36*±1.39 | 93.17*±1.30 | 96.13*±1.08 | 96.49*±1.04 | 71.96±1.25 |

Table 4: Test accuracy (%) on heterophilic graphs; the top learning accuracy is highlighted in **bold** and the second best is underlined.

| Method | Chameleon | Squirrel | Actor | Wisconsin | Texas | Cornell |
|--------|-----------|----------|-------|-----------|-------|---------|
| MLP | 48.82±1.43 | 34.30±1.13 | 41.66±0.83 | 93.45±2.09 | 71.25±12.99 | 83.33±4.55 |
| GCN | 33.71±2.27 | 26.19±1.34 | 33.46±1.42 | 67.90±8.16 | 53.44±11.23 | 55.68±10.57 |
| SGC | 33.83±1.69 | 26.89±0.98 | 32.08±2.22 | 59.56±11.19 | 64.38±7.53 | 43.18±16.41 |
| GAT | 41.95±2.65 | 25.66±1.72 | 33.64±3.45 | 60.65±11.08 | 50.63±28.36 | 34.09±29.15 |
| JKNet | 33.50±3.46 | 26.95±1.29 | 31.14±3.63 | 60.42±8.70 | 63.75±5.38 | 45.45±9.99 |
| APPNP | 34.61±3.15 | 32.61±0.93 | 39.11±1.11 | 82.41±2.17 | 80.00±5.38 | 60.98±13.44 |
| GPRGNN | 34.23±4.09 | 34.01±0.82 | 34.63±0.58 | 86.11±1.31 | 84.38±11.20 | 66.29±11.20 |
| UFG | 50.11±1.67 | 31.48±2.05 | 40.13±1.11 | 93.52±2.36 | 84.69±4.87 | 83.71±3.28 |
| p-GNN (p=1.0) | 49.04±1.16 | 34.79±1.01 | 40.91±1.41 | 94.35±2.16 | 82.00±11.31 | 82.73±6.92 |
| p-GNN (p=1.5) | 49.12±1.14 | 34.86±1.25 | 40.87±1.47 | 94.72±1.91 | 81.50±10.70 | 81.97±10.16 |
| p-GNN (p=2.0) | 49.34±1.15 | 34.97±1.41 | 40.83±1.81 | 94.44±1.75 | 84.38±11.52 | 81.06±10.18 |
| p-GNN (p=2.5) | 49.16±1.40 | 34.94±1.57 | 40.78±1.51 | 94.35±2.16 | 83.38±12.95 | 81.82±8.86 |
| pL-UFG1 (p=1.0) | 56.81±1.69 | 38.81±1.97 | 41.26±1.66 | 96.48±0.94 | 86.13±7.47 | 86.06±3.16 |
| pL-UFG1 (p=1.5) | 56.89±1.17 | 39.73±1.22 | 40.95±0.93 | 96.48±1.07 | 87.00±5.16 | 86.52±2.29 |
| pL-UFG1 (p=2.0) | 56.24±1.02 | 39.72±1.86 | 40.95±0.93 | 96.59±0.72 | 86.50±8.84 | 85.30±2.35 |
| pL-UFG1 (p=2.5) | 56.11±1.25 | 39.38±1.78 | 41.04±0.99 | 95.34±1.64 | 89.00±4.99 | 83.94±3.53 |
| pL-UFG2 (p=1.0) | 55.51±1.53 | 36.94±5.69 | 29.28±19.25 | 93.98±2.94 | 85.00±5.27 | 87.73±2.49 |
| pL-UFG2 (p=1.5) | 57.22±1.19 | 39.80±1.42 | 40.89±0.75 | 96.48±0.94 | 87.63±5.32 | 86.82±1.67 |
| pL-UFG2 (p=2.0) | 56.19±0.99 | 39.74±1.66 | 41.01±0.80 | 96.14±1.16 | 86.50±8.84 | 85.30±2.35 |
| pL-UFG2 (p=2.5) | 55.69±1.15 | 39.30±1.68 | 40.86±0.74 | 95.80±1.44 | 86.38±2.98 | 84.55±3.31 |
| pL-fUFG (p=1.0) | 55.80±1.93 | 38.43±1.26 | 32.84±16.54 | 93.98±3.47 | 86.25±6.89 | 87.27±2.27 |
| pL-fUFG (p=1.5) | 55.65±1.96 | 38.40±1.52 | 41.00±0.99 | 96.48±1.29 | 87.25±3.61 | 86.21±2.19 |
| pL-fUFG (p=2.0) | 55.95±1.29 | 38.33±1.71 | 41.25±0.84 | 96.25±1.25 | 88.75±4.97 | 83.94±3.78 |
| pL-fUFG (p=2.5) | 55.56±1.66 | 38.39±1.48 | 40.55±0.50 | 95.28±2.24 | 88.50±7.37 | 83.64±3.88 |
| pL-UFG-HFD | 58.60*±1.74 | 39.63±2.01 | 44.63*±2.75 | 96.64±1.77 | 89.31±8.40 | 88.97*±3.36 |

**Discussion on the results, scalability and computational complexity.** From both Tables 3 and 4, it is clear that our proposed models (pL-UFG-LFD and pL-UFG-HFD) produce state-of-the-art learning accuracy compared to various baseline models. For the datasets on which pL-UFG-LFD and pL-UFG-HFD are not the best (i.e., PubMed and Squirrel), they still achieve nearly identical learning outcomes compared to the best pL-UFG results. This suggests that, even within pL-UFG with controlled framelet dynamics, by adjusting the values of $\mu$ and $p$ our proposed models are still able to generate state-of-the-art learning results while largely reducing the computational complexity compared to pL-UFG and UFG, directly verifying the observation in Section 3.3. In addition, due to the reduced computational cost, our dynamics-controlled models (pL-UFG-LFD and pL-UFG-HFD) show a strong capability of handling large-scale graph datasets, which is a challenging (scalability) issue for some GNNs, especially multi-scale graph convolutions such as framelets (Zheng et al., 2022b), without additional data pre-processing steps. Accordingly, one can check that pL-UFG-LFD outperforms all included baselines on the Arxiv dataset. Lastly, most of the improvements of our model's learning accuracy over the baselines are significant.

## 4.4 Limitations of the Proposed Models and Future Studies

First, we note that our analyses of convergence, energy dynamics, and the diffusion equivalence of the proposed model can be applied, or partially applied, to most existing GNNs.
Based on our theoretical claims about pL-UFG, although we assessed the model's properties from different perspectives, all theoretical results eventually reach the same conclusion (i.e., on the asymptotic behavior of pL-UFG). It would therefore be beneficial to deploy our analysis framework on other well-known GNNs; since the main purpose of this paper is to re-assess the properties of pL-UFG, we leave this to future work. In addition, to induce LFD/HFD in pL-UFG, we set the value of $\theta$ as a constant according to Proposition 3. However, due to the large variety of real-world graphs, it is challenging to determine the most suitable $\theta$ when it is fixed as a constant; the exploration of controlling the model's dynamics by selecting $\theta$ is thus still coarse. Moreover, based on Definition 1, the homophily index of a graph is a summary statistic over all nodes; even in a highly homophilic graph, some nodes have neighbours with different labels. The index is therefore only capable of presenting the global, rather than local, labelling information of the graph. Accordingly, assigning a constant $\theta$ to induce LFD/HFD might not equip pL-UFG with enough power to capture detailed labelling information, so another future research direction is to explore the design of $\theta$ via the local labelling information of the graph. Finally, we note that a further consequence of setting $\theta_{0,1}$ and $\theta_{1,1}$ as constants is that this setting narrows the model's parameter space, as one can check that the only learnable matrix left in the explicit part of pL-UFG (eq. (9)) is $\widehat{\mathbf{W}}$. The narrowed parameter space might push the solution of the model optimization away from the previously attainable one, causing potential increases in learning variance.

## 5 Concluding Remarks

In this work, we performed a theoretical analysis of pL-UFG. Specifically, we verified that by choosing suitable quantities of the model parameters ($\mu$ and $p$), the implicit propagation induced from the p-Laplacian is capable of amplifying or shrinking the Dirichlet energy of the node features produced by the framelet. Such manipulation of the energy results in a stronger energy dynamic of the graph framelet and therefore enhances the model's power of adaptation on both homophilic and heterophilic graphs. We further explicitly proved the convergence of pL-UFG, which, to the best of our knowledge, fills a knowledge gap, at least in the field of p-Laplacian based multi-scale GNNs. Moreover, we showed the equivalence between pL-UFG and non-linear graph diffusion, indicating that pL-UFG can be trained via various training schemes. Finally, it should be noted that, for simplicity of analysis, we made several assumptions and focused only on Haar-type frames; this suffices for the scope of this work. It would nevertheless be interesting to consider more complex energy dynamics by reasonably dropping some of the assumptions or by using other types of frames, and we leave this to future work.

## References

Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, and Dongmei Zhang. Make heterophily graphs better fit GNN: A graph rewiring approach. *arXiv:2209.08264*, 2022.

Cristian Bodnar, Francesco Di Giovanni, Benjamin Paul Chamberlain, Pietro Liò, and Michael M Bronstein. Neural Sheaf diffusion: A topological perspective on heterophily and oversmoothing in GNNs. *Advances in Neural Information Processing Systems*, 35, 2022.
Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: grids, groups, graphs, geodesics, and gauges. *arXiv:2104.13478*, 2021. Ben Chamberlain, James Rowbottom, Maria I Gorinova, Michael Bronstein, Stefan Webb, and Emanuele Rossi. GRAND: Graph neural diffusion. In Proceedings of the International Conference on Machine Learning, pp. 1407–1418. PMLR, 2021a. Benjamin Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, and Michael Bronstein. Beltrami flow and neural diffusion on graphs. *Advances in Neural Information Processing* Systems, 34:1594–1609, 2021b. Jialin Chen, Yuelin Wang, Cristian Bodnar, Rex Ying, Pietro Lio, and Yu Guang Wang. Dirichlet energy enhancement of graph neural networks by framelet augmentation. *arXiv:2311.05767*, 2023. Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Optimization-induced graph implicit nonlinear diffusion. In *Proceedings of the International Conference on Machine Learning*, pp. 3648–3661. PMLR, 2022. Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In *Proceedings of the International Conference on Learning Representations*, 2021. Fan RK Chung. *Spectral graph theory*, volume 92. American Mathematical Soc., 1997. Francesco Di Giovanni, James Rowbottom, Benjamin Paul Chamberlain, Thomas Markovich, and Michael M Bronstein. Understanding convolution on graphs via energies. *Transactions on Machine Learning Research*, 2023. Bin Dong. Sparse representation on graphs by tight wavelet frames and applications. *Applied and Computational Harmonic Analysis*, 42(3):452–479, 2017. doi: 10.1016/j.acha.2015.09.005. Pavel Drábek and Stanislav I Pohozaev. Positive solutions for the p-Laplacian: application of the fibrering method. *Proceedings of the Royal Society of Edinburgh Section A: Mathematics*, 127(4):703–726, 1997. Guoji Fu, Peilin Zhao, and Yatao Bian. p-Laplacian based graph neural networks. In Proceedings of the International Conference on Machine Learning, pp. 6878–6917. PMLR, 2022. JP García Azorero and I Peral Alonso. Existence and nonuniqueness for the p-Laplacian. Communications in Partial Differential Equations, 12(12):126–202, 1987. Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In Proceedings of the International Conference on Learning Representations, 2019. Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017. Andi Han, Dai Shi, Zhiqi Shao, and Junbin Gao. Generalized energy and gradient flow via graph framelets. arXiv:2210.04124, 2022. Andi Han, Dai Shi, Lequan Lin, and Junbin Gao. From continuous dynamics to graph neural networks: Neural diffusion and beyond. *arXiv:2310.10121*, 2023. Bernd Kawohl and Jiri Horak. On the geometry of the p-Laplacian operator. Discrete and Continuous Dynamical Systems - S, 10(4):799–813, 2017. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *Proceedings of the International Conference on Learning Representations*, 2016. Remigijus Paulavičius and Julius Žilinskas. Analysis of different norms and corresponding lipschitz constants for global optimization. *Technological and Economic Development of Economy*, 12(4):301–306, 2006. Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. 
Geom-GCN: Geometric graph convolutional networks. In *Proceedings of the International Conference on Learning Representations*, 2019. Zhiqi Shao, Dai Shi, Andi Han, Andrey Vasnev, Yi Guo, and Junbin Gao. Enhancing framelet GCNs with generalized p-Laplacian regularization. *International Journal of Machine Learning and Cybernetics*, pp. 1–21, 2023. URL https://link.springer.com/article/10.1007/s13042-023-01982-8. Dai Shi, Yi Guo, Zhiqi Shao, and Junbin Gao. How curvature enhance the adaptation power of framelet GCNs. *arXiv:2307.09768*, 2023a. Dai Shi, Zhiqi Shao, Yi Guo, and Junbin Gao. Frameless graph knowledge distillation. *arXiv:2307.06631*, 2023b. Dai Shi, Andi Han, Lequan Lin, Yi Guo, Zhiyong Wang, and Junbin Gao. Design your own universe: A physics-informed agnostic method for enhancing graph neural networks. *arXiv:2401.14580*, 2024. Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, and Bao Wang. GRAND++: Graph neural diffusion with a source term. In Proceedings of the International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=EMxu-dzvJk. César Torres. Boundary value problem with fractional p-Laplacian operator. *Advances in Nonlinear Analysis*, 5(2):133–146, 2016. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In *Proceedings of the International Conference on Learning Representations*, 2018. Yifei Wang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Dissecting the diffusion process in linear graph convolutional networks. *Advances in Neural Information Processing Systems*, 34:5758–5769, 2021. Quanmin Wei, Jinyan Wang, Jun Hu, Xianxian Li, and Tong Yi. OGT: Optimize graph then training GNNs for node classification. *Neural Computing and Applications*, 34(24):22209–22222, 2022. URL https://doi.org/10.1007/s00521-022-07677-5. Felix Wu, Tianyi Zhang, Amauri Holanda de Souza, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. Simplifying graph convolutional networks. In Proceedings of the International Conference on Machine Learning, pp. 6861–6871. PMLR, 2019. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 32(1): 4–24, 2020. Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In *Proceedings of the International* Conference on Machine Learning, 2018. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In Proceedings of the International Conference on Learning Representations, 2019. URL https://openreview. net/forum?id=ryGs6iA5Km. Mengxi Yang, Xuebin Zheng, Jie Yin, and Junbin Gao. Quasi-framelets: Another improvement to graph neural networks. *arXiv:2201.04728*, 2022. Xin Zheng, Yixin Liu, Shirui Pan, Miao Zhang, Di Jin, and Philip S Yu. Graph neural networks for graphs with heterophily: A survey. *arXiv:2202.07082*, 2022a. Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yuguang Wang, Pietro Lió, Ming Li, and Guido Montufar. How framelets enhance graph neural networks. In *Proceedings of the International Conference on Machine* Learning, pp. 12761–12771. PMLR, 2021. Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, and Xiaosheng Zhuang. Decimated framelet system on graphs and fast g-framelet transforms. 
*Journal of Machine Learning Research*, 23(18), 2022b.

Bingxin Zhou, Ruikun Li, Xuebin Zheng, Yu Guang Wang, and Junbin Gao. Graph denoising with framelet regularizer. *arXiv:2111.03264*, 2021.

Dengyong Zhou and Bernhard Schölkopf. Regularization on discrete spaces. In *Joint Pattern Recognition Symposium*, pp. 361–368. Springer, 2005.

Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. *Advances in Neural Information Processing Systems*, 33:7793–7804, 2020.

Meiqi Zhu, Xiao Wang, Chuan Shi, Houye Ji, and Peng Cui. Interpreting and unifying graph neural networks with an optimization framework. In *Proceedings of the Web Conference 2021*, pp. 1215–1226, 2021.

## A Appendix

## A.1 Detailed Formulations

In this section, we provide detailed formulations as preliminaries for the graph p-Laplacian operator shown in Definition 3.

## A.1.1 About the p-Laplace Operator

Definition 5 (The p-Laplace Operator (Drábek & Pohozaev, 1997)). *Let* $\Omega\subset\mathbb{R}^{d}$ *be a domain and* $u$ *a function defined on* $\Omega$*. The p-Laplace operator over functions is defined as* $\Delta_{p}u:=\nabla\cdot(\|\nabla u\|^{p-2}\nabla u)$*, where* $\nabla$ *is the gradient operator,* $\|\cdot\|$ *is the Euclidean norm, and* $p$ *is a scalar satisfying* $1<p<+\infty$.

The p-Laplace operator is known as a quasi-linear elliptic partial differential operator. There is a line of research on the properties of the p-Laplacian regarding the uniqueness and existence of solutions (García Azorero & Peral Alonso, 1987), its geometrical properties (Kawohl & Horak, 2017), and boundary conditions of the so-called p-Laplacian equation (Torres, 2016). The concept of the p-Laplace operator can be extended to discrete domains such as graphs (nodes) via the notions of the so-called graph gradient and divergence; see below. A recent work (Fu et al., 2022) considers assigning an adjustable p-Laplacian regularizer to the (discrete) graph regularization problem that is conventionally treated as a way of producing GNN outcomes (i.e., Laplacian smoothing) (Zhou & Schölkopf, 2005). Given that the classic graph Laplacian regularizer measures the graph signal energy along edges under the $L_2$ metric, it is beneficial if the GNN training process can instead be regularized under an $L_p$ metric in order to adapt to different graph inputs. Following these pioneering works, Shao et al. (2023) further integrated the graph framelet and a generalized p-Laplacian regularizer to develop the so-called generalized p-Laplacian regularized framelet model, which involves a regularization problem over the energy quadratic form induced from the graph p-Laplacian. To show this, we start by defining the graph gradient. We first introduce the following notation: given a graph $\mathcal{G}=(\mathcal{V},\mathcal{E},W)$, let $\mathcal{F}_{\mathcal{V}}:=\{\mathbf{F}\,|\,\mathbf{F}:\mathcal{V}\to\mathbb{R}^{d}\}$ be the space of vector-valued functions defined on $\mathcal{V}$, and let $\mathcal{F}_{\mathcal{E}}:=\{\mathbf{g}\,|\,\mathbf{g}:\mathcal{E}\to\mathbb{R}^{d}\}$ be the vector-valued function space on the edges.

Definition 6 (Graph Gradient (Zhou & Schölkopf, 2005)). *For a given function* $\mathbf{F}\in\mathcal{F}_{\mathcal{V}}$*, its graph gradient is an operator* $\nabla_{W}:\mathcal{F}_{\mathcal{V}}\to\mathcal{F}_{\mathcal{E}}$ *defined, for all* $(v_i,v_j)\in\mathcal{E}$*, by*

$$(\nabla_{W}\mathbf{F})([i,j]):=\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}_{j}-\sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}_{i},\tag{30}$$

*where* $\mathbf{f}_{i}$ *and* $\mathbf{f}_{j}$ *are the signal vectors on nodes* $v_i$ *and* $v_j$*, i.e., the rows of* $\mathbf{F}$. For simplicity, we write $\nabla\mathbf{F}$ for the graph gradient $\nabla_{W}\mathbf{F}$.
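As a quick numerical sanity check of these definitions, the following sketch verifies the adjoint relation $\langle\nabla\mathbf{F},\mathbf{g}\rangle=\langle\mathbf{F},-\mathrm{div}(\mathbf{g})\rangle$ that defines the graph divergence in Definition 7 below; the dense $(N\times N\times c)$ representation of edge functions is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# random small weighted graph (symmetric, no self-loops)
N, c = 5, 3
W = rng.random((N, N)); W = np.triu(W, 1); W = W + W.T
d = W.sum(axis=1)

def grad(F):
    """(grad_W F)([i, j]) of eq. (30), stored as an (N x N x c) array."""
    s = np.sqrt(W)
    return s[:, :, None] * (F[None, :, :] / np.sqrt(d)[None, :, None]
                            - F[:, None, :] / np.sqrt(d)[:, None, None])

def div(g):
    """Graph divergence of eq. (32): div(g)(i) = sum_j sqrt(w_ij/d_ii)(g[i,j]-g[j,i])."""
    s = np.sqrt(W / d[:, None])
    return (s[:, :, None] * (g - g.transpose(1, 0, 2))).sum(axis=1)

# numerical check of the adjoint identity <grad F, g> = <F, -div(g)>, eq. (31)
F = rng.standard_normal((N, c))
g = rng.standard_normal((N, N, c))
lhs = np.sum(grad(F) * g)
rhs = np.sum(F * (-div(g)))
print(np.allclose(lhs, rhs))  # True
```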
The definition of the (discrete) graph gradient is analogized from the notion of the gradient in continuous space. Similarly, we can further define the so-called graph divergence:

Definition 7 (Graph Divergence (Zhou & Schölkopf, 2005)). *The graph divergence is an operator* $\mathrm{div}:\mathcal{F}_{\mathcal{E}}\to\mathcal{F}_{\mathcal{V}}$ *defined as follows: for a given function* $\mathbf{g}\in\mathcal{F}_{\mathcal{E}}$*, the resulting* $\mathrm{div}(\mathbf{g})\in\mathcal{F}_{\mathcal{V}}$ *satisfies, for all functions* $\mathbf{F}\in\mathcal{F}_{\mathcal{V}}$*,*

$$\langle\nabla\mathbf{F},\mathbf{g}\rangle=\langle\mathbf{F},-\mathrm{div}(\mathbf{g})\rangle.\tag{31}$$

It is easy to check that the graph divergence can be computed by:

$$\mathrm{div}(\mathbf{g})(i)=\sum_{j=1}^{N}\sqrt{\frac{w_{i,j}}{d_{i,i}}}\,(\mathbf{g}[i,j]-\mathbf{g}[j,i]).\tag{32}$$

With the formulations of the graph gradient and divergence, we can define the graph p-Laplacian operator and the corresponding p-Dirichlet form (Zhou & Schölkopf, 2005; Fu et al., 2022), as illustrated in Definition 3.

## A.1.2 About LFD and HFD Dynamics

We provide detailed formulations of the so-called low-frequency dominance (LFD) and high-frequency dominance (HFD) of the graph framelet. In (Di Giovanni et al., 2023), the propagation of GNNs is considered as the gradient flow of the Dirichlet energy, which can be formulated as:

$$\mathbf{E}(\mathbf{F})=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left\|\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}_{j}-\sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}_{i}\right\|^{2},\tag{33}$$

and, by changing the power from 2 to p, we recover the p-Dirichlet form presented in eq. (12). The gradient flow of the Dirichlet energy yields the so-called graph heat equation (Chung, 1997), $\dot{\mathbf{F}}^{(k)}=-\nabla\mathbf{E}(\mathbf{F}^{(k)})=-\widetilde{\mathbf{L}}\mathbf{F}^{(k)}$, whose Euler discretization leads to the propagation of linear GCN models (Wu et al., 2019; Wang et al., 2021). Following this observation, the works (Han et al., 2022; Di Giovanni et al., 2023; Shi et al., 2023a) also show that, even with the help of the non-linear activation function and the weight matrix of the classic GCN (eq. (1)), the process is still dominated by the low frequencies (small Laplacian eigenvalues) of the graph, hence eventually converging to the kernel of $\widetilde{\mathbf{L}}$ for almost every initialization. To quantify such behavior, Di Giovanni et al. (2023) and Han et al. (2022) considered a general dynamic $\dot{\mathbf{F}}^{(k)}=\mathrm{GNN}_{\theta}(\mathbf{F}^{(k)},k)$, with $\mathrm{GNN}_{\theta}(\cdot)$ an arbitrary graph neural network function, and characterized its behavior by low/high-frequency dominance (L/HFD).

Definition 8 ((Di Giovanni et al., 2023)). $\dot{\mathbf{F}}^{(k)}=\mathrm{GNN}_{\theta}(\mathbf{F}^{(k)},k)$ *is Low-Frequency-Dominant (LFD) if* $\mathbf{E}(\mathbf{F}^{(k)})/\|\mathbf{F}^{(k)}\|^{2}\to0$ *as* $k\to\infty$*, and is High-Frequency-Dominant (HFD) if* $\mathbf{E}(\mathbf{F}^{(k)})/\|\mathbf{F}^{(k)}\|^{2}\to\rho_{\widetilde{L}}/2$ *as* $k\to\infty$.

Lemma 3 ((Di Giovanni et al., 2023)). *A GNN model is LFD (resp. HFD) if and only if for each sequence* $k_{j}\to\infty$ *there exist a subsequence* $k_{j_l}\to\infty$ *and* $\mathbf{F}_{\infty}$ *such that* $\mathbf{F}^{(k_{j_l})}/\|\mathbf{F}^{(k_{j_l})}\|\to\mathbf{F}_{\infty}$ *and* $\widetilde{\mathbf{L}}\mathbf{F}_{\infty}=0$ *(resp.* $\widetilde{\mathbf{L}}\mathbf{F}_{\infty}=\rho_{\widetilde{L}}\mathbf{F}_{\infty}$*).*

To employ similar notions of LFD/HFD, Han et al. (2022) developed a framelet Dirichlet energy and analyzed the energy behavior of both the spectral (eq. (9)) and spatial (eq. (10)) framelet convolutions. Specifically,

$$\mathbf{E}_{r,\ell}^{F_r}(\mathbf{F})=\frac{1}{2}\mathrm{Tr}\big((\mathcal{W}_{r,\ell}\mathbf{F})^{\top}\mathcal{W}_{r,\ell}\mathbf{F}\,\Omega_{r,\ell}\big)-\frac{1}{2}\mathrm{Tr}\big((\mathcal{W}_{r,\ell}\mathbf{F})^{\top}\mathrm{diag}(\theta_{r,\ell})\mathcal{W}_{r,\ell}\mathbf{F}\,\widehat{\mathbf{W}}\big)\quad\text{for all }(r,\ell)\in\mathcal{I}.$$
The generated framelet energy is given by:

$$\mathbf{E}^{F_{r}}(\mathbf{F})=\sum_{(r,\ell)\in{\cal I}}\mathbf{E}_{r,\ell}^{F_{r}}(\mathbf{F})=\frac{1}{2}\sum_{(r,\ell)\in{\cal I}}\left\langle\operatorname{vec}(\mathbf{F}),\left(\mathbf{\Omega}_{r,\ell}\otimes\mathcal{W}_{r,\ell}^{\top}\mathcal{W}_{r,\ell}-\widehat{\mathbf{W}}\otimes\mathcal{W}_{r,\ell}^{\top}\operatorname{diag}(\theta)_{r,\ell}\mathcal{W}_{r,\ell}\right)\operatorname{vec}(\mathbf{F})\right\rangle,\tag{34}$$

where the superscript $F_r$ stands for the framelet convolution. This definition is based on the fact that the total Dirichlet energy is conserved under framelet decomposition (Han et al., 2022; Di Giovanni et al., 2023). By analyzing the gradient flow of the framelet energy5 defined above, Han et al. (2022) concluded the energy dynamic of the framelet as:

Proposition 3 ((Han et al., 2022)). *The spectral graph framelet convolution eq. (9) with Haar-type filter (i.e., $R = 1$ in the case of the scaling function set) can induce both LFD and HFD dynamics. Specifically, let $\theta_{0,\ell} = \mathbf{1}_N$ and $\theta_{r,\ell} = \theta\mathbf{1}_N$ for $r = 1, \ldots, L$, $\ell = 1, \ldots, J$, where $\mathbf{1}_N$ is a size-$N$ vector of all 1s. When $\theta \in [0, 1)$, the spectral framelet convolution is LFD, and when $\theta > 1$, the spectral framelet convolution is HFD.*

Remark 7. Proposition 3 suggests that the graph framelet can naturally induce both types of dynamics without further modification, and this supports our claim on the property of the graph framelet in Section 3.3.

5Similar to the requirement on our p-Laplacian based framelet energy $\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})$, to thoroughly verify that the framelet energy in eq. (34) is a type of energy, we shall further require $\nabla^2\mathbf{E}^{F_r}_{r,\ell}(\mathbf{F}) = \mathbf{\Omega}_{r,\ell}\otimes\mathcal{W}_{r,\ell}^{\top}\mathcal{W}_{r,\ell} - \widehat{\mathbf{W}}\otimes\mathcal{W}_{r,\ell}^{\top}\widetilde{\mathbf{L}}\mathcal{W}_{r,\ell}$ to be symmetric, which can be satisfied by requiring both $\mathbf{\Omega}$ and $\widehat{\mathbf{W}}$ to be symmetric.

## A.2 Formal Proofs

## A.2.1 Detailed Proof Of Proposition 1

Theorem 2 (Repeat of Theorem 1). *Given a graph $G(V, E, W)$ with node features $\mathbf{X}$, if $\alpha^{(k)}$, $\beta^{(k)}$, $\mathbf{M}^{(k)}$ and $\mathbf{F}^{(k)}$ are updated according to eq. (18), then there exists some real positive value $\mu$, which depends on the input graph $(G, \mathbf{X})$ and the quantity of $p$, updated in each iteration, such that:*

$${\mathcal{L}}_{p}^{\phi}(\mathbf{F}^{(k+1)})\leq{\mathcal{L}}_{p}^{\phi}(\mathbf{F}^{(k)}),$$

*where ${\mathcal{L}}_{p}^{\phi}(\mathbf{F}) := S_{p}^{\phi}(\mathbf{F}) + \mu\|\mathbf{F} - \mathbf{Y}\|_{F}^{2}$.*

Proof. First, write:

$$M_{i,j}^{(k)}=\frac{w_{i,j}}{2}\left\|\nabla_{W}\mathbf{F}^{(k)}([i,j])\right\|^{p-2}\cdot\left[\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}^{(k)}(v_{i})\|_{p})}{\|\nabla_{W}\mathbf{F}^{(k)}(v_{i})\|_{p}^{p-1}}+\frac{\phi^{\prime}(\|\nabla_{W}\mathbf{F}^{(k)}(v_{j})\|_{p})}{\|\nabla_{W}\mathbf{F}^{(k)}(v_{j})\|_{p}^{p-1}}\right].\tag{35}$$

The derivative of the regularization problem defined in eq. (15) is:

$$\begin{aligned}
\frac{\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F})}{\partial\mathbf{F}_{i,:}}\bigg|_{\mathbf{F}^{(k)}} &= 2\mu(\mathbf{F}_{i,:}^{(k)}-\mathbf{Y}_{i,:})+\sum_{v_{j}\sim v_{i}}M_{ij}^{(k)}\frac{1}{\sqrt{d_{ii}w_{ij}}}\nabla_{W}\mathbf{F}^{(k)}([j,i])\\
&= 2\mu(\mathbf{F}_{i,:}^{(k)}-\mathbf{Y}_{i,:})+\sum_{v_{j}\sim v_{i}}M_{ij}^{(k)}\left(\frac{1}{d_{ii}}\mathbf{F}_{i,:}^{(k)}-\frac{1}{\sqrt{d_{ii}d_{jj}}}\mathbf{F}_{j,:}^{(k)}\right)\\
&= \Big(2\mu+\sum_{v_{j}\sim v_{i}}M_{ij}^{(k)}/d_{ii}\Big)\mathbf{F}_{i,:}^{(k)}-2\mu\mathbf{Y}_{i,:}-\sum_{v_{j}\sim v_{i}}\frac{M_{ij}^{(k)}}{\sqrt{d_{ii}d_{jj}}}\mathbf{F}_{j,:}^{(k)}\\
&= \frac{1}{\alpha_{ii}^{(k)}}\mathbf{F}_{i,:}^{(k)}-\frac{1}{\alpha_{ii}^{(k)}}\Big(\beta_{ii}^{(k)}\mathbf{Y}_{i,:}+\alpha_{ii}^{(k)}\sum_{v_{j}\sim v_{i}}\frac{M_{ij}^{(k)}}{\sqrt{d_{ii}d_{jj}}}\mathbf{F}_{j,:}^{(k)}\Big).\tag{36}
\end{aligned}$$

Thus, according to the update rule of $\mathbf{F}^{(k+1)}$ in eq.
(18), we have

$$\left.\frac{\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F})}{\partial\mathbf{F}_{i,:}}\right|_{\mathbf{F}^{(k)}}=\frac{\mathbf{F}_{i,:}^{(k)}-\mathbf{F}_{i,:}^{(k+1)}}{\alpha_{ii}^{(k)}}.\tag{37}$$

For our purposes, we denote the partial derivative of the objective function at $\mathbf{F}^{(*)}$ with respect to the node feature $\mathbf{F}_{i,:}$ as

$$\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(*)}):=\left.\frac{\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F})}{\partial\mathbf{F}_{i,:}}\right|_{\mathbf{F}^{(*)}}.\tag{38}$$

For all $i, j \in [N]$, let $\mathbf{v} \in \mathbb{R}^{1\times c}$ be a disturbance acting on node $i$. Define the following:

$$\begin{aligned}
N_{i,j}^{(k)} &= W_{i,j}\left\|\sqrt{\frac{W_{i,j}}{D_{i,i}}}\mathbf{F}_{i,:}^{(k)}-\sqrt{\frac{W_{i,j}}{D_{j,j}}}\mathbf{F}_{j,:}^{(k)}\right\|^{p-2},\qquad
N_{i,j}^{\prime(k)} = W_{i,j}\left\|\sqrt{\frac{W_{i,j}}{D_{i,i}}}(\mathbf{F}_{i,:}^{(k)}+\mathbf{v})-\sqrt{\frac{W_{i,j}}{D_{j,j}}}\mathbf{F}_{j,:}^{(k)}\right\|^{p-2},\\
M_{i,j}^{(k)} &= N_{ij}^{(k)}\zeta_{i,j}(\mathbf{F}^{(k)}),\qquad M_{i,j}^{\prime(k)} = N_{ij}^{\prime(k)}\zeta_{i,j}(\mathbf{F}^{(k)}+\mathbf{v}),\tag{39}\\
\alpha_{ii}^{\prime(k)} &= 1\Big/\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{\prime(k)}}{D_{i,i}}+2\mu\Big),\qquad \beta_{ii}^{\prime(k)} = 2\mu\alpha_{ii}^{\prime(k)},\\
\mathbf{F}_{i,:}^{\prime(k+1)} &= \alpha_{i,i}^{\prime(k)}\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{\prime(k)}}{\sqrt{D_{i,i}D_{j,j}}}\mathbf{F}_{j,:}^{(k)}+\beta_{ii}^{\prime(k)}\mathbf{Y}_{i,:},
\end{aligned}$$

where $\zeta_{ij}(\mathbf{F})$ is defined as in eq. (19) and $\mathbf{F}^{(k)}+\mathbf{v}$ means that $\mathbf{v}$ applies only to the $i$-th row of $\mathbf{F}^{(k)}$6. Similar to eq. (37), we compute

$$\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}+\mathbf{v})=\frac{1}{\alpha_{i,i}^{\prime(k)}}\left((\mathbf{F}_{i,:}^{(k)}+\mathbf{v})-\mathbf{F}_{i,:}^{\prime(k+1)}\right).\tag{40}$$

Hence, from both eq. (37) and eq. (40) we have

$$\begin{aligned}
\left\|\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}+\mathbf{v})-\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\right\|
&=\left\|\frac{1}{\alpha_{i,i}^{\prime(k)}}\big((\mathbf{F}_{i,:}^{(k)}+\mathbf{v})-\mathbf{F}_{i,:}^{\prime(k+1)}\big)-\frac{1}{\alpha_{i,i}^{(k)}}\big(\mathbf{F}_{i,:}^{(k)}-\mathbf{F}_{i,:}^{(k+1)}\big)\right\|\\
&\leq\frac{1}{\alpha_{i,i}^{\prime(k)}}\|\mathbf{v}\|+\left\|\frac{1}{\alpha_{i,i}^{\prime(k)}}\big(\mathbf{F}_{i,:}^{(k)}-\mathbf{F}_{i,:}^{\prime(k+1)}\big)-\frac{1}{\alpha_{i,i}^{(k)}}\big(\mathbf{F}_{i,:}^{(k)}-\mathbf{F}_{i,:}^{(k+1)}\big)\right\|\\
&=\frac{1}{\alpha_{i,i}^{\prime(k)}}\|\mathbf{v}\|+\left\|\Big(\frac{1}{\alpha_{i,i}^{\prime(k)}}-\frac{1}{\alpha_{i,i}^{(k)}}\Big)\mathbf{F}_{i,:}^{(k)}-\frac{1}{\alpha_{i,i}^{\prime(k)}}\mathbf{F}_{i,:}^{\prime(k+1)}+\frac{1}{\alpha_{i,i}^{(k)}}\mathbf{F}_{i,:}^{(k+1)}\right\|\\
&=\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(k)}}{D_{i,i}}+2\mu\Big)\|\mathbf{v}\|+\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{D_{i,i}}\Big)\|\mathbf{v}\|\\
&\qquad+\left\|\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{D_{i,i}}\Big)\mathbf{F}_{i,:}^{(k)}-\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{\sqrt{D_{i,i}D_{j,j}}}\mathbf{F}_{j,:}^{(k)}\right\|.
\end{aligned}$$

Note that in eq. (39), $\|\cdot\|^{p-2}$ is the matrix L2 norm raised to the power $p-2$, that is, $\|\mathbf{X}\|^{p-2}=\big(\sum_{i,j}x_{i,j}^{2}\big)^{\frac{p-2}{2}}$. It is known that the matrix L2 norm is Lipschitz as a function (Paulavičius & Žilinskas, 2006), and so is its $(p-2)$-th power. Furthermore, it is easy to verify that $\|\mathbf{N}^{\prime}-\mathbf{N}\|\leq c\|\mathbf{v}\|$ due to the properties of $\mathbf{N}$ and $\mathbf{N}^{\prime}$. Hence, according to Lemma 1, the following holds:

$$|M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}|\leq C|N_{i,j}^{\prime(k)}-N_{i,j}^{(k)}|\leq C^{\prime}\|\mathbf{v}\|.$$

6With slight abuse of notation, we denote by $\mathbf{N}^{\prime}$ the matrix after assigning the disturbance $\mathbf{v}$ to the matrix $\mathbf{N}$.

Combining all the above, we have

$$\left\|\partial\mathcal{L}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}+\mathbf{v})-\partial\mathcal{L}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\right\|\leq\left(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(k)}}{D_{i,i}}+2\mu+o(\mathcal{G},\mathbf{v},\mathbf{X},p)\right)\|\mathbf{v}\|,\tag{41}$$

where $o(\mathcal{G},\mathbf{v},\mathbf{X},p)$ is bounded. It is worth noting that the quantity $o(\mathcal{G},\mathbf{v},\mathbf{X},p)\|\mathbf{v}\|$ is bounded by

$$\sum_{v_{j}\sim v_{i}}\left(\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{D_{i,i}}\right)\|\mathbf{v}\|+\left\|\sum_{v_{j}\sim v_{i}}\left(\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{D_{i,i}}\right)\mathbf{F}_{i,:}^{(k)}-\sum_{v_{j}\sim v_{i}}\left(\frac{M_{i,j}^{\prime(k)}-M_{i,j}^{(k)}}{\sqrt{D_{i,i}D_{j,j}}}\right)\mathbf{F}_{j,:}^{(k)}\right\|.$$

Let $\overline{o} = o(\mathcal{G},\mathbf{v},\mathbf{X},p)$, $\gamma = (\gamma_1,\ldots,\gamma_N)^{\top}$, and $\eta \in \mathbb{R}^{N\times c}$.
By the Taylor expansion theorem, we have, for all $i$:

$$\begin{aligned}
{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}+\gamma_{i}\eta_{i,:}) &= {\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})+\gamma_{i}\int_{0}^{1}\langle\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}+\epsilon\gamma_{i}\eta_{i,:}),\eta_{i,:}\rangle\,\mathrm{d}\epsilon\\
&= {\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})+\gamma_{i}\langle\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}),\eta_{i,:}\rangle+\gamma_{i}\int_{0}^{1}\big\langle\partial{\mathcal{L}}_{p}^{\phi}\big(\mathbf{F}_{i,:}^{(k)}+\epsilon\gamma_{i}\eta_{i,:}\big)-\partial{\mathcal{L}}_{p}^{\phi}\big(\mathbf{F}_{i,:}^{(k)}\big),\eta_{i,:}\big\rangle\,\mathrm{d}\epsilon\\
&\leq {\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})+\gamma_{i}\langle\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}),\eta_{i,:}\rangle+\gamma_{i}\int_{0}^{1}\big\|\partial{\mathcal{L}}_{p}^{\phi}\big(\mathbf{F}_{i,:}^{(k)}+\epsilon\gamma_{i}\eta_{i,:}\big)-\partial{\mathcal{L}}_{p}^{\phi}\big(\mathbf{F}_{i,:}^{(k)}\big)\big\|\,\|\eta_{i,:}\|\,\mathrm{d}\epsilon\\
&\leq {\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})+\gamma_{i}\langle\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}),\eta_{i,:}\rangle+\frac{1}{2}\Big(\frac{1}{\alpha_{i,i}^{(k)}}+\overline{o}\Big)\gamma_{i}^{2}\|\eta_{i,:}\|^{2},
\end{aligned}$$

where the last inequality comes from eq. (41). Taking $\gamma_{i}=\alpha_{ii}^{(k)}$ and $\eta_{i,:}=-\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})$ in the above inequality gives

$$\begin{aligned}
{\mathcal{L}}_{p}^{\phi}\big(\mathbf{F}_{i,:}^{(k)}-\alpha_{ii}^{(k)}\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\big)
&\leq{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})-\alpha_{ii}^{(k)}\big\langle\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}),\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\big\rangle+\frac{1}{2}\Big(\frac{1}{\alpha_{i,i}^{(k)}}+\overline{o}\Big)\alpha_{i,i}^{2(k)}\|\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\|^{2}\\
&={\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})-\frac{1}{2}\alpha_{i,i}^{(k)}\big(1-\alpha_{i,i}^{(k)}\overline{o}\big)\|\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\|^{2}.\tag{42}
\end{aligned}$$

Given that $\overline{o}$ is bounded, if we choose a large $\mu$, e.g., $2\mu>\overline{o}$, we will have

$$1-\alpha_{i,i}^{(k)}\overline{o}=1-\frac{\overline{o}}{\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(k)}}{D_{i,i}}+2\mu}>0.$$

Thus the second term in eq. (42) is positive. Hence we have

$${\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k+1)}):={\mathcal{L}}_{p}^{\phi}\left(\mathbf{F}_{i,:}^{(k)}-\alpha_{ii}^{(k)}\partial{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)})\right)\leq{\mathcal{L}}_{p}^{\phi}(\mathbf{F}_{i,:}^{(k)}).$$

This completes the proof. $\square$

## A.2.2 Detailed Proof Of Proposition 2

Proposition 4 (Repeat of Proposition 2). *Assume $G$ is connected, unweighted, and undirected. There exists a sufficiently large value of $\mu$ or a sufficiently small value of $p$ such that $\mathbf{E}^{PF}(\mathbf{F})$ stays above 0 at each iterative layer $k$ and increases with the increase of $\mu$ or the decrease of $p$.*

Proof. Starting from the definition of the generalized Dirichlet energy above, we can re-write $\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})$ as the following inner product involving $\operatorname{vec}(\mathbf{F}^{(k+1)})$ and $\operatorname{vec}(\mathbf{F}^{(0)})$, based on $\mathbf{M}$, $\alpha$, $\beta$ and the iterative scheme defined in eq. (18):

$$\begin{aligned}
\mathbf{E}^{PF}(\mathbf{F}^{(k+1)}) &= \frac{1}{2}\Big\langle\operatorname{vec}(\mathbf{F}^{(k+1)}),\,\big(\mathbf{I}_{c}\otimes\mathbf{I}_{N}-\mathbf{I}_{c}\otimes\alpha^{(k+1)}\mathbf{D}^{-1/2}\mathbf{M}^{(k+1)}\mathbf{D}^{-1/2}\big)\operatorname{vec}(\mathbf{F}^{(k+1)})+\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(k+1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})\Big\rangle\\
&= \frac{1}{2}\big\langle\operatorname{vec}(\mathbf{F}^{(k+1)}),\operatorname{vec}(\mathbf{F}^{(k+1)})\big\rangle-\frac{1}{2}\Big\langle\operatorname{vec}(\mathbf{F}^{(k+1)}),\,\big(\mathbf{I}_{c}\otimes\alpha^{(k+1)}\mathbf{D}^{-1/2}\mathbf{M}^{(k+1)}\mathbf{D}^{-1/2}\big)\operatorname{vec}(\mathbf{F}^{(k+1)})-\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(k+1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})\Big\rangle.\tag{43}
\end{aligned}$$

Based on the form of eq. (43), it is straightforward to see that, to let $\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})>0$ and further increase with the desired quantities of $\mu$ and $p$, it is sufficient to require7:

$$\mathbf{I}_{c}\otimes\big(\alpha^{(k+1)}\mathbf{D}^{-1/2}\mathbf{M}^{(k+1)}\mathbf{D}^{-1/2}\big)\operatorname{vec}(\mathbf{F}^{(k+1)})-\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(k+1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})<0.\tag{44}$$

To explicitly show how the quantities of $\mu$ and $p$ affect the term in eq. (44), we start with the case $k=0$. When $k=0$, eq. (44) becomes:

$$\begin{aligned}
&\mathbf{I}_{c}\otimes\big(\alpha^{(1)}\mathbf{D}^{-1/2}\mathbf{M}^{(1)}\mathbf{D}^{-1/2}\big)\operatorname{vec}(\mathbf{F}^{(1)})-\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})\\
&=\mathbf{I}_{c}\otimes\big(\alpha^{(1)}\mathbf{D}^{-1/2}\mathbf{M}^{(1)}\mathbf{D}^{-1/2}\big)\operatorname{vec}\big(\alpha^{(0)}\mathbf{D}^{-1/2}\mathbf{M}^{(0)}\mathbf{D}^{-1/2}\mathbf{F}^{(0)}+2\mu\alpha^{(0)}\mathbf{F}^{(0)}\big)-\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})\\
&=\mathbf{I}_{c}\otimes\big(\alpha^{(1)}\mathbf{D}^{-1/2}\mathbf{M}^{(1)}\mathbf{D}^{-1/2}\big)\Big(\mathbf{I}_{c}\otimes\big(\alpha^{(0)}\mathbf{D}^{-1/2}\mathbf{M}^{(0)}\mathbf{D}^{-1/2}+2\mu\alpha^{(0)}\big)\Big)\operatorname{vec}(\mathbf{F}^{(0)})-\big(\mathbf{I}_{c}\otimes2\mu\alpha^{(1)}\big)\operatorname{vec}(\mathbf{F}^{(0)})\\
&=\mathbf{I}_{c}\otimes\Big(\prod_{s=0}^{1}\alpha^{(s)}\mathbf{D}^{-1/2}\mathbf{M}^{(s)}\mathbf{D}^{-1/2}+\alpha^{(1)}\mathbf{D}^{-1/2}\mathbf{M}^{(1)}\mathbf{D}^{-1/2}\,2\mu\alpha^{(0)}-2\mu\alpha^{(1)}\Big)\operatorname{vec}(\mathbf{F}^{(0)}).\tag{45}
\end{aligned}$$

We note that, in eq. (45), the $(i,j)$-th element of $\prod_{s=0}^{1}\alpha^{(s)}\mathbf{D}^{-1/2}\mathbf{M}^{(s)}\mathbf{D}^{-1/2}+\alpha^{(1)}\mathbf{D}^{-1/2}\mathbf{M}^{(1)}\mathbf{D}^{-1/2}\,2\mu\alpha^{(0)}-2\mu\alpha^{(1)}$ can be computed as:

$$\begin{aligned}
&\prod_{s=0}^{1}\alpha_{i,i}^{(s)}d_{i,i}^{-1/2}M_{i,j}^{(s)}d_{j,j}^{-1/2}+\alpha_{i,i}^{(1)}d_{i,i}^{-1/2}M_{i,j}^{(1)}d_{j,j}^{-1/2}\,2\mu\alpha_{i,i}^{(0)}-2\mu\alpha_{i,i}^{(1)}\\
&=\prod_{s=0}^{1}\frac{\|\nabla_{W}\mathbf{F}^{(s)}([i,j])\|^{p-2}}{\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(s)}}{d_{i,i}}+2\mu\Big)\sqrt{d_{i,i}d_{j,j}}}+\frac{\|\nabla_{W}\mathbf{F}^{(1)}([i,j])\|^{p-2}}{\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(1)}}{d_{i,i}}+2\mu\Big)\sqrt{d_{i,i}d_{j,j}}}\cdot\frac{2\mu}{\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(0)}}{d_{i,i}}+2\mu}-\frac{2\mu}{\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(1)}}{d_{i,i}}+2\mu}.\tag{46}
\end{aligned}$$
7Strictly speaking, one shall further require all elements in $\mathbf{F}^{(k+1)}$ to be larger than or equal to 0. As this can be achieved by assigning a non-linear activation function (i.e., ReLU) to the framelet, we omit it here in our main analysis.

Now we see that, by assigning a sufficiently large value of $\mu$ or a small value of $p$, the terms of the form

$$\frac{\|\nabla_{W}\mathbf{F}^{(s)}([i,j])\|^{p-2}}{\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(s)}}{d_{i,i}}+2\mu\Big)\sqrt{d_{i,i}d_{j,j}}},\quad s=0,1,$$

in eq. (46) become smaller. Additionally, we have both

$$2\mu\Big/\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(0)}}{d_{i,i}}+2\mu\Big)\approx1\quad\text{and}\quad2\mu\Big/\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{i,j}^{(1)}}{d_{i,i}}+2\mu\Big)\approx1.$$

Therefore, the summation result of eq. (46) tends to be negative. Based on eq. (43), $\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})$ will stay above 0.

For the case $k\geq1$, by plugging in the iterative algorithm eq. (18), eq. (45) becomes:

$$\mathbf{I}_{c}\otimes\left(\prod_{s=0}^{k+1}\alpha^{(s)}\mathbf{D}^{-1/2}\mathbf{M}^{(s)}\mathbf{D}^{-1/2}+\sum_{s=0}^{k+1}\left(\prod_{l=k-s}^{k+1}\alpha^{(l)}\mathbf{D}^{-1/2}\mathbf{M}^{(l)}\mathbf{D}^{-1/2}\right)\left(2\mu\alpha^{(l-1)}\right)-2\mu\alpha^{(k+1)}\right)\operatorname{vec}(\mathbf{F}^{(0)}).$$

Applying the same reasoning as before, one can verify that both $\prod_{s=0}^{k+1}\alpha^{(s)}\mathbf{D}^{-1/2}\mathbf{M}^{(s)}\mathbf{D}^{-1/2}$ and $\sum_{s=0}^{k+1}\big(\prod_{l=k-s}^{k+1}\alpha^{(l)}\mathbf{D}^{-1/2}\mathbf{M}^{(l)}\mathbf{D}^{-1/2}\big)\big(2\mu\alpha^{(l-1)}\big)$ approach zero due to their forms defined in eq. (17), and $2\mu\alpha^{(k+1)}$ goes to 1. Therefore, the above equation tends to be negative, yielding a positive $\mathbf{E}^{PF}(\mathbf{F}^{(k+1)})$, and that completes the proof.

## A.2.3 Detailed Proof Of Lemma 2

Lemma 4 (Repeat of Lemma 2). *Assuming $G$ is connected, the forward Euler scheme presented in eq. (28) admits a generalized non-linear diffusion on the graph. Specifically, we have:*

$$\left(\mathbf{\alpha}^{(k)}\mathbf{D}^{-1/2}\mathbf{M}^{(k)}\mathbf{D}^{-1/2}-\mathbf{I}\right)\mathbf{F}^{(k)}+\mathbf{\beta}^{(k)}\mathbf{Y}=\mathbf{\alpha}^{(k)}\left(\operatorname{div}(\|\nabla\mathbf{F}^{(k)}\|^{p-2}\nabla\mathbf{F}^{(k)})\right)+2\mu\mathbf{\alpha}^{(k)}\mathbf{D}\mathbf{F}^{(k)}+2\mu\mathbf{\alpha}^{(k)}\mathbf{F}^{(0)}.\tag{47}$$

Proof. The proof can be done by verification. First, let us denote the rows of $\mathbf{F}^{(k)}$ by $\mathbf{f}^{(k)}(i)$. The computation on the $i$-th row of the L.H.S. of the above equation becomes:

$$\begin{aligned}
&\sum_{v_{j}\sim v_{i}}\alpha_{i,i}^{(k)}d_{ii}^{-1/2}M_{i,j}^{(k)}d_{jj}^{-1/2}\mathbf{f}^{(k)}(j)-\mathbf{f}^{(k)}(i)+\beta_{i,i}^{(k)}\mathbf{Y}(i)\\
&=\alpha_{i,i}^{(k)}\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{ij}^{(k)}}{\sqrt{d_{ii}}\sqrt{d_{jj}}}\mathbf{f}^{(k)}(j)\Big)-\alpha_{i,i}^{(k)}\frac{1}{\alpha_{i,i}^{(k)}}\mathbf{f}^{(k)}(i)+2\mu\alpha_{i,i}^{(k)}\mathbf{f}^{(0)}(i)\\
&=\alpha_{i,i}^{(k)}\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{ij}^{(k)}}{\sqrt{d_{ii}}\sqrt{d_{jj}}}\mathbf{f}^{(k)}(j)-\Big(\sum_{v_{j}\sim v_{i}}\frac{M_{ij}^{(k)}}{d_{ii}}+2\mu\Big)\mathbf{f}^{(k)}(i)\Big)+2\mu\alpha_{i,i}^{(k)}\mathbf{f}^{(0)}(i)\\
&=\alpha_{i,i}^{(k)}\Big(\sum_{v_{j}\sim v_{i}}\sqrt{\frac{w_{i,j}}{d_{i,i}}}\|\nabla_{W}\mathbf{F}([i,j])\|^{p-2}\Big(\sqrt{\frac{w_{i,j}}{d_{j,j}}}\mathbf{f}^{(k)}_{j}-\sqrt{\frac{w_{i,j}}{d_{i,i}}}\mathbf{f}^{(k)}_{i}\Big)+2\mu\sum_{v_{j}\sim v_{i}}\mathbf{f}^{(k)}_{i}\Big)+2\mu\alpha_{i,i}^{(k)}\mathbf{f}^{(0)}(i)\\
&=\alpha_{i,i}^{(k)}\big((\Delta_{p}\mathbf{F})(i)\big)+2\mu\alpha_{i,i}^{(k)}d_{ii}\mathbf{f}^{(k)}_{i}+2\mu\alpha_{i,i}^{(k)}\mathbf{f}^{(0)}_{i}.\tag{48}
\end{aligned}$$

When $i$ runs from 1 to $N$, this gives the result of the R.H.S. of the Lemma according to eq. (26) and eq. (27). Thus we complete the proof.
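For intuition, the following is a minimal NumPy sketch of the discrete p-Laplacian $(\Delta_p\mathbf{F})(i)$ appearing in eq. (48), built from the graph gradient of Definition 6; the toy graph, the dense loops, and the function name are illustrative assumptions, not the pL-UFG implementation.

```python
import numpy as np

def p_laplacian(W, F, p):
    """(Delta_p F)(i) = sum_j sqrt(w_ij/d_ii) * ||grad_W F([i,j])||^{p-2}
                        * (sqrt(w_ij/d_jj) f_j - sqrt(w_ij/d_ii) f_i).
    Assumes nonzero edge gradients when p < 2 (the power is then negative)."""
    N = W.shape[0]
    d = W.sum(axis=1)
    out = np.zeros_like(F)
    for i in range(N):
        for j in range(N):
            if W[i, j] == 0.0:
                continue
            grad = np.sqrt(W[i, j] / d[j]) * F[j] - np.sqrt(W[i, j] / d[i]) * F[i]
            out[i] += np.sqrt(W[i, j] / d[i]) * np.linalg.norm(grad) ** (p - 2) * grad
    return out

# p = 2 recovers a (normalized) linear graph-Laplacian smoothing direction.
W = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
F = np.array([[1.0], [0.0], [-1.0]])
print(p_laplacian(W, F, p=2.0))
```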
Review 1:

Summary: This paper presents a theoretical analysis of the graph p-Laplacian regularized framelet network (pL-UFG) and its convergence properties, energy dynamics, and connection to non-linear diffusion processes.

Strengths and Weaknesses:

Strengths:
1. The manuscript delivers a robust theoretical foundation for the pL-UFG framework, making a critical contribution to the comprehension of its intrinsic properties and behaviors—a step that significantly fortifies the theoretical landscape of GNNs.
2. The innovative linkage drawn between theoretical underpinnings and non-linear diffusion processes not only solidifies theoretical concepts but also offers valuable insights for the practical deployment and interpretation of GNNs in real-world scenarios.

Weaknesses:
1. Theorem 1 only shows the non-increasing nature of the objective function related to Problem (18), but fails to establish the convergence of the generated sequence and the convergence rate.
2. While Proposition 2 sheds light on the energy dynamics within pL-UFG, the broader significance and practical ramifications of this analysis remain underexplored.
3. The preliminaries in Section 2 are overly extensive and could be condensed, as they currently delve into established results that would be better suited to a brief summary with appropriate references for further reading.

Requested Changes:
1. Expand on Theorem 1 by providing a more comprehensive analysis of the sequence convergence and establishing the convergence rate.
2. Elaborate on the practical significance of Proposition 2's energy behavior results. How does this behavior influence the model in application, and what can practitioners expect in terms of model performance?
3. Shorten the preliminaries section by focusing on the essentials and referencing existing work for further details.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The paper explores "Generalized p-Laplacian Regularized Framelet Graph Convolutional Networks" (pL-UFG), targeting the challenge of applying Graph Neural Networks (GNNs) to heterophilic graph structures. The model employs an implicit layer designed to regulate Dirichlet energy across node features, addressing the issue of over-smoothing often observed in GNNs. The paper provides evidence of convergence for the model's iterative process when an appropriate regularization parameter is selected. The authors also discuss the capacity to adjust model parameters to manage the Dirichlet energy of node embeddings, potentially preventing the loss of signal detail. The paper proposes a method for adjusting the frequency of filters in the model, which could allow for adaptability to both homophilic and heterophilic graph data. These theoretical considerations are accompanied by empirical experiments that aim to validate the proposed concepts on datasets representing both homophilic and heterophilic graphs.

Strengths and Weaknesses:

Strengths:
- The paper provides a detailed analysis of an iterative algorithm's convergence and energetic behavior for solving the implicit layer formulation. The incorporation of a message-passing scheme adds a layer of novelty to the framework, setting it apart from existing approaches.
- The paper has the potential to serve as a valuable reference for researchers in the domain, contributing to the advancement of knowledge in this specialized area.
- The authors have diligently revised the paper, taking into account feedback from reviewers to clarify the mathematical formulations and parameters. They have contextualized the motivation for their work more effectively and complied with requirements for providing detailed hyperparameter settings and conducting statistical significance testing in the experimental section.

Weaknesses:
- The structure of the paper remains somewhat disorganized, hindering readability. Certain proofs and mathematical derivations come across as overly verbose and symbol-heavy, detracting from the clarity of the presentation. The extensive focus on formulaic derivations occupies a significant portion of the paper, which might overwhelm the narrative flow and impede the understanding of the core contributions.

Requested Changes: To enhance the paper and improve its readability, I recommend the following revisions:
- Streamline Mathematical Derivations: The paper would benefit from a more succinct approach to the presentation of mathematical content. Lengthy and complex derivations should be condensed, with the main text focusing on the essential steps that are crucial to understanding the results. Additional and intermediate steps could be moved to an appendix, ensuring that the core narrative is not obscured by detailed calculations.
- Reorganize Content for Coherence: The paper requires careful attention to content organization to avoid redundancy and ensure that each section contributes meaningfully to the overall narrative. Moreover, it has been observed that tables have been misplaced within the reference section, which disrupts the document's flow and could cause confusion.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The authors analyze the theoretical properties of the "graph p-Laplacian regularized framelet network" (pL-UFG), which is a framework/method originally designed to allow GNNs to fit both homophilic and heterophilic graphs well. The main claims/contributions that the authors make include:
1. they provide a theoretical convergence analysis of pL-UFG;
2. they demonstrate that the generalized Dirichlet energy of the graph under consideration through pL-UFG remains nonzero;
3. they provide a perspective from the angle of non-linear diffusion processes;
4. they propose two new variants of pL-UFG and provide experiments to demonstrate their new methods.

Strengths and Weaknesses: I list the strengths and weaknesses below, from the angle of clarity and correctness.

Strengths:
- the topic under consideration should be of interest to a sizable audience of TMLR.
- the number of baselines and datasets used in the experiments is comprehensive.
- the table of notations helps a bit with clarity.

Weaknesses:
1. Many definitions are unclear or stated in a sloppy manner.
- In Definition 1 of homophily and heterophily, a definition based on the criterion H(G) -> 1 or H(G) -> 0 is given. However, (1) which parameter is being sent to infinity is unspecified; (2) the definition implies that a *sequence* of graphs satisfies heterophily or homophily, rather than a single graph G.
- in (17), it is said that phi is positive and convex. But then in the examples stated immediately after, the authors picked phi(x) = x^p without specifying the domain or the choice of p. Clearly, for p odd and/or for x negative, the function can be neither positive nor convex.
- in Proposition 1 and below, the derivative of phi is used and stated.
However, just from the positivity and convexity conditions alone, phi is not guaranteed to be differentiable everywhere (there could be a countable set where it is not differentiable). The authors should state what happens in those scenarios, or add extra conditions on phi to make sure the derivative is rigorously justified.
- the authors say "iterative algorithm induced from the p-Laplacian implicit layer admits a generalized non-linear diffusion process". They also say on page 18 "Furthermore, if we treat the iteration equation eq. (19) as a diffusion process"... however, it is unclear what they mean, rigorously, by a diffusion process, since they never formally defined it. A diffusion process is a continuous-time Markov process, but the schemes presented are discrete. The authors should clarify this formally.
- I would like to emphasize that while there are many unclear points in the authors' writing and notation, I find that the concepts introduced are quite natural and intuitive in this context. As expert readers, we can infer what the authors are trying to say/do, and I am not saying that the authors' results are incorrect. However, I find the results proposed by the authors, in their current form, to be insufficiently rigorous and clear for scientific publication. In its current form, it is difficult to check and verify the correctness of all the claims that the authors make. As suggested by the prior reviewers, the paper is dense and the mathematics is not organized in a sufficiently coherent manner. There are also lots of results and notions that are directly imported from other papers.

2. Clarity of writing, formatting, and organization: there are many places where the formatting, wording, writing, and notation are rushed and unclear.
- In the first two paragraphs in blue, the citations are not consistent (some are cited with parentheses, some without).
- notations such as \approx in the proof of Lemma 3 are not rigorously defined. I suggest that the authors make explicit the approximation or limiting argument that they are making, rather than resort to \approx notation. I echo the opinion expressed by the reviewers of the prior iteration of the draft that the calculations of this paper are long, stacked together, and not organized in a sufficiently readable/coherent manner. I suggest significantly compressing the paper.

Requested Changes: 1. make sure all the definitions/notions mentioned above are rigorously/formally defined in the paper.

Broader Impact Concerns: Not applicable.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The decision among the reviewers is mixed. Two reviewers are in favour of accepting this submission, while one reviewer recommended "leaning reject". The reviewer's main concerns are the lack of a theoretical result that establishes the full sequence's convergence and the lack of analysis on the convergence rate. Overall, all reviewers and myself have found merit in the paper for publication. However, the reviewers have mainly complained about the paper's disorganized structure and the clarity of presentation. The authors improved the manuscript during the review process based on the reviewers' comments, but I still feel that further improvements need to be made. For instance, in many cases, \cite is used instead of \citep, e.g., "namely homophily and heterophily Zheng et al. (2022a)".
Many sentences need to be rephrased/rewritten in a more elegant way, e.g., "With the former most of the nodes are connected with those nodes with identical labels, and the latter is not Pei et al. (2019)". There are also several typos that need to be fixed, e.g., "(pL-UFG-HFD). we" ==> "(pL-UFG-HFD). We". I am thus recommending accept with a minor revision, and I expect the authors to improve the quality of presentation in the final version of the paper. ==================================================
# Learning Multi-Modal Generative Models With Permutation-Invariant Encoders And Tighter Variational Objectives

Anonymous authors

Paper under double-blind review

## Abstract

Devising deep latent variable models for multi-modal data has been a long-standing theme in machine learning research. Multi-modal Variational Autoencoders (VAEs) have been a popular generative model class that learns latent representations that jointly explain multiple modalities. Various objective functions for such models have been suggested, often motivated as lower bounds on the multi-modal data log-likelihood or from information-theoretic considerations. To encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes have been routinely used and shown to yield different trade-offs, for instance, regarding their generative quality or consistency across multiple modalities. In this work, we consider a variational objective that can tightly approximate the data log-likelihood. We develop more flexible aggregation schemes that avoid the inductive biases in PoE or MoE approaches by combining encoded features from different modalities based on permutation-invariant neural networks. Our numerical experiments illustrate trade-offs for multi-modal variational objectives and various aggregation schemes. We show that our variational objective and more flexible aggregation models can become beneficial when one wants to approximate the true joint distribution over observed modalities and latent variables in identifiable models.

## 1 Introduction

Multi-modal data sets where each sample has features from distinct sources have grown in recent years. For example, multi-omics data such as genomics, epigenomics, transcriptomics, and metabolomics can provide a more comprehensive understanding of biological systems if multiple modalities are analyzed in an integrative framework (Argelaguet et al., 2018; Lee and van der Schaar, 2021; Minoura et al., 2021). In neuroscience, multi-modal integration of neural activity and behavioral data can help to learn latent neural dynamics (Zhou and Wei, 2020; Schneider et al., 2023). However, annotations or labels in such data sets are often rare, making unsupervised or semi-supervised generative approaches particularly attractive, as such methods can be used in these settings to (i) generate data, such as missing modalities, and (ii) learn latent representations that are useful for downstream analyses or that are of scientific interest themselves. The availability of heterogeneous data for different modalities promises to learn generalizable representations that can capture shared content across multiple modalities in addition to modality-specific information. A promising class of weakly-supervised generative models is multi-modal VAEs (Suzuki et al., 2016; Wu and Goodman, 2019; Shi et al., 2019; Sutter et al., 2021) that combine information across modalities in an often-shared low-dimensional latent representation. A common route for learning the parameters of latent variable models is via maximization of the marginal data likelihood, with various lower bounds thereof suggested in previous work.

Setup. We consider a set of $M$ random variables $\{X_1, \ldots, X_M\}$ with empirical density $p_d$, where each random variable $X_s$, $s \in \mathcal{M} = \{1, \ldots, M\}$, can be used to model a different data modality taking values in $\mathcal{X}_s$. With some abuse of notation, we write $X = \{X_1, \ldots, X_M\}$
and for any subset $\mathcal{S} \subset \mathcal{M}$, we set $X = (X_{\mathcal{S}}, X_{\setminus\mathcal{S}})$ for two partitions of the random variables into $X_{\mathcal{S}} = \{X_s\}_{s\in\mathcal{S}}$ and $X_{\setminus\mathcal{S}} = \{X_s\}_{s\in\mathcal{M}\setminus\mathcal{S}}$. We pursue a latent variable model setup, analogous to uni-modal VAEs (Kingma and Ba, 2014; Rezende et al., 2014). For a latent variable $Z \in \mathcal{Z}$ with prior density $p_{\theta}(z)$, we posit a joint generative model1 $p_{\theta}(z, x) = p_{\theta}(z)\prod_{s=1}^{M} p_{\theta}(x_s|z)$, where $p_{\theta}(x_s|z)$ is commonly referred to as the decoding distribution for modality $s$. Observe that all modalities are independent given the latent variable $z$ shared across all modalities. However, one can introduce modality-specific latent variables by making sparsity assumptions for the decoding distribution. Intuitively, this conditional independence assumption means that the latent variable $Z$ captures all unobserved factors shared by the modalities.

Multi-modal variational bounds and mutual information. Popular approaches to train multi-modal models are based on a mixture-based variational bound (Daunhawer et al., 2022; Shi et al., 2019) given by $\mathcal{L}^{\mathrm{Mix}}(\theta, \phi, \beta) = \int \rho(\mathcal{S})\,\mathcal{L}^{\mathrm{Mix}}_{\mathcal{S}}(x, \theta, \phi, \beta)\,\mathrm{d}\mathcal{S}$, where

$${\cal L}_{\cal S}^{\sf Mix}(x,\theta,\phi,\beta)=\int q_{\phi}(z|x_{\cal S})\left[\log p_{\theta}(x|z)\right]{\rm d}z-\beta{\sf KL}(q_{\phi}(z|x_{\cal S})|p_{\theta}(z))\tag{1}$$

and $\rho$ is some distribution on the power set $\mathcal{P}(\mathcal{M})$ of $\mathcal{M}$ and $\beta > 0$. For $\beta = 1$, one obtains the bound $\mathcal{L}^{\mathrm{Mix}}_{\mathcal{S}}(x, \theta, \phi, 1) \leq \log p_{\theta}(x)$. However, as shown in Daunhawer et al. (2022), there is a gap between the variational bound and the log-likelihood, given by the conditional entropies, that cannot be reduced even for flexible encoding distributions. More precisely, it holds that

$$\int p_{d}(x)\log p_{\theta}(x)\mathrm{d}x\geq\int p_{d}(x){\mathcal{L}}^{\mathrm{Mix}}(x,\theta,\phi,1)\mathrm{d}x+\int\rho({\mathcal{S}}){\mathcal{H}}(p_{d}(X_{\setminus\mathcal{S}}|X_{\mathcal{S}}))\mathrm{d}{\mathcal{S}},$$

where $\mathcal{H}(p_d(X_{\setminus\mathcal{S}}|X_{\mathcal{S}}))$ is the entropy of the conditional data distributions. Intuitively, in (1), one tries to reconstruct or predict all modalities from incomplete information using only the modalities $\mathcal{S}$, which leads to learning an inexact, average prediction (Daunhawer et al., 2022). In particular, it cannot reliably predict modality-specific information that is not shared with other modality subsets, as measured by the conditional entropies $\mathcal{H}(p_d(X_{\setminus\mathcal{S}}|X_{\mathcal{S}}))$. We will illustrate that maximizing $\mathcal{L}^{\mathrm{Mix}}_{\mathcal{S}}$ can be interpreted as the information-theoretic objective of

$$\mathrm{maximizing}\;\left\{\hat{\mathrm{I}}_{q_{\phi}}^{\mathrm{lb}}(X,Z_{\mathcal{S}})-\beta\hat{\mathrm{I}}_{q_{\phi}}^{\mathrm{ub}}(X_{\mathcal{S}},Z_{\mathcal{S}})\right\},\tag{2}$$

where $\hat{\mathrm{I}}^{\mathrm{ub}}_{q}$ and $\hat{\mathrm{I}}^{\mathrm{lb}}_{q}$ are variational upper, respectively, lower bounds of the corresponding mutual information $\mathrm{I}_q(X, Y) = \int q(x, y)\log\frac{q(x,y)}{q(x)q(y)}\mathrm{d}x\mathrm{d}y$ of random variables $X$ and $Y$ having marginal and joint densities $q$. We occasionally write $Z_{\mathcal{S}}$ instead of $Z$ to emphasize that $Z$ is conditioned on $X_{\mathcal{S}}$ under the encoding density $q_{\phi}$. Variations of (1) have been suggested (Sutter et al., 2020), such as replacing the prior density $p_{\theta}$ in the KL term by a weighted product of the prior density $p_{\theta}$ and the uni-modal encoding distributions $q_{\phi}(z|x_s)$, for all $s \in \mathcal{M}$. Likewise, the multi-view variational information bottleneck approach developed in Lee and van der Schaar (2021) for predicting $X_{\setminus\mathcal{S}}$ given $X_{\mathcal{S}}$ can be interpreted as maximizing $\hat{\mathrm{I}}^{\mathrm{lb}}_{q_{\phi}}(X_{\setminus\mathcal{S}}, Z_{\mathcal{S}}) - \beta\hat{\mathrm{I}}^{\mathrm{ub}}_{q_{\phi}}(X_{\mathcal{S}}, Z_{\mathcal{S}})$. Hwang et al. (2021) suggested a related bound that aims to maximize the reduction of total correlation of $X$ when conditioned on a latent variable.
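Before turning to further variants, a concrete illustration of the mixture-based bound (1) may be helpful. The following is a minimal NumPy sketch of a single-sample Monte Carlo estimate of $\mathcal{L}^{\mathrm{Mix}}_{\mathcal{S}}$ for diagonal Gaussian encoders and decoders with a standard Gaussian prior; the stand-in `encode` and `decode` functions are illustrative assumptions, not the architectures studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2                                    # latent dimension

def log_normal(x, mu, var):
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

def kl_to_std_normal(mu, var):
    # KL(N(mu, diag(var)) || N(0, I)) in closed form.
    return 0.5 * np.sum(var + mu ** 2 - 1.0 - np.log(var))

def mix_bound(x, S, encode, decode, beta=1.0):
    """One-sample estimate of L^Mix_S: encode only the subset S,
    but reconstruct *all* modalities x = (x_1, ..., x_M)."""
    mu, var = encode({s: x[s] for s in S})
    z = mu + np.sqrt(var) * rng.standard_normal(D)       # reparameterization
    recon = sum(log_normal(x[s], *decode(z, s)) for s in x)
    return recon - beta * kl_to_std_normal(mu, var)

# Stand-in encoder/decoder: feature mean with unit variances; N(z, I) likelihood.
encode = lambda xs: (np.mean(list(xs.values()), axis=0), np.ones(D))
decode = lambda z, s: (z, np.ones(D))
x = {0: rng.standard_normal(D), 1: rng.standard_normal(D)}
print(mix_bound(x, S={0}, encode=encode, decode=decode))
```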
Similar bounds have been suggested in Sutter et al. (2020) and Suzuki et al. (2016) by considering different KL-regularization terms; see also Suzuki and Matsuo (2022). Shi et al. (2020) add a contrastive estimate $\hat{\mathrm{I}}_{p_{\theta}}$ of the point-wise mutual information to the maximum likelihood objective and minimize $-\log p_{\theta}(x)-\beta\hat{\mathrm{I}}_{p_{\theta}}(x_{\mathcal{S}},x_{\setminus\mathcal{S}})$. Optimizing variational bounds of different mutual information terms such as (2) yields latent representations that have different trade-offs in terms of either (i) reconstruction or (ii) cross-prediction of multi-modal data from a rate-distortion viewpoint (Alemi et al., 2018).

Multi-modal aggregation schemes. To optimize the variational bounds above or to allow for flexible conditioning at test time, we need to learn encoding distributions $q_{\phi}(z|x_{\mathcal{S}})$ for any $\mathcal{S}\in\mathcal{P}(\mathcal{M})$. The typical aggregation schemes that are scalable to a large number of modalities are based on a choice of uni-modal encoding distributions $q_{\phi_s}(z|x_s)$ for any $s\in\mathcal{M}$, which are then used to define the multi-modal encoding distributions as follows:

1We usually denote random variables using upper-case letters, and their realizations by the corresponding lower-case letters. We assume throughout that $\mathcal{Z}=\mathbb{R}^D$ and that $p_{\theta}(z)$ is a Lebesgue density, although the results can be extended to more general settings, such as discrete random variables $Z$, with appropriate adjustments, for instance, regarding the gradient estimators.

- Mixture of Experts (MoE), see Shi et al. (2019),

$$q_{\phi}^{\mathrm{MoE}}(z|x_{\mathcal{S}})=\frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}q_{\phi_{s}}(z|x_{s}).$$

- Product of Experts (PoE), see Wu and Goodman (2018),

$$q_{\phi}^{\mathrm{PoE}}(z|x_{\mathcal{S}})\propto p_{\theta}(z)\prod_{s\in\mathcal{S}}q_{\phi_{s}}(z|x_{s}).$$

Contributions. This paper contributes (i) a new variational objective as an approximation of a lower bound on the multi-modal log-likelihood (LLH). We avoid a limitation of mixture-based bounds (1), which may not provide tight lower bounds on the joint LLH if there is considerable modality-specific variation (Daunhawer et al., 2022), even for flexible encoding distributions. The novel variational objective contains a lower bound of the marginal LLH $\log p_{\theta}(x_{\mathcal{S}})$ and a term approximating the conditional $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$ for any choice of $\mathcal{S}\in\mathcal{P}(\mathcal{M})$, provided that we can learn a flexible multi-modal encoding distribution. This paper then contributes (ii) new multi-modal aggregation schemes that yield more expressive multi-modal encoding distributions when compared to MoEs or PoEs. These schemes are motivated by the flexibility of permutation-invariant (PI) architectures such as DeepSets (Zaheer et al., 2017) or attention models (Vaswani et al., 2017; Lee et al., 2019). We illustrate that these innovations (iii) are beneficial when learning identifiable models, aided by using flexible prior and encoding distributions consisting of mixtures, and (iv) yield higher LLH in experiments.

Further related work. Canonical Correlation Analysis (Hotelling, 1936; Bach and Jordan, 2005) is a classical approach for multi-modal data that aims to find maximally correlated projections of two modalities and has been extended to include more than two modalities (Archambeau and Bach, 2008; Tenenhaus and Tenenhaus, 2011) or to allow for non-linear transformations (Akaho, 2001; Hardoon et al., 2004; Wang et al., 2015; Karami and Schuurmans, 2021).
Probabilistic CCA can also be seen as multi-battery factor analysis (MBFA) (Browne, 1980; Klami et al., 2013), wherein a shared latent variable models the variation common to all modalities, with modality-specific latent variables capturing the remaining variation. Likewise, latent factor regression or classification models (Stock and Watson, 2002) assume that observed features and response are driven jointly by a latent variable. Vedantam et al. (2018) considered a triple-ELBO for two modalities, while Sutter et al. (2021) introduced a generalized variational bound that involves a summation over all modality subsets. A series of works has developed multi-modal VAEs based on shared and private latent variables (Wang et al., 2016; Lee and Pavlovic, 2021; Lyu and Fu, 2022; Lyu et al., 2021; Vasco et al., 2022; Palumbo et al., 2023). Tsai et al. (2019a) proposed a hybrid generative-discriminative objective and minimized an approximation of the Wasserstein distance between the generated and observed multi-modal data. Joy et al. (2021) consider a semi-supervised setup of two modalities that requires no explicit multi-modal aggregation function. Extending the Info-Max principle (Linsker, 1988), maximizing the mutual information $\mathrm{I}_q(g_1(X_1), g_2(X_2)) \leq \mathrm{I}_q((X_1, X_2), (Z_1, Z_2))$ based on representations $Z_s = g_s(X_s)$ for modality-specific encoders $g_s$ from two modalities has been a motivation for approaches based on (symmetrized) contrastive objectives (Tian et al., 2020; Zhang et al., 2022c; Daunhawer et al., 2023) such as InfoNCE (Oord et al., 2018; Poole et al., 2019; Wang and Isola, 2020), a variational lower bound on the mutual information between $Z_1$ and $Z_2$. Recent work (Bounoua et al., 2023; Bao et al., 2023) considered score-based diffusion models on auto-encoded private latent variables.

## 2 A Tighter Variational Objective With Arbitrary Modality Masking

For $\mathcal{S}\subset\mathcal{M}$ and $\beta>0$, we define

$$\mathcal{L}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi,\beta)=\int q_{\phi}(z|x_{\mathcal{S}})\left[\log p_{\theta}(x_{\mathcal{S}}|z)\right]\mathrm{d}z-\beta\mathsf{KL}(q_{\phi}(z|x_{\mathcal{S}})|p_{\theta}(z)).\tag{3}$$

This is simply a standard variational lower bound (Jordan et al., 1999; Blei et al., 2017) restricted to the subset $\mathcal{S}$ for $\beta=1$, and therefore $\mathcal{L}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi,1)\leq\log p_{\theta}(x_{\mathcal{S}})$. One can express the variational bound in information-theoretic (Alemi et al., 2018) terms as

$$\int p_{d}(x_{\mathcal{S}})\mathcal{L}_{\mathcal{S}}(x_{\mathcal{S}})\mathrm{d}x_{\mathcal{S}}=-D_{\mathcal{S}}-\beta R_{\mathcal{S}}$$

for the rate

$$R_{\mathcal{S}}=\int p_{d}(x_{\mathcal{S}})\mathsf{KL}(q_{\phi}(z|x_{\mathcal{S}})|p_{\theta}(z))\mathrm{d}x_{\mathcal{S}},$$

measuring the information content that is encoded from the observed modalities in $\mathcal{S}$ by $q_{\phi}$ into the latent representation, and the distortion

$$D_{\mathcal{S}}=-\int p_{d}(x_{\mathcal{S}})q_{\phi}(z|x_{\mathcal{S}})\log p_{\theta}(x_{\mathcal{S}}|z)\mathrm{d}z\mathrm{d}x_{\mathcal{S}},$$

given as the negative reconstruction log-likelihood of the modalities in $\mathcal{S}$. While the latent variable $Z_{\mathcal{S}}$ that is encoded via $q_{\phi}(z|x_{\mathcal{S}})$ from $X_{\mathcal{S}}$ can be tuned via the choice of $\beta>0$ to trade off compression and reconstruction of all modalities in $\mathcal{S}$ jointly, it does not explicitly optimize for cross-modal prediction of modalities not in $\mathcal{S}$.
Indeed, the mixture-based variational bound differs from the above decomposition exactly by an additional cross-modal prediction or cross-distortion term

$$D_{\setminus{\mathcal{S}}}^{c}=-\int p_{d}(x)q_{\phi}(z_{\mathcal{S}}|x_{\mathcal{S}})\log p_{\theta}(x_{\setminus{\mathcal{S}}}|z_{\mathcal{S}})\mathrm{d}z_{\mathcal{S}}\mathrm{d}x,$$

thereby explicitly optimizing for both self-reconstruction of a modality subset and cross-modal prediction within a single objective:

$$\int p_{d}(\mathrm{d}x){\mathcal{L}}_{{\mathcal{S}}}^{\mathrm{Mix}}(x)=-D_{{\mathcal{S}}}-D_{\setminus{\mathcal{S}}}^{c}-\beta R_{{\mathcal{S}}}.\tag{4}$$

Instead of adding an explicit cross-modal prediction term, we consider an additional variational objective with a second latent variable $Z_{\mathcal{M}}$ that encodes all observed modalities $X = X_{\mathcal{M}}$ and tries to reconstruct a modality subset. Unlike the latent variable $Z_{\mathcal{S}}$ in (1) and (3), which can only encode incomplete information using the modalities $\mathcal{S}$, this second latent variable $Z_{\mathcal{M}}$ can encode modality-specific information from all observed modalities, thereby avoiding the averaged prediction in the mixture-based bound.

Ideally, we may like to consider an additional variational objective that lower-bounds the conditional log-likelihood $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$, so that maximizing the sum of both bounds maximizes a lower bound of the multi-modal log-likelihood $\log p_{\theta}(x_{\mathcal{S}}, x_{\setminus\mathcal{S}}) = \log p_{\theta}(x_{\mathcal{S}}) + \log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$. To motivate such a conditional variational objective, note that maximizing the variational bound (3) with infinite-capacity encoders yields $q_{\phi}(z|x_{\mathcal{S}}) = p_{\theta}(z|x_{\mathcal{S}})$. This suggests replacing the intractable posterior $p_{\theta}(z|x_{\mathcal{S}})$ with the encoder $q_{\phi}(z|x_{\mathcal{S}})$ for the probabilistic model when conditioned on $x_{\mathcal{S}}$. A variational objective under this replacement then becomes

$${\cal L}_{\setminus{\cal S}}(x,\theta,\phi,\beta)=\int q_{\phi}(z|x)\left[\log p_{\theta}(x_{\setminus{\cal S}}|z)\right]{\rm d}z-\beta{\sf KL}(q_{\phi}(z|x)|q_{\phi}(z|x_{\cal S})).\tag{5}$$

However, the above only approximates a lower bound on the conditional log-likelihood $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$, and $\mathcal{L}_{\setminus\mathcal{S}}$ is a lower bound only under idealised conditions. We will make these approximations more precise in Section 2.1, where we illustrate how these bounds lead to a matching of different distributions in the latent or data space, while Section 2.2 provides an information-theoretic interpretation of these variational objectives. In summary, for some fixed density $\rho$ on $\mathcal{P}(\mathcal{M})$, we suggest to maximize the overall bound

$${\mathcal{L}}(x,\theta,\phi,\beta)=\int\rho({\mathcal{S}})\left[{\mathcal{L}}_{{\mathcal{S}}}(x_{{\mathcal{S}}},\theta,\phi,\beta)+{\mathcal{L}}_{\setminus{\mathcal{S}}}(x,\theta,\phi,\beta)\right]\mathrm{d}{\mathcal{S}},$$

with respect to $\theta$ and $\phi$, which is a generalization of the bound suggested in Wu and Goodman (2019) to an arbitrary number of modalities. This bound can be optimized using standard Monte Carlo techniques, for example, by computing unbiased pathwise gradients (Kingma and Ba, 2014; Rezende et al., 2014; Titsias and Lázaro-Gredilla, 2014) using the reparameterization trick. For variational families such as Gaussian mixtures2, one can employ implicit reparameterization (Figurnov et al., 2018). It is straightforward to adapt variance reduction techniques such as ignoring the scoring term of the multi-modal encoding densities for pathwise gradients (Roeder et al., 2017); see Algorithm 1 in Appendix H for pseudo-code.
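For concreteness, the following is a minimal NumPy sketch of a single-sample estimate of $\mathcal{L}_{\mathcal{S}} + \mathcal{L}_{\setminus\mathcal{S}}$ from eqs. (3) and (5) for diagonal Gaussian encoders with a standard Gaussian prior; it is not Algorithm 1 from the appendix, and the stand-in `encode`/`decode` functions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2

def log_normal(x, mu, var):
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

def kl_gauss(mu0, var0, mu1, var1):
    # KL(N(mu0, diag(var0)) || N(mu1, diag(var1))) in closed form.
    return 0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def bound(x, S, encode, decode, beta=1.0):
    """One-sample estimate of L_S(x_S) + L_{\\S}(x)."""
    M = set(x)
    mu_S, var_S = encode({s: x[s] for s in S})            # q(z | x_S)
    mu_M, var_M = encode(x)                               # q(z | x)
    z_S = mu_S + np.sqrt(var_S) * rng.standard_normal(D)  # reparameterization
    z_M = mu_M + np.sqrt(var_M) * rng.standard_normal(D)
    L_S = sum(log_normal(x[s], *decode(z_S, s)) for s in S) \
        - beta * kl_gauss(mu_S, var_S, np.zeros(D), np.ones(D))   # prior N(0, I)
    L_not_S = sum(log_normal(x[s], *decode(z_M, s)) for s in M - set(S)) \
        - beta * kl_gauss(mu_M, var_M, mu_S, var_S)               # KL(q(z|x)|q(z|x_S))
    return L_S + L_not_S

encode = lambda xs: (np.mean(list(xs.values()), axis=0), np.ones(D))
decode = lambda z, s: (z, np.ones(D))
x = {0: rng.standard_normal(D), 1: rng.standard_normal(D), 2: rng.standard_normal(D)}
print(bound(x, S={0, 2}, encode=encode, decode=decode))
```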
Nevertheless, a scalable approach requires an encoding technique that allows conditioning on any masked modalities with a computational complexity that does not increase exponentially in $M$.

## 2.1 Multi-Modal Distribution Matching

Likelihood-based learning approaches aim to match the model distribution $p_{\theta}(x)$ to the true data distribution $p_d(x)$. Variational approaches achieve this by matching, in the latent space, the encoding distribution to the true posterior, as well as maximizing a tight lower bound on $\log p_{\theta}(x)$; see Rosca et al. (2018). These types of analyses have proved useful for uni-modal VAEs, as they can provide some insights as to why VAEs may lead to worse generative sample quality compared to other generative models such as GANs (Goodfellow et al., 2014), or may fail to learn useful latent representations (Zhao et al., 2019; Dieng et al., 2019). We show similar results for the multi-modal variational objectives. This suggests that limitations from uni-modal VAEs also affect multi-modal VAEs, but also that previous attempts to address these shortcomings in uni-modal VAEs may benefit multi-modal VAEs. In particular, mismatches between the prior and the aggregated prior for uni-modal VAEs that result in poor unconditional generation have a natural counterpart for cross-modal generations with multi-modal VAEs, which may potentially be reduced using more flexible conditional prior distributions, see Remark 3, or via adding additional mutual information regularising terms (Zhao et al., 2019), see Remark 4. In view of these results, it is neither surprising that multi-modal diffusion models such as Bounoua et al. (2023); Bao et al. (2023) yield improved sample quality, nor that sample quality can be improved by augmenting multi-modal VAEs with diffusion models (Pandey et al., 2022; Palumbo et al., 2024).

We consider the densities

$$p_{\theta}(z,x)=p_{\theta}(z)p_{\theta}(x_{\mathcal{S}}|z)p_{\theta}(x_{\setminus\mathcal{S}}|z)$$

and $q_{\phi}(z_{\mathcal{S}}, x) = p_d(x_{\mathcal{S}})q_{\phi}(z_{\mathcal{S}}|x_{\mathcal{S}})$. The latter is the encoding path comprising the encoding density $q_{\phi}$ conditioned on $x_{\mathcal{S}}$ and the empirical density $p_d$. We write

$$q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z)=\int p_{d}(x_{\mathcal{S}})q_{\phi}(z|x_{\mathcal{S}})\mathrm{d}x_{\mathcal{S}}$$

for the aggregated prior (Makhzani et al., 2016; Hoffman and Johnson, 2016; Tomczak and Welling, 2017) restricted to modalities from $\mathcal{S}$ and $q^{\star}(x_{\mathcal{S}}|z) = q_{\phi}(x_{\mathcal{S}}, z)/q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z)$, and likewise consider its conditional version,

$$q_{\phi,\setminus\mathcal{S}}^{\mathrm{agg}}(z|x_{\mathcal{S}})=\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z|x)\mathrm{d}x_{\setminus\mathcal{S}},$$

an aggregated encoder conditioned on $x_{\mathcal{S}}$. We provide a multi-modal ELBO surgery, summarized in Proposition 1 below. It implies that maximizing $\int p_d(x_{\mathcal{S}})\mathcal{L}_{\mathcal{S}}(x_{\mathcal{S}}, \theta, \phi)\mathrm{d}x_{\mathcal{S}}$ drives

1. the joint inference distribution $q_{\phi}(z, x_{\mathcal{S}}) = p_d(x_{\mathcal{S}})q_{\phi}(z|x_{\mathcal{S}})$ of the $\mathcal{S}$ submodalities to the joint generative distribution $p_{\theta}(z, x_{\mathcal{S}}) = p_{\theta}(z)p_{\theta}(x_{\mathcal{S}}|z)$, and
2. the generative marginal $p_{\theta}(x_{\mathcal{S}})$ to its empirical counterpart $p_d(x_{\mathcal{S}})$.

Analogously, maximizing $\int p_d(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\mathcal{L}_{\setminus\mathcal{S}}(x, \theta, \phi)\mathrm{d}x_{\setminus\mathcal{S}}$ drives, for fixed $x_{\mathcal{S}}$,

1. the distribution $p_d(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z|x)$ to the distribution $p_{\theta}(x_{\setminus\mathcal{S}}|z)q_{\phi}(z|x_{\mathcal{S}})$, and
2. the conditional $p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$ to its empirical counterpart $p_d(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$.

Furthermore, it shows that maximizing $\mathcal{L}_{\setminus\mathcal{S}}(x, \theta, \phi)$ minimizes a Bayes-consistency matching term $\mathsf{KL}(q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})|q_{\phi}(z|x_{\mathcal{S}}))$ for the multi-modal encoders, where a mismatch can yield poor cross-generation,
as an analog of the prior not matching the aggregated posterior leading to poor unconditional generation; see Remark 4.

2For MoE aggregation schemes, Shi et al. (2019) considered a stratified ELBO estimator as well as a tighter bound based on importance sampling, see also Morningstar et al. (2021), which we do not pursue here for consistency with other aggregation schemes that can likewise be optimized based on importance sampling ideas.

Proposition 1 (Marginal and conditional distribution matching) *For any $\mathcal{S} \in \mathcal{P}(\mathcal{M})$, we have*

$$\begin{aligned}
\int p_{d}(x_{\mathcal{S}}){\cal L}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi){\rm d}x_{\mathcal{S}}+{\cal H}(p_{d}(x_{\mathcal{S}}))
&=-{\sf KL}(q_{\phi}(z,x_{\mathcal{S}})|p_{\theta}(z,x_{\mathcal{S}})) &&\text{(ZXmarginal)}\\
&=-{\sf KL}(p_{d}(x_{\mathcal{S}})|p_{\theta}(x_{\mathcal{S}}))-\int p_{d}(x_{\mathcal{S}}){\sf KL}(q_{\phi}(z|x_{\mathcal{S}})|p_{\theta}(z|x_{\mathcal{S}})){\rm d}x_{\mathcal{S}} &&\text{(Xmarginal)}\\
&=-{\sf KL}(q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z)|p_{\theta}(z))-\int q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z){\sf KL}(q^{\star}(x_{\mathcal{S}}|z)|p_{\theta}(x_{\mathcal{S}}|z)){\rm d}z, &&\text{(Zmarginal)}
\end{aligned}$$

*where $q^{\star}(x_{\mathcal{S}}|z) = q_{\phi}(x_{\mathcal{S}}, z)/q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z)$. Moreover, for fixed $x_{\mathcal{S}}$,*

$$\begin{aligned}
&\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}}){\cal L}_{\setminus\mathcal{S}}(x,\theta,\phi){\rm d}x_{\setminus\mathcal{S}}+{\cal H}(p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}}))\\
&=-{\sf KL}\big(q_{\phi}(z|x)p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\,\big|\,p_{\theta}(x_{\setminus\mathcal{S}}|z)q_{\phi}(z|x_{\mathcal{S}})\big) &&\text{(ZXconditional)}\\
&=-{\sf KL}(p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})|p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}}))-\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\left[{\sf KL}(q_{\phi}(z|x)|p_{\theta}(z|x))-\int q_{\phi}(z|x)\log\frac{q_{\phi}(z|x_{\mathcal{S}})}{p_{\theta}(z|x_{\mathcal{S}})}{\rm d}z\right]{\rm d}x_{\setminus\mathcal{S}} &&\text{(Xconditional)}\\
&=-{\sf KL}(q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})|q_{\phi}(z|x_{\mathcal{S}}))-\int q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}}){\sf KL}(q^{\star}(x_{\setminus\mathcal{S}}|z,x_{\mathcal{S}})|p_{\theta}(x_{\setminus\mathcal{S}}|z)){\rm d}z, &&\text{(Zconditional)}
\end{aligned}$$

*where $q^{\star}(x_{\setminus\mathcal{S}}|z,x_{\mathcal{S}}) = q_{\phi}(z, x_{\setminus\mathcal{S}}|x_{\mathcal{S}})/q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}}) = p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z|x)/q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})$.*

If $q_{\phi}(z|x_{\mathcal{S}})$ approximates $p_{\theta}(z|x_{\mathcal{S}})$ exactly, Proposition 1 implies that $\mathcal{L}_{\setminus\mathcal{S}}(x, \theta, \phi)$ is a lower bound of $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$. More precisely, we obtain the following log-likelihood approximation.

Corollary 2 (Multi-modal log-likelihood approximation) *For any modality mask $\mathcal{S}$, we have*

$$\begin{aligned}
&\int p_{d}(x)\left[{\cal L}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi,1)+{\cal L}_{\setminus\mathcal{S}}(x,\theta,\phi,1)\right]{\rm d}x-\int p_{d}(x)\left[\log p_{\theta}(x)\right]{\rm d}x\\
&=-\int p_{d}(x_{\mathcal{S}})\left[{\sf KL}(q_{\phi}(z|x_{\mathcal{S}})|p_{\theta}(z|x_{\mathcal{S}}))\right]{\rm d}x_{\mathcal{S}}-\int p_{d}(x)\left[{\sf KL}(q_{\phi}(z|x)|p_{\theta}(z|x))\right]{\rm d}x\\
&\quad+\int p_{d}(x)q_{\phi}(z|x)\left[\log\frac{q_{\phi}(z|x_{\mathcal{S}})}{p_{\theta}(z|x_{\mathcal{S}})}\right]{\rm d}z{\rm d}x.
\end{aligned}$$

Proof. This follows from (Xmarginal) and (Xconditional).

Our approach recovers meta-learning with (latent) Neural processes (Garnelo et al., 2018b) when one optimizes only $\mathcal{L}_{\setminus\mathcal{S}}$ with $\mathcal{S}$ determined by context-target splits, cf. Appendix B. Our analysis implies that $\mathcal{L}_{\mathcal{S}} + \mathcal{L}_{\setminus\mathcal{S}}$ is an approximation of a lower bound on the multi-modal log-likelihood that becomes tight for infinite-capacity encoders so that $q_{\phi}(z|x_{\mathcal{S}}) = p_{\theta}(z|x_{\mathcal{S}})$ and $q_{\phi}(z|x) = p_{\theta}(z|x)$; see Remarks 3 and 5 for details.

Remark 3 (Log-likelihood approximation and Empirical Bayes) The term

$$\int p_{d}(x)q_{\phi}(z|x)\left[\log{\frac{q_{\phi}(z|x_{\mathcal{S}})}{p_{\theta}(z|x_{\mathcal{S}})}}\right]\mathrm{d}z\mathrm{d}x$$

arising in Corollary 2 and in (Xconditional) is not necessarily negative. Analogous to other variational approaches for learning conditional distributions, such as latent Neural processes, our bound becomes an approximation of a lower bound.
Note that $\mathcal{L}_{\mathcal{S}}$ is maximized when $q_{\phi}(z|x_{\mathcal{S}}) = p_{\theta}(z|x_{\mathcal{S}})$, see (Xmarginal), which implies a lower bound in Corollary 2 of

$$\int p_{d}(x)\left[{\mathcal{L}}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi,1)+{\mathcal{L}}_{\setminus\mathcal{S}}(x,\theta,\phi,1)\right]\mathrm{d}x=\int p_{d}(x)\left[\log p_{\theta}(x)-{\mathsf{KL}}(q_{\phi}(z|x)|p_{\theta}(z|x))\right]\mathrm{d}x.$$

We can re-write the conditional expectation of $\mathcal{L}_{\setminus\mathcal{S}}$ for any fixed $x_{\mathcal{S}}$ as

$$\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\mathcal{L}_{\setminus\mathcal{S}}(x,\theta,\phi,1)\mathrm{d}x_{\setminus\mathcal{S}}=\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z|x)\log p_{\theta}(x_{\setminus\mathcal{S}}|z)\mathrm{d}z\mathrm{d}x_{\setminus\mathcal{S}}+\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\mathcal{H}(q_{\phi}(z|x))\mathrm{d}x_{\setminus\mathcal{S}}+\int q_{\phi,\setminus\mathcal{S}}^{\mathrm{agg}}(z|x_{\mathcal{S}})\log q_{\phi}(z|x_{\mathcal{S}})\mathrm{d}z.$$

Whenever $q_{\phi}(z|x_{\mathcal{S}})$ can be learned independently from $q_{\phi}(z|x)$, the above is maximized for

$$q_{\phi}(z|x_{\mathcal{S}})=\int p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z|x)\mathrm{d}x_{\setminus\mathcal{S}}=q_{\phi,\setminus\mathcal{S}}^{\mathrm{agg}}(z|x_{\mathcal{S}}).$$

From a different perspective, we can consider an Empirical Bayes viewpoint (Robbins, 1992; Wang et al., 2019b), wherein one chooses the hyperparameters of the (conditional) prior so that it maximizes an approximation of the conditional log-likelihood $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$. The conditional prior $p^{\star}_{\vartheta}(z|x_{\mathcal{S}})$ in the corresponding conditional ELBO term

$${\cal L}^{\star}_{\setminus{\cal S}}(x,\theta,\phi,\vartheta,\beta)=\int q_{\phi}(z|x)\left[\log p_{\theta}(x_{\setminus{\cal S}}|z)\right]{\rm d}z-\beta{\sf KL}(q_{\phi}(z|x)|p^{\star}_{\vartheta}(z|x_{\cal S}))\tag{6}$$

can thus be seen as a learned prior having the parameter $\vartheta = \vartheta(\mathcal{D})$ that is learned by maximizing the above variational approximation of $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$ over $x \sim p_d$ for $\beta = 1$, and as such depends on the empirical multi-modal dataset $\mathcal{D}$. While the aggregated prior $q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})$ is the optimal learned prior when maximizing $\mathcal{L}_{\setminus\mathcal{S}}$, this choice can lead to overfitting. Moreover, computation of the aggregated prior, or sparse approximations thereof, such as the variational mixture of posteriors prior (VampPrior) in Tomczak and Welling (2017), is challenging in the conditional setup. Previous constructions in this direction (Joy et al., 2021) for learning priors for bi-modal data only considered unconditional versions, wherein pseudo-samples do not depend on some condition $x_{\mathcal{S}}$. While our permutation-invariant architectures introduced below may be used for flexibly parameterizing conditional prior distributions $p^{\star}_{\vartheta}(z|x_{\mathcal{S}})$ as a function of $x_{\mathcal{S}}$ with a model that is different from the encoding distributions $q_{\phi}(z|x_{\mathcal{S}})$, we content ourselves with choosing the same model for both the conditional prior and the encoding distribution, $p^{\star}_{\vartheta}(z|x_{\mathcal{S}}) = q_{\phi}(z|x_{\mathcal{S}})$. Note that the encoding distribution then features in both bounds, which encourages learning encoding distributions that perform well as conditional priors and as encoding distributions.
In the ideal scenario where both the generative and inference models have the flexibility to satisfy $p_d(x_{\setminus\mathcal{S}}|x_{\mathcal{S}}) = p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})$, $q_{\phi}(z|x_{\mathcal{S}}) = p_{\theta}(z|x_{\mathcal{S}})$ and $q_{\phi}(z|x) = p_{\theta}(z|x)$, the optimal conditional prior distribution is

$$q_{\phi,\setminus\mathcal{S}}^{\mathrm{agg}}(z|x_{\mathcal{S}})=\int p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})p_{\theta}(z|x)\mathrm{d}x_{\setminus\mathcal{S}}=\int p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})\frac{p_{\theta}(z|x_{\mathcal{S}})p_{\theta}(x_{\setminus\mathcal{S}}|z,x_{\mathcal{S}})}{p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})}\mathrm{d}x_{\setminus\mathcal{S}}=p_{\theta}(z|x_{\mathcal{S}})=q_{\phi}(z|x_{\mathcal{S}}).$$

Remark 4 (Prior-hole problem and Bayes or conditional consistency) In the uni-modal setting, the mismatch between the prior and the aggregated prior can be large and can lead to poor unconditional generative performance, because this would lead to high-probability regions under the prior that have not been trained due to their small mass under the aggregated prior (Hoffman and Johnson, 2016; Rosca et al., 2018). Equation (Zmarginal) extends this to the multi-modal case, and we expect that unconditional generation can be poor if this mismatch is large. Moreover, (Zconditional) extends this conditioned on some modality subset, and we expect that cross-generation of $x_{\setminus\mathcal{S}}$ conditional on $x_{\mathcal{S}}$ can be poor if the mismatch between $q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})$ and $q_{\phi}(z|x_{\mathcal{S}})$ is large for $x_{\mathcal{S}} \sim p_d$, because high-probability regions under $q_{\phi}(z|x_{\mathcal{S}})$ will not have been trained - via optimizing $\mathcal{L}_{\setminus\mathcal{S}}(x)$ - to model $x_{\setminus\mathcal{S}}$ conditional on $x_{\mathcal{S}}$, due to their small mass under $q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z|x_{\mathcal{S}})$. The mismatch will vanish when the encoders are consistent and correspond to a single Bayesian model where they approximate the true posterior distributions. A potential approach to reduce this mismatch may be to include as a regulariser the divergence between them, which can be optimized by likelihood-free techniques such as the Maximum-Mean Discrepancy (Gretton et al., 2006), as in Zhao et al. (2019) for uni-modal or unconditional models. For the mixture-based bound, the same distribution mismatch affects unconditional generation, while both the training and generative sampling distribution is $q_{\phi}(z|x_{\mathcal{S}})$ for cross-generation.

Remark 5 (Variational gap for mixture-based bounds) Corollary 2 shows that the variational objective can become a tight bound in the limiting case where the encoding distributions approximate the true posterior distributions. A similar result does not hold for the mixture-based multi-modal bound. Moreover, our bound can be tight for an arbitrary number of modalities in the limiting case of infinite-capacity encoders. In contrast, Daunhawer et al. (2022) show that for mixture-based bounds, this variational gap increases with each additional modality if the new modality is 'sufficiently diverse', even for infinite-capacity encoders.

Remark 6 (Optimization, multi-task learning and the choice of ρ) For simplicity, we have chosen to sample $\mathcal{S} \sim \rho$ in our experiments via the hierarchical construction $\gamma \sim \mathcal{U}(0, 1)$, $m_j \sim \mathrm{Bern}(\gamma)$ iid for all $j \in [M]$ and setting $\mathcal{S} = \{s \in [M]: m_s = 1\}$; see the sketch after this remark. The distribution $\rho$ for masking the modalities can be adjusted to accommodate various weights for different modality subsets. Indeed, (2) can be seen as a linear scalarization of a multi-task learning problem (Fliege and Svaiter, 2000; Sener and Koltun, 2018). We aim to optimize a loss vector $(\mathcal{L}_{\mathcal{S}} + \mathcal{L}_{\setminus\mathcal{S}})_{\mathcal{S}\subset\mathcal{M}}$, where the gradients for each $\mathcal{S} \subset \mathcal{M}$ can point in different directions, making it challenging to minimize the loss for all modalities simultaneously. Consequently, Javaloy et al. (2022) used multi-task learning techniques (e.g., as suggested in Chen et al. (2018); Yu et al. (2020)) for adjusting the gradients in mixture-based VAEs. Such improved optimization routines are orthogonal to our approach. Similarly, we do not analyze optimization issues such as initializations and training dynamics that have been found challenging for multi-modal learning (Wang et al., 2020; Huang et al., 2022).
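The hierarchical construction of ρ described in Remark 6 is straightforward to implement; a minimal NumPy sketch (the function name is an illustrative assumption) is:

```python
import numpy as np

def sample_mask(M, rng):
    """Sample S ~ rho: gamma ~ U(0, 1), m_j ~ Bern(gamma) i.i.d. for j in [M],
    and S = {j : m_j = 1}."""
    gamma = rng.uniform()
    m = rng.uniform(size=M) < gamma
    return {j for j in range(M) if m[j]}

rng = np.random.default_rng(2)
print([sample_mask(5, rng) for _ in range(3)])
```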
## 2.2 Information-Theoretic Perspective

Beyond generative modeling, β-VAEs (Higgins et al., 2017) have been popular for representation learning and data reconstruction. Alemi et al. (2018) suggest learning a latent representation that achieves certain mutual information with the data, based on upper and lower variational bounds of the mutual information. A Legendre transformation thereof recovers the β-VAE objective and allows a trade-off between information content or rate versus reconstruction quality or distortion. We show that the proposed variational objective gives rise to an analogous perspective for multiple modalities. Recall that the mutual information on the inference path3, given by

$$\mathrm{I}_{q_{\phi}}(X_{\mathcal{S}},Z_{\mathcal{S}})=\int q_{\phi}(x_{\mathcal{S}},z_{\mathcal{S}})\log\frac{q_{\phi}(x_{\mathcal{S}},z_{\mathcal{S}})}{p_{d}(x_{\mathcal{S}})q_{\phi,\mathcal{S}}^{\mathrm{agg}}(z_{\mathcal{S}})}\mathrm{d}z_{\mathcal{S}}\mathrm{d}x_{\mathcal{S}},$$

can be bounded by standard (Barber and Agakov, 2004; Alemi et al., 2016; 2018) lower and upper bounds:

$${\mathcal{H}}_{{\mathcal{S}}}-D_{{\mathcal{S}}}\leq{\mathcal{H}}_{{\mathcal{S}}}-D_{{\mathcal{S}}}+\Delta_{1}=\mathrm{I}_{q_{\phi}}(X_{{\mathcal{S}}},Z_{{\mathcal{S}}})=R_{{\mathcal{S}}}-\Delta_{2}\leq R_{{\mathcal{S}}},\tag{7}$$

with $\Delta_1, \Delta_2 \geq 0$ and $\mathcal{H}_{\mathcal{S}} \leq R_{\mathcal{S}} + D_{\mathcal{S}}$. For details, see Appendix C. Consequently, by tuning $\beta$, we can vary upper and lower bounds of $\mathrm{I}_{q_{\phi}}(X_{\mathcal{S}}, Z_{\mathcal{S}})$ to trade off between compressing and reconstructing $X_{\mathcal{S}}$. To arrive at a similar interpretation for the conditional bound $\mathcal{L}_{\setminus\mathcal{S}}$ that involves the conditional mutual information

$$\mathrm{I}_{q_{\phi}}(X_{\setminus\mathcal{S}},Z_{\mathcal{M}}|X_{\mathcal{S}})=\int p_{d}(x_{\mathcal{S}})\mathsf{KL}\big(p_{d}(x_{\setminus\mathcal{S}},z_{\mathcal{M}}|x_{\mathcal{S}})\,\big|\,p_{d}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi,\setminus\mathcal{S}}^{\mathrm{agg}}(z_{\mathcal{M}}|x_{\mathcal{S}})\big)\mathrm{d}x_{\mathcal{S}},$$

recalling that $q^{\mathrm{agg}}_{\phi,\setminus\mathcal{S}}(z_{\mathcal{M}}|x_{\mathcal{S}}) = \int p_d(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})q_{\phi}(z_{\mathcal{M}}|x)\mathrm{d}x_{\setminus\mathcal{S}}$, we set

$$R_{\setminus\mathcal{S}}=\int p_{d}(x)\mathsf{KL}(q_{\phi}(z|x)|q_{\phi}(z|x_{\mathcal{S}}))\mathrm{d}x$$

for a conditional or cross rate. Similarly, set

$$D_{\setminus\mathcal{S}}=-\int p_{d}(x)q_{\phi}(z|x)\log p_{\theta}(x_{\setminus\mathcal{S}}|z)\mathrm{d}z\mathrm{d}x.$$

3We include the conditioning modalities as an index for the latent variable $Z$ when the conditioning set is unclear.

One obtains the following bounds; see Appendix C.

Lemma 7 (Variational bounds on the conditional mutual information) *It holds that*

$$-\int{\mathcal{L}}_{\setminus{\mathcal{S}}}(x,\theta,\phi,\beta)p_{d}(\mathrm{d}x)=D_{\setminus{\mathcal{S}}}+\beta R_{\setminus{\mathcal{S}}}$$

*and*

$${\mathcal{H}}_{\setminus{\mathcal{S}}}-D_{\setminus{\mathcal{S}}}+\Delta_{\setminus{\mathcal{S}},1}=\mathrm{I}_{q_{\phi}}(X_{\setminus{\mathcal{S}}},Z_{\mathcal{M}}|X_{\mathcal{S}})=R_{\setminus{\mathcal{S}}}-\Delta_{\setminus{\mathcal{S}},2}.$$

Consequently, by tuning $\beta$, we can vary upper and lower bounds of $\mathrm{I}_{q_{\phi}}(X_{\setminus\mathcal{S}}, Z_{\mathcal{M}}|X_{\mathcal{S}})$ to trade off between compressing relative to $q_{\phi}(\cdot|x_{\mathcal{S}})$ and reconstructing $X_{\setminus\mathcal{S}}$.
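These rate and distortion terms are directly estimable from data; the following is a minimal NumPy sketch for diagonal Gaussian encoders, reusing the stand-in `encode`/`decode` conventions of the earlier sketches (all of which are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
D = 2
log_normal = lambda x, mu, var: -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
kl_gauss = lambda m0, v0, m1, v1: 0.5 * np.sum(np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)
encode = lambda xs: (np.mean(list(xs.values()), axis=0), np.ones(D))
decode = lambda z, s: (z, np.ones(D))

def rates_and_distortions(batch, S):
    """Monte Carlo estimates of (R_S, D_S, R_\\S, D_\\S) over x ~ p_d."""
    out = np.zeros(4)
    for x in batch:
        mu_S, var_S = encode({s: x[s] for s in S})
        mu_M, var_M = encode(x)
        z_S = mu_S + np.sqrt(var_S) * rng.standard_normal(D)
        z_M = mu_M + np.sqrt(var_M) * rng.standard_normal(D)
        out += [kl_gauss(mu_S, var_S, np.zeros(D), np.ones(D)),           # R_S
                -sum(log_normal(x[s], *decode(z_S, s)) for s in S),       # D_S
                kl_gauss(mu_M, var_M, mu_S, var_S),                       # R_\S (cross rate)
                -sum(log_normal(x[s], *decode(z_M, s)) for s in set(x) - S)]  # D_\S
    return out / len(batch)

batch = [{0: rng.standard_normal(D), 1: rng.standard_normal(D)} for _ in range(8)]
print(rates_and_distortions(batch, S={0}))
```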
Using the chain rules for entropy, we obtain that the suggested bound can be seen as a relaxation of bounds on marginal and conditional mutual information.

Corollary 8 (Lagrangian relaxation) *It holds that*

$$\mathcal{H}-D_{\mathcal{S}}-D_{\backslash\mathcal{S}}\leq\mathrm{I}_{q_{\phi}}(X_{\mathcal{S}},Z_{\mathcal{S}})+\mathrm{I}_{q_{\phi}}(X_{\backslash\mathcal{S}},Z_{\mathcal{M}}|X_{\mathcal{S}})\leq R_{\mathcal{S}}+R_{\backslash\mathcal{S}}$$

and maximizing $\mathcal{L}$ *for fixed* $\beta = \frac{\partial(D_{\mathcal{S}}+D_{\backslash\mathcal{S}})}{\partial(R_{\mathcal{S}}+R_{\backslash\mathcal{S}})}$ minimizes the rates $R_{\mathcal{S}} + R_{\backslash\mathcal{S}}$ *and distortions* $D_{\mathcal{S}} + D_{\backslash\mathcal{S}}$.

Remark 9 (Mixture-based variational bound) We show in Appendix C, see also Daunhawer et al. (2022), that $\mathcal{H}_{\mathcal{M}} - D_{\mathcal{S}} - D^c_{\mathcal{S}} \leq \mathcal{H}_{\mathcal{M}} - D_{\mathcal{S}} - D^c_{\mathcal{S}} + \Delta_1' = \mathrm{I}_{q_{\phi}}(X_{\mathcal{M}}, Z_{\mathcal{S}})$, where $\Delta_1' = \int q^{\mathrm{agg}}_{\phi}(z)\mathsf{KL}(q^{\star}(x|z)|p_{\theta}(x|z))\mathrm{d}z > 0$. Consequently, $\mathcal{H}_{\mathcal{M}} - D_{\mathcal{S}} - D^c_{\mathcal{S}}$ is a variational lower bound, while $R_{\mathcal{S}}$ is a variational upper bound on $\mathrm{I}_{q_{\phi}}(X_{\mathcal{M}}, Z_{\mathcal{S}})$, which establishes (2). Maximizing the mixture-based bound thus corresponds to encoding a single latent variable $Z_{\mathcal{S}}$ that maximizes the reconstruction of all modalities while at the same time being maximally compressive relative to the prior.

Remark 10 (Optimal variational distributions) Consider the annealed likelihood $\tilde{p}_{\beta,\theta}(x_{\mathcal{S}}|z) \propto p_{\theta}(x_{\mathcal{S}}|z)^{1/\beta}$ as well as the adjusted posterior $\tilde{p}_{\beta,\theta}(z|x_{\mathcal{S}}) \propto \tilde{p}_{\beta,\theta}(x_{\mathcal{S}}|z)p_{\theta}(z)$. The minimum of the bound $\int p_d(\mathrm{d}x)\mathcal{L}_{\mathcal{S}}(x)$ is attained at any $x_{\mathcal{S}}$ for the variational density

$$q^{\star}(z|x_{\mathcal{S}})\propto\exp\left(\frac{1}{\beta}\left[\log p_{\theta}(x_{\mathcal{S}}|z)+\beta\log p_{\theta}(z)\right]\right)\propto\tilde{p}_{\beta,\theta}(z|x_{\mathcal{S}}),\tag{8}$$

see also Huang et al. (2020) and Remark 20. Similarly, if (8) holds, then it is readily seen that the minimum of the bound $\int p_d(\mathrm{d}x)\mathcal{L}_{\backslash\mathcal{S}}(x)$ is attained at any $x$ for the variational density $q^{\star}(z|x) = \tilde{p}_{\beta,\theta}(z|x)$. In contrast, as shown in Remark 21, the optimal variational density for the mixture-based multi-modal bound (1) is attained at

$$q^{\star}(z|x_{\mathcal{S}})\propto\tilde{p}_{\beta,\theta}(z|x_{\mathcal{S}})\exp\left(\int p_{d}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}})\log\tilde{p}_{\beta,\theta}(x_{\backslash\mathcal{S}}|z)\mathrm{d}x_{\backslash\mathcal{S}}\right).$$

The optimal variational density for the mixture-based bound thus tilts the posterior distribution to points that achieve higher cross-modal predictions.

## 3 Permutation-Invariant Modality Encoding

Optimizing the above multi-modal bounds requires learning variational densities with different conditioning sets. We write $h_{s,\varphi}\colon \mathcal{X}_s \to \mathbb{R}^{D_E}$ for some modality-specific feature function. We recall the following multi-modal encoding functions suggested in previous work, where usually $h_{s,\varphi}(x_s) = [\mu_{s,\varphi}(x_s)^{\top}, \mathrm{vec}(\Sigma_{s,\varphi}(x_s))^{\top}]^{\top}$ with $\mu_{s,\varphi}$ and $\Sigma_{s,\varphi}$ being the mean, respectively the (often diagonal) covariance, of a uni-modal encoder of modality s. Accommodating more complex variational families, such as mixture distributions for the uni-modal encoding distributions, can be more challenging for these approaches.

- MoE: $q^{\mathrm{MoE}}_{\varphi}(z|x_{\mathcal{S}}) = \frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}} q_{\mathcal{N}}(z|\mu_{s,\varphi}(x_s), \Sigma_{s,\varphi}(x_s))$, where $q_{\mathcal{N}}(z|\mu,\Sigma)$ is a Gaussian density with mean µ and covariance Σ.

- PoE: $q^{\mathrm{PoE}}_{\varphi}(z|x_{\mathcal{S}}) = \frac{1}{\mathcal{Z}}\, p_{\theta}(z)\prod_{s\in\mathcal{S}} q_{\mathcal{N}}(z|\mu_{s,\varphi}(x_s), \Sigma_{s,\varphi}(x_s))$, for a normalizing constant $\mathcal{Z} \in \mathbb{R}$.
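For Gaussian experts, the PoE aggregation is available in closed form, as stated next. As a reference point, here is a minimal NumPy sketch of this precision-weighted fusion, assuming diagonal covariances; the function name and interface are ours, for illustration only:

```python
import numpy as np

def poe_fuse(mus, logvars, prior_mu, prior_logvar):
    """Precision-weighted fusion of |S| diagonal-Gaussian experts and the prior.

    mus, logvars: (|S|, D) arrays of uni-modal means and log-variances;
    prior_mu, prior_logvar: (D,) parameters of a Gaussian prior.
    """
    prec = np.concatenate([np.exp(-prior_logvar)[None], np.exp(-logvars)])
    mean = np.concatenate([prior_mu[None], mus])
    fused_prec = prec.sum(axis=0)                       # precisions add up
    fused_mu = (prec * mean).sum(axis=0) / fused_prec   # precision-weighted mean
    return fused_mu, -np.log(fused_prec)                # fused mean, log-variance

# Example: two unit-Gaussian experts and a standard Gaussian prior in D = 2
# yield a fused variance of 1/3 per dimension.
mu, lv = poe_fuse(np.zeros((2, 2)), np.zeros((2, 2)), np.zeros(2), np.zeros(2))
```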
For Gaussian priors $p_{\theta}(z) = q_{\mathcal{N}}(z|\mu_{\theta}, \Sigma_{\theta})$ with mean $\mu_{\theta}$ and covariance $\Sigma_{\theta}$, the multi-modal distribution $q^{\mathrm{PoE}}_{\varphi}(z|x_{\mathcal{S}})$ is Gaussian with mean

$$\left(\Sigma_{\theta}^{-1}+\sum_{s\in{\mathcal{S}}}\Sigma_{s,\varphi}(x_{s})^{-1}\right)^{-1}\left(\Sigma_{\theta}^{-1}\mu_{\theta}+\sum_{s\in{\mathcal{S}}}\Sigma_{s,\varphi}(x_{s})^{-1}\mu_{s,\varphi}(x_{s})\right)$$

and covariance

$$\left(\Sigma_{\theta}^{-1}+\sum_{s\in{\mathcal{S}}}\Sigma_{s,\varphi}(x_{s})^{-1}\right)^{-1}.$$

- Mixture of Product of Experts (MoPoE), see Sutter et al. (2021),

$$q_{\phi}^{\mathrm{MoPoE}}(z|x_{\mathcal{M}})=\frac{1}{2^{M}}\sum_{x_{\mathcal{S}}\in\mathcal{P}(x_{\mathcal{M}})}q_{\phi}^{\mathrm{PoE}}(z|x_{\mathcal{S}}).$$

## 3.1 Learnable Permutation-Invariant Aggregation Schemes

We aim to learn a more flexible aggregation scheme under the constraint that the encoding distribution is invariant (Bloem-Reddy and Teh, 2020) with respect to the ordering of encoded features of each modality. Put differently, for all $(H_s)_{s\in\mathcal{S}} \in \mathbb{R}^{|\mathcal{S}|\times D_E}$ and all permutations $\pi \in \mathbb{S}_{\mathcal{S}}$ of $\mathcal{S}$, we assume that the conditional distribution is $\mathbb{S}_{\mathcal{S}}$-invariant, i.e. $q'_{\vartheta}(z|h) = q'_{\vartheta}(z|\pi\cdot h)$ for all $z \in \mathbb{R}^{D}$, where π acts on $H = (H_s)_{s\in\mathcal{S}}$ via $\pi\cdot H = (H_{\pi(s)})_{s\in\mathcal{S}}$. We set $q_{\phi}(z|x_{\mathcal{S}}) = q'_{\vartheta}(z|(h_{s,\varphi}(x_s))_{s\in\mathcal{S}})$, $\phi = (\varphi, \vartheta)$, and remark that the encoding distribution is not invariant with respect to the modalities, but becomes invariant only after applying the modality-specific encoder functions $h_{s,\varphi}$. Observe that such a constraint is satisfied by the aggregation schemes above for $h_{s,\varphi}$ being the uni-modal encoders. A variety of invariant (or equivariant) functions along with their approximation properties have been considered previously, see for instance Santoro et al. (2017); Zaheer et al. (2017); Qi et al. (2017); Lee et al. (2019); Segol and Lipman (2019); Murphy et al. (2019); Maron et al. (2019); Sannai et al. (2019); Yun et al. (2019); Bruno et al. (2021); Wagstaff et al. (2022); Zhang et al. (2022b); Li et al. (2022); Bartunov et al. (2022), and applied in different contexts such as meta-learning (Edwards and Storkey, 2016; Garnelo et al., 2018b; Kim et al., 2018; Hewitt et al., 2018; Giannone and Winther, 2022), reinforcement learning (Tang and Ha, 2021; Zhang et al., 2022a) or generative modeling of (uni-modal) sets (Li et al., 2018; 2020; Kim et al., 2021; Biloš and Günnemann, 2021; Li and Oliva, 2021). We can use such constructions to parameterize more flexible encoding distributions. Indeed, the results from Bloem-Reddy and Teh (2020) imply that for an exchangeable sequence $H_{\mathcal{S}} = (H_s)_{s\in\mathcal{S}} \in \mathbb{R}^{|\mathcal{S}|\times D_E}$ and random variable Z, the distribution $q'(z|h_{\mathcal{S}})$ is $\mathbb{S}_{\mathcal{S}}$-invariant if and only if there is a measurable function4 $f^{\star}\colon [0,1]\times\mathcal{M}(\mathbb{R}^{D_E}) \to \mathbb{R}^{D}$ such that

$$(H_{\mathcal{S}},Z)\stackrel{\mathrm{a.s.}}{=}(H_{\mathcal{S}},f^{\star}(\Xi,\mathbb{M}_{H_{\mathcal{S}}})),\,\mathrm{where}\,\Xi\sim\mathcal{U}[0,1]\,\,\mathrm{and}\,\,\Xi\perp H_{\mathcal{S}},$$

with $\mathbb{M}_{H_{\mathcal{S}}}(\cdot) = \sum_{s\in\mathcal{S}}\delta_{H_s}(\cdot)$ being the empirical measure of $h_{\mathcal{S}}$, which retains the values of $h_{\mathcal{S}}$ but discards their order. For variational densities from a location-scale family such as a Gaussian or Laplace distribution, we find it more practical to consider a different reparameterization in the form $Z = \mu(h_{\mathcal{S}}) + \sigma(h_{\mathcal{S}}) \odot \Xi$, where Ξ is a sample from a parameter-free density p such as a standard Gaussian or Laplace distribution, while $[\mu(h_{\mathcal{S}})^{\top}, \log\sigma(h_{\mathcal{S}})^{\top}]^{\top} = f(h_{\mathcal{S}})$ for a PI function $f\colon \mathbb{R}^{|\mathcal{S}|\times D_E} \to \mathbb{R}^{2D}$. Likewise, for mixture distributions thereof, assume that for a PI function f,
$$\left[\mu_{1}(h_{\mathcal{S}})^{\top},\log\sigma_{1}(h_{\mathcal{S}})^{\top},\ldots,\mu_{K}(h_{\mathcal{S}})^{\top},\log\sigma_{K}(h_{\mathcal{S}})^{\top},\log\omega(h_{\mathcal{S}})^{\top}\right]^{\top}=f(h_{\mathcal{S}})\in\mathbb{R}^{2DK+K}$$

and $Z = \mu_L(h_{\mathcal{S}}) + \sigma_L(h_{\mathcal{S}}) \odot \Xi$ with $L \sim \mathrm{Cat}(\omega(h_{\mathcal{S}}))$ denoting the sampled mixture component out of K mixtures. For simplicity, we consider here only two examples of PI functions f that have representations with parameter ϑ in the form

$$f_{\vartheta}(h_{\mathcal{S}})=\rho_{\vartheta}\left(\sum_{s\in\mathcal{S}}g_{\vartheta}(h_{\mathcal{S}})_{s}\right)$$

for a function $\rho_{\vartheta}\colon \mathbb{R}^{D_P} \to \mathbb{R}^{D_O}$ and permutation-equivariant function $g_{\vartheta}\colon \mathbb{R}^{N\times D_E} \to \mathbb{R}^{N\times D_P}$.

4The function $f^{\star}$ generally depends on the cardinality of $\mathcal{S}$. Finite-length exchangeable sequences imply a de Finetti latent variable representation only up to approximation errors (Diaconis and Freedman, 1980).

Example 1 (Sum Pooling Encoders) The Deep Set (Zaheer et al., 2017) construction $f_{\vartheta}(h_{\mathcal{S}}) = \rho_{\vartheta}\big(\sum_{s\in\mathcal{S}}\chi_{\vartheta}(h_s)\big)$ applies the same neural network $\chi_{\vartheta}\colon \mathbb{R}^{D_E} \to \mathbb{R}^{D_P}$ to each encoded feature $h_s$. We assume that $\chi_{\vartheta}$ is a feed-forward neural network and remark that pre-activation ResNets (He et al., 2016) have been advocated for deeper $\chi_{\vartheta}$. For exponential family models, the optimal natural parameters of the posterior solve an optimization problem where the dependence on the generative parameters from the different modalities decomposes as a sum, see Appendix F.

Example 2 (Set Transformer Encoders) Let $\mathrm{MTB}_{\vartheta}$ be a multi-head pre-layer-norm transformer block (Wang et al., 2019a; Xiong et al., 2020), see Appendix D for precise definitions. For some neural network $\chi_{\vartheta}\colon \mathbb{R}^{D_E} \to \mathbb{R}^{D_P}$, set $g^0_{\mathcal{S}} = \chi_{\vartheta}(h_{\mathcal{S}})$ and for $k \in \{1,\ldots,L\}$, set $g^k_{\mathcal{S}} = \mathrm{MTB}_{\vartheta}(g^{k-1}_{\mathcal{S}})$. We then consider $f_{\vartheta}(h_{\mathcal{S}}) = \rho_{\vartheta}\big(\sum_{s\in\mathcal{S}} g^L_s\big)$. This can be seen as a Set Transformer (Lee et al., 2019; Zhang et al., 2022a) model without any inducing points, as for most applications a computational complexity that scales quadratically in the number of modalities can be acceptable. In our experiments, we use layer normalization (Ba et al., 2016) within the transformer model, although, for example, set normalization (Zhang et al., 2022a) could be used alternatively. Note that the PoE's aggregation mechanism involves taking inverses, which can only be approximated by the learned aggregation models. The considered permutation-invariant models can thus only recover a PoE scheme under universal approximation assumptions.

Remark 11 (Context-aware pooling) Assuming a single head for the transformer encoder in Example 2 with head size D and projection matrices $W_Q, W_K, W_V \in \mathbb{R}^{D\times D}$, the attention scores for the initial input sequence $g_{\mathcal{S}} = g^0_{\mathcal{S}} = \chi_{\vartheta}(h_{\mathcal{S}}) \in \mathbb{R}^{|\mathcal{S}|\times D_P}$ are $a(g_s, g_t) = \langle W_Q^{\top}g_s, W_K^{\top}g_t\rangle/\sqrt{D}$. The attention outputs $o_s \in \mathbb{R}^{D}$ for $s\in\mathcal{S}$ can then be written as

$$o_{s}=\frac{1}{Z}\sum_{t\in\mathcal{S}}\kappa(g_{s},g_{t})v(g_{t}),$$

where $Z = \sum_{t\in\mathcal{S}}\kappa(g_s, g_t) > 0$, $v(g_t) = W_V^{\top}g_t$ and

$$\kappa(g_{s},g_{t})=\exp(a(g_{s},g_{t}))=\exp\left(\langle W_{Q}^{\top}g_{s},W_{K}^{\top}g_{t}\rangle/\sqrt{D}\right)$$

can be seen as a learnable non-symmetric kernel (Wright and Gonzalez, 2021; Cao, 2021). Conceptually, the attention encoder pools a learnable D-dimensional function v using a learnable context-dependent weighting function.
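To make this pooling view concrete, the following is a minimal NumPy sketch of the single-head attention outputs of Remark 11, with a permutation-equivariance check; the weight matrices are random stand-ins rather than trained parameters:

```python
import numpy as np

def attention_pool(g, W_q, W_k, W_v):
    """Single-head attention outputs o_s of Remark 11 for encoded features g.

    g: (|S|, D) encoded modality features; W_q, W_k, W_v: (D, D) projections.
    Summing the returned rows yields a permutation-invariant pooled vector.
    """
    D = g.shape[-1]
    scores = (g @ W_q) @ (g @ W_k).T / np.sqrt(D)    # a(g_s, g_t)
    scores -= scores.max(axis=-1, keepdims=True)     # stabilization; weights unchanged
    kappa = np.exp(scores)                           # non-symmetric kernel
    weights = kappa / kappa.sum(axis=-1, keepdims=True)
    return weights @ (g @ W_v)                       # o_s = (1/Z) sum_t kappa * v(g_t)

rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
g = rng.normal(size=(3, 4))                          # three modalities
perm = [2, 0, 1]
assert np.allclose(attention_pool(g, W_q, W_k, W_v)[perm],
                   attention_pool(g[perm], W_q, W_k, W_v))  # equivariance check
```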
While such attention models directly account for the interaction between the different encodings, a DeepSet aggregation approach may require a sufficiently high-dimensional latent space $D_P$ to achieve universal approximation properties (Wagstaff et al., 2022).

Remark 12 (Mixture-of-Product-of-Experts or MoPoEs) Sutter et al. (2021) introduced a MoPoE aggregation scheme that extends MoE or PoE schemes by considering a mixture distribution of all $2^M$ modality subsets, where each mixture component consists of a PoE model, i.e.,

$$q_{\phi}^{\mathrm{MoPoE}}(z|x_{\mathcal{M}})=\frac{1}{2^{M}}\sum_{x_{\mathcal{S}}\in\mathcal{P}(x_{\mathcal{M}})}q_{\phi}^{\mathrm{PoE}}(z|x_{\mathcal{S}}).$$

This can also be seen as another PI model. While it does not require learning separate encoding models for all modality subsets, it, however, becomes computationally expensive to evaluate for large M. Our mixture models using components with a SumPooling or SelfAttention aggregation can be seen as an alternative that allows one to choose the number of mixture components K to be smaller than $2^M$, with non-uniform weights, while the individual mixture components are not constrained to have a PoE form.

Remark 13 (Pooling expert opinions) Combining expert distributions has a long tradition in decision theory and Bayesian inference; see Genest and Zidek (1986) for early works, with popular schemes being linear pooling (i.e., MoE) or log-linear pooling (i.e., PoE with tempered densities). These are optimal schemes for minimizing different objectives, namely a weighted (forward or reverse) KL-divergence between the pooled distribution and the individual experts (Abbas, 2009). Log-linear pooling operators are externally Bayesian, allowing for consistent Bayesian belief updates when each expert updates her belief with the same likelihood function (Genest et al., 1986).

## 3.2 Permutation-Equivariance And Private Latent Variables

In principle, the general permutation-invariant aggregation schemes that have been introduced could also be used for learning multi-modal models with private latent variables. For example, suppose that the generative model factorizes as

$$p_{\theta}(z,x)=p(z)\prod_{s\in{\mathcal{M}}}p_{\theta}(x_{s}|z^{\prime},\tilde{z}_{s})\tag{9}$$

for $z = (z', \tilde{z}_1, \ldots, \tilde{z}_M) \in \mathcal{Z}$, for shared latent variables $Z'$ and private latent variables $\tilde{Z}_s$ for each $s \in \mathcal{M}$. Note that for $s \neq t \in [M]$,

$$X_{s}\perp\tilde{Z}_{t}\mid Z^{\prime},\tilde{Z}_{s}.\tag{10}$$

Consequently,

$$p_{\theta}(z^{\prime},\tilde{z}_{\mathcal{S}},\tilde{z}_{\backslash\mathcal{S}}|x_{\mathcal{S}})=p_{\theta}(z^{\prime},\tilde{z}_{\mathcal{S}}|x_{\mathcal{S}})\,p_{\theta}(\tilde{z}_{\backslash\mathcal{S}}|z^{\prime},\tilde{z}_{\mathcal{S}},x_{\mathcal{S}})=p_{\theta}(z^{\prime},\tilde{z}_{\mathcal{S}}|x_{\mathcal{S}})\,p_{\theta}(\tilde{z}_{\backslash\mathcal{S}}|z^{\prime},\tilde{z}_{\mathcal{S}}).\tag{11}$$

An encoding distribution $q_{\phi}(z|x_{\mathcal{S}})$ that approximates $p_{\theta}(z|x_{\mathcal{S}})$ should thus be unaffected by the inputs $x_{\mathcal{S}}$ when encoding $\tilde{z}_s$ for $s \notin \mathcal{S}$, provided that, a priori, all private and shared latent variables are independent.
Observe that for $f_{\vartheta}$ with the representation from Section 3.1, where $\rho_{\vartheta}$ has aggregated inputs y, and that parameterizes the encoding distribution of $z = (z', \tilde{z}_{\mathcal{S}}, \tilde{z}_{\backslash\mathcal{S}})$, the gradient of its i-th dimension with respect to the modality values $x_s$ is

$$\frac{\partial}{\partial x_{s}}\left[f_{\vartheta}(h_{\mathcal{S}}(x_{\mathcal{S}}))_{i}\right]=\frac{\partial\rho_{\vartheta,i}}{\partial y}\left(\sum_{t\in\mathcal{S}}g_{\vartheta}(h_{\mathcal{S}}(x_{\mathcal{S}}))_{t}\right)\frac{\partial}{\partial x_{s}}\left(\sum_{t\in\mathcal{S}}g_{\vartheta}(h_{\mathcal{S}}(x_{\mathcal{S}}))_{t}\right).$$

In the case of a SumPooling aggregation, the gradient simplifies to

$${\frac{\partial\rho_{\vartheta,i}}{\partial y}}\left(\sum_{t\in{\mathcal{S}}}\chi_{\vartheta}(h_{t}(x_{t}))\right){\frac{\partial\chi_{\vartheta}}{\partial h}}\left(h_{s}(x_{s})\right){\frac{\partial h_{s}(x_{s})}{\partial x_{s}}}.$$

Suppose that the i-th component of $\rho_{\vartheta}$ maps to the mean or log-standard deviation of some component of $\tilde{Z}_s$ for some $s \in \mathcal{M}\setminus\mathcal{S}$. Notice that only the first factor depends on i, so that for this gradient to be zero, $\rho_{\vartheta,i}$ has to be locally constant around $y = \sum_{s\in\mathcal{S}}\chi_{\vartheta}(h_s(x_s))$ if some other components have a non-zero gradient with respect to $X_s$. It is thus very likely that inputs $X_s$ for $s \in \mathcal{S}$ can impact the distribution of the private latent variables $\tilde{z}_{\backslash\mathcal{S}}$. However, the specific generative model also lends itself to an alternative parameterization that guarantees that cross-modal likelihoods from $X_{\backslash\mathcal{S}}$ do not affect the encoding distribution of $\tilde{Z}_{\mathcal{S}}$ under our new variational objective. The assumption of private latent variables suggests an additional permutation-equivariance in the encoding distribution that approximates the posterior in (11), in the sense that for any permutation $\pi \in \mathbb{S}_{\mathcal{S}}$, it holds that

$$q_{\phi}^{\prime}(\tilde{z}_{\mathcal{S}}|\pi\cdot h_{\varphi}(x_{\mathcal{S}}),z^{\prime})=q_{\phi}^{\prime}(\pi\cdot\tilde{z}_{\mathcal{S}}|h_{\varphi}(x_{\mathcal{S}}),z^{\prime}),$$

assuming that all private latent variables are of the same dimension D.5 Indeed, suppose we have modality-specific feature functions $h_{\varphi,s}$ such that $\{H_s = h_{\varphi,s}(X_s)\}_{s\in\mathcal{S}}$ is exchangeable. Clearly, (10) implies for any $s \neq t$ that

$$h_{\varphi,s}(X_{s})\perp\tilde{Z}_{t}\mid Z^{\prime},\tilde{Z}_{s}.$$

The results from Bloem-Reddy and Teh (2020) then imply, for fixed $|\mathcal{S}|$, the existence of a function $f^{\star}$ such that for all $s \in \mathcal{S}$, almost surely,

$$(H_{\mathcal{S}},\tilde{Z}_{s})=(H_{\mathcal{S}},f^{\star}(\Xi_{s},Z^{\prime},H_{s},\mathbb{M}_{H_{\mathcal{S}}})),\,\mathrm{where}\,\Xi_{s}\sim\mathcal{U}[0,1]\,\,\mathrm{iid}\,\,\mathrm{and}\,\,\Xi_{s}\perp\!\!\!\perp H_{\mathcal{S}}.\tag{12}$$

This fact suggests an alternative route to approximate the posterior distribution in (11): First, $p_{\theta}(\tilde{z}_{\backslash\mathcal{S}}|z', \tilde{z}_{\mathcal{S}})$ can often be computed analytically based on the learned or fixed prior distribution. Second, a permutation-invariant scheme can be used to approximate $p_{\theta}(z'|x_{\mathcal{S}})$. Finally, a permutation-equivariant scheme can be employed to approximate $p_{\theta}(\tilde{z}_{\mathcal{S}}|x_{\mathcal{S}}, z')$ with a reparameterization in the form of (12). The variational objective that explicitly uses private latent variables is detailed in Appendix E. Three examples of such permutation-equivariant schemes are given below, with pseudocode for optimizing the variational objective given in Algorithm 2.

5The effective dimension can vary across modalities in practice if the decoders are set to mask redundant latent dimensions.
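Before turning to these examples, the leakage argument above can be checked numerically. The following toy JAX sketch uses arbitrary dense tanh layers as stand-ins for $\chi_{\vartheta}$ and $\rho_{\vartheta}$ (not the paper's architectures) and shows that, generically, every input modality receives a non-zero gradient to every output dimension of a naive sum-pooling encoder, including dimensions meant to parameterize other modalities' private latents:

```python
import jax
import jax.numpy as jnp

# Arbitrary fixture weights for the stand-in networks chi and rho.
A = jnp.array([[0.5, -0.3, 0.8], [1.0, 0.2, -0.7]])                  # rho: D_P -> D_O
B = jnp.array([[0.9, -0.4, 0.1], [0.3, 0.7, -0.6], [-0.2, 0.5, 1.1]])  # chi: D_E -> D_P

chi = lambda h: jnp.tanh(B @ h)
rho = lambda y: jnp.tanh(A @ y)

def f(x):                                  # x: (|S|, D_E) raw modality features
    return rho(jax.vmap(chi)(x).sum(axis=0))

x = jnp.ones((2, 3))                       # two observed modalities
J = jax.jacobian(f)(x)                     # shape (D_O, |S|, D_E)
print((jnp.abs(J) > 0).all())              # generically True: every x_s
# influences every output dimension of the sum-pooling encoder
```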
Note that the assumption $q_{\phi}(\tilde{z}_{\mathcal{S}}|z', \tilde{z}_{\backslash\mathcal{S}}, x_{\mathcal{S}}) = q_{\phi}(\tilde{z}_{\mathcal{S}}|z', x_{\mathcal{S}})$ is an inductive bias that generally decreases the variational objective, as it imposes a restriction on the encoding distribution that only approximates the posterior where this independence assumption holds. However, this independence assumption allows us to respect the modality-specific nature of the private latent variables during encoding. In particular, for some permutation-invariant encoder $q_{\phi}(z'|x_{\mathcal{S}})$ for the shared latent variables and permutation-equivariant encoder $q_{\phi}(\tilde{z}_{\mathcal{S}}|z', x_{\mathcal{S}})$ for the private latent variables of the observed modalities, we can encode via $q_{\phi}(z', \tilde{z}_{\mathcal{M}}|x_{\mathcal{S}}) = q_{\phi}(z'|x_{\mathcal{S}})\,p_{\theta}(\tilde{z}_{\backslash\mathcal{S}}|z')\,q_{\phi}(\tilde{z}_{\mathcal{S}}|z', x_{\mathcal{S}})$ so that the modality-specific information of $x_{\mathcal{S}}$, as encoded via $\tilde{z}_{\mathcal{S}}$, is not impacted by the realisation $\tilde{Z}_{\backslash\mathcal{S}}$ of modality-specific variation from the other modalities.

Example 3 (Permutation-equivariant PoE) Similar to previous work (Wang et al., 2016; Lee and Pavlovic, 2021; Sutter et al., 2020), we consider an encoding density of the form

$$q_{\phi}(z^{\prime},\tilde{z}_{\mathcal{M}}|x_{\mathcal{S}})=q_{\varphi}^{\mathrm{PoE}}(z^{\prime}|x_{\mathcal{S}})\prod_{s\in\mathcal{S}}q_{\mathcal{N}}(\tilde{z}_{s}|\tilde{\mu}_{s,\varphi}(x_{s}),\tilde{\Sigma}_{s,\varphi}(x_{s}))\prod_{s\in\mathcal{M}\setminus\mathcal{S}}p_{\theta}(\tilde{z}_{s}),$$

where

$$q_{\varphi}^{\mathrm{PoE}}(z^{\prime}|x_{\mathcal{S}})=\frac{1}{\mathcal{Z}}p_{\theta}(z^{\prime})\prod_{s\in\mathcal{S}}q_{\mathcal{N}}(z^{\prime}|\mu_{s,\varphi}^{\prime}(x_{s}),\Sigma_{s,\varphi}^{\prime}(x_{s}))$$

is a (permutation-invariant) PoE aggregation, and we assumed that the prior density factorizes over the shared and different private variables. For each modality s, we encode different features $h'_{s,\varphi} = (\mu'_{s,\varphi}, \Sigma'_{s,\varphi})$ and $\tilde{h}_{s,\varphi} = (\tilde{\mu}_{s,\varphi}, \tilde{\Sigma}_{s,\varphi})$ for the shared, respectively, private, latent variables. We followed previous works (Tsai et al., 2019b; Lee and Pavlovic, 2021; Sutter et al., 2020) in that the encodings and prior distributions for the modality-specific latent variables are independent from the shared latent variables. However, this assumption can be relaxed, as long as the distributions remain Gaussian.

Example 4 (Permutation-equivariant Sum-Pooling) We consider an encoding density that is written as

$$q_{\phi}(z^{\prime},\tilde{z}_{\mathcal{M}}|x_{\mathcal{S}})=q_{\phi}^{\mathrm{SumP}}(z^{\prime}|x_{\mathcal{S}})\,q_{\phi}^{\mathrm{Equiv\text{-}SumP}}(\tilde{z}_{\mathcal{S}}|z^{\prime},x_{\mathcal{S}})\prod_{s\in\mathcal{M}\setminus\mathcal{S}}p_{\theta}(\tilde{z}_{s}|z^{\prime}).$$

Here, we use a (permutation-invariant) Sum-Pooling aggregation scheme for constructing the shared latent variable $Z' = \mu'(h_{\mathcal{S}}) + \sigma'(h_{\mathcal{S}}) \odot \Xi' \sim q^{\mathrm{SumP}}_{\phi}(z'|x_{\mathcal{S}})$, where $\Xi' \sim p$ and $f_{\vartheta}\colon \mathbb{R}^{|\mathcal{S}|\times D_E} \to \mathbb{R}^{2D}$ is given as in Example 1 with $[\mu'(h)^{\top}, \log\sigma'(h)^{\top}]^{\top} = f_{\vartheta}(h)$. To sample $\tilde{Z}_{\mathcal{S}} \sim q^{\mathrm{Equiv\text{-}SumP}}_{\phi}(\tilde{z}_{\mathcal{S}}|z', x_{\mathcal{S}})$, consider functions $\chi_{j,\vartheta}\colon \mathbb{R}^{D_E} \to \mathbb{R}^{D_P}$, $j \in [3]$, and $\rho_{\vartheta}\colon \mathbb{R}^{D_P} \to \mathbb{R}^{D_O}$, e.g., fully-connected neural networks. We define $f^{\mathrm{Equiv\text{-}SumP}}_{\vartheta}\colon \mathcal{Z}\times\mathbb{R}^{|\mathcal{S}|\times D_E} \to \mathbb{R}^{|\mathcal{S}|\times D_O}$ via

$$f_{\vartheta}^{\mathrm{Equiv\text{-}SumP}}(z^{\prime},h_{\mathcal{S}})_{s}=\rho_{\vartheta}\left(\left[\sum_{t\in\mathcal{S}}\chi_{0,\vartheta}(h_{t})\right]+\chi_{1,\vartheta}(z^{\prime})+\chi_{2,\vartheta}(h_{s})\right).$$

With $[\tilde{\mu}(h_{\mathcal{S}})^{\top}, \log\tilde{\sigma}(h_{\mathcal{S}})^{\top}]^{\top} = f^{\mathrm{Equiv\text{-}SumP}}_{\vartheta}(z', h_{\mathcal{S}})$, we then set $\tilde{Z}_s = \tilde{\mu}(h_{\mathcal{S}})_s + \tilde{\sigma}(h_{\mathcal{S}})_s \odot \tilde{\Xi}_s$ for $\tilde{\Xi}_s \sim p$ iid and $h_s = h_{\varphi,s}(x_s)$ for modality-specific feature functions $h_{\varphi,s}\colon \mathcal{X}_s \to \mathbb{R}^{D_E}$.
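A minimal NumPy sketch of this permutation-equivariant Sum-Pooling head follows, with the MLPs $\chi_{j,\vartheta}$ and $\rho_{\vartheta}$ passed in as placeholder callables; we assume $D_O = 2D$ so that the output splits into means and log-standard deviations:

```python
import numpy as np

def equiv_sumpool(z_shared, h, chi0, chi1, chi2, rho):
    """Permutation-equivariant Sum-Pooling head of Example 4 (sketch).

    h: (|S|, D_E) encoded features; z_shared: a sample of Z'; chi0, chi1,
    chi2, rho are stand-ins for the MLPs chi_{j,theta} and rho_theta.
    Permuting the rows of h permutes the outputs accordingly.
    """
    pooled = sum(chi0(h_s) for h_s in h) + chi1(z_shared)    # order-independent part
    out = np.stack([rho(pooled + chi2(h_s)) for h_s in h])   # (|S|, D_O)
    mu, log_sigma = np.split(out, 2, axis=-1)                # requires D_O = 2D
    return mu, log_sigma

# Sampling step: z_tilde[s] = mu[s] + np.exp(log_sigma[s]) * xi[s], xi[s] ~ p iid.
```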
Example 5 (Permutation-equivariant Self-Attention) Similar to a Sum-Pooling approach, we consider an encoding density that is written as

$$q_{\phi}(z^{\prime},\tilde{z}_{\mathcal{M}}|x_{\mathcal{S}})=q_{\phi}^{\mathrm{SA}}(z^{\prime}|x_{\mathcal{S}})\,q_{\phi}^{\mathrm{Equiv\text{-}SA}}(\tilde{z}_{\mathcal{S}}|z^{\prime},x_{\mathcal{S}})\prod_{s\in\mathcal{M}\backslash\mathcal{S}}p_{\theta}(\tilde{z}_{s}|z^{\prime}).$$

Here, the shared latent variable $Z'$ is sampled via the permutation-invariant aggregation above by summing the elements of a permutation-equivariant transformer model of depth $L'$. For encoding the private latent variables, we follow the example above but set

$$\left[\tilde{\mu}(h_{\mathcal{S}})^{\top},\log\tilde{\sigma}(h_{\mathcal{S}})^{\top}\right]^{\top}=f_{\vartheta}^{\mathrm{Equiv\text{-}SA}}(z^{\prime},h_{\mathcal{S}})_{s}=g_{s}^{L},$$

with $g^k_{\mathcal{S}} = \mathrm{MTB}_{\vartheta}(g^{k-1}_{\mathcal{S}})$ and $g^0_{\mathcal{S}} = (\chi_{1,\vartheta}(h_s) + \chi_{2,\vartheta}(z'))_{s\in\mathcal{S}}$.

Remark 14 (Cross-modal context variables and permutation-equivariant models) In contrast to the PoE model, where the private encodings are independent, the private encodings are dependent in the Sum-Pooling model by conditioning on a sample from the shared latent space. The shared latent variable $Z'$ can be seen as a shared cross-modal context variable, and similar probabilistic constructions to encode such context variables via permutation-invariant models have been suggested in few-shot learning algorithms (Edwards and Storkey, 2016; Giannone and Winther, 2022) or, particularly, for neural process models (Garnelo et al., 2018b;a; Kim et al., 2018). Permutation-equivariant models have been studied for stochastic processes where invariant priors correspond to equivariant posteriors (Holderrieth et al., 2021), such as Gaussian processes or neural processes with private latent variables, wherein dependencies in the private latent variables can be constructed hierarchically (Wang and Van Hoof, 2020; Xu et al., 2023).

## 4 Identifiability And Model Extensions

## 4.1 Identifiability

Identifiability of parameters and latent variables in latent structure models is a classic problem (Koopmans and Reiersol, 1950; Kruskal, 1976; Allman et al., 2009) that has been studied increasingly for non-linear latent variable models, e.g., for ICA (Hyvarinen and Morioka, 2016; Hälvä and Hyvarinen, 2020; Hälvä et al., 2021), VAEs (Khemakhem et al., 2020a; Zhou and Wei, 2020; Wang et al., 2021; Moran et al., 2021; Lu et al., 2022; Kim et al., 2023), EBMs (Khemakhem et al., 2020b), flow-based (Sorrenson et al., 2020) or mixture models (Kivva et al., 2022). Non-linear generative models are generally unidentifiable without imposing some structure (Hyvärinen and Pajunen, 1999; Xi and Bloem-Reddy, 2022). Yet, identifiability up to some ambiguity can be achieved in some conditional models based on observed auxiliary variables and injective decoder functions, wherein the prior density is conditional on auxiliary variables. Observations from different modalities can act as auxiliary variables to obtain identifiability of conditional distributions given some modality subset under analogous assumptions.

Example 6 (Auxiliary variable as a modality) In the iVAE model (Khemakhem et al., 2020a), the latent variable distribution $p_{\theta}(z|x_1)$ is independently modulated via an auxiliary variable $X_1 = U$. Instead of interpreting this distribution as a (conditional) prior density, we view it as a posterior density given the first modality $X_1$. Khemakhem et al.
(2020a) estimate a model for another modality $X_2$ by lower bounding $\log p_{\theta}(x_2|x_1)$ via $\mathcal{L}_{\backslash\{1\}}$ under the assumption that $q_{\phi}(z|x_1)$ is given by the prior density $p_{\theta}(z|x_1)$. Similarly, Mita et al. (2021) optimize $\log p_{\theta}(x_1, x_2)$ by a double VAE bound that reduces to $\mathcal{L}$ for a masking distribution $\rho(s_1, s_2) = (\delta_1 \otimes \delta_0)(s_1, s_2)$ that always masks the modality $X_2$, choosing to parameterize separate encoding functions for different conditioning sets. Our bound thus generalizes these procedures to multiple modalities in a scalable way.

We are interested in identifiability, conditional on having observed some non-empty modality subset $\mathcal{S} \subset \mathcal{M}$. For illustration, we translate an identifiability result from the uni-modal iVAE setting in Lu et al. (2022), which does not require the conditional independence assumption from Khemakhem et al. (2020a). We assume that the encoding distribution $q_{\phi}(z|x_{\mathcal{S}})$ approximates the true posterior $p_{\theta}(z|x_{\mathcal{S}})$ and belongs to a strongly exponential family, i.e.,

$$p_{\theta}(z|x_{\mathcal{S}})=q_{\phi}(z|x_{\mathcal{S}})=p_{V_{\phi,\mathcal{S}},\lambda_{\phi,\mathcal{S}}}^{\mathrm{EF}}(z|x_{\mathcal{S}}),\tag{13}$$

with

$$p_{V_{\mathcal{S}},\lambda_{\mathcal{S}}}^{\mathrm{EF}}(z|x_{\mathcal{S}})=\mu(z)\exp\left[\langle V_{\mathcal{S}}(z),\lambda_{\mathcal{S}}(x_{\mathcal{S}})\rangle-\log\Gamma_{\mathcal{S}}(\lambda_{\mathcal{S}}(x_{\mathcal{S}}))\right],$$

where µ is a base measure, $V_{\mathcal{S}}\colon \mathcal{Z} \to \mathbb{R}^k$ is the sufficient statistics, $\lambda_{\mathcal{S}}(x_{\mathcal{S}}) \in \mathbb{R}^k$ the natural parameters and $\Gamma_{\mathcal{S}}$ a normalizing term. Furthermore, one can only reduce the exponential component to the base measure on sets having measure zero. In this section, we assume that

$$p_{\theta}(x_{s}|z)=p_{s,\epsilon}(x_{s}-f_{\theta,s}(z))\tag{14}$$

for some fixed noise distribution $p_{s,\epsilon}$ with a Lebesgue density, which excludes observation models for discrete modalities. Let $\Theta_{\mathcal{S}}$ be the domain of the parameters $\theta_{\mathcal{S}} = (f_{\backslash\mathcal{S}}, V_{\mathcal{S}}, \lambda_{\mathcal{S}})$ with $f_{\backslash\mathcal{S}}\colon \mathcal{Z} \ni z \mapsto (f_s(z))_{s\in\mathcal{M}\backslash\mathcal{S}} \in \times_{s\in\mathcal{M}\backslash\mathcal{S}}\,\mathcal{X}_s = \mathcal{X}_{\backslash\mathcal{S}}$. Assuming (13), note that

$$p_{\theta_{\mathcal{S}}}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}})=\int p_{V_{\mathcal{S}},\lambda_{\mathcal{S}}}(z|x_{\mathcal{S}})p_{\backslash\mathcal{S},\epsilon}(x_{\backslash\mathcal{S}}-f_{\backslash\mathcal{S}}(z))\mathrm{d}z,$$

with $p_{\backslash\mathcal{S},\epsilon} = \otimes_{s\in\mathcal{M}\backslash\mathcal{S}}\,p_{s,\epsilon}$. We define an equivalence relation on $\Theta_{\mathcal{S}}$ by $(f_{\backslash\mathcal{S}}, V_{\mathcal{S}}, \lambda_{\mathcal{S}}) \sim_{A_{\mathcal{S}}} (\tilde{f}_{\backslash\mathcal{S}}, \tilde{V}_{\mathcal{S}}, \tilde{\lambda}_{\mathcal{S}})$ iff there exist an invertible $A_{\mathcal{S}} \in \mathbb{R}^{k\times k}$ and $c_{\mathcal{S}} \in \mathbb{R}^k$ such that

$$V_{\mathcal{S}}(f_{\backslash\mathcal{S}}^{-1}(x_{\backslash\mathcal{S}}))=A_{\mathcal{S}}\tilde{V}_{\mathcal{S}}(\tilde{f}_{\backslash\mathcal{S}}^{-1}(x_{\backslash\mathcal{S}}))+c_{\mathcal{S}}$$

for all $x_{\backslash\mathcal{S}} \in \mathcal{X}_{\backslash\mathcal{S}}$.

Proposition 15 (Weak identifiability) Consider the data generation mechanism $p_{\theta}(z, x) = p_{\theta}(z)\prod_{s\in\mathcal{M}}p_{\theta}(x_s|z)$ where the observation model satisfies (14) for an injective $f_{\backslash\mathcal{S}}$. Suppose further that $p_{\theta}(z|x_{\mathcal{S}})$ is strongly exponential and (13) holds. Assume that the set $\{x_{\backslash\mathcal{S}} \in \mathcal{X}_{\backslash\mathcal{S}}\,|\,\varphi_{\backslash\mathcal{S},\epsilon}(x_{\backslash\mathcal{S}}) = 0\}$ has measure zero, where $\varphi_{\backslash\mathcal{S},\epsilon}$ is the characteristic function of the density $p_{\backslash\mathcal{S},\epsilon}$. Furthermore, suppose that there exist $k+1$ points $x^0_{\mathcal{S}}, \ldots, x^k_{\mathcal{S}} \in \mathcal{X}_{\mathcal{S}}$ such that

$$L=\left[\lambda_{\mathcal{S}}(x_{\mathcal{S}}^{1})-\lambda_{\mathcal{S}}(x_{\mathcal{S}}^{0}),\ldots,\lambda_{\mathcal{S}}(x_{\mathcal{S}}^{k})-\lambda_{\mathcal{S}}(x_{\mathcal{S}}^{0})\right]\in\mathbb{R}^{k\times k}$$

is invertible. Then $p_{\theta_{\mathcal{S}}}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}}) = p_{\tilde{\theta}_{\mathcal{S}}}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}})$ for all $x \in \mathcal{X}$ implies $\theta \sim_{A_{\mathcal{S}}} \tilde{\theta}$.

This result follows from Theorem 4 in Lu et al. (2022). Note that $p_{\theta_{\mathcal{S}}}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}}) = p_{\tilde{\theta}_{\mathcal{S}}}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}})$ for all $x \in \mathcal{X}$ implies, with the regularity assumption on $\varphi_{\backslash\mathcal{S},\epsilon}$, that the transformed variables $Z = f^{-1}_{\backslash\mathcal{S}}(X_{\backslash\mathcal{S}})$ and $\tilde{Z} = \tilde{f}^{-1}_{\backslash\mathcal{S}}(X_{\backslash\mathcal{S}})$ have the same density function conditional on $X_{\mathcal{S}}$.
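The invertibility condition of Proposition 15 can be checked numerically for candidate conditioning points. A short NumPy sketch, where `lmbda` is a placeholder for the natural-parameter map $\lambda_{\mathcal{S}}$:

```python
import numpy as np

def check_invertibility(lmbda, xs_points):
    """Check the rank condition of Proposition 15.

    lmbda: callable returning the natural parameters lambda_S(x_S) in R^k;
    xs_points: list of k + 1 candidate conditioning points x_S^0, ..., x_S^k.
    """
    lam0 = lmbda(xs_points[0])
    L = np.stack([lmbda(x) - lam0 for x in xs_points[1:]], axis=1)  # k x k
    return np.linalg.matrix_rank(L) == L.shape[0]

# Toy example where lambda_S is linear in x_S (a hypothetical map, k = 2):
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
print(check_invertibility(lambda x: W @ x, [rng.normal(size=3) for _ in range(3)]))
```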
Remark 16 (Conditional identifiability) The identifiability result above is about conditional models and does not contradict the un-identifiability of VAEs: When $\mathcal{S} = \emptyset$ and we view $x = x_{\mathcal{M}}$ as one modality, the parameters of $p_{\theta_{\emptyset}}(x)$, characterized by the parameters $V_{\emptyset}$ and $\lambda_{\emptyset}$ of the prior $p_{\theta_{\emptyset}}(z|x_{\emptyset})$ and the decoders $f_{\mathcal{M}}$, will not be identifiable, as the invertibility condition will not be satisfied.

Remark 17 (Private latent variables) For models with private latent variables, we might not expect that conditioning on $X_{\mathcal{S}}$ helps to identify $\tilde{Z}_{\backslash\mathcal{S}}$, as

$$p_{\theta}(z^{\prime},\tilde{z}_{\mathcal{S}},\tilde{z}_{\backslash\mathcal{S}}|x_{\mathcal{S}})=p_{\theta}(z^{\prime},\tilde{z}_{\mathcal{S}}|x_{\mathcal{S}})\,p_{\theta}(\tilde{z}_{\backslash\mathcal{S}}|z^{\prime},\tilde{z}_{\mathcal{S}}).$$

Indeed, Proposition 15 will not apply in such models as $f_{\backslash\mathcal{S}}$ will not be injective.

Remark 18 (Data supported on low-dimensional manifolds) Note that (14) and (13) imply that each modality has a Lebesgue density under the generative model. This assumption may not hold for some modalities, such as imaging data that can be supported (closely) on a lower-dimensional manifold (Roweis and Saul, 2000), causing issues in likelihood-based methods such as VAEs (Dai and Wipf, 2018; Loaiza-Ganem et al., 2022). Moreover, different conditioning sets or modalities may result in different dimensions of the underlying manifold for conditional data (Zheng et al., 2022). Some two-step approaches (Dai and Wipf, 2018; Zheng et al., 2022) first estimate the dimension r of the ground-truth manifold as a function of the encoder variance relative to the variance under the (conditional) prior for each latent dimension $i \in [D]$, with $r \leq D$. It would, therefore, be interesting to analyze in future work whether more flexible aggregation schemes that do not impose strong biases on the variance components of the encoder can better learn the manifold dimensions in conditional or multi-modal models following an analogous two-step approach.

Recall that the identifiability considered here concerns parameters of the multi-modal posterior distribution and the conditional generative distribution. It is thus preliminary to estimation and only concerns the generative model and not the inference approach. However, both the multi-modal posterior distribution and the conditional generative distribution are intractable. In practice, we thus replace them with approximations. We believe that our inference approach is beneficial for this type of identifiability when making these variational approximations because (a) unlike some other variational bounds, the posterior is the optimal variational distribution, with $\mathcal{L}_{\backslash\mathcal{S}}(x)$ being an approximation of a lower bound on $\log p_{\theta}(x_{\backslash\mathcal{S}}|x_{\mathcal{S}})$, see Remark 10, and (b) the trainable aggregation schemes can be more flexible for approximating the optimal encoding distribution.

## 4.2 Mixture Models

An alternative to the choice of uni-modal prior densities $p_{\theta}$ has been to use Gaussian mixture priors (Johnson et al., 2016; Jiang et al., 2017; Dilokthanakul et al., 2016) or more flexible mixture models (Falck et al., 2021). Following previous work, we include a latent cluster indicator variable $c \in [K]$ that indicates the mixture component out of K possible mixtures with augmented prior $p_{\theta}(c, z) = p_{\theta}(c)p_{\theta}(z|c)$. The classic example is $p_{\theta}(c)$ being a categorical distribution and $p_{\theta}(z|c)$ a Gaussian with mean $\mu_c$ and covariance matrix $\Sigma_c$. Similar to Falck et al.
(2021), who use an optimal variational factor in a mean-field model, we use an optimal factor of the cluster indicator in a structured variational density $q_{\phi}(c, z|x_{\mathcal{S}}) = q_{\phi}(z|x_{\mathcal{S}})\,q_{\phi}(c|z, x_{\mathcal{S}})$ with $q_{\phi}(c|z, x_{\mathcal{S}}) = p_{\theta}(c|z)$. Appendix G details how one can optimize an augmented multi-modal bound. Concurrent work (Palumbo et al., 2024) considered a similar optimal variational factor for a discrete mixture model under a MoE aggregation.

## 4.3 Missing Modalities

In practical applications, modalities can be missing for different data points. We describe this missingness pattern by missingness mask variables $m_s \in \{0, 1\}$, where $m_s = 1$ indicates that modality s is observed, while $m_s = 0$ means it is missing. The joint generative model with missing modalities will be of the form $p_{\theta}(z, x, m) = p_{\theta}(z)\prod_{s\in\mathcal{M}}p_{\theta}(x_s|z)\,p_{\theta}(m|x)$ for some distribution $p_{\theta}(m|x)$ over the mask variables $m = (m_s)_{s\in\mathcal{M}}$. For $\mathcal{S} \subset \mathcal{M}$, we denote by $x^o_{\mathcal{S}} = \{x_s \colon m_s = 1, s \in \mathcal{S}\}$ and $x^m_{\mathcal{S}} = \{x_s \colon m_s = 0, s \in \mathcal{S}\}$ the set of observed, respectively missing, modalities. The full likelihood of the observed modalities and missingness masks then becomes $p_{\theta}(x^o_{\mathcal{S}}, m) = \int p_{\theta}(z)\prod_{s\in\mathcal{S}}p_{\theta}(x_s|z)\,p_{\theta}(m|x)\,\mathrm{d}x^m_{\mathcal{S}}\,\mathrm{d}z$. If $p_{\theta}(m|x)$ does not depend on the observations, that is, observations are missing completely at random (Rubin, 1976), then the missingness mechanism $p_{\theta}(m|x)$ can be ignored for inference approaches maximizing $p_{\theta}(x^o, m)$. Consequently, one can instead concentrate on maximizing $\log p_{\theta}(x^o)$ only, based on the joint generative model $p_{\theta}(z, x^o) = p_{\theta}(z)\prod_{\{s\in\mathcal{M}\colon m_s = 1\}}p_{\theta}(x_s|z)$. In particular, one can employ the variational objectives above by considering only the observed modalities. Since masking operations are readily supported for the considered permutation-invariant models, appropriate imputation strategies (Nazabal et al., 2020; Ma et al., 2019) for the encoded features of the missing modalities are not necessarily required. Settings allowing for not (completely) at random missingness have been considered in the uni-modal case, for instance, in Ipsen et al. (2021); Ghalebikesabi et al. (2021); Gong et al. (2021), and we leave multi-modal extensions thereof for future work for a given aggregation approach.

## 5 Experiments

We conduct a series of numerical experiments to illustrate the effects of different variational objectives and aggregation schemes. Recall that the full reconstruction log-likelihood is the negative full distortion $-D_{\mathcal{M}}$ based on all modalities, while the full rate $R_{\mathcal{M}}$ is the averaged KL between the encoding distribution of all modalities and the prior. Note that mixture-based bounds directly maximize the cross-modal log-likelihood $-D^c_{\backslash\mathcal{S}}$, see (4), and do not contain a cross-rate term $R_{\backslash\mathcal{S}}$, i.e. the KL between the encoding distribution for all modalities relative to a modality subset, as a regulariser, in contrast to our bound (Lemma 7 and Corollary 8). The log-likelihood should be higher if a generative model is able to capture modality-specific information for models trained with β = 1. For arbitrary β, we can take a rate-distortion perspective and look at how different models self-reconstruct all modalities, i.e., the full reconstruction term $-D_{\mathcal{M}}$, relative to the KL divergence between the multi-modal encoding distribution and the prior, i.e. $R_{\mathcal{M}}$. This corresponds to a rate-distortion analysis of a VAE that merges all modalities into a single modality. A high full-reconstruction term is thus indicative of the encoder and decoder being able to reconstruct all modalities precisely so that they do not produce an average prediction.
Note that neither our objective nor the mixture-based bound optimizes for the full-reconstruction term directly.

## 5.1 Linear Multi-Modal VAEs

The relationship between uni-modal VAEs and probabilistic PCA (Tipping and Bishop, 1999) has been studied in previous work (Dai et al., 2018; Lucas et al., 2019; Rolinek et al., 2019; Huang et al., 2020; Mathieu et al., 2019). We analyze how different multi-modal fusion schemes and multi-modal variational objectives affect (a) the learned generative model in terms of its true marginal log-likelihood (LLH) and (b) the latent representations in terms of information-theoretic quantities and identifiability. To evaluate the (weak) identifiability of the method, we follow Khemakhem et al. (2020a;b) and compute the mean correlation coefficient (MCC) between the true latent variables Z and samples from the variational distribution $q_{\phi}(\cdot|x_{\mathcal{M}})$ after an affine transformation using CCA.

Generative model. Suppose that a latent variable Z taking values in $\mathbb{R}^D$, sampled from a standard Gaussian prior $p_{\theta}(z) = \mathcal{N}(0, \mathrm{I})$, generates M data modalities $X_s \in \mathbb{R}^{D_s}$, $D \leq D_s$, based on a linear decoding model $p_{\theta}(x_s|z) = \mathcal{N}(W_s z + b_s, \sigma^2\mathrm{I})$ for a factor loading matrix $W_s \in \mathbb{R}^{D_s\times D}$, bias $b_s \in \mathbb{R}^{D_s}$ and observation scale σ > 0. Note that the annealed likelihood function $\tilde{p}_{\beta,\theta}(x_s|z) = \mathcal{N}(W_s z + b_s, \beta\sigma^2\mathrm{I})$ corresponds to a scaling of the observation noise, so that we consider only the choice σ = 1, set $\sigma_{\beta} = \sigma\beta^{1/2}$ and vary β > 0. It is obvious that for any $\mathcal{S} \subset \mathcal{M}$, it holds that $\tilde{p}_{\beta,\theta}(x_{\mathcal{S}}|z) = \mathcal{N}(W_{\mathcal{S}}z + b_{\mathcal{S}}, \sigma^2_{\beta}\mathrm{I}_{\mathcal{S}})$, where $W_{\mathcal{S}}$ and $b_{\mathcal{S}}$ are given by concatenating row-wise the emission or bias matrices for modalities in $\mathcal{S}$, while $\sigma^2_{\beta}\mathrm{I}_{\mathcal{S}}$ is the diagonal matrix of the variances of the corresponding observations. By standard properties of Gaussian distributions, it follows that $\tilde{p}_{\beta,\theta}(x_{\mathcal{S}}) = \mathcal{N}(b_{\mathcal{S}}, C_{\mathcal{S}})$, where $C_{\mathcal{S}} = W_{\mathcal{S}}W_{\mathcal{S}}^{\top} + \sigma^2_{\beta}\mathrm{I}_{\mathcal{S}}$ is the data covariance matrix. Furthermore, with $K_{\mathcal{S}} = W_{\mathcal{S}}^{\top}W_{\mathcal{S}} + \sigma^2_{\beta}\mathrm{I}_d$, the adjusted posterior is $\tilde{p}_{\beta,\theta}(z|x_{\mathcal{S}}) = \mathcal{N}(K^{-1}_{\mathcal{S}}W_{\mathcal{S}}^{\top}(x_{\mathcal{S}} - b_{\mathcal{S}}),\, \sigma^2_{\beta}K^{-1}_{\mathcal{S}})$. If we sample orthogonal rows of W, the posterior covariance becomes diagonal so that it can - in principle - be well approximated by an encoding distribution with a diagonal covariance matrix. Indeed, the inverse of the posterior covariance matrix is only a function of the generative parameters of the modalities within $\mathcal{S}$ and can be written as the sum $\sigma^2_{\beta}\mathrm{I} + W_{\mathcal{S}}^{\top}W_{\mathcal{S}} = \sigma^2_{\beta}\mathrm{I} + \sum_{s\in\mathcal{S}}W_s^{\top}W_s$, while the posterior mean function is $x_{\mathcal{S}} \mapsto (\sigma^2_{\beta}\mathrm{I} + \sum_{s\in\mathcal{S}}W_s^{\top}W_s)^{-1}\sum_{s\in\mathcal{S}}W_s^{\top}(x_s - b_s)$.

Illustrative example. We consider a bi-modal setup comprising a less noisy and a more noisy modality. Concretely, for a latent variable $Z = (Z_0, Z_1, Z_2) \in \mathbb{R}^3$, assume that the observed modalities can be represented as

$$\begin{array}{l}{{X_{1}=Z_{0}+Z_{1}+U_{1}}}\\ {{X_{2}=Z_{0}+10Z_{2}+U_{2},}}\end{array}$$

for a standard Gaussian prior $Z \sim \mathcal{N}(0, \mathrm{I})$ and independent noise variables $U_1, U_2 \sim \mathcal{N}(0, 1)$. Note that the second modality is more noisy compared to the first one.

Table 1: Gaussian model with a noisy and less noisy modality. Relative difference of the true MLE vs the (analytical) LLH from the learned model in the first two columns, followed by multi-modal information-theoretic quantities.
| Aggregation | LLH gap (our obj.) | LLH gap (mixture) | Full Rec. (our obj.) | Full Rec. (mixture) | Full Rates (our obj.) | Full Rates (mixture) | Cross Pred. (our obj.) | Cross Pred. (mixture) | Cross Rates (our obj.) | Cross Rates (mixture) |
|---|---|---|---|---|---|---|---|---|---|---|
| PoE | 1.29 | 7.11 | -2.30 · 10^35 | -2.2 · 10^35 | 2.1 · 10^35 | 2.0 · 10^35 | -2.4 · 10^34 | -1.9 · 10^35 | 1.4 · 10^35 | 1.7 · 10^35 |
| MoE | 0.11 | 0.6 | -32.07 | -30.09 | 1.02 | 2.84 | -33.27 | -28.52 | 2.37 | 19.33 |
| SumPooling | 3.6 · 10^-5 | 0.06 | -2.84 | -3.23 | 2.88 | 2.82 | -52.58 | -27.26 | 1.42 | 27.35 |
| SelfAttention | 3.4 · 10^-5 | 0.06 | -2.85 | -3.23 | 2.87 | 2.82 | -52.59 | -27.25 | 1.42 | 27.41 |

Table 2: Gaussian model with five modalities: Relative difference of true LLH to the learned LLH, and MCC to the true latent. The generative model for the invariant aggregation schemes uses dense decoders, whereas the ground truth model for the permutation-equivariant encoders uses sparse decoders to account for private latent variables. We report mean values with standard deviations in parentheses over five independent runs.

| Aggregation | Inv., proposed: LLH gap | MCC | Inv., mixture: LLH gap | MCC | Equiv., proposed: LLH gap | MCC | Equiv., mixture: LLH gap | MCC |
|---|---|---|---|---|---|---|---|---|
| PoE | 0.03 (0.058) | 0.75 (0.20) | 0.04 (0.074) | 0.77 (0.21) | 0.00 (0.000) | 0.91 (0.016) | 0.01 (0.001) | 0.88 (0.011) |
| MoE | 0.01 (0.005) | 0.82 (0.04) | 0.02 (0.006) | 0.67 (0.03) | n/a | n/a | n/a | n/a |
| SumPooling | 0.00 (0.000) | 0.84 (0.00) | 0.00 (0.002) | 0.84 (0.02) | 0.00 (0.000) | 0.85 (0.004) | 0.00 (0.000) | 0.82 (0.003) |
| SelfAttention | 0.00 (0.003) | 0.84 (0.00) | 0.02 (0.007) | 0.83 (0.00) | 0.00 (0.000) | 0.83 (0.006) | 0.00 (0.000) | 0.83 (0.003) |

The results in Table 1 for the obtained log-likelihood values show first that learnable aggregation models yield higher log-likelihoods6, and second that our bound yields higher log-likelihood values compared to mixture-based bounds for any given fixed aggregation model. We also compute various information-theoretic quantities, confirming that our bound leads to higher full reconstructions at higher full rates and lower cross predictions at lower cross rates compared to mixture-based bounds. More flexible aggregation schemes increase the full and cross predictions for any given bound while not necessarily increasing the full or cross rates, i.e., they can result in an improved point within a rate-distortion curve for some configurations.

6We found that a PoE model can have numerical issues here.

Simulation study. We consider M = 5 modalities following multi-variate Gaussian laws. We consider generative models where all latent variables are shared across all modalities, as well as generative models where only parts of the latent variables are shared across all modalities, while the remaining latent variables are modality-specific. The setting of private latent variables can be incorporated by imposing sparsity structures on the decoding matrices and allows us to analyze scenarios with considerable modality-specific variation described through private latent variables. We provide more details about the data generation mechanisms in Appendix J.
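Since the posterior in this linear setting is available analytically (see the formulas at the start of Section 5.1), it provides a reference point for the learned encoders in both the illustrative example and the simulation study. A minimal NumPy sketch of those formulas, with an illustrative interface:

```python
import numpy as np

def exact_posterior(Ws, bs, xs, sigma_beta):
    """Exact Gaussian posterior p_beta(z | x_S) of the linear model in Section 5.1.

    Ws, bs, xs: per-modality loading matrices W_s, biases b_s and observations
    x_s for the conditioning subset S; sigma_beta = sigma * beta**0.5.
    """
    D = Ws[0].shape[1]
    K = sigma_beta**2 * np.eye(D) + sum(W.T @ W for W in Ws)          # K_S
    mean = np.linalg.solve(K, sum(W.T @ (x - b) for W, b, x in zip(Ws, bs, xs)))
    cov = sigma_beta**2 * np.linalg.inv(K)                            # sigma^2 K^-1
    return mean, cov
```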
For illustration, we use multi-modal encoders with shared latent variables using invariant aggregations in the first case, and multi-modal encoders that utilize additional equivariant aggregations for the private latent variables in the second case. Results in Table 2 suggest that more flexible aggregation schemes improve the LLH and the identifiability for both variational objectives. Furthermore, our new bound yields higher LLH for a given aggregation scheme.

## 5.2 Non-Linear Identifiable Models

Auxiliary labels as modalities. We construct artificial data following Khemakhem et al. (2020a), with the latent variables $Z \in \mathbb{R}^D$ being conditionally Gaussian having means and variances that depend on an observed index value $X_2 \in [K]$. More precisely, $p_{\theta}(z|x_2) = \mathcal{N}(\mu_{x_2}, \Sigma_{x_2})$, where $\mu_c \sim \otimes\,\mathcal{U}(-5, 5)$ and $\Sigma_c = \mathrm{diag}(\Lambda_c)$, $\Lambda_c \sim \otimes\,\mathcal{U}(0.5, 3)$ iid for $c \in [K]$. The marginal distribution over the labels is uniform $\mathcal{U}([K])$, so that the prior density $p_{\theta}(z) = \int_{[K]}p_{\theta}(z|x_2)p_{\theta}(x_2)\mathrm{d}x_2$ becomes a Gaussian mixture. We choose an injective decoding function $f_1\colon \mathbb{R}^D \to \mathbb{R}^{D_1}$, $D \leq D_1$, as a composition of MLPs with LeakyReLUs and full-rank weight matrices having monotonically increasing row dimensions (Khemakhem et al., 2020b), with iid randomly sampled entries. We assume $X_1|Z \sim \mathcal{N}(f_1(Z), \sigma^2\mathrm{I})$ and set σ = 0.1, $D = D_1 = 2$; $f_1$ has a single hidden layer of size $D_1 = 2$.

Figure 1: Continuous data modality in (a) and reconstructions using different bounds and fusion models in (b)-(e). The true latent variables are shown in (f), with the inferred latent variables in (g)-(j) with a linear transformation indeterminacy. Labels are color-coded.

One realization of bi-modal data X, the true latent variable Z, as well as inferred latent variables and reconstructed data for a selection of different bounds and aggregation schemes, are shown in Figure 1, with more examples given in Figures 4 and 5. We find that learning the aggregation model through a SumPooling model improves the data reconstruction and better recovers the ground-truth latents, up to rotations, in contrast to a PoE model. Simulating five different such datasets, the results in Table 3 indicate first that our bound obtains better log-likelihood estimates for different fusion schemes. Second, it demonstrates the advantages of our new fusion schemes that achieve better log-likelihoods for both bounds. Third, it shows the benefit of using aggregation schemes that have the capacity to accommodate prior distributions different from a single Gaussian. Also, MoE schemes lead to low MCC values, while PoE schemes have high MCC values.

Table 3: Non-linear identifiable model with one real-valued modality and an auxiliary label acting as a second modality: The first four rows use a fixed standard Gaussian prior, while the last four rows use a Gaussian mixture prior with 5 components. Mean and standard deviation over 4 repetitions. Log-likelihoods are estimated using importance sampling with 64 particles.
| Aggregation | LLH β=1 (proposed) | MCC β=1 (proposed) | MCC β=0.1 (proposed) | LLH β=1 (mixture) | MCC β=1 (mixture) | MCC β=0.1 (mixture) |
|---|---|---|---|---|---|---|
| PoE | -43.4 (10.74) | 0.98 (0.006) | 0.99 (0.003) | -318 (361.2) | 0.97 (0.012) | 0.98 (0.007) |
| MoE | -20.5 (6.18) | 0.94 (0.013) | 0.93 (0.022) | -57.9 (6.23) | 0.93 (0.017) | 0.93 (0.025) |
| SumPooling | -17.9 (3.92) | 0.99 (0.004) | 0.99 (0.002) | -18.9 (4.09) | 0.99 (0.005) | 0.99 (0.008) |
| SelfAttention | -18.2 (4.17) | 0.99 (0.004) | 0.99 (0.003) | -18.6 (3.73) | 0.99 (0.004) | 0.99 (0.007) |
| SumPooling | -15.4 (2.12) | 1.00 (0.001) | 0.99 (0.004) | -18.6 (2.36) | 0.98 (0.008) | 0.99 (0.006) |
| SelfAttention | -15.2 (2.05) | 1.00 (0.001) | 1.00 (0.004) | -18.6 (2.27) | 0.98 (0.014) | 0.98 (0.006) |
| SumPoolingMixture | -15.1 (2.15) | 1.00 (0.001) | 0.99 (0.012) | -18.2 (2.80) | 0.98 (0.010) | 0.99 (0.005) |
| SelfAttentionMixture | -15.3 (2.35) | 0.99 (0.005) | 0.99 (0.004) | -18.4 (2.63) | 0.99 (0.007) | 0.99 (0.007) |

Multiple modalities. Considering the same generative model for Z with a Gaussian mixture prior, suppose now that instead of observing the auxiliary label, we observe multiple modalities $X_s \in \mathbb{R}^{D_s}$, $X_s|Z \sim \mathcal{N}(f_s(Z), \sigma^2\mathrm{I})$, for injective MLPs $f_s$ constructed as above, with D = 10, $D_s$ = 25, σ = 0.5 and K = M = 5. We consider a semi-supervised setting where modalities are missing completely at random, as in Zhang et al. (2019), with a missing rate η given as the sample average of $\frac{1}{|\mathcal{M}|}\sum_{s\in\mathcal{M}}(1 - M_s)$. Table 4 shows that using the new variational objective improves the LLH and the identifiability of the latent representation. Furthermore, using learnable aggregation schemes benefits both variational objectives.

Table 4: Partially observed (η = 0.5) and fully observed (η = 0) non-linear identifiable model with 5 modalities: The first four rows use a fixed standard Gaussian prior, while the last four rows use a Gaussian mixture prior.

| Aggregation | LLH part. (prop.) | MCC part. (prop.) | LLH part. (mix.) | MCC part. (mix.) | LLH full (prop.) | MCC full (prop.) | LLH full (mix.) | MCC full (mix.) |
|---|---|---|---|---|---|---|---|---|
| PoE | -250.9 (5.19) | 0.94 (0.015) | -288.4 (8.53) | 0.93 (0.018) | -473.6 (9.04) | 0.98 (0.005) | -497.7 (11.26) | 0.97 (0.008) |
| MoE | -250.1 (4.77) | 0.92 (0.022) | -286.2 (7.63) | 0.90 (0.019) | -477.9 (8.50) | 0.91 (0.014) | -494.6 (9.20) | 0.92 (0.004) |
| SumPooling | -249.6 (4.85) | 0.95 (0.016) | -275.6 (7.35) | 0.92 (0.031) | -471.4 (8.29) | 0.99 (0.004) | -480.5 (8.84) | 0.98 (0.005) |
| SelfAttention | -249.7 (4.83) | 0.95 (0.014) | -275.5 (7.45) | 0.93 (0.022) | -471.4 (8.97) | 0.99 (0.002) | -482.8 (10.51) | 0.98 (0.004) |
| SumPooling | -247.3 (4.23) | 0.95 (0.009) | -269.6 (7.42) | 0.94 (0.018) | -465.4 (8.16) | 0.98 (0.002) | -475.1 (7.54) | 0.98 (0.003) |
| SelfAttention | -247.5 (4.22) | 0.95 (0.013) | -269.9 (6.06) | 0.93 (0.022) | -469.3 (4.76) | 0.98 (0.003) | -474.7 (8.20) | 0.98 (0.002) |
| SumPoolingMixture | -244.8 (4.44) | 0.95 (0.011) | -271.9 (6.54) | 0.93 (0.021) | -464.5 (8.16) | 0.99 (0.003) | -474.2 (7.61) | 0.98 (0.004) |
| SelfAttentionMixture | -245.4 (4.55) | 0.96 (0.010) | -270.3 (5.96) | 0.94 (0.016) | -464.4 (8.50) | 0.99 (0.003) | -473.6 (8.24) | 0.98 (0.002) |

Table 5: Test LLH estimates for the joint data (M+S+T) and marginal data (importance sampling with 512 particles). The first part of the table is based on the same generative model with shared latent variable $Z \in \mathbb{R}^{40}$, while the second part of the table is based on a restrictive generative model with a shared latent variable $Z' \in \mathbb{R}^{10}$ and modality-specific latent variables $\tilde{Z}_s \in \mathbb{R}^{10}$.

| Aggregation | M+S+T (prop.) | M (prop.) | S (prop.) | T (prop.) | M+S+T (mix.) | M (mix.) | S (mix.) | T (mix.) |
|---|---|---|---|---|---|---|---|---|
| PoE+ | 6872 (9.62) | 2599 (5.6) | 4317 (1.1) | -9 (0.2) | 5900 (10) | 2449 (10.4) | 3443 (11.7) | -19 (0.4) |
| PoE | 6775 (54.9) | 2585 (18.7) | 4250 (8.1) | -10 (2.2) | 5813 (1.2) | 2432 (11.6) | 3390 (17.5) | -19 (0.1) |
| MoE+ | 5428 (73.5) | 2391 (104) | 3378 (92.9) | -74 (88.7) | 5420 (60.1) | 2364 (33.5) | 3350 (58.1) | -112 (133.4) |
| MoE | 5597 (26.7) | 2449 (7.6) | 3557 (26.4) | -11 (0.1) | 5485 (4.6) | 2343 (1.8) | 3415 (5.0) | -17 (0.4) |
| SumPooling | 7056 (124) | 2478 (9.3) | 4640 (114) | -6 (0.0) | 6130 (4.4) | 2470 (10.3) | 3660 (1.5) | -16 (1.6) |
| SelfAttention | 7011 (57.9) | 2508 (18.2) | 4555 (38.1) | -7 (0.5) | 6127 (26.1) | 2510 (12.7) | 3621 (8.5) | -13 (0.2) |
| *Private latents:* | | | | | | | | |
| PoE+ | 6549 (33.2) | 2509 (7.8) | 4095 (37.2) | -7 (0.2) | 5869 (29.6) | 2465 (4.3) | 3431 (8.3) | -19 (1.7) |
| SumPooling | 6337 (24.0) | 2483 (9.8) | 3965 (16.9) | -6 (0.2) | 5930 (23.8) | 2468 (16.8) | 3491 (18.3) | -7 (0.1) |
| SelfAttention | 6662 (20.0) | 2516 (8.8) | 4247 (31.2) | -6 (0.4) | 6716 (21.8) | 2430 (26.9) | 4282 (49.7) | -27 (1.1) |

## 5.3 MNIST-SVHN-Text

Following previous work (Sutter et al., 2020; 2021; Javaloy et al., 2022), we consider a tri-modal dataset based on augmenting the MNIST-SVHN dataset (Shi et al., 2019) with a text-based modality. Herein, SVHN consists of relatively noisy images, whilst MNIST and text are clearer modalities. Multi-modal VAEs have been shown to exhibit differing performances relative to their multi-modal coherence, latent classification accuracy or test LLH, see Appendix I for definitions. Previous works often differ in their hyperparameters, from neural network architectures, latent space dimensions, priors and likelihood families, likelihood weightings, decoder variances, etc. We have chosen the same hyperparameters for all models, thereby providing a clearer disentanglement of how either the variational objective or the aggregation scheme affects different multi-modal evaluation measures. In particular, we consider multi-modal generative models with (i) shared latent variables and (ii) private and shared latent variables. We also consider PoE or MoE schemes (denoted PoE+, resp., MoE+) with additional neural network layers in their modality-specific encoding functions so that the number of parameters matches or exceeds those of the introduced PI models, see Appendix M.5 for details. For models without private latent variables, estimates of the test LLHs in Table 5 suggest that our bound improves the LLH across different aggregation schemes for all modalities and different βs (Table 7), with similar results for PE schemes, except for a Self-Attention model. More flexible fusion schemes yield higher LLHs for both bounds.
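The test log-likelihoods in Tables 3-5 are estimated by importance sampling with the encoder as proposal. A minimal NumPy sketch of such an estimator, with placeholder callables standing in for a trained model:

```python
import numpy as np

def is_llh(x, encode_sample, log_joint, log_q, n_particles=512):
    """Importance-sampling estimate of log p_theta(x).

    encode_sample(x, n) -> n latent samples z ~ q_phi(z|x);
    log_joint(x, z) -> log p_theta(x, z) per particle;
    log_q(x, z) -> log q_phi(z|x) per particle. All are stand-in callables.
    """
    z = encode_sample(x, n_particles)
    log_w = log_joint(x, z) - log_q(x, z)       # (n_particles,) log weights
    m = log_w.max()                             # stabilized log-mean-exp
    return m + np.log(np.mean(np.exp(log_w - m)))
```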
Qualitative results for the reconstructed modalities are given in Figure 2 for shared latent variables, in Figure 8 for different β-hyperparameters and in Figure 9 for models with private latent variables. Cross-generation of the SVHN modality is challenging for the mixture-based bound with all aggregation schemes. In contrast, our bound, particularly when combined with learnable aggregation schemes, leads to more realistic samples of the cross-generated SVHN modality. No variational objective or aggregation scheme performs best across all modalities by the generative coherence measures (see Table 6 for uni-modal inputs, Table 8 for bi-modal ones and Tables 9-12 for models with private latent variables and different βs), along with reported results from external baselines (MVAE, MMVAE, MoPoE, MMJSD, MVTCAE). Overall, our objective is slightly more coherent for cross-generating SVHN or Text, but less coherent for MNIST. The mixture-based bound tends to improve the unsupervised latent classification accuracy across different fusion approaches and modalities, see Table 13. To provide complementary insights into the trade-offs for the different objectives and fusion schemes, we consider a multi-modal rate-distortion evaluation in Figure 3. Ignoring MoE, where reconstructions are similar, our bound improves the full reconstruction with higher full rates and across various fusion schemes. The mixture-based bound yields improved cross-predictions for all aggregation models, with increased cross-rate terms. Flexible PI architectures for our bound improve the full reconstruction, even at lower full rates.

Figure 2: Conditional generation for different aggregation schemes and bounds and shared latent variables. The first column is the conditioned modality. The next three columns are the generated modalities using a SumPooling aggregation, followed by three columns for a SelfAttention aggregation, followed by PoE+, and lastly MoE+.

## 5.4 Summary Of Experimental Results

We presented a series of numerical experiments that illustrate the benefits of learning more flexible aggregation models and that optimizing our variational objective leads to higher log-likelihood values. Overall, we find that for a given choice of aggregation scheme, our objective achieves a higher log-likelihood across the different experiments. Likewise, fixing the variational objective, we observe that Sum-Pooling or Self-Attention encoders achieve higher multi-modal log-likelihoods compared to MoE or PoE schemes. Moreover, we demonstrate that our variational objective results in models that differ in their information-theoretic quantities compared to those models trained with a mixture-based bound. In particular, our variational objective achieves higher full-reconstruction terms with higher full rates across different data sets, aggregation schemes, and beta values. Conversely, the mixture-based bound improves the cross-prediction while having higher cross-rate terms.
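The conditional coherence numbers reported next are, as we understand the protocol referenced in Appendix I and Sutter et al. (2021), computed with pretrained uni-modal classifiers applied to cross-generated samples. A hedged NumPy sketch, with placeholder callables for the generator and classifier:

```python
import numpy as np

def conditional_coherence(cross_generate, classify, inputs, labels):
    """Fraction of cross-generated samples classified with the input's label.

    cross_generate(x_in) -> sample of the target modality given an input
    modality; classify -> digit label from a pretrained uni-modal classifier.
    Both are stand-ins; the exact evaluation protocol is given in Appendix I.
    """
    preds = np.array([classify(cross_generate(x)) for x in inputs])
    return float(np.mean(preds == np.asarray(labels)))
```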
Table 6: Conditional coherence with shared latent variables and uni-modal inputs. Each column G←I reports the coherence of the generated modality G given the input modality I (M: MNIST, S: SVHN, T: Text).

Proposed objective:

| Aggregation | M←M | M←S | M←T | S←M | S←S | S←T | T←M | T←S | T←T |
|---|---|---|---|---|---|---|---|---|---|
| PoE | 0.97 | 0.22 | 0.56 | 0.29 | 0.60 | 0.36 | 0.78 | 0.43 | 1.00 |
| PoE+ | 0.97 | 0.15 | 0.63 | 0.24 | 0.63 | 0.42 | 0.79 | 0.35 | 1.00 |
| MoE | 0.96 | 0.80 | 0.99 | 0.11 | 0.59 | 0.11 | 0.44 | 0.37 | 1.00 |
| MoE+ | 0.93 | 0.77 | 0.95 | 0.11 | 0.54 | 0.10 | 0.44 | 0.37 | 0.98 |
| SumPooling | 0.97 | 0.48 | 0.87 | 0.25 | 0.72 | 0.36 | 0.73 | 0.48 | 1.00 |
| SelfAttention | 0.97 | 0.44 | 0.79 | 0.20 | 0.71 | 0.36 | 0.61 | 0.43 | 1.00 |

Mixture bound:

| Aggregation | M←M | M←S | M←T | S←M | S←S | S←T | T←M | T←S | T←T |
|---|---|---|---|---|---|---|---|---|---|
| PoE | 0.96 | 0.83 | 0.99 | 0.11 | 0.57 | 0.10 | 0.44 | 0.39 | 1.00 |
| PoE+ | 0.96 | 0.83 | 0.99 | 0.11 | 0.59 | 0.11 | 0.45 | 0.39 | 1.00 |
| MoE | 0.94 | 0.81 | 0.97 | 0.10 | 0.54 | 0.10 | 0.45 | 0.39 | 1.00 |
| MoE+ | 0.94 | 0.80 | 0.98 | 0.10 | 0.53 | 0.10 | 0.45 | 0.39 | 1.00 |
| SumPooling | 0.97 | 0.86 | 0.99 | 0.10 | 0.63 | 0.10 | 0.45 | 0.40 | 1.00 |
| SelfAttention | 0.97 | 0.86 | 0.99 | 0.10 | 0.63 | 0.11 | 0.45 | 0.40 | 1.00 |

Results from Sutter et al. (2021), Sutter et al. (2020) and Hwang et al. (2021):

| Method | M←M | M←S | M←T | S←M | S←S | S←T | T←M | T←S | T←T |
|---|---|---|---|---|---|---|---|---|---|
| MVAE | NA | 0.24 | 0.20 | 0.43 | NA | 0.30 | 0.28 | 0.17 | NA |
| MMVAE | NA | 0.75 | 0.99 | 0.31 | NA | 0.30 | 0.96 | 0.76 | NA |
| MoPoE | NA | 0.74 | 0.99 | 0.36 | NA | 0.34 | 0.96 | 0.76 | NA |
| MMJSD | NA | 0.82 | 0.99 | 0.37 | NA | 0.36 | 0.97 | 0.83 | NA |
| MVTCAE (w/o T) | NA | 0.60 | NA | 0.82 | NA | NA | NA | NA | NA |

Figure 3: Rate and distortion terms for MNIST-SVHN-Text with shared latent variables (β = 1) for our proposed objective ('Masked') and the 'Mixture'-based bound.

## 6 Conclusion

Limitations. A drawback of our bound is that computing a gradient step is more expensive, as it requires drawing samples from two encoding distributions. Similarly, learning aggregation functions is more computationally expensive compared to fixed schemes. Mixture-based bounds might be preferred if one is interested primarily in cross-modal reconstructions.

Outlook. Using modality-specific encoders to learn features and aggregating them with a PI function is clearly not the only choice for building multi-modal encoding distributions. However, it allows us to utilize modality-specific architectures for the encoding functions. Alternatively, our bounds could also be used, e.g., when multi-modal transformer architectures (Xu et al., 2022) encode a distribution on a shared latent space. Our approach applies to general prior densities if we can compute their cross-entropy relative to the multi-modal encoding distributions. An example would be to apply it with more flexible prior distributions, e.g., as specified via score-based diffusion models (Vahdat et al., 2021). Likewise, diffusion models could be utilized to specify a PI conditional prior distribution in the conditional bound by utilizing permutation-equivariant score models (Dutordoir et al., 2023; Yim et al., 2023; Mathieu et al., 2023).

## References

A. E. Abbas.
A Kullback-Leibler view of linear and log-linear pools. *Decision Analysis*, 6(1):25–37, 2009. S. Akaho. A kernel method for Canonical Correlation Analysis. In International Meeting of Psychometric Society, 2001, 2001. A. Alemi, B. Poole, I. Fischer, J. Dillon, R. A. Saurous, and K. Murphy. Fixing a broken ELBO. In International conference on machine learning, pages 159–168. PMLR, 2018. A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep Variational Information Bottleneck. *arXiv* preprint arXiv:1612.00410, 2016. E. S. Allman, C. Matias, and J. A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. *The Annals of Statistics*, 37(6A):3099–3132, 2009. C. Archambeau and F. Bach. Sparse probabilistic projections. *Advances in neural information processing* systems, 21, 2008. R. Argelaguet, B. Velten, D. Arnol, S. Dietrich, T. Zenz, J. C. Marioni, F. Buettner, W. Huber, and O. Stegle. Multi-Omics Factor Analysis—a framework for unsupervised integration of multi-omics data sets. *Molecular systems biology*, 14(6):e8124, 2018. J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016. F. R. Bach and M. I. Jordan. A Probabilistic Interpretation of Canonical Correlation Analysis. 2005. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. F. Bao, S. Nie, K. Xue, C. Li, S. Pu, Y. Wang, G. Yue, Y. Cao, H. Su, and J. Zhu. One transformer fits all distributions in multi-modal diffusion at scale. In *International Conference on Machine Learning*, pages 1692–1717. PMLR, 2023. D. Barber and F. Agakov. The IM Algorithm: a variational approach to Information Maximization. *Advances* in neural information processing systems, 16(320):201, 2004. S. Bartunov, F. B. Fuchs, and T. P. Lillicrap. Equilibrium aggregation: Encoding sets via optimization. In Uncertainty in Artificial Intelligence, pages 139–149. PMLR, 2022. M. Biloš and S. Günnemann. Scalable normalizing flows for permutation invariant densities. In International Conference on Machine Learning, pages 957–967. PMLR, 2021. D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017. B. Bloem-Reddy and Y. W. Teh. Probabilistic symmetries and invariant neural networks. J. Mach. Learn. Res., 21:90–1, 2020. M. Bounoua, G. Franzese, and P. Michiardi. Multi-modal latent diffusion. *arXiv preprint arXiv:2306.04445*, 2023. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. M. Browne. Factor analysis of multiple batteries by maximum likelihood. British Journal of Mathematical and Statistical Psychology, 1980. A. Bruno, J. Willette, J. Lee, and S. J. Hwang. Mini-batch consistent slot set encoder for scalable set encoding. *Advances in Neural Information Processing Systems*, 34:21365–21374, 2021. S. Cao. Choose a transformer: Fourier or Galerkin. *Advances in neural information processing systems*, 34: 24924–24940, 2021. S. Chatterjee, P. Diaconis, et al. The sample size required in importance sampling. *The Annals of Applied* Probability, 28(2):1099–1135, 2018. Z. Chen, V. Badrinarayanan, C.-Y. Lee, and A. Rabinovich. 
Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In *International conference on machine learning*, pages 794–803. PMLR, 2018. J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In *Advances in neural information processing systems*, pages 2980–2988, 2015. B. Dai and D. Wipf. Diagnosing and enhancing vae models. In *International Conference on Learning* Representations, 2018. B. Dai, Y. Wang, J. Aston, G. Hua, and D. Wipf. Connections with robust PCA and the role of emergent sparsity in variational autoencoder models. *The Journal of Machine Learning Research*, 19(1):1573–1614, 2018. I. Daunhawer, T. M. Sutter, K. Chin-Cheong, E. Palumbo, and J. E. Vogt. On the Limitations of Multimodal VAEs. In *International Conference on Learning Representations*, 2022. I. Daunhawer, A. Bizeul, E. Palumbo, A. Marx, and J. E. Vogt. Identifiability results for multimodal contrastive learning. *arXiv preprint arXiv:2303.09166*, 2023. P. Diaconis and D. Freedman. Finite exchangeable sequences. *The Annals of Probability*, pages 745–764, 1980. A. B. Dieng, Y. Kim, A. M. Rush, and D. M. Blei. Avoiding latent variable collapse with generative skip models. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pages 2397–2405. PMLR, 2019. N. Dilokthanakul, P. A. Mediano, M. Garnelo, M. C. Lee, H. Salimbeni, K. Arulkumaran, and M. Shanahan. Deep unsupervised clustering with Gaussian Mixture Variational Autoencoders. *arXiv preprint* arXiv:1611.02648, 2016. V. Dutordoir, A. Saul, Z. Ghahramani, and F. Simpson. Neural diffusion processes. In *International Conference on Machine Learning*, pages 8990–9012. PMLR, 2023. H. Edwards and A. Storkey. Towards a neural statistician. *arXiv preprint arXiv:1606.02185*, 2016. F. Falck, H. Zhang, M. Willetts, G. Nicholson, C. Yau, and C. C. Holmes. Multi-facet clustering Variational Autoencoders. *Advances in Neural Information Processing Systems*, 34:8676–8690, 2021. M. Figurnov, S. Mohamed, and A. Mnih. Implicit reparameterization gradients. In Advances in Neural Information Processing Systems, pages 441–452, 2018. J. Fliege and B. F. Svaiter. Steepest descent methods for multicriteria optimization. *Mathematical methods* of operations research, 51:479–494, 2000. A. Foong, W. Bruinsma, J. Gordon, Y. Dubois, J. Requeima, and R. Turner. Meta-learning stationary stochastic process prediction with convolutional neural processes. *Advances in Neural Information Processing Systems*, 33:8284–8295, 2020. S. Gao, R. Brekelmans, G. Ver Steeg, and A. Galstyan. Auto-encoding total correlation explanation. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1157–1166. PMLR, 2019. M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. Rezende, and S. A. Eslami. Conditional neural processes. In *International conference on machine learning*, pages 1704–1713. PMLR, 2018a. M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. J. Rezende, S. Eslami, and Y. W. Teh. Neural processes. arXiv preprint arXiv:1807.01622, 2018b. C. Genest and J. V. Zidek. Combining probability distributions: A critique and an annotated bibliography. Statistical Science, 1(1):114–135, 1986. C. Genest, K. J. McConway, and M. J. Schervish. Characterization of externally Bayesian pooling operators. The Annals of Statistics, pages 487–501, 1986. S. Ghalebikesabi, R. Cornish, L. J. Kelly, and C. Holmes. 
Deep generative pattern-set mixture models for nonignorable missingness. *arXiv preprint arXiv:2103.03532*, 2021. G. Giannone and O. Winther. Scha-vae: Hierarchical context aggregation for few-shot generation. In International Conference on Machine Learning, pages 7550–7569. PMLR, 2022. Y. Gong, H. Hajimirsadeghi, J. He, T. Durand, and G. Mori. Variational selective autoencoder: Learning from partially-observed heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pages 2377–2385. PMLR, 2021. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In *Advances in neural information processing systems*, pages 2672–2680, 2014. A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A Kernel Method for the Two-SampleProblem. *Advances in neural information processing systems*, 19, 2006. H. Hälvä and A. Hyvarinen. Hidden markov nonlinear ica: Unsupervised learning from nonstationary time series. In *Conference on Uncertainty in Artificial Intelligence*, pages 939–948. PMLR, 2020. H. Hälvä, S. Le Corff, L. Lehéricy, J. So, Y. Zhu, E. Gassiat, and A. Hyvarinen. Disentangling identifiable features from noisy data with structured nonlinear ICA. *Advances in Neural Information Processing* Systems, 34:1624–1633, 2021. D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. *Neural computation*, 16(12):2639–2664, 2004. K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In Computer Vision– ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pages 630–645. Springer, 2016. J. Heek, A. Levskaya, A. Oliver, M. Ritter, B. Rondepierre, A. Steiner, and M. van Zee. Flax: A neural network library and ecosystem for JAX, 2023. URL http://github.com/google/flax. L. B. Hewitt, M. I. Nye, A. Gane, T. Jaakkola, and J. B. Tenenbaum. The variational homoencoder: Learning to learn high capacity generative models from few examples. *arXiv preprint arXiv:1807.08919*, 2018. I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. In International conference on learning representations, 2017. M. D. Hoffman and M. J. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. In *Workshop in Advances in Approximate Bayesian Inference, NIPS*, 2016. P. Holderrieth, M. J. Hutchinson, and Y. W. Teh. Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes. In *International Conference on Machine Learning*, pages 4297–4307. PMLR, 2021. H. Hotelling. Relations between two sets of variates. *Biometrika*, 28(3/4):321–377, 1936. S. Huang, A. Makhzani, Y. Cao, and R. Grosse. Evaluating lossy compression rates of deep generative models. *arXiv preprint arXiv:2008.06653*, 2020. Y. Huang, J. Lin, C. Zhou, H. Yang, and L. Huang. Modality competition: What makes joint training of multi-modal network fail in deep learning?(provably). *arXiv preprint arXiv:2203.12221*, 2022. H. Hwang, G.-H. Kim, S. Hong, and K.-E. Kim. Multi-view representation learning via total correlation objective. *Advances in Neural Information Processing Systems*, 34:12194–12207, 2021. A. Hyvarinen and H. Morioka. 
Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. *Advances in neural information processing systems*, 29, 2016. A. Hyvärinen and P. Pajunen. Nonlinear Independent Component Analysis: Existence and uniqueness results. *Neural networks*, 12(3):429–439, 1999. N. B. Ipsen, P.-A. Mattei, and J. Frellsen. not-MIWAE: Deep Generative Modelling with Missing not at Random Data. In *ICLR 2021-International Conference on Learning Representations*, 2021. E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. *arXiv preprint* arXiv:1611.01144, 2016. A. Javaloy, M. Meghdadi, and I. Valera. Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization. *arXiv preprint arXiv:2206.04496*, 2022. Z. Jiang, Y. Zheng, H. Tan, B. Tang, and H. Zhou. Variational deep embedding: an unsupervised and generative approach to clustering. In *Proceedings of the 26th International Joint Conference on Artificial* Intelligence, pages 1965–1972, 2017. M. J. Johnson, D. Duvenaud, A. B. Wiltschko, S. R. Datta, and R. P. Adams. Structured vaes: Composing probabilistic graphical models and variational autoencoders. *arXiv preprint arXiv:1603.06277*, 2016. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. *Machine learning*, 37(2):183–233, 1999. T. Joy, Y. Shi, P. H. Torr, T. Rainforth, S. M. Schmon, and N. Siddharth. Learning multimodal VAEs through mutual supervision. *arXiv preprint arXiv:2106.12570*, 2021. M. Karami and D. Schuurmans. Deep probabilistic canonical correlation analysis. In *Proceedings of the* AAAI Conference on Artificial Intelligence, volume 35, pages 8055–8063, 2021. I. Khemakhem, D. Kingma, R. Monti, and A. Hyvarinen. Variational Autoencoders and nonlinear ICA: A unifying framework. In *International Conference on Artificial Intelligence and Statistics*, pages 2207–2217. PMLR, 2020a. I. Khemakhem, R. Monti, D. Kingma, and A. Hyvarinen. ICE-BeeM: Identifiable Conditional EnergyBased Deep Models Based on Nonlinear ICA. *Advances in Neural Information Processing Systems*, 33: 12768–12778, 2020b. H. Kim, A. Mnih, J. Schwarz, M. Garnelo, A. Eslami, D. Rosenbaum, O. Vinyals, and Y. W. Teh. Attentive neural processes. In *International Conference on Learning Representations*, 2018. J. Kim, J. Yoo, J. Lee, and S. Hong. Setvae: Learning hierarchical composition for generative modeling of set-structured data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15059–15068, 2021. Y.-g. Kim, Y. Liu, and X.-X. Wei. Covariate-informed Representation Learning to Prevent Posterior Collapse of iVAE. In *International Conference on Artificial Intelligence and Statistics*, pages 2641–2660. PMLR, 2023. D. Kingma and J. Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. B. Kivva, G. Rajendran, P. K. Ravikumar, and B. Aragam. Identifiability of deep generative models without auxiliary information. In *Advances in Neural Information Processing Systems*, 2022. A. Klami, S. Virtanen, and S. Kaski. Bayesian canonical correlation analysis. Journal of Machine Learning Research, 14(4), 2013. W. Kool, H. van Hoof, and M. Welling. Buy 4 reinforce samples, get a baseline for free! 2019. T. C. Koopmans and O. Reiersol. The identification of structural characteristics. *The Annals of Mathematical* Statistics, 21(2):165–181, 1950. D. Kramer, P. L. Bommer, D. Durstewitz, C. Tombolini, and G. Koppe. 
Reconstructing nonlinear dynamical systems from multi-modal time series. In *International Conference on Machine Learning*, pages 11613– 11633. PMLR, 2022. J. B. Kruskal. More factors than subjects, tests and treatments: An indeterminacy theorem for canonical decomposition and individual differences scaling. *Psychometrika*, 41(3):281–293, 1976. T. A. Le, H. Kim, M. Garnelo, D. Rosenbaum, J. Schwarz, and Y. W. Teh. Empirical evaluation of neural process objectives. In *NeurIPS workshop on Bayesian Deep Learning*, volume 4, 2018. C. Lee and M. van der Schaar. A variational information bottleneck approach to multi-omics data integration. In *International Conference on Artificial Intelligence and Statistics*, pages 1513–1521. PMLR, 2021. J. Lee, Y. Lee, J. Kim, A. Kosiorek, S. Choi, and Y. W. Teh. Set Transformer: A framework for attentionbased permutation-invariant neural networks. In *International conference on machine learning*, pages 3744–3753. PMLR, 2019. M. Lee and V. Pavlovic. Private-shared disentangled multimodal vae for learning of latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1692–1700, 2021. C.-L. Li, M. Zaheer, Y. Zhang, B. Poczos, and R. Salakhutdinov. Point cloud GAN. arXiv preprint arXiv:1810.05795, 2018. Q. Li, T. Lin, and Z. Shen. Deep neural network approximation of invariant functions through dynamical systems. *arXiv preprint arXiv:2208.08707*, 2022. Y. Li and J. Oliva. Partially observed exchangeable modeling. In International Conference on Machine Learning, pages 6460–6470. PMLR, 2021. Y. Li, H. Yi, C. Bender, S. Shan, and J. B. Oliva. Exchangeable neural ode for set modeling. *Advances in* Neural Information Processing Systems, 33:6936–6946, 2020. R. Linsker. Self-organization in a perceptual network. *Computer*, 21(3):105–117, 1988. G. Loaiza-Ganem, B. L. Ross, J. C. Cresswell, and A. L. Caterini. Diagnosing and Fixing Manifold Overfitting in Deep Generative Models. *Transactions on Machine Learning Research*, 2022. C. Lu, Y. Wu, J. M. Hernández-Lobato, and B. Schölkopf. Invariant causal representation learning for out-of-distribution generalization. In *International Conference on Learning Representations*, 2022. J. Lucas, G. Tucker, R. B. Grosse, and M. Norouzi. Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. In *Advances in Neural Information Processing Systems*, pages 9408–9418, 2019. Q. Lyu and X. Fu. Finite-sample analysis of deep CCA-based unsupervised post-nonlinear multimodal learning. *IEEE Transactions on Neural Networks and Learning Systems*, 2022. Q. Lyu, X. Fu, W. Wang, and S. Lu. Understanding latent correlation-based multiview learning and selfsupervision: An identifiability perspective. *arXiv preprint arXiv:2106.07115*, 2021. C. Ma, S. Tschiatschek, K. Palla, J. M. Hernandez-Lobato, S. Nowozin, and C. Zhang. EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE. In International Conference on Machine Learning, pages 4234–4243. PMLR, 2019. C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. *arXiv preprint arXiv:1611.00712*, 2016. A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial Autoencoders. In *ICLR*, 2016. H. Maron, E. Fetaya, N. Segol, and Y. Lipman. On the universality of invariant networks. In *International* conference on machine learning, pages 4363–4371. PMLR, 2019. E. Mathieu, T. Rainforth, N. Siddharth, and Y. W. Teh. 
Disentangling disentanglement in Variational Autoencoders. In *International Conference on Machine Learning*, pages 4402–4412. PMLR, 2019. E. Mathieu, V. Dutordoir, M. Hutchinson, V. De Bortoli, Y. W. Teh, and R. Turner. Geometric Neural Diffusion Processes. *Advances in Neural Information Processing Systems*, 37, 2023. K. Minoura, K. Abe, H. Nam, H. Nishikawa, and T. Shimamura. A mixture-of-experts deep generative model for integrated analysis of single-cell multiomics data. *Cell reports methods*, 1(5):100071, 2021. G. Mita, M. Filippone, and P. Michiardi. An identifiable double VAE for disentangled representations. In International Conference on Machine Learning, pages 7769–7779. PMLR, 2021. G. E. Moran, D. Sridhar, Y. Wang, and D. M. Blei. Identifiable deep generative models via sparse decoding. arXiv preprint arXiv:2110.10804, 2021. W. Morningstar, S. Vikram, C. Ham, A. Gallagher, and J. Dillon. Automatic differentiation variational inference with mixtures. In *International Conference on Artificial Intelligence and Statistics*, pages 3250– 3258. PMLR, 2021. R. Murphy, B. Srinivasan, V. Rao, and B. Riberio. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In *International Conference on Learning Representations (ICLR 2019)*, 2019. A. Nazabal, P. M. Olmos, Z. Ghahramani, and I. Valera. Handling incomplete heterogeneous data using VAEs. *Pattern Recognition*, 107:107501, 2020. A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. E. Palumbo, I. Daunhawer, and J. E. Vogt. Mmvae+: Enhancing the generative quality of multimodal vaes without compromises. In *The Eleventh International Conference on Learning Representations*, 2023. E. Palumbo, L. Manduchi, S. Laguna, D. Chopard, and J. E. Vogt. Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders. In The Twelfth International Conference on Learning Representations, 2024. K. Pandey, A. Mukherjee, P. Rai, and A. Kumar. Diffusevae: Efficient, controllable and high-fidelity generation from low-dimensional latents. *Transactions on Machine Learning Research*, 2022. B. Poole, S. Ozair, A. Van Den Oord, A. Alemi, and G. Tucker. On variational bounds of mutual information. In *International Conference on Machine Learning*, pages 5171–5180. PMLR, 2019. C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 652–660, 2017. R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In *AISTATS*, pages 814–822, 2014. D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *Proceedings of the 31st International Conference on Machine Learning (ICML-14)*, pages 1278–1286, 2014. H. E. Robbins. An empirical Bayes approach to statistics. In Breakthroughs in Statistics: Foundations and basic theory, pages 388–394. Springer, 1992. G. Roeder, Y. Wu, and D. Duvenaud. Sticking the landing: An asymptotically zero-variance gradient estimator for variational inference. *arXiv preprint arXiv:1703.09194*, 2017. M. Rolinek, D. Zietlow, and G. Martius. Variational Autoencoders pursue PCA directions (by accident). In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 12406– 12415, 2019. M. Rosca, B. Lakshminarayanan, and S. Mohamed. 
Distribution matching in variational inference. *arXiv* preprint arXiv:1802.06847, 2018. S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. *science*, 290 (5500):2323–2326, 2000. D. B. Rubin. Inference and missing data. *Biometrika*, 63(3):581–592, 1976. A. Sannai, Y. Takai, and M. Cordonnier. Universal approximations of permutation invariant/equivariant functions by deep neural networks. *arXiv preprint arXiv:1903.01939*, 2019. A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. *Advances in neural information processing systems*, 30, 2017. S. Schneider, J. H. Lee, and M. W. Mathis. Learnable latent embeddings for joint behavioural and neural analysis. *Nature*, pages 1–9, 2023. N. Segol and Y. Lipman. On universal equivariant set networks. In International Conference on Learning Representations, 2019. O. Sener and V. Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018. Y. Shi, B. Paige, P. Torr, et al. Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models. *Advances in Neural Information Processing Systems*, 32, 2019. Y. Shi, B. Paige, P. Torr, and N. Siddharth. Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models. In *International Conference on Learning Representations*, 2020. P. Sorrenson, C. Rother, and U. Köthe. Disentanglement by nonlinear ICA with general incompressible-flow networks (GIN). *arXiv preprint arXiv:2001.04872*, 2020. J. H. Stock and M. W. Watson. Forecasting using principal components from a large number of predictors. Journal of the American statistical association, 97(460):1167–1179, 2002. T. Sutter, I. Daunhawer, and J. Vogt. Multimodal generative learning utilizing Jensen-Shannon-divergence. Advances in Neural Information Processing Systems, 33:6100–6110, 2020. T. M. Sutter, I. Daunhawer, and J. E. Vogt. Generalized multimodal elbo. In *9th International Conference* on Learning Representations (ICLR 2021), 2021. M. Suzuki and Y. Matsuo. Mitigating the Limitations of Multimodal VAEs with Coordination-based Approach. 2022. M. Suzuki, K. Nakayama, and Y. Matsuo. Joint multimodal learning with deep generative models. arXiv preprint arXiv:1611.01891, 2016. Y. Tang and D. Ha. The sensory neuron as a transformer: Permutation-invariant neural networks for reinforcement learning. *Advances in Neural Information Processing Systems*, 34:22574–22587, 2021. A. Tenenhaus and M. Tenenhaus. Regularized generalized Canonical Correlation Analysis. *Psychometrika*, 76:257–284, 2011. Y. Tian, D. Krishnan, and P. Isola. Contrastive multiview coding. In *Computer Vision–ECCV 2020:* 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16, pages 776–794. Springer, 2020. M. E. Tipping and C. M. Bishop. Probabilistic Principal Component Analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999. M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational bayes for non-conjugate inference. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1971–1979, 2014. M. K. Titsias, F. J. Ruiz, S. Nikoloutsopoulos, and A. Galashov. Information theoretic meta learning with gaussian processes. In *Uncertainty in Artificial Intelligence*, pages 1597–1606. PMLR, 2021. J. M. Tomczak and M. Welling. Vae with a vampprior. 
*arXiv preprint arXiv:1705.07120*, 2017. Y.-H. H. Tsai, S. Bai, P. P. Liang, J. Z. Kolter, L.-P. Morency, and R. Salakhutdinov. Multimodal transformer for unaligned multimodal language sequences. In *Proceedings of the conference. Association for* Computational Linguistics. Meeting, volume 2019, page 6558. NIH Public Access, 2019a. Y.-H. H. Tsai, P. P. Liang, A. Zadeh, L.-P. Morency, and R. Salakhutdinov. Learning factorized multimodal representations. In *International Conference on Representation Learning*, 2019b. A. Vahdat, K. Kreis, and J. Kautz. Score-based generative modeling in latent space. *Advances in Neural* Information Processing Systems, 34, 2021. M. Vasco, H. Yin, F. S. Melo, and A. Paiva. Leveraging hierarchy in multimodal generative models for effective cross-modality inference. *Neural Networks*, 146:238–255, 2022. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. R. Vedantam, I. Fischer, J. Huang, and K. Murphy. Generative models of visually grounded imagination. In *International Conference on Learning Representations*, 2018. G. Ver Steeg and A. Galstyan. Maximally informative hierarchical representations of high-dimensional data. In *Artificial Intelligence and Statistics*, pages 1004–1012. PMLR, 2015. S. Virtanen, A. Klami, S. Khan, and S. Kaski. Bayesian group factor analysis. In *Artificial Intelligence and* Statistics, pages 1269–1277. PMLR, 2012. E. Wagstaff, F. B. Fuchs, M. Engelcke, M. A. Osborne, and I. Posner. Universal approximation of functions on sets. *Journal of Machine Learning Research*, 23(151):1–56, 2022. Q. Wang and H. Van Hoof. Doubly stochastic variational inference for neural processes with hierarchical latent variables. In *International Conference on Machine Learning*, pages 10018–10028. PMLR, 2020. Q. Wang, B. Li, T. Xiao, J. Zhu, C. Li, D. F. Wong, and L. S. Chao. Learning deep transformer models for machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 1810–1822, 2019a. T. Wang and P. Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning*, pages 9929–9939. PMLR, 2020. W. Wang, R. Arora, K. Livescu, and J. Bilmes. On deep multi-view representation learning. In *International* conference on machine learning, pages 1083–1092. PMLR, 2015. W. Wang, X. Yan, H. Lee, and K. Livescu. Deep Variational Canonical Correlation Analysis. *arXiv preprint* arXiv:1610.03454, 2016. W. Wang, D. Tran, and M. Feiszli. What makes training multi-modal classification networks hard? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12695– 12705, 2020. Y. Wang, A. C. Miller, and D. M. Blei. Comment: Variational Autoencoders as Empirical Bayes. 2019b. Y. Wang, D. Blei, and J. P. Cunningham. Posterior collapse and latent variable non-identifiability. Advances in Neural Information Processing Systems, 34:5443–5455, 2021. S. Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of research and development, 4(1):66–82, 1960. M. A. Wright and J. E. Gonzalez. Transformers are deep infinite-dimensional non-mercer binary kernel machines. *arXiv preprint arXiv:2106.01506*, 2021. M. Wu and N. Goodman. Multimodal generative models for scalable weakly-supervised learning. 
*Advances in Neural Information Processing Systems*, 31, 2018. M. Wu and N. Goodman. Multimodal generative models for compositional representation learning. *arXiv preprint arXiv:1912.05075*, 2019. Q. Xi and B. Bloem-Reddy. Indeterminacy in latent variable models: Characterization and strong identifiability. *arXiv preprint arXiv:2206.00801*, 2022. R. Xiong, Y. Yang, D. He, K. Zheng, S. Zheng, C. Xing, H. Zhang, Y. Lan, L. Wang, and T. Liu. On layer normalization in the transformer architecture. In *International Conference on Machine Learning*, pages 10524–10533. PMLR, 2020. J. Xu, E. Dupont, K. Märtens, T. Rainforth, and Y. W. Teh. Deep Stochastic Processes via Functional Markov Transition Operators. *Advances in Neural Information Processing Systems*, 37, 2023. P. Xu, X. Zhu, and D. A. Clifton. Multimodal learning with transformers: A survey. *arXiv preprint arXiv:2206.06488*, 2022. J. Yim, B. L. Trippe, V. De Bortoli, E. Mathieu, A. Doucet, R. Barzilay, and T. Jaakkola. SE(3) diffusion model with application to protein backbone generation. In *International Conference on Machine Learning*, pages 40001–40039. PMLR, 2023. T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn. Gradient surgery for multi-task learning. *Advances in Neural Information Processing Systems*, 33:5824–5836, 2020. C. Yun, S. Bhojanapalli, A. S. Rawat, S. Reddi, and S. Kumar. Are transformers universal approximators of sequence-to-sequence functions? In *International Conference on Learning Representations*, 2019. M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep Sets. *Advances in Neural Information Processing Systems*, 30, 2017. C. Zhang, Z. Han, H. Fu, J. T. Zhou, Q. Hu, et al. CPM-Nets: Cross partial multi-view networks. *Advances in Neural Information Processing Systems*, 32, 2019. F. Zhang, B. Liu, K. Wang, V. Y. Tan, Z. Yang, and Z. Wang. Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL. *arXiv preprint arXiv:2209.09845*, 2022a. L. Zhang, V. Tozzo, J. Higgins, and R. Ranganath. Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets. In *International Conference on Machine Learning*, pages 26559–26574. PMLR, 2022b. Y. Zhang, H. Jiang, Y. Miura, C. D. Manning, and C. P. Langlotz. Contrastive learning of medical visual representations from paired images and text. In *Machine Learning for Healthcare Conference*, pages 2–25. PMLR, 2022c. S. Zhao, C. Gao, S. Mukherjee, and B. E. Engelhardt. Bayesian group factor analysis with structured sparsity. *The Journal of Machine Learning Research*, 2016. S. Zhao, J. Song, and S. Ermon. InfoVAE: Balancing Learning and Inference in Variational Autoencoders. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pages 5885–5892, 2019. Y. Zheng, T. He, Y. Qiu, and D. P. Wipf. Learning manifold dimensions with conditional Variational Autoencoders. *Advances in Neural Information Processing Systems*, 35:34709–34721, 2022. D. Zhou and X.-X. Wei. Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE. *Advances in Neural Information Processing Systems*, 33:7234–7247, 2020.

## A Multi-Modal Distribution Matching

Proof [Proof of Proposition 1] The equations for $\mathcal{L}_{S}(x_{S})$ are well known for uni-modal VAEs; see, for example, Zhao et al. (2019).
To derive similar representations for the conditional bound, note that the first equation (ZXconditional) for matching the joint distribution of the latent and the missing modalities conditional on a modality subset follows from the definition of $\mathcal{L}_{\setminus S}$,

$$\begin{aligned}
\int p_d(x_{\setminus S}|x_S)\,\mathcal{L}_{\setminus S}(x,\theta,\phi)\,\mathrm{d}x_{\setminus S} &= \int p_d(x_{\setminus S}|x_S)\int q_\phi(z|x)\big[\log p_\theta(x_{\setminus S}|z) - \log q_\phi(z|x) + \log q_\phi(z|x_S)\big]\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \int p_d(x_{\setminus S}|x_S)\log p_d(x_{\setminus S}|x_S)\,\mathrm{d}x_{\setminus S} + \int p_d(x_{\setminus S}|x_S)\int q_\phi(z|x)\log\frac{p_\theta(x_{\setminus S}|z)\,q_\phi(z|x_S)}{q_\phi(z|x)\,p_d(x_{\setminus S}|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= -\mathrm{H}\big(p_d(x_{\setminus S}|x_S)\big) - \mathsf{KL}\big(q_\phi(z|x)\,p_d(x_{\setminus S}|x_S)\,\big|\,p_\theta(x_{\setminus S}|z)\,q_\phi(z|x_S)\big).
\end{aligned}$$

To obtain the second representation (Xconditional) for matching the conditional distributions in the data space, observe that $p_\theta(x_{\setminus S}|x_S,z) = p_\theta(x_{\setminus S}|z)$ and consequently,

$$\begin{aligned}
-\int p_d(x_{\setminus S}|x_S)\,\mathcal{L}_{\setminus S}(x,\theta,\phi)\,\mathrm{d}x_{\setminus S} - \mathrm{H}\big(p_d(x_{\setminus S}|x_S)\big) &= \int p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\log\frac{p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)}{p_\theta(x_{\setminus S}|z)\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \int p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\log\frac{p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\,p_\theta(z|x_S)}{p_\theta(x_{\setminus S}|z)\,p_\theta(z|x_S)\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \int p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\log\frac{p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\,p_\theta(z|x_S)}{p_\theta(x_{\setminus S}|z,x_S)\,p_\theta(z|x_S)\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \int p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\log\frac{p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\,p_\theta(z|x_S)}{p_\theta(x_{\setminus S}|x_S)\,p_\theta(z|x_S,x_{\setminus S})\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \mathsf{KL}\big(p_d(x_{\setminus S}|x_S)\,\big|\,p_\theta(x_{\setminus S}|x_S)\big) + \int p_d(x_{\setminus S}|x_S)\int q_\phi(z|x)\Big[\log\frac{q_\phi(z|x)}{p_\theta(z|x)} + \log\frac{p_\theta(z|x_S)}{q_\phi(z|x_S)}\Big]\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}.
\end{aligned}$$

Lastly, the representation (Zconditional) for matching the distributions in the latent space given a modality subset follows by recalling that

$$p_d(x_{\setminus S}|x_S)\,q_\phi(z|x) = q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)\,q^\star(x_{\setminus S}|z,x_S)$$

and consequently,

$$\begin{aligned}
-\int p_d(x_{\setminus S}|x_S)\,\mathcal{L}_{\setminus S}(x,\theta,\phi)\,\mathrm{d}x_{\setminus S} - \mathrm{H}\big(p_d(x_{\setminus S}|x_S)\big) &= \int p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)\log\frac{p_d(x_{\setminus S}|x_S)\,q_\phi(z|x)}{p_\theta(x_{\setminus S}|z)\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \int q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)\,q^\star(x_{\setminus S}|z,x_S)\log\frac{q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)\,q^\star(x_{\setminus S}|z,x_S)}{p_\theta(x_{\setminus S}|z)\,q_\phi(z|x_S)}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\\
&= \mathsf{KL}\big(q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)\,\big|\,q_\phi(z|x_S)\big) + \int q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)\,\mathsf{KL}\big(q^\star(x_{\setminus S}|z,x_S)\,\big|\,p_\theta(x_{\setminus S}|z)\big)\,\mathrm{d}z.
\end{aligned}$$

## B Meta-Learning And Neural Processes

Meta-learning. We consider a standard meta-learning setup but use slightly non-standard notations to remain consistent with notations used in other parts of this work. We consider a compact input or covariate space $\mathcal{A}$ and output space $\mathcal{X}$. Let $\mathsf{D} = \bigcup_{M=1}^{\infty}(\mathcal{A}\times\mathcal{X})^{M}$ be the collection of all input-output pairs. In meta-learning, we are given a meta-dataset, i.e., a collection of elements from $\mathsf{D}$. Each individual data set $D = (a,x) = D_c\cup D_t\in\mathsf{D}$ is called a task and split into a context set $D_c = (a_c,x_c)$ and a target set $D_t = (a_t,x_t)$. We aim to predict the target set from the context set. Consider, therefore, the prediction map

$$\pi\colon D_{c}=(a_{c},x_{c})\mapsto p(x_{t}|a_{t},D_{c})=p(x_{t},x_{c}|a_{t},a_{c})/p(x_{c}|a_{c}),$$

mapping each context data set to the predictive stochastic process conditioned on $D_c$.

Variational lower bounds for Neural Processes. Latent Neural Processes (Garnelo et al., 2018b; Foong et al., 2020) approximate this prediction map by using a latent variable model with parameters θ in the form of

$$z\sim p_{\theta},\quad p_{\theta}(x_{t}|a_{t},z)=\prod_{(a,x)\in D_{t}}p_{\epsilon}\big(x-f_{\theta}(a,z)\big)$$

for a prior $p_\theta$, decoder $f_\theta$ and a parameter-free density $p_\epsilon$. The model is then trained by (approximately) maximizing a lower bound on $\log p_\theta(x_t|a_t,a_c,x_c)$.
Note that for an encoding density $q_\phi$, we have the lower bound

$$\log p_{\theta}(x_{t}|a_{t},a_{c},x_{c})\geq\int q_{\phi}(z|x,a)\log p_{\theta}(x_{t}|a_{t},z)\,\mathrm{d}z-\mathsf{KL}\big(q_{\phi}(z|a,x)\,\big|\,p_{\theta}(z|a_{c},x_{c})\big).$$

Since the posterior distribution $p_\theta(z|a_c,x_c)$ is generally intractable, one instead replaces it with a variational approximation or learned conditional prior $q_\phi(z|a_c,x_c)$, and optimizes the following objective

$${\mathcal{L}}_{\setminus C}^{\mathrm{LNP}}(x,a)=\int q_{\phi}(z|x,a)\log p_{\theta}(x_{t}|a_{t},z)\,\mathrm{d}z-{\mathsf{KL}}\big(q_{\phi}(z|a,x)\,\big|\,q_{\phi}(z|a_{c},x_{c})\big).$$

Note that this objective coincides with $\mathcal{L}_{\setminus C}$ conditioned on the covariate values $a$, where $C$ comprises the indices of the data points that are part of the context set. Using the variational lower bound $\mathcal{L}^{\mathrm{LNP}}_{\setminus C}$ can yield subpar performance compared to another biased log-likelihood objective (Kim et al., 2018; Foong et al., 2020),

$$\log\hat{p}_{\theta}(x_{t}|a_{t},a_{c},x_{c})=\log\left[\frac{1}{L}\sum_{l=1}^{L}\exp\left(\sum_{(x_{t},a_{t})\in D_{t}}\log p_{\theta}(x_{t}|a_{t},z_{c}^{l})\right)\right],$$

for $L$ importance samples $z_c^l\sim q_\phi(z_c|x_c,a_c)$ drawn from the conditional prior as the proposal distribution. The required number of importance samples $L$ for accurate estimation scales exponentially in the forward $\mathsf{KL}(q_\phi(z|x,a)\,|\,q_\phi(z|x_c,a_c))$, see Chatterjee et al. (2018). Unlike a variational approach, such an estimator does not enforce a Bayes-consistency term for the encoders, which may be beneficial in the setting of finite data and model capacity. Note that the Bayes-consistency term for including the target set $(x_t,a_t)$ into the context set $(x_c,a_c)$ writes as

$$\mathsf{KL}\Big(q_{\phi,\setminus C}^{\mathrm{agg}}(z|x_{c},a_{c})\,\Big|\,q_{\phi}(z|x_{c},a_{c})\Big)=\mathsf{KL}\left(\int p_{d}(x_{t}|a_{t},x_{c},a_{c})\,q_{\phi}(z|x,a)\,\mathrm{d}x_{t}\,\Bigg|\,q_{\phi}(z|x_{c},a_{c})\right).$$

Moreover, if one wants to optimize not only the conditional but also the marginal distributions, one may additionally optimize the variational objective corresponding to $\mathcal{L}_C$, i.e.,

$${\mathcal{L}}_{C}^{\mathrm{LNP}}(x_{c},a_{c})=\int q_{\phi}(z|x_{c},a_{c})\log p_{\theta}(x_{c}|a_{c},z)\,\mathrm{d}z-{\mathsf{KL}}\big(q_{\phi}(z|a_{c},x_{c})\,\big|\,p_{\theta}(z)\big),$$

as we do in this work for multi-modal generative models. Note that the objective $\mathcal{L}^{\mathrm{LNP}}_{C}$ alone can be seen as a form of a Neural Statistician model (Edwards and Storkey, 2016), where $C$ coincides with the indices of the target set, while a form of the mixture-based bound corresponds to a Neural Process bound similar to variational Homoencoders (Hewitt et al., 2018); see also the discussion in Le et al. (2018). The multi-view variational information bottleneck approach developed in Lee and van der Schaar (2021) for predicting $X_{\setminus S}$ given $X_S$ involves the joint variational objective

$${\mathcal{L}}_{S}^{\mathrm{IB}}(x,\theta,\phi,\beta)=\int q_{\phi}(z|x_{S})\log p_{\theta}(x_{\setminus S}|z)\,\mathrm{d}z-\beta\,\mathsf{KL}\big(q_{\phi}(z|x_{S})\,\big|\,p_{\theta}(z)\big),$$

which can be interpreted as maximizing $\hat{\mathrm{I}}^{\mathrm{lb}}_{q_\phi}(X_{\setminus S},Z_S) - \beta\,\hat{\mathrm{I}}^{\mathrm{ub}}_{q_\phi}(X_S,Z_S)$ and corresponds to the variational information bottleneck for meta-learning in Titsias et al. (2021).
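As a concrete illustration of the importance-sampled estimator discussed above, the following JAX sketch computes $\log\hat p_\theta(x_t|a_t,a_c,x_c)$ for a diagonal Gaussian conditional prior; `ctx_mu`, `ctx_logvar` and `target_loglik` are assumed interfaces rather than our implementation.

```python
import jax
import jax.numpy as jnp

def is_log_predictive(key, ctx_mu, ctx_logvar, target_loglik, L=16):
    # log \hat p_theta(x_t | a_t, a_c, x_c) with L samples z^l ~ q_phi(z | x_c, a_c).
    # ctx_mu, ctx_logvar parameterize the diagonal Gaussian conditional prior;
    # target_loglik(z) returns sum over the target set of log p_theta(x | a, z).
    eps = jax.random.normal(key, (L,) + ctx_mu.shape)
    z = ctx_mu + jnp.exp(0.5 * ctx_logvar) * eps
    logp = jax.vmap(target_loglik)(z)                      # shape (L,)
    return jax.scipy.special.logsumexp(logp) - jnp.log(L)
```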
## C Information-Theoretic Perspective

We recall first that the mutual information is given by

$$\mathrm{I}_{q_{\phi}}(X_{S},Z_{S})=\int q_{\phi}(x_{S},z_{S})\log\frac{q_{\phi}(x_{S},z_{S})}{p_{d}(x_{S})\,q_{\phi,S}^{\mathrm{agg}}(z_{S})}\,\mathrm{d}z_{S}\,\mathrm{d}x_{S},$$

where $q^{\mathrm{agg}}_{\phi,S}(z) = \int p_d(x_S)\,q_\phi(z|x_S)\,\mathrm{d}x_S$ is the aggregated prior (Makhzani et al., 2016). It can be bounded by standard (Barber and Agakov, 2004; Alemi et al., 2016; 2018) lower and upper bounds using the rate and distortion:

$$\mathrm{H}_S - D_S \leq \mathrm{H}_S - D_S + \Delta_1 = \mathrm{I}_{q_\phi}(X_S,Z_S) = R_S - \Delta_2 \leq R_S, \tag{15}$$

with $\Delta_1 = \int q^{\mathrm{agg}}_{\phi}(z)\,\mathsf{KL}(q^\star(x_S|z)\,|\,p_\theta(x_S|z))\,\mathrm{d}z > 0$, $\Delta_2 = \mathsf{KL}(q^{\mathrm{agg}}_{\phi,S}(z)\,|\,p_\theta(z)) > 0$ and $q^\star(x_S|z) = q_\phi(x_S,z)/q^{\mathrm{agg}}_{\phi}(z)$. Moreover, if the bounds in (7) become tight with $\Delta_1 = \Delta_2 = 0$ in the hypothetical scenario of infinite-capacity decoders and encoders, one obtains $\int p_d\,\mathcal{L}_S = (1-\beta)\,\mathrm{I}_{q_\phi}(X_S,Z_S) - \mathrm{H}_S$. For $\beta > 1$, maximizing $\mathcal{L}_S$ yields an auto-decoding limit that minimizes $\mathrm{I}_{q_\phi}(X_S,Z_S)$, for which the latent representations do not encode any information about the data, whilst $\beta < 1$ yields an auto-encoding limit that maximizes $\mathrm{I}_{q_\phi}(X_S,Z_S)$ and for which the data is perfectly encoded and decoded.

The mixture-based bound can be interpreted as the maximization of a variational lower bound of $\mathrm{I}_{q_\phi}(X_{\mathcal M},Z_S)$ and the minimization of a variational upper bound of $\mathrm{I}_{q_\phi}(X_S,Z_S)$. Indeed, see also Daunhawer et al. (2022),

$$\mathcal{H}_{\mathcal{M}}-D_{S}-D_{\setminus S}^{c}\leq\mathcal{H}_{\mathcal{M}}-D_{S}-D_{\setminus S}^{c}+\Delta_{1}^{\prime}=\mathrm{I}_{q_{\phi}}(X_{\mathcal{M}},Z_{S}),$$

where $\Delta_1' = \int q^{\mathrm{agg}}_{\phi}(z)\,\mathsf{KL}\big(q^\star(x|z)\,|\,p_\theta(x|z)\big)\,\mathrm{d}z > 0$, due to

$$\begin{aligned}
\mathrm{I}_{q_{\phi}}(X_{\mathcal{M}},Z_{S})&=\mathcal{H}_{\mathcal{M}}-\mathcal{H}_{q_{\phi}}(X|Z_{S})=\mathcal{H}_{\mathcal{M}}+\int p_{d}(x)\,q_{\phi}(z|x_{S})\big[\log q^{\star}(x|z)\big]\,\mathrm{d}z\,\mathrm{d}x\\
&=\mathcal{H}_{\mathcal{M}}+\int p_{d}(x)\,q_{\phi}(z|x_{S})\left[\log p_{\theta}(x_{S}|z)+\log p_{\theta}(x_{\setminus S}|z)+\log\frac{q^{\star}(x|z)}{p_{\theta}(x|z)}\right]\mathrm{d}z\,\mathrm{d}x.
\end{aligned}$$

Recalling that

$$\int p_{d}(\mathrm{d}x)\,{\mathcal{L}}_{S}^{\mathrm{Mix}}(x)=-D_{S}-D_{\setminus S}^{c}-\beta R_{S},$$

one can see that maximizing the first part of the mixture-based variational bound corresponds to maximizing $-D_S - D^c_{\setminus S}$ as a variational lower bound of $\mathrm{I}_{q_\phi}(X_{\mathcal M},Z_S)$, when ignoring the fixed entropy of the multi-modal data. Maximizing the second part of the mixture-based variational bound corresponds to minimizing $R_S$ as a variational upper bound of $\mathrm{I}_{q_\phi}(X_S,Z_S)$, see (15).

Proof [Proof of Lemma 7] The proof follows by adapting the arguments in Alemi et al. (2018). The law of $X_{\setminus S}$ and $Z$ conditional on $X_S$ on the encoder path can be written as

$$q_{\phi}(z,x_{\setminus S}|x_{S})=p_{d}(x_{\setminus S}|x_{S})\,q_{\phi}(z|x)=q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,q^{\star}(x_{\setminus S}|z,x_{S})$$

with $q^\star(x_{\setminus S}|z,x_S) = q_\phi(z,x_{\setminus S}|x_S)/q^{\mathrm{agg}}_{\phi,\setminus S}(z|x_S)$.
To prove a lower bound on the conditional mutual information, note that

$$\begin{aligned}
\mathrm{I}_{q_{\phi}}(X_{\setminus S},Z_{\mathcal{M}}|X_{S})&=\int p_{d}(x_{S})\int q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\int q^{\star}(x_{\setminus S}|z,x_{S})\log\frac{q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,q^{\star}(x_{\setminus S}|z,x_{S})}{q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,p_{d}(x_{\setminus S}|x_{S})}\,\mathrm{d}z\,\mathrm{d}x_{\setminus S}\,\mathrm{d}x_{S}\\
&=\int p_{d}(x_{S})\int q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\left[\int q^{\star}(x_{\setminus S}|z,x_{S})\log p_{\theta}(x_{\setminus S}|z)\,\mathrm{d}x_{\setminus S}+\mathsf{KL}\big(q^{\star}(x_{\setminus S}|z,x_{S})\,\big|\,p_{\theta}(x_{\setminus S}|z)\big)\right]\mathrm{d}z\,\mathrm{d}x_{S}\\
&\qquad-\int p_{d}(x_{S})\int p_{d}(x_{\setminus S}|x_{S})\log p_{d}(x_{\setminus S}|x_{S})\,\mathrm{d}x\\
&=\underbrace{\int p_{d}(x)\int q_{\phi}(z|x)\log p_{\theta}(x_{\setminus S}|z)\,\mathrm{d}z\,\mathrm{d}x}_{=-D_{\setminus S}}\ \underbrace{-\int p_{d}(x_{S})\int p_{d}(x_{\setminus S}|x_{S})\log p_{d}(x_{\setminus S}|x_{S})\,\mathrm{d}x}_{=\mathrm{H}_{\setminus S}=\mathrm{H}(X_{\setminus S}|X_{S})}\\
&\qquad+\underbrace{\int p_{d}(x_{S})\int q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,\mathsf{KL}\big(q^{\star}(x_{\setminus S}|z,x_{S})\,\big|\,p_{\theta}(x_{\setminus S}|z)\big)\,\mathrm{d}z\,\mathrm{d}x_{S}}_{=\Delta_{\setminus S,1}\geq 0}\\
&=\Delta_{\setminus S,1}+\mathrm{H}_{\setminus S}-D_{\setminus S}.
\end{aligned}$$

The upper bound follows by observing that

$$\begin{aligned}
\mathrm{I}_{q_{\phi}}(X_{\setminus S},Z_{\mathcal{M}}|X_{S})&=\int p_{d}(x_{S})\int p_{d}(x_{\setminus S}|x_{S})\,q_{\phi}(z|x)\log\frac{q_{\phi}(z|x)\,p_{d}(x_{\setminus S}|x_{S})}{q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,p_{d}(x_{\setminus S}|x_{S})}\,\mathrm{d}z\,\mathrm{d}x\\
&=\int p_{d}(x)\,\mathsf{KL}\big(q_{\phi}(z|x)\,\big|\,q_{\phi}(z|x_{S})\big)\,\mathrm{d}x-\underbrace{\int p_{d}(x_{S})\,\mathsf{KL}\big(q_{\phi,\setminus S}^{\mathrm{agg}}(z|x_{S})\,\big|\,q_{\phi}(z|x_{S})\big)\,\mathrm{d}x_{S}}_{=\Delta_{\setminus S,2}\geq 0}\\
&=R_{\setminus S}-\Delta_{\setminus S,2}.
\end{aligned}$$

Remark 19 (Total correlation based objectives) The objective suggested in Hwang et al. (2021) is motivated by a conditional variational bottleneck perspective that aims to maximize the reduction of total correlation of $X$ when conditioned on $Z$, as measured by the conditional total correlation, see Watanabe (1960); Ver Steeg and Galstyan (2015); Gao et al. (2019), i.e., minimizing

$$\mathrm{TC}(X|Z)=\mathrm{TC}(X)-\mathrm{TC}(X,Z)=\mathrm{TC}(X)+\mathrm{I}_{q_{\phi}}(X,Z)-\sum_{s=1}^{M}\mathrm{I}_{q_{\phi}}(X_{s},Z), \tag{16}$$

where $\mathrm{TC}(X) = \mathsf{KL}\big(p(x)\,|\,\prod_{i=1}^{d}p(x_i)\big)$ for $d$-dimensional $X$. Resorting to variational lower bounds and using a constant $\beta > 0$ that weights the contributions of the mutual information terms, approximations of (16) can be optimized by maximizing

$${\mathcal{L}}^{\mathrm{TC}}(\theta,\phi,\beta)=\int\rho({\mathcal{S}})\int\left\{q_{\phi}(z|x)\left[\log p_{\theta}(x|z)\right]\mathrm{d}z-\beta\,{\mathsf{KL}}\big(q_{\phi}(z|x)\,\big|\,q_{\phi}(z|x_{\mathcal{S}})\big)\right\}\mathrm{d}{\mathcal{S}},$$

where $\rho$ is concentrated on the uni-modal subsets of $\mathcal M$.

Remark 20 (Entropy regularised optimization) Let $q$ be a density over $\mathsf{C}$, $\exp(g)$ be integrable with respect to $q$ and $\tau > 0$.
The maximum of

$$f(q)=\int_{\mathsf{C}}q(c)\left[g(c)-\tau\log q(c)\right]\mathrm{d}c,$$

attained at $q^\star(c) = \frac{1}{Z}\mathrm{e}^{g(c)/\tau}$ with normalizing constant $Z = \int_{\mathsf C}\mathrm{e}^{g(c)/\tau}\,\mathrm{d}c$, is

$$f^{\star}=f(q^{\star})=\tau\log\int_{\mathsf{C}}\mathrm{e}^{g(c)/\tau}\,\mathrm{d}c.$$

Remark 21 (Optimal variational distribution) The optimal variational density for the mixture-based multi-modal objective (1),

$$\int p_{d}(\mathrm{d}x)\,{\mathcal{L}}_{S}^{\mathrm{Mix}}(x)=\int p_{d}(x_{S})\int q_{\phi}(z|x_{S})\int p_{d}(x_{\setminus S}|x_{S})\left[\log p_{\theta}(x_{S}|z)+\log p_{\theta}(x_{\setminus S}|z)+\beta\log p_{\theta}(z)-\beta\log q_{\phi}(z|x_{S})\right]\mathrm{d}x_{\setminus S}\,\mathrm{d}z\,\mathrm{d}x_{S},$$

using Remark 20, is attained at

$$\begin{aligned}
q^{\star}(z|x_{S})&\propto\exp\left(\frac{1}{\beta}\int p_{d}(x_{\setminus S}|x_{S})\left[\log p_{\theta}(x_{S}|z)+\log p_{\theta}(x_{\setminus S}|z)+\beta\log p_{\theta}(z)\right]\mathrm{d}x_{\setminus S}\right)\\
&\propto\tilde{p}_{\beta,\theta}(z|x_{S})\exp\left(\int p_{d}(x_{\setminus S}|x_{S})\log\tilde{p}_{\beta,\theta}(x_{\setminus S}|z)\,\mathrm{d}x_{\setminus S}\right).
\end{aligned}$$

## D Permutation-Invariant Architectures

Multi-head attention and masking. We introduce here a standard multi-head attention (Bahdanau et al., 2014; Vaswani et al., 2017) mapping $\mathrm{MHA}_\vartheta\colon\mathbb{R}^{I\times D_X}\times\mathbb{R}^{S\times D_Y}\to\mathbb{R}^{I\times D_Y}$ given by

$$\mathrm{MHA}_{\vartheta}(X,Y)=W^{O}\left[\mathrm{Head}^{1}(X,Y,Y),\ldots,\mathrm{Head}^{H}(X,Y,Y)\right],\quad\vartheta=(W_{Q},W_{K},W_{V},W_{O}),$$

with output matrix $W_O\in\mathbb{R}^{D_A\times D_Y}$, projection matrices $W_Q\in\mathbb{R}^{D_X\times D_A}$, $W_K,W_V\in\mathbb{R}^{D_Y\times D_A}$, and

$$\operatorname{Head}^{h}(Q,K,V)=\operatorname{Att}(QW_{Q}^{h},KW_{K}^{h},VW_{V}^{h})\in\mathbb{R}^{I\times D}, \tag{17}$$

where we assume that $D = D_A/H\in\mathbb{N}$ is the head size. Here, the dot-product attention function is

$$\operatorname{Att}(Q,K,V)=\sigma(QK^{\top})V,$$

where $\sigma$ is the softmax function applied to each column of $QK^\top$.

Masked multi-head attention. In practice, it is convenient to consider masked multi-head attention models $\mathrm{MMHA}_{\vartheta,M}\colon\mathbb{R}^{I\times D_X}\times\mathbb{R}^{T\times D_Y}\to\mathbb{R}^{I\times D_Y}$ for a mask matrix $M\in\{0,1\}^{I\times T}$ that operate on key or value sequences of fixed length $T$, where the $h$-th head (17) is given by

$$\operatorname{Head}^{h}(Q,K,V)=\left[M\odot\sigma(QW_{Q}^{h}(KW_{K}^{h})^{\top})\right]VW_{V}^{h}\in\mathbb{R}^{I\times D}.$$

Using the softmax kernel function $\mathrm{SM}_D(q,k) = \exp(q^\top k/\sqrt{D})$, we set

$$\text{MMHA}_{\vartheta,M}(X,Y)_{i}=\sum_{t=1}^{T}\sum_{h=1}^{H}\frac{M_{it}\,\text{SM}_{D}(X_{i}W_{h}^{Q},Y_{t}W_{h}^{K})}{\sum_{t^{\prime}=1}^{T}M_{it^{\prime}}\,\text{SM}_{D}(X_{i}W_{h}^{Q},Y_{t^{\prime}}W_{h}^{K})}\,Y_{t}W_{h}^{V}W_{h}^{O}, \tag{18}$$

which does not depend on $Y_t$ if $M_{\cdot t} = 0$.

Masked self-attention. For mask matrix $M = mm^\top$ with $m = (1_{\{s\in S\}})_{s\in\mathcal M}$, we write

$$\mathrm{MHA}_{\vartheta}(Y_{S},Y_{S})=\mathrm{MMHA}_{\vartheta,M}(\mathrm{i}(Y_{S}),\mathrm{i}(Y_{S}))_{S},$$

where $\mathrm{MMHA}_{\vartheta,M}$ operates on sequences with fixed length and $\mathrm{i}(Y_S)_t = Y_t$ if $t\in S$ and $0$ otherwise.
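A minimal JAX sketch of the masked multi-head attention map (18) is given below; the parameter shapes and the large-negative-logit masking are illustrative implementation choices rather than the exact architecture used in the experiments.

```python
import jax
import jax.numpy as jnp

def masked_mha(params, X, Y, mask):
    # Masked multi-head dot-product attention, cf. (18).
    # X: (I, D_X) queries, Y: (T, D_Y) keys/values, mask: (I, T) in {0, 1}.
    # Assumed parameter shapes: W_q (H, D_X, D), W_k/W_v (H, D_Y, D), W_o (H*D, D_Y).
    Q = jnp.einsum('id,hdk->hik', X, params['W_q'])
    K = jnp.einsum('td,hdk->htk', Y, params['W_k'])
    V = jnp.einsum('td,hdk->htk', Y, params['W_v'])
    logits = jnp.einsum('hik,htk->hit', Q, K) / jnp.sqrt(Q.shape[-1])
    # Masked-out keys get a large negative logit; a fully masked query row then
    # receives uniform weights and must be zeroed out downstream.
    logits = jnp.where(mask[None, :, :] > 0, logits, -1e9)
    weights = jax.nn.softmax(logits, axis=-1)            # (H, I, T)
    heads = jnp.einsum('hit,htk->hik', weights, V)       # (H, I, D)
    concat = jnp.transpose(heads, (1, 0, 2)).reshape(X.shape[0], -1)
    return concat @ params['W_o']                        # (I, D_Y)
```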
LayerNorm and SetNorm. Let $h\in\mathbb{R}^{T\times D}$ and consider the normalization

$$\mathrm{N}(h)={\frac{h-\mu(h)}{\sigma(h)}}\odot\gamma+\beta,$$

where $\mu$ and $\sigma$ standardize the input $h$ by computing the mean and the variance, respectively, over some axis of $h$, whilst $\gamma$ and $\beta$ define a transformation. LayerNorm (Ba et al., 2016) standardises inputs over the last axis, e.g., $\mu(h)_t = \frac{1}{D}\sum_{d=1}^{D}h_{t,d}$, i.e., separately for each element. In contrast, SetNorm (Zhang et al., 2022b) standardises inputs over both axes, e.g., $\mu(h) = \frac{1}{TD}\sum_{t=1}^{T}\sum_{d=1}^{D}h_{t,d}$, thereby losing the global mean and variance only. In both cases, $\gamma$ and $\beta$ share their values across the first axis. Both normalizations are permutation-equivariant.

Transformer. We consider a masked pre-layer-norm (Wang et al., 2019a; Xiong et al., 2020) multi-head transformer block

$$\big(\mathrm{MMTB}_{\vartheta,M}(\mathrm{i}_{S}(Y_{S}))\big)_{S}=\big(Z+\sigma_{\mathrm{ReLU}}(\mathrm{LN}(Z))\big)_{S}$$

with $\sigma_{\mathrm{ReLU}}$ being a ReLU non-linearity and

$$Z=\mathrm{i}_{S}(Y_{S})+\mathrm{MMHA}_{\vartheta,M}\big(\mathrm{LN}(\mathrm{i}_{S}(Y_{S})),\mathrm{LN}(\mathrm{i}_{S}(Y_{S}))\big),$$

where $M = mm^\top$ for $m = (1_{\{s\in S\}})_{s\in\mathcal M}$.

Set-Attention Encoders. Set $g^0 = \mathrm{i}_S(\chi_\vartheta(h_S))$ and for $k\in\{1,\dots,L\}$, let

$$g^{k}=\mathrm{MMTB}_{\vartheta,M}(g_{S}^{k-1}).$$

Then, we can express the self-attention multi-modal aggregation mapping via $f_\vartheta(h_S) = \rho_\vartheta\big(\sum_{s\in S}g^L_s\big)$.

Remark 22 (Multi-modal time series models) We have introduced a multi-modal generative model in a general form that also applies to the time-series setup, such as when a latent Markov process drives multiple time series. For example, consider a latent Markov process $Z = (Z_t)_{t\in\mathbb{N}}$ with prior dynamics $p_\theta(z_1,\dots,z_T) = p_\theta(z_1)\prod_{t=2}^{T}p_\theta(z_t|z_{t-1})$ for an initial density $p_\theta(z_1)$ and homogeneous Markov kernels $p_\theta(z_t|z_{t-1})$. Conditional on $Z$, suppose that the time series $(X_{s,t})_{t\in\mathbb{N}}$ follows the dynamics $p_\theta(x_{s,1},\dots,x_{s,T}|z_1,\dots,z_T) = \prod_{t=1}^{T}p_\theta(x_{s,t}|z_t)$ for decoding densities $p_\theta(x_{s,t}|z_t)$. A common choice (Chung et al., 2015) for modeling the encoding distribution of such sequential (uni-modal) VAEs is to assume the factorization $q_\phi(z_1,\dots,z_T|x_1,\dots,x_T) = q_\phi(z_1|x_1)\prod_{t=2}^{T}q_\phi(z_t|z_{t-1},x_t)$ for $x_t = (x_{s,t})_{s\in\mathcal M}$, with initial encoding densities $q_\phi(z_1|x_1)$ and encoding Markov kernels $q_\phi(z_t|z_{t-1},x_t)$. One can again consider modality-specific encodings $h_s = (h_{s,1},\dots,h_{s,T})$, $h_{s,t} = h_{s,\varphi}(x_{s,t})$, now applied separately at each time step, that are then used to construct Markov kernels that are permutation-invariant in the form of $q'_\phi(z_t|z_{t-1},\pi h_\varphi(x_{t,S})) = q'_\phi(z_t|z_{t-1},h_\varphi(x_{t,S}))$ for permutations $\pi\in\mathbb{S}_S$. Alternatively, in the absence of the auto-regressive encoding structure with Markov kernels, one could also use transformer models that use absolute or relative positional embeddings across the last temporal axis but no positional embeddings across the first modality axis, followed by a sum-pooling operation across the modality axis. Note that previous works using multi-modal time series, such as Kramer et al. (2022), use a non-amortized encoding distribution for the full multi-modal posterior only. A numerical evaluation of permutation-invariant schemes for time series models is, however, outside the scope of this work.
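Combining the masked attention sketch above with the block structure of this section, a permutation-invariant set-attention aggregation can be sketched as follows; the linear feature map $\chi$ and linear head $\rho$ are simplifying assumptions, and the block follows the MMTB definition above.

```python
import jax
import jax.numpy as jnp

def layer_norm(h, eps=1e-6):
    # Standardization over the last axis (learnable scale/shift omitted).
    mu = jnp.mean(h, axis=-1, keepdims=True)
    sd = jnp.std(h, axis=-1, keepdims=True)
    return (h - mu) / (sd + eps)

def mmtb(params, g, mask):
    # Pre-layer-norm masked transformer block: Z + ReLU(LN(Z)).
    z = g + masked_mha(params['mha'], layer_norm(g), layer_norm(g), mask)
    return z + jax.nn.relu(layer_norm(z))

def set_attention_agg(params, h, member):
    # f(h_S) = rho( sum_{s in S} g^L_s ) applied to modality features h: (M, D_h);
    # member: (M,) 0/1 indicators of s in S.
    g = (h @ params['chi']) * member[:, None]      # g^0 = i_S(chi(h_S))
    mask = jnp.outer(member, member)               # M = m m^T
    for blk in params['blocks']:
        g = mmtb(blk, g, mask)                     # permutation-equivariant g^k
    pooled = jnp.sum(g * member[:, None], axis=0)  # invariant sum-pooling over S
    return pooled @ params['rho']                  # rho head
```

Since each block is permutation-equivariant and the final sum-pooling is permutation-invariant, the composite map is invariant to reorderings of the modalities in $S$, as required.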
## E Permutation-Equivariance And Private Latent Variables

Remark 23 (Variational bounds with private latent variables) To compute the multi-modal variational bounds, notice that the required KL divergences can be written as follows:

$$\mathsf{KL}\big(q_{\phi}(z^{\prime},\tilde{z}|x_{S})\,\big|\,p_{\theta}(z^{\prime},\tilde{z})\big)=\mathsf{KL}\big(q_{\phi}(z^{\prime}|x_{S})\,\big|\,p_{\theta}(z^{\prime})\big)+\int q_{\phi}(z^{\prime}|x_{S})\,\mathsf{KL}\big(q_{\phi}(\tilde{z}_{S}|z^{\prime},x_{S})\,\big|\,p_{\theta}(\tilde{z}_{S}|z^{\prime})\big)\,\mathrm{d}z^{\prime}$$

and

$$\begin{aligned}
\mathsf{KL}\big(q_{\phi}(z^{\prime},\tilde{z}|x_{\mathcal{M}})\,\big|\,q_{\phi}(z^{\prime},\tilde{z}|x_{S})\big)&=\mathsf{KL}\big(q_{\phi}(z^{\prime}|x_{\mathcal{M}})\,\big|\,q_{\phi}(z^{\prime}|x_{S})\big)+\int q_{\phi}(z^{\prime}|x_{\mathcal{M}})\,\mathsf{KL}\big(q_{\phi}(P_{S}\tilde{z}|z^{\prime},x_{\mathcal{M}})\,\big|\,q_{\phi}(P_{S}\tilde{z}|z^{\prime},x_{S})\big)\,\mathrm{d}z^{\prime}\\
&\quad+\int q_{\phi}(z^{\prime}|x_{\mathcal{M}})\,\mathsf{KL}\big(q_{\phi}(P_{\setminus S}\tilde{z}|z^{\prime},x_{\mathcal{M}})\,\big|\,p_{\theta}(P_{\setminus S}\tilde{z}|z^{\prime})\big)\,\mathrm{d}z^{\prime},
\end{aligned}$$

where $P_S\colon(\tilde z_1,\dots,\tilde z_M)\mapsto(\tilde z_s)_{s\in S}$ projects all private latent variables to those contained in $S$. These expressions can be used to compute our overall variational bound $\mathcal{L}_S + \mathcal{L}_{\setminus S}$ via

$$\begin{aligned}
&\int q_{\phi}(z^{\prime}|x_{S})\,q_{\phi}(\tilde{z}_{S}|z^{\prime},x_{S})\log p_{\theta}(x_{S}|z^{\prime},\tilde{z}_{S})\,\mathrm{d}z^{\prime}\,\mathrm{d}\tilde{z}_{S}-\mathsf{KL}\big(q_{\phi}(z^{\prime}|x_{S})\,q_{\phi}(\tilde{z}_{S}|z^{\prime},x_{S})\,\big|\,p_{\theta}(z^{\prime})\,p_{\theta}(\tilde{z}_{S}|z^{\prime})\big)\\
&\quad+\int q_{\phi}(z^{\prime}|x_{\mathcal{M}})\,q_{\phi}(\tilde{z}_{\setminus S}|z^{\prime},x_{\mathcal{M}})\log p_{\theta}(x_{\setminus S}|z^{\prime},\tilde{z}_{\setminus S})\,\mathrm{d}z^{\prime}\,\mathrm{d}\tilde{z}_{\setminus S}-\mathsf{KL}\big(q_{\phi}(z^{\prime},\tilde{z}_{S},\tilde{z}_{\setminus S}|x_{\mathcal{M}})\,\big|\,q_{\phi}(z^{\prime},\tilde{z}_{S},\tilde{z}_{\setminus S}|x_{S})\big).
\end{aligned}$$

Remark 24 (Comparison with MMVAE+ variational bound) It is instructive to compare our bound with the MMVAE+ approach suggested in Palumbo et al. (2023). Assuming a uniform masking distribution restricted to uni-modal sets so that $S = \{s\}$ for some $s\in\mathcal M$, we can write the bound from Palumbo et al. (2023) as $\frac{1}{M}\sum_{s=1}^{M}\mathcal{L}^{\mathrm{MMVAE+}}_{\{s\}}(x)$ with

$$\begin{aligned}
\mathcal{L}^{\mathrm{MMVAE+}}_{\{s\}}(x)&=\int q_{\phi}(z^{\prime}|x_{\{s\}})\,q_{\phi}(\tilde{z}_{\{s\}}|x_{\{s\}})\big[\log p_{\theta}(x_{\{s\}}|z^{\prime},\tilde{z}_{\{s\}})\big]\,\mathrm{d}z^{\prime}\,\mathrm{d}\tilde{z}_{\{s\}}\\
&\quad+\int q_{\phi}(z^{\prime}|x_{\{s\}})\,r_{\phi}(\tilde{z}_{\setminus\{s\}})\big[\log p_{\theta}(x_{\setminus\{s\}}|z^{\prime},\tilde{z}_{\setminus\{s\}})\big]\,\mathrm{d}z^{\prime}\,\mathrm{d}\tilde{z}_{\setminus\{s\}}\\
&\quad-\mathsf{KL}\big(q_{\phi}^{\mathrm{MoE}}(z^{\prime},\tilde{z}_{\mathcal{M}}|x_{\mathcal{M}})\,\big|\,p_{\theta}(z^{\prime})\,p_{\theta}(\tilde{z}_{\mathcal{M}})\big).
\end{aligned}$$

Here, it is assumed that the multi-modal encoding distribution for computing the KL divergence is of the form

$$q_{\phi}^{\mathrm{MoE}}(z^{\prime},\tilde{z}_{\mathcal{M}}|x_{\mathcal{M}})=\frac{1}{M}\sum_{s\in\mathcal{M}}\left(q_{\phi}(z^{\prime}|x_{s})\,q_{\phi}(\tilde{z}_{s}|x_{s})\right)$$

and $r_\phi(\tilde z_A) = \prod_{s\in A}r_\phi(\tilde z_s)$ are additional trainable *prior* distributions.

## F Multi-Modal Posterior In Exponential Family Models

Consider the setting where the decoding and encoding distributions are of the exponential family form, that is,

$$p_{\theta}(x_{s}|z)=\mu_{s}(x_{s})\exp\left[\langle T_{s}(x_{s}),f_{s,\theta}(z)\rangle-\log Z_{s}(f_{s,\theta}(z))\right]$$

for all $s\in\mathcal M$, while for all $S\subset\mathcal M$,

$$q_{\phi}(z|x_{S})=\mu(z)\exp\left[\langle V(z),\lambda_{\phi,S}(x_{S})\rangle-\log\Gamma_{S}(\lambda_{\phi,S}(x_{S}))\right],$$

where $\mu_s$ and $\mu$ are base measures, $T_s(x_s)$ and $V(z)$ are sufficient statistics, and the natural parameters $\lambda_{\phi,S}(x_S)$ and $f_{s,\theta}(z)$ are parameterized by the encoder or decoder networks, respectively, with $Z_s$ and $\Gamma_S$ being normalizing functions. Note that we made a standard assumption that the multi-modal encoding distribution has a fixed base measure and sufficient statistics for any modality subset.
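For diagonal Gaussian experts, the additive structure of the natural parameters derived below yields the familiar closed-form product-of-experts fusion underlying the PoE aggregation scheme; the following is a minimal sketch, assuming a standard Gaussian prior expert is included in the product.

```python
import jax.numpy as jnp

def poe_gaussian(mus, logvars, member):
    # Product of diagonal Gaussian experts q(z|x_S) prop. to p(z) prod_{s in S} q(z|x_s):
    # precisions add, and the standard Gaussian prior contributes unit precision.
    # mus, logvars: (M, D) expert parameters; member: (M,) 0/1 indicators of s in S.
    prec = jnp.exp(-logvars) * member[:, None]          # zero out absent experts
    post_prec = 1.0 + jnp.sum(prec, axis=0)             # prior + expert precisions
    post_mu = jnp.sum(prec * mus, axis=0) / post_prec   # precision-weighted mean
    return post_mu, -jnp.log(post_prec)                 # mean and log-variance
```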
For fixed generative parameters $\theta$, we want to learn a multi-modal encoding distribution that minimizes, over $x_S\sim p_d$,

$$\begin{aligned}
\mathsf{KL}\big(q_{\phi}(z|x_{S})\,\big|\,p_{\theta}(z|x_{S})\big)&=\int q_{\phi}(z|x_{S})\Big[\log q_{\phi}(z|x_{S})-\log p_{\theta}(z)-\sum_{s\in S}\log p_{\theta}(x_{s}|z)\Big]\,\mathrm{d}z+\log p_{\theta}(x_{S})\\
&=\int q_{\phi}(z|x_{S})\Big[\langle V(z),\lambda_{\phi,S}(x_{S})\rangle-\log\Gamma_{S}(\lambda_{\phi,S}(x_{S}))-\sum_{s\in S}\log\mu_{s}(x_{s})\\
&\qquad-\sum_{s\in S}\langle T_{s}(x_{s}),f_{s,\theta}(z)\rangle-\log p_{\theta}(z)+\sum_{s\in S}\log Z_{s}(f_{s,\theta}(z))\Big]\,\mathrm{d}z+\log p_{\theta}(x_{S})\\
&=\int q_{\phi}(z|x_{S})\Big[\Big\langle\begin{pmatrix}V(z)\\ 1\end{pmatrix},\begin{pmatrix}\lambda_{\phi,S}(x_{S})\\ -\log\Gamma_{S}(\lambda_{\phi,S}(x_{S}))\end{pmatrix}\Big\rangle-\sum_{s\in S}\Big\langle\begin{pmatrix}T_{s}(x_{s})\\ 1\end{pmatrix},\begin{pmatrix}f_{s,\theta}(z)\\ b_{s,\theta}(z)\end{pmatrix}\Big\rangle\Big]\,\mathrm{d}z+\mathrm{const},
\end{aligned}$$

with $b_{s,\theta}(z) = \frac{1}{|S|}\log p_\theta(z) - \log Z_s(f_{s,\theta}(z))$, where the constant collects terms that do not depend on the variational parameters $\phi$.

## G Mixture Model Extensions For Different Variational Bounds

We consider the optimization of an augmented variational bound

$$\begin{aligned}
\mathcal{L}(x,\theta,\phi)=\int\rho(\mathcal{S})\Big[&\int q_{\phi}(c,z|x_{\mathcal{S}})\left[\log p_{\theta}(c,x_{\mathcal{S}}|z)\right]\mathrm{d}z\,\mathrm{d}c-\mathsf{KL}\big(q_{\phi}(c,z|x_{\mathcal{S}})\,\big|\,p_{\theta}(c,z)\big)\\
&+\int q_{\phi}(c,z|x)\left[\log p_{\theta}(x_{\setminus\mathcal{S}}|z)\right]\mathrm{d}z\,\mathrm{d}c-\mathsf{KL}\big(q_{\phi}(c,z|x)\,\big|\,q_{\phi}(c,z|x_{\mathcal{S}})\big)\Big]\,\mathrm{d}\mathcal{S}.
\end{aligned}$$

We will pursue here an encoding approach that does not require modeling the encoding distribution over the discrete latent variables explicitly, thus avoiding large variances in score-based Monte Carlo estimators (Ranganath et al., 2014), or resorting to advanced variance-reduction techniques (Kool et al., 2019) or alternatives such as continuous relaxation approaches (Jang et al., 2016; Maddison et al., 2016). Assuming a structured variational density of the form

$$q_{\phi}(c,z|x_{S})=q_{\phi}(z|x_{S})\,q_{\phi}(c|z,x_{S}),$$

we can express the augmented version of (3) via

$$\begin{aligned}
\mathcal{L}_{\mathcal{S}}(x_{\mathcal{S}},\theta,\phi)&=\int q_{\phi}(c,z|x_{\mathcal{S}})\left[\log p_{\theta}(c,x_{\mathcal{S}}|z)\right]\mathrm{d}z-\beta\,\mathsf{KL}\left(q_{\phi}(c,z|x_{\mathcal{S}})\,\big|\,p_{\theta}(c,z)\right)\\
&=\int q_{\phi}(z|x_{\mathcal{S}})\left[f_{x}(z,x_{\mathcal{S}})+f_{c}(z,x_{\mathcal{S}})\right]\mathrm{d}z,
\end{aligned}$$

where $f_x(z,x_S) = \log p_\theta(x_S|z) - \beta\log q_\phi(z|x_S)$ and

$$f_{c}(z,x_{\mathcal{S}})=\int q_{\phi}(c|z,x_{\mathcal{S}})\left[-\beta\log q_{\phi}(c|z,x_{\mathcal{S}})+\beta\log p_{\theta}(c,z)\right]\mathrm{d}c. \tag{19}$$

We can also write the augmented version of (5) in the form of

$$\begin{aligned}
\mathcal{L}_{\setminus\mathcal{S}}(x,\theta,\phi)&=\int q_{\phi}(c,z|x)\left[\log p_{\theta}(x_{\setminus\mathcal{S}}|z)\right]\mathrm{d}z-\beta\,\mathsf{KL}\big(q_{\phi}(c,z|x)\,\big|\,q_{\phi}(c,z|x_{\mathcal{S}})\big)\\
&=\int q_{\phi}(z|x)\,g_{x}(z,x)\,\mathrm{d}z,
\end{aligned}$$

where

$$g_{x}(z,x)=\log p_{\theta}(x_{\setminus\mathcal{S}}|z)-\beta\log q_{\phi}(z|x)+\beta\log q_{\phi}(z|x_{\mathcal{S}}),$$

which does not depend on the encoding density of the cluster variable. To optimize the variational bound with respect to the cluster density, we can thus optimize (19), which attains its maximum value of

$$f_{c}^{\star}(z,x_{\mathcal{S}})=\beta\log\int p_{\theta}(c)\,p_{\theta}(z|c)\,\mathrm{d}c=\beta\log p_{\theta}(z)$$

at $q_\phi(c|z,x_S) = p_\theta(c|z)$, due to Remark 20 with $g(c) = \beta\log p_\theta(c,z)$.

We can derive an analogous optimal structured variational density for the mixture-based and total-correlation-based variational bounds. First, we can write the mixture-based bound (1) as

$$\begin{aligned}
\mathcal{L}_{S}^{\mathrm{Mix}}(x,\theta,\phi)&=\int q_{\phi}(c,z|x_{S})\left[\log p_{\theta}(c,x|z)\right]\mathrm{d}z-\beta\,\mathsf{KL}\big(q_{\phi}(c,z|x_{S})\,\big|\,p_{\theta}(c,z)\big)\\
&=\int q_{\phi}(z|x_{S})\left[f_{x}^{\mathrm{Mix}}(z,x)+f_{c}(z,x)\right]\mathrm{d}z,
\end{aligned}$$

where $f^{\mathrm{Mix}}_x(z,x) = \log p_\theta(x|z) - \beta\log q_\phi(z|x_S)$ and $f_c(z,x)$ has a maximum value of $f^\star_c(z,x) = \beta\log p_\theta(z)$.
Second, we can express the corresponding terms from the total-correlation-based bound as

$$\begin{aligned}
\mathcal{L}_{\mathcal{S}}^{\mathrm{TC}}(\theta,\phi)&=\int q_{\phi}(z|x)\left[\log p_{\theta}(x|z)\right]\mathrm{d}z-\beta\,\mathsf{KL}\big(q_{\phi}(c,z|x)\,\big|\,q_{\phi}(c,z|x_{\mathcal{S}})\big)\\
&=\int q_{\phi}(z|x)\left[f_{x}^{\mathrm{TC}}(z,x)\right]\mathrm{d}z,
\end{aligned}$$

where $f^{\mathrm{TC}}_x(z,x) = \log p_\theta(x|z) - \beta\log q_\phi(z|x) + \beta\log q_\phi(z|x_S)$.

## H Algorithm And STL-Gradient Estimators

We consider a multi-modal extension of the sticking-the-landing (STL) gradient estimator (Roeder et al., 2017) that has also been used in previous multi-modal bounds (Shi et al., 2019). For variance reduction, the estimator ignores the score-function terms when sampling from $q_\phi(z|x_S)$, as they have zero expectation. For the bound (2), which involves sampling from $q_\phi(z|x_S)$ and $q_\phi(z|x_{\mathcal M})$, we thus ignore the score terms for both integrals. Consider the reparameterization with noise variables $\epsilon_S,\epsilon_{\mathcal M}\sim p$ and transformations

$$z_{S}=t_{S}(\phi,\epsilon_{S},x_{S})=f_{\text{invariant-agg}}(\vartheta,\epsilon_{S},S,h_{S}),\quad h_{S}=(h_{\varphi,s}(x_{s}))_{s\in S},$$

and

$$z_{\mathcal{M}}=t_{\mathcal{M}}(\phi,\epsilon_{\mathcal{M}},x_{\mathcal{M}})=f_{\text{invariant-agg}}(\vartheta,\epsilon_{\mathcal{M}},\mathcal{M},h_{\mathcal{M}}),\quad h_{\mathcal{M}}=(h_{\varphi,s}(x_{s}))_{s\in\mathcal{M}}.$$

We need to learn only a single aggregation function that masks the modalities appropriately. Pseudo-code for computing the gradients is given in Algorithm 1. If the encoding distribution is a mixture distribution, we apply the stop-gradient operation also to the mixture weights. Notice that in the case of a mixture prior and an encoding distribution that includes the mixture component, the optimal encoding density over the mixture variable has no variational parameters and is given as the posterior density of the mixture component under the generative parameters of the prior.

Algorithm 1 Single training step for computing unbiased gradients of $\mathcal{L}(x)$.
Input: Multi-modal data point $x$, generative parameters $\theta$, variational parameters $\phi = (\varphi,\vartheta)$.
Sample $S\sim\rho$ and $\epsilon_S,\epsilon_{\mathcal M}\sim p$.
Set $z_S = t_S(\phi,\epsilon_S,x_S)$ and $z_{\mathcal M} = t_{\mathcal M}(\phi,\epsilon_{\mathcal M},x_{\mathcal M})$.
Stop gradients of the variational parameters: $\phi' = \mathrm{stop\_grad}(\phi)$.
Set $\widehat{\mathcal{L}}_S(\theta,\phi) = \log p_\theta(x_S|z_S) + \beta\log p_\theta(z_S) - \beta\log q_{\phi'}(z_S|x_S)$.
Set $\widehat{\mathcal{L}}_{\setminus S}(\theta,\phi) = \log p_\theta(x_{\setminus S}|z_{\mathcal M}) + \beta\log q_\phi(z_{\mathcal M}|x_S) - \beta\log q_{\phi'}(z_{\mathcal M}|x_{\mathcal M})$.
Output: $\nabla_{\theta,\phi}\big[\widehat{\mathcal{L}}_S(\theta,\phi) + \widehat{\mathcal{L}}_{\setminus S}(\theta,\phi)\big]$.

In the case of private latent variables, we proceed analogously and rely on reparameterizations $z'_S = t'_S(\phi,\epsilon'_S,x_S)$ for the shared latent variable $z'_S\sim q_\phi(z'|x_S)$ as above, and

$$\tilde{z}_{S}=\tilde{t}_{S}(\phi,z^{\prime},\epsilon_{S},x_{S})=f_{\text{equivariant-agg}}(\vartheta,\epsilon_{S},z^{\prime},S,h_{S})$$

for the private latent variables $\tilde z_S\sim q_\phi(\tilde z_S|z',x_S)$. Moreover, we write $P_S$ for the projection onto the $S$-coordinates. Pseudo-code for computing unbiased gradient estimates of our bound is given in Algorithm 2.

Algorithm 2 Single training step for computing unbiased gradients of $\mathcal{L}(x)$ with private latent variables.
Input: Multi-modal data point $x$, generative parameters $\theta$, variational parameters $\phi = (\varphi,\vartheta)$.
Sample $S\sim\rho$ and $\epsilon'_S,\epsilon_S,\epsilon'_{\mathcal M},\epsilon_{\mathcal M}\sim p$.
Set $z'_S = t'_S(\phi,\epsilon'_S,x_S)$ and $\tilde z_S = \tilde t_S(\phi,z'_S,\epsilon_S,x_S)$.
Set $z'_{\mathcal M} = t'_{\mathcal M}(\phi,\epsilon'_{\mathcal M},x_{\mathcal M})$ and $\tilde z_{\mathcal M} = \tilde t_{\mathcal M}(\phi,z'_{\mathcal M},\epsilon_{\mathcal M},x_{\mathcal M})$.
Stop gradients of the variational parameters: $\phi' = \mathrm{stop\_grad}(\phi)$.
Set $\widehat{\mathcal{L}}_S(\theta,\phi) = \log p_\theta(x_S|z'_S,\tilde z_S) + \beta\log p_\theta(z'_S) - \beta\log q_{\phi'}(z'_S|x_S) + \beta\log p_\theta(\tilde z_S|z'_S) - \beta\log q_{\phi'}(\tilde z_S|z'_S,x_S)$.
Set $\widehat{\mathcal{L}}_{\setminus S}(\theta,\phi) = \log p_\theta(x_{\setminus S}|z'_{\mathcal M},P_{\setminus S}(\tilde z_{\mathcal M})) + \beta\log q_\phi(z'_{\mathcal M}|x_S) - \beta\log q_{\phi'}(z'_{\mathcal M}|x_{\mathcal M}) + \beta\log q_\phi(P_S(\tilde z_{\mathcal M})|z'_{\mathcal M},x_S) + \beta\log p_\theta(P_{\setminus S}(\tilde z_{\mathcal M})|z'_{\mathcal M}) - \beta\log q_{\phi'}(\tilde z_{\mathcal M}|z'_{\mathcal M},x_{\mathcal M})$.
Output: $\nabla_{\theta,\phi}\big[\widehat{\mathcal{L}}_S(\theta,\phi) + \widehat{\mathcal{L}}_{\setminus S}(\theta,\phi)\big]$.
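A JAX sketch of Algorithm 1 is given below; the callables are illustrative interfaces for the encoder, decoder and prior, not our exact implementation.

```python
import jax
import jax.numpy as jnp

def stl_objective(key, theta, phi, x, sample_mask, reparam, log_q, log_p, log_prior, beta=1.0):
    # Single-sample sticking-the-landing estimate of L_S + L_{\S} (Algorithm 1).
    # Assumed interfaces: sample_mask draws a 0/1 membership vector m for S ~ rho;
    # reparam(phi, key, x, m) returns z = t(phi, eps, x, m); log_q(phi, z, x, m)
    # evaluates the aggregated encoder q_phi(z | x_S); log_p(theta, z, x, m)
    # returns the decoder term summed over the modalities with m_s = 1.
    k1, k2, k3 = jax.random.split(key, 3)
    m = sample_mask(k1)                          # membership vector of S ~ rho
    m_all = jnp.ones_like(m)                     # full modality set M
    z_S = reparam(phi, k2, x, m)
    z_M = reparam(phi, k3, x, m_all)
    phi_sg = jax.tree_util.tree_map(jax.lax.stop_gradient, phi)  # drop score terms
    L_S = (log_p(theta, z_S, x, m) + beta * log_prior(theta, z_S)
           - beta * log_q(phi_sg, z_S, x, m))
    L_not_S = (log_p(theta, z_M, x, m_all - m)   # reconstruct x_{\S} from z_M
               + beta * log_q(phi, z_M, x, m)
               - beta * log_q(phi_sg, z_M, x, m_all))
    return L_S + L_not_S
```

Unbiased gradients with respect to $(\theta,\phi)$ then follow by differentiating the (negated) objective, e.g., with `jax.grad`.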
## I Evaluation Of Multi-Modal Generative Models

We evaluate models using different metrics suggested previously for multi-modal learning, see for example Shi et al. (2019); Wu and Goodman (2019); Sutter et al. (2021).

**Marginal, conditional and joint log-likelihoods.** We can estimate the marginal log-likelihood using classic importance sampling

$$\log p_{\theta}(x_{\mathcal{S}})\approx\log\frac{1}{K}\sum_{k=1}^{K}\frac{p_{\theta}(z^{k},x_{\mathcal{S}})}{q_{\phi}(z^{k}|x_{\mathcal{S}})}$$

for $z^{k}\sim q_{\phi}(\cdot|x_{\mathcal{S}})$. This also allows us to approximate the joint log-likelihood $\log p_{\theta}(x)$ and, consequently, the conditional $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})=\log p_{\theta}(x)-\log p_{\theta}(x_{\mathcal{S}})$.

**Generative coherence with joint auxiliary labels.** Following previous work (Shi et al., 2019; Sutter et al., 2021; Daunhawer et al., 2022; Javaloy et al., 2022), we assess whether the generated data share the same information in the form of the class labels across different modalities. To do so, we use pre-trained classifiers $\mathrm{clf}_{s}:\mathcal{X}_{s}\to[K]$ that classify values from modality $s$ into $K$ possible classes. More precisely, for $\mathcal{S}\subset\mathcal{M}$ and $m\in\mathcal{M}$, we compute the self- ($m\in\mathcal{S}$) or cross- ($m\notin\mathcal{S}$) coherence $C_{\mathcal{S}\to m}$ as the empirical average of $\mathbf{1}\{\mathrm{clf}_{m}(\hat{x}_{m})=y\}$ over test samples $x$ with label $y$, where $\hat{z}_{\mathcal{S}}\sim q_{\phi}(z|x_{\mathcal{S}})$ and $\hat{x}_{m}\sim p_{\theta}(x_{m}|\hat{z}_{\mathcal{S}})$. The case $\mathcal{S}=\mathcal{M}\setminus\{m\}$ corresponds to a leave-one-out conditional coherence.

**Linear classification accuracy of latent representations.** To evaluate how the latent representation can be used to predict the shared information contained in the modality subset $\mathcal{S}$ based on a linear model, we consider the accuracy $\mathrm{Acc}_{\mathcal{S}}$ of a linear classifier $\mathrm{clf}_{z}:\mathcal{Z}\to[K]$ that is trained to predict the label based on latent samples $z_{\mathcal{S}}\sim q_{\phi}(z|x^{\mathrm{train}}_{\mathcal{S}})$ from the training values $x^{\mathrm{train}}_{\mathcal{S}}$ and evaluated on latent samples $z_{\mathcal{S}}\sim q_{\phi}(z|x^{\mathrm{test}}_{\mathcal{S}})$ from the test values $x^{\mathrm{test}}_{\mathcal{S}}$.
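As an illustration of the first of these metrics, here is a minimal sketch of the importance-sampling estimator, again assuming a diagonal Gaussian encoder purely for illustration; the stable log-mean-exp computation is the only essential ingredient.

```python
import jax
import jax.numpy as jnp

def is_marginal_loglik(key, mu, log_sigma, log_joint, K=512):
    # log p(x_S) ~= log (1/K) sum_k p(z^k, x_S) / q(z^k | x_S), z^k ~ q(.|x_S).
    eps = jax.random.normal(key, (K,) + mu.shape)
    z = mu + jnp.exp(log_sigma) * eps                  # K samples from q
    log_q = jnp.sum(-0.5 * jnp.log(2.0 * jnp.pi) - log_sigma
                    - 0.5 * ((z - mu) / jnp.exp(log_sigma)) ** 2, axis=-1)
    log_p = jax.vmap(log_joint)(z)                     # log p_theta(z^k, x_S)
    return jax.scipy.special.logsumexp(log_p - log_q) - jnp.log(K)
```

Conditional log-likelihoods then follow from the difference $\log p_{\theta}(x_{\setminus\mathcal{S}}|x_{\mathcal{S}})=\log p_{\theta}(x)-\log p_{\theta}(x_{\mathcal{S}})$ stated above.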
## J Linear Models

**Data generation.** We generate 5 data sets of $N = 5000$ samples, each with $M = 5$ modalities. We set the latent dimension to $D = 30$, while the dimension $D_s$ of modality $s$ is drawn from $\mathcal{U}(30, 60)$. We set the observation noise to $\sigma = 1$, shared across all modalities, as is standard for a PCA model. We sample the components of $b_s$ independently from $\mathcal{N}(0, 1)$. For the setting without modality-specific latent variables, $W_s$ is the orthonormal matrix from a QR algorithm applied to a matrix with elements sampled iid from $\mathcal{U}(-1, 1)$. The bias coefficients $W_b$ are sampled independently from $\mathcal{N}(0, 1/d)$. Conversely, the setting with private latent variables in the ground truth model allows us to describe modality-specific variation by considering the sparse loading matrix

$$W_{\mathcal{M}}=\begin{bmatrix}W_{1}^{\prime}&\tilde{W}_{1}&0&\dots&0\\ W_{2}^{\prime}&0&\tilde{W}_{2}&\dots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ W_{M}^{\prime}&0&\dots&0&\tilde{W}_{M}\end{bmatrix}.$$

Here, $W'_{s}, \tilde{W}_{s}\in\mathbb{R}^{D_{s}\times D'}$ with $D' = D/(M+1) = 5$. Furthermore, the latent variable $Z$ can be written as $Z = (Z', \tilde{Z}_{1}, \ldots, \tilde{Z}_{M})$ for private and shared latent variables $\tilde{Z}_{s}$, resp. $Z'$. We similarly generate orthonormal $W'_{s}, \tilde{W}_{s}$ from a QR decomposition. Observe that the general generative model with latent variable $Z$ corresponds to the generative model (9) with shared $Z'$ and private latent variables $\tilde{Z}$ with straightforward adjustments for the decoding functions.

Similar models have been considered previously, particularly from a Bayesian standpoint with different sparsity assumptions on the generative parameters (Archambeau and Bach, 2008; Virtanen et al., 2012; Zhao et al., 2016).

**Maximum likelihood estimation.** Assume now that we observe $N$ data points $\{x_n\}_{n\in[N]}$, consisting of stacking the views $x_n = (x_{s,n})_{s\in\mathcal{S}}$ for each modality in $\mathcal{S}$, and let $S = \frac{1}{N}\sum_{n=1}^{N}(x_n - b)(x_n - b)^{\top}\in\mathbb{R}^{D_x\times D_x}$, $D_x = \sum_{s=1}^{M}D_s$, be the sample covariance matrix across all modalities. Let $U_D\in\mathbb{R}^{D_x\times D}$ be the matrix of the first $D$ eigenvectors of $S$ with corresponding eigenvalues $\lambda_1, \ldots, \lambda_D$ stored in the diagonal matrix $\Lambda_D\in\mathbb{R}^{D\times D}$. The maximum likelihood estimates are then given by $b_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}x_n$, $\sigma^2_{\mathrm{ML}} = \frac{1}{D_x - D}\sum_{j=D+1}^{D_x}\lambda_j$ and $W_{\mathrm{ML}} = U_D(\Lambda_D - \sigma^2_{\mathrm{ML}}I)^{1/2}$, with the loading matrix identifiable up to rotations.

**Model architectures.** We estimate the observation noise scale $\sigma$ based on the maximum likelihood estimate $\sigma_{\mathrm{ML}}$. We assume linear decoder functions $p_{\theta}(x_s|z) = \mathcal{N}(W^{\theta}_{s}z + b^{\theta}_{s}, \sigma^2_{\mathrm{ML}})$, a fixed standard Gaussian prior $p(z) = \mathcal{N}(0, I)$ and generative parameters $\theta = (W^{\theta}_{1}, b^{\theta}_{1}, \ldots, W^{\theta}_{M}, b^{\theta}_{M})$. Details about the various encoding architectures are given in Table 15. The modality-specific encoding functions for the PoE and MoE schemes have a hidden size of 512, whilst they are of size 256 for the learnable aggregation schemes having additional aggregation parameters $\vartheta$.
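The maximum likelihood estimates described above can be computed directly from an eigendecomposition of the sample covariance. Below is a short sketch with our own helper name; we assume the discarded-eigenvalue average runs over the remaining $D_x - D$ directions, as in standard probabilistic PCA.

```python
import numpy as np

def ppca_mle(X, D):
    # X: (N, Dx) matrix of stacked multi-modal observations; D: latent dim.
    N, Dx = X.shape
    b_ml = X.mean(axis=0)
    S = (X - b_ml).T @ (X - b_ml) / N              # sample covariance
    evals, evecs = np.linalg.eigh(S)               # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]     # sort descending
    sigma2_ml = evals[D:].mean()                   # average discarded eigenvalue
    W_ml = evecs[:, :D] * np.sqrt(np.clip(evals[:D] - sigma2_ml, 0.0, None))
    return b_ml, sigma2_ml, W_ml
```

Note that `W_ml` recovers the loading matrix only up to a rotation, consistent with the identifiability statement above.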
## K Non-Linear Identifiable Models

We also show in Figure 4 the reconstructed modality values and inferred latent variables for one realization with our bound, with the corresponding results for a mixture-based bound in Figure 5.

![42_image_0.png](42_image_0.png)

Figure 4: Bi-modal non-linear model with label and continuous modality based on our proposed objective. SumP: SumPooling, SumPM: SumPoolingMixture.

![43_image_0.png](43_image_0.png)

Figure 5: Bi-modal non-linear model with label and continuous modality based on mixture bound. SumP: SumPooling, SumPM: SumPoolingMixture.

## L MNIST-SVHN-Text

## L.1 Training Hyperparameters

The MNIST-SVHN-Text data set is taken from the code accompanying Sutter et al. (2021) with around 1.1 million train and 200k test samples. All models are trained for 100 epochs with a batch size of 250 using Adam (Kingma and Ba, 2014) and a cosine decay schedule from 0.0005 to 0.0001.

## L.2 Multi-Modal Rates And Distortions

![44_image_0.png](44_image_0.png)

Figure 6: Rate and distortion terms for MNIST-SVHN-Text with shared and private latent variables.

![45_image_0.png](45_image_0.png)

Figure 7: Rate and distortion terms for MNIST-SVHN-Text with shared latent variables and different β.

## L.3 Log-Likelihood Estimates

Table 7: Test log-likelihood estimates for varying β choices for the joint data (M+S+T) as well as for the marginal data of each modality based on importance sampling (512 particles). Multi-modal generative model with a 40-dimensional shared latent variable. The second part of the table contains reported log-likelihood values from baseline methods that, however, impose more restrictive assumptions on the decoder variances, which likely contributes to the much lower log-likelihood values reported in previous works, irrespective of variational objectives and aggregation schemes.

| (β, Aggregation) | Proposed objective | | | | Mixture bound | | | |
|---|---|---|---|---|---|---|---|---|
| | M+S+T | M | S | T | M+S+T | M | S | T |
| (0.1, PoE+) | 5433 (24.5) | 1786 (41.6) | 3578 (63.5) | -29 (2.4) | 5481 (18.4) | 2207 (19.8) | 3180 (33.7) | -39 (1.0) |
| (0.1, SumPooling) | 7067 (78.0) | 2455 (3.3) | 4701 (83.5) | -9 (0.4) | 6061 (15.7) | 2398 (9.3) | 3552 (7.4) | -50 (1.9) |
| (1.0, PoE+) | 6872 (9.6) | 2599 (5.6) | 4317 (1.1) | -9 (0.2) | 5900 (10.0) | 2449 (10.4) | 3443 (11.7) | -19 (0.4) |
| (1.0, SumPooling) | 7056 (124.4) | 2478 (9.3) | 4640 (113.9) | -6 (0.0) | 6130 (4.4) | 2470 (10.3) | 3660 (1.5) | -16 (1.6) |
| (4.0, PoE+) | 7021 (13.3) | 2673 (13.2) | 4413 (30.5) | -5 (0.1) | 5895 (6.2) | 2484 (5.5) | 3434 (2.2) | -13 (0.4) |
| (4.0, SumPooling) | 6690 (113.4) | 2483 (9.9) | 4259 (117.2) | -5 (0.0) | 5659 (48.3) | 2448 (10.5) | 3233 (27.7) | -10 (0.2) |
| Results from Sutter et al. (2021) and Sutter et al. (2020) | | | | | | | | |
| MVAE | -1790 (3.3) | NA | NA | NA | | | | |
| MMVAE | -1941 (5.7) | NA | NA | NA | | | | |
| MoPoE | -1819 (5.7) | NA | NA | NA | | | | |
| MMJSD | -1961 (NA) | NA | NA | NA | | | | |

## L.4 Generated Modalities

![46_image_0.png](46_image_0.png)

(a) Proposed objective, β = 0.1 (b) Proposed objective, β = 4 (c) Mixture-based bound, β = 0.1 (d) Mixture-based bound, β = 4

Figure 8: Conditional generation for different β parameters. The first column is the conditioned modality. The next three columns are the generated modalities using a SumPooling aggregation, followed by the three columns for a PoE+ scheme.

![47_image_0.png](47_image_0.png)

(a) Our bound (b) Mixture-based bound

Figure 9: Conditional generation for permutation-equivariant schemes and private latent variable constraints. The first column is the conditioned modality. The next three columns are the generated modalities using a SumPooling aggregation, followed by the three columns for a SelfAttention scheme and a PoE model.

## L.5 Conditional Coherence

Table 8: Conditional coherence for models with shared latent variables and bi-modal conditionals. The letters on the second line represent the modality which is generated based on the sets of modalities on the line below it.
| Proposed objective | | | | | | | Mixture bound | | | | | | | | | | | | |---------------------------------------------------------------------------------|------|------|------|------|------|------|-----------------|------|------|------|------|------|------|------|------|------|------|------| | M | | S | T | | | M | | | S | T | | | | | | | | | | Aggregation | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | | PoE | 0.98 | 0.98 | 0.60 | 0.75 | 0.58 | 0.77 | 0.82 | 1.00 | 1.00 | 0.96 | 0.97 | 0.95 | 0.61 | 0.11 | 0.61 | 0.45 | 0.99 | 0.98 | | PoE+ | 0.97 | 0.98 | 0.55 | 0.73 | 0.52 | 0.75 | 0.83 | 1.00 | 0.99 | 0.97 | 0.97 | 0.96 | 0.64 | 0.11 | 0.63 | 0.45 | 0.99 | 0.97 | | MoE | 0.88 | 0.97 | 0.90 | 0.35 | 0.11 | 0.35 | 0.41 | 0.72 | 0.69 | 0.88 | 0.96 | 0.89 | 0.32 | 0.10 | 0.33 | 0.42 | 0.72 | 0.69 | | MoE+ | 0.85 | 0.94 | 0.86 | 0.32 | 0.10 | 0.32 | 0.40 | 0.71 | 0.67 | 0.87 | 0.96 | 0.89 | 0.32 | 0.10 | 0.32 | 0.42 | 0.72 | 0.69 | | SumPooling | 0.97 | 0.97 | 0.86 | 0.78 | 0.30 | 0.80 | 0.76 | 0.99 | 1.00 | 0.97 | 0.97 | 0.95 | 0.65 | 0.10 | 0.65 | 0.45 | 0.99 | 0.97 | | SelfAttention | 0.97 | 0.97 | 0.82 | 0.76 | 0.30 | 0.78 | 0.69 | 1.00 | 1.00 | 0.97 | 0.97 | 0.99 | 0.66 | 0.10 | 0.65 | 0.45 | 0.99 | 1.00 | | Results from Sutter et al. (2021), Sutter et al. (2020) and Hwang et al. (2021) | | | | | | | | | | | | | | | | | | | | MVAE | NA | NA | 0.32 | NA | 0.43 | NA | 0.29 | NA | NA | | | | | | | | | | | MMVAE | NA | NA | 0.87 | NA | 0.31 | NA | 0.84 | NA | NA | | | | | | | | | | | MoPoE | NA | NA | 0.94 | NA | 0.36 | NA | 0.93 | NA | NA | | | | | | | | | | | MMJSD | NA | NA | 0.95 | NA | 0.48 | NA | 0.92 | NA | NA | | | | | | | | | | | MVTCAE (w/o T) | NA | NA | NA | NA | NA | NA | NA | NA | NA | | | | | | | | | | | Proposed objective | | Mixture bound | | | | | | | | | | | | | | | | | |----------------------|------|-----------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------| | M | S | T | M | S | T | | | | | | | | | | | | | | | Aggregation | M | S | T | M | S | T | M | S | T | M | S | T | M | S | T | M | S | T | | PoE+ | 0.97 | 0.12 | 0.13 | 0.20 | 0.62 | 0.24 | 0.16 | 0.15 | 1.00 | 0.96 | 0.83 | 0.99 | 0.11 | 0.58 | 0.11 | 0.44 | 0.39 | 1.00 | | SumPooling | 0.97 | 0.42 | 0.59 | 0.44 | 0.67 | 0.40 | 0.65 | 0.45 | 1.00 | 0.97 | 0.86 | 0.99 | 0.11 | 0.62 | 0.11 | 0.45 | 0.40 | 1.00 | | SelfAttention | 0.97 | 0.12 | 0.12 | 0.27 | 0.71 | 0.28 | 0.46 | 0.40 | 1.00 | 0.96 | 0.09 | 0.08 | 0.12 | 0.67 | 0.12 | 0.15 | 0.17 | 1.00 | Table 9: Conditional coherence for models with private latent variables and uni-modal conditionals. The letters on the second line represent the modality which is generated based on the sets of modalities on the line below it. Table 10: Conditional coherence for models with private latent variables and bi-modal conditionals. The letters on the second line represent the modality, which is generated based on the sets of modalities on the line below it. 
| | Proposed objective | | Mixture bound | | | | | | | | | | | | | | | | |---------------|----------------------|------|-----------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------| | M | S | T | M | S | T | | | | | | | | | | | | | | | Aggregation | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | | PoE+ | 0.97 | 0.97 | 0.14 | 0.66 | 0.33 | 0.67 | 0.18 | 1.00 | 1.00 | 0.97 | 0.97 | 0.94 | 0.63 | 0.11 | 0.63 | 0.45 | 0.99 | 0.96 | | SumPooling | 0.97 | 0.97 | 0.54 | 0.79 | 0.43 | 0.80 | 0.57 | 1.00 | 1.00 | 0.97 | 0.97 | 0.93 | 0.64 | 0.11 | 0.63 | 0.45 | 0.99 | 0.97 | | SelfAttention | 0.97 | 0.97 | 0.12 | 0.80 | 0.29 | 0.81 | 0.49 | 1.00 | 1.00 | 0.96 | 0.96 | 0.08 | 0.70 | 0.12 | 0.70 | 0.15 | 1.00 | 1.00 | | | Proposed objective | | | | | | | | Mixture bound | | | | | | | | | | |-------------------|----------------------|------|------|------|------|------|------|------|-----------------|------|------|------|------|------|------|------|------|------| | | M | | S | | | T | | M | | S | | T | | | | | | | | (β, Aggregation) | M | S | T | M | S | T | M | S | T | M | S | T | M | S | T | M | S | T | | (0.1, PoE+) | 0.98 | 0.11 | 0.12 | 0.12 | 0.62 | 0.14 | 0.61 | 0.25 | 1.00 | 0.96 | 0.83 | 0.99 | 0.11 | 0.58 | 0.11 | 0.45 | 0.39 | 1.00 | | (0.1, SumPooling) | 0.97 | 0.48 | 0.81 | 0.30 | 0.72 | 0.33 | 0.86 | 0.55 | 1.00 | 0.97 | 0.86 | 0.99 | 0.11 | 0.64 | 0.11 | 0.45 | 0.40 | 1.00 | | (1.0, PoE+) | 0.97 | 0.15 | 0.63 | 0.24 | 0.63 | 0.42 | 0.79 | 0.35 | 1.00 | 0.96 | 0.83 | 0.99 | 0.11 | 0.59 | 0.11 | 0.45 | 0.39 | 1.00 | | (1.0, SumPooling) | 0.97 | 0.48 | 0.87 | 0.25 | 0.72 | 0.36 | 0.73 | 0.48 | 1.00 | 0.97 | 0.86 | 0.99 | 0.10 | 0.63 | 0.10 | 0.45 | 0.40 | 1.00 | | (4.0, PoE+) | 0.97 | 0.29 | 0.83 | 0.41 | 0.60 | 0.58 | 0.76 | 0.38 | 1.00 | 0.96 | 0.82 | 0.99 | 0.10 | 0.57 | 0.10 | 0.44 | 0.38 | 1.00 | | (4.0, SumPooling) | 0.97 | 0.48 | 0.88 | 0.35 | 0.66 | 0.44 | 0.83 | 0.53 | 1.00 | 0.96 | 0.85 | 0.99 | 0.11 | 0.57 | 0.10 | 0.45 | 0.39 | 1.00 | | Proposed objective | | | | | | | Mixture bound | | | | | | | | | | | | |----------------------|------|------|------|------|------|------|-----------------|------|------|------|------|------|------|------|------|------|------|------| | M | S | | | T | | M | S | | | T | | | | | | | | | | (β, Aggregation) | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | M+S | M+T | S+T | | (0.1, PoE+) | 0.98 | 0.98 | 0.15 | 0.70 | 0.14 | 0.72 | 0.66 | 1.00 | 1.00 | 0.96 | 0.96 | 0.93 | 0.62 | 0.11 | 0.62 | 0.45 | 0.99 | 0.95 | | (0.1, SumPooling) | 0.97 | 0.97 | 0.86 | 0.83 | 0.31 | 0.84 | 0.85 | 0.99 | 1.00 | 0.97 | 0.97 | 0.94 | 0.66 | 0.11 | 0.65 | 0.45 | 0.99 | 0.96 | | (1.0, PoE+) | 0.97 | 0.98 | 0.55 | 0.73 | 0.52 | 0.75 | 0.83 | 1.00 | 0.99 | 0.97 | 0.97 | 0.96 | 0.64 | 0.11 | 0.63 | 0.45 | 0.99 | 0.97 | | (1.0, SumPooling) | 0.97 | 0.97 | 0.86 | 0.78 | 0.30 | 0.80 | 0.76 | 0.99 | 1.00 | 0.97 | 0.97 | 0.95 | 0.65 | 0.10 | 0.65 | 0.45 | 0.99 | 0.97 | | (4.0, PoE+) | 0.97 | 0.98 | 0.84 | 0.76 | 0.66 | 0.78 | 0.82 | 1.00 | 1.00 | 0.97 | 0.97 | 0.96 | 0.62 | 0.10 | 0.62 | 0.45 | 0.99 | 0.98 | | (4.0, SumPooling) | 0.97 | 0.97 | 0.89 | 0.77 | 0.40 | 0.78 | 0.86 | 0.99 | 1.00 | 0.97 | 0.97 | 0.96 | 0.61 | 0.10 | 0.60 | 0.45 | 0.99 | 0.97 | Table 11: Conditional coherence for models with shared latent variables for different βs and uni-modal conditionals. 
The letters on the second line represent the modality which is generated based on the sets of modalities on the line below it. Table 12: Conditional coherence for models with shared latent variables for different βs and bi-modal conditionals. The letters on the second line represent the modality, which is generated based on the sets of modalities on the line below it. ## L.6 Latent Classification Accuracy | Proposed objective | | | Mixture bound | | | | | | |---------------------------------------------------------------------------------|---------------|---------------|-----------------|---------------|---------------|---------------|---------------|---------------| | Aggregation | M+S+T | M | S | T | M+S+T | M | S | T | | PoE | 0.988 (0.000) | 0.940 (0.009) | 0.649 (0.039) | 0.998 (0.001) | 0.991 (0.004) | 0.977 (0.002) | 0.845 (0.000) | 1.000 (0.000) | | PoE+ | 0.978 (0.002) | 0.934 (0.001) | 0.624 (0.040) | 0.999 (0.001) | 0.998 (0.000) | 0.981 (0.000) | 0.851 (0.000) | 1.000 (0.000) | | MoE | 0.841 (0.008) | 0.974 (0.000) | 0.609 (0.032) | 1.000 (0.000) | 0.940 (0.001) | 0.980 (0.001) | 0.843 (0.001) | 1.000 (0.000) | | MoE+ | 0.850 (0.039) | 0.967 (0.014) | 0.708 (0.167) | 0.983 (0.023) | 0.928 (0.017) | 0.983 (0.002) | 0.846 (0.001) | 1.000 (0.000) | | SelfAttention | 0.985 (0.001) | 0.954 (0.002) | 0.693 (0.037) | 0.986 (0.006) | 0.991 (0.000) | 0.981 (0.001) | 0.864 (0.003) | 1.000 (0.000) | | SumPooling | 0.981 (0.000) | 0.962 (0.000) | 0.704 (0.014) | 0.992 (0.008) | 0.994 (0.000) | 0.983 (0.000) | 0.866 (0.002) | 1.000 (0.000) | | PoE+ | 0.979 (0.009) | 0.944 (0.000) | 0.538 (0.032) | 0.887 (0.07) | 0.995 (0.002) | 0.980 (0.002) | 0.848 (0.006) | 1.000 (0.000) | | SumPooling | 0.987 (0.004) | 0.966 (0.004) | 0.370 (0.348) | 0.992 (0.002) | 0.994 (0.001) | 0.982 (0.000) | 0.870 (0.001) | 1.000 (0.000) | | SelfAttention | 0.990 (0.003) | 0.968 (0.002) | 0.744 (0.008) | 0.985 (0.000) | 0.997 (0.001) | 0.974 (0.000) | 0.681 (0.031) | 1.000 (0.000) | | Results from Sutter et al. (2021), Sutter et al. (2020) and Hwang et al. (2021) | | | | | | | | | | MVAE | 0.96 (0.02) | 0.90 (0.01) | 0.44 (0.01) | 0.85 (0.10) | | | | | | MMVAE | 0.86 (0.03) | 0.95 (0.01) | 0.79 (0.05) | 0.99 (0.01) | | | | | | MoPoE | 0.98 (0.01) | 0.95 (0.01) | 0.80 (0.03) | 0.99 (0.01) | | | | | | MMJSD | 0.98 (NA) | 0.97 (NA) | 0.82 (NA) | 0.99 (NA) | | | | | | MVTCAE (w/o T) | NA | 0.93 (NA) | 0.78 (NA) | NA | | | | | Table 13: Unsupervised latent classification for β = 1 and models with shared latent variables only (top half) and shared plus private latent variables (bottom half). Accuracy is computed with a linear classifier (logistic regression) trained on multi-modal inputs (M+S+T) or uni-modal inputs (M, S or T). 
| (β, Aggregation) | Proposed objective | | | | Mixture bound | | | |
|-------------------|----------------------|---------------|---------------|-----------------|---------------|---------------|---------------|
| | M+S+T | M | S | T | M+S+T | M | S | T |
| (0.1, PoE+) | 0.983 (0.006) | 0.919 (0.001) | 0.561 (0.048) | 0.988 (0.014) | 0.992 (0.002) | 0.979 (0.002) | 0.846 (0.004) | 1.000 (0.000) |
| (0.1, SumPooling) | 0.982 (0.004) | 0.965 (0.002) | 0.692 (0.047) | 0.999 (0.001) | 0.994 (0.000) | 0.981 (0.002) | 0.863 (0.005) | 1.000 (0.000) |
| (1.0, PoE+) | 0.978 (0.002) | 0.934 (0.001) | 0.624 (0.040) | 0.999 (0.001) | 0.998 (0.000) | 0.981 (0.000) | 0.851 (0.000) | 1.000 (0.000) |
| (1.0, SumPooling) | 0.981 (0.000) | 0.962 (0.000) | 0.704 (0.014) | 0.992 (0.008) | 0.994 (0.000) | 0.983 (0.000) | 0.866 (0.002) | 1.000 (0.000) |
| (4.0, PoE+) | 0.981 (0.006) | 0.943 (0.007) | 0.630 (0.008) | 0.993 (0.001) | 0.998 (0.000) | 0.981 (0.000) | 0.846 (0.001) | 1.000 (0.000) |
| (4.0, SumPooling) | 0.984 (0.004) | 0.963 (0.001) | 0.681 (0.009) | 0.995 (0.000) | 0.992 (0.002) | 0.980 (0.001) | 0.856 (0.001) | 1.000 (0.000) |

Table 14: Unsupervised latent classification for different βs and models with shared latent variables only. Accuracy is computed with a linear classifier (logistic regression) trained on multi-modal inputs (M+S+T) or uni-modal inputs (M, S or T).

## M Encoder Model Architectures

## M.1 Linear Models

Table 15: Encoder architectures for Gaussian models.

| MoE/PoE | SumPooling/SelfAttention |
|------------------------------------|----------------------------|
| Input: Ds | Input: Ds |
| Dense Ds × 512, ReLU | Dense Ds × 256, ReLU |
| Dense 512 × 512, ReLU | Dense 256 × 256, ReLU |
| Dense 512 × 60 | Dense 256 × 60 |
| (c) Inner aggregation function χϑ. | |
| SumPooling | SelfAttention |
| Input: 256 | Input: 256 |
| Dense 256 × 256, ReLU | Dense 256 × 256, ReLU |
| Dense 256 × 256, ReLU | Dense 256 × 256 |
| Dense 256 × 256 | |

| PoE (h shared s and h private s ) | | SumPooling/SelfAttention |
|-------------------------------------|----------------------------------|----------------------------|
| Input: Ds | | Input: Ds |
| Dense Ds × 512, ReLU | | Dense Ds × 128, ReLU |
| Dense 512 × 512, ReLU | | Dense 128 × 128, ReLU |
| Dense 512 × 10 | | Dense 128 × 10 |
| | (c) Inner aggregation functions. | |
| SumPooling (χ0,ϑ, χ1,ϑ, χ2,ϑ) | | SelfAttention (χ1,ϑ, χ2,ϑ) |
| Input: 128 | | Input: 128 |
| Dense 128 × 128, ReLU | | Dense 128 × 128, ReLU |
| Dense 128 × 128, ReLU | | Dense 128 × 128 |
| Dense 128 × 128 | | |

(a) Modality-specific encoding functions hs(xs). Latent dimension D = 30, modality dimension Ds ∼ U(30, 60).

Input: 256 Dense 256 × 256, ReLU Dense 256 × 256, ReLU Dense 256 × 60 (c) Inner aggregation function χϑ. (b) Model for outer aggregation function ρϑ for SumPooling and SelfAttention schemes.

## M.2 Linear Models With Private Latent Variables

Table 16: Encoder architectures for Gaussian models with private latent variables.

(a) Modality-specific encoding functions hs(xs). All private and shared latent variables are of dimension 10. Modality dimension Ds ∼ U(30, 60).

(b) Model for outer aggregation function ρϑ for SumPooling scheme.

| Outer Aggregation Input: 256 Dense 256 × 256, ReLU Dense 256 × 256, ReLU Dense 256 × 60 (d) Transformer parameters.
SelfAttention (1 Layer) Input: 256 Heads: 4 Attention size: 256 Hidden size FFN: 256 |
|---|

## M.3 Nonlinear Model With Auxiliary Label

## M.4 Nonlinear Model With Five Modalities

## M.5 MNIST-SVHN-Text

| Outer Aggregation (ρϑ) Input: 128 Dense 128 × 128, ReLU Dense 128 × 128, ReLU Dense 128 × 10 (d) Transformer parameters. SelfAttention (1 Layer) Input: 128 Heads: 4 Attention size: 128 Hidden size FFN: 128 |
|---|

For SVHN and Text, we use 2d- or 1d-convolutional layers, respectively, denoted as Conv(f, k, s) for feature dimension f, kernel size k, and stride s. We denote transposed convolutions as tConv. We use the neural network architectures as implemented in Flax (Heek et al., 2023).

Table 17: Encoder architectures for nonlinear model with auxiliary label.

(a) Modality-specific encoding functions hs(xs). Modality dimension D1 = 2 (continuous modality) and D2 = 5 (label). Embedding dimension DE = 4 for PoE and MoE and DE = 128 otherwise.

| Modality-specific encoders Input: Ds Dense Ds × 128, ReLU Dense 128 × 128, ReLU Dense 128 × DE |
|---|

| (c) Inner aggregation function χϑ. | |
|---|---|
| SumPooling | SelfAttention |
| Input: 128 | Input: 128 |
| Dense 128 × 128, ReLU | Dense 128 × 128, ReLU |
| Dense 128 × 128, ReLU | Dense 128 × 128 |
| Dense 128 × 128 | |

Input: 128 Dense 128 × 128, ReLU Dense 128 × 128, ReLU Dense 128 × DO (c) Inner aggregation function χϑ.

| Outer Aggregation Input: 128 Dense 128 × 128, ReLU Dense 128 × 128, ReLU Dense 128 × DO (d) Transformer parameters. SelfAttention Input: 128 Heads: 4 Attention size: 128 Hidden size FFN: 128 |
|---|

(b) Model for outer aggregation function ρϑ for SumPooling and SelfAttention schemes and mixtures thereof. Output dimension is DO = 25 for mixture densities and DO = 4 otherwise.

| SumPooling | SelfAttention |
|-----------------------|-----------------------|
| Input: 256 | Input: 256 |
| Dense 256 × 256, ReLU | Dense 256 × 256, ReLU |
| Dense 256 × 256, ReLU | Dense 256 × 256 |
| Dense 256 × 256 | |

| MoE/PoE | SumPooling/SelfAttention |
|-----------------------|----------------------------|
| Input: Ds | Input: Ds |
| Dense Ds × 512, ReLU | Dense Ds × 256, ReLU |
| Dense 512 × 512, ReLU | Dense 256 × 256, ReLU |
| Dense 512 × 50 | Dense 256 × 256 |

Table 18: Encoder architectures for a nonlinear model with five modalities.

| SelfAttention Input: 256 Heads: 4 Attention size: 256 Hidden size FFN: 256 |
|---|

(a) Modality-specific encoding functions hs(xs). Modality dimensions Ds = 25. Latent dimension D = 25.

Input: 256 Dense 256 × 256, ReLU Dense 256 × 256, ReLU Dense 256 × DO (c) Inner aggregation function χϑ.
| Outer Aggregation Input: 256 Dense 256 × 256, ReLU Dense 256 × 256, ReLU Dense 256 × DO |
|---|

(b) Model for outer aggregation function ρϑ for SumPooling and SelfAttention schemes and mixtures thereof. Output dimension is DO = 50 for mixture densities and DO = 25 otherwise.

(d) Transformer parameters.

## M.6 MNIST-SVHN-Text With Private Latent Variables

## N MNIST-SVHN-Text Decoder Model Architectures

For models with private latent variables, we concatenate the shared and private latent variables. We use a Laplace likelihood as the decoding distribution for MNIST and SVHN, where the decoder function learns both its mean as a function of the latent and a constant log-standard-deviation at each pixel. Following previous works (Shi et al., 2019; Sutter et al., 2021), we re-weight the log-likelihoods for different modalities relative to their dimensions.

| MoE/PoE/SumPooling/SelfAttention Input: Ds Conv(128, 1, 1), ReLU Conv(128, 4, 2), ReLU Conv(128, 4, 2), ReLU, Flatten Dense 128 × DE |
|---|

| SumPooling | SelfAttention |
|----------------------|----------------------|
| Input: DE | Input: DE |
| Dense DE × DE, LReLU | Dense DE × DE, LReLU |
| Dense DE × DE, LReLU | Dense DE × DE |
| Dense DE × DE | |

Table 19: Encoder architectures for MNIST-SVHN-Text.

| MoE/PoE/SumPooling/SelfAttention Input: Ds Dense Ds × 400, ReLU Dense 400 × 400, ReLU Dense 400 × DE |
|---|

(a) MNIST-specific encoding functions hs(xs). Modality dimensions Ds = 28 × 28. The embedding dimension is DE = 2D for PoE/MoE and DE = 256 for SumPooling/SelfAttention. For PoE+/MoE+, we add four Dense layers of size 256 with ReLU activations before the last linear layer.

Input: Ds Conv(32, 4, 2), ReLU Conv(64, 4, 2), ReLU Conv(64, 4, 2), ReLU Conv(128, 4, 2), ReLU, Flatten Dense 2048 × DE

(c) Text-specific encoding functions hs(xs). Modality dimensions Ds = 8 × 71. Embedding dimension is DE = 2D for PoE/MoE and DE = 256 for permutation-invariant models (SumPooling/SelfAttention) and DE = 128 for permutation-equivariant models (SumPooling/SelfAttention). For PoE+/MoE+, we add four Dense layers of size 256 with ReLU activations before the last linear layer.

(e) Inner aggregation function χϑ for permutation-invariant models (DE = 256) and permutation-equivariant models (DE = 128).

## O Compute Resources And Existing Assets

Our computations were performed on shared HPC systems. All experiments except Section 5.3 were run on a CPU server using one or two CPU cores. The experiments in Section 5.3 were run on a GPU server using one NVIDIA A100. Our implementation is based on JAX (Bradbury et al., 2018) and Flax (Heek et al., 2023). We compute the mean correlation coefficient (MCC) between true and inferred latent variables following Khemakhem et al. (2020b), as in https://github.com/ilkhem/icebeem, and follow the data and model generation from Khemakhem et al. (2020a), https://github.com/ilkhem/iVAE, in Section 5.2, as well as https://github.com/hanmenghan/CPM_Nets from Zhang et al.
(2019) for generating the missingness mechanism. In our MNIST-SVHN-Text experiments, we use code from Sutter et al. (2021), https://github.com/thomassutter/MoPoE.

| SelfAttention (2 Layers) Input: DE Heads: 4 Attention size: DE Hidden size FFN: DE |
|---|

(b) SVHN-specific encoding functions hs(xs). Modality dimensions Ds = 3 × 32 × 32. Embedding dimension is DE = 2D for PoE/MoE and DE = 256 for SumPooling/SelfAttention. For PoE+/MoE+, we add four Dense layers of size 256 with ReLU activations before the last linear layer.

(d) Model for outer aggregation function ρϑ for SumPooling and SelfAttention schemes. Output dimension is DO = 2D = 80 for models with shared latent variables only and DO = 10 + 10 for models with private and shared latent variables. DE = 256 for permutation-invariant and DI = 128 for permutation-equivariant models.

(f) Transformer parameters for permutation-invariant models. DE = 256 for permutation-invariant and DI = 128 for permutation-equivariant models.

| MoE/PoE/SumPooling/SelfAttention Input: Ds Conv(32, 4, 2), ReLU Conv(64, 4, 2), ReLU Conv(64, 4, 2), ReLU Conv(128, 4, 2), ReLU, Flatten Dense 2048 × DE |
|---|

| Outer Aggregation Input: DE Dense DE × DE, LReLU Dense DE × DE, LReLU Dense DE × DO |
|---|

Table 20: Decoder architectures for MNIST-SVHN-Text.

| MNIST Input: DI Dense 40 × 400, ReLU Dense 400 × 400, ReLU Dense 400 × Ds, Sigmoid |
|---|

(a) MNIST decoder. DI = 40 for models with shared latent variables only, and DI = 10 + 10 otherwise.

| SVHN Input: DI Dense DI × 128, ReLU tConv(64, 4, 3), ReLU tConv(64, 4, 2), ReLU tConv(32, 4, 2), ReLU tConv(3, 4, 2) |
|---|

(b) SVHN decoder. DI = 40 for models with shared latent variables only, and DI = 10 + 10 otherwise.

| Text Input: DI Dense DI × 128, ReLU tConv(128, 4, 3), ReLU tConv(128, 4, 2), ReLU tConv(71, 1, 1) |
|---|

(c) Text decoder. DI = 40 for models with shared latent variables only, and DI = 10 + 10 otherwise.
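To make the decoding distribution of Section N concrete, here is a minimal sketch of a Laplace log-likelihood with a learned, input-independent per-pixel log-scale, together with one possible dimension-based re-weighting; the exact weighting convention is an assumption on our part rather than a quotation of the implementation.

```python
import jax.numpy as jnp

def laplace_loglik(x, mean, log_scale):
    # Laplace log-density; `mean` comes from the decoder network, while
    # `log_scale` is a learned constant per pixel (not a function of z).
    return jnp.sum(-jnp.abs(x - mean) / jnp.exp(log_scale)
                   - log_scale - jnp.log(2.0))

def reweighted_loglik(logliks, dims):
    # Rescale each modality's log-likelihood relative to its dimension so
    # that high-dimensional modalities do not dominate the objective
    # (assumed convention: scale by max_dim / own_dim).
    max_dim = max(dims)
    return sum(ll * (max_dim / d) for ll, d in zip(logliks, dims))
```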
Review 1:
Summary:
This paper proposes a tighter bound and a new aggregation scheme for VAE-based models that are infused with Mixture of Experts (MoE) or Product of Experts (PoE). The paper starts by proposing a new variational bound very similar to previous work by Daunhawer et al., 2022 and Shi et al., 2019. From my understanding, it involves dividing the modalities into two subsets $X_S$ and $X_{\setminus S}$ and optimizing the sum of two different ELBO objectives to target $p_d(X_S)$ and $p_d(X_{\setminus S}|X_S)$ respectively (weighted by the prior $p(S)$). The second objective is different from the standard ELBO objective as it uses the inference distribution $q(z|X_{S})$ rather than $q(z|X_{\setminus S})$. Intuitively, this makes sense given that the latent variable for some modalities should contain information to roughly generate the $x$ for other modalities. The authors then provide various interpretations of the given bound. The paper then proposes two different variational inference distributions to use: MoE and PoE. The authors additionally propose to use an encoder architecture that is permutation invariant (e.g. sum-pooling or a transformer-based encoder), which makes sense given that the $x$ modalities are permutation invariant. The proposed model is weakly identifiable given some conditions. Finally, the authors do a series of experiments where they compare the proposed bound to the mixture-based bound proposed by Daunhawer et al., 2022 and Shi et al., 2019. The results do indeed look promising.

Strengths and Weaknesses:
**Strengths**
- *Clarity*: The paper reads well overall. I really appreciated Remarks 4-6 providing different perspectives on the objective function. The paper is also very thorough and well-motivated. Furthermore, I think the authors do a relatively good job of explaining the limitations of their approach.
- *Soundness*: The new objective makes sense (though I do think the paper can be improved by providing more details and intuitions for why the bound is tighter than the mixture-based one). The fact that the objective $L(X_{\setminus S})$ ends up being an approximate lower bound on $\log p(X_{\setminus S}|X_S)$ is not super obvious mathematically at first, so it's quite interesting! The use of permutation-invariant architectures and distributions also makes complete sense.
- *Experiments*: The authors did a nice set of experiments to compare the objective's performance against the other bounds. The datasets are of course very simple, but they are enough, I think, to show the potential of the new bound. There could be more emphasis and discussion on the individual effect of the new bound versus the choice of $q$ and the use of the sum-pooling encoder. Figure 1 is also quite interesting!

Overall, I think the TMLR audience would find the paper interesting and I'm more than happy to recommend acceptance.

**Weaknesses**
- *Novelty*: I am not very familiar with the multimodal learning literature, so my apologies if this is not the case, but I am very surprised to see that permutation-invariant architectures and distributions have not been used previously in multi-modal settings, especially given that I know these types of distributions and architectures have been used a lot in amortized VI settings in a variety of applications such as meta-learning. Therefore, the use of PoE and MoE and the sum-pooling encoders by themselves I do not find very novel. The new bound, however, I think is a very valuable contribution and can be highlighted more in the paper.
Requested Changes:
- Sorry if I missed this, but it seems like the paper assumes you have access to the ground truth conditional distribution $p(x_{\setminus S}|x_S)$. Is this a reasonable data assumption in multi-modal settings?

Broader Impact Concerns: This is mainly a theoretical paper and, to the best of my understanding, raises no major social impact concerns.

==================================================

Review 2:
Summary:
The paper derived a new and tighter variational lower bound for multi-modal VAEs (MMVAEs). It also proposed new and more flexible multi-modal aggregation schemes based on permutation-invariant NNs. A connection to identifiable VAEs is given, by seeing modality as an auxiliary variable. Both theoretical and experimental analyses are presented.

Strengths and Weaknesses:
### Strengths
The paper is solidly written and covers various related work and ideas. The theoretical derivations are serious. The connection to identifiable VAEs is interesting.

### Weaknesses
The paper is very dense and could use a gentler introduction. Many of the Remarks and Examples are related work and ideas, which, while interesting, could be distracting.

Requested Changes:
Disclaimer: While I know several types of VAEs well, I am unfamiliar with MMVAEs. And, as indicated above, due to the density and broadness of the paper, I failed to check many details in the paper. Thus, many of the following could be seen as suggestions to make the paper more accessible.

As indicated in the Weaknesses, *the authors could move to the Appendix some of the Remarks and Examples which are noncritical for understanding the main contributions, and the Introduction could be expanded*. And I hope the following will be useful in the latter regard.

The conditional independence involved in the generative model of MMVAEs is satisfied when the latent variable Z captures all the unobserved factors shared by the modalities. I think it would be beneficial to mention this intuition for newcomers.

I am not sure why in the first place the previous work would want to consider the bound (1) rather than your (3). I mean, your (3), which bounds each modality, is much more intuitive to me than (1). Is the scalability problem you mentioned at the beginning of p4 the main reason? Anyway, it would be helpful to touch on this question earlier.

I am not sure if it is convincing enough to introduce permutation-invariance simply as a generalization of MoE/PoE. On the one hand, I am not sure if it is still a generalization in multi-modal cases, which is what we really consider. On the other hand, I think permutation-invariance and MoE/PoE contain very different information about the underlying mechanism, i.e., how the modalities work together to generate the observed data.

Could you expand on the reason (a) on p14 why your bound is beneficial for learning an identifiable VAE? A related note is that it would be useful to make it clear that identifiability is a prerequisite for estimation; thus your bound or aggregation schemes do not aid the identifiability. Rather, given a model which is already identifiable, the aggregation schemes give a more flexible parametrization, and the bound gives a better approximation. I know the authors understand this but I think it would be better to spell this out for readers.

I believe it is unnecessary to introduce $\rho$ as a pdf, because there are finitely many modalities. It is simply a pmf, and this is much easier for readers.

It seems that the $Z_S$ in (2) is not only "to emphasize that Z is conditional on $X_S$".
For example, if we just see $Z_S$ as another symbol for Z, then $H(X|Z_S)$ in (2) cannot be reduced back to the 1st term of (1).

### Minor
At least in one place, you wrote "multi-modal" as "multi-model".

Broader Impact Concerns: NA

==================================================

Review 3:
Summary:
The authors propose a new variational objective for fitting Multimodal VAEs. In contrast to existing mixture-based bounds, the proposed objective addresses a gap that arises in the mixture-based bounds due to modality subsampling. This is tackled by deriving a variational objective that resembles a lower bound on the marginal log-likelihood and that, subject to some conditions, can be shown to be equal to the marginal log-likelihood. Moreover, the authors propose learnable permutation-invariant approaches to parametrise the variational distribution, which mitigates the exponential growth of variational distributions needed due to the subsampling of the modalities. The method is evaluated on some synthetic and realistic multimodal datasets and is shown to achieve better log-likelihood and other related quantities.

Strengths and Weaknesses:
## Strengths
* The paper is on an important task of fitting VAEs on data from multiple modalities (e.g. text/images). The authors address a previously-identified gap in the mixture-based variational objectives for fitting multimodal VAEs, by proposing a new objective that is motivated by its ability to reduce this gap. (Although this motivation is slightly buried in the presentation and needs a bit more work to make it concrete, see weaknesses.)
* Noting that some previous approaches used permutation-invariant variational distributions, the authors proposed a few more approaches that aim to induce this invariance. The approaches themselves are not particularly new, but seem new in this context.
* The evaluation seems to support the authors' hypothesis that the proposed approach (using the proposed objective and variational distributions) achieves higher log-likelihood than some existing approaches.

## Weaknesses
### Main comments/questions
* The proposed objective $\mathcal{L}(x, \theta, \phi, \beta)$ in Section 2 is introduced without derivation. Although the derivation is straightforward, including the derivation where the objective is introduced would clarify that the objective is not truly a lower-bound on the log-likelihood. Consequently, I think calling the objective throughout the paper (and in the title) an "approximate bound" can be misleading, as it invites potentially erroneous misuses of it.
* A key contribution of the paper, as stated by the authors, is a _tighter_ variational bound/objective. But, until Remark 5 (end of Section 2.1), it is not clearly explained which existing objective the proposed objective is tighter than. (Except for lightly touching upon it with one sentence in the Contributions paragraph on Page 2.) This leaves the reader wondering what it is that the paper solves until the end of Section 2.1. The background information provided in Remark 5 should be discussed with the rest of the background in Section 1, and the advantages of the objective should be discussed in more detail.
* Assuming that the variational distribution exactly approximates the model's posterior, Remark 5 explains that their objective is tighter than the existing Multimodal VAE objectives, where the gap arises due to sub-sampling of the modalities. However, the case where the variational approximation is not accurate is not discussed.
Moreover, the Jensen's gap, alluded to at the end of Remark 5, is also not worked out. Explaining these two points would greatly help their contribution regarding the tightness/advantages of their objective.
* Remark 3 states that maximising $\mathcal{L}_{\setminus \mathcal{S}}$ yields $q\_\phi(z \mid x\_{\mathcal{S}}) = q^{agg}\_{\phi, \setminus \mathcal{S}}(z \mid x\_\mathcal{S})$ because of equation $\texttt{Z}\_{\text{conditional}}$. Firstly, the text should describe what quantity/parameters you are maximising $\mathcal{L}\_{\setminus \mathcal{S}}$ _with respect to_. (Is it $\phi$?) Secondly, if you maximise the expected $\mathcal{L}\_{\setminus \mathcal{S}}$ in equation $\texttt{Z}\_{\text{conditional}}$ w.r.t. the parameters of the variational distribution $\phi$, the optimal $q\_\phi(z \mid x\_{\mathcal{S}})$ may not be equal to $q^{agg}\_{\phi, \setminus \mathcal{S}}(z \mid x\_\mathcal{S})$. This is because the parameters $\phi$ appear in both terms of equation $\texttt{Z}\_{\text{conditional}}$, and hence maximisation of the equation w.r.t. $\phi$ will likely strike some balance between the two terms in the equation, i.e. $q\_\phi(z \mid x\_{\mathcal{S}}) \not= q^{agg}\_{\phi, \setminus \mathcal{S}}(z \mid x\_\mathcal{S})$.
* The discussion below equation 10 in Section 3.2 suggests that due to factorisation of the model $p\_\theta(z', \tilde z\_{\mathcal{S}} \mid x\_\mathcal{S}) p\_\theta(\tilde z\_{\setminus \mathcal{S}} \mid z', \tilde z\_{\mathcal{S}})$, the variational distribution $q\_\phi(z', \tilde z\_{\mathcal{S}}, \tilde z\_{\mathcal{\setminus S}} \mid x\_{\mathcal{S}})$ should also similarly factorise as $q\_\phi(z', \tilde z\_{\mathcal{S}} \mid x\_{\mathcal{S}}) q\_\phi(\tilde z\_{\setminus \mathcal{S}} \mid z', \tilde z\_{\mathcal{S}})$. But, although it may be a useful inductive bias and/or simplification, this may not necessarily be true if we aim to accurately approximate the posterior $p\_\theta(z', \tilde z\_{\mathcal{S}}, \tilde z\_{\mathcal{\setminus S}} \mid x\_{\mathcal{S}})$. For example, in case $q\_\phi(z', \tilde z\_{\mathcal{S}} \mid x\_{\mathcal{S}}) \not= p\_\theta(z', \tilde z\_{\mathcal{S}} \mid x\_\mathcal{S})$, the factorisation $q\_\phi(z', \tilde z\_{\mathcal{S}} \mid x\_{\mathcal{S}}) q\_\phi(\tilde z\_{\setminus \mathcal{S}} \mid z', \tilde z\_{\mathcal{S}}, x\_{\mathcal{S}})$ may be able to better approximate the model posterior, because conditioning $q\_\phi(\tilde z\_{\setminus \mathcal{S}} \mid z', \tilde z\_{\mathcal{S}}, x\_{\mathcal{S}})$ on $x\_{\mathcal{S}}$ allows the distribution to compensate, to some extent, for the inaccuracies (or loss of information) in $q\_\phi(z', \tilde z\_{\mathcal{S}} \mid x\_{\mathcal{S}})$.
* As explained in the paragraph preceding Examples 3-5 (after equation 11), the distribution $p\_\theta(\tilde z\_{\setminus \mathcal{S}} \mid z', \tilde z\_{\mathcal{S}})$ is often analytically tractable. So why do Examples 3-5 not use it? Specifically, Example 3 instead uses $\prod\_{s \in \mathcal{M}\setminus\mathcal{S}} p\_{\theta}(\tilde z\_s)$ and Examples 4-5 instead use $\prod\_{s \in \mathcal{M}\setminus\mathcal{S}} p\_{\theta}(\tilde z\_s \mid z')$. Is this a typo?
* The authors provided permutation-equivariant versions of PoE, SumPooling, and Self-Attention, but did not discuss permutation-equivariant versions of MoE or MoPoE. Is there a reason why this may not be possible with these aggregation schemes; are they incompatible with permutation-equivariance? It would be good to discuss this.
* The evaluations use quantities such as "Full reconstruction", "Full rates", "Cross reconstruction", and "Cross rates", but the effect of these metrics going up/down is not clearly motivated or explained in the discussion of the results; thus it is unclear how important they are for multimodal VAEs and their contribution. Some interpretation of these metrics and their importance to multimodal VAEs would be helpful.
* In the MNIST-SVHN-Text experiments, the total number of modalities is 3. You could thus parametrise all possible permutations of modalities (8 of them in total) using a separate encoder (one per permutation). This would serve as a strong baseline to verify how good the permutation-invariant parametrisations really are.
* In Section 5.3, it seems that using the proposed objective improves cross-generation of the SVHN modality (Figure 2) but decreases cross-reconstruction (Figure 3). Why is that?

### Presentation clarity
I have also found the presentation of the paper slightly lacking in clarity and coherence (in addition to some of the points discussed above). Below are some examples:
* The relevance of the discussion about VampPrior in Remark 3 is not clear. VampPrior is not used in the paper and its use in this setting may be tricky. This is because the goal is to approximate $p\_\theta(z \mid x\_{\mathcal{S}})$. This means that if we wanted to use a VampPrior $p^{\text{Vamp}}(z \mid \mathcal{D})$ of the form $\frac{1}{K}\sum\_{k=1}^K q\_\phi(z \mid u\_k)$ to approximate $p\_\theta(z \mid x\_{\mathcal{S}})$, we would have to learn $K$ pseudo-inputs $u\_k$ for each $x\_{\mathcal{S}}$ for all $\mathcal{S} \subset \mathcal{M}$ and $x$, which would scale exponentially in the number of modalities $\mathcal{M}$. So it is unclear how the discussion helps the paper.
* Remark 4 presents some hypotheses that are not experimentally validated or reflected upon in the rest of the paper, so it is unclear how relevant it is.
* Remarks 9, 10, 11, and 13 present some results/discussion that seem out of context. Why are they relevant to the paper?
* In Table 2, the numbers in parentheses are not explained. Are they standard deviations or standard errors, and how many independent runs were used?
* The tables in the evaluations were a little hard to decipher; this is because "ours" is used to denote the proposed bound, but the proposed aggregation schemes (SumPooling/SelfAttention) weren't highlighted. Denoting the proposed bound as "our bound" or "proposed bound" and appending the proposed scheme names with "(proposed)" may help.

### Typos
There are a bunch of (minor) typos in the paper. Here's a list of some:
* Below Eq 2., the mutual information equation is missing $\mathrm{d}x\mathrm{d}y$.
* Equation $\texttt{X}\_{\text{conditional}}$ should have a minus before the term $\int q\_\phi(z \mid x) \log \frac{q\_\phi(z \mid x\_{\mathcal{S}})}{p\_\theta(z \mid x\_{\mathcal{S}})}\mathrm{d}z$ instead of a plus.
* The text after Corollary 2 should be $\mathcal{L}\_{\setminus \mathcal{S}}$ instead of $\mathcal{L}\_{\setminus \mathcal{L}}$.
* Reference links to Corollary 19 (Appendix) should be links to Corollary 2 (main text).
* The VampPrior in equation 5 is missing $1/K$.
* Page 7, missing $\mathcal{S}$ in the text after the fifth equation on the page, as in "... for any $\mathcal{S}$ ...".
* Corollary 8, "minimizing $\mathcal{L}$" should be "maximizing ...".
* Remark 9, "minimizing $\mathcal{L}_{\mathcal{S}}^{\text{Mix}}$" should be "maximizing ...".
* Page 9, the equation for the MoPoE variational distribution should have $q\_\phi^{\text{PoE}}$ inside the summation instead of $q\_\phi^{\text{MoE}}$.
* Start of Section 5 states that the proposed bound will be called "masked" in the following figures, but all of the figures call it "ours", with the exception of Figure 3.
* In Table 1 the wrong results are highlighted as best in the Full-rates column. It should be the cell with 1.02.
* Page 33, the second equality following the lower-bound derivation for $I\_{q\_\phi}(X\_{\setminus \mathcal{S}}, Z\_{\mathcal{M}} \mid X\_{\mathcal{S}})$ is missing a $\mathrm{d}x\_{\setminus \mathcal{S}}$ after the $\log p\_\theta(x\_{\setminus \mathcal{S}} \mid z)$ term.
* Page 33, the first equality following the upper-bound derivation for $I\_{q\_\phi}(X\_{\setminus \mathcal{S}}, Z\_{\mathcal{M}} \mid X\_{\mathcal{S}})$ is missing a $q\_\phi(z \mid x)$ after the $p_d(x\_{\setminus \mathcal{S}} \mid x\_\mathcal{S})$ term. (Also, the term $p_d(x\_{\setminus \mathcal{S}} \mid x\_\setminus)$ should be $p\_d(x\_{\setminus \mathcal{S}} \mid x\_\mathcal{S})$.)
* In Remark 22, the sign before $\beta \log p\_\theta(z)$ should be a plus (in two places).

Requested Changes: Please see the weaknesses above. The important changes/questions are under "main comments/questions". But I hope the other comments may help improve the overall clarity/correctness of the paper and should be easy to correct.

Broader Impact Concerns: None. The method presents a technical contribution for multimodal VAE training and the broader impacts are no different from those of a standard VAE model.

==================================================

Metareview:
Recommendation: Accept as is
Comment: Overall I see this as solid technical work in a research area that is relevant to the TMLR community. There are some weaknesses in terms of clarity, but these are not sufficiently serious to warrant rejection of the paper. The reviewers were unanimous in recommending the paper be accepted, and I see no reason to override their decision. As all the reviewers were already satisfied with the updates made during the rebuttals/discussions, I do not see the paper as having any specific issues that need to be corrected through minor revisions. I, therefore, recommend that the paper be accepted "as is". That said, I think there is scope to improve the clarity of the paper further and encourage the authors to work on this for the final camera-ready version. To this end, I include some comments made by Reviewer SL2b below from their final recommendations, which are not currently visible to the authors but may be helpful in making such improvements.

> I still believe that the clarity of the paper could be better if the authors moved some discussion points that are less relevant to the main narrative into the appendix, further improved the arguments about the proposed bound, explained the evaluation results more clearly and their relevance to multi-modal VAEs, and analysed the properties of the new objective in more detail.

==================================================
# Concatenative Contrastive Sampling For Transformer-Based Sequential Recommendation

Anonymous authors

Paper under double-blind review

## Abstract

Sequential recommendation represents a significant research direction in recommender systems, which aims to analyze users' sequential actions to forecast the subsequent item or item sequence they are likely to engage with. This entails deploying machine learning models such as Markov Chains (MC), recurrent neural networks (RNNs), and transformers to unravel the underlying user history patterns in recommender systems and to generate recommendations, leveraging their capability to process sequential data. However, most prior endeavors, while successfully leveraging user history attributes, are constrained in capturing the interplay between user history and new items, as well as the contrastive signals between authentic and unfavorable items. To surmount these limitations, we introduce an attention-based sequential recommendation model with a concatenate-then-split structure that intentionally integrates these interactions. Experimental findings underscore the efficacy of integrating such interactions, with our new model achieving state-of-the-art performance across prevalent sequential recommendation benchmarks.

## 1 Introduction

Recommendation systems play a crucial role in the advancement of e-commerce and content browsing, with wide-ranging applications in various real-world scenarios like online shopping, music streaming, and news reading (Chen et al., 2019a; Wang et al., 2020a; 2022). These systems not only greatly simplify users' content discovery process but also drive higher profits for the online platforms. While traditional recommendation systems, such as Collaborative Filtering (CF) methods (Sarwar et al., 2001), perform reasonably well by assuming static user preferences, recent research has highlighted that user preferences evolve over time. Consequently, incorporating the temporal dynamics of user preferences can lead to more accurate and relevant recommendations.

Sequential recommendation is a special research direction in recommendation systems that focuses on providing personalized recommendations by considering the temporal order of user interactions. It involves analyzing sequential patterns in user behavior to predict the next item or sequence of items a user is likely to engage with. Previous research works have tried using different models to perform the task, including but not limited to Markov chains (He et al., 2018), convolutional neural networks (CNNs) (Yuan et al., 2019), and recurrent neural networks (RNNs) (Hidasi et al., 2015). However, these models have limited ability to learn user history and predict the interaction scores between the history and the target items to recommend. Very recently, transformers have become a popular neural network architecture, showing effectiveness in both language-based machine learning tasks and computer vision tasks (Vaswani et al., 2017). Therefore, it would also be favorable if this model could be successfully migrated to sequential recommendation. Some prior works in sequential recommendation, such as Kang and McAuley (2018), Li et al. (2020), Sun et al. (2019), Du et al. (2023), and Zhang et al. (2023), have also tried to employ the self-attention mechanism and transformers to capture the relationships between items in a user's history.
In previous works, the typical approach involves passing user history information (embeddings) to the transformer and then using a tensor product operation to model the interactions between the history items and the target items to predict interaction scores (Kang and McAuley, 2018; Li et al., 2020). While such designs have shown utility due to the general capability of transformers on sequential data, most of them either overlook or have limited ability to learn the following important information in identifying sequential patterns and making recommendations.

- **(User History ↔ Target Items).** The aforementioned designs do not treat the target items to be recommended fairly in comparison to the user history. Specifically, the transformer solely learns features from the user history and ignores the features in the target items, which could be considered as "potential user history" in the future. This limitation prevents a comprehensive understanding of the dynamic relationships between the user's past behavior and potential future interactions with the recommended items.
- **(Ground Truth ↔ Negative Items).** During both the training and the testing stages, the ground truth item must be selected (recommended) from among the negative items, which are either randomly sampled from the dataset or items that users did not engage with after display. This process naturally creates contrastive signals among all the items. However, using only a dot product operation is insufficient to effectively learn these cross-item signals.

In this paper, we argue that incorporating these aspects provides richer insights and improves the effectiveness of sequential recommendation models. Based on the above discussions, we present a new design of transformer-based sequential recommendation models that incorporates the above information. We summarize our contributions as follows:

- We point out that prior works using self-attentional blocks to generate recommendations treat user history and the target items unfairly. The interactions between user history and target items, and the contrastive signals between ground truth and negative items, are not sufficiently learnt by models in the previous works.
- Based on our new understanding of the above two interactions, we introduce a new attention-based sequential recommendation architecture named CTSRec, which explicitly incorporates the aforementioned missing information into the model design.
- Extensive experimental results and ablation studies verify that the aforementioned information is indeed useful, and our proposed method achieves state-of-the-art performance on public benchmarks.

## 2 Related Works

Several lines of work are closely related to our research. We first review sequential recommendation, and then we discuss transformers and transformer-based recommendation.

## 2.1 Sequential Recommendation

Sequential recommendation leverages user history sequence information to provide better recommendations, and thus sequential models such as Markov Chains and RNNs are naturally suitable for such tasks. Rendle et al. (2010) proposed to combine matrix factorization with factorized Markov Chains to make next-basket recommendations. Hidasi et al. (2015) introduced Gated Recurrent Units (GRUs) into sequential recommendation and obtained promising results. Yuan et al. (2019) proposed NextItNet, which uses CNN layers to capture long-range dependencies in user history.
Kang and McAuley (2018) first proposed to use self-attention and transformers in sequential recommendation and introduced SASRec. Li et al. (2020) proposed to take actual timestamps into account in the computation of self-attention and extended SASRec to TiSASRec, while Cen et al. (2020) designed an extraction layer for users' multiple interests. Knowledge graphs of item relations have been observed to drastically improve the performance of recommenders, with the pioneering work CFKG on general recommendation (Zhang et al., 2018). Several works in sequential recommendation utilized this idea. For example, Wang et al. (2020b) modeled the dynamic meaning of an item by combining both the temporal dynamics and the item relations and proposed Chorus. Wang et al. (2020a) further enhanced the results of Chorus by modeling the temporal evolution of item relations, using Fourier transforms to estimate the temporal decay, which significantly outperforms the existing baselines.

## 2.2 Transformers And Attention-Based Recommendation

Transformers and attention mechanisms have been shown to be effective in different machine learning tasks, including machine translation, caption generation, and image recognition (Xu et al., 2015; Bahdanau et al., 2015; Vaswani et al., 2017; Dosovitskiy et al., 2021), to name a few. The mechanism behind attentional models is to concentrate the model's attention on the parts of the input that are relevant to the output. Specifically, given the query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$, the scaled dot-product attention used in the transformer (Vaswani et al., 2017) is defined as

$$\operatorname{Attn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\operatorname{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}\right)\mathbf{V}\tag{1}$$

In many cases, such as recommendation, it is common to observe that $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$ are all derived from event sequences. Transformers are constructed by stacking these attention modules with layer normalization (LayerNorm) layers and multi-layer perceptrons (MLPs).

Apart from the aforementioned works in sequential recommendation, attentional modules have proven to be useful in many recommendation tasks such as click-through rate (CTR) prediction and ranking. Zhou et al. (2018b) proposed DIN, which adaptively assigns weights to item embeddings in user history to predict the CTR. Li et al. (2022) proposed a personalized re-ranking model based on contextualized transformers with both item contexts and user contexts. Chen et al. (2019b) utilize the transformer encoder to learn item representations of historical behaviors. Zhou et al. (2018a) proposed an attentional framework for user-behavior modeling in recommendation.
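For concreteness, below is a minimal PyTorch sketch of the scaled dot-product attention in Equation (1); the function name and tensor shapes are illustrative choices rather than part of any cited implementation.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention of Equation (1).

    Q, K, V: float tensors of shape (seq_len, d); returns (seq_len, d).
    """
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5  # (seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)          # each row sums to 1
    return weights @ V
```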
## 3 Methodology

In this section, we first introduce the preliminaries of the sequential recommendation task, including the notations and the problem formulation. Then, we describe the model design of our new sequential recommendation model.

| Symb. | Description |
|---------|-----------------------------------------------------------|
| $\mathcal{U}, \mathcal{I}$ | User and item sets |
| $S^u_t$ | The item user $u$ interacted with at time $t$ |
| $N_u$ | The number of historical actions for user $u$ |
| $S^u$ | Historical action sequence $\{S^u_1, S^u_2, \cdots, S^u_{N_u}\}$ |
| $\mathbb{N}$ | The set of non-negative integers |
| $k$ | Number of items to rank by the model |
| $n$ | The maximum sequence length |
| $B$ | Number of self-attention blocks |
| $d$ | Latent vector dimension |
| $d_p$ | The dimension of the MLP prediction layer |
| $M^I$ | Item embedding matrix |
| $M^P$ | Positional embedding matrix |

Table 1: Notations

## 3.1 Preliminaries

In the setting of sequential recommendation, each user $u \in \mathcal{U}$ has a sequence of historical actions $S^u = \{S^u_1, S^u_2, \cdots, S^u_{N_u}\}$ on the item set $\mathcal{I}$, i.e., $S^u_t \in \mathcal{I}, \forall t \in [1, N_u] \cap \mathbb{N}$, where $N_u$ denotes the number of historical actions made by user $u$. At each time step $t$, the task is to consider all the historical actions before $t$, and make recommendations for the next item or the next series of items to engage with, for every user $u$.

Figure 1: A comparison of the history-item based self-attention model architecture and our proposed model architecture. *Left:* the history-item based self-attention model, which uses the self-attention layers to learn the user history but not the target items. The red cross and dashed lines denote the missing information from (1) user history ↔ target items and (2) ground truth ↔ negative items. *Right:* our new attention-based sequential recommendation model, where the user history and the target item embeddings are concatenated to form the input to the self-attention blocks.

In this work, we study sequential recommendation under the next-item prediction setting: during training, the input sequence to the machine learning model is $\{S^u_1, S^u_2, \cdots, S^u_{N_u-1}\}$ and the ground truth (label) sequence is $\{S^u_2, \cdots, S^u_{N_u}\}$. To align with prior works, at each timestep, the model is given a list of $k$ new items $r = \{r_1, r_2, \cdots, r_k\}$. The $k$ target items contain the ground truth item and $k-1$ negative items, which are either randomly sampled or simply items not chosen by the users. In the most difficult case, $k = |\mathcal{I}|$. The model is required to generate an ordered list of all the $k$ items, and the performance of the model is evaluated by the rank of the ground truth item (see the Experiments section for the metrics).

## 3.2 Input Processing And Embeddings

Similar to prior works, at any timestep $t \geq 1$, we transform the training historical sequence $(S^u_1, S^u_2, \cdots, S^u_t)$ into a fixed-length sequence $s = (s_1, s_2, \cdots, s_n)$ as the input (Kang and McAuley, 2018; Li et al., 2020), where $n$ is the maximum sequence length. Only the most recent $n$ items are used if $t \geq n$, and padding items are added to the left of the sequence if $t < n$. The user history sequence $s$ and the target items $r$ are then fed into the same item embedding layer $M^I \in \mathbb{R}^{|\mathcal{I}| \times d}$ to obtain the embedding matrices $E_{his} \in \mathbb{R}^{n \times d}$ and $E_{new} \in \mathbb{R}^{k \times d}$, respectively:

$$E_{his}=\begin{bmatrix}m_{s_{1}}\\ m_{s_{2}}\\ \cdots\\ m_{s_{n}}\end{bmatrix},\quad E_{new}=\begin{bmatrix}m_{r_{1}}\\ m_{r_{2}}\\ \cdots\\ m_{r_{k}}\end{bmatrix},\tag{2}$$

where each $m_{s_i} \in \mathbb{R}^{1 \times d}$. Constant zero vectors are used for padding items.
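As a concrete illustration of this input processing step, the following is a minimal PyTorch sketch; the sizes, the reserved padding id, and the helper name are hypothetical choices of ours.

```python
import torch
import torch.nn as nn

PAD = 0                          # hypothetical id reserved for padding items
num_items, d, n = 1000, 64, 20   # illustrative sizes

def to_fixed_length(history, n):
    """Keep the most recent n item ids; left-pad shorter histories (Section 3.2)."""
    history = history[-n:]
    return [PAD] * (n - len(history)) + history

# rows of M^I; padding_idx keeps the padding row as a constant zero vector
item_emb = nn.Embedding(num_items + 1, d, padding_idx=PAD)

s = torch.tensor([to_fixed_length([5, 2, 9], n)])  # user history ids, (1, n)
r = torch.tensor([[4, 8, 3]])                      # k = 3 target item ids, (1, k)
E_his, E_new = item_emb(s), item_emb(r)            # (1, n, d) and (1, k, d)
```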
Besides, we also create a learnable positional embedding matrix $M^P$ for the items, because otherwise self-attentional models would not be aware of the positions of the previous items. The resulting positional embedding is $P \in \mathbb{R}^{n \times d}$:

$$P=\left[p_{1}^{T},p_{2}^{T},\cdots,p_{n}^{T}\right]^{T}\tag{3}$$

The positional embedding is added element-wise to the user history embedding as part of the input to the model.

## 3.3 The New Self-Attentional Model: CTSRec

As we have mentioned in the Introduction section, the interactions between the two embedding matrices $E_{his}$ and $E_{new}$ (user history ↔ target items), and the interactions among the rows of $E_{new}$ (ground truth ↔ negative items), are of vital importance. Therefore, instead of examining the two embeddings $E_{his}$ and $E_{new}$ separately, in this work we propose to treat them (almost) equally in the attentional model with a concatenate-then-split (CTS) structure. We concatenate the sum of the user history embedding $E_{his}$ and the positional embedding $P$ with all the target item embeddings $E_{new}$, and then use the whole concatenated embedding $\widehat{E} \in \mathbb{R}^{(n+k) \times d}$ as the transformer input:

$$\widehat{E}=\begin{bmatrix}E_{his}+P\\ E_{new}\end{bmatrix}\tag{4}$$

## 3.3.1 Model Architecture Overview

An overview comparing history-item based self-attention models such as SASRec and TiSASRec (Kang and McAuley, 2018; Li et al., 2020) with our proposed model is shown in Figure 1. As can be observed from the figure, a history-item based self-attention model uses the self-attention blocks to learn solely user history features (Kang and McAuley, 2018; Li et al., 2020), but leaves the target item embeddings out of the learning process. The interactions between user history and item embeddings are merely modeled by a tensor dot product operation, which is highly insufficient. Moreover, the interactions between the items are never modeled, since the tensor product operation is not a cross-item operation. Our model, on the other hand, concatenates the user history embeddings and item embeddings together as the input to the self-attention blocks. In this way, the attention blocks are able to learn from both the user history and the target items at the same time, and utilize the two aforementioned types of interactions to obtain better recommendations.

## 3.3.2 The Self-Attention Block

Our model contains multiple ($B$) self-attentional blocks. In the following, we discuss the structure of the first block; the other blocks can be inferred analogously. Given the input $\widehat{E}$, we compute the output of the self-attention block $Z$ by the following equation:

$$Z=\operatorname{Attn}(\widehat{E}W^{Q},\widehat{E}W^{K},\widehat{E}W^{V})=\operatorname{softmax}\left(\frac{\widehat{E}W^{Q}\left(\widehat{E}W^{K}\right)^{T}}{\sqrt{d}}\right)\widehat{E}W^{V}\tag{5}$$

where the projection matrices $W^Q, W^K, W^V \in \mathbb{R}^{d \times d}$. The scaling factor $1/\sqrt{d}$ is used to standardize the values.

## 3.3.3 Causality

Note that the user history part of our input to the transformer is sequential, and the target items should never be observed before the last historical action. Therefore, to prevent information leakage, the input embeddings for the items corresponding to $(s_{t+1}, s_{t+2}, \cdots, s_n)$ should be masked when predicting the next item $s_{t+1}$, for every $t \in [1, n-1]$. In addition, the target items $(r_1, r_2, \cdots, r_k)$ should only be visible to positions no earlier than $s_n$. Consequently, to encode the causality relations, we mask out all the attention links between $Q_i$ and $K_j$ that satisfy $j > i$ and $1 \leq i < n$, where $i, j$ are positive integers in the range $[1, n+k]$ and $Q = \widehat{E}W^Q$, $K = \widehat{E}W^K$, similar to the causal mask between the history items (Vaswani et al., 2017).
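A minimal sketch of this mask, assuming 0-indexed tensors, is given below; the function name is our own and only the masking rule is taken from the text above.

```python
import torch

def cts_attention_mask(n, k):
    """Boolean (n+k) x (n+k) mask for Section 3.3.3; True keeps the link Q_i -> K_j.

    History rows 1 <= i < n (1-indexed) may only attend to positions j <= i;
    the last history position and the k target items attend to all positions.
    """
    L = n + k
    mask = torch.ones(L, L, dtype=torch.bool)
    # causal part covering the first n-1 history rows
    mask[: n - 1] = torch.tril(torch.ones(n - 1, L, dtype=torch.bool))
    return mask

# usage: scores = scores.masked_fill(~cts_attention_mask(n, k), float("-inf"))
```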
## 3.3.4 MLP, LayerNorm, Residual, And Dropout

Similar to prior works (Kang and McAuley, 2018; Li et al., 2020), we have added multi-layer perceptrons (MLPs) to the self-attentional block to improve the model with additional nonlinearity. Specifically, we add a two-layer MLP with shared parameters for each row $z_i$ in the self-attention block output $Z$ (see Equation (5)):

$$\mathrm{MLP}(z_{i})=\operatorname{max}(0,z_{i}W_{1}+b_{1})W_{2}+b_{2}\tag{6}$$

with $W_1, W_2 \in \mathbb{R}^{d \times d}$ and $b_1, b_2 \in \mathbb{R}^{1 \times d}$. We also use layer normalization (LayerNorm) layers (Ba et al., 2016), residual connections, and dropout to stabilize the neural network architecture and reduce overfitting:

$$\widehat{Z}_{i}=z_{i}+\mathrm{Dropout}(\mathrm{MLP}(\mathrm{LayerNorm}(z_{i})))\tag{7}$$

The output $\widehat{Z}$ is used as the input to further stacked self-attention blocks. To avoid confusion, we denote the output of the $b$-th self-attention block as $\widehat{Z}^{(b)}$.

## 3.3.5 Prediction And Loss Computation

After passing the input embeddings through the stacked self-attentional blocks, we obtain the output $\widehat{Z}^{(B)}$, which is used to predict the interaction scores. Since $\widehat{Z}^{(B)} \in \mathbb{R}^{(n+k) \times d}$ is used to make predictions on only the $k$ candidate items, we perform an output projection to match the shape of the labels. Since, before the self-attentional layers, we used concatenation to combine the user history embedding and the target item embeddings, it is natural to apply splitting and use the channels (rows) corresponding to the item embeddings to make the final prediction. In particular, if the output $\widehat{Z}^{(B)}$ is

$$\widehat{Z}^{(B)}=\left[\widehat{z}^{(B)T}_{s_{1}},\cdots,\widehat{z}^{(B)T}_{s_{n}},\widehat{z}^{(B)T}_{r_{1}},\cdots,\widehat{z}^{(B)T}_{r_{k}}\right]^{T}\tag{8}$$

then the split output $\overline{Z^{(B)}}$ is chosen to be the last $k$ rows:

$$\overline{Z^{(B)}}=\left[\widehat{z}_{r_{1}}^{(B)T},\widehat{z}_{r_{2}}^{(B)T},\cdots,\widehat{z}_{r_{k}}^{(B)T}\right]^{T}\tag{9}$$

$\overline{Z^{(B)}}$ is then fed into a final projection layer to predict the interaction scores for all the items for recommendation:

$$\widehat{y}_{j}=\mathrm{MLP}_{p}(\widehat{z}_{r_{j}}^{(B)})=\max(0,\widehat{z}_{r_{j}}^{(B)}W_{p}^{(1)}+b_{p}^{(1)})W_{p}^{(2)}+b_{p}^{(2)}\tag{10}$$

where the projection weights include $W_p^{(1)} \in \mathbb{R}^{d \times d_p}$ and $W_p^{(2)} \in \mathbb{R}^{d_p \times 1}$. The final output $\widehat{y}$ from the prediction layer lies in $\mathbb{R}^{k \times 1}$. The model output is then used to compute and optimize the Bayesian Personalized Ranking (BPR) loss proposed by Hidasi and Karatzoglou (2018) and extended by Wang et al. (2023), which is used to optimize ranking outcomes (Wang et al., 2020b;a; 2022; 2023). Specifically, we have used the multi-ary version of this loss function, letting the predicted interaction score for the ground truth item be $\widehat{y}_+$. The loss for a single user is computed by

$$\mathcal{L}_{BPR}=-\frac{1}{k}\sum_{j=1}^{k}\log\left(\sigma\left(\widehat{y}_{+}-\widehat{y}_{j}\right)\right)\tag{11}$$

where $\sigma(x) = 1/(1+e^{-x})$ is the sigmoid function. The loss for a mini-batch is the average BPR loss across the users in the batch.
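Since the identity $-\log\sigma(x) = \mathrm{softplus}(-x)$ makes Equation (11) numerically stable, a per-user sketch can be written as follows; the function name and the assumption that the ground-truth position is known are ours.

```python
import torch
import torch.nn.functional as F

def bpr_loss(y_hat, pos_index=0):
    """Multi-ary BPR loss of Equation (11) for a single user.

    y_hat: predicted scores for the k target items, shape (k,);
    pos_index: position of the ground-truth item within y_hat.
    """
    y_pos = y_hat[pos_index]
    # -log(sigmoid(y+ - y_j)) == softplus(y_j - y+), averaged over all k items
    return F.softplus(y_hat - y_pos).mean()
```

Averaging this quantity over the users in a mini-batch yields the training loss described above.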
## 3.4 Complexity Analysis

**Time Complexity.** The computational complexity of CTSRec training is dominated by the attention layers and the feed-forward networks, resulting in a complexity of $O(n^2d + nkd + k^2d + kd^2 + nd^2)$. By further leveraging sequence parallelization, the computation can be evenly split onto each local token within the length-$n$ event history as well as the $k$ candidate items, resulting in an amortized $O(nd + nkd + kd + d^2)$. The computation cost of inference is similar to that of training.

**Space Complexity.** The space complexity of CTSRec is dominated by the embeddings and parameters of the self-attention layers and feed-forward networks. The asymptotic total number of parameters is $O(|\mathcal{I}|d + nd + kd + d^2)$, and it is $o(|\mathcal{U}|)$ in terms of the number of users.

**Handling Larger k.** The efficacy of our method comes at the cost of the extra $k$ term in the computational complexity. While potential efficiency optimizations such as importance sampling could be incorporated in further investigations, we empirically demonstrate in our experiments that an adequate amount of computation with $k = 20$ strikes a balance between a significant performance boost and affordable computation overhead.

## 4 Experiments

| Dataset | ♯ user | ♯ item | avg actions/user | ♯ actions |
|-----------|----------|----------|--------------------|-------------|
| G & GF | 14.7K | 8.5K | 9.92 | 145.8K |
| CP & A | 27.9K | 10.4K | 6.97 | 194.4K |
| Games | 24.3K | 10.6K | 9.54 | 231.8K |
| ML-1M | 6.0K | 3.4K | 163.5 | 987K |

Table 2: Basic dataset statistics

In this section, we provide the experimental setup, the results of our proposed method on multiple public benchmarks, and a discussion of the effectiveness of the new model with ablation studies. More comparisons and additional experiments can be found in the Appendix.

## 4.1 Experimental Setup

## 4.1.1 Implementation Details

We have used the open-sourced ReChorus library (Wang et al., 2020b) for the implementation of all the baseline algorithms and our new recommendation model. The BPR loss with multiple negative samples during training is already supported by the original codebase. All experiments are conducted with a single Nvidia A100 GPU.

## 4.1.2 Hyper-Parameter Details

We set the maximum sequence length to $n = 20$, the latent vector dimension to $d = 64$, and the dimension of the MLP prediction layer to $d_p = 256$. To make a fair comparison, all the models are trained with the same number of negative samples ($k = 99$) during training, except for the ablation study on the number of negative samples in Section 4.3. The models are tested with the standard procedure used in prior works, i.e., 100 items, including the ground truth item to be recommended, are provided to the model (Li et al., 2020; Wang et al., 2019; 2022).

## 4.1.3 Baselines

We compare our model with the following baselines. We remark that these methods are already competitive baselines that represent the state-of-the-art (SOTA) models in the field of sequential recommendation.

- FPMC (Rendle et al., 2010): A recommendation model that combines matrix factorization and factorized first-order Markov Chains.
- GRU4Rec (Hidasi et al., 2015): The first model that uses RNNs in sequential recommendation.
- Caser (Tang and Wang, 2018): A model that embeds the sequence of recent items into an "image" in the time and latent spaces.
- SASRec (Kang and McAuley, 2018): The first model that incorporates the self-attention mechanism in sequential recommendation.
- TiSASRec (Li et al., 2020): An improved version of SASRec that combines self-attention and time intervals in user history to model user interest.
- ComiRec (Cen et al., 2020): The first model that incorporates users' multiple interests into the sequential recommendation process.
- ContraRec (Wang et al., 2023): A general model that adds context-context contrast signals to sequential recommendation algorithms. We have followed the original paper and use BERT as the encoder model.
- TimiRec (Wang et al., 2022): A new model that combines time-interval modeling and multi-interest knowledge distillation to further improve the performance of different models including transformers.

Some recent works, such as SLRC (Wang et al., 2019), Chorus (Wang et al., 2020b), and KDA (Wang et al., 2020a), utilize additional information on item relations to provide better sequential recommendations. Therefore, it would be unfair to compare our algorithm against these models.

## 4.1.4 Datasets

We present the experimental results on four popular sequential recommendation benchmarks. Summary statistics of these datasets are provided in Table 2.

- **Amazon**. This is a series of e-commerce datasets with reviews of products crawled from Amazon.com. We choose three categories: "Grocery and Gourmet Food (G & GF)", "Cell Phones and Accessories (CP & A)", and "Video Games (Games)". The datasets are highly sparse.
- **MovieLens-1M (ML-1M)**. This is a widely-used **dense** benchmark dataset for evaluating sequential recommendation algorithms. The items are different movies, and the user actions are ratings of the movies.

We have followed the same preprocessing procedure as prior works for all the datasets (Li et al., 2020; Wang et al., 2019; 2020b; 2022; 2023).

## 4.1.5 Evaluation Metrics

To evaluate the quality of the sequential recommendation, we use the Hit Ratio (HR) and the Normalized Discounted Cumulative Gain (NDCG). Given a user set $\mathcal{U}$, and letting $g_u$ denote the rank of the ground-truth item for user $u$, the mathematical expressions for the two metrics are as follows:

$$\mathrm{HR@K}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}I(g_{u}\leq K),\qquad\mathrm{NDCG@K}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{I(g_{u}\leq K)}{\log_{2}(g_{u}+1)}\tag{12}$$

In short, HR@K measures the frequency of the ground-truth item appearing in the Top-K recommendations among all users, whereas NDCG@K adds a ranking position weight to the original measure. Both are standard measures in sequential recommendation.
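A minimal sketch of both metrics is shown below, assuming `ranks` holds the 1-indexed rank of each user's ground-truth item; the function names are our own.

```python
import math

def hr_at_k(ranks, K):
    """HR@K of Equation (12): fraction of users whose ground truth ranks in the top K."""
    return sum(g <= K for g in ranks) / len(ranks)

def ndcg_at_k(ranks, K):
    """NDCG@K of Equation (12): the hit indicator discounted by ranking position."""
    return sum(1.0 / math.log2(g + 1) for g in ranks if g <= K) / len(ranks)

# e.g., three users whose ground-truth items ranked 1st, 3rd, and 12th:
print(hr_at_k([1, 3, 12], K=10), ndcg_at_k([1, 3, 12], K=10))  # 0.667, 0.5
```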
## 4.2 Recommendation Performance

Tables 3 and 4 summarize the performance of all baselines and the new model. For ease of comparison, and to follow prior works such as Wang et al. (2020b; 2022), we take K=5 and K=10 in HR@K and NDCG@K in all our experiments. Among all the baseline methods, TimiRec (Wang et al., 2022) achieved the state-of-the-art performance because of its ability to model both the time intervals in the recommendation process and the multiple interests in user history. Although our new model does not have such explicit modeling of this information and has a simple pipeline, as can be observed, our model still outperforms all the existing baselines by a significant margin on all the datasets.

Table 3: Comparisons between the baseline methods and the new method in HR@K and NDCG@K with K=5 and K=10, on Amazon Grocery and Gourmet Food (G & GF) and Cell Phones and Accessories (CP & A). The results are averaged over 10 independent runs. The highest results in each column are highlighted in bold.

| Method | G & GF HR@5 | G & GF NDCG@5 | G & GF HR@10 | G & GF NDCG@10 | CP & A HR@5 | CP & A NDCG@5 | CP & A HR@10 | CP & A NDCG@10 |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
| FPMC | 0.362 | 0.283 | 0.443 | 0.309 | 0.400 | 0.302 | 0.508 | 0.336 |
| GRU4Rec | 0.417 | 0.314 | 0.518 | 0.347 | 0.467 | 0.351 | 0.590 | 0.391 |
| Caser | 0.408 | 0.305 | 0.507 | 0.337 | 0.446 | 0.329 | 0.573 | 0.370 |
| TiSASRec | 0.397 | 0.306 | 0.482 | 0.333 | 0.452 | 0.344 | 0.565 | 0.381 |
| ComiRec | 0.375 | 0.270 | 0.476 | 0.302 | 0.440 | 0.328 | 0.555 | 0.366 |
| ContraRec | 0.422 | 0.326 | 0.510 | 0.356 | 0.468 | 0.360 | 0.583 | 0.397 |
| TimiRec | 0.426 | 0.320 | 0.517 | 0.350 | 0.469 | 0.356 | 0.588 | 0.395 |
| CTSRec (ours) | **0.433** | **0.331** | **0.526** | **0.372** | **0.482** | **0.362** | **0.610** | **0.403** |

| Method | Games HR@5 | Games NDCG@5 | Games HR@10 | Games NDCG@10 | ML-1M HR@5 | ML-1M NDCG@5 | ML-1M HR@10 | ML-1M NDCG@10 |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|
| FPMC | 0.574 | 0.445 | 0.688 | 0.482 | 0.591 | 0.435 | 0.737 | 0.482 |
| GRU4Rec | 0.612 | 0.473 | 0.727 | 0.510 | 0.691 | 0.541 | 0.798 | 0.575 |
| Caser | 0.572 | 0.435 | 0.692 | 0.474 | 0.692 | 0.541 | 0.794 | 0.574 |
| TiSASRec | 0.610 | 0.477 | 0.721 | 0.513 | 0.736 | 0.593 | 0.824 | 0.622 |
| ComiRec | 0.575 | 0.437 | 0.695 | 0.476 | 0.693 | 0.553 | 0.800 | 0.577 |
| ContraRec | 0.617 | 0.486 | 0.728 | 0.522 | 0.723 | 0.589 | 0.811 | 0.603 |
| TimiRec | 0.624 | 0.487 | 0.735 | 0.523 | 0.731 | 0.591 | 0.821 | 0.621 |
| CTSRec (ours) | **0.637** | **0.497** | **0.752** | **0.534** | **0.745** | **0.605** | **0.835** | **0.635** |

Table 4: Comparisons between the baseline methods and the new method in HR@K and NDCG@K on Amazon Video Games (Games) and MovieLens-1M. The results are averaged over 10 independent runs. The highest results in each column are highlighted in bold.

## 4.3 Ablation Study

In this subsection, we aim at answering the following research questions.

- **(Q1).** Since the user history is split out from the output of the transformer, is it useful in the learning process?
- **(Q2).** How useful is the concatenation operation for predicting the interaction scores together? Is it better to predict the interaction scores one item at a time?
- **(Q3).** Is the BPR loss function necessary for the good performance of our model?
- **(Q4).** How does the transformer model architecture affect the performance of our model?
- **(Q5).** How does the number of negative items affect the performance of all the models?

Figure 2: Ablation study on the number of negative items. In each figure, we take $k-1 = 1, 9, 19, 29, 49, 99, 149, 199$.

**(Q1)** questions the model's ability to learn the interactions between user history and the target items to recommend, due to the fact that we have removed the user history from the transformer output used by the MLP for the final prediction. **(Q2)** essentially doubts the usefulness of the contrastive signals among the target items and asks whether learning their interactions with the user history individually can enhance the understanding of each item. Since some prior works use different loss functions (Kang and McAuley, 2018; Li et al., 2020), **(Q3)** asks whether the performance of our model is consistent under other loss functions. **(Q4)** is related to the robustness of the neural architecture.
**(Q5)** is the most interesting question, because the number of negative items affects the strength of the contrastive signals in the training stage. If there are too few negative samples, the model may be incapable of learning the difference between the ground truth and the negatives. However, the model might also be distracted by data imbalance if there are too many negative samples. Moreover, it is also interesting to see whether changing the number of negative items affects other models, especially prior transformer-based models, which have limited ability to learn such information.

We have conducted the ablation study on the Amazon G & GF dataset to provide answers to the above questions. The results are provided in Table 5 and Figure 2.

| Method | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
|----------------------------|-------|-------|-------|-------|
| baseline | 0.433 | 0.331 | 0.526 | 0.372 |
| (1) no history ↓ | 0.209 | 0.132 | 0.341 | 0.289 |
| (2) item-by-item ↓ | 0.413 | 0.313 | 0.516 | 0.345 |
| (3) BCE loss | 0.441 | 0.340 | 0.538 | 0.388 |
| (4) separate transformer ↓ | 0.403 | 0.309 | 0.505 | 0.337 |
| (5) 1 block ↓ | 0.402 | 0.303 | 0.499 | 0.322 |
| (6) 2 blocks | 0.416 | 0.321 | 0.515 | 0.341 |
| (7) 3 blocks | 0.429 | 0.319 | 0.518 | 0.355 |
| (8) 5 blocks | 0.431 | 0.329 | 0.523 | 0.359 |

Table 5: Ablation study (HR@K and NDCG@K) on the Grocery and Gourmet Food dataset. The sign ↓ indicates a significant performance drop compared with the original baseline in Table 3.

(1). No history. To answer **(Q1)**, we have removed the user history from the model architecture and trained the model on the item embeddings only. It can be observed that there is a significant drop in HR and NDCG, showing that the user history plays an important role in our model structure. Without the user history, the model only learns which items users frequently engage with, and thus has limited performance.

(2). Item-by-item. To answer **(Q2)**, we have tried a different version of our model, which concatenates the embedding of each target item in $\{m_{r_1}, m_{r_2}, \cdots, m_{r_k}\}$ with the user history $(m_{s_1}, m_{s_2}, \cdots, m_{s_n})$ individually as the input to the self-attention layers. Specifically, the input now becomes a batch of inputs (different from the mini-batch in stochastic optimization):

$$\widehat{E}_{r_{i}}=\begin{bmatrix}E_{his}+P\\ m_{r_{i}}\end{bmatrix},\quad i=1,2,\cdots,k\tag{13}$$

Each of these concatenated inputs $\widehat{E}_{r_i}$ is fed into the self-attention layers one by one. The channel for the target item $r_i$ in the output is again split and fed into the MLP to predict the interaction score for the item $r_i$. In other words, $k$ forward passes are needed for one set of target items $(r_1, r_2, \cdots, r_k)$. In this way, the transformer is capable of learning the interactions between the user history and the target items, but not the contrastive signals among the target items, since they are no longer concatenated together. As can be observed, such a design performed worse than the original one, but it is still able to learn the interactions between the user history and the target items. Moreover, item-by-item training takes much more time than the vanilla design due to the excess number of forward passes; a sketch contrasting the two input constructions is given below.
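The sketch below contrasts the two input constructions under illustrative sizes; variable names follow the notation above but are otherwise our own.

```python
import torch

n, k, d = 20, 99, 64                                  # illustrative sizes
E_his, P, E_new = torch.randn(n, d), torch.randn(n, d), torch.randn(k, d)

# CTS input of Equation (4): a single (n+k, d) sequence per user
E_hat = torch.cat([E_his + P, E_new], dim=0)

# item-by-item variant of Equation (13): k separate (n+1, d) sequences, so the
# target items never attend to one another and k forward passes are required
E_hat_items = torch.stack(
    [torch.cat([E_his + P, E_new[i : i + 1]], dim=0) for i in range(k)]
)                                                     # shape (k, n+1, d)
```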
(3). BCE loss. For **(Q3)**, we have used the binary cross-entropy (BCE) loss to train our CTSRec model. It turns out that our model is quite robust to the choice of the loss function. When using the BCE loss, the model converges much more slowly than the baseline model, but achieves slightly higher accuracy. We emphasize that the obtained HR and NDCG results are still higher than those of all the baselines in Table 3. To align with the ReChorus library (Wang et al., 2020b) and its baselines, we have chosen to use the BPR loss throughout this paper for consistency, except for this ablation study.

(4). Separate transformers. We have also tried a different neural network architecture to assess the effectiveness of our concatenate-then-split (CTS) structure. Specifically, we have used one transformer on the user history embeddings and another transformer on the target item embeddings. The outputs of the two transformers are then multiplied using the dot product operation, similar to SASRec (Kang and McAuley, 2018) and TiSASRec (Li et al., 2020). The performance of this model is, however, worse than that of our CTSRec model, because the transformers never consider all the embeddings together. This further supports our claim that it is important to learn both the interactions between the user history and the target items and the contrastive signals among the target items.

(5) - (8). \# blocks B. To understand how the model architecture affects the performance of CTSRec, we have tuned the number of transformer blocks in our model. As can be seen in Table 5, CTSRec behaves reasonably well with 3-5 transformer blocks.

Number of Negative Samples. To understand the effect of the number of negative samples on the different algorithms **(Q5)**, we have tuned this number on the three datasets and plotted the change in HR@5 with respect to the number of negative samples in Figure 2. We have chosen GRU4Rec, SASRec, TiSASRec, TimiRec, and ContraRec as the comparison baselines, since they are the top performers in Tables 3 and 4 and many of them utilize self-attention in their model architectures. As can be observed, our model is the most sensitive to the number of negative samples. As we increase the number of negative samples, our model quickly learns the contrastive signals between them and the ground truth to enhance the ultimate performance. Other models, on the other hand, either have limited ability to learn or completely ignore this information in their model designs. For example, GRU4Rec has slightly better performance when we increase the number of negative samples. However, models such as SASRec, TiSASRec, and ContraRec are almost invariant to the number $k$.

## 5 Conclusions

In this paper, we propose a new transformer-based sequential recommendation model that explicitly incorporates the interactions between user history and target items, as well as the contrastive signals between the ground truth and negative items, via a concatenate-then-split structure. Extensive experiments show that the new model achieves state-of-the-art performance on popular sequential recommendation tasks. Our ablation study also shows that these signals, which are missing from prior works, are very important and helpful for sequential recommendation tasks. Future research directions include further combining our model with previous advanced techniques such as time-interval modeling (Li et al., 2020), multi-interest recommendation (Cen et al., 2020; Wang et al., 2022), and item relation modeling (Wang et al., 2020b;a).

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate.
In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.

Yukuo Cen, Jianwei Zhang, Xu Zou, Chang Zhou, Hongxia Yang, and Jie Tang. Controllable multi-interest framework for recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2942–2951, 2020.

Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. Social attentional memory network: Modeling aspect- and friend-level differences in recommendation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, pages 177–185, New York, NY, USA, 2019a. Association for Computing Machinery.

Qiwei Chen, Huan Zhao, Wei Li, Pipei Huang, and Wenwu Ou. Behavior sequence transformer for e-commerce recommendation in Alibaba. In Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data, DLP-KDD '19, New York, NY, USA, 2019b. Association for Computing Machinery. ISBN 9781450367837.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.

Xinyu Du, Huanhuan Yuan, Pengpeng Zhao, Jianfeng Qu, Fuzhen Zhuang, Guanfeng Liu, Yanchi Liu, and Victor S. Sheng. Frequency enhanced hybrid attention network for sequential recommendation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '23, pages 78–88, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450394086. doi: 10.1145/3539618.3591689. URL https://doi.org/10.1145/3539618.3591689.

Ruining He, Wang-Cheng Kang, and Julian McAuley. Translation-based recommendation: A scalable method for modeling sequential behavior. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 5264–5268. International Joint Conferences on Artificial Intelligence Organization, 2018. doi: 10.24963/ijcai.2018/734. URL https://doi.org/10.24963/ijcai.2018/734.

Balázs Hidasi and Alexandros Karatzoglou. Recurrent neural networks with top-k gains for session-based recommendations. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, pages 843–852, 2018.

Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. Session-based recommendations with recurrent neural networks. 2015.

Wang-Cheng Kang and Julian McAuley. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pages 197–206. IEEE Computer Society, 2018.

Jiacheng Li, Yujie Wang, and Julian McAuley. Time interval aware self-attention for sequential recommendation. In Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM '20, pages 322–330, 2020.

Yi Li, Jieming Zhu, Weiwen Liu, Liangcai Su, Guohao Cai, Qi Zhang, Ruiming Tang, Xi Xiao, and Xiuqiang He. PEAR: Personalized re-ranking with contextualized transformer for recommendation. In Companion Proceedings of the Web Conference 2022, WWW '22, pages 62–66, New York, NY, USA, 2022. Association for Computing Machinery.

Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme.
Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, pages 811–820, 2010.

Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW '01, pages 285–295, New York, NY, USA, 2001. Association for Computing Machinery.

Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, pages 1441–1450, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450369763. doi: 10.1145/3357384.3357895. URL https://doi.org/10.1145/3357384.3357895.

Jiaxi Tang and Ke Wang. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18, pages 565–573, New York, NY, USA, 2018. Association for Computing Machinery.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Chenyang Wang, Min Zhang, Weizhi Ma, Yiqun Liu, and Shaoping Ma. Modeling item-specific temporal dynamics of repeat consumption for recommender systems. In The World Wide Web Conference, WWW '19, pages 1977–1987, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366748.

Chenyang Wang, Weizhi Ma, Min Zhang, Chong Chen, Yiqun Liu, and Shaoping Ma. Toward dynamic user intention: Temporal evolutionary effects of item relations in sequential recommendation. ACM Transactions on Information Systems (TOIS), 39(2):1–33, 2020a.

Chenyang Wang, Min Zhang, Weizhi Ma, Yiqun Liu, and Shaoping Ma. Make it a chorus: Knowledge- and time-aware item modeling for sequential recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 109–118, 2020b.

Chenyang Wang, Zhefan Wang, Yankai Liu, Yang Ge, Weizhi Ma, Min Zhang, Yiqun Liu, Junlan Feng, Chao Deng, and Shaoping Ma. Target interest distillation for multi-interest recommendation. In Proceedings of the 31st ACM International Conference on Information and Knowledge Management, CIKM '22, pages 2007–2016, 2022.

Chenyang Wang, Weizhi Ma, Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. Sequential recommendation with multiple contrast signals. ACM Transactions on Information Systems, 41(1), 2023.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048–2057, Lille, France, 2015. PMLR.

Fajie Yuan, Alexandros Karatzoglou, Ioannis Arapakis, Joemon M. Jose, and Xiangnan He. A simple convolutional generative network for next item recommendation. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, pages 582–590, New York, NY, USA, 2019. Association for Computing Machinery.

Yipeng Zhang, Xin Wang, Hong Chen, and Wenwu Zhu.
Adaptive disentangled transformer for sequential recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23, pages 3434–3445, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400701030. doi: 10.1145/3580305.3599253. URL https://doi.org/10.1145/3580305.3599253.

Yongfeng Zhang, Qingyao Ai, Xu Chen, and Pengfei Wang. Learning over knowledge-base embeddings for recommendation. 2018.

Chang Zhou, Jinze Bai, Junshuai Song, Xiaofei Liu, Zhengchao Zhao, Xiusi Chen, and Jun Gao. ATRank: An attention-based user behavior modeling framework for recommendation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 4564–4571. AAAI Press, 2018a.

Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pages 1059–1068, New York, NY, USA, 2018b. Association for Computing Machinery.

# Appendix

## A Experiment Details

In this appendix, we provide the experimental details and additional experiments.

## A.1 Reproducibility

As mentioned in the Experiments section, we have used the ReChorus library (Wang et al., 2020b) as the codebase to develop our model and run the baseline models. For all the baseline models (e.g., GRU4Rec, SASRec, and ContraRec), we have used the default implementations and the default sets of hyper-parameters. The algorithms are trained for at most 200 epochs on the training set and evaluated on the dev set, and the best model of each algorithm (in terms of NDCG@5 on the dev set) is then used for testing on the test set. In other words, all the results in Tables 3 and 4 are evaluated on the test set with the best model of each algorithm. The training process is early-stopped if the model's performance continues to deteriorate for 20 epochs. We have used 1e-3 as the learning rate for Adam when training CTSRec, which is the default value for the algorithm. The weight decay is set to 1e-4. The batch size is set to 256 (the default in the ReChorus library).

## A.2 Running Time Comparison

In order to fairly compare the performance and efficiency tradeoffs of all the models, we have recorded the average epoch time of GRU4Rec, SASRec, TiSASRec, ContraRec, TimiRec, and CTSRec during the training stage. All experiments are conducted on an AWS job queue using a single A100 GPU card. The implementations of all the baseline models are from the ReChorus library (Wang et al., 2020b). For a complete and fair comparison, we provide the epoch time for each algorithm in the cases k = 2, 100, 200 on the three Amazon datasets used in Table 3. As shown in Table 6, GRU4Rec is the fastest, with the smallest epoch time in all cases. Our CTSRec model is the second fastest among all the models in the cases k = 2 and k = 100, and only slightly slower than TimiRec in the case k = 200. Compared with the SOTA baselines, CTSRec outperforms them by a significant margin; at the same time, CTSRec is as fast as TimiRec and much faster than ContraRec. To conclude, CTSRec is able to improve the performance of the transformer architecture with little computational overhead.

## A.3 Convergence Comparison

We further measure the speed of the algorithms through a convergence comparison.
As shown in Table 7, we record the HR@5 of each algorithm on the dev set for the first 12 epochs of training on the Amazon G & GF dataset. As can be observed, CTSRec converges to its best performance within 10 epochs, whereas other SOTA algorithms such as ContraRec and TimiRec converge very slowly. In fact, ContraRec needs more than 100 epochs to achieve its best performance in Tables 3 and 4, which is much slower than our CTSRec algorithm. If we combine the average epoch time information in Table 6 with the number of epochs needed to converge in Table 7, we can conclude that CTSRec achieves its best performance within a small number of epochs and a short amount of time, and its performance is better than that of the SOTA methods.

Table 6: Average epoch time (in seconds) comparison between the baseline methods and the new method CTSRec. The results are averaged over 10 independent runs. The lowest results in each column are highlighted in bold.

| Method | G & GF k=2 | k=100 | k=200 | CP & A k=2 | k=100 | k=200 | Games k=2 | k=100 | k=200 |
|---------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| GRU4Rec | **9.9** | **12.7** | **16.1** | **10.1** | **13.4** | **16.7** | **11.9** | **16.8** | **21.6** |
| SASRec | 38.9 | 40.3 | 46.6 | 49.9 | 51.2 | 48.8 | 60.4 | 63.2 | 79.9 |
| TiSASRec | 45.7 | 50.3 | 55.9 | 53.3 | 56.6 | 64.5 | 66.4 | 77.6 | 92.3 |
| ContraRec | 153.1 | 159.4 | 172.8 | 163.7 | 171.3 | 183.0 | 225.5 | 245.2 | 254.6 |
| TimiRec | 20.3 | 26.3 | 33.7 | 20.2 | 29.0 | 34.9 | 28.6 | 39.3 | 49.2 |
| CTSRec (ours) | 14.3 | 20.1 | 36.7 | 14.1 | 21.8 | 37.8 | 18.1 | 29.7 | 50.4 |

| Method / Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| FPMC | 0.258 | 0.327 | 0.375 | 0.401 | 0.409 | **0.411** | **0.411** | 0.406 | 0.402 | 0.397 | 0.393 | 0.388 |
| GRU4Rec | 0.336 | 0.392 | 0.405 | 0.422 | 0.431 | 0.435 | 0.443 | 0.454 | 0.459 | 0.459 | 0.460 | **0.464** |
| Caser | 0.338 | 0.379 | 0.390 | 0.397 | 0.412 | 0.422 | 0.431 | 0.446 | 0.451 | 0.449 | 0.451 | **0.456** |
| SASRec | 0.414 | **0.448** | 0.435 | 0.428 | 0.411 | 0.412 | 0.402 | 0.399 | 0.392 | 0.395 | 0.395 | 0.390 |
| TiSASRec | 0.252 | 0.323 | 0.360 | 0.381 | 0.403 | 0.414 | 0.427 | 0.434 | 0.438 | 0.441 | **0.444** | 0.443 |
| ComiRec | 0.249 | 0.283 | 0.322 | 0.349 | 0.366 | 0.377 | 0.385 | 0.391 | 0.399 | 0.403 | 0.408 | **0.412** |
| ContraRec | 0.077 | 0.090 | 0.114 | 0.144 | 0.177 | 0.203 | 0.223 | 0.240 | 0.255 | 0.269 | 0.278 | **0.288** |
| TimiRec | 0.252 | 0.330 | 0.356 | 0.381 | 0.407 | 0.426 | 0.443 | 0.453 | 0.461 | 0.468 | 0.471 | **0.472** |
| CTSRec | 0.383 | 0.377 | 0.404 | 0.428 | 0.449 | 0.465 | 0.463 | **0.471** | 0.463 | 0.459 | 0.458 | 0.451 |

Table 7: The evaluation HR@5 on the dev set for each algorithm after every epoch on the Amazon G & GF dataset. The best performance in each row is highlighted in bold.
Review 1:

Summary: The paper proposes a new transformer-based architecture for sequential recommendation. Compared with traditional self-attention based sequential recommender systems, the paper proposes to concatenate the user histories and target item embeddings, and then apply self-attention on top of them. The benefit is two-fold: one is to explicitly incorporate interactions between user history and target items; the other is to utilize the contrastive signals between negative items and the ground truth. Empirical results show better performance compared with other sequential recommendation models.

Strengths and Weaknesses:

Strengths:
- The paper is well-motivated, well-written, and easy to read.
- The designed new architecture is simple, easy to implement, and shows strong empirical performance.
- Ablation studies are performed to examine different aspects of the method, such as the number of negative samples, the training loss, the cut of user history, etc.

Weaknesses:
- All the empirical results only provide the average; could the authors also report the standard deviation to understand the significance of the gains?
- Some of the design choices are not clearly explained, for example, the cut-off of the user-history output in the final prediction; there are other ways to match the shape of the labels as well, for example, learning a weight matrix to obtain k rows. It would be great if the authors could explain these design choices in detail.
- The additional computational complexity is not discussed.
- Did the authors try a baseline approach that utilizes two transformers separately for the user history and the target items? This might provide an examination of the relative effect of the contrastive signals.
- This seems to be a purely empirical paper on recommender systems, and I am not sure whether it fits the general audience of TMLR.

Requested Changes: See Weaknesses.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper is about sequential recommendation, one of the most fundamental recommendation tasks. The authors propose a concatenate-then-split structure that intentionally integrates historical interactions. Results on two datasets show state-of-the-art performance over prevalent sequential recommendation baselines. Overall, this paper is written well. However, the solution is somewhat straightforward; the baselines include only one method published in 2023; and the utilized datasets are small-scale. These weaknesses make the paper not suitable for publication in TMLR.

Strengths and Weaknesses:
1. Sequential recommendation, studied in this paper, is very fundamental and important. Almost all recommendation engines in the world have deployed sequential recommendation algorithms.
2. The paper is written well.
3. The method is clearly presented.

Requested Changes:

1. The methodology.
- The method is not well motivated. The authors claimed, "However, prior endeavors, while successfully leveraging user history attributes, are constrained in capturing the interplay between user history and new items, as well as the contrastive signals between authentic and unfavorable items." However, the statement is not well supported. I do not agree that all existing methods have such a limitation.
- The proposed method is somewhat straightforward.

2. The experiments.
- The baselines include only one method published in 2023.
- The utilized datasets are small-scale.
- There are only two datasets: Amazon and ML-1M.

3. Others.
- Complexity analysis of the proposed method could be added.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: In the sequential recommendation modeling process of the Transformer, K candidate items are concatenated to the user's historical items. Subsequently, the model transforms the last K items in the sequence, producing the final scores.

Strengths and Weaknesses:
1. Numerous works in sequential recommendation consider the target item as a guiding signal in modeling the sequence of historical items. However, this paper identifies such an approach as a flaw in existing methodologies, which it attributes to insufficient research and literature survey.
2. The adoption of simple operations like the inner product for calculating prediction scores is due to their simplicity and effectiveness. The method proposed in the paper has unacceptable complexity and is impractical in real-world applications, where the number of candidate items can reach up to 100 million.
3. The table format is irregular and lacks a definitive bottom line.
4. In the Causality section, the statement "Therefore to prevent information leakage, the input embeddings for the items corresponding to (s1, s2, ..., st) should be masked when predicting the next item st+1" is obviously incorrect.
5. The paper frequently exhibits unclear use of symbols, such as k in Equation (11).
6. In the experimental section, the maximum length of the sequences is too short; a length of 50 or more is typically required.
7. Using negative sampling as a testing method is biased, while full ranking offers a more persuasive approach.
8. It is well known that MovieLens is unsuitable as a dataset for sequential recommendation.
9. The expressions for the recommendation metrics given in Equation 12 are both incorrect.
10. As mentioned in Section 4.2, the method proposed in the paper does not have explicit modeling of such information, which contradicts its ability to achieve better experimental results. Explicit modeling is essential for capturing users' sequential interests.
11. Setting a maximum limit of 200 epochs may prevent some baseline models from achieving full convergence.
12. The paper lacks an analysis of time complexity and space complexity, and theoretically, the proposed method is far more complex than the baselines.

Requested Changes: As mentioned in the Weaknesses part above, this work clearly has numerous issues, even including erroneous experimental metrics, making it unacceptable.

Broader Impact Concerns: n/a

==================================================

Metareview:

Recommendation: Reject

Comment: The reviewers' decision came back split (1 weak accept, 1 weak reject, and 1 strong reject). Below, I summarize the main elements that the reviewers used to motivate their decisions and provide my assessment.

1. **Novelty and significance of contribution.** While reviewers raised this, this aspect should not be considered per TMLR's guidelines.

2. **TMLR Audience.** Reviewers argue that perhaps TMLR is not the right audience for this paper. According to TMLR's guidelines, it suffices that some of the TMLR audience would be interested in the findings of this paper. This paper provides a simple modification to the current sequential recommendation architecture that seems to improve performance. As such, I think it would be of interest, particularly to those interested in recommender systems.
3. **Motivation, claims, and evidence.** Here, the reviewers mention that the authors tried to improve their text (in the most recent version of the manuscript) but that some of the claims are still not well supported. While I believe that the authors provide some evidence for their results (including the results in Figure 2 and an ablation study), I also find that the paper could more clearly support its claims and improve its motivation with some additional modifications, including, for example:
   1. A more precise description of your research question and the limitations of previous work (currently, the motivation is somewhat informally described in the introduction). In particular, it would be helpful to describe the information from the target items you are attempting to model and how it might be useful for the recommendations. Further, describing the expected effect of the size of the negative set would be interesting. Perhaps using a (made-up) example would suit that.
   2. More evidence that your proposed contribution captures this extra information (in addition to Figure 2 and the ablation study). For example, probing the resulting model and/or providing an exploratory analysis could help.
   3. The results are shown on two datasets. Several other datasets have been used in previous papers (including the ones you've cited and even different Amazon categories) that would serve to solidify the current evidence and perhaps provide a slightly more in-depth analysis.

   Again, these are just examples of elements you could consider adding to your paper to clarify some claims and evidence.

4. **Missing recently proposed baselines.** Here, unfortunately, the reviewer did not make concrete suggestions. On one end, it is possible (maybe even likely) that new baselines appeared in the last year. I suggest the authors carefully review the recent literature for relevant sequential recommendation methods. (To be clear, this was not a motivation for suggesting the rejection.)

In short, my assessment is that this paper is closer to being acceptable at TMLR than the reviewers' decision might lead one to believe. This is partly because the reviewers have used criteria from other venues (such as novelty and significance). With that in mind, I still find that 1) further evidence should support the claims, and 2) the paper's motivation should be described more precisely. I would happily consider a revised version of this manuscript.

==================================================
# StructCoder: Structure-Aware Transformer For Code Generation

Anonymous authors Paper under double-blind review

## Abstract

There has been a recent surge of interest in automating software engineering tasks using deep learning. This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most of the state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark.

## 1 Introduction

Code generation is the problem of generating code given source code that is either imperfect or in a different language, or generating code from a natural language description. In this paper, we consider the problem of generating target code given source code in a different language (code translation) or a natural language description (text-to-code generation). Code translation has applications in migrating legacy codebases to contemporary programming languages and porting existing software to various other platforms (Rozière et al., 2020; Ahmad et al., 2021; Zhu et al., 2022). As developers often write code to solve a problem or implement logic that is stated in natural language, a good text-to-code generation model can help in speeding up the software development process (Ahmad et al., 2021).

Traditional code translation tools have been designed using hand-crafted rules based on the Abstract Syntax Tree (AST) (Rozière et al., 2020). However, the design of such tools demands a lot of time and effort, as it requires proficiency in both the source and target languages (Zhu et al., 2022). Moreover, such tools are specific to the particular language pair they are designed for. Since natural language generation using deep learning has achieved great success in recent years, it is natural to exploit similar deep learning based approaches for code generation as well. However, the code domain faces a unique set of challenges. Since the generated code is to be understood by a machine as opposed to a human, it is even more important for the generated code (compared to natural language) to adhere to a specific syntax. Moreover, since a minor change in code could alter its function, it is also critical to preserve the semantic information from the source code during translation.
To generate syntactically correct code, some of the existing approaches for code generation leveraged the AST structure by learning to generate the inorder traversal of the AST (Li et al., 2018), learning to generate production rules for the AST based on a grammar, encoding AST paths using RNNs (Alon et al., 2020), and using AST-based attention (Li et al., 2018; Kim et al., 2021) in sequence models. Guo et al. (2021) hypothesize that the Data Flow Graph (DFG), which contains more semantic information and is less complex than the AST, is a more useful structure to learn code representations. They incorporate the DFG into the Transformer encoder by appropriately masking the attention matrix.

Our model, *StructCoder*, consists of a Transformer encoder that incorporates both the syntax and data flow of the source code by embedding root-leaf paths in the AST and using a modified self-attention framework, called *structure-aware self-attention*. Code generation heavily relies on the decoder to generate code that is syntactically correct while simultaneously preserving the semantics present in the input. StructCoder advances the state-of-the-art by incorporating a structure-aware Transformer decoder that is designed to preserve the syntax and semantics of the generated code. None of the existing pretrained Transformer models constrain the generated code structure. In our work, we not only incorporate the source AST and DFG into the encoder, but also enforce the decoder to learn the target syntax and data flow by introducing novel AST and DFG related tasks. Particularly, we train the decoder to predict all the root-leaf paths in the target AST and also to predict the DFG edges.

Similar to pretrained language models (Devlin et al., 2018; Liu et al., 2019; Radford et al., 2019; Yang et al., 2019), pretrained code models using the Transformer (Feng et al., 2020; Ahmad et al., 2021; Lachaux et al., 2021; Zügner et al., 2021) have resulted in significant performance gains on code-related tasks. While some pretext tasks like Masked Language Modeling (MLM) and Replaced Token Detection (RTD) only pretrain the encoder, other pretext tasks like Denoising Autoencoding (DAE) and Back Translation (BT) jointly pretrain both the encoder and decoder. StructCoder falls in the latter category and is pretrained using a structure-based DAE task. Moreover, since the structure-based components introduced in this work can be added to any existing Transformer model, we may initialize most of the StructCoder weights using one of the pretrained code models to avoid pretraining from scratch, which can be quite expensive.

The main contributions of this work are listed below:

1. We develop a Transformer-based encoder-decoder model called StructCoder for code generation where both encoder and decoder are structure-aware. (a) The encoder incorporates AST's root-leaf path embeddings and a structure-aware self-attention framework to model source code structure. (b) The decoder is trained to recognize target syntax and data flow via two novel auxiliary tasks: AST paths prediction and data flow prediction.
2. We pretrain StructCoder using a structure-based DAE objective where the input code as well as its AST and DFG are partially corrupted and the model is trained to generate the original input code and also perform the auxiliary tasks.
3. Our experiments demonstrate that the proposed model achieves state-of-the-art performance on the code translation and text-to-code generation tasks in the CodeXGLUE (Lu et al., 2021) benchmark, and outperforms similarly sized baselines on the APPS code generation benchmark.
## 2 Related Work

## 2.1 Leveraging Structure To Generate Code

To leverage code structure in deep models, many approaches have utilized ASTs. Some approaches modeled code completion as a language modeling task by ordering the code tokens using a depth-first traversal of the AST. Li et al. (2018) used an LSTM appended with parent-child attention, while Alon et al. (2020) encoded each root-to-leaf path with an LSTM. Kim et al. (2021) used the Transformer to encode the sequenced AST by encoding AST paths into self-attention. For text-to-code generation, Rabinovich et al. (2017) proposed a modular decoder to recursively generate the target AST. Yin & Neubig (2017); Brockschmidt et al. (2019); Sun et al. (2020) construct ASTs by generating production rules based on a grammar. Unlike these methods, we keep the conventional Transformer decoder architecture intact and introduce auxiliary structure-related components on top of the decoder's final hidden representations, so that StructCoder is trained to preserve target code structure while not requiring the generation of such structures (AST/DFG) during inference. Building on top of the conventional Transformer architectures not only allows us to utilize existing pretrained models for better initialization but also makes the advances in the area of Transformers more easily applicable to our model.

## 2.2 Pretrained Transformers For Code

The recent state-of-the-art results on most natural language generation tasks are obtained by pretraining huge deep learning models on large datasets with carefully designed pretext tasks. Since code generation is very similar to text generation and there is abundant unsupervised code data available through open source code repositories, pretraining code generation models using similar pretext tasks has been successful. Most recent state-of-the-art pretrained models for code utilize the Transformer (Vaswani et al., 2017) architecture and are discussed below.

CodeBERT (Feng et al., 2020) performs encoder-only pretraining using Masked Language Modeling and Replaced Token Detection as pretext tasks on the CodeSearchNet dataset. Transcoder (Rozière et al., 2020) is an unsupervised translation model which pretrains both the encoder and decoder using Denoising Autoencoding and Back-Translation with only monolingual datasets. PLBART (Ahmad et al., 2021) is pretrained with the DAE objective using 680M Java and Python functions. DOBF (Lachaux et al., 2021) attempts to understand code structure with a deobfuscation pretext task where every occurrence of a sampled identifier is replaced by an uninformative token. Code Transformer (Zügner et al., 2021) modifies the attention computations in the encoder according to AST-based distances. CodeT5 (Wang et al., 2021) pretrains the T5 model (Raffel et al., 2020) with code data in 8 programming languages. Different from PLBART, which treats code data as plain sequences, CodeT5 includes an identifier-aware objective in the training, which helps maintain the correctness of the code. However, CodeT5 does not include any structural information of the code in training. Zhu et al. (2022) improve code translation performance by introducing a fine-grained snippet-level translation task during pretraining.
GraphCodeBERT (Guo et al., 2021) utilizes code structure in the form of a Data Flow Graph (DFG), which contains semantic information as opposed to the syntactic information in the AST. However, the decoder is completely unaware of the code structure in all of the above methods. *Our model* advances the domain of code generation by being the first one to train a structure-aware Transformer encoder and decoder by modeling both syntax and data flow. A summary of the pretext tasks and code structures used by the above Transformer-based methods along with our approach is provided in Table 1. To clarify the novelty of our work compared to other related methods like GraphCodeBERT, we provide a comparison of the two methods in the Appendix.

Table 1: A summary of the recent pre-trained models for code generation. (Abbreviations: DFG: Data Flow Graph, MLM: Masked Language Modeling, DAE: Denoising Autoencoding, RTD: Replaced Token Detection, BT: Back Translation, EP: DFG Edge Prediction, NA: Alignment prediction between code tokens and DFG nodes, DOBF: Deobfuscation, IT: Identifier Tagging, MSP: Masked Span Prediction, MIP: Masked Identifier Prediction, MuST: Multilingual Snippet Translation.)

| Model | Encoder-only pretraining | Encoder-decoder pretraining | Encoder structure-awareness | Decoder structure-awareness |
|---|---|---|---|---|
| CodeBERT (Feng et al., 2020) | MLM, RTD | - | - | - |
| GraphCodeBERT (Guo et al., 2021) | MLM, EP, NA | - | DFG | - |
| Transcoder (Rozière et al., 2020) | MLM | DAE, BT | - | - |
| PLBART (Ahmad et al., 2021) | - | DAE | - | - |
| DOBF (Lachaux et al., 2021) | - | DOBF | - | - |
| CodeT5 (Wang et al., 2021) | IT | MSP, MIP, NL-PL dual generation | Identifiers | Identifiers |
| MuST (Zhu et al., 2022) | - | DAE, MuST | - | - |
| StructCoder (ours) | - | structure-based DAE, NL-PL dual generation | AST, DFG | AST, DFG |

## 3 StructCoder

StructCoder is a Transformer-based encoder-decoder model where both the encoder and decoder are structure-aware. We build our model using the T5 architecture and add the relevant components for modeling code structure. For code inputs, the encoder (see Section 3.2) inputs the tokenized source code sequence along with its AST and DFG and employs structure-aware self-attention. The structure-aware decoder (see Section 3.3) simultaneously learns to generate the target code sequence as well as to perform target AST and DFG related tasks.

## 3.1 Preliminaries

A **code** can be a function or a program, and is represented as a sequence of tokens $S = (s_1, ..., s_{|S|})$. A code $S$ has a corresponding AST represented as $\mathcal{T} = (N, N_{leaf}, r, p(\cdot), L^{ast})$, where $N$ is the set of nodes in the AST, $N_{leaf} = \{l_1, ..., l_{|N_{leaf}|}\} \subset N$ is the subset of leaf nodes, $r \in N$ is the root node, $p : N \setminus \{r\} \rightarrow N$ is a mapping such that $p(n)$ denotes the parent of node $n$, and $L^{ast} \in \{0, 1\}^{|S| \times |N_{leaf}|}$ is a linking matrix such that $L^{ast}_{ij} = 1$ if and only if token $s_i$ is part of leaf $l_j$. Each node $n \in N$ has a type denoted by $n.type$.
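To make these definitions concrete, the following minimal Python sketch shows one possible in-memory representation of the AST structure $\mathcal{T}$ and the extraction of a root-leaf path. All names here are illustrative assumptions, not the paper's actual implementation (which builds ASTs with tree-sitter, as described in the Appendix).

```python
from dataclasses import dataclass

@dataclass
class AST:
    """Illustrative container for T = (N, N_leaf, r, p(.), L_ast)."""
    node_types: list   # node_types[n] is the type attribute of node n
    parent: dict       # p: maps each non-root node to its parent node
    root: int          # index of the root node r
    leaves: list       # indices of the leaf nodes in N_leaf
    link: list         # L_ast as a |S| x |N_leaf| 0/1 matrix:
                       # link[i][j] = 1 iff token s_i is part of leaf l_j

    def root_leaf_path(self, leaf):
        """Return (r_1, ..., r_|l|), the nodes from the root down to `leaf`."""
        path = [leaf]
        while path[-1] != self.root:
            path.append(self.parent[path[-1]])
        return path[::-1]
```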
A code $S$ also has a corresponding DFG represented as $\mathcal{G} = (V, D, L^{dfg})$, where $V = \{v_1, v_2, ..., v_{|V|}\}$ is the set of variables extracted from code $S$, $D \in \{0, 1\}^{|V| \times |V|}$ is the adjacency matrix where $D_{ij} = 1$ if and only if the value of $v_i$ is directly obtained from $v_j$, and $L^{dfg} \in \{0, 1\}^{|S| \times |V|}$ is a linking matrix such that $L^{dfg}_{ij} = 1$ if and only if variable $v_j$ is derived from token $s_i$.

The goal of code translation is to transform a code $S = (s_1, ..., s_{|S|})$ in a source language to code $T = (t_1, ..., t_{|T|})$ in a different target language such that the translated code $T$ solves exactly the same problem as the input code $S$ but in a different (target) language. In text-to-code generation, the goal is to generate target code $T$ from a natural language description.

Figure 1: Structure-aware encoder: The input sequence to the encoder consists of source code concatenated with the AST leaves and DFG variables, where the AST leaves are embedded using the root-leaf paths in the AST. The modified structure-aware self-attention mechanism of this Transformer encoder utilizes code-AST/DFG linking information, leaf-leaf similarities in the AST, and the (asymmetric) DFG adjacency matrix to compute the attention matrix.

## 3.2 Structure-Aware Encoder

Given source code $S$, its corresponding AST $\mathcal{T}$, and DFG $\mathcal{G}$, the input sequence to the encoder is

$$(\langle CLS \rangle, s_1, ..., s_{|S|}, \langle SEP \rangle, l_1, ..., l_{|N_{leaf}|}, v_1, ..., v_{|V|})$$

which consists of the code tokens, special tokens $\langle CLS \rangle$ and $\langle SEP \rangle$, AST leaves, and DFG variables. For text input, the leaves and variables are simply ignored in the input. The encoder architecture is illustrated in Figure 1 and is described in detail below.

## 3.2.1 Input Embedding

As StructCoder consists of a Transformer encoder, each token in the input sequence has to be embedded in $\mathbb{R}^d$. We embed the code tokens along with the special tokens by using a lookup table, and use a default embedding for all DFG variables. The DFG information will be used by the encoder in structure-aware self-attention. We compute the embedding of a leaf $l$ in an AST as a function of the path from the root to the leaf $l$ in the AST. Let $(r_1, r_2, ..., r_{|l|})$ be the nodes on the path from root $r = r_1$ to leaf $l = r_{|l|}$. We utilize a node-type embedding $E_{type}(\cdot) \in \mathbb{R}^d$ to encode a node's semantics and a node-height embedding $E_{height}(\cdot) \in \mathbb{R}^d$ to encode the order of nodes on this path. The leaf embedding $E(l)$ is computed as

$$E(l) = \sum_{i=1}^{|l|} E_{type}(r_i.type) \odot E_{height}(|l| - i) \quad \in \mathbb{R}^d \tag{1}$$

where $\odot$ denotes element-wise multiplication.
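A minimal PyTorch sketch of the leaf embedding in Equation 1 is given below. The embedding-table sizes and the cap on path length are illustrative assumptions (the finetuning setup described in the Appendix trims root-leaf paths to length 12).

```python
import torch
import torch.nn as nn

class LeafEmbedding(nn.Module):
    """Sketch of Eq. 1: E(l) = sum_i E_type(r_i.type) * E_height(|l| - i)."""

    def __init__(self, num_node_types: int, d: int, max_path_len: int = 12):
        super().__init__()
        self.type_emb = nn.Embedding(num_node_types, d)
        self.height_emb = nn.Embedding(max_path_len, d)

    def forward(self, path_types: torch.Tensor) -> torch.Tensor:
        # path_types: (|l|,) node-type ids for the root-leaf path (r_1, ..., r_|l|)
        length = path_types.size(0)
        heights = torch.arange(length - 1, -1, -1)  # |l| - i for i = 1, ..., |l|
        return (self.type_emb(path_types) * self.height_emb(heights)).sum(dim=0)
```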
## 3.2.2 Structure-Aware Self-Attention

Since the input contains the DFG and AST, which consist of structural information, the traditional attention computation using (relative) positional embeddings, which capture sequential ordering information, is not sufficient. Hence, we propose structure-aware self-attention, which computes attention scores between tokens based on the structural relations between them.

Code-code: Following T5, we compute attention scores (before softmax) between code tokens by adding the query-key dot product with weights $W_q, W_k \in \mathbb{R}^{d_k \times d}$ and a lookup embedding $\phi : \mathbb{Z}_{\geq 0} \rightarrow \mathbb{R}$ for relative position. Denoting the embedding of $x$ by $E_x$, we have

$$A(s_i, s_j) = E_{s_i}^T W_q^T W_k E_{s_j} + \phi(|i - j|) \tag{2}$$

Leaf-leaf: To calculate attention scores between leaves, we introduce a similarity-based transformation to replace the relative positional embedding in equation 2. Let $(r^i_1, ..., r^i_{|l_i|})$ be the nodes on the path from the root to leaf $l_i$. We define the similarity between two leaves $l_i$ and $l_j$ as

$$sim(l_i, l_j) = \log\left(\frac{\left(\sum_{k=1}^{\min(|l_i|, |l_j|)} \mathbb{1}(r^i_k = r^j_k)\right)^2}{|l_i|\, |l_j|}\right) \tag{3}$$

$$= 2\log\left(\sum_{k=1}^{\min(|l_i|, |l_j|)} \mathbb{1}(r^i_k = r^j_k)\right) - \log|l_i| - \log|l_j| \tag{4}$$

which is based on the number of common nodes on the paths from the root to leaves $l_i$ and $l_j$. The log transformation is used to reduce the skewness of the distribution of similarity values. The attention scores between leaves are then computed as follows.

$$A(l_i, l_j) = E_{l_i}^T W_q^T W_k E_{l_j} + (w_a\, sim(l_i, l_j) + w_b) \tag{5}$$

where $w_a, w_b \in \mathbb{R}$.

Variable-variable: Following Guo et al. (2021), the attention scores between DFG nodes are computed using only the query-key dot product and are set to $-\infty$ if the corresponding edges are absent in the DFG.

$$A(v_i, v_j) = \begin{cases} E_{v_i}^T W_q^T W_k E_{v_j} & \text{if } D_{ij} = 1 \\ -\infty & \text{else} \end{cases} \tag{6}$$

Code-leaf/variable: For interaction between code tokens and AST leaves (or DFG variables), we only compute the query-key dot product and do not use any positional information. Inspired by the work of Guo et al. (2021), we set the attention score to $-\infty$ for cases where the leaf (or variable) is not linked to the code token. We show the equations only for interactions between code tokens and leaves, as those for interactions between code tokens and variables are similar.

$$A(s_i, l_j) = \begin{cases} E_{s_i}^T W_q^T W_k E_{l_j} & \text{if } L^{ast}_{ij} = 1 \\ -\infty & \text{else} \end{cases} \qquad A(l_j, s_i) = \begin{cases} E_{l_j}^T W_q^T W_k E_{s_i} & \text{if } L^{ast}_{ij} = 1 \\ -\infty & \text{else} \end{cases} \tag{7}$$

The special tokens $\langle CLS \rangle$ and $\langle SEP \rangle$ are treated just like code tokens and are assumed to be linked to all leaves and variables.
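Putting Equations 2-7 together, the structural terms can be organized as a single additive bias on the query-key dot products, as in the hedged sketch below. The block layout, the helper names, and the choice to leave leaf-variable interactions unmasked are our assumptions (the paper does not specify leaf-variable interactions); the relative-position term $\phi$ of Equation 2 is omitted.

```python
import torch

NEG_INF = float("-inf")

def structure_bias(num_code, sim, D, L_ast, L_dfg, w_a, w_b):
    """Additive attention bias for an input laid out as [code | leaves | variables].
    sim: (n_leaf, n_leaf) leaf similarities (Eqs. 3-4); D: (n_var, n_var) DFG
    adjacency; L_ast: (num_code, n_leaf) and L_dfg: (num_code, n_var) link matrices,
    all given as tensors."""
    n_leaf, n_var = sim.size(0), D.size(0)
    n = num_code + n_leaf + n_var
    bias = torch.zeros(n, n)
    ll = slice(num_code, num_code + n_leaf)
    vv = slice(num_code + n_leaf, n)

    # leaf-leaf: similarity-based bias (Eq. 5)
    bias[ll, ll] = w_a * sim + w_b
    # variable-variable: -inf where the DFG edge is absent (Eq. 6)
    bias[vv, vv] = torch.zeros(n_var, n_var).masked_fill(D == 0, NEG_INF)
    # code-leaf and code-variable: -inf where unlinked (Eq. 7)
    cl = torch.zeros(num_code, n_leaf).masked_fill(L_ast == 0, NEG_INF)
    cv = torch.zeros(num_code, n_var).masked_fill(L_dfg == 0, NEG_INF)
    bias[:num_code, ll], bias[ll, :num_code] = cl, cl.T
    bias[:num_code, vv], bias[vv, :num_code] = cv, cv.T
    return bias  # added to the query-key dot products before the softmax
```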
## 3.3 Structure-Aware Decoder

The decoder in StructCoder consists of the original T5 decoder with additional layers at the end for the AST paths prediction and data flow prediction tasks that are introduced in this section. Figure 2 illustrates the structure-aware decoder, which predicts the next target code token along with the AST root-leaf path to this token and the data flow relations between this token and all past tokens. The addition of these auxiliary tasks does not increase the number of generated tokens, which is important since the decoding is done in an autoregressive manner.

Figure 2: The structure-aware decoder generates the next token in the target code as well as predicts the node types on the root-leaf path to the leaf containing this token in the target AST and also the DFG edges incident on this token.

Let $h_1, h_2, ..., h_{|T|}$ be the hidden states generated by the Transformer decoder. Decoders of existing Transformer models, including T5, employ a linear layer with weights $W \in \mathbb{R}^{|\mathcal{V}| \times d}$ followed by a softmax transformation to extract a probability distribution $p_i$ on the token vocabulary space $\mathcal{V}$ for the $i$th position.

$$p_i = softmax(W h_i) \tag{8}$$

The sequence generation task is trained using the language modeling loss, shown below for one sample,

$$\mathcal{L}_{lm} = -\sum_{i=1}^{|T|} \log p_i(t_i) \tag{9}$$

where $p_i(t_i)$ refers to the predicted probability for the true target token $t_i$ at the $i$th position. In addition to sequence generation, StructCoder also learns the target syntax using the AST paths prediction task, and learns to match the target DFG using a data flow prediction task.

## 3.3.1 AST Paths Prediction (APP)

In this task, the goal is to enable the decoder to be aware of all root-leaf paths in the target AST. Since the type attribute of a node captures important syntactic information, we predict the type of each ancestor on each root-leaf path. Let $l_i$ be the leaf node containing the $i$th target token $t_i$ and let $(r^i_1, ..., r^i_{|l_i|})$ be the nodes on the root-$l_i$ path. To predict the type of node $r^i_k$ (which is at height $|l_i| - k$ in the tree), we use a linear layer with weights $W_{ast(|l_i|-k)} \in \mathbb{R}^{|\mathcal{Y}| \times d}$ followed by a softmax transformation to predict a probability distribution on the set of node types $\mathcal{Y}$.

$$p^{ast}_{ik} = softmax(W_{ast(|l_i|-k)}\, h_i) \tag{10}$$

The APP loss for a sample is given by

$$\mathcal{L}_{app} = -\sum_{i=1}^{|T|} \sum_{k=1}^{|l_i|} \log p^{ast}_{ik}(r^i_k.type) \tag{11}$$

## 3.3.2 Data Flow Prediction (DFP)

In this task, the decoder learns to predict all the data flow edges in the target code. The probability $p^{dfg}_{ij}$ of data flow from the $j$th to the $i$th position in the target code sequence is computed using an asymmetric transformation (since data flow is directed) as

$$p^{dfg}_{ij} = \sigma\left(h_i^T U_{dfg}^T V_{dfg} h_j + u_{dfg}^T h_i + v_{dfg}^T h_j + w_{dfg}\right) \tag{12}$$

where $\sigma(\cdot)$ denotes the sigmoid function. Suppose $\mathcal{G} = (V, D, L^{dfg})$ is the true target DFG. There is a data flow from the $j$th to the $i$th position in the target sequence if and only if the target DFG contains variables $v_{j'}, v_{i'}$ such that variable $v_{j'}$ is derived from $t_j$, variable $v_{i'}$ is derived from $t_i$, and the value of variable $v_{i'}$ is derived from $v_{j'}$. Thus, the DFP loss for a sample can be written as

$$\mathcal{L}_{dfp} = -\sum_{i=1}^{|T|} \sum_{j=1}^{|T|} \left\{ \mathbb{1}(cond) \log p^{dfg}_{ij} + \mathbb{1}(\neg cond) \log\left(1 - p^{dfg}_{ij}\right) \right\} \tag{13}$$

$$\text{where} \quad cond = (\exists\, v_{i'}, v_{j'} \in V \text{ such that } D_{i'j'} = L^{dfg}_{ii'} = L^{dfg}_{jj'} = 1)$$

The overall loss function for training StructCoder (given below) is a combination of the language modeling objective, and the APP and DFP losses with weights $\alpha_1$ and $\alpha_2$ that may be fixed or trainable.

$$\mathcal{L} = (3 - \alpha_1 - \alpha_2)\mathcal{L}_{lm} + \alpha_1 \mathcal{L}_{app} + \alpha_2 \mathcal{L}_{dfp} \tag{14}$$
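The two auxiliary heads can be sketched as follows. The module is illustrative: the dimensions, the per-height classifier list, and the way the bias term absorbs $w_{dfg}$ are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AuxiliaryHeads(nn.Module):
    """Sketch of the APP classifier of Eq. 10 and the DFP scorer of Eq. 12,
    applied on top of the decoder hidden states h_1, ..., h_|T|."""

    def __init__(self, d: int, num_node_types: int, max_height: int = 12):
        super().__init__()
        # one node-type classifier per height: W_ast(h) in Eq. 10
        self.app = nn.ModuleList(nn.Linear(d, num_node_types)
                                 for _ in range(max_height))
        self.U = nn.Linear(d, d, bias=False)   # U_dfg
        self.V = nn.Linear(d, d, bias=False)   # V_dfg
        self.u = nn.Linear(d, 1, bias=False)   # u_dfg
        self.v = nn.Linear(d, 1, bias=True)    # v_dfg; its bias plays w_dfg

    def app_logits(self, h, height):
        # node-type logits for ancestors at the given height, for all positions
        return self.app[height](h)             # (|T|, |Y|)

    def dfp_probs(self, h):
        # p_ij = sigmoid(h_i^T U^T V h_j + u^T h_i + v^T h_j + w)  (Eq. 12)
        scores = self.U(h) @ self.V(h).T + self.u(h) + self.v(h).T
        return torch.sigmoid(scores)           # (|T|, |T|)
```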
## 3.4 Pretraining

We pretrain StructCoder on a subset of the CodeSearchNet¹ (Husain et al., 2019) dataset with a structure-based DAE task along with NL-PL bimodal dual generation to generate code from text and vice-versa. For the denoising task, we corrupt random spans in the code sequence as well as drop some DFG variables and AST leaves in the input to the encoder, and the model is trained to predict the uncorrupted code along with the node types on the AST root-leaf paths and the data flow edges. We initialize our model for pretraining with CodeT5's weights (for faster pretraining) except for the AST and DFG related weights, which are randomly initialized. More details on our pretraining method are provided in the Appendix.

¹ https://github.com/github/CodeSearchNet

## 4 Experiments

We evaluate StructCoder on the code translation and text-to-code generation tasks from the CodeXGLUE² (Lu et al., 2021) benchmark, and on the text-to-code generation task from the APPS benchmark (Hendrycks et al., 2021), and compare with previously published results on these tasks. For CodeXGLUE tasks, we use the metrics from the CodeXGLUE leaderboard, which include (i) BLEU (Papineni et al., 2002), which measures n-gram overlap, (ii) exact match (xMatch), which checks if the prediction is the same as the ground truth, and (iii) CodeBLEU (Ren et al., 2020), which combines the BLEU score with a keywords-based weighted n-gram match as well as syntax and semantic matches based on the AST and DFG. APPS evaluates generated codes based on test cases, where the evaluation metrics include (i) 'test case average', which is the average percentage of test cases passed, and (ii) 'strict accuracy', which is the percentage of generated codes that pass all test cases. The implementation details are provided in the Appendix and the code is available in the supplementary materials.

² https://github.com/microsoft/CodeXGLUE

## 4.1 Code Translation

The code translation dataset from CodeXGLUE consists of two tasks for translating between Java and C# functions in either direction, and contains 10K training samples, 500 validation samples, and 1000 test samples. Table 2 presents the results of StructCoder alongside the baselines on the two code translation tasks. The Naive Copy baseline simply copies the source code to the target, and the Transformer model involves no pretraining. RoBERTa (code) (Lu et al., 2021), CodeBERT, and GraphCodeBERT involve encoder-only pretraining, while PLBART and CodeT5 incorporate encoder-decoder pretraining like StructCoder. StructCoder achieves the best results on the two translation tasks, which can be attributed to the structure-aware encoder-decoder design of our model. From Table 2, we observe that the encoder-decoder pretraining of PLBART, CodeT5, and StructCoder is very beneficial to code translation. Also, the encoder-only pretrained models improve over the Transformer by a huge margin. GraphCodeBERT, which utilizes the DFG, offers minor improvements over CodeBERT, and we also observed in our ablation study that DFG-related components contribute less to the performance gains of StructCoder compared to AST-related components.

Table 2: Results on the two code translation tasks from the CodeXGLUE benchmark.

| Model | Java-C# BLEU | Java-C# xMatch | Java-C# CodeBLEU | C#-Java BLEU | C#-Java xMatch | C#-Java CodeBLEU |
|---|---|---|---|---|---|---|
| Naive Copy | 18.54 | 0.00 | 42.20 | 18.69 | 0.00 | 34.94 |
| Transformer | 55.84 | 33.00 | 63.74 | 50.47 | 37.90 | 61.59 |
| RoBERTa (code) | 77.46 | 56.10 | 83.07 | 71.99 | 57.90 | 80.18 |
| CodeBERT | 79.92 | 59.00 | 85.10 | 72.14 | 58.80 | 79.41 |
| GraphCodeBERT | 80.58 | 59.40 | - | 72.64 | 58.80 | - |
| PLBART | 83.02 | 64.60 | 87.92 | 78.35 | 65.00 | 85.27 |
| CodeT5* | 83.88 | 64.70 | 87.38 | 79.71 | 67.50 | 85.51 |
| StructCoder | 85.03 | 66.60 | 88.41 | 80.73 | 67.70 | 86.10 |

## 4.2 Text-To-Code Generation

The text-to-code generation task from CodeXGLUE uses the CONCODE (Iyer et al., 2018) dataset, and the goal here is to generate a Java function given a natural language description. This dataset contains 100K training samples, 2K validation samples, and 2K test samples. Table 3 presents the results of our model alongside the baselines on the text-to-code generation task.
Among the baselines, GPT-2 (Radford et al., 2019) is pretrained on natural language to predict the next token, CodeGPT (Lu et al., 2021) is pretrained from scratch like GPT-2 but using code data, CodeGPT-adapted (Lu et al., 2021) is pretrained from the GPT-2 initialization using code data, and CoTexT (Phan et al., 2021) pretrains the T5 model further on code data using the MSP objective. The decoder-only baselines, which include the GPT-2 based models, are outperformed by the rest, which are all encoder-decoder models. StructCoder again achieves the best performance on all metrics for the text-to-code generation task.

Table 3: Results on the text-to-code generation task from the CodeXGLUE benchmark.

| Model | BLEU | xMatch | CodeBLEU |
|---|---|---|---|
| GPT-2 | 25.37 | 17.35 | 29.69 |
| CodeGPT | 28.69 | 18.25 | 32.71 |
| CodeGPT-adapted | 32.79 | 20.10 | 35.98 |
| PLBART | 36.69 | 18.75 | 38.52 |
| CoTexT | 37.40 | 20.10 | 40.14 |
| CodeT5 | 40.73 | 22.30 | 43.20 |
| StructCoder | 40.91 | 22.35 | 44.77 |

APPS (Hendrycks et al., 2021) is a text-to-code generation benchmark in Python which evaluates generated codes based on test cases. The inputs here contain detailed questions and possibly some starter code as well. The dataset contains 10K problems equally divided into train and test splits. The test set contains 1K introductory level, 3K interview level, and 1K competition level problems. Table 4 shows the results of StructCoder, CodeT5, and GPT-2 (Hendrycks et al., 2021) models of two sizes. These GPT-2 models were pretrained exclusively on Python code from GitHub, which gives them an edge in this particular task. The 'strict accuracy' metric is more important than the 'test case average' as it does not give partial credit to a generated code that does not pass all test cases. StructCoder achieves the best 'strict accuracy' on all subsets, notably outperforming the bigger GPT-2 model, which is about 7 times the size of StructCoder.

Table 4: Results on the APPS benchmark. The results of the GPT-2 models were obtained from Hendrycks et al. (2021).

| Model | Size | Test case avg. Intro | Test case avg. Interview | Test case avg. Competition | Strict acc. Intro | Strict acc. Interview | Strict acc. Competition |
|---|---|---|---|---|---|---|---|
| GPT-2 | 0.1B | 5.64 | 6.93 | 4.37 | 1 | 0.33 | 0 |
| CodeT5 | 0.2B | 5.50 | 5.06 | 2.33 | 0.6 | 0.67 | 0 |
| StructCoder | 0.2B | 10.01 | 7.09 | 3.57 | 1.8 | 0.73 | 0.2 |
| GPT-2 | 1.5B | 7.40 | 9.11 | 5.05 | 1.3 | 0.70 | 0 |
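For reference, the two APPS metrics reduce to the simple computation below over per-problem test outcomes; this helper is ours, written only to pin down the definitions, and is not the benchmark's evaluation script.

```python
def apps_metrics(outcomes):
    """outcomes[p][t] is True iff the generated code for problem p passed
    test case t. Returns (test case average, strict accuracy) in percent."""
    test_case_avg = 100 * sum(sum(o) / len(o) for o in outcomes) / len(outcomes)
    strict_acc = 100 * sum(all(o) for o in outcomes) / len(outcomes)
    return test_case_avg, strict_acc
```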
## 4.3 Model Analysis

## 4.3.1 Ablation Study

To emphasize the importance of the novel structure-based components introduced in this work, we conducted an ablation study on the two code translation tasks from CodeXGLUE. For this experiment, we used a smaller T5 architecture with hidden dimension 256, 5 encoder and decoder layers, and 8 heads in each multi-head attention layer. The models we tested here include the smaller T5 model (i) without any structure-awareness, which is our baseline, (ii) with AST or DFG related components enabled in the encoder or decoder, (iii) with all the structure-based components enabled, and (iv) adding structure-based DAE pretraining to the previous one. Note that the first three cases do not involve any pretraining. We report the xMatch, and CodeBLEU and its different components. The results are shown in Table 5.

On both tasks, including AST and DFG in both the encoder and decoder yields the best results on all metrics. When considering the effect of individual components, we see that adding the AST to the encoder works the best for Java-C# translation, while adding the AST to the decoder fares well for C#-Java translation. Each component individually improves the translation performance significantly over the baseline, except for the case of adding the DFG to the encoder for C#-Java translation. In general, adding DFG or AST components improves not just the Data Flow match or AST match but all the metrics. The model with structure-based DAE pretraining in Table 5 has been pretrained from scratch on just 30K samples for 15K steps (where each step is a gradient update using a batch of 32 samples). While all other models in this study were trained for 50K steps, this model has been finetuned for just 10K steps, keeping the architecture and other training parameters the same across all models. We observed that structure-based DAE pretraining led to significant performance gains on both tasks.

Table 5: Ablation study results on the two code translation tasks. (Here, i/p and o/p indicate whether the structure was included in the encoder and decoder, respectively.)

| Enabled | xMatch | BLEU | Weighted BLEU | AST match | Data Flow match | CodeBLEU |
|---|---|---|---|---|---|---|
| *Java-C# translation* | | | | | | |
| No structure (baseline) | 43.90 | 62.30 | 63.60 | 78.82 | 73.79 | 69.62 |
| DFG (i/p) | 47.20 | 65.59 | 66.72 | 80.04 | 75.66 | 72.00 |
| DFG (o/p) | 48.10 | 64.87 | 66.12 | 79.88 | 75.26 | 71.53 |
| AST (i/p) | 51.10 | 69.92 | 70.93 | 82.89 | 77.97 | 75.42 |
| AST (o/p) | 46.00 | 64.16 | 65.34 | 80.02 | 75.45 | 71.24 |
| DFG (i/p,o/p), AST (i/p,o/p) | 51.20 | 70.86 | 71.82 | 83.87 | 79.41 | 76.49 |
| DFG (i/p,o/p), AST (i/p,o/p), & structure-based DAE pretraining | 53.80 | 76.86 | 78.07 | 87.07 | 85.00 | 81.75 |
| *C#-Java translation* | | | | | | |
| No structure (baseline) | 40.20 | 53.20 | 54.56 | 75.40 | 64.20 | 61.84 |
| DFG (i/p) | 27.10 | 41.64 | 43.20 | 70.19 | 58.63 | 53.41 |
| DFG (o/p) | 43.10 | 56.64 | 57.90 | 77.24 | 66.52 | 64.57 |
| AST (i/p) | 45.90 | 59.25 | 60.30 | 79.12 | 68.31 | 66.74 |
| AST (o/p) | 49.50 | 63.70 | 64.79 | 81.84 | 72.89 | 70.80 |
| DFG (i/p,o/p), AST (i/p,o/p) | 51.20 | 66.12 | 66.99 | 83.79 | 74.30 | 72.80 |
| DFG (i/p,o/p), AST (i/p,o/p), & structure-based DAE pretraining | 55.10 | 73.53 | 74.41 | 87.30 | 83.80 | 79.76 |

## 4.3.2 Auxiliary Tasks

We measure the performance of StructCoder on the auxiliary tasks of APP (AST Paths Prediction) and DFP (Data Flow Prediction) as follows. When predicting the next target token, we use the ground truth for the target sequence until the previous step as input to the decoder. The decoder then predicts the next token as well as the DFG edges incident on this token and the types of the nodes on the path from the root to the leaf node containing this token in the AST. On Java-C# translation, StructCoder achieves 76.6% accuracy on the APP task and 74.8% average precision on the DFP task, where the positive class prevalence is just 0.8%. On C#-Java translation, StructCoder achieves 76.8% accuracy on the APP task and 33.6% average precision on the DFP task, where the positive class prevalence is just 0.5%. For both translation tasks, there are 291 classes for the node type in the APP task.
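One plausible way to compute the DFP average precision reported above is to treat every ordered token pair as a binary instance scored by $p^{dfg}_{ij}$. The helper below is an illustrative reconstruction, not the paper's evaluation script.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def dfp_average_precision(p_dfg, true_edges):
    """p_dfg: (T, T) array of predicted data-flow probabilities; true_edges:
    (T, T) 0/1 ground-truth DFG edges between target token positions."""
    return average_precision_score(np.ravel(true_edges), np.ravel(p_dfg))
```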
## 4.3.3 Case Study

Figure 3 shows an example from the Java-C# translation task with predictions from StructCoder and the best baseline, CodeT5. We observe that our structure-aware encoder-decoder architecture is able to correctly generate the target code where CodeT5 fails, which can be explained by inspecting the predictions of the two auxiliary tasks. Referring to Figure 3, CodeT5 generates both the 'for' loops with index 'i', leaving variable 'c' undefined. It also misses the first 'if' statement and creates a syntax error from unbalanced braces.

Figure 3: Case study: An example from the Java-C# translation task where StructCoder is able to accurately predict the target code while CodeT5 fails. (Red text indicates errors made by CodeT5 and blue text indicates correctly predicted code by StructCoder where the baseline generates errors. The blue arrows show some of the correctly predicted (probability ≥ 97th percentile) data flow edges relevant to the colored text.)

On the other hand, StructCoder correctly generates the for loops by defining variable 'c', and the model predicts (with probability greater than or equal to the 97th percentile) most of the DFG edges incident on the variable 'c' inside these for loops and also in the first 'if' statement. Also, for the token '[]' in args, the correct parent node type 'array rank specifier' is in the top two predicted node types. More examples are included in the Appendix.

## 4.3.4 Complexity Analysis

For the translation experiments, the input length to the encoder approximately doubles by adding the structure-related tokens. Since the time (for sequential computation) and memory complexity of self-attention is quadratic in the input length, the time and memory requirements of the self-attention module increase by about four-fold upon adding structure in our translation experiments. However, since self-attention is parallelizable (Vaswani et al., 2017), we do not observe a significant increase in inference time. We provide more details on inference times and discuss model sizes in the Appendix.

## 5 Conclusion

This work proposes a structure-aware Transformer encoder-decoder model called StructCoder for code generation. Our encoder modifies traditional input embeddings and employs a structure-aware self-attention mechanism to model AST and DFG relations in the source code, and the decoder is trained to recognize the target syntax and data flow using two novel auxiliary tasks to predict the node types on all root-leaf AST paths and the data flow edges of the target code. We also pretrained our model using a structure-based DAE task to improve its performance. Experiments on code translation and text-to-code generation tasks demonstrate the performance gains of StructCoder over state-of-the-art baselines. We believe that this work will encourage future research in this field to give careful consideration to code structure while building models for code generation.

## 6 Broader Impact

Although automated code generation can potentially benefit the development and migration of software, there are risks associated with it. First of all, the model is not capable of taking into consideration constraints like security, efficiency, and modularization when generating code. Thus, deploying model-generated code can introduce vulnerabilities in complex systems, and increase the energy cost and emissions.
Furthermore, the maintenance of the generated code can be challenging if it is less modularized. Second, the performance improvements of code generation models largely rely on scaling up both the model and the training, which requires a significant amount of computational resources. Individuals, small organizations, and academic institutes usually cannot afford the large-scale training of such models, while big companies have a natural advantage in this aspect. Therefore, the advances in this domain might benefit big businesses more than the general audience, which limits their societal value. Third, the deployment of automated code generation tools requires reforming the current skill sets for software engineering jobs. But once the users are well-educated about the usage and maintenance of such systems and the security risks associated with them, the software development process should become more efficient.

## References

Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Unified pre-training for program understanding and generation. In *Proceedings of NAACL-HLT 2021*, pp. 2655–2668, 2021. doi: 10.18653/v1/2021.naacl-main.211. URL https://doi.org/10.18653/v1/2021.naacl-main.211.

Uri Alon, Roy Sadaka, Omer Levy, and Eran Yahav. Structural language models of code. In *International Conference on Machine Learning*, pp. 245–256. PMLR, 2020.

Marc Brockschmidt, Miltiadis Allamanis, Alexander L. Gaunt, and Oleksandr Polozov. Generative code modeling with graphs. In *7th International Conference on Learning Representations, ICLR 2019*, 2019. URL https://openreview.net/forum?id=Bke4KsA5FX.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. Codebert: A pre-trained model for programming and natural languages. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 1536–1547, 2020. doi: 10.18653/v1/2020.findings-emnlp.139. URL https://doi.org/10.18653/v1/2020.findings-emnlp.139.

Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. Graphcodebert: Pre-training code representations with data flow. In *9th International Conference on Learning Representations, ICLR 2021*, 2021. URL https://openreview.net/forum?id=jLoC4ez43PZ.

Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with APPS. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021*, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c24cd76e1ce41366a4bbe8a49b02a028-Abstract-round2.html.

Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. *arXiv preprint arXiv:1909.09436*, 2019.

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Mapping language to code in programmatic context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp.
1643–1652, 2018. doi: 10.18653/v1/d18-1192. URL https://doi.org/10.18653/v1/d18-1192.

Seohyun Kim, Jinman Zhao, Yuchi Tian, and Satish Chandra. Code prediction by feeding trees to transformers. In *2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)*, pp. 150–162. IEEE, 2021.

Marie-Anne Lachaux, Baptiste Rozière, Marc Szafraniec, and Guillaume Lample. DOBF: A deobfuscation pre-training objective for programming languages. In *NeurIPS 2021*, pp. 14967–14979, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/7d6548bdc0082aacc950ed35e91fcccb-Abstract.html.

Jian Li, Yue Wang, Michael R. Lyu, and Irwin King. Code completion with neural attention and pointer networks. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018*, pp. 4159–4165, 2018. doi: 10.24963/ijcai.2018/578. URL https://doi.org/10.24963/ijcai.2018/578.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*, 2019.

Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. Codexglue: A machine learning benchmark dataset for code understanding and generation. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021*, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/c16a5320fa475530d9583c34fd356ef5-Abstract-round1.html.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics*, pp. 311–318, 2002.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In *NeurIPS 2019*, pp. 8024–8035, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html.

Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, and Yanfang Ye. Cotext: Multi-task learning with code-text transformer. *arXiv preprint arXiv:2105.08645*, 2021.

Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017*, pp. 1139–1149, 2017. doi: 10.18653/v1/P17-1105. URL https://doi.org/10.18653/v1/P17-1105.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–67, 2020.
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, and Shuai Ma. Codebleu: a method for automatic evaluation of code synthesis. *arXiv preprint arXiv:2009.10297*, 2020.

Baptiste Rozière, Marie-Anne Lachaux, Lowik Chanussot, and Guillaume Lample. Unsupervised translation of programming languages. In *NeurIPS 2020*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ed23fbf18c2cd35f8c7f8de44f85c08d-Abstract.html.

Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, and Lu Zhang. Treegen: A tree-based transformer architecture for code generation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 8984–8991, 2020.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, pp. 5998–6008, 2017.

Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 8696–8708, 2021.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pp. 38–45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in Neural Information Processing Systems*, 32, 2019.

Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017*, pp. 440–450, 2017. doi: 10.18653/v1/P17-1041. URL https://doi.org/10.18653/v1/P17-1041.

Ming Zhu, Karthik Suresh, and Chandan K Reddy. Multilingual code snippets training for program translation. 2022.

Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, and Stephan Günnemann. Language-agnostic representation learning of source code from structure and context. In *9th International Conference on Learning Representations, ICLR 2021*, 2021. URL https://openreview.net/forum?id=Xh5eMZVONGF.

## A StructCoder vs. GraphCodeBERT

It should be noted that there are several significant differences between StructCoder and other prominent methods in the literature like GraphCodeBERT. The only similarity between StructCoder and GraphCodeBERT with respect to modeling structure is the approach for encoding the source DFG in the encoder. The differences are listed below:

- While the pretext tasks of GraphCodeBERT include (a) alignment prediction between source code tokens and source DFG nodes and (b) edge prediction between source DFG nodes, StructCoder includes a (target) data flow prediction task which directly predicts the data flow relations between (target) code tokens generated by the decoder.
- Our approach to modeling the AST in the encoder differs from that of the DFG as follows: (a) Instead of adding a separate input token per AST node, we only add an input token per AST leaf, which constrains the input sequence length to an extent. (b) While the tokens of DFG nodes use a default node embedding, the AST leaves are embedded using the root-leaf paths in the tree. (c) While node-node attention between DFG nodes is based on DFG edges, the leaf-leaf attention between AST leaves is based on their positional similarity/proximity in the tree.
- GraphCodeBERT is an encoder-only model which requires a non-pretrained decoder for code generation. In contrast, StructCoder jointly pretrains the encoder and decoder using a novel structure-based DAE objective.

## B Model Analysis

## B.1 Model Size

Table A1 shows the number of parameters in different pretrained models for code. Note that StructCoder is built by adding additional components to CodeT5 for modeling the AST and DFG in the input and output, with the majority of the additional parameters coming from the encoder's AST leaves embedding module (381K) and the classification layer of the APP (AST Paths Prediction) task (743K).

Table A1: Number of parameters in various pretrained models.

| Pretrained model | # parameters |
|---|---|
| CodeBERT | 125M |
| GraphCodeBERT | 125M |
| CodeGPT-small-java | 126M |
| PLBART | 139M |
| CodeT5 | 223M |
| CoTexT | 223M |
| StructCoder | 224M |

Figure A1: Scatter plots of CodeBLEU vs inference time for the models used in the ablation study in the main paper. For measuring inference time, we generated predictions for 50 samples using a batch size of 1 on the CPU, since using a 48GB GPU did not give any noticeable differences among the models' inference times.

## B.2 Inference Time

To analyze the impact of adding the proposed structure-based components on the overall computational complexity experimentally, we measured the inference times for the models used in the ablation study for the two translation tasks. We report the results by running inference on the CPU for 50 samples using a batch size of 1. For the 50 examples used here, the average input sequence length is 50.06 (59.48) and 100.10 (118.50) for the baseline and the fully structure-aware model respectively, for Java-C# (C#-Java) translation. Figure A1 shows scatter plots of CodeBLEU vs average inference time per sample. Note that, ignoring size and pretraining, "baseline" is equivalent to CodeT5 and enabling "DFG(i/p, o/p), AST(i/p, o/p)" is equivalent to StructCoder. During inference, since we do not perform the auxiliary tasks, enabling AST or DFG structures in the decoder should not impact inference time. Figure A1 also shows that adding the DFG to the encoder leads to a negligible difference, while adding the AST to the encoder increases inference time per sample by approximately only 5% on the two translation tasks, even though the average input length to the encoder increases by 80-85%. To support this finding, we ran beam search decoding on the original T5 model for similar input lengths, where we found a similar trend. The increase in inference time being much less than expected may be due to implementation-specific details of pytorch/huggingface, which are out of scope for this work.
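A timing harness in the spirit of this setup (batch size 1 on the CPU, averaged over the samples) might look as follows. It assumes a HuggingFace-style seq2seq model and tokenizer, and the generation arguments are illustrative.

```python
import time
import torch

def mean_inference_time(model, tokenizer, sources, beam_size=5, max_len=320):
    """Average per-sample generation time on the CPU with batch size 1."""
    model.to("cpu").eval()
    total = 0.0
    with torch.no_grad():
        for src in sources:
            inputs = tokenizer(src, return_tensors="pt")
            start = time.perf_counter()
            model.generate(**inputs, num_beams=beam_size, max_length=max_len)
            total += time.perf_counter() - start
    return total / len(sources)
```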
## C Implementation Details

We use the CodeT5 tokenizer with a vocabulary size of 32,100. As we build upon the CodeT5 architecture, both the encoder and decoder of StructCoder contain 12 T5 blocks with hidden dimension 768, and 12 attention heads in each block. During implementation, we only used the first 16 dimensions of the last hidden representation from the decoder to predict the DFG links and the next 128 dimensions for AST paths prediction. This is done because the model learns the DFP task more easily than the APP task, and using few dimensions for these auxiliary tasks prevents overfitting on them. If the weights of the auxiliary task losses in the overall loss function are set to be trainable, we initialize parameters $a_1, a_2 \in \mathbb{R}$, and compute $\alpha_1 = \sigma(a_1)$, $\alpha_2 = \sigma(a_2)$ to constrain the loss weights $\alpha_1, \alpha_2$ to lie in $[0, 1]$. All experiments used the AdamW optimizer.

## Pretraining

We pretrain StructCoder on a structure-based DAE task along with NL-PL bimodal dual generation to generate code from text and vice-versa. For the denoising task, we first corrupt random spans in a code sequence by replacing them with ⟨MASK⟩ or a random token, or deleting them. The span lengths are sampled from a Poisson distribution of mean 3.5, and we corrupt 35% of the code tokens in total, similar to Ahmad et al. (2021). To improve the understanding of code structure, we also randomly drop 35% of the DFG variables and AST leaves, and 35% of the ancestors for each leaf, from the input to StructCoder. The model is then trained to predict the uncorrupted code along with the AST root-leaf paths and data flow edges. We pretrain StructCoder on a randomly chosen subset of 300K samples from CodeSearchNet³ (Husain et al., 2019) belonging to four popular languages: Python, PHP, JavaScript, and Java. We initialized the non-structure-based weights with the CodeT5 pretrained model for faster learning, and ran our pretraining with a batch size of 32 for 12K steps, with a learning rate of 5e-5.

³ https://github.com/github/CodeSearchNet
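The span-corruption step of the structure-based DAE task could be sketched as below. The control flow and sampling details are illustrative assumptions around the stated settings (Poisson span lengths with mean 3.5, roughly 35% of tokens corrupted, and masking/replacing/deleting as the three corruption actions); dropping 35% of the DFG variables, AST leaves, and per-leaf ancestors would be handled analogously on the structure inputs.

```python
import random
import numpy as np

def corrupt_spans(tokens, vocab, mask_token="<MASK>",
                  corrupt_frac=0.35, mean_span=3.5):
    """Corrupt ~corrupt_frac of `tokens` in Poisson-length spans by masking,
    replacing with random tokens, or deleting. Illustrative sketch only."""
    budget = int(corrupt_frac * len(tokens))
    out, i = [], 0
    while i < len(tokens):
        if budget > 0 and random.random() < corrupt_frac:
            span = min(max(1, np.random.poisson(mean_span)),
                       budget, len(tokens) - i)
            action = random.choice(["mask", "replace", "delete"])
            if action == "mask":
                out.append(mask_token)            # whole span -> one mask
            elif action == "replace":
                out.extend(random.choice(vocab) for _ in range(span))
            # "delete": emit nothing for the span
            i, budget = i + span, budget - span
        else:
            out.append(tokens[i])
            i += 1
    return out
```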
## Finetuning

For code translation, we ran finetuning with a batch size of 25 for 22K steps. For text-to-code generation using the CONCODE dataset, we ran finetuning with a batch size of 32 for 100K steps. To finetune on the APPS dataset, we used a batch size of 16 for 15K steps. The learning rate was set to 5e-5 for the tasks in CodeXGLUE and 5e-4 for the APPS benchmark. We set $a_1, a_2$ to be trainable and clipped them to have a maximum value of -4 during finetuning, except for APPS, where we fixed $a_1 = a_2 = -2$. For new AST node types seen during finetuning, we initialized the weights corresponding to these new node types randomly. We used beam search with a beam size of 10 for decoding in all finetuning tasks except for the APPS dataset, where the beam size was set to 5. We ran validation every 500 steps and chose the checkpoint with the best BLEU score on the validation set for testing. For APPS, which has no validation set, the checkpoint at the end of training was used for inference. Since CodeT5 does not have published results on the APPS dataset, we finetuned it using the same hyperparameters used by our model. For the ablation study, we reported most of the hyperparameters used in the main text. Among the remaining ones, the learning rate was set to 0.001 except for the pretraining stage, which used a learning rate of 5e-4, and the beam size was set to 5. The loss weights of the auxiliary tasks were fixed via $a_1 = a_2 = -1$ for C#-Java translation, where both auxiliary tasks are used, and we fixed $a_1 = a_2 = -2$ in all other cases. For C#-Java translation, we experimented with values of $a_1 = a_2 = -1$ and $a_1 = a_2 = -2$.

## Sequence Lengths

To facilitate minibatch training with the available resources, we set the maximum number of DFG variables in the input to 75, the maximum number of AST leaves to 250, and the maximum root-leaf path length to 12 (by trimming paths from the root's side). We set the maximum source length (number of code tokens when the input is code) to 400 for pretraining, 320 for translation, and 325 and 1024 for text-to-code generation on CONCODE and APPS, respectively. We set the maximum target length to 400 for pretraining, 320 for Java-C# translation, 256 for C#-Java translation, and 155 and 550 for text-to-code generation on CONCODE and APPS, respectively. We used the same sequence lengths as StructCoder for finetuning CodeT5 on APPS. The results of CodeT5 on the CodeXGLUE tasks were borrowed from Wang et al. (2021), where the maximum source and target lengths were set to 512 and 256, respectively. On the code translation tasks, GraphCodeBERT (Guo et al., 2021) sets the maximum source and target lengths to 256 and the maximum number of DFG variables to 64.

## Other Details

All the hyperparameters discussed above were set either based on CodeT5's implementation, or in rare cases, by observing the progression in validation performance for a few steps, or by choosing the ones with the best validation performance after a few trials. We used PyTorch (Paszke et al., 2019) and Huggingface⁴ (Wolf et al., 2020) libraries to implement our model. The code for generating ASTs and DFGs is built using tree-sitter⁵ and is also adapted from https://github.com/microsoft/CodeBERT/tree/master/GraphCodeBERT. The random generators were seeded for each experiment in the 'set_seed()' function. We ran our experiments on an Ubuntu 18.04 server with 4 RTX 8000 GPUs with 48GB memory on each GPU. The code is included in the supplementary material and will be made public after publishing this work.

⁴ https://huggingface.co/transformers/
⁵ https://github.com/tree-sitter/py-tree-sitter

## D Examples

In this section, we illustrate a few examples of text-to-code generation along with the predicted DFG links and AST paths (see Figures A2-A4). The DFG predictions are visualized as a matrix where the $ij$th cell denotes the probability of data flow from the $j$th to the $i$th token. To visualize the predicted AST paths, for each predicted token, we indicate the predicted node types on the path starting from the root (top) to the leaf (bottom) containing this token, vertically using colored discs.

Figure A2: An example from the CONCODE dataset with BLEU=78.85.

Figure A3: An example from the CONCODE dataset with BLEU=100.

Figure A4: An example from the CONCODE dataset with BLEU=87.25.
Review 1: Summary: StructCoder is an encoder-decoder Transformer model for code. The main construct is to use code structure (ASTs and DFGs) in both the encoder and decoder modules. The model is trained with structure-aware encoder attention and denoising auto-encoding with AST path prediction and data flow prediction. It is compared against baseline methods on one code translation and two text-to-code generation datasets. Strengths and Weaknesses: Strengths ---- * Though structure information has been used on the encoder side in the literature before, its use to train a decoder (in an encoder-decoder setup) is novel. * The paper is easy to follow. * The paper performs evaluation on existing datasets and baselines. Weaknesses ----- * The paper proposes novel modeling constructs to take advantage of structure of code. However, the experiments do not show substantial gains for the added complexity of training over CodeT5 even though StructCoder is initialized from CodeT5. * On the CONCODE dataset, StructCoder reports EM of 22.35 which is slightly better than CodeT5 but is slightly below UniXCoder (22.60). UniXCoder is not included as a baseline. * The encoder input consists of AST and DFG in addition to the code tokens. This increases the input size by ~2x and cost of self-attention by ~4x. Though the authors report (Section 4.3.4) that they haven't observed significant increase in inference time in their experiments, the sequence lengths considered by them are relatively small (Appendix C). The increase for longer sequence lengths > 2K can be substantial. * Many generative models support code-to-text tasks which StructCoder does not support. * Some of the technical and experimental details are not explained in the paper or the appendix. I request the authors to refer to the requested changes section for my questions. Requested Changes: * Section 3.1: The same variable can occur in multiple places. How is this handled? When you say $v_i$ is obtained from $v_j$, do you mean that this is a direct dependence or transitive? I believe you are using the same strategy as GraphCodeBERT here, but it is important to clarify this in the paper. * Section 3.3.2: In a code snippet, you can have DFG edges going forward. As you are doing auto-regressive decoding, how do you handle this? Do you not need masking in Eq. (13)? * You have explained what sequence lengths you have used for StructCoder in the appendix. Can you state what sequence lengths are used for the baselines for the respective tasks? * StructCoder is finetuned for each of the tasks (Appendix C). Are baselines also finetuned? Broader Impact Concerns: None ================================================== Review 2: Summary: The main contribution of the paper is the additional attention layers in both encoder and decoder based on properties of data flow and AST structures on top of the standard self-attention mechanism that powers Transformer models. Strengths and Weaknesses: ## Pros - Novel attention mechanism based on AST and dataflow. - Extensive evaluation. ## Cons - **overblown claims**: starting from *Code translation has applications in ...* all the way to *in speeding up the software development process*. 
Those points are inaccurate: not only did the authors provide no evidence in their evaluation that StructCoder is even remotely close to being helpful in a real software development setting, but this is also contrary to the understanding of many (if not most) people in the software engineering community that massive language models, as they stand, do not have the capability of writing real code in a professional development setting. - **issues with the design of StructCoder**: (1) the input of the model seems to contain redundancy: why are the token sequence s1..sn and the terminal nodes of the AST l1...lm used at the same time? Considering that most tokens are encoded in terminal nodes, and for the few that are not, you can simply augment the standard AST to include those, why not just use the terminal nodes of the augmented AST? (2) In section 3.2.1, *we utilize node-type... to encode a node's semantics*, this is not true: given a method call like `a.foo()`, the AST node type does not at all encode the semantics of the method. - **limitation of the approach**: the additional attention layers in the encoder introduced in the paper are obviously not applicable to text-to-code generation tasks. Also, the paper already considers data flow properties to encode the code semantics, so it seems rather obvious to explore control dependency properties as well. - **lack of details**: in equation (3), what does the symbol $\mathbb{1}$ mean? In section 3.3.1, when you predict the AST path as part of the learning task of the decoder, how do you predict a sequence of node types? The paper says *we use a linear layer...* but this seems to give the prediction of only one node type rather than all in the path, so how is this done exactly: the common self-attention mechanism, or something RNN-like? - **presentation issues** - in the introduction, *tradition code generation tools have ....using hand-crafted rules....* Citation please? - in 3.1, *Dij = 1 if and only if value vi is obtained from vj*: do you count direct dependency only, or the Kleene star as well? - in 3.1, *the translated code T solves exactly the same problem as...* is too informal! At the very least, the term **semantic equivalence** should be used - in 3.2.1, *use a defaut embedding for all DFG variables*: what does *default* mean? A fixed embedding that will not be learned? If so, why is that? Requested Changes: Addressing every single point in **Cons** described above Broader Impact Concerns: The paper has addressed the concerns on the ethical implications of the work sufficiently. ==================================================
Strengths and Weaknesses: ### Strengths * The motivation is clear, and the idea of encoding syntax and data flow is appealing. * The results improve over the baselines in two tasks. * In the APPS dataset, the proposed StructCoder is competitive with a larger GPT-2 1.5B model. ### Weaknesses * **The computational cost is unclear** - there are several sources of additional cost that the authors did not mention: * **The >2x longer input, which increases compute quadratically** - the authors only mentioned that they "did not observe a significant increase in inference time" - but even at inference time, this must have decreased the possible batch size by ~4x! How did this affect training time? By how much did the authors have to decrease the batch size, and how much slower was the training? * **The encoding of the AST inputs** - What is the overhead of encoding the root-to-leaf paths? * **Data Flow Prediction** - What is the overhead of predicting edges? What is the number of possible edges? * **AST Path Prediction** - What is the memory and time overhead of predicting all nodes in all paths? * **Soundness of Evaluation** - I am not sure about the soundness of the evaluation. Concretely, I am not sure that all models and baselines were given the same chances. * The proposed StructCoder was pretrained on more data and for more time than the standard CodeT5. * The proposed StructCoder was pretrained on the Denoising Auto-Encoding (DAE) task, which can be applied to the base model as well. * **Ablation Study** - The ablation study is thorough, but misses the following ablations: what if the model was pretrained+trained with only-AST in the encoder and decoder? What if the model was pretrained+trained with only-DFG in the encoder and decoder? * **Additional Possible Baselines**: * Codex and its smaller versions through the OpenAI API, even though it is larger, just to understand the upper bound. * CodeGen and its smaller versions ( https://arxiv.org/pdf/2203.13474.pdf ) * PolyCoder ( https://arxiv.org/pdf/2202.13169.pdf ) Requested Changes: ## Major changes * Costs: Please detail the training and inference costs compared to the base CodeT5, in terms of memory, time, batch size, and throughput. * Please elaborate on whether the evaluation is fair. Specifically, it is crucial to pretrain the base CodeT5 for (at least) the same time and on the same training data as the proposed StructCoder. * Scalability - the model (0.2B) was shown to be competitive with a GPT-2 1.5B. Can it be applied to the GPT-2 1.5B and perform even better? * Additional ablation study of "only-AST" and "only-DFG" (in both encoder and decoder). * Additional baselines, if possible. ## Presentation Comments and questions: * The first paragraph in the Introduction contains many claims that are understandable, but still require citations to justify. * The "AST Path Prediction" is also referred to as "APP", which is confusing because the paper also mentions the APPS dataset. * Parsing - how did the authors deal with examples that do not syntactically parse in the Java-C# dataset? * AST Path Prediction - are all nodes in a single path predicted at the same time, independently of each other?
* Figure 1 is unreadable, some text is too tiny Broader Impact Concerns: - ================================================== Metareview: Recommendation: Reject Comment: While a number of concerns were addressed by the author response and post-review discussions amongst reviewers and AE (e.g., whether extra data was needed for the additional training or whether it came from the same pretraining set), there were still significant concerns remaining (see above). ==================================================
# MDGCL: Debiased Graph Contrastive Learning With Knowledge Of Model Discrepancy

Anonymous authors Paper under double-blind review

## Abstract

Graph contrastive learning (GCL) has shown promising results for self-supervised representation learning on graph-structured data, benefiting various downstream tasks such as node classification and graph classification. Despite their outstanding performance, a prevalent issue in most existing GCL methods is the arbitrary selection of other data points as negative samples, even when they share the same ground truth label as the anchor. The inclusion of such false negative samples could degrade the performance of GCL. In this study, we present a dual-branch ensembling learning framework, which provides model discrepancy as a crucial indicator to more effectively differentiate false negatives from true negatives. Building on this, we develop a debiased contrastive learning objective. This objective focuses on pulling false negatives closer to the anchor in the embedding space, while simultaneously retaining the capacity to repel true negatives away from the anchor. Extensive experiments on real-world datasets demonstrate the effectiveness of our framework.

## 1 Introduction

Graph neural networks (GNNs) (Kipf & Welling, 2016a; Veličković et al., 2017) have significantly advanced graph representation learning, facilitating tasks such as node classification and graph classification. However, their reliance on supervised labels may limit their generalization capabilities (Rong et al., 2019). To address this limitation and achieve more generalizable and transferable representations, self-supervised learning (SSL) has emerged in the field of GNNs. SSL enables GNNs to learn from unlabeled graph data (You et al., 2020; Jin et al., 2020) by training on pretext tasks. Among various SSL techniques, graph contrastive learning (GCL) has gained significant attention due to its impressive empirical performance (Veličković et al., 2019; Hassani & Khasahmadi, 2020b; You et al., 2021; Suresh et al., 2021; Zhu et al., 2020; Thakoor et al., 2021b). Most existing GCL methods adopt an augmentation strategy, which treats augmented versions of the same data as positive samples, and other instances in the same batch as negative samples. Various contrastive objects are studied in the context of graphs, such as node-node (Zhu et al., 2020; Peng et al., 2020), node-(sub)graph (Veličković et al., 2019; Hassani & Khasahmadi, 2020b), and graph-graph (Bielak et al., 2022; Thakoor et al., 2021b; Suresh et al., 2021) contrastive pairs. GCL then aims to maximize the representation similarity between positive pairs and minimize the representation similarity between negative pairs.

Despite their great performance, existing GCL methods are at risk of noisy views. Due to the absence of labels, most existing GCL methods randomly select other nodes as negative samples, which raises the risk of introducing noisy views, a situation known as sampling bias (Chuang et al., 2020). The illustration in Figure 1 depicts the strategy employed by GRACE (Zhu et al., 2020). For a given anchor node vi, it designates other nodes as negative samples, which could inadvertently treat nodes of the same class as vi as negative pairs. The prevalent presence of false negatives significantly hampers the performance of augmentation-based GCL (Xia et al., 2021). Therefore, it is critical to design a denoising framework that can effectively address the widespread issue of sampling bias.
Developing such a framework without label information is challenging. Several initial efforts (Zhao et al., 2021; Zhang et al., 2022a; Xuan et al., 2021; Xia et al., 2021) have been made to alleviate the effects of

![1_image_0.png](1_image_0.png)

Figure 1: Schematic diagram of node-level GCL framework and illustration of false negative samples in GCL.

false negative views. For example, Zhao et al. (2021) adopt clustering results as pseudo labels. However, the quality of pseudo labels cannot be guaranteed, which might introduce additional noisy signals. Zhang et al. (2022a) generate hard negative samples that are similar to the positive samples but belong to different classes, assuming that by training a model to distinguish between positive samples and hard negatives, the model learns to focus on more subtle and discriminative features. However, the entanglement between hard negative and positive samples can hinder the performance improvement (Xuan et al., 2021). ProGCL (Xia et al., 2021) shows that most negatives with larger similarities to the anchor are false negatives. It fits the distributions of false and true negative samples with a Beta Mixture Model (BMM) and assigns a weight to each negative sample based on the probability that the sample is a false negative. However, in regions where there is significant overlap between true negative and false negative samples, it may encounter challenges in determining the appropriate probability that a given sample belongs to each distribution.

As significant overlaps between false negatives and true negatives exist in certain regions, we contend that depending solely on the similarity to the anchor to identify a negative sample as false is inadequate. Additionally, current research often overlooks the fact that even with relatively accurate estimations, assigning lower weights to false negatives remains an imperfect solution. In pursuit of high-quality representations for downstream tasks, our goal should be to bring false negative samples closer to the anchor rather than to push them away. However, work along these lines is rather limited. Fortunately, our empirical results in Figure 2 and Figure 3 show that (1) two models trained on different augmented graphs exhibit substantial discrepancy in the similarity of negative samples to the anchor in regions where false and true negative samples significantly overlap; and (2) these two models display a more pronounced discrepancy on true negative samples compared to false negative samples. The details of the preliminary experimental analysis are given in Section 3.2. Based on these findings, it appears promising to utilize model discrepancy as a significant indicator to improve our capacity to differentiate between false negatives and true negatives in overlapping areas. Additionally, it paves the way for the formulation of a debiased contrastive objective function which can draw false negatives with the same ground-truth label closer to the anchor. Therefore, in this paper, we explore a novel problem: estimating the likelihood that a negative sample is false, considering factors beyond mere similarity to the anchor, and drawing those false negative samples closer to the anchor.
Specifically, we confront two primary challenges: (i) how to integrate additional factors with the commonly used similarity metric to accurately estimate the distributions of unlabeled negative samples; and (ii) how to develop a learning strategy that effectively draws false negative samples closer to the anchor while retaining the capacity to push true negatives away from the anchor. To tackle these challenges, we introduce a novel framework, MDGCL, that trains two GNN encoders on distinct augmented graphs. This framework estimates the distribution of negative samples by jointly considering model discrepancy and similarity. With this relatively precise estimation, we selectively sample false negative samples and devise a debiased contrastive learning objective function to maximize the similarity of these samples to the anchor while minimizing the similarity of true negative samples to the anchor. Our main contributions are:

- We propose a novel framework that can better differentiate between false negatives and true negatives by incorporating insights from model discrepancies.
- We design a new debiased contrastive learning strategy aimed at mitigating sampling bias by drawing false negative samples closer to the anchor.
- Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed framework.

## 2 Related Work

## 2.1 Graph Contrastive Learning

Graph contrastive learning (Veličković et al., 2019; Hassani & Khasahmadi, 2020b; You et al., 2021; Suresh et al., 2021; Zhu et al., 2020; Thakoor et al., 2021b) has shown great performance for self-supervised representation learning on graphs. Generally, augmentation-based GCL first generates two views of a data sample and prepares positive and negative pairs for each anchor node. It then aims to learn node representations by pulling positive pairs together and pushing negative pairs far apart. Various methods, including but not limited to DGI (Veličković et al., 2019), HDI (Jing et al., 2021), GMI (Peng et al., 2020), and InfoGCL (Xu et al., 2021), employ this principle by directly quantifying the mutual information (MI) shared across different views. MVGRL (Hassani & Khasahmadi, 2020a) takes this a step further by maximizing the information shared between the cross-view representations of nodes and graphs. Several methodologies such as GRACE (Zhu et al., 2020), GCA (Zhu et al., 2021), ProGCL (Xia et al., 2021), ARIEL (Feng et al., 2022), and gCooL (Li et al., 2022) have successfully implemented the SimCLR framework for learning at the node level. At the graph level, the SimCLR framework has been effectively utilized by GraphCL (You et al., 2021) and SimGRACE (Xia et al., 2022). Furthermore, innovative CL frameworks that relieve the dependency on negative samples or data augmentations have been adopted by BGRL (Thakoor et al., 2021a), AFGRL (Lee et al., 2021), and CCA-SSG (Zhang et al., 2021).

## 2.2 Debiased Contrastive Learning

Despite the great performance of GCL methods, most of them suffer from the problem of sampling bias (i.e., randomly assigning other samples as negative samples). Therefore, several efforts (Xia et al., 2021; Liu et al., 2022; Zhao et al., 2021; Chu et al., 2021; Li et al., 2023; Zhu et al., 2022) have been made to alleviate the false negative sample issue. For example, DGCL (Zhao et al., 2021) incorporates clustering pseudo labels to address the issue of false negatives.
CuCo (Chu et al., 2021) orders the negatives from least to most difficult based on similarity in graph-level contrastive learning and introduces a system to automatically select and train negative samples through a curriculum learning framework. ProGCL (Xia et al., 2021) extends GRACE (Zhu et al., 2020) by leveraging hard negative samples via Expectation Maximization to fit the observed node-level similarity distribution. However, the significant overlap between false negatives and true negatives impedes the potential for performance improvement. SpCo (Liu et al., 2022) amplifies the high-frequency components of the augmented graph while retaining its inherent low-frequency structure. HomoGCL (Li et al., 2023) enriches the positive set by incorporating neighboring nodes based on the homophily assumption.

Our work is inherently different from existing work: (i) we address the novel challenge of effectively distinguishing false negatives from true negatives in GCL; and (ii) we aim to bring false negative samples with the same ground-truth label closer to the anchor in the embedding space, a critical aspect overlooked by previous works.

## 3 Preliminary

## 3.1 Notation And Background

We use $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{X})$ to denote an attributed graph, where $\mathcal{V} = \{v_1, \ldots, v_N\}$ denotes the set of $N$ nodes, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges, and $\mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$ is the set of node attributes with $\mathbf{x}_i$ being the node attribute of node $v_i$. $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix of the graph $\mathcal{G}$, where $\mathbf{A}_{ij} = 1$ if nodes $v_i$ and $v_j$ are connected; otherwise $\mathbf{A}_{ij} = 0$. Thus, $\mathcal{G}$ can also be denoted as $\mathcal{G} = (\mathbf{X}, \mathbf{A})$. We use $\mathcal{T}$ to denote a random augmentation function, such as randomly dropping edges and masking features.

GCL has become a popular approach for self-supervised representation learning. Generally, it follows the "augmenting-contrasting" learning pattern, where the similarity between two augmentations of the same sample (positive pair) is maximized, while the similarities between two augmentations of different samples (negative pairs) are minimized. The learned node embeddings can be applied to downstream tasks like node classification and node clustering. Take the popular GCL method GRACE (Zhu et al., 2020) as an example: two augmentation functions $T_1 \sim \mathcal{T}$ and $T_2 \sim \mathcal{T}$ are first applied to the graph $\mathcal{G}$ to generate two graph views $\mathcal{G}^{(1)} = (\mathbf{X}^{(1)}, \mathbf{A}^{(1)}) = T_1(\mathbf{X}, \mathbf{A})$ and $\mathcal{G}^{(2)} = (\mathbf{X}^{(2)}, \mathbf{A}^{(2)}) = T_2(\mathbf{X}, \mathbf{A})$. GRACE then applies a GNN encoder $f_\theta$ to obtain a node embedding for each node in both views as

$$\mathbf{H}_f^{(1)} = [f_1^{(1)}, \ldots, f_N^{(1)}] = f(\mathcal{G}^{(1)}), \quad \mathbf{H}_f^{(2)} = [f_1^{(2)}, \ldots, f_N^{(2)}] = f(\mathcal{G}^{(2)}). \tag{1}$$

For a node $v_i$, its embedding in one view, $f_i^{(1)}$, is regarded as the anchor. The embedding $f_i^{(2)}$ in the other view is the positive sample, and the embeddings of other nodes in both views are negatives.
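To make this two-view pipeline concrete, the sketch below (ours, not the authors' released code) implements a random augmentation $T$ and a minimal dense two-layer GCN encoder in PyTorch; the dense adjacency representation, layer sizes, and drop probabilities are illustrative assumptions.

```python
import torch

def augment(X, A, p_edge=0.2, p_feat=0.1):
    """One draw from T: randomly drop edges and mask feature dimensions."""
    edge_mask = (torch.rand_like(A) > p_edge).float()
    A_aug = A * edge_mask * edge_mask.T               # symmetric edge dropping
    feat_mask = (torch.rand(X.size(1)) > p_feat).float()
    return X * feat_mask, A_aug

class GCNEncoder(torch.nn.Module):
    """Minimal dense two-layer GCN playing the role of the encoder f."""
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.W1 = torch.nn.Linear(d_in, d_hid)
        self.W2 = torch.nn.Linear(d_hid, d_out)

    @staticmethod
    def norm_adj(A):
        A_hat = A + torch.eye(A.size(0))              # add self-loops
        d_inv_sqrt = A_hat.sum(1).pow(-0.5)
        return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

    def forward(self, X, A):
        A_norm = self.norm_adj(A)
        H = torch.relu(A_norm @ self.W1(X))
        return A_norm @ self.W2(H)                    # rows are f_1, ..., f_N

# Equation 1: encode two augmentations of the same graph with the same network.
N, d = 8, 16
X = torch.randn(N, d)
A = (torch.rand(N, N) < 0.3).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
f = GCNEncoder(d, 32, 32)
X1, A1 = augment(X, A)
X2, A2 = augment(X, A)
H1, H2 = f(X1, A1), f(X2, A2)  # H_f^(1), H_f^(2); row i of H2 is the positive for row i of H1
```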
GRACE aims to maximize the Mutual Information (MI) between learned representations of positive pairs while minimizing the MI between learned representations of negative pairs by optimizing the following loss function for each anchor:

$$\mathcal{L}(v_i, f) = -\log \frac{e^{S(f_i^{(1)}, f_i^{(2)})/\tau}}{\underbrace{e^{S(f_i^{(1)}, f_i^{(2)})/\tau}}_{\text{positive pair}} + \underbrace{\sum_{k \neq i} e^{S(f_i^{(1)}, f_k^{(1)})/\tau}}_{\text{intra-view negative pairs}} + \underbrace{\sum_{k \neq i} e^{S(f_i^{(1)}, f_k^{(2)})/\tau}}_{\text{inter-view negative pairs}}}, \tag{2}$$

where $S(\cdot, \cdot)$ is the cosine similarity and $\tau$ is a temperature parameter.

## 3.2 Preliminary Analysis Of GCL Model Discrepancy

To effectively differentiate false negatives, we conduct experiments focusing on the model discrepancy in the representation similarity between false and true negatives relative to anchor points. These experiments lead us to identify model discrepancy as a valuable indicator, paving the way for the development of a debiased graph contrastive learning framework.

Specifically, we first pretrain two 2-layer GNN encoders using GRACE (Zhu et al., 2020) and obtain $f$ and $g$, parameterized by $\Theta_1$ and $\Theta_2$. Given $\mathcal{G}$, we obtain two augmented graphs $\mathcal{G}^{(1)} = T_1(\mathbf{X}, \mathbf{A})$ and $\mathcal{G}^{(2)} = T_2(\mathbf{X}, \mathbf{A})$. Then we apply $f$ on $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$ to infer node representations $\mathbf{H}_f^{(1)}$ and $\mathbf{H}_f^{(2)}$. Similarly, the other pretrained GNN encoder $g$ is used to infer node representations $\mathbf{H}_g^{(2)}$ on $\mathcal{G}^{(2)}$.

To measure the difference in similarity between identical pairs of an anchor and its negative sample across $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$ obtained by network $f$, given an anchor node $v_i$ and its negative sample $v_k$, we calculate the discrepancy as

$$\mathrm{abs}\left(S(f_i^{(1)}, f_k^{(1)}) - S(f_i^{(2)}, f_k^{(2)})\right). \tag{3}$$

The discrepancy distribution of $f$ for false negatives and true negatives is shown in Figure 2 (a). To measure the difference in similarity between the anchor $v_i$ and its negative sample $v_k$ across encoders $f$ and $g$, we measure the discrepancy as

$$\mathrm{abs}\left(S(f_i^{(1)}, f_k^{(1)}) - S(g_i^{(2)}, g_k^{(2)})\right). \tag{4}$$
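Both discrepancy measures reduce to a difference of cosine-similarity matrices. The following is a minimal sketch, under our own assumption that the full dense embedding matrices fit in memory; histogramming the entries, split by true and false negatives, reproduces the quantities behind Figure 2.

```python
import torch
import torch.nn.functional as F

def pairwise_cos(H):
    """All-pairs cosine similarity between the rows of an embedding matrix."""
    Hn = F.normalize(H, dim=1)
    return Hn @ Hn.T

def view_discrepancy(H1_f, H2_f):
    """Equation 3: same encoder f, identical anchor-negative pairs across the two views."""
    return (pairwise_cos(H1_f) - pairwise_cos(H2_f)).abs()

def model_discrepancy(H1_f, H2_g):
    """Equation 4: encoder f on view G^(1) versus encoder g on the shared view G^(2)."""
    return (pairwise_cos(H1_f) - pairwise_cos(H2_g)).abs()

# Entry (i, k) of either matrix is the discrepancy for anchor v_i and negative v_k.
```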
![4_image_0.png](4_image_0.png)

Figure 2: Histograms of the discrepancy in similarity (cosine similarity between normalized embeddings of the anchor and negatives) on the Cora dataset, given two augmented views $\mathcal{G}^{(1)}, \mathcal{G}^{(2)}$ of the same graph and GNN networks $f, g$. The discrepancy in (a) is calculated by measuring the difference in similarity between identical pairs of an anchor and its negative sample across $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$. The discrepancy in (b) is calculated by measuring the difference in similarity between the anchor and its negative sample across networks $f$ and $g$.

![4_image_1.png](4_image_1.png)

Figure 3: (a) Similarity distributions of false negatives, true negatives, and those samples with discrepancy > 0.1 on the Cora dataset. (b) The distribution of samples with substantial divergence and their corresponding divergence values.

The discrepancy distribution across $f$ and $g$ for false negatives and true negatives is given in Figure 2 (b). As we observed, the two networks $f$ and $g$ exhibit a more pronounced difference in similarity on true negatives than on false negatives in Figure 2 (b). On the contrary, the similar distributional trends in Figure 2 (a) do not provide a valuable indicator for distinguishing between false and true negatives. The results indicate that the discrepancy between the two networks, $f$ and $g$, arises not mainly from data augmentation, but from their distinct perceptions of the inherent structure of the data.

In Figure 3, we further visualize the similarity distributions of false negatives and true negatives considering both $S(f_i^{(1)}, f_k^{(1)})$ and $S(g_i^{(2)}, g_k^{(2)})$. Meanwhile, we draw the distributions of those samples with discrepancy > 0.1 as calculated by Equation 4. As shown in (a), while the overall percentage of samples in the overlap region is relatively low, it is noteworthy that the majority of samples displaying large divergence are concentrated within this particular region. Figure 3 (b) illustrates the distribution of samples with substantial divergence and their corresponding divergence values. *It is evident that in the overlapping area, true negatives typically display greater divergence than false negatives.* This phenomenon can be readily understood, as contrastive learning aims to distance negative samples from the anchor in the embedding space. Yet, this becomes complex when hard negative and positive samples are entangled, creating a conflicting dynamic (Xuan et al., 2021). This is due to the exclusive reliance on model parameter adjustments for learning representations. Adjusting parameters to bring positive samples nearer to the anchor results in the inadvertent proximity of entangled hard negatives, which leads to a convoluted dynamic in regions where hard negatives and positives significantly overlap, complicating the update of representations in these areas. Contrarily, false negatives, which share similar semantic information and the same ground truth label as the anchor, encounter a less intricate learning process. As a result, in the overlapping area, it is logical to expect a noticeable divergence between two models that have been trained on distinct augmented graphs. This divergence arises because random augmentation provides them with the opportunity to learn the inherent structure of the data from different perspectives.

The above observations and analyses motivate us to investigate the incorporation of model discrepancy as a vital indicator for distinguishing between false and true negatives, especially in overlap areas where traditional methods struggle to differentiate them effectively. Therefore, in this paper, we study a novel problem of leveraging model discrepancy to counteract sampling bias in graph contrastive learning. The problem is formally defined as:

Problem 1. *Given an unlabeled graph $\mathcal{G}$, our goal is to develop a debiased graph contrastive learning framework that incorporates the knowledge of model discrepancy, ensuring that the representations learned through this framework are highly effective in downstream node classification tasks.*

## 4 Methodology

In this section, we present the details of the proposed framework MDGCL, which involves false negative sample selection and a debiased learning strategy to pull false negative samples closer to the anchor. An illustration of the proposed framework is given in Figure 4.

![5_image_0.png](5_image_0.png)

Figure 4: Framework of MDGCL.

Building upon our empirical observation that the two networks tend to exhibit a greater discrepancy in the overlap area for true negatives compared to false negatives, we adopt a strategy where we train two networks with different augmented views.
We then merge the knowledge derived from the model discrepancy along with similarity to the anchor to estimate more distinguishable distributions for false and true negatives. To fully harness the distributional information, we employ a sampling approach for false negatives and introduce a novel debiased objective function. This function is designed to maximize the similarity of false negative samples to the anchor, all the while maintaining the original objective of pushing true negative samples away from the anchor within the embedding space. Next, we give the details of each component.

## 4.1 False Negative Selection

To utilize model discrepancy to estimate more discernible distributions for false and true negatives, we adopt two GNN encoders $f$ and $g$, parameterized by $\Theta_1$ and $\Theta_2$, respectively. We train them separately to learn representations of the input views following GRACE (Zhu et al., 2020), with the loss function in Equation 2. Specifically, given $\mathcal{G}$, during each epoch we first generate three graph views as

$$\mathcal{G}^{(1)} = T_1(\mathbf{X}, \mathbf{A}), \ \mathcal{G}^{(2)} = T_2(\mathbf{X}, \mathbf{A}), \ \mathcal{G}^{(3)} = T_3(\mathbf{X}, \mathbf{A}). \tag{5}$$

We then apply $f$ on $\mathcal{G}^{(1)}$ and $\mathcal{G}^{(2)}$ to learn node representations $\mathbf{H}_f^{(1)}$ and $\mathbf{H}_f^{(2)}$. Similarly, we apply $g$ on $\mathcal{G}^{(2)}$ and $\mathcal{G}^{(3)}$ to learn node representations $\mathbf{H}_g^{(2)}$ and $\mathbf{H}_g^{(3)}$. The purpose of having the two networks share a common view while also maintaining their own views during training is to enable them to learn from different angles while staying interconnected. This approach guarantees that the observed discrepancies are not merely a product of random data augmentation but primarily arise from the intricate and conflicting overlap between false negatives and true negatives, thereby enhancing the reliability of the discrepancy values.

To streamline our explanation, we present our framework with node $v_i$ and its embeddings $f_i^{(1)} \in \mathbf{H}_f^{(1)}$ and $g_i^{(2)} \in \mathbf{H}_g^{(2)}$ as the anchor for $f$ and $g$, respectively. Similarly, for node $v_k$, its embeddings $f_k^{(1)} \in \mathbf{H}_f^{(1)}$ and $g_k^{(2)} \in \mathbf{H}_g^{(2)}$ serve as intra-view negatives for $f_i^{(1)}$ and $g_i^{(2)}$, respectively. It is worth noting that the same learning strategy is fully applicable to the inter-view case as well (i.e., negative samples $f_k^{(2)} \in \mathbf{H}_f^{(2)}$ and $g_k^{(3)} \in \mathbf{H}_g^{(3)}$ for $f_i^{(1)}$ and $g_i^{(2)}$, respectively). With the learned representations, we combine the similarity as:

$$s_{ik} = \frac{1}{2}\left(s_{ik}^f + s_{ik}^g\right) \cdot (1 - d_{ik}), \tag{6}$$

where $s_{ik}^f = S(f_i^{(1)}, f_k^{(1)})$, $s_{ik}^g = S(g_i^{(2)}, g_k^{(2)})$, and $d_{ik} = \mathrm{abs}(s_{ik}^f - s_{ik}^g)$. In the remainder of this paper, we assume that all embedding similarities are min-max normalized unless stated otherwise; then $s_{ik}, d_{ik} \in [0, 1]$.

We employ Equation 6 to merge the knowledge of similarity and discrepancy based on the empirical findings previously mentioned. These findings highlight that, generally, the two models exhibit a more substantial divergence on true negative samples compared to false negative samples, especially in the overlapping area. As a result, while shifts may occur in both distributions, the shift is more pronounced for true negative samples. This reduces the overlap between the distributions, enhancing the distinction between them. Experimental results in Section 5.4 also verify our analysis.
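The combined score of Equation 6 follows in a few lines. This is our own sketch; applying the min-max normalization matrix-wide is one possible reading of the paper's normalization assumption.

```python
import torch
import torch.nn.functional as F

def pairwise_cos(H):
    """All-pairs cosine similarity between the rows of an embedding matrix."""
    Hn = F.normalize(H, dim=1)
    return Hn @ Hn.T

def minmax(S):
    """Min-max normalize similarities to [0, 1], as assumed in the paper."""
    return (S - S.min()) / (S.max() - S.min() + 1e-12)

def combined_score(H1_f, H2_g):
    """Equation 6: average the two normalized similarities, discounted by discrepancy."""
    s_f = minmax(pairwise_cos(H1_f))       # s^f_ik from encoder f on G^(1)
    s_g = minmax(pairwise_cos(H2_g))       # s^g_ik from encoder g on G^(2)
    d = (s_f - s_g).abs()                  # d_ik, the model discrepancy
    return 0.5 * (s_f + s_g) * (1.0 - d)   # s_ik in [0, 1]
```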
Different from ProGCL (Xia et al., 2021), which employs a Beta Mixture Model (BMM) to fit the empirical distribution of negative samples, leading to a high computation cost, we utilize $s_{ik}$ as the input to a Bernoulli distribution, from which we sample to determine the selection of a given sample as a false negative. This process is formally defined as:

$$F = \{\mathbf{1}(\mathrm{Bernoulli}(s_{ik}) = 1) \mid s_{ik} > \theta, \ k \in [1, N] \setminus \{i\}\}, \tag{7}$$

where $\theta$ is a predefined threshold and $\mathbf{1}(\cdot)$ is an indicator that equals 1 if the condition holds and 0 otherwise. In this setup, a sample outcome of 1 indicates its selection as a false negative, while a 0 implies it is not selected. Since false negatives usually show smaller discrepancies and higher similarity, while true negatives often display larger discrepancies and lower similarity, our sampling strategy tends to assign more weight to false negatives. Notably, $F$ is shared by $f$ and $g$. By doing so, each time we sample data, there is a significantly higher chance of selecting false negatives, enhancing the effectiveness of our debiased contrastive objective function in the next section.

## 4.2 Debiased Contrastive Learning

Given the indexes of potential false negative candidates, we propose a debiased contrastive learning objective. This objective aims to maximize the similarity of false negatives to the anchor, all the while ensuring that our primary objective of minimizing the similarity of true negatives to the anchor is maintained.

## Algorithm 1 MDGCL (Intra-View)

Input: GNN networks $f, g$, augmentation function $\mathcal{T}$, graph $\mathcal{G}$, and normalized cosine similarity function $S$
1: for $epoch = 0, 1, 2, \ldots, E_{max}$ do
2: Draw three augmentation functions $T_1 \sim \mathcal{T}$, $T_2 \sim \mathcal{T}$, $T_3 \sim \mathcal{T}$
3: $\mathcal{G}^{(1)} = T_1(\mathcal{G})$, $\mathcal{G}^{(2)} = T_2(\mathcal{G})$, $\mathcal{G}^{(3)} = T_3(\mathcal{G})$
4: $\mathbf{H}_f^{(1)} = f(\mathcal{G}^{(1)})$, $\mathbf{H}_f^{(2)} = f(\mathcal{G}^{(2)})$, $\mathbf{H}_g^{(2)} = g(\mathcal{G}^{(2)})$, $\mathbf{H}_g^{(3)} = g(\mathcal{G}^{(3)})$
5: for all $f_i^{(1)}, f_k^{(1)} \in \mathbf{H}_f^{(1)}$ do
6: $s_{ik}^f = S(f_i^{(1)}, f_k^{(1)})$
7: end for
8: for all $g_i^{(2)}, g_k^{(2)} \in \mathbf{H}_g^{(2)}$ do
9: $s_{ik}^g = S(g_i^{(2)}, g_k^{(2)})$
10: end for
11: Compute $s_{ik}$ following Equation 6
12: Record indexes $F$ of false negative candidates with Equation 7
13: Compute $\ell(v_i, f)$ and $\ell(v_i, g)$ for each anchor $v_i$ for $f$ and $g$ with Equation 9 separately
14: Update the parameters of $f$ and $g$ with $\frac{1}{N}\sum \ell(v_i, f)$ and $\frac{1}{N}\sum \ell(v_i, g)$
15: end for
Output: $f, g$

Let $\tau^+$ be the proportion of false negative samples and $\tau^- = 1 - \tau^+$. Let $p$ be the distribution of all negative samples and $p_x^+$ the distribution of false negatives. Then the distribution of true negatives is calculated as:

$$p_x^-(x) = \left(p(x) - \tau^+ p_x^+(x)\right) / \tau^-. \tag{8}$$

The equation above demonstrates how to mitigate sampling bias by sampling from false negatives. Consequently, when we sample data from the negative distribution $p$ following Equation 8, we inherently draw samples from the true negative distribution $p_x^-$. Thus, with $F$ representing the indexes of false negative candidates, we sample nodes with corresponding IDs from within this set and denote them as $v_F$. Subsequently, we design our debiased loss function for each anchor as:
$$\ell(v_i, f) = -\log \frac{e^{S_{ii}^f}}{e^{S_{ii}^f} + \frac{N-1}{\tau^-}\underbrace{\left(\frac{1}{N-1}\sum_{k \neq i} e^{S_{ik}^f} - \tau^+ \frac{1}{M}\sum e^{S_{iF}^f}\right)}_{\text{Debias Sampling}}}, \tag{9}$$

where $S_{ii}^f = S(f_i^{(1)}, f_i^{(2)})$, $S_{iF}^f = S(f_i^{(1)}, f_F^{(1)})$, and $M = |F|$ equals the number of sampled false negatives. We calculate $\tau^+$ as

$$\tau^+ = \frac{M}{N-1}. \tag{10}$$

Here, we aim to highlight the distinctions between our loss function and that in (Chuang et al., 2020). Unlike previous work that samples $v_F$ from the positive sample distribution, our approach focuses on sampling from false negatives. This strategic choice is driven by the fact that positive samples typically already exhibit high similarity to the anchor in the standard contrastive learning setting. Our objective is not to further enhance the proximity of these positive samples to the anchor - a task relatively easily achieved in standard contrastive learning frameworks. Instead, we aim to specifically target false negatives, endeavoring to draw them closer to the anchor. This approach is intended to effectively mitigate the prevalent issue of sampling bias, addressing a critical gap in existing methodologies. A minimal code sketch of this selection-and-debiasing step is given below.
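The sketch covers the selection rule in Equation 7 and the per-anchor objective in Equations 9-10 (intra-view case). It is ours rather than the authors' code: it assumes $M < N - 1$ so that $\tau^- > 0$, and the final clamp is a numerical guard we add because the debiased denominator can dip below zero in practice.

```python
import torch

def select_false_negatives(s_i, theta=0.9):
    """Equation 7: among scores with s_ik > theta, keep those whose Bernoulli(s_ik) draw is 1.

    s_i holds the combined scores of one anchor against the other N-1 nodes,
    assumed min-max normalized to [0, 1].
    """
    draws = torch.bernoulli(s_i.clamp(0.0, 1.0)).bool()
    return torch.nonzero((s_i > theta) & draws).flatten()

def debiased_loss(S_pos, S_neg, F_idx):
    """Equations 9-10 for one anchor.

    S_pos: scalar tensor S^f_ii (similarity to the positive); S_neg: the N-1
    similarities S^f_ik to negatives; F_idx: sampled false-negative indices.
    """
    n = S_neg.numel()                          # N - 1
    m = F_idx.numel()                          # M = |F|
    tau_plus = m / n                           # Equation 10
    tau_minus = 1.0 - tau_plus                 # assumes m < n
    pos = torch.exp(S_pos)
    neg_mean = torch.exp(S_neg).mean()
    fn_mean = torch.exp(S_neg[F_idx]).mean() if m > 0 else torch.zeros(())
    debias = (n / tau_minus) * (neg_mean - tau_plus * fn_mean)
    debias = debias.clamp_min(1e-8)            # our guard: keep the log well-defined
    return -torch.log(pos / (pos + debias))
```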
## 5 Experiments

In this section, we conduct extensive experiments to demonstrate the effectiveness of our proposed MDGCL. In particular, we aim to answer the following research questions:

- RQ1 How does our proposed framework perform in terms of node classification accuracy in downstream tasks?
- RQ2 Can our framework generate a more distinguishable distribution for false negative and true negative samples?
- RQ3 Can our framework effectively select false negative samples for debiased contrastive learning?

Table 1: Statistics of datasets used in the paper

| Dataset | #Nodes | #Edges | #Features | #Classes |
|------------|---------|-----------|-----------|----------|
| Cora | 2,708 | 10,556 | 1,433 | 7 |
| CiteSeer | 3,327 | 9,228 | 3,703 | 6 |
| PubMed | 19,717 | 88,651 | 500 | 3 |
| Photo | 7,650 | 238,163 | 745 | 8 |
| Computer | 13,752 | 491,722 | 767 | 10 |
| ogbn-arXiv | 169,343 | 1,166,243 | 128 | 40 |

## 5.1 Experimental Settings

Baselines. We compare our method with a variety of representative and state-of-the-art baselines, including supervised graph learning methods GCN (Kipf & Welling, 2016a) and GAT (Veličković et al., 2017), graph autoencoders GAE and VGAE (Kipf & Welling, 2016b), augmentation-based GCL methods including DGI (Veličković et al., 2019), MVGRL (Hassani & Khasahmadi, 2020a), CCA-SSG (Zhang et al., 2021), BGRL (Thakoor et al., 2021b), COSTA (Zhang et al., 2022b), GRACE (Zhu et al., 2020), GCA (Zhu et al., 2021), ProGCL (Xia et al., 2021), ARIEL (Feng et al., 2022), HomoGCL (Li et al., 2023) and SpCo (Liu et al., 2022), and augmentation-free graph contrastive learning methods GMI (Peng et al., 2020), AFGRL (Lee et al., 2021) and SUGRL (Mo et al., 2022). Detailed descriptions of the baselines can be found in Appendix A.

Datasets. We assess the quality of representations after self-supervised pretraining on six node classification benchmarks, including three citation networks Cora, CiteSeer and PubMed (Yang et al., 2016), two co-purchase networks Amazon Computer and Amazon Photo (McAuley et al., 2015), and one large-scale network ogbn-arXiv (Hu et al., 2021). We adopt the public splits for Cora, CiteSeer and PubMed, and 1:1:8 training/validation/testing random splits for the two co-purchase datasets. The statistics of the datasets are provided in Table 1. We give detailed descriptions in Appendix B.

Evaluation Protocol. We follow the linear evaluation scheme introduced in (Veličković et al., 2019): i) we first train the model on all the nodes in a graph without supervision, by optimizing the objective in Equation 9; ii) after that, we freeze the parameters of the encoder and obtain node embeddings, which are subsequently fed into a linear classifier (i.e., a logistic regression model) to generate a predicted label for each node. In the second stage, only nodes in the training set are used for training the classifier, and we report the classification accuracy on testing nodes. The graph encoders f and g are standard two-layer GCN models (Kipf & Welling, 2016a) for all the datasets. For all experiments, the threshold θ used to filter out false negative samples is set to 0.9. For a fair comparison, the hyperparameters of all methods are tuned on the validation set.

## 5.2 Performance Comparison

To answer RQ1, we compare the node classification performance of our framework with the baselines on various datasets. Each experiment is conducted 10 times. The averaged node classification results with standard deviations are reported in Table 2.

Table 2: Node classification results (accuracy(%)±std).

| Methods | Cora | CiteSeer | PubMed | Photo | Computer |
|-----------|------------|------------|------------|--------------|--------------|
| GCN | 81.5 ± 0.4 | 70.2 ± 0.4 | 79.0 ± 0.2 | 92.42 ± 0.22 | 86.51 ± 0.54 |
| GAT | 83.0 ± 0.7 | 72.5 ± 0.7 | 79.0 ± 0.3 | 92.56 ± 0.35 | 86.93 ± 0.29 |
| GAE | 71.5 ± 0.4 | 65.8 ± 0.4 | 72.1 ± 0.5 | 91.62 ± 0.13 | 85.27 ± 0.19 |
| VGAE | 73.0 ± 0.3 | 68.3 ± 0.4 | 75.8 ± 0.2 | 92.20 ± 0.11 | 86.37 ± 0.21 |
| DGI | 82.3 ± 0.6 | 71.8 ± 0.7 | 76.8 ± 0.6 | 91.61 ± 0.22 | 83.95 ± 0.47 |
| BGRL | 82.7 ± 0.6 | 71.1 ± 0.8 | 79.6 ± 0.5 | 92.80 ± 0.08 | 88.23 ± 0.11 |
| GMI | 82.4 ± 0.6 | 71.7 ± 0.2 | 79.3 ± 1.0 | 90.73 ± 0.24 | 84.22 ± 0.52 |
| SUGRL | 83.4 ± 0.5 | 73.0 ± 0.4 | 81.9 ± 0.3 | 93.07 ± 0.15 | 88.93 ± 0.21 |
| AFGRL | 79.8 ± 0.2 | 69.4 ± 0.2 | 80.0 ± 0.1 | 92.71 ± 0.23 | 88.12 ± 0.27 |
| COSTA | 82.2 ± 0.2 | 70.7 ± 0.5 | 80.4 ± 0.3 | 92.43 ± 0.38 | 88.37 ± 0.22 |
| MVGRL | 82.9 ± 0.6 | 72.6 ± 0.5 | 79.8 ± 0.5 | 91.66 ± 0.42 | 87.07 ± 0.63 |
| CCA-SSG | 84.0 ± 0.4 | 73.1 ± 0.3 | 81.0 ± 0.4 | 92.84 ± 0.18 | 88.27 ± 0.32 |
| GRACE | 81.5 ± 0.3 | 70.6 ± 0.5 | 80.2 ± 0.3 | 92.15 ± 0.24 | 86.25 ± 0.25 |
| GCA | 81.4 ± 0.3 | 70.4 ± 0.4 | 80.7 ± 0.5 | 92.53 ± 0.16 | 87.80 ± 0.23 |
| ProGCL | 81.2 ± 0.4 | 69.8 ± 0.5 | 79.2 ± 0.2 | 92.39 ± 0.11 | 87.43 ± 0.21 |
| ARIEL | 83.0 ± 1.3 | 71.1 ± 0.9 | 74.2 ± 0.8 | 91.80 ± 0.24 | 87.07 ± 0.33 |
| HomoGCL | 84.3 ± 0.5 | 72.3 ± 0.7 | 81.1 ± 0.3 | 92.92 ± 0.18 | 88.46 ± 0.20 |
| SpCo | 82.7 ± 0.6 | 71.3 ± 0.8 | 81.0 ± 0.4 | 92.74 ± 0.17 | 88.14 ± 0.28 |
| MDGCL | 84.6 ± 0.5 | 73.1 ± 0.5 | 82.3 ± 0.5 | 93.26 ± 0.28 | 89.14 ± 0.22 |

From the table, we have the following observations:

- InfoNCE-based methods, i.e., GCA (Zhu et al., 2021), ProGCL (Xia et al., 2021), SpCo (Liu et al., 2022) and ARIEL (Feng et al., 2022), cannot bring consistent and significant improvements over GRACE (Zhu et al., 2020). Notably, HomoGCL (Li et al., 2023) consistently yields significant improvements over GRACE. However, our framework surpasses even HomoGCL in performance, which can be attributed to the fact that HomoGCL arbitrarily designates non-neighboring nodes as negative samples,
thereby introducing false negative pairs.
- Our superior performance compared to ProGCL (Xia et al., 2021) demonstrates the effectiveness of our framework in generating distinguishable distributions for false and true negatives.
- Among augmentation-free methods such as GMI (Peng et al., 2020), SUGRL (Mo et al., 2022) and AFGRL (Lee et al., 2021), SUGRL stands out with its competitive performance. While it outperforms almost all other augmentation-based frameworks, it falls short only of our MDGCL. The result indicates that data augmentation remains necessary in GCL, but a debiased framework is needed to learn better node representations.

## 5.3 Results On Large-Scale OGB Dataset

To show the scalability of the proposed framework, we also conduct an experiment on the large-scale dataset ogbn-arXiv from the OGB benchmark (Hu et al., 2021). Following (Hu et al., 2021), we extend the backbone GNN encoder to 3 GCN layers and report the classification accuracy on both the validation and test sets, which is a convention for this task. The results are shown in Table 3. They show that our framework outperforms all other unsupervised learning methods, which demonstrates the effectiveness and scalability of the proposed method.

Table 3: Node classification results on the ogbn-arXiv dataset (accuracy(%)±std). OOM indicates out-of-memory.

| Model | Validation | Test |
|----------------------------|--------------|------------|
| MLP | 57.65±0.12 | 55.50±0.23 |
| GCN | 73.00±0.17 | 71.74±0.29 |
| GraphSAGE | 72.77±0.16 | 71.49±0.27 |
| Random-Init | 69.90±0.11 | 68.94±0.15 |
| DGI | 71.26±0.11 | 70.34±0.16 |
| GRACE full-graph | OOM | OOM |
| GRACE-Subsampling (k=2) | 60.49±3.72 | 60.24±4.06 |
| GRACE-Subsampling (k=8) | 71.30±0.17 | 70.33±0.18 |
| GRACE-Subsampling (k=2048) | 72.61±0.15 | 71.51±0.11 |
| ProGCL | 72.45±0.21 | 72.18±0.09 |
| BGRL | 72.53±0.09 | 71.64±0.12 |
| HomoGCL | 72.85±0.10 | 72.22±0.15 |
| MDGCL | 72.93±0.12 | 72.33±0.13 |

## 5.4 Distributions Comparison

To answer RQ2, in this subsection we show empirical results of how the distributions of false and true negative samples shift when incorporating the knowledge of model discrepancy with our strategy. The results are shown in Figure 5. Similar to Section 3.2, our experiment initially involves pretraining both f and g, followed by the computation of the similarity scores $\frac{1}{2}(s_{ik}^f + s_{ik}^g)$ displayed on the left side of each subfigure. On the right side of each subfigure, we present the values obtained by multiplying these similarity scores by $(1 - d_{ik})$, denoted as $\frac{1}{2}(s_{ik}^f + s_{ik}^g) \cdot (1 - d_{ik})$. From the figure, we can observe that our strategy of combining model discrepancy with similarity results in a reduced overlap between false negatives and true negatives. This is advantageous for the task of distinguishing false negatives from true negatives efficiently.

![10_image_0.png](10_image_0.png)

Figure 5: Comparison of distributions using solely similarity and combining the knowledge of similarity and discrepancy on (a) Cora and (b) CiteSeer.

## 5.5 Ability Of Selecting False Negatives

To show that the proposed framework can effectively mitigate sampling bias, in this subsection we conduct experiments to quantify the percentage of false negatives sampled with Equation 7, which answers RQ3.

![11_image_0.png](11_image_0.png)

Figure 6: Case study on the percentage of selected false negatives on (a) Cora and (b) CiteSeer.

Specifically, we first train our model and then calculate the percentage of false negatives using the ground-truth labels. Note that the labels are not used during model training but are only used when calculating the percentage of false negatives.
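The post-hoc check itself is a one-liner once the sampled indices and labels are available; a hypothetical sketch (`F_idx`, `anchor`, and `labels` are illustrative names, not the authors' API):

```python
import torch

def false_negative_rate(F_idx, anchor, labels):
    """Fraction of sampled 'negatives' sharing the anchor's ground-truth label.

    Labels are used only for this post-hoc evaluation, never during training.
    """
    if F_idx.numel() == 0:
        return 0.0
    return (labels[F_idx] == labels[anchor]).float().mean().item()

y = torch.tensor([0, 1, 0, 2, 0, 1])                       # toy labels
print(false_negative_rate(torch.tensor([2, 4, 1]), 0, y))  # -> 0.666...
```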
We compare our method with the approach used in ProGCL (Xia et al., 2021), which exclusively relies on similarity as the indicator for distinguishing false negatives from true negatives. The results are shown in Figure 6. From the figure, we can observe that, when combining the measure of discrepancy with similarity, our framework consistently samples a higher percentage of false negatives. This increase in the proportion of false negatives enhances the effectiveness of our debiased objective function in Equation 9, effectively bringing false negative samples with the same ground-truth label closer to the anchor.

## 5.6 Ablation Study

In this section, we conduct an ablation study to understand the contribution of each component of MDGCL. We compare the performance of our MDGCL with (i) **MDGCL/M**: MDGCL without model discrepancy as additional knowledge for distribution estimation; and (ii) **MDGCL/D**: MDGCL without the debiased objective function, adopting instead the loss function in Eq. (10) of ProGCL (Xia et al., 2021). We only show the results on Cora and PubMed, as similar trends are observed on other datasets. The results are presented in Figure 7. From this figure, we observe that: (i) MDGCL consistently outperforms MDGCL/D, underscoring the effectiveness of the strategy of bringing false negative samples closer to the anchor; this approach proves more successful than assigning lower weights to push them away from the anchor. (ii) MDGCL/D consistently outperforms MDGCL/M, highlighting that a more distinguishable distribution forms the foundation of our framework. This distinction is crucial because, without a discernible distribution, a significant portion of true negatives might inadvertently be selected to be drawn closer to the anchor, potentially introducing other noise into the process. (iii) MDGCL consistently outperforms both MDGCL/M and MDGCL/D, confirming that our approach of generating a discernible distribution and designing a denoising objective function is indeed sound.

## 5.7 Hyperparameter Sensitivity Analysis

In this section, we conduct experiments to show how the negative sample selection threshold θ impacts the performance of MDGCL. We vary the value of θ over {0.75, 0.8, 0.85, 0.9, 0.95} for both Cora and PubMed. We run experiments for each parameter five times and report the mean accuracy in Table 4. We observe the following: (1) to filter out false negative samples while preserving true negative samples, it is recommended to set θ ∈ [0.85, 0.95]; (2) when θ ≥ 0.85, our MDGCL typically achieves high performance, which eases hyperparameter tuning.

![12_image_0.png](12_image_0.png)

## 5.8 Time Complexity Analysis

Our framework is easily parallelizable, as the message passing process is independent for each network f and g. Consequently, the time complexity of MDGCL can be reduced to be the same as that of GRACE (Zhu et al., 2020). This suggests that MDGCL has great potential for scalable graph contrastive learning.

Table 4: Node classification results (accuracy(%)±std) for the hyperparameter sensitivity analysis.
| θ | 0.75 | 0.80 | 0.85 | 0.90 | 0.95 |
|--------|--------|--------|--------|--------|------|
| Cora | 83.2 | 83.6 | 84.1 | 84.6 | 84.3 |
| PubMed | 80.7 | 81.2 | 81.7 | 82.3 | 81.7 |

## 6 Conclusion

In this paper, we introduce a novel debiased graph contrastive learning framework aimed at mitigating the prevalent issue of sampling bias. Central to our approach is the incorporation of model discrepancy, which facilitates the generation of a more distinct distribution between false and true negative samples. This enhancement significantly improves our capacity to identify false negatives. Furthermore, we have developed a unique loss objective that effectively draws false negatives towards the anchor while simultaneously repelling true negatives. Through comprehensive experiments, our framework has demonstrated superior performance in downstream node classification tasks, outperforming current state-of-the-art methods in terms of accuracy.

## References

Piotr Bielak, Tomasz Kajdanowicz, and Nitesh V Chawla. Graph barlow twins: A self-supervised representation learning framework for graphs. *Knowledge-Based Systems*, 256:109631, 2022.

Guanyi Chu, Xiao Wang, Chuan Shi, and Xunqiang Jiang. Cuco: Graph representation with curriculum contrastive learning. In *IJCAI*, pp. 2300–2306, 2021.

Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. *Advances in Neural Information Processing Systems*, 33:8765–8775, 2020.

Shengyu Feng, Baoyu Jing, Yada Zhu, and Hanghang Tong. Adversarial graph contrastive learning with information regularization. In *Proceedings of the ACM Web Conference 2022*, pp. 1362–1371, 2022.

William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2018.

Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs, 2020a.

Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In *Proceedings of International Conference on Machine Learning*, pp. 3451–3461, 2020b.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs, 2021.

Wei Jin, Tyler Derr, Haochen Liu, Yiqi Wang, Suhang Wang, Zitao Liu, and Jiliang Tang. Self-supervised learning on graphs: Deep insights and new direction, 2020.

Baoyu Jing, Chanyoung Park, and Hanghang Tong. Hdmi: High-order deep multiplex infomax. In *Proceedings of the Web Conference 2021*, pp. 2414–2424, 2021.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv preprint arXiv:1609.02907*, 2016a.

Thomas N Kipf and Max Welling. Variational graph auto-encoders. *arXiv preprint arXiv:1611.07308*, 2016b.

Namkyeong Lee, Junseok Lee, and Chanyoung Park. Augmentation-free self-supervised learning on graphs, 2021.

Bolian Li, Baoyu Jing, and Hanghang Tong. Graph communal contrastive learning. In *Proceedings of the ACM Web Conference 2022*, pp. 1203–1213, 2022.

Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, and Jian-Huang Lai. Homogcl: Rethinking homophily in graph contrastive learning. *arXiv preprint arXiv:2306.09614*, 2023.

Nian Liu, Xiao Wang, Deyu Bo, Chuan Shi, and Jian Pei. Revisiting graph contrastive learning from the perspective of graph spectrum, 2022.

Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. Image-based recommendations on styles and substitutes, 2015.
Yujie Mo, Liang Peng, Jie Xu, Xiaoshuang Shi, and Xiaofeng Zhu. Simple unsupervised graph representation learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 7797–7805, 2022. Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, and Junzhou Huang. Graph representation learning via graphical mutual information maximization. In Proceedings of The Web Conference 2020, pp. 259–270, 2020. Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. *arXiv preprint arXiv:1907.10903*, 2019. Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. *NeurIPS*, 2021. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. arXiv preprint arXiv:2102.06514, 2021a. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Rémi Munos, Petar Veličković, and Michal Valko. Bootstrapped representation learning on graphs. In ICLR 2021 Workshop on Geometrical and Topological Representation Learning, 2021b. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017. Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep Graph Infomax. In *International Conference on Learning Representations*, 2019. URL https: //openreview.net/forum?id=rklz9iAcKQ. Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, and Stan Z Li. Progcl: Rethinking hard negative mining in graph contrastive learning. *arXiv preprint arXiv:2110.02027*, 2021. Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, and Stan Z Li. Simgrace: A simple framework for graph contrastive learning without data augmentation. In *Proceedings of the ACM Web Conference 2022*, pp. 1070–1079, 2022. Dongkuan Xu, Wei Cheng, Dongsheng Luo, Haifeng Chen, and Xiang Zhang. Infogcl: Information-aware graph contrastive learning. *Advances in Neural Information Processing Systems*, 34:30414–30425, 2021. Hong Xuan, Abby Stylianou, Xiaotong Liu, and Robert Pless. Hard negative examples are hard, but useful, 2021. Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings, 2016. Yuning You, Tianlong Chen, Zhangyang Wang, and Yang Shen. When does self-supervision help graph convolutional networks? *Proceedings of machine learning research*, 119:10871–10880, 2020. Yuning You, Tianlong Chen, Yang Shen, and Zhangyang Wang. Graph contrastive learning automated. arXiv preprint arXiv:2106.07594, 2021. Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, and S Yu Philip. From canonical correlation analysis to self-supervised graph neural networks. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Xiaokang Yang, and Pinyan Lu. M-mix: Generating hard negatives via multi-sample mixing for contrastive learning. In *Proceedings of the* 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2461–2470, 2022a. Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, and Irwin King. Costa: covariance-preserving feature augmentation for graph contrastive learning. In *Proceedings of the 28th ACM SIGKDD Conference on* Knowledge Discovery and Data Mining, pp. 
2524–2534, 2022b.

Han Zhao, Xu Yang, Zhenru Wang, Erkun Yang, and Cheng Deng. Graph debiased contrastive learning with joint representation clustering. In *IJCAI*, pp. 3434–3440, 2021.

Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Deep Graph Contrastive Representation Learning. In *ICML Workshop on Graph Representation Learning and Beyond*, 2020. URL http://arxiv.org/abs/2006.04131.

Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Graph contrastive learning with adaptive augmentation. In *Proceedings of the Web Conference 2021*, pp. 2069–2080, 2021.

Yanqiao Zhu, Yichen Xu, Hejie Cui, Carl Yang, Qiang Liu, and Shu Wu. Structure-enhanced heterogeneous graph contrastive learning. In *Proceedings of the 2022 SIAM International Conference on Data Mining (SDM)*, pp. 82–90. SIAM, 2022.

## A Baselines

- GCN (Kipf & Welling, 2016a), GAT (Veličković et al., 2017), GraphSAGE (Hamilton et al., 2018): GCN, GAT, and GraphSAGE are three popular supervised GNNs.
- GAE/VGAE (Kipf & Welling, 2016b): GAE and VGAE are graph autoencoders that learn node embeddings via vanilla/variational autoencoders. Both the encoder and the decoder are implemented with graph convolutional networks.
- DGI (Veličković et al., 2019): It maximizes the mutual information between patch representations and corresponding high-level summaries of graphs obtained from a graph convolutional network.
- GMI (Peng et al., 2020): GMI applies cross-layer node contrasting and edge contrasting. It also generalizes the idea of conventional mutual information computations to the graph domain.
- MVGRL (Hassani & Khasahmadi, 2020a): MVGRL maximizes the mutual information between the cross-view representations of nodes and graphs using graph diffusion.
- BGRL (Thakoor et al., 2021a): BGRL adopts an asymmetrical structure to perform node-node level contrast without negative samples, avoiding the quadratic bottleneck.
- AFGRL (Lee et al., 2021): It extends BGRL by generating an alternative view of a graph by discovering nodes that share the local structural information and global semantics with the graph.
- CCA-SSG (Zhang et al., 2021): CCA-SSG leverages classical Canonical Correlation Analysis to construct a feature-level objective which can discard augmentation-variant information and prevent degenerated solutions.
- COSTA (Zhang et al., 2022b): COSTA alleviates the highly biased node embedding obtained via graph augmentation by performing feature augmentation.
- GRACE (Zhu et al., 2020): It adopts SimCLR, which performs graph augmentation on the input graph and considers node-node level contrast at both inter-view and intra-view levels.
- GCA (Zhu et al., 2021): GCA extends GRACE by considering adaptive graph augmentations based on degree centrality, eigenvector centrality, and PageRank centrality.
- ProGCL (Xia et al., 2021): ProGCL extends GRACE by leveraging hard negative samples via Expectation Maximization to fit the observed node-level similarity distribution.
- ARIEL (Feng et al., 2022): It extends GRACE by introducing an adversarial graph view and an information regularizer to extract informative contrastive samples within a reasonable constraint.
- SUGRL (Mo et al., 2022): SUGRL uses a multiplet loss to boost inter-class variation by integrating structural and neighbor information. It also adds an upper-bound loss to limit the distance between positive and anchor embeddings, thereby reducing intra-class variation.
- SpCo (Liu et al., 2022): It optimizes the contrastive pair with the original adjacency matrix and elevates the augmented graph's high-frequency components while preserving its original low-frequency structure.
- HomoGCL (Li et al., 2023): It is a model-agnostic framework that enhances the positive set with significant neighbor nodes.

## B Datasets

- **Cora**, **CiteSeer**, and **PubMed** (Yang et al., 2016): They are citation networks where nodes denote papers and edges depict citation relationships. In Cora and CiteSeer, each node is described by a binary word vector, indicating the presence or absence of a corresponding word from a predefined dictionary. In contrast, PubMed employs a TF/IDF-weighted word vector for each node. For all three datasets, nodes are categorized based on their respective research areas.
- **Amazon-Photo** and **Amazon-Computers** (McAuley et al., 2015): In these networks, nodes correspond to products, and edges indicate co-purchase instances. Each node is characterized by a raw bag-of-words feature, which encodes product reviews, and is labeled according to its product category.
- **ogbn-arXiv** (Hu et al., 2021): It is a citation network encompassing all Computer Science arXiv papers cataloged in the Microsoft Academic Graph. Each node is characterized by a 128-dimensional feature vector, derived by averaging the skip-gram word embeddings of its title and abstract. Additionally, the nodes are categorized based on their respective research areas.
Review 1:

Summary:
This paper aims to find false negative samples in GCL. The authors observe that model discrepancy can be used to find these negative samples. They further propose MDGCL to bring false negative samples closer to the anchor.

Strengths and Weaknesses:
S1: Finding false negative samples is important in GCL.
S2: Using model discrepancy to find negative samples seems novel.
S3: The literature review is detailed.

W1: The visualization in this paper needs improvement.
W2: The motivation, while interesting, is not theoretically supported.
W3: The scalability experiment is inadequate.

Requested Changes:
Q1: In Figures 2, 3, and 5, the authors use 'density' or 'probability density' as the label of the y-axis. However, the histograms are not normalized. As observed from these figures, there are more true negative samples compared to false negative samples. Thus it is hard to see the distribution discrepancy in these figures. The authors should also add theoretical analysis or qualitative results for the phenomenon described in Figures 2 and 3.
Q2: Strong baselines are not included in Table 3. For example, CCA-SSG is in Table 2 but not in Table 3. Please compare MDGCL with these methods on the Ogbn-Arxiv dataset.
Q3: Please include a runtime comparison on the Ogbn-Arxiv dataset. It seems like MDGCL needs to train GCL 3 times, so the runtime could be very long. In Section 5.8, the authors claim "the time complexity of MDGCL can be reduced to be the same as that of GRACE. This proves that MDGCL has great potential in conducting scalable graph contrastive learning." The second sentence is not true since GRACE is known to be very time-consuming. Please modify.

Typo: CiteSeer is misspelled as 'citepSeer' multiple times. The authors seem to have used replace-all in some text editor.

Broader Impact Concerns: No direct limitations or potential negative societal impacts.

==================================================

Review 2:

Summary:
This paper aims to tackle the problem of sampling bias in graph contrastive learning, which stems from the arbitrary selection process of negative samples that may include data points with the same ground truth label as the anchor (called false negatives). To tackle this limitation, the authors propose to use the model discrepancy of different GNNs, which is used as an indicator to differentiate false negatives from true negatives. In addition, the authors propose a new objective, which aims to pull false negatives closer to the anchor in the embedding space while repelling true negatives away from the anchor. The authors experimentally validate the proposed method, namely MDGCL, on node- and graph-classification benchmark datasets, showing its advantage over baselines.

Strengths and Weaknesses:

### Strengths

* This paper tackles the important and interesting problem of sampling bias (i.e., distinguishing false negatives from true negatives) during graph contrastive learning.
* The proposed method of using model discrepancies (to address the sampling bias problem) is well supported by the analyses in Figures 2 and 3.
* The experimental setups (especially the baselines that the authors compare) are extensive, further including both the node- and graph-classification tasks.
* This paper is well-written and easy to follow.
### Weaknesses

* There is a recent graph self-supervised learning approach that is not impacted by the sampling bias problem (as it leverages the distances between two graphs for learning), namely D-SLA [1], which may be worthwhile to discuss and compare.
* The performance improvements of the proposed MDGCL over baselines are marginal and seem not statistically significant. The authors may additionally perform statistical tests (e.g., a t-test with a p-value of 0.05) to showcase the effectiveness of the proposed method.
* In Table 4, it may be a typo that the authors do not provide the standard deviations, which contradicts its caption.

---

[1] Graph Self-supervised Learning with Accurate Discrepancy Learning, NeurIPS 2022.

Requested Changes: Please see the weaknesses above.

Broader Impact Concerns: The authors do not discuss any concerns about the ethical implications of their work; however, I don't see any major concerns about them.

==================================================

Review 3:

Summary:
This work studies the sampling biases in graph contrastive learning (GCL). The authors find that the augmentation in GCL may introduce some false negative examples and further affect the optimization and the performance. Therefore, they propose a new method called MDGCL to mitigate the issues of the false negatives by leveraging the knowledge of model discrepancy. They conduct extensive experiments and analysis to demonstrate the effectiveness of MDGCL.

Strengths and Weaknesses:
(+) The false negative example is a huge concern in GCL;
(+) The authors conduct extensive analysis to demonstrate the idea of MDGCL.
(-) The presentation is not sufficiently clear;
(-) The experiments are limited to certain datasets;
(-) The discussion of some related works is missing.

Requested Changes:
1. The presentation is not sufficiently clear:
- In fact, there seems to be no clear definition of how to find the true/false negatives in the preliminary analysis;
- Some notations like $f$ are used multiple times for different meanings;
- In Table 1, "citepSeer";
2. The experiments are limited to certain datasets:
- The current analysis is limited to the final learned representations, which may not properly reflect the influence of false negative examples on training;
- The analysis and ablation studies are limited to simple datasets, while it is unclear whether the observations still hold for larger graphs or heterophilous graphs;
- The experiments are limited to homophilous graphs;
3. The discussion with some related works that also study the biases or debiasing GCL [1,2] is missing.

**References**

[1] Calibrating and Improving Graph Contrastive Learning, TMLR'23.

[2] B2-Sampling: Fusing Balanced and Biased Sampling for Graph Contrastive Learning, KDD'23.

Broader Impact Concerns: There is no discussion of the broader impacts.

==================================================

Metareview:

Recommendation: Reject

Comment: The paper proposes MDGCL to identify false negative samples in GCL using model discrepancy. While this approach appears novel, the overall contribution is considered weak. Empirical evaluations are unsatisfactory, with marginal and statistically insignificant improvements. The paper has poor visualization, lack of theoretical support, and inadequate scalability experiments. It also fails to discuss recent relevant approaches like D-SLA. Plus, the authors failed to provide a rebuttal. Overall, the paper is clearly under the bar of TMLR.

==================================================
# Federated Fine-Tuning Of Vision Foundation Models Via Probabilistic Masking

Anonymous authors

Paper under double-blind review

## Abstract

Foundation Models (FMs) have revolutionized machine learning with their adaptability and high performance across tasks; yet, their integration into Federated Learning (FL) is challenging due to the substantial communication overhead from their extensive parameterization. Current communication-efficient FL strategies, such as gradient compression, reduce bitrates to around 1 bit-per-parameter (bpp). However, these approaches fail to harness the characteristics of FMs, with their large number of parameters still posing a challenge to communication efficiency, even at these bitrate regimes. In this work, we present DeltaMask¹, a novel method that efficiently fine-tunes FMs in FL at an ultra-low bitrate, well below 1 bpp. DeltaMask employs stochastic masking to detect highly effective subnetworks within FMs and leverages the stochasticity and sparsity in client masks to compress updates into a compact grayscale image using probabilistic filters, deviating from traditional weight training approaches. Our comprehensive evaluations across various datasets and architectures demonstrate that DeltaMask efficiently achieves bitrates as low as 0.09 bpp, enhancing communication efficiency while maintaining FM performance, as measured on 8 datasets and 5 pre-trained models of various network architectures.

¹Code will be made available upon acceptance.

## 1 Introduction

Federated learning (FL) enables collaborative training of neural network models directly on edge devices (referred to as clients), locally with on-device data (Konečný et al., 2016). FL comprises multiple federated rounds, which involve server-to-client model update dispatch, local training by clients, and server-side aggregation (e.g., via FedAvg (McMahan et al., 2017)) of clients' model updates, iteratively performed until model convergence. Despite its appealing properties for users' privacy, FL requires constant model update transfers between server and clients, which poses a challenge in terms of communication efficiency. This becomes even more critical when the clients are resource-constrained edge devices, which operate under limited transmission bandwidth and strict energy constraints.

Figure 1: DeltaMask (Ours) vs. state-of-the-art communication-efficient FL techniques when using CLIP pre-trained ViT-B/32; results are averaged over 8 datasets.

Recent advances in FL have led to a variety of methods aimed at enhancing communication efficiency, particularly by reducing the data volume exchanged in each federated round. These strategies often employ gradient compression techniques, including sparsification (Lin et al., 2020; Aji & Heafield, 2017), quantization (Alistarh et al., 2017; Vargaftik et al., 2022; 2021), and low-rank approximation (Mohtashami et al., 2022; Mozaffari et al., 2022), which are pivotal in streamlining data transmission. Similarly, the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019) has paved the way for FL training regimes that diverge from traditional weight updates. Here, the focus has shifted toward identifying and cultivating high-potential subnetworks within randomly initialized neural models (Li et al., 2021a; Vallapuram et al., 2022; Li et al., 2020; Isik et al., 2023b). Such subnetworks demonstrate good performance without the need for extensive weight adjustments, offering a viable path to minimize FL communication overhead.
FedMask (Li et al., 2021a) and FedPM (Isik et al., 2023b), which learn binary masks on top of random dense networks, are shown to reduce the bitrate from 32 to 1 bit-per-parameter (bpp). However, jointly learning effective subnetworks in large, randomly initialized models can severely affect training duration and model convergence. Furthermore, all aforementioned approaches primarily focus on training models from initial random weights; yet, when fine-tuning pre-trained large models, they do not exploit their unique characteristics to enable FL training at extremely low bitrates (e.g., ≪ 1 bpp).

Leveraging the advancements in self-supervised learning, vision Foundation Models (vFMs) have brought significant advancement across various machine learning domains with their remarkable representation quality. Models like CLIP (Radford et al., 2021) and DINOv2 (Oquab et al., 2023) demonstrate rapid adaptability to diverse tasks, achieving unmatched performance in several downstream applications. Notably, recent developments have seen vision FMs, such as the ViT models, expand to billions of parameters (Dehghani et al., 2023), exemplifying the scale and complexity of modern FMs. In turn, and as an alternative to traditional fine-tuning, masking strategies have emerged (Mallya et al., 2018; Zhao et al., 2020) in a centralized setting, where selective binary masks are learned on top of frozen pre-trained weights, matching the performance of full fine-tuning. Nevertheless, the high parameter count of FMs inhibits their straightforward expansion into decentralized settings due to the substantial communication overhead (Zhuang et al., 2023), even at a bitrate of 1 bpp, thereby limiting their potential to tap into the valuable data available in distributed environments.

To bridge the gap between the high-performance potential of Foundation Models (FMs) and the practical constraints of federated settings, we introduce DeltaMask, an approach designed to fine-tune FMs to various downstream tasks in federated settings with significantly reduced bitrate requirements (see Figure 1). Inspired by the sparse mask updates between subsequent federated rounds, which naturally occur due to the rapid adaptability of FMs, our approach combines stochastic masking with probabilistic filters to find high-performing subnetworks within a pre-trained FM, while operating in an ultra-low bitrate regime. Moreover, unlike FedPM (Isik et al., 2023b), DeltaMask enables effective control over the bitrate-accuracy trade-off by simply adjusting the bits-per-element (bpe) in probabilistic filters. This paves the way for fine-tuning FMs in federated settings without the massive communication burden caused by their large number of parameters, crucial for scenarios where niche, sensitive data must remain local yet is immensely valuable for learning, such as in healthcare applications. Concisely, the main contributions of our work are as follows:

- We present a simple, yet effective, method termed DeltaMask, to allow fine-tuning of FMs in federated settings in a highly communication-efficient manner.
- We combine stochastic binary masks with probabilistic filters to compactly communicate mask updates and reduce the bitrate below 0.1 bpp.
- Our evaluation across 8 datasets and 5 pre-trained models of various network architectures demonstrates the strong effectiveness of DeltaMask in fine-tuning FMs compared to existing FL techniques.

## 2 Related Work
**Communication-Efficient FL.** Enhancing communication efficiency in FL can be achieved through fast adaptation to downstream tasks by utilizing methods such as adaptive optimizers (Reddi et al., 2021) or efficient client-sampling processes (Chen et al., 2022) that accelerate the convergence rate and, consequently, minimize the data transmission requirement. Alternatively, model updates can be compressed using gradient compression strategies like sparsification (Lin et al., 2020; Aji & Heafield, 2017), quantization (Vargaftik et al., 2022; 2021; Kostopoulou et al., 2021), and low-rank approximation (Mohtashami et al., 2022; Mozaffari et al., 2022), which reduce the volume of data transmitted during the learning process by shrinking the size of updates communicated in each round. Likewise, training masks over densely initialized networks has been explored to provide communication efficiency between client and server (Isik et al., 2023b; Vallapuram et al., 2022). In HideNseek (Vallapuram et al., 2022), binary masks were extracted using the piece-wise sign function over the client's mask scores prior to transmission. Alternatively, in FedPM (Isik et al., 2023b), stochastic mask training was utilized to sample binary masks from locally trained mask probabilities using a Bernoulli distribution, which were then transmitted to the server. To reduce the bitrate below 1 bpp, FedPM (Isik et al., 2023b) employs arithmetic coding (Rissanen & Langdon, 1979) to encode masks based on the sparsity level (frequency of activations). However, arithmetic coding is characterized by computational complexity and intricate implementation, while its efficacy is limited due to the stochasticity of mask sampling, which leads to inconsistent mask activations. In contrast, our approach improves stochastic mask training over pre-trained FMs, introducing a novel lightweight communication protocol that employs probabilistic filters by leveraging the sparsity in subsequent mask changes. Alongside (Isik et al., 2023b), we align DeltaMask with gradient compression baselines, such as EDEN (Vargaftik et al., 2022) and DeepReduce (Kostopoulou et al., 2021), which operate in a similar bitrate regime (≈ 1 bpp).

**Masking Neural Networks.** Masks learned using gradient descent on pre-trained models can yield subnetworks with performance on par with conventional fine-tuning for various downstream tasks, as first demonstrated by (Mallya et al., 2018). A similar observation was evident in (Zhao et al., 2020), where masking was utilized on language models. Following the Lottery Ticket Hypothesis (Frankle & Carbin, 2019), the same idea was expanded to randomly initialized densely connected networks (Zhou et al., 2020; Ramanujan et al., 2020; Aladago & Torresani, 2021), where mask training was shown to act as an alternative form of weight training. These concepts were applied to FL in (Li et al., 2021a; Vallapuram et al., 2022; Isik et al., 2023b) to deal with different challenges in FL, such as personalization, communication efficiency, and privacy. Compared to these works, our masking method focuses on pre-trained foundation models and further reduces the required bitrate below the 1 bpp threshold by communicating the minimal information needed to reconstruct the clients' masks through probabilistic filters (Graf & Lemire, 2022).
## 3 Methodology

## 3.1 Preliminaries

In this section, we introduce the fundamental concepts utilized in DeltaMask: (i) probabilistic filters for efficient encoding of clients' mask updates, and (ii) stochastic mask training, employed to train probabilistic masks on distributed clients and aggregate them on the server side.

Figure 2: Overview of DeltaMask: Discrepancies between received and learned masks are ranked using relative entropy. Essential updates are selected via topκ, hashed, and compressed into a grayscale image. The server reconstructs clients' masks through membership queries and updates the global mask using Bayesian aggregation.

**Probabilistic Filters.** Probabilistic filters are data structures that map a universe of keys, denoted as $\mathcal{U}$, of varying bit lengths, to fixed-size bit values, thereby compacting real-world data representations effectively. They achieve this by using hash functions to transform and store data in a uniformly distributed array, known as the fingerprint array $\mathcal{H}$. This compact representation $\mathcal{H}$ facilitates efficient membership checking, with an adjustable rate of false positives - where a non-member might be incorrectly identified as a member - while ensuring zero false negatives. There are multiple variations of probabilistic filters; we focus on binary fuse filters (BFuse) (Graf & Lemire, 2022), which are known for their exceptional space efficiency and computational effectiveness. These filters offer a space efficiency of 8.62 bits per entry and a low false positive rate (down to $2^{-32}$). Formally, a $\mu$-wise BFuse filter utilizes $\mu$ distinct hash functions $h_j : \{0, 1, \ldots, 2^n - 1\} \rightarrow \{1, 2, \ldots, l\}$, for $j = 1, \ldots, \mu$, where $l$ denotes the size of the fingerprint array $\mathcal{H}$. Let $f : \mathbb{N} \rightarrow \{0, 1, \ldots, 2^n - 1\}$ be the fingerprint generation function, mapping each key to an $n$-bit value. For a given set of keys $\mathcal{U}$, we can compute the fingerprint array $\mathcal{H}$ as:

$$\mathcal{H}=\bigcup_{k\in\mathcal{U}}\phi(k)=\bigcup_{k\in\mathcal{U}}\left(\bigcup_{j=1}^{\mu}\{h_{j}(f(k))\}\right)\tag{1}$$

Here, $\phi(k)$ computes the set of $\mu$ locations in $\mathcal{H}$ for each key $k$ in $\mathcal{U}$. Once $\mathcal{H}$ is constructed, we can perform a membership check as:

$$\mathrm{Member}(x)={\begin{cases}\mathrm{true},&\bigoplus_{j=1}^{\mu}\mathcal{H}\left[h_{j}(f(x))\right]=f(x)\\\mathrm{false},&\mathrm{otherwise}\end{cases}}\tag{2}$$

where $\bigoplus_{j=1}^{\mu}\mathcal{H}[\cdot]$ represents the bitwise XOR operation performed on the array values of $\mathcal{H}$ indicated by the hash functions $h_j(f(x))$. The Member(·) function returns true if the result of the XOR operation over $\mathcal{H}$ matches the fingerprint $f(x)$, suggesting that $x$ is likely a member of the set, and false in all other occasions. Note that while computing a large number of hashes may seem daunting, not all hashing algorithms are computationally intensive. For example, BFuse filters use MurmurHash3 (Appleby, 2016), which is computationally efficient and exhibits exceptional properties for hashing large data structures into space-efficient arrays (e.g., uniform hash distribution and randomness).
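To make the construct-and-query cycle of Eqs. 1-2 concrete, below is a minimal Python sketch of a 3-wise XOR-style filter - a simplified stand-in for the 4-wise BFuse8 we actually use. The segment layout, the use of Python's built-in `hash` in place of MurmurHash3, and all function names are illustrative assumptions, not our implementation.

```python
import random

FP_BITS = 8  # bits per fingerprint entry; the "bpe" knob discussed in Section 5.5

def _positions(key, seed, seg):
    # One slot per segment, mimicking the segmented layout of xor/binary fuse
    # filters; the three positions are always distinct (disjoint segments).
    h = hash((seed, key))
    return [seg * i + ((h >> (16 * i)) % seg) for i in range(3)]

def _fingerprint(key, seed):
    return hash((seed, "fp", key)) % (1 << FP_BITS)

def build_filter(keys, max_tries=100):
    """Build the fingerprint array H so the three slots of every key XOR to f(key).

    `keys` are assumed to be distinct integers (e.g., mask update indexes).
    """
    seg = int(1.23 * len(keys) / 3) + 8  # slack so that peeling succeeds w.h.p.
    size = 3 * seg
    for _ in range(max_tries):
        seed = random.randrange(1 << 30)
        occupants = [[] for _ in range(size)]
        for k in keys:
            for p in _positions(k, seed, seg):
                occupants[p].append(k)
        # Peel: repeatedly detach a key that is the sole occupant of some slot.
        stack, alive = [], set(keys)
        queue = [p for p in range(size) if len(occupants[p]) == 1]
        while queue:
            p = queue.pop()
            if len(occupants[p]) != 1:
                continue
            k = occupants[p][0]
            stack.append((k, p))
            alive.discard(k)
            for q in _positions(k, seed, seg):
                occupants[q].remove(k)
                if len(occupants[q]) == 1:
                    queue.append(q)
        if alive:
            continue  # peeling failed; retry with a fresh seed
        H = [0] * size
        for k, p in reversed(stack):  # assign in reverse peel order
            a, b = (q for q in _positions(k, seed, seg) if q != p)
            H[p] = _fingerprint(k, seed) ^ H[a] ^ H[b]
        return seed, seg, H
    raise RuntimeError("filter construction failed; increase the size slack")

def member(x, seed, seg, H):
    """Eq. 2: zero false negatives; false positives with probability ~2**-FP_BITS."""
    p0, p1, p2 = _positions(x, seed, seg)
    return (H[p0] ^ H[p1] ^ H[p2]) == _fingerprint(x, seed)

if __name__ == "__main__":
    updates = random.sample(range(10**6), 5000)          # e.g. indexes of mask changes
    seed, seg, H = build_filter(updates)
    assert all(member(i, seed, seg, H) for i in updates)  # no false negatives
```

The `FP_BITS` constant plays the role of the bits-per-element knob: larger fingerprints lower the false positive rate at the cost of a larger payload.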
**Stochastic Mask Training.** Unlike the thresholding mechanisms (Li et al., 2021a; Vallapuram et al., 2022; Mozaffari et al., 2022) that create binary masks by clipping mask scores $s \in \mathbb{R}^d$, stochastic mask training (Isik et al., 2023b) involves drawing a binary mask $m \in \{0, 1\}^d$ from the underlying mask probability $\theta$ using the Bernoulli distribution (noted as $m \sim \mathrm{Bern}(\theta)$). To generate $\theta$ from the *unbounded* mask scores $s$, a sigmoid transformation is applied (i.e., $\theta = \mathrm{Sigmoid}(s)$). Hence, $m$ is used during the forward pass to compute the loss $\mathcal{L}(\cdot)$, and $\theta$ is subsequently updated through back-propagation. As the values of $s$ remain *unbounded*, this allows for an unbiased estimation of the true aggregate of the local clients' mask probabilities through Bayesian aggregation (Ferreira et al., 2021). Specifically, it refines the global model at round $t$ in the federated setting by treating the stochastic mask probability $\theta^{g,t}$ as a Beta distribution $\mathrm{Beta}(\alpha^{g,t}, \beta^{g,t})$, with $\alpha^{g,t}$ and $\beta^{g,t}$ initialized to $\lambda_0$. These parameters are updated with the aggregated local binary masks from participating clients (denoted as $\bar{\theta}^{g,t}$), computed as $\alpha^{g,t} = \alpha^{g,t-1} + \bar{\theta}^{g,t}$ and $\beta^{g,t} = \beta^{g,t-1} + K \cdot \mathbf{1}_d - \bar{\theta}^{g,t}$. The aggregated probability mask is then calculated by:

$$\theta^{g,t}=\frac{\alpha^{g,t}-1}{\alpha^{g,t}+\beta^{g,t}-2},\tag{3}$$

where the division is performed element-wise. For best performance, $\alpha$ and $\beta$ are periodically reset to $\lambda_0 = 1$ at a rate inverse to the participation rate $p$ (Isik et al., 2023b). It is important to note that while the model's weight values remain unchanged, the binary mask $m$ selectively activates neurons by element-wise multiplication with the initialized model weights $w_{\mathrm{init}}$, denoted as $w^{k,t} = m^{k,t} \odot w_{\mathrm{init}}$.

## 3.2 Federated Masked Fine-Tuning Of Self-Supervised Foundation Models

We now present the DeltaMask training pipeline (see Figure 2). First, clients initialize a neural network $p_{w_{\mathrm{init}}}$ with the weight vector $w_{\mathrm{init}} = (w_{\mathrm{init},1}, w_{\mathrm{init},2}, \ldots, w_{\mathrm{init},d}) \in \mathbb{R}^d$ from the pre-trained foundation model. The weight vector $w_{\mathrm{init}}$ is kept fixed and never modified during training. DeltaMask collaboratively learns a probabilistic mask $\theta \in [0, 1]^d$ using stochastic mask training, such that the network with masked weights $\dot{W} = m \odot w_{\mathrm{init}}$ minimizes its error rate on a given downstream task. Specifically, at every federated round $t$, the server samples a set $K_t$ of participants ($|K_t| = K$ out of $N$ clients), which individually train their local probability masks $\theta^{k,t}$ on their locally stored datasets $D_k$, each composed of $|D_k|$ samples. Instead of communicating the stochastic binary mask $m$, we significantly reduce the required bits-per-parameter (bpp) during the training process by solely communicating the subsequent key updates (indicated as the position index set $\Delta$) between the received and trained mask. In particular, we efficiently represent $\Delta$ using probabilistic filters and transmit the fingerprint set $\mathcal{H}$ (see Eq. 1) to the server by means of a single grayscale image. On the server side, reconstruction of the clients' masks $m^{k,t}$ is feasible via a fast membership check using the probabilistic filter (see Eq. 2). These local masks are then aggregated by the server (i.e., by using Eq. 3) to complete the $t$-th round.

**Compressing Mask Updates.** Our approach utilizes a local training scheme for probability masks, where clients aim to learn a binary mask via stochastic mask training. In brief, clients receive a global probability mask $\theta^{g,t-1}$ at round $t$, where each client $k$ performs local training and updates the mask via back-propagation. To satisfy $\theta^{k,t} \in [0, 1]^d$ without clipping, we apply a sigmoid operation over the mask's *unbounded* mask scores $s \in \mathbb{R}^d$. Then, clients can utilize a binary mask $m^{k,t}$, i.e., sampled from $\mathrm{Bern}(\theta^{k,t})$, and aim to minimize $\mathcal{L}(p_{w^{k,t}}, D_k)$ over their locally stored data $D_k$, after which they back-propagate to update their mask scores.
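For concreteness, the sketch below shows one such client-side update in PyTorch. The straight-through estimator used to back-propagate through the Bernoulli sample is a common choice we assume here, and `model_forward` is a hypothetical helper that runs the frozen network with a supplied weight vector; neither is prescribed by the method above.

```python
import torch
import torch.nn.functional as F

def local_mask_step(scores, w_init, model_forward, batch, lr=0.1):
    """One step of stochastic mask training on a client.

    `scores` are the unbounded mask scores s (a tensor with requires_grad=True);
    `w_init` is the frozen pre-trained weight vector; `model_forward(w, x)` is
    an assumed helper evaluating the network with weight vector `w`.
    """
    x, y = batch
    theta = torch.sigmoid(scores)               # theta = Sigmoid(s), in [0, 1]^d
    m = torch.bernoulli(theta.detach())         # m ~ Bern(theta), the binary mask
    # Straight-through estimator: the forward pass uses the hard sample m,
    # while gradients flow through theta into the scores.
    m_st = m + theta - theta.detach()
    logits = model_forward(m_st * w_init, x)    # w = m ⊙ w_init; w_init never changes
    loss = F.cross_entropy(logits, y)
    loss.backward()
    with torch.no_grad():
        scores -= lr * scores.grad              # update the unbounded scores
        scores.grad = None
    return m                                    # binary sample; only its changes are sent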
To enable ultra-low bitrate levels, DeltaMask leverages the inherent sparsity in consecutive mask updates across subsequent federated rounds. For a given round $t$, we deterministically sample a binary mask $m^{g,t-1}$ from the received mask distribution $\mathrm{Bern}(\theta^{g,t-1})$ using a publicly shared *seed*. This ensures uniformity of the generated binary mask among all clients ($m^{g,t-1}_i = m^{g,t-1}_j$ for any clients $i, j \in K$). Instead of communicating $m^{k,t}$ (or a compressed version of it), we catalog the index positions of differences between $m^{g,t-1}$ and $m^{k,t}$ to create a set of index differences, denoted as $\Delta^{k,t}$. As the sparsity of mask updates progressively increases during training, we introduce a top-$\kappa$ ranking that selects $\kappa\%$ of $\Delta^{k,t}$ based on the relative entropy between the received and the updated probability masks. As a result, we bring a notion of importance sampling into the communication scheme, similar to (Isik et al., 2023a; Chatterjee & Diaconis, 2017; Havasi et al., 2018), helping minimize the distributed mean estimation error under low bitrate regimes. It provides essential updates early in training and conveys detailed information of $m^{k,t}$ without significantly increasing the bitrate, owing to the increasing sparsity of mask differences. Using Kullback–Leibler (KL) divergence as a measure of entropy, $\Delta'^{k,t}$ is formally defined as:

$$\Delta'^{k,t}=\underset{\mathrm{KL}(\theta^{k,t},\,\theta^{g,t-1})}{\mathrm{Sort}}\left\{i\,\big|\,m_{i}^{g,t-1}\neq m_{i}^{k,t},\ \forall i\in d\right\}[1:\mathsf{K}],\tag{4}$$

where $d$ is the dimension of the probability mask $m$, and $\mathsf{K}$ represents the number of elements to be retained in the sorted set, determined as $\kappa\%$ of $|\Delta^{k,t}|$. Next, we utilize a 4-wise *binary fuse filter* with 8 bits per entry (noted as BFuse8) to extract a fingerprint array $\mathcal{H}^{k,t}$ from $\Delta'^{k,t}$, following Eq. 1. By doing so, we essentially transition from 32-bit indexes to $\approx$ 8-bit hashed entries. In comparison with other probabilistic data structures (e.g., Bloom filters), *binary fuse* filters offer an exceptional balance between space efficiency and false positive rate, while their considerable computational benefits make them ideal for the resource-constrained devices often used in FL. We further encode the fingerprint set $\mathcal{H}^{k,t}$ into a pseudo grayscale image using a lossless image compression technique $\Psi(\cdot)$. Specifically, we employ PNG-like compression (i.e., DEFLATE) due to its wide optimization for edge devices - both the compression algorithm itself and the associated transmission techniques. This approach leverages possible non-uniform distributions of entries across the fingerprint locations to further reduce the bitrate. Note that while other lossless compression schemes could be used here, their impact on bitrate is minor, since the primary reduction comes from transitioning from $\Delta'^{k,t}$ to $\mathcal{H}^{k,t}$. The resulting image, denoted as $A^{k,t}$, now efficiently encapsulates the mask updates in a visual and compressed format, suitable for transmission to the server.
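The client-side encoding path can then be sketched as follows, reusing `build_filter` from the toy filter above. Here `zlib` (DEFLATE) stands in for the PNG container, and the element-wise Bernoulli KL formula is our assumption for how the ranking of Eq. 4 would be computed; function and variable names are illustrative.

```python
import zlib
import numpy as np

def bernoulli_kl(p, q, eps=1e-8):
    """Element-wise KL( Bern(p) || Bern(q) )."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def encode_update(m_server, m_client, theta_client, theta_server, kappa=0.8):
    """Eq. 4 + Eq. 1 + Psi: rank changed indexes by KL, keep top-kappa,
    pack them into a fingerprint array, and compress the result."""
    delta = np.flatnonzero(m_server != m_client)            # Delta^{k,t}
    kl = bernoulli_kl(theta_client[delta], theta_server[delta])
    keep = max(1, int(kappa * len(delta)))                  # K = kappa% of |Delta|
    top = delta[np.argsort(-kl)[:keep]]                     # Delta'^{k,t}
    seed, seg, H = build_filter([int(i) for i in top])      # fingerprint array H^{k,t}
    payload = zlib.compress(np.asarray(H, dtype=np.uint8).tobytes())
    return seed, seg, payload                               # ~8 bits/entry before DEFLATE
```

Because the number of changed indexes shrinks as training converges, the payload shrinks with it, which is where the sub-0.1 bpp regime comes from.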
**Bayesian Aggregation of Compressed Masks.** Once the local training at round $t$ is completed, the server needs to form a new global probability mask from the received clients' grayscale images $A^{k,t}$. Specifically, for each client $k$, the server first decompresses the grayscale image to extract $\mathcal{H}^{k,t}$ using $\Psi^{-1}(A^{k,t})$ and reconstructs the client's *original* probabilistic filter, BFuse8$_k$. The indexes of updates from the prior server mask $m^{g,t-1}$ for client $k$ can now be estimated via a membership query across all possible indexes of $m^{g,t-1}$, as follows:

$$\hat{\Delta}'^{k,t}=\{i\mid\mathrm{Member}\left(i\right)=\mathrm{true},\forall i\in d\}\tag{5}$$

The client's stochastic binary sample mask $m^{k,t}$ from $\mathrm{Bern}(\theta^{k,t})$ can be constructed by a simple "*bit-flip*" of $m^{g,t-1}$ in the positions derived from $\hat{\Delta}'^{k,t}$. Now, the server can compute the estimated aggregated probability mask $\bar{\theta}^{g,t} = \frac{1}{K}\sum_{k\in K} m^{k,t}$, which is an unbiased estimation of the underlying probability mask $\theta^{g,t} = \frac{1}{K}\sum_{k\in K} \theta^{k,t}$, using (Ferreira et al., 2021) or a similar strategy. Furthermore, DeltaMask has a bounded estimation error (proof given in Appendix B), defined as:

$$\mathbb{E}_{M^{k,t}\sim\mathrm{Bern}(\theta^{k,t})\,\forall k\in K}\left[\left\|\theta^{g,t}-\bar{\theta}^{g,t}\right\|_{2}^{2}\right]\leq\frac{d}{4K}\tag{6}$$
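On the server side, the decoding path of Eq. 5 and the mask reconstruction can be sketched as below, again reusing the toy `member` check; `m_server` is assumed to be an integer NumPy array, and all names are illustrative rather than our actual implementation.

```python
import zlib
import numpy as np

def decode_client_mask(m_server, seed, seg, payload):
    """Eq. 5: query every index i in d, then bit-flip the shared server mask."""
    H = list(np.frombuffer(zlib.decompress(payload), dtype=np.uint8))
    flipped = [i for i in range(len(m_server)) if member(i, seed, seg, H)]
    m_client = m_server.copy()
    m_client[flipped] ^= 1       # rare false positives appear as extra bit-flips
    return m_client

def estimate_aggregate(m_server, client_payloads):
    """Unbiased estimate (1/K) * sum_k m^{k,t}, fed into Bayesian aggregation."""
    masks = [decode_client_mask(m_server, s, g, p) for (s, g, p) in client_payloads]
    return np.mean(masks, axis=0)
```

The filter's false positives are exactly the independent bit-flip events whose effect on the estimate is bounded in Eq. 6.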
In contrast to related approaches, such as FedPM (Isik et al., 2023b) and HideNseek (Vallapuram et al., 2022), our method enables better control over model generalization versus bitrate (bpp) during the FL training stage. Specifically, HideNseek transmits raw mask updates, and FedPM uses arithmetic encoding to further reduce the bitrate; thus, both methods' control over the bitrate vs. model generalization trade-off depends on the clients' mask structure (i.e., the frequency of ones and zeros). In contrast, DeltaMask allows finer control of this trade-off by adjusting $\kappa$ and the bits per element (bpe) of the probabilistic filters. While we primarily focus on 8 bits per entry in our probabilistic filters, we provide an analysis of bpe in Section 5.5. We also provide the complete procedure in Algorithm 1.

## 3.3 Weight Initialization

The neural network $p_{w_{\mathrm{init}}}$ is initialized using weights $w_{\mathrm{init}} = (w_{\mathrm{init},1}, w_{\mathrm{init},2}, \ldots, w_{\mathrm{init},d}) \in \mathbb{R}^d$ derived from a pre-trained foundation model; yet, the classification head for downstream tasks is randomly initialized. This means that while the pre-trained backbone offers high-quality features useful across various tasks, the randomly initialized classifier head significantly influences the model's overall performance. Prior research has sampled weights from a uniform distribution around the Kaiming initialization to find highly performing subnetworks in randomly initialized networks (Isik et al., 2023b; Zhou et al., 2020; Ramanujan et al., 2020). However, as we focus on pre-trained models, we allow the classification head to adapt during a single round of linear probing, where the rest of the model remains frozen. This yields more stable results and rapid convergence. For a fair comparison, we employ identical weight initialization methods across all considered baselines. We also investigate scenarios with extremely low bitrates, where linear probing is not feasible, in Appendix C.6.

**Algorithm 1** DeltaMask algorithm. We provide the Bayesian Aggregation of compressed masks in Appendix A.

1: Server initializes global model G with pre-trained model weights $w_{\mathrm{init}}$.
2: Server initializes mask weights $\theta^{g,0} \in \mathbb{R}^d$ and Beta priors $\alpha^{g,0} = \beta^{g,0} = \lambda_0$.
3: **for** $t = 1, \ldots, R$ **do**
4: &nbsp;&nbsp;Randomly select K clients to participate in round $t$
5: &nbsp;&nbsp;**for** each client $k \in K$ **in parallel do**
6: &nbsp;&nbsp;&nbsp;&nbsp;Sample binary server mask $m^{g,t-1} \sim \mathrm{Bern}_{t-1}(\theta^{g,t-1})$
7: &nbsp;&nbsp;&nbsp;&nbsp;$\theta^{k,t} \leftarrow$ ClientUpdate($\theta^{g,t-1}$)
8: &nbsp;&nbsp;&nbsp;&nbsp;Sample binary mask $m^{k,t} \sim \mathrm{Bern}(\theta^{k,t})$
9: &nbsp;&nbsp;&nbsp;&nbsp;$\Delta'^{k,t} \leftarrow \mathrm{Sort}\{i \mid m^{k,t}_i \neq m^{g,t-1}_i\}_{i\in d}\,[1:\mathsf{K}]$ &nbsp;&nbsp;// See Equation 4
10: &nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{H}^{k,t} \leftarrow \bigcup_{i\in\Delta'^{k,t}} \phi(i)$ &nbsp;&nbsp;// See Equation 1
11: &nbsp;&nbsp;&nbsp;&nbsp;PNG$^{k,t} \leftarrow \Psi(\mathcal{H}^{k,t})$
12: &nbsp;&nbsp;**end for**
13: &nbsp;&nbsp;**for** each client $k \in K$ **do**
14: &nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{H}^{k,t} \leftarrow \Psi^{-1}$(PNG$^{k,t}$)
15: &nbsp;&nbsp;&nbsp;&nbsp;$\hat{\Delta}'^{k,t} \leftarrow \{i \mid \mathrm{Member}(i) = \mathrm{true}\}_{i\in d}$ &nbsp;&nbsp;// See Equation 5
16: &nbsp;&nbsp;&nbsp;&nbsp;$m^{k,t} \leftarrow m^{g,t-1}\ \mathrm{XOR}\ F$ &nbsp;&nbsp;// F is 1 in all positions of $\hat{\Delta}'^{k,t}$ and 0 otherwise
17: &nbsp;&nbsp;**end for**
18: &nbsp;&nbsp;$\theta^{g,t} \leftarrow$ BayesAgg($\{m^{k,t}\}_{k\in K}, t, \rho$)
19: **end for**
20:
21: **procedure** ClientUpdate($\theta$)
22: &nbsp;&nbsp;**for** epoch $e = 1, 2, \ldots, E$ **do**
23: &nbsp;&nbsp;&nbsp;&nbsp;**for** batch $b \in D_k$ **do**
24: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Sample a binary mask $m \sim \mathrm{Bern}(\theta)$
25: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\theta \leftarrow \theta - \eta\nabla_{\theta}\mathcal{L}_{CE}\big(y,\ p_{m\odot w_{\mathrm{init}}}(y|x_b)\big)$
26: &nbsp;&nbsp;&nbsp;&nbsp;**end for**
27: &nbsp;&nbsp;**end for**
28: &nbsp;&nbsp;**return** $\theta$
29: **end procedure**

Figure 3: Performance evaluation of DeltaMask **(Ours)** in terms of average bitrate (bits-per-parameter) during FL training using Dir(10) over classes (Cp≈1.0 / **IID settings**) for CLIP ViT-B/32. Federated parameters are set to N=30, R=100, ρ=1, and E=1. Detailed performance metrics, including comparison with Linear Probing and Fine-tuning, can be found in Table 2 of Appendix C.2.

## 4 Experiment Setup

**Datasets and Models.** We conduct experiments across 8 diverse image classification datasets, namely CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), EMNIST (Cohen et al., 2017), Fashion-MNIST (Xiao et al., 2017), EuroSAT (Helber et al., 2017), Food-101 (Bossard et al., 2014), and Cars196 (Krause et al., 2013). Here, we solely focus on classification tasks, enabling direct comparisons with similar approaches (Kostopoulou et al., 2021; Isik et al., 2023b; Li et al., 2021a; Vargaftik et al., 2022; 2021); however, it is crucial to note that DeltaMask does not impose restrictions on the underlying downstream task. We utilize popular ViT architectures pre-trained in a self-supervised manner, such as CLIP (Radford et al., 2021) and DINOv2 (Oquab et al., 2023), where we learn a mask for the last five transformer blocks and keep all prior blocks frozen, similar to (Zhao et al., 2020). In all experiments with CLIP, we use CLIP ViT-B/32, unless stated otherwise. Apart from ViT architectures, we also evaluate a convolution-based model, namely ConvMixer-768/32 (Trockman & Kolter, 2022), where the mask is learned for the last five convolutional blocks, similar to ViT. We use pre-trained weights from HuggingFace for each backbone architecture and report results averaged over three runs.

**FL Setup.** We performed experiments using Flower (Beutel et al., 2020). In our data splitting procedure, we utilized a Dirichlet distribution over classes, denoted as Dir(a), where a refers to the distribution concentration, following (Li et al., 2021b). For IID experiments, we fix a=10, essentially setting the class distribution per client (referred to as Cp) to ≈ 1.0 (examples from all classes are available across clients), while for non-IID settings, we set a=0.1, which results in Cp ≈ 0.2. In each round, clients perform a single local training step (E=1), and we use a cosine scheduler for the topκ mechanism starting from κ=0.8. Due to limited space, we provide additional details of the experimental setup in Appendix C.1.
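A common way to realize such a Dir(a) label split is sketched below, following the convention of (Li et al., 2021b); the rounding of per-client proportions is our own assumption.

```python
import numpy as np

def dirichlet_split(labels, n_clients, alpha, rng=None):
    """Partition sample indexes across clients with a per-class Dirichlet draw.

    Small alpha (e.g. 0.1) concentrates each class on few clients (non-IID);
    large alpha (e.g. 10) spreads every class near-uniformly (close to IID).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(alpha * np.ones(n_clients))   # class share per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

# e.g. shards = dirichlet_split(train_labels, n_clients=30, alpha=0.1)
```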
**Baselines.** We evaluate DeltaMask in terms of accuracy, bitrate (bits-per-parameter), computational complexity, and the total volume of communicated data between client and server. As baselines, we include both fine-tuning and linear probing, where the latter involves training a classifier on top of a frozen backbone architecture. This establishes an upper performance bound for all remaining baselines. From the domain of gradient compression techniques, we incorporate EDEN (Vargaftik et al., 2022), DRIVE (Vargaftik et al., 2021), QSGD (Alistarh et al., 2017), and FedCode (Khalilian et al., 2023) into our evaluation, which operate in a close bitrate regime (≈ 1 bpp). Additionally, we consider DeepReduce (Kostopoulou et al., 2021) as a baseline owing to its analogous use of a Bloom filter-based compressor - enabling a direct comparison with BFuse filter performance. From masking strategies within FL, we assess DeltaMask by comparing it against the threshold-based FedMask (without initial pruning) and the stochastic masking FedPM. We use a fixed number of rounds across all baselines to facilitate a direct comparison of data transfer volumes for fine-tuning FMs, as inferred from the reported bitrates (lower is better).

## 5 Results

## 5.1 Main Empirical Findings

For ease of reading, we provide a concise list of our empirical findings below, each linked to its related subsection.

- **Section 5.2**: DeltaMask offers significant bitrate reductions - up to 9-fold and 6-fold decreases in IID and non-IID settings, respectively - while maintaining accuracy on par with prior approaches.
- **Section 5.3**: DeltaMask excels in environments with strict data transmission limits, achieving the highest accuracy per total bit transmitted compared to all considered methods. Furthermore, its computational efficiency rivals traditional gradient compression schemes, exhibiting similar encode and decode times.
- **Section 5.4**: Our approach is model-agnostic, effective across various architectures, and performs exceptionally well with large models, such as ViT-L/14.
- **Section 5.5**: By adjusting κ and the bpe of filters, DeltaMask enables a simple, yet effective control over the bitrate versus model generalization trade-off.

## 5.2 Bitrate-Accuracy Trade-Off

**IID Data Split.** Here, we focus on an IID data distribution, with the number of clients (N) set to 30 and full client participation (ρ=1). As depicted in Figure 3, DeltaMask achieves significant reductions in communication costs compared to the considered baselines - consistently across all datasets. Among the baselines, EDEN requires the least bandwidth, while FedPM attains the highest accuracy; nevertheless, DeltaMask reliably matches the accuracy of FedPM. This notable improvement in bitrate over FedPM indicates that mask updates entail significant overhead. Transmitting only essential information via binary fuse filters leads to considerable reductions in bpp (up to approximately 9× less) without compromising model accuracy.
Compared to DeepReduce - which utilizes Bloom filters to transmit the updates - our method underscores the importance of accurate mask reconstruction, as Bloom filters are prone to a higher false positive rate for the same number of hash functions and bits per entry. Further experimental results under conditions of low client participation in IID settings are presented in Appendix C.2.

**Non-IID Data Split.** We now evaluate in a more realistic federated setting, where clients' data follow a non-IID distribution using Dir(0.1) over classes (Cp ≈ 0.2). Furthermore, the number of clients (N) is set to 30, while we consider partial participation with ρ=0.2 (meaning that in each round ρ · N = 6 clients are randomly selected). This represents a challenging and realistic scenario, where clients have limited network resources and their data generation processes differ drastically, leading to different data distributions.

Figure 4: Performance evaluation of DeltaMask **(Ours)** in terms of average bitrate (bits-per-parameter) during FL training using Dir(0.1) over classes (Cp ≈ 0.2 / **non-IID settings**) for CLIP ViT-B/32. Federated parameters are set to N=30, R=300, ρ=0.2, and E=1. Detailed performance metrics, including comparison with Linear Probing and Fine-tuning, can be found in Table 3 of Appendix C.3.

From Figure 4, we notice similar gains over the baselines as in the IID experiments presented in Figure 3, in that DeltaMask can maintain a learning procedure that results in a better generalizable model, despite achieving up to a 6-fold reduction in communication cost. Furthermore, the Bayesian aggregation mechanism, as presented in Eq. 3, is pivotal in achieving high accuracy when ρ < 1 under non-IID settings; this is evident from the increase in performance of DeepReduce compared to FedMask, which performs poorly across all datasets. Note that in the IID setting with full client participation, this behavior was reversed. We provide additional experiments under a similar non-IID setting with full client participation in Appendix C.3, where the superiority of DeltaMask in terms of bitrate reduction is evident.

## 5.3 Data Volume & Computational Cost Improvements

We evaluate the impact of DeltaMask on computational resources at both client and server levels, as well as the communication efficiency relative to the total data volume transmitted. For this, we perform experiments using CLIP on CIFAR-100 with N=10, and measure the encoding and decoding times for various gradient compression schemes: EDEN, DRIVE, and FedCode. Additionally, DeepReduce is included for comparison with DeltaMask in terms of efficiency against Bloom-based compression. From masking approaches, we include FedMask and FedPM, omitting their arithmetic encoding step to simplify computational complexity and ensure comparable execution times. All tests are conducted on a CPU, excluding aggregation in decoding time measurements.

Figure 5: Performance evaluation of DeltaMask **(Ours)** in terms of (a) data volume and (b) encoding/decoding time (with baselines), required for CLIP ViT-B/32 to reach within 1% of peak performance on CIFAR-100. Data volume is normalized over the full fine-tuning data size. Panels: (a) relative data volume against fine-tuning size; (b) encode/decode run time on CPU.

Table 1: Evaluation of DeltaMask using Dir(10.0) over classes (Cp≈1.0 / IID settings) across architectures and pre-training strategies.

| Metric | CLIP ViT-B/32 | CLIP ViT-L/14 | DINOv2-Base | DINOv2-Small | ConvMixer-768/32 |
|---|---|---|---|---|---|
| Fine-tuning | 77.35 ± 0.009 | 89.07 ± 0.012 | 75.01 ± 0.007 | 65.55 ± 0.019 | 78.52 ± 0.009 |
| DeltaMask (Ours) | 75.82 ± 0.023 | 89.48 ± 0.031 | 73.36 ± 0.027 | 63.01 ± 0.033 | 75.31 ± 0.021 |
| Avg. bpp | 0.207 ± 0.001 | 0.225 ± 0.002 | 0.197 ± 0.001 | 0.214 ± 0.001 | 0.251 ± 0.001 |
Data volume is normalized against the full fine-tuning size, and we report the necessary volume to reach within 1% of peak accuracy, effectively combining communication efficiency with convergence speed analysis. Among the evaluated methods in Figure 5, FedCode is the most communication-efficient in terms of data volume; yet, it suffers from longer encoding times and has the lowest model performance across baselines. Additionally, we notice that DeepReduce, utilizing a Bloom-based compressor, struggles with scalability due to longer execution times; in contrast, DeltaMask offers significant improvements in filter construction and query times. FedMask and FedPM offer a compromise between data volume and execution time, with FedPM leading in accuracy among all approaches. Surprisingly, DeltaMask, while using slightly more data than FedCode, provides quicker encoding, critical for devices with limited resources, and matches the high accuracy of FedPM with significantly less data communicated. This positions DeltaMask as an effective choice for environments with computational and communication constraints. To further emphasize this point, we perform an analysis on multiple common edge devices in Appendix C.5.

## 5.4 Generalization Across Neural Architectures And Pre-Training Strategies

Next, we evaluate DeltaMask's ability to work across various neural architectures pre-trained in different self-supervised manners. We train masks for downstream task adaptation in a communication-constrained FL environment. For this, we perform experiments with N=10 on additional (larger) ViT architectures, namely CLIP-Large and DINOv2-Large, as well as a pure convolution-based architecture, ConvMixer-768/32, on CIFAR-100 as a downstream classification task. In all experiments, we mask the last 5 blocks, as discussed in Section 4. From Table 1, DeltaMask demonstrates robust adaptability across diverse pre-trained architectures in an FL setup with communication constraints. Notably, DeltaMask's performance on large ViT architectures yields accuracies near those of fine-tuning, with CLIP ViT-L/14 slightly surpassing it. This is significant, considering the communication efficiency depicted by the average bitrate, which remains close to 0.2 bpp across all architectures. ConvMixer-768/32 also adapts well with DeltaMask, showing a modest accuracy reduction while meeting communication constraints. These results reinforce our method's suitability across diverse architectures, allowing for communication-efficient downstream task adaptation of FMs in a federated setting.

Figure 6: Impact of the topκ mechanism and probabilistic filter choice on DeltaMask performance. Experiments performed on CIFAR-100 using Dir(10) over classes (Cp≈1.0 / IID settings). Federated parameters are set to N=10, R=100, ρ=1, and E=1.

## 5.5 Adjusting Bitrate In DeltaMask

Now, we ablate fundamental components of DeltaMask: the mechanism sorting the positions of mask update indexes and our choice of probabilistic filter, assessing the impact of these components on the model's final accuracy and bitrate. For this, we conduct experiments using CLIP ViT-B/32 with N=10 under full participation. In Figure 6a, we compare our entropy-based topκ sorting with a naive random sampling mechanism. We notice a consistent gap in performance between these approaches, underscoring the pivotal role of importance sampling in achieving generalization, similar to (Isik et al., 2023a).
Surprisingly, increasing κ does not linearly enhance accuracy, with the best results observed at κ=0.8, beyond which performance diminishes. This suggests that our topκ approach effectively filters out noise inherent in the stochastic binary mask sampling mechanism by prioritizing updates with higher certainty (i.e., higher probability), while also benefiting from a reduced bitrate due to transmitting less data. Lastly, we evaluate the performance of various probabilistic filters, focusing on how variations in bits-per-entry (bpe), ranging from 8 to 32 bits, affect the false positive rate. Our analysis includes binary fuse filters (BFuse) and XoR filters (Graf & Lemire, 2020), the latter operating under the same foundational principles but being slightly less space-efficient. Figure 6b demonstrates that BFuse filters generally surpass XoR filters in reducing bitrate without compromising model accuracy, a consistency observed across all experiments. More importantly, we demonstrate that DeltaMask enables an adjustable bitrate based on the bpe selection of the probabilistic filter, offering a potential solution to the resource heterogeneity among clients in FL.

## 6 Conclusions

We introduce DeltaMask, an FL technique for efficiently fine-tuning FMs under low bitrate constraints by utilizing stochastic masking instead of conventional fine-tuning and leveraging probabilistic filters for communicating subsequent clients' mask updates. Our evaluation demonstrates DeltaMask's effectiveness across a broad range of datasets and FMs, achieving significant communication reductions with performance similar to traditional fine-tuning. Beyond communication efficiency, DeltaMask can extend to offer personalized models in FL, while it can be expanded to adapt a single FM to multiple tasks, each with its uniquely learned masks. Additionally, the hashing operations of probabilistic filters and their false positive rate - interpreted as a *bit-flipping* error in our masks - can potentially enhance privacy in FL. We believe these are valuable directions for future research.

## References

Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*. Association for Computational Linguistics, 2017. doi: 10.18653/v1/d17-1045. URL https://doi.org/10.18653%2Fv1%2Fd17-1045.

Maxwell Mbabilla Aladago and Lorenzo Torresani. Slot machines: Discovering winning combinations of random weights in neural networks, 2021.

Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. Qsgd: Communication-efficient sgd via gradient quantization and encoding, 2017.

Austin Appleby. Murmurhash3. https://github.com/aappleby/smhasher/wiki/MurmurHash3, 2016. Accessed: 13/11/2023.

Daniel J Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Titouan Parcollet, and Nicholas D Lane. Flower: A friendly federated learning research framework. *arXiv preprint arXiv:2007.14390*, 2020.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision*, 2014.

Sourav Chatterjee and Persi Diaconis. The sample size required in importance sampling, 2017.

Wenlin Chen, Samuel Horvath, and Peter Richtarik. Optimal client sampling for federated learning, 2022.

Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. Emnist: an extension of mnist to handwritten letters, 2017.
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters, 2023.

Paulo Abelha Ferreira, Pablo Nascimento da Silva, Vinicius Gottin, Roberto Stelling, and Tiago Calmon. Bayesian signsgd optimizer for federated learning. *Advances in Neural Information Processing Systems*, 34, 2021.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks, 2019.

Thomas Mueller Graf and Daniel Lemire. Xor filters: Faster and smaller than bloom and cuckoo filters. *ACM Journal of Experimental Algorithmics*, 25:1–16, March 2020. ISSN 1084-6654. doi: 10.1145/3376122. URL http://dx.doi.org/10.1145/3376122.

Thomas Mueller Graf and Daniel Lemire. Binary fuse filters: Fast and smaller than xor filters. *ACM J. Exp. Algorithmics*, 27, mar 2022. ISSN 1084-6654. doi: 10.1145/3510449. URL https://doi.org/10.1145/3510449.

Marton Havasi, Robert Peharz, and José Miguel Hernández-Lobato. Minimal random code learning: Getting bits back from compressed model parameters, 2018.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification, 2017.

Berivan Isik, Francesco Pase, Deniz Gunduz, Sanmi Koyejo, Tsachy Weissman, and Michele Zorzi. Communication-efficient federated learning through importance sampling, 2023a.

Berivan Isik, Francesco Pase, Deniz Gunduz, Tsachy Weissman, and Michele Zorzi. Sparse random networks for communication-efficient federated learning, 2023b.

Saeed Khalilian, Vasileios Tsouvalas, Tanir Ozcelebi, and Nirvana Meratnia. Fedcode: Communication-efficient federated learning via transferring codebooks, 2023.

Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtarik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. In *NIPS Workshop on Private Multi-Party Machine Learning*, 2016. URL https://arxiv.org/abs/1610.05492.

Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, Alexandros Ntoulas, and Panos Kalnis. Deepreduce: A sparse-tensor communication framework for distributed deep learning, 2021.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *2013 IEEE International Conference on Computer Vision Workshops*, pp. 554–561, 2013. doi: 10.1109/ICCVW.2013.77.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. *CS 231N*, 7(7):3, 2015.

Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, and Hai Li. Lotteryfl: Personalized and communication-efficient federated learning with lottery ticket hypothesis on non-iid datasets, 2020.

Ang Li, Jingwei Sun, Xiao Zeng, Mi Zhang, Hai Li, and Yiran Chen. Fedmask: Joint computation and communication-efficient personalized federated learning via heterogeneous masking. In *SenSys '21*, pp. 42–55, New York, NY, USA, 2021a. Association for Computing Machinery. ISBN 9781450390972. doi: 10.1145/3485730.3485929. URL https://doi.org/10.1145/3485730.3485929.
Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study, 2021b.

Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J. Dally. Deep gradient compression: Reducing the communication bandwidth for distributed training, 2020.

Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights, 2018.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Aarti Singh and Jerry Zhu (eds.), *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics*, volume 54 of *Proceedings of Machine Learning Research*, pp. 1273–1282. PMLR, 20–22 Apr 2017. URL https://proceedings.mlr.press/v54/mcmahan17a.html.

Amirkeivan Mohtashami, Martin Jaggi, and Sebastian U. Stich. Masked training of neural networks with partial gradients, 2022.

Hamid Mozaffari, Virat Shejwalkar, and Amir Houmansadr. Frl: Federated rank learning, 2022.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.

Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.

Alec Radford, Jong Wook Kim, Chris Hallacy, A. Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *ICML*, 2021.

Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari. What's hidden in a randomly weighted neural network?, 2020.

Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H. Brendan McMahan. Adaptive federated optimization, 2021.

J. Rissanen and G. G. Langdon. Arithmetic coding. *IBM Journal of Research and Development*, 23(2):149–162, 1979. doi: 10.1147/rd.232.0149.

Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, and Richard E Turner. Fit: Parameter efficient few-shot transfer learning for personalized and federated image classification. *arXiv preprint arXiv:2206.08671*, 2022.

Asher Trockman and J. Zico Kolter. Patches are all you need?, 2022.

Anish K. Vallapuram, Pengyuan Zhou, Young D. Kwon, Lik Hang Lee, Hengwei Xu, and Pan Hui. Hidenseek: Federated lottery ticket via server-side pruning and sign supermask, 2022.

Shay Vargaftik, Ran Ben Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, and Michael Mitzenmacher. Drive: One-bit distributed mean estimation, 2021.

Shay Vargaftik, Ran Ben Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben Itzhak, and Michael Mitzenmacher. EDEN: Communication-efficient and robust distributed mean estimation for federated learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 21984–22014. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/vargaftik22a.html.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *CoRR*, abs/1708.07747, 2017.

Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, and Hinrich Schütze. Masking as an efficient alternative to finetuning for pretrained language models. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 2226–2241, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.174. URL https://aclanthology.org/2020.emnlp-main.174.

Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask, 2020.

Weiming Zhuang, Chen Chen, and Lingjuan Lyu. When foundation model meets federated learning: Motivations, challenges, and future directions. *arXiv preprint arXiv:2306.15546*, 2023.

## A Bayesian Aggregation Algorithm

**Algorithm 2** BayesAgg

1: **Inputs:** Clients' updates $\{m^{k,t}\}_{k\in K}$, federated round $t$, and client participation $\rho$.
2: **Output:** Global probability mask $\theta^{g,t}$.
3: **if** $t\ \%\ (\frac{1}{\rho}) = 0$ **then**
4: &nbsp;&nbsp;$\alpha^{g,t-1} = \beta^{g,t-1} = \lambda_0$
5: **end if**
6: $m^{agg,t} \leftarrow \frac{1}{K}\sum_{k\in K} m^{k,t}$
7: $\alpha^{g,t} = \alpha^{g,t-1} + m^{agg,t}$
8: $\beta^{g,t} = \beta^{g,t-1} + K \cdot \mathbf{1} - m^{agg,t}$
9: $\theta^{g,t} = \frac{\alpha^{g,t}}{\alpha^{g,t}+\beta^{g,t}}$
10: **return** $\theta^{g,t}$
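For reference, a NumPy sketch of this aggregation step follows. Since lines 6-8 of Algorithm 2 are ambiguous between the mean and the sum of the client masks, the sketch uses the standard Beta-Bernoulli posterior update (counts of ones and zeros per coordinate), which is consistent with the description around Eq. 3 in the main text; treat it as an interpretation, not a transcription.

```python
import numpy as np

class BayesAgg:
    """Running Beta posterior over the global probability mask (cf. Algorithm 2)."""

    def __init__(self, d, lam0=1.0):
        self.lam0 = lam0
        self.alpha = np.full(d, lam0)
        self.beta = np.full(d, lam0)

    def step(self, client_masks, t, rho):
        if t % max(1, int(1 / rho)) == 0:   # periodic prior reset (lines 3-5)
            self.alpha[:] = self.lam0
            self.beta[:] = self.lam0
        m = np.asarray(client_masks)        # shape (K, d), binary entries
        ones = m.sum(axis=0)                # observed ones per coordinate
        self.alpha += ones
        self.beta += m.shape[0] - ones      # observed zeros per coordinate
        # Posterior mode; with lam0 = 1 this matches Eq. 3 of the main text.
        return (self.alpha - 1) / (self.alpha + self.beta - 2)
```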
At (11), we utilize the fact that $M'^{k,t}_i$ is a Bernoulli variable (meaning $(M'^{k,t}_i)^2 = M'^{k,t}_i$) and introduce the "*bit-flip*" error probability $2^{-p}$ due to the probabilistic filters; thus its expected value is $\mathbb{E}[M'^{k,t}_i] = (1 - 2^{-p})\theta^{k,t}_i + 2^{-p}(1 - \theta^{k,t}_i)$. Finally, in (13), given that the variance of a Bernoulli random variable is maximized when the probability of success is 0.5, and that the flipping process does not change this maximum possible variance (as $2^{-p} \ll 1$ for $p \in \{8, 16, 32\}$), we conclude that the upper bound of the expected squared error is $\frac{d}{4K}$, where $d$ is the number of dimensions and $K$ is the number of clients. It is important to note that our probabilistic filter-based encoding provides the same upper bound on the estimation error as Isik et al. (2023b); yet, it achieves significant reductions in the bpp required for transmitting masks.

## C Additional Experiments

## C.1 Additional Experimental Details

**Training Parameters:** For our experiments, clients completed 1 local epoch per round with a batch size of 64 and the Adam optimizer with a learning rate of 0.1. We adopted Bayesian aggregation, resetting the prior every $1/\rho$ rounds, where $\rho$ is the participation rate (as per Isik et al. (2023b)). In scenarios where $\rho$ is less than 1.0, client selection in each round was randomized. In most experiments, we set $\kappa$ to 0.8, except for those detailed in Figure 6a. We conducted 100 federated rounds for experiments with $\rho=1$ (both IID and non-IID settings) and increased the number of rounds to 200 and 300 for IID and non-IID experiments, respectively, when $\rho \ll 1$. Unless otherwise mentioned, we employed CLIP ViT-B/32 for experiments involving CLIP. We perform 3 independent runs and report the average accuracy on the test set in all our experiments.

**Baselines Configuration:** For FedMask, we set a binary threshold $\tau$ (masking with $m_i=1$ if $\theta_i \geq \tau$, and 0 otherwise) in the range [0.4, 0.6] for IID and [0.0, 0.2] for non-IID experiments, aligning with Isik et al. (2023b). In EDEN, a 1-bit gradient compression scheme was used to match the bitrate (bpp) of the other baselines. Notably, EDEN's compression is model-dependent but yields nearly constant bpp reductions across all experiments. From the DeepReduce compression, we discard the values' compression stage (as we deal with binary masks) and utilize only the Bloom filter-based index compression with the P0-policy (Kostopoulou et al., 2021). Here, binary masks were learned via stochastic mask training (Isik et al., 2023b), ensuring operation near the 1 bpp regime and facilitating a clear comparison with DeltaMask. For our comparison with FedPM, we use identical settings apart from DeltaMask's probabilistic filter compression scheme, to clearly illustrate the benefits of our approach. We conducted our experiments on NVIDIA A10 GPUs on an internal cluster server, using 2 GPUs per run.

## C.2 Additional Experiments in IID Settings

In this section, we present additional experiments conducted under IID settings with varying participation rates ($\rho$). To ensure a fair comparison, we included both Linear Probing, which involves adapting a single linear classifier atop the (frozen) pre-trained model, and full Fine-tuning, wherein only the layers modified in DeltaMask are fine-tuned. In Table 2, apart from reporting the models' accuracies across tasks, we include the average bpp and accuracy across all tasks for a concise comparison.
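For intuition on how sub-1 bpp figures of this kind can arise, the following back-of-envelope sketch estimates the bitrate of encoding only the changed mask positions in an Xor filter. The ≈1.23x space overhead per stored key is the figure commonly reported for Xor filters; the model size and sparsity levels below are hypothetical illustrations, not measured values from our runs.

```python
# Rough bits-per-parameter (bpp) estimate when only the *changed* mask
# positions are stored in an Xor filter. Assumes ~1.23x bits-per-entry
# space overhead (as reported for Xor filters); the sparsity values are
# hypothetical and for illustration only.

def estimate_bpp(num_params: int, num_changed: int, bits_per_entry: int) -> float:
    """Approximate bpp for encoding `num_changed` keys out of `num_params`."""
    filter_bits = 1.23 * bits_per_entry * num_changed  # total filter size in bits
    return filter_bits / num_params

d = 88_000_000  # illustrative parameter count, roughly ViT-B/32 scale
for sparsity in (0.01, 0.02, 0.05):
    print(f"{sparsity:.0%} changed -> ~{estimate_bpp(d, int(sparsity * d), 8):.3f} bpp")
```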
In Table 2, we note that DeltaMask achieves significant reductions in bitrate while maintaining performance on par with Fine-tuning. This is particularly evident in scenarios with $\rho$ less than 1, where DeltaMask's ability to reduce the bitrate without compromising accuracy highlights its effectiveness in federated learning environments with varying levels of client participation.

Table 2: Performance evaluation of DeltaMask **(Ours)** in terms of average bitrate (bits-per-parameter) during FL training using Dir(10) over classes (Cp≈1.0 / **IID settings**) for CLIP ViT-B/32. Federated parameters are set to N=30 and E=1. For ρ < 1, clients are randomly selected.

| Method | CIFAR-10 | CIFAR-100 | SVHN | EMNIST | Fashion-MNIST | EuroSAT | Food-101 | Cars196 | Avg. Acc | Avg. bpp |
|---|---|---|---|---|---|---|---|---|---|---|
| **ρ < 1** | | | | | | | | | | |
| Linear Probing | 92.12 ± 0.007 | 67.23 ± 0.011 | 59.70 ± 0.016 | 89.89 ± 0.008 | 89.05 ± 0.010 | 94.81 ± 0.009 | 67.58 ± 0.014 | 59.87 ± 0.016 | 77.51 | - |
| Fine-tuning | 94.38 ± 0.013 | 76.12 ± 0.019 | 91.88 ± 0.012 | 94.02 ± 0.018 | 92.54 ± 0.009 | 97.61 ± 0.015 | 85.73 ± 0.017 | 66.98 ± 0.011 | 87.48 | 32 |
| FedMask | 85.32 ± 0.033 | 61.38 ± 0.057 | 68.71 ± 0.046 | 81.32 ± 0.024 | 84.32 ± 0.044 | 92.01 ± 0.025 | 62.28 ± 0.037 | 57.12 ± 0.029 | 74.06 | 1.0 |
| EDEN | 87.11 ± 0.006 | 65.89 ± 0.009 | 79.16 ± 0.008 | 86.36 ± 0.006 | 85.21 ± 0.012 | 91.24 ± 0.010 | 69.59 ± 0.012 | 62.07 ± 0.011 | 78.33 | 0.703 |
| DeepReduce | 86.71 ± 0.071 | 64.98 ± 0.091 | 60.32 ± 0.061 | 84.42 ± 0.044 | 84.09 ± 0.057 | 92.37 ± 0.041 | 64.91 ± 0.043 | 55.72 ± 0.078 | 73.61 | 1.123 |
| FedPM | 90.31 ± 0.016 | 74.66 ± 0.019 | 87.03 ± 0.017 | 91.42 ± 0.021 | 89.79 ± 0.013 | 95.57 ± 0.015 | 74.80 ± 0.014 | 62.19 ± 0.017 | 83.22 | 0.946 |
| DeltaMask | 89.52 ± 0.021 | 74.01 ± 0.033 | 86.86 ± 0.024 | 92.27 ± 0.027 | 89.68 ± 0.014 | 94.94 ± 0.019 | 74.09 ± 0.029 | 61.56 ± 0.030 | 82.87 | 0.197 |
| **ρ = 1.0** | | | | | | | | | | |
| Linear Probing | 93.97 ± 0.004 | 74.11 ± 0.009 | 59.26 ± 0.011 | 89.40 ± 0.008 | 89.47 ± 0.005 | 95.35 ± 0.003 | 76.64 ± 0.009 | 61.72 ± 0.012 | 79.99 | - |
| Fine-tuning | 94.50 ± 0.010 | 77.35 ± 0.009 | 92.72 ± 0.012 | 94.89 ± 0.010 | 92.98 ± 0.013 | 98.24 ± 0.011 | 86.72 ± 0.009 | 67.23 ± 0.014 | 88.08 | 32 |
| FedMask | 90.84 ± 0.028 | 70.64 ± 0.057 | 74.32 ± 0.039 | 84.22 ± 0.031 | 88.64 ± 0.029 | 95.09 ± 0.038 | 68.46 ± 0.034 | 61.59 ± 0.039 | 79.23 | 1.0 |
| EDEN | 93.15 ± 0.009 | 72.02 ± 0.010 | 86.67 ± 0.007 | 91.55 ± 0.009 | 90.40 ± 0.012 | 95.34 ± 0.010 | 80.02 ± 0.004 | 63.98 ± 0.008 | 84.14 | 0.691 |
| DeepReduce | 88.17 ± 0.034 | 68.59 ± 0.069 | 62.34 ± 0.056 | 86.92 ± 0.073 | 85.44 ± 0.031 | 94.12 ± 0.043 | 67.92 ± 0.075 | 58.42 ± 0.041 | 76.53 | 1.089 |
| FedPM | 93.58 ± 0.014 | 75.56 ± 0.011 | 88.76 ± 0.013 | 93.45 ± 0.015 | 92.10 ± 0.009 | 96.45 ± 0.019 | 83.45 ± 0.013 | 65.23 ± 0.014 | 86.07 | 0.872 |
| DeltaMask | 93.50 ± 0.019 | 74.82 ± 0.023 | 87.95 ± 0.021 | 92.52 ± 0.019 | 91.27 ± 0.023 | 95.64 ± 0.017 | 82.73 ± 0.024 | 64.94 ± 0.026 | 85.44 | 0.151 |

## C.3 Additional Experiments in Non-IID Settings

In this section, we provide additional experiments performed under non-IID settings, where we varied the participation rate ($\rho$). Similar to C.2, we include both Linear Probing and Fine-tuning for a rigorous evaluation. We report our findings in Table 3, where we also report the average bpp and accuracy across all tasks for a concise comparison of our baselines.

Table 3: Performance evaluation of DeltaMask **(Ours)** in terms of average bitrate (bits-per-parameter) during FL training using Dir(0.1) over classes (Cp≈0.2 / **non-IID settings**) for CLIP ViT-B/32. Federated parameters are set to N=30 and E=1. For ρ < 1, clients are randomly selected.

| Method | CIFAR-10 | CIFAR-100 | SVHN | EMNIST | Fashion-MNIST | EuroSAT | Food-101 | Cars196 | Avg. Acc | Avg. bpp |
|---|---|---|---|---|---|---|---|---|---|---|
| **ρ = 0.2** | | | | | | | | | | |
| Linear Probing | 84.51 ± 0.019 | 49.04 ± 0.022 | 43.16 ± 0.020 | 82.41 ± 0.035 | 86.29 ± 0.024 | 91.63 ± 0.022 | 51.54 ± 0.021 | 47.92 ± 0.038 | 67.06 | - |
| Fine-tuning | 92.59 ± 0.024 | 70.20 ± 0.037 | 87.39 ± 0.036 | 92.00 ± 0.057 | 88.25 ± 0.039 | 95.56 ± 0.029 | 79.38 ± 0.034 | 60.11 ± 0.051 | **83.19** | 32 |
| FedMask | 83.14 ± 0.059 | 51.66 ± 0.119 | 51.78 ± 0.049 | 83.75 ± 0.078 | 85.91 ± 0.073 | 90.05 ± 0.074 | 53.19 ± 0.063 | 51.37 ± 0.105 | 68.86 | 1.0 |
| EDEN | 87.87 ± 0.037 | 64.62 ± 0.106 | 81.06 ± 0.081 | 86.73 ± 0.050 | 86.75 ± 0.055 | 90.22 ± 0.062 | 72.55 ± 0.056 | 58.71 ± 0.034 | 78.56 | 0.715 |
| DeepReduce | 86.07 ± 0.097 | 64.39 ± 0.088 | 82.92 ± 0.071 | 85.14 ± 0.084 | 83.91 ± 0.067 | 86.12 ± 0.117 | 52.92 ± 0.055 | 49.72 ± 0.110 | 73.90 | 1.173 |
| FedPM | 90.70 ± 0.045 | 67.42 ± 0.095 | 87.51 ± 0.079 | 89.77 ± 0.095 | 88.42 ± 0.092 | 93.57 ± 0.067 | 76.80 ± 0.076 | 59.06 ± 0.098 | 81.64 | 0.948 |
| DeltaMask | 90.32 ± 0.083 | 66.90 ± 0.051 | 87.36 ± 0.093 | 89.09 ± 0.047 | 86.91 ± 0.067 | 93.54 ± 0.101 | 76.39 ± 0.086 | 58.52 ± 0.102 | 81.13 | **0.233** |
| **ρ = 1.0** | | | | | | | | | | |
| Linear Probing | 91.46 ± 0.026 | 71.96 ± 0.025 | 46.03 ± 0.017 | 84.57 ± 0.031 | 87.13 ± 0.018 | 92.98 ± 0.016 | 68.70 ± 0.028 | 54.03 ± 0.032 | 74.61 | - |
| Fine-tuning | 93.61 ± 0.048 | 75.49 ± 0.052 | 90.10 ± 0.063 | 93.13 ± 0.037 | 91.06 ± 0.041 | 97.02 ± 0.034 | 84.71 ± 0.013 | 64.93 ± 0.063 | **86.26** | 32 |
| FedMask | 88.42 ± 0.051 | 63.04 ± 0.081 | 64.32 ± 0.073 | 86.41 ± 0.039 | 86.39 ± 0.044 | 91.67 ± 0.031 | 68.04 ± 0.050 | 54.39 ± 0.089 | 75.34 | 1.0 |
| EDEN | 92.14 ± 0.043 | 71.65 ± 0.060 | 86.28 ± 0.057 | 90.87 ± 0.046 | 89.94 ± 0.034 | 93.26 ± 0.035 | 78.79 ± 0.083 | 61.18 ± 0.027 | 83.01 | 0.703 |
| DeepReduce | 87.33 ± 0.052 | 67.19 ± 0.061 | 83.19 ± 0.048 | 85.71 ± 0.082 | 84.52 ± 0.075 | 92.12 ± 0.060 | 69.11 ± 0.092 | 60.31 ± 0.094 | 78.69 | 1.092 |
| FedPM | 92.99 ± 0.045 | 74.34 ± 0.023 | 89.35 ± 0.025 | 92.65 ± 0.098 | 91.33 ± 0.041 | 95.37 ± 0.048 | 83.69 ± 0.076 | 63.65 ± 0.074 | 85.42 | 0.901 |
| DeltaMask | 92.84 ± 0.083 | 73.69 ± 0.051 | 89.01 ± 0.085 | 91.92 ± 0.089 | 91.27 ± 0.055 | 94.54 ± 0.103 | 83.48 ± 0.081 | 63.47 ± 0.096 | 85.03 | **0.191** |

Table 3 reveals a notable improvement in DeltaMask's performance, especially when the participation ratio ρ is less than 1, with only a 2% accuracy difference compared to Fine-tuning. This is a critical observation, since non-IID data distributions coupled with partial client participation closely mirror the conditions of real-world federated settings. Furthermore, our analysis shows that methods using stochastic mask training, such as DeepReduce and FedPM, yield better final model accuracy under non-IID conditions than traditional compression schemes like EDEN or hard-thresholding masking techniques like FedMask. Interestingly, the CLIP ViT-B/32 model excels in non-IID scenarios, underscoring the robust generalization abilities of pre-trained foundation models, which are particularly advantageous in non-IID federated environments. This emphasizes the importance of adapting these models for edge computing, capitalizing on their capability to effectively handle diverse and complex data distributions.
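For completeness, the sketch below shows the standard Dirichlet-based label partitioning that the Dir(0.1)/Dir(10) notation above refers to; it is the common recipe for non-IID federated splits (in the spirit of Li et al. (2021b)), and the exact partitioning code used in our experiments may differ in details.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int, beta: float,
                        seed: int = 0) -> list:
    """Assign sample indices to clients with Dir(beta) class proportions.
    Small beta (e.g., 0.1) yields highly skewed non-IID splits; large beta
    (e.g., 10) approaches an IID split."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(np.full(num_clients, beta))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in zip(client_indices, np.split(idx, cuts)):
            client.extend(shard.tolist())
    return client_indices

labels = np.random.randint(0, 10, size=50_000)  # stand-in for CIFAR-10 labels
parts = dirichlet_partition(labels, num_clients=30, beta=0.1)
print(sorted(len(p) for p in parts)[:5])  # highly unbalanced for beta=0.1
```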
## C.4 Experiments on ImageNet Datasets

In this section, we extend our evaluation to more complex tasks to assess the effectiveness of DeltaMask in fine-tuning FMs in federated settings in a communication-efficient manner on datasets of larger complexity. For this, we perform experiments on Tiny-ImageNet (Le & Yang, 2015) with both CLIP ViT-B/32 and CLIP ViT-L/14. The results, reported in Table 4, showcase that DeltaMask can effectively fine-tune FMs on more complex tasks, such as ImageNet datasets, while maintaining the same efficiency in terms of bpp.

Table 4: Evaluation of DeltaMask using Dir(10.0) over classes (Cp≈1.0 / IID settings) across CLIP architectures on Tiny-ImageNet (Le & Yang, 2015).

| Method | ViT-B/32 Accuracy | ViT-B/32 Avg. bpp | ViT-L/14 Accuracy | ViT-L/14 Avg. bpp |
|---|---|---|---|---|
| Fine-tuning | 86.12 | 32 | 89.02 | 32 |
| FedPM | 84.22 | 0.871 | 87.04 | 0.862 |
| DeltaMask (Ours) | 83.76 | 0.201 | 86.57 | 0.218 |

## C.5 DeltaMask Efficiency on Edge Devices

In this section, we evaluate the runtime resource demands (computation and energy) of our probabilistic filter compression on three popular embedded platforms: NVIDIA Jetson Nano (4GB), Raspberry Pi 4 (4GB), and Coral Dev Board (1GB). These platforms were selected for their widespread use and capability to run machine learning tasks at the edge. To measure energy consumption, we used a Raspberry Pi 4 equipped with a Current/Power Monitor HAT, monitoring each device's energy use with a 0.1 Ohm sampling resistor. Our tests, conducted over 5 runs, record the average runtime (in milliseconds) and energy usage (in nanojoules) for different probabilistic filters with varying bits per entry (8, 16, and 32), as detailed in Table 5.

Table 5: Average energy and latency benchmarking of the considered probabilistic filters across different devices. The CPU execution time (ms) and estimated energy consumption (nJ) per entry are computed over 10M entries.

| Filter | Metric | Raspberry Pi 4 | Coral Dev Board | Jetson Nano |
|---|---|---|---|---|
| Xor8 | CPU Execution Time (ms) | 0.942 ± 0.0165 | 1.682 ± 0.0059 | 0.479 ± 0.0001 |
| Xor8 | Energy Consumption (nJ) | 3.223 ± 0.0023 | 2.826 ± 0.0011 | 2.334 ± 0.0012 |
| Xor16 | CPU Execution Time (ms) | 0.955 ± 0.0250 | 1.683 ± 0.0008 | 0.502 ± 0.0001 |
| Xor16 | Energy Consumption (nJ) | 4.052 ± 0.0032 | 3.580 ± 0.0003 | 3.386 ± 0.0016 |
| Xor32 | CPU Execution Time (ms) | 0.978 ± 0.0278 | 1.701 ± 0.0006 | 0.539 ± 0.0005 |
| Xor32 | Energy Consumption (nJ) | 6.292 ± 0.0021 | 4.732 ± 0.0008 | 4.692 ± 0.0023 |
| BFuse8 | CPU Execution Time (ms) | 0.587 ± 0.0059 | 1.144 ± 0.0035 | 0.289 ± 0.0013 |
| BFuse8 | Energy Consumption (nJ) | 2.045 ± 0.0019 | 1.979 ± 0.0015 | 1.829 ± 0.0023 |
| BFuse16 | CPU Execution Time (ms) | 0.590 ± 0.0066 | 1.183 ± 0.0029 | 0.282 ± 0.0002 |
| BFuse16 | Energy Consumption (nJ) | 3.262 ± 0.0020 | 2.898 ± 0.0017 | 2.157 ± 0.0033 |
| BFuse32 | CPU Execution Time (ms) | 0.612 ± 0.0054 | 1.201 ± 0.0017 | 0.301 ± 0.0002 |
| BFuse32 | Energy Consumption (nJ) | 4.021 ± 0.0026 | 3.771 ± 0.0022 | 3.263 ± 0.0012 |

From the results, we clearly notice that all filter variants demand limited computational resources, both in terms of execution time and energy requirements.
BFuse8 is particularly notable for its efficiency, requiring an average execution time of only 0.673 milliseconds and consuming just 1.95 nanojoules of energy across the considered devices. This underscores the practicality of our probabilistic filter-based compression scheme in federated settings, where devices are often constrained by limited computational capabilities and strict energy budgets. Additionally, our analysis shows that even with an increase in the bits-per-entry (bpe) parameter, the rise in execution time and energy consumption is quite modest. This is particularly noteworthy given the simultaneous improvement in the false positive rate, which scales as $2^{-\mathrm{bpe}}$. This pattern suggests a beneficial trade-off between accuracy and resource utilization, reinforcing the adaptability and effectiveness of our approach in federated learning scenarios that prioritize computational efficiency and energy conservation.

## C.6 Comparing Classifier Heads in DeltaMask

In DeltaMask, we enable the classification head to adapt in a single linear probing round, while freezing the rest of the model. This approach produces more stable outcomes and quicker convergence than previous methods (Isik et al., 2023b; Zhou et al., 2020; Ramanujan et al., 2020) that used Kaiming initialization to identify high-performing subnetworks in randomly initialized networks. Although the classification head typically has fewer parameters, scenarios requiring extremely low bitrates make transmitting even a single round's floating-point weights impractical. In this section, we explore such situations, investigating different alternatives for the classifier layer. Specifically, we replace the linear classifier with a Gaussian Naive Bayes classifier from FiT (Shysheya et al., 2022), specifically *FiT-LDA*. This classifier is data-driven, with a minimal number of learnable parameters (2 floating-point values), making it ideal for our purpose. In our analysis, we utilize CLIP ViT-B/32, masking the last five transformer blocks, and compare DeltaMask-FiT against both a single-round trained linear classifier (DeltaMask-LP) and a Kaiming-initialized (frozen) classifier (DeltaMask-He).

Table 6: Evaluating classifier initialization schemes in **DeltaMask**. Comparing average bitrate and accuracy in FL training using Dir(10) over classes (Cp≈1.0 / **IID settings**) for CLIP ViT-B/32. Federated parameters are set to N=30 and E=1.

| Method | CIFAR-10 | CIFAR-100 | SVHN | EMNIST | Fashion-MNIST | EuroSAT | Food-101 | Cars196 | Avg. Acc | Avg. bpp |
|---|---|---|---|---|---|---|---|---|---|---|
| Fine-tuning | 94.50 ± 0.010 | 77.35 ± 0.009 | 92.72 ± 0.012 | 94.89 ± 0.010 | 92.98 ± 0.013 | 98.24 ± 0.011 | 86.72 ± 0.009 | 67.23 ± 0.014 | 88.08 | 32 |
| DeltaMask-He | 90.28 ± 0.052 | 67.34 ± 0.069 | 84.09 ± 0.063 | 87.32 ± 0.081 | 87.69 ± 0.034 | 93.22 ± 0.073 | 78.05 ± 0.028 | 58.74 ± 0.084 | 80.84 | 0.143 |
| DeltaMask-FiT | 93.42 ± 0.023 | 71.17 ± 0.041 | 86.31 ± 0.039 | 92.09 ± 0.021 | 89.87 ± 0.026 | 95.53 ± 0.019 | 81.71 ± 0.033 | 60.01 ± 0.029 | 83.76 | 0.145 |
| DeltaMask-LP | 93.50 ± 0.019 | 74.82 ± 0.023 | 87.95 ± 0.021 | 92.52 ± 0.019 | 91.27 ± 0.023 | 95.64 ± 0.017 | 82.73 ± 0.024 | 64.94 ± 0.026 | 85.44 | 0.151 |

From Table 6, we notice that DeltaMask-LP outperforms the other initialization methods by over 2% without significantly increasing the bitrate, while FiT can be an effective alternative to Kaiming initialization, increasing accuracy by ≈3%. More importantly, these findings highlight the importance of appropriate classifier layer initialization when fine-tuning foundation models on downstream tasks. However, we demonstrate that a single fine-tuning round of the classifier layer, with the remaining model frozen, is an effective strategy with minimal impact on the communicated bitrate.
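To make the FiT-LDA alternative concrete, here is a minimal sketch of an LDA-style classifier head built from frozen backbone features: per-class means plus a shared covariance. This is only an illustration of the idea; the actual FiT-LDA head of Shysheya et al. (2022) may differ in parameterization and regularization.

```python
import torch

class LDAHead:
    """LDA-style head: class means + shared covariance from frozen features.
    A sketch of the FiT-LDA idea, not the reference implementation."""

    def fit(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Assumes integer labels 0..C-1 so that means[labels] lines up.
        classes = labels.unique()
        self.means = torch.stack([feats[labels == c].mean(0) for c in classes])
        centered = feats - self.means[labels]
        cov = centered.T @ centered / len(feats)
        cov += 1e-4 * torch.eye(feats.shape[1])  # ridge term for stability
        self.prec = torch.linalg.inv(cov)        # shared precision matrix

    def logits(self, feats: torch.Tensor) -> torch.Tensor:
        # Linear discriminant scores: x^T P mu_c - 0.5 mu_c^T P mu_c.
        scores = feats @ self.prec @ self.means.T
        return scores - 0.5 * (self.means @ self.prec * self.means).sum(1)
```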
Review 1:

Summary: This paper studies the federated fine-tuning of foundation models, in particular focusing on the design of fine-tuning algorithms that operate at low bit rates. The motivation for this constraint is to reduce the communication bottleneck of training federated learning systems. The main content of the paper can be summarized as follows.
- In section 3, the authors discuss the methodology, first introducing what probabilistic filters are: these use hash functions to transform and store data in a uniformly distributed array, known as the fingerprints. The stochastic mask training creates binary masks by clipping mask scores, drawing from an underlying mask probability using the Bernoulli distribution.
- The federated masked fine-tuning is then presented. Here, clients first initialize a neural network with a weight vector from the pre-trained foundation model. The weight vector is kept fixed and never modified during training. The algorithm collaboratively trains a probabilistic mask with a loss function that minimizes its error rate on a downstream task.
- Instead of communicating the stochastic binary mask, the paper proposes to reduce the required bits-per-parameter by solely communicating the subsequent key updates between the received and trained mask.
- The paper utilizes a local training scheme for probability masks. It also leverages the inherent sparsity in mask updates during federated rounds.
- Once local training is completed, the server forms a new global probability mask through Bayesian aggregation of the compressed models. In contrast to related approaches, the method enables better control over model generalization.
- In sections 4 and 5, the authors present experiments, discussing:
  - the bit rate vs. accuracy trade-off,
  - data volume and computational cost improvements,
  - the algorithm's ability to work across various architectures,
  - ablation studies, including the performance of various probabilistic filters.

Strengths and Weaknesses:

Strengths: The paper studies a well-motivated problem, discussing efficient fine-tuning algorithms in the federated learning setting. The idea of communicating probabilistic masks and compressing them appears to be simple, yet effective. The comparisons between the proposed algorithm (DeltaMask) and baselines in terms of the bit rates appear to be interesting.

Weaknesses:
- In section 3, I have found the methodology section to be rather scattered and poorly written.
  - For instance, going from section 3.1 to section 3.2 was a jump: the previous section talks about masking schemes, while the next gives an overview of the pipeline. However, it's not clear to me what part of section 3.1 is being used in section 3.2.
  - In section 3.2, the authors claim that "the method enables better control over generalization compared to related approaches." However, I find the arguments to be unconvincing. I think more analysis needs to be done here before making this claim. Also, the readers did not get a preview of FedPM and HideNseek, making the claim difficult to verify.
  - A complete description of the algorithm needs to be in the main text. Otherwise, I could not tell what exactly is the method being implemented.
- In section 4, this should be organized more coherently and the baselines should be described more clearly, justifying why these are sufficient for the authors to draw a conclusion.
- In section 5, again, the authors should list a summary of the main empirical findings here.
There is much detail mixed with the description of the results in this section.
- In figure 5 / table 1, it is unclear to me why these results are significant. Also, the rationale/intuition for the comparison is not clearly explained.

Requested Changes:
- The first requested change is to fix the issues discussed under the weaknesses above. However, to me this might require a complete re-write of section 3 and a large part of sections 4/5, which would be a major revision of the manuscript.
- The second requested change is to clarify the claims on the privacy implications for FL. This seems like a major weakness to me, as the authors are claiming an FL training algorithm that might protect user data privacy; however, the paper makes no such claims. This is a major omission. One possible fix is to add some justification based on the Differential Privacy framework (see, e.g., "The Algorithmic Foundations of Differential Privacy" by Dwork and Roth). However, this again requires a major revision of the manuscript.

Taking both issues above, I thus feel that this paper is not yet ready for publication, although the problem is well-motivated.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: Integration of large foundation models into Federated Learning (FL) is challenging due to substantial communication overhead. Current communication-efficient FL strategies (e.g., gradient compression) reduce bitrates to around 1 bit-per-parameter (bpp). However, they fail to harness the characteristics of FMs, whose large number of parameters still poses a challenge for efficient communication, even at this bitrate regime. The work presents DeltaMask, which fine-tunes FMs in FL at an ultra-low bitrate, well below 1 bpp. It uses stochastic masking to detect highly effective subnetworks within the model, which are leveraged for efficient fine-tuning.

Strengths and Weaknesses:

[Strengths]
* The proposed method shows a clear advantage in terms of bitrate (bpp) while having comparable accuracy to previous methods.
* Extensive experiments on various datasets and architectures.
* The paper is well-organized with clear writing.

[Weaknesses]
* Typo: page 8, line 12: DeltaMaskagainst → DeltaMask against.
* Citation format: The citation should be inside the parentheses. For example, Method A (~~~ et al.).

Requested Changes: See Weaknesses above.

Broader Impact Concerns: Not applicable.

==================================================

Review 3:

Summary: This work proposes a compression mechanism for federated learning communication in the pretrained foundation model setting (e.g., ViTs). The mechanism, DeltaMask, is based on a combination of multiple techniques, namely masking, delta compression, and importance sampling. The updates are transmitted using probabilistic data structures as well as PNG image compression. Evaluations are over 8 datasets and 5 pretrained models.

Strengths and Weaknesses:

Strengths:
* Writing is clear.
* The method seems strong in the Pareto sense, particularly in the low-bit regime.

Weaknesses:
* The method is heavily engineered with many components, and thus may be more difficult to adapt and understand.
* Many of the design choices aren't motivated up front. For example, PNG compression is mentioned as a way to transmit updates, but it's not clear why this is preferable to other lossless algorithms. For similar reasons, it's not clear what techniques are different from prior work.
Requested Changes:

Critical:
* Can you please differentiate this work from prior work in the introduction? For example, how does the work compare with the cited work of Isik et al. (2023)?

Suggested:
* Can you please motivate some of the design choices up front, such as why PNG compression and/or a particular probabilistic data structure is used? It is difficult to understand what are fundamental ideas and what are engineering tradeoffs.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Reject

Comment: This paper considers the problem of fine-tuning models in a communication-constrained federated learning setting. The parameters of a pre-trained model are shared with client devices. For the fine-tuning process, DeltaMask is proposed, a technique where clients collaborate in training a stochastic binary mask. DeltaMask leverages stochasticity and sparsity in client masks to compress updates and allows fine-tuning at bit-per-parameter rates below 1 (previous binary mask-based procedures used 1 bit per parameter). Instead of communicating the stochastic binary mask, the paper proposes to reduce the required bits-per-parameter by solely communicating the subsequent key updates between the received and trained mask.

The reviewers found that the proposed method has a better bitrate (bpp) while being comparable to previous methods in accuracy. However, they also noted that the approach is rather complex regarding engineering detail and, therefore, is perhaps more challenging to implement, verify, and generalize. The review committee unanimously finds that this paper presents strong and promising results but offers limited insights and reproducibility. For example, the main points raised by the reviewers are that:
- the control of the bitrate in the proposed DeltaMask is vaguely described (the paper mentions control of $\kappa$ and bpe, but these do not appear as parameters in Algorithm 1);
- it is challenging to understand what the fundamental ideas are and what the engineering tradeoffs are.

As the reviewers did not reach a consensus on this work (with none arguing strongly for acceptance), I also looked closely at the paper before proposing a decision. I mainly checked whether the paper's description of the approach is clear enough to verify the claims according to TMLR's evaluation criteria. However, I find that the current version of the manuscript does not entirely meet these expectations and that a revision would be incredibly beneficial to the readers and the TMLR audience. For example, Section 3.1 is very dense and introduces many parameters ($\mu$, $l$, $n$, $m$, $\mathcal{U}$, $\mathcal{H}$), but the effect of these parameters is not well explained later (e.g., how to achieve the desired adjustments in bpp, etc.). Moreover, it is additionally confusing that the notation does not seem consistent with the other sections ($m$ for mask, $k$ for client index, etc.).

For a potential revision submitted to TMLR, we suggest separating the machine learning components from the engineering aspects to enhance the clarity of the presentation. To better align with the expectations of the TMLR audience and reviewers, please also consider the following points the reviewers raised:

(A) a clean and detailed presentation of the fine-tuning pipeline with masking, highlighting how it contrasts with other fine-tuning approaches in machine learning;
(B) more intuition and explanations behind the main idea (trade-offs between compressing the updates vs. compressing masks/parameters);
(C) a thorough description of DeltaMask, including detailed information on hyperparameters and their effects.

Addressing these points could significantly improve the submission.

==================================================
# Improving GFlowNets for Text-to-Image Diffusion Alignment

Anonymous authors

Paper under double-blind review

## Abstract

Diffusion models have become the *de-facto* approach for generating visual data, which are trained to match the distribution of the training dataset. In addition, we also want to control generation to fulfill desired properties such as alignment to a text description, which can be specified with a black-box reward function. Prior works fine-tune pretrained diffusion models to achieve this goal through reinforcement learning-based algorithms. Nonetheless, they suffer from issues including slow credit assignment as well as low quality in their generated samples. In this work, we explore techniques that do not directly maximize the reward but rather generate high-reward images with relatively high probability - a natural scenario for the framework of generative flow networks (GFlowNets). To this end, we propose the Diffusion Alignment with GFlowNet (DAG) algorithm to post-train diffusion models with black-box property functions. Extensive experiments on Stable Diffusion and various reward specifications corroborate that our method could effectively align large-scale text-to-image diffusion models with given reward information.

## 1 Introduction

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) have drawn significant attention in machine learning due to their impressive capability to generate high-quality visual data and applicability across a diverse range of domains, including text-to-image synthesis (Rombach et al., 2021), 3D generation (Poole et al., 2022), material design (Yang et al., 2023), protein conformation modeling (Abramson et al., 2024), and continuous control (Janner et al., 2022). These models, through a process of gradually denoising a random distribution, learn to replicate complex data distributions, showcasing their robustness and flexibility. The traditional training of diffusion models typically relies on large datasets, from which the models learn to generate new samples that mimic and interpolate the observed examples.

![0_image_0.png](0_image_0.png)

Figure 1: Generated samples before (top) and after (bottom) the proposed training with the Aesthetic reward.

However, such a dataset-dependent approach often overlooks the opportunity to control and direct the generation process towards outputs that not only resemble the training data but also possess specific, desirable properties (Lee et al., 2023). These properties are often defined through explicit reward functions that assess certain properties, such as the aesthetic quality of images. Such a requirement is crucial in fields where adherence to particular characteristics is necessary, such as alignment or drug discovery. The need to integrate explicit guidance without relying solely on datasets presents a unique challenge for training methodologies. Previous works have utilized methods such as reinforcement learning (RL) (Black et al., 2023; Fan et al., 2023) to tackle this problem. Nonetheless, these methods still suffer from issues like low sample efficiency. In this work, we propose a novel approach, diffusion alignment with GFlowNets (DAG), that fine-tunes diffusion models to optimize black-box reward functions directly. Generative flow networks (Bengio et al., 2023, GFlowNets), initially introduced for efficient probabilistic inference with given densities in structured spaces, provide a unique framework for this task.
Though initially proposed for composite graph-like structures, prior works have extended the GFlowNet framework to diffusion modeling (Zhang et al., 2022a; Lahlou et al., 2023). This work further investigates GFlowNet-inspired algorithms for the task of text-to-image diffusion alignment. By aligning the learning process to focus on generating samples with probability proportional to reward functions rather than maximizing them, our method allows the diffusion model to directly target and generate samples that are not only high in quality but also fulfill specific predefined criteria. Besides developing a denoising diffusion probabilistic model-specific GFlowNet algorithm, we also propose a new KL-based way to optimize our models. In summary, our contributions are as follows:

- We propose Diffusion Alignment with GFlowNet (DAG), a GFlowNet-based algorithm using the denoising structure of diffusion models, to improve large-scale text-to-image alignment with a black-box reward function.
- We propose a KL-based objective for optimizing GFlowNets that achieves comparable or better sample efficiency. We further call the resulting algorithm for the alignment problem DAG-KL.
- Our methods achieve better sample efficiency than the reinforcement learning baseline within the same number of trajectory rollouts, as well as a better reward-diversity trade-off, across a number of different learning targets.

## 2 Preliminaries

## 2.1 Diffusion Models

The denoising diffusion model (Vincent, 2011; Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020) is a class of hierarchical latent variable models. The latent variables are initialized from white noise $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and then go through a sequential denoising (reverse) process $p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)$. Therefore, the resulting generated distribution takes the form of

$$p_{\mathbf{\theta}}(\mathbf{x}_{0})=\int p_{\mathbf{\theta}}(\mathbf{x}_{0:T})\,\mathrm{d}\mathbf{x}_{1:T}=\int p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\,\mathrm{d}\mathbf{x}_{1:T}.\tag{1}$$

On the other hand, the variational posterior $q(\mathbf{x}_{1:T}|\mathbf{x}_0)$, also called a diffusion or forward process, can be factorized as a Markov chain $\prod_{t=1}^{T} q(\mathbf{x}_t|\mathbf{x}_{t-1})$ composed of a series of conditional Gaussian distributions $q(\mathbf{x}_t|\mathbf{x}_{t-1}) = \mathcal{N}\big(\mathbf{x}_t; (\alpha_t/\alpha_{t-1})\mathbf{x}_{t-1}, (1-\alpha_t^2/\alpha_{t-1}^2)\mathbf{I}\big)$, where $\{\alpha_t, \sigma_t\}_t$ is a set of pre-defined signal-noise schedules. Specifically, in Ho et al. (2020) we have $\alpha_t^2 + \sigma_t^2 = 1$. The benefit of such a noising process is that its marginal has a simple closed form: $q(\mathbf{x}_t|\mathbf{x}_0) = \int q(\mathbf{x}_{1:t}|\mathbf{x}_0)\,\mathrm{d}\mathbf{x}_{1:t-1} = \mathcal{N}(\mathbf{x}_t; \alpha_t\mathbf{x}_0, \sigma_t^2\mathbf{I})$. Given a data distribution $p_{\mathrm{data}}(\cdot)$, the variational lower bound of the model log-likelihood can be written as the following simple denoising objective:

$${\mathcal{L}}_{\mathrm{denoising}}(\mathbf{\theta})=\mathbb{E}_{t,\mathbf{x}_{0}\sim p_{\mathrm{data}},\mathbf{\epsilon}\sim{\mathcal{N}}(\mathbf{0},\mathbf{I})}\left[\|\mathbf{x}_{0}-{\hat{\mathbf{x}}}_{\mathbf{\theta}}(\alpha_{t}\mathbf{x}_{0}+\sigma_{t}\mathbf{\epsilon},t)\|^{2}\right],\tag{2}$$

where $\hat{\mathbf{x}}_{\theta}(\mathbf{x}_t, t)$ is a deep neural network that predicts the original clean data $\mathbf{x}_0$ given the noisy input $\mathbf{x}_t = \alpha_t\mathbf{x}_0 + \sigma_t\mathbf{\epsilon}$, which can be used to parameterize the denoising process

$$p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}\left(\mathbf{x}_{t-1};\,\frac{\sigma_{t-1}^{2}\alpha_{t}\mathbf{x}_{t}+(\alpha_{t-1}^{2}-\alpha_{t}^{2})\hat{\mathbf{x}}_{\theta}(\mathbf{x}_{t},t)}{\sigma_{t}^{2}\alpha_{t-1}},\,\Big(1-\frac{\alpha_{t}^{2}}{\alpha_{t-1}^{2}}\Big)\mathbf{I}\right).$$

In practice, the network can also be parameterized with noise prediction or v-prediction (Salimans & Ho, 2022). The network architecture usually has a U-Net (Ronneberger et al., 2015) structure.
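As a concrete reference for Equation 2, the following is a minimal sketch of the denoising objective; `model` stands in for the U-Net data-prediction network $\hat{\mathbf{x}}_{\theta}$ and is an assumed interface for illustration, not the actual training code.

```python
import torch

def denoising_loss(model, x0: torch.Tensor, alphas: torch.Tensor,
                   sigmas: torch.Tensor) -> torch.Tensor:
    """Monte Carlo estimate of Eq. (2): sample t and eps, corrupt x0 with
    the closed-form marginal q(x_t | x_0), and regress the network's
    x0-prediction back onto the clean data."""
    T = len(alphas)
    t = torch.randint(1, T + 1, (x0.shape[0],), device=x0.device)
    a = alphas[t - 1].view(-1, 1, 1, 1)
    s = sigmas[t - 1].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    xt = a * x0 + s * eps            # sample from q(x_t | x_0)
    x0_hat = model(xt, t)            # \hat{x}_theta(x_t, t)
    return ((x0 - x0_hat) ** 2).mean()
```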
In multimodal applications such as text-to-image tasks, the denoising diffusion model has a conditioning $\mathbf{c}$, in the sense of $p_{\theta}(\mathbf{x}_0; \mathbf{c}) = \int p(\mathbf{x}_T)\prod_{t=1}^{T} p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t; \mathbf{c})\,\mathrm{d}\mathbf{x}_{1:T}$. The data prediction network, $\hat{\mathbf{x}}_{\theta}(\mathbf{x}_t, t, \mathbf{c})$ in this case, will also take $\mathbf{c}$ as a conditioning input. We ignore the notation of $\mathbf{c}$ without loss of generality.

## 2.2 GFlowNets

The generative flow network (Bengio et al., 2021, GFlowNet) is a high-level algorithmic framework for amortized inference, also known as training generative models with a given unnormalized target density function. Let $\mathcal{G} = (\mathcal{S}, \mathcal{A})$ be a directed acyclic graph, where $\mathcal{S}$ is the set of states and $\mathcal{A} \subseteq \mathcal{S}\times\mathcal{S}$ is the set of actions. We assume the environmental transition is deterministic, *i.e.*, one action would only lead to one next state. There is a unique initial state $\mathbf{s}_0 \in \mathcal{S}$ which has no incoming edges and a set of *terminal states* $\mathbf{s}_N$ without outgoing edges. A GFlowNet has a stochastic *forward policy* $P_F(\mathbf{s}'|\mathbf{s})$ for transition $(\mathbf{s}\to\mathbf{s}')$ as a conditional distribution over the children of a given state $\mathbf{s}$, which can be used to induce a distribution over trajectories via $P(\tau) = \prod_{n=0}^{N-1} P_F(\mathbf{s}_{n+1}|\mathbf{s}_n)$, where $\tau = (\mathbf{s}_0, \mathbf{s}_1, \ldots, \mathbf{s}_N)$. On the other hand, the *backward policy* $P_B(\mathbf{s}|\mathbf{s}')$ is a distribution over the parents of a given state $\mathbf{s}'$. The *terminating distribution*, defined by $P_T(\mathbf{x}) = \sum_{\tau\to\mathbf{x}} P_F(\tau)$, is the ultimate terminal state distribution generated by the GFlowNet. The goal of training a GFlowNet is to obtain a forward policy such that $P_T(\cdot) \propto R(\cdot)$, where $R(\cdot)$ is a black-box *reward function* or unnormalized density that takes only non-negative values. Notice that we do not know the normalizing factor $Z = \sum_{\mathbf{x}} R(\mathbf{x})$. We can use the *trajectory flow* function $F(\tau) = ZP_F(\tau)$ to absorb the effect of the normalizing factor, and the corresponding *state flow* function $F(\mathbf{s}) = \sum_{\tau \ni \mathbf{s}} F(\tau)$ to model the unnormalized probability flow of an intermediate state $\mathbf{s}$.

**Detailed balance (DB)** The GFlowNet detailed balance condition provides a way to learn the above-mentioned GFlowNet modules. For any single transition $(\mathbf{s}\to\mathbf{s}')$, the following DB criterion holds:

$$F(\mathbf{s})P_{F}(\mathbf{s}^{\prime}|\mathbf{s})=F(\mathbf{s}^{\prime})P_{B}(\mathbf{s}|\mathbf{s}^{\prime}),\quad\forall(\mathbf{s}\to\mathbf{s}^{\prime})\in{\mathcal{A}}.\tag{3}$$

Furthermore, for any terminating state $\mathbf{x}$, we require $F(\mathbf{x}) = R(\mathbf{x})$. In practice, these constraints can be transformed into tractable training objectives, as will be shown in Section 3. Based on the GFlowNet theory in Bengio et al. (2023), if the DB criterion is satisfied for every transition, then the terminating distribution $P_T(\cdot)$ will be the desired target distribution whose density is proportional to $R(\cdot)$.

## 3 Methodology

## 3.1 Denoising Markov Decision Process

The denoising process for text-to-image diffusion models can be easily reformulated as a multi-step Markov decision process (MDP) with finite horizon (Fan et al., 2023; Black et al., 2023) as follows:

$$p(\mathbf{s}_{0})={\mathcal{N}}(\mathbf{x}_{T};\mathbf{0},\mathbf{I})\otimes p(\mathbf{c}),\qquad\pi_{\theta}(\mathbf{a}_{t}|\mathbf{s}_{t})=p_{\theta}(\mathbf{x}_{T-t-1}|\mathbf{x}_{T-t},\mathbf{c}),\tag{4}$$
$$r(\mathbf{s}_{t},\mathbf{a}_{t})=R(\mathbf{s}_{t+1},\mathbf{c})\ \text{only if}\ t=T-1,\qquad p(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})=\delta_{\mathbf{a}_{t}}\otimes\delta_{\mathbf{c}}.\tag{5}$$

Here $\mathbf{s}_t, \mathbf{a}_t$ are the state and action at time step $t$ in the context of the MDP.
The state space is defined to be the product space (denoted by $\otimes$) of $\mathbf{x}$ in reverse time ordering and the conditional prompt $\mathbf{c}$. The RL policy $\pi$ is just the denoising conditional distribution. In this MDP, when time $t$ has not reached the terminal step, we define the reward $r(\mathbf{s}_t, \mathbf{a}_t)$ to be 0. $\delta$ here denotes the Dirac distribution.

Remark 1 (diffusion model as GFlowNet). This formulation has a direct connection to the GFlowNet MDP definition in Section 2.2, which has been pointed out by Zhang et al. (2022a) and developed in Lahlou et al. (2023); Zhang et al. (2023b); Venkatraman* et al. (2024). To be specific, the action transition $(\mathbf{s}_t, \mathbf{a}_t)\to\mathbf{s}_{t+1}$ is a Dirac distribution and can be directly linked with the $(\mathbf{s}_t\to\mathbf{s}_{t+1})$ edge transition in the GFlowNet language. More importantly, the conditional distribution of the denoising process $p_{\theta}(\mathbf{x}_{T-t-1}|\mathbf{x}_{T-t})$ corresponds to the GFlowNet forward policy $P_F(\mathbf{s}_{t+1}|\mathbf{s}_t)$, while the conditional distribution of the diffusion process $q(\mathbf{x}_{T-t}|\mathbf{x}_{T-t-1})$ corresponds to the GFlowNet backward policy $P_B(\mathbf{s}_t|\mathbf{s}_{t+1})$. Besides, $\mathbf{x}_t$ is a GFlowNet terminal state if and only if $t = 0$. The above discussion is summarized in the table below. In the following text, we use the denoising diffusion notation instead of the GFlowNet notation, as it is familiar to a broader audience. What's more, we ignore the conditioning $\mathbf{c}$ without loss of generality.

| Denoising diffusion | GFlowNet |
|---|---|
| $(\mathbf{x}_{T-t}, \mathbf{c})$ | $\mathbf{s}_t$ |
| $p(\mathbf{x}_{T-t-1}\|\mathbf{x}_{T-t}, \mathbf{c})$ | $P_F(\mathbf{s}_{t+1}\|\mathbf{s}_t)$ |
| $q(\mathbf{x}_{T-t}\|\mathbf{x}_{T-t-1})$ | $P_B(\mathbf{s}_t\|\mathbf{s}_{t+1})$ |

## 3.2 Diffusion Alignment with GFlowNets

In this section, we describe our proposed algorithm, diffusion alignment with GFlowNets (DAG). Rather than directly optimizing the reward targets as in RL, we aim to train the generative model so that in the end it generates objects with probability *proportional* to the reward function: $p_{\theta}(\mathbf{x}_0) \propto R(\mathbf{x}_0)$. To achieve this, we construct the following DB-based training objective based on Equation 3, by regressing one side onto the other on a logarithmic scale for any diffusion step transition $(\mathbf{x}_t, \mathbf{x}_{t-1})$:

$$\ell_{\rm DB}({\bf x}_{t},{\bf x}_{t-1})=(\log F_{\phi}({\bf x}_{t},t)+\log p_{\theta}({\bf x}_{t-1}|{\bf x}_{t},t)-\log F_{\phi}({\bf x}_{t-1},t-1)-\log q({\bf x}_{t}|{\bf x}_{t-1}))^{2}.\tag{6}$$

We additionally force $F_{\phi}(\mathbf{x}_0, t=0) = R(\mathbf{x}_0)$ to introduce the reward signal. Here $\theta, \phi$ are the parameters of the diffusion U-Net model and the GFlowNet state flow function (which is another neural network), respectively. One can prove that if the optimization is perfect, the resulting model will generate a distribution whose density value is proportional to the reward function $R(\cdot)$ (Bengio et al., 2023; Zhang et al., 2023b).

One way to parameterize the state flow function $F$ is through the so-called forward-looking (Pan et al., 2023b, FL) technique in the form $F_{\phi}(\mathbf{x}_t, t) = \tilde{F}_{\phi}(\mathbf{x}_t, t)R(\mathbf{x}_t)$, where $\tilde{F}_{\phi}$ is the actual neural network to be learned. Intuitively, this is equivalent to initializing the state flow function to be the reward function in a functional way; therefore, learning the state flow becomes an easier task. Note that to ensure $F_{\phi}(\mathbf{x}_0, 0) = R(\mathbf{x}_0)$, we need to force $\tilde{F}_{\phi}(\mathbf{x}_0, 0) = 1$ for all $\mathbf{x}_0$ at the terminal step.
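Algorithm 1 below consumes trajectories rolled out from the current denoising policy. As a point of reference, a minimal rollout sketch might look as follows; the `policy(x, t)` interface (returning a distribution over $\mathbf{x}_{t-1}$) is an assumption for illustration rather than the paper's code.

```python
import torch

@torch.no_grad()
def rollout_trajectory(policy, x_T: torch.Tensor, T: int):
    """Sample x_T -> ... -> x_0 from the denoising policy and collect the
    (t, x_t, x_{t-1}) transitions consumed by the single-transition losses.
    `policy(x, t)` is assumed to return a torch.distributions object for
    p_theta(x_{t-1} | x_t)."""
    x, transitions = x_T, []
    for t in range(T, 0, -1):
        x_prev = policy(x, t).sample()
        transitions.append((t, x, x_prev))
        x = x_prev
    return transitions, x  # x is the terminal sample x_0, to be scored by R
```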
Algorithm 1 Diffusion alignment with GFlowNets (DAG-DB & DAG-KL)

Require: Denoising policy $p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t, t)$, noising policy $q(\mathbf{x}_t|\mathbf{x}_{t-1})$, flow function $F_{\phi}(\mathbf{x}_t, t)$, black-box reward function $R(\cdot)$
1: **repeat**
2: Roll out $\tau = \{\mathbf{x}_t\}_t$ with $p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t, t)$
3: For each transition $(\mathbf{x}_t, \mathbf{x}_{t-1}) \in \tau$:
4: **if** algorithm is DAG-DB **then**
5: \# normal DB-based update
6: Update $\theta$ and $\phi$ with Equation 8
7: **else if** algorithm is DAG-KL **then**
8: \# KL-based update
9: Update $\phi$ with Equation 8
10: Update $\theta$ with Equation 14
11: **end if**
12: **until** some convergence condition

**Incorporating denoising diffusion-specific structure** However, the intermediate state $\mathbf{x}_t$ is noisy in our context, and is thus not appropriate for being evaluated by the given reward function, which would give noisy results. What's more, what we are interested in here is to "foresee" the reward of the terminal state $\mathbf{x}_0$ reached by the (partial) trajectory $\mathbf{x}_{t:0}$ starting from a given $\mathbf{x}_t$. As a result, we can apply the FL technique utilizing the particular structure of the diffusion model, as in $F_{\phi}(\mathbf{x}_t, t) = \tilde{F}_{\phi}(\mathbf{x}_t, t)R(\hat{\mathbf{x}}_{\theta}(\mathbf{x}_t, t))$, where $\hat{\mathbf{x}}_{\theta}$ is the data prediction network. We notice that a similar technique has been used to improve classifier guidance (Bansal et al., 2023). In short, our innovation in the FL technique is

$$F_{\phi}(\mathbf{x}_{t},t)={\tilde{F}}_{\phi}(\mathbf{x}_{t},t)R(\mathbf{x}_{t})\implies F_{\phi}(\mathbf{x}_{t},t)={\tilde{F}}_{\phi}(\mathbf{x}_{t},t)R({\hat{\mathbf{x}}}_{\theta}(\mathbf{x}_{t},t)).\tag{7}$$

Then the FL-DB training objective becomes

$$\ell_{\mathrm{FL}}(\mathbf{x}_{t},\mathbf{x}_{t-1})=\left(\log{\frac{{\tilde{F}}_{\phi}(\mathbf{x}_{t},t)R({\hat{\mathbf{x}}}_{\theta}(\mathbf{x}_{t},t))\,p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{{\tilde{F}}_{\phi}(\mathbf{x}_{t-1},t-1)R({\hat{\mathbf{x}}}_{\theta}(\mathbf{x}_{t-1},t-1))\,q(\mathbf{x}_{t}|\mathbf{x}_{t-1})}}\right)^{2}.\tag{8}$$

Since in this work the reward function is a black box, the gradient does not flow through $\hat{\mathbf{x}}_{\theta}(\mathbf{x}_t, t)$ when we take the gradient with respect to $\theta$. We summarize the algorithm in Algorithm 1 and refer to it as DAG-DB.

Remark 2 (GPU memory and the choice of GFlowNet objectives). Similar to temporal difference-$\lambda$ in RL (Sutton, 1988), it is possible to use multiple connected transition steps rather than a single transition step to construct the learning objective. Other GFlowNet objectives such as Malkin et al. (2022); Madan et al. (2022) use partial trajectories with a series of transition steps to construct the training loss and provide a different trade-off between variance and bias in credit assignment. However, for large-scale setups, this is not easy to implement, as computing policy probabilities for multiple transitions would increase the GPU memory and computation correspondingly. For example, in the Stable Diffusion setting, we could only use a batch size of 8 on each GPU for single-transition computation. If we wanted to use a two-transition training loss, we would need to halve the batch size to 4. Similarly, we would have to shorten the trajectory length by a large margin if we wanted to use trajectory balance. This may influence the image generation quality and also make it tricky to compare with the RL baseline, which can be implemented with single transitions and does not need to decrease the batch size or increase gradient accumulation. In practice, we find that single-transition algorithms (such as our RL baseline) perform reasonably well.
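The single-transition FL-DB update of Equation 8 can be sketched as follows. All callables (`log_F_tilde`, `log_p_theta`, `log_q`, `x0_pred`, `reward_fn`) are assumed interfaces for illustration; in particular, the reward is treated as a black box, so the data predictions fed to it are detached.

```python
import torch

def fl_db_loss(log_F_tilde, log_p_theta, log_q, x0_pred, reward_fn,
               x_t, x_tm1, t) -> torch.Tensor:
    """Single-transition FL-DB objective (Eq. 8) in log space.
    log_F_tilde(x, t) models log F~_phi; x0_pred(x, t) is x^_theta.
    At t = 1, log_F_tilde(x_0, 0) should be pinned to 0 (i.e., F~ = 1)."""
    log_r_t = torch.log(reward_fn(x0_pred(x_t, t).detach()))
    log_r_tm1 = torch.log(reward_fn(x0_pred(x_tm1, t - 1).detach()))
    residual = (log_F_tilde(x_t, t) + log_r_t
                + log_p_theta(x_tm1, x_t, t)
                - log_F_tilde(x_tm1, t - 1) - log_r_tm1
                - log_q(x_t, x_tm1, t))
    return residual.pow(2).mean()  # optimized w.r.t. both theta and phi (DAG-DB)
```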
## 3.3 A KL-Based GFlowNet Algorithm with REINFORCE Gradient

GFlowNet detailed balance is an off-policy algorithm that can use training data from arbitrary distributions. In this section, we derive a different KL-based on-policy objective, which has rarely been investigated in the GFlowNet literature. We can reformulate DB (Equation 6) from a square loss form to a KL divergence form:

$$\min_{\mathbf{\theta}}\mathcal{D}_{\mathrm{KL}}\left(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\,\Big\|\,\frac{F_{\mathbf{\phi}}(\mathbf{x}_{t-1},t-1)q(\mathbf{x}_{t}|\mathbf{x}_{t-1})}{F_{\mathbf{\phi}}(\mathbf{x}_{t},t)}\right).\tag{9}$$

In theory, when DB is perfectly satisfied, the right term $F_{\phi}(\mathbf{x}_{t-1}, t-1)q(\mathbf{x}_t|\mathbf{x}_{t-1})/F_{\phi}(\mathbf{x}_t, t)$ is a normalized density; in practice, it could be an unnormalized one, but this does not affect the optimization. Next, define

$$b(\mathbf{x}_{t},\mathbf{x}_{t-1})=\operatorname{stop-gradient}\left(\log{\frac{F_{\phi}(\mathbf{x}_{t},t)p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{F_{\phi}(\mathbf{x}_{t-1},t-1)q(\mathbf{x}_{t}|\mathbf{x}_{t-1})}}\right);\tag{10}$$

then the KL value of Equation 9 becomes $\mathbb{E}_{p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)}[b(\mathbf{x}_t, \mathbf{x}_{t-1})]$. We have the following result for deriving a practical REINFORCE-style objective.

Proposition 3. The KL term in Equation 9 has the same expected gradient as $b(\mathbf{x}_t, \mathbf{x}_{t-1})\log p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)$:

$$\nabla_{\theta}\,\mathcal{D}_{\mathrm{KL}}\left(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\,\Big\|\,\frac{F_{\phi}(\mathbf{x}_{t-1},t-1)q(\mathbf{x}_{t}|\mathbf{x}_{t-1})}{F_{\phi}(\mathbf{x}_{t},t)}\right)=\mathbb{E}_{\mathbf{x}_{t-1}\sim p_{\theta}(\cdot|\mathbf{x}_{t})}\left[b(\mathbf{x}_{t},\mathbf{x}_{t-1})\nabla_{\theta}\log p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\right].\tag{11}$$

We defer its proof to Section A.1 and make the following remarks.

Remark 4 (gradient equivalence to detailed balance). Recalling Equation 6, since we have $\nabla_{\theta}\ell_{\mathrm{DB}}(\mathbf{x}_t, \mathbf{x}_{t-1}) = b(\mathbf{x}_t, \mathbf{x}_{t-1})\nabla_{\theta}\log p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)$, it is clear that this KL-based objective leads to the same expected gradient on $\theta$ as Equation 6, if $\mathbf{x}_{t-1} \sim p_{\theta}(\cdot|\mathbf{x}_t)$ (*i.e.*, samples being on-policy). Nonetheless, this on-policy property may not hold in practice, since after a few optimization steps the current model is usually no longer the same as the model used to roll out trajectories.

Remark 5 (analysis of $b(\mathbf{x}_t, \mathbf{x}_{t-1})$). The term $b(\mathbf{x}_t, \mathbf{x}_{t-1})$ seems to serve as the traditional "reward" in the RL framework. Previous works (Tiapkin et al., 2024; Mohammadpour et al., 2024; Deleu et al., 2024) have shown that a GFlowNet can be interpreted as solving a maximum entropy RL problem in a modified MDP, where any intermediate transition is modified to have an extra non-zero reward that equals the logarithm of the GFlowNet backward policy: $r_{\mathrm{mod}}(\mathbf{x}_t, \mathbf{x}_{t-1}) = \log q(\mathbf{x}_t|\mathbf{x}_{t-1}),\ t > 0$. What's more, the logarithm of the transition flow function (*i.e.*, either side of the detailed balance constraint of Equation 3) can be interpreted as the $Q$-function of this maximum entropy RL problem on the modified MDP. Therefore, we can take a more in-depth look at $-b(\mathbf{x}_t, \mathbf{x}_{t-1})$:

$$-b(\mathbf{x}_{t},\mathbf{x}_{t-1})=\underbrace{\log\big(F_{\phi}(\mathbf{x}_{t-1},t-1)\,q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\big)}_{\text{modified }Q\text{-function}}+\underbrace{\big(-\log p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\big)}_{\text{entropy}}-\underbrace{\log F_{\phi}(\mathbf{x}_{t},t)}_{\text{constant}}.\tag{12}$$

The last term $\log F_{\phi}(\mathbf{x}_t, t)$ is a constant for the $\mathbf{x}_{t-1} \sim p_{\theta}(\cdot|\mathbf{x}_t)$ process. Consequently, we can see that minimizing this KL is effectively equivalent to a REINFORCE gradient with the modified $Q$-function plus an entropy regularization (Haarnoja et al., 2018). What's more, the entropy of the forward policy $\mathbb{E}_{\mathbf{x}_{t-1}\sim p_{\theta}(\cdot|\mathbf{x}_t)}[-\log p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)]$ is actually also a constant (the diffusion model's noise schedule is fixed), which does not affect the learning.
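Before introducing the off-policy correction, a sketch of the on-policy REINFORCE-style surrogate implied by Equation 11 may help; the stop-gradient of Equation 10 becomes a `torch.no_grad()` block. The clipped, importance-weighted variant used in practice is defined next; function interfaces here are again assumptions for illustration.

```python
import torch

def dag_kl_onpolicy_loss(log_F, log_p_theta, log_q, x_t, x_tm1, t):
    """Surrogate whose gradient matches Eq. (11): the score b (Eq. 10) is
    computed under no_grad and multiplies log p_theta(x_{t-1}|x_t).
    Requires x_tm1 to be sampled on-policy from p_theta(.|x_t); log_F can
    include the forward-looking reward term of Section 3.2."""
    log_pf = log_p_theta(x_tm1, x_t, t)
    with torch.no_grad():  # stop-gradient in Eq. (10)
        b = log_F(x_t, t) + log_pf - log_F(x_tm1, t - 1) - log_q(x_t, x_tm1, t)
    return (b * log_pf).mean()
```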
Note that the REINFORCE-style objective in Equation 11 is on-policy; the data has to come from the same distribution as the current model. In practice, the model is no longer exactly on-policy after a few optimization steps, under which scenario we need to introduce the probability ratio $p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)/p_{\theta_{\mathrm{old}}}(\mathbf{x}_{t-1}|\mathbf{x}_t)$ via importance sampling:

$$\mathbb{E}_{{\bf x}_{t-1}\sim p_{\theta_{\mathrm{old}}}(\,\cdot\,|{\bf x}_{t})}\left[b({\bf x}_{t},{\bf x}_{t-1})\frac{\nabla_{\theta}p_{\theta}({\bf x}_{t-1}|{\bf x}_{t})}{p_{\theta_{\mathrm{old}}}({\bf x}_{t-1}|{\bf x}_{t})}\right].\tag{13}$$

Therefore, we can define a new objective:

$$\ell_{\mathrm{KL}}(\mathbf{x}_{t},\mathbf{x}_{t-1})=b(\mathbf{x}_{t},\mathbf{x}_{t-1})\,\operatorname*{clip}\left({\frac{p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{p_{\theta_{\mathrm{old}}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}},1-\epsilon,1+\epsilon\right),\tag{14}$$

where $\mathbf{x}_{t-1} \sim p_{\theta_{\mathrm{old}}}(\cdot|\mathbf{x}_t)$. Here we also introduce a clip operation to avoid overly drastic updates, following PPO (Schulman et al., 2017). In this way, the overall gradient along a trajectory becomes

$$\mathbb{E}_{p_{\theta_{\text{old}}}(\mathbf{x}_{0},\tau)}\left[\sum_{t=1}^{T}b(\mathbf{x}_{t},\mathbf{x}_{t-1})\nabla_{\mathbf{\theta}}\,\operatorname{clip}\left(\frac{p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{p_{\theta_{\text{old}}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})},1-\epsilon,1+\epsilon\right)\right].\tag{15}$$

We use this to update the policy parameters $\theta$ and use FL-DB only to update $\phi$. We call this "diffusion alignment with GFlowNet and REINFORCE gradient" method DAG-KL. Note that when calculating $b(\mathbf{x}_t, \mathbf{x}_{t-1})$, we also adopt the diffusion-specific FL technique developed in Section 3.2. We also put the algorithmic pipeline of DAG-KL in Algorithm 1.

![5_image_0.png](5_image_0.png)

Figure 2: *Top*: samples from the original Stable Diffusion model. *Middle*: the proposed method trained with the compressibility reward; these images have very smooth texture. *Bottom*: the proposed method trained with the incompressibility reward; the texture of these images contains high-frequency noise.

## 4 Related Works

**Diffusion alignment** People have been modeling human values with a reward function in areas such as games (Ibarz et al., 2018) and language modeling (Bai et al., 2022) to make models more aligned. In diffusion models, early researchers used various kinds of guidance (Dhariwal & Nichol, 2021; Ho & Salimans, 2022; Kong et al., 2024) to achieve the goal of steerable generation under the reward. This approach is plug-and-play but requires querying the reward function at inference time. Another way is to post-train the model to incorporate the information from the reward function, which has a different setup from guidance methods; there is also work showing that this outperforms guidance methods (Uehara et al., 2024). Lee et al. (2023); Dong et al. (2023) achieve this through maximum likelihood estimation on model-generated samples, which are reweighted by the reward function. These works can be thought of as doing RL in one-step MDPs. Black et al. (2023); Fan et al. (2023) design RL algorithms by treating the diffusion generation process as an MDP (Section 3.1). In this work, we focus on black-box rewards, where it is appropriate to use RL or GFlowNet methods.
Furthermore, there are methods developed specifically for the differentiable-reward setting (Clark et al., 2023; Wallace et al., 2023b; Prabhudesai et al., 2023; Wu et al., 2024; Xu et al., 2023; Uehara et al., 2024; Marion et al., 2024). Besides, Chen et al. (2023) study the effect of fine-tuning the text encoder rather than the diffusion U-Net. There is also work that relies on preference data rather than an explicit reward function (Wallace et al., 2023a; Yang et al., 2024). Kim et al. (2024a) investigate how to obtain a robust reward based on multiple different reward functions.

**GFlowNets** GFlowNet is a family of generalized variational inference algorithms that treats the data sampling process as a sequential decision-making one. It is useful for generating diverse and high-quality samples in structured scientific domains (Jain et al., 2022; 2023b; Liu et al., 2022; Jain et al., 2023a; Shen et al., 2023; Zhang et al., 2023e; Pan et al., 2023a; Kim et al., 2023; 2024b). A series of works have studied the connection between GFlowNets and probabilistic modeling methods (Zhang et al., 2022b; Zimmermann et al., 2022; Malkin et al., 2023; Zhang et al., 2022a; Ma et al.; Zhang et al., 2024a), and between GFlowNets and control methods (Pan et al., 2023c;d;b; Zhang et al., 2024b; Tiapkin et al., 2024). GFlowNets also have wide applications in causal discovery (Deleu et al., 2022), phylogenetic inference (Zhou et al., 2024), and combinatorial optimization (Zhang et al., 2023a;d). A concurrent work (Venkatraman* et al., 2024) also studies GFlowNets for diffusion alignment; it is similar to this work but has a different scope and a different algorithm. Specifically, that work aims at approximate posterior inference, where the reward function is treated as likelihood information, and develops a trajectory balance (Malkin et al., 2022)-based algorithm on length-modified trajectories.

![6_image_0.png](6_image_0.png)

Figure 3: Sample efficiency results of our proposed methods and our RL baseline (DDPO). The number of training steps is proportional to the number of sampled trajectories. The experiments are conducted on reward functions including aesthetic score, ImageReward, and HPSv2.

## 5 Experiments

**Experimental setups** We choose Stable Diffusion v1.5 (Rombach et al., 2021) as our base generative model. For training, we use low-rank adaptation (Hu et al., 2021, LoRA) for parameter-efficient computation. As for the reward functions, we experiment with the LAION Aesthetics predictor, a neural aesthetic score trained from human feedback to give an input image an aesthetic rating. For text-image alignment rewards, we choose ImageReward (Xu et al., 2023) and the human preference score (HPSv2) (Wu et al., 2023). They are both CLIP (Radford et al., 2021)-type models, taking a text-image pair as input and outputting a scalar score reflecting to what extent the image follows the text description. We also test with the (in)compressibility reward, which computes the file size the input image would occupy in hardware storage. As for the prompt distribution, we use a set of 45 simple animal prompts from Black et al. (2023) for the Aesthetics task; we use all ImageNet classes for the (in)compressibility task; we use the DrawBench (Saharia et al., 2022) prompt set for the ImageReward task; and we use the photo and painting prompts from the human preference dataset (HPDv2) (Wu et al., 2023) for the HPSv2 task.
We notice that in our experiments we use prompt sets containing hundreds of prompts, which is more than some previous works such as Black et al. (2023).

![6_image_1.png](6_image_1.png)

Figure 4: Sample efficiency results of our proposed methods and our RL baseline (DDPO) on learning from compressibility and incompressibility rewards.

**Effectiveness of the proposed methods** We first demonstrate that our proposed methods can generate images with meaningful improvements corresponding to the rewards being used. In Figure 1, we compare the images from the original Stable Diffusion pretrained model and our proposed method. After our post-training, the generated images become more vibrant and vivid; we also notice that these images have slightly higher saturation, which we believe is aligned with the human preference for good-looking pictures. We also visualize the experimental results on the compressibility and incompressibility tasks in Figure 2. The second row shows the generated images from the model trained with the compressibility reward, which have low detail and smooth textures, and also have very limited colors. On the other hand, the model trained with the incompressibility reward generates images with high-frequency texture, as shown in the third row. These results indicate that our method can effectively incorporate the reward characteristics into the generative models. We defer more experimental details to Section B.2.

![7_image_0.png](7_image_0.png)

Figure 5: Text-image alignment results for four prompts ("Personal computer desk room with large glass double doors", "A bathroom has a toilet and a scale", "Several cars drive down the road on a cloudy day", "A counter top with food sitting on some towels"). We display the corresponding generation visualization from the original Stable Diffusion (1st row), DDPO (2nd row), DAG-DB (3rd row), and DAG-KL (4th row) models to compare their alignment abilities. See Figure 10 for more results.

**Algorithmic comparisons** The main baseline we compare with is denoising diffusion policy optimization (Black et al., 2023, DDPO), an RL algorithm that is specifically designed for denoising diffusion alignment and has been shown to outperform other align-from-black-box-reward methods, including Lee et al. (2023); Fan et al. (2023). We show the reward curves w.r.t. the training steps for the aesthetic, ImageReward, and HPSv2 rewards in Figure 3. Here, the number of training steps corresponds proportionally to the number of trajectories collected (see the appendix for more details). Both our proposed methods, DAG-DB and DAG-KL, achieve faster credit assignment than the DDPO baseline by a large margin. We additionally put the corresponding curve plots for the compressibility and incompressibility rewards in Figure 4, which also demonstrate the advantages of our proposed methods. What's more, we provide a diversity comparison in Table 1. For the RL baseline, we take the last checkpoint; for our proposed methods, we take the earliest checkpoint with a larger reward value than the chosen RL checkpoint. This is to show that our methods achieve results with both better reward and better diversity. For the diversity measurement, we calculate the FID score between two batches of independently generated samples from the same model. We use this as a diversity metric, so the larger the better.
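A sketch of this diversity measurement, assuming the `torchmetrics` FID implementation (our actual evaluation script may differ in preprocessing details):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def batch_diversity(imgs_a: torch.Tensor, imgs_b: torch.Tensor) -> float:
    """FID between two *independently generated* batches from the same
    model (uint8 tensors of shape [N, 3, H, W]). A larger value means the
    model's samples are more spread out, i.e., more diverse."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(imgs_a, real=True)   # treat batch A as the reference set
    fid.update(imgs_b, real=False)  # and batch B as the candidate set
    return fid.compute().item()
```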
| Task | Compressibility | | Incompressibility | | Aesthetic | | ImageReward | | HPSv2 | |
|--------|---------|-------|---------|-------|---------|-------|---------|-------|---------|-------|
| Metric | Reward↑ | Div.↑ | Reward↑ | Div.↑ | Reward↑ | Div.↑ | Reward↑ | Div.↑ | Reward↑ | Div.↑ |
| DDPO | −44.74 | 27.32 | 197.83 | 25.72 | 6.33 | 13.59 | 0.29 | 26.04 | 0.29 | 19.49 |
| DAG-DB | −37.23 | 38.94 | 294.68 | 35.25 | 6.63 | 14.09 | 0.51 | 44.91 | 0.30 | 17.95 |
| DAG-KL | −34.67 | 41.30 | 218.93 | 27.68 | 6.50 | 13.63 | 0.48 | 30.37 | 0.30 | 21.85 |

Table 1: Comparison of average reward and diversity metrics across a variety of tasks. Our methods consistently achieve better trade-offs between these two objectives. See Section 5 and Section B.2 for details.

![8_image_0.png](8_image_0.png)

Figure 6: Visualization of alignment with regard to training progress. *Left*: the generated images from the proposed method become more aligned with the text prompt over the course of training. *Right*: samples from the DDPO baseline.

The results indicate that GFlowNet-based methods achieve a better reward-diversity trade-off due to their distribution-matching (rather than reward-maximizing) formulation. We defer related details to Section B.2. Apart from quantitative comparisons, we also visualize the alignment improvement for models trained on the HPSv2 task. In Figure 5 and Figure 10 in the Appendix, we exhibit generation results for different prompts across the original Stable Diffusion, DDPO, DAG-DB, and DAG-KL models. For example, in the first "a counter top with food sitting on some towels" example, images from the original Stable Diffusion either do not contain food or the food is not on towels, which is also the case for the DDPO generations. Both DAG-DB and DAG-KL improve on this, capturing the spatial relationship correctly. In the "personal computer desk room with large glass double doors" example, neither the original nor the DDPO model can generate any double doors in the image, and the DAG-DB model sometimes also fails. In contrast, the DAG-KL model seems to understand the concept well. Generations with other prompts show similar results. In Figure 6, we visualize the gradual alignment improvement of our DAG-KL method over the course of training for the HPSv2 task. We show images from our method at 0%, 25%, 50%, 75%, and 100% of training progress. In the "a helmet-wearing monkey skating" example, the DDPO baseline can generate a skating monkey but seems to fail to generate a helmet. For the proposed method, the model gradually learns to handle the concept of a helmet over the course of training. In the "anthropomorphic Virginia opossum playing guitar" example, the baseline understands the concept of a guitar well, but the generated images are not anthropomorphic, while our method manages to generate anthropomorphic opossums decently.

## References

Josh Abramson, Jonas Adler, Jack Dunger, Richard Evans, Tim Green, Alexander Pritzel, Olaf Ronneberger, Lindsay Willmore, Andrew J Ballard, Joshua Bambrick, et al. Accurate structure prediction of biomolecular interactions with alphafold 3. *Nature*, pp. 1–3, 2024.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022. Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 843–852, 2023. Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. Neural Information Processing Systems (NeurIPS), 2021. Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward J Hu, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. *Journal of Machine Learning Research*, (24):1–76, 2023. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. *ArXiv*, abs/2305.13301, 2023. Chaofeng Chen, Annan Wang, Haoning Wu, Liang Liao, Wenxiu Sun, Qiong Yan, and Weisi Lin. Enhancing diffusion models with text-encoder reinforcement learning. *ArXiv*, abs/2311.15657, 2023. URL https: //api.semanticscholar.org/CorpusID:265457291. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards, 2023. Tristan Deleu, António Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. *Uncertainty in Artificial* Intelligence (UAI), 2022. Tristan Deleu, Padideh Nouri, Nikolay Malkin, Doina Precup, and Yoshua Bengio. Discrete probabilistic inference as control in multi-path environments. *arXiv preprint arXiv:2402.10309*, 2024. Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis, 2021. Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment, 2023. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, P. Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. *ArXiv*, abs/2305.16381, 2023. Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates, 2017. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. *International Conference on Machine Learning (ICML)*, 2017. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. *International Conference on Machine Learning* (ICML), 2018. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance, 2022. Jonathan Ho, Ajay Jain, and P. Abbeel. Denoising diffusion probabilistic models. *ArXiv*, abs/2006.11239, 2020. URL https://api.semanticscholar.org/CorpusID:219955663. Edward J. 
Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021. Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari, 2018. Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure F.P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Micheal Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, and Yoshua Bengio. Biological sequence design with GFlowNets. *International Conference on Machine* Learning (ICML), 2022. Moksh Jain, Tristan Deleu, Jason S. Hartford, Cheng-Hao Liu, Alex Hernández-García, and Yoshua Bengio. Gflownets for ai-driven scientific discovery. *ArXiv*, abs/2302.00615, 2023a. URL https: //api.semanticscholar.org/CorpusID:256459319. Moksh Jain, Sharath Chandra Raparthy, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Yoshua Bengio, Santiago Miret, and Emmanuel Bengio. Multi-objective GFlowNets. *International Conference on Machine* Learning (ICML), 2023b. Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. *arXiv preprint arXiv:2205.09991*, 2022. Kyuyoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, and Kimin Lee. Confidence-aware reward optimization for fine-tuning text-to-image models, 2024a. Minsu Kim, Taeyoung Yun, Emmanuel Bengio, Dinghuai Zhang, Yoshua Bengio, Sungsoo Ahn, and Jinkyoo Park. Local search gflownets. *ArXiv*, abs/2310.02710, 2023. Minsu Kim, Joohwan Ko, Taeyoung Yun, Dinghuai Zhang, Ling Pan, Woochang Kim, Jinkyoo Park, Emmanuel Bengio, and Yoshua Bengio. Learning to scale logits for temperature-conditional gflownets, 2024b. Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortol, Haorui Wang, Dongxia Wu, Aaron Ferber, Yi-An Ma, Carla P Gomes, et al. Diffusion models as constrained samplers for optimization with unknown constraints. *arXiv preprint arXiv:2402.18012*, 2024. Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra Volokhova, Alex Hernández-García, Léna Néhale Ezzine, Yoshua Bengio, and Nikolay Malkin. A theory of continuous generative flow networks. International Conference on Machine Learning (ICML), 2023. Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, P. Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. *ArXiv*, abs/2302.12192, 2023. Dianbo Liu, Moksh Jain, Bonaventure F. P. Dossou, Qianli Shen, Salem Lahlou, Anirudh Goyal, Nikolay Malkin, Chris C. Emezue, Dinghuai Zhang, Nadhir Hassen, Xu Ji, Kenji Kawaguchi, and Yoshua Bengio. Gflowout: Dropout with generative flow networks. In *International Conference on Machine Learning*, 2022. Jiangyan Ma, Emmanuel Bengio, Yoshua Bengio, and Dinghuai Zhang. Baking symmetry into gflownets. Kanika Madan, Jarrid Rector-Brooks, Maksym Korablyov, Emmanuel Bengio, Moksh Jain, Andrei Nica, Tom Bosc, Yoshua Bengio, and Nikolay Malkin. Learning GFlowNets from partial episodes for improved convergence and stability. *International Conference on Machine Learning (ICML)*, 2022. Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in GFlowNets. *Neural Information Processing Systems (NeurIPS)*, 2022. 
Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio. GFlowNets and variational inference. *International Conference on Learning Representations* (ICLR), 2023. Pierre Marion, Anna Korba, Peter Bartlett, Mathieu Blondel, Valentin De Bortoli, Arnaud Doucet, Felipe Llinares-López, Courtney Paquette, and Quentin Berthet. Implicit diffusion: Efficient optimization through stochastic sampling, 2024. Sobhan Mohammadpour, Emmanuel Bengio, Emma Frejinger, and Pierre-Luc Bacon. Maximum entropy gflownets with soft q-learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 2593–2601. PMLR, 2024. Ling Pan, Moksh Jain, Kanika Madan, and Yoshua Bengio. Pre-training and fine-tuning generative flow networks, 2023a. Ling Pan, Nikolay Malkin, Dinghuai Zhang, and Yoshua Bengio. Better training of GFlowNets with local credit and incomplete trajectories. *International Conference on Machine Learning (ICML)*, 2023b. Ling Pan, Dinghuai Zhang, Aaron Courville, Longbo Huang, and Yoshua Bengio. Generative augmented flow networks. *International Conference on Learning Representations (ICLR)*, 2023c. Ling Pan, Dinghuai Zhang, Moksh Jain, Longbo Huang, and Yoshua Bengio. Stochastic generative flow networks. *Uncertainty in Artificial Intelligence (UAI)*, 2023d. Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, 2021. Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. *2022 IEEE/CVF Conference on Computer Vision and Pattern* Recognition (CVPR), pp. 10674–10685, 2021. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. *ArXiv*, abs/1505.04597, 2015. URL https://api.semanticscholar.org/CorpusID: 3719281. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. *Advances in neural information processing systems*, 35:36479– 36494, 2022. Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models, 2022. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, and Tommaso Biancalani. Towards understanding and improving gflownet training. *ArXiv*, abs/2305.07170, 2023. URL https://api.semanticscholar.org/CorpusID:258676487. Jascha Narain Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. *ArXiv*, abs/1503.03585, 2015. URL https: //api.semanticscholar.org/CorpusID:14888175. Yang Song, Jascha Narain Sohl-Dickstein, Diederik P. 
Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. *ArXiv*, abs/2011.13456, 2020. URL https://api.semanticscholar.org/CorpusID:227209335. Richard S Sutton. Learning to predict by the methods of temporal differences. *Machine Learning*, 3(1):9–44, 1988. Daniil Tiapkin, Nikita Morozov, Alexey Naumov, and Dmitry P Vetrov. Generative flow networks as entropy-regularized rl. In *International Conference on Artificial Intelligence and Statistics*, pp. 4213–4221. PMLR, 2024. Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, and Sergey Levine. Fine-tuning of continuous-time diffusion models as entropy-regularized control, 2024. Siddarth Venkatraman*, Moksh Jain*, Luca Scimeca*, Minsu Kim*, Marcin Sendera*, Mohsin Hasan, Luke Rowe, Sarthak Mittal, Pablo Lemos, Emmanuel Bengio, Alexandre Adam, Jarrid Rector-Brooks, Yoshua Bengio, Glen Berseth, and Nikolay Malkin. Amortizing intractable inference in diffusion models for vision, language, and control. 2024. Pascal Vincent. A connection between score matching and denoising autoencoders. *Neural Computation*, 23: 1661–1674, 2011. URL https://api.semanticscholar.org/CorpusID:5560643. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization, 2023a. Bram Wallace, Akash Gokul, Stefano Ermon, and Nikhil Vijay Naik. End-to-end diffusion latent optimization improves classifier guidance. *2023 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 7246–7256, 2023b. Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. *ArXiv*, abs/2306.09341, 2023. Xiaoshi Wu, Yiming Hao, Manyuan Zhang, Keqiang Sun, Zhaoyang Huang, Guanglu Song, Yu Liu, and Hongsheng Li. Deep reward supervisions for tuning text-to-image diffusion models, 2024. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. *ArXiv*, abs/2304.05977, 2023. Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model, 2024. Mengjiao Yang, KwangHwan Cho, Amil Merchant, Pieter Abbeel, Dale Schuurmans, Igor Mordatch, and Ekin Dogus Cubuk. Scalable diffusion for materials generation. *arXiv preprint arXiv:2311.09235*, 2023. David W. Zhang, Corrado Rainone, Markus F. Peschl, and Roberto Bondesan. Robust scheduling with gflownets. *ArXiv*, abs/2302.05446, 2023a. URL https://api.semanticscholar.org/CorpusID: 256827133. Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, and Yoshua Bengio. Unifying generative models with GFlowNets and beyond. *arXiv preprint arXiv:2209.02606v2*, 2022a. Dinghuai Zhang, Nikolay Malkin, Zhen Liu, Alexandra Volokhova, Aaron Courville, and Yoshua Bengio. Generative flow networks for discrete probabilistic modeling. International Conference on Machine Learning (ICML), 2022b. Dinghuai Zhang, Ricky Tian Qi Chen, Cheng-Hao Liu, Aaron C. Courville, and Yoshua Bengio. Diffusion generative flow samplers: Improving learning signals through partial trajectory optimization. 
*ArXiv*, abs/2310.02679, 2023b.

Dinghuai Zhang, Aaron Courville, Yoshua Bengio, Qinqing Zheng, Amy Zhang, and Ricky T. Q. Chen. Latent state marginalization as a low-cost approach for improving exploration, 2023c.

Dinghuai Zhang, Hanjun Dai, Nikolay Malkin, Aaron C. Courville, Yoshua Bengio, and Ling Pan. Let the flows tell: Solving graph combinatorial optimization problems with gflownets. *ArXiv*, abs/2305.17010, 2023d.

Dinghuai Zhang, L. Pan, Ricky T. Q. Chen, Aaron C. Courville, and Yoshua Bengio. Distributional gflownets with quantile flows. *arXiv preprint arXiv:2302.05793*, 2023e.

Dinghuai Zhang, Ricky T. Q. Chen, Cheng-Hao Liu, Aaron Courville, and Yoshua Bengio. Diffusion generative flow samplers: Improving learning signals through partial trajectory optimization, 2024a.

Dinghuai Zhang, Ling Pan, Ricky T. Q. Chen, Aaron Courville, and Yoshua Bengio. Distributional gflownets with quantile flows, 2024b.

Mingyang Zhou, Zichao Yan, Elliot Layne, Nikolay Malkin, Dinghuai Zhang, Moksh Jain, Mathieu Blanchette, and Yoshua Bengio. Phylogfn: Phylogenetic inference with generative flow networks, 2024.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In *AAAI*, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.

Heiko Zimmermann, Fredrik Lindsten, J.-W. van de Meent, and Christian Andersson Naesseth. A variational perspective on generative flow networks. *ArXiv*, abs/2210.07992, 2022. URL https://api.semanticscholar.org/CorpusID:252907672.

## A Proof

## A.1 Proof of Proposition 3

*Proof.* Recalling that $b(x_t, x_{t-1}) = \text{stop-gradient}\left[\log \frac{F_\phi(x_t, t)\, p_\theta(x_{t-1}|x_t)}{F_\phi(x_{t-1}, t-1)\, q(x_t|x_{t-1})}\right]$, we have

$$
\begin{aligned}
&\nabla_\theta D_{\mathrm{KL}}\!\left( p_\theta(x_{t-1}|x_t) \,\Big\|\, \frac{F_\phi(x_{t-1}, t-1)\, q(x_t|x_{t-1})}{F_\phi(x_t, t)} \right) \\
&= \nabla_\theta \int p_\theta(x_{t-1}|x_t) \log \frac{F_\phi(x_t,t)\, p_\theta(x_{t-1}|x_t)}{F_\phi(x_{t-1},t-1)\, q(x_t|x_{t-1})} \,\mathrm{d}x_{t-1} \\
&= \int \nabla_\theta p_\theta(x_{t-1}|x_t) \log \frac{F_\phi(x_t,t)\, p_\theta(x_{t-1}|x_t)}{F_\phi(x_{t-1},t-1)\, q(x_t|x_{t-1})} \,\mathrm{d}x_{t-1} + \int p_\theta(x_{t-1}|x_t)\, \nabla_\theta \log \frac{F_\phi(x_t,t)\, p_\theta(x_{t-1}|x_t)}{F_\phi(x_{t-1},t-1)\, q(x_t|x_{t-1})} \,\mathrm{d}x_{t-1} \\
&= \int p_\theta(x_{t-1}|x_t)\, \nabla_\theta \log p_\theta(x_{t-1}|x_t)\, b(x_t, x_{t-1}) \,\mathrm{d}x_{t-1} + \underbrace{\int p_\theta(x_{t-1}|x_t)\, \nabla_\theta \log p_\theta(x_{t-1}|x_t) \,\mathrm{d}x_{t-1}}_{=\nabla_\theta \int p_\theta(x_{t-1}|x_t)\,\mathrm{d}x_{t-1} = \nabla_\theta 1 = 0} \\
&= \mathbb{E}_{x_{t-1}\sim p_\theta(\cdot|x_t)}\!\left[ b(x_t, x_{t-1})\, \nabla_\theta \log p_\theta(x_{t-1}|x_t) \right]. \qquad \square
\end{aligned}
$$

## B More About DAG

## B.1 RL Optimal Solutions of the Denoising MDP

Training a standard RL algorithm within the diffusion MDP of Section 3.1 to perfection means the model would only generate a single trajectory with the largest reward value. In practice, this usually comes with the disadvantage of mode collapse in the generated samples. One direct solution is soft / maximum-entropy RL (Ziebart et al., 2008; Fox et al., 2017; Haarnoja et al., 2017; Zhang et al., 2023c), whose optimal solution is a trajectory-level distribution in which the probability of generating each trajectory is proportional to its cumulative trajectory reward, *i.e.*, $p_\theta(x_{0:T}) \propto \sum_t R_t(x_t) = R(x_0)$. However, in theory this means $p_\theta(x_0) = \int p_\theta(x_{0:T}) \,\mathrm{d}x_{1:T} \propto \int R(x_0) \,\mathrm{d}x_{1:T} = R(x_0) \cdot \int 1 \,\mathrm{d}x_{1:T}$, which is not a well-defined finite quantity for unbounded continuous spaces. In contrast, the optimal solution of a GFlowNet is $p_\theta(x_0) \propto R(x_0)$.

## B.2 Experimental Details

Regarding training hyperparameters, we follow the DDPO GitHub repository implementation and describe them below for completeness. We use classifier-free guidance (Ho & Salimans, 2022, CFG) with a guidance weight of 5. We use a 50-step DDIM schedule. We use 8 NVIDIA A100 80GB GPUs for each task, with a batch size of 8 per GPU. We do 4-step gradient accumulation, which makes the effective batch size 256. For each "epoch", we sample 512 trajectories during the rollout phase and perform 8 optimization steps during the training phase. We train for 100 epochs. We use a 3 × 10⁻⁴ learning rate for both the diffusion model and the flow function model without further tuning. We use the AdamW optimizer and clip gradients to norm 1. We set ϵ = 1 × 10⁻⁴ in Equation 14. We use bfloat16 precision. The GFlowNet framework requires the reward function to always be non-negative, so we take the exponential of the reward as the GFlowNet reward. We also set the reward exponent to β = 100 (*i.e.*, setting the distribution temperature to 1/100); therefore, log R(·) = βR_original(·). Note that in GFlowNet training practice, we only need the logarithm of the reward rather than the original reward value. We linearly anneal β from 0 to its maximal value over the first half of training; we found that this almost does not change the final result but is helpful for training stability. For DAG-KL, we put the final β coefficient on the KL gradient term. We also find a KL regularization $D_{\mathrm{KL}}\big(p_\theta(x_{t-1}|x_t) \,\|\, p_{\theta_{\mathrm{old}}}(x_{t-1}|x_t)\big)$ to be helpful for stability (this is also mentioned in Fan et al. (2023)). In practice, it essentially adds an $\ell_2$ regularization term between the outputs of the U-Net after CFG under the current model and the previous rollout model. We simply use a coefficient of 1 on this term without further tuning.
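To connect the Proposition 3 estimator above with these implementation details, here is a minimal sketch of the DAG-KL surrogate loss (the function and tensor names are hypothetical placeholders; in our pipeline the log-densities come from the flow network, the LoRA-adapted denoiser, and the fixed forward noising kernel):

```python
import torch


def dag_kl_surrogate(log_flow_t: torch.Tensor,
                     log_flow_tm1: torch.Tensor,
                     log_pf: torch.Tensor,
                     log_q: torch.Tensor) -> torch.Tensor:
    """Surrogate whose gradient matches Proposition 3.

    log_flow_t:   log F_phi(x_t, t)
    log_flow_tm1: log F_phi(x_{t-1}, t-1)
    log_pf:       log p_theta(x_{t-1} | x_t), differentiable w.r.t. theta
    log_q:        log q(x_t | x_{t-1}), the fixed forward (noising) kernel
    """
    # b(x_t, x_{t-1}) is evaluated under stop-gradient.
    with torch.no_grad():
        b = log_flow_t + log_pf - log_flow_tm1 - log_q
    # The gradient of E[b * log p_theta] equals E[b * grad log p_theta].
    return (b * log_pf).mean()
```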
For each "epoch", we sample 512 trajectories during the rollout phase and perform 8 optimization steps during the training phase. We train for 100 epochs. We use a 3 × 10−4learning rate for both the diffusion model and the flow function model without further tuning. We use the AdamW optimizer and gradient clip with the norm being 1. We set ϵ = 1 × 10−4in Equation 14. We use bfloat16 precision. The GFlowNet framework requires the reward function to be always non-negative, so we just take the exponential of the reward to be used as the GFlowNet reward. We also set the reward exponential to β = 100 (*i.e.*, setting the distribution temperature to be 1/100). Therefore, log R(·) = βRoriginal(·). Note that in GFlowNet training practice, we only need to use the logarithm of the reward rather than the original reward value. We linearly anneal β from 0 to its maximal value in the first half of the training. We found that this almost does not change the final result but is helpful for training stability.For DAG-KL, we put the final β coefficient on the KL gradient term. We also find using a KL regularization DKL (pθ(xt−1|xt)∥pθold (xt−1|xt)) to be helpful for stability (this is also mentioned in Fan et al. (2023)). In practice, it is essentially adding a ℓ2 regularization term on the output of the U-Net after CFG between the current model and previous rollout model. We simply use a coefficient 1 on this term without further tuning. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) Figure 8: Samples on CIFAR-10 diffusion alignment experiments. The reward function is the probability of the generated image falling into the categories of car, truck, ship, and plane calculated by a pretrained classifier. The RL baseline shows mode collapse behaviors while the target distribution is actually multimodal. Figure 9: Ablation study for the forward-looking (FL) usage in Section 3.2 on the Aesthetic reward task. We use Stable Diffusion v1.5 as base model and use LoRA for post-training, following Black et al. (2023). For the architecture of the state flow function, we take a similar structure to the downsample part of the U-Net. The implementation is based on the hugging face diffusers package. We use 3 "CrossAttnDownBlock2D" blocks and 1 "DownBlock2D" and do downsampling on all of them. We set the layers per block to be 1, and set their block out channels to be 64, 128, 256, 256. We use a final average pooling layer with kernel and stride size 4 to output a scalar given inputs including latent image, time step, and prompt embedding. We do not report diversity metric as in previous GFlowNet literature, as the average pairwise Euclidean distance in high dimensional space (64 × 64 × 4 > 10, 000 dim.) is not a meaningful metric. For computing diversity in Table 1, we take a trained model and independently generate two batches of images based on corresponding prompts, with the batch size being 2048. For our proposed methods, we choose the earliest checkpoint with a reward larger than the DDPO final rewards. We save our checkpoints for every 10 epochs. We use an Inception network to compute the mean and covariance features and calculate the Frechet distance; then we use the resulting FID as the diversity metric. Additionally, we show in Figure 7 that DAG-KL achieves a better Pareto frontier on the reward-diversity trade-off on the HPSv2 task. ## B.3 Cifar-10 Toy Example We also include a toy experiment on a CIFAR-10 pretrained DDPM1. 
## B.3 CIFAR-10 Toy Example

We also include a toy experiment on a CIFAR-10 pretrained DDPM¹. We train a ResNet18 classifier and set the reward function to be the probability of the generated image falling into the categories of car, truck, ship, and plane (a minimal sketch of this reward is given at the end of this appendix). We use the same hyperparameters as in the Stable Diffusion setting, except that we only use 1 GPU with a batch size of 256 for each run, without gradient accumulation. We illustrate the generation results in Figure 8. We use DAG-DB here; the DAG-KL generations are similar and indistinguishable from it. We can see that in this relatively toy task, the RL baseline easily optimizes the problem to extremes and exhibits mode collapse to some extent (only generating samples of a particular plane). In contrast, the generation results of our methods are diverse and cover different classes of vehicles. Both methods achieve an average log probability larger than −0.01, which means the probability of falling into the target categories is very close to 1.

![15_image_2.png](15_image_2.png)

Figure 7: Scatter plot of model diversity versus reward for models trained for different numbers of epochs (shown beside each point). It can be seen that our method (DAG-KL) achieves a better reward-diversity trade-off.

¹https://huggingface.co/google/ddpm-cifar10-32

![16_image_0.png](16_image_0.png)

Figure 10: More text-image alignment results. We display four different prompts and the corresponding generation visualizations from the original Stable Diffusion (1st row), DDPO (2nd row), DAG-DB (3rd row), and DAG-KL (4th row) models to compare their alignment abilities.

## B.4 More Results

In Figure 9, we perform an ablation study on the proposed denoising-diffusion-specific forward-looking technique for the Aesthetic score task. Specifically, we compare the left- and right-hand sides of Equation 7, where we use "naive FL" to refer to the left-hand side and "ours" for the right-hand side, as the latter is just the DAG-DB method in the main text. We also show the performance of not using the forward-looking technique, labeled "without FL" in the figure. We can see that neither variant achieves effective learning compared to our proposed method. In Figure 10, we show more visual comparisons of the text-image alignment performance of the models trained on the HPSv2 reward, in a format similar to Figure 5.
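As referenced in Section B.3, here is a minimal sketch of the classifier-based reward used in the CIFAR-10 toy experiment (the classifier loading is omitted, and the class indices follow the standard CIFAR-10 ordering):

```python
import torch
import torch.nn.functional as F

# Standard CIFAR-10 indices: plane = 0, car = 1, ship = 8, truck = 9.
TARGET_CLASSES = [0, 1, 8, 9]


def vehicle_log_reward(classifier: torch.nn.Module,
                       images: torch.Tensor) -> torch.Tensor:
    """Log-probability that each generated image is a car, truck, ship, or plane."""
    with torch.no_grad():
        probs = F.softmax(classifier(images), dim=-1)   # (B, 10)
        target_prob = probs[:, TARGET_CLASSES].sum(dim=-1)
    return torch.log(target_prob.clamp_min(1e-12))
```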
# Human–AI Safety: A Descendant of Generative AI and Control Systems Safety

Anonymous authors Paper under double-blind review

## Abstract

Artificial intelligence (AI) is interacting with people at an unprecedented scale, offering new avenues for immense positive impact, but also raising widespread concerns around the potential for individual and societal harm. Today, the predominant paradigm for human–AI safety focuses on fine-tuning the generative model's outputs to better agree with human-provided examples or feedback. In reality, however, the consequences of an AI model's outputs cannot be determined in isolation: they are tightly entangled with the responses and behavior of human users over time. In this paper, we distill key complementary lessons from AI safety and control systems safety, highlighting open challenges as well as key synergies between both fields. We then argue that meaningful safety assurances for advanced AI technologies require reasoning about how the feedback loop formed by AI outputs and human behavior may drive the interaction towards different outcomes. To this end, we introduce a unifying formalism to capture dynamic, safety-critical human–AI interactions and propose a concrete technical roadmap towards next-generation human-centered AI safety.

## 1 **Introduction**

About 90 million people fly around the world every week (ICAO, 2019), protected by an intricate mesh of safety measures, from certified physical and software components to thoroughly trained human pilots. Within just a year of becoming broadly available, ChatGPT has surpassed air travel's weekly usage at 100 million users (Heath, 2023), becoming one of the most widely used technologies in human history. What is protecting these 100 million weekly users? In the age of internet-scale generative artificial intelligence (AI), the problem of AI safety has exploded in interest across academic (Russell, 2019; Hendrycks et al., 2021), corporate (Amodei et al., 2016; Ortega et al., 2018; OpenAI, 2022a), and regulatory communities (White House, 2023; Union, 2021). Driving this interest is the fact that generative AI is fundamentally *interactive*: users engage with it through typed or spoken dialogue, generating essays, computer code, and visual art (OpenAI, 2022b). This widespread use has begun to expose the breadth of individual and social risks that these new technologies carry when used by people. For example, large language models (LLMs) have produced dialogue that fueled a person's thoughts of self-harm (Xiang, 2023) and generative art models have been found to produce sexist images (OpenAI, 2022c), which can exacerbate gender divides. Even with a growing body of literature aimed at addressing these open challenges (Casper et al., 2023), we still lack a unified grasp on human–AI interaction that enables rigorous safety analysis, systematic risk mitigation, and reliable at-scale deployment.

![0_image_0.png](0_image_0.png)

Figure 1: We identify a high-value window of opportunity to combine the growing capabilities of generative AI with the robust, interaction-aware dynamical safety frameworks from control theory. This synergy can unlock a new generation of human–AI safety mechanisms that can perform systematic risk mitigation at scale.

At the same time, despite some undeniably unique considerations, these concerns are not exclusive to generative AI.
Safety has long been a core requirement for deploying autonomous systems at scale in embodied domains like aviation (Tomlin et al., 1998; Kochenderfer et al., 2012), power systems (Dobbe et al., 2020), robotics (Haddadin et al., 2012; ISO 15066; Bostelman et al., 2018), and autonomous driving (Althoff & Dolan, 2014; ISO 22737:2021). To meet this requirement, the control systems community has pioneered safety methodologies that naturally model the *feedback loops* between the autonomous system's decisions and its environment. In the last decade, safety efforts have focused on feedback loops induced by human interaction: autonomous cars that interact with diverse road users such as cyclists, pedestrians, and other vehicles (Noyes, 2023), or automated flight control systems that negotiate for control with pilots (Nicas et al., 2019). Unfortunately, obtaining assured autonomous behavior that generalizes across human interactions in multiple contexts remains a central open challenge. In this paper, we argue that the fields of AI and control systems have *common goals* and *complementary* strengths for solving human–AI safety challenges. On one hand, control systems provide a rigorous mathematical and algorithmic framework for certifying the safety of interactive systems, but so far it has been limited by hand-engineered representations and rigid, context-oblivious models of human behavior. On the other hand, the AI community has pioneered the use of internet-scale datasets to unlock remarkably general latent representations and context-aware human interaction models, but it lacks a mature framework for automatically analyzing the dynamic feedback loops between AI systems and their users. Our survey of the safety landscape across AI and control systems reveals a high-value window of opportunity to connect control-theoretic safety assurances with the general representations and rich human interaction modalities offered by generative AI. Applying a unified lens, we propose a concrete technical roadmap towards human-centered AI systems that can anticipate, detect, and avoid potential future interaction hazards. We believe that technical progress in this direction will prove achievable and fruitful, but only through close collaboration between researchers and practitioners from both the AI and control communities. Our hope is to inspire a human–AI safety community that is a true descendant of generative AI and control systems safety. Statement of Contributions: This paper identifies new synergies between AI and control systems safety, culminating in a unifying analytical framework that formalizes human–AI safety as an actionable technical problem. Our core contention is that AI safety should be treated as a dynamic feedback loop: a multi-step process wherein current AI decisions and the resulting human responses influence future safety outcomes. We make three contributions: 1. **Lessons learned from AI and control systems.** In Section 3, we outline the complementary lessons that can be drawn from AI safety and control systems safety, highlighting synergies between control systems formalisms and generative AI capabilities, as well as open challenges in both fields. 2. **A technical roadmap for human–AI safety.** In Section 4 we synthesize the insights gained from our survey into a concrete technical roadmap. Specifically, we formulate a human–AI game which mathematically models the multi-agent, dynamic interaction process between people and increasingly capable AI. 
Along the way, we rigorously define the safety assurances we can hope for in human–AI safety, outline the necessary mathematical models, and identify the open technical challenges. 3. **Frontier framework: Human–AI safety filters.** In Section 5, we extend a foundational control-theoretic safety mechanism to the human–AI domain. We propose *human–AI safety filters*, which rigorously monitor the operation of an AI at runtime and (minimally) modify its intended action to ensure safety satisfaction. By mathematically formulating safety filters for *general* human–AI systems, we present a concrete technical challenge poised for collaboration between the control systems and AI communities.

## 2 **Values vs. Needs: Defining Safety-Critical Human–AI Interaction**

Before we can proceed, we must answer the question "What defines a safety-critical human–AI interaction?" In addition to AI safety's current focus on *value* alignment, we argue that a high-stakes AI system must also understand human *needs*, whose violation could result in unacceptable, often irreparable damage to people.

![2_image_0.png](2_image_0.png)

(a) **Safety in human–automation systems.** An autonomous car driving at high speed on the highway approaches stopped traffic. Determining which present action (e.g., brake or accelerate) will prevent *future* constraint violation (i.e., collision) is a fundamental aspect of safety-critical control.

(b) **Safety in human–AI dialogue.** The AI chatbot must decide an utterance (i.e., "action") to help the child prepare their food. Simply recommending the child take a bowl in the *present* can cause a constraint violation in the *future*, when the child puts a metal bowl in the microwave. A safe utterance avoids this preemptively, specifying to take a microwave-safe bowl.

Figure 2: Examples of safety in embodied human–automation systems vs. human–AI dialogue.

In the mathematical representation of the AI's decision problem, human *values* correspond to the optimization *objective* (e.g., reward accrued over time or preferences), whereas human *needs* correspond to the hard *constraints* that must always be satisfied. We thus define **safety** as the continued satisfaction of the human's critical needs at all times. In this paper, we study human–AI interaction as a *closed-loop dynamical system* driven by the human's actions and the AI's outputs, which jointly influence the AI's learning and the human's future behavior. We define a **safety-critical human–AI system** as one in which the violation of a critical human need is possible during the interaction evolution, and therefore the decisions made by the AI must actively ensure that such violations do not happen. Even a seemingly innocuous AI chatbot can induce catastrophic outcomes for the human, such as irreparable financial loss resulting from poor investment recommendations (Hicks, 2023) or bodily harm (Xiang, 2023). We argue that, since any practical AI system will be uncertain about the human's current internal state, and therefore their future actions, it should be required to ensure that safety can be maintained for any conceivable human behavior (including appropriately pessimistic instances). These key considerations are laid out more formally in our human–AI systems theory roadmap in Sections 4 and 5.
## 3 **Human-in-the-Loop Safety: Complementary Lessons from AI and Control**

Over the past decades, the control systems and AI communities have developed complementary insights on how to model human interaction and assess the safety of an intelligent system. In this section, we review the technical progress in each field and highlight synergies between their respective tools and frameworks.

## 3.1 **Lessons from Control Systems**

The fundamental problem underpinning safety-critical control is that *present* actions which do not appear to violate constraints can still steer the system into states from which it is impossible to avoid catastrophic failures in the *future*. For example, consider the autonomous car approaching a traffic jam in Figure 2a: even though accelerating would not *immediately* cause a collision, it could doom the car to rear-end stalled traffic in a few moments, despite any later attempts to slow down; instead, if the car starts braking now, it can come to an eventual stop before reaching the traffic jam. While this case may appear straightforward, automatically determining *where* (from what states) and *how* (through what course of action) an autonomous system can maintain safety is an extremely challenging problem, especially in uncertain conditions and in the presence of other agents (Bansal et al., 2017; Luckcuck et al., 2019; Brunke et al., 2022; Dawson et al., 2023).

**Dynamical Safety Filtering.** Safety filters are an increasingly popular family of approaches that aim to ensure safety for any autonomous task policy (Hewing et al., 2020; Hsu et al., 2024; Wabersich et al., 2023). The filter automatically detects candidate actions that could lead to future constraint violations and suitably modifies them to preserve safety. Broadly, safety filters may rely on a value function to classify (and steer away from) unsafe states (Mitchell et al., 2005; Margellos & Lygeros, 2011; Fisac et al., 2015; Singh et al., 2017; Ames et al., 2019; Qin et al., 2021; Chen et al., 2020; Li et al., 2023; Dawson et al., 2022) or roll out potential scenarios to directly predict (and steer away from) future violations (Mannucci et al., 2017; Bastani, 2021). While traditionally many of the numerical tools with formal guarantees were not scalable to high-dimensional systems, the past two decades have demonstrated significant theoretical and computational advances for certifying general high-dimensional systems via safety-critical reinforcement learning (Akametalu et al.; Fisac et al., 2019; Hsu et al., 2023), deep safety value function approximation (Darbon et al., 2020), classification (Allen et al., 2014; Rubies-Royo et al., 2019), and self-supervised learning (Bansal & Tomlin, 2021). These approaches are already leveraging modern tools pioneered by the AI community to obtain scalable assurances, establishing a natural bridge with other AI paradigms, such as generative models.
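As a minimal sketch of the least-restrictive filtering logic described above (the value function `V`, dynamics model, and both policies are abstract placeholders rather than a specific method from the cited works): the filter forwards the task policy's action whenever the predicted next state remains safe, and otherwise falls back to a safety-preserving action.

```python
from typing import Any, Callable


def safety_filter(state: Any,
                  task_policy: Callable[[Any], Any],
                  safety_policy: Callable[[Any], Any],
                  dynamics: Callable[[Any, Any], Any],
                  V: Callable[[Any], float],
                  margin: float = 0.0) -> Any:
    """Least-restrictive safety filter: V(s) >= 0 marks the safe set."""
    candidate = task_policy(state)
    if V(dynamics(state, candidate)) >= margin:
        return candidate           # the task action preserves future safety
    return safety_policy(state)    # otherwise, override with the safe action
```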
**Synergies with AI.** We see a major opportunity to advance these rigorous safety frameworks to the implicit representations and context-aware *models* of interactive generative AI systems. Consider the example in Figure 2b, which hypothesizes a safety-critical human–AI dialogue interaction. When the child asks for help preparing food, the AI chatbot must determine which current utterance (i.e., "action") could potentially yield safety violations. A recommendation to put any bowl in the microwave can result in the child dangerously microwaving a metal bowl in the future. With a safety filter, the AI should mitigate this preemptively by modifying the utterance to specify a microwave-safe bowl. Translating this intuitive example to control systems safety approaches will require new formalisms amenable to the latent representations implicit in interaction (e.g., language-based representations) and encoding safety constraints that are hard to hand-specify exhaustively (e.g., metal is dangerous in microwaves).

**Human–Automation Systems Safety.** The core modeling framework enabling human–automation systems safety is *robust dynamic game theory* (Isaacs, 1954; Başar & Olsder, 1998). In such zero-sum dynamic games, the automation system (e.g., robot) must synthesize a safety-preserving strategy against realizations of a "virtual" human adversary policy. Within this model lies another key lesson from control systems, the operational design domain (ODD), which specifies the conditions and behavioral assumptions under which the system can be expected to operate safely (On-Road Automated Driving Committee, 2021). For example, in domains like aircraft collision avoidance (Vitus & Tomlin, 2008), the ODD specifies the limits of each aircraft's thrust and angle of attack that can be applied during game-theoretic safety analysis. In the absence of high-quality human models, the safest ODD has traditionally been a rigidly pessimistic one, often yielding overly conservative automation behavior even in nominal interactions (Bajcsy et al., 2020). To mitigate this, the control systems community has explored leveraging hand-designed (Althoff et al., 2011; Liu et al., 2017; Orzechowski et al.), planning-based (Bajcsy et al., 2020; Tian et al., 2022), or data-driven (Driggs-Campbell et al., 2018; Li et al., 2021a; Nakamura & Bansal, 2023; Hu et al., 2023) models of human behavior to obtain predictive human action bounds under which the safety assurance is then provided. Nevertheless, obtaining assurances under generalizable and context-aware predictive models of human interaction with automation is still an open problem.

**Synergies with AI.** We see a key opportunity to leverage better models of humans that encode generalizable context and semantics of interaction. Furthermore, there is an open challenge in how to capture "appropriate pessimism" in these data-driven predictive human models so that the resulting assurances are robust but not unduly conservative. We explore this further in Section 3.2.

## 3.2 **Lessons from AI**

Many insights can be drawn from the decades-long history of AI (Wiener); here, we focus our attention on the lessons of the last decade, drawn from advanced (often web-scale) generative models. First, we discuss the landscape of existing AI safety mechanisms—from value alignment to monitoring—shedding light on where control systems techniques are best suited to make an impact. Then, we discuss the frontier of using generative AI as agent "simulators", which offers a strategic bridge between control systems safety frameworks and AI capabilities.

**Generative AI Safety Mechanisms.** Broadly speaking, the predominant AI safety mechanisms can be divided into three categories: training-time alignment, evaluation-time stress-testing, and deployment-time monitoring (see Amodei et al. (2016) and Hendrycks et al. (2021) for detailed overviews).
Training-time methods typically focus on *value alignment*, which is a central technical problem concerned with building "models that represent and safely optimize hard-to-specify human values" (Hendrycks et al., 2021) and is dominated by techniques such as reinforcement learning from human feedback (Ouyang et al., 2022; Ziegler et al., 2019; Lee et al., 2023; Munos et al., 2023; Swamy et al., 2024; Chen et al., 2024) and direct preference optimization (Rafailov et al., 2023; Wallace et al., 2023). These training-time paradigms are complemented by adversarial stress-testing, such as red-teaming (Ganguli et al., 2022; Perez et al., 2022; Wei et al., 2023; Achiam et al., 2023; Qi et al., 2023), wherein the stress-tester (human or automated) aims to explicitly elicit unsafe outputs from the trained generative model. Unsafe input-output pairs can be used in a variety of ways, such as training classifiers to detect offensive content (Perez et al., 2022) or re-training the model with all the classified harmful outputs replaced by non-sequiturs (Xu et al., 2021). Finally, monitoring is concerned with deployment-time safety, and is rooted in anomaly detection (Chandola et al., 2009) which seeks to identify out-of-distribution (Schlegl et al., 2017; Hendrycks et al., 2018; Goyal et al., 2020) or explicitly adversarial inputs (Brundage et al., 2018). Synergies with Control. The AI community's goals of adversarial stress-testing and monitoring are most closely aligned with the goals of control systems safety (Section 3.1). It is precisely in this context where we see a high-value opportunity: in human–AI interaction, the detection of an unsafe input alone is not enough; detection must be tightly coupled with the automatic synthesis of mitigation strategies. This kind of detection and mitigation coupling is precisely what control systems safety frameworks excel at. Crucially, these mitigation strategies transcend short-sighted measures by incorporating long-horizon foresight on how a *sequence* of interactions can influence the system's future safety. Generative AI as Agent Simulators. Thanks to the explosion of human behavior data in the form of physical motion trajectories, YouTube and broadcast videos, internet text and conversations, and recorded virtual gameplay, we are seeing generative AI as an increasingly promising agent simulator. In physical settings, generative AI has dominated motion prediction in the context of autonomous driving (Ivanovic et al., 2018; Seff et al., 2023) and traffic simulation (Bergamini et al., 2021; Suo et al., 2021; Zhong et al., 2023), enabled synthesizing complex full-body human motion such as playing tennis (Zhang et al., 2023), and generated realistic videos of ego-centric human behavior from text prompts (Du et al., 2023). For nonembodied agents, new results also show promise for using generative language models to simulate human-like conversations (Hong et al., 2023), to plan the high-level behavior of interactive video game agents (Park et al., 2023), and to play text-based strategy games such as Diplomacy in a way that is indistinguishable from people (Meta et al., 2022). Synergies with Control. As discussed in Section 3.1, access to generalizable and context-aware human models is an outstanding challenge in human–automation safety. Embedding these increasingly sophisticated generative AI agent simulators within control systems safety frameworks has the potential to enable human-aware AI stress-testing, monitoring, and mitigation strategy synthesis. 
## 4 **Towards a Human–AI Systems Theory**

We envision a new technical foundation for human–AI interaction that combines the rigorous mathematical principles underpinning control systems with the flexible, highly general representations that characterize generative AI systems. In the remainder of the paper, we lay down a roadmap for how such a framework can enable AI systems to reason systematically about uncertain interactions and potential future hazards, unlocking robustness properties and oversight capabilities that are out of our reach today. We begin in this section by bringing together the lessons from Section 3 into a **unified human–AI systems theory**.

## 4.1 **Operationalizing the Interaction Between People and AI**

To operationalize the interaction between people and AI, we need a model that is general enough to capture each agent's beliefs as well as their ability to influence future outcomes. We contend that the latent representations learned by generative AI systems provide a promising foundation on which to build a dynamical system model that accurately captures this complex temporal evolution.

**Human & AI States and Actions.** Consider a human agent (H) and an AI agent (AI), each with their own internal state and action spaces. The human's internal state $z^{\mathrm{H}} \in \mathcal{Z}^{\mathrm{H}}$ captures their current beliefs and intents, while the AI agent's internal state $z^{\mathrm{AI}} \in \mathcal{Z}^{\mathrm{AI}}$ encodes its current understanding of the ongoing interaction. For example, for an AI chatbot, $z^{\mathrm{AI}}$ can be the embedding of the conversation history based on a web-scale LLM encoder. The human interacts by taking actions $a^{\mathrm{H}} \in \mathcal{A}^{\mathrm{H}}$. In the chatbot example, $a^{\mathrm{H}}$ could be a text prompt, thumbs-up/down feedback on the chatbot's last output, or an external action like an online purchase. In general, the human's internal state $z^{\mathrm{H}}$ and the policy $\pi^{\mathrm{H}}: z^{\mathrm{H}} \mapsto a^{\mathrm{H}}$ by which they make decisions are unknown to the AI. The AI also interacts by taking actions $a^{\mathrm{AI}} \in \mathcal{A}^{\mathrm{AI}}$, which can represent a chatbot's next word or sentence, or external actions like automated online operations. Typically, these actions are dictated by the AI's *task policy* $\pi^{\mathrm{AI}}_{\S}: z^{\mathrm{AI}} \mapsto a^{\mathrm{AI}}$ (for example, the decoder of a pretrained LLM chatbot).

**Human–AI (HAI) Dynamical System.** Rooted in the control systems models from Section 3.1, we consider human–AI (HAI) interaction as a game which evolves the internal states of both agents, as well as the true state of the world, $s \in \mathcal{S}$. Let the privileged internal–external game state be $z := [s, z^{\mathrm{AI}}, z^{\mathrm{H}}]$. In general, no single agent has access to all components of $z$, but it is nonetheless useful for our conceptualization of the game's overall evolution. Throughout the interaction, each component of the game state evolves over time. The world state dynamics $s_{t+1} = f^{s}(s_t, a^{\mathrm{AI}}_t, a^{\mathrm{H}}_t)$ are influenced in general by both the human's and the AI's actions. The human's internal state, in turn, has dynamics $z^{\mathrm{H}}_{t+1} = f^{\mathrm{H}}(z^{\mathrm{H}}_t, a^{\mathrm{AI}}_t, a^{\mathrm{H}}_t, o^{\mathrm{H}}_t)$, affected by the human's *observations* $o^{\mathrm{H}}_t$, e.g., stimuli received from the outside world state $s_t$ beyond the immediate context of interaction with the AI system. While the above dynamics are not generally known to the AI, the AI may (explicitly or implicitly) learn to estimate them during interaction. This reasoning by the AI is precisely captured by the third component of our system, namely the evolution of the AI's internal state $z^{\mathrm{AI}}_t$, which (unlike the two unknowable components above) is directly accessible to the AI. In fact, $z^{\mathrm{AI}}_t$ is the AI's current representation of the entire game.
Crucially, the AI's internal state also evolves via its own dynamics

$$z_{t+1}^{\mathrm{AI}} = f^{\mathrm{AI}}(z_{t}^{\mathrm{AI}}, a_{t}^{\mathrm{AI}}, a_{t}^{\mathrm{H}}, o_{t}^{\mathrm{AI}}), \tag{1}$$

driven by the ongoing interaction $(a^{\mathrm{H}}_t, a^{\mathrm{AI}}_t)$ and, possibly, by the AI's observations $o^{\mathrm{AI}}_t$ of the world state $s_t$, e.g., through web crawling, incoming sensor data, and state estimation algorithms. From the standpoint of decision theory, $z^{\mathrm{AI}}$ is an *information state* that can be seen as *implicitly* encoding the sets $\hat{\mathcal{S}}(z^{\mathrm{AI}}) \subseteq \mathcal{S}$ and $\hat{\mathcal{Z}}^{\mathrm{H}}(z^{\mathrm{AI}}) \subseteq \mathcal{Z}^{\mathrm{H}}$ of *possible* world states $s$ and human internal states $z^{\mathrm{H}}$ given the AI's current knowledge. From the architectural standpoint, $z^{\mathrm{AI}}$ is typically a *latent state* maintained by a neural network (e.g., a transformer) that continually updates its value based on ongoing interactions $(a^{\mathrm{AI}}, a^{\mathrm{H}})$ and observations $o^{\mathrm{AI}}$. In other words, this neural network is an AI world model (Ha & Schmidhuber, 2018) that implements the AI's internal state dynamics $f^{\mathrm{AI}}$ (a deterministic Markovian transition *given* $o^{\mathrm{AI}}$, much like in a belief MDP).

**Operational Assumptions on Human Behavior.** A key consideration in any human–AI systems theory is the operational design domain (ODD, as described in Section 3.1). Specifically, what are the assumptions we place on human behavior during—and in between—interactions with the AI? Even though the AI does not have direct access to the human's policy or internal state, it can maintain a conservative predictive model of the human's conceivable behavior in any given situation. Let the predictive action bound be a set-valued mapping $\hat{\mathcal{A}}^{\mathrm{H}}: \mathcal{Z}^{\mathrm{AI}} \rightrightarrows \mathcal{A}^{\mathrm{H}}$ that delimits the actions $a^{\mathrm{H}} \in \hat{\mathcal{A}}^{\mathrm{H}}(z^{\mathrm{AI}})$ that the human can be expected to take given the AI's current representation, $z^{\mathrm{AI}}$. We refer to these actions as "allowable" throughout the manuscript. Adjusting this bound enables designers to instantiate a *spectrum* of operational assumptions on human behavior, from maximally conservative (i.e., $\hat{\mathcal{A}}^{\mathrm{H}}(z^{\mathrm{AI}}) \equiv \mathcal{A}^{\mathrm{H}}$) to normative (i.e., $\hat{\mathcal{A}}^{\mathrm{H}}(z^{\mathrm{AI}}) \subset \mathcal{A}^{\mathrm{H}}$). For example, this bound may be used to preclude reckless behavior such as the human taking a harmful action $a^{\mathrm{H}}$ while being aware, as per $z^{\mathrm{AI}}$, of its negative consequences.
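To summarize the model in Section 4.1, here is a minimal sketch of one round of the HAI dynamical system (all dynamics functions are abstract placeholders for $f^{s}$, $f^{\mathrm{H}}$, and $f^{\mathrm{AI}}$ as defined above):

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class HAIState:
    s: Any      # world state s_t
    z_ai: Any   # AI internal (information) state z_t^AI
    z_h: Any    # human internal state z_t^H (not observable by the AI)


def hai_step(z: HAIState, a_ai: Any, a_h: Any, o_h: Any, o_ai: Any,
             f_s: Callable, f_h: Callable, f_ai: Callable) -> HAIState:
    """One round of the human-AI game: both actions drive all three states."""
    return HAIState(
        s=f_s(z.s, a_ai, a_h),               # world dynamics
        z_h=f_h(z.z_h, a_ai, a_h, o_h),      # human belief/intent update
        z_ai=f_ai(z.z_ai, a_ai, a_h, o_ai),  # AI world-model update (Equation 1)
    )
```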
## 4.2 **Formalizing Safety-Critical Feedback Loops**

We now characterize the evolution of the HAI dynamical system over time. We will continue to use the language of control theory, but we will leverage the generative AI's learned internal representation $z^{\mathrm{AI}}$ to analyze interactive feedback loops and their safety outcomes directly in latent space.

**Failure Set.** Specifying what is considered a failure is the first step in any safety framework. Formalizing the conceptual definition of safety in Section 2, the privileged failure set $\mathcal{F}^* \subseteq \mathcal{S} \times \mathcal{Z}^{\mathrm{H}}$ is the set of world–human states $(s, z^{\mathrm{H}})$ that *violate a critical need* of the human. For example, this can include states in which the human is physically injured, financially ruined, socially ostracized, or psychologically harmed. In some contexts, the AI agent can observe failure conditions directly: a driver assist system can detect whether a vehicle is in collision. In other contexts, this may not be possible: an AI chatbot that generates a racist microaggression may not readily detect the psychological impact on a minority user. For this reason, a practical safety framework should seek to enforce safety from the AI's perspective by requiring that:

$$\forall t, \ z_{t}^{\mathrm{AI}} \notin \mathcal{F}^{\mathrm{AI}}. \tag{2}$$

Here, $\mathcal{F}^{\mathrm{AI}} \subseteq \mathcal{Z}^{\mathrm{AI}}$ is the AI's inferred failure set: the set of all internal states $z^{\mathrm{AI}}$ in which the AI assesses that the system may be in failure. For brevity, we refer to $\mathcal{F}^{\mathrm{AI}}$ as the "failure set" whenever there is no ambiguity.

![6_image_0.png](6_image_0.png)

Figure 3: Common sense failure identification via GPT-4. Today's web-trained generative AI models show the potential to identify common sense safety hazards from both text and images.

**Failure Specification Mechanisms.** Meaningful human–AI safety requires enabling a diversity of human stakeholders to encode their needs. Recent efforts in the AI community have explored various mechanisms for specifying requirements on AI system operation.
We organize these into a simple taxonomy, attending to whether the need is specific to a single person and whether it is expressly communicated to the AI.

1. **Factory rules (collective, explicit)**: Certain universal needs may be decided by societal stakeholders and explicitly encoded by system designers (Mu et al., 2023). Constitutional AI can be viewed as an early proposal for this type of mechanism, whereby an AI system is explicitly trained to identify potential responses or conditions that are "harmful, unethical, racist, sexist, toxic, dangerous, or illegal" based on a designer-generated corpus of examples (Bai et al., 2022).

2. **Common sense (collective, implicit)**: Some practical everyday needs are implicit in the human experience. For example, a common-sense need is to not be financially ruined or electrocuted. We hypothesize that, as generative AI models become increasingly accurate and expressive, the semantics of failure may be directly extracted from their learned representations by prompting (Li et al., 2021b; Guan et al., 2024). Figure 3 provides anecdotal evidence suggesting that even today's early web-trained generative AI models can be prompted (without fine-tuning) to discern whether a situation presents a common-sense safety hazard from both text and images.

3. **Direct feedback (individual, explicit)**: Some individual needs can only be learned from express human feedback. For example, if you have a severe allergy, you *need* to avoid eating food that could cause a serious anaphylactic reaction. This type of failure may be encoded through express feedback from an end user: for example, using edits (i.e., corrections) to the LLM's outputs (Gao et al., 2024) or human-provided harmfulness labels (Dai et al., 2024).

4. **Need reading (individual, implicit)**: By observing a specific person's behavior and engaging in interactions over time, the AI system may be able to infer their personal needs even if they are never made explicit (Shah et al., 2019). For example, a future AI chatbot may pick up cues indicating that a user is psychologically triggered by a particular topic, possibly due to undisclosed past trauma.

HAI Safety Definition. Given a failure specification, we seek to determine under what conditions the AI can maintain safety for all allowable realizations of the human's future behavior and, at the same time, to prescribe the most effective AI policy to do so. From the AI's standpoint, this amounts to characterizing the set of all *safe* information states $z_0^{\mathrm{AI}}$ from which there *exists* a best-effort AI policy that will steer the human–AI system clear of a future safety violation *for all* realizations of human policies allowed by its current uncertainty. Mathematically, this maximal safe set is characterized as

$$\Omega^{*}:=\left\{z_{0}^{\mathrm{AI}}\in\mathcal{Z}^{\mathrm{AI}}:\exists\pi^{\mathrm{AI}},\ \forall\hat{\pi}^{\mathrm{H}},\ \forall\tau\geq0,\ z_{\tau}^{\mathrm{AI}}\notin\mathcal{F}^{\mathrm{AI}}\right\}\tag{3}$$

where $z_\tau^{\mathrm{AI}}$ is the information state at a future time $\tau$, after both agents execute their respective policies¹ for $\tau$ steps from the initial state $z_0^{\mathrm{AI}}$. If $z_0^{\mathrm{AI}} \in \Omega^*$, then there exists some AI policy $\pi^{\mathrm{AI}}\colon z^{\mathrm{AI}} \mapsto a^{\mathrm{AI}}$ that keeps $z_\tau^{\mathrm{AI}}$ inside $\Omega^*$, and thus away from the failure set $\mathcal{F}^{\mathrm{AI}}$, for all time $\tau$.

¹Since the human actions $a^{\mathrm{H}}$ considered by the AI depend on its own internal state $z^{\mathrm{AI}}$ (which implicitly estimates plausible human internal states $z^{\mathrm{H}}$), the AI-hypothesized human policies are, effectively, mappings $\hat{\pi}^{\mathrm{H}}\colon z^{\mathrm{AI}} \mapsto a^{\mathrm{H}}$.
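To illustrate the quantifier structure of Equation 3 (there *exists* an AI policy such that *for all* allowable human behaviors no failure occurs), the following sketch decides finite-horizon safety by exhaustive game-tree search over a hypothetical finite model; observations are folded into a deterministic transition `f_ai`, and all argument sets are assumed small enough to enumerate, which is of course not the case for real latent-state systems.

```python
def is_safe(z, f_ai, ai_actions, human_bound, in_failure, horizon):
    """Finite-horizon check of Eq. 3: does there EXIST an AI action such that,
    for ALL human actions allowed by the predictive bound, the system can be
    kept out of the failure set for `horizon` more steps?"""
    if in_failure(z):
        return False
    if horizon == 0:
        return True
    return any(                                   # exists an AI action ...
        all(                                      # ... robust to all allowable
            is_safe(f_ai(z, a_ai, a_h), f_ai,     #     human actions
                    ai_actions, human_bound, in_failure, horizon - 1)
            for a_h in human_bound(z)
        )
        for a_ai in ai_actions
    )
```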
The pessimism of the safety analysis is regulated by restricting the worst-case human behavior to be consistent with the predictive action bound: $a_\tau^{\mathrm{H}} \in \hat{\mathcal{A}}^{\mathrm{H}}(z_\tau^{\mathrm{AI}})$. The construction of these predictive action bounds can once again benefit from the generative AI's predictive power. For example, a large language model can be queried with prompts based on the ODD of the safety analysis and used to sample diverse hypothetical human responses to AI generations (e.g., to simulate antagonistic or goal-driven dialogue (Hong et al., 2023)).

## 4.3 **Posing the Safety-Critical Human–AI Game**

We now have all the key mathematical components for a rigorous safety analysis of the human–AI interaction loop. We cast the computation of $\Omega^*$ as a zero-sum dynamic game between the AI and a *virtual adversary* that chooses the worst-case realization of the human's behavior allowed by the AI's uncertainty. The game's outcome from any initial information state $z_0^{\mathrm{AI}}$, under optimal play, can be encoded through the safety value function (Barron & Ishii, 1989; Tomlin et al., 2000; Lygeros, 2004; Mitchell et al., 2005; Fisac et al., 2015):

$$V(z_{0}^{\mathrm{AI}}):=\max_{\pi^{\mathrm{AI}}}\min_{\hat{\pi}^{\mathrm{H}}}\left(\min_{t\geq0}\ell(z_{t}^{\mathrm{AI}})\right),\qquad\Omega^{*}=\left\{z_{0}^{\mathrm{AI}}\in\mathcal{Z}^{\mathrm{AI}}:V(z_{0}^{\mathrm{AI}})\geq0\right\}.\tag{4}$$

Here, $\ell\colon \mathcal{Z}^{\mathrm{AI}} \to \mathbb{R}$ is a safety margin function that measures the proximity of the HAI system to the failure set and encodes $\mathcal{F}^{\mathrm{AI}}$ as the zero sublevel set $\{z^{\mathrm{AI}} : \ell(z^{\mathrm{AI}}) < 0\}$. If the value $V(z_0^{\mathrm{AI}})$ is negative (i.e., $z_0^{\mathrm{AI}} \notin \Omega^*$), this means that, no matter what the AI agent chooses to do, it cannot avoid eventually entering $\mathcal{F}^{\mathrm{AI}}$ under the worst-case realization of the allowable human actions $a_t^{\mathrm{H}} \in \hat{\mathcal{A}}^{\mathrm{H}}(z_t^{\mathrm{AI}})$ over time. Critically, the game posed in Equation 4 quantifies the best the AI system could ever do to maintain safety—hence, the *maximal* safe set.

The value function defined above satisfies the fixed-point Isaacs equation (Isaacs, 1954) (the game-theoretic counterpart of the Bellman equation) relating the current safety margin $\ell$ to the minimum-margin-to-go $V$ after one round of play:

$$V(z^{\mathrm{AI}})=\max_{a^{\mathrm{AI}}\in\mathcal{A}^{\mathrm{AI}}}\ \min_{a^{\mathrm{H}}\in\hat{\mathcal{A}}^{\mathrm{H}}(z^{\mathrm{AI}})}\ \underbrace{\min\left\{\ell(z^{\mathrm{AI}}),\ \mathbb{E}_{o^{\mathrm{AI}}}\!\left[V\!\left(f^{\mathrm{AI}}(z^{\mathrm{AI}},a^{\mathrm{AI}},a^{\mathrm{H}},o^{\mathrm{AI}})\right)\right]\right\}}_{Q(z^{\mathrm{AI}},a^{\mathrm{AI}},a^{\mathrm{H}})}.\tag{5}$$

The solution to this zero-sum dynamic programming equation yields a maximin policy pair $(\pi^{\mathrm{AI}}_{\text{safe}}, \pi^{\mathrm{H}}_{\dagger})$ containing the AI's best safety effort $\pi^{\mathrm{AI}}_{\text{safe}}$ to maximize the closest future separation from the failure set, and the worst-case human behavior $\pi^{\mathrm{H}}_{\dagger}$ that would close this distance and, if possible, make it reach zero.² The policies $(\pi^{\mathrm{AI}}_{\text{safe}}, \pi^{\mathrm{H}}_{\dagger})$ can be approximately computed through modern learning-based AI methods such as self-supervised learning (Bansal & Tomlin, 2021) or adversarial self-play RL (Silver et al., 2018; Pinto et al., 2017; Hsu et al., 2023). This enables scalable learning from experience and even under partial observability (Hu et al., 2023), and once again leverages the complementary strengths of AI and control systems. We emphasize that the human behavior encoded by $\pi^{\mathrm{H}}_{\dagger}$ constitutes a worst-case model (rather than a statistically calibrated one), trained to thwart the AI's best effort to maintain safety but required to conform to the operational design domain. We discuss some important implications of this choice in the conclusion.
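For intuition, the fixed point of Equation 5 can be solved exactly in a toy setting with finitely many information states and actions. The sketch below performs Isaacs-style value iteration under the simplifying assumptions of deterministic latent dynamics (so the expectation over $o^{\mathrm{AI}}$ drops out) and enumerable sets; `margin`, `step`, and `bound` are hypothetical stand-ins for $\ell$, $f^{\mathrm{AI}}$, and $\hat{\mathcal{A}}^{\mathrm{H}}$.

```python
import numpy as np

def safety_value_iteration(states, ai_actions, bound, step, margin,
                           n_iters=100, tol=1e-8):
    """Tabular solver for the Isaacs fixed point of Eq. 5 (deterministic case):
    V(z) = max_a min_b min{ l(z), V(step(z, a, b)) }.
    `states` must be hashable and closed under `step`."""
    idx = {z: i for i, z in enumerate(states)}
    V = np.array([margin(z) for z in states], dtype=float)  # init with l(z)
    for _ in range(n_iters):
        V_new = np.empty_like(V)
        for z in states:
            best = -np.inf
            for a in ai_actions:
                # Virtual adversary picks the worst allowable human action.
                worst = min(min(margin(z), V[idx[step(z, a, b)]])
                            for b in bound(z))
                best = max(best, worst)
            V_new[idx[z]] = best
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    # Maximal safe set: states with non-negative value (Eq. 4).
    return V, [z for z in states if V[idx[z]] >= 0]
```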
In the next section, we discuss how this theoretical human–AI game can be translated into a practical computational procedure enabling AI systems to monitor and enforce safety as they interact with people.

## 5 **Frontier Framework: The Human–AI Safety Filter**

As AI technology continues to advance, manually designing or fine-tuning harm prevention strategies with human feedback becomes increasingly untenable (Christiano et al., 2018; Bowman et al., 2022). To break this scalability mismatch, we posit that the same advances driving AI power can be leveraged to *autonomously* identify potential harms and devise proactive strategies that explicitly consider human–AI feedback loops.

²The virtual adversary $\pi^{\mathrm{H}}_{\dagger}\colon \mathcal{Z}^{\mathrm{AI}} \to \mathcal{A}^{\mathrm{H}}$ exploits the range of (1) plausible internal human states $\hat{z}^{\mathrm{H}} \in \hat{\mathcal{Z}}^{\mathrm{H}}(z^{\mathrm{AI}})$ given the AI's imperfect situational awareness $z^{\mathrm{AI}}$ and (2) ODD-compatible human actions $a^{\mathrm{H}} \in \hat{\mathcal{A}}^{\mathrm{H}}(\hat{z}^{\mathrm{H}})$ given each possible inferred internal state $\hat{z}^{\mathrm{H}}$. Implementations of $\pi^{\mathrm{H}}_{\dagger}$ may include two-step pipelines ($z^{\mathrm{AI}} \mapsto \hat{z}^{\mathrm{H}} \mapsto a^{\mathrm{H}}$) or implicit end-to-end models ($z^{\mathrm{AI}} \mapsto a^{\mathrm{H}}$).

Figure 4: (left) The AI always acts under the safety-critical game policy $(\pi^{\mathrm{AI}}_{\text{safe}}, \pi^{\mathrm{H}}_{\dagger})$, making it safe but conservative. (right) The filtered AI uses task policy $\pi^{\mathrm{AI}}_{\text{task}}$ as long as in the *future* it can apply $\pi^{\mathrm{AI}}_{\text{safe}}$ against $\pi^{\mathrm{H}}_{\dagger}$.

The general formulation in Section 4 enables AI systems to preempt potential pitfalls within a specified ODD, but the resulting policy is *only* concerned with safety. If we were to leave $\pi^{\mathrm{AI}}_{\text{safe}}$ in control of the AI's entire behavior, as illustrated on the left of Figure 4, it would surely be safe but likely overcautious and unable to provide value to its users. In reality, it is not enough for the AI system to avoid causing failures (if so, we could simply not turn it on); rather, it must do so while assisting its users and performing requested tasks (which may or may not be themselves related to safety). Ideally, we want to *minimally* override the AI's task-driven actions with the safety policy, only intervening at the last possible moment. How can we do this?

The systematic detect-and-avoid functionality we seek closely mirrors the *safety filter* mechanisms established in robotics and control systems, which we reviewed in Section 3. Rather than reinvent a suitable mechanism for human–AI systems, we argue for a frontier framework that builds upon the fundamental principles of safety filtering and extends them to the general interaction problem formalized in Section 4. Formally, a human–AI safety filter is a tuple $(\pi^{\mathrm{AI}}_{\text{safe}}, \Delta, \phi)$ containing:

- fallback policy: $\pi^{\mathrm{AI}}_{\text{safe}}\colon \mathcal{Z}^{\mathrm{AI}} \to \mathcal{A}^{\mathrm{AI}}$, aims only to avoid catastrophic failures, without regard to task performance, and is therefore kept as a last resort.
- safety monitor: $\Delta\colon \mathcal{Z}^{\mathrm{AI}} \times \mathcal{A}^{\mathrm{AI}} \to \mathbb{R}$, checks if the fallback $\pi^{\mathrm{AI}}_{\text{safe}}$ would still maintain safety *after* a candidate action $a^{\mathrm{AI}}$ is taken from $z^{\mathrm{AI}}$, outputting a positive or negative value following Equation 4.
- intervention scheme: $\phi\colon \mathcal{Z}^{\mathrm{AI}} \times \mathcal{A}^{\mathrm{AI}} \to \mathcal{A}^{\mathrm{AI}}$, permits a candidate action $a^{\mathrm{AI}}$ if it passes the monitoring check and otherwise replaces it with an alternative action that does, for example $\pi^{\mathrm{AI}}_{\text{safe}}(z^{\mathrm{AI}})$.

This definition can encompass a broad spectrum of potential future supervisory mechanisms (Legg, 2023) and allows us to construct a new central theorem to understand their guarantees and assumptions; a minimal sketch of such a filtering loop is given below.
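As a concrete reading of this tuple, here is a minimal Python sketch of a switch-type human–AI safety filter wrapping an arbitrary task policy. The monitor `delta` and fallback `pi_safe` are assumed given (e.g., approximate solutions of the safety game), and `world_model` reuses the hypothetical step interface from the earlier sketch; this is an illustration, not a definitive implementation.

```python
def make_safety_filter(pi_safe, delta):
    """Switch-type intervention scheme phi: pass the candidate action through
    if the monitor deems it safe, otherwise fall back to the safety policy."""
    def phi(z_ai, a_candidate):
        if delta(z_ai, a_candidate) > 0:   # fallback remains viable afterwards
            return a_candidate
        return pi_safe(z_ai)               # last-resort override
    return phi

def filtered_step(z_ai, pi_task, phi, world_model, a_h, o_ai):
    """One step of the filtered interaction loop: propose, filter, execute."""
    a_candidate = pi_task(z_ai)            # task policy proposes an action
    a_exec = phi(z_ai, a_candidate)        # filter minimally overrides it
    return world_model.step(z_ai, a_exec, a_h, o_ai), a_exec
```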
Theorem 1 (General Human–AI Safety Filter) *Consider a human–AI system with AI world model* $f^{\mathrm{AI}}(z^{\mathrm{AI}},a^{\mathrm{AI}},a^{\mathrm{H}},o^{\mathrm{AI}})$ *and a safety filter* $(\pi^{\mathrm{AI}}_{\text{safe}}, \Delta, \phi)$. *If the AI agent is deployed with an initial internal state* $z_0^{\mathrm{AI}} \in \mathcal{Z}^{\mathrm{AI}}$ *deemed safe by the safety monitor under the fallback policy, i.e.,* $\Delta\big(z_0^{\mathrm{AI}}, \pi^{\mathrm{AI}}_{\text{safe}}(z_0^{\mathrm{AI}})\big) \geq 0$*, then the interaction under filtered dynamics* $f^{\mathrm{AI}}\big(z^{\mathrm{AI}}, \phi(z^{\mathrm{AI}}, a^{\mathrm{AI}}), a^{\mathrm{H}}, o^{\mathrm{AI}}\big)$ *with any AI task policy* $\pi^{\mathrm{AI}}_{\text{task}}\colon z^{\mathrm{AI}} \mapsto a^{\mathrm{AI}}$ *and any realization of human behavior satisfying* $a^{\mathrm{H}} \in \hat{\mathcal{A}}^{\mathrm{H}}(z^{\mathrm{AI}})$ *maintains the safety condition* $\forall t \geq 0,\ z_t^{\mathrm{AI}} \notin \mathcal{F}^{\mathrm{AI}}$.

To date, the concept of a safety filter has only been instantiated for embodied systems with physics-based state spaces (low-dimensional vectors of physical quantities like positions or velocities, governed by well-understood dynamical laws). Here, we are the first to generalize the scope of this mathematical formalism to the much broader context of AI safety. This result lays the theoretical foundations for the algorithmic application of safety filters to *general* human–AI systems, which evolve in latent state spaces and encode harder-to-model interactions such as dialogue between a human user and an AI chatbot.

An important aspect of Theorem 1 is that it holds for an arbitrary fallback policy $\pi^{\mathrm{AI}}_{\text{safe}}$: as long as the safety monitor $\Delta$ can accurately predict whether $\pi^{\mathrm{AI}}_{\text{safe}}$ will succeed at maintaining safety in the future, the intervention scheme $\phi$ can prevent actions that would lead to a vulnerable state, i.e., a state outside the fallback-safe set $\Omega_{\text{safe}}$. Naturally, if the available fallback policy is not very effective, the filter will be forced to intervene often, restricting the human–AI interactions to remain inside a smaller set $\Omega_{\text{safe}}$. This is where the safety game from Section 4 comes in.

The Perfect Human–AI Safety Filter. The safety-critical human–AI game we posed in Section 4 implicitly encodes the *least-restrictive* safety filter possible: one that allows maximal freedom to the AI's task policy $\pi^{\mathrm{AI}}_{\text{task}}$ while preempting all future safety failures under the AI's uncertainty. In particular, if we had access to the exact solution to this safety game, such a *perfect* safety filter could be implemented by choosing fallback policy $\pi^{\mathrm{AI}}_{\text{safe}}$, safety monitor $\Delta := Q(\cdot, \cdot, \pi^{\mathrm{H}}_{\dagger})$, and switch-type intervention scheme $\phi := \mathbb{1}_{\{\Delta > 0\}} \cdot \pi^{\mathrm{AI}}_{\text{task}} + \mathbb{1}_{\{\Delta \leq 0\}} \cdot \pi^{\mathrm{AI}}_{\text{safe}}$.

Algorithmic Human–AI Safety Filtering. We conclude by giving a concrete account of how one could practically instantiate a human–AI safety filter for a generative AI model, visualized in Figure 5. Following the common neural network architecture of generative AI models, let the *base AI model* (given to us for analysis and safe integration) be comprised of an encoder $E$ and a decoder $\pi^{\mathrm{AI}}_{\text{task}}$; this decoder is precisely what we have been referring to as the *task policy*, mapping an internal (latent) state $z^{\mathrm{AI}}$ to a proposed output action $a^{\mathrm{AI}}$. The purple block in Figure 5 depicts the *safety filter* components: the fallback policy, safety monitor, and intervention scheme. Computationally, adversarial reach-avoid RL can be used to obtain an *approximation* of the optimal fallback policy $\pi^{\mathrm{AI}}_{\text{safe}}$ from the safety game in Equation 4. A reliable safety monitor $\Delta$ can be implemented by either directly evaluating the learned safety value function at any information state $z^{\mathrm{AI}}$ (safety critic) or by simulating a family of pessimistic interaction scenarios by querying the learned *virtual adversary* $\pi^{\mathrm{H}}_{\dagger}$. In turn, the intervention scheme can range from a simple binary switch (at each time, apply $\pi^{\mathrm{AI}}_{\text{task}}$ if deemed safe, else apply $\pi^{\mathrm{AI}}_{\text{safe}}$) to a more sophisticated override (e.g., find a minimally disruptive deviation from $\pi^{\mathrm{AI}}_{\text{task}}$ that is deemed safe).
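As one hypothetical instantiation of the rollout-based monitor described above, the following sketch estimates $\Delta$ by simulating the fallback policy against sampled trajectories of a learned virtual adversary and returning the worst safety margin encountered; in practice each placeholder (`step`, `pi_safe`, `adversary`, `margin`) would be a learned model, and the deterministic-latent assumption from the earlier sketches is kept.

```python
def rollout_monitor(z_ai, a_candidate, step, pi_safe, adversary, margin,
                    horizon=20, n_scenarios=8):
    """Pessimistic rollout estimate of Delta(z, a): after the candidate action,
    simulate the fallback against sampled adversarial human behavior and
    return the worst safety margin encountered (sign convention of Eq. 4)."""
    worst = float("inf")
    for _ in range(n_scenarios):
        z = step(z_ai, a_candidate, adversary(z_ai))  # candidate applied once
        for _ in range(horizon):
            worst = min(worst, margin(z))
            z = step(z, pi_safe(z), adversary(z))     # fallback vs. adversary
        worst = min(worst, margin(z))
    return worst  # positive: fallback-safe; negative: intervene
```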
Figure 5: **Human–AI Safety Filter**. The base AI model encodes the AI's observations into its latent state $z^{\mathrm{AI}}$, which is used as input for its task policy $\pi^{\mathrm{AI}}_{\text{task}}$. A safety filter includes a learned AI safety strategy $\pi^{\mathrm{AI}}_{\text{safe}}$, a safety monitor $\Delta$ that predicts safety risks, and a predictive human model containing a virtual adversary $\pi^{\mathrm{H}}_{\dagger}$ that generates pessimistic predictions of human interaction. Based on $\Delta$, the AI's outputs to the human are filtered by the intervention scheme $\phi$ and modified to guarantee safety.

Even though the components of the safety filter would be approximate by their learning-based nature, the scheme can be leveraged in combination with modern statistical generalization theory, such as PAC-Bayes theory (McAllester, 2003; Majumdar et al., 2020), adversarial conformal prediction (Gibbs & Candes, 2021; Bastani et al., 2022), and scenario optimization (Schildbach et al., 2014; Lin & Bansal, 2023), to maintain a high-confidence guarantee that the AI system will robustly enforce the satisfaction of the human's critical needs throughout the interaction for all human behaviors allowed by the operational assumptions. We emphasize that a key strength of this safety framework is that it naturally scales with the rapidly advancing *capability* of modern AI systems: as future generations of language models, vision-language systems, and general AI agents become ever stronger, so will the safety assurances that can be provided through the proposed techniques and system architecture.

## 6 **Conclusion**

In this paper, we aim to inspire the genesis of a new human–AI safety research community. We take concrete steps towards this by identifying a fundamental synergy between the principled safety formalism offered by control theory and the general representations learned by internet-trained AI systems. By combining lessons from control and AI, we propose a technical roadmap to guide research efforts towards a safety framework that can reliably anticipate and avoid potential hazards emerging during interaction. We propose a frontier framework called the *human–AI safety filter*, wherein an AI system's task policy is systematically monitored and minimally overridden by a safety policy synthesized via safety-critical adversarial self-play.

## Broader Impact Statement

We expect that the proposed interdisciplinary safety framework will help catalyze a much-needed rapprochement between the AI and control systems communities to develop rigorous safety assurances for dynamic human–AI feedback loops. A significant positive impact may come in the form of the first practical safety frameworks that can not only keep up with the rapid advances in AI capabilities but actively benefit from them to provide stronger guarantees, ushering in a new generation of advanced AI systems that can be trusted *because* of their sophistication, and not in spite of it.

On the other hand, we also highlight possible pitfalls of our proposed human–AI safety framework. The approximate nature of learning-based generative AI makes it extremely challenging to provide a clear-cut delineation of uncertainty, which will likely limit us to statistical assurances in the foreseeable future. These fall short of the stronger *if–then* certificates that we can aspire to in other engineering domains: hard guarantees establishing that, as long as the system's operational assumptions are met, catastrophic failures are categorically impossible (i.e., a failure can only result from an explicit assumption being violated).
Even high-confidence statistical assurances can leave human-centered AI systems open to black swan events with extremely low probability but potentially dramatic consequences. There is a risk that the improved treatment of human–AI feedback loops developed through the proposed agenda could be repurposed and misused by malicious or reckless actors to construct AI systems that exploit interaction dynamics against the interest of their users, for example by seeking to manipulate their decisions. Even with today's relatively myopic fine-tuning approaches, we see a worrying emergence of unintended (e.g., sycophantic) AI outputs as the system learns to secure positive user responses. Future systems equipped with long-horizon reasoning but *without* a proper safety framework could conceivably seek long-term interaction outcomes serving a third party's agenda at the expense of their users' needs.

We nonetheless remain cautiously optimistic: First, human–AI safety filtering does not require teasing apart the likelihood of various conceivable human behaviors in a given context. Rather, safety-directed predictions robustly consider the set of all such plausible behaviors without distinction, making them harder to exploit for manipulation purposes. Second, the need to consider large prediction sets containing both likely and unlikely outcomes aligns well with the inclusion of underrepresented individual behaviors that do not conform to dominant patterns in the training datasets. Finally, provided that future AI systems are deployed with a cyber-secure dynamical safety mechanism that cannot be removed or altered by unauthorized parties, such a framework would help detect and mitigate emergent and intentional misalignment. Naturally, this will require a process of standardization and regulatory oversight; the first step, however, must be to establish *what assurances are possible*. Ultimately, we expect that technical advances in human–AI safety will inform the conversation between technologists, policymakers, political leaders, and the public at large. A timely conversation that, fortunately, is already ongoing.

## References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Anayo K. Akametalu, Shromona Ghosh, Jaime F. Fisac, Vicenc Rubies-Royo, and Claire J. Tomlin. A minimum discounted reward Hamilton–Jacobi formulation for computing reachable sets. *IEEE Transactions on Automatic Control*, 69(2):1097–1103, 2024. doi: 10.1109/TAC.2023.3327159. URL https://ieeexplore.ieee.org/document/10294099.

Ross E Allen, Ashley A Clark, Joseph A Starek, and Marco Pavone. A machine learning approach for real-time reachability analysis. In *2014 IEEE/RSJ International Conference on Intelligent Robots and Systems*, pp. 2202–2208. IEEE, 2014.

Matthias Althoff and John M Dolan. Online verification of automated road vehicles using reachability analysis. *IEEE Transactions on Robotics*, 30(4):903–918, 2014.

Matthias Althoff, Colas Le Guernic, and Bruce H Krogh. Reachable set computation for uncertain time-varying linear systems. In *Proceedings of the 14th International Conference on Hybrid Systems: Computation and Control*, pp. 93–102, 2011.

Aaron D Ames, Samuel Coogan, Magnus Egerstedt, Gennaro Notomista, Koushil Sreenath, and Paulo Tabuada. Control barrier functions: Theory and applications. In *2019 18th European Control Conference (ECC)*, pp. 3420–3431. IEEE, 2019.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.

Andrea Bajcsy, Somil Bansal, Ellis Ratner, Claire J Tomlin, and Anca D Dragan. A robust control framework for human motion prediction. *Robotics and Automation Letters*, 2020.

Somil Bansal and Claire J Tomlin. DeepReach: A deep learning approach to high-dimensional reachability. In *IEEE International Conference on Robotics and Automation (ICRA)*, 2021.

Somil Bansal, Mo Chen, Sylvia Herbert, and Claire J Tomlin. Hamilton-Jacobi reachability: A brief overview and recent advances. In *IEEE Conference on Decision and Control (CDC)*, 2017.

E. N. Barron and H. Ishii. The Bellman equation for minimizing the maximum cost. *Nonlinear Analysis: Theory, Methods & Applications*, 13(9):1067–1090, 1989.

Tamer Başar and Geert Jan Olsder. *Dynamic Noncooperative Game Theory*. SIAM, 1998.

Osbert Bastani. Safe reinforcement learning with nonlinear dynamics via model predictive shielding. In *2021 American Control Conference (ACC)*, pp. 3488–3494. IEEE, 2021.

Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam, and Aaron Roth. Practical adversarial multivalid conformal prediction. *Advances in Neural Information Processing Systems*, 35:29362–29373, 2022.

Luca Bergamini, Yawei Ye, Oliver Scheel, Long Chen, Chih Hu, Luca Del Pero, Błażej Osiński, Hugo Grimmett, and Peter Ondruska. SimNet: Learning reactive self-driving simulations from real-world observations. In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 5119–5125. IEEE, 2021.

Roger V. Bostelman, Joseph A. Falco, Marek Franaszek, and Kamel S. Saidi. Performance assessment framework for robotic systems. Technical report, National Institute of Standards and Technology, 2018.

Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamilė Lukošiūtė, Amanda Askell, Andy Jones, Anna Chen, et al. Measuring progress on scalable oversight for large language models. arXiv preprint arXiv:2211.03540, 2022.

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228, 2018.

Lukas Brunke, Melissa Greeff, Adam W Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, and Angela P Schoellig. Safe learning in robotics: From learning-based control to safe reinforcement learning. *Annual Review of Control, Robotics, and Autonomous Systems*, 5:411–444, 2022.

Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.

Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. *ACM Computing Surveys (CSUR)*, 41(3):1–58, 2009.

Yuxiao Chen, Andrew Singletary, and Aaron D Ames. Guaranteed obstacle avoidance for multi-robot operations with limited actuation: A control barrier function approach. *IEEE Control Systems Letters*, 5(1):127–132, 2020.
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335, 2024.

Paul Christiano, Buck Shlegeris, and Dario Amodei. Supervising strong learners by amplifying weak experts. arXiv preprint arXiv:1810.08575, 2018.

Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. Safe RLHF: Safe reinforcement learning from human feedback. *International Conference on Learning Representations*, 2024.

Jérôme Darbon, Gabriel P Langlois, and Tingwei Meng. Overcoming the curse of dimensionality for some Hamilton–Jacobi partial differential equations via neural network architectures. *Research in the Mathematical Sciences*, 7:1–50, 2020.

Charles Dawson, Zengyi Qin, Sicun Gao, and Chuchu Fan. Safe nonlinear control using robust neural Lyapunov-barrier functions. In *Conference on Robot Learning*, pp. 1724–1735. PMLR, 2022.

Charles Dawson, Sicun Gao, and Chuchu Fan. Safe control with learned certificates: A survey of neural Lyapunov, barrier, and contraction methods for robotics and control. *IEEE Transactions on Robotics*, 2023.

Roel Dobbe, Patricia Hidalgo-Gonzalez, Stavros Karagiannopoulos, Rodrigo Henriquez-Auba, Gabriela Hug, Duncan S Callaway, and Claire J Tomlin. Learning to control in power systems: Design and analysis guidelines for concrete safety problems. *Electric Power Systems Research*, 189:106615, 2020.

Katherine Driggs-Campbell, Roy Dong, and Ruzena Bajcsy. Robust, informative human-in-the-loop predictions via empirical reachable sets. *IEEE Transactions on Intelligent Vehicles*, 3(3):300–309, 2018.

Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning universal policies via text-guided video generation. arXiv preprint arXiv:2302.00111, 2023.

J. Fisac, M. Chen, C. J. Tomlin, and S. Sastry. Reach-avoid problems with time-varying dynamics, targets and constraints. In *HSCC*, 2015.

J. F. Fisac, N. F. Lugovoy, V. Rubies-Royo, S. Ghosh, and C. J. Tomlin. Bridging Hamilton-Jacobi safety analysis and reinforcement learning. In *2019 International Conference on Robotics and Automation (ICRA)*, pp. 8550–8556, 2019. doi: 10.1109/ICRA.2019.8794107.

Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.

Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, and Dipendra Misra. Aligning LLM agents by learning latent preference from user edits. arXiv preprint arXiv:2404.15269, 2024.

Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. In *Advances in Neural Information Processing Systems*, volume 34, pp. 1660–1672. Curran Associates, Inc., 2021.

Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, and Prateek Jain. DROCC: Deep robust one-class classification. In *International Conference on Machine Learning*, pp. 3711–3721. PMLR, 2020.

Lin Guan, Yifan Zhou, Denis Liu, Yantian Zha, Heni Ben Amor, and Subbarao Kambhampati. "Task success" is not enough: Investigating the use of video-language models as behavior critics for catching undesirable agent behaviors. arXiv preprint arXiv:2402.04210, 2024.

David Ha and Jürgen Schmidhuber. World models. *NIPS*, 2018.
Sami Haddadin, Simon Haddadin, Augusto Khoury, Tim Rokahr, Sven Parusel, Rainer Burgkart, Antonio Bicchi, and Alin Albu-Schäffer. On making robots understand safety: Embedding injury knowledge into control. *The International Journal of Robotics Research*, 31(13):1578–1602, 2012.

Alex Heath. All the news from OpenAI's first developer conference. https://www.theverge.com/2023/11/6/23948619/openai-chatgpt-devday-developer-conference-news, 2023. Accessed: 2024-01-23.

Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018.

Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ML safety. arXiv preprint arXiv:2109.13916, 2021.

Lukas Hewing, Kim P Wabersich, Marcel Menner, and Melanie N Zeilinger. Learning-based model predictive control: Toward safe learning in control. *Annual Review of Control, Robotics, and Autonomous Systems*, 3:269–296, 2020.

Coryanne Hicks. I pitted ChatGPT against a real financial advisor to help me save for retirement—and the winner is clear. https://fortune.com/recommends/investing/chatgpt-vs-real-financial-advisor-to-plan-retirement-which-is-better/, 2023. Accessed: 2024-01-12.

Joey Hong, Sergey Levine, and Anca Dragan. Zero-shot goal-directed dialogue via RL on imagined conversations. arXiv preprint arXiv:2311.05584, 2023.

Kai-Chieh Hsu, Duy Phuong Nguyen, and Jaime Fernández Fisac. ISAACS: Iterative soft adversarial actor-critic for safety. *Learning for Dynamics and Control Conference*, pp. 90–103, 2023.

Kai-Chieh Hsu, Haimin Hu, and Jaime Fernández Fisac. The safety filter: A unified view of safety-critical control in autonomous systems. *Annual Review of Control, Robotics, and Autonomous Systems*, 2024.

Haimin Hu, Zixu Zhang, Kensuke Nakamura, Andrea Bajcsy, and Jaime F Fisac. Deception game: Closing the safety-learning loop in interactive robot autonomy. *Conference on Robot Learning*, 2023.

International Civil Aviation Organization ICAO. The world of air transport in 2019. https://www.icao.int/annual-report-2019/Pages/the-world-of-air-transport-in-2019.aspx, 2019. Accessed: 2024-05-28.

Rufus Isaacs. *Differential Games I*. RAND Corporation, 1954.

ISO 15066. Robots and robotic devices – Collaborative robots. Standard, International Organization for Standardization, 2016.

ISO 22737:2021. Intelligent transport systems. Standard, International Organization for Standardization, 2021.

Boris Ivanovic, Edward Schmerling, Karen Leung, and Marco Pavone. Generative modeling of multimodal multi-human behavior. In *2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 3088–3095. IEEE, 2018.

Mykel J Kochenderfer, Jessica E Holland, and James P Chryssanthacopoulos. Next generation airborne collision avoidance system. *Lincoln Laboratory Journal*, 19(1):17–33, 2012.

Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.

Shane Legg. System two safety. The Alignment Workshop, 2023. URL https://www.alignment-workshop.com/nola-talks/shane-legg-system-two-safety.

Anjian Li, Liting Sun, Wei Zhan, Masayoshi Tomizuka, and Mo Chen. Prediction-based reachability for collision avoidance in autonomous driving.
In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 7908–7914. IEEE, 2021a.

Jiacheng Li, Qingchen Liu, Wanxin Jin, Jiahu Qin, and Sandra Hirche. Robust safe learning and control in an unknown environment: An uncertainty-separated control barrier function approach. *IEEE Robotics and Automation Letters*, 2023.

Xiang Lorraine Li, Adhiguna Kuncoro, Jordan Hoffmann, Cyprien de Masson d'Autume, Phil Blunsom, and Aida Nematzadeh. A systematic investigation of commonsense knowledge in large language models. *Conference on Empirical Methods in Natural Language Processing*, 2021b.

Albert Lin and Somil Bansal. Generating formal safety assurances for high-dimensional reachability. In *2023 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 10525–10531. IEEE, 2023.

Stefan B Liu, Hendrik Roehm, Christian Heinzemann, Ingo Lütkebohle, Jens Oehlerking, and Matthias Althoff. Provably safe motion of mobile robots in human environments. In *2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 1351–1357. IEEE, 2017.

Matt Luckcuck, Marie Farrell, Louise A Dennis, Clare Dixon, and Michael Fisher. Formal specification and verification of autonomous robotic systems: A survey. *ACM Computing Surveys (CSUR)*, 52(5):1–41, 2019.

John Lygeros. On reachability and minimum cost optimal control. *Automatica*, 40(6):917–927, 2004.

Anirudha Majumdar, Alec Farid, and Anoopkumar Sonar. PAC-Bayes control: Learning policies that provably generalize to novel environments, 2020.

Tommaso Mannucci, Erik-Jan van Kampen, Cornelis De Visser, and Qiping Chu. Safe exploration algorithms for reinforcement learning controllers. *IEEE Transactions on Neural Networks and Learning Systems*, 29(4):1069–1081, 2017.

K. Margellos and J. Lygeros. Hamilton-Jacobi formulation for reach–avoid differential games. *IEEE Transactions on Automatic Control*, 56(8):1849–1861, 2011.

David McAllester. Simplified PAC-Bayesian margin bounds. In *Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003. Proceedings*, pp. 203–215. Springer, 2003.

Fundamental AI Research Diplomacy Team (FAIR) Meta, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. *Science*, 378(6624):1067–1074, 2022.

Ian Mitchell, Alex Bayen, and Claire J. Tomlin. A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games. *IEEE Transactions on Automatic Control (TAC)*, 50(7):947–957, 2005.

Norman Mu, Sarah Chen, Zifan Wang, Sizhe Chen, David Karamardian, Lulwa Aljeraisy, Dan Hendrycks, and David Wagner. Can LLMs follow simple rules? arXiv preprint arXiv:2311.04235, 2023.

Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Andrea Michi, et al. Nash learning from human feedback. arXiv preprint arXiv:2312.00886, 2023.

Kensuke Nakamura and Somil Bansal. Online update of safety assurances using confidence-based predictions. In *2023 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 12765–12771. IEEE, 2023.

Jack Nicas, Natalie Kitroeff, David Gelles, and James Glanz. Boeing built deadly assumptions into 737 Max, blind to a late design change.
https://www.nytimes.com/2019/06/01/business/boeing-737-max-crash.html, 2019. Accessed: 2024-01-24.

Dan Noyes. New video of Bay Bridge 8-car crash shows Tesla abruptly braking in 'self-driving' mode. https://abc7news.com/tesla-sf-bay-bridge-crash-8-car-self-driving-video/12686428/, 2023. Accessed: 2024-01-24.

SAE On-Road Automated Driving Committee. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles, 2021. URL https://www.sae.org/content/j3016_202104.

OpenAI. Our approach to alignment research. https://openai.com/blog/our-approach-to-alignment-research, 2022a. Accessed: 2024-01-23.

OpenAI. DALL-E now available without waitlist. https://openai.com/blog/dall-e-now-available-without-waitlist, 2022b. Accessed: 2024-01-23.

OpenAI. DALL·E 2 preview: Risks and limitations. https://github.com/openai/dalle-2-preview/blob/main/system-card_04062022.md, 2022c. Accessed: 2024-01-23.

Pedro A. Ortega, Vishal Maini, and DeepMind Safety Team. Building safe artificial intelligence: specification, robustness, and assurance. https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1, 2018. Accessed: 2024-01-23.

Piotr F. Orzechowski, Kun Li, and Martin Lauer. Towards responsibility-sensitive safety of automated vehicles with reachable set analysis. In *2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE)*, pp. 1–6. IEEE, 2019. doi: 10.1109/ICCVE45908.2019.8965069. URL https://ieeexplore.ieee.org/document/8965069/.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in Neural Information Processing Systems*, 35:27730–27744, 2022.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In *Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology*, pp. 1–22, 2023.

Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. Red teaming language models with language models. *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, 2022.

Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In *International Conference on Machine Learning*, pp. 2817–2826. PMLR, 2017.

Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023.

Zengyi Qin, Kaiqing Zhang, Yuxiao Chen, Jingkai Chen, and Chuchu Fan. Learning safe multi-agent control with decentralized neural barrier certificates. *International Conference on Learning Representations*, 2021.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. *Conference on Neural Information Processing Systems (NeurIPS)*, 2023.

Vicenç Rubies-Royo, David Fridovich-Keil, Sylvia Herbert, and Claire J Tomlin. A classification-based approach for approximate reachability. In *2019 International Conference on Robotics and Automation (ICRA)*, pp. 7697–7704. IEEE, 2019.

Stuart Russell.
*Human Compatible: Artificial Intelligence and the Problem of Control*. Penguin, 2019.

Georg Schildbach, Lorenzo Fagiano, Christoph Frei, and Manfred Morari. The scenario approach for stochastic model predictive control with bounds on closed-loop constraint violations. *Automatica*, 50(12):3009–3018, 2014.

Thomas Schlegl, Philipp Seeböck, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In *International Conference on Information Processing in Medical Imaging*, pp. 146–157. Springer, 2017.

Ari Seff, Brian Cera, Dian Chen, Mason Ng, Aurick Zhou, Nigamaa Nayakanti, Khaled S Refaat, Rami Al-Rfou, and Benjamin Sapp. MotionLM: Multi-agent motion forecasting as language modeling. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8579–8590, 2023.

Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, and Anca Dragan. Preferences implicit in the state of the world. *International Conference on Learning Representations*, 2019.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. *Science*, 362(6419):1140–1144, 2018. doi: 10.1126/science.aar6404. URL https://www.science.org/doi/full/10.1126/science.aar6404.

Sumeet Singh, Anirudha Majumdar, Jean-Jacques Slotine, and Marco Pavone. Robust online motion planning via contraction theory and convex optimization. In *2017 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 5883–5890. IEEE, 2017.

Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. TrafficSim: Learning to simulate realistic multi-agent behaviors. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10400–10409, 2021.

Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, and Alekh Agarwal. A minimaximalist approach to reinforcement learning from human feedback. arXiv preprint arXiv:2401.04056, 2024.

Ran Tian, Liting Sun, Andrea Bajcsy, Masayoshi Tomizuka, and Anca D Dragan. Safety assurances for human-robot interaction via confidence-aware game-theoretic human models. In *IEEE International Conference on Robotics and Automation*, pp. 11229–11235, 2022.

C. J. Tomlin, J. Lygeros, and S. Shankar Sastry. A game theoretic approach to controller design for hybrid systems. *Proceedings of the IEEE*, 88(7):949–970, 2000. doi: 10.1109/5.871303. URL http://ieeexplore.ieee.org/document/871303/.

Claire Tomlin, George J Pappas, and Shankar Sastry. Conflict resolution for air traffic management: A study in multiagent hybrid systems. *IEEE Transactions on Automatic Control*, 43(4):509–521, 1998.

The European Union. A European approach to artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, 2021. Accessed: 2024-01-23.

Michael Vitus and Claire Tomlin. Hierarchical, hybrid framework for collision avoidance algorithms in the national airspace. In *AIAA Guidance, Navigation and Control Conference and Exhibit*, pp. 6970, 2008.

Kim P Wabersich, Andrew J Taylor, Jason J Choi, Koushil Sreenath, Claire J Tomlin, Aaron D Ames, and Melanie N Zeilinger. Data-driven safety filters: Hamilton-Jacobi reachability, control barrier functions, and predictive methods for uncertain systems.
*IEEE Control Systems Magazine*, 43(5):137–177, 2023.

Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. arXiv preprint arXiv:2311.12908, 2023.

Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How does LLM safety training fail? arXiv preprint arXiv:2307.02483, 2023.

United States White House. Fact sheet: President Biden issues executive order on safe, secure, and trustworthy artificial intelligence, 2023.

Norbert Wiener. Some moral and technical consequences of automation. *Science*, 131(3410):1355–1358, 1960. doi: 10.1126/science.131.3410.1355. URL https://science.sciencemag.org/content/131/3410/1355.

Chloe Xiang. "He would still be here": Man dies by suicide after talking with AI chatbot, widow says. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says, 2023. Accessed: 2024-01-23.

Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. Bot-adversarial dialogue for safe conversational agents. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2950–2968, 2021.

Haotian Zhang, Ye Yuan, Viktor Makoviychuk, Yunrong Guo, Sanja Fidler, Xue Bin Peng, and Kayvon Fatahalian. Learning physically simulated tennis skills from broadcast videos. *ACM Transactions on Graphics (TOG)*, 42(4):1–14, 2023.

Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation. In *2023 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 3560–3566. IEEE, 2023.

Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
# AGALE: A Graph-Aware Continual Learning Evaluation Framework

Tianqi Zhao T.Zhao-1@tudelft.nl Delft University of Technology, Delft, Netherlands

Alan Hanjalic A.Hanjalic@tudelft.nl Delft University of Technology, Delft, Netherlands

Megha Khosla M.Khosla@tudelft.nl Delft University of Technology, Delft, Netherlands

Reviewed on OpenReview: https://openreview.net/forum?id=xDTKRLyaNN

## Abstract

In recent years, continual learning (CL) techniques have made significant progress in learning from streaming data while preserving knowledge across sequential tasks, particularly in the realm of Euclidean data. To foster fair evaluation and recognize challenges in CL settings, several evaluation frameworks have been proposed, focusing mainly on the single- and multi-label classification task on Euclidean data. However, these evaluation frameworks are not trivially applicable when the input data is graph-structured, as they do not consider the topological structure inherent in graphs. Existing continual graph learning (CGL) evaluation frameworks have predominantly focused on single-label scenarios in the node classification (NC) task. This focus has overlooked the complexities of multi-label scenarios, where nodes may exhibit affiliations with multiple labels, simultaneously participating in multiple tasks. We develop a graph-aware evaluation (AGALE) framework that accommodates both single-labeled and multi-labeled nodes, addressing the limitations of previous evaluation frameworks. In particular, we define new incremental settings and devise data partitioning algorithms tailored to CGL datasets. We perform extensive experiments comparing methods from the domains of continual learning, continual graph learning, and dynamic graph learning (DGL). We theoretically analyze AGALE and provide new insights about the role of homophily in the performance of compared methods. We release our framework at https://github.com/Tianqi-py/AGALE.

## 1 Introduction

Continual Learning (CL) describes the process by which a model accumulates knowledge from a sequence of tasks while facing the formidable challenge of preserving acquired knowledge amidst data loss from prior tasks. It finds application in several fields, such as medical image analysis, where a model must detect newly emerging diseases in images in a timely manner while maintaining accurate diagnosis of the diseases encountered in the past. Significant achievements have been made on CL for Euclidean data domains such as images and text (Aljundi et al., 2018; Parisi et al., 2018; Tang & Matteson, 2021; Hadsell et al., 2020; Van de Ven & Tolias, 2019). Recent works have also delved into the broader scenario of multi-label continual learning (MLCL) (Wang et al., 2020b; 2021; Liu et al., 2022; Wei et al., 2021), where one instance can be simultaneously associated with multiple labels. To foster fair evaluation and identify new challenges in CL settings, several evaluation frameworks (Farquhar & Gal, 2019; Lange et al., 2023) have been proposed, focusing on the single- and multi-label classification task on Euclidean data. However, these frameworks are not trivially applicable to graph-structured data due to the complexities arising from interconnections and multi-label nodes within graphs.
Besides, existing evaluation frameworks in continual graph learning (CGL) (Ko et al., 2022; Zhang et al., 2022) evaluate the node classification task in the setting of associating nodes with a single label (which we refer to as the *single-label* scenario), thereby overlooking the possibility for nodes from previous tasks to adopt different labels in new tasks or acquire additional labels with time. For instance, in the context of a dynamically evolving social network, not only can new users with diverse interests (labels) be introduced over time, but existing users may also lose old labels or accumulate new labels continuously. To illustrate the limitations of current CL evaluation frameworks when considering the multi-label scenario in graphs, we start with an example of a multi-label graph as in Figure 1. We use color coding to indicate the classes the nodes belong to. Please note that in what follows, we employ the term "class" to refer to the classes that correspond to a task. To refer to a class assigned to a particular node we use the term "label".

## 1.1 Limitations Of Current Continual Learning Evaluation Frameworks

Lack of graph-aware data partitioning strategies. Current experimental setups typically simulate continual learning settings by employing certain data partitioning methods over static data. However, existing CGL frameworks do not consider the multi-label scenario in the data partitioning algorithms.

Figure 1: An example multi-label graph with colors indicating the different node labels.

The multi-label continual learning evaluation framework for Euclidean data (Kim et al., 2020a) suggests the use of *hierarchical clustering* techniques, which involve grouping classes into a single task based on their co-occurrence frequency and subsequently eliminating instances with label sets that do not align with any class groups. Applying such a technique to graph-structured data not only risks excluding nodes with a higher number of labels but also disrupts the associated topological structures. In Figure 2, we illustrate an application of the existing MLCL framework to the multi-label graph depicted in Figure 1. The classes blue, green, and red are collectively treated as one task due to their frequent co-occurrence. Node 3, having the maximum number of labels, is removed from the graph since no task encompasses all its labels. It is noteworthy that employing hierarchical clustering techniques increases the likelihood of eliminating nodes with more labels, effectively reducing both the number of multi-labeled nodes and the associated topological structure. In the current example, proper training and testing of the model on the red class is hindered, as only one node remains in the subgraph with the label red. Besides, node 5 becomes isolated in the second subgraph.

Figure 2: Subgraphs generated by grouping frequently co-occurring classes as a task.

Generation of train/val/test sets. Furthermore, the data partitioning algorithm is also responsible for the division of each subgraph into training, validation, and test subsets. In Figure 3 we show an example of train/val/test subsets generated using the strategy adopted by previous CGL evaluation frameworks for the task of distinguishing between the blue and green classes. In particular, nodes belonging to a single class are divided independently at random into train/validation/test sets, assuming no overlap between classes.
However, when each node can belong to multiple classes, an independent and random division within each class is not always feasible. For instance, the same node may serve as training data for one class and testing data for another in the same task, as is the case for node 1 in Figure 3. In this particular case, the model may observe the test data during the training process, resulting in data leakage. Conversely, considering the entire dataset as a whole for division would result in the dominance of the larger class, causing all nodes from the smaller class to be aggregated into the same split and thereby under-representing the smaller class in the other split. For instance, in multi-label datasets such as Yelp, two classes can exhibit complete overlap, where all nodes from the smaller class also belong to the larger class. In this scenario, if the nodes belonging to the larger class are split first, there might be no nodes left to make the required splits for the smaller class.

Figure 3: The split of the nodes within one subgraph generated by the previous CGL framework.

Use of predefined class orders. Existing CGL evaluation frameworks rely on a single predefined class order in the dataset and group sets of K classes as individual tasks. As the predefined order might not always reflect the true generation process of the data, it is important to evaluate CL models over several random class orders. Specifically for multi-label scenarios, the employed class order may not only influence the nodes and their neighborhood structures presented at each time step but also affect the number of labels assigned to a particular node in a given task. We demonstrate in Figure 4, using nodes from the multi-label graph in Figure 1, how distinct class orders generate subgraphs with the same set of nodes but with different topologies and node label assignments.

Limitations on the number of tasks. Last but not least, previous CGL benchmarks typically predetermine the size of the model's output units, assuming an approximate count of classes in each graph during model initialization. However, this assumption is unrealistic because the eventual number of classes is often unknown, leading to potential inefficiencies in storage or capacity overflow.

## 1.2 Our Contributions

To tackle the above-mentioned gaps, we develop a generalized evaluation framework for continual graph learning, which is applicable both for multi-class and multi-label node classification tasks and can be easily adapted for multi-label graph and edge classification tasks. Our key contributions are as follows.

- We define two generalized incremental settings for the node classification task in the CGL evaluation framework which are applicable for both multi-class and multi-label datasets.
- We develop new data split algorithms for curating CGL datasets utilizing existing static benchmark graph datasets. We theoretically analyze the *label homophily* of the resulting subgraphs, which is an important factor influencing the performance of learning models over graphs.
- We perform extensive experiments to assess and compare the performance of well-established methods within the categories of Continual Learning (CL), Dynamic Graph Learning (DGL), and Continual Graph Learning (CGL). Through our analysis, we evaluate these methods in the context of their intended objectives, identifying their constraints and highlighting potential avenues for designing more effective models to tackle standard tasks in CGL.
- We re-implement the compared models in our framework while adapting them for the unknown number of new tasks that may emerge in the future.

Figure 4: An example of subgraphs obtained by applying different class orders for the static multi-label graph in Figure 1.

## 2 Problem Formulation

We start by providing a general formulation of the continual learning problem for graph-structured data and elaborate on the additional complexities when the nodes in the graph may have multiple labels as compared to the single-label scenario.

Problem Setting and Notations. Given a time sequence $\mathcal{T} = \{1, 2, \ldots, T\}$, at each time step $t \in \mathcal{T}$, the input is one graph snapshot $\mathcal{G}_t = (\mathcal{V}_t, \mathcal{E}_t, X_t, Y_t)$, with node set $\mathcal{V}_t$ and edge set $\mathcal{E}_t$. Additionally, $X_t \in \mathbb{R}^{|\mathcal{V}_t| \times D}$ and $Y_t \in \{0, 1\}^{|\mathcal{V}_t| \times |\mathcal{C}_t|}$ denote the feature matrix and the label matrix for the nodes in graph $\mathcal{G}_t$, where $D$ is the dimension of the feature vector, and $\mathcal{C}_t$ is the set of classes seen/available at time $t$. We assume that the node set $\mathcal{V}_t$ is partially labeled, i.e., $\mathcal{V}_t = \{\mathcal{V}_t^l, \mathcal{V}_t^u\}$, where $\mathcal{V}_t^l$ and $\mathcal{V}_t^u$ represent the labeled nodes and the unlabeled nodes in $\mathcal{G}_t$. We use $A_t$ to denote the adjacency matrix of $\mathcal{G}_t$. We use $\mathcal{Y}^v$ to denote the complete label set of node $v$ and $\mathcal{Y}_t^v$ to denote the label set of node $v$ observed at time $t$.

Objective. The key objective in the CGL setting, as described above, is to predict the corresponding label matrix of $\mathcal{V}_t^u$, denoted by $Y_t^u$ (when the complete label set is restricted to $\mathcal{C}_t$), while maintaining the performance over the classification on nodes in all graphs in the past time steps in $\{1, 2, \ldots, t-1\}$.

## 2.1 Differences To Single-Label Scenario

Having explained the problem setting and the objective, we now describe the key differences of the multi-label scenario as compared to the single-label case in continual graph learning, which were so far ignored by previous works, resulting in various limitations as illustrated in Section 1.1.

- **Node overlaps in different tasks.** In the single-label scenario, each node is affiliated with one single class, exclusively contributing to one task. The following statement, which states that no node appears in more than one task, always holds:

$$\forall i,j\in\mathcal{T},\ i\neq j,\quad \mathcal{V}_{i}\cap\mathcal{V}_{j}=\varnothing\tag{1}$$

However, in the multi-label scenario, one node can have multiple labels and can therefore participate in multiple tasks as time evolves. Contrary to the single-label scenario, when the nodes have multiple labels, there will exist at least a pair of tasks with at least one node in common, as stated below.

$$\exists i,j\in\mathcal{T},\ i\neq j,\quad \mathcal{V}_{i}\cap\mathcal{V}_{j}\neq\varnothing\tag{2}$$

- **Growing label sets.** In the single-label scenario, the label set of a node $v$, $\mathcal{Y}^v$, stays the same across different time steps, i.e.,

$$\forall i,j\in\mathcal{T},\quad \mathcal{Y}_{i}^{v}=\mathcal{Y}_{j}^{v}\tag{3}$$

However, in the multi-label scenario, the label set of a node may grow over time, i.e., a node may not only appear in several tasks as above but also have different associated label sets, i.e., the following holds.
## 2.1 Differences To Single-Label Scenario

Having explained the problem setting and the objective, we now describe the key differences of the multi-label scenario as compared to the single-label case in continual graph learning, which were so far ignored by previous works, resulting in various limitations as illustrated in Section 1.1.

- **Node overlaps in different tasks.** In the single-label scenario, each node is affiliated with one single class, exclusively contributing to one task. The following statement, which states that no node appears in more than one task, always holds:

$$\forall i,j\in\mathcal{T}, \text{ and } i\neq j,\ \mathcal{V}_i\cap\mathcal{V}_j=\emptyset \tag{1}$$

However, in the multi-label scenario, one node can have multiple labels and can therefore participate in multiple tasks as time evolves. Contrary to the single-label scenario, when the nodes have multiple labels, there will exist at least a pair of tasks with at least one node in common, as stated below:

$$\exists i,j\in\mathcal{T}, \text{ and } i\neq j,\ \mathcal{V}_i\cap\mathcal{V}_j\neq\emptyset \tag{2}$$

- **Growing label sets.** In the single-label scenario, the label set of a node $v$, $\mathcal{Y}^v$, stays the same across different time steps, i.e.,

$$\forall i,j\in\mathcal{T},\ \mathcal{Y}_i^v=\mathcal{Y}_j^v \tag{3}$$

However, in the multi-label scenario, the label set of a node may grow over time, i.e., a node may not only appear in several tasks as above but also have different associated label sets, i.e., the following holds:

$$\exists i,j\in\mathcal{T},\ \mathcal{Y}_i^v\neq\mathcal{Y}_j^v \tag{4}$$

- **Changing node neighborhoods.** Note that while simulating a continual learning scenario, subgraphs are curated corresponding to sets of classes/labels required to be distinguished in a particular task. In other words, the subgraph presented for a particular task only contains edges connecting nodes with the label set seen in that task. Therefore, the neighborhood of a node $v$, denoted as $\mathcal{N}^v$, can also be different across different time steps in the multi-label scenario, i.e.,

$$\exists i,j\in\mathcal{T},\ \mathcal{N}_i^v\neq\mathcal{N}_j^v \tag{5}$$

In multi-label graphs, both multi-label and single-label nodes exist, therefore providing a suitable context to develop a generalized CGL evaluation framework as elaborated in the next section.

## 3 AGALE: Our Evaluation Framework

We present a holistic continual learning evaluation framework for graph-structured input data, which we refer to as AGALE (a graph-aware continual learning evaluation). We begin by developing two generalized incremental settings (in Section 3.1) that accommodate the requirements of the multi-label scenario (as discussed in Section 2.1) with respect to node overlaps in different tasks and growing label sets. In Section 3.2, we develop new data partitioning algorithms designed to derive subgraphs and training partitions from a static graph dataset, tailored specifically for the developed incremental settings. To underscore the significance of our approach, we provide a theoretical analysis of AGALE in Section 3.3 and compare it with the previously established CGL and MLCL frameworks in Section 3.4.

Figure 5: Visualization of our proposed generalized evaluation CGL framework AGALE.

## 3.1 Two Generalized Incremental Settings For Continual Graph Learning

We *first* define and develop two generalized incremental settings in CGL, i.e., the Task-IL setting and the Class-IL setting. In the **Task-IL setting**, the goal is to distinguish between classes specific to each task. Different from single-label settings, the multi-labeled nodes can be present with non-overlapping subsets of their labels in different subgraphs, as shown in Figure 5. Formally, for any node $v$ in the multi-label graph, in the Task-IL setting we have

$$\forall i,j\in\mathcal{T},\ \mathcal{Y}_i^v\cap\mathcal{Y}_j^v=\emptyset.$$

In the **Class-IL setting**, the goal is to distinguish among all the classes that have been seen so far. Specifically, in addition to the same node appearing in multiple tasks as in the previous setting, a node with multiple labels can attain new labels as new tasks arrive, as shown in Figure 5. Formally, for any node $v$ in the multi-label graph,

$$\forall i,j\in\mathcal{T},\ \text{if } i<j, \text{ then } \mathcal{Y}_i^v\subseteq\mathcal{Y}_j^v$$

Note that the above settings allow for node overlaps and growing/changing label sets of the same node at different time points.
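The difference between the two settings can be made concrete in code. Below is a small sketch with a hypothetical helper, assuming each task groups K classes of a random class order and that `y_full` is a node's complete binary label vector over all C classes.

```python
import numpy as np

def label_vector_at_step(y_full, class_order, t, K=2, setting="task_il"):
    """Return one node's label targets for the task at time step t.

    Task-IL: only the K classes grouped at step t are visible.
    Class-IL: all classes seen up to and including step t are visible,
    so a multi-labeled node's target vector can grow over time.
    """
    current = class_order[t * K:(t + 1) * K]
    seen = class_order[:(t + 1) * K] if setting == "class_il" else current
    return y_full[np.asarray(seen)]

y_full = np.array([1, 0, 1, 0, 1, 0])  # a node carrying labels {0, 2, 4}
order = [3, 2, 5, 0, 1, 4]             # one random class order
print(label_vector_at_step(y_full, order, t=1, setting="task_il"))   # [0 1]
print(label_vector_at_step(y_full, order, t=1, setting="class_il"))  # [0 1 0 1]
```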
## 3.2 Data Partitioning Algorithms

We now describe our data partitioning algorithms to simulate sequential data from static graphs. The design strategy of our algorithms takes into account the node overlaps in tasks, the growing/changing label set of nodes over time, and the changing node neighborhoods, while minimizing the loss of node labels and of the graph's topological structure during partitioning.

Our developed data partition strategy can be employed in both incremental settings and consists of the following two main parts.

- **Task sequence and subgraph sequence generation.** We employ Algorithm 1 to segment the provided graph from the dataset into coherent subgraph sequences. We first remove the classes with size smaller than a threshold δ. Instead of using a predefined class order (as discussed in Section 1.1), we generate n random orders of the remaining classes to simulate the random emergence of new classes in the real world. Specifically, given a dataset with C classes, we group K random classes as one task for one time step. At any time point t, let Ct denote the set of classes grouped for the task at time t. We construct a subgraph Gt = (Vt, Et) such that Vt is the set of nodes with one or more labels in Ct. The edge set Et consists of the induced edges on Vt. Note that the number of classes chosen to create one task is adaptable. In order to maximize the length of the task sequence for each given graph dataset, and thereby the potential for catastrophic forgetting, we choose K = 2 in this work. If the dataset has an odd number of classes in total, the remaining last class is grouped with the second-to-last class group.
- **Construction of train/val/test sets.** To overcome the current limitations of generating train/val/test sets as discussed in Section 1.1, we employ Algorithm 2 to partition the nodes of a given graph snapshot Gt. For the given subgraph Gt, our objective is to maintain the pre-established ratios for training, validation, and test data for both the task as a whole and the individual classes within the task. To achieve this, our procedure starts with the determination of the size of each class. Note that the cumulative sizes of these classes may exceed the overall number of nodes in the subgraph due to multi-labeled nodes being counted multiple times based on their respective labels. Subsequently, the classes are arranged in ascending order of size, starting with the smallest class. The smallest class is partitioned in accordance with the predefined proportions. Subsequent classes in the order undergo partitioning with the following steps:
  - We identify nodes that have already been allocated to prior classes.
  - We then compute the remaining node counts for the training, validation, and test sets in accordance with the predefined proportions for the current class.
  - Finally, we randomly split the remaining nodes within the current class into train/val/test sets such that their predefined proportions are respected.

Note that for a given class order, the structural composition of each subgraph remains constant across both incremental settings. What distinguishes these incremental settings is the label vector assigned to the nodes. Specifically, nodes with a single label manifest uniquely in one subgraph corresponding to a task. Conversely, nodes with multiple labels appear in the Task-IL setting with distinct non-overlapping subsets of their original label set across various subgraphs, while appearing with the expansion of their label vectors in the Class-IL setting.

Algorithm 1 Task Sequence and Subgraph Sequence Generation

Require: Static graph G = (V, E) with classes C = {c1, c2, . . . , cC}, threshold of small classes δ, group size K
Ensure: n task sequences S = {S1, S2, . . . , Sn} and for each task sequence Si a corresponding subgraph sequence Gi = {G1, G2, . . . , GT}
1: for cj ∈ C do
2:   Vcj = {vi | cj ∈ yi}
3: C′ = C − {cj : |Vcj| < δ}
4: Generate n random orders of C′: O = {O1, O2, . . . , On}
5: for Oj ∈ O do
6:   for t = 1 to ⌊|C′|/K⌋ = T do
7:     Group the first K classes of Oj as a task: St = {c1, . . . , cK}
8:     Oj = Oj − St
9:     Vt = {vi | yi ∩ St ≠ ∅}
10:    Et = {e(u, v) | e ∈ E ∧ u, v ∈ Vt}
11:    Gt = (Vt, Et)
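A compact Python sketch of the grouping and subgraph-induction steps of Algorithm 1 follows; the function and variable names are ours, the sketch covers a single random order, and it omits the merging of a leftover odd class into the last group.

```python
import random

def task_and_subgraph_sequence(edges, labels, classes, delta=0, K=2, seed=0):
    """edges: list of (u, v) pairs; labels: dict node -> set of class ids.
    Returns one task sequence [S_1, ..., S_T] and subgraphs [G_1, ..., G_T]."""
    # Drop classes with fewer than delta member nodes.
    size = {c: sum(c in y for y in labels.values()) for c in classes}
    kept = [c for c in classes if size[c] >= delta]
    random.Random(seed).shuffle(kept)  # one random class order O_j
    tasks, subgraphs = [], []
    for t in range(len(kept) // K):
        S_t = set(kept[t * K:(t + 1) * K])
        V_t = {v for v, y in labels.items() if y & S_t}              # nodes with a label in C_t
        E_t = [(u, v) for (u, v) in edges if u in V_t and v in V_t]  # induced edges
        tasks.append(S_t)
        subgraphs.append((V_t, E_t))
    return tasks, subgraphs
```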
Algorithm 2 Train and Test Partition Algorithm Within One Subgraph

Require: Subgraph Gt in the subgraph sequence {G1, G2, . . . , GT}, proportion set P = {Ptrain, Pval, Ptest} for train, validation, and test
Ensure: The split within subgraph Gt = {Vt^train, Vt^val, Vt^test} for task St
1: Get the classes for the current task St = {c1, . . . , cK}
2: O′ = Sort_ascend(|Vcj|) for cj ∈ St
3: Initialize empty node sets Vt^train, Vt^val, and Vt^test
4: Initialize an empty encountered-node set Vt
5: for c ∈ O′ do
6:   Vc = {vi | c ∈ yi}
7:   if c is the smallest class in St then
8:     Randomly split Vc into Vc^train, Vc^val, Vc^test according to P
9:   else
10:    Calculate the sizes of the train/val/test sets |Vc^train|, |Vc^val|, |Vc^test| according to P
11:    Vt^dup = Vc ∩ Vt
12:    Vc = Vc − Vt^dup
13:    for vi ∈ Vt^dup do
14:      for split ∈ [Vc^train, Vc^val, Vc^test] do
15:        if vi in split then
16:          |split| = |split| − 1
17:    for split ∈ [Vc^train, Vc^val, Vc^test] do
18:      Randomly choose |split| nodes from Vc to add to split
19:  Add Vc^train, Vc^val, Vc^test to Vt^train, Vt^val, Vt^test
20:  Add Vc to Vt
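The core of Algorithm 2, processing classes from smallest to largest and discounting nodes that an earlier class has already placed, can be sketched as follows; this simplified version ignores rounding leftovers and guards against negative quotas.

```python
import random

def split_subgraph(class_nodes, proportions=(0.6, 0.2, 0.2), seed=0):
    """class_nodes: dict class -> list of node ids in the current subgraph.
    Returns train/val/test node sets respecting `proportions` per class."""
    rng = random.Random(seed)
    splits = {"train": set(), "val": set(), "test": set()}
    assigned = set()
    for c in sorted(class_nodes, key=lambda c: len(class_nodes[c])):  # smallest first
        nodes = class_nodes[c]
        quota = dict(zip(splits, (int(p * len(nodes)) for p in proportions)))
        for v in nodes:
            for k in splits:                       # a node split via an earlier class
                if v in splits[k] and quota[k] > 0:
                    quota[k] -= 1                  # counts against this class's quota
        remaining = [v for v in nodes if v not in assigned]
        rng.shuffle(remaining)
        for k in splits:
            take, remaining = remaining[:quota[k]], remaining[quota[k]:]
            splits[k].update(take)
            assigned.update(take)
    return splits["train"], splits["val"], splits["test"]
```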
In the Appendix A.1, we present an analysis of the subgraphs derived by AGALE from the given static graph in PCG as an example showcasing the efficacy of our approach.

## 3.3 Theoretical Analysis Of AGALE

As studied in previous works (Ma et al., 2021; Zhao et al., 2023), the similarity of labels between neighboring nodes (usually termed label homophily) influences the performance of various graph machine learning algorithms for the task of node classification in the static case. We here provide a theoretical analysis of AGALE with respect to the label homophily of the generated subgraphs under different conditions. We later use our theoretical insights and the dataset properties to analyze the performance of various methods. We use the following definition of label homophily for multi-label graphs proposed in Zhao et al. (2023).

Definition 1. *Given a multi-label graph $\mathcal{G}$, the label homophily $h$ of $\mathcal{G}$ is defined as the average of the Jaccard similarity of the label sets of all connected nodes in the graph:*

$$h=\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}\frac{|\mathcal{Y}^{i}\cap\mathcal{Y}^{j}|}{|\mathcal{Y}^{i}\cup\mathcal{Y}^{j}|}$$

Let, for any two connected nodes $i, j \in \mathcal{V}$, $h_{\mathcal{G}}^{e(i,j)}$ denote the label homophily over the edge $e(i,j) \in \mathcal{E}$ in graph $\mathcal{G}$. We then have the following result about the label homophily of $e(i,j)$ in the subgraph $\mathcal{G}_t$ generated by AGALE at time $t$.

Theorem 1. *For any edge $e(i,j) \in \mathcal{E}$ and any subgraph at time $t$, $\mathcal{G}_t$, such that $e(i,j) \in \mathcal{E}_t$, $h_{\mathcal{G}_t}^{e(i,j)} \geq h_{\mathcal{G}}^{e(i,j)}$ when at least one of the nodes in $\{i,j\}$ is single-labeled. For the case when both nodes $i, j$ are multi-labeled, we obtain $h_{\mathcal{G}_t}^{e(i,j)} \geq h_{\mathcal{G}}^{e(i,j)}$ with probability at least $1-(1-h_{\mathcal{G}}^{e(i,j)})^{K}$ for the Task-IL setting and $1-(1-h_{\mathcal{G}}^{e(i,j)})^{Kt}$ for the Class-IL setting.*

Proof. In multi-label graphs, a pair of connected nodes belongs to one of the following three scenarios: 1) two single-labeled nodes are connected, 2) a single-labeled node is connected to a multi-labeled node, and 3) two multi-labeled nodes are connected.

Scenario 1: Note that at any time step $t$, two nodes $i$ and $j$ are connected if and only if at least one label of each node appears in $\mathcal{C}_t$. As in the first scenario both nodes are single-labeled, the label homophily score for edge $e(i,j)$ stays the same in the subgraph as in the original graph:

$$h_{\mathcal{G}_t}^{e(i,j)}=h_{\mathcal{G}}^{e(i,j)}=\begin{cases}0,&\text{if }\mathcal{Y}^i\neq\mathcal{Y}^j\\1,&\text{if }\mathcal{Y}^i=\mathcal{Y}^j\end{cases}\tag{6}$$

Scenario 2: In the second scenario, where one single-labeled node $i$ is connected to a multi-labeled node $j$, at any time step $t$, when $e(i,j)$ appears in the subgraph $\mathcal{G}_t$,

$$h_{\mathcal{G}_t}^{e(i,j)}\geq h_{\mathcal{G}}^{e(i,j)}:\quad\begin{cases}h_{\mathcal{G}_t}^{e(i,j)}=h_{\mathcal{G}}^{e(i,j)}=0,&\text{if }\mathcal{Y}^i\notin\mathcal{Y}^j\\h_{\mathcal{G}_t}^{e(i,j)}=\begin{cases}\frac{1}{2},&\text{if }\mathcal{Y}^i\subset\mathcal{C}_t\cap\mathcal{Y}^j\\1,&\text{if }\mathcal{C}_t\cap\mathcal{Y}^j=\mathcal{Y}^i\end{cases}\geq h_{\mathcal{G}}^{e(i,j)},&\text{if }\mathcal{Y}^i\in\mathcal{Y}^j\end{cases}\tag{7}$$

Combining equation 6 and equation 7, we note that when at least one node of an edge is single-labeled, the label homophily of the corresponding edge is equal to or greater than that in the static graph, thereby completing the first part of the proof.

Scenario 3: In the third scenario, where two multi-labeled nodes $i$ and $j$ are connected, at any time step $t$, when $e(i,j)$ appears in the subgraph $\mathcal{G}_t$, it holds that $\mathcal{C}_t \cap \mathcal{Y}^i \neq \emptyset$ and $\mathcal{C}_t \cap \mathcal{Y}^j \neq \emptyset$. In this scenario, the label homophily of an edge depends on the relationship between $\mathcal{Y}^i \cap \mathcal{Y}^j$ and $\mathcal{C}_t$:

$$\begin{cases}0=h_{\mathcal{G}_t}^{e(i,j)}<h_{\mathcal{G}}^{e(i,j)},&\text{if }\mathcal{Y}^i\cap\mathcal{Y}^j\cap\mathcal{C}_t=\emptyset\\\begin{cases}h_{\mathcal{G}_t}^{e(i,j)}=\frac{1}{2}&\text{Task-IL setting}\\h_{\mathcal{G}_t}^{e(i,j)}\geq\frac{1}{2t}&\text{Class-IL setting}\end{cases},&\text{if }\mathcal{Y}^i\cap\mathcal{Y}^j\neq\emptyset,\ \mathcal{Y}^i\cap\mathcal{Y}^j\cap\mathcal{C}_t\subset\mathcal{Y}^i\cap\mathcal{Y}^j\ \text{and}\ \mathcal{Y}^i\cap\mathcal{Y}^j\cap\mathcal{C}_t\subset\mathcal{C}_t\\h_{\mathcal{G}_t}^{e(i,j)}\geq h_{\mathcal{G}}^{e(i,j)},&\text{if }\mathcal{Y}^i\cap\mathcal{Y}^j\subset\mathcal{C}_t\\h_{\mathcal{G}_t}^{e(i,j)}=1\geq h_{\mathcal{G}}^{e(i,j)},&\text{if }\mathcal{C}_t\subseteq\mathcal{Y}^i\cap\mathcal{Y}^j\end{cases}\tag{8}$$

Note that all the statements hold in both incremental settings except for the second condition, where $\mathcal{Y}^i\cap\mathcal{Y}^j\cap\mathcal{C}_t$ is a strict subset of both $\mathcal{Y}^i\cap\mathcal{Y}^j$ and $\mathcal{C}_t$.
With a relatively small size of $|\mathcal{C}_t| = K = 2$ in our setting, we have in the Task-IL setting $|\mathcal{Y}_t^i \cap \mathcal{Y}_t^j| = 1$ and $|\mathcal{Y}_t^i \cup \mathcal{Y}_t^j| = 2$:

$$h_{\mathcal{G}_t}^{e(i,j)}=\frac{|\mathcal{Y}_t^i\cap\mathcal{Y}_t^j|}{|\mathcal{Y}_t^i\cup\mathcal{Y}_t^j|}=\frac{1}{2}\tag{9}$$

while in the Class-IL setting, because $|\mathcal{Y}_t^i \cap \mathcal{Y}_t^j| \geq 1$ and $|\mathcal{Y}_t^i \cup \mathcal{Y}_t^j| \leq Kt$, we obtain

$$h_{\mathcal{G}_t}^{e(i,j)}=\frac{|\mathcal{Y}_t^i\cap\mathcal{Y}_t^j|}{|\mathcal{Y}_t^i\cup\mathcal{Y}_t^j|}\geq\frac{1}{2t}\tag{10}$$

We can now upper bound the probability of the worst-case event, i.e., when an edge $e(i,j)$ exists at time $t$ but $\mathcal{C}_t \cap \mathcal{Y}^i \cap \mathcal{Y}^j = \emptyset$. This can only happen if the classes in set $\mathcal{C}_t$ are chosen from the set $\mathcal{Y}^i \cup \mathcal{Y}^j \setminus (\mathcal{Y}^i \cap \mathcal{Y}^j)$. For the Task-IL setting, the probability of choosing at least one element of $\mathcal{C}_t$ from the common labels of nodes $i$ and $j$ is equal to $h_{\mathcal{G}}^{e(i,j)}$. Then the probability that none of the classes in $\mathcal{C}_t$ appears in the common set is at most $(1-h_{\mathcal{G}}^{e(i,j)})^{|\mathcal{C}_t|}$. The proof is completed by noting the fact that $|\mathcal{C}_t| = K$ for the Task-IL setting and $|\mathcal{C}_t| = Kt$ for the Class-IL setting at time step $t$.

## 3.4 Comparison With Previous Evaluation Frameworks

In response to overlooked challenges in established CGL and MLCL evaluation frameworks, as detailed in Section 1, our framework tackles these issues as follows.

- **Incorporation of the multi-label scenario.** Contrary to previous evaluation frameworks, AGALE accommodates single-label and multi-label nodes in the following ways.
  - For single-label nodes, our framework expands upon previous methods during the task sequence's creation phase. It introduces dynamics in label correlations by allowing random class ordering to generate the task sequence. This results in diverse subgraph sequences, mimicking the random emergence of new trends in the real world.
  - Regarding the multi-label scenario, as shown in Figure 5, our framework allows for the update/change of label assignments for a given node in the Task-IL setting and the expansion of the node's label set in the Class-IL setting.
- **Information preservation and prevention of data leakage.**
  - As described in Section 3.2, the data partitioning strategies of AGALE ensure that no nodes from the original multi-label static graph are removed while creating the tasks. Single-labeled nodes appear once in the task sequence in both settings, while multi-labeled nodes surface with different labels in the Task-IL setting and the Class-IL setting. Specifically, they appear with non-overlapping subsets of their label set in the Task-IL setting, and, as the class set expands, their entire label set is guaranteed to be seen by the model before the final time step in the Class-IL setting.
  - Previous CGL evaluation frameworks split the nodes into train and evaluation sets within each class, not considering the situation where one node can belong to multiple classes in the task. Such a strategy may lead to data leakage as one node can be assigned to the training and testing sets for the same task. During task training on a subgraph comprising various classes, our framework ensures no overlap among the training, validation, and test sets. Single-labeled nodes exclusively belong to one class, preventing their re-splitting after the initial allocation.
For multi-label nodes that have been allocated to a particular class (see lines 11 and 12 in Algorithm 2), we exclude them from the remaining nodes of the other classes they belong to, eliminating any potential data leakage during training and evaluation within one subgraph.
  - In addition, we approach the continual learning setting by not allowing inter-task edges. This deliberate choice means that, upon the arrival of a new task, the model no longer retains access to the data from the previous time steps.
- **Ensuring a fair split across different classes and the whole graph.** Due to the differences in class size, a split over the whole graph would result in the bigger class dominating the splits, leaving the small class underrepresented. Moreover, a split within each class may result in data leakage in one subgraph, as explained in the previous paragraph. To maintain a fair split despite differences in class sizes, our framework prioritizes splitting smaller classes initially. It subsequently removes already split nodes from the larger classes' node sets. This approach guarantees an equitable split both within each class and within the whole subgraph, preventing larger classes from dominating the splits and ensuring adequate representation for smaller classes.
- **Application for graph/edge-level CGL.** AGALE can be directly applied for the graph classification task, where each input is an independent graph without interconnections. For the edge classification task, our framework can be applied by first transforming the original graph G into a line graph L(G), where for each edge in G we create a node in L(G); for every two edges in G that have a node in common, we make an edge between their corresponding nodes in L(G).

## 4 Related Work

## 4.1 Continual Learning

Continual Learning (van de Ven & Tolias, 2019; Hadsell et al., 2020; Nguyen et al., 2018; Aljundi et al., 2019; Li & Hoiem, 2016; Aljundi et al., 2017; Wang et al., 2023a), a fundamental concept in machine learning, addresses the challenge of enabling models to learn from and adapt to evolving data streams over time. Continual learning has applications in a wide range of domains, including computer vision, natural language processing, and reinforcement learning, making it an active area of research with practical implications for the lifelong adaptation of machine learning models. Unlike traditional batch learning, where models are trained on static datasets, continual learning systems aim to learn sequentially from new data while preserving previously acquired knowledge. This paradigm is particularly relevant in real-world scenarios where data is non-stationary and models need to adapt to changing environments. The key objectives of continual learning are to avoid catastrophic forgetting, where models lose competence in previously learned tasks as they learn new ones, and to ensure that the model's performance on earlier tasks remains competitive. Various techniques have been proposed in the literature to tackle these challenges, which can be grouped into four categories.

- **Knowledge distillation methods.** The methods from this category (Li & Hoiem, 2016; Wang et al., 2021; 2020b) retain the knowledge from the past by letting the new model mimic the old model on the previous task while adapting to the new task.
Overall, the learning objective can be summarized as minimizing the following loss function:

$$\mathcal{L}=\lambda_o\mathcal{L}_{\text{old}}\left(\mathbf{Y}_o,\hat{\mathbf{Y}}_o\right)+\mathcal{L}_{\text{new}}\left(\mathbf{Y}_n,\hat{\mathbf{Y}}_n\right)+\mathcal{R},\tag{11}$$

where $\mathcal{L}_{\text{old}}$ and $\mathcal{L}_{\text{new}}$ represent the loss functions corresponding to the old and new tasks, respectively. The parameter $\lambda_o$ is the weight for balancing the losses, and $\mathcal{R}$ encapsulates the regularization term. The process of transferring knowledge from a pre-existing model (teacher) to a continually evolving model (student) in knowledge distillation unfolds within $\mathcal{L}_{\text{old}}$, where the new model undergoes training to align its predictions on new data for the old task, denoted as $\hat{\mathbf{Y}}_o$, with the predictions of the previous model on the same new data for the old task, represented as $\mathbf{Y}_o$. Simultaneously, the new model approximates its predictions on the new task, $\hat{\mathbf{Y}}_n$, to their true labels $\mathbf{Y}_n$. For example, LwF (Li & Hoiem, 2016) minimizes the difference between the outputs of the previous model and the new model on the newly arriving data for the previous tasks, while minimizing the classification loss of the new model on the new task.

- **Regularization strategies.** The methods in this category maintain the knowledge extracted from the previous task by penalizing changes in the parameters $\theta$ of the model trained for the old tasks. Typically, the following loss is minimized:

$$\mathcal{L}(\theta)=\mathcal{L}_{\text{new}}(\theta)+\lambda\sum_{i}\Omega_{i}\left(\theta_{i}-\theta_{i}^{*}\right)^{2}\tag{12}$$

where $\mathcal{L}_{\text{new}}$ denotes the loss function for the new task and $\theta$ is the set of model parameters. The parameter $\lambda$ functions as the weight governing the balance between the old and new tasks, while $\Omega_i$ represents the importance score assigned to the $i$th parameter $\theta_i$. For example, MAS (Aljundi et al., 2017) assigns importance scores to the parameters by measuring how sensitive the output is to changes in the parameters. The term $\theta_i^*$ refers to the prior task's parameter determined through optimization for the previous task.

- **Replay mechanisms.** Methods from this category extract representative data from the previous tasks and employ them along with the newly arriving data for training to overcome catastrophic forgetting (Shin et al., 2017; Kim et al., 2020b). Methods under this category mainly differ with respect to their approaches to sampling representative data from the old task for storage in the buffer. For example, Kim et al. (2020b) maintains a target proportion of different classes in the memory to tackle the class imbalance in multi-label data.

- **Dynamic architectures.** Methods from this category (Lee et al., 2017; Wei et al., 2021) dynamically expand their architecture when needed for new tasks. This expansion may include adding new layers or neurons to accommodate new knowledge. For example, Lee et al. (2017) dynamically expands the network architecture based on the relevance between new and old tasks.

Another line of work in CL focuses on benchmarking evaluation methods. For instance, Farquhar & Gal (2019) and Lange et al. (2023) provide more robust and realistic evaluation metrics for CL methods, incorporating real-world challenges like varying task complexities and the stability gap.
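As an illustration of equations 11 and 12, here is a PyTorch-style sketch of the two loss recipes; it is a generic sketch for multi-label outputs, not the reference implementation of LwF, EWC, or MAS.

```python
import torch
import torch.nn.functional as F

def distillation_loss(y_hat_old, y_teacher_logits, y_hat_new, y_new, lam_o=1.0):
    """Equation 11 without the regularizer R: align the student's predictions
    on the old task with the teacher's soft targets, plus the new-task loss."""
    l_old = F.binary_cross_entropy_with_logits(y_hat_old, torch.sigmoid(y_teacher_logits))
    l_new = F.binary_cross_entropy_with_logits(y_hat_new, y_new)
    return lam_o * l_old + l_new

def parameter_penalty(model, old_params, importance, lam=1.0):
    """The penalty term of equation 12: Omega-weighted squared drift from theta*."""
    return lam * sum((omega * (p - p_star) ** 2).sum()
                     for p, p_star, omega in zip(model.parameters(), old_params, importance))
```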
## 4.2 Continual Graph Learning

As a sub-field of continual learning, Continual Graph Learning (CGL) addresses the catastrophic forgetting problem as the model encounters new graph-structured data over time. Within CGL, two primary lines of work exist. The first involves establishing evaluation frameworks that define incremental settings in CGL scenarios and their corresponding data partitioning algorithms. The second line of work focuses on proposing new methods based on specific predefined CGL incremental settings derived from these evaluation frameworks. Our work mainly falls into the first category, in which we develop a more holistic evaluation framework covering the multi-label scenario for graph-structured data.

The previously established CGL frameworks focus on benchmarking tasks in CGL. For instance, Zhang et al. (2022) defined Task- and Class-Incremental settings for single-labeled node and graph classification tasks in CGL and studied the impact of including the inter-task edges among the subgraphs. Ko et al. (2022) expanded this by adding the domain- and time-incremental settings and including the link prediction task in the CGL benchmark. Additionally, surveys like Febrinanto et al. (2022) and Yuan et al. (2023) focus on categorizing the approaches in CL and CGL. However, none of the above works sufficiently addressed the complexities of defining the multi-label node classification task within the CGL scenario. The only exception is Ko et al. (2022), which used a graph with multi-labeled nodes, but that too in a domain-incremental setting. In particular, each task was constituted of nodes appearing from a single domain. Consequently, a node appears in one and only one task together with all of its labels. This does not cover the general multi-label scenario in which the same node can appear in multiple tasks, each time with different or expanding label sets.

Existing methods for CGL focus mainly on the multi-class scenario and fall into one of the four categories (see the previous subsection) of continual learning methods. For example, GraphSAIL (Xu et al., 2020) is a knowledge distillation approach that distills each node's local structure, global structure, and self-embedding knowledge, respectively. The regularization approach TWP (Liu et al., 2020) adds, in addition to the task-related loss, a penalization on the parameters that are important to the learned topological information, so as to stabilize the parameters playing pivotal roles in the topological aggregation. ERGNN (Zhou & Cao, 2021) is based on the replay mechanism and carefully selects nodes from the old tasks into the buffer and replays them with the new graph. Wang et al. (2020a) combines replay and regularization to preserve existing patterns.

## 4.3 Learning On Dynamic Graphs

Since streaming graphs find applications in various domains, including social network analysis, recommendation systems, fraud detection, and knowledge graph refinement, several methods (Wang et al., 2023b; Yu et al., 2018; 2017; Xu et al., 2019) have been proposed in the field of dynamic graph learning (DGL) to utilize the knowledge from the past to enhance the model's performance on the graph in the current timestamp. For example, Rossi et al. (2020) uses a memory unit to represent a node's history in a compressed format, and Pareja et al. (2019) uses a recurrent architecture between the models trained for adjacent time steps to let the new model inherit knowledge extracted from the old tasks.
However, the design goal of the methods in DGL is to utilize the knowledge extracted from the old tasks to enhance the performance of the model on the current task, while in CGL, we focus on the catastrophic forgetting problem, i.e., the model needs not only to perform well on the current task but also on the previous tasks in the task sequence. We compare and analyze the models from these two categories in detail in Section 6.

## 4.4 Application Of Graph Machine Learning In Continual Learning

Some work (Tang & Matteson, 2021; Liu et al., 2023) also attempts to use graph structures to alleviate catastrophic forgetting in Euclidean data. For instance, Tang & Matteson (2021) augments independent image data in memory with a learnable random graph, capturing similarities among them to alleviate catastrophic forgetting. However, as our current focus is solely on graph-structured data, these endeavors fall beyond the scope of this study.

## 5 Experiment Setup

In this section, we test the state-of-the-art models from the CL, DGL, and CGL domains. Note that in this study, we employ n = 3, indicating that we generate three random orders for the classes in each dataset in the experimental section. We introduce the models according to their categories.

## 5.1 Methods

This subsection introduces all the methods used in the experiment section. The CL methods use the Graph Convolutional Network (GCN) (Kipf & Welling, 2016) as the backbone.

- **SimpleGCN**: We train a GCN on each of the subgraph sequences without any continual learning technique, which is denoted as SimpleGCN in the following sections.
- **JointTrainGCN**: We also include a GCN trained on all the tasks simultaneously, which therefore should not suffer from the catastrophic forgetting problem. This setting is referred to as JointTrainGCN in the following sections.
- **Continual Learning Methods**: We choose Learning Without Forgetting (LwF), Elastic Weight Consolidation (EWC), and Memory Aware Synapses (MAS) from this category. LwF distills the knowledge from the old model to the new model to prevent the model from catastrophic forgetting. EWC and MAS are both regularization-based methods. The difference is that EWC penalizes the changes in the parameters that are important to the previous task, while MAS measures the importance of the parameters based on the sensitivity of the output to the parameters.
- **Dynamic Graph Neural Network**: We choose EvolveGCN (Pareja et al., 2019) from this category, which uses a recurrent architecture between the models trained for adjacent time steps to let the new model inherit knowledge extracted from the old tasks to enhance the model's performance on the current task.
- **Continual Graph Learning Methods**: We choose ERGNN (Zhou & Cao, 2021) from this category, which samples representative nodes from the old tasks into the buffer and replays them with the new data to address the catastrophic forgetting problem.

## 5.2 Datasets

We demonstrate our evaluation framework on 3 multi-label datasets in this work. We also include 1 multi-class dataset, CoraFull, as an example to demonstrate the generalization of our evaluation framework to single-label nodes. We include the description of CoraFull and the results on it in the Appendix A.2. The inter-task edges are defined in Zhang et al. (2022) as the edges that connect the new subgraph to the overall graph. We do not allow inter-task edges in our evaluation framework, i.e., at each time step, only the subgraph for the new task is used as input.
The reason is that in CL, the assumption is that the model loses access to the data from the previous time steps. With the inter-task edges, the node features from the previous time step would also be used as input, which violates this assumption and alleviates the forgetting problem. Below, we introduce the datasets used in this work:

1. PCG (Zhao et al., 2023), in which nodes are proteins, edges correspond to protein functional interactions, and the labels represent the phenotypes of the proteins.
2. DBLP (Akujuobi et al., 2019), in which nodes represent authors, edges the co-authorship between the authors, and the labels indicate the research areas of the authors.
3. Yelp (Zeng et al., 2019), in which nodes correspond to customer reviews and edges to the friendships between their authors, with node labels representing the types of businesses.

The statistics of the datasets are summarized in Table 1. We use the label homophily defined for multi-label graphs in Zhao et al. (2023). Following the application of a data partitioning algorithm, the static graphs given by the datasets are split into subgraph sequences. We also summarize the characteristics of the subgraphs to provide insights into the partitioned structure.

Table 1: The data statistics. Specifically, |V|, |E|, |C|, |L|, and r_homo denote the number of nodes, edges, classes, the mean label count per node, and the label homophily of the static graph given by the dataset, respectively. |T| signifies the count of tasks in the resulting task sequence. Additionally, |V̄| and |Ē| represent the average number of nodes and edges in a subgraph. Further details on label homophily are captured through r̄_tsk and r̄_cls, representing the averaged label homophily of the subgraphs in the Task-IL setting and the Class-IL setting, respectively.

| Dataset | \|V\| | \|E\| | \|C\| | \|L\| | \|T\| | r_homo | \|V̄\| | \|Ē\| | r̄_tsk | r̄_cls |
|---------|-------|-------|-------|-------|-------|--------|--------|--------|--------|--------|
| PCG | 3K | 37K | 15 | 1.93 | 7 | 0.17 | 808 | 4763 | 0.64 | 0.38 |
| DBLP | 28K | 68K | 4 | 1.18 | 2 | 0.76 | 15K | 37K | 0.86 | 0.81 |
| Yelp | 716K | 7.34M | 100 | 9.44 | 50 | 0.22 | 121K | 921K | 0.75 | 0.47 |

In Theorem 1 we theoretically analyzed the label homophily of the edges in the subgraphs, where we showed that in cases of single-labeled nodes and for higher-homophily edges, the homophily in the subgraphs typically increases. Table 1 further shows that the average label homophily of the subgraphs is in fact higher than the label homophily of the corresponding static graph.

## 5.3 Evaluation

## 5.3.1 Metrics

We evaluate the models using a performance matrix $\mathbf{M} \in \mathbb{R}^{T \times T}$, where $\mathbf{M}_{i,k}$ denotes the performance score reported by an evaluation metric (e.g., AUC-ROC, average precision, etc.) on task $\mathcal{S}_k$ after the model has been trained over a sequence of tasks from $\mathcal{S}_1$ to $\mathcal{S}_i$. At each time step $i$, the average performance of the model is measured by the average of the model's performances on task $\mathcal{S}_1$ to task $\mathcal{S}_i$, i.e., the average of row $i$ in the performance matrix $\mathbf{M}$. After the whole task sequence is presented to the model, we report the average performance AP as:

$$AP=\frac{\sum_{i=1}^{T}\mathbf{M}_{T,i}}{T}\tag{13}$$

for which higher is better. We use the average forgetting (AF) score proposed in Lopez-Paz & Ranzato (2017). The forgetting on task $\mathcal{S}_i$ is measured by the performance change on task $\mathcal{S}_i$ after the model is trained on the whole task sequence.
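Both metrics are simple reductions of the performance matrix; a minimal NumPy sketch is given below, and the formal definition of AF follows.

```python
import numpy as np

def average_performance(M):
    """Equation 13: mean of the last row of the T x T performance matrix."""
    return M[-1].mean()

def average_forgetting(M):
    """Equation 14: average change on each task between when it was learned
    (diagonal entry) and after training on the final task (last row)."""
    T = M.shape[0]
    return (M[-1] - np.diag(M)).sum() / (T - 1)

M = np.array([[0.9, 0.0, 0.0],
              [0.7, 0.8, 0.0],
              [0.6, 0.7, 0.9]])  # toy matrix for T = 3 tasks
print(average_performance(M), average_forgetting(M))  # 0.733..., -0.2
```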
Formally, we report the average forgetting AF on all the tasks as:

$$AF=\frac{\sum_{i=1}^{T}(\mathbf{M}_{T,i}-\mathbf{M}_{i,i})}{T-1}\tag{14}$$

Note that we here compute a single metric to quantify the incurred forgetting over past tasks when the model is trained for the last task $\mathcal{S}_T$. The summand indicates the performance decrease on some task $\mathcal{S}_i$ after learning on the later task $\mathcal{S}_T$. When the average forgetting is negative, its absolute value indicates the averaged performance decrease on all previous tasks when the model is trained on the last task $\mathcal{S}_T$ in the task sequence. A positive AF score indicates that the performance on some of the past tasks actually increased after training on task $\mathcal{S}_T$. A positive AF score might be the result of correlation among tasks that the model exploited, thus showing an improvement over past tasks. Such an observation may arise when the tasks from a graph are highly correlated with each other; in this case, training on the new task helps further improve the performance on the old tasks. Overall, we report the AP and AF for each model, and the scores we obtain from the two metrics are interpreted in Table 2.

Table 2: The interpretation of the average performance score (AP) and the average forgetting score (AF).

| | high AF | low AF |
|---|---------|--------|
| high AP | preserves well-rounded knowledge across all the tasks | performs well on the new task while forgetting about the old tasks |
| low AP | preserves the knowledge from the old tasks at the cost of the overall performance; indicates that the tasks are not correlated, so improvements on one task harm the performance on the other tasks | forgets about the old tasks and fails to perform well on the new task |

## 5.3.2 Visualization

We use heatmaps and line plots to visualize the performance matrix $\mathbf{M}$. Due to the limited space, we add the heatmaps in the Appendix A.4. The line plots are shown in Figure 7; they have the time steps on the x axis, while the y axis indicates the average performance of the model over all the tasks that have been encountered so far.

## 6 Results And Analysis

In this section, we summarize the experimental results on the multi-label datasets in the Task-IL setting and the Class-IL setting defined in Section 3.1 in Table 3 and Table 4, respectively. To use a single numerical value to quantify the overall performance of the models, we calculate an average performance matrix $\hat{\mathbf{M}}$ from the performance matrices of the three random splits and report the AP and AF from the averaged performance matrix.

Figure 6: Visualization of the analysis of the performance of SimpleGCN and JointTrainGCN using PCG as an example. (a) The performances of GCN on the subgraphs and the joint train graph from PCG and the label homophily of the graphs. (b) The distribution of the number of labels per node in the better-than-average subset and in the worse-than-average subset. (c) The distribution of the label homophily of the nodes in the better-than-average subset and in the worse-than-average subset.

## 6.1 Lower And Upper Bounds In CGL

Figure 7: Learning curves showing the dynamics of the average performance during learning on the task sequences of different datasets. The color coding and legend names remain consistent across all subfigures. To avoid obstructing the line plot, we omit the legend in the subplots corresponding to PCG.
In the previous CGL frameworks (Zhang et al., 2022; Ko et al., 2022), SimpleGCN and JointTrainGCN are shown to have the worst and the best performance, respectively. Such a result is also expected as (i) SimpleGCN is employed on sequential data without any enhanced abilities to deal with catastrophic forgetting (thereby showing performance degradation) and (ii) in JointTrainGCN all data is used to train the base GNN. However, the results from the multi-label datasets in both incremental settings, as shown in Table 3 and Table 4, reveal that SimpleGCN and JointTrainGCN are no longer suitable as lower and upper bounds for evaluating CGL performance in the more generalized scenario of multi-label datasets. In the following, we theoretically and empirically analyze the rationale behind such a finding.

## 6.1.1 Label Homophily And GCN

GNNs, and specifically the GCN used here as the base network, are known to perform better on graphs with high label homophily. As shown in Theorem 1, splitting labels into distinct prediction tasks and creating subgraphs for each task results in an increase in the label homophily of the edges in the subgraphs as compared to that in the full graph. In particular, if a dataset contains a large number of single-labeled nodes in the full graph with a non-zero edge label homophily, the increase in the label homophily of edges in the subgraphs helps SimpleGCN to assign correct labels to the corresponding nodes. However, in JointTrainGCN, the presence of diverse neighborhoods around single-labeled nodes leads to low label homophily, impacting its performance negatively.

**Empirical evidence.** Figure 6 illustrates the above statements with an example from one random shuffle of PCG using the subgraphs generated for the Task-IL setting (colored in blue), the Class-IL setting (colored in green), and the original static graph given by the dataset (colored in red). On the x axis, we show the label homophily level of the input graphs, while on the y axis, we show the performance of SimpleGCN after it is trained on the subgraph in the corresponding incremental settings and of JointTrainGCN on the whole static graph. We make the following observations.

- The subgraphs in the Task-IL setting and the Class-IL setting have higher label homophily than the full graph, explaining the better performance of SimpleGCN as compared to JointTrainGCN.
- We also observe that, as compared to the Task-IL setting, the subgraphs generated for the Class-IL setting have lower label homophily. This happens because of the expanding label sets in the Class-IL setting.

In Figures 6b and 6c, we further analyze the causes of the bad performance of JointTrainGCN. We used the JointTrainGCN model on test nodes from the joint train graph in PCG and calculated an average precision score for each node. The mean value of the scores is then used as a threshold to divide the test nodes into the set of nodes that perform *better-than-average* and the *worse-than-average* performing node subset, indicated by the blue and orange bars in the plots. To remove the influence of the difference in the sizes of the subsets, we use the percentage of the nodes in the corresponding subset as the y axis. Based on the edge homophily defined in Definition 1, we define the label homophily in the direct neighborhood of a node as the averaged homophily of the edges connected to this node:

Definition 2. *For a node $v$ in the graph $\mathcal{G}$, we define the label homophily of $v$ with respect to its immediate neighborhood $\mathcal{N}^v$, represented as $h^v$, as the average of the label homophily of the edges connected to $v$:*

$$h^{v}=\frac{\sum_{e(i,j)\,|\,j\in\mathcal{N}^{v}}h^{e(i,j)}}{|\mathcal{N}^{v}|}$$
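A small sketch of the two homophily quantities (the per-edge Jaccard similarity underlying Definition 1 and the node-level average of Definition 2), written here as plain Python over label sets:

```python
def edge_homophily(y_i, y_j):
    """Per-edge label homophily: Jaccard similarity of the two label sets."""
    return len(y_i & y_j) / len(y_i | y_j)

def node_homophily(v, neighbors, labels):
    """Definition 2: average homophily of the edges incident to node v."""
    return sum(edge_homophily(labels[v], labels[u]) for u in neighbors[v]) / len(neighbors[v])

labels = {0: {0, 1}, 1: {1}, 2: {2}}
neighbors = {0: [1, 2]}
print(node_homophily(0, neighbors, labels))  # (1/2 + 0/3) / 2 = 0.25
```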
We make the following observations.

- In Figure 6b, the percentage of single-labeled nodes in the worse-than-average performing subset is higher than that in the better-than-average subset.
- Figure 6c shows that, in fact, a high percentage of the worse-performing nodes have very low label homophily (computed using Definition 2), close to 0.
- The above two observations indicate that the performance of JointTrainGCN suffers due to the presence of a higher percentage of low-label-homophily edges with at least one single-labeled node.

For completeness, we include in Figure 6c a kernel density estimation of the node homophily distribution, which shows a clear shift in the distribution of the label homophily in the better-performing subset as compared to the worse-than-average subset.

In the following sections, we summarize the performance of the chosen baselines in the Task-IL setting and the Class-IL setting and provide a detailed analysis of the performances of the baselines on different datasets.

## 6.2 Results In Task-IL Setting

Table 3: Performance of the baseline models in the Task-IL setting. The performances are reported in Average Precision. "AP" stands for Average Precision; the higher, the better. "AF" indicates the average forgetting; the higher, the better.

| Task-IL | PCG | | DBLP | | Yelp | |
|------------|--------------|--------------|--------------|---------------|--------------|--------------|
| | AP | AF | AP | AF | AP | AF |
| SimpleGCN | 54.34 ± 0.04 | −6.11 ± 0.03 | 87.47 ± 0.12 | −15.76 ± 0.00 | 54.87 ± 0.03 | −1.43 ± 0.05 |
| LwF | 58.12 ± 0.05 | −2.84 ± 0.02 | 95.01 ± 0.01 | −0.98 ± 0.00 | 54.89 ± 0.03 | −2.07 ± 0.05 |
| EWC | 56.17 ± 0.03 | −3.77 ± 0.03 | 92.28 ± 0.05 | −6.51 ± 0.01 | 56.53 ± 0.05 | −0.17 ± 0.02 |
| MAS | 56.26 ± 0.04 | −2.64 ± 0.03 | 93.93 ± 0.03 | −3.17 ± 0.00 | 56.05 ± 0.03 | −0.76 ± 0.05 |
| EvolveGCN | 52.76 ± 0.06 | −3.68 ± 0.03 | 78.94 ± 0.25 | −35.20 ± 0.00 | 55.93 ± 0.07 | −5.11 ± 0.07 |
| ERGNN | 53.64 ± 0.06 | −1.39 ± 0.02 | 67.70 ± 0.03 | −24.96 ± 0.00 | 54.99 ± 0.03 | −0.92 ± 0.04 |
| JointTrain | 22.47 ± 0.47 | − | 85.60 ± 0.25 | − | 13.80 ± 0.08 | − |

Table 3 presents the results for the three real-world multi-label datasets in the Task-IL setting. In general, the knowledge distillation method LwF excels on graphs with shorter task sequences (e.g., PCG and DBLP with 7 and 2 tasks, respectively). In contrast, all methods perform comparably on the graph with a long task sequence, Yelp, with 50 tasks, among which regularization-based methods like EWC and MAS slightly outperform the other approaches. This disparity arises because LwF distills knowledge only from the last time step, leading to a performance drop with longer sequences. Meanwhile, regularization-based methods like EWC and MAS, which penalize changes in the parameters important for previous tasks, prove effective for longer task sequences. The weak performance of the replay-based method ERGNN indicates the importance of including the local topological structure around the nodes in the buffer instead of sampling isolated nodes into the buffer.
Dynamic graph neural networks like EvolveGCN struggle with substantial forgetting despite achieving notable average precision scores because they only focus on the current task. We visualize the learning curves of the models in the Task-IL setting on PCG, DBLP, and Yelp in Figures 7a, 7b, and 7c, respectively. The x axis indicates the current time step, and the corresponding value on the y axis is the average performance of the model at the current time step over all the tasks encountered so far.

## 6.3 Detailed Analyses On Different Datasets

PCG. PCG has a relatively short task sequence with 7 tasks. SimpleGCN showcases competitive scores but is susceptible to forgetting, indicating the low correlation among the tasks. LwF outperforms SimpleGCN with notably improved robustness against forgetting, which indicates that the shorter task sequence in PCG contributes to the effectiveness of LwF in retaining task knowledge, because LwF only distills knowledge from the previous model. EWC and MAS also exhibit competitive performance, demonstrating moderate resistance to forgetting. Meanwhile, because of the low correlations among the tasks, EvolveGCN faces challenges, with a lower performance and notable forgetting. JointTrainGCN has the poorest performance because of the low label homophily level of the joint train graph.

DBLP. DBLP has the shortest task sequence length with only 2 tasks. LwF once again stands out with the highest performance and minimal forgetting. SimpleGCN has the worst forgetting on DBLP compared to the other two datasets, indicating that the tasks in DBLP have the lowest task correlation. While EWC and MAS present performance comparable to LwF, they suffer from worse forgetting. Notably, the low task correlation also results in the low performance and extreme forgetting of EvolveGCN and ERGNN, indicating that the information from the previous task, whether stored in the model or in the data, cannot assist the model's performance on the new task. Because the joint train graph in DBLP has the highest level of label homophily, JointTrainGCN also achieves better performance compared to its performance on the other two multi-label datasets.

Yelp. The Yelp dataset is characterized by the longest task sequence, encompassing 50 tasks, and features the highest task correlations, as indicated by the competitive performance shown by SimpleGCN. Despite the extended task sequence, training on a new task does not significantly impair the performance on the previous tasks. The long task sequence poses a potential challenge for LwF, as prolonged sequences lead to increased forgetting. EWC and MAS emerge as robust performers in this demanding setting, demonstrating solid results with competitive performance and modest forgetting. EvolveGCN encounters a lower score coupled with considerable forgetting: although the high task correlation makes the utilization of the previous model helpful for improving the performance on the current task, EvolveGCN pays no attention to maintaining the performance on the old tasks. Additionally, ERGNN achieves a comparable performance with minimal forgetting, positioning it as a strong contender on the Yelp dataset. JointTrainGCN achieves the lowest performance because of the low label homophily level in the joint train graph.
## 6.4 Results In Class-IL Setting

Table 4: Performance of the baseline models in the Class-IL setting. The performances are reported in Average Precision. "AP" stands for Average Precision; the higher, the better. "AF" indicates the average forgetting; the higher, the better.

| Class-IL | PCG | | DBLP | | Yelp | |
|------------|--------------|--------------|--------------|---------------|--------------|--------------|
| | AP | AF | AP | AF | AP | AF |
| SimpleGCN | 56.70 ± 0.04 | −2.48 ± 0.03 | 93.22 ± 0.03 | −3.90 ± 0.00 | 55.78 ± 0.03 | −1.56 ± 0.05 |
| LwF | 38.13 ± 0.13 | 3.48 ± 0.03 | 81.98 ± 0.18 | −0.69 ± 0.00 | 33.72 ± 0.05 | 0.59 ± 0.05 |
| EWC | 35.12 ± 0.13 | 2.17 ± 0.03 | 81.88 ± 0.06 | −8.92 ± 0.00 | 32.32 ± 0.08 | 1.71 ± 0.03 |
| MAS | 33.00 ± 0.15 | 0.98 ± 0.02 | 82.82 ± 0.10 | −4.87 ± 0.00 | 32.07 ± 0.07 | −1.10 ± 0.04 |
| EvolveGCN | 29.44 ± 0.12 | 0.23 ± 0.01 | 68.18 ± 0.03 | −29.69 ± 0.00 | 23.45 ± 0.06 | −0.70 ± 0.05 |
| ERGNN | 29.80 ± 0.14 | −2.82 ± 0.03 | 59.99 ± 0.12 | 3.14 ± 0.00 | 24.00 ± 0.06 | −0.12 ± 0.01 |
| JointTrain | 22.47 ± 0.47 | − | 85.60 ± 0.25 | − | 13.80 ± 0.08 | − |

Table 4 presents the results for the three real-world multi-label datasets in the Class-IL setting. Overall, SimpleGCN achieves a superior performance across all datasets. This performance contrast is noteworthy when compared to its performance on multi-class datasets in previous works (Zhang et al., 2022; Ko et al., 2022). The key distinction lies in our evaluation framework, where we enable the label vectors of multi-labeled nodes to expand during the Class-IL setting. In essence, this approach incorporates the previous labels of multi-labeled nodes as part of the target labels in subsequent tasks. This strategy serves a dual purpose: it mitigates the problem of forgetting while simultaneously improving the performance on earlier tasks. This improvement is indicated by the positive average forgetting scores in the Class-IL setting. The performance of JointTrainGCN is not influenced by the change in the setting, as it is trained on all the tasks simultaneously. The drop in the performances of the other baseline models is a result of the increasing number of classes in the tasks at each time step, i.e., the difficulty of the task increases at each time step. We visualize the learning curves of the models in the Class-IL setting on PCG, DBLP, and Yelp in Figures 7d, 7e, and 7f, respectively. The x axis indicates the current time step, and the corresponding value on the y axis is the average performance of the model at the current time step over all the tasks encountered so far. Below, we analyze the performance of the chosen baseline models on each of the datasets.

## 6.5 Detailed Analyses On Different Datasets

PCG. SimpleGCN leads with the highest average performance over all tasks with low forgetting, even though it has no CL technique to prevent forgetting. On the other hand, the CL methods sacrificed the average performance on the task sequence but successfully maintained a positive AF. This means the knowledge distillation- and regularization-based models are able to retain the knowledge from the old tasks in the Class-IL setting. EvolveGCN and ERGNN achieve comparable average performance on the task sequence, but ERGNN fails to retain the knowledge from the old task as it only samples isolated nodes into the replay buffer while ignoring the topological structure. JointTrainGCN remains the worst-performing model because of the low label homophily in the input graph.

DBLP.
DBLP has the shortest task sequence, but SimpleGCN and the CL methods LwF, EWC, and MAS suffered from the most severe forgetting problem on it. These negative average forgetting scores observed on DBLP indicate low task correlation, i.e., the knowledge from the old task hinders the model from achieving better performance on the new task. As the least multi-labeled graph, DBLP witnesses the least pronounced performance dip in the Class-IL setting compared to the Task-IL setting. This observation suggests that multi-label datasets pose a more challenging test for models when the label vectors of nodes continue to grow.

Yelp. In Yelp, nodes are more multi-labeled compared to PCG and DBLP, as shown in Table 1. Overall, we see a clear performance difference in the Class-IL setting compared to the Task-IL setting on Yelp. Furthermore, knowledge distillation- and regularization-based methods surpass the dynamic graph neural network EvolveGCN and the replay-based method ERGNN. This is primarily due to the fact that EvolveGCN neglects the preservation of knowledge from previous tasks, which ultimately hampers the overall performance. ERGNN, on the other hand, disregards the topological structure surrounding the sampled experience nodes, further impacting its efficacy in handling the evolving tasks.

## 7 Conclusion

We develop a new evaluation framework, which we refer to as AGALE, for continual graph learning. Filling in the gaps in the current literature, we (i) define two generalized incremental settings for the more general multi-label node classification task, (ii) develop new data split algorithms for curating CGL datasets, and (iii) perform extensive experiments to evaluate and compare the performance of methods from continual learning, dynamic graph learning, and continual graph learning. Through our theoretical and empirical analyses, we show important differences of the multi-label case with respect to the more studied single-label scenario. We believe that our work will encourage the development of new methods tackling the general scenario of multi-label classification in continual graph learning.

Following the current literature, we focus on quantifying catastrophic forgetting in AGALE. In realistic scenarios, there is also the case where the model could be required to selectively forget about the past. For example, users in a social network might lose interest in certain topics and unfollow some of their friends. Developing new evaluation metrics as well as new models that reward selective forgetting of some tasks while avoiding catastrophic forgetting overall is an interesting avenue for future research.

## References

Uchenna Akujuobi, Yufei Han, Qiannan Zhang, and Xiangliang Zhang. Collaborative graph walk for semi-supervised multi-label node classification. *CoRR*, abs/1910.09706, 2019. URL http://arxiv.org/abs/1910.09706.

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. *CoRR*, abs/1711.09601, 2017. URL http://arxiv.org/abs/1711.09601.

Rahaf Aljundi, Klaas Kelchtermans, and Tinne Tuytelaars. Task-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

Sebastian Farquhar and Yarin Gal. Towards robust evaluations of continual learning, 2019.
Falih Gozi Febrinanto, Feng Xia, Kristen Moore, Chandra Thapa, and Charu Aggarwal. Graph lifelong learning: A survey, 2022.

Raia Hadsell, Dushyant Rao, Andrei A Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. *Trends in cognitive sciences*, 24(12):1028–1040, 2020.

Chris Dongjoo Kim, Jinseo Jeong, and Gunhee Kim. Imbalanced continual learning with partitioning reservoir sampling. *CoRR*, abs/2009.03632, 2020b. URL https://arxiv.org/abs/2009.03632.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *CoRR*, abs/1609.02907, 2016. URL http://arxiv.org/abs/1609.02907.

Jihoon Ko, Shinhwan Kang, and Kijung Shin. Begin: Extensive benchmark scenarios and an easy-to-use framework for graph continual learning. *arXiv preprint arXiv:2211.14568*, 2022.

Matthias De Lange, Gido van de Ven, and Tinne Tuytelaars. Continual evaluation for lifelong learning: Identifying the stability gap, 2023.

Jeongtae Lee, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. *CoRR*, abs/1708.01547, 2017. URL http://arxiv.org/abs/1708.01547.

Zhizhong Li and Derek Hoiem. Learning without forgetting. *CoRR*, abs/1606.09282, 2016. URL http://arxiv.org/abs/1606.09282.

Huihui Liu, Yiding Yang, and Xinchao Wang. Overcoming catastrophic forgetting in graph neural networks. *CoRR*, abs/2012.06002, 2020. URL https://arxiv.org/abs/2012.06002.

Jinghua Liu, Yaojin Lin, Weiping Ding, Hongbo Zhang, and Jixiang Du. Fuzzy mutual information-based multi-label feature selection with label dependency and streaming labels. *IEEE Transactions on Fuzzy Systems*, 31(1):77–91, 2023. doi: 10.1109/TFUZZ.2022.3182441.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continuum learning. *CoRR*, abs/1706.08840, 2017. URL http://arxiv.org/abs/1706.08840.

Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks? *CoRR*, abs/2106.06134, 2021. URL https://arxiv.org/abs/2106.06134.

Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning, 2018.

Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, and Charles E. Leiserson. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. *CoRR*, abs/1902.10191, 2019. URL http://arxiv.org/abs/1902.10191.

German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. *CoRR*, abs/1802.07569, 2018. URL http://arxiv.org/abs/1802.07569.

Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael M. Bronstein. Temporal graph networks for deep learning on dynamic graphs. *CoRR*, abs/2006.10637, 2020. URL https://arxiv.org/abs/2006.10637.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. *Advances in neural information processing systems*, 30, 2017.

Binh Tang and David S. Matteson. Graph-based continual learning, 2021.

Gido M. van de Ven and Andreas S. Tolias.
Three scenarios for continual learning. *CoRR*, abs/1904.07734, 2019. URL http://arxiv.org/abs/1904.07734.

Junshan Wang, Guojie Song, Yi Wu, and Liang Wang. Streaming graph neural networks via continual learning. *CoRR*, abs/2009.10951, 2020a. URL https://arxiv.org/abs/2009.10951.

Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application, 2023a.

Tianchun Wang, Dongsheng Luo, Wei Cheng, Haifeng Chen, and Xiang Zhang. Dyexplainer: Explainable dynamic graph neural networks, 2023b.

Yigong Wang, Zhuoyi Wang, Yu Lin, Latifur Khan, and Dingcheng Li. Cifdm: Continual and interactive feature distillation for multi-label stream learning. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR '21, pp. 2121–2125, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380379. doi: 10.1145/3404835.3463096. URL https://doi.org/10.1145/3404835.3463096.

Zhen Wang, Liu Liu, and Dacheng Tao. Deep streaming label learning. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 9963–9972. PMLR, 13–18 Jul 2020b. URL https://proceedings.mlr.press/v119/wang20n.html.

Tong Wei, Jiang-Xin Shi, and Yu-Feng Li. Probabilistic label tree for streaming multi-label learning. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, KDD '21, pp. 1801–1811, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467226. URL https://doi.org/10.1145/3447548.3467226.

Dongkuan Xu, Wei Cheng, Dongsheng Luo, Xiao Liu, and Xiang Zhang. Spatio-temporal attentive rnn for node classification in temporal attributed graphs. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence*, IJCAI-19, pp. 3947–3953. International Joint Conferences on Artificial Intelligence Organization, 7 2019. doi: 10.24963/ijcai.2019/548. URL https://doi.org/10.24963/ijcai.2019/548.

Yishi Xu, Yingxue Zhang, Wei Guo, Huifeng Guo, Ruiming Tang, and Mark Coates. Graphsail: Graph structure aware incremental learning for recommender systems. *CoRR*, abs/2008.13517, 2020. URL https://arxiv.org/abs/2008.13517.

Wenchao Yu, Wei Cheng, Charu C Aggarwal, Haifeng Chen, and Wei Wang. Link prediction with spatial and temporal consistency in dynamic networks. In *Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence*, IJCAI-17, pp. 3343–3349, 2017. doi: 10.24963/ijcai.2017/467. URL https://doi.org/10.24963/ijcai.2017/467.

Wenchao Yu, Wei Cheng, Charu C. Aggarwal, Kai Zhang, Haifeng Chen, and Wei Wang. Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, KDD '18, pp. 2672–2681, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450355520. doi: 10.1145/3219819.3220024. URL https://doi.org/10.1145/3219819.3220024.

Qiao Yuan, Sheng-Uei Guan, Pin Ni, Tianlun Luo, Ka Lok Man, Prudence Wong, and Victor Chang. Continual graph learning: A survey, 2023.

Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna.
Graphsaint: Graph sampling based inductive learning method. *CoRR*, abs/1907.04931, 2019. URL http://arxiv.org/abs/1907.04931.

Xikun Zhang, Dongjin Song, and Dacheng Tao. Cglb: Benchmark tasks for continual graph learning. In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2022.

Tianqi Zhao, Thi Ngan Dong, Alan Hanjalic, and Megha Khosla. Multi-label node classification on graph-structured data. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=EZhkV2BjDP.

Fan Zhou and Chengtai Cao. Overcoming catastrophic forgetting in graph neural networks with experience replay. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(5):4714–4722, May 2021. doi: 10.1609/aaai.v35i5.16602. URL https://ojs.aaai.org/index.php/AAAI/article/view/16602.

## A Appendix

**Organization.** We analyze the characteristics of the subgraphs generated by AGALE and compare them with the full graph given in the dataset in Section A.1. Furthermore, we also apply AGALE to the single-label graph CoraFull and summarize and analyze the results in Section A.2 to further demonstrate the generalization of AGALE to single-label scenarios. Additionally, we provide a detailed time and space complexity analysis in Section A.3, where we also measure and summarize the run time of the conducted experiments. Last but not least, we provide visualizations of the performance matrices using heatmaps in Section A.4.

## A.1 Data Analysis Of The Subgraphs

In this section, we present an analysis of the subgraphs derived by our evaluation framework from the static graph in PCG, showcasing the efficacy of our approach. Figure 8 illustrates the degree distribution of nodes within the seven subgraphs generated from the PCG dataset. The nodes in the subgraphs follow a degree distribution similar to that of the nodes in the original static graph.

Figure 8: The node degree distribution in the seven subgraphs generated from PCG.

## A.2 Application Of Our Evaluation Framework On Single-Label Graphs

In this section, we provide an example of applying our evaluation framework to single-label graphs. Here, we use CoraFull as an example. We summarize the characteristics of CoraFull in Table 5. As shown in the table, CoraFull has 70 classes, which are divided into 35 tasks in 3 random orders. In Tables 6 and 7, we summarize the performance of LwF and ERGNN on CoraFull in the Task-IL setting and Class-IL setting, and use the line plots in Figure 9 to visualize the learning curves of the chosen models in the two settings.

Table 5: The data statistics. Specifically, |V|, |E|, |C|, |L|, and r_homo denote the number of nodes, edges, classes, mean label count per node, and label homophily of the static graph given by the dataset, respectively. |T| signifies the count of tasks in the resulting task sequence. Additionally, the second |V| and |E| columns represent the average number of nodes and edges in a subgraph. Further details on label homophily are captured through |r|_tsk and |r|_cls, representing the averaged label homophily of subgraphs in the Task-IL setting and Class-IL setting, respectively.
| Dataset | \|V\| | \|E\| | \|C\| | \|T\| | r_homo | \|V\| (avg) | \|E\| (avg) | \|r\|_tsk | \|r\|_cls |
|----------|-------|-------|-------|-------|--------|-------------|-------------|-----------|-----------|
| CoraFull | 19K | 130K | 70 | 35 | 0.57 | 566 | 1035 | 0.99 | 0.99 |

Table 6: Performance of the baseline models in the Task-IL setting. The performances are reported in Average Precision. "AP" stands for Average Precision (higher is better); "AF" indicates the average forgetting (higher is better).

| Task-IL setting | AP | AF |
|-----------------|--------------|---------------|
| LwF | 53.46 ± 0.12 | −9.53 ± 0.16 |
| ERGNN | 59.49 ± 0.20 | 4.37 ± 0.34 |

Table 7: Performance of the baseline models in the Class-IL setting. The performances are reported in Average Precision. "AP" stands for Average Precision (higher is better); "AF" indicates the average forgetting (higher is better).

| Class-IL setting | AP | AF |
|------------------|--------------|----------------|
| LwF | 5.42 ± 0.15 | −7.45 ± 0.14 |
| ERGNN | 40.39 ± 0.27 | −56.08 ± 0.25 |

## A.3 Time And Space Complexity Analysis

Here, we provide a theoretical time and space complexity analysis of the models used in this work.

**Complexity analysis for the base model.** As the base model used by all compared methods is GCN (Kipf & Welling, 2016), we first analyze its complexity. To keep the notation simple, let us assume that the feature dimension (including the input features) in all layers is equal to d. Let n, m denote the number of nodes and edges, respectively, in the input graph at any time point. For the sake of brevity in the presentation, we assume that the number of nodes and edges stay the same for all time points. For GCN, at each layer, the operations include feature transformation, neighborhood aggregation, and activation. The feature transformation over two layers leads to the multiplication of matrices of sizes (i) n × d and d × d, and (ii) d × d and d × d, which leads to a total time complexity of O(nd^2). The neighborhood aggregation requires a multiplication between matrices of size n × n and n × d, yielding O(n^2 d). In practice, we compute this using a sparse operator, such as the PyTorch scatter function, applied for each entry (i, j) of the adjacency matrix corresponding to an edge e ∈ E, which yields a total cost of O(md). Finally, the activation is an element-wise function with time complexity O(n). Overall, the time complexity of an L-layer GCN is O(nd^2 L + mdL + nL).

For computing the space requirements of GCN, we include (i) the space required for the input adjacency matrix of size n × n, (ii) the feature matrix of size n × d, and (iii) the model itself with d^2 + d parameters for the weight and bias in each layer. In total, the space complexity of GCN is O(n^2 + nd + L(d^2 + d)). As all methods mentioned in this work either use GCN as the backbone model or are built upon GCN, we denote in the following discussion and in Table 8 the time and space requirements of GCN as T_GCN and S_GCN, respectively.

Figure 9: Our framework on single-label datasets. (a) LwF in the Task-IL setting and Class-IL setting on CoraFull. (b) ERGNN in the Task-IL setting and Class-IL setting on CoraFull.
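To make these per-layer costs concrete, the following sketch annotates one GCN layer with the terms derived above. This is our own illustrative code, not from the paper; the function name, the SciPy sparse adjacency format, and the shapes are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(A, H, W):
    """One GCN layer, annotated with the costs analyzed above.

    A: sparse (normalized) adjacency matrix, n x n with m nonzeros.
    H: node feature matrix, n x d.  W: weight matrix, d x d.
    """
    H = H @ W                  # feature transformation: O(n d^2)
    H = A @ H                  # sparse neighborhood aggregation: O(m d)
    return np.maximum(H, 0.0)  # element-wise activation: O(n)

# Tiny usage example with random data.
n, d = 5, 3
A = sp.random(n, n, density=0.4, format="csr")
H = gcn_layer(A, np.random.rand(n, d), np.random.rand(d, d))
```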
Complexity analysis of **SimpleGCN.** SimpleGCN trains a GCN for each time step t ∈ T. Because it does not apply any continual learning techniques to remember knowledge from previous time steps, its time complexity is O(|T| T_GCN) and its space complexity is S_GCN.

Complexity analysis of **LwF.** LwF uses GCN as the backbone model; the GCN is trained at each time step t ∈ T for the new task, which gives a time complexity of O(|T| T_GCN). To perform the knowledge distillation, the previous model also calls GCN forward passes, with time complexity O(|T| T_GCN). Overall, the time complexity is O(2|T| T_GCN). The space consumption consists of loading the current GCN model and the previous GCN model for the knowledge distillation, with space complexity O(2 S_GCN).

Complexity analysis of **EWC.** EWC calls forward passes of GCN at each time step, and for each parameter at each time step t ∈ T, the values on the diagonal of the Fisher matrix are approximated using the value of each model parameter itself and its gradient. This is an element-wise calculation, which gives a time complexity of O(|T| × M), where M indicates the number of parameters in the GCN, i.e., L(d^2 + d). Overall, the time complexity of EWC is the time complexity of GCN plus the calculation of the Fisher matrix, which is O(|T| × (T_GCN + M)). The space requirement of EWC consists of the space requirement of GCN plus the matrix storing the gradients of the parameters at each time step, of size O(|T| × M). In total, the space complexity of EWC is O(S_GCN + |T| M).

Complexity analysis of **MAS.** Similarly, MAS uses GCN as the backbone model; at each time step t ∈ T, there are forward passes of GCN and the calculation of the Fisher matrix for the parameters, which gives an overall time complexity of O(|T|(T_GCN + M)). The space requirement of MAS consists of the space complexity of GCN and one matrix for the gradients of the parameters, of size O(M), which yields in total O(S_GCN + M).

Complexity analysis of **EvolveGCN.** EvolveGCN is a method from the category of dynamic graph neural networks. It trains a new model at each time step with time complexity equal to that of GCN, i.e., T_GCN, and updates the model parameters using a recurrent neural network that takes the corresponding parameters from the previous time step as input, which has a time complexity of O(|T| nd). Overall, EvolveGCN is more expensive than the other continual learning methods, with a time complexity of O(T_GCN + |T| nd). The space requirement of EvolveGCN consists of the space requirement of the GCN plus the recurrent unit for the reset, update, and new gates, with 3(d^2 + d) parameters. In our implementation, we use one recurrent layer for each layer in GCN. Thus the overall space complexity is O(2 S_GCN + 3M).

Complexity analysis of **ERGNN.** ERGNN is a replay-based method. At each time step t ∈ T, it retrains the GCN on the current graph and the graph formed by the buffer nodes, and the sampling process goes through the nodes in the new graph, which gives a time complexity of O(2|T| T_GCN + n). The space requirement of ERGNN consists of the buffer of size |B| and the space complexity of GCN. In total, this yields a space complexity of O(S_GCN + |B|).

Complexity analysis of **JointTrainGCN.** JointTrainGCN has the same time and space complexity as the base GCN model. It uses the whole static graph as input and is trained only once, without the task sequence.

The above time and space complexity analyses are summarized in Table 8.

Table 8: The simplified time and space complexity analysis. T_GCN and S_GCN correspond to the time and space requirements of the base GCN model.
| Model | Time Complexity | Space Complexity |
|---------------|-----------------------|--------------------|
| SimpleGCN | O(\|T\| T_GCN) | S_GCN |
| LwF | O(2\|T\| T_GCN) | O(2 S_GCN) |
| EWC | O(\|T\| × (T_GCN + M)) | O(S_GCN + \|T\| M) |
| MAS | O(\|T\|(T_GCN + M)) | O(S_GCN + M) |
| EvolveGCN | O(T_GCN + \|T\| nd) | O(2 S_GCN + 3M) |
| ERGNN | O(2\|T\| T_GCN + n) | O(S_GCN + \|B\|) |
| JointTrainGCN | T_GCN | S_GCN |

Besides the theoretical analysis, we also measured the run time of the experiments in this work. The results are summarized in Table 9. Note that the measured running times can be biased due to different splits and how resources are distributed on the machine; the theoretical analysis may provide more insight into the time complexity.

Table 9: The computation time of the experiments from Section 6, in seconds. The computation time is measured with one random split for each dataset.

| Model | PCG (Task-IL) | DBLP (Task-IL) | Yelp (Task-IL) | PCG (Class-IL) | DBLP (Class-IL) | Yelp (Class-IL) |
|---------------|--------|---------|-----------|--------|---------|-----------|
| SimpleGCN | 43.47 | 801.85 | 77163.32 | 88.31 | 886.79 | 104869.58 |
| LwF | 49.17 | 1193.40 | 142468.01 | 111.23 | 732.76 | 264657.47 |
| EWC | 51.32 | 939.79 | 79804.48 | 135.44 | 790.73 | 200649.31 |
| MAS | 148.19 | 1169.50 | 75255.31 | 135.93 | 1230.88 | 145917.67 |
| EvolveGCN | 40.34 | 427.99 | 120580.88 | 94.94 | 497.80 | 310540.41 |
| ERGNN | 47.27 | 536.20 | 416090.74 | 172.05 | 131.57 | 167624.39 |
| JointTrainGCN | 166.82 | 796.19 | 80827.72 | 166.82 | 796.19 | 80827.72 |

## A.4 Visualization Of The Performance Matrix

In this section, we provide visualizations of the performance matrices using heatmaps on the three multi-label datasets. In a heatmap, each cell corresponds to a unique entry in M, and its position in the heatmap mirrors its position in the matrix. We use a color gradient to indicate performance, where the color intensity indicates the magnitude of the value. In Figures 10, 11, and 12, we show the heatmaps corresponding to the performance matrices of the baseline models in the Task-IL setting on the datasets PCG, DBLP, and Yelp, respectively, while in Figures 13, 14, and 15, we show the heatmaps corresponding to the performance matrices of the baseline models in the Class-IL setting on the datasets PCG, DBLP, and Yelp, respectively.

Figure 10: Visualization of the performance matrix of the methods in the Task-IL setting on dataset PCG

Figure 11: Visualization of the performance matrix of the methods in the Task-IL setting on dataset DBLP

Figure 12: Visualization of the performance matrix of the methods in the Task-IL setting on dataset Yelp

Figure 13: Visualization of the performance matrix of the methods in the Class-IL setting on dataset PCG

Figure 14: Visualization of the performance matrix of the methods in the Class-IL setting on dataset DBLP

Figure 15: Visualization of the performance matrix of the methods in the Class-IL setting on dataset Yelp
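A heatmap of this kind takes only a few lines of matplotlib to produce; the sketch below is hypothetical (the matrix values are random placeholders, not our results, and the axis labels are our own naming of the matrix indices).

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder performance matrix M: entry (i, j) holds the performance on
# task j after training up to task i (random values for illustration only).
rng = np.random.default_rng(0)
M = np.tril(rng.uniform(0.2, 0.9, size=(7, 7)))  # e.g., 7 tasks as in PCG

fig, ax = plt.subplots()
im = ax.imshow(M, cmap="viridis")  # color intensity encodes the magnitude
ax.set_xlabel("Evaluated task")
ax.set_ylabel("Trained up to task")
fig.colorbar(im, ax=ax, label="Average Precision")
plt.show()
```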
Review 1:

Summary: The paper suggests new benchmarks for graph continual learning. The paper lists different continual graph settings and shows which of the current evaluation methods are more suited for which settings -- highlighting shortcomings in all of them. Then, it offers a graph-aware evaluation framework. Afterward, the paper conducts an empirical study, comparing different algorithms on different datasets using the suggested evaluation method. The paper shows new insights gained on these datasets by using AGALE.

Strengths and Weaknesses:

Strengths:
- The method is simple and easy to understand and follow. The intuition behind it is strong, and the problems it raises in the existing literature are important.
- The empirical section is very methodical, checking a multitude of datasets and algorithms, suggesting new insights using AGALE.
- Although simplistic, the suggested method is novel to the best of my knowledge.

Weaknesses:
- The writing could be significantly more concise. Many points repeat themselves and are overly wordy. For example, the actual new ideas of the paper are introduced first on page 6, which is problematic. Section 2 contains many notations from previous art that are never used later on. Section 4 repeats many of the points made in Section 1 (specifically, 4.4 is very repetitive). Many subsections could have been shortened to just a few sentences.
- The significance of the paper is marginal -- it suggests an evaluation for a very specific case: graph data + continual learning + multi-label. While I understand that this cannot be addressed by the authors, it is still a weakness of the work.

Requested Changes: The major problem in my opinion is the writing. Once the paper is made more concise, I'm leaning toward acceptance, as its strengths outweigh its weaknesses.

Broader Impact Concerns: Not applicable

==================================================

Review 2:

Summary: This paper points out the limitation of existing graph continual learning evaluation methods in that they focus mainly on single-label scenarios where each node has at most one associated label. To tackle this, the authors study more challenging and realistic scenarios where nodes in the graph can have multiple labels (with the two different settings of task-incremental and class-incremental) and then propose simple yet effective graph partitioning algorithms to validate existing methods in those two continual learning settings. The authors perform extensive experiments and analyses with existing (graph) continual learning approaches, showcasing their advantages and limitations in the proposed challenging continual learning scenarios with multi-label nodes.

Strengths and Weaknesses:

### Strengths

* This work tackles very important and challenging scenarios of graph continual learning where nodes in the graph can have multiple labels.
* The proposed graph partitioning algorithms have advantages over previous ones for validating the methods in graph continual learning settings.
* The authors perform extensive experiments with multiple analyses, faithfully showing the efficacy of the proposed graph continual learning evaluation framework.
* This paper is well-written and very easy to follow.

### Weaknesses

* The claim that the proposed graph partitioning algorithms can minimize the loss of the graph topological structure is not very convincing.
While they can ensure fair splits across different classes, they do not ensure that the subgraphs after the splits maintain (or preserve) the original graph topology.
* In realistic scenarios, some labels of certain nodes may be removed in the future (e.g., users in a social network no longer show interest in certain topics and unfollow them), and considering (or at least discussing) this scenario along with the proposed evaluation framework would be worthwhile.

Requested Changes: Please see the weaknesses above.

Broader Impact Concerns: I don't see any concerns about the ethical implications of this work.

==================================================

Review 3:

Summary: The paper introduces the AGALE framework, which aims to address the challenges of evaluating continual graph learning in both single-labeled and multi-labeled scenarios. It fills gaps in the current literature by defining two generalized incremental settings for multi-label node classification tasks and developing new data split algorithms for curating CGL datasets. The authors conducted extensive experiments to evaluate and compare the performance of methods from continual learning, dynamic graph learning, and continual graph learning. Furthermore, the paper presents performance comparisons of baseline models in the Task-IL setting, reporting Average Precision and average forgetting metrics. It also includes visual representations, such as the node degree distribution in subgraphs generated from the PCG dataset. In conclusion, the AGALE framework contributes to the continual graph learning literature by providing a comprehensive evaluation framework that considers the challenges posed by multi-labeled scenarios. The paper's theoretical and empirical analyses shed light on the differences between single-label and multi-label cases, paving the way for the development of more effective models in continual graph learning. Overall, the paper offers valuable insights and important contributions to the field of continual graph learning, and its findings are likely to inspire further research in this domain.

Strengths and Weaknesses:

Pros:
- The paper addresses the challenges of evaluating continual graph learning in both single-labeled and multi-labeled scenarios, filling gaps in the current literature.
- The AGALE framework defines two generalized incremental settings for multi-label node classification tasks and develops new data split algorithms for curating CGL datasets.
- The paper presents extensive experiments comparing methods from continual learning, dynamic graph learning, and continual graph learning, providing valuable insights into the performance of these methods in different scenarios.

Cons:
- The paper does not provide a detailed comparison of the AGALE framework with other existing evaluation frameworks for continual graph learning.
- The literature review is not sufficient. For example, some representative dynamic GNNs are not included in the discussion of related work, such as:
DyExplainer: Explainable Dynamic Graph Neural Networks. WSDM'24.
NetWalk: A Flexible Deep Embedding Approach for Anomaly Detection in Dynamic Networks. KDD'18.
Link Prediction with Spatial and Temporal Consistency in Dynamic Networks. IJCAI'17.
Spatio-Temporal Attentive RNN for Node Classification in Temporal Attributed Graphs. IJCAI'19.
- A time complexity analysis should be included, and experiments on it would also strengthen the evaluation.
Requested Changes:
- Provide a more thorough literature review.
- Provide a time complexity analysis.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper wants to improve and standardize evaluation of continual graph learning. The paper lists different continual graph settings and offers a graph-aware evaluation framework. The reviewers were leaning positive but had some concerns about writing and related work. In the revised version, post rebuttals, most of these concerns were resolved, making the paper ready for publication.

==================================================
# Overcoming Order In Autoregressive Graph Generation For Molecule Generation

Edo Cohen-Karlik *edocohen@mail.tau.ac.il* Verily Research; School of Computer Science, Tel Aviv University

Eyal Rozenberg *eyalrozenberg@google.com* Verily Research

Daniel Freedman *danielfreedman@google.com* Verily Research

Reviewed on OpenReview: *https://openreview.net/forum?id=BK6Gc10tRy*

## Abstract

Graph generation is a fundamental problem in various domains, and is of particular interest in chemistry, where graphs may be used to represent molecules. Recent work has shown that molecular graph generation using recurrent neural networks (RNNs) is advantageous compared to traditional generative approaches which require converting continuous latent representations into graphs. One issue which arises when treating graph generation as sequential generation is the arbitrary order of the sequence which results from a particular choice of graph flattening method: in the chemistry setting, molecular graphs commonly have multiple SMILES strings corresponding to the same molecule. Inspired by the use case of molecular graph generation, we propose using RNNs, taking into account the non-sequential nature of graphs by adding an Orderless Regularization (OLR) term that encourages the hidden state of the recurrent model to be invariant to different valid orderings present under the training distribution. We demonstrate that sequential molecular graph generation models benefit from our proposed regularization scheme, especially when data is scarce. Our findings contribute to the growing body of research on graph generation and provide a valuable tool for various applications requiring the synthesis of realistic and diverse graph structures.

## 1 Introduction

Graphs are powerful representations of complex relationships and structures. While graphs have many application domains, we are particularly interested in their use in chemistry, in which setting graphs may be used to represent molecules. A dedicated class of architectures, Graph Neural Networks (GNNs), has been developed to handle the specific properties of graphs. Graphs are naturally versatile objects, but such versatility comes at the cost of a lack of structure and no naturally induced order. Most GNN architectures therefore operate by applying a neural architecture at the node level followed by an aggregation step which takes into account the local neighborhood structure of the graph. By stacking multiple such layers, a GNN is able to perform node-level or graph-level tasks that take into account the entire structure of the graph.

The ability to generate realistic and structured graphs is essential for various applications ranging from drug design (De Cao & Kipf, 2018; Du et al., 2022b; Honda et al., 2019; Jin et al., 2018; Madhawa et al., 2019; Shi et al., 2020; You et al., 2018a; Zang & Wang, 2020) to program synthesis (Brockschmidt et al., 2018; Hindle et al., 2016; Chen et al., 2021a; Allamanis et al., 2017; Hellendoorn et al., 2019; Yin & Neubig, 2017; Bielik et al., 2016; Dai et al., 2018). In recent years a wide variety of generative models have been developed, including generative adversarial networks (GANs), variational autoencoders (VAEs), normalizing flows, and diffusion models. These algorithms devise different strategies to learn continuous mappings from a latent distribution to a space of realistic examples.
Unfortunately, graphs do not admit a natural representation in a continuous space; consequently, the discrete and unordered nature of graphs makes them less amenable to the methods mentioned above for the task of graph generation. A different type of generative model relies on autoregressive architectures which enable processing a sequence and generating the next element; for example, these architectures are commonly used for large language models. Generally, autoregressive models are applicable when the generated objects admit a sequential order.

In this work we focus on sequential generation of graphs using autoregressive neural architectures. A strong motivating factor for choosing autoregressive architectures is that we are particularly interested in molecular graph generation; and in this context Flam-Shepherd et al. (2022) have shown that sequential generation is favorable compared to other approaches. The specific representation we consider is depth-first search (DFS) trajectories of graphs. The reasons for this choice of representation are twofold: (a) DFS is a natural way of flattening graphs into sequences; (b) in the chemistry community DFS is used to convert molecules into strings. However, an issue arises when converting graphs into sequences: there are many DFS trajectories for a given graph. Indeed, for many graph flattening methods, there is an arbitrariness in the order of the sequence which results (Vinyals et al., 2015; Chen et al., 2021b). In order to alleviate the dependency on a specific trajectory, we add a regularization term dubbed Orderless Regularization (OLR) which ensures the learnt model is invariant to different DFS orderings of the same graph. For the sake of training with OLR, one needs to generate different DFS trajectories with a common end-vertex, which is known to be hard (Beisegel et al., 2019). While our motivation originates from the use-case of drug discovery and molecular graph generation, we provide a general formalism of the notion of graph-level invariance and devise an efficient algorithm to generate common end-vertex trajectories under certain constraints. Finally, we demonstrate empirically that our regularization term is beneficial when the amount of training data is limited by considering the use case of small molecule generation.

The remainder of the paper is structured as follows: in Section 2 we provide background and introduce the concepts, definitions, and notations used throughout the paper. Section 3 goes into the details of OLR over DFS trajectories. Section 4 is devoted to related work. In Section 5 we provide empirical evidence for the effectiveness of OLR. Section 6 provides a discussion, and Section 7 provides concluding remarks.

## 2 Background

In this section we formally define the problem of graph generation, along with the notations and definitions necessary to present our proposed method. We denote matrices by bold uppercase letters, **M** ∈ R^{n×m}, vectors by bold lowercase letters, **v**, and the i-th entry of **v** by v_i. We proceed with a general formulation of recurrent models.

## 2.1 Recurrent Models

Let X, H, and Y be the spaces of inputs, hidden states, and outputs, respectively. Given an input sequence **x** = (**x**_1, ..., **x**_n) ∈ X^n, a recurrent model consists of two functions, the state update function f_u : H × X → H,

$$\mathbf{h}_{t+1} = f_u(\mathbf{h}_t, \mathbf{x}_t), \tag{2.1}$$

and the output function f_o : H × X → Y,

$$\mathbf{y}_t = f_o(\mathbf{h}_t, \mathbf{x}_t), \tag{2.2}$$

where **h**_0 ∈ H.
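As a concrete illustration of this two-function interface, the following sketch unrolls a recurrent model over a sequence. The code is our own (hypothetical helper names and shapes), not part of the paper.

```python
import numpy as np

def unroll(f_u, f_o, h0, xs):
    """Run a recurrent model over a sequence, per Equations 2.1 and 2.2.

    f_u: state update function, (h_t, x_t) -> h_{t+1}.
    f_o: output function, (h_t, x_t) -> y_t.
    Returns the final hidden state and the list of per-step outputs.
    """
    h, ys = h0, []
    for x in xs:
        ys.append(f_o(h, x))  # y_t = f_o(h_t, x_t)
        h = f_u(h, x)         # h_{t+1} = f_u(h_t, x_t)
    return h, ys

# Hypothetical instantiation with random weights (a simple RNN).
rng = np.random.default_rng(0)
d_h, d_x, d_y = 4, 3, 2
A, B = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_x))
C, D = rng.normal(size=(d_y, d_h)), rng.normal(size=(d_y, d_x))
f_u = lambda h, x: np.tanh(A @ h + B @ x)
f_o = lambda h, x: np.tanh(C @ h + D @ x)
h_final, outputs = unroll(f_u, f_o, np.zeros(d_h), rng.normal(size=(5, d_x)))
```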
We overload the notation and denote the hidden state and output of a recurrent model over a sequence **x** as f_u(**x**) and f_o(**x**), respectively. The formulation of recurrent models presented here is broad and able to capture RNNs as well as more complex architectures such as Gated Recurrent Units (GRUs) (Chung et al., 2014) and Long Short-Term Memory networks (LSTMs) (Hochreiter & Schmidhuber, 1997). For example, in the case of a very simple, vanilla RNN,¹

$$\mathbf{h}_{t+1} = \sigma_h(\mathbf{A}\mathbf{h}_t + \mathbf{B}\mathbf{x}_t) \tag{2.3}$$

and

$$\mathbf{y}_t = \sigma_y(\mathbf{C}\mathbf{h}_t + \mathbf{D}\mathbf{x}_t), \tag{2.4}$$

where **A**, **B**, **C**, **D** are matrices with the appropriate dimensions and σ_h, σ_y are standard non-linearities such as sigmoid or tanh.

¹ Note that the bias term may be encapsulated into the input processing matrices by expanding the input with an additional dimension and assigning a fixed value of 1 on that coordinate.

## 2.2 Graph Generation

A graph is given by G = (V, E), where V is a set of nodes (or vertices) and E ⊆ V × V is a set of tuples denoting the nodes connected by an edge in the graph. Additionally, for each v ∈ V, denote by **x**_v ∈ R^m the features of node v. Similarly, **e**_{uv} ∈ R^k denotes the features of the edge (u, v) ∈ E. For example, in a molecular graph, nodes are atoms, and their features will contain the element; and edges correspond to bonds, and their features will contain the bond types (single, double, etc.).² Another example is social networks, where nodes correspond to users and their features to user profiles; and edges correspond to connections between users, with features containing metadata on the connection.

² Molecular node and edge features may contain other properties as well.

The topic of designing neural networks to operate specifically on graphs is dominated by Graph Neural Networks (GNNs), which mostly rely on a message-passing scheme to propagate information between nodes. While these architectures are extremely successful in node-level and graph-level prediction, they are not as prevalent in the context of graph generation, and many such approaches are restricted to small graphs (though (Davies et al., 2023) is a notable exception). Formally, the task of graph generation is usually concerned with learning to model distributions: concretely, given a set of N graphs {G_i}_{i=1}^{N} originating from an underlying distribution p, the goal of graph generation is to devise an algorithm that generates new graphs from the underlying distribution p. Prior work has mostly adapted successful generative methods over a continuous space to the domain of graphs (Gómez-Bombarelli et al., 2018; Blaschke et al., 2018; Kadurin et al., 2017; Prykhodko et al., 2019). In this work we focus on using recurrent models, which can be employed naturally to generate discrete objects. Crucially, Flam-Shepherd et al. (2022) have shown that sequential generation is favorable compared to competing approaches in the context of molecular graph generation.

## 2.3 Sequential Graph Generation

When applying recurrent models for graph generation, the graph first needs to be "flattened" into a sequence. As there is no natural order for a graph, one must artificially induce such an order; for example, the approach taken by You et al. (2018b) considers generation of breadth-first search (BFS) trajectories. While there are many ways to convert a graph into a sequence, in this work we focus on depth-first search (DFS); a strong motivation for this choice is that this is the method used to convert graph molecules into a linear representation called SMILES strings (Weininger, 1988).
By convention, the output of the DFS algorithm is a spanning tree, and we consider the induced order of the graph to be the order in which the vertices were visited during the DFS run (also known as pre-order traversal). In what follows we formally define the concepts discussed.

Definition 2.1. *Given a connected graph G = (V, E) with |V| = n, we say the permutation π ∈ S_n is a **valid ordering** of G if it is possible to run DFS over G and visit the vertices in the order induced by π.* Denote the sequence corresponding to a valid ordering π of G by

$$\mathbf{s}(G, \pi) = (v_{\pi(1)}, \ldots, v_{\pi(n)}). \tag{2.5}$$

Denote the set of all such sequences for a given graph G by

$$\mathcal{S}(G) = \{\mathbf{s}(G, \pi) : \pi \text{ is a valid ordering of } G\}. \tag{2.6}$$

Clearly, for a non-trivial graph S(G) will contain many sequences. In this work we have a special interest in sequences that share the same end vertex.

Definition 2.2. *Let S(G, v) denote all sequences terminating at node v ∈ V, formally,*

$$\mathcal{S}(G, v) = \{\mathbf{s} \in \mathcal{S}(G) : s_n = v\}. \tag{2.7}$$

*In case there are no valid DFS trajectories terminating at a certain node v, S(G, v) = ∅ by definition.*

In the following section we discuss the desired properties of recurrent models when used for graph generation.

## 3 Structure Agnostic Recurrent Models

Recurrent models are a natural choice when generating discrete objects such as text. On the other hand, graphs are discrete objects with no naturally induced order. In Section 2 we described a mapping between graphs and sequences, and in particular the fact that many different sequences correspond to the same graph. In this section we present our method that overcomes the issues described.

## 3.1 Generating Depth-First Search Traversals

In this work we use recurrent models to generate DFS traversals of graphs. Clearly, when generating a DFS traversal, the next node to be generated depends on the nodes generated thus far, and in particular on the last generated node. An important observation is that the output of the recurrent model should be invariant to different valid orderings corresponding to the same subgraph, as long as they lead to the same node. The following definition formalizes this notion.

Definition 3.1. *We say a recurrent model is **structure invariant** with respect to a connected graph G if*

$$\forall v \in V, \; \forall \mathbf{s}_1, \mathbf{s}_2 \in \mathcal{S}(G, v) \text{ it is the case that } f_o(\mathbf{s}_1) = f_o(\mathbf{s}_2). \tag{3.1}$$

*If the above condition is satisfied for all G ∼ D, we say that the recurrent model is structure invariant with respect to a distribution D.*

Figure 1 depicts a graph and two different DFS traversals sharing the same root and terminal node. A recurrent model processing the two DFS traversals will ideally generate the same node to be attached to node D. Definition 3.1 describes the structure invariance property with respect to a graph. Since recurrent models generate the traversal sequentially, we would like this property to hold at any moment during generation, i.e., we want to modify Definition 3.1 to take into account partial DFS traversals.
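To ground Definitions 2.1 and 2.2 (and the two traversals of Figure 1), the sketch below samples valid orderings by randomizing the DFS root and the neighbor visit order. The helper and the adjacency-list format are our own assumptions; orderings that share a last vertex v are elements of the same S(G, v).

```python
import random

def random_valid_ordering(adj, seed=None):
    """Sample a valid ordering s(G, pi) of a connected graph (Definition 2.1).

    adj: dict mapping each vertex to a list of its neighbors.
    Returns the vertices of G in DFS pre-order.
    """
    rng = random.Random(seed)
    root = rng.choice(sorted(adj))
    order, visited = [], set()

    def dfs(v):
        visited.add(v)
        order.append(v)
        neighbors = [u for u in adj[v] if u not in visited]
        rng.shuffle(neighbors)
        for u in neighbors:
            if u not in visited:  # may have been visited by a sibling call
                dfs(u)

    dfs(root)
    return order

# The graph of Figure 1: edges A-B, A-C, A-D, B-E, B-F.
adj = {"A": ["B", "C", "D"], "B": ["A", "E", "F"],
       "C": ["A"], "D": ["A"], "E": ["B"], "F": ["B"]}
print(random_valid_ordering(adj, seed=0))  # one element of S(G)
```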
Definition 3.2. *For a connected graph G, we say a connected subgraph G̃ ⊆ G is **induced by DFS over** G if there exists a valid ordering π ∈ S_n of G and k ≤ n such that (v_{π(1)}, ..., v_{π(k)}) is a valid ordering of G̃.* Denote the set of all DFS induced subgraphs over G by G_DFS(G).

At this stage, a reader might question the necessity of Definition 3.2 and ask why G_DFS(G) differs from the set of all connected subgraphs of G. We note that for a general connected graph, G_DFS(G) does not correspond to the set of all connected subgraphs.

Proposition 3.3. *For a connected graph G,*

$$\mathcal{G}_{DFS}(G) \neq \left\{ \tilde{G} \mid \tilde{G} \subseteq G \text{ and } \tilde{G} \text{ is connected} \right\}. \tag{3.2}$$

Figure 2 depicts a graph and two connected subgraphs, one which is induced by DFS and the other which cannot be obtained by a DFS traversal.

Figure 1: Illustration of two DFS traversals of the same graph starting from node A and terminating at node D; blue lines denote traversal order. (Left) Traversal resulting in the sequence A(BEF)(C)D. (Right) Traversal resulting in the sequence A(C)(BFE)D. The parentheses denote the opening and closing of branches when traversing the tree; with this syntax it is possible to reconstruct the tree from such sequences. Note that multiple sequences correspond to the same tree, a fact that lies at the heart of this work.

Figure 2: Illustration of the same graph with two connected subgraphs: (Left) subgraph which is not induced by DFS. (Right) subgraph induced by DFS; arrows depict a traversal resulting in the sequence BA(CF)D.

With the notion of DFS induced subgraphs in hand, we now present the following definition of *total structure invariance*:

Definition 3.4. *We say a recurrent model is **totally structure invariant** with respect to a connected graph G, if*

$$\forall \tilde{G} \in \mathcal{G}_{DFS}(G), \; \forall v \in \mathcal{V}(\tilde{G}), \; \forall \mathbf{s}_1, \mathbf{s}_2 \in \mathcal{S}(\tilde{G}, v) \text{ it is the case that } f_o(\mathbf{s}_1) = f_o(\mathbf{s}_2). \tag{3.3}$$

*If the above condition is satisfied for all G ∼ D, we say that the recurrent model is totally structure invariant with respect to a distribution D.*

Note that per Definition 2.2, the condition vacuously holds for non-terminal nodes. In the next section we discuss how to train recurrent models which are totally structure invariant with respect to a given training distribution over graphs.

## 3.2 Regularizing Towards Total Structure Invariance

Motivated by the observation discussed in Section 3.1, we propose training recurrent models that are totally structure invariant with respect to the underlying distribution over graphs. It would be appealing to characterize the class of all totally structure invariant functions and optimize over those. Unfortunately, it is difficult to attain a crisp characterization of structure invariance as this property depends on the training distribution. Instead, we propose encouraging total structure invariance via regularization. Specifically, we would like to minimize the following auxiliary loss,

$$\mathbb{E}_{G \sim \mathcal{D}} \, \mathbb{E}_{\tilde{G} \sim \mathcal{G}_{DFS}(G)} \, \mathbb{E}_{v \in \mathcal{V}(\tilde{G})} \, \mathbb{E}_{\mathbf{s}_1, \mathbf{s}_2 \in \mathcal{S}(\tilde{G}, v)} \left[ \left( f_o(\mathbf{s}_1) - f_o(\mathbf{s}_2) \right)^2 \right], \tag{3.4}$$

which we refer to as *Orderless Regularization* (OLR).
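In practice, the expectations can be estimated from sampled trajectory pairs. The sketch below is a minimal, hypothetical estimate of the OLR term, assuming the overloaded f_o notation of Section 2.1 (a function mapping a whole trajectory to the model's output vector) and pre-sampled pairs; the helper name and the training-loss combination are our own.

```python
import numpy as np

def olr_penalty(f_o, trajectory_pairs):
    """Monte-Carlo estimate of the OLR loss in Equation 3.4.

    trajectory_pairs: list of pairs (s1, s2), where s1 and s2 are two valid
    DFS trajectories of the same DFS-induced subgraph that terminate at the
    same vertex.
    """
    total = 0.0
    for s1, s2 in trajectory_pairs:
        diff = np.asarray(f_o(s1)) - np.asarray(f_o(s2))
        total += float(np.sum(diff ** 2))  # the two outputs should agree
    return total / max(len(trajectory_pairs), 1)

# Sketch of a combined objective, with lam a hypothetical coefficient:
# loss = task_loss + lam * olr_penalty(f_o, sampled_pairs)
```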
Examining Equation 3.4, we note that sampling from G_DFS(G) is easily done by randomly selecting a root node and running DFS with stochastic decision making. On the other hand, given G̃ and v, sampling from S(G̃, v) is hard and has been shown to be NP-complete (Beisegel et al., 2019).

## 3.3 Sampling Trajectories With Common End Vertex

The problem of generating all DFS trajectories that terminate at the same vertex is hard and there are no known efficient algorithms for this task. In order to overcome this obstacle we apply a heuristic for computing such trajectories. We highlight that our proposed scheme is not equivalent to a uniform sampling over all possible trajectories; however, in Section 5 we show that the resulting regularization scheme is effective empirically. Next, we formally show that for practical graphs there exist efficient algorithms to generate such trajectories.

Definition 3.5. *Let G = (V, E) be an arbitrary graph. G is said to be **k-edge-connected** if the subgraph G′ = (V, E∖Ẽ) is connected for all Ẽ ⊆ E such that |Ẽ| < k, and there exists Ẽ such that |Ẽ| = k and G′ is not connected.*

Proposition 3.6. *There is an efficient algorithm to find distinct DFS trajectories with a common end vertex for any k-edge-connected graph with k ≤ 2.*

We note that in many real-world tasks, graphs are rarely k-edge-connected for k > 2. For example, in the ZINC molecular dataset, more than 99.5% of molecular graphs are 1-edge-connected.

*Proof Sketch.* Find a min-cut: by the definition of 1-edge-connected graphs, the min-cut includes a single crossing edge (u, v). By removing (u, v) the graph is partitioned into two connected components, G_1 and G_2, containing u and v respectively. Run a DFS on G_1 with u as the root vertex to obtain (u_1, ..., u_k), and similarly for G_2 to obtain (v_1, ..., v_m) (where v_1 = v and u_1 = u).³ We can now construct a DFS traversal on G by 'gluing' together the sequences as

$$(v_1, u_1, \ldots, u_k, v_2, \ldots, v_m). \tag{3.5}$$

We can run another (stochastic) DFS on G_1 from u to obtain (u_{π̃(1)}, ..., u_{π̃(k)}), where π̃ ∈ S_k and π̃(1) = 1. We can construct a second DFS sequence as in Equation 3.5,

$$(v_1, u_{\tilde{\pi}(1)}, \ldots, u_{\tilde{\pi}(k)}, v_2, \ldots, v_m). \tag{3.6}$$

We have created two valid DFS sequences that both terminate at v_m. □

³ k and m denote the sizes of the two partitions and satisfy k + m = |V|.

See Appendix A for the full details and the case of 2-edge-connected graphs. Note that our method for generating distinct DFS trajectories is not exhaustive and there may be additional trajectories not detected via the algorithm induced by the proof sketch.
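The construction in the proof sketch is straightforward to implement. Below is a minimal sketch for the 1-edge-connected case, assuming a networkx graph; the helper names are our own, and resampling may be needed if the two stochastic traversals of G_1 happen to coincide.

```python
import random
import networkx as nx

def dfs_order(G, root, seed):
    """Stochastic DFS pre-order of G starting at root."""
    rng = random.Random(seed)
    order, visited = [], set()

    def dfs(v):
        visited.add(v)
        order.append(v)
        nbrs = [u for u in G.neighbors(v) if u not in visited]
        rng.shuffle(nbrs)
        for u in nbrs:
            if u not in visited:
                dfs(u)

    dfs(root)
    return order

def two_trajectories_same_end(G):
    """Two DFS trajectories of a 1-edge-connected G sharing their end vertex."""
    u, v = next(nx.bridges(G))        # a min-cut with a single crossing edge
    H = G.copy()
    H.remove_edge(u, v)
    G1 = H.subgraph(nx.node_connected_component(H, u))
    G2 = H.subgraph(nx.node_connected_component(H, v))
    us1 = dfs_order(G1, u, seed=0)    # (u_1, ..., u_k), u_1 = u
    us2 = dfs_order(G1, u, seed=1)    # a second traversal of G1 from u
    vs = dfs_order(G2, v, seed=0)     # (v_1, ..., v_m), v_1 = v
    # Glue as in Equations 3.5-3.6: both sequences terminate at v_m.
    return [vs[0]] + us1 + vs[1:], [vs[0]] + us2 + vs[1:]

G = nx.Graph([("A", "B"), ("B", "E"), ("B", "F"), ("A", "C"), ("A", "D")])
print(two_trajectories_same_end(G))
```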
## 4 Related Work

In this section we discuss several topics relevant to graph generation. For a comprehensive review on graph generation see (Guo & Zhao, 2022; Zhu et al., 2022).

**One Shot Generation** Classic generative architectures (e.g. Variational Autoencoders (VAEs) (Kingma & Welling, 2013), Generative Adversarial Networks (GANs) (Goodfellow et al., 2020), etc.) work by learning a continuous mapping from a latent distribution to generate new examples with similar properties to the training distribution. These models usually incorporate a neural architecture that maps directly from the latent space to the domain of the training data (e.g. images), and therefore the output space must be predetermined. These properties pose a challenge when applied to the domain of graphs, as the latter are discrete objects with variable size and no naturally induced order. In order to circumvent these caveats, prior work (Assouel et al., 2018; De Cao & Kipf, 2018; Du et al., 2022a; Fan & Huang, 2019; Flam-Shepherd et al., 2020; Guo et al., 2020; Honda et al., 2019; Ma et al., 2018; Madhawa et al., 2019; Shi et al., 2020; Simonovsky & Komodakis, 2018; Zou & Lerman, 2019) has used a one-shot generation strategy. That is, the output space is limited by design to a specific representation of graphs (i.e. adjacency matrix or adjacency list) of a specific size, and the output is generated in a single forward pass. While the one-shot strategy has its merits, there are a few significant drawbacks, such as the inability to generate graphs with an arbitrarily large number of nodes.

**Sequential Generation** The idea of using autoregressive models for graph generation is not new and there have been several works in this vein. GraphRNN (You et al., 2018b) proposes generating BFS trajectories in order to limit the number of possible orderings per graph. Other works take a different approach of generating edges in an autoregressive manner (Bacciu et al., 2020; Goyal et al., 2020). Additional approaches include MolecularRNN (Popova et al., 2019), which incorporates a reinforcement learning environment to generate nodes and edges sequentially. Yet another approach involves sequentially generating subgraph structures (Jin et al., 2018; Liao et al., 2019; Podda et al., 2020). Another recent work (Bu et al., 2023) treats the induced order as a problem of dimensionality reduction and attempts to learn mappings from graphs to sequences. In this work we argue that the most effective inductive bias for the use of autoregressive models to generate graphs is to be invariant to the different orderings possible under the training distribution.

**Molecule Generation** One of the most prominent uses of graph generation, which is used for evaluation in this work, is that of molecule generation. Molecular generation is applicable to the development of synthetic materials, drug development, and more. Molecules are 3D objects which are naturally represented as point clouds⁴ with corresponding geometric approaches (Garcia Satorras et al., 2021; Simm et al., 2020a;b; Hoogeboom et al., 2022) which utilize inherent symmetries in the architectures employed. While 3D representations are richer and carry significant information that does not transfer to 1D and 2D representations, they are costly to obtain, and therefore the corresponding amount of data is limited as compared to 1D and 2D representations, which are ubiquitous. Another aspect of molecule generation is when the generation is conditioned to satisfy certain properties. For example, (Skalic et al., 2019; Zhang et al., 2022; Rozenberg & Freedman, 2023) generate molecules that are conditioned to bind to specific ligand structures, and (Kang & Cho, 2018; Zang & Wang, 2020) generate molecules that fulfill certain chemical properties. In this work we consider the task of de-novo generation (Arús-Pous et al., 2019; Lim et al., 2018; Pogány et al., 2018; Tong et al., 2021), where the objective is to generate molecules with similar properties to those in the training data.

⁴ In a point-cloud representation of a molecule, each point represents an atom and bonds are implicit from the distances between atoms.

**Permutation Invariant Recurrent Models** Another relevant topic is the use of autoregressive models for problems over sets which, like graphs, lack a natural order.
There have been many works focusing on problems over sets. The most prominent of these is DeepSets (Zaheer et al., 2017), which applies a deep neural network on each element of the set and then aggregates the result with a permutation invariant operator (e.g. sum or max), finally applying another deep neural network on the aggregated result. There have also been autoregressive works designed for sets: Murphy et al. (2018) use RNNs on different permutations and output the average. While this requires n! orderings for a set of size n, the authors have presented several approximation techniques and justified them empirically. Cohen-Karlik et al. (2020) have shown that while DeepSets are universal, some permutation invariant functions require unbounded width to implement successfully, and have proposed using RNNs with a regularization term which enforces permutation invariance.

In this work we extend the concepts introduced in previous works into the realm of drug design and sequential graph generation, where a desired property of models is to hold invariance for certain permutations as induced by the data distribution. In the work of Cohen-Karlik et al. (2020) the regularization is geared towards fully permutation invariant models; that is, their work may be viewed as a specific case of graph-aware regularization where the data is represented by fully connected graphs. In this work we generalize these concepts and formalize the problem for graphs with more general structures. As a result, the straightforward regularization term proposed in (Cohen-Karlik et al., 2020) cannot be used, and a more sophisticated regularization scheme is required. Specifically, using the lens of DFS trajectories of graphs, we suggest regularizing over valid sequences and devise an efficient approximation for generating such sequences.

## 5 Experiments

## 5.1 Wiener Index

In order to gauge the effectiveness of OLR, we conducted a straightforward experiment designed to predict the Wiener index of graphs. The Wiener index is a topological metric for molecules, involving the summation of distances between all pairs of vertices within a given graph. These graphs are symbolically represented as strings, using parentheses as exemplified in Figure 1.⁵

⁵ As the Wiener index of a graph does not involve node features, we omit the node labeling from the representation, which yields a string of only opening and closing parentheses.

We employed a Long Short-Term Memory (LSTM) model with a hidden width of 100, trained as a regression task. During training, we used graphs containing 10 nodes and a training set consisting of 50 examples; our aim was to determine the effectiveness of OLR in the case when data is extremely scarce. The network was trained until convergence with perfect training accuracy and evaluated on a test set consisting of 200 data points. We report the average mean absolute error and the accuracy as computed by rounding the network's output.

Table 1: Wiener index results for training with and without OLR. We report Mean Absolute Error (MAE) and accuracy (computed by rounding the output to the nearest integer). For both metrics considered, training with OLR dramatically improves performance.

|         | MAE (↓)     | Accuracy (↑) |
|---------|-------------|--------------|
| Vanilla | 2.24 (0.12) | 0.18 (0.02)  |
| OLR     | 1.32 (0.10) | 0.28 (0.04)  |

As can be seen in Table 1, using OLR improves results significantly.
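For reference, the Wiener index itself is straightforward to compute. Below is a small sketch of its definition as the sum of shortest-path distances over all unordered vertex pairs, assuming a networkx graph (our own helper; networkx also ships a built-in `wiener_index`).

```python
import networkx as nx

def wiener_index(G):
    """Sum of shortest-path distances over all unordered vertex pairs."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    return sum(dist[a][b] for i, a in enumerate(nodes)
               for b in nodes[i + 1:])

print(wiener_index(nx.path_graph(4)))  # (1+2+3) + (1+2) + 1 = 10
```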
To further investigate the effect of training with OLR, we visualize the embeddings of the hidden state when the inputs are different representations drawn from two different graphs. A desired property is that different representations of the same graph are clustered together; we show that this phenomenon does indeed occur, as illustrated in Figure 3. This experiment demonstrates that training with OLR results in models that are significantly more invariant to different orderings of the same graph, a desired property for models trained for the task of graph generation.

Figure 3: t-SNE visualizations of hidden states when training with and without OLR. Data is generated as different representations of two graphs (i.e. different DFS trajectories of the same graph), the first with a Wiener index of 34, and the second with 38. (Left) Training with OLR yields hidden states that are clearly clustered into two groups. (Right) Training without regularization, there is no apparent separation in the embedding space between the two groups.

## 5.2 De Novo Molecule Generation

A prominent application of graph generation is that of molecule design. Graph generation tasks range from de novo generation, where the objective is to generate molecules with similar properties to a given dataset, to conditional generation, for which the task is to generate a graph given a second graph with specific characteristics, i.e. a ligand that binds to a specific target. Our empirical evaluation focuses on the former.

We evaluate our proposed regularization method on the MOSES benchmark (Polykovskiy et al., 2020) and compare to relevant baselines. Our implementation is based on the work of CharRNN, which uses three layers of the LSTM architecture, each with hidden dimension 600 (for complete details refer to (Segler et al., 2018)). We find a consistent improvement when adding OLR to the objective of autoregressive models. The data curated by (Polykovskiy et al., 2020) is refined from the ZINC dataset (Sterling & Irwin, 2015), which contains approximately 4.6M molecules. The authors filter the data based on molecular weights, number of rotational bonds, lipophilicity, etc., to result in a total of 2M molecules. The authors provide partitions of the data into train, test, and scaffold test to allow fair evaluation.⁶

⁶ The scaffold of a molecule is the structure induced by its ring systems along with the connectivity pattern between these systems. The scaffold test partition contains molecules with structures that did not appear in the train and test partitions. The scaffold test allows for the evaluation of how well the model can generate previously unobserved scaffolds.

**Computing Trajectories** OLR works by feeding two different trajectories that terminate at the same node. While this calculation is feasible to perform during the forward pass, it introduces a computational bottleneck. In order to circumvent this issue we perform the following calculations offline. For each molecule we first index all min-cuts and randomly select one. We then generate multiple (10) traversals terminating at the same node, as described in Section 3.3, and write the sequences into a file along with the original molecule from which the trajectories are derived. When loading the data, two trajectories are selected at random and used as inputs to the OLR objective described in Section 3.2.

**Data Filtering** Our offline computation of trajectories requires that there are min-cuts that induce a sufficient number of different DFS traversals terminating at the same node.
While 99.9% of the molecules in MOSES have at least two such trajectories, we filter the data to retain only molecules which have at least 10 different trajectories satisfying the criteria defined. After filtering we are left with approximately 500K molecules for training, and 55K for the test and scaffold test partitions. We note that in the following sections we show our method is most effective when training data is scarce; therefore the filtering process does not limit the applicability of our proposed regularization scheme.

**Results** Our results for training with OLR compared to other baselines trained on the same data are shown in Table 2. The most relevant baseline is CharRNN (Segler et al., 2018), which is an autoregressive model trained on canonical SMILES. We further compare to a randomized version of CharRNN inspired by the findings of (Arús-Pous et al., 2019), which show that augmenting the data by using randomly generated SMILES representations of the same molecule improves performance. We also attempted to compare our method to other non-autoregressive models such as those based on Variational Autoencoders (VAEs) (Blaschke et al., 2018; Gómez-Bombarelli et al., 2018; Kadurin et al., 2017); however, we found that those models did not produce valid molecules when trained with 1000 examples, so we do not report these results. We use the metrics defined by the MOSES benchmark (Polykovskiy et al., 2020); see Appendix B for a thorough description of these metrics.

Table 2: Generation results at validity threshold of 0.8 for LSTM and GRU architectures. Leading result highlighted in bold for each metric. Rank Average is the average position of each method over all metrics considered. As can be seen, OLR outperforms the baselines considered for both architectures. Refer to the text for further details.

| Metric | Canonical (LSTM) | Rand. (LSTM) | OLR + Rand. (LSTM) | Canonical (GRU) | Rand. (GRU) | OLR + Rand. (GRU) |
|-----------------|-----------|------------|-------------|-----------|------------|-------------|
| Unique@1K (↑) | 0.8930 | **1.0** | **1.0** | 0.805 | **1.0** | **1.0** |
| Unique@10K (↑) | 0.6182 | 0.9975 | **0.9981** | 0.5314 | 0.9965 | **0.9967** |
| FCD/Test (↓) | 1.1208 | 0.8568 | **0.7784** | 1.2616 | **0.9602** | 0.9752 |
| SNN/Test (↑) | **0.5599** | 0.4967 | 0.4936 | **0.5718** | 0.4948 | 0.4978 |
| Frag/Test (↑) | 0.9953 | 0.9947 | **0.9958** | 0.9945 | 0.9940 | **0.9961** |
| Scaf/Test (↑) | 0.6370 | **0.8246** | 0.8220 | 0.5857 | **0.8252** | 0.8159 |
| FCD/TestSF (↓) | 1.8318 | 1.4236 | **1.3089** | 1.921 | 1.7269 | **1.7022** |
| SNN/TestSF (↑) | **0.5234** | 0.4795 | 0.4769 | **0.5352** | 0.4755 | 0.4785 |
| Frag/TestSF (↑) | 0.9920 | 0.9919 | **0.9926** | 0.9911 | 0.9908 | **0.9932** |
| Scaf/TestSF (↑) | 0.0245 | **0.1185** | 0.0931 | 0.0250 | 0.1028 | **0.1123** |
| IntDiv (↑) | 0.8527 | 0.8508 | **0.8537** | 0.8499 | **0.8535** | 0.8531 |
| IntDiv2 (↑) | 0.8457 | 0.8449 | **0.8479** | 0.8424 | **0.8475** | 0.8471 |
| Filters (↑) | **0.9889** | 0.9705 | 0.9702 | **0.9908** | 0.9678 | 0.9702 |
| Novelty (↑) | 0.8969 | 0.9797 | **0.9809** | 0.8787 | **0.9787** | 0.9748 |
| Rank Average | 2.28 | 2.07 | **1.57** | 2.42 | 1.92 | **1.57** |
In order to demonstrate the effectiveness of OLR, we use 1000 randomly sampled data points from the training set and evaluate over the entire test set. When training with small amounts of data, there is a tradeoff between the validity of the generated molecules and uniqueness, among other metrics. Our evaluation considers the best performing models for each method, provided that the validity of the generated molecules exceeds 80%. Results are depicted in Table 2. As can be seen, adding randomized variants of the molecules outperforms the original work of (Segler et al., 2018), which trains an RNN as a language model using only canonical SMILES. Furthermore, adding the OLR objective improves upon the performance of randomized SMILES. In order to clearly depict the performance difference, we calculate the rank of each method on each metric considered. The average rank of each method is added as the last row of Table 2.

## 6 Discussion

Graph generation poses unique challenges due to the discrete and unordered nature of graphs, which differ from the continuous data typically handled by generative models. While various generative approaches such as GANs, VAEs, and autoregressive models have been successful in other domains, their application to graph generation requires careful consideration of the inherent structural complexities. Our work focuses on sequential graph generation using autoregressive architectures, motivated particularly by applications in molecular graph generation.

The choice of depth-first search (DFS) trajectories as the representation for graph sequences offers a structured approach aligned with the nature of graph exploration and has relevance in chemical informatics, where molecules are often represented as SMILES sequences. However, the multitude of possible DFS trajectories for a given graph poses a challenge when devising a regularization scheme to ensure model invariance. The introduction of Orderless Regularization (OLR) addresses this challenge by promoting model robustness against different DFS orderings of the same sub-graph. By incorporating OLR into the training process, we mitigate the dependency on specific DFS trajectories and enhance the generalization capabilities of the autoregressive model. This regularization term proves particularly beneficial in scenarios with limited training data, as demonstrated empirically in our study on small molecule generation.

The computational aspect of generating DFS trajectories with a common end-vertex, a prerequisite for training with OLR, presents a notable challenge. However, our devised algorithm efficiently tackles this challenge under specified constraints, facilitating effective training with OLR. While our approach shows promise in small molecule generation, its generalizability to larger and more complex graphs warrants further investigation. Scalability issues may arise with increasingly heterogeneous graphs, necessitating advancements in algorithmic techniques or adaptations of OLR. Additionally, the dependence on specific constraints for generating DFS trajectories highlights a potential limitation, prompting exploration of alternative regularization techniques or extensions of OLR to handle diverse graph structures. An avenue for future research involves investigating the suitability of OLR for diverse autoregressive architectures, including Transformers.
Adapting OLR to Transformer-based models necessitates adjustments due to the absence of a hidden state, which serves as the foundation for enforcing invariance in recurrent architectures. Furthermore, Transformers are permutation invariant architectures by design. A straightforward approach to incorporate OLR in Transformers would be to add positional encodings to the sequences and enforce invariance by penalizing the gap in the output of the model over different sequences representing the same graph. To test the applicability of OLR for Transformers, we repeat the Wiener index experiment (Section 5.1) with the recurrent architecture replaced by a Transformer. We find that regularization does not yield an improvement in results for an architecture with a comparable number of parameters: the accuracy is 0.12, compared to 0.28 achieved by an LSTM with OLR. One explanation for the gap in performance is the fact that regularization is performed on partial sequences which admit only certain positional embeddings, and the model cannot extrapolate the desired behaviour to positional embeddings corresponding to unseen string orders. These results indicate that Transformers require a different approach to adequately regularize towards structure invariance. A possible method for biasing Transformers towards graph invariance would be to design positional encodings that are aware of the structure of the graph, an approach which may have connections to Attention GNNs (Veličković et al., 2017; Zhang et al., 2018; Lee et al., 2018). Consequently, the extent to which Transformer models can leverage the advantages of OLR or other regularization methods remains uncertain and is a promising direction for future work.
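As an illustration of the straightforward variant described above, the following minimal sketch (PyTorch assumed; `olr_transformer_loss` and the model interface are hypothetical, not taken from our implementation) penalizes the output gap between two serializations of the same graph that terminate at the same node:

```python
# A minimal sketch, assuming `model(tokens)` returns per-position outputs
# of shape (batch, length, dim) for a causal Transformer.
import torch
import torch.nn.functional as F

def olr_transformer_loss(model, seq_a, seq_b, lam=1.0):
    """seq_a, seq_b: (batch, length) token ids for two DFS serializations of
    the same graph; since both serialize the same graph, lengths match."""
    out_a = model(seq_a)[:, -1, :]  # output at the shared terminal node, trajectory A
    out_b = model(seq_b)[:, -1, :]  # output at the shared terminal node, trajectory B
    return lam * F.mse_loss(out_a, out_b)
```

Such a term would be added to the standard language-modeling loss, mirroring how OLR is combined with the autoregressive objective in the recurrent case.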
## 7 Conclusions

In this work we highlight the innate gap that every autoregressive model for graph generation must mitigate: the order induced on graphs. We propose an approach that differs from previous works by introducing a novel regularization scheme that encourages learning hypotheses that are invariant to different DFS orderings. We demonstrate empirically that our proposed method improves performance for autoregressive models and is especially effective when the available datasets are small, as is the case in many real-world problems. We believe that our approach can contribute to the applicability of autoregressive models (e.g., state-space models) for graph generation and that similar ideas may be incorporated in various generation strategies beyond the scope of this work.

## References

Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. *arXiv preprint arXiv:1711.00740*, 2017.

Josep Arús-Pous, Simon Viet Johansson, Oleksii Prykhodko, Esben Jannik Bjerrum, Christian Tyrchan, Jean-Louis Reymond, Hongming Chen, and Ola Engkvist. Randomized smiles strings improve the quality of molecular generative models. *Journal of cheminformatics*, 11(1):1–13, 2019.

Rim Assouel, Mohamed Ahmed, Marwin H Segler, Amir Saffari, and Yoshua Bengio. Defactor: Differentiable edge factorization-based probabilistic graph generation. *arXiv preprint arXiv:1811.09766*, 2018.

Davide Bacciu, Alessio Micheli, and Marco Podda. Edge-based sequential graph generation with recurrent neural networks. *Neurocomputing*, 416:177–189, 2020.

Jesse Beisegel, Carolin Denkert, Ekkehard Köhler, Matjaz Krnc, Nevena Pivac, Robert Scheffler, and Martin Strehler. On the end-vertex problem of graph searches. *Discrete Mathematics & Theoretical Computer Science*, 21, 2019.

Guy W Bemis and Mark A Murcko. The properties of known drugs. 1. molecular frameworks. *Journal of medicinal chemistry*, 39(15):2887–2893, 1996.

Pavol Bielik, Veselin Raychev, and Martin Vechev. Phog: probabilistic model for code. In *International Conference on Machine Learning*, pp. 2933–2942. PMLR, 2016.

Thomas Blaschke, Marcus Olivecrona, Ola Engkvist, Jürgen Bajorath, and Hongming Chen. Application of generative autoencoder in de novo molecular design. *Molecular informatics*, 37(1-2):1700123, 2018.

Marc Brockschmidt, Miltiadis Allamanis, Alexander L Gaunt, and Oleksandr Polozov. Generative code modeling with graphs. *arXiv preprint arXiv:1805.08490*, 2018.

Jie Bu, Kazi Sajeed Mehrab, and Anuj Karpatne. Let there be order: Rethinking ordering in autoregressive graph generation. *arXiv preprint arXiv:2305.15562*, 2023.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. *arXiv preprint arXiv:2107.03374*, 2021a.

Xiaohui Chen, Xu Han, Jiajing Hu, Francisco JR Ruiz, and Liping Liu. Order matters: Probabilistic modeling of node sequence for graph generation. *arXiv preprint arXiv:2106.06189*, 2021b.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. *arXiv preprint arXiv:1412.3555*, 2014.

Edo Cohen-Karlik, Avichai Ben David, and Amir Globerson. Regularizing towards permutation invariance in recurrent models. *Advances in Neural Information Processing Systems*, 33:18364–18374, 2020.

Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. Syntax-directed variational autoencoder for structured data. *arXiv preprint arXiv:1802.08786*, 2018.

Alex O Davies, Nirav S Ajmeri, et al. Hierarchical gnns for large graph generation. *arXiv preprint arXiv:2306.11412*, 2023.

Nicola De Cao and Thomas Kipf. Molgan: An implicit generative model for small molecular graphs. *arXiv preprint arXiv:1805.11973*, 2018.

Jörg Degen, Christof Wegscheid-Gerlach, Andrea Zaliani, and Matthias Rarey. On the art of compiling and using 'drug-like' chemical fragment spaces. *ChemMedChem: Chemistry Enabling Drug Discovery*, 3(10):1503–1507, 2008.

Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, and Liang Zhao. Disentangled spatiotemporal graph generative models. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 6541–6549, 2022a.

Yuanqi Du, Xiaojie Guo, Amarda Shehu, and Liang Zhao. Interpretable molecular graph generation via monotonic constraints. In *Proceedings of the 2022 SIAM International Conference on Data Mining (SDM)*, pp. 73–81. SIAM, 2022b.

Shuangfei Fan and Bert Huang. Labeled graph generative adversarial networks. *arXiv preprint arXiv:1906.03220*, 2019.

Daniel Flam-Shepherd, Tony Wu, and Alan Aspuru-Guzik. Graph deconvolutional generation. *arXiv preprint arXiv:2002.07087*, 2020.

Daniel Flam-Shepherd, Kevin Zhu, and Alán Aspuru-Guzik. Language models can learn complex molecular distributions. *Nature Communications*, 13(1):3293, 2022.

Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling. E(n) equivariant normalizing flows. *Advances in Neural Information Processing Systems*, 34:4181–4192, 2021.
Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. *ACS central science*, 4(2):268–276, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, 63(11):139–144, 2020.

Nikhil Goyal, Harsh Vardhan Jain, and Sayan Ranu. Graphgen: a scalable approach to domain-agnostic labeled graph generation. In *Proceedings of The Web Conference 2020*, pp. 1253–1263, 2020.

Xiaojie Guo and Liang Zhao. A systematic survey on deep generative models for graph generation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, and Yanfang Ye. Interpretable deep graph generation with node-edge co-disentanglement. In *Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining*, pp. 1697–1707, 2020.

Vincent J Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. Global relational models of source code. In *International conference on learning representations*, 2019.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30, 2017.

Abram Hindle, Earl T Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. On the naturalness of software. *Communications of the ACM*, 59(5):122–131, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997.

Shion Honda, Hirotaka Akita, Katsuhiko Ishiguro, Toshiki Nakanishi, and Kenta Oono. Graph residual flow for molecular graph generation. *arXiv preprint arXiv:1909.13521*, 2019.

Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International conference on machine learning*, pp. 8867–8887. PMLR, 2022.

Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In *International conference on machine learning*, pp. 2323–2332. PMLR, 2018.

Artur Kadurin, Alexander Aliper, Andrey Kazennov, Polina Mamoshina, Quentin Vanhaelen, Kuzma Khrabrov, and Alex Zhavoronkov. The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology. *Oncotarget*, 8(7):10883, 2017.

Seokho Kang and Kyunghyun Cho. Conditional molecular design with deep generative models. *Journal of chemical information and modeling*, 59(1):43–52, 2018.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Greg Landrum. RDKit: Open-source cheminformatics, 2006.

John Boaz Lee, Ryan Rossi, and Xiangnan Kong. Graph classification using structural attention. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 1666–1674, 2018.

Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient graph generation with graph recurrent attention networks. *Advances in neural information processing systems*, 32, 2019.
Jaechang Lim, Seongok Ryu, Jin Woo Kim, and Woo Youn Kim. Molecular generative model based on conditional variational autoencoder for de novo molecular design. *Journal of cheminformatics*, 10(1):1–9, 2018.

Tengfei Ma, Jie Chen, and Cao Xiao. Constrained generation of semantically valid graphs via regularizing variational autoencoders. *Advances in Neural Information Processing Systems*, 31, 2018.

Kaushalya Madhawa, Katsuhiko Ishiguro, Kosuke Nakago, and Motoki Abe. Graphnvp: An invertible flow model for generating molecular graphs. *arXiv preprint arXiv:1905.11600*, 2019.

Andreas Mayr, Günter Klambauer, Thomas Unterthiner, Marvin Steijaert, Jörg K Wegner, Hugo Ceulemans, Djork-Arné Clevert, and Sepp Hochreiter. Large-scale comparison of machine learning methods for drug target prediction on chembl. *Chemical science*, 9(24):5441–5451, 2018.

Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. *arXiv preprint arXiv:1811.01900*, 2018.

Marco Podda, Davide Bacciu, and Alessio Micheli. A deep generative model for fragment-based molecule generation. In *International Conference on Artificial Intelligence and Statistics*, pp. 2240–2250. PMLR, 2020.

Peter Pogány, Navot Arad, Sam Genway, and Stephen D Pickett. De novo molecule design by translating from reduced graphs to smiles. *Journal of chemical information and modeling*, 59(3):1136–1146, 2018.

Daniil Polykovskiy, Alexander Zhebrak, Benjamin Sanchez-Lengeling, Sergey Golovanov, Oktai Tatanov, Stanislav Belyaev, Rauf Kurbanov, Aleksey Artamonov, Vladimir Aladinskiy, Mark Veselov, et al. Molecular sets (moses): a benchmarking platform for molecular generation models. *Frontiers in pharmacology*, 11:565644, 2020.

Mariya Popova, Mykhailo Shvets, Junier Oliva, and Olexandr Isayev. Molecularrnn: Generating realistic molecular graphs with optimized properties. *arXiv preprint arXiv:1905.13372*, 2019.

Oleksii Prykhodko, Simon Viet Johansson, Panagiotis-Christos Kotsias, Josep Arús-Pous, Esben Jannik Bjerrum, Ola Engkvist, and Hongming Chen. A de novo molecular generation method using latent vector based generative adversarial network. *Journal of Cheminformatics*, 11(1):1–13, 2019.

David Rogers and Mathew Hahn. Extended-connectivity fingerprints. *Journal of chemical information and modeling*, 50(5):742–754, 2010.

Eyal Rozenberg and Daniel Freedman. Semi-equivariant conditional normalizing flows, with applications to target-aware molecule generation. *Machine Learning: Science and Technology*, 4(3):035037, 2023.

Marwin HS Segler, Thierry Kogej, Christian Tyrchan, and Mark P Waller. Generating focused molecule libraries for drug discovery with recurrent neural networks. *ACS central science*, 4(1):120–131, 2018.

Chence Shi, Minkai Xu, Zhaocheng Zhu, Weinan Zhang, Ming Zhang, and Jian Tang. Graphaf: a flow-based autoregressive model for molecular graph generation. *arXiv preprint arXiv:2001.09382*, 2020.

Gregor Simm, Robert Pinsler, and José Miguel Hernández-Lobato. Reinforcement learning for molecular design guided by quantum mechanics. In *International Conference on Machine Learning*, pp. 8959–8969. PMLR, 2020a.

Gregor NC Simm, Robert Pinsler, Gábor Csányi, and José Miguel Hernández-Lobato. Symmetry-aware actor-critic for 3d molecular design. *arXiv preprint arXiv:2011.12747*, 2020b.

Martin Simonovsky and Nikos Komodakis. Graphvae: Towards generation of small graphs using variational autoencoders.
In *Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I 27*, pp. 412–422. Springer, 2018.

Miha Skalic, Davide Sabbadin, Boris Sattarov, Simone Sciabola, and Gianni De Fabritiis. From target to drug: generative modeling for the multimodal structure-based ligand design. *Molecular pharmaceutics*, 16(10):4282–4291, 2019.

Teague Sterling and John J Irwin. Zinc 15–ligand discovery for everyone. *Journal of chemical information and modeling*, 55(11):2324–2337, 2015.

Xiaochu Tong, Xiaohong Liu, Xiaoqin Tan, Xutong Li, Jiaxin Jiang, Zhaoping Xiong, Tingyang Xu, Hualiang Jiang, Nan Qiao, and Mingyue Zheng. Generative models for de novo drug design. *Journal of Medicinal Chemistry*, 64(19):14011–14027, 2021.

Leonid Nisonovich Vaserstein. Markov processes over denumerable products of spaces, describing large systems of automata. *Problemy Peredachi Informatsii*, 5(3):64–72, 1969.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. *arXiv preprint arXiv:1511.06391*, 2015.

David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. *Journal of chemical information and computer sciences*, 28(1):31–36, 1988.

Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. *arXiv preprint arXiv:1704.01696*, 2017.

Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. *Advances in neural information processing systems*, 31, 2018a.

Jiaxuan You, Rex Ying, Xiang Ren, William Hamilton, and Jure Leskovec. Graphrnn: Generating realistic graphs with deep auto-regressive models. In *International conference on machine learning*, pp. 5708–5717. PMLR, 2018b.

Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in neural information processing systems*, 30, 2017.

Chengxi Zang and Fei Wang. Moflow: an invertible flow model for generating molecular graphs. In *Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining*, pp. 617–626, 2020.

Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit-Yan Yeung. Gaan: Gated attention networks for learning on large and spatiotemporal graphs. *arXiv preprint arXiv:1803.07294*, 2018.

Zaixi Zhang, Yaosen Min, Shuxin Zheng, and Qi Liu. Molecule generation for target protein binding with structural motifs. In *The Eleventh International Conference on Learning Representations*, 2022.

Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, and Shu Wu. A survey on deep graph generation: Methods and applications. *arXiv preprint arXiv:2203.06714*, 2022.

Dongmian Zou and Gilad Lerman. Encoding robust representation for graph generation. In *2019 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–9. IEEE, 2019.

![15_image_0.png](15_image_0.png)

Figure 4: Proof illustration: $S$ has a cycle and two different trajectories starting from $u$ and ending with $w$ ($urw$ and $uwpr$). Concatenating with the trajectory from $z$ to $v$, we obtain two different DFS trajectories with a shared suffix.
## A Missing Proofs

In this section we show how to construct distinct DFS trajectories with a common end vertex for a 2-edge-connected graph, under the condition that the graph is not a cycle.

Proof. From our assumption that the graph is not a cycle, there exist at least two nodes with degree $\geq 3$. Denote by $C = (S, T)$ a minimal cut of size 2 (such a cut exists from our assumption that the graph is 2-edge-connected). Denote the edges of the minimal cut by $e_1 = (u, v)$ and $e_2 = (w, z)$ such that $u, w \in S$ and $v, z \in T$. Next, we claim that at least one of the partitions contains a cycle; otherwise, there is a path connecting $S$ and $T$, since there are nodes in the graph which have degree 3 in the original graph with a path between them. Assume without loss of generality that $S$ is the partition with a cycle; therefore, there are at least 2 different traversals of $S$ that start with $u$ and end with $w$. There is also a trajectory between $z$ and $v$. Putting these together, there are at least 2 trajectories of the entire graph with a common suffix, which is the traversal of $T$. Figure 4 illustrates the proof concept.

## B Metric Details

In this section we provide details for the metrics reported in Table 2. A few of the similarity measures (SNN and IntDiv) are based on the *Tanimoto coefficient*. In order to compute the Tanimoto coefficient, the molecules are mapped to a vector of fingerprints where each bit in the vector represents the presence (or absence) of a specific fragment.7 For molecules $A, B$, denote their fingerprints by $m_A$ and $m_B$ respectively; the Tanimoto coefficient is then calculated as the Jaccard index of the two vectors,

$$J(m_{A},m_{B})={\frac{|m_{A}\cap m_{B}|}{|m_{A}\cup m_{B}|}}={\frac{|m_{A}\cap m_{B}|}{|m_{A}|+|m_{B}|-|m_{A}\cap m_{B}|}}.\tag{B.1}$$

We denote the Tanimoto coefficient of molecules $A, B$ by $T(A, B)$.

Unique@K reports the fraction of uniquely generated valid SMILES strings amongst the K molecules generated (validity is determined by the RDKit library). We generate 30,000 molecules for each model and report for K = 1,000 and K = 10,000. High uniqueness values ensure the models do not collapse into repeatedly producing the same set of molecules.

Fréchet ChemNet Distance (FCD) is a metric for evaluating generative models in the chemical context; the method is based on the well-established *Fréchet Inception Distance* (FID) metric used to evaluate the performance of generative models in computer vision (Heusel et al., 2017). The Fréchet distance measures the Wasserstein-2 distance (Vaserstein, 1969) between the distributions induced by taking the activations of the last layer of a relevant deep neural net. In the case of FCD, molecule activations are probed from ChemNet (Mayr et al., 2018). Given a set of generated molecules, denote by G the set of vectors obtained from the activations of ChemNet; one can calculate the mean and covariance $\mu_G$ and $\Sigma_G$. Similarly, denote by $\mu_R$ and $\Sigma_R$ the mean and covariance of the set of molecules in the reference set. The FCD is calculated as follows,

$$FCD(G,R)=\|\mu_{G}-\mu_{R}\|^{2}+Tr\left(\Sigma_{G}+\Sigma_{R}-2(\Sigma_{G}\Sigma_{R})^{1/2}\right),\tag{B.2}$$

where $Tr(M)$ denotes the trace of the matrix $M$. Low FCD values indicate that the generated molecules distribute similarly to the reference set.
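A minimal numpy/scipy sketch of Eq. (B.2), assuming ChemNet activations are already available as row-stacked matrices (the function name and interface are illustrative, not the benchmark's implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(act_g: np.ndarray, act_r: np.ndarray) -> float:
    """Eq. (B.2): act_g, act_r are (num_molecules, dim) activation matrices."""
    mu_g, mu_r = act_g.mean(axis=0), act_r.mean(axis=0)
    sigma_g = np.cov(act_g, rowvar=False)
    sigma_r = np.cov(act_r, rowvar=False)
    covmean = sqrtm(sigma_g @ sigma_r).real  # matrix square root; drop numerical imaginary part
    return float(np.sum((mu_g - mu_r) ** 2) + np.trace(sigma_g + sigma_r - 2.0 * covmean))
```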
Similarity to Nearest Neighbor (SNN) is the average of the Tanimoto coefficients between each molecule in the generated set G and its nearest neighbor in a reference set of molecules R. High SNN indicates the generated molecules have similar structures to those in the reference set. This metric is in the range [0, 1].

Fragment similarity (Frag) is a fragment similarity measure based on the BRICS fragments (Degen et al., 2008). Denote the set of BRICS fingerprint vectors of the generated molecules by G and similarly R for the reference molecules. The fragment similarity is defined as the cosine similarity of the sum vectors,

$$Frag(G,R)=cosine\left(\sum_{g\in G}g,\sum_{r\in R}r\right).\tag{B.3}$$

The Frag measure is in the range [0, 1]; values closer to 1 indicate that the generated and reference molecule sets have a similar distribution of BRICS fragments.

Scaffold similarity (Scaf) is similar to the fragment similarity; instead of the BRICS fragments, Scaf is based on mapping molecules to their Bemis–Murcko scaffolds (Bemis & Murcko, 1996).8 The measure also has a range of [0, 1]; values closer to 1 indicate that the generated molecule set has a similar distribution of scaffolds to the reference set.

7The molecular fingerprints are obtained from RDKit (Landrum, 2006) and are based on the extended-connectivity fingerprints (Rogers & Hahn, 2010).

8A Bemis–Murcko scaffold is the ring structure of a molecule along with the bonds connecting the rings, i.e. the molecule without the side chains.

Internal diversity (IntDiv) is a measure of the chemical diversity within a generated set of molecules G, defined as

$$IntDiv_{p}(G)=1-\left(\frac{1}{|G|^{2}}\sum_{A,B\in G}T(A,B)^{p}\right)^{1/p}.\tag{B.4}$$

We report the internal diversity for p = 1, 2. This measure is in the range [0, 1]. Low values indicate a lack of diversity in the generated molecules, i.e. that the model outputs molecules with similar fingerprints.

Filters is the fraction of generated molecules that pass a certain filtering that has been applied to the training data. The metric is in the range [0, 1]; high values indicate that the model has learnt to generate molecules which avoid the structures omitted by the filtering process.

Novelty is the fraction of generated molecules that do not appear in the training set. This measure is in the range [0, 1] and is an indication of whether the model overfits the training data.
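For concreteness, a minimal sketch of the fingerprint-based quantities above (Eqs. B.1 and B.4), assuming molecules have already been mapped to binary fingerprint arrays; the function names are illustrative rather than the benchmark's API:

```python
import numpy as np

def tanimoto(m_a, m_b):
    """Eq. (B.1): Jaccard index of two binary fingerprint vectors."""
    m_a, m_b = np.asarray(m_a, bool), np.asarray(m_b, bool)
    union = np.logical_or(m_a, m_b).sum()
    return np.logical_and(m_a, m_b).sum() / union if union else 1.0

def snn(gen, ref):
    """Average Tanimoto similarity of each generated molecule to its nearest reference."""
    return float(np.mean([max(tanimoto(g, r) for r in ref) for g in gen]))

def int_div(gen, p=1):
    """Eq. (B.4): internal diversity over all pairs in the generated set."""
    sims = np.array([tanimoto(a, b) for a in gen for b in gen])
    return 1.0 - float(np.mean(sims ** p)) ** (1.0 / p)
```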
Review 1: Summary: This work aims to design a method that is invariant to the ordering of different DFS trajectories when generating graphs using RNNs. 1. With definitions of several necessary concepts about the ordering of DFS and induced subgraphs, it proposes a regularizer that penalizes the distance between model outputs on different DFS trajectories, which hopefully induces a related invariance property. 2. It shows the efficiency of sampling different DFS trajectories with a common end vertex on two classes of graphs. 3. The proposed method is experimentally verified on two tasks, including molecular generation. Strengths and Weaknesses: Pros: 1. The motivation is great: while there have been many discussions about the relationship between permutation symmetry and GNNs for a given graph, it seems interesting and promising to consider invariance to different generation trajectories in the context of graph generation. 2. The proposed regularizer is simple and general: it is independent of the specific choice of models, which might be helpful for developing other models with such a property of invariance. 3. The reported numerical results in Tables 1 and 2 are good compared with some baselines. Cons: 1. As the regularizer is general and independent of specific choices of models, it would be good to evaluate this method on more models. For now, the conducted experiments are a little limited, making the results less convincing. A good example would be Table 1 in [1], where their method is also free of architectures. 2. The Definition 3.5 of $k$-edge-connected seems different from what was intended. It needs an additional condition "there exists $|\widetilde{\mathcal{E}}|=k$ such that the induced $G'$ is disconnected". Its current definition is not correct, as any $(k+1)$-edge-connected graph is also $k$-edge-connected for any $k$. Btw, I am not sure, if the new condition is added, whether the original condition is needed any more, since DFS can still be conducted on a disconnected graph. 3. In Definition 3.4 and (3.4), more clarification of *totally structure invariant* is needed from two aspects. a. For any $\widetilde{G}\in\mathcal{G}_{DFS}(G)$, any vertex $v\in\widetilde{G}$ does not necessarily have a DFS trajectory in $G$ that ends with $v$. This seems a little misaligned with the motivation of this work in Def 3.1. b. Similar to the previous point, any vertex $v\in\widetilde{G}$ does not necessarily have a DFS trajectory in $\widetilde{G}$ that ends with $v$. This might raise some questions in the implementation. 4. Similar to the above point 3.b, Prop 3.6 is about sampling different trajectories ending at a certain common vertex, but can this kind of sampling be done for any vertex $v$ in subgraph $\widetilde{G}$? Reference: [1] Lim et al. Sign and Basis Invariant Networks for Spectral Graph Representation Learning. Requested Changes: Please see the above concerns. All of the four points will have a significant impact on the evaluation. Broader Impact Concerns: N/A. ================================================== Review 2: Summary: The paper titled "Overcoming Order in Autoregressive Graph Generation" presents a new approach to graph generation that addresses the challenge of order invariance in sequential graph generation models. It introduces an Orderless Regularization (OLR) term for recurrent neural networks (RNNs) that encourages the model to learn representations invariant to different orderings of graph sequences.
This is particularly useful in applications like molecular graph generation, where the sequence order can be arbitrary. The authors demonstrate the effectiveness of OLR through experiments on Wiener index prediction and de novo molecule generation, showing that it improves performance, especially in data-scarce scenarios. Strengths and Weaknesses: ## Strengths 1. Addressing the order invariance issue in graph generation is a critical challenge, as the sequence in which nodes are generated can significantly impact the performance of generative models. The paper introduces a simple solution by adding an auxiliary loss term to mitigate the dependence on node sequence order. 2. The paper is well-written, offering clear and concise explanations of the motivation and the proposed methodology. ## Weaknesses 1. The limited experimental evaluation in the paper is a significant shortcoming. The comparison is made against just one baseline model, which does not fully test the strengths and potential weaknesses of the OLR approach against the array of methods available in the field. Moreover, the choice to use only one type of RNN and to conduct experiments on a single dataset restricts the depth of insight into how the OLR method would perform in different settings or with different types of data. Overall, expanding the experimental results by adding more baselines and more datasets can improve the message of the paper. 2. A weakness in the paper is the limited novelty of the OLR approach, especially when considering the prior introduction of a similar regularization concept aimed at achieving permutation invariance in Cohen-Karlik et al. (2020). This earlier work suggests that the core idea of using regularization to mitigate order sensitivity in RNNs has already been explored, potentially diminishing the perceived innovation of the OLR method. Addressing this, a more thorough discussion of how OLR distinguishes itself from previous approaches and contributes new insights or improvements would be beneficial to strengthen the paper's novelty claim. 3. The paper's exclusive focus on Recurrent Neural Networks (RNNs) for graph generation, without discussing the potential applicability or comparison with Transformer models, is a weakness. Given the substantial advances and successes of Transformers in various domains of machine learning, including graph processing and generation, their omission from the discussion limits the breadth of the paper's exploration into modern architectures. This oversight may leave readers questioning whether the OLR method could be adapted to Transformer-based models, which are known for their superior handling of sequences and relational data. Requested Changes: 1. It's crucial to broaden the experimental validation by comparing the OLR method with more baseline models, incorporating different types of RNNs, and utilizing a variety of datasets. This would provide a more comprehensive understanding of the method's performance across different contexts and improve confidence in its generalizability and effectiveness. To deepen the experimental validation of the OLR method, the authors should aim to answer the following questions, either through theoretical analysis or additional experiments: - How does the OLR method compare with state-of-the-art models across different domains? This would involve not just other RNN-based approaches but also models that might use different underlying architectures or methodologies for graph generation.
- What is the impact of using different RNN architectures (such as LSTM, GRU, etc.) on the effectiveness of the OLR method? - Can the OLR method maintain its performance across diverse datasets, including those with varying sizes, complexities, and domain-specific characteristics? This could involve testing on datasets from different fields, such as social network graphs, biochemical molecule structures, and communication networks, to evaluate the method's robustness and flexibility. - What are the computational requirements and scalability of the OLR method across different settings? Evaluating the method's performance in terms of time and resource efficiency, especially as the size of the graph increases, is essential for assessing its practical applicability. 2. Add a detailed comparison with the regularization approach introduced by Cohen-Karlik et al. (2020). Highlighting the specific advancements or differences brought by OLR is essential to clarify its novelty and contribution to the field. 3. The paper would greatly benefit from including a discussion about the potential integration and comparison of the OLR method with Transformer models. This should explore how the OLR approach might be adapted for or compared with Transformer architectures, which have shown considerable success in sequence and relational data modeling. Broader Impact Concerns: not applicable ================================================== Review 3: Summary: This paper proposes a new regularization scheme for sequential graph generation by recurrent neural networks, called Orderless Regularization (OLR), that encourages the hidden state of the RNN to be invariant to different node orderings in the training graphs. Experiments show OLR improves the performance of sequential graph generation models. Strengths and Weaknesses: Strengths: *Paper tackles an interesting problem for sequential graph generation Weaknesses: *Experimental setup is a bit unclear *Some concerns about the evaluation Requested Changes: *A key question that I have is how do we know the representations we obtain with OLR regularization are invariant? The performance improvements in the regression and graph generation tasks could just be due to regularization, and not due to any invariance imbued in the representations. For example, if one were to encode pairs of molecule SMILES that encode the same graph structure but are from different orderings, would we expect a more similar pair of embeddings with OLR? *I would like some more clarity on what the input graphs are for each of the evaluation settings. The molecule generation task makes more sense to me (SMILES of a set of molecules curated by Polykovskiy). What about the Wiener index task? How are these graphs generated? And are the graphs represented simply as text codes? *In the Wiener index task, the dataset size is restricted to be very small (50 training, 200 test). How does the performance gap of vanilla and vanilla + OLR change as the dataset size is increased, e.g., to 500, 5000, 50000 training examples? (I just arbitrarily picked some numbers) *Table 2: Would be good to see Canonical + OLR to see the effect of the OLR regularization in the absence of any input data augmentation. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: The proposed approach of regularizing auto-regressive models based on different orderings is interesting, and complemented by thorough empirical evidence.
However, two out of three reviewers raised concerns regarding the experiments, namely the limited set of datasets considered and, more importantly, the choice of models to compare to. A small note on the Transformer discussion in Section 6, since it is not clear to me why the proposed method doesn't apply to Transformers. In particular, with causally masked attention and positional embeddings, these models are no longer permutation invariant, and if I understand correctly, the regularization applies only to the last hidden layer before the final prediction, which can certainly be done in an auto-regressive Transformer as well? Please address the above in a minor revision, either by further clarifying why the method doesn't apply to Transformers, or by including some simple experiments on auto-regressive (causal) Transformers. These seem important for the thoroughness of the experiments since Transformers are currently the most widely adopted sequential architecture, and thus an important baseline to include for the TMLR community. ==================================================
# Reward Poisoning On Federated Reinforcement Learning

Anonymous authors Paper under double-blind review

## Abstract

Federated learning (FL) has become a popular tool for solving traditional Reinforcement Learning (RL) tasks, particularly when individual agents need to collaborate due to low sample efficiency but are concerned about data privacy. The multi-agent structure addresses the major concern of data hunger in traditional RL, while the federated mechanism protects the data privacy of individual agents. Despite the advantages FL brings to RL, Federated Reinforcement Learning (FRL) is inherently susceptible to poisoning, as both FL and RL are vulnerable to such training-time attacks; however, the vulnerability of FRL has not been well studied before. In this work, we propose a general framework to characterize FRL poisoning as an optimization problem and design a poisoning protocol that can be applied to policy-based FRL. Our framework is versatile, catering to FRL scenarios employing both policy-gradient local RL and actor-critic local RL. In the context of actor-critic configurations, we conduct training for a pair of critics, one private and one public, aimed at maximizing the potency of poisoning. We provably show that our method can strictly hurt the global objective. We verify the effectiveness of our poisoning approach through comprehensive experiments, supported by mainstream RL algorithms, across various RL OpenAI Gym environments covering a wide range of difficulty levels. Within these experiments, we assess our proposed attack by comparing it to various baselines, including standard, poisoned, and robust FRL methods. The results demonstrate the power of the proposed protocol in effectively poisoning FRL systems: it consistently diminishes performance across diverse environments, proving to be more effective than baseline methods. Our work provides new insights into the training-time vulnerability of FL in RL and poses new challenges for designing secure FRL algorithms.

## 1 Introduction

Reinforcement Learning (RL) has gained popularity in recent years as a paradigm for solving complex sequential decision-making problems and has been applied to a wide range of real-world problems, including game playing (Silver et al., 2016; Vinyals et al., 2019), autonomous driving (Yurtsever et al., 2020), network security (Xiao et al., 2018), design of materials (Govindarajan et al., 2024; Zhou et al., 2019; Ghugare et al., 2023), circuit design (Roy et al., 2021), and optimization (Chen et al., 2022). In RL, the agent's goal is to learn an optimal policy that maximizes the long-term cumulative rewards, which is done by repeatedly interacting with a stochastic environment, taking actions, and receiving feedback in the form of rewards. However, despite the impressive performance of RL algorithms, they are notoriously known to be data-hungry, often suffering from poor sample efficiency (Dulac-Arnold et al., 2021; Schwarzer et al., 2020). One traditional solution to this challenge is *Parallel RL* (Kretchmar, 2002), which adopts multiple parallel RL agents that sample data from the environment and share it with a central server, as seen in practical implementations such as game-playing (Mnih et al., 2016; Berner et al., 2019).
However, transferring raw data may not be feasible: on the one hand, it can cause significant communication costs in applications such as the Internet of Things (IoT) (Wang et al., 2020); on the other hand, it is not suitable for privacy-sensitive industries, such as clinical decision support (Liu et al., 2020). Federated Reinforcement Learning (FRL) has been proposed in order to address the limitations of traditional parallel RL, inspired by the recent success of *Federated Learning* (FL). FRL allows multiple agents to solve the same RL task collaboratively without sharing their sensitive raw data, thus addressing the drawbacks of heavy overhead and privacy violation in traditional Parallel RL. On the communication side, the communication efficiency of FRL is improved by the ability of FL to perform multiple local model updates during each communication round. On the privacy side, FRL enables data privacy protection by only communicating model updates, but not raw data, to a central server. With the advantages of communication efficiency and privacy protection, FRL is practically appealing to a wide range of applications, including IoT (Wang et al., 2020), autonomous driving (Liang et al., 2023), robotics (Liu et al., 2019), etc.

The success of FRL applications has spurred theoretical and methodological advancements. For instance, Mnih et al. (2016); Min et al. (2023b;a) investigate FRL under asynchronous communication, while Xie & Song (2023b); Zhu & Gong (2023); Xie & Song (2023a); Mai et al. (2023); Fan et al. (2023) delve into FRL with heterogeneity in local environments. Khodadadian et al. (2022); Zheng et al. (2023) provide insights into linear speed-up convergence rates, Woo et al. (2023) proposes an algorithm with sample complexity scaling linearly with the number of agents, and Zhao et al. (2023); Liu et al. (2023) analyze FRL with personalized local objectives.

Poisoning in FRL is practical and harmful. The inherent nature of FL and RL amplifies the susceptibility to poisoning (training-time attacks) when the two are combined into FRL. On the RL side, individual RL agents are designed to dynamically learn from raw feedback from the environment, making the learning system vulnerable to data corruption during training time (Zhang et al., 2020; Banihashem et al., 2022). As an example, a chatbot could be misled by a small group of Twitter users, resulting in the generation of misogynistic and racist remarks (Neff, 2016). Similarly, recommendation systems can be manipulated by a minimal number of fake clicks, reviews, or comments, leading to products being inaccurately ranked higher. Besides, RL in material design (Govindarajan et al., 2024; Zhou et al., 2019; Ghugare et al., 2023) is susceptible to poisoned material data, and RL in circuit design (Roy et al., 2021) can be compromised by misleading prefix graphs and actions. On the FL side, the impact of poisoning is exacerbated in FL frameworks compared to single-agent RL. The lack of transparency in local training within FL naturally exposes the FRL system to adversarial attacks by malicious agents (Fang et al., 2020; Bagdasaryan et al., 2020; Bhagoji et al., 2019; So et al., 2020). Although practical and harmful, poisoning in FRL has yet to be explored. In this work, our goal is to systematically study the vulnerability of FRL systems when facing adversarial attacks that attempt to mislead the trained policy.

Challenges. Applying existing RL poisoning techniques to Federated Reinforcement Learning (FRL) presents several challenges.
Many prior methods, such as those by Zhang et al. (2020) and Rakhsha et al. (2020), rely on unrealistic assumptions, including the attacker having complete knowledge of the MDP environment, which is often impractical. Additionally, some approaches, such as TrojDRL (Panagiota et al., 2020), target RL agents' actions rather than reward schemes, making them incompatible with our framework. Furthermore, the effectiveness of certain mechanisms, such as VA2C-P (Sun et al., 2020), diminishes in federated settings due to small local training steps, which lead to frequent reinitialization of the adversarial critic and impaired learning of the poisoned actor's value function. Nevertheless, we have included the federated version of VA2C-P as a baseline, and our experimental results highlight the advantages of our approach compared to this federated extension of RL poisoning.

## 1.1 Related Work On Poisoning

Here, we provide a comprehensive literature comparison regarding *poisoning* in various contexts of FL, RL, Multi-Agent RL, Distributed RL, and FRL.

Poisoning in FL has been analyzed in different settings. Selected representative studies (Tolpegin et al., 2020; Bhagoji et al., 2019; Fang et al., 2020; Jodayree et al., 2023b;a) strategically poison FL by either compromising the integrity of the training dataset or manipulating submitted local model parameters. However, these studies are set in the context of supervised learning, which is substantially different from RL in the sense of the availability of future data, the knowledge of the dynamics of the environment, etc. (Sun et al., 2020). Thus, existing works on FL poisoning are not directly applicable to FRL poisoning. In contrast, our work focuses on poisoning designed specifically for FRL, taking into account the unique characteristics of RL.

Poisoning in RL, referring to attacks committed during the training process (Zhang et al., 2020), can be categorized into two types: weak attacks that only poison data (e.g., rewards and states) and strong attacks that can manipulate actions in addition to data (Panagiota et al., 2020). In this study, we focus on weak attacks, also known as environmental poisoning, as they allow easier access for the attacker and, therefore, are more likely to occur in real-world scenarios. Rakhsha et al. (2020) formulated optimization frameworks to characterize RL poisoning, but these have limitations, such as requiring knowledge of the underlying MDP and focusing on targeted poisoning. Our proposed framework, however, considers both targeted and untargeted poisoning under the realistic assumption that the attacker does not have access to the MDP dynamics. Sun et al. (2020) design a vulnerability-aware poisoning method for RL. Their algorithm focuses on manipulating the actor model, which cannot be directly applied to FRL, as the critic model is also communicated among agents and the server. In contrast, our proposed poisoning mechanism is specifically designed for FRL and focuses on manipulating the critic model by training a pair of public and private critics.

Poisoning in Multi-Agent Reinforcement Learning (MARL) is typically different from poisoning in FRL. Existing literature studying the robustness or poisoning of MARL more or less violates the privacy requirements of FL.
In MARL, multiple agents' actions jointly determine the next state of the environment and the corresponding rewards for each agent, thus exposing data and environment information across agents (Wu et al., 2023; Liu & Lai, 2023; Mohammadi et al., 2023; Li et al., 2023). On the contrary, our work strictly adheres to the privacy-oriented settings in FRL, letting the agents independently engage in exploration and data collection solely within their local environments, thus ensuring that no data or environment information is exposed to other agents.

Poisoning in Distributed RL, similar to poisoning in MARL, can violate the privacy requirements within the FRL context. Previous studies on poisoning or robustness in Distributed RL can expose data to the central server. For instance, Chen et al. (2023) illustrates that local agents are tasked with exploring the environment and collecting data for the central server without conducting any local training. This arrangement grants the server access to all data, thereby compromising the privacy safeguards in FRL. In contrast, our work allows local agents to conduct local training and prohibits data leakage to the central server.

Poisoning in FRL is an almost unexplored territory, except for very limited existing studies. Fan et al. (2021); Jordan et al. (2024) provide a fault-tolerance guarantee for FRL. However, their frameworks require the central server to access local environments, which is a form of privacy leakage. In contrast, our work prohibits the central server from accessing any local tasks. Anwar & Raychowdhury (2021) restricts local agents from performing multiple local updates, which not only runs against the exploratory nature of RL but also incurs severe communication costs in applications. In contrast, our poisoning method is uniquely tailored to accommodate multiple local steps, aligning with realistic settings of RL scenarios. Zhang et al. (2022); Jordan et al. (2024) present FRL methods resilient against Byzantine failure. However, Byzantine failure is considered relatively naive and weak compared with real-world attacks that can be meticulously designed. In contrast, our work addresses the gap in studying strategic malicious attacks in FRL.

Finally, our work is also related to the literature on RL in strategic settings such as *stochastic games*. In stochastic games, the RL agents (players) are often selfish and optimize their own objectives, and the goal is to see whether their collective behavior results in any equilibrium outcome (Ning & Xie, 2024; Altabaa et al., 2023). However, in our poisoning FRL framework, the goal is to optimize an objective function through a central server whose computations are poisoned by malicious RL agents. Thus, although a group of distributed agents perform RL in both settings, their objectives and learning tasks are very different.

## 1.2 Overview And Contributions

We provide an overview of the remaining content in this work as follows. To address the vulnerability of FRL to adversarial attacks, we start by proposing a theoretical framework (Sections 2 and 3) that characterizes the problem of environment poisoning in FRL as an optimization problem. We assume the presence of a suspicious agent within the federated system, which can be either a malicious client or a compromised victim. This high-risk agent is trained on observations perturbed through reward manipulation.
However, the attacker does not have prior knowledge of the underlying Markov Decision Process (MDP) of the environment and can only learn it through observations. As mentioned previously, this type of attack is practical and can be easily implemented in real-world scenarios, such as buying an IoT device to participate in an FRL system and providing false signals to its sensors. To assess this risk, we design a novel poisoning mechanism (Section 4) that targets FRL with policy-based local training. Our protocol is designed to target not only general policy-based methods but also the actor-critic setting, wherein the attacker trains a pair of both *public* and *private* critics. The private critic calculates optimized poisoned rewards, while the public critic manipulates the coordinator's model during the training process. Notably, our poisoning protocol operates without requiring knowledge of the MDP of the environment and remains consistent with the multiple-local-steps setting of FRL. Furthermore, we offer a theoretical guarantee for our attack mechanism (Section 5), demonstrating that our approach can lead to a substantial drop in the performance of the FRL system. We establish a theoretical result that our attack is effective even in the most challenging scenario for the attacker. Our method is evaluated through extensive experiments on OpenAI Gym environments (Brockman et al., 2016), which represent standard RL tasks across various difficulty levels, such as CartPole, InvertedPendulum, LunarLander, Hopper, Walker2d, and HalfCheetah. These experiments employ different FRL backbone models. The findings conclusively illustrate that, through our attack mechanism, a malicious agent can effectively poison the entire FRL system.

Contributions. Our findings highlight the vulnerability of the FRL framework to local poisoning attacks, emphasizing the need for awareness of system risks during training.

- Innovative **Local RL Poisoning Tailored for Federated Contexts**. As a starting point for poisoning in FRL, we propose a novel poisoning protocol designed for policy-based FRL, emphasizing its practical implementation and privacy-preserving attributes. Our method does not require access to the MDP, addressing the knowledge limitations present in existing RL approaches. Additionally, our approach is tailored specifically for federated contexts, particularly for actor-critic-based FRL. We propose an attacking strategy to train both public and private critics, overcoming the diminished effectiveness of prior RL poisoning techniques when directly applied to federated frameworks.

- **Theoretical Analysis**. We provide theoretical evidence demonstrating the effect of our method in sabotaging the system, specifically by degrading the global objective.

- **Empirical Validation**. We validate the effectiveness of our poisoning protocol through extensive experiments targeting mainstream RL algorithms like VPG and PPO. These experiments encompass OpenAI Gym environments with varying difficulty levels, as detailed in Section 6. This evaluation includes comparisons with various baseline models and assessments of different (targeted and untargeted) poisoning types.

In summary, our work indicates that when FL is applied to RL training, the systematic security risk can make FRL susceptible to poisoning and threats in applications. Consequently, the trained policy of FRL may not be entirely reliable and requires a more robust and secure protocol.
Our findings highlight the potential risks of FRL under adversarial attacks and inspire future research toward developing more robust algorithms.

## 2 Preliminaries And Notations

In this section, we overview some background and notations that are necessary for introducing the concept of poisoning in FRL. We consider single-task FRL, where a number of agents work together to achieve a common task. As such, all agents are trained on the same MDP. We consider the ubiquitous on-policy training setting (Singh et al., 2000).

MDP and RL. An MDP is a discrete stochastic control process for decision-making (Puterman, 1990) that is defined by a tuple $M = (S, A, r, P, \gamma)$, where $S$ is a state space, $A$ is an action space, $r(\cdot): S \times A \to \mathbb{R}$ is a reward function, $P(\cdot): S \times A \times S \to [0, 1]$ is a state transition probability function, and $\gamma \in (0, 1)$ is a discount factor. Given an MDP, the goal of RL is to find an optimal policy, $\pi(\cdot): S \to \Delta_A$, where $\Delta_A$ is the set of all probability distributions over the action space $A$, which maximizes the expected accumulated discounted reward. As is common in the literature (Agarwal et al., 2021), we often represent a policy $\pi$ by its parametrization $\theta$ (e.g., tabular parametrization or neural network weight parametrization). During the process, at each step $t$, the decision maker begins in some state $s_t$, selects an action $a_t$ according to the policy $\pi(s_t)$, receives a reward $r(s_t, a_t)$, and transitions to the next state $s_{t+1}$ with probability $P(s_{t+1}|s_t, a_t)$. This decision-making and interaction process continues until the MDP terminates.

Federated Reinforcement Learning (FRL). An FRL system consists of $n$ agents and a central server. The agents perform local training for $L$ steps and then send their updated policies to the central server. The server performs aggregation to create a central policy, which is then broadcast back to all the agents. This process is repeated for $T$ rounds, and the broadcast policy is used to initialize the next round of local training. More specifically, at each round $p \leq T$, at the *start* of local step $q \leq L$, denote the policy of each agent $i$ by its policy parameter $\theta^{p,q-1}_{(i)}$. Following this policy, the agent interacts with its environment, gathering sequences of states, actions, and rewards: $O^{p,q}_{(i)} = (S^{p,q}_{(i)}, A^{p,q}_{(i)}, R^{p,q}_{(i)}) = ((s_1, s_2, \ldots), (a_1, a_2, \ldots), (r_1, r_2, \ldots))$. Based on $O^{p,q}_{(i)}$, the local policy is typically updated using $\theta^{p,q}_{(i)} = \arg\max_{\theta} J(\theta, \theta^{p,q-1}_{(i)}, O^{p,q}_{(i)})$, where $J$ is some objective function. After completing $L$ steps of local training, all agents update their local policies to $\{\theta^{p,L}_{(i)}\}_{i \in [n]}$, which are then submitted to the server.1 Subsequently, the server forms a new global policy using $\theta^{p}_{(0)} := \mathcal{A}^{agg}(\{\theta^{p,L}_{(i)}\}_{i=1}^{n})$, where $\mathcal{A}^{agg}$ is an aggregation algorithm. The server broadcasts the updated global policy $\theta^{p}_{(0)}$ to all agents, after which each agent $i$ updates its local policy as $\theta^{p+1,0}_{(i)} = \theta^{p}_{(0)}$, and the system proceeds to the next round $p + 1$. This process repeats until the maximum federated round $T$.
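To fix ideas, the following is a minimal sketch of this (unpoisoned) FRL protocol, with simple parameter averaging standing in for the abstract aggregator $\mathcal{A}^{agg}$; the agent interface (`rollout`, `local_update`) is hypothetical:

```python
# A schematic sketch of the FRL loop; not a definitive implementation.
import numpy as np

def frl_training(agents, theta_0, num_rounds_T, local_steps_L):
    """L local steps per agent, then server aggregation, for T rounds."""
    theta_global = theta_0
    for p in range(num_rounds_T):
        submitted = []
        for agent in agents:
            theta = theta_global.copy()                       # broadcast initialization
            for q in range(local_steps_L):
                observation = agent.rollout(theta)            # O^{p,q}_{(i)}: states, actions, rewards
                theta = agent.local_update(theta, observation)  # argmax_theta J(...)
            submitted.append(theta)
        theta_global = np.mean(submitted, axis=0)             # A^agg: FedAvg-style averaging
    return theta_global
```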
## 3 Problem Formulation

We propose a theoretical framework that conceptualizes the challenge of reward poisoning attacks on FRL as a sequential optimization problem, Problem (P). Problem (P) is defined in terms of the notations and concepts introduced in Section 2, such as the local policies, global policy, and aggregation algorithms. The remainder of this section is organized as follows. In Section 3.1, we elaborate on the rationale for implementing local reward poisoning. In Sections 3.2 and 3.3, we provide a detailed explanation of the objective function, feasible domain, and constraints in Problem (P). Finally, in Section 3.4, we present the knowledge possessed by each involved party, ensuring alignment with established security protocols in FL.

## 3.1 Local Reward Poisoning

We consider a threat model targeting FRL systems through reward poisoning. This model assumes the presence of an untrustworthy participant in the system, categorized as either an internal malicious agent (typical in FL poisoning (Tolpegin et al., 2020)) or a victim contaminated by external attackers (a common scenario in RL poisoning (Zhang et al., 2020)). We consider the rewards of the suspicious agent to be poisoned during local training, following established practices in RL poisoning (Huang & Zhu, 2019; Zhang et al., 2020; Rakhsha et al., 2021; Banihashem et al., 2022). For clarity, we designate the compromised agent as agent $n$. At each round $p$ and step $q$, the attacker may manipulate the reward sequence $R^{p,q}_{(n)}$, transforming it into $\widehat{R}^{p,q}_{(n)}$. Throughout this paper, we use the notation $\widehat{\cdot}$ to indicate variables that are poisoned by the attacker.

## 3.2 Objective Function And Feasible Domain

Objective Function. In the optimization Problem (P), the objective $\mathcal{L}_A$ represents the loss of the attacker, which can characterize both untargeted and targeted poisoning settings. In the case of untargeted poisoning, $\mathcal{L}_A$ is the benefit of the server, typically represented by the long-term cumulated reward. In the case of targeted poisoning, $\mathcal{L}_A$ is a policy matrix distance, measuring the difference between the learned policy $\theta^{T}_{(0)}$ and some targeted policy $\theta^{\dagger}$. This captures the attacker's goal of manipulating the global model to align with a specifically targeted policy.

Feasible Domain. The objective $\mathcal{L}_A$ is minimized over the manipulated rewards $\widehat{\mathbf{R}}$, subject to the constraint that $\widehat{\mathbf{R}}$ should closely align with the ground-truth rewards $\mathbf{R}$ (Eq. 5). Although $\mathbf{R}$ may initially seem to behave as a random variable, it is essential to note that in Problem (P) we characterize the process such that once $\mathbf{R}^{p,q}_{(n)}$ is obtained and observed, it is treated as a deterministic variable. Subsequently, the attacker refers to this observed deterministic variable $\mathbf{R}^{p,q}_{(n)}$ to optimize its manipulated rewards $\widehat{\mathbf{R}}^{p,q}_{(n)}$.

$$
\begin{aligned}
\arg\min_{\widehat{\mathbf{R}}} \quad & \mathcal{L}_A\Big(\theta^{T}_{(0)} \,\Big|\, \big\{\widehat{\mathbf{R}}^{p,q}_{(n)}\big\}_{1\le q\le L,\, 1\le p\le T}\Big) && \text{(P)} \\
\text{s.t.} \quad & \theta^{p,0}_{(i)} = \theta^{p-1}_{(0)}, \quad \forall i \le n, && (1) \\
& \theta^{p,q}_{(i)} = \arg\max_{\theta} J\big(\theta, \theta^{p,q-1}_{(i)}, O^{p,q-1}_{(i)}\big), \quad \forall i < n, && (2) \\
& \theta^{p,q}_{(n)} = \arg\max_{\theta} J\big(\theta, \theta^{p,q-1}_{(n)}, \widehat{O}^{p,q-1}_{(n)}\big), && (3) \\
& \theta^{p}_{(0)} = \mathcal{A}^{agg}\big(\{\theta^{p,L}_{(i)}\}_{i\in[n]}\big), && (4) \\
& D\big(\widehat{\mathbf{R}}^{p,q}_{(n)}, \mathbf{R}^{p,q}_{(n)}\big) \le \epsilon. && (5)
\end{aligned}
$$

## 3.3 Constraints

The constraints in optimization (P) characterize the process of poisoning in FRL, including *local initialization* (Eq. 1), *local training* (Eq. 2, 3), *attack with limited budget* (Eq. 5), and *global aggregation* (Eq. 4). A concise summary is provided in Table 1, and further details are explained below.

Local training. Constraints 2 and 3 outline the local update of each agent at local step $q$ in round $p$. In Constraint 2, each clean agent uses its current policy, denoted by $\theta^{p,q-1}_{(i)}$, to roll out an observation sequence $O^{p,q-1}_{(i)}$. Then, based on this observation, the agent updates its policy from $\theta^{p,q-1}_{(i)}$ to $\theta^{p,q}_{(i)}$ by maximizing an objective function $J(\cdot)$, defined by the agent's specific RL algorithm.
Constraint 3 characterizes the update of the suspicious agent, akin to Constraint 2, with the distinction that the local update for a suspicious agent is based on the manipulated rewards $\widehat{\mathbf{R}}$.

Attack budget. Constraint 5 captures the budget constraint for the attacker, where $D(\cdot, \cdot)$ represents the distance between the perturbed (poisoned) observations and the clean observations, which is restricted by a cost budget $\epsilon$. The consideration of the attack budget is crucial. On the one hand, a limited budget constitutes a standard assumption in RL reward poisoning (Huang & Zhu, 2019; Rakhsha et al., 2021). On the other hand, our framework strives to encompass scenarios of both internal malicious clients and victim clients susceptible to external poisoning. From a practical standpoint, an internal malicious agent may seek to avoid detection, necessitating caution in its manipulation, while an external attacker is likely to be concerned about the costs associated with its attacks. Therefore, the inclusion of an attack budget is imperative for both scenarios.

Global aggregation and local initialization. Constraint 4 models the aggregation step of the central server by an aggregation algorithm $\mathcal{A}^{agg}$. This algorithm processes the models sent from the agents, denoted by $\{\theta^{p,L}_{(i)}\}$, as inputs and produces the new global model $\theta^{p}_{(0)}$ as an output. Constraint 1 stipulates that at the start of local training, all agents initialize their individual policies as the global model.

Table 1: Constraints of Problem (P).

| Party | Constraints |
|---|---|
| Agent $(i)$, $i < n$ (clean agents) | Eq. 1: local initialization; Eq. 2: local training |
| Agent $(n)$ (suspicious agent) | Eq. 1: local initialization; Eq. 3: local training; Eq. 5: attack budget |
| Coordinator | Eq. 4: aggregation |

## 3.4 Information Structure

We present the knowledge of each party in the FRL system to guarantee strict adherence to privacy protection in FL. Table 2 shows the knowledge of the three parties involved, namely, the coordinator, the attacker, and the agents, and clarifies the knowledge of each party so as to guarantee the data privacy expected from FL. The coordinator only has knowledge of the submitted models and the global policy, while the agents only have knowledge of their local data, policy, and the broadcast global model. The attacker only has knowledge of its own observations, manipulations, model, and the broadcast global policy. This ensures that the agents' private data is kept confidential.

Table 2: Knowledge of the parties in a poisoned FRL.

| Knowledge | Coordinator | Agent $(i)$, $i < n$ | Agent $(n)$ |
|---|---|---|---|
| $\mathcal{L}_A(\cdot)$ | | | ✓ |
| $J(\cdot)$ | | ✓ | ✓ |
| $\mathcal{A}^{agg}(\cdot)$ | ✓ | | |
| $\theta^{p}_{(0)}$ | ✓ | ✓ | ✓ |
| $\theta^{p,q}_{(i)}$, $i \ne n$ | | ✓ | |
| $\theta^{p,q}_{(n)}$ | | | ✓ |
| $\epsilon$ | | | ✓ |
| $O^{p,q}_{(i)}$, $i \ne n$ | | ✓ | |
| $O^{p,q}_{(n)}$ | | | ✓ |
| $\widehat{O}^{p,q}_{(n)}$ | | | ✓ |
| $R^{p,q}_{(i)}$, $i \ne n$ | | ✓ | |
| $R^{p,q}_{(n)}$ | | | ✓ |
| $\widehat{R}^{p,q}_{(n)}$ | | | ✓ |

We refer to Appendix B for a specification of this framework to the case of Proximal Policy Optimization (Schulman et al., 2017), which provides an illustrative example demonstrating how our framework characterizes poisoning in FRL for local actor-critic algorithms.
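Since Constraint 5 is the only coupling between the clean and poisoned rewards, and our experiments take $D$ to be the $\ell_2$ distance (Appendix D.1.1), feasibility of a candidate manipulation can be maintained by a standard projection onto the $\ell_2$ ball around the clean rewards. A minimal sketch, with illustrative names:

```python
import numpy as np

def project_to_budget(r_poisoned, r_clean, eps):
    """Project a candidate poisoned reward sequence onto the attacker's
    feasible set {r_hat : ||r_hat - r_clean||_2 <= eps} of Eq. 5."""
    delta = r_poisoned - r_clean
    norm = np.linalg.norm(delta)
    if norm <= eps:
        return r_poisoned                      # already within budget
    return r_clean + (eps / norm) * delta      # rescale onto the budget boundary
```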
## 4 Method

In this section, we propose practical local reward poisoning methods for FRL. We consider two scenarios, in which the local RL training in the FRL system uses actor-critic methods (Algorithm 2) or policy gradient methods (Algorithm 3). We note that our reward poisoning methods can be employed for both targeted and untargeted reward poisoning, as detailed in Section 3.2, by employing the appropriate corresponding objective. As before, we use $\widehat{\cdot}$ to denote the suspicious agent's parameters communicated with the server.

## 4.1 Reward Poisoning For Actor-Critic-Based FRL

Actor-Critic mechanism. In actor-critic algorithms (Peters & Schaal, 2008), we have a policy parameterized by $\theta$ alongside a corresponding value function $V_{\theta}(s)$, determined by the policy $\theta$ and state $s$. Besides the policy model, each agent incorporates a critic model $\phi_{\omega}(\cdot)$ parameterized by $\omega$, providing an estimation of the policy-related value function $V_{\theta}(s)$, denoted by $\bar{V}(s) = \phi_{\omega}(s)$. To simplify notations, by some abuse of notation, we refer to the critic model $\phi_{\omega}$ by its parametrization $\omega$ in the subsequent analysis. The estimated value function $\bar{V}(s)$ generated by the critic $\omega$ further gives an estimation of the Q-function as $\bar{Q}(s, a) = r(s, a) + \gamma \cdot \bar{V}(s')$, where $r(s, a)$ is the observed reward and $s'$ is the next observed state. The critic model updates itself by minimizing the temporal-difference error between its model estimation and the ground-truth observation (Tesauro et al., 1995). The policy parameter $\theta$ is updated by $\theta^{t+1} = \arg\max_{\theta} J(\theta, \theta^{t}, O^{t})$, where $J(\cdot)$ is some objective function specified by the actor-critic algorithm, $t$ is the exploration step, and $O^{t}$ is the observed trajectory. In the following, we describe the poisoning mechanism in detail and summarize it in Algorithm 2, which uses Algorithm 1 as a subroutine.

Algorithm 1 Poisoned Local Train for Actor-Critic-based FRL
1: **Input**: current actor $\widehat{\theta}^{p,q-1}_{(n)}$, current private critic $\omega^{p,q-1}_{(n)}$, current public critic $\widehat{\omega}^{p,q-1}_{(n)}$.
2: **Output**: updated actor $\widehat{\theta}^{p,q}_{(n)}$, updated private critic $\omega^{p,q}_{(n)}$, updated public critic $\widehat{\omega}^{p,q}_{(n)}$.
3: Agent $n$ uses the current actor $\widehat{\theta}^{p,q-1}_{(n)}$ to obtain the ground-truth observation $O^{p,q}_{(n)}$,
4: computes the true objective $J^{p,q}_{(n)}$ using the ground-truth observation $O^{p,q}_{(n)}$ and the private critic $\omega^{p,q-1}_{(n)}$,
5: poisons the rewards to $\widehat{\mathbf{R}}^{p,q}_{(n)}$ using the true objective $J^{p,q}_{(n)}$ and budget $\epsilon$ by Eq. 6,
6: obtains the poisoned objective $\widehat{J}^{p,q}_{(n)}$ with the poisoned observation $\widehat{O}^{p,q}_{(n)}$ and the public critic $\widehat{\omega}^{p,q-1}_{(n)}$,
7: updates the poisoned actor to $\widehat{\theta}^{p,q}_{(n)}$ with the poisoned objective $\widehat{J}^{p,q}_{(n)}$,
8: updates the public critic to $\widehat{\omega}^{p,q}_{(n)}$ with the poisoned observation $\widehat{O}^{p,q}_{(n)}$,
9: updates the private critic to $\omega^{p,q}_{(n)}$ with the true observation $O^{p,q}_{(n)}$.

Algorithm 2 Reward Poisoning for Actor-Critic-based FRL
1: **Input**: max federated rounds $T$, max local episodes $L$, number of agents $n$, poisoning budget $\epsilon$, aggregation algorithm $\mathcal{A}^{agg}$, actor-critic objective function $J$.
2: **Output**: server's actor model $\theta^{T}_{(0)}$ and critic model $\omega^{T}_{(0)}$.
3: Initialize the server's actor model $\theta^{0}_{(0)}$ and critic model $\omega^{0}_{(0)}$.
4: for $p = 1$ to $T$ do
5: for $i = 1$ to $n$ do
6: if $i \ne n$ **then**
7: Agent $i$ initializes local actor and critic: $\theta^{p,0}_{(i)} \leftarrow \theta^{p-1}_{(0)}$, $\omega^{p,0}_{(i)} \leftarrow \omega^{p-1}_{(0)}$.
8: **else**
9: Agent $n$ initializes local actor $\widehat{\theta}^{p,0}_{(n)} \leftarrow \theta^{p-1}_{(0)}$,
10: initializes local private critic as $\omega^{p,0}_{(n)} \leftarrow \omega^{p-1,L}_{(n)}$ and local public critic as $\widehat{\omega}^{p,0}_{(n)} \leftarrow \omega^{p-1}_{(0)}$.
11: **end if**
12: for $q = 1$ to $L$ do
13: if $i \ne n$ **then**
14: Agent $i$ uses local actor $\theta^{p,q-1}_{(i)}$ to obtain observation $O^{p,q}_{(i)}$,
15: computes objective $J^{p,q}_{(i)}$ with observation $O^{p,q}_{(i)}$ and local critic $\omega^{p,q-1}_{(i)}$,
16: updates local actor to $\theta^{p,q}_{(i)}$ with $J^{p,q}_{(i)}$, and updates local critic to $\omega^{p,q}_{(i)}$ with $O^{p,q}_{(i)}$.
17: **else**
18: Agent $n$ runs Algorithm 1 (Poisoned Local Train for Actor-Critic-based FRL).
19: **end if**
20: **end for**
21: **end for**
22: Central server aggregates the global actor $\theta^{p}_{(0)} = \mathcal{A}^{agg}\big(\theta^{p,L}_{(1)}, \ldots, \theta^{p,L}_{(n-1)}, \widehat{\theta}^{p,L}_{(n)}\big)$,
23: Central server aggregates the global critic $\omega^{p}_{(0)} = \mathcal{A}^{agg}\big(\omega^{p,L}_{(1)}, \ldots, \omega^{p,L}_{(n-1)}, \widehat{\omega}^{p,L}_{(n)}\big)$.
24: **end for**

Public and private critics. The term "public critic" implies that the agent uses this critic for communication with the server (Line 23), while "private critic" signifies that the agent retains this critic solely for local computation purposes (Line 4). To manipulate the global model by training (Line 8) and submitting (Line 23) a *poisoned public critic*, the attacker also trains an *unpoisoned private critic* (Line 9). This approach allows the attacker to possess a true estimation of the value function (Line 4) so as to make optimal poisoning decisions (Line 5). We denote the pair of public and private critics by $\widehat{\omega}^{p,q}_{(n)}$ and $\omega^{p,q}_{(n)}$, respectively. The private critic $\omega^{p,q}_{(n)}$ is trained using the ground-truth rewards $\mathbf{R}^{p,q}_{(n)}$ (Line 9), and the attacker then harnesses the private critic $\omega^{p,q}_{(n)}$ to obtain the poisoned rewards $\widehat{\mathbf{R}}^{p,q}_{(n)}$ (Line 5). $\widehat{\mathbf{R}}^{p,q}_{(n)}$ is then utilized to train the public critic $\widehat{\omega}^{p,q}_{(n)}$ (Line 8). Due to the different goals and training methods of the pair of critics, their initialization also differs. At the beginning of a new round of local training, the agent initializes its public critic with the broadcast global model (i.e., $\widehat{\omega}^{p,0}_{(n)} = \omega^{p-1}_{(0)}$ in Line 10), while it inherits its private critic from the end of the last round of training (i.e., $\omega^{p,0}_{(n)} = \omega^{p-1,L}_{(n)}$ in Line 10).

Reward poisoning and actor update. The attacker aims to minimize the objective function $J$ of the policy model (the actor) by poisoning its local rewards. When deciding how to poison the rewards, the attacker should use the unpoisoned objective $J$ given by the ground-truth rewards $\mathbf{R}$ and the private critic $\omega$ (Line 4) to obtain a true estimation and make the right poisoning decision (Line 5). Concretely, the attacker minimizes the original objective $J$ with respect to $\mathbf{R}$: for each round $p \le T$ and local step $q \le L$, the corrupted agent $n$ interacts with the environment and obtains the ground-truth observation $O^{p,q}_{(n)}$. The attacker computes the unpoisoned objective $J^{p,q}_{(n)}$ using the true rewards $\mathbf{R}^{p,q}_{(n)}$ and the private critic $\omega^{p,q}_{(n)}$ (Line 4). Then, the attacker poisons the reward according to

$$\widehat{\mathbf{R}}^{p,q}_{(n)} \leftarrow \mathbf{R}^{p,q}_{(n)} - \epsilon \cdot \frac{\nabla_{\mathbf{R}} J^{p,q}_{(n)}}{\big\|\nabla_{\mathbf{R}} J^{p,q}_{(n)}\big\|}, \tag{6}$$

which guarantees that the manipulation power is within the attack budget $\epsilon$. After the attacker has generated the poisoned rewards $\widehat{\mathbf{R}}^{p,q}_{(n)}$ (Line 5), the actor is updated according to the poisoned objective $\widehat{J}^{p,q}_{(n)}$ (Line 7).

## 4.2 Reward Poisoning For Policy-Gradient-Based FRL

In Policy Gradient (PG) algorithms (Silver et al., 2014), agents do not require a critic model. We therefore adapt the poisoning method of Section 4.1, which assumes an actor-critic local RL algorithm, to the policy-gradient setting.
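For concreteness, below is a minimal sketch of the Eq. 6 poisoning step, using automatic differentiation to obtain $\nabla_{\mathbf{R}} J$. The `objective_fn` callable stands in for the objective computed in Line 4 of Algorithm 1 (or Line 15 of Algorithm 3); it and all names are assumptions of the sketch.

```python
import torch

def poison_rewards(rewards, objective_fn, eps):
    """One Eq. 6 step: move the rewards a distance eps against the
    normalized gradient of the unpoisoned objective J."""
    r = rewards.clone().detach().requires_grad_(True)
    J = objective_fn(r)                      # scalar objective J(R)
    (grad,) = torch.autograd.grad(J, r)      # nabla_R J
    with torch.no_grad():
        return rewards - eps * grad / (grad.norm() + 1e-12)

# Illustrative usage with a discounted-return objective:
# rewards = torch.tensor([1.0, 1.0, 0.5])
# discounted = lambda r: ((0.99 ** torch.arange(r.numel())) * r).sum()
# r_hat = poison_rewards(rewards, discounted, eps=1.0)
```

By construction $\|\widehat{\mathbf{R}} - \mathbf{R}\|_2 = \epsilon$, so the perturbation always sits on the boundary of the attack budget.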
The overall procedure is outlined in Algorithm 3. The attacker's goal is still to minimize the objective function $J$ of the policy model. Since there is no critic, the agent directly calculates $J$ from the observed $O$ (Lines 14 and 15) and uses this information to decide how to poison the rewards to $\widehat{\mathbf{R}}$ by Eq. 6 (Line 16). The policy is then updated (Line 18) with the poisoned objective $\widehat{J}$ calculated from the poisoned rewards (Line 17).

Algorithm 3 Reward Poisoning for Policy Gradient-based FRL
1: **Input**: max federated rounds $T$, max local episodes $L$, number of agents $n$, poisoning budget $\epsilon$, aggregation algorithm $\mathcal{A}^{agg}$, policy gradient objective function $J$.
2: **Output**: server's policy $\theta^{T}_{(0)}$.
3: Initialize the server's policy $\theta^{0}_{(0)}$.
4: for $p = 1$ to $T$ do
5: for $i = 1$ to $n$ do
6: for $q = 1$ to $L$ do
7: if $i \ne n$ **then**
8: Initialize local policy $\theta^{p,0}_{(i)} \leftarrow \theta^{p-1}_{(0)}$
9: Interact with the environment and obtain $O^{p,q}_{(i)}$
10: Compute $J^{p,q}_{(i)}$ with $O^{p,q}_{(i)}$
11: Update $\theta^{p,q}_{(i)}$ with $J^{p,q}_{(i)}$
12: **else**
13: Initialize local policy $\theta^{p,0}_{(n)} \leftarrow \theta^{p-1}_{(0)}$
14: Interact with the environment and obtain the clean observation $O^{p,q}_{(n)}$
15: Compute the clean objective $J^{p,q}_{(n)}$ with the clean observation $O^{p,q}_{(n)}$
16: Poison the reward to $\widehat{\mathbf{R}}^{p,q}_{(n)}$ using the clean objective $J^{p,q}_{(n)}$ and the clean observation $O^{p,q}_{(n)}$ by Eq. 6
17: Obtain the poisoned objective $\widehat{J}^{p,q}_{(n)}$ with the poisoned observation $\widehat{O}^{p,q}_{(n)}$
18: Update the local policy $\widehat{\theta}^{p,q}_{(n)}$ with the poisoned objective $\widehat{J}^{p,q}_{(n)}$
19: **end if**
20: **end for**
21: **end for**
22: $\theta^{p}_{(0)} = \mathcal{A}^{agg}\big(\theta^{p,L}_{(1)}, \ldots, \theta^{p,L}_{(n-1)}, \widehat{\theta}^{p,L}_{(n)}\big)$
23: **end for**

## 5 A Theoretical Performance Guarantee

In this section, we prove that our poisoning method can strictly decrease the global objective under certain assumptions. We begin by considering the following assumptions, which are standard for theoretical analysis in the existing FRL literature (Jordan et al., 2024; Zhang et al., 2022; Fan et al., 2021).

Assumption 1 (Single-step local training). *We assume that for each round, all local agents only apply single-step local training, i.e., $L = 1$.*

Clean local reward sequence function. Let us denote the dimensions of $\theta$ and $\mathbf{r}$ by $d_{\theta}$ and $d_{\mathbf{r}}$, respectively. Given a policy $\theta$, define $r_{(i)}(\cdot): \mathbb{R}^{d_{\theta}} \to \mathbb{R}^{d_{\mathbf{r}}}$ as the function of the reward sequence generated by the agent interacting with the local environment $i$ under policy $\theta$. Under Assumption 1, denote the reward sequence observed by agent $i$ during round $p$ by $\mathbf{r}^{p}_{(i)}$. Then, we have $\mathbf{r}^{p}_{(i)} = r_{(i)}(\theta^{p-1}_{(0)})$.

Clean local objective function. With slight abuse of notation, let $J(\theta; \mathbf{r})$ denote the local RL objective function given the policy $\theta$ and the reward sequence $\mathbf{r}$, where $J: \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_{\mathbf{r}}} \to \mathbb{R}$. According to the mechanism of FRL, we further define the objective of local agent $i$ given an initialized policy $\theta$ as $J_{(i)}(\theta) := J(\theta; r_{(i)}(\theta))$. If we denote the objective of local agent $i$ at the *start* of round $p$ by $J^{p}_{(i)}$, we have

$$J^{p}_{(i)} = J\big(\theta^{p-1}_{(0)}; r_{(i)}(\theta^{p-1}_{(0)})\big) = J\big(\theta^{p-1}_{(0)}; \mathbf{r}^{p}_{(i)}\big).$$

Poisoned local reward sequence and local objective. Under Assumption 1, agent $n$ is poisoned. According to our Algorithms 2 and 3, we have $\widehat{\mathbf{r}}^{p}_{(n)} = \mathbf{r}^{p}_{(n)} - \epsilon \cdot \vec{e}\big(\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(n)}\big)$, where $\vec{e}(v) := \frac{v}{\|v\|}$ for any vector $v$. Therefore, the poisoned objective is given by $\widehat{J}^{p}_{(n)} = J\big(\theta^{p-1}_{(0)}; \widehat{\mathbf{r}}^{p}_{(n)}\big)$.

Local policy update function. Under Assumption 1, each agent performs only a one-step local update. Denote the update rate as $\lambda_{\theta} \in \mathbb{R}$.
Since only agent $n$ is poisoned, we have

$$\widehat{\theta}^{p}_{(n)} = \theta^{p-1}_{(0)} + \lambda_{\theta} \cdot \nabla_{\theta^{p-1}_{(0)}} \widehat{J}^{p}_{(n)}.$$

Assumption 2 (Central Aggregation). *Suppose the server updates the global policy by the conventional FedAvg setting, i.e., $\theta^{p}_{(0)} = \frac{1}{n}\sum_{i\in[n]} \theta^{p}_{(i)}$ for clean training. In particular, when agent $n$ is poisoned, we have $\widehat{\theta}^{p}_{(0)} = \frac{1}{n}\big(\sum_{i\in[n-1]} \theta^{p}_{(i)} + \widehat{\theta}^{p}_{(n)}\big)$.*

In fact, the aggregation method described in Assumption 2 aligns with common practices found in established FRL literature (Xie & Song, 2023b; Zhu & Gong, 2023).

Global objective function. Under the conventional setting of FL, given a model $\theta$, the global objective is defined as $J_{(0)}(\theta) := \frac{1}{n}\sum_{i=1}^{n} J_{(i)}(\theta)$ (McMahan et al., 2017). In FRL, at the end of round $p$, the clean federated objective is given by $J^{p}_{(0)} = J_{(0)}(\theta^{p}_{(0)})$. When agent $n$ is poisoned, we have $\widehat{J}^{p}_{(0)} = J_{(0)}(\widehat{\theta}^{p}_{(0)})$.

Assumption 3 (Objective smoothness). *We assume that $J^{p}_{(0)}$ is differentiable with respect to $\mathbf{r}$ and $\theta$ almost everywhere, and $J^{p}_{(0)}$ is $L_{\mathbf{r}}$-smooth with respect to $\mathbf{r}^{p}_{(n)}$.*

Assumption 3 is well-founded, aligning with prevalent local RL objectives. We provide two illustrative examples: a) Discounted Cumulative Reward (Kaelbling et al., 1996): in this case, we have $J^{p}_{(i)} := \boldsymbol{\gamma} \cdot \mathbf{r}^{p}_{(i)}$, where $\boldsymbol{\gamma} := (\gamma^{0}, \gamma^{1}, \gamma^{2}, \ldots, \gamma^{d_{\mathbf{r}}-1})$ is the discount vector. Here, $\gamma \in (0, 1)$ is a discount factor, and $d_{\mathbf{r}}$ represents the cardinality of $\mathbf{r}^{p}_{(i)}$. b) Average Reward (Kaelbling et al., 1996): in this case, the local objective is expressed as $J^{p}_{(i)} := (\vec{1} \cdot \mathbf{r}^{p}_{(i)})/d_{\mathbf{r}}$, where $d_{\mathbf{r}}$ denotes the cardinality of $\mathbf{r}^{p}_{(i)}$, and $\vec{1} := (1, 1, \ldots, 1) \in \mathbb{R}^{d_{\mathbf{r}}}$ stands for the all-ones vector.

We note that the primary purpose of Assumption 3 is to facilitate our theoretical analysis. However, in our numerical experiments (Section 6.1), we demonstrate the effectiveness of our proposed attack even if this assumption fails to hold (e.g., when the states/actions/rewards are discrete).

Assumption 4 (Worst Case). *We assume the attacker is only able to attack in the latest round.*

Remark 5. *Assumption 4 is designed to encapsulate the most challenging scenario for the attacker, where manipulation is restricted solely to the latest round. The proven guarantee under this assumption serves as an indicator of the effectiveness of our attack method in more general and lenient scenarios, wherein the attacker has the ability to poison the system over multiple rounds. Furthermore, our experiments showcase the effectiveness of the attack when deployed over multiple rounds (refer to Section 6).*

We are now ready to state our main theoretical result, whose proof is deferred to Appendix A.

Theorem 6. *Let Assumptions 1, 2, 3, and 4 hold. Suppose that all agents are updated cleanly in the first $p - 1$ rounds, and at round $p$, agent $n$ is poisoned. Let $\epsilon_{+} := \frac{2\lambda_{\theta} B}{n L_{\mathbf{r}}}$, where $B$ is the scalar defined as*

$$B = \big[\nabla_{\theta'} J_{(0)}(\theta')\big]^{\top} \cdot \big[\nabla_{\mathbf{r},\theta} J(\theta, \mathbf{r})\big] \cdot \vec{e}\big(\nabla_{\mathbf{r}} J(\theta, \mathbf{r})\big) \,\Big|_{\theta' = \theta^{p}_{(0)},\; \theta = \theta^{p-1}_{(0)},\; \mathbf{r} = \mathbf{r}^{p}_{(n)}}.$$

*Then, for $B > 0$ and $\epsilon < \epsilon_{+}$, we have $\widehat{J}^{p}_{(0)} \le J^{p}_{(0)} - \alpha$, where $\alpha \in (0, \frac{\epsilon_{+}^{2}}{8}]$.*

Remark 7.
*Theorem 6 asserts that the gap $\alpha$ is strictly positive, and its upper bound increases proportionally with $\epsilon_{+}^{2}$, the square of the upper limit of the attack budget. With a small poison budget $\epsilon$, we can guarantee that the poisoned global objective $\widehat{J}^{p}_{(0)}$ is strictly smaller than the clean global objective $J^{p}_{(0)}$, and a higher attack budget $\epsilon$ can indicate a greater decrease in the global objective. Similarly, a larger local learning rate $\lambda_{\theta}$ or fewer agents can increase $\epsilon_{+}$, indicating a stronger objective decrease $\alpha$.* In Appendix A, we not only explore a practical scenario ensuring $B > 0$ but also highlight that a higher learning rate $\lambda_{\theta}$ enhances the likelihood of $B > 0$.

## 6 Numerical Experiments

In this section, we conduct a series of experiments to evaluate the effectiveness of our poisoning method on the FRL system. Our results show that the proposed poisoning method can effectively reduce the mean episode reward of the server in the FRL system. Additionally, our poisoning protocol does not require knowledge of the MDP of the environment and is consistent with the multiple-local-steps setting of FRL.

## 6.1 Environments

Nature of FRL: Following the existing FRL literature (Jordan et al., 2024; Zhang et al., 2022), we implement experiments on various *OpenAI Gym* environments (Brockman et al., 2016) with increasing complexity, including *CartPole*, *Inverted Pendulum*, *Hopper*, *Lunar Lander*, *Walker2d*, and *Half Cheetah*, all of which are standard RL task environments. The selection of these datasets reflects the inherent nature of FRL, where the focus is on solving RL tasks, and FL serves as a versatile toolbox. It is worth noting that environments such as *Inverted Pendulum*, *Hopper*, and *Half Cheetah* feature *continuous reward spaces*, aligning with our theoretical analysis (Assumption 3). Additionally, we incorporate environments such as *CartPole* and *Lunar Lander*, providing *discrete reward spaces*, to demonstrate the applicability of our attack in diverse reward-space scenarios.
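To illustrate how these tasks are consumed, below is a minimal sketch that instantiates the Gym environments and computes the mean episode reward of a policy (the central evaluation quantity of Section 6.4). The version tags and the classic pre-0.26 Gym step API are assumptions of the sketch, not a specification of our experimental code.

```python
import gym
import numpy as np

# Illustrative version tags; exact IDs depend on the installed Gym/MuJoCo.
ENV_IDS = ["CartPole-v1", "InvertedPendulum-v2", "Hopper-v2",
           "LunarLander-v2", "Walker2d-v2", "HalfCheetah-v2"]

def mean_episode_reward(env_id, policy, episodes=100):
    """Average undiscounted episode return of `policy` (state -> action)."""
    env = gym.make(env_id)
    returns = []
    for _ in range(episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            state, reward, done, _ = env.step(policy(state))
            total += reward
        returns.append(total)
    return float(np.mean(returns))
```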
## 6.2 Backbone Model

We explain our backbone framework by describing both its local RL model and its global FL mechanism.

Local RL models. Our local training backbones cover both the Policy Gradient method (corresponding to Algorithm 3) and the Actor-Critic method (corresponding to Algorithm 2), maintaining consistency with our attack methods in Section 4. For the Policy Gradient backbone, we opt for the conventional Vanilla Policy Gradient (VPG) (Sutton et al., 1999). For the Actor-Critic backbone, we choose a widely applicable model, Proximal Policy Optimization (PPO) (Schulman et al., 2017).

Federated mechanism. We adhere to a prevalent practice observed in the existing FRL literature (Xie & Song, 2023b; Zhu & Gong, 2023), where the central server aggregates the models submitted by local agents by taking the average at the end of each communication round and then broadcasts the new global model, which the local agents use as initialization for the next round of local training. This federated mechanism aligns with the settings in our theoretical analysis (Assumption 2).

## 6.3 Baselines

We evaluate our methods against diverse baselines, including *standard FRL*, *poisoned FRL*, and *robust FRL*.

Standard FRL. In this setting, agents undergo regular training without any poisoning, adhering strictly to the conventional rules of local updates and federated communications. The baseline comprises both VPG-based standard FRL (Algorithm 5) and PPO-based standard FRL (Algorithm 4). We refer to Appendix C for a detailed description of these algorithms.

Poisoned FRL. To the best of our knowledge, no existing FRL poisoning method matches the criteria of a standard and reasonable FRL setting, encompassing multiple local steps and the data-security requirements (refer to Section 1.1). Therefore, we establish a baseline for poisoned FRL using a reward-based randomized attack; a short sketch is given below. In this attack, Eq. 6 is replaced by

$$\widehat{\mathbf{R}}^{p,q}_{(j)} \leftarrow \mathbf{R}^{p,q}_{(j)} - \epsilon \cdot \mathbf{x}, \tag{7}$$

where $\mathbf{x}$ is a vector with the same cardinality as $\mathbf{R}^{p,q}_{(j)}$, i.e., $|\mathbf{x}| = |\mathbf{R}^{p,q}_{(j)}|$. Each index-wise value in $\mathbf{x}$ follows a uniform distribution, such that $\mathbf{x} = (x_1, x_2, \ldots, x_{|\mathbf{R}|})$, where $x_i \sim U(0, 1)$ for all $i \le |\mathbf{R}|$. Here, $U(a, b)$ represents the uniform distribution on the range from $a$ to $b$. The detailed algorithm is provided in Appendix C.2.

Robust FRL. To our knowledge, no existing robust FRL method aligns with the criteria of a standard and reasonable FRL setting (refer to Section 1.1). Therefore, we adopt a standard robust mechanism inspired by FL, where the aggregation is executed by re-weighting with the clients' reliability (Fu et al., 2019; Tahmasebian et al., 2022; Wan & Chen, 2021). The detailed *defense* algorithm is outlined in Algorithm 6.
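A minimal sketch of the randomized baseline of Eq. 7; the function name and array types are illustrative:

```python
import numpy as np

def random_poison(rewards, eps, rng=None):
    """Eq. 7 baseline: subtract eps times i.i.d. Uniform(0, 1) noise."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(0.0, 1.0, size=rewards.shape)
    return rewards - eps * x
```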
## 6.4 Experimental Settings

Evaluation metrics. The evaluation metrics vary with the type of poisoning. For *untargeted poisoning*, we evaluate performance by measuring the mean episode reward of the central model, calculated over 100 test episodes at the end of each federated round. For *targeted poisoning*, we measure the similarity between the learned policy and the targeted policy. For *discrete action spaces*, we calculate the proportion of target actions among all actions, while for *continuous action spaces*, we report one minus the scaled distance. Under both measurements, a higher value indicates a learned policy closer to the target policy. By default, the assumed poisoning type is untargeted unless stated otherwise.

Attack budget. We have two configurations for the attack budget: a) *Small budget*. We set $\epsilon = 1$ to simulate a small-budget scenario where the attacker is cautious and cost-conscious. We choose $\epsilon = 1$ as it represents the maximum possible value gained by one action move in the *CartPole* and *Inverted Pendulum* environments. It is important to note that $\epsilon = 1$ is generally much smaller than the maximum possible reward per action in the other environments (e.g., 100 for *Lunar Lander*). b) *Large budget*. For a large-budget scenario where the attacker is bold and almost indifferent to costs, we use $\epsilon = 100$. In instances without specific notation, we refer to a small-budget attack.

Hyper-parameters. For both VPG and PPO settings, we let the malicious agent attack the reward with a budget of $\epsilon = 1$. There are 200 total communication rounds, and all agents run 5 local steps in each communication round. The number of poisoned agents is set to one unless specifically mentioned. For all experiments, we average the results over 5 random seeds.²

²We refer to Appendix D.1.1 for additional settings.

## 6.5 Empirical Results

We present experimental results that demonstrate the effectiveness of our proposed poisoning method. We compare its performance with the various baselines outlined in Section 6.3, including standard FRL (Section 6.5.1), poisoned FRL (Section 6.5.2), and robust FRL (Section 6.5.2). The experiments cover diverse backbone models (Section 6.2) and environments (Section 6.1), consistently demonstrating superior performance. In addition to evaluating the effectiveness of untargeted poisoning, we further validate our method through targeted poisoning (Section 6.5.3). Furthermore, we demonstrate the adaptability of our approach in scenarios involving multiple suspicious agents (Section 6.5.3). The results affirm that our method is both practical and detrimental in real-world applications. To enhance clarity, we have smoothed the plots by preserving the values from the first 5 rounds and subsequently applying a moving average with a window size of 10 rounds.

## 6.5.1 Comparison With Standard FRL

We conducted experiments to poison standard VPG-based FRL (Fig. 1) and PPO-based FRL (Fig. 2). The backbones for these experiments were constructed as outlined in Section 6.2, utilizing Algorithms 4 and 5.

Poison standard VPG-based FRL. We present our results in Fig. 1. Across several environments, a single attacker successfully poisons a system of up to three or four agents using our method. We only present the results of systems with the maximum size that our method can poison. For instance, in the first plot of Fig. 1, the system size is four, indicating that our method can poison a VPG-based FRL with 1, 2, 3, or 4 agents, but fails when the system size is 5. The effectiveness of this poisoning depends on the environment. For instance, we observe a clear difference between clean and poisoned runs for *CartPole*, but less so for *Inverted Pendulum*. Nevertheless, the rewards for *Inverted Pendulum* are less consistent, and as the rounds progress, the clean run continues to accumulate greater mean rewards. This demonstrates the capability of our VPG attack method in poisoning federated systems.

![13_image_0.png](13_image_0.png)

Figure 1: **Poison VPG-based FRL**. The red dashed line, labeled *poison*, illustrates the performance of our adversarial attack. The green dashed line, labeled *clean*, represents the standard FRL system's performance. The blue dashed line, labeled *rand*, represents a random poisoning baseline. We specifically showcase results for the largest system size that our method can successfully attack: 4 for *CartPole* and *Hopper*, and 3 for *Inverted Pendulum* and *Walker2d*.

![13_image_1.png](13_image_1.png)

Figure 2: **Poison PPO-based FRL**. The annotation is the same as in Fig. 1. We specifically showcase results for the largest system size that our method can successfully attack. The maximum system size is 3 for *Inverted Pendulum* and 4 for the others.

Poison standard PPO-based FRL. Similar to the VPG-based FRL case, we report those systems with the maximum possible size that our method can poison. Attacking a PPO-based FRL system (Fig. 2) exhibits a rather different dynamic because the separate value parameters allow all agents to learn more effectively; thus, we expect both clean and malicious agents to be more successful in their opposite goals. Ultimately, we observe that malicious agents using our method are much more successful in their attacks across environments compared to the VPG results. We attribute this to the more sophisticated attack model, which reduces noise in the gradient updates and enables the reward poisoning to be more effective. We observe a more evident and consistent performance gap compared to the VPG-based FRL system (Fig. 1).
This indicates that our method is particularly effective in PPO-based FRL systems, benefiting from the double-critic protocol specifically designed for actor-critic backbones.

## 6.5.2 Comparison With Poisoned FRL And Robust FRL

Poisoned FRL. We conduct experiments on random attacks against standard VPG-based FRL and PPO-based FRL. The random attacking method is outlined in Section 6.3, with detailed algorithms provided in Appendix C.2. For VPG-based FRL (Fig. 1), we observe that our method performs at least as well as, and in most cases much better than, a random attack in poisoning the system. For PPO-based FRL (Fig. 2), we see that a random attack performs very poorly, as the strength of the clean agents overwhelms the weak attack and results in high rewards over time.

Robust FRL. The selection of the robust FRL baseline is detailed in Section 6.3. Experiments are conducted following the settings in Sections 6.4 and D.1.1. Results are presented in Fig. 3. It is noteworthy that, although the defense mechanism can successfully mitigate the harm of our poisoning in relatively simple environments (*CartPole*, *Hopper*, *Walker2d*), for complicated environments (*Half Cheetah*), which are closer to real-world scenarios, the defense mechanism fails against our poisoning. Thus, our results confirm that this kind of poisoning is harmful in real-world applications.

![14_image_0.png](14_image_0.png)

Figure 3: **Poison Robust FRL**. The red dashed line, labeled *Poison*, depicts the performance of our adversarial attack in a standard FRL system. The green dashed line, labeled *Clean*, represents the standard FRL system's performance. The blue dashed line, labeled *Defense*, showcases the performance of our attack in a robust FRL system.

## 6.5.3 Additional Results

All additional results are deferred to Appendix D.2, including: (a) **Multiple Poisoned Agents**. Fig. 5 illustrates the method's applicability to scenarios with multiple poisoned agents. The attack performance remains stable, given a consistent ratio of poisoned agents to unpoisoned agents. (b) **Targeted Poisoning**. Fig. 6 demonstrates the effectiveness of our proposed method in targeted poisoning scenarios. (c) **High-Budget Poisoning**. Fig. 4 showcases that, with a sufficiently high attack budget, our proposed method empowers a single poisoned agent to impact an FRL system of considerable size (e.g., a system of 100 agents).

## 7 Conclusions

In this work, we have proposed a novel method for poisoning FRL under both general policy-based and actor-critic algorithms, which can provably decrease the system's global objective. Our method is evaluated through extensive experiments on various OpenAI Gym environments using popular RL models, and the results demonstrate that our method is effective in poisoning FRL systems. Our work highlights the potential risks of FRL and inspires future research to design more robust algorithms to protect FRL against poisoning attacks.

## References

Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. *J. Mach. Learn. Res.*, 22(98):1–76, 2021.

Awni Altabaa, Bora Yongacoglu, and Serdar Yüksel. Decentralized multi-agent reinforcement learning for continuous-space stochastic games. In *2023 American Control Conference (ACC)*, pp. 72–77. IEEE, 2023.

Aqeel Anwar and Arijit Raychowdhury. Multi-task federated reinforcement learning with adversaries. *arXiv preprint arXiv:2103.06473*, 2021.
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 2938–2948. PMLR, 2020.

Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In *Machine Learning Proceedings 1995*, pp. 30–37. Elsevier, 1995.

Kiarash Banihashem, Adish Singla, Jiarui Gan, and Goran Radanovic. Admissible policy teaching through reward design. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 6037–6045, 2022.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. *arXiv preprint arXiv:1912.06680*, 2019.

Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. In *International Conference on Machine Learning*, pp. 634–643. PMLR, 2019.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. *arXiv preprint arXiv:1606.01540*, 2016.

Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, and Wotao Yin. Learning to optimize: A primer and a benchmark. *Journal of Machine Learning Research*, 23(189):1–59, 2022.

Yiding Chen, Xuezhou Zhang, Kaiqing Zhang, Mengdi Wang, and Xiaojin Zhu. Byzantine-robust online and offline distributed reinforcement learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 3230–3269. PMLR, 2023.

Gabriel Dulac-Arnold, Nir Levine, Daniel J Mankowitz, Jerry Li, Cosmin Paduraru, Sven Gowal, and Todd Hester. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis. *Machine Learning*, 110(9):2419–2468, 2021.

Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan, Bryan Kian Hsiang Low, and Roger Wattenhofer. FedHQL: Federated heterogeneous q-learning. *arXiv preprint arXiv:2301.11135*, 2023.

Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan, and Bryan Kian Hsiang Low. Fault-tolerant federated reinforcement learning with theoretical guarantee. *Advances in Neural Information Processing Systems*, 34, 2021.

Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. Local model poisoning attacks to {Byzantine-Robust} federated learning. In *29th USENIX Security Symposium (USENIX Security 20)*, pp. 1605–1622, 2020.

Shuhao Fu, Chulin Xie, Bo Li, and Qifeng Chen. Attack-resistant federated learning with residual-based reweighting. *arXiv preprint arXiv:1912.11464*, 2019.

Raj Ghugare, Santiago Miret, Adriana Hugessen, Mariano Phielipp, and Glen Berseth. Searching for high-value molecules using reinforcement learning and transformers. *arXiv preprint arXiv:2310.02902*, 2023.

Prashant Govindarajan, Santiago Miret, Jarrid Rector-Brooks, Mariano Phielipp, Janarthanan Rajendran, and Sarath Chandar. Learning conditional policies for crystal design using offline reinforcement learning. *Digital Discovery*, 3(4):769–785, 2024.

Yunhan Huang and Quanyan Zhu. Deceptive reinforcement learning under adversarial manipulations on cost signals. In *Decision and Game Theory for Security: 10th International Conference, GameSec 2019, Stockholm, Sweden, October 30–November 1, 2019, Proceedings 10*, pp. 217–237. Springer, 2019.

Mahdee Jodayree, Wenbo He, and Ryszard Janicki.
Preventing image data poisoning attacks in federated machine learning by an encrypted verification key. *Procedia Computer Science*, 225:2723–2732, 2023a. ISSN 1877-0509. doi: https://doi.org/10.1016/j.procs.2023.10.264. URL https://www.sciencedirect.com/science/article/pii/S1877050923014229. 27th International Conference on Knowledge Based and Intelligent Information and Engineering Systems (KES 2023).

Mahdee Jodayree, Wenbo He, and Ryszard Janicki. Preventing text data poisoning attacks in federated machine learning by an encrypted verification key. In Andrea Campagner, Oliver Urs Lenz, Shuyin Xia, Dominik Ślęzak, Jarosław Wąs, and JingTao Yao (eds.), *Rough Sets*, pp. 612–626, Cham, 2023b. Springer Nature Switzerland. ISBN 978-3-031-50959-9.

Philip Jordan, Florian Grötschla, Flint Xiaofeng Fan, and Roger Wattenhofer. Decentralized federated policy gradient with Byzantine fault-tolerance and provably fast convergence. *arXiv preprint arXiv:2401.03489*, 2024.

Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. *Journal of Artificial Intelligence Research*, 4:237–285, 1996.

Sajad Khodadadian, Pranay Sharma, Gauri Joshi, and Siva Theja Maguluri. Federated reinforcement learning: Linear speedup under Markovian sampling. In *International Conference on Machine Learning*, pp. 10997–11057. PMLR, 2022.

R Matthew Kretchmar. Parallel reinforcement learning. In *The 6th World Conference on Systemics, Cybernetics, and Informatics*. Citeseer, 2002.

Simin Li, Jun Guo, Jingqiao Xiu, Xini Yu, Jiakai Wang, Aishan Liu, Yaodong Yang, and Xianglong Liu. Byzantine robust cooperative multi-agent reinforcement learning as a Bayesian game. *arXiv preprint arXiv:2305.12872*, 2023.

Xinle Liang, Yang Liu, Tianjian Chen, Ming Liu, and Qiang Yang. Federated transfer reinforcement learning for autonomous driving. In *Federated and Transfer Learning*, pp. 357–371. Springer, 2023.

Boyi Liu, Lujia Wang, and Ming Liu. Lifelong federated reinforcement learning: a learning architecture for navigation in cloud robotic systems. *IEEE Robotics and Automation Letters*, 4(4):4555–4562, 2019.

Guanlin Liu and Lifeng Lai. Efficient adversarial attacks on online multi-agent reinforcement learning. *arXiv preprint arXiv:2307.07670*, 2023.

Siqi Liu, Kay Choong See, Kee Yuan Ngiam, Leo Anthony Celi, Xingzhi Sun, Mengling Feng, et al. Reinforcement learning for clinical decision support in critical care: comprehensive review. *Journal of Medical Internet Research*, 22(7):e18477, 2020.

Wentao Liu, Xiaolong Xu, Jintao Wu, and Jielin Jiang. Federated meta reinforcement learning for personalized tasks. *Tsinghua Science and Technology*, 29(3):911–926, 2023.

Weiming Mai, Jiangchao Yao, Gong Chen, Ya Zhang, Yiu-Ming Cheung, and Bo Han. Server-client collaborative distillation for federated reinforcement learning. *ACM Transactions on Knowledge Discovery from Data*, 18(1):1–22, 2023.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In *Artificial Intelligence and Statistics*, pp. 1273–1282. PMLR, 2017.

Yifei Min, Jiafan He, Tianhao Wang, and Quanquan Gu. Cooperative multi-agent reinforcement learning: asynchronous communication and linear function approximation. In *International Conference on Machine Learning*, pp. 24785–24811. PMLR, 2023a.

Yifei Min, Jiafan He, Tianhao Wang, and Quanquan Gu. Multi-agent reinforcement learning: Asynchronous communication and linear function approximation.
*arXiv preprint arXiv:2305.06446*, 2023b.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International Conference on Machine Learning*, pp. 1928–1937. PMLR, 2016.

Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, and Goran Radanovic. Implicit poisoning attacks in two-agent reinforcement learning: Adversarial policies for training-time attacks. *arXiv preprint arXiv:2302.13851*, 2023.

Gina Neff. Talking to bots: Symbiotic agency and the case of Tay. *International Journal of Communication*, 2016.

Zepeng Ning and Lihua Xie. A survey on multi-agent reinforcement learning and its application. *Journal of Automation and Intelligence*, 2024. ISSN 2949-8554. doi: https://doi.org/10.1016/j.jai.2024.02.003. URL https://www.sciencedirect.com/science/article/pii/S2949855424000042.

Panagiota Kiourti, Kacper Wardega, Susmit Jha, and Wenchao Li. TrojDRL: Trojan attacks on deep reinforcement learning agents. In *Proc. 57th ACM/IEEE Design Automation Conference (DAC)*, 2020.

Jan Peters and Stefan Schaal. Natural actor-critic. *Neurocomputing*, 71(7-9):1180–1190, 2008.

Martin L Puterman. Markov decision processes. *Handbooks in Operations Research and Management Science*, 2:331–434, 1990.

Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, and Adish Singla. Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning. In *International Conference on Machine Learning*, pp. 7974–7984. PMLR, 2020.

Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, and Adish Singla. Reward poisoning in reinforcement learning: Attacks against unknown learners in unknown environments. *arXiv preprint arXiv:2102.08492*, 2021.

Rajarshi Roy, Jonathan Raiman, Neel Kant, Ilyas Elkin, Robert Kirby, Michael Siu, Stuart Oberman, Saad Godil, and Bryan Catanzaro. PrefixRL: Optimization of parallel prefix circuits using deep reinforcement learning. In *2021 58th ACM/IEEE Design Automation Conference (DAC)*, pp. 853–858. IEEE, 2021.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017.

Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron Courville, and Philip Bachman. Data-efficient reinforcement learning with self-predictive representations. *arXiv preprint arXiv:2007.05929*, 2020.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In *International Conference on Machine Learning*, pp. 387–395. PMLR, 2014.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

Satinder Singh, Tommi Jaakkola, Michael L Littman, and Csaba Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. *Machine Learning*, 38(3):287–308, 2000.

Jinhyun So, Başak Güler, and A Salman Avestimehr. Byzantine-resilient secure federated learning. *IEEE Journal on Selected Areas in Communications*, 39(7):2168–2181, 2020.

Yanchao Sun, Da Huo, and Furong Huang. Vulnerability-aware poisoning mechanism for online RL with unknown dynamics.
*arXiv preprint arXiv:2009.00774*, 2020.

Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. *Advances in Neural Information Processing Systems*, 12, 1999.

Farnaz Tahmasebian, Jian Lou, and Li Xiong. RobustFed: a truth inference approach for robust federated learning. In *Proceedings of the 31st ACM International Conference on Information & Knowledge Management*, pp. 1868–1877, 2022.

Gerald Tesauro et al. Temporal difference learning and TD-Gammon. *Communications of the ACM*, 38(3):58–68, 1995.

Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu. Data poisoning attacks against federated learning systems. In *European Symposium on Research in Computer Security*, pp. 480–501. Springer, 2020.

Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019.

Ching Pui Wan and Qifeng Chen. Robust federated learning with attack-adaptive aggregation. *arXiv preprint arXiv:2102.05257*, 2021.

Xiaofei Wang, Chenyang Wang, Xiuhua Li, Victor CM Leung, and Tarik Taleb. Federated deep reinforcement learning for internet of things with decentralized cooperative edge caching. *IEEE Internet of Things Journal*, 7(10):9441–9455, 2020.

Jiin Woo, Gauri Joshi, and Yuejie Chi. The blessing of heterogeneity in federated q-learning: Linear speedup and beyond. *arXiv preprint arXiv:2305.10697*, 2023.

Young Wu, Jeremy McMahan, Xiaojin Zhu, and Qiaomin Xie. Reward poisoning attacks on offline multi-agent reinforcement learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 10426–10434, 2023.

Liang Xiao, Xiaoyue Wan, Canhuang Dai, Xiaojiang Du, Xiang Chen, and Mohsen Guizani. Security in mobile edge caching with reinforcement learning. *IEEE Wireless Communications*, 25(3):116–122, 2018.

Zhijie Xie and SH Song. Client selection for federated policy optimization with environment heterogeneity. *arXiv preprint arXiv:2305.10978*, 2023a.

Zhijie Xie and Shenghui Song. FedKL: Tackling data heterogeneity in federated reinforcement learning by penalizing KL divergence. *IEEE Journal on Selected Areas in Communications*, 41(4):1227–1242, 2023b.

Ekim Yurtsever, Jacob Lambert, Alexander Carballo, and Kazuya Takeda. A survey of autonomous driving: Common practices and emerging technologies. *IEEE Access*, 8:58443–58469, 2020.

Mingyue Zhang, Zhi Jin, Jian Hou, and Renwei Luo. Resilient mechanism against Byzantine failure for distributed deep reinforcement learning. In *2022 IEEE 33rd International Symposium on Software Reliability Engineering (ISSRE)*, pp. 378–389. IEEE, 2022.

Xuezhou Zhang, Yuzhe Ma, Adish Singla, and Xiaojin Zhu. Adaptive reward-poisoning attacks against reinforcement learning. In *International Conference on Machine Learning*, pp. 11225–11234. PMLR, 2020.

Fangyuan Zhao, Xuebin Ren, Shusen Yang, Peng Zhao, Rui Zhang, and Xinxin Xu. Federated multi-objective reinforcement learning. *Information Sciences*, 624:811–832, 2023.

Zhong Zheng, Fengyu Gao, Lingzhou Xue, and Jing Yang. Federated q-learning: Linear regret speedup with low communication cost. *arXiv preprint arXiv:2312.15023*, 2023.

Zhenpeng Zhou, Steven Kearnes, Li Li, Richard N Zare, and Patrick Riley. Optimization of molecules via deep reinforcement learning. *Scientific Reports*, 9(1):10752, 2019.
Ye Zhu and Xiaowen Gong. Distributed policy gradient with heterogeneous computations for federated reinforcement learning. In *2023 57th Annual Conference on Information Sciences and Systems (CISS)*, pp. 1–6. IEEE, 2023.

## A Proof Of Theorem 6

Theorem 8 (Theorem 6, restated). *Let Assumptions 1, 2, 3, and 4 hold. Suppose that all agents are updated cleanly in the first $p - 1$ rounds, and at round $p$, agent $n$ is poisoned. Let $\epsilon_{+} := \frac{2\lambda_{\theta} B}{n L_{\mathbf{r}}}$, where $B$ is defined as*

$$B = \big[\nabla_{\theta'} J_{(0)}(\theta')\big]^{\top} \cdot \big[\nabla_{\mathbf{r},\theta} J(\theta, \mathbf{r})\big] \cdot \vec{e}\big(\nabla_{\mathbf{r}} J(\theta, \mathbf{r})\big) \,\Big|_{\theta' = \theta^{p}_{(0)},\; \theta = \theta^{p-1}_{(0)},\; \mathbf{r} = \mathbf{r}^{p}_{(n)}}.$$

*Then, for $B > 0$ and $\epsilon < \epsilon_{+}$, we have $\widehat{J}^{p}_{(0)} \le J^{p}_{(0)} - \alpha$, where $\alpha \in (0, \frac{\epsilon_{+}^{2}}{8}]$.*

Proof. Since $J^{p}_{(0)}$ is $L_{\mathbf{r}}$-smooth with respect to $\mathbf{r}^{p}_{(n)}$, we have

$$\widehat{J}^{p}_{(0)} \le J^{p}_{(0)} + \big[\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(0)}\big]^{\top} \cdot \big[\widehat{\mathbf{r}}^{p}_{(n)} - \mathbf{r}^{p}_{(n)}\big] + \frac{L_{\mathbf{r}}}{2}\big\|\widehat{\mathbf{r}}^{p}_{(n)} - \mathbf{r}^{p}_{(n)}\big\|^{2}. \tag{8}$$

Moreover, we can write

$$\big[\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(0)}\big]^{\top} = \Big[\nabla_{\mathbf{r}^{p}_{(n)}} \frac{1}{n}\sum_{i\in[n]} J_{(i)}(\theta^{p}_{(0)})\Big]^{\top} = \Big[\nabla_{\theta^{p}_{(0)}} \frac{1}{n}\sum_{i\in[n]} J_{(i)}(\theta^{p}_{(0)})\Big]^{\top} \cdot \big[\nabla_{\mathbf{r}^{p}_{(n)}} \theta^{p}_{(0)}\big], \tag{9}$$

and

$$\nabla_{\mathbf{r}^{p}_{(n)}} \theta^{p}_{(0)} = \nabla_{\mathbf{r}^{p}_{(n)}} \frac{1}{n}\sum_{i\in[n]} \theta^{p}_{(i)} = \frac{1}{n}\, \nabla_{\mathbf{r}^{p}_{(n)}} \theta^{p}_{(n)} = \frac{1}{n}\, \nabla_{\mathbf{r}^{p}_{(n)}} \Big[\theta^{p-1}_{(0)} + \lambda_{\theta} \nabla_{\theta^{p-1}_{(0)}} J\big(\theta^{p-1}_{(0)}, \mathbf{r}^{p}_{(n)}\big)\Big] = \frac{\lambda_{\theta}}{n}\, \nabla_{\mathbf{r}^{p}_{(n)}} \nabla_{\theta^{p-1}_{(0)}} J\big(\theta^{p-1}_{(0)}, \mathbf{r}^{p}_{(n)}\big). \tag{10}$$

Substituting Eq. 10 into Eq. 9, we obtain

$$\big[\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(0)}\big]^{\top} = \frac{\lambda_{\theta}}{n^{2}} \sum_{i\in[n]} \big[\nabla_{\theta^{p}_{(0)}} J_{(i)}(\theta^{p}_{(0)})\big]^{\top} \cdot \big[\nabla_{\mathbf{r}^{p}_{(n)}} \nabla_{\theta^{p-1}_{(0)}} J\big(\theta^{p-1}_{(0)}, \mathbf{r}^{p}_{(n)}\big)\big]. \tag{11}$$

On the other hand, we have

$$\widehat{\mathbf{r}}^{p}_{(n)} = \mathbf{r}^{p}_{(n)} - \epsilon \cdot \vec{e}\big(\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(n)}\big). \tag{12}$$

Combining Eq. 12 with Eq. 11, we get

$$\big[\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(0)}\big]^{\top} \cdot \big[\widehat{\mathbf{r}}^{p}_{(n)} - \mathbf{r}^{p}_{(n)}\big] = -\frac{\epsilon \lambda_{\theta}}{n^{2}} \sum_{i\in[n]} \big[\nabla_{\theta^{p}_{(0)}} J_{(i)}(\theta^{p}_{(0)})\big]^{\top} \cdot \big[\nabla_{\mathbf{r}^{p}_{(n)}} \nabla_{\theta^{p-1}_{(0)}} J\big(\theta^{p-1}_{(0)}, \mathbf{r}^{p}_{(n)}\big)\big] \cdot \vec{e}\big(\nabla_{\mathbf{r}^{p}_{(n)}} J^{p}_{(n)}\big). \tag{13}$$

Substituting Eq. 13 into Eq. 8 and using the fact that $\frac{L_{\mathbf{r}}}{2}\|\widehat{\mathbf{r}}^{p}_{(n)} - \mathbf{r}^{p}_{(n)}\|^{2} = \frac{\epsilon^{2} L_{\mathbf{r}}}{2}$, we get

$$\widehat{J}^{p}_{(0)} \le J^{p}_{(0)} - \frac{\lambda_{\theta} B}{n}\,\epsilon + \frac{L_{\mathbf{r}}}{2}\,\epsilon^{2}. \tag{14}$$

As the right-hand side of Eq. 14 is a quadratic function of $\epsilon$, we conclude that when $B > 0$ and $0 < \epsilon < \frac{2\lambda_{\theta} B}{n L_{\mathbf{r}}}$, it holds that $\widehat{J}^{p}_{(0)} < J^{p}_{(0)}$, indicating a strict decrease in the objective of the poisoned FRL compared with clean training. In particular, the smallest bound in Eq. 14 is achieved at $\epsilon = \frac{\lambda_{\theta} B}{n L_{\mathbf{r}}}$, implying $\widehat{J}^{p}_{(0)} \le J^{p}_{(0)} - \frac{\lambda_{\theta}^{2} B^{2}}{2 L_{\mathbf{r}}^{2} n^{2}} = J^{p}_{(0)} - \frac{\epsilon_{+}^{2}}{8}$.

**A case where $B > 0$.**
Suppose $J(\theta; \mathbf{r}) = \boldsymbol{\gamma}^{\top} \mathbf{r}$, where $\mathbf{r}$ is the reward sequence and $\boldsymbol{\gamma}$ is the discount factor vector. This objective corresponds to the typical accumulated discounted reward setting. Suppose that $r_{(n)}(\theta)$ is a differentiable function. Recall that

$$B = \big[\nabla_{\theta'} J_{(0)}(\theta')\big]^{\top} \cdot \big[\nabla_{\mathbf{r},\theta} J(\theta, \mathbf{r})\big] \cdot \vec{e}\big(\nabla_{\mathbf{r}} J(\theta, \mathbf{r})\big) \,\Big|_{\theta' = \theta^{p}_{(0)},\; \theta = \theta^{p-1}_{(0)},\; \mathbf{r} = \mathbf{r}^{p}_{(n)}}.$$

Define

$$B_{1} := \nabla_{\theta'} J_{(0)}(\theta')\Big|_{\theta' = \theta^{p}_{(0)}}, \qquad B_{2} := \nabla_{\mathbf{r},\theta} J(\theta, \mathbf{r})\Big|_{\theta = \theta^{p-1}_{(0)},\; \mathbf{r} = \mathbf{r}^{p}_{(n)}}, \qquad B_{3} := \vec{e}\big(\nabla_{\mathbf{r}} J(\theta, \mathbf{r})\big)\Big|_{\theta = \theta^{p-1}_{(0)},\; \mathbf{r} = \mathbf{r}^{p}_{(n)}}.$$

We have $B = B_{1}^{\top} \cdot B_{2} \cdot B_{3}$. To intuitively understand $B$, we simplify the reward to $|\mathbf{r}| = 1$, where $|\cdot|$ denotes cardinality, and correspondingly $\boldsymbol{\gamma}$ is simplified to a scalar, $\gamma = 1$. Since

$$J_{(0)}(\theta^{p}_{(0)}) = \frac{1}{n}\sum_{i=1}^{n} J_{(i)}(\theta^{p}_{(0)}) = \frac{1}{n}\sum_{i=1}^{n} J\big(\theta^{p}_{(0)}; r_{(i)}(\theta^{p}_{(0)})\big) = \frac{1}{n}\sum_{i\in[n]} r_{(i)}(\theta^{p}_{(0)}),$$

we have

$$B_{1} := \nabla_{\theta'} J_{(0)}(\theta')\Big|_{\theta' = \theta^{p}_{(0)}} = \frac{1}{n}\sum_{i\in[n]} r'_{(i)}(\theta^{p}_{(0)}).$$

Since

$$J\big(\theta^{p-1}_{(0)}, \mathbf{r}^{p}_{(n)}\big) := J\big(\theta^{p-1}_{(0)}, r_{(n)}(\theta^{p-1}_{(0)})\big) = r_{(n)}(\theta^{p-1}_{(0)}) = \mathbf{r}^{p}_{(n)},$$

we have

$$B_{2} = \nabla_{\mathbf{r}^{p}_{(n)}} \nabla_{\theta^{p-1}_{(0)}} r_{(n)}(\theta^{p-1}_{(0)}) = \nabla_{\mathbf{r}^{p}_{(n)}}\big[r'_{(n)}(\theta^{p-1}_{(0)})\big] = r''_{(n)}(\theta^{p-1}_{(0)}) \cdot \big(r'_{(n)}(\theta^{p-1}_{(0)})\big)^{-1},$$

and $B_{3} = 1$. Therefore,

$$B = \Big[\frac{1}{n}\sum_{i\in[n]} r'_{(i)}(\theta^{p}_{(0)})\Big] \cdot \frac{r''_{(n)}(\theta^{p-1}_{(0)})}{r'_{(n)}(\theta^{p-1}_{(0)})}.$$

Suppose that all agents interact with the same environment, that is, $r_{(i)}(\theta) = r(\theta)$ for all $i \in [n]$. Then, if $r'''(\theta) = 0$, we get

$$\theta^{p}_{(0)} = \theta^{p-1}_{(0)} + \lambda_{\theta}\, r'(\theta^{p-1}_{(0)}),$$

and hence,

$$B = \frac{r'(\theta^{p}_{(0)})\, r''(\theta^{p-1}_{(0)})}{r'(\theta^{p-1}_{(0)})} = \frac{\big[r'(\theta^{p-1}_{(0)}) + r''(\theta^{p-1}_{(0)})\,\lambda_{\theta}\, r'(\theta^{p-1}_{(0)})\big]\, r''(\theta^{p-1}_{(0)})}{r'(\theta^{p-1}_{(0)})} = \big(1 + \lambda_{\theta}\, r''(\theta^{p-1}_{(0)})\big)\, r''(\theta^{p-1}_{(0)}),$$

which is a quadratic function of $r''(\theta^{p-1}_{(0)})$. Therefore, in this case we always have $B > 0$ as long as $r''(\theta^{p-1}_{(0)}) \in (-\infty, -\lambda_{\theta}^{-1}) \cup (0, +\infty)$. In particular, a higher rate $\lambda_{\theta}$ makes it easier to achieve a positive $B$.
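As a quick numerical sanity check of this last claim, the snippet below verifies, for an arbitrary sample rate, that $B = (1 + \lambda_{\theta} r'')\, r''$ is positive exactly on the stated union of intervals; the chosen values are illustrative only.

```python
import numpy as np

lam = 0.5  # sample local update rate lambda_theta (illustrative)
for r2 in np.linspace(-5.0, 5.0, 1001):   # candidate values of r''
    B = (1.0 + lam * r2) * r2
    in_interval = (r2 < -1.0 / lam) or (r2 > 0.0)
    # B > 0 iff r'' lies in (-inf, -1/lam) U (0, +inf); the roots give B = 0
    assert (B > 0) == in_interval or np.isclose(B, 0.0)
```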
## B Proximal Policy Optimization (PPO)-Specific Framework

In this appendix, we focus on a specific local RL algorithm, Proximal Policy Optimization (PPO) (Schulman et al., 2017), for the individual agents in FRL and propose a corresponding framework for poisoning. By specifying the local RL algorithm as PPO, we are able to tailor the problem formulation of Problem (P) and accordingly propose a targeted solution, following Section 4, to poisoning PPO-specific FRL. We first introduce PPO preliminaries to specify the general variables in Problem (P), and then discuss the PPO-specific problem formulation for poisoning FRL. This allows us to take into account the specific characteristics of the PPO algorithm when defining the problem.

## PPO Preliminaries

PPO is a popular Actor-Critic algorithm that uses a clipped surrogate objective. For agent $i$ at federated round $p$ and local episode $q$, denote the pair of state and action at the $t$-th step of its rollout by $O^{p,q,t}_{(i)} = (s^{p,q,t}_{(i)}, a^{p,q,t}_{(i)})$, and denote the $V$-function and $Q$-function defined by the Bellman equation (Baird, 1995) by $V(\cdot)$ and $Q(\cdot, \cdot)$, respectively. Then the advantage function is $A^{p,q,t}_{(i)} := Q(s^{p,q,t}_{(i)}, a^{p,q,t}_{(i)}) - V(s^{p,q,t}_{(i)})$.

PPO's critic. Let us use $\bar{\cdot}$ to denote estimation. Denote PPO's critic model by $\phi(\cdot \mid \omega^{p,q}_{(i)})$, where $\omega$ is the model's weights. As with all typical actor-critic algorithms, the critic is a value neural network that helps estimate the V-value of the actor so as to further calculate the actor's objective. In PPO, the actor's objective is a clipped advantage (Eq. 15), where the advantage is estimated from the critic and the observation, and can be written in the form $\bar{A}^{p,q,t}_{(i)} = \bar{A}(\omega^{p,q}_{(i)}, O^{p,q,t}_{(i)})$. The critic model updates itself by minimizing the temporal-difference error (Tesauro et al., 1995) between the estimated and observed $V$-values. We denote the critic's objective by $\delta^{p,q}_{(i)}$.

PPO's actor. Denote the actor model by $\pi(\cdot \mid s, \theta)$, where $\theta$ is the model weight and $s$ is some given state. To specialize the general Problem (P) to the PPO case, the clean agent's objective $J(\cdot)$ (Eq. 2) should be the PPO surrogate objective

$$J^{p,q}_{(i)} := \mathbb{E}_t\Big[\min\big(\gamma^{p,q,t}_{(i)} \cdot \bar{A}^{p,q,t}_{(i)},\; c^{p,q,t}_{(i)} \cdot \bar{A}^{p,q,t}_{(i)}\big)\Big], \tag{15}$$

where $\gamma^{p,q,t}_{(i)} := \frac{\pi(a^{p,q,t}_{(i)} \mid s^{p,q,t}_{(i)}, \theta^{p,q}_{(i)})}{\pi(a^{p,q,t}_{(i)} \mid s^{p,q,t}_{(i)}, \theta^{p,q-1}_{(i)})}$, and $\bar{A}^{p,q,t}_{(i)}$ is estimated based on both PPO's critic and the observation that the actor samples. Here, $c^{p,q,t}_{(i)} := \mathrm{clip}(\gamma^{p,q,t}_{(i)}, 1 - \eta, 1 + \eta)$, where $\mathrm{clip}(\cdot)$ is a clipping function parameterized by $\eta$.
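For reference, a minimal sketch of the clipped surrogate of Eq. 15, written over log-probabilities as is standard in PPO implementations; tensor names and shapes are illustrative assumptions:

```python
import torch

def ppo_surrogate(logp_new, logp_old, advantages, eta=0.2):
    """Clipped surrogate of Eq. 15: ratio gamma = pi_new / pi_old,
    clipped to [1 - eta, 1 + eta]; expectation taken as the batch mean."""
    ratio = torch.exp(logp_new - logp_old)              # gamma^{p,q,t}
    clipped = torch.clamp(ratio, 1.0 - eta, 1.0 + eta)  # c^{p,q,t}
    return torch.min(ratio * advantages, clipped * advantages).mean()
```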
## PPO-Specific FRL Poisoning

As an actor-critic algorithm, when we fit single-agent PPO into a federated framework, we assume that besides the actor model, the critic model is also uploaded from the individual agents to the server, then aggregated by the server, and finally broadcast to the local agents at each federated round $p$. Denote the aggregated actor as $\boldsymbol{\theta}_{(0)}^{p}$ and the aggregated critic as $\boldsymbol{\omega}_{(0)}^{p}$. To specify the server's aggregation function $\mathcal{A}^{agg}$ (Eq. 4), we take a conventional paradigm in FL:

$$\boldsymbol{\theta}_{(0)}^{p}=\mathcal{A}^{agg}(\boldsymbol{\theta}_{(0)}^{p-1},\{\boldsymbol{\theta}_{(i)}^{p,L}\}_{i=1}^{n})\tag{16}$$
$$:=\boldsymbol{\theta}_{(0)}^{p-1}+\frac{1}{n}\sum_{i=1}^{n}(\boldsymbol{\theta}_{(i)}^{p,L}-\boldsymbol{\theta}_{(i)}^{p,0})=\frac{\sum_{i=1}^{n}\boldsymbol{\theta}_{(i)}^{p,L}}{n},\tag{17}$$

where $\mathcal{A}^{agg}$ aggregates the local models by adding the averaged local model update to the server's model (Bhagoji et al., 2019; Bagdasaryan et al., 2020). By substituting $\boldsymbol{\theta}_{(i)}^{p,0} = \boldsymbol{\theta}_{(0)}^{p-1}$, $\mathcal{A}^{agg}$ is equivalent to assigning the averaged local model as the server's model (Bhagoji et al., 2019).
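As a concrete illustration of the aggregation in Eq. (17), a minimal sketch of $\mathcal{A}^{agg}$ over parameter dictionaries might look as follows; the function name and data layout are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def a_agg(local_models):
    """Average aggregation of Eq. (17): the server's new model is the mean
    of the n local models theta^{p,L}_{(i)}; by Eq. (16) this equals the
    previous server model plus the averaged local update."""
    n = len(local_models)
    return {k: sum(m[k] for m in local_models) / n for k in local_models[0]}

# Toy usage with two local "models" holding a single weight array each:
theta_1 = {"w": np.array([1.0, 2.0])}
theta_2 = {"w": np.array([3.0, 4.0])}
print(a_agg([theta_1, theta_2]))  # {'w': array([2., 3.])}
```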
We set the poison cost as $\mathcal{D}(R^{p,q}, \widehat{R}^{p,q}) = \|R^{p,q} - \widehat{R}^{p,q}\|_2$, and thereby propose the PPO-specific problem as:

$$\operatorname*{arg\,min}_{\widehat{R}}\ \mathcal{L}^{A}\Big(\boldsymbol{\theta}_{(0)}^{T},\boldsymbol{\omega}_{(0)}^{T}\ \Big|\ \big\{\widehat{R}_{(n)}^{p,q}\big\}_{1\leq p\leq T,\,1\leq q\leq L}\Big)\tag{P-PPO}$$

subject to, for all $1 \leq p \leq T$ and $1 \leq q \leq L$ (18): the local actors and critics are initialized from the server's models, *i.e.*, $\boldsymbol{\theta}_{(i)}^{p,0} = \boldsymbol{\theta}_{(0)}^{p-1}$ and $\boldsymbol{\omega}_{(i)}^{p,0} = \boldsymbol{\omega}_{(0)}^{p-1}$ for all $i \leq n$ (19)–(20); the clean agents $i < n$ train their local actors and critics on their own observations $O_{(i)}^{p,q}$ (21)–(22); the attacker (agent $n$) trains its local actor and critic on the poisoned observations $\widehat{O}_{(n)}^{p,q}$ (23)–(24); the server aggregates the local models as $\boldsymbol{\theta}_{(0)}^{p} = \sum_{i=1}^{n}\boldsymbol{\theta}_{(i)}^{p,L}/n$ (and analogously for $\boldsymbol{\omega}$) (25); and the poisoning respects the attack budget

$$\|R^{p,q}-\widehat{R}^{p,q}\|_{2}\leq\epsilon.\tag{26}$$

In Problem P-PPO, $\widehat{\cdot}$ denotes the poisoned variables induced by $\widehat{O}^{p,q}$. The interpretation of the constraints is similar to that in Section 3.3, except that all the equations related to $\boldsymbol{\omega}$ characterize the initialization and local training of the critic, while those related to $\boldsymbol{\theta}$ are for the actor. The constraints are summarized in Table 3.

| Party | Constraints | Interpretation |
|---|---|---|
| Agent $i$, $i \neq n$ (clean agents) | Eq. (19) | Local actor initialization |
| | Eq. (21) | Local actor training |
| | Eq. (20) | Local critic initialization |
| | Eq. (22) | Local critic training |
| Agent $n$ (attacker) | Eq. (19) | Local actor initialization |
| | Eq. (23) | Local actor training |
| | Eq. (20) | Local critic initialization |
| | Eq. (24) | Local critic training |
| | Eq. (26) | Attack budget |
| Coordinator | Eq. (25) | Aggregation |

Table 3: Constraints of Problem P-PPO.

## C Baseline Algorithms

In this appendix, we provide the baseline algorithms described in Section 6.3.

## C.1 Standard FRL

Standard Actor-Critic-based FRL is given in Algorithm 4, and standard Policy-Gradient-based FRL is given in Algorithm 5.

Algorithm 4 Standard Actor-Critic-based FRL
1: **Input**: max federated rounds $T$, max local episodes $L$, number of agents $n$, aggregation algorithm $\mathcal{A}^{agg}$, actor-critic objective function $J$.
2: **Output**: server's actor model $\boldsymbol{\theta}_{(0)}^{T}$ and critic model $\boldsymbol{\omega}_{(0)}^{T}$.
3: Initialize the server's actor model $\boldsymbol{\theta}_{(0)}^{0}$ and critic model $\boldsymbol{\omega}_{(0)}^{0}$.
4: **for** $p = 1$ to $T$ **do**
5:   **for** $i = 1$ to $n$ **do**
6:     Initialize local actor $\boldsymbol{\theta}_{(i)}^{p,0} \leftarrow \boldsymbol{\theta}_{(0)}^{p-1}$
7:     Initialize local critic $\boldsymbol{\omega}_{(i)}^{p,0} \leftarrow \boldsymbol{\omega}_{(0)}^{p-1}$
8:     **for** $q = 1$ to $L$ **do**
9:       Interact with environment and obtain $O_{(i)}^{p,q}$
10:      Compute $J_{(i)}^{p,q}$ with $O_{(i)}^{p,q}$ and $\boldsymbol{\omega}_{(i)}^{p,q-1}$
11:      Update $\boldsymbol{\theta}_{(i)}^{p,q}$ with $J_{(i)}^{p,q}$
12:      Update $\boldsymbol{\omega}_{(i)}^{p,q}$ with $O_{(i)}^{p,q}$
13:    **end for**
14:  **end for**
15:  $\boldsymbol{\theta}_{(0)}^{p} = \mathcal{A}^{agg}\big(\boldsymbol{\theta}_{(1)}^{p,L}, \ldots, \boldsymbol{\theta}_{(n-1)}^{p,L}, \boldsymbol{\theta}_{(n)}^{p,L}\big)$
16:  $\boldsymbol{\omega}_{(0)}^{p} = \mathcal{A}^{agg}\big(\boldsymbol{\omega}_{(1)}^{p,L}, \ldots, \boldsymbol{\omega}_{(n-1)}^{p,L}, \boldsymbol{\omega}_{(n)}^{p,L}\big)$
17: **end for**

Algorithm 5 Standard Policy-Gradient-based FRL
1: **Input**: max federated rounds $T$, max local episodes $L$, number of agents $n$, aggregation algorithm $\mathcal{A}^{agg}$, policy gradient objective function $J$.
2: **Output**: server's policy $\boldsymbol{\theta}_{(0)}^{T}$.
3: Initialize the server's policy $\boldsymbol{\theta}_{(0)}^{0}$.
4: **for** $p = 1$ to $T$ **do**
5:   **for** $i = 1$ to $n$ **do**
6:     Initialize local policy $\boldsymbol{\theta}_{(i)}^{p,0} \leftarrow \boldsymbol{\theta}_{(0)}^{p-1}$
7:     **for** $q = 1$ to $L$ **do**
8:       Interact with environment and obtain $O_{(i)}^{p,q}$
9:       Compute $J_{(i)}^{p,q}$ with $O_{(i)}^{p,q}$
10:      Update $\boldsymbol{\theta}_{(i)}^{p,q}$ with $J_{(i)}^{p,q}$
11:    **end for**
12:  **end for**
13:  $\boldsymbol{\theta}_{(0)}^{p} = \mathcal{A}^{agg}\big(\boldsymbol{\theta}_{(1)}^{p,L}, \ldots, \boldsymbol{\theta}_{(n-1)}^{p,L}, \boldsymbol{\theta}_{(n)}^{p,L}\big)$
14: **end for**

## C.2 Random Poisoning

Below we give the random poisoning algorithms mentioned in Baselines (Section 6.3).

**Random Poisoning for Actor-Critic-based FRL.** The algorithm is implemented almost the same as Algorithm 2, except that Line 5 is replaced with "Poison the reward as $\widehat{R}_{(n)}^{p,q}$ using the true objective $J_{(n)}^{p,q}$ and budget $\epsilon$ by Eq. (7)".

**Random Poisoning for Policy-Gradient-based FRL.** The algorithm is implemented almost the same as Algorithm 3, except that Line 16 is replaced with "Poison the reward as $\widehat{R}_{(n)}^{p,q}$ using the true objective $J_{(n)}^{p,q}$ and budget $\epsilon$ by Eq. (7)".

## C.3 Robust FRL

To mitigate the risk of FRL being exposed to malicious agents, below we describe a defense mechanism for FRL (Algorithm 6) that inherits from conventional FL defense mechanisms, where the aggregation is implemented by re-weighting with the clients' reliability. The aggregation process incorporates a re-weighting mechanism based on the clients' reliability, as discussed in prior work (Fu et al., 2019; Tahmasebian et al., 2022; Wan & Chen, 2021). In adapting this robust aggregation protocol from FL to FRL, we let the clients' local RL performance serve as an indicator of their reliability.

To be more precise, the re-weighting is accomplished by assigning credits to each agent's policy according to its performance. To that end, the central server runs tests on each policy it receives from the agents and records the observations, denoted by $O_{(i)}^{p,test}$. The server then calculates the average reward $r_{(i)}^{p,test}$ for each policy by averaging the rewards in the sequence $\{r_t\}_{(i)}^{p,test}$. Finally, the server normalizes the average rewards by dividing them by the sum of all averaged rewards, resulting in a set of normalized weights $c_i^{p,q}$. These weights are used in the weighted average aggregation of the policies:

$$\boldsymbol{\theta}_{(0)}^{p}\leftarrow\sum_{i\in\mathcal{M}}c_{i}^{p,q}\widehat{\boldsymbol{\theta}}_{(i)}^{p,L}+\sum_{i\not\in\mathcal{M}}c_{i}^{p,q}\boldsymbol{\theta}_{(i)}^{p,L},\tag{27}$$
$$\boldsymbol{\omega}_{(0)}^{p}\leftarrow\sum_{i\in\mathcal{M}}c_{i}^{p,q}\widehat{\boldsymbol{\omega}}_{(i)}^{p,L}+\sum_{i\not\in\mathcal{M}}c_{i}^{p,q}\boldsymbol{\omega}_{(i)}^{p,L}.\tag{28}$$

The defense mechanism is outlined in Algorithm 6. This protocol can be integrated into Algorithms 2 and 3 as the aggregation algorithm $\mathcal{A}^{agg}$.

Algorithm 6 FRL Defense Aggregation
1: **Input:** Submitted local actors $\{\boldsymbol{\theta}_{(i)}^{p,L}\}_{i\leq n}$; submitted local critics $\{\boldsymbol{\omega}_{(i)}^{p,L}\}_{i\leq n}$.
2: **Output:** Aggregated actor and critic $\boldsymbol{\theta}_{(0)}^{p}$, $\boldsymbol{\omega}_{(0)}^{p}$.
3: **for** $i = 1$ to $n$ **do**
4:   Server obtains $O_{(i)}^{p,test}$ by running $\boldsymbol{\theta}_{(i)}^{p,L}$
5:   Server gets the mean reward $r_{(i)}^{p,test}$ from $O_{(i)}^{p,test}$
6: **end for**
7: **for** $i = 1$ to $n$ **do**
8:   Server normalizes the credit $c_{i}^{p,q} \leftarrow r_{(i)}^{p,test} / \sum_{j} r_{(j)}^{p,test}$
9: **end for**
10: Server obtains $\boldsymbol{\theta}_{(0)}^{p}$ and $\boldsymbol{\omega}_{(0)}^{p}$ by Eq. (27) and (28).
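A compact sketch of the credit computation and re-weighted aggregation in Algorithm 6 is given below. The evaluation routine `mean_test_reward` is a stand-in we introduce for the server's test rollouts, not an API from the paper, and the sketch assumes nonnegative mean rewards so that the credits form a valid weighting.

```python
import numpy as np

def _weighted_avg(models, credits):
    # Credit-weighted average of parameter dictionaries.
    return {k: sum(c * m[k] for c, m in zip(credits, models)) for k in models[0]}

def defense_aggregate(local_actors, local_critics, mean_test_reward):
    """Reliability-weighted aggregation of Eqs. (27)-(28) / Algorithm 6.

    mean_test_reward: callable returning the mean episode reward
    r^{p,test}_{(i)} of a local actor over the server's test rollouts.
    """
    rewards = np.array([mean_test_reward(actor) for actor in local_actors])
    credits = rewards / rewards.sum()              # c^{p,q}_i, line 8 of Alg. 6
    actor = _weighted_avg(local_actors, credits)   # Eq. (27)
    critic = _weighted_avg(local_critics, credits) # Eq. (28)
    return actor, critic
```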
## D Experiments

## D.1 Additional Settings

## D.1.1 Settings For Main Experiments

**Additional general settings.** We measure the attack cost by the $\ell_2$ distance between the poisoned reward and the ground-truth observed reward. During each local training step, the maximum number of steps before termination is 300 unless restricted by the environment. The learning rate is set to 0.001, and the discount parameter is set to $\gamma = 0.99$.

**Additional settings for robust FRL.** For each model uploaded to the server, we run it for 10 episodes and collect the mean reward per episode. We then use the normalized rewards as the weights of aggregation. We structure the system with only 2 agents for all environments, so that the suspicious agent possesses the maximum possible power to poison the system, since a smaller number of agents corresponds to a more potent attack.

## D.1.2 Settings For Additional Experiments

**General settings.** Since VPG is more challenging to attack (refer to Section 6.5.1), in Section D.2 our focus is on attacking VPG-based FRL.

**Targeted poisoning.** For simplicity, we choose the target policy to be a single action, whether in a discrete or continuous space. We ensure the chosen environments encompass both discrete and continuous action spaces, featuring diverse cardinalities and dimensions of the action space. Concretely, we set the target policies to be $0 \in \{0,1\}$ for *CartPole*, $0 \in \{0,1,2,3\}$ for *Lunar Lander*, $3 \in [-3,3]$ for *Inverted Pendulum*, and $\mathbf{0}_6 \in [-1,1]^6$ for *Half Cheetah*. The chosen environments thus cover action spaces of different types: CartPole (two discrete actions), LunarLander (four discrete actions), InvertedPendulum (one continuous dimension), and HalfCheetah (six continuous dimensions).

**Multi-agent attack.** We opt for the *CartPole* environment, a relatively simple task, to balance manageable computation costs with a clear understanding of how a fixed proportion of attackers operates in a multi-agent attack scenario. In Section 6.5.1, we determined that the maximum size of the system one agent can effectively attack is 4 for the CartPole environment and a VPG-based FRL system. Therefore, we initiate the system size at 4, and accordingly, the number of poisoned agents begins at 1. Subsequently, we incrementally enlarge the system size and the corresponding number of poisoned agents while maintaining a constant proportion between them.

## D.2 Results Of Experiments

All additional results are obtained from experiments following the settings in Sections 6.4 and D.1.2.

**Large-budget attack.** We evaluated the performance of our poisoning with a large budget in Figure 4.

![25_image_0.png](25_image_0.png)

The results show that with a larger budget, our method is able to attack much larger systems than under the constraints that appeared in the small-budget case, *i.e.*, we can attack systems of up to 100 agents.

Figure 4: **High-Budget Poisoning**. The red dashed line, labeled *Poison*, depicts the performance of our adversarial attack in a standard FRL system. The green dashed line, labeled *Clean*, represents the standard FRL system's performance.

**Multi-agent poisoning.** We have shown that the proportion of malicious agents required to poison a system is consistent regardless of the size of the system. We found that when we increase the system size from 4 agents to 100 agents, a fixed proportion of attackers can always poison the system successfully. This result is depicted in Figure 5.

![26_image_0.png](26_image_0.png)

Figure 5: **Multi-Agent Poisoning**. The annotation is the same as in Fig. 4.

**Targeted attack.** Our poisoning works well for targeted attacks, as shown in Figure 6.

![26_image_1.png](26_image_1.png)

Figure 6: **Targeted attack**.
The green dashed line, labeled *Poison*, depicts the performance of our adversarial attack in a standard FRL system. The red dashed line, labeled *Clean*, represents the standard FRL system's performance.
Review 1:
Summary: This submission introduces a novel reward poisoning mechanism for FRL under both general policy-based and actor-critic algorithms. The authors theoretically demonstrate the importance of poisoning attacks w.r.t. the risks in real-world scenarios, and highlight a problem that deserves the attention of the community. By conducting a range of experiments on various OpenGYM environments using popular RL models, the authors show that the proposed protocol is effective in poisoning FRL systems and inspires future research to design more robust algorithms to protect FRL against poisoning attacks.
Strengths and Weaknesses:
Strengths:
(1) The authors study an interesting and important problem, i.e., the poisoning attack in FRL, which deserves more attention in real-world scenarios given the vulnerability of FL and RL as well as their composition. Specifically, the authors are among the first to show the high risk of potential poisoning in FRL.
(2) The authors design a novel poisoning attack, which is generally effective under different FRL models, such as actor-critic-based FRL and policy-based FRL. Besides, the authors provide a principled proof to show the effect of the proposed reward poisoning in sabotaging the system, specifically by degrading the global objective.
(3) Through extensive experiments in OpenGYM environments with varying difficulty levels, the authors show consistent effectiveness by comparing with various baseline models and by assessing different (targeted and non-targeted) poisoning types in mainstream RL algorithms like VPG and PPO.
Weaknesses: Despite this promise, there are still some concerns that should be carefully addressed, summarized as follows.
(1) Some claims in the manuscript are not entirely accurate. For example, the first sentence in the abstract states, "Federated learning (FL) has become a popular tool for solving traditional Reinforcement Learning (RL) tasks." FL is not a popular tool for RL when there is no constraint on data governance. Therefore, the authors should point out the range of application of FL with regard to RL. The beginning of the third paragraph in the introduction states, "Poisoning in FRL is practical and harmful. The inherent nature of FL and RL amplifies the susceptibility to poisoning (training-time attacks) when combined into FRL." Some evidence could be given to show the potential amplifying effect w.r.t. the poisoning, or a more intuitive explanation could be provided to convince readers of this point.
(2) The authors could further explain the rationale behind Theorem 6: even if the global objective can be made smaller under some poisoning budget, the optimization eventually decides the convergence of the model. It would be helpful to enrich the discussion of the theorem so that both its implications and limitations are covered.
(3) The experiments could be enhanced, as many factors, such as varying the number of participants in FRL (not only the poisoning agents) and different configurations of a single environment to show consistency, have not been comprehensively considered.
Requested Changes: See the weaknesses, which should be taken into consideration to improve the submission. In detail, (1) the writing of some critical claims should be improved so that they are more accurate and well supported by facts or discussion; (2) the experiments could be enhanced by considering some varying factors in FRL.
Broader Impact Concerns: N/A
==================================================
Review 2:
Summary: The work looks into a new problem area: poisoning attacks for federated reinforcement learning. The framework is designed for policy-based FRL and can be applied to both policy-gradient local RL and actor-critic local RL scenarios. The authors provide theoretical analysis and conduct experiments to evaluate the poisoning attack method.
Strengths and Weaknesses:
## Strengths
- This work looks into the new problem of poisoning attacks for FRL.
- Theoretical analysis is provided to guarantee that the attack method can decrease the global objective.
## Weaknesses
- The contribution seems limited. It is unclear what the challenges are for applying poisoning attacks to FRL compared to poisoning attacks for RL.
- For baselines in the evaluation section, why are the many existing cited RL poisoning methods not compared in this work? One could apply those methods to a single agent in the RL setting. It is hard to see the performance advantage of the proposed method when it is not compared to those baselines.
- In the evaluation section, it is unclear how the method performs with different numbers of agents. The number of agents for each case is also not clearly stated (only 3 to 4 agents are mentioned for the VPG-based FRL case). The evaluation results for poisoning VPG-based FRL seem to show effects similar to the random attack.
- The attack method involves a public critic and a private critic, which seems to require intervention in the training processes and weight upload, so it is not just modifying the rewards. Is that a realistic threat model?
- I suggest improving the writing of the formulation section to make it simpler for readers to understand the work. For instance: 1) O is defined as the sequence of states, actions, and rewards in Section 2, but referred to as observations in Section 3.3; this can easily be confused with the "observation" in partially observable RL problems. 2) When using the notation i < n (Equation 2) and i != n (Tables 1 & 2), is there a reason why they have to be different? If not, it is better to keep them consistent.
Requested Changes:
- Clarify the challenges of the problem and the contribution of the work
- Add the RL poisoning work as baselines and compare the performance (see above)
- Clarify the evaluation section in terms of the number of agents used, and evaluate how the number of agents and the random selection of agents affect the performance of the proposed method
- Clarify the threat model
- Improve the writing (see above)
Broader Impact Concerns: This is not provided. Since the work focuses on the security of AI, I suggest adding a section on this.
==================================================
Review 3:
Summary: The paper proposes a new training-time attack for federated reinforcement learning. The paper starts by introducing applications of reinforcement learning, as well as federated reinforcement learning (FRL), before discussing related work on poisoning in different reinforcement learning contexts. Next, the paper outlines its contributions, which mostly center on a new poisoning protocol for FRL, a theoretical analysis of that protocol, and an experimental evaluation of the protocol. In Section 2, the paper introduces preliminaries and notation, followed by the problem formulation in Section 3.
In Section 4, the paper introduces the proposed poisoning protocols, including reward poisoning for actor-critic-based FRL and policy-gradient-based FRL, with thorough algorithmic details. Section 5 provides a theoretical analysis of the proposed method, and Section 6 provides empirical results across multiple RL environments. Many of the empirical results show that the poisoning method generally decreases the performance of the FRL agent.
Strengths and Weaknesses:
**Strengths:**
* The paper proposes a new method for poisoning FRL agents that can be relevant to enhancing the robustness of FRL.
* The paper provides thorough details in the algorithmic descriptions.
* The paper provides a useful review of poisoning methods in related RL scenarios.
**Weaknesses:**
* The paper dedicates a lot of space to describing preliminaries and background, which can distract from the main message of the paper. I recommend moving some of the extraneous description into the Appendix and moving relevant experimental results and descriptions from Appendix C and Appendix D into the main paper.
* Some of the empirical results do not fully support the authors' claims (e.g., Figure 1 – walker), and the paper currently offers no explanation for those.
* There is no discussion of limitations, future work, and/or potential poisoning mitigation strategies.
* The discussion of related work and methods could be strengthened.
Requested Changes: I request addressing the weaknesses identified above, including:
* A more thorough discussion of the experimental results, especially when the attack appears to make little difference once the variance of the results is considered.
* Adding more relevant results and descriptions to the main paper, such as those found in Appendix C and Appendix D.
* A discussion of different RL algorithms, especially actor-critic algorithms, and why the authors chose to focus on PPO and VPG instead of more advanced algorithms.
* A more thorough discussion of the differences between distributed RL and FRL.
* A discussion of why policy gradient does not have local gradient poisoning.
* An expansion of the introduction covering the application of RL to different design problems, such as the design of materials [1][2][3], circuit design [4], and optimization [5], and how the poisoning aspects relate to those cases as well as to the cases already identified (e.g., IoT).
[1] Govindarajan, Prashant, et al. "Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning." Digital Discovery (2024).
[2] Zhou, Zhenpeng, et al. "Optimization of molecules via deep reinforcement learning." Scientific Reports 9.1 (2019): 10752.
[3] Ghugare, Raj, et al. "Searching for High-Value Molecules Using Reinforcement Learning and Transformers." arXiv preprint arXiv:2310.02902 (2023).
[4] Roy, Rajarshi, et al. "PrefixRL: Optimization of parallel prefix circuits using deep reinforcement learning." 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE, 2021.
[5] Chen, Tianlong, et al. "Learning to optimize: A primer and a benchmark." Journal of Machine Learning Research 23.189 (2022): 1-59.
Broader Impact Concerns: The paper would be strengthened by including a broader impact section that relates to potential misuse of the poisoning method and how to mitigate it.
==================================================
# Learning To Reconstruct Signals From Binary Measurements Alone

Julián Tachella *julian.tachella@cnrs.fr*
Physics Laboratory, CNRS & École Normale Supérieure de Lyon

Laurent Jacques *laurent.jacques@uclouvain.be*
ICTEAM, UCLouvain

Reviewed on OpenReview: *https://openreview.net/forum?id=ioFIAQOBOS*

## Abstract

Recent advances in unsupervised learning have highlighted the possibility of learning to reconstruct signals from noisy and incomplete linear measurements alone. These methods play a key role in medical and scientific imaging and sensing, where ground truth data is often scarce or difficult to obtain. However, in practice measurements are not only noisy and incomplete but also quantized. Here we explore the extreme case of learning from binary observations and provide necessary and sufficient conditions on the number of measurements required for identifying a set of signals from incomplete binary data. Our results are complementary to existing bounds on signal recovery from binary measurements. Furthermore, we introduce a novel self-supervised learning approach, which we name SSBM, that only requires binary data for training. We demonstrate in a series of experiments with real datasets that SSBM performs on par with supervised learning and outperforms sparse reconstruction methods with a fixed wavelet basis by a large margin.

## 1 Introduction

Continuous signals have to be quantized in order to be represented digitally with a limited number of bits in a computer. In many real-world applications, such as radar (Alberti et al., 1991), wireless sensor networks (Chen & Wu, 2015), and recommender systems (Davenport et al., 2014), the measured data is quantized with just a few bits per observation. The extreme case of quantization corresponds to observing a single bit per measurement. For example, single-photon detectors record the presence or absence of photons at each measurement cycle (Kirmani et al., 2014), and recommendation systems often observe a binary measurement of users' preferences only (*e.g.*, via thumbs up or down). The binary sensing problem is formalized as follows: we observe binary measurements $y \in \{-1,1\}^m$ of a signal $x \in \mathcal{X} \subset \mathbb{S}^{n-1}$ with unit norm¹ via the following forward model

$$y = \operatorname{sign}(Ax)\tag{1}$$

where $A \in \mathbb{R}^{m\times n}$ is a linear forward operator. Recovering the signal from the measurements is an ill-posed inverse problem since there are many signals $x \in \mathbb{S}^{n-1}$ that are consistent with a given measurement vector $y$. Moreover, the measurement matrix is often incomplete, $m < n$, *e.g.*, as in one-bit compressed sensing (Jacques et al., 2013), which makes the signal recovery problem even more challenging. It is possible to obtain a good estimation of $x$ despite the binary quantization if the set of plausible signals $\mathcal{X}$ is low-dimensional (Bourrier et al., 2014), *i.e.*, if it occupies a small portion of the ambient space $\mathbb{S}^{n-1}$.

¹Note that the sensing model in (1) provides no information about the norm of $x$, so it is commonly assumed that signals verify $\|x\| = 1$.

![1_image_0.png](1_image_0.png)

Figure 1: We propose a method for learning to reconstruct binary measurement observations, using only the binary observations themselves for training. The learned reconstruction function can discover unseen patterns in the data (in this case the clothes of fashionMNIST - see the experiments in Section 5), which cannot be recognized in the standard linear reconstructions (no learning).
We also provide theoretical bounds that characterize how well we can expect to learn the set of signals from binary measurement data alone. A popular approach is to assume that $\mathcal{X}$ is a single linear subspace or a union of subspaces (Jacques et al., 2013), imposing sparsity over a known dictionary. For example, the well-known total variation regularization assumes that the gradients of the signal are sparse (Rudin et al., 1992). However, in real-world settings, the set of signals $\mathcal{X}$ is generally unknown, and sparsity assumptions on an arbitrary dictionary yield a loose description of the true set $\mathcal{X}$, negatively impacting the quality of reconstructions obtained under this assumption. This limitation can be overcome by learning the reconstruction mapping $y \mapsto x$ (*e.g.*, with a deep neural network) directly from $N$ pairs of measurements and associated signals, *i.e.*, a supervised learning scenario with a labeled dataset $\{(y_i, x_i)\}_{i=1}^N$ with $N$ assumed sufficiently large. While this learning-based approach generally obtains state-of-the-art performance, it is often impractical since it can be very expensive or even impossible to obtain ground-truth signals $x_i$ for training. For example, recommender systems generally do not have access to high-resolution user ratings on all items for training.

In this paper, we investigate the problems of identifying the signal set and learning the reconstruction mapping using a dataset of binary measurements only, $\{y_i\}_{i=1}^N$. In this setting, if the measurement process is incomplete, $m < n$, the matrix $A$ has a non-trivial nullspace and there is no information in the measurement data about the part of the signal set $\mathcal{X}$ lying in the nullspace (Chen et al., 2021). As a consequence, there is not enough information for learning the reconstruction function either. For example, the trivial pseudo-inverse reconstruction $f(y) = A^\top(AA^\top)^{-1}y$ is perfectly consistent with the binary measurements, *i.e.*, $\operatorname{sign}(Af(y)) = y$, but is generally far from being optimal (Boufounos et al., 2015). Here we show that it is still possible to (approximately) identify the signal set and learn to reconstruct the binary measurements if the measurement operator varies across observations, *i.e.*,

$$y_i = \operatorname{sign}(A_{g_i}x_i)\tag{2}$$

where each signal $x_i$ is observed via one out of $G$ operators, $g_i \in \{1,\ldots,G\}$, and $i = 1,\ldots,N$. This sensing assumption holds in various practical applications, where signals are observed through different operators (*e.g.*, recommendation systems access ratings on a different set of items for each user) or through an operator which changes through time (*e.g.*, a sensor that changes its calibration). Moreover, this assumption is also valid for the case where we obtain binary measurements via a single operator $A$, but the set $\mathcal{X}$ is known to be invariant to a group of invertible transformations $\{T_g\}_{g=1}^G$, such as translations or rotations. The invariance of $\mathcal{X}$ provides access to measurements associated with a set of (implicit) operators $\{A_g = AT_g\}_{g=1}^G$, as we have that

$$y = \operatorname{sign}(AT_gT_g^{-1}x) = \operatorname{sign}(AT_gx')\tag{3}$$

with $x' = T_g^{-1}x \in \mathcal{X}$ for all $g = 1,\ldots,G$. This observation has been exploited to perform fully unsupervised learning on various linear inverse problems, such as magnetic resonance imaging and computed tomography (Chen et al., 2021; 2022; Tachella et al., 2023).
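To make the sensing model in Eqs. (2)–(3) concrete, the following small numpy sketch (ours, with arbitrary illustrative sizes) simulates binary observations of unit-norm signals through $G$ Gaussian operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, G, N = 64, 16, 8, 100               # illustrative sizes only

A = rng.standard_normal((G, m, n))        # G Gaussian operators A_1, ..., A_G
x = rng.standard_normal((N, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # signals on the unit sphere S^{n-1}

g = rng.integers(0, G, size=N)            # operator index g_i for each signal
y = np.stack([np.sign(A[g[i]] @ x[i]) for i in range(N)])   # Eq. (2)
print(y.shape)                            # (N, m) binary measurements in {-1, +1}
```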
| | | | |
|---|---|---|---|
| Assumption on $\mathcal{X} \subseteq \mathbb{S}^{n-1}$ | None | None | $\operatorname{boxdim} < k$ |
| Assumption on $A_g \in \mathbb{R}^{m\times n}$, $g \in [G]$ | $\operatorname{rank}[A_1^\top, \ldots, A_G^\top] < n$ | None | Gaussian |
| Identification error bounds | $\delta > 1$ | $\delta \gtrsim \frac{n}{mG}$ | $\delta \lesssim \frac{k+n/G}{m}\log\frac{nm}{k+n/G}$ |
| Section | Section 3.1 | Section 3.1 | Section 3.2 |

Table 1: Summary of the global model identification error δ bounds presented in this paper. The identification error δ corresponds to the maximal error of the optimal estimation of the signal set from binary measurement data alone (see Definition 3.1). The bounds depend on the size $n$ of the signals, the number $G$ of binary measurement operators with $m$ measurements each, and the dimension $k$ of the signal set.

The problem of recovering a signal from binary measurements under the assumption of a known signal set has been extensively studied in the literature (Goyal et al., 1998; Jacques et al., 2013; Oymak & Recht, 2015). These works provide practical bounds that characterize the recovery error as a function of the number of measurements $m$ for different classes of signal sets. However, they assume that the signal set is known (or that there is enough ground-truth training data to approximate it), which is often not the case in real-world scenarios. Here we investigate the best approximation of the signal set that can be obtained from the binary observations. This approximation lets us understand how well we can learn the reconstruction function from binary data. To the best of our knowledge, the model identification problem has not yet been addressed, and we aim to provide the first answers to this problem here. The main contributions of this paper are:

- We show that for any $G$ sensing matrices $A_1,\ldots,A_G \in \mathbb{R}^{m\times n}$ and any dataset size $N$, there exists a signal set whose identification error (precisely defined in Section 3) from binary measurements cannot decay faster than $\mathcal{O}(\frac{n}{mG})$ as $m$ increases.
- We prove that, if each operator $A_g$, $g \in \{1,\ldots,G\}$, has iid Gaussian entries (a standard construction in one-bit compressed sensing), it is possible to estimate a $k$-dimensional² signal set up to a global error of $\mathcal{O}(\frac{k+n/G}{m}\log\frac{nm}{k+n/G})$ with high probability.
- We determine the *sample complexity* of the related unsupervised learning problem, *i.e.*, we find that, for $G$ operators with Gaussian entries, the number of distinct binary observations for obtaining the best possible approximation of a $k$-dimensional signal set $\mathcal{X}$ is $N = \mathcal{O}\big(G\big(\frac{m\sqrt{n}}{k}\big)^{5k}\big)$ with controlled probability, which reduces to $N = \mathcal{O}\big(G\big(\frac{m}{k}\big)^{k}\big)$ if $\mathcal{X}$ is a union of $k$-dimensional subspaces.
- We introduce a Self-Supervised learning loss for training reconstruction networks from Binary Measurement data alone (SSBM), and show experimentally that the learned reconstruction function outperforms classical binary iterative hard thresholding (Jacques et al., 2013) and performs on par with fully supervised learning on various real datasets.

²The definition of dimension used in this paper is the upper box-counting dimension defined in Section 2.

A summary of the model identification bounds presented in this paper is shown in Table 1.

## Related Work

**Unsupervised learning in inverse problems.** Despite providing very competitive results, most deep learning-based solvers require a supervised learning scenario, *i.e.*, they need measurement and signal pairs $\{(y_i, x_i)\}$, a labeled dataset, in order to learn the reconstruction function $y \mapsto x$.
A first step to overcome this limitation is due to Noise2Noise (Lehtinen et al., 2018), where the authors show that it is possible to learn from noisy data alone if two noisy realizations of the same signal $\{(x_i + n_i, x_i + n'_i)\}$ are available for training. This approach has been extended to linear inverse problems with pairs of measurements $\{(A_{g_i}x_i + n_i, A_{g'_i}x_i + n'_i)\}$ (Yaman et al., 2020; Liu et al., 2020). The equivariant imaging framework (Chen et al., 2021; 2022) shows that learning the reconstruction function from unpaired measurement data $\{Ax_i+n_i\}$ of a single incomplete linear operator $A$ is possible if the signal model is invariant to a group of transformations. This approach can also be adapted to the case where the signal model is not invariant, but measurements are obtained via many different operators $\{A_{g_i}x_i+n_i\}$ (Tachella et al., 2022). Necessary and sufficient conditions for learning in these settings are presented in Tachella et al. (2023), however under the assumption of linear observations (no quantization). Here we extend these results to the non-linear binary sensing problem with an unsupervised dataset with multiple operators $\{\operatorname{sign}(A_{g_i}x_i)\}$ and $g_i \in \{1,\ldots,G\}$, or with a single operator and a group-invariant signal set $\{\operatorname{sign}(Ax_i)\}$.

![3_image_0.png](3_image_0.png)

Figure 2: Geometry of the 1-bit signal recovery problem with $m = 5$ and $n = 3$. **Left:** The binary sensing operator $\operatorname{sign}(A\cdot)$ defines a tessellation of the sphere into multiple *consistency cells*, which are defined as all vectors $x \in \mathbb{S}^2$ associated with the same binary code. The consistency cell associated with a given measurement $y$ is shown in green. Each red line is a great circle defined by all points of $\mathbb{S}^2$ perpendicular to one row of $A$. **Middle:** If the signal set consists of all vectors on the sphere, *i.e.*, $\mathcal{X} = \mathbb{S}^2$, the center of the cell is the optimal reconstruction $\hat{f}(y)$ (depicted with a blue cross) and the recovery error (denoted by δ) is given by the radius of the cell. **Right:** If the signal set (depicted in black) occupies only a small subset of $\mathbb{S}^2$, *i.e.*, it has a small box-counting dimension, the optimal reconstruction corresponds to the center of the intersection between the signal set and the consistency cell, and the resulting signal recovery error is smaller.

**Quantized and one-bit sensing.** Reconstructing signals from one-bit compressive measurements is a well-studied problem (Goyal et al., 1998; Jacques et al., 2013; Oymak & Recht, 2015; Baraniuk et al., 2017), both in the (over)complete case $m \geq n$ (Goyal et al., 1998), and in the incomplete setting $m < n$, either under the assumption that the signals are sparse (Jacques et al., 2013), or, more generally, that the signal set has small Gaussian width (Oymak & Recht, 2015). Some of these results are summarized in Section 2. The theoretical bounds presented in this paper complement those signal recovery bounds from quantized data, as they characterize the fundamental limitations of model identification from binary measurement data.

**One-bit matrix completion and dictionary learning.** Matrix completion consists of inferring missing entries of a data matrix $Y = [y_1,\ldots,y_N]$, whose columns can be seen as partial observations of signals $x_i$, *i.e.*, $y_i = \operatorname{sign}(A_{g_i}x_i)$, where the operators $A_{g_i}$ select a random subset of $m$ entries of the signal $x_i$.
In order to recover the missing entries, it is generally assumed that the signals $x_i$ (the columns of $X = [x_1,\ldots,x_N]$) belong to a $k$-dimensional subspace with $k \ll n$. Davenport et al. (2014) solve this learning problem via convex programming and present theoretical bounds for the reconstruction error. Zayyani et al. (2015) present an algorithm that learns a dictionary (*i.e.*, a union of $k$-dimensional subspaces) from binary data alone in the overcomplete regime $m > n$. Rencker et al. (2019) present a similar dictionary learning algorithm with convergence guarantees. In this paper, we characterize the model identification error for the larger class of low-dimensional signal sets, which includes subspaces and unions of subspaces as special cases. Moreover, we propose a self-supervised method that learns the reconstruction mapping directly, avoiding an explicit definition (*e.g.*, a dictionary) of the signal set.

## 2 Signal Recovery Preliminaries

We begin with some basic definitions related to the one-bit sensing problem. The diameter of a set is defined as $\operatorname{diam}(S) = \sup_{u,v\in S}\|u-v\|$, and the radius is defined as half the diameter. Each row $a_i \in \mathbb{R}^n$ of the operator $A$ divides the unit sphere $\mathbb{S}^{n-1}$ into two hemispheres, *i.e.*, $\{x \in \mathbb{S}^{n-1} : a_i^\top x \geq 0\}$ and $\{x \in \mathbb{S}^{n-1} : a_i^\top x < 0\}$. Considering all rows, the operator $\operatorname{sign}(A\cdot)$ defines a *tessellation* of $\mathbb{S}^{n-1}$ into consistency cells, where each cell is composed of all the signals associated with the same binary code $y$, *i.e.*, $\{x \in \mathbb{S}^{n-1} : \operatorname{sign}(Ax) = y\}$. The radius and number of consistency cells play an important role in the analysis of signal recovery and model identification. Figure 2 illustrates the geometry of the problem for $n = 3$ and $m = 5$.

The problem of recovering a signal from one-bit compressed measurements with a known signal set has been well studied (Goyal et al., 1998; Jacques et al., 2013; Oymak & Recht, 2015; Baraniuk et al., 2017).
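As a quick illustration of these consistency cells, the following sketch (ours; the sample size is arbitrary) buckets random points of the sphere by their binary code under $\operatorname{sign}(A\cdot)$; points sharing a key lie in the same cell.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n, m = 3, 5                                  # as in Figure 2
A = rng.standard_normal((m, n))

# Sample points on S^{n-1} and group them by their binary code sign(Ax).
x = rng.standard_normal((10000, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)
codes = (x @ A.T > 0)                        # boolean version of sign(Ax)

cells = defaultdict(list)
for xi, code in zip(x, codes):
    cells[tuple(code)].append(xi)

# The number of occupied cells stays below the 2^m possible codes,
# since not every binary code corresponds to a non-empty cell.
print(len(cells), "cells occupied out of", 2**m, "possible codes")
```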
In the incomplete case *m < n*, non-trivial signal recovery is only possible if the set of signals occupies a low-dimensional subset of the unit sphere S n−1(Oymak & Recht, 2015). For example, a common assumption is that X is the set of k-sparse vectors (Jacques et al., 2013). In this paper, we characterize the class of low-dimensional sets using a single intuitive descriptor, the box-counting dimension. The upper box-counting dimension (Falconer, 2004, Chapter 2) is defined for a compact subset S ⊂ R n as $$\operatorname{boxdim}\left(S\right)=\operatorname*{lim}_{\epsilon\to0^{+}}\frac{\log\Re(S,\epsilon)}{\log1/\epsilon}$$ where N(*S, ϵ*) is the minimum number of closed balls of radius ϵ with respect to the norm *∥ · ∥* that are required to cover S. This descriptor has been widely adopted in the inverse problems literature (Puy et al., 2017; Tachella et al., 2023), and it captures the complexity of various popular models, such as smooth manifolds (Baraniuk & Wakin, 2009) and union of subspaces (Blumensath & Davies, 2009; Baraniuk et al., 2017). For example, the set of (k + 1)-sparse vectors with unit norm has a box-counting dimension equal to k. The upper box-counting dimension is particularly useful to obtain an upper bound on the covering number of a set: if boxdim (X ) < k, there exists a set-dependent constant ϵ0 ∈ (0, 1 2 ) for which $$\left(7\right)$$ $$\Im({\mathcal{X}},\epsilon)\leq\epsilon^{-k}$$ −k(8) holds for all ϵ ≤ ϵ0 (Puy et al., 2017). The following theorem (proved in Appendix B) exploits this fact to provide a bound on the number of measurements needed for recovering a signal with an error smaller than δ from generic binary observations. $$({\boldsymbol{\delta}})$$ ![5_image_0.png](5_image_0.png) Figure 3: Illustration of the model identification problem from binary measurements with n = 3, m = 4, and G = 3. A signal set with box-counting dimension 1 is depicted in black. The red lines define the frontiers of the consistency cells associated with operators A1*, . . . , A*3. **From left to right:** The signal set, the estimation of the signal set associated with A1*, . . . , A*3 and the overall estimate Xˆ. Theorem 1. Let A be a matrix with iid entries sampled from a standard Gaussian distribution and assume that boxdim (X ) < k, such that N(X , ϵ) ≤ ϵ −kfor all ϵ < ϵ0 *with* ϵ0 ∈ (0, 1 2 ). For δ ≤ min{30√nϵ0, 1 2 }*, if* the number of measurements verifies $$m\geq{\frac{4}{\delta}}{\big(}2k\log{\frac{30{\sqrt{n}}}{\delta}}+\log{\frac{1}{\xi}}{\big)}$$ $$(9)$$ $$(10)$$ (9) then for all x, s ∈ X *, we have that* $$\operatorname{sign}\left(A x\right)=\operatorname{sign}\left(A s\right)\implies\|x-s\|<\delta$$ sign (Ax) = sign (As) =⇒ ∥x − s∥ < δ (10) with probability greater than 1 − ξ. This result extends Theorem 2 in Jacques et al. (2013), which holds for k-sparse sets only, to general lowdimensional sets and is included in Appendix B. For example, if X is the intersection of L (s+1)-dimensional subspaces with the unit sphere, Theorem 1 holds with constant ϵ0 = (3sL) − 1 k−s and *k > s* (Vershynin, 2018, Chapter 4.2). This theorem tells us that we can recover sparse signals from binary measurements up to an error of $${\mathcal{O}}({\frac{k}{m}}\log{\frac{n m}{k}})$$ ## Which Is Sharp, Up To The Logarithmic Factor (Jacques Et Al., 2013). Oymak And Recht (Oymak & Recht, 2015) Present A Similar Result, Stated In Terms Of The Gaussian Width3 Of The Signal Set Instead Of The Box-Counting Dimension. 
## 3 Model Identification From Binary Observations

In this section, we study how well we can identify the signal set from binary measurement data associated with $G$ different measurement operators $A_1,\ldots,A_G \in \mathbb{R}^{m\times n}$. We focus on the problem of identifying the set $\mathcal{X}$ from the binary sets $\{\operatorname{sign}(A_g\mathcal{X})\}_{g=1}^G$. In practice, we observe a subset of each binary set $\operatorname{sign}(A_g\mathcal{X})$; however, in Section 3.4 we show that the number of elements in each of these sets is controlled by the box-counting dimension of $\mathcal{X}$, which is typically low in real-world settings (Hein & Audibert, 2005).

We start by analyzing how the different operators provide us with information about $\mathcal{X}$. Each forward operator $A_g$ constrains the signal space by the following set

$$\hat{\mathcal{X}}_{g}=\{v\in\mathbb{S}^{n-1}:\ \exists x_{g}\in\mathcal{X},\ \operatorname{sign}(A_{g}v)=\operatorname{sign}(A_{g}x_{g})\}.\tag{11}$$

Each set $\hat{\mathcal{X}}_g$ is thus composed of all unit vectors $v$ that are *consistent* with at least one point $x_g$ of $\mathcal{X}$ according to the binary mapping $\operatorname{sign}(A_g\cdot)$. We thus conclude that $\hat{\mathcal{X}}_g$ is essentially a *dilation* of $\mathcal{X}$ (and we clearly have $\mathcal{X} \subset \hat{\mathcal{X}}_g$) whose extension is locally determined by specific cells of $\operatorname{sign}(A_g\cdot)$. A three-dimensional example with $m = 4$ measurements and $G = 3$ operators is presented in Figure 3. Note that, for a given binary mapping $\operatorname{sign}(A_g\cdot)$, each cell is characterized by one binary vector in the range of this mapping, so that, as shown in this figure, all cells provide a different tessellation of $\mathbb{S}^{n-1}$, whose size and dimension will play an important role in our analysis.

Since each $\hat{\mathcal{X}}_g$ is a dilation of $\mathcal{X}$, we can infer the signal set from the following intersection

$$\hat{\mathcal{X}}:=\bigcap_{g=1}^{G}\hat{\mathcal{X}}_{g},\tag{12}$$

which can be expressed concisely as

$$\hat{\mathcal{X}}=\{v\in\mathbb{S}^{n-1}:\ \exists x_{1},\ldots,x_{G}\in\mathcal{X},\ \operatorname{sign}(A_{g}v)=\operatorname{sign}(A_{g}x_{g}),\ \forall g=1,\ldots,G\}.\tag{13}$$

Due to the binary quantization, the inferred set will be larger than the true set, *i.e.*, $\mathcal{X} \subset \hat{\mathcal{X}}$. However, we will show that it is possible to learn a slightly *larger* signal set, defined in terms of a global identification error $\delta > 0$, *i.e.*, the open $\delta$-*tube*

$$\mathcal{X}_{\delta}=\{v\in\mathbb{S}^{n-1}:\ \inf_{x\in\mathcal{X}}\|x-v\|<\delta\}\tag{14}$$

such that the inferred set is contained in it, *i.e.*, $\hat{\mathcal{X}} \subset \mathcal{X}_\delta$. We define the model identification error as the smallest δ such that $\hat{\mathcal{X}} \subseteq \mathcal{X}_\delta$ holds:

**Definition 3.1** (Model identification error). The identification error of a signal set $\mathcal{X} \subset \mathbb{S}^{n-1}$ from binary sets $\{\operatorname{sign}(A_g\mathcal{X})\}_{g=1}^G$ is defined as $\min\{\delta \geq 0 : \hat{\mathcal{X}} \subseteq \mathcal{X}_\delta\}$.

For our developments to be valid, we will further assume that $\mathcal{X}$ is not too dense over $\mathbb{S}^{n-1}$, so that two tubes of $\mathcal{X}$ with two distinct radii are distinct.

**Assumption 1.** The set $\mathcal{X}$ is closed and there exists a maximal radius $0 < \delta_0 < 2$ for which $\mathcal{X}_\delta \subsetneq \mathcal{X}_{\delta_0}$ for any $0 < \delta < \delta_0$.

This assumption amounts to saying that there exists at least one open ball in $\mathbb{S}^{n-1}$ that does not belong to $\mathcal{X} \subset \mathbb{S}^{n-1}$. For instance, $\mathcal{X} = \mathbb{S}^{n-1}$ does not verify this assumption, while $\mathcal{X} = \mathbb{S}^{n-1} \cap \{x \in \mathbb{R}^n : x_1 \geq 0\}$ verifies it for $\delta_0 \leq \sqrt{2}$ since $\mathcal{X}_\delta = \mathbb{S}^{n-1}$ for any $\delta \geq \sqrt{2}$. The next subsections provide lower and upper bounds for δ.
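When $\mathcal{X}$ is only available through samples, membership in the inferred set of Eq. (13) can be tested empirically. The sketch below is ours, with a finite sample standing in for $\mathcal{X}$: a candidate $v$ is accepted if, for every operator, it is binary-consistent with at least one sample.

```python
import numpy as np

def in_inferred_set(v, A_ops, X_samples):
    """Test v against Eq. (13): for every operator A_g, v must share its
    binary code sign(A_g v) with at least one sample x_g of the signal set."""
    for A in A_ops:
        code_v = np.sign(A @ v)
        codes_X = np.sign(X_samples @ A.T)      # codes of all samples under A_g
        if not np.any(np.all(codes_X == code_v, axis=1)):
            return False                         # inconsistent with every x in X
    return True
```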
## 3.1 A Lower Bound On The Identification Error

We first aim to find a lower bound on the best δ achievable via the following oracle argument: if we had oracle access to $G$ measurements of each point $x$ in $\mathcal{X}$ through each of the $G$ different operators, we could stack them together to obtain a larger measurement operator, defined as

$$\begin{bmatrix}y_{1}\\ \vdots\\ y_{G}\end{bmatrix}=\operatorname{sign}\left(\bar{A}x\right)\ \text{with}\ \bar{A}=\begin{bmatrix}A_{1}\\ \vdots\\ A_{G}\end{bmatrix}\in\mathbb{R}^{mG\times n}.\tag{15}$$

This oracle measurement operator provides a refined approximation of the signal set, specified as

$$\hat{\mathcal{X}}_{\mathrm{oracle}}=\{v\in\mathbb{S}^{n-1}:\ \exists x\in\mathcal{X},\ \operatorname{sign}(\bar{A}v)=\operatorname{sign}(\bar{A}x)\},\tag{16}$$

which is again a dilation of $\mathcal{X}$. Figure 4 shows an example with the oracle set $\hat{\mathcal{X}}_{\mathrm{oracle}}$, which provides a better (or equal) approximation of the signal set than (13), due to the fact that $\mathcal{X} \subset \hat{\mathcal{X}}_{\mathrm{oracle}} \subseteq \hat{\mathcal{X}}$ by the construction of these sets. As the oracle estimate is composed of the cells associated with $\operatorname{sign}(\bar{A}\cdot)$ which are intersected by the signal set, the oracle approximation error depends on the diameter of the intersected cells. Given a certain oracle tessellation of $\mathbb{S}^{n-1}$, the worst estimate of $\mathcal{X}$ is obtained when it intersects the largest cells in the tessellation. The following proposition formalizes the intuition that the maximum consistency cell diameter, *i.e.*, the greatest distance separating two binary-consistent vectors of $\mathcal{X}$ according to $\bar{A}$, serves as a lower bound on the model identification error δ.

![7_image_0.png](7_image_0.png)

Figure 4: Illustration of the oracle argument in the example of Figure 3. **Left:** The signal set $\mathcal{X} \subset \mathbb{S}^2$ is depicted in black. **Middle:** Cells intersected by the oracle system are indicated in green. **Right:** The identified set $\hat{\mathcal{X}}$ is indicated in green, and *is larger* than the oracle counterpart.

**Proposition 2.** *Given $\bar{A} \in \mathbb{R}^{mG\times n}$, for any set $\mathcal{X} \subset \mathbb{S}^{n-1}$ respecting Assumption 1 with $0 < \delta_0 < 2$, there exists a rotation matrix $R \in \mathrm{SO}(n)$ such that the rotated set

$$\mathcal{X}'=\{v\in\mathbb{S}^{n-1}:v=Rx,\ x\in\mathcal{X}\}=R\mathcal{X}\tag{17}$$

verifies $\hat{\mathcal{X}}'_{\mathrm{oracle}} \not\subset \mathcal{X}'_\delta$ for any $\delta < \min\{d, \delta_0\}$, where $0 < d < 2$ is the largest cell diameter of the tessellation induced by $\operatorname{sign}(\bar{A}\cdot)$.*

*Proof.* Given $\delta < \delta_0$, the proof consists in choosing an appropriate rotation matrix such that we can find a point $v$ which belongs to the oracle estimate $\hat{\mathcal{X}}'_{\mathrm{oracle}}$ of the rotated set $\mathcal{X}'$ but does not belong to the δ-tube $\mathcal{X}'_\delta$ of this set. From Assumption 1 and since the δ-tube $\mathcal{X}_\delta$ is open, there exist $x \in \mathcal{X}$ and $v \notin \mathcal{X}_\delta$ such that $\|x - v\| = \delta$. Let $S$ denote the largest cell in the tessellation of $\mathbb{S}^{n-1}$ induced by $\operatorname{sign}(\bar{A}\cdot)$, such that $d = \operatorname{diam}(S)$. If $\delta < d$, we can always pick a rotation $R \in \mathrm{SO}(n)$ such that both $x' = Rx$ and $v' = Rv$ belong to $S$. As $x' \in S$, $\mathcal{X}'$ intersects $S$ and we have that $S \subseteq \hat{\mathcal{X}}'_{\mathrm{oracle}}$, and thus that $v' \in \hat{\mathcal{X}}'_{\mathrm{oracle}}$. Since $v \notin \mathcal{X}_\delta$, we also have $v' = Rv \notin \mathcal{X}'_\delta = R\mathcal{X}_\delta$, which concludes the proof.

In words, Proposition 2 shows that we can rotate any signal set $\mathcal{X}$ such that it intersects the largest consistency cell in the tessellation, obtaining a model identification error that is proportional to the maximum cell diameter. The rotation is used to remove the best-case scenarios where the signal set only intersects consistency cells that are smaller than the largest one.
In the rest of this subsection, we focus on bounding the maximum cell diameter, as it is directly related to the model identification error through Proposition 2. We start with the following proposition, which shows that, if the stacked matrix is rank-deficient, all cells have the maximum possible diameter.

**Proposition 3.** *Consider the tessellation defined by $\operatorname{sign}(\bar{A}\cdot)$ with $\bar{A} \in \mathbb{R}^{mG\times n}$. If

$$\operatorname{rank}(\bar{A})<n\tag{18}$$

all the cells in the tessellation have a diameter equal to 2.*

*Proof.* If $\bar{A}$ has rank smaller than $n$, it has a non-trivial nullspace. Let $v \in \mathbb{S}^{n-1}$ be an element of the nullspace with unit norm. Consider a cell associated with the code $\operatorname{sign}(\bar{A}x)$ for some $x \in \mathbb{R}^n$ inside the complement of this nullspace (*i.e.*, in the range of $\bar{A}^\top$). The points $\frac{x+v}{\|x+v\|}, \frac{x-v}{\|x-v\|} \in \mathbb{S}^{n-1}$ belong to this cell since they share the same code. As $\|x \pm v\| = \sqrt{\|v\|^2 + \|x\|^2}$ due to orthogonality, the distance between these two points is

$$\frac{2\|v\|}{\sqrt{\|v\|^{2}+\|x\|^{2}}}=\frac{2}{\sqrt{1+\|x\|^{2}}}\tag{19}$$

which tends to 2 as $\|x\|$ goes to zero, without modifying the cell code $\operatorname{sign}(\bar{A}x)$.

This result provides a practical necessary condition for model identification, which is summarized in the following corollary:

**Corollary 4.** *A necessary condition for the tessellation defined by $\operatorname{sign}(\bar{A}\cdot)$ to have consistency cells with a diameter smaller than 2 is that there are at least

$$m\geq n/G$$

measurements per operator.*

This corollary tells us that $n/G$ measurements are necessary in order to obtain non-trivial cell diameters, and thus to obtain a non-trivial estimation of $\mathcal{X}$. Moreover, in practice, it is possible to compute the rank of the stacked matrix $\bar{A}$ via numerical approximations. The following proposition provides a more refined characterization of the oracle error for $m \geq n/G$:

**Proposition 5.** *Consider the tessellation defined by $\operatorname{sign}(\bar{A}\cdot)$ with $\bar{A} \in \mathbb{R}^{mG\times n}$. The largest cell in the tessellation has a diameter of size at least $\frac{2}{3}\frac{n}{mG}$.*

*Proof.* According to Thao & Vetterli (1996, Theorem A.7), the maximum number of cells $C_{\bar{A}}$ induced by a tessellation defined by $\operatorname{sign}(\bar{A}\cdot)$ with $\bar{A} \in \mathbb{R}^{mG\times n}$ can be upper bounded as

$$C_{\bar{A}}\leq\binom{mG}{n}2^{n}.$$

As $\binom{mG}{n} \leq (\frac{emG}{n})^n$, we have that $C_{\bar{A}} \leq (\frac{2emG}{n})^n$. We can inscribe all cells into spherical caps $S_i$⁴ of radius δ/2, where δ is the maximum cell diameter. As shown in (Ball et al., 1997, Lemma 2.3), a spherical cap of radius δ/2 has measure bounded by $\sigma(S_i) \leq (\frac{\delta}{4})^{n-1}\sigma_{n-1}$, where $\sigma_{n-1}$ is the measure of $\mathbb{S}^{n-1}$. Since the tessellation covers the unit sphere $\mathbb{S}^{n-1}$, we have that $\mathbb{S}^{n-1} \subseteq \cup_{i=1}^{C_{\bar{A}}}S_i$ and thus

$$\sum_{i=1}^{C_{\bar{A}}}\sigma(S_{i})\geq\sigma_{n-1}\ \Rightarrow\ \Big(\frac{2emG}{n}\Big)^{n}\Big(\frac{\delta}{4}\Big)^{n}\sigma_{n-1}\geq\sigma_{n-1}\ \Rightarrow\ \delta\geq\frac{2}{3}\frac{n}{mG}.$$

**Remark.** The upper bound in this proposition is tight in the sense that there exist matrices that attain this bound. As a special case of Theorem 1 with $mG > n$ measurements and the box-counting dimension of $\mathcal{X}$ set to $n$, for an $mG \times n$ Gaussian random matrix $\bar{A}$, and by choosing the minimal number of required measurements in the condition of this theorem, we have with high probability that the largest cell of $\operatorname{sign}(\bar{A}\cdot)$ has a diameter that decays like $\mathcal{O}(\frac{n}{mG})$ up to log factors. By a standard boosting argument⁵, this means that there exists an $mG \times n$ matrix $\bar{A}$ with the same consistency cell diameter decay.

⁴A spherical cap of radius $r$ around a point $v \in \mathbb{S}^{n-1}$ is defined as $\{x \in \mathbb{S}^{n-1} : \|x - v\| < r\}$.

⁵If a Gaussian matrix fails to have the desired cell size with probability at most $\xi < 1$, the probability that at least one out of $r$ independent Gaussian matrices has this property is at least $1 - \xi^r$, which can be made arbitrarily high by increasing $r$.
As stated in the following corollary, Proposition 5 shows that the model identification error cannot decrease faster with the number of measurements and operators than $\mathcal{O}(\frac{n}{mG})$, since the largest cell in any oracle tessellation has a diameter of at least $\frac{2}{3}\frac{n}{mG}$.

**Corollary 6.** *Given $G$ operators $A_1,\ldots,A_G \in \mathbb{R}^{m\times n}$, and any set $\mathcal{X} \subset \mathbb{S}^{n-1}$ verifying Assumption 1 with $0 < \delta_0 < 2$, for any $0 < \delta < \min(\delta_0, \frac{2}{3}\frac{n}{mG})$, there exists a rotation $R$ such that the inferred signal set $\hat{\mathcal{X}}'$ of $\mathcal{X}' = R\mathcal{X}$ is not included in $\mathcal{X}'_\delta$, i.e., $\hat{\mathcal{X}}' \not\subset \mathcal{X}'_\delta$.*

*Proof.* If $\mathcal{X} \subset \mathbb{S}^{n-1}$ respects Assumption 1 with $0 < \delta_0 < 2$, and $\hat{\mathcal{X}}'_{\mathrm{oracle}}$ and $\hat{\mathcal{X}}'$ are the oracle set associated with $\bar{A}$ and the inferred set of $\mathcal{X}'$, respectively, then, as derived previously, we know that $\mathcal{X}' \subset \hat{\mathcal{X}}'_{\mathrm{oracle}} \subset \hat{\mathcal{X}}'$. According to Propositions 2 and 5, there exists a rotation $R$ such that $\hat{\mathcal{X}}'_{\mathrm{oracle}} \not\subset \mathcal{X}'_\delta$ for $0 < \delta < \min(\delta_0, \frac{2}{3}\frac{n}{mG})$. From the inclusion above, we thus see that $\hat{\mathcal{X}}' \not\subset \mathcal{X}'_\delta$.

At this stage, it is natural to ask whether the condition in Corollary 4 is sufficient to upper bound the model identification error δ. The answer is negative since, for certain families of operators, the maximum consistency cell associated with $\operatorname{sign}(\bar{A}\cdot)$ does not decrease with the number of measurements $m$ or operators $G$, as illustrated by the following example inspired by a case considered in (Plan & Vershynin, 2013, Sec. 1.1).

**Example 3.1.** Consider operators with binary⁶ entries $A_1,\ldots,A_G \in \{-1,1\}^{m\times n}$. Let $x_\lambda = e_1 + \lambda e_2 \in \mathbb{R}^n$, where $e_i \in \mathbb{S}^{n-1}$ is the $i$-th canonical vector and λ is a scalar. Due to the quantization of the operators, we have that $\operatorname{sign}(\bar{A}x_\lambda) = \operatorname{sign}(\bar{A}x_{\lambda'})$ for any values $\lambda, \lambda' \in (-1,1)$. Thus, there exists a cell in the oracle tessellation associated with $\operatorname{sign}(\bar{A}\cdot)$ which contains the set of points $\{\frac{x_\lambda}{\|x_\lambda\|}\}_{\lambda\in(-1,1)}$ and thus has a diameter equal to $\sqrt{2}$, independently of the values of $m$ and $G$.

⁶This example can be generalized to operators with entries belonging to a discrete set $Q$, to show that there exist cells with diameter equal to $\sqrt{\Delta}$, where Δ is the minimum distance between two elements of $Q$.

In the next subsection, we obtain an upper bound on the model identification error which overcomes this pathological example by sampling the operators from a continuous random distribution.

## 3.2 A Sufficient Condition For Model Identification

We now seek a sufficient condition on the number of measurements per operator that guarantees the identification of $\mathcal{X}$ up to a global error of δ. As with the sufficient conditions ensuring signal recovery (see Section 2), we assume that $\mathcal{X}$ is low-dimensional to provide a bound that holds with high probability if the entries of the operators are sampled from a Gaussian distribution.

**Theorem 7.** *Consider operators $A_1,\ldots,A_G \in \mathbb{R}^{m\times n}$ with entries i.i.d. as a standard Gaussian distribution, and a low-dimensional signal set $\mathcal{X}$ with $\operatorname{boxdim}(\mathcal{X}) < k$, such that $\mathcal{N}(\mathcal{X},\epsilon) \leq \epsilon^{-k}$ for all $\epsilon < \epsilon_0$ with $\epsilon_0 \in (0,\frac{1}{2})$. For $0 < \delta \leq \min\{18\sqrt{n}\epsilon_0, 1\}$ and some failure probability $0 < \xi < 1$, if the number of measurements per operator verifies

$$m\geq\frac{4}{\delta}\Big[\big(k+\frac{n}{G}\big)\log\frac{54\sqrt{n}}{\delta}+\frac{1}{G}\log\frac{1}{\xi}\Big]\tag{20}$$

then, with probability at least $1-\xi$, we have that $\hat{\mathcal{X}} \subseteq \mathcal{X}_\delta$.*
The proof is included in Appendix C. Theorem 7 provides a bound on δ, *i.e.*, how precisely we can characterize the signal set $\mathcal{X}$, which we can compare with the lower bound in Proposition 5. From (20) we have that (see Appendix C for a detailed derivation)

$$\delta=\mathcal{O}\big(\tfrac{k+n/G}{m}\log\tfrac{nm}{k+n/G}\big).\tag{21}$$

The bound in (21) is consistent with existing model identification bounds in the linear setting (Tachella et al., 2023), which require $m > k + n/G$ measurements per operator for uniquely identifying the signal set.

## 3.3 Learning To Reconstruct

The best reconstruction function $\hat{f}$ that can be learned from binary measurements alone can be defined as a function of the identified set $\hat{\mathcal{X}}$, as in (5):

$$\hat{f}(y)=\operatorname{centroid}(S_{y}\cap\hat{\mathcal{X}})\tag{22}$$

for a binary input $y$ with associated consistency cell $S_y = \{v \in \mathbb{S}^{n-1} : \operatorname{sign}(Av) = y\}$. The reconstruction error of $\hat{f}$ is lower bounded by the radius of the set $S_y \cap \hat{\mathcal{X}}$. This error must be larger than the error of a reconstruction function that has full knowledge of the signal set $\mathcal{X}$. Intuitively, if we have a large model identification error, $\hat{\mathcal{X}}$ will be a bad approximation of $\mathcal{X}$ and thus $\hat{f}$ will incur large reconstruction errors. The following proposition formalizes this intuition, showing that the reconstruction error of $\hat{f}$ is lower bounded by the model identification error.

**Proposition 8.** *Given $G$ operators $A_1,\ldots,A_G \in \mathbb{R}^{m\times n}$ and a set $\mathcal{X} \subset \mathbb{S}^{n-1}$ with model identification error equal to δ, there exist points $x_g \in \mathcal{X}$ for $g = 1,\ldots,G$ such that the reconstruction error is*

$$\|\hat{f}(\operatorname{sign}(A_{g}x_{g}))-x_{g}\|\geq\delta/2,$$

*where $\hat{f}$ is the optimal reconstruction function that can be learned from the measurement data $\{\operatorname{sign}(A_g\mathcal{X})\}_{g=1}^G$, as defined in (22).*

*Proof.* Following Definition 3.1 of the model identification error, there exists a point $\hat{x} \in \hat{\mathcal{X}}$ such that $\|x - \hat{x}\| \geq \delta$ for all $x \in \mathcal{X}$. According to the construction of the inferred set $\hat{\mathcal{X}}$ in (13), there exist some $x_1,\ldots,x_G \in \mathcal{X}$ such that $\operatorname{sign}(A_g\hat{x}) = \operatorname{sign}(A_gx_g)$ for all $g = 1,\ldots,G$. Therefore, for any $g \in \{1,\ldots,G\}$, the diameter of the set $S_{\operatorname{sign}(A_gx_g)} \cap \hat{\mathcal{X}}$ is at least $\|x_g - \hat{x}\|$, since both $x_g$ and $\hat{x}$ belong to this set. As the optimal reconstruction function outputs the centroid of the set (as defined in (22)), the reconstruction error for the point $x_g$ is at least $\|x_g - \hat{x}\|/2 \geq \delta/2$.

Therefore, we can use the results on model identification developed in Section 3.1 to lower bound the reconstruction error for the case where the function is learned from measurement data only. In particular, combining this result with Corollary 6, we obtain that the (worst-case) reconstruction error should be larger than $\frac{1}{3}\frac{n}{mG}$. It is worth noting that this result also holds for the case where we have a single operator and group invariance, *i.e.*, when $A_g = AT_g$ for $g = 1,\ldots,G$.

An upper bound on the reconstruction error is harder to obtain. Unfortunately, Theorems 1 and 7 do not automatically translate into a bound on the optimal reconstruction error of the reconstruction function defined in (22).
Theorem 7 implies that the optimal unsupervised reconstruction f̂(sign(A_g x)) is at most O((k+n/G)/m · log(nm/(k+n/G))) away from the signal set X, but does not guarantee that it is close to x. Nonetheless, we conjecture that this rate holds with high probability if the operators follow a Gaussian distribution: Conjecture 9. *Given binary measurements from the operators* A1, . . . , AG ∈ R^{m×n} *with entries i.i.d. from a standard Gaussian distribution, the optimal reconstruction function defined in* (22) *has a maximal reconstruction error that is upper bounded as* O((k+n/G)/m · log(nm/(k+n/G))) *with high probability.* Conjecture 9 hypothesizes that the optimal unsupervised reconstruction function should obtain a similar performance to the supervised one, *i.e.*, O((k/m) log(nm/k)) as shown in Theorem 1, if the number of operators is sufficiently large, *i.e.*, G > n/k. In the experiments in Section 5, we provide empirical evidence that supports this hypothesis. ## 3.4 Sample Complexity We end our theoretical analysis of the unsupervised learning problem by bounding its *sample complexity*, *i.e.*, we bound the number N of *distinct* binary measurement vectors {y_i}_{i=1}^{N} that must be acquired for obtaining the best approximation of the signal set X from binary data. Since we observe binary vectors y ∈ {±1}^m, there is a limited number of different binary observations. We could naively expect to observe up to 2^m different vectors per measurement operator (*i.e.*, all possible binary codes with m bits), requiring at most N ≤ G 2^m samples to fully characterize the best approximation of the signal set X̂ defined in (13). Fortunately, as already exploited in the proof of Proposition 5, this upper bound can be significantly reduced if the signal set has a low box-counting dimension, as not all cells in the tessellation will be intersected by the signal set (see Figure 3). We can thus obtain a better upper bound by counting the number of intersected cells, denoted as |sign(AX)|. If X is the intersection of a single k-dimensional subspace with the unit sphere, (Thao & Vetterli, 1996, Theorem A.7) tells us that, for any matrix A ∈ R^{m×n} with m ≥ k, there are |sign(AX)| ≤ $2^k\binom{m}{k}$ intersected cells. More generally, if X is a union of L subspaces, we have |sign(AX)| ≤ $L2^k\binom{m}{k}$. Thus, using the fact that $\binom{m}{k}\leq(\frac{3m}{k})^k$, from the G measurement operators, we can observe up to $$N\leq GL\Big(\frac{6m}{k}\Big)^{k}\tag{23}$$ different measurement vectors. However, this result only holds for a union of subspaces, each of dimension k. The following theorem extends this result to more general low-dimensional sets with small upper box-counting dimension. Theorem 10. *Let the entries of* A ∈ R^{m×n} *be sampled from a standard Gaussian distribution, and let* X ⊆ R^n *with* boxdim(X) < k*. If* k/(m√n) < min(ϵ0, 1/2)*, then, in expectation, the cardinality of the set* sign(AX) *is bounded as* $$\mathbb{E}|\operatorname{sign}\left(A\mathcal{X}\right)|\leq\Big(\frac{em\sqrt{n}}{k}\Big)^{k}.\tag{24}$$ *Moreover, given a failure probability* 0 < ξ < 1*, if* 2k/(m√n) ≤ min(ϵ0, 1/2)*, then, with probability* 1 − ξ*, we have* $$|\operatorname{sign}\left(A\mathcal{X}\right)|\leq\Big(\frac{1}{\xi}\Big)^{4}\Big(\frac{3m\sqrt{n}}{5k}\Big)^{5k}.\tag{25}$$ The proof is included in Appendix D. This result depends on the square root of the ambient dimension √n due to the application of Lemma 11 and can be suboptimal for some signal sets. 
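The quantity |sign(AX)| is also easy to estimate empirically. Below is a small NumPy sketch (all names are ours) that counts the distinct sign patterns hit by samples from a signal set; for a circle in a 2-dimensional subspace, the count stays far below the naive 2^m:

```python
import numpy as np

def count_intersected_cells(A, sampler, n_samples=100_000, seed=0):
    """Empirical estimate of |sign(A X)| by sampling points of X."""
    rng = np.random.default_rng(seed)
    pts = sampler(n_samples, rng)               # (n_samples, n) unit vectors
    patterns = np.signbit(pts @ A.T)            # boolean sign patterns
    return len(np.unique(patterns, axis=0))

# Example: X = unit circle embedded in a k = 2 dimensional subspace of R^n.
n, m, k = 20, 30, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
B = np.linalg.qr(rng.standard_normal((n, k)))[0]   # orthonormal basis

def sampler(N, rng):
    z = rng.standard_normal((N, k))
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return z @ B.T

print(count_intersected_cells(A, sampler))  # far fewer than 2**m cells
```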
For example, the bound in (23) avoids this dependency for the case where X is a union of subspaces. In the setting where we observe measurements through G independent forward operators, we sum the number of intersected cells for each operator, so that with probability exceeding 1 − Gξ for 0 < ξ < 1 (by a union bound), the number of different binary measurement vectors is bounded by $$N=\mathcal{O}\Big(G\Big(\frac{m\sqrt{n}}{k}\Big)^{5k}\Big)$$ with a hidden multiplicative constant depending on ξ. Similarly to (23), this bound scales exponentially only in the model dimension k, but not in the number of measurements m or operators G. In the setting of a single operator and a k-dimensional invariant signal set, we have the upper bound $N=\mathcal{O}\big((\frac{m\sqrt{n}}{k})^{5k}\big)$. ## 4 Learning Algorithms In this section, we present a novel algorithm for learning the reconstruction function f : (y, A) ↦ x from N binary measurement vectors {(y_i, A_{g_i})}_{i=1}^{N}, which is motivated by the analysis in Section 3. We parameterize the reconstruction function using a deep neural network with parameters θ ∈ R^p. The learned function can take into account the knowledge about the forward operator by simply applying a linear inverse at the first layer, *i.e.*, f_θ(y, A) = f̃_θ(A^⊤y), or by using more complex unrolled optimization architectures (Monga et al., 2021). In the case where we observe measurements associated with G different forward operators, we propose the SSBM loss $$\arg\min_{\theta\in\mathbb{R}^{p}}\ \sum_{i=1}^{N}\Big[\,\mathcal{L}_{\text{MC}}\left(y_{i},A_{g_i}\hat{x}_{\theta,i}\right)\ +\ \alpha\sum_{s\neq g_{i}}\|\hat{x}_{\theta,i}-f_{\theta}(\text{sign}\left(A_{s}\hat{x}_{\theta,i}\right),A_{s})\|_{2}^{2}\,\Big],\tag{26}$$ where x̂_{θ,i} = f_θ(y_i, A_{g_i}), the cost L_MC(y_i, A_{g_i} x̂_{θ,i}) ≥ 0 enforces *measurement consistency* (MC), *i.e.*, requires that y_i = sign(A_{g_i} x̂_{θ,i}), and α ∈ R_+ is a hyperparameter controlling the trade-off between the two terms involved. In the setting where we have a single operator and the set X is invariant to a group of transformations {T_g}_{g=1}^{G} such as rotations or translations, we aim to learn a reconstruction function f_θ : y ↦ x (we remove the dependence of f_θ on A to simplify the notation) via the following self-supervised loss: $$\operatorname*{arg\,min}_{\theta\in\mathbb{R}^{p}}\;\sum_{i=1}^{N}\Big[\,\mathcal{L}_{\mathrm{MC}}\left(y_{i},A\hat{x}_{\theta,i}\right)+\alpha\sum_{g=1}^{G}\|T_{g}\hat{x}_{\theta,i}-f_{\theta}(\operatorname{sign}\left(A T_{g}\hat{x}_{\theta,i}\right))\|_{2}^{2}\,\Big],\tag{27}$$ where x̂_{θ,i} = f_θ(y_i) and α ∈ R_+. In practice, we minimize (26) with mini-batching approaches (*e.g.*, stochastic gradient descent), sampling one out of the G operators at random per batch. In both cases, we choose the measurement consistency term to be the logistic loss, *i.e.*, $${\mathcal{L}}_{\mathrm{MC}}\left(y,{\hat{y}}\right)=\log\left(1+\exp(-y\circ{\hat{y}})\right)\tag{28}$$ which enforces sign-consistent predictions that are far from zero, as the logistic loss tends asymptotically towards zero as |ŷ| → ∞ with sign(ŷ) = y. An empirical analysis in Section 5 shows that the logistic loss obtains the best performance across various popular consistency losses. 
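The following is a minimal single-batch PyTorch sketch of the multi-operator objective (26), assuming f_θ(y, A) = f̃_θ(A^⊤y) as above; the function and variable names are ours, and it is a sketch rather than the paper's implementation. Note that torch.sign has zero gradient, so gradients in the second term flow only through the current estimate and the inner network evaluation, matching the written loss.

```python
import torch
import torch.nn.functional as F

def ssbm_loss(f, y, A_list, g, alpha=0.1):
    """One-batch sketch of the SSBM objective (26).

    f: network mapping a backprojection A^T y to a signal estimate,
       so that f_theta(y, A) = f(y @ A) for A of shape (m, n).
    y: (B, m) batch of +/-1 measurements acquired with operator A_list[g].
    """
    A = A_list[g]
    x_hat = f(y @ A)                             # f_theta(y_i, A_{g_i})
    # Measurement consistency term, logistic loss (28).
    loss = F.softplus(-y * (x_hat @ A.T)).mean()
    # Cross-operator consistency on the bootstrapped estimate.
    for s, A_s in enumerate(A_list):
        if s == g:
            continue
        y_s = torch.sign(x_hat @ A_s.T)          # gradient-free through sign()
        loss = loss + alpha * F.mse_loss(f(y_s @ A_s), x_hat)
    return loss
```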
Analysis of the proposed loss We focus on the multi-operator loss in (26), although a similar analysis also holds for the equivariant setting. The first term of the loss enforces measurement consistency, *i.e.*, it requires y_i = sign(A_{g_i} f_θ(y_i, A_{g_i})) for every y_i in the dataset. However, in the incomplete setting m < n, the simple pseudo-inverse solution $$f(y,A_{g})=A_{g}^{\dagger}y\tag{29}$$ with A_g^† = A_g^⊤(A_g A_g^⊤)^{−1}, is measurement consistent for any number of operators G and training data N. Therefore, the first loss does not prevent learning a function f_θ(y, A_g) which acts independently for each operator (as if there were G independent learning problems). The second loss *bootstraps* the current estimates x̂_{i,θ} = f_θ(y_i, A_{g_i}) as new ground-truth references, mimicking the supervised loss $$\sum_{i=1}^{N}\sum_{s=1}^{G}\|\hat{x}_{i,\theta}-f_{\theta}(\operatorname{sign}\left(A_{s}\hat{x}_{i,\theta}\right),A_{s})\|^{2},\tag{30}$$ in order to enforce consistency across operators. Importantly, this additional loss avoids the trivial pseudo-inverse solution in (29), as $$A_{g}^{\dagger}y-A_{s}^{\dagger}\operatorname{sign}\left(A_{s}A_{g}^{\dagger}y\right)\neq 0\tag{31}$$ for g ≠ s if the nullspaces of A_g and A_s are different, *e.g.*, if the necessary condition in Corollary 4 is verified. Model identification perspective The learning algorithm constructs a discrete approximation of the signal set using the reconstructed dataset, *i.e.*, ∪_{g=1}^{G} X̃_g, where X̃_g = f_θ(Y_g, A_g) for g = 1, . . . , G and Y_g is the subset of measurement vectors associated with the gth operator. From a model identification perspective, the measurement consistency loss ensures that Y_g = sign(A_g X̃_g) for all g = 1, . . . , G. The second loss ensures consistency across all operators, *i.e.*, X̃_g = f_θ(sign(A_s X̃_g), A_s) for all pairs s, g ∈ {1, . . . , G}, acting as a proxy for X̃_g = X̃_s. ## 5 Experiments For all experiments, we use measurement operators with entries sampled from a standard Gaussian distribution and evaluate the performance of the algorithms by computing the average peak signal-to-noise ratio (PSNR) on a test set with N′ ground-truth signals, that is: $$\frac{1}{N'}\sum_{i=1}^{N'}\mathrm{PSNR}\left(x_{i}',f_{\theta}(\mathrm{sign}\left(A_{g_{i}}x_{i}'\right),A_{g_{i}})\right),\tag{32}$$ where the PSNR is computed after normalizing the reconstructed image such that it has the same norm as the reference image, *i.e.*, $$\mathrm{PSNR}(x,{\hat{x}})=-20\log\Big\|x-\frac{\|x\|}{\|{\hat{x}}\|}{\hat{x}}\Big\|.\tag{33}$$ We choose f_θ(y, A) = f̃_θ(A^⊤y), where f̃_θ is the U-Net network used in (Chen et al., 2021) with weights θ, and train for 400 epochs with the Adam optimizer with learning rate 10^{−4} and standard hyperparameters β1 = 0.9 and β2 = 0.99. ## 5.1 MNIST Experiments We evaluate the theoretical bounds using the MNIST dataset, which consists of greyscale images with n = 784 pixels and whose box-counting dimension is approximately k ≈ 12 (Hein & Audibert, 2005). We use 6 × 10⁴ images for training and 10³ for testing. Multiple operators setting. We start by comparing the logistic consistency loss in (28) with the following alternatives (a minimal code sketch of these losses is given after this comparison): - Standard ℓ_p-loss, L_MC(y, ŷ) = ∥y − ŷ∥_p^p. 
As this loss is zero only if ŷ = y, it promotes sign consistency, sign(ŷ) = y, and unit outputs |ŷ| = 1. - One-sided ℓ_p-loss, L_MC(y, ŷ) = ∥max(−y ∘ ŷ, 0)∥_p^p, where ∘ denotes element-wise multiplication and the max operation is performed element-wise. This loss is zero as long as sign(ŷ) = y, regardless of the value of |ŷ|. ![13_image_0.png](13_image_0.png) Figure 5: Evaluated training losses for enforcing sign measurement consistency sign(Ax̂) = y of reconstructions f_θ(y) = x̂. **Left:** The loss functions are shown for the case y = 1. **Right:** Average test PSNR of different measurement consistency losses on the MNIST dataset with G = 10 operators. For all losses except the logistic loss, setting the trade-off parameter α = 1 obtained the best results. For the logistic loss, we performed a sweep over different values of α and m, finding that the optimal choice of α decreases with m (see Appendix E for more details). Thus, we set α = 0.1 for m < n and α = 0.06 for m ≥ n. Figure 5 shows the different losses and the test performance for different numbers of measurements using G = 10 operators. The logistic loss obtains the best performance across all sampling regimes, whereas the one-sided ℓ2 loss obtains the worst results. Secondly, we compare the logistic loss with the following learning schemes: - Linear inverse (no learning), defined as x̂_i = A_{g_i}^⊤ y_i. This reconstruction can fail to be measurement consistent (Goyal et al., 1998). - Standard supervised learning loss, defined as $\sum_{i=1}^{N}\|x_i - f_\theta(y_i, A_{g_i})\|^2$. We also evaluate this loss together with the cross-operator consistency term in (26), which we denote as supervised+. - Measurement consistency loss, defined as $\sum_{i=1}^{N}\mathcal{L}_{\mathrm{MC}}(y_i, A_{g_i} f_\theta(y_i, A_{g_i}))$ using the logistic loss. - The binary iterative hard-thresholding (BIHT) reconstruction algorithm (Jacques et al., 2013) with a Daubechies4 orthonormal wavelet basis. The step size and sparsity level of the algorithm were chosen via grid search. It is worth noting that the best-performing sparsity level increases as the number of measurements m is increased. - Proposed SSBM loss in (26) using the logistic loss for measurement consistency. Test PSNR values obtained for the case of G = 10 operators are shown in the left subfigure of Figure 6, where the PSNR in dB is plotted against m/n in log-scale representation. The measurement consistency approach obtains performance similar to simply applying a linear inverse in the incomplete m/n < 1 setting, whereas it obtains a significant improvement over the linear inverse in the overcomplete case m/n ≥ 1. This gap can be attributed to the lack of measurement consistency of the linear reconstruction algorithm (Goyal et al., 1998). The proposed loss obtains a performance that is several dBs above the linear inverse and BIHT for all sampling regimes. BIHT relies on the wavelet sparsity prior, which does not capture the MNIST digits well enough. SSBM performs similarly to supervised learning as the sampling ratio tends to 1, and, perhaps surprisingly, it obtains slightly better performance than supervised learning for m/n = 1.28. However, adding the cross-operator consistency loss to the supervised method (*i.e.*, the method supervised+ in Section 5.1) performs better than SSBM for all sampling regimes. The right plot in Figure 6 compares the performance of the SSBM with the bounds in Proposition 8 and Conjecture 9. 
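For concreteness, the three candidate consistency losses compared above can be sketched as follows in PyTorch (a minimal sketch; the per-sample reduction over measurement components is our own choice):

```python
import torch
import torch.nn.functional as F

def logistic_loss(y, y_hat):          # eq. (28)
    return F.softplus(-y * y_hat).sum(dim=-1)

def lp_loss(y, y_hat, p=2):           # standard l_p loss
    return (y - y_hat).abs().pow(p).sum(dim=-1)

def one_sided_lp_loss(y, y_hat, p=2): # one-sided l_p loss
    return torch.clamp(-y * y_hat, min=0).pow(p).sum(dim=-1)
```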
These bounds behave almost linearly in this log-log plot of both the error (through the PSNR) and the log-scale representation of m/n. We thus observe a good agreement between the predictions in Conjecture 9 and the performance in practice. ![14_image_0.png](14_image_0.png) Figure 6: **Left:** Average test PSNR of different supervised and unsupervised algorithms on the MNIST dataset with G = 10 operators. **Right:** The performance of the SSBM method follows closely the bounds in Conjecture 9. ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) Figure 7: (a) Average test PSNR and (b) reconstructed test images of the proposed unsupervised method for different numbers of operators G and measurements m. Figure 7 shows the average test PSNR and reconstructed images obtained by the proposed self-supervised method for different values of G and m. The method fails to obtain good reconstructions when G = 1, as the necessary condition in Corollary 4 is not fulfilled. Noisy measurements In many sensing applications, the sensor data are subject to noise. In the setting of binary measurements, noise affects the measurements by flipping the sign, as the observations can only be −1 or +1. We evaluate the proposed algorithm with m = 274 measurements, G = 10 operators, and different noise levels, according to the model $$y_{i}=\operatorname{sign}\left(A_{g_{i}}x_{i}+\epsilon_{i}\right)\tag{34}$$ where ϵ_i ∼ N(0, σ²I) for i = 1, . . . , N. Figure 8 shows the performance of the SSBM algorithm for different values of σ. The learning algorithm is particularly robust to noise, obtaining a good performance for noise levels up to σ = 0.13. This noise level translates to having approximately 15% of bits flipped per measurement vector. It is worth noting that these results indicate that we can expect similarly good performance for other noise distributions (*e.g.*, Poisson noise) with a similar average number of bit flips. Equivariant setting using shifts. We evaluate the setting of learning with a single operator by using the unsupervised equivariant objective in (27) with 2D shifts as the group of transformations (as the MNIST dataset is approximately shift-invariant). Figure 9 shows the average test PSNR and reconstructed images as a function of the measurements m for various algorithms. ![15_image_0.png](15_image_0.png) Figure 8: Robustness of the proposed learning algorithm to noise in the measurement data. **Left:** Average test PSNR as a function of the standard deviation of the noise. **Right:** Average test PSNR as a function of the average percentage of flipped bits per measurement vector. ![15_image_1.png](15_image_1.png) ![15_image_2.png](15_image_2.png) Figure 9: (a) Average test PSNR and (b) reconstructed test images by the compared algorithms with a single operator A as a function of the undersampling ratio m/n. The proposed unsupervised method significantly outperforms the linear inverse, BIHT, and the measurement consistent network in all sampling regimes, and performs closely to supervised learning for m/n > 0.4. ## 5.2 Other Datasets In order to demonstrate the robustness of the proposed method across datasets, we evaluate the proposed unsupervised approach on the FashionMNIST (Xiao et al., 2017), CelebA (Liu et al., 2015) and Flowers (Nilsback & Zisserman, 2008) datasets. The FashionMNIST dataset consists of 6 × 10⁴ greyscale images with 28 × 28 pixels, which are divided across G = 10 different forward operators. 
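The noisy measurement model (34) and the corresponding average bit-flip rate reported in Figure 8 are easy to reproduce; below is a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def noisy_binary_measurements(x, A, sigma, rng):
    """Sample y = sign(A x + eps) with eps ~ N(0, sigma^2 I), as in (34)."""
    return np.sign(A @ x + sigma * rng.standard_normal(A.shape[0]))

def bit_flip_rate(x, A, sigma, rng, trials=1000):
    """Average fraction of bits flipped by the noise, cf. Figure 8 (right)."""
    clean = np.sign(A @ x)
    flips = [np.mean(noisy_binary_measurements(x, A, sigma, rng) != clean)
             for _ in range(trials)]
    return float(np.mean(flips))
```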
As with MNIST, we use N = 6 × 10³ samples per operator for training and 10³ per operator for testing. For the CelebA dataset, we use G = 10 forward operators and choose a subset of 10³ images for each operator for training and another subset of the same size for testing. The Flowers dataset consists of 6149 color images for training and 1020 images for testing, all associated with the same forward operator. For both the CelebA and Flowers datasets, a center crop of 128 × 128 pixels of each color image was used for training and testing. Table 2 shows the average test PSNR of the proposed unsupervised method, standard supervised learning, BIHT, and the linear inverse. For BIHT, we use the Daubechies4 orthonormal wavelet basis and optimize the step size and sparsity level via grid search. The self-supervised method obtains an average test PSNR which is only 1 to 2 dB below the supervised approach. Figures 10 and 11 show reconstructed test images by the evaluated approaches for each forward operator. The proposed unsupervised method is able to provide good estimates of the images, while only having access to highly incomplete binary information. The supervised method obtains sharper images, however at the cost of hallucinating details, whereas the proposed method obtains blurrier estimates with fewer hallucinated details. ![16_image_0.png](16_image_0.png) Figure 10: Reconstructed test images using the FashionMNIST dataset. Each column corresponds to a test image observed via a different forward operator A_g. ![16_image_1.png](16_image_1.png) Figure 11: Reconstructed test images using the CelebA dataset. Each column corresponds to a test image observed via a different forward operator A_g. ## 6 Conclusions And Future Work The theoretical analysis in this work characterizes the best approximation of a low-dimensional set that can be obtained from binary measurements. The model identification bounds presented here apply to a large class of signal models, as they only rely on the box-counting dimension, and complement those existing for signal recovery from binary measurements (Goyal et al., 1998; Jacques et al., 2013). Moreover, the proposed self-supervised loss provides a practical algorithm for learning to reconstruct signals from binary measurements alone, which performs closely to fully supervised learning. This work paves the way for deploying machine learning algorithms in scientific and medical imaging applications with quantized observations, where no ground-truth references are available for training. We leave the proof of Conjecture 9, and a study of the effect of noise in the observations and related dithering techniques, for future work. Another avenue of future research is the extension of Theorem 7 to the case of operators related through the action of a group. | Dataset | n | m | G | Linear Inverse | BIHT | Supervised | SSBM(ours) | |--------------|-------|------|--------|------------------|--------------|--------------|--------------| | FashionMNIST | 784 | 300 | 10 | 6.38 ± 0.23 | 10.68 ± 0.31 | 17.63 ± 0.33 | 16.47 ± 0.22 | | CelebA | 49152 | 9830 | 10 | 4.81 ± 0.32 | 16.26 ± 0.40 | 21.59 ± 0.31 | 19.53 ± 0.3 | | Flowers | 49152 | 9830 | shifts | 5.31 ± 0.72 | 14.62 ± 0.92 | 18.26 ± 0.75 | 16.45 ± 0.71 | Table 2: Average test PSNR in dB obtained by the compared methods for the FashionMNIST, CelebA and Flowers datasets. 
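For reference, the evaluation protocol in (32) and (33), used throughout the figures and Table 2, can be sketched as follows (a minimal NumPy version; we assume a base-10 logarithm for the dB scale, and all names are ours):

```python
import numpy as np

def psnr_normalized(x, x_hat):
    """PSNR after rescaling x_hat to the norm of x, as in (33)."""
    x_hat = x_hat * np.linalg.norm(x) / np.linalg.norm(x_hat)
    return -20 * np.log10(np.linalg.norm(x - x_hat))

def average_test_psnr(f, X_test, A_list, g_idx):
    """Average test PSNR over ground-truth signals, as in (32)."""
    scores = []
    for x, g in zip(X_test, g_idx):
        y = np.sign(A_list[g] @ x)
        scores.append(psnr_normalized(x, f(y, A_list[g])))
    return float(np.mean(scores))
```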
## Acknowledgments Part of this research was supported by the Agence Nationale de la Recherche (Project UNLIP) and by the Fonds de la Recherche Scientifique - FNRS under Grant T.0136.20 (Project Learn2Sense). ## References G Alberti, G Schirinzi, G Franceschetti, and V Pascazio. Time-domain convolution of one-bit coded radar signals. In *IEE Proceedings F (Radar and Signal Processing)*, volume 138, pp. 438–444. IET, 1991. Keith Ball et al. An elementary introduction to modern convex geometry. *Flavors of geometry*, 31(1-58):26, 1997. R Baraniuk, S Foucart, D Needell, Y Plan, and M Wootters. One-bit compressive sensing of dictionarysparse signals. *Information and Inference: A Journal of the IMA*, 7(1):83–104, 08 2017. ISSN 2049-8764. doi: 10.1093/imaiai/iax009. URL https://doi.org/10.1093/imaiai/iax009. Richard G Baraniuk and Michael B Wakin. Random projections of smooth manifolds. *Foundations of* computational mathematics, 9(1):51–77, 2009. T. Blumensath and M. E. Davies. Sampling theorems for signals from the union of finite-dimensional linear subspaces. *IEEE Transactions on Information Theory*, 55(4):1872–1882, 2009. doi: 10.1109/TIT.2009. 2013003. Petros T Boufounos, Laurent Jacques, Felix Krahmer, and Rayan Saab. Quantization and compressive sensing. In *Compressed Sensing and its Applications: MATHEON Workshop 2013*, pp. 193–237. Springer, 2015. Anthony Bourrier, Mike Davies, Tomer Peleg, Patrick Pérez, and Rémi Gribonval. Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems. *IEEE Transactions on Information* Theory, 60(12):7928–7946, 2014. Ching-Hsien Chen and Jwo-Yuh Wu. Amplitude-aided 1-bit compressive sensing over noisy wireless sensor networks. *IEEE Wireless Communications Letters*, 4(5):473–476, 2015. Dongdong Chen, Julián Tachella, and Mike Davies. Equivariant imaging: Learning beyond the range space. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 4379–4388, October 2021. Dongdong Chen, Julián Tachella, and Mike Davies. Robust equivariant imaging: a fully unsupervised framework for learning to image from noisy and partial measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Mark A Davenport, Yaniv Plan, Ewout Van Den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference: A Journal of the IMA, 3(3):189–223, 2014. Kenneth Falconer. *Fractal geometry: mathematical foundations and applications*. John Wiley & Sons, 2004. V.K. Goyal, M. Vetterli, and N.T. Thao. Quantized overcomplete expansions in R N : analysis, synthesis, and algorithms. *IEEE Transactions on Information Theory*, 44(1):16–31, 1998. doi: 10.1109/18.650985. Matthias Hein and Jean-Yves Audibert. Intrinsic dimensionality estimation of submanifolds in R d. In Proceedings of the 22nd international conference on Machine learning (ICML), pp. 289–296, 2005. Laurent Jacques, Jason N. Laska, Petros T. Boufounos, and Richard G. Baraniuk. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. *IEEE transactions on information theory*, 59(4): 2082–2102, 2013. Ahmed Kirmani, Dheera Venkatraman, Dongeek Shin, Andrea Colaço, Franco NC Wong, Jeffrey H Shapiro, and Vivek K Goyal. First-photon imaging. *Science*, 343(6166):58–61, 2014. Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, Timo Aila, et al. Noise2Noise. In *International Conference on Machine Learning (ICML)*. PMLR, 2018. 
Jiaming Liu, Yu Sun, Cihat Eldeniz, Weijie Gan, Hongyu An, and Ulugbek S Kamilov. RARE: Image reconstruction using deep priors learned without groundtruth. *IEEE Journal of Selected Topics in Signal* Processing, 14(6):1088–1099, 2020. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. Vishal Monga, Yuelong Li, and Yonina C Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. *IEEE Signal Processing Magazine*, 38(2):18–44, 2021. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing*, pp. 722–729. IEEE, 2008. Samet Oymak and Ben Recht. Near-optimal bounds for binary embeddings of arbitrary sets. *arXiv preprint* arXiv:1512.04433, 2015. Gilles Pisier. *The volume of convex bodies and Banach space geometry*, volume 94. Cambridge University Press, 1999. Yaniv Plan and Roman Vershynin. One-bit compressed sensing by linear programming. *Communications* on pure and Applied Mathematics, 66(8):1275–1297, 2013. Gilles Puy, Mike E Davies, and Rémi Gribonval. Recipes for stable linear embeddings from hilbert spaces to R m. *IEEE Transactions on Information Theory*, 63(4):2171–2187, 2017. Lucas Rencker, Francis Bach, Wenwu Wang, and Mark D Plumbley. Sparse recovery and dictionary learning from nonlinear compressive measurements. *IEEE Transactions on Signal Processing*, 67(21):5659–5670, 2019. Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena, 60(1-4):259–268, 1992. Julián Tachella, Dongdong Chen, and Mike Davies. Unsupervised learning from incomplete measurements for inverse problems. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=aV9WSvM6N3. Julián Tachella, Dongdong Chen, and Mike Davies. Sensing theorems for learning from incomplete measurements. *Journal of Machine Learning Research*, 24(39):1–45, 2023. Nguyen T Thao and Martin Vetterli. Lower bound on the mean-squared error in oversampled quantization of periodic signals using vector quantization analysis. *IEEE Transactions on Information Theory*, 42(2): 469–479, 1996. Roman Vershynin. *High-dimensional probability: An introduction with applications in data science*, volume 47. Cambridge university press, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. Burhaneddin Yaman, Seyed Amir Hossein Hosseini, Steen Moeller, Jutta Ellermann, Kâmil Uğurbil, and Mehmet Akçakaya. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. *Magnetic resonance in medicine*, 84(6):3172–3191, 2020. Hadi Zayyani, Mehdi Korki, and Farrokh Marvasti. Dictionary learning for blind one bit compressed sensing. IEEE Signal Processing Letters, 23(2):187–191, 2015. ## A Technical Lemmas We begin by introducing some technical results that play an important role in the main theorems of the paper. We start with a result from (Jacques et al., 2013). Lemma 11 (Lemma 9 in (Jacques et al., 2013)). Given 0 ≤ ϵ < 1 *and two unit vectors* x, ˜ v˜ ∈ S n−1 ⊂ R n and a ∈ R n with ai ∼i.i.d. 
N(0, 1)*, we have* $$p_{0}=\mathbb{P}\left[\forall x\in B_{\epsilon}(\tilde{x}),\forall v\in B_{\epsilon}(\tilde{v})\ :\ \operatorname{sign}\left(a^{\top}v\right)=\operatorname{sign}\left(a^{\top}x\right)\right]\geq 1-d(\tilde{x},\tilde{v})-\sqrt{\frac{n\pi}{2}}\,\epsilon\tag{35}$$ $$p_{1}=\mathbb{P}\left[\forall x\in B_{\epsilon}(\tilde{x}),\forall v\in B_{\epsilon}(\tilde{v})\ :\ \operatorname{sign}\left(a^{\top}v\right)\neq\operatorname{sign}\left(a^{\top}x\right)\right]\geq d(\tilde{x},\tilde{v})-\sqrt{\frac{n\pi}{2}}\,\epsilon\tag{36}$$ where d(·, ·) *denotes the angular distance.* Remark: The angular distances in Lemma 11 can be translated into Euclidean distances thanks to the inequality $$d(\tilde{x},\tilde{v})\geq\frac{2}{\pi}\sin\Big(\frac{\pi}{2}d(\tilde{x},\tilde{v})\Big)=\frac{1}{\pi}\|\tilde{x}-\tilde{v}\|,\tag{37}$$ which uses $2\sin(\frac{\pi}{2}d(\tilde{x},\tilde{v}))=\|\tilde{x}-\tilde{v}\|$ for unit vectors. Let C⁰(S) denote the set of continuous functions on the set S. This lemma has the following corollary: Corollary 12. *Given* x̃ ∈ S^{n−1}, 0 < ϵ < 1/2, *and* a ∈ R^n *with* a_i ∼ i.i.d. N(0, 1)*, we have* $$\mathbb{P}\Big[\operatorname{sign}\big(a^{\top}\cdot\big)\notin C^{0}\big(B_{\epsilon}(\tilde{x})\cap\mathbb{S}^{n-1}\big)\Big]\leq\sqrt{n}\,\epsilon.$$ Proof. The proof can be derived from the complement of the event associated with p0 in (35) when x̃ = ṽ. Here is, however, a simplified proof for completeness. We first observe that sign(a^⊤·) is discontinuous over B_ϵ(x̃) ∩ S^{n−1} iff $|\frac{a^{\top}\tilde{x}}{\|a\|}|\leq\epsilon$. Therefore, by the rotational invariance of the Gaussian distribution, we can choose x̃ = [1, 0, . . . , 0]^⊤, and the probability above amounts to computing $$p:=\mathbb{P}\big[|\tfrac{a_{1}}{\|a\|}|\leq\epsilon\big]=\mathbb{P}\big[a_{1}^{2}\leq\epsilon^{2}\|a\|^{2}\big]=\mathbb{P}\big[a_{1}^{2}\leq\tfrac{\epsilon^{2}}{1-\epsilon^{2}}(a_{2}^{2}+\ldots+a_{n}^{2})\big]=\mathbb{E}_{\xi}\,\mathbb{P}\big[a_{1}^{2}\leq\tfrac{\epsilon^{2}}{1-\epsilon^{2}}\xi\big],$$ where ξ ∼ χ²(n − 1). Since $\mathbb{P}[a_{1}^{2}\leq\tfrac{\epsilon^{2}}{1-\epsilon^{2}}\xi]\leq\sqrt{\tfrac{2}{\pi}}\tfrac{\epsilon}{\sqrt{1-\epsilon^{2}}}\sqrt{\xi}$, and $\mathbb{E}_{\xi}\sqrt{\xi}\leq\sqrt{\mathbb{E}_{\xi}\xi}=\sqrt{n-1}\leq\sqrt{n}$ by Jensen's inequality, we finally get $p\leq\sqrt{\tfrac{2}{\pi}}\tfrac{\epsilon}{\sqrt{1-\epsilon^{2}}}\sqrt{n}\leq\tfrac{2\sqrt{2}}{\sqrt{3\pi}}\,\epsilon\sqrt{n}<\epsilon\sqrt{n}$, where the last bound uses ϵ < 1/2 so that 1 − ϵ² > 3/4. ## B Signal Recovery Proof Proof of Theorem 1. Proving this theorem amounts to showing that the probability of failure of the event $$\operatorname{sign}\left(Ax\right)=\operatorname{sign}\left(As\right)\implies\|x-s\|<\delta$$ decays exponentially in m provided that $$m\geq\frac{4}{\delta}\Big(2k\log\frac{30\sqrt{n}}{\delta}+\log\frac{1}{\xi}\Big)$$ holds. In other words, we want to upper bound p_δ := P[∃x1, x2 ∈ X, ∥x1 − x2∥ > δ : sign(Ax1) = sign(Ax2)] with such an exponential decay. As boxdim(X) < k, there exists a constant ϵ0 ∈ (0, 1/2) such that N(X, ϵ) ≤ ϵ^{−k} for all ϵ ≤ ϵ0. Thus, there is a covering set Q_ϵ of ϵ^{−k} points such that, for every x ∈ X, there exists a point q ∈ Q_ϵ which verifies ∥x − q∥ < ϵ. Thanks to this covering, we can upper bound p_δ as p_δ ≤ P[∃q1, q2 ∈ Q_ϵ, ∃x1 ∈ B_ϵ(q1), ∃x2 ∈ B_ϵ(q2), ∥x1 − x2∥ > δ : sign(Ax1) = sign(Ax2)]. However, since ∥x1 − x2∥ > δ, we must have ∥q1 − q2∥ ≥ ∥x1 − x2∥ − 2ϵ > δ − 2ϵ. Therefore, defining Q_{ϵ,δ} = {(q, q′) ∈ Q_ϵ × Q_ϵ : ∥q − q′∥ > δ − 2ϵ}, the previous upper bound can be enlarged as p_δ ≤ P[∃(q1, q2) ∈ Q_{ϵ,δ}, ∃x1 ∈ B_ϵ(q1), ∃x2 ∈ B_ϵ(q2) : sign(Ax1) = sign(Ax2)]. 
Given a fixed pair (q1, q2) ∈ Q_{ϵ,δ}, Lemma 11 shows that, for a ∈ R^n drawn from a standard Gaussian distribution, $$\mathbb{P}\left[\forall x_{1}\in B_{\epsilon}(q_{1}),\forall x_{2}\in B_{\epsilon}(q_{2}):\operatorname{sign}\left(a^{\top}x_{1}\right)\neq\operatorname{sign}\left(a^{\top}x_{2}\right)\right]\geq\frac{1}{\pi}\|q_{1}-q_{2}\|-\sqrt{\frac{n\pi}{2}}\,\epsilon>\frac{\delta-2\epsilon}{\pi}-\sqrt{\frac{n\pi}{2}}\,\epsilon.\tag{38}$$ By setting $\epsilon=\epsilon(\delta)=\frac{(4-\pi)\delta}{8+4\pi\sqrt{n\pi/2}}$, so that the right-hand side of (38) equals δ/4, and taking the probability of the complementary event, we obtain $$\mathbb{P}\left[\exists x_{1}\in B_{\epsilon}(q_{1}),\exists x_{2}\in B_{\epsilon}(q_{2}):\ \operatorname{sign}\left(a^{\top}x_{1}\right)=\operatorname{sign}\left(a^{\top}x_{2}\right)\right]\leq 1-\delta/4.\tag{39}$$ Therefore, considering the m i.i.d. rows {a_i}_{i=1}^{m} ⊂ R^n of the matrix A = (a1, . . . , a_m)^⊤ ∈ R^{m×n} drawn from a standard Gaussian distribution, we have $$\mathbb{P}[\exists x_{1}\in B_{\epsilon}(q_{1}),\exists x_{2}\in B_{\epsilon}(q_{2}):\ \operatorname{sign}\left(Ax_{1}\right)=\operatorname{sign}\left(Ax_{2}\right)]\leq\prod_{i=1}^{m}\mathbb{P}[\exists x_{1}\in B_{\epsilon}(q_{1}),\exists x_{2}\in B_{\epsilon}(q_{2}):\ \operatorname{sign}\left(a_{i}^{\top}x_{1}\right)=\operatorname{sign}\left(a_{i}^{\top}x_{2}\right)]\leq(1-\delta/4)^{m}.$$ Applying a union bound to all pairs (q1, q2) ∈ Q_{ϵ(δ),δ} ⊂ Q_ϵ × Q_ϵ, since there are no more than |Q_ϵ|² ≤ ϵ^{−2k} such pairs, we obtain $$p_{\delta}\leq\left(\frac{8+4\pi\sqrt{\pi n/2}}{(4-\pi)\delta}\right)^{2k}(1-\delta/4)^{m}\leq\exp\left(2k\log\Big(\frac{8+4\pi\sqrt{\pi n/2}}{(4-\pi)\delta}\Big)-\frac{m\delta}{4}\right),\tag{40}$$ where we used 1 − δ/4 ≤ exp(−δ/4) for δ > 0. Upper bounding this probability by 0 ≤ ξ ≤ 1 as in the statement of Theorem 1 and using the crude bound $(8+4\pi\sqrt{\pi n/2})/(4-\pi)\leq 30\sqrt{n}$ for n ≥ 1, we finally obtain $\frac{m\delta}{4}\geq 2k\log\frac{30\sqrt{n}}{\delta}+\log\frac{1}{\xi}$, which gives the sample complexity bound (9) $$m\geq\frac{4}{\delta}\Big(2k\log\frac{30\sqrt{n}}{\delta}+\log\frac{1}{\xi}\Big),\tag{41}$$ where the condition ϵ(δ) ≤ ϵ0 holds if δ ≤ 30√n ϵ0. ## C Model Identification Proof Proof of Theorem 7. We want to identify the condition that m, G and 0 < δ < 1 must respect to ensure that X ⊆ X̂_δ holds with high probability with respect to a random draw of the operators A1, . . . , AG. Equivalently, we need to show that, for this condition, $$\operatorname{sign}\left(A_{g}x_{g}\right)=\operatorname{sign}\left(A_{g}v\right),\quad\forall g=1,\ldots,G\tag{42}$$ holds for some v ∈ S^{n−1} \ X_δ and some x1, . . . , xG ∈ X with probability at most ξ with respect to a random draw of the Gaussian matrices A1, . . . , AG. This proof adapts some of the procedures given in (Jacques et al., 2013) to our specific setting. We start by bounding this probability for ϵ-balls around vectors ṽ ∈ S^{n−1} \ X_δ, x̃1, . . . , x̃G ∈ X, that is $$p_{0}:=\mathbb{P}\big[\exists(x_{1},\ldots,x_{G})\in B_{\epsilon}(\tilde{x}_{1})\times\cdots\times B_{\epsilon}(\tilde{x}_{G}),\ \exists v\in B_{\epsilon}(\tilde{v}):\ \forall g=1,\ldots,G,\ \operatorname{sign}\left(A_{g}v\right)=\operatorname{sign}\left(A_{g}x_{g}\right)\big].$$ We first notice that, from the independence of the operators {A_g}_{g=1}^{G}, $$p_{0}\leq\prod_{g=1}^{G}\mathbb{P}\big[\exists x_{g}\in B_{\epsilon}(\tilde{x}_{g}),\exists v\in B_{\epsilon}(\tilde{v}):\,\operatorname{sign}\left(A_{g}v\right)=\operatorname{sign}\left(A_{g}x_{g}\right)\big].\tag{43}$$ Furthermore, as every row of each operator A_g is i.i.d. 
as a standard Gaussian random vector a_g, we have $$p_{0}\leq\prod_{g=1}^{G}\mathbb{P}\left[\exists x_{g}\in B_{\epsilon}(\tilde{x}_{g}),\exists v\in B_{\epsilon}(\tilde{v}):\operatorname{sign}\left(a_{g}^{\top}v\right)=\operatorname{sign}\left(a_{g}^{\top}x_{g}\right)\right]^{m}=\prod_{g=1}^{G}\left(1-\mathbb{P}\left[\forall x_{g}\in B_{\epsilon}(\tilde{x}_{g}),\forall v\in B_{\epsilon}(\tilde{v}):\operatorname{sign}\left(a_{g}^{\top}v\right)\neq\operatorname{sign}\left(a_{g}^{\top}x_{g}\right)\right]\right)^{m}.\tag{44}$$ From Lemma 11, we know that $$\mathbb{P}\left[\forall x_{g}\in B_{\epsilon}(\tilde{x}_{g}),\forall v\in B_{\epsilon}(\tilde{v}):\,\operatorname{sign}\left(a_{g}^{\top}v\right)\neq\operatorname{sign}\left(a_{g}^{\top}x_{g}\right)\right]\geq\frac{1}{\pi}\|\tilde{x}_{g}-\tilde{v}\|-\sqrt{\frac{n\pi}{2}}\,\epsilon,\tag{45}$$ where the distance ∥x̃_g − ṽ∥ can be lower bounded by δ to obtain $$\mathbb{P}\left[\forall x_{g}\in B_{\epsilon}(\tilde{x}_{g}),\forall v\in B_{\epsilon}(\tilde{v}):\,\operatorname{sign}\left(a_{g}^{\top}v\right)\neq\operatorname{sign}\left(a_{g}^{\top}x_{g}\right)\right]\geq\frac{\delta}{\pi}-\sqrt{\frac{n\pi}{2}}\,\epsilon.\tag{46}$$ Plugging this into (44) and picking $\frac{\delta}{\pi}-\sqrt{\frac{n\pi}{2}}\epsilon=\frac{\delta}{4}$, which means that $$\epsilon=\epsilon(\delta)=\Big(\frac{4-\pi}{\sqrt{8\pi^{3}}}\Big)\frac{\delta}{\sqrt{n}}\leq\frac{1}{18}\frac{\delta}{\sqrt{n}},$$ we get $$p_{0}\leq\left(1-\frac{\delta}{\pi}+\sqrt{\frac{n\pi}{2}}\,\epsilon\right)^{mG}\leq\exp\Big(-\frac{\delta}{4}mG\Big).\tag{47}$$ We can extend this result to all vectors v ∈ S^{n−1} \ X_δ and x1, . . . , xG ∈ X by applying a union bound over a covering of the product set X^G × (S^{n−1} \ X_δ). Since we can cover X with ϵ^{−k} balls for ϵ ≤ ϵ0, due to the assumption that boxdim(X) < k, and also cover S^{n−1} \ X_δ with (3/ϵ)^n balls (Pisier, 1999), we have $$\mathbb{P}[\exists x_{1},\ldots,x_{G}\in\mathcal{X},\exists v\in(\mathbb{S}^{n-1}\setminus\mathcal{X}_{\delta}):\forall g=1,\ldots,G,\ \operatorname{sign}\left(A_{g}v\right)=\operatorname{sign}\left(A_{g}x_{g}\right)]\leq\epsilon^{-kG}(\epsilon/3)^{-n}p_{0}.\tag{48}$$ Using the bound (47), the upper bound on ϵ, and bounding the resulting probability by ξ, we obtain $$\xi\geq\epsilon^{-kG}\Big(\frac{\epsilon}{3}\Big)^{-n}\exp\Big(-\frac{\delta}{4}mG\Big)=\exp\Big(kG\log\Big(\frac{1}{\epsilon}\Big)+n\log\Big(\frac{3}{\epsilon}\Big)-\frac{\delta}{4}mG\Big)\geq\exp\left[kG\log\Big(\frac{18\sqrt{n}}{\delta}\Big)+n\log\Big(\frac{54\sqrt{n}}{\delta}\Big)-\frac{\delta}{4}mG\right].$$ Equivalently, $m\geq\frac{4}{\delta}\big[k\log(\frac{18\sqrt{n}}{\delta})+\frac{n}{G}\log(\frac{54\sqrt{n}}{\delta})+\frac{1}{G}\log(\frac{1}{\xi})\big]$, which holds if $$m\geq\frac{4}{\delta}\big[\big(k+\frac{n}{G}\big)\log\big(\frac{54\sqrt{n}}{\delta}\big)+\frac{1}{G}\log\big(\frac{1}{\xi}\big)\big].$$ Recalling that we must have ϵ < ϵ0, we observe that this condition is met if δ < 18√n ϵ0. Derivation of δ. Here we aim to upper bound the minimum identification error, *i.e.*, the minimum value of δ, for a fixed number of measurements m. 
The bound in Theorem 7, that is $$m\geq\frac{4}{\delta}\Big(\big(k+\frac{n}{G}\big)\log\big(\frac{54\sqrt{n}}{\delta}\big)+\frac{1}{G}\log\big(\frac{1}{\xi}\big)\Big)\tag{49}$$ can be rewritten as a function of δ as $$\log(\delta)+\delta a\geq b\tag{50}$$ where $$a=\frac{m}{4(k+\frac{n}{G})},\quad\text{and}\ b=\log 54\sqrt{n}+\frac{1}{(Gk+n)}\log\frac{1}{\xi}.$$ Notice that b ≥ 1, and a ≥ 1 from (49) since 0 < δ < 1. The expression in (50) holds if $$\delta\geq\frac{1}{a}(\log(a)+b).\tag{51}$$ Indeed, (51) implies that log(δ) + δa ≥ log(δ) + log(a) + b = log(aδ) + b. However, again from (51), we get aδ ≥ log(a) + b ≥ 1 since a, b ≥ 1. Therefore, log(δ) + δa ≥ log(aδ) + b ≥ b. Finally, picking the smallest δ respecting (51), we get, for large m, n and G, $$\delta=\mathcal{O}\left(\frac{k+\frac{n}{G}}{m}\log\frac{mn}{k+\frac{n}{G}}\right)\tag{52}$$ which, for n/G ≪ k, reads $$\delta=\mathcal{O}\left(\frac{k}{m}\log\frac{mn}{k}\right).\tag{53}$$ ## D Sample Complexity Proof Proof of Theorem 10. We aim to bound the number of different cells associated with the binary mapping sign(A·) which contain at least one element of the signal set X, *i.e.*, |sign(AX)|. Our strategy consists in obtaining a global bound on the number of discontinuities of the binary mapping (in other words, sign changes) over the image of a covering of X, which can then be related to the number of different cells that contain at least one element of X. For ϵ < ϵ0, let Q_ϵ ⊂ X be an optimal ϵ-covering of X. If boxdim(X) < k, then there exists an ϵ0 ∈ (0, 1/2) such that |Q_ϵ| ≤ ϵ^{−k} for all ϵ < ϵ0. Let us define the number Z(S) of discontinuous components of the binary mapping sign(A·) over a set S ⊂ S^{n−1}, *i.e.*, $$Z(S):=|\{i:\operatorname{sign}\left(a_{i}^{\top}\cdot\right)\notin C^{0}(S)\}|.$$ From the independence of the {a_i}_{i=1}^{m}, we observe that Z(S) = $\sum_{i=1}^{m}Z_{i}(S)$ is a binomial random variable, that is, the sum of m i.i.d. binary random variables with probability $$p:=\mathbb{P}\Big[\operatorname{sign}\big(a^{\top}\cdot\big)\notin C^{0}(S)\Big],$$ where a is a standard Gaussian random vector. From Corollary 12, we have p ≤ √n ϵ for any set of the form S = S_{q,ϵ} := B_ϵ(q) ∩ S^{n−1} with q ∈ R^n. Using Bernstein's inequality (Vershynin, 2018) on the random variable Z(S_{q,ϵ}) with EZ(S_{q,ϵ}) = mp ≤ m√n ϵ, we obtain $$\mathbb{P}[Z(S_{q,\epsilon})>2m\sqrt{n}\epsilon]\leq\mathbb{P}[Z>2\mathbb{E}Z]\leq\exp\Big(-\frac{3}{8}m\sqrt{n}\epsilon\Big).$$ Therefore, since |Q_ϵ| ≤ ϵ^{−k}, we get from a union bound $$\mathbb{P}[\forall q\in Q_{\epsilon}:\ Z(S_{q,\epsilon})\leq 2m\sqrt{n}\epsilon]\geq 1-\exp\Big(k\log\Big(\frac{1}{\epsilon}\Big)-\frac{3}{8}m\sqrt{n}\epsilon\Big).\tag{54}$$ Let us fix ϵ by setting a failure probability 0 < ξ < 1 such that ξ = exp(k log(1/ϵ) − (3/8)m√n ϵ), *i.e.*, $$2m\sqrt{n}\epsilon=\frac{16}{3}\Big[\log\Big(\frac{1}{\xi}\Big)+k\log\Big(\frac{1}{\epsilon}\Big)\Big].\tag{55}$$ This implicitly imposes (3/8)m√n ϵ > k log(1/ϵ), and since ϵ < min(ϵ0, 1/2), we get (3/8)m√n ϵ > k log 2, or $$\frac{8k\log 2}{3m\sqrt{n}}<\epsilon.\tag{56}$$ Since the left-hand side of (56) has to be smaller than min(ϵ0, 1/2), we have that 2k/(m√n) < min(ϵ0, 1/2) (using the fact that 2 > (8/3) log 2). For any set S ⊆ S^{n−1}, the number of cells generated by sign(A·) in this set cannot exceed 2 to the power of the number of discontinuous components of this mapping, *i.e.*, |sign(AS)| ≤ 2^{Z(S)}. 
Thus, with probability 1 − ξ, and given q ∈ Q_ϵ, $$|\operatorname{sign}\left(AS_{q,\epsilon}\right)|\leq 2^{\frac{16}{3}\left[\log\left(\frac{1}{\xi}\right)+k\log\left(\frac{1}{\epsilon}\right)\right]}=\Big(\frac{1}{\xi}\Big)^{\frac{16\log 2}{3}}\Big(\frac{1}{\epsilon}\Big)^{\frac{16\log 2}{3}k}<\Big(\frac{1}{\xi}\Big)^{4}\Big(\frac{1}{\epsilon}\Big)^{4k}.$$ Since there are at most ϵ^{−k} balls in the covering, and using (56), we obtain the bound $$|\operatorname{sign}\left(A\mathcal{X}\right)|\leq\sum_{q\in Q_{\epsilon}}|\operatorname{sign}\left(AS_{q,\epsilon}\right)|<\Big(\frac{1}{\xi}\Big)^{4}\Big(\frac{1}{\epsilon}\Big)^{5k}<\Big(\frac{1}{\xi}\Big)^{4}\Big(\frac{3m\sqrt{n}}{8k\log 2}\Big)^{5k}<\Big(\frac{1}{\xi}\Big)^{4}\Big(\frac{3m\sqrt{n}}{5k}\Big)^{5k}.$$ We now prove a bound on the expected number of intersected cells. We first observe that $$\mathbb{E}|\operatorname{sign}\left(A\mathcal{X}\right)|\leq\sum_{q\in Q_{\epsilon}}\mathbb{E}|\operatorname{sign}\left(AS_{q,\epsilon}\right)|,$$ and, for any set S ⊆ S^{n−1}, the independence of the random variables Z_i(S) provides $$\mathbb{E}|\operatorname{sign}\left(AS\right)|\leq\mathbb{E}\,2^{Z(S)}=\mathbb{E}\big[2^{\sum_{i=1}^{m}Z_{i}(S)}\big]=\mathbb{E}\big[\prod_{i}2^{Z_{i}(S)}\big]=\prod_{i}\mathbb{E}\,2^{Z_{i}(S)}.$$ Moreover, considering the previous covering Q_ϵ of X, if S = S_{q,ϵ} for some q ∈ Q_ϵ, $$\mathbb{E}\,2^{Z_{i}(S)}=2^{0}\,\mathbb{P}(Z_{i}(S)=0)+2^{1}\,\mathbb{P}(Z_{i}(S)=1)=(1-p)+2p=1+p\leq 1+\sqrt{n}\epsilon\leq e^{\sqrt{n}\epsilon}.$$ Therefore, E|sign(AS_{q,ϵ})| ≤ e^{m√n ϵ}, and $$\mathbb{E}|\operatorname{sign}\left(A\mathcal{X}\right)|\leq\epsilon^{-k}e^{m\sqrt{n}\epsilon}=e^{k\log\left(\frac{1}{\epsilon}\right)+m\sqrt{n}\epsilon}.$$ The function k log(1/ϵ) + m√n ϵ is convex in ϵ and reaches its minimum at ϵ = k/(m√n). Therefore, by setting ϵ = k/(m√n) and imposing k/(m√n) ≤ min(ϵ0, 1/2), we get $$k\log\Big(\frac{m\sqrt{n}}{k}\Big)+k=k\Big(1+\log\Big(\frac{m\sqrt{n}}{k}\Big)\Big),$$ which finally gives $$\mathbb{E}|\operatorname{sign}\left(A\mathcal{X}\right)|\leq e^{k\left(1+\log\left(\frac{m\sqrt{n}}{k}\right)\right)}=\Big(\frac{em\sqrt{n}}{k}\Big)^{k}.$$ ## E Choice Of Trade-Off Parameter We evaluate the impact of the trade-off parameter of the proposed SSBM algorithm for different sampling ratios m/n on the MNIST dataset. In all cases, we use G = 10 operators. Figure 12 shows the average test PSNR of the learned networks. The optimal choice of α decreases with the number of measurements m. In the experiments, we choose α = 0.1 if m < n and α = 0.06 otherwise. ![24_image_0.png](24_image_0.png) Figure 12: Impact of the trade-off parameter α of the SSBM learning algorithm as a function of the sampling ratio m/n for the MNIST problem with G = 10 operators.
Review 1: Summary: This paper focuses on the problem of reconstructing a set $\mathcal{X}$ from binary measurements of the type $y_i = \mathrm{sign}(A x_i)$ for $x \in \mathcal{X}$. The authors tackle two problems: signal reconstruction (i.e.\ estimating the inverse mapping $y \mapsto x$) and model identification, i.e. the identification of the set $\mathcal{X}$ provided this set is effectively low-dimensional. The setup chosen is challenging, as the authors aim for a study of unsupervised learning procedures, in which the statistician only has access to $(y_i)_{i=1}^N$, and not the ground-truth signals $(x_i)$. To make it tracktable, two separate setups are assumed: * The measurement operator varies accross observations, that is $y_i = \mathrm{sign}(A_{g_i} x_i) \in \mathbb{R}^m$, and $g_i \in [G]$. There are here $G$ different (known) measurement operators. * The measurement operator does not vary, but the set $\mathcal{X}$ is known to be invariant under the representation $(T_g)_{g \in [G]}$ of a finite group. In both cases above, the authors derive a set of three theoretical results: 1. A worst-case bound w.r.t. $(A_g)_{g \in [G]}$ in Section 3.1. For any such measurement operators, there exists a set $\mathcal{X} \subseteq \mathcal{S}^{n-1}$ such that one can not infer $\mathcal{X}$ with an error smaller than $\mathcal{O}(n / (mG))$. Having error at least $\delta$ here means that there are points in the estimated set $\hat{\mathcal{X}}$ which are at distance at least $\delta$ from the true $\mathcal{X}$. 2. An upper bound on the estimation error if $A_g$ are taken i.i.d. from a standard Gaussian distribution, in Section 3.2. If $\mathcal{X}$ has ``effective dimension'' $k$, then the error is (with high probability) at most $\mathcal{O}([(k+n/G)/n] \log[nm/(k+n/G)])$, which is sharp up to the logarithmic factor. Furthermore, the authors conjecture (in Section 3.4) that the same error estimate should hold for the signal reconstruction procedure. 3. Again in the case of Gaussian measurement operators, the authors show in Section 3.4 an upper bound on the minimal number of distinct measurements needed to perform an optimal estimation of the set $\mathcal{X}$, as a function of $m, n, G$ and the effective dimension $k$ of $\mathcal{X}$. The proofs of the main results are based on classical tools of high-dimensional probability (covering arguments and concentration inequalities). Sections 4 and 5 are dedicated to introducing a self-supervised procedure (i.e. that does not need access to the ground-truth $(x_i)$) to perform signal reconstruction. The method leverages the fact that the statistician has access to $G$ measurement operators, so that she can build ``synthetic'' measurements for each measurement operator using the current estimates $(\hat{x_i})_{i=1}^N$. The introduction of this approach is completed with numerical experiments on image datasets using random (Gaussian) measurement operators, and comparing this approach with a supervised learning approach as well as other simpler algorithms. The paper contains an interesting and well-written discussion of existing literature: however, not being an expert on the topic of model identification, I may not be aware of some imporant other works. The theoretical results of the paper generalize previous works to the case of model identification under binary measurements (both signal reconstruction under binary measurements and model identification under linear measurements were studied before). 
Strengths and Weaknesses: Besides what I have mentioned above in the summary, let me emphasize the following strengths: 1. The paper is well-written and very pleasant to read, and I thank the authors for their effort on this point. Results are well described in the introduction, which makes the paper much easier to read. 2. The theoretical bounds presented are quite general and of interest, and well discussed. 3. The self-supervised learning method is a good addition, its precise form is well justified, and the numerical analysis showing that it performed comparatively to supervised learning approaches is convincing. In my opinion, the paper has however some issues which would need corrections and improvements. If these are addressed properly I would recommend the paper for publication. These issues are given in the next paragraph on requested changes. Requested Changes: I will list first the main issues and paths for improvements, before giving a list of more minor points. **Main issues and improvements --** 1. The acknowledgements mention the funding, which might be a breach of double blindness. I will refer to the editor for this point as I am not sure of the importance of this issue. 2. The authors write that the self-supervised algorithm presented in Section 4 is motivated by the analysis of model identification in Section 3. How is that so? Perhaps this would deserve more explanation, especially since the algorithm performs signal reconstruction, not model identification. 3. Why are there so few values of $m$ presented in Figures 6-7-8 ? Adding more values could allows for more convincing plots, e.g. for the scaling with $m$ shown in Figure 6 - right, by taking a large maximal value of $m/n$ (which is at the moment $1.28$). 4. Why did the authors restrict the choice of the trade-off parameter to be $\alpha \in ${0.1, 1, 10}? A more thorough investigation of the optimal choice of $\alpha$ (at least in some particular cases) would be an interesting addition, as the current choice seems quite arbitrary. 5. The proofs in appendices contain some mistakes (often easy to fix), and did not always seem to receive the same attention in the writing compared to the main text.While this does not impact the results, the repetition of small mistakes or the lack of clarity can make a bad impression on a careful reader. I list these problems now: + In the proof of Theorem 1 the value of $\epsilon$ chosen is (I think) not correct, as one obtains $\delta/\pi - \delta/2 < 0$ in the lower bound between eqs. (34) and (35). This will probably change the constants later on and should be corrected. + In the proof of Theorem 6, the equality of eq. (43) should actually be an upper bound, as the vector $v$ does not depend on $g \in [G]$ there is no reason for it to be an equality. + I think the derivation of $\delta$ after Theorem 6 should be one of an upper bound on the minimal error $\delta$ achievable for a given value of $m$, the current presentation of a lower bound is a bit confusing. For this minimal error eq. (61) becomes an equality, and the bound of eq. (63) becomes an upper bound, which is consistent (see the next point). + I missed something in eq.(63): how does it follow from eq.(62)? Also eq.(62) is only valid if $ae^b \geq 1$, how does this translate to the original parameters? See also the previous point for a possibly simpler presentation. + One should precise for which values of $\xi$ eq.(64) holds (even though it holds even for $\xi$ exponentially small in $n$ I believe). 
+ The argument in the proof of Theorem 8 combining Corollary 10 with a union bound needs to be explained more. To me it seems to be using Bernstein's inequality, which would need to be at least mentioned. + In the proof of Theorem 8, how does the bound on $Z(S)$ transfers to a bound on $|\Phi(S)|$? This is used in the proof, but not really explained. + In the proof of Theorem 8, should the condition $m\sqrt{n} / (sk) > e$ be replaced by (the stronger) $3 m\sqrt{n} / (16sk) > e$? **Minor points --** 1. In Table 1 one should introduce what the identification error is, since it is only introduced later in the text. 2. In Theorem 1, the condition on $\delta$ should be given before the condition on $m$, since the latter depends on the former. Moreover I believe the implication that is proven is $\mathrm{sign}(Ax) = \mathrm{sign}(As) \Rightarrow \|x - s \| < \delta$. 3. Does Section 3.4 assume $k < m$? It seems so since the arguments use binomial coefficients $\binom{m}{k}$, but this condition is not written. 4. Before eq. (25) it would be good to recall that in this setting $T_g$ is a representation of some group under which the set $\mathcal{X}$ is invariant. 5. In Figure 6 (and maybe the latter figures as well), one should emphasize that the PSNR grows logarithmically with $\| x - \hat{x} \|$ as it goes to zero, or write on the axis that the unit is in dB, to enhance readability. 6. In Figure 6 it seems to me that the measurement consistency actually clearly separates from the linear inverse approach for $m/n \gtrsim 0.5$, not exactly at $m/n = 1$ as stated in the text. Is there an explanation for this phenomenon? 7. In Figure 8, one should match the names of the different algorithms in the left and right figures. 8. In the proof of Proposition 3, I think $v$ is just a non-zero element of the nullspace rather than a generator (as the nullspace might have dimension larger than $1$). 9. The bound of eq. (46) is already used in the proof of Theorem 1 without being stated, it should be first stated there. 10. It would be good to add a reference for the standard covering number upper bound above eq. (48). **Other questions --** 1. Is the bound of Proposition 4 tight (up to constant factors) for some matrices? Proposition 3 shows that if $mG < n$ then it is tight, but what about when $m > n/G$? 2. In Section 3.3, can one obtain a bound using Proposition 2 or Corollary 5? If model identification is difficult then signal recovery should probably also be difficult, but maybe this is not an interesting bound? **Typos --** 1. Before eq. (27), the sentence ``As the first term...'' needs rewriting. 2. In Figs. 9 and 10, should "each column'' be "each row''? 3. Proof of Theorem 1: "[...] such that'' should be "[...] that''. 4. In eq. (50) the exponent of $\epsilon$ should be $-kG - n$ not $-kG + n$. 5. In eq. (51) a minus sign is missing in front of the log in the denominator. 6. In the proof of Theorem 8, $\mathcal{X}$ should be replaced by $\mathcal{Q}$ in several places. 7. The sentence ``for some $c > 0$'' in the proof of Theorem 8 relates to nothing in the equations. 8. In the proof of Theorem 8, there is a $2$ missing in the exponent in the last equation (when using the previously derived bound on $|\Phi(V_\epsilon(q))|$). Broader Impact Concerns: I have no broader impact concern. 
================================================== Review 2: Summary: This paper considers the problem of one-bit compressed sensing, where we observe a one-bit quantization $y$ of the signal $x$ after it went through a sensing matrix $A$. While previous works focused on the reconstruction of $x$ given the knowledge of the signal set $\mathcal X$, this work focuses on reconstructing the signal set $\mathcal X$ itself based on multiple observations $y_i$. The paper provides upper and lower bounds for this reconstruction set, that match up to logarithmic factors if the set $\mathcal X$ has low box-counting dimension. They also extend previous results on signal reconstruction from the sparse setting to the more general case with low box-counting dimensions. Finally, the authors propose a loss for self-supervised learning of the signal, and provide a set of experiments to compare it with supervised or other non-supervised methods. Strengths and Weaknesses: This is overall a very good paper. The results proven are novel, relevant to both theoretical and practicioner audiences interested in self-supervised learning in compressed sensing, and rigorously proven. Even with low prior knowledge about this topic, the paper is easy to follow, with helpful diagrams explaining the key concepts of binary recovery. It does have some unclear parts in the proofs (see the requested changes section), but this paper is overall very well suited for a TMLR publication. Requested Changes: All of the changes can be considered as non-critical - The identification error should be actually defined, e.g. as $\min \\{\delta: \hat{\mathcal X} \subseteq \mathcal X_\delta\\}$ - It might be useful to add a sentence or two after Proposition 2, to explain exactly what has been proven - Proof of Prop.3: - $v$ is not a necessarily a generator of the nullspace of $\bar A$, only an element of it - it'd be more intuitive to specify directly that $x$ is in $\mathrm{Ker}(\bar A)^\bot$, instead of the range of $\bar A^\top$ (even though it's equivalent) - I personally would've found it more intuitive to fix $x \in \mathbb S^{n-1}$ and consider $x \pm \eta v$ where $\eta \to \infty$, but this is a personal preference - Experiments: - you mention the "necessary condition of Proposition 3" multiple times; you should either move the $m > n/G$ inside this proposition, or (my preferred option) put this condition in a corollary and refer to it instead. - Proof of Theorem 1: - the link from (34) to (35) could use a (simple) drawing to explain what's going on - what does the $|$ sign mean in the probabilities ? In any case, I think it should be replaced by "and" - Derivation of $\delta$: - the link from (62) to (63) is unclear, but you can get (63) by the simpler inequality $W(x) > \ln(x)/2$ valid for any $x \in \mathbb R$ - Proof of Theorem 8: I feel like the use of discontinuity points is a bit circular and non-intuitive; the argument in the proof can be summarized as $$ \text{number of cells} \asymp \text{number of cell borders} \leq \sum_{q \in Q_\epsilon} \text{number of borders in } B(q, \epsilon)$$ which is more intuitive than the discontinuity argument. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper develops, analyzes, and tests an unsupervised learning (learn using only noisy observations) algorithm for reconstructing signals from binary linear measurements (1-bit compressive sensing). 
The first few sections theoretically lower bound the reconstruction error as a function of the measurement matrix size and the number of observations. The last few sections develop a simple algorithm (essentially equivariant imaging) to train a network to reconstruct signals from binary measurements. It is tested and shown reasonably effective on the MNIST, Fashion MNIST, and CelebA datasets.

Strengths and Weaknesses:
## Strengths
Clarity: The paper is well-written, and figures like Figure 2 are intuitive and useful.
Novelty: To my knowledge, unsupervised learning methods like equivariant imaging have not been applied to binary measurements.
## Weaknesses
The theory/results are developed/tested without any measurement noise.

Requested Changes:
- On the first page, I'd use a different word than "incomplete" (maybe "underdetermined" or "fat") to describe the measurement matrix. The word "incomplete" has too many alternative meanings, e.g., matrix completion. I'd similarly avoid using "incomplete" to describe the measurement process.
- I think equation (10) should be an equality between the measurements. Presently (10) states that if the measurements of two points disagree then they must be close to one another with some probability.
- The notation in (14) seems slightly odd. Would it be clearer to write $X_d=\{v\in S^{n-1} \mid \inf_{x\in X} \|x-v\|<d\}$?
- I think the paper would be stronger if one of the tests (e.g., Fig 9) were rerun with varying amounts of measurement noise. At present, it's unclear how sensitive the proposed method is to noise.

Typos: At least on my pdf reader, the Fashion MNIST and MNIST dataset descriptions are missing multiplication signs: 6 10^5

Broader Impact Concerns: None

==================================================
Metareview:

Recommendation: Accept as is

Comment: This paper submitted to TMLR has received unanimous approval from reviewers for its novel and pertinent contributions. The authors were in particular commended for their response to feedback, providing comprehensive explanations and making corrections that have enhanced the quality of the paper.

==================================================
# A Ranking Game For Imitation Learning

Harshit Sikchi hsikchi@utexas.edu Department of Computer Science, The University of Texas at Austin

Akanksha Saran akanksha.saran@microsoft.com Microsoft Research NYC

Wonjoon Goo wonjoon@cs.utexas.edu Department of Computer Science, The University of Texas at Austin

Scott Niekum *sniekum@cs.umass.edu* Department of Computer Science, University of Massachusetts Amherst

Reviewed on OpenReview: *https://openreview.net/forum?id=d3rHk4VAf0*

## Abstract

We propose a new framework for imitation learning—treating imitation as a two-player ranking-based game between a policy and a reward. In this game, the reward agent learns to satisfy pairwise performance rankings between behaviors, while the policy agent learns to maximize this reward. In imitation learning, near-optimal expert data can be difficult to obtain, and even in the limit of infinite data it cannot imply a total ordering over trajectories as preferences can. On the other hand, learning from preferences alone is challenging, as a large number of preferences are required to infer a high-dimensional reward function, though preference data is typically much easier to collect than expert demonstrations. The classical inverse reinforcement learning (IRL) formulation learns from expert demonstrations but provides no mechanism to incorporate learning from offline preferences, and vice versa. We instantiate the proposed ranking-game framework with a novel ranking loss, giving an algorithm that can simultaneously learn from expert demonstrations and preferences, gaining the advantages of both modalities. Our experiments show that the proposed method achieves state-of-the-art sample efficiency and can solve previously unsolvable tasks in the Learning from Observation (LfO) setting. Project video and code can be found at this URL.

## 1 Introduction

Reinforcement learning relies on environmental reward feedback to learn meaningful behaviors. Reward specification is a hard problem (Krakovna, 2018), motivating imitation learning (IL) as a technique to bypass reward specification and learn from expert data, often via Inverse Reinforcement Learning (IRL) techniques. Imitation learning typically deals with the setting of Learning from Demonstrations (LfD), where expert states and actions are provided to the learning agent. A more practical problem in imitation learning is Learning from Observations (LfO), where the learning agent has access to only the expert observations. This setting is common when access to expert actions is unavailable, such as when learning from accessible observation sources like videos, or when imitating across different agent morphologies. We note that the LfD and LfO settings differ from the setting where the agent has access to the environment reward function along with expert transitions, referred to as Reinforcement Learning from Demonstrations (RLfD) (Jing et al., 2020; Zhang et al., 2020; Brys et al., 2015). Learning to imitate using expert observations alone can require efficient exploration when the expert actions are unavailable, as in LfO (Kidambi et al., 2021). Incorporating preferences over potentially suboptimal trajectories for reward learning can help reduce the exploration burden by regularizing the reward function and providing effective guidance for policy optimization.
Previous literature in learning from preferences either assumes no environment interaction (Brown et al., 2019; 2020a) or assumes an active query framework with a restricted reward class (Palan et al., 2019). The classical IRL formulation suffers from two issues: (1) Learning from expert demonstrations and learning from preferences/rankings provide complementary advantages for increasing learning efficiency (Ibarz et al., 2018; Palan et al., 2019); however, existing IRL methods that learn from expert demonstrations provide no mechanisms to incorporate offline preferences, and vice versa. (2) Optimization is difficult, making learning sample-inefficient (Arenz & Neumann, 2020; Ho & Ermon, 2016) due to the adversarial min-max game.

Our primary contribution is an algorithmic framework casting imitation learning as a ranking game, which addresses both of the above issues in IRL. This framework treats imitation as a ranking game between two agents: a reward agent and a policy agent—the reward agent learns to satisfy pairwise performance rankings between different *behaviors* represented as state-action or state visitations, while the policy agent maximizes its performance under the learned reward function. The ranking game is detailed in Figure 1 and is specified by three components: (1) the dataset of pairwise behavior rankings, (2) a ranking loss function, and (3) an optimization strategy. This game encompasses a large subset of both inverse reinforcement learning (IRL) methods and methods which learn from suboptimal offline preferences. Popular IRL methods such as GAIL, AIRL, and f-MAX (Ho & Ermon, 2016; Ghasemipour et al., 2020; Ke et al., 2021) are instantiations of this ranking game in which rankings are given only between the learning agent and the expert, and a gradient descent ascent (GDA) optimization strategy is used with a ranking loss that maximizes the performance gap between the behavior rankings. The ranking loss used by the prior IRL approaches is specific to the comparison of optimal (expert) vs. suboptimal (agent) data, and precludes incorporation of comparisons among suboptimal behaviors.

In this work, we instantiate the ranking game by proposing a new ranking loss (Lk) that facilitates incorporation of rankings over suboptimal trajectories for reward learning. Our theoretical analysis reveals that the proposed ranking loss results in a bounded performance gap with the expert that depends on a controllable hyperparameter. Our ranking loss can also ease policy optimization by supporting data augmentation to make the reward landscape smooth and by allowing control over the learned reward scale. Finally, viewing our ranking game in the Stackelberg game framework (see Section 3)—an efficient setup for solving general-sum games—we obtain two algorithms with complementary benefits in non-stationary environments, depending on which agent is set to be the leader.

In summary, this paper formulates a new framework, rank-game, for imitation learning that allows us to view learning from preferences and demonstrations under a unified perspective. We instantiate the framework with a principled ranking loss that can naturally incorporate rankings provided by diverse sources.
Finally, by incorporating additional rankings—auto-generated or offline—our method: (a) outperforms state-of-the-art methods for imitation learning in several MuJoCo simulated domains by a significant margin, and (b) solves complex tasks like learning to reorient a pen with dexterous manipulation using only a few observation trajectories, which none of the previous LfO baselines can solve.

Figure 1: rank-game: The Policy agent maximizes the reward function by interacting with the environment. The Reward agent satisfies a set of behavior rankings obtained from various sources: generated by the policy agent (vanilla), automatically generated (auto), or offline annotated rankings obtained from a human or offline dataset (pref). Treating this game in the Stackelberg framework leads to either Policy being a leader and Reward being a follower, or vice versa.

| IL Method | Offline Preferences | Expert Data | Ranking Loss | Reward Function | Active Human Query |
|---|---|---|---|---|---|
| MaxEntIRL, AdRIL, GAN-GCL, GAIL, f-MAX, AIRL | ✗ | LfD | supremum | non-linear | ✗ |
| BCO, GAIfO, DACfO, OPOLO, f-IRL | ✗ | LfO | supremum | non-linear | ✗ |
| TREX, DREX | ✓ | ✗ | Bradley-Terry | non-linear | ✗ |
| BREX | ✓ | ✗ | Bradley-Terry | linear | ✗ |
| DemPref | ✓ | LfO/LfD | Bradley-Terry | linear | ✓ |
| Ibarz et al. (2018) | ✓ | LfD | Bradley-Terry | non-linear | ✓ |
| rank-game | ✓ | LfO/LfD | Lk | non-linear | ✗ |

Table 1: A summary of IL methods demonstrating the data modalities they can handle (expert data and/or preferences), the ranking-loss functions they use, the assumptions they make on the reward function, and whether they require availability of an external agent to provide preferences during training. We highlight whether a method enables LfD, LfO, or both when it is able to incorporate expert data.

## 2 Related Work

Imitation learning methods are broadly divided into two categories: behavioral cloning (Pomerleau, 1991; Ross et al., 2011) and Inverse Reinforcement Learning (IRL) (Ng et al., 2000; Abbeel & Ng, 2004; Ziebart et al., 2008; Finn et al., 2016; Fu et al., 2017; Ho & Ermon, 2016; Ghasemipour et al., 2020). Our work focuses on developing a new framework in the setting of IRL through the lens of ranking. Table 1 shows a comparison of the proposed rank-game method to prior works.

Classical Imitation Game for IRL: The classical imitation game for IRL aims to solve the adversarial min-max problem of finding a policy that minimizes the worst-case performance gap between the agent and the expert. A number of previous works (Ghasemipour et al., 2020; Swamy et al., 2021; Ke et al., 2021) have focused on analyzing the properties of this *min-max* game and its relation to divergence minimization. Under some additional regularization, this *min-max* objective can be understood as minimizing a certain f-divergence (Ho & Ermon, 2016; Ghasemipour et al., 2020; Ke et al., 2021) between the agent and expert state-action visitations. More recently, Swamy et al. (2021) showed that all forms of imitation learning (BC and IRL) can be understood as performing moment matching under differing assumptions. In this work, we present a new perspective on imitation in which the reward function is learned using a dataset of behavior comparisons, generalizing previous IRL methods that learn from expert demonstrations and additionally giving the flexibility to incorporate rankings over suboptimal behaviors.
Learning from Preferences and Suboptimal Data: Learning from preferences and suboptimal data is important when expert data is limited or hard to obtain. Preferences (Akrour et al., 2011; Wilson et al., 2012; Sadigh et al., 2017; Christiano et al., 2017; Palan et al., 2019; Cui et al., 2021) have the advantage of providing guidance in situations the expert might not get into, and in the limit they provide a full ordering over trajectories, which expert data cannot. A previous line of work (Brown et al., 2019; 2020b;a; Chen et al., 2020) has studied this setting and demonstrated that offline rankings over suboptimal behaviors can be effectively leveraged to learn a reward function. Christiano et al. (2017); Palan et al. (2019); Ibarz et al. (2018) studied the question of learning from preferences in the setting where a human is available to provide online preferences¹ (active queries), while Palan et al. (2019) additionally assumed the reward to be linear in known features. Our work makes no such assumptions and allows for integrating offline preferences and expert demonstrations under a common framework.

Learning from Observation (LfO): LfO is the problem setting of learning from expert observations. This is typically more challenging than the traditional learning from demonstration setting (LfD), because actions taken by the expert are unavailable. LfO is broadly formulated using two objectives: state-next state marginal matching (Torabi et al., 2019; Zhu et al., 2020b; Sun et al., 2019) and direct state marginal matching (Ni et al., 2020; Liu et al., 2019). Some prior works (Torabi et al., 2018a; Yang et al., 2019; Edwards et al., 2019) approach LfO by inferring expert actions through a learned inverse dynamics model. These methods assume injective dynamics and suffer from compounding errors when the policy is deployed. A recently proposed method, OPOLO (Zhu et al., 2020b), derives an upper bound for the LfO objective which enables it to utilize off-policy data and increase sample efficiency. Our method outperforms baselines, including OPOLO, by a significant margin.

¹We will use preferences and rankings interchangeably.

## 3 Background

We consider a learning agent in a Markov Decision Process (MDP) (Puterman, 2014; Sutton & Barto, 2018), defined as a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma, \rho_0)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces; $P$ is the state transition probability function, with $P(s'|s,a)$ indicating the probability of transitioning from $s$ to $s'$ when taking action $a$; $R: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function, bounded in $[0, R_{max}]$; $\gamma \in [0,1]$ is the discount factor (we consider MDPs with infinite horizon, though our results extend to finite horizons as well); and $\rho_0$ is the initial state distribution. We use $\Pi$ and $\mathcal{R}$ to denote the spaces of policies and reward functions, respectively. A reinforcement learning agent aims to find a policy $\pi: \mathcal{S} \to \mathcal{A}$ that maximizes its expected return, $J(R;\pi) = \frac{1}{1-\gamma}\mathbb{E}_{(s,a)\sim\rho^{\pi}(s,a)}[R(s,a)]$, where $\rho^{\pi}(s,a)$ is the stationary state-action distribution induced by $\pi$. In imitation learning, we are provided with samples from the state-action visitation of the expert, $\rho^{\pi_E}(s,a)$, but the reward function of the expert, denoted by $R_{gt}$, is unknown. We will use $\rho^{E}(s,a)$ as a shorthand for $\rho^{\pi_E}(s,a)$.
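As a concrete reading of the expected-return definition above, the following minimal Python sketch (ours, not from the paper) estimates $J(R;\pi)$ by Monte Carlo; `env`, `policy`, and `reward_fn` are assumed interfaces, with `env` following the gym-style `reset()`/`step()` protocol.

```python
import numpy as np

def estimate_return(env, policy, reward_fn, gamma=0.99, n_episodes=10, horizon=1000):
    """Monte Carlo estimate of J(R; pi) = E[sum_t gamma^t R(s_t, a_t)]."""
    returns = []
    for _ in range(n_episodes):
        s, ret, discount = env.reset(), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                  # policy maps a state to an action
            ret += discount * reward_fn(s, a)  # score (s, a) under R
            discount *= gamma
            s, _, done, _ = env.step(a)    # ignore the environment reward
            if done:
                break
        returns.append(ret)
    return float(np.mean(returns))
```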
Classical Imitation Learning: The goal of imitation learning is to close the imitation gap $J(R_{gt};\pi^{E}) - J(R_{gt};\pi)$, defined with respect to the unknown expert reward function $R_{gt}$. Several prior works (Ho & Ermon, 2016; Swamy et al., 2021; Kostrikov et al., 2019; Ni et al., 2020) tackle this problem by minimizing the imitation gap over all possible reward hypotheses. This leads to a zero-sum (min-max) game formulation of imitation learning in which a policy is optimized with respect to the reward function that induces the largest imitation gap:

$$\mathrm{imit\text{-}game}(\pi)=\arg\min_{\pi\in\Pi}\sup_{f\in\mathcal{R}}\ \mathbb{E}_{\rho^{E}(s,a)}[f(s,a)]-\mathbb{E}_{\rho^{\pi}(s,a)}[f(s,a)].\tag{1}$$

Here, assuming realizability ($R_{gt}\in\mathcal{R}$), the imitation gap is upper bounded as follows ($\forall\pi$):

$$J(R_{gt};\pi^{E})-J(R_{gt};\pi)\leq\sup_{f\in\mathcal{R}}\frac{1}{1-\gamma}\left[\mathbb{E}_{\rho^{E}(s,a)}[f(s,a)]-\mathbb{E}_{\rho^{\pi}(s,a)}[f(s,a)]\right].\tag{2}$$

Note that, when the performance gap is maximized between the expert $\pi^{E}$ and the agent $\pi$, the worst-case reward function $f_{\pi}$ induces a ranking between policy behaviors based on their performance: $\rho^{E} \succeq \rho^{\pi} := \mathbb{E}_{\rho^{E}(s,a)}[f_{\pi}(s,a)] \ge \mathbb{E}_{\rho^{\pi}(s,a)}[f_{\pi}(s,a)],\ \forall\pi$. Therefore, we can regard the above loss function that maximizes the performance gap (Eq. 2) as an instantiation of the ranking loss. We will refer to the implicit ranking between the agent and the expert, $\rho^{E} \succeq \rho^{\pi}$, as vanilla rankings and to this variant of the ranking-loss function as the *supremum-loss*.

Stackelberg Games: A Stackelberg game (Başar & Olsder, 1998) is a general-sum game between two agents where one agent is set to be the leader and the other a follower. The leader in this game optimizes its objective under the assumption that the follower will choose the best response for its own optimization objective. More concretely, assume there are two players $A$ and $B$ with parameters $\theta_A$, $\theta_B$ and corresponding losses $L_A(\theta_A, \theta_B)$ and $L_B(\theta_A, \theta_B)$. A Stackelberg game solves the following bi-level optimization when $A$ is the leader and $B$ is the follower: $\min_{\theta_A} L_A(\theta_A, \theta_B^{*}(\theta_A))\ \ s.t.\ \ \theta_B^{*}(\theta_A) = \arg\min_{\theta} L_B(\theta_A, \theta)$. Rajeswaran et al. (2020) showed that casting model-based RL as an approximate Stackelberg game leads to performance benefits and reduces training instability in comparison to the commonly used GDA (Schäfer & Anandkumar, 2019) and Best Response (BR) (Cesa-Bianchi & Lugosi, 2006) methods. Fiez et al. (2019); Zheng et al. (2021) prove convergence of Stackelberg games under smooth player cost functions and show that they reduce the cycling behavior encountered when finding an equilibrium, allowing for better convergence.

## 4 A Ranking Game For Imitation Learning

In this section, we first formalize the notion of the proposed two-player general-sum ranking game for imitation learning. We then propose a practical instantiation of the ranking game through a novel ranking loss (Lk).

Algorithm 1 Meta algorithm: rank-game (vanilla) for imitation
1: Initialize policy $\pi_{\theta}^{0}$, reward function $R_{\phi}$, an empty dataset $\mathcal{D}^{\pi}$, and empirical expert data $\hat{\rho}^{E}$
2: for t = 1..T iterations do
3:   Collect empirical visitation data $\hat{\rho}^{\pi_{\theta}^{t}}$ with $\pi_{\theta}^{t}$ in the environment. Set $\mathcal{D}^{\pi} = \{(\hat{\rho}^{\pi} \preceq \hat{\rho}^{E})\}$
4:   Train the reward $R_{\phi}$ to satisfy the rankings in $\mathcal{D}^{\pi}$ using the ranking loss $L_k$ in Equation 3.
5:   Optimize the policy under the reward function: $\pi_{\theta}^{t+1} \leftarrow \arg\max_{\pi'} J(R_{\phi}; \pi')$
6: end for
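To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch (ours; `rollout`, `train_reward`, and `policy_optimization_step` are assumed helper functions, not part of the paper's released code) that mirrors the alternating structure of the vanilla rank-game:

```python
def rank_game_vanilla(env, policy, reward_net, expert_data, n_iters, k):
    """Sketch of Algorithm 1 (vanilla rank-game).

    Assumed helpers: `rollout` collects visitation samples with the current
    policy, `train_reward` minimizes the ranking loss L_k (Eq. 3), and
    `policy_optimization_step` is a policy-improvement step (e.g., SAC).
    """
    for _ in range(n_iters):
        # Step 3: collect empirical visitation data with the current policy,
        # yielding the implicit vanilla ranking rho^pi <= rho^E.
        agent_data = rollout(env, policy)
        ranking_dataset = [(agent_data, expert_data)]
        # Step 4: train the reward to satisfy the rankings.
        train_reward(reward_net, ranking_dataset, gap=k)
        # Step 5: improve the policy under the learned reward.
        policy = policy_optimization_step(env, policy, reward_net)
    return policy, reward_net
```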
The proposed ranking game gives us the flexibility to incorporate additional rankings—both auto-generated (a form of data augmentation, labeled 'auto' in Fig. 1) and offline ('pref' in Fig. 1)—which improves learning efficiency. Finally, we discuss the Stackelberg formulation of the two-player ranking game and present two algorithms that naturally arise depending on which player is designated as the leader.

## 4.1 The Two-Player Ranking Game Formulation

We present a new framework, rank-game, for imitation learning which casts it as a general-sum *ranking game* between two players: a reward and a policy.

$$\underbrace{\operatorname{argmax}_{\pi\in\Pi}J(R;\pi)}_{\mathrm{Policy\ Agent}}\qquad\underbrace{\operatorname{argmin}_{R\in\mathcal{R}}L(\mathcal{D}^{p};R)}_{\mathrm{Reward\ Agent}}$$

In this formulation, the policy agent maximizes the reward by interacting with the environment, and the reward agent attempts to find a reward function that satisfies a set of pairwise behavior rankings in the given dataset $\mathcal{D}^{p}$; a reward function satisfies these rankings if $\mathbb{E}_{\rho^{\pi^{i}}}[R(s,a)] \le \mathbb{E}_{\rho^{\pi^{j}}}[R(s,a)],\ \forall \rho^{\pi^{i}} \preceq \rho^{\pi^{j}} \in \mathcal{D}^{p}$, where $\rho^{\pi^{i}}, \rho^{\pi^{j}}$ can be state-action or state visitations. The dataset of pairwise behavior rankings $\mathcal{D}^{p}$ can be comprised of the implicit 'vanilla' rankings between the learning agent's and the expert's policy behaviors ($\rho^{\pi} \preceq \rho^{E}$), giving us the classical IRL methods when a specific ranking loss function, the *supremum-loss*, is used (Ho & Ermon, 2016; Ghasemipour et al., 2020; Ke et al., 2021). If rankings are provided between trajectories, they can be reduced to the equivalent ranking between the corresponding state-action/state visitations. In the case when $\mathcal{D}^{p}$ comprises purely offline trajectory performance rankings, then, under a specific ranking loss function (*Luce-Shepard*), the ranking game reduces to prior reward inference methods like T-REX (Brown et al., 2019; 2020b;a; Chen et al., 2020). Thus, the ranking game affords us a broader perspective on imitation learning, going beyond only using expert demonstrations.

## 4.2 Ranking Loss Lk For The Reward Agent

We use a *ranking loss* to train the reward function—an objective that minimizes the distortion (Iyer & Bilmes, 2012) between the ground-truth ranking for a pair of entities $\{x, y\}$ and the ranking induced by a parameterized function $R: \mathcal{X} \to \mathbb{R}$ on the pair of scalars $\{R(x), R(y)\}$. One such ranking loss is the *supremum-loss* in the classical imitation learning setup. We propose a class of ranking-loss functions $L_k$ that *attempts* to induce a performance gap of $k \in [0, R_{max}]$ for all behavior preferences in the dataset. Formally, this can be implemented with the regression loss:

$$L_{k}(\mathcal{D}^{p};R)=\mathbb{E}_{(\rho^{\pi^{i}},\rho^{\pi^{j}})\sim\mathcal{D}^{p}}\left[\mathbb{E}_{s,a\sim\rho^{\pi^{i}}}\left[(R(s,a)-0)^{2}\right]+\mathbb{E}_{s,a\sim\rho^{\pi^{j}}}\left[(R(s,a)-k)^{2}\right]\right]\tag{3}$$

where $\mathcal{D}^{p}$ contains behavior pairs $(\rho^{\pi^{i}}, \rho^{\pi^{j}})$ with the prespecified ranking $\rho^{\pi^{i}} \preceq \rho^{\pi^{j}}$. The proposed ranking loss allows for learning *bounded rewards with a user-defined scale* $k$ on the agent and the expert visitations, as opposed to prior works in Adversarial Imitation Learning (Ho & Ermon, 2016; Fu et al., 2017; Ghasemipour et al., 2020).
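As an illustration of Eq. 3, a minimal PyTorch implementation of the loss for one ranked pair might look as follows (a sketch, assuming `reward_net` maps batched (s, a) features to scalar rewards; the batching convention is ours):

```python
import torch

def ranking_loss_k(reward_net, lower_batch, higher_batch, k):
    # lower_batch: (s, a) samples from the lower-ranked visitation rho^{pi_i};
    # higher_batch: (s, a) samples from the higher-ranked visitation rho^{pi_j}.
    r_lower = reward_net(lower_batch)    # regressed toward 0
    r_higher = reward_net(higher_batch)  # regressed toward k
    return torch.mean(r_lower ** 2) + torch.mean((r_higher - k) ** 2)
```

Averaging this quantity over ranked pairs sampled from $\mathcal{D}^{p}$ recovers the outer expectation in Eq. 3.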
Reward scaling has been known to improve learning efficiency in deep RL: a large reward scale can make the optimization landscape less smooth (Henderson et al., 2018; Glorot & Bengio, 2010), and a small scale might make the action-gap small and increase susceptibility to extrapolation errors (Bellemare et al., 2016). In contrast to the *supremum* loss, $L_k$ can also naturally incorporate rankings provided by additional sources by learning a reward function satisfying all specified pairwise preferences. The following theorem characterizes the equilibrium of the rank-game for imitation learning when $L_k$ is used as the ranking loss.

Theorem 4.1. *(Performance of the* rank-game *equilibrium pair) Consider an equilibrium of the imitation* rank-game $(\hat{\pi}, \hat{R})$*, such that the ranking loss* $L_k$ *generalization error is bounded by* $2R_{max}^{2}\epsilon_r$ *and the policy is near-optimal with* $J(\hat{R};\hat{\pi}) \ge J(\hat{R};\pi) - \epsilon_{\pi}\ \forall\pi$*. Then at this equilibrium pair, under the expert's unknown reward function* $R_{gt}$ *bounded in* $[0, R_{max}^{E}]$*:*

$$|J(R_{gt},\pi^{E})-J(R_{gt},\hat{\pi})|\leq\frac{4R_{max}^{E}\sqrt{\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{\epsilon_{r}}}{k}}}{1-\gamma}\tag{4}$$

If the reward is a state-only function and only expert observations are available, the same bound applies to the LfO setting.

Proof. We defer the proof to Appendix A.

Comments on Theorem 4.1: The ranking loss trains the reward function with finite samples using supervised learning. We can quantify $\epsilon_r$, the finite-sample generalization error for the reward function, using standard concentration bounds (Shalev-Shwartz & Ben-David, 2014; Hoeffding, 1994) with high probability. We use $\epsilon_{\pi}$ to denote the policy optimization error from solving the reinforcement learning problem. In deep reinforcement learning, this error can stem from function approximation, biases in the value function update, and finite samples. Accounting for this error brings our analysis closer to the real setting. Note that the performance gap between the agent policy and the expert policy depends on the scale of the expert reward function $R_{max}^{E}$. This behavior is expected, as the performance gap arising from differences in the behaviors/visitations of the agent policy and expert policy can be amplified by the expert's unknown reward scale. We assume realizability, i.e., the expert reward function lies in the agent's reward function class, which ensures that $R_{max}^{E} \le R_{max}$. The performance bound in Theorem 4.1 is derived in Appendix A by first proving an intermediate result demonstrating that $\rho^{\hat{\pi}}$ and $\rho^{\pi^{E}}$ are close in a specific f-divergence at the equilibrium, a bound that does not depend on the unknown expert reward scale $R_{max}^{E}$.

Theoretical properties: We now discuss some theoretical properties of $L_k$. First, Theorem 4.1 shows that rank-game has an equilibrium with a bounded performance gap with the expert. Second, our derivation for Theorem 4.1 also shows that an optimization step by the policy player, under a reward function optimized by the reward player, is equivalent to minimizing an f-divergence with the expert. Equivalently, at iteration $t$ in Algorithm 1: $\max_{\pi^{t}} \mathbb{E}_{\rho^{\pi^{t}}}[R_{t}^{*}] - \mathbb{E}_{\rho^{\pi^{E}}}[R_{t}^{*}] = \min_{\pi^{t}} D_f(\rho^{\pi^{t}}\|\rho^{\pi^{E}})$. We characterize and elaborate on the regret of this idealized algorithm in Appendix A. Theorem 4.1 suggests that large values of $k$, up to $R_{max}$, can guarantee that the agent's performance is close to the expert's. In practice, we observe that intermediate values of $k$ also preserve imitation equilibrium optimality with the benefit of promoting sample-efficient learning. We attribute this observation to the effect of reward scaling described earlier and validate it further in Appendix D.9.
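To get a feel for the bound in Eq. 4, consider illustrative (assumed) values $\gamma = 0.99$, $R_{max} = R_{max}^{E} = 1$, $k = 1$, $\epsilon_{\pi} = 10^{-2}$, and $\epsilon_{r} = 0$:

$$|J(R_{gt},\pi^{E})-J(R_{gt},\hat{\pi})|\leq\frac{4\sqrt{(1-\gamma)\epsilon_{\pi}/k}}{1-\gamma}=\frac{4\sqrt{10^{-2}\cdot 10^{-2}}}{10^{-2}}=4,$$

i.e., a gap of at most 4 against a maximum attainable return of $R_{max}/(1-\gamma)=100$; when $\epsilon_{\pi} = \epsilon_{r} = 0$, the bound vanishes and imitation is exact at equilibrium.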
rank-game naturally extends to the LfO regime under a state-only reward function, where Theorem 4.1 results in a divergence bound between the state visitations of the expert and the agent. A state-only reward function is also a sufficient and necessary condition to ensure that we learn a dynamics-disentangled reward function (Fu et al., 2017).

Figure 2: Learned reward functions when the agent and expert visitations are shown by pink and black markers, respectively. rank-game (auto) results in smooth reward functions more amenable to gradient-based policy optimization compared to GAIL.

$L_k$ can incorporate additional preferences that help learn a regularized/shaped reward function providing better guidance for policy optimization, reducing the exploration burden and increasing sample efficiency for IRL. Better-guided policy optimization is also expected to incur a lower $\epsilon_{\pi}$. However, augmenting the ranking dataset can lead to a decrease in the intended performance gap ($k_{eff} < k$) between the agent and the expert (Appendix A). This can loosen the bound in Eq. 4 and lead to suboptimal imitation learning. We hypothesize that, given informative preferences, the decreased $\epsilon_{\pi}$ can compensate for the potentially decreased intended performance gap $k_{eff}$ to ensure near-optimal imitation. In our experiments, we observe this hypothesis holds true; we enjoy sample-efficiency benefits without losing any asymptotic performance. To leverage these benefits, we present two methods for augmenting the ranking dataset below and defer the implementation details to Appendix B.

## 4.2.1 Generating The Ranking Dataset

Reward loss w/ automatically generated rankings (auto): In this method, we assume access to the trajectories generating the behaviors in the ranking dataset. A trajectory $\tau$ is a sequence of states (LfO) given by $[s_0, s_1, \ldots, s_H]$ or state-action pairs (LfD) given by $[s_0, a_0, s_1, a_1, \ldots, s_H, a_H]$. For each pairwise comparison $\rho_i \preceq \rho_j$ present in the dataset, $L_k$ sets the regression targets for states in $\rho_i$ to $0$ and for states visited by $\rho_j$ to $k$. Equivalently, we can rewrite minimizing $L_k$ as regressing an input trajectory $\tau_i$ to the vector $\mathbf{0}$ and $\tau_j$ to the vector $k\mathbf{1}$, where $\tau_i, \tau_j$ are the trajectories that generate the behaviors $\rho_i, \rho_j$ respectively. We use the comparison $\rho_i \preceq \rho_j$ to generate additional behavior rankings $\rho_i = \rho_{\lambda_0,ij} \preceq \rho_{\lambda_1,ij} \preceq \rho_{\lambda_2,ij} \preceq \cdots \preceq \rho_{\lambda_P,ij} \preceq \rho_j = \rho_{\lambda_{P+1},ij}$, where $0 = \lambda_0 < \lambda_1 < \lambda_2 < \cdots < \lambda_P < 1 = \lambda_{P+1}$. The behavior $\rho_{\lambda_p,ij}$ is obtained by independently sampling the trajectories that generate the behaviors $\rho_i, \rho_j$ and taking convex combinations, i.e., $\tau_{\lambda_p,ij} = \lambda_p\tau_i + (1-\lambda_p)\tau_j$, and the corresponding reward regression targets are given by $k_p = \lambda_p\mathbf{0} + (1-\lambda_p)k\mathbf{1}$. The loss function takes the following form:

$$L_{k}(\mathcal{D};R)=\mathbb{E}_{\rho_{i},\rho_{j}\sim\mathcal{D}}\Bigg[\frac{1}{P+2}\sum_{p=0}^{P+1}\mathbb{E}_{s,a\sim\rho_{\lambda_{p},ij}(s,a)}\big[(R(s,a)-k_{p})^{2}\big]\Bigg]\tag{5}$$

This form of data augmentation can be interpreted as mixup (Zhang et al., 2017) regularization in the trajectory space. Mixup has been shown to improve generalization and adversarial robustness (Guo et al., 2019; Zhang et al., 2017) by regularizing the first- and second-order gradients of the parameterized function. Following the general principle of using an objective smoothed with respect to its inputs to obtain effective gradient signals, explicit smoothing in trajectory space can also help reduce the policy optimization error $\epsilon_{\pi}$. A didactic example showing rewards learned using this method is given in Figure 2. In the special case when the expert's unknown reward function is linear in observations, these rankings reflect the true underlying rankings of behaviors.
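A minimal sketch of this trajectory-space mixup, following the convex-combination and target formulas above (the array shapes and helper conventions are our assumptions, not the paper's released code):

```python
import numpy as np

def auto_rankings(tau_i, tau_j, k, P=5):
    """Generate the (P + 2) interpolated behaviors and regression targets
    of Eq. 5 from a ranked trajectory pair.

    tau_i, tau_j: arrays of shape (horizon, feat_dim) with aligned steps;
    tau_i has regression target 0 and tau_j has regression target k.
    """
    batches, targets = [], []
    for lam in np.linspace(0.0, 1.0, P + 2):
        tau_mix = lam * tau_i + (1.0 - lam) * tau_j  # mixup in trajectory space
        k_p = lam * 0.0 + (1.0 - lam) * k            # interpolated target
        batches.append(tau_mix)
        targets.append(np.full(len(tau_mix), k_p))
    return batches, targets
```

Regressing `reward_net` outputs on each `(tau_mix, k_p)` pair and averaging over ranked pairs recovers Eq. 5.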
Reward loss w/ offline annotated rankings (pref): Another way of increasing learning efficiency is to augment the ranking dataset containing the vanilla ranking ($\{\rho^{\pi} \preceq \rho^{E}\} := \mathcal{D}^{\pi}$) with offline annotated rankings ($\mathcal{D}^{offline}$). These rankings may be provided by a human observer or obtained from an offline dataset of behaviors with annotated reward information, similar to the datasets used in offline RL (Fu et al., 2020; Levine et al., 2020). We combine offline rankings by using a weighted loss between $L_k$ for satisfying the vanilla rankings ($\rho^{\pi} \preceq \rho^{E}$) and the offline rankings, grounded by an expert. Providing offline rankings alone that are sufficient to explain the reward function of the expert (Brown et al., 2019) is often a difficult task, and the number of offline preferences required depends on the complexity of the environment. In the LfO setting, learning from an expert's state visitation alone can be a hard problem due to exploration requirements (Kidambi et al., 2021). This ranking loss combines the benefits of using preferences to shape the reward function and guide policy improvement while using the expert to guarantee near-optimal performance. The weighted loss function for this setting takes the following form:

$$L_{k}(\mathcal{D}^{\pi},\mathcal{D}^{offline};R)=\alpha L_{k}(\mathcal{D}^{\pi};R)+(1-\alpha)L_{k}(\mathcal{D}^{offline};R)\tag{6}$$

## 4.3 Optimizing The Two-Player General-Sum Ranking Game As A Stackelberg Game

Solving the ranking game in the Stackelberg setup allows us to propose two different algorithms depending on which agent is set to be the leader, and to utilize the learning stability and efficiency afforded by the formulation, as studied in Rajeswaran et al. (2020); Zheng et al. (2021); Fiez et al. (2019).

Policy as leader (PAL): Choosing the policy as the leader implies the following optimization:

$$\max_{\pi}\left\{J(\hat{R};\pi)\;\;s.t.\;\;\hat{R}=\arg\min_{R}L(\mathcal{D}^{\pi};R)\right\}\tag{7}$$

Reward as leader (RAL): Choosing the reward as the leader implies the following optimization:

$$\min_{\hat{R}}\left\{L(\mathcal{D}^{\pi};\hat{R})\;\;s.t.\;\;\pi=\arg\max_{\pi}J(\hat{R};\pi)\right\}\tag{8}$$

We follow the first-order gradient approximation for the leader's update from previous work (Rajeswaran et al., 2020) to develop practical algorithms. This strategy has been proven effective and avoids the computational complexity of calculating the implicit Jacobian term $(d\theta_B^{*}/d\theta_A)$. PAL updates the reward to near convergence on the dataset $\mathcal{D}^{\pi}$ ($\mathcal{D}^{\pi}$ contains rankings generated using the current policy agent only, $\pi \preceq \pi^{E}$) and takes a few policy steps. Note that even after the first-order approximation, this optimization strategy differs from GDA, where often only a few iterations are used for training the reward, even in hyperparameter studies like Orsini et al. (2021). RAL updates the reward conservatively; this is achieved by aggregating the dataset of implicit rankings from all previous policies obtained during training. PAL's strategy of using on-policy data $\mathcal{D}^{\pi}$ for reward training resembles that of methods including GAIL (Ho & Ermon, 2016; Torabi et al., 2018b), f-MAX (Ghasemipour et al., 2020), and f-IRL (Ni et al., 2020). RAL uses the entire history of agent visitations to update the reward function and resembles methods such as apprenticeship learning and DAC (Abbeel & Ng, 2004; Kostrikov et al., 2018). PAL and RAL bring together two seemingly different algorithm classes under a unified Stackelberg game viewpoint.
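The practical difference between the two instantiations amounts to which ranking dataset the reward player sees and how aggressively each player updates; a minimal sketch (ours, reusing the assumed helpers from earlier; the step counts are illustrative, not the paper's hyperparameters):

```python
def stackelberg_rank_game(env, policy, reward_net, expert_data,
                          n_iters, k, leader="policy"):
    """Sketch of PAL (policy as leader, Eq. 7) vs. RAL (reward as leader,
    Eq. 8) with the first-order leader update.

    PAL trains the reward to near-convergence on rankings from the current
    policy only; RAL updates the reward conservatively on rankings
    aggregated over the entire history of agent visitations.
    """
    history = []
    for _ in range(n_iters):
        agent_data = rollout(env, policy)
        history.append(agent_data)
        if leader == "policy":  # PAL
            dataset = [(agent_data, expert_data)]
            train_reward(reward_net, dataset, gap=k, n_steps=100)
            policy = policy_optimization_step(env, policy, reward_net, n_steps=5)
        else:                   # RAL
            dataset = [(d, expert_data) for d in history]
            train_reward(reward_net, dataset, gap=k, n_steps=5)
            policy = policy_optimization_step(env, policy, reward_net, n_steps=100)
    return policy, reward_net
```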
## 5 Experimental Results

We compare rank-game against state-of-the-art LfO and LfD approaches on MuJoCo benchmarks with continuous state and action spaces. The LfO setting is more challenging, since no actions are available, and is a crucial imitation learning problem for cases where action modalities differ between the expert and the agent, such as in robot learning. We focus on the LfO setting in this section and defer the LfD experiments to Appendix D.2. We denote the imitation learning algorithms that use the proposed ranking loss $L_k$ from Section 4.2 as RANK-{PAL, RAL}. We refer to the rank-game variants which use automatically generated rankings and offline preferences as (auto) and (pref), respectively, following Section 4.2. In all our methods, we rely on an off-policy model-free algorithm, Soft Actor-Critic (SAC) (Haarnoja et al., 2018), for updating the policy agent (in step 5 of Algorithm 1). We design experiments to answer the following questions:

1. *Asymptotic performance and sample efficiency*: Is our method able to achieve near-expert performance given a limited number (one) of expert observations? Can our method learn using fewer environment interactions than prior state-of-the-art imitation learning (LfO) methods?
2. *Utility of preferences for imitation learning*: Current LfO methods struggle to solve a number of complex manipulation tasks with sparse success signals. Can we leverage offline annotated preferences through rank-game in such environments to achieve near-expert performance?
3. *Choosing between PAL and RAL*: Can we characterize the benefits and pitfalls of each method, and determine when one is preferable over the other?
4. *Ablations for the method components*: Can we establish the importance of hyperparameters and design decisions in our experiments?

Baselines: We compare RANK-PAL and RANK-RAL against 6 representative LfO approaches that cover a spectrum of on-policy and off-policy model-free methods from prior work: GAIfO (Torabi et al., 2018b; Ho & Ermon, 2016), DACfO (Kostrikov et al., 2018), BCO (Torabi et al., 2018a), f-IRL (Ni et al., 2020), and the recently proposed OPOLO (Zhu et al., 2020b) and IQLearn (Garg et al., 2021). We do not assume access to expert actions in this setting. Our LfD experiments compare to the IQLearn (Garg et al., 2021), DAC (Kostrikov et al., 2018), and BC baselines. Detailed descriptions of the environments and baselines can be found in Appendix C.

## 5.1 Asymptotic Performance And Sample Efficiency

In this section, we compare RANK-PAL(auto) and RANK-RAL(auto) to baselines on a set of MuJoCo locomotion tasks of varying complexity: Swimmer-v2, Hopper-v2, HalfCheetah-v2, Walker2d-v2, Ant-v2, and Humanoid-v2. In this experiment, we provide one expert trajectory for all methods and do not assume access to any offline annotated rankings.
| Env | Hopper | HalfCheetah | Walker | Ant | Humanoid |
|---|---|---|---|---|---|
| BCO | 20.10±2.15 | 5.12±3.82 | 4.00±1.25 | 12.80±1.26 | 3.90±1.24 |
| GAIfO | 81.13±9.99 | 13.54±7.24 | 83.83±2.55 | 20.10±24.41 | 3.93±1.81 |
| DACfO | 94.73±3.63 | 85.03±5.09 | 54.70±44.64 | 86.45±1.67 | 19.31±32.19 |
| f-IRL | 97.45±0.61 | 96.06±4.63 | 101.16±1.25 | 71.18±19.80 | 77.93±6.372 |
| OPOLO | 89.56±5.46 | 88.92±3.20 | 79.19±24.35 | 93.37±3.78 | 24.87±17.04 |
| RANK-PAL (ours) | 87.14±16.14 | 94.05±3.59 | 93.88±0.72 | 98.93±1.83 | 96.84±3.28 |
| RANK-RAL (ours) | 99.34±0.20 | 101.14±7.45 | 93.24±1.25 | 93.21±2.98 | 94.45±4.13 |
| Expert | 100.00±0 | 100.00±0 | 100.00±0 | 100.00±0 | 100.00±0 |
| (\|S\|, \|A\|) | (11, 3) | (17, 6) | (17, 6) | (111, 8) | (376, 17) |

Table 2: Asymptotic normalized performance of LfO methods at 2 million timesteps on MuJoCo locomotion tasks. The standard deviation is calculated over 5 different runs, each averaging over 10 trajectory returns. For unnormalized scores and more details, see Appendix D. We omit IQLearn due to poor performance.

Figure 3: Comparison of performance on OpenAI Gym benchmark tasks. The shaded region represents the standard deviation across 5 random runs. RANK-PAL and RANK-RAL substantially outperform the baselines in sample efficiency. The complete set of results can be found in Appendix D.1.

Asymptotic Performance: Table 2 shows that both rank-game methods are able to reach near-expert asymptotic performance with a single expert trajectory. BCO shows poor performance, which can be attributed to the compounding-error problem arising from its behavior cloning strategy. GAIfO and DACfO use GDA for optimization with a supremum loss and show high variance in their asymptotic performance, whereas the rank-game methods are more stable and low-variance.

Sample Efficiency: Figure 3 shows that RANK-RAL and RANK-PAL are among the most sample-efficient methods for the LfO setting, outperforming the recent state-of-the-art method OPOLO (Zhu et al., 2020b) by a significant margin. We notice that IQLearn fails to learn in the LfO setting. This experiment demonstrates the benefit of the combined improvements of the proposed ranking loss with automatically generated rankings. Our method is also simpler to implement than OPOLO, as we require fewer lines of code changes on top of SAC and need to maintain fewer parameterized networks; OPOLO requires an additional inverse action model to regularize learning.

## 5.2 Utility Of Preferences In Imitation

Our experiments on complex manipulation environments—door opening with a parallel-jaw gripper (Zhu et al., 2020a) and pen manipulation with a dexterous Adroit hand (Rajeswaran et al., 2017)—reveal that none of the prior LfO methods are able to imitate the expert, even with increasing amounts of expert data. This failure of LfO methods can potentially be attributed to the exploration requirements of LfO compared to LfD (Kidambi et al., 2021), coupled with the sparse successes encountered in these tasks, leading to poorly guided policy gradients.

Figure 4: Offline annotated preferences can help solve LfO tasks in the complex manipulation environments Pen-v0 and Door, whereas prior LfO methods fail. The black dotted line shows the asymptotic performance of the RANK-PAL (auto) method.
In these experiments, we show that rank-game can incorporate additional information in the form of offline annotated rankings to guide the agent in solving such tasks. These offline rankings are obtained by uniformly sampling a small set of trajectories (10) from the replay buffer of SAC (Haarnoja et al., 2018), labeled with a ground-truth reward function. We use the weighted ranking loss (pref) from Section 4.2. Figure 4 shows that the RANK-PAL/RAL(pref) methods leveraging offline rankings are the only methods that can solve these tasks, whereas prior LfO methods and RANK-PAL/RAL(auto) with automatically generated rankings struggle even after a large amount of training. We also point out that T-REX, an offline method that learns using preferences grounded by the expert, is unable to achieve near-expert performance, highlighting the benefits of learning online with expert demonstrations alongside a set of offline preferences.

## 5.3 Comparing Pal And Ral

PAL uses the agent's current visitation for reward learning, whereas RAL learns a reward consistent with all rankings arising from the history of the agent's visitations. These properties can present certain benefits depending on the task setting. To test the potential benefits of PAL and RAL, we consider two non-stationary imitation learning problems, similar to Rajeswaran et al. (2017): one in which the expert changes its intent, and one in which the dynamics of the environment change during training, both in the Hopper-v2 locomotion task. For changing intent, we present a new set of demonstrations in which the hopper agent hops backwards rather than forward. For changing environment dynamics, we increase the mass of the hopper agent by a factor of 1.2. The changes are introduced at 1e5 timesteps into training, at which point we notice a sudden performance drop. In Figure 5 (left), we notice that PAL adapts faster to intent changes, whereas RAL needs to unlearn the rankings obtained from the agent's history and takes longer to adapt. Figure 5 (right) shows that RAL adapts faster to the changing dynamics of the system, as it has already learned a good global notion of the dynamics-disentangled reward function in the LfO setting, whereas PAL has only a local understanding of the reward as a result of using rankings obtained only from the agent's current visitation.

Ablation of Method Components: Appendix D contains eight additional experiments studying the importance of hyperparameters and design decisions. Our ablations validate the importance of using automatically generated rankings, the benefit of the ranking loss over the *supremum* loss, and the sensitivity to hyperparameters like the intended performance gap $k$, the number of policy iterations, and the reward regularizer. We find that the key improvements in learning efficiency are driven by using the proposed ranking loss, controlling the reward range, and the reward/policy update frequency in the Stackelberg framework. In Figure 6, we also analyze the performance of rank-game with a varying number of expert trajectories and its robustness to noisy offline annotated preferences. We find rank-game to consistently outperform baselines with a varying number of expert trajectories. On the Door manipulation task, rank-game is robust to 60 percent noise in the offline annotated preferences. Experiments on more environments and additional details can be found in Appendix D.

Figure 5: We compare the relative strengths of PAL and RAL.
The left plot shows a comparison when the goal is changed, and the right plot shows a comparison when the dynamics of the environment are changed. These changes occur at 1e5 timesteps into training. PAL adapts faster to changing intent, and RAL adapts faster to changing dynamics.

Figure 6: (a) RANK-PAL outperforms other methods with a varying number of expert trajectories. Error bars denote standard deviation. (b) On the Door-v0 environment, RANK-PAL(pref) is robust to at least 60 percent noisy preferences.

## 6 Conclusion

In this work, we present a new framework for imitation learning that treats imitation as a two-player ranking game between a policy and a reward function. Unlike prior works in imitation learning, the ranking game allows the incorporation of rankings over suboptimal behaviors to aid policy learning. We instantiate the ranking game by proposing a novel ranking loss which guarantees that the agent's performance is close to the expert's. Our experiments on simulated MuJoCo tasks reveal that utilizing additional rankings through our proposed ranking loss leads to improved sample efficiency for imitation learning, outperforming prior methods by a significant margin and solving some tasks that were unsolvable by previous LfO methods.

Limitations and Negative Societal Impacts: Preferences obtained in the real world are usually noisy (Kwon et al., 2020; Jeon et al., 2020; Bıyık et al., 2021), and one limitation of rank-game is that it does not suggest a way to handle noisy preferences. Second, rank-game proposes modifications to learn a reward function amenable to policy optimization, but the associated hyperparameters are set manually. Future work can explore methods to automate learning such reward functions. Third, despite learning effective policies, we observed that we do not learn reusable, robust reward functions (Ni et al., 2020).

Negative Societal Impact: Imitation learning can cause harm if given demonstrations of harmful behaviors, either accidentally or purposefully. Furthermore, even when given high-quality demonstrations of desirable behaviors, our algorithm does not provide guarantees of performance, and thus could cause harm if used in high-stakes domains without sufficient safety checks on learned behaviors.

## Acknowledgements

We thank Jordan Schneider, Wenxuan Zhou, Caleb Chuck, and Amy Zhang for valuable feedback on the paper. This work has taken place in the Personal Autonomous Robotics Lab (PeARL) at The University of Texas at Austin. PeARL research is supported in part by the NSF (IIS-1724157, IIS-1749204, IIS-1925082), AFOSR (FA9550-20-1-0077), and ARO (78372-CS). This research was also sponsored by the Army Research Office under Cooperative Agreement Number W911NF-19-2-0333. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

## References

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In *Proceedings of the twenty-first international conference on Machine learning*, pp. 1, 2004.

Joshua Achiam. Spinning Up in Deep Reinforcement Learning. 2018.

Riad Akrour, Marc Schoenauer, and Michele Sebag. Preference-based policy learning.
In *Joint European* Conference on Machine Learning and Knowledge Discovery in Databases, pp. 12–27. Springer, 2011. Syed Mumtaz Ali and Samuel D Silvey. A general class of coefficients of divergence of one distribution from another. *Journal of the Royal Statistical Society: Series B (Methodological)*, 28(1):131–142, 1966. Oleg Arenz and Gerhard Neumann. Non-adversarial imitation learning and its connections to adversarial methods. *arXiv preprint arXiv:2008.03525*, 2020. Tamer Başar and Geert Jan Olsder. *Dynamic noncooperative game theory*. SIAM, 1998. Marc G Bellemare, Georg Ostrovski, Arthur Guez, Philip Thomas, and Rémi Munos. Increasing the action gap: New operators for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. Erdem Bıyık, Dylan P Losey, Malayandi Palan, Nicholas C Landolfi, Gleb Shevchuk, and Dorsa Sadigh. Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences. *The International Journal of Robotics Research*, pp. 02783649211041652, 2021. Daniel Brown, Russell Coleman, Ravi Srinivasan, and Scott Niekum. Safe imitation learning via fast bayesian reward inference from preferences. In *International Conference on Machine Learning*, pp. 1165–1177. PMLR, 2020a. Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. *ArXiv*, abs/1904.06387, 2019. Daniel S Brown, Wonjoon Goo, and Scott Niekum. Better-than-demonstrator imitation learning via automatically-ranked demonstrations. In *Conference on robot learning*, pp. 330–359. PMLR, 2020b. Tim Brys, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Matthew E Taylor, and Ann Nowé. Reinforcement learning from demonstration through shaping. In *Twenty-fourth international joint conference* on artificial intelligence, 2015. Nicolo Cesa-Bianchi and Gábor Lugosi. *Prediction, learning, and games*. Cambridge university press, 2006. Letian Chen, Rohan Paleja, and Matthew Gombolay. Learning from suboptimal demonstration via selfsupervised reward regression. *arXiv preprint arXiv:2010.11723*, 2020. Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *arXiv preprint arXiv:1706.03741*, 2017. Imre Csiszár. Information-type measures of difference of probability distributions and indirect observation. studia scientiarum Mathematicarum Hungarica, 2:229–318, 1967. Yuchen Cui, Bo Liu, Akanksha Saran, Stephen Giguere, Peter Stone, and Scott Niekum. Aux-airl: End-to-end self-supervised reward learning for extrapolating beyond suboptimal demonstrations. In *Proceedings of the* ICML Workshop on Self-Supervised Learning for Reasoning and Perception, 2021. Ashley Edwards, Himanshu Sahni, Yannick Schroecker, and Charles Isbell. Imitating latent policies from observation. In *International Conference on Machine Learning*, pp. 1755–1763. PMLR, 2019. Tanner Fiez, Benjamin Chasnov, and Lillian J Ratliff. Convergence of learning dynamics in stackelberg games. *arXiv preprint arXiv:1906.01217*, 2019. Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In *International conference on machine learning*, pp. 49–58. PMLR, 2016. Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. *arXiv preprint arXiv:1710.11248*, 2017. 
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020. Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. Iq-learn: Inverse soft-q learning for imitation. *Advances in Neural Information Processing Systems*, 34, 2021. Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In *Conference on Robot Learning*, pp. 1259–1277. PMLR, 2020. Gustavo Gilardoni. On the minimum f-divergence for given total variation. *Comptes Rendus Mathematique -* C R MATH, 343:763–766, 12 2006. doi: 10.1016/j.crma.2006.10.027. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Hongyu Guo, Yongyi Mao, and Richong Zhang. Mixup as locally linear out-of-manifold regularization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 3714–3722, 2019. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861–1870. PMLR, 2018. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. *Advances in neural information* processing systems, 29:4565–4573, 2016. Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In The collected works of Wassily Hoeffding, pp. 409–426. Springer, 1994. Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari. *arXiv preprint arXiv:1811.06521*, 2018. Rishabh Iyer and Jeff Bilmes. The submodular bregman and lovász-bregman divergences with applications: Extended version. Citeseer, 2012. Hong Jun Jeon, Smitha Milli, and Anca Dragan. Reward-rational (implicit) choice: A unifying formalism for reward learning. *Advances in Neural Information Processing Systems*, 33:4415–4426, 2020. Mingxuan Jing, Xiaojian Ma, Wenbing Huang, Fuchun Sun, Chao Yang, Bin Fang, and Huaping Liu. Reinforcement learning from imperfect demonstrations under soft expert guidance. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 5109–5116, 2020. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In In Proc. 19th International Conference on Machine Learning. Citeseer, 2002. Liyiming Ke, Sanjiban Choudhury, Matt Barnes, Wen Sun, Gilwoo Lee, and Siddhartha Srinivasa. Imitation learning as f-divergence minimization. In *Algorithmic Foundations of Robotics XIV: Proceedings of the* Fourteenth Workshop on the Algorithmic Foundations of Robotics 14, pp. 313–329. Springer International Publishing, 2021. Michael Kearns and Satinder Singh. Finite-sample convergence rates for q-learning and indirect algorithms. Advances in neural information processing systems, 11, 1998. Rahul Kidambi, Jonathan Chang, and Wen Sun. Mobile: Model-based imitation learning from observation alone. 
*Advances in Neural Information Processing Systems*, 34, 2021. Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925, 2018. Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. Imitation learning via off-policy distribution matching. arXiv preprint arXiv:1912.05032, 2019. Victoria Krakovna. Specification gaming examples in ai. *Available at vkrakovna. wordpress. com*, 2018. Minae Kwon, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan P Losey, and Dorsa Sadigh. When humans aren't optimal: Robots that collaborate with risk-aware humans. In *2020 15th ACM/IEEE International* Conference on Human-Robot Interaction (HRI), pp. 43–52. IEEE, 2020. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Friedrich Liese and Igor Vajda. On divergences and informations in statistics and information theory. IEEE Transactions on Information Theory, 52(10):4394–4412, 2006. Fangchen Liu, Zhan Ling, Tongzhou Mu, and Hao Su. State alignment-based imitation learning. arXiv preprint arXiv:1911.10947, 2019. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In *Proceedings of the IEEE international conference on computer vision*, pp. 2794–2802, 2017. Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In *Icml*, volume 1, pp. 2, 2000. Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Benjamin Eysenbach. f-irl: Inverse reinforcement learning via state marginal matching. *arXiv preprint arXiv:2011.04709*, 2020. Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, and Marcin Andrychowicz. What matters for adversarial imitation learning? *Advances in Neural Information Processing Systems*, 34, 2021. Malayandi Palan, Nicholas C Landolfi, Gleb Shevchuk, and Dorsa Sadigh. Learning reward functions by integrating human demonstrations and preferences. *arXiv preprint arXiv:1906.08928*, 2019. Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. *Neural* computation, 3(1):88–97, 1991. Martin L Puterman. *Markov decision processes: discrete stochastic dynamic programming*. John Wiley & Sons, 2014. Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. *arXiv preprint arXiv:1709.10087*, 2017. Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A game theoretic framework for model based reinforcement learning. In *ICML*, 2020. Siddharth Reddy, Anca D Dragan, and Sergey Levine. Sqil: Imitation learning via reinforcement learning with sparse rewards. *arXiv preprint arXiv:1905.11108*, 2019. Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pp. 547–561. University of California Press, 1961. Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. 
In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 627–635. JMLR Workshop and Conference Proceedings, 2011.

Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. In *Robotics: Science and Systems*, 2017.

Florian Schäfer and Anima Anandkumar. Competitive gradient descent. *arXiv preprint arXiv:1905.12103*, 2019.

Shai Shalev-Shwartz and Shai Ben-David. *Understanding machine learning: From theory to algorithms*. Cambridge university press, 2014.

Wen Sun, Anirudh Vemula, Byron Boots, and Drew Bagnell. Provably efficient imitation learning from observation alone. In *International conference on machine learning*, pp. 6036–6045. PMLR, 2019.

Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018.

Gokul Swamy, Sanjiban Choudhury, J Andrew Bagnell, and Steven Wu. Of moments and matching: A game-theoretic framework for closing the imitation gap. In *International Conference on Machine Learning*, pp. 10022–10032. PMLR, 2021.

Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. *arXiv preprint arXiv:1805.01954*, 2018a.

Faraz Torabi, Garrett Warnell, and Peter Stone. Generative adversarial imitation from observation. *arXiv preprint arXiv:1807.06158*, 2018b.

Faraz Torabi, Garrett Warnell, and Peter Stone. Recent advances in imitation learning from observation. *arXiv preprint arXiv:1905.13566*, 2019.

Igor Vajda. Note on discrimination information and variation (corresp.). *IEEE Transactions on Information Theory*, 16(6):771–773, 1970.

Aaron Wilson, Alan Fern, and Prasad Tadepalli. A bayesian approach for policy learning from trajectory preference queries. *Advances in neural information processing systems*, 25:1133–1141, 2012.

Tian Xu, Ziniu Li, and Yang Yu. Error bounds of imitating policies and environments for reinforcement learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021.

Chao Yang, Xiaojian Ma, Wenbing Huang, Fuchun Sun, Huaping Liu, Junzhou Huang, and Chuang Gan. Imitation learning from observations by minimizing inverse dynamics disagreement. *arXiv preprint arXiv:1910.04417*, 2019.

Haoran Zhang, Chenkun Yin, Yanxin Zhang, and Shangtai Jin. Self-guided actor-critic: Reinforcement learning from adaptive expert demonstrations. In *2020 59th IEEE Conference on Decision and Control (CDC)*, pp. 572–577. IEEE, 2020.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017.

Liyuan Zheng, Tanner Fiez, Zane Alumbaugh, Benjamin Chasnov, and Lillian J Ratliff. Stackelberg actor-critic: Game-theoretic reinforcement learning algorithms. *arXiv preprint arXiv:2109.12286*, 2021.

Yuke Zhu, Josiah Wong, Ajay Mandlekar, and Roberto Martín-Martín. robosuite: A modular simulation framework and benchmark for robot learning. *arXiv preprint arXiv:2009.12293*, 2020a.

Zhuangdi Zhu, Kaixiang Lin, Bo Dai, and Jiayu Zhou. Off-policy imitation learning from observations. *Advances in Neural Information Processing Systems*, 33, 2020b.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. Maximum entropy inverse reinforcement learning. In *Aaai*, volume 8, pp. 1433–1438. Chicago, IL, USA, 2008.

## A Theory

We aim to show that rank-game has an equilibrium that bounds the f-divergence between the agent and the expert (Theorem A.1) in the imitation learning setting.
For imitation learning, we have the vanilla implicit ranking ρ^{agent} ⪯ ρ^E between the behaviors of the agent and the expert. Later, we show that the bounded f-divergence can be used to bound the performance gap with the expert under the expert's unknown reward function, using a solution to Vajda's tight lower bound (Corollary A.1.1). Our proof proceeds by first showing that minimizing the empirical ranking loss produces a reward function that is close to the reward function obtained by the true ranking loss. Then, we show that even in the presence of policy optimization errors, maximizing the obtained reward function leads to a bounded f-divergence with the expert.

**Theorem A.1.** *(Performance of the* rank-game *equilibrium pair) Consider an equilibrium of the imitation* rank-game (π̂, R̂)*, such that* R̂ *minimizes the empirical ranking loss for the dataset* D_π̂ = {(ρ^π̂, ρ^E)}*, the ranking-loss generalization error is bounded by* ϵ′_r = 2R²_max ϵ_r*, and the policy* π̂ *has bounded suboptimality,* J(R̂; π̂) ≥ J(R̂; π′) − ϵ_π ∀π′*. Then, at this equilibrium pair:*

$$D_{f}\left(\rho^{\hat{\pi}}(s,a)\|\rho^{E}(s,a)\right)\leq\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{2\epsilon_{r}}}{k}\tag{9}$$

where D_f is an f-divergence with the generator function f(x) = (1−x)/(1+x) (Rényi, 1961; Ali & Silvey, 1966; Csiszár, 1967; Liese & Vajda, 2006).

*Proof.* Previous works (Xu et al., 2021; Swamy et al., 2021) characterize the equilibrium in imitation learning based on the *supremum* ranking loss/min-max adversarial setting under a no-error assumption. In this section, we consider the ranking loss function L_k and derive the equilibrium for the rank-game in the presence of reward learning and policy optimization errors. L_k attempts to explain the rankings between the agent and the expert using their state-action visitations D_π = {ρ^π(s,a), ρ^E(s,a)} by attempting to induce a performance gap of k. With this dataset D_π, L_k regresses the return of state or state-action pairs in the expert's visitation to a scalar k and in the agent's visitation to a value of 0. Thus, we have:

$$L_{k}(\mathcal{D};R)=\mathbb{E}_{\rho^{E}(s,a)}\big[(R(s,a)-k)^{2}\big]+\mathbb{E}_{\rho^{\pi}(s,a)}\big[(R(s,a)-0)^{2}\big]\tag{10}$$

The above ranking loss is minimized (∇L_k = 0) pointwise when

$$R^{*}(s,a)=\frac{k\,\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\pi}(s,a)}\tag{11}$$

In practice, we have finite samples from both the expert visitation distribution and the agent distribution, so we minimize the following empirical ranking loss L̂_k(D; R):

$$\hat{L}_{k}(\mathcal{D};R)=\frac{\sum_{s,a\in\hat{\rho}^{E}}(R(s,a)-k)^{2}}{|\hat{\rho}^{E}|}+\frac{\sum_{s,a\in\hat{\rho}^{\pi}}(R(s,a)-0)^{2}}{|\hat{\rho}^{\pi}|}\tag{12}$$

where ρ̂^E and ρ̂^π are the empirical state-action visitations of the expert and the agent, respectively.

**From the empirical loss function to reward optimality:** Since the reward function is trained with supervised learning, we can quantify the sample error in minimizing the empirical loss using concentration bounds (Shalev-Shwartz & Ben-David, 2014) up to a constant with high probability. Since 0 < R(s,a) < R_max, with high probability,

$$\forall R,\ \ |L_{k}(\mathcal{D};R)-\hat{L}_{k}(\mathcal{D};R)|\leq2R_{max}^{2}\epsilon_{r}\tag{13}$$

where ϵ_r is the statistical estimation error that can be bounded using concentration bounds such as Hoeffding's.
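As a quick numerical sanity check on the pointwise minimizer in Equation 11, the sketch below (our illustration, not part of the original derivation) minimizes the pointwise loss of Equation 10 by gradient descent on synthetic visitation densities and compares the result to the closed form; the densities and the value of k are arbitrary.

```python
import numpy as np

# Synthetic (assumed) visitation densities over 5 state-action pairs, and target gap k.
rho_E = np.array([0.4, 0.3, 0.2, 0.05, 0.05])
rho_pi = np.array([0.1, 0.1, 0.2, 0.3, 0.3])
k = 10.0

# Pointwise ranking loss of Eq. 10: rho_E*(R-k)^2 + rho_pi*R^2, minimized by gradient descent.
R = np.zeros(5)
for _ in range(20000):
    grad = 2 * rho_E * (R - k) + 2 * rho_pi * R
    R -= 1e-2 * grad

# Closed-form minimizer of Eq. 11.
R_star = k * rho_E / (rho_E + rho_pi)
assert np.allclose(R, R_star, atol=1e-6)  # both solutions agree
```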
Let R* be the optimal solution of L_k(D; R) and R̂* be the optimal minimizing solution of L̂_k(D; R). Therefore, we have that

$$\forall R,\ \ \hat{L}_{k}(\mathcal{D};\hat{R}^{*})\leq\hat{L}_{k}(\mathcal{D};R)\tag{14}$$

Using Eq 13 and Eq 14, we have

$$\forall R,\ \ \hat{L}_{k}(\mathcal{D};\hat{R}^{*})\leq\hat{L}_{k}(\mathcal{D};R)\tag{15}$$
$$\leq L_{k}(\mathcal{D};R)+2R_{max}^{2}\epsilon_{r}\tag{16}$$

and instantiating R = R*,

$$\hat{L}_{k}(\mathcal{D};\hat{R}^{*})\leq L_{k}(\mathcal{D};R^{*})+2R_{max}^{2}\epsilon_{r}\tag{17}$$

Similarly, using the optimality of R* for L_k and Eq 13,

$$\forall R,\ \ L_{k}(\mathcal{D};R^{*})\leq L_{k}(\mathcal{D};R)\tag{18}$$
$$\leq\hat{L}_{k}(\mathcal{D};R)+2R_{max}^{2}\epsilon_{r}\tag{19}$$

and instantiating R = R̂*,

$$L_{k}(\mathcal{D};R^{*})\leq\hat{L}_{k}(\mathcal{D};\hat{R}^{*})+2R_{max}^{2}\epsilon_{r}\tag{20}$$

Eq 17 and Eq 20 imply that L_k(D; R*) and L̂_k(D; R̂*) are close with high probability, i.e.,

$$|L_{k}(\mathcal{D};R^{*})-\hat{L}_{k}(\mathcal{D};\hat{R}^{*})|\leq2R_{max}^{2}\epsilon_{r}\tag{21}$$

We will use Eq 21 to show that R̂* indeed has a bounded loss compared to R*. Combining Eq 21 with Eq 13 applied at R̂*:

$$\hat{L}_{k}(\mathcal{D};\hat{R}^{*})-L_{k}(\mathcal{D};R^{*})\leq2R_{max}^{2}\epsilon_{r}\tag{22}$$
$$L_{k}(\mathcal{D};\hat{R}^{*})-2R_{max}^{2}\epsilon_{r}-L_{k}(\mathcal{D};R^{*})\leq2R_{max}^{2}\epsilon_{r}\tag{23}$$
$$L_{k}(\mathcal{D};\hat{R}^{*})-L_{k}(\mathcal{D};R^{*})\leq4R_{max}^{2}\epsilon_{r}\tag{24}$$

We consider the tabular MDP setting and overload R to denote a vector of reward values for the entire state-action space of size |S × A|. Using the Taylor series expansion of the loss function L_k around R* (whose gradient vanishes at the minimizer R*), we have:

$$L_{k}(\mathcal{D};R^{*})+\langle\nabla_{R^{*}}L_{k}(\mathcal{D};R^{*}),\hat{R}^{*}-R^{*}\rangle+0.5(\hat{R}^{*}-R^{*})^{T}H(\hat{R}^{*}-R^{*})-L_{k}(\mathcal{D};R^{*})\leq4R_{max}^{2}\epsilon_{r}\tag{25}$$
$$(\hat{R}^{*}-R^{*})^{T}H(\hat{R}^{*}-R^{*})\leq8R_{max}^{2}\epsilon_{r}\tag{26}$$

where H denotes the Hessian of the loss function w.r.t. R and is given by H = P_{ρ^π} + P_{ρ^E}, where P_ρ is a matrix of size |S × A| × |S × A| with the visitation vector ρ on its diagonal and zeros elsewhere. Expanding the quadratic form,

$$\mathbb{E}_{s,a\sim\rho^{\pi}}\Big[(\hat{R}^{*}(s,a)-R^{*}(s,a))^{2}\Big]+\mathbb{E}_{s,a\sim\rho^{E}}\Big[(\hat{R}^{*}(s,a)-R^{*}(s,a))^{2}\Big]\leq8R_{max}^{2}\epsilon_{r}\tag{27}$$

Since both terms on the LHS are non-negative, we have

$$\mathbb{E}_{s,a\sim\rho^{\pi}}\Big[(\hat{R}^{*}(s,a)-R^{*}(s,a))^{2}\Big]\leq8R_{max}^{2}\epsilon_{r}\tag{28}$$
$$\mathbb{E}_{s,a\sim\rho^{E}}\Big[(\hat{R}^{*}(s,a)-R^{*}(s,a))^{2}\Big]\leq8R_{max}^{2}\epsilon_{r}\tag{29}$$

Applying Jensen's inequality, we further have

$$\Big(\mathbb{E}_{s,a\sim\rho^{\pi}}\big[\hat{R}^{*}(s,a)-R^{*}(s,a)\big]\Big)^{2}\leq8R_{max}^{2}\epsilon_{r}\tag{30}$$
$$\Big(\mathbb{E}_{s,a\sim\rho^{E}}\big[\hat{R}^{*}(s,a)-R^{*}(s,a)\big]\Big)^{2}\leq8R_{max}^{2}\epsilon_{r}\tag{31}$$
Hence,

$$\Big|\mathbb{E}_{s,a\sim\rho^{\pi}}\big[\hat{R}^{*}(s,a)-R^{*}(s,a)\big]\Big|\leq R_{max}\sqrt{8\epsilon_{r}}\quad\text{and}\quad\Big|\mathbb{E}_{s,a\sim\rho^{E}}\big[\hat{R}^{*}(s,a)-R^{*}(s,a)\big]\Big|\leq R_{max}\sqrt{8\epsilon_{r}}$$

At this point we have bounded the expected difference between the reward functions obtained from the empirical ranking loss and the true ranking loss. We now characterize the equilibrium obtained by learning a policy on the reward function R̂* that is optimal under the empirical ranking loss. Under a policy optimization error of ϵ_π we have:

$$J(\hat{R}^{*};\hat{\pi})\geq J(\hat{R}^{*};\pi^{\prime})-\epsilon_{\pi}\ \ \forall\pi^{\prime}\in\Pi\tag{32}$$

where J(R; π) denotes the performance of policy π under reward function R. Taking π′ = π^E, we can rewrite the above expression as follows:

$$J(\hat{R}^{*};\pi^{E})-J(\hat{R}^{*};\hat{\pi})\leq\epsilon_{\pi}\tag{33}$$
$$\frac{1}{1-\gamma}\Big[\mathbb{E}_{\rho^{E}(s,a)}\big[\hat{R}^{*}(s,a)\big]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[\hat{R}^{*}(s,a)\big]\Big]\leq\epsilon_{\pi}\tag{34}$$

Using Eq 30 and Eq 31 we can lower bound E_{ρ^E(s,a)}[R̂*(s,a)] − E_{ρ^π̂(s,a)}[R̂*(s,a)] as follows:

$$\mathbb{E}_{\rho^{E}(s,a)}\big[\hat{R}^{*}(s,a)\big]\geq\mathbb{E}_{\rho^{E}(s,a)}\big[R^{*}(s,a)\big]-R_{max}\sqrt{8\epsilon_{r}}\tag{35}$$
$$\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[\hat{R}^{*}(s,a)\big]\leq\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[R^{*}(s,a)\big]+R_{max}\sqrt{8\epsilon_{r}}\tag{36}$$

where R*(s,a) is given by Equation 11. Subtracting Equation 36 from Equation 35, we have

$$\mathbb{E}_{\rho^{E}(s,a)}\big[\hat{R}^{*}(s,a)\big]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[\hat{R}^{*}(s,a)\big]\geq\mathbb{E}_{\rho^{E}(s,a)}\big[R^{*}(s,a)\big]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[R^{*}(s,a)\big]-2R_{max}\sqrt{8\epsilon_{r}}\tag{37}$$

Plugging the lower bound from Equation 37 into Equation 34, we have:

$$\frac{1}{1-\gamma}\Big[\mathbb{E}_{\rho^{E}(s,a)}\big[R^{*}(s,a)\big]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\big[R^{*}(s,a)\big]-2R_{max}\sqrt{8\epsilon_{r}}\Big]\leq\epsilon_{\pi}\tag{38}$$

Replacing R* using Equation 11, we get:

$$\frac{1}{1-\gamma}\left[\mathbb{E}_{\rho^{E}(s,a)}\left[\frac{k\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\hat{\pi}}(s,a)}\right]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\left[\frac{k\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\hat{\pi}}(s,a)}\right]-2R_{max}\sqrt{8\epsilon_{r}}\right]\leq\epsilon_{\pi}\tag{39}$$
$$\mathbb{E}_{\rho^{E}(s,a)}\left[\frac{k\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\hat{\pi}}(s,a)}\right]-\mathbb{E}_{\rho^{\hat{\pi}}(s,a)}\left[\frac{k\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\hat{\pi}}(s,a)}\right]\leq(1-\gamma)\epsilon_{\pi}+2R_{max}\sqrt{8\epsilon_{r}}\tag{40}$$
$$\mathbb{E}_{\rho^{E}(s,a)}\left[\frac{\rho^{E}(s,a)-\rho^{\hat{\pi}}(s,a)}{\rho^{E}(s,a)+\rho^{\hat{\pi}}(s,a)}\right]\leq\frac{(1-\gamma)\epsilon_{\pi}+2R_{max}\sqrt{8\epsilon_{r}}}{k}\tag{41}$$

The convex function f(x) = (1−x)/(1+x) on ℝ⁺ defines an f-divergence. Under this generator function, the LHS of Equation 41 is exactly the f-divergence between the visitations of the agent ρ^π̂(s,a) and the expert ρ^E(s,a). Hence, noting that 2√(8ϵ_r) = 4√(2ϵ_r), we have that

$$D_{f}\big(\rho^{\hat{\pi}}(s,a)\|\rho^{E}(s,a)\big)\leq\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{2\epsilon_{r}}}{k}\tag{42}$$

This bound shows that the equilibrium of the ranking game is a near-optimal imitation learning solution when the ranking-loss target k trades off effectively with the policy optimization error ϵ_π and the reward generalization error ϵ_r. We note that since k ≤ R_max, we get the tightest bound when k = R_max. In imitation learning, both k and R_max are tunable hyperparameters; we vary k while keeping k = R_max and observe in Appendix D.9 that this hyperparameter can significantly affect learning performance.

**Corollary A.1.1.**
*(From f-divergence to performance gap) For the equilibrium of the* rank-game (π̂, R̂) *as described in Theorem A.1, the performance gap between the expert policy and* π̂ *is bounded under the expert's unknown reward function* r_gt*, bounded in* [0, R^E_max]*, as follows:*

$$|J(\pi^{E},r_{gt})-J(\hat{\pi},r_{gt})|\leq\frac{4R_{max}^{E}\sqrt{\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{2\epsilon_{r}}}{k}}}{1-\gamma}\tag{43}$$

*Proof.* In Theorem A.1, we showed that the equilibrium of rank-game ensures that the f-divergence between the expert visitation and the agent visitation is bounded, with generator function f(x) = (1−x)/(1+x). First we find a tight lower bound on this f-divergence in terms of the total variation distance between the two distributions. Such bounds have been discussed in previous literature for the usual f-divergences like KL, Hellinger, and χ². The problem of finding a tight lower bound in terms of the variational distance for a general f-divergence was introduced in Vajda (1970), is referred to as Vajda's tight lower bound, and a solution for arbitrary f-divergences was proposed by Gilardoni (2006). The f-divergence with generator function f(x) = (1−x)/(1+x) satisfies f(t) = t f(1/t) + 2f′(1)(t−1), and hence the total variation lower bound for this f-divergence takes the form D_f ≥ ((2−D_TV)/2) f((2+D_TV)/(2−D_TV)) − f′(1) D_TV. Plugging in the function definition f(x) = (1−x)/(1+x), the inequality simplifies to:

$$D_{f}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)\geq\frac{\big(D_{TV}(\rho^{\pi}(s,a)\|\rho^{E}(s,a))\big)^{2}}{4}\tag{44}$$

We also note an upper bound for this f-divergence in terms of the TV distance, sandwiching this particular f-divergence between TV bounds:

$$D_{f}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)=\mathbb{E}_{\rho^{E}(s,a)}\left[\frac{\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\pi}(s,a)}\right]-\mathbb{E}_{\rho^{\pi}(s,a)}\left[\frac{\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\pi}(s,a)}\right]\tag{45}$$
$$\leq\sum_{s,a\in S\times A}\big|\rho^{E}(s,a)-\rho^{\pi}(s,a)\big|\left|\frac{\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\pi}(s,a)}\right|\tag{46}$$
$$\leq D_{TV}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)\tag{47}$$

So,

$$D_{TV}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)\geq D_{f}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)\geq\frac{\big(D_{TV}(\rho^{\pi}(s,a)\|\rho^{E}(s,a))\big)^{2}}{4}\tag{48}$$

Therefore, from Eq 42 we have that

$$D_{TV}\big(\rho^{\pi}(s,a)\|\rho^{E}(s,a)\big)\leq2\sqrt{\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{2\epsilon_{r}}}{k}}\tag{49}$$
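The sandwich in Equation 48 is easy to check numerically. The sketch below (our illustration; the distributions are random) evaluates D_f with generator f(x) = (1−x)/(1+x) and the total variation distance, here taken as Σ|p−q| as in Gilardoni's form above, and confirms D_TV²/4 ≤ D_f ≤ D_TV.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (1 - x) / (1 + x)  # generator of the f-divergence used above

for _ in range(1000):
    # Random visitation distributions over 10 states (playing rho^pi and rho^E).
    p = rng.random(10); p /= p.sum()   # agent
    q = rng.random(10); q /= q.sum()   # expert
    D_f = np.sum(q * f(p / q))         # D_f(p || q) = E_q[f(p/q)] = E_q[(q-p)/(q+p)]
    D_tv = np.abs(p - q).sum()         # total variation as used here, in [0, 2]
    assert D_tv**2 / 4 <= D_f + 1e-12 and D_f <= D_tv + 1e-12
```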
For any policy π and the expert's unknown reward function r_gt, J(π, r_gt) = (1/(1−γ)) E_{s,a∼ρ^π}[r_gt(s,a)]. Therefore,

$$\big|J(\pi^{E},r_{gt})-J(\pi,r_{gt})\big|=\left|\frac{1}{1-\gamma}\mathbb{E}_{s,a\sim\rho^{E}}[r_{gt}(s,a)]-\frac{1}{1-\gamma}\mathbb{E}_{s,a\sim\rho^{\pi}}[r_{gt}(s,a)]\right|\ \ \forall\pi\tag{50}$$
$$=\frac{1}{1-\gamma}\left|\sum_{s,a\in S\times A}\big(\rho^{E}(s,a)-\rho^{\pi}(s,a)\big)\,r_{gt}(s,a)\right|\tag{51}$$
$$\leq\frac{R_{max}^{E}}{1-\gamma}\sum_{s,a\in S\times A}\big|\rho^{E}(s,a)-\rho^{\pi}(s,a)\big|\tag{52}$$
$$\leq\frac{2R_{max}^{E}}{1-\gamma}D_{TV}\big(\rho^{E},\rho^{\pi}\big)\tag{53}$$

where R^E_max is the upper bound on the expert's reward function. Under a worst-case expert reward function that assigns finite reward values inside the expert's visitation and −∞ outside it, even a small mistake (visiting any state outside the expert's visitation) by the policy can result in an infinite performance gap between the expert and the agent. Thus, this parameter is decided by the expert and is not in the control of the learning agent.

From Eq 49 and Eq 53 we have

$$|J(\pi^{E},r_{gt})-J(\hat{\pi},r_{gt})|\leq\frac{4R_{max}^{E}\sqrt{\frac{(1-\gamma)\epsilon_{\pi}+4R_{max}\sqrt{2\epsilon_{r}}}{k}}}{1-\gamma}\tag{56}$$

□

**Lemma A.2.** *(Regret bound under finite data assumptions) Let* M̂_t *denote the approximate transition model under the collected dataset of transitions until iteration* t*. Assume that the ground truth model* M *and the reward function are realizable. Under these assumptions, the regret of* rank-game *at the* t*-th iteration satisfies:*

$$V_{M}^{\pi^{E}}-V_{M}^{\pi^{t}}\leq\frac{2\gamma\epsilon_{m}^{\pi^{t}}R_{max}}{(1-\gamma)^{2}}+\frac{4R_{max}}{1-\gamma}\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}}$$

where V^π_M denotes the performance of policy π under transition dynamics M, ϵ^{π^t}_m is the expected model error under policy π^t's visitation, ρ^π_M is the visitation of policy π under transition dynamics M, and ϵ_stat is the statistical error due to finite expert samples.

*Proof.* We use M to denote the ground truth model and M̂_t to denote the approximate transition model with the data collected until the t-th iteration of rank-game. We are interested in solving the following optimization problem under finite data assumptions:

$$\max_{\pi}\ \mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi}}\big[\hat{f}_{\pi}^{*}(s,a)\big]-\frac{\sum_{s,a\in\hat{\rho}^{E}}\hat{f}_{\pi}^{*}(s,a)}{|\hat{\rho}^{E}|}\ \ \text{s.t.}\ \ \hat{f}_{\pi}^{*}=\arg\min_{f}\hat{L}_{k}\big(f;\mathcal{D}_{\hat{M}_{t}}^{\pi}\big)\tag{57}$$

where ρ̂^E is the empirical distribution generated from finite expert samples and D^π_{M̂_t} = {(ρ̂^π_{M̂_t}, ρ̂^E_M)}. Using standard concentration bounds such as Hoeffding's (Hoeffding, 1994), we can bound the empirical estimate with the true estimate, ∀π, with high probability:

$$\left|\frac{\sum_{s,a\in\hat{\rho}^{E}}\hat{f}_{\pi}^{*}(s,a)}{|\hat{\rho}^{E}|}-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi}^{*}(s,a)\big]\right|\leq\epsilon_{stat}\tag{58}$$

Using the concentration bound and the fact that π^t is the solution that maximizes the optimization problem of Eq 57 at iteration t,

$$\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]-\frac{\sum_{s,a\in\hat{\rho}^{E}}\hat{f}_{\pi^{t}}^{*}(s,a)}{|\hat{\rho}^{E}|}\geq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-\frac{\sum_{s,a\in\hat{\rho}^{E}}\hat{f}_{\pi^{E}}^{*}(s,a)}{|\hat{\rho}^{E}|}\tag{59}$$
$$\geq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-\epsilon_{stat}\tag{60}$$

Here f̂*_{π^t} is the reward function that minimizes the empirical ranking loss L̂_k; let f*_{π^t} be the solution to the true ranking loss L_k. As shown previously in Eq 30 and Eq 31, we can bound the expected difference of these two quantities with high probability under the agent or expert distribution. From the concentration bound we also have:

$$\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]-\frac{\sum_{s,a\in\hat{\rho}^{E}}\hat{f}_{\pi^{t}}^{*}(s,a)}{|\hat{\rho}^{E}|}\leq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]+\epsilon_{stat}\tag{61}$$

Therefore, combining Eq 61 and Eq 60:

$$\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]\geq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-2\epsilon_{stat}\tag{62}$$
The LHS of Eq 62 can be further upper bounded as follows:

$$\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{t}}^{*}(s,a)\big]\leq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\big[f_{\pi^{t}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[f_{\pi^{t}}^{*}(s,a)\big]+2R_{max}\sqrt{8\epsilon_{r}}\tag{63}$$
$$=\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{t}}}\left[\frac{k\rho_{M}^{\pi^{E}}(s,a)}{\rho_{M}^{\pi^{E}}(s,a)+\rho_{\hat{M}_{t}}^{\pi^{t}}(s,a)}\right]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\left[\frac{k\rho_{M}^{\pi^{E}}(s,a)}{\rho_{M}^{\pi^{E}}(s,a)+\rho_{\hat{M}_{t}}^{\pi^{t}}(s,a)}\right]+2R_{max}\sqrt{8\epsilon_{r}}\tag{64}$$
$$=k\,\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\left[\frac{\rho_{\hat{M}_{t}}^{\pi^{t}}(s,a)-\rho_{M}^{\pi^{E}}(s,a)}{\rho_{\hat{M}_{t}}^{\pi^{t}}(s,a)+\rho_{M}^{\pi^{E}}(s,a)}\right]+2R_{max}\sqrt{8\epsilon_{r}}\tag{65}$$
$$=-k\,D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{t}}\|\rho_{M}^{\pi^{E}}\big)+2R_{max}\sqrt{8\epsilon_{r}}\tag{66}$$

Similarly, the RHS of Eq 62 can be further lower bounded as follows:

$$\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[\hat{f}_{\pi^{E}}^{*}(s,a)\big]-2\epsilon_{stat}\tag{67}$$
$$\geq\mathbb{E}_{s,a\sim\rho_{\hat{M}_{t}}^{\pi^{E}}}\big[f_{\pi^{E}}^{*}(s,a)\big]-\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\big[f_{\pi^{E}}^{*}(s,a)\big]-2\epsilon_{stat}-2R_{max}\sqrt{8\epsilon_{r}}\tag{68}$$
$$=k\,\mathbb{E}_{s,a\sim\rho_{M}^{\pi^{E}}}\left[\frac{\rho_{\hat{M}_{t}}^{\pi^{E}}(s,a)-\rho_{M}^{\pi^{E}}(s,a)}{\rho_{\hat{M}_{t}}^{\pi^{E}}(s,a)+\rho_{M}^{\pi^{E}}(s,a)}\right]-2\epsilon_{stat}-2R_{max}\sqrt{8\epsilon_{r}}\tag{69}$$
$$=-k\,D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)-2\epsilon_{stat}-2R_{max}\sqrt{8\epsilon_{r}}\tag{70}$$

Plugging the relations obtained (Eq 70 and Eq 66) back into Eq 62, we see that the f-divergence between the agent visitation in the learned MDP and the expert visitation in the ground-truth MDP is bounded by the f-divergence of the expert policy's visitation in the learned vs. ground-truth environment. We expect this term to be low if the dynamics are accurately learned at the transitions encountered in the expert's visitation:

$$D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{t}}\|\rho_{M}^{\pi^{E}}\big)\leq D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}\tag{71}$$

We can use the total-variation lower bound for this f-divergence to later obtain a performance bound between the policy in the learned MDP and the expert in the ground-truth MDP:

$$D_{TV}\big(\rho_{\hat{M}_{t}}^{\pi^{t}}\|\rho_{M}^{\pi^{E}}\big)\leq2\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{t}}\|\rho_{M}^{\pi^{E}}\big)}\leq2\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}}\tag{72}$$

Using the total variation bound as in Corollary A.1.1, we can further get a performance bound:

$$\big|V_{M}^{\pi^{E}}-V_{\hat{M}_{t}}^{\pi^{t}}\big|\leq\frac{2R_{max}}{1-\gamma}D_{TV}\big(\rho_{\hat{M}_{t}}^{\pi^{t}}\|\rho_{M}^{\pi^{E}}\big)\leq\frac{4R_{max}}{1-\gamma}\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}}\tag{73}$$

Let the local model error in the visitation of π^t be bounded by ϵ^{π^t}_m, i.e., E_{s,a∼ρ^{π^t}}[D_TV(P_M(·|s,a) ∥ P_{M̂_t}(·|s,a))] ≤ ϵ^{π^t}_m. Using the simulation lemma for local models (Kearns & Singh, 1998; Kakade & Langford, 2002), we have:

$$\big|V_{\hat{M}_{t}}^{\pi^{t}}-V_{M}^{\pi^{t}}\big|\leq\frac{2\gamma\epsilon_{m}^{\pi^{t}}R_{max}}{(1-\gamma)^{2}}\tag{74}$$

We are interested in bounding the performance of the policy π^t in the ground-truth MDP rather than the learned MDP:

$$V_{M}^{\pi^{E}}-V_{M}^{\pi^{t}}\leq\big|V_{\hat{M}_{t}}^{\pi^{t}}-V_{M}^{\pi^{t}}\big|+\frac{4R_{max}}{1-\gamma}\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}}\tag{75}$$
$$\leq\frac{2\gamma\epsilon_{m}^{\pi^{t}}R_{max}}{(1-\gamma)^{2}}+\frac{4R_{max}}{1-\gamma}\sqrt{D_{f}\big(\rho_{\hat{M}_{t}}^{\pi^{E}}\|\rho_{M}^{\pi^{E}}\big)+\frac{2\epsilon_{stat}+4R_{max}\sqrt{8\epsilon_{r}}}{k}}\tag{76}$$

The regret of an algorithm with the ranking loss thus depends on the accuracy of the approximate transition model at the visitation of the output policy π^t and on the expected accuracy of the approximate transition model at the transitions encountered in the expert's visitation.
Using an exploratory policy optimization procedure, the regret grows sublinearly, as shown in Kidambi et al. (2021), which uses an exploration bonus and shows that the RHS of the above regret simplifies to an information gain; for a number of MDP families, the growth rate of the information gain is mild.

## Potential Imitation Suboptimality With Additional Rankings

In this section, we consider how additional rankings can affect the intended performance gap, as discussed in Section 4.2. Consider a tabular MDP setting in which we are given a set of rankings ρ^π ⪯ ρ^{π^1} ⪯ … ⪯ ρ^{π^n} ⪯ ρ^E. In such a case, we regress the state-action pairs from their respective visitations to [0, k_1, k_2, …, k_n, k], where 0 < k_1 < k_2 < … < k_n < k. We discuss in Appendix B.1.1 how this regression generalizes L_k and makes it computationally more efficient. For this regression, the optimal reward function that minimizes the ranking loss pointwise is given by:

$$R^{*}(s,a)=\frac{\sum_{i=1}^{n}k_{i}\rho^{\pi^{i}}(s,a)+k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)+\rho^{E}(s,a)}\tag{77}$$

We consider a surrogate ranking loss with regression target k_eff that achieves the same optimal reward when only the ranking ρ^π ⪯ ρ^E is given. Therefore:

$$\frac{\sum_{i=1}^{n}k_{i}\rho^{\pi^{i}}(s,a)+k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)+\rho^{E}(s,a)}=\frac{k_{eff}\,\rho^{E}(s,a)}{\rho^{E}(s,a)+\rho^{\pi}(s,a)}\tag{78}$$

k_eff can be upper bounded as follows:

$$k_{eff}=\frac{\rho^{E}(s,a)+\rho^{\pi}(s,a)}{\rho^{E}(s,a)}\cdot\frac{\sum_{i=1}^{n}k_{i}\rho^{\pi^{i}}(s,a)+k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)+\rho^{E}(s,a)}\tag{79}$$
$$\leq\frac{\rho^{E}(s,a)+\rho^{\pi}(s,a)}{\rho^{E}(s,a)}\cdot\frac{\sum_{i=1}^{n}k_{i}\rho^{\pi^{i}}(s,a)+k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\rho^{E}(s,a)}\tag{80}$$
$$=k+\sum_{i=1}^{n}k_{i}\frac{\rho^{\pi^{i}}(s,a)}{\rho^{E}(s,a)}\tag{81}$$

and lower bounded by:

$$k_{eff}=\frac{\rho^{E}(s,a)+\rho^{\pi}(s,a)}{\rho^{E}(s,a)}\cdot\frac{\sum_{i=1}^{n}k_{i}\rho^{\pi^{i}}(s,a)+k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)+\rho^{E}(s,a)}\tag{82}$$
$$\geq\frac{\rho^{E}(s,a)+\rho^{\pi}(s,a)}{\rho^{E}(s,a)}\cdot\frac{k\rho^{E}(s,a)}{\rho^{\pi}(s,a)+\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)+\rho^{E}(s,a)}\tag{83}$$
$$=\frac{k}{1+\frac{\sum_{i=1}^{n}\rho^{\pi^{i}}(s,a)}{\rho^{\pi}(s,a)+\rho^{E}(s,a)}}\tag{84}$$

Thus, k_eff can increase or decrease compared to k after augmenting the ranking dataset. We discuss the consequences of a decreased k in Section 4.2.
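The bounds in Equations 81 and 84 can be verified numerically. The sketch below (ours; the visitation values and n = 3 are arbitrary illustrative choices) solves Equation 78 for k_eff at a single state-action pair and checks it against both bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 10.0

for _ in range(1000):
    # Densities at one (s, a): agent, n=3 intermediate behaviors, expert.
    rho_pi, rho_E = rng.random(), rng.random()
    rho_int = rng.random(3)                  # rho^{pi^i}(s, a)
    k_int = np.sort(rng.random(3)) * k       # 0 < k_1 < k_2 < k_3 < k

    # Optimal reward of Eq. 77, and the k_eff solving Eq. 78.
    R_star = (k_int @ rho_int + k * rho_E) / (rho_pi + rho_int.sum() + rho_E)
    k_eff = R_star * (rho_E + rho_pi) / rho_E

    upper = k + k_int @ (rho_int / rho_E)                 # Eq. 81
    lower = k / (1 + rho_int.sum() / (rho_pi + rho_E))    # Eq. 84
    assert lower - 1e-9 <= k_eff <= upper + 1e-9
```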
## B Algorithm Details

## B.1 Ranking Loss For The Reward Agent

Consider a dataset of behavior rankings D = {(ρ^1_1 ⪯ ρ^2_1), (ρ^1_2 ⪯ ρ^2_2), …, (ρ^1_n ⪯ ρ^2_n)}, wherein for ρ^i_j the superscript i denotes the comparison index within a pair of policies and the subscript j denotes the pair number; ρ^1_1 ⪯ ρ^2_1 denotes that ρ^2_1 is preferable in comparison to ρ^1_1, which in turn implies that ρ^2_1 has a higher return. Each pair of behavior comparisons in the dataset is between state-action or state visitations. We restrict our attention to a specific instantiation of the ranking loss (a regression loss) that attempts to explain the rankings between each pair of policies in the dataset by a performance gap of at least k, i.e., E_{ρ^1}[R(s,a)] ≤ E_{ρ^2}[R(s,a)] − k. Formally, the ranking loss is defined as follows:

$$\min_{R}L_{k}(\mathcal{D};R)=\min_{R}\mathbb{E}_{(\rho^{1},\rho^{2})\sim\mathcal{D}}\Big[\mathbb{E}_{(s,a)\sim\rho^{1}}\big[(R(s,a)-0)^{2}\big]+\mathbb{E}_{(s,a)\sim\rho^{2}}\big[(R(s,a)-k)^{2}\big]\Big]\tag{85}$$

When k is set to 1 (k = 1), this loss function resembles the loss function used in SQIL (Reddy et al., 2019) if fixed rewards were used instead of learned ones; thus, SQIL can be understood as a special case. We also note that a similar ranking loss function has previously been used for training generative adversarial networks in LS-GAN (Mao et al., 2017). Our work explores the setting of imitation learning given samples from the state or state-action visitation ρ^E of the expert π^E. We use π^{agent}_m to denote the m-th update of the agent in Algorithm 1. The updated agent generates a new visitation in the environment, which is stored in an empty dataset D^{online}_m given by D^{online}_m = {ρ^{π^{agent}_m} ⪯ ρ^{π^E}}.

## B.1.1 Reward Loss With Automatically Generated Rankings (Auto)

The ranking dataset D_p contains pairwise comparisons between behaviors ρ^i ⪯ ρ^j. First, we assume access to the trajectories that generate the behaviors, i.e., ρ^i = {τ^i_1, τ^i_2, …, τ^i_n} and ρ^j = {τ^j_1, τ^j_2, …, τ^j_m}. In this method we propose to automatically generate additional rankings using the following procedure (a code sketch follows below):

(a) Sample trajectories τ^i ∼ ρ^i and τ^j ∼ ρ^j. Both trajectories have equal length because of our use of absorbing states (see Appendix C).

(b) Generate an interpolation τ^{ij}_{λ_p} between the trajectories depending on a parameter λ_p. A trajectory is a matrix of dimensions H × (|S|+|A|), where H is the horizon length of all the trajectories:

$$\tau_{\lambda_{p}}^{ij}=\lambda_{p}\tau_{i}+(1-\lambda_{p})\tau_{j}\tag{86}$$

These intermediate interpolated trajectories lead to a ranking that matches the ranking under the expert reward function if the reward function is indeed linear in state features. We further note that τ can also be a trajectory of features rather than state-action pairs. Next, we generate regression targets for the interpolated trajectories. For a trajectory τ^{ij}_{λ_p}, the regression target is given by the vector λ_p **0** + (1 − λ_p)k **1**, where the vectors **0** and **1** are [0, 0, …, 0] and [1, 1, …, 1] of length H, respectively. This procedure can be regarded as a form of mixup (Zhang et al., 2017) in trajectory space. The set of τ^{ij}_{λ_p} obtained after expending the sampling budget forms our behavior ρ^{ij}_{λ_p}.
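A minimal sketch of steps (a)-(b) above, assuming trajectories are already padded to a common horizon H with absorbing states; the function name, array shapes, and the evenly spaced λ values are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def interpolate_pair(tau_i, tau_j, k, num_interp=5):
    """Trajectory-space mixup (Eq. 86) with per-step regression targets.

    tau_i, tau_j: arrays of shape (H, |S|+|A|), with tau_i from the
    lower-ranked behavior rho^i and tau_j from the higher-ranked rho^j.
    Returns a list of (interpolated trajectory, per-step target) pairs.
    """
    H = tau_i.shape[0]
    out = []
    for p in range(1, num_interp + 1):
        lam = p / (num_interp + 1)               # lambda_p in (0, 1)
        tau_mix = lam * tau_i + (1 - lam) * tau_j
        target = lam * np.zeros(H) + (1 - lam) * k * np.ones(H)
        out.append((tau_mix, target))
    return out

# Usage on random stand-in trajectories (H=1000; |S|+|A| = 17+6 = 23 as in Walker).
rng = np.random.default_rng(0)
pairs = interpolate_pair(rng.normal(size=(1000, 23)), rng.normal(size=(1000, 23)), k=10.0)
```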
## A Generalized And Computationally Efficient Interpolation Strategy For Rank-Game

Once we have generated P interpolated rankings, we effectively have O(P²) rankings that we can use to augment our ranking dataset; using them all naively would incur a high memory burden. Thus, we present another method for achieving the same objective in a more efficient and generalized way. For each pairwise ranking ρ^i ⪯ ρ^j in the dataset D_p, we have the following new set of rankings: ρ^i ⪯ ρ^{ij}_{λ_1} ⪯ … ⪯ ρ^{ij}_{λ_P} ⪯ ρ^j. Using the O(P²) rankings in the ranking loss L_k, the ranking loss can be simplified to the following using basic algebraic manipulation:

$$(P+1)\mathbb{E}_{(s,a)\sim\rho^{j}}\big[(R(s,a)-k)^{2}\big]+P\,\mathbb{E}_{(s,a)\sim\rho_{\lambda_{P}}^{ij}}\big[(R(s,a)-k)^{2}\big]+\ldots+1\cdot\mathbb{E}_{(s,a)\sim\rho_{\lambda_{1}}^{ij}}\big[(R(s,a)-k)^{2}\big]$$
$$+(P+1)\mathbb{E}_{(s,a)\sim\rho^{i}}\big[(R(s,a)-0)^{2}\big]+P\,\mathbb{E}_{(s,a)\sim\rho_{\lambda_{1}}^{ij}}\big[(R(s,a)-0)^{2}\big]+\ldots+1\cdot\mathbb{E}_{(s,a)\sim\rho_{\lambda_{P}}^{ij}}\big[(R(s,a)-0)^{2}\big]\tag{87}$$

The reward function that minimizes the above loss pointwise is given by:

$$R^{*}(s,a)=\frac{k\big[(P+1)\rho^{j}+P\rho_{\lambda_{P}}^{ij}+(P-1)\rho_{\lambda_{P-1}}^{ij}+\ldots+\rho_{\lambda_{1}}^{ij}\big]}{(P+1)\big(\rho^{i}+\rho_{\lambda_{1}}^{ij}+\ldots+\rho_{\lambda_{P}}^{ij}+\rho^{j}\big)}\tag{88}$$
$$=\frac{k\big[\rho^{j}+\frac{P}{P+1}\rho_{\lambda_{P}}^{ij}+\frac{P-1}{P+1}\rho_{\lambda_{P-1}}^{ij}+\ldots+\frac{1}{P+1}\rho_{\lambda_{1}}^{ij}\big]}{\rho^{i}+\rho_{\lambda_{1}}^{ij}+\ldots+\rho_{\lambda_{P}}^{ij}+\rho^{j}}\tag{89}$$

We consider a modification to the ranking loss objective (Equation 85) that increases the flexibility of the regression targets while reducing the computational burden from O(P²) ranking pairs to O(P). In this modification we regress the current agent, the expert, and each of the intermediate interpolants (ρ^i, ρ^{ij}_{λ_1}, …, ρ^{ij}_{λ_P}, ρ^E) to fixed scalar returns (k_0, k_1, …, k_{P+1}), where k_0 ≤ k_1 ≤ … ≤ k_{P+1} = k. The optimal reward function for this loss is given by:

$$R^{*}(s,a)=\frac{k_{P+1}\rho^{E}(s,a)+k_{P}\rho_{\lambda_{P}}^{ij}(s,a)+k_{P-1}\rho_{\lambda_{P-1}}^{ij}(s,a)+\ldots+k_{1}\rho_{\lambda_{1}}^{ij}(s,a)+k_{0}\rho^{\pi}(s,a)}{\rho^{E}(s,a)+\rho_{\lambda_{P}}^{ij}(s,a)+\ldots+\rho_{\lambda_{1}}^{ij}(s,a)+\rho^{\pi}(s,a)}\tag{90}$$

This modified loss function generalizes Eq 88 and recovers it exactly when [k_0, k_1, …, k_{P+1}] is set to [0, k/(P+1), …, kP/(P+1), k]. We call this reward loss function a *generalized ranking loss*.

**Shaping the ranking loss:** The generalized ranking loss contains a set of regression targets (k_0, k_1, …, k_{P+1}) that needs to be decided a priori. We propose two strategies for deciding these regression targets, considering two families of parameterized mappings: (1) linear in α (k_α = αk) and (2) rate of increase in return exponential in α (dk_α/dα ∝ e^{βα}), where β is a temperature parameter; we denote this family by exp-β. We also set k_{α=0} = 0 (in the agent's visitation) and k_{α=1} = k (in the expert's visitation) under a reward function bounded in [0, R_max]. The shaped ranking regression loss, denoted SL_k(D; R), that induces a performance gap between the P+2 consecutive rankings (ρ^i = ρ^{ij}_{λ_0}, ρ^{ij}_{λ_1}, …, ρ^{ij}_{λ_P}, ρ^j = ρ^{ij}_{λ_{P+1}}) is given by:

$$SL_{k}(\mathcal{D};R)=\frac{1}{P+2}\sum_{i=0}^{P+1}\mathbb{E}_{(s,a)\sim\rho_{\lambda_{i}}^{ij}}\big[(R(s,a)-k_{i})^{2}\big]\tag{91}$$
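A small sketch of the two target-shaping families described above; normalizing the exponential family so that k_0 = 0 and k_{P+1} = k is our reading of the construction, not code from the paper.

```python
import numpy as np

def shaped_targets(P, k, family="linear", beta=1.0):
    """Regression targets k_0 <= ... <= k_{P+1} for the P+2 ranked behaviors.

    family="linear":  k_alpha = alpha * k.
    family="exp":     dk_alpha/dalpha proportional to exp(beta * alpha),
                      normalized so that k_0 = 0 and k_{P+1} = k.
    """
    alpha = np.linspace(0.0, 1.0, P + 2)
    if family == "linear":
        return alpha * k
    # Integrating exp(beta*a) from 0 to alpha gives (exp(beta*alpha) - 1) / beta,
    # which we rescale to hit k at alpha = 1.
    return k * (np.exp(beta * alpha) - 1) / (np.exp(beta) - 1)

print(shaped_targets(5, 10.0, "linear"))
print(shaped_targets(5, 10.0, "exp", beta=-1.0))  # exp-[-1], the setting in Table 5
```

With β < 0 the increments between consecutive targets are largest near the agent's end of the chain, matching the behavior described in the Figure 7 caption below.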
![25_image_0.png](25_image_0.png)

Figure 7: This figure shows the assignment of the value k_α (intended return value) corresponding to values of α (degree of time-conditional interpolation between the visitation distributions of the agent and the expert). When the rate of increase is exponential with positive slope, we have a higher performance gap over comparisons closer to the expert; when the rate of increase is negative, the performance gap is higher for comparisons closer to the agent.

Figure 7 above shows the flexibility in reward shaping afforded by the two families of parameterized functions. A temperature parameter β > 0 encourages the initial preferences to have a smaller performance gap than the later preferences. Conversely, β < 0 encourages the initial preferences to have a larger performance gap than the later preferences. We ablate these choices of parametric functions in Appendix D.5.

## B.1.2 Reward Loss With Offline Annotated Rankings (Pref)

Automatically generated rankings are produced without any additional supervision and can be understood as a form of data augmentation. By contrast, with offline annotated rankings, we are given a fixed dataset of comparisons, which is a form of additional supervision for the reward function. Automatically generated rankings can only help by making the reward landscape easier to optimize, but offline rankings can reduce the exploration burden by informing the agent about counterfactuals that it had no information about. This can, for instance, help the agent avoid unnecessary exploration by providing a dense improvement signal. The offline rankings are either provided by a human or extracted from a set of trajectories for which the ground truth reward is known. In our work, we extract offline preferences by uniformly sampling p trajectories from an offline dataset obtained from a training run of an RL method (SAC) (Haarnoja et al., 2018) with the ground truth reward.

For imitation learning with offline annotated rankings, at every iteration m of Algorithm 1 we have a new dataset of rankings given by D^{online}_m = {ρ^{agent}_m ⪯ ρ^E}, along with a fixed offline dataset containing rankings of the form D^{offline} = {ρ^1 ⪯ ρ^2 ⪯ … ⪯ ρ^p}. We always ground the offline preferences by the expert's visitation in our experiments, i.e., ρ^p ⪯ ρ^E. We incorporate the offline rankings as a soft constraint in reward learning by combining the ranking loss L_k between the policy agent and the expert with a ranking loss L_k or a shaped ranking loss SL_k (Equation 91) over the offline trajectories:

$$L_{k}^{offline}(\mathcal{D}^{online},\mathcal{D}^{offline};R)=\alpha L_{k}(\mathcal{D}^{online};R)+(1-\alpha)L_{k}(\mathcal{D}^{offline};R)\tag{92}$$

Here, instead of being interpolants, the consecutive rankings are offline rankings. The videos attached in the supplementary material show the benefit of using preferences in imitation learning: the policy learned without preferences in the Pen environment drops the pen frequently, and in the Door environment it is unable to successfully open the door.
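A sketch of the combined objective in Equation 92, written for batches of (s, a) features; the function names are ours, and applying the per-pair loss to each consecutive offline pair is our reading of how the offline term decomposes.

```python
import torch

def ranking_loss_k(reward_net, batch_lo, batch_hi, k):
    """L_k of Eq. 85 for one ranked pair: regress the lower-ranked batch
    of (s, a) features to 0 and the higher-ranked batch to k."""
    loss_lo = (reward_net(batch_lo) - 0.0).pow(2).mean()
    loss_hi = (reward_net(batch_hi) - k).pow(2).mean()
    return loss_lo + loss_hi

def combined_loss(reward_net, online_pair, offline_chain, k, alpha=0.3):
    """Eq. 92: weighted sum of the online (agent vs. expert) ranking loss and
    the ranking loss over consecutive offline-ranked behaviors."""
    agent_batch, expert_batch = online_pair
    loss = alpha * ranking_loss_k(reward_net, agent_batch, expert_batch, k)
    for lo, hi in zip(offline_chain[:-1], offline_chain[1:]):
        loss = loss + (1 - alpha) * ranking_loss_k(reward_net, lo, hi, k)
    return loss
```

The default alpha = 0.3 follows the offline rankings loss weight listed in Table 5.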
## B.2 Stackelberg Game Instantiation

A Stackelberg game view of optimizing the two-player game with a dataset of behavior rankings leads to two methods: PAL (Policy as Leader) and RAL (Reward as Leader) (see Section 4.3). PAL uses a fast reward update step, which we simulate by training the reward function until convergence (using a validation set) on the dataset of rankings; we simulate the slow update step of the policy with a few iterations of SAC (Haarnoja et al., 2018) updates. RAL uses a slow reward update, which we approximate by dataset aggregation: aggregating all the datasets of rankings generated by the agent in each previous iteration forces the reward function to update slowly. A fast policy update is simulated by using more iterations of SAC. Since SAC does not perform well with a high update-to-environment-step ratio, more iterations of SAC imply more environment steps under a fixed reward function. This was observed to lead to reduced learning efficiency, and an intermediate value of SAC updates was observed to perform best (Table 5).

## B.2.1 Policy As Leader

Algorithm 2 presents pseudocode for the practical instantiation of the PAL methods used in our work: RANK-PAL (vanilla), RANK-PAL (auto), and RANK-PAL (pref). Recall that the (vanilla) variant uses no additional rankings, whereas (auto) uses automatically generated rankings and (pref) uses offline annotated rankings.

**Algorithm 2** Policy As Leader (PAL) practical instantiation

1: **Initialize:** Policy network π_θ, reward network R_ϕ, replay buffer R
2: **Hyperparameters:** *Common*: policy update steps n_pol, reward update steps n_rew, performance gap k, empty ranking dataset D^{online}; *RANK-PAL (auto)*: number of interpolations P; *RANK-PAL (pref)*: offline annotated rankings D^{offline}
3: **for** m = 0, 1, 2, … **do**
4: Collect transitions in the environment and add them to the replay buffer R. Run the policy update step π^m_θ = Soft Actor-Critic(R^{m−1}_ϕ; π^{m−1}_θ) with transitions relabelled with the reward obtained from R^{m−1}_ϕ. // call n_pol times
5: Add absorbing states/state-actions to all early-terminated trajectories collected in the current n_pol policy update steps to make them full horizon, and collect them in D^{online}_m. Set D^{online} = D^{online}_m (discard old data).
6: (for RANK-PAL (auto)) Generate interpolations for rankings in the dataset D^{online} and collect them in D^{online}_auto
7: Reward update step: // call n_rew times
R^m_ϕ = min L_k(D^{online}; R^{m−1}_ϕ) for RANK-PAL (vanilla) (Equation 85); min SL_k(D^{online}_auto; R^{m−1}_ϕ) for RANK-PAL (auto) (Equation 91); min L^{offline}_k(D^{online}, D^{offline}; R^{m−1}_ϕ) for RANK-PAL (pref) (Equation 92)
8: **end for**

## B.2.2 Reward As Leader

Algorithm 3 presents pseudocode for the practical instantiation of the RAL methods: RANK-RAL (vanilla) and RANK-RAL (auto).

**Algorithm 3** Reward As Leader (RAL) practical instantiation

1: **Initialize:** Policy network π_θ, reward network R_ϕ, replay buffer R, trajectory buffer D
2: **Hyperparameters:** *Common*: policy update steps n_pol, reward update steps n_rew, performance gap k, empty ranking dataset D^{online}; *RANK-RAL (auto)*: number of interpolations P
3: **for** m = 0, 1, 2, … **do**
4: Collect transitions in the environment and add them to the replay buffer R. Run the policy update step π^m_θ = Soft Actor-Critic(R^{m−1}_ϕ; π^{m−1}_θ) with transitions relabelled with the reward obtained from R^{m−1}_ϕ. // call n_pol times
5: Add absorbing states/state-actions to all early-terminated trajectories collected in the current n_pol policy update steps to make them full horizon, and collect them in D^{online}_m. Aggregate the data: D^{online} = D^{online}_m ∪ D^{online}.
6: (for RANK-RAL (auto)) Generate interpolations for rankings in the dataset D^{online} and collect them in D^{online}_auto
7: Reward update step: // call n_rew times
R^m_ϕ = min L_k(D^{online}; R^{m−1}_ϕ) for RANK-RAL (vanilla) (Equation 85); min SL_k(D^{online}_auto; R^{m−1}_ϕ) for RANK-RAL (auto) (Equation 91)
8: **end for**
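A condensed sketch of the PAL outer loop of Algorithm 2 (the RAL variant of Algorithm 3 differs only in aggregating D^{online} instead of replacing it). Every function here (collect_rollouts, sac_update, pad_with_absorbing, train_reward_until_convergence, load_expert_observations) is a hypothetical placeholder for the components described in Appendix C, not a real API.

```python
def rank_pal(policy, reward_net, env, iters, n_pol, k):
    """Policy-as-Leader: slow policy (a few follower SAC steps), fast reward
    (leader trained to convergence on the freshest on-policy rankings)."""
    expert_rho = load_expert_observations()              # placeholder loader
    for m in range(iters):
        # Step 4: n_pol SAC updates under rewards relabelled by reward_net.
        replay = collect_rollouts(env, policy, n_pol)
        policy = sac_update(policy, replay, reward_net, steps=n_pol)

        # Step 5: keep only the newest agent visitation (PAL discards old data).
        d_online = [(pad_with_absorbing(replay.trajectories), expert_rho)]

        # Step 7: fast reward update, trained until a validation proxy plateaus.
        reward_net = train_reward_until_convergence(reward_net, d_online, k)
    return policy, reward_net
```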
## C Implementation And Experiment Details

**Environments:** Figure 8 shows some of the environments we use in this work. For benchmarking we use 6 MuJoCo (licensed under CC BY 4.0) locomotion environments. We also test our method on manipulation environments: the Door opening environment from Robosuite (Zhu et al., 2020a) (licensed under the MIT License) and the Pen-v0 environment from mjrl (Rajeswaran et al., 2017) (licensed under the Apache License 2.0).

**Expert data:** For all environments, we obtain expert data from a policy trained until convergence using SAC (Haarnoja et al., 2018) with ground truth rewards.

**Baselines:** We compare our proposed methods against 6 representative LfO approaches that cover a spectrum of on-policy and off-policy, model-free methods from prior work: GAIfO (Torabi et al., 2018b; Ho & Ermon, 2016), DACfO (Kostrikov et al., 2018), BCO (Torabi et al., 2018a), f-IRL (Ni et al., 2020), OPOLO (Zhu et al., 2020b), and IQ-Learn (Garg et al., 2021). GAIfO (Torabi et al., 2018b) is a modification of the adversarial GAIL method (Ho & Ermon, 2016) in which the discriminator is trained to distinguish between state distributions rather than state-action distributions. DACfO (Kostrikov et al., 2018) is an off-policy modification of GAIfO (Torabi et al., 2018b), in which the discriminator distinguishes the expert states with respect to the entire replay buffer of the agent's previously visited states, with additional implementation details such as adding absorbing states to early-terminated trajectories. BCO (Torabi et al., 2018a) iteratively learns an inverse dynamics model using the state-action-next-state visitation in the environment, and uses it to predict the actions that generate the expert state trajectory. OPOLO (Zhu et al., 2020b) is a recent method that presents a principled off-policy approach to imitation learning by minimizing an upper bound of the state marginal matching objective. IQ-Learn (Garg et al., 2021) proposes to make imitation learning non-adversarial by directly optimizing the Q-function and removing the need to learn a reward as a subproblem.

| Env | Swimmer | Hopper | HalfCheetah | Walker | Ant | Humanoid |
|---|---|---|---|---|---|---|
| BCO | 210.22±3.43 | 721.92±89.89 | 410.83±238.02 | 224.58±71.42 | 704.88±13.49 | 324.94±44.39 |
| GAIfO | 202.66±4.87 | 2871.47±365.73 | 1532.57±693.72 | 4666.31±143.75 | 1141.66±1400.11 | 326.69±13.26 |
| DACfO | 194.65±14.08 | 3350.55±141.69 | 11057.54±407.26 | 3045.21±2485.33 | 5112.15±38.01 | 1165.40±1867.61 |
| f-IRL | 212.50±6.43 | 3446.33±35.66 | 12527.24±344.95 | 5630.32±71.35 | 4200.48±1124.17 | 4362.46±459.72 |
| OPOLO | 210.84±1.31 | 3168.35±206.26 | 11576.12±155.09 | 4407.70±1356.39 | 5529.44±164.94 | 1468.90±1041.85 |
| IMIT-PAL (ours) | 216.64±7.95 | 3059.43±283.85 | 11806.47±1750.24 | 4208.17±107.41 | 4872.39±480.23 | 5265.60±287.44 |
| IMIT-RAL (ours) | 205.33±8.92 | 3266.28±318.03 | 12626.18±54.71 | 5254.54±165.19 | 4612.8±192.06 | 5089.88±621.07 |
| RANK-PAL (ours) | 202.24±1.80 | 3082.98±582.59 | 12259.06±206.82 | 5225.49±42.02 | 5862.42±47.68 | 5393.45±291.16 |
| RANK-RAL (ours) | 203.20±4.65 | 3512.67±21.09 | 13204.49±721.77 | 5189.51±71.27 | 5520.14±116.77 | 5262.96±337.44 |
| Expert | 204.6±0 | 3535.88±0 | 13051.46±0 | 5456.91±0 | 5926.17±0 | 5565.53±0 |
| (|S|, |A|) | (8, 2) | (11, 3) | (17, 6) | (17, 6) | (111, 8) | (376, 17) |

Table 4: Asymptotic (unnormalized) performance of LfO methods at 2 million timesteps on MuJoCo locomotion tasks. The results in this table also include evaluations for the IMIT-{PAL, RAL} methods.
All the approaches only have access to expert state trajectories.

| Env | Swimmer | Hopper | HalfCheetah | Walker | Ant | Humanoid |
|---|---|---|---|---|---|---|
| BCO | 102.76±0.90 | 20.10±2.15 | 5.12±3.82 | 4.00±1.25 | 12.80±1.26 | 3.90±1.24 |
| GAIfO | 99.04±1.61 | 81.13±9.99 | 13.54±7.24 | 83.83±2.55 | 20.10±24.41 | 3.93±1.81 |
| DACfO | 95.09±6.14 | 94.73±3.63 | 85.03±5.09 | 54.70±44.64 | 86.45±1.67 | 19.31±32.19 |
| f-IRL | 103.89±2.37 | 97.45±0.61 | 96.06±4.63 | 101.16±1.25 | 71.18±19.80 | 77.93±6.37 |
| OPOLO | 98.64±0.14 | 89.56±5.46 | 88.92±3.20 | 79.19±24.35 | 93.37±3.78 | 24.87±17.04 |
| IMIT-PAL (ours) | 105.93±3.12 | 86.47±7.66 | 90.65±15.17 | 75.60±1.90 | 82.40±9.05 | 94.49±3.21 |
| IMIT-RAL (ours) | 100.35±3.6 | 92.34±8.63 | 96.80±2.45 | 94.41±2.94 | 78.06±4.24 | 91.27±9.33 |
| RANK-PAL (ours) | 98.83±0.09 | 87.14±16.14 | 94.05±3.59 | 93.88±0.72 | 98.93±1.83 | 96.84±3.28 |
| RANK-RAL (ours) | 99.31±1.50 | 99.34±0.20 | 101.14±7.45 | 93.24±1.25 | 93.21±2.98 | 94.45±4.13 |
| Expert | 100.00±0 | 100.00±0 | 100.00±0 | 100.00±0 | 100.00±0 | 100.00±0 |
| (|S|, |A|) | (8, 2) | (11, 3) | (17, 6) | (17, 6) | (111, 8) | (376, 17) |

Table 3: Normalized asymptotic performance of LfO methods at 2 million timesteps on MuJoCo locomotion tasks. The results in this table also include evaluations for the IMIT-{PAL, RAL} methods.

We use the authors' open-source implementations of the baselines OPOLO, DACfO, GAIfO, and BCO, available at https://github.com/illidanlab/opolo-code, with the author-provided hyperparameters (similar to those used in Zhu et al. (2020b)) for all MuJoCo locomotion environments. For f-IRL, we use the author implementation available at https://github.com/twni2016/f-IRL with the author-provided hyperparameters. IQ-Learn was tested on our expert dataset following the authors' implementation found at https://github.com/Div99/IQ-Learn; we tested two IQ-Learn loss variants, 'v0' and 'value', as found in their hyperparameter configurations, and took the best of the two runs.

**Policy Optimization:** We implement RANK-PAL and RANK-RAL with policy learning using SAC (Haarnoja et al., 2018). We build upon the SAC code of Achiam (2018) (https://github.com/openai/spinningup) without changing any hyperparameters.

**Reward Learning:** For reward learning, we use an MLP parameterized by two hidden layers of 64 dimensions each. Furthermore, we clip the outputs of the reward network to the range [−10, 10] to keep the rewards bounded, while also adding an L2 regularization of 0.01. We add absorbing states to early-terminated agent trajectories following Kostrikov et al. (2019). For training the ranking loss until convergence in both update strategies (PAL and RAL), we used evaluation on a holdout set of 0.1× the total dataset size as a proxy for convergence.

![29_image_0.png](29_image_0.png)

Figure 8: We evaluate rank-game over environments including Hopper-v2, Ant-v2, Humanoid-v2, Door, and Pen-v0.

**Data sharing between players:** We rely on data sharing between players to utilize the same collected transitions for both players' gradient updates. The reward learning objective in RANK-PAL and RANK-RAL requires rolling out the current policy. This makes using an off-policy routine for training the policy player quite inefficient, since off-policy model-free algorithms update a policy frequently even while executing a trajectory. To remedy this, we reuse the data collected with the mixture of policies obtained during the previous off-policy policy learning step for training the reward player. This allows us to reuse the same data for policy learning as well as reward learning at each iteration.
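A sketch of a reward network matching the specification above (two hidden layers of 64 units, outputs clipped to [−10, 10]); the hidden activation is our assumption, since the text does not name one, and the L2 term is applied here through the optimizer's weight decay.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Bounded reward model: MLP with two 64-unit hidden layers, output
    clamped to [-10, 10] as described in Appendix C (ReLU is assumed)."""
    def __init__(self, input_dim, r_max=10.0):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
        self.r_max = r_max

    def forward(self, x):
        return self.body(x).squeeze(-1).clamp(-self.r_max, self.r_max)

net = RewardNet(input_dim=11)  # e.g., Hopper observations
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)  # per Table 5
```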
**Ranking loss for reward shaping via offline annotated rankings:** In practice, for the (pref) setting (Section 4.2), to increase supervision and prevent overfitting, we augment the offline dataset by regressing the snippets (of length l) of each offline trajectory τ^i for behavior ρ^i to k_i · l, in addition to regressing the reward for each state to k_i. The snippets are generated as contiguous subsequences of the trajectory, similar to Brown et al. (2019).

## C.1 Hyperparameters

Hyperparameters for the RANK-{PAL, RAL} (vanilla, auto, and pref) methods are shown in Table 5. For RANK-PAL, we found the following hyperparameters to give the best results: n_pol = H and n_rew = ('validation' or H/b), where H is the environment horizon (usually 1000 for MuJoCo locomotion tasks) and b is the batch size used for the reward update. For RANK-RAL, we found n_pol = H and n_rew = ('validation' or |D|/b), where |D| indicates the cumulative size of the ranking dataset. We found that scaling reward updates proportionally to the size of the dataset also performs well and is a computationally effective alternative to training the reward until convergence (see Section D.7).

| Hyperparameter | Value |
|---|---|
| Policy updates n_pol | H |
| Reward batch size (b) | 1024 |
| Reward gradient updates n_rew | val or |D|/1024 |
| Reward learning rate | 1e-3 |
| Reward clamp range | [-10, 10] |
| Reward l2 weight decay | 0.0001 |
| Number of interpolations [auto] | 5 |
| Reward shaping parameterization [auto] | exp-[-1] |
| Offline rankings loss weight (λ) [pref] | 0.3 |
| Snippet length l [pref] | 10 |

Table 5: Common hyperparameters for the RANK-GAME algorithms. Square brackets in the left column indicate hyperparameters that are specific to the 'auto' and 'pref' methods.

## D Additional Experiments

## D.1 Complete Evaluation Of LfO With Rank-Game (Auto)

Figure 9 shows a comparison of RANK-PAL (auto) and RANK-RAL (auto) in the LfO setting on the MuJoCo benchmark tasks: Swimmer-v2, Hopper-v2, HalfCheetah-v2, Walker2d-v2, Ant-v2, and Humanoid-v2. This section provides complete results for Section 5.1 of the main paper.

![30_image_0.png](30_image_0.png)

Figure 9: Comparison of performance on OpenAI gym benchmark tasks. The shaded region represents the standard deviation across 5 random runs. RANK-PAL and RANK-RAL substantially outperform the baselines in sample efficiency. The dotted blue line shows the expert's performance.

## D.2 Evaluation Of LfD With Rank-Game (Auto)

rank-game is a general framework for both LfD (with expert states and actions) and LfO (with only expert states/observations). We compare the performance of rank-game against the LfD baselines IQ-Learn (Garg et al., 2021), DAC (Kostrikov et al., 2018), and BC (Pomerleau, 1991). In Figure 10, we observe that rank-game is among the most sample-efficient methods for learning from demonstrations. IQ-Learn shows poor learning performance on some tasks, which we suspect is due to the low number of expert trajectories we use in our experiments compared to the original work. DAC was tuned using the guidelines from Orsini et al. (2021) to ensure a fair comparison.

## D.3 Utility Of Automatically Generated Rankings In Rank-Game (Auto)

In this experiment, we investigate how much the automatically generated rankings actually help. To do so, we keep all the hyperparameters the same and compare RANK-GAME (vanilla) with RANK-GAME (auto). RANK-GAME (vanilla) uses no additional ranking information, and L_k is used as the reward loss.
![31_image_0.png](31_image_0.png)

Figure 10: Comparison of rank-game methods with baselines in the LfD setting (expert actions are available). RANK-{PAL, RAL} are competitive with state-of-the-art methods.

![31_image_1.png](31_image_1.png)

Figure 11: RANK-PAL (vanilla) has high-variance learning curves with lower sample efficiency compared to RANK-PAL (auto).

Figure 11 shows that RANK-PAL (auto) has lower variance throughout training (it is more stable) and is more sample efficient than RANK-PAL (vanilla).

## D.4 Comparison Of IMIT-GAME And RANK-GAME Methods

Imitation learning algorithms, particularly adversarial methods, have a number of implementation components that can affect learning performance. In this experiment, we aim to further reduce any implementation/hyperparameter gap between adversarial imitation learning (AIL) methods that are based on the *supremum* loss function (described in Section 3) and rank-game, in order to bring out the obtained algorithmic improvements. To achieve this, we swap out the regression-based ranking loss L_k for a *supremum* loss and call the resulting methods IMIT-{PAL, RAL}. This keeps all the other hyperparameters, such as batch size, reward clipping, policy and reward learning iterations, and optimizer iterations, constant across experiments. We present a comparison of RANK-{PAL, RAL} and IMIT-{PAL, RAL} in terms of asymptotic performance in Table 3 and in terms of sample efficiency in Figure 12. Note that Table 3 shows normalized returns that are mean-shifted and scaled to [0, 100] using the performance of a uniform random policy and the expert policy. The expert returns are given in Table 4, and we use the following performance values of random policies for normalization: {Hopper = 13.828, HalfCheetah = −271.93, Walker = 1.53, Ant = −62.01, Humanoid = 112.19}. Table 4 shows the unnormalized asymptotic performance of the different methods.

In terms of sample efficiency, we notice that the IMIT-{PAL, RAL} methods compare favorably to other regularized supremum-loss counterparts like GAIL and DAC, but are outperformed by the RANK-{PAL, RAL} (auto) methods. We hypothesize that the better learning efficiency of L_k compared to the *supremum* loss stems from regression to fixed targets being a simpler optimization problem than maximizing the expected performance gap under two visitation distributions.
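The normalization used in Table 3 is a one-line computation; the sketch below applies it to the Hopper column using the random-policy and expert values quoted above.

```python
def normalize_return(score, random_score, expert_score):
    """Mean-shift and scale a raw return so that random -> 0 and expert -> 100,
    as used for the normalized results in Table 3."""
    return 100.0 * (score - random_score) / (expert_score - random_score)

# Hopper: random = 13.828, expert = 3535.88 (values quoted in D.4 and Table 4).
print(round(normalize_return(3512.67, 13.828, 3535.88), 2))  # RANK-RAL: 99.34
```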
We use the following interpolation scheme: exponential with temperature=−1 for our experiments in the main paper. ![33_image_0.png](33_image_0.png) ![33_image_2.png](33_image_2.png) Figure 13: The two left-most plots show the effect of reward shaping in RANK-PAL (auto) methods using linear and exponential shaping functions. The two right-most plots show the same effect of reward shaping in RANK-RAL (auto) methods. Reward shaping instantiations which induce a higher performance gap between pairs of interpolants closer to the agent perform better and RAL is more robust to reward shaping variants than PAL. D.6 On the rank preserving nature of SLk ![33_image_1.png](33_image_1.png) Figure 14: Increasing the state size of the domain increases the rank consistency afforded by SLk and increasing the number of rankings decreases the rank consistency. The ranking loss SLk (Appendix B, Eq 91) regresses the ρ i, ρ j and each of the intermediate interpolants (ρ i = ρ ij λ0 , ρ ij λ1 , ..., ρ ij λP , ρj = ρ ij λP +1 ) to fixed scalar returns (k0, k1*, ..., k*P +1) where k0 ≤ k1 ≤ ... ≤ kp+1 = k. The ranking loss SLk is given by: $$SL_{k}({\cal D};R)=\frac{1}{p+2}\sum_{i=0}^{P+1}\mathbb{E}_{s\sim\rho_{\lambda_{i}}^{\prime\prime}(s,a)}\big{[}\left(R(s,a)-k_{i}\right)^{2}\big{]}\tag{93}$$ SLk provides a dense reward assignment for the reward agent but does not guarantee that minimizing SLk would lead to the performance ordering between rankings, i.e Eρ1 [f(s)] < Eρ2 [f(s)] < Eρ3 [f(s)] *< .. <* EρP +1 [f(s)]. An ideal loss function for this task regresses the expected return under each behavior to scalar values indicative of ranking, but needs to solve a complex credit assignment problem. Formally, we can write the ideal loss function for reward agent as follows $$SL^{ideal}_{k}({\cal D};R)=\frac{1}{p+2}\sum_{i=0}^{P+1}\left[\mathbb{E}_{s\sim\rho^{ij}_{\lambda_{i}}(s,a)}[R(s,a)]-k_{i}\right]^{2}\tag{94}$$ We note that the SLk upper bounds SLideal kusing Jensen's inequality and thus is a reasonable target for optimization. In this section we wish to further understand if SLk has a rank-preserving policy. SLk is a family of loss function for ranking that assigns a scalar reward value for each states of a particular state visitation corresponding to its ranking. Ideally, given a ranking between behaviors ρ 0 ⪯ ρ 1 ⪯ ρ 2... ⪯ ρ P +1 we aim to learn a reward function f that satisfies Eρ0 [f(s)] < Eρ1 [f(s)] < Eρ2 [f(s)] *< .. <* EρP +1 [f(s)]. We empirically test the ability of the ranking loss function SLk to facilitate the desired behavior in performance ranking. We consider a finite state space S and number of rankings P. We uniformly sample P + 1 possible state visitations and the intermediate regression targets {ki} n i=1 *s.t k*i ≤ ki+1. To evaluate the rank-preserving ability of our proposed loss function we study the fraction of comparisons the optimization solution that minimizes SLk is able to get correct. Note that P + 1 sequential ranking induces P(P + 1)/2 comparisons. Figure 14 shows that with large state spaces SLk is almost rank preserving and the rank preserving ability degrades with increasing number of rankings to be satisfied. ## D.7 Stackelberg Game Design We consider the sensitivity of the two-player game with respect to policy update iterations and reward update iterations. Our results (Figure 15) draw analogous conclusions to Rajeswaran et al. 
## D.7 Stackelberg Game Design

We consider the sensitivity of the two-player game with respect to policy update iterations and reward update iterations. Our results (Figure 15) draw conclusions analogous to Rajeswaran et al. (2020), where we find that using a validation loss for training the reward function on on-policy and aggregate datasets in PAL and RAL, respectively, works best. Despite its good performance, validation-loss-based training can be wall-clock inefficient. We found a substitute method that performs similarly while giving improvements in wall-clock time: make the number of reward learning iterations scale proportionally to the dataset size. A proportionality constant of (1/batch-size) worked as well as the validation loss in practice. Contrary to Rajeswaran et al. (2020), where the policy is updated by obtaining policy visitation samples from the learned model, our ability to increase the policy updates is hindered by the unavailability of a learned model and requires costly real-environment interactions. We tune the policy iteration parameter (Figure 16) and observe that increasing the number of policy updates can hinder learning performance.

![34_image_0.png](34_image_0.png)

Figure 15: The left two plots use the PAL strategy and the right two plots use the RAL strategy. Reward learning using a validation loss on a holdout set leads to improved learning performance compared to hand-designed reward learning iterations.

![34_image_1.png](34_image_1.png)

Figure 16: A small number of policy updates is useful for good learning performance in the PAL setting.

## D.8 Sensitivity Of Reward Range For The Ranking Loss Lk

In Section 4.2, we discussed how the scale of the learned reward function can have an effect on learning performance. We validate this hypothesis here, where we set Rmax = k and test the learning performance of RANK-PAL (auto) for various values of k. Our results in Figure 17 show that the hyperparameter k has a large effect on learning performance; intermediate values of k work well, with k = 10 performing the best.

![35_image_0.png](35_image_0.png)

Figure 17: Intermediate values of k work best in practice.

## D.9 Effect Of Regularizer For Rank-Game

rank-game (auto) incorporates automatically generated rankings, which can be understood as a form of regularization, particularly mixup (Zhang et al., 2017) in trajectory space. In this experiment, we work in the PAL setting with ranking loss Lk and compare the performance of other regularizers, namely weight decay (wd), spectral normalization (sn), and state-based mixup, to (auto). Contrary to trajectory-based mixup (auto), where we interpolate trajectories, in state-based mixup we sample states randomly from the behaviors which are pairwise ranked and interpolate between them; both variants are sketched below.

![35_image_1.png](35_image_1.png)

Figure 18: (auto) regularization outperforms other forms of regularization in rank-game.

Figure 18 shows that learning with the (auto) regularizer is more efficient and stable compared to other regularizers.
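As a rough sketch of the two mixup variants compared above (the random pairing of samples, equal trajectory lengths, and array shapes are simplifying assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_mixup(traj_low, traj_high, lam):
    """(auto) regularizer: interpolate two pairwise-ranked trajectories
    element-wise (arrays of shape [horizon, state_dim])."""
    return lam * traj_high + (1.0 - lam) * traj_low

def state_mixup(states_low, states_high, lam, batch=64):
    """State-based mixup baseline: interpolate randomly paired states
    sampled from the two ranked behaviors."""
    i = rng.integers(len(states_low), size=batch)
    j = rng.integers(len(states_high), size=batch)
    return lam * states_high[j] + (1.0 - lam) * states_low[i]
```

In both cases, the interpolant is ranked between the two behaviors it was generated from and receives a correspondingly interpolated regression target.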
## D.10 Ablation Analysis Summary

We have ablated the following components of our method: automatically generated rankings (D.3), the ranking loss (D.4), parameterized reward shaping (D.5), the Stackelberg game design (D.7), and the range of the bounded reward (D.9). Our analysis above (Figures 12, 17 and 15) shows quantitatively that the key improvements over baselines are driven by using the proposed ranking loss, controlling the reward range, and the reward/policy update frequency in the Stackelberg framework. Parameterized reward shaping (best hyperparameter: exponential with temperature −1, compared to unshaped/linear shaping) and automatically generated rankings contribute relatively small improvements. We note that a *single hyperparameter* combination (Table 5) works well across all tasks, demonstrating the robustness of the method to environment changes.

## D.11 Varying Number Of Expert Trajectories For Imitation Learning

In the main text, we considered experiment settings where the agent is provided with only 1 expert trajectory. In this section, we test how our method performs compared to baselines as we increase the number of available expert observation trajectories. We note that these experiments are in the LfO setting. Figure 20 shows that RANK-GAME compares favorably to other methods for varying numbers of expert demonstration/observation trajectories.

![36_image_0.png](36_image_0.png)

## D.12 Robustness To Noisy Preferences

In this section, we investigate the effect of noisy preferences on imitation learning. We consider the setting of Section 5.2, where we attempt to solve hard exploration problems in the LfO setting by leveraging trajectory snippet comparisons. In this experiment, we consider a setting similar to Brown et al. (2019), where we inject varying levels of noise, i.e. we flip x% of the trajectory snippet comparisons at random. Figure 19 shows that RANK-PAL (pref) is robust in learning near-expert behavior up to 60 percent noise in the Door environment. We hypothesize that this robustness to noise is possible because the preferences are only used to shape the reward function and do not change the optimality of the expert.

Figure 19: We investigate learning from expert observations + offline preferences where the offline preferences are noisy (panels: noise = 0.0, 0.2, 0.4, 0.6, 0.8). RANK-PAL shows considerable robustness to noisy preferences.

## D.13 Learning Purely From Offline Rankings In Manipulation Environments

![36_image_1.png](36_image_1.png)

Figure 21: Testing with 10, 20 and 50 suboptimal preferences uniformly sampled from a replay buffer of SAC trained from a pre-specified reward, we see that T-REX is not able to solve these tasks. The black dotted line shows the asymptotic performance of the RANK-PAL (auto) method.

In Section 5.2, we saw that offline annotated preferences can help solve complex manipulation tasks via imitation. Now, we compare with the ability of a prior method, T-REX (Brown et al., 2019), that learns purely from suboptimal preferences, under increasing numbers of preferences. We test on two manipulation tasks, Pen-v0 and Door-v0, given varying numbers of suboptimal preferences: 10, 20, 50. These preferences are uniformly sampled from a replay buffer of SAC trained until convergence under a pre-specified reward, obtained via D4RL (licensed under CC BY). We observe in Figure 21 that T-REX is unable to solve these tasks under any selected number of suboptimal preferences.

![37_image_0.png](37_image_0.png) ![37_image_1.png](37_image_1.png)

Figure 20: Performance analysis of different algorithms in the LfO setting with varying numbers of expert trajectories. RANK-PAL (auto) compares favorably to other methods.
Review 1:
Summary:
The paper introduces a ranking framework for imitation learning. The framework consists of a policy agent that attempts to maximize rewards estimated by another, ranking-based reward agent.

Strengths and Weaknesses:
# Strengths
* The paper demonstrates good empirical results on several datasets.
* Ranking and incorporating preferences is a topic of interest to the deep reinforcement learning community.

# Weaknesses
* The paper has limited novelty. The method can be seen as an instantiation of GAILfO combined with the LSGAN loss function.
* The writing needs to be improved. The paper is hard to follow. In particular, it is unclear whether the authors consider LfD as an imitation learning or a reinforcement learning setup. Several previous papers on learning from demonstrations (for example, DQfD and DDPGfD) considered the setting where the rewards are available.
* From Algorithm 1, it is not clear how step 5 is performed. Do you use SAC directly to learn a policy based on these rewards? Do you perform any modifications to SAC?
* The paper doesn't describe the datasets used for evaluation, and information about the baseline implementations is missing.

Requested Changes:
* Update Algorithm 1 to clarify the policy update step.
* Update the paper with information about baselines and datasets.

Broader Impact Concerns: I have no broader impact concerns.

==================================================

Review 2:
Summary:
The primary contribution of the paper is the `rank-game` framework that merges different imitation learning settings (learning from expert data and learning from preferences) by formulating it as a game between two players: a policy agent and a reward agent. The policy agent is the typical RL policy optimization agent that aims to maximize return under some reward function, whereas the reward agent seeks to find a reward function that satisfies the relative pairwise rankings of trajectories (or state-action distributions) as provided in the dataset. These rankings can be relative to the expert trajectories (the 'auto' methodology) or offline human-labelled trajectory rankings (the 'pref' approach). The secondary contribution is a new ranking loss, the $L_k$ loss, that seems to work in this new setting of using data augmentation for efficient policy optimization. The final optimization is solved using the methodology of the Stackelberg game framework. The combination of the above two ideas leads to impressive results on a variety of synthetic MuJoCo domains and complex dexterous tasks.

Strengths and Weaknesses:
### Strengths:
* New perspective: The idea of using both the expert data and preferences together for increasing the efficiency of learning techniques seems reasonable and useful. The proposed unified perspective of understanding various imitation learning algorithms from a game-theoretic perspective might be interesting and inspiring to the community.
* Impressive results: The paper provides strong empirical studies, including hyper-parameter and ablation studies, that support the claims regarding the practical algorithm. The results show major improvements compared to existing methods. The authors have shown that for their setting, where both types of data modalities are present, the proposed approach is indeed promising (Sec 5.2).
* I'm not an expert in imitation learning, but it seems that the authors have conducted a proper discussion of the related work.

### Weaknesses:
The biggest weakness of the work is the lack of clarity in Section 4.2.
While the motivation of the proposed loss is that it facilitates better incorporation of rankings over suboptimal trajectories, can you highlight why this is a principled loss? I appreciate the theoretical result showing that the performance relative to the expert is bounded, but it seems like the upper bound is quite loose. The first term in the result is the maximum attainable return under the expert reward function $\frac{R^E_{max}}{1-\gamma}$, whereas the multiplicative term mostly depends on the hyper-parameter $k$ and on other quantities that are not defined before Theorem 4.1. What is the generalization error, and when will it be bounded by the given quantity in Theorem 4.1? It looks like the user should set $k \rightarrow \infty$ to reduce the gap; however, from the experiments (Fig. 17) it seems like intermediate $k$ works better than large $k$. Moreover, does the equilibrium always exist? Unfortunately, the theoretical result in its present form makes things rather confusing. I understand that a lot of these details might already be present in Appendix A; however, Sec 4.2 should be able to convey these results in a standalone manner without expecting the reader to go through Appendices A and D.

Requested Changes:
### Recommended changes:
- Notational inconsistencies: Sometimes the same notation is used for multiple quantities. For instance, the reward function of the MDP $R$ is also used for an unknown expert reward function (in the classical imitation learning section), and later as a ranking function in Sec. 4.2.1. I believe addressing such notational inconsistencies will make the paper clearer.
- Implications of Theorem 4.1: See my comment in the weakness section. In particular, when do the assumptions hold? How does this statement change when using other kinds of loss functions? This does not necessarily need to be studied analytically; rather, even empirical evidence on some toy domains would suffice.

### Clarification questions:
- In Eq (2), should there be $\frac{1}{1-\gamma}$ on the RHS based on the definition of $J(\pi, R)$?
- Eq 4: why is $R_{max}$ in the square root term? Should it be $R^E_{max}$? Is this linked to the generalization error mentioned in Theorem 4.1's statement?
- In Fig. 3, is the cost for generating the dataset for auto also included in the figure? The algorithm needs to interact with the environment to generate the dataset, so that should be included in the sample efficiency.

### Optional suggestions:
- More motivation on the setting would make the paper more accessible to readers who are not experts in the imitation learning domain.

Broader Impact Concerns: The authors have acknowledged the limitations of their approach and the potential side effects stemming from a lack of guarantees for safety and robustness.

==================================================

Review 3:
Summary:
This paper proposed to formulate imitation learning as a two-player ranking game (rank-game). The reward player aims to find a scalar-valued reward function that produces a separation of value $k$ between samples from any ordered pair $(\rho^{\pi^i}, \rho^{\pi^j})$ of behaviors. The policy player aims to maximize that reward. This viewpoint addresses the limitation of previous inverse reinforcement learning approaches by enabling the use of offline preferences and suboptimal behaviors in addition to expert demonstrations. The paper derives a bound on the sub-optimality of the rank-game equilibrium policy in comparison to the expert policy.
Experiments on MuJoCo benchmarks show that rank-game outperforms baselines in nearly all tasks, and experiments on sparse-reward door/pen manipulation simulations show that only rank-game, with the help of offline rankings, can solve the tasks, while previous methods cannot.

Strengths and Weaknesses:
Strengths
1. The proposed ranking viewpoint provides the advantage of enabling the use of suboptimal behaviors as well as expert data for IRL.
2. The experimental results show that rank-game significantly outperforms baselines on a diverse set of standard benchmarks (but see the comment below about fair comparison to T-REX).
3. The comparison between the policy-as-leader and reward-as-leader versions of rank-game on the non-stationary task is also helpful in showing the practical differences between both (but see the question below on RAL).

Weaknesses
1. The main apparent weakness, which appears to be a crucial flaw, is that the ranking loss (eq. 3) is confusing and inconsistent. Firstly, suppose that a pair $(s,a)$ is sampled under both $\rho^{\pi^i}$ and $\rho^{\pi^j}$ when $(\rho^{\pi^i}, \rho^{\pi^j})$ appear as a ranked pair. In that case, the loss function trains the reward $R$ to map $(s,a)$ to $0$ and to $k$. That is inconsistent with the very definition of a function. Secondly, suppose we have $\pi^i \preceq \pi^j$ and $\pi^k \preceq \pi^i$. Because $\pi^i \preceq \pi^j$, samples $(s,a) \sim \pi^i$ should have reward value zero, but because $\pi^k \preceq \pi^i$, then $(s,a) \sim \pi^i$ should now have reward value $k$. There is no reward function that can produce both value $k$ and value $0$ for the same $(s,a) \sim \pi^i$, unless $R$ is given information about the other policy in the pair in addition to the $(s,a)$---i.e., to map some arbitrary $(s,a)$ to either $k$ or $0$ correctly, $R$ needs to know whether it came from a distribution that is ranked lower or higher in the pair $(\rho^i, \rho^j)$ that was sampled. But since that information is not provided to $R$, it is hard to see how the loss function in (eq. 3) is a consistent evaluation of $R$.
2. This paper claims that "existing IRL methods that learn from expert demonstrations provide no mechanisms to incorporate offline preferences and vice versa." However, it appears there is previous work, e.g. T-REX [1], that uses preferences and can accommodate the use of expert demonstrations, simply by assigning highest preference to expert demonstrations. As such, the paper's characterization of T-REX [1] in Table 2, which says T-REX is neither LfD nor LfO, appears to be incorrect. Quoting from Brown et al., T-REX is a ``reward-learning-from-observation algorithm'' that can use a set of ranked demonstrations to learn a reward function. From Section 5.2, it appears that this paper did not provide expert demonstrations to T-REX, which appears to be an unfair comparison.
3. In the second paragraph, the paper claims to address the issue of the difficulty of solving an adversarial min-max game in a sample-efficient way. Even though the proposed rank-game is presented as a Stackelberg game, the actual optimization procedure is still the same as in the case of a min-max game, whereby the policy and reward are optimized in an alternating and iterative manner. It is not clear, theoretically and conceptually, how the ranking game viewpoint leads to an easier optimization problem or more sample-efficient learning (notwithstanding the fact that experiments show that rank-game is more sample efficient in practice).
[1] Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. Brown et al. 2019.

Requested Changes:
(Critical) Address the main apparent weakness above.
(Critical) The paper writes "The ranking loss used by the prior IRL approaches is specific to the comparison of optimal (expert) vs. suboptimal (agent) data, and precludes incorporation of comparisons among suboptimal behaviors". However, T-REX [1] explicitly allows the use of ranked suboptimal demonstrations. This paper's statements need to be revised, or it should be explained more convincingly why previous methods like T-REX have the limitations that this paper claims they do. Specifically, why can't T-REX simply assign highest preference to expert behavior?
(Critical) In Table 2, f-IRL on Walker has the highest average reward and small variance. However, in Figure 3, f-IRL is shown to have mediocre average reward and extremely large variance. This is inconsistent.
(Minor) In Section 4.2, the paper should define what it means for $\rho^{\pi} \preceq \rho^{\mu}$. Previously, the notation $\preceq$ was only defined when there is a reward $f_{\pi}$ that induced the ranking $\rho^E \succeq \rho^{\pi} := E_{\rho^E}[f_{\pi}(s,a)] \geq E_{\rho^{\pi}}[f_{\pi}(s,a)]$. But in the absence of a reward $f_{\pi}$, it is not clear what $\rho^{\pi} \preceq \rho^{\mu}$ means. Perhaps the paper means the rankings are predefined by the dataset D? This can easily be clarified.
(Minor) The reference to Theorem 1 at the top of page 6 should be to Theorem 4.1?
(Minor) In Section 4.2.1, the paper should clarify what a convex combination of two trajectories $\tau_i$ and $\tau_j$ means, where $\tau_i$ is a trajectory of states and actions. E.g., in discrete state-action spaces, what does a convex combination of two discrete objects mean?
(Minor) There needs to be more motivation for the choice "RAL updates the reward conservatively. This is achieved through aggregating the dataset of implicit rankings from all previous policies obtained during training". By training on data from all previous policies, RAL of course cannot adapt as fast as PAL in non-stationary environments. Why should one not simply let RAL train on the rankings generated by the current policy?

Broader Impact Concerns: N/A

==================================================

Metareview:
Recommendation: Accept as is
Comment: The reviews are largely in consensus for acceptance, and the authors also provided satisfying rebuttals. Reviewer UMKz expressed "The paper has limited novelty. The method can be seen as an instantiation of GAILfO combined with the LSGAN loss function". While the ranking loss is a simple, common objective, LSGAN does not constitute a major conflict with this work, as it deals with supervised learning. Therefore, this does not invalidate the novelty of the work. The focused discussion and experimentation of IL + ranking in the sequential decision-making setting still make the work valuable for the audience.

Minor comments:
- Improve writing clarity: Table 1, Section 4.1, Eq 1, etc.: write down the exact mathematical formula for each ranking loss in one place.
- Additional experiments: auto-generated preference orders through mixup likely do not scale to high-dimensional states / actions (images / texts). Additional experiments to show that the method scales would be helpful.

==================================================
# Prior And Posterior Networks: A Survey On Evidential Deep Learning Methods For Uncertainty Estimation

Dennis Ulmer☼,ã *dennis.ulmer@mailbox.org*
Christian Hardmeier☼,ã *chrha@itu.dk*
Jes FrellsenÆ,ã *jefr@dtu.dk*

☼IT University of Copenhagen, ÆTechnical University of Denmark, ãPioneer Centre for Artificial Intelligence

Reviewed on OpenReview: https://openreview.net/forum?id=1HVpTXwZxK

## Abstract

Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov chain sampling, ensembling, or Monte Carlo dropout. These techniques usually incur overhead by having to train multiple model instances or do not produce very diverse predictions. This comprehensive and extensive survey aims to familiarize the reader with an alternative class of models based on the concept of *Evidential Deep Learning*: For unfamiliar data, they aim to admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing works, focusing on the implementation in a classification setting, before surveying the application of the same paradigm to regression. We also reflect on the strengths and weaknesses compared to other existing methods and provide the most fundamental derivations using a unified notation to aid future research.

## 1 Introduction

Many existing methods for uncertainty estimation leverage the concept of Bayesian model averaging: These include ensembling (Lakshminarayanan et al., 2017; Wilson & Izmailov, 2020), Markov chain Monte Carlo sampling (de Freitas, 2003; Andrieu et al., 2000) as well as variational inference approaches (Mackay, 1992; MacKay, 1995; Hinton & Van Camp, 1993; Neal, 2012), including approaches such as Monte Carlo (MC) dropout (Gal & Ghahramani, 2016) and Bayes-by-backprop (Blundell et al., 2015). Bayesian model averaging for neural networks usually involves the approximation of an otherwise infeasible integral using MC samples. This causes the following problems: Firstly, the quality of the MC approximation depends on the veracity and diversity of samples from the weight posterior. Secondly, the approach often involves increasing the number of parameters in a model or training more model instances altogether. Recently, a new class of models has been proposed to side-step this conundrum by using a different factorization of the posterior predictive distribution. This allows computing uncertainty in a single forward pass and with a single set of weights.

![0_image_0.png](0_image_0.png)

Figure 1: Taxonomy of surveyed approaches, divided into tractable parameterizations of the prior or posterior on one axis (see Tables 1 and 2 for an overview) and into approaches for classification and regression on the other. Regression methods are outlined in Table 3.

These models are grounded in a concept coined *Evidential Deep Learning*: For out-of-distribution (OOD) inputs, they are encouraged to fall back onto a prior. This is often described as *knowing what they don't know*. In this paper, we summarize the existing literature and provide an overview of Evidential Deep Learning approaches. We give an overview of all discussed works in Figure 1, where we distinguish surveyed works for classification between models parameterizing a Dirichlet prior (Section 3.4.1) or posterior (Section 3.4.2).
We further discuss similar methods for regression problems (Section 4). As we will see, obtaining well-behaved uncertainty estimates can be challenging in the Evidential Deep Learning framework; proposed solutions that are also reflected in Figure 1 are the usage of OOD examples during training (Malinin & Gales, 2018; 2019; Nandy et al., 2020; Shen et al., 2020; Chen et al., 2018; Zhao et al., 2019; Hu et al., 2021; Sensoy et al., 2020), knowledge distillation (Malinin et al., 2020b;a) or the incorporation of density estimation (Charpentier et al., 2020; 2022; Stadler et al., 2021), which we discuss in more detail in Section 6. This survey aims to serve both as an accessible introduction to this model family for the unfamiliar reader and as an informative overview, in order to promote a wider application outside the uncertainty quantification literature. We also provide a collection of the most important derivations for the Dirichlet distribution for Machine Learning, which plays a central role in many of the discussed approaches.

## 2 Background

We first introduce the central concepts for this survey, including Bayesian inference in Section 2.1, Bayesian model averaging in Section 2.2 and Evidential Deep Learning in Section 2.3.¹

¹Note that in the following we will use the suggested notation of the TMLR journal, e.g. by using P for probability mass and p for probability density functions.

## 2.1 Bayesian Inference

The foundation of the following sections is Bayesian inference: Given some prior belief p(θ) about parameters of interest θ, we use available observations $\mathbb{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^N$ and their likelihood p(D|θ) to obtain an updated belief in form of the posterior p(θ|D) ∝ p(D|θ)p(θ). This update rule is derived from Bayes' rule, namely

$$p(\boldsymbol{\theta}|\mathbb{D})={\frac{p(\mathbb{D}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{p(\mathbb{D})}}={\frac{p(\mathbb{D}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{\int p(\mathbb{D}|\boldsymbol{\theta})p(\boldsymbol{\theta})d\boldsymbol{\theta}}},\tag{1}$$

where we often try to avoid computing the term in the denominator, since marginalization over a large (continuous) parameter space of θ is usually intractable. In order to perform a prediction y for a new data point x, we can now utilize the *posterior predictive distribution* defined as

$$P(y|\mathbf{x},\mathbb{D})=\int P(y|\mathbf{x},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathbb{D})d\boldsymbol{\theta}.\tag{2}$$

Since we integrate over the entire parameter space of θ, weighting each prediction by the posterior probability of its parameters to obtain the final result, this process is referred to as *Bayesian model averaging* (BMA). Here, predictions P(y|x, θ) stemming from parameters that are plausible given the observed data will receive a higher weight p(θ|D) in the final prediction P(y|x, D). As we will see in the following section, this factorization of the posterior predictive distribution also has beneficial properties for analyzing the uncertainty of a model.

## 2.2 Predictive Uncertainty In Neural Networks

In probabilistic modelling, uncertainty is commonly divided into aleatoric and epistemic uncertainty (Der Kiureghian & Ditlevsen, 2009; Kendall & Gal, 2017; Hüllermeier & Waegeman, 2021). Aleatoric uncertainty refers to the uncertainty that is induced by the data-generating process, for instance noise or inherent overlap between observed instances of classes. Epistemic uncertainty is the uncertainty about the optimal model parameters (or even hypothesis class). It is reducible with an increasing amount of data, as fewer and fewer possible models become a plausible fit.
These two notions resurface when formulating the posterior predictive distribution for a new data point x:²

$$P(y|\mathbf{x},\mathbb{D})=\int\underbrace{P(y|\mathbf{x},\boldsymbol{\theta})}_{\text{Aleatoric}}\underbrace{p(\boldsymbol{\theta}|\mathbb{D})}_{\text{Epistemic}}d\boldsymbol{\theta}.\tag{3}$$

Here, the first factor captures the aleatoric uncertainty about the correct prediction, while the second one expresses uncertainty about the correct model parameters: the more data we observe, the more density of p(θ|D) should lie on reasonable parameter values for θ. For high-dimensional real-valued parameters θ like in neural networks, this integral becomes intractable, and is usually approximated using Monte Carlo samples:³

$$P(y|\mathbf{x},\mathbb{D})\approx\frac{1}{K}\sum_{k=1}^{K}P(y|\mathbf{x},\boldsymbol{\theta}^{(k)});\quad\boldsymbol{\theta}^{(k)}\sim p(\boldsymbol{\theta}|\mathbb{D})\tag{4}$$

based on K different sets of parameters θ⁽ᵏ⁾. Since this requires obtaining multiple versions of the model parameters through some additional procedure, it comes with the aforementioned problems of computational overhead and approximation errors, motivating the approaches discussed in this survey.
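As a minimal illustration of Equation (4) (the `predict_fn` interface and the origin of the parameter samples are assumptions of this sketch; the samples could come from an ensemble, MC dropout, or Markov chain Monte Carlo):

```python
import numpy as np

def mc_posterior_predictive(x, sampled_params, predict_fn):
    """Monte Carlo estimate of Eq. (4): average the Categorical predictions
    P(y | x, theta^(k)) over K samples theta^(k) ~ p(theta | D)."""
    probs = np.stack([predict_fn(x, theta) for theta in sampled_params])
    return probs.mean(axis=0)  # approximate posterior predictive over classes
```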
## 2.3 Evidential Deep Learning

Since the traditional approach to predictive uncertainty estimation requires multiple parameter sets and can only approximate the predictive posterior, we can factorize Equation (2) further to obtain a tractable form:

$$p(y|\mathbf{x},\mathbb{D})=\iint\underbrace{P(y|\boldsymbol{\pi})}_{\text{Aleatoric}}\underbrace{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}_{\text{Distributional}}\underbrace{p(\boldsymbol{\theta}|\mathbb{D})}_{\text{Epistemic}}d\boldsymbol{\pi}d\boldsymbol{\theta}\approx\int P(y|\boldsymbol{\pi})\underbrace{p(\boldsymbol{\pi}|\mathbf{x},\hat{\boldsymbol{\theta}})}_{p(\boldsymbol{\theta}|\mathbb{D})\approx\delta(\boldsymbol{\theta}-\hat{\boldsymbol{\theta}})}d\boldsymbol{\pi}.\tag{5}$$

This factorization contains another type of uncertainty, which Malinin & Gales (2018) call the *distributional* uncertainty: uncertainty caused by the mismatch of training and test data distributions. In the last step, Malinin & Gales (2018) replace p(θ|D) by a point estimate θ̂ using the Dirac delta function, i.e. a single trained neural network, to get rid of the intractable integral. Although another integral remains, retrieving the uncertainty from this predictive distribution actually has a closed-form analytical solution for the Dirichlet (see Section 3.3). A further advantage of this approach is that it allows us to distinguish uncertainty about a data point because it is ambiguous from uncertainty about points coming from an entirely different data distribution. As an example, consider a binary classification problem, in which the data manifold consists of two overlapping clusters. As we are classifying a new data point, we obtain a distribution P(y|x, θ) which is uniform over the two classes. What does this mean? The model might either be confident that the point lies in the region of overlap and is inherently ambiguous, or the model might be uncertain about the correct class. Without further context, we cannot distinguish between these two cases (Bengs et al., 2022; Hüllermeier, 2022). Compare that to instead predicting p(π|x, θ): If the data point is ambiguous, the resulting distribution will be centered on 0.5; if the model is generally uncertain, the distribution will be uniform, allowing this distinction. We will illustrate this principle further in the upcoming Sections 2.4 and 3.3.

In the neural network context in Equation (5), it should be noted that restricting oneself to a point estimate of the parameters prevents the estimation of epistemic uncertainty like in earlier works through the weight posterior p(θ|D), as discussed in the next section. However, there are works like Haussmann et al. (2019); Zhao et al. (2020) that combine both approaches.

²Note that the predictive distribution in Equation (2) generalizes the common case for a single network prediction, where P(y|x, D) ≈ P(y|x, θ̂). Mathematically, this is expressed by replacing the posterior p(θ|D) by a Dirac delta distribution as in Equation (5), where all probability density rests on a single parameter configuration.
³For easier distributions, the integral can often be evaluated analytically by exploiting conjugacy. Another approach for more complex distributions can be the method of moments (see e.g. Duan, 2021).

![3_image_1.png](3_image_1.png) ![3_image_0.png](3_image_0.png)

Figure 2: Illustration of different approaches to uncertainty quantification on the Iris dataset, with examples for the classes given on the left (Figures 2a to 2c). On the right, the data is plotted alongside some predictions of a prior network (lighter colors indicate higher density) and an ensemble and MC Dropout model on the probability simplex, with 50 predictions each. Iris images were taken from Wikimedia Commons, 2022a;b;c.

The term *Evidential Deep Learning* (EDL) originates from the work of Sensoy et al. (2018) and is based on the *Theory of Evidence* (Dempster, 1968; Audun, 2018): Within the theory, belief mass is assigned to sets of possible states, e.g. class labels, and can also express a lack of evidence, i.e. an "I don't know". We can for instance generalize the predicted output of a neural classifier using the Dirichlet distribution, allowing us to express a lack of evidence through a uniform Dirichlet. This is different from a uniform Categorical distribution, which does not distinguish an equal probability for all classes from the lack of evidence. For the purpose of this survey, we define Evidential Deep Learning as a family of approaches in which a neural network can fall back onto a uniform prior for unknown inputs. While neural networks usually parameterize likelihood functions, approaches in this survey parameterize prior or posterior distributions instead. The advantages of this methodology are now demonstrated using the example in the following section.

## 2.4 An Illustrating Example: The Iris Dataset

To illustrate the advantages of EDL, we choose a classification problem based on the Iris dataset (Fisher, 1936). It contains measurements of three different species of iris flowers (shown in Figures 2a to 2c). We use the dataset as made available through scikit-learn (Pedregosa et al., 2011) and plot the relationship between the width and length measurements of the flowers' petals in Figure 2.
We train a deep neural network ensemble (Lakshminarayanan et al., 2017) with 50 model instances, a model with MC Dropout (Gal & Ghahramani, 2016) with 50 predictions, and a prior network (Sensoy et al., 2018), an example of EDL, on all available data points, and plot their predictions on three test points on the 3-probability simplex in Figure 2.⁴ On these simplices, each point signifies a Categorical distribution, with the proximity to one of the corners indicating a higher probability for the corresponding class. EDL methods for classification do not predict a single output distribution, but an entire *density over output distributions*.

⁴For information about training and model details, see Appendix A.1.

![4_image_0.png](4_image_0.png)

Figure 3: A prior Dirichlet distribution is updated with a vector of class observations. The posterior Dirichlet then shifts density towards the classes k with more observed instances.

Test point 3 lies in a region of overlap between instances of *Iris versicolor* and *Iris virginica*, thus inducing high aleatoric uncertainty. In this case, we can see that the prior network places all of its density between these two classes, similar to most of the predictions of the ensemble and MC Dropout (bottom right). However, some of the latter predictions still land in the center of the simplex. Test point 1 is located in an area without training examples between instances of *Iris versicolor* and *setosa*, as well as close to a single *virginica* outlier. As shown in the top left, ensemble and MC Dropout predictions agree that the point belongs to either the *setosa* or *versicolor* class, with a slight preference for the former. The prior network concentrates its prediction on *versicolor*, but admits some uncertainty towards the two other choices. The last test point 2 is placed in an area of the feature space devoid of any data, roughly equidistant from the three clusters of flowers. Similar to the previous example, the ensemble and MC Dropout predictions on the top right show a preference for *Iris setosa* and *versicolor*, albeit with higher uncertainty. The prior network however shows an almost uniform density, admitting distributional uncertainty about this particular input.

This simple example provides some insights into the potential advantages of EDL: First of all, the prior network was able to provide reasonable uncertainty estimates in comparison with BMA methods. Secondly, the prior network is able to admit its lack of knowledge for the OOD data point by predicting an almost uniform prior, something the other models are not able to do. As laid out in Section 3.3, EDL actually allows the user to disentangle model uncertainty due to a simple lack of data from uncertainty due to the input being out-of-distribution. Lastly, training the prior network only required a single model, which is a noticeable speed-up compared to MC Dropout and especially the training of ensembles.

## 3 Evidential Deep Learning For Classification

In order to introduce EDL methods for classification, we first give a brief introduction to the Dirichlet distribution and its role as a conjugate prior in Bayesian inference in Section 3.1. We then show in Section 3.2 how neural networks can parameterize Dirichlet distributions, while Section 3.3 reveals how such a parameterization can be exploited for efficient uncertainty estimation.
The remaining sections enumerate different examples from the literature parameterizing either a prior (Section 3.4.1) or posterior Dirichlet distribution (Section 3.4.2).

## 3.1 The Dirichlet Distribution

Modelling for instance a binary classification problem is commonly done using the Bernoulli likelihood. The Bernoulli likelihood has a single parameter π, indicating the probability of success (or of the positive class), and is given by

$$\mathrm{Bernoulli}(y|\pi)=\pi^{y}(1-\pi)^{(1-y)}.\tag{6}$$

Within Bayesian inference as introduced in Section 2, the Beta distribution is a commonly used prior for a Bernoulli likelihood. It defines a probability distribution over the parameter π, itself possessing two shape parameters α₁ and α₂:

$$\mathrm{Beta}(\pi;\alpha_{1},\alpha_{2})=\frac{1}{B(\alpha_{1},\alpha_{2})}\pi^{\alpha_{1}-1}(1-\pi)^{\alpha_{2}-1};\quad B(\alpha_{1},\alpha_{2})=\frac{\Gamma(\alpha_{1})\Gamma(\alpha_{2})}{\Gamma(\alpha_{1}+\alpha_{2})};\tag{7}$$

where Γ(·) stands for the gamma function, a generalization of the factorial to the real numbers, and B(·) is called the Beta function (not to be confused with the distribution). When extending the classification problem from two to an arbitrary number of classes, we use a Categorical likelihood:

$$\text{Categorical}(y|\boldsymbol{\pi})=\prod_{k=1}^{K}\pi_{k}^{\mathbf{1}_{y=k}},\tag{8}$$

in which K denotes the number of categories or classes, the class probabilities are expressed using a vector $\boldsymbol{\pi} \in [0, 1]^K$ with $\sum_k \pi_k = 1$, and $\mathbf{1}_{(\cdot)}$ is the indicator function. This distribution appears for instance in classification problems when using neural networks, since most neural networks for classification use a softmax function after their last layer to produce a Categorical distribution over classes s.t. $\pi_k \equiv P(y = k|\mathbf{x})$. In this setting, the Dirichlet distribution arises as a suitable prior and multivariate generalization of the Beta distribution (and is thus also called the *multivariate Beta distribution*):

$$\text{Dir}(\boldsymbol{\pi};\boldsymbol{\alpha})=\frac{1}{B(\boldsymbol{\alpha})}\prod_{k=1}^{K}\pi_{k}^{\alpha_{k}-1};\quad B(\boldsymbol{\alpha})=\frac{\prod_{k=1}^{K}\Gamma(\alpha_{k})}{\Gamma(\alpha_{0})};\quad\alpha_{0}=\sum_{k=1}^{K}\alpha_{k};\quad\alpha_{k}\in\mathbb{R}^{+};\tag{9}$$

where the Beta function B(·) is now defined for K shape parameters compared to Equation (7). For notational convenience, we also define $\mathbb{K} = \{1, \ldots, K\}$ as the set of all classes. The distribution is characterized by its concentration parameters α, the sum of which, often denoted as α₀, is called the *precision*.⁵ The Dirichlet is a *conjugate prior* for such a Categorical likelihood, meaning that according to Bayes' rule in Equation (1), they produce a Dirichlet posterior with parameters β, given a data set $\mathbb{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^N$ of N observations with corresponding labels:

$$p(\boldsymbol{\pi}|\mathbb{D},\boldsymbol{\alpha})\propto p(\{y_{i}\}_{i=1}^{N}|\boldsymbol{\pi},\{\mathbf{x}_{i}\}_{i=1}^{N})p(\boldsymbol{\pi}|\boldsymbol{\alpha})=\prod_{i=1}^{N}\prod_{k=1}^{K}\pi_{k}^{\mathbf{1}_{y_{i}=k}}\frac{1}{B(\boldsymbol{\alpha})}\prod_{k=1}^{K}\pi_{k}^{\alpha_{k}-1}\tag{10}$$
$$=\prod_{k=1}^{K}\pi_{k}^{\left(\sum_{i=1}^{N}\mathbf{1}_{y_{i}=k}\right)}\frac{1}{B(\boldsymbol{\alpha})}\prod_{k=1}^{K}\pi_{k}^{\alpha_{k}-1}=\frac{1}{B(\boldsymbol{\alpha})}\prod_{k=1}^{K}\pi_{k}^{N_{k}+\alpha_{k}-1}\propto\text{Dir}(\boldsymbol{\pi};\boldsymbol{\beta}),$$

where β is a vector with β_k = α_k + N_k, with N_k denoting the number of observations for class k.
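The conjugate update of Equation (10) amounts to adding class counts to the concentration parameters, as in the following sketch (the example numbers are arbitrary):

```python
import numpy as np

def dirichlet_posterior(alpha, labels, num_classes):
    """Eq. (10): the posterior parameters are beta_k = alpha_k + N_k,
    where N_k is the number of observations of class k in `labels`."""
    return alpha + np.bincount(labels, minlength=num_classes)

alpha = np.ones(3)  # uniform Dirichlet prior over K = 3 classes
beta = dirichlet_posterior(alpha, labels=np.array([0, 0, 1, 2, 2, 2]), num_classes=3)
# beta == [3., 2., 4.]; the posterior mean beta / beta.sum() shifts density
# towards the classes with more observed instances, as illustrated in Figure 3.
```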
Intuitively, this implies that the prior belief encoded by the initial Dirichlet is updated using the actual data, sharpening the distribution for classes for which many instances have been observed. Similar to the Beta distribution in Equation (7), the Dirichlet is a *distribution over Categorical distributions* on the K − 1 probability simplex; we show an example with its concentration parameters and the Bayesian update in Figure 3.

## 3.2 Parameterization

For a classification problem with K classes, a neural classifier is usually realized as a function $f_{\boldsymbol{\theta}}: \mathbb{R}^D \rightarrow \mathbb{R}^K$, mapping an input $\mathbf{x} \in \mathbb{R}^D$ to *logits* for each class. Followed by a softmax function, this then defines a Categorical distribution over classes with a vector π with $\pi_k \equiv p(y = k|\mathbf{x}, \boldsymbol{\theta})$. The same underlying architecture can be used without any major modification to instead parameterize a *Dirichlet* distribution, predicting a distribution *over Categorical distributions* $p(\boldsymbol{\pi}|\mathbf{x}, \hat{\boldsymbol{\theta}})$ as in Equation (9).⁶ In order to classify a data point x, a Categorical distribution is created from the predicted concentration parameters of the Dirichlet as follows (this corresponds to the mean of the Dirichlet, see Appendix C.1):

$$\boldsymbol{\alpha}=\exp\left(f_{\boldsymbol{\theta}}(\mathbf{x})\right);\quad\pi_{k}={\frac{\alpha_{k}}{\alpha_{0}}};\quad{\hat{y}}=\operatorname*{arg\,max}_{k\in\mathbb{K}}\ \pi_{1},\ldots,\pi_{K}.\tag{11}$$

Parameterizing a Dirichlet posterior distribution follows a similar logic, as we will discuss in Section 3.4.2.

⁵The precision is analogous to the precision of a Gaussian, where a larger α₀ signifies a sharper distribution.

![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) ![6_image_3.png](6_image_3.png) ![6_image_4.png](6_image_4.png) ![6_image_5.png](6_image_5.png)

Figure 4: Examples of the probability simplex for a K = 3 classification problem, where every corner corresponds to a class and every point to a Categorical distribution. Brighter colors correspond to higher density. (a) Predicted Categorical distributions by an ensemble of discriminators. (b) - (e) (Desired) behavior of the Dirichlet in different scenarios by Malinin & Gales (2018): (b) For a confident prediction, the density is concentrated in the corner of the simplex corresponding to the assumed class. (c) In the case of aleatoric uncertainty, the density is concentrated in the center, and thus uniform Categorical distributions are most likely. (d) In the case of model uncertainty, the density may still be concentrated in a corner, but more spread out, expressing the uncertainty about the right prediction. (e) In the case of an OOD input, a uniform Dirichlet expresses that any Categorical distribution is equally likely, since there is no evidence for any known class. (f) Representation gap by Nandy et al. (2020), proposed as an alternative behavior for OOD data. Here, the density is instead concentrated solely on the edges of the simplex.
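In code, Equation (11) amounts to a small change on top of a standard classifier head. The following sketch assumes `logits` holds the unconstrained network outputs $f_{\boldsymbol{\theta}}(\mathbf{x})$ as a numpy array:

```python
import numpy as np

def dirichlet_prediction(logits):
    """Eq. (11): interpret network outputs as log-concentration parameters."""
    alpha = np.exp(logits)      # strictly positive concentration parameters
    alpha0 = alpha.sum()        # precision
    pi = alpha / alpha0         # Dirichlet mean: a Categorical distribution
    return alpha, pi, int(np.argmax(pi))
```

As noted later (Footnote 6), softplus or ReLU functions are common alternatives to the exponential for keeping every $\alpha_k$ strictly positive.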
## 3.3 Uncertainty Estimation With Dirichlet Networks

Let us now turn our attention to how to estimate the aleatoric, epistemic and distributional uncertainty, as laid out in Section 2.2, within the Dirichlet framework. In Figure 4, we show different shapes of a Dirichlet distribution parameterized by a neural network, corresponding to different cases of uncertainty, where each point on the simplex represents a Categorical distribution, with proximity to a corner indicating a high probability for the corresponding class. Figure 4a displays the predictions of an ensemble of classifiers as a point cloud on the simplex. Using a Dirichlet, this finite set of distributions can be extended to a continuous density over the whole simplex. As we will see in the following sections, parameterizing a Dirichlet distribution with a neural network enables us to distinguish different scenarios using the shape of its density, as shown in Figures 4b to 4f, which we will discuss in more detail along the way. However, since we do not want to inspect Dirichlets visually, we instead use closed-form expressions to quantify uncertainty, which we discuss now. Although stated for the prior parameters α, the following methods can also be applied to the posterior parameters β without loss of generality.

Data (aleatoric) uncertainty To obtain a measure of data uncertainty, we can evaluate the expected entropy of the data distribution p(y|π) (similar to previous works like e.g. Gal & Ghahramani, 2016). As the entropy captures the "peakiness" of the output distribution, a lower entropy indicates that the model is concentrating most probability mass on a single class, while high entropy characterizes a more uniform distribution: the model is undecided about the right prediction. For Dirichlet networks, this quantity has a closed-form solution (for the full derivation, refer to Appendix D.1):

$$\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\bigg[H\Big[P(y|\boldsymbol{\pi})\Big]\bigg]=-\sum_{k=1}^{K}\frac{\alpha_{k}}{\alpha_{0}}\bigg(\psi(\alpha_{k}+1)-\psi(\alpha_{0}+1)\bigg)\tag{12}$$

where ψ denotes the digamma function, defined as $\psi(x) = \frac{d}{dx}\log\Gamma(x)$, and H the Shannon entropy.

Model (epistemic) uncertainty As we saw in Section 2.2, most approaches in the Dirichlet framework avoid the intractable integral over network parameters θ by using a point estimate θ̂.⁷ This means that computing the model uncertainty via the weight posterior p(θ|D), like in Blundell et al. (2015); Gal & Ghahramani (2016); Smith & Gal (2018), is not possible. Nevertheless, a key property of Dirichlet networks is that epistemic uncertainty is expressed in the spread of the Dirichlet distribution (for instance in Figures 4d and 4e). Therefore, the epistemic uncertainty can be quantified by considering the concentration parameters α that shape this distribution: Charpentier et al. (2020) simply consider the maximum α_k as a score akin to the maximum probability score by Hendrycks & Gimpel (2017), while Sensoy et al. (2018) compute it as $K/\sum_{k=1}^{K}(\alpha_k + 1)$, or simply as α₀ (Charpentier et al., 2020). In both cases, the underlying intuition is that larger α_k produce a sharper density, and thus indicate increased confidence in a prediction.

Distributional uncertainty Another appealing property of this model family is being able to distinguish uncertainty due to model underspecification (Figure 4d) from uncertainty due to unknown inputs (Figure 4e). In the Dirichlet framework, the distributional uncertainty can be quantified by computing the difference between the total amount of uncertainty and the data uncertainty, which can be expressed through the mutual information between the label y and its Categorical distribution π:

$$I\Big[y,\boldsymbol{\pi}\Big|\mathbf{x},\mathbb{D}\Big]=\underbrace{H\Big[\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\mathbb{D})}\big[P(y|\boldsymbol{\pi})\big]\Big]}_{\text{Total Uncertainty}}-\underbrace{\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\mathbb{D})}\Big[H\big[P(y|\boldsymbol{\pi})\big]\Big]}_{\text{Data Uncertainty}}\tag{13}$$

⁶The only thing to note here is that every α_k has to be strictly positive, which can for instance be enforced by using an additional softplus, exponential or ReLU function (Sensoy et al., 2018; Malinin & Gales, 2018; Sensoy et al., 2020).
⁷With exceptions such as Haussmann et al. (2019); Zhao et al. (2020). When the distribution over parameters in Equation (5) is retained, alternate expressions of the aleatoric and epistemic uncertainty are derived by Woo (2022).
In the Dirichlet framework, the distributional uncertainty can be quantified by computing the difference between the total amount of uncertainty and the data uncertainty, which can be expressed through the mutual information between the label y and its Categorical distribution π: $$I\left[y,\pi\left|\mathbf{x},\mathbb{D}\right]=\underbrace{H\left[\mathbb{E}_{p(\pi|\mathbf{x},\mathbb{D})}\left[P(y|\pi)\right]\right]}_{\text{Total Uncertainty}}-\underbrace{\mathbb{E}_{p(\pi|\mathbf{x},\mathbb{D})}\left[H\left[P(y|\pi)\right]\right]}_{\text{Data Uncertainty}}\tag{13}$$ 6The only thing to note here is that the every αk has to be strictly positive, which can for instance be enforced by using an additional softplus, exponential or ReLU function (Sensoy et al., 2018; Malinin & Gales, 2018; Sensoy et al., 2020). 7With exceptions such as Haussmann et al. (2019); Zhao et al. (2020). When the distribution over parameters in Equation (5) is retained, alternate expressions of the aleatoric and epistemic uncertainty are derived by Woo (2022). This quantity expresses how much information we would receive about π if we were given the label y, conditioned on the new input x and the training data D. In regions in which the model is well-defined, receiving y should not provide much new information about π—and thus the mutual information would be low. Yet, such knowledge should be very informative in regions in which few data have been observed, and there this mutual information would indicate higher distributional uncertainty. Given that E[πk] = αk α0 (Appendix C.1) and assuming the point estimate p(π|x, D) ≈ p(π|x, θˆ) to be sufficient (Malinin & Gales, 2018), we obtain an expression very similar to Equation (12): $$I\Big{[}y,\pi\Big{|}{\bf x},{\mathbb{D}}\Big{]}=-\sum_{k=1}^{K}\frac{\alpha_{k}}{\alpha_{0}}\bigg{(}\log\frac{\alpha_{k}}{\alpha_{0}}-\psi(\alpha_{k}+1)+\psi(\alpha_{0}+1)\bigg{)}\tag{14}$$ Note on epistemic uncertainty estimation The introduction of distributional uncertainty, a notion that is non-existent in the Bayesian Model Averaging framework, warrants a note on the estimation of epistemic uncertainty in general. Firstly, since we often use the point estimate p(θ|D) ≈ δ(θ−θˆ) from Equation (5) in Evidential Deep Learning, model uncertainty usually is no longer estimated via the uncertainty in the weight posterior, but instead through the parameters of the prior or posterior distribution. Furthermore, even though they appear similar, distributional uncertainty is different from epistemic uncertainty, since it is the uncertainty in the distribution p(π|x, θ). Distinguishing epistemic from distributional uncertainty also allows us to differentiate uncertainty due to underspecification from uncertainty due to a lack of evidence. In BMA, these notions are indistinguishable: In theory, model uncertainty on OOD data should be high since the model is underspecified on them, however theoretical and empirical work has shown this is not always the case (Ulmer et al., 2020; Ulmer & Cinà, 2021; Van Landeghem et al., 2022). Even then, the additive decomposition of the mutual information has been critized since the model will also have a great deal of uncertainty about its aleatoric uncertainty in the beginning of the training process (Hüllermeier, 2022), and thus this decomposition might not be accurate. Furthermore, even when we obtain the best possible model within its hypothesis class, using the discussed methods it is impossible to estimate uncertainty induced by a misspecified hypothesis class. 
## 3.4 Existing Approaches For Dirichlet Networks

Being able to quantify aleatoric, epistemic and distributional uncertainty in a single forward pass and in closed form are desirable traits, as they simplify the process of obtaining different uncertainty scores. However, it is important to note that the behavior of the Dirichlet distributions in Figure 4 is idealized. In the usual way of training neural networks through empirical risk minimization, Dirichlet networks are not incentivized to behave in the depicted way. Thus, when comparing existing approaches for parameterizing Dirichlet priors in Section 3.4.1 and posteriors in Section 3.4.2,⁸ we mainly focus on the different ways in which authors try to tackle this problem by means of loss functions and training procedures. We give an overview of the discussed works in Tables 1 and 2 in these respective sections. For additional details, we refer the reader to Appendix C for general derivations concerning the Dirichlet distribution. We dedicate Appendix D to derivations of the different loss functions and regularizers and give a detailed overview of their mathematical forms in Appendix E. Available code repositories for all surveyed works are listed in Appendix A.2.

## 3.4.1 Prior Networks

The key challenge in training Dirichlet networks is to ensure both high classification performance and the intended behavior under OOD inputs. For this reason, most discussed works follow a loss function design using two parts: One optimizing for task accuracy to achieve the former goal, the other optimizing for a flat Dirichlet distribution, as flatness suggests a lack of evidence. To enforce flatness, the predicted Dirichlet is compared to a uniform distribution using some probabilistic divergence measure. We divide prior networks into two groups: Approaches using additional OOD data for this purpose (*OOD-dependent approaches*), and those which do not require OOD data (*OOD-free approaches*), as listed in Table 1.

⁸Even though the terms *prior* and *posterior network* were coined by Malinin & Gales (2018) and Charpentier et al. (2020) for their respective approaches, we use them in the following as umbrella terms for all methods targeting a prior or posterior distribution.

Table 1: Overview of prior networks for classification. (∗) OOD samples were created inspired by the approach of Liang et al. (2018). ID: Using in-distribution data samples.

| Method | Loss function | Architecture | OOD-free training? |
|---|---|---|---|
| Prior network (Malinin & Gales, 2018) | ID KL w.r.t. smoothed label & OOD KL w.r.t. uniform prior | MLP / CNN | ✗ |
| Prior networks (Malinin & Gales, 2019) | Reverse KL of Malinin & Gales (2018) | CNN | ✗ |
| Information Robust Dirichlet Networks (Tsiligkaridis, 2019) | $l_p$ norm w.r.t. one-hot label & approx. Rényi divergence w.r.t. uniform prior | CNN | ✓ |
| Dirichlet via Function Decomposition (Biloš et al., 2019) | Uncertainty cross-entropy & mean & variance regularizer | RNN | ✓ |
| Prior network with PAC Regularization (Haussmann et al., 2019) | Negative log-likelihood loss + PAC regularizer | BNN | ✓ |
| Ensemble Distribution Distillation (Malinin et al., 2020b) | Knowledge distillation objective | MLP / CNN | ✓ |
| Self-Distribution Distillation (Fathullah & Gales, 2022) | Knowledge distillation objective | CNN | ✓ |
| Prior networks with representation gap (Nandy et al., 2020) | ID & OOD cross-entropy + precision regularizer | MLP / CNN | ✗ |
| Prior RNN (Shen et al., 2020) | Cross-entropy + entropy regularizer | RNN | (✗)∗ |
| Graph-based Kernel Dirichlet distribution estimation (GKDE) (Zhao et al., 2020) | $l_2$ norm w.r.t. one-hot label & KL reg. with node-level distance prior & knowledge distillation objective | GNN | ✓ |

OOD-free approaches Apart from a standard negative log-likelihood loss (NLL) as used by Haussmann et al. (2019), one simple approach to optimizing the model is to impose an $l_p$-loss between the one-hot encoding $\mathbf{y}$ of the original label y and the Categorical distribution π. Tsiligkaridis (2019) shows that, since the values of π depend directly on the predicted concentration parameters α, a generalized loss can be derived to be upper-bounded by the following expression (see the full derivation given in Appendix D.3):

$$\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\big[\|\mathbf{y}-\boldsymbol{\pi}\|_{p}\big]\leq\left(\frac{\Gamma(\alpha_{0})}{\Gamma(\alpha_{0}+p)}\right)^{\frac{1}{p}}\left(\frac{\Gamma\Big(\sum_{k\neq y}\alpha_{k}+p\Big)}{\Gamma\Big(\sum_{k\neq y}\alpha_{k}\Big)}+\sum_{k\neq y}\frac{\Gamma(\alpha_{k}+p)}{\Gamma(\alpha_{k})}\right)^{\frac{1}{p}}\tag{15}$$

Since the sum over concentration parameters excludes the one corresponding to the true label, this loss can be seen as reducing the density on the areas of the probability simplex that do not correspond to the target class. Sensoy et al. (2018) specifically utilize the $l_2$ loss, which has the following form (see Appendix D.4):

$$\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\Big[||\mathbf{y}-\boldsymbol{\pi}||_{2}^{2}\Big]=\sum_{k=1}^{K}\Big(\mathbf{1}_{y=k}-{\frac{\alpha_{k}}{\alpha_{0}}}\Big)^{2}+{\frac{\alpha_{k}(\alpha_{0}-\alpha_{k})}{\alpha_{0}^{2}(\alpha_{0}+1)}}\tag{16}$$

where $\mathbf{1}_{(\cdot)}$ denotes the indicator function. Since $\alpha_k/\alpha_0 \leq 1$, we can see that the term with the indicator function penalizes the network when the concentration parameter α_k corresponding to the correct label does not exceed the others.
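A direct implementation of Equation (16) for a single example could look as follows (a sketch; batching and the regularizer discussed next are omitted):

```python
import numpy as np

def expected_l2_loss(alpha, y):
    """Eq. (16): expected squared l2 distance between the one-hot label
    and pi ~ Dir(alpha), split into error and variance terms."""
    alpha0 = alpha.sum()
    one_hot = np.zeros_like(alpha)
    one_hot[y] = 1.0
    err = (one_hot - alpha / alpha0) ** 2
    var = alpha * (alpha0 - alpha) / (alpha0 ** 2 * (alpha0 + 1))
    return float(np.sum(err + var))
```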
uniform prior | | | | Dirichlet via Function Decomposition | Uncertainty Cross-entropy & | RNN | ✓ | | (Biloš et al., 2019) | mean & variance regularizer | BNN | ✓ | | Prior network with PAC Regularization | Negative log-likelihood loss + | | | | (Haussmann et al., 2019) | PAC regularizer | | | | Ensemble Distribution Distillation | Knowledge distillation objective | MLP / CNN | ✓ | | (Malinin et al., 2020b) Self-Distribution Distillation | Knowledge distillation objective | CNN | ✓ | | (Fathullah & Gales, 2022) Prior networks with representation gap | ID & OOD Cross-entropy + | MLP / CNN | ✗ | | (Nandy et al., 2020) | precision regularizer | | | | Prior RNN (Shen et al., 2020) | Cross-entropy + entropy regularizer | RNN | (✗) ∗ | | | GNN | ✓ | | | Graph-based Kernel Dirichlet distribution | l2 norm w.r.t. one-hot label & | | | | estimation (GKDE) (Zhao et al., 2020) | KL reg. with node-level distance prior & Knowledge distillation objective | | | into two groups: Approaches using additional OOD data for this purpose (*OOD-dependent approaches*), and those which do not required OOD data (*OOD-free approaches*), as listed in Table 1. OOD-free approaches Apart from a standard negative log-likelihood loss (NLL) as used by Haussmann et al. (2019), one simple approach to optimizing the model is to impose a lp-loss between the one-hot encoding y of the original label y and the Categorical distribution π. Tsiligkaridis (2019) show that since the values of π depend directly on the predicted concentration parameters α, a generalized loss can be derived to be upper-bounded by the following expression (see the full derivation given in Appendix D.3): $$\mathbb{E}_{p(\mathbf{\pi}|\mathbf{x},\mathbf{\theta})}\big{[}\|\mathbf{y}-\mathbf{\pi}\|_{p}\big{]}\leq\left(\frac{\Gamma(\alpha_{0})}{\Gamma(\alpha_{0}+p)}\right)^{\frac{1}{p}}\left(\frac{\Gamma\Big{(}\sum_{k\neq y}\alpha_{k}+p\Big{)}}{\Gamma\Big{(}\sum_{k\neq y}\alpha_{k}\Big{)}}+\sum_{k\neq y}\frac{\Gamma(\alpha_{k}+p)}{\Gamma(\alpha_{k})}\right)^{\frac{1}{p}}\tag{15}$$ $$(16)$$ Since the sum over concentration parameters excludes the one corresponding to the true label, this loss can be seen as reducing the density on the areas of the probability simplex that do not correspond to the target class. Sensoy et al. (2018) specifically utilize the l2 loss, which has the following form (see Appendix D.4): $$\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\Big[||\mathbf{y}-\pi||_{2}^{2}\Big]=\sum_{k=1}^{K}\Big(\mathbf{1}_{y=k}-{\frac{\alpha_{k}}{\alpha_{0}}}\Big)^{2}+{\frac{\alpha_{k}(\alpha_{0}-\alpha_{k})}{\alpha_{0}^{2}(\alpha_{0}+1)}}$$ (α0 + 1) (16) where 1(·) denotes the indicator function. Since αk/α0 ≤ 1, we can see that the term with the indicator functions penalizes the network when the concentration parameter αk corresponding to the correct label does not exceed the others. The remaining aspect lies in the regularization: To achieve reliable predictive uncertainty, the density associated with incorrect classes should be reduced. One such option is to decrease the Kullback-Leibler divergence from a uniform Dirichlet (see Appendix C.3): $${\rm KL}\Big{[}p(\mathbf{\pi}|\mathbf{\alpha})\Big{|}\Big{|}p(\mathbf{\pi}|{\bf1})\Big{]}=\log\frac{\Gamma(K)}{B(\mathbf{\alpha})}+\sum_{k=1}^{K}(\alpha_{k}-1)\big{(}\psi(\alpha_{k})-\psi(\alpha_{0})\big{)}\tag{17}$$ In the case of Zhao et al. 
(2020), who apply their model to graph structures, do not decrease the divergence from a uniform Dirichlet, but incorporate information about the local graph neighborhood into the reference distribution by considering the distance from and label of close nodes.9 Nevertheless, the KL divergence w.r.t. a uniform Dirichlet is used by many of the following works. Other divergence measures are also possible: Tsiligkaridis (2019) instead use a local approximation of the Rényi divergence.10 First, the concentration parameter for the correct class $\alpha_{y}$ is removed from the Dirichlet by creating $\tilde{\alpha}=(\mathbf{1}-\mathbf{y})\cdot\alpha+\mathbf{y}$. Then, the remaining concentration parameters are pushed towards uniformity by the divergence measure, which can be derived to be

$$\mathrm{Renyi}\Big[p(\pi|\tilde{\alpha})\Big|\Big|p(\pi|\mathbf{1})\Big]\approx\frac{1}{2}\Big[\sum_{k\neq y}(\alpha_{k}-1)^{2}\big(\psi^{(1)}(\alpha_{k})-\psi^{(1)}(\tilde{\alpha}_{0})\big)-\psi^{(1)}(\tilde{\alpha}_{0})\sum_{\substack{k\neq k'\\ k\neq y,\ k'\neq y}}(\alpha_{k}-1)(\alpha_{k'}-1)\Big]\tag{18}$$

where $\psi^{(1)}$ denotes the first-order polygamma function, defined as $\psi^{(1)}(x)=\frac{d}{dx}\psi(x)$. Since the sums ignore the concentration parameter of the correct class, only the ones of the incorrect classes are penalized.

Haussmann et al. (2019) derive an entirely different regularizer using Probably Approximately Correct (PAC) bounds from learning theory that, together with the negative log-likelihood, gives a proven bound on the expected true risk of the classifier. Setting a scalar $\delta$ allows one to set the desired risk, i.e. the model's expected risk is guaranteed to be the same or less than the derived PAC bound with a probability of $1-\delta$. For a problem with $N$ available training data points, the following *upper bound* is presented:

$$\sqrt{\frac{\mathrm{KL}\big[p(\pi|\alpha)\big|\big|p(\pi|\mathbf{1})\big]-\log\delta}{N-1}}\tag{19}$$

This upper bound is then used as the actual regularizer term in practice. We see that even from the learning-theoretic perspective, this method follows the intuition of the original KL regularizer in a shifted and scaled form. Haussmann et al. (2019) also admit that in this form, the regularizer does not allow for a direct PAC interpretation anymore, since its approximation only admits a loose bound on the risk. Yet, they demonstrate its usefulness in their experiments.

Summarizing all of the presented approaches thus far, we can see that they try to force the model to concentrate the Dirichlet's density solely on the parameter corresponding to the right label, expecting a flatter density for difficult or unknown inputs.
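To make this recurring recipe concrete, the snippet below sketches the two most common ingredients in PyTorch: the expected $l_2$ loss of Equation (16) and the KL regularizer of Equation (17). It is a minimal illustration under our own naming and batching conventions, not a reference implementation of any particular paper; the regularization weight is likewise an arbitrary stand-in (in practice it is often annealed over training).

```python
import torch
import torch.nn.functional as F

def expected_l2_loss(alpha: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Closed-form expected l2 loss between one-hot labels and pi ~ Dir(alpha), Eq. (16).
    # alpha: (batch, K) concentration parameters, y: (batch,) class indices.
    alpha0 = alpha.sum(-1, keepdim=True)
    p_mean = alpha / alpha0                                        # E[pi_k]
    y_hot = F.one_hot(y, num_classes=alpha.shape[-1]).float()
    bias = (y_hot - p_mean) ** 2                                   # squared error of the mean
    var = alpha * (alpha0 - alpha) / (alpha0 ** 2 * (alpha0 + 1))  # Var[pi_k]
    return (bias + var).sum(-1).mean()

def kl_to_uniform(alpha: torch.Tensor) -> torch.Tensor:
    # KL[Dir(alpha) || Dir(1)] in closed form, Eq. (17).
    K = alpha.shape[-1]
    alpha0 = alpha.sum(-1)
    log_term = (torch.lgamma(alpha0) - torch.lgamma(alpha).sum(-1)
                - torch.lgamma(torch.tensor(float(K))))
    digamma_term = ((alpha - 1) * (torch.digamma(alpha)
                    - torch.digamma(alpha0).unsqueeze(-1))).sum(-1)
    return (log_term + digamma_term).mean()

# Following Sensoy et al. (2018), the evidence of the true class is removed
# before regularizing, so only density on the wrong classes is penalized.
alpha = torch.rand(8, 10) * 3 + 1.0                     # stand-in for network outputs
y = torch.randint(0, 10, (8,))
alpha_tilde = alpha.scatter(-1, y.unsqueeze(-1), 1.0)   # set alpha_y := 1
loss = expected_l2_loss(alpha, y) + 0.1 * kl_to_uniform(alpha_tilde)
```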
Knowledge distillation A way to avoid the use of OOD examples while still using external information for regularization is to use *knowledge distillation* (Hinton et al., 2015). Here, the core idea lies in a student model learning to imitate the predictions of a more complex teacher model. Malinin et al. (2020b) exploit this idea and show that prior networks can also be distilled using an ensemble of classifiers and their predicted Categorical distributions (akin to learning Figure 4e from Figure 4a), which does not require regularization at all, but comes at the cost of having to train an entire ensemble a priori. Trying to solve this shortcoming, Fathullah & Gales (2022) propose to use a shared feature extractor between the student and the teacher network. Instead of training an ensemble, diverse predictions are obtained from the teacher network through the use of Gaussian dropout, which are distilled into a Dirichlet distribution as in Malinin et al. (2020b).

9They also add another knowledge distillation term (Hinton et al., 2015) to their loss, for which the model tries to imitate the predictions of a vanilla Graph Neural Network that functions as the teacher network.

10The Kullback-Leibler divergence can be seen as a special case of the Rényi divergence (van Erven & Harremoës, 2014), where the latter has a stronger information-theoretic underpinning.

OOD-dependent approaches A uniform Dirichlet in the face of unknown inputs can also be achieved explicitly by training with OOD inputs and learning to be uncertain on them. We discuss a series of works utilizing this direction next. Malinin & Gales (2018) simply minimize the KL divergence to a uniform Dirichlet on OOD data points. This way, the model is encouraged to be agnostic about its prediction in the face of unknown inputs. Further, instead of an $l_p$ norm, they utilize another KL term to train the model on predicting the correct label, minimizing the distance between the predicted concentration parameters and the true label. However, since only a gold *label* and not a gold *distribution* is available, they create one by re-distributing some of the density from the correct class onto the rest of the simplex (see Appendix E for the full form). In their follow-up work, Malinin & Gales (2019) argue that the asymmetry of the KL divergence as the main objective creates undesirable properties in producing the correct behavior of the predicted Dirichlet, since it creates a multi- instead of a unimodal target distribution. They therefore propose to use the reverse KL instead (see Appendix D.5 for the derivation), which enforces the desired unimodal target. Nandy et al. (2020) refine this idea further, stating that even with reverse KL training, high epistemic and high distributional uncertainty (Figures 4d and 4e) might be confused, and instead propose novel loss functions producing a *representation gap* (Figure 4f), which aims to be more easily distinguishable. In this case, spread out densities signify epistemic uncertainty, whereas densities concentrated entirely on the edges of the simplex indicate distributional uncertainty. The way they achieve this goal is two-fold: In addition to minimizing the negative log-likelihood loss on in-domain and maximizing the entropy on OOD examples, they also penalize the precision of the Dirichlet (see Appendix E for the full form). Maximizing the entropy on OOD examples hereby serves the same function as minimizing the KL w.r.t. a uniform distribution, and can be implemented using the closed-form solution in Appendix C.2:

$$H\big[p(\pi|\alpha)\big]=\log B(\alpha)+(\alpha_{0}-K)\psi(\alpha_{0})-\sum_{k=1}^{K}(\alpha_{k}-1)\psi(\alpha_{k})\tag{20}$$
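This entropy is straightforward to compute with standard library routines; the following sketch (PyTorch again, with our own function name) evaluates the closed form of Equation (20) and cross-checks it against torch.distributions. An entropy regularizer for OOD batches is then simply its negation.

```python
import torch
from torch.distributions import Dirichlet

def dirichlet_entropy(alpha: torch.Tensor) -> torch.Tensor:
    # Differential entropy of Dir(alpha) in closed form, Eq. (20).
    K = alpha.shape[-1]
    alpha0 = alpha.sum(-1)
    log_B = torch.lgamma(alpha).sum(-1) - torch.lgamma(alpha0)
    return (log_B + (alpha0 - K) * torch.digamma(alpha0)
            - ((alpha - 1) * torch.digamma(alpha)).sum(-1))

alpha = torch.rand(8, 10) * 5 + 0.5
# Sanity check against the library implementation.
assert torch.allclose(dirichlet_entropy(alpha), Dirichlet(alpha).entropy(), atol=1e-5)

# On OOD batches one maximizes entropy, i.e. adds
# -dirichlet_entropy(alpha).mean() to the loss, pushing the Dirichlet towards flatness.
```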
Sequential models We have also identified two sequential applications of prior networks in the literature: For Natural Language Processing, Shen et al. (2020) train a recurrent neural network for spoken language understanding using a simple cross-entropy loss. Instead of using OOD examples for training, they maximize the entropy of the model on data inputs given a learned, noisy version of the predicted concentration parameters. In comparison, Biloš et al. (2019) apply their model to asynchronous event classification and note that the standard cross-entropy loss only involves a point estimate of a Categorical distribution, discarding all the information contained in the predicted Dirichlet. For this reason, they propose an *uncertainty-aware* cross-entropy (UCE) loss instead, which has a closed-form solution in the Dirichlet case (see Appendix D.6)

$$\mathcal{L}_{\mathrm{UCE}}=\psi(\alpha_{0})-\psi(\alpha_{y}),\tag{21}$$

with $\psi$ referring to the digamma function. By minimizing the difference between the digamma values of $\alpha_{0}$ and $\alpha_{y}$, the model learns to concentrate density on the correct class. Since their final concentration parameters are created using additional information from a class-specific Gaussian process, they further regularize the mean and variance for OOD data points using an extra loss term, incentivizing a low mean and a variance corresponding to a pre-defined hyperparameter.
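Given the digamma function, the UCE loss is essentially a one-liner; the sketch below makes the closed form of Equation (21) explicit (PyTorch, with our own naming).

```python
import torch

def uce_loss(alpha: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Uncertainty-aware cross-entropy, Eq. (21):
    # E_{pi ~ Dir(alpha)}[-log pi_y] = psi(alpha_0) - psi(alpha_y).
    alpha0 = alpha.sum(-1)
    alpha_y = alpha.gather(-1, y.unsqueeze(-1)).squeeze(-1)
    return (torch.digamma(alpha0) - torch.digamma(alpha_y)).mean()

# The loss shrinks as evidence concentrates on the true class:
confident = torch.tensor([[10.0, 1.0, 1.0]])
flat = torch.tensor([[4.0, 4.0, 4.0]])
y = torch.tensor([0])
assert uce_loss(confident, y) < uce_loss(flat, y)
```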
## 3.4.2 Posterior Networks

As elaborated on in Section 3.1, choosing a Dirichlet prior, due to its conjugacy to the Categorical distribution, induces a Dirichlet posterior distribution. Like the prior before, the surveyed works listed in Table 2 parameterize the posterior with a neural network. The challenges hereby are two-fold: Accounting for the number of class observations $N_k$ that make up part of the posterior density parameters $\beta$ (Equation (10)), and, similarly to prior networks, ensuring the desired behavior on the probability simplex for in- and out-of-distribution inputs.

| Method | Loss function | Architecture | OOD-free training? |
|--------|---------------|--------------|--------------------|
| Evidential Deep Learning (Sensoy et al., 2018) | l2 norm w.r.t. one-hot label + KL w.r.t. uniform prior | CNN | ✓ |
| Regularized ENN (Zhao et al., 2019) | l2 norm w.r.t. one-hot label + uncertainty regularizer on OOD / difficult samples | MLP / CNN | ✗ |
| WGAN–ENN (Hu et al., 2021) | l2 norm w.r.t. one-hot label + uncertainty regularizer on synth. OOD | MLP / CNN + WGAN | (✗)§ |
| Variational Dirichlet (Chen et al., 2018) | ELBO + contrastive adversarial loss | CNN | (✗)† |
| Dirichlet Meta-Model (Shen et al., 2022) | ELBO + KL w.r.t. uniform prior | CNN | ✓ |
| Belief Matching (Joo et al., 2020) | ELBO | CNN | ✓ |
| Posterior Networks (Charpentier et al., 2020) | Uncertainty CE (Biloš et al., 2019) + entropy regularizer | MLP / CNN + Norm. Flow | ✓ |
| Graph Posterior Networks (Stadler et al., 2021) | Same as Charpentier et al. (2020) | GNN | ✓ |
| Generative Evidential Neural Networks (Sensoy et al., 2020) | Contrastive NLL + KL between uniform & Dirichlet of wrong classes | CNN | (✗)‡ |

Table 2: Overview of posterior networks for classification. OOD data is created using (†) the fast-sign gradient method (Kurakin et al., 2017), a (‡) Variational Auto-Encoder (VAE; Kingma & Welling, 2014) or (§) a Wasserstein GAN (WGAN; Arjovsky et al., 2017). NLL: Negative log-likelihood. CE: Cross-entropy.

Sensoy et al. (2018) base their approach on the Dempster-Shafer theory of evidence (Yager & Liu, 2008; lending its name to the term "Evidential Deep Learning") and its formalization via subjective logic (Audun, 2018), where subjective beliefs about probabilities are expressed through Dirichlet distributions. In doing so, an agnostic belief in the form of a uniform Dirichlet prior $\forall k:\alpha_{k}=1$ is updated using pseudo-counts $N_k$, which are predicted by a neural network. This is different from prior networks, where the prior concentration parameters $\alpha$ are predicted instead. In both cases, this does not require any modification to a model's architecture except for replacing the softmax output function by a ReLU (or similar). Sensoy et al. (2018) for instance train their model using the same techniques presented in the previous section: The main objective is the l2 loss, penalizing the difference between the predicted Dirichlet and the one-hot encoded class label (Appendix D.4), and the KL divergence from a uniform Dirichlet is used for regularization.

Generating OOD samples using generative models Since OOD examples are not always readily available, several works try to create artificial samples using deep generative models. Hu et al. (2021) train a Wasserstein GAN (Arjovsky et al., 2017) to generate OOD samples, on which the network's uncertainty is maximized. The uncertainty is given through *vacuity*, defined as $K/\sum_{k}\beta_{k}$. The vacuity compares a uniform prior belief against the amassed evidence $\sum_{k}\beta_{k}$, and thus is 1 when there is no additional evidence available. In a follow-up work, Sensoy et al. (2020) similarly train a model using a contrastive loss with artificial OOD samples from a Variational Autoencoder (Kingma & Welling, 2014), and a KL-based regularizer similar to that of Tsiligkaridis (2019), where the density for posterior concentration parameters $\beta_k$ that do not correspond to the true label is pushed towards the uniform distribution.

Posterior networks via Normalizing Flows Charpentier et al. (2020) also set $\alpha$ to a uniform prior, but obtain the pseudo-observations $N_k$ in a different way: Instead of a model predicting them directly, $N_k$ is determined by the number of examples of a certain class in the training set. This quantity is further modified in the following way: An encoder model $f_{\theta}$ produces a latent representation $\mathbf{z}$ of some input. A (class-specific) normalizing flow11 (NF; Rezende & Mohamed, 2015) with parameters $\phi$ then assigns a probability to this latent representation, which is used to weight $N_k$:
$$\beta_{k}=\alpha_{k}+N_{k}\cdot p(\mathbf{z}|y=k,\phi);\quad\mathbf{z}=f_{\theta}(\mathbf{x})\tag{22}$$

This has the advantage of producing low probabilities for strange inputs like the noise depicted in Figure 5a, which in turn translate to low concentration parameters of the posterior Dirichlet, as it falls back onto the uniform prior. The model is optimized using the same uncertainty-aware cross-entropy loss as in Biloš et al. (2019) with an additional entropy regularizer, encouraging density only around the correct class.

11A NF is a generative model, estimating a density in the feature space by mapping it to a Gaussian in a latent space by a series of invertible, bijective transformations. The probability of an input can then be estimated by calculating the probability of its latent encoding under that Gaussian and applying the change-of-variable formula, traversing the flow in reverse. Instead of mapping from the feature space into latent space, the flows in Charpentier et al. (2020) map from the encoder latent space into a separate, second latent space.

![13_image_0.png](13_image_0.png)

Figure 5: Schematic of the Posterior Network and Natural Posterior Network, taken from Charpentier et al. (2020; 2022), respectively. In both cases, an encoder $f_{\theta}$ maps inputs to a latent representation $\mathbf{z}$. NFs then model the latent densities, which are used together with the prior concentration to produce the posterior parameters. In (a), the latent representation of $x^{(1)}$ lies right in the modelled density of the first class, and thus receives a confident prediction. The latent $z^{(2)}$ lies between densities, creating aleatoric uncertainty. $x^{(3)}$ is an OOD input, is mapped to a low-density area of the latent space and thus produces an uncertain prediction. The difference between the two approaches is that the Posterior Network in (a) uses one NF per class, while only one NF is used in (b). Furthermore, (b) constitutes a generalization to different exponential family distributions, and is not restricted to classification problems (see main text for more detail).

This scheme is also applied to Graph Neural Networks by Stadler et al. (2021): In order to take the neighborhood structure of the graph into account, the authors also use a Personalized Page Rank scheme to diffuse node-specific posterior parameters $\beta$ between neighboring nodes. The Page Rank scores, reflecting the importance of a neighboring node to the current node, can be approximated using power iteration (Klicpera et al., 2019) and used to aggregate the originally predicted concentration parameters $\beta$ on a per-node basis.

A generalization of the posterior network method to exponential family distributions is given by Charpentier et al. (2022). Akin to the update for the posterior Dirichlet parameters, the authors formulate a general Bayesian update rule as

$$\chi_{i}^{\mathrm{post}}=\frac{n^{\mathrm{prior}}\chi^{\mathrm{prior}}+n_{i}\chi_{i}}{n^{\mathrm{prior}}+n_{i}};\quad\mathbf{z}_{i}=f_{\theta}(\mathbf{x}_{i});\quad n_{i}=N_{H}\cdot p(\mathbf{z}_{i}|\phi);\quad\chi_{i}=g_{\psi}(\mathbf{z}_{i})\tag{23}$$

$\chi$ here denotes the parameters of the exponential family distribution and $n$ the evidence. Thus, posterior parameters for a sample $\mathbf{x}_{i}$ are obtained by updating the prior parameters and some prior evidence with an input-dependent pseudo-evidence $n_{i}$ and parameters $\chi_{i}$: Again, given a latent representation $\mathbf{z}$ produced by an encoder, a (this time single) normalizing flow predicts $n_{i}=N_{H}\cdot p(\mathbf{z}|\phi)$ based on some pre-defined certainty budget $N_{H}$.12 The update parameters are predicted by an additional network $\chi_{i}=g_{\psi}(\mathbf{z})$, see Figure 5b. For classification, $n^{\mathrm{prior}}=1$ and $\chi^{\mathrm{prior}}$ corresponds to the uniform Dirichlet, while the $\chi_{i}$ are concentration parameters predicted by an output layer based on the latent encoding. For unfamiliar inputs, this method will again result in a small pseudo-evidence term $n_{i}$, reflecting high model uncertainty. Since the generalization to the exponential family implies the application of this scheme to normal distributions, we will discuss the same method applied to regression in the next section.

12The certainty budget can simply be set to the number of available datapoints, however Charpentier et al. (2022) suggest to set it to $\log N_{H}=\frac{1}{2}H\log(2\pi)+\log(H+1)$ to better scale with the dimensionality $H$ of the latent space.
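The update in Equation (23) is compact enough to state in code. The following is a minimal sketch of its mechanics, in which density_model and param_head are assumed placeholder interfaces standing in for the normalizing flow and the output head; it is not the actual implementation of Charpentier et al. (2022).

```python
import torch

def natural_posterior_update(z, prior_param, density_model, param_head,
                             certainty_budget: float, n_prior: float = 1.0):
    # General Bayesian update of Eq. (23): combine prior parameters and prior
    # evidence with input-dependent pseudo-evidence n_i and parameters chi_i.
    # z: (batch, H) latent codes produced by the encoder f_theta.
    n_i = certainty_budget * density_model.log_prob(z).exp()  # pseudo-evidence (batch,)
    chi_i = param_head(z)                                     # predicted parameters (batch, K)
    n_i = n_i.unsqueeze(-1)
    chi_post = (n_prior * prior_param + n_i * chi_i) / (n_prior + n_i)
    n_post = n_prior + n_i
    return chi_post, n_post

# Toy usage with a Gaussian stand-in for the flow (a real head would also
# constrain its outputs to be valid distribution parameters):
from torch.distributions import MultivariateNormal
density = MultivariateNormal(torch.zeros(4), torch.eye(4))
head = torch.nn.Linear(4, 3)
z = torch.randn(8, 4)
chi_post, n_post = natural_posterior_update(
    z, prior_param=torch.ones(3), density_model=density,
    param_head=head, certainty_budget=100.0)
```

The mechanism for OOD inputs is visible directly: when the flow assigns low density to a latent code, n_i shrinks and the update falls back onto the prior parameters.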
Posterior networks via variational inference Another route lies in directly parameterizing the posterior parameters $\beta$. Given a target distribution defined by a uniform Dirichlet prior plus the number of times an input is associated with a specific label, Chen et al. (2018) optimize a distribution matching objective, i.e. the KL-divergence between the posterior parameters predicted by a neural network and the target distribution. Since this objective is intractable to optimize directly, this leaves us to instead model an *approximate* posterior using variational inference methods. As the KL divergence between the true and approximate posterior is infeasible to estimate, variational methods usually optimize the *evidence lower bound* (ELBO):

$$\mathcal{L}_{\mathrm{ELBO}}=\underbrace{\psi(\beta_{y})-\psi(\beta_{0})}_{\text{UCE loss}}-\underbrace{\log\frac{B(\gamma)}{B(\beta)}+\sum_{k=1}^{K}(\beta_{k}-\gamma_{k})\big(\psi(\beta_{k})-\psi(\beta_{0})\big)}_{\text{KL-divergence}}\tag{24}$$

which we can identify as consisting of the (negated) uncertainty-aware cross-entropy (UCE) loss used by Biloš et al. (2019); Charpentier et al. (2020; 2022) and the KL-divergence between two Dirichlets (Appendix C.3). This approach is also employed by Joo et al. (2020), Chen et al. (2018) and Shen et al. (2022), while the latter predict posterior parameters based on the activations of different layers of a pre-trained feature extractor.

| Method | Parameterized distribution | Loss function | Model |
|--------|----------------------------|---------------|-------|
| Deep Evidential Regression (Amini et al., 2020) | Normal-Inverse Gamma prior | Negative log-likelihood loss + KL w.r.t. uniform prior | MLP / CNN |
| Deep Evidential Regression with Multi-task Learning (Oh & Shin, 2022) | Normal-Inverse Gamma prior | Like Amini et al. (2020), with additional Lipschitz-modified MSE loss | MLP / CNN |
| Multivariate Deep Evidential Regression (Meinert & Lavin, 2021) | Normal-Inverse Wishart prior | Like Amini et al. (2020), but tying two predicted params. instead of using a regularizer | MLP |
| Regression Prior Network (Malinin et al., 2020a) | Normal-Wishart prior | Reverse KL (Malinin & Gales, 2019) | MLP / CNN |
| Natural Posterior Network (Charpentier et al., 2022) | Inverse-χ2 posterior | Uncertainty cross-entropy (Biloš et al., 2019) + entropy regularizer | MLP / CNN + Norm. Flow |

Table 3: Overview of Evidential Deep Learning methods for regression.

## 4 Evidential Deep Learning For Regression

Because the EDL framework provides convenient uncertainty estimation, the question naturally arises of whether it can be extended to regression problems as well. The answer is affirmative, although the Dirichlet distribution is not an appropriate choice in this case. It is very common to model a regression problem using a normal likelihood (Bishop, 2006; Chapter 3.3). As such, there are multiple potential choices for a prior distribution. The methods listed in Table 3 either choose the Normal-Inverse Gamma distribution (Amini et al., 2020; Charpentier et al., 2022), inducing a scaled inverse-χ2 posterior (Gelman et al., 1995),13 or a Normal-Wishart prior (Malinin et al., 2020a). We will discuss these approaches in turn.

13The form of the Normal-Inverse Gamma posterior and the Normal Inverse-χ2 posterior are interchangeable using some parameter substitutions (Murphy, 2007).

Univariate regression Amini et al. (2020) model the regression problem as a normal distribution with unknown mean and variance $\mathcal{N}(y;\pi,\sigma^{2})$, and use a normal prior for the mean with $\pi\sim\mathcal{N}(\gamma,\sigma^{2}\nu^{-1})$ and an inverse Gamma prior for the variance with $\sigma^{2}\sim\Gamma^{-1}(\alpha,\beta)$, resulting in a combined Normal-Inverse Gamma prior with parameters $\gamma,\nu,\alpha,\beta$, shown in Figure 6. These are predicted by different "heads" of a neural network.

![15_image_0.png](15_image_0.png)

Figure 6: Example of an application of Evidential Deep Learning for regression, taken from Amini et al. (2020). The neural network predicts a Normal-Inverse Gamma prior, whose corresponding normal likelihoods display decreasing variance (and thus uncertainty) in the face of stronger evidence.

For predictions, the expectation of the mean corresponds to $\mathbb{E}[\pi]=\gamma$, and aleatoric and epistemic uncertainty can then be estimated using the expected value of the variance as well as the variance of the mean, respectively, which have closed-form solutions under this parameterization:

$$\mathbb{E}[\sigma^{2}]=\frac{\beta}{\alpha-1};\quad\mathrm{Var}[\pi]=\frac{\beta}{\nu(\alpha-1)}\tag{25}$$

By choosing to optimize using a negative log-likelihood objective, we can actually evaluate the loss function analytically, since the likelihood function corresponds to a Student's t-distribution with location $\gamma$, scale $\beta(1+\nu)/(\nu\alpha)$ and $2\alpha$ degrees of freedom:

$$\mathcal{L}_{\mathrm{NLL}}=\frac{1}{2}\log\left(\frac{\pi}{\nu}\right)-\alpha\log\Omega+\left(\alpha+\frac{1}{2}\right)\log\left((y_{i}-\gamma)^{2}\nu+\Omega\right)+\log\left(\frac{\Gamma(\alpha)}{\Gamma(\alpha+\frac{1}{2})}\right)\tag{26}$$

using $\Omega=2\beta(1+\nu)$. Akin to the entropy regularizer for Dirichlet networks, Amini et al. (2020) propose a regularization term that only allows for concentrating density on the correct prediction:

$$\mathcal{L}_{\mathrm{reg}}=|y_{i}-\gamma|\cdot(2\nu+\alpha)\tag{27}$$

Since $\mathbb{E}[\pi]=\gamma$ is the prediction of the network, the second term in the product will be scaled by the degree to which the current prediction deviates from the target value. Since $\nu$ and $\alpha$ control the variance of the mean and the variance of the normal likelihood, this term encourages the network to decrease the evidence for mispredicted data samples. As Amini et al. (2020) point out, large amounts of evidence are not punished in cases where the prediction is close to the target. However, Oh & Shin (2022) argue that this combination of objectives might create adverse incentives for the model during training: Since the difference between the prediction and target in Equation (26) is scaled by $\nu$, the model could learn to increase the predictive uncertainty by decreasing $\nu$ instead of improving its prediction. They propose to ameliorate this issue by using a third loss term of the form

$$\mathcal{L}_{\mathrm{MSE}}=\begin{cases}(y_{i}-\gamma)^{2}&\text{if }(y_{i}-\gamma)^{2}<U_{\nu,\alpha}\\ 2\sqrt{U_{\nu,\alpha}}\,|y_{i}-\gamma|-U_{\nu,\alpha}&\text{if }(y_{i}-\gamma)^{2}\geq U_{\nu,\alpha}\end{cases}\tag{28}$$

where $U_{\nu,\alpha}=\min(U_{\nu},U_{\alpha})$ denotes the minimum value of the uncertainty thresholds for $\nu,\alpha$ given over a mini-batch, which are themselves defined as

$$U_{\nu}=\frac{\beta(\nu+1)}{\alpha\nu};\quad U_{\alpha}=\frac{2\beta(\nu+1)}{\nu}\Big[\exp\Big(\psi\Big(\alpha+\frac{1}{2}\Big)-\psi(\alpha)\Big)-1\Big]\tag{29}$$

These expressions are obtained by taking the derivatives $\partial\mathcal{L}_{\mathrm{NLL}}/\partial\nu$, $\partial\mathcal{L}_{\mathrm{NLL}}/\partial\alpha$ and solving for the parameters, thus giving us the values for $\nu$ and $\alpha$ for which the loss gradients are maximal. In combination with Equation (28), Equation (29) ensures that, should the model error exceed $U_{\nu,\alpha}$, the error is rescaled. Thus, this rescaling bounds the Lipschitz constant of the loss function, motivating the model to ensure the correctness of its prediction, since its ability to increase uncertainty to decrease its loss is now limited.
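For concreteness, the following sketch combines the analytic NLL of Equation (26) with the evidence regularizer of Equation (27) in PyTorch. The trade-off coefficient lam mirrors the regularization weight of Amini et al. (2020), but its value here and the suggestion of softplus heads are our own illustrative choices, not prescriptions from the original work.

```python
import torch

def evidential_regression_loss(y, gamma, nu, alpha, beta, lam=0.01):
    # Student-t NLL of the Normal-Inverse Gamma marginal, Eq. (26),
    # with Omega = 2 * beta * (1 + nu).
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(torch.pi / nu)
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    # Evidence regularizer of Eq. (27): total evidence scaled by the error.
    reg = torch.abs(y - gamma) * (2.0 * nu + alpha)
    return (nll + lam * reg).mean()

# Valid parameter ranges (nu, beta > 0; alpha > 1) can be enforced with
# softplus output heads; the uncertainties then follow Eq. (25):
#   aleatoric = beta / (alpha - 1),  epistemic = beta / (nu * (alpha - 1)).
```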
Posterior networks for regression Another approach for regression is the Natural Posterior Network by Charpentier et al. (2022), which was already discussed for classification in Section 3.4.2. But since the proposed approach is a generalization for exponential family distributions, it can be applied to regression as well, using a Normal likelihood and Normal-Inverse Gamma prior. The Bayesian update rule in Equation (23) is adapted as follows: The evidence is set to $n=\lambda=2\alpha$, and $\chi=\big[\pi_{0},\ \pi_{0}^{2}+2\beta/n\big]^{T}$. Feeding an input into the natural posterior network again first produces a latent encoding $\mathbf{z}$, from which a NF predicts $n_{i}=N_{H}\cdot p(\mathbf{z}|\phi)$, and an additional network produces $\chi_{i}=g_{\psi}(\mathbf{z})$, which are used in Equation (23) to produce $\chi^{\mathrm{post}}$ and $n^{\mathrm{post}}$, from which the parameters of the posterior Normal-Inverse Gamma can be derived. The authors also produce a general exponential family form of the UCE loss by Biloš et al. (2019), consisting of an expected log-likelihood and an entropy regularizer, which they derive for the regression parameterization. Again, this approach relies on the density estimation capabilities of the NF to produce an agnostic belief about the right prediction for OOD examples (see Figure 5b).

Multivariate evidential regression There are also some works offering solutions for multivariate regression problems: The approach of Malinin et al. (2020a) can be seen as a multivariate generalization of the work of Amini et al. (2020), where a combined Normal-Wishart prior is formed to fit the now Multivariate Normal likelihood. Again, the prior parameters are the output of a neural network, and uncertainty can be quantified in a similar way. For training purposes, they apply two different training objectives using the equivalent of the reverse KL objective of Malinin & Gales (2019) as well as of the knowledge distillation objective of Malinin et al. (2020b), which does not require OOD data for regularization purposes. Meinert & Lavin (2021) also provide a solution using a Normal-Inverse Wishart prior. In a similar vein to Oh & Shin (2022), they argue that the original objective proposed by Amini et al. (2020) can be minimized by increasing the network's uncertainty instead of decreasing the mismatch of its prediction. As a solution, they simply propose to tie $\beta$ and $\nu$ via a hyperparameter.

## 5 Related Work

Other Approaches to Uncertainty Quantification The need for the quantification of uncertainty in order to earn the trust of end-users and stakeholders has been a key driver for research (Bhatt et al., 2021; Jacovi et al., 2021; Liao & Sundar, 2022). Existing methods can broadly be divided into frequentist and Bayesian methods, where the former judge the confidence of a model based on its predicted probabilities. Unfortunately, standard neural discriminator architectures have been proven to possess unwanted theoretical properties w.r.t. OOD inputs (Hein et al., 2019; Ulmer & Cinà, 2021) and might therefore be unable to detect potentially risky inputs.14 Further, a large line of research has questioned the calibration of models (Guo et al., 2017; Nixon et al., 2019; Desai & Durrett, 2020; Minderer et al., 2021; Wang et al., 2021b), i.e. to what extent the probability score of a class (also referred to as its confidence) corresponds to the chance of a correct prediction. Instead of relying on the confidence score alone, another way lies in constructing prediction sets consisting of the classes accumulating a certain share of the total predictive mass (Kompa et al., 2021; Ulmer et al., 2022).
By scoring a held-out population of data points to calibrate these prediction sets, we can also obtain frequentist guarantees in a procedure referred to as *conformal prediction* (Papadopoulos et al., 2002; Vovk et al., 2005; Lei & Wasserman, 2014; Angelopoulos & Bates, 2021). This however still does not let us distinguish different notions of uncertainty. A popular *Bayesian* way to overcome this blemish lies in aggregating multiple predictions by networks in the Bayesian model averaging framework (Mackay, 1992; MacKay, 1995; Hinton & Van Camp, 1993; Neal, 2012; Jeffreys, 1998; Wilson & Izmailov, 2020; Kristiadi et al., 2020; Daxberger et al., 2021; Gal & Ghahramani, 2016; Blundell et al., 2015; Lakshminarayanan et al., 2017). Nevertheless, many of these methods have been shown not to produce diverse predictions (Wilson & Izmailov, 2020; Fort et al., 2019) and to deliver subpar performance and potentially misleading uncertainty estimates under distributional shift (Ovadia et al., 2019; Masegosa, 2020; Wenzel et al., 2020; Izmailov et al., 2021a;b), raising doubts about their efficacy. The most robust method in this context is often given by an ensemble of neural predictors (Lakshminarayanan et al., 2017), with multiple works exploring ways to make their training more efficient (Huang et al., 2017; Wilson & Izmailov, 2020; Wen et al., 2020; Turkoglu et al., 2022) or to provide theoretical guarantees (Pearce et al., 2020; Ciosek et al., 2020; He et al., 2020; D'Angelo & Fortuin, 2021).

14Pearce et al. (2021) argue that some insights might partially be misled by low-dimensional intuitions, and that empirically OOD data in higher dimensions tend to be mapped into regions of higher uncertainty.

Related Approaches to EDL Kull et al. (2019) found an appealing use of the Dirichlet distribution as a post-training calibration map. Hobbhahn et al. (2022) use the Laplace bridge, a modified inverse based on an idea by MacKay (1998), to map from the model's logit space to a Dirichlet distribution. The proposed Posterior Network (Charpentier et al., 2020; 2022) can furthermore be seen as related to another, competing approach, namely the combination of neural discriminators with density estimation methods, for instance in the form of energy-based models (Grathwohl et al., 2020; Elflein et al., 2021) or other hybrid architectures (Lee et al., 2018; Mukhoti et al., 2021). Furthermore, there is a line of other single-pass uncertainty quantification approaches which do not originate from the evidential framework, for instance by taking inspiration from RBF networks (van Amersfoort et al., 2020b) or via Gaussian Process output layers (Liu et al., 2020; Fortuin et al., 2021; van Amersfoort et al., 2021).

Applications of EDL Some of the discussed models have already found a variety of applications, such as in autonomous driving (Capellier et al., 2019; Liu et al., 2021; Petek et al., 2022; Wang et al., 2021a), remote sensing (Gawlikowski et al., 2022), medical screening (Ghesu et al., 2019; Gu et al., 2021; Li et al., 2022), molecular analysis (Soleimany et al., 2021), open set recognition (Bao et al., 2021), active learning (Hemmer et al., 2022) and model selection (Radev et al., 2021).

## 6 Discussion

What is state-of-the-art? As apparent from Table 5, evaluation methods and datasets can vary tremendously between different research works (for an overview, refer to Appendix B). This can make it hard to accurately compare different approaches in a fair manner.
Nevertheless, we try to draw some conclusions about the state of the art in this research direction to the best extent possible: For **image classification**, the posterior (Charpentier et al., 2020) and natural posterior network (Charpentier et al., 2022) provide the best results on the tested benchmarks, both in terms of task performance and uncertainty quality. When training an extra normalizing flow creates too much computational overhead, prior networks (Malinin & Gales, 2018) with the PAC-based regularizer (Haussmann et al., 2019; see Table 6 for the final form) or a simple entropy regularizer (Appendix C.2) can be used. In the case of **regression** problems, the natural posterior network (Charpentier et al., 2022) performs better than or on par with the evidential regression by Amini et al. (2020), an ensemble (Lakshminarayanan et al., 2017) or MC Dropout (Gal & Ghahramani, 2016). For **graph neural networks**, the graph posterior network (Stadler et al., 2021) and an ensemble provide similar performance, but with the former displaying better uncertainty results. Again, this model requires training a NF, so a simpler fallback is provided by evidential regression (Amini et al., 2020) with the improvement by Oh & Shin (2022). For **NLP** and **count prediction**, the works of Shen et al. (2020) and Charpentier et al. (2022) are the only available instances from this model family, respectively. In the latter case, ensembles and the evidential regression framework (Amini et al., 2020) produce a lower root mean-squared error, but worse uncertainty estimates on OOD.

Computational Cost When it comes to the computational requirements, most of the proposed methods in this survey incur the same cost as a single deterministic network using a softmax output, since most of the architecture remains unchanged. Additional cost is mostly only produced when using knowledge distillation (Malinin et al., 2020b; Fathullah & Gales, 2022), adding normalizing flow components as for posterior networks (Charpentier et al., 2020; 2022; Stadler et al., 2021) or using generative models to produce synthetic OOD data (Chen et al., 2018; Sensoy et al., 2020; Hu et al., 2021).

Comparison to Other Approaches to Uncertainty Quantification As discussed in Section 5, several existing approaches to uncertainty quantification equally suffer from shortcomings with respect to their reliability. One possible explanation for this behavior might lie in the insight that neural networks trained in the empirical risk minimization framework tend to learn spurious but highly predictive features (Ilyas et al., 2019; Nagarajan et al., 2021). This way, inputs stemming from the training distribution can be mapped to similar parts of the latent space as data points outside the distribution even though they display (from a human perspective) blatant semantic differences, simply because these semantic features were not useful to optimize for the training objective. This can result in ID and OOD points being assigned similar feature representations by a network, a phenomenon that has been coined "feature collapse" (Nalisnick et al., 2019; van Amersfoort et al., 2021; Havtorn et al., 2021). One strategy to mitigate (but not solve) this issue has been to enforce a constraint on the smoothness of the neural network function (Wei et al., 2018; van Amersfoort et al., 2020a; 2021; Liu et al., 2020), thereby maintaining both a sensitivity to semantic changes in the input and robustness against adversarial inputs (Yu et al., 2019).
Another approach lies in the usage of OOD data as well, sometimes dubbed "outlier exposure" (Fort et al., 2021), but it displays the same shortcomings as in the EDL case. A generally promising strategy seems to lie in seeking functional diversity through ensembling: Juneja et al. (2022) show how model instances ending up in different low-loss modes correspond to distinct generalization strategies, indicating that combining diverse strategies may lead to better generalization and thus potentially more reliable uncertainty. Attaining different solutions still creates computational overhead, despite new methods to reduce it (Garipov et al., 2018; Dusenberry et al., 2020; Benton et al., 2021).

Bayesian model averaging One of the most fundamental differences between EDL and existing approaches is the sacrifice of Bayesian model averaging (Equations (2) and (5)): In principle, combining multiple parameter estimates is supposed to result in a lower predictive risk (Fragoso et al., 2018). The Machine Learning community has ascribed further desiderata to this approach, such as better generalization and robustness to distributional shifts. Recent studies with exact Bayesian Neural Networks have however cast doubt on these assumptions (Izmailov et al., 2021a;b). Nevertheless, ensembles, which approximate Equation (2) via Monte Carlo estimates, remain state-of-the-art on many uncertainty benchmarks. EDL abandons modelling epistemic uncertainty through the learnable parameters, and instead expresses it through the uncertainty in prior / posterior parameters. This loses functional diversity which could aid generalization, while sidestepping computational costs. Future research could therefore explore the combination of both paradigms, as proposed by Haussmann et al. (2019); Zhao et al. (2020); Charpentier et al. (2022).

Challenges Despite their advantages, the last chapters have pointed out key weaknesses of Dirichlet networks as well: In order to achieve the right behavior of the distribution and thus guarantee sensible uncertainty estimates (since ground truth estimates are not available), the surveyed literature proposes a variety of loss functions. Bengs et al. (2022) show formally that many of the loss functions used so far are not appropriate and violate basic asymptotic assumptions about epistemic uncertainty: With an increasing amount of data, epistemic uncertainty should vanish, but this is not guaranteed using the commonly used loss functions. Furthermore, some approaches (Malinin & Gales, 2018; 2019; Nandy et al., 2020; Malinin et al., 2020a) require out-of-distribution data points during training. This comes with two problems: Such data is often not available in the first place, or cannot guarantee robustness against *other* kinds of unseen OOD data, of which infinite types exist in a real-valued feature space.15 Indeed, Kopetzki et al. (2021) found OOD detection to deteriorate across a family of EDL models under adversarial perturbation and OOD data. Stadler et al. (2021) point out that much of the ability of posterior networks stems from the addition of a NF, which have been shown to also sometimes behave unreliably on OOD data (Nalisnick et al., 2019). Although the NFs in posterior networks operate on the latent and not the feature space, they are also restricted to operate on features that the underlying network has learned to recognize.

15The same applies to the artificial OOD data in Chen et al. (2018); Shen et al. (2020); Sensoy et al. (2020).
Recent work by Dietterich & Guyer (2022) has hinted at the fact that networks might identify OOD inputs by the absence of known features, and not by the presence of new ones, providing a case in which posterior networks are likely to fail. Such evidence on OOD data and adversarial examples has indeed been identified by a study by Kopetzki et al. (2021).

Future Research Directions Overall, the following directions for future research on EDL crystallize from our previous reflections: *(1) Explicit epistemic uncertainty estimation:* Since we often employ the point estimate in Equation (5) to avoid the posterior p(θ|D), explicit estimation of the epistemic uncertainty is not possible, and some summary statistic of the concentration parameters is used for classification problems instead (Section 3.3). Estimating model uncertainty through modelling the (approximate) posterior p(θ|D) in Bayesian model averaging is a popular technique (Houlsby et al., 2011; Gal et al., 2016; Smith & Gal, 2018; Ulmer et al., 2020), but comes with the disadvantage of additional computational overhead. However, Sharma et al. (2022) recently showed that a Bayesian treatment of all model parameters may not be necessary, potentially allowing for a compromise. *(2) Robustness to diverse OOD data:* The empirical evidence compiled by Kopetzki et al. (2021) indicates that EDL classification models are not completely able to robustly classify and detect OOD and adversarial inputs. These findings hold both for prior networks trained with OOD data, and for posterior networks using density estimators. We speculate that through the information bottleneck principle (Tishby & Zaslavsky, 2015), EDL models might not learn input features that are useful to indicate uncertainty in their prediction, or at best identify the absence of known features, but not the presence of new ones (Dietterich & Guyer, 2022). Finding a way to have models identify unusual features could thus help to mitigate this problem. *(3) Theoretical guarantees:* Even though some guarantees have been derived for EDL classifiers w.r.t. OOD data points (Charpentier et al., 2020; Stadler et al., 2021), Bengs et al. (2022) point out the flaws of current training regimes for epistemic uncertainty in the limit of infinite data. Furthermore, Hüllermeier & Waegeman (2021) argue that even uncertainty estimates are affected by uncertainty themselves, impacting their usefulness.

## 7 Conclusion

This survey has given an overview of contemporary approaches for uncertainty estimation using neural networks to parameterize conjugate priors or the corresponding posteriors instead of likelihoods, called Evidential Deep Learning. We highlighted their appealing theoretical properties, allowing for uncertainty estimation with minimal computational overhead, rendering them a viable alternative to existing strategies. We also emphasized practical problems: In order to nudge models towards the desired behavior in the face of unseen or out-of-distribution samples, the design of the model architecture and loss function has to be carefully considered. Based on a summary and discussion of experimental findings in Section 6, the entropy regularizer seems to be a sensible choice in prior networks when OOD data is not available. Combining discriminators with generative models like normalizing flows as in Charpentier et al. (2020; 2022), embedded in a sturdy Bayesian framework, also appears as an exciting direction for practical applications.
In summary, we believe that recent advances show promising results for Evidential Deep Learning, making it a viable option in uncertainty estimation to improve safety and trustworthiness in Machine Learning systems.

## Acknowledgements

We would like to thank Giovanni Cinà, Max Müller-Eberstein, Daniel Varab and Mike Zhang for reading early versions of this draft and providing tremendously useful feedback. Further, we would like to explicitly thank Mike Zhang for helping to improve Figure 1. We also would like to thank Alexander Amini for providing a long list of references that helped to further improve the coverage of this work and the anonymous reviewers for their suggestions. Lastly, we owe our gratitude to the anonymous reviewers that helped us so much to improve the different versions of this paper.

## References

Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep Evidential Regression. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.

Christophe Andrieu, Nando de Freitas, and Arnaud Doucet. Reversible Jump MCMC Simulated Annealing for Neural Networks. In Craig Boutilier and Moisés Goldszmidt (eds.), *UAI '00: Proceedings of the 16th Conference in Uncertainty in Artificial Intelligence, Stanford University, Stanford, California, USA, June 30 - July 3, 2000*, pp. 11–18. Morgan Kaufmann, 2000.

Anastasios N Angelopoulos and Stephen Bates. A Gentle Introduction to Conformal Prediction and Distribution-free Uncertainty Quantification. *arXiv preprint arXiv:2107.07511*, 2021.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein Generative Adversarial Networks. In *International conference on machine learning*, pp. 214–223. PMLR, 2017.

Audun Jøsang. *Subjective Logic: A Formalism for Reasoning under Uncertainty*. Springer, 2018.

Wentao Bao, Qi Yu, and Yu Kong. Evidential Deep Learning for Open Set Action Recognition. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 13349–13358, 2021.

Alexei Bastidas. Tiny Imagenet Image Classification, 2017.

Viktor Bengs, Eyke Hüllermeier, and Willem Waegeman. On the Difficulty of Epistemic Uncertainty Quantification in Machine Learning: The Case of Direct Uncertainty Estimation through Loss Minimisation. *arXiv preprint arXiv:2203.06102*, 2022.

Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, and Andrew Gordon Wilson. Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling. In *Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event*, volume 139 of *Proceedings of Machine Learning Research*, pp. 769–779. PMLR, 2021.

Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, and Alice Xiang. Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. In *AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021*, pp. 401–413. ACM, 2021.

Marin Biloš, Bertrand Charpentier, and Stephan Günnemann. Uncertainty on Asynchronous Time Event Prediction. In *Advances in Neural Information Processing Systems*, pp. 12851–12860, 2019.

Christopher M Bishop. Pattern Recognition. *Machine learning*, 128(9), 2006.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra.
Weight Uncertainty in Neural Networks. *arXiv preprint arXiv:1505.05424*, 2015.

Yaroslav Bulatov. NotMNIST Dataset. Google (Books/OCR), Tech. Rep. [Online]. Available: http://yaroslavvb.blogspot.it/2011/09/notmnist-dataset.html, 2, 2011.

Edouard Capellier, Franck Davoine, Véronique Cherfaoui, and You Li. Evidential Deep Learning for Arbitrary LIDAR Object Classification in the Context of Autonomous Driving. In *2019 IEEE Intelligent Vehicles Symposium, IV 2019, Paris, France, June 9-12, 2019*, pp. 1304–1311. IEEE, 2019.

Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts. *Advances in Neural Information Processing Systems*, 33:1356–1367, 2020.

Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, and Stephan Günnemann. Natural Posterior Network: Deep Bayesian Predictive Uncertainty for Exponential Family Distributions. In *The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022*. OpenReview.net, 2022.

Wenhu Chen, Yilin Shen, Hongxia Jin, and William Wang. A Variational Dirichlet Framework for Out-Of-Distribution Detection. *arXiv preprint arXiv:1811.07308*, 2018.

Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, and Richard Turner. Conservative Uncertainty Estimation by Fitting Prior Networks. In *International Conference on Learning Representations*, 2020.

Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep Learning for Classical Japanese Literature. *arXiv preprint arXiv:1812.01718*, 2018.

Andrea Coraddu, Luca Oneto, Alessandro Ghio, Stefano Savio, Davide Anguita, and Massimo Figari. Machine Learning Approaches for Improving Condition-Based Maintenance of Naval Propulsion Plants. *Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment*, 230(1):136–153, 2016.

Peter I Corke. A Robotics Toolbox for MATLAB. *IEEE Robotics & Automation Magazine*, 3(1):24–32, 1996.

Paulo Cortez, António Cerdeira, Fernando Almeida, Telmo Matos, and José Reis. Modeling Wine Preferences by Data Mining from Physicochemical Properties. *Decision support systems*, 47(4):547–553, 2009.

Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. Snips Voice Platform: An Embedded Spoken Language Understanding System for Private-by-Design Voice Interfaces. *arXiv preprint arXiv:1805.10190*, 2018.

Francesco D'Angelo and Vincent Fortuin. Repulsive Deep Ensembles are Bayesian. *Advances in Neural Information Processing Systems*, 34:3451–3465, 2021.

Mindy I Davis, Jeremy P Hunt, Sanna Herrgard, Pietro Ciceri, Lisa M Wodicka, Gabriel Pallares, Michael Hocker, Daniel K Treiber, and Patrick P Zarrinkar. Comprehensive Analysis of Kinase Inhibitor Selectivity. *Nature biotechnology*, 29(11):1046–1051, 2011.

Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace Redux - Effortless Bayesian Deep Learning. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual*, pp. 20089–20103, 2021.

João Ferdinando Gomes de Freitas. *Bayesian Methods for Neural Networks*.
PhD thesis, University of Cambridge, 2003. Arthur P Dempster. A Generalization of Bayesian Inference. *Journal of the Royal Statistical Society: Series* B (Methodological), 30(2):205–232, 1968. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A Large-Scale Hierarchical Image Database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Armen Der Kiureghian and Ove Ditlevsen. Aleatory or Epistemic? Does it matter? *Structural safety*, 31 (2):105–112, 2009. Shrey Desai and Greg Durrett. Calibration of Pre-trained Transformers. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pp. 295–302. Association for Computational Linguistics, 2020. Thomas G. Dietterich and Alexander Guyer. The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods. *Pattern Recognit.*, 132:108931, 2022. Dheeru Dua, Casey Graff, et al. UCI Machine Learning Repository. 2017. Haonan Duan. Method of Moments in Approximate Bayesian Inference: From Theory to Practice. Master's thesis, University of Waterloo, 2021. Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine A. Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020,* Virtual Event, volume 119 of *Proceedings of Machine Learning Research*, pp. 2782–2792. PMLR, 2020. Sven Elflein, Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. On Out-of-distribution Detection with Energy-based Models. *arXiv preprint arXiv:2107.08785*, 2021. Hadi Fanaee-T and Joao Gama. Event Labeling Combining Ensemble Detectors and Background Knowledge. Progress in Artificial Intelligence, 2(2):113–127, 2014. Yassir Fathullah and Mark J. F. Gales. Self-distribution distillation: efficient uncertainty estimation. In James Cussens and Kun Zhang (eds.), Uncertainty in Artificial Intelligence, Proceedings of the ThirtyEighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands, volume 180 of *Proceedings of Machine Learning Research*, pp. 663–673. PMLR, 2022. Ronald A Fisher. The Use of Multiple Measurements in Taxonomic Problems. *Annals of eugenics*, 7(2): 179–188, 1936. Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep Ensembles: A Loss Landscape Perspective. arXiv preprint arXiv:1912.02757, 2019. Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the Limits of Out-of-Distribution Detection. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 7068–7081, 2021. Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, and Effrosyni Kokiopoulou. Deep Classifiers with Label Noise Modeling and Distance Awareness. *arXiv preprint arXiv:2110.02609*, 2021. Tiago M Fragoso, Wesley Bertoli, and Francisco Louzada. Bayesian Model Averaging: A Systematic Review and Conceptual Classification. *International Statistical Review*, 86(1):1–28, 2018. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. 
In *International conference on Machine Learning*, pp. 1050–1059, 2016. Yarin Gal et al. Uncertainty in Deep Learning. 2016. Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P. Vetrov, and Andrew Gordon Wilson. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs. In *Advances in Neural Information Processing* Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 8803–8812, 2018. Jakob Gawlikowski, Sudipan Saha, Anna M. Kruspe, and Xiao Xiang Zhu. An Advanced Dirichlet Prior Network for Out-of-Distribution Detection in Remote Sensing. *IEEE Trans. Geosci. Remote. Sens.*, 60: 1–19, 2022. Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. *Bayesian Data Analysis*. Chapman and Hall/CRC, 1995. J Gerritsma, R Onnink, and A Versluis. Geometry, Resistance and Stability of the Delft Systematic Yacht Hull Series. *International shipbuilding progress*, 28(328):276–297, 1981. Florin C. Ghesu, Bogdan Georgescu, Eli Gibson, Sebastian Gündel, Mannudeep K. Kalra, Ramandeep Singh, Subba R. Digumarthy, Sasa Grbic, and Dorin Comaniciu. Quantifying and Leveraging Classification Uncertainty for Chest Radiograph Assessment. In *Medical Image Computing and Computer Assisted* Intervention - MICCAI 2019 - 22nd International Conference, Shenzhen, China, October 13-17, 2019, Proceedings, Part VI, volume 11769 of *Lecture Notes in Computer Science*, pp. 676–684. Springer, 2019. C Lee Giles, Kurt D Bollacker, and Steve Lawrence. CiteSeer: An Automatic Citation Indexing System. In Proceedings of the third ACM conference on Digital libraries, pp. 89–98, 1998. Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay D. Shet. Multi-Digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your Classifier is Secretly an Energy-Based Model and You Should Treat It Like One. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. Ang Nan Gu, Christina Luong, Mohammad H. Jafari, Nathan Van Woudenberg, Hany Girgis, Purang Abolmaesumi, and Teresa Tsang. Efficient Echocardiogram View Classification with Sampling-Free Uncertainty Estimation. In *Simplifying Medical Ultrasound - Second International Workshop, ASMUS 2021, Held in* Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings, volume 12967 of Lecture Notes in Computer Science, pp. 139–148. Springer, 2021. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of *Proceedings of Machine Learning Research*, pp. 1321–1330. PMLR, 2017. David Harrison Jr and Daniel L Rubinfeld. Hedonic Housing Prices and the Demand for Clean Air. Journal of environmental economics and management, 5(1):81–102, 1978. Manuel Haussmann, Sebastian Gerwinn, and Melih Kandemir. Bayesian Evidential Deep Learning with PAC Regularization. *arXiv preprint arXiv:1906.00816*, 2019. Jakob Drachmann Havtorn, Jes Frellsen, Søren Hauberg, and Lars Maaløe. Hierarchical VAEs Know What They Don't Know. 
In *Proceedings of the 38th International Conference on Machine Learning, ICML 2021,* 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pp. 4117–4128. PMLR, 2021. Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian Deep Ensembles via the Neural Tangent Kernel. *Advances in neural information processing systems*, 33:1010–1022, 2020. Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem. In *IEEE Conference on* Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 41–50. Computer Vision Foundation / IEEE, 2019. Patrick Hemmer, Niklas Kühl, and Jakob Schöffer. Deal: Deep Evidential Active Learning for Image Classification. In *Deep Learning Applications, Volume 3*, pp. 171–192. Springer, 2022. Charles T Hemphill, John J Godfrey, and George R Doddington. The ATIS Spoken Language Systems Pilot Corpus. In *Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania,* June 24-27, 1990, 1990. Dan Hendrycks and Kevin Gimpel. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. José Miguel Hernández-Lobato and Ryan Adams. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks. In *International conference on machine learning*, pp. 1861–1869. PMLR, 2015. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the Knowledge in a Neural Network. *arXiv* preprint arXiv:1503.02531, 2(7), 2015. Geoffrey E Hinton and Drew Van Camp. Keeping the Neural Networks Simple by Minimizing the Description Length of the Weights. In *Proceedings of the sixth annual conference on Computational learning theory*, pp. 5–13, 1993. Marius Hobbhahn, Agustinus Kristiadi, and Philipp Hennig. Fast predictive uncertainty for classification with bayesian deep networks. In *Uncertainty in Artificial Intelligence*, pp. 822–832. PMLR, 2022. Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian Active Learning for Classification and Preference Learning. *arXiv preprint arXiv:1112.5745*, 2011. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. *Advances in neural* information processing systems, 33:22118–22133, 2020. Yibo Hu, Yuzhe Ou, Xujiang Zhao, Jin-Hee Cho, and Feng Chen. Multidimensional Uncertainty-Aware Evidential Neural Networks. In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021,* Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pp. 7815–7822. AAAI Press, 2021. Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger. Snapshot Ensembles: Train 1, Get M for Free. In *5th International Conference on Learning Representations, ICLR* 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang. The Apolloscape Dataset for Autonomous Driving. 
In *Proceedings of the IEEE conference* on computer vision and pattern recognition workshops, pp. 954–960, 2018. Eyke Hüllermeier. Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures? *arXiv preprint arXiv:2209.03302*, 2022. Eyke Hüllermeier and Willem Waegeman. Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods. *Mach. Learn.*, 110(3):457–506, 2021. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial Examples Are Not Bugs, They Are Features. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 125–136, 2019. Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, and Andrew G Wilson. Dangers of Bayesian Model Averaging under Covariate Shift. *Advances in Neural Information Processing Systems*, 34:3309–3322, 2021a. Pavel Izmailov, Sharad Vikram, Matthew D. Hoffman, and Andrew Gordon Wilson. What Are Bayesian Neural Network Posteriors Really Like? In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 4629–4640. PMLR, 2021b. Alon Jacovi, Ana Marasović, Tim Miller, and Yoav Goldberg. Formalizing trust in artificial intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 624–635, 2021. Harold Jeffreys. *The Theory of Probability*. OUP Oxford, 1998. Robin Jia, Larry Heck, Dilek Hakkani-Tür, and Georgi Nikolov. Learning Concepts through Conversations in Spoken Dialogue Systems. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5725–5729. IEEE, 2017. Taejong Joo, Uijung Chung, and Min-Gwan Seo. Being bayesian about categorical probability. In *International conference on machine learning*, pp. 4950–4961. PMLR, 2020. Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An Introduction to Variational Methods for Graphical Models. *Machine learning*, 37(2):183–233, 1999. Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, and Naomi Saphra. Linear Connectivity Reveals Generalization Strategies. *arXiv preprint arXiv:2205.12411*, 2022. Alex Kendall and Yarin Gal. What Uncertainties do We Need in Bayesian Deep Learning for Computer Vision? *Advances in neural information processing systems*, 30, 2017. Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. PubChem 2019 Update: Improved Access to Chemical Data. Nucleic acids research, 47(D1):D1102–D1109, 2019. Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In *2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track* Proceedings, 2014. Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. 
In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019.
Benjamin Kompa, Jasper Snoek, and Andrew L. Beam. Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures. *Entropy*, 23(12):1608, 2021.
Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, and Stephan Günnemann. Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable? In *Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event*, volume 139 of *Proceedings of Machine Learning Research*, pp. 5707–5718. PMLR, 2021.
Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 5436–5446. PMLR, 2020.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning Multiple Layers of Features from Tiny Images. 2009.
Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond Temperature Scaling: Obtaining Well-Calibrated Multi-Class Probabilities with Dirichlet Calibration. Advances in neural information processing systems, 32, 2019.
Morton Kupperman. Probabilities of Hypotheses and Information-Statistics in Sampling from Exponential-Class Populations. *Selected Mathematical Papers*, 29(2):57, 1964.
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial Examples in the Physical World. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings, 2017.
Salem Lahlou, Moksh Jain, Hadi Nekoei, Victor I Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, and Yoshua Bengio. DEUP: Direct Epistemic Uncertainty Prediction. *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-Level Concept Learning Through Probabilistic Program Induction. *Science*, 350(6266):1332–1338, 2015.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In *Advances in neural information processing systems*, pp. 6402–6413, 2017.
Yann LeCun. The MNIST Database of Handwritten Digits, 1998. URL http://yann.lecun.com/exdb/mnist/.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-Based Learning Applied to Document Recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.
Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada*, pp. 7167–7177, 2018.
Jing Lei and Larry Wasserman. Distribution-free Prediction Bands for Non-parametric Regression. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 76(1):71–96, 2014.
Hao Li, Yang Nan, Javier Del Ser, and Guang Yang. Region-Based Evidential Deep Learning to Quantify Uncertainty and Improve Robustness of Brain Tumor Segmentation. *arXiv preprint arXiv:2208.06038*, 2022.
Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks.
In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
Q. Vera Liao and S. Shyam Sundar. Designing for Responsible Trust in AI Systems: A Communication Perspective. In *FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022*, pp. 1257–1268. ACM, 2022.
Jiayu Lin. On the Dirichlet Distribution. *Master's Report*, 2016.
Jeremiah Z. Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax-Weiss, and Balaji Lakshminarayanan. Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
Tiqing Liu, Yuhmei Lin, Xin Wen, Robert N Jorissen, and Michael K Gilson. BindingDB: A Web-Accessible Database of Experimentally Determined Protein–Ligand Binding Affinities. *Nucleic acids research*, 35(suppl_1):D198–D201, 2007.
Zhijian Liu, Alexander Amini, Sibo Zhu, Sertac Karaman, Song Han, and Daniela L. Rus. Efficient and Robust LiDAR-Based End-to-End Navigation. In *IEEE International Conference on Robotics and Automation, ICRA 2021, Xi'an, China, May 30 - June 5, 2021*, pp. 13247–13254. IEEE, 2021.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep Learning Face Attributes in the Wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738, 2015.
David JC MacKay. Developments in Probabilistic Modelling with Neural Networks—Ensemble Learning. In Neural Networks: Artificial Intelligence and Industrial Applications, pp. 191–198. Springer, 1995.
David JC MacKay. Choice of basis for Laplace approximation. *Machine learning*, 33:77–86, 1998.
David John Cameron Mackay. *Bayesian Methods for Adaptive Models*. California Institute of Technology, 1992.
Andrey Malinin and Mark J. F. Gales. Predictive Uncertainty Estimation via Prior Networks. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada*, pp. 7047–7058, 2018.
Andrey Malinin and Mark J. F. Gales. Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada*, pp. 14520–14531, 2019.
Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, and Mark Gales. Regression Prior Networks. arXiv preprint arXiv:2006.11590, 2020a.
Andrey Malinin, Bruno Mlodozeniec, and Mark J. F. Gales. Ensemble Distribution Distillation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020, 2020b.
Lei Mao. Introduction to Exponential Family, 2019. URL https://zhiyzuo.github.io/ExponentialFamily-Distributions/. Accessed April 2022.
Andrés R. Masegosa. Learning under Model Misspecification: Applications to Variational and Ensemble methods. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-Based Recommendations on Styles and Substitutes.
In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pp. 43–52, 2015.
Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the Construction of Internet Portals with Machine Learning. *Information Retrieval*, 3(2):127–163, 2000.
Nis Meinert and Alexander Lavin. Multivariate Deep Evidential Regression. arXiv preprint arXiv:2104.06135, 2021.
Moritz Menze and Andreas Geiger. Object Scene Flow for Autonomous Vehicles. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3061–3070, 2015.
Jeffrey W. Miller. (ML 7.7.A2) Expectation of a Dirichlet Random Variable, 2011. URL https://www.youtube.com/watch?v=emnfq4txDuI.
Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the Calibration of Modern Neural Networks. *Advances in Neural Information Processing Systems*, 34:15682–15694, 2021.
Jose G Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V Chawla, and Francisco Herrera. A Unifying View on Dataset Shift in Classification. *Pattern recognition*, 45(1):521–530, 2012.
Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deterministic Neural Networks With Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty. *arXiv preprint arXiv:2102.11582*, 2021.
Kevin P Murphy. Conjugate Bayesian Analysis of the Gaussian Distribution. 2007.
Vaishnavh Nagarajan, Anders Andreassen, and Behnam Neyshabur. Understanding the Failure Modes of Out-Of-Distribution Generalization. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*, 2021.
Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, and Balaji Lakshminarayanan. Do Deep Generative Models Know What They Don't Know? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019.
Galileo Namata, Ben London, Lise Getoor, Bert Huang, and UMD EDU. Query-Driven Active Surveying for Collective Classification. In *10th International Workshop on Mining and Learning with Graphs*, volume 8, pp. 1, 2012.
Jay Nandy, Wynne Hsu, and Mong Li Lee. Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples. *Advances in Neural Information Processing Systems*, 33, 2020.
Radford M Neal. *Bayesian Learning for Neural Networks*, volume 118. Springer Science & Business Media, 2012.
Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring Calibration in Deep Learning. In *CVPR Workshops*, volume 2, 2019.
Dongpin Oh and Bonggun Shin. Improving Evidential Deep Learning via Multi-Task Learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 7895–7903, 2022.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty under Dataset Shift. In *Advances in Neural Information Processing Systems*, pp. 13991–14002, 2019.
Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive Confidence Machines for Regression. In *European Conference on Machine Learning*, pp. 345–356. Springer, 2002.
Fabian Paschke, Christian Bayer, Martyna Bator, Uwe Mönks, Alexander Dicks, Olaf Enge-Rosenblatt, and Volker Lohweg.
Sensorlose Zustandsüberwachung an Synchronmotoren. In *Proc*, pp. 211, 2013. Tim Pearce, Felix Leibfried, and Alexandra Brintrup. Uncertainty in Neural Networks: Approximately Bayesian Ensembling. In *International conference on artificial intelligence and statistics*, pp. 234–244. PMLR, 2020. Tim Pearce, Alexandra Brintrup, and Jun Zhu. Understanding Softmax Confidence and Uncertainty. *arXiv* preprint arXiv:2106.04972, 2021. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Kürsat Petek, Kshitij Sirohi, Daniel Büscher, and Wolfram Burgard. Robust Monocular Localization in Sparse HD Maps Leveraging Multi-Task Uncertainty Estimation. In *2022 International Conference on* Robotics and Automation, ICRA 2022, Philadelphia, PA, USA, May 23-27, 2022, pp. 4163–4169. IEEE, 2022. Stefan T Radev, Marco D'Alessandro, Ulf K Mertens, Andreas Voss, Ullrich Köthe, and Paul-Christian Bürkner. Amortized Bayesian Model Comparison with Evidential Deep Learning. IEEE Transactions on Neural Networks and Learning Systems, 2021. Danilo Jimenez Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In *Proceedings* of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of *JMLR Workshop and Conference Proceedings*, pp. 1530–1538. JMLR.org, 2015. Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential Deep Learning to Quantify Classification Uncertainty. In *Advances in Neural Information Processing Systems*, pp. 3179–3189, 2018. Murat Sensoy, Lance M. Kaplan, Federico Cerutti, and Maryam Saleki. Uncertainty-Aware Deep Classifiers Using Generative Models. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 5620–5627. AAAI Press, 2020. Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, and Tom Rainforth. Do Bayesian Neural Networks Need To Be Fully Stochastic? *arXiv preprint arXiv:2211.06291*, 2022. Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of Graph Neural Network Evaluation. *arXiv preprint arXiv:1811.05868*, 2018. Maohao Shen, Yuheng Bu, Prasanna Sattigeri, Soumya Ghosh, Subhro Das, and Gregory Wornell. Post-hoc Uncertainty Learning using a Dirichlet Meta-Model. *arXiv preprint arXiv:2212.07359*, 2022. Yilin Shen, Wenhu Chen, and Hongxia Jin. Modeling Token-level Uncertainty to Learn Unknown Concepts in SLU via Calibrated Dirichlet Prior RNN. *CoRR*, abs/2010.08101, 2020. Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor Segmentation and Support Inference from RGBD Images. In *European conference on computer vision*, pp. 746–760. Springer, 2012. Lewis Smith and Yarin Gal. Understanding Measures of Uncertainty for Adversarial Example Detection. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10, 2018, pp. 560–569, 2018. Ava P Soleimany, Alexander Amini, Samuel Goldman, Daniela Rus, Sangeeta N Bhatia, and Connor W Coley. 
Evidential Deep Learning for Guided Molecular Property Prediction and Discovery. ACS central science, 7(8):1356–1367, 2021.
Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, and Stephan Günnemann. Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification. *Advances in Neural Information Processing Systems*, 34, 2021.
Jing Tang, Agnieszka Szwajda, Sushil Shakyawar, Tao Xu, Petteri Hintsanen, Krister Wennerberg, and Tero Aittokallio. Making Sense of Large-Scale Kinase Inhibitor Bioactivity Data Sets: A Comparative and Integrative Analysis. *Journal of Chemical Information and Modeling*, 54(3):735–743, 2014.
Naftali Tishby and Noga Zaslavsky. Deep Learning and the Information Bottleneck Principle. In *2015 ieee information theory workshop (itw)*, pp. 1–5. IEEE, 2015.
Athanasios Tsanas and Angeliki Xifara. Accurate Quantitative Estimation of Energy Performance of Residential Buildings using Statistical Machine Learning Tools. *Energy and buildings*, 49:560–567, 2012.
Theodoros Tsiligkaridis. Information Robust Dirichlet Networks for Predictive Uncertainty Estimation. arXiv preprint arXiv:1910.04819, 2019.
Mehmet Ozgur Turkoglu, Alexander Becker, Hüseyin Anil Gündüz, Mina Rezaei, Bernd Bischl, Rodrigo Caye Daudt, Stefano D'Aronco, Jan Dirk Wegner, and Konrad Schindler. FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation. *arXiv preprint arXiv:2206.00050*, 2022.
Dennis Ulmer and Giovanni Cinà. Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection. In *Uncertainty in Artificial Intelligence*, pp. 1766–1776. PMLR, 2021.
Dennis Ulmer, Lotta Meijerink, and Giovanni Cinà. Trust Issues: Uncertainty Estimation Does not Enable Reliable OOD Detection on Medical Tabular Data. In *Machine Learning for Health*, pp. 341–354. PMLR, 2020.
Dennis Ulmer, Jes Frellsen, and Christian Hardmeier. "Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity". In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pp. 2707–2735, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
Joost van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty Estimation Using a Single Deep Deterministic Neural Network. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 9690–9700. PMLR, 2020a.
Joost van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty Estimation Using a Single Deep Deterministic Neural Network. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 9690–9700. PMLR, 2020b.
Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, and Yarin Gal. On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty. *arXiv preprint arXiv:2102.11409*, 2021.
Tim van Erven and Peter Harremoës. Rényi Divergence and Kullback-Leibler Divergence. *IEEE Trans. Inf. Theory*, 60(7):3797–3820, 2014.
Jordy Van Landeghem, Matthew Blaschko, Bertrand Anckaert, and Marie-Francine Moens. Benchmarking Scalable Predictive Uncertainty in Text Classification. *Ieee Access*, 10:43703–43737, 2022.
Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. *Algorithmic Learning in a Random World*. Springer Science & Business Media, 2005.
Chen Wang, Xiang Wang, Jiawei Zhang, Liang Zhang, Xiao Bai, Xin Ning, Jun Zhou, and Edwin Hancock. Uncertainty Estimation for Stereo Matching Based on Evidential Deep Learning. *Pattern Recognition*, pp. 108498, 2021a.
Deng-Bao Wang, Lei Feng, and Min-Ling Zhang. Rethinking Calibration of Deep Neural Networks: Do not be Afraid of Overconfidence. *Advances in Neural Information Processing Systems*, 34:11809–11820, 2021b.
Xiang Wei, Boqing Gong, Zixia Liu, Wei Lu, and Liqiang Wang. Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
Yeming Wen, Dustin Tran, and Jimmy Ba. BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning. In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020*, 2020.
Florian Wenzel, Kevin Roth, Bastiaan S Veeling, Jakub Światkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How Good is the Bayes Posterior in Deep Neural Networks Really? *arXiv preprint arXiv:2002.02405*, 2020.
Wikimedia Commons. Iris setosa, 2022a. URL https://en.wikipedia.org/wiki/Iris_setosa. File:Irissetosa1.jpg.
Wikimedia Commons. Iris versicolor, 2022b. URL https://en.wikipedia.org/wiki/Iris_versicolor. File:Blue_Flag,_Ottawa.jpg.
Wikimedia Commons. Iris virginica, 2022c. URL https://en.wikipedia.org/wiki/Iris_virginica#/media/File:Iris_virginica_2.jpg. File:Iris_virginica_2.jpg.
Andrew Gordon Wilson and Pavel Izmailov. Bayesian Deep Learning and a Probabilistic Perspective of Generalization. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.
John Michael Winn. Variational Message Passing and Its Applications. 2004.
Jae Oh Woo. Analytic Mutual Information in Bayesian Neural Networks. In IEEE International Symposium on Information Theory, ISIT 2022, Espoo, Finland, June 26 - July 1, 2022, pp. 300–305. IEEE, 2022.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. *arXiv preprint arXiv:1708.07747*, 2017.
Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. SUN Database: Large-scale Scene Recognition from Abbey to Zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485–3492. IEEE, 2010.
Ronald R Yager and Liping Liu. *Classic Works of the Dempster-Shafer Theory of Belief Functions*, volume 219. Springer, 2008.
I-C Yeh. Modeling of Strength of High-Performance Concrete using Artificial Neural Networks. Cement and Concrete research, 28(12):1797–1808, 1998.
Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a Large-Scale Image Dataset using Deep Learning with Humans in the Loop. *arXiv preprint arXiv:1506.03365*, 2015.
Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, and Xiang Chen. Interpreting and Evaluating Neural Network Robustness. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pp. 4199–4205. ijcai.org, 2019.
Chrysoula Zerva, Taisiya Glushkova, Ricardo Rei, and André F. T. Martins. "Disentangling Uncertainty in Machine Translation Evaluation".
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 8622–8641, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
Xujiang Zhao, Yuzhe Ou, Lance Kaplan, Feng Chen, and Jin-Hee Cho. Quantifying Classification Uncertainty Using Regularized Evidential Neural Networks. *arXiv preprint arXiv:1910.06864*, 2019.
Xujiang Zhao, Feng Chen, Shu Hu, and Jin-Hee Cho. Uncertainty Aware Semi-Supervised Learning on Graph Data. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

## A Code Appendix

## A.1 Iris Example Training Details

The code used to produce Figure 2 is available online.16 All models use three layers with 100 hidden units and ReLU activations each. We furthermore optimized all models with a learning rate of 0.001 using the Adam optimizer (Kingma & Ba, 2015) with its default parameter settings. We also regularize the ensemble and MC Dropout model with a dropout probability of 0.1 each.

Prior Network specifics We choose the expected l2 loss by Sensoy et al. (2018) and regularize the network using the KL divergence w.r.t. a uniform Dirichlet, as in Sensoy et al. (2018). In the regularization term, we do not use the original concentration parameters α, but a version in which the concentration parameter αk corresponding to the correct class is removed using a one-hot label encoding y, i.e. $\tilde{\alpha} = (1 - \mathbf{y}) \odot \alpha + \mathbf{y}$, where ⊙ denotes point-wise multiplication. The regularization term is added to the loss using a weighting factor of 0.05.
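This regularized objective fits in a few lines of code. The following is a minimal sketch of our own (not the original training script); the batch shapes, function names and the mapping from network outputs to concentration parameters are illustrative assumptions:

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_to_uniform(alpha):
    """KL[Dir(alpha) || Dir(1)], dropping the constant log Gamma(K) term."""
    a0 = alpha.sum(-1, keepdims=True)
    log_b = gammaln(alpha).sum(-1) - gammaln(a0.squeeze(-1))
    return -log_b + ((alpha - 1.0) * (digamma(alpha) - digamma(a0))).sum(-1)

def prior_network_loss(alpha, y_onehot, reg_weight=0.05):
    """Expected l2 loss (Sensoy et al., 2018) plus the masked KL regularizer.

    alpha:    (batch, K) concentration parameters
    y_onehot: (batch, K) one-hot labels
    """
    a0 = alpha.sum(-1, keepdims=True)
    mean = alpha / a0
    var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1.0))
    l2 = ((y_onehot - mean) ** 2 + var).sum(-1)
    # remove the correct class' concentration before regularizing
    alpha_tilde = (1.0 - y_onehot) * alpha + y_onehot
    return (l2 + reg_weight * kl_to_uniform(alpha_tilde)).mean()
```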
## A.2 Code Availability

We list the available code repositories for surveyed works in Table 4. Works for which no official implementation could be found are not listed.

## B Datasets & Evaluation Techniques Appendix

This section contains a discussion of the used datasets and of methods to evaluate the quality of uncertainty estimates, as well as a direct comparison of available models based on the reported results, to determine the most useful choices for practitioners. An overview over the differences between the surveyed works is given in Table 5.

Datasets Most models are applied to image classification problems, where popular choices involve the MNIST dataset (LeCun, 1998), using as OOD datasets Fashion-MNIST (Xiao et al., 2017), notMNIST (Bulatov, 2011) containing English letters, K-MNIST (Clanuwat et al., 2018) with ancient Japanese Kuzushiji characters, and the Omniglot dataset (Lake et al., 2015), featuring handwritten characters from more than 50 alphabets. Other choices involve different versions of the CIFAR-10 object recognition dataset (LeCun et al., 1998; Krizhevsky et al., 2009) for training purposes and SVHN (Goodfellow et al., 2014), iSUN (Xiao et al., 2010), LSUN (Yu et al., 2015), CelebA (Liu et al., 2015), ImageNet (Deng et al., 2009) and TinyImagenet (Bastidas, 2017) for OOD samples. Regression image datasets include for instance the NYU Depth Estimation v2 dataset (Silberman et al., 2012), using ApolloScape (Huang et al., 2018) or KITTI (Menze & Geiger, 2015) as an OOD dataset.

Many authors also illustrate model uncertainty on synthetic data, for instance by simulating clusters of data points using Gaussians (Malinin & Gales, 2018; 2019; Nandy et al., 2020; Zhao et al., 2019; Hu et al., 2020; Charpentier et al., 2020; 2022), spiral data (Malinin et al., 2020b) or polynomials for regression (Amini et al., 2020; Oh & Shin, 2022; Meinert & Lavin, 2021; Malinin et al., 2020a; Charpentier et al., 2022). Tabular datasets include the Segment dataset, predicting image segments based on pixel features (Dua et al., 2017), and the Sensorless Drive dataset (Dua et al., 2017; Paschke et al., 2013), describing the maintenance state of electric current drives, as well as popular regression datasets included in the UCI regression benchmark used by Hernández-Lobato & Adams (2015); Gal & Ghahramani (2016): Boston house prices (Harrison Jr & Rubinfeld, 1978), concrete compression strength (Yeh, 1998), energy efficiency of buildings (Tsanas & Xifara, 2012), forward kinematics of an eight-link robot arm (Corke, 1996), maintenance of naval propulsion systems (Coraddu et al., 2016), properties of protein tertiary structures, wine quality (Cortez et al., 2009), and yacht hydrodynamics (Gerritsma et al., 1981). Furthermore, Oh & Shin (2022) use a number of drug discovery datasets, such as Davis (Davis et al., 2011), Kiba (Tang et al., 2014), BindingDB (Liu et al., 2007) and PubChem (Kim et al., 2019). Biloš et al. (2019) are the only authors working on asynchronous time event prediction, and supply their own data in the form of processed Stack Exchange postings, smart home data, and car indicators.

16 The code is available under https://github.com/Kaleidophon/evidential-deep-learning-survey.
Table 4: Overview over code repositories of surveyed works.

| Paper | Code Repository |
|---|---|
| Prior network (Malinin & Gales, 2018) | https://github.com/KaosEngineer/PriorNetworks-OLD |
| Prior networks (Malinin & Gales, 2019) | https://github.com/KaosEngineer/PriorNetworks |
| Dirichlet via Function Decomposition (Biloš et al., 2019) | https://github.com/sharpenb/Uncertainty-Event-Prediction |
| Prior network with PAC Regularization (Haussmann et al., 2019) | https://github.com/manuelhaussmann/bedl |
| Prior networks with representation gap (Nandy et al., 2020) | https://github.com/jayjaynandy/maximize-representation-gap |
| Graph-based Kernel Dirichlet distribution estimation (GKDE) (Zhao et al., 2020) | https://github.com/zxj32/uncertainty-GNN |
| Evidential Deep Learning (Sensoy et al., 2018) | https://muratsensoy.github.io/uncertainty.html |
| WGAN–ENN (Hu et al., 2021) | https://github.com/snowood1/wenn |
| Belief Matching (Joo et al., 2020) | https://github.com/tjoo512/belief-matching-framework |
| Posterior Networks (Charpentier et al., 2020) | https://github.com/sharpenb/Posterior-Network |
| Graph Posterior Networks (Stadler et al., 2021) | https://github.com/stadlmax/Graph-Posterior-Network |
| Generative Evidential Neural Networks (Sensoy et al., 2020) | https://muratsensoy.github.io/gen.html |
| Deep Evidential Regression with Multi-task Learning (Oh & Shin, 2022) | https://github.com/deargen/MT-ENet |
| Multivariate Deep Evidential Regression (Meinert & Lavin, 2021) | https://github.com/avitase/mder/ |
| Regression Prior Network (Malinin et al., 2020a) | https://github.com/JanRocketMan/regression-prior-networks |
| Natural Posterior Network (Charpentier et al., 2022) | https://github.com/borchero/natural-posterior-network |

Shen et al. (2020) provide the sole method on language data, and use three different concept learning datasets, i.e. Concept Learning (Jia et al., 2017), Snips (Coucke et al., 2018) and ATIS (Hemphill et al., 1990), which contain new OOD concepts to be learned by design. For graph neural networks, Zhao et al. (2020) and Stadler et al. (2021) select data from the co-purchase datasets Amazon Computer and Amazon Photos (McAuley et al., 2015), as well as the CoraML (McCallum et al., 2000), CiteSeer (Giles et al., 1998), PubMed (Namata et al., 2012), Coauthors Physics (Shchur et al., 2018), CoauthorCS (Namata et al., 2012) and OGBN Arxiv (Hu et al., 2020) citation datasets. Lastly, Charpentier et al. (2022) use a single count prediction dataset concerned with predicting the number of bike rentals (Fanaee-T & Gama, 2014).

Uncertainty Evaluation Methods There usually are no gold labels for uncertainty estimates, which is why the efficacy of proposed solutions has to be evaluated in a different way. One such way, used by almost all surveyed works, is to employ uncertainty estimates in a proxy OOD detection task: since the model is underspecified on unseen samples from another distribution, it should be more uncertain on them. By labelling OOD samples as the positive and ID inputs as the negative class, we can measure the performance of uncertainty estimates using the area under the receiver-operator characteristic (AUROC) or the area under the precision-recall curve (AUPR).
We can thereby characterize the usage of data from another dataset as a form of covariate shift, while using left-out classes for testing can be seen as a kind of concept shift (Moreno-Torres et al., 2012). Instead of using OOD data, another approach is to use adversarial examples (Malinin & Gales, 2019; Tsiligkaridis, 2019; Sensoy et al., 2018; Hu et al., 2021; Chen et al., 2018; Amini et al., 2020), checking whether they can be identified through uncertainty. In the case of Shen et al. (2020), OOD detection or new concept extraction is the actual and not a proxy task, and can thus be evaluated using classical metrics such as the F1 score.

Another way is misclassification detection: in general, we would like the model to be more uncertain about inputs it incurs a higher loss on, i.e., what it is more wrong about. For this purpose, some works (Malinin & Gales, 2018; Zhao et al., 2020; Charpentier et al., 2020) let misclassified inputs be the positive class in another binary proxy classification test, and again measure AUROC and AUPR. Alternatively, Malinin et al. (2020b); Stadler et al. (2021); Amini et al. (2020) show or measure the area under the prediction/rejection curve, graphing how task performance varies as predictions on increasingly uncertain inputs are suspended. Lastly, some authors look at a model's calibration (Guo et al., 2017): while this does not allow one to judge the quality of uncertainty estimates themselves, quantities like the expected calibration error quantify to what extent the output distribution of a classifier corresponds to the true label distribution, and thus whether aleatoric uncertainty is accurately reflected.
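To make the OOD-detection proxy task concrete, the snippet below scores ID and OOD inputs with some scalar uncertainty measure and computes AUROC and AUPR with scikit-learn. This is our own illustration; the uncertainty scores here are placeholder arrays standing in for any of the measures discussed above:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def ood_detection_metrics(unc_id, unc_ood):
    """Score OOD detection by treating OOD samples as the positive class."""
    labels = np.concatenate([np.zeros(len(unc_id)), np.ones(len(unc_ood))])
    scores = np.concatenate([unc_id, unc_ood])
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)

rng = np.random.default_rng(0)
unc_id = rng.normal(0.5, 0.2, 1000)   # hypothetical ID uncertainties
unc_ood = rng.normal(1.2, 0.3, 1000)  # hypothetical OOD uncertainties
auroc, aupr = ood_detection_metrics(unc_id, unc_ood)
```

The same helper covers misclassification detection by passing the uncertainties of correctly and incorrectly classified ID inputs instead.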
Table 5: Overview over uncertainty evaluation techniques and datasets. (∗) indicates that a dataset was used as an OOD dataset for evaluation purposes, while (⋄) signifies that it was used as an in-distribution or out-of-distribution dataset. (†) means that a dataset was modified to create ID and OOD splits (for instance by removing some classes for evaluation or corrupting samples with noise).

| Method | Uncertainty Evaluation | Images | Tabular | Other |
|---|---|---|---|---|
| Prior network (Malinin & Gales, 2018) | OOD Detection, Misclassification Detection | MNIST, CIFAR-10, Omniglot(∗), SVHN(∗), LSUN(∗), TIM(∗) | ✗ | Clusters (Synthetic) |
| Prior networks (Malinin & Gales, 2019) | OOD Detection, Adversarial Attack Detection | MNIST, CIFAR-10/100, SVHN(∗), LSUN(∗), TIM(∗) | ✗ | Clusters (Synthetic) |
| Information Robust Dirichlet Networks (Tsiligkaridis, 2019) | OOD Detection, Adversarial Attack Detection | MNIST, FashionMNIST(∗), notMNIST(∗), Omniglot(∗), CIFAR-10, TIM(∗), SVHN(∗) | ✗ | ✗ |
| Dirichlet via Function Decomposition (Biloš et al., 2019) | OOD Detection | ✗ | ✗ | Erdős-Rényi Graph (Synthetic), Stack Exchange, Smart Home, Car Indicators |
| Prior network with PAC Regularization (Haussmann et al., 2019) | OOD Detection | MNIST, FashionMNIST(∗), CIFAR-10(†) | ✗ | ✗ |
| Ensemble Distribution Distillation (Malinin et al., 2020b) | OOD Detection, Misclassification Detection, Calibration | CIFAR-10, CIFAR-100(⋄), TIM(⋄), LSUN(∗) | ✗ | Spirals (Synthetic) |
| Self-Distribution Distillation (Fathullah & Gales, 2022) | OOD Detection, Calibration | CIFAR-100, SVHN(∗), LSUN(∗) | ✗ | ✗ |
| Prior networks with representation gap (Nandy et al., 2020) | OOD Detection | CIFAR-10(⋄), CIFAR-100(⋄), TIM, ImageNet(∗) | ✗ | Clusters (Synthetic) |
| Prior RNN (Shen et al., 2020) | New Concept Extraction | ✗ | ✗ | Concept Learning(⋄), Snips(⋄), ATIS(⋄) (Language) |
| Graph-based Kernel Dirichlet distribution estimation (GKDE) (Zhao et al., 2020) | OOD Detection, Misclassification Detection | ✗ | ✗ | Coauthors Physics(⋄), Amazon Computer(⋄), Amazon Photo(⋄) (Graph) |
| Evidential Deep Learning (Sensoy et al., 2018) | OOD Detection, Adversarial Attack Detection | MNIST, notMNIST(∗), CIFAR-10(†) | ✗ | ✗ |
| Regularized ENN (Zhao et al., 2019) | OOD Detection | CIFAR-10(†) | ✗ | Clusters (Synthetic) |
| WGAN–ENN (Hu et al., 2021) | OOD Detection, Adversarial Attack Detection | MNIST, notMNIST(∗), CIFAR-10(†) | ✗ | Clusters (Synthetic) |
| Variational Dirichlet (Chen et al., 2018) | OOD Detection, Adversarial Attack Detection | MNIST, CIFAR-10/100, iSUN(∗), LSUN(∗), SVHN(∗), TIM(∗) | ✗ | ✗ |
| Dirichlet Meta-Model (Shen et al., 2022) | OOD Detection, Misclassification Detection | MNIST(⋄,†), CIFAR-10(⋄,†), CIFAR-100(⋄), Omniglot(∗), FashionMNIST(∗), K-MNIST(∗), SVHN(∗), LSUN(∗), TIM(∗) | ✗ | ✗ |
| Belief Matching (Joo et al., 2020) | OOD Detection, Calibration | CIFAR-10/100, SVHN(∗) | ✗ | ✗ |
| Posterior Networks (Charpentier et al., 2020) | OOD Detection, Misclassification Detection, Calibration | MNIST, FashionMNIST(∗), K-MNIST(∗), CIFAR-10, SVHN(∗) | Segment(†), Sensorless Drive(†) | Clusters (Synthetic) |
| Graph Posterior Networks (Stadler et al., 2021) | OOD Detection, Misclassification Detection, Calibration | ✗ | ✗ | CoraML(⋄), CiteSeer(⋄), PubMed(⋄), Coauthors Physics(⋄), CoauthorsCS(⋄), OGBN Arxiv(⋄), Amazon Computer(⋄), Amazon Photo(⋄) (Graph) |
| Deep Evidential Regression (Amini et al., 2020) | OOD Detection, Misclassification Detection, Adversarial Attack Detection, Calibration | NYU Depth v2, ApolloScape(∗) (Depth Estimation) | UCI Regression Benchmark | Univariate Regression (Synthetic) |
| Deep Evidential Regression with Multi-task Learning (Oh & Shin, 2022) | OOD Detection, Calibration | ✗ | Davis, Kiba(†), BindingDB, PubChem(∗) (Drug Discovery), UCI Regression Benchmark | Univariate Regression (Synthetic) |
| Multivariate Deep Evidential Regression (Meinert & Lavin, 2021) | Qualitative Evaluation | ✗ | ✗ | Multivariate Regression (Synthetic) |
| Regression Prior Network (Malinin et al., 2020a) | OOD Detection | NYU Depth v2(⋄), KITTI(⋄) (Depth Estimation) | UCI Regression Benchmark | Univariate Regression (Synthetic) |
| Natural Posterior Network (Charpentier et al., 2022) | OOD Detection, Calibration | NYU Depth v2, KITTI(∗), LSUN(∗) (Depth Estimation), MNIST, FashionMNIST(∗), K-MNIST(∗), CIFAR-10(†), SVHN(∗), CelebA(∗) | UCI Regression Benchmark(†), Sensorless Drive(†), Bike Sharing(†) | Clusters (Synthetic), Univariate Regression (Synthetic) |

## C Fundamental Derivations Appendix

This appendix section walks the reader through generalized versions of recurring theoretical results on Dirichlet distributions in a Machine Learning context, such as their expectation in Appendix C.1, their entropy in Appendix C.2 and the Kullback-Leibler divergence between two Dirichlets in Appendix C.3.

## C.1 Expectation Of A Dirichlet

Here, we show results for the quantities E[πk] and E[log πk]. For the first, we follow the derivation by Miller (2011). Another proof is given by Lin (2016).
$$\mathbb{E}[\pi_k]=\int\cdots\int\pi_k\,\frac{\Gamma(\alpha_0)}{\prod_{k'=1}^{K}\Gamma(\alpha_{k'})}\prod_{k'=1}^{K}\pi_{k'}^{\alpha_{k'}-1}\,d\pi_1\ldots d\pi_K\tag{30}$$

Moving $\pi_k^{\alpha_k-1}$ out of the product and combining it with the leading factor $\pi_k$:

$$=\int\cdots\int\frac{\Gamma(\alpha_0)}{\prod_{k'=1}^{K}\Gamma(\alpha_{k'})}\,\pi_k^{\alpha_k-1+1}\prod_{k'\neq k}\pi_{k'}^{\alpha_{k'}-1}\,d\pi_1\ldots d\pi_K\tag{31}$$

For the next step, we define a new set of Dirichlet parameters with $\beta_k=\alpha_k+1$ and $\beta_{k'}=\alpha_{k'}$ for all $k'\neq k$. For those new parameters, $\beta_0=\sum_k\beta_k=1+\alpha_0$. So by virtue of the Gamma function's property that $\Gamma(\beta_0)=\Gamma(\alpha_0+1)=\alpha_0\Gamma(\alpha_0)$, replacing all terms in the normalization factor yields

$$=\int\cdots\int\frac{\alpha_k}{\alpha_0}\,\frac{\Gamma(\beta_0)}{\prod_{k'=1}^{K}\Gamma(\beta_{k'})}\prod_{k'=1}^{K}\pi_{k'}^{\beta_{k'}-1}\,d\pi_1\ldots d\pi_K=\frac{\alpha_k}{\alpha_0}\tag{32}$$

where in the last step we obtain the final result, since the Dirichlet with new parameters β must nevertheless integrate to 1, and the integrals do not regard αk or α0.

For the expectation E[log πk], we first rephrase the Dirichlet distribution in terms of the exponential family (Kupperman, 1964). The exponential family encompasses many commonly-used distributions, such as the normal, exponential, Beta or Poisson, which all follow the form

$$p(\mathbf{x};\eta)=h(\mathbf{x})\exp\big(\eta^{T}u(\mathbf{x})-A(\eta)\big)\tag{33}$$

with natural parameters η, *sufficient statistic* u(x), and *log-partition function* A(η). For the Dirichlet distribution, Winn (2004) provides the sufficient statistic as $u(\pi)=[\log\pi_1,\ldots,\log\pi_K]^T$ and the log-partition function

$$A(\alpha)=\sum_{k=1}^{K}\log\Gamma(\alpha_k)-\log\Gamma(\alpha_0)\tag{34}$$

By Mao (2019), we also find via the moment-generating function that the expectation of the sufficient statistic can be derived as

$$\mathbb{E}[u(\mathbf{x})_k]=\frac{\partial A(\eta)}{\partial\eta_k}\tag{35}$$

Therefore we can evaluate the expected value of log πk (i.e. the sufficient statistic) by inserting the definition of the log-partition function in Equation (34) into Equation (35):

$$\mathbb{E}[\log\pi_k]=\frac{\partial}{\partial\alpha_k}\bigg(\sum_{k=1}^{K}\log\Gamma(\alpha_k)-\log\Gamma(\alpha_0)\bigg)=\psi(\alpha_k)-\psi(\alpha_0)\tag{36}$$

which corresponds precisely to the definition of the digamma function as $\psi(x)=\frac{d}{dx}\log\Gamma(x)$.
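Both identities are easy to check numerically. The following small sanity check is our own (not part of the original derivations) and compares Monte Carlo estimates against the closed forms:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 1.5])
samples = rng.dirichlet(alpha, size=200_000)

# E[pi_k] = alpha_k / alpha_0, Equation (32)
print(samples.mean(axis=0), alpha / alpha.sum())
# E[log pi_k] = psi(alpha_k) - psi(alpha_0), Equation (36)
print(np.log(samples).mean(axis=0), digamma(alpha) - digamma(alpha.sum()))
```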
## C.2 Entropy Of Dirichlet

The following derivation is adapted from Lin (2016), with the result stated in Charpentier et al. (2020) as well.

$$H[\pi]=-\mathbb{E}[\log p(\pi|\alpha)]\tag{37}$$

$$=-\mathbb{E}\bigg[\log\bigg(\frac{1}{B(\alpha)}\prod_{k=1}^{K}\pi_k^{\alpha_k-1}\bigg)\bigg]\tag{38}$$

$$=-\mathbb{E}\bigg[-\log B(\alpha)+\sum_{k=1}^{K}(\alpha_k-1)\log\pi_k\bigg]\tag{39}$$

$$=\log B(\alpha)-\sum_{k=1}^{K}(\alpha_k-1)\mathbb{E}[\log\pi_k]\tag{40}$$

Using Equation (36):

$$=\log B(\alpha)-\sum_{k=1}^{K}(\alpha_k-1)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)\tag{41}$$

$$=\log B(\alpha)+\sum_{k=1}^{K}(\alpha_k-1)\psi(\alpha_0)-\sum_{k=1}^{K}(\alpha_k-1)\psi(\alpha_k)\tag{42}$$

$$=\log B(\alpha)+(\alpha_0-K)\psi(\alpha_0)-\sum_{k=1}^{K}(\alpha_k-1)\psi(\alpha_k)\tag{43}$$

## C.3 Kullback-Leibler Divergence Between Two Dirichlets

The following result is presented using an adapted derivation by Lin (2016) and appears in Chen et al. (2018) and Joo et al. (2020) as a starting point for their variational objective (see Appendix D.7). In the following we use Dir(π; α) to denote the optimized distribution, and Dir(π; γ) the reference or target distribution.

$$\mathrm{KL}\Big[p(\pi|\alpha)\,\Big\|\,p(\pi|\gamma)\Big]=\mathbb{E}\bigg[\log\frac{p(\pi|\alpha)}{p(\pi|\gamma)}\bigg]=\mathbb{E}\big[\log p(\pi|\alpha)\big]-\mathbb{E}\big[\log p(\pi|\gamma)\big]\tag{44}$$

$$=\mathbb{E}\bigg[-\log B(\alpha)+\sum_{k=1}^{K}(\alpha_k-1)\log\pi_k\bigg]-\mathbb{E}\bigg[-\log B(\gamma)+\sum_{k=1}^{K}(\gamma_k-1)\log\pi_k\bigg]\tag{45}$$

Distributing and pulling B(α) and B(γ) out of the expectations (they do not depend on π):

$$=\log\frac{B(\gamma)}{B(\alpha)}+\mathbb{E}\bigg[\sum_{k=1}^{K}(\alpha_k-1)\log\pi_k-(\gamma_k-1)\log\pi_k\bigg]\tag{46}$$

$$=\log\frac{B(\gamma)}{B(\alpha)}+\mathbb{E}\bigg[\sum_{k=1}^{K}(\alpha_k-\gamma_k)\log\pi_k\bigg]\tag{47}$$

Moving the expectation inward and using the identity $\mathbb{E}[\log\pi_k]=\psi(\alpha_k)-\psi(\alpha_0)$ from Appendix C.1:

$$=\log\frac{B(\gamma)}{B(\alpha)}+\sum_{k=1}^{K}(\alpha_k-\gamma_k)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)\tag{48}$$

The KL divergence is also used by some works as a regularizer, penalizing the distance to a uniform Dirichlet with γ = 1 (Sensoy et al., 2018). In this case, using $B(\mathbf{1})=1/\Gamma(K)$, the result above becomes

$$\mathrm{KL}\Big[p(\pi|\alpha)\,\Big\|\,p(\pi|\mathbf{1})\Big]=-\log\big(\Gamma(K)B(\alpha)\big)+\sum_{k=1}^{K}(\alpha_k-1)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)\tag{49}$$

where the log Γ(K) term can also be omitted for optimization purposes, since it does not depend on α.
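The closed forms in Appendix C.2 and C.3 translate directly into code. The helper functions below are a minimal sketch of our own using SciPy; the function names are our choices:

```python
import numpy as np
from scipy.special import digamma, gammaln

def log_beta(alpha):
    """log B(alpha) = sum_k log Gamma(alpha_k) - log Gamma(alpha_0)."""
    return gammaln(alpha).sum() - gammaln(alpha.sum())

def dirichlet_entropy(alpha):
    """Closed-form Dirichlet entropy, Equation (43)."""
    a0, K = alpha.sum(), alpha.size
    return (log_beta(alpha) + (a0 - K) * digamma(a0)
            - ((alpha - 1.0) * digamma(alpha)).sum())

def dirichlet_kl(alpha, gamma):
    """KL[Dir(alpha) || Dir(gamma)], Equation (48)."""
    a0 = alpha.sum()
    return (log_beta(gamma) - log_beta(alpha)
            + ((alpha - gamma) * (digamma(alpha) - digamma(a0))).sum())

# sanity checks: KL of a distribution with itself is 0, and the entropy
# matches scipy.stats.dirichlet(alpha).entropy()
assert np.isclose(dirichlet_kl(np.ones(4), np.ones(4)), 0.0)
```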
## D Additional Derivations Appendix

In this appendix we present relevant results in a Machine Learning context, including from some of the surveyed works, featuring a unified notation and annotated derivation steps. These include derivations of expected entropy (Appendix D.1) and mutual information (Appendix D.2) as uncertainty metrics for Dirichlet networks. Also, we derive a multitude of loss functions, including the l∞ norm loss of a Dirichlet w.r.t. a one-hot encoded class label in Appendix D.3, the l2 norm loss in Appendix D.4, as well as the reverse KL loss by Malinin & Gales (2019), the UCE objective (Biloš et al., 2019; Charpentier et al., 2020) and the ELBO (Shen et al., 2020; Chen et al., 2018) as training objectives (Appendices D.5 to D.7).

## D.1 Derivation Of Expected Entropy

The following derivation is adapted from Malinin & Gales (2018), appendix section C.4. In the following, we assume that πk > 0 for all k ∈ {1, ..., K}:

$$\mathbb{E}_{p(\pi|\mathbf{x},\hat{\theta})}\Big[H\big[P(y|\pi)\big]\Big]=\int p(\pi|\mathbf{x},\hat{\theta})\bigg(-\sum_{k=1}^{K}\pi_k\log\pi_k\bigg)d\pi\tag{50}$$

$$=-\sum_{k=1}^{K}\int p(\pi|\mathbf{x},\hat{\theta})\,\pi_k\log\pi_k\,d\pi\tag{51}$$

Inserting the definition of $p(\pi|\mathbf{x},\hat{\theta})\approx p(\pi|\mathbf{x},\mathbb{D})$:

$$=-\sum_{k=1}^{K}\Bigg(\frac{\Gamma(\alpha_0)}{\prod_{k'=1}^{K}\Gamma(\alpha_{k'})}\int\pi_k\log\pi_k\prod_{k'=1}^{K}\pi_{k'}^{\alpha_{k'}-1}d\pi\Bigg)\tag{52}$$

Combining the factor $\pi_k$ with $\pi_k^{\alpha_k-1}$:

$$=-\sum_{k=1}^{K}\Bigg(\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_k)\prod_{k'\neq k}\Gamma(\alpha_{k'})}\int\pi_k^{\alpha_k}\log\pi_k\prod_{k'\neq k}\pi_{k'}^{\alpha_{k'}-1}d\pi\Bigg)\tag{53}$$

Adjusting the normalizing constant (this is the same trick used in Appendix C.1):

$$=-\sum_{k=1}^{K}\Bigg(\frac{\alpha_k}{\alpha_0}\int\frac{\Gamma(\alpha_0+1)}{\Gamma(\alpha_k+1)\prod_{k'\neq k}\Gamma(\alpha_{k'})}\,\pi_k^{\alpha_k}\log\pi_k\prod_{k'\neq k}\pi_{k'}^{\alpha_{k'}-1}d\pi\Bigg)\tag{54}$$

Using the identity $\mathbb{E}[\log\pi_k]=\psi(\alpha_k)-\psi(\alpha_0)$ (Equation (36)); since the expectation here is w.r.t. a Dirichlet with concentration parameters $\alpha_k+1$, we obtain

$$=-\sum_{k=1}^{K}\frac{\alpha_k}{\alpha_0}\Big(\psi(\alpha_k+1)-\psi(\alpha_0+1)\Big)\tag{55}$$

## D.2 Derivation Of Mutual Information

We start from the expression in Equation (13):

$$I\big[y,\pi\,\big|\,\mathbf{x},\mathbb{D}\big]=H\Big[\mathbb{E}_{p(\pi|\mathbf{x},\mathbb{D})}\big[P(y|\pi)\big]\Big]-\mathbb{E}_{p(\pi|\mathbf{x},\mathbb{D})}\Big[H\big[P(y|\pi)\big]\Big]\tag{56}$$

Given that $\mathbb{E}[\pi_k]=\frac{\alpha_k}{\alpha_0}$ (Appendix C.1) and assuming that the point estimate $p(\pi|\mathbf{x},\mathbb{D})\approx p(\pi|\mathbf{x},\hat{\theta})$ is sufficient (Malinin & Gales, 2018), we can identify the first term as the Shannon entropy $-\sum_{k=1}^{K}\frac{\alpha_k}{\alpha_0}\log\frac{\alpha_k}{\alpha_0}$. Furthermore, the second part we already derived in Appendix D.1, and thus we obtain:

$$=-\sum_{k=1}^{K}\frac{\alpha_k}{\alpha_0}\log\frac{\alpha_k}{\alpha_0}+\sum_{k=1}^{K}\frac{\alpha_k}{\alpha_0}\Big(\psi(\alpha_k+1)-\psi(\alpha_0+1)\Big)\tag{57}$$

$$=-\sum_{k=1}^{K}\frac{\alpha_k}{\alpha_0}\Big(\log\frac{\alpha_k}{\alpha_0}-\psi(\alpha_k+1)+\psi(\alpha_0+1)\Big)\tag{58}$$
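For reference, the two uncertainty measures just derived amount to only a few lines of code given the concentration parameters of a single prediction. This sketch is our own, following Equations (55) and (58):

```python
import numpy as np
from scipy.special import digamma

def expected_entropy(alpha):
    """Aleatoric part, Equation (55)."""
    a0 = alpha.sum()
    return -np.sum(alpha / a0 * (digamma(alpha + 1.0) - digamma(a0 + 1.0)))

def mutual_information(alpha):
    """Epistemic part, Equation (58)."""
    a0 = alpha.sum()
    p = alpha / a0
    return -np.sum(p * (np.log(p) - digamma(alpha + 1.0) + digamma(a0 + 1.0)))
```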
## D.3 L∞ Norm Derivation

In this section we elaborate on the derivation by Tsiligkaridis (2019) of a generalized lp loss that upper-bounds the l∞ loss. This in turn allows us to easily derive the l2 loss used by Sensoy et al. (2018); Zhao et al. (2020). Here we assume the classification target y is provided in the form of a one-hot encoded label $\mathbf{y}=[\mathbf{1}_{y=1},\ldots,\mathbf{1}_{y=K}]^T$.

$$\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\big[\|\mathbf{y}-\pi\|_\infty\big]\leq\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\big[\|\mathbf{y}-\pi\|_p\big]\tag{59}$$

Using Jensen's inequality:

$$\leq\Big(\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\big[\|\mathbf{y}-\pi\|_p^p\big]\Big)^{1/p}\tag{60}$$

Evaluating the expression with $y_k=0$ for all $k\neq y$:

$$=\Big(\mathbb{E}[(1-\pi_y)^p]+\sum_{k\neq y}\mathbb{E}[\pi_k^p]\Big)^{1/p}\tag{61}$$

In order to compute the expression above, we first realize that all components of π are distributed according to a Beta distribution Beta(α, β) (since the Dirichlet is a multivariate generalization of the Beta distribution), for which the moments are given as follows:

$$\mathbb{E}[\pi^p]=\frac{\Gamma(\alpha+p)\Gamma(\beta)\Gamma(\alpha+\beta)}{\Gamma(\alpha+p+\beta)\Gamma(\alpha)\Gamma(\beta)}=\frac{\Gamma(\alpha+p)\Gamma(\alpha+\beta)}{\Gamma(\alpha+p+\beta)\Gamma(\alpha)}\tag{62}$$

Given that the first term in Equation (61) is characterized by Beta(α0 − αy, αy) and the second one by Beta(αk, α0 − αk), we can evaluate Equation (61) using this moment formula:

$$\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\big[\|\mathbf{y}-\pi\|_\infty\big]\leq\Bigg(\frac{\Gamma(\alpha_0-\alpha_y+p)\,\Gamma(\alpha_0-\alpha_y+\alpha_y)}{\Gamma(\alpha_0-\alpha_y+p+\alpha_y)\,\Gamma(\alpha_0-\alpha_y)}+\sum_{k\neq y}\frac{\Gamma(\alpha_k+p)\,\Gamma(\alpha_k+\alpha_0-\alpha_k)}{\Gamma(\alpha_k+p+\alpha_0-\alpha_k)\,\Gamma(\alpha_k)}\Bigg)^{\frac{1}{p}}\tag{63}$$

Cancelling common Gamma factors:

$$=\Bigg(\frac{\Gamma(\alpha_0-\alpha_y+p)\,\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)\,\Gamma(\alpha_0-\alpha_y)}+\sum_{k\neq y}\frac{\Gamma(\alpha_k+p)\,\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)\,\Gamma(\alpha_k)}\Bigg)^{\frac{1}{p}}\tag{64}$$

Factoring out common terms:

$$=\Bigg(\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)}\Bigg(\frac{\Gamma(\alpha_0-\alpha_y+p)}{\Gamma(\alpha_0-\alpha_y)}+\sum_{k\neq y}\frac{\Gamma(\alpha_k+p)}{\Gamma(\alpha_k)}\Bigg)\Bigg)^{\frac{1}{p}}\tag{65}$$

Expressing $\alpha_0-\alpha_y=\sum_{k\neq y}\alpha_k$:

$$=\Bigg(\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)}\Bigg)^{\frac{1}{p}}\Bigg(\frac{\Gamma\big(\sum_{k\neq y}\alpha_k+p\big)}{\Gamma\big(\sum_{k\neq y}\alpha_k\big)}+\sum_{k\neq y}\frac{\Gamma(\alpha_k+p)}{\Gamma(\alpha_k)}\Bigg)^{\frac{1}{p}}\tag{66}$$

## D.4 L2 Norm Loss Derivation

Here we present an adapted derivation by Sensoy et al. (2018) for the l2-norm loss used to train Dirichlet networks. We again use a one-hot encoded label $\mathbf{y}=[\mathbf{1}_{y=1},\ldots,\mathbf{1}_{y=K}]^T$.

$$\mathbb{E}_{p(\pi|\mathbf{x},\theta)}\Big[\|\mathbf{y}-\pi\|_2^2\Big]=\mathbb{E}\Bigg[\sum_{k=1}^{K}(\mathbf{1}_{y=k}-\pi_k)^2\Bigg]\tag{67}$$

$$=\mathbb{E}\Bigg[\sum_{k=1}^{K}\mathbf{1}_{y=k}^2-2\pi_k\mathbf{1}_{y=k}+\pi_k^2\Bigg]\tag{68}$$

$$=\sum_{k=1}^{K}\mathbf{1}_{y=k}^2-2\mathbb{E}[\pi_k]\mathbf{1}_{y=k}+\mathbb{E}[\pi_k^2]\tag{69}$$

Using the identity that $\mathbb{E}[\pi_k^2]=\mathbb{E}[\pi_k]^2+\mathrm{Var}(\pi_k)$:

$$=\sum_{k=1}^{K}\mathbf{1}_{y=k}^2-2\mathbb{E}[\pi_k]\mathbf{1}_{y=k}+\mathbb{E}[\pi_k]^2+\mathrm{Var}(\pi_k)\tag{70}$$

$$=\sum_{k=1}^{K}\big(\mathbf{1}_{y=k}-\mathbb{E}[\pi_k]\big)^2+\mathrm{Var}(\pi_k)\tag{71}$$

Finally, we use the result from Appendix C.1 and the fact that $\mathrm{Var}(\pi_k)=\frac{\alpha_k(\alpha_0-\alpha_k)}{\alpha_0^2(\alpha_0+1)}$ (see Lin, 2016):

$$=\sum_{k=1}^{K}\bigg(\mathbf{1}_{y=k}-\frac{\alpha_k}{\alpha_0}\bigg)^2+\frac{\alpha_k(\alpha_0-\alpha_k)}{\alpha_0^2(\alpha_0+1)}\tag{72}$$
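Both expected-norm losses can be evaluated in a numerically stable way via log-Gamma functions. The following sketch is our own (for a single example with integer label index `y`), implementing Equation (66) and the closed form of Equation (72):

```python
import numpy as np
from scipy.special import gammaln

def lp_bound_loss(alpha, y, p=2.0):
    """Upper bound on E[||y - pi||_inf] from Equation (66)."""
    a0 = alpha.sum()
    rest = np.delete(alpha, y)  # concentrations of all classes k != y
    inner = (np.exp(gammaln(rest.sum() + p) - gammaln(rest.sum()))
             + np.exp(gammaln(rest + p) - gammaln(rest)).sum())
    return np.exp((gammaln(a0) - gammaln(a0 + p)) / p) * inner ** (1.0 / p)

def l2_loss(alpha, y):
    """Closed form of E[||y - pi||_2^2] from Equation (72)."""
    a0 = alpha.sum()
    y_onehot = np.eye(len(alpha))[y]
    return np.sum((y_onehot - alpha / a0) ** 2
                  + alpha * (a0 - alpha) / (a0**2 * (a0 + 1.0)))
```

A quick Monte Carlo check with samples from `numpy.random.Generator.dirichlet` reproduces `l2_loss` closely, mirroring the sanity check in Appendix C.1.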
## D.5 Derivation Of Reverse KL Loss

Here we re-state and annotate the derivation of the reverse KL loss by Malinin & Gales (2019) in more detail, starting from the forward KL loss by Malinin & Gales (2018). Note that here, the target concentration parameters $\hat{\alpha}^{(k)}$ depend on the class k, since Malinin & Gales (2018) let $\hat{\alpha}_k=\hat{\pi}_k\hat{\alpha}_0$, with $\hat{\alpha}_0$ being a hyperparameter and $\hat{\pi}_k=\mathbf{1}_{k=y}+(1-K\mathbf{1}_{k=y})\varepsilon$ for a small ε.

$$\mathbb{E}_{p(\mathbf{x},y)}\Bigg[\sum_{k=1}^{K}\mathbf{1}_{y=k}\,\mathrm{KL}\Big[p(\pi|\hat{\alpha}^{(k)})\,\Big\|\,p(\pi|\mathbf{x},\theta)\Big]\Bigg]\tag{73}$$

$$=\mathbb{E}_{p(\mathbf{x},y)}\Bigg[\sum_{k=1}^{K}\mathbf{1}_{y=k}\int p(\pi|\hat{\alpha}^{(k)})\log\frac{p(\pi|\hat{\alpha}^{(k)})}{p(\pi|\mathbf{x},\theta)}\,d\pi\Bigg]\tag{74}$$

Writing the expectation explicitly and marginalizing over y:

$$=\int\sum_{k=1}^{K}p(y=k,\mathbf{x})\int p(\pi|\hat{\alpha}^{(k)})\log\frac{p(\pi|\hat{\alpha}^{(k)})}{p(\pi|\mathbf{x},\theta)}\,d\pi\,d\mathbf{x}\tag{75}$$
Therefore, Malinin & Gales (2019) propose to swap the order of arguments in the KL-divergence, resulting in the following: $$\mathbb{E}_{p(\mathbf{x})}\biggl{[}\sum_{k=1}^{K}P(y=k|\mathbf{x})\cdot\mathrm{KL}\biggl{[}p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})\biggl{|}\biggl{|}p(\boldsymbol{\pi}|\hat{\mathbf{\alpha}})\biggr{]}\biggr{]}$$ $$=\mathbb{E}_{p(\mathbf{x})}\biggl{[}\sum_{k=1}^{K}p(y=k|\mathbf{x})\cdot\int p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})\log\frac{p(\mathbf{x},\boldsymbol{\theta})}{p(\boldsymbol{\pi}|\hat{\mathbf{\alpha}})}d\boldsymbol{\pi}\biggr{]}$$ Reordering: = Ep(x) Zp(π|x, θ) X K k=1 P(y = k|x) log p(π|x, θ) p(π|αˆ) dπ (84) = Ep(x) "Ep(π|x,θ) X K k=1 P(y = k|x) log p(π|x, θ) − X K k=1 P(y = k|x) log p(π|αˆ) #(85) = Ep(x) "Zp(π|x, θ) log Y K k=1 p(π|x, θ) P (y=k|x) − log Y K k=1 p(π|αˆ) P (y=k|x) dπ # = Ep(x) " Zp(π|x, θ) log p(π|x, θ) PK k=1 P (y=k|x)− log Y K k=1 1 B(α) Y K k′=1 π αk′−1 k′p(y=k|x)dπ = Ep(x) "Zp(π|x, θ) log p(π|x, θ)− log Y K k=1 1 B(α) Y K k′=1 π αk′−1 k′P (y=k|x)dπ # $$\text{(84)}$$ $$\text{(85)}$$ $$\text{(86)}$$ $$\text{(87)}$$ $$\text{(88)}$$ . $$(\delta2)$$ $$(83)$$ 43 $$=\mathbb{E}_{p(\mathbf{x})}\Bigg{[}\int p(\mathbf{\pi}|\mathbf{x},\mathbf{\theta})\bigg{(}\log\big{(}p(\mathbf{\pi}|\mathbf{x},\mathbf{\theta})\big{)}-\log\bigg{(}\frac{1}{B(\mathbf{\alpha})}\prod_{k=1}^{K}\sum_{k^{\prime}=1}^{K}F(y=k|\mathbf{x})\alpha_{k^{\prime}}-1\bigg{)}d\mathbf{\pi}\Bigg{]}$$ $$=\mathbb{E}_{p(\mathbf{x})}\Bigg{[}\text{KL}\Big{[}p(\mathbf{\pi}|\mathbf{x},\mathbf{\theta})||p(\mathbf{\pi}|\mathbf{\tilde{\alpha}})\Big{]}\Bigg{]}\quad\text{where}\quad\tilde{\mathbf{\alpha}}=\sum_{k=1}^{K}p(y=k|\mathbf{x})\alpha_{k^{\prime}}$$ (89) $\binom{90}{90}$ . $$(91)$$ Therefore, instead of a mixture of Dirichlet distribution, we obtain a single distribution whose parameters are a mixture of the concentrations of each class. ## D.6 Uncertainty-Aware Cross-Entropy Loss The uncertainty-aware cross-entropy loss in Biloš et al. (2019); Charpentier et al. (2020) has the form $${\mathcal{L}}_{\mathrm{UCE}}=\mathbb{E}_{p(\pi|\mathbf{x},\theta)}[\log p(y|\pi)]=\mathbb{E}[\log\pi_{y}]=\psi(\alpha_{y})-\psi(\alpha_{0})$$ as p(y|π) is given by the true label in form of a delta distribution, we can apply the result from Appendix C.1. ## D.7 Evidence-Lower Bound For Dirichlet Posterior Estimation The evidence lower bound is a well-known objective to optimize the KL-divergence between an approximate proposal and target distribution (Jordan et al., 1999; Kingma & Welling, 2014). We derive it based on Chen et al. (2018) in the following for the Dirichlet case with a proposal distribution p(π|x, θ) to the target distribution p(π|y). For the first part of the derivation, we omit the dependence on β for clarity. 
$$\operatorname{KL}\big[p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})\,\big|\big|\,p(\boldsymbol{\pi}|y)\big]=\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\left[\log{\frac{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}{p(\boldsymbol{\pi}|y)}}\right]=\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\left[\log{\frac{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})\,p(y)}{p(\boldsymbol{\pi},y)}}\right]\tag{92}$$

Factorizing $p(\boldsymbol{\pi},y)=P(y|\boldsymbol{\pi})p(\boldsymbol{\pi})$ and pulling out $\log p(y)$, as it does not depend on $\boldsymbol{\pi}$:

$$=\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\left[\log\frac{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}{P(y|\boldsymbol{\pi})\,p(\boldsymbol{\pi})}\right]+\log p(y)\tag{93}$$

$$=\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\left[\log\frac{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}{p(\boldsymbol{\pi})}\right]-\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\big[\log P(y|\boldsymbol{\pi})\big]+\log p(y)\tag{94}$$

$$\leq\mathrm{KL}\big[p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})\,\big|\big|\,p(\boldsymbol{\pi})\big]-\mathbb{E}_{p(\boldsymbol{\pi}|\mathbf{x},\boldsymbol{\theta})}\big[\log P(y|\boldsymbol{\pi})\big]\tag{95}$$

where the inequality in the last step holds since $\log p(y)\leq 0$. Now note that the second part of the result is the uncertainty-aware cross-entropy loss from Appendix D.6. Re-adding the dependence of $p(\boldsymbol{\pi})$ on $\boldsymbol{\gamma}$, we can re-use our result regarding the KL-divergence between two Dirichlets in Appendix C.3 and thus obtain:

$${\mathcal{L}}_{\mathrm{ELBO}}=\psi(\beta_{y})-\psi(\beta_{0})-\log{\frac{B(\boldsymbol{\beta})}{B(\boldsymbol{\gamma})}}+\sum_{k=1}^{K}(\beta_{k}-\gamma_{k})\big(\psi(\beta_{k})-\psi(\beta_{0})\big)\tag{96}$$

which is exactly the solution obtained by both Chen et al. (2018) and Joo et al. (2020).
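To make these closed-form objectives concrete, the following is a minimal numerical sketch (our illustration, not code from any of the surveyed works; the helper names `log_beta_fn`, `uce`, and `elbo_loss` are ours, and NumPy/SciPy are assumed). It evaluates $\mathcal{L}_{\mathrm{UCE}}$ and $\mathcal{L}_{\mathrm{ELBO}}$ as written in Eq. (91) and Eq. (96), and checks the identity $\mathbb{E}_{\mathrm{Dir}(\boldsymbol{\beta})}[\log\pi_y]=\psi(\beta_y)-\psi(\beta_0)$ by Monte Carlo sampling.

```python
import numpy as np
from scipy.special import digamma, gammaln

def log_beta_fn(a):
    # Log multivariate Beta function: log B(a) = sum_k log Gamma(a_k) - log Gamma(sum_k a_k).
    return gammaln(a).sum() - gammaln(a.sum())

def uce(beta, y):
    # Eq. (91): E_{Dir(beta)}[log pi_y] = psi(beta_y) - psi(beta_0).
    return digamma(beta[y]) - digamma(beta.sum())

def elbo_loss(beta, gamma, y):
    # Eq. (96): the UCE term plus the closed-form KL[Dir(beta) || Dir(gamma)] terms.
    kl_terms = -(log_beta_fn(beta) - log_beta_fn(gamma)) + (
        (beta - gamma) * (digamma(beta) - digamma(beta.sum()))
    ).sum()
    return uce(beta, y) + kl_terms

# Monte Carlo sanity check of the digamma identity behind the UCE loss.
rng = np.random.default_rng(0)
beta, gamma, y = np.array([2.0, 5.0, 1.5]), np.ones(3), 1
mc_estimate = np.log(rng.dirichlet(beta, size=200_000)[:, y]).mean()
print(mc_estimate, uce(beta, y))   # the two values should agree to ~3 decimals
print(elbo_loss(beta, gamma, y))   # Eq. (96) with a uniform Dirichlet prior
```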
## E Overview Over Loss Functions

In Tables 6 and 7, we compare the forms of the loss functions used by Evidential Deep Learning methods for classification, using the consistent notation from the paper. Most of the presented results can be found in the previous Appendix C and Appendix D. We refer to the original work for details about the objective of Nandy et al. (2020).

| Method | Loss function | Regularizer | Comment |
|---|---|---|---|
| Prior networks (Malinin & Gales, 2018) | $\log\frac{B(\hat{\boldsymbol{\alpha}})}{B(\boldsymbol{\alpha})}+\sum_{k=1}^{K}(\alpha_k-\hat{\alpha}_k)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)$ | $-\log\big(\Gamma(K)B(\boldsymbol{\alpha})\big)+\sum_{k=1}^{K}(\alpha_k-1)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)$ | Target concentration parameters $\hat{\boldsymbol{\alpha}}$ are created using a label smoothing approach, i.e. $\hat{\pi}_k=1-(K-1)\varepsilon$ if $y=k$ and $\hat{\pi}_k=\varepsilon$ if $y\neq k$. Together with setting $\hat{\alpha}_0$ as a hyperparameter, $\hat{\alpha}_k=\hat{\pi}_k\hat{\alpha}_0$. |
| Prior networks (Malinin & Gales, 2019) | $\log\frac{B(\hat{\boldsymbol{\alpha}})}{B(\boldsymbol{\alpha})}+\sum_{k=1}^{K}(\alpha_k-\hat{\alpha}_k)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)$ | $\log\frac{B(\bar{\boldsymbol{\alpha}})}{B(\boldsymbol{\alpha})}+\sum_{k=1}^{K}(\alpha_k-\bar{\alpha}_k)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)$ | Similar to above, $\hat{\alpha}_c^{(k)}=\mathbf{1}_{c=k}\alpha_{\text{in}}+1$ for in-distribution and $\bar{\alpha}_c^{(k)}=\mathbf{1}_{c=k}\alpha_{\text{out}}+1$ for out-of-distribution data, where we have hyperparameters set to $\alpha_{\text{in}}=0.01K$ and $\alpha_{\text{out}}=0$. Then finally, $\hat{\boldsymbol{\alpha}}=\sum_{k=1}^{K}p(y=k|\mathbf{x})\hat{\boldsymbol{\alpha}}^{(k)}$ and $\bar{\boldsymbol{\alpha}}=\sum_{k=1}^{K}p(y=k|\mathbf{x})\bar{\boldsymbol{\alpha}}^{(k)}$. |
| Information Robust Dirichlet Networks (Tsiligkaridis, 2019) | $\Big(\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)}\frac{\Gamma(\sum_{k\neq y}\alpha_k+p)}{\Gamma(\sum_{k\neq y}\alpha_k)}\Big)^{\frac{1}{p}}+\Big(\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_0+p)}\sum_{k\neq y}\frac{\Gamma(\alpha_k+p)}{\Gamma(\alpha_k)}\Big)^{\frac{1}{p}}$ | $\frac{1}{2}\sum_{k\neq y}(\alpha_k-1)^2\big(\psi^{(1)}(\alpha_k)-\psi^{(1)}(\alpha_0)\big)$ | $\psi^{(1)}$ is the polygamma function defined as $\psi^{(1)}(x)=\frac{d}{dx}\psi(x)$. |
| Dirichlet via Function Decomposition (Biloš et al., 2019) | $\psi(\alpha_y)-\psi(\alpha_0)$ | $\lambda_1\int_0^T\pi_k(\tau)^2d\tau+\lambda_2\int_0^T\big(\nu-\sigma^2(\tau)\big)^2d\tau$ | Factors $\lambda_1$ and $\lambda_2$ are treated as hyperparameters that weigh the first term, pushing the logit $k$ to zero, and the second term, pushing the variance towards $\nu$. |
| Prior network with PAC Reg. (Haussmann et al., 2019) | $-\log\mathbb{E}\Big[\prod_{k=1}^{K}\big(\frac{\alpha_k}{\alpha_0}\big)^{\mathbf{1}_{k=y}}\Big]$ | $\sqrt{\frac{\mathrm{KL}\big[p(\boldsymbol{\pi}|\boldsymbol{\alpha})\,||\,p(\boldsymbol{\pi}|\mathbf{1})\big]-\log\delta}{N}}$ | The expectation in the loss function is evaluated using parameter samples from a weight distribution; $\delta\in[0,1]$. |
| Ensemble Distribution Distillation (Malinin et al., 2020b) | $\psi(\alpha_0)-\sum_{k=1}^{K}\psi(\alpha_k)+\frac{1}{M}\sum_{m=1}^{M}\sum_{k=1}^{K}(\alpha_k-1)\log p(y=k|\mathbf{x},\boldsymbol{\theta}^{(m)})$ | – | The objective uses predictions from a trained ensemble with parameters $\boldsymbol{\theta}_1,\ldots,\boldsymbol{\theta}_M$. |
| Prior networks with representation gap (Nandy et al., 2020) | $-\log\pi_y-\frac{\lambda_{\text{in}}}{K}\sum_{k=1}^{K}\sigma(\alpha_k)$ | $-\sum_{k=1}^{K}\frac{1}{K}\log\pi_k-\frac{\lambda_{\text{out}}}{K}\sum_{k=1}^{K}\sigma(\alpha_k)$ | The main objective is optimized on in-distribution data, the regularizer on out-of-distribution data. $\lambda_{\text{in}}$ and $\lambda_{\text{out}}$ are weighing terms and $\sigma$ denotes the sigmoid function. |
| Prior RNN (Shen et al., 2020) | $\sum_{k=1}^{K}\mathbf{1}_{k=y}\log\pi_k$ | $-\log B(\tilde{\boldsymbol{\alpha}})+(\tilde{\alpha}_0-K)\psi(\tilde{\alpha}_0)-\sum_{k=1}^{K}(\tilde{\alpha}_k-1)\psi(\tilde{\alpha}_k)$ | Here, the entropy regularizer operates on a scaled version of the concentration parameters, $\tilde{\boldsymbol{\alpha}}=(\mathbf{I}_K-\mathbf{W})\boldsymbol{\alpha}$, where $\mathbf{W}$ is learned. |
| Graph-based Kernel Dirichlet dist. est. (GKDE) (Zhao et al., 2020) | $\sum_{k=1}^{K}\Big(\mathbf{1}_{y=k}-\frac{\alpha_k}{\alpha_0}\Big)^2+\frac{\alpha_k(\alpha_0-\alpha_k)}{\alpha_0^2(\alpha_0+1)}$ | $\log\frac{B(\hat{\boldsymbol{\alpha}})}{B(\boldsymbol{\alpha})}+\sum_{k=1}^{K}(\alpha_k-\hat{\alpha}_k)\big(\psi(\alpha_k)-\psi(\alpha_0)\big)$ | $\hat{\boldsymbol{\alpha}}$ here corresponds to a uniform prior including some information about the local graph structure. The authors also use an additional knowledge distillation objective, which was omitted here since it does not relate to the Dirichlet. |

Table 6: Overview over objectives used by prior networks for classification.

| Method | Loss function | Regularizer | Comment |
|---|---|---|---|
| Evidential Deep Learning (Sensoy et al., 2018) | $\sum_{k=1}^{K}\big(\mathbf{1}_{y=k}-\frac{\beta_k}{\beta_0}\big)^2+\frac{\beta_k(\beta_0-\beta_k)}{\beta_0^2(\beta_0+1)}$ | $-\log\big(\Gamma(K)B(\boldsymbol{\beta})\big)+\sum_{k=1}^{K}(\beta_k-1)\big(\psi(\beta_k)-\psi(\beta_0)\big)$ | |
| Variational Dirichlet (Chen et al., 2018) | $\psi(\beta_y)-\psi(\beta_0)$ | $-\log\frac{B(\boldsymbol{\beta})}{B(\boldsymbol{\gamma})}+\sum_{k=1}^{K}(\beta_k-\gamma_k)\big(\psi(\beta_k)-\psi(\beta_0)\big)$ | |
| Regularized ENN (Zhao et al., 2019) | $\sum_{k=1}^{K}\big(\mathbf{1}_{y=k}-\frac{\beta_k}{\beta_0}\big)^2+\frac{\beta_k(\beta_0-\beta_k)}{\beta_0^2(\beta_0+1)}$ | $-\lambda_1\mathbb{E}_{p_{\text{out}}(\mathbf{x},y)}\Big[\frac{\alpha_y}{\alpha_0}\Big]-\lambda_2\mathbb{E}_{p_{\text{confl.}}(\mathbf{x},y)}\Bigg[\sum_{k=1}^{K}\frac{\beta_k}{\beta_0}\cdot\frac{\sum_{k'\neq k}\beta_{k'}\big(1-\frac{\|\beta_{k'}-\beta_k\|}{\beta_{k'}+\beta_k}\big)}{\sum_{k'\neq k}\beta_{k'}}\Bigg]$ | The first term represents vacuity, i.e. the lack of evidence, and is optimized using OOD examples. The second term stands for dissonance, and is computed using points whose neighborhoods contain classes different from their own. $\lambda_1,\lambda_2$ are hyperparameters. |
| WGAN–ENN (Hu et al., 2021) | $\sum_{k=1}^{K}\big(\mathbf{1}_{y=k}-\frac{\beta_k}{\beta_0}\big)^2+\frac{\beta_k(\beta_0-\beta_k)}{\beta_0^2(\beta_0+1)}$ | $-\lambda\mathbb{E}_{p_{\text{out}}(\mathbf{x},y)}\Big[\frac{\alpha_y}{\alpha_0}\Big]$ | |
| Belief Matching (Joo et al., 2020) | $\psi(\beta_y)-\psi(\beta_0)$ | $-\log\frac{B(\boldsymbol{\beta})}{B(\boldsymbol{\gamma})}+\sum_{k=1}^{K}(\beta_k-\gamma_k)\big(\psi(\beta_k)-\psi(\beta_0)\big)$ | |
| Posterior networks (Charpentier et al., 2020) | $\psi(\beta_y)-\psi(\beta_0)$ | $-\log B(\boldsymbol{\beta})+(\beta_0-K)\psi(\beta_0)-\sum_{k=1}^{K}(\beta_k-1)\psi(\beta_k)$ | |
| Graph Posterior Networks (Stadler et al., 2021) | $\psi(\beta_y)-\psi(\beta_0)$ | $-\log B(\boldsymbol{\beta})+(\beta_0-K)\psi(\beta_0)-\sum_{k=1}^{K}(\beta_k-1)\psi(\beta_k)$ | |
| Generative Evidential Neural Network (Sensoy et al., 2020) | $-\big(\mathbb{E}_{p_{\text{in}}(\mathbf{x})}\big[\log\sigma(f_{\boldsymbol{\theta}}(\mathbf{x}))\big]+\mathbb{E}_{p_{\text{out}}(\mathbf{x})}\big[\log\big(1-\sigma(f_{\boldsymbol{\theta}}(\mathbf{x}))\big)\big]\big)$ | $-\log\big(\Gamma(K)B(\boldsymbol{\beta}_{-y})\big)+\sum_{k\neq y}(\beta_k-1)\big(\psi(\beta_k)-\psi(\beta_0)\big)$ | The main loss is a discriminative loss using ID and OOD samples, the latter generated by a VAE. The regularizer is taken over all classes excluding the true class $y$ (also indicated by $\boldsymbol{\beta}_{-y}$). |

Table 7: Overview over objectives used by posterior networks for classification.
Review 1:

Summary: This paper presents a review of work in evidential deep learning: broadly, models that aim to parameterize the prior or posterior of Bayesian conjugate models and thereby achieve better uncertainty estimation. The paper presents a gradual introduction to Bayesian inference, uncertainty in neural networks, and relevant fundamentals of conjugate Bayesian inference. It includes a particularly deep dive on Dirichlet distributions, inference in the Categorical-Dirichlet model, and Evidential methods that leverage Dirichlet models. The paper presents a comprehensive discussion of the methods used to produce good uncertainty estimates within evidential learning frameworks; moreover, the appendix contains many detailed derivations, which is a strong contribution.

Strengths and Weaknesses:

## Strengths
- The main strength of the paper is that it is an accessible introduction to a part of the literature that can be fairly confusing. As mentioned above, it contains often omitted details in derivations in the appendix that are of high pedagogical value.
- The illustrations and examples that are included are helpful.
- The paper contains a fairly comprehensive discussion of regularization strategies that are critical to getting evidential models to work.

## Weaknesses
There are several (mostly small) weaknesses that should be addressed to strengthen the paper:
- In the introduction, the authors write "Bayesian model averaging involves the approximation of an otherwise infeasible integral using MC samples". This is not true for several reasons. First, models that exploit conjugacy have been used. Second, other approximate methods such as moment matching have been used.
- The authors motivate evidential learning models by claiming "for OoD inputs, they fall back on a prior". This is not inherent to the models, as is evident from the multitude of regularization strategies. This is a goal of these models, which is hopefully better achieved by architectural choices.
- The discussion around distributional uncertainty and evidential methods is fairly sparse and not very clear, despite being the intellectual core of evidential deep learning.
- As a minor point, the use of mu for model parameters is confusing as it is typically used to denote means. A variable with fewer connotations would clarify this discussion, such as phi. Moreover, the relationship between variables is unclear and should be better defined.
- The discussion of prior versus posterior parameterization was quite clear and should be introduced earlier and described in more detail. More generally, the discussion of parameterizing the prior or posterior without any discussion of the role of the likelihood was confusing.
- Relatedly, the presentation of the actual training procedure is quite limited, and an algorithm block would help clarify.
- Terms are a bit strange in a few places; in particular, the use of "foreign data" to denote OoD data and "gold label" to denote the true label are both unusual and I would recommend changing the wording.
- The discussion of future work/open questions is very limited for such a young field. I would recommend the authors try to highlight developing or important questions in evidential deep learning in the last section of their paper.

Requested Changes: Several requested changes are discussed above, under weaknesses. These are all writing changes.
In particular, I think weakening the claims about what evidential deep learning _does_ achieve (versus what it aims to achieve) is necessary for acceptance. However, I would strongly encourage addressing the previously-mentioned clarity issues. Improved clarity is of extremely high importance for a review paper like this, which is intended entirely as a pedagogical resource.

Broader Impact Concerns: None

==================================================

Review 2:

Summary: I thank the authors for incorporating most of the requested changes in the previous review round. This paper reviews recent advances in single-model, single-pass networks for predictive uncertainty quantification. Specifically, the models considered are networks with Dirichlet output---such as prior networks, posterior networks, and evidential deep learning (EDL)---which have a higher degree of freedom in terms of uncertainty quantification due to the fact that the Dirichlet can be seen as a distribution over softmax outputs of standard NNs. Additionally, models for regression are also discussed.

Strengths and Weaknesses:

## Strengths
1. The paper is generally well-written.
2. Almost all published works in this space are included.
3. The discussion is quite detailed.

## Weaknesses
As I mentioned in my previous review, experimental comparisons between prior/posterior nets and other baselines are missing. Since the paper's goal is to encourage people to use EDL methods, not having hard numerical comparisons makes the paper a bit weaker.

Requested Changes:

## Major
The current version of the paper is already quite good for the purpose of giving an overview of EDL methods. However, I think an additional section dedicated to benchmarking the UQ of EDL methods against non-EDL baselines would make this paper even stronger. Additionally, providing an all-in-one EDL library would be very beneficial for practitioners. Though, I also understand that one could argue that this endeavour is better suited for a separate paper.

## Minor Comments
* Footnote 1: Shouldn't it be $P(y \mid x, D) = P(y \mid x, \hat\theta)$?
* Related: Explaining the notation before Sec. 2.1 would be very helpful. E.g. what's the difference between $p$ and $P$.
* Eq. 4:
  * An explanation that $\theta^{(k)} \sim p(\theta \mid D)$ should be added.
  * Missing a $1/K$ factor.
* Page 4: Latin words/names ("Iris versicolor", "Iris virginica", etc) should be italicized.
* Tables 1, 2, 3: I wonder whether the "Model" column is necessary---in general, the prior/posterior/evidential framework can be applied to any type of neural network.
* Related Work, page 17: Another "Bayesian way", which actually is quite similar to the evidential framework, is to approximate the distribution of the softmax outputs (which is a logistic-Normal distribution) with a Dirichlet [1]. Note that, in this case, even if we have Dirichlet output just like prior nets, [1] still has a non-trivial approximate posterior over $\theta$.

## References
1. Hobbhahn et al. "Fast predictive uncertainty for classification with Bayesian deep networks." UAI 2022.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: This paper offers a survey of different recent methods that were proposed within the framework of evidential deep learning (EDL). It categorizes the different approaches based on whether they are used for classification or regression, parameterize the prior or posterior, and whether or not they need OOD data during training.
They also describe EDL in general and provide some discussion of related approaches. Strengths and Weaknesses: I should note that I have not previously reviewed this paper, but have looked at the previous reviews and will try to take that into account. I also would like to let the record show that based on the TMLR author guidelines, it is still not entirely obvious to me that survey papers like this one are clearly within the scope of this journal, but that remains for the editors to decide. Strengths: - EDL is an important recent development and there is a shortage of good surveys on the subject - This version of the paper offers sufficient detail on the different approaches Weaknesses: - I find that considerations regarding the tradeoffs of the different methods and practical guidelines are generally lacking - I still find the discussion about the different kinds of uncertainty confusing - Some related work seems to still be insufficiently discussed Major comments: - As a practitioner formerly unfamiliar with EDL, I feel that after reading the paper, I have a good overview of the different approaches. However, I don't really have any good guidance regarding which approach to use for a given problem at hand. While there is a small paragraph in the Appendix, I would appreciate more detail in the main text regarding the respective tradeoffs between the different approaches, e.g., in terms of performance, computational cost, scalability to larger problems, etc. - I appreciate the discussion of the three kinds of uncertainty, but I'm afraid I'm still confused. Especially since "distributional" uncertainty is not a common concept outside of EDL, I think it could be disambiguated more clearly. For instance, looking at Fig. 4, it seems to me that epistemic uncertainty is a subclass of distributional uncertainty. Surely, Fig 4e seems more "spread" than 4d, so should have a higher epistemic uncertainty? Are there cases where one can have low epistemic uncertainty and high distributional uncertainty or does high distributional uncertainty always imply high epistemic uncertainty? Likewise, can one have high epistemic and low distributional uncertainty? If so, I think this should be illustrated more clearly with examples to tell the two apart. - While it is mentioned somewhere that deep ensembles [1] are still state-of-the-art in most uncertainty estimation scenarios, they seem to not be mentioned at all in the related work. Especially since there is a recent line of work endowing their uncertainties with theoretical guarantees [e.g., 2,3,4], I think this should be discussed. - Similarly, there is a long line of work on single-pass uncertainty methods [e.g., 5,6,7,8], which seems to be entirely missing from the related work, even though it shares the motivation of EDL, namely to provide predictive uncertainties without having to perform model averages. This definitely seems to be a closely related approach that should be discussed. Minor comments: - Fig. 3: the entries in $\beta$ seem to be shuffled a bit wrt. $N_k$ - I find that "Dirichlet Networks" is a somewhat unexpected title for Sec. 3. Should it rather be called "Evidential Deep Learning for Classification" or similar? - Sec. 3.1: problem to from two -> problem from two - it seems like the spelling Categorical vs. categorical is not consistent in the paper - footnotes 3 and 4 somehow don't appear on their respective page - Eq. (11): is $\alpha$ really directly given by $f_\theta$ without an activation function? 
so can the $\alpha$ entries be negative?
- above Eq (15): see the full derivation is given -> see the full derivation given
- below Eq. (19): to force the to model -> to force the model
- above Eq. (20): negativ log-likelihood -> negative log-likelihood
- p. 12 bottom: an thus is -> and thus is
- p. 16, bottom: error is rescale -> error is rescaled
- end of Sec. 4: instead of the decreasing -> instead of decreasing
- footnote 12: mislead -> misled

[1] https://arxiv.org/abs/1612.01474
[2] https://openreview.net/forum?id=BJlahxHYDS
[3] https://proceedings.neurips.cc/paper/2020/hash/0b1ec366924b26fc98fa7b71a9c249cf-Abstract.html
[4] https://openreview.net/forum?id=LAKplpLMbP8
[5] https://arxiv.org/abs/2003.02037
[6] https://arxiv.org/abs/2102.11409
[7] https://arxiv.org/abs/2006.10108
[8] https://arxiv.org/abs/2110.02609

Requested Changes: See Major Comments above.

Broader Impact Concerns: No concerns.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This is the second round of review for this paper. The previous round essentially ran out of time, after the authors had made a significant revision. This time around the reviewers were instructed to take the previous reviews into account and assess whether this revision addressed the previous reviews. One reviewer overlapped, but unfortunately two previous reviewers were unavailable. The previous meta review is pasted below. The reviewers all voted for accept and found the paper relevant and interesting. The reviewers noted that they would have found more empirical comparison useful, particularly to help inform under which circumstances to use which method. However, besides that it seems all the reviewers are satisfied with the paper. Therefore, the recommendation is to accept as-is.

Previous Meta Review: This paper presents a survey of evidential deep learning methods for uncertainty estimation in deep neural network classification. This encompasses a collection of methods that develop the use of the Dirichlet distribution to model the outputs of the classifier. The Dirichlet provides a natural formulation to balance new evidence with a prior over categorical outputs, and thus has advantages for uncertainty estimation and dealing with out-of-distribution data. The reviewers found the paper interesting, of appropriate relevance to the community, and timely, i.e. a well-composed survey about evidential deep learning would be a useful contribution to the community. They found that the survey seems to capture the breadth of the literature on this topic well. There was substantial discussion relating to the paper. In particular, the reviewers all found that the background and explanations of methods were too short and "terse" to serve as an adequate primer or introduction to the area. The concern was essentially that in order to understand the survey, one had to already be intimately familiar with the methods, which of course is somewhat against the purpose of a survey paper. Much of this issue seemed to stem from the authors' attempts to maintain a 12-page limit. Reviewers asked authors to expand the explanations and welcomed going beyond 12 pages to do so. At least one reviewer suggested that adding open-source code would make the paper stronger and increase its impact. It seems that the authors agreed to expand on the paper and address the reviewers' concerns.
They have indeed uploaded a new and significantly longer manuscript that has expanded explanations of the fundamental concepts and underlying methods and addressed other reviewer concerns. However, this version was uploaded only shortly before reviewer decisions were due, and reviewers felt they did not have adequate time to re-review the new version of the paper. As the reviewers felt uncomfortable accepting the paper without carefully reading the new version, the decision recommendation of the reviewers was reject, reject and leaning reject. Therefore, I would recommend a reject. However, I would strongly encourage that the authors resubmit, taking all the reviewer feedback into account, and ask for the same panel of action editor and reviewers (the reviewers have acknowledged that they would be willing to review the new manuscript in the context of these reviews). The reviewers would like an opportunity to check the new version, but felt that it would have to reflect major revisions, and thus require more time than is allocated.

==================================================
# Towards Fair Video Summarization

Anshuman Chhabra *chhabra@ucdavis.edu*
University of California, Davis

Kartik Patwari *kpatwari@ucdavis.edu*
University of California, Davis

Chandana Kuntala∗ *chandanakuntala21@gmail.com*
Indira Gandhi Delhi Technical University for Women

Sristi∗ *sristi0108@gmail.com*
Indira Gandhi Delhi Technical University for Women

Deepak Kumar Sharma *dk.sharma1982@yahoo.com*
Indira Gandhi Delhi Technical University for Women

Prasant Mohapatra *pmohapatra@usf.edu*
University of South Florida, Tampa

∗Equal Contribution.

Reviewed on OpenReview: *https://openreview.net/forum?id=Uj6MRfR1P5*

## Abstract

Automated video summarization is a vision task that aims to generate concise summaries of lengthy videos. Recent advancements in deep learning have led to highly performant video summarization models; however, there has been a lack of attention given to fairness and unbiased representation in the generated summaries. To bridge this gap, we introduce and analytically define the fair video summarization problem, and demonstrate its connections to the well-established problem of fair clustering. To facilitate fair model development, we also introduce the *FairVidSum* dataset, which is similar in design to state-of-the-art video summarization datasets such as *TVSum* and *SumMe*, but also includes annotations for sensitive attributes and individuals alongside frame importance scores. Finally, we propose the SumBal metric for quantifying the fairness of an outputted video summary. We conduct extensive experiments to benchmark the fairness of various state-of-the-art video summarization models. Our results highlight the need for better models that balance accuracy and fairness to ensure equitable representation and inclusion in summaries. For completeness, we also provide a novel fair-only baseline, FVS-LP, to showcase the fairness-utility gap models can improve upon.

## 1 Introduction

With the rapid growth of video content on the internet, there is an increasing need to automatically summarize lengthy videos to provide users with a condensed version that contains the most salient information. This has led to the machine learning (ML) vision task of automated video summarization, which entails generating a short, representative summary video (comprised of key-frames) of a longer input video that showcases its main content and events. In recent years, deep learning (DL) based models have achieved the state-of-the-art (SOTA) in video summarization by leveraging powerful feature representations and learning complex relationships between video frames (Apostolidis et al., 2021a). Furthermore, the video summarization task itself is employed in several downstream practical applications, such as surveillance
These issues of unfairness have been evidenced in many high-impact applications as well.1 Thus, with the growing use of video summarization in numerous applications, it is extremely important to ensure that these automated methods are fair and unbiased, both at the *individual-level* (Berk et al., 2017) and at the *group-level* (Dwork et al., 2012) (such as with regards to *sensitive attributes* like ethnicity and sex). However, no work has been undertaken in fair video summarization, while significant progress has been made in developing fair models for other tasks/fields in ML/DL (Chhabra et al., 2021a; Mehrabi et al., 2021). Figure 1: Video samples from *FairVidSum*. ![1_image_0.png](1_image_0.png) To bridge this gap, we propose and analytically define the *fair video summarization problem*, to allow for the development of fair methods at the individual- and group-level. Our fairness definition is conceptualized similar to the well-studied problem of representationbased fair clustering (Chierichetti et al., 2017; Chhabra et al., 2021a). Another hindrance to fairness evaluation in summarization models stems from a lack of any video summarization datasets containing individuals, and appropriate annotations reflecting their sensitive attributes (such as sex and ethnicity). Current benchmark datasets used to train and evaluate video summarization models are the *TVSum* (Song et al., 2015) and SumMe (Gygli et al., 2014) datasets which do not primarily contain human subjects and lack information regarding any protected groups or sensitive attributes. To this end, we propose the *FairVidSum* dataset containing multiple individuals spanning diverse settings such as interviews, podcasts, and panel discussions. Unlike the other datasets, we provide manual annotations for sensitive attributes (fairness) as well as frame importance scores (utility). Furthermore, we also propose novel metrics to evaluate (un)fairness in current SOTA supervised and unsupervised video summarization models and benchmark them. Finally, for completeness we also propose a novel unsupervised method for fair video summarization named FVS-LP, which is a linear program (Schrijver, 1998) based simple baseline that only optimizes for fairness. Finally, we would also like to emphasize that while video summarization has been studied extensively over the past few decades, our work is primarily concerned with more recent *learning-based* video summarization approaches (proposed by the vision community) as they are highly performant (Apostolidis et al., 2021a). For more details on *classical video summarization* studied by the multimedia community, please refer to (Truong & Venkatesh, 2007) for a survey of existing methods. In summary, through this work, we make the following contributions: - We provide an analytical definition for the *fair video summarization* problem to allow for the development of accurate, fair, and unbiased video summarization models that can help ensure equal representation and inclusion in video content. Frames from a few randomly sampled FairVidSum videos are shown in Figure 1. - We propose the *SumBal* fairness metric to evaluate model fairness, which is derived from the Balance (Chierichetti et al., 2017) metric proposed to measure fairness in unsupervised learning. 
- We introduce the *FairVidSum* benchmark dataset, designed similarly to existing SOTA video summarization benchmarks TVSum (Song et al., 2015) and SumMe (Gygli et al., 2014), which contains annotated individual and group-level fairness information.
- Using *FairVidSum* and SumBal, we benchmark numerous SOTA supervised and unsupervised models for unfairness. We find that since most models do not optimize for fairness, they can be highly unfair, prompting the need for newer methods that can balance both accuracy and fairness.
- For completeness, we also propose FVS-LP: a novel baseline for unsupervised fair video summarization that solely optimizes for fairness, to empirically demonstrate the fairness-utility gap. Based on FVS-LP, we showcase a simple sampling strategy that can control the fairness-utility tradeoff as well.

1Notable examples include Microsoft's Tay chatbot that became racist and homophobic after training on user data online (Neff, 2016), and the COMPAS tool which recommended that black individuals were more likely to reoffend compared to other ethnicities, despite no statistical differences between the individuals themselves (Angwin et al., 2016).

## 2 Related Work

Classical Video Summarization. Video summarization has been studied extensively by the multimedia community over the past few decades (Truong & Venkatesh, 2007). However, these approaches are not learning-based (that is, they do not employ ML/DL models to undertake the video summarization task), and tend to be less performant than learning-based approaches. Moreover, classical approaches often utilize multiple modalities for summarization, whereas learning-based approaches often consider only visual information contained in video frames. For instance, in Ma et al. (2002), the authors propose a simple video summarization method for fusing visual, audio, and linguistic information contained in the video. Other seminal classical summarization approaches include Ngo et al. (2005) where the authors use temporal graph modeling methods; Taskiran et al. (2006) where speech transcripts are used instead of visual/audio information; Chen et al. (2009) which proposes a concept-entity method for summarization; and Yu et al. (2003) which utilizes the annotators' browsing patterns (logs) as an alternative method for future summarization, among others. Note that in this work, we focus on recent learning-based approaches developed by the vision community (Apostolidis et al., 2021a) since they achieve state-of-the-art performance on current benchmarks (*TVSum* and *SumMe*), which we discuss in more detail below.

Learning-Based Video Summarization. Video summarization approaches can be categorized (Apostolidis et al., 2021a) as either *supervised* (frame-level importance scores are used in training) (Zhang et al., 2016; 2019; Huang & Wang, 2019; Fu et al., 2019; Lebron Casas & Koblents, 2019) or *unsupervised* (only visual frame information is used during training) (Mahasseni et al., 2017; Yuan et al., 2019; He et al., 2019; Yaliniz & Ikizler-Cinbis, 2021; Apostolidis et al., 2019; Zhou et al., 2018a) with regards to the learning setting, and as *unimodal* (only visual information is used for training) (Zhang et al., 2016; He et al., 2019; Yuan et al., 2019; Fu et al., 2019) or *multimodal* (other video metadata is also utilized) (Otani et al., 2017; Wei et al., 2018; Lei et al., 2018; Zhou et al., 2018b; Yuan et al., 2017; Song et al., 2016) with regards to the input data type.
*Unsupervised unimodal* approaches model the application scenarios for video summarization better, as annotated frame importance scores and additional video metadata (such as transcripts) are generally hard to obtain (Apostolidis et al., 2021a). For video summarization, the SOTA supervised approaches include DSNet (Zhu et al., 2020) and PGL-SUM (Apostolidis et al., 2021b), among others, and the SOTA unsupervised approaches include CA-SUM (Apostolidis et al., 2022), AC-SUM-GAN (Apostolidis et al., 2020a), SUM-GAN-AAE (Apostolidis et al., 2020b), and SUM-GAN-SL (Apostolidis et al., 2019). All these models are benchmarked on the *TVSum* (Song et al., 2015) and *SumMe* (Gygli et al., 2014) datasets.

Fairness in Machine Learning and Summarization. While video summarization has not yet been studied from the purview of fairness, fair models have been developed for various ML tasks and problem settings (Mehrabi et al., 2021; Chhabra et al., 2021a). These include supervised learning (Agarwal et al., 2018; Zafar et al., 2017), unsupervised learning (Chierichetti et al., 2017; Chhabra et al., 2022; Kleindessner et al., 2019b), recommendation systems (Rastegarpanah et al., 2019; Pitoura et al., 2022), active learning (Anahideh et al., 2022; Shen et al., 2022), and outlier detection (Song et al., 2021; Davidson & Ravi, 2020), among others. Works have also investigated the interplay between fairness and other desirable model behaviors, such as robustness (Chhabra et al., 2023; 2021b). Further, fairness has also been studied for data summarization, such as k-center based summarization (Kleindessner et al., 2019a; Chiplunkar et al., 2020; Angelidakis et al., 2022) and text summarization (Shandilya et al., 2018; Keswani & Celis, 2021). However, these approaches are not general– they are highly specific to the learning algorithm being used (e.g., k-center (Gonzalez, 1985)) and the fairness definitions employed are not consistent with our goal. As such, these are not applicable for fair video summarization. Moreover, fairness can be enforced at the *pre-processing* (before training), *in-processing* (modifying the model), or *post-processing* (after training) stage of the learning pipeline (Mehrabi et al., 2021). As will be clear in subsequent sections, the proposed FVS-LP baseline belongs to the in-processing category, as it enforces fairness constraints during training.

## 3 Problem Statement And Preliminaries

In this section, we first describe the standard video summarization problem and discuss protocols for evaluating the utility of trained models. Then, we introduce the fair video summarization problem– we provide an analytical formulation, along with evaluation metrics and motivating use-cases. Note that we only consider unimodal video summarization models in this paper as these are more commonly used in the context of deep learning (Apostolidis et al., 2021a).

## 3.1 The Video Summarization Problem

Unsupervised Video Summarization. Let a video V consist of n frames X = {x_1, x_2, ..., x_n} where x_i ∈ R^d. These are sampled at some frequency (usually 2 frames per second (Apostolidis et al., 2021a)) from V and hence, n is generally large. Here, d is the dimension of the feature descriptor of the frame (for example, this could represent features extracted per frame using a ResNet (He et al., 2016)).
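As a concrete illustration of this setup, below is a minimal sketch (ours, not from any of the works discussed) of how such per-frame descriptors could be computed, assuming PyTorch/torchvision with a ResNet-50 backbone (so d = 2048); the helper name `frame_features` is hypothetical.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# ResNet-50 without its classification head: the pooled output yields a
# d = 2048 dimensional descriptor per frame.
weights = ResNet50_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(resnet50(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()

@torch.no_grad()
def frame_features(frames):
    """frames: list of PIL images, e.g., sampled from the video at 2 fps."""
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch).flatten(1)   # the feature matrix X, of shape (n, 2048)
```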
An *unsupervised* video summarization model can then generally be denoted as M_unsup, which takes in as input a summary length requirement k ≪ n and the original video frame set X, and outputs a set of key frames constituting the video summary as S = {x_j}_{j=1}^k ⊆ X. That is, M_unsup(X, k) = S, where S ∈ R^{k×d}. The summary length budget k is generally set to be 15% of the original video length, that is, k/n = 0.15.

Supervised Video Summarization. While unsupervised variants are better suited for video summarization (Apostolidis et al., 2021a) since they model the application scenarios in a more realistic manner (human-level annotations are hard to obtain), supervised models are employed as well. A supervised model also takes in as input Y = {y_i}_{i=1}^m where 0 < y_i ≤ 1 is an importance score given by a human annotator for a corresponding frame x_i ∈ X.2 Annotations are only obtained for a small subset of frames m since n can be quite large. Thus, for a supervised model, we can obtain a summary as M_sup(X, k, Y) = S where |S| = k.

Evaluating Models. Trained video summarization models are evaluated based on the agreement of the generated summary for a video with its ground truth summary obtained using the annotated importance scores provided by a given user. Note that obtaining summaries from the importance scores Y is also an optimization problem, since we have a budget k for the length of the summary. Usually, the 0/1 knapsack (Martello et al., 1999) problem is used to obtain user summaries in this manner (Zhang et al., 2016). Thus, if we have u users who annotated video V, we will have summaries available denoted as O_1^V, O_2^V, ..., O_u^V corresponding to each user. The given model generates a summary S^V for a particular video V. We can then obtain the precision and recall between each O_i^V and S^V, denoted as p_i^V and r_i^V, respectively. To evaluate models, we then calculate the pairwise F_β-measure averaged over all user summaries as:

$$F_{\beta}^{V}=\frac{1}{u}\sum_{i=1}^{u}\frac{(1+\beta^{2})\times p_{i}^{V}\times r_{i}^{V}}{(\beta^{2}\times p_{i}^{V})+r_{i}^{V}}\tag{1}$$

Usually, β is set to 1 (Song et al., 2015), so we compute the average pairwise F_1^V-measure for a given video V. These values are then averaged over all videos V in the test set, and the overall F̄_1-measure is calculated. Further, note that for the supervised setting, videos that are used for model training cannot be used in the evaluation/test set. Hence, cross validation is generally undertaken (Apostolidis et al., 2021a) to create 80% (train) - 20% (test) splits. Although this issue of train-test splits does not arise for unsupervised models, for consistency, we follow the same protocol for evaluation of all models.

## 3.2 The Fair Video Summarization Problem

Problem Statement. We now define the fair video summarization problem for a video V. Here, along with X, Y, and k, we are also given (fairness) information regarding g individuals or protected groups as H = {H_1, H_2, ..., H_g} where H_j ∈ {0, 1}^n and H_i^j = 1 implies that individual/group j is present in frame i. Conversely, H_i^j = 0 implies that individual/group j is absent in frame i. Note that unlike importance scores these are not subjective decisions, so we have discrete labels indicating individual/group presence in frames.

2Importance score annotations are generally obtained between 1 (least important) and 5 (most important) and then normalized to lie between 0 and 1.
Note that this abstraction using H_j is very flexible, and can allow for the development of fair models that optimize for individual fairness or group fairness. For individual fairness, this constitutes the idea that all persons in the video should be represented in approximately the same proportions in the generated summary as they appear in the entire video. For group fairness, this could constitute different groups being represented in the same proportions in the summary frames as their proportions in the overall video frames. For example, for *ethnicity* as the sensitive attribute, this would necessitate proportional representation for each ethnicity in summary frames compared to total video frames. This is the very notion of *disparate impact* (Kleinberg et al., 2018) and ensures that no protected group or individual3 be adversely affected as a result of a predictive algorithm. A fair video summarization model M_fair then also takes in as input H and generates summary S for video V as M_fair(X, k, H) = S. Along with optimal utility performance, the model must ensure that the proportions of appearance of entities represented by H are as close as possible to their overall proportions in the video V. The supervised fair variant can also be defined similarly. As is evident from our definition, in this work we only consider optimizing one type of H at a time (that is, sex consisting of *male/female* appearances in frames). However, as our dataset has information regarding multiple groups, this can be studied in future work.

Motivating Examples. Consider a platform such as YouTube (Covington et al., 2016). For simplicity, consider a set of news/podcast videos on the platform that have one male and one female host. Summaries for these videos are generated on the platform as the user browses the homepage. Here too, if a standard summarization model is used, there is no guarantee that the outputted video summary will respect the appearance proportions of the male/female hosts in the original video. In fact, even if the original video has equal appearance proportions for both male/female hosts, the model might skew these proportions heavily in the generated summary. A *fair* video summarization model instead would ensure that both male and female hosts appear in roughly the same amounts as in the original video, leading to fair representation.

Consider another example of a video surveillance application that utilizes video summarization models in the backend (Senthil Murugan et al., 2018; Thomas et al., 2017), used by law enforcement on footage with multiple persons appearing in it. If a standard video summarization model is used, the generated summary footage might have certain individuals appearing for large segments of the summary and might not reflect their actual proportion of appearance in the overall video. As a result, this might lead to a falsified description of the original footage. On the other hand, if a fair video summarization model is used, the individuals would appear in the summary in the same proportions as in the original video footage, resulting in a more *fair* overview. The same arguments can be made with regards to people from different ethnicities or genders appearing in the summary footage, preventing discrimination and bias at the group-level.

Evaluating Unfairness. Now that we have described the fair summarization problem, it is important to propose metrics for evaluating the discrepancies in fairness.
Our basic goal is to measure whether or not each entity constituting H follows the same proportions in the summary as they do in the original video. To do so, we propose the SumBal metric, which is a modified version of the Balance fairness metric generally employed in fair unsupervised learning tasks (Bera et al., 2019; Chierichetti et al., 2017):

$$\operatorname{SumBal}(S,X,\mathcal{H})=\operatorname*{min}_{H^{g}\in\mathcal{H}}\operatorname*{min}\left\{R(S,X,H^{g}),{\frac{1}{R(S,X,H^{g})}}\right\}\quad\text{where}\quad R(S,X,H^{g})=\sum_{i=1}^{n}{\frac{H_{i}^{g}}{n}}\Bigg/\sum_{x_{j}\in S}{\frac{H_{j}^{g}}{k}}\tag{2}$$

Here, R(S, X, H^g) is the ratio of the proportion of appearances of group/individual g in the overall video to that in the generated summary S. We take the minimum between R(S, X, H^g) and 1/R(S, X, H^g) to account for both under-representation and over-representation cases. Finally, SumBal returns the minimum over all groups/individuals and hence SumBal ∈ [0, 1]. We can take a simple example where we have a video with two individuals, A and B. Person A appears in 20% of the video frames and Person B appears in 50% of the frames, with 30% of frames having no individuals. Now, assume that we generate a summary using a model which has the following proportions– Person A appears in 40% of the summary frames (over-representation) and Person B appears in 30% of the summary frames (under-representation). Then the SumBal term for Person A would be calculated as min{0.2/0.4, 0.4/0.2} = 0.5 and for Person B would be calculated as min{0.5/0.3, 0.3/0.5} = 0.6. Since we take the minimum over all groups/individuals to calculate SumBal for the video, we get min{0.5, 0.6} = 0.5, and the violating individual (with lowest fairness) is Person A.

3For brevity, at times we use the term protected groups to also refer to the set of individuals, but make it clear from context.
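To make the metric concrete, below is a minimal sketch of computing SumBal (our illustration; the helper name `sumbal` is hypothetical). Groups that never appear in the summary are assigned a balance of 0, matching the limit of Equation 2, and the usage example reproduces the Person A/B calculation above.

```python
import numpy as np

def sumbal(summary_idx, H):
    """Compute SumBal (Equation 2).

    H: (g, n) binary array; H[j, i] = 1 iff group/individual j appears in frame i.
    summary_idx: indices of the k frames selected for the summary S.
    """
    n, k = H.shape[1], len(summary_idx)
    balances = []
    for Hg in H:                                  # iterate over groups/individuals
        video_prop = Hg.sum() / n                 # proportion in the full video
        summary_prop = Hg[summary_idx].sum() / k  # proportion in the summary
        if video_prop == 0 or summary_prop == 0:
            balances.append(0.0)                  # extreme under-/over-representation
            continue
        r = video_prop / summary_prop             # R(S, X, H^g)
        balances.append(min(r, 1.0 / r))
    return min(balances)

# Worked example from the text: n = 20 frames, k = 10 summary frames.
H = np.zeros((2, 20), dtype=int)
H[0, :4] = 1            # Person A: 20% of the video frames
H[1, 4:14] = 1          # Person B: 50% of the video frames
S = [0, 1, 2, 3, 4, 5, 6, 14, 15, 16]   # A: 40%, B: 30% of the summary
print(sumbal(S, H))     # prints 0.5, with Person A as the violating individual
```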
## 3.3 Relating Fair Video Summarization To Fair Clustering

Given a dataset where each sample belongs to some protected group, the unsupervised task of group-level fair clustering involves partitioning samples in the dataset into k *clusters* according to some utility objective, while ensuring that each cluster has the same proportion of samples from each protected group as in the original dataset (Chhabra et al., 2021a; Chierichetti et al., 2017; Chhabra & Mohapatra, 2022; Bera et al., 2019). This is thematically similar to our notion of fairness in video summarization– video frames selected in the summary should have high utility, while ensuring that each protected group is represented in the summary output in the same proportions as in the original video. There are also significant differences between these two problems: for instance, in fair clustering the entire dataset is selected, and the number of selections k corresponds to k clusters where proportional fairness needs to be ensured for each cluster. However, for fair video summarization, part of the set of samples (that is, frames) are absent in the output, and k corresponds to a subset of the dataset itself over which proportional fairness constraints need to be ensured. Moreover, in fair clustering, each sample is assumed to have some protected group membership, whereas video frames might have zero protected group appearances (e.g., in the case when no individuals are present in a frame), changing the landscape of the problem considerably. Another issue stems from the value of k itself– in clustering, k is generally not very large, whereas in video summarization k can be orders of magnitude larger as it is typically set to be 15% of all video frames. It is well-known that for such large k, there is often a breakdown in cluster quality and computational efficiency (Fränti & Sieranoja, 2019; Pelleg et al., 2000). These challenges make it non-trivial to utilize fair clustering for fair video summarization directly without considerable modifications. However, clustering based approaches are often employed for data summarization (Ahmed, 2019), and we believe future work can exploit these connections to propose improved methods for fair video summarization as well.

## 4 Benchmarking Fairness Using **FairVidSum**

## 4.1 Curating Videos And Annotation Details

Collecting Videos. Our goal is to select videos that feature multiple individuals in diverse settings that allow us to annotate and account for fairness information. Moreover, we wanted *FairVidSum* to be similar to *TVSum* and *SumMe* so that existing video summarization models can utilize it in a plug-and-play manner. Similar to *TVSum* (Song et al., 2015), we collect videos from YouTube (Covington et al., 2016). We use the search terms "panel discussions", "podcasts", "interviews", "debates", "news", "discussions", and combinations of these keywords. We utilized these categories mainly because our primary requirement was to collect videos with human subjects from diverse backgrounds, which these categories guaranteed. Moreover, we wanted a certain number of individuals (≥ 3) in each video, which is also generally the case for these categories. Further, we restrict our videos to ones with a Creative Commons license, that lie between 1-4 minutes, and that contain more than a single shot. Using this strategy we obtain 22 videos. While the *SumMe* dataset (Gygli et al., 2014) has no videos that meet these criteria, *TVSum* has a small set of videos (such as in the "documentaries" category) that we can use. In this manner, we also add another 12 videos from TVSum to *FairVidSum* and annotate them for fairness information. Thus, *FairVidSum* currently has a total of 34 videos, in line with current video summarization datasets (SumMe has 25 videos and TVSum has 50 videos). We show the length distribution of these videos in Figure 2.

Figure 2: Video length distribution.

Figure 3: Violin plots showcasing the distribution of individuals and protected groups / sensitive attributes across randomly sampled videos; panel (c) shows the distribution of ethnicity sensitive attributes across a subset of videos.

Annotating Videos with Importance Scores. We follow much of the same procedure as used in TVSum (Song et al., 2015). We employ 10 annotators who consist of individuals from diverse fields in either graduate or post-graduate study. Annotators are first required to watch videos on mute in a single setting to ensure that the annotation scores are only based on visual information (Song et al., 2015). Similarly, to alleviate chronological bias (Song et al., 2015), frames are shuffled randomly. Next, to obtain scores, annotators are shown uniformly sampled frames at 1/2 frames per second. Each annotator annotates every video and is required to label the provided frames with a score from 1 (least important) to 5 (most important) to be included in the summary. This task excludes the 12 TVSum additions as those already possess annotation scores.
In this manner, we obtain 15400 responses total over all videos. Note that the number of annotators employed for this purpose is also satisfactory, as our annotation consistency analysis will later show.

Annotating Videos with Fairness Information. Other than these subjective annotations for importance scores, part of our dataset requires objective annotations for individuals and their sensitive attribute information. To do so, we employ 4 annotators who collectively annotate all 34 videos with this information. Note that compared to annotation scores, which are generally obtained for a subset of frames, fairness information needs to be collected for the entire video to calculate unfairness (such as using the SumBal metric). Thus, here, we annotate over 168120 frames total with information regarding different individuals appearing in frames and their sensitive attributes with respect to sex and ethnicity. For sex we annotate as Male/Female and for ethnicity we annotate for White, Black, Middle Eastern, Asian, and Hispanic.

## 4.2 Distribution Of Individuals And Sensitive Attributes Across Videos

We aim to analyze the distribution of individuals appearing across videos. For this purpose, we randomly sample 8 videos out of 34, and plot the distributions of individuals as well as the distributions of sex and ethnicity protected groups in those videos as a function of their video frames using violin plots. These are visualized in Figure 3. The distributions for all the remaining videos are shown in Appendix A due to space constraints. It is evident that both the number as well as the frame-level distribution of individuals and protected groups varies widely across videos. This demonstrates one of the challenges associated with developing fair summarization models, as they need to be able to account for fairness in many diverse settings. We also analyze group-level information for each video as a function of the annotation trend. Here, we can visualize the mean importance score for a video as a function of the frame indices, while also denoting sensitive attribute information for the frames. We demonstrate this for *Video 19* and sex as the protected group in Figure 4a and for *Video 16* with *ethnicity* as the protected group in Figure 4b. It can be seen that group-level information varies widely, and there is little correlation between importance scores and group-level information that would allow existing models to be fair.

Figure 4: Mean importance (annotated) scores for (a) *Video 19* with protected group labels for sex and (b) for *Video 16* with protected group labels for *ethnicity*.

## 4.3 Annotator Consistency

We now cover another aspect of our dataset– the annotations, and their consistency. Annotator consistency with respect to video summarization is usually measured using Cronbach's alpha (CA) (Cronbach, 1951). A higher CA value indicates more consistency among annotations. For *FairVidSum*, the CA value is 0.995. This is much higher than both *SumMe* (CA=0.74) and *TVSum* (CA=0.81). We posit that this is because 1) FairVidSum has a lot of intra-video homogeneity (a byproduct of our initial search terms) compared to both TVSum and SumMe, which have more diversity in video categories, and 2) the visual complexity of certain TVSum/SumMe videos over FairVidSum results in more variance in annotation (e.g., TVSum has some challenging videos in categories such as Dog Show and Flash Mob Gathering with rapid scene changes).
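For reference, below is a minimal sketch of how such a consistency score can be computed (ours; it uses the standard Cronbach's alpha formula and assumes annotators play the role of the scale "items", which is an assumption about the exact protocol used):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (num_frames, num_annotators) matrix of importance scores for one video."""
    k = scores.shape[1]                              # number of annotators ("items")
    item_vars = scores.var(axis=0, ddof=1).sum()     # per-annotator score variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of per-frame score sums
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Example: 10 annotators scoring 100 frames with a strongly shared trend.
rng = np.random.default_rng(0)
shared = rng.integers(1, 6, size=(100, 1)).astype(float)   # per-frame "true" score
scores = np.clip(shared + 0.3 * rng.standard_normal((100, 10)), 1, 5)
print(cronbach_alpha(scores))   # close to 1 for highly consistent annotators
```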
Annotator consistency can also be observed qualitatively for a given video. We can visualize this as a heatmap with rows as individual annotators, columns as respective video frames, and each cell thus representing the annotator's importance score for that frame. We show this in Figure 5 for *Video 19*. It can be seen that for most frames, annotators agree on similar importance scores.

Figure 5: Annotator consistency matrix for *Video 19*.

## 4.4 Discussion On Limitations

It is important to note that a number of possible improvements can be made to *FairVidSum* by analyzing and alleviating some of its current limitations. For instance, a current limitation of the dataset is the set of categories that the videos are sourced from ("panel discussions", "interviews", "debates", "news", among others). Videos from these categories often have repetitive frames and at least a few consistent individuals appear throughout the video. While these restricted source domain videos are useful for studying the fair video summarization problem as a first step, there are many other video categories that also might require fairness enforcement, such as those with a large number of individuals (> 1000) appearing in them for a very short duration of time (such as concert recordings, comedy shows, or large lectures). The challenges with enforcing fairness in such a setting are manifold: annotating fairness information due to the large number of individuals would be a non-trivial task, and the large growth in g (the number of groups/individuals) would possibly lead to small overall proportions for each group/individual, resulting in marginal SumBal scores. For the latter, since SumBal scores might tend to zero at such large scale, new fairness evaluation metrics would need to be proposed as well. Moreover, incorporating videos from other source categories, such as those with rapid scene changes (for example, sports sequences or movie montages), might lead to other unforeseen challenges as well, both in data curation/annotation and in fairness enforcement. We defer the study and analysis of such problems to future work.

## 5 The FVS-LP Fair-Only Baseline

## 5.1 FVS-LP

We now present a baseline for fair video summarization– the Fair Video Summarization Linear Program (FVS-LP), which is a simple linear program (Schrijver, 1998) approach that only optimizes for fairness and selects frames such that the group proportions in the selected summary are as close as possible to the group proportions of the overall video. Note that this contribution is analogous to having a constant predictor in fair classification (Mehrabi et al., 2021)– as it predicts a constant, it will always achieve maximum fairness, but low utility/accuracy. However, in fair video summarization, even such a simple fair-only baseline is not as conceptually straightforward as a constant/random predictor, and hence, we propose FVS-LP to showcase the fairness-utility tradeoff gap.

Let 0_m and 1_m denote m-length vectors of all zeros and all ones, respectively. We have a given video V and its set of frames X, along with the set of group memberships H. First, we transform H to matrix form for formulating the LP. Let G ∈ {0, 1}^{n×g} be derived from H such that each row vector G_i ∈ {0, 1}^g, i ∈ [n], represents a frame and each of its entries is either 0 for absence or 1 for presence of a group in the frame.
Let $0 \leq \mathbf{x} \leq 1$ be the optimization variable, where each entry of $\mathbf{x} \in \mathbb{R}^n$ indicates if a frame is selected in the summary; the LP can then be written as Equation 3:

$$
\begin{array}{rl}
\underset{\mathbf{x}}{\text{minimize}} & \mathbf{0}_{n}^{\top}\mathbf{x} \\
\text{subject to} & \mathcal{G}^{\top}\mathbf{x} = k \cdot \frac{1}{n}\sum_{i=1}^{n}\mathcal{G}_{i} \\
& \mathbf{1}_{n}^{\top}\mathbf{x} = k \\
& 0 \leq \mathbf{x} \leq 1.
\end{array}\tag{3}
$$

Note that since we are only optimizing for fairness, we do not care about utility, and our optimization objective can simply be a vector of all zeros.

Table 1: Comparison of SOTA video summarization approaches on *FairVidSum*. The utility and fairness averages are calculated across all five splits. The violating groups that achieve the minimum fairness SumBal scores are also presented. Results on FVS-LP (our fairness baseline) along with Random and Human baselines are also provided. Blue/red indicates highest/lowest performance.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 14.92 | 0.9497 | 0.8814 | Male (Vid. #30) | 0.9468 | 0.6670 | Asian (Vid. #33) | 0.8747 | 0.6670 | Person 13 (Vid. #33) |
| Human | - | 68.91 | 0.4605 | 0.0000 | Female (Vid. #30) | 0.5503 | 0.0000 | Asian (Vid. #25) | 0.2773 | 0.0000 | Person 2 (Vid. #18) |
| CA-SUM | Unsupervised | 62.78 | 0.5201 | 0.0000 | Female (Vid. #30) | 0.5468 | 0.0000 | Asian (Vid. #24) | 0.2441 | 0.0000 | Person 4 (Vid. #8) |
| AC-SUM-GAN | Unsupervised | 64.33 | 0.5176 | 0.0000 | Female (Vid. #30) | 0.5455 | 0.0000 | Asian (Vid. #24) | 0.2616 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-AAE | Unsupervised | 63.81 | 0.5222 | 0.1302 | Female (Vid. #26) | 0.5665 | 0.0000 | Asian (Vid. #24) | 0.2739 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-SL | Unsupervised | 64.92 | 0.5254 | 0.0000 | Female (Vid. #30) | 0.5661 | 0.0000 | Asian (Vid. #24) | 0.2550 | 0.0000 | Person 4 (Vid. #8) |
| SUM-IND | Unsupervised | 50.57 | 0.5677 | 0.0000 | Female (Vid. #24) | 0.5889 | 0.0000 | Asian (Vid. #24) | 0.2541 | 0.0000 | Person 4 (Vid. #8) |
| DSNet | Supervised | 63.69 | 0.5358 | 0.0000 | Female (Vid. #30) | 0.5478 | 0.0000 | Asian (Vid. #24) | 0.2706 | 0.0000 | Person 1 (Vid. #25) |
| VASNet | Supervised | 64.11 | 0.4622 | 0.0000 | Female (Vid. #25) | 0.5391 | 0.0000 | Asian (Vid. #24) | 0.2515 | 0.0000 | Person 4 (Vid. #8) |
| PGL-SUM | Supervised | 63.75 | 0.4804 | 0.1042 | Female (Vid. #34) | 0.5374 | 0.0000 | Asian (Vid. #24) | 0.2575 | 0.0000 | Person 4 (Vid. #8) |
| FVS-LP (Sex) | Unsupervised | 15.69 | 0.9987 | 0.9960 | Female (Vid. #4) | 0.7411 | 0.0000 | Hispanic (Vid. #8) | 0.3062 | 0.0000 | Person 3 (Vid. #8) |
| FVS-LP (Ethnicity) | Unsupervised | 13.46 | 0.6642 | 0.0000 | Female (Vid. #25) | 0.9980 | 0.9822 | Asian (Vid. #33) | 0.2727 | 0.0000 | Person 6 (Vid. #25) |
| FVS-LP (Individual) | Unsupervised | 14.13 | 0.9556 | 0.6289 | Male (Vid. #28) | 0.9471 | 0.6559 | White (Vid. #19) | 0.9932 | 0.9704 | Person 6 (Vid. #25) |
Now, as is evident, the first constraint simply ensures that the sum of the selected samples' group memberships is equal to $k$ times the group proportions of the overall video. The second constraint ensures that the number of selected samples must be exactly $k$. After solving the above LP for $\mathbf{x}$, we can obtain the indices of the summary frames selected from the set of frames $X$ by rounding the solution, as $I = \{i : \mathrm{round}(x_i) = 1\}$. Then, we can get the summary $S$ of video $V$ as $S = \{X_i : i \in I\}$.

## 5.2 Mixing Sampling Strategy For Controlling Fairness-Utility Tradeoff

Since FVS-LP can be utilized to obtain fair summaries and current SOTA video summarization models can output summaries that have high utility, we propose a simple sampling strategy that *mixes* frames from both of these to control the fairness-accuracy tradeoff. We start with two distinct summaries: $S_{acc}$, generated from an existing model that optimizes for accuracy, and $S_{fair}$, generated using the FVS-LP baseline for fairness. To produce a summary that blends both of these objectives, we introduce the mixing strategy as follows. A mixing ratio, denoted as $\lambda$, determines the proportion of frames from the fairness-optimized summary $S_{fair}$ that will be integrated into the accuracy-optimized summary $S_{acc}$. Specifically, for a given $\lambda$, we randomly select $\lambda \cdot |S_{acc}|$ frames from $S_{fair}$. These selected frames are then incorporated into $S_{acc}$ by randomly substituting an equal number of its frames. The frames can then be sorted using timestamps. This procedure also ensures that the merged summary maintains the original summary length. Through this mixing sampling strategy, the resultant summary strikes a balance between accuracy and fairness.

## 6 Results

We now present results for benchmarking SOTA supervised and unsupervised models on *FairVidSum*. We utilize the following unsupervised models: CA-SUM (Apostolidis et al., 2022), AC-SUM-GAN (Apostolidis et al., 2020a), SUM-GAN-AAE (Apostolidis et al., 2020b), SUM-GAN-SL (Apostolidis et al., 2019), SUM-IND (Yaliniz & Ikizler-Cinbis, 2021), and the following supervised models: DSNet (Zhu et al., 2020), VASNet (Fajtl et al., 2019), PGL-SUM (Apostolidis et al., 2021b). Moreover, we also provide baseline results for a randomly generated summary (Random) and a summary generated using the knapsack algorithm on the average human-annotated importance scores (Human). Finally, we also present results for FVS-LP while optimizing for each protected group type (*individual*, *sex*, and *ethnicity*). For each model/baseline, we provide the group members that achieve the minimum fairness values as well. Code, reproducibility, and miscellaneous dataset details are provided in Appendix D.

Table 2: Comparison of SOTA video summarization models on *FairVidSum* for evaluation Split #1.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 14.74 | 0.9473 | 0.8818 | Male (Vid. #26) | 0.97 | 0.9415 | White (Vid. #26) | 0.8639 | 0.7477 | Person 2 (Vid. #19) |
| Human | - | 67.08 | 0.4800 | 0.0000 | Female (Vid. #25) | 0.6717 | 0.0000 | Asian (Vid. #25) | 0.3080 | 0.0000 | Person 1 (Vid. #25) |
| CA-SUM | Unsupervised | 62.11 | 0.5104 | 0.1044 | Female (Vid. #34) | 0.6746 | 0.4292 | Asian (Vid. #9) | 0.3042 | 0.0000 | Person 1 (Vid. #25) |
| AC-SUM-GAN | Unsupervised | 63.99 | 0.4481 | 0.0000 | Female (Vid. #25) | 0.6467 | 0.0000 | Asian (Vid. #25) | 0.3261 | 0.0000 | Person 1 (Vid. #25) |
| SUM-GAN-AAE | Unsupervised | 63.44 | 0.4887 | 0.1302 | Female (Vid. #26) | 0.6921 | 0.4567 | Asian (Vid. #9) | 0.2952 | 0.0000 | Person 1 (Vid. #25) |
| SUM-GAN-SL | Unsupervised | 64.77 | 0.5298 | 0.2156 | Male (Vid. #26) | 0.7056 | 0.4567 | Asian (Vid. #9) | 0.2792 | 0.0000 | Person 1 (Vid. #25) |
| SUM-IND | Unsupervised | 49.47 | 0.5415 | 0.3868 | Male (Vid. #26) | 0.6641 | 0.4503 | Asian (Vid. #9) | 0.3019 | 0.0000 | Person 1 (Vid. #25) |
| DSNet | Supervised | 63.24 | 0.4432 | 0.1049 | Female (Vid. #31) | 0.6095 | 0.2742 | Asian (Vid. #25) | 0.3040 | 0.0000 | Person 1 (Vid. #25) |
| VASNet | Supervised | 66.14 | 0.3386 | 0.0000 | Female (Vid. #25) | 0.5866 | 0.0000 | Asian (Vid. #25) | 0.2800 | 0.0000 | Person 1 (Vid. #25) |
| PGL-SUM | Supervised | 65.18 | 0.4392 | 0.1042 | Female (Vid. #34) | 0.5869 | 0.2631 | Asian (Vid. #34) | 0.2701 | 0.0000 | Person 1 (Vid. #25) |
| FVS-LP (Sex) | Unsupervised | 17.03 | 0.9990 | 0.9975 | Female (Vid. #25) | 0.8357 | 0.6559 | White (Vid. #19) | 0.4417 | 0.0000 | Person 3 (Vid. #19) |
| FVS-LP (Ethnicity) | Unsupervised | 12.13 | 0.3967 | 0.0000 | Female (Vid. #25) | 0.9993 | 0.9983 | Asian (Vid. #25) | 0.3186 | 0.0000 | Person 3 (Vid. #19) |
| FVS-LP (Individual) | Unsupervised | 16.81 | 0.9562 | 0.8250 | Male (Vid. #34) | 0.8999 | 0.6559 | White (Vid. #19) | 0.9930 | 0.9704 | Person 6 (Vid. #25) |
**Training and Evaluation.** We follow the standard evaluation procedure in existing video summarization literature, which involves randomly splitting the entire dataset into multiple parts or splits (typically 5), with each split subjected to an 80:20 train/test partitioning (Apostolidis et al., 2022; 2020a;b; Zhu et al., 2020; Fajtl et al., 2019; Apostolidis et al., 2021b; Kanafani et al., 2021). The models are trained on the training set of a given split and subsequently evaluated on the corresponding test set within the same split. A detailed breakdown of the video distribution for all 5 train/test splits is presented in Appendix B. The F1-measure evaluates the similarity between a model-predicted summary and a user-defined summary by assessing their overlap. The F1 scores are calculated for each individual video, averaged over the entirety of a given split, and then averaged across all 5 splits. SumBal is also evaluated per video, and the same procedure is followed to obtain averages.

**Details on Model Training.** We downsample videos to 1/2 frames per second as our video frames are often repetitive. Following prior work, we utilize GoogleNet (Szegedy et al., 2015) (trained on ImageNet) to extract frame features from the *pool5* layer, which outputs features of dimensionality 1024. When training the various models, we adhere to their original procedures and generally employ default settings and hyperparameters. Any alterations or adjustments to the default training parameters are detailed in Appendix B. To ensure a fair comparison, we employ the same splits for training/testing across all models.

**FairVidSum Benchmarking Results.** Since we have 5 evaluation splits, we present average results over all splits in Table 1.
We also present results for Split #1 in Table 2, and for the remaining evaluation splits in Appendix C due to space constraints. As can be seen in Tables 1 and 2, both unsupervised and supervised SOTA models tend to achieve high utility performance in terms of the F1-measure (> 60) averaged over all videos and all splits.4 However, these models have low fairness performance: the minimum SumBal scores for all three group types (*sex*, *ethnicity*, and *individual*) most often tend to be 0 and are generally < 0.5. Interestingly, the group members that achieve the lowest fairness values across all splits and videos tend to consistently be *Female* for *sex* as the protected group, *Asian* for *ethnicity* as the protected group, and *Person 4* for *individual* fairness. We also present average SumBal scores, which are higher at times but have very large variance, showcasing that models are not inherently optimizing for fairness. The human-annotated summary also fares similarly to the SOTA models, as it is annotated only for importance.

Moreover, the randomly generated summary has very low utility performance, typically with F1-measure values less than 15, which follows from the fact that summary frames are picked completely at random. However, the random summary has high fairness scores. We hypothesize that this is the case because picking frames uniformly at random preserves, in expectation, the proportion in which each group member appears in the video. As a result, random frame selection leads to improved fairness.

Furthermore, we provide results for three versions of FVS-LP, each instantiated to optimize one type of protected group/sensitive attribute. For each of these, FVS-LP achieves the highest fairness performance across all models and baselines for the group it is optimizing for. However, it does not lead to good utility performance, which is to be expected as it only directly optimizes for fairness. This implies that while there is a gap in fairness that can be optimized for, optimizing for both fairness and performance is a non-trivial task. For future work, methods that jointly optimize both fairness and utility can thus be proposed.

4This utility performance is in line with SOTA results for *TVSum* and *SumMe*; refer to (Song et al., 2015; Apostolidis et al., 2021a) for details.

![11_image_0.png](11_image_0.png)

Figure 6: Results for the mixing sampling strategy averaged over all splits. The top row demonstrates the effect of increasing the mixing ratio (λ) on utility (F1 Measure), whereas the bottom row showcases the effect on fairness (SumBal). Each column represents mixing done for a particular protected group (ethnicity/sex/individual). λ is varied in increments of 0.1 from 0 to 1.

Note that the trends between the average performance and Split #1 are very similar, and this is also the case for the other evaluation splits (Appendix C). Generally, we observe that supervised models tend to exhibit lower average SumBal values. This trend might be a direct consequence of these models' learning process, which strives to closely align with human or ground truth summaries that, as previously mentioned, are solely optimized for utility. This observation further underscores the importance of incorporating a fairness evaluation and learning criterion in the model design and training process. Another crucial insight from our benchmarking analysis is the distinct difficulty in upholding individual fairness.
This is clearly evident from the consistently lowest average SumBal values (compared with *sex* and *ethnicity*) and the predominant minimum values of zero. A SumBal value of zero essentially indicates that a group or individual, though present in the original video, has been completely excluded from the generated summary.

**Mixing Strategy Results to Showcase the Fairness-Utility Tradeoff.** We now present results for the mixing sampling strategy based on FVS-LP described in Section 5.2. Results averaged over all splits are shown in Figure 6, and results for each of the individual splits are shown in Appendix C.2. In Figure 6, each column indicates mixing undertaken for a particular protected group: *ethnicity*, *sex*, and *individual*. Each line corresponds to a SOTA model whose original utility-optimized summary is mixed with the fair summary (FVS-LP for the particular protected group). The top row of the figure showcases utility results (F1 Measure) and the bottom row denotes fairness (minimum value of SumBal obtained). It can be seen that as the mixing ratio λ is increased from 0 to 1, in increments of 0.1, the utility starts to decrease and fairness increases. Thus, λ can be used as a fairness-utility tradeoff parameter for balancing fairness and utility. Results for the individual splits in Appendix C.2 exhibit similar trends.

**Discussion on Results.** Video summarization models are designed to extract the most representative and salient content from videos, emphasizing features like motion, distinct objects, and long-range temporal sequences. Current models use mechanisms such as self-attention (Vaswani et al., 2017), GANs (Goodfellow et al., 2020), and LSTMs (Hochreiter & Schmidhuber, 1997) to drive this extraction process, seeking to replicate human-like summaries that focus on pivotal moments and high-saliency content. We believe that this sole focus on saliency results in the fairness issues exhibited by these models. Models may prioritize certain segments or individuals that align with their understanding of importance, which is often derived from visual and temporal cues. For instance, high-contrast or motion-intensive sequences might overshadow static, less visually distinctive moments, regardless of the duration or frequency of appearance of entities within those moments. Furthermore, most, if not all, of the individual components (GANs, LSTMs, transformers, etc.) used by models have been shown to exhibit bias and unfairness in prior research (refer to (Kenfack et al., 2021; Sun et al., 2021; Qiang et al., 2023) for more details).

Moreover, human annotations, which serve as ground truths for models, might themselves be biased. Human annotators may not be focused on representative fairness, but only on capturing what they perceive as the most noteworthy moments, causing models to adopt similar biases. This is evidenced by the slightly lower average SumBal scores for fully supervised models, which largely rely on ground truth annotations. To summarize: fairness aims for a proportional representation of entities or sensitive attributes, while utility focuses on capturing the most informative and representative moments guided by visual and temporal cues. Thus, we believe that models optimized for accuracy may inadvertently neglect frequently occurring yet less visually impactful segments, and vice versa.

## 7 Conclusion

In this paper, we propose the fair video summarization problem, which has thematic connections to the problem of fair clustering.
We introduce the *FairVidSum* dataset, which consists of videos with annotations of frame-level importance scores, the appearance of individuals across frames, and information regarding their sensitive attributes such as sex and ethnicity. We also propose the SumBal metric, which measures the disparity in fairness of the generated summary with regard to the original video. Through *FairVidSum* and SumBal, we benchmark a number of existing SOTA video summarization models and find that these generate highly unfair summaries, as they do not directly optimize for fairness. Finally, we also propose the FVS-LP method, which is a linear programming baseline optimized only for fairness, analogous to a constant predictor in fair classification.

Our paper constitutes the first work on fair video summarization, and hence, impacts the community in numerous ways.5 It is important to ensure that learning algorithms account for equitable representation. Through this work, we seek to bridge this gap for the task of video summarization. Furthermore, there are multiple avenues for future work. Better fair video summarization models, which improve upon both accuracy and fairness, can be developed using *FairVidSum*. Pre- and post-processing fair approaches can also be proposed. Methods that optimize for multiple protected groups jointly can also be investigated. Novel fairness definitions can also be proposed, similar to the trend observed in the fair clustering community (Chhabra et al., 2021a).

5Additional information on broader impact and ethics is provided in Appendix E.

## References

Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In *International Conference on Machine Learning*, pp. 60–69. PMLR, 2018.

Mohiuddin Ahmed. Data summarization: a survey. *Knowledge and Information Systems*, 58(2):249–273, 2019.

Hadis Anahideh, Abolfazl Asudeh, and Saravanan Thirumuruganathan. Fair active learning. *Expert Systems with Applications*, 199:116981, 2022.

Haris Angelidakis, Adam Kurpisz, Leon Sering, and Rico Zenklusen. Fair and fast k-center clustering for data summarization. In *International Conference on Machine Learning*, pp. 669–702. PMLR, 2022.

Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. In *Ethics of Data and Analytics*, pp. 254–264. Auerbach Publications, 2016.

Evlampios Apostolidis, Alexandros I Metsai, Eleni Adamantidou, Vasileios Mezaris, and Ioannis Patras. A stepwise, label-based approach for improving the adversarial training in unsupervised video summarization. In *Proceedings of the 1st International Workshop on AI for Smart TV Content Production, Access and Delivery*, pp. 17–25, 2019.

Evlampios Apostolidis, Eleni Adamantidou, Alexandros I Metsai, Vasileios Mezaris, and Ioannis Patras. Ac-sum-gan: Connecting actor-critic and generative adversarial networks for unsupervised video summarization. *IEEE Transactions on Circuits and Systems for Video Technology*, 31(8):3278–3292, 2020a.

Evlampios Apostolidis, Eleni Adamantidou, Alexandros I Metsai, Vasileios Mezaris, and Ioannis Patras. Unsupervised video summarization via attention-driven adversarial learning. In *MultiMedia Modeling: 26th International Conference, MMM 2020, Daejeon, South Korea, January 5–8, 2020, Proceedings, Part I 26*, pp. 492–504. Springer, 2020b.

Evlampios Apostolidis, Eleni Adamantidou, Alexandros I Metsai, Vasileios Mezaris, and Ioannis Patras. Video summarization using deep neural networks: A survey.
*Proceedings of the IEEE*, 109(11):1838–1863, 2021a.

Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras. Combining global and local attention with positional encoding for video summarization. In *2021 IEEE International Symposium on Multimedia (ISM)*, pp. 226–234. IEEE, 2021b.

Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras. Summarizing videos using concentrated attention and considering the uniqueness and diversity of the video frames. In *Proceedings of the 2022 International Conference on Multimedia Retrieval*, pp. 407–415, 2022.

Suman Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. *Advances in Neural Information Processing Systems*, 32, 2019.

Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. A convex framework for fair regression. *arXiv preprint arXiv:1706.02409*, 2017.

Bo-Wei Chen, Jia-Ching Wang, and Jhing-Fa Wang. A novel video summarization based on mining the story-structure and semantic relations among concept entities. *IEEE Transactions on Multimedia*, 11(2):295–312, 2009.

Anshuman Chhabra and Prasant Mohapatra. Fair algorithms for hierarchical agglomerative clustering. In *2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)*, pp. 206–211. IEEE, 2022.

Anshuman Chhabra, Karina Masalkovaitė, and Prasant Mohapatra. An overview of fairness in clustering. *IEEE Access*, 9:130698–130720, 2021a.

Anshuman Chhabra, Adish Singla, and Prasant Mohapatra. Fairness degrading adversarial attacks against clustering algorithms. *arXiv preprint arXiv:2110.12020*, 2021b.

Anshuman Chhabra, Adish Singla, and Prasant Mohapatra. Fair clustering using antidote data. In *Algorithmic Fairness through the Lens of Causality and Robustness Workshop*, pp. 19–39. PMLR, 2022.

Anshuman Chhabra, Peizhao Li, Prasant Mohapatra, and Hongfu Liu. Robust fair clustering: A novel fairness attack and defense framework. In *The Eleventh International Conference on Learning Representations*, 2023.

Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. *Advances in Neural Information Processing Systems*, 30, 2017.

Ashish Chiplunkar, Sagar Kale, and Sivaramakrishnan Natarajan Ramamoorthy. How to solve fair k-center in massive data models. In *International Conference on Machine Learning*, pp. 1877–1886. PMLR, 2020.

Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In *Proceedings of the 10th ACM Conference on Recommender Systems*, pp. 191–198, 2016.

Lee J Cronbach. Coefficient alpha and the internal structure of tests. *Psychometrika*, 16(3):297–334, 1951.

Ian Davidson and Selvan Suntiha Ravi. A framework for determining the fairness of outlier detection. In *ECAI 2020*, pp. 2465–2472. IOS Press, 2020.

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In *Proceedings of the 3rd Innovations in Theoretical Computer Science Conference*, pp. 214–226, 2012.

Jiri Fajtl, Hajar Sadeghi Sokeh, Vasileios Argyriou, Dorothy Monekosso, and Paolo Remagnino. Summarizing videos with attention. In *Computer Vision–ACCV 2018 Workshops: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers 14*, pp. 39–54. Springer, 2019.

Pasi Fränti and Sami Sieranoja. How much can k-means be improved by using better initialization and repeats?
*Pattern Recognition*, 93:95–112, 2019. Tsu-Jui Fu, Shao-Heng Tai, and Hwann-Tzong Chen. Attentive and adversarial learning for video summarization. In *2019 IEEE Winter Conference on Applications of Computer Vision (WACV)*, pp. 1579–1587. IEEE, 2019. Yihong Gong and Xin Liu. Video summarization and retrieval using singular value decomposition. Multimedia Systems, 9(2):157–168, 2003. Teofilo F Gonzalez. Clustering to minimize the maximum intercluster distance. *Theoretical Computer* Science, 38:293–306, 1985. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, 63(11): 139–144, 2020. Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In *Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September* 6-12, 2014, Proceedings, Part VII 13, pp. 505–520. Springer, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016. Xufeng He, Yang Hua, Tao Song, Zongpu Zhang, Zhengui Xue, Ruhui Ma, Neil Robertson, and Haibing Guan. Unsupervised video summarization with attentive conditional generative adversarial networks. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 2296–2304, 2019. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997. Cheng Huang and Hongmei Wang. A novel key-frames selection framework for comprehensive video summarization. *IEEE Transactions on Circuits and Systems for Video Technology*, 30(2):577–589, 2019. Hussain Kanafani, Junaid Ahmed Ghauri, Sherzod Hakimov, and Ralph Ewerth. Unsupervised video summarization via multi-source features. In *Proceedings of the 2021 International Conference on Multimedia* Retrieval, pp. 466–470, 2021. Patrik Joslin Kenfack, Daniil Dmitrievich Arapov, Rasheed Hussain, SM Ahsan Kazmi, and Adil Khan. On the fairness of generative adversarial networks (gans). In 2021 International Conference" Nonlinearity, Information and Robotics"(NIR), pp. 1–7. IEEE, 2021. Vijay Keswani and L Elisa Celis. Dialect diversity in text summarization on twitter. In *Proceedings of the* Web Conference 2021, pp. 3802–3814, 2021. Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass R Sunstein. Discrimination in the age of algorithms. *Journal of Legal Analysis*, 10:113–174, 2018. Matthäus Kleindessner, Pranjal Awasthi, and Jamie Morgenstern. Fair k-center clustering for data summarization. In *International Conference on Machine Learning*, pp. 3448–3457. PMLR, 2019a. Matthäus Kleindessner, Samira Samadi, Pranjal Awasthi, and Jamie Morgenstern. Guarantees for spectral clustering with fairness constraints. In *International Conference on Machine Learning*, pp. 3458–3467. PMLR, 2019b. Luis Lebron Casas and Eugenia Koblents. Video summarization with LSTM and deep attention models. In MultiMedia Modeling: 25th International Conference, MMM 2019, Thessaloniki, Greece, January 8–11, 2019, Proceedings, Part II 25, pp. 67–79. Springer, 2019. Jie Lei, Qiao Luan, Xinhui Song, Xiao Liu, Dapeng Tao, and Mingli Song. Action parsing-driven video summarization based on reinforcement learning. *IEEE Transactions on Circuits and Systems for Video* Technology, 29(7):2126–2137, 2018. Yu-Fei Ma, Lie Lu, Hong-Jiang Zhang, and Mingjing Li. 
A user attention model for video summarization. In *Proceedings of the tenth ACM International Conference on Multimedia*, pp. 533–542, 2002. Behrooz Mahasseni, Michael Lam, and Sinisa Todorovic. Unsupervised video summarization with adversarial LSTM networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 202–211, 2017. Silvano Martello, David Pisinger, and Paolo Toth. Dynamic programming and strong bounds for the 0-1 knapsack problem. *Management Science*, 45(3):414–424, 1999. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. *ACM Computing Surveys (CSUR)*, 54(6):1–35, 2021. Gina Neff. Talking to bots: Symbiotic agency and the case of Tay. *International Journal of Communication*, 2016. Chong-Wah Ngo, Yu-Fei Ma, and Hong-Jiang Zhang. Video summarization and scene detection by graph modeling. *IEEE Transactions on Circuits and Systems for Video Technology*, 15(2):296–305, 2005. Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, and Naokazu Yokoya. Video summarization using deep semantic features. In Computer Vision–ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part V 13, pp. 361–377. Springer, 2017. Dan Pelleg, Andrew W Moore, et al. X-means: Extending k-means with efficient estimation of the number of clusters. In *International Conference on Machine Learning*, volume 1, pp. 727–734, 2000. Yuxin Peng and Chong-Wah Ngo. Clip-based similarity measure for query-dependent clip retrieval and video summarization. *IEEE Transactions on Circuits and Systems for Video Technology*, 16(5):612–627, 2006. Evaggelia Pitoura, Kostas Stefanidis, and Georgia Koutrika. Fairness in rankings and recommendations: an overview. *The VLDB Journal*, pp. 1–28, 2022. Yao Qiang, Chengyin Li, Prashant Khanduri, and Dongxiao Zhu. Fairness-aware vision transformer via debiased self-attention. *arXiv preprint arXiv:2301.13803*, 2023. Bashir Rastegarpanah, Krishna P Gummadi, and Mark Crovella. Fighting fire with fire: Using antidote data to improve polarization and fairness of recommender systems. In Proceedings of the 12th ACM International Conference on Web Search and Data Mining, pp. 231–239, 2019. Alexander Schrijver. *Theory of Linear and Integer Programming*. John Wiley & Sons, 1998. A Senthil Murugan, K Suganya Devi, A Sivaranjani, and P Srinivasan. A study on various methods used for video summarization and moving object detection for video surveillance applications. *Multimedia Tools* and Applications, 77(18):23273–23290, 2018. Anurag Shandilya, Kripabandhu Ghosh, and Saptarshi Ghosh. Fairness of extractive text summarization. In *Companion Proceedings of the The Web Conference 2018*, pp. 97–98, 2018. Jie Shen, Nan Cui, and Jing Wang. Metric-fair active learning. In *International Conference on Machine* Learning, pp. 19809–19826. PMLR, 2022. Hanyu Song, Peizhao Li, and Hongfu Liu. Deep clustering based fair outlier detection. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1481–1489, 2021. Xinhui Song, Ke Chen, Jie Lei, Li Sun, Zhiyuan Wang, Lei Xie, and Mingli Song. Category driven deep recurrent neural network for video summarization. In *2016 IEEE International Conference on Multimedia* & Expo Workshops (ICMEW), pp. 1–6. IEEE, 2016. Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. Tvsum: Summarizing web videos using titles. 
In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5179–5187, 2015. Bing Sun, Jun Sun, Ting Dai, and Lijun Zhang. Probabilistic verification of neural networks against group fairness. In *Formal Methods: 24th International Symposium, FM 2021, Virtual Event, November 20–26,* 2021, Proceedings 24, pp. 83–102. Springer, 2021. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015. Cüneyt M Taskiran, Zygmunt Pizlo, Arnon Amir, Dulce Ponceleon, and Edward J Delp. Automated video program summarization using speech transcripts. *IEEE Transactions on Multimedia*, 8(4):775–791, 2006. Sinnu Susan Thomas, Sumana Gupta, and Venkatesh K Subramanian. Smart surveillance based on video summarization. In *2017 IEEE Region 10 Symposium (TENSYMP)*, pp. 1–5. IEEE, 2017. Ba Tu Truong and Svetha Venkatesh. Video abstraction: A systematic review and classification. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 3(1):3–es, 2007. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017. Huawei Wei, Bingbing Ni, Yichao Yan, Huanyu Yu, Xiaokang Yang, and Chen Yao. Video summarization via semantic attended networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018. Gokhan Yaliniz and Nazli Ikizler-Cinbis. Using independently recurrent networks for reinforcement learning based unsupervised video summarization. *Multimedia Tools and Applications*, 80:17827–17847, 2021. Bin Yu, Wei-Ying Ma, Klara Nahrstedt, and Hong-Jiang Zhang. Video summarization based on user log enhanced link analysis. In *Proceedings of the 11th ACM International Conference on Multimedia*, pp. 382–391, 2003. Li Yuan, Francis EH Tay, Ping Li, Li Zhou, and Jiashi Feng. Cycle-sum: Cycle-consistent adversarial LSTM networks for unsupervised video summarization. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 33, pp. 9143–9150, 2019. Yitian Yuan, Tao Mei, Peng Cui, and Wenwu Zhu. Video summarization by learning deep side semantic embedding. *IEEE Transactions on Circuits and Systems for Video Technology*, 29(1):226–237, 2017. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. In *Artificial Intelligence and Statistics*, pp. 962–970. PMLR, 2017. Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman. Video summarization with long short-term memory. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VII 14, pp. 766–782. Springer, 2016. Yujia Zhang, Michael Kampffmeyer, Xiaoguang Zhao, and Min Tan. Dtr-gan: Dilated temporal relational adversarial network for video summarization. In *Proceedings of the ACM Turing Celebration ConferenceChina*, pp. 1–6, 2019. Kaiyang Zhou, Yu Qiao, and Tao Xiang. Deep reinforcement learning for unsupervised video summarization with diversity-representativeness reward. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018a. Kaiyang Zhou, Tao Xiang, and Andrea Cavallaro. 
Video summarisation by classification with deep reinforcement learning. *arXiv preprint arXiv:1807.03089*, 2018b.

Wencheng Zhu, Jiwen Lu, Jiahao Li, and Jie Zhou. DSNet: A flexible detect-to-summarize network for video summarization. *IEEE Transactions on Image Processing*, 30:948–962, 2020.

## Appendix A Additional Violin Plots For Remaining Videos

We provide the remaining violin plots for all videos in *FairVidSum* to visualize the distribution of unique individuals (Figure 7), the *sex* sensitive attribute (Figure 8), and the *ethnicity* sensitive attribute (Figure 9). These distributions highlight the importance and difficulty of summarizing videos while ensuring fairness. The individual plots, in particular, showcase the most challenging scenarios, as they contain videos (such as Vid. #24, #25, #30, #33) with numerous individuals appearing in very limited frames. Consequently, any missing individuals would result in an unfair summary and a SumBal score of zero. The plots emphasize the need to capture and represent proportionality and fairness in the video summaries.

![18_image_0.png](18_image_0.png)

Figure 7: Violin plots showcasing the distribution of unique entities/individuals appearing across all videos. Videos #3, #13, #14, #15 are present in the main paper.

![19_image_0.png](19_image_0.png)

Figure 8: Violin plots showcasing the distribution of the *sex* sensitive attribute across all videos. Here M denotes Male, and F denotes Female. Videos #1, #4, #6, #19 are present in the main paper.

## B Additional Details Regarding Model Training And Evaluation

## B.1 Evaluation Splits

In accordance with previous research methodologies, we have created five splits, chosen randomly. These splits follow an 80:20 partitioning for training and testing, respectively. With 34 videos total, each split comprises 28 videos in the training set and 6 in the testing set. We provide the test sets for every split used in our benchmarks below. It is important to note that any videos not included in the test sets for each split would naturally be part of the corresponding training sets. The splits are as follows:

Split #1 Test set: Vid. #9, Vid. #11, Vid. #19, Vid. #25, Vid. #26, Vid. #34; Split #2 Test set: Vid. #2, Vid. #8, Vid. #20, Vid. #24, Vid. #29, Vid. #32; Split #3 Test set: Vid. #8, Vid. #18, Vid. #20, Vid. #24, Vid. #28, Vid. #33; Split #4 Test set: Vid. #4, Vid. #8, Vid. #17, Vid. #18, Vid. #29, Vid. #30; Split #5 Test set: Vid. #3, Vid. #15, Vid. #16, Vid. #20, Vid. #25, Vid. #28.

## B.2 Models And Training Details

We follow the original training procedures established for all benchmarked models; adjustments to specific hyperparameters are described below. Any parameters not explicitly mentioned were maintained at the default values specified in the original code or paper for each model. For DSNet, we use the anchor-free model.

![20_image_0.png](20_image_0.png)

Figure 9: Violin plots showcasing the distribution of the *ethnicity* sensitive attribute across all videos. Here WH denotes White, BL denotes Black, and AS denotes Asian. Videos #8, #9, #16, #22 are present in the main paper.

All adjustments made to the parameters used to train models on *FairVidSum* are as follows:
- AC-SUM-GAN: regularization_factor = 5.0, clip = 1.0, action_state_size = 8
- CA-SUM: block_size = 60, init_gain = 1.0, n_epochs = 200, clip = 1.0, lr = 1e-4, l2_reg = 1e-6, reg_factor = 5.0
- PGL-SUM: clip = 1.0, lr = 1e-4, l2_reg = 1e-4
- SUM-GAN-AAE: clip = 1.0, hidden_size = 512, regularization_factor = 5.0, lr = 1e-5
- SUM-GAN-SL: clip = 1.0, hidden_size = 512, regularization_factor = 5.0

## C Results On Remaining Evaluation Splits

## C.1 Benchmarking Results

Here, we present results for the remaining evaluation splits: Split #2, Split #3, Split #4, and Split #5 as Tables 3, 4, 5, and 6, respectively. The results show similar trends to the averages table and the Split #1 table presented in the main text.

Table 3: Comparison of SOTA video summarization models on *FairVidSum* for evaluation Split #2.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 14.95 | 0.9837 | 0.9555 | Female (Vid. #29) | 0.9696 | 0.9199 | Asian (Vid. #24) | 0.8866 | 0.7417 | Person 3 (Vid. #24) |
| Human | - | 65.00 | 0.4551 | 0.0000 | Female (Vid. #24) | 0.4768 | 0.0000 | Asian (Vid. #24) | 0.2909 | 0.0000 | Person 1 (Vid. #24) |
| CA-SUM | Unsupervised | 60.77 | 0.5449 | 0.0000 | Female (Vid. #24) | 0.4971 | 0.0000 | Asian (Vid. #24) | 0.1819 | 0.0000 | Person 4 (Vid. #8) |
| AC-SUM-GAN | Unsupervised | 62.41 | 0.5362 | 0.0000 | Female (Vid. #24) | 0.4573 | 0.0000 | Asian (Vid. #24) | 0.2634 | 0.0000 | Person 3 (Vid. #24) |
| SUM-GAN-AAE | Unsupervised | 61.49 | 0.5213 | 0.0000 | Female (Vid. #24) | 0.4658 | 0.0000 | Asian (Vid. #24) | 0.1835 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-SL | Unsupervised | 62.02 | 0.4958 | 0.0000 | Female (Vid. #24) | 0.4529 | 0.0000 | Asian (Vid. #24) | 0.1714 | 0.0000 | Person 4 (Vid. #8) |
| SUM-IND | Unsupervised | 49.80 | 0.6805 | 0.0000 | Female (Vid. #24) | 0.6940 | 0.0000 | Asian (Vid. #24) | 0.3570 | 0.0000 | Person 1 (Vid. #2) |
| DSNet | Supervised | 62.45 | 0.5661 | 0.0000 | Female (Vid. #24) | 0.5032 | 0.0000 | Asian (Vid. #24) | 0.2888 | 0.0000 | Person 4 (Vid. #8) |
| VASNet | Supervised | 60.53 | 0.4796 | 0.0000 | Female (Vid. #24) | 0.4676 | 0.0000 | Asian (Vid. #24) | 0.1963 | 0.0000 | Person 4 (Vid. #8) |
| PGL-SUM | Supervised | 61.04 | 0.4627 | 0.0000 | Female (Vid. #24) | 0.4300 | 0.0000 | Asian (Vid. #24) | 0.1881 | 0.0000 | Person 4 (Vid. #8) |
| FVS-LP (Sex) | Unsupervised | 15.30 | 0.9985 | 0.9968 | Male (Vid. #24) | 0.7033 | 0.0000 | Hispanic (Vid. #8) | 0.4183 | 0.0000 | Person 1 (Vid. #2) |
| FVS-LP (Ethnicity) | Unsupervised | 14.70 | 0.8599 | 0.4721 | Female (Vid. #29) | 0.9981 | 0.9934 | Asian (Vid. #24) | 0.3755 | 0.0000 | Person 1 (Vid. #2) |
| FVS-LP (Individual) | Unsupervised | 12.41 | 0.9714 | 0.8350 | Male (Vid. #2) | 0.9648 | 0.8350 | Black (Vid. #2) | 0.9963 | 0.9905 | Person 2 (Vid. #2) |
Table 4: Comparison of SOTA video summarization models on *FairVidSum* for evaluation Split #3.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 14.96 | 0.9477 | 0.8758 | Female (Vid. #24) | 0.8942 | 0.6670 | Asian (Vid. #33) | 0.8614 | 0.6670 | Person 13 (Vid. #33) |
| Human | - | 67.61 | 0.4784 | 0.0000 | Female (Vid. #24) | 0.4518 | 0.0000 | Asian (Vid. #24) | 0.2097 | 0.0000 | Person 2 (Vid. #18) |
| CA-SUM | Unsupervised | 59.76 | 0.5547 | 0.0000 | Female (Vid. #24) | 0.3859 | 0.0000 | Asian (Vid. #24) | 0.1901 | 0.0000 | Person 4 (Vid. #8) |
| AC-SUM-GAN | Unsupervised | 61.68 | 0.6236 | 0.0000 | Female (Vid. #24) | 0.4123 | 0.0000 | Asian (Vid. #24) | 0.1999 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-AAE | Unsupervised | 61.63 | 0.5859 | 0.0000 | Female (Vid. #24) | 0.4291 | 0.0000 | Asian (Vid. #24) | 0.2779 | 0.0000 | Person 1 (Vid. #24) |
| SUM-GAN-SL | Unsupervised | 62.77 | 0.6211 | 0.0000 | Female (Vid. #24) | 0.4122 | 0.0000 | Asian (Vid. #24) | 0.2767 | 0.0000 | Person 3 (Vid. #24) |
| SUM-IND | Unsupervised | 44.79 | 0.5061 | 0.0000 | Female (Vid. #24) | 0.4214 | 0.0000 | Asian (Vid. #24) | 0.1481 | 0.0000 | Person 4 (Vid. #8) |
| DSNet | Supervised | 60.16 | 0.5448 | 0.0000 | Female (Vid. #26) | 0.4364 | 0.0000 | Asian (Vid. #24) | 0.2304 | 0.0000 | Person 4 (Vid. #8) |
| VASNet | Supervised | 60.93 | 0.5646 | 0.0000 | Female (Vid. #24) | 0.4551 | 0.0000 | Asian (Vid. #24) | 0.2858 | 0.0000 | Person 1 (Vid. #24) |
| PGL-SUM | Supervised | 59.15 | 0.5117 | 0.0000 | Female (Vid. #24) | 0.4086 | 0.0000 | Asian (Vid. #24) | 0.1965 | 0.0000 | Person 4 (Vid. #8) |
| FVS-LP (Sex) | Unsupervised | 17.03 | 0.9990 | 0.9968 | Male (Vid. #24) | 0.5242 | 0.0000 | Hispanic (Vid. #8) | 0.0939 | 0.0000 | Person 3 (Vid. #8) |
| FVS-LP (Ethnicity) | Unsupervised | 16.06 | 0.7394 | 0.0000 | Male (Vid. #33) | 0.9951 | 0.9822 | Asian (Vid. #33) | 0.1387 | 0.0000 | Person 4 (Vid. #8) |
| FVS-LP (Individual) | Unsupervised | 16.03 | 0.9371 | 0.6289 | Male (Vid. #28) | 0.9373 | 0.6499 | White (Vid. #28) | 0.9923 | 0.9731 | Person 14 (Vid. #33) |

## C.2 Mixing Strategy Results

Here we present results for the mixing sampling strategy described in Section 5.2 for each of the individual splits as Figures 10 (Split #1), 11 (Split #2), 12 (Split #3), 13 (Split #4), and 14 (Split #5). The mixing ratio λ is varied from 0 to 1 in increments of 0.1.

## D Code, Reproducibility, And Miscellaneous Details

## D.1 Github Repository

The Github repository contains all the code needed for reproducing experiments, and also hosts the *FairVidSum* dataset. It is located here: https://github.com/anshuman23/fair_video_summarization_tmlr

## D.2 Environment Specifications

We use Python 3.8.16 and Anaconda to install all required libraries to run all models. The Anaconda environment yaml file is provided in our repository. The experiments were conducted on Ubuntu 20.04 using NVIDIA GeForce RTX 3070 GPUs (CUDA version 11.1).
## D.3 Code Details

The training code utilized in our experiments was directly obtained from the official Github repositories of the respective models, all implemented in PyTorch (v1.12.1). We condense the code into fewer files for ease of running.

Table 5: Comparison of SOTA video summarization models on *FairVidSum* for evaluation Split #4.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 16.08 | 0.9324 | 0.8814 | Male (Vid. #30) | 0.9760 | 0.9043 | White (Vid. #18) | 0.8886 | 0.7253 | Person 3 (Vid. #30) |
| Human | - | 76.51 | 0.3735 | 0.0000 | Female (Vid. #30) | 0.5895 | 0.0752 | White (Vid. #30) | 0.2522 | 0.0000 | Person 2 (Vid. #18) |
| CA-SUM | Unsupervised | 71.89 | 0.4336 | 0.0000 | Female (Vid. #30) | 0.6222 | 0.0750 | White (Vid. #30) | 0.2501 | 0.0000 | Person 4 (Vid. #8) |
| AC-SUM-GAN | Unsupervised | 74.02 | 0.4258 | 0.0000 | Female (Vid. #30) | 0.6222 | 0.2741 | White (Vid. #30) | 0.2026 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-AAE | Unsupervised | 72.49 | 0.5096 | 0.0000 | Female (Vid. #30) | 0.6994 | 0.4450 | White (Vid. #29) | 0.2823 | 0.0000 | Person 4 (Vid. #8) |
| SUM-GAN-SL | Unsupervised | 73.42 | 0.4458 | 0.0000 | Female (Vid. #30) | 0.6988 | 0.4450 | White (Vid. #29) | 0.2615 | 0.0000 | Person 4 (Vid. #8) |
| SUM-IND | Unsupervised | 60.96 | 0.5692 | 0.0000 | Male (Vid. #29) | 0.6363 | 0.2419 | White (Vid. #29) | 0.2004 | 0.0000 | Person 4 (Vid. #8) |
| DSNet | Supervised | 71.53 | 0.5743 | 0.0000 | Female (Vid. #30) | 0.6661 | 0.4450 | White (Vid. #29) | 0.2412 | 0.0000 | Person 3 (Vid. #8) |
| VASNet | Supervised | 69.95 | 0.4444 | 0.0000 | Female (Vid. #30) | 0.6470 | 0.2719 | White (Vid. #30) | 0.2055 | 0.0000 | Person 4 (Vid. #8) |
| PGL-SUM | Supervised | 70.03 | 0.5058 | 0.0000 | Female (Vid. #30) | 0.7343 | 0.4450 | White (Vid. #29) | 0.3027 | 0.0000 | Person 4 (Vid. #8) |
| FVS-LP (Sex) | Unsupervised | 16.69 | 0.9981 | 0.9960 | Female (Vid. #4) | 0.8231 | 0.0000 | Hispanic (Vid. #8) | 0.3169 | 0.0000 | Person 3 (Vid. #4) |
| FVS-LP (Ethnicity) | Unsupervised | 14.49 | 0.5619 | 0.0000 | Female (Vid. #4) | 0.9994 | 0.9987 | White (Vid. #18) | 0.2257 | 0.0000 | Person 2 (Vid. #4) |
| FVS-LP (Individual) | Unsupervised | 15.90 | 0.9762 | 0.8695 | Male (Vid. #17) | 0.9926 | 0.9621 | White (Vid. #29) | 0.9949 | 0.9820 | Person 5 (Vid. #30) |
Table 6: Comparison of SOTA video summarization models on *FairVidSum* for evaluation Split #5.

| Model | Type | Avg. F1 Measure | SumBal (Sex) Avg. | Min | Violating | SumBal (Ethnicity) Avg. | Min | Violating | SumBal (Individual) Avg. | Min | Violating |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | - | 13.96 | 0.9375 | 0.8692 | Male (Vid. #3) | 0.9242 | 0.8692 | White (Vid. #3) | 0.8732 | 0.7020 | Person 2 (Vid. #25) |
| Human | - | 68.34 | 0.5157 | 0.0000 | Female (Vid. #25) | 0.5615 | 0.0000 | Asian (Vid. #25) | 0.3257 | 0.0000 | Person 1 (Vid. #25) |
| CA-SUM | Unsupervised | 59.34 | 0.5567 | 0.0384 | Female (Vid. #3) | 0.5542 | 0.0384 | Asian (Vid. #3) | 0.2944 | 0.0000 | Person 1 (Vid. #25) |
| AC-SUM-GAN | Unsupervised | 59.56 | 0.5545 | 0.0390 | Female (Vid. #3) | 0.5890 | 0.0390 | Asian (Vid. #3) | 0.3159 | 0.0000 | Person 1 (Vid. #25) |
| SUM-GAN-AAE | Unsupervised | 60.01 | 0.5053 | 0.0401 | Female (Vid. #3) | 0.5460 | 0.0401 | Asian (Vid. #3) | 0.3304 | 0.0000 | Person 1 (Vid. #25) |
| SUM-GAN-SL | Unsupervised | 61.59 | 0.5346 | 0.0384 | Female (Vid. #3) | 0.5613 | 0.0384 | Asian (Vid. #3) | 0.2863 | 0.0000 | Person 1 (Vid. #25) |
| SUM-IND | Unsupervised | 47.81 | 0.5411 | 0.0390 | Female (Vid. #3) | 0.5287 | 0.0390 | Asian (Vid. #3) | 0.2629 | 0.0000 | Person 1 (Vid. #25) |
| DSNet | Supervised | 61.06 | 0.5506 | 0.0401 | Female (Vid. #3) | 0.5238 | 0.0411 | Asian (Vid. #3) | 0.2883 | 0.0000 | Person 1 (Vid. #25) |
| VASNet | Supervised | 62.99 | 0.4836 | 0.0000 | Female (Vid. #25) | 0.5393 | 0.0000 | Asian (Vid. #25) | 0.2897 | 0.0000 | Person 1 (Vid. #25) |
| PGL-SUM | Supervised | 63.37 | 0.4828 | 0.0000 | Female (Vid. #25) | 0.5274 | 0.0000 | Asian (Vid. #25) | 0.3304 | 0.0000 | Person 1 (Vid. #25) |
| FVS-LP (Sex) | Unsupervised | 12.42 | 0.9988 | 0.9975 | Female (Vid. #25) | 0.8192 | 0.5633 | Black (Vid. #20) | 0.2604 | 0.0000 | Person 3 (Vid. #3) |
| FVS-LP (Ethnicity) | Unsupervised | 9.928 | 0.7632 | 0.0000 | Female (Vid. #25) | 0.9983 | 0.9974 | Black (Vid. #28) | 0.3052 | 0.0000 | Person 3 (Vid. #3) |
| FVS-LP (Individual) | Unsupervised | 9.512 | 0.9367 | 0.6289 | Male (Vid. #28) | 0.9406 | 0.6499 | White (Vid. #28) | 0.9894 | 0.9704 | Person 6 (Vid. #25) |

![22_image_0.png](22_image_0.png)

Figure 10: Results for the mixing sampling strategy for Split #1.

We use our dataset splits (randomly generated) to evaluate and train all models (the splits are also provided in the repository). We selected the trained models that achieved the highest F1 scores per split (which was standard procedure in all models' codes). Since all methods use a common evaluation code, which involves the Knapsack algorithm to generate summaries based on frame importance scores and the F1 score evaluation, we created a unified evaluation script for reporting the average F1 scores. This script takes the predicted summaries (post-Knapsack) from models and the ground truth user summaries (from annotations) from the h5 dataset as input. We also developed a unified script for evaluating SumBal, which takes as inputs the predicted summaries and user summaries, similar to the F1 score evaluation. The SumBal evaluation also requires the fairness labels, which are provided as numpy binaries for all individuals/entities in the frames for each video. Please follow the README on our Github or the instructions on our webpage for detailed instructions on how to train and evaluate all models.

![23_image_0.png](23_image_0.png)

Figure 11: Results for the mixing sampling strategy for Split #2.

![23_image_1.png](23_image_1.png)

Figure 12: Results for the mixing sampling strategy for Split #3.

![24_image_0.png](24_image_0.png)

Figure 13: Results for the mixing sampling strategy for Split #4.

![24_image_1.png](24_image_1.png)

Figure 14: Results for the mixing sampling strategy for Split #5.
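To illustrate how the .npy fairness labels (described in Appendix D.6) feed into this evaluation, here is a minimal per-video sketch. The two-sided proportion-ratio form below is our assumed reading of the SumBal metric from the main text, and the file path and placeholder summary are illustrative only; the released unified script remains the reference implementation.

```python
import numpy as np

def sumbal(labels: np.ndarray, summary_idx: np.ndarray) -> float:
    """labels: (n_frames, n_groups) binary matrix from a video_<vid_num>.npy file.
    summary_idx: indices of frames selected in the summary.
    Returns the assumed SumBal: the minimum two-sided proportion ratio over groups."""
    video_prop = labels.mean(axis=0)                 # group proportions in the full video
    summ_prop = labels[summary_idx].mean(axis=0)     # group proportions in the summary
    balances = []
    for pv, ps in zip(video_prop, summ_prop):
        if pv == 0:                                  # group absent from the video: skip
            continue
        if ps == 0:                                  # present in video, absent in summary
            balances.append(0.0)
        else:
            balances.append(min(ps / pv, pv / ps))
    return min(balances) if balances else 1.0

labels = np.load("fair_npy_data/sex/video_19.npy")   # path layout described in Appendix D.6
summary_idx = np.arange(0, len(labels), 10)          # placeholder summary: every 10th frame
print(f"SumBal: {sumbal(labels, summary_idx):.3f}")
```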
## D.4 Videos In FairVidSum

*FairVidSum* currently consists of the following YouTube videos:

- Video #1: https://www.youtube.com/watch?v=YtoSLVGc_vw
- Video #2: https://www.youtube.com/watch?v=uKSxrQHpCqo
- Video #3: https://www.youtube.com/watch?v=gAQggxc_aYw
- Video #4: https://www.youtube.com/watch?v=1rS8fFbW57o&ab_channel=MSNBC
- Video #5: https://www.youtube.com/watch?v=6-cVocxHjFw&ab_channel=ArtofCharm
- Video #6: https://www.youtube.com/watch?v=ryHAIdm_eXo&ab_channel=TalesFromSYLRanchDARKROOM
- Video #7: https://www.youtube.com/watch?v=aCDItvntHFE&ab_channel=MyHartEnt
- Video #8: https://www.youtube.com/watch?v=UgYy2maGCU4
- Video #9: https://www.youtube.com/watch?v=ExJZAegsOis
- Video #10: https://www.youtube.com/watch?v=DHHbgnFVyWQ&ab_channel=Impetos
- Video #11: https://www.youtube.com/watch?v=R9jB4JOi6gU&ab_channel=ILVOLOSIM
- Video #12: https://www.youtube.com/watch?v=naIkpQ_cIt0&ab_channel=ESLLearning
- Video #13: https://www.youtube.com/watch?v=JSLhP8i-5U0&ab_channel=ESLLearning
- Video #14: https://www.youtube.com/watch?v=7dixfiGekhE&ab_channel=GOBALDAILYNEWSUSA
- Video #15: https://www.youtube.com/watch?v=bYzH3zP7iDg&ab_channel=MTVUK
- Video #16: https://www.youtube.com/watch?v=BNcFuU23CkQ&ab_channel=SilviuTolu
- Video #17: https://www.youtube.com/watch?v=bJOhf3_fQ-c&ab_channel=MrHG94
- Video #18: https://www.youtube.com/watch?v=t4fAq17jZ-A
- Video #19: https://www.youtube.com/watch?v=C8gHpnZ0BrY
- Video #20: https://www.youtube.com/watch?v=7PnBQwI_M2A
- Video #21: https://www.youtube.com/watch?v=KJ804YQEaVc
- Video #22: https://www.youtube.com/watch?v=rM1jUXSFuls
- Video #23: https://www.youtube.com/watch?v=3eYKfiOEJNs
- Video #24: https://www.youtube.com/watch?v=4wU_LUjG5Ic
- Video #25: https://www.youtube.com/watch?v=akI8YFjEmUw
- Video #26: https://www.youtube.com/watch?v=eQu1rNs0an0
- Video #27: https://www.youtube.com/watch?v=iVt07TCkFM0
- Video #28: https://www.youtube.com/watch?v=jcoYJXDG9sw
- Video #29: https://www.youtube.com/watch?v=JgHubY5Vw3Y
- Video #30: https://www.youtube.com/watch?v=Se3oxnaPsz0
- Video #31: https://www.youtube.com/watch?v=sTEELN-vY30
- Video #32: https://www.youtube.com/watch?v=LRw_obCPUt0
- Video #33: https://www.youtube.com/watch?v=RBCABdttQmI
- Video #34: https://www.youtube.com/watch?v=E11zDS9XGzg

## D.5 Dataset Collection

We performed the frame labeling process using the Kili labeling platform.6 Each user participating in the labeling task was instructed to read the video title and watch the video on mute first. For annotation purposes, each user received frames from 22 videos, which were downsampled to 1/2 frames per second. The users were asked to assign an importance score ranging from 1 (least important) to 5 (most important) to each frame. To prevent any potential bias, frames from every video were shuffled before being presented to the users for annotation.

## D.6 Dataset Details

All the models included in the benchmark provide the datasets (*SumMe* and *TVSum*) in structured h5 format, which can be accessed either through their respective Github repositories or download links, and also provide their splits.json files used for training and evaluating models. In a similar manner, we have included the *FairVidSum* dataset in the fvs.h5 file, along with the corresponding fvs_splits.json file used for our benchmarks. Both files are already provided in the repository, and we also provide the Dropbox download links to them.
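As a quick-start sketch for loading these files, the snippet below opens fvs.h5 and fvs_splits.json. The per-video key names ('features', 'user_summary') are assumptions based on the structured h5 layout used by the benchmarked models' repositories and should be verified against the README.

```python
import json
import h5py

# Sketch for inspecting the dataset; key names are assumptions based on the
# standard summarization h5 layout and should be checked against the repository.
with h5py.File("fvs.h5", "r") as f:
    for video_key in list(f.keys())[:1]:                  # peek at the first video
        features = f[video_key]["features"][...]          # per-frame GoogleNet pool5 features
        user_summary = f[video_key]["user_summary"][...]  # per-annotator binary summaries
        print(video_key, features.shape, user_summary.shape)

with open("fvs_splits.json") as fp:
    splits = json.load(fp)                                # train/test video keys per split
print(f"{len(splits)} splits loaded")
```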
We also provide fair_npy_data/, which contains all fairness labels and data required for SumBal evaluations and for generating summaries using FVS-LP. fair_npy_data/ contains three subfolders, sex/, eth/, and ind/, which further contain the corresponding fairness labels for each video in our dataset in numpy binary (.npy) format. There is a corresponding .npy file associated with each video (named video_<vid_num>.npy). These .npy files are structured as numpy arrays, with rows representing the frame index and columns representing the protected groups. For each group, a value of 0 in the numpy array indicates that the corresponding protected group is not present in that frame, while a value of 1 indicates its presence.

The *TVSum* videos (Vid. #23 to #34) were solely labeled for fairness. The h5 dataset was taken directly from the benchmarking model repositories (eccv16_dataset_tvsum_google_pool5.h5), and we simply extracted the chosen 12 videos. Hence, the *TVSum* videos adhere to their original downsampling rate (2 frames per second), user annotations, frame features, etc. We append the 12-video *TVSum* h5 dataset to our fvs.h5 (containing 22 videos) to create a standard video summarization dataset of 34 videos.

## D.7 Dataset License

The *FairVidSum* dataset is released under a *CC-BY-SA* license. Please refer to https://creativecommons.org/licenses/by-sa/4.0/ for license details.

## D.8 Videos For Dataset Extension

We plan to add the following videos to the next version of the dataset:

1. https://www.youtube.com/watch?v=J4SnZcLgX6c
2. https://www.youtube.com/watch?v=uNUSI_9lXGg
3. https://www.youtube.com/watch?v=tGIhpc8SJK0
4. https://www.youtube.com/watch?v=OVvPntIAcsQ&ab_channel=RealMadrid
5. https://www.youtube.com/shorts/whdf1kp2Zrs
6. https://www.youtube.com/shorts/lbNRKjGHOvg
7. https://www.youtube.com/shorts/NIlWmDrfxac
8. https://www.youtube.com/shorts/n4e9PiiF0ms
9. https://www.youtube.com/shorts/IDJivGVDcos
10. https://www.youtube.com/shorts/PNZdp8Mt6i4
11. https://www.youtube.com/shorts/ytvk5hUnmow
12. https://www.youtube.com/shorts/L7Ag06cp9V8
13. https://www.youtube.com/shorts/BIFicBzErMk
14. https://www.youtube.com/shorts/Ld_Qoxka9jk
15. https://www.youtube.com/shorts/EL4nlb_yuYE
16. https://www.youtube.com/shorts/rY8NyCKq-ic
17. https://www.youtube.com/watch?v=pZLVKmHI8Aw
18. https://www.youtube.com/watch?v=WH71R4PkvmQ
19. https://www.youtube.com/watch?v=uEwCHTSuZM0
20. https://www.youtube.com/watch?v=3EsqpF-W_wQ
21. https://www.youtube.com/watch?v=XJvvzBen91w
22. https://www.youtube.com/watch?v=5bDQE8EzcFk
23. https://www.youtube.com/watch?v=Fh16FBHYyRM
24. https://www.youtube.com/watch?v=Iy7drPXqzps
25. https://www.youtube.com/watch?v=aO-Wz-H_EU4
26. https://www.youtube.com/watch?v=-Dsbg-kCP0g

## E Ethics Statement

Our paper studied the important and novel problem of fair video summarization. As individuals and groups (based on sensitive attributes) can be directly impacted by automated video summarization, it is important to recognize the role of ethical considerations in the development and deployment of such systems. Without sensitive attribute information available for video summarization, it is not possible for methods being developed to adhere to the principles of fairness, accountability, and transparency as part of their research goals. Therefore, our work primarily aims to bridge this gap and address potential biases and discriminatory effects in the video summarization task.
However, it is important to also advocate for the responsible use of fair video summarization, and to ensure that the provided data and benchmarks are only used to enhance the fairness of existing methods or to propose novel fair variants of current models.

## F Possible Directions For Future Work

There are multiple directions for possible future work:

## F.1 Relaxing Knowledge Of Protected Groups

Future work can relax the assumption of knowledge of protected group annotations, either partially or fully, or can even aim to predict them and then ensure fairness. All of these problem settings give rise to a diverse set of non-trivial problem formulations. For instance, methods that assume probabilistic group memberships (such as when a classifier is used to predict the group) would differ significantly from methods that assume partial knowledge of certain groups.

## F.2 Generalized Methods For Different Definitions Of Fairness

Currently, video summarization methods differ significantly from each other: some employ LSTMs, others employ GANs, etc. However, for future research on fair video summarization, we believe a more generalized and unified direction for methods can be studied. For example, approaches can be proposed that take an existing video summarization model as input and make it fair for any general fairness definition provided at run-time. Moreover, new fairness definitions for fair video summarization, specific to certain applications, can be motivated.

## F.3 Pre-Processing, In-Processing, And Post-Processing Approaches For Fairness

Fairness methods can be pre-processing (transform the input data space to ensure fairness), in-processing (fair model variants), or post-processing (modifying the output summaries to make them fair) based. All three modalities are important to study in future work, as different video summarization pipelines have different data pipeline requirements. Furthermore, it is important to understand how each type of method differs, and whether some are better at ensuring fairness than others.
Review 1:

Summary: This paper proposes a fair video summarization task. The authors build a FairVidSum dataset and a SumBal metric for quantifying fairness. They also provide a fairness-only baseline called FVS-LP.

Strengths and Weaknesses: Strengths: 1. The target issues of the paper are meaningful and worth exploring. This submission gives a valuable implementation of such an idea and presents good results. 2. The paper is generally well-written, clearly structured, and quite easy to follow. Weaknesses: 1. The motivation is not so clear. In Section 3.2, the authors claim that all persons in the video should be represented in approximately the same proportions in the generated summary as they appear in the entire video. What if some people are not important for the video? For example, even if the main character appears in fewer frames than a sub-character, we still focus on the main character. In this situation, fairness conflicts with importance. 2. The dataset should contain more videos. 3. The authors do not provide suggestions for further improving methods on their dataset.

Requested Changes: See Weaknesses.

Broader Impact Concerns: None

==================================================

Review 2:

Summary: This paper proposes a dataset and evaluation metric for fairness in video summarization. They explore many existing video summarization methods using these components. The paper proposes evaluating the fairness of a summarization by measuring the proportion of the summary each entity takes up compared to the proportion of the video each entity takes up. The dataset is collected similarly to others, but annotated by humans counting each entity.

Strengths and Weaknesses: The main strengths of the paper are proposing a new dataset and evaluation metric for studying fairness. There is a lot of analysis on the dataset and the distribution of people, attributes, etc. in the dataset. There is good discussion on the limitations of the dataset. The main weakness is that the evaluation metric is pretty simple. It is just calculating the percent of time an entity is in the summary vs. the source material. This will provide some insight, but is also limited to only datasets where these annotations are available, and any biases in those datasets. The evaluations are done by training the methods on the new dataset. Have you considered evaluating pretrained models directly on this dataset without any further training? There's not much analysis or insight in the results. Why are some methods better than others?

Requested Changes: It would be helpful to add more insights in the results section and more experiments studying different aspects of the models, what makes them fair or unfair, etc.

Broader Impact Concerns: It is discussed in the paper.

==================================================

Review 3:

Summary: This paper, as its title suggests, takes a step towards fair video summarization. The authors collect a dataset, propose a metric, and provide an unsupervised video summarization baseline. Video summarization in this work (and other related works) is defined as the task of selecting a sub-set of the video frames from the whole video from sparsely sampled frames with a low frame-rate. In this task, no audio is used. Video summarization is usually evaluated using the F-measure. This work introduces the FairVidSum dataset. The dataset consists of 22 new videos from YouTube and 12 videos from the TVSum dataset. The nature of the videos is "people talking", like podcasts, interviews, etc.
In this work, a fair summarizer would select frames from the video such that the summary respects and contains the same proportion of protected groups (such as an ethnicity) as is present in the input video itself. The proposed evaluation metric, SumBal, measures the most violated proportion across a set of protected groups in the video summarization output. The proposed baseline is an unsupervised approach for video summarization that, given the proportions of the protected groups in the input video and their presence in each frame, outputs a summarization that maximally satisfies the fairness goal using a linear program, without any other utility or accuracy term in its objective.

Strengths and Weaknesses: The paper is very well written. The problem, related work, and contributions are well explained and presented clearly. It seems that the video summarization problem, as defined in the literature and in this work, is not really concerned with the "video" itself; what it tries to do is select a subset of the inputs. Firstly, the audio of the video is removed. Secondly, the frames are sampled at a low frame-rate without attention to the content. Finally, during the annotation process, the frames of the video are shuffled. I think that selecting a frame for video summarization should be concerned with the surrounding frames and the audio, especially in the "people talking" setting: it is important to know what is being said, and the decision cannot be made solely based on the visual features of a single frame. Furthermore, the problem of unsupervised fair summarization seems unrealistic, as it assumes that the ground-truth group memberships `H` are available. In a realistic setting, given a new test video, `H` has to be predicted, but this was not studied in this work. The proposed baseline seems too rudimentary. There is no hyper-parameter to control for a trade-off between utility and fairness.

Requested Changes: - Improve the baseline to incorporate utility/accuracy of the summarization alongside fairness. - Increase the size of the dataset. - Treat the video as a video, not as a collection of images sampled from a video after removing the audio.

Broader Impact Concerns: I do not have any concerns.

==================================================

Review 4:

Summary: This work is the first to address the concept of fairness in video summarization. The authors provide an analytical definition of fair video summarization and a novel metric called SumBal to measure the fairness of video summaries. To benchmark existing video summarization models, the authors collect a new dataset, FairVidSum, which is designed similarly to existing datasets, TVSum and SumMe, but also with individual- and group-level fairness annotations. The experiments conducted on FairVidSum clearly showcase the fairness-utility gap between existing video summarization models and the proposed baseline, FVS-LP, which are optimized only for utility and fairness, respectively.

Strengths and Weaknesses: Strengths + Overall, the paper is well-written and easy to follow. + The authors clearly define the new task, fair video summarization, and provide everything needed for the development of fair video summarization models, including the new evaluation metric and the benchmark dataset. + The experimental results clearly show that existing video summarization models do not account for fairness, at least according to the evaluation metric, SumBal.
Weaknesses - The new benchmark, FairVidSum, contains videos of a few selected categories such as news, interviews, and panel discussions, so that each video features only a few individuals in a restricted setting (at most 16 in Video #33). The paper lacks a discussion of fairness in summarizing videos that feature many more individuals and diverse settings, such as concert or large lecture videos featuring thousands of people.

Requested Changes: - Please add a discussion of fairness in summarizing videos of different domains and settings that the collected benchmark FairVidSum does not cover. - Please fix the mismatch in the descriptions of the frame sampling rate for importance score annotation: the sampling rate is stated as 2 fps in Section 4.1 but as 0.5 fps in Appendix D.5.

Broader Impact Concerns: The authors adequately addressed the ethical implications of the work.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: This manuscript underwent review by four experts, with three providing the final recommendations. The cumulative scores present a diverse perspective, with two experts advocating for acceptance and one reviewer proposing rejection. Most reviewers acknowledged the novelty of the fair video summarization problem, praised the quality of the gathered dataset, and commended the paper's writing. Reviewer 5vEw identified certain weaknesses in the proposed baseline, the nature of the addressed task, and the dataset size. These comments are generally insightful, and the Associate Editor (AE) values the feedback from Reviewer 5vEw. Video summarization has been extensively explored in both the Multimedia and vision communities, with various methods developed. This paper focuses on addressing the fairness issue within recent learning-based approaches. The authors have made commendable contributions in this regard. According to the TMLR evaluation criteria, the authors' claims align well with the presented evidence. However, the AE notes the abundance of existing literature on video summarization, suggesting that the authors should discuss the concerns raised by Reviewer 5vEw—specifically regarding baselines and datasets—in the related work and method sections. Consequently, the AE recommends acceptance with minor revisions.

==================================================
# Does 'Deep Learning On A Data Diet' Reproduce? Overall Yes, But GraNd At Initialization Does Not

Andreas Kirsch andreas.kirsch@cs.ox.ac.uk OATML, Department of Computer Science University of Oxford

Reviewed on OpenReview: *https://openreview.net/forum?id=1dwXa9vmOI*

## Abstract

Training deep neural networks on vast datasets often results in substantial computational demands, underscoring the need for efficient data pruning. In this context, we critically re-evaluate the data pruning metrics introduced in 'Deep Learning on a Data Diet' by Paul et al. (2021): the *Gradient Norm (GraNd)* (at initialization) and the *Error L2 Norm (EL2N)*. Our analysis uncovers a strong correlation between the GraNd scores at initialization and a sample's input norm, suggesting the latter as a potential baseline for data pruning. However, comprehensive tests on CIFAR-10 show neither metric outperforming random pruning, contradicting one of the findings in Paul et al. (2021). We pinpoint the inconsistency in the GraNd at initialization results to a later-fixed bug in FLAX's checkpoint restoring mechanism1. Altogether, our findings do not support using the input norm or GraNd scores at initialization for effective data pruning. Nevertheless, EL2N and GraNd scores at later training epochs do provide useful pruning signals, aligning with the expected performance.

## 1 Introduction

Deep neural networks have achieved state-of-the-art performance on tasks like image classification and text generation, but require massive datasets that demand extensive computational resources for training (Deng et al., 2009; Brown et al., 2020). This motivates developing techniques like data pruning to identify effective subsets of the training data (Bachem et al., 2017; Sorscher et al., 2022). Recently, 'Deep Learning on a Data Diet' (Paul et al., 2021) introduced two new pruning metrics, *Gradient Norm (GraNd)* at initialization and *Error L2 Norm (EL2N)*. During a talk given on the paper, the surprising effectiveness of GraNd at initialization—before any training—led us to hypothesize a potential correlation with a sample's input norm. The input norm is cheap to compute and could thus provide an intriguing new baseline for data pruning experiments. In this work, we set out to reproduce the results for GraNd scores at initialization and explore the practicality of using the input norm as a proxy. Through extensive analysis on CIFAR-10, we find a strong correlation between the input norm and the gradient norm at initialization. However, neither metric outperforms random pruning, failing to reproduce the original findings. While we replicate the reported results for EL2N and GraNd scores at later epochs, the performance of GraNd scores at initialization is not reproducible despite testing with multiple codebases. We trace this back to a bug in the checkpoint restoring code of the underlying FLAX library. By rectifying the claims about GraNd scores at initialization and uncovering limitations of the input norm, our reproduction provides updated recommendations for efficient data pruning. In summary, this reproduction contributes a new insight on the relationship between the input norm and the gradient norm at initialization and fully explains a failure to reproduce the performance of using the gradient norm at initialization for pruning, one of the six contributions of Paul et al. (2021).

1Details at https://github.com/google/flax/commit/28fbd95500f4bf2f9924d2560062fa50e919b1a5.

Outline.
In §2, we provide background on the GraNd and EL2N scores and discuss their use for data pruning. In §3.1, we begin by discussing the correlation between input norm and gradient norm at initialization. We empirically find a strong correlation between GraNd scores at initialization and input norms as we average over models. In §3.2, we explore the implication of this insight for dataset pruning and find that both GraNd at initialization and input norm scores do not outperform random pruning, but GraNd scores after a few epochs perform similarly to EL2N scores at these later epochs.

## 2 Background

'Deep Learning on a Data Diet'. Paul et al. (2021) introduce two novel metrics: *Error L2 Norm (EL2N)* and *Gradient Norm (GraNd)*. These metrics aim to provide a more effective means of dataset pruning. It is important to emphasize that the GraNd score at initialization is calculated before any training has taken place, averaging over several randomly initialized models. This fact has been met with skepticism by reviewers2, but Paul et al. (2021) specifically remark on GraNd at initialization:

Pruning at initialization. In all settings, GraNd scores can be used to select a training subset at initialization that achieves test accuracy significantly better than random, and in some cases, competitive with training on all the data. This is remarkable because GraNd only contains information about the gradient norm at initialization, averaged over initializations. This suggests that the geometry of the training distribution induced by a random network contains a surprising amount of information about the structure of the classification problem.

GraNd. The GraNd score measures the magnitude of the gradient vector for a specific input sample in the context of neural network training over different parameter draws. The formula for calculating the (expected) gradient norm is:

$$\operatorname{GraNd}(x)=\mathbb{E}_{\theta_{t}}[\|\nabla_{\theta_{t}}L(f(x;\theta_{t}),y)\|_{2}]\tag{1}$$

where ∇θtL(f(x; θt), y) is the gradient of the loss function L with respect to the model's parameters θt at epoch t, f(x; θ) is the model's prediction for input x, and y is the true label for the input. We take an expectation over several training runs. The gradient norm provides information about the model's sensitivity to a particular input and helps in identifying data points that have a strong influence on the learning process.

EL2N. The EL2N score measures the squared difference between the predicted and (one-hot) true labels for a specific input sample, also known as the Brier score (Brier, 1950):

$$\mathrm{EL2N}(x)=\mathbb{E}_{\theta_{t}}[\|f(x;\theta_{t})-y\|_{2}^{2}]\tag{2}$$

where f(x; θ) is the model's prediction for input x, y is the (one-hot) true label for the input, and ∥·∥2 denotes the Euclidean (L2) norm. The EL2N score provides insight into the model's performance on individual data points, allowing for a more targeted analysis of errors and potential improvements.

The GraNd and EL2N scores are proposed in the context of dataset pruning, where the goal is to remove less informative samples from the training data. Thus, one can create a smaller, more efficient dataset that maintains the model's overall performance while reducing training time and computational resources. While GraNd at initialization does not require model training, it requires a model and is not cheap to compute.
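To make the two definitions concrete, the following is a minimal PyTorch sketch of per-sample GraNd and EL2N scores for a single parameter draw θt; the expectations in Eqs. 1 and 2 are approximated by averaging such scores over several independently initialized (or trained) models. The model and the batch (x, y) are placeholders, and the per-sample backward passes are what make GraNd comparatively costly.

```python
import torch
import torch.nn.functional as F

def grand_el2n(model, x, y, num_classes):
    """Per-sample GraNd (Eq. 1) and EL2N (Eq. 2) for one parameter draw."""
    grand = []
    for xi, yi in zip(x, y):  # one backward pass per sample
        model.zero_grad()
        loss = F.cross_entropy(model(xi[None]), yi[None])
        loss.backward()
        sq = sum((p.grad ** 2).sum() for p in model.parameters()
                 if p.grad is not None)
        grand.append(sq.sqrt().item())
    with torch.no_grad():  # EL2N: squared error norm of the softmax output
        p = F.softmax(model(x), dim=1)
        el2n = (p - F.one_hot(y, num_classes).float()).norm(dim=1).pow(2)
    return torch.tensor(grand), el2n
```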
In contrast, the input norm of training samples is incredibly cheap to compute and would thus provide an exciting new baseline to use for data pruning experiments. We investigate this correlation next and find positive evidence for it. However, we also find that the GraNd score at initialization does not outperform random pruning, unlike the respective results in Paul et al. (2021).

![2_image_0.png](2_image_0.png)

Figure 1: Correlation between GraNd at Initialization and Input Norm for CIFAR-10's training set. (a, b, c): The original repository and the 'minimal' implementation have a very high rank correlation; 'hlb' has a lower but still strong rank correlation. (d): Ratio between input norm and gradient norm. In the 'minimal' implementation, the ratio between input norm and gradient norm is roughly log-normally distributed.

![2_image_1.png](2_image_1.png)

Figure 2: Reproduction of Figure 1 (second row) from Paul et al. (2021). In both reproductions, GraNd at initialization performs as well as the input norm. However, it does not perform better than random pruning. Importantly, it also fails to reproduce the results from Paul et al. (2021). However, GraNd at epoch 20 (respectively at epoch 1 for 'hlb') performs similarly to EL2N and like GraNd at initialization in Paul et al. (2021).

## 3 Investigation

We examine the correlation between the input norm and GraNd at initialization, as well as other scores on CIFAR-10 (Krizhevsky et al., 2009), through three distinct approaches:

1. First, we update the original paper repository3 (https://github.com/mansheej/data_diet), which utilizes JAX (Bradbury et al., 2018). We rerun the experiments for Figure 1 (second row) in Paul et al. (2021) on CIFAR-10, training for 200 epochs using GraNd at initialization, GraNd at epoch 20, EL2N at epoch 20, the Forget Score at epoch 200, and the input norm.

2. Second, we reproduce the same experiments using 'hlb' (Balsam, 2023), a significantly modified version of ResNet-18 that achieves high accuracy in 12 epochs, taking approximately 30 seconds in total on an Nvidia RTX 4090 with PyTorch (Paszke et al., 2019). For this approach, we compare GraNd at initialization, GraNd at epoch 1 (≈ 20/200 · 12 epochs), EL2N at epoch 1, and the input norm4. We analyze the rank correlations between the different scores for the two repositories mentioned above.

3. Third, we employ another 'minimal' CIFAR-10 implementation (van Amersfoort, 2021) with a standard ResNet-18 architecture for CIFAR-10 to compare the rank correlations.

## 3.1 Correlation Between GraNd At Initialization And Input Norm

To gain a deeper understanding of the relationship between the input norm and the gradient norm at initialization, we first consider a toy example and then provide empirical evidence. Let us examine a linear softmax classifier with C classes (without a bias term). The model takes the form:

$$f(x)=\operatorname{softmax}(Wx),\tag{3}$$

together with the cross-entropy loss function:

$$L=-\log f(x)_{y}.\tag{4}$$

The gradient of the loss function with respect to the rows wj of the weight matrix W is:

$$\nabla_{w_{j}}L=(f(x)_{j}-\mathbb{1}\{j=y\})\,x,\tag{5}$$

where 1{j = y} is the indicator function that is 1 if j = y and 0 otherwise.
The squared norm of the gradient is:

$$\|\nabla_{w}L\|_{2}^{2}=\sum_{j=1}^{C}(f(x)_{j}-\mathbb{1}\{j=y\})^{2}\,\|x\|_{2}^{2}.\tag{6}$$

Taking the expectation over W (different initializations), the norm of the gradient is:

$$\mathbb{E}_{W}\left[\|\nabla_{w}L\|_{2}\right]=\mathbb{E}_{W}\left[\left(\sum_{j=1}^{C}(f(x)_{j}-\mathbb{1}\{j=y\})^{2}\right)^{1/2}\right]\|x\|_{2}.\tag{7}$$

As a result, the gradient norm is a multiple of the input norm. The factor depends on f(x)j, which we could typically expect to be 1/C in expectation over different weights at initialization (due to class symmetry).

2See also https://openreview.net/forum?id=Uj7pF-D-YvT&noteId=qwy3HouKSX.
3See https://github.com/blackhc/data_diet
4https://github.com/blackhc/pytorch_datadiet

![4_image_0.png](4_image_0.png)

Figure 3: *Rank Correlations of the Scores.* Cf. Figure 12 in the appendix of Paul et al. (2021). In both reproductions, GraNd at initialization and input norm are positively correlated, while GraNd and EL2N at later epochs are strongly correlated with each other and the Forget Score (at epoch 200).

## 3.2 Empirical Results

Correlation Analysis. We first examine the correlation between the input norm and the GraNd score at initialization. Across three separate codebases and implementations (original JAX, 'hlb' PyTorch, 'minimal' PyTorch), we consistently find a strong Spearman rank correlation between the two metrics when averaged over multiple random initializations (Figure 1). For example, the original JAX and 'minimal' PyTorch implementations have a rank correlation coefficient of 0.96 between input norm and GraNd on the CIFAR-10 training set over 10 trials, suggesting the input norm could potentially serve as an inexpensive proxy for GraNd at initialization. 'hlb' uses different input preprocessing and other tricks, which might explain the lower but still strong rank correlation coefficient of 0.39.

Reproducing Figure 1 of Paul et al. (2021) on CIFAR-10. However, our attempts to reproduce the data pruning results originally reported for GraNd at initialization are unsuccessful. As shown in Figure 2, neither input norm nor GraNd at initialization outperforms random pruning, with both achieving approximately 2 percentage points less accuracy than random pruning when pruning 60% of the CIFAR-10 training data. This directly contradicts the original findings of Paul et al. (2021), where GraNd at initialization markedly exceeded random pruning when pruning 60% of the CIFAR-10 training data. While we are able to reproduce the expected pruning performance for GraNd after 20 epochs of training (respectively at epoch 1 for 'hlb') and also match the original EL2N results, GraNd at initialization does not match expectations despite testing across codebases.

Score Rank Correlations. To further analyze relationships between the different scores, we visualize their Spearman rank correlations on CIFAR-10 using a correlation matrix (Figure 3). The results confirm the strong correlation between input norm and GraNd at initialization. They also reveal high correlations between GraNd at later epochs, EL2N, and the Forget Score after full training. However, GraNd at initialization and input norm show little correlation with the later-epoch scores. This aligns with our reproduction results, where only the late-epoch GraNd and EL2N scores exhibit the expected pruning performance. The distinct correlation profile provides additional evidence that GraNd at initialization is measuring something fundamentally different from the trained-model pruning metrics.
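The toy derivation in §3.1 is also easy to check numerically. The sketch below uses synthetic data rather than the paper's CIFAR-10 setup; the shapes, the number of weight draws, and the 1/√d initialization scale are illustrative assumptions. It averages per-sample gradient norms of a random linear softmax classifier over initializations (using the closed form of Eq. 6) and compares them to the input norms.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
C, d, n, draws = 10, 3072, 500, 10  # classes, input dim, samples, weight draws

# Synthetic inputs with varying norms and random labels.
X = rng.normal(size=(n, d)) * rng.uniform(0.5, 2.0, size=(n, 1))
y = rng.integers(0, C, size=n)

def grad_norm(W, x, yi):
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()          # softmax, Eq. (3)
    e = p.copy(); e[yi] -= 1.0                     # f(x)_j - 1{j = y}
    return np.linalg.norm(e) * np.linalg.norm(x)   # closed form of Eq. (6)

scores = np.zeros(n)
for _ in range(draws):
    W = rng.normal(size=(C, d)) / np.sqrt(d)       # fresh initialization per draw
    scores += np.array([grad_norm(W, x, yi) for x, yi in zip(X, y)])
scores /= draws

rho, _ = spearmanr(scores, np.linalg.norm(X, axis=1))
print(f"Spearman rank correlation: {rho:.3f}")     # close to 1, as in Figure 1
```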
## 4 Discussion

Although the input norm is inexpensive to compute, model-independent, and much cheaper than GraNd or other scores, our results imply it should not be used for data pruning. Similarly, since only GraNd scores at later epochs appear to perform as expected, we cannot recommend using GraNd scores at initialization for data pruning either.

Regarding the failure to reproduce the GraNd at initialization results, we initially could not rerun the code using the original JAX version due to GPU incompatibility. However, the authors of Paul et al. (2021) managed to reproduce their original results by setting up an old VM image with the original dependencies. This led us to discover a later-fixed bug in the FLAX checkpoint restore function flax.training.restore_checkpoint5 as the underlying cause of the reported GraNd at initialization results: the code was restoring different checkpoints than expected. Our experience reinforces the importance of preserving code environments and dependencies to enable reproducibility.

Our findings highlight the critical need for reproductions, especially when initially surprising results are reported. As state-of-the-art techniques become more complex, it is essential that different research groups thoroughly verify results across frameworks and implementations before informing best practices.

## 5 Conclusion

This work makes multiple contributions. We uncovered a strong correlation between the inexpensive input norm and the proposed GraNd metric at initialization. However, through extensive analysis, we found that neither input norm nor GraNd scores at initialization reproduced the originally reported pruning performance. Only GraNd and EL2N scores at later training epochs provided useful pruning signals, aligning with expected behavior. We traced the non-reproducible GraNd at initialization results to a now-fixed bug in the checkpoint restoring code of the underlying FLAX framework. Our findings rectify claims around GraNd scores at initialization, provide updated recommendations on utilizing these data pruning metrics, and underscore the importance of rigorously confirming initially surprising results across implementations. As deep learning advances, maintaining high standards for reproducibility will ensure progress builds on solid foundations.

## Acknowledgments

Many thanks to Mansheej Paul and Karolina Dziugaite for very helpful feedback, discussions, and in particular, for their assistance in investigating the source of the discrepancy in reproducing the GraNd at initialization results. Upon being informed of our findings, they worked to trace the issue and shared their methodology for reproducing the original results using the original framework versions. This helped uncover the checkpoint restoring bug as the underlying cause. We appreciate that they took the time to rerun experiments with the original codebase and dependencies, and for quickly preparing an updated version of their paper. AK is supported by the UK EPSRC CDT in Autonomous Intelligent Machines and Systems (grant reference EP/L015897/1). ChatGPT (GPT-4) and Claude 2 were used to suggest edits and improvements.

## References

Olivier Bachem, Mario Lucic, and Andreas Krause. Practical coreset constructions for machine learning. arXiv preprint arXiv:1703.06476, 2017.

Tysam Balsam.
hlb-CIFAR10, February 2023. URL https://github.com/tysam-code/hlb-CIFAR10.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Glenn W Brier. Verification of forecasts expressed in terms of probability. *Monthly Weather Review*, 78(1):1–3, 1950.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901, 2020.

5See https://github.com/google/flax/commit/28fbd95500f4bf2f9924d2560062fa50e919b1a5: passing a 0 step (i.e., initialization) would trigger loading the *latest* checkpoint instead of the zero-th checkpoint because the internal implementation was checking `if step:` instead of `if step is not None:` when deciding whether to fall back to loading the latest checkpoint.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems* 32, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. *Advances in Neural Information Processing Systems*, 34:20596–20607, 2021.

Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. *Advances in Neural Information Processing Systems*, 35:19523–19536, 2022.

Joost van Amersfoort. Minimal CIFAR-10, May 2021. URL https://github.com/y0ast/pytorch-snippets/tree/main/minimal_cifar.

## A Appendix

![7_image_0.png](7_image_0.png)

Figure 4: Correlation between GraNd at Initialization and Input Norm on the Test Set. (a): We sort the samples by their average normalized score (i.e., the score minus its minimum, divided by its range), plot the scores, and compute Spearman's rank correlation on CIFAR-10's test data. The original repository and the 'minimal' implementation have a very high rank correlation; 'hlb' has a lower but still strong rank correlation. (b): Ratio between input norm and gradient norm. In the 'minimal' implementation, the ratio between input norm and gradient norm is roughly log-normally distributed.
Review 1:

Summary: This paper examines one of the novel approaches introduced by Paul et al. (2021) in their paper titled "Deep Learning on a Data Diet", known as GraNd. GraNd is a method used to prune away unnecessary data in the training dataset for more effective training using the reduced dataset. In the present study, the authors shed light on a significant issue discovered within the experiments conducted by Paul et al. (2021), which has resulted in misleading experimental outcomes. Specifically, the authors of this paper show that the results of GraNd obtained in the original paper by Paul et al. (2021) at initialization of the model are wrong and caused by a bug in their implementation.

Strengths and Weaknesses: Strengths: All in all, the paper is clearly written and the statements seem to be technically sound. I have noticed that Paul et al. (2021) have also updated their arXiv version of the "Deep Learning on a Data Diet" paper to account for the mistake pointed out. Weaknesses: As a paper solely submitted to point out a bug in a previous paper, I don't think there are many major weaknesses. The bug pointed out seems to be real and the conclusion from this paper seems to be correct. Nevertheless, I am not sure if such a paper is worth publishing, especially after Paul et al. (2021) have fixed the issues in their version of the paper.

Requested Changes: I don't think there are any changes needed.

Broader Impact Concerns: No concerns.

==================================================

Review 2:

Summary: The paper reports on the failure to fully reproduce the results of Paul et al. (2021). In particular, gradient-at-initialization (GraNd) scoring did not perform better than random pruning of training examples. The authors used the original source code as well as another optimized ResNet-18 implementation. It seems that the cause of the non-reproducibility is a (now fixed) bug in the FLAX library.

Strengths and Weaknesses: *Strengths* * The paper contributes to the community by reproducing and validating established results. *Weaknesses* * I don't know if, in this case, a TMLR paper is the best format for sharing the results. The issue seems to be a software bug, not a conceptual error. It's important to know that GraNd doesn't work at iteration 0, but it works at other iterations, so unless Paul et al. _predicted_ that GraNd must have worked at iteration 0 (as I understand, that's not the case), correcting the original paper feels enough to me. I wonder if the authors can suggest what could be the other consequences of their experimental findings.

Requested Changes: In my opinion, the second paragraph of the paper is not neutral to the principle of double-blind review and contains details that are irrelevant for the majority of readers. I request these details to be removed.

Broader Impact Concerns: No concerns

==================================================

Review 3:

Summary: This work investigates whether one of the key metrics (GraNd) introduced by the Data-Diet paper is indeed useful in dataset pruning, as reported in that paper. Through thorough experiments on two architectures using both the original repo and a new reimplementation, the authors find that GraNd cannot outperform random pruning, while the other metric (EL2N) works as reported. The authors further reported that a bug fix in the supporting training framework of the Data-Diet repo (JAX) is likely the reason for the inconsistency.
Strengths and Weaknesses: Strengths: The authors provided a clear motivation for why they started checking whether GraNd (the norm of the gradients at the initialization step) is indeed useful, which is due to its strong correlation with the input-norm metric and which may therefore lead to a dataset pruning metric that is both cheap to compute and powerful. The authors also used both the original repo and a re-implementation repo to confirm that the failure to reproduce the result is not due to an error in their implementation. The strong correlation between the input-norm and GraNd metrics also indicates that this GraNd metric is unlikely to work as reported in the Data-Diet paper, as it would be difficult to imagine that the input norm would be important in deciding how good an example is for training. Finally, the authors also identified the possible cause of the original successful result in the Data-Diet paper. Taken together, these results help the community further explore which metric is important for data pruning. Weaknesses: The first figure is a bit confusing. If the authors want to show the strong correlation between the two metrics (input-norm and GraNd), why not show a scatter plot with the X-axis being one metric and the Y-axis being the other metric? The writing can be improved. It's currently informal and could be clearer. For example, the abstract is a bit disfluent in its logic, with the motivation not clearly explained before the result.

Requested Changes: See the weaknesses.

Broader Impact Concerns: No clear negative societal impacts from this.

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper is written well. The experiments are convincing. The paper explains why it failed to replicate some of the experiments, and the results are interesting. Especially, I am quite surprised at how strongly random pruning performs. Also, the strong correlation between the GraNd score and input norms makes sense. It is well explained too. The paper is short, but I think that is appropriate since the conclusions that can be drawn from it are straightforward and need not be obfuscated.

==================================================
# Bayesian Optimization For Minimizing CVaR Under Performance Constraints

Anonymous authors Paper under double-blind review

## Abstract

Optimal portfolio allocation can often be formulated as a constrained risk problem, where one aims to minimize a risk measure subject to some performance constraints. This paper presents new Bayesian Optimization (BO) algorithms for such constrained minimization problems, seeking to minimize the conditional value-at-risk (CVaR), a computationally intensive risk measure, under a minimum expected return constraint. The proposed algorithms utilize a new acquisition function, which drives sampling towards the optimal region. Additionally, a new two-stage procedure is developed, which significantly reduces the number of evaluations of the expensive-to-evaluate objective function. The proposed algorithms' competitive performance is demonstrated through practical examples.

## 1 Introduction

Portfolio optimization is the process of determining the optimal allocation of certain resources. While it is best known for its applications in finance, it also has important applications in many other areas, such as energy (Fleischhacker et al., 2019), healthcare (Kheybari et al., 2023), supply chain management (Hamdi et al., 2018) and artificial intelligence (Ghosh et al., 2022). Significant research has gone into developing methods to find an optimal portfolio allocation based on certain risk measures, such as value-at-risk (VaR) or conditional value-at-risk (CVaR). A typical formulation seeks to minimize such a risk measure, subject to a minimum expected return requirement, or constraint. When the objective and constraint functions are assumed to be linear and accessible (that is, not black-box), they can easily be solved using classic Linear Programming methods, as demonstrated in Rockafellar and Uryasev (2000) and Krokhmal et al. (2002). Furthermore, when the functions are non-linear but still accessible, one can use alternate traditional optimization methods (see e.g., Gaivoronski and Pflug (2005); El Ghaoui et al. (2003); Alexander et al. (2006)). The assumptions of linearity, differentiability, or accessible objective and constraint functions underlie many traditional portfolio optimization algorithms. In many practical settings, however, the objective (i.e., the risk measure) and/or constraint functions are non-linear, noisy and expensive black-boxes, making these approaches infeasible. Recently, considerable attention has been devoted to developing methods based on Bayesian Optimization (BO) (Brochu et al., 2010; Frazier, 2018; Garnett, 2023) for minimizing risk measures, primarily due to their ability to deal with noisy, expensive and black-box functions. In this regard, Cakmak et al. (2020) propose a BO algorithm for the unconstrained optimization of VaR and CVaR of black-box expensive-to-evaluate functions, with randomness induced by an environmental vector. Instead of modelling the objective function directly, the paper proposes modelling the underlying function f as a Gaussian Process (GP) and applying a BO method which jointly selects both a portfolio weight w and a realisation of the environmental variable Z, using a one-step look-ahead approach based on the knowledge gradient acquisition function (?). Furthermore, Nguyen et al.
propose two alternate BO approaches for the optimization of VaR (Nguyen et al., 2021b) and CVaR (Nguyen et al., 2021a), which provide certain desirable computational and theoretical properties. More recent work includes Daulton et al. (2022a), which aims to tackle multivariate value-at-risk problems, that is, where the optimization problem has multiple objectives with input noise, and Picheny et al. (2022), which studies the situation where the distribution of Z is unavailable. The aforementioned methods are designed for the unconstrained minimization of risk measures. To date, no BO algorithm has been designed specifically for constrained portfolio optimization problems. For general constrained optimization problems, a popular class of BO algorithms incorporates the constraints into the acquisition function design (see, i.e., Gramacy and Lee (2011); Gardner et al. (2014); Gelbart et al. (2014)). More recent advances include Lam and Willcox (2017), Letham et al. (2019) and Eriksson and Poloczek (2021), among others. Whilst these methods are effective, they often require frequent evaluation of the risk measure functions, which is unsuitable for complex allocation problems - as detailed shortly. The main purpose of this work is to design a constrained BO algorithm specifically for the portfolio allocation problem. In this paper, we introduce new BO methods building on Gardner et al. (2014) and Gelbart et al. (2014), designed to take advantage of two key properties which hold in portfolio allocation problems: 1) the expected return constraint functions are much cheaper to evaluate than the objective function, i.e., the risk measure; 2) the expected return constraints are typically *active* - namely, the optimal solution lies on the boundary of the feasible region defined by the constraints. Firstly, this paper introduces a two-stage BO adaptation, which reduces the number of full-function evaluations needed to find an optimal solution, significantly reducing the computational cost of the algorithm. Only samples that meet certain criteria are fully evaluated in the second stage; this differs from cascade-based BO approaches (Kusakawa et al., 2022) where all samples in the first stage are used in the second, regardless of their feasibility or promise. Secondly, this work proposes a new acquisition function that encourages more sampling in the near-optimal region, improving the algorithm's performance. The paper also details how the proposed methods can be adapted for batch implementation to take advantage of parallel computing. As the numerical examples demonstrate, the proposed BO algorithms are highly effective for solving constrained portfolio allocation problems, outperforming existing approaches with a lower computational cost and faster convergence. These improvements are achieved by combining a new acquisition function, a two-stage procedure and the potential for parallel batch implementation.

## 2 Optimal Portfolio Allocation

Suppose an investor seeks to find an optimal allocation to N assets. For the target portfolio, we define an N-dimensional vector w = (w_1, ..., w_N) to represent the capital allocation or *portfolio weights*. Each component w_i corresponds to the fraction of the total capital allocated to the i-th asset. The vector w is defined within the constraints of a feasible set W = {w ∈ R^N | w_i ≥ 0, Σ_{i=1}^N w_i ≤ 1}, to ensure that the sum of all weights does not exceed the total available capital, which, without loss of generality, is taken to be 1.
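Since candidate weights are repeatedly drawn from W in the optimization procedure that follows, a small helper is sketched below. It samples uniformly from W by drawing from a flat Dirichlet over N + 1 coordinates and dropping the last one (interpreted as uninvested capital); this construction is standard, but it is our illustration rather than part of the paper's method.

```python
import numpy as np

def sample_weights(n_samples, n_assets, rng=np.random.default_rng()):
    """Uniform samples from W = {w : w_i >= 0, sum_i w_i <= 1}."""
    d = rng.dirichlet(np.ones(n_assets + 1), size=n_samples)
    return d[:, :n_assets]  # drop the slack coordinate (uninvested capital)

def is_feasible(w, tol=1e-9):
    w = np.asarray(w)
    return bool(np.all(w >= -tol) and w.sum() <= 1 + tol)
```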
To account for the uncertainty in future asset returns, we introduce a random variable Z representing the random environmental factors that can affect the future return. The return function f(w, z) represents the forecasted portfolio return for an allocation w and a realisation z of Z. For clarity: f(0, z) = 0, as no capital invested means no returns; f(w, z) < 0 if the portfolio is forecasted to lose money, and f(w, z) > 0 if the portfolio is forecasted to gain money. As an example, f(w, z) = 0.1 means a forecasted gain of 10%; f(w, z) = −0.2 means a forecasted loss of 20%.

## 2.1 Risk Measures

VaR is defined as the threshold value ω such that the probability of the loss exceeding ω is at most (1 − α). Formally, for a return function f, a set of portfolio weights w and a VaR threshold α, we define VaR as:

$$\mathrm{VaR}_{\alpha}[f(\mathbf{w},\mathbf{Z})]=\inf\{\omega:\mathbb{P}(f(\mathbf{w},\mathbf{Z})\leq-\omega)\leq1-\alpha\}$$

For conciseness, we use the notation v_f(w; α) to denote VaR_α[f(w, Z)] for the remainder of this work. Typical values for α include 99.9% and 97.5%. CVaR at a specified risk level α ∈ (0, 1) is the expected loss, given that the loss is worse than the VaR threshold. It represents the average of the worst-case losses. Formally, CVaR is defined as:

$$\mathrm{CVaR}_{\alpha}[f(\mathbf{w},\mathbf{Z})]=-\mathbb{E}[f(\mathbf{w},\mathbf{Z})\,|\,f(\mathbf{w},\mathbf{Z})\leq-v_{f}(\mathbf{w};\alpha)]$$

Artzner et al. (1999) establishes key desirable properties for risk measures. Of those, CVaR meets many, including subadditivity, translation invariance, positive homogeneity, and monotonicity1. In contrast, VaR often exhibits multiple local extrema and unpredictable behaviour as a function of portfolio positions, limiting its usefulness in portfolio optimization problems; these limitations do not apply to CVaR (Mausser and Rosen, 1999; McKay and Keefer, 1996). Within portfolio optimization, the chosen risk measure must be able to handle and account for the uncertainty induced by the environmental vector Z. Embrechts et al. (2022) establishes a framework for considering the effect of uncertainty around an α-quantile level, concluding that (unlike VaR) CVaR remains stable and robust in simulation-based optimization methods with uncertainty. Within this study, we choose CVaR as the objective function risk measure.

## 2.2 Problem Set-Up

Let us consider the expected return for a selected portfolio, defined under a set of portfolio weights w as the expectation over all possible forecasted returns, that is, E_Z[f(w, Z)]. An underlying principle in investing is that, to compensate investors for taking on greater risk, a riskier asset should generate higher returns. A robust risk measure should reflect this feature. Research (see, e.g., Bali and Cakici (2004); Iqbal et al. (2010)) has shown that a positive relationship exists between CVaR and expected returns. A higher expected return can only be obtained by increasing risk exposure; conversely, if an investor wishes to reduce the CVaR of their portfolio, the expected return will reduce too. A key property of CVaR is that it is monotonic with respect to stochastic dominance of order 1 and order 2 (Pflug, 2000). In financial risk management, this property implies that when comparing two investment options, if one demonstrates a lower risk (as indicated by CVaR) while providing an equal or higher expected return, it is universally more favourable regarding the risk-return balance. These features enable us to use CVaR as a reliable and robust risk measure within portfolio optimization.
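In practice, both the expected return and the CVaR are estimated by Monte Carlo (the estimator used in the paper is detailed in its Appendix, which is not reproduced here). The snippet below is a minimal sketch of such an estimate from i.i.d. samples of Z, with the empirical-quantile convention being our assumption.

```python
import numpy as np

def var_cvar(returns, alpha=0.975):
    """Monte Carlo estimates of VaR_alpha and CVaR_alpha from samples f(w, z_i)."""
    losses = -np.asarray(returns)        # losses are negated returns
    var = np.quantile(losses, alpha)     # v_f(w; alpha)
    cvar = losses[losses >= var].mean()  # mean of the worst (1 - alpha) tail
    return var, cvar

# Usage: returns = np.array([f(w, z) for z in z_samples]); var_cvar(returns)
```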
For a chosen expected return requirement, stochastic dominance ensures that an investor can identify an optimal portfolio that meets or exceeds this return requirement with the lowest possible risk, as measured by CVaR. Therefore, in this context, an optimal portfolio provides the desired expected return with the lowest possible CVaR. This relationship lays the groundwork for our problem set-up. For ease of notation, we define the objective function as g(w) and the constraint function as R(w). For a minimum expected return threshold value r^min, the complete constrained portfolio optimization problem is as follows:

$$\begin{array}{rl}\min_{\mathbf{w}}&g(\mathbf{w}):=\mathrm{CVaR}_{\alpha}[f(\mathbf{w},\mathbf{Z})]&(1\mathrm{a})\\ \mathrm{s.t.}&R(\mathbf{w}):=\mathbb{E}_{\mathbf{Z}}[f(\mathbf{w},\mathbf{Z})]\geq r^{\min}&(1\mathrm{b})\\ &0\leq w_{i}\leq1,\;i=1,\ldots,N,\quad\sum_{i=1}^{N}w_{i}\leq1.&(1\mathrm{c})\end{array}$$

Within this problem set-up, we seek to minimize the CVaR (denoted as g(w)), constrained by a minimum expected return threshold (r^min), defined by the constraint function R(w).

1Subadditivity ensures that the risk of a combined portfolio does not exceed the sum of the individual risks, implying diversification benefits. Translation invariance guarantees that adding a risk-free asset to a portfolio does not alter the risk measure. Positive homogeneity dictates that scaling a portfolio by a positive factor scales the risk measure by the same factor, ensuring proportionality. Monotonicity ensures that a portfolio with consistently higher returns under all scenarios is considered less risky.

## 3 Bayesian Optimization

Bayesian Optimization - introduced in Močkus (1975) - is a powerful method for solving global optimization problems. The method applies to scenarios where the objective function does not have a closed-form expression, but noisy evaluations can be obtained at sampled points. In this section, we present an adaptation of the BO methods developed in Gardner et al. (2014) and Gelbart et al. (2014) to handle the uncertainty caused by an environmental vector Z.

## 3.1 Unconstrained Bayesian Optimization

BO is a probabilistic framework for optimizing black-box functions based on the GP model. In the unconstrained setting, BO sequentially evaluates an objective function at selected points, from which a GP model of the objective function is constructed. The design point(s) are selected by maximizing an acquisition function, which quantifies a desired trade-off between the exploration and exploitation of the GP model. Commonly used acquisition functions include expected improvement, probability of improvement and upper confidence bounds. The standard BO procedure for the unconstrained global minimization of a function g(w) is given in Alg. 1. We note that, in practice, it is often impossible to evaluate the objective function g(w) directly; instead, one can obtain a noisy estimate of it, denoted ĝ in Alg. 1.
Algorithm 1 Bayesian Optimization

Require: black-box model πg(y|w), acquisition function a(w, g̃)
Ensure: a local minimizer of g(w)
  initialize the training data set D0 using an initial design
  let t = 0
  while stopping criteria not met do
    let t = t + 1
    construct a GP model g̃_{t−1} using D_{t−1}
    let w_t = arg max_w a(w, g̃_{t−1})
    compute an estimate of g(w_t), denoted by ĝ_t
    let D_t = D_{t−1} ∪ {w_t, ĝ_t}
  end while

## 3.2 Bayesian Optimization With Constraints

This section presents the BO algorithm for optimization problems with inequality constraints, largely following Gardner et al. (2014) and Gelbart et al. (2014). Suppose that we have the following constrained optimization problem:

$$\min_{\mathbf{w}}g(\mathbf{w})\quad\mathrm{s.t.}\quad c_{k}(\mathbf{w})\leq0,\;k=1,\ldots,K.\tag{2}$$

To solve Eq. 2 with BO, all the constraint functions c_k(w) need to be modelled as GPs. Namely, the GP model for the k-th constraint c_k(w) is obtained from the constraint training set C^k = {(w_1, c_k(w_1)), ..., (w_m, c_k(w_m))}, where the constraint functions are evaluated at each design point. Therefore, when selecting the design points, both the objective and the constraints need to be considered, which is accomplished by incorporating the constraints into the acquisition function. Gardner et al. (2014) propose modifying the *Expected Improvement* (EI) acquisition function. Let w+ be the current best-evaluated point, that is, g(w+) is the smallest in the current training set. We define the improvement as

$$I(\mathbf{w})=\max\{0,\,g(\mathbf{w}^{+})-\tilde{g}(\mathbf{w})\},\tag{3}$$

where g̃(w) is the GP model constructed with the current objective training set D. The EI acquisition function is defined as EI(w) = E[I(w)|D], where the expectation is taken over the posterior of g̃(w). We further adapt this acquisition function to account for the constraints. For k = 1, ..., K, let c̃_k(w) be the GP model for the constraint function c_k(w), conditional on the training set C^k, and let

$$\mathrm{PF}(\mathbf{w})=\mathbb{P}(\tilde{c}_{1}(\mathbf{w})\leq0,\,\tilde{c}_{2}(\mathbf{w})\leq0,\,\ldots,\,\tilde{c}_{K}(\mathbf{w})\leq0),\tag{4}$$

which is the probability that a candidate point w satisfies all the constraints. In our present problem, we only need to consider the case where the constraints are conditionally independent given w; as such, we have:

$$\mathrm{PF}(\mathbf{w})=\prod_{k=1}^{K}\mathbb{P}(\tilde{c}_{k}(\mathbf{w})\leq0).\tag{5}$$

Finally, we define the new acquisition function to be

$$a_{\mathrm{CW\text{-}EI}}(\mathbf{w})=\mathrm{EI}(\mathbf{w})\,\mathrm{PF}(\mathbf{w}),\tag{6}$$

which is referred to as the constraint-weighted expected improvement (CW-EI) acquisition function in Gardner et al. (2014). The constrained BO algorithm proceeds largely the same as the unconstrained version (Alg. 1), except for the following two main differences: (1) the constrained acquisition function in Eq. 6 is used to select the new design points; (2) for each design point, both the objective and constraint functions are evaluated. We hereafter refer to this constrained BO method as CW-EI BO. Finally, we note that in a class of BO approaches (Fröhlich et al., 2020; Cakmak et al., 2020; Daulton et al., 2022b), the underlying function f(w, Z) is modelled as a single GP (with w being the input) for a fixed Z during the optimization procedure, and then Z is only random at implementation time. Whilst this may be appropriate for many unconstrained problems, it is not for portfolio allocation problems. As explained later, we intend to deal with the CVaR function and the expected return constraint separately, and thus we cannot use this single GP model framework.
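For Gaussian GP posteriors, both factors of Eq. 6 have closed forms. The sketch below evaluates a_CW-EI from posterior means and standard deviations at a batch of candidate points; in practice, these would come from fitted GP models (e.g., via GPy or GPyTorch), which we leave abstract here.

```python
import numpy as np
from scipy.stats import norm

def cw_ei(mu_g, sigma_g, mu_c, sigma_c, g_best):
    """Constraint-weighted EI (Eq. 6) for minimization.

    mu_g, sigma_g: objective GP posterior mean/std, shape (n,).
    mu_c, sigma_c: constraint GP posterior means/stds, shape (K, n),
                   for constraints written as c_k(w) <= 0.
    g_best: best objective value g(w+) observed so far.
    """
    z = (g_best - mu_g) / sigma_g
    ei = (g_best - mu_g) * norm.cdf(z) + sigma_g * norm.pdf(z)  # EI for Eq. 3
    pf = norm.cdf(-mu_c / sigma_c).prod(axis=0)                 # Eq. 5
    return ei * pf                                              # Eq. 6
```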
## 4 BO For Portfolio Optimization

It is possible to directly apply the existing CW-EI BO algorithm to the portfolio optimization problem. This is achieved by modelling the CVaR objective and expected return constraint functions as separate GPs. To be specific, the CW-EI acquisition function is utilized to propose new portfolio weights w, for which a standard Monte Carlo (MC) simulation approximates the distribution of f(w, Z), yielding the expected return and CVaR for w - further detailed in the Appendix. The CVaR and expected return values for the proposed weights w are then used to update the objective and constraint GPs, respectively. Therefore, in the standard CW-EI BO procedure, a full evaluation of the objective and constraint functions must be performed for each proposed portfolio weight to update the respective GPs. As shown in Appendix A, it is possible to obtain an accurate estimate of the expected return with a relatively low MC sample size, while a large number of MC samples is required to obtain an accurate estimate of the CVaR (from the distribution of f(w, Z)). As such, the computational cost of calculating the CVaR, i.e., the objective function, is significantly higher than that of the expected return constraint. For example, in the numerical experiments provided in Section 5.2, the cost of evaluating the expected return is around 1% of that of evaluating the CVaR2. Therefore, the computational efficiency of BO algorithms can be enhanced by reducing the number of CVaR evaluations.

## 4.1 Activeness Of The Constraint

This section formalizes several assumptions related to the portfolio optimization problem. These assumptions, and the subsequent theorem, allow us to develop a new BO procedure that takes advantage of the computational efficiency gained by reducing the number of objective CVaR evaluations. Before presenting the assumptions, we clarify our notation. It is important to note that f(w, z) ≤ 0 indicates losses, whereas VaR and CVaR are statements about the losses, so v_f(w; α) ≥ 0 and CVaR_α[f(w, Z)] ≥ 0 represent negative returns, or losses.

2The exact saving depends on the choice of the α threshold, where the cost saving increases as α decreases.

Now, let us introduce several assumptions concerning the return function f(w, z) and the distribution of Z, which are critical to our proposed method.

Assumptions 1-4.
1. f(w, z) is a continuous function of w for any fixed z;
2. f(0, z) ≡ 0;
3. for a given w ∈ W and any fixed z, if f(w, z) ≤ 0, f(ρw, z) is a decreasing function of ρ ∈ [0, 1];
4. there exists α ∈ (0, 1) such that v_f(w; α) ≥ 0 for all w ∈ W.

- Assumption 1 ensures that small changes in portfolio allocation do not lead to abrupt or unpredictable changes in outcomes; a reasonable expectation in most financial models.
- Assumption 2 is straightforward; an absence of investment will result in a neutral (zero) financial return.
- Assumption 3 implies that, if a chosen portfolio allocation results in a loss for a certain scenario, this loss does not increase if the total capital is proportionally reduced3; this reflects the intuitive notion that if investing a certain amount leads to a loss, investing less should not lead to a greater loss.
- Assumption 4 implies that there always exists a choice of α ∈ (0, 1) such that, no matter the allocation w ∈ W, v_f(w; α) is positive, i.e., a loss.
## 4.1 Activeness Of The Constraint

This section formalizes several assumptions related to the portfolio optimization problem. These assumptions, and the subsequent theorem, allow us to develop a new BO procedure that takes advantage of the computational efficiency gained by reducing the number of objective CVaR evaluations. Before presenting the assumptions, we clarify our notation. It is important to note that f(w, z) ≤ 0 indicates losses, whereas VaR and CVaR are statements about the losses, so v_f(w; α) ≥ 0 and CVaR_α[f(w, Z)] ≥ 0 represent negative returns, or losses. Now, let us introduce several assumptions concerning the return function f(w, z) and the distribution of Z, which are critical to our proposed method.

Assumptions 1-4. 1. f(w, z) is a continuous function of w for any fixed z; 2. f(0, z) ≡ 0; 3. for a given w ∈ W and any fixed z, if f(w, z) ≤ 0, then f(ρw, z) is a decreasing function of ρ ∈ [0, 1]; 4. there exists α ∈ (0, 1) such that v_f(w; α) ≥ 0 for all w ∈ W.

- Assumption 1 ensures that small changes in portfolio allocation do not lead to abrupt or unpredictable changes in outcomes; a reasonable expectation in most financial models.
- Assumption 2 is straightforward; an absence of investment will result in a neutral (zero) financial return.
- Assumption 3 implies that, if a chosen portfolio allocation results in a loss for a certain scenario, this loss does not increase if the total capital is proportionally reduced³; this reflects the intuitive notion that if investing a certain amount leads to a loss, investing less should not lead to a greater loss.
- Assumption 4 implies that there always exists a choice of α ∈ (0, 1) so that no matter the allocation w ∈ W, v_f(w; α) is positive, i.e., a loss.

³For clarity, as ρ goes from 0 to 1, f goes from f(0, z) ≡ 0 to f(w, z). As f(w, z) ≤ 0, the function value f(ρw, z) gets more negative, so f is a decreasing function w.r.t. ρ ∈ [0, 1].

In simpler terms, no matter the allocation, there always exists some level of risk (represented by α) which can be chosen to ensure there is always some risk of loss (as indicated by VaR). This is important, as it allows us - through the appropriate choice of α - to consider only the loss scenarios when evaluating the associated CVaR. From this, we obtain the following theorem:

Theorem 1. *If the function f(w, Z) and distribution p_z(·) satisfy Assumptions 1-4, α is chosen such that v_f(w; α) ≥ 0 for all w ∈ W, and solutions to the constrained optimization problem 1 exist, then there must exist a solution to problem 1, denoted as w∗, such that R(w∗) = r^min.*

Proof. First, assume that w′ is a solution to the constrained optimization problem 1. It follows directly that R(w′) ≥ r^min. Obviously, if R(w′) = r^min, the theorem holds. Now consider the case that R(w′) > r^min, i.e., it does not lie on the boundary of the feasible region. From Assumption 1, R(w) is a continuous function of w in W. Next define a function

$$h(\rho)=R(\rho\mathbf{w}^{\prime})$$

for ρ ∈ [0, 1]. As R(w) is a continuous function in W, h(ρ) is a continuous function too. From Assumption 2, we know that h(0) = 0, and therefore

$$h(0) = 0 < r^{\min} < h(1) = R(\mathbf{w}^{\prime}).$$

According to the intermediate value theorem for continuous functions, there exists some ρ∗ ∈ (0, 1) such that h(ρ∗) = R(ρ∗w′) = r^min. Let w∗ = ρ∗w′ denote this point, which lies on the constraint boundary - we wish to compare F(w∗) and F(w′), i.e., the CVaR values at these two points for a fixed α. From the theorem's assumption, we have v_f(w′; α) ≥ 0 and v_f(w∗; α) ≥ 0. From Assumption 3, we know that for any z, if f(w′, z) ≤ 0, then f(w′, z) ≤ f(w∗, z) ≤ 0. It follows that for any z ∈ {z | f(w′, z) ≤ −v_f(w′; α)}, we have f(w′, z) ≤ f(w∗, z) ≤ −v_f(w∗; α) ≤ 0. As such, we can derive v_f(w∗; α) ≤ v_f(w′; α), and obtain

$$\begin{array}{rl}\mathrm{CVaR}_{\alpha}[f(\mathbf{w}^{*},\mathbf{Z})]&=-\mathbb{E}[f(\mathbf{w}^{*},\mathbf{Z})\,|\,f(\mathbf{w}^{*},\mathbf{Z})\leq-v_{f}(\mathbf{w}^{*};\alpha)]\\ &\leq-\mathbb{E}[f(\mathbf{w}^{*},\mathbf{Z})\,|\,f(\mathbf{w}^{*},\mathbf{Z})\leq-v_{f}(\mathbf{w}^{\prime};\alpha)]\\ &\leq-\mathbb{E}[f(\mathbf{w}^{\prime},\mathbf{Z})\,|\,f(\mathbf{w}^{\prime},\mathbf{Z})\leq-v_{f}(\mathbf{w}^{\prime};\alpha)]\\ &=\mathrm{CVaR}_{\alpha}[f(\mathbf{w}^{\prime},\mathbf{Z})].\end{array}$$

Therefore, w∗ is also a minimal solution w.r.t. the objective function and R(w∗) = r^min. The proof is thus complete.

Simply put, Theorem 1 states that under some reasonable assumptions, the constraint 1b is active for at least one solution. The result is rather intuitive, as it infers that a higher expected return can only be obtained by increasing risk exposure and, as such, the CVaR. The optimal solution to our problem will likely arise from an active constraint, where the minimum expected return requirement limits our ability to reduce the CVaR further. This aligns with our earlier analysis of CVaR's properties of stochastic dominance and its relationship to returns. These observations provide a useful heuristic and motivate us to drive sampling towards the active region. We shall also note that, while the assumptions are sensible from the practical point of view, some of them, such as those on α, are rather difficult to verify in advance. Further comments on the matter are provided in the conclusions.
## 4.2 Two-Stage Point Selection

Intuitively, Theorem 1 suggests that a solution to problem 1 can be found close to the boundary of the constraint. Therefore, based on the expected return value for a proposed portfolio weight w, evaluating the CVaR objective function is unnecessary in the following two situations. Firstly, if the expected return is lower than the minimum constraint threshold, the proposed design point is not feasible, so the CVaR function does not need to be evaluated. Secondly, if the expected return is too high (i.e., the constraint is not approximately active), the corresponding CVaR is likely far from optimal, so the objective does not need to be evaluated. We introduce a maximum expected return parameter, denoted by r^max, set on the basis that points with expected returns higher than this parameter value are highly unlikely to be optimal for our objective. Based on these observations, we propose a two-stage point selection procedure. The first stage selects a design point based on the acquisition function. In the second stage, the expected return is calculated. If the expected return satisfies the requirement that

$$r^{\mathrm{min}}\leq R(\mathbf{w})=\mathbb{E}_{\mathbf{Z}}[f(\mathbf{w},\mathbf{Z})]\leq r^{\mathrm{max}}, \tag{7}$$

the more expensive evaluation of our objective function is completed to determine the CVaR value. Then, the GPs for both the constraint and objective functions are updated. If Eq. 7 is not satisfied, the proposed point is rejected, the objective function is not evaluated, and only the GP for the constraint is updated to ensure this point is not re-proposed. This two-stage (2S) adaptation has the advantage of only fully evaluating those feasible and (approximately) active points. As such, it reduces the number of evaluations of the expensive-to-evaluate CVaR objective. The algorithm maintains two training sets, one for the CVaR objective and one for the expected return, with the former being a subset of the latter.
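A minimal sketch of one iteration of this two-stage test, assuming the Monte Carlo estimators sketched in Section 4 and plain lists of (w, value) pairs as the training sets D and C; all helper names are illustrative.

```python
def two_stage_step(w, r_min, r_max, estimate_return, estimate_cvar, D, C):
    """One iteration of the two-stage selection: cheap return check first."""
    r = estimate_return(w)        # second stage: cheap MC estimate of R(w)
    C.append((w, r))              # constraint data is always updated,
                                  # so a rejected point is not re-proposed
    if r_min <= r <= r_max:       # Eq. 7: feasible and approximately active
        D.append((w, estimate_cvar(w)))  # expensive CVaR evaluation
        return True               # point fully evaluated
    return False                  # rejected without evaluating the objective
```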
## 4.3 New Acquisition Function

With the two-stage selection procedure, many more evaluations of the expected return constraint will be completed than of the CVaR objective; as such, the GP for the constraint will be more accurate than that of the objective function. As a result, the CW-EI acquisition function will be effective at proposing feasible points due to the quality of the constraint GP, but may be poor at proposing points with low CVaR due to the lower quality of the objective GP. To address this issue, we propose a new acquisition function based on the active constraint assumption. Namely, as the CW-EI acquisition function only accounts for the feasibility of the constraint, it should be adapted to incorporate the activeness as well. Let R̃(w) be a GP model of the expected return R(w), and define

$$\mathrm{PF}(\mathbf{w})=\mathbb{P}(r^{\operatorname*{min}}\leq{\widetilde{R}}(\mathbf{w})\leq r^{\operatorname*{max}}) \tag{8}$$

as the probability that a chosen weight w is feasible and approximately active. Therefore, for w,

$$\begin{array}{l}\mathrm{PF}(\mathbf{w})=\mathrm{PF}_{\min}(\mathbf{w})\times\mathrm{PF}_{\max}(\mathbf{w}),\\ \mathrm{PF}_{\min}(\mathbf{w})=\mathbb{P}(\widetilde{R}(\mathbf{w})\geq r^{\min}),\\ \mathrm{PF}_{\max}(\mathbf{w})=\mathbb{P}(\widetilde{R}(\mathbf{w})\leq r^{\max}).\end{array} \tag{9}$$

Combining Eq. 9 with the Expected Improvement obtains:

$$a_{\mathrm{ACW-EI}}(\mathbf{w})=\mathrm{EI}(\mathbf{w})\,\mathrm{PF}_{\operatorname*{min}}(\mathbf{w})\,\mathrm{PF}_{\operatorname*{max}}(\mathbf{w}), \tag{10}$$

which is hereafter referred to as the *active constraint-weighted expected improvement* (ACW-EI) acquisition function. This acquisition function depends on the GP models for both the CVaR and the expected return. In this paper, it is written as a_ACW-EI(w, g̃, R̃). The new term PF_max in the acquisition function encourages the proposed points to be approximately active, which, by proxy, increases the likelihood that such a point is near-optimal with respect to the risk measure objective function. The choice of r^max is explored through additional numerical examples. Including this parameter is a crucial aspect of our proposed BO algorithms. Two feasible points with different true objective function values will likely have similar expected improvement values (before a full evaluation) due to the low-quality GP for the objective function and equal probability of feasibility for the constraint. As such, the two points may be considered equal in the existing methodology. By introducing the new r^max term - based on the more accurate expected return GP - our proposed BO procedure can differentiate between these two points during the selection procedure. Finally, we note that certain existing acquisition functions may have a similar mathematical formulation to ACW-EI, e.g., Swersky et al. (2013); Gelbart et al. (2014); Wilson et al. (2018), but their purposes are very different from that of our ACW-EI acquisition function. In our algorithm, ACW-EI is particularly designed for the constrained CVaR minimization problem, where it provides the active constraint information to accelerate the computation, while most of the aforementioned works aim to design general-purpose acquisition functions with certain desired properties.

## 4.4 The Complete Algorithm

To complete our proposed algorithm, we must discuss the summation constraint: 0 ≤ w_i ≤ 1, i = 1, ..., N, and Σ_{i=1}^{N} w_i ≤ 1, which will be denoted as w ∈ S in what follows. It is possible to deal with these constraints in the same manner as the expected return, i.e., as GP models. However, unlike the expected return constraint, which is probabilistic, the summation constraint is deterministic and easy to evaluate. As such, the constraint is imposed during the maximization of the acquisition function, by solving the following constrained maximization problem: max_{w∈S} a_ACW-EI(w), which in this work is solved with the barrier method. Finally, by combining the two-stage point selection, the ACW-EI acquisition function, and the constrained acquisition maximization, our complete *2S-ACW-EI BO* algorithm is obtained, detailed in Alg. 2.

Algorithm 2 The 2S-ACW-EI BO algorithm

  Initialize the training data sets D (for the objective) and C (for the constraint) using an initial design;
  Let t = 1;
  while stopping criteria not met do
    Construct a GP model g̃_{t−1} using D;
    Construct a GP model R̃_{t−1} using C;
    Let w̄ = arg max_{w∈S} a_ACW-EI(w, g̃_{t−1}, R̃_{t−1});
    Evaluate the constraint R(w̄);
    Let C = C ∪ {(w̄, R(w̄))};
    if r^min ≤ R(w̄) ≤ r^max then
      Compute an estimate of the objective g(w̄), denoted as ĝ;
      Let D = D ∪ {(w̄, ĝ)};
      Let t = t + 1;
    end if
  end while
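Below is a minimal Python sketch of the ACW-EI acquisition (Eq. 10) and the sequential loop of Alg. 2, reusing the `expected_improvement` and `two_stage_step` helpers from the earlier sketches. The `fit_gp` and `maximize_acquisition` utilities (e.g., an EGO-style search restricted to w ∈ S) are assumed placeholders, not the API of any specific package.

```python
from scipy.stats import norm

def acw_ei(w, obj_gp, return_gp, best_g, r_min, r_max):
    """ACW-EI (Eq. 10): EI weighted by feasibility and activeness."""
    mu, sigma = obj_gp.predict(w)
    ei = expected_improvement(mu, sigma, best_g)
    mu_r, sigma_r = return_gp.predict(w)
    pf_min = 1.0 - norm.cdf((r_min - mu_r) / sigma_r)  # P(R~(w) >= r^min)
    pf_max = norm.cdf((r_max - mu_r) / sigma_r)        # P(R~(w) <= r^max)
    return ei * pf_min * pf_max

def two_stage_acw_ei_bo(n_full_evals, r_min, r_max, estimate_return,
                        estimate_cvar, fit_gp, maximize_acquisition, D, C):
    """Sequential 2S-ACW-EI BO (Alg. 2); D, C are lists of (w, value) pairs."""
    t = 0
    while t < n_full_evals:
        obj_gp, return_gp = fit_gp(D), fit_gp(C)   # refit GPs every iteration
        best_g = min(g for _, g in D)
        w = maximize_acquisition(
            lambda v: acw_ei(v, obj_gp, return_gp, best_g, r_min, r_max))
        if two_stage_step(w, r_min, r_max, estimate_return,
                          estimate_cvar, D, C):
            t += 1                                 # count only full evaluations
    return min(D, key=lambda pair: pair[1])        # best (w, CVaR) found
```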
## 4.5 Batch Implementation

In most BO approaches, one uses an acquisition function to select a single point to evaluate, after which the posterior GPs are updated and the process is repeated. This is *sequential*, as each point is selected and evaluated one at a time. It is expensive to evaluate the objective function, and as such, it may be advantageous to evaluate several points simultaneously, for example using parallel computers. In this regard, a batch implementation of BO is desirable, where several design points are selected using the acquisition function and then evaluated simultaneously in parallel. This section discusses a batch implementation of our proposed algorithms. In most batch BO methods, the batch of design points is determined sequentially via a given point-selection procedure, after which the objective and constraint functions are evaluated once the whole batch is obtained. The batch implementation of the constraint-weighted expected improvement BO using the new acquisition function ('ACW-EI BO') is henceforth denoted *KB-ACW-EI BO*. To adapt our two-stage BO algorithm for batch implementation, we include the evaluation of the expected return constraint in the point-selection procedure. Once the whole batch is obtained, the CVaR objective is evaluated in parallel. More specifically, the expected return is evaluated for each newly proposed point. If the expected return satisfies Eq. 7, the point is added to the batch and the constraint GP is updated. If the expected return does not satisfy Eq. 7, the point is not added to our batch, but the GP for the constraint is updated to ensure that the point is not proposed again. Once a batch has been determined, each point is fully evaluated - knowing that all batch points are both feasible and approximately active. The pseudo-code for our two-stage batch selection is provided in Alg. 3; the resulting method is henceforth denoted *2S-KB-ACW-EI*. The batch approach can be implemented in parallel, so it has a lower computational cost. However, the batch approach requires a greater total number of samples to converge to the optimal solution - as demonstrated in our numerical examples - because the GPs are updated less frequently, so each sample is chosen based on a less accurate GP than at the equivalent stage of the sequential approach.

Algorithm 3 Two-Stage Batch Selection

Require: a training set for the CVaR objective function D, a training set for the expected return constraint C
Ensure: a batch of b design points, B = ∅
  let i = 0;
  while i < b do
    propose a new design point w̄ based on a prescribed selection rule;
    evaluate the constraint R(w̄);
    if r^min ≤ R(w̄) ≤ r^max then
      let B = B ∪ {w̄};
      let i = i + 1;
    end if
    let C = C ∪ {(w̄, R(w̄))};
    update the GP model for the constraint using C;
  end while
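A minimal sketch of the batch selection in Alg. 3, under the same assumed helpers as before; `propose_point` stands for whatever prescribed selection rule is used (e.g., maximizing ACW-EI under the current GPs).

```python
def two_stage_batch_select(b, r_min, r_max, propose_point, estimate_return, C):
    """Two-stage batch selection (Alg. 3): only feasible, approximately
    active points enter the batch; CVaR is then evaluated in parallel."""
    batch = []
    while len(batch) < b:
        w = propose_point(C)      # selection rule on the current constraint data
        r = estimate_return(w)    # cheap expected return evaluation
        C.append((w, r))          # constraint GP updated either way, so a
                                  # rejected point is not proposed again
        if r_min <= r <= r_max:
            batch.append(w)
    return batch
```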
## 5 Numerical Experiments

We implement our proposed algorithms for two numerical examples and compare their results; an additional application example is provided in Appendix C. In all examples, BO was implemented using Trieste (Berkeley et al., 2023), a BO Python package built on TensorFlow. Within the package, we used the default Matérn 5/2 kernel, with length scale 1.0 and noise variance 10⁻⁷. For acquisition maximization, we include the summation constraint as a barrier function. The resulting problem is solved using the Efficient Global Optimization (EGO) method provided by the package.

## 5.1 Mathematical Example

We first consider a simple mathematical example, to demonstrate how the design points are selected by the different methods. Adapted from Gramacy et al. (2016), we seek to solve the following constrained optimization problem:

$$\min_{\mathbf{w}} f(\mathbf{w}) := -w_{1} - w_{2} \quad \text{s.t.} \quad c(\mathbf{w}):=\frac{3}{2}-w_{1}-2w_{2}-\frac{1}{2}\sin(2\pi(w_{1}^{2}-2w_{2}))\geq0. \tag{11}$$

The solution to the problem is w = (0.918, 0.540), where f(w) = −1.458. The original CW-EI method, ACW-EI (i.e., the new acquisition function without the 2S process), and 2S-ACW-EI each use 10 initial points and then a further 50 iterations. Figure 1 shows the design points obtained by each of the three algorithms.

Figure 1: Plots showing the optimal solution (green x) for numerical example one and the design points generated by each of the three methods. The figures include both the fully evaluated points (red) and those for which only the constraint was evaluated (blue). The feasible region is dark grey, the active region is light grey, and the infeasible region is white. The objective function contours are shown too.

The CW-EI and ACW-EI methods perform similarly in this task, where the algorithms generate a significant number of infeasible samples with high objective value before moving towards the feasible region. Both methods first establish a good GP for the objective function, encouraging samples to be generated in the high-objective region, before the GP for the constraint is fully formed. In contrast, in the 2S-ACW-EI method, samples are only fully evaluated if they are in the active region; therefore, after a few iterations, the GPs for the objective and constraint functions are weak and strong, respectively. Thanks to the well-formed GP model for the constraint, the acquisition function prioritizes the generation of points in the feasible region, in particular in the active region, before finding those feasible points which are optimal for the objective.
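For reference, the toy problem of Eq. 11 is easy to express directly; the sanity check below confirms that the reported optimum lies on the constraint boundary, consistent with the active-constraint theme of the paper.

```python
import numpy as np

def objective(w):
    """Eq. 11 objective: f(w) = -w1 - w2 (to be minimized)."""
    return -w[0] - w[1]

def constraint(w):
    """Eq. 11 constraint: the point w is feasible when c(w) >= 0."""
    return 1.5 - w[0] - 2.0 * w[1] \
        - 0.5 * np.sin(2.0 * np.pi * (w[0] ** 2 - 2.0 * w[1]))

w_star = np.array([0.918, 0.540])   # reported optimum
print(objective(w_star))            # approximately -1.458
print(constraint(w_star))           # approximately 0: on the boundary
```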
## 5.2 Portfolio Allocation Examples

## 5.2.1 Problem Setup

The following three examples are based on an investor seeking to optimally allocate capital to stocks or stock options related to the twenty largest technology companies listed on American stock exchanges (both the NYSE and Nasdaq) by market capitalisation. We take Z to be the stock price at the future time, the distribution of which is determined by historical data. The parameter values are detailed in Appendix B. In all three examples, the return function is

$$f(\mathbf{w},\mathbf{z})=\sum_{i=1}^{20}w_{i}y_{i}(z_{i}), \tag{12}$$

where y_i is the asset return - stated as a ratio, rather than an absolute value - corresponding to the i-th company, a function of its future stock price z_i. In the three examples, we alter the asset type - namely, the function y_i(z_i) varies. In each example, we consider a lower and a higher return constraint.

Example One. We seek to optimally allocate the investor's capital directly to the twenty stocks, which corresponds to setting y_i = z_i/z̄_i, with z̄_i being the stock's purchase price. Example 1 has a constraint for 1 − α = 0.0001 of r^min = (a) 1.45 and (b) 1.55.

Example Two. We seek to allocate the investor's capital to European Call options, based on the twenty stocks, held till expiry. A European Call option gives the owner the right to purchase the underlying asset for a pre-agreed strike price on a specified future date. Suppose that the present bid price of the call option for the i-th stock is b_i and the strike price is K_i; then the asset return is y_i = (max(0, z_i − K_i) − b_i)/b_i. Example 2 has a constraint for 1 − α = 0.0001 of r^min = (a) 5.30 and (b) 5.40.

Example Three. We consider European Call options, but where the return is derived from selling the option after six months rather than holding it to maturity. As such, the return depends on the change in the option price. Option prices can be modelled using quadratic functions of the underlying asset returns, realized through a delta-gamma approximation, that is, a second-order Taylor expansion of the portfolio return (Zymler et al., 2013). Namely, at a particular future time, the associated call option return becomes

$$y_{i}=\Delta_{i}\,\epsilon+\frac{1}{2}\,\Gamma_{i}\,\epsilon^{2},$$

with ϵ = z_i − z̄_i. Example 3 has a constraint for 1 − α = 0.0001 of r^min = (a) 2.90 and (b) 3.00.
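The three asset return functions can be sketched as follows; the parameter names (strike, bid, delta, gamma) mirror K_i, b_i, Δ_i, and Γ_i from the examples above, and the functions are vectorized over the twenty assets.

```python
import numpy as np

def stock_return(z, z_bar):
    """Example One: direct stock holding, y_i = z_i / z_bar_i."""
    return z / z_bar

def call_return_at_expiry(z, strike, bid):
    """Example Two: European call held to expiry."""
    return (np.maximum(0.0, z - strike) - bid) / bid

def call_return_delta_gamma(z, z_bar, delta, gamma):
    """Example Three: delta-gamma (second-order Taylor) call return."""
    eps = z - z_bar
    return delta * eps + 0.5 * gamma * eps ** 2

def portfolio_return(w, y):
    """Eq. 12: f(w, z) = sum_i w_i * y_i(z_i), with y precomputed per asset."""
    return np.dot(w, y)
```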
## 5.2.2 Experimental Results

In all three examples, we applied the three sequential methods and two batch methods: one based on standard CW-EI and one based on our proposed 2S batch method. We used 10 initial points, with 110 iterations for the sequential methods and 11 batches of size 10 for the batch methods. In our numerical experiments, we set r^max = 110% r^min (i.e., r^max is 10% higher than the minimal expected return r^min). All the experiments are repeated 20 times and the results are given in Table 1. For all three examples, our proposed sequential methods outperform the standard BO approach, finding a lower CVaR objective value whilst meeting the feasibility condition. In addition, the two-stage approach produces better results than the one-stage approach. The same is true for the batch methods, where the two-stage method outperforms the one-stage method. The batch methods obtain better results than the standard sequential BO method but perform worse than the best sequential implementations. This is as expected, due to the GP only being updated after a full batch of samples has been identified, in contrast to the GP being updated after each new sample is proposed - as in the sequential approach. Using a parallel implementation, the batch method is significantly faster than the sequential approach. To further illustrate the results, we plot the best solution's objective value after each iteration in Figure 2. Consistently, the best solution of 2S-ACW-EI decreases faster than those of the other two sequential methods. The two-stage batch method performs better than the standard implementation in all test cases.

Table 1: Average of the best objective and constraint values across repeated experiments: in each case, the best result among the methods is shown in bold. The standard deviations are given in parentheses. The first three columns are sequential BO methods; the last two are batch BO methods.

| | CW-EI | ACW-EI | 2S-ACW-EI | KB-ACW-EI | 2S-KB-ACW-EI |
|---|---|---|---|---|---|
| 1a CVaR (SD) | 0.202 (0.013) | 0.199 (0.013) | 0.184 (0.012) | 0.199 (0.012) | 0.191 (0.012) |
| 1a Ex Return (SD) | 1.473 (0.012) | 1.485 (0.012) | 1.473 (0.012) | 1.479 (0.012) | 1.478 (0.012) |
| 1b CVaR (SD) | 0.266 (0.012) | 0.253 (0.012) | 0.247 (0.012) | 0.263 (0.014) | 0.249 (0.013) |
| 1b Ex Return (SD) | 1.581 (0.012) | 1.577 (0.012) | 1.561 (0.012) | 1.580 (0.012) | 1.567 (0.012) |
| 2a CVaR (SD) | 0.317 (0.013) | 0.291 (0.015) | 0.275 (0.014) | 0.302 (0.013) | 0.287 (0.013) |
| 2a Ex Return (SD) | 5.335 (0.013) | 5.320 (0.012) | 5.302 (0.013) | 5.341 (0.012) | 5.322 (0.012) |
| 2b CVaR (SD) | 0.336 (0.014) | 0.320 (0.014) | 0.303 (0.013) | 0.322 (0.013) | 0.308 (0.013) |
| 2b Ex Return (SD) | 5.427 (0.013) | 5.428 (0.012) | 5.417 (0.013) | 5.433 (0.013) | 5.420 (0.012) |
| 3a CVaR (SD) | −0.094 (0.012) | −0.122 (0.014) | −0.132 (0.013) | −0.102 (0.012) | −0.131 (0.014) |
| 3a Ex Return (SD) | 3.105 (0.013) | 3.030 (0.013) | 2.938 (0.012) | 3.082 (0.013) | 2.97 (0.013) |
| 3b CVaR (SD) | −0.075 (0.013) | −0.083 (0.013) | −0.094 (0.014) | −0.064 (0.012) | −0.075 (0.013) |
| 3b Ex Return (SD) | 3.113 (0.013) | 3.075 (0.013) | 3.056 (0.012) | 3.125 (0.012) | 3.089 (0.013) |

Table 2: The same results as those in Table 1, but obtained with r^max = 105% r^min.

| | CW-EI | ACW-EI | 2S-ACW-EI | KB-ACW-EI | 2S-KB-ACW-EI |
|---|---|---|---|---|---|
| 1a CVaR (SD) | 0.202 (0.013) | 0.198 (0.011) | 0.188 (0.014) | 0.201 (0.014) | 0.194 (0.011) |
| 1a Ex Return (SD) | 1.473 (0.012) | 1.473 (0.018) | 1.471 (0.013) | 1.477 (0.016) | 1.474 (0.017) |
| 2a CVaR (SD) | 0.317 (0.013) | 0.299 (0.014) | 0.281 (0.014) | 0.308 (0.011) | 0.293 (0.017) |
| 2a Ex Return (SD) | 5.335 (0.013) | 5.324 (0.013) | 5.317 (0.016) | 5.331 (0.012) | 5.323 (0.013) |
| 3a CVaR (SD) | −0.094 (0.012) | −0.115 (0.016) | −0.125 (0.013) | −0.112 (0.014) | −0.128 (0.015) |
| 3a Ex Return (SD) | 3.105 (0.013) | 3.103 (0.014) | 3.083 (0.018) | 3.102 (0.018) | 3.061 (0.013) |

Finally, we note that a key parameter in the proposed algorithm is r^max. Our numerical experiments found that setting r^max = 110% r^min generally works well. To test more rigorously how sensitive our proposed BO algorithm is to this parameter, we provide further numerical results obtained with r^max = 105% r^min in Table 2. These results are quantitatively similar to those in Table 1 and thus show that our proposed algorithms are not highly sensitive to the choice of r^max.

## 6 Conclusion

In summary, we consider the optimal portfolio allocation problem, which aims to minimize a computationally demanding risk measure subject to a minimum expected return constraint. We propose four new BO algorithms specifically designed for such problems that significantly reduce the number of evaluations of the expensive objective function.
Furthermore, the proposed methods take advantage of the special properties of portfolio optimization problems through a new acquisition function, a two-stage point selection process, and a batch implementation that exploits parallel computing. We expect that the proposed methods can be useful in problems arising from various fields; in particular, we plan to explore their application to portfolio allocation problems in reinforcement learning (Ghosh et al., 2022) in the future.

Several issues with the proposed method should be addressed in the future. First of all, the proposed method may not find the optimal solution for problems whose solutions do not lie on the boundary of the expected return constraint. In certain important real-world applications, such as automated investing, (financially) critical decisions are made based on the solutions of the optimal allocation problems. In this case, returning a sub-optimal solution may have serious consequences, and as such it is highly desirable to impose mechanisms that can ensure the reliability and safety of the algorithms. For example, a heuristic strategy is to search around the obtained solution and see if a better one can be found. Due to its practical importance, this issue should be studied carefully in the future. Second, a relevant class of methods is contextual BO (Krause and Ong, 2011; Letham and Bakshy, 2019; Char et al., 2019; Feng et al., 2020), which aims to optimize the objective function under different contexts. Contextual BOs typically utilize a multi-GP model to take advantage of the correlation between different contextual rewards. We here aim to optimize the single CVaR function, and as such the contextual BO methods may not directly apply. That said, it is possible that contextual BO can be extended to solve our problems, which is also a direction that we hope to explore.

Figure 2: The best objective value obtained after each iteration for the portfolio allocation problems, across the existing method (CW-EI BO) and the four newly proposed methods.

## References

Alexander, S., Coleman, T. F., and Li, Y. (2006). Minimizing CVaR and VaR for a portfolio of derivatives. *Journal of Banking & Finance*, 30(2):583–605.

Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. (1999). Coherent measures of risk. *Mathematical Finance*, 9(3):203–228.

Bali, T. G. and Cakici, N. (2004). Value at risk and expected stock returns. *Financial Analysts Journal*, 60(2):57–73.

Berkeley, J., Moss, H. B., Artemev, A., Pascual-Diaz, S., Granta, U., Stojic, H., Couckuyt, I., Qing, J., Loka, N., Paleyes, A., Ober, S. W., Goodall, A., Ghani, K., and Picheny, V. (2023). Trieste.

Brochu, E., Cora, V. M., and de Freitas, N. (2010). A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.

Cakmak, S., Astudillo Marban, R., Frazier, P., and Zhou, E. (2020). Bayesian optimization of risk measures. *Advances in Neural Information Processing Systems*, 33:20130–20141.

Char, I., Chung, Y., Neiswanger, W., Kandasamy, K., Nelson, A. O., Boyer, M., Kolemen, E., and Schneider, J. (2019). Offline contextual Bayesian optimization. *Advances in Neural Information Processing Systems*, 32.

Daulton, S., Cakmak, S., Balandat, M., Osborne, M. A., Zhou, E., and Bakshy, E. (2022a). Robust multi-objective Bayesian optimization under input noise. In *International Conference on Machine Learning*, pages 4831–4866. PMLR.
Daulton, S., Eriksson, D., Balandat, M., and Bakshy, E. (2022b). Multi-objective Bayesian optimization over high-dimensional search spaces. In *Uncertainty in Artificial Intelligence*, pages 507–517. PMLR.

El Ghaoui, L., Oks, M., and Oustry, F. (2003). Worst-case value-at-risk and robust portfolio optimization: A conic programming approach. *Operations Research*, 51(4):543–556.

Embrechts, P., Schied, A., and Wang, R. (2022). Robustness in the optimization of risk measures. *Operations Research*, 70(1):95–110.

Eriksson, D. and Poloczek, M. (2021). Scalable constrained Bayesian optimization. In *International Conference on Artificial Intelligence and Statistics*, pages 730–738. PMLR.

Feng, Q., Letham, B., Mao, H., and Bakshy, E. (2020). High-dimensional contextual policy search with unknown context rewards using Bayesian optimization. *Advances in Neural Information Processing Systems*, 33:22032–22044.

Fleischhacker, A., Lettner, G., Schwabeneder, D., and Auer, H. (2019). Portfolio optimization of energy communities to meet reductions in costs and emissions. *Energy*, 173:1092–1105.

Frazier, P. I. (2018). Bayesian optimization. In *Recent Advances in Optimization and Modeling of Contemporary Problems*, pages 255–278. INFORMS.

Fröhlich, L., Klenske, E., Vinogradska, J., Daniel, C., and Zeilinger, M. (2020). Noisy-input entropy search for efficient robust Bayesian optimization. In *International Conference on Artificial Intelligence and Statistics*, pages 2262–2272. PMLR.

Gaivoronski, A. A. and Pflug, G. (2005). Value-at-risk in portfolio optimization: properties and computational approach. *Journal of Risk*, 7(2):1–31.

Gardner, J. R., Kusner, M. J., Xu, Z. E., Weinberger, K. Q., and Cunningham, J. P. (2014). Bayesian optimization with inequality constraints. In *ICML*, volume 2014, pages 937–945.

Garnett, R. (2023). *Bayesian Optimization*. Cambridge University Press.

Gelbart, M. A., Snoek, J., and Adams, R. P. (2014). Bayesian optimization with unknown constraints. arXiv preprint arXiv:1403.5607.

Ghosh, S., Wynter, L., Lim, S. H., and Nguyen, D. T. (2022). Neural-progressive hedging: Enforcing constraints in reinforcement learning with stochastic programming. In *Uncertainty in Artificial Intelligence*, pages 707–717. PMLR.

Gramacy, R. B., Gray, G. A., Le Digabel, S., Lee, H. K., Ranjan, P., Wells, G., and Wild, S. M. (2016). Modeling an augmented Lagrangian for blackbox constrained optimization. *Technometrics*, 58(1):1–11.

Gramacy, R. B. and Lee, H. K. H. (2011). Optimization under unknown constraints. In *Bayesian Statistics 9*. Oxford University Press.

Hamdi, F., Ghorbel, A., Masmoudi, F., and Dupont, L. (2018). Optimization of a supply portfolio in the context of supply chain risk management: literature review. *Journal of Intelligent Manufacturing*, 29:763–788.

Iqbal, J., Azher, S., and Ijaz, A. (2010). Predictive ability of value-at-risk methods: evidence from the Karachi Stock Exchange-100 Index. Technical report, EERI Research Paper Series.

Kheybari, S., Ishizaka, A., and Salamirad, A. (2023). A new hybrid risk-averse best-worst method and portfolio optimization to select temporary hospital locations for COVID-19 patients. *Journal of the Operational Research Society*, 74(2):509–526.

Krause, A. and Ong, C. (2011). Contextual Gaussian process bandit optimization. *Advances in Neural Information Processing Systems*, 24.

Krokhmal, P., Palmquist, J., and Uryasev, S. (2002). Portfolio optimization with conditional value-at-risk objective and constraints. *Journal of Risk*, 4:43–68.
Kusakawa, S., Takeno, S., Inatsu, Y., Kutsukake, K., Iwazaki, S., Nakano, T., Ujihara, T., Karasuyama, M., and Takeuchi, I. (2022). Bayesian optimization for cascade-type multistage processes. *Neural Computation*, 34(12):2408–2431.

Lam, R. and Willcox, K. (2017). Lookahead Bayesian optimization with inequality constraints. *Advances in Neural Information Processing Systems*, 30.

Letham, B. and Bakshy, E. (2019). Bayesian optimization for policy search via online-offline experimentation. *Journal of Machine Learning Research*, 20(145):1–30.

Letham, B., Karrer, B., Ottoni, G., and Bakshy, E. (2019). Constrained Bayesian optimization with noisy experiments.

Mausser, H. and Rosen, D. (1999). Beyond VaR: from measuring risk to managing risk. In *Proceedings of the IEEE/IAFE 1999 Conference on Computational Intelligence for Financial Engineering (CIFEr)* (IEEE Cat. No. 99TH8408), pages 163–178. IEEE.

McKay, R. and Keefer, T. E. (1996). VaR is a dangerous technique. *Corporate Finance Searching for Systems Integration Supplement*, 9:30.

Močkus, J. (1975). On Bayesian methods for seeking the extremum. In *Optimization Techniques IFIP Technical Conference*, pages 400–404. Springer.

Nguyen, Q. P., Dai, Z., Low, B. K. H., and Jaillet, P. (2021a). Optimizing conditional value-at-risk of black-box functions. *Advances in Neural Information Processing Systems*, 34.

Nguyen, Q. P., Dai, Z., Low, B. K. H., and Jaillet, P. (2021b). Value-at-risk optimization with Gaussian processes. In *International Conference on Machine Learning*, pages 8063–8072. PMLR.

Pflug, G. C. (2000). Some remarks on the value-at-risk and the conditional value-at-risk. In *Probabilistic Constrained Optimization: Methodology and Applications*, pages 272–281.

Picheny, V., Moss, H., Torossian, L., and Durrande, N. (2022). Bayesian quantile and expectile optimisation. In *Uncertainty in Artificial Intelligence*, pages 1623–1633. PMLR.

Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. *Journal of Risk*, 2:21–42.

Swersky, K., Snoek, J., and Adams, R. P. (2013). Multi-task Bayesian optimization. *Advances in Neural Information Processing Systems*, 26.

Wilson, J., Hutter, F., and Deisenroth, M. (2018). Maximizing acquisition functions for Bayesian optimization. *Advances in Neural Information Processing Systems*, 31.

Zymler, S., Kuhn, D., and Rustem, B. (2013). Worst-case value at risk of nonlinear portfolios. *Management Science*, 59(1):172–188.
Review 1:

Summary: This paper proposes a new two-stage constrained Bayesian optimization method for the portfolio allocation problem. The first stage of the method filters out design points based on an acquisition function, and the second stage calculates the expected return, filters out points based on the expected return, and proceeds to evaluate the objective function. These procedures encourage more efficiency and fewer evaluations of expensive objectives.

Strengths and Weaknesses:

Strengths:
- Portfolio optimization is a very interesting application of BO and I think it has a lot of potential.
- I like that the authors have sections on background in Section 2/3. They are helpful for people to understand the literature. Though I think it'd be nice to make it clearer in terms of what is new and what is a literature review/background. I'm also slightly concerned that Section 2/3 occupy a lot of space of this paper. Some efforts can be made to make them more compact. See Requested Changes.

Weaknesses:
- Clarity can be improved. See Requested Changes. The paper became difficult to read without clear definitions of several critical terms, such as $r^{min}$, expected return, R, F etc.
- The method builds on constrained BO work in 2014, but there has been new advancement on constrained BO. It is not entirely clear why the authors chose the work in 2014 to build upon.
- The adaptation of constrained BO in this work is somewhat incremental from a machine learning perspective. This work might be more suited for the finance / operations research community.
- It is not very clear to me how expensive the objective evaluations really are for the application the authors studied. The example in Eq 12 seems like a linear function over numerical values that can be easily computed.
- Baselines are limited. It might be good to add other constrained BO methods, like those mentioned in Section 1.
- Overall, it is not very clear how useful the BO part of this method is for portfolio allocation since BO seems to be the more expensive part, with all the GP updates etc.

Requested Changes:
1. Please clarify whether Z in section 2 is a random vector or a probability distribution. Both were mentioned. The authors seem to be mixing those two concepts. Similar for f(w, Z).
2. Bottom equation in page 2:
- What is vf? This seems to be a typo.
- What is the expectation over? It seems to me the right hand side should have lower case z, and the expectation is over $z\sim Z$, but the left hand side also has Z, so I'm fairly confused. This is also related to question 1.
3. Please add the definition of expected returns. Section 2.2 extensively talks about the relation between CVaR and expected returns, but it is unclear what the expected return is, and how it relates to all the symbols defined so far.
4. Explain what F and R are in Equation 1.
5. Explain $\tilde g$ in Algorithm 1. Typically in BO, the observations are g(x) with noise (as the authors mentioned). So, one cannot directly evaluate g(x), which is what the author wrote in the algorithm.
6. Typo "Eq. equation 6" in page 4 under Eq 6.
7. In Section 2.1, it'd be nice to know what "subadditivity, translation invariance, positive homogeneity, and monotonicity" mean and why VaR doesn't have those properties.
8. Clarify if $f$ at the bottom of page 4 is still the $f$ in page 1, and what $x$ (point to optimize) corresponds to for function $f$.
9.
To make the literature review more compact, Sections 2 and 3 can potentially be combined into the same section called literature review or something like that. The notations can be unified, and the authors can talk about Bayesian optimization / constrained BO in the context of notations in Section 2.
10. Clarify what $r^{min}$ exactly is, similarly for $r^{max}$. If $r^{min}$ is arbitrarily set, I don't see how the Theorem can hold. If it is set to be
11. It is unclear what the function to be optimized really is in this work. Is it f (in Eq 11)? Is it F (in Eq 1a)? Or is it something else? It'd be nice to clarify it early on, especially when introducing the $g$ function in the BO section.

Broader Impact Concerns: I think this paper can benefit from having a statement. It is very relevant to automating investing / finance, so there might be some broader impact that can be clarified, e.g., what if the method fails, the assumptions are not satisfied, safety of investment etc.

==================================================

Review 2:

Summary: This paper proposes a Bayesian optimization policy for allocating a financial portfolio. In order to minimize a portfolio risk measure with particular constraints, the authors propose a new two-stage Bayesian optimization approach with a new acquisition function. Finally, some experimental results are demonstrated to validate the proposed algorithm.

Strengths and Weaknesses:

Strengths
- This work uses Bayesian optimization in interesting real-world applications.
- It provides thorough analyses of the proposed method.

Weaknesses
- Contributions are not clear.
- Some baselines are missing.
- Relationships to contextual Bayesian optimization should be discussed.
- Writing can be improved more.

Requested Changes:
- Please consistently use a sentence case or title case for the title of this work. For example, the title should be "Optimal Portfolio Allocation Using Bayesian Optimization."
- In Section 1, `resouces` -> `resources`.
- Please use \citep and \citet correctly.
- In Section 1, `suply` -> `supply`.
- In Section 1, `artifical` -> `artificial`.
- The authors use `constraint function` frequently. I think it should be `constrained function`.
- I think this problem is closely related to contextual Bayesian optimization. It should be discussed and compared thoroughly.
- Why do the authors formulate Assumption 1 as a single assumption? I think the authors can split it into four assumptions.
- In Theorem 1, `Assumptions 1` should be `Assumption 1`.
- In Theorem 1, `problem 1` should be `problem (1)`.
- Why is Theorem 1 required? Is it really important?
- I think the form of the acquisition function proposed in this work is widely used in Bayesian optimization. The relevant research should be cited appropriately.
- As described earlier, some contextual Bayesian optimization algorithms should be compared in the experiments.
- Could you draw standard deviations or standard errors in Figure 2?

Broader Impact Concerns: I do not have any concerns on broader impacts.

==================================================

Review 3:

Summary: The paper proposes a Bayesian optimization method for portfolio allocation where the goal is to minimize risk as measured by CVaR under an expected performance constraint. The authors prove a theorem showing that under some technical conditions, an optimal allocation exists at the boundary of the feasible set.
By exploiting this fact, they propose a two-stage procedure where the risk measure is only actually estimated for points near the boundary. In addition, they propose an acquisition function to favor points near the boundary and consider a batch version of their algorithm to exploit parallel computation for the estimation of the risk measure.

Strengths and Weaknesses:

STRENGTHS
The contributions are clearly outlined and well positioned with respect to existing work. The algorithm seems overall reasonable. The experiments demonstrate clearly the benefit of the two-phase method.

WEAKNESSES
While it's nice to have this theorem that justifies the two-phase method, it's not clear if it really holds in the experiments that are run and, more generally, how to make it hold in practice (e.g., choice of \alpha). I believe that the authors should discuss those points more, along with the consequences for the algorithm when their theorem does not hold. The definition of the new acquisition function seems to be a bit heuristic, notably the first equation of (9). The experiments are run with very small \alpha's, which are much smaller than what is used in practice in finance, I believe. The paper should be prepared with more care. It contains quite a few typos and small issues with notations and phrasing.

Requested Changes:
I believe that the authors should provide more discussion about the following points:
- Do the technical conditions of the theorem hold in the experiments?
- Could the technical conditions be relaxed?
- Can the proposed two-phase algorithm be suboptimal when the theorem does not hold?
- What happens if the proposed algorithm is run when Problem 1 has no solution, which can happen in practice if r_min is chosen too high?
- Have you run your algorithm with larger \alpha (e.g., 0.01 or 0.05)?
- Does PF(w) as defined in (8) really satisfy the first equality of (9)?

Other issues:
- In my opinion, the title is too generic. I think that it should be more precise and better reflect the content of the paper (e.g., mention CVaR minimization under performance constraint).
- The authors often use "i.e." instead of "e.g."
- References that are not part of a sentence should be between parentheses.
- Alg. 1 doesn't return a global minimizer but returns a solution in its neighborhood with high probability.
- Second to last sentence in paragraph above 4.1: I believe this sentence is meaningful only if you provide the alpha you have chosen.
- Typos (just the first page): resouces; suply chain; intellegence; Ghaoui et al. -> El Ghaoui

Broader Impact Concerns: I believe this is not applicable for this work.

==================================================

Metareview:

Recommendation: Reject

Comment: In addition to the technical questions/issues, the reviewers were concerned with the overall quality of the submission, with quite a few typos and issues with notations and phrasing. Although the authors corrected some of them during the rebuttal, the manuscript should be prepared with more care and all these issues need to be fixed before submission. In addition to the presentation issues, the reviewers still have several technical concerns. I listed some under "claims and evidence". Others include: 1) The method builds on a relatively old (2014) constrained BO work, while there has been new advancement on constrained BO. The reasoning behind this choice is not quite clear.
3) The cost of BO needs to be discussed in more details to justify its usefulness for this particular application. Overall, none of the reviewers thinks the paper is ready for publication in its current form. ==================================================
# Efficient And Robust Quantization-Aware Training Via Adaptive Coreset Selection

Xijie Huang¹, Zechun Liu², Shih-yang Liu¹, Tim Kwang-Ting CHENG¹

¹Hong Kong University of Science and Technology (HKUST), ²Meta Reality Lab

xhuangbs@connect.ust.hk, zechunliu@meta.com, sliuau@connect.ust.hk, timcheng@ust.hk

Reviewed on OpenReview: *https://openreview.net/forum?id=4c2pZzG94y*

## Abstract

Quantization-aware training (QAT) is a representative model compression method to reduce redundancy in weights and activations. However, most existing QAT methods require end-to-end training on the entire dataset, which suffers from long training times and high energy costs. In addition, the potential label noise in the training data undermines the robustness of QAT. In this work, we show that we can improve data efficiency and robustness with a proper data selection strategy designed specifically for QAT. We propose two metrics based on an analysis of the loss and gradient of quantized weights, the error vector score and the disagreement score, to quantify the importance of each sample during training. Guided by these two metrics, we propose a quantization-aware Adaptive Coreset Selection (ACS) method to select the data for the current training epoch. We evaluate our method on various networks (ResNet-18, MobileNetV2, RetinaNet), datasets (CIFAR-10, CIFAR-100, ImageNet-1K, COCO), and under different quantization settings. Specifically, our method can achieve an accuracy of 68.39% for 4-bit quantized ResNet-18 on the ImageNet-1K dataset with only a 10% subset, which is an absolute gain of 4.24% compared to the baseline. Our method can also improve the robustness of QAT by removing noisy samples in the training set.

## 1 Introduction

Figure 1: **Left:** Data scaling curve for 4-bit quantized ResNet-18 on the ImageNet-1K dataset. Our ACS significantly reduces test error using the same training data fraction compared to baselines. **Middle:** Accuracy of 2/32-bit quantized ResNet-18 trained on CIFAR-10 with 10% random label noise. Our ACS is the only method to outperform the full-data training performance by effectively removing noisy samples. **Right:** Test error for quantized ResNet-18 with different data fractions and the same GPU training hours. Under the same training budget, QAT with a smaller coreset selected by ACS outperforms full-data training.

Deep learning models have achieved remarkable success across various applications, including computer vision (Krizhevsky et al., 2012; He et al., 2016; Tan & Le, 2019; Kirillov et al., 2023) and natural language processing (Kenton & Toutanova, 2019; Yang et al., 2019; Conneau & Lample, 2019; Touvron et al., 2023).
While neural scaling law guides the expanding usage of large models, the large model size and training set scale have become the most significant challenges for the training and deployment of deep learning models, especially on edge devices with computation and storage constraints. Many model compression methods have been proposed recently to address these challenges and enable the effective deployment of deep learning models. These model compression techniques include quantization (Zhou et al., 2016; Choi et al., 2018; Esser et al., 2020; Bhalgat et al., 2020), pruning (Liu et al., 2017; 2018; Molchanov et al., 2019; Liu et al., 2019), knowledge distillation (Hinton et al., 2015; Park et al., 2019; Shen & Xing, 2021), and compact network design (Howard et al., 2017; Pham et al., 2018). Among the aforementioned methods, quantization methods have been the most widely adopted because they have the advantage of promising hardware affinity across different architectures (Judd et al., 2016; Jouppi et al., 2017; Sharma et al., 2018). To minimize the performance gap between the quantized and full-precision models, quantization-aware training (QAT) is often utilized. Although QAT has improved the inference efficiency of the target model, it is computation-intensive and requires more time than full-precision training. While previous QAT methods assume an ideal case in which computation resources are unlimited, we aim to research a more practical case when the cost of QAT should be considered. Coreset selection techniques aim to mitigate the high training cost and potential negative influence of label noise to improve data efficiency and robustness for full-precision training. Specifically, coreset selection methods leverage the redundancy in training datasets and select the most informative data to build a coreset for training. One recent research Sorscher et al. (2022) has pointed out that coreset can be a solution to beat the scaling law.Previous methods select data based on feature (Agarwal et al., 2020; Sener & Savarese, 2018), error (Paul et al., 2021), gradient (Killamsetty et al., 2021a), and decision boundary (Margatina et al., 2021). While these methods can achieve notable efficiency improvement for full-precision training, their effectiveness on low-precision QAT has not been explored before. To utilize coreset selection methods to improve the data efficiency and robustness of QAT, the characteristics of quantized weights must be considered in the design of methods. Existing full-precision methods are not designed for quantization, and severe computation overhead during selection may hinder the application of QAT. Therefore, a coreset selection method specifically designed for QAT is required. In this work, we propose the coreset selection specifically for QAT and find that we can elevate the data efficiency and training robustness. We start by analyzing the impact of removing one specific sample from the training set during QAT and identifying that the error vector score (EVS) is a good theoretical approximation of the importance of each sample. Based on the common practice of utilizing knowledge distillation during QAT, we also propose the disagreement score (DS) measuring the prediction gap between the quantized model and full-precision models. Based on these metrics, we propose a fully quantization-aware adaptive coreset selection (ACS) method to select training samples that fit the current training objective. An overview of the proposed method is shown in Fig. 2. 
We demonstrate the superiority of our ACS in terms of effectiveness and efficiency on different networks (ResNet-18, MobileNet-V2, RetinaNet), datasets (CIFAR-10, CIFAR-100, ImageNet-1K, COCO), and quantization settings. For 2-bit weight-only quantization of MobileNetV2 on the CIFAR-100 dataset, QAT based on our ACS can achieve a mean accuracy of 67.19% with only 50% of the training data used for training per epoch. For 4-bit quantization of ResNet-18 on the ImageNet-1K dataset, our ACS can achieve a top-1 accuracy of 68.39%, compared to the 64.15% of random selection, when only 10% of the training data is selected for the training of every epoch. In summary, our contributions can be summarized as follows:

- We are **the first** to investigate the data efficiency and robustness of quantization-aware training and empirically observe that the importance of different data samples varies during the training process.
- We propose two metrics, the error vector score and the disagreement score, to quantify the importance of each sample based on a theoretical analysis of the loss gradient.
- We propose a **quantization-aware** Adaptive Coreset Selection (ACS) method, which adaptively selects informative samples that fit the current training epoch and prunes redundant or noisy samples.
- We verify ACS on different network architectures, datasets, and quantization settings. Compared with previous methods, ACS can significantly improve both the data efficiency and robustness of quantization-aware training, as shown in Fig. 1.

## 2 Related Work

**Quantization** Quantization methods are powerful tools for improving the efficiency of model inference. The core insight is replacing full-precision weights and activations with lower-precision representations. Quantization methods can be classified into quantization-aware training (QAT) (Zhou et al., 2016; Esser et al., 2020; Bhalgat et al., 2020; Huang et al., 2022; Liu et al., 2023c) and post-training quantization (PTQ) (Nagel et al., 2020; Fang et al., 2020; Wang et al., 2020; Xiao et al., 2023; Liu et al., 2023b; Lee et al., 2023), based on whether a model is retrained with quantized weights and activations or a pre-trained model is directly quantized without extensive training. Based on the characteristics of the quantization intervals, these methods can be categorized into uniform and non-uniform quantization. While uniform quantization (Zhou et al., 2016; Choi et al., 2018; Esser et al., 2020) with uniform intervals is more hardware-friendly and efficient, non-uniform quantization (Miyashita et al., 2016; Zhang et al., 2018; Li et al., 2019b; Yvinec et al., 2023), due to its flexibility of representation, can minimize the quantization error and achieve better performance than uniform schemes. While previous QAT methods assume an ideal case of unlimited computation resources, in this work we study a more practical case of conducting QAT with a fixed computation budget.

**Learning from Noisy Labels** *Noisy labels* are defined as unreliable labels that are corrupted from the ground truth. The noise can stem from non-expert labeling or malicious label-flipping attacks (Xiao et al., 2012). The estimated ratio of noisy labels in real-world datasets ranges from 8.0% to 38.5% according to Song et al. (2022). To improve the robustness of learning with noisy labels, solutions include designing robust architectures that are attuned to label noise (Yao et al., 2018; Lee et al., 2019), label correction (Tu et al., 2023; Li et al., 2023; 2024), robust regularization (Wei et al., 2021), and robust loss design (Huang et al., 2023). One important category that enables robust noise-tolerant training is sample selection (Paul et al., 2021; Xia et al., 2023a; Park et al., 2024). While the effectiveness of sample selection methods has been verified for full-precision deep learning, their performance on QAT is under-explored.
To improve the robustness of learning for noisy labels, the solutions include designing robust architectures that are attuned to label noise (Yao et al., 2018; Lee et al., 2019), label correction (Tu et al., 2023; Li et al., 2023; 2024), robust regularization (Wei et al., 2021), and robust loss design (Huang et al., 2023). One important category that enables robust noise-tolerant training is sample selection (Paul et al., 2021; Xia et al., 2023a; Park et al., 2024). While the effectiveness of sample selection methods has been verified on full-precision deep learning, their performance on QAT is under-explored. Coreset Selection Coreset selection targets improving the data efficiency by identifying the informative training samples. Previous works can be mainly classified into the following categories: **Geometry-based** Methods: These methods assume that samples close to each other in the feature space similarly influence the training. These redundant data points should be removed to improve efficiency. Some representative works include Contextual diversity (CD) (Agarwal et al., 2020), k-Center-Greedy (Sener & Savarese, 2018), Sorscher et al. (2022), Moderate Coreset (Xia et al., 2023b), and Herding (Welling, 2009). **Decision** Boundary-based Methods: These methods select samples distributed near the decision boundary. These samples are difficult for the model to separate. Representative works include Adversarial Deepfool (Ducoffe & Precioso, 2018) and Contrastive Active Learning (CAL) (Margatina et al., 2021). Gradient/Error-based Methods: These methods assume the samples are more informative if they contribute more to the error or loss during the training. This category includes EL2N (Paul et al., 2021), CRAIG (Mirzasoleiman et al., 2020), GradMatch (Killamsetty et al., 2021a), Forgetting events (Toneva et al., 2019), and AdaCore (Pooladzandi et al., 2022). **Optimization-based Methods:** These methods formulate the coreset selection as a bilevel optimization problem. The outer level objective is the selection of samples and the inner level objective is the optimization of model parameters. The representative works include Borsos et al. (2020), Glister (Killamsetty et al., 2021b), Zhou et al. (2022), and Retrieve (Killamsetty et al., 2021c). These methods perform well in some scenarios of full-precision training(specific subset fraction, specific outlier distribution, etc.). However, none are verified on quantization settings or consider the requirements for QAT. As shown in Fig. 1, the majority of these methods cannot even outperform random sampling in the QAT settings. Moreover, these methods either require early training of the target model for several epochs (Toneva et al., 2019; Paul et al., 2021) or time-consuming search (Sener & Savarese, 2018) and optimization (Killamsetty et al., 2021b;c), which leads to heavy computation overhead during QAT. ## 3 Importance Of Each Sample In Qat One underlying assumption of data scaling law is that all samples in the training set are equally important. However, many previous coreset research Sorscher et al. (2022) have proved that information within different samples is different for full-precision training, and the quantization setting has not been verified yet. In this section, we will first introduce quantization-aware training (QAT) and derive the gradient under cross-entropy and SGD in Sec. 3.1. 
We then analyze the change of the gradient when a specific sample is removed from a training batch to investigate the importance of each training sample in Sec. 3.2. We propose an approximation of this gradient change based on the prediction error that introduces no memory overhead. We further prove that less training data is required when knowledge distillation is applied to QAT in Sec. 3.3. Another metric quantifying sample importance based on the prediction gap between the quantized student and the full-precision teacher model is also introduced in Sec. 3.3.

## 3.1 Preliminaries Of QAT

During QAT, the real-value data $v^r$ is converted to a $b$-bit quantized representation $v^q = q_b(v^r)$ by the quantizer $q_b$. Given the scale factor $s$ of the quantizer, the number of positive quantization levels $Q_P$, and the number of negative quantization levels $Q_N$, the quantizer $q_b$ is

$$v^{q}=q_{b}(v^{r})=s\times\lfloor\operatorname{clip}(v^{r}/s,-Q_{N},Q_{P})\rceil,\tag{1}$$

where $\lfloor\cdot\rceil$ is the rounding function that rounds the input to the nearest integer, and $\operatorname{clip}(v, r_{\text{low}}, r_{\text{high}})$ returns $v$ with all values below $r_{\text{low}}$ set to $r_{\text{low}}$ and all values above $r_{\text{high}}$ set to $r_{\text{high}}$. For unsigned quantization, $Q_N = 0$ and $Q_P = 2^b - 1$, while for the quantization of signed data, $Q_N = 2^{b-1}$ and $Q_P = 2^{b-1} - 1$. To solve the problem that the gradient cannot back-propagate through Equation 1 during QAT, the straight-through estimator (STE) (Bengio et al., 2013) is utilized to approximate the gradient. In the back-propagation of QAT with STE, the gradient of the loss $\mathcal{L}$ with respect to the real-value data $v^r$ is set to be

$$\frac{\partial{\mathcal{L}}}{\partial v^{r}}=\frac{\partial{\mathcal{L}}}{\partial v^{q}}\cdot\mathbf{1}_{-Q_{N}\leq v^{r}/s\leq Q_{P}},\tag{2}$$

where $\mathbf{1}$ is the indicator function that outputs 1 within the quantization limits and 0 otherwise. Note that as we apply clipping at the beginning of QAT, the gradient of the quantized data can be seen as approximately equal to the gradient of the real-value data.

The training set of QAT is denoted as $\mathcal{T} = \{(x_i, y_i)\}_{i=1}^{N}$, where the inputs $x_i \in \mathbb{R}^d$ are vectors of length $d$ and $y \in \{0, 1\}^M$ are one-hot vectors encoding labels. The neural network to be trained is denoted as $f(w^r, x)$, where $w^r$ are the real-value weights. We use the cross-entropy loss $\mathcal{L}(\hat{p}, y) = -\sum_{m=1}^{M} y^{(m)}\log p^{(m)}$ in QAT, and $p(w^q, x)$ is a probability vector denoting the output of the quantized neural network $f(w^q, x)$. Stochastic gradient descent (SGD) is used for the optimization. Suppose the real-value weights at iteration $t$ are $w^r_t$ and the batch of input samples is $\mathcal{T}_{t-1} \subseteq \mathcal{T}$; the weights are updated following

$$w_{t}^{r}=w_{t-1}^{r}-\eta\sum_{(x,y)\in\mathcal{T}_{t-1}}g_{t-1}(x,y),\tag{3}$$

where $\eta$ denotes the learning rate and $g_{t-1}(x, y) = \nabla\mathcal{L}_{t-1}(p(w^r_{t-1}, x), y)$ is the gradient of the cross-entropy loss.
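To make the preliminaries concrete, below is a minimal PyTorch sketch of the quantizer in Eq. 1 with an STE backward pass in the spirit of Eq. 2. The function name and signature are ours for illustration and are not taken from the LSQ+ codebase.

```python
import torch

def quantize_ste(v_real: torch.Tensor, s: torch.Tensor, b: int,
                 signed: bool = True) -> torch.Tensor:
    """Quantizer q_b of Eq. 1; the backward pass follows the STE of Eq. 2."""
    if signed:
        q_n, q_p = 2 ** (b - 1), 2 ** (b - 1) - 1
    else:
        q_n, q_p = 0, 2 ** b - 1
    # clamp contributes the indicator 1_{-Q_N <= v/s <= Q_P} to the gradient
    v_scaled = torch.clamp(v_real / s, -q_n, q_p)
    # round() has zero gradient almost everywhere, so we detach the rounding
    # residual: the forward value is rounded, while gradients flow through
    # v_scaled as if d(v_q)/d(v_r) = 1 inside the clipping range
    v_rounded = v_scaled + (torch.round(v_scaled) - v_scaled).detach()
    return s * v_rounded
```

With signed $b = 4$, for example, entries of `v_real / s` outside $[-8, 7]$ receive zero gradient, matching the indicator in Eq. 2.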
## 3.2 Error Vector Score

To examine the importance of each sample $(x_i, y_i)$, we measure the difference between the expected change of the loss on the training set $\mathcal{T}$ and on another training set $\mathcal{T}' = \mathcal{T} \setminus \{(x_i, y_i)\}$ that only removes this specific sample. For simplicity, we approximate all the training dynamics in continuous time. Based on the chain rule and the STE of quantization, the change of the loss $\mathcal{L}$ at time $t$ on sample $(x, y)$ of batch $\mathcal{T}_t$ is

$$\left.\frac{d\mathcal{L}}{dt}\right|_{(x,y),\mathcal{T}_{t}}=g_{t}(x,y)\frac{dw_{t}^{q}}{dt}=g_{t}(x,y)\frac{dw_{t}^{r}}{dt}.\tag{4}$$

According to the discrete-time dynamics of the real-value weights in Eq. 3, we have $\frac{dw^q_t}{dt} \approx w^r_t - w^r_{t-1} = -\eta\sum_{(x,y)\in\mathcal{T}_{t-1}} g_{t-1}(x, y)$. To measure the contribution of a specific sample $(x_i, y_i)$, we measure the change of the loss with and without the sample. For a given data batch $\mathcal{T}$, if the sample $(x_i, y_i) \notin \mathcal{T}$, we can ignore the change in $\frac{d\mathcal{L}}{dt}\big|_{(x,y),\mathcal{T}_t}$. For any sample $(x_j, y_j) \in \mathcal{T}$, $j \neq i$, in the same batch, the importance $\mathcal{I}(x_i, y_i)$ is measured as

$$\mathcal{I}(x_{i},y_{i})=\left\|\left.\frac{d\mathcal{L}}{dt}\right|_{(x_{j},y_{j}),\,\mathcal{T}}-\left.\frac{d\mathcal{L}}{dt}\right|_{(x_{j},y_{j}),\,\mathcal{T}^{\prime}}\right\|.\tag{5}$$

According to the chain rule, we have

$$\left.\frac{d\mathcal{L}}{dt}\right|_{(x_{j},y_{j}),\mathcal{T}}=\frac{d\mathcal{L}(p(w_{t}^{q},x_{j}),y_{j})}{dw_{t}^{q}}\frac{dw_{t}^{q}}{dw_{t}^{r}}\frac{dw_{t}^{r}}{dt}.\tag{6}$$

According to the characteristics of the STE in Eq. 2, $\frac{dw^q_t}{dw^r_t} = 1$ holds for all inputs within the clipping range. Following the weight update rule in Eq. 3, the gradient is $\frac{dw^r_t}{dt} = -\eta\sum_{(x^*,y^*)\in\mathcal{T}} g_{t-1}(x^*, y^*)$. The only difference between the training sets $\mathcal{T}$ and $\mathcal{T}'$ is the existence of the sample $(x_i, y_i)$. Thus, we have

$$\mathcal{I}(x_{i},y_{i})=\left\|\frac{d\mathcal{L}}{dw_{t}^{q}}\left(\left.\frac{dw_{t}^{r}}{dt}\right|_{(x_{j},y_{j}),\mathcal{T}}-\left.\frac{dw_{t}^{r}}{dt}\right|_{(x_{j},y_{j}),\mathcal{T}^{\prime}}\right)\right\|=\eta\left\|\frac{d\mathcal{L}}{dw_{t}^{q}}\cdot g_{t-1}(x_{i},y_{i})\right\|.\tag{7}$$

We write $\frac{d\mathcal{L}}{dw^q_t}$ as shorthand for $\frac{d\mathcal{L}(p(w^q_t, x_j), y_j)}{dw^q_t}$, which depends only on the current training sample $(x_j, y_j)$ and not on the sample $(x_i, y_i)$ that is removed from batch $\mathcal{T}$. Since the learning rate $\eta$ and the gradient of the loss w.r.t. the quantized weights $\frac{d\mathcal{L}}{dw^q_t}$ can be seen as constant given $\mathcal{T}$, the importance of the sample $(x_i, y_i)$ in this batch $\mathcal{T}$ is only related to the gradient norm of the cross-entropy loss of this sample, $\|g_{t-1}(x_i, y_i)\|$. Examples with a larger expected gradient norm have more influence on the supervised training of the other data, which means they are important for QAT and should be included in the coreset.

We can select data with high importance by sorting by the gradient norm, which is also covered in previous work (Paul et al., 2021). However, storing the loss gradient for comparison requires extra memory and is hard to transfer between network architectures. We therefore approximate the gradient norm with the norm of the error vector, which is defined as follows.

Definition 1 (Error Vector Score) The error vector score of a training sample $(x, y)$ at iteration $t$ is defined to be $d_{\mathrm{EVS}} = \|p(w^q_t, x) - y\|_2$.

For any input $x \in \mathbb{R}^d$ and fixed weights, the gradient norm $\|g_t(x, y)\|$ is a non-random score. We take its expectation over the random minibatch sequence and random initialization to get the expected gradient norm $\mathbb{E}\|g_t(x, y)\|$, which can be expanded as

$$\mathbb{E}\left\|g_{t}(x,y)\right\|=\sum_{m=1}^{M}\frac{d\mathcal{L}_{t}(p(w_{t}^{q},x),y)^{T}}{df_{t}^{(m)}}\frac{df_{t}^{(m)}}{dw_{t}^{q}},\tag{8}$$

where $\frac{df^{(m)}_t}{dw^q_t}$ denotes the gradient of the $m$-th logit with respect to the weights. Since we use cross-entropy as the loss function, the gradient of the loss $\mathcal{L}$ on the $m$-th logit output $f^{(m)}_t$ follows $\frac{d\mathcal{L}_t(p(w^q_t, x), y)^T}{df^{(m)}_t} = p(w^q_t, x)^{(m)} - y^{(m)}$. Previous works (Fort et al., 2020; Fort & Ganguli, 2019) empirically observe that $\frac{df^{(m)}_t}{dw^q_t}$ is similar across different logits $m$ and training samples $(x, y)$. Thus, selecting samples based on the error vector score $d_{\mathrm{EVS}}$ is more likely to identify the samples with a larger gradient norm. Different from the previous method (Paul et al., 2021) that also leverages error metrics, no early training is required, and we only use the current quantized model prediction $p(w^q_t, x)$ during QAT.
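The error vector score needs only one forward pass of the quantized model and stores no gradients. A minimal sketch, assuming a standard classification model that returns logits (the helper name is ours):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def error_vector_scores(model_q, x, y, num_classes):
    """d_EVS = ||p(w_q, x) - y||_2 (Definition 1), one score per sample."""
    probs = F.softmax(model_q(x), dim=1)          # p(w_q, x)
    one_hot = F.one_hot(y, num_classes).float()   # hard labels y
    return torch.linalg.vector_norm(probs - one_hot, dim=1)
```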
## 3.3 Disagreement Score And Knowledge Distillation

Intrinsically, a quantized classification network should learn a mapping $f$ from an input sample $x$ to the output logits $f(w, x)$ that is close to that of a full-precision network, so the gap between the quantized prediction $p(w^q, x)$ of the student model and the real-value prediction $p_{\mathbf{T}}(w^r, x)$ of the teacher model $\mathbf{T}$ needs to be minimized. Based on this insight, knowledge distillation (KD) is widely used during QAT with a full-precision model as the teacher, as can be seen in previous works (Polino et al., 2018; Huang et al., 2022; Mishra & Marr, 2018; Liu et al., 2023a). The loss function is designed to enforce similarity between the outputs of the full-precision teacher and the quantized student model:

$$\mathcal{L}_{KD}=-\frac{1}{N}\sum_{m=1}^{M}\sum_{i=1}^{N}p_{\mathbf{T}}^{(m)}(w^{r},x_{i})\log(p^{(m)}(w^{q},x_{i})),\tag{9}$$

where the KD loss is defined as the cross-entropy between the output distribution $p_{\mathbf{T}}$ of the full-precision teacher and that of the quantized student on the same input $x_i$ but with different weight representations $w^r$ and $w^q$. Here $x_i$ is one of the input samples from the training set, and $M$ and $N$ denote the number of classes and the total number of training samples, respectively. Note that this process can be regarded as distribution calibration for the student network, and the one-hot label is not involved during QAT. Since the loss function used for knowledge distillation is still cross-entropy, and we still assume SGD is used for optimization, most conclusions in Sec. 3.2 still hold after replacing the one-hot ground-truth label $y$ with the full-precision teacher's prediction $p_{\mathbf{T}}(w^r_t, x)$. Thus, we propose the disagreement score as follows.

Definition 2 (Disagreement Score) The disagreement score of a training sample $(x, y)$ at iteration $t$ is defined to be $d_{\mathrm{DS}} = \|p(w^q_t, x) - p_{\mathbf{T}}(w^r_t, x)\|_2$.

The core difference between the error vector score $d_{\mathrm{EVS}}$ and the disagreement score $d_{\mathrm{DS}}$ is the target label: while $d_{\mathrm{EVS}}$ uses one-hot hard labels, $d_{\mathrm{DS}}$ uses the distilled soft labels. We empirically notice that less data is needed for training when knowledge distillation is applied, which is helpful for our selection with a small data fraction.
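A matching sketch for the KD loss of Eq. 9 and the disagreement score of Definition 2; `model_q` and `teacher_fp` stand for the quantized student and the full-precision teacher, and are placeholder names rather than names from a released implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def disagreement_scores(model_q, teacher_fp, x):
    """d_DS = ||p(w_q, x) - p_T(w_r, x)||_2 (Definition 2)."""
    p_student = F.softmax(model_q(x), dim=1)      # quantized student
    p_teacher = F.softmax(teacher_fp(x), dim=1)   # full-precision teacher
    return torch.linalg.vector_norm(p_student - p_teacher, dim=1)

def kd_loss(student_logits, teacher_probs):
    """Cross-entropy between teacher and student distributions (Eq. 9)."""
    return -(teacher_probs * F.log_softmax(student_logits, dim=1)).sum(1).mean()
```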
We further demonstrate the advantage of soft labels in terms of training data requirements using Vapnik–Chervonenkis theory (Vapnik, 1999), which decomposes the classification error $R(f_s)$ of a classifier $f_s \in \mathcal{F}_s$ as

$$R(f_{s})-R(f)\leq O\left(\frac{|\mathcal{F}_{s}|_{\mathrm{C}}}{n^{\alpha_{s}}}\right)+\varepsilon_{s},\tag{10}$$

where $O(\cdot)$ denotes the asymptotic approximation and $\varepsilon_s$ is the approximation error of $\mathcal{F}_s$ with respect to $\mathcal{F}$. $f \in \mathcal{F}$ denotes the real target function, $|\cdot|_{\mathrm{C}}$ is the VC dimension of the function class measuring its capacity, and $n$ is the number of training samples. $\frac{1}{2} \leq \alpha_s \leq 1$ is an indicator measuring the difficulty of the problem. For non-separable and difficult problems, $\alpha_s = \frac{1}{2}$, which means the classifier learns at a slow rate of $O(n^{-\frac{1}{2}})$. For separable and easy problems, $\alpha_s = 1$, indicating that the classifier learns at a fast rate.

In our setting, if the quantized model $f_q \in \mathcal{F}_q$ directly learns from the hard labels, the difficulty of the problem is high and we assume $\alpha_q = \frac{1}{2}$, so we have

$$R(f_{q})-R(f)\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{\sqrt{n}}\right)+\varepsilon_{q},\tag{11}$$

where $\varepsilon_q$ is the approximation error of the quantized model. However, if we first train the full-precision teacher model $f_r \in \mathcal{F}_r$ and then utilize knowledge distillation to learn the representation from the teacher, the learning becomes easier; assuming $\alpha_r = 1$, we have

$$R(f_{r})-R(f)\leq O\left(\frac{|\mathcal{F}_{r}|_{\mathrm{C}}}{n}\right)+\varepsilon_{r},\quad R(f_{q})-R(f_{r})\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{qr},\tag{12}$$

where $\varepsilon_r$ and $\varepsilon_{qr}$ denote the approximation error of $\mathcal{F}_r$ with respect to $\mathcal{F}$ and the approximation error of $\mathcal{F}_q$ with respect to $\mathcal{F}_r$, respectively. Compared to directly learning the quantized model from the hard labels as in Eq. 11, knowledge distillation with the real-value teacher $f_r$ yields the classification error

$$R(f_{q})-R(f)\leq O\left(\frac{|\mathcal{F}_{r}|_{\mathrm{C}}}{n}\right)+\varepsilon_{r}+O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{qr}\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}+|\mathcal{F}_{r}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{r}+\varepsilon_{qr}.\tag{13}$$

Following previous studies on knowledge distillation (Lopez-Paz et al., 2015; Mirzadeh et al., 2020), the soft labels contain more information than the hard labels for each sample. Thus, we have $\varepsilon_r + \varepsilon_{qr} \leq \varepsilon_q$ and $O\left(\frac{|\mathcal{F}_q|_{\mathrm{C}}+|\mathcal{F}_r|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right) \leq O\left(\frac{|\mathcal{F}_q|_{\mathrm{C}}}{\sqrt{n}}\right)$. Combining these two inequalities, we have

$$O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}+|\mathcal{F}_{r}|_{\mathrm{C}}}{n^{\alpha_{qr}}}\right)+\varepsilon_{r}+\varepsilon_{qr}\leq O\left(\frac{|\mathcal{F}_{q}|_{\mathrm{C}}}{\sqrt{n}}\right)+\varepsilon_{q},\tag{14}$$

which means that when the number of training samples $n$ is the same, the upper bound on the classification error based on the soft labels is lower. Conversely, to achieve the same upper bound on the classification error $R(f_q) - R(f)$ with the two techniques, learning from soft labels requires less data. This is the core reason why we use knowledge distillation and the disagreement score $d_{\mathrm{DS}}$ to select the coreset.

## 4 Adaptive Coreset Selection For QAT

In Sec. 3, we propose to use $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$ to select the coreset for QAT. While $d_{\mathrm{DS}}$ helps select those samples that produce large performance gaps between the quantized and full-precision models, $d_{\mathrm{EVS}}$ targets the error of the quantized prediction directly. These two metrics cover different characteristics of the training data, and we need both to improve the diversity of our coreset. For different stages of QAT, different metrics should be considered when selecting samples that fit the current training objective. Previous research (Kim et al., 2019) has shown that QAT should start with hard labels to provide a better initialization for the quantized model and then use soft labels to guide it to better local minima. In light of this, we propose Adaptive Coreset Selection (ACS) for QAT to select the important samples considering the current training epoch $t$, $d_{\mathrm{EVS}}$, and $d_{\mathrm{DS}}$ adaptively. For the current training epoch $t$ and the total number of training epochs $E$, we propose a cosine annealing coefficient $\beta(t) = \cos\left(\frac{t}{2E}\pi\right)$ to consider the two metrics simultaneously and balance between them.
The final selection metric is a linear combination of $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$:

$$d_{\mathrm{ACS}}(t)=\beta(t)d_{\mathrm{EVS}}(t)+(1-\beta(t))d_{\mathrm{DS}}(t).\tag{15}$$

Since $\beta(0) = 1$ and $\beta(E) = 0$, the selection is mainly based on $d_{\mathrm{EVS}}$ in the early stage; when the quantized model converges, we focus more on $d_{\mathrm{DS}}$ in the later epochs. We perform coreset selection every $R$ epochs, where $R$ is determined before training. The pseudo-code for our ACS algorithm is shown in Alg. 1, and a condensed Python sketch of the selection step is given at the end of this section.

Algorithm 1 Adaptive Coreset Selection for QAT

Input: Training dataset $\mathcal{T} = \{(x_i, y_i)\}_{i=1}^{n}$, real-value network with weights $W^r$, coreset data fraction per epoch $S$, total training epochs $E$, selection interval $R$, initial coreset $\mathcal{T}_{\mathrm{ACS}}(t) = \emptyset$
Output: Quantized network with weights $W^q$
Initialize quantized weights $W^q$ from $W^r$ following Eq. 1
**for** $t \in [0, \ldots, E-1]$ **do**
    **if** $t \,\%\, R == 0$ **then**
        $\beta(t) = \cos(\frac{t}{2E}\pi)$
        **for** $(x_i, y_i) \in \mathcal{T}$ **do**
            $d_{\mathrm{EVS}}(x_i, t) = \|p(W^q_t, x_i) - y_i\|_2$
            $d_{\mathrm{DS}}(x_i, t) = \|p(W^q_t, x_i) - p_{\mathbf{T}}(W^r_t, x_i)\|_2$
            $d_{\mathrm{ACS}}(x_i, t) = \beta(t)d_{\mathrm{EVS}}(x_i, t) + (1-\beta(t))d_{\mathrm{DS}}(x_i, t)$
        **end for**
        Sort $d_{\mathrm{ACS}}(x_i, t)$ and select the top $S\%$ of samples to replace $\mathcal{T}_{\mathrm{ACS}}(t)$
    **else**
        $\mathcal{T}_{\mathrm{ACS}}(t) \leftarrow \mathcal{T}_{\mathrm{ACS}}(t-1)$
    **end if**
    Train $W^q$ on $\mathcal{T}_{\mathrm{ACS}}(t)$ following Eq. 9
**end for**

There are two major advantages of our quantization-aware ACS method. The first lies in the adaptation to the training phase when knowledge distillation is applied. As soft labels retain more information about the target than hard labels, we should encourage the quantized student model to learn sequentially, on hard labels first and on soft labels afterwards. This implicit learning hierarchy is observed in QKD (Kim et al., 2019), where it is named "self-learning" and "tutoring". With the proposed ACS fully aware of this hierarchy, the selected coreset helps stabilize the training and guarantees faster convergence. The second advantage is the diversity of the training data: more samples can be covered in the coresets of different epochs, and the coverage of the original full dataset contributes to the convergence of a more robust model. Note that only when an optimal data sequence and **high training sample diversity** are achieved simultaneously is the performance of QAT significantly better. We demonstrate in Appendix D that even when all data are covered but the order is random, the accuracy of our quantized model is negatively influenced.
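The selection step of Alg. 1 reduces to scoring every sample and keeping the top $S\%$. Below is a condensed sketch under the following assumptions: a data loader that also yields sample indices, and the score helpers sketched in Sec. 3; none of these names come from a released codebase.

```python
import math
import torch

@torch.no_grad()
def acs_select(model_q, teacher_fp, loader, t, total_epochs, fraction, num_classes):
    """One ACS selection step: compute d_ACS for every sample, keep the top S%."""
    beta = math.cos(t / (2 * total_epochs) * math.pi)   # cosine annealing
    all_scores, all_ids = [], []
    for ids, x, y in loader:                            # loader also yields sample ids
        d_evs = error_vector_scores(model_q, x, y, num_classes)
        d_ds = disagreement_scores(model_q, teacher_fp, x)
        all_scores.append(beta * d_evs + (1 - beta) * d_ds)   # Eq. 15
        all_ids.append(ids)
    scores, ids = torch.cat(all_scores), torch.cat(all_ids)
    k = max(1, int(fraction * len(scores)))
    top = torch.topk(scores, k).indices                 # largest d_ACS first
    return ids[top]                                     # indices of the new coreset
```

Only forward passes are involved, which is why the selection overhead stays small compared with gradient- or search-based baselines.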
## 5 Experiments

Datasets and networks The efficiency experiments are conducted on the CIFAR-100 (Krizhevsky et al., 2009) and ImageNet-1K (Deng et al., 2009) datasets. We evaluate MobileNetV2 (Howard et al., 2017) on CIFAR-100 and ResNet-18 (He et al., 2016) on ImageNet-1K. The *width multiplier* is set to 0.5 for MobileNetV2. We further provide experimental results for quantized RetinaNet (Lin et al., 2017) on the MS COCO object detection benchmark (Lin et al., 2014). The robustness experiments are conducted following the setting of Paul et al. (2021).

Baselines We choose multiple coreset selection methods from different categories as our baselines. The selected methods include Random Sampling, EL2N Score (Paul et al., 2021)¹, Forgetting (Toneva et al., 2019), Glister (Killamsetty et al., 2021b), kCenterGreedy (Sener & Savarese, 2018), Contextual Diversity (CD) (Agarwal et al., 2020), and Moderate Coreset (Xia et al., 2023b). For methods involving early training, we set the number of training epochs to 5. Note that comparing adaptive and non-adaptive methods is fair and is common practice in previous research (Pooladzandi et al., 2022; Killamsetty et al., 2021c).

¹Note that we only verify the EL2N score instead of the gradient norm (GraNd) score in Paul et al. (2021), based on the TMLR reproduction by Kirsch (2023), which observes that the inconsistency of GraNd at initialization results in sub-optimal performance even compared with random selection.

Training and selection details For MobileNetV2, we train the network for 200 epochs using a learning rate of 0.01, weight decay of 5e-4, batch size of 512, R = 20, and the SGD optimizer. For ResNet-18 on ImageNet-1K, we train the network for 120 epochs using a learning rate of 1.25e-3, no weight decay, batch size of 512, R = 10, and the Adam optimizer. We use the quantization method and full-precision models following LSQ+ (Bhalgat et al., 2020). All experiments were carried out on 2 NVIDIA RTX 3090 GPUs. As we notice that the results on the ImageNet-1K dataset do not vary significantly across different runs, each ImageNet-1K setting is run once; each CIFAR-100 setting is repeated 5 times. For a fair comparison, we use knowledge distillation with the corresponding full-precision model as the teacher in all experiments, regardless of the method and dataset fraction.

Table 1: Comparison of applying different coreset selection methods to 2/32-bit quantized MobileNetV2 on the CIFAR-100 dataset with various subset fractions. When full data are selected (S = 100%), the mean accuracy and standard deviation are 68.1±0.9%. The first and second best accuracies are shown in **bold** and underlined, and the performance improvement over the previous SOTA is marked in blue.

| Method/Fraction | S = 10% | S = 20% | S = 30% | S = 40% | S = 50% |
|----------------------------------------|-----------------|-----------------|-----------------|-----------------|----------------|
| Random | 62.3±0.9 | 64.1±0.7 | 65.2±0.3 | 65.6±0.6 | 66.2±0.9 |
| EL2N Score (Paul et al., 2021) | 63.0±0.6 | 64.0±1.6 | 65.6±0.8 | 64.6±0.8 | 65.3±0.4 |
| Forgetting (Toneva et al., 2019) | 60.7±0.3 | 63.1±0.6 | 65.0±0.8 | 65.3±0.8 | 66.0±1.2 |
| Glister (Killamsetty et al., 2021b) | 56.5±1.2 | 60.4±1.0 | 61.3±0.6 | 63.0±1.0 | 64.8±0.9 |
| kCenterGreedy (Sener & Savarese, 2018) | 60.2±0.8 | 62.7±0.5 | 63.8±0.7 | 64.8±0.6 | 66.3±1.1 |
| CD (Agarwal et al., 2020) | 60.3±0.8 | 62.6±0.9 | 64.0±1.0 | 64.8±0.4 | 65.3±0.4 |
| Moderate (Xia et al., 2023b) | 57.9±0.3 | 60.4±0.7 | 62.8±0.6 | 63.6±0.8 | 64.5±1.6 |
| Ours | 63.7±0.8 (↑0.7) | 65.9±0.7 (↑1.8) | 66.4±0.8 (↑0.8) | 66.9±0.5 (↑1.3) | 67.2±0.5 (↑0.9) |

| Method/Fraction (%) | S = 10% | S = 30% | S = 50% | S = 60% | S = 70% | S = 80% |
|----------------------------------------|---------------|---------------|---------------|---------------|---------------|---------------|
| Random | 64.15 | 68.53 | 70.49 | 70.94 | 71.06 | 71.96 |
| EL2N Score (Paul et al., 2021) | 61.71 | 67.31 | 70.14 | 70.89 | 71.54 | 71.88 |
| Forgetting (Toneva et al., 2019) | 63.09 | 67.77 | 70.14 | 71.00 | 71.36 | 71.82 |
| Glister (Killamsetty et al., 2021b) | 63.25 | 68.94 | 70.92 | 71.39 | 71.93 | 72.22 |
| kCenterGreedy (Sener & Savarese, 2018) | 62.98 | 68.56 | 70.35 | 71.13 | 71.59 | 71.96 |
| CD (Agarwal et al., 2020) | 63.22 | 68.74 | 70.90 | 71.27 | 71.78 | 72.11 |
| Moderate (Xia et al., 2023b) | 62.39 | 68.06 | 70.43 | 70.62 | 71.56 | 71.99 |
| Ours | 68.39 (↑4.24) | 71.09 (↑2.15) | 71.59 (↑0.67) | 72.00 (↑0.61) | 72.19 (↑0.26) | 72.31 (↑0.09) |

Table 2: Comparison of different methods for 4/4-bit quantized ResNet-18 on ImageNet-1K. When full data are selected (S = 100%), the accuracy is 72.46%. The first and second best accuracies are shown in **bold** and underlined, and the performance improvement over the previous SOTA is marked in blue.
## 5.1 Benchmarking Previous Coreset Methods

The comparison of QAT Top-1 accuracy for MobileNetV2 on CIFAR-100 and ResNet-18 on ImageNet-1K is shown in Tab. 1 and Tab. 2. We note that most previous methods cannot exceed random selection in our QAT setting, and the few that do surpass the baseline do so only at specific data fractions. This trend is also discussed for full-precision training by Kirsch (2023). For example, kCenterGreedy (Sener & Savarese, 2018) shows satisfying performance when the subset size is large (50% for CIFAR-100, 70%/80% for ImageNet-1K) but fails to demonstrate effectiveness at small coreset sizes. Our method outperforms state-of-the-art methods at all subset fractions S by a large margin. Specifically, the ResNet-18 accuracy with a 10% subset of ImageNet-1K using our method is 68.39%, an absolute gain of 4.24% over the baseline method.

Efficiency analysis The detailed QAT training time of ResNet-18 on ImageNet-1K coresets with different methods, measured on 2 NVIDIA RTX 3090 GPUs, is listed in Tab. 3. When full data are selected (S = 100%), the training time is 62.3h. The only method with efficiency comparable to ours is Moderate Coreset (Xia et al., 2023b), which only needs to perform forward passes on samples and can eliminate optimization or greedy searching. The results prove that our method can effectively reduce training time without the selection algorithm incurring computation overhead.

Figure 3: Visualization of the loss landscape (Li et al., 2018) of 2-bit quantized MobileNetV2 trained on the CIFAR-100 (a) full dataset, (b) 10% random subset, and (c) 10% ACS coreset.

Table 3: Training time (hours) of 4/4-bit quantized ResNet-18 on ImageNet-1K with various subset fractions.

| Method/Fraction (%) | 10% | 30% | 50% | 60% | 70% | 80% |
|-----------------------|-------|-------|-------|-------|-------|-------|
| EL2N Score | 12.7 | 23.8 | 39.1 | 44.1 | 49.9 | 56.0 |
| Forgetting | 12.7 | 23.9 | 39.5 | 44.7 | 50.2 | 56.6 |
| Glister | 13.8 | 23.0 | 39.9 | 46.6 | 58.0 | 66.8 |
| kCenterGreedy | 13.0 | 25.2 | 38.4 | 43.8 | 50.5 | 56.9 |
| CD | 11.9 | 24.5 | 37.1 | 42.5 | 48.9 | 55.0 |
| Moderate | 11.3 | 22.3 | 36.2 | 42.5 | 48.2 | 54.1 |
| Ours (R=10) | 11.2 | 20.7 | 36.1 | 42.2 | 48.0 | 53.7 |

Effect of adaptive epochs The choice of the selection interval R is vital in our algorithm: too large an R fails to help adaptation and improve data diversity, while too small an R introduces too much computation overhead and undermines efficiency. We apply a grid search on R and empirically find that R = 10 is optimal for our ImageNet-1K training. The accuracy and training time results are shown in Tab. 4. We observe that R = 10 achieves performance similar to R = 5 with a shorter training time. As no back-propagation of the gradient is involved in the computation of $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$, the computation overhead is acceptable under most settings with R > 5 compared with previous methods involving training.

Table 4: Analysis on adaptive epochs R.
| R / Fraction | S = 10% (Acc. / Time) | S = 30% (Acc. / Time) | S = 50% (Acc. / Time) | S = 70% (Acc. / Time) |
|---------|------------------|------------------|------------------|------------------|
| R = 5 | 68.80 / 12.7h | 71.05 / 21.5h | 71.47 / 37.0h | 72.15 / 49.1h |
| R = 10 | 68.39 / 11.3h | 71.09 / 20.7h | 71.59 / 36.1h | 72.19 / 48.0h |
| R = 20 | 67.58 / 11.0h | 70.35 / 20.2h | 71.42 / 35.5h | 71.97 / 47.6h |
| R = 40 | 66.10 / 10.7h | 69.86 / 19.8h | 71.25 / 34.9h | 72.00 / 47.2h |
| R = 60 | 64.96 / 10.5h | 69.27 / 19.5h | 71.05 / 34.4h | 71.93 / 46.9h |
| R > 120 | 62.82 / 10.3h | 67.62 / 19.2h | 69.98 / 33.9h | 71.37 / 46.6h |

| Strategy β(t) / Fraction | S = 10% | S = 30% | S = 50% | S = 70% |
|---------------|-------|-------|-------|-------|
| fixed | 67.95 | 70.83 | 71.21 | 72.01 |
| linear | 68.35 | 71.07 | 71.39 | 72.11 |
| sqrt | 68.15 | 71.03 | 71.44 | 72.15 |
| quadratic | 68.11 | 70.95 | 71.35 | 72.10 |
| $d_{\mathrm{EVS}}$ only | 68.06 | 70.63 | 71.41 | 71.96 |
| $d_{\mathrm{DS}}$ only | 67.07 | 70.24 | 71.43 | 72.08 |
| cosine | 68.39 | 71.09 | 71.59 | 72.19 |

Table 5: Analysis on strategy β(t).

Ablation study and effect of annealing strategy As we propose two metrics for coreset selection, $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$, it is important to analyze how to balance them and the contribution of each metric. We try the following annealing strategies and settings (a code sketch of these schedules follows below): (1) fixed ($\beta(t) = 0.5$); (2) linear ($\beta(t) = 1 - \frac{t}{E}$); (3) square root ($\beta(t) = 1 - \sqrt{\frac{t}{E}}$); (4) quadratic ($\beta(t) = 1 - (\frac{t}{E})^2$); (5) cosine ($\beta(t) = \cos(\frac{t}{2E}\pi)$); (6) $d_{\mathrm{EVS}}$ only ($\beta(t) = 1$); (7) $d_{\mathrm{DS}}$ only ($\beta(t) = 0$). The results of coreset selection for 4-bit quantized ResNet-18 on ImageNet-1K are listed in Tab. 5. The performance gaps among the annealing strategies are small, as they all follow the trend of using $d_{\mathrm{EVS}}$ in the early epochs of training and $d_{\mathrm{DS}}$ in the later epochs. Among these annealing strategies, cosine annealing is slightly better. When only one metric is used for selection, the performance drops. We also notice that $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$ are complementary: $d_{\mathrm{EVS}}$ works well with small data fractions, and $d_{\mathrm{DS}}$ performs better with large data fractions.
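For reference, the seven β(t) settings of Tab. 5 can be written down in a few lines; this is a sketch of the schedules themselves, not of any released code.

```python
import math

# The first five entries are the annealing schedules of Tab. 5, decaying from
# beta = 1 (favoring d_EVS) toward beta = 0 (favoring d_DS) as epoch t
# approaches E, except for "fixed"; the last two are single-metric ablations.
SCHEDULES = {
    "fixed":     lambda t, E: 0.5,
    "linear":    lambda t, E: 1.0 - t / E,
    "sqrt":      lambda t, E: 1.0 - math.sqrt(t / E),
    "quadratic": lambda t, E: 1.0 - (t / E) ** 2,
    "cosine":    lambda t, E: math.cos(t / (2 * E) * math.pi),
    "evs_only":  lambda t, E: 1.0,
    "ds_only":   lambda t, E: 0.0,
}
```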
## 5.2 Training With Label Noise

We would like to highlight another application of our method: identifying noisy labels in training data and improving the robustness of QAT. The natural advantage of our method is that it selects important training examples for QAT and removes low-quality, redundant examples. This is especially useful when there is label noise in the training set (some examples are not correctly labeled). This application was proposed in previous coreset research (Mirzasoleiman et al., 2020) but not verified on QAT. We follow the setting of Mirzasoleiman et al. (2020) and experiment on the QAT of ResNet-18 on CIFAR-10 with 10% randomized labels (the labels of 10% of training samples are randomly re-generated). The ResNet-18 is quantized to 2/32 bits for weights and activations. When the full dataset is selected, the accuracy is 89.91±0.51%. The comparison of QAT Top-1 accuracy (%) of 2/32-bit quantized ResNet-18 on CIFAR-10 with 10% randomized labels is shown in Tab. 6. Note that we select the EL2N Score and GraNd Score for comparison, as they are among the few coreset selection solutions investigated in Tab. 2 that claim the capacity to remove noise from the training set and improve training robustness.

| Method/Fraction (%) | 10% | 20% | 30% | 40% | 50% |
|-----------------------|------------|------------|------------|------------|------------|
| Random | 83.14±0.51 | 84.77±0.80 | 84.82±0.57 | 85.09±0.15 | 85.33±0.60 |
| EL2N Score | 85.80±0.51 | 86.02±0.55 | 87.18±0.35 | 87.51±0.29 | 88.01±0.45 |
| GraNd Score | 85.71±0.30 | 85.96±0.15 | 87.10±0.55 | 87.44±0.20 | 87.94±0.29 |
| Ours | 88.31±0.27 | 89.53±0.62 | 89.95±0.39 | 90.21±0.39 | 90.23±0.18 |

Table 6: Top-1 accuracy of 2/32-bit quantized ResNet-18 on CIFAR-10 with 10% random label noise.

As can be seen from the results in Tab. 6, our method outperforms all other selection baselines and even performs better than full-data training when the coreset size S ≥ 30%. To quantitatively demonstrate the effectiveness of noisy-sample pruning, we report the average recall for this noisy-sample detection problem; a sketch of how such a recall can be computed follows Tab. 7. Noticeably, while the other pruned samples are correctly labeled, they are still mostly redundant and can be pruned without significant performance degradation in QAT. From the recall results in Tab. 7, our method identifies most of the noisy samples. These results show that our method can successfully prune samples with incorrect labels. In this case, we actually achieve a **"lossless acceleration"**, as both the accuracy and the efficiency are better than full-data training.

| Method/Fraction (%) | 10% | 20% | 30% | 40% | 50% |
|-----------------------|-------|-------|-------|-------|-------|
| EL2N Score | 90.7% | 80.5% | 71.0% | 60.2% | 47.4% |
| GraNd Score | 92.3% | 81.7% | 65.4% | 41.0% | 26.5% |
| Ours | 97.9% | 90.2% | 84.1% | 78.9% | 73.5% |

Table 7: The average recall of noisy sample identification on CIFAR-10 with 10% random label noise.
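One natural reading of the recall in Tab. 7 is the fraction of mislabeled samples that a method leaves out of its selected coreset; a sketch under that reading (the exact bookkeeping in our experiments may differ):

```python
def noisy_sample_recall(selected_ids, noisy_ids, all_ids):
    """Recall of noisy-sample detection: share of mislabeled samples pruned."""
    pruned = set(all_ids) - set(selected_ids)   # samples left out of the coreset
    return len(pruned & set(noisy_ids)) / len(noisy_ids)
```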
## 5.3 Object Detection Tasks

We further conduct experiments on 4/4-bit QAT of RetinaNet (Lin et al., 2017) with a ResNet-18 backbone on the MS COCO object detection dataset (Lin et al., 2014). We use the same selection metric for the RetinaNet training and use the output of the classification head as the probability vector p. When there are multiple objects in one image, we use the mean selection metric $d_{\mathrm{ACS}}$ over the probability outputs of all objects. The QAT method used is FQN (Li et al., 2019a), and we perform training on coco-2017-train for 100 epochs. We investigate two coreset fractions, 10% and 50%, and use R = 10. The mAP results are listed in Tab. 8. Our method significantly outperforms the random selection baseline and the state-of-the-art selection method Moderate (Xia et al., 2023b), which proves that our method is effective on object detection tasks.

| Method | Fraction | AP | AP0.5 | AP0.75 | APS | APM | APL |
|------------|------------|------|---------|----------|-------|-------|-------|
| Full | 100% | 28.6 | 46.9 | 29.9 | 14.9 | 31.2 | 38.7 |
| Random | 10% | 21.4 | 39.8 | 22.4 | 7.5 | 24.3 | 27.4 |
| EL2N Score | 10% | 20.7 | 37.0 | 19.9 | 8.4 | 25.0 | 28.1 |
| Moderate | 10% | 22.0 | 37.8 | 20.4 | 8.4 | 25.0 | 28.1 |
| Ours | 10% | 24.4 | 40.9 | 25.1 | 9.9 | 27.1 | 31.5 |
| Random | 50% | 25.4 | 42.5 | 26.7 | 10.7 | 27.7 | 32.1 |
| EL2N Score | 50% | 25.9 | 42.8 | 27.0 | 11.4 | 28.6 | 33.8 |
| Moderate | 50% | 25.0 | 42.7 | 25.9 | 11.2 | 28.5 | 33.0 |
| Ours | 50% | 26.7 | 44.0 | 27.8 | 12.1 | 30.0 | 35.1 |

Table 8: Comparison of performance with different methods of RetinaNet on COCO benchmarks.

## 5.4 Visualization And Analysis

We visualize the loss landscape (Li et al., 2018) of MobileNetV2 trained on the full CIFAR-100 dataset, a 10% random subset of CIFAR-100, and a 10% coreset of CIFAR-100 selected with our method, as shown in Fig. 3. We can see from the results that QAT on the coreset selected with our method has a more centralized and smoother loss landscape compared to the baseline methods, which reflects that our method helps improve the training stability of QAT. We also visualize the distributions of the disagreement score $d_{\mathrm{DS}}$ and the error vector score $d_{\mathrm{EVS}}$ in Fig. 4a and Fig. 4b. The setting is the same as in the MobileNetV2 experiment listed in Tab. 1. We can see from the results that the mean of $d_{\mathrm{DS}}$ shifts toward zero during QAT, which shows that $d_{\mathrm{DS}}$ is a useful metric to quantify the importance of each sample. The distribution discrepancy between $d_{\mathrm{EVS}}$ and $d_{\mathrm{DS}}$ demonstrates the necessity of considering both metrics to select diverse data into our coreset.

Figure 4: (a) Distribution of disagreement scores $d_{\mathrm{DS}}$ on MobileNetV2 for different epochs. (b) Distributions of the disagreement score $d_{\mathrm{DS}}$ and the error vector score $d_{\mathrm{EVS}}$ on MobileNetV2 in the same epoch.

## 6 Conclusion

This is the first work focusing on the data efficiency and robustness of quantization-aware training (QAT). By removing samples from training batches and analyzing the loss gradient, we theoretically prove that the importance of each sample varies significantly for QAT. The error vector score and disagreement score are proposed to quantify this importance. Considering the training characteristics of QAT, we propose a fully quantization-aware Adaptive Coreset Selection (ACS) method to better adapt to different training phases and improve training data diversity. Extensive experiments on various datasets, networks, and quantization settings further demonstrate the effectiveness of our method. We also verified the proposed ACS in the noisy-label setting and proved that our method can improve the robustness of QAT.

## Acknowledgement

This research is supported by HKSAR RGC General Research Fund (GRF) \#16208823.

## References

Sharat Agarwal, Himanshu Arora, Saket Anand, and Chetan Arora. Contextual diversity for active learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI, pp. 137–153, 2020.

Ibrahim M Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. *Advances in Neural Information Processing Systems*, 35:22300–22312, 2022.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013.

Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, and Nojun Kwak. Lsq+: Improving low-bit quantization through learnable offsets and better initialization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 696–697, 2020.

Zalán Borsos, Mojmir Mutny, and Andreas Krause. Coresets via bilevel optimization for continual learning and streaming. *Advances in Neural Information Processing Systems*, 33:14879–14890, 2020.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. *arXiv* preprint arXiv:1805.06085, 2018. Alexis Conneau and Guillaume Lample. Cross-lingual language model pretraining. *Advances in neural* information processing systems, 32, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Melanie Ducoffe and Frederic Precioso. Adversarial active learning for deep networks: a margin based approach. *arXiv preprint arXiv:1802.09841*, 2018. Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. In *International Conference on Learning Representations*, 2020. Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, David Thorsley, Georgios Georgiadis, and Joseph H Hassoun. Post-training piecewise linear quantization for deep neural networks. In European Conference on Computer Vision, pp. 69–86. Springer, 2020. Stanislav Fort and Surya Ganguli. Emergent properties of the local geometry of neural loss landscapes. arXiv preprint arXiv:1910.05929, 2019. Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M Roy, and Surya Ganguli. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. *Advances in Neural Information Processing Systems*, 33:5850–5861, 2020. Satoru Fujishige. *Submodular functions and optimization*. Elsevier, 2005. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint* arXiv:1503.02531, 2015. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Hu Xianghong, Jeffry Wicaksana, Eric Xing, and Kwang-Ting Cheng. Sdq: Stochastic differentiable quantization with mixed precision. In *International* Conference on Machine Learning, pp. 9295–9309. PMLR, 2022. Zhizhong Huang, Junping Zhang, and Hongming Shan. Twin contrastive learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11661–11670, 2023. Rishabh Iyer, Ninad Khargoankar, Jeff Bilmes, and Himanshu Asanani. Submodular combinatorial information measures with applications in machine learning. *arXiv preprint arXiv:2006.15412*, 2020. Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. 
In *Proceedings of the 44th annual international symposium on computer architecture*, pp. 1–12, 2017.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor M Aamodt, and Andreas Moshovos. Stripes: Bit-serial deep neural network computing. In 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 1–12. IEEE, 2016.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint arXiv:2001.08361*, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pp. 4171–4186, 2019.

Krishnateja Killamsetty, Sivasubramanian Durga, Ganesh Ramakrishnan, Abir De, and Rishabh Iyer. Gradmatch: Gradient matching based data subset selection for efficient deep model training. In International Conference on Machine Learning, pp. 5464–5474. PMLR, 2021a.

Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh Iyer. Glister: Generalization based data subset selection for efficient and robust learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 8110–8118, 2021b.

Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, and Rishabh Iyer. Retrieve: Coreset selection for efficient and robust semi-supervised learning. *Advances in Neural Information Processing Systems*, 34:14488–14501, 2021c.

Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, and Nojun Kwak. Qkd: Quantization-aware knowledge distillation. *arXiv preprint arXiv:1911.12491*, 2019.

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. *arXiv preprint arXiv:2304.02643*, 2023.

Andreas Kirsch. Does 'deep learning on a data diet' reproduce? overall yes, but grand at initialization does not. *Transactions on Machine Learning Research*, 2023.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25:1097–1105, 2012.

Jung Hyun Lee, Jeonghoon Kim, Se Jung Kwon, and Dongsoo Lee. Flexround: Learnable rounding based on element-wise division for post-training quantization. In *International Conference on Machine Learning*, pp. 18913–18939. PMLR, 2023.

Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, and Jinwoo Shin. Robust inference via generative classifiers for handling noisy labels. In *International conference on machine learning*, pp. 3763–3772. PMLR, 2019.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 6391–6401, 2018.

Lin Li, Jun Xiao, Hanrong Shi, Hanwang Zhang, Yi Yang, Wei Liu, and Long Chen. Nicest: Noisy label correction and training for robust scene graph generation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2024.

Rundong Li, Yan Wang, Feng Liang, Hongwei Qin, Junjie Yan, and Rui Fan. Fully quantized network for object detection. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2810–2819, 2019a.

Yifan Li, Hu Han, Shiguang Shan, and Xilin Chen.
Disc: Learning from noisy labels via dynamic instance-specific selection and correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24070–24079, 2023.

Yuhang Li, Xin Dong, and Wei Wang. Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks. In *International Conference on Learning Representations*, 2019b.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In *Proceedings of the IEEE international conference on computer vision*, pp. 2980–2988, 2017.

Shih-Yang Liu, Zechun Liu, and Kwang-Ting Cheng. Oscillation-free quantization for low-bit vision transformers. *arXiv preprint arXiv:2302.02210*, 2023a.

Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, and Kwang-Ting Cheng. Llm-fp4: 4-bit floating-point quantized transformers. In *The 2023 Conference on Empirical Methods in Natural Language Processing*, 2023b.

Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 3296–3305, 2019.

Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P Xing, and Zhiqiang Shen. Nonuniform-to-uniform quantization: Towards accurate quantization via generalized straight-through estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4942–4952, 2022.

Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. Llm-qat: Data-free quantization aware training for large language models. *arXiv preprint arXiv:2305.17888*, 2023c.

Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, pp. 2736–2744, 2017.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In *International Conference on Learning Representations*, 2018.

David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. *arXiv preprint arXiv:1511.03643*, 2015.

Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras. Active learning by acquiring contrastive examples. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 650–663, 2021.

Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization techniques, pp. 234–243. Springer, 1978.

Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 5191–5198, 2020.

Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization: Identifying representative elements in massive data. In Advances in Neural Information Processing Systems, pp. 2049–2057, 2013.
Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In *International Conference on Machine Learning*, pp. 6950–6960. PMLR, 2020.

Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. In *International Conference on Learning Representations*, 2018.

Daisuke Miyashita, Edward H Lee, and Boris Murmann. Convolutional neural networks using logarithmic data representation. *arXiv preprint arXiv:1603.01025*, 2016.

Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. Importance estimation for neural network pruning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11264–11272, 2019.

Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? adaptive rounding for post-training quantization. In *International Conference on Machine Learning*, pp. 7197–7206. PMLR, 2020.

George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—i. *Mathematical programming*, 14(1):265–294, 1978.

Dongmin Park, Seola Choi, Doyoung Kim, Hwanjun Song, and Jae-Gil Lee. Robust data pruning under label noise via maximizing re-labeling accuracy. *Advances in Neural Information Processing Systems*, 36, 2024.

Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational knowledge distillation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3967–3976, 2019.

Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. *Advances in Neural Information Processing Systems*, 34:20596–20607, 2021.

Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In *International Conference on Machine Learning*, pp. 4095–4104. PMLR, 2018.

Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. In *International Conference on Learning Representations*, 2018.

Omead Pooladzandi, David Davini, and Baharan Mirzasoleiman. Adaptive second order coresets for data-efficient machine learning. In *International Conference on Machine Learning*, pp. 17848–17869. PMLR, 2022.

Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018.

Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh. Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. In *2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)*, pp. 764–775. IEEE, 2018.

Zhiqiang Shen and Eric Xing. A fast knowledge distillation framework for visual recognition. *arXiv preprint arXiv:2112.01528*, 2021.

Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. *IEEE transactions on neural networks and learning systems*, 2022.

Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. *Advances in Neural Information Processing Systems*, 35:19523–19536, 2022.

Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks.
In International Conference on Machine Learning, pp. 6105–6114. PMLR, 2019. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. An empirical study of example forgetting during deep neural network learning. In *International* Conference on Learning Representations, 2019. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. *arXiv preprint arXiv:2307.09288*, 2023. Yuanpeng Tu, Boshen Zhang, Yuxi Li, Liang Liu, Jian Li, Yabiao Wang, Chengjie Wang, and Cai Rong Zhao. Learning from noisy labels with decoupled meta label purifier. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pp. 19934–19943, 2023. Vladimir N Vapnik. An overview of statistical learning theory. *IEEE transactions on neural networks*, 10(5): 988–999, 1999. Peisong Wang, Qiang Chen, Xiangyu He, and Jian Cheng. Towards accurate post-training network quantization via bit-split and stitching. In *International Conference on Machine Learning*, pp. 9847–9856. PMLR, 2020. Hongxin Wei, Lue Tao, Renchunzi Xie, and Bo An. Open-set label noise can improve robustness against inherent label noise. *Advances in Neural Information Processing Systems*, 34:7978–7992, 2021. Max Welling. Herding dynamical weights to learn. In *Proceedings of the 26th Annual International Conference* on Machine Learning, pp. 1121–1128, 2009. Gert W Wolf. Facility location: concepts, models, algorithms and case studies. series: Contributions to management science, 2011. Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, and Tongliang Liu. Combating noisy labels with sample selection by mining high-discrepancy examples. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1833–1843, 2023a. Xiaobo Xia, Jiale Liu, Jun Yu, Xu Shen, Bo Han, and Tongliang Liu. Moderate coreset: A universal method of data selection for real-world data-efficient deep learning. In The Eleventh International Conference on Learning Representations, 2023b. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. Smoothquant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pp. 38087–38099. PMLR, 2023. Han Xiao, Huang Xiao, and Claudia Eckert. Adversarial label flips attack on support vector machines. In ECAI 2012, pp. 870–875. IOS Press, 2012. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information* processing systems, 32, 2019. Jiangchao Yao, Jiajie Wang, Ivor W Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, and Rui Zhang. Deep learning from noisy image labels with quality embedding. *IEEE Transactions on Image Processing*, 28(4): 1909–1922, 2018. Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, and Kevin Bailly. Powerquant: Automorphism search for non-uniform quantization. In *The Eleventh International Conference on Learning Representations*, 2023. Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In Proceedings of the European conference on computer vision (ECCV), pp. 365–382, 2018. Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. 
Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. *arXiv preprint arXiv:1606.06160*, 2016.

Xiao Zhou, Renjie Pi, Weizhong Zhang, Yong Lin, Zonghao Chen, and Tong Zhang. Probabilistic bilevel coreset selection. In *International Conference on Machine Learning*, pp. 27287–27302. PMLR, 2022.

Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, and Ian Reid. Training quantized neural networks with a full-precision auxiliary module. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1488–1497, 2020.

## Appendix

This appendix includes an additional introduction to related works, training dynamics, efficiency analysis, extended experimental analysis on data coverage, and a discussion of transferability and generalizability that is not included in the main text due to space limitations. These contents are organized in separate sections as follows:

- Sec. A elaborates on the details of previous representative coreset selection methods and analyzes the reasons for their failure in the quantization-aware training (QAT) scenario.
- Sec. B demonstrates the training dynamics of our ACS with different selection intervals and shows how our method helps to stabilize the training and accelerate convergence.
- Sec. C includes the real training time composition as a supplement to the QAT training time comparison in Tab. 3, showing that our coreset selection incurs only minimal efficiency overhead both with and without knowledge distillation.
- Sec. D provides additional experiments to prove that the performance improvement of our method does not come exclusively from covering more data from the full training set.
- Sec. E provides detailed experimental results with and without knowledge distillation (KD). Both the accuracy and real training time are shown.
- Sec. F analyzes the transferability and generalizability of our coresets across different models.

## A Detailed Introduction Of Selected Baselines

In this section, we introduce the selected baseline coreset selection methods in detail and show the defects of these methods when applied to QAT.

EL2N-Score/GraNd-Score (Paul et al., 2021) The GraNd score for a given training sample $(x, y)$ is defined as $\chi_t(x, y) = \mathbb{E}_{w_t}\|g_t(x, y)\|_2$, the expected magnitude of the loss gradient with respect to the weights. The importance of each sample is measured by the expected loss gradient norm, which has an intuition similar to our error vector score $d_{\mathrm{EVS}}$. However, an additional assumption of the GraNd score is that this approximation only holds after the model has been trained for a few epochs, so early training of the current model is required to obtain the statistics needed to compute the score; a naive implementation is sketched below. In addition, storing all the gradients incurs significant memory overheads during QAT, and with a high subset fraction the efficiency can even be lower than full-dataset training. The performance with these metrics is also sub-optimal, as the converged quantized model in the later training epochs of QAT is not considered. Moreover, based on the reproduction by Kirsch (2023), which observes the inconsistency of GraNd at initialization, GraNd-score-based coreset selection cannot outperform random selection across various datasets and tasks.
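To see why GraNd incurs memory and compute overheads compared with our forward-only scores, here is a naive per-sample gradient-norm sketch; it needs one backward pass per sample and is ours for illustration only:

```python
import torch

def grand_scores(model, loss_fn, xs, ys):
    """Naive per-sample GraNd: one backward pass (and gradient norm) per sample."""
    scores = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # accumulate the squared L2 norm over all parameter gradients
        sq_norm = sum((p.grad ** 2).sum() for p in model.parameters()
                      if p.grad is not None)
        scores.append(sq_norm.sqrt())
    return torch.stack(scores)
```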
Forgetting (Toneva et al., 2019) The core contribution of Forgetting is the definition of forgetting and learning events. Consider the classification setting with a given dataset $\mathcal{D} = \{(x_i, y_i)\}_i$. For a training example $x_i$, the label predicted after $t$ steps of SGD is denoted as $\hat{y}^t_i = \arg\max_k p(y_{ik}|x_i; \theta^t)$, and $\text{acc}^t_i = \mathbf{1}_{\hat{y}^t_i = y_i}$ is a binary indicator encoding whether the classification of this example is correct at time step $t$. Example $x_i$ undergoes a *forgetting event* when $\text{acc}^t_i$ decreases between two consecutive updates, $\text{acc}^t_i > \text{acc}^{t+1}_i$, i.e., the example $x_i$ is misclassified at step $t+1$ after having been correctly classified at step $t$. Correspondingly, a *learning event* is defined as $\text{acc}^t_i < \text{acc}^{t+1}_i$. With the statistics of forgetting events in early training, we can select those samples that incur more forgetting events and are more difficult to learn. However, the intuition of selecting "difficult" samples that incur more misclassifications is not always correct. Between large-scale datasets (ImageNet-1K, etc.) and datasets of relatively smaller size (such as MNIST, CIFAR-10, etc.), the difficulties of classification vary significantly, so selecting samples that are difficult to learn at the very beginning is not always reasonable. In the quantization-aware training setting, the quantized model first calibrates the quantization parameters and recovers the weights in the early stages, and we should not select "difficult samples" for calibration and recovery.
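A small tracker for forgetting events, as a sketch of the bookkeeping this method requires during early training (the class and method names are ours):

```python
import torch

class ForgettingTracker:
    """Counts forgetting events: acc_i^t = 1 followed by acc_i^{t+1} = 0."""

    def __init__(self, num_samples: int):
        self.prev_correct = torch.zeros(num_samples, dtype=torch.bool)
        self.forget_counts = torch.zeros(num_samples, dtype=torch.long)

    def update(self, sample_ids, preds, labels):
        correct = preds.eq(labels)
        # forgetting event: correct at the previous check, wrong now
        forgotten = self.prev_correct[sample_ids] & ~correct
        self.forget_counts[sample_ids] += forgotten.long()
        self.prev_correct[sample_ids] = correct
```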
kCenterGreedy (Sener & Savarese, 2018) As one of the most straightforward geometry-based coreset selection methods, its intuition is simple: we can measure similarity by the distance between data points, and selecting the center points of data clusters exploits the redundancy in the dataset. This method aims to solve the *minimax facility location* problem (Wolf, 2011), which is defined as selecting $k$ samples as a subset $S$ from the full dataset $T$ such that the longest distance between a data point in $T \setminus S$ and its closest data point in $S$ is minimized:

$$\operatorname*{min}_{S\subset T}\operatorname*{max}_{x_{i}\in T\setminus S}\operatorname*{min}_{x_{j}\in S}{\mathcal{D}}(x_{i},x_{j}),\tag{16}$$

where $\mathcal{D}(\cdot, \cdot)$ is the distance measurement function. The problem is NP-hard, and a greedy approximation known as k-Center Greedy has been proposed in Sener & Savarese (2018). Since a greedy algorithm is again involved, the efficiency is negatively affected, which is not affordable in our QAT setting.

Contextual Diversity (CD) (Agarwal et al., 2020) Specifically designed for coreset selection with deep convolutional neural networks (CNNs), Contextual Diversity (CD) leverages the ambiguity in feature representations, quantified as class-specific confusion, as the selection metric. Assume $C = \{1, \ldots, n_C\}$ is the set of classes predicted by a CNN-based model. For a region $r$ within an input image $I$, let $P_r = P_r(\hat{y} \mid I; \theta)$ be the softmax probability vector predicted by the model $\theta$. The pseudo-label for the region $r \subseteq I$ is defined as $\hat{y}_r = \arg\max_{j \in C} P_r[j]$, where $P_r[j]$ denotes the $j$-th element of the vector. For a given model $\theta$ over the unlabeled set $\mathcal{I}$, the class-specific confusion for class $c$ is defined as

$$P_{I}^{c}=\frac{1}{|\mathcal{I}^{c}|}\sum_{I\in\mathcal{I}^{c}}\frac{\sum_{r\in R_{I}^{c}}w_{r}P_{r}(\hat{y}\mid I;\theta)}{\sum_{r\in R_{I}^{c}}w_{r}}$$

with mixing weights $w_r \geq 0$. The pairwise contextual diversity, the KL-divergence of this metric between two samples $I_1$ and $I_2$, can be used as a distance metric to replace the Euclidean distance in kCenterGreedy (Sener & Savarese, 2018). As this work essentially follows kCenterGreedy (Sener & Savarese, 2018) to perform coreset selection, its defects likewise lie in the efficiency of the greedy algorithm and its failure in the low-bit setting.

Moderate Coreset (Xia et al., 2023b) As the most recent coreset selection work, Moderate has optimal efficiency, as no greedy search or back-propagation is involved during selection. Previous methods rely on score criteria that only apply to specific scenarios. This work proposes to use the score median as a proxy of the statistical score distribution and to select the data points whose scores are close to the score median into the coreset. The main drawback of this method when applied to quantization-aware training is that the quantized distribution is not considered.
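To make the shared bottleneck of the geometry-based baselines above (kCenterGreedy and CD) concrete, here is a minimal sketch of the k-Center Greedy approximation to Eq. (16); the function name and the use of Euclidean distance on precomputed embeddings are our assumptions.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, k: int, first: int = 0):
    """Greedy 2-approximation for the minimax facility location problem
    in Eq. (16): repeatedly add the point farthest from the current
    centers. features is an (n, d) array of per-sample embeddings."""
    centers = [first]
    # Distance from every point to its nearest selected center.
    dists = np.linalg.norm(features - features[first], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return centers
```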
## B QAT Training Dynamics With ACS

In this section, we report the training dynamics, including the training loss and training accuracy, of QAT ResNet-18 on an ImageNet-1K coreset with a 10% subset fraction. As can be seen from the results in Fig. 5, when the coreset changes adaptively during training, the accuracy drops and the loss increases significantly at the specific iterations where the subset is updated. However, the quantized model converges quickly on the new subset. Training with an adaptive coreset effectively helps avoid overfitting and improves the final performance.

![20_image_0.png](20_image_0.png)

Figure 5: Training/Testing loss/accuracy comparison of 4/4-bit quantized ResNet-18 on a 10% coreset of ImageNet-1K with different selection intervals R.

## C Training Time Composition

As a supplement to the detailed QAT training time comparison in Tab. 3, we also provide the detailed training time composition of training and selection in Tab. 9. Since KD changes the loss function of SGD training and makes it difficult to break down these two parts, we provide two sets of results, with and without KD. The experiments are conducted on quantized ResNet-18 on an ImageNet-1K coreset with 2 NVIDIA RTX 3090 GPUs. The results show that coreset selection incurs only a minimal efficiency overhead both with and without KD. The selection time is constant and depends only on the full dataset size and the selection interval R. Training with or without KD only influences the backpropagation time of training, and there is no backward pass in the selection phase, which makes the selection time the same (3.2h) across all settings.

Table 9: Composition of training time on QAT of quantized ResNet-18 on ImageNet-1K with different subset fractions and KD settings. The bitwidth for quantized ResNet-18 is 4/4 for weights/activations.

| Stage/Fraction (%) | Apply KD? | 10% | 30% | 50% | 60% | 70% | 80% |
|----------------------|-------------|-------|-------|-------|-------|-------|-------|
| Ours-all | ✓ | 11.3h | 20.7h | 36.1h | 42.2h | 48.0h | 53.8h |
| Ours-selection | ✓ | 3.2h | 3.2h | 3.2h | 3.2h | 3.2h | 3.2h |
| Ours-training | ✓ | 8.1h | 17.5h | 32.9h | 39.0h | 44.8h | 50.6h |
| Ours-all | ✗ | 9.7h | 16.0h | 29.5h | 34.4h | 40.2h | 45.4h |
| Ours-selection | ✗ | 3.2h | 3.2h | 3.2h | 3.2h | 3.2h | 3.2h |
| Ours-training | ✗ | 6.5h | 15.8h | 26.3h | 31.2h | 37.0h | 42.2h |

## D Coreset Coverage Analysis

The advantages of the proposed quantization-aware Adaptive Coreset Selection (ACS) algorithm are two-fold: adaptation and diversity. In this section, we further demonstrate that improving diversity by covering all the training data across different subsets is not optimal compared to our adaptive coreset. We use a "Full Coverage Split" of the CIFAR-100 dataset as our baseline, which randomly selects non-overlapping samples into subsets of fraction S during the first ⌈1/S⌉ epochs; it is guaranteed that all training samples are included, but the sequence is random. A sketch of this baseline is given after Tab. 10. We apply QAT to a 2/32-bit quantized MobileNet-V2 on the random subset, the "Full Coverage Split" subset, and the coreset obtained with our method. The selection interval R is the same across all settings. When the subset fraction S is large, the difference in coverage rate between the various methods is minor, so we only evaluate S ∈ {10%, 20%, 30%, 40%, 50%}. The results are shown in Tab. 10, where we can see that "Full Coverage Split" has limited superiority over the random baseline, especially when the subset fraction S is large. Our method outperforms the other two settings across all fractions, showing that both adaptation and diversity help to improve performance.

| Method/Fraction (%) | 10% | 20% | 30% | 40% | 50% |
|-----------------------|------------|------------|------------|------------|------------|
| Random | 62.25±0.71 | 64.07±0.47 | 65.22±0.41 | 65.55±0.80 | 66.24±0.55 |
| Full Coverage Split | 62.55±0.65 | 64.22±0.81 | 65.34±1.17 | 65.69±0.69 | 66.20±0.89 |
| Ours | 63.37±0.55 | 65.91±0.40 | 66.41±0.31 | 66.86±0.72 | 67.19±0.65 |

Table 10: Comparison of Top-1 Accuracy with Random vs. Full Coverage Split vs. Ours on CIFAR-100.
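As referenced above, a minimal sketch of the "Full Coverage Split" baseline follows; the function name and seeding convention are illustrative.

```python
import numpy as np

def full_coverage_splits(n_samples: int, fraction: float, seed: int = 0):
    """Partition the dataset indices into ceil(1/fraction) disjoint random
    subsets, one per epoch, so every sample is covered once across the
    first epochs (the order of coverage is random)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    size = int(np.ceil(n_samples * fraction))
    return [perm[i:i + size] for i in range(0, n_samples, size)]
```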
## E Detailed Experimental Results Without Knowledge Distillation

Knowledge distillation (KD) is a standard, "default" setting in previous quantization work (Mishra & Marr, 2018; Zhuang et al., 2020; Liu et al., 2022; Bhalgat et al., 2020; Huang et al., 2022), including the LSQ quantization (Bhalgat et al., 2020) we use in this paper. For a fair comparison with previous work, we likewise use KD for all the experiments in this work, regardless of the precision and dataset fraction. The full-data training baseline also involves training with knowledge distillation.

To verify the effectiveness of our method without KD, we remove the knowledge distillation from our method. Since the DS is built on the soft labels of the teacher model, we do not use it in our selection without KD; only EVS is applied as the selection metric. We follow the same settings for quantized ResNet-18 on ImageNet-1K as in the main paper. The training time and accuracy are shown in Tab. 11. When the full data are selected (S = 100%), the Top-1 accuracy is 70.21% and the training time is 3.1h without KD. From the results shown in Tab. 11, we can see that our method still outperforms the previous SOTA without KD, and the training efficiency remains optimal.

Table 11: Comparison of Top-1 Accuracy and training time of QAT of quantized ResNet-18 on ImageNet-1K with different subset fractions without KD. The bitwidth for quantized ResNet-18 is 4/4 for weights/activations.

| Method/Fraction (%) | 10% | | 30% | | 50% | | 60% | | 70% | | 80% | |
|-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| | Acc | Time | Acc | Time | Acc | Time | Acc | Time | Acc | Time | Acc | Time |
| EL2N w/o KD | 59.71 | 10.6h | 63.50 | 17.1h | 65.19 | 31.3h | 66.38 | 35.0h | 67.90 | 42.9h | 69.01 | 48.0h |
| Glister w/o KD | 62.41 | 11.9h | 65.18 | 22.7h | 66.47 | 34.5h | 67.05 | 41.7h | 68.81 | 50.3h | 69.45 | 56.9h |
| Ours w/o KD | 66.91 | 9.7h | 68.77 | 16.0h | 69.25 | 29.5h | 69.66 | 34.4h | 69.85 | 40.2h | 70.03 | 45.4h |

## F Coreset Transferability And Generalizability

The theoretical analysis of the importance of each sample in QAT is model-specific, which means that the data selected using our method is the optimal coreset for the current model. However, if a coreset discovered with a specific pre-trained model is applicable to other models, our method could be a solution for model-agnostic coreset selection. In addition, if there is significant coreset overlap across different models, our method could even serve as a form of dataset distillation. We first design experiments to verify the transferability of coresets obtained with our method. We use 2/32-bit ResNet-18 to generate a coreset on CIFAR-100 and apply it to MobileNet-V2. The results are shown in Tab. 12. As the ResNet-18 coreset performs better than the random subset on MobileNet-V2, our coreset generalizes to some extent to unseen network structures, but it remains less effective than the MobileNet-V2 coreset.

Table 12: Comparison of Top-1 Accuracy with different coresets of MobileNet-V2 on CIFAR-100.
| Method | Coreset Generation Model | 10% | 20% | 30% | 40% | 50% |
|----------|----------------------------|------------|------------|------------|------------|------------|
| Random | - | 62.25±0.71 | 64.07±0.47 | 65.22±0.41 | 65.55±0.80 | 66.24±0.55 |
| Ours | MobileNetV2 | 63.37±0.55 | 65.91±0.40 | 66.41±0.31 | 66.86±0.72 | 67.19±0.65 |
| Ours | ResNet-18 | 62.94±0.45 | 64.18±0.73 | 65.70±0.40 | 65.90±0.51 | 66.86±0.42 |

We then design experiments to verify the overlap between the coresets of different models. We analyze the 10%-fraction coresets selected using our method in the final epochs of 2/32-bit ResNet-18, ResNet-50, and MobileNet-V2. The percentage of overlapping data is shown in Tab. 13.

Table 13: Coreset overlap of different model pairs on CIFAR-100.

| Model Pair | Coreset Overlap Percentage |
|--------------------------|------------------------------|
| {ResNet-18, MobileNetV2} | 37.1% |
| {ResNet-18, ResNet-50} | 77.3% |
| {ResNet-50, MobileNetV2} | 29.0% |

We empirically find that there is some overlap between the coresets of different models, and the overlap is more significant when the two models have a similar structure (ResNet-18 and ResNet-50). We then further apply majority voting over the CIFAR-100 coresets of five models (ResNet-18, ResNet-34, ResNet-50, MobileNet-V2, MobileNet-V3) to generate a "general coreset". Applying this "general coreset" of 10% data fraction to a 2/32-bit quantized Vision Transformer ViT-B/16 (Dosovitskiy et al., 2020) yields a Top-1 accuracy of only 72.3%, which is lower than the 78.9% obtained with a 10% coreset selected by our method and even lower than the 74.6% of a 10% random coreset. We conclude that our method is model-specific: it generalizes to some extent to unseen network structures, but it cannot serve as a general data distillation approach.
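To make the overlap and majority-voting computations in this section concrete, a minimal sketch follows; both helpers are illustrative and assume coresets are given as sets of sample indices.

```python
from collections import Counter

def overlap_percentage(coreset_a: set, coreset_b: set) -> float:
    """Overlap between two equally sized coresets, as reported in Tab. 13."""
    return 100.0 * len(coreset_a & coreset_b) / len(coreset_a)

def majority_vote_coreset(coresets: list, k: int) -> set:
    """'General coreset': keep the k indices selected by the most models.
    Ties at the cutoff are broken arbitrarily by Counter ordering."""
    votes = Counter(i for coreset in coresets for i in coreset)
    return {i for i, _ in votes.most_common(k)}
```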
Review 1:

Summary: This paper proposes a method to subsample the training dataset to improve the efficiency of quantization-aware training (QAT). Based on the authors' statement, this work is the first to address coreset selection specialized for QAT. Two metrics are introduced to evaluate the importance of each datum, and the data used for training are adaptively selected during training using the proposed metrics. Based on these two metrics, a novel approach for adaptive coreset selection is proposed. Experiments on several datasets are conducted to compare the proposed coreset selection method and other baseline coreset selection methods. These comparisons reveal that the proposed approach achieves higher accuracy under the same sampling ratio than other methods. Moreover, the computational efficiency over the baseline approaches is reported.

Strengths and Weaknesses:

Strengths
- A computationally efficient and performant novel approach for coreset selection in the QAT setting is proposed.
- This paper empirically compares existing coreset selection methods that are not specialized for the QAT setting against the proposed approach.
- The ablation study reveals the effectiveness of each component (i.e., metric).

Weaknesses
- Clarity of Section 3 could be improved (see the requested changes part).
- The comparison is limited to coreset selection methods. This is acceptable for evaluating the coreset selection methods themselves. However, it is not clear whether the proposed coreset selection method is more effective at achieving higher accuracy in the QAT setting than other approaches.

Requested Changes:
- P2. "However, previous methods select data based on feature (Agarwal et al., 2020; Sener & Savarese, 2018), error (Paul et al., 2021), gradient (Killamsetty et al., 2021a), and decision boundary (Margatina et al., 2021) can achieve notable efficiency improvement for full-precision training, but its effectiveness on low-precision QAT has not been explored before." The structure "However, …, but …." is strange. Please revise it.
- P4. The notation is suboptimal. "x" is used with different meanings.
- P5. Why does the right-hand side equality in Eq. (4) hold? Because of the quantization in (2), it looks like it does not necessarily hold. Please clarify it.
- P5. The authors are mixing up the notation for a standard gradient and STE, because STE is not a gradient of an actual function. Moreover, the authors use a simplified notation for the derivative, i.e., df/dw_q to mean df/dw | w = w_q. The combination of the above two points makes it hard to verify the correctness of the derivation in Section 3. Please clarify them.
- P5. "Since learning rate η and the gradient of loss w.r.t. quantized weights dL/dw_q^t are sample-agnostic, …" Why is the gradient of the loss w.r.t. the quantized weights sample-agnostic? Why does one not need to consider the inner product? Please clarify it.
- Tables 3, 4, 5. Are these numbers average values? Please clarify this and provide their STD or IQR to see the significance.
- Please discuss the advantages and disadvantages of the proposed coreset selection method over other approaches to improve QAT. Is the proposed approach more efficient? Can we use the proposed coreset selection with other techniques for improving the final accuracy in the QAT setting?

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This work proposes a coreset selection algorithm, which is used to perform quantization-aware training. Specifically, the authors propose two metrics: 1.
Error vector score, or d_EVS, which is the L2 error between a one-hot encoded label and the model output; 2. Disagreement score, or d_DS, which is the L2 error between the real-weighted model output and the quantized model output. They propose to use these metrics to select/filter data in order to improve QAT performance. The main advantage of this approach is reducing training time, as it allows focusing only on the important samples (the "coreset"). They show empirically that this indeed leads to better overall performance compared to baselines, for various coreset sizes. They also show that the same procedure can be used to train in scenarios where labels are noisy, as their selection criterion can filter samples with noisy labels. They compare to the EL2N score and GraNd score of previous work, and show improvements across the board.

Strengths and Weaknesses:

Strengths include:
- Strong empirical results across various amounts of real data compared to baseline methods, for example the across-the-board improvement in Figure 1 left.
- Applied to multiple datasets (CIFAR-10, CIFAR-100, ImageNet-1K, COCO, RetinaNet).
- The method works best for R=10, which means a new coreset is selected fairly often. This adds a bit of overhead, especially for the 10% case (10h -> 12h). This seems worthwhile though, improving performance from 62.82 to 68.80 for the 10% data case.
- Faster QAT of vision models is an important field. Efficient training of deployment-ready neural networks can have enormous impact.

Weaknesses include:
- ACS leads to faster convergence, but not convergence to the same point. From Tables 1 and 2, we see that the full-data accuracy of MobileNetV2 is 68.1%, but running with S=50% reaches 67.2%. The full accuracy of ResNet18 on ImageNet-1K is 72.46%, but the S=80% method reaches 72.31. Could QAT runs converge to the same point as full-data training? If not, is full-data QAT always preferred if you care about final performance? With ACS underperforming full-data training, who is the intended target audience?
- A few strong claims are made:
  - This work being the first to investigate data efficiency is possibly true, but data efficiency of LLM quantization has been investigated previously as well [Dettmers 2023].
  - Under Definition 1, it is stated that previous works empirically observe a similarity, and therefore that the gradient norm is positively correlated to the error vector. I do believe the gradient norm and the error vector are likely positively correlated, but that would have to be shown. Maybe a less specific claim is warranted here.
  - In section 4, it is mentioned that ACS helps guarantee faster convergence, but an investigation of convergence behavior is missing (except appendix Figure 5); we are mainly looking at final performance. The model does converge to a better point than random coreset selection methods, but again, I believe the comparison point should be full-data training (without any coreset selection) at the same point in time.
- In section 5.2, it is stated that ACS can successfully prune samples with incorrect labels, because final performance is better. This is not shown currently, but it can be. Does the criterion indeed remove noisy labels? Given that the d_EVS criterion keeps samples with high error, it seems surprising that these noisy samples are removed.

All in all, although the claims are a bit strong, I think the investigation is interesting to those interested in reducing time-to-train during QAT.
However, an important datapoint is missing: full-data training under the same time budget. Without that datapoint, I would probably still use full-data QAT.

References
- [Dettmers 2023]: QLORA: Efficient Finetuning of Quantized LLMs, NeurIPS 2023

Requested Changes:
- If available, a comparison to a full-data run at the same #hours could strengthen this work, in my opinion. At the moment, it is unclear to me why one would use ACS for QAT. What happens if you train a ResNet18 on all data for 12.7 hours? How does it compare to the 10% data run from Table 4? This could show that ACS has a practical advantage.
- Please indicate how many experiments were performed when showing standard deviation or standard error.
- Please provide additional support or explanation for some of the stronger claims, see above.
- Could you add more information on how the criteria are adapted for detector (RetinaNet) training? Is this looking only at the classification head of samples? If so, how are images with multiple objects handled?

Broader Impact Concerns: No concerns on ethical implications. Although coreset selection can increase biases present in training data, this work is far removed from unethical application of models, and the method is largely data-agnostic.

==================================================

Review 3:

Summary: This paper proposes Adaptive Coreset Selection (ACS), a method to filter data used in Quantization-Aware Training (QAT) to reduce the training computational budget while preserving performance. Selection relies on a linear combination of two metrics: Error Vector Score, which determines the importance of each sample based on its impact on the loss, and Disagreement Score, which accounts for discrepancies between the quantized model being trained and the full-precision teacher. The authors show that ACS can speed up QAT and improve its robustness. Results are provided on CNNs for visual tasks (image classification, object detection).

Strengths and Weaknesses:

**STRENGTHS**
- The topic of efficient data selection in the context of quantized model training is of clear interest to the audience of this journal
- The paper is well structured, easy to follow, and appropriately cites related works
- The proposed approximation for sample importance and the combination into the proposed ACS method appear to be novel
- On the reported CNNs/datasets, comparison against several related sample-selection techniques is favorable towards ACS in terms of performance (accuracy and training runtime)
- I commend the authors for reporting in Appendix F on the limited generalization capabilities of coreset selection across diverse models. Negative results are also important.

**WEAKNESSES**
- My main concern is that training dynamics and final performance are dependent on hyperparameter R (selection interval), which is manually selected. In their experiments, the authors applied a grid search on R to determine a good trade-off between accuracy and training time, settling on R=20 for MobileNetV2 and R=10 for ResNet18. I think R is unspecified for RetinaNet for the results reported in Tab. 7. However, this is not a practical strategy if the objective is to reduce overall training time.
At the same time, it is not clear what a viable strategy for selecting R on an unexplored model/dataset/quantization combination would look like
- Limited demonstration of applicability and scalability of this method, as results are reported on a total of 3 model/dataset/quantization combinations (MobileNetV2-CIFAR100 at W2/A32 bits, ResNet18-ImageNet1k at W4A4 bits, RetinaNet [ResNet18 backbone]-COCO at W4A4 bits)
- In particular for quantization, what drove this selection (weight-only for one model vs. weight & activations for the other)? For a given model and dataset, are results impacted by the choice of quantization (e.g., does more aggressive quantization call for more training data)?
- The investigation of training with 10% label noise was interesting, and given the model performance shown in Tab. 6 I agree it is likely that this method, as the authors state, "can successfully prune those samples with incorrect labels". However, this statement could have been definitively proven by comparing the pruned labels with the noisy labels, and reporting what fraction of noisy labels was captured (accuracy / precision / F1 score) and how it changed during training. Was this comparison performed?

Requested Changes: See previous section.

Broader Impact Concerns: Not addressed in this manuscript. No concern on my end.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper investigates coreset selection in the context of quantization-aware training (QAT) to reduce training costs and mitigate the potential negative influence of label noise. Specifically, it proposes two metrics--error vector score and disagreement score--to select a subset of training data during QAT. The effectiveness of these methods is empirically evaluated across a variety of vision models and datasets, demonstrating improved performance compared to other data selection methods. The paper received recommendations of acceptance from all reviewers. Overall, the paper is well written and easy to follow. The paper explores a new and important topic: applying efficient data selection to QAT. The proposed methods are sound and intuitive, and the empirical evaluation and ablation studies are reasonably extensive, demonstrating consistent improvement over existing methods. During the rebuttal, the authors conducted additional experiments, showing the effectiveness of the methods on vision transformers and in filtering out noisy samples. They also provided more clarification on the technical details and addressed concerns related to training dynamics to some extent. In summary, this paper is technically solid and has the potential to offer benefits in the domain of efficient QAT. Therefore, I recommend acceptance.

==================================================
# Revisiting Gaussian Neurons For Online Clustering With Unknown Number Of Clusters

Anonymous authors Paper under double-blind review

## Abstract

Despite the recent success of artificial neural networks, more biologically plausible learning methods may be needed to resolve the weaknesses of backpropagation-trained models, such as catastrophic forgetting and adversarial attacks. Although these weaknesses are not specifically addressed, a novel local learning rule is presented that performs online clustering with an upper limit on the number of clusters to be found rather than a fixed cluster count. Instead of using orthogonal weight or output activation constraints, activation sparsity is achieved by mutual repulsion of lateral Gaussian neurons, ensuring that multiple neuron centers cannot occupy the same location in the input domain. An update method is also presented for adjusting the widths of the Gaussian neurons in cases where the data samples can be represented by means and variances. The algorithms were applied on the MNIST and CIFAR-10 datasets to create filters capturing the input patterns of pixel patches of various sizes. The experimental results¹ demonstrate stability in the learned parameters across a large number of training samples.

## 1 Introduction

Machine learning models trained through backpropagation have become widely popular in the last decade since AlexNet (Krizhevsky et al., 2012). Backpropagation, however, is not considered biologically plausible (Bengio et al., 2016), and the trained models are, among other things, susceptible to catastrophic forgetting (McCloskey & Cohen, 1989; Ratcliff, 1990) and adversarial attacks (Szegedy et al., 2014). More biologically plausible methods would need to incorporate not only local learning rules and unsupervised learning, but also properties such as sparser activations, the capacity to handle distribution changes of the input, robustness to catastrophic forgetting, and the ability to work in an online setting, where samples are processed one at a time during learning. One way to address distribution shifts in online clustering is to utilize overparameterized models that have additional parameters available to model input from subsequent unknown distributions. This requires that the parameters used to capture previous data be less likely to be adjusted, which in turn alleviates the model's susceptibility to catastrophic forgetting. In this article, a novel learning rule is presented that utilizes neurons expressed with Gaussian functions. For a given neuron, the online method has an attraction term toward the current sample and an inhibition term that ensures reduced overlap between the Gaussian neurons in the same layer to achieve activation sparsity. This makes it possible to have an overparameterized model with more neurons in a layer than are needed to represent the input. Some neurons will model previous input samples, while other neurons can adapt to new input from a possibly different distribution. While the inhibition term does not fully resolve the possibility of catastrophic forgetting, specialized rules may be added, since the neurons representing already sampled data can be identified.
¹The source code to reproduce the figures in this article is available at https://gitlab.com/anonymous/gaussian-neurons-for-online-clustering

Additionally, an update method for the neuron widths is presented, where the sample variances are measured from the data samples and used to perform a constrained update of the neuron widths toward the sample variances. Under certain conditions, this can lead to a more robust learning rule whose results are less affected by the choice of initial neuron widths. A limitation of the presented method is the use of isotropic Gaussian functions, employing scalar variances instead of covariance matrices. Moreover, Gaussian functions require additional computational resources and can be more numerically unstable compared to linear functions. Gaussian functions can, however, be approximated while retaining the desired properties, for instance by using piecewise linear functions. Finally, no claims are made that the proposed learning rule achieves better clustering results compared to previous work. On the other hand, the presented methods have a few interesting properties, as demonstrated in Section 4, such as examples of robustness to model disruption and distribution shift, which merit further studies toward more biologically plausible artificial neural networks that may be less susceptible to the weaknesses of current trained models.

## 2 Related Work

Clustering in an online setting has previously been studied extensively (Du, 2010), including for instance MacQueen's k-means (MacQueen et al., 1967), self-organizing maps (Kohonen, 1982), adaptive resonance theory (Carpenter & Grossberg, 1987), and neural gas (Fritzke et al., 1995). These algorithms typically implement a winner-take-all scheme, where only the winning neuron is adjusted with respect to a given sample. Other methods extend Oja's learning rule (Oja, 1982) to achieve online clustering, such as White's competitive Hebbian learning rule (White, 1992), which adds a lateral inhibition term to Oja's learning rule, and Decorrelated Hebbian Learning (Deco & Obradovic, 1995), where softmax-normalized Gaussian functions are used for neuron activations. More recently, Pehlevan & Chklovskii (2014) presented an online clustering algorithm using symmetric non-negative matrix factorization based on the works of Ding et al. (2005) and Kuang et al. (2012), and Krotov & Hopfield (2019) likewise extended Oja's learning rule with a lateral inhibition term. The latter method was parallelized on GPUs in Grinberg et al. (2019) and Talloen et al. (2021). All of these methods, however, use a fixed number of clusters that the algorithms are predetermined to find. Current artificial neuron models are overly simplified compared to cortical neurons (Beniaguev et al., 2021). In artificial neural networks, an activation function is usually used with a linear combination of input variables. On the other hand, clustering algorithms such as k-means and Gaussian mixture models use distance and multivariate Gaussian functions, respectively. The centroids or means of Gaussian functions capture both the angle and the magnitude of the input vector more significantly than linear combinations of input variables do. Furthermore, variance and especially covariance provide additional flexibility for modeling the input.
To model a convex region, for example, an artificial neural network using a linear combination of the input variables needs two fully-connected layers, each with a non-linear activation function, typically Rectified Linear Units since Glorot et al. (2011). The first layer then represents hyperplanes in the input domain, and the second layer may combine these hyperplanes into convex regions. Gaussian functions, on the other hand, innately capture convex regions, potentially reducing the number of layers required for a given task. In terms of the number of layers, a biological cortex cannot process signals sequentially as many times as the deeper artificial feedforward networks do, due to the relatively low firing rate of biological neurons (Wang et al., 2016). As a consequence, more biologically plausible models must perhaps be more shallow than is common in today's deep artificial networks. Sparse representations can be an effective way to reduce the dimensionality of the input space, in addition to having the potential for more efficient use of resources. Methods relying on linear combinations of the input variables, such as sparse coding (Olshausen & Field, 1997), typically add a regularization term on the output activations. These algorithms, however, depend on reconstruction errors and are therefore not viable in certain unsupervised settings such as clustering. Another possibility is to regularize the linear coefficients directly, as in competitive Hebbian learning (White, 1992) or independent component analysis (Comon, 1994), but these methods rely on orthogonality constraints, which limits the clustering method, i.e., the number of clusters found is generally less than or equal to the number of input dimensions, and two or more clusters cannot exist along a given vector from the origin. On the other hand, Deco & Obradovic (1995) made use of softmax-normalized Gaussian neurons, and regularization was achieved by penalizing overlapping Gaussian functions in the same layer. The neuron and cost function of this method, however, can result in learned cluster centers that lie outside of the input domain, and these centers would then poorly represent the true cluster centers. Moreover, the proposed learning rule does not require a softmax function to produce the neuron outputs.

![2_image_0.png](2_image_0.png)

Figure 1: The cost function $F$ with $K = 2$ Gaussian neurons parameterized by the first neuron $\mu_1$, where $\mathbf{x} = (0.3,\ 0.7)$, $\mu_2 = (0.7,\ 0.3)$, $\sigma_1 = \sigma_2 = 0.2$, and $\lambda = \frac{1}{2}$. Minimizing $F$ with respect to $\mu_1$ can be achieved by moving $\mu_1$ closer to $\mathbf{x}$ and farther from $\mu_2$. The minimum of $F$ is here at $\mu_1 \approx (0.24,\ 0.76)$, although the presented learning rule will only make small changes to the cluster centers $\mu_{1:K}$ for each $\mathbf{x}$. Furthermore, setting $\mu_2 = \mathbf{x}$ would lead to a flat surface where $F = -1$, and in this case there would be no attraction or repulsion on $\mu_1$.

## 3 Learning Rule

Let $\mathbf{x} \in \mathbb{R}^D$ be an input vector, and $\mu_i \in \mathbb{R}^D$ and $\sigma_i \in \mathbb{R}_{>0}$ be the center and width, respectively, of the $i$-th Gaussian neuron in a layer, where $i \in \{1, \ldots, K\}$. The output of the $i$-th neuron is then defined as:

$$f_{i}(\mathbf{x})=e^{-\|\mathbf{x}-\mathbf{\mu}_{i}\|_{2}^{2}/\sigma_{i}}$$

where $\|\cdot\|_2$ denotes the $\ell_2$-norm.
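As a concrete illustration of this neuron model, a minimal NumPy sketch of a layer's activations follows; the function name and array shapes are our conventions, not part of the original formulation.

```python
import numpy as np

def activations(x: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Outputs f_i(x) = exp(-||x - mu_i||^2 / sigma_i) for all K neurons.
    x: (D,) input; mu: (K, D) centers; sigma: (K,) positive widths."""
    sq_dist = np.sum((mu - x) ** 2, axis=1)
    return np.exp(-sq_dist / sigma)
```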
The objective of the learning rule is to minimize the cost function $E$ with respect to the centers $\mu_{1:K}$:

$$E(\mathbb{X};\mu_{1},\ldots,\mu_{K},\sigma_{1},\ldots,\sigma_{K})=\sum_{\mathbf{x}\in\mathbb{X}}\sum_{i=1}^{K}\left(-f_{i}(\mathbf{x})+\lambda\sum_{j\neq i}f_{j}(\mathbf{\mu}_{i})\right).$$

where $\mathbb{X} \in \mathbb{R}^{N \times D}$ is a potentially very large set of $N$ training examples that $\mathbf{x}$ is sampled from, $K$ is the number of Gaussian neurons, and $\lambda \in \mathbb{R}_{>0}$ controls the level of inhibition. The first term of $E$ attracts the cluster centers $\mu_{1:K}$ toward the input $\mathbf{x}$, while the second term repels the different cluster centers away from each other depending on the Gaussian locations and widths. Interestingly, setting $\lambda \geq \frac{1}{2}$ would nullify all attraction toward the given input if a cluster center already has the same position as the input and the Gaussian widths are equal. Lowering $\lambda$ and $\sigma_{1:K}$ has the same effect, that is, more input patterns can be learned, but smaller widths $\sigma_{1:K}$ may lead to insufficient attraction of the cluster centers toward the input. In order to minimize $E$ in an online setting, the learning rule will update $\mu_{1:K}$ by small steps in the opposite direction, and proportionally to the magnitude, of the gradient of the per-sample summand of $E$, denoted $F$:

$$F(\mathbf{x};\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{K},\sigma_{1},\ldots,\sigma_{K})=\sum_{i=1}^{K}\left(-f_{i}(\mathbf{x})+\lambda\sum_{j\neq i}f_{j}(\mathbf{\mu}_{i})\right).$$

evaluated at input $\mathbf{x}$. See Figure 1 for an example surface of $F$ with two neurons. For each input $\mathbf{x}$, the cluster centers will be updated as follows:

$$\mu_{i}=\mu_{i}+\Delta\mu_{i}=\mu_{i}-\frac{1}{2}\eta_{\mu}\frac{\partial F}{\partial\mu_{i}}$$

where $\eta_\mu \in \mathbb{R}_{>0}$ is the learning rate. The learning rule then becomes:

$$\Delta\mathbf{\mu}_{i}=\eta_{\mu}\left(\frac{f_{i}(\mathbf{x})}{\sigma_{i}}(\mathbf{x}-\mathbf{\mu}_{i})-\lambda\sum_{j\neq i}\left(\frac{f_{i}(\mathbf{\mu}_{j})}{\sigma_{i}}+\frac{f_{j}(\mathbf{\mu}_{i})}{\sigma_{j}}\right)(\mathbf{\mu}_{j}-\mathbf{\mu}_{i})\right)\tag{1}$$

If $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$, the learning rule can be simplified to:

$$\Delta\mu_{i}=\frac{\eta_{\mu}}{\sigma}\left(f_{i}(\mathbf{x})(\mathbf{x}-\mathbf{\mu}_{i})-2\lambda\sum_{j\neq i}f_{i}(\mathbf{\mu}_{j})(\mathbf{\mu}_{j}-\mathbf{\mu}_{i})\right)$$

When applying the learning rule in Equation 1, some cluster centers may be repelled outside of the input domain. However, this is a beneficial property that can be utilized to identify learned cluster centers that correspond to patterns found in the input, as these centers will lie within or relatively close to the input domain. The maximum distance a cluster center can be pushed outside of the input domain is constrained by the input domain itself, the parameters of Equation 1, and the number of clusters $K$. The maximum distance goes toward $\infty$ as the number of clusters approaches $\infty$, but considering $D = 1$, $K = 2$, $\lambda = \frac{1}{2}$, $\mu_2 = 0$, $\sigma_1 = \sigma_2 = \sigma$, and $f_i(x) \to 0\ \forall i \in \{1, 2\}\ \forall x \in \mathbb{R}$, the maximum distance between $\mu_1$ and $\mu_2$ converges to the solution of the ordinary differential equation at $\mu_1(t)$ as $t \to \infty$:

$${\frac{\partial\mu_{1}}{\partial t}}=-{\frac{\partial F}{\partial\mu_{1}}}=2{\frac{\mu_{1}}{\sigma}}e^{-\mu_{1}^{2}/\sigma},\quad\mu_{1}(0)>0$$

Although in practice, the learning rate, the number of iterations, and the precision of the data types will limit the above distance.
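A minimal NumPy sketch of one online step of Equation 1 may make the attraction and repulsion terms concrete; the vectorization choices and names are ours, not taken from the released source code.

```python
import numpy as np

def update_centers(x, mu, sigma, eta=1e-1, lam=0.5):
    """One online step of Equation 1 on a single sample x.
    mu: (K, D) centers, sigma: (K,) widths; returns updated centers."""
    f_x = np.exp(-np.sum((mu - x) ** 2, axis=1) / sigma)   # f_i(x)
    diff = mu[:, None, :] - mu[None, :, :]                 # diff[i, j] = mu_i - mu_j
    sq = np.sum(diff ** 2, axis=2)
    f_pair = np.exp(-sq / sigma[None, :])                  # f_pair[i, j] = f_j(mu_i)
    attract = (f_x / sigma)[:, None] * (x - mu)
    # Repulsion coefficient f_i(mu_j)/sigma_i + f_j(mu_i)/sigma_j per pair (i, j):
    coef = f_pair.T / sigma[:, None] + f_pair / sigma[None, :]
    np.fill_diagonal(coef, 0.0)                            # exclude j = i
    repel = lam * np.einsum('ij,ijd->id', coef, -diff)     # sum_j coef * (mu_j - mu_i)
    return mu + eta * (attract - repel)
```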
## 3.1 Update Method For Neuron Widths

Having sparsity constraints given by Gaussian functions enables adjustments of the activation sparsity based on sample measures. The Gaussian widths $\sigma_{1:K}$ can be derived from the data $\mathbb{X}$ if a data sample is instead represented by a mean vector $\mathbf{x} = \mu_x$ and a width scalar $\sigma_x$. This addition to the learning rule alters a neuron width $\sigma_i$ toward the given data sample width $\sigma_x$ depending on constraints such as how well the $i$-th Gaussian matches the data sample. Furthermore, care must be taken to avoid neuron functions collapsing into each other, i.e., resulting in several tight clusters near similar input patterns that have a relatively high probability of being sampled. Let the measure of a sample be:

$$f_{x}(\mu_{i})=e^{-\|\mu_{i}-\mu_{x}\|_{2}^{2}/\sigma_{x}}$$

and the update of the neuron widths be defined as:

$$\Delta\sigma_{i}=\eta_{\sigma}\max\left\{f_{x}(\mathbf{\mu}_{i})-2\lambda\sum_{j\neq i}f_{j}(\mathbf{\mu}_{i}),\,0\right\}f_{i}(\mathbf{\mu}_{x})(\sigma_{x}-\sigma_{i})\tag{2}$$

where $\eta_\sigma \in \mathbb{R}_{>0}$ controls the rate of change, and $\lambda$ is the inhibition level used in Equation 1. The change of a neuron width $\sigma_i$ thus relies on the neuron mean with respect to the sample measure, inhibited by the other lateral neurons, and on the sample mean with respect to the neuron function. The $\max\{\cdot\}$ function ensures that the change of $\sigma_i$ is always 0 or toward $\sigma_x$. Equations 1 and 2 can be applied independently, but adjusting means and widths interchangeably can be advantageous, for example by enabling additional clusters to be found after the already identified clusters have had their widths reduced. Experiments employing both Equations 1 and 2 are presented in Subsection 4.3.
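Analogously, a minimal NumPy sketch of the width update in Equation 2 follows, under the same conventions as the center-update sketch above; it can be alternated with `update_centers` on each sample.

```python
import numpy as np

def update_widths(mu_x, sigma_x, mu, sigma, eta=1e-1, lam=0.5):
    """One online step of Equation 2 for a sample given as (mu_x, sigma_x).
    mu: (K, D) centers, sigma: (K,) widths; returns updated widths."""
    d_x = np.sum((mu - mu_x) ** 2, axis=1)
    f_x_mu = np.exp(-d_x / sigma_x)            # f_x(mu_i), sample measure
    f_mu_x = np.exp(-d_x / sigma)              # f_i(mu_x), neuron response
    sq = np.sum((mu[:, None, :] - mu[None, :, :]) ** 2, axis=2)
    f_pair = np.exp(-sq / sigma[None, :])      # f_pair[i, j] = f_j(mu_i)
    np.fill_diagonal(f_pair, 0.0)              # exclude j = i
    inhibited = np.maximum(f_x_mu - 2 * lam * f_pair.sum(axis=1), 0.0)
    return sigma + eta * inhibited * f_mu_x * (sigma_x - sigma)
```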
## 4 Results And Discussion

The presented learning rule was tested on image patches from the MNIST (Lecun et al., 1998) and CIFAR-10 (Krizhevsky, 2009) datasets. The $D$ pixel patch intensities were scaled to be from 0 to 1. No other preprocessing was applied to the images or image patches.

## 4.1 MNIST

$N$ uniformly random $D = 5 \times 5$ pixel patches were sampled from the training set containing $6 \times 10^4$ images of handwritten digits. Figure 2 shows the resulting centers, typically named filters in this context, of $K = 16$ clusters with Gaussian widths $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$, inhibition level $\lambda$, and learning rate $\eta_\mu = 10^{-1}$. Some cluster centers have not changed significantly from training, while other filters represent learned filters that, for instance, can act as edge detection filters. The unchanged filters have typically been pushed outside of the input domain and have a high cosine similarity to the initial randomized filters, as shown in Table 1 for the filters depicted in Subfigure 2a.

| | µ1 | µ2 | µ3 | µ4 | µ5 | µ6 | µ7 | µ8 | µ9 | µ10 | µ11 | µ12 | µ13 | µ14 | µ15 | µ16 |
|--------|------|------|------|------|------|------|------|------|------|-------|-------|-------|-------|-------|-------|-----|
| di | 1.5 | 1.5 | 1.0 | 1.0 | 0.8 | 0.9 | 1.4 | 1.4 | 1.5 | 1.4 | 1.5 | 0.9 | 1.6 | 1.5 | 0.8 | 0.9 |
| cos θi | 0.9 | 0.8 | 0.4 | −0.1 | 0.8 | 0.7 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.7 | 0.9 | 0.9 | 0.8 | 0.6 |

Table 1: Scalar descriptors of the filters shown in Subfigure 2a, where $d_i = \|\mu_i - \frac{1}{2}\mathbf{1}\|_2 / \sqrt{D/4}$, $\sqrt{D/4}$ is the maximum distance from a point in the input domain to the input domain centroid, $\cos\theta_i = \mu_i \cdot \mu_i^{(init)} / (\|\mu_i\|_2 \|\mu_i^{(init)}\|_2)$, and $\mu_i^{(init)}$ is the $i$-th initial randomized filter before training. The learned filters, which for instance can act as edge detectors, are marked in bold.

The difference between the results shown in Subfigures 2a and 2b is that the latter experiment was run on 10 times as many data samples. Regardless, the filters shown in Subfigure 2b are similar to those in Subfigure 2a, demonstrating that the filters can be stable throughout a high number of iterations. Both experiments used the same random generator seed. Additional patterns were found in the results shown in Subfigures 2c and 2d by lowering either the Gaussian width $\sigma$ or the inhibition level $\lambda$, respectively. Subfigures 2c and 2d contain 11 and 12 learned filters, respectively, compared to 8 filters in Subfigure 2b, with the same number of samples processed. Reducing the Gaussian widths can, however, lead to fewer identified patterns, since the pull toward some data samples may be insufficient. In addition, if the inhibition level is too low, filters can collapse into each other, forming several identical Gaussian centers. For example, 2 equal Gaussian centers $\mu_4 = \mu_{13} \approx (0, \ldots, 0)$ were found when the experiment shown in Subfigure 2d was run with the inhibition level lowered to $\lambda = 10^{-1}$.

![5_image_0.png](5_image_0.png)

Figure 2: The resulting centers or filters from applying the presented learning rule (2a to 2g) with $K = 16$ neurons, $N$ $5 \times 5$ pixel patches uniformly sampled from the MNIST training dataset, Gaussian widths $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$, inhibition level $\lambda$, and learning rate $\eta_\mu = 10^{-1}$. The learned filters can for instance act as edge detectors, while the remaining filters were pushed outside of the input domain and had a high cosine similarity to the initial filter values prior to training. The filters are enumerated left to right, top to bottom.

In the result shown in Subfigure 2e, the learned filters remain approximately the same as in Subfigure 2a, despite the fact that the learned filter $\mu_5$ was removed prior to training on an additional $N = 3 \times 10^6$ data samples. The pattern learned by filter $\mu_5$ in Subfigure 2a was then relearned by filter $\mu_8$ in Subfigure 2e. This result demonstrates potential stability of the learning rule, as a pattern can be relearned after disruption while the other learned filters remain largely unaltered. Subfigure 2f shows the result of training solely on images depicting the number 1, and this model was subsequently used as a starting point to produce the result shown in Subfigure 2g, where only images corresponding to the number 2 were used during training. Even though there was a distribution change of the inputs, the three filters found in Subfigure 2f are largely similar to the corresponding filters in Subfigure 2g, while additional filters were learned as well. The additional filters mainly represent patterns found exclusively in the latter dataset. For comparison, Subfigure 2h shows the resulting filters from the k-means algorithm (Lloyd, 1982) with the cluster count set to 16. The proposed learning rule will most likely not learn as many filters from the MNIST dataset, given the relatively small pixel patch size, without setting the Gaussian width $\sigma$ or inhibition level $\lambda$ too low.

![6_image_0.png](6_image_0.png)

Figure 3: The results of employing the learning rule on larger image patches. In both experiments, the number of samples was $N = 10^7$, the Gaussian widths were set to $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$, and the learning rate was equal to $\eta_\mu = 10^{-1}$. More rounded filters were learned compared to those in Figure 2, but the Gaussian widths had to be increased in order to better capture the patterns in the larger input domain.

## 4.1.1 Larger Pixel Patches

Experiment results with $N = 10^7$ samples of $D = 9 \times 9$ and $D = 13 \times 13$ pixel patches are shown in Figure 3.
Similar to Figure 2, the Gaussian widths were set to $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$. More rounded filters were learned, especially those shown in Subfigure 3b, which can be used to detect parts of particularly the handwritten digits 0, 2, 3, 5, 6, 8, and 9. As the number of input dimensions was increased, leading to a larger input domain, the inhibition level was decreased and the Gaussian widths were increased to better capture the patterns of the input samples. To make the results more comparable with, for instance, the results in Subfigure 2d, the width $\sigma$ was calculated such that two neurons placed at opposite positions of the input domains had equal repulsion, measured by the $\ell_2$-norm normalized by the number of dimensions, independent of input samples:

$$\frac{1}{D}\|\Delta\mu_{1}\|_{2}=\frac{1}{\tilde{D}}\|\Delta\tilde{\mu}_{1}\|_{2}$$

$$\frac{1}{D}\left\|\frac{\lambda\mu_{1}}{\sigma^{2}}e^{-\|\mu_{1}\|_{2}^{2}/\sigma}\right\|_{2}=\frac{1}{\tilde{D}}\left\|\frac{\tilde{\lambda}\tilde{\mu}_{1}}{\tilde{\sigma}^{2}}e^{-\|\tilde{\mu}_{1}\|_{2}^{2}/\tilde{\sigma}}\right\|_{2}\quad\mathrm{s.t.}\quad\frac{\partial}{\partial\sigma}\,\frac{\lambda}{\sqrt{D}\,\sigma^{2}}e^{-D/\sigma}>0$$

$$\frac{\lambda}{\sqrt{D}\,\sigma^{2}}e^{-D/\sigma}=\frac{\tilde{\lambda}}{\sqrt{\tilde{D}}\,\tilde{\sigma}^{2}}e^{-\tilde{D}/\tilde{\sigma}}\tag{3}$$

where $K = 2$, $\mu_1 = (1, \ldots, 1)$, $\mu_2 = (0, \ldots, 0)$, $\mu_i \in \mathbb{R}^D$, $f_1(\mathbf{x}) \to 0\ \forall \mathbf{x} \in \mathbb{R}^D$, $\tilde{\mu}_1 = (1, \ldots, 1)$, $\tilde{\mu}_2 = (0, \ldots, 0)$, $\tilde{\mu}_i \in \mathbb{R}^{\tilde{D}}$, and $\tilde{f}_1(\tilde{\mathbf{x}}) \to 0\ \forall \tilde{\mathbf{x}} \in \mathbb{R}^{\tilde{D}}$. The tilde symbol denotes the vectors and scalars used in this case to produce the result presented in Subfigure 2d, and the learning rate $\eta_\mu$ was equal in both $\Delta\mu_1$ and $\Delta\tilde{\mu}_1$. From Equation 3, with $\tilde{D} = 5 \times 5$, $\tilde{\sigma} = 1$, and $\tilde{\lambda} = \frac{1}{9}$, the neuron widths for computing the results shown in Subfigures 3a and 3b were approximated to be $\sigma = 4.14$ and $\sigma = 9.91$, respectively. The inhibition level was made slightly different in order to find roughly the same number of filters, where $\lambda = \frac{3}{2} \times 10^{-2}$ was used to process $D = 9 \times 9$ pixel patches and $\lambda = 10^{-2}$ to process $D = 13 \times 13$ pixel patches.
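The matching condition in Equation 3 can be solved numerically on the increasing branch (where $\sigma < D/2$); a minimal sketch follows. The function name is illustrative, and the reference values assume the configuration behind Subfigure 2d stated above.

```python
import numpy as np
from scipy.optimize import brentq

def matched_width(D, lam, D_ref=25, sigma_ref=1.0, lam_ref=1/9):
    """Solve Eq. (3) for sigma: equal normalized repulsion between two
    opposite-corner neurons in a D-dimensional input domain, relative
    to a reference configuration."""
    def g(sigma, d, l):
        return l / (np.sqrt(d) * sigma ** 2) * np.exp(-d / sigma)
    target = g(sigma_ref, D_ref, lam_ref)
    # g is increasing in sigma for sigma < D / 2, per the s.t. constraint.
    return brentq(lambda s: g(s, D, lam) - target, 1e-3, D / 2)

# matched_width(81, 1.5e-2) -> ~4.14 and matched_width(169, 1e-2) -> ~9.91,
# consistent with the widths reported above.
```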
## 4.2 CIFAR-10

The learning rule was also applied on the CIFAR-10 dataset (Krizhevsky, 2009), where $N$ uniformly random $D = 5 \times 5 \times 3$ pixel patches were sampled from the training set containing $5 \times 10^4$ natural images, evenly portraying airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Figure 4 shows the resulting filters of $K = 50$ clusters with Gaussian widths $\sigma_i = \sigma = 1.75\ \forall i \in \{1, \ldots, K\}$, inhibition level $\lambda = 10^{-2}$, and learning rate $\eta_\mu = 10^{-1}$.

![7_image_0.png](7_image_0.png)

Figure 4: The resulting filters from training on $D = 5 \times 5 \times 3$ pixel patches from the CIFAR-10 dataset containing natural images, with $N = 10^7$ samples, Gaussian widths $\sigma_i = 1.75\ \forall i \in \{1, \ldots, K\}$, inhibition level $\lambda = 10^{-2}$, and learning rate $\eta_\mu = 10^{-1}$. Some of the edge detecting filters are color independent, while some filters can be used to detect specific color patches such as sky and grass.

Some of the learned filters can be used to detect specific colored regions such as sky and grass, while other filters can act as edge detectors. Some of the edge detecting filters are gray and thus color independent. All filters except one learned patterns from the pixel patch samples.

## 4.3 Neuron Width Update

Both Equations 1 and 2 were employed to produce the results shown in Figure 5, using $N = 10^7$ data samples and learning rates set to $\eta_\mu = \eta_\sigma = 10^{-1}$. In Subfigures 5c to 5f, the data sample mean $\mathbf{x} = \mu_x = \frac{1}{9}\sum_{i=1}^{9} s_i$ and width $\sigma_x = 2\,\mathrm{Var}[s_{1:9}] = \frac{2}{9}\sum_{i=1}^{9}(s_i - \mu_x)^2$ were measured from 9 neighboring $5 \times 5$ image patches $s_{1:9}$ after adding normally distributed noise with mean 0 and standard deviation 0.1 (Subfigures 5c to 5e) or 0.15 (Subfigure 5f) to each pixel value. Neighboring image patches can share similar patterns, and noise was added as a constraint to keep the Gaussian widths from collapsing toward 0, for example when sampling homogeneous image regions. The neuron means $\mu_{1:K}$ and widths $\sigma_{1:K}$ were updated in alternate succession, that is, for each data sample, the neuron means were first updated using Equation 1, and then Equation 2 was used to update the neuron widths. Subfigures 5c to 5e show similar $K = 9$ filters even though the initial Gaussian widths $\sigma_i^{(init)} = \sigma^{(init)}\ \forall i \in \{1, \ldots, K\}$ were set to $\sigma^{(init)} = 6$, $\sigma^{(init)} = 10$, and $\sigma^{(init)} = 14$, respectively. For comparison, Subfigures 5a and 5b show the resulting filters without neuron width updates, where the Gaussian widths $\sigma_i = \sigma\ \forall i \in \{1, \ldots, K\}$ were set to, in the same order, $\sigma = 4$ and $\sigma = 14$. The inhibition level was $\lambda = \frac{1}{2}$ for all Subfigures 5a to 5e. The final Gaussian widths of the filters shown in Subfigure 5d were:

$$\sigma_{1:K} = (10,\ 9.5,\ 4.6,\ 0.6,\ 5,\ 10,\ 10,\ 4.6,\ 10)$$

and the learned filters can in this case be distinguished from the filters that have not learned any patterns by comparing the final widths to the initial widths of the Gaussians. Lastly, Subfigure 5f shows the result of applying the learning rule and width update method on data samples from the CIFAR-10 dataset with initial Gaussian width $\sigma^{(init)} = 10$ and inhibition level $\lambda = 10^{-2}$. Almost the same number of filters were learned compared to those learned in Figure 4, although the initial widths were larger and each data sample was extracted from 9 neighboring $5 \times 5 \times 3$ pixel patches. By using Equation 2, however, the Gaussian widths of the learned filters were reduced and thus made room for additional Gaussians within the data domain.

![8_image_0.png](8_image_0.png)

Figure 5: The results of employing both Equations 1 and 2 on data sample measures from the MNIST and CIFAR-10 datasets. Subfigures 5c to 5e show similar results even though the initial neuron widths were different. On the other hand, 3 and 2 filters were learned when the neuron widths were fixed to $\sigma = 6$ and $\sigma = 14$, as shown in Subfigures 5a and 5b, respectively. The result from Figure 4 was reproduced in Subfigure 5f, although with varying Gaussian widths. All results were produced using $N = 10^7$ data samples and learning rates set to $\eta_\mu = \eta_\sigma = 10^{-1}$.

## 5 Conclusion And Future Work

A novel unsupervised learning rule based on Gaussian functions is presented that can perform online clustering without needing to specify the number of clusters prior to training. The local learning rule is arguably more biologically plausible compared to model optimization through backpropagation, and the results demonstrate stability in the learned parameters during training. Furthermore, an update method for the widths of the Gaussian functions is presented, which can reduce the dependence on finely tuned hyperparameters.
Opportunities for future work include investigations of the presented methods in the following directions: (1) improve the computation speed through CPU or GPU parallelization, (2) use anisotropic Gaussian functions, (3) train with input data other than image patches, (4) layer-wise optimization of deeper model architectures, (5) compare trained model susceptibility to adversarial attacks against models optimized through backpropagation, (6) sample mean and variance from subsequent video frames instead of neighboring image patches, and (7) repurposing the Gaussian functions as Gaussian dendrites, and linearly combine similar dendritic functions into artificial neurons to achieve greater representational power of the artificial neurons. ## References Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, and Zhouhan Lin. Towards biologically plausible deep learning. arXiv preprint arXiv:1502.04156 [cs.LG], 2016. URL https://arxiv.org/abs/ 1502.04156. David Beniaguev, Idan Segev, and Michael London. Single cortical neurons as deep artificial neural networks. Neuron, 109(17):2727–2739, 2021. doi: 10.1016/j.neuron.2021.07.002. Gail A Carpenter and Stephen Grossberg. A massively parallel architecture for a self-organizing neural pattern recognition machine. *Computer vision, graphics, and image processing*, 37(1):54–115, 1987. doi: 10.1016/s0734-189x(87)80014-2. Pierre Comon. Independent component analysis, a new concept? *Signal Processing*, 36(3):287–314, 1994. ISSN 0165-1684. doi: https://doi.org/10.1016/0165-1684(94)90029-9. URL https://www. sciencedirect.com/science/article/pii/0165168494900299. Higher Order Statistics. Gustavo Deco and Dragan Obradovic. Decorrelated hebbian learning for clustering and function approximation. *Neural Computation*, 7(2):338–348, 1995. doi: 10.1162/neco.1995.7.2.338. Chris Ding, Xiaofeng He, and Horst D. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In *Proceedings of the 2005 SIAM International Conference on Data Mining (SDM)*, pp. 606–610, 2005. doi: 10.1137/1.9781611972757.70. URL https://epubs.siam.org/doi/abs/10.1137/1. 9781611972757.70. K-L Du. Clustering: A neural network approach. *Neural networks*, 23(1):89–107, 2010. doi: 10.1016/j. neunet.2009.08.007. Bernd Fritzke et al. A growing neural gas network learns topologies. *Advances in neural information* processing systems, 7:625–632, 1995. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey Gordon, David Dunson, and Miroslav Dudík (eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of *Proceedings of Machine Learning Research*, pp. 315– 323, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR. URL https://proceedings.mlr.press/v15/ glorot11a.html. Leopold Grinberg, John Hopfield, and Dmitry Krotov. Local unsupervised learning for image analysis. arXiv preprint arXiv:1908.08993 [cs.CV], 2019. URL https://arxiv.org/abs/1908.08993. Teuvo Kohonen. Self-organized formation of topologically correct feature maps. *Biological cybernetics*, 43 (1):59–69, 1982. doi: 10.1007/bf00337288. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Canadian Institute for Advanced Research, 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25:1097–1105, 2012. Dmitry Krotov and John J Hopfield. 
Unsupervised learning by competing hidden units. Proceedings of the National Academy of Sciences, 116(16):7723–7731, 2019. doi: 10.1073/pnas.1820458116. Da Kuang, Chris Ding, and Haesun Park. Symmetric nonnegative matrix factorization for graph clustering. In *Proceedings of the 2012 SIAM International Conference on Data Mining (SDM)*, pp. 106–117, 2012. doi: 10.1137/1.9781611972825.10. URL https://epubs.siam.org/doi/abs/10.1137/1.9781611972825.10. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. doi: 10.1109/5.726791. S. Lloyd. Least squares quantization in pcm. *IEEE Transactions on Information Theory*, 28(2):129–137, 1982. doi: 10.1109/TIT.1982.1056489. James MacQueen et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pp. 281–297. Oakland, CA, USA, 1967. Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109–165. Elsevier, 1989. doi: 10.1016/s0079-7421(08)60536-8. Erkki Oja. Simplified neuron model as a principal component analyzer. *Journal of mathematical biology*, 15 (3):267–273, 1982. doi: 10.1007/bf00275687. Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? *Vision Research*, 37(23):3311–3325, 1997. ISSN 0042-6989. doi: https: //doi.org/10.1016/S0042-6989(97)00169-7. URL https://www.sciencedirect.com/science/article/ pii/S0042698997001697. Cengiz Pehlevan and Dmitri B Chklovskii. A hebbian/anti-hebbian network derived from online non-negative matrix factorization can cluster and discover sparse features. In 2014 48th Asilomar Conference on Signals, Systems and Computers, pp. 769–775. IEEE, 2014. doi: 10.1109/acssc.2014.7094553. Roger Ratcliff. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions. *Psychological review*, 97(2):285–308, 1990. doi: 10.1037/0033-295x.97.2.285. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 [cs.CV], 2014. URL https://arxiv.org/abs/1312.6199. Jules Talloen, Joni Dambre, and Alexander Vandesompele. Pytorch-hebbian: facilitating local learning in a deep learning framework. arXiv preprint arXiv:2102.00428 [cs.LG], 2021. URL https://arxiv.org/abs/ 2102.00428. Bo Wang, Wei Ke, Jing Guang, Guang Chen, Luping Yin, Suixin Deng, Quansheng He, Yaping Liu, Ting He, Rui Zheng, Yanbo Jiang, Xiaoxue Zhang, Tianfu Li, Guoming Luan, Haidong D. Lu, Mingsha Zhang, Xiaohui Zhang, and Yousheng Shu. Firing frequency maxima of fast-spiking neurons in human, monkey, and mouse neocortex. *Frontiers in Cellular Neuroscience*, 10:239, 2016. ISSN 1662-5102. doi: 10.3389/ fncel.2016.00239. URL https://www.frontiersin.org/article/10.3389/fncel.2016.00239. Ray H White. Competitive hebbian learning: Algorithm and demonstrations. *Neural Networks*, 5(2):261–275, 1992. doi: 10.1016/s0893-6080(05)80024-3.
Review 1: Summary: The paper presents a novel clustering algorithm, based on Gaussian neurons, by introducing a local learning rule. As opposed to previous methods, this clustering algorithm does not require the number of clusters to be specified and can instead use simply an upper limit on the number of clusters. The paper then analyzes how such clustering looks on MNIST and CIFAR-10, examining the resulting filters. Strengths and Weaknesses: Strengths: Novelty -- new clustering algorithm. Replacing the number of clusters with an upper bound results in a more generalized algorithm. Weaknesses: Motivation -- the paper claims that "more biologically plausible learning methods may be needed to resolve the weaknesses of backpropagation trained models such as catastrophic forgetting and adversarial attacks". Yet, I couldn't find in the paper any claims on why the suggested algorithm is more biologically plausible and how it handles weaknesses such as catastrophic forgetting and adversarial attacks. Moreover, it is unclear from the paper what the advantages of the proposed algorithm over existing methods are. Experimental design -- the paper shows visualizations of the learned filters when training the suggested model on MNIST and CIFAR-10. It is unclear to me what could be learned from these results -- are the results of the clustering algorithm on the suggested data good or bad? What metrics did you use to determine this? How did the proposed algorithm perform in comparison to other clustering algorithms? For which types of data is it more suited? At the current state, the results feel a bit too anecdotal, and I'm struggling to see what the take-home messages for other problems are. Requested Changes: Writing: the introduction section is a bit disorganized -- the separation between past results and the current work is a bit unclear. A possible solution is to divide this section into 2 parts, "previous work" and "introduction", clearly stating what was known from prior art and what is new in this work. This point is not as critical to acceptance as the others, but can significantly strengthen the work. Motivation: The claims made about the motivation of the suggested algorithm (biologically plausible / avoid catastrophic forgetting / withstand adversarial attacks) should either be removed, or better explained in the text. For each such claim, I would expect some empirical/theoretical evidence that shows why the suggested algorithm is better than the existing methods. This point is critical. Experimental design - This is the most critical point. The results currently show visualizations of filters when running the suggested algorithm on MNIST and CIFAR-10. However, the properties of the algorithm (as opposed to other clustering algorithms) are unclear. I would expect a more comparative approach: how does the suggested algorithm perform as opposed to other clustering algorithms? Which data is it more suited to work on? Why are the shown filters good? What could be expected if I run the algorithm on new data? Broader Impact Concerns: Not relevant for this submission ================================================== Review 2: Summary: This paper proposes a local learning rule that utilizes neurons expressed with Gaussian functions for performing online clustering without a prior number of clusters fixed. The means of Gaussian neurons are updated with an attraction term toward the current sample, and an inhibition term that ensures reduced overlap between the Gaussian neurons in the same layer.
It also presents an update method for the variances of Gaussian neurons, to obtain a more robust learning rule whose results are less affected by the choice of initial neuron widths. The mechanisms of the proposed learning rule are validated by well-designed experiments on MNIST and CIFAR-10 datasets. Strengths and Weaknesses: My evaluation focuses on: 1) whether the claims made in the submission are supported by accurate, convincing, and clear evidence; 2) whether the audience would be interested in the findings of this paper. I cannot judge well the novelty and significance of this paper compared to previous work, since I am not an expert in biological neural networks or online clustering. ##Strengths: + This paper provides some interesting findings. For example, the proposed unsupervised methods can learn the (low-level) patterns well, like the supervised neural networks do. + This paper provides good evidence to support most of its claims, particularly using well-designed toy examples/experiments to clarify the motivations and insights behind them. + The presentation of this paper is very clear, and I enjoyed reading it. ##Weaknesses: + My main concern for this paper is that its contributions are not well matched with the areas of (biological or artificial) neural networks that this paper aims to address. This paper starts its motivation from the problems of neural networks trained with back-propagation (e.g., susceptibility to catastrophic forgetting and adversarial attacks) and terms the proposed methods Gaussian `neurons'. However, the current work is only for a one-layer model, not for a multiple-layer model (that is, stacking multiple layers of Gaussian neurons with a training algorithm provided). Without investigating a multiple-layer model, this paper just proposes an online clustering method, whose related work and the corresponding baselines (if needed for empirical comparison) are totally different from the ones described in this paper; it should include more discussion of the widely investigated online clustering problem. Even though I noted this paper provides a list of further work (including layer-wise optimization of deeper model architectures), it should at least include how to adapt the proposed method to multiple-layer model architectures in order to obtain an accept, in my view. + I think this paper should also provide some experiments to show how well the proposed unsupervised learning rules perform relative to previous existing work, either using unsupervised evaluation or transferring to supervised learning. Other minors: (1) It is not clear in the sentence "The first layer will then represent hyperplanes in the input domain, and the second layer may combine these hyperplanes to convex regions." How does the second layer combine these hyperplanes into convex regions? (2) "a inhibition" should be "an inhibition" Requested Changes: (1) I personally think this paper can be accepted if it provides further statements for multiple-layer networks/systems with empirical evidence, like the current results for the one-layer model presented in this paper. (2) It is not easy to understand this sentence, "Interestingly, setting λ ≥1/2 would nullify all attraction toward the given input if a cluster center already has the same position as the input and the Gaussian widths are equal." It is better to provide more evidence. That is, if this argument is shown in previous related work, this paper should cite it. Otherwise, this paper should provide more clarification for this sentence.
E.g., combine it with the update rule (Eqn. 1) to illustrate it, and even provide a proof. Broader Impact Concerns: No impact concerns. ================================================== Review 3: Summary: The authors discuss an algorithm for online clustering using Gaussian neurons and test their algorithm on MNIST and CIFAR-10. Strengths and Weaknesses: Unfortunately the goal of the paper is not clear and the statements are often inaccurate or unjustified. The authors also do not cite a very relevant prior work performing arguably the same type of clustering in an online setting [1]. Since this paper, published in 2017, is written with very similar functional goals (implementing k-means in an online and local way with Gaussian neurons), it is crucial for the authors to cite and differentiate their work from [1] if they want to get published. Here are some of the inaccurate/unjustified/confusing remarks: 1. Upper limit instead of fixed cluster count. Usually most clustering algorithms (e.g. k-means) work with an upper limit of the cluster count and can end up with empty clusters. This is in general treated as an undesirable feature since these algorithms do not claim that the number of non-empty clusters that are found is actually the correct number of clusters. I do not see how the proposed method is different from other clustering algorithms in this regard. 2. Cited methods use linear combinations of input variables. This is very confusing. The authors' proposed method is based on a linear distance to some centroid (just like k-means), so it seems like a linear clustering algorithm to me. In comparison, the online clustering algorithm of the cited Pehlevan & Chklovskii (2014) uses non-negative neurons and is truly non-linear. 3. Regularizing output activations is not viable in an online setting. Why? This can be done using stochastic gradient descent. 4. Arguments about distribution shift and forgetting. The authors provide no evidence regarding this. The hand-wavy arguments are not very convincing. 5. Local learning rules. In Eq. 1, the update rule for mu_i needs the centroid of mu_j. How is this a local learning rule? Perhaps the authors should specify the shape of the network that they claim implements this learning rule locally. [1] C. Pehlevan, A. Genkin and D. B. Chklovskii, "A clustering neural network model of insect olfaction," 2017 51st Asilomar Conference on Signals, Systems, and Computers, 2017, pp. 593-600, doi: 10.1109/ACSSC.2017.8335410. Requested Changes: I would suggest that the authors consider the take-away points of this paper and do a major revision focusing on these points. If the point is that the proposed algorithm improves performance under distribution shift / catastrophic forgetting, they should focus on this and compare with other algorithms aimed at this. If the point is that this is an efficient local clustering algorithm, the authors should focus on the efficiency and the shape of the network implementing this. Finally, if the point is that this is a local algorithm that can be biologically plausible and therefore relevant for the brain, the authors should be explicit about this. In all cases, the paper should cite the relevant literature and perform comparison with prior work. Broader Impact Concerns: Not applicable. ================================================== Metareview: Recommendation: Reject Comment: This paper presents an online clustering algorithm that uses Gaussian neurons with a local learning rule.
The reviewers noted a number of weaknesses in the submission, including unclear motivation, poor experimental design, inadequate comparisons to prior work, and unclear messaging. Some of these weaknesses were discussed and at least partially addressed in the authors' responses and in the updated manuscript. For example, the motivation and discussion of distribution shift and catastrophic forgetting were clarified so that the abstract and introduction more directly reflect the contributions of the paper. Still, a number of the reviewers' concerns persist after the revisions and discussion. For example, there are still concerns regarding the experimental design and comparisons to prior work, and the high-level takeaways are not clearly conveyed through the overall narrative nor adequately supported by evidence. As such, the reviewers and I are all in agreement that at present this paper is not suitable for publication at TMLR. ==================================================
# Corrective Machine Unlearning

Anonymous authors

Paper under double-blind review

## Abstract

Machine Learning models increasingly face data integrity challenges due to the use of large-scale training datasets drawn from the internet. We study what model developers can do if they detect that some data was manipulated or incorrect. Such manipulated data can cause adverse effects including vulnerability to backdoored samples, systemic biases, and reduced accuracy on certain input domains. Realistically, all manipulated training samples cannot be identified, and only a small, representative subset of the affected data can be flagged. We formalize "Corrective Machine Unlearning" as the problem of mitigating the impact of data affected by unknown manipulations on a trained model, only having identified a subset of the corrupted data. We demonstrate that the problem of corrective unlearning has significantly different requirements from traditional privacy-oriented unlearning. We find most existing unlearning methods, including the gold-standard retraining-from-scratch, require most of the manipulated data to be identified for effective corrective unlearning. However, one approach, Selective Synaptic Dampening, achieves limited success in one setting for unlearning adverse effects with just a small portion of the manipulated samples, showing the tractability of this setting. We hope our work spurs research towards developing better methods for corrective unlearning and offers practitioners a new strategy to handle data integrity challenges arising from web-scale training.

## 1 Introduction

Machine Learning models are increasingly trained on large and diverse datasets, with contributions from numerous users and organizations across millions of web-pages (Schuhmann et al., 2022; Gao et al., 2020). However, data integrity issues significantly impact model performance by introducing systemic biases (Prabhu & Birhane, 2021; Konstantinov & Lampert, 2022) and vulnerabilities (Barreno et al., 2006b; Sanyal et al., 2021; Paleka & Sanyal, 2023). For instance, Carlini et al. (2023) showed that a small manipulated subset of web data can lead to large-scale model poisoning, demonstrating the vulnerability of models to adversarial tactics. A critical real-world obstacle to eliminating this issue is that model developers can only hope to identify a fraction of this manipulated data due to the size of such large-scale datasets. One practical way for model developers to approach this problem is by using methods for monitoring the integrity of data pipelines (Breck et al., 2019; Wang et al., 2019; Northcutt et al., 2021b) on randomly selected subsets of the whole data. This will identify a small, representative subset of the corrupted samples. Having identified this small subset, the primary goal is to "correct" the negative impacts of corrupted data and their detrimental effect on the model. Due to the high costs already incurred in training and the need to continue uninterrupted service, model developers may wish to update models to remove the influence of the corrupted data instead of abruptly stopping the model's use. We term this problem of removing the influence of manipulated data from a trained model, having identified only a small fraction of it, *Corrective Machine Unlearning*. The goal of Corrective Machine Unlearning is to efficiently eliminate the detrimental effects of the corrupted samples even when the precise nature and extent of the manipulation are unknown.
In this work, the manipulation is characterized by a small representative subset of the corrupted samples which created the negative impact. Since the manipulation type is assumed to be unknown, this implies that corrective unlearning methods should work across different manipulation types. While similar in some aspects, Corrective Unlearning has different underlying requirements from prior work on unlearning.

![1_image_0.png](1_image_0.png)

Figure 1: Traditionally, retraining after removing identified data is considered a gold standard in unlearning. However, since developers may not identify all the manipulated data for corrective unlearning, retraining-from-scratch on remaining data can continue to perpetuate the adverse effects of the manipulation. Ideally, corrective unlearning procedures should correct model outputs on the affected domain with access to only a small but representative subset of the manipulated data.

Traditional unlearning algorithms (see Nguyen et al. (2022) for a survey) are motivated by the need to cater to user data deletion requests in light of privacy regulations (Council of European Union, 2018; California State Legislature, 2018; Parliament of Canada, 2018). In contrast, Corrective Unlearning algorithms do not need to obtain privacy guarantees on the "unlearned" data or have access to all the data to be deleted. Corrective unlearning algorithms *must "correct" the effect of the corrupted training data while only having identified a small subset of said data*. We illustrate in Figure 1 that traditional gold standards in the privacy-oriented unlearning literature are not sufficient for corrective unlearning. In addition to highlighting the differences of the corrective unlearning setting from traditional unlearning literature, we benchmark popular unlearning procedures (Kurmanji et al., 2023; Goel et al., 2023; Chundawat et al., 2023b; Foster et al., 2023) in this setting. While there are myriad possible ways of constructing manipulations, we choose two manipulations that represent distinct kinds of real-world threats. First, we study a classic poisoning attack (Gu et al., 2019), where an adversary can manipulate both features and labels, making the model misclassify samples that contain a specific trigger pattern that is hard for model developers to notice. Second, we study a label-only manipulation called the Interclass Confusion test (Goel et al., 2023), where the adversary incorrectly labels samples between two classes, thereby entangling the model's representations. Such mislabeling can cause systemic biases in model outputs (Prabhu & Birhane, 2021). Despite being simple manipulations, we find that no unlearning method that we benchmark is able to perform successfully in both scenarios simultaneously. Retraining after removing identified deletion data has traditionally been considered an inefficient gold standard in unlearning literature, called "Exact Unlearning" (EU). Most works aim to efficiently approximate it, including popular methods like Scrub (Kurmanji et al., 2023) and CF (Goel et al., 2023). However, our experiments show that even knowing 80% of the manipulated data is not enough for EU, and consequently for other unlearning methods, to remove the adverse effects introduced by manipulating just 1% of the whole training data.
Despite this, we find evidence that corrective unlearning with only a partial manipulation set known is tractable, as the Selective Synaptic Dampening (SSD) (Foster et al., 2023) algorithm is able to remove the effect of BadNet poisoning with just 10% of the manipulated data being identified. There is much scope for progress though, as SSD fails completely in the Interclass Confusion setting, and also leads to a significant drop in overall test accuracy when unlearning poisoned data. Given the failure of these state-of-the-art unlearning methods (including the gold standard) at correcting these two simple manipulations, we did not include more complex manipulations. We hope our work highlights the need for more effective and rigorous study of unlearning procedures tailored to removing the influence of manipulated data.

## 2 Problem Formulation

In this section, we formalize the requirements of corrective unlearning, highlight the primary challenges, and detail key differences from traditional privacy-oriented unlearning.

## 2.1 Motivation

We first motivate the setting from the adversary's and the model developer's perspectives.

Adversary's Perspective: The adversary can manipulate any portion of the input data, including labels in supervised learning scenarios. For example, in poisoning attacks, a trigger is inserted into each manipulated data sample, altering its label to an incorrect one (Han et al., 2022). The goal of the adversary is to inject certain harmful properties into the model learned using this data. While a poisoning adversary injects the backdoor trigger with malicious intent, another adversary may cause systematic mislabelings, resulting in a biased model. We aim to address both types of adversaries.

Developer's Perspective: Model developers identify some of the compromised data sources after having already trained a model, either through internal monitoring, defenses, or external information like tip-offs. Since the adversary can apply arbitrary manipulations, the exact manipulation type is unknown to the model developer a priori. As such, *the corrupted data* characterizes the manipulation performed by the adversary. Other ways of characterization, which we do not consider in this work but may be interesting for future work, include using feedback from model users such as incorrect predictions after deployment or threats identified via red-teaming. While detecting all manipulated data is challenging, removing its effect given a small subset may be feasible, if the subset is representative of the broader manipulated set. This often occurs in practice when model developers perform expensive but rigorous monitoring on a uniformly subsampled subset of the larger dataset. Since this subset is uniformly subsampled, the identified corrupt samples are also a uniformly subsampled subset of the entire set of corrupted samples. The goal of model developers is to remove the adverse effects of the manipulated data from the original model using this small identified representative subset. We formalize this in the next subsection.

## 2.2 Objective Of Corrective Unlearning

Let X be the data domain, Y the label space, and P the distribution on X × Y. Let Str ⊂ X be the training data, and Sm ⊂ Str the training samples manipulated by the adversary, either by modifying features, associated labels, or both. Let Dm ⊂ X be the domain where performance is adversely affected when learning using Sm. For example, in poisoning, Dm contains samples with the poison trigger.
In Interclass Confusion, Dm consists of samples from the two affected classes. Clearly, Dm also contains Sm. Finally, let A be the learning algorithm, and Mo = A(Str) be the trained model. A corrective unlearning algorithm Ucorr "corrects" the original model Mo by removing the influence of Sm. As mentioned before, we expect only a subset of samples to be identified as manipulated, which we denote as the deletion set Sf ⊆ Sm. Thus, Ucorr takes as inputs Mo, Str, Sf and yields an *unlearned* model Mu. Let α = |Sf|/|Sm| be the fraction of the identified manipulated set used for unlearning. For the rest of this work, we assume that Sf is a uniformly sampled subset of Sm. The performance of a corrective unlearning algorithm is measured by how well it optimizes the following two objectives for different values of α. The faster these two metrics increase as a function of α, the better the performance of the unlearning algorithm.

1. **Unlearning**: The primary goal is to unlearn the adverse effect due to the manipulated data Sm. We operationalize this as improving the *corrected accuracy* on Dm while only knowing an α fraction Sf of the full manipulated set:

Acccorr(Ucorr, α) = E(x,y)∼P [I{Mu(x) = y} | x ∈ Dm] (1)

where Mu = Ucorr(Mo, Str, Sf) and α = |Sf|/|Sm|.

2. **Utility**: An ideal unlearning algorithm should not harm performance on unrelated samples, outside Dm. Thus, the second goal of Ucorr is to maintain overall accuracy (if not improve it). Using the same notation as above, we operationalize this as the retain accuracy on X \ Dm:

Accretain(Ucorr, α) = E(x,y)∼P [I{Mu(x) = y} | x ∉ Dm] (2)

Intuitively, the larger the value of α, the higher Acccorr should be. When the whole manipulated set is known for deletion (α = 1), retraining without it gives the best outcome. We dub this U∗corr = A(Str \ Sm). The performance of an unlearning algorithm Ucorr can be measured by how large α needs to be for Acccorr(Ucorr, α) to be within some constant fraction (say 0.9) of Acccorr(U∗corr, 1). We visualize this in our experiments in Section 3.3 and compare different algorithms based on it. Note that Accretain is relatively less sensitive to α due to the definition of Dm. However, it may not always be possible to accurately identify the entire affected domain Dm. Identifying Dm can be particularly difficult when the adversary wants to obscure its true target, as in targeted poisoning attacks (Shafahi et al., 2018), or when it wants to affect the accuracy over the entire domain, such as in indiscriminate poisoning attacks (Barreno et al., 2006a; Biggio et al., 2012). Therefore, formulating these objectives for different settings may require more problem-specific attention. Nevertheless, we set the gold standard for Accretain to also be Accretain(U∗corr, 1), and in our experiments, this metric turns out to be largely independent of α.
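Both objectives can be estimated directly on held-out data. The following is a minimal sketch of such an estimate; the array names and the construction of the Dm membership mask are illustrative assumptions, not code from this paper:

```python
import numpy as np

def corrective_and_retain_accuracy(preds, labels, in_dm):
    """Estimate Acc_corr (Eqn. 1) and Acc_retain (Eqn. 2) on held-out data.

    preds, labels : integer arrays of predicted / true labels
    in_dm         : boolean mask marking samples in the affected domain D_m,
                    e.g. test images carrying the poison trigger, or samples
                    from the two confused classes
    """
    correct = (preds == labels)
    acc_corr = correct[in_dm].mean()     # accuracy on the affected domain
    acc_retain = correct[~in_dm].mean()  # accuracy on the rest of the domain
    return acc_corr, acc_retain

# Toy usage: 10 held-out samples, the first 4 of which lie in D_m.
preds = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
labels = np.array([0, 1, 2, 0, 0, 2, 0, 1, 1, 0])
in_dm = np.array([True] * 4 + [False] * 6)
print(corrective_and_retain_accuracy(preds, labels, in_dm))  # (1.0, 0.666...)
```

Sweeping this estimate over values of α then produces exactly the trade-off curves compared in Section 3.3.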
## 2.3 Differences From Privacy-Oriented Unlearning

In this section, we discuss the most important distinctions between Corrective Unlearning and privacy-oriented unlearning. Privacy-oriented unlearning seeks to ensure *retrain indistinguishability* (Sekhari et al., 2021): the unlearning algorithm aims to produce models indistinguishable from the models produced by the learning algorithm trained on just the retain set Str \ Sf.

Different Goals (Removal of Incorrect Training Data): The goal of privacy-oriented unlearning is to remove untampered but sensitive user data, whereas the objective of corrective unlearning is to remove the influence of manipulated samples. Implications: Removing uncorrupted but *private* samples in privacy-oriented unlearning scenarios typically degrades model performance (Golatkar et al., 2020a). This is unavoidable as there is a cost to obtain privacy. Some unlearning procedures even nudge the model output towards a uniform distribution over classes for samples in the forget set (Chundawat et al., 2023b; Li & Ghosh, 2023). However, in corrective unlearning, removing the influence of manipulated samples is expected to improve the model's performance on the affected domain Dm (i.e. Acccorr). Intuitively, this may, in turn, improve the learned representations, thereby also improving overall accuracy Accretain. The underlying cause is that corrective unlearning aims to unlearn corrupt data, whereas the focus of privacy-oriented unlearning is removing confidential but not necessarily corrupt data.

Different Gold-Standards (Retrain-from-scratch is not enough): In privacy-oriented unlearning, the data whose influence is to be removed from the model is specified by user deletion requests. However, as discussed before, when model developers need to identify compromised data, it is unrealistic to assume all of it will be found. Thus, in corrective unlearning, the retain set Str \ Sf will continue to contain manipulated data from Sm \ Sf. Implications: Retraining from scratch on Str \ Sf is the gold standard for privacy-oriented unlearning, but it is computationally expensive. Therefore, at its core, privacy-oriented unlearning is a computational problem. However, in corrective unlearning, as Str \ Sf continues to have manipulated data, unlearning procedures that solely rely on it (Schelter, 2020; Bourtoule et al., 2021; He et al., 2021; Graves et al., 2021; Goel et al., 2023) perpetuate the adverse effects of the manipulation. This necessitates a search for algorithms beyond computationally efficient approximations of *retraining from scratch*, which ceases to be a gold standard. Interestingly, this introduces a statistical dimension to the unlearning problem. One possible approach is to use the identified manipulated data to detect other manipulated data. Whether this is possible depends mainly on three parameters: the size of the identified manipulated set, the dimensionality of the data, and the complexity of detecting the manipulated data (this argument is based on classical uniform-convergence arguments). For easy-to-detect manipulations in low dimensions and for large identified sets, this problem should be easy. However, existing lower bounds in learning theory (e.g. see Theorem 3.20 in Schapire & Freund (2012)) show that this may well be impossible for small identified sets and hard-to-detect manipulations in high dimensions, which are precisely the realistic conditions we aim to tackle. For example, we show poisoning experiments where the identified subset has 500 samples for 3072-dimensional data and one of the goals of poisons is to be imperceptible. In short, this naturally leads to the following question: How can we *efficiently* remove the detrimental impacts of Sm using a representative, albeit *smaller*, subset Sf?
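To illustrate this statistical dimension with a toy (and decidedly not a method this paper proposes), one could attempt to expand the identified subset by flagging training points that lie near known-manipulated samples in some embedding space. The feature choice, distance threshold, and all names below are assumptions, and the lower bounds cited above suggest such a heuristic will often fail for hard-to-detect, high-dimensional manipulations:

```python
import numpy as np

def expand_deletion_set(features, sf_idx, radius):
    """Heuristically flag training points near identified manipulated ones.

    features : (n, d) array of per-sample embeddings (the choice of embedding,
               e.g. penultimate-layer features, is itself an assumption)
    sf_idx   : indices of the identified manipulated subset S_f
    radius   : distance threshold; easy-to-detect, low-dimensional
               manipulations may tolerate a loose radius, hard ones will not
    """
    dists = np.linalg.norm(
        features[:, None, :] - features[sf_idx][None, :, :], axis=-1)
    flagged = dists.min(axis=1) <= radius
    flagged[sf_idx] = True  # S_f itself is flagged by construction
    return np.flatnonzero(flagged)

# Toy usage on random stand-in embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
print(expand_deletion_set(feats, sf_idx=np.array([3, 7]), radius=4.0))
```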
Different Constraints (No Privacy Requirements): Finally, in the corrective unlearning context, Sf and Sm do not need to be privatized, setting it apart from privacy-oriented unlearning. Implications: Privacy-oriented unlearning is designed to meet strict privacy standards, necessitating either algorithms with theoretical privacy guarantees (Sekhari et al., 2021; Gupta et al., 2021) akin to those provided by differential privacy, or at least strong performance against privacy auditing on the data to be forgotten Sf (Golatkar et al., 2020a). This further leads to a drop in accuracy, as theoretical privacy guarantees are not always tight and lead to underestimation of actual privacy. Such evaluations are orthogonal to the goal of Corrective Unlearning, thereby necessitating the design of new evaluation strategies.

## 3 Experiments

In this work, we run experiments with two types of manipulations that can occur in real-world data collection pipelines, and test whether existing methods can unlearn their adverse effects. We first describe these manipulations, and then our dataset, model architecture, and unlearning setup.

## 3.1 Manipulation Types And Evaluation Benchmarks

As a desideratum for corrective unlearning is dealing with corrupted data agnostic to the manipulation type, a good Ucorr should correct a broad class of manipulations, including both that we test.

Evaluation of Feature-and-Label Manipulations: Poisoning. When model developers scrape data from webpages, adversaries can manipulate both the data samples and labels. This leaves models trained on this data vulnerable to backdoor attacks, where the model misbehaves in the presence of trigger features known to the adversary, but unknown to the model developers. Prior work has shown this can be realized in real-world settings such as autonomous driving (Han et al., 2022) and models trained on Wikipedia (Carlini et al., 2023). To model this setting, we use the simple BadNet poisoning attack introduced by Gu et al. (2019). We insert a trigger pattern that makes 0.3% of the pixels white at bottom-right positions in a subset of n training images, re-labeling each of these images to class zero. Here the affected domain Dm consists of all samples containing the trigger pattern. We benchmark the ability of unlearning methods to remove the effect of the backdoor after identifying some of the poisoned training data.

Evaluation of Label-only Manipulations: Interclass Confusion. When model developers outsource annotations for their training data, adversaries can only manipulate labels. Systematic mislabeling between two classes can also occur naturally due to systemic biases in the labeling process, or misinterpretation of annotation guidelines on how to distinguish the classes. While this setting offers less power to the adversary, it can still lead to strong mislabeling attacks. Lingam et al. (2024) show that for random label flipping attacks, restricting the flipping to two classes results in the largest drop in accuracy of the Bayes-optimal classifier. We thus use the Interclass Confusion (IC) test (Goel et al., 2023), in which two classes A and B are picked, half of the samples from each class are selected, and their labels are changed to the other class. Models trained on datasets containing this manipulation are more likely to confuse these classes, i.e. predict samples from A as B and vice-versa. The affected domain Dm consists of all samples from class A and class B. We benchmark the ability of unlearning methods to remove the label confusion after identifying some of the confused data. A minimal sketch of both manipulations is given below.
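The sketch is written for CIFAR-style arrays; the exact trigger patch, the class indices, and the function names are illustrative assumptions (the paper's trigger whitens roughly 0.3% of pixels, while the 2x2 patch below whitens about 0.4%):

```python
import numpy as np

def badnet_poison(images, labels, idx, target_class=0):
    """BadNet-style poisoning (Gu et al., 2019): paint a small white patch at
    the bottom-right of the selected images and relabel them to target_class."""
    images, labels = images.copy(), labels.copy()
    images[idx, -2:, -2:, :] = 1.0  # 2x2 patch on a 32x32 image (~0.4% pixels)
    labels[idx] = target_class
    return images, labels

def interclass_confusion(labels, class_a, class_b, rng):
    """IC test (Goel et al., 2023): relabel half the samples of each of the
    two chosen classes to the other class."""
    new_labels = labels.copy()
    for src, dst in [(class_a, class_b), (class_b, class_a)]:
        idx = np.flatnonzero(labels == src)  # index into the original labels
        half = rng.choice(idx, size=len(idx) // 2, replace=False)
        new_labels[half] = dst
    return new_labels

# Toy usage on random stand-in data (1% of 5,000 samples poisoned).
rng = np.random.default_rng(0)
images = rng.random((5000, 32, 32, 3), dtype=np.float32)
labels = rng.integers(0, 10, size=5000)
poison_idx = rng.choice(5000, size=50, replace=False)
images_p, labels_p = badnet_poison(images, labels, poison_idx)
labels_ic = interclass_confusion(labels, class_a=3, class_b=5, rng=rng)
```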
## 3.2 Experimental Details

We first use the CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) datasets as standard benchmarks in the unlearning literature (Foster et al., 2023; Kurmanji et al., 2023; Chundawat et al., 2023a). We then report poison unlearning results on PCam (Veeling et al., 2018), a binary classification medical imaging dataset, as a potential application. PCam contains histopathologic scans of lymph node sections, with labels indicating the presence of metastatic tissue.

![5_image_0.png](5_image_0.png)

| Method | CIFAR10 | CIFAR100 |
|---|---|---|
| *Poisoning* | | |
| None | 90.52 | 73.79 |
| BadT | 0.29 ± 0.01 | 0.02 ± 0.00 |
| CF | 0.97 ± 0.06 | 0.40 ± 0.05 |
| SSD | -8.94 ± 1.02 | -3.25 ± 0.53 |
| Scrub | 0.69 ± 0.06 | 0.09 ± 0.06 |
| EU | 0.75 ± 0.06 | 0.23 ± 0.07 |
| *Interclass Confusion* | | |
| None | 92.36 | 74.35 |
| BadT | 0.23 ± 0.10 | -0.04 ± 0.02 |
| CF | 0.79 ± 0.14 | 0.31 ± 0.07 |
| SSD | -1.63 ± 0.95 | -5.43 ± 3.62 |
| Scrub | -3.27 ± 1.65 | -0.06 ± 0.05 |
| EU | 0.44 ± 0.15 | -0.01 ± 0.07 |

Figure 2: **(Left) Corrective Accuracy (Acccorr) after applying different unlearning procedures.** Each method ("None" represents the original model) is shown across different identified fractions (α) of manipulated samples. No method, including EU, which is traditionally considered a gold standard, unlearns both the manipulations well when ≤ 80% of the manipulated data is identified. **(Right) Change in retain accuracy (Accretain) after applying different unlearning methods**, with results reported as mean ± stdev over 10 values of the identified fractions α from 0 to 1.0. SSD leads to significant drops in model utility.

For our experiments, we employ the ResNet-9 (Idelbayev, 2018) model for CIFAR10, and the WideResNet-28x10 (Zagoruyko & Komodakis, 2016) model for CIFAR100 and PCam. In the main paper, we report results for 1% of the training data being manipulated, except for the IC test (which has a weaker adversary than poisoning) on CIFAR10, where 10% manipulation is required for clear observations. In Appendix B we report results for other manipulation sizes, finding similar unlearning trends even when only 0.2% of the training data is poisoned. In each setting, we vary α, the fraction of the manipulated set identified for deletion, from 0.1 to 1.0 at intervals of 0.1. For all results, the mean and standard deviation are reported over 3 seeds. In the interclass confusion evaluation, for CIFAR10, we confuse the Cat and Dog classes, and for CIFAR100, the maple and oak tree classes, to be consistent with the setup of Goel et al. (2023). Unlearning Methods. We select several state-of-the-art unlearning methods to benchmark the effectiveness of current unlearning methods in our setting. Detailed descriptions, along with hyperparameter sweep details for all methods, are provided in Appendix A.2. (1) Exact Unlearning (EU): This method retrains from scratch on Str \ Sf using the original training algorithm A. It is considered an inefficient but gold-standard oracle in the privacy-oriented unlearning literature. Many existing methods are relaxations of this algorithm (He et al., 2021; Graves et al., 2021; Goel et al., 2023). (2) Catastrophic Forgetting (CF): Goel et al. (2023) show that fine-tuning just the final layers of a deep model on Str \ Sf performs well at unlearning label manipulations.
We use the strongest version, i.e., fine-tuning all layers for unlearning. (3) Selective Synaptic Dampening (SSD): Foster et al. (2023) selectively dampen learned weights with high influence from Sf, identified by approximating the Fisher Information Matrix (Martens, 2020). Specifically, they compute each parameter's *relative importance*: the ratio of the forget set and retain set loss sensitivity to the parameter. Parameters with relative importance above a chosen threshold have their value divided by a quantity proportional to the relative importance. (4) Knowledge Distillation from a Bad Teacher (BadT): Chundawat et al. (2023b) propose making outputs on Sf random by distilling from a randomly initialized network, while distilling the original model on Str \ Sf. (5) SCalable Remembering and Unlearning unBound (SCRUB): Kurmanji et al. (2023) propose a method that alternates between two key steps: (1) distillation away from the original model on Sf to remove the influence of the manipulated data, and (2) knowledge preservation using distillation towards the original model combined with a task-specific loss on Str \ Sf to retain useful information.

How to Select Best Hyperparameters for Unlearning? We find choosing hyperparameters to be a surprisingly tricky question for corrective unlearning. Selecting the model with the best overall accuracy on a validation set (Accretain) may result in low corrected accuracy (Acccorr) on the domain affected by the manipulation (Dm), especially if it is a small fraction of the overall domain X. Moreover, directly measuring the corrective accuracy may not be possible when the correct labels for the deletion set are not known. We propose to use a weighted average between overall accuracy (utility) on a validation set and the fraction of Sf samples where the model's classification changes (unlearning). We weigh them equally for our results; a sketch of this selection score follows below.
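The sketch below follows the weighted-average rule just described; the function and argument names are our own illustration, not code from the paper:

```python
import numpy as np

def unlearning_selection_score(val_preds, val_labels,
                               sf_preds_before, sf_preds_after, w=0.5):
    """Hyperparameter-selection score for unlearning runs when correct labels
    for the deletion set are unknown: a weighted average of validation
    accuracy (utility) and the fraction of S_f samples whose prediction
    changed after unlearning (a label-free proxy for unlearning)."""
    utility = float(np.mean(val_preds == val_labels))
    changed = float(np.mean(sf_preds_after != sf_preds_before))
    return w * utility + (1.0 - w) * changed
```

Candidate hyperparameter configurations can then be ranked by this score, with w = 0.5 matching the equal weighting used in our results.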
## 3.3 Experimental Results

Figure 2 (Left) shows corrective accuracy trends that quantify the efficacy of different methods at unlearning the adverse effects of manipulations. EU is the gold standard when all manipulated samples are known, and indeed it shows the highest accuracy when |Sf| = |Sm|. For the IC evaluation, EU demonstrates consistent improvement as more deletion set samples are identified. However, it shows no improvements when developers detect less than 80% of the manipulated samples, even when only 1% (500 samples) of the training data is poisoned. This highlights the insufficiency of the privacy-oriented unlearning goal of approximating retraining from scratch on Str \ Sf, as the remaining poisoned samples can maintain their adverse effects, even when their number is small (Gu et al., 2019). As a consequence, popular approaches in unlearning literature like Scrub and CF do not perform well in the poisoning setting. Scrub performs better than CF in the poisoning setting, but the opposite is true in the interclass confusion setting. BadT shows poor results in both settings, as randomizing outputs on Sf conflicts with the goal of correcting the model on the affected domain. On the contrary, SSD recovers Acccorr for poisoning even after identifying just 10% of the manipulated samples, demonstrating the tractability of our setting. However, it completely fails for the IC test, providing no improvements over the original model. Further, as shown in Figure 2 (Right), SSD leads to significant drops in model utility (Accretain), while other unlearning methods maintain utility. Note that we average Accretain across α as we find it to be largely independent of α.

Conclusion: Traditional unlearning methods that train on Str \ Sf perform poorly in practical scenarios when not all manipulated samples are known. SSD shows positive results for removing poisons, demonstrating the tractability of corrective unlearning in this setting, though it fails completely at removing interclass confusion and hurts model utility, leaving scope for improvements.

## 3.4 Why Does SSD Fail For The IC Test But Work For Poisoning?

In Section 3.3, we observed that SSD performs well for removing poisoned data but fails completely on the IC test. Here, we investigate the underlying phenomena. SSD selects which parameters to dampen using the ratio of the parameter's importance for the forget set versus the retain set. We hypothesize that for poisons, a few parameters are disproportionately more important for the forget set than others, whereas for the IC test, the effect is more distributed among the parameters and no small subset of parameters is particularly important. If this is the case, an approach that dampens high-influence parameters is likely to target a "relevant" parameter in the poisoning setting, effectively reducing its adverse effects, but fail to differentially remove interclass confusion without hurting retain accuracy. To test this hypothesis, we show the distribution of the importance ratio (used to select the weights to dampen) for the two tests in Figure 3. For poisoning, we observe a few outlier weights with disproportionately high values (e.g., 800), whereas for the IC test, the distribution does not show such extreme values (the maximum is less than 200). This empirical observation confirms our hypothesis. SSD succeeds at removing poisons because it can dampen weights which correlate the poison feature with the wrong label, even with a small portion of the manipulation set known. This is consistent with the well-known strategy of pruning a small subset of weights to mitigate poisons (Wang et al., 2019), as they correlate a specific feature with the incorrect label. This may motivate the tractability of mechanistic interpretability-based approaches for unlearning certain types of poisoned data (Elhage et al., 2021).

## 3.5 Application On A Medical Dataset: PCam - Binary Classification

We now examine if our findings generalize to PCam (Veeling et al., 2018), an application-specific dataset used in medical imaging. In binary classification tasks, knowing which class is incorrect fully specifies the correct class. Therefore, methods like Scrub, which do a form of gradient ascent, moving away from the original manipulated label, are expected to perform well. In Figure 4, we observe that EU, CF, and BadT continue to perform poorly until most of the manipulated data is detected, while SSD performs well even at small α. As anticipated, the only difference is that Scrub shows significant improvements because its loss function incentivizes moving away from the original model's outputs on the forget set Sf.

## 4 Related Work

Learning from Manipulated Data: The adverse effects of manipulated training data on machine learning models are well-documented across objectives like fairness (Konstantinov & Lampert, 2022), robustness (Sanyal et al., 2021), and adversarial reliability (Paleka & Sanyal, 2023; Tian et al., 2022). One line of defense is designing training strategies more robust to these issues; see Song et al. (2022) for a survey on learning with mislabeled data.
However, learning robust models from manipulated data is a hard problem, as reduced sensitivity to such minority data populations can harm accuracy and fairness (Sanyal et al., 2022). Unlearning specific samples which are discovered to be manipulated can be a complementary post-training mitigation approach.

![7_image_0.png](7_image_0.png)

Figure 3: Histogram of importance ratio values computed for each parameter by SSD. We find poisoning leads to more outlier values, which supports the hypothesis that poisoning can be removed by dampening outlier parameters, which is harder for Interclass Confusion.

![7_image_1.png](7_image_1.png)

Figure 4: **Corrective Accuracy (Acccorr) for unlearning poisons across different methods.** Unlearning methods that move away from the original label, such as Scrub, perform well even when less manipulated data is detected, as in this case there is only one other possible label.

How to detect manipulated data? A curious reader would wonder if the critical task is detecting the small subset of the data itself. However, detecting manipulated data has long been studied (Brodley & Friedl, 1999), with prior work detailing techniques to discover mislabeled (Pleiss et al., 2020; Northcutt et al., 2021a), biased (Prabhu & Birhane, 2021; Jiang & Nachum, 2020) and poisoned (Chen et al., 2019; Wang et al., 2019) data. We assume the model developers employ such strategies for monitoring their data sources. However, they cannot simply throw away the trained model when manipulated data is found due to expensive retraining costs. Our goal is to study how to cheaply mitigate adverse effects on such models using unlearning.

Known manipulations: If the type of manipulation is known, one may employ manipulation-specific mitigation techniques such as poisoning defenses against data poisoning attacks (see Goldblum et al. (2022) for a survey) or erasing concepts like artistic style from visual models (Gandikota et al., 2023). If the samples can be corrected through re-annotation, one may also use knowledge editing techniques (Mitchell et al., 2022). We restrict the scope of our work to settings where the precise manipulation is unknown; as corrected data often cannot be obtained, we hope to use unlearning as a broader, catch-all procedure across known and unknown data manipulations.

Unlearning: Most prior work in designing unlearning procedures is motivated by privacy applications, and aims to achieve *retrain indistinguishability* (Ginart et al., 2019; Golatkar et al., 2020a), that is, to create a distribution of unlearnt models indistinguishable from retraining from scratch without the data to be deleted. In Section 2.3 we discuss differences in corrective unlearning desiderata from retrain indistinguishability. "Exact Unlearning" procedures ensure the unlearnt model never sees the data whose influence is to be deleted, by design of the training procedure (Bourtoule et al., 2021; Schelter, 2020). The empirical results of EU in Section 3.3 show how these approaches may not suffice for corrective unlearning when the full manipulation set is unknown. Moreover, such methods drastically deteriorate in efficiency as the number of samples to delete increases (Warnecke et al., 2021). This has led to "Inexact Unlearning" proposals; we discuss different paradigms in image classification in Appendix A.2, and use state-of-the-art methods from each paradigm in our experiments.
Prior work has also shown that techniques like sparsification (Jia et al., 2023) and regularization (Thudi et al., 2022a) of the original model can improve the performance of unlearning methods. A group of works (Izzo et al., 2021; Wu et al., 2020; Gupta et al., 2021; Neel et al., 2021; Thudi et al., 2022b; Sekhari et al., 2021) also studies unlearning procedures on convex or linear models with theoretical guarantees inspired from differential privacy (Dwork, 2006), but in this work we focus on deep models. Some unlearning works have separately evaluated on both removing backdoors (Sommer et al., 2022; Wei et al., 2023; Liu et al., 2022) and mislabeled samples (Kurmanji et al., 2023; Goel et al., 2023). Indeed, our work proposes neither new methods nor new evaluations. Instead, we unify these two types of manipulations to characterize corrective unlearning as fundamentally different from the privacy-oriented setting. Crucially, we show that retraining cannot be used as a gold standard for corrective unlearning. Closest to our work are Kurmanji et al. (2023), who suggest different applications of unlearning require different methods, and Yoon et al. (2023), who propose unlearning from a few samples.

## 5 Limitations, Broader Impact, And Conclusion

Limitations and Future Work: Ideal corrective unlearning approaches should exhibit robustness against a broad spectrum of manipulation types. Specifically, these methods should withstand adaptive attacks, where the manipulations targeted for unlearning are crafted with knowledge of the unlearning procedures themselves (Tramer et al., 2020), not just the two manipulations we study. We hope future work will design adaptive adversaries and corresponding unlearning algorithms that can correct manipulations introduced by these adversaries. This provides scope to design stronger evaluation frameworks for corrective unlearning. Apart from manipulating features and labels, adversaries could generate entirely synthetic samples (Zhang et al., 2019; Huang et al., 2020). Although our focus is on supervised image classification, the concept of manipulation and its correction is also relevant in self-supervised learning contexts, such as language modeling (Wallace et al., 2021). Finally, an additional complexity could be the presence of false positives, where a clean sample gets identified as manipulated. Apart from designing new algorithms, we believe corrective unlearning can inspire a new line of theoretical work, including fundamental questions such as: what are necessary and/or sufficient conditions on, and sizes of, a 'representative set' of manipulated samples so that corrective unlearning is possible? Can we design procedures to infer more manipulated samples from a representative subset? Can we design algorithms with unlearning guarantees on Acccorr and Accretain instead of indistinguishability of distributions? We hope these questions can give rise to a more methodical and rigorous approach to this domain.

Broader Impact: Corrective unlearning is a post-training strategy designed to mitigate the effects of manipulated or incorrect training data. The societal benefits of this approach are similar to those achieved by defending against manipulations that introduce backdoors or systematic biases, which are largely positive. In some scenarios, such as collective action (Hardt et al., 2023) against harmful models, the ability to manipulate training data could be beneficial. However, defenses against such manipulations might have negative societal effects.
These ideas remain speculative, and overall, we believe that removing the effects of manipulated data will have a positive societal impact.

Conclusion: Overall, we characterize the Corrective Machine Unlearning setting, designed to mitigate the negative effects of manipulated data discovered post-training, such as diminished accuracy on specific parts of the domain. In contrast to the assumption of fully specified user deletion requests in prior work on unlearning, we acknowledge that all the manipulated data samples may not be known. Developers can use a small representative subset of the manipulated samples, which can be collected by careful inspection of a uniform subsample of the training data. We discuss how this leads to significant changes from the traditional unlearning setting, which is primarily designed to address privacy concerns. Our findings indicate that popular unlearning methods, even the "exact unlearning" gold standard of retraining on the remaining data, fail to enhance accuracy on manipulated domain samples unless nearly all of the manipulated data is identified. A notable exception is SSD, which shows promise by successfully mitigating the effects of the BadNet poison, thus illustrating the feasibility of counteracting manipulated data with only a partial subset identified. However, this method does not generalize across settings, such as addressing Interclass Confusion manipulations. We hope our work spurs the development of stronger corrective unlearning methods to assist practitioners in dealing with data quality issues arising from web-scale training.

## References

Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. Can machine learning be secure? In *Proceedings of the 2006 ACM Symposium on Information, computer and communications security*, 2006a.

Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, and J Doug Tygar. Can machine learning be secure? In *ASIA CCS*, 2006b.

Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. In *International Conference on Machine Learning (ICML)*, 2012.

Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In *IEEE S&P*, 2021.

Eric Breck, Marty Zinkevich, Neoklis Polyzotis, Steven Whang, and Sudip Roy. Data validation for machine learning. In *Proceedings of SysML*, 2019.

Carla E Brodley and Mark A Friedl. Identifying mislabeled training data. *Journal of artificial intelligence research*, 1999.

California State Legislature. California consumer privacy act. 2018. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375.

Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. Poisoning web-scale training datasets is practical. *arXiv preprint*, 2023.

Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. Deepinspect: A black-box trojan detection and mitigation framework for deep neural networks. In *IJCAI*, 2019.

Vikram S Chundawat, Ayush K Tarun, Murari Mandal, and Mohan Kankanhalli. Zero-shot machine unlearning. *IEEE Transactions on Information Forensics and Security*, 2023a.

Vikram S Chundawat, Ayush K Tarun, Murari Mandal, and Mohan Kankanhalli. Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher. In *AAAI*, 2023b.

Council of European Union.
EU general data protection regulation. 2018. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.

Cynthia Dwork. Differential privacy. In *Automata, Languages and Programming*, 2006.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. A mathematical framework for transformer circuits. *Transformer Circuits Thread*, 2021. https://transformer-circuits.pub/2021/framework/index.html.

Jack Foster, Stefan Schoepf, and Alexandra Brintrup. Fast machine unlearning without retraining through selective synaptic dampening. *arXiv preprint*, 2023.

Robert M. French. Catastrophic forgetting in connectionist networks. In *Trends in Cognitive Sciences*, 1999.

Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 2426–2436, 2023.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. *arXiv*, 2020.

Antonio Ginart, Melody Y. Guan, Gregory Valiant, and James Zou. Making AI forget you: Data deletion in machine learning. In *NeurIPS*, 2019.

Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, and Ponnurangam Kumaraguru. Towards adversarial evaluations for inexact machine unlearning. *arXiv preprint*, 2023.

Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In *CVPR*, 2020a.

Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. In *ECCV*, 2020b.

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In *AAAI*, 2021.

Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. *IEEE Access*, 2019.

Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. Adaptive machine unlearning. *NeurIPS*, 2021.

Xingshuo Han, Guowen Xu, Yuan Zhou, Xuehuan Yang, Jiwei Li, and Tianwei Zhang. Physical backdoor attacks to lane detection systems in autonomous driving. In *Proceedings of the 30th ACM International Conference on Multimedia*, 2022.

Moritz Hardt, Eric Mazumdar, Celestine Mendler-Dünner, and Tijana Zrnic. Algorithmic collective action in machine learning. In *ICML*. PMLR, 2023.

Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, and Xingbo Hu. Deepobliviate: A powerful charm for erasing data residual memory in deep neural networks. *arXiv preprint*, 2021.

W Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, and Tom Goldstein. Metapoison: Practical general-purpose clean-label data poisoning. *NeurIPS*, 2020.

Yerlan Idelbayev. Proper ResNet implementation for CIFAR10/CIFAR100 in PyTorch. 2018.
Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. Approximate data deletion from machine learning models. In *AISTATS*, 2021.

Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, and Sijia Liu. Model sparsity can simplify machine unlearning. In *NeurIPS*, 2023.

Heinrich Jiang and Ofir Nachum. Identifying and correcting label bias in machine learning. In *AISTATS*, 2020.

Nikola H Konstantinov and Christoph Lampert. Fairness-aware pac learning from corrupted data. *JMLR*, 2022.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.

Meghdad Kurmanji, Peter Triantafillou, and Eleni Triantafillou. Towards unbounded machine unlearning. *NeurIPS*, 2023.

Junde Li and Swaroop Ghosh. Random relabeling for efficient machine unlearning. *arXiv preprint*, 2023.

Vijay Lingam, Mohammad Sadegh Akhondzadeh, and Aleksandar Bojchevski. Rethinking label poisoning for GNNs: Pitfalls and attacks. In *ICLR*, 2024. URL https://openreview.net/forum?id=J7ioefqDPw.

Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, and Jianfeng Ma. Backdoor defense with machine unlearning. In *IEEE INFOCOM 2022-IEEE conference on computer communications*, pp. 280–289. IEEE, 2022.

James Martens. New insights and perspectives on the natural gradient method. *Journal of Machine Learning Research*, 2020.

Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D Manning, and Chelsea Finn. Memory-based model editing at scale. In *ICML*, 2022.

Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In *COLT*, 2021.

Thanh Tam Nguyen, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. A survey of machine unlearning. *arXiv preprint arXiv:2209.02299*, 2022.

Curtis G. Northcutt, Anish Athalye, and Jonas Mueller. Pervasive label errors in test sets destabilize machine learning benchmarks. In *NeurIPS*, 2021a.

Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang. Confident learning: Estimating uncertainty in dataset labels. In *JAIR*, 2021b.

Daniel Paleka and Amartya Sanyal. A law of adversarial risk, interpolation, and label noise. In *ICLR*, 2023.

Parliament of Canada. Personal information protection and electronic documents act. 2018. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2018/an_181010/.

Alexandra Peste, Dan Alistarh, and Christoph H Lampert. SSSE: Efficiently erasing samples from trained machine learning models. In *NeurIPS Workshop Privacy in Machine Learning*, 2021.

Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. Identifying mislabeled data using the area under the margin ranking. *NeurIPS*, 2020.

Vinay Uday Prabhu and Abeba Birhane. Large image datasets: A pyrrhic win for computer vision? In *WACV*, 2021.

Amartya Sanyal, Puneet K. Dokania, Varun Kanade, and Philip Torr. How benign is benign overfitting? In *ICLR*, 2021.

Amartya Sanyal, Yaxi Hu, and Fanny Yang. How unfair is private learning? In *Uncertainty in Artificial Intelligence*, 2022.

Robert E Schapire and Yoav Freund. Boosting: Foundations and algorithms. MIT Press, 2012.

Sebastian Schelter. "amnesia" - machine learning models that can forget user data very fast. In *CIDR*, 2020.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al.
Laion-5b: An open large-scale dataset for training next generation image-text models. *NeurIPS*, 2022.

Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. In *NeurIPS*, 2021.

Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. *Advances in Neural Information Processing Systems*, 2018.

David M. Sommer, Liwei Song, Sameer Wagh, and Prateek Mittal. Athena: Probabilistic verification of machine unlearning. *Proceedings on Privacy Enhancing Technologies*, 2022.

Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. *IEEE Transactions on Neural Networks and Learning Systems*, 2022.

Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot. Unrolling sgd: Understanding factors influencing machine unlearning. In *2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P)*, pp. 303–319. IEEE, 2022a.

Anvith Thudi, Hengrui Jia, Ilia Shumailov, and Nicolas Papernot. On the necessity of auditable algorithmic definitions for machine unlearning. In *USENIX*, 2022b.

Zhiyi Tian, Lei Cui, Jie Liang, and Shui Yu. A comprehensive survey on poisoning attacks and countermeasures in machine learning. *ACM Computing Surveys*, 2022.

Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. *NeurIPS*, 2020.

Bastiaan S Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, and Max Welling. Rotation equivariant cnns for digital pathology. In *Medical Image Computing and Computer Assisted Intervention*. Springer, 2018.

Eric Wallace, Tony Zhao, Shi Feng, and Sameer Singh. Concealed data poisoning attacks on NLP models. In *NAACL-HLT*, 2021.

Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y. Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In *IEEE S&P*, 2019.

Alexander Warnecke, Lukas Pirch, Christian Wressnegger, and Konrad Rieck. Machine unlearning of features and labels. *Network and Distributed System Security Symposium*, 2021.

Shaokui Wei, Mingda Zhang, Hongyuan Zha, and Baoyuan Wu. Shared adversarial unlearning: Backdoor mitigation by unlearning shared adversarial examples. *Advances in Neural Information Processing Systems*, 36:25876–25909, 2023.

Yinjun Wu, Edgar Dobriban, and Susan B. Davidson. Deltagrad: Rapid retraining of machine learning models. In *ICML*, 2020.

Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, and Jungseul Ok. Few-shot unlearning by model inversion, 2023.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In *BMVC*, 2016.

Jiale Zhang, Junjun Chen, Di Wu, Bing Chen, and Shui Yu. Poisoning attack in federated learning using generative adversarial nets. In *2019 IEEE international conference on big data science and engineering (TrustCom/BigDataSE)*, 2019.

## A Experimental Setup Details

## A.1 Training Details

Our standard training procedure A is as follows: We train our models for 4000 steps on CIFAR10 and PCAM, and for 6000 steps on CIFAR100. Each step consists of training on a single batch, and we use a batch size of 512 throughout. We use an SGD optimizer with momentum 0.9 and weight decay 5e-4, a linear scheduler with t_mult = 1.25, and warmup steps set to 1/100 of the total training steps.
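For concreteness, a minimal PyTorch sketch of this training configuration (the model and base learning rate below are placeholders we assume for illustration, and the exact scheduler class is not specified here, so a plain linear warmup stands in for it):

```python
import torch

# Sketch of the standard training procedure A (assumptions noted above).
model = torch.nn.Linear(3 * 32 * 32, 10)   # placeholder; ResNet-9 etc. in practice
total_steps = 4000                          # 6000 for CIFAR100
warmup_steps = max(1, total_steps // 100)   # warmup = 1/100 of total steps
batch_size = 512

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))
```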
The same hyperparameters are used during unlearning unless otherwise specified. The setup used for all experiments is a PC with an Intel(R) Xeon(R) E5-2640 2.40 GHz CPU, 128GB RAM, and one GeForce RTX 2080 GPU.

## A.2 Detailed Description Of Unlearning Methods

To benchmark the performance of existing unlearning proposals in corrective unlearning scenarios, we select the strongest unlearning methods across five popular paradigms:

(1) Exact Unlearning (EU): This paradigm involves retraining the parts of the ML system (Bourtoule et al., 2021; Goel et al., 2023; He et al., 2021) that are influenced by Sf from scratch using Str \ Sf. Method Used: We benchmark the strongest version, retraining the entire model from scratch on Str \ Sf using the original training algorithm A. This is considered an inefficient but gold-standard unlearning procedure in prior work.

(2) Catastrophic Forgetting (CF): Neural networks suffer from catastrophic forgetting (French, 1999): when a model is continually updated without some previously learnt samples, it loses knowledge about them. Many unlearning methods perform finetuning on Str \ Sf to achieve unlearning of Sf via catastrophic forgetting, and Goel et al. (2023) show that even finetuning just the final layers of the model performs well on the IC test. Method Used: We use the original training procedure A for 1000 steps on Str \ Sf.

(3) Modifying learnt parameters with high influence from Sf: This is a training-free class of methods (Golatkar et al., 2020a;b; Peste et al., 2021; Chundawat et al., 2023a) that identifies parameters with information relevant to the forget set using statistics like the Fisher Information Matrix (FIM). It then damages these parameters by adding noise or reducing their magnitude, hoping to selectively remove information about Sf. Method Used: We benchmark the recently proposed Selective Synaptic Dampening (SSD) method, which has shown state-of-the-art results in this paradigm (Foster et al., 2023). We extensively tune the weight selection threshold α (not to be confused with the identified fraction mentioned earlier; the symbol is overloaded for consistency with the original paper) and the weight dampening constant γ. We find that γ should be tuned relative to α for optimal results. For each datapoint, we pick the best result out of runs with α = [0.1, 1, 10, 50, 100, 500, 1000, 1e4, 1e5, 1e6] and γ = [0.1α, 0.5α, α, 5α, 10α].

(4) Pushing Sf outputs towards random: Some unlearning procedures (Graves et al., 2021; Li & Ghosh, 2023; Chundawat et al., 2023b) push the model towards random outputs on the deletion set. Method Used: We benchmark Knowledge Distillation from Bad Teacher (BadT) (Chundawat et al., 2023b), a state-of-the-art method in this paradigm, which simultaneously distills from a randomly initialized neural network on Sf and from the original model on the remaining data Str \ Sf. We finetune the original model using this procedure for 1000 unlearning steps.

(5) Alternating between Forgetting and Preservation Steps: Method Used: Kurmanji et al. (2023) propose SCRUB and show it performs well on unlearning mislabelled samples when all are identified. The method alternates between forget steps and knowledge-preservation steps. The forget step performs gradient ascent on the task loss for Sf. The knowledge-preservation step performs knowledge distillation from Mo using Str \ Sf, as well as optimizing the task loss on Str \ Sf.
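A minimal sketch of one SCRUB-style epoch under this description (assumptions: PyTorch, a KL-based distillation loss, and dataloaders for Sf and Str \ Sf; this is an illustration, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def scrub_epoch(model, original_model, forget_loader, retain_loader,
                optimizer, alpha, do_forget_step=True):
    """One epoch of SCRUB-style alternation (illustrative, assumed interfaces)."""
    if do_forget_step:
        for x, y in forget_loader:              # forget step: gradient *ascent* on Sf
            loss = -F.cross_entropy(model(x), y)
            optimizer.zero_grad(); loss.backward(); optimizer.step()
    for x, y in retain_loader:                  # preservation step on Str \ Sf
        logits = model(x)
        with torch.no_grad():
            teacher = F.softmax(original_model(x), dim=1)
        distill = F.kl_div(F.log_softmax(logits, dim=1), teacher,
                           reduction="batchmean")
        loss = alpha * distill + F.cross_entropy(logits, y)  # alpha trades the two
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```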
We finetune the original model using this procedure for 1000 unlearning steps; the forget step is used only in the first 200 unlearning steps, as the paper recommends running it only in the initial iterations. We use a smaller learning rate (0.0025), as the original value leads to stability issues. We tune the hyperparameter α, which controls the trade-off between the distillation loss and the task loss. For each datapoint, we pick the best result out of runs with α = [0.001, 0.01, 0.05, 0.1, 0.5, 1, 5, 10].

| Dataset   | #Classes | Model            | Poisoning \|Sm\|/\|Str\| | IC Test \|Sm\|/\|Str\| |
|-----------|----------|------------------|--------------------------|------------------------|
| CIFAR-10  | 10       | ResNet-9         | 0.2%, 1%, 2%             | 1%, 5%, 10%            |
| CIFAR-100 | 100      | WideResNet-28x10 | 0.2%, 1%, 2%             | 0.2%, 0.5%, 1%         |

Table 1: Datasets and models, along with manipulation sizes for the Poisoning and Interclass Confusion (IC) evaluations.

## B Further Results

We now provide results not included in the main paper due to space considerations. Specifically:

- We report results across different manipulation set sizes (settings summarized in Table 1), on both unseen samples from the affected domain Dm (the generalization effect of the manipulation) and the manipulation set used in training Sm (memorization of the manipulation). We do this for both the poisoning and IC evaluations.
- We report computational efficiency by measuring the unlearning time for each method.

## B.1 Unlearning (Acc_corr) Results Across Manipulation Sizes, And On The Manipulated Set Sm

**Setup** In this section, we vary the manipulation sizes for both poisoning and interclass confusion to see whether the trends are consistent. As the adversary is less powerful (label-only) in IC, we expect IC to need larger manipulation set sizes to show clear trends, while poisoning works with very small manipulation sets. Due to the changing manipulation sizes, we report the deletion set size on the X-axis, with the total manipulation set size labeled below each subfigure and also noticeable as the right-most point on the X-axis. In the main paper, we reported unlearning results (Acc_corr) on unseen samples from the affected domain Dm, but here we also investigate the same metric computed on the manipulated samples used in training. We thus use the more general term "clean-label accuracy", i.e., accuracy computed with respect to correct labels, on the Y-axis, which is the same as Acc_corr for results on unseen samples.

**Unlearning Poisoning** In Figure 5a, we report results on a smaller (0.2%) and a bigger (2%) manipulation size than the main paper (1%, also reported here for reference). In both cases, and notably even for the smaller manipulation set, we find consistent results. To measure the removal of mislabelling on poisoned training samples, we report clean-label accuracy on Sm in Figure 5b. The trends across unlearning methods are similar to those on unseen samples from the affected domain Dm, though the absolute accuracies after unlearning are higher, as expected for training samples in comparison to test-set samples. The success of SSD in producing an unlearnt model that successfully classifies the manipulated training data shows how an ideal unlearning algorithm can help re-label the detected data correctly.

**Unlearning Interclass Confusion** In Figure 6a, we report results on smaller manipulation sizes (0.2%, 0.5% for CIFAR100, and 2%, 5% for CIFAR10) than the main paper
(same sizes, i.e., 1% on CIFAR100 and 10% on CIFAR10, also reported here for reference). We see similar but weaker trends in the medium setting, and no clear effects on unseen samples at the smallest manipulation size. While the smallest manipulation size (subfigures a, d) for Interclass Confusion did not show significant effects on unseen samples from classes *A, B*, Figure 6b shows that unlearning methods continue to give wrong predictions on the class *A, B* samples used for training. This emphasises the need to check unlearnt model outputs on unseen and training samples from the affected domain Dm separately, especially when the manipulation set is too small to have an effect on model behaviour for unseen samples.

## B.2 Computational Efficiency

In Table 2 we report the average unlearning times of the different unlearning methods. In the case of EU and CF, while more efficient relaxations have been proposed (Goel et al., 2023; He et al., 2021; Graves et al., 2021), we retrain from scratch to perform the strongest unlearning, which we still find to be insufficient.

![17_image_0.png](17_image_0.png)

Figure 5a (caption partially recovered): clean-label accuracy across deletion sizes |Sf| after unlearning ("None" represents the original model). Existing unlearning methods except SSD, including EU, which is traditionally considered a gold standard, perform poorly ((b), (c), (e), (f)) when ≤ 80% of the poisoned data is identified.

![17_image_1.png](17_image_1.png)

(b) Clean-label accuracy on manipulated train samples Sm with the poison trigger. Each method is shown across deletion sizes |Sf| after training with adversarial poisoning ("None" represents the original model). Trends mimic the results for clean-label accuracy on unseen samples with the poison trigger.

![18_image_0.png](18_image_0.png)

Figure 6a (caption partially recovered): clean-label accuracy on the classes *A, B* used for the Interclass Confusion test, across deletion sizes |Sf|. SSD provides no improvements over the original model (represented as "None"), and other unlearning methods also require a large fraction of the manipulated data to be identified for unlearning. In the lower manipulation-size settings (a) and (d), the model outputs ...

![18_image_1.png](18_image_1.png)

Figure 6b (caption partially recovered): clean-label accuracy for unlearning methods ("None" represents the original model) across deletion sizes |Sf|. Existing unlearning methods perform poorly when ... is lower. Even the smallest setting (10% of a single class) shows clear unlearning trends.

| Method | Time (minutes) |
|--------|----------------|
| EU     | 49.93          |
| CF     | 10.52          |
| SCRUB  | 16.86          |
| SSD    | 1.80           |
| BadT   | 33.19          |

Table 2: Unlearning time by method.
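For reference, a minimal sketch of the clean-label accuracy metric used throughout this appendix (an assumption of the exact implementation; the loader is assumed to yield inputs paired with their correct, pre-manipulation labels, and the helper name is ours):

```python
import torch

def clean_label_accuracy(model, loader):
    """Accuracy w.r.t. correct (pre-manipulation) labels.

    For samples from Sm, this assumes the labels *before*
    poisoning/relabelling were stored alongside the data.
    """
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y_correct in loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y_correct).sum().item()
            total += y_correct.numel()
    return correct / total
```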
# PLUM: Improving Inference Efficiency By Leveraging Repetition-Sparsity Trade-Off

Anonymous authors Paper under double-blind review

## Abstract

Efficient inference of Deep Neural Networks (DNNs) on resource-constrained edge devices is essential. Quantization and sparsity are key techniques that translate to repetition and sparsity, respectively, within tensors at the hardware-software interface. This paper introduces the concept of the repetition-sparsity trade-off, which helps explain computational efficiency during inference. We propose PLUM, a unified co-design framework that integrates DNN inference systems and quantization (forward and backward pass) to leverage the repetition-sparsity trade-off and improve inference efficiency. Our results demonstrate that PLUM's quantization method is more accurate than binary quantization with the same number of non-zero weights. Detailed analysis indicates that signed binarization generates a smaller distribution of effectual (non-zero) parameters nested within a larger distribution of total parameters of latent full-precision weights for a DNN block. Finally, the proposed PLUM framework achieves a 26% speedup on real hardware, doubles energy efficiency, and reduces density by 2.8× compared to binary methods, while retaining the top-1 accuracy of prior-art methods for ResNets on ImageNet (achieving 66.2% top-1 accuracy), presenting an alternative solution for deploying efficient models in resource-limited environments.

## 1 Introduction

Despite significant strides in accuracy, the burgeoning complexity and resource demands of deep learning models pose challenges for their widespread adoption across a wide range of domains (He et al., 2016; Brown et al., 2020; Graves et al., 2013; Jain et al., 2022; Mandlekar et al., 2021; Jain et al., 2024). This requires the development of innovative techniques to enhance DNN efficiency during inference on edge devices. Two such techniques have been studied extensively: binarization and sparsity.

Binarization, a form of quantization, results in weight repetition, as only two values appear repeatedly in the weight tensor (Courbariaux et al., 2015). This approach significantly trims the memory footprint of the weight tensor, thereby decreasing memory I/O during inference (Hegde et al., 2018). In contrast, sparsity leads to zero weight values. Since anything multiplied by zero is zero, weight sparsity leads to *ineffectual* multiplications (Wu et al., 2021). This approach reduces memory I/O during inference by not reading activations that would be multiplied by zero weights (Gong et al., 2020). Thus, both techniques are geared towards reducing memory I/O during inference.

DNN inference efficiency is usually achieved by leveraging either binarization or sparsity. Introducing sparsity by adding a zero-valued weight in conjunction with binary weights is termed ternary quantization. Ternary was conceived with the reasonable assumption that transitioning from binary to ternary models would only minimally impact inference latency due to the effect of zero weights (Li et al., 2016). However, advances in contemporary hardware-software systems have revealed a substantial increase in latency during such transitions (Prabhakar et al., 2021; Fu et al., 2022). This work aims to perform faster inference on contemporary hardware-software systems while retaining model accuracy. The key insight of this work is the concept of the *repetition-sparsity trade-off*.
This trade-off explains the inference inefficiency of binary and ternary weight quantization. A traditional binary network maximizes weight repetition while being ignorant of weight sparsity, whereas a ternary network introduces weight sparsity at the expense of weight repetition. For instance, transitioning from binary to ternary networks increases the number of possible unique 3×3 2D filters from 512 (2^9) to 19683 (3^9). This makes it exponentially harder to extract efficiency using filter repetition.

![1_image_0.png](1_image_0.png)

Figure 1: **On the left:** The conventional, isolated approach, where DNN inference systems and quantization methods are designed separately and remain ignorant of the repetition-sparsity trade-off, leading to inefficient inference. **On the right:** PLUM, a unified design framework, performs quantization-system co-design to exploit the repetition-sparsity trade-off, thereby enhancing computational efficiency.

The conventional approach to deploying efficient DNNs has largely been a two-stage process: training the DNN with binarization or ternarization (Bai et al., 2018), followed by formulating a system design that reduces memory I/O by leveraging weight repetition and/or sparsity (Hegde et al., 2018). This partitioned design approach has revealed a pitfall: increased memory I/O during inference due to the repetition-sparsity trade-off. However, recent advancements in DNN inference systems can now exploit repetition and sparsity together to improve efficiency, as shown in (Prabhakar et al., 2021; Fu et al., 2022). To this end, this paper proposes a quantization-system co-design framework called the **PLUM** (Plus-Minus) framework, as seen in Figure 1. This framework aims to demonstrate and leverage the existence of the repetition-sparsity trade-off. PLUM re-imagines different levels of the stack working together in synergy to enhance computational efficiency during inference while retaining the model's accuracy. As seen in Figure 2, compared to a classic partitioned design for binary, the PLUM framework improves inference efficiency (with respect to energy, latency, and throughput) and reduces model density to allow the use of bigger models, resulting in higher accuracy. Based on our analysis and empirical evidence, we envision that the PLUM framework has the potential to positively shape the evolution of more efficient and high-performing deep learning models.

![1_image_1.png](1_image_1.png)

Figure 2: **PLUM vs. prior-art method**: For ResNet on ImageNet, the pronounced spread indicates that PLUM holistically outperforms the prior-art method of partitioned design using binary quantization. It retains competitive accuracy and pushes the Pareto front, exhibiting a +3.5% improvement when both methods employ a comparable number of effectual parameters. Moreover, our method enhances inference efficiency, achieving a 26% speedup, doubling energy efficiency, and reducing density by 2.8× for the same backbone.

We make the following contributions in our work:

- We present the concept of the repetition-sparsity trade-off along with the PLUM framework. While the former explains the inference inefficiency of binary and ternary weight quantization, the latter leverages (and thus proves) this insight to improve the efficiency of DNN inference.
- Compared to prior-art quantization schemes, we demonstrate that PLUM's signed-binary quantization scheme retains model accuracy (90.7% and 66.2% top-1 accuracy on CIFAR10 and ImageNet) while requiring significantly fewer effectual parameters. We offer detailed insights into the co-dependent learned features in the DNN when using PLUM by visualizing the distribution of quantized and latent full-precision weights.
- We perform DNN inference on Intel CPUs to prove the repetition-sparsity trade-off and demonstrate that the PLUM framework leads to the fastest inference. We further demonstrate the benefits of sparsity during inference when using PLUM via an energy-reduction experiment.

## 2 Background

**Quantization** Quantization in deep learning involves mapping real-valued numbers to a select set of discrete values to streamline and optimize computation. Binary quantization assigns any real-valued number to either +1 or −1 (Courbariaux et al., 2015). The primary objective is to simplify operations in DL hardware, turning multiply-and-accumulates (MACs) into mere accumulate operations (Rastegari et al., 2016). Conversely, ternary quantization assigns values to +1, −1, or 0 (Li et al., 2016). This determination is based on a threshold, ∆: values exceeding ∆ are assigned +1, those below −∆ receive −1, and the rest become zero. For both quantization methods, the output values can be scaled by a factor α (Bulat & Tzimiropoulos, 2019; Bulat et al., 2019). There is a rich literature on improving the accuracy of these methods (Qin et al., 2023; Zhang et al., 2018; Gong et al., 2019; Qin et al., 2021; Hu et al., 2018; Pouransari et al., 2020; Liu et al., 2018), but these methods are not aware of the repetition-sparsity trade-off. In the PLUM framework, we combine the benefits of both binary and ternary to create efficient inference.

**Weight Sparsity** Weight sparsity in a DNN means that many weights take the repeated value zero. The basic idea is that since 0 × x = 0 for any real-valued scalar x, if a weight is zero, the multiplication is *ineffectual* and should be skipped (Gong et al., 2020). Sparsity in weights is static during inference (Dai et al., 2020); therefore, if a weight is zero, we can choose not to load the activations corresponding to that weight (Qin et al., 2020). This reduces data movement, memory accesses, and MACs, resulting in efficient DNN inference. This approach has been effective on ASICs and general-purpose devices (Hegde et al., 2019; Wang et al., 2021; Dai et al., 2020; Gong et al., 2020; Nayak et al., 2023). PLUM exploits weight sparsity during inference to improve efficiency.

**Weight Repetition** Quantization of weights leads to the same values being repeated throughout the weight tensor. This phenomenon is known as weight repetition (Hegde et al., 2018; Sze et al., 2020). Since the weights are fixed during DNN inference (Dai et al., 2020), this creates opportunities to improve inference time and energy by exploiting the repetition of weights and reducing memory accesses (Sze et al., 2020). This concept was first introduced in BNN (Courbariaux et al., 2016), which demonstrated that for a CNN, only 42% of filters are unique per layer on average, which can reduce the number of operations by 3x.
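As an illustration of how such filter-level repetition can be measured, here is a minimal PyTorch sketch (our reconstruction for illustration, not the BNN authors' code) that binarizes a convolution weight and counts the fraction of unique filters:

```python
import torch

def unique_filter_fraction(weight: torch.Tensor) -> float:
    """Fraction of unique filters in a binarized conv weight of shape (K, C, R, S)."""
    binarized = torch.sign(weight)                  # values in {-1, 0, +1}
    flat = binarized.reshape(weight.shape[0], -1)   # one row per filter
    unique = torch.unique(flat, dim=0)
    return unique.shape[0] / flat.shape[0]

# Randomly initialized weights are nearly all unique; trained binary
# networks exhibit far lower fractions (~42% per BNN's observation).
conv = torch.nn.Conv2d(64, 128, kernel_size=3, bias=False)
print(unique_filter_fraction(conv.weight.detach()))
```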
UCNN (Hegde et al., 2018) demonstrated how to leverage the widespread and abundant weight repetition present across a range of networks (such as ResNet and GoogleNet) trained on a variety of datasets. It reorders the weights, and thus the activations, reducing memory accesses and arithmetic operations and yielding up to 4x more efficient inference. For example, if the filter weights are [a, b, a, a] and the activations are [w, x, y, z], UCNN would reorder the computation as a × (w + y + z) + b × (x) for efficient inference (Sze et al., 2020), but it does not exploit weight sparsity. SumMerge (Prabhakar et al., 2021) uses both weight repetition and weight sparsity for efficient DNN inference. For example, if b = 0 in the previous example, SumMerge would compute only a × (w + y + z) during inference. While it reduces arithmetic operations by up to 20x, it relies on a greedy strategy to do so. Q-Gym (Fu et al., 2022) treats weight repetition as a combinatorial optimization problem to perform inference on CPUs and GPUs. However, all these systems are designed for standard, pre-built quantization schemes and treat the repetition and sparsity of quantized DNNs as fixed. None of these systems performs the dot product of activation and weight tensors in a single step; instead, they split the dot products of inputs and weights into smaller blocks via tiling to improve data locality during inference. PLUM exploits this for faster inference through quantization and DNN-inference co-design.

![3_image_0.png](3_image_0.png)

Figure 3: **Concept Diagram on the left:** The diagram compares binary, ternary, and signed-binary quantization in terms of a visual representation of their quantized weights. **Qualitative Evaluation on the right:** The table qualitatively evaluates them in terms of weight sparsity, weight repetition, and inference efficiency.

**Repetition, Sparsity & Inference Latency** Weight repetition is prevalent across a range of DNNs trained on diverse datasets (Hegde et al., 2018). The impact of weight sparsity on weight repetition is most easily visualized for 3×3 convolution filters, as shown in Figure 3. Binary models have 2^9 unique filters, while ternary models employ 3^9 unique filters. Recent works (Hegde et al., 2018; Prabhakar et al., 2021; Fu et al., 2022) show a significant slowdown in inference when using ternary compared with binary. This extends to bit utilization: binary systems can potentially be represented by a single bit, whereas ternary systems require two bits due to the inclusion of zero-valued weights. Moreover, recent findings indicate that runtime doubles when transitioning from one to two bits during the deployment of ResNet18 on ARM CPUs, showcasing the influence of bit-width on inference time (Cowan et al., 2020). PLUM aims to balance the exploitation of repetition and sparsity for faster inference.

## 3 PLUM

The objective is to create a framework that leverages the trade-off between repetition and sparsity. It aspires to retain competitive accuracy while combining the merits of both repetition and sparsity for faster inference.

## 3.1 Leveraging Repetition-Sparsity Trade-Off

Conventional binary networks offer 2 unique weight choices per element, creating a dense setup. Conversely, ternary networks take an additional bit to represent 3 unique weight choices per element, creating a sparse setup. Continuing the 3×3 convolution filter illustration from the previous section, binary models can have 2^9 unique filters.
In contrast, ternary models employ 3^9 unique filters, inadvertently decreasing repetition exponentially, by a factor of 38.4×. These design decisions can be interpreted as follows: binary quantization prioritizes the maximization of weight repetition, overlooking the potential benefits of weight sparsity, whereas ternary quantization induces weight sparsity but compromises weight repetition. This disparity places the two schemes at opposite ends of a spectrum, creating a *repetition-sparsity trade-off* (as seen in Figure 3).

In a given DNN block, we have the flexibility to assign each latent full-precision weight one of four possible quantization-function value sets: {1,−1}, {1,0}, {0,−1}, and {1,0,−1}. This variety of assignments to individual weights allows for the creation of unique configurations within the block, each representing a distinct point along the repetition-sparsity spectrum. The PLUM framework aims to identify a more efficient point on this spectrum. PLUM's quantization, i.e., signed-binary, makes the design decision to use two quantization functions with value sets {1,0} and {0,−1}, as they are sparse and could potentially be represented using one bit per latent full-precision weight.

![4_image_0.png](4_image_0.png)

Figure 4: **PLUM framework leads to efficient inference as it acknowledges the repetition-sparsity trade-off through co-design:** Visualizing inference when using recent systems (Prabhakar et al., 2021; Fu et al., 2022). Weight repetition enables binary models to skip work by re-using partial sums within and across filters. PLUM takes this even further by reducing the number of effectual operations, leveraging sparsity while retaining repetition. (Details in Supp B.)

Let the convolutional layer have an R × S kernel size with C input channels and K filters. The quantization function takes latent full-precision weights W as input and outputs the quantized weights W^quant. The quantized weight W^quant is the product of the sign-factor β and the bitmap U:

$$Q : W \to W^{quant}; \quad W^{quant} = \beta\, U \tag{1}$$

$$\forall\, W \in \mathbb{R}; \quad \beta \in \{+1, -1\}; \quad U \in \{0, 1\} \tag{2}$$

## 3.2 Co-Design Exploration

Developing an efficient framework requires a redesign, since PLUM's setup is distinct from existing methods: a latent FP weight can be assigned to either of the two quantization functions. This leads to two challenges: (1) retaining the ability to learn meaningful representations, and (2) providing faster inference when using state-of-the-art inference systems. Our method addresses these concerns by revisiting design decisions at different layers of the stack with the goal of improving efficiency while retaining accuracy. For in-depth insights into our experimental setup and implementation details, please refer to supplementary section C. For additional ablations, please see supplementary sections E, F, and G.

## 3.2.1 DNN Inference In PLUM

The aim is to achieve faster inference while using two quantization functions for a DNN block. As explained in Section 2 (weight repetition paragraph), modern DNN inference systems enhance data locality during inference by splitting the dot products of inputs and weights into smaller blocks through tiling. For instance, Prabhakar et al. (2021) divides every filter into smaller blocks by introducing a sub-dimension denoted C∗, representing the tile size along the C dimension.
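A minimal sketch of this channel-dimension tiling, assuming a PyTorch-style weight layout (K, C, R, S); the helper is hypothetical, since systems such as SumMerge implement tiling internally:

```python
import torch

def tile_filters(weight: torch.Tensor, c_star: int) -> torch.Tensor:
    """Split a conv weight of shape (K, C, R, S) into tiles of c_star channels.

    Each tile can be processed independently, improving data locality; with
    PLUM's local binarization (next paragraph), every tile sees a single
    signed-binary value set.
    """
    K, C, R, S = weight.shape
    assert C % c_star == 0, "tile size must divide the channel dimension"
    # -> shape (K, C // c_star, c_star, R, S): one sub-filter per tile
    return weight.reshape(K, C // c_star, c_star, R, S)

w = torch.randn(128, 64, 3, 3)
print(tile_filters(w, c_star=16).shape)  # torch.Size([128, 4, 16, 3, 3])
```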
We leverage this insight and make the design decision of *local binarization*. This allows a single processing step during PLUM inference to see one signed-binary quantization function, leading to efficiency as shown in Figure 4. We define the region to be sign-binarized as R × S × Ct, where Ct = max(C, kC∗) and k ∈ Z+. For a given β, W^quant = β U, where W ∈ R^(R×S×Ct) and U ∈ {0, 1}^(R×S×Ct). Hence, this results in *global ternarization* for a DNN block. Please refer to Section 5.1 for an ablation of PLUM's DNN inference on an Intel CPU.

| Arch      | FP    | T     | B     | SB    |
|-----------|-------|-------|-------|-------|
| ResNet20  | 92.10 | 90.86 | 90.20 | 90.05 |
| ResNet32  | 92.90 | 92.03 | 91.51 | 91.55 |
| ResNet44  | 93.30 | 92.40 | 91.93 | 91.98 |
| ResNet56  | 93.63 | 92.90 | 92.42 | 92.52 |
| ResNet110 | 93.83 | 93.33 | 92.64 | 92.68 |

Table 1: PLUM's signed-binary does not lead to accuracy degradation w.r.t. binary while leading to efficient inference.

| %{0, 1} filters | %{0, −1} filters | Acc   |
|-----------------|------------------|-------|
| 0               | 1                | 88.84 |
| 0.25            | 0.75             | 89.32 |
| 0.5             | 0.5              | 90.05 |
| 0.75            | 0.25             | 89.30 |
| 1               | 0                | 89.07 |

Table 2: An equal percentage of usage of {0, 1} and {0, −1} regions leads to the best model accuracy in PLUM.

| EDE†     | Acc  |
|----------|------|
| Disabled | 88.4 |
| Enabled  | 88.7 |

Table 3: Enabling adapted EDE† during backpropagation improves model accuracy in PLUM.

| Region   | Acc  |
|----------|------|
| Ct = C   | 88.6 |
| Ct = C/2 | 87.9 |

Table 4: Region size of Ct = C (w.r.t. R × S × Ct) leads to competitive accuracy in PLUM.

| ∆                | Acc   |
|------------------|-------|
| 0.01 × max \|W\| | 90.01 |
| 0.05 × max \|W\| | 90.05 |

Table 5: Quantization functions used in PLUM are not sensitive to the choice of threshold ∆.

Ablations for PLUM: ResNet models trained on the CIFAR-10 dataset. Tables 2–5 use ResNet20. Default configurations for our method are marked in grey. † Adapted Error Decay Estimator (EDE) (Qin et al., 2021); details in Section 3.2.3.

## 3.2.2 Quantization In PLUM: Signed Binary

The challenges arise from the need to design signed-binary quantization functions that enable the model to learn meaningful representations. Local binarization must accommodate a global ternarization strategy, which presents additional challenges, such as determining suitable regions for local binarization and assigning these regions to their designated signed-binary quantization functions. In response to these challenges, we have designed our method, supported by systematic ablation studies conducted on ResNets trained on the CIFAR-10 dataset (see supplementary I for additional ablations).

**Signed Binary Quant Functions:** Our method involves the strategic use of two distinct quantization functions (as shown in Figure 3). In this section, we design these functions, characterized by the value sets {0,1}, where the sign-factor β1 equals 1, and {0,−1}, where the sign-factor β−1 is −1. The scaling factor αi mirrors βi, where i = ±1. Following Zhu et al. (2016), we define the threshold value as ∆ = 0.05 × max(|W|). To assess the efficacy and sensitivity of the chosen threshold, we experiment with different ∆s using signed-binary quantization (see Section 4.2.2) and find stable performance across various configurations, as depicted in Table 5.
These are defined as:

$$\mathbf{W}^{quant}=\begin{cases}\alpha_{1}&\text{if }\mathbf{W}\geq\Delta\text{ and }\beta=1\\ \alpha_{-1}&\text{if }\mathbf{W}\leq-\Delta\text{ and }\beta=-1\\ 0&\text{otherwise}\end{cases}\tag{3}$$

$$\frac{\partial L}{\partial\mathbf{W}}=\begin{cases}\alpha_{1}\times\dfrac{\partial L}{\partial\mathbf{W}^{quant}}&\text{if }\mathbf{W}>\Delta\text{ and }\beta=1\\ -\alpha_{-1}\times\dfrac{\partial L}{\partial\mathbf{W}^{quant}}&\text{if }\mathbf{W}<-\Delta\text{ and }\beta=-1\\ 1\times\dfrac{\partial L}{\partial\mathbf{W}^{quant}}&\text{otherwise}\end{cases}\tag{4}$$

**Intra-Filter Signed-Binary Quant** In this section, we delve deeper into signed-binary quantization by varying Ct with respect to the constant value C, aiming to identify the optimal setting for Ct. Table 4 assesses the impact on representation quality of adjusting the Ct value during training. The results show that intra-filter signed-binary co-design preserves a competitive level of representation quality even with a reduced Ct, and that the setting Ct = C works best.

**Inter-Filter Signed-Binary Quant** Building on the intra-filter study, the setting Ct = C is the simplest signed-binary quantization, which we term inter-filter signed-binary quantization. In essence, this method assigns each filter to one of the two distinct signed-binary quantization functions, enabling an efficient representation since Ct = C. We contrast it with binary and ternary quantization schemes in an apples-to-apples comparison. Table 1 shows ResNets of varying depths trained using different quantization schemes. Signed-binary quantization maintains comparable accuracy to traditional quantization approaches across different architectures. The largest chunk of a convolution filter that can be processed during PLUM inference at a given instance is the entire convolution filter itself, *i.e.*, Ct = C. Inter-filter signed-binary quantization therefore automatically results in PLUM inference processing a region corresponding to one quantization function at any given time.

**Value Assignment of Signed-Binary Quant Functions** To fine-tune the network's performance, it is crucial to examine how different value assignments within filters affect accuracy. This experiment varies the proportions of filters with {0,1} and {0,−1} value assignments. As illustrated in Table 2, we explore the balance between positive and negative binary values to find the best way to optimize network performance. The data indicate that an equal mix of these values is most beneficial, clearly outperforming methods that use only a single type of sparse quantized function. This observation underscores the importance of using both positive and negative binary regions to learn representations effectively.
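A minimal PyTorch sketch of the forward pass in Eq. (3), applied per filter with an inter-filter β assignment (the straight-through backward of Eq. (4) is omitted, α is folded into the sign, and whether ∆ is computed per filter or per layer is our assumption here):

```python
import torch

def signed_binary_quantize(w: torch.Tensor, beta: int) -> torch.Tensor:
    """Eq. (3): maps a latent FP filter to {0, +1} when beta = +1,
    or to {0, -1} when beta = -1."""
    delta = 0.05 * w.abs().max()            # threshold; scope of max() assumed
    if beta == 1:
        return (w >= delta).float()         # values in {0, 1}
    return -(w <= -delta).float()           # values in {0, -1}

# Inter-filter assignment: alternate beta across the K filters of (K, C, R, S).
w = torch.randn(4, 8, 3, 3)
w_quant = torch.stack([signed_binary_quantize(f, 1 if k % 2 == 0 else -1)
                       for k, f in enumerate(w)])
```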
## 3.2.3 Backpropagation In PLUM

In this section, we delve into tuning latent full-precision weights during training to enhance model accuracy. Traditional binary networks experience fluctuations in latent weight assignments at zero due to the use of a single quantization function, the sign function, manifesting as a peak of latent full-precision weights at zero (Bai et al., 2018; Qin et al., 2021). This issue is exacerbated in PLUM. Employing two quantization functions, such that individual signed-binary regions co-dependently learn diverse features, results in two distinct peaks at the non-zero values ±∆, as illustrated in Figure 6b. This dual-peak structure necessitates a process to ensure stable and accurate weight updates. In response, we have adapted traditional binary's EDE (Qin et al., 2021), which approximates the derivative of the sign function during backward propagation to improve gradient accuracy and update capability. PLUM stabilizes fluctuations in latent full-precision weights around ∆ = ±0.05 max(W) with EDE, thereby fostering improved representations as delineated in Table 3. This is done by first determining t and k using

$$t = T_{min} \cdot 10^{\frac{i}{N} \times \log \frac{T_{max}}{T_{min}}}, \qquad k = \max\left(\frac{1}{t}, 1\right),$$

where i is the current epoch, N is the total number of epochs, T_min = 10^{-1}, and T_max = 10^{1}. Subsequently, we adapt g′(·) for PLUM,

$$g'(x) = kt\left(1 - \tanh^{2}\!\big(t(x \pm \Delta)\big)\right),$$

and compute the gradients with respect to w as

$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial Q_{w}(w)}\, g'(w).$$

## 4 PLUM Pushes The Pareto-Frontier

In this section, we evaluate the efficacy of PLUM in preserving the quality of learned representations when transitioning from the conventional binarization technique. Initially, we train ResNets on the CIFAR10 and ImageNet datasets using PLUM and benchmark the results against existing *state-of-the-art* methods in subsection 4.1. Additionally, subsection 4.2 examines the latent full-precision and quantized weights in detail to provide a holistic understanding of the trained model's retained representational capacity.

## 4.1 Comparison With Other Works

To ascertain the true potential of PLUM co-design, we compare PLUM's signed-binary quantization with state-of-the-art binary quantization methods. We benchmark on the CIFAR10 and ImageNet datasets by training ResNet-{20,32} and ResNet-{18,34} models. Model parameters are of two types: effectual (non-zero valued) parameters and ineffectual (zero valued) parameters (Wu et al., 2021; Nayak et al., 2023). Since only effectual parameters lead to computation during inference, we compare different binary methods on two axes: model accuracy on the Y-axis and effectual binary model parameters on the X-axis (refer to supplementary D & C for baselines and setup). As shown in Figure 5, PLUM exhibits Pareto optimality against state-of-the-art methods. It achieves a +0.7% and +3.5% increase in accuracy on the CIFAR10 and ImageNet datasets, respectively, coupled with a 1.4× and 1.5× reduction in effectual parameters, respectively. Concurrently, it realizes approximately a ∼2.5× and ∼2.8× decrease in effectual parameters for equivalent models on the respective datasets. These results emphasize the potential of PLUM co-design in creating efficient and accurate models.

![7_image_0.png](7_image_0.png)

Figure 5: Comparison of PLUM and conventional binary methods on CIFAR10 and ImageNet datasets. PLUM pushes the Pareto frontier, providing superior accuracy with respect to effectual parameters and exhibiting a significant reduction in effectual parameters of equivalent models.

## 4.2 Visualizing Latent FP And Quantized Weights

**Latent FP weights.** Binarization leads to a zero-mean Laplacian distribution of latent full-precision weights (Xu et al., 2021). For ResNet18 trained on ImageNet (Figure 6), we observe that despite the pronounced sparsity introduced by signed-binary quantization, the distribution of latent FP weights across an entire convolutional block resembles a zero-mean Laplacian distribution.
Signed-binary filters are neither zero-mean nor Laplacian-distributed (supplementary H). This shows that even if individual filters do not align with a zero-mean or Laplacian distribution, the convolution block as a whole resembles these characteristics. Moreover, the four distinctive peaks overlaying the Laplacian-resembling distribution of full-precision latent weights arise because: (a) the two peaks at the extremes come from weights clamped at +1 and −1, and (b) the two intermediate peaks come from the threshold functions defined at ±∆, analogous to the zero-valued peak observed in binary networks due to the sign quantization function.

**Quantized weights.** We investigate the distribution of quantized weights in a trained signed-binary model by plotting the percentage distribution of quantized signed-binary convolutional blocks in ResNet18 trained on ImageNet. Figure 6 reveals a roughly consistent, equal proportion of positive and negative weights. A roughly equal proportion of positive and negative weights is also observed in ternary quantization (Zhu et al., 2016). However, a crucial distinction arises in the distribution of positive and negative weights within a convolutional layer. As in ternary, both positive- and negative-valued weights are present within a layer; however, signed-binary co-design *bifurcates* them across different filters. This design decision improves inference efficiency.

## 5 PLUM Improves Inference Efficiency

This section shows that transitioning from a conventional partitioned design for binary to the PLUM co-design framework enhances inference efficiency for ResNet18 trained on ImageNet. We highlight PLUM's superiority over conventional partitioned designs using binary and ternary by considering the repetition-sparsity trade-off. Additionally, we explore how weight sparsity in DNNs, specifically in signed-binary ResNet18 trained on ImageNet, benefits DNN inference.

![8_image_0.png](8_image_0.png)
(a) **Quantized Weights** (b) **Latent Full Precision Weights**
![8_image_1.png](8_image_1.png)

Figure 6: **On the left: Quantized Weights** are depicted, illustrating a distribution reminiscent of ternary networks but with weights distinctively segregated across filters, enhancing weight repetition and inference efficiency. **On the right: Latent Full Precision Weights** in a signed-binary conv block maintain a *blue* distribution akin to binary's Laplacian. In signed-binary, *total parameters* can be divided between *positive* and *negative* signed-binary filters. These parameters can be subdivided into non-zero-valued effectual and zero-valued ineffectual ones. Notably, while the total parameters resemble binary's zero-mean Laplace distribution, individual filters do not. The effectual and ineffectual parameters' green-red and red-green distributions resemble Laplace and Gaussian distributions, respectively. Signed-binary reduces computation during inference compared to binary, as it reduces effectual parameters from the blue to the green-red distribution.

## 5.1 Exploiting Repetition & Sparsity With PLUM

We posit that the performance of binary and ternary weight quantization methods during inference may be hindered by their neglect of the crucial repetition-sparsity trade-off. Weight sparsity allows us to utilize upcoming sparse tensor operation technology to accelerate inference by skipping zeros during inference (Wu et al., 2021).
Concurrently, exploiting weight repetition diminishes arithmetic operations and data movement (Hegde et al., 2018). However, traditional binary networks prioritize the maximization of weight repetition, overlooking the potential benefits of weight sparsity, while ternary networks induce weight sparsity but compromise weight repetition. In contrast, we hypothesize that the PLUM framework, being attuned to this trade-off, promises enhanced efficiency in actual device inference. To validate our hypothesis, we deploy quantized ResNet-18 models on Intel CPUs, rigorously measuring inference times under varying conditions.

**Experimental Setup and Methodology** We use SumMerge (Prabhakar et al., 2021) for performing inference of quantized and sparse DNNs on Intel CPUs (details in supplementary A). All experiments are conducted under identical test environments and methodologies. Within our experiments utilizing SumMerge, we explore two distinct configurations: (1) with sparsity support deactivated, the software does not distinguish between zero and non-zero weights, relying solely on weight repetition; (2) with sparsity support activated, the software additionally omits computations involving zero weights. We provide detailed analysis of both per-layer speedups and the aggregate speedup of different quantization strategies relative to binary quantization.

![9_image_0.png](9_image_0.png)

Figure 7: **Efficiency Analysis wrt Binary ResNet18 on Intel CPU**: Our study shows signed-binary excelling in every convolutional layer, depicted by bars for **signed-binary**: PLUM (w/ sparsity support) and w/o sparsity support; **ternary**: w/ sparsity support and w/o sparsity support; and one for **binary**, indicating its 100% density. Please refer to Figure 6. The inference computation for signed-binary w/o sparsity support is aligned with the *blue* distribution, matching binary performance. On the other hand, PLUM (w/ sparsity support) reduces computation to the green-red distribution. Because *negative* and *positive* valued quantized weights exist in separate regions of the network by design (see Figure 3), PLUM retains repetition, resulting in speedup.

**Result and Analysis** Please refer to Figures 4 and 7. We observe in Figure 7 that PLUM, i.e., signed-binary with a kernel exploiting repetition and sparsity, is the most efficient for every quantized layer and for the model overall, being 1.26× and 1.75× faster, respectively, when exploiting both repetition and sparsity. This result can be explained as follows. *(A) When sparsity support is turned off*: the software relies only on repeated values within the weight tensor for speedup. Because binary and signed-binary have two unique values per convolutional filter, they take similar time for DNN inference. Ternary is much slower, as it has three unique values per convolution filter, which makes extracting efficiency from weight repetition exponentially harder. *(B) When sparsity support is turned on*: the software not only exploits the repeated values in the weight tensor but also skips computations on zero weights to improve runtime. Here we observe that ternary is slower than binary, because the reduction in work due to sparsity is not able to compensate for the exponential decrease in weight repetition.
On the other hand, our method, PLUM, does not suffer from this problem: it exploits weight repetition and weight sparsity to the fullest and is the most efficient. Thus, the PLUM framework, which relies on repetition-sparsity-aware inference, performs better than prior-art inference with binary and ternary methods.

**Arithmetic Operations** We report the reduction in arithmetic operations (Prabhakar et al., 2021; Fu et al., 2022) required for a single inference relative to binary. We find that ResNet18 trained using signed-binary takes 20% fewer operations than binary during sparsity-enabled inference. On the other hand, when trained using ternary, inference requires 35% more operations than binary during sparsity-enabled inference. Please refer to supplementary G for detailed ablations.

## 5.2 Understanding Benefits Of Sparsity During Inference

In this section, we aim to understand the impact of weight sparsity given fixed weight repetition in the PLUM framework. We change the percentage of weight sparsity when using one-bit quantization. To do this, we count the number of quantized weights with zero values and divide it by the total number of quantized weights to calculate the percentage of sparsity. We find that signed-binary ResNet-18 trained on ImageNet has 65% sparsity. Since density is (1 − sparsity) (Wu et al., 2021), this ResNet-18 has 35% density (see Figure 6a). If we switch from conventional binary to PLUM's signed-binary, we decrease the density from 100% to 35%. We would like to leverage this low density to reduce the amount of computational activity during inference.

**Throughput** In a model with low density, represented by 1/x, there is one effectual multiplication for every x total multiplications. By eliminating the ineffectual computations and the time associated with them, there is the potential to improve throughput by a factor of x (Emer et al., 2021). Given that PLUM's signed-binary has a 35% aggregate density, meaning only 35% of the multiplications are effectual, exploiting sparsity during inference can lead to a potential 2.86× increase in throughput compared to dense binary {1,−1}. In practice, the speedup we observe on real hardware is in the range of 1.26×–1.75×, as support for unstructured sparsity on general-purpose devices is an active area of research (Wu et al., 2021). This speedup is comparable to the speedups due to unstructured sparsity on general-purpose devices reported in recent papers (Gong et al., 2020; Hegde et al., 2019).

**Energy Reduction** To estimate the energy reduction due to unstructured sparsity during inference, we use the cycle-level micro-architectural simulator (Muñoz-Matrínez et al., 2021) of a sparsity-supporting ASIC (Qin et al., 2020). We take their publicly released code and use it under the default configuration (details in supplementary A). We observe that decreasing density from 100% to 35% for one-bit ResNet18 leads to a ∼2× reduction in energy during inference. Thus, switching from binary to PLUM's signed-binary would lead to significant improvements in power consumption on ASICs.
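A minimal sketch of the sparsity/density computation described above, together with the implied ideal throughput gain (the 65%/35% figures are the paper's measurements for signed-binary ResNet-18; the tensor below is synthetic):

```python
import torch

def sparsity_stats(w_quant: torch.Tensor):
    """Sparsity = zero-valued / total quantized weights; density = 1 - sparsity."""
    sparsity = (w_quant == 0).float().mean().item()
    density = 1.0 - sparsity
    return sparsity, density, 1.0 / density   # last value: ideal throughput gain

# Synthetic tensor with ~65% zeros, mimicking signed-binary ResNet-18's sparsity.
w = (torch.rand(1000, 1000) >= 0.65).float()
print(sparsity_stats(w))   # approx (0.65, 0.35, 2.86)
```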
## 6 Discussion And Future Work

The paper introduces the concept of the repetition-sparsity trade-off and proposes PLUM (Plus-Minus), a quantization-system co-design framework that improves inference efficiency while retaining accuracy. PLUM exhibits Pareto optimality compared to prior conventional methods, improving accuracy per effectual parameter and enhancing computational efficiency during inference with respect to speed, energy, and density.

However, PLUM requires training from scratch, which can be time-consuming and computationally expensive. This is similar to conventional binary quantization schemes, and we do not observe any significant training overhead for PLUM in comparison. All hyperparameters are listed in supplementary D. PLUM requires the use of two quantization functions, i.e., with value sets {1,0} and {−1,0}; using only a single quantization function (just {1,0}) could further improve inference efficiency. Following the terminology from Section 3.1, while binary can be represented using at least R × S × C × K bits, signed-binary requires one additional bit per filter to signify the corresponding signed-binary quantization function, resulting in R × S × C × K + K bits. Further, the tile size of the modern inference system should be set such that a single processing step in PLUM sees only one signed-binary quantization function. Additionally, the understudied repercussions of quantization, and consequently of signed-binary quantization, on model bias warrant further exploration. Despite these limitations, PLUM presents a promising approach for training efficient models that are both accurate and computationally efficient, particularly well-suited for applications where hardware resources are limited, such as mobile devices and embedded systems.

## 7 Reproducibility Statement

We provide all the hyperparameters for the key experiments, including instructions on how to train the models, in supplementary C. Further, since our work is about quantization-system co-design, we provide all the hyperparameters and configurations needed to reproduce our inference experiments in supplementary A. Finally, implementation details and baselines for our method can be found in supplementary D.

## References

Y. Bai, Y. X. Wang, and E. Liberty. Proxquant: Quantized neural networks via proximal operators. In *ICLR'19*, arXiv (2018), 2018.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. *CoRR*, abs/2005.14165, 2020.

Adrian Bulat and Georgios Tzimiropoulos. Xnor-net++: Improved binary neural networks. *arXiv preprint arXiv:1909.13863*, 2019.

Adrian Bulat, Georgios Tzimiropoulos, Jean Kossaifi, and Maja Pantic. Improved training of binary networks for human pose estimation and image recognition. *arXiv preprint arXiv:1904.05868*, 2019.

M. Courbariaux, Y. Bengio, and J. P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In *NIPS'15*, 2015.

M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. In *NIPS'16*, 2016.

M. Cowan, T. Moreau, T. Chen, J. Bornholt, and L. Ceze. Automatic generation of high-performance quantized machine learning kernels. In *CGO'20*, 2020.

Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, and Weisheng Zhao. Sparsetrain: Exploiting dataflow sparsity for efficient convolutional neural networks training.
In *2020 57th ACM/IEEE Design Automation Conference (DAC)*, pp. 1–6. IEEE, 2020. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In *CVPR'09*, 2009. Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. Learning to generate product reviews from attributes. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 1, Long Papers, volume 1, pp. 623–632, 2017. Joel Emer, Angshuman Parashar, Vivienne Sze, Po-An Tsai, and Yannan Nellie Wu. Sparse tensor accelerators: Abstraction and modeling. In *ISCA Tutorial 2021*, 2021. Cheng Fu, Hanxian Huang, Bram Wasti, Chris Cummins, Riyadh Baghdadi, Kim Hazelwood, Yuandong Tian, Jishen Zhao, and Hugh Leather. Q-gym: An equality saturation framework for dnn inference exploiting weight repetition. In Proceedings of the International Conference on Parallel Architectures and Compilation Techniques, pp. 291–303, 2022. Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4852–4861, 2019. Zhangxiaowen Gong, Houxiang Ji, Christopher W Fletcher, Christopher J Hughes, Sara Baghsorkhi, and Josep Torrellas. Save: Sparsity-aware vector engine for accelerating dnn training and inference on cpus. In *2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)*, pp. 796–810. IEEE, 2020. Alex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In *IEEE international conference on acoustics, speech and signal processing*, 2013. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In *CVPR'16*, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In *Proceedings of the IEEE international conference on* computer vision, pp. 1026–1034, 2015. K. Hegde, J. Yu, R. Agrawal, M. Yan, M. Pellauer, and C. Fletcher. Ucnn: Exploiting computational reuse in deep neural networks via weight repetition. In *ISCA'18*, 2018. Kartik Hegde, Hadi Asghari-Moghaddam, Michael Pellauer, Neal Crago, Aamer Jaleel, Edgar Solomonik, Joel Emer, and Christopher W Fletcher. Extensor: An accelerator for sparse tensor algebra. In *Proceedings* of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, pp. 319–333, 2019. Q. Hu, P. Wang, and J. Cheng. From hashing to cnns: Training binaryweight networks via hashing. In AAAI'18, 2018. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015. Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, and Akhil Mathur. Collossl: Collaborative selfsupervised learning for human activity recognition. *Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies*, 6(1):1–28, 2022. Yash Jain, David Chan, Pranav Dheram, Aparna Khare, Olabanji Shonibare, Venkatesh Ravichandran, and Shalini Ghosh. Multi-stage multi-modal pre-training for automatic speech recognition. arXiv preprint arXiv:2403.19822, 2024. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. A. Krizhevsky and G. Hinton. 
Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, and Aleksander Madry. ffcv. https://github.com/libffcv/ffcv/, 2022.
Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. *arXiv preprint arXiv:1605.04711*, 2016.
Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. *Advances in Neural Information Processing Systems*, 30, 2017.
Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 722–737, 2018.
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In *ICML'13*, 2013.
Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, and Roberto Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation. In *Conference on Robot Learning (CoRL)*, 2021.
Francisco Muñoz-Martínez, José L. Abellán, Manuel E. Acacio, and Tushar Krishna. Stonne: Enabling cycle-level microarchitectural simulation for dnn inference accelerators. In *2021 IEEE International Symposium on Workload Characterization (IISWC)*, 2021.
Nandeeka Nayak, Toluwanimi O Odemuyiwa, Shubham Ugare, Christopher Fletcher, Michael Pellauer, and Joel Emer. Teaal: A declarative framework for modeling sparse tensor accelerators. In *Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture*, pp. 1255–1270, 2023.
H. Pouransari, Z. Tu, and O. Tuzel. Least squares binary quantization of neural networks. In *CVPRW'20*, 2020.
Rohan Prabhakar, Sachit Kuhar, Rohit Agrawal, Christopher J Hughes, and Christopher W Fletcher. Summerge: An efficient algorithm and implementation for weight repetition-aware dnn inference. In *Proceedings of the ACM International Conference on Supercomputing*, pp. 279–290, 2021.
Eric Qin, Ananda Samajdar, Hyoukjun Kwon, Vineet Nadella, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul, and Tushar Krishna. Sigma: A sparse and irregular gemm accelerator with flexible interconnects for dnn training. In *2020 IEEE International Symposium on High Performance Computer Architecture (HPCA)*, pp. 58–70. IEEE, 2020.
Haotong Qin, Ruihao Gong, Xianglong Liu, Mingzhu Shen, Ziran Wei, Fengwei Yu, and Jingkuan Song. Forward and backward information retention for accurate binary neural networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2250–2259, 2021.
Haotong Qin, Xiangguo Zhang, Ruihao Gong, Yifu Ding, Yi Xu, and Xianglong Liu. Distribution-sensitive information retention for accurate binary neural network. *International Journal of Computer Vision*, 131(1):26–47, 2023.
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In *ECCV'16*, 2016.
Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S Emer. Efficient processing of deep neural networks. *Synthesis Lectures on Computer Architecture*, 15(2):1–341, 2020.
Hanrui Wang, Zhekai Zhang, and Song Han. Spatten: Efficient sparse attention architecture with cascade token and head pruning. In *2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA)*, pp. 97–110. IEEE, 2021.
Yannan Nellie Wu, Po-An Tsai, Angshuman Parashar, Vivienne Sze, and Joel S Emer. Sparseloop: An analytical, energy-focused design space exploration methodology for sparse tensor accelerators. In *2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)*, pp. 232–234. IEEE, 2021.
Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, and Rongrong Ji. Recu: Reviving the dead weights in binary neural networks. In *Proceedings of the International Conference on Computer Vision (ICCV)*, pp. 5198–5208, 2021.
Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In *European Conference on Computer Vision (ECCV)*, 2018.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. *arXiv preprint arXiv:1606.06160*, 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. *arXiv preprint arXiv:1612.01064*, 2016.

## A Experiment Setup: Efficiency

**Deploying on CPUs.** We use SumMerge Prabhakar et al. (2021) for this task. We run all experiments on an Intel Xeon Gold 6226 CPU. In order to make our test environment as close as possible to that of the authors of Prabhakar et al. (2021), we disable simultaneous multi-threading and dynamic frequency scaling, and enable 2MB huge pages. The test methodology is exactly the same as used by the authors of Prabhakar et al. (2021), i.e., each experiment is run 50 times while the machine is unloaded and the values for the run with the lowest execution time are reported. All arithmetic operations are in floating point. All DNN inference experiments are subject to identical test environments and methodology.

**ASIC.** We use STONNE Muñoz-Martínez et al. (2021), a cycle-level microarchitectural simulator, for the DNN inference accelerator SIGMA Qin et al. (2020) in this experiment. We use the docker image released by the authors of Muñoz-Martínez et al. (2021). We use the standard configuration of SIGMA with 256 multiplier switches, 256 read ports in SDMemory, and 256 write ports in SDMemory. The reduction network is set to ASNETWORK and the memory controller is set to SIGMA_SPARSE_GEMM. We use the SimulatedConv2d function in the PyTorch frontend version of STONNE. For a given convolutional layer, we run STONNE twice, once with 0% sparsity and once with 65% sparsity. We calculate the reduction in energy consumption by dividing the energy of the dense convolutional layer by the energy of the sparse convolutional layer. Since the weights' precision (or bit-width) is a parameter of SIGMA, the reduction in energy due to sparsity when compared to the dense model is not a function of the precision of the weights of the DNN.

## B Visualizing Exploitation of Repetition and Sparsity

Figure 8: Visualization of repetition and sparsity during inference in modern systems: (1) input and two filters that need to be multiplied; (2) naive multiplication of weights and activations to create output 1 and output 2;
(3) re-ordering of weights and activations to simplify work within each filter (exploiting the intra-filter repetition phenomenon (Hegde et al., 2018)); (4) using partial sums to reduce the work across filters (exploiting the inter-filter repetition phenomenon (Hegde et al., 2018)); (5) exploiting sparsity (by acknowledging repetition of zero weights as a special case of weight repetition (Sze et al., 2020; Hegde et al., 2018; Prabhakar et al., 2021)).

## C Experimental Setup: Accuracy

**CIFAR10.** The data loader pipeline consists of simple augmentations: padding by 4 pixels on each side, random crop to 32 × 32, random horizontal flip with probability 0.5, and normalization. We train from scratch for 350 epochs and use the Adam optimizer Kingma & Ba (2014). We start with an initial learning rate of 0.01 and reduce it by a factor of 10 at epochs 150, 200, and 320. For an apples-to-apples comparison with binary and ternary, we do a sweep over batch sizes {16, 32, 64, 128, 256} and activation functions (ReLU, PReLU, TanH) and report the best top-1 validation accuracy. For the ablations on (1) value assignment percentage and (2) comparison with binary networks with comparable effectual operations, we select the batch size to be 32 and the activation function to be PReLU. We compare the binary and signed-binary methods on the non-zero valued parameters of their quantized layers, as other aspects would be similar. When comparing against prior-art works, we run these methods on our setup and report the numbers. Further ablations on batch size and activation function are provided in the appendix.

**ImageNet.** We train ResNet-18 He et al. (2016) using SBWN on ImageNet Deng et al. (2009). We use standard practices used to train binary networks, like (1) normalizing the input using batch-norm Ioffe & Szegedy (2015) before convolution instead of after convolution Rastegari et al. (2016), and (2) not quantizing the first and the last layers Zhu et al. (2016); Pouransari et al. (2020); Li et al. (2016); Bai et al. (2018). We use a first-order polynomial learning-rate annealing schedule with the Adam optimizer Kingma & Ba (2014) along with PReLU He et al. (2015); Maas et al. (2013). We use the FFCV dataloader Leclerc et al. (2022) with simple augmentations: random resize crop to 224×224, random horizontal flipping, and color jitter with (brightness, contrast, saturation, hue) set to (0.4, 0.4, 0.4, 0). We decrease the learning rate from 2.0 × 10⁻⁴ to 2.0 × 10⁻⁸ while training for 320 epochs, do not use weight decay, and use a batch size of 256 for training. We compare the binary and signed-binary methods on the non-zero valued parameters of their quantized layers, as other aspects would be similar. When comparing against prior-art works, we report numbers as-is from the literature due to the high compute cost of running these experiments. Furthermore, for ResNet-34, we simply increase the model size.

**Implementation.** Signed-binary is a local binarization scheme, i.e., the quantization function takes the full-precision latent weights of a region of a convolutional layer as input and maps them to either {0, 1}^(R×S×C_t) or {0, −1}^(R×S×C_t). The values of the quantization function for these predetermined regions of a convolutional layer are determined randomly before training commences and remain unchanged. Different regions can have distinct quantization functions for a convolutional layer. This framework categorizes these regions into two buckets, optimizing efficiency by grouping them based on their quantization function values for more streamlined training.
For example, if C_t = C, signed binarization becomes a per-filter quantization scheme and each filter will have a different quantization function. We quantize the full-precision latent weights of a convolutional layer from R^(R×S×C×K) to {0, 1}^(R×S×C×K×P) and {0, −1}^(R×S×C×K×(1−P)), where P is the percentage of filters whose quantization functions have the values {0, 1}, such that K × P is an integer.

## D Baselines

Baselines for comparison with prior work: Figure 5 shows the comparison against prior-art methods. The numbers are reported from the literature: DIR-Net Qin et al. (2023), LQ-Net Zhang et al. (2018), DSQ Gong et al. (2019), BC Courbariaux et al. (2015), ProxQuant Bai et al. (2018), IR-Net Qin et al. (2021), DoReFA Zhou et al. (2016), SQ-BWN Dong et al. (2017), BWN Rastegari et al. (2016), BWNH Hu et al. (2018), ABC-Net Lin et al. (2017), BWHN Hu et al. (2018), LS Pouransari et al. (2020), Bi-Real Liu et al. (2018).

## E Signed-Binary vs Binary wrt Effectual Parameters

We would like to compare binary with signed-binary when the DNN has the same number of non-zero parameters. Signed-binary ResNet trained on CIFAR10 has slightly greater than 50% sparsity. If we reduce the total number of parameters of the binary ResNet by half, the resulting model has a comparable number of non-zero weights to signed-binary ResNet. This is done by reducing depth (see Table 6a) and by reducing width (see Table 6b). We train these models under identical conditions (setup details in the appendix). To clarify, rows 1 and 2 of Table 6 have the same number of total parameters, while rows 1 and 3 have a comparable number of non-zero parameters. Thus, signed-binary leads to a higher accuracy than binary when both methods have a comparable number of effectual operations.

| Quant | # Parameters | Depth | Acc |
|---------|----------------|---------|--------|
| SB | 0.46M | 32 | 91.55% |
| B | 0.46M | 32 | 91.22% |
| B | 0.27M | 20 | 90.16% |

(a) **Reducing the number of parameters by reducing depth**: We observe that the accuracy of binary is 1.3% lower than signed-binary with comparable non-zero weights.

| Quant | # Parameters | Width | Acc |
|---------|----------------|---------|--------|
| SB | 0.27M | 1× | 90.05% |
| B | 0.27M | 1× | 90.20% |
| B | 0.14M | ⌈0.7×⌉ | 88.5% |

(b) **Reducing the number of parameters by reducing width**: We observe that the accuracy of binary is 1.7% lower than signed-binary with comparable non-zero weights.

Table 6: **Binary vs Signed-Binary with comparable non-zero weights**: We observe that signed-binary achieves higher accuracy when compared to binary with a comparable number of effectual operations.

## F Additional Ablations on CIFAR-10

| Batch Size | Accuracy (Top-1) |
|--------------|--------------------|
| 16 | 89.44 |
| 32 | 90.05 |
| 64 | 89.62 |
| 128 | 89.59 |

(a) **Ablation on batch size**: The setup is identical across batch sizes and the non-linearity used is PReLU. We observe a decrease in accuracy when a high batch size of 256 is used.

| Non-Linearity | Accuracy (Top-1) |
|-----------------|--------------------|
| ReLU | 88.64 |
| PReLU | 90.05 |
| TanH | 88.75 |
| LReLU | 89.22 |

(b) **Ablation on non-linearity**: The setup is identical across non-linearities and the batch size used is 32. We observe that PReLU works best for our method.

Table 7: **Additional Ablations on CIFAR10**: Ablations on batch size and non-linearity for SB.
We perform ablations on (1) batch size and (2) non-linearity on the CIFAR10 (Krizhevsky & Hinton, 2009) dataset and report the numbers in Tables 7a and 7b, respectively. The setup is the same as mentioned above. We observe that, for our method, there is a drop in accuracy with a higher batch size, PReLU (Maas et al., 2013) works best, and the method is not sensitive to the choice of delta.

## G Arithmetic Reduction Ablation

The fewer arithmetic operations required during inference for a given layer, the more efficient the quantization scheme is for both CPUs and GPUs (Hegde et al., 2018; Prabhakar et al., 2021; Fu et al., 2022). This can be measured using arithmetic reduction (Hegde et al., 2018; Prabhakar et al., 2021; Fu et al., 2022), which is defined as the ratio of arithmetic operations required using naive dense computation (unaware of repetition and sparsity) to the arithmetic operations taken during repetition-sparsity-aware inference for a given DNN block. This metric indicates inference efficiency at the algorithmic level (Hegde et al., 2018; Prabhakar et al., 2021; Fu et al., 2022).

Figure 9 compares the arithmetic reduction across different quantization schemes. The experiment follows the original test settings and methodologies described by the authors of (Prabhakar et al., 2021), using synthetic, uniformly distributed weights in the DNN block. The results show that signed-binary quantization is the most efficient, providing the highest arithmetic reduction across all conv layers.

We extend this experiment by varying the sparsity percentage from 0% to 100%, with equal percentages of positive and negative weights across all three quantization schemes. Figure 10 compares the arithmetic reduction (Y-axis) and the percentage of sparsity (X-axis) for a convolutional block of shape [3,3,512,512]. Since binary quantization uses +1, -1 and does not leverage sparsity, its performance is represented by a horizontal line. Ternary quantization initially behaves like binary when sparsity is negligible and performs worse with moderate sparsity before improving under high sparsity conditions. This can be explained by the repetition-sparsity trade-off. Signed-binary quantization consistently outperforms ternary due to higher weight repetition with equal sparsity and outperforms binary due to the presence of sparsity. The highly efficient behavior of signed-binary under both very low and very high sparsity can be explained as follows: as sparsity approaches 0%, signed-binary converts into monolithic filters containing one unique weight per filter, and with high sparsity, most operations become ineffectual, resulting in significant savings.

Figure 9: Arithmetic Reduction (higher is better) for Binary, Ternary, and Signed-Binary across different DNN blocks.

Figure 10: Arithmetic Reduction (higher is better) for Binary, Ternary, and Signed-Binary across different degrees of sparsity.

## H Latent Full-Precision Weights & Standardization

Binary quantization has been shown to improve in accuracy from standardization of latent full-precision weights. However, this trend is not observed in signed-binary networks.
| Standardization Strategy | Accuracy (%) |
|-----------------------------|----------------|
| Local Signed-Binary Regions | 59.1 |
| Global Signed-Binary Block | 61.2 |
| No Standardization | 61.4 |

Table 8: Comparison of Standardization Strategies on Accuracy

Figure 11: Distribution of Latent Full-Precision Weights: (a) negative SB filters, (b) positive SB filters, (c) SB conv block. While the local signed-binary weights are neither zero-mean nor Laplacian, the entire conv block follows a zero-mean Laplacian distribution. The peaks in the image are because of clipping at ±1 along with thresholding at ±∆.

## I Additional ImageNet Ablations

We provide additional ImageNet ablations by training the ResNet-18 architecture.

| %{0,1} filters | %{0,-1} filters | Acc |
|------------------|-------------------|-------|
| 1 | 0 | 55.23 |
| 0.25 | 0.75 | 61.94 |
| 0.5 | 0.5 | 62.29 |

Table 9: Accuracy with Filters

| EDE† | Acc |
|----------|-------|
| Disabled | 62.73 |
| Enabled | 63.17 |

Table 10: EDE† Accuracy

| Delta ∆ | Acc |
|-----------|------|
| 0.01 × max \|W\| | 64.1 |
| 0.05 × max \|W\| | 64.3 |

Table 11: Delta Accuracy

## J Additional Dataset Ablations

We provide additional dataset ablations below.

| Model | Dataset | Accuracy Signed Binary | Accuracy Full Precision |
|----------|--------------|--------------------------|---------------------------|
| VGG | CIFAR10 | 92.9 | 93.8 |
| AlexNet | SVHN | 97.2 | 97.7 |
| ResNet18 | CIFAR100 | 75.83 | 77.8 |
| ResNet18 | TinyImageNet | 56.9 | 59.72 |

Table 12: Accuracy Comparison between Signed Binary and Full Precision Models

## K Datasets

| Dataset | License | Source |
|-----------|----------------|------------|
| ImageNet | Non-Commercial | ILSVRC2012 |
| CIFAR10 | N/A | CIFAR |

Table 13: **Dataset with Licenses**: License and source of the datasets used.

Licenses of the ImageNet (Deng et al., 2009) and CIFAR10 (Krizhevsky & Hinton, 2009) datasets used in this paper are listed in Table 13. Every accuracy reported in this paper is on the validation set of the dataset. ImageNet and CIFAR10 are standard publicly used datasets. Since they do not own their images, they do not have a release license. Actual images may have their own copyrights, but ImageNet provides stipulations for using their dataset (for non-commercial use). We do not recommend using the resulting signed-binary models trained on these datasets for any commercial use.
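To make the region-wise quantization described in supplementary C concrete, the following minimal numpy sketch implements the per-filter special case (C_t = C). This is an illustration rather than the paper's released code: the function and variable names are ours, the threshold follows the ∆ = fraction-of-max \|W\| rule ablated in Table 11, and any per-filter scaling applied during training is omitted.

```python
import numpy as np

def signed_binary_quantize(w, p=0.5, delta=0.05, seed=0):
    """Illustrative per-filter signed-binary quantization (C_t = C).

    w: full-precision latent weights in (K, C, R, S) layout.
    p: fraction of filters assigned the {0, +1} quantization function;
       the remaining filters use {0, -1}. K * p is assumed to be an integer.
    delta: threshold as a fraction of max |w| (cf. Table 11); weights with
       magnitude below the threshold are mapped to zero.
    """
    K = w.shape[0]
    rng = np.random.default_rng(seed)
    # Quantization functions are fixed randomly before training commences.
    positive = rng.permutation(K) < int(K * p)  # True -> {0, +1} filter
    threshold = delta * np.abs(w).max()
    q = np.zeros_like(w)
    for k in range(K):
        if positive[k]:
            # {0, +1} filter: keep sufficiently large positive weights.
            q[k] = np.where(w[k] > threshold, 1.0, 0.0)
        else:
            # {0, -1} filter: keep sufficiently large negative weights.
            q[k] = np.where(w[k] < -threshold, -1.0, 0.0)
    return q, positive

w = np.random.randn(16, 8, 3, 3)
q, signs = signed_binary_quantize(w)
print("sparsity:", (q == 0).mean())  # slightly above 50%, as in appendix E
```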
Review 1:

Summary: This paper introduces PLUM, a framework that integrates DNN inference systems with quantization to leverage a repetition-sparsity trade-off for improved efficiency. PLUM's quantization method is more accurate than binary quantization, and signed binarization effectively reduces non-zero parameters within DNNs. The framework achieves a 26% speedup, doubles energy efficiency, and reduces density by 2.8× compared to binary methods, while maintaining a 66.2% top-1 accuracy on ImageNet for ResNets. PLUM offers a viable solution for deploying efficient models in resource-limited environments.

Strengths and Weaknesses:

Strengths:
1. This paper is well written and easy to follow.
2. The proposed new binary representation is interesting and seems very promising.

Weaknesses:
1. Most of the experiments are conducted on CIFAR-10, which greatly undermines the credibility of the method.
2. Speed-up experiments seem to have been conducted only on CPUs; there are no GPU results.
3. It is unclear how to combine the method with activation binarization. Can PLUM be adopted on FPGAs like a vanilla BNN?
4. The training time and cost are not clear either.

Requested Changes: See the weaknesses above.

Broader Impact Concerns: No.

==================================================

Review 2:

Summary: This paper introduces PLUM, a unified co-design framework that integrates DNN inference systems and quantization to leverage the repetition-sparsity trade-off to improve inference efficiency. Experiment results demonstrate more accurate quantization than binary quantization with the same number of non-zero weights, a 26% speedup on real hardware, and doubled energy efficiency.

Strengths and Weaknesses:

Strengths:
1. The approach presents a simple-yet-effective framework, instead of an overly complicated design.
2. I think the experiments are complete and comprehensive.

Weaknesses:
1. The models used in the experiments seem to be small-scale. Would it be possible to evaluate larger-scale models?
2. Figure presentation: Figure 3 seems to be a bit blurry; Figure 5 has overlapping legends.
3. The paper needs to be checked for typos. Appendix B appears empty.

Requested Changes: See weaknesses.

Broader Impact Concerns: N.A.

==================================================

Review 3:

Summary:
1. The authors introduce the concept of the repetition-sparsity trade-off to explain the inefficiencies in inference performance of binary and ternary quantization methods.
2. The authors propose PLUM, a quantization-system co-design framework that leverages the repetition-sparsity trade-off to improve inference efficiency while maintaining model accuracy.
3. Through extensive experiments, the authors demonstrate the effectiveness of the PLUM framework: a) PLUM's signed-binary quantization scheme achieves accuracy comparable to existing quantization methods on CIFAR10 and ImageNet datasets while using fewer effectual parameters. b) Visualization analysis reveals the characteristics of the quantized weight distribution in PLUM and its relationship with the latent full-precision weights. c) Inference experiments on Intel CPUs, measuring latency and energy consumption, validate the existence of the repetition-sparsity trade-off and showcase PLUM's advantages in accelerating inference speed and reducing energy consumption.

These experimental results indicate that the PLUM framework can significantly improve model inference performance while maintaining accuracy.

Strengths and Weaknesses:

Strengths:
1. The concept of the repetition-sparsity trade-off provides a novel and insightful perspective for analyzing existing quantization methods, helping to better understand these methods' limitations.
2. The PLUM framework demonstrates the importance of co-designing quantization methods with inference systems, offering a new approach for developing efficient inference solutions.
3. Through comprehensive experiments, the authors thoroughly demonstrate the effectiveness of the PLUM framework, including accuracy comparisons on standard benchmark datasets, visualization analysis of weight distributions, and inference performance evaluations on real hardware platforms.
4. The visualization analysis in the paper provides interesting insights into the learned features of the PLUM quantization scheme, helping readers better understand how the method works.
5. The inference experiments on Intel CPUs not only prove the effectiveness of the PLUM framework but also provide empirical evidence for the repetition-sparsity trade-off, strengthening the core argument of the paper.

Weaknesses:
1. The ablation studies for PLUM are conducted on a limited scale, which weakens the persuasiveness of the experiments. More comprehensive ablation studies on a larger scale would strengthen the empirical evidence supporting the effectiveness of the proposed method.
2. While the authors demonstrate the effectiveness of PLUM on image classification tasks using ResNet architectures, it would be beneficial to see the framework's performance on other tasks (e.g., object detection, segmentation) and with different network architectures to assess its generalizability.
3. The paper could benefit from a more detailed discussion of the limitations of the PLUM framework, such as any potential trade-offs between model size, accuracy, and inference efficiency.
4. The authors could provide more insights into the selection of hyperparameters for the PLUM framework, such as the choice of quantization function value sets and the impact of different choices on performance.
5. The paper would benefit from a discussion of the potential challenges in deploying PLUM on other hardware platforms (e.g., GPUs, edge devices) and any specific optimizations that might be required.
6. While the authors mention that PLUM requires training from scratch, which can be time-consuming and computationally expensive, they do not provide a detailed analysis of the training overhead compared to other quantization methods or the potential impact on the practical adoption of the framework.

Requested Changes:

Critical changes:
1. Conduct more comprehensive ablation studies on a larger scale to strengthen the empirical evidence supporting the effectiveness of the PLUM framework. This will help address the concern about the limited scale of the current ablation experiments and increase the persuasiveness of the results.
2. Provide a more detailed discussion of the limitations of the PLUM framework, including any potential trade-offs between model size, accuracy, and inference efficiency. This will help readers better understand the scope and applicability of the proposed method.
3. Include a detailed analysis of the training overhead of PLUM compared to other quantization methods, and discuss the potential impact on the practical adoption of the framework. This information is crucial for practitioners considering the use of PLUM in real-world scenarios.

Suggested improvements:
4. Evaluate the performance of the PLUM framework on other tasks (e.g., object detection, segmentation) and with different network architectures to demonstrate its generalizability. This will strengthen the paper's contribution and broaden its impact.
5. Provide more insights into the selection of hyperparameters for the PLUM framework, such as the choice of quantization function value sets and their impact on performance. This will help readers better understand how to apply the proposed method effectively.
6. Discuss the potential challenges in deploying PLUM on other hardware platforms (e.g., GPUs, edge devices) and suggest possible optimizations. This will provide valuable information for practitioners looking to implement the framework on different hardware setups.

Broader Impact Concerns: N/A

==================================================

Review 4:

Summary: This paper highlights that binary quantization is an extreme form of repetition, often overlooking weight sparsity. In contrast, ternary quantization introduces weight sparsity but results in reduced efficiency. To address this issue, the authors propose a trade-off quantization scheme. Additionally, they co-designed the quantization algorithm and the inference kernel.

Two strengths are as follows:
1. The idea is intriguing and logical; such a trade-off scheme can push the boundaries of quantization.
2. The proposed quantization successfully introduces sparsity into binary quantization while maintaining accuracy and speeding up inference.

Strengths and Weaknesses:
1. Minor: the right part of Figure 3 contains two sparsity items.
2. As indicated in Eq. (2), each weight requires a sign factor and a bitmap. Therefore, is each weight represented by 2 bits? If so, the advantage of PLUM partially comes from more efficient utilization of bits compared to ternary quantization. However, the model size for PLUM is also double that of binary quantization.
3. GPUs are optimized for parallel computation. A potential drawback of this scheme is that unstructured sparsity may hinder inference speedup on GPUs.

Requested Changes: See the weaknesses for details.

Broader Impact Concerns: N/A

==================================================

Review 5:

Summary: In this paper, the authors introduce the concept of the repetition-sparsity trade-off, based on which the proposed method PLUM is designed; it integrates DNN inference systems and quantization to improve inference efficiency. Experiments show that PLUM retains model accuracy with fewer effectual parameters due to its signed-binary quantization scheme. Demonstrations on Intel CPUs show that PLUM improves inference speed and energy efficiency.

Strengths and Weaknesses:

Strengths:
1. The proposed PLUM framework is a relatively simple yet effective approach that considers the repetition-sparsity trade-off in the DNN inference process. Experiments validate that it retains accuracy with fewer effectual parameters.
2. The paper provides an analysis of the experiments of PLUM, providing detailed charts and figures.

Weaknesses:
1. The contribution of the paper is not fully analyzed. There is a gap between the concept of the repetition-sparsity trade-off and the proposed scheme.
2. Experiments are not convincing enough; the baseline methods and models used for evaluation are limited.
3. The paper is not clearly presented and organized.
4. Figures and tables are not clearly explained.

Requested Changes: The requested changes in the following comments are critical for acceptance.
1. In the introduction, the novelty and contributions are not clearly stated. The concept of the repetition-sparsity trade-off seems straightforward, but how is PLUM designed based on this concept? The contributions and novelty should be made clearer; this is currently lacking in the introduction.
2. Following the above comment, there is a gap between the repetition-sparsity trade-off concept and the proposed scheme. In what sense is PLUM an optimal solution to this trade-off? A mathematically rigorous analysis is required; otherwise, the method design looks ad hoc.
3. The concept of the repetition-sparsity trade-off is essential in this paper. Is there any similar concept mentioned in previous works, or is this the first time the concept is introduced? A comprehensive literature review is required to show how this concept of the repetition-sparsity trade-off differs from previous works.
4. The references in Section 2 are mostly published before 2020; more recent references (published in 2023 and 2024) should be included, especially when a reference is called "recent work".
5. The ablation for PLUM appears abruptly in Section 3.2.2; it should be included in the experimental part to avoid distraction. Moreover, the explanation for this ablation is missing.
6. In Figure 5, what are the methods that PLUM is compared to? Please carefully cite and explain the methods used for comparison.
7. Appendix B is empty? At least some explanation or description is needed. Fig. 8 is not even referred to.

Broader Impact Concerns: N/A

==================================================
# Reward-Predictive Clustering

Anonymous authors

Paper under double-blind review

## Abstract

Recent advances in reinforcement-learning research have demonstrated impressive results in building algorithms that can out-perform humans in complex tasks. Nevertheless, creating reinforcement-learning systems that can build abstractions of their experience to accelerate learning in new contexts still remains an active area of research. Previous work showed that reward-predictive state abstractions fulfill this goal, but have only been applied to tabular settings. Here, we provide a clustering algorithm that enables the application of such state abstractions to deep learning settings, providing compressed representations of an agent's inputs that preserve the ability to predict sequences of reward. A convergence theorem and simulations show that the resulting reward-predictive deep network maximally compresses the agent's inputs, significantly speeding up learning in high-dimensional visual control tasks. Furthermore, we present different generalization experiments and analyze under which conditions a pre-trained reward-predictive representation network can be re-used without re-training to accelerate learning—a form of systematic out-of-distribution transfer.

## 1 Introduction

Recent advances in reinforcement learning (RL) (Sutton & Barto, 2018) have demonstrated impressive results, outperforming humans on a range of different tasks (Silver et al., 2016; 2017b; Mnih et al., 2013). Despite these advances, the problem of building systems that can re-use knowledge to accelerate learning—a characteristic of human intelligence—still remains elusive. By incorporating previously learned knowledge into the process of finding a solution for a novel task, intelligent systems can speed up learning and make fewer mistakes. Therefore, efficient knowledge re-use is a central, yet under-developed, topic in RL research.

We approach this question through the lens of representation learning. Here, an RL agent constructs a representation function to compress its high-dimensional observations into a lower-dimensional latent space. This representation function allows the system to simplify complex inputs while preserving all information relevant for decision-making. By abstracting away irrelevant aspects of a task, an RL agent can efficiently generalize learned values across distinct observations, leading to faster and more data-efficient learning (Abel et al., 2018; Franklin & Frank, 2018; Momennejad et al., 2017). Nevertheless, a representation function can become specialized to a specific task, and the information that needs to be retained often differs from task to task. In this context, the question of how to compute an efficient and *re-usable* representation emerges.

In this article, we introduce a clustering algorithm that computes a reward-predictive representation (Lehnert et al., 2020; Lehnert & Littman, 2020) from a fixed data set of interactions—a setting commonly known as *offline RL* (Levine et al., 2020). A reward-predictive representation is a type of function that compresses high-dimensional inputs into lower-dimensional latent states. These latent states are constructed such that they can be used to predict future rewards without having to refer to the original high-dimensional input.
To compute such a representation, the clustering algorithm processes an interaction data set that is sampled from a single *training task*. First, every state observation is assigned to the same latent state index. Then, this single state cluster is iteratively refined by introducing additional latent state indices and re-assigning some state observations to them. At the end, the assignment between state observations and latent state cluster indices can be used to train a representation network that classifies high-dimensional states into one of the computed latent state clusters. Later on, the output of this representation network can be used to predict future reward outcomes without referring to the original high-dimensional state. Therefore, the resulting representation network is a reward-predictive representation. The presented clustering algorithm is generic: besides constraining the agent to decide between a finite number of actions, no assumptions about rewards or state transitions are made. We demonstrate that these reward-predictive representation networks can be used to accelerate learning in *test tasks* that differ in both transition and reward functions from those used in the training task. The algorithm demonstrates a form of out-of-distribution generalization because the test tasks require learning a task solution that is novel to the RL agent and does not follow the training data's distribution. The simulation experiments reported below demonstrate that reward-predictive representation networks comprise a form of abstract knowledge re-use, accelerating learning in new tasks. To unpack how reward-predictive representation networks can be learned and transferred, we first illustrate the clustering algorithm using different examples and prove a convergence theorem. Lastly, we present transfer experiments illuminating the question of when the learned representation networks generalize to test tasks that are distinct from the training task in a number of different properties.

## 2 Reward-Predictive Representations

Mathematically, a reward-predictive representation is a function φ that maps an RL agent's observations to a vector encoding the compressed latent state. Figure 1 illustrates a reward-predictive representation with an example. In the Column World task (Figure 1(a)), an RL agent navigates through a grid and receives a reward every time a green cell (right column) is entered. Formally, this task is modelled as a Markov Decision Process (MDP) M = ⟨S, A, p, r⟩, where the set of observations or *states* is denoted with S and the finite set of possible *actions* is denoted with A. The transitions between adjacent grid cells are modelled with a transition function p(s, a, s′) specifying the probability or density function of transitioning from state s to state s′ after selecting action a. Rewards are specified by the reward function r(s, a, s′) for every possible transition.

To solve this task optimally, the RL agent needs to know which column it is in and can abstract away the row information from each grid cell. (For this example we assume that the abstraction is known; the clustering algorithm below will show how it can be constructed from data.) Figure 1(b) illustrates this abstraction as a state colouring: by assigning each column a distinct colour, the 4 × 4 grid can be abstracted into a 4 × 1 grid.
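Since the Column World task is fully specified above, it can be written down directly. The sketch below is illustrative and not taken from the paper's code; it assumes moves that would leave the 4 × 4 grid keep the agent in place, and it implements the column-only abstraction φ of Figure 1(b) as a one-hot colour vector.

```python
import numpy as np

N = 4  # grid size of the Column World task
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Deterministic transition p and reward r of the Column World MDP."""
    row, col = state
    dr, dc = ACTIONS[action]
    # Assumption: moves off the grid leave the agent in place.
    next_state = (min(max(row + dr, 0), N - 1), min(max(col + dc, 0), N - 1))
    reward = 1.0 if next_state[1] == N - 1 else 0.0  # green right column
    return next_state, reward

def phi(state):
    """Reward-predictive representation: keep only the column (the colour)."""
    latent = np.zeros(N)
    latent[state[1]] = 1.0
    return latent

s = (3, 0)  # bottom-left corner
for a in ["right", "right", "up", "right"]:
    s, r = step(s, a)
    print(a, s, r, phi(s))  # rewards depend on the column index only
```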
A representation function then maps every state in the state space S to a latent state vector (a colour). Consequently, a trajectory (illustrated by the black squares and arrows in Figure 1(b)) is then mapped to a trajectory in the abstracted task. The RL agent can then associate colours with decisions or reward predictions instead of directly operating on the higher-dimensional 4 × 4 grid.

This colouring is a reward-predictive representation, because for any arbitrary start state and action sequence it is possible to predict the resulting reward sequence using only the abstracted task. Formally, this can be described by finding a function f that maps a start latent state and action sequence to the expected reward sequence:

$$f(\phi(s),a_{1},...,a_{n})=\mathbb{E}_{p}\left[(r_{1},...,r_{n})|s,a_{1},...,a_{n}\right].$$ (1)

The right-hand side of Equation (1) evaluates to the expected reward sequence observed when following the action sequence a1, ..., an starting at state s in the original task. The left-hand side of Equation (1) predicts this reward sequence using the action sequence a1, ..., an and only the latent state φ(s)—the function f does not have access to the original state s. This restricted access to latent states constrains the representation function φ to be reward-predictive in a specific task: given the representation's output φ(s) and not the full state s, it is possible to predict an expected reward sequence for any arbitrary action sequence using a latent model f. Furthermore, once an agent has learned how to predict reward sequences for one state, it can re-use the resulting function f to immediately generalize predictions to other states that map to the same latent state, resulting in faster learning. Of course, a reward-predictive representation always encodes some abstract information about the task in which it was learned; if this information is not relevant in a subsequent task, an RL agent would have to access the original high-dimensional state and learn a new representation. We will explore the performance benefits of re-using reward-predictive representations empirically in Section 4.

The colouring in Figure 1(b) satisfies the condition in Equation (1): by associating green with a reward of one and all other colours with a reward of zero, one can use only a start colour and action sequence to predict a reward sequence, and this example can be repeated for every possible start state and action sequence of any length.

Figure 1: Reward-predictive clustering in the Column World task. (a): In the Column World task the agent can transition between adjacent grid cells by selecting from one of four available actions: move up, down, left, or right. A reward of one is given if a green cell is entered; otherwise rewards are zero. All transitions are deterministic in this task. (b): By colouring every column in a distinct colour, every state of the same column is assigned the same latent state, resulting in a 4 × 1 abstracted grid world task. In this example, an agent only needs to retain which column it is in to predict future rewards and can therefore use only the abstracted task to predict reward sequences for every possible trajectory. (c): Matrix plot of all SF vectors ψ^π(s, a) for the "move right" action and a policy π that selects actions uniformly at random. Every row corresponds to the four-dimensional vector for each grid position, as indicated by the y-axis labels.
For this calculation, the colour of a state s is encoded as a colour index c(s) that ranges from one through four, and the state-representation vector is a one-hot bit vector e_c(s) where the entry c(s) is set to one and all other entries are set to zero. (d): Colour function sequence c0, c1, c2, c3 generated by the reward-predictive clustering algorithm. Initially, all states are merged into a single partition and this partitioning is refined until a reward-predictive representation is obtained. The first clustering c1 is obtained by associating states with equal one-step rewards with the same colour (latent state vector). Then, the SF matrix shown in (c) is computed for a state representation that associates states with the blue-green colouring as specified by c1. The row space of this SF matrix is then clustered again, leading to the clustering c2. Subsequently, the SF matrix is computed again for the blue-orange-green colouring and the clustering procedure is repeated. This method iteratively refines a partitioning of the state space until a reward-predictive representation is obtained.

## 2.1 Improving Learning Efficiency With Successor Representations

To improve an RL agent's ability to generalize its predictions across states, the Successor Representation (SR) was introduced by Dayan (1993). Instead of explicitly planning a series of transitions, the SR summarizes the frequencies with which an agent visits different future states as it behaves optimally and maximizes rewards. Because the SR models state visitation frequencies, this representation implicitly encodes the task's transition function and optimal policy. Consequently, the SR provides an intermediate between model-based RL, which focuses on learning a full model of a task's transition and reward functions, and model-free RL, which focuses on learning a policy to maximize rewards (Momennejad et al., 2017; Russek et al., 2017).

Barreto et al. (2017) showed that the SR can be generalized to Successor Features (SFs), which compress the high-dimensional state space into a lower-dimensional one that can still be used to predict future state occupancies. They demonstrated how SFs can be re-used across tasks with different reward functions to speed up learning. Indeed, SFs—like the SR—only reflect the task's transition function and optimal policy but are invariant to any specifics of the reward function itself. Because of this invariance, SFs provide an initialization allowing an agent to adapt a previously learned policy to tasks with different reward functions, leading to faster learning in a life-long learning setting (Barreto et al., 2018; 2020; Lehnert et al., 2017; Nemecek & Parr, 2021).

However, such transfer requires the optimal policy in the new task to be similar to that of the previous tasks (Lehnert & Littman, 2020; Lehnert et al., 2020). For example, even if only the reward function changes, but the agent had not typically visited states near the new reward location in the old task, the SR/SF is no longer useful and must be relearned from scratch (Lehnert et al., 2017).
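For a finite MDP with a known state-to-state transition matrix under a fixed policy, the SR and the SFs discussed above have a closed form. The following numpy sketch shows the state-based version; the action-conditioned ψ^π(s, a) defined below in Equation (2) additionally conditions the first transition on the action a.

```python
import numpy as np

def successor_representation(T_pi, gamma=0.9):
    """SR of Dayan (1993): M = sum_{t>=0} gamma^t (T_pi)^t = (I - gamma T_pi)^-1.

    T_pi[s, s'] is the state-to-state transition probability under pi;
    M[s, s'] is the expected discounted number of visits to s' starting
    from s (the current state included).
    """
    n = T_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T_pi)

def successor_features(T_pi, Phi, gamma=0.9):
    """SFs compress the SR through a representation Phi (one row per state)."""
    return successor_representation(T_pi, gamma) @ Phi

# Two-state chain that deterministically alternates between its states.
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Phi = np.eye(2)  # tabular one-hot features recover the SR itself
print(successor_features(T, Phi, gamma=0.5))
```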
To further improve the invariance properties of SFs, Lehnert & Littman (2020) presented a model that makes use of SFs solely for establishing which states are equivalent to each other for the sake of predicting future reward sequences, resulting in a reward-predictive representation. Because reward-predictive representations only model state equivalences, removing the details of exactly how (i.e., they are invariant to the specifics of transitions, rewards, and the optimal policy), they provide a mechanism for a more abstract form of knowledge transfer across tasks with different transition and reward functions (Lehnert & Littman, 2020; Lehnert et al., 2020).

Formally, SFs are defined as the expected discounted sum of future latent state vectors:

$$\psi^{\pi}(s,a)=\mathbb{E}_{a,\pi}\left[\sum_{t=1}^{\infty}\gamma^{t-1}\phi(s_{t})\Bigg|s_{1}=s\right],$$ (2)

where the expectation in Equation (2) is calculated over all infinite-length trajectories that start in state s with action a and then follow the policy π. The connection between SFs and reward-predictive representations is illustrated in Figure 1(c). Every row in the matrix plot in Figure 1(c) shows the SF vector ψ^π(s, a) for each of the 16 states of the Column World task. One can observe that states belonging to the same column have equal SFs. Lehnert & Littman (2020) prove that states that are mapped to the same reward-predictive latent state (and therefore have equal colour) also have equal SFs. In other words, there exists a bijection between two states that are equivalent in terms of their SF vectors and two states belonging to the same reward-predictive latent state.

As such, previous work (Lehnert et al., 2020; Lehnert & Littman, 2020; 2018) computes a reward-predictive representation for finite MDPs by optimizing a linear model using a least-squares loss objective. This loss objective requires the representation function φ to be linear in the SFs and reward function. Furthermore, it scores the accuracy of SF predictions using a mean-squared error. These two properties make it difficult to directly use this loss objective for complex control tasks, because SFs may become very high-dimensional and it may be difficult to predict individual SF vectors with near-perfect accuracy while also obtaining a representation function that is linear in these predictions. This issue is further exacerbated by the fact that in practice better results are often obtained by training deep neural networks as classifiers rather than regressors of complex or sparse functions. Additionally, in this prior approach the degree of compression was specified using a hyper-parameter by a human expert. Here, we present a clustering algorithm that remedies these three limitations by designing a cluster-refinement algorithm instead of optimizing a parameterized model with end-to-end gradient descent. Specifically, the refinement algorithm implicitly solves the loss objective introduced by Lehnert & Littman (2020) in a manner similar to temporal-difference learning or value iteration. Initially, the algorithm starts with a parsimonious representation in which all states are merged into a single latent state cluster, and then the state representation is iteratively improved by minimizing a temporal-difference error defined for SF vectors.
This is similar to value iteration or temporal-difference learning, whereby values are assumed to be all zero initially but then adjusted iteratively, but here we apply this idea to refining a state representation (Figure 1(d)). Through this approach, we avoid having to optimize a model with a linearity constraint as well as using a least-squared error objective to train a neural network. Instead, the clustering algorithm only trains a sequence of state classifiers to compute a reward-predictive representation. Furthermore, the degree of compression—the correct number of reward-predictive latent states—is automatically discovered. This is accomplished by starting with a parsimonious representation in which all states are merged into a single latent state cluster and iteratively improving the state representation until a reward-predictive representation is obtained, without adding any additional latent states in the process. In the following section, Section 3, we will formally outline how this algorithm computes a reward-predictive state representation and discuss a convergence proof. Subsequently, we demonstrate how the clustering algorithm can be combined with deep learning methods to compute a reward-predictive representation for visual control tasks (Section 4). Here, we analyze how approximation errors contort the resulting state representation. Lastly, we demonstrate how reward-predictive representation networks can be used to accelerate learning in tasks where an agent encounters both novel state observations and transition and reward functions.

## 3 Iterative Partition Refinement

The reward-predictive clustering algorithm receives a fixed trajectory data set

$$\mathcal{D}=\left\{\left(s_{i,0},a_{i,0},r_{i,0},s_{i,1},a_{i,1},...,s_{i,L_{i}}\right)\right\}_{i=1}^{D}$$ (3)

as input. Each data point in D describes a trajectory of length Li. While we assume that this data set D is fixed, we do not make any assumptions about the action-selection strategy used to generate this data set. The clustering algorithm then generates a cluster sequence c0, c1, c2, ... that associates every observed state si,t in D with a cluster index. This cluster sequence is generated with an initial reward-refinement step and subsequent SF refinement steps until two consecutive clusterings are equal. These steps are described next.

## 3.1 Reward Refinement

To cluster states by their one-step reward values, a function fr is learned to predict one-step rewards. This function is obtained through Empirical Risk Minimization (ERM) (Vapnik, 1992) by solving the optimization problem

$$f_{r}=\arg\min_{f}\sum_{(s,a,r,s^{\prime})\in\mathcal{D}}|f(s,a)-r|,$$ (4)

where the summation ranges over all transitions between states in the trajectory data set D. This optimization problem could be implemented by training a deep neural network using any variation of the backprop algorithm (Goodfellow et al., 2016). Because rewards are typically sparse in an RL task and because deep neural networks often perform better as classifiers rather than regressors, we found it simpler to first bin the reward values observed in the transition data set D and train a classifier network that outputs a probability vector over the different reward bins. Instead of using the absolute-value loss objective stated in Equation (4), this network is trained using a cross-entropy loss function (Goodfellow et al., 2016). Algorithm 1 outlines how this change is implemented.
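A minimal PyTorch sketch of this reward-refinement step (lines 2-6 of Algorithm 1 below) follows. All data, network sizes, and names are illustrative assumptions: rewards are binned with torch.unique, the classifier is trained with a cross-entropy loss, and scalar one-step reward predictions are read out as the expected bin value.

```python
import torch
import torch.nn as nn

# Illustrative data: 8-dimensional states, 4 actions, sparse binary rewards.
states = torch.randn(512, 8)
actions = torch.randint(0, 4, (512,))
rewards = (torch.rand(512) < 0.2).float()

# Bin rewards: w_r holds the distinct values, reward_bins the index i(r).
w_r, reward_bins = torch.unique(rewards, return_inverse=True)

# Classifier over reward bins, taking (s, one-hot a) as input.
f_r = nn.Sequential(nn.Linear(8 + 4, 64), nn.ReLU(), nn.Linear(64, len(w_r)))
optim = torch.optim.Adam(f_r.parameters(), lr=1e-3)

x = torch.cat([states, nn.functional.one_hot(actions, 4).float()], dim=1)
for _ in range(200):  # cross-entropy training instead of regression
    loss = nn.functional.cross_entropy(f_r(x), reward_bins)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Scalar one-step reward prediction: expected bin value w_r^T softmax(logits).
pred = nn.functional.softmax(f_r(x), dim=1) @ w_r
print(pred[:4])
```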
The resulting function fr is then used to cluster all observed states by one-step rewards, leading to a cluster assignment such that, for two arbitrary state observations s and s̃,

$$c_{1}(s)=c_{1}(\tilde{s})\implies\sum_{a\in\mathcal{A}}|f_{r}(s,a)-f_{r}(\tilde{s},a)|\leq\varepsilon_{r}.$$ (5)

Figure 2 illustrates why function approximation is needed to compute the one-step reward clustering in line (5). In this example, states are described as points in R², and all points lying in the shaded area belong to the same partition and latent state. Specifically, selecting action a from within the grey square results in a transition to the right and a reward of zero, while selecting action b results in a transition to the top and a reward of one. We assume that the transition data set only contains the two transitions indicated by the blue arrows. In this case, we have r(p, a) = 0 and r(q, b) = 1, because (p, a) and (q, b) are state-action combinations contained in the transition data set and rewards of zero and one were given, respectively. To estimate one-step rewards for the missing state-action combinations (p, b) and (q, a), we solve the function approximation problem in line (4) and then use the learned function fr to predict one-step reward values for the missing state-action combinations (p, b) and (q, a). For this reward-refinement step to accurately cluster states by one-step rewards, the optimization problem in line (4) needs to be constrained, for example by picking an appropriate neural network architecture, such that the resulting function fr generalizes the same prediction across the shaded area in Figure 2.

Figure 2: Function approximation is needed to generalize one-step reward predictions and SF predictions for state-action combinations not observed in the transition data set. In this example, the state space consists of points in R² and the action space consists of actions a and b. We assume that a maximally compressed reward-predictive representation merges all points in the grey square into one latent state. Selecting the action a from within the grey square results in a transition to the right and generates a reward of 0. Selecting the action b from within the grey square results in a transition to the top and generates a reward of 1. If the data set only contains the two transitions indicated by the blue arrows and the transitions indicated by the orange arrows are missing, then function approximation is used to predict one-step rewards and SFs for the missing state and action combinations (p, b) and (q, a). These function approximators need to be constrained such that they output the same one-step rewards and SF vectors for points that fall within the shaded square.

## 3.2 Successor Feature Refinement

After reward refinement, the state partitions are further refined by first computing the SFs, as defined in Equation (2), for a state representation that maps individual state observations to a one-hot encoding of the existing partitions. Specifically, for a clustering ci the state representation
$$\phi_{i}:s\mapsto e_{c_{i}(s)}$$ (6)

is used, where e_ci(s) is a one-hot vector with entry ci(s) set to one. The individual SF vectors ψ_i^π(s, a) can be approximated by first computing a Linear Successor Feature Model (LSFM) (Lehnert & Littman, 2020). The computation results in obtaining a square matrix F and

$$\psi_{i}^{\pi}(s,a)\approx e_{c_{i}(s)}+\gamma F\,\mathbb{E}_{p}\left[e_{c_{i}(s^{\prime})}\middle|s,a\right].$$ (7)

Appendix A outlines the details of this calculation. Consequently, if a function f_i predicts the expected next latent state E_p[e_ci(s′)|s, a], then Equation (7) can be used to predict the SF vector ψ_i^π(s, a). Similar to the reward-refinement step, a vector-valued function f_i is obtained by solving¹

$$f_{i}=\arg\min_{f}\sum_{(s,a,r,s^{\prime})\in\mathcal{D}}\|f(s,a)-e_{c_{i}(s^{\prime})}\|.$$ (8)

Similar to learning the approximate reward function, we found that it is more practical to train a classifier and to replace the mean-squared-error loss objective stated in line (8) with a cross-entropy loss objective, training the network f_i to predict a probability vector over next latent states. This change is outlined in Algorithm 1. The next clustering c_{i+1} is then constructed such that for two arbitrary states s and s̃,

$$c_{i+1}(s)=c_{i+1}(\tilde{s})\implies\sum_{a\in\mathcal{A}}\|\hat{\psi}_{i}^{\pi}(s,a)-\hat{\psi}_{i}^{\pi}(\tilde{s},a)\|\leq\varepsilon_{\psi}.$$ (9)

This SF refinement procedure is repeated until two consecutive clusterings ci and c_{i+1} are identical. Algorithm 1 summarizes the outlined method. In the remainder of this section, we will discuss under which assumptions this method computes a reward-predictive representation with as few latent states as possible.

**Algorithm 1** Iterative reward-predictive representation learning
1: **Input:** A trajectory data set D, εr, εψ > 0.
2: Bin reward values and construct a reward vector wr(i) = ri.
3: Construct the function i(r) that indexes distinct reward values such that wr(i(r)) = r.
4: Solve f_r = arg min_f Σ_{(s,a,r,s′)∈D} H(f(s, a), e_{i(r)}) via gradient descent.
5: Compute reward predictions fr(s, a) = w_r^⊤ f_r(s, a).
6: Construct c1 such that c1(s) = c1(s̃) ⟹ Σ_{a∈A} |fr(s, a) − fr(s̃, a)| ≤ εr.
7: **for** i = 2, 3, ..., N until c_{i+1} = c_i **do**
8: Compute F_a for every action.
9: Construct φi : s ↦ e_{ci(s)}.
10: Solve f_i = arg min_f Σ_{(s,a,r,s′)∈D} H(f(s, a), e_{ci(s′)}) via gradient descent.
11: Compute ψ̂_i^π(s, a) = e_{ci(s)} + γ F f_i(s, a).
12: Construct c_{i+1} such that c_{i+1}(s) = c_{i+1}(s̃) ⟹ Σ_{a∈A} ||ψ̂_i^π(s, a) − ψ̂_i^π(s̃, a)|| ≤ εψ.
13: **end for**
14: **Return** φN.

## 3.3 Convergence To Maximally Compressed Reward-Predictive Representations

The idea behind Algorithm 1 is similar to the block-splitting method introduced by Givan et al. (2003). While Givan et al. focus on the tabular setting and refine partitions using transition and reward tables, our clustering algorithm implements a similar refinement method but for data sets sampled from MDPs with perhaps (uncountably) infinite state spaces. Instead of assuming access to the complete transition function, Algorithm 1 learns SFs and uses them to iteratively refine state partitions.
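For intuition, a tabular numpy sketch of the full refinement loop is given below. It assumes the expected one-step reward table R[s, a] and the transition tensor T[a, s, s′] are known, evaluates SFs for a uniform random policy by fixed-point iteration, and uses a greedy ε-ball clustering in place of the threshold tests in lines 6 and 12; Algorithm 1 instead estimates all of these quantities from the data set D with trained classifiers.

```python
import numpy as np

def cluster_rows(X, eps):
    """Greedy epsilon-ball clustering: rows within eps (L1) share a label."""
    labels, centers = -np.ones(len(X), dtype=int), []
    for i, x in enumerate(X):
        for j, c in enumerate(centers):
            if np.abs(x - c).sum() <= eps:
                labels[i] = j
                break
        else:
            centers.append(x)
            labels[i] = len(centers) - 1
    return labels

def reward_predictive_clusters(T, R, gamma=0.3, eps=1e-5, sweeps=50):
    """Tabular sketch of Algorithm 1 for known tables T[a,s,s'] and R[s,a]."""
    A, S, _ = T.shape
    c = cluster_rows(R, eps)  # reward refinement: cluster one-step rewards
    while True:
        k = c.max() + 1
        E = np.eye(k)[c]  # one-hot latent states, shape S x k
        psi = np.zeros((S, A, k))
        for _ in range(sweeps):  # evaluate SFs for a uniform random policy
            psi_bar = psi.mean(axis=1)  # average over actions, S x k
            psi = E[:, None, :] + gamma * np.einsum("ast,tk->sak", T, psi_bar)
        c_next = cluster_rows(psi.reshape(S, A * k), eps)  # SF refinement
        if (c_next == c).all():
            return c
        c = c_next

# Tiny 3-state example: states 0 and 1 are reward-predictively equivalent.
T = np.array([[[0, 0, 1], [0, 0, 1], [0, 0, 1.0]],   # action 0: go to state 2
              [[1, 0, 0], [1, 0, 0], [0, 1, 0.0]]])  # action 1: go back
R = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
print(reward_predictive_clusters(T, R))  # -> [0 0 1], states 0 and 1 merged
```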
## 3.3 Convergence To Maximally Compressed Reward-Predictive Representations

The idea behind Algorithm 1 is similar to the block-splitting method introduced by Givan et al. (2003). While Givan et al. focus on the tabular setting and refine partitions using transition and reward tables, our clustering algorithm implements a similar refinement method for data sets sampled from MDPs with perhaps (uncountably) infinite state spaces. Instead of assuming access to the complete transition function, Algorithm 1 learns SFs and uses them to iteratively refine state partitions. For this refinement operation to converge to a correct and maximally-compressed-reward-predictive representation, the algorithm needs to consider all possible transitions between individual state partitions. This operation is implicitly implemented by clustering SFs, which predict the frequency of future state partitions and therefore implicitly encode the partition-to-partition transition table.²

Convergence to a correct maximally-compressed-reward-predictive representation relies on two properties that hold at every iteration (please refer to Appendix B for a formal statement of these properties):

1. State partitions are refined, and states of different partitions are never merged into the same partition.
2. Two states that lead to equal expected reward sequences are never split into separate partitions.

¹Here, the L2 norm of a vector $\boldsymbol{v}$ is denoted with $\|\boldsymbol{v}\|$.
²The state-to-state transition table is never computed by our algorithm.

The first property ensures that Algorithm 1 is a partition refinement algorithm, as illustrated by the tree schematic in Figure 1(d) (and does not merge state partitions). If such an algorithm is run on a finite trajectory data set with a finite number of state observations, the algorithm is guaranteed to terminate and converge to some state representation, because one can always assign every observation to a singleton cluster. The second property ensures that the resulting representation is reward-predictive while using as few state partitions as possible: if two states $s$ and $\tilde{s}$ lead to equal expected reward sequences and $\mathbb{E}_p[(r_1, ..., r_n) \mid s, a_1, ..., a_n] = \mathbb{E}_p[(r_1, ..., r_n) \mid \tilde{s}, a_1, ..., a_n]$ (for any arbitrary action sequence $a_1, ..., a_n$), then they will not be split into separate partitions. If Algorithm 1 does not terminate early (which we prove in Appendix B), the resulting representation is reward-predictive and uses as few state partitions as possible.

The reward-refinement step satisfies both properties: the first property holds trivially, because $c_1$ is the first partition assignment. The second property holds because two states with different one-step rewards cannot be merged into the same partition by any reward-predictive representation.

To see that both properties are preserved in every subsequent iteration, we consider the partition function $c^*$ of a correct maximally compressed reward-predictive representation. Suppose $c_i$ is a sub-partitioning of $c^*$, meaning that states that are assigned different partitions by $c_i$ are also assigned different partitions by $c^*$. (For example, in Figure 1(d), $c_0$, $c_1$, and $c_3$ are all valid sub-partitionings of $c_4$.) Because of this sub-partition property, we can define a projection matrix $\boldsymbol{\Phi}_i$ that associates partitions defined by $c^*$ with partitions defined by $c_i$. Specifically, the entry $\boldsymbol{\Phi}_i(k, j)$ is set to one if, for the same state $s$, $c^*(s) = j$ and $c_i(s) = k$. In Appendix B we show that this projection matrix can be used to relate latent states induced by $c^*$ to latent states induced by $c_i$:

$$\boldsymbol{\Phi}_i \boldsymbol{e}_{c^*(s)} = \boldsymbol{e}_{c_i(s)}. \tag{10}$$
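For intuition, the projection matrix in line (10) can be assembled directly from two cluster assignments over the same observed states. A minimal sketch, assuming the assignments are given as integer arrays:

```python
import numpy as np

def projection_matrix(c_i: np.ndarray, c_star: np.ndarray,
                      num_i: int, num_star: int) -> np.ndarray:
    """Phi_i(k, j) = 1 if some observed state s has c_i(s) = k and c_star(s) = j.
    When c_i is a sub-partitioning of c_star, every column of Phi_i has exactly
    one non-zero entry, so Phi_i @ e_{c*(s)} = e_{c_i(s)} as in line (10)."""
    phi = np.zeros((num_i, num_star))
    for k, j in zip(c_i, c_star):
        phi[k, j] = 1.0
    return phi
```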
Using the identity in line (10), the SFs at an intermediate refinement iteration can be expressed in terms of the SFs of the optimal reward-predictive representation:

$$\begin{aligned}
\boldsymbol{\psi}^\pi_i(s, a) &= \mathbb{E}_{p, \pi}\!\left[ \sum_{t=1}^{\infty} \gamma^{t-1} \boldsymbol{e}_{c_i(s_t)} \,\middle|\, s_1 = s, a_1 = a \right] && (11)\\
&= \mathbb{E}_{p, \pi}\!\left[ \sum_{t=1}^{\infty} \gamma^{t-1} \boldsymbol{\Phi}_i \boldsymbol{e}_{c^*(s_t)} \,\middle|\, s_1 = s, a_1 = a \right] && \text{(by substitution with (10))} \quad (12)\\
&= \boldsymbol{\Phi}_i \, \mathbb{E}_{p, \pi}\!\left[ \sum_{t=1}^{\infty} \gamma^{t-1} \boldsymbol{e}_{c^*(s_t)} \,\middle|\, s_1 = s, a_1 = a \right] && \text{(by linearity of expectation)} \quad (13)\\
&= \boldsymbol{\Phi}_i \boldsymbol{\psi}^\pi_*(s, a). && (14)
\end{aligned}$$

As illustrated in Figure 1(c), Lehnert & Littman (2020) showed that two states $s$ and $\tilde{s}$ that are assigned the same partition by a maximally compressed reward-predictive clustering $c^*$ also have equal SF vectors, and therefore

$$\boldsymbol{\psi}^\pi_i(s, a) - \boldsymbol{\psi}^\pi_i(\tilde{s}, a) = \boldsymbol{\Phi}_i \boldsymbol{\psi}^\pi_*(s, a) - \boldsymbol{\Phi}_i \boldsymbol{\psi}^\pi_*(\tilde{s}, a) = \boldsymbol{\Phi}_i \underbrace{\left( \boldsymbol{\psi}^\pi_*(s, a) - \boldsymbol{\psi}^\pi_*(\tilde{s}, a) \right)}_{=\,\boldsymbol{0}} = \boldsymbol{0}. \tag{15}$$

By line (15), these two states $s$ and $\tilde{s}$ also have equal SFs at any of the refinement iterations in Algorithm 1. Consequently, these two states will not be split into two different partitions (up to some approximation error) and the second property holds.

Similarly, if two states are assigned different partitions, then the first term in the discounted summation in line (11) contains two different one-hot bit vectors, leading to different SFs for small enough discount factor and $\varepsilon_\psi$ settings. In fact, in Appendix B we prove that this is the case for all possible transition functions if

$$\gamma < \frac{1}{2} \quad \text{and} \quad \frac{2}{3}\left( 1 - \frac{\gamma}{1 - \gamma} \right) > \varepsilon_\psi > 0. \tag{16}$$

While this property of SFs ensures that Algorithm 1 always refines a given partitioning for any arbitrary transition function, we found that significantly higher discount factor settings can be used in our simulations.

![8_image_0.png](8_image_0.png)

Figure 3: The cluster thresholds $\varepsilon_\psi$ and $\varepsilon_r$ must be picked to account for prediction errors while ensuring that states are not merged into incorrect clusters. For example, suppose the clustered SF vectors are the three black dots in $\mathbb{R}^2$ and the function $\boldsymbol{f}_i$ predicts values close to these dots, as indicated by the coloured dots. For the clustering to be correct (and computable in polynomial time), the prediction errors—the distances between the predictions and the correct values—have to be at most $\varepsilon_\psi / 2$. At the same time, $\varepsilon_\psi$ has to be small enough to avoid overlaps between the different coloured clusters.
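Figure 3 shows how the thresholds interact with prediction error; the matching condition in line (16) additionally gives an explicit upper bound on admissible values of $\varepsilon_\psi$ for a given discount factor. A small helper, purely for illustration:

```python
def max_eps_psi(gamma: float) -> float:
    """Largest admissible SF-clustering threshold under the matching condition
    in line (16); the condition also requires gamma < 1/2."""
    assert 0.0 <= gamma < 0.5
    return (2.0 / 3.0) * (1.0 - gamma / (1.0 - gamma))

# For example, gamma = 0.25 gives (2/3) * (1 - 1/3) = 4/9, so any
# 0 < eps_psi < 4/9 satisfies the condition.
```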
Because function approximation is used to predict the quantities used for clustering, prediction errors can corrupt this refinement process. If prediction errors are too high, the clustering steps in Algorithm 1 may make incorrect assignments between state observations and partitions. To prevent this, the prediction errors of the learned functions $f_r$ and $\hat{\boldsymbol{\psi}}^\pi_i$ must be bounded by the thresholds used for clustering, leading to the following assumption.

**Assumption 1** (ε-perfect). For $\varepsilon_\psi, \varepsilon_r > 0$, the ERM steps in Algorithm 1 lead to function approximators that are near optimal such that, for every observed state-action pair $(s, a)$,

$$\left| f_r(s, a) - \mathbb{E}_p[r(s, a, s') \mid s, a] \right| \leq \frac{\varepsilon_r}{2} \quad \text{and} \quad \left\| \hat{\boldsymbol{\psi}}^\pi_i(s, a) - \boldsymbol{\psi}^\pi_i(s, a) \right\| \leq \frac{\varepsilon_\psi}{2}. \tag{17}$$

Figure 3 illustrates why this assumption is necessary and why predictions have to fall close to the correct value relative to $\varepsilon_\psi$ and $\varepsilon_r$. In Section 4 we will discuss why this assumption is not particularly restrictive in practice and show that, even when it is not fully adhered to, the algorithm can still produce a maximally-compressed-reward-predictive representation. Under Assumption 1, Algorithm 1 converges to a maximally compressed reward-predictive representation.

**Theorem 1** (Convergence). If Assumption 1 and the matching condition in line (16) hold, then Algorithm 1 returns an approximate maximally-compressed-reward-predictive representation for a trajectory data set sampled from any MDP.

A formal proof of Theorem 1 is presented in Appendix B.

In practice, one cannot know if prediction errors are small enough, a principle that is described by Vapnik (1992). However, recent advances in deep learning (Belkin et al., 2019) have found that increasing the capacity of neural networks often makes it possible to interpolate the training data and still perform almost perfectly on independently sampled test data. In the following section we present experiments that illustrate how this algorithm can be used to find a maximally compressed reward-predictive representation.

## 4 Learning Reward-Predictive Representation Networks

In this section, we first illustrate how the clustering algorithm computes a reward-predictive representation on the didactic Column World example. Then, we focus on a more complex visual control task—the Combination Lock task, where inputs are images assembled from MNIST digits—and discuss how function approximation errors lead to spurious latent states and how they can be filtered out. Lastly, we present a set of experiments highlighting how initializing a DQN agent with a reward-predictive representation network improves learning efficiency, demonstrating in which cases reward-predictive representations are suitable for out-of-distribution generalization.

![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

![9_image_2.png](9_image_2.png)

(a) Point Observation Column World task (b) Reward-predictive clustering (c) Reward sequence prediction errors

Figure 4: Reward-predictive clustering of the Point Observation Column World task. (a): The Point Observation Column World task is a variant of the Column World task where, instead of providing the agent with a grid cell index, it only observes a real-valued point $(x, y) \in (0, 4)^2$. When the agent is in a grid cell, for example the top-left cell, a point is sampled uniformly at random from the corresponding cell, for example the point (0.83, 3.22). (b): The computed cluster function $c_3$ assigns each state observation (a point in the shown scatter plot) a different latent state index (a different colour). (c): The box plot shows the reward sequence prediction error for each trajectory at each iteration (iteration 0 shows the initial cluster function). At each iteration a different representation network was trained and then evaluated on a separately sampled 100-trajectory test data set. The full details of this experiment are listed in Appendix C.
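The observation process described in Figure 4(a) can be emulated with a few lines of code. A minimal sketch, where the mapping from grid cells to unit squares is our assumption for illustration:

```python
import numpy as np

def point_observation(row: int, col: int, rng: np.random.Generator) -> np.ndarray:
    """Sample a point observation (x, y) uniformly at random from the unit
    square corresponding to the agent's current 4x4 grid cell, so the same
    cell (theoretically) never emits the same point twice."""
    x = col + rng.uniform(0.0, 1.0)
    y = row + rng.uniform(0.0, 1.0)
    return np.array([x, y])

rng = np.random.default_rng(0)
obs = point_observation(row=3, col=0, rng=rng)  # e.g. a point near (0.83, 3.22)
```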
Figure 4 illustrates a reward-predictive clustering for a variant of the Column World task where state observations are real-valued points. This variant is a block MDP (Du et al., 2019): instead of observing a grid cell index, the agent observes a real-valued point $(x, y)$ (Figure 4(a)) but still transitions through a 4×4 grid. This point is sampled uniformly at random from a square that corresponds to the grid cell the agent is in, as illustrated in Figure 4(a). Therefore, the agent does not (theoretically) observe the same $(x, y)$ point twice, and transitions between different states become probabilistic. For this task, a two-layer perceptron was used to train a reward and next latent state classifier (Algorithm 1, lines 4 and 10). Figure 4(b) illustrates the resulting clustering as a colouring of a scatter plot. Each dot in the scatter plot corresponds to a state observation point $(x, y)$ in the training data set, and the colouring denotes the final latent state assignment $c_3$. Figure 4(c) presents a box plot of the reward-sequence prediction errors as a function of each refinement iteration. One can observe that after performing the second refinement step and computing the cluster function $c_2$, all reward-sequence prediction errors drop to zero. This is because the clustering algorithm initializes the cluster function $c_0$ by first merging all terminal states into a separate partition (and our implementation of the clustering algorithm is initialized at the second step in Figure 1(d)). Because the cluster functions $c_2$ and $c_3$ are identical in this example, the algorithm is terminated after the third iteration.

## 4.1 Clustering With Function Approximation Errors

As illustrated in Figure 3, for the clustering algorithm to converge to a maximally compressed representation, the predictions made by the neural networks must be within some ε of the true prediction target. Depending on the task and training data set, this objective may be difficult to satisfy. Belkin et al. (2019) presented the double-descent curve, which suggests that it is possible to accurately approximate any function with large enough neural network architectures. In this section we test the assumption that all predictions must be ε-accurate by running the clustering algorithm on a data set sampled from the Combination Lock task (Figure 5). In this task, the agent decides which dial to rotate on each step to unlock a numbered combination lock (schematic in Figure 5(a)). Here, state observations are assembled using training images from the MNIST data set (Lecun et al., 1998) and display three digits visualizing the current number combination of the lock. To compute a reward-predictive representation for this task, we adapt our clustering algorithm to process images using the ResNet18 architecture (Paszke et al., 2019; He et al., 2016) for approximating one-step rewards and next latent states. For all experiments we initialize all network weights randomly and do not provide any pre-trained weights. The full details of this experiment are documented in Appendix C.
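The sketch below shows one way such an image-based classifier could be set up with a randomly initialized ResNet18 backbone; the one-head-per-action output layout is our assumption for illustration, not necessarily the exact architecture used in the experiments (Appendix C documents those details).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class NextLatentClassifier(nn.Module):
    """Predicts a probability distribution over next latent states for every
    action from an image observation (line 10 of Algorithm 1)."""
    def __init__(self, num_latents: int, num_actions: int):
        super().__init__()
        self.backbone = resnet18(weights=None)  # random initialization, no pre-training
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features,
                                     num_latents * num_actions)
        self.num_latents, self.num_actions = num_latents, num_actions

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        logits = self.backbone(images)  # expects 3-channel image batches
        return logits.view(-1, self.num_actions, self.num_latents)

# Training uses a cross-entropy loss against the next latent index c_i(s'):
# logits = model(images)[torch.arange(len(actions)), actions]
# loss = nn.functional.cross_entropy(logits, next_latent_ids)
```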
![10_image_0.png](10_image_0.png)

(a) Combination Lock Task, right dial invariant

![10_image_2.png](10_image_2.png)

(b) Reward sequence prediction error distribution

![10_image_1.png](10_image_1.png)

Figure 5: Reward-predictive clustering of the Combination Lock task. (a): In the Combination Lock task, the agent decides which dial(s) to rotate to move toward a rewarding combination. The agent has to learn that only the first two dials are relevant for unlocking the combination: a reward is given once the left and center dials both arrive at the digit nine and the lock matches the pattern (9, 9, ∗). The right (shaded) dial is "broken" and spins at random when the third action is selected, and thus all digits on it should be equally reward-predictive. Each state consists of an image that is assembled using the MNIST data set. The fixed trajectory data set provided to the clustering algorithm uses images from the MNIST training data set. The resulting model was evaluated using an independently sampled test trajectory data set using images from the MNIST test data set. (b): The histogram plots the distribution of reward sequence errors for 1000 test trajectories for five different refinement stages of the clustering algorithm on a log scale. The distribution of the 1000 samples is plotted as a rug plot above the histogram. For each trajectory, the absolute difference between predicted and true reward value was computed and averaged along the trajectory. The predictions were made by training a separate representation network for each cluster function. (c): Matrix plot illustrating how different number combinations are associated with different latent states. Each row plots the distribution across latent states of images matching a specific number pattern. Each column of the matrix plot corresponds to a specific latent state index, and which combination is associated with which index is determined arbitrarily by the clustering algorithm. Terminal states that are observed at the end of each trajectory are merged into latent state zero by default. The ignore column indicates the fraction of state images that were identified as belonging to a spurious latent state and are excluded from the final clustering.

In this task, a reward-predictive representation network has to not only generalize across variations in individual digits, but also learn to ignore the rightmost digit. The matrix plot in Figure 5(c) illustrates how the reward-predictive representation network learned by the clustering algorithm generalizes across the different state observations. Intuitively, this plot is similar to a confusion matrix: each row plots the distribution over latent states for all images that match a specific combination pattern. For example, the first row plots the latent state distribution for all images that match the pattern (0, 0, ∗) (left and middle dials are set to zero, the right dial can be any digit), the second row plots the distribution for the pattern (0, 1, ∗), and so on. In total, the clustering algorithm correctly inferred 100 reward-predictive latent states and correctly ignores the rightmost digit, abstracting it away from the state input.
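The row distributions of Figure 5(c) are straightforward to compute once every test image has been assigned a combination pattern and a latent state. A small sketch of this bookkeeping, with the integer encodings as our assumption:

```python
import numpy as np

def pattern_latent_distribution(patterns: np.ndarray, latents: np.ndarray,
                                num_patterns: int, num_latents: int) -> np.ndarray:
    """M[p, k] = fraction of images matching combination pattern p (e.g. the
    100 patterns (0, 0, *), ..., (9, 9, *)) that are assigned latent state k,
    analogous to the matrix plot in Figure 5(c)."""
    counts = np.zeros((num_patterns, num_latents))
    for p, k in zip(patterns, latents):
        counts[p, k] += 1.0
    return counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
```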
Prediction errors can distort the clustering in two ways:

1. If prediction errors are high, then a state observation can be associated with the wrong latent state. For example, an image with combination (0, 1, 4) could be associated with the latent state corresponding to the pattern (0, 7, ∗).
2. If prediction errors are low but still larger than the threshold $\varepsilon_\psi$ or $\varepsilon_r$, then some predictions can be assigned into their own cluster and a spurious latent state is created. These spurious states appear as latent states that are associated with a small number of state observations.

Figure 5(c) indicates that the first prediction error type does not occur, because all off-diagonal elements are exactly zero. This is because a large enough network architecture is trained to a high enough accuracy. However, the second prediction error type does occur. In this case, latent states that are associated with very few state observations are masked out of the data set used for training the neural network (line 10 in Algorithm 1). These states are plotted in the ignore column (right-most column) in Figure 5(c). In total, less than 0.5% of the data set is withheld and the clustering algorithm has inferred 100 latent states. Consequently, the learned reward-predictive representation uses as few latent states as possible and is maximally compressed.

Figure 5(b) plots the reward-sequence error distribution for a representation network at different refinement stages. Here, 1000 independently sampled test trajectories were generated using images from the MNIST test set. One can see that initially reward sequence prediction errors are high and then converge towards zero as the refinement algorithm progresses. Finally, almost all reward sequences are predicted accurately but not perfectly, because a distinct test image set is used and the representation network occasionally predicts an incorrect latent state. This is a failure of the vision model—if the convolutional neural network were to perfectly classify images into the latent states extracted by the clustering algorithm, then the reward sequence prediction errors would be exactly zero (similar to the Column World example in Figure 4(c)). Furthermore, if the first transition of a 1000-step roll-out is incorrectly predicted, then all subsequent predictions are incorrect as well. Consequently, the reward sequence prediction error measure is sensitive to any prediction errors that may happen when predicting rewards for a long action sequence. However, the trend of minimizing reward sequence prediction errors with every refinement iteration is still plainly visible in Figure 5(b).

## 4.2 Improving Learning Efficiency

Ultimately, the goal of using reward-predictive representations is to speed up learning by re-using abstract task knowledge encoded by a pre-trained representation network. In contrast, established meta-learning algorithms such as MAML (Finn et al., 2017) or the SF-based Generalized Policy Improvement (GPI) algorithm (Barreto et al., 2018; 2020) rely on extracting either one or multiple network initializations to accelerate learning in a test task. To empirically test the differences between re-using a pre-trained reward-predictive representation network and using a previously learned network initialization, we now consider three variants of the Combination Lock task (Figure 6(a)). All variants differ from the training task in their specific transitions, rewards, and optimal policy.
Furthermore, the state images are generated using MNIST test images to test whether a pre-trained agent can generalize what it has seen during pre-training to previously unseen variations of digits.³

³This experiment design is similar to using separately sampled training and test data in supervised machine learning.

![12_image_0.png](12_image_0.png)

Figure 6: Representation transfer in the Combination Lock task. (a): In the swap digits variant, the transition function is changed such that the first action only swaps the digits between the left and middle dials. Only the middle dial rotates as before, and the right dial again has no effect on the obtained rewards. Furthermore, the rewarding combination is changed to (5, 6, ∗). The reversed dial variant differs from the training task in that the rotation direction of the middle dial is reversed, and the rewarding combination is changed to (7, 4, ∗). The left dial broken variant is similar to the training task, but the left dial is broken and spins at random instead of the right dial. Here, the transitions and reward associations between different latent states are the same as in the training task; the difference lies in how different images are associated with different latent states and in different action labels having different effects. The rewarding combination is (∗, 9, 9). To ensure that the state images of the test tasks are distinct from those of the training task, all test tasks construct state images using the MNIST test image set. (b): The reward-predictive agent replaces all except the top-most layer with the reward-predictive representation network computed by the clustering algorithm for the training task. During training in the test task, only the top-most layer receives gradient updates and the representation network's weights are not changed. (c): Each agent was trained with 20 different seeds in each task. For each repeat, the pre-trained DQN agent was first trained on the training task and then on the test task. Appendix C lists all details and additional plots of the experiment.

The three task variants require an agent to process the state images differently in order to maximize rewards: In the swap digits and reversed dial variants (center and left schematics in Figure 6(a)), an agent has to correctly recognize the left and center digits in order to select actions optimally. While the effect of different actions and the rewarding combinations differ from the training task, an agent initially processes state images in the same way as in the training task. Specifically, because the right dial is still broken and rotates at random, an agent needs to correctly identify the left and center digits and then use that information to make a decision. These two transfer tasks test an agent's ability to adapt to different transitions and rewards while preserving which aspects of the state image—namely the left and center digits—are relevant for decision-making. The left dial broken variant (right schematic in Figure 6(a)) differs in this particular aspect. Here, the center and right digits are relevant for reward-sequence prediction and decision-making, because the left dial is broken and rotates at random. With this task, we test to what extent a pre-trained reward-predictive representation network can be used when the state equivalences modelled by the representation network differ between training and test tasks.
To test for positive transfer in a controlled experiment, we train three variants of the DQN algorithm (Mnih et al., 2015) and record the average reward per time step spent in each task. Each DQN variant uses a different Q-network initialization, but all agents use the same network architecture, number of network weights, and hyper-parameters. Hyper-parameters were independently fine-tuned on the training task in Figure 5(a) so as to not bias the hyper-parameter selection towards the used test tasks (which would implicitly use information about the test tasks during training). In Figure 6(c), the DQN baseline (shown in blue) initializes networks at random (using Glorot initialization (Glorot & Bengio, 2010)), similar to the original DQN agent. This agent's performance is used as a reference value in each task. The pre-trained DQN agent (shown in orange) first learns to solve the training task, and the learned Q-network weights are then used to initialize the network weights in each test task. By pre-training the Q-network in this way, the DQN agent has to adapt the previously learned solution to the test task. Here, the pre-trained DQN agent initially repeats the previously learned behaviour—which is not optimal in any of the test tasks—and then has to re-learn the optimal policy for each test task. This re-learning seems to negatively impact the overall performance of the agent, and it would be more efficient to randomly initialize the network weights (Figure 6(c)).

This approach of adapting a pre-trained Q-network to a test task is used by both MAML and SF-based GPI. While these methods rely on extracting information from multiple training tasks, the results in Figure 6(c) demonstrate that if training and test tasks differ sufficiently, then re-using a pre-trained Q-network to initialize learning may negatively impact performance, and a new Q-network or policy may have to be learned from scratch (Nemecek & Parr, 2021). Reward-predictive representations enable a more abstract form of task knowledge re-use that is more robust in this case. This is illustrated by the reward-predictive agent in Figure 6(c), which outperforms the other two agents. The reward-predictive agent (shown in green in Figure 6(c)) sets all weights except for the top-most linear layer to the weights of the reward-predictive representation network learned by the clustering algorithm for the training task (Figure 6(b)). Furthermore, no weight updates are performed on the representation network itself—only the weights of the top-most linear layer are updated during learning in the test task. By re-using the pre-trained representation network, the reward-predictive agent maps all state images into one of the 100 pre-trained latent states, resulting in a significant performance improvement. This performance improvement constitutes a form of systematic out-of-distribution generalization, because the reward-predictive representation network is not adjusted during training and because trajectories observed when interacting with the test task are out-of-distribution relative to the trajectories observed during pre-training.
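The weight-freezing scheme of Figure 6(b) is simple to express in PyTorch. A hypothetical sketch (the function and parameter names are ours), assuming the pre-trained representation network maps an image batch to a vector over the 100 latent states:

```python
import torch.nn as nn

def reward_predictive_q_network(representation_net: nn.Module,
                                num_latents: int, num_actions: int) -> nn.Module:
    """Compose the reward-predictive agent's Q-network: a frozen pre-trained
    representation followed by a single trainable linear layer that maps
    latent states to Q-values. Only the linear layer receives gradient updates."""
    for param in representation_net.parameters():
        param.requires_grad = False  # the representation is never adjusted
    return nn.Sequential(representation_net, nn.Linear(num_latents, num_actions))
```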
Interestingly, in the left dial broken variant the performance improvement of the reward-predictive agent is even more significant. This result is unexpected, because in this case the state equivalences modelled by the transferred representation function differ between the training and test tasks: in the training task, the right dial is irrelevant for decision-making and can be abstracted away, whereas in the test task the left dial is irrelevant for decision-making and can be abstracted away instead. Consequently, a representation that is reward-predictive in the training task is not reward-predictive in the left dial broken test task, and an RL agent would have to re-train a previously learned representation for it to be reward-predictive in the test task. Nevertheless, the reward-predictive representation network can still be used to maximize rewards in this task variant: the agent first learns to rotate the center dial to the rewarding digit "9". This is possible because the network can still leverage parts of the reward-predictive abstraction that remain useful for the new task. In this case, the center digits are still important, as they were in the original task, and the reward-predictive representation network maps distinct center digits to distinct latent states, although combinations such as (1, 9, ∗) and (2, 9, ∗) are mapped to different latent states given the representation learned in the training task. Once the center dial is set to the digit "9", the agent can simply learn a high Q-value for the action associated with rotating the third dial, and it does so until the rewarding combination is received. Because the reward-predictive agent is a variant of DQN and initializes Q-values to be close to zero, the moment the algorithm increases a Q-value through a temporal-difference update, the agent keeps repeating this action with every greedy action selection step and does not explore all possible states, resulting in a significant performance improvement.⁴ While the reward-predictive representation network cannot be used to predict reward sequences or even Q-values accurately, the Q-value predictions learned by the agent are sufficient to still find an optimal policy quickly in this test task. Of course, one could imagine test tasks where this is not the case and the agent would have to learn a new policy from scratch.

⁴For all experiments we use an ε-greedy action selection strategy that initially selects actions uniformly at random but becomes greedy with respect to the predicted Q-values within the first 10 episodes.

This experiment highlights how reward-predictive representation networks can be used for systematic out-of-distribution generalization. Because the representation network only encodes state equivalences, the network can be used across tasks with different transitions and rewards. However, if different state equivalences are necessary for reward prediction in a test task, then it may or may not be possible to learn an optimal policy without modifying the representation network. The left dial broken test task in Figure 6 presents a case where state equivalences differ from the training task but it is still possible to accelerate learning of an optimal policy significantly.

## 5 Discussion

In this article, we present a clustering algorithm to compute reward-predictive representations that use as few latent states as possible. Unlike prior work (Lehnert & Littman, 2020; 2018), which learns reward-predictive representations through end-to-end gradient descent, our approach is similar to the block-splitting method presented by Givan et al. (2003) for learning which two states are bisimilar in an MDP.
By starting with a single latent state and then iteratively introducing additional latent states to minimize SF prediction errors where necessary, the final number of latent states is minimized. Intuitively, this refinement is similar to temporal-difference learning, where values are first updated where rewards occur and value updates are subsequently bootstrapped at other states. The clustering algorithm computes a reward-predictive representation in a similar way, by first refining a state representation around changes in one-step rewards and subsequently bootstrapping from this representation to further refine the state clustering. This leads to a maximally compressed latent state space, which is important for abstracting away information from the state input and enabling an agent to efficiently generalize across states (as demonstrated by the generalization experiments in Section 4.2). Such latent state space compression cannot be accomplished by auto-encoder-based architectures (Ha & Schmidhuber, 2018) or frame prediction architectures (Oh et al., 2015; Leibfried et al., 2016; Weber et al., 2017), because a decoder network requires the latent state to be predictive of the entire task state. Therefore, these methods encode the entire task state in a latent state without abstracting any part of the task state information away.

Prior work (Ferns et al., 2004; Comanici et al., 2015; Gelada et al., 2019; Zhang et al., 2021b;a) has focused on using the Wasserstein metric to measure how bisimilar two states are. Computing the Wasserstein metric between two states is often difficult in practice, because it requires solving an optimization problem for every distance calculation and it assumes a measurable state space—an assumption that is difficult to satisfy when working with visual control tasks, for example. Here, approximations of the Wasserstein metric are often used, but these methods introduce other assumptions instead, such as normally distributed next latent states (Zhang et al., 2021a) or a Lipschitz continuous transition function where the Lipschitz factor is 1/γ (Gelada et al., 2019).⁵ The presented refinement method does not require such assumptions, because the presented algorithm directly clusters one-step rewards and SFs for arbitrary transition and reward functions. SFs, which encode the frequencies of future states, provide a different avenue to computing which two states are bisimilar without requiring a distance function on probability distributions such as the Wasserstein metric. Nonetheless, using the Wasserstein metric to determine state bisimilarity may provide an avenue for over-compressing the latent state space at the expense of increased prediction errors (Ferns et al., 2004; Comanici et al., 2015) (for example, compressing the Combination Lock task into 90 latent states instead of 100).

⁵Here, γ ∈ (0, 1) is the discount factor.

A key challenge in scaling model-based RL algorithms is the fact that these agents are evaluated on their predictive performance.
Consequently, any approximation errors (caused by not adhering to the ε-perfection assumption illustrated in Figure 3) impact the resulting model's predictive performance—a property common to model-based RL algorithms (Talvitie, 2017; 2018; Asadi et al., 2018). Evaluating a model's predictive performance is more stringent than what is typically used for model-free RL algorithms such as DQN. Typically, model-free RL algorithms are evaluated on the learned optimal policy's performance and are not evaluated on their predictive performance. For example, while DQN can learn an optimal policy for a task, the learned Q-network's prediction errors may still be high for some inputs (Witty et al., 2018). Prediction errors of this type are often tolerated, because model-free RL algorithms are benchmarked based on the learned policy's ability to maximize rewards and not on their accuracy in predicting quantities such as Q-values or rewards. This is the case for most existing deep RL algorithms that are effectively model-based and model-free hybrid architectures (Oh et al., 2017; Silver et al., 2017a; Gelada et al., 2019; Schrittwieser et al., 2019; Zhang et al., 2021a)—these models predict reward sequences only over very short horizons (for example, Oh et al. (2017) use 10 time steps). In contrast, reward-predictive representations are evaluated for their prediction accuracy. To achieve low prediction errors, the presented results suggest that finding ε-perfect approximations becomes important. Furthermore, the simulations on the MNIST combination-lock task demonstrate that this goal can be accomplished by using a larger neural network architecture.

To compute a maximally compressed representation, the presented clustering algorithm needs to have access to the entire trajectory training data set at once. How to implement this algorithm in an online learning setting—a setting where the agent observes the different transitions and rewards of a task as a data stream—is not clear at this point. To implement an online learning algorithm, an agent would need to assign incoming state observations to already existing state partitions. Without such an operation it would not be possible to compute a reward-predictive representation that still abstracts away certain aspects of the state itself. Because the presented clustering method is based on the idea of refining state partitions, it is currently difficult to design an online learning agent that does not always re-run the full clustering algorithm on the history of all transitions the agent has observed.

One assumption made in the presented experiments is that a task's state space can always be compressed into a small enough finite latent space. This assumption is not restrictive, because any (discrete time) RL agent only observes a finite number of transitions and states at any given time point. Consequently, all state observations can always be compressed into a finite number of latent states, similar to block MDPs (Du et al., 2019). Furthermore, the presented method always learns a fully conjunctive representation. In the combination-lock examples, the reward-predictive representation associates a different latent state (one-hot vector) with each relevant combination pattern. This representation is conjunctive because it does not model the fact that the dials rotate independently.
A disjunctive or factored representation could map each of the three dials independently into three separate latent state vectors, and a concatenation of these vectors could be used to describe the task's latent state. Such a latent representation is similar to factored representations used in prior work (Guestrin et al., 2003; Diuk et al., 2008), and these factored representations permit a more compositional form of generalization across different tasks (Kansky et al., 2017; Battaglia et al., 2016; Chang et al., 2016). How to extract such factored representations from unstructured state spaces such as images still remains a challenging problem. We leave such an extension to future work.

Prior work on (Deep) SF transfer (Barreto et al., 2018; 2020; Kulkarni et al., 2016; Zhang et al., 2017), meta-learning (Finn et al., 2017), or multi-task learning (Rusu et al., 2015; D'Eramo et al., 2020) has focused on extracting an inductive bias from a set of tasks to accelerate learning in subsequent tasks. These methods transfer a value function or policy model to initialize and accelerate learning. Because these methods transfer a model of a task's policy, these models have to be adapted to each transfer task if the transfer task's optimal policy differs from the previously learned policies. Reward-predictive representations overcome this limitation by only modelling how to generalize across different states. Because reward-predictive representations do not encode the specifics of how to transition between different latent states or how latent states are tied to rewards, these representations are robust to changes in transitions and rewards. Furthermore, the reward-predictive representation network is learned using a single task, and the resulting network is sufficient to demonstrate positive transfer across different transitions and rewards. This form of transfer is also different from the method presented by Zhang et al. (2021b), where the focus is on extracting a common task structure from a set of tasks instead of learning a representation from a single task and transferring it to different test tasks. Still, in a lifelong learning scenario, re-using the same reward-predictive representation network to solve every task may not be possible, because an agent may have to generalize differently across states (as demonstrated by the left dial broken combination lock variant in Section 4.2). In this article, we analyze the generalization properties of reward-predictive representations through A-B transfer experiments. While Lehnert et al. (2020) already present a (non-parametric) meta-learning model that uses reward-predictive representations to accelerate learning in finite MDPs, we leave how to integrate the presented clustering algorithm into existing meta-learning frameworks commonly used in deep RL—such as Barreto et al. (2018) or Finn et al. (2017)—for future work.

## 6 Conclusion

We presented a clustering algorithm to compute reward-predictive representations that introduces as few latent states as possible. The algorithm works by iteratively refining a state representation using a temporal-difference error that is defined on state features.
Furthermore, we analyze under which assumptions the resulting representation networks are suitable for systematic out-of-distribution generalization and demonstrate that reward-predictive representation networks enable RL agents to re-use abstract task knowledge to improve their learning efficiency.

## References

David Abel, Dilip Arumugam, Lucas Lehnert, and Michael Littman. State abstractions for lifelong reinforcement learning. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 10–19, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/abel18a.html.

Kavosh Asadi, Dipendra Misra, and Michael Littman. Lipschitz continuity in model-based reinforcement learning. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 264–273, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/asadi18a.html.

André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. In *Advances in Neural Information Processing Systems*, pp. 4055–4065, 2017.

Andre Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Zidek, and Remi Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 501–510, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/barreto18a.html.

André Barreto, Shaobo Hou, Diana Borsa, David Silver, and Doina Precup. Fast reinforcement learning with generalized policy updates. *Proceedings of the National Academy of Sciences*, 117(48):30079–30087, 2020.

Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray kavukcuoglu. Interaction networks for learning about objects, relations and physics. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, NIPS'16, pp. 4509–4517, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.

Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849–15854, 2019. ISSN 0027-8424. doi: 10.1073/pnas.1903070116. URL https://www.pnas.org/content/116/32/15849.

Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. *arXiv preprint arXiv:1612.00341*, 2016.

Gheorghe Comanici, Doina Precup, and Prakash Panangaden. Basis refinement strategies for linear value function approximation in MDPs. In *Advances in Neural Information Processing Systems*, pp. 2899–2907, 2015.

Peter Dayan. Improving generalization for temporal difference learning: The successor representation. *Neural Computation*, 5(4):613–624, 1993.
Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. Sharing knowledge in multi-task deep reinforcement learning. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=rkgpv2VFvr.

Carlos Diuk, Andre Cohen, and Michael L. Littman. An object-oriented representation for efficient reinforcement learning. In *Proceedings of the 25th International Conference on Machine Learning*, ICML '08, pp. 240–247, New York, NY, USA, 2008. Association for Computing Machinery. ISBN 9781605582054. doi: 10.1145/1390156.1390187. URL https://doi.org/10.1145/1390156.1390187.

Simon S Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, and John Langford. Provably efficient RL with rich observations via latent state decoding. *arXiv preprint arXiv:1901.09018*, 2019.

Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite Markov decision processes. In *Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence*, pp. 162–169. AUAI Press, 2004.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. *arXiv preprint arXiv:1703.03400*, 2017.

Nicholas T Franklin and Michael J Frank. Compositional clustering in task structure learning. *PLoS Computational Biology*, 14(4):e1006116, 2018.

Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In *International Conference on Machine Learning*, pp. 2170–2179, 2019.

Robert Givan, Thomas Dean, and Matthew Greig. Equivalence notions and model minimization in Markov decision processes. *Artificial Intelligence*, 147(1):163–223, 2003.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington (eds.), *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, volume 9 of *Proceedings of Machine Learning Research*, pp. 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. JMLR Workshop and Conference Proceedings. URL http://proceedings.mlr.press/v9/glorot10a.html.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep Learning*. MIT Press, 2016. http://www.deeplearningbook.org.

Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. *Journal of Artificial Intelligence Research*, 19:399–468, 2003.

David Ha and Jürgen Schmidhuber. World models. *arXiv preprint arXiv:1803.10122*, 2018.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. *arXiv preprint arXiv:1706.04317*, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *CoRR*, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. *arXiv preprint arXiv:1606.02396*, 2016.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. doi: 10.1109/5.726791.
Lucas Lehnert and Michael L Littman. Transfer with model features in reinforcement learning. *arXiv preprint arXiv:1807.01736*, 2018.

Lucas Lehnert and Michael L Littman. Successor features combine elements of model-free and model-based reinforcement learning. *Journal of Machine Learning Research*, 21(196):1–53, 2020.

Lucas Lehnert, Stefanie Tellex, and Michael L Littman. Advantages and limitations of using successor features for transfer in reinforcement learning. *arXiv preprint arXiv:1708.00102*, 2017.

Lucas Lehnert, Michael L Littman, and Michael J Frank. Reward-predictive representations generalize across tasks in reinforcement learning. *PLoS Computational Biology*, 16(10):e1008317, 2020.

Felix Leibfried, Nate Kushman, and Katja Hofmann. A deep learning approach for joint video frame and reward prediction in Atari games. *arXiv preprint arXiv:1611.07078*, 2016.

Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *CoRR*, abs/2005.01643, 2020. URL https://arxiv.org/abs/2005.01643.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. *CoRR*, abs/1312.5602, 2013. URL http://arxiv.org/abs/1312.5602.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015.

Ida Momennejad, Evan M Russek, Jin H Cheong, Matthew M Botvinick, ND Daw, and Samuel J Gershman. The successor representation in human reinforcement learning. *Nature Human Behaviour*, 1(9):680, 2017.

Mark Nemecek and Ronald Parr. Policy caches with successor features. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 8025–8033. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/nemecek21a.html.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In *Advances in Neural Information Processing Systems*, pp. 2863–2871, 2015.

Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. *arXiv preprint arXiv:1707.03497*, 2017.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf.
Evan M Russek, Ida Momennejad, Matthew M Botvinick, Samuel J Gershman, and Nathaniel D Daw. Predictive representations can link model-based reinforcement learning to model-free mechanisms. *PLoS Computational Biology*, 13(9):e1005768, 2017.

Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. *arXiv preprint arXiv:1511.06295*, 2015.

Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. *arXiv preprint arXiv:1911.08265*, 2019.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

David Silver, Hado Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. In *International Conference on Machine Learning*, pp. 3191–3199. PMLR, 2017a.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. *Nature*, 550(7676):354–359, 2017b.

Richard S Sutton and Andrew G Barto. *Reinforcement Learning: An Introduction*. MIT Press, 2018.

Erik Talvitie. Self-correcting models for model-based reinforcement learning. In *AAAI*, pp. 2597–2603, 2017.

Erik Talvitie. Learning the reward function for a misspecified model. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 4838–4847, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/talvitie18a.html.

Vladimir Vapnik. Principles of risk minimization for learning theory. In *Advances in Neural Information Processing Systems*, pp. 831–838, 1992.

Théophane Weber, Sébastien Racanière, David P Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. Imagination-augmented agents for deep reinforcement learning. *arXiv preprint arXiv:1707.06203*, 2017.

Sam Witty, Jun Ki Lee, Emma Tosch, Akanksha Atrey, Michael Littman, and David Jensen. Measuring and characterizing generalization in deep reinforcement learning. *arXiv preprint arXiv:1812.02868*, 2018.

Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. *arXiv preprint arXiv:2006.10742*, 2021a.

Amy Zhang, Shagun Sodhani, Khimya Khetarpal, and Joelle Pineau. Learning robust state abstractions for hidden-parameter block MDPs, 2021b.

Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, and Wolfram Burgard. Deep reinforcement learning with successor features for navigation across similar environments. In *2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 2371–2378. IEEE, 2017.
## Appendix A Linear Successor Feature Models

Lehnert & Littman define LSFMs as a set of real-valued vectors $\{\boldsymbol{w}_a\}_{a \in \mathcal{A}}$ and real-valued square matrices $\{\boldsymbol{F}_a\}_{a \in \mathcal{A}}$ that are indexed by the different actions $a \in \mathcal{A}$ of an MDP. Furthermore, LSFMs can be used to identify a reward-predictive representation function $\phi : \mathcal{S} \to \mathbb{R}^n$. Specifically, if a state-representation function $\phi$ satisfies for all state-action pairs $(s, a)$

$$\boldsymbol{w}_a^\top \boldsymbol{\phi}(s) = \mathbb{E}_p[r(s, a, s') \mid s, a] \tag{18}$$

$$\text{and} \quad \boldsymbol{F}_a^\top \boldsymbol{\phi}(s) = \boldsymbol{\phi}(s) + \gamma \overline{\boldsymbol{F}}^\top \mathbb{E}_p[\boldsymbol{\phi}(s') \mid s, a] \quad \text{where} \quad \overline{\boldsymbol{F}} = \frac{1}{|\mathcal{A}|} \sum_{a' \in \mathcal{A}} \boldsymbol{F}_{a'}, \tag{19}$$

then the state-representation function $\phi$ is reward-predictive.

Given a partition function $c$ and the trajectory data set $\mathcal{D}$, an LSFM can be computed. For a partition $i$, the $i$th entry of the weight vector $\boldsymbol{w}_a$ equals the one-step rewards averaged across all state observations:

$$\boldsymbol{w}_a(i) = \frac{1}{|\{(s, a, r, s') \mid c(s) = i\}|} \sum_{(s, a, r, s') \mid c(s) = i} r, \tag{20}$$

where the summation in Equation (20) ranges over all transitions in $\mathcal{D}$ that start in partition $i$. Similarly, the empirical partition-to-partition transition probabilities can be calculated and stored in a row-stochastic transition matrix $\boldsymbol{M}_a$. Each entry of this matrix is set to the empirical probability of transitioning from a partition $i$ to a partition $j$:

$$\boldsymbol{M}_a(i, j) = \frac{|\{(s, a, r, s') \mid c(s) = i, c(s') = j\}|}{|\{(s, a, r, s') \mid c(s) = i\}|}. \tag{21}$$

Using this partition-to-partition transition matrix, the matrices $\{\boldsymbol{F}_a\}_{a \in \mathcal{A}}$ can be calculated as outlined by Lehnert & Littman:

$$\boldsymbol{F}_a = \boldsymbol{I} + \gamma \boldsymbol{M}_a \boldsymbol{F} \quad \text{and} \quad \boldsymbol{F} = (\boldsymbol{I} - \gamma \overline{\boldsymbol{M}})^{-1}, \quad \text{where} \quad \overline{\boldsymbol{M}} = \frac{1}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} \boldsymbol{M}_a. \tag{22}$$

This calculation is used to compute the SF targets used for function approximation in Algorithm 1.
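The LSFM quantities in Equations (20) through (22) are simple empirical estimates. A minimal sketch, assuming the transitions have already been mapped through the partition function $c$ into integer latent indices:

```python
import numpy as np

def compute_lsfm(transitions, num_partitions: int, num_actions: int, gamma: float):
    """Estimate w_a (Eq. 20), M_a (Eq. 21), and F_a (Eq. 22) from a data set.
    transitions: iterable of (i, a, r, j) tuples, where i = c(s) and j = c(s')."""
    counts = np.zeros((num_actions, num_partitions))
    w = np.zeros((num_actions, num_partitions))
    M = np.zeros((num_actions, num_partitions, num_partitions))
    for i, a, r, j in transitions:
        counts[a, i] += 1.0
        w[a, i] += r
        M[a, i, j] += 1.0
    w /= np.maximum(counts, 1.0)                    # average one-step rewards
    M /= np.maximum(counts, 1.0)[:, :, None]        # row-stochastic estimates
    M_bar = M.mean(axis=0)
    F = np.linalg.inv(np.eye(num_partitions) - gamma * M_bar)
    F_a = np.eye(num_partitions)[None, :, :] + gamma * M @ F
    return w, M, F_a, F
```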
## Appendix B Convergence Proof

**Definition 1** (Sub-clustering). A clustering c is a sub-clustering of c* if the following property holds:

$$\forall s,\tilde{s},\quad c(s)\neq c(\tilde{s})\implies c^{*}(s)\neq c^{*}(\tilde{s}).\tag{23}$$

**Definition 2** (Maximally-Compressed-Reward-Predictive Clustering). A maximally-compressed-reward-predictive representation is a function c* assigning every state s ∈ S to an index such that for all state-action pairs (s, a)

$$\left|\mathbf{w}_{a}^{\top}\mathbf{e}_{c^{*}(s)}-\mathbb{E}_{p}[r(s,a,s^{\prime})|s,a]\right|\leq\varepsilon_{r}\tag{24}$$

$$\text{and}\quad\left\|\mathbf{F}_{a}^{\top}\mathbf{e}_{c^{*}(s)}-\boldsymbol{\psi}_{*}^{\pi}(s,a)\right\|\leq\varepsilon_{\psi},\tag{25}$$

where ψ*π(s, a) are the SFs calculated for a state-representation function mapping a state s to a one-hot bit vector **e**_{c*(s)}. Furthermore, this representation uses as few indices as possible.

Definition 2 implicitly makes the assumption that the state space of an arbitrary MDP can be partitioned into finitely many reward-predictive partitions. While this may not be the case for all possible MDPs, this assumption is not restrictive when using the presented clustering algorithm. Because the trajectory data set is finite, any algorithm only processes a finite subset of all possible states (even if state spaces are uncountably infinite) and can therefore always partition these state observations into a finite number of partitions.

**Property 1** (Refinement Property). In Algorithm 1, every iteration refines the existing partitions until the termination condition is reached. Specifically, for every iteration, ci is a sub-clustering of ci+1 and, for any two distinct states s and s̃,

$$c_{i}(s)\neq c_{i}(\tilde{s})\implies c_{i+1}(s)\neq c_{i+1}(\tilde{s}).\tag{26}$$

**Property 2** (Reward-predictive Splitting Property). Consider a maximally-compressed-reward-predictive representation encoded by the clustering c* and the cluster sequence c1, c2, ... generated by Algorithm 1. For any two distinct states s and s̃,

$$c_{i}(s)\neq c_{i}(\tilde{s})\implies c^{*}(s)\neq c^{*}(\tilde{s}).\tag{27}$$

**Lemma 1** (SF Separation). For a cluster function ci and any arbitrary MDP, if

$$\gamma<\frac{1}{2}\quad\text{and}\quad\frac{2}{3}\left(1-\frac{\gamma}{1-\gamma}\right)>\varepsilon_{\psi}>0,\tag{28}$$

then

$$\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|\geq3\varepsilon_{\psi}\tag{29}$$

for two states s and s̃ that are assigned to two different partitions and ci(s) ≠ ci(s̃).

*Proof of SF Separation Lemma 1.* First, we observe that the norm of a SF vector can be bounded with

$$\begin{aligned}\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)\right\|&=\left\|\mathbb{E}_{\pi}\left[\sum_{t=1}^{\infty}\gamma^{t-1}\mathbf{e}_{c_{i}(s_{t})}\,\Big|\,s=s_{1},a\right]\right\|&&(30)\\ &=\left\|\sum_{t=1}^{\infty}\gamma^{t-1}\mathbb{E}_{\pi}\left[\mathbf{e}_{c_{i}(s_{t})}\,\big|\,s=s_{1},a\right]\right\|&&\text{(by linearity of expectation)}\quad(31)\\ &\leq\sum_{t=1}^{\infty}\gamma^{t-1}\underbrace{\left\|\mathbb{E}_{\pi}\left[\mathbf{e}_{c_{i}(s_{t})}\,\big|\,s=s_{1},a\right]\right\|}_{\leq 1}&&(32)\\ &\leq\sum_{t=1}^{\infty}\gamma^{t-1}&&(33)\\ &=\frac{1}{1-\gamma}.&&(34)\end{aligned}$$

The transformation to line (33) uses the fact that expected values of one-hot vectors are always probability vectors. Furthermore, we note that

$$0\leq\gamma<\frac{1}{2}\implies\frac{2\gamma}{1-\gamma}<2.\tag{35}$$

The norm of the difference of SF vectors for two states s and s̃ that start in different partitions can be bounded with

$$\begin{aligned}\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|&=\left\|\left(\mathbf{e}_{k}+\gamma\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|s,a]\right)-\left(\mathbf{e}_{l}+\gamma\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|\tilde{s},a]\right)\right\|&&(36)\\ &=\left\|(\mathbf{e}_{k}-\mathbf{e}_{l})+\gamma\left(\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|s,a]-\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|\tilde{s},a]\right)\right\|&&(37)\\ &=\left\|(\mathbf{e}_{k}-\mathbf{e}_{l})-\gamma\left(\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|\tilde{s},a]-\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|s,a]\right)\right\|&&(38)\\ &\geq\underbrace{\left\|\mathbf{e}_{k}-\mathbf{e}_{l}\right\|}_{=2}-\gamma\left\|\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|\tilde{s},a]-\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|s,a]\right\|&&(39)\\ &=2-\gamma\left\|\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|\tilde{s},a]-\mathbb{E}[\boldsymbol{\psi}_{i}^{\pi}(s^{\prime},a^{\prime})|s,a]\right\|&&(40)\\ &\geq2-\gamma\cdot\frac{2}{1-\gamma}&&(41)\\ &=2-\frac{2\gamma}{1-\gamma}.&&(42)\end{aligned}$$

The transformation to line (40) holds because s and s̃ start in different partitions, so ci(s) = k ≠ ci(s̃) = l. The transformation to line (41) holds because, by (34), the norm of the difference of two SF vectors is bounded by 2/(1 − γ). The resulting term cannot become negative because the discount factor γ is set to be below 1/2 and the bound in line (35) holds.
Using the condition on the discount factor in line (28), we have

$$\frac{2}{3}\left(1-\frac{\gamma}{1-\gamma}\right)\geq\varepsilon_{\psi}\implies2-\frac{2\gamma}{1-\gamma}\geq3\varepsilon_{\psi}\qquad\text{(by (28))}\tag{43}$$

$$\implies\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|\geq3\varepsilon_{\psi}.\qquad\text{(by (42))}\tag{44}$$

□

**Definition 3** (Representation Projection Matrix). For a maximally-compressed-reward-predictive clustering c* and a sub-clustering ci, we define a projection matrix Φi such that every entry

$$\Phi_{i}(k,l)=\begin{cases}1&\exists s\ \text{such that}\ c_{i}(s)=k\ \text{and}\ c^{*}(s)=l\\ 0&\text{otherwise.}\end{cases}\tag{45}$$

**Lemma 2** (SF Projection). For every state-action pair (s, a), $\boldsymbol{\psi}_{i}^{\pi}(s,a)=\boldsymbol{\Phi}_{i}\boldsymbol{\psi}_{*}^{\pi}(s,a)$.

*Proof of SF Projection Lemma 2.* The proof is by the derivation in lines (11) through (14).

*Proof of Convergence Theorem 1.* The convergence proof argues by induction on the number of refinement iterations and first establishes that the Refinement Property 1 and Reward-predictive Splitting Property 2 hold at every iteration. Then we provide an argument that the returned cluster function is a maximally-compressed-reward-predictive representation.

**Base case:** The first clustering c1 merges two state observations into the same cluster if they lead to equal one-step rewards for every action. The reward condition in Equation (24) can be satisfied by constructing a vector **w**_a such that every entry equals the average predicted one-step reward for each partition and

$$\mathbf{w}_{a}(i)=\frac{1}{|\{s:c_{1}(s)=i\}|}\sum_{s:c_{1}(s)=i}f_{r}(s,a).\tag{46}$$

By Assumption 1, all predictions made by f_r are at most εr/2 apart from the correct value and therefore

$$\left|\mathbf{e}_{c_{1}(s)}^{\top}\mathbf{w}_{a}-\mathbb{E}_{p}[r(s,a,s^{\prime})|s,a]\right|\leq\varepsilon_{r}.\tag{47}$$

Consequently, the reward condition in Equation (24) is met and for any two states s and s̃

$$c_{1}(s)\neq c_{1}(\tilde{s})\implies c^{*}(s)\neq c^{*}(\tilde{s})\tag{48}$$

and Property 2 holds. Property 1 holds trivially because c1 is the first constructed clustering.

**Induction Hypothesis:** For a clustering ci, both Property 1 and Property 2 hold.

**Induction Step:** To see why Property 1 and 2 hold for a clustering ci+1, we first denote prediction errors with a vector δi and

$$\hat{\boldsymbol{\psi}}_{i}^{\pi}(s,a)=\boldsymbol{\psi}_{i}^{\pi}(s,a)+\boldsymbol{\delta}_{i}(s,a).\tag{49}$$

If two states s and s̃ are merged into the same partition by a maximally-compressed-reward-predictive representation (and have equal SFs ψ*π), then

$$\begin{aligned}&\left\|\hat{\boldsymbol{\psi}}_{i}^{\pi}(s,a)-\hat{\boldsymbol{\psi}}_{i}^{\pi}(\tilde{s},a)\right\|&&(50)\\ &\leq\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|+\left\|\boldsymbol{\delta}_{i}(s,a)-\boldsymbol{\delta}_{i}(\tilde{s},a)\right\|&&\text{(by substituting (49) and triangle ineq.)}\quad(51)\\ &=\left\|\boldsymbol{\Phi}_{i}\boldsymbol{\psi}_{*}^{\pi}(s,a)-\boldsymbol{\Phi}_{i}\boldsymbol{\psi}_{*}^{\pi}(\tilde{s},a)\right\|+\underbrace{\left\|\boldsymbol{\delta}_{i}(s,a)-\boldsymbol{\delta}_{i}(\tilde{s},a)\right\|}_{\leq\frac{\varepsilon_{\psi}}{2}+\frac{\varepsilon_{\psi}}{2}\text{ by Assmpt. 1}}&&\text{(by Lemma 2)}\quad(52)\\ &\leq\left\|\boldsymbol{\Phi}_{i}\right\|\cdot\underbrace{\left\|\boldsymbol{\psi}_{*}^{\pi}(s,a)-\boldsymbol{\psi}_{*}^{\pi}(\tilde{s},a)\right\|}_{=0\text{ by choice of }s\text{ and }\tilde{s}}+\,\varepsilon_{\psi}&&(53)\\ &=\varepsilon_{\psi}.&&(54)\end{aligned}$$

Consequently,

$$c^{*}(s)=c^{*}(\tilde{s})\implies\left\|\hat{\boldsymbol{\psi}}_{i}^{\pi}(s,a)-\hat{\boldsymbol{\psi}}_{i}^{\pi}(\tilde{s},a)\right\|\leq\varepsilon_{\psi}\implies c_{i+1}(s)=c_{i+1}(\tilde{s}).\tag{55}$$

By inversion of the implication in line (55), the Reward-predictive Splitting Property 2 holds.
Furthermore, because the matching condition in line (16) holds, we have for any two states

$$c_{i}(s)\neq c_{i}(\tilde{s})\implies\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|>3\varepsilon_{\psi}.\tag{56}$$

Consequently,

$$\begin{aligned}\left\|\hat{\boldsymbol{\psi}}_{i}^{\pi}(s,a)-\hat{\boldsymbol{\psi}}_{i}^{\pi}(\tilde{s},a)\right\|&=\left\|\left(\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right)-\left(\boldsymbol{\delta}_{i}(\tilde{s},a)-\boldsymbol{\delta}_{i}(s,a)\right)\right\|&&(57)\\ &\geq\underbrace{\left\|\boldsymbol{\psi}_{i}^{\pi}(s,a)-\boldsymbol{\psi}_{i}^{\pi}(\tilde{s},a)\right\|}_{>3\varepsilon_{\psi}}-\underbrace{\left\|\boldsymbol{\delta}_{i}(\tilde{s},a)-\boldsymbol{\delta}_{i}(s,a)\right\|}_{\leq2\varepsilon_{\psi}}&&\text{(by inverse triangle ineq.)}\quad(58)\\ &>3\varepsilon_{\psi}-2\varepsilon_{\psi}=\varepsilon_{\psi}.&&(59)\end{aligned}$$

Therefore, ci+1(s) ≠ ci+1(s̃) and the Refinement Property 1 holds as well.

Lastly, the clustering cT returned by Algorithm 1 satisfies the conditions outlined in Definition 2. Because the Refinement Property 1 holds at every iteration, we have by line (47) that

$$\left|\mathbf{e}_{c_{T}(s)}^{\top}\mathbf{w}_{a}-\mathbb{E}_{p}[r(s,a,s^{\prime})|s,a]\right|\leq\varepsilon_{r}\tag{60}$$

and therefore cT satisfies the bound in line (24). Furthermore, because Algorithm 1 terminates when cT and cT−1 are identical, we have that

$$c_{T}(s)=c_{T}(\tilde{s})\iff\left\|\hat{\boldsymbol{\psi}}_{T}^{\pi}(s,a)-\hat{\boldsymbol{\psi}}_{T}^{\pi}(\tilde{s},a)\right\|\leq\varepsilon_{\psi}.\tag{61}$$

For this clustering, we can construct a set of matrices {**F̂**_a}_{a∈A} by averaging the predicted SFs such that every row

$$\widehat{\mathbf{F}}_{a}(i)=\frac{1}{|\{s:c_{T}(s)=i\}|}\sum_{s:c_{T}(s)=i}\hat{\boldsymbol{\psi}}_{T}^{\pi}(s,a).\tag{62}$$

For every observed state-action pair (s, a),

$$\begin{aligned}\left\|\mathbf{e}_{c_{T}(s)}^{\top}\widehat{\mathbf{F}}_{a}-\boldsymbol{\psi}_{T}^{\pi}(s,a)\right\|&=\left\|\mathbf{e}_{c_{T}(s)}^{\top}\widehat{\mathbf{F}}_{a}-\hat{\boldsymbol{\psi}}_{T}^{\pi}(s,a)+\boldsymbol{\delta}_{T}(s,a)\right\|&&(63)\\ &\leq\underbrace{\left\|\mathbf{e}_{c_{T}(s)}^{\top}\widehat{\mathbf{F}}_{a}-\hat{\boldsymbol{\psi}}_{T}^{\pi}(s,a)\right\|}_{\leq\varepsilon_{\psi}\text{ by (61)}}+\underbrace{\left\|\boldsymbol{\delta}_{T}(s,a)\right\|}_{\leq\frac{\varepsilon_{\psi}}{2}\text{ by Assumption 1}}&&(64)\\ &\leq\frac{3}{2}\varepsilon_{\psi}&&(65)\end{aligned}$$

and therefore the SF condition in line (25) holds as well (up to a rescaling of the εψ hyper-parameter). □

## Appendix C Experiments

**C.1 Reward-Predictive Clustering Experiments**

In Section 4, the clustering algorithm was run on a fixed trajectory dataset that was generated by selecting actions uniformly at random. In the Column World task, a start state was sampled uniformly at random from the right column. In the Combination Lock task the start state was always the combination (0, 0, 0). MNIST images were always sampled uniformly at random from the training or test sets (depending on the experiment phase).

For the Column World experiment, a three-layer fully connected neural network was used with ReLU activation functions. The two hidden layers have a dimension of 1000 (the output dimension depends on the number of latent states and actions).
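As a concrete illustration, a minimal PyTorch sketch of this fully connected architecture; all dimensions other than the 1000-unit hidden layers are assumptions, and the function name is illustrative.

```python
import torch.nn as nn

def make_column_world_net(in_dim: int, out_dim: int) -> nn.Module:
    """Three-layer fully connected network with ReLU activations and two
    1000-dimensional hidden layers, as described for the Column World
    experiment; `out_dim` depends on the number of latent states and actions."""
    return nn.Sequential(
        nn.Linear(in_dim, 1000), nn.ReLU(),
        nn.Linear(1000, 1000), nn.ReLU(),
        nn.Linear(1000, out_dim),
    )
```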
In the Combination Lock experiment, the ResNet18 architecture was used by first reshaping the state image into a stack of three digit images and then feeding this image into the ResNet18 model. For all experiments the weights of the ResNet18 model were initialized at random (we did not use a pre-trained model). The 1000-dimensional output of this model was then passed through a ReLU activation function and then through a linear layer. The output dimension varied depending on the quantity the network is trained to predict during clustering. Only the top-most linear layer was re-trained between different refinement iterations; the weights of the lower layers (e.g., the ResNet18 model) were re-used across different refinement iterations. All experiments were implemented in PyTorch (Paszke et al., 2019) and all neural networks were optimized using the Adam optimizer (Kingma & Ba, 2014). We always used PyTorch's default network weight initialization heuristics and default values for the optimizer and only varied the learning rate. Mini-batches were sampled by shuffling the data set at the beginning of every epoch. Table 1 lists the hyper-parameters used.

Table 1: Hyper-parameter settings for both clustering algorithms.

| Parameter | Column World | Combination Lock |
|-----------------------------------------|----------------|--------------------|
| Batch size | 32 | 256 |
| Epochs, reward refinement | 5 | 10 |
| Epochs, SF refinement | 5 | 20 |
| Epochs, representation network training | 5 | 20 |
| Learning rate | 0.005 | 0.001 |
| εr | 0.5 | 0.4 |
| εψ | 1.0 | 0.8 |
| Spurious latent state filter fraction | 0.01 | 0.0025 |
| Number of training trajectories | 1000 | 10000 |

**C.2 DQN Experiments**

All experiments in Figure 6 were repeated 20 times and each agent spent 100 episodes in each task. To select actions, an ε-greedy exploration strategy was used that selects actions greedily (with respect to the Q-value predictions) with probability ε, and selects actions uniformly at random with probability 1 − ε. During the first episode in each training and test task, ε = 0, and ε was linearly increased to 1 within 10 time steps. The DQN agent always used a Q-network architecture consisting of the ResNet18 architecture (with random weight initialization), a ReLU activation function, and then a fully connected layer to predict Q-values for each action (as illustrated in Figure 6(b)). Table 2 outlines the hyper-parameters that were fine-tuned for the combination lock training task. These hyper-parameters were then re-used for all DQN variants used in Section 4.2.

Table 2: Hyper-parameter sweep results for DQN on the combination lock training task.

| Parameter | Tested Values | Best Setting (highest reward-per-step score) |
|----------------------|-----------------------------|------------------------------------------------|
| Learning rate | 10⁻⁴, 10⁻³, 10⁻², 10⁻¹ | 10⁻³ |
| Batch size | 100, 200, 500 | 200 |
| Buffer size | 100, 1000, 10000 | 10000 |
| Exploration episodes | 5, 10, 20, 50, 80 | 10 |
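For concreteness, a sketch of the Q-network and action-selection scheme described above, assuming `torchvision`'s randomly initialized ResNet18; the class and function names are illustrative, not taken from the original implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class QNetwork(nn.Module):
    """ResNet18 backbone (randomly initialized), a ReLU, then a linear
    layer predicting one Q-value per action."""
    def __init__(self, num_actions: int):
        super().__init__()
        self.backbone = resnet18(weights=None)    # no pre-trained weights
        self.head = nn.Linear(1000, num_actions)  # ResNet18 outputs 1000 dims
    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

def select_action(q_net, obs, eps, num_actions):
    """With probability eps act greedily w.r.t. the Q-values; otherwise act
    uniformly at random (the greedy-probability convention used above)."""
    if torch.rand(1).item() < eps:
        with torch.no_grad():
            return q_net(obs.unsqueeze(0)).argmax(dim=1).item()
    return torch.randint(num_actions, (1,)).item()
```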
Review 1:

Summary: The paper proposes an approach for learning representations in RL that are indicative of future reward. The authors propose an iterative refinement approach for learning this space by learning successor features and using them to cluster states. The authors include a proof that the resulting representation maximally compresses the observations, and include some experiments for OOD generalization.

Strengths and Weaknesses:

Strengths - The paper considers an important problem: that of learning effective representations for RL. Learning control policies from high-dimensional observations like images is quite sample-inefficient, and useful representations will help learn faster and also transfer/generalize.

Weaknesses -

1. The paper does not compare to prior work that seeks to tackle the same problem: representation learning in RL to predict future reward. Zhang et al. [1] learn a bisimulation metric representation (DBC), where states that get similar expected return are embedded to the same representations. Further, that work is tested on a wide suite of visual RL environments, including visual robots in Mujoco requiring continuous control and self-driving settings in a Carla simulator, both of which are much more complex than the settings considered in this paper. The proposed paper also does not compare to other works in representation learning for visual RL, like [2,3,4]. The authors only compare against DQN on a relatively simplistic combination lock environment that they propose. In order to better understand the significance and benefit of the proposed approach, it needs to be evaluated much more thoroughly.

2. Having a discrete number of clusters seems like an important limitation, especially since DBC [1] learns a continuous embedding of the observation and has been shown to work quite well for RL tasks.

[1]: Learning invariant representations for reinforcement learning without reconstruction
[2]: DeepMDP: Learning continuous latent space models for representation learning
[3]: CURL: Contrastive unsupervised representations for reinforcement learning
[4]: Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

Requested Changes: Please address the concerns listed in the weaknesses section. Specifically, please evaluate the approach on distracting environments from Mujoco [1,2], and even better, show better efficiency/transfer results with this representation on more realistic simulation environments [3,4]. Also please add comparisons to DBC [1] and other prior work listed above. DBC is especially pertinent since it addresses the same question: how to build representations that are informative of future reward.

[1] - Learning invariant representations for reinforcement learning without reconstruction
[2] - The Distracting Control Suite – A Challenging Benchmark for Reinforcement Learning from Pixels
[3] - Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
[4] - robosuite: A Modular Simulation Framework and Benchmark for Robot Learning

Broader Impact Concerns: NA

==================================================

Review 2:

Summary: The submission investigates reward-predictive state abstractions in a deep learning setting. Its first contribution is a clustering algorithm that ports the reward-predictive state abstraction to a deep learning setting. Its second contribution is a theorem proving the convergence of the algorithm under a set of assumptions.
Finally, a broad set of experiments empirically validates the efficiency of the approach.

Strengths and Weaknesses: Adapting the reward-predictive state abstraction to a deep learning setting is interesting, promising, and novel as far as I can tell, and the submission is generally well exposed. Nevertheless, it presents several major issues:

* The motivation could be improved:
  * L21-28: I am not convinced that reward-predictive state abstraction can be claimed as advantageous for reusability in other tasks, since the reward prediction is tied to the task.
  * L54-55: It is unclear 1/ what a compressed latent space is and 2/ why we would need this.
  * L63-70: This task seems designed for the reward-predictive bias to be beneficial. We could imagine the same environment where the action moving to the right depends on the row number, and then the clustering of states would be ill-advised, or am I missing something?
* Some claims are not sufficiently supported, and they may appear overstated:
  * Abstract: "maximally compresses" => it is not maximal.
  * L127-128: What do you mean exactly by that? Why? Doesn't non-linear clustering induce a representation function that is non-linear?
  * L170-173: Strange choices and claims. It rather leaves me the impression that the real reason for this choice is that the authors could not make it work otherwise.
* Eq9: These clustering objectives do not seem to be adapted to stochasticity, whether in rewards, transitions, or the behavioral policy.
* Thm1: The assumption defined in Eq16 should be made more visible in the introduction. This is drastic and a bit hidden in the current exposition (even though the requirement is clear in the theorem).

Minor comments/typos:
* L42: Besides => besides
* L42: decide between => choose among
* L61: the reward function looks deterministic. You should make it clear.
* Fig1: given otherwise => reached otherwise
* L126-156: too long of a paragraph, I would break it down.
* Eq4 and Eq8: shouldn't the difference be squared?
* Alg1: appears too late.
* Eq16: γ < 1/2 is implied by the second inequality.

Requested Changes: Please address the major issues enumerated above.

Broader Impact Concerns: no concern

==================================================

Review 3:

Summary: This paper builds off of the work on reward-predictive representation learning (Lehnert et al. 2020; Lehnert & Littman, 2020), extending these ideas to the deep learning setting. This is done by introducing a clustering algorithm that clusters together observations that lead to the same sequence of rewards. This is done by using successor features (SFs) to represent states before they are clustered, by defining a cross-entropy loss that makes the problem closer to a classification problem than a regression problem, and so on. The clustering algorithm automatically identifies the number of clusters needed. Aside from some theoretical guarantees based on strong assumptions about the clustering algorithm being quasi-perfect, the paper presents results in simulation on combination-lock problems that use MNIST digits to represent the numbers in the dials.

Strengths and Weaknesses: The paper tackles an important problem in the field. State abstraction is a promising way to lead to faster credit assignment and knowledge reuse. The paper is sound and it clearly delineates the problem it is tackling and the assumptions it is making.

Overall, I do really worry about the assumptions being made in terms of the accuracy of the clustering algorithm.
I find the justification based on double descent weak, as that does not guarantee perfect accuracy. In fact, in reinforcement learning problems, it seems to me that the assumption that the clustering algorithm can be near-perfect rests on the assumption that the agent has enough capacity to model the whole world. This is quite problematic: in most problems (certainly those in the real world), the environment is much bigger than the agent, and the agent needs to accept the fact that it cannot model everything. I do understand that oftentimes we need to depart from theoretical results in order to deploy the methods we design, and I wish that had been evaluated. Specifically, I wish there was an analysis of the performance of the proposed method when clustering is inaccurate to varying degrees. How gracefully does performance decay?

I fight against the urge to ask for comparisons in different domains, more standard ones, such as those used in the SFs paper, and I don't condition the acceptance of the paper on this, but I do think the paper would benefit from more thorough empirical validation. Importantly, and I find this crucial, I feel there are important baselines missing in the paper. The paper does discuss (at the end) methods based on bisimulation metrics, for example, but it never uses any of those methods as a baseline, even though they are so similar to using the SFs + reward predictions to cluster states together. I believe a good baseline here would be something like MICo. Also, I was very confused by the discussion around Figure 6(c). MAML and SF-based GPI were rightly discussed in the main text, but from what I can tell they were not used as baselines. Why is that?

Finally, the text can be improved in some parts:
- Line 96: The SR can be defined with respect to any policy; it is not defined only in terms of the optimal policy.
- Line 159: Is $L_i$ a random variable or are the trajectories assumed to have fixed length?
- Line 170: "rewards are typically sparse in an RL task". The reinforcement learning problem formulation defines a reward that is observed at each time step; rewards are never sparse. Rewards might be zero most of the time with a +1 upon completion, but is this different from a reward of -1 at each time step with a 0 upon completion? Also, some of the most prominent benchmarks in reinforcement learning (e.g., MuJoCo) have non-zero rewards at almost every time step.
- Caption of Figure 2 and lines 177-189: How can we ever ensure that generalization is going to happen as described? It seems like wishful thinking.
- Equation 9: I believe the hat was not defined in the main paper. What does the hat stand for on top of the $\Psi$?
- Algorithm 1: I believe the function $H(\cdot, \cdot)$ was not defined in the main paper.
- Line 277: In English, didactic can have a bad connotation; was this the intended use of the word?

References: Castro et al.: MICo: Improved representations via sampling-based state similarity for Markov decision processes. NeurIPS 2021.

Requested Changes: I have two questions I want the authors to answer and to make clear in the paper:
- How much does the proposed assumption rely on the environment being quasi-deterministic?
- In lines 394-400 it is said that the proposed approach re-uses the pre-trained representation network, not performing weight updates on the representation network. Was this approach tried in the other baselines as well?
It seems to me the answer is no, but I wonder whether not updating the representation network wouldn't make it altogether better.

Aside from these two questions (and the necessary actions depending on the answers), I would like to see the performance of the different baselines I mentioned for Figure 6(c), specifically: a bisimulation-based method, a meta-learning method, and SF-based GPI, with a discussion of their failure patterns. Finally, I would like to see an analysis of how the performance of reward-predictive methods degrades as the quality of the clustering decays.

Broader Impact Concerns: I don't have any concerns.

==================================================

Metareview:

Recommendation: Reject

Comment: The lack of empirical support for the claims was noted by all of the reviewers, who consistently asked for better baselines, more diverse tasks, and more clarity on limitations. The authors responded to the reviewer comments, but said that fully addressing them would require more time. (They also dismissed some of the reviewer concerns, which is not ideal.) Given these issues, all three reviewers recommended rejection. As such, the AE was led to a rejection decision. However, if the authors can address the reviewers' concerns more fully in the future, they are welcome to resubmit the paper at a later time. If they do so, note that it will likely be sent to the same three reviewers.

==================================================
# Identifiable Deep Generative Models Via Sparse Decoding

Gemma E. Moran (gm2918@columbia.edu), Columbia University
Dhanya Sridhar, Mila - Quebec AI Institute and Université de Montréal
Yixin Wang, University of Michigan
David M. Blei, Columbia University

Reviewed on OpenReview: *https://openreview.net/forum?id=vd0onGWZbE*

## Abstract

We develop the sparse VAE for unsupervised representation learning on high-dimensional data. The sparse VAE learns a set of latent factors (representations) which summarize the associations in the observed data features. The underlying model is sparse in that each observed feature (i.e. each dimension of the data) depends on a small subset of the latent factors. As examples, in ratings data each movie is only described by a few genres; in text data each word is only applicable to a few topics; in genomics, each gene is active in only a few biological processes. We prove such sparse deep generative models are identifiable: with infinite data, the true model parameters can be learned. (In contrast, most deep generative models are not identifiable.) We empirically study the sparse VAE with both simulated and real data. We find that it recovers meaningful latent factors and has smaller heldout reconstruction error than related methods.

## 1 Introduction

In many domains, high-dimensional data exhibits variability that can be summarized by low-dimensional latent representations, or factors. These factors can be useful in a variety of tasks including prediction, transfer learning, and domain adaptation (Bengio et al., 2013). To learn such factors, many researchers fit deep generative models (DGMs) (Kingma and Welling, 2014; Rezende et al., 2014). A DGM models each data point with a latent low-dimensional representation (its factors), which is passed through a neural network to generate the observed features. Given a dataset, a DGM can be fit with a variational autoencoder (VAE), a method that provides both the fitted parameters to the neural network and the approximate posterior factors for each data point.

In this paper, we make two related contributions to the study of deep generative models. First, we develop the sparse DGM. This model is a DGM where each observed feature only depends on a subset of the factors - that is, the mapping from factors to features is sparse. This notion of sparsity often reflects the type of underlying patterns that we anticipate finding in real-world data. In genomics, each gene is associated with a few biological processes; in text, each term is applicable to a few underlying topics; in movie ratings, each movie is associated with a few genres. In detail, the model implements sparsity through a per-feature masking variable. When generating a data point, this mask is applied to the latent representation before producing each feature. In practice, we learn the per-feature mask with the help of the Spike-and-Slab Lasso prior (Ročková and George, 2018), a sparsity-inducing prior which enjoys good properties in more classical settings. We fit this sparse DGM and SSL prior with amortized variational inference (Gershman and Goodman, 2014; Kingma and Welling, 2014). We call this procedure the sparse VAE.

Figure 1: In a DGM, a feature xij depends on all factors zik. A sparse DGM is displayed where features xi1, xi2 depend only on zi1; xi3 depends on zi1 and zi2; and xi4, xi5 depend only on zi2. The features are passed through the same neural network fθ.

Second, we study identifiability in sparse DGMs.
A model is identifiable if each setting of its parameters produces a unique distribution of the observed data. With an unidentifiable DGM, even with infinite data from the model, we cannot distinguish the true factors. Identifiable factor models are important to tasks such as domain adaptation (Locatello et al., 2020), transfer learning (Dittadi et al., 2021), and fair classification (Locatello et al., 2019a). Most DGMs are not identifiable (Locatello et al., 2019b).

Specifically, we prove that a sparse DGM is identifiable if each latent factor has at least two "anchor features." An anchor feature is a dimension of the data which depends on only one latent factor. For example, in text data, an anchor feature is a word that is applicable to only one underlying topic (Arora et al., 2013). Note that the anchor features do not need to be known in advance for our results to hold; we only need to assume they exist. Further, we do not need to assume that the true factors are drawn independently; in many applications, such an independence assumption is unrealistic (Träuble et al., 2021).

The sparse VAE and the anchor assumption are related in that the SSL prior will likely yield the kind of sparse mapping that satisfies the anchor assumption. What this means is that if the data comes from an identifiable DGM, i.e., one that satisfies the anchor assumption, then the SSL prior will encourage the sparse VAE to more quickly find a good solution. In this sense, this paper is in the spirit of research in Bayesian statistics, which establishes theoretical conditions on identifiability and then produces priors that favor identifiable distributions.

Empirically, we compare the sparse VAE to existing algorithms for fitting DGMs: the VAE (Kingma and Welling, 2014), β-VAE (Higgins et al., 2017), Variational Sparse Coding (VSC, Tonolini et al., 2020), and OI-VAE (Ainsworth et al., 2018). We find that (i) on synthetic data, the sparse VAE recovers ground truth factors, including when the true factors are correlated; (ii) on text and movie-ratings data, the sparse VAE achieves better heldout predictive performance; (iii) on semi-synthetic data, the sparse VAE has better heldout predictive performance on test data that is distributed differently to training data; (iv) the sparse VAE finds interpretable structure in text, movie-ratings and genomics data.

In summary, the contributions of our paper are as follows:

- We develop a sparse DGM which has Spike-and-Slab Lasso priors (Ročková and George, 2018) and develop an algorithm for fitting this model, the sparse VAE.
- We prove that sparse DGMs are identified under an anchor feature assumption.
- We study the sparse VAE empirically. It outperforms existing methods on synthetic, semi-synthetic, and real datasets of text, ratings, and genomics.

**Related Work.** The sparse DGM provides a contribution to sparse methods in linear and nonlinear representation learning. The sparse DGM has a sparsity-inducing prior on the factor-to-feature mapping (i.e., the decoder). Similar priors have also been applied in linear factor analysis to induce sparsity in the factor-to-feature mapping (i.e., the loadings matrix); see, for example, Bernardo et al. (2003); Bhattacharya and Dunson (2011); Carvalho et al. (2008); Knowles and Ghahramani (2011); Ročková and George (2016). Beyond linearity, Barello et al. (2018) considers a hybrid linear/nonlinear VAE which uses a sparse linear decoder and a neural network encoder. In nonlinear representation learning, Tonolini et al.
(2020) imposes sparsity-inducing priors directly on the latent factors. Instead, the sparse DGM of this paper involves a sparsity-inducing prior on the factor-to-feature mapping. Rhodes and Lee (2021) indirectly induces sparsity in the factor-to-feature mapping of their DGM via L1 regularization of the mapping's Jacobian.

The sparse VAE closely relates to the OI-VAE (Ainsworth et al., 2018), a method for modeling grouped features where the factor-to-group mapping is sparse. However, there are several differences. First, the OI-VAE uses a Group Lasso prior while the sparse VAE has a Spike-and-Slab Lasso prior (SSL, Ročková and George, 2018). The SSL prior mitigates common issues with the Lasso prior, including under-regularization of small coefficients and over-regularization of large coefficients (Ghosh et al., 2016), and a sub-optimal posterior contraction rate (Castillo et al., 2015). Second, we study the identifiability of sparse DGMs, a property that Ainsworth et al. (2018) does not investigate.

This paper also contributes to the literature that uses the anchor feature assumption for identifiability. In linear models, the anchor feature assumption has led to identifiability results for topic models (Arora et al., 2013), non-negative matrix factorization (Donoho and Stodden, 2003), and linear factor analysis (Bing et al., 2020). The key idea is that the anchor assumption removes the rotational invariance of the latent factors. This idea also arises in identifiable factor analysis (Rohe and Zeng, 2020) and independent component analysis, where rotational invariance is removed by assuming that the components are non-Gaussian (Eriksson and Koivunen, 2004).

Finally, this paper is related to methods in identifiable nonlinear representation learning. To achieve identifiability, most methods require weak supervision: Khemakhem et al. (2020); Mita et al. (2021); Zhou and Wei (2020) require auxiliary data; Locatello et al. (2020) requires paired samples; Kügelgen et al. (2021) relies on data augmentation; and Hälvä et al. (2021); Ahuja et al. (2022); Lachapelle et al. (2022) leverage known temporal or spatial dependencies among the samples. In contrast, the sparse DGM does not need auxiliary information or weak supervision for identifiability; instead, the anchor feature assumption is sufficient. Relatedly, Horan et al. (2021) achieves identifiable representations under a "local isometry" assumption, meaning a small change in a factor corresponds to a small change in the features. In contrast to this paper, however, Horan et al. (2021) requires the factors to be independent; we do not need such an independence assumption.

## 2 The Sparse Variational Autoencoder

The observed data is a vector of G features $\mathbf{x}_i \in \mathbb{R}^G$ for i ∈ {1, …, N} data points. The goal is to estimate a low-dimensional set of factors $\mathbf{z}_i \in \mathbb{R}^K$ where K ≪ G. In a standard DGM, the vector $\mathbf{z}_i$ is a latent variable that is fed through a neural network to reconstruct the distribution of $\mathbf{x}_i$. The sparse DGM introduces an additional parameter $\mathbf{w}_j \in \mathbb{R}^K$, j ∈ {1, …, G}, a sparse per-feature vector that selects which of the K latent factors are used to produce the jth feature of $\mathbf{x}_i$. Further, the sparse DGM models the prior covariance between factors with the positive definite matrix $\mathbf{\Sigma}_z \in \mathbb{R}^{K \times K}$.
The sparse DGM is:

$$\begin{aligned}w_{jk}&\sim\text{Spike-and-Slab Lasso}(\lambda_{0},\lambda_{1},a,b),\quad k=1,\ldots,K\\ \mathbf{z}_{i}&\sim\mathcal{N}_{K}(0,\mathbf{\Sigma}_{z}),\quad i=1,\ldots,N\\ x_{ij}&\sim\mathcal{N}((f_{\theta}(\mathbf{w}_{j}\odot\mathbf{z}_{i}))_{j},\sigma_{j}^{2}),\quad j=1,\ldots,G,\end{aligned}\tag{1}$$

where $f_\theta : \mathbb{R}^K \to \mathbb{R}^G$ is a neural network with weights θ, σ²_j is the per-feature noise variance and ⊙ denotes element-wise multiplication. The Spike-and-Slab Lasso (Ročková and George, 2018) is a sparsity-inducing prior which we will describe below. (Any sparsity-inducing prior on wjk would be appropriate for a sparse DGM; we choose SSL because it enjoys theoretical guarantees and fast computation in more classical settings.)

**Algorithm 1:** The sparse VAE.
**input:** data X; hyperparameters λ0, λ1, a, b, Σz.
**output:** factor distributions qϕ(z|x), selector matrix W, parameters θ.
**repeat until convergence:**
1. E-step: compute $\mathbb{E}\left[\gamma_{jk}|w_{jk},\eta_{k}\right]=\left[1+\frac{1-\eta_{k}}{\eta_{k}}\frac{\psi_{0}(w_{jk})}{\psi_{1}(w_{jk})}\right]^{-1}$;
2. M-step: update θ, ϕ, W with SGD according to Eq. 8; update $\eta_{k}=\frac{\sum_{j=1}^{G}\mathbb{E}\left[\gamma_{jk}|w_{jk},\eta_{k}\right]+a-1}{a+b+G-2}$.

In the data generating distribution in Eq. 1, the parameter wjk controls whether the distribution of xij depends on factor k. If wjk ≠ 0, then zik contributes to xij, while if wjk = 0, zik cannot contribute to xij. If wj is sparse, then xij depends on only a small set of factors. Such a sparse factor-to-feature relationship is shown in Figure 1. Note that xij depends on the same set of factors for every sample i. This dependency only makes sense when each xij has a consistent meaning across samples. For example, in genomics data xij corresponds to the same gene for all i samples. (This dependency is not reasonable for image data, where pixel j has no consistent meaning across samples.)

**Spike-and-Slab Lasso.** The parameter wjk has a SSL prior (Ročková and George, 2018), which is defined as a hierarchical model,

$$\eta_{k}\sim\text{Beta}(a,b),\tag{2}$$
$$\gamma_{jk}\sim\text{Bernoulli}(\eta_{k}),\tag{3}$$
$$w_{jk}\sim\gamma_{jk}\psi_{1}(w_{jk})+(1-\gamma_{jk})\psi_{0}(w_{jk}).\tag{4}$$

In this prior, $\psi_{s}(w)=\frac{\lambda_{s}}{2}\exp(-\lambda_{s}|w|)$ is the Laplace density and λ0 ≫ λ1. The variable wjk is drawn a priori from either a Laplacian "spike" parameterized by λ0, and is consequently negligible, or a Laplacian "slab" parameterized by λ1, and can be large. Further, the variable γjk is a binary indicator variable that determines whether wjk is negligible. The Beta-Bernoulli prior on γjk allows for uncertainty in determining which factors contribute to each feature. Finally, the parameter ηk ∈ [0, 1] controls the proportion of features that depend on factor k. By allowing ηk to vary, the sparse DGM allows each factor to contribute to different numbers of features. In movie-ratings data, for example, a factor corresponding to the action/adventure genre may be associated with more movies than a more esoteric genre. Notice the prior on ηk helps to "zero out" extraneous factor dimensions and consequently estimate the number of factors K. If the hyperparameters are set to a ∝ 1/K and b = 1, the Beta-Bernoulli prior corresponds to the finite Indian Buffet Process prior (Griffiths and Ghahramani, 2005).
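To make Algorithm 1's closed-form updates concrete, here is a minimal NumPy sketch of the E-step and the η update; the function names (`laplace_pdf`, `e_step`, `update_eta`) and the toy dimensions are illustrative, not taken from the released implementation.

```python
import numpy as np

def laplace_pdf(w, lam):
    """Laplace density psi_s(w) = (lam / 2) * exp(-lam * |w|)."""
    return 0.5 * lam * np.exp(-lam * np.abs(w))

def e_step(W, eta, lam0, lam1):
    """Posterior inclusion probabilities E[gamma_jk | w_jk, eta_k].

    W   : (G, K) selector matrix.
    eta : (K,) per-factor inclusion proportions.
    """
    spike = laplace_pdf(W, lam0)                  # psi_0(w_jk)
    slab = laplace_pdf(W, lam1)                   # psi_1(w_jk)
    odds = ((1.0 - eta) / eta) * (spike / slab)   # broadcasts over rows of W
    return 1.0 / (1.0 + odds)

def update_eta(gamma_expect, a, b):
    """M-step update for eta_k given the expected indicators."""
    G = gamma_expect.shape[0]
    return (gamma_expect.sum(axis=0) + a - 1.0) / (a + b + G - 2.0)

# Toy usage: G = 5 features, K = 2 factors, lam0 >> lam1.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(5, 2))
eta = np.full(2, 0.5)
gamma_expect = e_step(W, eta, lam0=10.0, lam1=1.0)
eta = update_eta(gamma_expect, a=1.0 / 2, b=1.0)
```

Because λ0 ≫ λ1, entries of W with large magnitude receive inclusion probabilities near one, while near-zero entries are attributed to the spike.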
## 2.1 Inference

In inference, we are given a dataset x1:N and want to calculate the posterior p(θ, W, Γ, η, z1:N | x1:N). We will use approximate maximum *a posteriori* (MAP) estimation for W, θ and η, and amortized variational inference for z1:N. The full procedure for approximating this posterior is called a sparse VAE.

Figure 2: (a-b) The sparse VAE estimates factors which recover the true generative process; the VAE does not. The observed data is plotted against the estimated factors. The true factor-feature relationship is the red line; the best fit coefficients for the estimated factors are in the grey boxes. (c) The true W matrix. (d) The sparse VAE estimate of W. (The VAE has no W matrix.)

The exact MAP objective is

$$\sum_{i=1}^{N}\log p_{\theta}(\mathbf{x}_{i},\mathbf{W},\boldsymbol{\eta})=\sum_{i=1}^{N}\log\left[\int p_{\theta}(\mathbf{x}_{i}|\mathbf{z}_{i},\mathbf{W})p(\mathbf{z}_{i})d\mathbf{z}_{i}\right]+\log\int p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})d\boldsymbol{\Gamma},\tag{5}$$

where $\Gamma=\{\gamma_{jk}\}_{j,k=1}^{G,K}$ denotes the binary indicators. We lower bound Eq. 5 with variational approximations of the posterior of z1:N and the exact posterior of the latent Γ, which comes from the SSL prior. We approximate the posterior of the factors pθ(zi|xi) by the variational family:

$$q_{\phi}(\mathbf{z}_{i}|\mathbf{x}_{i})=\mathcal{N}_{K}(\mu_{\phi}(\mathbf{x}_{i}),\sigma_{\phi}^{2}(\mathbf{x}_{i})),\tag{6}$$

where µϕ(·) and σ²ϕ(·) are neural networks with weights ϕ, as in a standard VAE (Kingma and Welling, 2014; Rezende et al., 2014). These approximations yield our final objective function:

$$\mathcal{L}(\theta,\phi,\mathbf{W},\boldsymbol{\eta})=\sum_{i=1}^{N}\left\{\mathbb{E}_{q_{\phi}(\mathbf{z}_{i}|\mathbf{x}_{i})}[\log p_{\theta}(\mathbf{x}_{i}|\mathbf{z}_{i},\mathbf{W})]-D_{KL}(q_{\phi}(\mathbf{z}_{i}|\mathbf{x}_{i})\,\|\,p(\mathbf{z}_{i}))\right\}\tag{7}$$
$$+\,\mathbb{E}_{\boldsymbol{\Gamma}|\mathbf{W},\boldsymbol{\eta}}\left[\log[p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})]\right].\tag{8}$$

To optimize L(θ, ϕ, W, η), we alternate between an expectation step and a maximization step. In the E-step, we calculate the complete conditional posterior of Γ. In the M-step, we take gradient steps in the model parameters and variational parameters with the help of reparameterization gradients (Kingma and Welling, 2014; Rezende et al., 2014). The sparse VAE procedure is in Algorithm 1.

The sparse VAE has a higher computational burden than a VAE. A VAE typically requires one forward pass through the neural network at each gradient step. In contrast, the sparse VAE requires G forward passes. This additional computation is because for each of the G features, the sparse VAE uses a different decoder input wj ⊙ zi, for j = 1, …, G. (The VAE and sparse VAE optimize the same number of neural network parameters, however.)

## 2.2 Synthetic Example

For intuition, consider a synthetic dataset. We generate N = 1000 samples from a factor model, each with G = 7 features and K = 2 latent factors. We take the true factors to be correlated and generate them as zi ∼ N(0, C), where the true factor covariance C has diagonal entries Ckk = 1 and off-diagonal entries Ckk′ = 0.6 for k ≠ k′. Given the factors, the data is generated as:

$$\mathbf{x}_{i}\sim\mathcal{N}_{G}(f(\mathbf{z}_{i}),\sigma^{2}\mathbf{I}_{G})\quad\text{with}\ \sigma^{2}=0.5.\tag{9}$$

The factor-to-feature mapping $f : \mathbb{R}^K \to \mathbb{R}^G$ is sparse:

$$f(\mathbf{z}_{i})=(z_{i1},\,2z_{i1},\,3z_{i1}^{2},\,4z_{i2},\,5z_{i2},\,6\sin(z_{i2}),\,7z_{i1}\cdot z_{i2}).$$

This mapping corresponds to the W matrix in Figure 2c.
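A minimal NumPy sketch of this data-generating process, for concreteness (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, G, K = 1000, 7, 2

# Correlated factors: z_i ~ N(0, C) with C_kk = 1 and C_kk' = 0.6.
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
z = rng.multivariate_normal(np.zeros(K), C, size=N)
z1, z2 = z[:, 0], z[:, 1]

# Sparse factor-to-feature mapping f: R^2 -> R^7.
f = np.column_stack([
    z1, 2 * z1, 3 * z1**2,           # features 1-3 load only on factor 1
    4 * z2, 5 * z2, 6 * np.sin(z2),  # features 4-6 load only on factor 2
    7 * z1 * z2,                     # feature 7 loads on both factors
])

# Observations: x_i ~ N(f(z_i), sigma^2 I_G) with sigma^2 = 0.5 (Eq. 9).
x = f + rng.normal(scale=np.sqrt(0.5), size=(N, G))
```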
We compare the sparse VAE with a VAE. For both, we initialize with an overestimate of the true number of factors (K0 = 5). The sparse VAE finds factors that reflect the true generative model (Figure 2a). Moreover, the sparse VAE correctly finds the true number of factors by setting three columns of W to zero (Figure 2d). In contrast, the VAE recovers the second factor fairly well, but doesn't recover the first factor (Figure 2b). We analyze this synthetic data further in § 4, as well as text, movie-ratings and genomics data.

## 3 Identifiability Of Sparse DGMs

For an identifiable model, given infinite data, it is possible to learn the true parameters. In both linear and nonlinear factor models, the factors are usually unidentifiable without additional constraints (Yalcin and Amemiya, 2001; Locatello et al., 2019b). This lack of identifiability means that the true factors zi can have the same likelihood as another solution z̃i. Even with infinite data, we cannot distinguish between these representations based on the likelihood alone. Further inductive biases or model constraints are necessary to narrow down the set of solutions. In this section, we demonstrate that sparsity is a useful inductive bias for model identifiability. Consider the sparse deep generative model:

$$\begin{aligned}\mathbf{z}_{i}&\sim\mathcal{N}_{K}(0,\mathbf{C}),\quad i=1,\ldots,N\\ x_{ij}&\sim\mathcal{N}((f_{\theta}(\mathbf{w}_{j}\odot\mathbf{z}_{i}))_{j},\sigma_{j}^{2}),\quad j=1,\ldots,G,\end{aligned}\tag{10}$$

where C is the true covariance matrix of the factors. Informally, if the masking variable W is sparse in a particular way, then we can identify the latent factors, up to coordinate-wise transformation. (Note: we only make an assumption about the sparse support of W, not its distribution; the SSL prior in § 2 is a tool to explore the space of sparse W in the process of fitting the model.) Specifically, we prove that for any solutions z and z̃ with equal likelihood, we have:

$$z_{ik}=g_{k}(\tilde{z}_{ik}),\quad i=1,\ldots,N,\tag{11}$$

for all factors k = 1, …, K (up to permutation of k), where gk : ℝ → ℝ are invertible and differentiable functions. This definition of identifiability is weaker than the canonical definition, for which we would need the two solutions zi, z̃i to be exactly equal. We use this weaker notion of identifiability because our goal is to isolate the dimensions of zi which drive the observed response, and not necessarily find their exact value; i.e., we want to avoid mixing the dimensions of zi. For example, we want to be able to distinguish between solutions z and z̃ = Pz, for arbitrary rotation matrices P.

**Starting point.** This paper builds on a body of work in identifiable linear unsupervised models (Arora et al., 2013; Bing et al., 2020; Donoho and Stodden, 2003). These works assume the existence of anchor features, features which depend on only one factor. The existence of anchor features helps with identifiability because they leave a detectable imprint in the covariance of the observed data. We extend these results to the nonlinear setting.
The key insight is that if two anchor features have the same nonlinear dependence on a factor, then we can apply results from the linear setting. We first formally state the anchor feature assumption.

**A1.** (Anchor feature assumption) For every factor z·k, there are at least two features x·j, x·j′ which depend only on that factor. Moreover, the two features have the same mapping fj from the factor z·k; that is, for all i = 1, …, N:

$$\mathbb{E}[x_{ij}|\mathbf{z}_{i}]=f_{j}(z_{ik}),\qquad\mathbb{E}[x_{ij^{\prime}}|\mathbf{z}_{i}]=f_{j}(z_{ik}).\tag{12}$$

We refer to such features as "anchor features." The connection between the anchor feature assumption and the masking variable W is the following: if feature j is an anchor for factor k, then wjk ≠ 0 and wjk′ = 0 for all k′ ≠ k.

**Roadmap.** We prove the identifiability of the sparse DGM in two parts. First, we prove that the latent factors are identifiable up to coordinate-wise transformation when the anchor features are known (Theorem 1). Second, we prove that the anchor features can be detected from the observed covariance matrix (Theorem 2). Together, Theorem 1 and Theorem 2 give the identifiability of the sparse DGM.

**Known anchor features.** The first result, Theorem 1, proves that the sparse DGM factors are identifiable if we are given the anchor features. If feature j is an anchor feature for factor k, we set wjk = 1 and wjk′ = 0 for all k′ ≠ k.

**Theorem 1.** Suppose we have infinite data drawn from the model in Eq. 10 and A1 holds. Assume we are given the rows of W corresponding to the anchor features. Suppose we have two solutions with equal likelihood, {θ̃, z̃} and {θ̂, ẑ}, with

$$p_{\tilde{\theta}}(\mathbf{x}|\tilde{\mathbf{z}},\mathbf{W})=p_{\hat{\theta}}(\mathbf{x}|\hat{\mathbf{z}},\mathbf{W}).\tag{13}$$

Then, the factors z̃ and ẑ are equal up to coordinate-wise transformations: for some g1, …, gK : ℝ → ℝ and i = 1, …, N,

$$(\tilde{z}_{i1},\ldots,\tilde{z}_{iK})=(g_{1}(\hat{z}_{i1}),\ldots,g_{K}(\hat{z}_{iK})).\tag{14}$$

The proof of Theorem 1 is in Appendix A.1. The proof idea is that if the anchor features are known, then we are given noisy transformations of the coordinates of z. Knowing these noisy transforms allows us to identify z up to coordinate-wise transform.

**Unknown anchor features.** The second result proves that the anchor features can be determined from an approximation of the covariance matrix of the data X. This result requires additional assumptions. The next assumption concerns the weights of the neural network fθ and needs some additional notation. After applying the chain rule to the first layer, we rewrite the derivative of the mapping fθ as follows:

$$\frac{\partial(f_{\theta}(\mathbf{w}_{j}\odot\mathbf{z}_{i}))_{j}}{\partial z_{ik}}=\sum_{d=1}^{D_{1}}\frac{\partial m_{\theta}(\mathbf{u}_{ij\cdot})_{j}}{\partial u_{ijd}}H_{dk}^{(1)}w_{jk},\tag{15}$$

where $\{H_{dk}^{(1)}\}_{d,k=1}^{D_{1},K}$ are the weights of the first neural network layer and D1 is the dimension of the first layer, $u_{ijd}=\sum_{k=1}^{K}H_{dk}^{(1)}w_{jk}z_{ik}$ is the first layer before the activation function, and $m_\theta : \mathbb{R}^{D_1} \to \mathbb{R}^G$ is the rest of the neural network, which takes as input the first layer $\mathbf{u}_{ij\cdot}=(u_{ijd})_{d=1}^{D_{1}}$. Let $B_{ijk}=\sum_{d=1}^{D_{1}}\frac{\partial m_{\theta}(\mathbf{u}_{ij\cdot})_{j}}{\partial u_{ijd}}H_{dk}^{(1)}$.

**A2.** Suppose j is an anchor feature for factor k.
For another feature l, we assume that

$$|w_{jk}|\sum_{i=1}^{N}|B_{ijk}|^{2}\geq\sum_{i=1}^{N}\sum_{k^{\prime}=1}^{K}|w_{lk^{\prime}}||B_{ijk}||B_{ilk^{\prime}}|,\tag{16}$$

with equality when l is also an anchor feature for k and inequality otherwise.

This assumption ensures that the covariance between two anchor features is larger than the covariance between an anchor feature and a non-anchor feature. This consequence then allows us to pinpoint the anchor features in the covariance matrix of the data. In Appendix A.3, we show this assumption holds for a neural network with ReLU activations and independent weights.

Finally, we also require an assumption regarding the covariance of the factors; this is the same as assumption (iii) of Bing et al. (2020), which studies linear models.

**A3.** Denote the true covariance matrix of the factors as Cov(zi) = C. We assume min{Ckk, Ck′k′} > |Ckk′| for all k, k′ = 1, …, K.

Assumption A3 requires that the variance of each factor be greater than its covariance with any other factor. To gain some intuition, consider a dataset of news articles where the latent factors are topics, sports and politics. Assumption A3 would be violated if every article about sports also discussed politics (and vice versa). In this case, the anchor word for sports will be equally as correlated with an anchor word for politics as it is with another anchor word for sports. That is, there is no discernible difference in the data that helps us distinguish the sports and politics factors.

**Theorem 2.** Assume the model in Eq. 10 with A1–A3 holds. Then, the set of anchor features can be determined uniquely from $\frac{1}{N}\sum_{i=1}^{N}\mathrm{Cov}(\mathbf{x}_{i})$ as N → ∞ (given additional regularity conditions detailed in Appendix A.2).

The proof of Theorem 2 is in Appendix A.2. Proof idea: we adapt the proof technique of Bing et al. (2020). The proof idea of that paper is that for any two features x·j and x·j′ that are anchors for the same factor, their covariance Cov(x·j, x·j′) will be greater than their covariance with any other non-anchor feature (under some conditions on the loadings matrix, analogous to A2). Then, the anchor features can be pinpointed from the observed covariance matrix. In the nonlinear case, we can apply a similar strategy: even if the mapping is nonlinear, if the mapping is the same for the anchor features, we can pinpoint the two anchors in the covariance matrix.

**Connection to the sparse VAE algorithm.** Theorem 2 proves that the existence of anchor features implies a particular structure in the covariance of the data. Consequently, an appropriate estimation procedure can then pinpoint the anchor features. In the sparse VAE, we do not directly consider the covariance matrix, as it would involve laborious hyperparameter tuning to determine which covariance entries are "close enough" to be anchor features. Instead, we estimate W with an SSL prior, motivated by the sparsity in the decoder exhibited by anchor features. That is, it is the sparse decoding for the anchor features that is needed for Theorem 2, not the SSL prior; the SSL prior is a modeling choice that will likely yield the kind of sparse mapping that satisfies the anchor assumption. The SSL prior is not the only sparsity-inducing prior that we could have used; however, we found it to work well empirically. We compare the SSL prior to the horseshoe prior (Carvalho et al., 2009) in Appendix C.4.1.
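For intuition only, here is a simplified NumPy sketch of covariance-based anchor detection in the spirit of Bing et al. (2020): a pair of features is flagged as candidate anchors for a shared factor when their absolute covariance (approximately) attains both features' maximal off-diagonal covariance. This illustrates the covariance structure that Theorem 2 exploits; it is not the estimation procedure used by the sparse VAE, and the tolerance `delta` and function name are illustrative.

```python
import numpy as np

def candidate_anchor_pairs(X, delta=0.05):
    """Flag feature pairs (j, l) whose absolute covariance is within
    `delta` of both features' maximal off-diagonal covariance.

    X : (N, G) data matrix.  Returns a list of candidate anchor pairs.
    """
    S = np.abs(np.cov(X, rowvar=False))  # (G, G) absolute covariances
    np.fill_diagonal(S, 0.0)             # ignore the variances
    row_max = S.max(axis=1)              # M_j = max_{l != j} |S_jl|
    pairs = []
    G = S.shape[0]
    for j in range(G):
        for l in range(j + 1, G):
            if S[j, l] >= row_max[j] - delta and S[j, l] >= row_max[l] - delta:
                pairs.append((j, l))
    return pairs
```

The need to tune a tolerance like `delta` is exactly the hyperparameter burden that motivates estimating W with the SSL prior instead.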
**Consistency.** An important implication of identifiability is consistency: if we learn the optimal parameters of the sparse VAE, then we will recover the true factors in the limit of infinite data. Specifically, if (i) the variational family qϕ(z|x) contains the true posterior pθ(z|x) and (ii) the ELBO is maximized with respect to the parameters θ, ϕ and W, then in the limit of infinite data, the sparse VAE will learn the true factors up to coordinate-wise transform and permutation.

## 4 Experiments

We empirically study the sparse VAE using a mix of synthetic, semi-synthetic and real data.¹ We consider the following questions: 1) How well does the sparse VAE recover ground truth factors when (i) the factors are uncorrelated and (ii) the factors are correlated? 2) How does the heldout predictive performance of the sparse VAE compare to related methods? 3) How does the sparse VAE perform compared to related methods when the correlation between factors is different in the training and test data? 4) Does the sparse VAE find meaningful structure in data?

We compare the sparse VAE to non-negative matrix factorization (NMF) and algorithms for DGMs: the VAE (Kingma and Welling, 2014); β-VAE (Higgins et al., 2017); VSC (Tonolini et al., 2020); and OI-VAE (Ainsworth et al., 2018). None of the underlying DGMs have identifiability guarantees. We find that: 1) on synthetic data, the sparse VAE achieves the best recovery of ground truth factors both when the factors are correlated and uncorrelated; 2) the sparse VAE achieves the best predictive performance on both synthetic and real datasets; 3) the sparse VAE achieves the best predictive performance on heldout data that is generated from factors with a different correlation to those in the training data; and 4) the sparse VAE finds interpretable factors in both text and movie-ratings data. On a single-cell genomics dataset, the sparse VAE learns factors which predict cell type with high accuracy.

Figure 3: Synthetic data. (a) Heldout mean squared error (lower is better): the sparse VAE has better heldout predictive performance than the VAE over a range of factor correlation levels. (b) Ground truth factor recovery (higher is better): the sparse VAE recovers the true factors better than the VAE. (β-VAE performed similarly to the VAE.) Scores are shown for 25 datasets per correlation setting.

**Datasets.** We consider the following datasets; further details about the data are given in Appendix C.1.

- **Synthetic data.** We consider again the synthetic dataset generated as in Eq. 9. We vary the correlation between the true factors over ρ = 0, 0.2, 0.4, 0.6, 0.8.
- **PeerRead** (Kang et al., 2018). Dataset of word counts for paper abstracts (N ≈ 10,000, G = 500).
- **MovieLens** (Harper and Konstan, 2015). Dataset of binary user-movie ratings (N = 100,000, G = 300).
- **Semi-synthetic PeerRead.** A semi-synthetic version of the PeerRead dataset in which the correlations between data-generating factors differ across the test and training data, inducing different training and test distributions.
- **Zeisel** (Zeisel et al., 2015). Dataset of RNA molecule counts in mouse cortex cells (N = 3005, G = 558).

**Implementation details.** Empirically, we found that setting the prior covariance of the factors to Σz = I_K worked well, even when the true generative factors were correlated (Figure 3a). Further implementation details for all methods are in Appendix C.

¹The sparse VAE implementation may be found at https://github.com/gemoran/sparse-vae-code.
**Recovering the ground truth factors.** How well does the sparse VAE recover the factors when the ground truth factors are known? We assess this question with simulated data. We use the DCI disentanglement score to measure the fidelity between estimated factors and true factors (Eastwood and Williams, 2018). The DCI score is an average measure of how relevant each estimated factor is for the true factors. This score also penalizes estimated factors that are equally informative for multiple true factors.

We create synthetic datasets with the model given in Eq. 9 and evaluate the DCI metric between estimated and true factors as the true factors are increasingly correlated. As the correlation increases, we expect a standard VAE to conflate the factors while the sparse VAE recovers the two true underlying factors. We see this phenomenon in Figure 3b; the sparse VAE has higher DCI scores than the VAE in all settings. The sparse VAE is robust to factor correlations up to ρ = 0.4, with decreasing performance as ρ is further increased. Here, VSC performs worse than both the sparse VAE and VAE (the MSE scores for VSC were too large for visualization in Figure 3a). The poor performance of VSC is likely due to the true generative factors in Eq. 9 not being sparse; only the mapping between the factors and features is sparse. Note that we do not compare with NMF in this example; as NMF is a linear method, it cannot reconstruct the nonlinear data generation process using only two latent dimensions.

**Heldout reconstruction.** We compare the negative log-likelihood on heldout data achieved by the sparse VAE and related methods. All results are averaged over five splits of the data, with standard deviation in parentheses. Figure 3a shows that the sparse VAE performs better than the compared methods on synthetic data.

Table 1: On movie-ratings and text data, the sparse VAE achieves the lowest heldout negative log-likelihood (NLL). For the β-VAE, we show the result with the best performing β.

(a) MovieLens

| Method | NLL | Recall@5 | NDCG@10 |
|-----------------|-------------|--------------|--------------|
| Sparse VAE | 170.9 (2.1) | 0.98 (0.002) | 0.98 (0.003) |
| VAE | 175.9 (2.4) | 0.97 (0.001) | 0.96 (0.001) |
| β-VAE (β = 2) | 178.2 (2.4) | 0.95 (0.002) | 0.93 (0.002) |
| OI-VAE | 212.1 (2.6) | 0.51 (0.004) | 0.50 (0.004) |
| VSC | 192.2 (2.3) | 0.79 (0.008) | 0.77 (0.009) |
| NMF | 198.6 (2.5) | 0.90 (0.004) | 0.85 (0.004) |

(b) PeerRead

| Method | NLL |
|-----------------|-------------|
| Sparse VAE | 245.0 (2.0) |
| VAE | 252.6 (1.4) |
| β-VAE (β = 2) | 254.5 (3.0) |
| OI-VAE | 260.6 (1.2) |
| VSC | 252.9 (2.0) |
| NMF | 267.2 (1.0) |

Table 2: On the semi-synthetic PeerRead data, the sparse VAE achieves the lowest heldout negative log-likelihood. We create three settings where the difference between training and test data distributions ranges from high (hardest) to low (easiest).

| Method | High | Medium | Low |
|---------------|-------------|-------------|------------|
| Sparse VAE | 52.4 (0.4) | 49.2 (0.3) | 48.6 (0.1) |
| VAE | 54.6 (0.5) | 52.3 (0.2) | 50.8 (0.2) |
| β-VAE (β = 2) | 54.7 (0.3) | 52.1 (0.2) | 51 (0.4) |
| OI-VAE | 65.7 (0.3) | 64.4 (0.3) | 64.6 (0.2) |
| VSC | 58.7 (0.6) | 56.1 (0.3) | 55.4 (0.2) |
| NMF | 60.1 (0.02) | 58.0 (0.08) | 57.3 (0.6) |
Heldout reconstruction. We compare the negative log-likelihood on heldout data achieved by the sparse VAE and related methods. All results are averaged over five splits of the data, with standard deviation in parentheses. Figure 3a shows that the sparse VAE performs better than the compared methods on synthetic data.

On the MovieLens and PeerRead datasets, Tables 1a and 1b show that the sparse VAE achieves the lowest heldout negative log-likelihood among the compared methods. For the MovieLens dataset, Table 1a additionally shows that the sparse VAE has the highest heldout Recall@5 and normalized discounted cumulative gain (NDCG@10), which compare the predicted rank of heldout items to their true rank (Liang et al., 2018).

(a) MovieLens

| Method | NLL | Recall@5 | NDCG@10 |
|---|---|---|---|
| Sparse VAE | 170.9 (2.1) | 0.98 (0.002) | 0.98 (0.003) |
| VAE | 175.9 (2.4) | 0.97 (0.001) | 0.96 (0.001) |
| β-VAE (β = 2) | 178.2 (2.4) | 0.95 (0.002) | 0.93 (0.002) |
| OI-VAE | 212.1 (2.6) | 0.51 (0.004) | 0.50 (0.004) |
| VSC | 192.2 (2.3) | 0.79 (0.008) | 0.77 (0.009) |
| NMF | 198.6 (2.5) | 0.90 (0.004) | 0.85 (0.004) |

(b) PeerRead

| Method | NLL |
|---|---|
| Sparse VAE | 245.0 (2.0) |
| VAE | 252.6 (1.4) |
| β-VAE (β = 2) | 254.5 (3.0) |
| OI-VAE | 260.6 (1.2) |
| VSC | 252.9 (2.0) |
| NMF | 267.2 (1.0) |

Table 1: On movie-ratings and text data, the sparse VAE achieves the lowest heldout negative log-likelihood (NLL). For the β-VAE, we show the result with the best performing β.

Different training and test distributions. How does the sparse VAE perform when the factors that generate data are correlated differently across training and test distributions? This particular type of shift in distribution affects many real world settings. For example, we may estimate document representations from scientific papers where articles about machine learning also often discuss genomics, and want to use the representations to analyze new collections of papers where articles about machine learning rarely involve genomics. We hypothesize that because the sparse VAE associates each latent factor with only a few features (e.g., words) even when the factors are correlated, it will reconstruct differently distributed test data better than the related methods. We assess this question using the semi-synthetic PeerRead dataset, where the train and test data were generated by factors with different correlations. Table 2 summarizes the results from three settings where the difference between training and test data distributions ranges from high (hardest) to low (easiest). We report the average negative log-likelihood across five semi-simulated datasets. The sparse VAE performs the best, highlighting its ability to estimate models which generalize better to test data where the factors have a different distribution.

Negative log-likelihood; columns give the difference between training and test distributions.

| Method | High | Medium | Low |
|---|---|---|---|
| Sparse VAE | 52.4 (0.4) | 49.2 (0.3) | 48.6 (0.1) |
| VAE | 54.6 (0.5) | 52.3 (0.2) | 50.8 (0.2) |
| β-VAE (β = 2) | 54.7 (0.3) | 52.1 (0.2) | 51 (0.4) |
| OI-VAE | 65.7 (0.3) | 64.4 (0.3) | 64.6 (0.2) |
| VSC | 58.7 (0.6) | 56.1 (0.3) | 55.4 (0.2) |
| NMF | 60.1 (0.02) | 58.0 (0.08) | 57.3 (0.6) |

Table 2: On the semi-synthetic PeerRead data, the sparse VAE achieves the lowest heldout negative log-likelihood. We create three settings where the difference between training and test data distributions ranges from high (hardest) to low (easiest).

Interpretability. We now examine whether the sparse VAE finds interpretable structure in the data. For each factor in the MovieLens dataset, we consider the four movies with the largest selector variable wbjk values (Table 3a). The sparse VAE finds clear patterns: for example, the top movies in the first factor are all science fiction movies; the second factor contains animated children's movies; the third factor contains three Star Wars movies and an Indiana Jones movie, all blockbuster science fiction movies from the 1980s.

Table 3: On movie-ratings and text data, the sparse VAE finds meaningful topics via the matrix W.

(a) MovieLens

| Topic | Movies |
|---|---|
| A | The Fifth Element; Alien; Gattaca; Aliens |
| B | A Bug's Life; Monsters, Inc.; Toy Story 3 & 2 |
| C | Star Wars V, IV & VI; Indiana Jones 1 |

(b) PeerRead

| Topic | Words |
|---|---|
| A | task; policy; planning; heuristic; decision |
| B | information; feature; complex; sparse; probability |
| C | given; network; method; bayesian; neural |

| Method | Sparse VAE | VAE (β = 1) | NMF | VSC | OI-VAE |
|---|---|---|---|---|---|
| Accuracy | 0.95 (0.003) | 0.94 (0.016) | 0.91 (0.007) | 0.89 (0.062) | 0.80 (0.019) |

Table 4: On the Zeisel data, the sparse VAE has the best heldout cell type prediction accuracy, followed closely by a VAE.

We perform the same analysis for the PeerRead dataset (Table 3b). The sparse VAE finds some themes of computer science papers: the first factor contains words related to reinforcement learning; the second factor contains words about information theory; the third factor involves words about neural networks.
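Stepping back to the ranking metrics in Table 1a: Recall@K and NDCG@K can be computed directly from a score matrix and the binary heldout matrix. A minimal sketch in the truncated form of Liang et al. (2018) follows; the array shapes and clipping conventions are our assumptions.

```python
import numpy as np

def recall_at_k(scores, heldout, k=5):
    """scores: (n_users, n_items) predicted scores; heldout: binary matrix of
    heldout positives. Returns mean Recall@k over users."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(heldout, topk, axis=1).sum(axis=1)
    denom = np.minimum(heldout.sum(axis=1), k)
    return float(np.mean(hits / np.maximum(denom, 1)))

def ndcg_at_k(scores, heldout, k=10):
    """Truncated NDCG@k with binary gains and 1/log2(rank + 1) discounts."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    gains = np.take_along_axis(heldout, topk, axis=1)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (gains * discounts).sum(axis=1)
    ideal = np.array([discounts[: int(min(n, k))].sum() for n in heldout.sum(axis=1)])
    return float(np.mean(dcg / np.maximum(ideal, 1e-12)))
```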
We note that NMF also finds interpretable topics in both MovieLens and PeerRead (Appendix C.5.1); however, NMF has worse heldout predictive performance than the sparse VAE (Tables 1a and 1b). The sparse VAE retains the interpretability of NMF while incorporating flexible function estimation.

Finally, we consider the single-cell genomics dataset of Zeisel et al. (2015). We examine how well the factors learned by each method predict the cell type in a multinomial regression (note the cell type is not used when learning the factors). The sparse VAE is the best performing method (Table 4). Although the VAE is competitive in this setting, we see that the sparse VAE does better than methods such as OI-VAE and VSC that were designed to produce interpretable results.

## 5 Discussion

We developed the sparse DGM, a model with SSL priors that induce sparsity in the mapping between factors and features. Under an anchor feature assumption, we proved that the sparse DGM has identifiable factors. To fit the sparse DGM, we developed the sparse VAE algorithm. On real and synthetic data, we showed that the sparse VAE performs well: (i) it has good heldout predictive performance; (ii) it generalizes to out-of-distribution data; (iii) it is interpretable.

There are a few limitations of this work. First, the sparse DGM is designed for tabular data, where each feature j has a consistent meaning across samples. Image data does not have this property, as a specific pixel has no consistent meaning across samples. Second, the sparse VAE algorithm is more computationally intensive than a standard VAE. This expense arises because each gradient step requires a forward pass through the network for each of the G features. Finally, we did not provide theoretical estimation guarantees for the sparse VAE algorithm. Similarly to other algorithms for fitting DGMs, estimation guarantees for the sparse VAE are difficult to obtain due to the nonconvexity of the objective function. The majority of works which consider identifiability in DGMs also do not provide estimation guarantees for their algorithms, including Locatello et al. (2020); Khemakhem et al. (2020). We leave such theoretical study of the sparse VAE algorithm to future work.

## Acknowledgments

Funding to support this research was provided by the Eric and Wendy Schmidt Center, the Canada-CIFAR AI Chairs program, National Science Foundation Grant NSF-CHE-2231174, NSF IIS 2127869, ONR N00014-17-1-2131, ONR N00014-15-1-2209, the Simons Foundation, the Sloan Foundation, and Open Philanthropy.

## References

Ahuja, K., Hartford, J., and Bengio, Y. (2022). Properties from mechanisms: an equivariance perspective on identifiable representation learning. In *International Conference on Learning Representations*.

Ainsworth, S. K., Foti, N. J., Lee, A. K., and Fox, E. B. (2018). oi-VAE: Output interpretable VAEs for nonlinear group factor analysis. In *International Conference on Machine Learning*.

Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. (2013). A practical algorithm for topic modeling with provable guarantees. In *International Conference on Machine Learning*.

Barello, G., Charles, A. S., and Pillow, J. W. (2018). Sparse-coding variational auto-encoders. *bioRxiv*.

Bengio, Y., Courville, A., and Vincent, P. (2013). Representation learning: A review and new perspectives. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 35(8):1798–1828.

Bernardo, J. M., Bayarri, M. J., Berger, J. O., Dawid, A.
P., Heckerman, D., Smith, A. F. M., and West, M. (2003). Bayesian factor regression models in the "large p, small n" paradigm. *Bayesian Statistics*, 7:733–742. Bhattacharya, A. and Dunson, D. B. (2011). Sparse Bayesian infinite factor models. *Biometrika*, pages 291–306. Bing, X., Bunea, F., Ning, Y., and Wegkamp, M. (2020). Adaptive estimation in structured factor models with applications to overlapping clustering. *Annals of Statistics*, 48(4):2055–2081. Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Bolstad, B. (2018). *preprocessCore: A collection of pre-processing functions*. R package version 1.44.0. Bolstad, B. M., Irizarry, R. A., Åstrand, M., and Speed, T. P. (2003). A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. *Bioinformatics*, 19(2):185–193. Carvalho, C. M., Chang, J., Lucas, J. E., Nevins, J. R., Wang, Q., and West, M. (2008). High-dimensional sparse factor modeling: applications in gene expression genomics. Journal of the American Statistical Association, 103(484):1438–1456. Carvalho, C. M., Polson, N. G., and Scott, J. G. (2009). Handling sparsity via the horseshoe. In *Artificial* Intelligence and Statistics. Castillo, I., Schmidt-Hieber, J., and Van der Vaart, A. (2015). Bayesian linear regression with sparse priors. The Annals of Statistics, 43(5):1986–2018. Dittadi, A., Träuble, F., Locatello, F., Wuthrich, M., Agrawal, V., Winther, O., Bauer, S., and Schölkopf, B. (2021). On the transfer of disentangled representations in realistic settings. In International Conference on Learning Representations. Donoho, D. L. and Stodden, V. (2003). When does non-negative matrix factorization give a correct decomposition into parts? In *Neural Information Processing Systems*. Eastwood, C. and Williams, C. K. (2018). A framework for the quantitative evaluation of disentangled representations. In *International Conference on Learning Representations*. Eriksson, J. and Koivunen, V. (2004). Identifiability, separability, and uniqueness of linear ICA models. IEEE Signal Processing Letters, 11(7):601–604. Gershman, S. and Goodman, N. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society. Ghosh, P., Tang, X., Ghosh, M., and Chakrabarti, A. (2016). Asymptotic properties of Bayes risk of a general class of shrinkage priors in multiple hypothesis testing under sparsity. *Bayesian Analysis*, 11(3):753–796. Ghosh, S., Yao, J., and Doshi-Velez, F. (2019). Model selection in Bayesian neural networks via horseshoe priors. *Journal of Machine Learning Research*, 20(182):1–46. Griffiths, T. L. and Ghahramani, Z. (2005). Infinite latent feature models and the Indian buffet process. In Neural Information Processing Systems. Hälvä, H., Corff, S. L., Lehéricy, L., So, J., Zhu, Y., Gassiat, E., and Hyvarinen, A. (2021). Disentangling identifiable features from noisy data with structured nonlinear ICA. In *Neural Information Processing* Systems. Harper, F. M. and Konstan, J. A. (2015). The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):1–19. Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). beta-vae: Learning basic visual concepts with a constrained variational framework. In *International* Conference on Learning Representations. Horan, D., Richardson, E., and Weiss, Y. 
(2021). When is unsupervised disentanglement possible? In *Neural* Information Processing Systems. Kang, D., Ammar, W., Dalvi, B., van Zuylen, M., Kohlmeier, S., Hovy, E., and Schwartz, R. (2018). A dataset of peer reviews (PeerRead): Collection, insights and NLP applications. *arXiv preprint arXiv:1804.09635*. Khemakhem, I., Kingma, D., Monti, R., and Hyvarinen, A. (2020). Variational autoencoders and nonlinear ICA: A unifying framework. In *Artificial Intelligence and Statistics*. Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. *International Conference* on Learning Representations. Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on Learning Representations. Knowles, D. and Ghahramani, Z. (2011). Nonparametric Bayesian sparse factor models with application to gene expression modeling. *Annals of Applied Statistics*, 5(2B):1534–1552. Kügelgen, J. V., Sharma, Y., Gresele, L., Brendel, W., Schölkopf, B., Besserve, M., and Locatello, F. (2021). Self-supervised learning with data augmentations provably isolates content from style. In *Neural* Information Processing Systems. Lachapelle, S., Rodriguez, P., Sharma, Y., Everett, K. E., Priol, R. L., Lacoste, A., and Lacoste-Julien, S. (2022). Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In Conference on Causal Learning and Reasoning. Liang, D., Krishnan, R. G., Hoffman, M. D., and Jebara, T. (2018). Variational autoencoders for collaborative filtering. In *World Wide Web Conference*. Locatello, F., Abbati, G., Rainforth, T., Bauer, S., Schölkopf, B., and Bachem, O. (2019a). On the fairness of disentangled representations. *Neural Information Processing Systems*. Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. (2019b). Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning. Locatello, F., Poole, B., Rätsch, G., Schölkopf, B., Bachem, O., and Tschannen, M. (2020). Weaklysupervised disentanglement without compromises. In *International Conference on Machine Learning*. Lopez, R., Regier, J., Cole, M. B., Jordan, M. I., and Yosef, N. (2018). Deep generative modeling for single-cell transcriptomics. *Nature Methods*, 15(12):1053–1058. Mita, G., Filippone, M., and Michiardi, P. (2021). An identifiable double VAE for disentangled representations. In *International Conference on Machine Learning*. Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning*. Rhodes, T. and Lee, D. (2021). Local disentanglement in variational auto-encoders using Jacobian l1 regularization. In *Neural Information Processing Systems*. Ročková, V. and George, E. I. (2016). Fast Bayesian factor analysis via automatic rotations to sparsity. Journal of the American Statistical Association, 111(516):1608–1622. Ročková, V. and George, E. I. (2018). The spike-and-slab lasso. *Journal of the American Statistical Association*, 113(521):431–444. Rohe, K. and Zeng, M. (2020). Vintage factor analysis with varimax performs statistical inference. *arXiv* preprint arXiv:2004.05387. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical* Society: Series B (Methodological), 58(1):267–288. Tonolini, F., Jensen, B. S., and Murray-Smith, R. (2020). 
Variational sparse coding. In *Uncertainty in Artificial Intelligence*.

Träuble, F., Creager, E., Kilbertus, N., Locatello, F., Dittadi, A., Goyal, A., Schölkopf, B., and Bauer, S. (2021). On disentangled representations learned from correlated data. In *International Conference on Machine Learning*.

Yalcin, I. and Amemiya, Y. (2001). Nonlinear factor analysis as a statistical method. *Statistical Science*, pages 275–294.

Zeisel, A., Muñoz-Manchado, A. B., Codeluppi, S., Lönnerberg, P., Manno, G. L., Juréus, A., Marques, S., Munguba, H., He, L., Betsholtz, C., Rolny, C., Castelo-Branco, G., Hjerling-Leffler, J., and Linnarsson, S. (2015). Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. *Science*, 347(6226):1138–1142.

Zhou, D. and Wei, X.-X. (2020). Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-vae. *Neural Information Processing Systems*.

## A Proofs

## A.1 Proof Of **Theorem 1**

We consider the case of known anchor features, where we have at least two anchor features per factor. We consider the case where the likelihood is Gaussian. We have two solutions, $(\tilde{\theta}, \tilde{z}, \tilde{\sigma}^2)$ and $(\hat{\theta}, \hat{z}, \hat{\sigma}^2)$, with equal likelihood. We show that $\tilde{z}$ must be a coordinate-wise transform of $\hat{z}$. We have

$$\sum_{i=1}^{N}\sum_{j=1}^{G}\frac{1}{\tilde{\sigma}_{j}^{2}}[x_{ij}-f_{\tilde{\theta}_{j}}(\tilde{\mathbf{w}}_{j}\odot\tilde{\mathbf{z}}_{i})]^{2}=\sum_{i=1}^{N}\sum_{j=1}^{G}\frac{1}{\hat{\sigma}_{j}^{2}}[x_{ij}-f_{\hat{\theta}_{j}}(\hat{\mathbf{w}}_{j}\odot\hat{\mathbf{z}}_{i})]^{2},\tag{17}$$

where $f_{\theta_j}(\cdot)$ denotes $f_\theta(\cdot)_j$, the $j$th output of $f_\theta$. We prove that, given the anchor feature for factor $k$, $z_{ik}$ is identifiable. As all factors are assumed to have anchor features, this holds for all $k \in \{1,\ldots,K\}$.

Suppose $j$ is an anchor feature for $k$. That is, $w_{jl} = 0$ for all $l \neq k$. Then, $\tilde{\mathbf{w}}_j \odot \tilde{\mathbf{z}}_i = (0,\ldots,0,\tilde{w}_{jk}\tilde{z}_{ik},0,\ldots,0)$. With slight abuse of notation, we then write $f_{\tilde{\theta}_j}(\tilde{\mathbf{w}}_j \odot \tilde{\mathbf{z}}_i) = f_{\tilde{\theta}_j}(\tilde{w}_{jk}\tilde{z}_{ik})$. Now, for Eq. 17 to hold for any value of $\{x_{ij}\}_{i,j=1}^{N,G}$ we must have, for all $j = 1,\ldots,G$:

$$\sum_{i=1}^{N}\frac{1}{\tilde{\sigma}_j^2}[x_{ij}-f_{\tilde{\theta}_j}(\tilde{w}_{jk}\tilde{z}_{ik})]^2=\sum_{i=1}^{N}\frac{1}{\hat{\sigma}_j^2}[x_{ij}-f_{\hat{\theta}_j}(\hat{w}_{jk}\hat{z}_{ik})]^2\tag{18}$$

$$\left(\frac{1}{\tilde{\sigma}_j^2}-\frac{1}{\hat{\sigma}_j^2}\right)\sum_{i=1}^{N}x_{ij}^2-2\sum_{i=1}^{N}x_{ij}\left(\frac{f_{\tilde{\theta}_j}(\tilde{w}_{jk}\tilde{z}_{ik})}{\tilde{\sigma}_j^2}-\frac{f_{\hat{\theta}_j}(\hat{w}_{jk}\hat{z}_{ik})}{\hat{\sigma}_j^2}\right)+\sum_{i=1}^{N}\left(\frac{f^2_{\tilde{\theta}_j}(\tilde{w}_{jk}\tilde{z}_{ik})}{\tilde{\sigma}_j^2}-\frac{f^2_{\hat{\theta}_j}(\hat{w}_{jk}\hat{z}_{ik})}{\hat{\sigma}_j^2}\right)=0.\tag{19}$$

For Eq. 19 to hold for all $x_{ij}$, we must have the coefficients equal to zero:

$$\frac{1}{\tilde{\sigma}_{j}^{2}}-\frac{1}{\hat{\sigma}_{j}^{2}}=0\implies\tilde{\sigma}_{j}^{2}=\hat{\sigma}_{j}^{2}\tag{20}$$

and

$$\frac{f_{\tilde{\theta}_{j}}(\tilde{w}_{jk}\tilde{z}_{ik})}{\tilde{\sigma}_{j}^{2}}-\frac{f_{\hat{\theta}_{j}}(\hat{w}_{jk}\hat{z}_{ik})}{\hat{\sigma}_{j}^{2}}=0\implies f_{\tilde{\theta}_{j}}(\tilde{w}_{jk}\tilde{z}_{ik})=f_{\hat{\theta}_{j}}(\hat{w}_{jk}\hat{z}_{ik}).\tag{21}$$

If $f_{\tilde{\theta}_j} : \mathbb{R} \to \mathbb{R}$ is invertible, then we have

$$\tilde{z}_{ik}=\tilde{w}_{jk}^{-1}f_{\tilde{\theta}_{j}}^{-1}(f_{\hat{\theta}_{j}}(\hat{w}_{jk}\hat{z}_{ik})).\tag{22}$$

That is, $z_{ik}$ is identifiable up to coordinate-wise transformation.
If $f_{\theta_j}$ is not invertible, then there exist bijective functions $g : \mathbb{R} \to \mathbb{R}$, $h : \mathbb{R} \to \mathbb{R}$ such that $\hat{w}_{jk}\hat{z}_{ik} = g(\tilde{w}_{jk}\tilde{z}_{ik})$ and the equality is satisfied:

$$f_{\tilde{\theta}_{j}}(\tilde{w}_{jk}\tilde{z}_{ik})=f_{\tilde{\theta}_{j}}(h(g(\tilde{w}_{jk}\tilde{z}_{ik})))\quad(\text{as }f_{\theta_{j}}\text{ is not invertible, }\tilde{w}_{jk}\tilde{z}_{ik}\neq h(g(\tilde{w}_{jk}\tilde{z}_{ik})))$$
$$=f_{\hat{\theta}_{j}}(\hat{w}_{jk}\hat{z}_{ik}).$$

As we have $\hat{z}_{ik} = \hat{w}_{jk}^{-1}g(\tilde{w}_{jk}\tilde{z}_{ik})$, $z_{ik}$ is identifiable up to coordinate-wise transformation.

## A.2 Proof Of **Theorem 2**

To prove the result, we:

1. Approximate the covariance $\frac{1}{N}\sum_{i=1}^{N}\text{Cov}(x_i)$ using a first order Taylor approximation.
2. Show that an anchor feature for factor $k$ always has larger covariance with another anchor feature for $k$ than with other non-anchor features. Specifically: for any $k \in \{1,\ldots,K\}$ and $j$ = anchor for $k$, we have
   - $\sum_{i=1}^{N}\text{Cov}(x_{ij}, x_{il}) = C_{kk}w_{jk}^2\sum_{i=1}^{N}B_{ijk}^2$ for all $l$ = anchor for $k$,
   - $\sum_{i=1}^{N}\text{Cov}(x_{ij}, x_{il}) < C_{kk}w_{jk}^2\sum_{i=1}^{N}B_{ijk}^2$ for all $l \neq$ anchor for $k$.
3. Apply Theorem 1 of Bing et al. (2020) to prove that the anchor features can be identified from the covariance of the data.

Regularity condition. Let $\mu(\mathbf{z}_i) = \mathbb{E}[\mathbf{x}_i|\mathbf{z}_i]$. We assume: for all $\mathbf{z}_i$ in a neighborhood of $\widehat{\mathbf{z}}_i = \mathbb{E}[\mathbf{z}_i]$,

$$\frac{1}{N}\sum_{i=1}^{N}\mu(\mathbf{z}_{i})=\frac{1}{N}\sum_{i=1}^{N}\left[\mu(\widehat{\mathbf{z}}_{i})+\frac{\partial\mu(\mathbf{z}_{i})}{\partial\mathbf{z}_{i}}\Big|_{\mathbf{z}_{i}=\widehat{\mathbf{z}}_{i}}(\mathbf{z}_{i}-\widehat{\mathbf{z}}_{i})\right]+o_{P}(1)\tag{25}$$

where $o_P(1)$ denotes a random variable converging to 0 in probability.

Step 1. Here is the marginal covariance of the sparse VAE:

$$\text{Var}(\mathbf{x}_{i})=\mathbb{E}\left[\text{Var}(\mathbf{x}_{i}|\mathbf{z}_{i})\right]+\text{Var}(\mathbb{E}\left[\mathbf{x}_{i}|\mathbf{z}_{i}\right])\tag{26}$$
$$=\mathbf{\Sigma}+\text{Var}(\mu(\mathbf{z}_{i}))\tag{27}$$

Take the first order Taylor expansion $\mu(\mathbf{z}_i) \approx \mu(\widehat{\mathbf{z}}_i) + \frac{\partial\mu(\mathbf{z}_i)}{\partial\mathbf{z}_i}\big|_{\mathbf{z}_i=\widehat{\mathbf{z}}_i}(\mathbf{z}_i-\widehat{\mathbf{z}}_i)$. Then

$$\text{Var}(\mu(\mathbf{z}_{i}))\approx\frac{\partial\mu(\mathbf{z}_{i})}{\partial\mathbf{z}_{i}}\bigg|_{\mathbf{z}_{i}=\widehat{\mathbf{z}}_{i}}\text{Var}(\mathbf{z}_{i})\left(\frac{\partial\mu(\mathbf{z}_{i})}{\partial\mathbf{z}_{i}}\bigg|_{\mathbf{z}_{i}=\widehat{\mathbf{z}}_{i}}\right)^{T}.\tag{28}$$

Now, $\mu(\mathbf{z}_i) = \{f_{\theta_j}(\mathbf{w}_j \odot \mathbf{z}_i)\}_{j=1}^{G}$. Then, for any two features, $j$ and $l$, we have

$$\text{Cov}(x_{ij},x_{il})\approx\sum_{k=1}^{K}\sum_{k'=1}^{K}\frac{\partial f_{\theta_{j}}(\mathbf{w}_{j}\odot\mathbf{z}_{i})}{\partial z_{ik}}\bigg|_{\mathbf{z}_{i}=\widehat{\mathbf{z}}_{i}}\frac{\partial f_{\theta_{l}}(\mathbf{w}_{l}\odot\mathbf{z}_{i})}{\partial z_{ik'}}\bigg|_{\mathbf{z}_{i}=\widehat{\mathbf{z}}_{i}}C_{kk'}\tag{29}$$

where $\text{Var}(\mathbf{z}_i) = \mathbf{C}$.

Step 2. We re-write $f_{\theta_j}(\cdot)$ to expose the first layer of the neural network (before the activation function is applied):

$$f_{\theta_{j}}(\mathbf{w}_{j}\odot\mathbf{z}_{i})=m_{\theta_{j}}(\mathbf{u}_{ij\cdot})\tag{30}$$

where $\mathbf{u}_{ij\cdot} = \{u_{ijd}\}_{d=1}^{D_1}$ with

$$u_{ijd}=\sum_{k=1}^{K}H_{dk}^{(1)}w_{jk}z_{ik},\tag{31}$$

where $\{H_{dk}^{(1)}\}_{d,k=1}^{D_1,K}$ are the weights of the first neural network layer with dimension $D_1$.
Then,

$$\text{Cov}(x_{ij},x_{il})=\sum_{k=1}^{K}\sum_{k'=1}^{K}w_{jk}w_{lk'}\left(\sum_{d=1}^{D_{1}}H_{dk}^{(1)}\frac{\partial m_{\theta_{j}}(\mathbf{u}_{ij\cdot})}{\partial u_{ijd}}\right)\left(\sum_{d=1}^{D_{1}}H_{dk'}^{(1)}\frac{\partial m_{\theta_{l}}(\mathbf{u}_{il\cdot})}{\partial u_{ild}}\right)C_{kk'}\tag{32}$$
$$=\sum_{k=1}^{K}\sum_{k'=1}^{K}w_{jk}w_{lk'}B_{ijk}B_{ilk'}C_{kk'},\tag{33}$$

where $B_{ijk} = \sum_{d=1}^{D_1}H_{dk}^{(1)}\frac{\partial m_{\theta_j}}{\partial u_{ijd}}$.

Suppose $j$ is an anchor feature for $k$. Then the absolute covariance of feature $j$ and any other feature $l$ is:

$$|\text{Cov}(x_{ij},x_{il})|=\Big|w_{jk}B_{ijk}\sum_{k'=1}^{K}w_{lk'}B_{ilk'}C_{kk'}\Big|\tag{34}$$
$$\leq\Big|C_{kk}w_{jk}B_{ijk}\sum_{k'=1}^{K}w_{lk'}B_{ilk'}\Big|\quad\text{by A3}\tag{35}$$
$$\leq C_{kk}w_{jk}^{2}B_{ijk}^{2}\quad\text{by A2.}\tag{36}$$

Step 3. In Step 2, we proved the equivalent of Lemma 2 of Bing et al. (2020), adapted for the sparse VAE. This allows us to apply Theorem 1 of Bing et al. (2020), which proves that the anchor features can be determined uniquely from the approximate covariance matrix, $\frac{1}{N}\sum_{i=1}^{N}\text{Cov}(x_i)$, as $N \to \infty$.

## A.3 Discussion Of Identifiability Assumptions

In this section, we examine the suitability of assumption A2. We do so by showing A2 holds for a three-layer neural network with ReLU activation functions, independently distributed Gaussian weights and no bias terms. Specifically:

$$\mathbb{E}\left[x_{ij}|\mathbf{w}_{j},\mathbf{z}_{i}\right]=\mathbf{H}_{j\cdot}^{(3)}\,\text{ReLU}(\mathbf{H}^{(2)}\,\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\mathbf{z}_{i})))\tag{37}$$

where $\mathbf{H}^{(1)} \in \mathbb{R}^{D\times K}$ are the weights for layer 1 with $D$ hidden units, $\mathbf{H}^{(2)} \in \mathbb{R}^{D\times D}$ are the weights for layer 2, and $\mathbf{H}_{j\cdot}^{(3)}$ denotes the $j$th row of the weights for layer 3, $\mathbf{H}^{(3)} \in \mathbb{R}^{G\times D}$. We have

$$\frac{\partial\mu(\mathbf{z}_{i})_{j}}{\partial z_{ik}}=\sum_{d_{1}=1}^{D}\sum_{d_{2}=1}^{D}H_{j,d_{2}}^{(3)}H_{d_{2},d_{1}}^{(2)}H_{d_{1},k}^{(1)}w_{jk}\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\mathbf{z}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{jk}z_{ik}>0\Big]\tag{38}$$

where $\mathbb{I}(\cdot)$ is the indicator function. Assume $\{w_{jk}\}_{j,k=1}^{G,K}$ are independent and symmetric around zero. Assume all weights are independent and distributed as: $H_{d_1,d_2}^{(m)} \overset{\text{i.i.d.}}{\sim} N(0,\tau)$ for all layers $m = 1, 2, 3$.

Taking the first order Taylor approximation, for any two features, $j$ and $l$, we have

$$\text{Cov}(x_{ij},x_{il})\approx\sum_{k=1}^{K}\sum_{k'=1}^{K}C_{kk'}\sum_{d_{1}=1}^{D}\sum_{d_{2}=1}^{D}\sum_{d_{1}'=1}^{D}\sum_{d_{2}'=1}^{D}(H_{j,d_{2}}^{(3)}H_{d_{2},d_{1}}^{(2)}H_{d_{1},k}^{(1)}w_{jk})(H_{l,d_{2}'}^{(3)}H_{d_{2}',d_{1}'}^{(2)}H_{d_{1}',k'}^{(1)}w_{lk'})\tag{39}$$
$$\times\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{jk}\widehat{z}_{ik}>0\Big]\tag{40}$$
$$\times\,\mathbb{I}\Big[\mathbf{H}_{d_{2}'\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{l}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1}',k}^{(1)}w_{lk}\widehat{z}_{ik}>0\Big].\tag{41}$$

As the neural network weights are independent Gaussians, we keep only the terms where $d_1 = d_1'$ and $d_2 = d_2'$:

$$\text{Cov}(x_{ij},x_{il})\approx\sum_{d_{2}=1}^{D}H_{j,d_{2}}^{(3)}H_{l,d_{2}}^{(3)}\sum_{d_{1}=1}^{D}(H_{d_{2},d_{1}}^{(2)})^{2}\sum_{k=1}^{K}\sum_{k'=1}^{K}C_{kk'}H_{d_{1},k}^{(1)}H_{d_{1},k'}^{(1)}w_{jk}w_{lk'}\tag{42}$$
$$\times\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{jk}\widehat{z}_{ik}>0\Big]\tag{43}$$
$$\times\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{l}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{lk}\widehat{z}_{ik}>0\Big].\tag{44}$$
If $j$ and $l$ are both anchor features for factor $k$, we have:

$$\text{Cov}(x_{ij},x_{il})\approx\sum_{d_{2}=1}^{D}(H_{j,d_{2}}^{(3)})^{2}\sum_{d_{1}=1}^{D}(H_{d_{2},d_{1}}^{(2)})^{2}C_{kk}(H_{d_{1},k}^{(1)})^{2}w_{jk}^{2}\tag{45}$$
$$\times\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{jk}\widehat{z}_{ik}>0\Big],\tag{46}$$

as $H_{j,d_2}^{(3)} = H_{l,d_2}^{(3)}$ and $w_{jk} = w_{lk}$ by definition of an anchor feature. Meanwhile, if $j$ and $l$ are not anchor features for the same factor, we have

$$\text{Cov}(x_{ij},x_{il})\approx\sum_{d_{2}=1}^{D}H_{j,d_{2}}^{(3)}H_{l,d_{2}}^{(3)}A_{d_{2}}\tag{47}$$

where

$$A_{d_{2}}=\sum_{d_{1}=1}^{D}(H_{d_{2},d_{1}}^{(2)})^{2}\sum_{k=1}^{K}\sum_{k'=1}^{K}C_{kk'}H_{d_{1},k}^{(1)}H_{d_{1},k'}^{(1)}w_{jk}w_{lk'}\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{j}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\tag{48}$$
$$\times\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{jk}\widehat{z}_{ik}>0\Big]\,\mathbb{I}\Big[\mathbf{H}_{d_{2}\cdot}^{(2)}\text{ReLU}(\mathbf{H}^{(1)}(\mathbf{w}_{l}\odot\widehat{\mathbf{z}}_{i}))>0\Big]\,\mathbb{I}\Big[\sum_{k=1}^{K}H_{d_{1},k}^{(1)}w_{lk}\widehat{z}_{ik}>0\Big].\tag{49}$$

As the weights are independent, we have that $H_{j,d_2}^{(3)}$, $H_{l,d_2}^{(3)}$, $A_{d_2}$ are independent. For large $D$, we then have

$$\sum_{d_{2}=1}^{D}H_{j,d_{2}}^{(3)}H_{l,d_{2}}^{(3)}A_{d_{2}}\to0.\tag{50}$$

Hence, for two anchor features $j$ and $l$ and non-anchor feature $m$, we have $\text{Cov}(x_{ij},x_{il}) > \text{Cov}(x_{ij},x_{im})$.

## B Inference

## B.1 Derivation Of Elbo For Identity Prior Covariance

In the main body of the paper, we set the prior covariance to $\Sigma_z = I$ as it performed well empirically and was computationally efficient. For this setting, the objective is derived as

$$\sum_{i=1}^{N}\log p_{\theta}(x_{i},\mathbf{W},\boldsymbol{\eta})=\sum_{i=1}^{N}\log\int p_{\theta}(x_{i}|z_{i},\mathbf{W})p(z_{i})dz_{i}+\log\int p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})d\boldsymbol{\Gamma}\tag{51}$$
$$=\sum_{i=1}^{N}\log\mathbb{E}_{q_{\phi}(z_{i}|x_{i})}\left[\frac{p_{\theta}(x_{i}|z_{i},\mathbf{W})p(z_{i})}{q_{\phi}(z_{i}|x_{i})}\right]+\log\mathbb{E}_{\boldsymbol{\Gamma}|\mathbf{W},\boldsymbol{\eta}}\left[\frac{p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})}{p(\boldsymbol{\Gamma}|\mathbf{W},\boldsymbol{\eta})}\right]\tag{52}$$
$$\geq\sum_{i=1}^{N}\mathbb{E}_{q_{\phi}(z_{i}|x_{i})}[\log p_{\theta}(x_{i}|z_{i},\mathbf{W})]-D_{KL}(q_{\phi}(z_{i}|x_{i})||p(z_{i}))+\mathbb{E}_{\boldsymbol{\Gamma}|\mathbf{W},\boldsymbol{\eta}}\left[\log[p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})]\right]\equiv\mathcal{L}(\theta,\phi,\mathbf{W},\boldsymbol{\eta}).\tag{53}$$

The KL divergence between the variational posterior $q_\phi(\mathbf{z}_i|\mathbf{x}_i)$ and the prior $\mathbf{z}_i \sim N_K(0, I)$ is:

$$D_{KL}(q_{\phi}(\mathbf{z}_{i}|\mathbf{x}_{i})||p(\mathbf{z}_{i}))=-\frac{1}{2}\sum_{k=1}^{K}\left[1+\log(\sigma_{\phi}^{2}(\mathbf{x}_{i})_{k})-(\mu_{\phi}(\mathbf{x}_{i})_{k})^{2}-\sigma_{\phi}^{2}(\mathbf{x}_{i})_{k}\right]\tag{54}$$

The final term of the ELBO in Eq. 53 is:

$$\mathbb{E}_{\boldsymbol{\Gamma}|\mathbf{W}^{(t)},\boldsymbol{\eta}^{(t)}}\left[\log[p(\mathbf{W}|\boldsymbol{\Gamma})p(\boldsymbol{\Gamma}|\boldsymbol{\eta})p(\boldsymbol{\eta})]\right]=\sum_{k=1}^{K}\sum_{j=1}^{G}\lambda^{*}(w_{jk}^{(t)},\eta_{k}^{(t)})|w_{jk}|+\sum_{k=1}^{K}\left[\sum_{j=1}^{G}\mathbb{E}[\gamma_{jk}|w_{jk}^{(t)},\eta_{k}^{(t)}]+a-1\right]\log\eta_{k}\tag{55}$$
$$+\sum_{k=1}^{K}\left[G-\sum_{j=1}^{G}\mathbb{E}[\gamma_{jk}|w_{jk}^{(t)},\eta_{k}^{(t)}]+b-1\right]\log(1-\eta_{k}),$$

where

$$\mathbb{E}[\gamma_{jk}|w_{jk},\eta_{k}]=\frac{\eta_{k}\psi_{1}(w_{jk})}{\eta_{k}\psi_{1}(w_{jk})+(1-\eta_{k})\psi_{0}(w_{jk})}\tag{56}$$
$$\lambda^{*}(w_{jk},\eta_{k})=\lambda_{1}\mathbb{E}[\gamma_{jk}|w_{jk},\eta_{k}]+\lambda_{0}(1-\mathbb{E}[\gamma_{jk}|w_{jk},\eta_{k}]).\tag{57}$$

As described in Algorithm 1, we alternate between updating $\mathbb{E}[\gamma_{jk}|w_{jk}^{(t)},\eta_{k}^{(t)}]$ and $\eta_k$, and taking gradient steps for $\theta$, $\phi$ and $\mathbf{W}$.
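A small sketch of the two updates in Eqs. 56-57 follows, assuming (as in the Spike-and-Slab Lasso) that ψ1 and ψ0 are Laplace densities with rates λ1 and λ0; the default rates mirror the synthetic-data setting λ1 = 1, λ0 = 10.

```python
import numpy as np

def laplace_pdf(w, lam):
    """Density of a zero-mean Laplace distribution with rate lam at w."""
    return 0.5 * lam * np.exp(-lam * np.abs(w))

def ssl_e_step(W, eta, lam0=10.0, lam1=1.0):
    """Eqs. 56-57: slab responsibilities E[gamma_jk | w_jk, eta_k] and the
    adaptive penalty lambda*(w_jk, eta_k). W: (G, K) selector matrix; eta: (K,)."""
    psi1 = laplace_pdf(W, lam1)   # slab density psi_1 (assumed Laplace)
    psi0 = laplace_pdf(W, lam0)   # spike density psi_0 (assumed Laplace)
    e_gamma = eta * psi1 / (eta * psi1 + (1.0 - eta) * psi0)
    lam_star = lam1 * e_gamma + lam0 * (1.0 - e_gamma)
    return e_gamma, lam_star
```

Entries of W with large magnitude receive responsibilities near 1 and hence the light slab penalty λ1; entries near zero are shrunk with the heavy spike penalty λ0.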
## B.2 Derivation Of Elbo For General Prior Covariance

In this section, we detail the sparse VAE ELBO for general $\Sigma_z$. We do not study this algorithm in this paper but we include the derivation here for completeness. The change in the sparse VAE ELBO for general $\Sigma_z$ is the KL divergence term:

$$D_{KL}(q_{\phi}(\mathbf{z}_{i}|\mathbf{x}_{i})||p(\mathbf{z}_{i}))=-\frac{1}{2}\left\{\sum_{k=1}^{K}[1+\log(\sigma_{\phi}^{2}(\mathbf{x}_{i})_{k})]-\text{tr}(\mathbf{\Sigma}_{z}^{-1}\text{diag}(\sigma_{\phi}^{2}(\mathbf{x}_{i})))-\mu_{\phi}(\mathbf{x}_{i})^{T}\mathbf{\Sigma}_{z}^{-1}\mu_{\phi}(\mathbf{x}_{i})-\log|\mathbf{\Sigma}_{z}|\right\}.\tag{59}$$

## C Details Of Empirical Studies

## C.1 Dataset Details And Preprocessing

PeerRead (Kang et al., 2018). We discard any word tokens that appear in fewer than about 0.1% of the abstracts and in more than 90% of the abstracts. From the remaining word tokens, we consider the G = 500 most used tokens as features. The observations are counts of each feature across N ≈ 11, 000 abstracts.

Semi-synthetic PeerRead. The training and testing data are distributed differently because we vary the correlations among the latent factors that generate the features across the two datasets. This data was generated as follows: (i) we applied Latent Dirichlet Allocation (LDA, Blei et al., 2003) to the PeerRead dataset using K = 20 components to obtain factors θ ∈ R^{N×K} and topics (loadings) β ∈ R^{K×G}. (ii) We created train set factors, θ_tr, by dropping the last K/2 columns of θ and replacing them with columns calculated as: logit θ̃_{·k} = logit θ_{·k} + N(0, σ²) for each of the first K/2 latent dimensions k. We fix the test set factors as θ_te = θ.

MovieLens (Harper and Konstan, 2015). We consider the MovieLens 25M dataset. Following Liang et al. (2018), we code ratings four or higher as one and the remaining ratings as zero. We retain users who have rated more than 20 movies. We keep the top G = 300 most rated movies. Finally, we randomly subsample N = 100, 000 users.

Zeisel (Zeisel et al., 2015). We first processed the data following Zeisel et al. (2015). Next, we normalized the gene counts using quantile normalization (Bolstad et al., 2003; Bolstad, 2018). Finally, following Lopez et al. (2018), we retained the top G = 558 genes with the greatest variability over cells.

## C.2 Sparse Vae Settings

- For all experiments, the sparse VAE takes the η_k prior hyperparameters to be a = 1, b = G, where G is the number of observed features.
- For experiments with Gaussian loss, the prior on the error variance is:

$$\sigma_{j}^{2}\sim\text{Inverse-Gamma}(\nu/2,\nu\xi/2).\tag{60}$$

We set ν = 3. The hyperparameter ξ is set to a data-dependent value. Specifically, we first calculate the sample variance of each feature, x_{·j}. Then, we set ξ such that the 5% quantile of the sample variances is the 90% quantile of the Inverse-Gamma prior. The idea is: the sample variance is an overestimate of the true noise variance. The smaller sample variances are assumed to correspond to mostly noise and not signal; these variances are used to calibrate the prior on the noise (a small sketch of this calibration is given at the end of this section).

## C.3 Experimental Settings

- For the neural networks, we use multilayer perceptrons with the same number of nodes in each layer. These settings are detailed in Table 6.
- For stochastic optimization, we use automatic differentiation in PyTorch, with optimization using Adam (Kingma and Ba, 2015) with default settings (beta1 = 0.9, beta2 = 0.999).
- For LDA, we used Python's sklearn package with default settings. For NMF, we used sklearn with alpha = 1.0.
- The dataset-specific experimental settings are in Table 6.
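As referenced in Appendix C.2 above, here is a minimal sketch of the data-dependent ξ calibration, assuming scipy's invgamma parameterization, in which quantiles scale linearly in the scale parameter νξ/2:

```python
import numpy as np
from scipy.stats import invgamma

def calibrate_xi(X, nu=3.0, var_q=0.05, prior_q=0.90):
    """Set xi so that the var_q quantile of the per-feature sample variances equals
    the prior_q quantile of Inverse-Gamma(nu/2, nu*xi/2) (Appendix C.2)."""
    s2_q = np.quantile(X.var(axis=0), var_q)
    # Quantiles of an Inverse-Gamma scale linearly in the scale parameter nu*xi/2.
    unit_q = invgamma(nu / 2.0).ppf(prior_q)  # quantile at scale = 1
    return 2.0 * s2_q / (nu * unit_q)
```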
## C.4 Additional Experiments

## C.4.1 Synthetic Data

We consider additional experiments to answer the following questions:

1. How does the sparse VAE perform when the regularization parameter in the sparsity-inducing prior is decreased?
2. Does the Spike-and-Slab Lasso prior perform similarly to an alternate sparsity-inducing prior, the Horseshoe prior (Carvalho et al., 2009)?

![20_image_0.png](20_image_0.png) ![20_image_1.png](20_image_1.png)

(a) Heldout mean squared error (lower is better) (b) Ground truth factor recovery (higher is better) (c) F-score of W support recovery (higher is better)

Figure 4: Synthetic data. The Sparse VAE with less regularization on W (SparseVAE-Slab) performs slightly worse than the Sparse VAE with more regularization on W (SparseVAE) in terms of (a) MSE; (b) DCI Disentanglement Score; and (c) F-score of the estimated support of W.

Experiment 1. We find that the sparse VAE with less regularization (λ0 = λ1 = 0.001) performs slightly worse than the sparse VAE with λ0 = 10, λ1 = 1 in terms of MSE (Figure 4a) and DCI (Figure 4b). The sparse VAE with less regularization still performs well, however, because the W it learns also tends to be somewhat sparse (Figure 4c). Note that when λ0 = λ1, the Spike-and-Slab Lasso prior is equivalent to a Lasso prior (Tibshirani, 1996).

Experiment 2. We find that the Spike-and-Slab Lasso and the horseshoe perform similarly for lower values of the factor correlation ρ, but that the Spike-and-Slab has lower MSE and higher DCI than the horseshoe for higher values of ρ. To implement the horseshoe prior, we follow Ghosh et al. (2019) and parameterize the half-Cauchy distribution $C^+(0, b)$ as:

$$a\sim C^{+}(0,b)\Leftrightarrow a^{2}|\lambda\sim\text{Inv-Gamma}(1/2,1/\lambda),\quad\lambda\sim\text{Inv-Gamma}(1/2,1/b^{2}).\tag{61}$$

Then, we assign W a horseshoe prior:

$$\lambda_{global}\sim\text{Inv-Gamma}(1/2,1/b_{global}^{2}),\tag{62}$$
$$\lambda_{local}\sim\text{Inv-Gamma}(1/2,1/b_{local}^{2}),\tag{63}$$
$$\tau_{jk}|\lambda_{local}\sim\text{Inv-Gamma}(1/2,1/\lambda_{local}),\tag{64}$$
$$v_{k}|\lambda_{global}\sim\text{Inv-Gamma}(1/2,1/\lambda_{global}),\tag{65}$$
$$w_{jk}|\tau_{jk},v_{k}\sim N(0,\tau_{jk}^{2}v_{k}^{2}),\tag{66}$$

where s ∼ Inv-Gamma(a, b) denotes the inverse-Gamma distribution with density p(s) ∝ s^{−a−1} exp{−b/s} for s > 0. We set the hyperparameters to b_global = b_local = 1.0.

![21_image_0.png](21_image_0.png)

(a) Heldout mean squared error (lower is better) (b) Ground truth factor recovery (higher is better)

Figure 5: Synthetic data, ρ ∈ {0, 0.2, 0.4, 0.6, 0.8}. The Sparse VAE with the Spike-and-Slab-Lasso prior performs slightly better than the Sparse VAE with a horseshoe prior on W in terms of (a) MSE; (b) DCI Disentanglement Score.
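As a sanity check of the scale-mixture representation in Eq. 61, here is a short simulation; the sample sizes and the comparison against numpy's direct Cauchy sampler are illustrative:

```python
import numpy as np

def sample_half_cauchy_via_invgamma(b, size, rng):
    """Sample a ~ C+(0, b) via the Inv-Gamma scale mixture of Eq. 61.
    Inv-Gamma(1/2, beta) is drawn as 1 / Gamma(shape=1/2, scale=1/beta)."""
    lam = 1.0 / rng.gamma(shape=0.5, scale=b**2, size=size)  # lam ~ Inv-Gamma(1/2, 1/b^2)
    a2 = 1.0 / rng.gamma(shape=0.5, scale=lam, size=size)    # a^2|lam ~ Inv-Gamma(1/2, 1/lam)
    return np.sqrt(a2)

rng = np.random.default_rng(0)
a = sample_half_cauchy_via_invgamma(b=1.0, size=200_000, rng=rng)
direct = np.abs(rng.standard_cauchy(200_000))                # direct C+(0, 1) draws
print(np.median(a), np.median(direct))                       # both medians are close to 1
```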
With this horseshoe prior on W, we again optimize W using coordinate ascent and then use closed-form updates for the inverse-Gamma parameters:

$$\tau_{jk}^{(t)}=0.5\|w_{jk}^{(t)}\|^{2}/(2v_{k}^{(t-1)}+1/\lambda_{local}^{(t-1)}),\tag{67}$$
$$v_{k}^{(t)}=2/(G+3)\left(\sum_{j=1}^{G}w_{jk}^{(t)2}/2\tau_{jk}^{2}+1/\lambda_{global}^{(t-1)}\right),\tag{68}$$
$$\lambda_{local}^{(t)}=2/(GK+3)\left(\sum_{k,j=1}^{K,G}1/\tau_{jk}^{(t)2}+1/b_{local}\right),\tag{69}$$
$$\lambda_{global}^{(t)}=2/(K+3)\left(\sum_{k=1}^{K}1/v_{k}^{(t)2}+1/b_{global}\right).\tag{70}$$

## C.5 Movielens: Varying Sample Size And Number Of Features

In this section, we consider the performance of the Sparse VAE when we vary the sample size and number of features (movies) in the MovieLens dataset. The settings are the same as noted in Table 6, except for the latent space dimension, which is set to K = 50. Generally, the Sparse VAE performs as well as or better than a VAE on negative heldout log-likelihood and heldout recall (Figure 6). For the largest setting (N = 100, 000, G = 1, 000), the Sparse VAE is about 100 times slower than a VAE.

![22_image_0.png](22_image_0.png)

(a) Negative heldout log-likelihood (lower is better) (b) Heldout recall (higher is better)

Figure 6: MovieLens. The Sparse VAE generally performs as well as or better than a VAE on (a) negative heldout log-likelihood and (b) heldout recall when varying the sample size and number of features (movies). Results are averaged over 5 splits of the data.

## C.5.1 Movielens And Peerread: Comparisons With Nmf

We include additional comparisons of the Sparse VAE with non-negative matrix factorization (NMF) on the MovieLens and PeerRead datasets. On the MovieLens data, examples of topics found by NMF include those related to science fiction; animated children's movies; and Star Wars (although Toy Story is included in the Star Wars topic). This performance is similar to the Sparse VAE. On the PeerRead data, examples of topics found by NMF include those related to reinforcement learning; language modeling; and neural network models. In summary, NMF finds interpretable topics, similarly to the Sparse VAE. However, the Sparse VAE has better heldout predictive performance than NMF. This provides evidence for the Sparse VAE retaining the interpretability of linear methods while improving model flexibility.

## C.6 Compute

- GPU: NVIDIA TITAN Xp graphics card (24GB).
- CPU: Intel E5-2620 v4 processor (64GB).

Table 5: On MovieLens (left) and PeerRead (right), NMF finds meaningful topics.
(a) MovieLens

| Topic | Movies |
|---|---|
| A | Alien; Aliens; Terminator 2; Terminator; Blade Runner |
| B | Shrek 1 & 2; Monsters, Inc.; Finding Nemo; The Incredibles |
| C | Star Wars IV, V & VI; Toy Story |

(b) PeerRead

| Topic | Words |
|---|---|
| A | agent; action; policy; state; game |
| B | word; representation; vector; embeddings; semantic |
| C | model; inference; neural; latent; structure |

Table 6: Settings for each experiment.

Synthetic data

| Settings | Value |
|---|---|
| # hidden layers | 3 |
| # layer dimension | 50 |
| Latent space dimension | 5 |
| Learning rate | 0.01 |
| Epochs | 200 |
| Batch size | 100 |
| Loss function | Gaussian |
| Sparse VAE | λ1 = 1, λ0 = 10 |
| β-VAE | [2, 4, 6, 8, 16] |
| VSC | α = 0.01 |
| OI-VAE | λ = 1, p = 5 |
| Runtime per split | CPU, 2 mins |

MovieLens

| Settings | Value |
|---|---|
| # hidden layers | 3 |
| # layer dimension | 300 |
| Latent space dimension | 30 |
| Learning rate | 0.0001 |
| Epochs | 100 |
| Batch size | 100 |
| Loss function | Softmax |
| Sparse VAE | λ1 = 1, λ0 = 10 |
| β-VAE | [2, 4, 6, 8, 16] |
| VSC | α = 0.01 |
| OI-VAE | λ = 1, p = 5 |
| Runtime per split | GPU, 1 hour |

PeerRead

| Settings | Value |
|---|---|
| # hidden layers | 3 |
| # layer dimension | 100 |
| Latent space dimension | 20 |
| Learning rate | 0.01 |
| Epochs | 40 |
| Batch size | 128 |
| Loss function | Softmax |
| Sparse VAE | λ1 = 0.001, λ0 = 5 |
| β-VAE | [2, 4, 6, 8, 16] |
| VSC | α = 0.01 |
| OI-VAE | λ = 1, p = 5 |
| Runtime per split | GPU, 20 mins |

Semi-synthetic PeerRead

| Settings | Value |
|---|---|
| # hidden layers | 3 |
| # layer dimension | 50 |
| Latent space dimension | 20 |
| Learning rate | 0.01 |
| Epochs | 50 |
| Batch size | 128 |
| Loss function | Softmax |
| Sparse VAE | λ1 = 0.01, λ0 = 5 |
| β-VAE | [2, 4, 6, 8, 16] |
| VSC | α = 0.01 |
| OI-VAE | λ = 1, p = 5 |
| Runtime per split | GPU, 30 mins |

Zeisel

| Settings | Value |
|---|---|
| # hidden layers | 3 |
| # layer dimension | 100 |
| Latent space dimension | 15 |
| Learning rate | 0.01 |
| Epochs | 100 |
| Batch size | 512 |
| Loss function | Gaussian |
| Sparse VAE | λ1 = 1, λ0 = 10 |
| VSC | α = 0.01 |
| OI-VAE | λ = 1, p = 5 |
| Runtime per split | CPU, 15 mins |
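The reference implementation is linked in Section 4; purely to make the settings in Table 6 concrete, here is a hedged PyTorch sketch of a decoder with a per-feature selector matrix W, in which each feature depends only on its masked factors. The class name, scalar per-feature head, and initialization are our assumptions, not the original implementation.

```python
import torch
import torch.nn as nn

class MaskedDecoder(nn.Module):
    """Sketch: a shared MLP applied per feature j to (w_j * z), so feature j
    depends only on the factors selected by row j of W (3 hidden layers, as in Table 6)."""
    def __init__(self, n_features, n_factors, hidden=50, n_layers=3):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_features, n_factors))
        layers, d = [], n_factors
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, 1)]
        self.f = nn.Sequential(*layers)

    def forward(self, z):                    # z: (batch, n_factors)
        masked = z.unsqueeze(1) * self.W     # (batch, n_features, n_factors)
        return self.f(masked).squeeze(-1)    # (batch, n_features) feature means

dec = MaskedDecoder(n_features=558, n_factors=15)  # e.g. the Zeisel settings in Table 6
```

This per-feature masking is also what makes the sparse VAE more expensive than a standard VAE: the network effectively runs once per feature, as noted in the Discussion.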
Review 1: Summary: The paper proposes a method called Sparse VAE where the mapping between latent factors and observed features is enforced to be sparse in the sense that each observed feature depends only on a small number of latent factors. This is achieved by computing each observed feature independently using the same decoder but with a different subset of latent factors as input (obtained via a mask on the full set of latent factors). The mask varies across output features but is the same across data points, which the authors motivate using a genomics application where each observed feature always corresponds to the same gene. The authors further motivate the approach through an identifiability theoretical analysis, highlighting that compared to other DGMs the proposed approach is identifiable. Finally, the authors demonstrate the benefits of the approach on a number of synthetic and real tasks. Strengths and Weaknesses: Strengths: The paper is well written and contributes to the DGM literature with a new modeling approach, theoretical analysis, and experiments designed to illustrate the proposed approach's main benefits. Weaknesses: * While the approach has some advantages over DGMs (e.g. identifiability) it has more limited applicability (can only be applied to tabular data, i.e. where the number of observed features is fixed and where each observed feature has the same interpretation across data points). * Compared to the generic VAE, the proposed approach is also more computationally expensive. Requested Changes: Given that identifiability has been analyzed in the context of NMF, I would consider either including NMF in Figure 3 (identifiability) or describing why one cannot use NMF in that context. In general, given the tabular data assumption, I think that the stronger the discussion (and empirical comparison) with alternative methods tailored for tabular data like movie reviews, the stronger the paper. Overall though, the paper is well written with all the strengths and weaknesses discussed clearly. The paper describes how the identifiability analysis motivates the modeling choices (e.g. highlighting that the SSL prior is likely to yield the sparse mapping that satisfies the anchor assumption required for identifiability) and there is at least one (synthetic) experiment to illustrate each claim e.g. identifiability, interpretability, and improved held out performance on real data. Therefore my recommendations above are not critical for securing my recommendation for acceptance. Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: This paper introduces a new family of deep generative models that is identifiable by virtue of the anchor assumption (which states that, for each factor, there are at least two features which depend solely on that factor). Intuitively, this assumption effectively means that the observation serves as a noisy proxy for the latent variable itself and is what is used to guarantee identification of the model. The model is instantiated via a spike and slab prior which is used to define a mask over which latent factors are used in the conditional distribution over the observations given the latent variables. The model is evaluated on synthetic data, where it is found to recover the underlying sparsity of the generating process. In addition, it is evaluated on several tabular datasets. On MovieLens (Recsys) it is found to improve upon recall and NDCG. On PeerRead, it is found to improve upon likelihood.
On a dataset of mouse cortex cells, the latent representation learned by the model is found to improve predictive accuracy. Strengths and Weaknesses: Strengths * Clarity: I found the paper clear, well written and easy to follow * Novelty: The idea to leverage the anchor assumption to obtain identifiability is neat and this work presents a compelling instantiation of this idea. Weaknesses * Evaluation: I think the biggest concern that I have is that the method is solely evaluated on datasets that are relatively small (the largest number of features is ~500 and the largest number of training samples is ~100K). In order to push the limits of the method, it's worthwhile to extend the evaluation to larger datasets. * The MovieLens dataset comes in a variety of different sizes, including MovieLens 20M, which has tens of thousands of features. Given that this method is restricted to tabular data, I do think it is important for the work to evaluate on such larger training datasets. * It would be interesting to study the effect of increasing training/feature size. Specifically, this would enable practitioners to assess the utility of this method in different datasize/feature regimes. * Another reason to consider increasing the dimensionality of features is that tabular data often exhibits long-tails which can inhibit learning (e.g. https://arxiv.org/abs/1901.05534, https://arxiv.org/abs/1710.06085). This regime presents an interesting opportunity to assess the degree to which model restrictions that guarantee identifiability help/hurt learning on tabular data. Requested Changes: * Running an evaluation on larger datasets * Evaluate how the model performs as a function of feature size (and number of training examples) * What happens in scenarios where there is model misspecification? i.e. synthetic data where the anchor assumption is violated? Broader Impact Concerns: To the best of my knowledge, there are no broader impact concerns I anticipate with this manuscript. ================================================== Review 3: Summary: The authors propose a new type of deep generative models (DGMs), called sparse DGMs, and prove analytically that under certain assumptions these models are identifiable. They then instantiate this model class with an amortized inference version, called sparse VAE, and show empirically that this model outperforms existing VAE models across a range of tasks. Strengths and Weaknesses: Strengths: - The paper is well written and easy to understand. - Identifiability in DGMs is a very useful but elusive feature and this work provides a way to achieve it. - The empirical performance of the proposed sparse VAE looks very promising. Weaknesses: - It is unclear how realistic some of the theoretical assumptions are. - The connection between the theory and the presented sparse VAE model could be discussed in more detail. - Some of the design choices are not fully motivated; ablation studies could help. Requested Changes: Overall, this is a really nice paper and I think it will be a nice contribution to this journal. However, I think there are a few improvements that could be made. Critical: - It is unclear to me how strong assumption A1 is, especially that the two anchor features have the exact same mapping $f_j$. Does this mean that the features essentially have to be copies of each other? How realistic is this assumption in the real data? - As mentioned by the authors, their theory does not apply directly to the proposed sparse VAE, due to its non-convexity.
It would be useful to discuss this discrepancy a bit more and give an intuition for how "close" the sparse VAE gets to the identifiable regime or what kind of adjustments in the model make it more or less identifiable. - Relatedly, some of the design decisions are not motivated very clearly. For instance, how would the SSL prior work if the slab were not a Laplace distribution? How would it compare to, e.g., a horseshoe prior? Some ablation studies would be useful to motivate why this particular model design is the best one. Minor: - Since the proposed sparse VAE is computationally more expensive than some of the baselines, it would be useful to report runtimes in the experiments to get a better intuition for how much additional compute one has to invest for the improvement in performance. Broader Impact Concerns: There are no ethical concerns. ================================================== Metareview: Recommendation: Accept as is Comment: The paper introduces sparse VAE, a deep generative model applicable when each observed dimension is dependent on a potentially different small subset of latent variables. This sparse dependence is modelled by applying a different learned sparse mask to the latent vector for each observed dimension. The authors prove that the resulting model is identifiable under certain assumptions and show that it performs well in practice. The paper is well written and makes progress in a difficult area. The authors have addressed most of the reviewer concerns with their rebuttal. ==================================================
# The Alignment Problem In Curriculum Learning

Anonymous authors

Paper under double-blind review

## Abstract

In curriculum learning, teaching involves cooperative selection of sequences of data via plans to facilitate efficient and effective learning. One-off cooperative selection of data has been mathematically formalized as entropy-regularized optimal transport and the limiting behavior of myopic sequential interactions has been analyzed, both yielding theoretical and practical guarantees. We recast sequential cooperation with curriculum planning in a reinforcement learning framework and analyze performance mathematically and by simulation. We prove that infinite length plans are equivalent to not planning under certain assumptions on the method of planning, and isolate instances where monotonicity and hence convergence in the limit hold, as well as cases where it does not. We also demonstrate through simulations that argmax data selection is the same across planning horizons and demonstrate problem-dependent sensitivity of learning to the teacher's planning horizon. Thus, we find that planning ahead yields efficiency at the cost of effectiveness. This failure of alignment is illustrated in particular with grid world examples in which the teacher must attempt to steer the learner away from a particular location in order to reach the desired grid square. We conclude with implications and directions for efficient and effective curricula.

## 1 Introduction

Advances in AI and machine learning enable the possibility that artificial systems may autonomously facilitate human goals, including human learning. Design of such systems requires addressing a value alignment problem (Russell, 2019; Christian, 2020), which requires interacting with the system to achieve the desired goals. Toward this end, formalizing models of cooperation among agents that bridge human and machine learning is an important direction for research. In this paper, we identify a novel value alignment problem in the context of agents that facilitate learning, and we identify and test sufficient conditions for ensuring value alignment for curriculum learning.

Learning may be facilitated by the teacher planning ahead, which becomes a problem of reinforcement learning. There exists an extensive literature on curriculum learning (Elman, 1993; Khan et al., 2011; Pentina et al., 2015; Matiisen et al., 2020; Graves et al., 2017); however, this literature focuses on naive learners rather than those that reason cooperatively about the teacher's selection of data. Theoretical results are limited (Milli and Dragan, 2020) and have not systematically considered the possibility of alignment problems or their solutions. Recent advances in theoretical foundations for cooperative inference admit a more unified formal treatment (Wang et al., 2020b), which is necessary to understand whether, when, and why alignment problems arise.

We formalize the alignment problem in curriculum learning via the mathematical condition of consistency. Given a teacher and learner cooperatively communicating, the teacher aims to convey a distribution θ on the finite set of possible hypotheses H to the learner, over an infinite horizon. That is, if $\theta_n$ denotes the learner's distribution at the $n$-th round of communication, the alignment problem is to have $\lim_{n\to\infty}\sum_{h\in\mathcal{H}}|\theta(h)-\theta_{n}(h)|=0$. When the teacher is conveying a specific hypothesis $h'$ to the learner, the distribution to be learned is $\theta = \delta_{h'}$, a Dirac distribution.
We investigate the alignment problem in curriculum learning by recasting sequential cooperative Bayesian inference (SCBI) as a Markov decision process (MDP). In doing so, we retain the theoretical strengths of prior formalizations which yielded proofs of consistency and rates of convergence, while considering the benefits and drawbacks of planning by the teacher. Section 2 gives background on one-off cooperative inference and sequential cooperative inference, as well as the interpretation of SCBI as a Markov chain. Section 3 recasts SCBI as a Markov decision process, distinct from the trivial realization of a Markov chain as an MDP, contrasts SCBI with no planning ahead vs. using a teaching plan which calculates several steps into the future, and isolates the theoretical basis of misalignment. One main result of this section is that for a class of reward/cost functions, planning infinitely far ahead is equivalent to not planning at all; this includes using a reward proportional to the probability of selecting the correct hypothesis. The other main result is to give a condition for monotonicity in expectation which yields a sufficient requirement for alignment. Section 4 gives examples and simulations. Section 5 gives related work, and Section 6 concludes.

Notation. $M \in \mathbb{R}^{|\mathcal{D}|\times|\mathcal{H}|}_{>0}$ is the joint distribution for the teacher and learner between data and hypotheses, with $|\mathcal{D}|$ rows and $|\mathcal{H}|$ columns, and $M_{(d,h)}$ the entry of the joint distribution corresponding to datum $d$ and hypothesis $h$. $M^{\theta,\lambda}$ is the joint distribution $M$ normalized using Sinkhorn scaling to have column sums equal to $\theta$ and row sums equal to $\lambda$; that is, for every $h \in \mathcal{H}$, $\sum_{d\in\mathcal{D}} M^{\theta,\lambda}_{(d,h)} = \theta(h)$, and for every $d \in \mathcal{D}$, $\sum_{h\in\mathcal{H}} M^{\theta,\lambda}_{(d,h)} = \lambda(d)$. $\pi : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ is a teaching strategy used by the teacher for a single round of teaching, while $\pi^N_R : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ is the teaching strategy obtained from $\pi$ by planning $N$ teaching rounds into the future and using the random variable $R$, representing rewards/costs inherent to the problem. $\delta_{M^{\theta,\lambda}_{(d,-)}/\lambda(d)}$ is the atomic distribution on $\mathcal{P}(\mathcal{H})$ with atom $M^{\theta,\lambda}_{(d,-)}/\lambda(d)$; i.e. $\delta_{M^{\theta,\lambda}_{(d,-)}/\lambda(d)} \in \mathcal{P}(\mathcal{P}(\mathcal{H}))$, the space of distributions on the distributions on hypotheses, where the Markov operator $\Psi_\pi$ in our formalism acts. $\Psi^N_\pi$ denotes $\Psi_\pi : \mathcal{P}(\mathcal{P}(\mathcal{H})) \to \mathcal{P}(\mathcal{P}(\mathcal{H}))$ composed with itself $N$ times. Frequently we will shorten the notation $\delta_{M^{\theta,\lambda}_{(d,-)}/\lambda(d)}$ to $\delta_{(d)}$.

## 2 Background

Curriculum learning involves selecting a sequence of learning problems that lead the learner to a desired knowledge or capability. We formalize these as a sequence of data that lead the learner to a target hypothesis. Throughout, we will assume teachers and learners are probabilistic agents reasoning over discrete and finite spaces of hypotheses $h \in \mathcal{H}$ and data $d \in \mathcal{D}$. Recall that in standard probabilistic inference, learners update their posterior beliefs $P(h|d)$ in proportion to the product of the prior beliefs $P(h)$ and the likelihood of the data $P(d|h)$, as dictated by Bayes' rule.

One-off cooperative inference. Cooperative inference between probabilistic agents differs from standard Bayesian inference in the second agent, the teacher, who selects the data, and in that the agents reason about each other's beliefs. Based on the shared joint distribution between data and hypotheses, the teacher reasons about the learner's beliefs, and samples data to pass according to the learner's current distribution on hypotheses, the joint distribution, and the desired hypothesis to be conveyed.
The learner then reasons based upon what data they have been passed by the teacher and infers, based on the shared joint distribution, what hypothesis the teacher is attempting to convey. This process may be represented mathematically by the following system of equations:

$$P_{L}(h|d)=\frac{P_{T}(d|h)P_{L_{0}}(h)}{P_{L}(d)},\qquad P_{T}(d|h)=\frac{P_{L}(h|d)P_{T_{0}}(d)}{P_{T}(h)}\tag{1}$$

where $P_L(h|d)$ represents the learner's posterior probability for hypothesis $h$ given datum $d$; $P_T(d|h)$ is the probability of the teacher selecting datum $d$ to convey hypothesis $h$; $P_{L_0}(h)$ represents the learner's prior for hypothesis $h$; $P_{T_0}(d)$ is the teacher's prior for selecting datum $d$; $P_L(d)$ and $P_T(h)$ are normalizing constants. Sinkhorn scaling of matrices (i.e. alternating row-column normalization of the joint distribution) may be used to solve equation (1), and the result is an optimal entropy-regularized plan for transporting beliefs (Wang et al., 2019; 2020b).

Sequential cooperative inference. In sequential cooperative Bayesian inference (SCBI), a teacher and learner participate in rounds of learning. To convey a particular hypothesis (or belief on the space of possible hypotheses) from the hypothesis space $\mathcal{H}$, in each round the teacher passes a datum $d \in \mathcal{D}$ to the learner, and the learner updates their belief distribution accordingly. At the end of each round, the teacher and learner both update their posterior distributions to become their prior distributions in the next round (Wang et al., 2020a). Each round of learning behaves as in cooperative inference, where the system of equations (1) must be solved. However, each round differs in having updated the prior at the end of the previous round, which is one marginal constraint for the Sinkhorn scaling in (1).

The process of teaching-learning-updating works as follows. Beginning with the joint distribution $M_n$ of the previous round and a distribution $\theta_n \in \mathcal{P}(\mathcal{H})$, which represents the learner's beliefs from the previous round of teaching-learning, where $\mathcal{P}(\mathcal{H})$ is the simplex of probability distributions on $\mathcal{H}$, the teacher computes the Sinkhorn scaling of $M_n$ with row sums $\lambda$ and column sums $\theta_n$. Call this $M_{n+1}$. Here $\lambda$ is an underlying distribution on $\mathcal{D}$ reflecting inherent biases in selecting particular data points; $\lambda$ is typically taken to be the uniform distribution. Then the teacher uses the distribution $M_{n+1}\hat{\theta}$ to sample a datum $d_{n+1}$ from $\mathcal{D}$ and passes it to the learner, where $\hat{\theta}$ is the desired belief on hypotheses which the teacher wishes to convey, typically a Dirac distribution, corresponding to a corner of the simplex. The learner then calculates $M_{n+1}$ in exactly the same way as the teacher, then multiplies $\theta_n$ by the likelihood of selecting $d_{n+1}$. Normalizing gives a distribution $\theta_{n+1}$. The process then repeats inductively, with $n$ replaced everywhere by $n+1$.
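To make a round concrete, here is a minimal numerical sketch; the joint matrix M, the uniform λ, and the iteration count are illustrative, and θ̂ is taken to be the Dirac distribution on the target hypothesis:

```python
import numpy as np

def sinkhorn(M, row_sums, col_sums, iters=200):
    """Alternately rescale M so rows sum to row_sums and columns to col_sums."""
    S = M.copy()
    for _ in range(iters):
        S *= (row_sums / S.sum(axis=1))[:, None]
        S *= (col_sums / S.sum(axis=0))[None, :]
    return S

def scbi_round(M, theta, target_h, lam, rng):
    """One SCBI round: teacher samples a datum, learner updates its belief."""
    S = sinkhorn(M, lam, theta)                    # rows: data, columns: hypotheses
    teach = S[:, target_h] / S[:, target_h].sum()  # M_{n+1} theta-hat for a Dirac theta-hat
    d = rng.choice(len(lam), p=teach)
    likelihood = S[d] / theta                      # P_T(d | h) for each hypothesis h
    posterior = theta * likelihood                 # equals S[d] / lam[d] after normalizing
    return d, posterior / posterior.sum()

rng = np.random.default_rng(0)
M = rng.random((3, 2))                             # 3 data, 2 hypotheses (as in Figure 1)
lam = np.ones(3) / 3
theta = np.ones(2) / 2
for _ in range(10):
    d, theta = scbi_round(M, theta, target_h=0, lam=lam, rng=rng)
print(theta)                                       # mass typically concentrates on hypothesis 0
```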
With $T_d : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{H})$ the map bringing the learner's prior to posterior when datum $d$ is chosen by the teacher, and $\tau : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ the map of the teacher's sample distribution based on the learner's prior, with $\tau_d$ the $d$-th component of this map, the Markov transition operator for a fixed hypothesis $h \in \mathcal{H}$, $\Psi(h) : \mathcal{P}(\mathcal{P}(\mathcal{H})) \to \mathcal{P}(\mathcal{P}(\mathcal{H}))$, is defined as:

$$(\Psi(h)(\mu))(E):=\int_{E}\sum_{d\in\mathcal{D}}\tau_{d}(T_{d}^{-1}(\theta))\,\mathrm{d}(T_{d}^{*}\mu)(\theta)\tag{2}$$

where $T^*_d$ is the push-forward of $T_d$ on Borel measures¹, $\mu$ is a Borel probability measure on the simplex $\mathcal{P}(\mathcal{H})$, and $E \subseteq \mathcal{P}(\mathcal{H})$ is a Borel measurable subset. $T_d$ and $\tau$ are computed as above by using the Sinkhorn scaling of the joint distribution between data and hypotheses.

Wang et al. (2020b) consider the problem of a teacher and learner communicating cooperatively in discrete rounds of teaching/learning. The teacher and learner reason using Bayesian inference at each round, without any reference to what future probability distributions on the available hypotheses might be, and without reference to any costs/rewards the teacher uses in order to determine what distribution to use to sample the data which they are passing to the learner. Although there is a discussion of SCBI as a Markov chain in (Wang et al., 2020b), there is no extension of the method to a Markov decision process. Here we extend the formalism to include planning ahead by the teacher, as well as rewards/costs the teacher uses in planning in order to bias the learner towards/away from a particular hypothesis.

## 3 Curriculum Planning Via Markov Decision Processes

Curriculum planning involves the teacher planning several teaching moves in advance. We may model this using a Markov decision process (MDP) as follows: Let the *state space* of the process be given by $S = \mathcal{P}(\mathcal{H})$, the probability distributions on $\mathcal{H}$. Let the *action space* of the MDP be given by $A = \mathcal{D}$, the data available for the teacher to pass to the learner (interpretation: an action $d \in \mathcal{D}$ corresponds to passing $d$ to the learner). We fix an underlying distribution $\lambda \in \mathcal{P}(\mathcal{D})$, which represents any inherent bias towards selecting or not selecting some particular data. In SCBI (Wang et al., 2020a), $\lambda$ is the uniform distribution. We will allow the reward function $R$ to be a combination of positive and negative pieces (to include costs, if some hypotheses are particularly undesirable). The transition probability $T(\omega|d,\theta)$ of the MDP between probability distributions $\omega$ and $\theta$ on $\mathcal{H}$, based upon the teacher selecting datum $d$, is $\lambda(d)$ if $\omega = T_d(\theta)$ and is zero otherwise. A teaching strategy then consists of a plan $\pi : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$; effectively, 'sample datum $d$ using $\pi(\theta)$ when the current distribution on hypotheses is $\theta$.' In SCBI, the $d$-th component of the plan $\pi$ is $\pi_d(\theta) = \tau_d(\theta)$, i.e. the adjustment of the teacher's distribution according to the learner's current distribution. The teacher's strategy can be made deterministic if $\pi_d$ is an atomic distribution with a single atom for every $d \in \mathcal{D}$.

¹ i.e. $T^*_d(\mu)(E) = \mu(T_d^{-1}(E))$.

Figure 1: Analysis of convergence to the target hypothesis as a function of longer curricula (0-4) for probabilistic action selection and argmax action selection, given 3 data and 2 hypotheses. We show the probability of the target hypothesis according to the learner versus # teaching/learning rounds, using $R(\theta) = \theta([\text{true hypothesis}])$. **(left)** On average, when the teacher selects actions probabilistically (i.e. Eq. 3), longer curricula yield a marginal difference.
**(right)** On average, when the teacher selects actions using argmax, longer curricula are not different from not planning ahead. All curves for 0 steps ahead to 4 steps ahead are exactly overlaid.

Throughout this paper we will be focusing on plans $\pi$ which amount to the teacher preparing a curriculum by calculating what might happen in future rounds of learning based upon the current data selection. Explicitly, the plan $\pi^N_R : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ corresponding to planning ahead $N$ moves based on the teaching strategy $\pi$ and a random variable $R$ is realized by the following procedure:

$$\pi_{R}^{N}(\theta)(d)=\mathrm{Norm}\left(M_{(d,h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d)}}[R]+C\right)\tag{3}$$

Here $\mathrm{Norm}(\cdot)$ is normalization to ensure that $\sum_d \pi^N_R(\theta)(d) = 1$, and $M^{\theta,\lambda}$ is the Sinkhorn scaling of $M$ so that for every $d \in \mathcal{D}$ and $h \in \mathcal{H}$, $\sum_{d'\in\mathcal{D}} M^{\theta,\lambda}_{(d',h)} = \theta(h)$ and $\sum_{h'\in\mathcal{H}} M^{\theta,\lambda}_{(d,h')} = \lambda(d)$. The $h$ in Equation (3) is the target hypothesis the teacher desires to convey, $R : \mathcal{P}(\mathcal{H}) \to \mathbb{R}$ is a random variable representing a reward or cost, and $\Psi_\pi : \mathcal{P}(\mathcal{P}(\mathcal{H})) \to \mathcal{P}(\mathcal{P}(\mathcal{H}))$ is the Markov operator induced by the teaching plan $\pi$, i.e. for a Borel measure $\mu \in \mathcal{P}(\mathcal{P}(\mathcal{H}))$ and $E \subseteq \mathcal{P}(\mathcal{H})$ a measurable subset,

$$\Psi_{\pi}(\mu)(E)=\int_{E}\sum_{d\in\mathcal{D}}\pi_{d}(T_{d}^{-1}(\theta))\,d(T_{d}^{*}\mu)(\theta),\tag{4}$$

and $C$ is a constant so that what is being normalized to a distribution is non-negative (typically chosen to be $\epsilon - \min\left\{M^{\theta,\lambda}_{(d,h)}\,\mathbb{E}_{\Psi^N_\pi\delta_{(d)}}[R] \mid d \in \mathcal{D},\, h \in \mathcal{H}\right\}$, for some small $\epsilon$). Frequently we will drop the subscript $R$ to simplify notation.

Note that the behavior of the teaching/learning interaction will vary depending upon the normalization constant $C$. On one extreme, as $C \to 0^+$, $\pi^N_R(\theta)(d) \to \mathrm{Norm}\left(M^{\theta,\lambda}_{(d,h)}\,\mathbb{E}_{\Psi^N_\pi\delta_{(d)}}[R]\right)$. Because of the potential non-positivity, but the overall normalization, this corresponds to a signed probability measure. At the other extreme, as $C \to \infty$, $\pi^N_R(\theta)(d) \to \frac{1}{|\mathcal{D}|}$, independently of $\theta$; i.e. the teacher's distribution is the uniform distribution on $\mathcal{D}$, regardless of the learner's beliefs, hence the teacher's choice of data is random. In particular, if $C$ is much greater than the expectation term for some particular choice of datum $d$, then $\pi^N_R(\theta) \approx \frac{1}{|\mathcal{D}|}$. In order to make the distribution positive, we must have:

$$C>-\min_{d\ \text{s.t.}\ M^{\theta,\lambda}_{(d,h)}\mathbb{E}_{\Psi^{N}_{\pi}\delta_{(d)}}[R]\leq0}M^{\theta,\lambda}_{(d,h)}\,\mathbb{E}_{\Psi^{N}_{\pi}\delta_{(d)}}[R]\tag{5}$$

On the other hand, in order to preserve the teacher's knowledge and avoid random data selection, we would also like:

$$C\leq\min_{d\ \text{s.t.}\ M_{(d,h)}^{\theta,\lambda}\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d)}}[R]>0}M_{(d,h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d)}}[R]\tag{6}$$

However, for random variables $R$ which place too high a cost on some outcomes, versus too small a reward, it may be impossible to simultaneously meet these two conditions. That is, ensuring a positive probability distribution may create a more uniform distribution for the data which already have a positive probability of selection, prior to the addition of the normalization constant $C$. In Equation (3), if $R(\omega) = \omega(h)$, this corresponds to the teacher calculating $N$ steps ahead in order to choose the data at the current step which will increase the likelihood that the learner will be able to infer the target hypothesis in the future.
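To illustrate how $\pi^N_R$ can be computed in practice, here is a minimal sketch (ours, not the authors' implementation) that evaluates the expectation in Equation (3) by brute-force enumeration of future teaching rounds, reusing the `sinkhorn` helper from the sketch above. The recursion is exactly the finite sum derived as Equation (7) below, and its cost grows as $|\mathcal{D}|^N$.

```python
import numpy as np

def expected_reward(M, theta, lam, h, R, N):
    """E_{Psi_pi^N delta}[R] for the SCBI base plan pi = tau, computed by
    enumerating all length-N continuations of the teaching process."""
    if N == 0:
        return R(theta)
    S = sinkhorn(M, lam, theta)                  # M^{theta, lambda}
    total = 0.0
    for d in range(len(lam)):
        tau_d = S[d, h] / theta[h]               # teacher's prob. of datum d
        theta_d = S[d, :] / S[d, :].sum()        # T_d(theta)
        total += tau_d * expected_reward(S, theta_d, lam, h, R, N - 1)
    return total

def plan_ahead(M, theta, lam, h, R, N, eps=1e-6):
    """pi^N_R(theta) from Equation (3): score each datum d by
    M^{theta,lambda}_{(d,h)} * E_{Psi_pi^N delta_(d)}[R] + C, then normalize."""
    S = sinkhorn(M, lam, theta)
    scores = np.array([
        S[d, h] * expected_reward(S, S[d, :] / S[d, :].sum(), lam, h, R, N)
        for d in range(len(lam))
    ])
    C = eps - min(scores.min(), 0.0)             # make all scores positive
    scores = scores + C
    return scores / scores.sum()
```

For example, `plan_ahead(M, theta, lam, h=0, R=lambda t: t[0], N=2)` corresponds to planning two rounds ahead with the reward $R(\theta) = \theta(h)$ used in Figure 1.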
Furthermore, we can replace the expectation of the random variable $\omega(h)$ with an expectation that accounts for rewards/costs inherent to the problem. For example, if $\pi$ is the SCBI teaching strategy, then $\pi(\theta)(d) = \frac{M^{\theta,\lambda}_{(d,h)}}{\theta(h)}$, where $h$ is the target hypothesis (Wang et al., 2020b). The expectation term of (3) may be simplified as follows, letting $\tilde{\theta} = M^{\theta,\lambda}_{(d_0,-)}/\lambda(d_0)$, and assuming that the original teaching plan follows the scheme of SCBI:

$$\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d_{0})}}[R(\theta)]=\sum_{d_{1},\ldots,d_{N}\in\mathcal{D}}\tau_{d_{N}}(\tilde{\theta})\,\tau_{d_{N-1}}(T_{d_{N}}(\tilde{\theta}))\cdots\tau_{d_{1}}(T_{d_{2}}\circ\cdots\circ T_{d_{N}}(\tilde{\theta}))\,R(T_{d_{1}}\circ\cdots\circ T_{d_{N}}(\tilde{\theta})).\tag{7}$$

This formula will be useful for simulations, as it allows us to replace the integral arising from the expectation $\mathbb{E}_{\Psi^N_\pi\delta_{(d)}}[R]$ by a finite (though growing exponentially in the number of steps planning ahead) sum. The teacher, when planning a curriculum consisting of finitely many data points, can only guide the learner to a finite number of distributions on hypotheses.

## 3.1 Optimal Policy

We may compute the optimal policy as follows:

$$\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R(\theta_{t})\right]=\sum_{t=0}^{\infty}\gamma^{t}R(\theta_{t})\,P_{d(t)}(\theta_{t},\theta_{t+1})=\sum_{t=0}^{\infty}\gamma^{t}R(\theta_{t})\,\lambda(d(t))\,\delta_{\theta_{t+1},T_{d(t)}(\theta_{t})}\,\pi_{d(t)}(\theta_{t})\tag{8}$$

Assuming that $\lambda$ is uniform, i.e. the teacher has no underlying bias with regards to the data, and also using the fact that $\delta_{\theta_{t+1},T_{d(t)}(\theta_t)}$ is only nonzero when $T_{d(t)}(\theta_t) = \theta_{t+1}$, this becomes:

$$=\frac{1}{|\mathcal{D}|}\sum_{t=0}^{\infty}\gamma^{t}R\left(T_{d(t-1)}\circ T_{d(t-2)}\circ\cdots\circ T_{d(0)}(\theta_{0})\right)\pi_{d(t)}\left(T_{d(t-1)}\circ T_{d(t-2)}\circ\cdots\circ T_{d(0)}(\theta_{0})\right)\cdot$$
$$\cdot\,\pi_{d(t-1)}\left(T_{d(t-2)}\circ\cdots\circ T_{d(0)}(\theta_{0})\right)\cdots\pi_{d(0)}(\theta_{0})$$

The optimal policy which maximizes the expectation term in Equation 8 is therefore the argmax over all possible functions $d : \mathbb{N} \to \mathcal{D}$, where the action at step $t$ is $d(t-1)$, and the argmax over functions $\pi : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$. Note that Equation 8 is the same as Equation 7 with a factor of $\frac{1}{|\mathcal{D}|}$ in front, $N \to \infty$, reverse indexing, and a discount factor $\gamma \in [0,1]$. In particular, this implies that taking an argmax over the distribution in Equation 3 gives a finite horizon approximation of the optimal policy. We may state this more formally as follows:

Theorem 1 *The optimal policy is given by the following:*

$$\lim_{\gamma\to1}\;\operatorname*{argmax}_{\pi}\,\operatorname*{argmax}_{d:\mathbb{N}\to\mathcal{D}}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R\left(\theta_{t}\right)\right]=\operatorname*{argmax}_{\pi}\;\lim_{N\to\infty}\;\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\pi^{N}(\theta_{0})\tag{9}$$

Proof: Note that normalization does not affect which entries of $\pi^N(\theta)$ are larger than others, so we have:

$$\lim_{N\to\infty}\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\pi_{d(N-1)}^{N}(\theta_{0})=\lim_{N\to\infty}\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\left(M_{(d(N-1),h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d(N-1))}}[R]+C\right)$$
$$=\lim_{N\to\infty}\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\left(M_{(d(N-1),h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d(N-1))}}[R]\right)$$
Figure 2: Analysis of convergence as before using reward $R(\theta) = 10\cdot d_{L^1}(\theta, \delta_{h_{\text{bad}}})-d_{L^1}(\theta, \delta_{h_{\text{true}}})$. On average, when the teacher selects actions probabilistically **(upper left)** or deterministically (argmax) **(upper right)**, curricula yield worse outcomes. The average reward is shown in **(lower)**. Note that the reward increases in the number of steps planning ahead (> 1 steps), and slowly approaches the reward of no planning. Planning ahead leads to misalignment between learners and teachers when the teacher attempts to avoid an undesirable hypothesis. Note that further planning ahead reduces misalignment, consistent with the fact that planning sufficiently far ahead is equivalent to not planning at all.

$$=\lim_{N\to\infty}\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\left(T_{d(N-1)}(\theta)(h)\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d(N-1))}}[R]\right).$$

Because $T_{d(N-1)}(\theta)(h)$ is multiplied by every term of the expectation sum, we obtain that the above is equal to:

$$=\lim_{N\to\infty}\operatorname*{argmax}_{d:\{0,\ldots,N-1\}\to\mathcal{D}}\left(\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d(N-1))}}[R]\right)=\operatorname*{argmax}_{d:\mathbb{N}\to\mathcal{D}}\,\lim_{\gamma\to1}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}R(\theta_{t})\right].$$

Taking the argmax over policies $\pi : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ then yields the result.

The policy $\pi_{d(t)}(\theta_0)$ which satisfies the argmax above at time step $t$ should be the Dirac distribution $\delta_{d(t)}$, where $d(t)$ is the optimal choice of datum at time $t$. We may therefore obtain an estimate of the optimal policy by taking the argmax of $\pi^N$ as $N$ becomes progressively larger.

SCBI vs SCBI with planning. When preparing a curriculum, if the teacher does not plan ahead, they may base their teaching strategy for each round on Bayesian inference. This amounts to an initial teaching strategy of SCBI. However, if the same Bayesian teacher plans a curriculum which includes infinitely many data points, and uses the random variable $R(\theta) = \theta(h_{\text{target}})$ in their planning in order to effectively sample data points leading the learner to a stronger belief in the true hypothesis, the usual process of SCBI is recovered.

Theorem 2 *Suppose that* $\pi : \mathcal{P}(\mathcal{H}) \to \mathcal{P}(\mathcal{D})$ *is a teaching plan and* $R$ *is a reward/cost such that for every* $d \in \mathcal{D}$, $\lim_{N\to\infty}\mathbb{E}_{\Psi^N_\pi\delta_{(d)}}[R]$ *exists, and is independent of* $d$. *Then* $\pi^\infty_R := \lim_{N\to\infty}\pi^N_R$ *exists and is equal to the SCBI teaching strategy.*

Proof: Let $\theta \in \mathcal{P}(\mathcal{H})$ and $d \in \mathcal{D}$, and suppose that the teacher is teaching hypothesis $h \in \mathcal{H}$. To ease the readability, we will drop the subscript $R$ on the teaching plan. Then:

$$\lim_{N\to\infty}\pi^{N}(\theta)(d)=\lim_{N\to\infty}\frac{M_{(d,h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d)}}[R]}{\sum_{d'\in\mathcal{D}}M_{(d',h)}^{\theta,\lambda}\,\mathbb{E}_{\Psi_{\pi}^{N}\delta_{(d')}}[R]}=\frac{M_{(d,h)}^{\theta,\lambda}}{\sum_{d'\in\mathcal{D}}M_{(d',h)}^{\theta,\lambda}}=\frac{M_{(d,h)}^{\theta,\lambda}}{\theta(h)}\tag{10}$$

This is the formula used in the standard SCBI teaching strategy at each round of learning.

Corollary 3 *With* $\pi$ *the standard SCBI teaching strategy and* $R(\theta) = \theta(h)$, *where* $h$ *is the target hypothesis, planning infinitely far ahead is identical to not planning ahead at all.*

Proof: By the proof of ((Wang et al., 2020a), Theorem 3.5), $\lim_{N\to\infty}\mathbb{E}_{\Psi^N_\pi\mu}[\theta(h)] = 1$ for any Borel measure $\mu \in \mathcal{P}(\mathcal{P}(\mathcal{H}))$. The claim then follows from Theorem 2 above.

The corollary above says that the limit of planning infinitely far ahead using SCBI is identical to SCBI with no planning ahead!
Intuitively, this makes sense: in the infinite limit, the teacher is considering all possible infinite strings of data to pass to the learner; however, most of the strings will be indistinguishable as the learner approaches a particular belief on the true hypothesis, and so only the short-term behaviour of the strings is important. Furthermore, the proof in Wang et al. (2020b) also shows that the convergence is monotone increasing; this implies that not planning ahead is the optimal teaching strategy when the reward at each round is equal to the probability that the learner would select the target hypothesis out of all hypotheses.

Figure 1 compares planning ahead up to four steps for two hypotheses and three data points, using $R(\theta) = \theta(h)$, with $h$ the hypothesis to be taught, assuming actions are selected probabilistically. The vertical axis is the learner's probability of choosing $h$ as the correct hypothesis if they were to randomly guess, while the horizontal axis represents the number of rounds of teaching/learning. The plot was created by randomly generating 1500 initial joint distributions, performing 50 rounds of teaching-learning with each 30 times, then averaging over the learner's performance. An interesting feature of planning ahead with $R(\theta) = \theta(h_{\text{target}})$ is that if the teacher uses the deterministic procedure of argmax instead of sampling from the distribution to choose data to pass to the learner, the choice is the same for every number of steps planning ahead; i.e., in this case, local maximization is the same as global maximization (see Figure 1).

Other random variables are available to use in the expectation: for example, if there is one particular hypothesis which the teacher wishes to avoid, while biasing the learner toward the true hypothesis, the random variable could be $R(\theta) = 10 \cdot d_{L^1}(\theta, \delta_{h_{\text{bad}}}) - d_{L^1}(\theta, \delta_{h_{\text{true}}})$, where $d_{L^1}(\cdot,\cdot)$ is the $L^1$ distance between $\theta$ and $\delta_h$ represented as points of a simplex. In this case, there is non-monotonic behaviour, as the teacher overcompensates for trying to move the learner away from the 'bad' hypothesis, which subsequently may lead the learner closer to a neutral hypothesis than to the true hypothesis. See Figure 2, in particular the trajectories corresponding to planning ahead one step and two steps, where the probability that the learner selects the true hypothesis decreases. For a comparison with a naive Bayesian learner, see Section 1 of the supplementary materials.

Guaranteed alignment via monotonicity. One of the key results of (Wang et al., 2020b) is the consistency of SCBI. Consistency here refers to the convergence in expectation of the learner's belief distribution to a Dirac distribution on the target hypothesis over the infinite limit of teaching rounds. In particular, Wang et al. (2020b) show that if monotonicity holds, i.e. $\mathbb{E}_{\Psi_\pi\mu}[\theta(h)] - \mathbb{E}_\mu[\theta(h)] > 0$, where $h$ is the target hypothesis, then consistency and hence alignment follows. By writing out the equation above for monotonicity with $\pi$ replaced by $\pi^N_R$, for some choice of rewards/costs $R$, we obtain a condition for monotonicity and hence for alignment. Writing out the equations for $\mathbb{E}_{\Psi_{\pi^N_R}\mu}[\theta(h)]$ and $\mathbb{E}_\mu[\theta(h)]$, we may obtain an explicit condition for when $\mathbb{E}_{\Psi_{\pi^N_R}\mu}[\theta(h)] \geq \mathbb{E}_\mu[\theta(h)]$. The resulting condition is stated as Theorem 4 below.
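As a sanity check, the monotonicity gap in the condition of Theorem 4 can be estimated numerically. The following is a minimal Monte Carlo sketch under our own assumptions (a Dirichlet choice for $\mu$, the `sinkhorn` helper from the first sketch, and a randomly generated joint distribution); it is an illustration, not part of the paper's experimental code.

```python
import numpy as np

def monotonicity_gap(M, lam, h, theta_samples, plan):
    """Monte Carlo estimate of E_mu[ sum_d pi_d(theta) * (T_d(theta)(h) - theta(h)) ],
    the quantity required to be positive in Theorem 4. mu is approximated by
    the empirical distribution of theta_samples."""
    gaps = []
    for theta in theta_samples:
        S = sinkhorn(M, lam, theta)              # M^{theta, lambda}
        pi = plan(S, theta)                      # the plan's distribution over data
        gap = 0.0
        for d in range(len(lam)):
            T_d_theta = S[d, :] / S[d, :].sum()  # T_d(theta)
            gap += pi[d] * (T_d_theta[h] - theta[h])
        gaps.append(gap)
    return float(np.mean(gaps))

h = 0                                            # target hypothesis
scbi_plan = lambda S, theta: S[:, h] / theta[h]  # tau_d(theta) for the SCBI plan

rng = np.random.default_rng(0)
thetas = rng.dirichlet(np.ones(2), size=1000)    # samples approximating mu
M, lam = rng.random((3, 2)) + 0.1, np.ones(3) / 3
print(monotonicity_gap(M, lam, h, thetas, scbi_plan))  # should be positive for SCBI
```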
Theorem 4 *Monotonicity holds if and only if for any* $\mu \in \mathcal{P}(\mathcal{P}(\mathcal{H}))$:

$$\int_{\Delta}\sum_{d}\pi_{d}(\theta)\cdot(T_{d}(\theta)(h)-\theta(h))\,d\mu(\theta)>0\tag{11}$$

Proof: Throughout this proof, we rewrite $\pi(\theta)(d)$ as $\pi_d(\theta)$ for ease of readability. Expanding $\mathbb{E}_{\Psi_\pi\mu}[\theta(h)] - \mathbb{E}_\mu[\theta(h)]$ using the definition of $\Psi_\pi$ yields:

$$\int_{\Delta}\sum_{d}\pi_{d}\left(T_{d}^{-1}(\theta)\right)\theta(h)\,d(T_{d}^{*}\mu)(\theta)-\int_{\Delta}\theta(h)\,d\mu(\theta)$$
$$=\sum_{d}\int_{T_{d}^{-1}(\Delta)}\pi_{d}(\theta)\,T_{d}(\theta)(h)\,d\mu(\theta)-\int_{\Delta}\theta(h)\,d\mu(\theta)$$
$$=\int_{\Delta}\sum_{d}\pi_{d}(\theta)\cdot\left(T_{d}(\theta)(h)-\theta(h)\right)d\mu(\theta)$$

To get from the penultimate line to the final line, we use the fact that $T_d : \Delta \to \Delta$ is a homeomorphism of the simplex.

From this equation, we can see that if the expectation of $R$ under $\delta_{M^{\theta,\lambda}_{(d,-)}/\lambda(d)}$ is overly negative for a datum $d$ for which $T_d(\theta)(h) > \theta(h)$, while $T_{d'}(\theta)(h) < \theta(h)$ for the other data, monotonicity will be broken. This implies that if the teacher places a heavy cost on a hypothesis lying close in belief-space to the target hypothesis, the curriculum of the teacher may over-emphasize moving away from the heavy-cost hypothesis, at the expense of potentially converging to a more neutral hypothesis. Here two hypotheses $h_1$ and $h_2$ 'lying close in belief-space' means that with respect to the initial joint distribution $M$, the $L^1$ distance on the simplex $\mathcal{P}(\mathcal{D})$ between $M_{(-,h_1)}$ and $M_{(-,h_2)}$ is small. This implies that whatever data the teacher passes to the learner will affect both hypotheses similarly.

## 4 Examples And Simulations

Figure 3: **(top)** GridWorld on a cylinder, with the entire top and bottom edges of the rectangle glued together. The grey region is hypothesis C, which is an undesirable hypothesis, and the target hypothesis is A. **(bottom)** Analysis of convergence to the target hypothesis, A, as a function of number of steps of planning ahead (0-4) for probabilistic action selection. Curricula yield marginal gains.

All simulations were run on a laptop computer with an AMD Ryzen 5 3550H with Radeon Vega Mobile Gfx 2.10 GHz processor or a macOS Mojave with 3.6 GHz Intel Core i7.

GridWorld on a cylinder. Suppose we have a grid on a cylinder, as in Figure 3 (top), where the top and the bottom edge have been glued together. In this problem, the teacher is attempting to direct an agent to the location A, while the location C, taking up three grid squares, represents a particularly undesirable location. In this case, the hypotheses correspond to the locations A, B, C, so $|\mathcal{H}| = 3$, and the data correspond to the four possible directions in the grid (up, down, left, right), so $|\mathcal{D}| = 4$. Starting from location X, the teachers who plan ahead out-perform the teacher who does not plan ahead on average, as shown in Figure 3 (bottom).

When the teacher attempts to bias the agent away from C and toward A by using the random variable $R(\theta) = 10 \cdot d_{L^1}(\theta, \delta_C) - d_{L^1}(\theta, \delta_A)$ in planning, the agent is more likely to converge to the neutral hypothesis, as there is more incentive to move away from C than to head toward A, so the agent gets closer to B. This is shown in Figure 4 (left) and (right).

Figure 4: Analysis of convergence of the learner's beliefs for the random variable $R(\theta) = 10 \cdot d_{L^1}(\theta, \delta_C) - d_{L^1}(\theta, \delta_A)$ on the cylinder gridworld with a teacher that probabilistically selects data.
**(left)** Probability that the learner guesses the correct location, A, the target hypothesis, versus # teaching/learning rounds. **(right)** Probability that the learner incorrectly guesses that the location is B, the neutral hypothesis, versus # teaching/learning rounds. Avoiding the undesirable hypothesis, C, results in misalignment such that the teacher's examples lead the learner to the incorrect hypothesis B when planning ahead.

Figure 5: Analysis of the robustness of curriculum learning to errors in the teacher's estimation of the learner's prior, assuming reward $R(\theta) = \theta(h)$, with three hypotheses and four data points. Probability that the learner infers the correct hypothesis versus # teaching/learning rounds. **(upper left)** Average perturbation of size .001 on the teacher's estimation of the learner's prior, with the teacher sampling probabilistically. **(upper right)** Average perturbation of size .1 on the teacher's estimation of the learner's prior, with the teacher sampling probabilistically. **(lower left)** Average perturbation of size .001 on the teacher's estimation of the learner's prior, with the teacher using argmax to select data. **(lower right)** Average perturbation of size .1 on the teacher's estimation of the learner's prior, with the teacher using argmax to select data. Longer curricula lead to greater misalignment due to propagation of error, and the misalignment is more severe with larger error.

Robustness comparisons. To compare robustness, we perturbed the teacher's estimation of the learner's prior at each round of teaching/learning. As shown in Figure 5, planning ahead is typically less robust than not planning ahead. Each step of planning ahead relies upon estimating the learner's beliefs, which compounds the overall error at each step of teaching/learning. In particular, Figure 5 shows that as the error increases, the convergence of the teachers who are planning ahead slows down, and that teachers who plan further ahead are less robust to perturbations in general.

Figure 6: Convergence analysis when the teacher accurately (solid lines) or inaccurately (dashed lines) estimates the learner's joint distribution when the curriculum varies in the number of steps planning ahead (0-4) with probabilistic data selection. Accurate estimates result in robust convergence across curricula. In contrast, inaccurate estimates result in severe alignment problems, which are worse for longer curricula.

As another test of robustness, we compare runs with two different initial joint distributions. This corresponds to the teacher and learner not having a shared knowledge of the relationships between data and hypotheses at the start. Another way to think of this is that one of the agents may have a mis-estimation of the relationship between data and hypotheses. Figure 6 shows that none of the tested strategies are robust with respect to an initial perturbation in the joint distribution. Section 2 of the supplementary materials contains simulations detailing convergence for various sizes of data/hypothesis spaces.

## 5 Related Work

The mathematical theory of cooperative communication as optimal transport has been explored in (Wang et al., 2020b), (Yang et al., 2018), and (Wang et al., 2019). Sequential cooperation as utilized in this paper, including proofs of stability and rates of convergence, may be found in (Wang et al., 2020a).
Cooperative inference and reinforcement learning appear together in several contexts. For example, cooperative inverse reinforcement learning is centered around cooperative inference (Hadfield-Menell et al., 2016; Fisac et al., 2020), and cooperative inference and reinforcement learning often appear in the coordination of multiple agents, e.g. (Pesce and Montana, 2020). Milli and Dragan (2020) consider misalignment in terms of levels of theory-of-mind recursion. Our work differs in considering the problem of developing curricula to foster learning, and offers novel theoretical and simulation-based insights into the possibility of misaligned curricula.

In curriculum learning (Elman, 1993), a learner is presented with carefully chosen examples or tasks, often increasing in complexity and carefully curated by the teacher. Such strategies are used in human cognition when teaching-learning complex tasks (Khan et al., 2011), and curriculum learning is a useful method in machine learning for gradually revealing complex concepts by slowly building up from simpler components (Bengio et al., 2009). Curriculum learning is used, for example, in neural networks in order to maximize learning efficiency (Graves et al., 2017), and in multitask learning (Pentina et al., 2015). Curriculum learning has been incorporated into the framework of a partially observable Markov decision process in Matiisen et al. (2020); Narvekar and Stone (2018). In Matiisen et al. (2020), the student is not directly observable by the teacher and the actions of the teacher correspond to teaching the learner on a certain task for a specific number of iterations. In Narvekar and Stone (2018), curriculum design is viewed as transfer learning, and the problem of learning a meta-policy is addressed. Our approach is distinctive in focusing on the alignment problem and offering strong mathematical proofs.

It is attractive to consider curricula that actively avoid common misconceptions. We are not aware of prior work in curriculum learning that does so, but it is common in reinforcement learning to consider worlds with negative reward, which is equivalent. In education, dealing with misconceptions most typically occurs through direct negation; for example, by including incorrect examples for students to correct (Heemsoth and Heinze, 2014) and through refutation texts (Tippett, 2010). Evidence suggests that such efforts are only effective when students know enough about the domain, and are most effective at avoiding the misconception rather than inducing the correct belief. Thus, our findings regarding lack of monotonicity and misalignment are broadly consistent.

## 6 Conclusion

We investigate the possibility of alignment problems in curriculum learning by building on recent theoretical advances in modeling cooperation. Recasting sequential cooperative Bayesian inference (SCBI) as a Markov decision process (MDP) enables the inclusion of curriculum planning for a teacher and learner cooperatively learning over multiple rounds of interaction. Through theoretical and simulation-based analysis, we show that curriculum planning introduces brittleness that leads to misalignment when the curriculum introduces additional costs, for example when avoiding a misconception, or when the teacher has imperfect knowledge of the learner. SCBI without curriculum planning offers competitive rates of convergence and desirable theoretical guarantees across our theoretical and simulation analyses. We also show that under simple assumptions on the reward/cost, e.g.
taking the reward to be the probability that the learner will select the correct hypothesis, myopia appears to be optimal: planning ahead can actually decrease the rate of convergence, while converging to no planning as the number of steps ahead tends to infinity. Although this paper has primarily focused on the SCBI framework of cooperative communication, it is feasible that similar results hold for other MDP models of curriculum learning. Namely, naive choices of rewards/costs, which may at face value seem reasonable, could result in misalignment or slower convergence, and are a potential pitfall for those working in reinforcement and curriculum learning. Future work should consider whether other curriculum learning approaches can offer similar theoretical and practical guarantees of alignment.

## Broader Impact

Our work is on alignment problems in curriculum learning. We have identified a new alignment problem, which may be of longer-term benefit if methods for addressing it are developed. There is no obvious reason why people may be put at a disadvantage by this research, and it is unclear what the long-term consequences of this work will be.

## References

Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In L. Bottou and M. Littman, editors, *Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009)*, pages 41–48, New York, NY, 2009. Association for Computing Machinery.

B. Christian. *The Alignment Problem: Machine Learning and Human Values*. WW Norton & Company, 2020.

J. L. Elman. Learning and development in neural networks: the importance of starting small. *Cognition*, 48:71–99, 1993.

J. F. Fisac, M. A. Gates, J. B. Hamrick, C. Liu, D. Hadfield-Menell, M. Palaniappan, D. Malik, S. S. Sastry, T. L. Griffiths, and A. D. Dragan. Pragmatic-pedagogic value alignment. In N. Amato, G. Hager, S. Thomas, and M. Torres-Torriti, editors, *Robotics Research. Springer Proceedings in Advanced Robotics, vol 10*, pages 49–57, Switzerland, 2020. Springer, Cham.

A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu. Automated curriculum learning for neural networks. In D. Precup and Y. W. Teh, editors, *Proceedings of the 34th International Conference on Machine Learning (ICML 2017)*, pages 1311–1320, online, 2017. JMLR.org.

D. Hadfield-Menell, S. Russell, P. Abbeel, and A. D. Dragan. Cooperative inverse reinforcement learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, *Advances in Neural Information Processing Systems (NeurIPS 2016)*, pages 3909–3917, Red Hook, NY, 2016. Curran Associates Inc.

T. Heemsoth and A. Heinze. The impact of incorrect examples on learning fractions: A field experiment with 6th grade students. *Instructional Science*, 42(4):639–657, 2014.

F. Khan, J. Zhu, and B. Mutlu. How do humans teach: on curriculum learning and teaching dimension. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, editors, *Advances in Neural Information Processing Systems 24 (NIPS 2011)*, pages 1449–1457, Red Hook, NY, 2011. Curran Associates Inc.

T. Matiisen, A. Oliver, T. Cohen, and J. Schulman. Teacher-student curriculum learning. *IEEE Transactions on Neural Networks and Learning Systems*, 31(9):3732–3740, 2020.

S. Milli and A. D. Dragan. Literal or pedagogic human? analyzing human model misspecification in objective learning. In *Uncertainty in Artificial Intelligence*, pages 925–934. PMLR, 2020.

S. Narvekar and P. Stone. Learning curriculum policies for reinforcement learning.
*arXiv preprint* arXiv:1812.00285, 2018.

A. Pentina, V. Sharmanska, and C. H. Lampert. Curriculum learning of multiple tasks. In L. O'Conner, editor, *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015)*, pages 5492–5500, Piscataway, NJ, 2015. Conference Publishing Services.

E. Pesce and G. Montana. Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication. *Machine Learning*, 109:1727–1747, 2020.

S. Russell. *Human Compatible: Artificial Intelligence and the Problem of Control*. Penguin, 2019.

C. D. Tippett. Refutation text in science education: A review of two decades of research. *International Journal of Science and Mathematics Education*, 8(6):951–970, 2010.

J. Wang, P. Wang, and P. Shafto. Sequential cooperative Bayesian inference. In H. Daumé III and A. Singh, editors, *Proceedings of Machine Learning Research (PMLR 2020)*, pages 10039–10049, online, 2020a. PMLR.

P. Wang, P. Paranamana, and P. Shafto. Generalizing the theory of cooperative inference. In K. Chaudhuri and M. Sugiyama, editors, *Proceedings of Machine Learning Research (PMLR 2019)*, pages 1841–1850, online, 2019. PMLR.

P. Wang, J. Wang, P. Paranamana, and P. Shafto. A mathematical theory of cooperative communication. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, *Advances in Neural Information Processing Systems Pre-proceedings (NeurIPS 2020)*, online, 2020b. PMLR.

S. C. H. Yang, Y. Yu, A. Givchi, P. Wang, W. K. Vong, and P. Shafto. Optimal cooperative inference. In A. Storkey and F. Perez-Cruz, editors, *Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018)*, pages 376–385, online, 2018. PMLR.
Review 1: Summary: This paper investigates an extended version of the problem setting of Sequential Cooperative Bayesian Inference (SCBI), in which a teacher and a student model each other's beliefs based on shared knowledge of a joint distribution over a finite number of hypotheses and data points. Specifically, the authors extend this setting to include (1) the notion of a reward/cost function (of the hypothesis distribution of the student at each time), and (2) the ability to plan ahead for some (potentially infinite) number of steps. The authors prove theoretical properties of this setting, notably (1) a monotonicity condition that guarantees the student converging to the intended (or "aligned") hypothesis, and (2) a proof that when the reward at each time is equal to the probability of the student choosing the correct hypothesis, planning with an infinite time horizon is equivalent to no planning. The authors also present simulations supporting their theoretical conclusions.

Strengths and Weaknesses:

### Strengths
- This paper looks at an important problem in systems in which agents facilitate the learning of other agents, e.g. curriculum learning.
- The paper provides theoretically-grounded conclusions about a specific setting of cooperative inference.
- The paper provides experimental results that largely support the theoretical conclusions.

### Weaknesses
- The largest weakness of this work is the disconnect between the problem setting studied and the broader problem setting initially motivated in the introduction, which is the setting of general teacher-student curricula, e.g. in supervised and reinforcement learning. The problem setting studied is limited to small, finite tabular SCBI problems. The extension of the theoretical results and algorithms studied in this setting to deep supervised or reinforcement learning (RL) settings, as mentioned in the motivating passages, remains unclear. Such an extension seems nontrivial as in the more general supervised and reinforcement learning settings (over the kinds of POMDPs studied in the deep RL literature), there is not a finite hypothesis space, and generally neither the teacher nor student have access to the joint distribution of hypotheses and data. **It is thus unclear whether any of the theoretical insights regarding the extended SCBI setting studied in this work carries over to the motivated problem setting. Therefore, I believe the current framing of the paper to overclaim the significance of its contributions.**
- The implications of the experimental results are not clearly communicated in the context of the theoretical contributions. In parts, the results also seem somewhat inconsistent, and this inconsistency is not fully explained: For example, in Figure 1, the caption states that "On average…longer curricula yield a marginal difference," but the corresponding plot shows that the probability of the student choosing the target hypothesis seems to decrease with a longer planning horizon. This marginal difference is first explained as planning not making a large difference when the reward at each time is equal to the probability of the learner selecting the target hypothesis. However, later in Section 4's GridWorld results (Figure 3), the performance (in terms of probability of learner choosing target hypothesis) seems to consistently increase with the planning horizon.
- Throughout the paper, the theory can be better connected to easier-to-grok intuitive explanations.
This would be particularly helpful for some of the concepts first defined in the Notation section, e.g. Sinkhorn normalization, as well as optimal transport. Intuitive grounding of these concepts and how they relate to the SCBI setting studied would benefit the work, as these concepts seem to straddle separate and somewhat disjoint communities of research interests (which, on the flip side, is an aspect of the work that makes it interesting).
- The "shared joint distribution" aspect of the SCBI setting studied should be emphasized and clearly stated upfront.

Requested Changes:
- Given the large gap between the finite SCBI setting studied and the motivated general setting of a teacher agent facilitating the learning of a student agent, I think the authors should change the title of the paper to be less general and more specific to the setting of study, e.g. "The Alignment Problem in Cooperative Inference."
- The discussion of prior work on curriculum learning feels a bit one-dimensional. It only mentions a few, now slightly outdated results, without mentioning more recent works around both minimax and minimax-regret curricula that are theoretically grounded in decision theory (see list of suggested references). This makes the comment "Theoretical results are limited" in the introduction inaccurate:

*Example of minimax curricula for robust RL:*
- Pinto, Lerrel, et al. "Robust adversarial reinforcement learning." *International Conference on Machine Learning*. PMLR, 2017.

*Examples of minimax-regret curricula for zero-shot transfer in RL:*
- Dennis, Jaques, et al. "Emergent complexity and zero-shot transfer via unsupervised environment design." *Advances in Neural Information Processing Systems* 33 (2020): 13049-13061.
- Jiang, Grefenstette & Rocktäschel. "Prioritized level replay." *International Conference on Machine Learning*. PMLR, 2021.
- Jiang, Dennis et al. "Replay-guided adversarial environment design." *Advances in Neural Information Processing Systems* 34 (2021): 1884-1897.
- Parker-Holder, Jack, et al. "Evolving Curricula with Regret-Based Environment Design." PMLR 2022.
- Lanier et al. "Feasible adversarial robust reinforcement learning for underspecified environments." *arXiv preprint arXiv:2207.09597* (2022).

- Further, the introduction mentions that prior works "have not systematically considered the possibility of alignment problems or their solutions." This is also factually inaccurate. Two works that consider this in both the deep RL setting and the supervised learning setting are referenced below:
- Jiang, et al. "Grounding Aleatoric Uncertainty for Unsupervised Environment Design." NeurIPS 2022.
- Kirsch, Rainforth & Gal. "Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers." *arXiv preprint arXiv:2106.11719* (2021).
- Moreover, curricula are used for many different purposes (e.g. to improve transfer to a known target task as in the work of Narvekar et al; to improve sample efficiency as in Matiisen et al; to improve adversarial robustness as in Pinto et al; and to improve zero-shot transfer to unknown test-time tasks as in Dennis et al). Thus, the authors should avoid referencing prior work in curriculum learning as a monolithic body of work.
- At a high level, these requests for changes stem from the paper overclaiming the relevance and contributions of the study presented. The contributions of the paper would be better conveyed if the authors contextualized their specific setting more precisely with respect to prior works and grounded their discussion in the context of their specific problem setting.

Broader Impact Concerns: No concerns.

==================================================

Review 2: Summary: This paper examines the problem of sequential cooperative inference, wherein a teacher presents data to a learner in order to teach the learner a hypothesis, and re-casts the problem in the context of RL where the aim is to get the learner to identify a goal state. The paper presents theoretical results regarding the conditions for alignment (i.e., where the learner successfully recovers the hypothesis) and examines the effect of planning (or not) by the teacher. The paper demonstrates that not planning at all is equivalent to planning infinitely far in the future, and that when using short-term planning, certain reward functions can actually result in misalignment between the teacher and learner.

Strengths and Weaknesses:

Pros:
- studies an important problem in multi-agent RL and human-agent interaction
- clear theoretical results plus simulation results

Cons:
- lack of intuition provided for some of the results
- the results are dependent on some strong assumptions, such as having perfect knowledge of the learner and the learner being optimally Bayesian
- mathematical notation is difficult to parse
- some details missing

Requested Changes:
1. My biggest question/concern is around the intuition for some of the results. Despite reading through the methods several times, I'm still unclear on what the explanation is for why planning doesn't help, and in some cases, hurts. Some thoughts:
- I guess the result that planning infinitely long is equivalent to not planning at all can be explained by saying that the SCBI teaching strategy with $R(\theta) = \theta(h)$ is already the optimal policy for the teacher, so planning cannot improve on it. It would be nice if you could say this explicitly. I think this is implied by first presenting the optimal policy as a function of $N$, and then showing that as $N\rightarrow \infty$ you recover the SCBI teaching strategy. But there are two things missing: first, explicitly saying "and therefore the SCBI teaching strategy *is* the optimal policy". Second, it would be helpful to also show that planning with $N=0$ is equivalent to SCBI as well.
- I still don't fully understand the reason why planning would hurt for short horizons. I guess the point is that this is only true when you have a reward function that penalizes certain belief states, and by planning the teacher believes that the learner will pass through/near these belief states, and so decides to give it different data, which ends up steering it too far away from the correct hypothesis. However, when planning for an infinite horizon, the teacher would realize that the learner ends up in the wrong place and so avoids this strategy. Is that intuition correct? If so, it would be helpful to add this intuition to the paper. If not, could you please clarify (and also add the explanation to the paper).
2. My next concern is about some of the assumptions: that the teacher can perfectly estimate the learner's belief state, and that the learner is an optimal Bayesian learner.
Both of these seem quite strong, and it seems they would not generally hold in practice. Moreover, the results in Figure 6 suggest that violating the first assumption is a big problem (though it's unclear to me how big of a perturbation there was in generating this Figure). It would be helpful to provide some discussion of this, and even better if you could justify why these are reasonable assumptions (e.g. are there some real-world settings for which they would hold?).
3. Finally, I am not an expert in ML/probability theory, and I found the math to be quite dense and difficult to parse. This made it difficult for me to evaluate the correctness of the math. In several cases the paper seems to rely on mathematical concepts that I don't think are common knowledge for all ML researchers, such as concepts from measure theory. This is a shame, because I believe it limits the accessibility of the paper to researchers from areas such as HRI/HCI where it is very relevant but where researchers might not be familiar with the terminology. It would be helpful to provide more accessible explanations and perhaps even leave some of the nitty-gritty details to the supplementary material.

Other questions/concerns:
- The abstract says "we prove that infinite length plans are equivalent to not planning under certain assumptions on the method of planning", but as far as I could tell there was not any discussion of the planning method in the main text. Indeed, the fact that there was no discussion of the method of planning feels like a shortcoming. I assume you are using something like a breadth-first search in the experiments but it would be good to state this explicitly.
- The abstract says that "planning ahead yields efficiency at the cost of effectiveness". What does efficiency mean in this context? There is no discussion of efficiency in the main text. Do you mean data efficiency, wall clock time, etc.? But for any of these choices, it seems like your results do not suggest planning is more efficient? So I am confused by this statement.
- I am not sure what exactly Figure 2 (lower) corresponds to: is it the reward for selecting actions deterministically or probabilistically? The caption could be clearer in this regard.
- Please include details of the experiment shown in Figure 6. What exactly does the "inaccurate" estimate refer to? How is the joint distribution perturbed, exactly?

Smaller issues:
- The font for the tick labels in the figures should be larger.
- argmax is rendered incorrectly in Equation 9
- In Theorem 2, X does not seem to be used. Should it be R?

Broader Impact Concerns: No concerns.

==================================================

Review 3: Summary: The paper investigates the concept of Sequential Cooperative Bayesian Inference (SCBI), a student-teacher paradigm in which the teacher aims to convey a belief to a student by communication via data on which the student performs inference. More precisely, the authors investigate whether SCBI, which has been previously modeled as a Markov chain, benefits from a formulation as a Markov decision process in which an optimal teaching policy can be planned for a given reward objective. Using the MDP formulation, the authors introduce a multi-step planning policy that uses the SCBI policy to plan multiple "teaching steps" ahead and investigate its behavior. They show that the multi-step planning policy converges to the SCBI policy in the limit of infinite planned teaching steps.
Further, they investigate both empirically and theoretically how different reward functions can affect the consistency of learning. By doing this, they show that penalizing inference of undesired hypotheses can prevent the desired hypothesis from being learned.

Strengths and Weaknesses: In my [previous review](https://openreview.net/forum?id=tIdBsdyxG3&noteId=HCaPJDnJkW), my main concerns were focused on clarity of the paper due to sometimes unnecessarily complex notation, as well as the comparison of the approach to a properly computed optimal policy. I will go through the list of changes indicated by the authors and highlight how these changes affect my previous concerns:

* **Planning ahead as an approximation of the optimal policy has been emphasized:** I was not able to find an additional emphasis on an optimal policy in the paper. I compared the new submission with the latest revision of the rejected paper. Could the authors please clarify where to find this additional emphasis?
* **On page 2, in the 'Notation' section, 'row' and 'column' have been interchanged** + **In Equation 3, an intermediate equation with the same meaning but more confusing notation was removed:** While any fixes of typos and improvements in notation are welcomed, these two changes are very minor and do not address the (in my opinion) unnecessarily complicated notation for e.g. the MDP transition dynamics (see my initial review for a more detailed discussion).
* **Figure 2 has been updated to include the maximum number of planning ahead steps (seven steps ahead):** The change in Figure 2 is highly welcomed. However, the additional visualization of the achieved reward raises multiple questions on my side (ordered from most important to least):
  * The rewards of the "planning-ahead" policies raise doubts about the correctness of the results. For any MDP, an optimal policy with a planning horizon of N time-steps should achieve the highest possible cumulative reward over these N time-steps. However, the "no-planning" policy achieves a significantly higher reward for N=1-7. This sub-optimality of the planned policy indicates that there is either a mismatch between the MDP used for planning and for evaluation, or that the optimal policy is not properly computed.
  * The maximum immediate reward is 10, but the maximum values visualized in Figure 2 take on values of 20. I hence assumed that the Figure shows the cumulative reward subject to discounting instead of the immediate reward. However, this assumption is again contradicted by the reward curves starting at a value of 12, which is higher than the maximum possible immediate reward of 10. Can the authors clarify what rewards are shown in Figure 2?
  * The reward definition in the caption of Figure 2 and on page 7 seems to contain a typo. Shouldn't it be $R(\theta) = 10 d_{L_1}(\theta, \delta_{h_{\text{true}}}) - d_{L_1}(\theta, \delta_{h_{\text{bad}}})$? Currently, the reward as defined in the paper increases if the teacher conveys the bad hypothesis.
* **In Figure 3, the dimensions of the subfigures have been adjusted in order to make them more readable:** Again, this change is welcomed but arguably minor. It does not address the main concerns without using the additionally generated free space for additional clarifications.

Requested Changes: The recommendations regarding the improvement of notation in my initial review stand basically unchanged. I see potential in this paper, but its impact is currently limited since it may be hard for a wider audience to digest.
With the inclusion of longer planning horizons and the visualization of the obtained rewards in Figure 2, the authors took a good step towards a more appropriate empirical verification of their theoretical insights. As outlined, I currently see evidence for issues in the computations/experiments and hence would like to see the authors carefully investigate those. This would either result in a clear description of why the observed artifacts are indeed not errors but in line with their theory and existing theory on RL, or an update of the experiments.

Broader Impact Concerns: I do not have any concerns w.r.t. ethical implications of this paper.

==================================================

Review 4: Summary: This paper studies curriculum learning: in what order should data be presented to a learner to accelerate learning? The key idea is to view curriculum learning as a reinforcement learning (RL) problem. The paper studies this idea in a tabular setting, making certain assumptions on how the teacher and learner update their beliefs given the data seen so far. Under this model, the paper finds that doing multi-step reasoning/planning can be no better than acting myopically.

-----------------------------

**Update after reading the other reviews.** I enjoyed reading the other reviews. I agree with Reviewer UFPm that these results seem to have interesting implications for the HRI community, while also agreeing that the lack of intuition (and notational complexity) will likely preclude these results from being recognized in that community. I agree with Reviewer 5t6n that some of the concerns raised in the previous reviewing round have not yet been addressed.

Strengths and Weaknesses:

Strengths
1. I think that the underlying problem is really interesting: when is long-horizon planning/reasoning useful for learning tasks that otherwise would be challenging to learn?
2. It's great that this theoretical paper still contains some experiments to study the theoretical predictions.

Weaknesses:
1. I was a bit confused about the motivation. I'll try to explain the confusion. I tend to think about learning from a Bayesian perspective: as you observe data, you filter out models that are inconsistent with the data, leaving only those models that well explain the data seen so far. Many ML models are exchangeable, in the sense that they assign equal likelihood to datasets regardless of the order in which the data are presented; e.g., an image classifier will assign the same log-likelihood to the ImageNet validation set regardless of whether the "dog" images are presented first or last. It seems like this paper is doing something different, assuming that the order in which data are presented affects the resulting models; I'm not sure this makes sense in most supervised and unsupervised learning settings. I think there might be a good argument for doing this for online, multi-task reinforcement learning: each round of learning would involve collecting data for some task, and that data might be useful for solving different tasks. However, this "RL to design a curriculum for RL" seems to be distinct from what this paper is discussing.
2. The introduction discusses claims that are not made in the later part of the paper. For example, the introduction starts by mentioning how "artificial systems may autonomously facilitate human goals including human learning," but the paper doesn't have any discussion of "human learning" (e.g., do humans do SCBI?).
Similarly, I'm not sure that the paper delivers on the promise to "bridge human and machine learning."
3. The notation is very complex. See, e.g., Eq. 5.

**Does the resubmission address the concerns raised in the previous submission?** I don't think that the resubmission addressed the weaknesses raised in the prior review:
> 1. The paper doesn't include much discussion of whether the result generalizes to other formulations of curriculum learning
>
> 2. The notation is very hard to parse

Requested Changes: The concern about motivation could be partially addressed by clarifying how the proposed analysis relates to prior frameworks for analyzing learning.
1. How does this relate to online learning? There, it seems like the teacher is trying to _minimize_ learning; since existing online learning algorithms have good guarantees against this worst-case teacher, why do we need to design new learning algorithms that work with "easier" teachers?
2. Can we interpret the proposed method as "steering" the optimization path? What are settings where we care about the value of the intermediate points along that path, rather than the end point?
3. The RL objective in Eq. 8 seems intuitively similar to the statistical consistency and efficiency bounds used to analyze (say) MLE. Why is this RL objective preferable over these prior notions of consistency?

One suggestion for improving the paper would be to have a running example (e.g., the learner is learning to do image classification). This would help clarify what the hypotheses are (e.g., sets of image classification models) and what the data are (e.g., pairs of images and labels).

Minor comments
* "one-off cooperation" -- In the abstract, I'd recommend providing a bit more explanation of what this means (e.g., add a sentence before this one to set up the problem). Giving some examples would help.
* "infinite length plans are equivalent to not planning ... planning ahead yields efficiency" -- At face value, this seems like a contradiction. I'd recommend addressing this (e.g., "While this may seem like a contradiction, ...")
* When introducing "data" in the notation section, it'd be useful to specify the "type" of the data. E.g., is this a set of (x, y) pairs?
* $\mathcal{D}$ isn't defined in the notation section.
* Background -- When reading this section, the question in my mind was why simple Bayesian filtering/updating wouldn't yield the desired results.
* Curriculum planning via MDPs -- When reading this section, I was wondering how this relates to prior work on learned optimizers.
* Eq. 9 -- There's an argmax over $d$ on both sides, but the expression doesn't seem to depend on $d$.
* Gridworld -- Again, it'd be good to clarify the "type" of the data in this example.

Broader Impact Concerns: This is a theoretical paper, without direct broader impact concerns.

==================================================
# Data Models For Dataset Drift Controls In Machine Learning With Images

Anonymous authors
Paper under double-blind review

## Abstract

Camera images are ubiquitous in machine learning research. They also play a central role in the delivery of important services spanning medicine and environmental surveying. However, the application of machine learning models in these domains has been limited because of robustness concerns. A primary failure mode is performance drops due to differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing approaches do not account for explicit models of the primary object of interest: the data. This makes it difficult to create physically faithful drift test cases or to provide precise specifications of data models that should be avoided during the deployment of a machine learning model. In this study, we demonstrate how these shortcomings can be overcome by pairing machine learning robustness validation with physical optics. We examine the role raw sensor data and differentiable data models can play in controlling performance risks related to image dataset drift. The findings are distilled into three applications. First, drift synthesis enables the controlled generation of physically faithful drift test cases. **(Revision #2, Requested change #3)** The results for absolute and relative changes in task model performance obtained with our method diverge markedly from an augmentation testing alternative that is not physically faithful. Second, the gradient connection between machine learning model and our data models allows for drift forensics that can be used to specify performance-sensitive data models which should be avoided during deployment of a machine learning model. Third, drift adjustment opens up the possibility for processing adjustments in the face of drift. This can lead to a speed-up and stabilization of classifier training at a margin of up to 20% in validation accuracy. Alongside our data model code we release two datasets to the public that we collected as part of this work. In total, the two datasets, Raw-Microscopy and Raw-Drone, comprise 1,488 scientifically calibrated reference raw sensor measurements, 8,928 raw intensity variations as well as 17,856 images processed through our data models with twelve different configurations. A guide to access the open code and datasets is available at https://anonymous.4open.science/r/tmlr/README.md.

## 1 Introduction

In this study we demonstrate how explicit data models for images can be constructed to enjoy advanced controls in the validation of machine learning model robustness to dataset drift. We connect raw image data, differentiable data models and the standard machine learning pipeline. This combination enables three novel, physically faithful validation protocols that can be used towards intended use specifications of machine learning systems, a necessary pre-requisite for the use of any technology in many application domains such as medicine or autonomous vehicles.

Camera image data are a staple of machine learning research, from the early proliferation of neural networks on MNIST [1–4] to leaps in deep learning on CIFAR and ImageNet [5–7] or high-dimensional generative models [8, 9]. Camera images also play an important role in the delivery of various high-impact public and commercial services.
Unsurprisingly, the exceptional capacity of deep supervised learning has inspired great imagination to automate or enhance such services. During the 2010s, "deep learning for ..." rang loud in most application domains under the sun, and beyond [10], spanning medicine and biology (microscopy for cell detection [11–14], histopathology [15, 16], ophthalmology [17–19], malaria detection [20–23]), geospatial modelling (climate [24–26], precision farming [27–29], pollution detection [30–32]) and more. However, the excitement has been reined in by calls for caution. Machine learning systems exhibit particular failure modes that are contingent on the makeup of their inputs [33–35]. Many findings from the machine learning robustness literature confirm supervised learning's tremendous capacity for identifying features in the training inputs that are correlated with the true labels [36–40]. But these findings also point to a flipside of this capacity: the sensitivity of the resulting machine learning model's performance to changes, both large and small, in the input data. Because this dependency touches on generalization, a *summum bonum* of machine learning, the implications have been studied across most of its many sub-disciplines including robustness validation [41–54], formal model verification [55–71], uncertainty quantification [72–82], out-of-distribution detection [34, 83–87], semi- [88–90] and self-supervised learning [91, 92], learning theory and optimization [93–96], federated learning [97–99], or compression [100–102], among others.

We refer to the mechanism underlying changes in the input data as dataset drift¹. Formally, we characterize it as follows. Let $(\boldsymbol{X}_{\mathrm{RAW}}, Y) \colon \Omega \to \mathbb{R}^{H,W} \times \mathcal{Y}$ be the raw sensor data generating random variable² on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$, for example with $\mathcal{Y} = \{0,1\}^K$ for a classification task. Raw inputs $\boldsymbol{x}_{\mathrm{RAW}}$ are in a data state before further processing is applied, in our case photons captured by the pixels of a camera sensor as displayed in the outputs of the "Measurement" block in Figure 1. The raw inputs $\boldsymbol{x}_{\mathrm{RAW}}$ are then further processed by a *data model* $\Phi_{\mathrm{Proc}} \colon \mathbb{R}^{H,W} \to \mathbb{R}^{C,H,W}$, in our case the measurement hardware like the camera itself or other downstream data processing pipelines, to produce a processed view $\boldsymbol{v} = \Phi_{\mathrm{Proc}}(\boldsymbol{x}_{\mathrm{RAW}})$ of the data as illustrated in the output of the "Data model" block in Figure 1. This processed view $\boldsymbol{v}$ could for example be the finished RGB image, the image data state that most machine learning researchers typically work with to train a *task model* $\Phi_{\mathrm{Task}} \colon \mathbb{R}^{C,H,W} \to \mathcal{Y}$. Thus, in the conventional machine learning setting we obtain $\boldsymbol{V} = \Phi_{\mathrm{Proc}}(\boldsymbol{X}_{\mathrm{RAW}})$ as the image data generating random variable with the target distribution $\mathcal{D}_t = \mathbb{P} \circ (\boldsymbol{V}, Y)^{-1}$. A different data model $\tilde{\Phi}_{\mathrm{Proc}}$ generates a different view $\tilde{\boldsymbol{V}} = \tilde{\Phi}_{\mathrm{Proc}}(\boldsymbol{X}_{\mathrm{RAW}})$ of the same underlying raw sensor data generating random variable $\boldsymbol{X}_{\mathrm{RAW}}$, resulting in the *dataset drift*

$$\mathcal{D}_s = \mathbb{P} \circ (\tilde{\boldsymbol{V}}, Y)^{-1} \neq \mathcal{D}_t. \tag{1}$$

This characterization of dataset drift is closely related to the concept of distributional robustness in the sense of Huber where "the shape of the true underlying distribution deviates slightly from the assumed model" [104].
In practice, a possible reason for such a dataset drift to occur in images is a change in the camera types or settings, for example different acquisition microscopes across different lab sites $s$ and $t$ that lead to drifted distributions $\mathcal{D}_s \neq \mathcal{D}_t$. Anticipating and validating the robustness of a machine learning model to these variations in a realistic way is not just an engineering concern but also mandated by quality standards in many industries [105–107]. Omissions to perform physically accurate robustness validations have, among other reasons, slowed or prevented the rollout of machine learning technology in impactful applications such as large-scale automated retinopathy screening [108], machine learning melanoma detection [109, 110] or yield prediction [111] from drone cameras. Hence, the calls for realistic robustness validation of image machine learning systems are not merely an exercise in intellectual novelty but a matter of integrating machine learning research with real world infrastructures and performance expectations around its core ingredient: the data.

¹Note that the nomenclature around dataset drift is as heterogeneous as the disciplines in which it is studied. See [103] for a good discussion of cross-disciplinary terminological ambiguity. Here we are concerned with dataset drift as defined in Equation (1), that is changes in $\boldsymbol{V}$ that are induced by changes in $\Phi_{\mathrm{Proc}}$, which some works also refer to as covariate shift or more generally as distribution shift.

²We write an uppercase letter $A$ for a real valued random variable and a lowercase letter $a$ for its realization. A bold uppercase letter $\boldsymbol{A}$ denotes a random vector and a bold lowercase letter $\boldsymbol{a}$ its realization. For $N \in \mathbb{N}$ realizations of the random vector $\boldsymbol{A}$ we write $\boldsymbol{a}_1, \ldots, \boldsymbol{a}_N$. The state space of the random vector $\boldsymbol{A}$ is denoted by $\mathcal{A} = \{\boldsymbol{A}(\omega) \mid \omega \in \Omega\}$.

![2_image_0.png](2_image_0.png)

Figure 1: Schematic illustration of an optical imaging pipeline, the data states and novel, raw-enabled drift controls. Data $\boldsymbol{x}$ transitions through different representations. The measurement process yields metrologically accurate raw data $\boldsymbol{x}_{\mathrm{RAW}}$, where the errors on each pixel are uncorrelated and unbiased. From the RAW sensor state the data undergoes stages of image signal processing (ISP) $\Phi_{\mathrm{Proc}}$, the data model we consider here. Finally, the data is consumed by a machine learning task model $\Phi_{\mathrm{Task}}$ which outputs $\hat{y}$. Combining raw data with the standard machine learning pipeline and a differentiable data model $\Phi_{\mathrm{Proc}}$ enables useful controls for dataset drift comprising (1) drift synthesis, (2) drift forensics, and (3) processing adjustments under drift.

## 1.1 Dataset Drift Validation For Images: Status Quo

How can one go about validating a machine learning model's performance under image dataset drift? The dominant empirical techniques can broadly be categorized into augmentation and catalogue testing approaches, each with their own benefits and limitations (see Table 1 for a conceptual comparison). Augmentation testing involves the application of perturbations, for example Gaussian noise, to already processed images [43, 112, 113] in order to approximate the effect of dataset drift. Given a processed dataset this allows for fast and easy generation of test samples. However, [114] point out that perturbations applied to an already processed image can produce drift artifacts that are unfaithful to the physics of camera processing.
Results in optics further support the concern that the noise obtained from an image processing pipeline is distinct from noise added to an already processed image [115, 116]. For illustration, assume we carry out augmentation testing to test the robustness of the task model wrt. the dataset drift (1). Let $\boldsymbol{\xi} \sim \mathcal{D}_{\mathrm{noise}}$ be a noise sample additively applied to the view, resulting in $\boldsymbol{v} + \boldsymbol{\xi}$. Doing so, the task model's robustness is tested wrt. the distribution $\mathbb{P} \circ (\boldsymbol{V} + \boldsymbol{\Xi})^{-1}$ that might not approximate $\mathcal{D}_s$ well. Since $\mathbb{P}$ is unknown, this is difficult to resolve, but at least we could require that a sample used for robustness testing is an element of the image $\tilde{\Phi}_{\mathrm{Proc}}[\mathcal{X}_{\mathrm{RAW}}]$ of $\mathcal{X}_{\mathrm{RAW}}$ under $\tilde{\Phi}_{\mathrm{Proc}}$. Following this argumentation, we define a **physically faithful data point** wrt. the dataset drift (1) as a view $\tilde{\boldsymbol{v}}$ that satisfies $\tilde{\boldsymbol{v}} \in \tilde{\Phi}_{\mathrm{Proc}}[\mathcal{X}_{\mathrm{RAW}}]$. In augmentation testing, the test samples are not restricted to physically faithful data points wrt. any dataset drift, since $\boldsymbol{v} + \boldsymbol{\xi} \in \tilde{\Phi}_{\mathrm{Proc}}[\mathcal{X}_{\mathrm{RAW}}]$ might not hold true for any data model.

A physically faithful alternative to augmentation testing is what we call catalogue testing. It involves the collection of datasets from different cameras which are then used as hold-out robustness validation datasets [49, 117–119]. It does not allow for as flexible and fast in-silico simulation of test cases as augmentation testing because cataloguing requires expensive data collection after which the test cases are "locked in". Notwithstanding, catalogue testing comes with the appealing guarantee that test samples conform to the processing physics of the different cameras they were gathered from, ensuring that only physically faithful data points are used for testing.

|                                  | Augmentation testing | Catalogue testing | Data models |
|----------------------------------|----------------------|-------------------|-------------|
| Simulation of test samples       | ✓                    | ✗                 | ✓           |
| Physically faithful test samples | ✗                    | ✓                 | ✓           |
| Differentiable data model        | ✗                    | ✗                 | ✓           |

Table 1: A conceptual comparison of different empirical approaches to dataset drift validation for machine learning task models. While augmentation testing allows the flexible, ad-hoc synthesis of test cases, they are, in contrast to catalogue testing, not guaranteed to be physically faithful. Pairing qualified raw data with explicit data models allows for flexible synthesis of physically faithful test cases. In addition, the differentiable data model opens up novel drift controls including drift forensics and drift adjustments.

However, the root of input data variations, the data model of images, has received little attention in machine learning robustness research to date. While machine learning practitioners are acutely aware of the dependency between data generation and downstream machine learning model performance, as 75% of respondents to a recent study confirmed [120], data models are routinely treated as a black box in the robustness literature. This blind spot for explicit data models is particularly surprising since they are standard practice in other scientific communities, in particular optics and metrology [121–124], as well as advanced industry applications, including microscopy [125–127] or autonomous vehicles [128–130].

## 1.2 Our Contributions

In this study we bridge the disconnect between machine learning model robustness research and explicit data models from physical optics.
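To make the distinction concrete, the following minimal NumPy sketch (our illustration, not from the paper's code base) contrasts the two testing regimes: a drifted but physically faithful view is produced by re-processing the same raw measurement through a second data model, whereas an augmented view adds noise to an already processed image. The toy `process` function is a stand-in for $\Phi_{\mathrm{Proc}}$ and all parameter values are illustrative assumptions.

```python
import numpy as np

def process(x_raw, gamma=2.2, wb=(2.0, 1.0, 1.6)):
    """Toy stand-in for a data model Phi_Proc: nearest-neighbour demosaicing
    of an RGGB mosaic, white balance and gamma correction."""
    r = x_raw[0::2, 0::2]
    g = (x_raw[0::2, 1::2] + x_raw[1::2, 0::2]) / 2
    b = x_raw[1::2, 1::2]
    rgb = np.stack([r * wb[0], g * wb[1], b * wb[2]])
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

rng = np.random.default_rng(0)
x_raw = rng.uniform(0.0, 1.0, size=(256, 256))  # stand-in raw sensor frame

v = process(x_raw)                                            # view under Phi_Proc
v_faithful = process(x_raw, gamma=1.8, wb=(1.8, 1.0, 1.9))    # view under a drifted Phi~_Proc
v_augmented = np.clip(v + rng.normal(0.0, 0.05, v.shape), 0.0, 1.0)  # augmentation testing

# v_faithful lies in the image of a data model by construction;
# v_augmented in general does not, so it need not be physically faithful.
```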
Combining raw image data, differentiable data models $\Phi_{\mathrm{Proc}}$ of image signal processing (ISP) and the standard machine learning pipeline enables us to go beyond what is possible with augmentation and catalogue testing. We provide explicit, differentiable models of the data generating process for flexible, physically faithful robustness validation of image machine learning models. Our core contributions are:

- We collected and publicly release two raw image datasets in the camera sensor state for advanced data models. These raw datasets come with full annotations and processing variations for both a classification (Raw-Microscopy, 17,860 total samples) and a regression (Raw-Drone, 10,412 total samples) task as well as precise calibration information. The data can be downloaded from the anonymized record https://zenodo.org/record/5235536 as well as through the data loader integrated in our code base that is linked below.
- We provide modular PyTorch code for explicit and differentiable data models $\Phi_{\mathrm{Proc}}$ from raw camera sensor data. All code is anonymized and accessible at https://anonymous.4open.science/r/tmlr/README.md.
- The combination of raw sensor data and modular $\Phi_{\mathrm{Proc}}$ data models enables three novel *dataset drift controls* for machine learning robustness validation:
  (1) Drift synthesis: Controlled synthesis of physically faithful drift test cases across a range of possible data models. This is demonstrated for a classification and a regression task, showing that the change in absolute and **(Revision #2, Requested change #3)** relative task model performance diverges markedly from what physically unfaithful alternatives like augmentation testing suggest (Section 5.1).
  (2) Drift forensics: Given a particular data model $\Phi_{\mathrm{Proc}}$, the gradient from the upstream task model $\Phi_{\mathrm{Task}}$ can propagate to $\Phi_{\mathrm{Proc}}$, thus enabling precise data forensics: the gradient can be used to identify data model configurations of $\Phi_{\mathrm{Proc}}$ under which $\Phi_{\mathrm{Task}}$ should not be used (Section 5.2).
  (3) Drift adjustments: Lastly, the gradient connection between data model $\Phi_{\mathrm{Proc}}$ and task model $\Phi_{\mathrm{Task}}$ opens up the possibility for processing adjustments in the face of drift. **(Revision #3, Requested change #-)** This can stabilize classifier training and performance (Section 5.3).

## 2 Related Work

While physically sound data models of images have to the best of our knowledge not yet found their way into the machine learning robustness and dataset drift literature, they have been studied in other disciplines, in particular physical optics and metrology. Our ideas on data models and dataset drift controls we present in this manuscript are particularly indebted to the following works.

**Raw image data** Camera raw files contain the data captured by the camera sensors [121]. In contrast to processed formats such as .jpeg or .png, raw files contain the sensor data with minimal processing [115, 131, 132]. The processing of the raw data usually differs by camera manufacturer, thus contributing to dataset drift. Existing raw datasets from the machine learning, computer vision and optics literature can be organized into two categories. First, datasets that are sometimes treated, usually not by the creators but by users of the data, as raw data but which are in fact not raw. Examples for this category can be found for both modalities considered here [133–143]. All of the preceding examples are processed and stored in formats including .jpeg, .tiff, .svs, .png, .mp4 and .mov.
Second, datasets that are labelled raw data and which are indeed raw. In contrast to the labelled and precisely calibrated raw data presented here, existing raw datasets [144–147] are collected from various sources for image enhancement tasks without full specification of the measurement conditions or labels for classification or segmentation tasks.

**Data models for images** [148, 149] employ deep convolutional neural networks for modelling raw image data processing which is optimized jointly with the task model. In contrast, we employ a parametric data model with tunable parameters that enables the modular drift forensics and synthesis presented later. [150] propose a differentiable image processing pipeline for the purpose of camera lens manufacturing. Their goal, however, is to optimize a physical component (lens) in the image acquisition process and no code or data is publicly available. Existing software packages that provide low level image processing operations include Halide [151], Kornia [152] and the rawpy package [153], which can be integrated with our Python and PyTorch code. **(Revision #3, Requested change #-)** We should also take note that outside optical imaging there are areas in machine learning and applied mathematics, in particular inverse problems such as magnetic resonance imaging (MRI) or computed tomography, that make use of known operator learning [154, 155] to incorporate the forward model in the optimization [156] or, as in the case of MRI, learn directly in the k-space [157].

**Drift synthesis** As detailed in Section 1, the synthesis of realistic drift test cases for a task model in computer vision is often done by applying augmentations directly to the input view $\boldsymbol{v}_{\mathrm{GC}}$, e.g. a processed .jpeg or .png image. Hendrycks et al. [43] have done foundational work in this direction, developing a practical, standardized benchmark. However, as we explain in Section 1.1 there is no guarantee that noise added to a processed image will be physically faithful, that is $\boldsymbol{v} + \boldsymbol{\xi} \in \tilde{\Phi}_{\mathrm{Proc}}[\mathcal{X}_{\mathrm{RAW}}]$. This is problematic, as nuances matter [158] for assessing the cascading effects dataset drift has on the task model $\Phi_{\mathrm{Task}}$ downstream [120, 159]. For the same reason, the use of generative models [47] like GANs has been limited for test data generation as they are known to hallucinate visible and less visible artifacts [160, 161]. Other approaches, like the WILDS data catalogue [162, 163], build on manual curation of so called natural distribution shifts, or, like [68], on artificial worst case constructions. These are important tools for the study of dataset drifts, especially those that are created outside the camera image signal processing. Absent raw sensor data, the shared limitation of catalogue approaches is that metrologically faithful drift *synthesis* is not possible.

**Drift forensics** Phan et al. [164] use a differentiable raw processing pipeline to propagate the gradient information back to the raw image. Similar to this work, the signal is used for adversarial search. However, Phan et al. optimize adversarial noise on a per-image basis in the raw space $\boldsymbol{x}_{\mathrm{RAW}}$, whereas our work modifies the parameters of the data model $\Phi_{\mathrm{Proc}}$ itself in pursuit of harmful parameter configurations.
The goal in this work is not simply to fool a classifier, but to discover failure modes and susceptible parameters in the data model $\Phi_{\mathrm{Proc}}$ that will have the most influence on the task model's performance.

**Drift adjustments** An explicit and differentiable image processing data model allows joint optimization together with the task model $\Phi_{\mathrm{Task}}$. This has been done for radiology image data [165–167], though the measurement process there is different and the focus lies on finding good sampling patterns. For optical data, a related strand of work is modelling inductive biases in the image acquisition process, which is explained and contrasted to this work above [116, 150].

## 3 Preliminaries: The Image Data Model

Before proceeding with a description of the methods we use to obtain the data models $\Phi_{\mathrm{Proc}}$ in this study, let us briefly review the distinction between raw data $\boldsymbol{x}_{\mathrm{RAW}}$, processed image $\boldsymbol{v}$ and the mechanisms $\Phi_{\mathrm{Proc}} \colon \mathbb{R}^{H,W} \to \mathbb{R}^{C,H,W}$ by which an image transitions between these states³. **(Revision #3, Requested change #1)** Image acquisition has traditionally been optimized for the human perception of a scene [122, 168]. Human eyes detect only the visible spectrum of electromagnetic radiation, hence imaging cameras in different application domains such as medical imaging or remote sensing are usually calibrated to aid the human eye perform some downstream task. This process that gives rise to optical image data, which ultimately forms the basis for downstream machine learning models, is rarely considered in the machine learning robustness literature. Conversely, most research to date has been conducted on processed RGB image representations.

The *raw sensor image* $\boldsymbol{x}_{\mathrm{RAW}}$ obtained from a camera differs substantially from the processed image that is used in conventional machine learning pipelines. The $\boldsymbol{x}_{\mathrm{RAW}}$ state appears like a grey scale image with a grid structure (see $\boldsymbol{x}_{\mathrm{RAW}}$ in Figure 1). This grid is given by the Bayer color filter mosaic, which lies over the sensors [121]. The final *RGB image* $\boldsymbol{v}$ is the result of a series of transformations applied to $\boldsymbol{x}_{\mathrm{RAW}}$. For many steps in this process different possible algorithms exist. Starting from a single $\boldsymbol{x}_{\mathrm{RAW}}$, all those possible combinations can generate an exponential number of possible images that are slightly different in terms of colors, lighting and blur, variations that contribute to dataset drift. In Figure 1 a conventional pipeline from $\boldsymbol{x}_{\mathrm{RAW}}$ to the final RGB image $\boldsymbol{v}$ is depicted. Here, common and core transformations are considered. Note that depending on the application context it is possible to reorder or add additional steps. The symbol $\Phi_i$ is used to denote the $i$-th transformation and $\boldsymbol{v}_i$ (*view*) the output image of $\Phi_i$. The first step of the pipeline is the *black level* correction $\Phi_{\mathrm{BL}}$, which removes any constant offset. The image $\boldsymbol{v}_{\mathrm{BL}}$ is a grey image with a Bayer filter pattern. A *demosaicing* algorithm $\Phi_{\mathrm{DM}}$ is applied to construct the full RGB color image [169]. Given $\boldsymbol{v}_{\mathrm{DM}}$, intensities are adjusted to obtain a neutrally illuminated image $\boldsymbol{v}_{\mathrm{WB}}$ through a *white balance* transformation $\Phi_{\mathrm{WB}}$. By considering color dependencies, a *color correction* transformation $\Phi_{\mathrm{CC}}$ is applied to balance hue and saturation of the image. Once lighting and colors are corrected, a *sharpening* algorithm $\Phi_{\mathrm{SH}}$ is applied to reduce image blurriness. This transformation can make the image appear more noisy. For this reason a *denoising* algorithm $\Phi_{\mathrm{DN}}$ is applied afterwards [170, 171].
Finally, *gamma correction* $\Phi_{\mathrm{GC}}$ adjusts the linearity of the pixel values. For a closed form description of these transformations see Section 4.2. Compression may also take place as an additional step. It is not considered here as the input image size is already small. Furthermore, the effect of compression on downstream task model performance has been thoroughly examined before [172–176]. However, users of our code can add this step or reorder the sequence of steps in the modular processing object class per their needs⁴.

## 4 Methods

In order to perform advanced drift controls, raw sensor data and differentiable data models are required, both of which we will explain next.

## 4.1 Raw Dataset Acquisition

As public, scientifically calibrated and labelled raw data is, to the best of our knowledge, currently not available, we acquired two raw datasets as part of this study: Raw-Microscopy and Raw-Drone. We make both datasets publicly available at https://zenodo.org/record/5235536. Raw-Microscopy consists of expert annotated blood smear microscope images. Raw-Drone comprises drone images with annotations of cars. Our motivation behind the acquisition of these particular datasets was threefold. First, we wanted to ensure that the acquired datasets provide good coverage of representative machine learning tasks, including classification (Raw-Microscopy) and regression (Raw-Drone). Second, we wanted to collect data on applications that, to our minds, are disposed towards positive welfare impact in today's world, including medicine (Raw-Microscopy) and environmental surveying (Raw-Drone). Third, we wanted to ensure the downstream machine learning task models are such where errors can be costly, here patient safety (Raw-Microscopy) and autonomous vehicles (Raw-Drone), and hence where extensive robustness and dataset drift controls are particularly relevant. Since data collection is an expensive project in and of itself, we did not aspire to provide extensive benchmark datasets for the respective applications, but to collect enough data to demonstrate the advanced data modelling and dataset drift controls that raw data enables. In the following we provide detailed information on the two datasets and, following good metrological practices, the calibration setups of the acquisition process. Samples of both datasets can be inspected in Figure 2 and Appendix A.3. Full datasheet documentation following [177] is also available in Appendix A.5.

³**(Revision #3, Requested change #1)** We recommend [122] for a good introduction to the physics of digital optical imaging.
⁴See `pipeline_torch.py` and `pipeline_numpy.py` in our code.

![6_image_0.png](6_image_0.png)

Figure 2: Processed samples and labels of the two datasets, Raw-Microscopy (columns one to four) and Raw-Drone (columns five to eight), that were acquired for the dataset drift study presented here.

**Raw-Microscopy** Assessment of blood smears under a light microscope is a key diagnostic technique [178]. The creation of image datasets and machine learning models on them has received wide interest in recent years [13, 179, 180]. Variations in the image processing can affect the downstream task model performance [181]. Dataset drift controls can thus help to specify the perimeter of safe application for a task model. A raw dataset was collected for that purpose. A bright-field microscope was used to image blood smear cytopathology samples.
The light source is a halogen lamp equipped with a 0.55 NA condenser and a pre-centred field diaphragm unit. Filters at 450 nm, 525 nm and 620 nm were used to acquire the blue, green and red channels respectively. The condenser is followed by a 40× objective with 0.95 NA (Olympus UPLXAPO40X). Slides can be moved via a piezo with 1 nm spatial resolution, in three directions. Focus was achieved by maximizing the variance of the pixel values⁵. Images are acquired at 16 bit, with a 2560 × 2160 pixels CMOS sensor (PCO edge 5.5). The point-spread function (PSF) was measured to be 450 nm with 100 nm nanospheres. Mechanical drift was measured at 0.4 pixels per hour. Imaging was performed on de-identified human blood smear slides (Ma190c Lieder, J. Lieder GmbH & Co. KG, Ludwigsburg/Germany). All slides were taken from healthy humans without known hematologic pathology. Imaging regions were selected to contain single leukocytes in order to allow unique labelling of image patches, and regions were cropped to 256 × 256 pixels. All images were annotated by a trained hematological cytologist using the standard scheme of normal leukocytes comprising band and segmented neutrophils, typical and atypical lymphocytes, monocytes, eosinophils and basophils [182]. To soften class imbalance, candidates for rare normal leukocyte types were preferentially imaged to enrich rare classes. Additionally, two classes for debris and smudge cells, as well as cells of unclear morphology, were included. Labelling took place for all imaged cells from a particular smear at a time, with single-cell patches shown in random order. Raw images were extracted using JetRaw Data Suite features. Blue, red and green channels are metrologically rescaled independently in intensity to simulate a standard RGB camera condition. Some pixels are discarded complementarily on each channel in order to obtain a Bayer filter pattern.

**Raw-Drone** Automated processing of drone data has useful applications including precision agriculture [183] or environmental protection [184]. Variation in image processing has been shown to affect task model performance [111, 115], underlining the need for drift controls. For the purposes of this study, a raw car segmentation dataset was created for the drone image modality. A DJI Mavic 2 Pro drone was used, equipped with a Hasselblad L1D-20c camera (Sony IMX183 sensor) having 2.4 µm pixels in a Bayer filter array. The objective has a focal length of 10.3 mm. The f-number was set to N = 8 to emulate the PSF circle diameter relative to the pixel pitch and ground sampling distance (GSD) as would be found on images from high-resolution satellites. The PSF was measured to have a circle diameter of 12.5 µm. This corresponds to a diffraction-limited system, within the uncertainty dominated by the wavelength spread of the image. Images were taken at 200 ISO, with a gain of 0.528 DN/e⁻. The 12-bit pixel values are however left-justified to 16 bits, so that the gain on the 16-bit numbers is 8.448 DN/e⁻. The images were taken at a height of 250 m, so that the GSD is 6 cm. All images were tiled in 256 × 256 patches. Segmentation color masks were created to identify cars for each patch. From this mask, classification labels were generated to detect if there is a car in the image. The dataset is constituted by 548 images for the segmentation task.

⁵**(Revision #3, Requested change #1)** Figure 9 in Appendix A.3.1 provides an illustration of the imaging setup.
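As a quick plausibility check of the stated acquisition parameters (the arithmetic below is our addition, not the paper's), the ground sampling distance follows from the similar-triangle relation between flight height $h = 250\,\mathrm{m}$, pixel pitch $p = 2.4\,\mu\mathrm{m}$ and focal length $f = 10.3\,\mathrm{mm}$, and the 16-bit gain follows from left-justifying 12-bit values, i.e. multiplying by $2^4$:

$$\mathrm{GSD} = \frac{h\,p}{f} = \frac{250\,\mathrm{m} \times 2.4\,\mu\mathrm{m}}{10.3\,\mathrm{mm}} \approx 5.8\,\mathrm{cm} \approx 6\,\mathrm{cm},
\qquad
g_{16} = 2^{4}\, g_{12} = 16 \times 0.528\,\mathrm{DN}/e^{-} = 8.448\,\mathrm{DN}/e^{-}.$$

Both values agree with the numbers reported above.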
Both datasets include six additional raw variations at different intensity scales, augmented with JetRaw Data Suite.

## 4.2 Data Models: Image Signal Processing $\Phi_{\mathrm{Proc}}$

The second ingredient to this study are the data models of image processing which form the basis for the drift controls presented later. Let $(\boldsymbol{X}_{\mathrm{RAW}}, Y) \colon \Omega \to \mathbb{R}^{H,W} \times \mathcal{Y}$ be the raw sensor data generating random variable on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with $\mathcal{Y} = \{0,1\}^K$ for classification and $\mathcal{Y} = \{0,1\}^{H,W}$ for segmentation. Let $\Phi_{\mathrm{Task}} \colon \mathbb{R}^{C,H,W} \to \mathcal{Y}$ be the task model determined during training. The inputs that are given to the task model $\Phi_{\mathrm{Task}}$ are the outputs of the data model $\Phi_{\mathrm{Proc}}$. We distinguish between the raw sensor image $\boldsymbol{x}_{\mathrm{RAW}}$ and a *view* $\boldsymbol{v} = \Phi_{\mathrm{Proc}}(\boldsymbol{x}_{\mathrm{RAW}})$ of this image, where $\Phi_{\mathrm{Proc}} \colon \mathbb{R}^{H,W} \to \mathbb{R}^{C,H,W}$ models the transformation steps applied to the raw sensor image during processing.

The objective in supervised machine learning is to learn a task model $\Phi_{\mathrm{Task}} \colon \mathbb{R}^{C,H,W} \to \mathcal{Y}$ within a fixed class of task models $\mathcal{H}$ that minimizes the expected loss wrt. the loss function $\mathcal{L} \colon \mathcal{Y} \times \mathcal{Y} \to [0, \infty)$, that is to find $\Phi^{\star}_{\mathrm{Task}}$ such that

$$\inf_{\Phi_{\mathrm{Task}} \in \mathcal{H}} \mathbb{E}[\mathcal{L}(\Phi_{\mathrm{Task}}(\boldsymbol{V}), Y)] \tag{2}$$

is attained. Towards that goal, $\Phi_{\mathrm{Task}}$ is determined during training such that the empirical error

$$\frac{1}{N}\sum_{n=1}^{N}\mathcal{L}\left(\Phi_{\mathrm{Task}}(\boldsymbol{v}_{n}), y_{n}\right) \tag{3}$$

is minimized over the sample $\mathcal{S} = ((\boldsymbol{v}_1, y_1), \ldots, (\boldsymbol{v}_N, y_N))$ of views. Modelling in the conventional machine learning setting begins with the image data generating random variable $(\boldsymbol{V}, Y) = (\Phi_{\mathrm{Proc}}(\boldsymbol{X}_{\mathrm{RAW}}), Y)$ and the target distribution $\mathcal{D}_t = \mathbb{P} \circ (\boldsymbol{V}, Y)^{-1}$. Given a dataset drift, as specified in Equation (1), without a data model we have little recourse to disentangle reasons for performance drops in $\Phi_{\mathrm{Task}}$. To alleviate this underspecification, an explicit data model is needed. We consider two such models in this study: a static model $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$ and a parametrized model $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$.

In the following, we denote by $\boldsymbol{x}_{\mathrm{RAW}} \in [0,1]^{H,W}$ the normalized raw image, that is a grey scale image with a Bayer filter pattern normalized by $2^{16}-1$, i.e.

$$\boldsymbol{x}_{\mathrm{RAW}} = \begin{bmatrix} \boldsymbol{A}_{1,1} & \cdots & \boldsymbol{A}_{1,\frac{W}{2}} \\ \vdots & \ddots & \vdots \\ \boldsymbol{A}_{\frac{H}{2},1} & \cdots & \boldsymbol{A}_{\frac{H}{2},\frac{W}{2}} \end{bmatrix} \quad\text{with}\quad \boldsymbol{A}_{h,w} = \begin{bmatrix} r_{2h+1,2w+1} & g_{2h+1,2w} \\ g_{2h,2w+1} & b_{2h,2w} \end{bmatrix}, \tag{4}$$

where the values $r_{2h+1,2w+1}$, $g_{2h+1,2w}$, $g_{2h,2w+1}$, $b_{2h,2w}$ correspond to the values measured through the different sensors and normalized by $2^{16}-1$. We provide here a precise description of the transformations that we consider in our static model, followed by a description of how to convert this static model into a differentiable model.
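The indexing in Equation (4) can be made concrete in a few lines of NumPy. The sketch below is our illustration, assuming an RGGB layout and 0-based array indexing (which flips the 1-based parities written in Equation (4)):

```python
import numpy as np

def normalize_raw(sensor_counts: np.ndarray) -> np.ndarray:
    """Normalize 16-bit sensor counts to [0, 1] as in Equation (4)."""
    return sensor_counts.astype(np.float64) / (2**16 - 1)

def split_bayer(x_raw: np.ndarray):
    """Split an RGGB-patterned raw image into its four sub-mosaics.

    In 0-based indexing the red samples sit at even/even positions, blue at
    odd/odd, and the two green samples on the anti-diagonal of each 2x2 block."""
    r  = x_raw[0::2, 0::2]
    g1 = x_raw[0::2, 1::2]
    g2 = x_raw[1::2, 0::2]
    b  = x_raw[1::2, 1::2]
    return r, g1, g2, b

counts = np.random.default_rng(0).integers(0, 2**16, size=(256, 256))
x_raw = normalize_raw(counts)
r, g1, g2, b = split_bayer(x_raw)
assert r.shape == (128, 128)  # each sub-mosaic has half the resolution per axis
```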
![8_image_0.png](8_image_0.png)

(a) Samples for both datasets, Raw-Microscopy and Raw-Drone, from all twelve static data models $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$ used for the drift synthesis experiments in Section 5.1. A version with higher resolution is omitted here to save space and can instead be found in Figure 8 in the appendices.

| Data model | Demosaicing | Sharpening | Denoising |
|------------|-------------|------------|-----------|
| bi,s,me | $\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| bi,s,ga | $\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |
| bi,u,me | $\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| bi,u,ga | $\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |
| me,s,me | $\Phi^{\mathrm{Men}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| me,s,ga | $\Phi^{\mathrm{Men}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |
| me,u,me | $\Phi^{\mathrm{Men}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| me,u,ga | $\Phi^{\mathrm{Men}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |
| ma,s,me | $\Phi^{\mathrm{Mal}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| ma,s,ga | $\Phi^{\mathrm{Mal}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{SF}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |
| ma,u,me | $\Phi^{\mathrm{Mal}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{MD}}_{\mathrm{DN}}$ |
| ma,u,ga | $\Phi^{\mathrm{Mal}}_{\mathrm{DM}}$ | $\Phi^{\mathrm{UM}}_{\mathrm{SH}}$ | $\Phi^{\mathrm{GD}}_{\mathrm{DN}}$ |

(b) Abbreviations of the twelve configurations of the static data model $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$ used in the drift synthesis experiments.

## Figure 3

## 4.2.1 The Static Data Model $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$

Following common steps in ISP, the *static data model* is defined as the composition

$$\Phi^{\mathrm{stat}}_{\mathrm{Proc}} = \Phi_{\mathrm{GC}} \circ \Phi_{\mathrm{DN}} \circ \Phi_{\mathrm{SH}} \circ \Phi_{\mathrm{CC}} \circ \Phi_{\mathrm{WB}} \circ \Phi_{\mathrm{DM}} \circ \Phi_{\mathrm{BL}}, \tag{5}$$

mapping a raw sensor image to a RGB image. We note that other data model variations, for example by reordering or adding steps, are feasible. The static data models allow the controlled synthesis of different, physically faithful views from the same underlying raw sensor data by manually changing the configurations of the intermediate steps. Fixing the continuous features, but varying $\Phi_{\mathrm{DM}}$, $\Phi_{\mathrm{SH}}$ and $\Phi_{\mathrm{DN}}$, results in twelve different views for the configurations considered here. Samples for each of the twelve data models are provided in Figure 3a. The individual functions of the composition $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$ are specified as follows. If not stated otherwise, writing the equation $v_{c,h,w} = a_{c,h,w} + b_{c,h,w}$ defines $v_{c,h,w}$ for all $1 \le c \le 3$, $1 \le h \le H$ and $1 \le w \le W$.

**Black level correction (BL)** removes thermal noise and readout noise generated from the camera sensor. The transformation is given by

$$\Phi_{\mathrm{BL}} \colon [0,1]^{H,W} \to [0,1]^{H,W},\ \boldsymbol{x}_{\mathrm{RAW}} \mapsto \boldsymbol{v}_{\mathrm{BL}}, \tag{6}$$

with

$$(v_{\mathrm{BL}})_{2h+1,2w+1} = x_{2h+1,2w+1} - bl_1, \qquad (v_{\mathrm{BL}})_{2h,2w+1} = x_{2h,2w+1} - bl_2,$$
$$(v_{\mathrm{BL}})_{2h+1,2w} = x_{2h+1,2w} - bl_3, \qquad (v_{\mathrm{BL}})_{2h,2w} = x_{2h,2w} - bl_4.$$

By design of $\boldsymbol{bl} \in \mathbb{R}^4$, black level correction ensures that $\boldsymbol{v}_{\mathrm{BL}}$ is again an element of $[0,1]^{H,W}$.

**Demosaicing (DM)** is applied to reconstruct the full RGB color image, by applying a certain interpolation rule. We use one out of the three demosaicing algorithms BayerBilinear ($\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$), Menon2007 ($\Phi^{\mathrm{Men}}_{\mathrm{DM}}$) and Malvar2004 ($\Phi^{\mathrm{Mal}}_{\mathrm{DM}}$) from the python package colour-demosaicing and denote this transformation by the map

$$\Phi_{\mathrm{DM}} \colon [0,1]^{H,W} \to [0,1]^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{DM}}. \tag{7}$$

**White balance (WB)** is applied to obtain a neutrally illuminated image. The transformation is given by

$$\Phi_{\mathrm{WB}} \colon [0,1]^{3,H,W} \to [0,1]^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{WB}}, \tag{8}$$

where $\boldsymbol{wb} \in [0,1]^3$ adjusts the intensities by

$$(v_{\mathrm{WB}})_{c,h,w} = wb_c \cdot (v_{\mathrm{DM}})_{c,h,w}. \tag{9}$$

**Color correction (CC)** balances the saturation of the image by considering color dependencies. Let $\boldsymbol{M} \in \mathbb{R}^{3,3}$ be the color matrix.
The transformation is defined by

$$\Phi_{\mathrm{CC}} \colon [0,1]^{3,H,W} \to \mathbb{R}^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{CC}}, \tag{10}$$

where

$$\boldsymbol{v}_{\mathrm{CC}} = \begin{bmatrix} (v_{\mathrm{CC}})_{1,h,w} \\ (v_{\mathrm{CC}})_{2,h,w} \\ (v_{\mathrm{CC}})_{3,h,w} \end{bmatrix} = \boldsymbol{M} \begin{bmatrix} (v_{\mathrm{WB}})_{1,h,w} \\ (v_{\mathrm{WB}})_{2,h,w} \\ (v_{\mathrm{WB}})_{3,h,w} \end{bmatrix}. \tag{11}$$

The entries of the resulting $\boldsymbol{v}_{\mathrm{CC}}$ are no longer restricted to $[0,1]$.

**Sharpening (SH)** reduces the blurriness of an image. We use the two methods sharpening filter ($\Phi^{\mathrm{SF}}_{\mathrm{SH}}$) and unsharp masking ($\Phi^{\mathrm{UM}}_{\mathrm{SH}}$) that are applied after a transformation of the view $\boldsymbol{v}_{\mathrm{CC}}$ to the $YUV$-color space. To convert the view to the $YUV$-color space we use the skimage.color function rgb2yuv ($\Phi_{YUV}$). The sharpening filter

$$SF \colon \mathbb{R}^{3,H,W} \to \mathbb{R}^{3,H,W} \tag{12}$$

is defined by a channel-wise convolution

$$(SF(\boldsymbol{v}))_{c,h,w} = \left((\boldsymbol{v}_c \star \boldsymbol{k})_{h,w}\right)_c \quad\text{with}\quad \boldsymbol{k} := \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix} \tag{13}$$

of the view

$$\boldsymbol{v} = \Phi_{YUV}(\boldsymbol{v}_{\mathrm{CC}}). \tag{14}$$

For unsharp masking we use the skimage.filters function unsharp_mask modeled by $UM$. To formally define the sharpening we write

$$\Phi_{\mathrm{SH}} \colon \mathbb{R}^{3,H,W} \to \mathbb{R}^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{SH}} \tag{15}$$

where

$$\boldsymbol{v}_{\mathrm{SH}} = algo \circ \Phi_{YUV}(\boldsymbol{v}_{\mathrm{CC}}) \quad\text{with}\quad algo \in \{SF, UM\}. \tag{16}$$

**Denoising (DN)** reduces the noise in an image that is (partly) introduced by SH and transforms the $YUV$-color space view back to the $RGB$-color space. For the latter transformation, the skimage.color function yuv2rgb ($\Phi^{-1}_{YUV}$) is used. We apply one out of the two methods Gaussian denoising ($\Phi^{\mathrm{GD}}_{\mathrm{DN}}$) and median denoising ($\Phi^{\mathrm{MD}}_{\mathrm{DN}}$). For Gaussian denoising, we apply a Gaussian filter (GF) with standard deviation $\sigma = 0.5$ from the scipy.ndimage package. For median denoising we apply a median filter (MF) of size 3 from the scipy.ndimage package. Formally, this reads as

$$\Phi_{\mathrm{DN}} \colon \mathbb{R}^{3,H,W} \to \mathbb{R}^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{DN}} \tag{17}$$

where

$$\boldsymbol{v}_{\mathrm{DN}} = \Phi^{-1}_{YUV} \circ algo(\boldsymbol{v}_{\mathrm{SH}}) \quad\text{with}\quad algo \in \{GF, MF\}. \tag{18}$$

**Gamma correction (GC)** equilibrates the overall brightness of the image. First, the entries of the view $\boldsymbol{v}_{\mathrm{DN}}$ are clipped to $[0,1]$, leading to

$$(v_{CP})_{c,h,w} = (v_{\mathrm{DN}})_{c,h,w}\,\mathbb{1}_{\{0 \le (v_{\mathrm{DN}})_{c,h,w} \le 1\}} + \mathbb{1}_{\{(v_{\mathrm{DN}})_{c,h,w} > 1\}}. \tag{19}$$

Second, the brightness adjusting transformation is defined by

$$\Phi_{\mathrm{GC}} \colon \mathbb{R}^{3,H,W} \to [0,1]^{3,H,W},\ \boldsymbol{v} \mapsto \boldsymbol{v}_{\mathrm{GC}} = (\boldsymbol{v}_{CP})^{\frac{1}{\gamma}} \tag{20}$$

for some $\gamma > 0$ applied element-wise. Note that zero-clipping is necessary for $\boldsymbol{v}_{\mathrm{GC}}$ to be well-defined.

In total, we define the composition

$$\Phi^{\mathrm{stat}}_{\mathrm{Proc}} \colon [0,1]^{H,W} \to [0,1]^{3,H,W}, \tag{21}$$
$$\Phi^{\mathrm{stat}}_{\mathrm{Proc}} := \Phi_{\mathrm{GC}} \circ \Phi_{\mathrm{DN}} \circ \Phi_{\mathrm{SH}} \circ \Phi_{\mathrm{CC}} \circ \Phi_{\mathrm{WB}} \circ \Phi_{\mathrm{DM}} \circ \Phi_{\mathrm{BL}} \tag{22}$$

and call $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$ the *static pipeline*.
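For concreteness, here is a compact sketch of the composition (22) using the libraries named above (colour-demosaicing, skimage, scipy). It is our illustration rather than the released `pipeline_numpy.py`: the black level is simplified to a scalar offset, images are kept in H,W,3 layout instead of the paper's C,H,W convention, and all parameter values are placeholders.

```python
import numpy as np
from colour_demosaicing import demosaicing_CFA_Bayer_bilinear
from scipy.ndimage import convolve, gaussian_filter, median_filter
from skimage.color import rgb2yuv, yuv2rgb

K_SHARP = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float64)

def phi_stat_proc(x_raw, bl, wb, M, gamma, denoise="gaussian"):
    """Sketch of Eq. (22): GC o DN o SH o CC o WB o DM o BL on one raw frame."""
    v = x_raw - bl                                   # (6) black level, scalar offset for brevity
    v = demosaicing_CFA_Bayer_bilinear(v, "RGGB")    # (7) demosaicing -> (H, W, 3)
    v = v * wb                                       # (9) white balance
    v = v @ M.T                                      # (11) color correction
    yuv = rgb2yuv(v)                                 # (14)-(16) sharpen in YUV
    yuv = np.stack([convolve(yuv[..., c], K_SHARP) for c in range(3)], axis=-1)
    if denoise == "gaussian":                        # (17)-(18) denoise, then back to RGB
        yuv = gaussian_filter(yuv, sigma=(0.5, 0.5, 0))
    else:
        yuv = median_filter(yuv, size=(3, 3, 1))
    v = yuv2rgb(yuv)
    return np.clip(v, 0.0, 1.0) ** (1.0 / gamma)     # (19)-(20) gamma correction

rng = np.random.default_rng(0)
x_raw = rng.uniform(0.05, 1.0, size=(64, 64))
v = phi_stat_proc(x_raw, bl=0.02, wb=np.array([2.0, 1.0, 1.6]),
                  M=np.eye(3), gamma=2.2)
```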
To test the effect of different static data models on the performance of two task models, we fix the continuous features $\boldsymbol{bl}$, $\boldsymbol{wb}$, $\boldsymbol{M}$ and $\gamma$ of the above steps, but vary the demosaicing method, the sharpening method and the denoising method, resulting in twelve different views, generated by different configurations of $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$. An overview of the data model configurations and their corresponding abbreviations can be found alongside processed samples in Figures 3a and 3b.

## 4.2.2 The Parametrized Data Model $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$

For a fixed raw sensor image, the *parametrized data model* $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$ maps from a parameter space $\Theta$ to a RGB image. It is similar to the static data model with the notable difference that each processing step is differentiable wrt. its parameters $\boldsymbol{\theta}$. This allows for backpropagation of the gradient from the output of the task model $\Phi_{\mathrm{Task}}$ through the data model $\Phi_{\mathrm{Proc}}$ all the way back to the raw sensor image $\boldsymbol{x}_{\mathrm{RAW}}$ to perform drift forensics and drift adjustments. Hence, we aim to design a data model $\Phi^{\mathrm{para}}_{\mathrm{Proc}} \colon \mathbb{R}^{H,W} \times \Theta \to \mathbb{R}^{C,H,W}$ that is differentiable in $\boldsymbol{\theta} \in \Theta$ satisfying

$$\Phi^{\mathrm{stat}}_{\mathrm{Proc}} = \Phi^{\mathrm{para}}_{\mathrm{Proc}}\left(\cdot,\, \boldsymbol{\theta}^{\mathrm{stat}}\right) \tag{23}$$

for some choice of parameters $\boldsymbol{\theta}^{\mathrm{stat}}$ and some fixed configuration of the static pipeline $\Phi^{\mathrm{stat}}_{\mathrm{Proc}}$.

**Black level correction (BL)** For the parametrized black level correction define the map

$$\Phi^{\mathrm{para}}_{\mathrm{BL}} \colon [0,1]^{H,W} \times \mathbb{R}^4 \to \mathbb{R}^{H,W},\ (\boldsymbol{x}_{\mathrm{RAW}}, \boldsymbol{\theta}_1) \mapsto \boldsymbol{v}_{\mathrm{BL}} = \Phi_{\mathrm{BL}}(\boldsymbol{x}_{\mathrm{RAW}})|_{\boldsymbol{bl}=\boldsymbol{\theta}_1} \tag{24}$$

and set $\Theta_1 := \mathbb{R}^4$.

**Demosaicing (DM)** We first convert $\boldsymbol{v}_{\mathrm{BL}}$ to a three channel image $[R, G, B] \in \mathbb{R}^{3,H,W}$ where the entries of $R$, $G$ and $B$ are zero except

$$R_{2h+1,2w+1} = (v_{\mathrm{BL}})_{2h+1,2w+1}, \quad B_{2h,2w} = (v_{\mathrm{BL}})_{2h,2w},$$
$$G_{2h+1,2w} = (v_{\mathrm{BL}})_{2h+1,2w}, \quad G_{2h,2w+1} = (v_{\mathrm{BL}})_{2h,2w+1}.$$

To parametrize $\Phi^{\mathrm{Bil}}_{\mathrm{DM}}$ define the map

$$\Phi^{\mathrm{para}}_{\mathrm{DM}} \colon [0,1]^{H,W} \times \mathbb{R}^{3,3,3} \to \mathbb{R}^{3,H,W},\ (\boldsymbol{v}_{\mathrm{BL}}, \boldsymbol{\theta}_2) \mapsto \boldsymbol{v}_{\mathrm{DM}} \tag{25}$$

with $\boldsymbol{\theta}_2 = [\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3]$, where the kernels $\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3 \in \mathbb{R}^{3,3}$ are separately applied to each color channel resulting in

$$(v_{\mathrm{DM}})_{1,h,w} = (R \star \boldsymbol{k}_1)_{h,w}, \quad (v_{\mathrm{DM}})_{2,h,w} = (G \star \boldsymbol{k}_2)_{h,w}, \quad (v_{\mathrm{DM}})_{3,h,w} = (B \star \boldsymbol{k}_3)_{h,w}. \tag{26}$$

The source code of BayerBilinear shows that the parameter choice

$$\boldsymbol{k}_1 = \boldsymbol{k}_3 = \begin{bmatrix} 0.25 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 0.25 \end{bmatrix} \quad\text{and}\quad \boldsymbol{k}_2 = \begin{bmatrix} 0 & 0.25 & 0 \\ 0.25 & 1 & 0.25 \\ 0 & 0.25 & 0 \end{bmatrix}$$

leads to

$$\Phi^{\mathrm{Bil}}_{\mathrm{DM}} = \Phi^{\mathrm{para}}_{\mathrm{DM}}(\cdot, \boldsymbol{\theta}_2). \tag{27}$$

Towards the definition of the parameter space set $\Theta_2 := \mathbb{R}^{3,3,3} \times \Theta_1$.

**White balance (WB)** For the parametrized white balance define the map

$$\Phi^{\mathrm{para}}_{\mathrm{WB}} \colon \mathbb{R}^{3,H,W} \times \mathbb{R}^3 \to \mathbb{R}^{3,H,W},\ (\boldsymbol{v}_{\mathrm{DM}}, \boldsymbol{\theta}_3) \mapsto \boldsymbol{v}_{\mathrm{WB}} = \Phi_{\mathrm{WB}}(\boldsymbol{v}_{\mathrm{DM}})|_{\boldsymbol{wb}=\boldsymbol{\theta}_3} \tag{28}$$

and set $\Theta_3 := \mathbb{R}^3 \times \Theta_2$.
**Color correction (CC)** For the parametrized color correction define the map

$$\Phi^{\mathrm{para}}_{\mathrm{CC}} \colon \mathbb{R}^{3,H,W} \times \mathbb{R}^{3,3} \to \mathbb{R}^{3,H,W},\ (\boldsymbol{v}_{\mathrm{WB}}, \boldsymbol{\theta}_4) \mapsto \boldsymbol{v}_{\mathrm{CC}} = \Phi_{\mathrm{CC}}(\boldsymbol{v}_{\mathrm{WB}})|_{\boldsymbol{M}=\boldsymbol{\theta}_4} \tag{29}$$

and set $\Theta_4 := \mathbb{R}^{3,3} \times \Theta_3$.

**Sharpening (SH)** We parametrize the sharpening filter configuration of the static pipeline by using the entries of $\boldsymbol{k} \in \mathbb{R}^{3,3}$ defined in (13) as parameters, leading to

$$\Phi^{\mathrm{para}}_{\mathrm{SH}} \colon \mathbb{R}^{3,H,W} \times \mathbb{R}^{3,3} \to \mathbb{R}^{3,H,W},\ (\boldsymbol{v}_{\mathrm{CC}}, \boldsymbol{\theta}_5) \mapsto \boldsymbol{v}_{\mathrm{SH}} = \Phi_{\mathrm{SH}}(\boldsymbol{v}_{\mathrm{CC}})|_{\boldsymbol{k}=\boldsymbol{\theta}_5} \tag{30}$$

and $\Theta_5 := \mathbb{R}^{3,3} \times \Theta_4$.

**Denoising (DN)** We parametrize the configuration where the Gaussian denoising method is applied. Applying the Gaussian filter from scipy.ndimage with $\sigma = 0.5$ is equivalent to a convolution of the view in the $YUV$-color space with a specific $\boldsymbol{k}_{\mathrm{gauss}} \in \mathbb{R}^{5,5}$. For the specific values of $\boldsymbol{k}_{\mathrm{gauss}}$ see `K_BLUR` in the code of the parametrized pipeline. Therefore, to parametrize DN we define the map

$$\Phi^{\mathrm{para}}_{\mathrm{DN}} \colon \mathbb{R}^{3,H,W} \times \mathbb{R}^{5,5} \to \mathbb{R}^{3,H,W},\ (\boldsymbol{v}_{\mathrm{SH}}, \boldsymbol{\theta}_6) \mapsto \boldsymbol{v}_{\mathrm{DN}} = \Phi_{\mathrm{DN}}(\boldsymbol{v}_{\mathrm{SH}})|_{\boldsymbol{k}_{\mathrm{gauss}}=\boldsymbol{\theta}_6} \tag{31}$$

and set $\Theta_6 := \mathbb{R}^{5,5} \times \Theta_5$.

**Gamma correction (GC)** Define the parametrized gamma correction by

$$\Phi^{\mathrm{para}}_{\mathrm{GC}} \colon \mathbb{R}^{3,H,W} \times \mathbb{R} \to [0,1]^{3,H,W},\ (\boldsymbol{v}_{\mathrm{DN}}, \theta_7) \mapsto \boldsymbol{v} = \boldsymbol{v}_{\mathrm{GC}} = \Phi_{\mathrm{GC}}(\boldsymbol{v}_{\mathrm{DN}})|_{\gamma=\theta_7}. \tag{32}$$

Using all the above steps, we define for $\boldsymbol{\theta} = (\boldsymbol{\theta}_1, \ldots, \theta_7) \in \Theta$ the parametrized processing model

$$\Phi^{\mathrm{para}}_{\mathrm{Proc}} \colon [0,1]^{H,W} \times \Theta \to [0,1]^{3,H,W},\ (\boldsymbol{x}_{\mathrm{RAW}}, \boldsymbol{\theta}) \mapsto \boldsymbol{v} \tag{33}$$

by the composition

$$\boldsymbol{v} = \Phi^{\mathrm{para}}_{\mathrm{GC}}(\cdot, \theta_7) \circ \Phi^{\mathrm{para}}_{\mathrm{DN}}(\cdot, \boldsymbol{\theta}_6) \circ \Phi^{\mathrm{para}}_{\mathrm{SH}}(\cdot, \boldsymbol{\theta}_5) \circ \Phi^{\mathrm{para}}_{\mathrm{CC}}(\cdot, \boldsymbol{\theta}_4) \circ \Phi^{\mathrm{para}}_{\mathrm{WB}}(\cdot, \boldsymbol{\theta}_3) \circ \Phi^{\mathrm{para}}_{\mathrm{DM}}(\cdot, \boldsymbol{\theta}_2) \circ \Phi^{\mathrm{para}}_{\mathrm{BL}}(\cdot, \boldsymbol{\theta}_1)(\boldsymbol{x}_{\mathrm{RAW}}). \tag{34}$$

We call $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$ the *parametrized data model*. The operations used above are differentiable except for the clipping operation in the GC that is *a.e.*-differentiable⁶, since the set $\{0,1\}$ of non-differentiable points has measure zero. Assuming in addition that $\mathbb{P}\left((v_{\mathrm{DN}})_{c,h,w} \in \{0,1\}\right) = 0$ holds true for the entries of $\boldsymbol{v}_{\mathrm{DN}}$ results in an *a.e.*-differentiable processing model. We further say that $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$ is differentiable, noting that this holds only *a.e.* under the aforementioned assumption.

## 4.3 Task Models $\Phi_{\mathrm{Task}}$

Finally, with the data models in place, we also employ two task models in the experiments. For the classification task on the Raw-Microscopy dataset an 18-layer residual network (ResNet18) [185] was used as reference task model. To segment cars from the Raw-Drone dataset the convolutional neural network proposed in [186] (U-Net) was used. Both task models were trained using data augmentation to avoid naive robustness failures. A detailed description of the task models and their hyperparameters is given in A.2.

## 5 Applications

With raw data, data models and task models in place we are now in a position to perform advanced controls for dataset drift validation comprising (1) drift synthesis, (2) modular drift forensics and (3) processing adjustments under drift.

## 5.1 Drift Synthesis

The static data model enables physically faithful synthesis of drift test cases: individual components of the data model can be swapped out, allowing the controlled creation of different, physically faithful processed views from one raw reference dataset. A usage scenario of drift synthesis for machine learning researchers and practitioners is the prospective validation of their task model to drift from different camera devices, for example microscopes across different labs, without having to collect measurements from the different devices.
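To make the parametrized data model of Section 4.2.2 concrete before turning to the experiments, the following PyTorch sketch registers each $\boldsymbol{\theta}_i$ as a learnable parameter so that gradients can flow from the task loss back through the processing. It is our simplified illustration, not the released `pipeline_torch.py`: denoising is omitted, sharpening is applied in RGB rather than YUV for brevity, and all initial values are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametrizedPipeline(nn.Module):
    """Minimal sketch of Phi^para_Proc (Eqs. 33-34) with learnable ISP parameters."""

    def __init__(self):
        super().__init__()
        self.bl = nn.Parameter(torch.zeros(4))                 # theta_1, Eq. (24)
        k_plus = torch.tensor([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
        k_full = torch.tensor([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
        self.k_dm = nn.Parameter(torch.stack([k_full, k_plus, k_full]))  # theta_2, Eq. (25)
        self.wb = nn.Parameter(torch.ones(3))                  # theta_3, Eq. (28)
        self.M = nn.Parameter(torch.eye(3))                    # theta_4, Eq. (29)
        self.k_sh = nn.Parameter(torch.tensor([[0., -1., 0.],
                                               [-1., 5., -1.],
                                               [0., -1., 0.]]))  # theta_5, Eq. (30)
        self.gamma = nn.Parameter(torch.tensor(2.2))           # theta_7, Eq. (32)

    def forward(self, x_raw):                                  # x_raw: (B, H, W), RGGB
        rgb = torch.zeros(x_raw.shape[0], 3, *x_raw.shape[1:], device=x_raw.device)
        rgb[:, 0, 0::2, 0::2] = x_raw[:, 0::2, 0::2] - self.bl[0]   # R
        rgb[:, 1, 0::2, 1::2] = x_raw[:, 0::2, 1::2] - self.bl[1]   # G
        rgb[:, 1, 1::2, 0::2] = x_raw[:, 1::2, 0::2] - self.bl[2]   # G
        rgb[:, 2, 1::2, 1::2] = x_raw[:, 1::2, 1::2] - self.bl[3]   # B
        v = F.conv2d(rgb, self.k_dm.unsqueeze(1), padding=1, groups=3)  # Eq. (26)
        v = v * self.wb.view(1, 3, 1, 1)                                # white balance
        v = torch.einsum("ij,bjhw->bihw", self.M, v)                    # color correction
        v = F.conv2d(v, self.k_sh.view(1, 1, 3, 3).expand(3, 1, 3, 3),
                     padding=1, groups=3)                               # sharpening (in RGB here)
        # Eqs. (19)-(20); a small floor avoids infinite gradients at zero.
        return v.clamp(1e-6, 1.0) ** (1.0 / self.gamma)

pipe = ParametrizedPipeline()
v = pipe(torch.rand(2, 64, 64))  # (2, 3, 64, 64), differentiable wrt all pipeline parameters
```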
For each data configuration laid out in Section 4.2, the task models were trained for 100 epochs on image data processed through the training data model. Hyperparameters were kept constant across all runs to isolate the effect of varying the data models. Then, dataset drift test cases were synthesized by processing the raw test data through the remaining eleven data models. The task models were then evaluated on test data from all twelve data models. All results that follow are reported as the mean with error bars over a 5-fold cross-validation⁷. The metrics used to evaluate the task models are accuracy for classification and IoU for segmentation.

The leukocyte classification model, as displayed in the left matrix of Figure 4, has a critical drop for only a few configurations, suggesting that it is relatively robust to processing induced dataset drift except for the (ma,s,me) configuration. Note that diagonal elements serve as reference, corresponding to test data that was processed in the same way as the training data. The segmentation task model (left matrix in Figure 5) displays a more heterogeneous pattern with symmetries for certain combinations of data models, such as (bi,u,me/ga) and (me,s,me/ga), which are mutually destructive to the task model performance. The average performance drop of the task models between train and test data models is from 0.82 to 0.8 for classification and from 0.71 to 0.65 for segmentation. **(Revision #3, Requested change #-)** The results for individual components of the data models can also be directly compared in Figures 4 and 5. For example, to understand how changes in the demosaicing algorithm affect the segmentation model, we can look at the left box in Figure 5 and focus on the column combinations 1-5-9, 2-6-10, 3-7-11 and 4-8-12, where the demosaicing is varied but the other components of the data model stay fixed. Considering the training condition with the (bi,s,me) data model using Bilinear demosaicing (row 1), the task model performance drops from 0.7 (column 1) to 0.67 (column 5) IoU in response to Malvar2004 demosaicing and to 0.66 (column 9) IoU when using Menon2007.

⁶**(Revision #3, Requested change #-)** *a.e.* stands for almost everywhere.
⁷A full description of task model hyperparameters and the experimental setup can be found in Appendix A.2.

![12_image_0.png](12_image_0.png)

Figure 4: 5-fold cross-validation results of the Raw-Microscopy drift synthesis experiments. Each cell contains the average accuracy with a color coded border for the standard deviation. Task models were trained on the data models on the vertical axis and then tested on processed data as indicated on the horizontal axis. **(Revision #2, Requested change #3)** Numbers 1-3 left of the vertical axis denote the ranking of task models according to their average accuracy across all test pipelines and corruptions, respectively. Stars denote the train pipeline under which the task model performed best on the respective test pipeline/corruption. Full ranking results can be found in Tables 7 to 9 of Appendix A.4. Top-left: Varying the data model leads to mild performance drops except (ma,s,me). Diagonal is $\Phi_{\mathrm{Proc}} = \tilde{\Phi}_{\mathrm{Proc}}$. Top-right: Comparison to the corruption benchmark at medium severity (level 3). The average performance drop is more than thirteen times higher compared to data model variations. First column is $\Phi_{\mathrm{Proc}} = \tilde{\Phi}_{\mathrm{Proc}}$. Bottom: Visual inspection of worst case train/test pipelines.
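Schematically, the cross-evaluation protocol behind Figures 4 and 5 reduces to a 12 × 12 loop. In the sketch below, `process_dataset`, `train` and `evaluate` are hypothetical helpers standing in for the data model application of Section 4.2 and the task model training and evaluation of Section 4.3:

```python
import itertools
import numpy as np

# The twelve configurations: demosaicing x sharpening x denoising (Figure 3b).
demosaicing = ["bi", "me", "ma"]
sharpening = ["s", "u"]
denoising = ["me", "ga"]
configs = [",".join(c) for c in itertools.product(demosaicing, sharpening, denoising)]

def drift_synthesis_matrix(raw_train, raw_test, process_dataset, train, evaluate):
    """Train on each data model, test on all twelve; returns the score matrix."""
    scores = np.zeros((len(configs), len(configs)))
    for i, train_cfg in enumerate(configs):
        model = train(process_dataset(raw_train, train_cfg))
        for j, test_cfg in enumerate(configs):
            scores[i, j] = evaluate(model, process_dataset(raw_test, test_cfg))
    return scores  # the diagonal corresponds to Phi_Proc == Phi~_Proc
```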
Under augmentation testing with the Common Corruptions Benchmark [43], corruptions such as Gaussian blur are applied to already processed images $\boldsymbol{v}$. **(Revision #2, Requested change #3)** Only those corruptions that can plausibly be related to the ISP were used in this comparison. Others, such as Fog, Spatter, Motion, Snow and Frost, were excluded⁸. In contrast to physically faithful test data, the performance drops under corruptions are more severe across the board: from 0.82 to 0.55 for classification and from 0.71 to 0.49 for segmentation⁹. This is more than thirteen and four times as much as for the physically faithful drifts synthesized with the data models considered here. **(Revision #2, Requested change #3)** Similarly, the conclusions for model selection diverge depending on whether physically faithful data or corruptions are used. In terms of the average performance across all test conditions, none of the top-3 ranking training data models overlap between ISP and common corruptions on the classification task. For segmentation, only one of the training data models (bi,s,ga) overlaps in the top-3 under ISP and common corruptions. Similarly, the training data models under which task models perform best in individual testing conditions vary widely between ISP and common corruptions, both for classification and segmentation.

**(Revision #2, Requested change #3)** We argue that under such circumstances, when the conclusions we arrive at diverge between two synthetic robustness testing protocols, data models are preferable because the data generating process is physically faithful. It is transparent and explicit what steps in this process change between shift views that are used for testing, allowing extrapolation to real-world deployment environments matching these data models. As common corruptions have no metrological specification, it is challenging to relate them to physically faithful data synthesis in a one-to-one comparison. We do however provide a purely qualitative matching heuristic based on visual perception of the drift artifacts in Figure 11 (Appendix A.4).

⁸A comparative overview of included and excluded corruptions can be found in Figure 11 of Appendix A.4.
⁹Results at additional severity levels for the common corruptions can be found in Appendix A.4.

![13_image_0.png](13_image_0.png)

Figure 5: 5-fold cross-validation results of the Raw-Drone drift synthesis experiments. Each cell contains the average IoU with a color coded border for the standard deviation. Task models were trained on the data model on the vertical axis and then tested on processed data as indicated on the horizontal axis. **(Revision #2, Requested change #3)** Numbers 1-3 left of the vertical axis denote the ranking of task models according to their average IoU across all test pipelines and corruptions, respectively. Stars denote the train pipeline under which the task model performed best on the respective test pipeline/corruption. Full ranking results can be found in Tables 7, 10 and 11 of Appendix A.4. Left: Varying the data model leads to mixed performance drops. Diagonal is $\Phi_{\mathrm{Proc}} = \tilde{\Phi}_{\mathrm{Proc}}$. Right: Comparison to the corruption benchmark at medium severity (level 3). The average performance drop is more than four times higher compared to data model variations. First column is $\Phi_{\mathrm{Proc}} = \tilde{\Phi}_{\mathrm{Proc}}$. Bottom: Visual inspection of worst case train/test pipelines.
The qualitative difference between physically faithful drift test cases and augmentation testing can also be appreciated in the samples of the bottom rows of Figures 4 and 5. For each task we display a sample from the drift test configuration with the worst case performance drop between train and test data conditions. We show the sample viewed from the training data model (A), the test data model (B), and the difference between both (|A−B|) along the red, green and blue channel. For both tasks, the drift artifacts (|A−B|) are more localized than the artifacts obtained from augmentation testing. This makes sense, as changes in the composition of the test data models $\Phi_{\mathrm{Proc}}$ maintain the physical faithfulness of the remaining data model, whereas augmentation testing spreads noise globally across all pixels, which is not guaranteed to be physically faithful. Why does physical faithfulness matter for dataset drift testing? A test result is only as reliable as its constituting parts. If we are to rely on robustness test results to decide whether to use a task model in a certain data environment or not, we need to ensure the test cases represent real-world data models. If the test cases are not physically faithful, the results based on them are of limited use to make decisions.

## 5.2 Drift Forensics

Similarly, clear specification of the limitations of use is a mandated requirement for many products that can potentially contain machine learning components, such as software as a medical device [105, 106] or autonomous vehicles [187]. Without knowledge and control over the data acquisition process, in practice this can be difficult to achieve. Raw data combined with a differentiable data model mitigates that challenge. $\Phi^{\mathrm{para}}_{\mathrm{Proc}}$ enables the analysis of the task model's susceptibility to dataset drift in an interpretable manner using adversarial search. Related work, such as [164], also uses a differentiable raw processing pipeline to propagate the gradient information back to the raw image. There, however, the signal is used in a classical adversarial setup, to optimize adversarial noise on a per-image basis. Here, gradient updates are not applied to individual images, but to the data model parameters. The goal of such an analysis is to identify the parameter configurations of the data model under which the task model should not be operated. The resulting adjustments correspond to plausible changes which reflect changes in the data model, for example due to changing camera ISPs. In order to limit the parameter ranges, we chose an explicit constraint in the RGB space:

$$\underset{\tilde{\boldsymbol{\theta}} \in \Theta}{\mathrm{minimize}} \quad \lambda \|\boldsymbol{V} - \tilde{\boldsymbol{V}}\|_{2}^{2} - \mathcal{L}(\tilde{\boldsymbol{V}}, \boldsymbol{Y}), \tag{35}$$

where $\boldsymbol{V} = \Phi^{\mathrm{para}}_{\mathrm{Proc}}(\boldsymbol{X}_{\mathrm{RAW}}, \boldsymbol{\theta})$ are the RGB images obtained from the original data model and $\tilde{\boldsymbol{V}} = \Phi^{\mathrm{para}}_{\mathrm{Proc}}(\boldsymbol{X}_{\mathrm{RAW}}, \tilde{\boldsymbol{\theta}})$ are the RGB images obtained from adversarial search on the data model parameters. Equation (35) maximizes the classification loss under a relaxed $\ell_2$-constraint controlled by the hyperparameter $\lambda \geq 0$. This procedure yields data model parameters that deteriorate the task model performance while keeping the measured distortion minimal and within the constraints of physical faithfulness. All of the pipeline's parameters are optimized jointly to search for a task model's overall data model related weaknesses.
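A minimal PyTorch rendering of the search in Equation (35), reusing the `ParametrizedPipeline` sketch from Section 4.2.2 above; `task_model`, `loader` and the hyperparameter values are assumptions, and only the data model parameters receive gradient updates:

```python
import torch

def drift_forensics(pipeline_ref, pipeline_adv, task_model, loader,
                    lam=1e3, epochs=20, lr=1e-4):
    """Adversarial search over data model parameters per Eq. (35).

    pipeline_adv should start from the reference parameters (e.g., a deep copy
    of pipeline_ref); the task model stays frozen throughout."""
    task_model.eval()
    for p in task_model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(pipeline_adv.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_raw, y in loader:
            v_ref = pipeline_ref(x_raw).detach()   # view under original theta
            v_adv = pipeline_adv(x_raw)            # view under searched theta~
            # Eq. (35): keep views close in l2 while maximizing the task loss.
            objective = lam * (v_adv - v_ref).pow(2).mean() \
                        - loss_fn(task_model(v_adv), y)
            opt.zero_grad()
            objective.backward()
            opt.step()
    return pipeline_adv  # a harmful data model configuration
```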
Targeting select parameters is also possible and provides insight into a parameter's effect on the task model's performance. The plot in the top left of Figure 6 shows the sensitivity of the task model accuracy to a variety of targeted parameter choices. With increasing relaxation of the ℓ2-regularization, the accuracy declines, exposing configurations under which the task model deteriorates. As is to be expected, the setting allowing all parameters to be altered shows the biggest effect on the resulting performance. Individually, changes in the black level configuration Φ^para_BL and the denoising parameters Φ^para_DN pose the greatest risk to task model performance, requiring a higher relaxation weight before they are able to affect the outcome of the task. For comparison, the plot in the top right of Figure 6 shows the regularization weight λ against the resulting ℓ2-norm. Interestingly, a higher norm in the resulting RGB images does not directly translate to the most severe performance degradation of the task model. At ℓ2 = 10⁻⁵, changes in the Gaussian blur parameters induce a norm almost twice as large as the changes in the black level parameters. However, the corresponding drop in accuracy caused by Gaussian blur is around one third less relative to black level. Similarly, at ℓ2 = 10⁻⁵, the sharpening filter parameters incur a measurable norm but do not lead to accuracy drops of the task model. This underscores the importance of precise data models for dataset drift validation: physically faithful yet small changes in processed images, as visible in the samples in the bottom row of Figure 6, can have a larger impact on the performance than large changes.

![15_image_0.png](15_image_0.png)

Figure 6: Top left: Test accuracy on the Raw-Microscopy test set after 20 epochs of adversarial search in the data model for varying regularization weight parameters λ. The individual plots depict the various pipeline parameter selections. Top right: ℓ2-norm (of processed images between the adversarially trained Φ̃^para_Proc and the default Φ^para_Proc) versus attained accuracy of the task model. The metrics are evaluated on the test set after 20 epochs of adversarial optimization for varying regularization weight parameters λ. The individual plots depict the various data model parameter selections. A lower regularization results in a bigger search space for the adversarial optimization. Bottom: Processed samples from the drift forensics after 20 epochs with varying regularization weights λ.

Here we demonstrate drift forensics on the classification task because we suspect it is the setting where forensics can be particularly useful: regression models, in contrast to classification models, are less susceptible to instabilities, since classification problems are inherently discontinuous while inverse problems allow for more stable solutions [188]. Additional drift forensics results for the segmentation task model on Raw-Drone data can be found in Appendix A.4.2; there, the performance drops are, as expected, less severe.

A practical use case of drift forensics looks as follows: party A develops and trains a model and then licenses it to party B for use. Party B wants to know under which data conditions the model performs well and under which conditions it should not be used.
Party A runs drift forensics and provides party B with a forensic signature, as in Figure 6, detailing which parameters in the processing can be changed and which should not be touched. Party B can use this information to calibrate their data processing and knows which data settings to avoid for the specific task model.

## 5.3 Drift Adjustments

In the previous two experiments we demonstrated how raw data and a differentiable data model can be used to identify, and then modularly test for, unfavorable data models that should be avoided during deployment of the machine learning task model. The same mechanics can also be exploited to adjust the task model under dataset drift. In the drift adjustment setting, the gradient from the task model ΦTask is propagated into the data model ΦProc to jointly optimize both of them.

![16_image_0.png](16_image_0.png)

Figure 7: Low (a) and high (b) intensity images processed by a *frozen* and a *learned* pipeline. This type of drift adjustment would not be possible with the processed data that is typically used for machine learning experiments. The plots in the rightmost column of each block display the mean of validation metrics over five cross-validation runs. Error bars are reported as one standard deviation. Optimization steps 1439 and 915 correspond to epoch 60 of training.

In the drift adjustment experiment, a parametrized data model Φ^para_Proc is paired with a task model. As a form of drift, the task model is trained with very *low intensity* (0.001) raw data x_RAW that is processed through Φ^para_Proc. In the *learned* setting, the data model parameters are jointly optimized with the task model parameters. In the *frozen* setting, only the task model parameters are optimized and the data model parameters are kept fixed10.

The left column (a) of Figure 7 compares these two scenarios. The *learned* data model is better able to accommodate the dataset drift, as visible in the improved stability of the learning trajectory. This is indicated by the blue line, which displays the validation accuracy against optimization steps for the first half of training (step 1439 corresponds to epoch 60). It exceeds that of the *frozen* data model (red line) by up to 25 percentage points in accuracy at a lower variance. In fact, the processed image from a *learned* data model (see the *learned* column in block (a) of Figure 7 for an example) can contain visible artifacts that aid stability and generalization vis-à-vis the image from the *frozen* baseline data model, which, arguably, looks cleaner to the human eye. A possible explanation for the improved learning trajectory could be that a varying processing pipeline automatically generates samples akin to data augmentation. Such uses could be explored further in scarce-data settings like fine-tuning, semi-supervised or few-shot learning. Having gradient access to the data model thus offers the opportunity to optimize data generation itself for a given machine learning task. If learned data models are to be applied in real-world applications, it thus appears likely that a tradeoff has to be made between human-perceived visual quality and artifacts that can be helpful to the task model.
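Under the assumption that the data model is again a differentiable `torch.nn.Module` and that the intensity drift can be emulated by scaling the raw signal, the *frozen* and *learned* settings of Figure 7 can be sketched as follows; the names (`drift_adjustment`, `raw_loader`) and defaults are illustrative, not the released training configuration.

```python
import torch
import torch.nn.functional as F

def drift_adjustment(pipeline, task_model, raw_loader,
                     intensity=0.001, learned=True, lr=1e-4, epochs=60):
    """Train a task model on low-intensity raw data, optionally co-training Φ^para_Proc."""
    params = list(task_model.parameters())
    if learned:
        params += list(pipeline.parameters())  # jointly optimize data and task model
    else:
        for p in pipeline.parameters():        # frozen: data model stays at its defaults
            p.requires_grad_(False)
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x_raw, y in raw_loader:
            v = pipeline(intensity * x_raw)    # drifted measurement, then processing
            loss = F.cross_entropy(task_model(v), y)
            opt.zero_grad()
            loss.backward()                    # in the learned setting the gradient
            opt.step()                         # flows through the data model as well
    return pipeline, task_model
```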
For the segmentation task (bottom row of Figure 7) the stabilization effect is not observable. This could be due to the low resolution of the problem itself, as the processing may not have a large effect on enhancing the solid blocks of cars in the raw data, and to evidence suggesting that inverse problems are inherently less unstable [188]. Similar outcomes for stability and artifacts can also be observed in the reverse situation (high intensity 1.0 x_RAW) in the right column (b) of Figure 7. We demonstrated how parametrized data models can be used to control drift under physically faithful constraints. Going beyond physically faithful drift controls, an interesting future extension of these experiments is training directly on raw data to optimize task model performance. Additional results illustrating two learning trajectories in this setting can be found in Appendix A.4.3.

10 The initialization of Φ^para_Proc (both *frozen* and *learned*) is set to standard values, which can be found in Appendix A.1 as well as in pipeline_torch.py of the code.

## 6 Discussion

The main message we hope to convey in this manuscript is this: black-box data models for images do not have to be the norm in machine learning research and engineering. Leveraging established knowledge from physical optics enables us to push the modelling goalpost further towards machine learning's core ingredient: the data. Paired with raw data, precise differentiable data models for images allow for advanced control of dataset drift, a common and far-reaching challenge across many machine learning disciplines. Interesting uses beyond robustness validation, in areas of machine learning that are held back by black-box data, also appear opportune.

Drift synthesis allows the physically faithful synthesis of drift test cases. In contrast to augmentation testing, the performance drops for physically faithful test cases are less severe across the board for both use cases in our experiments. The difference between physically faithful and augmentation drift test cases can also be appreciated qualitatively: the former maintain the noise structure of the data model composition, while the latter spread noise globally across all pixels, which is not guaranteed to adhere to real-world measurements and their processing. A plausible practical application scenario of drift synthesis for machine learning researchers and practitioners is the prospective validation of their task model against drift from different camera devices, for example microscopes across different lab sites or autonomous vehicles, without having to collect measurements from the different devices. Drift synthesis could also be interesting for other application domains that rely on data synthesis (semi- [88–90] and self-supervised learning [91, 92]) or on precise data models (aleatoric uncertainty quantification [72–82], out-of-distribution detection [34, 83–87]). While we cross-validated a substantial number of data model variations in our experiments, it should be noted that further variations, for example by reordering or adding steps, are possible. Furthermore, it should not be overlooked that dataset drift can also be caused by factors outside the ISP data model, for example the optical components of a camera. Our current data models are not yet capable of capturing factors that go beyond the ISP. Integrating work from lens manufacturing [150] to expand the reach of explicit data models offers a promising next step for drift synthesis.
Drift forensics allows the precise specification of a data model's limitations of use for a given machine learning task model. Data models under which the task model should not be operated can be identified by gradient search and then documented. In our demonstration, the setting allowing all parameters to be altered shows the biggest effect on the resulting performance. Individually, changes in the black level configuration and the denoising parameters pose the greatest risk to the performance of the task model at hand. Interestingly, a higher norm in the resulting RGB images does not directly translate to the most severe performance degradation of the task model. This underscores the importance of precise data models for dataset drift validation. In practice, clear specification of the limitations of use is a mandated requirement for many products that can potentially contain machine learning components, such as software as a medical device [105, 106] or autonomous vehicles [187]. Drift forensics with explicit data models can help to align machine learning and data engineering with such regulatory constraints. Explicit data models combined with gradient search may also be interesting to explore in areas such as formal model verification [55–71] to obtain tighter error bounds. A caveat to be noted is that our experiments were carried out only under an ℓ2-constraint. Other constraints are feasible, depending on the particular use case to be analyzed, and can be plugged into our code11.

We also showed how differentiable data models can be used for drift adjustments, where the data model parameters are jointly optimized with the task model parameters. This leads to improved stability of the learning trajectory on the classification task in both directions (low and high intensity measurements). Interestingly, the processed image from a *learned* data model can contain visible artifacts that aid stability and generalization vis-à-vis the image from the *frozen* baseline data model, which arguably looks cleaner to the human eye. In practice, the extension of the gradient connection from the task model ΦTask to the data model ΦProc extends machine learning right into the data generating process. Thus, data generation itself can be optimized to best suit the task model at hand. Furthermore, the stabilization effect could prove useful for learning problems where training is costly and speedup precious (for example large models or large datasets). This capacity could also be exploited in other areas that deal with heterogeneous training or deployment environments, such as different clients in federated learning [97–99] or domain adaptation techniques [189]. However, the above drift adjustment benefits could only be observed for the classification task, not the regression task, possibly due to the low resolution of the segmentation problem. How far we can push the gradient into the real world is an interesting future direction for data modelling. Including more parts of the data acquisition hardware in the data model, and consequently in the machine learning optimization pipeline, appears feasible [190] and represents an important next step in aligning machine learning with real-world data infrastructures.

11 Argument args.adv_aux_loss in train.py.
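Following up on the constraint caveat above, a purely illustrative sketch of how such a pluggable constraint could look is given below; the released code selects the auxiliary loss via args.adv_aux_loss in train.py, but the concrete options named here are assumptions, not its documented values.

```python
import torch

def aux_loss(v, v_tilde, kind="l2"):
    """Distortion constraint between default (v) and adversarial (v_tilde) RGB images."""
    if kind == "l2":      # the constraint used in our experiments, cf. Eq. (35)
        return torch.sum((v - v_tilde) ** 2)
    if kind == "l1":      # encourages sparser, more localized distortions
        return torch.sum(torch.abs(v - v_tilde))
    if kind == "linf":    # bounds the worst-case per-pixel deviation
        return torch.amax(torch.abs(v - v_tilde))
    raise ValueError(f"unknown constraint: {kind}")
```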
Finally, raw data for representative machine learning tasks, which is already routinely used in optical industries [125–130], has to become more accessible to researchers in order to align robustness research with physically faithful data models and infrastructures. While most optical imaging devices support the extraction of raw data, and this procedure is well established in industry and physics, data collection procedures for machine learning robustness research still have to catch up in order to make raw datasets and their benefits more widely available. Norms around established benchmarking datasets of processed images, such as CIFAR or ImageNet, can slow down this progress. To that end, we collected and publicly release two raw image datasets in the camera sensor state. The assumptions with respect to the practicality of the procedures we propose here are mild in our eyes. Raw subsets of data could be stored and then pulled in-code from cloud storage, as demonstrated in the code that we provide, for the purposes of drift synthesis or drift forensics. Learned data models obtained from drift adjustments could be calibrated directly on hardware such that the bandwidth requirements would not change compared to current image acquisition and transmission. Granted, the size of Raw-Microscopy and Raw-Drone is still limited because data collection is expensive in both time and money. Better APIs to optical hardware would allow more researchers and industries to make their raw data accessible.

**Use of Personal Data and Human Subjects** The microscopy slides were purchased from a commercial lab vendor (J. Lieder GmbH & Co. KG, Ludwigsburg/Germany) who obtained consent. The drone dataset does not directly relate to people. Instances with potential PIIs such as faces or license plates were removed. Full datasheet documentation following [177] can be found in Appendix A.5.

**Negative Societal Impact** Machine learning risk management, such as the drift controls presented here, can make ML deployment possible and safer. More deployment translates to increases in automation. A net risk-benefit analysis of automation is beyond the scope of this manuscript. What we do know is that steel can be cast into ploughs and swords. We are against the use of our findings for the latter purpose.

## References

[1] CL Wilson and MD Garris. Handprinted character database. Technical report special database 1. Technical report, National Institute of Standards and Technology, 1990. 1 [2] MD Garris and RA Wilkinson. Handwritten segmented characters database. Technical report special database 3. Technical report, National Institute of Standards and Technology, 1992. [3] Michael Garris. Design, collection, and analysis of handwriting sample image databases. (31), 1994-08-10 1994. URL https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=906483. [4] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998. 1 [5] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. 1 [6] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. volume 115, pages 211–252, USA, dec 2015. Kluwer Academic Publishers.
doi: 10.1007/ s11263-015-0816-y. URL **https://doi.org/10.1007/s11263-015-0816-y**. [7] **Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional** neural networks. *Commun. ACM***, 60(6):84–90, may 2017. ISSN 0001-0782. doi: 10.1145/3065386. URL** https://doi.org/10.1145/3065386. 1 [8] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings* of International Conference on Computer Vision (ICCV)**, December 2015. 1** [9] **Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality,** stability, and variation. *arXiv preprint arXiv:1710.10196***, 2017. 1** [10] **Hamed Valizadegan, Miguel J. S. Martinho, Laurent S. Wilkens, Jon M. Jenkins, Jeffrey C. Smith, Douglas A.** Caldwell, Joseph D. Twicken, Pedro C. L. Gerum, Nikash Walia, Kaylie Hausknecht, Noa Y. Lubin, Stephen T. Bryson, and Nikunj C. Oza. ExoMiner: A highly accurate and explainable deep learning classifier that validates 301 new exoplanets. *The Astrophysical Journal***, 926(2):120, feb 2022. doi: 10.3847/1538-4357/ac4399. URL** https://doi.org/10.3847/1538-4357/ac4399. 2 [11] **Thomas J Fuchs and Joachim M Buhmann. Computational pathology: challenges and promises for tissue** analysis. *Computerized Medical Imaging and Graphics***, 35(7-8):515–530, 2011. 2** [12] **T Terwilliger and MJBCJ Abdul-Hay. Acute lymphoblastic leukemia: a comprehensive review and 2017 update.** Blood cancer journal**, 7(6):e577–e577, 2017.** [13] **Christian Matek, Simone Schwarz, Karsten Spiekermann, and Carsten Marr. Human-level recognition of blast** cells in acute myeloid leukaemia with convolutional neural networks. *Nature Machine Intelligence***, 1(11):538–544,** 2019. 7 [14] **Qiwei Wang, Shusheng Bi, Minglei Sun, Yuliang Wang, Di Wang, and Shaobao Yang. Deep learning approach** to peripheral leukocyte recognition. *PloS one***, 14(6):e0218808, 2019. 2** [15] **Daisuke Komura and Shumpei Ishikawa. Machine learning methods for histopathological image analysis.** Computational and Structural Biotechnology Journal**, 16:34–42, 2018. ISSN 2001-0370. doi: https://doi.org/10.** 1016/j.csbj.2018.01.001. URL https://www.sciencedirect.com/science/article/pii/S2001037017300867. 2 [16] **Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick** Klauschen, Klaus-Robert Müller, and Alexander Binder. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. *Scientific Reports***, 10(1):1–12, 2020. 2** [17] **MacIej Wojtkowski, Tomasz Bajraszewski, Iwona Gorczyńska, Piotr Targowski, Andrzej Kowalczyk, Wojciech** Wasilewski, and Czesław Radzewicz. Ophthalmic imaging by spectral optical coherence tomography. *American* Journal of Ophthalmology**, 138(3):412–419, 2004. ISSN 0002-9394. doi: https://doi.org/10.1016/j.ajo.2004.04.049.** URL https://www.sciencedirect.com/science/article/pii/S0002939404004635. 2 [18] **Daniel Shu Wei Ting, Louis R Pasquale, Lily Peng, John Peter Campbell, Aaron Y Lee, Rajiv Raman, Gavin** Siew Wei Tan, Leopold Schmetterer, Pearse A Keane, and Tien Yin Wong. Artificial intelligence and deep learning in ophthalmology. *British Journal of Ophthalmology***, 103(2):167–175, 2019. ISSN 0007-1161. doi:** 10.1136/bjophthalmol-2018-313173. URL **https://bjo.bmj.com/content/103/2/167**. [19] Yan Tong, Wei Lu, Yue Yu, and Yin Shen. Application of machine learning in ophthalmic imaging modalities. 
*Eye and Vision*, 7(1):1–15, 2020. 2 [20] MT Makler, CJ Palmer, and AL Ager. A review of practical techniques for the diagnosis of malaria. *Annals of Tropical Medicine and Parasitology*, 92(4):419–433, 1998. 2 [21] Mahdieh Poostchi, Kamolrat Silamut, Richard J. Maude, Stefan Jaeger, and George Thoma. Image analysis and machine learning for detecting malaria. *Translational Research*, 194:36–55, 2018. ISSN 1931-5244. doi: https://doi.org/10.1016/j.trsl.2017.12.004. URL https://www.sciencedirect.com/science/article/pii/S193152441730333X. In-Depth Review: Diagnostic Medical Imaging. [22] KM Fuhad, Jannat Ferdousey Tuba, Md Sarker, Rabiul Ali, Sifat Momen, Nabeel Mohammed, and Tanzilur Rahman. Deep learning based automatic malaria parasite detection from blood smear and its smartphone based application. *Diagnostics*, 10(5):329, 2020. [23] Rose Nakasi, Ernest Mwebaze, Aminah Zawedde, Jeremy Tusubira, Benjamin Akera, and Gilbert Maiga. A new approach for microscopic diagnosis of malaria parasites in thick blood smears using pre-trained deep learning models. *SN Applied Sciences*, 2(7):1–7, 2020. 2 [24] Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, et al. Deep learning and process understanding for data-driven earth system science. *Nature*, 566(7743):195–204, 2019. 2 [25] David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Sasha Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla P. Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio. Tackling climate change with machine learning. *ACM Comput. Surv.*, 55(2), feb 2022. ISSN 0360-0300. doi: 10.1145/3485128. URL https://doi.org/10.1145/3485128. [26] Qiangqiang Yuan, Huanfeng Shen, Tongwen Li, Zhiwei Li, Shuwen Li, Yun Jiang, Hongzhang Xu, Weiwei Tan, Qianqian Yang, Jiwen Wang, Jianhao Gao, and Liangpei Zhang. Deep learning in environmental remote sensing: Achievements and challenges. *Remote Sensing of Environment*, 241:111716, 2020. ISSN 0034-4257. doi: https://doi.org/10.1016/j.rse.2020.111716. URL https://www.sciencedirect.com/science/article/pii/S0034425720300857. 2 [27] John Quinn, Vanessa Frias-Martinez, and Lakshminarayan Subramanian. Computational sustainability and artificial intelligence in the developing world. *AI Magazine*, 35(3):36–47, Sep. 2014. doi: 10.1609/aimag.v35i3.2529. URL https://ojs.aaai.org/index.php/aimagazine/article/view/2529. 2 [28] Vittorio Mazzia, Lorenzo Comba, Aleem Khaliq, Marcello Chiaberge, and Paolo Gay. Uav and machine learning based refinement of a satellite-driven vegetation index for precision agriculture. *Sensors*, 20(9), 2020. ISSN 1424-8220. doi: 10.3390/s20092530. URL https://www.mdpi.com/1424-8220/20/9/2530. [29] Abhinav Sharma, Arpit Jain, Prateek Gupta, and Vinay Chowdary. Machine learning applications for precision agriculture: A comprehensive review. *IEEE Access*, 9:4843–4873, 2021. doi: 10.1109/ACCESS.2020.3048415. 2 [30] Odei Garcia-Garin, Toni Monleón-Getino, Pere López-Brosa, Asunción Borrell, Alex Aguilar, Ricardo Borja-Robalino, Luis Cardona, and Morgana Vighi.
Automatic detection and quantification of floating marine macro-litter in aerial images: Introducing a novel deep learning approach connected to a web application in r. Environmental Pollution**, 273:116490, 2021. ISSN 0269-7491. doi: https://doi.org/10.1016/j.envpol.2021.116490.** URL https://www.sciencedirect.com/science/article/pii/S0269749121000683. 2 [31] **Ferda Ofli, Patrick Meier, Muhammad Imran, Carlos Castillo, Devis Tuia, Nicolas Rey, Julien Briant, Pauline** Millet, Friedrich Reinhard, Matthew Parkan, et al. Combining human computing and machine learning to make sense of big (aerial) data for disaster response. *Big data***, 4(1):47–59, 2016.** [32] **Monique M Kuglitsch, Ivanka Pelivan, Serena Ceola, Mythili Menon, and Elena Xoplaki. Facilitating adoption** of ai in natural disaster management through collaboration. *Nature communications***, 13(1):1–3, 2022. 2** [33] **Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob** Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199***, 2013. 2** [34] **Lukas Ruff, Jacob R Kauffmann, Robert A Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft,** Thomas G Dietterich, and Klaus-Robert Müller. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE**, 2021. 2, 18** [35] **Adarsh Subbaswamy, Roy Adams, and Suchi Saria. Evaluating model robustness and stability to dataset shift. In** Arindam Banerjee and Kenji Fukumizu, editors, **Proceedings of The 24th International Conference on Artificial** Intelligence and Statistics, volume 130 of *Proceedings of Machine Learning Research***, pages 2611–2619. PMLR,** 13–15 Apr 2021. URL https://proceedings.mlr.press/v130/subbaswamy21a.html. 2 [36] **Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and** Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. **Nature** communications**, 10(1):1–8, 2019. 2** [37] **Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander** Madry. Adversarial examples are not bugs, they are features. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL **https://proceedings.neurips.cc/paper/2019/file/** e2c420d928d4bf8ce0ff2ec19b371514-Paper.pdf. [38] **Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge,** and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence***, 2(11):665–673,** 2020. [39] **Emma Pierson, David M Cutler, Jure Leskovec, Sendhil Mullainathan, and Ziad Obermeyer. An algorithmic** approach to reducing unexplained pain disparities in underserved populations. *Nature Medicine***, 27(1):136–140,** 2021. [40] **Judy Wawira Gichoya, Imon Banerjee, Ananth Reddy Bhimireddy, John L Burns, Leo Anthony Celi, Li-Ching** Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P Lungren, Lyle J Palmer, Brandon J Price, Saptarshi Purkayastha, Ayis T Pyrros, Lauren Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, and Haoran Zhang. Ai recognition of patient race in medical imaging: a modelling study. *The Lancet Digital Health***, 2022. ISSN 2589-** 7500. doi: https://doi.org/10.1016/S2589-7500(22)00063-2. 
URL **https://www.sciencedirect.com/science/** article/pii/S2589750022000632. 2 [41] A.A. Minai and R.D. Williams. Perturbation response in feed-forward neural networks. In **[Proceedings** 1992] IJCNN International Joint Conference on Neural Networks**, volume 3, pages 857–862 vol.3, 1992. doi:** 10.1109/IJCNN.1992.227092. 2 [42] **Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, and Marta Kwiatkowska. Global** robustness evaluation of deep neural networks with provable guarantees for the hamming distance. In **Proceedings** of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19**, pages 5944–5952.** International Joint Conferences on Artificial Intelligence Organization, 7 2019. doi: 10.24963/ijcai.2019/824. URL **https://doi.org/10.24963/ijcai.2019/824**. [43] **Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and** perturbations. *Proceedings of the International Conference on Learning Representations***, 2019. 3, 5, 13, 42, 43,** 44 [44] **Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, and Xiang Chen. Interpreting and evaluating** neural network robustness. In *Proceedings of the 28th International Joint Conference on Artificial Intelligence*, IJCAI'19, page 4199–4205. AAAI Press, 2019. ISBN 9780999241141. [45] **Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan.** AugMix: A simple data processing method to improve robustness and uncertainty. *Proceedings of the International* Conference on Learning Representations (ICLR)**, 2020.** [46] **Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. No subclass left behind:** Fine-grained robustness in coarse-grained classification problems. *Advances in Neural Information Processing* Systems**, 33:19339–19352, 2020.** [47] **Karan Goel, Albert Gu, Yixuan Li, and Christopher Re. Model patching: Closing the subgroup performance** gap with data augmentation. In *International Conference on Learning Representations*, 2021. URL **https:** //openreview.net/forum?id=9YlaeLfuhJF. 5 [48] **Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural** networks for group shifts: On the importance of regularization for worst-case generalization. **arXiv preprint** arXiv:1911.08731**, 2019.** [49] **Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani,** Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina Meila and Tong Zhang, editors, *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research***, pages 5637–5664. PMLR, 18–24 Jul 2021. URL** https://proceedings.mlr.press/v139/koh21a.html. 3 [50] **Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung** Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. 
[51] **Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge.** Improving robustness against common corruptions by covariate shift adaptation. **Advances in Neural Information** Processing Systems**, 33:11539–11551, 2020.** [52] **Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry.** Adversarial examples are not bugs, they are features. *Advances in neural information processing systems***, 32,** 2019. [53] **Saul Calderon Ramirez, Luis Oala, Jordina Torrentes-Barrena, Shengxiang Yang, David Elizondo, Armaghan** Moemeni, Simon Colreavy-Donnelly, Wojciech Samek, Miguel Molina-Cabello, and Ezequiel Lopez-Rubio. Dataset similarity to assess semi-supervised learning under distribution mismatch between the labelled and unlabelled datasets. *IEEE Transactions on Artificial Intelligence***, 2022.** [54] **Frederick M Howard, James Dolezal, Sara Kochanny, Jefree Schulte, Heather Chen, Lara Heij, Dezheng Huo,** Rita Nanda, Olufunmilayo I Olopade, Jakob N Kather, et al. The impact of site-specific digital histology signatures on deep learning model accuracy and bias. *Nature communications***, 12(1):1–13, 2021. 2** [55] **Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin,** and Cho-Jui Hsieh. Automatic perturbation analysis for scalable certified robustness and beyond. *Advances in* Neural Information Processing Systems**, 33:1129–1141, 2020. 2, 18** [56] Linyi Li, Xiangyu Qi, Tao Xie, and Bo Li. Sok: Certified robustness for deep neural networks. *arXiv preprint* arXiv:2009.04131**, 2020.** [57] **Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, and Liwei** Wang. Macer: Attack-free and scalable robust training via maximizing certified radius. In *International* Conference on Learning Representations**, 2019.** [58] **Jongheon Jeong and Jinwoo Shin. Consistency regularization for certified robustness of smoothed classifiers.** Advances in Neural Information Processing Systems**, 33:10558–10570, 2020.** [59] **Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to** tight robustness verification of neural networks. *Advances in Neural Information Processing Systems***, 32, 2019.** [60] **Sven Gowal, Krishnamurthy Dj Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato,** Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. Scalable verified training for provably robust image classification. In *Proceedings of the IEEE/CVF International Conference on Computer Vision***, pages 4842–4851,** 2019. [61] **Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang.** Provably robust deep learning via adversarially trained smoothed classifiers. *Advances in Neural Information* Processing Systems**, 32, 2019.** [62] **Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy R** Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy S Liang, et al. Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming. *Advances in Neural Information* Processing Systems**, 33:5318–5331, 2020.** [63] **Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In** International Conference on Machine Learning**, pages 1310–1320. 
PMLR, 2019.** [64] **Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, and Bo Li. On the certified robustness for ensemble** models and beyond, 2021. URL **https://openreview.net/forum?id=IUYthV32lbK**. [65] Julian Bitterwolf, Alexander Meinke, and Matthias Hein. Certifiably adversarially robust detection of out-ofdistribution data. *Advances in Neural Information Processing Systems***, 33:16085–16095, 2020.** [66] **Shafi Goldwasser, Guy N Rothblum, Jonathan Shafer, and Amir Yehudayoff. Interactive proofs for verifying** machine learning. In *12th Innovations in Theoretical Computer Science Conference (ITCS 2021)***. Schloss** Dagstuhl-Leibniz-Zentrum für Informatik, 2021. [67] **Shafi Goldwasser, Adam Tauman Kalai, Yael Kalai, and Omar Montasser. Beyond perturbations: Learning** guarantees with arbitrary adversarial test examples. *Advances in Neural Information Processing Systems***, 33:** 15859–15870, 2020. [68] **Adarsh Subbaswamy, Roy Adams, and Suchi Saria. Evaluating model robustness and stability to dataset shift.** In *International Conference on Artificial Intelligence and Statistics***, pages 2611–2619. PMLR, 2021. 5** [69] **Mayee Chen, Karan Goel, Nimit S Sohoni, Fait Poms, Kayvon Fatahalian, and Christopher Ré. Mandoline:** Model evaluation under distribution shift. In *International Conference on Machine Learning***, pages 1617–1629.** PMLR, 2021. [70] Patrick Cousot. Abstract interpretation. *ACM Computing Surveys (CSUR)***, 28(2):324–328, 1996.** [71] **Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev.** Ai2: Safety and robustness certification of neural networks with abstract interpretation. In *2018 IEEE Symposium* on Security and Privacy (SP)**, pages 3–18. IEEE, 2018. 2, 18** [72] **David A Nix and Andreas S Weigend. Estimating the mean and variance of the target probability distribution.** In *Proceedings of 1994 ieee international conference on neural networks (ICNN'94)***, volume 1, pages 55–60.** IEEE, 1994. 2, 18 [73] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. *Journal of Machine Learning Research*, 9 (3), 2008. [74] **Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in** deep learning. In Maria Florina Balcan and Kilian Q. Weinberger, editors, **Proceedings of The 33rd International** Conference on Machine Learning, volume 48 of *Proceedings of Machine Learning Research***, pages 1050–1059,** New York, New York, USA, 20–22 Jun 2016. PMLR. URL **https://proceedings.mlr.press/v48/gal16.html**. [75] **Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty** estimation using deep ensembles. *Advances in neural information processing systems***, 30, 2017.** [76] Jochen Gast and Stefan Roth. Lightweight probabilistic deep networks. In **Proceedings of the IEEE Conference** on Computer Vision and Pattern Recognition**, pages 3369–3378, 2018.** [77] Hartmut Maennel. Uncertainty estimates and out-of-distribution detection with sine networks. 2019. [78] **Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep** deterministic neural network. In *International conference on machine learning***, pages 9690–9700. PMLR, 2020.** [79] **Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. Simple** and principled uncertainty estimation with deterministic deep learning via distance awareness. 
*Advances in Neural Information Processing Systems*, 33:7498–7512, 2020. [80] Luis Oala, Cosmas Heiß, Jan Macdonald, Maximilian März, Gitta Kutyniok, and Wojciech Samek. Detecting failure modes in image reconstructions with interval neural network uncertainty. *International Journal of Computer Assisted Radiology and Surgery*, 16(12):2089–2097, 2021. [81] Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael Dusenberry, Sebastian Farquhar, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim Rudner, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, and Dustin Tran. Uncertainty Baselines: Benchmarks for uncertainty & robustness in deep learning. *arXiv preprint arXiv:2106.04015*, 2021. [82] Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. *arXiv preprint arXiv:2107.07511*, 2021. 2, 18 [83] Dipankar Dasgupta. *Artificial immune systems and their applications*. Springer Science & Business Media, 2012. 2, 18 [84] Skyler Speakman, Sriram Somanchi, Edward McFowland III, and Daniel B Neill. Penalized fast subset scanning. *Journal of Computational and Graphical Statistics*, 25(2):382–404, 2016. [85] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. *Advances in neural information processing systems*, 31, 2018. [86] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? *arXiv preprint arXiv:1810.09136*, 2018. [87] Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. Vos: Learning what you don't know by virtual outlier synthesis. In *International Conference on Learning Representations*, 2021. 2, 18 [88] Avital Oliver, Augustus Odena, Colin A. Raffel, Ekin D. Cubuk, and Ian J. Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In *Advances in Neural Information Processing Systems*, volume 31, 2018. 2, 18 [89] Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. *Advances in Neural Information Processing Systems*, 33:6256–6268, 2020. [90] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. *Advances in Neural Information Processing Systems*, 32, 2019. 2, 18 [91] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? *Advances in Neural Information Processing Systems*, 33:6827–6839, 2020. 2, 18 [92] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: a new approach to self-supervised learning. *Advances in Neural Information Processing Systems*, 33:21271–21284, 2020. 2, 18 [93] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021. 2 [94] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar.
Do imagenet classifiers generalize to** imagenet? In *International Conference on Machine Learning***, pages 5389–5400. PMLR, 2019.** [95] **Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, and Dan Roy. Towards a unified information-theoretic** framework for generalization. *Advances in Neural Information Processing Systems***, 34, 2021.** [96] Krikamol Muandet. Impossibility of collective intelligence, 2022. URL https://arxiv.org/abs/2206.02786. 2 [97] Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Robust and communicationefficient federated learning from non-iid data. *IEEE transactions on neural networks and learning systems***, 31** (9):3400–3413, 2019. 2, 18 [98] **Felix Sattler, Klaus-Robert Müller, Thomas Wiegand, and Wojciech Samek. On the byzantine robustness of** clustered federated learning. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and* Signal Processing (ICASSP)**, pages 8861–8865. IEEE, 2020.** [99] **Hao Wang, Zakhary Kaplan, Di Niu, and Baochun Li. Optimizing federated learning on non-iid data with** reinforcement learning. In *IEEE INFOCOM 2020 - IEEE Conference on Computer Communications***, pages** 1698–1707, 2020. doi: 10.1109/INFOCOM41043.2020.9155494. 2, 18 [100] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. *arXiv preprint* physics/0004057**, 2000. 2** [101] **Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing** robust features with denoising autoencoders. In *Proceedings of the 25th international conference on Machine* learning**, pages 1096–1103, 2008.** [102] **Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks.** arXiv preprint arXiv:1803.03635**, 2018. 2** [103] **Georg Krempl, Vera Hofer, Geoffrey Webb, and Eyke Hüllermeier. Beyond Adaptation: Understanding** Distributional Changes (Dagstuhl Seminar 20372). *Dagstuhl Reports***, 10(4):1–36, 2021. ISSN 2192-5283. doi:** 10.4230/DagRep.10.4.1. URL https://drops.dagstuhl.de/opus/volltexte/2021/13735. 2 [104] Peter J Huber. Robust statistics. In *International encyclopedia of statistical science***, pages 1248–1251. Springer,** 2011. 2 [105] AAMI TIR57. Principles for medical device security—risk management. *Arlington, VA: Association for the* Advancement of Medical Instrumentation**, 2016. 2, 15, 18** [106] **IMDRF SaMD Working Group et al. Software as a medical device (samd): Application of quality management** system, 2018. 15, 18 [107] **Luis Oala, Jana Fehr, Luca Gilli, Pradeep Balachandran, Alixandro Werneck Leite, Saul Calderon-Ramirez,** Danny Xie Li, Gabriel Nobis, Erick Alejandro Muñoz Alvarado, Giovanna Jaramillo-Gutierrez, et al. Ml4h auditing: From paper to practice. In *Machine learning for health***, pages 280–317. PMLR, 2020. 2** [108] **Emma Beede, Elizabeth Baylor, Fred Hersch, Anna Iurchenko, Lauren Wilcox, Paisan Ruamviboonsuk, and** Laura M. Vardoulakis. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In *Proceedings of the 2020 CHI Conference on Human Factors in Computing* Systems**, page 1–12, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450367080.** URL https://doi.org/10.1145/3313831.3376718. 2 [109] Susan M. Swetter. Artificial intelligence may improve melanoma detection. *Dermatology Times*, 41(9):36, 2020. 
URL **https://cdn.sanity.io/files/0vv8moc6/dermatologytimes/** 4ba31530532b36aaeb80506db61bb5691d841d06.pdf. 2 [110] **Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian** Thrun. Dermatologist-level classification of skin cancer with deep neural networks. *nature***, 542(7639):115–118,** 2017. 2 [111] **Maitiniyazi Maimaitijiang, Vasit Sagan, Paheding Sidike, Sean Hartling, Flavio Esposito, and Felix B. Fritschi.** Soybean yield prediction from uav using multimodal data fusion and deep learning. *Remote Sensing of* Environment**, 237:111599, 2020. ISSN 0034-4257. doi: https://doi.org/10.1016/j.rse.2019.111599. URL** https://www.sciencedirect.com/science/article/pii/S0034425719306194**. 2, 7** [112] **Phillip Chlap, Min Hang, Nym Vandenberg, Jason Dowling, Lois Holloway, and Annette Haworth. A review of** medical image data augmentation techniques for deep learning applications. **Journal of Medical Imaging and** Radiation Oncology**, 65, 06 2021. doi: 10.1111/1754-9485.13261. 3** [113] **Christian Matek and Carsten Marr. Robustness evaluation of a convolutional neural network for the classification** of single cells in acute myeloid leukemia. In *ICLR 2021, RobustML workshop***, 2020. 3** [114] **Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring** robustness to natural distribution shifts in image classification. In *Advances in Neural Information Processing* Systems (NeurIPS), 2020. URL https://arxiv.org/abs/2007.00644. 3 [115] GJJ Verhoeven. It's all about the format–unleashing the power of raw aerial photography. *International Journal* of Remote Sensing**, 31(8):2009–2042, 2010. 3, 5, 7** [116] **Ronnachai Jaroensri, Camille Biscarrat, Miika Aittala, and Frédo Durand. Generating training data for denoising** real rgb images via camera pipeline simulation. *arXiv***, 1904.08825, 2019. 3, 6** [117] **B Albertina, M Watson, C Holback, R Jarosz, S Kirk, Y Lee, and J Lemmerman. Radiology data from the** cancer genome atlas lung adenocarcinoma [tcga-luad] collection. *The Cancer Imaging Archive***, 2016. 3** [118] **C Matek, S Schwarz, C Marr, and K Spiekermann. A single-cell morphological dataset of leukocytes from aml** patients and non-malignant controls (aml-cytomorphology_lmu). *The Cancer Imaging Archive (TCIA)***, 2019.** [119] **Weixin Liang and James Zou. Metashift: A dataset of datasets for evaluating contextual distribution shifts and** training conflicts. In *International Conference on Learning Representations*, 2022. URL **https://openreview.** net/forum?id=MTex8qKavoS. 3 [120] **Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo.** "everyone wants to do the model work, not the data work": Data cascades in high-stakes ai. In *Proceedings of* the 2021 CHI Conference on Human Factors in Computing Systems**, New York, NY, USA, 2021. Association for** Computing Machinery. ISBN 9781450380966. URL https://doi.org/10.1145/3411764.3445518**. 4, 5** [121] Bryce E Bayer. Color imaging array, July 20 1976. US Patent 3,971,065. 4, 5, 6 [122] Andy Rowlands. *Physics of digital photography***. IOP Publishing, 2017. 6** [123] **Shinsuke Tani, Yasuhiro Fukunaga, Saori Shimizu, Munenori Fukunishi, Kensuke Ishii, and Kosei Tamiya.** Color Standardization Method and System for Whole Slide Imaging Based on Spectral Sensing. **Analytical** Cellular Pathology**, 35(2):107–115, 2012. ISSN 2210-7177, 2210-7185. doi: 10.1155/2012/154735. 
URL** http://www.hindawi.com/journals/acp/2012/154735/. [124] **Daniel L. Bongiorno, Mitch Bryson, Donald G. Dansereau, and Stefan B. Williams. Spectral characterization of** COTS RGB cameras using a linear variable edge filter. page 86600N, Burlingame, California, USA, January 2013. doi: 10.1117/12.2001460. URL **http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=** 10.1117/12.2001460. 4 [125] HAMAMATSU. *ORCA-Flash4.0 V3 Digital CMOS camera C13440-20CU - Technical note*. HAMAMATSU. URL **https://www.hamamatsu.com/eu/en/product/cameras/cmos-cameras/C13440-20CU.html\#** element-id-95e67d91-3547-3319-a6c7-fd29c03e0089**. 4, 19** [126] PerkinElmer. *TotalChrom Workstation User's Guide - Volume I*. PerkinElmer. URL **https://www.perkinelmer.** com/CMSResources/Images/44-74577MAN_TotalChromWorkstationVolume1.pdf. [127] ZEISS. *Exporting Images and Movies in ZEN Blue*. ZEISS. URL **https://** www.zeiss.com/content/dam/Microscopy/us/download/pdf/zen-software-education-center/ exporting-images-and-movies-in-zen-blue.pdf. 4 [128] **De Jong Yeong, Gustavo Velasco-Hernandez, John Barry, and Joseph Walsh. Sensor and sensor fusion technology** in autonomous vehicles: A review. *Sensors***, 21(6), 2021. ISSN 1424-8220. doi: 10.3390/s21062140. URL** https://www.mdpi.com/1424-8220/21/6/2140. 4 [129] Andrej Karpathy. Tesla ai day 2021. URL **https://youtu.be/j0z4FweCy4M**. [130] Gert Rudolph and Uwe Voelzke. Three sensor types drive autonomous vehicles, 2017. URL **https://www.** fierceelectronics.com/components/three-sensor-types-drive-autonomous-vehicles**. 4, 19** [131] **Rang Nguyen, Dilip K Prasad, and Michael S Brown. Raw-to-raw: Mapping between image sensor color responses.** In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition***, pages 3398–3405, 2014. 5** [132] Library of Congress. Camera Raw Formats (Group Description). **https://www.loc.gov/preservation/digital/** formats/fdd/fdd000241.shtml, December 2016. URL **https://www.loc.gov/preservation/digital/** formats/fdd/fdd000241.shtml**. Accessed: 2020-11-03. 5** [133] **Sumona Biswas and Shovan Barma. A large-scale optical microscopy image dataset of potato tuber for** deep learning based plant cell assessment. *Scientific Data*, 7(1):1–11, 2020. URL **https://doi.org/10.1038/** s41597-020-00706-9. 5 [134] **Ryan Conrad and Kedar Narayan. Cem500k, a large-scale heterogeneous unlabeled cellular electron microscopy** image dataset for deep learning. *eLife***, 10:e65894, apr 2021. ISSN 2050-084X. doi: 10.7554/eLife.65894. URL** https://doi.org/10.7554/eLife.65894. [135] **Guilherme Aresta, Teresa Araújo, Scotty Kwok, Sai Saketh Chennamsetty, Mohammed Safwan, Varghese Alex,** Bahram Marami, Marcel Prastawa, Monica Chan, Michael Donovan, Gerardo Fernandez, Jack Zeineh, Matthias Kohl, Christoph Walz, Florian Ludwig, Stefan Braunewell, Maximilian Baust, Quoc Dang Vu, Minh Nguyen Nhat To, Eal Kim, Jin Tae Kwak, Sameh Galal, Veronica Sanchez-Freire, Nadia Brancati, Maria Frucci, Daniel Riccio, Yaqi Wang, Lingling Sun, Kaiqiang Ma, Jiannan Fang, Ismael Kone, Lahsen Boulmane, Aurélio Campilho, Catarina Eloy, António Polónia, and Paulo Aguiar. Bach: Grand challenge on breast cancer histology images. Medical Image Analysis**, 56:122–139, 2019. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2019.05.010.** URL **https://www.sciencedirect.com/science/article/pii/S1361841518307941**. 
[136] Le Hou, Rajarsi Gupta, John S Van Arnam, Yuwei Zhang, Kaustubh Sivalenka, Dimitris Samaras, Tahsin M Kurc, and Joel H Saltz. Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types. *Scientific Data*, 7(1):1–12, 2020.

[137] Hamidreza Bolhasani, Elham Amjadi, Maryam Tabatabaeian, and Somayyeh Jafarali Jassbi. A histopathological image dataset for grading breast invasive ductal carcinomas. *Informatics in Medicine Unlocked*, 19:100341, 2020. ISSN 2352-9148. doi: 10.1016/j.imu.2020.100341. URL https://www.sciencedirect.com/science/article/pii/S2352914820300757.

[138] TU Graz. ICG - DroneDataset. https://www.tugraz.at/index.php?id=22387, 2019. Accessed: 2021-05-06.

[139] DroneDeploy. Segmentation Dataset. https://github.com/dronedeploy/dd-ml-segmentation-benchmark, 2019. Accessed: 2021-09-19.

[140] Ye Lyu, George Vosselman, Gui-Song Xia, Alper Yilmaz, and Michael Ying Yang. UAVid: A semantic segmentation dataset for UAV imagery. *ISPRS Journal of Photogrammetry and Remote Sensing*, 165:108–119, 2020. ISSN 0924-2716. doi: 10.1016/j.isprsjprs.2020.05.009. URL https://www.sciencedirect.com/science/article/pii/S0924271620301295.

[141] Alina Marcu, Dragos Costea, Vlad Licaret, and Marius Leordeanu. Towards automatic annotation for semantic segmentation in drone videos. *arXiv*, 1910.10026, 2019.

[142] Alireza Shamsoshoara, Fatemeh Afghah, Abolfazl Razi, Liming Zheng, Peter Z. Fulé, and Erik Blasch. Aerial imagery pile burn detection using deep learning: The FLAME dataset. *Computer Networks*, 193:108001, 2021. ISSN 1389-1286. doi: 10.1016/j.comnet.2021.108001. URL https://www.sciencedirect.com/science/article/pii/S1389128621001201.

[143] P. Zhu, L. Wen, D. Du, X. Bian, H. Fan, Q. Hu, and H. Ling. Detection and tracking meet drones challenge. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, (01):1–1, October 2021. ISSN 1939-3539. doi: 10.1109/TPAMI.2021.3119563.

[144] Duc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conotter, and Giulia Boato. RAISE: A raw images dataset for digital image forensics. In *Proceedings of the 6th ACM Multimedia Systems Conference*, MMSys '15, pages 219–224, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450333511. doi: 10.1145/2713168.2713194. URL https://doi.org/10.1145/2713168.2713194.

[145] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 3291–3300, 2018.

[146] Abdelrahman Abdelhamed, Stephen Lin, and Michael S. Brown. A high-quality denoising dataset for smartphone cameras. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, June 2018.

[147] Samuel W. Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T. Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. *ACM Transactions on Graphics (Proc. SIGGRAPH Asia)*, 35(6), 2016.

[148] Nai-Sheng Syu, Yu-Sheng Chen, and Yung-Yu Chuang. Learning deep convolutional networks for demosaicing. *arXiv*, 1802.03769, 2018.

[149] S. Ratnasingam. Deep camera: A fully convolutional neural network for image signal processing. In *2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)*, pages 3868–3878, Los Alamitos, CA, USA, October 2019. IEEE Computer Society. doi: 10.1109/ICCVW.2019.00480. URL https://doi.ieeecomputersociety.org/10.1109/ICCVW.2019.00480.

[150] Ethan Tseng, Ali Mosleh, Fahim Mannan, Karl St-Arnaud, Avinash Sharma, Yifan Peng, Alexander Braun, Derek Nowrouzezahrai, Jean-François Lalonde, and Felix Heide. Differentiable compound optics and processing pipeline optimization for end-to-end camera design. *ACM Trans. Graph.*, 40(2), June 2021. ISSN 0730-0301. doi: 10.1145/3446791. URL https://doi.org/10.1145/3446791.

[151] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. *ACM SIGPLAN Notices*, 48(6):519–530, 2013.

[152] E. Riba, D. Mishkin, D. Ponsa, E. Rublee, and G. Bradski. Kornia: an open source differentiable computer vision library for PyTorch. In *Winter Conference on Applications of Computer Vision*, 2020. URL https://arxiv.org/pdf/1910.02190.pdf.

[153] Felix Schill. pyraw. https://github.com/fschill/pyraw, 2015.

[154] Andreas K Maier, Christopher Syben, Bernhard Stimpel, Tobias Würfl, Mathis Hoffmann, Frank Schebesch, Weilin Fu, Leonid Mill, Lasse Kling, and Silke Christiansen. Learning with known operators reduces maximum error bounds. *Nature Machine Intelligence*, 1(8):373–380, 2019.

[155] Andreas Maier, Harald Köstler, Marco Heisig, Patrick Krauss, and Seung Hee Yang. Known operator learning and hybrid machine learning in medical imaging—a review of the past, the present, and the future. *Progress in Biomedical Engineering*, 2022.

[156] Tatiana A Bubba, Gitta Kutyniok, Matti Lassas, Maximilian März, Wojciech Samek, Samuli Siltanen, and Vignesh Srinivasan. Learning the invisible: a hybrid deep learning-shearlet framework for limited angle computed tomography. *Inverse Problems*, 35(6):064002, 2019.

[157] Bo Zhu, Jeremiah Z Liu, Stephen F Cauley, Bruce R Rosen, and Matthew S Rosen. Image reconstruction by domain-transform manifold learning. *Nature*, 555(7697):487–492, 2018.

[158] Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. In *2017 26th International Conference on Computer Communication and Networks (ICCCN)*, pages 1–7, 2017. doi: 10.1109/ICCCN.2017.8038465.

[159] Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? *Journal of Machine Learning Research*, 20:1–25, 2019.

[160] Joseph Paul Cohen, Margaux Luck, and Sina Honari. Distribution matching losses can hallucinate features in medical image translation. In *Medical Image Computing and Computer Assisted Intervention - MICCAI 2018*, pages 529–536. Springer International Publishing, September 2018. ISBN 978-3-030-00928-1. doi: 10.1007/978-3-030-00928-1_60.

[161] Florian Schiffers, Zekuan Yu, Steve Arguin, Andreas Maier, and Qiushi Ren. Synthetic fundus fluorescein angiography using deep neural networks. In Andreas Maier, Thomas M. Deserno, Heinz Handels, Klaus Hermann Maier-Hein, Christoph Palm, and Thomas Tolxdorff, editors, *Bildverarbeitung für die Medizin 2018*, pages 234–238, Berlin, Heidelberg, 2018. Springer Berlin Heidelberg. ISBN 978-3-662-56537-7.

[162] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pages 456–473, 2018.

[163] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pages 5637–5664. PMLR, 2021. URL https://proceedings.mlr.press/v139/koh21a.html.

[164] Buu Phan, Fahim Mannan, and Felix Heide. Adversarial imaging pipelines. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 16051–16061, 2021.

[165] Matteo Ronchetti. TorchRadon: Fast differentiable routines for computed tomography. *arXiv*, 2009.14788, 2020.

[166] Christopher Syben, Markus Michen, Bernhard Stimpel, Stephan Seitz, Stefan Ploner, and Andreas K Maier. PYRO-NN: Python reconstruction operators in neural networks. *Medical Physics*, 46(11):5110–5115, 2019.

[167] Andreas Maier, Harald Köstler, Marco Heisig, Patrick Krauss, and Seung Hee Yang. Known operator learning and hybrid machine learning in medical imaging - a review of the past, the present, and the future. *arXiv*, 2108.04543, 2021.

[168] Robert William Gainer Hunt and Michael R Pointer. *Measuring Colour*. John Wiley & Sons, 2011.

[169] Xin Li, Bahadir Gunturk, and Lei Zhang. Image demosaicing: A systematic survey. In *Visual Communications and Image Processing 2008*, volume 6822, page 68221J. International Society for Optics and Photonics, 2008.

[170] Bhawna Goyal, Ayush Dogra, Sunil Agrawal, BS Sohi, and Apoorav Sharma. Image denoising review: From classical to state-of-the-art approaches. *Information Fusion*, 55:220–244, 2020. ISSN 1566-2535. doi: 10.1016/j.inffus.2019.09.003. URL https://www.sciencedirect.com/science/article/pii/S1566253519301861.

[171] Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo, and Chia-Wen Lin. Deep learning on image denoising: An overview. *Neural Networks*, 131:251–275, 2020. ISSN 0893-6080. doi: 10.1016/j.neunet.2020.07.025. URL https://www.sciencedirect.com/science/article/pii/S0893608020302665.

[172] Mathieu Dejean-Servières, Karol Desnos, Kamel Abdelouahab, Wassim Hamidouche, Luce Morin, and Maxime Pelcat. Study of the impact of standard image compression techniques on performance of image classification with a convolutional neural network. Research Report hal-01725126, INSA Rennes; Univ Rennes; IETR; Institut Pascal, 2017.

[173] Yong-Yeon Jo, Young Sang Choi, Hyun Woo Park, Jae Hyeok Lee, Hyojung Jung, Hyo-Eun Kim, Kyounglan Ko, Chan Wha Lee, Hyo Soung Cha, and Yul Hwangbo. Impact of image compression on deep learning-based mammogram classification. *Scientific Reports*, 11(1):1–9, 2021.

[174] Farhad Ghazvinian Zanjani, Svitlana Zinger, Bastian Piepers, Saeed Mahmoudpour, Peter Schelkens, and Peter H. N. de With. Impact of JPEG 2000 compression on deep convolutional neural networks for metastatic cancer detection in histopathological images. *Journal of Medical Imaging*, 6(2):1–9, 2019. doi: 10.1117/1.JMI.6.2.027501. URL https://doi.org/10.1117/1.JMI.6.2.027501.

[175] Matt Poyser, Amir Atapour-Abarghouei, and Toby P Breckon. On the impact of lossy image and video compression on the performance of deep convolutional neural network architectures. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pages 2830–2837. IEEE, 2021.

[176] Enrico Pomarico, Cédric Schmidt, Florian Chays, David Nguyen, Arielle Planchette, Audrey Tissot, Adrien Roux, Laura Batti, Christoph Clausen, Theo Lasser, et al. Statistical distortion of supervised learning predictions in optical microscopy induced by image compression. *Scientific Reports*, 12(1):1–10, 2022.

[177] Timnit Gebru, Jamie H. Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, and Kate Crawford. Datasheets for datasets. *arXiv*, 1803.09010, 2018.

[178] B. Bain. Diagnosis from the blood smear. *The New England Journal of Medicine*, 353(5):498–507, 2005.

[179] Ruggero Donida Labati, Vincenzo Piuri, and Fabio Scotti. ALL-IDB: The acute lymphoblastic leukemia image database for image processing. In *2011 18th IEEE International Conference on Image Processing*, pages 2045–2048, 2011. doi: 10.1109/ICIP.2011.6115881.

[180] Vinay Ayyappan, Alex Chang, Chi Zhang, Santosh Kumar Paidi, Rosalie Bordett, Tiffany Liang, Ishan Barman, and Rishikesh Pandey. Identification and staging of B-cell acute lymphoblastic leukemia using quantitative phase imaging and machine learning. *ACS Sensors*, 5(10):3281–3289, 2020. doi: 10.1021/acssensors.0c01811. URL https://doi.org/10.1021/acssensors.0c01811. PMID: 33092347.

[181] David Tellez, Geert Litjens, Péter Bándi, Wouter Bulten, John-Melle Bokhorst, Francesco Ciompi, and Jeroen van der Laak. Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. *Medical Image Analysis*, 58:101544, 2019. ISSN 1361-8415. doi: 10.1016/j.media.2019.101544. URL https://www.sciencedirect.com/science/article/pii/S1361841519300799.

[182] S Longanbach, MK Miers, EM Keohane, LJ Smith, and JM Walenga. Rodak's hematology: Clinical principles and applications. 2016.

[183] Marek Kulbacki, Jakub Segen, Wojciech Knieć, Ryszard Klempous, Konrad Kluwak, Jan Nikodem, Julita Kulbacka, and Andrea Serester. Survey of drones for agriculture automation from planting to harvest. In *2018 IEEE 22nd International Conference on Intelligent Engineering Systems (INES)*, pages 000353–000358. IEEE, 2018.

[184] Xiyue Jia, Yining Cao, David O'Connor, Jin Zhu, Daniel CW Tsang, Bin Zou, and Deyi Hou. Mapping soil pollution by using drone image recognition and machine learning at an arsenic-contaminated agricultural field. *Environmental Pollution*, 270:116281, 2021.

[185] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 770–778, 2016. doi: 10.1109/CVPR.2016.90.

[186] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In *International Conference on Medical Image Computing and Computer-Assisted Intervention*, pages 234–241. Springer, 2015.

[187] USDOT NSTC. Ensuring American leadership in automated vehicle technologies: Automated vehicles 4.0. Las Vegas. Recuperado el, 25:2020–02, 2020.
[188] Martin Genzel, Jan Macdonald, and Maximilian März. Solving inverse problems with deep neural networks - robustness included. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, pages 1–1, 2022. doi: 10.1109/TPAMI.2022.3148324.

[189] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. *Machine Learning*, 79(1):151–175, 2010.

[190] Logan G Wright, Tatsuhiro Onodera, Martin M Stein, Tianyu Wang, Darren T Schachter, Zoey Hu, and Peter L McMahon. Deep physical neural networks trained with backpropagation. *Nature*, 601(7894):549–555, 2022.

[191] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. *Int. J. Comput. Vision*, 115(3):211–252, December 2015. ISSN 0920-5691. doi: 10.1007/s11263-015-0816-y. URL https://doi.org/10.1007/s11263-015-0816-y.

[192] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *CoRR*, 1412.6980, 2015.

[193] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019.

## A Appendices

## A.1 Data Model Samples and Initialization

Figure 8: Samples for both datasets, Raw-Microscopy and Raw-Drone, from all twelve pipelines used in the drift synthesis experiments. The legend for abbreviations can be found in Figure 3b.

The following values were used to initialize the parametrized pipeline (both "Frozen" and "Learned") in the experiment of Section 5.3:

```python
import torch
import torch.nn as nn


class ParametrizedProcessing(nn.Module):
    """Differentiable processing pipeline via torch transformations.

    Args:
        camera_parameters (tuple(list), optional): applies given camera parameters in processing
        track_stages (bool, optional): whether or not to retain intermediary steps in processing
        batch_norm_output (bool, optional): adds a BatchNorm layer to the end of the processing
    """

    def __init__(self, camera_parameters=None, track_stages=False, batch_norm_output=True):
        super().__init__()
        self.stages = None
        self.buffer = None
        self.track_stages = track_stages

        if camera_parameters is None:
            camera_parameters = DEFAULT_CAMERA_PARAMS

        black_level, white_balance, colour_matrix = camera_parameters

        self.black_level = nn.Parameter(torch.as_tensor(black_level))
        self.white_balance = nn.Parameter(torch.as_tensor(white_balance).reshape(1, 3))
        self.colour_correction = nn.Parameter(torch.as_tensor(colour_matrix).reshape(3, 3))

        self.gamma_correct = nn.Parameter(torch.Tensor([2.2]))

        self.debayer = Debayer()

        self.sharpening_filter = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.sharpening_filter.weight.data[0][0] = K_SHARP.clone()

        self.gaussian_blur = nn.Conv2d(1, 1, kernel_size=5, padding=2,
                                       padding_mode='reflect', bias=False)
        self.gaussian_blur.weight.data[0][0] = K_BLUR.clone()

        self.batch_norm = nn.BatchNorm2d(3, affine=False) if batch_norm_output else None

        self.register_buffer('M_RGB_2_YUV', M_RGB_2_YUV.clone())
        self.register_buffer('M_YUV_2_RGB', M_YUV_2_RGB.clone())

        self.additive_layer = None
```

where

```python
K_G = torch.Tensor([[0, 1, 0],
                    [1, 4, 1],
                    [0, 1, 0]]) / 4

K_RB = torch.Tensor([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 4

M_RGB_2_YUV = torch.Tensor([[0.299, 0.587, 0.114],
                            [-0.14714119, -0.28886916, 0.43601035],
                            [0.61497538, -0.51496512, -0.10001026]])
M_YUV_2_RGB = torch.Tensor([[1.0000000000e+00, -4.1827794561e-09, 1.1398830414e+00],
                            [1.0000000000e+00, -3.9464232326e-01, -5.8062183857e-01],
                            [1.0000000000e+00, 2.0320618153e+00, -1.2232658220e-09]])

K_BLUR = torch.Tensor([[6.9625e-08, 2.8089e-05, 2.0755e-04, 2.8089e-05, 6.9625e-08],
                       [2.8089e-05, 1.1332e-02, 8.3731e-02, 1.1332e-02, 2.8089e-05],
                       [2.0755e-04, 8.3731e-02, 6.1869e-01, 8.3731e-02, 2.0755e-04],
                       [2.8089e-05, 1.1332e-02, 8.3731e-02, 1.1332e-02, 2.8089e-05],
                       [6.9625e-08, 2.8089e-05, 2.0755e-04, 2.8089e-05, 6.9625e-08]])
K_SHARP = torch.Tensor([[0, -1, 0],
                        [-1, 5, -1],
                        [0, -1, 0]])
DEFAULT_CAMERA_PARAMS = (
    [0., 0., 0., 0.],
    [1., 1., 1.],
    [1., 0., 0., 0., 1., 0., 0., 0., 1.],
)
```

Note that the camera parameters are camera-dependent (and hence, in our case, dataset-dependent) and are defined in the dataset classes.
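The listing above only covers initialization. For orientation, the following is a minimal sketch of how a forward pass could chain these modules; it is an illustration under our own simplifying assumptions (scalar black level, no YUV sharpening/denoising stage, no stage tracking), not the released implementation.

```python
def forward(self, x):
    # x: raw Bayer mosaic of shape (B, 1, H, W), values scaled to [0, 1].
    x = x - self.black_level.mean()                  # black level (simplified to a scalar here)
    x = self.debayer(x)                              # demosaic to (B, 3, H, W)
    x = x * self.white_balance.reshape(1, 3, 1, 1)   # per-channel white balance
    b, _, h, w = x.shape
    # colour correction as a 3x3 matrix applied to every pixel
    x = (self.colour_correction @ x.reshape(b, 3, -1)).reshape(b, 3, h, w)
    # gamma correction; clamp avoids non-finite gradients at zero
    x = x.clamp(1e-8, 1.0) ** (1.0 / self.gamma_correct)
    if self.batch_norm is not None:
        x = self.batch_norm(x)
    return x
```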
## A.2 Description of the Task Models ΦTask

**ResNet18** This model is designed to classify images from ImageNet [191] and therefore has an output dimension of 1000. In order to use the model to classify images from Raw-Microscopy, we changed the output dimension of the fully-connected layer to nine (see the sketch after Table 3). The model was trained for 100 epochs using pre-trained ResNet features. Hyperparameters were kept constant across all runs to isolate the effect of varying image processing pipelines. For implementation the code provided at https://pytorch.org/hub/pytorch_vision_resnet/ was used. The model consists of 34 layers with approximately 11.2 million trainable parameters. The storage size of the model is 44.725 MB.

**U-Net++** The model was trained for 100 epochs using pretrained ResNet features as the encoder of the U-Net++. Hyperparameters were kept constant across all runs to isolate the effect of varying image processing pipelines. For implementation we used the code provided at https://github.com/qubvel/segmentation_models.pytorch. The model has approximately 26.1 million trainable parameters. The storage size of the model is 104.315 MB.

For a summary of the training procedure see Table 2.

| | Classification | Segmentation |
|---|---|---|
| ΦTask | ResNet18 based on [185] | U-Net++ based on [186] |
| | trained with Adam [192] for 100 epochs | trained with Adam for 100 epochs |
| | learning rate: 10−4 | learning rate: 7.5 · 10−5 |
| | mini-batch size: 128 | mini-batch size: 12 |

Table 2: Summary of the training procedure for both task models.

Table 3: A set of random test samples for the segmentation task under learned processing. Top row: targets; middle row: predictions of the task model after the first epoch; last row: predictions of the task model after the last epoch.
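The head replacement mentioned in the ResNet18 paragraph above can be done in a few lines. A minimal sketch with torchvision follows; the exact training script in our repository may differ, and newer torchvision versions prefer the `weights=` argument over `pretrained=`.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet18 and replace its 1000-way head
# with a 9-way fully-connected layer for the Raw-Microscopy classes.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 9)
```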
## A.3 Dataset Information

In the following, core information on the two acquired datasets is provided. In Appendix A.5 you can also find detailed datasheets for both datasets, following the documentation good practices introduced by [177].

## A.3.1 Raw-Microscopy

Raw-Microscopy for classification comes with 940 raw images, twelve differently processed variants totaling 11280 images and six additional raw intensity levels totaling 5640 samples.

Figure 9: An illustration of the imaging setup.

| Class | Proportion in % |
|---|---|
| Basophil (BAS) | 1.91 |
| Eosinophil (EOS) | 5.74 |
| Smudge cell / debris (KSC) | 17.34 |
| atypical Lymphocyte (LYA) | 3.19 |
| typical Lymphocyte (LYT) | 24.47 |
| Monocyte (MON) | 20.32 |
| Neutrophil (band) (NGB) | 0.85 |
| Neutrophil (segmented) (NGS) | 22.98 |
| Image that could not be assigned a class (UNC) | 3.19 |

Table 4: The proportion of the classes in Raw-Microscopy.

| Composition of Raw-Microscopy | |
|---|---|
| Type of instances | Image and label |
| Objects on images | White blood cells |
| Type of classes | Morphological classes |
| Number of instances | 940 |
| Number of classes | 9 |
| Image size | 256 by 256 pixels |
| Image format | .tif |
| Raw image format | Please see Section 4.1 |

Table 5: A summary of the composition of Raw-Microscopy.

## A.3.2 Raw-Drone

Raw-Drone for segmentation comes with 548 raw images, twelve differently processed variants totaling 6576 images and six additional raw intensity levels totaling 3288 samples.

| Composition of Raw-Drone | |
|---|---|
| Type of instances | Image and mask |
| Objects on images | Landscape shots from above |
| Number of instances | 548 |
| Number of original images | 12 |
| Image size | 256 by 256 pixels |
| Mask size | 256 by 256 pixels |
| Original image size | 3648 by 5472 |
| Image format | .tif |
| Mask format | .png |
| Raw image format | .DNG |

Table 6: A summary of the composition of Raw-Drone.

Figure 10: (Top) Images are shown with the segmentation mask applied over them. (Bottom) Different intensity realizations are shown for the microscopy case. Images in the top row are printed at the same scale as the original image; images in the bottom row are normalized to their own min and max values to highlight the role of noise levels in low-intensity images.

## A.4 Additional Results

## A.4.1 Drift Synthesis

(Revision #2, Requested change #3; see revision tracker.) Relative ranking results requested by reviewer Yyad.

| Rank | Microscopy-ISP pipeline | Avg. score | Microscopy-CC pipeline | Avg. score | Drone-ISP pipeline | Avg. score | Drone-CC pipeline | Avg. score |
|---|---|---|---|---|---|---|---|---|
| 1 | ma,s,me | 0.83 | bi,u,me | 0.63 | ma,u,ga | 0.68 | ma,s,ga | 0.60 |
| 2 | ma,u,me | 0.83 | me,s,me | 0.63 | bi,s,ga | 0.68 | bi,s,ga | 0.57 |
| 3 | ma,u,ga | 0.82 | bi,u,ga | 0.62 | bi,s,me | 0.67 | me,s,ga | 0.57 |
| 4 | bi,s,me | 0.81 | ma,s,me | 0.62 | ma,s,me | 0.67 | ma,s,me | 0.55 |
| 5 | bi,u,me | 0.81 | me,u,me | 0.62 | me,u,ga | 0.67 | me,s,me | 0.55 |
| 6 | me,s,me | 0.81 | ma,s,ga | 0.62 | me,u,me | 0.67 | ma,u,ga | 0.55 |
| 7 | bi,s,ga | 0.81 | ma,u,me | 0.61 | ma,u,me | 0.66 | bi,s,me | 0.54 |
| 8 | me,s,ga | 0.80 | me,s,ga | 0.60 | ma,s,ga | 0.66 | ma,u,me | 0.54 |
| 9 | me,u,me | 0.80 | bi,s,me | 0.59 | bi,u,me | 0.65 | me,u,me | 0.53 |
| 10 | ma,s,ga | 0.80 | ma,u,ga | 0.59 | me,s,me | 0.65 | me,u,ga | 0.51 |
| 11 | bi,u,ga | 0.79 | bi,s,ga | 0.58 | me,s,ga | 0.64 | bi,u,me | 0.48 |
| 12 | me,u,ga | 0.79 | me,u,ga | 0.58 | bi,u,ga | 0.61 | bi,u,ga | 0.46 |

Table 7: Rankings of task models from Section 5.1 trained on different data models (columns 2, 4, 6, 8) according to their average accuracy or IoU (columns 3, 5, 7, 9) across all test pipelines respective corruptions. ISP corresponds to drift synthesis with physically faithful data models, CC corresponds to common corruptions.

Microscopy-ISP:

| Rank | bi,s,me | bi,s,ga | bi,u,me | bi,u,ga | ma,s,me | ma,s,ga | ma,u,me | ma,u,ga | me,s,me | me,s,ga | me,u,me | me,u,ga |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ma,u,me | ma,u,me | ma,u,ga | ma,u,ga | ma,s,me | ma,u,ga | ma,u,ga | ma,u,ga | ma,u,me | me,s,ga | ma,u,ga | ma,u,ga |
| 2 | ma,u,ga | ma,u,ga | bi,s,ga | bi,s,ga | bi,s,me | me,s,ga | ma,s,me | ma,u,me | ma,s,me | ma,u,ga | ma,u,me | ma,u,me |
| 3 | bi,s,ga | bi,s,ga | ma,s,me | ma,s,me | bi,u,ga | ma,s,ga | ma,u,me | ma,s,me | bi,s,ga | ma,s,ga | ma,s,me | ma,s,me |
| 4 | ma,s,me | ma,s,me | ma,u,me | ma,u,me | ma,u,me | ma,s,me | bi,s,ga | me,u,me | me,s,ga | me,u,ga | me,u,me | me,u,me |
| 5 | bi,s,me | bi,u,me | me,u,me | me,u,me | bi,u,me | ma,u,me | me,u,me | ma,s,ga | bi,u,me | me,s,me | bi,s,ga | bi,s,ga |
| 6 | bi,u,me | me,u,me | bi,u,me | bi,u,me | ma,u,ga | me,s,me | me,s,ga | bi,s,ga | ma,u,ga | ma,u,me | me,u,ga | me,u,ga |
| 7 | me,s,me | bi,s,me | bi,s,me | me,s,me | me,s,me | me,u,me | me,s,me | me,s,ga | me,u,me | ma,s,me | me,s,me | me,s,me |
| 8 | me,s,ga | me,s,me | me,s,me | bi,u,ga | bi,s,ga | bi,u,me | ma,s,ga | me,s,me | me,s,me | me,u,me | bi,s,me | bi,s,me |
| 9 | me,u,me | me,s,ga | bi,u,ga | bi,s,me | me,s,ga | me,u,ga | bi,u,me | bi,u,me | bi,s,me | bi,s,me | me,s,ga | me,s,ga |
| 10 | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | bi,s,me | bi,s,me | bi,s,me | ma,s,ga | bi,s,ga | ma,s,ga | ma,s,ga |
| 11 | bi,u,ga | me,u,ga | me,u,ga | me,s,ga | me,u,ga | bi,s,ga | me,u,ga | me,u,ga | me,u,ga | bi,u,me | bi,u,me | bi,u,me |
| 12 | me,u,ga | bi,u,ga | me,s,ga | me,u,ga | me,u,me | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga |

Table 8: Ranking of task models from Section 5.1 trained under different train pipelines (rows) for each individual test pipeline (columns 2–13).

Microscopy-CC:

| Rank | identity | gauss noise | shot | impulse | speckle | gauss blur | zoom | contrast | brightness | saturate | elastic |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ma,u,me | ma,u,me | bi,u,me | bi,u,me | ma,s,ga | bi,s,ga | bi,s,ga | bi,s,ga | me,s,me | ma,s,me | bi,s,ga |
| 2 | ma,u,ga | ma,s,ga | ma,s,ga | me,u,me | bi,u,me | ma,u,me | ma,u,ga | bi,u,ga | ma,s,me | me,u,me | ma,u,ga |
| 3 | bi,s,ga | me,u,me | me,s,me | bi,u,ga | me,s,me | ma,u,ga | ma,s,me | me,u,ga | bi,u,ga | me,s,me | ma,u,me |
| 4 | me,s,me | me,s,ga | ma,u,me | me,s,me | me,u,me | bi,u,me | ma,u,me | ma,s,me | ma,s,ga | bi,u,ga | ma,s,me |
| 5 | ma,s,me | bi,u,me | me,s,ga | ma,s,me | bi,u,ga | me,u,me | bi,u,me | ma,u,me | bi,s,me | bi,s,ga | me,u,me |
| 6 | me,u,me | ma,u,ga | me,u,me | ma,u,me | ma,s,me | ma,s,me | me,s,me | bi,s,me | bi,u,me | bi,u,me | me,s,ga |
| 7 | me,s,ga | me,s,me | bi,s,me | ma,u,ga | ma,u,me | me,s,ga | bi,u,ga | bi,u,me | me,s,ga | ma,u,ga | me,s,me |
| 8 | bi,u,me | bi,s,me | bi,u,ga | me,s,ga | me,s,ga | ma,s,ga | me,u,ga | me,s,me | ma,u,ga | ma,s,ga | bi,u,ga |
| 9 | bi,u,ga | ma,s,me | ma,s,me | me,u,ga | bi,s,me | me,s,me | me,u,me | ma,s,ga | me,u,ga | bi,s,me | bi,u,me |
| 10 | ma,s,ga | bi,u,ga | ma,u,ga | ma,s,ga | ma,u,ga | bi,u,ga | me,s,ga | ma,u,ga | bi,s,ga | me,s,ga | ma,s,ga |
| 11 | bi,s,me | bi,s,ga | bi,s,ga | bi,s,me | me,u,ga | bi,s,me | ma,s,ga | me,u,me | me,u,me | me,u,ga | me,u,ga |
| 12 | me,u,ga | me,u,ga | me,u,ga | bi,s,ga | bi,s,ga | me,u,ga | bi,s,me | me,s,ga | ma,u,me | ma,u,me | bi,s,me |

Table 9: Ranking of task models from Section 5.1 trained under different train pipelines (rows) for each individual test corruption (columns 2–12).

(Revision #1, Requested change #1.) Additional results for reviewer jubj

Figure 11: (Revision #2, Requested change #3; see revision tracker.) A comparative overview of the physically faithful data models (ISPs, top-left) and the Common Corruptions (CC, top-right) used in the drift synthesis experiments of Section 5.1. A matching heuristic based on possible visual perception of the drift artifacts (top-middle) is provided for readers who would like to relate specific data models to specific corruptions. However, we emphasize that this is a purely qualitative heuristic and has no metrological basis. Since CCs are not physically faithful it is not clear how to relate them to actual variations in the optical data generating process. Finally, corruptions that were excluded from the experiments in Section 5.1 are displayed (bottom). The CC examples were stitched from the original paper [193] for authenticity.

Drone-ISP:

| Rank | bi,s,me | bi,s,ga | bi,u,me | bi,u,ga | ma,s,me | ma,s,ga | ma,u,me | ma,u,ga | me,s,me | me,s,ga | me,u,me | me,u,ga |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | bi,s,me | bi,s,ga | bi,u,me | bi,u,me | ma,u,ga | ma,s,ga | ma,u,ga | ma,u,ga | ma,s,me | ma,s,ga | ma,u,ga | ma,u,ga |
| 2 | bi,u,me | bi,s,me | bi,s,me | bi,s,me | ma,s,me | me,s,ga | me,u,me | me,u,me | ma,s,ga | me,s,ga | me,u,me | me,u,ga |
| 3 | ma,u,ga | ma,u,ga | bi,u,ga | bi,u,ga | bi,s,ga | ma,s,me | ma,u,me | ma,u,me | ma,u,ga | ma,s,me | ma,s,me | me,u,me |
| 4 | bi,s,ga | ma,s,me | ma,u,ga | ma,u,ga | me,u,ga | me,s,me | bi,s,me | bi,s,me | bi,s,ga | me,s,me | me,u,ga | ma,s,me |
| 5 | me,u,me | me,u,ga | me,u,me | me,u,me | ma,s,ga | bi,s,ga | ma,s,me | ma,s,me | me,u,ga | bi,s,ga | ma,u,me | ma,u,me |
| 6 | bi,u,ga | ma,s,ga | bi,s,ga | bi,s,ga | ma,u,me | ma,u,ga | bi,s,ga | bi,s,ga | me,s,me | ma,u,ga | bi,s,me | bi,s,me |
| 7 | ma,s,me | ma,u,me | ma,u,me | ma,u,me | me,u,me | me,u,ga | me,u,ga | me,u,ga | me,s,ga | me,u,ga | bi,u,me | bi,s,ga |
| 8 | me,u,ga | me,s,ga | ma,s,me | ma,s,me | me,s,me | me,u,me | bi,u,me | bi,u,me | me,u,me | me,u,me | bi,s,ga | bi,u,me |
| 9 | ma,u,me | me,u,me | me,u,ga | me,u,ga | bi,s,me | ma,u,me | bi,u,ga | ma,s,ga | ma,u,me | ma,u,me | me,s,me | ma,s,ga |
| 10 | me,s,me | me,s,me | me,s,me | me,s,me | me,s,ga | bi,s,me | ma,s,ga | me,s,me | bi,s,me | bi,s,me | bi,u,ga | me,s,me |
| 11 | ma,s,ga | bi,u,me | me,s,ga | ma,s,ga | bi,u,me | bi,u,me | me,s,me | bi,u,ga | bi,u,me | bi,u,me | ma,s,ga | bi,u,ga |
| 12 | me,s,ga | bi,u,ga | ma,s,ga | me,s,ga | bi,u,ga | bi,u,ga | me,s,ga | me,s,ga | bi,u,ga | bi,u,ga | me,s,ga | me,s,ga |

Drone-CC:

| Rank | identity | gauss noise | shot | impulse | speckle | gauss blur | zoom | contrast | brightness | saturate | elastic |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | bi,s,me | bi,s,ga | bi,s,ga | ma,s,ga | ma,s,ga |
| 2 | bi,s,ga | me,s,ga | me,s,ga | me,s,ga | me,s,ga | bi,s,ga | ma,s,ga | ma,s,ga | ma,s,ga | ma,s,me | ma,u,ga |
| 3 | me,s,ga | bi,s,ga | bi,s,ga | me,s,me | bi,s,ga | ma,s,me | bi,s,ga | me,s,me | ma,s,me | ma,u,ga | ma,s,me |
| 4 | ma,s,me | me,s,me | ma,s,me | bi,s,ga | ma,s,me | ma,u,ga | me,s,ga | ma,s,me | me,s,me | me,u,ga | bi,s,ga |
| 5 | ma,u,ga | ma,u,ga | me,s,me | ma,u,ga | me,s,me | bi,u,me | ma,u,me | bi,s,me | ma,u,me | me,s,ga | bi,s,me |
| 6 | bi,s,me | ma,u,me | ma,u,ga | ma,u,me | ma,u,ga | bi,s,me | me,s,me | ma,u,me | ma,u,ga | bi,s,ga | bi,u,me |
| 7 | me,u,ga | me,u,me | ma,u,me | me,u,me | bi,s,me | me,s,ga | ma,s,me | ma,u,ga | me,u,me | bi,s,me | me,s,ga |
| 8 | bi,u,me | ma,s,me | bi,s,me | ma,s,me | ma,u,me | ma,u,me | bi,u,me | me,s,ga | bi,s,me | me,s,me | me,u,me |
| 9 | ma,u,me | bi,s,me | me,u,me | bi,s,me | me,u,me | me,u,me | me,u,me | bi,u,me | me,u,ga | me,u,me | me,u,ga |
| 10 | me,u,me | me,u,ga | me,u,ga | me,u,ga | me,u,ga | me,s,me | bi,u,ga | bi,u,ga | me,s,ga | bi,u,me | me,s,me |
| 11 | me,s,me | bi,u,me | bi,u,me | bi,u,me | bi,u,me | me,u,ga | ma,u,ga | me,u,ga | bi,u,me | ma,u,me | ma,u,me |
| 12 | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | bi,u,ga | me,u,ga | me,u,me | bi,u,ga | bi,u,ga | bi,u,ga |

Table 10: Ranking of task models from Section 5.1 trained under different train pipelines (rows) for each individual test pipeline (columns 2–13).

Table 11: Ranking of task models from Section 5.1 trained under different train pipelines (rows) for each individual test corruption (columns 2–12).
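The rankings in Tables 7–11 are obtained by averaging each training pipeline's test score over all test pipelines or corruptions and sorting in descending order. A minimal sketch of this aggregation is given below; the file and column names (`scores.csv`, `train_pipeline`, `score`) are hypothetical placeholders, not part of the released code.

```python
import pandas as pd

# One row per (train_pipeline, test_pipeline) pair with its accuracy/IoU.
results = pd.read_csv("scores.csv")  # columns: train_pipeline, test_pipeline, score

# Average each training pipeline's score across all test conditions (as in Table 7),
# then rank from best to worst.
avg = results.groupby("train_pipeline")["score"].mean()
ranking = avg.sort_values(ascending=False)
print(ranking.round(2))
```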
Figure 13: Experiment from Section 5.1 with strong severity (level 5) for the Common Corruptions benchmark.

Figure 14: Experiment from Section 5.1 with weak severity (level 1) for the Common Corruptions benchmark.

## A.4.2 Drift Forensics

(Revision #1, Requested change #2.) Additional results for reviewer jubj

Figure 16: Drift forensics experiment from Section 5.2 with the Raw-Drone dataset.

## A.4.3 Drift Adjustments

(Revision #1, Requested change #2.) Additional results for reviewer Yyad

Figure 17: Two training runs where task models are trained directly on Raw-Microscopy (top) and Raw-Drone (bottom) data. The classification model (top) achieves similar accuracy as the learned setting in Section 5.3. However, the learning trajectory is more volatile. Despite stabilizing quicker, the segmentation model (bottom) does not reach the same IoU as compared to the data models (both learned and frozen).

## A.5 Dataset Documentation

We follow the datasheets documentation framework proposed in [177], using the template https://de.overleaf.com/latex/templates/datasheet-for-dataset-template/jgqyyzyprxth from Christian Garbin.

## A.5.1 Datasheet for Raw-Microscopy

## Motivation

**For what purpose was the dataset created?**

With Raw-Microscopy we provide a publicly available raw image dataset in order to examine the effect of the image signal processing on the performance and the robustness of machine learning models. This dataset enables studying these effects for a supervised multiclass classification task: the classification of white blood cells (WBCs).

**Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**

This dataset has been created by ANONYMIZED. Single-cell images were annotated by a trained cytologist.

**Who funded the creation of the dataset?**

The creation of the dataset has been funded by ANONYMIZED.

## Composition

**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?**

An instance is a tuple of an image and a label. The image shows a human WBC and the label indicates the morphological class of this cell. The following eight morphological classes appear in the dataset: Basophil (BAS), Eosinophil (EOS), Smudge cell / debris (KSC), atypical Lymphocyte (LYA), typical Lymphocyte (LYT), Monocyte (MON), Neutrophil (band) (NGB), Neutrophil (segmented) (NGS). The ninth class consists of images that could not be assigned a class (UNC) during the labeling process.

**How many instances are there in total (of each type, if appropriate)?**

The dataset consists of 940 instances. For the proportion of each class in the dataset see Table 12.

**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?**

The dataset does not contain all possible instances. It is limited to WBC classes normally present in the peripheral blood of healthy humans.
In order to cope with the intrinsic class imbalance in cell distribution, rare cell class candidates such as Basophils were preferentially imaged.

**What data does each instance consist of? "Raw" data (e.g., unprocessed text or images) or features?**

Each instance consists of an image of 256 by 256 pixels. The image is a raw image in .tif format.

**Is there a label or target associated with each instance?**

Each instance is associated with a label that indicates the morphological class of the image.

**Is any information missing from individual instances?**

No information is missing.

**Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)?**

No, relationships between individuals are not made explicit.

**Are there recommended data splits (e.g., training, development/validation, testing)?**

There are no recommended data splits. All the data splits that we used for our experiments were randomly picked.

**Are there any errors, sources of noise, or redundancies in the dataset?**

To the best of our knowledge, there are no errors in the dataset. However, a key source of variability between slides from different laboratories and processing times is stain intensity. The samples used in this work all come from the same source, hence we assume the preanalytic treatment and staining protocol to be similar. As all images were obtained on the same microscopy equipment, focus handling and illumination are identical for all samples. Image labelling was performed by one trained morphologist with experience in hematological routine diagnostics. It is known that morphology annotations are subject to inter- and intra-rater variability. However, as we limit ourselves to normal WBCs the labeling is expected to be stable.

**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?**

The dataset is self-contained.

**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)?**

The dataset consists of medical data, disclosing the morphological classes of single human WBCs. In principle, the distribution of cell types conveys information on the health state of a patient. However, the subjects in this dataset are fully deidentified, so that the image data cannot be linked back to the healthy donors of the scanned blood smears. Furthermore, it is not disclosed which cell image was taken from which blood smear, so that no frequencies of individual cell types can be determined. Additionally, we only consider cell types present in normal blood, so that no specific hematologic pathology can be deduced from cell morphologies.

**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**

No. The dataset does not contain data with any of the above properties.

**Does the dataset relate to people?**

Yes. The dataset consists of images of human WBCs.

**Does the dataset identify any subpopulations (e.g., by age, gender)?**

The donors of the blood smears used in this dataset are fully deidentified, and no information on subpopulation composition is provided.

**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?**

No.
It is not possible to identify individuals from an image of their white blood cells or vice versa.

**Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?**

No. While the distribution of cell types for a specific patient could reveal information about that patient's health status, isolated single-cell images of normal leukocytes do not allow for this inference.

**Any other comments?**

See Table 13 for a summary of the composition of Raw-Microscopy.

| Class | Proportion in % |
|---|---|
| Basophil (BAS) | 1.91 |
| Eosinophil (EOS) | 5.74 |
| Smudge cell / debris (KSC) | 17.34 |
| atypical Lymphocyte (LYA) | 3.19 |
| typical Lymphocyte (LYT) | 24.47 |
| Monocyte (MON) | 20.32 |
| Neutrophil (band) (NGB) | 0.85 |
| Neutrophil (segmented) (NGS) | 22.98 |
| Image that could not be assigned a class (UNC) | 3.19 |

Table 12: The proportion of the classes in Raw-Microscopy.

## Collection Process

**How was the data associated with each instance acquired?**

Images of the dataset have been acquired directly from a CMOS imaging sensor. They are in a raw unprocessed format.

**What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?**

Imaging data have been obtained via a custom brightfield microscope.

**If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?**

Images have 256×256 pixel size and have been cropped from larger images. The dataset corresponds to a selection of white blood cells in the acquired large images. A sampling strategy aimed at increasing the proportion of rare classes of white blood cells has been used.

**Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?**

A research assistant has been involved in the data collection process and has been compensated with a monthly salary.

**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)?**

Data have been collected over a timeframe of two months, corresponding to the availability of the physical samples to image. Data have been collected on purpose for this work.

**Were any ethical review processes conducted (e.g., by an institutional review board)?**

The microscopy data was purchased from a commercial lab vendor (J. Lieder GmbH & Co. KG, Ludwigsburg/Germany) who attained consent from the subjects included.

**Does the dataset relate to people?**

Yes. The dataset consists of microscopic images of human white blood cells.

**Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?**

Data have not been obtained via third parties.

**Were the individuals in question notified about the data collection?**

As the blood smear slides were bought from a company, notification to individuals of the data collection has been performed by the company.

**Did the individuals in question consent to the collection and use of their data?**
Yes, they did.

**If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?**

We do not know the conditions of consent adopted by the selling company. However, we believe the company provided the individuals complete freedom to revoke their consent in the future, if desired.

**Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?**

No, this kind of analysis has not been conducted.

## Preprocessing/Cleaning/Labeling

**Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?**

Intensity-scaled images are generated with Jetraw Data Suite for both datasets, which applies a physical model based on sensor calibration to accurately simulate intensity reduction. Microscopy raw images are extracted from RGB microscopy data through a pixel selection from images taken with three filters, in order to obtain a Bayer pattern. Pixel intensities are rescaled with Jetraw Data Suite to match the measured transmissivities of a Bayer colour filter array.

**Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?**

Raw images are available in the dataset.

**Is the software used to preprocess/clean/label the instances available?**

All code used in the experiments of this manuscript is publicly available. Jetraw products that were used for acquiring the data are commercially available.

## Uses

**Has the dataset been used for any tasks already?**

The dataset has not yet been used.

**Is there a repository that links to any or all papers or systems that use the dataset?**

The repository at https://anonymous.4open.science/r/tmlr/README.md associated to this work, maintained by ANONYMIZED.

**What (other) tasks could the dataset be used for?**

The dataset can be used to study the effect of image signal processing on the performance and robustness of any other machine learning model implemented in PyTorch, designed for a supervised multiclass classification task.

**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?**

To the best of our knowledge, we do not recognize such impacts.

**Are there tasks for which the dataset should not be used?**

To the best of our knowledge, there are no such tasks.

## Distribution

**Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?**

Yes. The dataset will be publicly available.

**How will the dataset be distributed (e.g., tarball on website, API, GitHub)?**

A guide to access the dataset is available at https://anonymous.4open.science/r/tmlr/README.md. Moreover, the dataset can be downloaded anonymously and directly at https://zenodo.org/record/5235536 under the DOI 10.5281/zenodo.5235536.

**When will the dataset be distributed?**

The dataset is already publicly available.

**Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?**

The dataset will be distributed under the Creative Commons Attribution 4.0 International license.

**Have any third parties imposed IP-based or other restrictions on the data associated with the instances?**

No.
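Since the record ID is part of the Zenodo URL above, the file listing can also be retrieved programmatically. The sketch below assumes Zenodo's public records API and its standard response schema; field names may differ across API versions.

```python
import requests

# Query the Zenodo API for the record's file listing (record ID from the DOI above).
rec = requests.get("https://zenodo.org/api/records/5235536").json()
for f in rec["files"]:
    print(f["key"], f["links"]["self"])  # file name and direct download link
```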
**Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?**

There are no such restrictions.

## Maintenance

**Who will be supporting/hosting/maintaining the dataset?**

ANONYMIZED on behalf of ANONYMIZED.

**How can the owner/curator/manager of the dataset be contacted (e.g., email address)?**

By email via ANONYMIZED.

**Is there an erratum?**

At the time of submission, there is no such erratum. If an erratum is needed in the future it will be accessible at ANONYMIZED.

**Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?**

Yes. The dataset will be enlarged with respect to the number of instances.

**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?**

To the best of our knowledge, there are no such limits.

**Will older versions of the dataset continue to be supported/hosted/maintained?**

Older versions will be supported and maintained in the future. The dataset will continue to be hosted as long as https://zenodo.org/ exists.

**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**

For any of these requests contact either ANONYMIZED or ANONYMIZED. For now, we do not have an established mechanism to handle these requests.

| Composition of Raw-Microscopy | |
|---|---|
| Type of instances | Image and label |
| Objects on images | White blood cells |
| Type of classes | Morphological classes |
| Number of instances | 940 |
| Number of classes | 9 |
| Image size | 256 by 256 pixels |
| Image format | .tif |
| Raw image format | Please see Section 4.1 |

Table 13: A summary of the composition of Raw-Microscopy.

## A.5.2 Datasheet for Raw-Drone

## Motivation

**For what purpose was the dataset created?**

With Raw-Drone we provide a publicly available raw dataset in order to examine the effect of the image data processing on the performance and the robustness of machine learning models. This dataset enables studying these effects for a segmentation task: the segmentation of cars. The dataset was taken with specified parameters: sensor gain, point-spread function and ground-sampling distance, so that physical models may be used to process the data. It was also taken with an easily accessible and affordable system, so that it may be reproduced.

**Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**

The dataset was created by ANONYMIZED on behalf of ANONYMIZED.

**Who funded the creation of the dataset?**

The data collection was funded by ANONYMIZED. The calibration of the image characteristics was jointly funded by ANONYMIZED.

## Composition

**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?**

An instance is a tuple of an image and a segmentation mask. The image shows a landscape shot from above. The segmentation mask is a binary image. A white pixel in this mask corresponds to a pixel within a region in the image where a car is displayed. A black pixel in this mask corresponds to a pixel within a region in the image where no car is displayed.

**How many instances are there in total (of each type, if appropriate)?**

The dataset consists of 548 instances.

**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?**

The dataset does not contain all possible instances. Only images with at least one white pixel in the associated segmentation mask are considered.

**What data does each instance consist of? "Raw" data (e.g., unprocessed text or images) or features?**

Both the image and the segmentation mask consist of 256 by 256 pixels. The image is a raw image in .tif format and the segmentation mask is in .png format. The images are cropped sub-images of 12 raw images in .DNG format, consisting of 3648 by 5472 pixels.

**Is there a label or target associated with each instance?**

Each instance is associated with a binary segmentation mask.

**Is any information missing from individual instances?**

No information is missing.

**Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)?**

Since every image is a cropped sub-image of an original image, several of these sub-images belong to the same original image. All sub-images are disjoint, i.e. no different images share a pixel from the original image.

**Are there recommended data splits (e.g., training, development/validation, testing)?**

There are no recommended data splits. All the data splits that we used for our experiments were randomly picked.

**Are there any errors, sources of noise, or redundancies in the dataset?**

To the best of our knowledge, there are no errors in the dataset. The segmentation mask is created by hand and hence noisy, especially at the boundaries between a region with a car and a region without a car.

**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?**

The dataset is self-contained.

**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)?**

No. The dataset does not contain data of any of the above types.

**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**

No. The dataset does not contain data with any of the above properties.

**Does the dataset relate to people?**

The dataset does not relate to people. The drone data was screened for PIIs such as faces or license plates on cars and removed by the data collection team.

**Any other comments?**

See Table 14 for a summary of the composition of Raw-Drone.

## Collection Process

**How was the data associated with each instance acquired?**

The data was collected by flying a drone and saving the raw data. The calibration data for the drone's imager was acquired both under laboratory conditions and using a ground-based calibration target, so that it could be acquired under operating conditions.

**What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?**

To acquire the drone images, we used a DJI Mavic 2 Pro drone, equipped with a Hasselblad L1D-20c camera (Sony IMX183 sensor). This system has 2.4 µm pixels in a Bayer filter array. Images were taken with the drone hovering for maximum stability. This stability was verified to be better than a single pixel by calculating the correlation of subsequent images. The objective has a focal length of 10.3 mm. We operated this objective at an f-number of N = 8, to emulate the PSF circle diameter relative to the pixel pitch and ground sampling distance (GSD) as would be found on images from high-resolution satellites. Operating at N = 8 also minimises vignetting and aberrations, and increases depth of focus. The point-spread function (PSF) was measured to have a circle diameter of 12.5 µm using the edge-spread function technique and a ground calibration target. This corresponds to a diameter of 2.52 px, which also corresponds to a diffraction-limited system, within the uncertainty dictated by the wavelength spread of the image. Images were taken at 200 ISO, corresponding to a gain of 0.528 DN/e−. The 12-bit pixel values are however left-justified to 16 bits, so that the gain on the 16-bit numbers is 8.448 DN/e−. The images were taken at a height of 250 m, so that the GSD is 6 cm. All images were tiled in 256×256 patches. Segmentation colour masks were created to identify cars for each patch. From this mask, classification labels were generated to detect if there is a car in the image. The dataset is constituted by 548 images for the segmentation task, and 930 for classification. Six additional intensity scales were created with Jetraw.

**If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?**

The entire dataset is presented.

**Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?**

The dataset was taken by a company employee, compensated by his salary. Labeling was performed by both a company employee and a PhD student, whose PhD is funded by the company.

**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)?**

The dataset was taken as the initial step of writing this article.

**Were any ethical review processes conducted (e.g., by an institutional review board)?**

The dataset does not contain any elements requiring an ethical review process.

**Does the dataset relate to people?**

The dataset does not relate to people. There are individuals on the images, but it is not possible to identify these individuals.

## Preprocessing/Cleaning/Labeling

**Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?**

No further processing was applied to the Raw-Drone data.

**Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?**

Raw images are available in the dataset.

**Is the software used to preprocess/clean/label the instances available?**

All code used in the experiments of this manuscript is publicly available. Jetraw products that were used for acquiring the data are commercially available.

## Uses

**Has the dataset been used for any tasks already?**

The dataset has not yet been used.

**Is there a repository that links to any or all papers or systems that use the dataset?**

The repository at https://anonymous.4open.science/r/tmlr/README.md associated to this work, maintained by ANONYMIZED.

**What (other) tasks could the dataset be used for?**

The dataset can be used to study the effect of image signal processing on the performance and robustness of any other machine learning model implemented in PyTorch, designed for a segmentation task.

**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?**

To the best of our knowledge, we do not recognize such impacts.

**Are there tasks for which the dataset should not be used?**

To the best of our knowledge, there are no such tasks.

## Distribution

**Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?**

Yes. The dataset will be publicly available.

**How will the dataset be distributed (e.g., tarball on website, API, GitHub)?**

A guide to access the dataset is available at https://anonymous.4open.science/r/tmlr/README.md. Moreover, the dataset can be downloaded anonymously and directly at https://zenodo.org/record/5235536 under the DOI 10.5281/zenodo.5235536.

**When will the dataset be distributed?**

The dataset is already publicly available.

**Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?**

The dataset will be distributed under the Creative Commons Attribution 4.0 International license.

**Have any third parties imposed IP-based or other restrictions on the data associated with the instances?**

No.

**Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?**

There are no such restrictions.

## Maintenance

**Who will be supporting/hosting/maintaining the dataset?**

ANONYMIZED on behalf of ANONYMIZED.

**How can the owner/curator/manager of the dataset be contacted (e.g., email address)?**

By email via ANONYMIZED.

**Is there an erratum?**

At the time of submission, there is no such erratum. If an erratum is needed in the future it will be accessible at ANONYMIZED.

**Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?**

Yes. The dataset will be enlarged with respect to the number of instances.

**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?**

To the best of our knowledge, there are no such limits.

**Will older versions of the dataset continue to be supported/hosted/maintained?**

Older versions will be supported and maintained in the future. The dataset will continue to be hosted as long as https://zenodo.org/ exists.

**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**

For any of these requests contact either ANONYMIZED or ANONYMIZED. For now, we do not have an established mechanism to handle these requests.

| Composition of Raw-Drone | |
|---|---|
| Type of instances | Image and mask |
| Objects on images | Landscape shots from above |
| Number of instances | 548 |
| Number of original images | 12 |
| Image size | 256 by 256 pixels |
| Mask size | 256 by 256 pixels |
| Original image size | 3648 by 5472 |
| Image format | .tif |
| Mask format | .png |
| Raw image format | .DNG |

Table 14: A summary of the composition of Raw-Drone.
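As a sanity check on the two sensor-gain values quoted in the Raw-Drone collection process above: left-justifying 12-bit values in a 16-bit word multiplies every value by $2^4$, so the two numbers are consistent:

$$g_{16\,\mathrm{bit}} = 2^{4}\, g_{12\,\mathrm{bit}} = 16 \times 0.528\ \mathrm{DN}/e^{-} = 8.448\ \mathrm{DN}/e^{-}.$$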
Review 1:

Summary:
[Sorry for the late review, this was caused by unforeseen personal circumstances. I write this review without having read the other reviews to not be biased. I will address the rebuttals and other reviews in a follow-up.]

The paper proposes an explicit data model for the processing of raw imaging data to: 1) enable insights into the performance of a task model under realistic data drift, 2) allow for the identification of failure modes of the task model, and 3) jointly optimise the data model and task model to achieve better generalisation performance under potential drifts. To the best of my knowledge, this paper is the first one to explicitly combine parametrised data models with deep learning models on raw imaging data. To that extent the authors also release newly collected raw imaging datasets and make them available for future use.

Strengths and Weaknesses:
Pros:
- The paper introduces an interesting way to think about the preprocessing in computer vision problems that allows for a more realistic characterisation of a task model's performance under data drift.
- The authors collect novel datasets that could be very useful for the wider community to study the behaviour of combined data and task models -- or directly train task models on raw imaging data.
- The paper shows interesting comparisons between current data drift analyses based on data augmentations and the more realistic use of data models.

Cons:
- The paper assumes that raw sensor data is beneficial for the use in computer vision systems. However, it is unclear whether their data models could be extended to optical data drifts or even drifts due to the sensor. The latter is a well known problem in medical image analysis, in which different scanners, scanning protocols, or even different operators can yield images that are imperceivably different but lead to suboptimal performance in a task model. It would be good to discuss this as well as potential relations to further work in the medical imaging domain, where some of those changes are commonly corrected for (e.g. bias field correction).
- Some of the paper seems to be very image-acquisition centric and requires some knowledge outside of the usual ML or CV domain. It would be great if those sections could be expanded, or further explanations could be placed in the appendix.
- The data model is currently very rigid in its form. It would be interesting to expand upon this by considering different orders of processing steps. It also is not clear where this exact pipeline comes from and how it relates to common image processing pipelines implemented by, e.g., camera manufacturers.

General comments:
- Generally, it is always assumed that data models are used and the task model performs in human-understandable image space. Why would one not consider training a model in raw image space directly? What would the performance of such a model be? It'd be good to add a comment about this in the paper.
- In general in 5.3: What's the impact of training the data model? It seems to be introducing additional artifacts. Could this have negative impacts on human readability of the processed images?
- It is assumed that having access to raw data would always be beneficial and possible. However, this would incur additional costs in terms of storage or networking bandwidth when deploying such a model. How do the authors envision the use of this in practice?

Minor comments:
- Please add equation numbers to all equations to make it easier to reference.
1: "deep learning for ..." it rang loud - I believe the "it" should be removed - on p 4. it would be interesting to add a comment about MRI k-spaces as a community which commonly uses data models. - p 4.: "This can speed up and stabilize classifier training at a margin of up to...."; This sentence reads a bit convoluted and should be clarified. It only becomes clear after reading the full paper. - p 6: It is noted that the drone use case is chosen because of no ethical concerns. I argue that this is incorrect as similar models and datasets could be used in military applications. Please comment on this - p7: The drone use case is a segmentation task. This is not clear from the beginning. Also, please comment on whether the data models interfere with the localisation of the segmentations? Are those dependent on the choice of the demosaicing method? - p7: "Images are acquired is 16bit" -> "at 16 bit" - p7: "RI were extracted" - What are "RI"? - p7: "The PSF was measured"... What is a PSF"? - p8: What augmentations are actually performed in JetRaw Data? What is this tool? - p. 8: What's the expectation over in the first equation on that page? - p8: the equations for demosaicing indicate that the resolution does not change. Is this true? - For the static data model. How are the values, e.g. $bl_x$ chosen? - For the data model steps. the $v\rightarrow v_{x}$ notation is inconsistent across the different steps. The same goes for the parameter spaces of the parametrised model. - What does a.e. differentiable stand for? - Figure 4: The caption seems to refer to a previous version of the figure with multiple subfigures and should be corrected. - p 14: Why do you use a lagrange multiplier for the adversarial examples instead of a hard limit? - p14 I suppose $\lambda$ is the relaxation weight? This isn't clearly noted. - p15 / 5.3: what does low or high intensity refer to? Requested Changes: - Please expand some of the image preprocessing background to ease readability. - Please comment on the accessibility and usability of raw imaging data in practice. - Please comment on the relation to the medical imaging domain with domain-specific data processing. - I'd love to see responses to my other comments but these are the biggest ones. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper tries to connect Machine Learning Model Robustness research with explicit data model for images captured through a camera. In-place of a black-box data model, their is a use of parametrized model to understand and study the following 1. Drift Synthesis 2. Drift Forensic 3. Drift Adjustment The following is done on two datasets : Raw Microscopy and Raw Drone Datasets Strengths and Weaknesses: Strengths 1) The paper highlights Data-drift and provides two datasets for further research. 2) Helps in broadly connecting the use of Data Models in different fields for Data Drift 3) The paper helps in pairing Data Models with Machine Learning Models Robustness Validation Framework Weaknesses 1) It would be great if the authors can add in a bit more on Limitations of acquiring/availability for scientifically calibrated and labelled raw data Requested Changes: 1) This is an optional task. 
It would be great if the authors could add a bit more on the limitations of acquiring/availability of scientifically calibrated and labelled raw data.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: This paper presents an explicit, differentiable data model for images captured through a camera. The model includes the processing operations that occur within the camera system to produce the output image from raw recorded sensor readings. Two datasets of images from raw camera sensor readings are introduced, from microscopy (classification) and drone (segmentation) applications. Using these datasets, the model is applied in three ways: 1) simulating physically realistic domain-shifted images, 2) a sensitivity analysis of which parts of the processing pipeline most affect downstream model performance, and 3) showing that adapting the data model during training can stabilize training and result in better performance.

Strengths and Weaknesses:

**Strengths:**
* The conceptual premise of the work is timely and relevant - if we are able to model the data generation process, we can then build models that are robust to real-world variation. Having the data model will potentially allow robust models to be learnt without having to collect data covering all possible variations ("catalogue testing" in the paper).
* While I am not an expert in imaging, the model seems to capture the key steps involved in the imaging pipeline. It is also differentiable and compact, thus easy to integrate into existing models, given that code is also provided.
* Two datasets with raw sensor data and some variations in intensity are provided from real-world applications where dataset shift can occur. This will be a useful resource for researchers, as raw data is not usually available.

**Weaknesses:**
* Given that the premise of the paper is to have physically faithful data models, the data models and applications presented should be validated using real data instead of data simulated through the proposed data model. For example, in Section 5.1 the paper talks about using synthesized images to validate robustness across drift from different devices - this would first require validating the data model to ensure it accurately captures the real drifts.
* I am not sure the experimental methodology used in Section 5.3 is meaningful, in the following sense.
  * If the raw capture data is available, then models could be trained directly on the raw data to avoid sensitivity to shifts in downstream components of the image processing pipeline, and learn the best way to 'process' the raw inputs.
  * If it is not and we only have access to processed data, then an additional inversion step is required to first estimate the raw image, which may introduce further noise that affects learning.

I suggest the precise real-world setting be clarified so the appropriate evaluation can be done - it should be specified what data is available and what the shift is (the model is trained on what data and tested on what shifted data). If raw capture data is available, then the learned pipeline should further be compared to a task model augmented with extra initial layers that can 'process' the input; otherwise, estimation of the raw image is first required instead of giving the model the ground truth.

**Other comments:**
* I am not sure why Common Corruptions is being used as the example for augmentation testing - as the paper argues, one should use a realistic representation of the shifts in testing.
The choice of augmentations used in practice would likely correspond more closely to the anticipated shifts, which in this case are camera differences. Hence, corruptions like zoom, elastic, and potentially various forms of noise like impulse would not be considered appropriate for this case.
* There is no demonstration of the utility of the data models in addressing dataset shift caused by differences in imaging systems, i.e. that it can help make models more robust in the "catalogue testing" scenario described in the paper, and whether it works better than generic methods (e.g. unsupervised domain adaptation methods). While the paper doesn't explicitly claim this as a contribution, it is alluded to in the discussion as a practical application; actually demonstrating this would significantly increase the "interest" factor of this paper.

Requested Changes:
* [Required] Validate the proposed data model using real data
* [Required] Clarify the setup used in 5.3 and perform the appropriate comparison
* [Optional] Clarify the comparison with Common Corruptions (see "Other comments")
* [Optional] Demonstrate that data models can actually help in a "catalogue testing" scenario

Broader Impact Concerns: None

==================================================

Review 4:

Summary:
- The authors tackled an important problem: robustness of machine learning models (training/test distribution mismatch).
- The authors proposed a novel idea: pairing machine learning robustness with physical optics. More specifically, the authors directly utilize the data model (the function from raw sensor data to processed RGB image data) to improve the robustness of the ML model.
- The authors release two datasets which include the raw sensor data that can be utilized for future work in this area (including data models). Also, the authors published the codebase, which is another contribution.

Strengths and Weaknesses:

Strengths
- At least for me, it seems novel to utilize data models in the machine learning field.
- Results in Figure 7 are promising. Like a data-centric approach, optimizing the data models can significantly improve the performance.

Weaknesses
- The experimental results seem trivial (details can be found below).
- The settings seem not that practical. A setting with exactly the same raw data but different data models seems less common.
- Sections 4 and 5 can be significantly reduced, and half of the contents can be moved into the appendix.

Requested Changes:
These are the questions/concerns that I have about this paper.

1. Practicality of the settings
- I am not sure whether dataset drift is a practical setting.
- It would be good if the authors provided more examples and scenarios in which this dataset drift is common.

2. Section 4
- Section 4 is very informative. However, I think we do not need all the details for the types of data models.
- Therefore, it would be better to define static and parametric models, and the types of each data model can be included in the appendix.
- Also, the raw data acquisition part is important. However, not all the details are needed in the main manuscript.
- Currently, this section is 6-7 pages; I think it can be reduced to 2-3 pages. Then, this paper would be much easier to read.

3. Figures 4 & 5
- Based on Figures 4 & 5 and Section 5.1, it seems like the results are somewhat trivial.
- First, based on the lower part of Figure 4, the distribution mismatch is more severe in the corruption benchmarks than with the ISP. In that case, the larger performance drops are trivial.
- I think if we change the level of severity in the corruptions, the results would be different.

4. Robustness
- The more important thing is whether the augmentation methods are useful for model robustness when we use them on the training side.
- So, it would be good if the authors could run experiments like this: use the augmentation methods on the training side and test the performance on the data-model side. More specifically, swap the vertical and horizontal axes of Figure 4 (right).
- In that case, what we can show is whether augmentation methods for training make models robust to different data models.

5. Results with drift forensics
- The results in Figure 6 also seem somewhat trivial.
- With smaller lambda, we can optimize Equation (6) with larger performance drops.
- One thing that I am interested in is that larger l2-norm differences do not always cause larger performance drops.
- Still, the results in Figure 6 (left) are expected based on Equation (6).
- It would also be good to show the results on the other dataset.

6. Drift adjustment
- Actually, this result is very interesting to me.
- Based on Figure 7, if we optimize the task model with a data model (learned setting), we can achieve significant improvements in comparison to the frozen setting.
- It means that we should optimize the data models jointly in some applications.
- One concern is robustness. When we optimize the task models with the data models, can we guarantee the robustness of that model for other data models?
- For example, can we show the performance of the learned models on different test samples generated with different data model parameters?

Broader Impact Concerns: Releasing datasets always has some risks. The authors should seriously check all the details of the dataset publication. From page 18, the authors provide extensive details for the datasets.

==================================================

Review 5:

Summary: The paper recommends a novel approach to using data models for images to validate machine learning models against dataset drift. Data drift is an industry-wide issue, and it is time to find creative solutions to combat it. This problem has been preventing machine learning practitioners from harnessing the potential of camera images, as there is a difference between training and deployment data. The authors describe how machine learning models can intersect with image data and how this increases the robustness of the model. The authors describe augmentation testing in detail, explain why it fails, and suggest an alternative approach. Another interesting component of the paper was how differentiable data models can be used for drift adjustments. The authors leveraged two datasets for their research (Raw Microscopy and Raw Drone data).

Strengths and Weaknesses:

Strengths:
1. The strongest point of this entire paper is that it not only talks about how the existing approach might not be the best way forward but also suggests an alternative method. For example, when they claim that augmentation fails for processed images, they back it up by showing how catalog testing is a better approach.
2. Another aspect of the paper that I feel really stands out is the comparison done between different approaches to dataset drift validation. This allows the reader to appreciate that one size does not fit all in machine learning, and to see which approach to choose based on the use case.
3. I liked how the authors didn't jump into the data modeling immediately but explained the detailed differences between raw sensor data and processed images.
4. The summary stands out as well. The authors not only talk about the impact of the research but also about how the research could be furthered, suggesting that researchers should have access to more datasets so that they can continue this line of work and keep coming up with novel approaches.

Requested Changes:
This is just a friendly recommendation from a reader's perspective and has nothing to do with the acceptance of the research. It would be great to add a detailed description of the term "physically faithful dataset". I feel it is a broad term that can be used in multiple different contexts, so it is possible that the reader might attach a different connotation to what it means.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Reject

Comment: This is a borderline decision. As I explained above, the paper and released datasets are already interesting for the community, but given the above concerns about the main claims and provided evidence, I am recommending rejection. I recommend the authors improve the paper sufficiently by taking the reviewers' feedback into account and submit again.

==================================================
# Forces Are Not Enough: Benchmark And Critical Evaluation For Machine Learning Force Fields With Molecular Simulations

Xiang Fu xiangfu@mit.edu Massachusetts Institute of Technology
Zhenghao Wu *zhenghao.wu@northwestern.edu* Northwestern University
Wujie Wang *wwj@mit.edu* Massachusetts Institute of Technology
Tian Xie *tianxie@microsoft.edu* Microsoft Research
Sinan Keten s-keten@northwestern.edu Northwestern University
Rafael Gomez-Bombarelli rafagb@mit.edu Massachusetts Institute of Technology
Tommi Jaakkola tommi@csail.mit.edu Massachusetts Institute of Technology

Reviewed on OpenReview: *https://openreview.net/forum?id=A8pqQipwkt*

## Abstract

Molecular dynamics (MD) simulation techniques are widely used for various natural science applications. Increasingly, machine learning (ML) force field (FF) models are beginning to replace ab-initio simulations by predicting forces directly from atomic structures. Despite significant progress in this area, such techniques are primarily benchmarked by their force/energy prediction errors, even though the practical use case would be to produce realistic MD trajectories. We aim to fill this gap by introducing a novel benchmark suite for learned MD simulation. We curate representative MD systems, including water, organic molecules, a peptide, and materials, and design evaluation metrics corresponding to the scientific objectives of the respective systems. We benchmark a collection of state-of-the-art (SOTA) ML FF models and illustrate, in particular, how the commonly benchmarked force accuracy is not well aligned with relevant simulation metrics. We demonstrate when and how selected SOTA methods fail, along with offering directions for further improvement. Specifically, we identify stability as a key metric for ML models to improve. Our benchmark suite comes with a comprehensive open-source codebase for training and simulation with ML FFs to facilitate future work.

## 1 Introduction

Molecular Dynamics (MD) simulation is a widely used technique that provides atomistic insights into physical phenomena in materials and biological systems (Alder & Wainwright, 1959; Rahman, 1964; Schlick, 2010). MD simulations use force fields (FFs) to characterize the underlying potential energy surface (PES) of the system and simulate long trajectories based on Newton's second law (Frenkel & Smit, 2001). The PES itself is challenging to compute and would ideally be obtained through quantum chemistry, which is computationally expensive. Traditionally, the alternative has been parameterized force fields built from empirically chosen functional forms (Halgren, 1996). Recently, machine learning (ML) force fields (Unke et al., 2021b) have shown promise to accelerate MD simulations by orders of magnitude while being quantum chemically accurate. The learned force field can then simulate MD or perform structural relaxation.

![1_image_0.png](1_image_0.png)

Figure 1: (a) Results on simulating a water system with ML force fields. Models are sorted by force mean absolute error (MAE) in descending order. High stability and low radial distribution function (RDF) MAE are better. Performance in force error does not align with simulation-based metrics. (b) Force-only evaluation may not reveal key factors in simulating MD with an ML force field. In this toy example, model 2 (green) has a lower force error but likely leads to unstable simulations due to extreme forces from local pathological behavior. (c) Illustrations of MD observables.
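As a concrete picture of how a force field drives a simulation, the minimal sketch below performs velocity-Verlet integration with a generic `force_fn` callable standing in for any (learned or classical) force field; the harmonic toy force, particle count, and step size are illustrative assumptions only, not part of the benchmark protocol.

```python
import numpy as np

def velocity_verlet_step(x, v, f, force_fn, masses, dt):
    """One velocity-Verlet step of d^2x/dt^2 = F(x)/m for N particles.

    x, v, f: (N, 3) positions, velocities, and current forces.
    force_fn: callable mapping positions (N, 3) -> forces (N, 3).
    """
    a = f / masses[:, None]
    x_new = x + v * dt + 0.5 * a * dt ** 2       # update positions
    f_new = force_fn(x_new)                      # forces at the new positions
    v_new = v + 0.5 * (a + f_new / masses[:, None]) * dt
    return x_new, v_new, f_new

# Toy stand-in for a learned force field: a harmonic well F(x) = -x.
force_fn = lambda x: -x
x, v = np.random.randn(8, 3), np.zeros((8, 3))
f, masses = force_fn(x), np.ones(8)
for _ in range(1000):                            # a short toy trajectory
    x, v, f = velocity_verlet_step(x, v, f, force_fn, masses, dt=0.01)
```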
However, despite MD simulations being a primary motivation for ML FFs, the evidence supporting the utility of ML FFs is often based on their accuracy in reconstituting forces across test cases (Faber et al., 2017), without involving simulations. This paper highlights the importance of simulation-based evaluation, demonstrating that force accuracy alone does not suffice for effective simulation (Figure 1 (a)). In practice, the exact recovery of the trajectories given the initial conditions is not the ultimate goal of learning MD simulations. Instead, MD simulation quality is better evaluated through macroscopic observables that characterize system properties (Leach & Leach, 2001; Tuckerman, 2010). These observables are designed to be predictive of material properties, such as diffusivity in electrolyte materials (Webb et al., 2015), or to reveal detailed physical mechanisms, such as the folding kinetics of proteins (Lane et al., 2011; Lindorff-Larsen et al., 2011). Although these observables are critical products of MD simulations, systematic evaluations have not been sufficiently studied in the existing literature. In particular, we show that even small errors in the force field parameters can lead to catastrophic failure in long-time simulation - ML force fields may exhibit pathological behavior such as extreme force/energy predictions on states not captured by the training data, causing the simulation to become unstable or explode (toy example illustrated in Figure 1 (b)).

A benchmark for MD simulations requires mindful design over **the selection of systems, the simulation protocol, and the evaluation metrics**: (1) A benchmark suite should cover diverse and representative systems to reflect the various challenges in different MD applications. (2) Simulations are computationally expensive (see Table 7 in Appendix B). An ideal benchmark must balance the cost of evaluation and the system's complexity to obtain meaningful metrics with reasonable time and computation. (3) Selected systems should be well studied in the simulation domain, and the chosen metrics should be based on physical observables that characterize the system's important degrees of freedom or geometric features to reflect practical utility. We carefully curate four representative MD systems and design simulation protocols/metrics considering all the abovementioned desiderata.

Are current state-of-the-art (SOTA) ML FFs capable of simulating a variety of MD systems? What might cause a model to fail in simulations? This paper aims to answer these questions with a novel benchmark study. The contributions of this paper include:

- We introduce a novel benchmark suite for ML MD simulation with simulation protocols and quantitative metrics. We perform extensive experiments to benchmark a collection of SOTA ML models. We provide a complete codebase for training and simulating MD with ML FFs to lower the barrier to entry and facilitate future work.
- We show that many existing models are inadequate when evaluated on simulation-based benchmarks, even when they show accurate force prediction (as shown in Figure 1).
- By performing and analyzing MD simulations, we summarize common failure modes and discuss the causes and potential solutions to motivate future research.

## 2 Preliminaries
Training. An ML FF aims to learn the potential energy surface $\hat{E}(\mathbf{x}) \in \mathbb{R}$ as a function of atomic coordinates $\mathbf{x} \in \mathbb{R}^{N \times 3}$ ($N$ is the number of atoms) by fitting atom-wise forces $\hat{\mathbf{F}}(\mathbf{x})$ and energies from a training dataset $\{\mathbf{x}_i, \mathbf{F}_i, E_i\}_{i=1}^{N_{\text{data}}}$, where $\mathbf{x}_i \in \mathbb{R}^{N \times 3}$, $\mathbf{F}_i \in \mathbb{R}^{N \times 3}$, and $E_i \in \mathbb{R}$. For evaluation, the test force/energy prediction accuracy is often used as a proxy to quantify the quality of the learned PES. The force field learning protocol has been well established (Unke et al., 2021b).

MD simulation. Simulating molecular behaviors requires integrating a Newtonian equation of motion $d^2\mathbf{x}/dt^2 = m^{-1}\mathbf{F}(\mathbf{x})$ with forces obtained by differentiating $\hat{E}(\mathbf{x})$: $\mathbf{F}(\mathbf{x}) = -\partial \hat{E}(\mathbf{x})/\partial \mathbf{x}$. To mimic desired thermodynamic conditions such as constant temperature/pressure, an appropriate thermostat and barostat are chosen to augment the equation of motion with extended variables. These conditions are system-dependent and task-dependent. The simulation produces a time series of positions $\{\mathbf{x}_t \in \mathbb{R}^{N \times 3}\}_{t=0}^{T}$ (and velocities), where $t$ is the temporal order index and $T$ is the total number of simulation steps.

Observables. From the time series of positions (and velocities), observables $O(\mathbf{x}_t)$ can be computed to characterize the state of the system at different granularities. Under the ergodic hypothesis, the time averages of the simulation observables converge to distributional averages under the Gibbs measure (Frenkel & Smit, 2001): $\langle O \rangle = \lim_{T \to \infty} \frac{1}{T} \sum_{t}^{T} O(\mathbf{x}_t) = \int d\mathbf{x}\, p(\mathbf{x})\, O(\mathbf{x})$, where $p(\mathbf{x}) \propto \exp\left(-\frac{\hat{E}(\mathbf{x})}{k_B T}\right)$ with $T$ as the bath temperature and $k_B$ as the Boltzmann constant. Calculations of such observables require the system to reach equilibrium. Simulation observables connect simulations to experimental measurements and are predictive of macroscopic properties of matter. Common observables (illustrated in Figure 1 (c)) include radial distribution functions (RDFs), the virial stress tensor, mean-squared displacement (MSD), dihedral angles, etc. We propose evaluation metrics based on well-established observables in the respective types of systems (Section 5).
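To make the force evaluation $\mathbf{F}(\mathbf{x}) = -\partial \hat{E}(\mathbf{x})/\partial \mathbf{x}$ above concrete, the following is a minimal sketch of obtaining conservative forces from a learned energy via automatic differentiation in PyTorch; the quadratic `energy_model` is a toy placeholder, not one of the benchmarked architectures.

```python
import torch

def forces_from_energy(energy_model, x):
    """Conservative forces F = -dE/dx via automatic differentiation.

    energy_model: callable mapping coordinates (N, 3) -> scalar energy.
    """
    x = x.detach().requires_grad_(True)
    energy = energy_model(x)
    (grad,) = torch.autograd.grad(energy, x)
    return -grad

# Placeholder energy surface standing in for a trained ML potential.
energy_model = lambda x: (x ** 2).sum()
x = torch.randn(5, 3)
print(forces_from_energy(energy_model, x))  # equals -2 * x for this toy model
```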
## 3 Related Work

ML force fields learn the potential energy surface (PES) from the data by applying expressive regressors such as kernel methods (Chmiela et al., 2017) and neural networks on symmetry-preserving representations of atomic environments (Behler & Parrinello, 2007; Khorshidi & Peterson, 2016; Smith et al., 2017; Artrith et al., 2017; Unke & Meuwly, 2018; Zhang et al., 2018b;a; Kovács et al., 2021; Thölke & De Fabritiis, 2021; Takamoto et al., 2022). Recently, graph neural network architectures (Gilmer et al., 2017; Schütt et al., 2017; Gasteiger et al., 2020; Liu et al., 2021) have gained popularity as they provide a systematic strategy for building many-body correlation functions to capture highly complex PES. In particular, equivariant representations have been shown powerful in representing atomic environments (Satorras et al., 2021; Thomas et al., 2018; Qiao et al., 2021; Schütt et al., 2021; Batzner et al., 2022; Gasteiger et al., 2021; Liao & Smidt, 2022; Gasteiger et al., 2022), leading to significant improvements in benchmarks such as MD17 and OC22/20. Some works presented simulation-based results (Unke et al., 2021a; Park et al., 2021; Batzner et al., 2022; Musaelian et al., 2022) but do not compare different models with simulation-based metrics.

Existing benchmarks for ML force fields (Ramakrishnan et al., 2014; Chmiela et al., 2017) mostly focus on force/energy prediction, with small molecules being the most typical systems. The catalyst-focused OC20 (Chanussot et al., 2021) and OC22 (Tran et al., 2022) benchmarks focus on structural relaxation with force computations, where force prediction is part of the evaluation metrics. The structural relaxation processes are around hundreds of steps, and the goal is to predict the final relaxed structure/energy. These tasks do not characterize system properties under a structural ensemble, which requires simulations that are millions of steps long. Several recent works (Rosenberger et al., 2021) have also studied the utility of certain ML FFs in MD simulations. In particular, Stocker et al. (2022) use GemNet (Gasteiger et al., 2021) to simulate small molecules in the QM7-x (Hoja et al., 2021) dataset, with a focus on prediction robustness.

![3_image_0.png](3_image_0.png)

Figure 2: Visualization of the benchmarked systems. (a) MD17 molecules: Aspirin, Ethanol, Naphthalene, and Salicylic acid. (b) 64 water molecules. (c) 512 water molecules. (d) Alanine dipeptide. (e) LiPS.

| Dataset           | System Type          | PBC | #Atoms | Simulation Length   | Objectives              |
|-------------------|----------------------|-----|--------|---------------------|-------------------------|
| MD17              | Small molecule       | ✗   | 9-21   | 300 ps (600k steps) | Interatomic distances   |
| Water             | Liquid               | ✓   | 192    | 500 ps (500k steps) | RDF, Diffusivity        |
| Water-large       | Liquid               | ✓   | 1536   | 150 ps (150k steps) | RDF, Diffusivity        |
| Alanine dipeptide | Peptide              | ✗   | 22     | 5 ns (2.5M steps)*  | Dihedral angle analysis |
| LiPS              | Solid-state material | ✓   | 83     | 50 ps (200k steps)  | RDF, Diffusivity        |

Table 1: Summary of the selected MD systems. *The alanine dipeptide simulation uses Metadynamics with implicit solvation.

Zhai et al. (2022) apply the DeepMD (Zhang et al., 2018a) architecture to simulate water and demonstrate its shortcoming in generalization across different phases. However, existing works focus on a single system and model, while evaluation protocols and quantitative metrics for model comparison remain subject to debate. The experimental results in past research are often presented qualitatively or as figures, which can be hard to analyze or to use as a scalar metric for comparing the performance of different models. As previous works are mostly motivated by specific use cases, they often lack coverage over diverse MD systems and have very different data generation processes, making it hard to disentangle the impact of the ML model from that of the data. Since the experiments are designed for a single model, the simulation protocols in previous works are often too expensive for large-scale evaluation. Systematic benchmarks for simulation-based metrics are lacking in the existing literature, which obscures the challenges in applying ML FFs for MD applications.

## 4 Datasets

Popular benchmark datasets, such as MD17, focus on the force prediction task for gas-phase small molecules. However, successes in these tasks are not sufficient evidence for (1) capturing complex interatomic interactions that are critical for condensed-phase systems; and (2) recovery of critical simulation observables that cannot be directly indicated by force prediction accuracy. This work focuses on atomic-level MD simulations that manifest complex intermolecular interactions at multiple scales. We choose systems that (1) have been frequently used in force field development (Henderson, 1974); (2) cover diverse MD applications such as materials and biology; and (3) can be simulated within reasonable time and compute.
The simulation conditions such as temperature and time step sizes (Tuckerman, 2010) are configured to replicate realistic settings in the MD literature. Beyond force predictions, we conduct simulations and benchmark observables that reflect the actual simulation quality, along with stability and computational efficiency. The selected systems are summarized in Table 1.

MD17 (Chmiela et al., 2017) contains AIMD calculations for eight small organic molecules and is widely used as a force prediction benchmark for ML FFs. We adopt four molecules from MD17 and benchmark the simulation performance. In addition to force error, we evaluate the stability and the distribution of interatomic distances h(r). For each molecule, we randomly sample 9,500 configurations for training and 500 for validation from the MD17 database. We randomly sample 10,000 configurations from the rest of the data for force error evaluation. We perform five simulations of 300 ps for each model/molecule by initializing from 5 randomly sampled testing configurations, with a time step of 0.5 fs, at 500 K temperature, under a Nosé–Hoover thermostat.

Water is arguably the most important molecular fluid in biological and chemical processes. Due to its complex thermodynamic and phase behavior, it poses great challenges for molecular simulations. In addition to force error, we evaluate simulation stability and the recovery of both equilibrium statistics and dynamical statistics, namely the element-conditioned RDFs and the liquid diffusion coefficient. Our dataset consists of 100,000 structures collected every 10 fs from a 1 ns trajectory sampled at equilibrium and a temperature of 300 K. We benchmarked all models with various training+validation dataset sizes (1k/10k/90k randomly sampled structures) and used the remaining 10,000 structures for testing. We performed 5 simulations of 500 ps by initializing from 5 randomly sampled testing configurations, with a time step of 1 fs, at 300 K temperature, under a Nosé–Hoover thermostat. Additionally, we evaluate model generalization to a larger system of 512 water molecules with 5 simulations of 150 ps.

Alanine dipeptide features multiple metastable states, making it a classic benchmark for the development of MD sampling methods and force fields (Head-Gordon et al., 1989; Kaminski et al., 2001). Its geometric flexibility is well represented by the central dihedral (torsional) angles ϕ and ψ. Our reference data are obtained from simulations with explicit water molecules, with detailed protocols described in Appendix A. For faster simulation, we learn an implicitly solvated FF following a protocol similar to Chen et al. (2021). Our task is more challenging in that it aims to learn the implicitly solvated atomistic FF rather than the implicit solvation correction in Chen et al. (2021). To facilitate accelerated sampling, we apply metadynamics with ϕ and ψ as the collective variables. We evaluate force prediction, simulation stability, and free energy surface (FES) reconstruction F(ϕ, ψ). Our dataset consists of 50,000 structures dumped every 2 ps from a 100 ns trajectory at a temperature of 300 K. We used 38,000 randomly sampled structures for training, 2,000 for validation, and the rest as a test set. We performed 6 simulations of 5 ns by initializing from six local minima on the FES (Figure 5), with a time step of 2 fs, at 300 K, and under a Langevin thermostat to mimic random noise from solvation effects. All simulations use length constraints on bonds involving hydrogen to enable the 2 fs time step. More information on our simulation protocols can be found in Appendix A.
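Because both the metadynamics collective variables and the FES metric introduced below are built on the backbone dihedrals ϕ and ψ, a minimal NumPy sketch of the dihedral computation is included here; the choice of the four defining atoms (e.g., C−N−Cα−C for ϕ) is assumed to come from the molecule's topology.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (radians) defined by four atom positions."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))

# Example with hypothetical coordinates for the four defining atoms.
phi = dihedral(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]))
```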
LiPS is a crystalline superionic lithium conductor relevant to battery development and a representative system for MD simulation usage in studying kinetic properties in materials. We adopt this dataset from Batzner et al. (2022) and benchmark all models on their force error, stability, RDF recovery, and Li-ion diffusivity coefficient. The dataset has 25,000 structures in total, from which we use 19,000 randomly sampled structures for training, 1,000 structures for validation, and the rest for computing force error. We conduct 5 simulations of 50 ps by initializing from 5 randomly sampled testing configurations, with a time step of 0.25 fs, at 520 K temperature, under a Nosé–Hoover thermostat.

## 5 Evaluation Metrics

We design benchmark metrics based on physical observables that describe the most fundamental structural and dynamical properties of an MD system. These observables are relatively easy to understand and compute, and their accurate recovery is often deemed a prerequisite for computing more sophisticated observables. We note that there are other observables of practical interest, such as thermal conductivity and viscosity. However, these observables are often used for specific applications and require extra domain knowledge or non-standard MD setups to compute. Our benchmark aims to simplify the MD procedure while being comprehensive, and to focus on analyzing the performance of the ML models.

Distribution of interatomic distances is a low-dimensional description of the 3D structure and has been studied in previous work (Zhang et al., 2018a). For a given configuration $\mathbf{x}$, the distribution of interatomic distances $h(r)$ is computed with:

$$h(r)=\frac{1}{N(N-1)}\sum_{i}^{N}\sum_{j\neq i}^{N}\delta(r-||\mathbf{x}_{i}-\mathbf{x}_{j}||)\tag{1}$$

where $r$ is the distance from a reference particle; $N$ is the total number of particles; $i, j$ index the pairs of atoms that contribute to the distance statistics; and $\delta$ is the Dirac delta function used to extract value distributions. To calculate the ensemble average, $h(r)$ is calculated and averaged over frames from equilibrated trajectories. The MAE is then calculated by integrating over $r$: $\text{MAE}_{h(r)} = \int_{r=0}^{\infty} |\langle h(r)\rangle - \langle \hat{h}(r)\rangle| \, dr$, where $\langle\cdot\rangle$ is the averaging operator, $\langle h(r)\rangle$ is the reference equilibrium $h(r)$, and $\langle \hat{h}(r)\rangle$ is the model-predicted equilibrium $h(r)$.

RDF. As one of the most informative simulation observables, the radial distribution function (RDF) describes the structural/thermodynamic properties of the system and is experimentally measurable (Yarnell et al., 1973; Filipponi, 1994). It has been widely used in force field development (Henderson, 1974). By definition, the RDF describes how density varies as a function of distance from a reference particle. For a given configuration $\mathbf{x}$, the RDF can be computed with the following formula:

$$\mathrm{RDF}(r)=\frac{1}{4\pi r^{2}}\frac{1}{N\rho}\sum_{i}^{N}\sum_{j\neq i}^{N}\delta(r-||\mathbf{x}_{i}-\mathbf{x}_{j}||)\tag{2}$$

where $\rho$ is the density of the system and the remaining symbols are as in Eq. (1). To calculate the ensemble average, $\mathrm{RDF}(r)$ is calculated and averaged over frames from equilibrated trajectories. The final RDF MAE is then calculated by integrating over $r$: $\text{MAE}_{\mathrm{RDF}} = \int_{r=0}^{\infty} |\langle \mathrm{RDF}(r)\rangle - \langle \hat{\mathrm{RDF}}(r)\rangle| \, dr$, where $\langle \mathrm{RDF}(r)\rangle$ is the reference equilibrium RDF and $\langle \hat{\mathrm{RDF}}(r)\rangle$ is the model-predicted RDF.
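For illustration, a minimal histogram estimator of Eq. (2) for a cubic periodic box is sketched below; the bin count and normalization details are illustrative choices rather than the exact protocol of the benchmark, and the same pairwise-distance histogram underlies h(r) in Eq. (1).

```python
import numpy as np

def rdf(frames, box, n_bins=100):
    """Average RDF over trajectory frames for a cubic periodic box.

    frames: (T, N, 3) array of coordinates; box: edge length of the box.
    """
    r_max = box / 2
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    n = frames.shape[1]
    iu = np.triu_indices(n, k=1)                  # unique i < j pairs
    for x in frames:
        diff = x[:, None, :] - x[None, :, :]
        diff -= box * np.round(diff / box)        # minimum-image convention
        d = np.linalg.norm(diff, axis=-1)[iu]
        hist += np.histogram(d, bins=edges)[0]
    rho = n / box ** 3
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shells * n / 2                  # ideal-gas pair counts per shell
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / (len(frames) * ideal)

centers, g = rdf(np.random.rand(10, 64, 3) * 10.0, box=10.0)  # toy frames
```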
Diffusivity coefficient is relevant to many practical applications such as battery design (Xie et al., 2022). The diffusivity coefficient $D$ quantifies the time-correlation of the translational displacement and can be computed from the mean square displacement:

$$D=\lim_{t\rightarrow\infty}\frac{1}{6t}\frac{1}{N^{\prime}}\sum_{i=1}^{N^{\prime}}|\mathbf{x}_{i}(t)-\mathbf{x}_{i}(0)|^{2}\tag{3}$$

where $\mathbf{x}_i(t)$ is the coordinate of particle $i$ at time $t$, and $N^{\prime}$ is the number of particles being tracked in the system. For the water system, we monitor the liquid diffusivity coefficient and track all 64 oxygen atoms. For the LiPS system, we monitor Li-ion diffusivity and track all 27 Li-ions. Accurate recovery of $D$ requires sufficiently long trajectories, as it converges with long simulation time. In this paper, we only compute diffusivity for stable trajectories of at least 100 ps for water and 40 ps for LiPS. As we simulate multiple runs for each model, we average the diffusivity coefficients extracted from each valid trajectory to obtain the final prediction.

Free energy surface. Given the probability distribution over configurations $p(\mathbf{x})$, a chosen geometric coordinate $\xi$ transformed from $\mathbf{x}$, and the marginalized density $p(\xi)$, the free energy (Tuckerman, 2010) can be calculated from $F(\xi) = -k_B T \ln p(\xi)$. In the specific case of alanine dipeptide, there are two main conformational degrees of freedom: the dihedral angle ϕ of C−N−Cα−C and the dihedral angle ψ of N−Cα−C−N. Therefore, the FES w.r.t. ϕ and ψ is the most physically informative. We propose our quantitative metric $\text{MAE}_{F(\phi), F(\psi)}$ based on the absolute error in reconstructing the FES along the ϕ and ψ coordinates. We integrate the absolute difference between the reference free energy $F$ and the model-predicted $\hat{F}$ over $[-\pi, \pi)$: $\text{MAE}_{F(\phi)} = \int_{\phi=-\pi}^{\pi} |F(\phi) - \hat{F}(\phi)| \, d\phi$. The error for $F(\psi)$ is defined similarly.
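Minimal sketches of the two estimators above are given below: $D$ from the slope of the MSD following Eq. (3), and a one-dimensional FES from a histogram of a collective variable. The fitting window, bin count, and the assumption of unwrapped coordinates are illustrative choices, not the benchmark's exact protocol.

```python
import numpy as np

def diffusivity(positions, dt):
    """Estimate D (Eq. 3) from the slope of the MSD: MSD(t) ~ 6 D t.

    positions: (T, N, 3) unwrapped coordinates; dt: time between frames.
    """
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=-1).mean(axis=-1)        # (T,) average over atoms
    t = np.arange(len(msd)) * dt
    half = len(t) // 2                                  # fit the diffusive tail
    slope = np.polyfit(t[half:], msd[half:], deg=1)[0]
    return slope / 6.0

def free_energy(xi, kBT, n_bins=60):
    """F(xi) = -kB T ln p(xi) from samples of a collective variable xi,
    e.g. the backbone dihedral phi collected over a trajectory."""
    p, edges = np.histogram(xi, bins=n_bins, range=(-np.pi, np.pi), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    F = -kBT * np.log(np.clip(p, 1e-12, None))          # clip empty bins
    return centers, F - F.min()                         # zero at the minimum
```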
Quantifying simulation stability. ML FFs can produce unstable dynamics, as the learned force field may not extrapolate robustly to the undersampled configuration space and may predict extreme forces. The trajectory can enter nonphysical states that would never occur in a realistic simulation (e.g., bond breaking should never happen for non-reactive dynamics at a low temperature). Predictions can then become increasingly pathological as the input states move far from those captured in the training data. Such unstable trajectories are not meaningful for observable calculations. For a fair comparison of different models, we need to quantify stability and compute ensemble statistics only over the stable part of the simulated trajectories. Although energy drift has been used in previous work in classical MD to monitor stability (Tuckerman et al., 1992), it is not ideal for our benchmark. As data generation using quantum chemistry is expensive, we cannot assume access to the ground-truth energy function at evaluation time. Monitoring the energy drift predicted by the ML model is inconsistent across models and unreliable. Since all MD systems studied in this paper are in equilibrium, we instead keep track of stability by closely monitoring equilibrium statistics. Such equilibrium statistics have a range of sensible values that a realistic simulation will never leave. We can characterize the physical range by computing the equilibrium statistics from reference data and assert that a simulation becomes "unstable" when the deviation from equilibrium exceeds the realistic range by too much.

Stability criterion. For systems with periodic boundary conditions, we monitor the RDF and say a simulation becomes "unstable" at time $T$ when:

$$\int_{r=0}^{\infty}||\langle\mathrm{RDF}(r)\rangle-\langle\hat{\mathrm{RDF}}_{t}(r)\rangle_{t=T}^{T+\tau}||\,dr>\Delta\tag{4}$$

where $\langle\cdot\rangle$ is the averaging operator, $\tau$ is a short time window, and $\Delta$ is the stability threshold. In this paper, we use $\tau = 1$ ps, $\Delta = 3.0$ for water, and $\tau = 1$ ps, $\Delta = 1.0$ for LiPS. For water, we assert instability if any of the three element-conditioned RDFs, RDF(O,O), RDF(H,H), or RDF(H,O), exceeds the threshold. For flexible molecules, we keep track of stability through the bond lengths and say a simulation becomes "unstable" at time $T$ when:

$$\max_{(i,j)\in B}\big|\,||\mathbf{x}_{i}(T)-\mathbf{x}_{j}(T)||-b_{i,j}\,\big|>\Delta\tag{5}$$

where $B$ is the set of all bonds, $i, j$ are the two endpoints of a bond, and $b_{i,j}$ is the equilibrium bond length. For both the MD17 molecules and alanine dipeptide, we use $\Delta = 0.5$ Å. Our chosen stability thresholds are set to relaxed values to detect catastrophic failures that the model cannot recover from. We include more details on threshold selection in Appendix B. All metrics related to observable prediction are computed only over the stable part of the trajectories to decouple accuracy and stability.
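A direct transcription of the bond-length criterion in Eq. (5) might look as follows; the bond list and equilibrium bond lengths are assumed to be supplied by the reference topology.

```python
import numpy as np

def is_unstable(x, bonds, b_eq, delta=0.5):
    """Bond-length stability criterion of Eq. (5).

    x: (N, 3) coordinates at time T; bonds: (M, 2) integer atom-index pairs
    from the reference topology; b_eq: (M,) equilibrium bond lengths;
    delta: threshold (0.5 Angstrom for MD17 and alanine dipeptide).
    """
    lengths = np.linalg.norm(x[bonds[:, 0]] - x[bonds[:, 1]], axis=-1)
    return float(np.max(np.abs(lengths - b_eq))) > delta

# Toy check on a two-atom "molecule" with one bond of equilibrium length 1.0.
x = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
print(is_unstable(x, np.array([[0, 1]]), np.array([1.0])))  # False: |0.2| <= 0.5
```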
## 6 Experiments

Benchmarked models. In our experiments, we aim to cover diverse models with different designs in terms of architecture and symmetry principles. We adopt the Open Catalyst Project implementation of SchNet (Schütt et al., 2017), DimeNet (Gasteiger et al., 2020), ForceNet (Hu et al., 2021), PaiNN (Schütt et al., 2021), and GemNet-T/dT (Gasteiger et al., 2021), and the official implementation of DeepPot-SE (Zhang et al., 2018b), SphereNet (Liu et al., 2021), and NequIP (Batzner et al., 2022). A summary of all benchmarked models is in Table 2.

| Model                             | Symmetry Principle of Geometric Features | Energy Conservation | #Parameters |
|-----------------------------------|------------------------------------------|---------------------|-------------|
| DeepPot-SE (Zhang et al., 2018b)  | E(3)-invariant                           | ✓                   | 1.04M       |
| SchNet (Schütt et al., 2017)      | E(3)-invariant                           | ✓                   | 0.12M       |
| DimeNet (Gasteiger et al., 2020)  | E(3)-invariant                           | ✓                   | 2.1M        |
| PaiNN (Schütt et al., 2021)       | SE(3)-equivariant                        | ✓                   | 0.59M       |
| SphereNet (Liu et al., 2021)      | E(3)-invariant                           | ✓                   | 1.89M       |
| ForceNet (Hu et al., 2021)        | Translation-invariant                    | ✗                   | 11.37M      |
| GemNet-T (Gasteiger et al., 2021) | E(3)-invariant                           | ✓                   | 1.89M       |
| GemNet-dT (Gasteiger et al., 2021)| SE(3)-equivariant                        | ✗                   | 2.31M       |
| NequIP (Batzner et al., 2022)     | E(3)-equivariant                         | ✓                   | 1.05M       |

Table 2: Summary of the benchmarked models; SE(3)/E(3)-equivariant models use an equivariant representation at every layer. Numbers of parameters on the MD17 dataset are reported.

These models have been popular in previous benchmark studies for force/energy prediction. They use different model architectures (feed-forward neural networks vs. message-passing neural networks), different representations for atomistic interactions (e.g., whether angular/torsional information is used), and respect different levels of Euclidean symmetry. We follow all original hyperparameters introduced in the respective papers and only make minimal adjustments when the training is unstable. More details on the hyperparameters can be found in Appendix B.

| Molecule       | Metric        | DeepPot-SE | SchNet     | DimeNet    | PaiNN      | SphereNet  | ForceNet   | GemNet-T   | GemNet-dT  | NequIP     | OPLS |
|----------------|---------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|------|
| Aspirin        | Force (↓)     | 21.0       | 35.6       | 10.0       | 9.2        | 3.4        | 22.1       | 3.3        | 5.1        | 2.3        | -    |
|                | Stability (↑) | 9(15)      | 26(23)     | 54(12)     | 159(121)   | 141(54)    | 182(144)   | 72(50)     | 192(132)   | 300(0)     | -    |
|                | h(r) (↓)      | 0.65(0.47) | 0.36(0.57) | 0.04(0.00) | 0.04(0.01) | 0.03(0.00) | 0.56(0.15) | 0.04(0.02) | 0.04(0.01) | 0.02(0.00) | 0.21 |
|                | FPS (↑)       | 88.0       | 108.9      | 20.6       | 85.8       | 17.5       | 137.3      | 28.2       | 56.8       | 8.4        | -    |
| Ethanol        | Force (↓)     | 8.9        | 16.8       | 4.2        | 5.0        | 1.7        | 14.9       | 2.1        | 1.7        | 1.3        | -    |
|                | Stability (↑) | 300(0)     | 247(106)   | 26(10)     | 86(109)    | 33(16)     | 300(0)     | 169(98)    | 300(0)     | 300(0)     | -    |
|                | h(r) (↓)      | 0.09(0.00) | 0.21(0.11) | 0.15(0.03) | 0.15(0.08) | 0.13(0.03) | 0.86(0.05) | 0.10(0.02) | 0.09(0.00) | 0.08(0.00) | 0.36 |
|                | FPS (↑)       | 101.0      | 112.6      | 21.4       | 87.3       | 30.5       | 141.1      | 27.1       | 54.3       | 8.9        | -    |
| Naphthalene    | Force (↓)     | 13.4       | 22.5       | 5.7        | 3.8        | 1.5        | 9.9        | 1.5        | 1.9        | 1.1        | -    |
|                | Stability (↑) | 246(109)   | 18(2)      | 85(68)     | 300(0)     | 6(3)       | 300(0)     | 8(2)       | 25(10)     | 300(0)     | -    |
|                | h(r) (↓)      | 0.11(0.00) | 0.09(0.00) | 0.10(0.01) | 0.13(0.00) | 0.14(0.04) | 1.02(0.00) | 0.13(0.00) | 0.12(0.01) | 0.12(0.01) | 0.30 |
|                | FPS (↑)       | 109.3      | 110.9      | 19.1       | 92.8       | 18.3       | 140.2      | 27.7       | 53.5       | 8.2        | -    |
| Salicylic acid | Force (↓)     | 14.9       | 26.3       | 9.6        | 6.5        | 2.6        | 12.8       | 4.0        | 4.0        | 1.6        | -    |
|                | Stability (↑) | 300(0)     | 300(0)     | 73(82)     | 281(37)    | 36(16)     | 1(0)       | 26(24)     | 94(109)    | 300(0)     | -    |
|                | h(r) (↓)      | 0.03(0.00) | 0.03(0.00) | 0.06(0.02) | 0.03(0.00) | 0.06(0.02) | 0.35(0.00) | 0.08(0.04) | 0.07(0.03) | 0.03(0.00) | 0.25 |
|                | FPS (↑)       | 94.6       | 111.7      | 19.4       | 90.5       | 21.4       | 143.2      | 28.5       | 52.4       | 8.4        | -    |

Table 3: Results on MD17. A dash ("-") indicates non-applicable metrics.

![7_image_0.png](7_image_0.png)

Figure 3: Head-to-head comparison of force MAE vs. Stability and h(r) MAE on MD17 molecules. Models are on the x-axis and are sorted according to force error in descending order. High stability and low h(r) MAE mean better performance. Error bars indicate 95% confidence intervals.

Key observations. We make two key observations as evidenced in the experimental results:

1. Despite being widely used, force prediction is not sufficient for evaluating ML FFs. It generally does not align with simulation stability and performance in estimating ensemble properties.
2. While often neglected, stability is a crucial prerequisite for practical usage and a major bottleneck for ML FFs. Lower force error and more training data do not necessarily give rise to more stable simulations, suggesting stability is a fundamental consideration for comparison and model design.

We next go through the experimental results of all four datasets in detail to demonstrate the key observations while making further observations.
MD17. As shown in Table 3, more recent models that lie on the right side of the table generally achieve lower force errors but may lack stability. Figure 3 selects and rearranges results from Table 3 to demonstrate the non-aligned trends of force prediction performance vs. simulation performance, which supports **key observation 1**. SphereNet and GemNet-T/dT can attain a very low force error for all four molecules, but often collapse before the simulation finishes. This observation constitutes **key observation 2**. We note that although the stable portions of the simulated trajectories produced by SphereNet and GemNet-T/dT can recover h(r) relatively accurately, stability becomes a bigger issue when the statistics of interest require long simulations, as demonstrated in the other experiments. On the other hand, despite having a relatively high force error, DeepPot-SE performs very well on simulation-based metrics for all molecules except Aspirin (Figure 3). With the highest molecular weight, Aspirin is indeed the hardest task in MD17, in the sense that all models attain high force prediction errors on it. PaiNN also attains competitive simulation performance, while its force error is not among the best. As a reference, we also include the h(r) error using a popular classical force field, the optimized potentials for liquid simulations (OPLS; Jorgensen & Tirado-Rives, 2005), in Table 3. More details on the classical force field results are in Appendix A and Figure 7.

We further observe that good stability alone does not imply accurate recovery of trajectory statistics. Although ForceNet remains stable for Ethanol and Naphthalene, the extracted h(r) deviates substantially from the reference (Table 3), indicating that ForceNet does not learn the underlying PES correctly, possibly due to its lack of energy conservation and rotational equivariance. Overall, NequIP is the best-performing model on MD17. It achieves the best performance in both force prediction and simulation-based metrics for all molecules while requiring the highest computational cost. More detailed results on MD17, including a study of the relation between stability and training epochs as well as individual h(r) curves, are included in Appendix B.

| Metric          | DeepPot-SE | SchNet     | DimeNet    | PaiNN      | SphereNet  | ForceNet   | GemNet-T   | GemNet-dT  | NequIP     |
|-----------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|
| Force (↓)       | 5.8        | 9.5        | 1.4        | 5.1        | 16.1       | 10.9       | 0.7        | 1.3        | 1.5        |
| Stability (↑)   | 247(147)   | 232(59)    | 30(10)     | 12(13)     | 500(0)     | 7(3)       | 25(7)      | 7(3)       | 500(0)     |
| RDF(O,O) (↓)    | 0.07(0.01) | 0.63(0.04) | 0.27(0.15) | 0.30(0.14) | 0.89(0.04) | 0.79(0.03) | 0.22(0.05) | 0.42(0.22) | 0.06(0.02) |
| RDF(H,H) (↓)    | 0.06(0.02) | 0.30(0.02) | 0.18(0.08) | 0.21(0.09) | 0.40(0.01) | 0.55(0.01) | 0.16(0.03) | 0.35(0.25) | 0.05(0.01) |
| RDF(H,O) (↓)    | 0.19(0.05) | 0.57(0.04) | 0.21(0.04) | 0.29(0.12) | 1.14(0.03) | 1.34(0.03) | 0.20(0.04) | 0.42(0.27) | 0.27(0.07) |
| Diffusivity (↓) | 0.04       | 1.90       | -          | -          | 2.23       | -          | -          | -          | 0.18       |
| FPS (↑)         | 91.0       | 78.9       | 17.9       | 71.8       | 3.1        | 67.6       | 11.3       | 33.7       | 3.9        |

Table 4: Results on the water benchmark. The reference diffusivity coefficient is 2.3 × 10⁻⁹ m²/s.

![8_image_0.png](8_image_0.png)

Figure 4: Comparison of force MAE vs. stability (Left), force MAE vs. RDF MAE (Middle), and force MAE vs. Diffusivity MAE (Right) on the water benchmark. Each model is trained with three dataset sizes. The color of a point indicates the model identity, while the point size indicates the training dataset size (small: 1k, medium: 10k, large: 90k). Metrics that are infeasible to extract for certain model/dataset sizes (e.g., diffusivity for unstable models) are not included.
Water. In this condensed-phase system, **key observations 1 and 2** are still evident, according to Table 4: GemNet-T/dT and DimeNet are the top-3 models in terms of force prediction, but all lack stability. The water diffusivity coefficient requires long trajectories (100 ps in our experiments) to estimate and thus cannot be extracted for unstable models. DeepPot-SE does not achieve the best force prediction performance but demonstrates decent stability and highly accurate recovery of simulation statistics. Interestingly, SphereNet has a high force error but is highly stable; however, the properties are not accurately recovered.

Figure 4 further compares model performance with different training dataset sizes. Key observations 1 and 2 are clearly shown: models located on the left of each scatter plot have very low force error but may have poor stability or high error in simulation statistics. More specifically, although more training data almost always improves force prediction performance, its effect on simulation performance is not entirely clear. On the one hand, GemNet-T/dT, DimeNet, and ForceNet are not stable even under the highest training data budget. On the other hand, we observe a clear improvement of DeepPot-SE when more training data is used. NequIP is again the best-performing model, achieving very low force error, excellent stability, and accurate recovery of ensemble statistics, even under the lowest data budget of 1,000 training+validation structures. However, when the training dataset is sufficiently large (90k), DeepPot-SE obtains equally good results as NequIP while being more than 20 times faster - dataset size also influences the model of choice for a given dataset.

| Metric        | DeepPot-SE | SchNet | DimeNet | PaiNN | SphereNet | ForceNet | GemNet-T | GemNet-dT | NequIP     |
|---------------|------------|--------|---------|-------|-----------|----------|----------|-----------|------------|
| Force (↓)     | 272.1      | 217.0  | 239.0   | 266.2 | 256.3     | 284.7    | 233.5    | 219.7     | 215.6      |
| #Finished (↑) | 0/6        | 0/6    | 0/6     | 0/6   | 0/6       | 0/6      | 6/6      | 0/6       | 5/6        |
| Stability (↑) | 0(0)       | 0(0)   | 0(0)    | 0(0)  | 804(786)  | 0(0)     | 5000(0)  | 0(0)      | 4169(1858) |
| F(ϕ) (↓)      | -          | -      | -       | -     | -         | -        | 104(2)   | -         | 104(14)    |
| F(ψ) (↓)      | -          | -      | -       | -     | -         | -        | 110(13)  | -         | 113(36)    |
| FPS (↑)       | 54.3       | 42.4   | 12.1    | 42.2  | 9.9       | 99.1     | 15.0     | 36.5      | 8.3        |

Table 5: Results on alanine dipeptide. Errors for F(ϕ) and F(ψ) are reported in the unit of [kJ/mol].

| Metric          | DeepPot-SE | SchNet     | DimeNet    | PaiNN      | SphereNet  | ForceNet   | GemNet-T   | GemNet-dT  | NequIP     |
|-----------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|
| Force (↓)       | 40.5       | 28.8       | 3.2        | 11.7       | 8.3        | 12.8       | 1.3        | 1.4        | 3.7        |
| Stability (↑)   | 4(3)       | 50(0)      | 48(4)      | 50(0)      | 50(0)      | 26(8)      | 50(0)      | 50(0)      | 50(0)      |
| RDF (↓)         | 0.27(0.15) | 0.04(0.00) | 0.05(0.01) | 0.04(0.01) | 0.04(0.00) | 0.51(0.08) | 0.04(0.00) | 0.04(0.00) | 0.04(0.01) |
| Diffusivity (↓) | -          | 0.38       | 0.30       | 0.40       | 0.40       | -          | 0.24       | 0.28       | 0.34       |
| FPS (↑)         | 66.1       | 35.2       | 14.8       | 75.7       | 18.1       | 72.1       | 16.9       | 43.5       | 8.2        |

Table 6: Results on LiPS.
The detailed results for water-1k/90k and the impact of dataset size are included in Appendix B. Furthermore, we also conduct ablation studies on model sizes and training epochs in Appendix B.

Alanine dipeptide poses unique challenges in sampling the different metastable states separated by high free energy barriers and in modeling the implicit solvent. Table 5 shows that all models have high force errors due to the random forces introduced by the lack of an explicit account of water molecules. Although the force errors are of the same order of magnitude, all models except GemNet-T and NequIP fail to simulate stably. The FES reconstruction task requires a stable simulation for the entire 5 ns. GemNet-T and NequIP are the only two models that manage to finish at least one simulation, but they produce inaccurate statistics. All other models are not stable enough to produce meaningful results. As the key difference between GemNet-T and GemNet-dT is energy conservation, this result suggests a strong stabilization effect of enforcing energy conservation through the prediction of a scalar energy. We further analyze the results of this task in Section 7.

LiPS. Compared to flexible molecules and liquid water, this solid material system features slower kinetics. From Table 6 we observe that most models are capable of finishing the 50-ps simulations stably. In this dataset, the performance on diffusivity estimation aligns well with force prediction. We observe that both GemNet-T and GemNet-dT show excellent force prediction, stability, and recovery of observables, while GemNet-dT is 2.6 times faster. The better efficiency comes from the direct prediction of atomic forces $\mathbf{F}$ instead of taking the derivative $\mathbf{F} = -\partial E/\partial \mathbf{x}$, which also makes GemNet-dT not energy-conserving - a potential issue we further discuss in Section 7.

Implications on model architecture. More recent models utilizing SE(3)/E(3)-equivariant representations and operations, such as GemNet-dT and NequIP, are more expressive and can capture interatomic interactions more accurately. This is reflected by their very low force error and accurate recovery of ensemble statistics when not bottlenecked by stability. Moreover, excellent accuracy and stability can be simultaneously achieved. The stability may come from parity-equivariance and the use of tensor products (Thomas et al., 2018) in manipulating higher-order geometric features. We believe further investigation into the extrapolation behavior induced by different equivariant geometric representations and operations (Batatia et al., 2022) is a fruitful direction in designing more powerful ML FFs.
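As a sanity check related to this discussion, rotation equivariance of a force model, i.e., $\mathbf{F}(\mathbf{x}R^\top) = \mathbf{F}(\mathbf{x})R^\top$ for row-stacked coordinates, can be probed numerically. The sketch below assumes a generic `force_fn` and uses a toy central force for illustration; it is a diagnostic, not part of the benchmark protocol.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def equivariance_gap(force_fn, x, n_trials=10):
    """Max deviation ||F(x R^T) - F(x) R^T|| over random rotations R.

    Should be ~0 (up to float error) for a rotation-equivariant force field.
    force_fn: callable mapping coordinates (N, 3) -> forces (N, 3).
    """
    gap = 0.0
    for _ in range(n_trials):
        R = Rotation.random().as_matrix()
        gap = max(gap, float(np.abs(force_fn(x @ R.T) - force_fn(x) @ R.T).max()))
    return gap

# Toy check: a central force F(x) = -x is exactly equivariant.
print(equivariance_gap(lambda x: -x, np.random.randn(8, 3)))
```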
## 7 Failure Modes: Causes And Future Directions A case study on alanine dipeptide simulation. Figure 5 (a) demonstrates how NequIP and GemNet-T fail to reconstruct the FES: the ML potentials do not manage to sample much of the transition regions and the configuration space with ¢ ∈ [0, 180°] when initialized from a configuration with φ = −65°. Figure 5 (b) shows NequIP fails to simulate stably when initialized from the low-density meta-stable state. The reconstructed FES significantly deviates from the reference in both cases. This failure can be partially explained by Figure 5 (c), the training data distribution produced by the reference potential. The relatively high-energy (low-density) regions and transition regions are exactly those that are not sampled enough by the learned force fields. Even though our MD trajectory is well-equilibrated, the difference in populations of different meta-stable states creates data imbalance, making it more challenging for the model to learn well for higher-energy configurations ![11_image_0.png](11_image_0.png) Figure 6: Examples of simulation collapse when applying (a) NequIP to alanine dipeptide and (b) GemNet-T to water. The y-axis shows the maximum force observed on any atom in the system at a certain time step. An orange cross indicates visualized time steps. Notable nonphysical regions are circled. The collapse usually happens within a very short period of time after the initial local errors. where density is relatively low. The lack of energy labels for force matching in the implicit solvent setting exacerbated this problem and poses challenges in learning the relative free energy difference between the meta-stable states. This issue implies that generalization across different regions in the conformational space is an important challenge for ML FFs. The rest of the FES results are included in Figure 13. Energy conservation. Models that directly predict forces may not conserve energy. Figure 5 (d) demonstrates the evolution of model-predicted total energy for selected models on LiPS, in a microcanonical (NVE) ensemble. The energy of an isolated system in the NVE ensemble is, in principle, conserved. We observe that GemNet-T conserves energy, whereas GemNet-dT fails to conserve the predicted total energy. The existence of non-conservative forces breaks the time-reversal symmetry and, therefore, may not be able to reach equilibrium for observable calculation. Energy conservation as an inductive bias can also affect simulation stability. For alanine dipeptide, we observe GemNet-T can achieve stable simulations while GemNet-dT is unstable. On the other hand, GemNet-dT performs well on the LiPS dataset when coupled with a thermostat. Previous works (Kolluru et al., 2022) also found that energy conservation is not required for SOTA performance on OC20. The usability of non-conservative FFs in simulations requires further careful investigations. Simulation instability is a major bottleneck for highly accurate models to fail on several simulation tasks. Moreover, in our water experiments, we find a larger amount of training data does not resolve this issue for GemNet-T/dT and DimeNet (Figure 4). We further experiment with smaller simulation time steps for GemNet-T on water (Figure 5 (e)), but stability still does not improve. On the other hand, Stocker et al. (2022) demonstrates that the stability of GemNet improves with larger training sets on QM7-x, which includes high-energy off-equilibrium geometries obtained from normal mode sampling. 
Normal mode sampling was also used to generate large datasets for earlier ML force fields such as ANI-1 (Smith et al., 2017), which demonstrated MD simulation results. These results indicate that a dataset's coverage of the conformational space has crucial implications for the behavior of the final model, regardless of the underlying neural network architecture. We hypothesize that including such distorted geometries may improve a model's robustness against entering nonphysical configurations.

We also observe that a simulation can collapse within a short time window after a long period of stable simulation, as visualized in Figure 6. In both cases, we observe that the nonphysical configurations first emerge in local regions (circled), which cascade to the entire system very quickly as extremely large forces are predicted and subsequently integrated into the dynamics. At the end of the visualization, the bonds in the alanine dipeptide system are broken; the local-descriptor-based NequIP model therefore predicts very small forces, as the separated atoms fall outside each other's interaction cutoff. For the water system, the particles are packed in a finite periodic box. The nonphysical configurations exhibit incorrect coordination structures and extremely large forces.

To prevent ML FFs from sampling nonphysical regions, which is a common precursor to failed simulation (Figure 6), one can deliberately include distorted and off-equilibrium geometries in the training data to improve model robustness (Stocker et al., 2022). Alternatively, one can resort to active learning (Wang et al., 2020b; Vandermause et al., 2020; Schwalbe-Koda et al., 2021) to acquire new data points based on model uncertainty. Past works also found adding noise to data, paired with a denoising objective during training, helpful in improving out-of-distribution test performance on OC20 (Godwin et al., 2021) and in stabilizing learned simulations (Sanchez-Gonzalez et al., 2020). Another relevant line of work in coarse-grained MD simulation has studied regularization with an empirical "prior energy" (Wang et al., 2019b) and post-prediction refinement (Fu et al., 2022) to battle simulation instability.

## 8 Conclusion And Outlook

We have introduced a diverse suite of MD simulation tasks and conducted a thorough comparison of SOTA ML FFs to reveal novel insights into ML for MD simulation. As shown in our experiments, benchmarking only force error is not sufficient, and simulation-based metrics should be used to reflect the practical utility of a model. We demonstrate case studies on the failure of existing training schemes/models to better understand their limitations. The performance of a model can be highly case-dependent. For more challenging MD systems, more expressive atomistic representations may be required. For example, recent work has explored non-local descriptors (Kabylda et al., 2022) aiming at capturing long-range interactions in large molecules. Strictly local equivariant representations (Musaelian et al., 2022) are studied for large systems where computational scalability is critical. New datasets (Eastman et al., 2022) and benchmarks have been playing an important role in inspiring future work. Learning coarse-grained MD (Wang et al., 2019b; Wang & Gómez-Bombarelli, 2019; Fu et al., 2022) is another avenue for accelerating MD at larger length/time scales. The possibility of ML in advancing MD simulation is not limited to ML force fields.
Enhanced sampling methods enable fast sampling of rare events and have been augmented with ML techniques (Schneider et al., 2017; Sultan et al., 2018; Holdijk et al., 2022). Differentiable simulations (Schoenholz & Cubuk, 2020; Wang et al., 2020a; Doerr et al., 2021; Ingraham et al., 2018; Greener & Jones, 2021) offer a principled way of learning the force field by directly training the simulation process to reproduce experimental observables (Wang et al., 2020a; 2022; Thaler & Zavadlav, 2021). We hope our datasets and benchmarks will encourage future developments in all related aspects to push the frontier in ML for MD simulations.

## Author Contributions

X.F. led the project. X.F., Z.W., W.W., and T.X. designed the benchmark and evaluation protocols; Z.W. performed MD simulations to generate datasets; X.F. implemented the models and performed the experiments; X.F., Z.W., W.W., T.X., and T.J. analyzed the results; X.F., Z.W., W.W., T.X., S.K., R.G.B., and T.J. wrote the manuscript.

## Acknowledgments

We thank Adrian Roitberg and the Roitberg Research Group for their valuable suggestions on rectifying our alanine dipeptide evaluation protocol. We thank Adeesh Kolluru, Simon Batzner, and Alby Musaelian for helping with reproducing previous results. X.F. is supported by the MIT-GIST collaboration. W.W. is supported by Toyota Research Institute. Z.W. is supported by the Center for Hierarchical Materials Design at Northwestern University and Army Research Office. The authors also acknowledge the MIT SuperCloud and the Lincoln Laboratory Supercomputing Center for providing HPC resources. We also acknowledge PyTorch and PyTorch Geometric.

## References

Mark James Abraham, Teemu Murtola, Roland Schulz, Szilárd Páll, Jeremy C. Smith, Berk Hess, and Erik Lindahl. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers. *SoftwareX*, 1-2:19–25, September 2015. ISSN 23527110. doi: 10.1016/j.softx.2015.06.001. URL https://linkinghub.elsevier.com/retrieve/pii/S2352711015000059.

Berni J Alder and Thomas Everett Wainwright. Studies in molecular dynamics. i. general method. The Journal of Chemical Physics, 31(2):459–466, 1959.

Nongnuch Artrith, Alexander Urban, and Gerbrand Ceder. Efficient and accurate machine-learning interpolation of atomic energies in compositions with many species. *Physical Review B*, 96(1):014112, 2017.

Ilyes Batatia, Simon Batzner, Dávid Péter Kovács, Albert Musaelian, Gregor NC Simm, Ralf Drautz, Christoph Ortner, Boris Kozinsky, and Gábor Csányi. The design space of e (3)-equivariant atom-centered interatomic potentials. *arXiv preprint arXiv:2205.06643*, 2022.

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature communications*, 13(1):1–11, 2022.

Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potentialenergy surfaces. *Physical review letters*, 98(14):146401, 2007.

Lowik Chanussot, Abhishek Das, Siddharth Goyal, Thibaut Lavril, Muhammed Shuaibi, Morgane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, et al. Open catalyst 2020 (oc20) dataset and community challenges. *ACS Catalysis*, 11(10):6059–6072, 2021.

Yaoyi Chen, Andreas Krämer, Nicholas E Charron, Brooke E Husic, Cecilia Clementi, and Frank Noé. Machine learning implicit solvation for molecular dynamics.
*The Journal of Chemical Physics*, 155(8): 084101, 2021. Stefan Chmiela, Alexandre Tkatchenko, Huziel E Sauceda, Igor Poltavsky, Kristof T Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. *Science advances*, 3(5): e1603015, 2017. Daniel A. Colón-Ramos, Patrick La Riviere, Hari Shroff, and Rudolf Oldenbourg. Transforming the development and dissemination of cutting-edge microscopy and computation. *Nat Methods*, 16(8): 667–669, August 2019. ISSN 1548-7091, 1548-7105. doi: 10.1038/s41592-019-0475-y. URL http: //www.nature.com/articles/s41592-019-0475-y. Wendy D Cornell, Piotr Cieplak, Christopher I Bayly, Ian R Gould, Kenneth M Merz, David M Ferguson, David C Spellmeyer, Thomas Fox, James W Caldwell, and Peter A Kollman. A second generation force field for the simulation of proteins, nucleic acids, and organic molecules j. am. chem. soc. 1995, 117, 51795197. *Journal of the American Chemical Society*, 118(9):2309–2309, 1996. Stefan Doerr, Maciej Majewski, Adrià Pérez, Andreas Kramer, Cecilia Clementi, Frank Noe, Toni Giorgino, and Gianni De Fabritiis. Torchmd: A deep learning framework for molecular simulations. *Journal of* chemical theory and computation, 17(4):2355–2363, 2021. Peter Eastman, Jason Swails, John D Chodera, Robert T McGibbon, Yutong Zhao, Kyle A Beauchamp, Lee-Ping Wang, Andrew C Simmonett, Matthew P Harrigan, Chaya D Stern, et al. Openmm 7: Rapid development of high performance algorithms for molecular dynamics. *PLoS computational biology*, 13(7): e1005659, 2017. Peter Eastman, Pavan Kumar Behara, David L. Dotson, Raimondas Galvelis, John E. Herr, Josh T. Horton, Yuezhi Mao, John D. Chodera, Benjamin P. Pritchard, Yuanqing Wang, Gianni De Fabritiis, and Thomas E. Markland. Spice, a dataset of drug-like molecules and peptides for training machine learning potentials, 2022. Felix A Faber, Luke Hutchison, Bing Huang, Justin Gilmer, Samuel S Schoenholz, George E Dahl, Oriol Vinyals, Steven Kearnes, Patrick F Riley, and O Anatole Von Lilienfeld. Prediction errors of molecular machine learning models lower than hybrid dft error. *Journal of chemical theory and computation*, 13(11): 5255–5264, 2017. Michael Feig. Kinetics from implicit solvent simulations of biomolecules as a function of viscosity. Journal of chemical theory and computation, 3(5):1734–1748, 2007. Adriano Filipponi. The radial distribution function probed by x-ray absorption spectroscopy. *Journal of* Physics: Condensed Matter, 6(41):8415, 1994. Daan Frenkel and Berend Smit. *Understanding molecular simulation: from algorithms to applications*, volume 1. Elsevier, 2001. Xiang Fu, Tian Xie, Nathan J Rebello, Bradley D Olsen, and Tommi Jaakkola. Simulate time-integrated coarse-grained molecular dynamics with geometric machine learning. *arXiv preprint arXiv:2204.10348*, 2022. Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In *International Conference on Learning Representations*, 2020. Johannes Gasteiger, Florian Becker, and Stephan Günnemann. Gemnet: Universal directional graph neural networks for molecules. *Advances in Neural Information Processing Systems*, 34:6790–6802, 2021. Johannes Gasteiger, Muhammed Shuaibi, Anuroop Sriram, Stephan Günnemann, Zachary Ward Ulissi, C. Lawrence Zitnick, and Abhishek Das. Gemnet-OC: Developing graph neural networks for large and diverse molecular simulation datasets. *Transactions on Machine Learning Research*, 2022. URL https://openreview.net/forum?id=u8tvSxm4Bs. 
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In *International conference on machine learning*, pp. 1263–1272. PMLR, 2017. Jonathan Godwin, Michael Schaarschmidt, Alexander L Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, and Peter Battaglia. Simple gnn regularisation for 3d molecular property prediction and beyond. In *International conference on learning representations*, 2021. Joe G Greener and David T Jones. Differentiable molecular simulation can learn all the parameters in a coarse-grained force field for proteins. *PloS one*, 16(9):e0256990, 2021. Thomas A Halgren. Merck molecular force field. i. basis, form, scope, parameterization, and performance of mmff94. *Journal of computational chemistry*, 17(5-6):490–519, 1996. Teresa Head-Gordon, Martin Head-Gordon, Michael J Frisch, Charles Brooks III, and John Pople. A theoretical study of alanine dipeptide and analogs. *International Journal of Quantum Chemistry*, 36(S16): 311–322, 1989. RL Henderson. A uniqueness theorem for fluid pair correlation functions. *Physics Letters A*, 49(3):197–198, 1974. Johannes Hoja, Leonardo Medrano Sandonas, Brian G Ernst, Alvaro Vazquez-Mayagoitia, Robert A DiStasio Jr, and Alexandre Tkatchenko. Qm7-x, a comprehensive dataset of quantum-mechanical properties spanning the chemical space of small organic molecules. *Scientific data*, 8(1):1–11, 2021. Lars Holdijk, Yuanqi Du, Ferry Hooft, Priyank Jaini, Bernd Ensing, and Max Welling. Path integral stochastic optimal control for sampling transition paths. *arXiv preprint arXiv:2207.02149*, 2022. Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, and C Lawrence Zitnick. Forcenet: A graph neural network for large-scale quantum calculations. arXiv preprint arXiv:2103.01436, 2021. John Ingraham, Adam Riesselman, Chris Sander, and Debora Marks. Learning protein structure with a differentiable simulator. In *International Conference on Learning Representations*, 2018. William L Jorgensen and Julian Tirado-Rives. Potential energy functions for atomic-level simulations of water and organic and biomolecular systems. *Proceedings of the National Academy of Sciences*, 102(19): 6665–6670, 2005. Adil Kabylda, Valentin Vassilev-Galindo, Stefan Chmiela, Igor Poltavsky, and Alexandre Tkatchenko. Towards linearly scaling and chemically accurate global machine learning force fields for large molecules. arXiv preprint arXiv:2209.03985, 2022. George A Kaminski, Richard A Friesner, Julian Tirado-Rives, and William L Jorgensen. Evaluation and reparametrization of the opls-aa force field for proteins via comparison with accurate quantum chemical calculations on peptides. *The Journal of Physical Chemistry B*, 105(28):6474–6487, 2001. Alireza Khorshidi and Andrew A Peterson. Amp: A modular approach to machine learning in atomistic simulations. *Computer Physics Communications*, 207:310–324, 2016. Adeesh Kolluru, Muhammed Shuaibi, Aini Palizhati, Nima Shoghi, Abhishek Das, Brandon Wood, C Lawrence Zitnick, John R Kitchin, and Zachary W Ulissi. Open challenges in developing generalizable large scale machine learning models for catalyst discovery. *arXiv preprint arXiv:2206.02005*, 2022. Dávid Péter Kovács, Cas van der Oord, Jiri Kucera, Alice EA Allen, Daniel J Cole, Christoph Ortner, and Gábor Csányi. Linear atomic cluster expansion force fields for organic molecules: beyond rmse. 
Journal of chemical theory and computation, 17(12):7696–7711, 2021. Alessandro Laio and Michele Parrinello. Escaping free-energy minima. Proceedings of the National Academy of Sciences, 99(20):12562–12566, 2002. Thomas J Lane, Gregory R Bowman, Kyle Beauchamp, Vincent A Voelz, and Vijay S Pande. Markov state model reveals folding and functional dynamics in ultra-long md trajectories. *Journal of the American* Chemical Society, 133(45):18413–18419, 2011. Ask Hjorth Larsen, Jens Jørgen Mortensen, Jakob Blomqvist, Ivano E Castelli, Rune Christensen, Marcin Dułak, Jesper Friis, Michael N Groves, Bjørk Hammer, Cory Hargus, Eric D Hermes, Paul C Jennings, Peter Bjerre Jensen, James Kermode, John R Kitchin, Esben Leonhard Kolsbjerg, Joseph Kubal, Kristen Kaasbjerg, Steen Lysgaard, Jón Bergmann Maronsson, Tristan Maxson, Thomas Olsen, Lars Pastewka, Andrew Peterson, Carsten Rostgaard, Jakob Schiøtz, Ole Schütt, Mikkel Strange, Kristian S Thygesen, Tejs Vegge, Lasse Vilhelmsen, Michael Walter, Zhenhua Zeng, and Karsten W Jacobsen. The atomic simulation environment—a python library for working with atoms. *Journal of Physics: Condensed Matter*, 29(27):273002, 2017. URL http://stacks.iop.org/0953-8984/29/i=27/a=273002. Andrew R Leach and Andrew R Leach. *Molecular modelling: principles and applications*. Pearson education, 2001. Jonas Lederer, Michael Gastegger, Kristof T Schütt, Michael Kampffmeyer, Klaus-Robert Müller, and Oliver T Unke. Automatic identification of chemical moieties. *arXiv preprint arXiv:2203.16205*, 2022. Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. arXiv preprint arXiv:2206.11990, 2022. Kresten Lindorff-Larsen, Stefano Piana, Ron O Dror, and David E Shaw. How fast-folding proteins fold. Science, 334(6055):517–520, 2011. Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d molecular graphs. In *International Conference on Learning Representations*, 2021. Albert Musaelian, Simon Batzner, Anders Johansson, Lixin Sun, Cameron J Owen, Mordechai Kornbluth, and Boris Kozinsky. Learning local equivariant representations for large-scale atomistic dynamics. arXiv preprint arXiv:2204.05249, 2022. Cheol Woo Park, Mordechai Kornbluth, Jonathan Vandermause, Chris Wolverton, Boris Kozinsky, and Jonathan P Mailoa. Accurate and scalable graph neural network force field and molecular dynamics with direct force architecture. *npj Computational Materials*, 7(1):1–9, 2021. Jay W Ponder and David A Case. Force fields for protein simulations. *Advances in protein chemistry*, 66: 27–85, 2003a. Jay W. Ponder and David A. Case. Force Fields for Protein Simulations. In *Advances in Protein Chemistry*, volume 66, pp. 27–85. Elsevier, 2003b. ISBN 978-0-12-034266-2. doi: 10.1016/S0065-3233(03)66002-X. URL https://linkinghub.elsevier.com/retrieve/pii/S006532330366002X. Zhuoran Qiao, Anders S Christensen, Matthew Welborn, Frederick R Manby, Anima Anandkumar, and Thomas F Miller III. Unite: Unitary n-body tensor equivariant network with applications to quantum chemistry. *arXiv preprint arXiv:2105.14655*, 2021. Aneesur Rahman. Correlations in the motion of atoms in liquid argon. *Physical review*, 136(2A):A405, 1964. Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. *Scientific data*, 1(1):1–7, 2014. David Rosenberger, Justin S Smith, and Angel E Garcia. 
Modeling of peptides with classical and novel machine learning force fields: A comparison. *The Journal of Physical Chemistry B*, 125(14):3598–3612, 2021. Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In *International Conference on Machine* Learning, pp. 8459–8468. PMLR, 2020. Vıctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pp. 9323–9332. PMLR, 2021. Tamar Schlick. *Molecular modeling and simulation: an interdisciplinary guide*, volume 2. Springer, 2010. Elia Schneider, Luke Dai, Robert Q Topper, Christof Drechsel-Grau, and Mark E Tuckerman. Stochastic neural network approach for learning high-dimensional free energy surfaces. *Physical review letters*, 119 (15):150601, 2017. Samuel Schoenholz and Ekin Dogus Cubuk. Jax md: a framework for differentiable physics. *Advances in* Neural Information Processing Systems, 33:11428–11441, 2020. Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. *Advances in neural information processing systems*, 30, 2017. Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In *International Conference on Machine Learning*, pp. 9377–9388. PMLR, 2021. Daniel Schwalbe-Koda, Aik Rui Tan, and Rafael Gómez-Bombarelli. Differentiable sampling of molecular geometries with uncertainty-based adversarial attacks. *Nature communications*, 12(1):1–12, 2021. Hythem Sidky, Wei Chen, and Andrew L Ferguson. Machine learning for collective variable discovery and enhanced sampling in biomolecular simulation. *Molecular Physics*, 118(5):e1737742, 2020. Justin S Smith, Olexandr Isayev, and Adrian E Roitberg. Ani-1: an extensible neural network potential with dft accuracy at force field computational cost. *Chemical science*, 8(4):3192–3203, 2017. Sina Stocker, Johannes Gasteiger, Florian Becker, Stephan Günnemann, and Johannes T Margraf. How robust are modern graph neural network potentials in long and hot molecular dynamics simulations? Machine Learning: Science and Technology, 3(4):045010, 2022. Mohammad M Sultan, Hannah K Wayment-Steele, and Vijay S Pande. Transferable neural networks for enhanced sampling of protein dynamics. *Journal of chemical theory and computation*, 14(4):1887–1894, 2018. So Takamoto, Chikashi Shinagawa, Daisuke Motoki, Kosuke Nakago, Wenwen Li, Iori Kurata, Taku Watanabe, Yoshihiro Yayama, Hiroki Iriguchi, Yusuke Asano, et al. Towards universal neural network potential for material discovery applicable to arbitrary combination of 45 elements. *Nature Communications*, 13(1): 1–11, 2022. Stephan Thaler and Julija Zavadlav. Learning neural network potentials from experimental data via differentiable trajectory reweighting. *Nature Communications*, 12(1):1–10, 2021. Philipp Thölke and Gianni De Fabritiis. Equivariant transformers for neural network based molecular potentials. In *International Conference on Learning Representations*, 2021. Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018. 
Richard Tran, Janice Lan, Muhammed Shuaibi, Siddharth Goyal, Brandon M Wood, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, et al. The open catalyst 2022 (oc22) dataset and challenges for oxide electrocatalysis. *arXiv preprint arXiv:2206.08917*, 2022. Gareth A. Tribello, Massimiliano Bonomi, Davide Branduardi, Carlo Camilloni, and Giovanni Bussi. PLUMED 2: New feathers for an old bird. *Computer Physics Communications*, 185(2):604–613, February 2014. ISSN 00104655. doi: 10.1016/j.cpc.2013.09.018. URL https://linkinghub.elsevier.com/retrieve/ pii/S0010465513003196. Mark Tuckerman. *Statistical mechanics: theory and molecular simulation*. Oxford university press, 2010. MBBJM Tuckerman, Bruce J Berne, and Glenn J Martyna. Reversible multiple time scale molecular dynamics. The Journal of chemical physics, 97(3):1990–2001, 1992. Oliver T Unke and Markus Meuwly. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information. *The Journal of chemical physics*, 148(24):241708, 2018. Oliver T Unke, Stefan Chmiela, Michael Gastegger, Kristof T Schütt, Huziel E Sauceda, and Klaus-Robert Müller. Spookynet: Learning force fields with electronic degrees of freedom and nonlocal effects. *Nature* communications, 12(1):1–14, 2021a. Oliver T Unke, Stefan Chmiela, Huziel E Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T Sch utt, Alexandre Tkatchenko, and Klaus-Robert Müller. Machine learning force fields. *Chemical Reviews*, 121 (16):10142–10186, 2021b. Jonathan Vandermause, Steven B Torrisi, Simon Batzner, Yu Xie, Lixin Sun, Alexie M Kolpak, and Boris Kozinsky. On-the-fly active learning of interpretable bayesian force fields for atomistic rare events. npj Computational Materials, 6(1):1–11, 2020. Ercheng Wang, Huiyong Sun, Junmei Wang, Zhe Wang, Hui Liu, John Z. H. Zhang, and Tingjun Hou. End-Point Binding Free Energy Calculation with MM/PBSA and MM/GBSA: Strategies and Applications in Drug Design. *Chem. Rev.*, 119(16):9478–9508, August 2019a. ISSN 0009-2665, 1520-6890. doi: 10.1021/acs.chemrev.9b00055. URL https://pubs.acs.org/doi/10.1021/acs.chemrev.9b00055. Jiang Wang, Simon Olsson, Christoph Wehmeyer, Adrià Pérez, Nicholas E Charron, Gianni De Fabritiis, Frank Noé, and Cecilia Clementi. Machine learning of coarse-grained molecular dynamics force fields. ACS central science, 5(5):755–767, 2019b. Wujie Wang and Rafael Gómez-Bombarelli. Coarse-graining auto-encoders for molecular dynamics. npj Computational Materials, 5(1):1–9, 2019. Wujie Wang, Simon Axelrod, and Rafael Gómez-Bombarelli. Differentiable molecular simulations for control and learning. *arXiv preprint arXiv:2003.00868*, 2020a. Wujie Wang, Tzuhsiung Yang, William H Harris, and Rafael Gómez-Bombarelli. Active learning and neural network potentials accelerate molecular screening of ether-based solvate ionic liquids. Chemical Communications, 56(63):8920–8923, 2020b. Wujie Wang, Zhenghao Wu, and Rafael Gómez-Bombarelli. Learning pair potentials using differentiable simulations. *arXiv preprint arXiv:2209.07679*, 2022. Michael A Webb, Yukyung Jung, Danielle M Pesko, Brett M Savoie, Umi Yamamoto, Geoffrey W Coates, Nitash P Balsara, Zhen-Gang Wang, and Thomas F Miller III. Systematic computational and experimental investigation of lithium-ion transport mechanisms in polyester-based polymer electrolytes. ACS central science, 1(4):198–205, 2015. Yujie Wu, Harald L. Tepper, and Gregory A. Voth. 
Flexible simple point-charge water model with improved liquid-state properties. *The Journal of Chemical Physics*, 124(2):024503, January 2006. ISSN 0021-9606, 1089-7690. doi: 10.1063/1.2136877. URL http://aip.scitation.org/doi/10.1063/1.2136877.

Tian Xie, Arthur France-Lanord, Yanming Wang, Jeffrey Lopez, Michael A Stolberg, Megan Hill, Graham Michael Leverick, Rafael Gomez-Bombarelli, Jeremiah A Johnson, Yang Shao-Horn, et al. Accelerating amorphous polymer electrolyte screening by learning to reduce errors in molecular dynamics simulated properties. *Nature communications*, 13(1):1–10, 2022.

J Li Yarnell, MJ Katz, Ro Go Wenzel, and SH Koenig. Structure factor and radial distribution function for liquid argon at 85 k. *Physical Review A*, 7(6):2130, 1973.

Shuwen Yue, Maria Carolina Muniz, Marcos F. Calegari Andrade, Linfeng Zhang, Roberto Car, and Athanassios Z. Panagiotopoulos. When do short-range atomistic machine-learning models fall short? *J. Chem. Phys.*, 154(3):034111, January 2021. ISSN 0021-9606, 1089-7690. doi: 10.1063/5.0031215. URL http://aip.scitation.org/doi/10.1063/5.0031215.

Yaoguang Zhai, Alessandro Caruso, Sigbjörn L Bore, and Francesco Paesani. A "short blanket" dilemma for a state-of-the-art neural network potential for water: Reproducing properties or learning the underlying physics? 2022.

Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and Weinan E. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. *Physical review letters*, 120(14):143001, 2018a.

Linfeng Zhang, Jiequn Han, Han Wang, Wissam Saidi, Roberto Car, et al. End-to-end symmetry preserving inter-atomic potential energy model for finite and extended systems. *Advances in Neural Information Processing Systems*, 31, 2018b.

![19_image_0.png](19_image_0.png)

Figure 7: Comparison between a classical force field and the reference simulation on the h(r) of MD17 molecules.

## A Dataset Details

The MD17 dataset¹ (Chmiela et al., 2017) and the LiPS dataset² (Batzner et al., 2022) are adapted from previous works and are publicly available. The MD17 dataset is generated from path-integral molecular dynamics simulations that incorporate quantum mechanics into classical molecular dynamics simulations using Feynman path integrals. The LiPS dataset is generated by ab-initio molecular dynamics simulations with a generalized-gradient PBE functional and projector augmented wave pseudopotentials. We refer interested readers to the respective papers for more details on the data generation process. The water and alanine dipeptide datasets are generated by ourselves, and the generation process is explained in the next paragraphs.

¹ http://www.sgdml.org/
² https://archive.materialscloud.org/record/2022.45

To demonstrate how ML force fields can improve simulation accuracy over classical force fields, we also simulated the four MD17 molecules with a popular classical force field: the optimized potentials for liquid simulations (OPLS; Jorgensen & Tirado-Rives, 2005) with OpenMM (Eastman et al., 2017). In OPLS, interatomic interactions are described by a combination of simple functional forms, such as harmonic and Lennard-Jones potentials, for bond stretches, bond angles, torsional angles, and pairwise non-bonded interactions. We sample the molecule conformations in vacuum in an NVT ensemble with a timestep of 1 femtosecond and a temperature of 500 K, controlled with a Langevin thermostat.
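This sampling protocol can be expressed in a few lines with ASE's Langevin integrator. The sketch below uses ASE's toy EMT calculator as a placeholder for the actual OPLS/OpenMM setup, so it illustrates the loop rather than reproducing our data:

```python
from ase import units
from ase.build import molecule
from ase.calculators.emt import EMT  # placeholder for the OPLS/OpenMM force field
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = molecule("CH3CH2OH")          # ethanol, one of the four MD17 molecules
atoms.calc = EMT()                     # placeholder calculator
MaxwellBoltzmannDistribution(atoms, temperature_K=500)

# NVT sampling in vacuum: 1 fs timestep, 500 K Langevin thermostat.
dyn = Langevin(atoms, timestep=1 * units.fs, temperature_K=500, friction=0.02)
samples = []
dyn.attach(lambda: samples.append(atoms.get_positions().copy()), interval=10)
dyn.run(1000)                          # 1 ps of sampling
```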
The resulting h(r) from the classical force field is compared to the reference in Figure 7. The errors it attains are reported in Table 3 and are much higher than those of most ML force fields, whose h(r) curves are shown in Figure 14.

Water. Our water dataset is generated from molecular dynamics simulations of a simple classical water model, namely, the flexible version of the Extended Simple Point Charge water model (SPC/E-fw) (Wu et al., 2006), at temperature T = 300 K and pressure P = 1 atm. For this model, the interaction parameters (e.g., the O-H bond stretch and H-O-H bond angle terms) are parameterized to match extensive experimental properties, such as the self-diffusion and dielectric constants of the bulk phase. This classical model has been well studied in previous work (Wu et al., 2006; Yue et al., 2021) and has shown reasonable predictions of the physical properties of liquid water. It provides a computationally inexpensive way to generate a large amount of training data. The experience and knowledge gained from a benchmark based on the simple model can be readily extended to systems with higher accuracy, such as *ab-initio* models.

Alanine dipeptide. Our dataset is generated from an MD simulation of an alanine dipeptide molecule solvated in explicit water (1164 water molecules), performed in GROMACS (Abraham et al., 2015) using the AMBER-03 force field (Ponder & Case, 2003b). In the AMBER-03 force field, potential energy parameters such as van der Waals and electrostatic terms are mostly derived from quantum mechanical methods, with minor optimization of the bonded parameters to reproduce experimental vibrational frequencies and structures (Cornell et al., 1996; Ponder & Case, 2003a). The NPT ensemble is applied in the simulations, with hydrogen bond-length constraints using the LINear Constraint Solver (LINCS) and a time step of 2 fs. The temperature and pressure of the system are controlled at T = 300 K and P = 1 bar using a stochastic velocity-rescaling thermostat with damping frequency τv = 0.1 ps and a Parrinello-Rahman barostat with coupling frequency τp = 2.0 ps, respectively. The Particle Mesh Ewald approach is used to compute long-range electrostatics, with periodic boundary conditions applied in the x, y, and z directions. The conformational modes can be characterized by six free energy local minima, which have been used in previous work (Lederer et al., 2022). We initialize six simulations for each model, one from each of the six free energy local minima.

![20_image_0.png](20_image_0.png)

Figure 8: F(ϕ) and F(ψ) have converged for the reference force field (a) and NequIP (b) at time 5 ns under metadynamics.

Implicit solvation. The explicit solvent of 1164 water molecules is not the subject of study but adds a significant computational burden. In this task, we attempt to learn an implicit solvent model (ISM) of the alanine dipeptide, in which the explicit solvent environment is incorporated into the learned FF. The ISM is commonly used in drug design (Wang et al., 2019a) because it can speed up the computation by dramatically decreasing the number of particles required for simulation. In general, the mean-field estimation in an ISM ignores the effect of solvent, thermal fluctuations, and solvent friction (Feig, 2007). Thus, molecular kinetics is not directly comparable to the explicit solvation simulation. However, the equilibrium configurations can be explicitly compared, as conducted in Chen et al. (2021).
Metadynamics simulation. Simulating jumps over energy barriers usually requires long sampling trajectories in MD simulations. The conformational change of alanine dipeptide in water involves such a process, making it difficult to extract the complete free energy surface, i.e., the Ramachandran plot, in normal MD. In order to examine the learned ML FFs within a reasonable time limit, metadynamics (Laio & Parrinello, 2002) is employed to explore the learned FES of the solvated alanine dipeptide. Metadynamics is a widely used technique in atomistic simulations to accelerate the sampling of rare events and estimate the FES over a certain set of degrees of freedom. It is based on iteratively "filling" the potential energy with a sum of Gaussians deposited on a set of suitable collective variables (CVs) along the trajectory. At evaluation time, we perform metadynamics with the dihedral angles ϕ and ψ as CVs³, starting from configurations located at one of the six energy minima in the free energy surface indicated in Figure 5 (c). Gaussians with height h = 1.2 and width σ = 0.35 are deposited every 1 ps, centered on ψ and ϕ. As shown in Figure 8, the estimated FES over both ϕ and ψ does not significantly change after 5 ns. In addition, the height of the bias Gaussian potential smoothly converges to ∼0 within the time limit of 5 ns. Therefore, a simulation time of 5 ns is sufficient for the convergence of the metadynamics. This metadynamics simulation of alanine dipeptide with the AMBER force field is carried out using GROMACS (Abraham et al., 2015) integrated with version 2.8 of the PLUMED library (Tribello et al., 2014; Colón-Ramos et al., 2019).

³ In practice, the selection of suitable collective variables can be a case-by-case challenge (Sidky et al., 2020).

## B Experimental Details

Selection of stability threshold. Our stability thresholds are chosen to be relaxed, so a simulation is only flagged as "unstable" when the system has already gone into highly non-realistic configurations. In Figure 9 (a) and (c), we show the natural fluctuations present in the reference data of the RDF for water and of the bond lengths for Aspirin, demonstrating that realistic simulations would never exceed the stability thresholds. In our experiments, unstable simulations cannot recover from catastrophic failure to become stable again. Example trajectories using SchNet on Aspirin and GemNet-T on water are shown in Figure 9 (b) and (d).

![21_image_0.png](21_image_0.png)

Figure 9: (a) The distribution of bond length deviations for the Aspirin reference dataset. The black dashed line is our chosen stability threshold. (b) An example simulated trajectory of Aspirin using SchNet that becomes unstable. The y-axis is log-scaled. (c) The distribution of RDF deviations for the water reference dataset. The black dashed line is our chosen stability threshold. (d) An example simulated trajectory of Water-10k using GemNet-T that becomes unstable.

![21_image_1.png](21_image_1.png)

Figure 10: 3D symmetries that ML force fields should respect include translations, rotations, and reflections.

Symmetry principles. The learned force fields should respect the symmetry principles: energy is permutation-invariant and E(3)-invariant, and forces are permutation-invariant and E(3)-equivariant. E(3) comprises translations, rotations, and reflections in 3D (Figure 10). A function f is G-equivariant if f ◦ T = L ◦ f for some operator L, where T is an operator for a transformation in G; f is invariant if f ◦ T = f. For example, the energy prediction of an E(3)-invariant model will be invariant with respect to 3D rotation, translation, and reflection of the input structure. Our benchmark includes various models with different levels of symmetry principles (Table 2), which thus have different expressive power.
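These conditions can be verified numerically by transforming the input and comparing outputs. A minimal sketch, using a toy pairwise-spring force field (E(3)-equivariant by construction) in place of a learned model:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def toy_forces(pos):
    # Pairwise springs with rest length 1: an E(3)-equivariant force field.
    diff = pos[:, None, :] - pos[None, :, :]              # x_i - x_j
    r = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))  # pad diagonal
    f = -((r - 1.0) / r)[..., None] * diff                # per-pair force on atom i
    f *= (1.0 - np.eye(len(pos)))[..., None]              # drop self-interaction
    return f.sum(axis=1)

pos = np.random.randn(5, 3)
R = Rotation.random().as_matrix()                         # random rotation
t = np.random.randn(3)                                    # random translation

forces = toy_forces(pos)
forces_rt = toy_forces(pos @ R.T + t)
# Equivariance: forces rotate with the structure and ignore the translation.
assert np.allclose(forces_rt, forces @ R.T, atol=1e-10)
```

A model with exact E(3)-equivariance built in passes this test up to floating-point error; a model without built-in rotational symmetry generally does not.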
Experimental procedures. Baseline models are trained on the datasets described in Section 4 according to the experimental settings described in Appendix B. At evaluation time, we simulate MD trajectories using the learned models, with the thermostats and simulation lengths described in Section 4. The simulated trajectories are recorded as time series of atom positions, along with other information including atom types, temperature, total energy, potential energy, and kinetic energy. All observables described in this section can be computed from the recorded trajectories. We use the stability criterion described above to find the time step at which a system becomes "unstable", and only use the trajectory before that time step for the computation of observables. Among the observables, the distribution of interatomic distances and the RDF are computed for each frame and averaged over the entire trajectory. Diffusivity coefficients are computed by averaging over the diffusivity coefficients computed from all applicable time windows, where the time window length is predefined to be long enough. For example, for a trajectory of T steps and a time window of K steps, we average over the diffusivity computed from the time windows [1, K], [2, K+1], ..., [T−K+1, T]. We use 100 ps as the time window size for water and 35 ps as the time window size for LiPS. We also remove the first 5 ps of the simulated LiPS trajectories for equilibration. As we run multiple simulations per model and dataset, we compute the metrics for each trajectory and report the mean and standard deviation.
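A numpy sketch of this sliding-window estimate, assuming the standard Einstein relation D = MSD/(6t) for 3D diffusion; the estimator released in observable.ipynb may differ in details (this version uses endpoint displacements per window for brevity):

```python
import numpy as np

def diffusivity(traj, window, dt):
    """Average diffusivity over all sliding windows.

    traj: (T, N, 3) unwrapped positions; window: steps per window; dt: time per step.
    """
    T = traj.shape[0]
    estimates = []
    for start in range(T - window + 1):                    # windows [1,K], [2,K+1], ...
        disp = traj[start + window - 1] - traj[start]      # displacement over the window
        msd = (disp ** 2).sum(axis=-1).mean()              # mean squared displacement
        estimates.append(msd / (6.0 * (window - 1) * dt))  # Einstein relation in 3D
    return float(np.mean(estimates))

traj = np.cumsum(0.1 * np.random.randn(1000, 64, 3), axis=0)  # toy random walk
print(diffusivity(traj, window=100, dt=1.0))
```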
When reporting the efficiency of different models, all frames-per-second (FPS) metrics are measured with an NVIDIA Tesla V100-PCIe GPU. We present FPS as a reference for a model's computational efficiency, but also note that code speed can be affected by many factors and likely has room for improvement. Further details on observable computation can be found in our code submission: observable.ipynb. The Open Catalyst Project codebase⁴ and the official codebases of DeepPot-SE⁵, SphereNet⁶, and NequIP⁷ are all publicly available. We build our MD simulation framework on the Atomic Simulation Environment (ASE) library⁸ (Larsen et al., 2017).

⁴ https://github.com/Open-Catalyst-Project/ocp
⁵ https://github.com/deepmodeling/deepmd-kit
⁶ https://github.com/divelab/DIG
⁷ https://github.com/mir-group/nequip
⁸ https://gitlab.com/ase/ase

Table 8: Results on water-1k.

| | DeepPot-SE | SchNet | DimeNet | PaiNN | SphereNet | ForceNet | GemNet-T | GemNet-dT | NequIP |
|---|---|---|---|---|---|---|---|---|---|
| Force | 6.7 | 13.1 | 3.5 | 5.2 | 27.5 | 13.6 | 5.0 | 29.2 | 1.4 |
| Stability | 108(117) | 175(56) | 4(4) | 14(9) | 385(160) | 13(7) | 6(7) | 0(0) | 500(0) |
| RDF(O,O) | 0.17(0.10) | 0.52(0.05) | 0.46(0.22) | 0.21(0.05) | 0.65(0.02) | 0.86(0.09) | 0.62(0.48) | - | 0.07(0.02) |
| RDF(H,H) | 0.13(0.09) | 0.24(0.02) | 0.33(0.15) | 0.15(0.04) | 0.29(0.01) | 0.56(0.04) | 0.35(0.21) | - | 0.07(0.02) |
| RDF(H,O) | 0.28(0.15) | 0.54(0.01) | 0.43(0.17) | 0.16(0.04) | 0.81(0.03) | 1.44(0.09) | 0.71(0.65) | - | 0.26(0.07) |
| Diffusivity | 0.24 | 1.79 | - | - | 2.05 | - | - | - | 0.37 |
| FPS | 61.8 | 99.2 | 16.4 | 54.5 | 3.0 | 68.1 | 15.4 | 34.5 | 3.9 |

Hyperparameters. We adopt the original model hyperparameters from the respective papers and find they produce good force prediction results that match the trends and numbers for MD17 reported in previous work. As we introduce new datasets, we set training hyperparameters such as the batch size and summarize them in Table 7. For water and LiPS, we use a batch size of 1, as in previous work (Batzner et al., 2022), since each structure already contains a reasonable number of atoms and interactions. Following previous work, we use an initial learning rate of 0.001 for all experiments except for NequIP, which uses 0.005 as the initial learning rate in the original paper. For models that minimize a mixture of force loss and energy loss, we set the force loss coefficient λF to 1000 and the energy loss coefficient λE to 1 if not specified in the original paper. A higher force loss coefficient is common in previous work (Zhang et al., 2018a; Batzner et al., 2022), as simulations do not directly rely on the energy. Notably, NequIP proposed several sets of hyperparameters for different datasets, including MD17, a water+ice dataset from Zhang et al. (2018a), LiPS, etc. We follow the MD17 hyperparameters of NequIP for our MD17 and alanine dipeptide datasets, the water+ice hyperparameters of NequIP for our water dataset, and the LiPS hyperparameters of NequIP for the same LiPS dataset. For DeepPot-SE, we adopted the hyperparameters introduced in Zhang et al. (2018a). The only architectural adjustment we made is for ForceNet, for which we observed training instability on water with the original hyperparameters; we resolved this issue by reducing the network width from 512 to 128 in our water experiments.

To facilitate benchmarking with a reasonable computational budget, we stop the training of a model if any of the following conditions is met: (1) a maximum training time of 7 days is reached on an NVIDIA Tesla V100-PCIe GPU; (2) the maximum number of epochs specified in Table 7 is reached; (3) the learning rate drops below 10⁻⁶ under a ReduceLROnPlateau scheduler with factor 0.8 and the learning rate (LR) patience specified in Table 7. We also report the longest time for an ML model to finish our benchmark simulation in Table 7; all numbers are results of NequIP.

Table 7: Training settings and the longest simulation times for each dataset. *The MD17 batch size depends on the model; NequIP: 5. We use a batch size of 8 for ForceNet.

| Dataset | Training dataset size | Batch size | Max epoch | LR patience | Longest simulation time |
|---|---|---|---|---|---|
| MD17 | 9,500 | 1-100* | 2,000 | 5 epochs | 20 hours |
| Water-1k | 950 | 1 | 10,000 | 50 epochs | 28 hours |
| Water-10k | 9,500 | 1 | 2,000 | 5 epochs | 28 hours |
| Water-90k | 85,500 | 1 | 400 | 3 epochs | 28 hours |
| Alanine dipeptide | 38,000 | 5 | 2,000 | 5 epochs | 75 hours |
| LiPS | 19,000 | 1 | 2,000 | 5 epochs | 7 hours |
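Stopping condition (3) above maps directly onto PyTorch's scheduler API; a minimal sketch with a stand-in model and validation loss:

```python
import torch

model = torch.nn.Linear(8, 1)                        # stand-in for a force-field model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial LR used for most models
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.8, patience=5)

for epoch in range(2000):                            # max-epoch cap, condition (2)
    val_loss = torch.rand(1).item()                  # stand-in validation force MAE
    sched.step(val_loss)                             # decay LR by 0.8 on plateaus
    if opt.param_groups[0]["lr"] < 1e-6:             # LR floor, condition (3)
        print(f"stopping at epoch {epoch}")
        break
```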
Table 10: Water-1k results on NequIP with various model sizes and radius cutoffs.

| | Force | Stability | RDF(O,O) | RDF(H,H) | RDF(H,O) | Diffusivity | FPS |
|---|---|---|---|---|---|---|---|
| Width=64, r=4 | 3.5 | 500(0) | 0.07(0.02) | 0.05(0.01) | 0.27(0.06) | 0.38 | 8.2 |
| Width=32, r=6 | 1.5 | 500(0) | 0.06(0.01) | 0.05(0.01) | 0.26(0.06) | 0.25 | 5.2 |
| Width=64, r=5 | 1.6 | 500(0) | 0.07(0.02) | 0.05(0.01) | 0.27(0.06) | 0.31 | 4.9 |
| Width=64, r=6 | 1.4 | 500(0) | 0.07(0.02) | 0.07(0.02) | 0.26(0.07) | 0.37 | 3.9 |
| Width=128, r=6 | 1.5 | 500(0) | 0.07(0.02) | 0.05(0.01) | 0.29(0.07) | 0.37 | 2.5 |

Table 9: Results on water-90k.

| | DeepPot-SE | SchNet | DimeNet | PaiNN | SphereNet | ForceNet | GemNet-T | GemNet-dT | NequIP |
|---|---|---|---|---|---|---|---|---|---|
| Force | 5.9 | 8.4 | 1.7 | 5.1 | 14.8 | 8.6 | 0.7 | 1.1 | 1.4 |
| Stability | 500(0) | 299(70) | 36(9) | 16(7) | 500(0) | 9(12) | 20(9) | 8(10) | 500(0) |
| RDF(O,O) | 0.07(0.02) | 0.67(0.03) | 0.21(0.03) | 0.26(0.17) | 0.91(0.04) | 1.31(0.49) | 0.35(0.23) | 0.20(0.01) | 0.06(0.01) |
| RDF(H,H) | 0.05(0.01) | 0.31(0.02) | 0.14(0.01) | 0.20(0.12) | 0.42(0.03) | 0.82(0.26) | 0.25(0.19) | 0.16(0.01) | 0.04(0.01) |
| RDF(H,O) | 0.29(0.08) | 0.67(0.04) | 0.18(0.02) | 0.21(0.06) | 1.24(0.08) | 2.05(0.60) | 0.24(0.06) | 0.26(0.02) | 0.25(0.06) |
| Diffusivity | 0.35 | 1.97 | - | - | 2.26 | - | - | - | 0.18 |
| FPS | 62.1 | 103.0 | 16.3 | 71.9 | 3.1 | 43.8 | 15.3 | 32.7 | 3.0 |

The high computational cost of evaluating MD simulations has been a major consideration in designing our benchmark datasets and metrics. Training of DeepPot-SE is efficient, and we follow the training setup specified in Zhang et al. (2018a).

Complete water results. We present results on water-1k in Table 8 and results on water-90k in Table 9. Results on water-10k are presented in Table 4 in the main text. All models generally achieve lower force error when trained with more data, but stability and the estimation of ensemble statistics do not necessarily improve. In particular, DeepPot-SE shows clear improvement with more training data and becomes as good as NequIP on water-90k. SchNet demonstrates significant improvement in stability, but its estimation of ensemble statistics does not improve. This may be due to the limited accuracy of SchNet stemming from the limited expressiveness of its invariant atomic representation.

Large water system of 512 molecules. To study model performance in generalizing to a larger system, as well as model scalability, we evaluate the models trained on the water-10k dataset on a dataset with 512 water molecules simulated for 1 ns using the same reference force field. Given the high cost of simulating a large system, we simulate 5 trajectories of 150 ps for each model. The results are shown in Table 11. We observe that all models suffer slightly higher force errors compared to the evaluation on the 64-molecule water system. In terms of stability, NequIP and SphereNet always remain stable for the entire 150 ps. However, SphereNet does not produce correct ensemble properties. SchNet is the third most stable model, while all other models are not stable enough for diffusivity computation. DimeNet, GemNet-T, and GemNet-dT are not stable throughout the entire simulation but can produce decent RDF results. Noticeably, the stability of DeepPot-SE drops significantly; we hypothesize that its lack of message passing limits its capability to capture long-range interactions and thus limits its performance on a larger system.
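The RDF metrics referenced throughout compare simulated and reference radial distribution functions. A minimal numpy sketch of an RDF histogram for an orthorhombic periodic box (a simplification of the estimator in observable.ipynb):

```python
import numpy as np

def rdf(pos, box, r_max=6.0, n_bins=60):
    """Radial distribution function g(r) with the minimum-image convention.

    pos: (N, 3) positions; box: (3,) orthorhombic box lengths.
    """
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)                  # minimum-image convention
    r = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / np.prod(box)
    # Normalize pair counts by the ideal-gas expectation in each shell.
    return hist / (shell * density * n / 2.0), 0.5 * (edges[1:] + edges[:-1])

pos = np.random.rand(64, 3) * 12.4                      # toy uniform configuration
g, r = rdf(pos, box=np.array([12.4, 12.4, 12.4]))
print(g[:5])                                            # close to 1 for an ideal gas
```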
Figure 11: The force error and stability of SchNet for simulating salicylic acid, as training progresses.

Table 11: Results on the large water system with 512 molecules, with models trained on the water-10k dataset (64-molecule water system). *SphereNet on Water-large requires more memory than a Tesla V100 supports; we run its simulations on faster NVIDIA A100 cards, so its FPS is not entirely comparable to the other models.

| | DeepPot-SE | SchNet | DimeNet | PaiNN | SphereNet* | ForceNet | GemNet-T | GemNet-dT | NequIP |
|---|---|---|---|---|---|---|---|---|---|
| Force | 10.6 | 12.1 | 5.1 | 9.7 | 18.4 | 13.2 | 5.6 | 4.2 | 7.7 |
| Stability | 19(22) | 118(58) | 38(13) | 16(12) | 150(0) | 8(0) | 45(25) | 50(9) | 150(0) |
| RDF(O,O) | 0.23(0.06) | 0.62(0.01) | 0.17(0.03) | 0.31(0.06) | 0.93(0.02) | 0.74(0.02) | 0.22(0.16) | 0.16(0.02) | 0.10(0.01) |
| RDF(H,H) | 0.24(0.06) | 0.30(0.04) | 0.12(0.03) | 0.21(0.05) | 0.42(0.01) | 0.51(0.02) | 0.15(0.11) | 0.11(0.01) | 0.07(0.00) |
| RDF(H,O) | 0.67(0.27) | 0.55(0.01) | 0.17(0.02) | 0.29(0.05) | 0.97(0.03) | 1.38(0.05) | 0.23(0.12) | 0.16(0.02) | 0.12(0.02) |
| Diffusivity | - | 2.54 | - | - | 2.98 | - | - | - | 0.89 |
| FPS | 80.7 | 23.1 | 3.5 | 17.4 | 0.8 | 11.9 | 2.2 | 5.3 | 0.7 |

Table 12: Water-10k results with a time split.

| | DeepPot-SE | SchNet | DimeNet | PaiNN | SphereNet | ForceNet | GemNet-T | GemNet-dT | NequIP |
|---|---|---|---|---|---|---|---|---|---|
| Force | 5.8 | 10.1 | 1.2 | 6.6 | 18.4 | 12.3 | 0.7 | 1.3 | 1.7 |
| Stability | 171(152) | 353(87) | 10(9) | 14(9) | 500(0) | 2(0) | 21(11) | 15(13) | 500(0) |
| RDF(O,O) | 0.08(0.03) | 0.75(0.06) | 0.28(0.14) | 0.33(0.13) | 0.99(0.05) | 0.75(0.00) | 0.19(0.06) | 0.35(0.17) | 0.05(0.01) |
| RDF(H,H) | 0.06(0.02) | 0.34(0.03) | 0.16(0.04) | 0.18(0.04) | 0.43(0.03) | 0.56(0.01) | 0.12(0.02) | 0.22(0.12) | 0.04(0.00) |
| RDF(H,O) | 0.18(0.07) | 0.73(0.01) | 0.18(0.03) | 0.21(0.05) | 1.22(0.03) | 1.28(0.01) | 0.16(0.01) | 0.16(0.04) | 0.21(0.04) |
| Diffusivity | 0.15 | 2.04 | - | - | 2.25 | - | - | - | 0.25 |
| FPS | 62.0 | 100.5 | 17.1 | 57.3 | 2.9 | 62.2 | 15.1 | 32.5 | 3.5 |

Influence of model size. Table 10 shows an ablation study over the model size and radius cutoff of NequIP on water-1k. We observe that all model variants are highly stable and attain equally good performance on the simulation-based metrics. Although a small radius cutoff of 4 leads to worse performance in force prediction, it is more computationally efficient and preserves the trajectory statistics. These results show that there exists a trade-off between accuracy and efficiency when choosing the hyperparameters of an ML force field, and that force error may not be the preferred criterion for model selection.

Stability's relation with dataset size. We extract the force and stability results for each model from Figure 4 into Figure 12 to better illustrate the relation between stability and dataset size for each model. We observe that while more data almost always reduces force error, stability does not necessarily improve. In particular, NequIP is highly stable across all dataset sizes. DeepPot-SE and SchNet show significant improvements in stability with more data.
For DimeNet, ForceNet, and GemNet, more training data does not bring significant stability improvements. Section 7 contains detailed discussions of the causes of instability and potential solutions for improving stability.

Stability's relation with training epochs. We study the evolution of simulation stability during the training of an ML force field. We take the SchNet model on the MD17 molecule salicylic acid and save checkpoints at 100, 200, 300, 400, and 500 epochs. We conduct 5 simulations of 300 ps with each checkpoint. Figure 11 shows the force error and stability of the model at different stages of training. We observe that the force error decreases as training progresses, and that the model becomes stable across the entire 300-ps simulation by training epoch 300. This result reveals that thorough training is important to both the accuracy and the stability of ML force fields.

Water-10k with a time split. We investigate a time split of the water dataset by using the first 10,000 structures for training and the last 10,000 structures for testing. The results are reported in Table 12. We find most models perform slightly worse compared to the random-split results in Table 4, while the performance ranking stays the same.

Distribution of interatomic distances for MD17. Figure 14 shows the h(r) curves for all models and molecules benchmarked. We randomly selected one simulation out of the five we conducted for each model and molecule. We observe that, due to a lack of stability, DeepPot-SE produces noisy h(r) on Aspirin. ForceNet does not manage to learn the correct interatomic interactions and produces incorrect h(r) curves. Most models are able to produce h(r) that match the reference well, with SchNet being less accurate on Aspirin and Ethanol.

![25_image_0.png](25_image_0.png)

Figure 12: The force error and stability of all models on the water dataset, with varying training dataset sizes: 1k, 10k, and 90k.

![25_image_1.png](25_image_1.png)

Figure 13: Alanine dipeptide FES results from different initializations.

RDFs for water. Selected RDF curves for water-1k/10k/90k are shown in Figure 15, Figure 16, and Figure 17. Most noisy curves are due to insufficient sampling time, which results in a small number of frames being averaged when computing the RDF curves. We observe that SchNet and ForceNet produce inaccurate curves that are not very noisy, showing that their failure is not entirely due to a lack of stability but also stems from inaccurate modeling of interactions caused by limited expressiveness and lower sample efficiency. Further, we note that the reference curves are zero below a certain threshold, as any pair of atoms cannot get too close to each other. However, DimeNet and GemNet-T exhibit abnormally high values at very small distances, indicating that the simulations have gone into nonphysical configurations and collapsed.

RDFs for LiPS. As shown in Figure 18, DeepPot-SE does not manage to stay stable on LiPS. ForceNet learns inaccurate interactions and produces inaccurate RDFs. All other models can produce highly accurate RDFs and reproduce Li-ion diffusivity relatively accurately, as demonstrated in Table 6.

![26_image_0.png](26_image_0.png)

Figure 14: h(r) curves for all benchmarked models and MD17 molecules.

![27_image_0.png](27_image_0.png)

Figure 15: RDFs of Water-1k. GemNet-dT does not remain stable for more than 1 ps and is therefore not feasible for RDF computation.

![28_image_0.png](28_image_0.png)

Figure 16: RDFs of Water-10k.

![29_image_0.png](29_image_0.png)

Figure 17: RDFs of Water-90k.
![30_image_0.png](30_image_0.png) ![30_image_1.png](30_image_1.png) Figure 18: RDFs of LiPS.
Review 1:

Summary: This paper examines several machine learning force fields, with an emphasis on modern graph-neural network force fields that are commonly used in the materials science community. Specifically, the paper evaluates the stability and accuracy of radial distribution functions coming from molecular dynamics (MD) simulations that use these force fields in a number of different benchmark systems. The stability of MD simulations is a critical evaluation criterion for accurate simulations over extended timeframes. With the exception of studies in an isolated system, previous comprehensive benchmarks have only evaluated the mean force errors of such force fields. To address this gap in previous studies, the authors have developed a comprehensive benchmark that includes multiple models from the Open Catalyst benchmark, as well as NequIP, and demonstrate that the force error metric alone is insufficient for predicting the ability to perform stable long-running MD simulations with such machine-learning force fields.

Strengths and Weaknesses: One major strength of this paper is that it highlights a critical issue in the field of MD simulations, which is that even small errors in the force field parameters can lead to instabilities in certain integration schemes. These instabilities can cause the simulation to become invalid, rendering all subsequent results meaningless. However, the paper's definition of "stability" in the appendix may be confusing to non-experts, leading them to believe that this problem is not a significant concern. A more detailed explanation of the underlying causes of instabilities in symplectic integrators, or a connection to sampling frequency theorems, or a stronger connection to the early literature of molecular dynamics simulation that addressed such questions could have helped to a degree in strengthening the paper's main argument. Additionally, a more rigorous demonstration that once an "instability" occurs, the MD simulation will basically collapse, would have reduced any possible concerns about the importance of these observations. The current criteria for assessing stability may be misleading to non-experts: the RDF criterion appears to be related to the kinetics of sampling, while the bond-breaking criterion, while sufficient for detecting undesirable bond-breaking behavior, would also flag a valid tautomerization reaction as unstable (e.g. the hopping of the proton in aspirin). The discussion of the accuracy of the RDF very much obscures the main objective of the paper, which is the characterization of the ability to perform MD simulations with these models, which is of fundamental importance and potentially orthogonal to the perceived accuracy. It is not very clear if any remaining discrepancy between force errors and RDF errors is not simply due to the lack of the ability to perform stable simulations.

Requested Changes: Perhaps the authors could clarify from their data if all of the unstable events lead to a complete breakdown of the subsequent simulations, or if some of these instabilities they've observed might be self-correcting fluctuations (which I very much doubt). As a side note I'd point out that in addition to the self-correcting force fields, people have already used variants of Behler-Parrinello and ANI to run long MD simulations of much larger systems, so arguably solutions to all these problems already exist in earlier MD-simulation focused force fields.
I don't expect the authors to add every model under the sun; however, it may help if the authors could sharpen the definition of their criteria for selecting the starting materials for their benchmarks (even if these criteria currently only involve practical considerations, such as the uniformity of the Open Catalyst benchmark models).

Broader Impact Concerns: No ethical implications.

==================================================

Review 2:

Summary: The authors compare state-of-the-art molecular simulation algorithms across a number of different benchmark datasets and evaluation metrics. The different datasets vary from small to large molecules in relevant tasks. The metrics are those that are used to train models and for typical comparison, as well as new metrics that the authors have created to help compare methods. The authors are careful about the performance of each and speculate about why models perform in certain ways. The code will also be available on GitHub for others to run and compare against.

Strengths and Weaknesses:

Strengths: I appreciate how thorough the authors are in comparison across all methods. As a researcher, it is useful for me to read the paper and understand which algorithm I would like to use for my task of interest. I also really liked that there is a three-bullet-point summary outlining recommendations and outcomes of the study that was easy to follow. I think the stability metric is really the differentiating factor across different algorithms for comparison, so I think that is a useful contribution.

Weaknesses: I do not think the stability metric is consistent across datasets, and it is difficult to compare its use across them. Please see the "Requested Changes" section for more information. The 3D bar charts are difficult to interpret, and somewhat redundant. While in the tables, up and down arrows indicate "good" or "bad" performance, these are absent on the bar charts. Generally, since the graphs are projected in 3D, comparison within and across figures is difficult. The water data set is randomly split into 1k/10k/90k random structures. However, is random sampling not "difficult" enough for testing? Is there a way to first cluster structures/configurations, and hold out those that are distant from those in the training set?

Requested Changes: While I think the "Stability" metric is useful, I feel that it is inconsistently used across datasets. For some datasets, it is, to my understanding, ensuring that the current state of the simulation does not differ from some set of states in the past. For other datasets (i.e., macromolecules), it uses bond angles to ensure realistic simulation conditions. If the metrics mathematically parameterize different things, they should be different metrics. I do not think the title is justified by the work of this paper: "Forces are not enough". Did the paper specifically address this question? Moreover, weren't there ML models that did use forces that were successful on many desired criteria? While catchy, I do not think the paper thoroughly and concretely supports this statement. I found Figure 1 to be confusing and out of place. Only after reading the whole paper and understanding the metrics did I realize what this figure was trying to tell me, because I needed to understand if larger or smaller metric values were "good". Ideally, the first figure should condense the authors' approach and relevant data (condensing Figures 2 and 3). It also takes quite a bit of reading to understand what "water-10k" means out of context.
While I realize the authors are comparing a number of different methods, 3D bar charts are notoriously difficult to interpret. I’m not sure they add much versus colored heatmap tables.

Nice-to-have changes: Figures: It would be great to have a figure that described the different classes of models: SE(3)-equivariant, E(3)-invariant, and translation-invariant. As a reader, I have to read each of these papers to understand how their capacity differs. If you are tight on real estate for the paper, I think Figures 2 and 3 could be condensed into a single figure.

Broader Impact Concerns: Not Applicable

==================================================

Review 3:

Summary: The authors create a benchmark which assesses models that output force fields (FFs) by the accuracy of molecular dynamics (MD) simulations using those FFs. The benchmark consists of a curated set of datasets, MD setups, and evaluation metrics. They evaluate a number of existing models using their benchmark. Their main conclusion is that the models whose force predictions have the lowest mean absolute error (MAE) are _not_ in general the models with the best MD performance, suggesting that FF MAE is not a good proxy for usefulness in MD. I would list the contributions and new knowledge of this paper as the following:

1. The datasets, MD settings, and evaluation metrics which define their benchmark
2. The evaluation of many popular ML models on their benchmark
3. Analysis of the results from point 2 to demonstrate empirically that lower FF MAE does not consistently result in better MD (at least using their metrics)

Conversely, one thing which is definitely **not** a contribution of this paper is the _idea_ of evaluating FF models using MD. Most papers using FFs are motivated by MD (see e.g. the review by Unke et al), and evaluating models based on their ability to do this is an obvious conclusion. I say this only because the authors do not seem to make an explicit claim about whether this idea is novel (it is not in their contribution list, but I also couldn't find a sentence stating that it is not a novel idea).

Strengths and Weaknesses: Overall I think this is a good paper. The key contributions are impactful, relevant, and generally well-executed. To me the main weaknesses are related to presentation. Many things in the paper could be explained or argued better, but something that was particularly lacking was the motivation and justification of the metrics. Without this, I think the key claims of the paper are not fully supported, since they depend on the metrics. Details below.

**Strengths**

- This kind of benchmark is the _good kind of obvious_: it did not occur to me that ML models should be evaluated using MD, but when I read the abstract it felt immediately obvious to me that this is better than MAE on forces. Given that most papers _do not_ use MD for evaluation, such a contribution is very welcome.
- The main conclusion that force MAE $\neq$ MD performance is also _obvious (in a good way)_: when exploring the state space, edge cases matter, and average performance does not highlight edge cases. However, to my knowledge this intuitive result was not empirically demonstrated before on a wide range of models. The results in this paper not only show that this can happen in practice, but that the mismatch is _systemic_ across many models.
I think Figure 4 demonstrates this very well.
- Benchmark is well-executed: admittedly I think I lack the expertise to be a great judge of this, but the choice of datasets, evaluation conditions, and metrics seems well-thought-out and thorough, even if the rationale for some choices is unclear (see weaknesses below).

**Weaknesses**

- Metrics not discussed sufficiently: $h(r)$ is mentioned only in an indirect way when discussing the MD17 dataset, and the stability metric is only described qualitatively. I thought this was odd since the main contribution of this paper is the benchmark, which is defined by the datasets, MD setup, and metrics. The conclusions also depend on the metrics, so for a reader to believe the claims it is critical to show or argue that these metrics are the right ones. Not only is the choice of metrics not discussed much (e.g., which alternatives did you choose from), _the definitions are not even in the main paper_!
- Motivation and discussion of stability is particularly lacking.
  - Not only is it not defined in the main text (see point above), the authors only describe it in a vague way with phrases like "may not extrapolate robustly to the undersampled configuration space" and "trajectories can enter nonphysical states that are not meaningful for observable calculations". I did not know what "nonphysical" and "meaningful" specifically meant in this context. I think it needs a much more precise description.
  - In the appendix the authors more or less define "stability" as "large deviations in the RDF metric", using a seemingly arbitrary threshold to define "large", which differs for each dataset. How were these thresholds chosen? I felt disappointed that this was put into the appendix like a minor design choice when it is actually a critical part of the benchmark which is essential to support the authors' claims.
  - Outside of section 4, the authors refer to stability as if it were a measure of when the simulation "fails" (indeed the name "stability" seems to imply this), when _that is not how it was defined_. I presume that when the model outputs "unstable" trajectories, running the simulation for longer does not help, but as far as I can tell this was never actually stated or shown by the authors. Without this, I felt it was inappropriate to discuss stability in this way, and overall I was not convinced of its utility as a metric.
  - Because of these things, I decided to consider all claims stemming from stability to be "unsupported", which I marked in the yes/no question below. In reality I believe these claims are probably true, but I think the issues above need to be fixed so that the claims are rigorously justified.
- Other MD design choices are not discussed: e.g., why were specific choices for temperatures and sampling times made? This is a key contribution of your paper, so it should at least be stated why certain design choices were made.
- The MD performance metrics to some degree involve comparing the performance of the ML FF trajectories with those from another method. However, it is not stated very clearly what these methods are, and it is not argued why they are suitable for use as reference data. For example, if the reference data was another ML FF method then arguably it may be unreliable. I'm guessing that the choice of reference data was good; I just think it should be explained and argued to convince a skeptical reader.
- Discussion of related work could be improved.
It seemed like the most relevant works which use force fields for MD are dismissed by saying "they didn't compare multiple models". While this is indeed true, the stated contributions of this paper are not only _running_ the benchmark on multiple models, but _designing_ the experiments themselves, so I think such a dismissal is inappropriate. The experiments of many of these papers actually seem quite similar to those of this benchmark. I picked Zhai et al 2022 (at random) and saw that they also do MD on water and report a variety of observables, comparing the ML FF to a classical one and to experimental values. The key difference here seems to be which observables are used and other MD parameters. I think the authors either need to explain more clearly how their experimental setup is different/better than those of other works, _or_ state more clearly that the experimental procedure is fairly standard/in line with prior work and therefore discuss this work less dismissively. Remember that this is TMLR: if aspects of the work are standard or not very novel, the authors do not need to pretend otherwise to get accepted.
- Scope of models considered could be improved: the authors seem to focus exclusively on the "latest and greatest" neural-network-based models in their evaluation. While I think including these models is good, "machine learning" encompasses much more than just neural networks, for example body-order decompositions with learned coefficients. It would be nice to see at least one more classical model included, even something as simple as fitting a Lennard-Jones potential model. This would give the reader a sense of whether neural networks are actually improving over other methods in any meaningful way.
- Rhetoric could be improved. In general, I feel like the authors _undersold_ the importance of their benchmark. I think there are a lot of good arguments for the use of an MD-based benchmark which the authors did not fully exploit. I've given some suggestions in the "requested changes" section.
- Better background information on MD: a reader unfamiliar with the key ideas of MD would not be able to get much out of this paper. MD is not a very complicated idea, but people with an ML/math background might not be very familiar with it. TMLR papers should not assume a physics background for the readers, so I think it would help if the authors expanded this.

Requested Changes: I hope the authors can make the following changes, which I think are critical to ensure that the paper's main claims are well-supported:

- Move definitions of metrics into the main text since they are important.
- Add some sort of justification of why these metrics make sense. For example, why the RDF and not other observables? Why is stability important?
- Reconcile the mismatch between the description of stability and its mathematical definition, in addition to arguing why it is an important metric. This could take many forms depending on what the authors actually observed during "unstable" MD simulations. For example, if the authors observed that once the trajectory became "unstable" it would never recover, it would be good to show this empirically, and also show that this can be detected by changes in the RDF.
The authors should also think about finding a more descriptive name than "stability" if the trajectories do not in fact diverge or fail, since I do not think that the inclusion of non-physical states in a trajectory makes it "unstable".
- Add at least a minimal discussion of the way the MD simulations are set up. For example, were times and time steps chosen to give each experiment a similar computational cost? Were the temperatures chosen to be in line with the training data? Currently this is completely unclear. I think it is important to show the reader that you made reasonable design choices when constructing the benchmark.

I also suggest that the authors consider the following changes, which I believe would strengthen the work, even though I would be willing to accept the paper without them. Given that the paper is currently _below the 12 page limit_, I think there is plenty of room for extra content.

1. Argue the importance of your benchmark more directly in your introduction. The introduction seemed to more or less say "people do MD, some people want to learn force fields for MD, so we make a benchmark for this." I think it should instead say "MD simulations are _actually the main reason why people work on ML force fields_, so performance on MD is not just a nice benefit, it is a critical requirement!" I think this would convince a reader that this is an important way to measure performance, and also strengthen the impact of your main conclusion: optimizing FF MAE may actually be _misleading_ the field to research models which aren't actually useful for anything! (Although if you make these changes, be sure to acknowledge that MD is not the only thing that people use FFs for; another one is relaxing structures to the nearest energy minimum.)
2. Provide a better introduction to MD, which at the bare minimum should contain the key MD equation $d^2x/dt^2 = -m^{-1}\,dE/dx$ (i.e., Newton's equation). I suggest moving section 3 to right after the introduction, or maybe even merging it with the introduction. I would also state that MD is a well-established field with tons of prior work (maybe citing a few textbooks), and add some references to your discussion of observables, which currently has none. You could also state some basic challenges of MD, such as the accuracy/time tradeoff for modelling E(x), the accuracy/speed tradeoff for the time step, challenges scaling MD to systems with many atoms, etc. This would also be a good time to state, at a high level, where ML models make different design decisions than classical methods. For example, many classical methods try to approximate the energy in a sparse way (e.g., only including atoms which are bonded together, or in a local region). My understanding is that many ML methods make slightly different approximations, which has implications for both scaling and accuracy, while other ML methods are much more similar to classical methods but just replace fixed functions with neural networks. Overall, I think these changes would help convince readers unfamiliar with MD that your benchmark makes sense and that research in this area is important.
3. Give the reader an intuitive understanding of why a low MAE when producing forces will not necessarily imply good performance in MD. I think this is an obvious possibility to anybody with physics knowledge, but it may not be obvious to those without. My suggestion for doing this: choose a 2-body system in a 1D Lennard-Jones potential which can be visualized in a nice 1D plot.
Show two approximations of the potential: one which has errors of moderate magnitude everywhere, and one which matches very closely almost everywhere except in one location where it has an erroneous, but very deep, second minimum. The MAE of the first one will be larger, but the MD of the second one will be less realistic because the trajectories could get stuck in the second erroneous minimum.
4. Expand the discussion of the most relevant related work, as discussed above.
5. Add some classical or semi-classical methods as baselines. These should be quick to implement and run if they are sufficiently simple.
6. Small thing: I suggest changing the colouring of the tables to not be all shades of green, but instead use a color map with various shades so that it is easier to differentiate performance visually. Personally I found it difficult to differentiate the various shades of green. Be careful to choose a color map which is friendly to color-blind people though.

Broader Impact Concerns: No concerns about ethics/broader impact.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper studies machine learning force fields commonly used in materials science. While previous comprehensive benchmarks have only evaluated the mean force errors of such force fields, the current paper went beyond this evaluation and analyzed the resulting stability of molecular dynamics simulations. The authors successfully addressed initial concerns during the rebuttal period. These changes included a stronger motivation of the stability metrics and their connections to the earlier literature, a restructuring of the introduction and related work sections, additional experiments on aspirin molecules, as well as more details on dataset generation and the choice of benchmark models. Extensive changes were carried out during the review period. It is worth pointing out that the authors and reviewers worked collaboratively and constructively on further improving the manuscript.

==================================================
# Online Min-Max Problems With Non-Convexity And Nonstationarity

Yu Huang∗ y-huang20@mails.tsinghua.edu.cn
Institute for Interdisciplinary Information Sciences, Tsinghua University

Yuan Cheng∗ cy16@mail.ustc.edu.cn
University of Science and Technology of China

Yingbin Liang liang.889@osu.edu
Department of Electrical and Computer Engineering, The Ohio State University

Longbo Huang† longbohuang@tsinghua.edu.cn
Institute for Interdisciplinary Information Sciences, Tsinghua University

Reviewed on OpenReview: *https://openreview.net/forum?id=TdzQtbLeVw*

## Abstract

Online min-max optimization has recently gained considerable interest due to its rich applications to game theory, multi-agent reinforcement learning, online robust learning, etc. Theoretical understanding in this field has been mainly focused on convex-concave settings. Online min-max optimization with nonconvex geometries, which captures various online deep learning problems, has not yet been studied. In this paper, we make the first effort and investigate online nonconvex-strongly-concave min-max optimization in the nonstationary environment. We first introduce a natural notion of local Nash equilibrium (NE)-regret, and then propose a novel algorithm coined TSODA to achieve the optimal regret. We further generalize our study to the setting with stochastic first-order feedback, and show that a variation of TSODA can also achieve the same optimal regret in expectation. Our theoretical results and the superior performance of the proposed method are further validated by empirical experiments. To our best knowledge, this is the first exploration of efficient online nonconvex min-max optimization.

## 1 Introduction

Online optimization (Cesa-Bianchi & Lugosi, 2006) is a powerful paradigm for modeling many applications that require decision making based on information available sequentially. Specifically, at each time instant, an online player needs to make a decision based on the history information, and then receives a feedback (which can be a possibly adversarial and nonstationary reward or loss value) that may be used in the future. There have been extensive studies in this field for various scenarios, such as online convex optimization (Shalev-Shwartz, 2012; Hazan et al., 2016), online bilevel optimization (Tarzanagh & Balzano, 2022), online federated learning (Chen et al., 2020), etc.

Recently, the online min-max (i.e., saddle point) problem has gained considerable interest due to its broad applications in game theory (Roy et al., 2019; Zhang et al., 2022), multi-agent reinforcement learning (Buşoniu et al., 2010; Zhang et al., 2021), and online robust learning (Gabrel et al., 2014; Ben-Tal et al., 2015), to name a few. On the theoretical side, a line of works has explored provably efficient algorithms for online min-max optimization. Specifically, Cardoso et al. (2019); Fiez et al. (2021); Immorlica et al. (2019); Zhang et al. (2022) considered the zero-sum matrix games where the online objective function takes a bilinear form.

∗Equal Contributions
†Corresponding author

Rivera et al. (2018); Roy et al. (2019) studied a more general online min-max problem, where the objective is strongly-convex and strongly-concave. Noarov et al. (2021) focused on multi-objective online min-max games, where the reward is convex-concave in each coordinate. Despite many efforts so far, existing literature on online min-max optimization has mainly focused on online convex-concave problems and did not take **nonconvexity** into consideration.
However, in practice, nonconvexity occurs very often in online min-max problems, particularly those that apply deep neural networks (DNNs) for decision making. For instance, in the time-varying two-player zero-sum stochastic games (Mertens & Neyman, 1981; Roy et al., 2019; Zhang et al., 2022), where the payoffs change with time, the policies are modeled by DNNs with strong regularization, and hence the online objective function is nonconvex-strongly-concave. Motivated by the aforementioned practical problems, the goal of this paper is to take the first step towards exploring the **online nonconvex-strongly-concave min-max** problem with dynamic (and hence nonstationary) loss functions. Due to the nonconvex and nonstationary nature of the problem, two new challenges arise, as we explain below.

First, *how to define an appropriate notion of regret for the nonstationary environment under the online nonconvex setting?* The standard notion of Nash Equilibrium (NE)-regret, e.g., Rivera et al. (2018) for online convex-concave problems, which quantifies the difference between the cumulative loss of the players and the min-max value of the cumulative payoff loss, is highly unreasonable for the nonconvex-concave setting, since the min-max comparator is intractable for a nonconvex-concave function. Hence, a new surrogate for regret is in demand.

Second, *with a desirable notion of regret, how to design efficient algorithms?* A natural strategy to handle the nonstationarity is that at each round, the decision maker first learns a good enough decision based on history losses and then applies it to the adversarial loss of the current round. Two key difficulties arise during this process. First, how to identify a good decision? In nonconvex min-max problems, a good decision usually refers to a stationary point. The standard definition of a stationary point involves an optimization oracle, which is unknown to the decision maker. Thus the decision maker needs to find a surrogate to identify a near-stationary point at each round. Second, when applying the decision based on history information to the adversarial loss, mismatch errors arise due to the variability of the environment, which motivates the need for nonstationarity measures.

## 1.1 Our Contributions

In this paper, we handle the aforementioned challenges by introducing a new regret measure and developing efficient algorithms for the online nonconvex min-max problem with optimal regret guarantees. The main contributions are highlighted below.

- We first introduce a novel notion of dynamic regret for the online nonconvex-strongly-concave min-max problem, called **local Nash equilibrium (NE)-regret**, which jointly captures the nonconvexity, nonstationarity, and min-max nature of our problem.
- Based on the regret notion, we propose an efficient online min-max optimization algorithm, named Time-Smoothed Online gradient Descent Ascent (TSODA). The main idea underlying TSODA is to output a near-stationary point at each round by performing two-timescale gradient descent ascent and utilizing a specially designed stopping criterion.
- We show that the local NE-regret of TSODA scales as $O(\frac{T}{w^2})$ with an iteration complexity of $O(Tw)$, where $T$ represents the total number of rounds and $w$ denotes the size of the sliding window used to define the local NE-regret. This result matches the $\Omega(\frac{T}{w^2})$ regret lower bound and the order of iteration complexity of $O(Tw)$ provided in Hazan et al.
(2017a) for online minimization (where we set the maximization to be over a singleton). Thus, TSODA achieves the optimal performance for online nonconvex-strongly-concave min-max optimization.
- We further generalize our study to the setting with stochastic first-order feedback and show that a variation of TSODA can also achieve a regret of $O(\frac{T}{w^2})$.
- We verify our theoretical results and demonstrate the effectiveness of our algorithm through several empirical experiments on real-world datasets.

To our best knowledge, this is the first study on online nonconvex min-max optimization with a theoretical characterization of the regret performance.

## 1.2 Related Work

Online Min-max Optimization. Recently, online min-max optimization, also known as the online saddle-point game, has emerged as an interesting optimization framework, and has been studied under various settings. More specifically, the zero-sum matrix game considers the special case that the function is bilinear with a payoff matrix $A_t$, where the objective function is given by $f_t(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\top A_t \mathbf{y}$. Several works, for example, Cardoso et al. (2019); Fiez et al. (2021); Immorlica et al. (2019); Zhang et al. (2022), proposed and analyzed algorithms with respect to different notions of regret. For more general objective functions, Rivera et al. (2018); Roy et al. (2019) studied the case where the loss function $f_t$ is strongly-convex-strongly-concave. Very recently, Noarov et al. (2021) formulated a general multi-objective framework, where the goal is to minimize the maximum coordinate of the cumulative vector-valued loss with a convex-concave function in every coordinate. We emphasize that all of the above studies did not consider nonconvexity in their objective functions, which is the focus of our study here.

Online Nonconvex Optimization. As online nonconvex optimization is an active research area, various works have taken different approaches to handle the nonconvexity. Assuming access to an offline nonconvex optimization oracle to approximate minimizers of perturbed nonconvex functions, Suggala & Netrapalli (2020); Agarwal et al. (2019) studied the performance of the "follow the perturbed leader" (FTPL) algorithm (Kalai & Vempala, 2005), and the regrets they consider are all static regrets. Further, Hazan et al. (2017a); Hallak et al. (2021); Aydore et al. (2019) considered online nonconvex problems under nonstationary environments, and utilized the sliding-window method with window size $w$. They proposed different notions of dynamic regrets and algorithms, and achieved an order of $O(\frac{T}{w^2})$ for the corresponding regret notions. Additionally, Héliou et al. (2020) studied online nonconvex optimization with imperfect feedback. Beyond first-order optimization, Héliou et al. (2020); Roy et al. (2022) considered zeroth-order online nonconvex optimization and Lesage-Landry et al. (2020) studied second-order online nonconvex optimization.

Offline Min-max Optimization. There is a rich literature that studies a diverse set of algorithms for min-max optimization with nonconvexity in the offline setting. We next describe only those studies highly relevant to our study here. One celebrated approach is the nested-loop type algorithm (Rafique et al., 2021; Nouiehed et al., 2019; Thekumparampil et al., 2019; Kong & Monteiro, 2021), where the outer loop can be treated as an inexact gradient descent on a nonconvex function while the inner loop aims to find an approximate solution to the maximization problem (see Lin et al. (2020a) and references therein for a good collection of such studies).
Another approach, manifested in the recent works of Lu et al. (2020) and Lin et al. (2020a), considers less complicated single-loop structures. Specifically, the two-timescale GDA analyzed in Lin et al. (2020a) is closest to the implementation at each round of our proposed TSODA method. But it is not straightforward to generalize the design to the online setting, and our analysis of the new local NE-regret for online optimization is also very different from that of such an offline min-max problem.

## 1.3 Notations

$[T] \triangleq \{1, \ldots, T\}$. We use bold lower-case letters to denote vectors as in $\mathbf{x}, \mathbf{y}$, and denote the $\ell_2$-norm as $\|\cdot\|$. We use calligraphic upper-case letters to denote sets as in $\mathcal{Y}$, and use the notation $\mathcal{P}_{\mathcal{Y}}$ to denote projections onto the set. For a differentiable function $\Phi(\cdot): \mathbb{R}^m \to \mathbb{R}$, we let $\nabla\Phi(\mathbf{x})$ denote the gradient of $\Phi$ at $\mathbf{x}$. For a function $f(\cdot, \cdot): \mathbb{R}^m \times \mathcal{Y} \to \mathbb{R}$ of two variables, $\nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y})$ (or $\nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y})$) denotes the partial gradient of $f$ with respect to the first variable (or the second variable) at the point $(\mathbf{x}, \mathbf{y})$. We also use $\nabla f(\mathbf{x}, \mathbf{y})$ to denote the full gradient at $(\mathbf{x}, \mathbf{y})$, where $\nabla f(\mathbf{x}, \mathbf{y}) = (\nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y}), \nabla_{\mathbf{y}} f(\mathbf{x}, \mathbf{y}))$. Finally, we use the notation $O(\cdot)$ and $\Omega(\cdot)$ to hide constant factors which are independent of problem parameters.

## 2 Problem Setup

We consider solving the following online min-max (i.e., saddle-point) problem:

$$\min_{\mathbf{x}\in\mathbb{R}^m} \max_{\mathbf{y}\in\mathcal{Y}} f_t(\mathbf{x}, \mathbf{y}), \quad t \in [T], \tag{1}$$

where $f_t: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ is generally *nonconvex* in $\mathbf{x}$ but *concave* in $\mathbf{y}$, and where $\mathcal{Y}$ is a convex set. Such a choice of unbounded $\mathbf{x}$ and bounded $\mathbf{y}$ is commonly used in existing analyses for nonconvex-concave problems (Lin et al., 2020a; Li et al., 2022; Yang et al., 2022). In our work, such an assumption brings technical convenience by allowing us to control $\delta^0_{t,w} = \|\mathbf{y}^\star_{t,w}(\mathbf{x}^0_t) - \mathbf{y}^0_t\|^2$ at round $t = 1$ (see Section 6.2 for details).

At each round $t \in [T]$, the environment first incurs a loss function $f_t$. Without knowing $f_t$, the $\mathbf{x}$-learner and $\mathbf{y}$-learner are tasked with predicting $\mathbf{x}_t$ and $\mathbf{y}_t$ respectively to solve eq. (1) based on the loss functions up to round $t-1$, i.e., $\{f_i\}_{i=1}^{t-1}$. The learners then observe the function $f_t(\cdot)$ and suffer a loss of $f_t(\mathbf{x}_t, \mathbf{y}_t)$. The following regularity assumptions on $f_t$ are made throughout the entire paper:

Assumption 1 (Smoothness). $f_t$ is $\ell$-smooth $\forall t \in [T]$, i.e., $\forall (\mathbf{x}, \mathbf{y}), (\mathbf{x}', \mathbf{y}')$, it holds that $\|\nabla f_t(\mathbf{x}, \mathbf{y}) - \nabla f_t(\mathbf{x}', \mathbf{y}')\| \leq \ell \|(\mathbf{x}, \mathbf{y}) - (\mathbf{x}', \mathbf{y}')\|$.

Assumption 2 (Strong Concavity). The function $f_t(\mathbf{x}, \cdot)$ is $\mu$-strongly concave $\forall t \in [T]$, i.e., given $\mathbf{x} \in \mathbb{R}^m$, $\forall \mathbf{y}, \mathbf{y}'$, it holds that $f_t(\mathbf{x}, \mathbf{y}) \leq f_t(\mathbf{x}, \mathbf{y}') + \langle \nabla_{\mathbf{y}} f_t(\mathbf{x}, \mathbf{y}'), \mathbf{y} - \mathbf{y}' \rangle - \frac{\mu}{2}\|\mathbf{y} - \mathbf{y}'\|^2$.

Assumption 3 (Boundedness). The set $\mathcal{Y}$ is a convex and bounded set with diameter $D \geq 0$. There exists $M > 0$ such that $|f_t(\mathbf{x}, \mathbf{y})| \leq M$, $\forall t \in [T], \mathbf{x} \in \mathbb{R}^m, \mathbf{y} \in \mathcal{Y}$.

The above assumptions are standard in the literature on online learning (Hazan et al., 2017b) and min-max optimization (Lin et al., 2020a;b). While our analysis primarily focuses on the nonconvexity in $\mathbf{x}$, it is worth mentioning that our approach can be extended to the nonconvex-concave setting by employing a weaker condition on $\mathbf{y}$, as discussed in Lin et al. (2020a). When the loss $f_t$ is fixed for all $t$, our framework specializes to the standard nonconvex-strongly-concave min-max optimization (Lin et al., 2020a;b). Put in the context of online min-max optimization, our formulation is similar to those in Roy et al. (2019); Rivera et al. (2018); Zhang et al. (2022), where they studied only the case where $f_t$ is convex-concave.
However, their standard regret minimization and equilibrium computation will be computationally infeasible for general nonconvex-strongly-concave losses. Next, we provide a motivating example for the online nonconvex-concave min-max optimization problem that we study here.

Motivating Application. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a popular machine learning model, in which a generator network $G_{\mathbf{x}}(\cdot)$ plays against a discriminator network $D_{\mathbf{y}}(\cdot)$ via a min-max formulation given by:

$$\min_{\mathbf{x}}\max_{\mathbf{y}} M(\mathbf{x},\mathbf{y})=\frac{1}{2}\mathbb{E}_{\mathcal{P}\sim p_{\mathrm{data}}}\,\log D_{\mathbf{y}}(\mathcal{P})+\frac{1}{2}\mathbb{E}_{\mathcal{Q}\sim p_{\mathrm{noise}}}\,\log\left(1-D_{\mathbf{y}}\left(G_{\mathbf{x}}(\mathcal{Q})\right)\right).$$

In practice, GANs are commonly trained with deep architectures, where both the discriminator and the generator are deep neural networks, making GANs hard to optimize and analyze. To address such a challenge, it is theoretically sound to consider an intermediate setting also arising in many real-world scenarios, called GANs with semi-shallow architectures (Grnarova et al., 2017; Moghadam et al., 2021), where the generator $G_{\mathbf{x}}(\cdot)$ is an arbitrary deep neural network and the discriminator $D_{\mathbf{y}}(\cdot)$ consists of a single-layer network. Such an architecture naturally yields a nonconvex-concave game, i.e., $M(\mathbf{x}, \mathbf{y})$ is nonconvex in $\mathbf{x}$ and concave in $\mathbf{y}$. Furthermore, there is a growing demand for GANs to handle *time-varying* scenarios (Mogren, 2016; Esteban et al., 2017; Yoon et al., 2019), such as time-series data, where generated samples should preserve the temporal dynamics of the data. This requires the GAN's model parameters to be updated in real time to adapt to the changes in the data distribution, which leads to an online setting where the objective function $M_t(\mathbf{x}, \mathbf{y})$ changes over time $t$. Therefore, by combining the above facts, the training of time-varying GANs can be captured by online nonconvex-concave min-max problems. Solving this type of problem has the potential to advance the field of generative models, particularly in scenarios where the data distribution changes over time.

## 3 How To Measure The Performance?

## 3.1 Local Nash Equilibrium (NE)-Regret

We introduce a new definition of a local regret that suits online nonconvex-strongly-concave min-max problems. Our new metric is motivated by the online nonconvex optimization literature; see, for example, Hazan et al. (2017a); Hallak et al. (2021). Specifically, for each $t$, we first define the smoothed function of $f_t$ over a sliding window of size $w$ as:

$$F_{t,w}(\mathbf{x},\mathbf{y}) \;\stackrel{\text{def}}{=}\; \frac{1}{w}\sum_{i=0}^{w-1} f_{t-i}(\mathbf{x},\mathbf{y}). \tag{2}$$

For notational convenience, we treat $f_t(\mathbf{x}, \mathbf{y})$ as $0$ for all $t < 0$. Moreover, since the averaging preserves strong concavity, $F_{t,w}$ is strongly concave in $\mathbf{y}$, and the maximization problem $\max_{\mathbf{y}\in\mathcal{Y}} F_{t,w}(\mathbf{x}, \mathbf{y})$ can be solved efficiently. Then, we can naturally define the following function:

$$\Phi_{t,w}(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \max_{\mathbf{y}\in\mathcal{Y}} F_{t,w}(\mathbf{x},\mathbf{y}). \tag{3}$$

The overall goal of online min-max optimization can be viewed as online minimization over the above-defined $\Phi_{t,w}(\cdot)$ function. Thus, we define the following regret metric with respect to $\Phi_{t,w}(\cdot)$.

Definition 1 (Local Nash Equilibrium (NE)-Regret).
Let $f_t$ be a sequence of functions satisfying Assumptions 1-3, with $\Phi_{t,w}(\cdot)$ defined in eq. (3). The $w$-local Nash Equilibrium (NE)-Regret is defined as:

$$\Re_w^{NE}(T) \;\stackrel{\text{def}}{=}\; \sum_{t=1}^{T} \|\nabla\Phi_{t,w}(\mathbf{x}_t)\|^2. \tag{4}$$

$\nabla\Phi_{t,w}$ is well-defined since $\Phi_{t,w}$ is differentiable for the nonconvex-strongly-concave min-max problem (Lin et al., 2020a). We next justify the above notion of the local NE-regret from three aspects.

Why Norm of Gradient as Metric? At each round $t$ of the nonconvex-concave min-max optimization problem, the objective function can be expressed as $\min_{\mathbf{x}\in\mathbb{R}^m} \Gamma_t(\mathbf{x})$, where $\Gamma_t(\cdot) = \max_{\mathbf{y}\in\mathcal{Y}} f_t(\cdot, \mathbf{y})$ is generally nonconvex, and hence finding the global minimum of $\Gamma_t(\mathbf{x})$ is NP-hard. A common surrogate for the global minimum of $\Gamma_t$ in the offline nonconvex-strongly-concave min-max literature is the notion of an $\epsilon$-stationary point (Lin et al., 2020a;b) for a differentiable $\Gamma_t$, i.e., an output $\mathbf{x}$ such that $\|\nabla\Gamma_t(\mathbf{x})\|^2 < \epsilon$. If $\epsilon = 0$, then $\mathbf{x}$ is a stationary point. Therefore, it is reasonable to leverage such a gradient norm as the optimality criterion for online nonconvex-concave min-max optimization.

Why Sliding-window Averaging? The motivation behind the window averaging is two-fold: (i) $F_{t,w}$ and $\Phi_{t,w}$ represent the average performance during the window, which is widely adopted to handle noises and fluctuations when the environment and the loss function $f_t$ encounter mild perturbations and variations. For instance, when each loss function $f_t$ is an unbiased noisy realization of some $f$, the expected gradient norm of a randomly selected update inside the window is a standard measure in the nonconvex stochastic optimization literature (Bottou et al., 2018) and can reduce the variation caused by noises. Such a smoothed notion is also a common practice in the field of online nonconvex optimization¹ (Hazan et al., 2017a; Hallak et al., 2021; Aydore et al., 2019; Zhuang et al., 2020). (ii) In practice, the average performance of a system is a typical and intuitive notion commonly used to evaluate real-world applications. Suppose a decision maker in a time-varying environment (with loss functions $f_t$) has only a finite-term memory $w$. Then she naturally wishes to find the best decision based on the entire finite-term memory and will choose the average loss functions $F_{t,w}$ and $\Phi_{t,w}$ as the performance metrics. As another example, if the environment varies in a periodic manner, such an average performance metric over a whole period is naturally adopted in time-series forecasting problems.

¹If we view $\mathcal{Y}$ to be a singleton, the local NE-regret degenerates to the local regret proposed in Hazan et al. (2017a).

Why Capturing the Dynamic Nature? It is desirable that the regret can capture how well the players adapt their actions to the best decision at *each round* if the environment is nonstationary and changes over time. In the well-studied online convex-concave setting, the notion of dynamic NE-regret (Roy et al., 2019; Zhang et al., 2022) is defined for this purpose, since its definition of $\left|\sum_{t=1}^{T} f_t(\mathbf{x}_t, \mathbf{y}_t) - \sum_{t=1}^{T} \min_{\mathbf{x}\in\mathbb{R}^m}\max_{\mathbf{y}\in\mathcal{Y}} f_t(\mathbf{x}, \mathbf{y})\right|$ evaluates the gap to the min-max comparator at *each round* instead of the min-max solution of the sum of functions over all rounds. For the nonconvex min-max setting, the best min-max comparator at *each round* can be set as the stationary point of the window function $\Phi_{t,w}(\cdot)$, which has zero gradient. Hence, our local regret in eq.
(4) can be interpreted as evaluating the gap between $\|\nabla\Phi_{t,w}(\mathbf{x}_t)\|^2$ and its comparator (which equals zero gradient) at *each round*, and thus implicitly captures the players' adaptation to the dynamic setting. In the special strongly-convex-strongly-concave case, under some mild continuity conditions, a lower local NE-regret with $w = 1$ implies a lower dynamic NE-regret. We provide a concrete toy example in Appendix A to illustrate this relationship.

## 3.2 Variability Of Environment

Intuitively, if the environment (and hence the loss function $f_t$) changes drastically over time, it will be hard to obtain meaningful guarantees efficiently. To handle this problem, dynamic (Roy et al., 2019; Zhang et al., 2022) or local (Hallak et al., 2021) regrets serve as better performance metrics that take the changing environment into consideration. Such notions typically rely on certain nonstationarity measures of the environment in order to reflect how the system dynamics affect the performance. Therefore, in this subsection, we introduce such measures of variation for loss functions, which will be crucial in our analysis and capture the nonstationarity of our online min-max settings.

Definition 2 (Variation of Sliding-window). Let us denote $\mathbf{y}^*_{t,w}(\mathbf{x}) = \arg\max_{\mathbf{y}\in\mathcal{Y}} F_{t,w}(\mathbf{x}, \mathbf{y})$. We define the following two types of sliding-window variation:

$$V_{w}^{1}[T]:=\sum_{t=1}^{T}\sup_{\mathbf{x}\in\mathbb{R}^{m}}\|\nabla_{\mathbf{x}}f_{t}\left(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x})\right)-\nabla_{\mathbf{x}}f_{t-w}\left(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x})\right)\|^{2},\tag{5}$$

$$V_{w}^{2}[T]:=\sum_{t=1}^{T}\sup_{\mathbf{x}\in\mathbb{R}^{m}}\|\nabla_{\mathbf{y}}f_{t}\left(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x})\right)-\nabla_{\mathbf{y}}f_{t-w}\left(\mathbf{x},\mathbf{y}_{t-1,w}^{*}(\mathbf{x})\right)\|^{2}.\tag{6}$$

Remark 1. $V_w^1[T]$ primarily measures the drift of $f_t$ and $f_{t-w}$ in $\mathbf{x}$, considering that the $\mathbf{y}$-players for these models are determined by $\mathbf{x}$ through $\mathbf{y}^*_{t,w}(\cdot)$. On the other hand, $V_w^2[T]$ further quantifies the changes of the maximum players for $F_{t,w}(\mathbf{x}, \cdot)$ and $F_{t-1,w}(\mathbf{x}, \cdot)$, i.e., $\mathbf{y}^*_{t,w}$ and $\mathbf{y}^*_{t-1,w}$. Therefore, by considering both $V_w^1[T]$ and $V_w^2[T]$, we can jointly capture the variations in the environments of the online min-max problem.

Remark 2. Clearly, $V_w^1[T]$ and $V_w^2[T]$ are $O(T)$ if the gradients of $f_t$ are bounded, and they can be zero in the offline setting, i.e., $T = 1$. A key observation is that if the loss function encounters a periodic shift with a certain period length $w^*$, i.e., $f_{t+w^*} = f_t$, then for $w = w^*$ and $t \geq w$, we have $f_t = f_{t-w}$ and $\mathbf{y}^*_{t,w} = \mathbf{y}^*_{t-1,w}$, which is implied by the fact that $F_{t+1,w} = F_{t,w}$. As a consequence, for a well-tuned $w \ll T$, the sliding-window variations can be considerably smaller compared to $T$; in particular, $V_w^1[T] = V_w^2[T] = O(w)$ in the above case.

## 4 TSODA: Time-Smoothed Online Gradient Descent Ascent

In this section, we present our proposed method, named Time-Smoothed Online gradient Descent Ascent (TSODA), for the online nonconvex-strongly-concave problem, and we show that our approach is capable of efficiently achieving a favorable local NE-regret bound.

## 4.1 Proposed Algorithm

At a high level, our algorithm plays follow-the-leader iterates, aiming to find a suitable approximate stationary point at each round using two-timescale gradient descent ascent (GDA).
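To make the window-smoothed quantities concrete before the full procedure is stated, here is a minimal Python sketch (our own illustration, not the paper's code) of the gradient of $F_{t,w}$ from eq. (2) on a toy loss $f_t(x, y) = \sin(a_t x)\,y - \frac{\mu}{2}y^2$, which is nonconvex in $x$ and $\mu$-strongly concave in $y$; the drifting coefficients $a_t$ and all function names are illustrative assumptions:

```python
import numpy as np

MU = 1.0  # strong-concavity parameter mu of the toy loss (illustrative)
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=100)  # drifting coefficients a_0, ..., a_{T-1}

def grad_f(t, x, y):
    """Partial gradients of the toy loss f_t(x, y) = sin(a_t x) * y - (mu/2) * y^2."""
    gx = a[t] * np.cos(a[t] * x) * y   # d f_t / dx
    gy = np.sin(a[t] * x) - MU * y     # d f_t / dy
    return np.array([gx, gy])

def grad_F(t, w, x, y):
    """Gradient of the sliding-window average F_{t,w} in eq. (2); rounds
    before the first one contribute 0, matching the paper's convention."""
    total = sum(grad_f(i, x, y) for i in range(max(0, t - w + 1), t + 1))
    return total / w
```

TSODA, presented next, runs two-timescale GDA on exactly this kind of window-averaged gradient.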
At each round $t$, TSODA performs gradient descent over the variable $\mathbf{x}$ with stepsize $\eta_{\mathbf{x}}$ and gradient ascent over the variable $\mathbf{y}$ with stepsize $\eta_{\mathbf{y}}$ on the function $F_{t,w}(\mathbf{x}, \mathbf{y})$ until the following Stop Condition 1 is satisfied. Then, TSODA observes the loss function $f_{t+1}$ to be used in the next round. The pseudocode of TSODA is summarized in Algorithm 1.

Stop Condition 1. The terminating condition for Algorithm 1 is:

$$\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\left\|\mathbf{y}_{t+1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t+1}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x}_{t+1},\mathbf{y}_{t+1}\right)\right)\right\|^{2}+\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t+1},\mathbf{y}_{t+1}\right)\right\|^{2}\leq\frac{\delta^{2}}{2w^{2}}.\tag{7}$$

Algorithm 1 Time-Smoothed Online Gradient Descent Ascent (TSODA)
Input: window size $w \geq 1$, stepsizes $(\eta_{\mathbf{x}}, \eta_{\mathbf{y}})$, tolerance $\delta > 0$
Initialization: $(\mathbf{x}_1, \mathbf{y}_1)$
1: for $t = 1$ to $T$ do
2:   Predict $(\mathbf{x}_t, \mathbf{y}_t)$. Observe the cost function $f_t: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$
3:   Set $(\mathbf{x}_{t+1}, \mathbf{y}_{t+1}) \leftarrow (\mathbf{x}_t, \mathbf{y}_t)$
4:   repeat
5:     $\mathbf{x}_{t+1} \leftarrow \mathbf{x}_{t+1} - \eta_{\mathbf{x}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t+1}, \mathbf{y}_{t+1})$
6:     $\mathbf{y}_{t+1} \leftarrow \mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t+1} + \eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}(\mathbf{x}_{t+1}, \mathbf{y}_{t+1})\right)$
7:   until Equation (7) in Stop Condition 1 holds
8: end for

Discussion of the Stopping Criterion. Due to the online nature of the problem, the stopping condition is designed to guarantee that the learner outputs a good $\mathbf{x}_{t+1}$ with small local regret at round $t$, i.e., that $\|\nabla\Phi_{t,w}(\mathbf{x}_{t+1})\|^2$ is small enough. However, in contrast to general online nonconvex games (Hazan et al., 2017a), where the first-order information is available, we do not have direct access to the first-order oracle of $\Phi_{t,w}$. To circumvent this issue, we adopt the global error bound condition from the seminal paper (Drusvyatskiy & Lewis, 2018) to translate conditions on $\nabla\Phi_{t,w}(\mathbf{x}_{t+1})$ into restrictions on the tractable $\nabla F_{t,w}$. Specifically, we prove that $\|\nabla\Phi_{t,w}(\mathbf{x}_{t+1})\|^2$ is upper bounded by the left-hand side (LHS) of the inequality in Stop Condition 1 (see Lemma 6.1). Therefore, we can alternatively utilize the accessible information of $\nabla F_{t,w}$ to terminate the inner-loop iterations at time $t$.

Last-iterate Guarantee. At each round $t$, the stop condition will be triggered only when the local regret of the last iteration is small enough. Such a *last-iterate* type guarantee is different by nature from existing offline nonconvex-strongly-concave min-max results (Lin et al., 2020b;a), which are only guaranteed to visit an $\epsilon$-stationary point within a certain number of iterations, i.e., where the returned $\bar{\mathbf{x}}$ is uniformly drawn from previous iterations. Crucially, we will establish the total iteration bound (see Theorem 2) in the next subsection, which indicates that such last-iterate type outputs can be obtained efficiently. Furthermore, since the stopping criterion leads to a stronger guarantee, our result is incomparable with the former offline iteration complexity in the special case that $T = 1$.

## 4.2 Theoretical Guarantees

In this subsection, we provide the regret and computational complexity guarantees of our algorithm under the local NE-regret and highlight several connections with existing results from offline min-max optimization and online nonconvex problems.

Theorem 1 (Local NE-Regret Minimization). Let $\kappa = \ell/\mu$ denote the condition number.
Under Assumptions 1-3, and letting the stepsizes be chosen as $\eta_{\mathbf{x}} = \Theta(1/\kappa^3\ell)$ and $\eta_{\mathbf{y}} = \Theta(1/\ell)$, Algorithm 1 enjoys the following local NE-regret bound:

$$\Re_{w}^{NE}(T)=\sum_{t=1}^{T}\|\nabla\Phi_{t,w}(\mathbf{x}_{t})\|^{2}\leq\frac{3}{w^{2}}\left(T\delta^{2}+\frac{(\kappa w)^{2}}{(w-1)^{2}}V_{w}^{2}[T]+V_{w}^{1}[T]\right).$$

Theorem 2 (Iteration Bound). Let $\tau$ denote the total number of iterations incurred by Algorithm 1. Then $\tau$ can be upper bounded as:

$$\tau\leq\frac{384\kappa^{3}\ell M w T}{\delta^{2}}+576\frac{\kappa^{2}T}{\mu}+\frac{576D^{2}\kappa^{3}\ell^{2}w^{2}}{\delta^{2}}+1152\frac{w^{2}\kappa^{5}}{(w-1)^{2}\delta^{2}}V_{w}^{2}[T].$$

Furthermore, the number of first-order gradient calls is bounded by $O(w\tau)$.

Theorems 1 and 2 together reveal the trade-off governed by the sliding-window size $w$ between the regret and the computational complexity: a larger $w$ leads to a smaller regret bound but incurs more gradient calls.

Robustness of TSODA. Our results in Theorems 1 and 2 are expressed in terms of the variation measures $V_w^1[T]$ and $V_w^2[T]$ of the environment introduced in Section 3.2. If we make an assumption similar to that in Hazan et al. (2017a) that the gradient of $f_t$ is bounded, the above theorems provide a robust guarantee for TSODA; namely, no matter how the environment changes at each round, TSODA always ensures $O(\frac{T}{w^2})$ local NE-regret with $O(Tw)$ iterations, since $V_w^1[T]$ and $V_w^2[T]$ are $O(T)$ by definition. Therefore, the regret can be made sublinear in $T$ if $w$ is selected accordingly. Interestingly, depending on the degree of nonstationarity, TSODA is capable of achieving an even smaller local NE-regret. Particularly, as we discussed in Remark 2, for the scenario where $f_t$ is periodic with period $w \ll T$, we have $V_w^1[T] = V_w^2[T] = O(w)$.

Optimality of Regret Bound. Note that the basic online nonconvex minimization problem can be viewed as a special case of our online nonconvex min-max problem, if $f_t(\mathbf{x}, \mathbf{y})$ takes values independent of $\mathbf{y}$. In such a degenerate case, our local NE-regret is equivalent to the local regret analyzed in Hazan et al. (2017a); Hallak et al. (2021). Consequently, the adversarial example that incurs the local regret of $\Omega(\frac{T}{w^2})$ constructed in Hallak et al. (2021) can also serve as a worst-case example for our online nonconvex min-max setting. Moreover, under the same assumption made in Hazan et al. (2017a) (which is more restrictive than our assumption here), we achieve a robust regret upper bound of $O(\frac{T}{w^2})$ (as discussed in the previous paragraph), which matches the worst-case lower bound, indicating that our bound in Theorem 1 for the online nonconvex min-max problem is optimal.

Comparison to Offline Min-max Optimization. When the environment is fixed, i.e., $f_t \equiv f$, or $T = 1$ with $w = 1$, our problem specializes to offline min-max optimization, and the terms $V_w^1[T] = V_w^2[T] = 0$ disappear from our results. Therefore, an immediate implication of our theorems is that GDA is guaranteed to find an $\epsilon$-stationary point with $O(\kappa^3\epsilon^{-2})$ iteration complexity. The best known complexity bound for GDA in offline min-max optimization is $O(\kappa^2\epsilon^{-2})$ (Lin et al., 2020a). However, as we discussed in Section 4.1, TSODA aims to output $\mathbf{x}$ with a last-iterate type guarantee, which is a stronger notion than that considered in Lin et al. (2020a), where GDA is only guaranteed to visit an $\epsilon$-stationary point within a certain number of iterations. Thus, these results are not directly comparable.
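To complement the guarantees above, the following minimal Python sketch illustrates a single round of Algorithm 1. This is our own illustration rather than the authors' released code: `grad_x_F`, `grad_y_F`, and `proj_Y` are assumed oracles for $\nabla_{\mathbf{x}}F_{t,w}$, $\nabla_{\mathbf{y}}F_{t,w}$, and $\mathcal{P}_{\mathcal{Y}}$, and the `max_iter` safeguard is not part of the paper's algorithm.

```python
import numpy as np

def tsoda_round(x, y, grad_x_F, grad_y_F, proj_Y,
                eta_x, eta_y, delta, w, kappa, ell, max_iter=100_000):
    """One round of TSODA (Algorithm 1): two-timescale GDA on the
    window-smoothed loss F_{t,w} until Stop Condition 1 (eq. (7)) holds."""
    coeff = ((2.0 * kappa / eta_y + ell) * (1.0 + ell * eta_y)) ** 2
    threshold = delta ** 2 / (2.0 * w ** 2)
    for _ in range(max_iter):
        x = x - eta_x * grad_x_F(x, y)            # slow descent step on x
        y = proj_Y(y + eta_y * grad_y_F(x, y))    # fast ascent step on y
        # Stop Condition 1: a tractable upper bound on ||grad Phi_{t,w}(x)||^2
        y_gap = y - proj_Y(y + eta_y * grad_y_F(x, y))
        if coeff * np.sum(y_gap ** 2) + np.sum(grad_x_F(x, y) ** 2) <= threshold:
            break
    return x, y  # played as (x_{t+1}, y_{t+1}) against the yet-unseen f_{t+1}
```

Combined with the `grad_F` sketch from Section 4.1 (e.g., `lambda x, y: grad_F(t, w, x, y)[0]` as `grad_x_F` and the second component as `grad_y_F`) and a clipping projection such as `lambda y: np.clip(y, -D / 2, D / 2)`, this reproduces the follow-the-leader structure of TSODA on the toy loss.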
## 5 TSODA With Stochastic First-Order Oracle

In this section, we extend our online min-max framework to an online stochastic version. This setting is motivated by the fact that, in real-world applications such as training a neural network, an oracle with access to the exact gradient of the loss function is hard to obtain. Instead, a stochastic first-order oracle (SFO) is used to approximate the ground-truth gradient. Similar settings have been studied in Nemirovski et al. (2009); Hazan et al. (2017a); Hallak et al. (2021). Specifically, the formal SFO definition is as follows.

Definition 3 (Stochastic first-order oracle). A stochastic first-order oracle (SFO) is a function $\mathcal{S}_\sigma$ such that, given a point $(\mathbf{x}, \mathbf{y}) \in \mathbb{R}^m \times \mathcal{Y}$, a random seed $\zeta$, and a smooth function $h: \mathbb{R}^m \times \mathcal{Y} \to \mathbb{R}$, it satisfies:

- $\mathcal{S}_\sigma(\mathbf{x}, \mathbf{y}; \zeta, h)$ is an unbiased estimate of $\nabla h(\mathbf{x}, \mathbf{y})$: $\mathbb{E}\left[\mathcal{S}_\sigma(\mathbf{x}, \mathbf{y}; \zeta, h) - \nabla h(\mathbf{x}, \mathbf{y})\right] = 0$;
- $\mathcal{S}_\sigma(\mathbf{x}, \mathbf{y}; \zeta, h)$ has variance bounded by $\sigma^2 > 0$: $\mathbb{E}\left[\|\mathcal{S}_\sigma(\mathbf{x}, \mathbf{y}; \zeta, h) - \nabla h(\mathbf{x}, \mathbf{y})\|^2\right] \leq \sigma^2$.

## 5.1 Proposed Algorithm

With the above definition of the SFO, we introduce the stochastic version of Algorithm 1, named TSODA-SFO (see Algorithm 2). Similarly, TSODA-SFO also plays follow-the-leader iterates using two-timescale GDA. Taking the noise brought by the SFO into consideration, the nested loops and a special stopping criterion (Stop Condition 2) are modified accordingly. Specifically, (i) the SFO results in different coefficients in the stopping criterion compared to TSODA; (ii) the stopping criterion in TSODA-SFO only ensures that $\|\nabla\Phi_{t,w}(\mathbf{x}_{t+1})\|^2$ is bounded by the threshold plus the variation of the SFO. But the variation here does not play an important role, since the sliding windows serve a variance-reduction purpose that reduces the variation in the final expected regret.

Stop Condition 2. The terminating condition for Algorithm 2 is:

$$2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}_{t}^{k}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k}+\eta_{\mathbf{y}}G_{\mathbf{y},t}^{k}\right)\|^{2}+\|G_{\mathbf{x},t}^{k}\|^{2}\leq\frac{\delta^{2}}{3w^{2}}.\tag{8}$$

Algorithm 2 TSODA with Stochastic First-order Oracle (TSODA-SFO)
Input: window size $w \geq 1$, stepsizes $(\eta_{\mathbf{x}}, \eta_{\mathbf{y}})$, tolerance $\delta > 0$
Initialization: $(\mathbf{x}_1, \mathbf{y}_1)$
1: for $t = 1$ to $T$ do
2:   Cost function $f_t: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ is updated;
3:   Sample $\tilde{\nabla} f_t(\mathbf{x}_t, \mathbf{y}_t) \leftarrow \mathcal{S}_{\sigma/w}(\mathbf{x}_t, \mathbf{y}_t; \zeta, f_t)$
4:   Set $\tilde{\nabla} F_{t,w}(\mathbf{x}_t, \mathbf{y}_t) = \tilde{\nabla} F_{t-1,w}(\mathbf{x}_t, \mathbf{y}_t) + \frac{1}{w}\left(\tilde{\nabla} f_t(\mathbf{x}_t, \mathbf{y}_t) - \tilde{\nabla} f_{t-w}(\mathbf{x}_t, \mathbf{y}_t)\right)$
5:   Set $\mathbf{x}_t^0 = \mathbf{x}_t$, $\mathbf{y}_t^0 = \mathbf{y}_t$, $G_{\mathbf{x},t}^0 = \tilde{\nabla}_{\mathbf{x}} F_{t,w}(\mathbf{x}_t, \mathbf{y}_t)$, $G_{\mathbf{y},t}^0 = \tilde{\nabla}_{\mathbf{y}} F_{t,w}(\mathbf{x}_t, \mathbf{y}_t)$, $k = 0$
6:   while Equation (8) in Stop Condition 2 is not satisfied do
7:     $\mathbf{x}_t^{k+1} \leftarrow \mathbf{x}_t^k - \eta_{\mathbf{x}} G_{\mathbf{x},t}^k$
8:     $\mathbf{y}_t^{k+1} \leftarrow \mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_t^k + \eta_{\mathbf{y}} G_{\mathbf{y},t}^k\right)$
9:     Sample $\tilde{\nabla} f_i(\mathbf{x}_t^{k+1}, \mathbf{y}_t^{k+1}) \leftarrow \mathcal{S}_{\sigma/w}(\mathbf{x}_t^{k+1}, \mathbf{y}_t^{k+1}; \zeta, f_i)$ for $i = t-w+1, \cdots, t$;
10:    Set $G_t^{k+1} := (G_{\mathbf{x},t}^{k+1}, G_{\mathbf{y},t}^{k+1}) = \frac{1}{w}\sum_{i=t-w+1}^{t} \tilde{\nabla} f_i(\mathbf{x}_t^{k+1}, \mathbf{y}_t^{k+1})$
11:    $k \leftarrow k + 1$
12:  end while
13:  $\mathbf{x}_{t+1} = \mathbf{x}_t^k$, $\mathbf{y}_{t+1} = \mathbf{y}_t^k$, and $\tilde{\nabla} F_{t,w}(\mathbf{x}_{t+1}, \mathbf{y}_{t+1}) = G_t^k$
14: end for

## 5.2 Theoretical Guarantees

Denote by $\tau_t$ the number of inner-loop iterations at round $t$, so that $\tau = \sum_{t=1}^T \tau_t$. Given the SFO and the inner-loop termination condition in eq. (8), one immediate question is whether Algorithm 2 terminates in finite time. To this end, we first establish that for each round $t$, the inner loop terminates within a finite number of iterations $\tau_t$, provided that $\delta$ is not too small (recall that $\delta$ is the tolerance for the stopping criterion).

Theorem 3 (Finite Iteration with SFO). Let $\kappa = \ell/\mu$ denote the condition number, and let the stepsizes be chosen as $\eta_{\mathbf{x}} = \Theta(1/\kappa^3\ell)$ and $\eta_{\mathbf{y}} = \Theta(1/\ell)$.
Under Assumptions 1-3, for any $t \in [T]$, if $\delta$, $w$ and $\sigma$ satisfy $\delta^2 = O(\kappa^4\ell^2\sigma^2 w)$, then $\tau_t$ and $\tau$ are finite with high probability. Specifically, when $K \in \mathbb{R}$ is large enough, $\mathbb{P}(\tau_t > K) = O(1/K)$.

With the finite-step stopping guarantee in hand, we next characterize the performance of TSODA-SFO via the expected local NE-regret, formally in terms of $w, T, V_w^1[T], V_w^2[T]$.

Theorem 4 (Expected Local NE-Regret with SFO). Under the setting of Theorem 3, TSODA-SFO enjoys the following expected local NE-regret bound:

$$\mathbb{E}\left[\Re_{w}^{NE}(T)\right]\leq\frac{T}{w^{2}}\left(3\delta^{2}+\frac{(360\kappa^{2}+9)\sigma^{2}}{w}\right)+\frac{3\kappa^{2}}{(w-1)^{2}}V_{w}^{2}[T]+\frac{3}{w^{2}}V_{w}^{1}[T].$$

Beyond the previous regret analysis, we next provide an upper bound on the overall iteration complexity of TSODA-SFO. Similar to Li & Orabona (2019); Hallak et al. (2021), we adopt the following stronger boundedness assumption on the SFO to control the noise caused by SFO calls in the stochastic setting.

Assumption 4. Given any point $(\mathbf{x}, \mathbf{y}) \in \mathbb{R}^m \times \mathcal{Y}$, random seed $\zeta$, and smooth function $h: \mathbb{R}^m \times \mathcal{Y} \to \mathbb{R}$, the SFO defined in Definition 3 satisfies $\|\mathcal{S}_\sigma(\mathbf{x}, \mathbf{y}; \zeta, h) - \nabla h(\mathbf{x}, \mathbf{y})\|^2 \leq \sigma^2$.

Remark 3. We remark here that Theorems 3 and 4 do not require Assumption 4: Theorem 3 provides the finite iteration guarantee with high probability, and Theorem 4 provides an upper bound for the expected regret. With Assumption 4, which is slightly stronger than the assumptions in Definition 3, we are able to provide the following deterministic bounds on the iterations and the number of SFO calls, as in Theorem 5. Furthermore, Theorem 5 can provide deterministic guarantees rather than high-probability guarantees because Assumption 4 controls the variation of the noise in an absolute and deterministic manner.

Theorem 5 (Iterations and SFO Calls Bounds). Under the setting of Theorem 3 and Assumption 4, and supposing that $\delta^2 > 8064\kappa^4\sigma^2$, the total number of iterations satisfies

$$\tau\leq\frac{1}{\eta_{\mathbf{x}}}\cdot\frac{2MTw+\frac{9\delta^{2}T}{\ell}+\frac{72\ell w^{2}}{\mu^{2}(w-1)^{2}}V_{w}^{2}[T]+w^{2}M+\frac{5\ell D^{2}w^{2}}{32}}{\frac{\delta^{2}}{3}-2688\kappa^{4}\sigma^{2}}.$$

Furthermore, the number of SFO calls is bounded by $O(w\tau)$.

The above results also provide a robust guarantee for TSODA-SFO, which achieves an expected regret of $O(\frac{T}{w^2})$ with at most $O(Tw)$ iterations and hence $O(Tw^2)$ calls of the SFO, as long as $V_w^1[T]$ and $V_w^2[T]$ scale as $O(T)$. Following discussions similar to Remark 2 and Section 4.2, such a condition can hold under relaxed assumptions depending on the nonstationarity. Specifically, if the variance of the SFO defined in Definition 3 is zero, then the SFO reduces to perfect first-order feedback. Hence, as discussed in Section 4.2, the adversarial example provided by Hazan et al. (2017a) is also applicable to the stochastic setting, which indicates that the expected regret $O(\frac{T}{w^2})$ reaches optimality. If the set $\mathcal{Y}$ is a singleton, the online nonconvex min-max problem with SFO reduces to the online nonconvex problem with SFO. In this case, the term involving $V_w^2[T]$ disappears from our analysis, and our theorems recover the results in Hallak et al. (2021).

## 6 Proof Overview

In this section, we outline the regret and iteration analyses for TSODA (Theorems 1 and 2).
## 6.1 Key Ideas in the Proof of Theorem 1

By the Cauchy-Schwarz inequality, we can directly decompose the local NE-regret $\Re_w^{NE}$ as follows:

$$\Re_w^{NE} \leq 3\underbrace{\sum_{t=1}^{T}\|\nabla\Phi_{t-1,w}(\mathbf{x}_t)\|^2}_{\text{Optimization error}} + 3\underbrace{\sum_{t=1}^{T}\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t)) - \nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_t,\mathbf{y}^*_{t-1,w}(\mathbf{x}_t))\|^2}_{\text{Variability of }\mathbf{y}} + \frac{3}{w^2}\underbrace{\sum_{t=1}^{T}\|\nabla_{\mathbf{x}}f_t\left(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t)\right) - \nabla_{\mathbf{x}}f_{t-w}\left(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t)\right)\|^2}_{\text{Variability of }\mathbf{x}}.$$

In the following, we interpret each error term and provide a high-level overview of how to control them.

Optimization error. This term arises due to the overarching strategy of TSODA, which is to perform two-timescale GDA at each round $t$ in order to seek an approximate stationary point of $\Phi_{t,w}(\cdot)$. The following key lemma shows that if Stop Condition 1 is satisfied, then $\|\nabla\Phi_{t-1,w}(\mathbf{x}_t)\|^2 \leq \frac{\delta^2}{w^2}$ when TSODA enters the $t$-th round. This implies that the optimization error can be controlled by $\frac{3T\delta^2}{w^2}$.

Lemma 6.1. Given a pair $(\mathbf{x}, \mathbf{y}) \in \mathbb{R}^m \times \mathcal{Y}$, for $t \in [T]$ and $w > 0$, it holds that

$$\|\nabla\Phi_{t,w}(\mathbf{x})\|^{2}\leq2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right)\|^{2}+2\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})\|^{2}.$$

Variability of x and y. These errors occur because the model $\{f_{t-i}\}_{i=0}^{w-1}$ used for evaluating $(\mathbf{x}_t, \mathbf{y}_t)$ is different from the training model $\{f_{t-i}\}_{i=1}^{w}$, which corresponds to the level of variation in the environment. (i) The variability of $\mathbf{x}$ can be directly bounded by the sliding-window variation $V_w^1[T]$ by definition. (ii) We show in Lemma B.1 that $F_{t,w}(\mathbf{x}, \mathbf{y})$ is $\ell$-smooth and $\mathbf{y}^\star_{t,w}(\cdot)$ is $\kappa$-Lipschitz, and hence the variability of $\mathbf{y}$ can be further bounded by $\frac{3\kappa^2}{(w-1)^2}\sum_{t=1}^{T}\|\nabla_{\mathbf{y}}f_t\left(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t)\right) - \nabla_{\mathbf{y}}f_{t-w}\left(\mathbf{x}_t,\mathbf{y}^*_{t-1,w}(\mathbf{x}_t)\right)\|^2$, which is controlled by $V_w^2[T]$.

## 6.2 Key Ideas of Theorem 2

The proof of Theorem 2 can be divided into two parts: the inner-loop and outer-loop analyses.

Inner-loop Analysis. Denote the sequence generated in the inner loop at time $t \in [T-1]$ by

$$\mathbf{x}_t^0 = \mathbf{x}_t, \quad \mathbf{x}_t^{k+1} \leftarrow \mathbf{x}_t^k - \eta_{\mathbf{x}}\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_t^k, \mathbf{y}_t^k\right);$$
$$\mathbf{y}_t^0 = \mathbf{y}_t, \quad \mathbf{y}_t^{k+1} \leftarrow \mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_t^k + \eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x}_t^k, \mathbf{y}_t^k\right)\right).$$

Let $\tau_t$ be the number of times the gradient update is executed at the $t$-th round. For convenience, denote $\tau_T = 0$. Note that $\mathbf{x}_t^{\tau_t} = \mathbf{x}_{t+1}$ and $\mathbf{y}_t^{\tau_t} = \mathbf{y}_{t+1}$. In such a two-timescale setting, i.e., $\eta_{\mathbf{x}} \neq \eta_{\mathbf{y}}$, the movement of $\mathbf{x}_t^k$ is slower than that of $\mathbf{y}_t^k$, and the $\kappa$-Lipschitzness of $\mathbf{y}^\star_{t,w}(\cdot)$ indicates that $\mathbf{y}^\star_{t,w}(\mathbf{x}_t^k)$ also moves slowly. The inner loop of TSODA can thus be viewed as conducting gradient ascent on a slowly changing strongly-concave function $F_{t,w}(\mathbf{x}_t^k, \cdot)$. Following the standard analysis of offline nonconvex min-max optimization Lin et al.
(2020b), we can establish the following descent property:

$$\frac{\eta_{\mathbf{x}}}{8}\sum_{j=0}^{\tau_t-1}\left[\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}+(6\kappa\ell)^{2}\|\mathbf{y}_{t}^{j+1}-\mathbf{y}_{t}^{j}\|^{2}\right]\leq\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+\frac{9\ell}{2}\delta_{t,w}^{0},\tag{9}$$

where $\delta_{t,w}^0 = \|\mathbf{y}^\star_{t,w}(\mathbf{x}_t^0) - \mathbf{y}_t^0\|^2$ measures the distance between $\mathbf{y}_t^0$ and the optimal solution for the $\mathbf{y}$-player given $\mathbf{x}_t^0$ at the beginning of round $t$. Notice that the summand on the LHS of eq. (9) is the quantity in Stop Condition 1 when $\eta_{\mathbf{y}} = \frac{1}{\ell}$. Since TSODA performs GDA at round $t$ only while Stop Condition 1 is not met, we can further lower-bound the LHS of eq. (9) by $\frac{\eta_{\mathbf{x}}}{8} \cdot \frac{\delta^2}{2w^2} \cdot \tau_t$, and obtain

$$\frac{\eta_{\mathbf{x}}\delta^{2}\tau_{t}}{16w^{2}}\leq\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+\frac{9\ell}{2}\delta_{t,w}^{0}.\tag{10}$$

Outer-loop Analysis. By decomposing $\Phi_{T,w}(\mathbf{x}_T) = \sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_t) - \Phi_{t-1,w}(\mathbf{x}_{t-1})\right)$ and rearranging terms, we obtain:

$$\sum_{t=1}^{T-1}\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t,w}(\mathbf{x}_{t+1})\leq\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T,w}(\mathbf{x}_{T}).\tag{11}$$

Substituting eq. (10) over $t \in [T]$ into the above inequality, we obtain

$$\frac{\eta_{\mathbf{x}}\delta^{2}\tau}{16w^{2}}\leq\underbrace{\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T,w}(\mathbf{x}_{T})}_{V_1}+\underbrace{\frac{9\ell}{2}\sum_{t=1}^{T}\delta_{t,w}^{0}}_{V_2}.$$

The $V_1$ term can be bounded using the boundedness of $f_t$. As for the $V_2$ term, we can upper bound $\delta_{t,w}^0$ for $t > 1$ using $\|\mathbf{y}^*_{t,w}(\mathbf{x}_t) - \mathbf{y}^*_{t-1,w}(\mathbf{x}_t)\|^2 + \|\mathbf{y}^*_{t-1,w}(\mathbf{x}_t) - \mathbf{y}_t^0\|^2$. This quantity can be further controlled by the sliding-window variation in $\mathbf{y}$ and the tolerance $\delta$ in Stop Condition 1, separately. Note that $\delta_{1,w}^0$ can be directly bounded using $D$, the diameter of $\mathcal{Y}$.

Significance of Techniques. Based on the sketch of the analysis provided in this section, we can now delve into the technical differences between our work and general online nonconvex games, specifically the work presented in Hazan et al. (2017a). One key difference is that we lack direct access to the first-order oracle of $\Phi_{t,w}$, whereas such information is available in Hazan et al. (2017a). Consequently, while our outer-loop analysis draws inspiration from Hazan et al. (2017a), we must address the challenge of limited knowledge of the first-order oracle and develop novel stop conditions tailored to the min-max setting. More importantly, our inner loop features a more intricate min-max structure, which requires further technical development to handle the dynamics of the two players. The analysis presented in Hazan et al. (2017a), which focuses on a single player, cannot be directly extended to our setting. As a result, specialized techniques for min-max optimization must be employed to analyze the iteration complexity of the inner loop.

Figure 2: Comparison of TSODA and onlineGDmax: number of gradient calls vs. average accuracy.
## 7 Experiments

In this section, we evaluate the efficiency of the proposed TSODA algorithm and verify the theoretical results through numerical simulations. We consider the min-max problem of training an empirical Wasserstein robustness model (WRM) (Sinha et al., 2017), which has the following form²:

$$\operatorname*{min}_{\mathbf{x}}\;\operatorname*{max}_{\{\mathbf{y}_{i}\}_{i=1}^{N}}\;{\mathcal{L}}(\mathbf{x},\mathbf{y};{\mathcal{D}})\triangleq\frac{1}{N}\sum_{(\xi_{i},z_{i})\in{\mathcal{D}}}\left[\ell\left(h_{\mathbf{x}}\left(\mathbf{y}_{i}\right),z_{i}\right)-\gamma\left\|\xi_{i}-\mathbf{y}_{i}\right\|^{2}\right],\tag{12}$$

where $\ell$ is the cross-entropy loss function, $N$ is the number of training samples, $\mathbf{x}$ is the network parameter, $(\xi_i, z_i)\in\mathcal{D}$ is the $i$-th data sample and label, respectively, and $\mathbf{y}_i$ is the adversarial sample corresponding to $\xi_i$. Denote $\{\mathbf{y}_i\}_{i=1}^{N}$ as $\mathbf{y}$.

Training Settings. The real-world datasets we consider are MNIST (Deng, 2012) and Fashion-MNIST (Xiao et al., 2017), each containing 60k samples. We simulate the online WRM model as follows. We randomly split the given dataset into $T$ pieces $\{\mathcal{D}_t\}_{t=1}^{T}$, and the learner sequentially receives $\mathcal{D}_t$. At each round $t$, $f_t(\mathbf{x},\mathbf{y}) = \mathcal{L}(\mathbf{x},\mathbf{y};\mathcal{D}_t)$. We choose $T = 100$ for the online setting. The network architecture mainly follows Sinha et al. (2017): it consists of three convolution blocks with filters of size 8×8, 6×6 and 5×5 respectively, each activated by the ELU function (Clevert et al., 2015), followed by a fully connected layer and a softmax output. Furthermore, we set the adversarial perturbation $\gamma\in\{0.4, 1.3\}$, which is consistent with Sinha et al. (2017).

Metrics. Since we do not have access to the first-order oracle of $\nabla\Phi_{t,w}$ in practice, two alternative performance metrics are considered, which capture the essence of the online setting and are consistent with the definition of our local NE-regret. The first metric is the stronger notion we utilize in the stop criterion, which provides an upper bound for $\|\nabla\Phi_{t,w}(\mathbf{x}_t)\|^2$. Observing that the projected gradient of $\mathbf{y}$ does not change significantly in our experiments, we only compute $\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_t,\mathbf{y}_t)\|^2$ and report the average $R_{\rm avg} \triangleq \frac{1}{t}\sum_{j=1}^{t}\|\nabla_{\mathbf{x}}F_{j,w}(\mathbf{x}_j,\mathbf{y}_j)\|^2$ at each round $t$, which serves as an approximation of $\frac{1}{t}\Re^{\rm NE}_w(t)$. The second metric is the average accuracy, where we evaluate the test accuracy of the output $(\mathbf{x}_t,\mathbf{y}_t)$ from the last round on the newly arriving $\mathcal{D}_t$ and report the average from round 1 to $t$.

The Effect of Window Size w. In Figure 1, we plot $R_{\rm avg}$ of TSODA on MNIST and Fashion-MNIST with different $w$. It can be observed that as $w$ increases from 2 to 10, the local regret becomes smaller, which verifies the bound in Theorem 1 and justifies the use of a large window size.

²Note that we can choose a sufficiently large $\gamma > 0$ to make the maximization part strongly concave.
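To make the per-round training objective in eq. (12) concrete, the following is a minimal PyTorch sketch of the WRM loss. It is a sketch under stated assumptions: `model` stands in for $h_{\mathbf{x}}$, the tensors `xi`, `y_adv`, `z` are a clean batch, its adversarial counterpart, and the labels, and `gamma` is the perturbation penalty; none of these names come from the actual training code.

```python
# Illustrative PyTorch sketch of the per-round WRM objective in eq. (12).
import torch
import torch.nn.functional as F

def wrm_loss(model, xi, y_adv, z, gamma):
    """L(x, y; D_t): cross-entropy on adversarial samples y_adv, penalized by
    their squared distance from the clean samples xi."""
    ce = F.cross_entropy(model(y_adv), z)                       # ell(h_x(y_i), z_i)
    penalty = ((xi - y_adv) ** 2).flatten(1).sum(dim=1).mean()  # ||xi_i - y_i||^2
    return ce - gamma * penalty

# The y-player ascends this loss in y_adv (with requires_grad=True), while the
# x-player descends it in the network parameters; in TSODA these two updates
# run on the fast and slow timescales, respectively.
```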
TSODA vs. Baseline Algorithm. To further investigate the performance of TSODA, we conduct experiments comparing it with a baseline algorithm. Note that, to the best of our knowledge, there has been no formal study of any algorithm for online nonconvex min-max problems. Here, we consider a baseline that is a natural extension of the well-known offline min-max method GDmax (Jin et al., 2020) to the online framework, named *onlineGDmax*. Specifically, onlineGDmax replaces the inner-loop procedure of TSODA with the nested-loop GDmax; i.e., at each iteration in the inner loop of round $t$, onlineGDmax first maximizes the function by multi-step gradient ascent over $\mathbf{y}$ (10 steps in our setting) and then performs one-step GD for $\mathbf{x}$. Typically, the step sizes for GDmax are chosen to be equal, i.e., $\eta_{\mathbf{x}} = \eta_{\mathbf{y}}$ (Sinha et al., 2017). In Figure 2, TSODA achieves similar accuracy to onlineGDmax but with significantly fewer gradient calls, which demonstrates the efficiency of our approach.

## 8 Conclusions

This paper provides the first analysis of the online nonconvex-concave min-max optimization problem. We introduced a novel notion of local Nash equilibrium regret to capture the nonconvexity and nonstationarity of the environment. We developed and analyzed the TSODA algorithm and its stochastic version with respect to the proposed notion of regret, establishing favorable regret and complexity guarantees. Furthermore, we conducted experiments on real-world data to validate the theoretical results and demonstrate the practical advantages of our approach.

## Acknowledgments

The work of Yu Huang and Longbo Huang is supported by the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403, and the Tsinghua Precision Medicine Foundation 10001020109.

## References

Naman Agarwal, Alon Gonen, and Elad Hazan. Learning in non-convex games with an optimization oracle. In *Conference on Learning Theory*, pp. 18–29. PMLR, 2019.

Sergul Aydore, Tianhao Zhu, and Dean P Foster. Dynamic local regret for non-convex online forecasting. *Advances in Neural Information Processing Systems*, 32, 2019.

Aharon Ben-Tal, Elad Hazan, Tomer Koren, and Shie Mannor. Oracle-based robust optimization via online learning. *Operations Research*, 63(3):628–638, 2015.

Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. *SIAM Review*, 60(2):223–311, 2018.

Lucian Buşoniu, Robert Babuška, and Bart De Schutter. Multi-agent reinforcement learning: An overview. *Innovations in Multi-Agent Systems and Applications-1*, pp. 183–221, 2010.

Adrian Rivera Cardoso, Jacob D. Abernethy, He Wang, and Huan Xu. Competing against nash equilibria in adversarially changing zero-sum games. In *ICML*, 2019.

Nicolo Cesa-Bianchi and Gábor Lugosi. *Prediction, learning, and games*. Cambridge University Press, 2006.

Yujing Chen, Yue Ning, Martin Slawski, and Huzefa Rangwala. Asynchronous online federated learning for edge devices with non-iid data. In *2020 IEEE International Conference on Big Data (Big Data)*, pp. 15–24. IEEE, 2020.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). *arXiv preprint arXiv:1511.07289*, 2015.

Li Deng. The mnist database of handwritten digit images for machine learning research. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.

Dmitriy Drusvyatskiy and Adrian S Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. *Mathematics of Operations Research*, 43(3):919–948, 2018.

Cristóbal Esteban, Stephanie L Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional gans. *arXiv preprint arXiv:1706.02633*, 2017.

Tanner Fiez, Ryann Sim, Stratis Skoulakis, Georgios Piliouras, and Lillian Ratliff. Online learning in periodic zero-sum games. *Advances in Neural Information Processing Systems*, 34:10313–10325, 2021.
Virginie Gabrel, Cécile Murat, and Aurélie Thiele. Recent advances in robust optimization: An overview. European journal of operational research, 235(3):471–483, 2014. Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *arXiv preprint arXiv:1406.2661*, 2014. Paulina Grnarova, Kfir Y Levy, Aurelien Lucchi, Thomas Hofmann, and Andreas Krause. An online learning approach to generative adversarial networks. *arXiv preprint arXiv:1706.03269*, 2017. Nadav Hallak, Panayotis Mertikopoulos, and Volkan Cevher. Regret minimization in stochastic non-convex learning via a proximal-gradient approach. In *International Conference on Machine Learning*, pp. 4008– 4017. PMLR, 2021. Elad Hazan, Karan Singh, and Cyril Zhang. Efficient regret minimization in non-convex games. In *International Conference on Machine Learning*, pp. 1433–1441. PMLR, 2017a. Elad Hazan, Karan Singh, and Cyril Zhang. Efficient regret minimization in non-convex games. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1433–1441. PMLR, 06–11 Aug 2017b. URL https://proceedings.mlr.press/v70/hazan17a.html. Elad Hazan et al. Introduction to online convex optimization. Foundations and Trends® *in Optimization*, 2 (3-4):157–325, 2016. Amélie Héliou, Matthieu Martin, Panayotis Mertikopoulos, and Thibaud Rahier. Online non-convex optimization with imperfect feedback. *Advances in Neural Information Processing Systems*, 33:17224–17235, 2020. Nicole Immorlica, Karthik Abinav Sankararaman, Robert Schapire, and Aleksandrs Slivkins. Adversarial bandits with knapsacks. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pp. 202–219. IEEE, 2019. Chi Jin, Praneeth Netrapalli, and Michael Jordan. What is local optimality in nonconvex-nonconcave minimax optimization? In *International conference on machine learning*, pp. 4880–4889. PMLR, 2020. Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005. Weiwei Kong and Renato DC Monteiro. An accelerated inexact proximal point method for solving nonconvexconcave min-max problems. *SIAM Journal on Optimization*, 31(4):2558–2585, 2021. Antoine Lesage-Landry, Joshua A Taylor, and Iman Shames. Second-order online nonconvex optimization. IEEE Transactions on Automatic Control, 66(10):4866–4872, 2020. Xiang Li, Junchi Yang, and Niao He. Tiada: A time-scale adaptive algorithm for nonconvex minimax optimization. *arXiv preprint arXiv:2210.17478*, 2022. Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *The 22nd International Conference on Artificial* Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, volume 89 of *Proceedings of Machine Learning Research*, pp. 983–992. PMLR, 2019. URL http://proceedings.mlr. press/v89/li19c.html. Tianyi Lin, Chi Jin, and Michael Jordan. On gradient descent ascent for nonconvex-concave minimax problems. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference* on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 6083–6093. PMLR, 13–18 Jul 2020a. URL https://proceedings.mlr.press/v119/lin20a.html. 
Tianyi Lin, Chi Jin, and Michael I Jordan. Near-optimal algorithms for minimax optimization. In Conference on Learning Theory, pp. 2738–2779. PMLR, 2020b. Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, and Yongxin Chen. Hybrid block successive approximation for one-sided non-convex min-max problems: algorithms and applications. *IEEE Transactions on Signal* Processing, 68:3676–3691, 2020. J-F Mertens and Abraham Neyman. Stochastic games. *International Journal of Game Theory*, 10(2):53–66, 1981. Monireh Mohebbi Moghadam, Bahar Boroomand, Mohammad Jalali, Arman Zareian, Alireza Daeijavad, Mohammad Hossein Manshaei, and Marwan Krunz. Game of gans: Game-theoretical models for generative adversarial networks. *arXiv preprint arXiv:2106.06976*, 2021. Olof Mogren. C-rnn-gan: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904, 2016. Arkadi Nemirovski, Anatoli B. Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. *SIAM J. Optim.*, 19(4):1574–1609, 2009. doi: 10.1137/070704277. URL https://doi.org/10.1137/070704277. Georgy Noarov, Mallesh Pai, and Aaron Roth. Online multiobjective minimax optimization and applications. arXiv preprint arXiv:2108.03837, 2021. Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D Lee, and Meisam Razaviyayn. Solving a class of non-convex min-max games using iterative first order methods. *Advances in Neural Information Processing* Systems, 32, 2019. Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning. *Optimization Methods and Software*, pp. 1–35, 2021. A. Pallares Rivera, He Wang, and Huan Xu. The online saddle point problem and online convex optimization with knapsacks. *arXiv: Machine Learning*, 2018. Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, and Prasant Mohapatra. Online and bandit algorithms for nonstationary stochastic saddle-point optimization. *ArXiv*, abs/1912.01698, 2019. Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, and Prasant Mohapatra. Stochastic zerothorder optimization under nonstationarity and nonconvexity. *Journal of Machine Learning Research*, 23 (64):1–47, 2022. Shai Shalev-Shwartz. Online learning and online convex optimization. *Found. Trends Mach. Learn.*, 4(2): 107–194, 2012. doi: 10.1561/2200000018. URL https://doi.org/10.1561/2200000018. Aman Sinha, Hongseok Namkoong, Riccardo Volpi, and John Duchi. Certifying some distributional robustness with principled adversarial training. *arXiv preprint arXiv:1710.10571*, 2017. Arun Sai Suggala and Praneeth Netrapalli. Online non-convex learning: Following the perturbed leader is optimal. In *Algorithmic Learning Theory*, pp. 845–861. PMLR, 2020. Davoud Ataee Tarzanagh and Laura Balzano. Online bilevel optimization: Regret analysis of online alternating gradient methods. *arXiv preprint arXiv:2207.02829*, 2022. Kiran K Thekumparampil, Prateek Jain, Praneeth Netrapalli, and Sewoong Oh. Efficient algorithms for smooth minimax optimization. *Advances in Neural Information Processing Systems*, 32, 2019. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017. Junchi Yang, Xiang Li, and Niao He. Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization. *arXiv preprint arXiv:2206.00743*, 2022. 
Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks. *Advances in Neural Information Processing Systems*, 32, 2019.

Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. *Handbook of Reinforcement Learning and Control*, pp. 321–384, 2021.

Mengxiao Zhang, Peng Zhao, Haipeng Luo, and Zhi-Hua Zhou. No-regret learning in time-varying zero-sum games. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), *International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA*, volume 162 of *Proceedings of Machine Learning Research*, pp. 26772–26808. PMLR, 2022. URL https://proceedings.mlr.press/v162/zhang22an.html.

Zhenxun Zhuang, Yunlong Wang, Kezi Yu, and Songtao Lu. No-regret non-convex online meta-learning. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 3942–3946. IEEE, 2020.

## Appendices

## A Concrete Toy Example for Section 3.1

Consider a two-player game characterized by time-varying loss functions $\{f_t\}_{t=1}^{T}$. For any $t\in[T]$, we define the function as:

$$f_{t}(\mathbf{x},\mathbf{y})=a_{t}\mathbf{x}^{2}-b_{t}\mathbf{y}^{2},\quad\mathbf{x}\in\mathbb{R},\;\mathbf{y}\in[-1,1].\tag{13}$$

Here, both $a_t$ and $b_t$ are strictly positive. It is evident that $f_t$ is strongly-convex with respect to $\mathbf{x}$ and strongly-concave with respect to $\mathbf{y}$. We can straightforwardly compute the min-max value of the problem $\min_{\mathbf{x}\in\mathbb{R}}\max_{\mathbf{y}\in[-1,1]} f_t(\mathbf{x},\mathbf{y})$ to be 0, which corresponds to the optimal solution $(\mathbf{x}^*_t,\mathbf{y}^*_t) = (0,0)$. Referencing Zhang et al. (2022), the dynamic Nash Equilibrium (NE)-regret is defined as:

$$\Re^{\mathrm{dyn}}(T):=\Big|\sum_{t=1}^{T}f_{t}\left(\mathbf{x}_{t},\mathbf{y}_{t}\right)-\sum_{t=1}^{T}\operatorname*{min}_{\mathbf{x}\in\mathbb{R}}\operatorname*{max}_{\mathbf{y}\in[-1,1]}f_{t}(\mathbf{x},\mathbf{y})\Big|.$$

We further simplify by setting $\mathbf{y}_t = \arg\max_{\mathbf{y}\in[-1,1]} f_t(\mathbf{x},\mathbf{y})$, as the maximization problem can be efficiently solved even in our nonconvex-strongly-concave setting. Then, for $f_t$ as defined in eq. (13), we obtain $\Re^{\mathrm{dyn}}(T) = \sum_{t=1}^{T} a_t\mathbf{x}_t^2$. On the other hand, when $w = 1$, $\nabla\Phi_{t,w}(\mathbf{x}_t) = 2a_t\mathbf{x}_t$, and the local NE-regret is

$$\Re_{1}^{\mathrm{NE}}(T)=\sum_{t=1}^{T}\|\nabla\Phi_{t,1}(\mathbf{x}_{t})\|^{2}=2\sum_{t=1}^{T}a_{t}^{2}\mathbf{x}_{t}^{2}.$$

Combining these results, we obtain:

$$\Re^{\mathrm{dyn}}(T)=\sum_{t=1}^{T}\frac{a_{t}^{2}}{a_{t}}\mathbf{x}_{t}^{2}\leq\frac{1}{\operatorname*{min}_{t\in[T]}a_{t}}\sum_{t=1}^{T}a_{t}^{2}\mathbf{x}_{t}^{2}\leq C\,\Re_{1}^{\mathrm{NE}}(T),$$

where $C = \frac{1}{2\min_{t\in[T]}a_{t}}$ is some positive constant. From the above derivations, it is clear that in this specific toy case, a lower local NE-regret implies a lower dynamic NE-regret.
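As a quick numerical sanity check of this derivation, one can verify $\Re^{\mathrm{dyn}}(T) \leq C\,\Re_1^{\mathrm{NE}}(T)$ directly. The sketch below uses arbitrary illustrative coefficients $a_t$ and iterates $\mathbf{x}_t$; these values are assumptions made purely for the check.

```python
# Numerical sanity check for the toy example in eq. (13) (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
T = 100
a = rng.uniform(0.5, 2.0, size=T)   # strictly positive coefficients a_t
x = rng.normal(size=T)              # arbitrary iterates x_t (y_t = 0 is optimal)

regret_dyn = np.sum(a * x**2)              # dynamic NE-regret for eq. (13)
regret_ne  = 2.0 * np.sum(a**2 * x**2)     # local NE-regret with w = 1, as above
C = 1.0 / (2.0 * a.min())
# lower local NE-regret implies lower dynamic NE-regret:
assert regret_dyn <= C * regret_ne + 1e-9
```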
## B Missing Proof of Section 4

## B.1 Technical Lemma

Recall that $\Phi_{t,w}(\mathbf{x}) = \max_{\mathbf{y}\in\mathcal{Y}} F_{t,w}(\mathbf{x},\mathbf{y})$ and $\mathbf{y}^*_{t,w}(\mathbf{x}) = \arg\max_{\mathbf{y}\in\mathcal{Y}} F_{t,w}(\mathbf{x},\mathbf{y})$. In this section, we first present some technical lemmas that characterize the structure of the functions $\Phi_{t,w}$ and $\mathbf{y}^*_{t,w}$ in the nonconvex-strongly-concave setting, which will be essential throughout the analysis.

Lemma B.1. *$\Phi_{t,w}(\cdot)$ is $(\ell+\kappa\ell)$-smooth with $\nabla\Phi_{t,w}(\cdot)=\nabla_{\mathbf{x}}F_{t,w}\left(\cdot,\mathbf{y}^{\star}_{t,w}(\cdot)\right)$. Also, $\mathbf{y}^{\star}_{t,w}(\cdot)$ is $\kappa$-Lipschitz.*

Proof. Since averaging preserves strong concavity and smoothness, i.e., $F_{t,w}(\mathbf{x},\mathbf{y})$ is still $\mu$-strongly-concave in $\mathbf{y}$ and $\ell$-smooth, the proof directly follows Lemma 4.3 in Lin et al. (2020a) and we omit the details.

Furthermore, we derive the following lemma to provide the smoothness property of $\mathbf{y}^*_{t,w}(\mathbf{x})$ with respect to $t$. In other words, given any fixed $\mathbf{x}$, the movement of $\mathbf{y}^*_{t,w}(\mathbf{x})$ as $t$ changes can be controlled by the variability of the environment over the sliding window.

Lemma B.2. *For any* $\mathbf{x}\in\mathbb{R}^m$, $t\in[T]$*, it holds that*

$$\left\|\mathbf{y}_{t-1,w}^{*}(\mathbf{x})-\mathbf{y}_{t,w}^{*}(\mathbf{x})\right\|\leq\frac{\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}))\right\|}{\mu(w-1)}.$$

Proof. By the optimality of $\mathbf{y}^*_{t,w}(\mathbf{x})$ and $\mathbf{y}^*_{t-1,w}(\mathbf{x})$, for all $\mathbf{x}$ we have

$$(\mathbf{y}-\mathbf{y}_{t,w}^{*}(\mathbf{x}))^{\top}\nabla_{\mathbf{y}}F_{t,w}(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x}))\leq0,\quad\forall\mathbf{y}\in\mathcal{Y},\tag{14}$$
$$(\mathbf{y}-\mathbf{y}_{t-1,w}^{*}(\mathbf{x}))^{\top}\nabla_{\mathbf{y}}F_{t-1,w}(\mathbf{x},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}))\leq0,\quad\forall\mathbf{y}\in\mathcal{Y}.\tag{15}$$

Summing up Equation (14) with $\mathbf{y}=\mathbf{y}^*_{t-1,w}(\mathbf{x})$ and Equation (15) with $\mathbf{y}=\mathbf{y}^*_{t,w}(\mathbf{x})$ yields

$$\left(\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right)^{\top}\left(\nabla_{\mathbf{y}}F_{t,w}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}F_{t-1,w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right)\leq0.\tag{16}$$

By the definition of $F_{t,w}(\mathbf{x},\mathbf{y})$, we have

$$\begin{aligned}\nabla_{\mathbf{y}}F_{t,w}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}F_{t-1,w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))&=\frac{1}{w}\sum_{i=0}^{w-1}\nabla_{\mathbf{y}}f_{t-i}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\frac{1}{w}\sum_{i=0}^{w-1}\nabla_{\mathbf{y}}f_{t-i-1}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\\&=\frac{1}{w}\left(\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right)\\&\quad+\frac{1}{w}\sum_{i=1}^{w-1}\left(\nabla_{\mathbf{y}}f_{t-i}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-i}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right).\end{aligned}\tag{17}$$

Since, for any $t$ and fixed $\mathbf{x}$, $f_t(\mathbf{x},\cdot)$ is $\mu$-strongly-concave, we have

$$\left(\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right)^{\top}\left\{\nabla_{\mathbf{y}}f_{t-i}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-i}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))\right\}+\mu\left\|\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right\|^{2}\leq0.\tag{18}$$

Plugging Equations (17) and (18) into Equation (16), we have

$$\left(\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right)^{\top}\frac{1}{w}\left\{\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right\}+\frac{w-1}{w}\mu\left\|\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right\|^{2}\leq0.$$

As a result,

$$\begin{aligned}\frac{w-1}{w}\mu\left\|\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right\|^{2}&\leq-\left(\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right)^{\top}\frac{1}{w}\left(\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right)\\&\leq\frac{1}{w}\left\|\mathbf{y}^*_{t-1,w}(\mathbf{x})-\mathbf{y}^*_{t,w}(\mathbf{x})\right\|\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}^*_{t-1,w}(\mathbf{x}))\right\|,\end{aligned}$$

where the last inequality follows from the Cauchy–Schwarz inequality.
Finally, some algebraic manipulation finishes the proof:

$$\left\|\mathbf{y}_{t-1,w}^{*}(\mathbf{x})-\mathbf{y}_{t,w}^{*}(\mathbf{x})\right\|\leq\frac{\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}))\right\|}{\mu(w-1)}.$$

The next lemma provides an upper bound for the gradient norm of $\nabla\Phi_{t,w}$ in terms of $\nabla F_{t,w}$, which justifies our design of the stop conditions.

Lemma B.3. *Given a pair* $(\mathbf{x},\mathbf{y})\in\mathbb{R}^m\times\mathcal{Y}$, for $t\in[T]$ and $w>0$*, it holds that*

$$\|\nabla\Phi_{t,w}(\mathbf{x})\|^{2}\leq2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right)\|^{2}+2\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})\|^{2}.$$

Proof. By the Cauchy–Schwarz inequality, we have

$$\begin{aligned}\|\nabla\Phi_{t,w}(\mathbf{x})\|^{2}&\leq2\|\nabla\Phi_{t,w}(\mathbf{x})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})\|^{2}+2\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})\|^{2}\\&\leq2\ell^{2}\|\mathbf{y}_{t,w}^{*}(\mathbf{x})-\mathbf{y}\|^{2}+2\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})\|^{2},\end{aligned}$$

where the last inequality holds by combining Lemma B.1 with the fact that $F_{t,w}$ is $\ell$-smooth. To proceed with the analysis, we need an important result on the global error bound condition for proximal gradient algorithms from Drusvyatskiy & Lewis (2018), which is restated in Lemma B.4 for completeness. Since $F_{t,w}(\mathbf{x},\cdot)$ is $\mu$-strongly-concave over $\mathcal{Y}$, we have

$$\Phi_{t,w}(\mathbf{x})-F_{t,w}(\mathbf{x},\mathbf{y})\geq\frac{\mu}{2}\|\mathbf{y}-\mathbf{y}_{t,w}^{*}(\mathbf{x})\|^{2},$$

which implies that $F_{t,w}(\mathbf{x},\cdot)$ satisfies the quadratic growth condition. Then, applying Lemma B.4, we obtain that the error bound condition for $F_{t,w}(\mathbf{x},\cdot)$ also holds. Specifically, in our setting, $\alpha = \mu$, $\beta = \ell$, and $\mathcal{G}_t$ degenerates to the projected gradient mapping with $t = \eta_{\mathbf{y}}$. Therefore,

$$\begin{aligned}\|\mathbf{y}_{t,w}^{*}(\mathbf{x})-\mathbf{y}\|&\leq\left(\frac{2}{\mu}+\eta_{\mathbf{y}}\right)(1+\ell\eta_{\mathbf{y}})\cdot\frac{1}{\eta_{\mathbf{y}}}\|\mathbf{y}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right)\|\\&=\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)(1+\ell\eta_{\mathbf{y}})\cdot\|\mathbf{y}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right)\|.\end{aligned}$$

Thus, we complete the proof.

Lemma B.4 (Restatement of Corollary 3.6 in Drusvyatskiy & Lewis (2018)). *Consider a closed, convex function* $g:\mathbb{R}^n\to\mathbb{R}$ *and a* $C^1$*-smooth convex function* $f:\mathbb{R}^n\to\mathbb{R}$ *with* $\beta$*-Lipschitz continuous gradient.*
*Denote the proximal mapping*

$$\operatorname{prox}_{t,g}(x):=\operatorname{argmin}_{y\in\mathbb{R}^{n}}\left\{g(y)+\frac{1}{2t}\|y-x\|^{2}\right\}$$

*and the prox-gradient mapping*

$$\mathcal{G}_{t}(x):=t^{-1}\left(x-\operatorname{prox}_{t,g}(x-t\nabla f(x))\right).$$

*Suppose that the function* $\varphi := f + g$ *has a nonempty set* $S$ *of minimizers and consider the following conditions:*

- *(Quadratic growth)*
$$\varphi(x)\geq\varphi^{*}+\frac{\alpha}{2}\cdot\mathrm{dist}^{2}(x;S)\quad\text{for all }x\in[\varphi\leq\varphi^{*}+\nu]\tag{19}$$

- *(Error bound condition)*
$$\operatorname{dist}(x,S)\leq\gamma\left\|\mathcal{G}_{t}(x)\right\|\quad\text{is valid for all}\quad x\in[\varphi\leq\varphi^{*}+\nu]\tag{20}$$

*Then property (19) implies property (20) with* $\gamma = 2\alpha^{-1} + t(1+\beta t)$*. Conversely, condition (20) implies condition (19) with any* $\alpha\in(0,\gamma^{-1}]$.

## B.2 Local Regret: Proof of Theorem 1

Proof of Theorem 1. Recall the definition of $\Phi_{t,w}$ and notice that

$$\Phi_{t,w}(\mathbf{x})=\operatorname*{max}_{\mathbf{y}\in\mathcal{Y}}\frac{1}{w}\sum_{i=t-w+1}^{t}f_{i}(\mathbf{x},\mathbf{y})=\operatorname*{max}_{\mathbf{y}\in\mathcal{Y}}\left[F_{t-1,w}(\mathbf{x},\mathbf{y})+\frac{1}{w}(f_{t}(\mathbf{x},\mathbf{y})-f_{t-w}(\mathbf{x},\mathbf{y}))\right].$$

Then

$$\begin{aligned}\|\nabla\Phi_{t,w}(\mathbf{x}_t)\|^2&=\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))\right\|^2\\&=\Big\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_t,\mathbf{y}^*_{t-1,w}(\mathbf{x}_t))+\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))-\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_t,\mathbf{y}^*_{t-1,w}(\mathbf{x}_t))\\&\qquad+\frac{1}{w}\left(\nabla_{\mathbf{x}}f_t(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))-\nabla_{\mathbf{x}}f_{t-w}(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))\right)\Big\|^2\\&\leq3\|\nabla\Phi_{t-1,w}(\mathbf{x}_t)\|^2+\frac{3\kappa^2}{(w-1)^2}\|\nabla_{\mathbf{y}}f_t(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x}_t,\mathbf{y}^*_{t-1,w}(\mathbf{x}_t))\|^2\\&\qquad+\frac{3}{w^2}\|\nabla_{\mathbf{x}}f_t(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))-\nabla_{\mathbf{x}}f_{t-w}(\mathbf{x}_t,\mathbf{y}^*_{t,w}(\mathbf{x}_t))\|^2,\end{aligned}\tag{21}$$

where the second term in the last inequality follows from

$$\begin{aligned}\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t}))\|&\leq\ell\|\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t})-\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t})\|\\&\leq\kappa\frac{\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x},\mathbf{y}_{t,w}^{*}(\mathbf{x}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}))\|}{w-1},\end{aligned}$$

where the second inequality is implied by Lemma B.2. Moreover, for the first term in Equation (21), by Lemma B.3 and the stop condition, we obtain

$$\|\nabla\Phi_{t-1,w}\left(\mathbf{x}_{t}\right)\|^{2}\leq\frac{\delta^{2}}{w^{2}}.$$

Summing over $t = 1,\dots,T$, and combining the definitions of the variation measures $V^1_w[T]$ and $V^2_w[T]$, we have

$$\Re_{w}^{\rm NE}(T)=\sum_{t=1}^{T}\|\nabla\Phi_{t,w}(\mathbf{x}_{t})\|^{2}\leq\frac{3}{w^{2}}\left(T\delta^{2}+\frac{(\kappa w)^{2}}{(w-1)^{2}}V_{w}^{2}[T]+V_{w}^{1}[T]\right).$$

## B.3 Oracle Queries: Proof of Theorem 2

Denote the sequence generated in the inner loop at time $t\in[T-1]$ by

$$\begin{array}{r l}{\mathbf{x}_{t}^{0}=\mathbf{x}_{t},}&{{}\mathbf{x}_{t}^{k+1}\leftarrow\mathbf{x}_{t}^{k}-\eta_{\mathbf{x}}\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right);}\\ {\mathbf{y}_{t}^{0}=\mathbf{y}_{t},}&{{}\mathbf{y}_{t}^{k+1}\leftarrow\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right).}\end{array}$$

Let $\tau_t$ be the number of times the gradient update is executed at the $t$-th iteration. Note that $\mathbf{x}_t^{\tau_t} = \mathbf{x}_{t+1}$ and $\mathbf{y}_t^{\tau_t} = \mathbf{y}_{t+1}$.

## B.3.1 Supporting Lemmas

We present three key step-descent lemmas. Throughout this section, we focus on a crucial quantity, $\delta^k_{t,w} = \left\|\mathbf{y}^{\star}_{t,w}(\mathbf{x}^k_t)-\mathbf{y}^k_t\right\|^2$, which is useful for the subsequent analysis.
Throughout our analysis, we choose $\eta_{\mathbf{x}} = \frac{1}{8\kappa^3\ell}$ and $\eta_{\mathbf{y}} = \frac{1}{\ell}$.

Lemma B.5. *Denote by* $\tau_t$ *the total number of inner-loop iterations at round* $t$ *with* $1\leq t\leq T-1$*. For convenience, let* $\tau_T = 0$*. For* $0\leq k\leq\tau_t-1$*, we have*

$$\Phi_{t,w}\left(\mathbf{x}_{t}^{k+1}\right)\leq\Phi_{t,w}\left(\mathbf{x}_{t}^{k}\right)-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell\right)\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}+\frac{\eta_{\mathbf{x}}\ell^{2}}{2}\delta_{t,w}^{k}.\tag{22}$$

Proof. Since $\Phi_{t,w}$ is $(\ell+\kappa\ell)$-smooth and $\ell+\kappa\ell\leq2\kappa\ell$, for any $\mathbf{x},\mathbf{x}^+\in\mathbb{R}^m$, we have

$$\Phi_{t,w}\left(\mathbf{x}^{+}\right)-\Phi_{t,w}\left(\mathbf{x}\right)-\left(\mathbf{x}^{+}-\mathbf{x}\right)^{\top}\nabla\Phi_{t,w}\left(\mathbf{x}\right)\leq\kappa\ell\left\|\mathbf{x}^{+}-\mathbf{x}\right\|^{2}.$$

Plugging in $\mathbf{x}^+-\mathbf{x}=-\eta_{\mathbf{x}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y})$ yields

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}^{+}\right)\leq\;&\Phi_{t,w}\left(\mathbf{x}\right)-\eta_{\mathbf{x}}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}+\eta_{\mathbf{x}}^{2}\kappa\ell\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}\\&+\eta_{\mathbf{x}}\left(\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)-\nabla\Phi_{t,w}\left(\mathbf{x}\right)\right)^{\top}\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right).\end{aligned}$$

By Young's inequality, we have

$$\left(\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)-\nabla\Phi_{t,w}\left(\mathbf{x}\right)\right)^{\top}\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\leq\frac{\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)-\nabla\Phi_{t,w}\left(\mathbf{x}\right)\right\|^{2}+\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}}{2}.$$

Since $\nabla\Phi_{t,w}(\mathbf{x})=\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x},\mathbf{y}^*_{t,w}(\mathbf{x}))$, we have

$$\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)-\nabla\Phi_{t,w}\left(\mathbf{x}\right)\|^{2}\leq\ell^{2}\|\mathbf{y}-\mathbf{y}_{t,w}^{*}(\mathbf{x})\|^{2}.$$

Putting these pieces together, we obtain

$$\Phi_{t,w}\left(\mathbf{x}^{+}\right)\leq\Phi_{t,w}\left(\mathbf{x}\right)-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell\right)\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x},\mathbf{y}\right)\right\|^{2}+\frac{\eta_{\mathbf{x}}\ell^{2}}{2}\left\|\mathbf{y}-\mathbf{y}_{t,w}^{*}(\mathbf{x})\right\|^{2}.$$

Lemma B.6. *For any* $t, k\geq0$*, the following statement holds true:*

$$\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\leq\left(4-\frac{2}{\kappa}\right)\delta_{t,w}^{k}.\tag{23}$$

Proof. By Young's inequality, we have

$$\begin{aligned}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}&\leq2\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)\|^{2}+2\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t}^{k}\|^{2}\\&\leq\left(2\left(1-\frac{1}{\kappa}\right)+2\right)\delta_{t,w}^{k}=\left(4-\frac{2}{\kappa}\right)\delta_{t,w}^{k}.\end{aligned}$$

Lemma B.7. *Let* $\delta_{t,w}^{k}=\left\|\mathbf{y}_{t,w}^{*}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t}^{k}\right\|^{2}$*; the following statement holds true:*

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{2\kappa}\right)\delta_{t,w}^{k-1}+2\kappa^{3}\eta_{\mathbf{x}}^{2}\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}.$$

Proof. Since $f_t(\mathbf{x},\cdot)$ is $\mu$-strongly-concave and $\eta_{\mathbf{y}} = 1/\ell$, we have

$$\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathbf{y}_{t}^{k}\|^{2}\leq\left(1-\frac{1}{\kappa}\right)\delta_{t,w}^{k-1}.$$

By Young's inequality, we have

$$\begin{aligned}\delta_{t,w}^{k}&\leq\left(1+\frac{1}{2(\kappa-1)}\right)\|\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k-1})-\mathbf{y}_{t}^{k}\|^{2}+\left(1+2(\kappa-1)\right)\|\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k})-\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k-1})\|^{2}\\&\leq\frac{2\kappa-1}{2\kappa-2}\|\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k-1})-\mathbf{y}_{t}^{k}\|^{2}+2\kappa\|\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k})-\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k-1})\|^{2}\\&\leq\left(1-\frac{1}{2\kappa}\right)\delta_{t,w}^{k-1}+2\kappa\|\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k})-\mathbf{y}_{t,w}^{\star}(\mathbf{x}_{t}^{k-1})\|^{2}.\end{aligned}\tag{24}$$
Since $\mathbf{y}^{\star}_{t,w}(\cdot)$ is $\kappa$-Lipschitz, we have

$$\|\mathbf{y}_{t,w}^{*}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t,w}^{*}\left(\mathbf{x}_{t}^{k-1}\right)\|^{2}\leq2\kappa^{2}\|\mathbf{x}_{t}^{k}-\mathbf{x}_{t}^{k-1}\|^{2}=2\kappa^{2}\eta_{\mathbf{x}}^{2}\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}.$$

Thus, plugging into eq. (24),

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{2\kappa}\right)\delta_{t,w}^{k-1}+2\kappa^{3}\eta_{\mathbf{x}}^{2}\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}.$$

## B.4 Proof of Theorem 2

Proof of Theorem 2. Denote $\gamma = 1-\frac{1}{2\kappa}$. From Lemma B.7 and telescoping, we have

$$\delta^{k}_{t,w}\leq\gamma^{k}\delta^{0}_{t,w}+2\kappa^{3}\eta^{2}_{\mathbf{x}}\left(\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t}\right)\right\|^{2}\right).\tag{25}$$

In particular, for $t > 1$,

$$\begin{aligned}\delta^{0}_{t,w}&=\|\mathbf{y}^{0}_{t}-\mathbf{y}^{*}_{t,w}(\mathbf{x}^{0}_{t})\|^{2}\\&\leq2\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})\|^{2}+2\|\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})-\mathbf{y}^{*}_{t,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})\|^{2}\\&\leq\frac{\delta^{2}}{\ell^{2}w^{2}}+\frac{2}{\mu^{2}(w-1)^{2}}\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))\|^{2}.\end{aligned}$$

Then, plugging Equation (25) into Equations (22) and (23) from Lemmas B.5 and B.6, and summing over the inner loop, we get

$$\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell-2\kappa^{4}\eta_{\mathbf{x}}^{3}\ell^{2}\right)\sum_{j=0}^{\tau_{t}-1}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}\leq\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)+\kappa\eta_{\mathbf{x}}\ell^{2}\delta_{t,w}^{0},$$
$$\sum_{j=0}^{\tau_{t}-1}\|\mathbf{y}_{t}^{j+1}-\mathbf{y}_{t}^{j}\|^{2}\leq(8\kappa-4)\delta_{t,w}^{0}+\left(16-\frac{8}{\kappa}\right)\kappa^{4}\eta_{\mathbf{x}}^{2}\sum_{j=0}^{\tau_{t}-1}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}.$$

Letting $\eta_{\mathbf{x}} = \frac{1}{8\kappa^3\ell}$, we have

$$\sum_{j=0}^{\tau_{t}-1}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}\leq\frac{8}{\eta_{\mathbf{x}}}\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+8\kappa\ell^{2}\delta_{t,w}^{0},\tag{26}$$
$$\sum_{j=0}^{\tau_{t}-1}(\kappa\ell)^{2}\|\mathbf{y}_{t}^{j+1}-\mathbf{y}_{t}^{j}\|^{2}\leq(8\kappa-4)(\kappa\ell)^{2}\delta_{t,w}^{0}+\frac{1}{4}\sum_{j=0}^{\tau_{t}-1}\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}.\tag{27}$$

Therefore, adding Equation (26) $\times\frac{5\eta_{\mathbf{x}}}{4}$ and Equation (27) $\times\frac{9\eta_{\mathbf{x}}}{2}$, we have

$$\frac{\eta_{\mathbf{x}}}{8}\sum_{j=0}^{\tau_t-1}\left[\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}+(6\kappa\ell)^{2}\|\mathbf{y}_{t}^{j+1}-\mathbf{y}_{t}^{j}\|^{2}\right]\leq\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+\frac{9\ell}{2}\delta_{t,w}^{0}.\tag{28}$$

Denoting $\Phi_{0,w}(\mathbf{x}) = 0$, we notice that

$$\begin{aligned}\Phi_{T,w}(\mathbf{x}_{T})&=\sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&=\sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t})\right)+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&=\frac{1}{w}\sum_{t=1}^{T}\left(F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t}))\right)+\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)\\&\quad+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&\overset{(i)}{\leq}\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right),\end{aligned}$$

where (i) follows from the fact that $\mathbf{y}^*_{t-1,w}(\mathbf{x}_t)$ is the maximizer of $F_{t-1,w}(\mathbf{x}_t,\cdot)$.
By some algebra, we have

$$\sum_{t=1}^{T-1}\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t,w}(\mathbf{x}_{t+1})\leq\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T,w}(\mathbf{x}_{T}).$$

Summing Equation (28) over $t$, we have

$$\begin{aligned}\frac{\eta_{\mathbf{x}}}{8}\times\frac{\delta^{2}}{2w^{2}}\,\tau=\frac{\eta_{\mathbf{x}}\delta^{2}\tau}{16w^{2}}&\leq\frac{\eta_{\mathbf{x}}}{8}\sum_{t=1}^{T}\sum_{j=0}^{\tau_{t}-1}\left[\left\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{j},\mathbf{y}_{t}^{j}\right)\right\|^{2}+(\kappa\ell)^{2}\|\mathbf{y}_{t}^{j+1}-\mathbf{y}_{t}^{j}\|^{2}\right]\\&\leq\sum_{t=1}^{T}\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+\frac{9\ell}{2}\sum_{t=1}^{T}\delta_{t,w}^{0}\\&\leq\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T+1,w}(\mathbf{x}_{T+1})\\&\qquad+\frac{9T\delta^{2}}{2\ell w^{2}}+\frac{9\ell}{\mu^{2}(w-1)^{2}}V_{w}^{2}[T]+\frac{9\ell D^{2}}{2}\\&\leq\frac{2MT}{w}+M+\frac{9T\delta^{2}}{2\ell w^{2}}+\frac{9\ell}{\mu^{2}(w-1)^{2}}V_{w}^{2}[T]+\frac{9\ell D^{2}}{2}.\end{aligned}$$

Hence

$$\tau\leq\frac{384\kappa^{3}\ell M w T}{\delta^{2}}+576\frac{\kappa^{2}T}{\mu}+1152\frac{w^{2}\kappa^{5}}{(w-1)^{2}\delta^{2}}V_{w}^{2}[T]+\frac{576D^{2}\kappa^{3}\ell^{2}w^{2}}{\delta^{2}}.$$

## C Missing Proof of Section 5

We first clarify some notation used in Algorithm 2:

$$G^{k+1}_t=\frac{1}{w}\sum_{i=t-w+1}^{t}\tilde\nabla f_i(\mathbf{x}^{k+1}_t,\mathbf{y}^{k+1}_t)=\tilde\nabla F_{t,w}(\mathbf{x}^{k+1}_t,\mathbf{y}^{k+1}_t)=\left(\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{k+1}_t,\mathbf{y}^{k+1}_t),\;\tilde\nabla_{\mathbf{y}}F_{t,w}(\mathbf{x}^{k+1}_t,\mathbf{y}^{k+1}_t)\right).$$

For notational simplicity, we denote $\mathbf{y}^k_t = \mathbf{y}^{\tau_t}_t$ for any $k\geq\tau_t$. Before the theoretical analysis of Algorithm 2 and the proofs for Section 5, we formally define the filtration in Algorithm 2 to describe clearly what is known and what is unknown at a given stage.

Definition 4 (Filtration). *For any* $t\geq1$*, we denote by* $\mathcal{F}_t$ *the σ-field that corresponds to the randomness of all gradient feedback up to stage* $t-1$ *and the decision of* $f_t$ *at stage* $t$*. In particular,* $\mathcal{F}_t$ *includes* $f_t$, $\mathbf{x}_t$ *and* $\tilde\nabla F_{t-1,w}(\mathbf{x}_t,\mathbf{y}_t)$*, but does not include* $\tilde\nabla f_t(\mathbf{x}_t,\mathbf{y}_t)$ *or* $\tilde\nabla F_{t,w}(\mathbf{x}_t,\mathbf{y}_t)$*. For any* $t\geq1$, $k\geq1$*, we denote by* $\mathcal{F}^k_t$ *the σ-field that corresponds to the randomness of all gradient feedback up to the* $k$*-th iteration in line 6 at stage* $t$ *in Algorithm 2. In particular,* $\mathcal{F}^k_t$ *includes* $f_t$, $\mathbf{x}^k_t$, $\mathbf{y}^k_t$, $\tilde\nabla F_{t,w}(\mathbf{x}_t,\mathbf{y}_t)$, $\{\tilde\nabla f_i(\mathbf{x}^{k-1}_t,\mathbf{y}^{k-1}_t)\}_{i=t-w+1}^{t}$ *and* $G^{k-1}_t$*, but does not include* $G^k_t$ *or* $\{\tilde\nabla f_i(\mathbf{x}^k_t,\mathbf{y}^k_t)\}_{i=t-w+1}^{t}$.

## C.1 Finite Iteration: Proof of Theorem 3

## C.1.1 Supporting Lemmas

Generally speaking, the lemmas in this section extend the lemmas in Appendix B.3.1 to the noisy setting. We first provide a descent lemma for $\Phi_{t,w}(\mathbf{x})$ in each iteration of the inner loop.

Lemma C.1. *Denote by* $\tau_t$ *the total number of inner-loop iterations at stage* $t$ *and* $\delta^k_{t,w} = \left\|\mathbf{y}^{\star}_{t,w}(\mathbf{x}^k_t)-\mathbf{y}^k_t\right\|^2$*. For* $0\leq k\leq\tau_t-1$*,*

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}_{t}^{k+1}\right)\leq\;&\Phi_{t,w}\left(\mathbf{x}_{t}^{k}\right)-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell\right)\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}+\eta_{\mathbf{x}}\ell^{2}\delta_{t,w}^{k}\\&+\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}.\end{aligned}$$

Proof. Since $\Phi_{t,w}$ is $(\ell+\kappa\ell)$-smooth, for any $\mathbf{x},\mathbf{x}^+\in\mathbb{R}^m$, we have

$$\Phi_{t,w}\left(\mathbf{x}^{+}\right)-\Phi_{t,w}\left(\mathbf{x}\right)-\left(\mathbf{x}^{+}-\mathbf{x}\right)^{\top}\nabla\Phi_{t,w}\left(\mathbf{x}\right)\leq\kappa\ell\left\|\mathbf{x}^{+}-\mathbf{x}\right\|^{2}.$$

Setting $\mathbf{x}^+=\mathbf{x}^{k+1}_t$, $\mathbf{x}=\mathbf{x}^k_t$, we have $\mathbf{x}^+-\mathbf{x}=\mathbf{x}^{k+1}_t-\mathbf{x}^k_t=-\eta_{\mathbf{x}}\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^k_t,\mathbf{y}^k_t)$, which yields

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}^{k+1}_t\right)\leq\;&\Phi_{t,w}\left(\mathbf{x}^k_t\right)-\eta_{\mathbf{x}}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\right\|^{2}+\eta_{\mathbf{x}}^{2}\kappa\ell\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\right\|^{2}\\&+\eta_{\mathbf{x}}\left(\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)-\nabla\Phi_{t,w}\left(\mathbf{x}^k_t\right)\right)^{\top}\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right).\end{aligned}\tag{29}$$
By Young's inequality, we have

$$\begin{aligned}&\left(\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)-\nabla\Phi_{t,w}\left(\mathbf{x}^k_t\right)\right)^{\top}\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\\&\leq\frac{\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)-\nabla\Phi_{t,w}\left(\mathbf{x}^k_t\right)\|^{2}+\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\|^{2}}{2}\\&\leq\frac{2\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\|^{2}+2\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)-\nabla\Phi_{t,w}\left(\mathbf{x}^k_t\right)\|^{2}}{2}+\frac{\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^k_t,\mathbf{y}^k_t\right)\|^{2}}{2}.\end{aligned}\tag{30}$$

Since $\nabla\Phi_{t,w}\left(\mathbf{x}_t^k\right)=\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_t^k,\mathbf{y}_{t,w}^*(\mathbf{x}_t^k)\right)$, we have

$$\|\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)-\nabla\Phi_{t,w}\left(\mathbf{x}_{t}^{k}\right)\|^{2}\leq\ell^{2}\|\mathbf{y}_{t}^{k}-\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}^{k})\|^{2}.\tag{31}$$

Putting Equations (29) to (31) together, we obtain

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}_{t}^{k+1}\right)\leq\;&\Phi_{t,w}\left(\mathbf{x}_{t}^{k}\right)-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell\right)\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\\&+\eta_{\mathbf{x}}\ell^{2}\|\mathbf{y}_{t}^{k}-\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}^{k})\|^{2}+\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\|^{2}.\end{aligned}$$

The next lemma characterizes the descent property of the distance to the maximizer $\mathbf{y}^*_{t,w}$.

Lemma C.2. *Let* $\delta^k_{t,w} = \left\|\mathbf{y}^{\star}_{t,w}(\mathbf{x}^k_t)-\mathbf{y}^k_t\right\|^2$*; the following statement holds true:*

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{4\kappa}\right)\delta_{t,w}^{k-1}+8\kappa^{3}\eta_{\mathbf{x}}^{2}\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}+\frac{2\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.$$

Proof. Since $f(\mathbf{x},\cdot)$ is $\mu$-strongly-concave and $\eta_{\mathbf{y}} = 1/\ell$, we have

$$\begin{aligned}\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathbf{y}_{t}^{k}\|^{2}&=\left\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)\right\|^{2}\\&=\Big\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)\\&\qquad+\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)\Big\|^{2}\\&\leq\left(1+\frac{1}{2(\kappa-1)}\right)\left\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)\right\|^{2}\\&\qquad+(1+2(\kappa-1))\left\|\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}^{k-1}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right)\right\|^{2}\\&\leq\left(1-\frac{1}{2\kappa}\right)\delta_{t,w}^{k-1}+\frac{2\kappa-1}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.\end{aligned}\tag{32}$$

By Young's inequality, we have

$$\begin{aligned}\delta_{t,w}^{k}&\leq\left(1+\frac{1}{2(2\kappa-1)}\right)\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathbf{y}_{t}^{k}\|^{2}+(1+2(2\kappa-1))\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)\|^{2}\\&\leq\frac{4\kappa-1}{2(2\kappa-1)}\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)-\mathbf{y}_{t}^{k}\|^{2}+4\kappa\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)\|^{2}\\&\leq\left(1-\frac{1}{4\kappa}\right)\delta_{t,w}^{k-1}+4\kappa\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)\|^{2}+\frac{2\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.\end{aligned}$$
Since $\mathbf{y}^{\star}_{t,w}(\cdot)$ is $\kappa$-Lipschitz, we have

$$\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k-1}\right)\|^{2}\leq2\kappa^{2}\|\mathbf{x}_{t}^{k}-\mathbf{x}_{t}^{k-1}\|^{2}=2\kappa^{2}\eta_{\mathbf{x}}^{2}\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}.$$

Thus, plugging in,

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{4\kappa}\right)\delta_{t,w}^{k-1}+8\kappa^{3}\eta_{\mathbf{x}}^{2}\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}+\frac{2\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.$$

The next lemma shows that the updates over $\mathbf{y}$ can be controlled by $\delta^k_{t,w}$ plus a noise term.

Lemma C.3. *For any* $t, k\geq0$*, the following statement holds true:*

$$\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\leq\left(4-\frac{1}{\kappa}\right)\delta_{t,w}^{k}+\frac{4\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.$$

Proof. By Young's inequality, we have

$$\begin{aligned}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}&\leq2\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)\|^{2}+2\|\mathbf{y}_{t,w}^{\star}\left(\mathbf{x}_{t}^{k}\right)-\mathbf{y}_{t}^{k}\|^{2}\\&\overset{(i)}{\leq}\left(2\left(1-\frac{1}{2\kappa}\right)+2\right)\delta_{t,w}^{k}+\frac{4\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}\\&\leq\left(4-\frac{1}{\kappa}\right)\delta_{t,w}^{k}+\frac{4\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2},\end{aligned}$$

where (i) follows from Equation (32).

## C.1.2 Proof of Theorem 3

Proof. From Lemma C.2,

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{4\kappa}\right)\delta_{t,w}^{k-1}+8\kappa^{3}\eta_{\mathbf{x}}^{2}\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}+\frac{2\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.$$

Denote $\gamma = 1-\frac{1}{4\kappa}$. Given $\mathcal{F}^{k-1}_t$, we have

$$\begin{aligned}\delta^{k}_{t,w}&\leq\gamma^{k}\delta^{0}_{t,w}+8\kappa^{3}\eta^{2}_{\mathbf{x}}\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t}\right)\right\|^{2}+\frac{2\kappa}{\ell^{2}}\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}\\&\overset{(i)}{\leq}\gamma^{k}D^{2}+\frac{32\kappa^{4}\eta^{2}_{\mathbf{x}}\delta^{2}}{3w^{2}}+\frac{2\kappa}{\ell^{2}}\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2},\end{aligned}\tag{33}$$

where the first term of (i) follows from the fact that $\mathcal{Y}$ is bounded by $D$, and the second term of (i) follows from the stopping criterion of Algorithm 2 and $\sum_{j=0}^{k-1}\gamma^{k-1-j}\leq4\kappa$. Notice that for any fixed $t, k$ and $j\in[k-1]$,

$$\begin{aligned}\mathbb{E}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}&\overset{(i)}{=}\mathbb{E}_{\mathcal{F}^{j}_{t}}\left[\mathbb{E}\left[\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}\,\Big|\,\mathcal{F}^{j}_{t}\right]\right]\\&=\mathbb{E}_{\mathcal{F}^{j}_{t}}\left[\frac{1}{w^{2}}\,\mathbb{E}\left[\Big\|\sum_{i=t-w}^{t-1}\left(\tilde\nabla_{\mathbf{x}}f_{i}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}f_{i}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right)\Big\|^{2}\,\Big|\,\mathcal{F}^{j}_{t}\right]\right]\\&\overset{(ii)}{=}\mathbb{E}_{\mathcal{F}^{j}_{t}}\left[\frac{1}{w^{2}}\sum_{i=t-w}^{t-1}\mathbb{E}\left[\left\|\tilde\nabla_{\mathbf{x}}f_{i}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}f_{i}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}\,\Big|\,\mathcal{F}^{j}_{t}\right]\right]\\&\overset{(iii)}{=}\mathbb{E}_{\mathcal{F}^{j}_{t}}\left[\frac{1}{w^{2}}\cdot w\cdot\frac{\sigma^{2}}{w^{2}}\right]=\frac{\sigma^{2}}{w^{3}},\end{aligned}\tag{34}$$

where (i) follows from the property of conditional expectation, (ii) follows from the independence of the SFO calls in line 9 of Algorithm 2, and (iii) follows from the definition of the SFO and the filtration $\mathcal{F}^j_t$. Thus, taking expectations on both sides of Equation (33), we have

$$\mathbb{E}\left[\delta_{t,w}^{k}\right]\leq\gamma^{k}D^{2}+\frac{32\kappa^{4}\eta_{\mathbf{x}}^{2}\delta^{2}}{3w^{2}}+\frac{8\kappa^{2}\sigma^{2}}{\ell^{2}w^{3}}.\tag{35}$$
Then, by Lemma C.1,

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)&\geq\left(\frac{\eta_{\mathbf{x}}}{2}-\eta^{2}_{\mathbf{x}}\kappa\ell\right)\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}-\eta_{\mathbf{x}}\ell^{2}\delta^{k}_{t,w}-\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\|^{2}\\&\geq\frac{15\eta_{\mathbf{x}}}{32}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}-\eta_{\mathbf{x}}\ell^{2}\delta^{k}_{t,w}-\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\|^{2}.\end{aligned}\tag{36}$$

By Lemma C.3,

$$\frac{15}{4}\kappa^{2}\ell^{2}\eta_{\mathbf{x}}\delta^{k}_{t,w}+\frac{15}{4\ell}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t})\right\|^{2}\geq\frac{15\eta_{\mathbf{x}}}{32}\times2\kappa^{2}\ell^{2}\|\mathbf{y}^{k+1}_{t}-\mathbf{y}^{k}_{t}\|^{2}.\tag{37}$$

Summing Equation (36) and Equation (37) and rearranging terms, we have

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)&\geq\frac{15\eta_{\mathbf{x}}}{32}\left(2\kappa^{2}\ell^{2}\|\mathbf{y}^{k+1}_{t}-\mathbf{y}^{k}_{t}\|^{2}+\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}\right)-5\kappa^{2}\ell^{2}\eta_{\mathbf{x}}\delta^{k}_{t,w}\\&\qquad-\left(\frac{15}{4\ell}+1\right)\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\|^{2}.\end{aligned}\tag{38}$$

Taking expectations on both sides of Equation (38), plugging in Equation (35), and following steps similar to Equation (34), we have

$$\mathbb{E}\left[\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)\right]\geq\frac{5\eta_{\mathbf{x}}\delta^{2}}{32w^{2}}-5\kappa^{2}\ell^{2}\eta_{\mathbf{x}}\left(\gamma^{k}D^{2}+\frac{32\kappa^{4}\eta^{2}_{\mathbf{x}}\delta^{2}}{3w^{2}}+\frac{8\kappa^{2}\sigma^{2}}{\ell^{2}w^{3}}\right)-\left(\frac{15}{4\ell}+1\right)\frac{\sigma^{2}}{w^{3}}.$$

Because $\gamma = 1-\frac{1}{4\kappa} < 1$, there exists a constant $\tilde K$ such that $\gamma^{\tilde K}D^2\leq\max\left\{\frac{32\kappa^4\eta^2_{\mathbf{x}}}{3w^2},\frac{8\kappa^2\sigma^2}{\ell^2w^3}\right\}$. Thus, for $k\geq\tilde K$, we have

$$\mathbb{E}\left[\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)\right]\geq\frac{5\eta_{\mathbf{x}}\delta^{2}}{32w^{2}}-5\kappa^{2}\ell^{2}\eta_{\mathbf{x}}\left(\frac{35\kappa^{4}\eta^{2}_{\mathbf{x}}\delta^{2}}{3w^{2}}+\frac{9\kappa^{2}\sigma^{2}}{\ell^{2}w^{3}}\right)-\left(\frac{15}{4\ell}+1\right)\frac{\sigma^{2}}{w^{3}}\geq\frac{25\eta_{\mathbf{x}}\delta^{2}}{256w^{2}}-\frac{45\kappa^{4}\eta_{\mathbf{x}}\sigma^{2}}{w^{3}}-\left(\frac{15}{4\ell}+1\right)\frac{\sigma^{2}}{w^{3}}.$$

When $\delta^2 > \frac{2304\kappa^4\sigma^2}{5w}+\frac{256(4\ell+1)\sigma^2}{25\eta_{\mathbf{x}}w}$, we set $\alpha = \frac{25\eta_{\mathbf{x}}\delta^2}{256w^2}-\frac{45\kappa^4\eta_{\mathbf{x}}\sigma^2}{w^3}-\left(\frac{15}{4\ell}+1\right)\frac{\sigma^2}{w^3} > 0$. Then, for $K\geq\tilde K$, we have

$$\begin{aligned}2M&\geq\mathbb{E}\left[\Phi_{t,w}\left(\mathbf{x}^{\tilde K}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{K+1}_{t}\right)\right]=\mathbb{E}\left[\sum_{k=\tilde K}^{K}\left(\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)\right)\right]\\&=\sum_{k=\tilde K}^{K}\mathbb{E}\left[\Phi_{t,w}\left(\mathbf{x}^{k}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}^{k+1}_{t}\right)\,\Big|\,\tau_{t}\geq k+1\right]\mathbb{P}\left(\tau_{t}\geq k+1\right)+0\cdot\mathbb{P}\left(\tau_{t}<k+1\right)\\&\geq\alpha\sum_{k=\tilde K}^{K}\mathbb{P}\left(\tau_{t}\geq k+1\right)\geq\alpha\sum_{k=\tilde K}^{K}\mathbb{P}\left(\tau_{t}>K\right)=\alpha\left(K-\tilde K\right)\mathbb{P}\left(\tau_{t}>K\right),\end{aligned}$$

where the third equality follows from the Optional Stopping Theorem. Consequently, $\tau_t$ is finite in probability, which implies that $\tau = \sum_{t=1}^{T}\tau_t$ is finite in probability, since it is a finite sum of variables that are finite in probability.

## C.2 Local Regret: Proof of Theorem 4

Proof of Theorem 4. Following Equation (21), we have

$$\begin{aligned}\|\nabla\Phi_{t,w}\left(\mathbf{x}_{t}\right)\|^{2}&=\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right\|^{2}\\&\leq3\|\nabla\Phi_{t-1,w}\left(\mathbf{x}_{t}\right)\|^{2}+\frac{3\kappa^{2}}{(w-1)^{2}}\|\nabla_{\mathbf{y}}f_{t}\left(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t})\right)-\nabla_{\mathbf{y}}f_{t-w}\left(\mathbf{x}_{t},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t})\right)\|^{2}\\&\qquad+\frac{3}{w^{2}}\|\nabla_{\mathbf{x}}f_{t}\left(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t})\right)-\nabla_{\mathbf{x}}f_{t-w}\left(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t})\right)\|^{2}.\end{aligned}\tag{39}$$

For the first term,

$$\begin{aligned}\|\nabla\Phi_{t-1,w}(\mathbf{x}_{t})\|^{2}&=\left\|\nabla\Phi_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}})\right\|^{2}\\&\leq3\left\|\nabla\Phi_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}})-\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\right\|^{2}+3\left\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\right\|^{2}\\&\qquad+3\left\|\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\right\|^{2}\\&\leq3\ell^{2}\left\|\mathbf{y}_{t-1}^{\star}(\mathbf{x}_{t-1}^{\tau_{t-1}})-\mathbf{y}_{t-1}^{\tau_{t-1}}\right\|^{2}+3\left\|\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\right\|^{2}\\&\qquad+3\left\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\right\|^{2}.\end{aligned}$$
Consider $\left\|\mathbf{y}^{\star}_{t-1}(\mathbf{x}^{\tau_{t-1}}_{t-1})-\mathbf{y}^{\tau_{t-1}}_{t-1}\right\|^2$; to keep the displays readable, we abbreviate the argument $(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{\tau_{t-1}}_{t-1})$ as $(\cdot)$ below:

$$\begin{aligned}\left\|\mathbf{y}^{\star}_{t-1}(\mathbf{x}^{\tau_{t-1}}_{t-1})-\mathbf{y}^{\tau_{t-1}}_{t-1}\right\|^{2}&\overset{(i)}{\leq}\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)\right\|^{2}\\&\leq2\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)\right\|^{2}\\&\qquad+2\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)\right\|^{2}\\&\overset{(ii)}{\leq}2\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)\right\|^{2}\\&\qquad+2\left(\frac{2\kappa}{\ell\eta_{\mathbf{y}}}+1\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\cdot\eta^{2}_{\mathbf{y}}\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2},\end{aligned}$$

where (i) follows from Lemma B.4 similarly to Lemma B.3, and (ii) follows from the fact that the projection operator is a contraction. Then

$$\begin{aligned}\|\nabla\Phi_{t-1,w}(\mathbf{x}_{t})\|^{2}&\leq6\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}_{t}+\eta_{\mathbf{y}}\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right)\right\|^{2}+3\left\|\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)\right\|^{2}\\&\qquad+6(2\kappa+\ell\eta_{\mathbf{y}})^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2}+3\left\|\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)\right\|^{2}\\&\overset{(i)}{\leq}\frac{\delta^{2}}{w^{2}}+6(2\kappa+\ell\eta_{\mathbf{y}})^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2}+3\left\|\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)\right\|^{2},\end{aligned}\tag{40}$$

where (i) follows from Stop Condition 2 of the inner loop. Plugging Equation (40) into Equation (39) and summing over $t$, we have

$$\begin{aligned}\Re^{\rm NE}_{w}(T)=\sum_{t=1}^{T}\|\nabla\Phi_{t,w}(\mathbf{x}_{t})\|^{2}&\leq\frac{3T\delta^{2}}{w^{2}}+\frac{3\kappa^{2}}{(w-1)^{2}}V_{w}^{2}[T]+\frac{3}{w^{2}}V_{w}^{1}[T]\\&\quad+\sum_{t=1}^{T}\Big\{18(2\kappa+\ell\eta_{\mathbf{y}})^{2}(1+\ell\eta_{\mathbf{y}})^{2}\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2}\\&\qquad\qquad+9\left\|\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\cdot)\right\|^{2}\Big\}.\end{aligned}\tag{41}$$

Notice that for any $t\in[T]$,

$$\begin{aligned}\mathbb{E}\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2}&\overset{(i)}{=}\mathbb{E}_{\mathcal{F}^{\tau_{t-1}}_{t-1}}\left[\mathbb{E}\left[\left\|\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)-\tilde\nabla_{\mathbf{y}}F_{t-1,w}(\cdot)\right\|^{2}\,\Big|\,\mathcal{F}^{\tau_{t-1}}_{t-1}\right]\right]\\&=\mathbb{E}_{\mathcal{F}^{\tau_{t-1}}_{t-1}}\left[\frac{1}{w^{2}}\,\mathbb{E}\left[\Big\|\sum_{i=t-w}^{t-1}\left(\nabla_{\mathbf{y}}f_{i}(\cdot)-\tilde\nabla_{\mathbf{y}}f_{i}(\cdot)\right)\Big\|^{2}\,\Big|\,\mathcal{F}^{\tau_{t-1}}_{t-1}\right]\right]\\&\overset{(ii)}{=}\mathbb{E}_{\mathcal{F}^{\tau_{t-1}}_{t-1}}\left[\frac{1}{w^{2}}\sum_{i=t-w}^{t-1}\mathbb{E}\left[\left\|\nabla_{\mathbf{y}}f_{i}(\cdot)-\tilde\nabla_{\mathbf{y}}f_{i}(\cdot)\right\|^{2}\,\Big|\,\mathcal{F}^{\tau_{t-1}}_{t-1}\right]\right]\\&\overset{(iii)}{=}\mathbb{E}_{\mathcal{F}^{\tau_{t-1}}_{t-1}}\left[\frac{1}{w^{2}}\cdot w\cdot\frac{\sigma^{2}}{w^{2}}\right]=\frac{\sigma^{2}}{w^{3}},\end{aligned}\tag{42}$$

where (i) follows from the property of conditional expectation, (ii) follows from the independence of the SFO calls in line 9 of Algorithm 2, and (iii) follows from the definition of the SFO.
Similarly, for any $t$, we have

$$\mathbb{E}\Big\|\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})-\tilde\nabla_{\mathbf{x}}F_{t-1,w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1}^{\tau_{t-1}})\Big\|^{2}=\frac{\sigma^{2}}{w^{3}}.\tag{43}$$

Plugging Equations (42) and (43) into Equation (41), we have

$$\begin{aligned}\mathbb{E}\left[\Re_{w}^{\rm NE}(T)\right]&=\sum_{t=1}^{T}\mathbb{E}\left[\left\|\nabla\Phi_{t,w}(\mathbf{x}_{t})\right\|^{2}\right]\\&\leq\frac{3T\delta^{2}}{w^{2}}+\frac{3\kappa^{2}}{(w-1)^{2}}V_{w}^{2}[T]+\frac{3}{w^{2}}V_{w}^{1}[T]+\frac{\left(18(2\kappa+\ell\eta_{\mathbf{y}})^{2}(1+\ell\eta_{\mathbf{y}})^{2}+9\right)T\sigma^{2}}{w^{3}}\\&\leq\frac{3T\delta^{2}}{w^{2}}+\frac{3\kappa^{2}}{(w-1)^{2}}V_{w}^{2}[T]+\frac{3}{w^{2}}V_{w}^{1}[T]+\frac{\left(300\kappa^{2}+9\right)T\sigma^{2}}{w^{3}}.\end{aligned}$$

## C.3 Iteration and SFO Calls Bound: Proof of Theorem 5

Proof of Theorem 5. From Lemma C.2,

$$\delta_{t,w}^{k}\leq\left(1-\frac{1}{4\kappa}\right)\delta_{t,w}^{k-1}+8\kappa^{3}\eta_{\mathbf{x}}^{2}\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\|^{2}+\frac{2\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k-1},\mathbf{y}_{t}^{k-1})\right\|^{2}.$$

Denote $\gamma = 1-\frac{1}{4\kappa}$. Given $\mathcal{F}_t$, we have

$$\begin{aligned}\delta^{k}_{t,w}&\leq\gamma^{k}\delta^{0}_{t,w}+8\kappa^{3}\eta^{2}_{\mathbf{x}}\left(\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t}\right)\right\|^{2}\right)\\&\quad+\frac{2\kappa}{\ell^{2}}\left(\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}\right).\end{aligned}\tag{44}$$

Then, by Lemma C.1,

$$\begin{aligned}\Phi_{t,w}\left(\mathbf{x}_{t}^{k+1}\right)&\leq\Phi_{t,w}\left(\mathbf{x}_{t}^{k}\right)-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell\right)\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\\&\quad+\eta_{\mathbf{x}}\ell^{2}\delta_{t,w}^{k}+\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}.\end{aligned}\tag{45}$$

Plugging Equation (44) into Equation (45) and summing over $k = 0,\dots,\tau_t-1$, we have

$$\begin{aligned}\Phi_{t,w}(\mathbf{x}^{\tau_{t}}_{t})&\leq\Phi_{t,w}(\mathbf{x}^{0}_{t})-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta^{2}_{\mathbf{x}}\kappa\ell\right)\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}+\eta_{\mathbf{x}}\ell^{2}\sum_{k=0}^{\tau_{t}-1}\gamma^{k}\delta^{0}_{t,w}\\&\quad+8\kappa^{3}\eta^{3}_{\mathbf{x}}\ell^{2}\sum_{k=0}^{\tau_{t}-1}\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t}\right)\right\|^{2}\\&\quad+2\eta_{\mathbf{x}}\kappa\sum_{k=0}^{\tau_{t}-1}\sum_{j=0}^{k-1}\gamma^{k-1-j}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{j}_{t},\mathbf{y}^{j}_{t})\right\|^{2}+\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)-\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}\\&\leq\Phi_{t,w}(\mathbf{x}^{0}_{t})-\left(\frac{\eta_{\mathbf{x}}}{2}-\eta^{2}_{\mathbf{x}}\kappa\ell-32\kappa^{4}\eta^{3}_{\mathbf{x}}\ell^{2}\right)\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t}\right)\right\|^{2}+4\kappa\eta_{\mathbf{x}}\ell^{2}\delta^{0}_{t,w}\\&\quad+\left(8\kappa^{2}\eta_{\mathbf{x}}+1\right)\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}^{k}_{t},\mathbf{y}^{k}_{t})\right\|^{2},\end{aligned}$$

where the last inequality follows from $\sum_{k=0}^{\tau_t-1}\gamma^k = \frac{1-\gamma^{\tau_t}}{1-\gamma}\leq4\kappa$ and changing the order of summation over $j$ and $k$. Rearranging the terms, we have

$$\left(\frac{\eta_{\mathbf{x}}}{2}-\eta_{\mathbf{x}}^{2}\kappa\ell-32\kappa^{4}\eta_{\mathbf{x}}^{3}\ell^{2}\right)\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\leq\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)+4\kappa\eta_{\mathbf{x}}\ell^{2}\delta_{t,w}^{0}+\left(8\kappa^{2}\eta_{\mathbf{x}}+1\right)\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.$$
By Lemma C.3,

$$\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\leq\left(4-\frac{1}{\kappa}\right)\delta_{t,w}^{k}+\frac{4\kappa}{\ell^{2}}\left\|\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.$$

Then

$$\begin{aligned}\sum_{k=0}^{\tau_{t}-1}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}&\leq(16\kappa-4)\delta_{t,w}^{0}+128\kappa^{4}\eta_{\mathbf{x}}^{2}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\\&\quad+\frac{36\kappa^{2}}{\ell^{2}}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.\end{aligned}$$

Notice that $\delta^0_{0,w}\leq D^2$ and, for any $t\geq2$,

$$\begin{aligned}\delta^{0}_{t,w}&=\|\mathbf{y}^{0}_{t}-\mathbf{y}^{*}_{t,w}(\mathbf{x}^{0}_{t})\|^{2}\leq2\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})\|^{2}+2\|\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})-\mathbf{y}^{*}_{t,w}(\mathbf{x}^{\tau_{t-1}}_{t-1})\|^{2}\\&\leq2\kappa^{2}\left\|\mathbf{y}^{\tau_{t-1}}_{t-1}-\mathcal{P}_{\mathcal{Y}}\left(\mathbf{y}^{\tau_{t-1}}_{t-1}+\eta_{\mathbf{y}}G^{\tau_{t-1}}_{\mathbf{y},t-1}\right)\right\|^{2}+\frac{2}{\mu^{2}(w-1)^{2}}\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))\right\|^{2}\\&\leq\frac{\delta^{2}}{4\ell^{2}w^{2}}+\frac{2}{\mu^{2}(w-1)^{2}}\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x}^{\tau_{t-1}}_{t-1},\mathbf{y}^{*}_{t-1,w}(\mathbf{x}^{\tau_{t-1}}_{t-1}))\right\|^{2}.\end{aligned}$$

Letting $\eta_{\mathbf{x}} = \frac{1}{32\kappa^3\ell}$ and $\eta_{\mathbf{y}} = \frac{1}{\ell}$, we have

$$\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\leq\frac{16}{7\eta_{\mathbf{x}}}\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+\frac{64\kappa\ell^{2}\delta_{t,w}^{0}}{7}+\frac{640}{7\eta_{\mathbf{x}}}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2},\tag{46}$$

$$\begin{aligned}\sum_{k=0}^{\tau_{t}-1}2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}&=\sum_{k=0}^{\tau_{t}-1}72(\kappa\ell)^{2}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\\&\leq72(\kappa\ell)^{2}(16\kappa-4)\delta_{t,w}^{0}+9\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}\\&\quad+2596\kappa^{4}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.\end{aligned}\tag{47}$$

Therefore, adding Equation (46) $\times\,10\eta_{\mathbf{x}}$ and Equation (47) $\times\,\eta_{\mathbf{x}}$, we have

$$\begin{aligned}&\eta_{\mathbf{x}}\sum_{k=0}^{\tau_{t}-1}\left[\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}+2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\right]\\&\leq\frac{160}{7}\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+1152\eta_{\mathbf{x}}\kappa^{3}\ell^{2}\delta_{t,w}^{0}+2688\eta_{\mathbf{x}}\kappa^{4}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}.\end{aligned}\tag{48}$$

Denoting $\Phi_{0,w}(\mathbf{x}) = 0$, we notice that

$$\begin{aligned}\Phi_{T,w}(\mathbf{x}_{T})&=\sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&=\sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t})\right)+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&=\frac{1}{w}\sum_{t=1}^{T}\left(F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-F_{t-1,w}(\mathbf{x}_{t},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t}))\right)+\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)\\&\quad+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right)\\&\overset{(i)}{\leq}\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)+\sum_{t=2}^{T}\left(\Phi_{t-1,w}(\mathbf{x}_{t})-\Phi_{t-1,w}(\mathbf{x}_{t-1})\right),\end{aligned}$$

where (i) follows from the fact that $\mathbf{y}^*_{t-1,w}(\mathbf{x}_t)$ is the maximizer of $F_{t-1,w}(\mathbf{x}_t,\cdot)$. By some algebra, we have

$$\sum_{t=1}^{T}\left(\Phi_{t,w}(\mathbf{x}_{t})-\Phi_{t,w}(\mathbf{x}_{t+1})\right)\leq\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T+1,w}(\mathbf{x}_{T+1}).$$

Summing Equation (48) over $t$ and taking expectations, we have

$$\begin{aligned}\left(\frac{\delta^{2}}{3w^{2}}-2688\kappa^{4}\frac{\sigma^{2}}{w^{2}}\right)\eta_{\mathbf{x}}\tau&\leq\sum_{t=1}^{T}\eta_{\mathbf{x}}\sum_{k=0}^{\tau_{t}-1}\left[\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}\left(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k}\right)\right\|^{2}+2\left(\left(\frac{2\kappa}{\eta_{\mathbf{y}}}+\ell\right)(1+\ell\eta_{\mathbf{y}})\right)^{2}\|\mathbf{y}_{t}^{k+1}-\mathbf{y}_{t}^{k}\|^{2}\right]\\&\qquad-2688\eta_{\mathbf{x}}\kappa^{4}\sum_{t=1}^{T}\sum_{k=0}^{\tau_{t}-1}\left\|\tilde\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})-\nabla_{\mathbf{x}}F_{t,w}(\mathbf{x}_{t}^{k},\mathbf{y}_{t}^{k})\right\|^{2}\\&\leq\sum_{t=1}^{T}\left(\Phi_{t,w}\left(\mathbf{x}_{t}\right)-\Phi_{t,w}\left(\mathbf{x}_{t+1}\right)\right)+1152\eta_{\mathbf{x}}\kappa^{3}\ell^{2}\sum_{t=1}^{T}\delta_{t,w}^{0}\\&\leq\frac{1}{w}\sum_{t=1}^{T}\left(f_{t}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))-f_{t-w}(\mathbf{x}_{t},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t}))\right)-\Phi_{T+1,w}(\mathbf{x}_{T+1})\\&\qquad+1152\eta_{\mathbf{x}}\kappa^{3}\ell^{2}\left\{\frac{(T-1)\delta^{2}}{4\ell^{2}w^{2}}+\frac{2}{\mu^{2}(w-1)^{2}}\sum_{t=2}^{T}\left\|\nabla_{\mathbf{y}}f_{t}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t,w}^{*}(\mathbf{x}_{t-1}^{\tau_{t-1}}))-\nabla_{\mathbf{y}}f_{t-w}(\mathbf{x}_{t-1}^{\tau_{t-1}},\mathbf{y}_{t-1,w}^{*}(\mathbf{x}_{t-1}^{\tau_{t-1}}))\right\|^{2}+D^{2}\right\}\\&\leq\frac{2TM}{w}+M+\frac{288T\eta_{\mathbf{x}}\kappa^{3}\delta^{2}}{w^{2}}+\frac{2304\eta_{\mathbf{x}}\kappa^{3}\ell^{2}}{\mu^{2}(w-1)^{2}}V_{w}^{2}[T]+1152\eta_{\mathbf{x}}\kappa^{3}\ell^{2}D^{2},\end{aligned}$$

where the first inequality follows from Assumption 4.
Thus $$\tau\leq\frac{1}{\eta_{\bf x}}\frac{2M T w+\frac{9\delta^{2}T}{\ell}+\frac{72\ell w^{2}}{\mu^{2}(w-1)^{2}}V_{w}^{2}[T]+w^{2}M+\frac{5\ell D^{2}w^{2}}{32}}{\left(\frac{\delta^{2}}{3}-2688\kappa^{4}\sigma^{2}\right)}.$$
Review 1: Summary: This paper studies an online nonconvex-concave min-max optimization problem. Specifically, its contributions are threefold: 1) This paper has introduced a new notion of regret, which is referred to as "local Nash equilibrium regret". 2) This paper has developed an algorithm, referred to as SODA; it has also developed its stochastic version (SODA-SFO). 3) This paper has analyzed SODA and SODA-SFO. In particular, it has provided both regret bounds (Theorem 1, Theorem 4) and complexity guarantees (Theorem 2, Theorem 3, Theorem 5). Strengths and Weaknesses: **Strengths:** - The considered problem is interesting. - To the best of my knowledge, the analyses in this paper are correct. **Weaknesses:** - My understanding is that the definition of the "local Nash equilibrium regret" might be problematic, which I will discuss in detail below. - This paper does not have any experimental results, which might be crucial to understand the robustness of the proposed algorithms to possible parameter misspecification. - The writing of this paper can be further improved. Requested Changes: **1) About the "local Nash equilibrium regret":** My main concern is that the definition of the "local Nash equilibrium regret" might be problematic. In particular, I think it has the following issues: first, even the Nash equilibrium (NE) regret developed in Rivera et al. (2018) might be problematic. In particular, if the x-learner and the y-learner cooperate, they can achieve small NE-regret at (x, y) that is far away from the min-max optimal solution. However, this definition does not prevent the x-learner and the y-learner from cooperating. This seems to be problematic. Moreover, this NE-regret measures the **performance distance** (due to the absolute value), rather than the **performance loss**. This also seems to be problematic. I am wondering if this metric can be cheated by an algorithm in some practical problems? The "local Nash equilibrium (NE) regret" seems to be more problematic. Specifically, as its name indicates, it is based on the magnitudes of the gradients (local information), rather than performance loss or distance. Intuitively, it measures how close the solution is to a stationary point (even a very bad stationary point). I am also wondering if this metric can be cheated by an algorithm in some problems? I fully understand that the NE regret was developed in previous literature and the authors are following existing work. But still, I hope the authors can further clarify these issues in a revision since the regret definition is crucial for this paper. **2) Experimental results:** I recommend that the authors add some experimental results to the paper. In particular, the proposed algorithms take several parameters, including the window size $w$, the step sizes $\eta_x$ and $\eta_y$, and the tolerance $\delta$. Theoretical results further show that the optimal choices of the step sizes depend on parameters like $\ell$ and $\mu$, **which are typically unknown in practice**. Thus, I think it is necessary to demonstrate some experimental results to illustrate the robustness of the proposed algorithms to possible misspecification of these parameters. I recommend the authors run SODA and/or SODA-SFO on one or two practical problems and sweep over $w$ and the step sizes. The experimental results should demonstrate whether the proposed algorithms are robust to possible misspecification of the algorithm parameters. **3) Writing:** Overall, the writing of this paper is pretty good.
However, I recommend the authors add proof sketches for key results (e.g., Theorem 1) to the main body of the paper so that readers can get the high-level idea of the analysis without diving into the appendices.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary:
This paper studies an online minimax optimization problem, where the objective function is nonconvex in $x$ but strongly concave in $y$. The authors propose a new performance metric, local Nash equilibrium (NE)-regret, which extends the commonly used metric for offline nonconvex-strongly-concave minimax optimization. The authors also define total-variation-type measures to quantify the variability of the environment within a time window. On top of these settings, the authors propose the (deterministic) SODA algorithm, which is OGDA but using the time-window average $F_{t,w}$ as the objective function. Then, the authors establish theoretical guarantees, that is, bounds on the local NE regret and the total number of iterations, where the latter is proportional to the total number of gradient oracle calls. Furthermore, the authors consider the setting of a stochastic first-order oracle (being unbiased and having bounded variance) instead of having full function gradient access. For this setting, they propose a variant of the algorithm, SODA-SFO, and establish its tractability, regret guarantee, and iteration bound.

Strengths and Weaknesses:

__Strengths__
- Overall correct result statements and proofs.
- Clear structure and writing.
- Clear assumption and result statements.
- Adequate justifications for choosing the local NE regret with a preset time window over the existing NE regret notion in online convex-concave minimax problems.
- Checked all proofs and did not find severe flaws.

__Weaknesses__
One main point to address is whether the nonconvex-strongly-concave setting is indeed practically motivated. It would be good to have a few simple, structured, practically motivated examples (or references that suggest these) of decision-making problems that are not well captured by existing settings but fit the new nonconvex-strongly-concave setting of this paper. Arguably, "online deep learning" is very broad and breaks every assumption. The examples don't have to satisfy all assumptions but should really demonstrate the value/meaningfulness of the nonconvex-strongly-concave setting. For example, some previous papers (on different problem settings) have included such brief discussions:
- Immorlica et al. (https://arxiv.org/pdf/1811.11881.pdf) has a dynamic pricing example and a pointer to Badanidiyuru et al. (2018) for their BwK problem setting.
- Agarwal et al. (http://proceedings.mlr.press/v99/agarwal19a/agarwal19a.pdf) discusses the implication of their results for non-convex games and, in particular, GANs. They also emphasized the relevance of the broader area of online learning in games at the very beginning.

See Requested Changes for further questions.

Requested Changes:

Definition 2 & Remark 1: Note that the first few terms in (5) won't be 0 under the periodicity assumption. Therefore, $V_{x,w}[T] = V_{y,w}[T] = 0$ is false, unless you define $f_{-1}$, $f_{-2}$, etc. as having periodical values instead of 0. But these are currently defined as $0$ in Section 3.1 and this is used throughout. If you keep the definition of $f_{-1} = f_{-2} = \dots = 0$, then under periodicity, the variation would likely be $O(w)$, which is ok if period length $= w \ll T$.
In proof of Lemma A.3: Can you elaborate on the use of the global error bound condition in Drusvyatskiy & Lewis (2018)? I think the RHS of the last inequality in the proof of A.3 is wrong. It looks like a copy-paste error. It should instead be the first term in the RHS of the first inequality in this proof, aka the norm of the prox-gradient (projected gradient) mapping. In addition, please further elaborate on the use of the global error bound condition in the proof, as it's a key result for this lemma and hence for this work. Corollary 3.6 in D&L (2018) seems to be the right result you are using. You might want to restate it in your notation as a separate lemma. The deduction should be: strong convexity (concavity) of $F_{t,w}(x, \cdot)$ ⇒ quadratic growth ⇒ error bound condition with RHS being the norm of the prox-gradient mapping.

The term "regret" suggests a comparison with a better objective value in hindsight from an optimal policy. I would really name $R_{w-NE}$ as "aggregate time-window averaged leader gradient residuals". You can keep the existing name, but may need to explain how it's indeed a "regret"-type metric. Maybe: in the special convex-concave case, a lower $w$-local NE regret (defined in this paper) implies a lower NE regret (defined in Rivera et al., 2018).

Writing suggestions (take them at your discretion):
- Maybe: $R_{w-NE}$ → $R^{\rm NE}_{w}$
- In the paragraph starting with "Why sliding window averaging?": "… a typical notion that people are interested in real-world applications" (there might need to be another "in" after "interested"; in general I would avoid phrases like "people are interested in"; try using "a typical/intuitive notion commonly used in practice")
- Section 3.2 Remark 1: "For the well-timed $w$" ("the" → "a")
- Section 3.2 Remark 1: "could be considerably small" (small → smaller)
- Proof of Lemma A.1: "Since the averaging maintains the strongly-nonconcavity and smoothness" (no "the")
- Proof of Lemma A.2: "strongly-convave" (typo)
- Proof of Lemma A.2: "in term of notions about $\nabla F_{t,w}$" (remove "notions about")
- Section 5.2: You may want to start this section by "Given the SFO and the inner-loop termination condition in line 6 of Algorithm 2, one immediate question is whether Algorithm 2 terminates in finite time. To this end, we first establish that…"; also, "computation tractable" → "computationally tractable"; you may simply remove the last clause "which justifies that …".
- __expectation__ local NE-regret vs. __expected__ local NE-regret?
- Theorem 3: "$\tau_t$ and $\tau$ is" → "$\tau_t$ and $\tau$ are"
- Paragraph under Theorem 4: "Beyond finite stopping…" Fix this sentence. Maybe start this paragraph as "Beyond …, we next provide an upper bound on the overall iteration complexity of SODA-SFO. Similar to Li & Orabona …, we adopt the following stronger boundedness assumption on the SFO."

Broader Impact Concerns: N.A.

==================================================

Review 3:

Summary:
The paper considers the problem of online min-max optimization, expressed as $\min_x \max_y f_t(x,y)$. The paper differs from existing works in two important aspects:
- firstly, the function $f_t$ is time-varying, reflecting the changing nature of non-stationary environments;
- secondly, $f_t$ is non-convex in $x$ and strongly concave in $y$, which presents unique challenges in solving the problem.

To address this problem, the authors introduce a new performance measure, referred to as the local NE-regret, as well as two non-stationarity measures for the variation of gradients.
The authors propose an algorithm called SODA, which guarantees $O(T/w^2)$ local NE-regret with $O(Tw)$ gradient oracle complexity, where $w$ represents the size of the window, a hyper-parameter in the algorithm. Finally, the authors extend their results to stochastic first-order feedback.

Strengths and Weaknesses:

### Strengths
One of the strengths of this work is the significance of the problem it addresses, as non-convexity and non-stationary environments are commonly encountered in real-world applications. The authors' proposed solution for the problem is complete. Additionally, the paper is well-written, with clear explanations of the problem, its motivation, the proposed solution, and its extension.

### Weaknesses
There are several weaknesses in this work that should be addressed.
1. The function $f_t$ with non-convex $x$ and strongly concave $y$ seems somewhat artificial. While the authors argue that this kind of problem can arise in time-varying two-player zero-sum stochastic games, where the policies are modeled by DNNs with strong regularization, it is not clear whether this is a common problem in real-world applications.
2. The parameter $x$ is unbounded while $y$ is bounded, and the intuition behind this choice is not fully explained.
3. The definitions of the non-stationarity measures $V_{x,w}[T]$ and $V_{y,w}[T]$ lack intuition. It is not clear why the gradient of $f_{t-w}$ is measured at $y_{t,w}^*(x)$ with a mismatch in the time index, nor is it clear why the analogous quantity in $V_{y,w}[T]$ is in terms of $y_{t-1,w}^*(x)$ but not $y_{t,w}^*(x)$ as in $V_{x,w}[T]$.
4. The algorithm shares high similarity with [Hazan et al., 2017] (ICML 2017), "Efficient Regret Minimization in Non-Convex Games", but without the necessary comparison and discussion; [Hazan et al., 2017] mainly focuses on the standard online learning setting with only one player.
5. The extension to the stochastic first-order oracle also appears in [Hazan et al., 2017].

Overall, addressing these weaknesses could improve the clarity and impact of the paper.

Requested Changes:
There are several major concerns that should be resolved before acceptance. Other minor comments would also be helpful for improving the paper.
1. (Major) The authors should provide more intuition and motivation for the artificial definitions and formulations to make this work more convincing to readers. This would require a careful examination of how these definitions and formulations relate to real-world applications.
2. (Major) It is necessary that the authors discuss the difference between this work and [Hazan et al., 2017] in terms of techniques. Are there some fundamental difficulties in extending the results of [Hazan et al., 2017]?
3. (Minor) Are there some unique challenges in extending the results to the stochastic first-order oracle?
4. (Minor) In Section 1.1, when illustrating the main contributions of this work, $w$ is not defined yet.
5. (Minor) At the beginning of Section 1.2: $f_t(x,y) = x^T A_t y$, but not $f_t(x,y) = x^T A_t y_t$.
6. (Minor) The line before Definition 1: a missing space before 'Thus'.
7. (Minor) In my opinion, removing the definition of the NE-regret of [Rivera et al., 2018] and the dynamic NE-regret of [Zhang et al., 2022] may be better. The authors could directly discuss the reason for choosing the gradient norm as the performance measure and why it can capture the dynamic nature. The current statements are a little confusing.
8. (Minor) The authors could consider changing the abbreviation of their algorithm from "SODA" to avoid confusion with the top TCS conference of the same name.

Broader Impact Concerns: The work is mostly theoretical and the broader impact discussion is not applicable.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: In this paper, the authors consider online nonconvex-strongly-concave min-max optimization in a nonstationary environment. They use local Nash equilibrium (NE)-regret as the performance measure, and propose a new algorithm named TSODA to achieve the optimal regret. Furthermore, the authors extend their result to the setting with stochastic first-order feedback. Some experiments are conducted to evaluate the performance of the proposed algorithm. The paper makes significant contributions to online nonconvex-strongly-concave min-max optimization. After discussions, all the reviewers are generally positive about this paper. But there still exist several issues to be addressed: (a) more justification of the "local Nash equilibrium regret"; (b) a more accurate title; (c) more comparisons with Lin et al. (2020a).

==================================================
# RCT Rejection Sampling for Causal Estimation Evaluation

Katherine A. Keith *kak5@williams.edu* Williams College

Sergey Feldman *sergey@allenai.org* Allen Institute for Artificial Intelligence

David Jurgens *jurgens@umich.edu* University of Michigan

Jonathan Bragg *jbragg@allenai.org* Allen Institute for Artificial Intelligence

Rohit Bhattacharya *rb17@williams.edu* Williams College

Reviewed on OpenReview: *https://openreview.net/forum?id=F74ZZk5hPa*

## Abstract

Confounding is a significant obstacle to unbiased estimation of causal effects from observational data. For settings with high-dimensional covariates—such as text data, genomics, or the behavioral social sciences—researchers have proposed methods to adjust for confounding by adapting machine learning methods to the goal of causal estimation. However, empirical evaluation of these adjustment methods has been challenging and limited. In this work, we build on a promising empirical evaluation strategy that simplifies evaluation design and uses real data: subsampling randomized controlled trials (RCTs) to create confounded observational datasets while using the average causal effects from the RCTs as ground-truth. We contribute a new sampling algorithm, which we call *RCT rejection sampling*, and provide theoretical guarantees that causal identification holds in the observational data to allow for valid comparisons to the ground-truth RCT. Using synthetic data, we show our algorithm indeed results in low bias when oracle estimators are evaluated on the confounded samples, which is not always the case for a previously proposed algorithm. In addition to this identification result, we highlight several finite data considerations for evaluation designers who plan to use RCT rejection sampling on their own datasets. As a proof of concept, we implement an example evaluation pipeline and walk through these finite data considerations with a novel, real-world RCT—which we release publicly—consisting of approximately 70k observations and text data as high-dimensional covariates. Together, these contributions build towards a broader agenda of improved empirical evaluation for causal estimation.

## 1 Introduction

Across the empirical sciences, confounding is a significant obstacle to unbiased estimation of causal effects from observational data. Covariate adjustment on a relevant set of confounders, aka *backdoor adjustment* (Pearl, 2009), is a popular technique for mitigating such confounding bias. In settings with only a few covariates, simple estimation strategies—e.g., parametric models or contingency tables—often suffice to compute the adjusted estimates. However, modern applications of causal inference have had to contend with thousands of covariates in fields like natural language processing (Keith et al., 2020; Feder et al., 2022), genetics (Stekhoven et al., 2012), or the behavioral social sciences (Li et al., 2016; Eckles & Bakshy, 2021). In these high-dimensional scenarios, more sophisticated methods are needed and often involve machine learning. Recent approaches include non-parametric and semi-parametric estimators (Hill, 2011; Chernozhukov et al., 2018; Athey et al., 2018; Farrell et al., 2021; Bhattacharya et al., 2022), causally-informed covariate selection (Maathuis et al., 2009; Belloni et al., 2014; Shortreed & Ertefaie, 2017), proxy measurement and correction (Kuroki & Pearl, 2014; Wood-Doughty et al., 2018), and causal representation learning (Johansson et al., 2016; Shi et al., 2019; Veitch et al., 2020).
Despite all this recent work targeted at high-dimensional confounding, these methods have not been systematically and empirically benchmarked. Such evaluations are essential in determining which methods work well in practice and under what conditions. However, unlike supervised learning problems which have ground-truth labels available for evaluating predictive performance on a held-out test set, analogous causal estimation problems require ground-truth labels for counterfactual outcomes of an individual under multiple versions of the treatment, data that is generally impossible to measure (Holland, 1986).

A promising evaluation strategy is to directly subsample data from a randomized controlled trial (RCT) in a way that induces confounding. Causal effect estimates obtained using the confounded observational samples can then be compared against the ground-truth estimates from the RCT to assess the performance of different causal estimators. This idea has appeared in works like Hill (2011) and Zhang & Bareinboim (2021) and was recently formalized by Gentzel et al. (2021). We contribute to this evaluation strategy, which we subsequently refer to as *RCT subsampling*, via theory that clarifies why and how RCT subsampling algorithms should be constrained in order to produce valid downstream empirical comparisons. In particular, we prove previous subsampling algorithms can produce observational samples from which the causal effect is provably not identified, which makes recovery of the RCT ground-truth impossible (even with infinite samples). To address this issue, we present a new RCT subsampling algorithm, which we call *RCT rejection sampling*, that appropriately constrains the subsampling such that the observed data distribution permits identification.

In addition to improving the theoretical foundations of RCT subsampling, we provide evaluation designers a scaffolding to apply the theory. We implement a proof of concept evaluation pipeline with a novel, real-world RCT dataset—which we release publicly—consisting of approximately 70k observations and text data as high-dimensional covariates. We highlight important finite data considerations: selecting an RCT dataset and examining when empirical evaluation is appropriate; empirically verifying a necessary precondition for RCT subsampling; specifying and diagnosing an appropriate confounding function using finite samples; applying baseline estimation models; and speculating briefly on additional challenges that could arise. For each of these considerations, we walk through specific approaches we take in our proof of concept pipeline.

In summary, our contributions are:
- We provide a proof using existing results in causal graphical models showing that previous RCT subsampling procedures (e.g., Gentzel et al. (2021)) may draw observational data in a way that prevents non-parametric identification of the causal effect due to selection bias (§3.3).
- We propose a new subsampling algorithm, which we call *RCT rejection sampling*, that is theoretically guaranteed to produce an observational dataset where samples are drawn according to a distribution where the effect is identified via a backdoor functional (§3.4). Using three settings of synthetic data, we show our algorithm results in low bias, which is not always the case for a previous algorithm (§3.5).
- For evaluation designers who plan to use RCT rejection sampling for their own datasets, we highlight several finite data considerations and implement a proof of concept pipeline with a novel, real-world RCT dataset and application of baseline estimation models (§4).
- We release this novel, real-world RCT dataset of approximately 70k observations that has text as covariates (§4.1.1). We also release our code.1

These contributions build towards a more extensive future research agenda in empirical evaluation for causal estimation (§5).

1 Code and data at https://github.com/kakeith/rct_rejection_sampling.

| Dataset | Eval. strategy | DoF | Data avail. | DGP realism | Covariates (num.) | Data public? |
|---|---|---|---|---|---|---|
| Simulation, normal assmpts. (D'Amour & Franks, 2021) | Synthetic | ✗ Many | ✓ High | ✗ Low | ✓ High (1000+) | ✓ Yes |
| IHDP-ACIC 2016 (Dorie et al., 2019) | Semi-synthetic | ✗ Many | ✓ High | ✗ Medium | ✗ Medium (58) | ✓ Yes |
| PeerRead theorems (Veitch et al., 2020) | Semi-synthetic | ✗ Many | ✓ High | ✗ Medium | ✓ High (Text vocab) | ✓ Yes |
| RCT repositories (Gentzel et al., 2021) | RCT subsampling | ✓ Few | ✓ High (RCTs) | ✓ High | ✗ Low (1-2) | ✓ Yes |
| Job training (LaLonde, 1986) | COS | ✓ Few | ✗ Low | ✓ High | ✗ Low (4) | ✓ Yes |
| Facebook peer effects (Eckles & Bakshy, 2021) | COS | ✓ Few | ✗ Low | ✓ High | ✓ High (3700) | ✗ No |
| This work | RCT subsampling | ✓ Few | ✓ High (RCTs) | ✓ High | ✓ High (Text vocab) | ✓ Yes |

Table 1: Select related work in empirical evaluation of causal estimators compared on general desiderata of: ✓ few degrees of freedom (DoF) for the evaluation designer, ✓ high data availability, and ✓ realistic data-generating processes (DGP). We also examine the accompanying datasets presented for evaluating backdoor adjustment. Here, we want a ✓ high number of covariates to make the evaluation non-trivial and ✓ public availability of the data for reuse and reproducibility.

## 2 Related Work in Empirical Evaluation of Causal Estimators

As we discussed briefly in Section 1, empirical evaluation of causal estimation methods for observational data is difficult but important. We argue an evaluation strategy should in general (i) reduce the *evaluation designers' degrees of freedom*, i.e., limit the number of choices researchers have to (inadvertently) pick an evaluation that favors their own method (Gentzel et al., 2019); (ii) have the necessary data (e.g., RCTs) available; (iii) ensure the data generating process (DGP) reflects the real world; and (iv) make the data publicly available for reuse and reproducibility. For applications of backdoor adjustment, we argue non-trivial evaluation should additionally (v) include a high number of covariates that could be used in the adjustment set. Table 1 compares select previous work (and our own) according to the above desiderata. We briefly discuss these and other related work, and make a qualitative argument for the RCT subsampling strategy we contribute to.

Synthetic evaluations are ones in which researchers specify the entire DGP, e.g., D'Amour & Franks (2021); Schmidt et al. (2022).
This allows for infinite data availability, but is prone to encoding researcher preferences and can lead to over-simplification (or overly complex DGPs) compared to real-world observational scenarios.

Semi-synthetic evaluations use some real data but specify the rest of the synthetic DGP. This approach has been used in causal inference competitions (Dorie et al., 2019; Shimoni et al., 2018) and settings with text data as confounding variables (Roberts et al., 2020; Veitch et al., 2020; Weld et al., 2022). Other semi-synthetic work fits generative models to real-world data (Neal et al., 2020; Parikh et al., 2022) or uses pre-trained language models to generate high-dimensional confounders from variables in a synthetic DGP (Wood-Doughty et al., 2021). Although more realistic than synthetic data, semi-synthetic DGPs can also make unrealistic assumptions; for example, Reisach et al. (2021) demonstrate this issue in the context of evaluating causal discovery algorithms.

Constructed observational studies (COSs) start with RCTs and then find non-experimental control samples that come from a similar population (LaLonde, 1986; Hill et al., 2004; Arceneaux et al., 2006; Shadish et al., 2008; Jaciw, 2016; Gordon et al., 2019; Eckles & Bakshy, 2021; Zeng et al., 2022; Gordon et al., 2022).2 The advantage of COSs over (semi-)synthetic data is that they have few researcher degrees of freedom; however, non-experimental control groups often do not exist or do not come from similar-enough populations; see Dahabreh et al. (2022) for more details on identification from COSs.

Subsampling RCTs uses an RCT as ground-truth and then subsamples the RCT data to create a confounded observational dataset. For example, Zhang & Bareinboim (2021) subsample from the International Stroke Trial (IST) of roughly 20k patients to estimate the treatment effect of aspirin allocation. This strategy also appears in prior work (Hill, 2011; Kallus et al., 2018) and was recently formalized by Gentzel et al. (2021). While this approach is limited by the availability of RCTs, and sampling decreases the number of units available to the estimation methods, it does not require the comparable non-experimental control group required by COSs, resulting in greater data availability. There are also fewer researcher degrees of freedom compared to synthetic or semi-synthetic approaches. Because of these tradeoffs, we believe this is one of the most promising strategies for empirically evaluating causal estimation methods and we build upon this strategy in the remainder of this work.

Figure 1: Causal DAGs (a) corresponding to an RCT; (b) representing a sampling procedure; (c) corresponding to an observational study where C satisfies the backdoor criterion.

2 For example, LaLonde (1986) adjusts for four covariates—age, years of schooling, high school drop-out status, and race (Table 4, Footnote C in LaLonde)—in his non-experimental group in a study of the effects of job training on earnings.

## 3 Subsampling from RCTs

We preface this section with a brief description of causal graphs, a prerequisite to understanding subsequent results. Then we provide identification and non-identification proofs for RCT subsampling algorithms and evidence from synthetic data.
## 3.1 Background: Causal Graphical Models

A causal model of a directed acyclic graph (causal DAG) G(V) can be viewed as the set of distributions induced by a system of structural equations: for each variable $V_i \in V$ there exists a structural equation $V_i \leftarrow f_i(\mathrm{pa}_i, \epsilon_i)$ (Pearl, 2009). This function maps the values $\mathrm{pa}_i$ of the parents of $V_i$ in G(V), together with an exogenous noise term,3 $\epsilon_i$, to values of $V_i$. The system of equations induces a joint distribution P(V) that is Markov relative to the DAG G(V), i.e., $P(V = v) = \prod_{V_i \in V} P(V_i = v_i \mid \mathrm{Pa}_i = \mathrm{pa}_i)$. Independences in the distribution can be read off from G via the well-known d-separation criterion (Pearl, 2009). Interventions in the model are typically formalized using the do-operator (Pearl, 2009), where Y | do(T = t) denotes the value of an outcome Y under an intervention that sets the treatment T to value t. Here, our causal estimand of interest is the average treatment effect (ATE) defined as

$$\mathrm{ATE} \equiv \mathbb{E}[Y \mid \operatorname{do}(T=t)] - \mathbb{E}[Y \mid \operatorname{do}(T=t^{\prime})], \tag{1}$$

where t and t′ denote distinct values of T. A causal parameter is said to be *identified* if it can be expressed as a function of the observed data P(V). Given a set of variables Z ⊂ V that satisfies the *backdoor criterion* w.r.t. T and Y,4 the ATE is identified via the well-known *backdoor adjustment functional* (Pearl, 1995).

3 Typically these noise terms are assumed to be mutually independent, but this assumption is not strictly necessary (Richardson & Robins, 2013). Our results are non-parametric in the sense that we do not require any distributional assumptions on the noise terms or the specific form (e.g., linearity) of the structural equations $f_i(\cdot)$.
4 The set Z satisfies the backdoor criterion if no variable in Z is a causal descendant of T, and Z blocks all backdoor paths between T and Y, i.e., all paths of the form T ← · · · → Y.
5 Here our target of interest is the ATE, though similar principles apply to other causal parameters like conditional average treatment effects.

## 3.2 RCT Subsampling: Setup and Conditions

We now describe the specific setup and objectives of RCT subsampling.5 We start with a dataset DRCT consisting of n iid draws from an RCT of pre-treatment covariates C = {C1, . . . , Ck}, treatment T, and outcome Y. Since this data is assumed to come from an RCT, the observed data distribution P(C, T, Y)
Thus, the ATE is identified from the RCT distribution P(*C, T, Y* ) via the following two backdoor adjustment functionals, $$\text{ATE}=\sum_{c}P(c)\times\left(\mathbb{E}[Y\mid t,c]-\mathbb{E}[Y\mid t^{\prime},c]\right)$$ $$=\mathbb{E}[Y\mid t]-\mathbb{E}[Y\mid t^{\prime}].$$ $$\left(2\right)$$ $$(3)$$ ′, c](2) Thus, a subsampling algorithm satisfies (II) if there is a functional h(P ∗(*C, T, Y* )) that is equal to equation 2 or equation 3. For our purposes, we add the condition (I) so that estimation in the observational data does not reduce to equation 3. That is, we aim to produce samples according to a distribution P ∗such that some adjustment is in fact necessary to produce unbiased ATE estimates. We note that (I) by itself is not sufficient to guarantee this; RCT subsampling procedures also require that there exists at least one pre-treatment covariate correlated with the outcome, i.e., ∃ Ci ∈ C such that Ci ̸⊥⊥ Y in P(*C, T, Y* ) (Gentzel et al., 2021). However, this condition is easily testable, and we implement these checks in our synthetic experiments and real-world proof of concept (§4.2). We now show a theoretical gap in existing approaches to subsampling RCTs, and propose a new algorithm that is theoretically guaranteed to satisfy conditions (I) and (II). ## 3.3 Non-Identification In Prior Work We claim that prior work that proposes RCT subsampling can result in observational samples from which the causal effect is *not identified* non-parametrically unless additional constraints are placed on the subsampling process. We consider Algorithm 2 in Gentzel et al. (2021) which does not explicitly impose such constraints and can be summarized as follows. Let S be a binary variable indicating selection into the observational data from DRCT. A structural equation S ← 1(T = Bernoulli(f(C))) is used to generate the selection variable, where f is a function defined by the researcher and 1 corresponds to the indicator function. DOBS is created by retaining only samples from DRCT where S = 1. This results in P ∗(*C, T, Y* ) = P(*C, T, Y* | S = 1) which is Markov relative to the causal DAG in Fig. 1(b). From this DAG, it is easy to check via d-separation that condition (I) is satisfied as T ̸⊥⊥ C | S = 1. However, the following proposition shows that condition (II) is not satisfied. Proposition 3.1. Given n iid samples from a distribution P that is Markov relative to Fig. 1(a), Algorithm 2 in Gentzel et al. (2021) draws samples according to a distribution P ∗such that condition (II) is not satisfied. We provide a proof in Appendix A. The intuition behind the proof of Proposition 3.1 is as follows. Identification of the ATE relies on two pieces: the conditional mean of the outcome given treatment and covariates and the marginal distribution Algorithm 1 RCT rejection sampling 1: **Inputs:** DRCT consisting of n i.i.d. draws from P(*C, T, Y* ); P ∗(T|C), a function specified by evaluation designers; M ≥ sup P ∗(T|C) P (T), a constant computed empirically 2: **Output:** DOBS, a subset of DRCT constructed according to a distribution P ∗(*C, T, Y* ) which satisfies conditions (I) and (II) 3: 4: for each unit i ∈ DRCT do 5: Sample Ui uniform on (0, 1) 6: if Ui > P ∗(T =ti|Ci) Pˆ(T =ti)M**then** 7: Discard i 8: end 9: **end for** 10: **Return:** DOBS ← DRCT − {discarded units} of covariates. From Fig. 1(b), we have E[Y |*T, C*] = E[Y |*T, C, S* = 1], but P(C) ̸= P(C|S = 1). 
## 3.4 RCT Rejection Sampling

**Algorithm 1** RCT rejection sampling
1: **Inputs:** DRCT consisting of n i.i.d. draws from P(C, T, Y); P*(T | C), a function specified by evaluation designers; M ≥ sup P*(T | C)/P(T), a constant computed empirically
2: **Output:** DOBS, a subset of DRCT constructed according to a distribution P*(C, T, Y) which satisfies conditions (I) and (II)
3:
4: **for** each unit i ∈ DRCT **do**
5: &nbsp;&nbsp;Sample Ui uniform on (0, 1)
6: &nbsp;&nbsp;**if** Ui > P*(T = ti | Ci) / (P̂(T = ti) M) **then**
7: &nbsp;&nbsp;&nbsp;&nbsp;Discard i
8: &nbsp;&nbsp;**end if**
9: **end for**
10: **Return:** DOBS ← DRCT − {discarded units}

We propose Algorithm 1, which uses a rejection sampling procedure to subsample RCTs. Rejection sampling is useful when the target distribution is difficult to sample from but there exists a proposal distribution which is easier to sample from, and the proposal distribution (times a constant) forms an "upper envelope" for the target distribution (Murphy, 2012, Chapter 23.2). Similar ideas on resampling data based on ratios of propensity scores appear in Thams et al. (2023) and Bhattacharya & Nabi (2022) in the context of testing independence constraints in post-intervention distributions. Though the rejection sampler also selects samples based on a function of T and C, as in Fig. 1(b), we prove that additional constraints placed by the sampling strategy ensure identification holds in the new observed data distribution.

The intuition behind our algorithm is as follows. Sufficient constraints for maintaining identifiability of the ATE in P*(C, T, Y) via the functional in equation 2 are to ensure that P*(C) = P(C) and P*(Y | T, C) = P(Y | T, C).6 When this holds, it follows that equation 2 is equivalent to the adjustment functional h(P*(C, T, Y)) = Σ_c P*(c) × (E*[Y | T = t, c] − E*[Y | T = t′, c]), where E* denotes the expectation taken w.r.t. P*(Y | T, C). To also satisfy (I), we propose resampling with weights that modify P(T) to a new conditional distribution P*(T | C).

The considerations listed in the prior paragraph inform our choice of an acceptance probability of (1/M) × P*(T | C)/P(T) in the rejection sampler, where M is the usual upper bound on the likelihood ratio used in the rejection sampler, which in our case is P*(T | C)/P(T).7 Here, P*(T | C) is a function specified by the evaluation designer that satisfies positivity (∀c, 0 < P*(T | C = c) < 1 almost surely), and is a non-trivial function of C in the sense that P*(T | C) ̸= P*(T) for at least some values of T and C.

Theorem 3.2. *Given n iid samples from a distribution P that is Markov relative to Fig. 1(a), a confounding function P\*(T | C) satisfying positivity, and M ≥ sup P\*(T | C)/P(T), the rejection sampler in Algorithm 1 draws samples from a distribution P\*, such that conditions (I) and (II) are satisfied.*

6 One could also consider maintaining equality of just the conditional mean of Y rather than the full conditional density.
7 In practice, we approximate M from DRCT as $\max_{i\in\{1,\ldots,n\}} P^{*}(T = t_i \mid C_i) \,/\, \min_{i\in\{1,\ldots,n\}} \hat{P}(T = t_i)$.
| Setting | Synthetic DGP | Sampling algorithm | Abs. Bias (std.) | Rel. Abs. Bias (std.) | CI Cov. |
|---|---|---|---|---|---|
| 1 | \|C\| = 1, P(T = 1) = 0.3 | Algorithm 2 from Gentzel et al. (2021) | 0.222 (0.010) | 0.089 (0.004) | 0.00 |
| 1 | \|C\| = 1, P(T = 1) = 0.3 | RCT rejection sampling (this work) | 0.009 (0.007) | 0.004 (0.003) | 0.97 |
| 2 | \|C\| = 1, P(T = 1) = 0.5 | Algorithm 2 from Gentzel et al. (2021) | 0.009 (0.006) | 0.003 (0.003) | 0.98 |
| 2 | \|C\| = 1, P(T = 1) = 0.5 | RCT rejection sampling | 0.007 (0.005) | 0.003 (0.002) | 0.98 |
| 3 | \|C\| = 5, Nonlinear | Algorithm 2 from Gentzel et al. (2021) | 0.252 (0.010) | 0.979 (0.037) | 0.00 |
| 3 | \|C\| = 5, Nonlinear | RCT rejection sampling | 0.012 (0.009) | 0.046 (0.034) | 0.98 |

Table 2: Absolute bias (abs. bias) between the ATE from DRCT and the estimated ATE via backdoor adjustment on DOBS created by each sampling algorithm. We also report abs. bias relative to the RCT ATE (rel. abs. bias) and the mean and standard deviation (std.) across samples from 1000 random seeds. In the final column, we report the confidence interval coverage (CI Cov.)—the proportion of 1000 random seeds for which the 95% confidence interval contains the true (RCT) ATE. The DGPs for Settings 1-3 are given in Appendix C.

Proof. Rejection sampling generates samples from a target distribution P*(V1, . . . , Vk) by accepting samples from a proposal distribution P(V1, . . . , Vk) with probability

$$\frac{1}{M} \times \frac{P^{*}(V_{1},\ldots,V_{k})}{P(V_{1},\ldots,V_{k})},$$

where M is a finite upper bound on the likelihood ratio P*/P over the support of V1, . . . , Vk. We start with samples from an RCT, so our proposal distribution factorizes according to the causal DAG in Fig. 1(a): P(C, T, Y) = P(C) × P(T) × P(Y | T, C). Our target distribution is one where T ̸⊥⊥ C, and factorizes as P*(C, T, Y) = P*(C) × P*(T | C) × P*(Y | T, C), with additional constraints that P*(C) = P(C) and P*(Y | T, C) = P(Y | T, C). This establishes the likelihood ratio,

$$\frac{P^{*}(C,T,Y)}{P(C,T,Y)} = \frac{P(C)\times P^{*}(T\mid C)\times P(Y\mid T,C)}{P(C)\times P(T)\times P(Y\mid T,C)} = \frac{P^{*}(T\mid C)}{P(T)},$$

and any choice of M ≥ sup P*(T | C)/P(T) used in the rejection sampler in Algorithm 1 produces samples from the desired distribution P*, where the additional constraints satisfy the identification condition (II) and specification of P*(T | C) such that it truly depends on C satisfies condition (I).

Since P* satisfies T ̸⊥⊥ C and yields identification via the usual adjustment functional obtained in a conditionally ignorable causal model, Algorithm 1 can be thought of as producing samples exhibiting confounding bias similar to the causal DAG in Fig. 1(c), despite the selection mechanism. A longer argument for this qualitative claim is in Appendix B.

We conclude this section by noting that similar to prior works on RCT subsampling algorithms, the subsampling strategy in Algorithm 1 only requires researchers to specify a single function, P*(T | C). Hence, our procedure satisfies our original desideratum of limited researcher degrees of freedom, while providing stronger theoretical guarantees for downstream empirical evaluation. However, specification of P*(T | C) may still be challenging when C is high-dimensional. In Section 4.4, we discuss this finite data consideration and we use a proxy strategy for our proof of concept in which we have a low-dimensional confounding set C along with a set of high-dimensional covariates X that serve as proxies of this confounding set.

## 3.5 Evidence from Synthetic Data

Using synthetic DGPs for DRCT, we produce DOBS using Algorithm 2 from Gentzel et al. (2021) and separately via our RCT rejection sampler. We then compute ATE estimates using equation 2 for DOBS and compare them to the ground-truth estimates using equation 3 in DRCT. Section C in the appendix gives the full details of the data-generating processes (DGPs) for the three settings. Briefly, the DGP in Setting 1 has a single confounding covariate C, sets P(T = 1) = 0.3, and has an interaction term TC in the structural equation for Y. Setting 2 is the same as Setting 1 except we set P(T = 1) = 0.5. Setting 3 is a non-linear DGP with five covariates, C1, . . . , C5. All methods are provided with the true adjustment set and functional form for the outcome regression, i.e., our experiments here use oracle estimators to validate the identification theory proposed in the previous subsection.

We construct 95% confidence intervals via bootstrapping and the percentile method (Wasserman, 2004) and report confidence interval coverage. After obtaining a sample DOBS from the RCT via either RCT rejection sampling or Algorithm 2 from Gentzel et al., we resample DOBS with replacement and calculate the ATE for that bootstrap sample. We repeat this for 1000 bootstrap samples and obtain a 95% confidence interval by taking the 2.5% and 97.5% points of the bootstrap distribution as the endpoints of the confidence interval. Across our 1000 random seeds of the synthetic DGPs, we measure the proportion of these confidence intervals that contain the true (RCT) ATE and report this metric as confidence interval coverage. See Appendix Section H for additional confidence interval plots.
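As a concrete illustration of the validation just described, here is a minimal sketch that simulates a toy RCT, confounds it with the rejection sampler of Algorithm 1, and compares the naive and oracle backdoor-adjusted estimates against the RCT ground truth, including a percentile-bootstrap confidence interval. The DGP, the confounding function, and the empirical bound M are illustrative assumptions, not the paper's exact Settings 1-3.

```python
import numpy as np

rng = np.random.default_rng(0)

def rct_rejection_sample(C, T, p_star):
    """Algorithm 1: keep unit i with probability P*(t_i | C_i) / (P_hat(t_i) * M)."""
    p_t = np.where(T == 1, T.mean(), 1 - T.mean())       # empirical P_hat(T = t_i)
    ratio = np.where(T == 1, p_star(C), 1 - p_star(C)) / p_t
    M = ratio.max()                                       # empirical bound M >= sup P*/P
    return rng.uniform(size=len(T)) <= ratio / M          # accepted units

def backdoor_ate(C, T, Y):
    """Equation 2 for a single binary covariate C."""
    return sum(
        (C == c).mean() * (Y[(T == 1) & (C == c)].mean() - Y[(T == 0) & (C == c)].mean())
        for c in (0, 1)
    )

# Toy RCT: binary covariate C, randomized T, outcome depends on both (true ATE = 0.65).
n = 200_000
C = rng.binomial(1, 0.5, n)
T = rng.binomial(1, 0.3, n)
Y = 0.5 * T + 1.0 * C + 0.3 * T * C + rng.normal(0, 1, n)

keep = rct_rejection_sample(C, T, p_star=lambda c: np.where(c == 1, 0.85, 0.15))
Co, To, Yo = C[keep], T[keep], Y[keep]

rct_ate = Y[T == 1].mean() - Y[T == 0].mean()             # equation 3 on D_RCT
naive = Yo[To == 1].mean() - Yo[To == 0].mean()           # confounded on D_OBS
adjusted = backdoor_ate(Co, To, Yo)                       # recovers the RCT ATE
print(f"RCT ATE {rct_ate:.3f} | naive {naive:.3f} | adjusted {adjusted:.3f}")

# Percentile-bootstrap 95% CI for the adjusted estimate (the paper uses 1000 resamples).
idx = np.arange(len(To))
boot = [backdoor_ate(Co[b], To[b], Yo[b])
        for b in (rng.choice(idx, size=len(idx)) for _ in range(200))]
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```

Note that the acceptance probability depends on C only through P*(T | C)/P(T), whose average over the randomized T equals 1/M regardless of C; this is what preserves P*(C) = P(C) and keeps the backdoor functional valid.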
Table 2 shows that our proposed RCT rejection sampler results in a reduction of absolute bias compared to Algorithm 2 from Gentzel et al. (2021) by a factor of over 24 for Setting 1 (0.222/0.009) and a factor of 21 in Setting 3 (0.252/0.012). For Setting 3, Gentzel et al.'s procedure results in almost a 100% increase in bias relative to the gold RCT ATE of −0.26. In Setting 2, where P(T = 1) = 0.5, the differences in absolute bias between the two algorithms are less pronounced.8 In Settings 1 and 3, the confidence interval coverage for Gentzel et al.'s procedure is 0, whereas our RCT rejection sampling algorithm results in coverage of 0.97 and 0.98 for Settings 1 and 3 respectively, both slightly above the nominal 0.95 coverage. The results of the simulation are consistent with our theoretical findings that our algorithm permits identifiability under more general settings than prior work.

8 We note Gentzel et al. (2021) primarily focus on the setting for which P(T = 1) = P(T = 0) = 0.5. However, their approach does not seem to generalize well outside of this setting, theoretically and empirically.

## 4 Finite Data Considerations and Proof of Concept

In the previous section, we provided theoretical guarantees for RCT rejection sampling and confirmed the algorithm results in low bias on synthetic data. In this section, we demonstrate how to put this proposed theory into practice and highlight considerations when working with finite real-world data. Our goal is to surface questions that must be asked and answered in creating useful and high-quality causal evaluation. We also describe our specific approach towards each consideration as we create a proof of concept pipeline for empirical evaluation of high-dimensional backdoor adjustment methods. For this proof of concept, we use a large-scale, real-world RCT dataset with text as covariates. Although our approaches are specific to our proof of concept dataset, we believe other evaluation designers will benefit from a real-world example of how to put the theory and considerations into practice.

## 4.1 Considerations Prior to Using a Specific RCT Dataset

A necessary component for RCT subsampling is obtaining a real-world RCT dataset. This ensures a more realistic data-generating process compared to synthetic or semi-synthetic approaches (see Table 1).
As Gentzel et al. (2021) note, there are many RCT repositories from a variety of disciplines from which evaluation designers could gather data. However, Gentzel et al. find many of these existing datasets only have one or two covariates that satisfy C ̸⊥⊥ Y (see Consideration #1 below). As we briefly mentioned in Section 1, for settings with just a few covariates one can often use simple estimation strategies with theoretical guarantees—e.g., parametric models or contingency tables—and empirical evaluation may not be particularly informative in this setting. Along these lines, we recommend that evaluation designers first ask themselves, *Is empirical evaluation of causal estimators appropriate and necessary for this setting?* Not all settings are in need of empirical evaluation.

A potentially untapped resource for RCT rejection sampling data is A/B tests from large online platforms. Other work, e.g., Eckles & Bakshy (2021), has used these types of experiments for constructed observational studies and we use such a dataset in our proof of concept. The large scale of these experiments can be advantageous since RCT subsampling reduces the number of units in the observational dataset by roughly half. Further, they often contain rich metadata and many covariates, which can be used to induce confounding in a way that emulates a high-dimensional setting.

For our proof of concept, we choose a setting for which empirical evaluation is appropriate and needed: high-dimensional backdoor adjustment. Our high-dimensional covariates are the thousands of vocabulary words from text data, an application area that has generated a large amount of interest from applied practitioners, see Keith et al. (2020); Feder et al. (2022). We use and publicly release a large, novel, real-world RCT (approximately 70k observations) that was run on an online scholarly search engine.9 Users arrive on a webpage that hosts metadata about a single academic paper and proceed to interact with this page. The RCT's randomized binary treatment is swapping the ordering of two buttons—a PDF reader and a new "enhanced reader". We set T = 1 as the setting where the "enhanced reader" is displayed first. The outcome of interest is a user clicking (Y = 1) or not clicking (Y = 0) on the enhanced reader button. The former action transports the user to a different webpage that provides a more interactive view of the publication. The RCT suggests that the treatment has a positive causal effect with an ATE of 0.113 computed using a simple difference of conditional means in the treated and untreated populations. See Appendix D for more details about the RCT.

9 The RCT was conducted on the Allen Institute for Artificial Intelligence's Semantic Scholar platform https://www.semanticscholar.org/. Owners of the website conducted this experiment and gave us permission to use and release this data.

## 4.2 Consideration #1: Checking Necessary Precondition C ̸⊥⊥ Y

As we mentioned in Section 3, a necessary precondition for RCT subsampling in general is the existence of a causal edge between C and Y, implying C ̸⊥⊥ Y. The relationship between C and Y is naturally occurring (not modified by evaluation designers) and the amount of confounding induced by sampling is, in part, contingent on this relationship (Gentzel et al., 2021). One can empirically check C ̸⊥⊥ Y via independence tests, e.g., evaluating the odds ratio when both C and Y are binary variables, as in the sketch below. If there do not exist covariates C such that C ̸⊥⊥ Y, one cannot move forward in the evaluation pipeline using RCT subsampling.
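A minimal sketch of this precondition check for binary C and Y via the contingency-table odds ratio (the data below is simulated purely for illustration; in the pipeline this check runs on the RCT subpopulations):

```python
import numpy as np

def odds_ratio(c, y):
    """Odds ratio between two binary arrays; equals 1 exactly under independence."""
    n11 = np.sum((c == 1) & (y == 1))
    n10 = np.sum((c == 1) & (y == 0))
    n01 = np.sum((c == 0) & (y == 1))
    n00 = np.sum((c == 0) & (y == 0))
    return (n11 * n00) / (n10 * n01)

rng = np.random.default_rng(0)
c = rng.binomial(1, 0.8, 10_000)                    # e.g., a binary field-of-study indicator
y = rng.binomial(1, np.where(c == 1, 0.08, 0.05))   # click rate differs by field
print(f"OR(C, Y) = {odds_ratio(c, y):.2f}")         # well above 1 => C not independent of Y
```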
For our proof of concept, we use a subpopulation strategy to ensure the precondition C ̸⊥⊥ Y is satisfied. We choose a single interpretable covariate to induce confounding: the field of study of manuscripts. Since C ⊥⊥ Y if and only if the odds ratio between C and Y is 1, we choose subpopulations of the full RCT that have a high odds ratio between a subset of the categorical field of study variable and the outcome Y. Specifically, we choose C to be a binary covariate representing one of two fields of study; for Subpopulation A, the field is either Physics or Medicine. In Appendix G, we implement the evaluation pipeline for an additional subpopulation with C chosen as the articles with Engineering or Business as the field of study. Substantively, one can interpret this high odds ratio as natural differences in click rates from users viewing articles from different fields of study.

We combine this subpopulation strategy with a proxy strategy in the next section to ensure that the estimation procedures only have access to high-dimensional covariates instead of our low-dimensional C. This has the benefit of simplifying the process of specifying P*(T | C) while ensuring that downstream modeling must still contend with high-dimensional adjustment.

| RCT Dataset | C categories | n | RCT ATE | OR(C, Y) |
|---|---|---|---|---|
| Full | - | 69,675 | 0.113 | - |
| Subpopulation A | Physics, Medicine | 4,379 | 0.096 | 1.8 |

Figure 2: Proof of concept approach. **Left figure.** Causal DAG for the proxy strategy. The blue edges are confirmed to empirically exist in the finite dataset. The red edge is selected by the evaluation designers via P*(T | C). **Right table.** RCT dataset descriptive statistics including the number of units in the population/subpopulation (n) and the odds ratio, OR(C, Y).

## 4.3 Consideration #2: Specification of P*(T | C)

Evaluation designers using RCT rejection sampling have one degree of freedom: specification of P*(T | C). We describe one specific approach in our proof of concept to choose P*(T | C), but we anticipate evaluation designers using RCT rejection sampling to create large empirical benchmarks may want to include many different parameterizations of P*(T | C) to evaluate the empirical performance of methods under numerous settings. Consideration #3 describes approaches to diagnosing the choice of P*(T | C) for a specific finite dataset.

**Proxy strategy.** In Section 3, we briefly mentioned that specifying a suitable confounding function P*(T | C) may be difficult when C is high-dimensional. A key property of our RCT is that it has high-dimensional text data, X, that is a proxy (with almost perfect predictive accuracy) for low-dimensional structured metadata—categories of scientific articles, e.g., Physics or Medicine. We use this structured metadata as the covariates C in our RCT rejection sampler, but provide the causal estimation methods only X, Y, and T. Note that as evaluation designers, we still have access to C to run diagnostics. This proxy strategy helps simplify the specification of P*(T | C) we use in the rejection sampler and avoids direct specification of a function involving high-dimensional covariates.
Others have used similar proxy strategies for text in semi-synthetic evaluations, e.g., Roberts et al. (2020); Veitch et al. (2020). Such a technique may also be applied in other RCTs, e.g., healthcare studies where one or two important biomarkers serve as low-dimensional confounding variables, and the larger electronic health record data serves as the proxy X.

**Using X.** For each document i, the high-dimensional covariates Xi is a bag-of-words representation of the document's concatenated title and abstract given a 2000-unigram vocabulary. A new vocabulary is created for each RCT subpopulation; see Appendix E for details. We check that there is high predictive accuracy of P(C | X) to ensure the plausibility of causal estimation models only having access to X.10 To measure this predictive accuracy, we model P(C | X) with a logistic regression11 classifier. Averaged across held-out test folds, the F1 score is 0.98 and the average precision is 0.99 for Subpopulation A (Physics, Medicine).

**Specifying P*(T | C).** Since C is binary in our proof of concept pipeline, we choose a simple, interpretable piece-wise function,

$$P^{*}(T_{i}=1 \mid C_{i}) = \begin{cases} \zeta_{0} & \text{if } C_{i}=0 \\ \zeta_{1} & \text{if } C_{i}=1 \end{cases} \tag{4}$$

for each document i, where 0 < ζ0 < 1 and 0 < ζ1 < 1 are parameters chosen by the evaluation designers. We choose ζ0, ζ1 for the remainder of the proof of concept pipeline via the diagnostics in Consideration #3.

10 We leave to future work correcting for measurement error with noisy proxies X; see Wood-Doughty et al. (2018).
11 Using scikit-learn (Pedregosa et al., 2011) and an elasticnet penalty, L1 ratio 0.1, class-weight balanced, and SAGA solver.
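For binary C, equation 4 amounts to a two-value lookup. A minimal sketch, reusing the rct_rejection_sample helper from the Section 3.5 sketch; the default ζ values are the ones ultimately selected via the diagnostics in Consideration #3:

```python
import numpy as np

def p_star(C, zeta0=0.85, zeta1=0.15):
    """Equation 4: P*(T_i = 1 | C_i) = zeta0 if C_i = 0, zeta1 if C_i = 1."""
    return np.where(C == 0, zeta0, zeta1)

# Feeding it to the sampler (C, T are the subpopulation's covariate and treatment):
# keep = rct_rejection_sample(C, T, p_star=p_star)
```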
Of the settings in Figure 3, ζ0 = 0.85 and ζ1 = 0.15 had the best proportion of sampled datasets lying below the red line, so we choose these parameters for our proof of concept pipeline. We leave to future work providing more guidance on choosing settings of P ∗(T|C) for a comprehensive benchmark. ## 4.5 Consideration #4: Modeling The primary goal of this work is to create clear steps to follow during the evaluation design phase. Although this stage precedes a thorough modeling effort, we recommend that one runs baseline models to check for potential issues. As a proof of concept, we apply baseline causal estimation models to the resulting DOBS datasets after RCT rejection sampling (with many random seeds); as we mention above. We implement13 commonly-used causal estimation methods via two steps: (1) fitting base learners and (2) using causal estimators that combine 12The exact backdoor equation is tractable because *T, C* and Y are all binary. 13We attempted to use the EconML package Microsoft Research (2019), but at of the time of our experiments it did not support using sparse matrices, which is required for our high-dimensional datasets. | gˆ(x) | Qˆ T0 (x) | Qˆ T1 (x) | QˆX(x) | | | | | | |-----------------------------------|-----------------------|---------------------|-------------|-------------|-------------|-------------|-------------|-------------| | Prediction Ave. Prec. (↑ better) | train | inference | train | inference | train | inference | train | inference | | linear | 0.89 (0.07) | 0.59 (0.02) | 0.63 (0.32) | 0.03 (0.01) | 0.85 (0.17) | 0.13 (0.03) | 0.71 (0.2) | 0.06 (0.01) | | catboost (nonlinear) | 0.97 (0.0) | 0.60 (0.02) | 1.0 (0.0) | 0.03 (0.01) | 0.99 (0.01) | 0.13 (0.02) | 0.98 (0.01) | 0.05 (0.01) | | Causal Rel. Abs. Error (↓ better) | Unadjusted (baseline) | Backdoor C (oracle) | τˆQ | τˆIPTW | τˆAIPTW | τˆDML | | | | linear | 0.21 (0.08) | 0.12 (0.09) | 1.46 (1.04) | 0.47 (0.16) | 1.6 (0.66) | 1.91 (0.9) | | | | catboost (nonlinear) | 0.21 (0.08) | 0.12 (0.09) | 0.24 (0.1) | 0.14 (0.11) | 0.11 (0.1) | 0.13 (0.1) | | | Table 3: Modeling results for subpopulation A. **Top:** Predictive models' average precision (ave. prec.) for training (yellow) and inference (green) data splits. **Bottom:** Causal estimation models' relative absolute error (rel. abs. error) between the models' estimated ATE and the RCT ATE. Here, darker shades of red indicate worse causal estimates. Baselines, unadjusted conditional mean on the samples (unadjusted) and the backdoor adjustment with the oracle C (backdoor C), are uncolored. We use two baselearner settings: linear and catboost (nonlinear). We report both the average and standard deviation (in parentheses) over 100 random seeds during sampling. All settings use P ∗(T|C) in equation 4 parameterized by ζ0 = 0.85, ζ1 = 0.15. | QT0 (x) := E[Y |T = 0, X = x] | (5) | |---------------------------------|-------| | QT1 (x) := E[Y |T = 1, X = x] | (6) | | g(x) := P(T = 1|X = x) | (7) | | QX(x) := E[Y |X = x] | (8) | the base learners via plug-in principles or second-stage regression. We took care to ensure we use the *same* pre-trained base learners—same functional form and learned weights—as inputs into any appropriate causal estimator. Base learners. We implement base learners for:14 In our application both T and Y are binary variables, so we use an ensemble of gradient boosted decision trees (catboost) 15 and logistic regression16 for our base learners. 
We fit our models using cross-fitting (Hansen, 2000; Newey & Robins, 2018) and cross-validation; see Appendix F for more details. Causal estimators. After training base learners with cross-fitting, we implement the following plug-in causal estimators: backdoor adjustment (outcome regression) (Q), inverse propensity of treatment weighting (IPTW), and augmented inverse propensity of treatment weighting (AIPTW) (Robins et al., 1994). We also use DoubleML (Chernozhukov et al., 2018) which applies an ordinary least squares on residuals from the base learners. See Appendix F for exact estimation equations. Modeling results. Table 3 shows results for Subpopulation A. Since both T and Y are binary, we report average precision (AP) for the base learners on both the training and inference folds; this metric is only calculated for observed (not counterfactual) observations. We also report the relative absolute error (RAE) between estimators on DOBS and the ATE for DRCT. Comparing predictive base learners, the propensity score model, g(x), has much higher AP on inference folds than models that involve the outcome. As we previously mentioned, RCT subsampling allows us to set the relationship between X (via proxy for C) and T but not X and Y so the low AP for outcome models could reflect the difficulty in estimating this "natural" relationship between X and Y . See Section 4.6.1 for additional discussion on low average precision for the outcome models (Q). For causal estimators, we see that the doubly robust estimator AIPTW using catboost has the lowest estimation error—on par with estimates obtained using the oracle backdoor adjustment set C. It appears the doubly robust estimators using linear models do not recover from the poor predictive performance of the outcome models and IPTW is better in this setting. Though seemingly counterintuitive that the linear and catboost models have similar predictive performance but large differences in the causal estimation error, 14Note for a binary outcome Y , we can rewrite the above equations with probabilities, as E[Y | ·] = P(Y = 1 | ·). 15Using CatBoost (Dorogush et al., 2018) with default parameters and without cross-validation. 16Using scikit-learn (Pedregosa et al., 2011) and an elasticnet penalty, L1 ratio 0.1, balanced class weights, and SAGA solver. We tune the regularization parameter C via cross-validation over the set C ∈ 1e−4, 1e−3, 1e−2, 1e−1, 1e 0, 1e 1. this discrepancy between predictive performance and causal estimation error is consistent with theoretical results on fitting nuisance models in causal inference (Tsiatis, 2007; Shortreed & Ertefaie, 2017) and empirical results from semi-synthetic evaluation (Shi et al., 2019; Wood-Doughty et al., 2021). Although we used best practices from machine learning to fit our base learners, these results suggest future work is needed to adapt machine learning practices to the goals of causal estimation. ## 4.6 Consideration #5: Additional Finite Data Challenges The broader purpose of this line of empirical evaluation is to understand the real-world settings for which certain estimation approaches are successful or unsuccessful. We believe bridging the gap between theory that holds asymptotically and finite data is important for drawing valid causal inference, but evaluation designers might encounter unforeseen challenges particular to finite data that need to be examined carefully. 
In our proof of concept pipeline, we hypothesize the outcome models have very low average precision (Table 3) because of finite data issues with class imbalance. In particular, for Subpopulation A, 82% of our data is C = 1 and E[Y] = 0.07, so there are few examples to learn from in the smallest category (C = 0, Y = 1): only 34 documents. This shows that even with our RCT, which has a relatively large size (roughly 4k units) compared to other real-world RCTs, there are challenges with having sufficient support.

## 5 Discussion And Future Work

Unlike predictive evaluation, empirical evaluation for causal estimation is challenging and still at a nascent stage. In this work, we argue that one of the most promising paths forward is to use an RCT subsampling strategy, and to this line of work, we contribute an RCT rejection sampler with theoretical guarantees. We showed the utility of this algorithm in a proof of concept pipeline with a novel, real-world RCT to empirically evaluate high-dimensional backdoor adjustment methods.

Of course, there are critics of empirical evaluation. Hernán (2019) pejoratively compared a competition for causal estimation to evaluating "spherical cows in a vacuum" and claimed this discounted necessary subject-matter expertise. Even in the machine learning community, researchers warn against "mindless bake-offs" of methods (Langley, 2011), and in some cases the creation of benchmarks has led to the community overfitting to benchmarks, e.g., Recht et al. (2019). However, in the absence of theory or when theoretical assumptions do not match reality, we see empirical evaluation as a necessary, but not exclusive, part of the broader field of causal inference.

A fruitful future direction is for evaluation designers to use our RCT rejection sampler to create comprehensive benchmarks for various phenomena of interest: not only high-dimensional confounding but also heterogeneous treatment effects, unmeasured confounding, missing data, etc. This would involve gathering more RCTs and establishing interesting ways to set P∗(T|C). Our proof of concept evaluation pipeline demonstrated the utility of RCT subsampling, but there were many avenues we chose not to pursue, such as measurement error, causal null hypothesis tests, or moving to more sophisticated natural language processing approaches beyond bag-of-words, e.g., the CausalBERT model (Veitch et al., 2020). In another direction, applied practitioners need guidance on which causal estimation method to use given their specific observational data. Although other work has attempted to link observational data to experimental data (real or synthetic) in which the ground-truth is known (Neal et al., 2020; Kallus et al., 2018), we believe RCT subsampling could help with meta analyses of which combinations of techniques work best under which settings. Overall, we see this work as contributing to a much larger research agenda on empirical evaluation for causal estimation.

## Broader Impact Statement

We conducted this research with ethical due diligence. Our real-world RCT dataset was implemented by owners of the online platform and in full compliance with the platform's user agreement. The platform owners gave us explicit permission to use and access this dataset. Our dataset contains paper titles and abstracts, which are already publicly available from many sources, and we have removed any potentially personally identifiable information from the dataset, e.g., author names, user ids, user IP addresses, or session ids.
By releasing this data, we do not anticipate any harm to authors or users. Like any technological innovation, our proposed RCT rejection sampling algorithm and evaluation pipeline have the potential for dual use—to both benefit or harm society depending on the actions of the humans using the technology. We anticipate there could be substantial societal benefit from more accurate estimation of causal effects of treatments in the medical or public policy spheres. However, other applications of causal inference could potentially harm society by controlling or manipulating individuals. Despite these tradeoffs in downstream applications, we feel strongly this paper's contributions will result in net overall benefit to the research community and society at large. ## Author Contributions KK conceived the original idea of the project and managed the project. RB contributed the ideas behind Algorithm 1 as well as the proofs in Section 3 and the Appendix. RB and KK implemented the synthetic experiments in Section 3. KK gathered and cleaned the data for the proof of concept pipeline in Section 4. KK and SF implemented the proof of concept empirical pipeline in Section 4. KK and RB wrote the first draft of the manuscript. KK, SF, DJ, JB, and RB guided the research ideas and experiments and edited the manuscript. ## Acknowledgments The authors gratefully thank David Jensen, Amanda Gentzel, Purva Pruthi, Doug Downey, Brandon Stewart, Zach Wood-Doughty and Jacob Eisenstein for comments on earlier drafts of this manuscript. The authors also thank anonymous reviewers from ICML and TMLR for helpful comments. Special thanks to the Semantic Scholar team at the Allen Institute for Artificial Intelligence for help gathering the real-world RCT dataset. ## References Kevin Arceneaux, Alan S Gerber, and Donald P Green. Comparing experimental and matching methods using a large-scale voter mobilization experiment. *Political Analysis*, 14(1):37–62, 2006. Susan Athey, Guido W Imbens, and Stefan Wager. Approximate residual balancing: debiased inference of average treatment effects in high dimensions. *Journal of the Royal Statistical Society: Series B (Statistical* Methodology), 80(4):597–623, 2018. Elias Bareinboim and Jin Tian. Recovering causal effects from selection bias. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015. Alexandre Belloni, Victor Chernozhukov, and Christian Hansen. Inference on treatment effects after selection among high-dimensional controls. *The Review of Economic Studies*, 81(2):608–650, 2014. Rohit Bhattacharya and Razieh Nabi. On testability of the front-door model via Verma constraints. In The 38th Conference on Uncertainty in Artificial Intelligence, 2022. Rohit Bhattacharya, Razieh Nabi, and Ilya Shpitser. Semiparametric inference for causal effects in graphical models with hidden variables. *Journal of Machine Learning Research*, 23:1–76, 2022. Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James M Robins. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68, 2018. Issa J Dahabreh, Jon A Steingrimsson, James M Robins, and Miguel A Hernán. Randomized trials and their observational emulations: A framework for benchmarking and joint analysis. *arXiv preprint* arXiv:2203.14857, 2022. Alexander D'Amour and Alexander Franks. Deconfounding scores: Feature representations for causal effect estimation with weak overlap. 
*arXiv preprint arXiv:2104.05762*, 2021. Vincent Dorie, Jennifer Hill, Uri Shalit, Marc Scott, and Dan Cervone. Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. *Statistical Science*, 34(1): 43–68, 2019. Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. CatBoost: gradient boosting with categorical features support. *arXiv preprint arXiv:1810.11363*, 2018. Dean Eckles and Eytan Bakshy. Bias and high-dimensional adjustment in observational studies of peer effects. *Journal of the American Statistical Association*, 116(534):507–517, 2021. Max H Farrell, Tengyuan Liang, and Sanjog Misra. Deep neural networks for estimation and inference. Econometrica, 89(1):181–213, 2021. Amir Feder, Katherine A Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E Roberts, et al. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. Transactions of the Association for Computational Linguistics, 10:1138–1158, 2022. Amanda M Gentzel, Dan Garant, and David Jensen. The case for evaluating causal models using interventional measures and empirical data. *Advances in Neural Information Processing Systems*, 32, 2019. Amanda M Gentzel, Purva Pruthi, and David Jensen. How and why to use experimental data to evaluate methods for observational causal inference. In *International Conference on Machine Learning*, pp. 3660– 3671. PMLR, 2021. Brett R Gordon, Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky. A comparison of approaches to advertising measurement: Evidence from big field experiments at Facebook. *Marketing Science*, 38(2): 193–225, 2019. Brett R Gordon, Robert Moakler, and Florian Zettelmeyer. Close enough? A large-scale exploration of non-experimental approaches to advertising measurement. *arXiv preprint arXiv:2201.07055*, 2022. Bruce E Hansen. Sample splitting and threshold estimation. *Econometrica*, 68(3):575–603, 2000. Miguel A Hernán. Comment: Spherical cows in a vacuum: data analysis competitions for causal inference. Statistical Science, 34(1):69–71, 2019. Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240, 2011. Jennifer L Hill, Jerome P Reiter, and Elaine L Zanutto. A comparison of experimental and observational data analyses. *Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives: An* Essential Journey with Donald Rubin's Statistical Family, pp. 49–60, 2004. Paul W Holland. Statistics and causal inference. *Journal of the American Statistical Association*, 81(396): 945–960, 1986. Andrew P Jaciw. Assessing the accuracy of generalized inferences from comparison group studies using a within-study comparison approach: The methodology. *Evaluation Review*, 40(3):199–240, 2016. Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International Conference on Machine Learning, pp. 3020–3029. ICML, 2016. Nathan Kallus, Aahlad Manas Puli, and Uri Shalit. Removing hidden confounding by experimental grounding. *Advances in Neural Information Processing Systems*, 31, 2018. Katherine Keith, David Jensen, and Brendan O'Connor. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020. 
Sören R Künzel, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. *Proceedings of the National Academy of Sciences*, 116(10): 4156–4165, 2019. Manabu Kuroki and Judea Pearl. Measurement bias and effect restoration in causal inference. *Biometrika*, 101(2):423–437, 2014. Robert J LaLonde. Evaluating the econometric evaluations of training programs with experimental data. The American Economic Review, pp. 604–620, 1986. Pat Langley. The changing science of machine learning. *Machine Learning*, 82(3):275–279, 2011. Sheng Li, Nikos Vlassis, Jaya Kawale, and Yun Fu. Matching via dimensionality reduction for estimation of treatment effects in digital marketing campaigns. In *Proceedings of the Twenty-Fifth International Joint* Conference on Artificial Intelligence, pp. 3768–3774, 2016. Marloes H Maathuis, Markus Kalisch, and Peter Bühlmann. Estimating high-dimensional intervention effects from observational data. *The Annals of Statistics*, 37(6A):3133–3164, 2009. Microsoft Research. EconML: A Python package for ML-based heterogeneous treatment effects estimation, 2019. URL https://github.com/microsoft/EconML. Kevin P Murphy. *Machine Learning: A Probabilistic Perspective*. MIT press, 2012. Brady Neal, Chin-Wei Huang, and Sunand Raghupathi. RealCause: Realistic causal inference benchmarking. arXiv preprint arXiv:2011.15007, 2020. Whitney K Newey and James M Robins. Cross-fitting and fast remainder rates for semiparametric estimation. arXiv preprint arXiv:1801.09138, 2018. Harsh Parikh, Carlos Varjao, Louise Xu, and Eric J Tchetgen Tchetgen. Validating causal inference methods. In *International Conference on Machine Learning*, pp. 17346–17358. PMLR, 2022. Judea Pearl. Causal diagrams for empirical research. *Biometrika*, 82(4):669–688, 1995. Judea Pearl. *Causality*. Cambridge University Press, 2009. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In *International Conference on Machine Learning*, pp. 5389–5400. PMLR, 2019. Alexander Reisach, Christof Seiler, and Sebastian Weichwald. Beware of the simulated DAG! causal discovery benchmarks may be easy to game. *Advances in Neural Information Processing Systems*, 34:27772–27784, 2021. Thomas S Richardson and James M Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. *Center for the Statistics and the Social Sciences,* University of Washington Working Paper 128, pp. 1–146, 2013. Margaret E Roberts, Brandon M Stewart, and Richard A Nielsen. Adjusting for confounding with text matching. *American Journal of Political Science*, 64(4):887–903, 2020. James M Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. *Mathematical Modelling*, 7(9-12): 1393–1512, 1986. James M Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. *Journal of the American Statistical Association*, 89:846–866, 1994. Andrea Rotnitzky and Ezequiel Smucler. 
Efficient adjustment sets for population average causal treatment effect estimation in graphical models. *Journal of Machine Learning Research*, 21:188–1, 2020. Aurora C Schmidt, Christopher J Cameron, Corey Lowman, Joshua Brulé, Amruta J Deshpande, Seyyed A Fatemi, Vladimir Barash, Ariel M Greenberg, Cash J Costello, Eli S Sherman, et al. Searching for explanations: Testing social scientific methods in synthetic ground-truthed worlds. *Computational and* Mathematical Organization Theory, pp. 1–32, 2022. William R Shadish, Margaret H Clark, and Peter M Steiner. Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. *Journal of the* American Statistical Association, 103(484):1334–1344, 2008. Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. Advances in Neural Information Processing Systems, 32, 2019. Yishai Shimoni, Chen Yanover, Ehud Karavani, and Yaara Goldschmnidt. Benchmarking framework for performance-evaluation of causal inference analysis. *arXiv preprint arXiv:1802.05046*, 2018. Susan M Shortreed and Ashkan Ertefaie. Outcome-adaptive lasso: variable selection for causal inference. Biometrics, 73(4):1111–1122, 2017. Peter L Spirtes, Clark N Glymour, and Richard Scheines. *Causation, Prediction, and Search*. MIT press, 2000. Daniel J Stekhoven, Izabel Moraes, Gardar Sveinbjörnsson, Lars Hennig, Marloes H Maathuis, and Peter Bühlmann. Causal stability ranking. *Bioinformatics*, 28(21):2819–2823, 2012. Nikolaj Thams, Sorawit Saengkyongam, Niklas Pfister, and Jonas Peters. Statistical testing under distributional shifts. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 85(3):597–663, 2023. Anastasios Tsiatis. *Semiparametric Theory and Missing Data*. Springer Science & Business Media, 2007. Victor Veitch, Dhanya Sridhar, and David Blei. Adapting text embeddings for causal inference. In Conference on Uncertainty in Artificial Intelligence, pp. 919–928. PMLR, 2020. Larry Wasserman. *All of statistics: a concise course in statistical inference*, volume 26. Springer, 2004. Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pp. 1109–1120, 2022. Zach Wood-Doughty, Ilya Shpitser, and Mark Dredze. Challenges of using text classifiers for causal inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2018, pp. 4586. NIH Public Access, 2018. Zach Wood-Doughty, Ilya Shpitser, and Mark Dredze. Generating synthetic text data to evaluate causal inference methods. *arXiv preprint arXiv:2102.05638*, 2021. Jiaming Zeng, Michael F Gensheimer, Daniel L Rubin, Susan Athey, and Ross D Shachter. Uncovering interpretable potential confounders in electronic medical records. *Nature Communications*, 13(1):1–14, 2022. Junzhe Zhang and Elias Bareinboim. Bounding causal effects on continuous outcome. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 12207–12215, 2021. ## A Proof Of Proposition 3.1 Proof. 
Since the selection variable S is created using a structural equation dependent on T and C, and samples are retained only when S = 1, the distribution of the selected samples is P∗(C, T, Y) = P(C, T, Y | S = 1), which is Markov relative to Fig. 1(b). In the absence of additional constraints on P∗ (e.g., linearity assumptions) or on the relation between P and P∗, it is known that the ATE is non-parametrically identified in the presence of selection bias if and only if no variable that is a causal ancestor of the outcome, in a graph where one deletes the treatment variable, is also a causal ancestor of the selection variable (Bareinboim & Tian, 2015, Theorem 2). In this case, if we delete T from Fig. 1(b), C remains a causal ancestor of Y and S. Hence, the effect is not identified via any non-parametric functional of the observed data and condition (II) cannot be satisfied.

## B Viewing Algorithm 1 As Drawing From A Conditionally Ignorable Causal Model

A causal model of a DAG G(V) can be interpreted as (i) a set of statistical distributions P(V) that factorize according to G:

$$P(V)=\prod_{V_i \in V} P(V_i \mid \mathrm{Pa}_i);$$

and (ii) a set of post-intervention distributions given by the g-formula, aka truncated factorization (Robins, 1986; Spirtes et al., 2000; Pearl, 2009): for every A ⊂ V, we have

$$P(V \setminus A \mid \mathrm{do}(A=a))=\prod_{V_i \in V \setminus A} P(V_i \mid \mathrm{Pa}_i)\,\Big|_{A=a}.$$

If it were possible to recollect data through a conditionally randomized experiment where treatment is assigned with probability P∗(T | C) instead of P(T), the observed data distribution over C, T, Y would factorize according to the standard conditionally ignorable model shown in Fig. 1(c). That is, the distribution factorizes as P∗(C, T, Y) = P(C) × P∗(T | C) × P(Y | T, C) and implies no independence constraints on the observed data. The distribution of samples P∗(C, T, Y) output by the rejection sampler in Algorithm 1 also implies no independence constraints on C, T, Y and thus has the exact same factorization. This establishes statistical equivalence of the conditionally ignorable model and the one obtained via our rejection sampler.

However, statistical equivalence alone is insufficient, as the distribution is statistically equivalent to any complete DAG on the variables C, T, Y that implies no independence constraints on the observed data. However, it is easy to see that all post-intervention distributions obtained via truncated factorization in the conditionally ignorable model also have the same identifying functionals in the model obtained through the rejection sampler, due to the extra constraints imposed in Algorithm 1. For example, P∗(T, Y | do(C = c)) = P∗(T | C = c) × P(Y | T, C) and P∗(C, Y | do(T = t)) = P(C) × P(Y | T = t, C) in both the conditionally ignorable model and the one obtained via rejection sampling.

Since the distribution obtained from the rejection sampler is indistinguishable from a conditionally ignorable model both in terms of the statistical model and all post-intervention distributions, applying Algorithm 1 can be viewed as essentially drawing samples from a standard conditionally ignorable model. That is, despite the selection mechanism, the resulting distribution is indistinguishable from Figure 1(c), the regular causal DAG model we use to represent observed confounding.
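To make the sampling mechanism concrete, here is a minimal sketch of the rejection step, assuming, as described in Section 3, that each RCT unit is retained with probability proportional to the density ratio P∗(T | C)/P(T). The envelope constant and all names are illustrative assumptions of this sketch, not the paper's exact Algorithm 1.

```python
import numpy as np

def rct_rejection_sample(C, T, Y, p_star_t_given_c, seed=0):
    """Subsample an RCT (C, T, Y) into a confounded observational dataset
    with researcher-specified P*(T | C), via rejection sampling on the
    density ratio P*(T | C) / P(T)."""
    rng = np.random.default_rng(seed)
    p_t1 = T.mean()                                # marginal P(T = 1) in the RCT
    p_t = np.where(T == 1, p_t1, 1.0 - p_t1)       # P(T = t_i) for each unit
    w = p_star_t_given_c(T, C) / p_t               # density ratio per unit
    keep = rng.uniform(size=len(T)) < w / w.max()  # accept with prob w / M
    return C[keep], T[keep], Y[keep]
```

For Setting 1 in Appendix C, p_star_t_given_c(t, c) would return σ(−1 + 2.5c) when t = 1 and 1 − σ(−1 + 2.5c) when t = 0.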
## C Synthetic Dgps To Evaluate Sampling Algorithms

We use the following data-generating process (DGP) to create synthetic data with 100k units for which the true ATE is equal to 2.5. We call this **Setting 1**:

$$C \sim \mathrm{Binomial}(0.5)$$
$$T \sim \mathrm{Binomial}(0.3)$$
$$Y \sim 0.5C + 1.5T + 2TC + \mathrm{Normal}(0, 1)$$

We set the researcher-specified confounding function P∗(T = 1|C) = σ(−1 + 2.5C), where σ is the logistic (expit) function, for RCT rejection sampling, and the same function as f in Gentzel et al. (2021)'s Algorithm 2. Setting 2 has the same DGP as Setting 1 but we change T ∼ Binomial(0.5). For Settings 1 and 2, we provide the backdoor adjustment estimator with the true adjustment set, such that it adjusts for both C and the interaction term, TC.

Setting 3 involves more covariates that are combined in non-linear ways:

$$C_1 \sim \mathrm{Binomial}(0.5)$$
$$C_2 \sim C_1 - \mathrm{Uniform}(-0.5, 1)$$
$$C_3 \sim \mathrm{Normal}(0, 1)$$
$$C_4 \sim \mathrm{Normal}(0, 1)$$
$$C_5 \sim C_3 + C_4 + \mathrm{Normal}(0, 1)$$
$$T \sim \mathrm{Binomial}(0.3)$$
$$Y \sim 0.5C_4 + 2TC_1C_2 - 1.5T + C_2C_3 + C_5 + \mathrm{Normal}(0, 1)$$

In this setting, we set P∗(T = 1|C) = σ(0.5C1 − 0.7C2 + 1.2C3 + 1.5C4 − 1.2C5 + 0.5C1C2). We provide the parametric backdoor estimator with the true adjustment set: C4, TC1C2, C2C3, C5. See Table 4 for the ATEs for each of the three settings.

| DGP Setting | RCT ATE |
|---|---|
| 1 - Linear, P(T = 1) = 0.3 | 2.48 |
| 2 - Linear, P(T = 1) = 0.5 | 2.49 |
| 3 - Nonlinear, 5 C's | -0.26 |

Table 4: RCT ATEs for the synthetic DGPs.

## D Rct Dataset Details

We expand on the details of the RCT described in Section 4.1.1. The RCT was conducted on the Allen Institute for Artificial Intelligence's Semantic Scholar17 platform. The experiment ran for 25 days from 2022-06-04 to 2022-06-28 and resulted in 53,281 unique users arriving on 50,833 unique paper pages. This is after filtering to users who are "active", meaning prior to the experiment they clicked somewhere on the website at least once. Treatment is randomized for the combination of a unique user's browser plus device. We then post-process to recognize logged-in users across devices/browsers and remove them from the results if they switched treatments. The outcome of interest is whether a user clicks on the "enhanced reader" button at least once during a session (Y = 1). Intuitively, a positive causal effect is expected since we expect advertising a feature of the website to increase the click rate. The final ATE was 0.113, as computed by a simple difference in conditional means E[Y | T = 1] − E[Y | T = 0].

## E Creating Vocabulary For X

A new vocabulary is created for each RCT subpopulation. To create each vocabulary, we use binary indicators, remove stopwords, remove numbers, and strip accents. Words must occur in at least 5 documents. We ignore terms that have a document frequency strictly lower than 10%.

17https://www.semanticscholar.org/

## F Modeling

Base learners. Base learners QT0 and QT1 in Equations 5 and 6 respectively are fit as described by Künzel et al. (2019) as "T-learners" (not "S-learners"), i.e., we take all samples for which we have observed T = 0 and then regress Y on X to get a trained model for QT0, and likewise for units with observed T = 1 and QT1. In preliminary experiments, we used the "S-learner" but found high-dimensional X dominated T and there were no differences learned between observed and counterfactual T settings.

Cross-fitting with cross-validation. We fit our models using *cross-fitting* (Newey & Robins, 2018), which is also called sample-splitting (Hansen, 2000). Here, we divide the data into K folds.
For each inference fold j, the other K − 1 folds (shorthand −j) are used as the training set to fit the base learners—e.g., $\hat{Q}^{-j}_{T_0}$ or $\hat{g}^{-j}$—where the superscript indicates the data the model is fit on. The single hyperparameter for logistic regression is selected via cross-validation, where the training set is again split into folds. No cross-validation is performed for CatBoost. Then, for each unit i in the inference set, we use the trained models to infer $\hat{Q}_{T_0}(x_i) = \hat{Q}^{-j}_{T_0}(x_i)$. These are then inserted into the plug-in estimators below to compute the average treatment effect τˆ for each estimator.

Causal estimators. After training base learners with cross-fitting, we implement the following plug-in causal estimators: backdoor adjustment (outcome regression) (Q), inverse propensity of treatment weighting (IPTW), and augmented inverse propensity of treatment weighting (AIPTW) (Robins et al., 1994):

$$\hat{\tau}_{Q}:=\frac{1}{n}\sum_{i}\hat{Q}_{T_{1}}(x_{i})-\hat{Q}_{T_{0}}(x_{i})\tag{9}$$

$$\hat{\tau}_{\mathrm{IPTW}}:=\frac{1}{n}\sum_{i}\frac{y_{i}t_{i}}{\hat{g}(x_{i})}-\frac{y_{i}(1-t_{i})}{1-\hat{g}(x_{i})}\tag{10}$$

$$\hat{\tau}_{\mathrm{AIPTW}}:=\frac{1}{n}\sum_{i}\hat{Q}_{T_{1}}(x_{i})-\hat{Q}_{T_{0}}(x_{i})+\frac{t_{i}\big(y_{i}-\hat{Q}_{T_{1}}(x_{i})\big)}{\hat{g}(x_{i})}-\frac{(1-t_{i})\big(y_{i}-\hat{Q}_{T_{0}}(x_{i})\big)}{1-\hat{g}(x_{i})}\tag{11}$$

We also use DoubleML (Chernozhukov et al., 2018), which applies ordinary least squares on residuals from the base learners:

$$\hat{\tau}_{\mathrm{DML}}:=\hat{E}\big[(y-\hat{Q}_{X}(x))\,\big|\,(t-\hat{g}(x))\big]\tag{12}$$
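As a compact illustration of Equations 9–12 (a sketch, not the authors' implementation; cross-fitting is omitted and the inputs are assumed to be held-out predicted probabilities):

```python
import numpy as np

def plug_in_estimators(y, t, q_t0, q_t1, g, q_x):
    """ATE estimates from Equations 9-12, given per-unit predictions:
    q_t0 = Qhat_T0(x), q_t1 = Qhat_T1(x), g = ghat(x), q_x = Qhat_X(x)."""
    tau_q = np.mean(q_t1 - q_t0)                                   # Eq. 9
    tau_iptw = np.mean(y * t / g - y * (1 - t) / (1 - g))          # Eq. 10
    tau_aiptw = np.mean(q_t1 - q_t0
                        + t * (y - q_t1) / g
                        - (1 - t) * (y - q_t0) / (1 - g))          # Eq. 11
    # Eq. 12: OLS (through the origin) of outcome residuals on
    # treatment residuals, as in DoubleML.
    ry, rt = y - q_x, t - g
    tau_dml = np.sum(rt * ry) / np.sum(rt * rt)
    return tau_q, tau_iptw, tau_aiptw, tau_dml
```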
## G Proof Of Concept Pipeline For An Additional Subpopulation

As an additional proof of concept, we follow the same steps as the proof of concept in Section 4, but we use a different subpopulation—*Subpopulation B*—for which the covariates C are Engineering and Business document categories. Table 5 provides the descriptive statistics for this subpopulation. To measure the predictive accuracy for this subpopulation, we again model P(C|X) with a logistic regression classifier. Averaged across held-out test folds, the F1 score is 0.92 and the average precision is 0.97. Addressing consideration #4, we also create diagnostic plots for Subpopulation B in Figure 4. In the subsequent pipeline, we use P∗(T|C) in equation 4 parameterized by ζ0 = 0.85, ζ1 = 0.15. Examining the modeling results in Table 6, we hypothesize that the slight performance improvement in estimators using catboost over adjustment with the oracle set C is due to the inclusion of extra covariates in X granting additional statistical efficiency (Rotnitzky & Smucler, 2020). Regarding class imbalance, Subpopulation B is slightly more balanced (compared to Subpopulation A) with 55% business and E[Y] = 0.05, so the smallest category (C = 0, Y = 1) has 49 documents. However, this subpopulation also suffers from finite data and class imbalance issues, as evidenced by the low average precision for the inference folds of the outcome models.

| RCT Dataset | C categories | n | RCT ATE | OR(C, Y) |
|---|---|---|---|---|
| Subpopulation B | Engineering, Business | 2,238 | 0.075 | 1.4 |

Table 5: For **Subpopulation B**, RCT dataset descriptive statistics including the number of units in the subpopulation (n) and the odds ratio, OR(C, Y).

| Prediction Ave. Prec. (↑ better) | gˆ(x) train | gˆ(x) inference | QˆT0(x) train | QˆT0(x) inference | QˆT1(x) train | QˆT1(x) inference | QˆX(x) train | QˆX(x) inference |
|---|---|---|---|---|---|---|---|---|
| linear | 0.98 (0.01) | 0.78 (0.02) | 0.67 (0.24) | 0.03 (0.02) | 0.56 (0.22) | 0.17 (0.03) | 0.59 (0.17) | 0.14 (0.02) |
| catboost (nonlinear) | 0.99 (0.0) | 0.79 (0.02) | 0.97 (0.02) | 0.03 (0.01) | 0.99 (0.01) | 0.21 (0.04) | 0.99 (0.0) | 0.15 (0.03) |

| Causal Rel. Abs. Error (↓ better) | Unadjusted | Backdoor C | τˆQ | τˆIPTW | τˆAIPTW | τˆDML |
|---|---|---|---|---|---|---|
| linear | 0.14 (0.06) | 0.14 (0.1) | 2.84 (1.43) | 0.43 (0.94) | 1.51 (2.28) | 0.48 (0.2) |
| catboost (nonlinear) | 0.14 (0.06) | 0.14 (0.1) | 0.11 (0.08) | 0.11 (0.08) | 0.12 (0.09) | 0.14 (0.1) |

Figure 4: **Diagnostic plot for Subpopulation B.** Each plot is the parameterization of the researcher-specified confounding functions, P∗(T|C) in Equation 4. Each blue dot is a different random seed (100 seeds total per plot/parameterization).

Table 6: Modeling results for Subpopulation B. **Top:** Predictive models' average precision (ave. prec.) for training (yellow) and inference (green) data splits. **Bottom:** Causal estimation models' relative absolute error (rel. abs. error) between the models' estimated ATE and the RCT ATE. Here, darker shades of red indicate worse causal estimates. Baselines, the unadjusted conditional mean on the samples (unadjusted) and the backdoor adjustment with the oracle C (backdoor C), are uncolored. We use two base-learner settings: linear and catboost (nonlinear). We report both the average and standard deviation (in parentheses) over 100 random seeds during sampling. All settings use P∗(T|C) in equation 4 parameterized by ζ0 = 0.85, ζ1 = 0.15.

## H Confidence Interval Plots

In Figure 5, we plot the confidence intervals given by the bootstrap percentile method (see Section 3.5 for more details). First, we construct confidence intervals for the original RCT data with a difference in means estimator. Then we plot the confidence intervals for the two sampling procedures—RCT rejection sampling and Algorithm 2 from Gentzel et al.—applied to that same RCT data. We examine synthetic DGP #1 (see Section C for details on the synthetic DGPs) for a single random seed across two different sizes of synthetic RCT data, 100K samples and 3K samples. In the plots, the dot is the mean of the 1000 bootstrap samples. The horizontal bars indicate the endpoints of the 95% confidence interval. We note that the confidence intervals for the sampling procedures are wider because sampling reduces the size of the dataset (roughly by half). Also as expected, all the confidence intervals are wider for the smaller (3K) RCT data setting. Note, in both settings, our RCT rejection sampling contains the true (RCT) ACE (red line in the plot) while Gentzel et al.'s algorithm does not.

Figure 5: For Synthetic DGP #1 (single random seed) and 1000 bootstrap samples, we plot the 95% confidence intervals for the original RCT (difference in means estimator), RCT rejection sampling with a parametric adjustment (with knowledge of the oracle adjustment), and Algorithm 2 from Gentzel et al. (2021) with a parametric adjustment (with knowledge of the oracle adjustment). The mean of the bootstrap samples is denoted by the dot.

## I Varying Confounding Strength

Figure 6: For Synthetic DGP #1, we alter the confounding strength x in the function we specify, P∗(T = 1|C) = σ(−1 + x · C).
We choose the lower and upper limits of x such that 0.05 < P∗(T = 1|C) < 0.95 to ensure overlap is satisfied. For each level of confounding strength, we run the rejection sampling algorithm with 1000 different random seeds. We plot the estimated ATE for the observational (confounded) dataset that is created after RCT rejection sampling. We find there are no substantial differences in the estimates due to changing the confounding strength.
Review 1:
Summary: In this paper, the authors propose a new method (RCT rejection sampling) for evaluating causal inference procedures based on data from RCTs. This builds on a long line of work, dating back to LaLonde 1986 in economics, which attempts to measure how well particular causal inference methods designed under no unmeasured confounding can recover average treatment effects. The authors' analysis most directly builds on recent work by Gentzel et al. 2021. In particular, the authors (i) establish that the RCT sampling procedure developed in Gentzel et al. 2021 yields a DGP under which the ATE of the original RCT is not identified; (ii) propose a simple rejection sampling procedure that fixes this problem and yields a subsampled DGP under which the ATE of the original RCT is identified. The authors then provide some additional discussion of practical considerations when implementing their RCT rejection sampling procedure and apply it to analyze a real-world experiment run on users of a search engine.
Strengths and Weaknesses: The paper tackles an important problem -- given the plethora of new methods for deriving causal inferences based on ML algorithms, it is important to have "standardized" procedures for evaluating their performance. Basing evaluations on RCTs is a natural candidate with a celebrated tradition. It is important to build up techniques for doing such evaluations based on RCTs rigorously.
Weaknesses:
1) The paper compares the researcher's estimator in a constructed subsample against the point estimate of the ATE derived in the experimental sample. There are several things that are unclear: (i) how should the ATE of the RCT be constructed? Simple difference in means? If there is some clustering, for example, in the design of the experiment, what estimator should be used? (ii) Why is the point estimate of the ATE from the RCT treated as ground-truth? There is uncertainty surrounding that quantity that should, in principle, be accounted for, since the RCT sample is just some random sample from the DGP of interest. (iii) Is the authors' intuition in (ii) that the distribution of ATE estimators across the RCT rejection sampling distribution should be somehow centered at the ATE estimate from the RCT? If so, that's a result that would need to be shown.
2) The illustration of RCT rejection sampling focuses entirely on reporting the average error of the ATE estimator over the RCT rejection sampling distribution relative to the point estimate of the ATE from the RCT. Putting aside the issues in (1), why should evaluations only be focused on the MSE? Surely we care about other quantities as well -- e.g., when RCTs are run, we may be interested in testing particular effect sizes. When comparing across methods, I may care about the length of confidence intervals etc. It would be useful for the authors to discuss whether RCT rejection sampling can help inform these other desiderata.
3) The discussion of practical considerations of RCT rejection sampling is tailored to the particular application. The paper would benefit from having a streamlined presentation of how they applied RCT rejection sampling in the particular experiment they studied, and a separate "practical considerations" section that tries to speak more generally across different experimental set-ups. Surely the tradeoffs in modeling, diagnostics, and specification of P(T | C) might differ in, say, the experiments analyzed in Eckles & Bakshy 2021 or more typical job-training-like RCTs in economics.
Requested Changes: See my previous discussion of the paper's weaknesses.
Broader Impact Concerns: I have no concerns about the ethical implications of this work.
==================================================
Review 2:
Summary: The paper points to an issue with covariate-based subsampling of RCTs to provide real-world evaluations of causal inference methods. This is an important problem because evaluating causal inference methods on synthetic data can only take us so far in modeling real-world causal effects. Subsampling from RCTs is a promising direction, and the paper points out an important issue with naive covariate-based subsampling to obtain a confounded dataset. The main idea is to adjust for the covariate shift issue that comes from selecting based on a function of the covariates. This is done by rejection sampling using the density ratio p(T | X)/p(T) instead of sampling only based on binary samples from p(T = 1 | X).
Strengths and Weaknesses: The paper
1. addresses an important problem;
2. is well-written and presented in a good order of concepts;
3. is well-motivated and presents a simple solution with supporting experiments;
4. has a discussion of all the pieces in the evaluation procedure which should be paid attention to when designing a new evaluation setup.
The authors do address one of the concerns I had, that "specifying a suitable confounding function P∗(T | C) may be difficult when C is high-dimensional." The issue is that the confounder can determine the label. This issue could be addressed by setting the confounder strength to be high in Figure 3. Another version of Figure 3 would have been useful here where, for various confounding strengths, you put the estimated ATE on the y-axis and the gold standard ATE on the x-axis. Could the authors point to whether there is already a figure that addresses this concern or produce the new figure?
My only other main comment is a high-level one. The central issue is that the marginal distribution of the covariates changes when subsampling. But in many cases, the covariate set of values for an RCT are pre-selected. In that case, restricting the evaluation set to have the same marginal distribution over covariates seems counter-productive. Wouldn't it be more suitable to have a target set of covariates to evaluate on that is more like the whole population of units? Can the authors comment on this?
Requested Changes: See questions above.
Broader Impact Concerns: None
==================================================
Review 3:
Summary: This paper addresses the challenge of confounding in causal estimation from observational data, particularly in settings with high-dimensional covariates. The authors propose a new sampling algorithm called RCT rejection sampling and provide theoretical guarantees for causal identification in observational data. They demonstrate the effectiveness of their algorithm using synthetic data and present a proof-of-concept evaluation pipeline with a real-world RCT dataset. The contributions of this work aim to improve empirical evaluation for causal estimation.
Strengths and Weaknesses:
Strengths:
- The paper highlights considerations when working with finite real-world data and provides a real-world example of how to put the theory and considerations into practice.
- The experimental results provided in the paper showcase the efficiency and validity of the proposed method.
- The paper is well-organized.
Weaknesses: - The paper primarily focuses on settings where the probability of treatment is equal to the probability of control, and it may not generalize well outside of this setting. More discussion or experiments are required to demonstrate the sensitivity of the estimation results beyond this setting. - The paper acknowledges that not all settings require empirical evaluation of causal estimators, suggesting that the applicability of the proposed methods may be limited in certain scenarios. Requested Changes: Please refer to weakness 1. Broader Impact Concerns: The authors have presented a Broader Impact Statement. ================================================== Metareview: Recommendation: Accept as is Comment: The paper is well-written, both in terms of technical details and the overall structure and presentation. As mentioned above, the main claims are precise, closely linked to substantive results presented in the paper, and are not in my opinion overstated. Their novel proposed procedure builds on the existing RCT sub-sampling research in a natural way, and all the reviewers were in favor of acceptance following discussion with the authors. I agree, and feel that the paper can be accepted as-is. ==================================================
# Computation And Sample Efficient Reinforcement Learning With Deep Bayesian Actor-Critic

Anonymous authors

Paper under double-blind review

## Abstract

Actor-critic methods in reinforcement learning learn the action value function (critic) by temporal difference learning and use it as an objective for policy improvement, for the sake of sample efficiency over on-policy methods. The well-known overestimation bias is usually handled by pessimistic policy evaluation based on critic uncertainty, which in turn may lead to underestimation bias. This means that pessimism is a sensitive parameter and requires careful tuning. Most methods employ an ensemble approach to represent the uncertainty of critic estimates, but this comes at the cost of a computational burden. To mitigate the sample and computation inefficiency of the actor-critic approach, we propose a novel and simple algorithm in this paper, called Deep Bayesian Actor-Critic (DBAC), that employs Bayesian dropout and a heteroscedastic critic network instead of an ensemble to make the agent uncertainty-aware. To mitigate overestimation bias, pessimistic policy evaluation is conducted where pessimism is proportional to the uncertainty of predictions. Using dropout along with a distributional representation of the critic leads to more computation-efficient calculations. With empirically determined optimal pessimism and dropout regularization, a single critic network is enough to achieve high sample and computation efficiency with near-SOTA performance.

## 1 Introduction

Reinforcement learning (RL) has witnessed significant progress with the emergence of deep neural networks (Mnih et al., 2013; 2015) in the last decade. However, sample efficiency is one of the main bottlenecks to the widespread adoption of RL in applications (Mendonca et al., 2019; Li et al., 2023a). On the other hand, computation efficiency is as important as sample efficiency (Chen et al., 2021a), especially for deploying RL agents in real-life applications such as robots (Zhao et al., 2020; Kormushev et al., 2013) and edge devices (Dai et al., 2022; Wei et al., 2022).

Off-policy methods that leverage off-policy samples promise more sample-efficient learning than their on-policy counterparts. Despite this advantage, they often get stuck at poor performance due to a mismatch between the behavioral and online policies, since value estimates are required for actions on which the value network is never (or insufficiently) trained. Eventually, the erroneous value estimates are exploited and the algorithm suffers from overestimation bias that yields divergent (catastrophic) behavior (Thrun & Schwartz, 2014). This failure mode is known as the *deadly triad* (Sutton & Barto, 2018; Van Hasselt et al., 2018), indicating that instability emerges once function approximation, temporal difference learning, and off-policy learning are combined in the same method. The main reason for this phenomenon is the limited generalization capability of the function approximator used (Korkmaz, 2024). Generalization is key to success for RL since the aim is to infer actions that are not similar to the available data, i.e., the learner is expected to extrapolate a new policy that behaves differently from the behavioral policy. If the value estimator of an off-policy method had a strong *inductive bias*, it would estimate values of out-of-distribution actions better, and much less overestimation would occur.

## 1.1 Optimism-Pessimism Dilemma

One of the main approaches to the overestimation problem is to use a pessimistic objective.
This way, it is acceptable to have poor generalization as long as the learner is aware of it. However, this requires assessing epistemic uncertainty; in other words, the learner should be *uncertainty-aware*. Such models would identify more uncertainty for out-of-distribution actions and use lower bounds (conservative estimates) as the objective. On the other hand, the *optimism in the face of uncertainty* principle provides a reasonable exploration scheme in the bandit setting. However, it is only feasible to be optimistic about critic uncertainty for policy improvement, not for the critic update (Tasdighi et al., 2024a).

In the literature, methods that assess their *epistemic uncertainty* (of the transition model (Janner et al., 2019; Chua et al., 2018; Depeweg et al., 2016) or the critic (Chen et al., 2021b; Hiraoka et al., 2021)) are able to obtain better performance and sample efficiency. Recently, for off-policy model-free actor-critic algorithms, epistemic uncertainty has been estimated by using double networks (Fujimoto et al., 2018; Haarnoja et al., 2018) or network ensembles (Chen et al., 2021b; Moskovitz et al., 2021). Hiraoka et al. (2021) found that using Bayesian dropout (Srivastava et al., 2014; Gal & Ghahramani, 2016) contributes to epistemic uncertainty assessment, but a small ensemble is still required. On the contrary, He et al. (2021) argued that a single critic network is enough when dropout is also used to evaluate the Bellman backup. Kuznetsov et al. (2020) claim that *aleatoric uncertainty* is also responsible for overestimation since any randomness is exploited when the Bellman optimality operator (T∗) is employed. For this purpose, they use both an ensemble of networks for epistemic uncertainty and a distributional representation (as quantiles) of the network output for aleatoric uncertainty assessment.

In their works, Tasdighi et al. (2024a) and Moskovitz et al. (2021) showed that the optimal pessimism (or optimism) depends on the environment, task, and learning method. Once a *synergy* between learner and task emerges (good generalization), the learner does not need to be pessimistic for policy improvement and can trust its estimates. However, when most out-of-distribution estimates are not consistent with reality (poor generalization), the learner needs to be pessimistic and should not trust its estimates until sufficient observations are obtained. Therefore, *depending on its inductive bias, the learner must balance pessimism effectively*.

## 1.2 Deep Bayesian Actor-Critic Algorithm

In this paper, we introduce a novel actor-critic algorithm, named Deep Bayesian Actor-Critic (DBAC), specifically designed to address both sample and computation inefficiencies. DBAC tackles the challenges posed by overestimation bias by introducing an environment-specific pessimism hyper-parameter used in the policy evaluation phase. The optimal pessimism improves sample efficiency, yielding results similar to other methods that use a much higher update-to-data (UTD) ratio. Pessimistic policy evaluation and improvement (the phases of actor-critic learning) are conducted upon predictive uncertainty, which includes both epistemic and aleatoric uncertainties. Inspired by Gal & Ghahramani (2016), DBAC replaces the critic ensemble with a single critic employing *Bayesian dropout* (Srivastava et al., 2014; Gal & Ghahramani, 2016) on the critic network to track epistemic uncertainty, and a distributional (heteroscedastic) value representation for total predictive uncertainty.
Although a heteroscedastic model can only capture aleatoric uncertainty in the supervised learning setting (Kendall & Gal, 2017), we argue that it can also capture epistemic uncertainty propagated through the Bellman backup, since bootstrapped critic estimates (sampled with dropout) are noisy due to the critic's epistemic uncertainty. Moreover, it also captures uncertainty due to the non-stationary nature of the learning procedure. Lastly, heteroscedastic representations allow the critic to learn loss attenuation, which makes the critic loss more robust to noisy data (Kendall & Gal, 2017). Using a single heteroscedastic critic network with dropout enhances computation efficiency compared to other methods that use ensembles. Most importantly, the implementation is very simple and can be obtained by injecting dropout into the networks, introducing a heteroscedastic critic network, and defining a pessimistic learning objective on top of the well-known Soft Actor-Critic algorithm (Haarnoja et al., 2018).

We conduct extensive experiments on standard RL benchmarks to evaluate the performance of DBAC compared to existing methods. Our results demonstrate the effectiveness of DBAC in achieving superior performance while requiring fewer computational resources and fewer samples, making it a promising approach for real-world RL applications.

## 2 Reinforcement Learning Preliminaries

## 2.1 Model-Free Reinforcement Learning

In reinforcement learning language, the agent lives in a Markov Decision Process (MDP), represented by a tuple M = (S, A, d0, τ, R), where S is the state space, A is the action space, d0 ∈ P(S) is the initial state distribution, τ : S × A → P(S) is the transition kernel, and R : S × A → R is the reward function. The initial state is sampled first, s0 ∼ d0(·). At each time t, being in st ∈ S, the next state is sampled from the environment, st+1 ∼ τ(· | st, at), depending on the taken action at ∼ π(· | st). Finally, the reward rt = R(st, at) is obtained from the reward function R. The ultimate goal of the agent is to derive a policy π : S → P(A) that maximizes the discounted cumulative return, i.e., the value function for a given state s,

$$V^{\pi}(s)=\mathbb{E}_{\pi,\tau}\Big[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\,\Big|\,s_{0}=s\Big].\tag{1}$$

## 2.2 Maximum Entropy Actor-Critic

In order to promote random actions for exploration and algorithm robustness, the maximum entropy framework introduces a policy entropy bonus into the value functions (Haarnoja et al., 2017; 2018),

$$V^{\pi}(s)=\mathbb{E}_{\pi,\tau}\Big[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})-\alpha\log\pi(a_{t}\mid s_{t})\,\Big|\,s_{0}=s\Big],\tag{2}$$

$$Q^{\pi}(s,a)=R(s,a)+\mathbb{E}_{\pi,\tau}\Big[\sum_{t=1}^{\infty}\gamma^{t}R(s_{t},a_{t})-\alpha\log\pi(a_{t}\mid s_{t})\,\Big|\,s_{0}=s,\,a_{0}=a\Big].\tag{3}$$

Learning iterates between solving policy evaluation and policy improvement. For the definition of the critic, the Bellman backup operator T^π is defined as

$$\mathcal{T}^{\pi}Q(s,a)=R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot\mid s,a),\,a'\sim\pi(\cdot\mid s')}\big[Q(s',a')-\alpha\log\pi(a'\mid s')\big],\tag{4}$$

and the critic is expected to stay the same when this operator is applied to it, i.e., Q^π(s, a) = T^π Q^π(s, a). The policy evaluation phase minimizes the temporal difference, i.e., the difference between Q(s, a) and T^π Q(s, a), to satisfy this condition. Therefore, the Bellman backup T^π Q(s, a) is also called the temporal difference (TD) target.
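For concreteness, a minimal sketch of the TD target in Equation 4 as it would be computed on a replay batch (assuming PyTorch and a policy object with a sample method returning an action and its log-probability; termination masking is omitted for brevity):

```python
import torch

@torch.no_grad()
def soft_td_target(reward, next_state, critic, policy, gamma=0.99, alpha=0.2):
    """One-sample estimate of the soft Bellman backup T^pi Q(s, a):
    r + gamma * (Q(s', a') - alpha * log pi(a' | s')), with a' ~ pi(. | s')."""
    next_action, log_prob = policy.sample(next_state)  # a' ~ pi(. | s')
    next_q = critic(next_state, next_action)           # Q(s', a')
    return reward + gamma * (next_q - alpha * log_prob)
```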
The optimal policy is defined as a softmax over the optimal critic,

$$\pi^{*}(\cdot\mid s)=\arg\min_{\pi}\,\mathrm{KL}\Big(\pi(\cdot\mid s)\,\Big\|\,\frac{\exp\big(\alpha^{-1}Q^{*}(s,\cdot)\big)}{\int_{\mathcal{A}}\exp\big(\alpha^{-1}Q^{*}(s,a)\big)\,da}\Big).\tag{5}$$

The policy improvement phase solves Equation 5 for the available critic Q instead of Q∗. After sufficient iterations, both the policy and the critic converge to optimality in the ideal case. The optimal critic must satisfy Bellman optimality, Q∗(s, a) = T∗Q∗(s, a), where the Bellman optimality operator T∗ turns out to have the following form (Equation 5 from Haarnoja et al. (2017)):

$$\mathcal{T}^{*}Q(s,a)=R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot\mid s,a)}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp(\alpha^{-1}Q(s',a'))\,da'\Big)\Big].\tag{6}$$

## 3 Modeling Aleatoric And Epistemic Uncertainties

In this part, we explain the differences between the two main types of uncertainty, *aleatoric* and *epistemic* uncertainty (Der Kiureghian & Ditlevsen, 2009; Kendall & Gal, 2017; Gal et al., 2016b). Most deep learning methods model either epistemic or aleatoric uncertainty alone (Gal et al., 2016b), whereas modeling both has fundamental importance for reliable and robust predictions.

## 3.1 Aleatoric Uncertainty

This type of uncertainty comes from the inherent randomness within the data itself. It is sometimes called statistical or data uncertainty. Even if more data is collected, aleatoric uncertainty is unavoidable and cannot be reduced, because it is an intrinsic part of the process being modeled. For instance, this could be due to measurement errors or natural variability in the data. Additionally, uncertainty due to lack of learning capacity may also appear as aleatoric uncertainty from the model's side, as it cannot be reduced by collecting more data.

For regression problems in the deep learning setting, we can model this by having a heteroscedastic network (with parameters θ) that outputs a normal distribution N(µθ(x), σ²θ(x)), where both the mean and variance depend on the input x (Lakshminarayanan et al., 2017; Kendall & Gal, 2017). Given a dataset D = {(xi, yi)}^N_{i=1}, the loss function for training the network can be derived from the negative log-likelihood of the normal distribution,

$$\mathcal{L}_{\theta}(\mathcal{D})=-\log p(\mathcal{D}\mid\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{2\sigma_{\theta}^{2}(x_{i})}\big(y_{i}-\mu_{\theta}(x_{i})\big)^{2}+\frac{1}{2}\log\sigma_{\theta}^{2}(x_{i})\right)+\frac{1}{2}\log 2\pi.\tag{7}$$
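A minimal sketch of the loss in Equation 7 (assuming a network with mean and log-variance heads; predicting log σ² is a standard numerical-stability choice, and the constant ½ log 2π is dropped as it does not affect gradients):

```python
import torch

def heteroscedastic_nll(mean, log_var, target):
    """Gaussian negative log-likelihood of Equation 7, up to a constant."""
    inv_var = torch.exp(-log_var)                       # 1 / sigma^2(x)
    sq_err = (target - mean) ** 2
    return (0.5 * inv_var * sq_err + 0.5 * log_var).mean()
```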
## 3.2 Epistemic Uncertainty

This type of uncertainty arises from a lack of knowledge of the model. Also known as model uncertainty, epistemic uncertainty can be reduced by gathering more data, refining the model, or simply using better modeling techniques. It reflects the uncertainty in the model's parameters and structure due to insufficient training data or incomplete understanding of the underlying process.

In the deep learning context, Bayesian neural networks (BNNs) provide a way to model epistemic uncertainty. In BNNs, there is a prior distribution p(θ) over the network parameters θ. Given the training data D, we can compute the posterior distribution over the parameters,

$$p(\theta\mid\mathcal{D})=\frac{p(\mathcal{D}\mid\theta)\,p(\theta)}{p(\mathcal{D})}.\tag{8}$$

In deep variational inference, we approximate the posterior p(θ | D) by a neural network qw(θ) parameterized by w. The objective is to maximize the posterior, which is the same as minimizing the negative evidence lower bound (ELBO) as the loss function,

$$\mathcal{L}_{w}(\mathcal{D})=\mathbb{E}_{q_{w}(\theta)}\left[-\log p(\mathcal{D}\mid\theta)\right]+\mathrm{KL}\big(q_{w}(\theta)\,\|\,p(\theta)\big),\tag{9}$$

combining the likelihood of the data under parameters sampled from the approximate posterior qw(θ) and the Kullback-Leibler (KL) divergence from qw(θ) to the prior p(θ). During prediction, we marginalize over the posterior distribution of the parameters to obtain the predictive distribution p(y | x, D). It is often approximated by sampling several sets of parameters from the posterior distribution p(θ | D), representing epistemic uncertainty, and averaging the predictions from the distribution p(y | x, θ), representing aleatoric uncertainty,

$$p(y\mid x,\mathcal{D})=\int p(y\mid x,\theta)\,p(\theta\mid\mathcal{D})\,d\theta.\tag{10}$$

Practical implementation with Monte Carlo dropout. Monte Carlo dropout is a practical method to approximate Bayesian inference in neural networks (Gal & Ghahramani, 2016; Gal et al., 2017). During training, dropout is applied and the model learns to make predictions with dropout active. The loss function in this setting typically remains the same as the standard loss (e.g., negative log-likelihood) but with dropout applied. During inference, multiple stochastic forward passes with dropout enabled are performed to approximate the predictive distribution, capturing epistemic uncertainty.

## 3.3 Uncertainty In Reinforcement Learning

In reinforcement learning, uncertainty representation for the value of a policy carries fundamental importance, especially in the presence of approximation (Bellemare et al., 2017). This can be done with atoms (Bellemare et al., 2017), quantiles (Dabney et al., 2018), or a probability distribution (Tang et al., 2019; Yang et al., 2021). On the other hand, representing the critic output as a distribution only allows us to assess aleatoric uncertainty. In order to have a reasonable predictive uncertainty, epistemic uncertainty should also be estimated. For this, ensembles (Chen et al., 2021b; Kuznetsov et al., 2020), Bayesian neural networks (Tasdighi et al., 2024b), or Bayesian dropout (Hiraoka et al., 2021) can be used. The predictive value uncertainty representation is the key to adjusting the *optimism* vs. *pessimism* balance, i.e., risk-seeking vs. risk-averse behavior. In particular, it is very important for balancing overestimation and underestimation, which is the main purpose of this work and similar studies.
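A minimal sketch of Monte Carlo dropout inference as described above (assuming a PyTorch model whose forward pass contains dropout layers):

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Approximate the predictive distribution with stochastic forward passes:
    dropout is kept active at inference time (Gal & Ghahramani, 2016)."""
    model.train()  # train mode keeps dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # The spread across passes reflects epistemic uncertainty.
    return samples.mean(dim=0), samples.var(dim=0)
```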
## 3.3 Uncertainty In Reinforcement Learning

In reinforcement learning, representing the uncertainty over the value of a policy is of fundamental importance, especially in the presence of approximation (Bellemare et al., 2017). This can be done with atoms (Bellemare et al., 2017), quantiles (Dabney et al., 2018), or a parametric probability distribution (Tang et al., 2019; Yang et al., 2021). On the other hand, representing the critic output as a distribution only allows us to assess aleatoric uncertainty. In order to obtain a reasonable predictive uncertainty, epistemic uncertainty should also be estimated. For this, ensembles (Chen et al., 2021b; Kuznetsov et al., 2020), Bayesian neural networks (Tasdighi et al., 2024b), or Bayesian dropout (Hiraoka et al., 2021) can be used. The predictive value uncertainty representation is the key to adjusting the *optimism* vs. *pessimism* balance, i.e., *risk-seeking* vs. *risk-averse* behavior. In particular, it is very important for balancing overestimation and underestimation, which is the main purpose of this work and of similar studies.

## 4 Quantifying Overestimation For Sub-Gaussian Critic Distributions

In this part, we analyze how estimation error causes overestimation through policy improvement. Assuming the policy improvement step is successful and given Q, the target used to update the critic in the maximum entropy framework is simply the Bellman backup $\mathcal{T}^{\pi}Q$, i.e., the Bellman backup operator applied to Q. This definition uses a deterministic function Q (Equation 4), while the critic is known only with some uncertainty, represented as a distribution over returns $\mathcal{Q}$. Therefore, we define the stochastic Bellman backup $\mathcal{T}^{\pi}\mathcal{Q}$ as follows:
$$\mathcal{T}^{\pi}\mathcal{Q}(s,a)=R(s,a)+\gamma\,\mathbb{E}_{\substack{q\sim\mathcal{Q},\ s'\sim\tau(\cdot|s,a)\\ a'\sim\pi(\cdot|s')}}\Big[q(s',a')-\alpha\log\pi(a'\mid s')\Big].\tag{11}$$
Similarly, we define the stochastic Bellman update $\mathcal{T}^{*}\mathcal{Q}$, i.e., the Bellman optimality operator applied to $\mathcal{Q}$, as follows:
$$\mathcal{T}^{*}\mathcal{Q}(s,a)=R(s,a)+\gamma\,\mathbb{E}_{\substack{q\sim\mathcal{Q}\\ s'\sim\tau(\cdot|s,a)}}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp(\alpha^{-1}q(s',a'))da'\Big)\Big].\tag{12}$$
Now, we analyze the overestimation bias, similar to the work of Chen et al. (2021b) and Lan et al. (2020) but in the soft learning framework instead of discrete actions. Our main purpose is to find the source of overestimation and to devise a pessimistic Bellman operator that prevents it.

Definition 4.1. *A random variable* $X\in\mathbb{R}$ *with mean* $\mu=\mathbb{E}[X]$ *is called sub-Gaussian with variance proxy* $\sigma^2$ *if its moment generating function satisfies*
$$\mathbb{E}[\exp(\lambda X)]\leq\exp\big(\lambda\mu+\tfrac{1}{2}\lambda^{2}\sigma^{2}\big),\quad\forall\lambda\in\mathbb{R}.\tag{13}$$

Let $\mu(s,a)=\mathbb{E}_{q\sim\mathcal{Q}}[q(s,a)]$. We define the overestimation error ϵ as the difference between $\mathcal{T}^{*}\mathcal{Q}$ and the average $\mathcal{T}^{*}\mu$,
$$\epsilon(s,a)=\mathcal{T}^{*}\mathcal{Q}(s,a)-\mathcal{T}^{*}\mu(s,a).\tag{14}$$
Ideally, ϵ(s, a) should be zero if there is no overestimation, which is not the case due to uncertainty. To quantify it, we assume that our critic distribution $\mathcal{Q}(s,a)$ is sub-Gaussian with variance proxy $\sigma^2(s,a)$, representing uncertainty. Finally, we arrive at Theorem 4.1.

Theorem 4.1 (Overestimation quantification for sub-Gaussian critics). *Let the estimated critic distribution* $\mathcal{Q}$ *be sub-Gaussian with mean* $\mu$ *and variance proxy* $\sigma^2$. *Then,*
$$\mathcal{T}^{*}\mathcal{Q}(s,a)\leq R(s,a)+\gamma\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp\big(\alpha^{-1}\mu(s',a')+\tfrac{1}{2}\alpha^{-2}\sigma^{2}(s',a')\big)da'\Big)\Big].\tag{15}$$
*In addition, the overestimation due to the uncertainty of the estimated distribution* $\mathcal{Q}$*, denoted as* ϵ*, is upper bounded for Bellman updates,*
$$\epsilon(s,a)\leq\frac{\gamma}{2\alpha}\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\max_{a'}\sigma^{2}(s',a')\Big].\tag{16}$$

Corollary 4.1.1 (Pessimistic critic target). *Given the estimated critic distribution* $\mathcal{Q}$*, using the shifted distribution* $\tilde{\mathcal{Q}}=\mathcal{N}(\tilde{\mu},\tilde{\sigma}^2)$ *for Bellman updates, where the mean is shifted,* $\tilde{\mu}=\mu-\beta\sigma$*, with the same variance proxy* $\tilde{\sigma}^2=\sigma^2$*, prevents overestimation as long as* $\beta\geq\max_{(s',a')}\tfrac{1}{2}\alpha^{-1}\sigma(s',a')$.
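To make the corollary concrete, consider a small worked example with purely illustrative numbers (they are not taken from our experiments): with temperature α = 0.2 and a largest variance proxy of σ(s′, a′) = 0.5 over all state-action pairs,

$$\beta \;\geq\; \max_{(s',a')}\tfrac{1}{2}\,\alpha^{-1}\sigma(s',a') \;=\; \frac{0.5}{2\times 0.2} \;=\; 1.25.$$

Guaranteed levels of this kind can be larger than what works best empirically, which is why Sec. 5.2 below treats β as a tunable pessimism parameter.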
## 5 Deep Bayesian Actor-Critic

In this section, we discuss the key mechanisms needed for computation- and sample-efficient actor-critic learning and propose our algorithm, *Deep Bayesian Actor-Critic*. This algorithm employs a *single* critic network that captures uncertainty, instead of an ensemble. To this end, it employs Bayesian dropout within the critic network and learns a probability distribution as output, representing the *predictive uncertainty* used to evaluate the pessimistic Bellman backup. By sampling the predicted mean with dropout for the Bellman backup, its epistemic uncertainty is learned within the heteroscedastic distribution, along with uncertainty stemming from state transitions and the non-stationarity of the learning dynamics. In addition, the policy network also has dropout within its layers to regularize the policy improvement phase.

## 5.1 Heteroscedastic Critic

Heteroscedastic networks output a probability distribution instead of a point estimate and were originally designed to model the aleatoric uncertainty of the underlying phenomenon (Kendall & Gal, 2017; Lakshminarayanan et al., 2017). In addition to this property, modeling the output as a distribution allows the network to learn loss attenuation and makes learning robust to noisy data (Kendall & Gal, 2017). In the temporal difference learning setting, the objective is to fit a distribution over the Bellman backup, which includes the epistemic randomness of the bootstrap estimate and the uncertainty due to the non-stationary nature of the learning procedure. Therefore, we assume that most of the critic uncertainty is modeled this way, and it can be used for pessimistic updates of the critic and the policy.

## 5.2 Pessimistic Objective

As in most algorithms, the natural way to inhibit overestimation is to employ pessimistic critic updates. Assuming the critic value distribution is normal (hence still sub-Gaussian), we can use the modified pessimistic distribution $\tilde{\mathcal{Q}}=\mathcal{N}(\mu-\beta\sigma,\sigma^2)$ from Corollary 4.1.1, but the high β values required to guarantee overestimation prevention would be over-pessimistic. Moreover, there are many other factors affecting the bound. For example, policy improvement is slower than the policy evaluation phase in actor-critic methods, decreasing possible overestimation. In addition, the real variance proxy might be lower than the estimated variance. Lastly, overestimation may not need to be fully prevented, and slightly optimistic updates may promote exploration. For this reason, we treat β simply as a pessimism parameter to be tuned for each environment and set of learning hyper-parameters; it can be small depending on the learning process. In the end, we define the pessimistic Bellman update $\tilde{\mathcal{T}}^{*}\mathcal{Q}(s,a)$ as follows:
$$\tilde{\mathcal{T}}^{*}\mathcal{Q}(s,a)=\mathcal{T}^{*}\tilde{\mathcal{Q}}(s,a)=R(s,a)+\gamma\,\mathbb{E}_{\substack{q\sim\tilde{\mathcal{Q}}\\ s'\sim\tau(\cdot|s,a)}}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp(\alpha^{-1}q(s',a'))da'\Big)\Big].\tag{17}$$

## 5.3 Dropout Regularization

Dropout regularization (Srivastava et al., 2014) allows us to capture the probabilistic nature of a network, yielding an approximation to a Bayesian neural network (Gal & Ghahramani, 2016). It is also equivalent to representing the model as an ensemble, since each sampled weight set of the network corresponds to a sub-model (He et al., 2021). For this purpose, DBAC employs dropout regularization for both the critic and the policy networks. During the learning phase, it prevents the policy and the critic from overfitting and improves generalization. More importantly, the temporal difference (TD) targets are evaluated with dropout, whose randomness reflects epistemic uncertainty and is thus absorbed into the heteroscedastic critic distribution. In the end, the overall predictive uncertainty can be quantified as a simple normal distribution and used to construct a pessimistic objective.

## 5.4 Layer Normalization

Layer Normalization (Ba et al., 2016) is a normalization method applied to the feature dimension of activations. It has a regularization effect and prevents possible numerical instabilities during training. In DBAC, we apply Layer Normalization after all hidden activations of both the critic and the policy networks, similar to Hiraoka et al. (2021).
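Putting Secs. 5.1–5.4 together, the following is a minimal PyTorch sketch of such a critic; the class name and exact layer ordering are our own assumptions, with the hidden sizes taken from Table 4 (two hidden layers of 256 units) and Layer Normalization placed after the hidden activations as described above.

```python
import torch
import torch.nn as nn

class HeteroscedasticCritic(nn.Module):
    """Single Q-network returning a normal distribution N(mu, sigma^2) over returns.

    A minimal sketch, not our released implementation: dropout provides the
    Bayesian/epistemic component (Sec. 5.3), LayerNorm follows each hidden
    activation (Sec. 5.4), and sizes follow Table 4.
    """

    def __init__(self, state_dim: int, action_dim: int,
                 hidden: int = 256, p_drop: float = 0.01):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(), nn.LayerNorm(hidden), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden),
            nn.ReLU(), nn.LayerNorm(hidden), nn.Dropout(p_drop),
        )
        self.mu_head = nn.Linear(hidden, 1)
        self.log_std_head = nn.Linear(hidden, 1)

    def forward(self, state: torch.Tensor, action: torch.Tensor):
        h = self.body(torch.cat([state, action], dim=-1))
        mu = self.mu_head(h)
        sigma = self.log_std_head(h).clamp(-10.0, 2.0).exp()  # keep sigma positive and bounded
        return mu, sigma
```

Because dropout stays active when evaluating targets, each forward pass corresponds to a different sub-model, which is how the epistemic randomness of the TD target enters the heteroscedastic distribution.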
## 5.5 Algorithm Summary

Finally, we present the DBAC algorithm using the results of the analyses from the previous sections. Unlike previous methods, we parameterize the critic $\mathcal{Q}_\theta$ as a single network with parameter set θ and the policy $\pi_\phi$ as another single network with parameter set ϕ, where both networks output probability distributions, i.e., the networks represent distributions over values and actions. Note that the bar notation stands for a lagged network with frozen parameters. DBAC is summarized in Algorithm 1 with plain gradient descent, but the Adam optimizer (Kingma & Ba, 2014) is used in our experiments.

Critic learning The critic network predicts the cumulative return with some uncertainty. Using transition tuples from the experience replay as a batch, $\mathcal{D}_b=\{(s_i,a_i,r_i,s'_i,\texttt{done}_i)\}_{i=1}^{N_b}$, the temporal difference (TD) target $q_i^{TD}$, representing the Bellman backup, is β-pessimistic,
$$q_{i}^{TD}=r_{i}+\gamma\big(\mu_{\bar\theta}(s_{i}^{\prime},a_{i}^{\prime})-\beta\sigma_{\bar\theta}(s_{i}^{\prime},a_{i}^{\prime})-\alpha\log\pi_{\phi}(a_{i}^{\prime}\mid s_{i}^{\prime})\big)(\neg\texttt{done}_{i}),\quad a_{i}^{\prime}\sim\pi_{\phi}(\cdot\mid s_{i}^{\prime}).\tag{18}$$
The learning objective is the cross-entropy (log) loss,
$${\mathcal{L}}_{\theta}({\mathcal{D}}_{b})=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}-\log{\mathcal{Q}}_{\theta}(q_{i}^{TD}\mid s_{i},a_{i}).\tag{19}$$
Theoretically, the critic distribution is not restricted to any particular family beyond being sub-Gaussian. For simplicity, we model the critic as a normal distribution, i.e., $\mathcal{Q}_\theta=\mathcal{N}(\mu_\theta,\sigma^2_\theta)$, in this work. In this case, the cross-entropy loss becomes
$${\mathcal{L}}_{\theta}({\mathcal{D}}_{b})={\frac{1}{2}}\log2\pi+{\frac{1}{N_{b}}}\sum_{i=1}^{N_{b}}{\Big(}{\frac{1}{2}}\log\sigma_{\theta}^{2}(s_{i},a_{i})+{\frac{(q_{i}^{TD}-\mu_{\theta}(s_{i},a_{i}))^{2}}{2\sigma_{\theta}^{2}(s_{i},a_{i})}}{\Big)}.\tag{20}$$
In this loss, $q_i^{TD}$ is simply a bootstrapped estimate, as used in temporal difference methods. The major difference is that we learn the critic with the cross-entropy (log) loss, and the TD target $q_i^{TD}$ also carries epistemic randomness thanks to dropout.

Lagged critic for TD target When the trained critic network is also used for calculating the target value, critic training is prone to divergence (Li et al., 2023b). For this reason, a common approach is to use another critic network to evaluate the TD target (Mnih et al., 2013). Similar to Lillicrap et al. (2015), Fujimoto et al. (2018), and Haarnoja et al. (2018), we use a lagged critic network with frozen (gradient-stopped) parameters for TD target evaluations, as shown in Equation 18. This strategy is important for ensuring the stability of temporal difference learning.

Policy learning The policy improvement objective has a form very similar to that of the SAC algorithm (Haarnoja et al., 2018), except that the standard deviation is used to construct a β-pessimistic objective instead of the minimum of twin critic predictions. Using only states from the experience replay as batches $\mathcal{D}_b=\{s_i\}_{i=1}^{N_b}$ with batch size $N_b$, the loss function for the policy network is as follows:
$${\mathcal{L}}_{\phi}({\mathcal{D}}_{b})={\frac{1}{N_{b}}}\sum_{i=1}^{N_{b}}\mathbb{E}_{a\sim\pi_{\phi}(\cdot\mid s_{i})}\big[\mu_{\theta}(s_{i},a)-\beta\sigma_{\theta}(s_{i},a)-\alpha\log\pi_{\phi}(a\mid s_{i})\big].\tag{21}$$
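For concreteness, here is a hedged sketch of the critic update above (Eqs. 18–20); `policy` and `critic` follow the interfaces of the sketch in Sec. 5.4, and the tuple layout of `batch` is an assumption for illustration.

```python
import torch

def critic_loss(critic, lagged_critic, policy, batch, alpha, beta, gamma=0.99):
    """β-pessimistic TD target (Eq. 18) and Gaussian cross-entropy loss (Eq. 20).

    Assumed interfaces: `policy(s)` returns (action, log_prob); `critic(s, a)`
    and `lagged_critic(s, a)` return (mu, sigma); `batch` holds float tensors.
    """
    s, a, r, s_next, done = batch
    with torch.no_grad():                       # lagged critic: frozen parameters
        a_next, log_pi = policy(s_next)
        mu_t, sigma_t = lagged_critic(s_next, a_next)
        q_td = r + gamma * (mu_t - beta * sigma_t - alpha * log_pi) * (1.0 - done)
    mu, sigma = critic(s, a)
    # -log N(q_td | mu, sigma^2), dropping the constant (1/2) log(2*pi)
    return (torch.log(sigma) + (q_td - mu) ** 2 / (2 * sigma ** 2)).mean()
```

Keeping dropout enabled in `lagged_critic` during the target computation is what injects the epistemic randomness into $q_i^{TD}$, as discussed in Sec. 5.3.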
Automatic temperature tuning Inspired by Haarnoja et al. (2018), we also employ automatic temperature tuning. Using a constant temperature results in different policies once the reward magnitude changes. To mitigate this, Haarnoja et al. (2018) proposed a policy entropy constraint, representing the temperature as the Lagrange multiplier of the constraint. Given the target entropy $\bar{\mathcal{H}}$ as a hyper-parameter, the loss function related to this constraint is as follows:
$${\mathcal{L}}_{\alpha}({\mathcal{D}}_{b})=-\alpha{\bar{\mathcal{H}}}+\alpha\sum_{i=1}^{N_{b}}\mathbb{E}_{a\sim\pi_{\phi}(\cdot|s_{i})}\big[-\log\pi_{\phi}(a\mid s_{i})\big].\tag{22}$$

Algorithm 1 Deep Bayesian Actor-Critic

Require: Environment env
Require: Experience buffer $\mathcal{D}$
Require: Critic $\mathcal{Q}_\theta$, lagged critic $\mathcal{Q}_{\bar\theta}$, policy $\pi_\phi$, all with dropout
Require: Initial temperature $\alpha$, target entropy $\bar{\mathcal{H}}$
Require: Pessimism $\beta$
Require: Learning rates $\eta_Q$, $\eta_\pi$, $\eta_\alpha$, polyak parameter $\rho$
Require: Batch size $N_b$
Require: Total training steps $N$

$s \sim$ env.reset() ▷ Reset the environment
for $N$ timesteps do
  $a \sim \pi_\phi(\cdot \mid s)$ ▷ Sample action
  $r, s', \texttt{done} \sim$ env.step($a$) ▷ Act on the environment
  $\mathcal{D} \leftarrow \mathcal{D} \cup (s, a, r, s', \texttt{done})$ ▷ Record transition tuple
  if done then $s \sim$ env.reset() else $s \leftarrow s'$ ▷ Reset or state transition
  for $G$ gradient steps do
    $\mathcal{D}_b = \{(s_i, a_i, r_i, s'_i, \texttt{done}_i)\}_{i=1}^{N_b} \sim \mathcal{D}$ ▷ Sample minibatch for training
    $\tilde{a}'_i \sim \pi_\phi(\cdot \mid s'_i),\ \forall i \in \{1, 2, \dots, N_b\}$ ▷ Sample next actions
    $q^{TD}_i = r_i + \gamma(\mu_{\bar\theta}(s'_i, \tilde{a}'_i) - \beta\sigma_{\bar\theta}(s'_i, \tilde{a}'_i))(\neg\texttt{done}_i),\ \forall i \in \{1, 2, \dots, N_b\}$ ▷ Build TD targets
    $\theta \leftarrow \theta - \eta_Q \nabla_\theta \frac{1}{N_b}\sum_{i=1}^{N_b} -\log\mathcal{Q}_\theta(q^{TD}_i \mid s_i, a_i)$ ▷ Update critic
    $\phi \leftarrow \phi - \eta_\pi \nabla_\phi \frac{1}{N_b}\sum_{i=1}^{N_b} \mathbb{E}_{a\sim\pi_\phi(\cdot|s_i)}\big[\mu_\theta(s_i, a) - \beta\sigma_\theta(s_i, a) - \alpha\log\pi_\phi(a \mid s_i)\big]$ ▷ Update policy
    $\alpha \leftarrow \alpha - \eta_\alpha \nabla_\alpha \big(-\alpha\bar{\mathcal{H}} + \alpha\sum_{i=1}^{N_b}\mathbb{E}_{a\sim\pi_\phi(\cdot|s_i)}[-\log\pi_\phi(a \mid s_i)]\big)$ ▷ Update temperature
    $\bar\theta \leftarrow \rho\bar\theta + (1-\rho)\theta$ ▷ Update target critic network
  end for
end for

## 6 Experiments

Our experiments aim to investigate whether enhancing the off-policy actor-critic methodology with DBAC can improve its sample and computation efficiency on difficult continuous-control benchmarks. For this purpose, DBAC is compared to similar competitive algorithms: TQC (Kuznetsov et al., 2020), DROQ (Hiraoka et al., 2021), and SAC (Haarnoja et al., 2018). All algorithm results are obtained using in-house code with the same network architectures (including layer normalization) to make a fair comparison. We include the DROQ algorithm with a UTD ratio (G) equal to 1 and 5, although it is equal to 20 in the original paper. Additionally, two ablation studies are conducted to examine how the effectiveness of different levels of pessimism and dropout rates varies across environments. Lastly, the effect of Layer Normalization is not surveyed, since Hiraoka et al. (2021) study it extensively for the DROQ algorithm. Through the Gymnasium API (Towers et al., 2023), six well-known MuJoCo environments (Todorov et al., 2012) are used for comparison, as they are tested by most algorithms in the literature. Hyper-parameters per environment can be found in Appendix C. For all experiments, PyTorch (version 2.2.2) (Paszke et al., 2019) is used. Please refer to Appendix D for the codebase.

![8_image_0.png](8_image_0.png)

Figure 1: Main learning curves of DBAC and other algorithms. The standard deviation is represented by the shaded areas, while the average return across evaluation episodes is shown by solid curves. See Table 5 for the specific hyper-parameters.

Evaluation protocol After every 1000 time steps, we execute a single test episode using the online policy and measure its performance by the total reward accumulated during the episode. The total number of training steps differs per environment to keep the training duration short.
The total training steps are taken from REDQ (Chen et al., 2021b) and MBPO (Janner et al., 2019), except for InvertedDoublePendulum-v4, since it is not available in those papers.

Learning curves Each specified environment is trained for a fixed number of environment interactions, repeated 5 times to assess the stability of the algorithm, reported as mean and standard deviation. Further experimental details are presented in Appendix C. Means and standard deviations on the last evaluation episode are summarized in Table 1. Additionally, average returns over the whole learning process, averaged over random seeds, are summarized in Table 2. In Figure 1, the performance of DBAC is shown against the previously mentioned SOTA algorithms for 6 tasks, where the important hyper-parameters used for DBAC and TQC are summarized in Table 5. Additionally, value estimation errors are presented in Figure 2. The bold lines represent the average, while the shaded area indicates the standard deviation (representing randomness) of the total reward across evaluation episodes.

Table 1: Episodic return over five training runs on MuJoCo tasks at the end of training. The ± sign denotes one standard deviation across trials. The first and second best methods are highlighted in blue and red.

| Env | # steps | DBAC | DROQ G=1 | DROQ G=5 | SAC | TQC |
|---------------------------|-----------|-------------|-------------|-------------|-------------|-------------|
| InvertedDoublePendulum-v4 | 50k | 9354 ± 2 | 8036 ± 2634 | 9349 ± 6 | 7899 ± 2884 | 9354 ± 2 |
| Walker2d-v4 | 300k | 4901 ± 87 | 3896 ± 231 | 831 ± 298 | 490 ± 280 | 4368 ± 296 |
| Hopper-v4 | 125k | 2886 ± 376 | 1550 ± 1006 | 1612 ± 1154 | 540 ± 137 | 2789 ± 860 |
| Humanoid-v4 | 300k | 4454 ± 1672 | 1196 ± 789 | 980 ± 427 | 1482 ± 799 | 3303 ± 2330 |
| HalfCheetah-v4 | 400k | 8897 ± 399 | 6925 ± 1208 | 7632 ± 995 | 7303 ± 742 | 9244 ± 611 |
| Ant-v4 | 300k | 5298 ± 281 | 876 ± 946 | 4068 ± 1427 | 2638 ± 1315 | 5926 ± 163 |

Sample efficiency As seen from Figure 1, DBAC outperforms all other algorithms in terms of sample efficiency, except TQC on HalfCheetah-v4 and Ant-v4. The main explanation is in Figure 2: the other algorithms except TQC suffer from a positive overestimation bias, whereas DBAC handles it by using a pessimism level specifically selected for each environment. The same holds for the TQC algorithm, as we found the number of quantiles to drop per network by trial-and-error to represent pessimism.

![9_image_0.png](9_image_0.png)

Figure 2: Estimation error of DBAC and other algorithms at the beginning of episodes. The standard deviation is represented by the shaded areas, while the average errors across evaluation episodes are shown by solid curves. See Table 5 for the specific hyper-parameters.

Table 2: Average episodic return over the learning procedure and over five training runs on MuJoCo tasks. The first and second best methods are highlighted in blue and red.

| Env | # steps | DBAC | DROQ G=1 | DROQ G=5 | SAC | TQC |
|---------------------------|-----------|---------|------------|------------|---------|---------|
| InvertedDoublePendulum-v4 | 50k | 6235.99 | 5967.47 | 6812.88 | 5578.03 | 5624.03 |
| Walker2d-v4 | 300k | 3166.3 | 1490.9 | 1300.85 | 1371.76 | 2963.89 |
| Hopper-v4 | 125k | 1122.34 | 502.25 | 1024.69 | 421.8 | 1193.79 |
| Humanoid-v4 | 300k | 2271.2 | 952.87 | 994.22 | 1182.69 | 1880.35 |
| HalfCheetah-v4 | 400k | 6866.44 | 5675.06 | 6483.58 | 6140.46 | 7149.85 |
| Ant-v4 | 300k | 2742.98 | 983.09 | 2046.47 | 1582.81 | 3056.95 |
Also, as seen in Table 2, DBAC performs well not only at the end of training but also throughout the whole learning process, along with TQC.

Computation efficiency As wall-clock time statistics vary depending on the computing units, we present the number of critic networks and the number of critic backprops per time step in Table 3. Note that the policy is a single network of the same size across methods, and only the input and output layers of the critic differ, which has a negligible effect on the number of parameters. Although TQC performs slightly better than DBAC on sample efficiency for some environments, DBAC uses fewer parameters and consumes fewer computational resources than TQC, as DBAC uses only a single critic network with a UTD ratio of 1. In summary, DBAC outperforms all other algorithms in computation efficiency.

| | DBAC | DROQ G=1 | DROQ G=5 | SAC | TQC |
|-------------------|------------|------------|------------|-------|----|
| # critic network | 1 | 2 | 2 | 2 | 5 |
| # critic backprop | 1 | 2 | 10 | 2 | 5 |

Table 3: Number of critic networks and backprops per time step. Each critic has almost the same size.

However, the key to DBAC's performance lies in two main hyper-parameters: the pessimism level and the dropout rate of the critic and policy networks. Therefore, we conduct sensitivity analyses for these in the following sections.

## 6.1 Ablation Study 1: Pessimism Sensitivity

To investigate the sensitivity of DBAC to the pessimism parameter β, we run DBAC on all environments with varying β. Results are available in Appendix B. As shown in Figure 3, the results indicate that the optimal β is neither very small nor very large. Therefore, it should be determined carefully to guarantee better performance. As we stated earlier, excess pessimism paves the way to underestimation, whereas a lack of it causes the overestimation that is inherent to actor-critic methods. In addition, the pessimism level does not affect the results in HalfCheetah-v4, Walker2d-v4, and InvertedDoublePendulum-v4 as significantly as in the other environments. Therefore, pessimism sensitivity varies across environments, possibly because of varying task difficulties.

## 6.2 Ablation Study 2: Dropout Sensitivity

In addition to pessimism, the dropout rate is also an important parameter, as it determines the epistemic uncertainty. Results are available in Appendix B. As can be seen from Figure 4, a dropout rate of 0.01 is best for most environments, except HalfCheetah-v4 and Walker2d-v4. This is the reason for using zero dropout for these environments in the main comparison study (Table 5). Although the optimal dropout rate may also depend on the neural network architecture and the environment, these environments are known to be relatively easy in terms of overestimation issues, as evidenced by the high performance of comparatively simple algorithms such as vanilla SAC (Haarnoja et al., 2018), TD3 (Fujimoto et al., 2018), and DDPG (Lillicrap et al., 2015). Therefore, we believe that dropout in those environments only slows down learning, as they do not suffer much from the overestimation problem. Another interesting result is that in all zero-dropout experiments, DBAC is still on par with SAC, DROQ, and TQC (in many cases) in terms of sample efficiency, meaning that merely representing the critic as a distribution and learning with a hand-tuned pessimism level removes the need for double or ensemble critic networks.
## 7 Prior Art

Pessimistic policy evaluation Earlier approaches overcome the overestimation bias phenomenon by using double critic networks (Van Hasselt et al., 2016; Wang et al., 2016), lagged critic networks (Lillicrap et al., 2015), or a combination of the two (Fujimoto et al., 2018; Haarnoja et al., 2018). Later methods use an ensemble of critic networks to capture epistemic uncertainty and a pessimistic Bellman backup for critic training (Chen et al., 2021b; Lan et al., 2020; Kumar et al., 2019). Similarly, Hiraoka et al. (2021) utilized dropout for critic regularization on top of this approach to increase this capability. These methods use constant pessimism for policy evaluation and improvement, but better-estimated epistemic uncertainty allows them to use a high update-to-data (UTD) ratio, which is the number of gradient steps per environment interaction. Unlike previous approaches, Kuznetsov et al. (2020) define pessimism for each environment separately, in the form of truncating samples from a quantile distribution combined with a critic ensemble. While the ensemble captures critic uncertainty, the quantile representation captures aleatoric uncertainty, which they state is especially useful for precise overestimation control, supporting our view on aleatoric uncertainty. Similarly, our method uses a critic network returning a normal distribution instead of a quantile representation, and employs Bayesian dropout instead of an ensemble. Moskovitz et al. (2021) showed that the optimal pessimism/optimism depends on the environment, stating that estimation bias depends on the overall context in which a learner is embedded; we similarly state that estimation bias depends on the inductive bias of the learner. They focus on updating pessimism *on the fly* as a bandit problem instead of fixing it, but this requires evaluating on-policy returns and introduces an online bandit to update pessimism, making it unusable in the offline setting. Li et al. (2023b) go beyond this approach by parameterizing optimism/pessimism with a neural network and obtain strong performance on benchmarks. Still, these methods use critic ensembles, thus increasing the computational overhead. Our work does not focus on pessimism adaptation; it keeps pessimism constant throughout training and focuses on well-estimated epistemic critic uncertainty.

Risk-sensitive reinforcement learning Pessimism is also needed for safety-critical RL applications to avoid catastrophic situations. For this purpose, pessimistic policy updates based on aleatoric uncertainty have been modeled with normal distributions (Tang et al., 2019; Yang et al., 2021). Stachowicz & Levine (2024) devised a risk-sensitive actor-critic algorithm in which epistemic uncertainty is modeled by an ensemble, whereas aleatoric uncertainty is modeled by a distributional representation as the output of the critic network, similar to the work of Kuznetsov et al. (2020). Their approach leads to higher performance while significantly reducing unsafe maneuvers.

Overestimation vs overfitting Li et al. (2023a) showed that statistical overfitting should be mitigated for efficient RL. This is a valid statement for any machine learning problem, since overfitting means poor generalization. On the other hand, Kumar et al. (2019) and Levine et al. (2020) state that overestimation is different from statistical overfitting, since increasing the number of training samples does not prevent it. However, temporary overfitting may cause bias, and it may not be recoverable during RL training.
In our work, we apply pessimism to probably-overfitted state-action pairs, resolving overestimation similarly to previous works (Chen et al., 2021b; Hiraoka et al., 2021; Kumar et al., 2019; Lan et al., 2020) but in a more computation-efficient way.

Optimism in the face of uncertainty Epistemic uncertainty is also employed to improve the policy *optimistically* (Audibert et al., 2007; Kocsis & Szepesvári, 2006). However, for large-scale problems, this approach usually fails or requires carefully tuned optimism (Pacchiano et al., 2020; Ciosek et al., 2019). O'Donoghue et al. (2018) used a normal distribution to track critic uncertainty, where the upper bound is used as a policy improvement target. Osband et al. (2016) follow a similar path but use ensembles and improve the policy with random critics at each episode, inspired by Thompson sampling. However, their experiments are on environments that are relatively easy for deep RL, so overestimation correction is not a major bottleneck there. Tasdighi et al. (2024a) implemented a twin critic network and used optimistic estimates for policy learning, while constructing pessimistic critic targets to mitigate the overestimation problem. Ciosek et al. (2019) followed a similar path by using optimistic estimates only for exploration and a pessimistic critic target based on double critics.

Dropout uncertainty Using dropout is a kind of Bayesian approximation, and thus another way to assess model uncertainty (Gal & Ghahramani, 2016). It has applications in model-based (Gal et al., 2016a; 2017; Kahn et al., 2017) and model-free (Moerland et al., 2017; Jaques et al., 2019; He et al., 2021) reinforcement learning. Using the same idea in the off-policy maximum entropy actor-critic setting, He et al. (2021) inject dropout into the critic network and demonstrate that one critic network is enough for an actor-critic method. Similarly, Hiraoka et al. (2021) use the dropout mechanism to evaluate epistemic uncertainty in addition to ensembling, and show that it reduces the number of networks required in the ensemble, but they use critics with a deterministic head. Dropout allows one to significantly reduce the ensemble size and use a high UTD ratio, as in Chen et al. (2021b). A high UTD ratio increases the risk of overestimation bias, but the policy is then improved in a more pessimistic way to handle it. They also experimented with a single critic network (called Sin-DroQ) and obtained similar performance in easier environments such as Hopper-v2 and Walker2d-v2, but failed to converge on Ant-v2 and Humanoid-v2. We strengthen this approach by introducing a heteroscedastic critic network and absorbing the dropout uncertainty into a normal distribution.

Heteroscedastic representation for epistemic uncertainty Although the heteroscedastic representation is mainly used to assess aleatoric uncertainty, it may also capture epistemic uncertainty if trained for this purpose, as in evidential learning (Sensoy et al., 2018; Amini et al., 2020). Moreover, Lakshminarayanan et al. (2017) also used ensembles together with adversarial examples to robustly assess predictive uncertainty as a neural network output. We treat the critic estimate as a random value both epistemically and aleatorically and use a distributional representation to learn both, as we draw random samples from the same network with dropout.

## 8 Conclusion & Future Directions

In this paper, we introduced Deep Bayesian Actor-Critic (DBAC), a novel off-policy actor-critic algorithm.
The main idea is to inhibit overestimation for the sake of faster and more robust learning by incorporating critic uncertainty arising from both limited samples (epistemic uncertainty) and environmental stochasticity (aleatoric uncertainty). We utilize Bayesian dropout to capture epistemic uncertainty, while the heteroscedastic output models the total predictive uncertainty, which serves pessimistic critic and policy updates, enabling stability and robustness in learning. Moreover, we used a normal distribution to represent the predictive critic uncertainty, but our analysis is valid for all sub-Gaussian critic distributions. Finally, we derived an upper bound for overestimation, demonstrating that an adequate level of pessimism mitigates overestimation without succumbing to underestimation, thus facilitating computation- and sample-efficient learning.

Limitations & Future work Our ablation studies demonstrate the effects of the dropout rate and pessimism, revealing the sensitivity of the learning procedure to these parameters. For each specific environment and optimization method, there exists an optimal level of pessimism and dropout rate. A promising direction for future research is to develop a grounded method to adjust the pessimism level for specific environments and agents, allowing the learner to adapt better to the environment. Another promising direction for future work is to explore methods for tracking epistemic uncertainty other than ensembles and Bayesian dropout. Bayesian neural networks (Depeweg et al., 2016), concrete dropout (Gal et al., 2017), and evidential deep learning (Sensoy et al., 2018; Amini et al., 2020) frameworks may offer more computation-efficient alternatives.

Broader Impact DBAC tackles critical challenges such as accelerating learning, improving stability, and ensuring computation efficiency. Our research not only pushes the boundaries of reinforcement learning but also promises significant implications for enhancing the safety and intelligence of robots, self-driving cars, and autonomous systems in healthcare and finance.

## Acknowledgments

This research has received no external funding.

## References

Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. *Advances in Neural Information Processing Systems*, 33:14927–14937, 2020.

Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Tuning bandit algorithms in stochastic environments. In *International conference on algorithmic learning theory*, pp. 150–165. Springer, 2007.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In *International conference on machine learning*, pp. 449–458. PMLR, 2017.

Lili Chen, Kimin Lee, Aravind Srinivas, and Pieter Abbeel. Improving computational efficiency in visual reinforcement learning via stored embeddings. *Advances in Neural Information Processing Systems*, 34:26779–26791, 2021a.

Xinyue Chen, Che Wang, Zijian Zhou, and Keith Ross. Randomized ensembled double q-learning: Learning fast without a model. *arXiv preprint arXiv:2101.05982*, 2021b.

Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. *Advances in neural information processing systems*, 31, 2018.

Kamil Ciosek, Quan Vuong, Robert Loftin, and Katja Hofmann. Better exploration with optimistic actor critic.
*Advances in Neural Information Processing Systems*, 32, 2019. Will Dabney, Mark Rowland, Marc Bellemare, and Rémi Munos. Distributional reinforcement learning with quantile regression. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018. Hao Dai, Jiashu Wu, Yang Wang, and Chengzhong Xu. Towards scalable and efficient deep-rl in edge computing: A game-based partition approach. *Journal of Parallel and Distributed Computing*, 168:108–119, 2022. Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Learning and policy search in stochastic dynamical systems with bayesian neural networks. *arXiv preprint arXiv:1605.07127*, 2016. Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? does it matter? *Structural safety*, 31(2): 105–112, 2009. Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *International conference on machine learning*, pp. 1587–1596. PMLR, 2018. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In *Proceedings of the 33rd International Conference on Machine Learning (ICML-16)*, 2016. Yarin Gal, Rowan McAllister, and Carl E. Rasmussen. Improving PILCO with Bayesian neural network dynamics models. In *Data-Efficient Machine Learning workshop, ICML*, April 2016a. Yarin Gal, Jiri Hron, and Alex Kendall. Concrete dropout. *Advances in neural information processing* systems, 30, 2017. Yarin Gal et al. Uncertainty in deep learning. 2016b. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In *International conference on machine learning*, pp. 1352–1361. PMLR, 2017. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. *arXiv* preprint arXiv:1812.05905, 2018. Qiang He, Huangyuan Su, Chen Gong, and Xinwen Hou. Mepg: A minimalist ensemble policy gradient framework for deep reinforcement learning. *arXiv preprint arXiv:2109.10552*, 2021. Takuya Hiraoka, Takahisa Imagawa, Taisei Hashimoto, Takashi Onishi, and Yoshimasa Tsuruoka. Dropout q-functions for doubly efficient reinforcement learning. *arXiv preprint arXiv:2110.02034*, 2021. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. *Advances in neural information processing systems*, 32, 2019. Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. *arXiv preprint arXiv:1907.00456*, 2019. Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. *arXiv preprint arXiv:1702.01182*, 2017. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? Advances in neural information processing systems, 30, 2017. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pp. 282–293. Springer, 2006. Ezgi Korkmaz. A survey analyzing generalization in deep reinforcement learning. 
https://arxiv.org/pdf/2401.02349.pdf, 2024. Petar Kormushev, Sylvain Calinon, and Darwin G Caldwell. Reinforcement learning in robotics: Applications and real-world challenges. *Robotics*, 2(3):122–148, 2013. Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy qlearning via bootstrapping error reduction. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/ c2073ffa77b5357a498057413bb09d3a-Paper.pdf. Arsenii Kuznetsov, Pavel Shvechikov, Alexander Grishin, and Dmitry Vetrov. Controlling overestimation bias with truncated mixture of continuous distributional quantile critics. In International Conference on Machine Learning, pp. 5556–5566. PMLR, 2020. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017. Qingfeng Lan, Yangchen Pan, Alona Fyshe, and Martha White. Maxmin q-learning: Controlling the estimation bias of q-learning. *arXiv preprint arXiv:2002.06487*, 2020. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Qiyang Li, Aviral Kumar, Ilya Kostrikov, and Sergey Levine. Efficient deep reinforcement learning requires regulating overfitting. *arXiv preprint arXiv:2304.10466*, 2023a. Sicen Li, Qinyun Tang, Yiming Pang, Xinmeng Ma, and Gang Wang. Realistic actor-critic: A framework for balance between value overestimation and underestimation. *Frontiers in Neurorobotics*, 16:1081242, 2023b. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *arXiv preprint arXiv:1509.02971*, 2015. Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Guided meta-policy search. *Advances in Neural Information Processing Systems*, 32, 2019. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. Thomas M Moerland, Joost Broekens, and Catholijn M Jonker. Efficient exploration with double uncertain value networks. *arXiv preprint arXiv:1711.10789*, 2017. Ted Moskovitz, Jack Parker-Holder, Aldo Pacchiano, Michael Arbel, and Michael Jordan. Tactical optimism and pessimism for deep reinforcement learning. *Advances in Neural Information Processing Systems*, 34: 12849–12863, 2021. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information* Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/ paper_files/paper/2016/file/8d8818c8e140c64c743113f563cf750f-Paper.pdf. Brendan O'Donoghue, Ian Osband, Remi Munos, and Volodymyr Mnih. 
The uncertainty bellman equation and exploration. In *International conference on machine learning*, pp. 3836–3845, 2018. Aldo Pacchiano, Philip J Ball, Jack Parker-Holder, Krzysztof Choromanski, and Stephen Roberts. Towards tractable optimism in model-based reinforcement learning. *arXiv preprint arXiv:2006.11911*, 2020. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in neural information processing systems*, 32, 2019. Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. *Advances in neural information processing systems*, 31, 2018. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1): 1929–1958, 2014. Kyle Stachowicz and Sergey Levine. Racer: Epistemic risk-sensitive rl enables fast driving with fewer crashes. arXiv preprint arXiv:2405.04714, 2024. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Yichuan Charlie Tang, Jian Zhang, and Ruslan Salakhutdinov. Worst cases policy gradients. *arXiv preprint* arXiv:1911.03618, 2019. Bahareh Tasdighi, Nicklas Werge, Yi-Shan Wu, and Melih Kandemir. Exploring pessimism and optimism dynamics in deep reinforcement learning. *arXiv preprint arXiv:2406.03890*, 2024a. Bahareh Tasdighi, Nicklas Werge, Yi-Shan Wu, and Melih Kandemir. Probabilistic actor-critic: Learning to explore with pac-bayes uncertainty. *arXiv preprint arXiv:2402.03055*, 2024b. Sebastian Thrun and Anton Schwartz. Issues in using function approximation for reinforcement learning. In Proceedings of the 1993 connectionist models summer school, pp. 255–263. Psychology Press, 2014. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ international conference on intelligent robots and systems, pp. 5026–5033. IEEE, 2012. Mark Towers, Jordan K. Terry, Ariel Kwiatkowski, John U. Balis, Gianluca de Cola, Tristan Deleu, Manuel Goulão, Andreas Kallinteris, Arjun KG, Markus Krimmel, Rodrigo Perez-Vicente, Andrea Pierré, Sander Schulhoff, Jun Jet Tai, Andrew Tan Jin Shen, and Omar G. Younis. Gymnasium, March 2023. URL https://zenodo.org/record/8127025. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016. Hado Van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, and Joseph Modayil. Deep reinforcement learning and the deadly triad. *arXiv preprint arXiv:1812.02648*, 2018. Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In *International conference on machine learning*, pp. 1995– 2003. PMLR, 2016. Peng Wei, Kun Guo, Ye Li, Jue Wang, Wei Feng, Shi Jin, Ning Ge, and Ying-Chang Liang. Reinforcement learning-empowered mobile edge computing for 6g edge intelligence. *Ieee Access*, 10:65156–65192, 2022. Qisong Yang, Thiago D Simão, Simon H Tindemans, and Matthijs TJ Spaan. Wcsac: Worst-case soft actor critic for safety-constrained reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10639–10646, 2021. 
Wenshuai Zhao, Jorge Peña Queralta, and Tomi Westerlund. Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In *2020 IEEE symposium series on computational intelligence (SSCI)*, pp. 737–744. IEEE, 2020.

## Appendix A Proofs

Proof of Theorem 4.1. Analyzing the Bellman update $\mathcal{T}^{*}\mathcal{Q}(s,a)$,
$$\begin{aligned}
\mathcal{T}^{*}\mathcal{Q}(s,a) &= R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\mathbb{E}_{q\sim\mathcal{Q}}\Big[\alpha\log\int_{\mathcal{A}}\exp(\alpha^{-1}q(s',a'))\,da'\Big]\Big]\\
&\leq R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\int_{\Omega_{\mathcal{Q}}}\int_{\mathcal{A}}\exp(\alpha^{-1}q(s',a'))\,da'\,d\mathcal{Q}\Big]\\
&= R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\int_{\mathcal{A}}\int_{\Omega_{\mathcal{Q}}}\exp(\alpha^{-1}q(s',a'))\,d\mathcal{Q}\,da'\Big]\\
&\leq R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\int_{\mathcal{A}}\exp\big(\alpha^{-1}\mu(s',a')+\tfrac{1}{2}\alpha^{-2}\sigma^{2}(s',a')\big)\,da'\Big]\\
&\leq R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp(\alpha^{-1}\mu(s',a'))\,da'\cdot\max_{a'}\exp\big(\tfrac{1}{2}\alpha^{-2}\sigma^{2}(s',a')\big)\Big)\Big]\\
&= R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\int_{\mathcal{A}}\exp(\alpha^{-1}\mu(s',a'))\,da'+\frac{1}{2\alpha}\max_{a'}\sigma^{2}(s',a')\Big]\\
&= R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\int_{\mathcal{A}}\exp(\alpha^{-1}\mu(s',a'))\,da'\Big]+\frac{\gamma}{2\alpha}\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\max_{a'}\sigma^{2}(s',a')\Big].
\end{aligned}$$
The first inequality follows from Jensen's inequality (using the concavity of the log function), while the following equality is a result of Tonelli's theorem. The second inequality results from the sub-Gaussian property in Definition 4.1, which proves the first statement of the theorem. The following inequality is a result of the mean value theorem for integrals. In the last equality, the first two terms are equal to $\mathcal{T}^{*}\mu(s,a)$. Therefore,
$$\epsilon(s,a)={\mathcal{T}}^{*}{\mathcal{Q}}(s,a)-{\mathcal{T}}^{*}\mu(s,a)\leq{\frac{\gamma}{2\alpha}}\mathbb{E}_{s^{\prime}\sim\tau(\cdot|s,a)}\Big[\operatorname*{max}_{a^{\prime}}\sigma^{2}(s^{\prime},a^{\prime})\Big].$$

Proof of Corollary 4.1.1. From Theorem 4.1, we can show that
$$\begin{aligned}
\mathcal{T}^{*}\tilde{\mathcal{Q}}(s,a) &\leq R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp\big(\alpha^{-1}(\mu(s',a')-\beta\sigma(s',a')+\tfrac{1}{2}\alpha^{-1}\sigma^{2}(s',a'))\big)\,da'\Big)\Big]\\
&= R(s,a)+\gamma\,\mathbb{E}_{s'\sim\tau(\cdot|s,a)}\Big[\alpha\log\Big(\int_{\mathcal{A}}\exp(\alpha^{-1}\mu^{\dagger}(s',a'))\,da'\Big)\Big]=\mathcal{T}^{*}\mu^{\dagger}(s,a),
\end{aligned}$$
where we have defined $\mu^{\dagger}(s',a')=\mu(s',a')-\beta\sigma(s',a')+\tfrac{1}{2}\alpha^{-1}\sigma^{2}(s',a')$. If $\beta\geq\max_{(s',a')}\tfrac{1}{2}\alpha^{-1}\sigma(s',a')$, then $\mu^{\dagger}(s',a')\leq\mu(s',a')$. So we can show that
$$\mathcal{T}^{*}\tilde{\mathcal{Q}}(s,a)\leq\mathcal{T}^{*}\mu^{\dagger}(s,a)\leq\mathcal{T}^{*}\mu(s,a).\tag{23}$$

## Appendix B Results Of Ablation Studies

![18_image_0.png](18_image_0.png)

Figure 3: Learning curves of DBAC with varying pessimism (β) parameter. Dropout is equal to 0.01 for all experiments here.

![18_image_1.png](18_image_1.png)

Figure 4: Learning curves of DBAC with varying dropout rates (for both critic and policy). Pessimism parameters are the same as in the main experiment; see Table 5.

## Appendix C Hyper-Parameters And Experiment Details

Hyper-parameter values used in the experiments per method are listed in Table 4. The dropout parameter is found by trial-and-error and matches the selection in the DROQ paper (Hiraoka et al., 2021). In addition, the target entropy and the pessimism parameter (only for DBAC) are summarized in Table 5.
Target entropy values are taken from the DROQ paper, which uses the same values (except for Humanoid-v4). The pessimism hyper-parameter for DBAC and the quantile drop parameter for TQC are found per environment by trial-and-error to obtain the best performance.

| Algorithm | Parameter | Value |
|----------------------|----------------------------------|--------------------------|
| DBAC, TQC, DROQ, SAC | Optimizer | Adam (Kingma & Ba, 2014) |
| | Critic Learning Rate | 1 × 10−3 |
| | Actor Learning Rate | 3 × 10−4 |
| | Discount Rate (γ) | 0.99 |
| | Target-Smoothing Coefficient (ρ) | 0.005 |
| | Replay Buffer Size | 1 × 106 |
| | Number of Hidden Layers | 2 |
| | Number of Hidden Units per Layer | 256 |
| | Mini-Batch Size | 256 |
| | Random Starting Data | 10000 |
| | UTD Ratio (G) | 1 |
| DROQ, DBAC | Dropout Rate | 0.01 |
| DROQ | Ensemble Size | 2 |
| | In Target Minimization | 2 |
| TQC | Ensemble Size | 5 |
| | Number of Quantiles | 25 |

Table 4: Experimental parameters per algorithm.

| Environment | Target Entropy (H̄) | Pessimism (β) | Dropout Rate | Quantile Drop |
|---------------------------|------------------------|-----------------|----------------|-----------------|
| Ant-v4 | -4 | 0.5 | 0.01 | 5/25 |
| Hopper-v4 | -1 | 0.6 | 0.01 | 5/25 |
| Walker2d-v4 | -3 | 0.5 | 0.00 | 5/25 |
| HalfCheetah-v4 | -3 | 0.1 | 0.00 | 0/25 |
| Humanoid-v4 | -8 | 1.0 | 0.01 | 12/25 |
| InvertedDoublePendulum-v4 | -1 | 0.2 | 0.01 | 3/25 |

Table 5: Target Entropy (H̄), Pessimism (β, for DBAC), Dropout Rate (for DBAC), and Quantile Drop (for TQC) per environment.

## Appendix D Source Code

Our results can be accessed publicly at https://github.com/authors-github/deep-bayesian-actor-critic-results. This code uses our in-house developed RL framework as a sub-repository, available at https://github.com/authors-github/rl-warehouse.
# Deep Goal-Oriented Clustering

Anonymous authors Paper under double-blind review

## Abstract

Clustering and prediction are two primary tasks in the fields of unsupervised and supervised machine learning. Although many of the recent advances in machine learning have been centered around those two tasks, the interdependent, mutually beneficial relationship between them is rarely explored. In this work, we hypothesize that a better prediction performance for the downstream task would inform a more appropriate clustering strategy. To this end, we introduce Deep Goal-Oriented Clustering (DGC), a probabilistic framework built upon a variational autoencoder with the latent prior being a Gaussian mixture distribution. DGC clusters the data by jointly predicting the *side-information* and modeling the inherent data structure in an end-to-end fashion. We show the effectiveness of our model on a range of datasets by achieving good prediction accuracies on the side-information, while, more importantly in our setting, simultaneously learning congruent clustering strategies that are on par with the state-of-the-art. We also apply DGC to a real-world breast cancer dataset and show that the discovered clusters carry clinical significance.

## 1 Introduction

Many of the advances in supervised learning in the past decade are due to the development of deep neural networks (DNN), a class of hierarchical function approximators that are capable of learning complex input-output relationships. Prime examples of such advances include image recognition (Krizhevsky et al., 2012), speech recognition (Nassif et al., 2019), and neural translation (Bahdanau et al., 2015). However, with the explosion of the size of modern datasets, it becomes increasingly unrealistic to manually annotate all available data for training (Russakovsky et al., 2015). Therefore, understanding inherent data structure through unsupervised clustering is of increasing importance. Applying DNNs to unsupervised clustering has been studied in the past few years (Caron et al., 2018; Law et al., 2017; Shaham et al., 2018; Tsai et al., 2021), centering around the concept that the input space in which traditional clustering algorithms operate is of importance. Hence, learning this space from data is desirable. Despite the improvements these approaches have made on benchmark clustering datasets, the ill-defined, ambiguous nature of clustering remains a challenge. Such ambiguity is particularly problematic in scientific discovery, sometimes requiring researchers to choose from different, but potentially equally meaningful, clustering results when little information is available a priori (Ronan et al., 2016). When facing such ambiguity, using side-information to reduce clustering ambivalence proves to be a fruitful direction (Xing et al., 2002; Khashabi et al., 2015; Jin et al., 2013). In general, side-information can be categorized as direct or indirect with respect to the final clustering task. Direct side-information straightforwardly details how the data samples should be clustered, and is usually available in terms of constraints, such as the *must-link* and the *cannot-link* constraints (Wang & Davidson, 2010; Wagstaff & Cardie, 2000), or via a pre-conceived notion of similarity (Xing et al., 2002). However, such direct information requires advanced prior knowledge about the clustering task and intensive manual labeling, thus it is rarely available in reality a priori.
Consequently, we focus on indirect side-information, which we define as information that carries useful signals on how the clusters should be formed, but whose relation to the clustering task needs to be *learned* and cannot be directly utilized. For instance, if the task is to cluster a group of patients, the side-information could be a medical test with a continuous-valued outcome. In this case, we want to use the regression task of predicting the test outcomes to aid the clustering process. To this end, we design a framework that learns from such indirect side-information and incorporates the learned knowledge into the final clustering result.

Main Contributions We propose *Deep Goal-Oriented Clustering* (DGC) to incorporate indirect side-information when forming a pertinent clustering strategy. Specifically: 1) We combine supervision via side-information and unsupervised data structure modeling in a probabilistic manner; 2) We make minimal assumptions on what form the supervised side-information might take, and assume no explicit correspondence between the side-information and the clusters; 3) We train DGC end-to-end so that the model simultaneously learns from the available side-information while forming a desirable clustering strategy.

## 2 Related Work

We divide related works in the literature into two categories: 1) methods that utilize direct side-information to form better, less ambiguous clusters (e.g., pairwise constraints); 2) methods that learn from provided labels to lessen the ambiguity in the formed clusters, but rely on the *cluster assumption* (detailed below), and usually assume that the provided discrete labels are the *ground truth labels*. Both classes of methods exclude the possibility of learning from indirectly related, but still informative, side-information. Further discussions on the difference between DGC and semi-supervised clustering methods can be found in Sec. B in the Appendix.

Side-information for clustering Using pairwise constraints or similarities as side-information to form better clusters has been studied. Wagstaff & Cardie (2000) considered both must-link and cannot-link constraints in the context of K-means clustering. Motivated by image segmentation, Orbanz & Buhmann (2007) proposed a probabilistic model that can incorporate must-link constraints. Khashabi et al. (2015) proposed a nonparametric Bayesian hierarchical model to incorporate pairwise cluster constraints. Vu et al. (2019) utilized constraints and cluster labels as side information. Mazumdar & Saha (2017) gave complexity bounds when provided with an oracle that can be queried for pairwise constraints. Wasid & Ali (2019) incorporated pairwise similarities through the use of fuzzy sets. Manduchi et al. (2021a) incorporated constrained clustering using a deep Gaussian mixture model and optimized the model using stochastic gradient variational inference. Zhang et al. (2021) provided a review of deep learning frameworks for constrained clustering. Although the term "side information" is used in this review, it refers to constraints such as pairwise or triplet constraints. In supervised clustering, the side-information is the a priori known complete clustering for the training set, which is then used as a constraint to learn a mapping between the data and the given clustering (Finley & Joachims, 2005). In contrast, importantly, we do not assume any known constraints a priori.
Instead, we let the model *learn* what it deems useful from the side-information to guide the clustering process. Therefore, the goal of this work is to leverage informative side-information that is not in the form of constraints for learning better clustering strategies.

The *cluster assumption* If there exists a set of semantic labels associated with the data (e.g., the digit information for MNIST images), the *cluster assumption* states that the decision boundaries for the semantic labels should not cross high-density regions, but instead lie in low-density regions (Färber et al., 2010; Chapelle et al., 2006). As a concrete example, Kingma et al. (2014) introduced a hierarchical generative model with two variational layers. Originally meant for semi-supervised classification tasks, it can also be used for clustering, in which case all labels are treated as missing since they are the cluster indices. This implies it has to rely strictly on the *cluster assumption*. We show that this approach is a special case of our framework without the probabilistic ensemble component (see Sec. 4.2). The cluster assumption is restrictive, especially in the case of utilizing indirect side-information, where the information is informative but does not directly correspond to the clusters. There exists a line of non-deep-learning approaches that attempted to relax the cluster assumption (Varol et al., 2017; Joulin et al., 2010; DeSantis et al., 2012; Chapfuwa et al., 2020). Bair (2013) systematically reviewed semi-supervised clustering methods that find clusters associated with an outcome variable. For instance, Sansone et al. (2016) proposed to model the cluster indices and the class labels separately, underscoring the possibility that each cluster may consist of multiple class labels. Nevertheless, unlike DGC, their approach cannot make use of continuous side-information. More importantly, the aforementioned approaches cannot be easily scaled to large, high-dimensional datasets, motivating us to develop a probabilistic, deep network-based approach that does not rely on the cluster assumption and can be applied to modern benchmark datasets.

Joint modeling Previous works on joint modeling/latent space sharing between unsupervised and supervised signals exist. Blei & McAuliffe (2007) incorporated supervision into the latent Dirichlet allocation (LDA) model for document classification. Le et al. (2018) showed that an autoencoder that jointly predicts the targets and the inputs improves performance. Xie & Ma (2019) jointly modeled the reconstruction of a sentence pair and the prediction of the pair's similarity in a VAE (Kingma & Welling, 2014) framework. Nagpal et al. (2020) jointly learn deep nonlinear representations to estimate relative risks in time-to-event prediction problems through a mixture modeling approach. We extend the joint modeling literature to clustering and challenge the commonly made cluster assumption. Most similar to our work, Manduchi et al. (2021b) developed a joint modeling framework based on a VAE with a Gaussian mixture prior. Aside from the fact that the supervised signals they utilize are survival data, the most critical difference between their work and DGC is how q(c|y, z) (or, in their case, q(c|t, z)) is computed. In Manduchi et al. (2021b), q(c|t, z) is simply chosen to be the unsupervised probability p(c|z, t), the same choice made in the original unsupervised VaDE model (Jiang et al., 2017). As shown in Jiang et al.
(2017), simply choosing the variational probability q(c|x) in VaDE to be p(c|x) maximizes the ELBO. However, as we show in Sec. 4.3, the same choice, as made in Manduchi et al. (2021b), is sub-optimal in the presence of side-information. In comparison, for DGC we analytically derive the optimal solution for q(c|y, z) in terms of maximizing the ELBO (see Sec. 4.3). We show in Sec. 5.2, specifically Tab. 1, that it is difficult for neural networks to recover this analytically derived optimal solution.

## 3 Background & Problem Setup

## 3.1 Background—Variational Deep Embedding

The backbone of DGC is the *variational auto-encoder* (VAE) (Kingma & Welling, 2014) with the prior distribution of the latent code being a Gaussian mixture distribution, introduced in Jiang et al. (2017) as VaDE. We next briefly review VaDE. We adopt the notation that lower case letters denote samples from their corresponding distributions; bold, lower case letters denote random variables/vectors; and bold upper case letters denote random matrices.

VaDE assumes the prior distribution of the latent code, z, belongs to the family of Gaussian mixture distributions, i.e., $p(\mathbf{z}) = \sum_c p(\mathbf{z}|c)p(c) = \sum_c \pi_c \mathcal{N}(\mu_c, \sigma_c^2 I)$, where c is a random variable indexing the mixture component p(z|c) that is assumed to be a normal distribution with mean $\mu_c$ and variance $\sigma_c^2$. The prior probability of each component is assumed to be $\pi_c$. VaDE allows for the clustering of the input data in the latent space, with each component of the Gaussian mixture prior representing a cluster. Since a VAE-based model can be efficiently summarized in terms of its generative and inference processes, we describe VaDE from this perspective. Given an input $\mathbf{x} \in \mathbb{R}^d$, the following decomposition of the joint probability p(x, z, c) details VaDE's generative process: p(x, z, c) = p(x|z)p(z|c)p(c). In words, we sample the component index c from a prior categorical distribution p(c), sample the latent code z from the component p(z|c), and lastly reconstruct the input x through the reconstruction network p(x|z). To perform inference, VaDE is constructed to maximize the log-likelihood of the input data x by maximizing its *evidence lower bound* (ELBO). With certain assumptions on the prior and variational posterior distributions, the ELBO admits a closed-form expression in terms of the parameters of those distributions. We refer readers to Jiang et al. (2017) for additional details.

## 3.2 Problem Setup

We denote the side-information as y, where y can be discrete labels or continuous values. Our goal is to leverage y to inform a better clustering strategy. Abstractly, given the input, side-information random variable pair (x, y), we seek to divide the input space of x into non-overlapping subspaces that are meaningful in explaining y. In other words, given a sampled dataset $\{x_i, y_i\}_{i=1}^{n}$ of size n, we use the prediction task of predicting the side-information $y_i$ using the corresponding input $x_i$ as a *teaching agent* to guide the process of grouping the input set $\{x_i\}_i$ into clusters that optimally explain $\{y_i\}_i$.

![3_image_0.png](3_image_0.png)

Figure 1: The Bayesian network that underlies the generative process of DGC. θ and π together constitute the generative parameters. Solid lines indicate dependencies among the random variables whereas dashed lines indicate dependencies between the random variables and the generative parameters.
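To make the generative side of VaDE concrete, the following minimal numpy sketch samples from p(x, z, c) = p(x|z)p(z|c)p(c); the mixture parameters and the `decoder` stand-in are hypothetical placeholders, not the trained networks from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GMM-prior parameters for K clusters in a D-dim latent space.
K, D = 3, 2
pi = np.array([0.5, 0.3, 0.2])       # mixture weights p(c)
mu = rng.normal(size=(K, D))         # component means mu_c
sigma2 = 0.1 * np.ones((K, D))       # isotropic variances sigma_c^2

def decoder(z):
    # Stand-in for the reconstruction network p(x|z); VaDE would use a
    # trained neural network here.
    return np.tanh(z @ np.ones((D, 4)))

# VaDE's generative process: c ~ p(c), z ~ p(z|c), x ~ p(x|z).
c = rng.choice(K, p=pi)
z = rng.normal(mu[c], np.sqrt(sigma2[c]))
x = decoder(z)
print(f"cluster {c}, latent {z}, observation {x}")
```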
Since our goal is to discover the optimal subspace-structure, or clusters, without knowing a priori if such a structure indeed exists, a probabilistic framework is more appropriate due to its ability to reason with uncertainty. To this end, we extend the VaDE framework to incorporate the side-information y. Foundationally, we assume x and y are correlated, i.e., the input x carries predictive information with respect to the side-information y. Since the latent code z is optimized to inherit sufficient information from which the input x can be reconstructed, it is reasonable to assume that z also inherits that predictive information. This implies that x and y are conditionally independent given z, i.e., p(x, y|z) = p(x|z)p(y|z).

## 4 Deep Goal-Oriented Clustering

## 4.1 Generative Process

DGC clusters the input x by clustering its latent representation z. As motivated in Sec. 3.2, we assume that y manifests differently with respect to the different clusters of z, and by extension, x. This is to say, when learning a predictive functional mapping from z to y, we assume that the ground truth transformation function, $g_c$, is different for each cluster indexed by c. As a result, we learn a different mapping function for each cluster. The overall generative process of our model is as follows: 1. Generate c ∼ Cat(π); 2. Generate z ∼ p(z|c); 3. Generate x ∼ p(x|z); 4. Generate y ∼ p(y|z, c). The Bayesian network that underlies DGC is shown in Fig. 1, and the joint distribution of x, y, z, and c can be decomposed as: p(x, y, z, c) = p(y|z, c)p(x|z)p(z|c)p(c).

## 4.2 Variational Lower Bound & Inference

We learn a variational distribution over the latent code z and the cluster index c. The joint variational posterior distribution q(z, c|x, y) can be factorized as q(z, c|x, y) = q(z|x, y) · q(c|z, x, y). Inspired by the usual regression and autoencoding setups where x is the sole input, we design DGC to distill information from x alone to learn latent representations that are aware of both tasks. Therefore, we use x as the *sole* input to the encoder that maps x to z and omit the variable y in q(z|x, y) for the rest of this work. We demonstrate the advantage of this design over using both x and y as input (i.e., the concatenation approach) in Sec. 5.3. With this setup, we have the following variational lower bound (see Sec. A in the Appendix for a detailed derivation)

$$\log p(\mathbf{x},\mathbf{y})\geq\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}=\mathcal{L}_{\mathrm{ELBO}}\,.\tag{1}$$

The first term in $\mathcal{L}_{\mathrm{ELBO}}$ allows for a probabilistic ensemble of predictors for y based on the cluster index. This can be seen as follows

$$\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)=\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\left[\mathbb{E}_{q(c|\mathbf{x},\mathbf{z},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)\right]=\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\left[\sum_{c'}\lambda_{c'}\log p(\mathbf{y}|\mathbf{z},c')\right]\approx\frac{1}{M}\sum_{l=1}^{M}\left[\sum_{c'}\lambda_{c'}\log p(\mathbf{y}|\mathbf{z}^{(l)},c')\right]\tag{2}$$

where $\lambda_{c'} = q(c=c'|\mathbf{x},\mathbf{z},\mathbf{y})$ and l indexes the Monte Carlo samples used to approximate the expectation with respect to q(z|x).
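As a sanity check on Eq. 2, the following minimal numpy sketch computes the Monte Carlo estimate of the probabilistic ensemble term; the per-cluster log-likelihood `log_p_y_given_zc` and the weight function `q_c` are hypothetical stand-ins for the task networks and for q(c|x, z, y).

```python
import numpy as np

rng = np.random.default_rng(1)

K, M, D = 3, 10, 2                       # clusters, MC samples, latent dim
mu_z, sigma_z = np.zeros(D), np.ones(D)  # parameters of q(z|x) for one input

def log_p_y_given_zc(z, c):
    # Hypothetical stand-in for the c-th task network's log p(y|z, c).
    return -0.5 * np.sum((z - c) ** 2)

def q_c(z):
    # Hypothetical stand-in for the weights lambda_c' = q(c|x, z, y);
    # uniform here purely for illustration.
    return np.full(K, 1.0 / K)

# Eq. 2: average over M samples z^(l) ~ q(z|x) of the lambda-weighted
# per-cluster log-likelihoods.
estimate = 0.0
for _ in range(M):
    z = rng.normal(mu_z, sigma_z)
    estimate += sum(w * log_p_y_given_zc(z, c) for c, w in enumerate(q_c(z)))
estimate /= M
print("probabilistic ensemble term estimate:", estimate)
```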
The probabilistic ensemble allows the model to maintain necessary uncertainty until an unambiguous clustering structure is captured. As a side note, the variational lower bound described in Eq. 1 holds regardless of the prior distribution we choose for z. Although we choose the mixture distribution as the prior in this work, choosing z ∼ N(0, I) and disregarding the probabilistic ensemble component would recover the exact model introduced in Kingma et al. (2014) (when all labels are missing), which is therefore a special case of DGC.

## 4.3 Mean-Field Variational Posterior Distributions

Following the VAE (Kingma & Welling, 2014), we choose q(z|x) to be $\mathcal{N}\left(\mathbf{z}|\tilde{\mu}_z, \tilde{\sigma}_z^2 I\right)$ where $\left[\tilde{\mu}_z, \tilde{\sigma}_z^2\right] = h_\theta(\mathbf{x};\theta)$. $h_\theta$ is parameterized by a feed-forward neural network with weights θ. Although it may seem unnatural to use a unimodal distribution to approximate a multimodal distribution, when the learned q(c|x, z, y) becomes discriminative, dissecting $\mathcal{L}_{\mathrm{ELBO}}$ in the following way indicates such an approximation will not incur a sizeable information loss (see Sec. A in the Appendix for the derivation):

$$\mathcal{L}_{\mathrm{ELBO}}=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log p(\mathbf{x}|\mathbf{z})-\mathbb{KL}\left(q(c|\mathbf{x},\mathbf{z},\mathbf{y})||p(c)\right)-\sum_{c'}\lambda_{c'}\mathbb{KL}\left(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}|c')\right)\tag{3}$$

where $\lambda_{c'}$ denotes q(c = c'|x, z, y). Analyzing the last term in Eq. 3, we notice that if the learned variational posterior q(c|x, z, y) is very discriminative and puts most of its weight on one specific index c, all but one KL term in the weighted sum will be close to zero. Therefore, choosing q(z|x) to be unimodal to minimize that specific KL term is appropriate, as p(z|c) is a unimodal normal distribution for all c.

Choosing q(c|x, z, y) appropriately requires us to analyze the proposed $\mathcal{L}_{\mathrm{ELBO}}$ in greater detail (see Sec. A in the Appendix for a detailed derivation):

$$\mathcal{L}_{\mathrm{ELBO}}=\underbrace{\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)}_{(1)}+\underbrace{\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}}_{(2)}-\underbrace{\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\mathbb{KL}\left(q(c|\mathbf{x},\mathbf{z},\mathbf{y})||p(c|\mathbf{z})\right)}_{(3)}.\tag{4}$$

We make two observations: 1) term (2) does not depend on c; and 2) the expectation over q(z|x) does not depend on c, and thus has no influence over our choice of q(c|x, z, y). Therefore, we choose q(c|x, z, y) to maximize (1) − (3) and ignore the expectation over q(z|x). Casting finding q(c|x, z, y) as an optimization problem, we have

$$\begin{aligned}\min_{q(c|\mathbf{x},\mathbf{z},\mathbf{y})}\;&f_{0}(q)=\mathbb{KL}\left(q(c|\mathbf{x},\mathbf{z},\mathbf{y})||p(c|\mathbf{z})\right)-\mathbb{E}_{q(c|\mathbf{x},\mathbf{z},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)\\ \text{s.t.}\;&\sum_{c}q(c|\mathbf{x},\mathbf{z},\mathbf{y})=1,\ q(c|\mathbf{x},\mathbf{z},\mathbf{y})\geq0,\ \forall c\end{aligned}\tag{5}$$

The objective functional $f_0$ is convex over the probability space of q, as the *Kullback–Leibler divergence* is convex in q and the expectation is linear in q. Analytically solving the convex program (5) (see Sec.
A in the Appendix for a detailed derivation), we obtain

$$q(c=k|\mathbf{x},\mathbf{z},\mathbf{y})=\frac{p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}{\sum_{k}p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}\;.\tag{6}$$

To facilitate understanding, we interpret Eq. 6 in two extremes. If y is evenly distributed across clusters, i.e., the ground truth transformations $g_c$ are the same for all c, then q(c = k|x, z, y) = p(c = k|z), recovering the solution in Jiang et al. (2017). However, if the side-information is informative while z itself does not admit a clustering structure (p(c|z) is uniform), the likelihoods $\{p(\mathbf{y}|\mathbf{z}, c=k)\}_k$ will dominate q. Therefore, one could interpret any in-between scenario as learning to weight the side-information and the inherent data structure based on how strong their signals are.

Last but not least, one would naturally use the ground truth side-information at both training and test times if it is available. Nevertheless, to make our approach as applicable as possible, we follow the typical regression setup and do not assume access to the side-information at test time, which would prohibit us from evaluating q(c|x, z, y) based on Eq. 6 for a test sample x. To remedy this, we pre-train a simple neural network, f, to predict y based on the input sample x. At test time, we then use ỹ = f(x) as a surrogate for the ground truth y to evaluate q(c|x, z, ỹ) for a test sample x.

## 4.4 A Confidence Booster

For a given pair (x, y), we want the variational posterior indicated by Eq. 6 to be confident. In other words, the entropy of the probability distribution q(c|x, z, y) should be small when appropriate. To encourage this behavior, we add the entropy of the side-information network for a given y across clusters, i.e., $\mathbb{H}\left(\mathbf{norm}\{p(\mathbf{y}|\mathbf{z},c=k)\}_{k=1}^{K}\right)$, where K denotes the number of clusters, $\mathbb{H}$ denotes the entropy operator, and **norm** denotes the softmax function that transitions $\{p(\mathbf{y}|\mathbf{z},c=k)\}_{k=1}^{K}$ into a proper probability distribution, to the ELBO proposed in (4) as a regularization term. We further note that when y is continuous, p(y|z, c) denotes the likelihood of y under the assumed probability distribution, i.e., p(·|z, c). The introduced regularization in this continuous case encourages high likelihood of y only under one cluster (as the likelihood by definition is non-negative, and a low entropy of the normalized p(y|z, c) for a given y across the clusters c translates to y having high likelihood under only one cluster), similarly to the case of discrete y. We also note that we could directly regularize the entropy of q(c|x, z, y) or p(c|z), but we find regularizing the side-information network works best. The total loss we optimize is

$$\mathcal{L}_{\mathrm{Loss}}=\mathcal{L}_{\mathrm{ELBO}}-\mathbb{H}\left(\mathbf{norm}\{p(\mathbf{y}|\mathbf{z},c=k)\}_{k=1}^{K}\right)\tag{7}$$

Since $\mathbb{H}\left(\mathbf{norm}\{p(\mathbf{y}|\mathbf{z},c=k)\}_{k=1}^{K}\right)$ is non-negative, $\mathcal{L}_{\mathrm{Loss}}$ is still a proper lower bound, and the convexity of $\mathcal{L}_{\mathrm{ELBO}}$ with respect to q(c|x, z, y) is preserved. Moreover, we note that the added regularizer does not depend on q(c|x, z, y), hence the solution proposed in Eq. 6 is still the global optimum for q(c|x, z, y) with respect to $\mathcal{L}_{\mathrm{Loss}}$ in Eq. 7.

## 5 Experiments

We investigate the efficacy of DGC on a range of datasets. We refer readers to Sec. C in the Appendix for the experimental details, such as the data split, network architectures, and the choices of learning rate and optimizer.
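Before moving to the individual experiments, the two quantities derived in Secs. 4.3 and 4.4 can be summarized in a short numpy sketch; treating **norm** as a normalization of the per-cluster likelihoods (computed from log-likelihoods for numerical stability) is an assumption of this sketch rather than a detail fixed by the text.

```python
import numpy as np

def optimal_q_c(log_p_y_given_zc, log_p_c_given_z):
    # Eq. 6: q(c=k|x,z,y) proportional to p(y|z,c=k) * p(c=k|z),
    # evaluated in log-space and renormalized for numerical stability.
    logits = log_p_y_given_zc + log_p_c_given_z
    logits = logits - logits.max()
    q = np.exp(logits)
    return q / q.sum()

def entropy_regularizer(log_p_y_given_zc):
    # Eq. 7's penalty: entropy of the normalized per-cluster likelihoods.
    p = np.exp(log_p_y_given_zc - log_p_y_given_zc.max())
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Toy numbers: cluster 1's task network explains y best, so the posterior
# concentrates on it and the entropy penalty is small.
log_lik = np.array([-4.0, -0.5, -3.0])          # log p(y|z, c=k)
log_prior = np.log(np.full(3, 1.0 / 3.0))       # p(c|z), uniform here
print(optimal_q_c(log_lik, log_prior))
print(entropy_regularizer(log_lik))
```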
Moreover, since the number of clusters desired is a hyperparameter in DGC, we provide an additional study in the Appendix using the Street View House Number (SVHN) dataset (Netzer et al., 2011) to investigate the impact of this hyperparameter on DGC.

A general note on side-information and the goal of each experiment We briefly describe the side-information used in, and the goal of, each experiment:

- In Sec. 5.1, the side-information indicates which digit each image belongs to. This experiment tests if DGC can use side-information to improve an already well-performing VaDE.
- In Sec. 5.2, the side-information is the continuous value associated with each data point of the Pacman annuli. Our goal is to show that DGC can make use of continuous side-information beyond discrete labels.
- In Sec. 5.3, the side-information is the fine-grained labels from the CIFAR 100-20 dataset. This experiment is designed to demonstrate that DGC can utilize fine-grained details as side-information to help form meta clusters (i.e., the super-classes) in a natural manner.
- In Sec. 5.4, the side-information is the binary cancer recurrence outcome. The goal of this experiment is to show that DGC can discover clusters that unveil patient characteristics that go beyond the recurrence information in a real-world application.

![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

Figure 2: Confusion matrices for Noisy MNIST. The abbreviations 2B/7B in the row/column labels denote digits 2/7 with background. Rows represent the predicted clusters, and columns represent the ground truth.

## 5.1 Noisy MNIST

We introduce a synthetic data experiment using the MNIST dataset, which we name the noisy MNIST, to illustrate that the supervised part of DGC can enhance the performance of an otherwise well-performing unsupervised counterpart. We extract images that correspond to the digits 2 and 7 from MNIST. For each digit, we randomly select half of the images and superpose CIFAR-10 images onto those images as noisy backgrounds (see the Appendix for image samples). The binary side-information y indicates which digit each image belongs to. Our goal is to identify both digit and background, i.e., to cluster the images into 4 clusters: digits 2 and 7, with and without background. We parameterize the task networks, $\{p(\mathbf{y}|\mathbf{z}, c=k)\}_{k=1}^{4}$, as Bernoulli distributions where we learn the probabilities.

VaDE already performs well, achieving a clustering accuracy of 95.6% when the number of clusters is set to 4. Fig. 2a shows that VaDE distinguishes well between the regular and noisy backgrounds, and the incorrectly clustered samples are mainly due to its inability to differentiate the underlying digits. This is reasonable: if the background signal dominates, the network may focus on the background for clustering as it has no explicit knowledge about the digits. DGC performs nearly perfectly with the added side-information, obtaining a clustering accuracy of 99.55%. DGC handles the difficulty of distinguishing between digits under the presence of strong, noisy backgrounds well, as it makes almost no mistakes in doing so (Fig. 2b). Importantly, the side-information does not overshadow the original advantage of VaDE (i.e., distinguishing whether the images contain background or not). Instead, it enhances the overall model in cases where VaDE struggles. Furthermore, as detailed in Sansone et al.
(2016) and earlier sections, most existing approaches that take advantage of available labels rely on the cluster assumption, which assumes a one-to-one correspondence between the clusters and the labels used for supervision. This experiment is a concrete example that demonstrates DGC does not need to rely on such an assumption to form a sound clustering strategy. Instead, DGC is able to work with side-information that is only partially indicative of what the final clustering strategy should be, making DGC more applicable to general settings. We also include an ablation study where we ablate the unsupervised component of DGC (i.e., VaDE) in the Appendix to show both the supervised and the unsupervised components are needed for good clustering performance.

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 3: 3D Pacman truth (a) and DGC samples (b).

Noisy MNIST—Ablation Study We further explore the behavior of DGC without its unsupervised part to demonstrate the importance of capturing the inherent data structure beyond the side-information. We ablate the probabilistic components (i.e., we remove the decoder and the loss terms associated with it, so that only the supervision informs how the clusters are formed in the latent space) and perform clustering using only the supervised part of our model. We find that clustering accuracy degrades from the nearly-perfect accuracy obtained by the full model to 50%. Coupled with the improvements over VaDE, this indicates that each component of our model contributes to the final accuracy and that our original intuition that side-information and clustering may reinforce each other is correct.

## 5.2 Pacman

In this experiment we demonstrate DGC's ability to utilize *continuous* side-information, i.e., regression tasks, to aid clustering. We introduce the Pacman dataset, a Pacman-shaped dataset consisting of two annuli. Each point on the two annuli is associated with a continuous value (see the Appendix for a detailed explanation) as the side-information. These values are constructed such that they decrease linearly (from 1 to 0) in one direction for the inner (yellow) annulus, and increase exponentially (from 0 to 1) in the opposite direction for the outer (purple) annulus (see Fig. 3a for a 3D illustration of the dataset). We use linear/exponential rates for the side-information to not only test our model's ability to detect different trends, but also to test its ability to fit different rates of change in values.

Our goal is to separate the two annuli depicted in Fig. 3a. This is challenging as the annuli were deliberately chosen to be very close to each other. We first applied various traditional clustering methods, such as the K-means and hierarchical clustering algorithms, to cluster the 2D Pacman-shaped data (i.e., not using the side-information, but only the 2D Cartesian coordinates). Apart from hierarchical clustering with single linkage (and no other linkage criterion), none of the traditional methods managed to separate the two annuli. Moreover, these approaches also produce different clustering results as they are based on different distance metrics (see the Appendix for these results). This phenomenon echoes the fundamental problem of unsupervised clustering: the concept of clustering is inherently subjective, and different approaches can potentially produce different, but sometimes equally meaningful, clustering results.
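To illustrate this point, the following scikit-learn sketch on two synthetic concentric rings (radii mirroring the Appendix's setup; the Pacman "mouth" is omitted) shows single-linkage hierarchical clustering separating the rings while K-means cuts across them.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(0)

# Two close concentric rings with radii 0.8 and 1.0.
theta = rng.uniform(0, 2 * np.pi, size=(2, 2000))
inner = 0.8 * np.stack([np.cos(theta[0]), np.sin(theta[0])], axis=1)
outer = 1.0 * np.stack([np.cos(theta[1]), np.sin(theta[1])], axis=1)
X = np.concatenate([inner, outer])
y_true = np.repeat([0, 1], 2000)

def accuracy(pred):
    # Two-cluster accuracy up to label permutation.
    acc = (pred == y_true).mean()
    return max(acc, 1.0 - acc)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
single = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)
print("k-means:", accuracy(kmeans))         # ~0.5: cuts across both rings
print("single linkage:", accuracy(single))  # ~1.0: follows ring connectivity
```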
Applying DGC with the input x being the 2D Cartesian point coordinates and the side-information y being the aforementioned continuous values, and parameterizing the task networks, $\{p(\mathbf{y}|\mathbf{z}, c=k)\}_{k=1}^{2}$, as Gaussian distributions where we learn the means and the covariance matrices, DGC distinguishes the two annuli wholly based on the discriminative information carried by the side-information. Moreover, since our approach is generative in nature, we can generate samples from the model once it is trained. We show the generated samples in Fig. 3b, where the Pacman shape, the linear trend, and the exponential trend are adequately captured. This corroborates the model's ability to incorporate continuous side-information. This ability is highly attractive, as it lends itself to any general regression setting in which one believes the optimal clustering structure should be informed by the regression task.

Finally, we compare DGC to VaDE, its ablated version, and a baseline method to substantiate the efficacy of our proposed framework. We next describe the ablated version and the baseline method. First, although the solution to the convex program in Eq. 5 provides the globally optimal choice of q(c|x, z, y) from a theoretical standpoint, our proposed framework, specifically the proposed $\mathcal{L}_{\mathrm{ELBO}}$ (Eq. 1), holds for any choice of q(c|x, z, y). We thus ablate the convex program component of our model and parameterize q(c|x, z, y) directly using a neural network (named NN-DGC). Second, recall (from Sec. 4.2) that by choosing z ∼ N(0, I), the unsupervised part of DGC recovers exactly the semi-supervised (SS) approach introduced in Kingma et al. (2014) in the case when all labels (which correspond to the clusters in our case) are missing. Since SS approaches are not expected to perform well in a purely unsupervised setting, we include the probabilistic ensemble component as an augmentation (AUG-SS) to make the comparison more fair.

| Models | ACC |
|----------|--------------|
| VaDE | 50.4% ± 0% |
| NN-DGC | 81.6% ± 5.3% |
| AUG-SS | 82.3% ± 4.6% |
| DGC | 99.4% ± 0.3% |

Table 1: Pacman accuracies.

The results in Tab. 1 are obtained from training each model 100 times, and demonstrate that: 1) without the side-information y, VaDE cannot distinguish between the two annuli, demonstrating the importance of using side-information; 2) the convex program (Eq. 5) is crucial to the success of DGC: although technically possible, it is difficult for a neural network to find the same optimal distribution; 3) the choice of the prior on the latent code z is of importance, and the Gaussian mixture distribution is more suitable for modeling clusters than an isotropic Gaussian.

## 5.3 CIFAR 100-20

We apply DGC to the CIFAR 100-20 dataset, whose setup is ideal for demonstrating the advantage of being able to utilize useful side-information for clustering. Two types of labels are provided for each image: one indicating which of the 100 fine-grained classes and another indicating which of the 20 super-classes the image belongs to. Aligning with the clustering literature, our goal is to cluster the images into the 20 super-classes. Different from other approaches, our framework is able to utilize the fine-grained classes as the side-information to aid clustering. For baseline comparisons, we compare DGC to VaDE and a baseline K-means model. We also compare DGC to SCAN (Gansbeke et al., 2020), RUC (Park et al., 2020), and the current SOTA approach, SPICE (Niu & Wang, 2021). The results are shown in Tab. 2.
Last but not least, we include six additional studies where the modeling flexibility of DGC and, more critically, the importance of incorporating the side-information in a principled manner are demonstrated. We next expound upon the results and our findings.

First, by utilizing the fine-grained information, DGC expectedly outperforms VaDE by a significant margin, substantiating the advantage of using informative side-information. Second, to test whether it is only the side-information dominating the clustering accuracy, and to demonstrate the importance of seamlessly incorporating the side-information alongside the unsupervised modeling, we perform an ablation study where we apply the K-means algorithm to the last-hidden-layer embeddings of the input data obtained from the pre-trained network that provides DGC with the estimated side-information y at test time. As shown in Tab. 2, such a simple baseline does surprisingly well, outperforming VaDE and performing on par with SCAN. However, DGC still outperforms this baseline by a large margin, indicating that it is beneficial to model the inherent data structure beyond the side-information.

| Models | ACC |
|--------------|-------|
| K-means | 50.4% |
| VaDE | 45.2% |
| SCAN | 50.7% |
| RUC | 54.3% |
| SPICE | 58.4% |
| VaDE-Concat | 48.7% |
| SPICE-Concat | 59.1% |
| DGC | 62.7% |

Table 2: CIFAR 100-20 clustering accuracy.

Third, with the aid of the fine-grained information, DGC achieves better accuracy than the SOTA unsupervised method, SPICE, and two other leading methods. It is crucial to note that the methods we compare DGC to do not, and more importantly cannot, utilize side-information in a *principled* manner, substantiating the theme of this work: utilizing informative side-information helps clustering. To drive this point home, we concretely establish the importance of DGC being a principled approach in the second additional study. Last but not least, we emphasize again that we assume no access to the side-information at test time because we intend to make DGC as applicable as possible. In this experiment, we achieve a clustering accuracy of 67.1% if we were to use the ground-truth fine-grained labels as the side-information at test time.

We further substantiate the effectiveness of DGC through the following six additional studies.

Modeling flexibility As DGC is not restricted by the cluster assumption, we can use the 20 super-classes as the side-information to help cluster the images into 100 clusters. Compared to the clustering accuracy of 35.1% for VaDE, DGC obtains 47.6%, substantiating the utility of DGC in a scenario where using less expensive side-information can help categorize data in a more fine-grained manner.

Comparison to the simple concatenation strategy We now demonstrate the importance of incorporating side-information in a principled manner. An ad hoc alternative is to simply concatenate the side-information to the input and use the concatenated input for clustering; therefore, we compare DGC to this simple baseline using two models, VaDE (as it is the backbone of DGC, denoted as VaDE-Concat) and SPICE (as it is the SOTA method on CIFAR 100-20, denoted as SPICE-Concat). The results in Tab. 2 show that incorporating the side-information through concatenation results in marginal performance improvement while still underperforming DGC.
Considering DGC and VaDE-Concat share the same backbone and utilize the same amount of input information, the benefit of our approach is particularly clear given that DGC outperforms VaDE-Concat by a wide margin.

The Enhanced VaDE-Concat We note that we can further alter the generative process of VaDE-Concat such that the decoder is cluster-dependent, i.e., $p(x^*, z, c) = p(x^*|z, c)p(z|c)p(c)$ where $x^*$ denotes the concatenation of x and y. We note that the analytical solution to the variational distribution q(c|x) derived in Jiang et al. (2017) no longer holds, so we use a neural network to parameterize q(c|x) instead. The enhanced VaDE-Concat achieves a clustering accuracy of 54.5%, a noticeable improvement over VaDE-Concat while still lagging behind DGC. This echoes the result in Sec. 5.2 in that obtaining the analytically global optimum solution for q(c|x) is crucial. Despite the flexibility of neural nets, they can get stuck in local minima in an unforeseeable way that degrades the final result.

Non-informative side-information As alluded to in the introduction, DGC is motivated by the existence of useful side-information, and therefore DGC's performance naturally depends on the quality of the side-information y. Nevertheless, Eq. 6 suggests that DGC should resort back to VaDE when y is not informative. We conduct an experiment where, instead of using the fine-grained classes as y, we randomly generate integers between 0 and 99 as y. In this case, we obtain a clustering accuracy of 44.6% (compared to 45.2% for VaDE), suggesting that DGC indeed resorts back to VaDE for non-informative side-information.

![10_image_1.png](10_image_1.png)

![10_image_0.png](10_image_0.png)

Figure 4: Kaplan-Meier curves for DGC (a) & VaDE (b).

Misspecifying the number of clusters We study whether DGC is robust to the misspecification of the number of clusters. Instead of setting the number of clusters to 20, we misspecify the number of clusters as 30 and 50, and obtain clustering accuracies of 61.4% and 58.2% (versus the original clustering accuracy of 62.7%). DGC obtains a reasonable clustering accuracy even when the number of clusters is set to more than twice the expected number, demonstrating the robustness of DGC to such misspecification.

Jointly training the prediction network for the side-information In the main manuscript we pre-train a network to predict the side-information y for test time to make DGC as applicable as possible (again, we emphasize that one should use the real y at both the training and the test times if it is available). Here we jointly train the prediction network along with DGC and use the prediction ŷ, instead of the real y, at *both* the training and the test times to calculate q(c|x, ŷ, z) for a sample x. We obtain a clustering accuracy of 61.5%. This is comparable to the clustering accuracy of 62.7% achieved using separate pre-training. Hence, both strategies are viable.

## 5.4 Carolina Breast Cancer Study (CBCS)

We apply DGC to a real-world breast cancer dataset collected as part of the Carolina Breast Cancer Study (CBCS). The dataset consists of 1,713 patients, each of whom has 2-4 associated histopathological images and a list of biological markers such as the PAM50 gene expressions (Troester et al., 2018) and the estrogen-receptor (ER) status. We use the binary indicator for breast cancer recurrence as the side-information y.
Applying deep learning techniques, supervised or unsupervised, to analyze histopathological images of breast cancer has gained traction in recent years (Xie et al., 2019). Distinguished from those methods, our goal is to inspect whether the discovered clusters, whose formation is influenced both by the recurrence side-information and by the unsupervised reconstruction signal, carry meaningful information in terms of survival rate or gene expression beyond the recurrence information. Since DGC is not restricted by the cluster assumption, we train three clusters for analysis despite the binary side-information. We parameterize the task networks, $\{p(\mathbf{y}|\mathbf{z}, c=k)\}_{k=1}^{3}$, as Bernoulli distributions. See the Appendix for experimental details.

To investigate whether the three clusters that we discovered identify meaningful differences in tumor biology, we examine the differences in rates of cancer recurrence and features of tumor aggressiveness between the clusters. We also compare to the baseline clusters obtained from the purely unsupervised VaDE to corroborate the importance of the added side-information. Using a Kaplan-Meier estimator to estimate risk differences for time to cancer recurrence within three years, we obtained a p-value of 0.0024 for the differences in recurrence risk among the clusters obtained by DGC, and observed that Cluster 0 had the lowest risk of recurrence and Cluster 2 had the highest risk (see Fig. 4a). Furthermore, we observed substantial differences in recurrence risk at three years of follow-up between the clusters, particularly Clusters 0 and 2 (see Fig. 4a and the recurrence risk differences (RRD) in Tab. 3). By comparison, with a p-value of 0.073, the differences in recurrence risk among the clusters from VaDE are much less significant than those from DGC.

| Comparison | RRD (95% CI) |
|------------------------|--------------------|
| Cluster 1 vs. Cluster 2 | 11.3% (-4.4, 26.9) |
| Cluster 0 vs. Cluster 2 | 18.8% (-3.1, 40.7) |

Table 3: DGC recurrence risk differences (RRD) between clusters.

In terms of tumor characteristics, the cluster with the highest recurrence rate should have the most negative ER subtype, the highest grade, and the most Basal-like tumor subtype. We observed that for DGC, Cluster 0, which has the lowest recurrence rate, contained more indolent tumors, characterized by good-prognosis features such as ER positivity, low grade, and the Luminal A tumor subtype (see Tab. 4). In contrast, more aggressive tumor characteristics were featured in Clusters 1 and 2, such as negative ER status, high grade, and the Basal-like tumor subtype, although Cluster 1 appeared to be intermediate in some characteristics. Coupled with the differences in cancer outcomes, these differences in tumor characteristics indicate that the method successfully distinguished between tumors with low-risk features (Cluster 0) and tumors with intermediate- and high-risk features (Clusters 1 and 2). We also include the corresponding table characterizing the tumor characteristics of the clusters obtained from VaDE in Tab. 5. Contrary to the clusters obtained from DGC, Cluster 0 from VaDE, which has the highest recurrence rate (Fig. 4b), does not have any of the desired tumor characteristics for a high-recurrence group, i.e., it does not have the most high-grade patients, the most negative-ER-subtype patients, nor the most Basal-like-tumor-subtype patients (Cluster 2 has all those characteristics, while being a low-recurrence cluster). This corroborates the fact that DGC is able to find more meaningful clusters in a real-world application.
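As an illustration of the survival analysis used here, the following is a hedged sketch with the `lifelines` package on synthetic stand-in data; the CBCS data, the exact estimator of the risk differences, and the reported p-values are not reproduced by it.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(0)

# Synthetic stand-in: follow-up times, recurrence indicators, and cluster
# assignments for 300 hypothetical patients (not the CBCS data).
n = 300
cluster = rng.integers(0, 3, size=n)
time = rng.exponential(scale=5.0 - cluster).clip(max=3.0)  # shorter for higher k
event = (time < 3.0).astype(int)  # administratively censor at 3 years
df = pd.DataFrame({"time": time, "event": event, "cluster": cluster})

# One Kaplan-Meier curve per cluster, in the spirit of Fig. 4.
for k, grp in df.groupby("cluster"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["time"], event_observed=grp["event"], label=f"Cluster {k}")
    print(k, kmf.survival_function_at_times([1.0, 2.0, 3.0]).values)

# A log-rank test is one standard way to compare recurrence risk across
# the clusters (the paper's exact estimator may differ).
result = multivariate_logrank_test(df["time"], df["cluster"], df["event"])
print("p-value:", result.p_value)
```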
## 6 Conclusion

We introduced DGC, a probabilistic framework that allows for the integration of both the side-information and the unsupervised information when searching for the optimal clustering structure in the latent space. This is a relevant but daunting task, where previous attempts are either largely restricted to discrete, supervised, ground-truth labels or rely heavily on the side-information being provided as manually tuned pairwise constraints or similarities. To the best of our knowledge, this is the first deep network-based attempt to learn from indirect, but informative side-information to find the optimal clustering structure, all the while making minimal assumptions on either the form of the side-information or the relationship between the side-information and the clusters. This method is applicable to a variety of fields where an instance's input and task are defined but its membership, which should be influenced by both, is important and unknown. Training the model in an end-to-end fashion, we demonstrate on various datasets that DGC is capable of learning sensible clustering strategies that align with both the side-information and the inherent data structure.

## References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *CoRR*, abs/1409.0473, 2015.

Eric Bair. Semi-supervised clustering methods. *Wiley Interdisciplinary Reviews: Computational Statistics*, 5(5):349–361, 2013.

David M. Blei and Jon D. McAuliffe. Supervised topic models. In *NIPS*, 2007.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In *ECCV*, 2018.

Table 4: Tumor characteristics per cluster for DGC. Features are color-coded as low, intermediate, or high risk.

| | | Cluster 0 N(%) | Cluster 1 N(%) | Cluster 2 N(%) |
|---|---|---|---|---|
| ER Status | Positive | 20 (74.1) | 58 (58.0) | 43 (57.3) |
| | Negative | 7 (25.9) | 42 (42.0) | 32 (42.7) |
| Grade | Low | 8 (29.6) | 16 (16.0) | 4 (5.3) |
| | Medium | 7 (25.9) | 25 (25.0) | 26 (34.7) |
| | High | 12 (44.4) | 59 (59.0) | 45 (60.0) |
| Tumor Subtype | Luminal A | 14 (51.9) | 28 (28.0) | 17 (22.6) |
| | Luminal B | 7 (25.9) | 20 (20.0) | 18 (24.0) |
| | ER-/HER2+ | 1 (3.7) | 9 (9.0) | 3 (4.0) |
| | Basal-like | 4 (14.8) | 42 (42.0) | 37 (49.3) |

Table 5: Tumor characteristics per cluster for VaDE. Features are color-coded as low, intermediate, or high risk.

| | | Cluster 0 N(%) | Cluster 1 N(%) | Cluster 2 N(%) |
|---|---|---|---|---|
| ER Status | Positive | 24 (51.1) | 9 (56.3) | 88 (63.3) |
| | Negative | 23 (48.9) | 7 (43.8) | 51 (36.7) |
| Grade | Low | 6 (12.8) | 0 (0.0) | 22 (15.8) |
| | Medium | 8 (17.0) | 3 (18.8) | 47 (33.8) |
| | High | 33 (70.2) | 13 (81.2) | 70 (50.4) |
| Tumor Subtype | Luminal A | 8 (22.9) | 3 (23.1) | 44 (44.0) |
| | Luminal B | 5 (14.3) | 3 (23.1) | 16 (16.0) |
| | ER-/HER2+ | 1 (2.9) | 0 (0.0) | 9 (9.0) |
| | Basal-like | 21 (60.0) | 7 (53.8) | 31 (31.0) |

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. *ArXiv*, abs/2104.14294, 2021.

Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-supervised learning. 2006.
Paidamoyo Chapfuwa, Chunyuan Li, Nikhil Mehta, Lawrence Carin, and Ricardo Henao. Survival cluster analysis. *Proceedings of the ACM Conference on Health, Inference, and Learning*, 2020. Stacia M DeSantis, E. Andres Houseman, B. Coull, Catherine L. Nutt, and Rebecca A. Betensky. Supervised Bayesian latent class models for high-dimensional data. *Statistics in Medicine*, 31, 2012. Ines Färber, Stephan Günnemann, Hans-Peter Kriegel, Peer Kröger, Emmanuel Müller, Erich Schubert, Thomas Seidl, Arthur Zimek, and Ludwig-Maximilians-Universitaet Muenchen. On using class-labels in evaluation of clusterings. 2010. Thomas Finley and Thorsten Joachims. Supervised clustering with support vector machines. In *ICML '05*, 2005. Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In *ECCV*, 2020. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In *IJCAI*, 2017. Xin Jin, Jiebo Luo, Jie Yu, Gang Wang, Dhiraj Joshi, and Jiawei Han. Reinforced similarity integration in image-rich information networks. *IEEE Transactions on Knowledge and Data Engineering*, 25:448–460, 2013. Armand Joulin, Francis R. Bach, and Jean Ponce. Efficient optimization for discriminative latent class models. In *NIPS*, 2010. Daniel Khashabi, Jeffrey Yufei Liu, John Wieting, and Feng Liang. Clustering with side information: From a probabilistic model to a deterministic algorithm. *ArXiv*, abs/1508.06235, 2015. Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. *CoRR*, abs/1312.6114, 2014. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In *NIPS*, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In *NIPS*, 2012. Marc T. Law, Raquel Urtasun, and Richard S. Zemel. Deep spectral clustering learning. In *ICML*, 2017. Lei Le, Andrew Patterson, and Martha White. Supervised autoencoders : Improving generalization performance with unsupervised regularizers. 2018. Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, and Julia E. Vogt. Deep conditional gaussian mixture model for constrained clustering. *ArXiv*, abs/2106.06385, 2021a. URL https://api. semanticscholar.org/CorpusID:235417417. Laura Manduchi, Ricards Marcinkevics, Michela Carlotta Massi, Verena Gotta, Timothy Müller, Flavio Vasella, Marian Christoph Neidert, Marc Pfister, and Julia E. Vogt. A deep variational approach to clustering survival data. *ArXiv*, abs/2106.05763, 2021b. URL https://api.semanticscholar.org/CorpusID:235390687. Arya Mazumdar and Barna Saha. Query complexity of clustering with side information, 2017. Chirag Nagpal, Xinyu Li, and Artur W. Dubrawski. Deep survival machines: Fully parametric survival regression and representation learning for censored data with competing risks. IEEE Journal of Biomedical and Health Informatics, 25:3163–3175, 2020. URL https://api.semanticscholar.org/CorpusID:211817982. Ali Bou Nassif, Ismail Shahin, Imtinan B. Attili, Mohammad Azzeh, and Khaled Shaalan. Speech recognition using deep neural networks: A systematic review. *IEEE Access*, 7:19143–19165, 2019. 
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.

Chuang Niu and Ge Wang. Spice: Semantic pseudo-labeling for image clustering. *ArXiv*, abs/2103.09382, 2021.

Peter Orbanz and Joachim M. Buhmann. Nonparametric Bayesian image segmentation. *International Journal of Computer Vision*, 77:25–45, 2007.

Sungwon Park, Sungwon Han, Sundong Kim, Danu Kim, Sungkyu Park, Seunghoon Hong, and M. Cha. Improving unsupervised image clustering with robust learning. *ArXiv*, abs/2012.11150, 2020.

Tom Ronan, Zhijie Qi, and Kristen M. Naegle. Avoiding common pitfalls when clustering biological data. *Science Signaling*, 9:re6–re6, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. *International Journal of Computer Vision*, 115:211–252, 2015.

Emanuele Sansone, Andrea Passerini, and Francesco Natale. Classtering: Joint classification and clustering with mixture of factor analysers. In *ECAI*, 2016.

Uri Shaham, Kelly Stanton, Haochao Li, Boaz Nadler, Ronen Basri, and Yuval Kluger. Spectralnet: Spectral clustering using deep neural networks. *ArXiv*, abs/1801.01587, 2018.

M. Troester, Xuezheng Sun, Emma H. Allott, J. Geradts, S. Cohen, Chiu-Kit J. Tse, Erin L. Kirk, L. Thorne, M. Mathews, Y. Li, Z. Hu, W. Robinson, K. Hoadley, O. Olopade, K. Reeder-Hayes, H. S. Earp, A. Olshan, L. Carey, and C. Perou. Racial differences in PAM50 subtypes in the Carolina Breast Cancer Study. *JNCI: Journal of the National Cancer Institute*, 110, 2018.

Tsung Wei Tsai, Chongxuan Li, and Jun Zhu. Mice: Mixture of contrastive experts for unsupervised image clustering. *ArXiv*, abs/2105.01899, 2021.

E. Varol, Aristeidis Sotiras, and Christos Davatzikos. Hydra: Revealing heterogeneity of imaging and genetic patterns through a multiple max-margin discriminative analysis framework. *NeuroImage*, 145:346–364, 2017.

Viet-Vu Vu, Quan Do, Vu-Tuan Dang, and Do Toan. An efficient density-based clustering with side information and active learning: A case study for facial expression recognition task. *Intelligent Data Analysis*, 23:227–240, 2019.

Kiri Wagstaff and Claire Cardie. Clustering with instance-level constraints. In *AAAI/IAAI*, 2000.

Xiang Wang and Ian Davidson. Flexible constrained spectral clustering. In *KDD '10*, 2010.

Mohammed Wasid and Rashid Ali. Fuzzy side information clustering-based framework for effective recommendations. *Computing and Informatics*, 38:597–620, 2019.

Juanying Xie, R. Liu, Joseph Luttrell, and Chaoyang Zhang. Deep learning based analysis of histopathological images of breast cancer. *Frontiers in Genetics*, 10, 2019.

Zhongbin Xie and Shuai Ma. Dual-view variational autoencoders for semi-supervised text matching. In *IJCAI*, 2019.

Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart J. Russell. Distance metric learning with application to clustering with side-information. In *NIPS*, 2002.

Linxiao Yang, Ngai-Man Cheung, Jiaying Li, and Jun Fang. Deep clustering by gaussian mixture variational autoencoders with graph embedding. *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 6439–6448, 2019.

Hongjing Zhang, Tianyang Zhan, Sugato Basu, and Ian Davidson. A framework for deep constrained clustering. *Data Mining and Knowledge Discovery*, 35:593–620, 2021.
URL https://api.semanticscholar.org/CorpusID:231418939.

## A Mathematical Details

This section provides detailed derivations for the theoretical claims made in the main manuscript.

Proposition 1. *To explain the fact that choosing q(z|x) to be unimodal will not incur a sizable information loss when the learned q(c|y, z) is discriminative, we dissect* $\mathcal{L}_{ELBO}$ *as follows (Eq. 3 in the main paper)*

$$\mathcal{L}_{ELBO}=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log p(\mathbf{x}|\mathbf{z})-\mathbb{KL}\left(q(c|\mathbf{y},\mathbf{z})||p(c)\right)-\sum_{k}\lambda_{k}\mathbb{KL}\left(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}|c=k)\right)\,.\tag{8}$$

Proof. The derivation can be seen as follows

$$\begin{aligned}\mathcal{L}_{\mathrm{ELBO}}&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\\&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x}|\mathbf{z})p(\mathbf{z}|c)p(c)}{q(\mathbf{z}|\mathbf{x})q(c|\mathbf{z},\mathbf{y})}\\&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log p(\mathbf{x}|\mathbf{z})-\mathbb{KL}\left(q(c|\mathbf{y},\mathbf{z})||p(c)\right)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{z}|c)}{q(\mathbf{z}|\mathbf{x})}\end{aligned}\tag{9}$$

where

$$\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{z}|c)}{q(\mathbf{z}|\mathbf{x})}=\mathbb{E}_{q(c|\mathbf{y},\mathbf{z})}\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log\frac{p(\mathbf{z}|c)}{q(\mathbf{z}|\mathbf{x})}=-\sum_{k}\lambda_{k}\mathbb{KL}\left(q(\mathbf{z}|\mathbf{x})||p(\mathbf{z}|c=k)\right)\tag{10}$$

where $\lambda_k = q(c=k|\mathbf{y},\mathbf{z})$. We note that we omit y in q(z|x, y) (hence q(z|x)) because of the aforementioned reasoning that we want to encourage the encoder to extract information from x alone that is enough to reconstruct x and predict y. We also note that p(c|x, z) = p(c|z) because c is a latent variable that only interacts with the latent representation of the input x, i.e., z. Therefore, conditional on z, c is independent of x.

Proposition 2. *Choosing q(c|z, y) requires us to decompose* $\mathcal{L}_{ELBO}$ *as follows*

$$\mathcal{L}_{ELBO}=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}-\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\mathbb{KL}\left(q(c|\mathbf{z},\mathbf{y})||p(c|\mathbf{z})\right)\,.\tag{11}$$

Proof.

$$\begin{aligned}\mathcal{L}_{\mathrm{ELBO}}&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\\&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(c|\mathbf{x},\mathbf{z})p(\mathbf{x},\mathbf{z})}{q(\mathbf{z}|\mathbf{x})q(c|\mathbf{z},\mathbf{y})}\\&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\mathbb{E}_{q(c|\mathbf{z},\mathbf{y})}\left[\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}-\log\frac{q(c|\mathbf{z},\mathbf{y})}{p(c|\mathbf{z})}\right]\\&=\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)+\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\log\frac{p(\mathbf{x},\mathbf{z})}{q(\mathbf{z}|\mathbf{x})}-\mathbb{E}_{q(\mathbf{z}|\mathbf{x})}\mathbb{KL}\left(q(c|\mathbf{z},\mathbf{y})||p(c|\mathbf{z})\right)\end{aligned}\tag{12}$$

Proposition 3. *The solution to the following convex program*

$$\begin{aligned}\min_{q(c|\mathbf{z},\mathbf{y})}\;&f_{0}(q)=\mathbb{KL}\left(q(c|\mathbf{z},\mathbf{y})||p(c|\mathbf{z})\right)-\mathbb{E}_{q(c|\mathbf{z},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)\,,\\ \text{s.t.}\;&\sum_{k}q(c=k|\mathbf{z},\mathbf{y})=1,\quad q(c=k|\mathbf{z},\mathbf{y})\geq0,\quad\forall k\end{aligned}\tag{13}$$

is

$$q(c=k|\mathbf{z},\mathbf{y})={\frac{p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}{\sum_{k}p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}}\,.$$

Proof. First, we note that the constraint q(c = k|z, y) ≥ 0 for all k is not needed (and is effectively redundant), as the KL term in the objective function is not defined otherwise.
Now consider a convex program that takes the form of

$$\begin{aligned}\min_{\mathbf{t}\in\mathbb{R}_{+}^{K}}\;&f_{0}(\mathbf{t})\\ \text{s.t.}\;&\mathbf{1}^{T}\mathbf{t}=1\,,\end{aligned}\tag{14}$$

where $f_0$ is a convex function and "⪰" denotes "element-wise greater than or equal to." Forming the Lagrangian, we have

$$\mathbf{L}\left(\mathbf{t},\gamma\right)=f_{0}(\mathbf{t})+\gamma\left(\mathbf{1}^{T}\mathbf{t}-1\right)\tag{15}$$

The *Karush–Kuhn–Tucker conditions* state that the optimal primal-dual pair, $(\mathbf{t}^{*},\gamma^{*})$, satisfies the following:

- $-\mathbf{t}^{*}\preceq0$
- $\mathbf{1}^{T}\mathbf{t}^{*}-1=0$
- $\nabla_{\mathbf{t}}\mathbf{L}\left(\mathbf{t}^{*},\gamma^{*}\right)=0$

Since

$$\nabla_{\mathbf{t}}\mathbf{L}\left(\mathbf{t},\gamma\right)=\nabla_{\mathbf{t}}f_{0}(\mathbf{t})+\gamma\cdot\mathbf{1}$$

the third condition implies that

$$\nabla_{\mathbf{t}}\mathbf{L}\left(\mathbf{t}^{*},\gamma^{*}\right)=\nabla_{\mathbf{t}}f_{0}(\mathbf{t}^{*})+\gamma^{*}\cdot\mathbf{1}=0\,.\tag{16}$$

Let $\mathbf{t}=q(c|\mathbf{z},\mathbf{y})$ (i.e., $t_k = q(c=k|\mathbf{y},\mathbf{z})$), and let $f_0(\mathbf{t})$ be as specified in Eq. 13; we have

$$\nabla_{t_{k}}f_{0}(\mathbf{t})=\frac{\partial}{\partial t_{k}}\left[\sum_{k}t_{k}\left(\log\frac{t_{k}}{p(c=k|\mathbf{z})}-\log p(\mathbf{y}|\mathbf{z},c=k)\right)\right]=\log\frac{t_{k}}{p(c=k|\mathbf{z})}+1-\log p(\mathbf{y}|\mathbf{z},c=k)\,.\tag{17}$$

Based on the condition in Eq. 16, we thus have

$$\nabla_{t_{k}}\mathbf{L}\left(\mathbf{t}^{*},\gamma^{*}\right)=\log\frac{t_{k}^{*}}{p(c=k|\mathbf{z})}+1-\log p(\mathbf{y}|\mathbf{z},c=k)+\gamma^{*}=0\tag{18}$$

which leads to

$$t_{k}^{*}=e^{\log p(\mathbf{y}|\mathbf{z},c=k)-1-\gamma^{*}}\cdot p(c=k|\mathbf{z})\,.\tag{19}$$

Since $\gamma^{*}$ is chosen in a way such that $\sum_{k}t_{k}^{*}=1$ (by the second condition), we obtain the solution

$$t_{k}^{*}={\frac{t_{k}^{*}}{\sum_{k}t_{k}^{*}}}={\frac{p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}{\sum_{k}p(\mathbf{y}|\mathbf{z},c=k)\cdot p(c=k|\mathbf{z})}}\,.$$

Theorem 1. *The variational lower bound for DGC is*

$$\log p(\mathbf{x},\mathbf{y})\geq\underbrace{\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)}_{\text{Probabilistic Ensemble}}+\underbrace{\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}}_{\text{ELBO for VAE with GMM prior}}\tag{20}$$

Proof. We derive $\mathcal{L}_{\mathrm{ELBO}}$ as follows

$$\begin{aligned}\log p(\mathbf{x},\mathbf{y})&=\log\int_{\mathbf{z}}\sum_{c}p(\mathbf{x},\mathbf{y},\mathbf{z},c)\,d\mathbf{z}=\log\int_{\mathbf{z}}\sum_{c}\frac{p(\mathbf{x},\mathbf{y},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}q(\mathbf{z},c|\mathbf{x},\mathbf{y})\,d\mathbf{z}\\&\geq\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{y},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\\&=\underbrace{\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log p(\mathbf{y}|\mathbf{z},c)}_{\text{Probabilistic Ensemble}}+\underbrace{\mathbb{E}_{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}\log\frac{p(\mathbf{x},\mathbf{z},c)}{q(\mathbf{z},c|\mathbf{x},\mathbf{y})}}_{\text{ELBO for VAE with GMM prior}}\,.\end{aligned}\tag{21}$$

## B Semi-Supervised Clustering Methods

Semi-supervised clustering methods usually assume the cluster labels of some observations are known, or that certain observations are known to belong to the same cluster (e.g., the must-link and cannot-link constraints). DGC is different from traditional semi-supervised approaches in the following ways:

- DGC does not assume access to true cluster labels for any observations. Specifically, the side-information is information that is *relevant* to the desired clusters (as shown in the experiments) but does not contain cluster labels on its own.
- The relation between the side-information and the desired clusters needs to be jointly *learned* with the clustering process and is not pre-given (compared to constrained clustering approaches where the constraints are assumed to be given).

Moreover, the side-information DGC is able to incorporate, for instance the continuous side-information in Sec.
5.2, usually cannot be easily turned into constraints. Even for the binary side-information we used in Sec. 5.1 and Sec. 5.4, observations that belong to different clusters can still have the same side-information and vice versa. Therefore, without more prior knowledge, it is impossible to turn the binary side-information into constraints manually.

## C Experimental Details

This section provides a detailed description of the experimental setups, such as the train/test splits, the chosen network architectures, and the choices of learning rate and optimizer, for the experiments conducted. We describe the architecture of DGC in terms of its encoder, decoder, and task network. We adopt the following abbreviations for some basic network layers:

- FL($d_i$, $d_o$, f) denotes a fully-connected layer with $d_i$ input units, $d_o$ output units, and activation function f.
- Conv($c_i$, $c_o$, $k_1$, f, BatchNorm2d, O($k_2$, s)) denotes a convolution layer with $c_i$ input channels, $c_o$ output channels, kernel size $k_1$, activation function f, the 2D batch norm operation, and the pooling operation O($k_2$, s) with another kernel size $k_2$ and stride s.

## C.1 A Note On Feature Representations Of Images

Following some previous works on incorporating deep learning into clustering (Jiang et al., 2017; Yang et al., 2019), we note that for images in the CIFAR 100-20 and SVHN datasets, instead of operating directly on the raw pixels, we use a pre-trained network to extract feature representations for the images. It is worth pointing out that Jiang et al. (2017); Yang et al. (2019) used a pre-trained ResNet (He et al., 2016) as the feature extractor. This might raise some concerns regarding the nature of their methods being unsupervised, as the family of pre-trained ResNets is pre-trained on ImageNet (Russakovsky et al., 2015), which does utilize supervised information. To remedy this concern, we use DINO (Caron et al., 2021), a vision transformer trained in a self-supervised manner, as the feature extractor. We then use those feature representations as inputs to our framework.

## C.2 Pre-Trained Networks For Side-Information Y

Recall that since at test time we do not assume access to the side-information y, we pre-train a network using the input x to predict y so that we can use the predictions ŷ at test time.
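For concreteness, the FL($d_i$, $d_o$, f) abbreviation used in the architecture tables below could be instantiated in PyTorch as in the following sketch; the helper and the example network are illustrative, not the paper's released code.

```python
import torch.nn as nn

def FL(d_in, d_out, act):
    # One reading of the FL(d_i, d_o, f) abbreviation: a linear layer
    # followed by the named activation ("-" means no activation).
    layers = [nn.Linear(d_in, d_out)]
    if act == "ReLU":
        layers.append(nn.ReLU())
    elif act == "Sigmoid":
        layers.append(nn.Sigmoid())
    elif act == "SoftMax":
        layers.append(nn.Softmax(dim=-1))
    return nn.Sequential(*layers)

# Example: the pre-trained CIFAR 100-20 side-information network listed
# below, FL(2048, 512, ReLU) followed by FL(512, 100, SoftMax).
cifar_side_net = nn.Sequential(FL(2048, 512, "ReLU"), FL(512, 100, "SoftMax"))
print(cifar_side_net)
```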
In this section we detail the architectures and performances of the pre-trained models for each experiment:

- The pre-trained network achieves a classification accuracy of 98.7% for Noisy MNIST with the following architecture:

| Pre-trained Net for Noisy MNIST |
|---|
| FL(512, 1, Sigmoid) |

- The pre-trained network achieves an MSE of 0.0048 for Pacman with the following architecture:

| Pre-trained Net for Pacman |
|---|
| FL(2, 20, ReLU) |
| FL(20, 1, —) |

- The pre-trained network achieves a classification accuracy of 78.3% for CIFAR 100-20 with the following architecture:

| Pre-trained Net for CIFAR 100-20 |
|---|
| FL(2048, 512, ReLU) |
| FL(512, 100, SoftMax) |

- The pre-trained network achieves a classification accuracy of 73.8% for CBCS with the following architecture:

| Pre-trained Net for CBCS |
|---|
| FL(512, 1, Sigmoid) |

- The pre-trained network achieves a classification accuracy of 82.3% for SVHN with the following architecture:

| Pre-trained Net for SVHN |
|---|
| FL(2048, 512, ReLU) |
| FL(512, 10, SoftMax) |

![19_image_0.png](19_image_0.png)

![19_image_1.png](19_image_1.png)

![19_image_2.png](19_image_2.png)

Figure 5: Ground truth (a) and generated (b) noisy MNIST images.

## C.3 Noisy MNIST

We extract images that correspond to the digits 2 and 7 from MNIST. The MNIST dataset is pre-divided into training/testing sets, so we naturally use the images that correspond to the digits 2 and 7 from the training set as our training data (12,223 images), and those from the testing set as our testing data (2,060 images). For each digit, we randomly select half of the images for that digit and superpose noisy backgrounds onto those images, where the backgrounds are cropped from randomly selected CIFAR-10 images (more specifically, we first randomly select a class, and then randomly select a CIFAR image that corresponds to that class). See Fig. 5 for the ground truth and generated noisy MNIST samples. We use the Adam optimizer for optimization. We train with a batch size of 128 images, an initial learning rate of 0.002, and a learning rate decay of 10% after every 10 epochs, for 100 epochs. We use the following network architecture:

| Encoder | Decoder | Task Network |
|---|---|---|
| FL(784, 500, ReLU) | FL(10, 2000, ReLU) | FL(10, 2, Sigmoid) |
| FL(500, 500, ReLU) | FL(2000, 500, ReLU) | |
| FL(500, 2000, ReLU) | FL(500, 500, ReLU) | |
| FL(2000, 10, —) | FL(500, 784, ReLU) | |

## C.4 Pacman

This section provides more details for our Pacman experiments.

## C.4.1 Experimental Setup

We create 20,000 points, with 10,000 for the outer annulus and 10,000 for the inner annulus. Both annuli are centered at the origin, with the outer annulus having a radius of 1 and the inner annulus having a radius of 0.8. We create the training set by sampling 7,500 points from each annulus, and leave the rest of the data for testing.

![20_image_0.png](20_image_0.png)

![20_image_1.png](20_image_1.png)

![20_image_2.png](20_image_2.png)

Figure 6: The first row shows the ground truth 2D Pacman, the responses y alone, and the combined 3D Pacman. The second row depicts the corresponding generated counterparts from DGC.
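A minimal numpy sketch of this data-generation process, together with the response construction described next, is given below; the orientation and the exact normalization of the exponential rate are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Sorted angles so that responses can be assigned monotonically along
# each annulus; a true Pacman would restrict the angular range (the
# "mouth"), which we omit here.
t = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=n))
inner = 0.8 * np.stack([np.cos(t), np.sin(t)], axis=1)   # radius 0.8
outer = 1.0 * np.stack([np.cos(t), np.sin(t)], axis=1)   # radius 1.0

# Split points of [0, 1]; linear responses (1 -> 0) for the inner annulus
# and exponential responses (0 -> 1) for the outer one.
s = np.linspace(0.0, 1.0, n)
y_inner = 1.0 - s
y_outer = (np.exp(s) - 1.0) / (np.e - 1.0)

# 7,500 training points per annulus, the remainder held out for testing.
idx = rng.permutation(n)
train, test = idx[:7_500], idx[7_500:]
X_train = np.concatenate([inner[train], outer[train]])
y_train = np.concatenate([y_inner[train], y_outer[train]])
```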
We create the linear responses by dividing the [0,1] range into 10,000 sub-intervals and assign the split points to the points in the inner annulus in a way that is decreasing (from 1 to 0) in the clockwise direction. We create the exponential responses by evaluating the exponential function at the aforementioned split points (generated for the linear responses), and then assign them to the points on the outer annulus in a way that is increasing (from 0 to 1) in the clockwise direction. Fig. 6 shows the ground truth (first row) and the generated (second row) 2D Pacman annuli, the responses, and the 3D view of the entire dataset. We use the Adam optimizer for optimization. We train with a batch size of 1,000 points, an initial learning rate of 0.001, and a learning rate decay of 10% after every 10 epochs, for 80 epochs. We use the following network architecture:

| Encoder | Decoder | Task Network |
|---------|---------|--------------|
| FL(2, 64, Sigmoid) FL(64, 128, Sigmoid) FL(128, 256, Sigmoid) FL(256, 60, —) | FL(60, 256, Sigmoid) FL(256, 128, Sigmoid) FL(128, 64, Sigmoid) FL(64, 2, Sigmoid) | FL(64, 128, Sigmoid) FL(128, 4, Sigmoid) |

## C.5 Cifar 100-20

We apply DGC to the CIFAR 100-20 dataset. The training/test split of the dataset is given, with 50,000 images in the training set and 10,000 in the test set. As mentioned earlier, we apply DGC to the feature representations of the images rather than the images themselves. We use the Adam optimizer for optimization. We train with a batch size of 256, an initial learning rate of 0.002, and a learning rate decay of 10% after every 10 epochs. We use the following network architecture:

| Encoder | Decoder | Task Network |
|---------|---------|--------------|
| FL(2048, 500, ReLU) FL(500, 500, ReLU) FL(500, 2000, ReLU) FL(2000, 10, —) | FL(10, 2000, ReLU) FL(2000, 500, ReLU) FL(500, 500, ReLU) FL(500, 2048, ReLU) | FL(10, 2000, SoftMax) |

## C.6 Carolina Breast Cancer Study (Cbcs)

## C.6.1 Data Processing

Because the histopathological images collected in CBCS are large (of size 3 × 3000 × 3000), we use a pretrained VGG16 network to extract feature representations for each image, and use the extracted, fixed features as the input to DGC. The features are of dimension 512, and are the output of the 8th layer of the pretrained VGG16 network. As mentioned in the main manuscript, each patient has 2-4 associated histopathological images. Due to the scarce nature of medical data, we treat each image as an individual "patient" during training. At test time, we obtain patient-level predictions by aggregating image-level predictions (i.e., taking a majority vote), and disregard patients with ambiguous patient-level predictions (e.g., a patient has 4 associated images, 2 of which are predicted to be in cluster 0 and the other 2 in cluster 1). The number of disregarded patients accounts for 3.4% of the entire population. Finally, again due to the scarce nature of this dataset, we use 10-fold cross-validation to obtain predictions on the entire dataset. More specifically, we split the dataset into 10 subsets, train on 9 of those subsets, and predict on the remaining subset. We then repeat this process 10 times to obtain the final predictions on the entire dataset.
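The patient-level aggregation described above can be sketched as follows; `aggregate_patient` is an illustrative helper, not part of the original code.

```python
from collections import Counter

def aggregate_patient(image_preds):
    # Majority vote over a patient's image-level cluster predictions;
    # a tied vote marks the patient as ambiguous, to be disregarded.
    counts = Counter(image_preds).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # ambiguous patient
    return counts[0][0]

assert aggregate_patient([0, 0, 1]) == 0        # clear majority
assert aggregate_patient([0, 0, 1, 1]) is None  # 2-vs-2 tie, discarded
```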
## C.6.2 Experimental Setup

We use the Adam optimizer for optimization. We train with a batch size of 32, an initial learning rate of 0.001, and a decay rate of 0.9 (applied every 10 epochs), for 150 epochs. The network architecture used is as follows:

| Encoder | Decoder | Task Network |
|---------|---------|--------------|
| FL(512, 1024, ReLU) FL(1024, 2048, ReLU) FL(2048, 5, —) | FL(5, 2048, ReLU) FL(2048, 1024, ReLU) FL(1024, 512, ReLU) | FL(5, 3, Sigmoid) |
Review 1: Summary: The authors propose a latent variable model for data that exploits supervision of side information for unsupervised learning. Specifically, they propose to incorporate what they term *indirect* side-information as a proxy for sharpening the contours of a latent variable density model in the form of a VAE, which they name Deep Goal-Oriented Clustering. The formulation of indirect side-information differs from traditional side-information used in semi-supervised learning (typically consisting of cluster inclusion or exclusion data point constraints), comprising either discrete or continuous observables associated with the data $x$. The authors derive a VAE inspired by prior work and propose to use a Gaussian mixture prior as well as a regularized ELBO in their framework. They evaluate their model on different experiments, each designed to probe (or validate) a specific design choice. Strengths and Weaknesses: ## Strengths This paper takes a somewhat unusual perspective, asking what might be possible if we ignore the barrier between supervised and unsupervised learning. They try to define their own niche apart from semi-supervised learning by considering side-information as either categorical labels or continuous descriptors. - The writing is quite clear; the model is clearly described in section 4, with the extensions from classic VAEs made clear. ## Weaknesses The principal weakness is a lack of precision when describing what information is being provided to each model in each experiment, as well as what the precise metric is, and how model predictions are produced. For example in section 5.3, the authors state the goal is to cluster the images into the 20 super classes. But given the notion of 20 super classes, this is a classification problem. A clustering problem does not usually admit the number of clusters a priori. It is also unusual to see accuracy used as a metric for clustering, given that clustering methods do not have any notion of class label. I think to help the reader, the authors should outline precisely how, given a clustering solution from each method, they calculate the accuracy reported. Also in section 5.1, what is the measure ‘clustering accuracy’? Is it homogeneity within clusters? This is unusual usage of the term, which is typically used for supervised learning. Requested Changes: ### Introduction > However, with the explosion of the size of modern datasets, it becomes increasingly unrealistic to manually annotate all available data for training. Therefore, understanding inherent data structure through unsupervised clustering is of increasing importance. - Maybe a citation to LeCun here? Or cost of annotation lit? ### Related work > If there exists a set of semantic labels associated with the data (e.g. the digit information for MNIST images), the cluster assumption states that there exists a direct correspondence between the labels and the clusters - I’m not sure that Chapelle et al 2006’s statement of the cluster assumption agrees exactly with this; It is stated rather that there is a relationship between optimal decision boundaries and the density of data points: “The cluster assumption states that the decision boundary should not cross high density regions, but instead lie in low density regions.” The statement here is more rigid and direct; I would suggest reusing Chapelle et al’s phrasing, as it’s more general. 
- The beginning of the subsection on joint modeling is awkward; perhaps the first sentence could define the term (I presume it refers to modeling joint probability distributions, sometimes incorporating occasional observations of otherwise latent variables?)

### Deep Goal-Oriented Clustering

- In Figure 1, the arrows (especially dashed) should be explained in the caption to the PGM graph.
- A nitpick in section 4.2, the factorization of the variational posterior q of $q(Z,c | X,Y)$ would be better off as $q(Z|X,Y) * q(c|Z,X,Y)$, so the variables agree in order of presentation in the joint distribution. Yes, they are equivalent. But this makes it easier to read, and shows that the joint is factorized as a product of conditionals.
- Later on in 4.2, the authors write

> The probabilistic ensemble allows the model to maintain necessary uncertainty until an unambiguous clustering structure is captured.

This is a bit unclear; do the authors imply that, as the decoder converges on a more optimal solution for generating observations, the responsibilities term $\lambda_{c’}$ will sharpen and the weights will become sparse? This seems like it might be possible on sanitized datasets like CIFAR or MNIST, but not in real data.

- Further on in 4.3, when discussing the $\lambda_{c}$ term:

> Analyzing the last term in Eq. 3, we notice that if the learned variational posterior q(c|x, z, y) is very discriminative and puts most of its weight on one specific index c, all but one KL terms in the weighted sum will be close to zero

Belabouring my previous point, it seems that the suitability of the choice to parameterize the encoder as a unimodal distribution is dependent on the assumption that the model can learn to separate the data by class, and that the $\lambda_{c’}$ will become sparse. It's far from clear that this will become so in cases where (e.g.) the dataset has high aleatoric uncertainty.

- For cases where $y$ is continuous, how well does the predictor described at the end of section 4.3 work? Would it be better to use a simpler model rather than an MLP?
- In section 4.4, the terminology used to describe entropy is a bit confusing in a few different ways. I've never seen entropy referred to as an operator before. Maybe use 'function' instead? Also, the authors do not actually regularize the entropy, they use the entropy as a regularization term for the ELBO.
- Does the formulation of the regularizer with a softmax not imply that the side-information $y$ is assumed to be discrete? I am a bit confused here about how this would work if $y$ is in fact continuous? Would you attempt to quantize $y$?

### Experiments

- In section 5.1:

> Instead, DGC is able to work with side-information that is only partially indicative of what the final clustering strategy should be, making DGC more applicable to general settings.

I don't see how the authors can arrive at this conclusion looking only at the results in Figure 2. Yes, the class label side information resulted in nearly eliminating all 2B,7B errors. But I expected this given the design of the dataset; the class label information is independent of the background information. I would be more convinced if there were some dependence of background inclusion based on the class label. It's also curious that, looking solely at the 2 vs 2B, VaDE makes no mistakes while DGC introduces two in 2 vs 2B. Was this an unlucky artifact, or did the side information result in a misclassification?
- In section 5.2, the pacman problem seems more of a manifold learning problem; would it not be better to include LLE, MDS, or Laplacian Eigenmaps as comparators?
- For Table 1, I do not think it is really appropriate to call the class label $y$ side-information in the pacman task; it is exactly what the model is being evaluated upon to predict. This is why VaDE achieves essentially random performance. Unless there is something else meant by 'separate the two annuli' beyond 'classify the points along each annulus', I am not sure what conclusions to draw here.
- In section 5.3, specifically the paragraph at the end about **Jointly train the prediction network for the side information**; this is an interesting experiment, but only if the accuracy of the prediction network is shown; the results suggest that the prediction network is extremely accurate (meaning that mutual information between x,y is very high), but what about in cases where this isn't the case?

Broader Impact Concerns: None

==================================================

Review 2:

Summary: I have already reviewed this paper 8 months ago and the authors did not incorporate the majority of my previous comments and suggestions. I therefore re-state them below:

***Summary:*** This work explores the integration of side information into the VaDE clustering algorithm to boost the clustering performance. It can be seen as an extension of the semi-supervised VAE by Kingma et al. to allow clustered latent embeddings. The authors showed the effect of adding side information on different applications.

Strengths and Weaknesses:

***Strengths:*** The proposed approach provides a general framework to use both categorical and real-valued side information to drive the clustering algorithm toward a preferred configuration. I think the topic is of great interest to the community, however, the novelty and the performed experiments are still limited, and several previous works are ignored by the authors.

***Weaknesses:*** Limited novelty: The proposed generative approach is exactly the same as the one proposed in [1], resulting in the same ELBO. The only difference lies in p(y|z,c); while in [1] the latter is modeled as survival probability, the proposed approach uses standard regression/classification. I would strongly advise the authors to cite [1] and provide a thorough comparison in the paper.

Missing reference: The authors cite only a few works in the context of clustering with side information and exclude all works from 2019 on; however, many more recent approaches use pairwise constraints in deep clustering, such as DC-GMM [5] and C-IDEC [3-4]. The authors cited Chapfuwa et al. and claimed that 'the aforementioned approaches cannot be easily scaled to large, high-dimensional datasets'. First, this is a wrong statement, as Chapfuwa's paper can deal with high-dimensional datasets; secondly, there have been many more approaches that overcame this limitation (e.g. deep survival machines [2]).

Imprecise claims: I believe there are several statements that are not fully supported throughout the paper. As a concrete example, the authors state that (a) the side-information considered is 'much more general' than pairwise constraints, and (b) they 'let the model learn what it deems useful from the side-information'. I would argue that pairwise information can be seen as weaker information than side-information as it does not provide any exact label, which the model needs to predict, but only a looser concept of similarity of data points.
Additionally, pairwise constraints are usually collected for a very small proportion of samples, while the side information used by the authors does not deal with missing labels.

Experiments: while the experiments tackle several problems, I think they do not fully support the authors' claim. In particular, in all sections the used side-information is indeed highly correlated with the clustering assignments; it would be interesting to see how the model performs if the side-information is loosely connected (but not uninformative) with the desired clustering. Additionally, in section 5.3, the setting of having fine-grained information is quite unrealistic; usually, one has the coarse-grained information and wishes to retrieve more fine-grained clusters [6].

Reproducibility: I could not find any available code, hence I am not able to check whether the proposed approach is reproducible.

[1] Manduchi et al. “A Deep Variational Approach to Clustering Survival Data.” ICLR (2021).

[2] Nagpal et al. Deep survival machines: Fully parametric survival regression and representation learning for censored data with competing risks. IEEE Journal of Biomedical and Health Informatics (2021)

[3] Zhang et al. A framework for deep constrained clustering algorithms and advances. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2019.

[4] Zhang et al. A framework for deep constrained clustering. Data Mining and Knowledge Discovery, 35(2):593–620, 2021.

[5] Manduchi et al. “Deep Conditional Gaussian Mixture Model for Constrained Clustering.” Neural Information Processing Systems (2021).

[6] Ni, Jingchao, et al. “Superclass-Conditional Gaussian Mixture Model For Learning Fine-Grained Embeddings.” ICLR (2022).

***Improvements over the last submission:*** In my previous review, I argued that the model cannot choose to discard the side information if it is not informative for the cluster assignments, thus reducing the clustering performance. However, the authors performed a small ablation experiment where they chose uninformative side information, and the performance of their method is comparable to VaDE. I believe this to be an interesting experiment; however, I would still argue that it very much depends on the dimensionality of the latent space, as some dimensions must be allocated to predict the uninformative y. I would suggest digging deeper and measuring the different losses in this setting (is the prediction loss decreasing or increasing?).

Requested Changes: 1) I believe the main weakness is the lack of references and comparisons. I would strongly encourage the authors to add a thorough related work discussion. 2) Fix the imprecise claims. 3) Add the experiments described in my previous comment and add the code for reproducibility. For additional details see above.

Broader Impact Concerns: I believe there are no significant concerns about the ethical implications of the proposed approach.

==================================================

Review 3:

Summary: The authors focus on the problem of clustering with side information. They structure their model as a variational autoencoder with a Gaussian mixture model prior, thus learning a clustered representation for data in a manner that captures its density (unsupervised) while also being informative of the side-data (supervised).

Strengths and Weaknesses: The methodology proposed by the authors, though in principle reasonable and addressing an important problem, has significant weaknesses.
First, the formulation is overly complicated and confusing, given that it seems to be presented as a formal variational formulation, yet some of the derivations lack details, are not well justified, and in places seem inconsistent with other parts of the formulation. Second, the experiments are for the most part underwhelming or not convincing because of the lack of comparisons with alternatives in the literature, artificial scenarios not motivated by real use cases, and comparisons that are either unfair (comparing methods that do not have access to the same data) or qualitative (see below about Section 5.4). Additional details of the three points above are described below.

Given the coverage of the related work section, which is very superficial, the authors do not seem aware of the substantial amount of existing work on supervised clustering (with autoencoder architectures or factor models) and autoencoders with mixture priors, including clustering with survival outcomes (relevant to the only real-world scenario considered in Section 5.4). It is also worth noting that the related work does not reference any publication after 2020 and that the baselines used in the experiments section are not discussed.

It is not clear why the authors write the variational posterior as a function of y to then say that they will ignore y as an input to the encoder. Further, the equality in (2) is not justified, and z^(l) and M are not defined. The authors claim that the standard VAE is a special case of DGC; however, it seems a somewhat moot statement provided that i) Kingma et al. (2014) does not consider labels (but there are quite a few models that do but are not mentioned), and more importantly, ii) DGC is not optimizing (1).

The motivation for making c a function of z (so z is a sufficient statistic for x) and y, to then replace y with a pseudo-value predicted from a model, is not clear because it is not consistent with the generative process, but also because, if in principle z is a sufficient statistic for x that can both recover x and predict y according to the generative model, why is there a need to make the variational posterior an explicit function of y to then replace it by an approximation.

The regularizer in (7) needs a better motivation and justification, especially when the side information is multivariate and/or continuous like in Section 5.2.

Section 5.1: The experiment is problematic for a few reasons: i) it is not fair to compare DGC with VaDE considering that the latter has access to information that is not available to the former, ii) no performance comparisons with other supervised clustering approaches are provided, iii) the experiment is not really about clustering, but classification, in which case it is not clear why classification baselines were not considered, iv) it is not clear why the model without the decoder will perform no better than random just by looking at the objective that is being optimized, so a proper justification is needed.

Section 5.2: The authors do not compare with other clustering, supervised clustering, or semi-supervised approaches from the literature other than VaDE. Further, the generated data seems too contrived for a practical use case.

Section 5.3: The results in Table 2 are underwhelming because the baselines (without concat) do not seem to have access to the labels, thus comparisons are not fair, and no other method which can use labels (supervised or semi-supervised) is considered.
It is not clear in which scenario one will have access to fine-grained labels but be interested in the (known) set of super-classes. Further, having access to the side information (i.e., the labels), why would one use them as covariates as in VaDE-Concat or SPICE-Concat as opposed to building a classifier or, if labels are limited, building a semi-supervised model.

Section 5.4: The results are not convincing because: i) some details about Figure 4 and Table 3 are missing, ii) Tables 4 and 5 lack quantitative association comparisons, iii) there is no reasoning for the 3 cluster choice, iv) the proposed model is only compared against VaDE despite there being a number of models proposed for clustering with survival outcomes, one of which is referenced by the authors.

Requested Changes: In general, the paper needs to be better organized, with a clearer formulation and motivation for the assumptions made by the model and the relevance of the experiments in the context of real-world scenarios. The related work section needs to be rewritten to have a better coverage of the existing literature, especially in relation to works that use autoencoders with supervision and autoencoders with structured priors.

The last line of (9) needs to be better justified as q(z|x) is used twice without explanation. Similarly, the p(c|x,z)=p(c|z) needs to be justified. The authors should justify the practicality of a scenario like the one presented with artificial data in Section 5.2. How is the cluster accuracy computed when the number of clusters is mis-specified? What hypothesis test was used for the p-values in Section 5.4? How is RRD in Table 3 defined? What are the implications of 95% CIs crossing 0? Table 3 needs to be discussed in the text. Further, the quantities in tables 4 and 5 need a quantitative measurement of the association between the clusters and covariates (ER status, Grade and tumor subtype).

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Reject

Comment: As discussed above, the reviewers are critical of how this work is situated within the related work, the motivation of the design choices, and the empirical results. Therefore, they currently do not feel comfortable accepting this paper and would suggest a major revision and resubmission.

==================================================
# Threshold Moving For Online Class Imbalance Learning With Dynamic Evolutionary Cost Vector

Anonymous authors

Paper under double-blind review

## Abstract

Existing online class imbalance learning methods fail to achieve optimal performance because their assumptions about enhancing minority classes are hardcoded in model parameters. To learn the model for the performance measure directly instead of using heuristics, we introduce a novel framework based on a dynamic evolutionary algorithm called Online Evolutionary Cost Vector (OECV). By bringing the threshold moving method from the cost-sensitive learning paradigm and viewing the cost vector as a hyperparameter, our method transforms the online class imbalance issue into a bi-level optimization problem. The first layer utilizes a base online classifier for rough prediction, and the second layer refines the prediction using a threshold moving cost vector learned via a dynamic evolutionary algorithm (EA). OECV benefits from both the efficiency of online learning methods and the high performance of EA, as demonstrated in empirical studies against four state-of-the-art methods on 30 datasets. Additionally, we show the effectiveness of the EA component in the ablation study by comparing OECV to its two variants, OECV-n and OECV-ea. This work reveals the superiority of incorporating EA into online imbalance classification tasks, while its potential extends beyond the scope of the class imbalance setting and warrants future research attention. We release our code1 for future research.

## 1 Introduction

Online learning from streaming data is common in real-world applications, facing more challenges than offline learning due to limited time and memory resources. Online class imbalance learning involves scenarios where minority classes have notably fewer samples than the majority classes, which can detrimentally affect predictive performance, particularly for minority classes. Current efforts fall into three categories: data-level, algorithm-level, and ensemble approaches. Data-level methods use oversampling and undersampling to rebalance the datasets. Ensemble methods commonly work together with data-level algorithms by randomly resampling incoming data points for each base learner. Algorithm-level approaches react differently to samples from different classes, addressing the tendency to neglect the minority classes.

While designed differently, the three types of methodology all focus on how to efficiently utilize class imbalance information (e.g., imbalance ratio and data distribution) to handle the imbalance issue. However, to our knowledge, they all rely on assumptions about the expected enhancement level for minority classes, which are ad hoc and hardcoded in model parameters. For instance, cost-sensitive algorithms, one kind of algorithm-level approach, assign different costs for misclassifying classes based on the class size or performance. But determining optimal costs remains challenging (Liu & Zhou, 2010). In this article, we aim to explore how to achieve optimality concerning any given online performance metric directly without making *ad hoc* assumptions. Our approach can extend beyond the scope of class imbalance, but we only focus on the online class imbalance setting in this work for simplicity. Gradient-based optimization methods become impractical if we set the non-differentiable evaluation metric as the objective.
In fact, a wide range of metrics are non-differentiable since they require a form of step loss function (e.g., counting the number of true positives), which is intractable. This includes accuracy, precision, recall, F1 score, and other comprehensive metrics such as G-mean and balanced accuracy. The metrics can be adapted to our online setting through prequential evaluation as shown in Gama et al. (2014), while the non-differentiability remains. To this end, we focus on gradient-free optimization methods in this work, particularly the family of evolutionary algorithms (EAs). EAs have been widely studied for classification tasks such as genetic programming (Espejo et al., 2009), learning classifier systems (Sigaud & Wilson, 2007), and evolution of neural networks (Rocha et al., 2007). Besides, recent studies have attempted to leverage EA to assist conventional algorithms in offline class imbalance learning problems (Pei et al., 2023). However, applying EAs to online class imbalance learning remains unexplored and challenging due to time and space constraints in streaming learning. More specifically, evolving classifiers on a large scale and accessing the entire dataset are impossible. Besides, unlike offline learning, the dynamic environment of concept drift may exist. To this end, we have to examine a fundamental question: How can we create an online learner that combines two essential traits? That is, it should update quickly under a dynamic environment like existing online models while also learning efficiently with non-differentiable objectives, similar to EAs.

We propose a novel framework named Online Evolutionary Cost Vector (OECV) to answer this question. OECV is conceptualized as a bi-level optimization problem, with a probabilistic online classifier in the lower layer and a lightweight cost vector in the upper layer. The classifier extracts useful information from data to provide a rough prediction while the cost vector refines the decision boundary. In the case of concept drift, especially the prior drift where class size changes, a dynamic evolutionary algorithm is applied to track optimal cost vectors using recent samples contained in a fixed-size buffer. The most crucial difference between our dynamic EA and traditional EA is that it can track the optimal cost vector in a non-stationary environment by maintaining population diversity instead of converging. Our approach can learn with non-differentiable objectives under dynamic environments using a dynamic EA-based cost vector decision head and updates with little computational effort since the cost vector is lightweight.

The motivation for formulating OECV as a bi-level architecture is highly inspired by the threshold moving method (Kukar et al., 1998; Zhou & Liu, 2005; Sheng & Ling, 2006; Voigt et al., 2014; Hancock et al., 2022) from the paradigm of cost-sensitive learning. The gist of threshold moving is to weight the probabilistic prediction by the cost vector, which contains the relative cost of misclassifying each class. The cost vector used in our method is essentially the same as that in the threshold moving method; however, in the context of cost-sensitive learning, the cost vector is usually predefined. The key point in understanding our motivation is viewing the cost vector as a set of hyperparameters. This would interpret OECV as an online hyperparameter optimization (HPO) method built upon the threshold moving method.

1https://anonymous.4open.science/r/OECV-1088/
The simplest way of setting the hyperparameter in class imbalance learning is to set it inversely proportional to the class size, but it is not guaranteed to be an optimal solution. OECV, on the other hand, tries to optimize the "hyperparameter" using dynamic evolutionary algorithms on the fly. In viewing OECV as a kind of HPO, its two levels correspond to searching parameters and hyperparameters separately, where the parameters (base classifier) give a rough prediction and the hyperparameters (cost vector) refine the prediction. This effectively unifies EAs and online class imbalance learning within a cohesive framework. The main contributions of this paper are listed as follows:

1. This study is the first to explore the problem of online class imbalance learning using an EA approach. The novel approach OECV unifies EA and online class imbalance learning within a bi-level optimization framework by applying a cost vector, effectively addressing the performance-resource trade-off.
2. We present a novel dynamic evolutionary algorithm to learn the cost vector adaptively under potential concept drift and simultaneously incorporate specific prior knowledge about class imbalance to guide the evolutionary learning.
3. We study the superiority and efficiency of OECV across 30 real-world datasets. Empirical results show its ability to significantly outperform state-of-the-art (SOTA) methods and confirm the effectiveness of the EA component.

The remainder of this article is organized as follows. Section 2 presents related work. Section 3 details our proposed method. Experimental setup and results are discussed in Section 4. The paper is concluded in Section 5.

## 2 Related Work

Our article is related to online class imbalance learning, threshold moving methods, and evolutionary algorithm approaches for addressing class imbalance.

## 2.1 Online Class Imbalance Learning

Approaches to address online class imbalance problems can typically be classified into three categories, as mentioned in the introduction: data-level, algorithm-level, and ensemble-based methods.

## 2.1.1 Data-Level Methods

Sampling methods work by oversampling and/or undersampling to rebalance data. SMOTE (Chawla et al., 2002) is a synthetic minority over-sampling technique used to balance the class distribution by generating new instances of the minority class. In online learning, it has been adopted in Online SMOTE (Wang & Pineau, 2016), which oversamples using training samples within a sliding window. C-SMOTE (Bernardo et al., 2020) addresses binary class imbalance by actively detecting concept drift via ADWIN (Bifet & Gavalda, 2007), a change detector with a sliding detection window, and applying SMOTE to the minority class in the sliding window. Ignoring class distribution information results in their sub-optimal performance. OS-CCD (Jiang et al., 2021) generates synthetic samples via a classification contribution degree. SRE (Ren et al., 2019) introduces a selection-based resampling mechanism to handle complex data distributions by considering recent sample properties. However, the resampling procedures of OS-CCD and SRE are both based on clustering, making them sensitive to hyperparameters. While showing promising performance, these methods mostly targeted the binary class imbalance problem and needed to maintain a sliding window to retain relevant training samples, increasing the memory burden.

## 2.1.2 Algorithm-Level Methods

Qin et al.
(2021) employs active learning to select the most important samples to train the classifier. Online one-class Support Vector Machines (Klikowski & Woźniak, 2020) is a kind of one-class classifier that creates a model for each class and achieves a one-class decomposition of multi-class problems. Algorithm-level approaches work by modifying the training process. Cost-sensitive learning methods are a popular type of algorithm in this approach. They assign varying costs for misclassifying samples belonging to different classes to reduce the dominating influence of majority classes, and it is commonly assumed that minority classes incur higher costs. Our method belongs to this category. Ksieniewicz (2021) introduces Prior Imbalance Compensation (PIC) for batch learning of imbalanced data streams, which adjusts the decision made by the classifier using the class prior probability to compensate for the minority classes. Yan et al. (2017) trains multiple classifiers with various cost matrices and makes predictions by adaptive ensembling. However, it is confined to binary class cases and challenging to extend to multi-class scenarios due to the exponential growth in the number of candidate cost matrices. Other related works (Wang et al., 2021; Ding et al., 2018; Qin et al., 2021) in cost-sensitive methods are based on the weighted extreme learning machine (WELM) (Zong et al., 2013), which is a highly efficient single-hidden-layer neural network with a weighting strategy for class imbalance. WOS-ELM (Wang et al., 2021) integrates a weighting strategy akin to WELM with an online sequential extreme learning machine (Huang et al., 2005) (OSELM). WOS-ELMK (Ding et al., 2018) incorporates kernel mapping, addressing the non-optimal hidden node issue present in WOS-ELM. AI-WSELM (Qin et al., 2021) integrates active learning to significantly reduce labeling costs, demonstrating satisfactory performance compared to existing WELM variants. Despite their promising performance, the weighting strategies within this family are explicitly tailored for ELM, limiting their generalizability to other online learning models. We notice that class sizes are frequently utilized to determine the weighting strategy in the literature. However, this approach does not ensure an optimal weighting schedule.

## 2.1.3 Ensemble Methods

Ensemble methods, such as MOOB, MUOB (Wang et al., 2016), KUE (Cano & Krawczyk, 2020), ROSE (Cano & Krawczyk, 2022), and BEDCOE (LI et al., 2023), effectively tackle the problem by combining resampling techniques. MOOB and MUOB leverage time-decay class size to determine training times. Specifically, the training time for each base classifier is determined by sampling from a Poisson distribution, whose parameter is set according to the class size. The diversity is maintained by random training times on a sample for each base classifier. Kappa Updated Ensemble (KUE) combines online and block-based ensemble approaches and uses Kappa statistics to determine dynamic weighting and select base classifiers. After that, Cano & Krawczyk (2022) proposes an advanced method called ROSE to improve the robustness of KUE by employing adaptive self-tuning, adjusting its parameters, and dynamically adjusting the ensemble line-up. To directly deal with class imbalance, ROSE computes the imbalance ratio of each class based on recent samples to derive the training times of each sample.
BEDCOE considers potentially complex data distributions compared to other works and introduces a borderline enhanced strategy and a disjunct cluster-based oversampling for synthetic sample generation. Despite the improved performance achieved by using multiple base classifiers, the ensemble methods entail a trade-off between the diversity of the ensemble and training time.

## 2.1.4 Common Issues Of Existing Methods

We note that heuristic designs exist in all three categories discussed above. Therefore, we give several examples of assumptions made in existing works in this section. First, some methods assume the imbalance status is solely determined by the imbalance ratio and do resampling (Wang & Pineau, 2016; Wang et al., 2016; Bernardo et al., 2020) or design cost schemes (Zong et al., 2013; Wang et al., 2021) based on the estimated online imbalance ratio. However, this is not the only indicator of class imbalance. Other information, such as data distribution, is also helpful. Another common assumption is that generating synthetic samples around minority instances helps with learning, including Ren et al. (2019); Jiang et al. (2021); LI et al. (2023). This assumption only holds when the minority data is well-clustered and sufficiently discriminative. If the training data is extremely imbalanced or contains many corrupted labels, the minority class would be poorly represented and lack a clear structure. In this case, working under this assumption severely jeopardizes the performance. Additionally, to use the estimated imbalance status, such as the imbalance ratio or the data distribution from clustering, existing works all assume a certain functional form of the relation between imbalance status and training scheme. For instance, WELM (Zong et al., 2013) assumes the cost of misclassifying a class is inversely proportional to its class size. Similarly, MOOB and MUOB (Wang et al., 2016) assume the training time of one class should be sampled from a Poisson distribution with the imbalance ratio as a parameter. However, the possible functional forms for using the imbalance status cannot be exhaustively enumerated. Besides, none of them can theoretically justify that the proposed heuristic functional form could achieve optimality with respect to a certain performance metric, especially considering that the data distribution becomes highly skewed and varies over time. Therefore, we propose to optimize the performance metric directly without fully relying on the estimated imbalance information.

## 2.2 Threshold Moving Method

The threshold moving method (Kukar et al., 1998; Zhou & Liu, 2005; Sheng & Ling, 2006; Voigt et al., 2014; Hancock et al., 2022) is a common technique in cost-sensitive learning. It trains a classifier on the original dataset and prioritizes classes with higher misclassification costs during prediction, using a predefined cost matrix. Formally, denote the cost matrix as $M_{ij}$, where $1 \leq i, j \leq C$, representing the cost of misclassifying class $i$ as class $j$. Here $C$ is the number of classes. Let $O_i$, where $1 \leq i \leq C$, represent the probabilistic output, with $\sum_{i=1}^{C} O_i = 1$ and $0 \leq O_i \leq 1$. The prediction in the threshold moving method is $\arg\max_i O'_i$, compared to $\arg\max_i O_i$ in standard classifiers, where $O'_i$ is calculated according to

$$O_{i}^{\prime}=\eta\sum_{j=1}^{C}O_{i}M_{ij}=\eta\Big(\sum_{j=1}^{C}M_{ij}\Big)O_{i}=\eta v_{i}O_{i}\tag{1}$$

Figure 1: Illustration of the working mechanism of the cost vector.
The cost vector pushes the decision boundary towards the majority, and the dynamic evolutionary algorithm ensures the adaptability of the cost vector for potential concept drift.

Here $\eta$ is a normalization term such that $\sum_{i=1}^{C} O'_i = 1$ and $0 \leq O'_i \leq 1$. Note that we can use the cost vector $v_i = \sum_{j=1}^{C} M_{ij}$ ($1 \leq i \leq C$), of lower complexity $O(C)$, instead. The cost vector represents the misclassification cost of class $i$ and adjusts the decision boundary toward less costly classes, making it harder to misclassify samples with higher costs. In this paper, the threshold moving method is adapted to online class imbalance learning by enabling the cost matrix/vector to be learnable in two novel ways, namely OECV-n and OECV, so that it can respond to the current stream behavior (Fig. 1) rather than being predefined. The baseline OECV-n is designed with time-decay class size, while the main algorithm OECV finds the optimal cost vector based on OECV-n and EA.

## 2.3 Evolutionary Algorithm For Class Imbalance Learning

Recent studies (Pei et al., 2023) have shown the potential of EA in addressing class imbalance, while most of the existing literature remains confined to offline scenarios. In Perry et al. (2015), a genetic algorithm (GA) is used to optimize a class-dependent cost matrix for the weighted updating of a classifier. Sun et al. (2006) introduces a cost-sensitive boosting algorithm that employs GA to optimize a class-dependent cost vector. ECSB (Lemnaru & Potolea, 2017) uses GA to optimize a class-dependent cost matrix and classifier parameters simultaneously. GA is also applied to identify an optimal subset of instances in the majority class (Drown et al., 2009; Khoshgoftaar et al., 2010). In a cost-sensitive SVM method proposed in Cao et al. (2013), the misclassification cost ratio is optimized using particle swarm optimization. Furthermore, differential evolution (DE) has also been applied to optimize class-dependent cost matrices for cost-sensitive deep belief networks (Zhang et al., 2018; 2016). EA is also utilized to support data-level methods. For instance, Jiang et al. (2016) introduces GASMOTE, a GA-based SMOTE approach that optimizes sampling rates for minority class instances.

There are significant challenges to adapting these methods to online settings. Unlike offline learning, which receives all training data upfront, online learning lacks this comprehensive data overview. Besides, the model must continuously and rapidly adapt to potential concept drift rather than converging. To our knowledge, only Wang & Wang (2023) has adopted a similar idea of EA in online class imbalance learning. It picks the base classifiers, among different parameter configurations, with the highest performance so far. However, characteristics of class imbalance in Wang & Wang (2023) are only used by the original resampling method, and the class imbalance issue is not handled by EA directly. Besides, it is currently tailored for binary classification tasks, making it unsuitable for multi-class scenarios.

## 3 Online Evolutionary Cost Vector (OECV)

In this section, we introduce *Online Evolutionary Cost Vector* (OECV) to illustrate the EA-based cost vector learning approach. Section 3.1 outlines the overall training process. Section 3.2 reformulates the problem into a bi-level optimization. Section 3.3 gives the baseline algorithm OECV-n, and Section 3.4 gives the EA-based algorithm OECV.
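The refinement step of Eqn. 1, which appears as Line 2 of Alg. 1 below, can be sketched as follows (a minimal illustration assuming NumPy; the numbers are made up):

```python
import numpy as np

def refine_prediction(probs, cost_vector):
    # Eqn. 1: scale the rough probabilistic output O by the cost vector v,
    # then renormalize so that O'_i = eta * v_i * O_i sums to 1.
    scaled = probs * cost_vector
    return scaled / scaled.sum()

probs = np.array([0.6, 0.3, 0.1])  # rough prediction from the base classifier
v = np.array([0.2, 0.3, 0.5])      # larger v_i protects the minority classes
refined = refine_prediction(probs, v)
prediction = int(np.argmax(refined))  # arg max_i O'_i
```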
Algorithm 1: Training Procedure of the Proposed OECV

Input: Classifier $H^C_{t-1}$, class size $\Omega_{t-1}$, training sample $(X_t, y_t)$, evolution frequency $f$, optimal cost vector $v^*$, cost vector population $\mathcal{V}$, buffer $\mathcal{B}$
Output: Prediction $\hat{y}_t$
1. Generate the rough probabilistic prediction $p_t$ using $H^C_{t-1}$.
2. Produce the refined prediction $p^*_t$ as the final prediction $\hat{y}_t$ using $p_t$ and $v^*$ by Eqn. 1.
3. Update $H^C_{t-1}$ to $H^C_t$ by its own rule.
4. Update the class size $\Omega_{t-1} \to \Omega_t$ according to Eqn. 3.
5. Add the sample $(X_t, y_t)$ to $\mathcal{B}$.
6. if $t \bmod f == 0$ then
7.     Evolve $\mathcal{V}$ by Alg. 2 with $\Omega_t$, $\mathcal{B}$, and $H^C_t$, and update $\mathcal{V}$ and $v^*$ based on the evolution result.
8. return $\hat{y}_t$

Figure 2: Illustration of OECV as a bi-level optimization problem (Section 3.2). The first layer consists of a probabilistic classifier, while the second layer is a cost vector (Section 2.2). A fixed-size buffer is maintained to perform oversampling and avoid potential overfitting of the cost vector. The cost vector is learned by a dynamic evolutionary algorithm at frequency $f$ on the oversampled buffer, using G-mean for objective evaluation (Section 3.4).

## 3.1 Overall Test-Then-Train Process Of OECV

In a data stream $\{(X_t, y_t)\}_{t=1}^{+\infty}$, $X_t \in \mathbb{R}^d$ represents the data and $y_t \in \{1, \ldots, C\}$ represents the class label. $C$ is the total number of classes. An uneven class prior distribution leads to class imbalance, and concept drift necessitates that the algorithm adapt to an ever-changing data distribution. Each $X_t$ arrives strictly one at a time, being predicted first by the latest classifier $H^C_{t-1}$ and then refined using the cost vector $v^*$ to give the final prediction $p^*_t$. $p^*_t$ is used together with the true label $y_t$, which arrives before time $t+1$, to update the classifier $H^C_{t-1}$ to $H^C_t$. This process is known as the "test-then-train" process.

We present OECV in Alg. 1. At the beginning of the data stream, the cost vector population $\mathcal{V}$ is initialized randomly. At time step $t$, the model $\{H^C_{t-1}, v^*\}$, where $H^C_t$ represents the latest online classifier and $v^*$ denotes the optimal cost vector discovered by EA up to time $t-1$, undergoes initial testing as depicted in Lines 1-2. Here, the online classifier offers an initial prediction, which is then refined by the cost vector. The classifier $H^C$ updates by its own rule in Line 3. The class size $\Omega_{t-1}$ and the fixed-size buffer $\mathcal{B}$ are updated in Lines 4-5, respectively. The cost vector population $\mathcal{V}$ evolves within the if statement (Lines 6-7) to yield a new population along with an optimal cost vector $v^*$. We detail OECV in the subsequent subsections individually.

## 3.2 Bi-Level Optimization

Due to the impracticality of a full evolution, our framework only evolves partially and breaks down both the model and the problem into two layers (see Fig. 2). The first layer, being an online classifier, offers a rough probabilistic prediction and updates by its own rule on the fly. The second layer, being a cost vector, refines the rough prediction and undergoes a dynamic optimization process via dynamic EA. The training data for updating the cost vector come from a fixed-size buffer $\mathcal{B}$, which is augmented by a simple oversampling trick to alleviate overfitting. As shown in the left part of Fig. 2, we denote the first and second layers as $\mathcal{H}_{1,t}$ and $\mathcal{H}_{2,t}$, respectively. The complete model is denoted by $\mathcal{H}_t = \{\mathcal{H}_{1,t}, \mathcal{H}_{2,t}\}$. The lower-level problem is to minimize a loss function $\ell_1(\mathcal{H}_1; X_t, y_t)$, which assesses the probabilistic prediction loss computed for each sample in the stream.
The upper-level problem involves minimizing a non-differentiable performance metric $\ell_2(\mathcal{H}_{2,t};\, p(\cdot; \mathcal{H}^*_{1,t}),\, y_t)$, which measures the refined prediction error based on the solution $\mathcal{H}^*_{1,t}$ of the first layer. The learning process of the upper layer occurs at a fixed frequency $f$ instead of updating at every step, for computational efficiency. Importantly, the lower layer updates solely based on its own rule, and optimizing the upper layer does not affect the lower layer. The overall optimization problem is stated as

$$\min_{\mathcal{H}_{2,t}}\ \ell_{2}\big(\mathcal{H}_{2,t};\,p(\cdot;\mathcal{H}^{*}_{1,t}),\,y_{t}\big)\quad\text{s.t.}\quad\mathcal{H}^{*}_{1,t}=\arg\min_{\mathcal{H}_{1,t}}\ \ell_{1}(\mathcal{H}_{1,t};X_{t},y_{t})\tag{2}$$

Note that the optimization of the lower level does not depend on the upper level in the sense that $\mathcal{H}^*_{1,t}$ does not depend on $\mathcal{H}_{2,t}$. In this study, $\mathcal{H}_1$ is set to an online classifier $H^C$ along with its $\ell_1$ from existing work. $H^C$ may not consider the specific characteristics of the performance metric to be optimized. For instance, it may not account for the class imbalance in online class imbalance learning. $\mathcal{H}_2$ is set to a cost vector $v$, and the choice of $\ell_2$ varies depending on specific needs, such as G-mean or balanced accuracy. In this way, only the upper layer is metric-specific. In the following subsections, we only focus on the learning strategies for the cost vector.

Remark 1 We notice that the upper level and the lower level are essentially optimized on overlapping sources of data, where the classifier (lower level) uses all data until time $t$, and the cost vector (upper level) uses past samples stored in $\mathcal{B}$. Intuitively, this may result in overfitting in a bi-level optimization setting. In offline bi-level optimization, a better choice is to use distinct training and validation datasets to train the two levels. However, this is trickier in our online setting since the samples come in the form of a stream. We did not add an additional design for simplicity. In fact, the oversampling technique on $\mathcal{B}$, although originally proposed to enhance the optimization of the cost vector rather than to handle this problem, may also help. Specifically, it can alleviate the overfitting caused by overlapping data sources by introducing diversified data via interpolation, making the data used for optimizing the cost vector different from that used for the classifier.

Remark 2 We want to point out several trade-offs in our design of OECV. The first is introducing the updating frequency of the cost vector and the population size to handle the trade-off between time consumption and performance. Intuitively, a small updating frequency $f$ will allow better performance, and in the extreme case where $f = 1$, the updating frequencies of both levels align, which would achieve the best performance. However, this comes with high computation costs since updating the cost vector is generally slower than updating the classifier. On the other hand, intermittent updating of the cost vector allows faster training but may incur sub-optimality due to the mismatch in optimization speed between the two levels. Similarly, a large population size can increase the probability of finding the optimal solution and give better performance than a small population. However, this would induce high updating costs, which is not favored in online learning.
Another trade-off is between memory consumption and performance, which is handled by the oversampling rate and buffer size. In practice, using a large memory allows a large buffer, and an oversampling trick may not be necessary in this case, which is equivalent to setting $r = 1$. In contrast, the oversampling trick can reduce the storage requirement and enhance the data diversity but induces a higher time consumption. Additionally, it may come with issues if label noise exists in $\mathcal{B}$; the labels of the augmented buffer $\mathcal{B}'$ are likely to be corrupted as well and result in performance degradation. Fortunately, the choice of hyperparameters is straightforward and relatively robust within a certain range: we choose the hyperparameters to make the algorithm as fast as possible while still making improvements compared to baselines. We keep the setting of $f$ and $r$ throughout all datasets without heavy fine-tuning, which already enables OECV to outperform or be on par with baseline methods over a wide range of datasets. See also Appendix E for an empirical analysis of hyperparameters.

## 3.3 Learning Cost Vector With Time Decay Class Sizes

The first approach, OECV-n (naive), employs time-decay class sizes (Wang et al., 2018) $\Omega_t = \{\omega_{i,t}\}_{i=1}^{C}$ at time $t$ to continuously track the imbalance status over time using a predefined time decay factor $\lambda$:

$$\omega_{k,t}=\lambda\,\omega_{k,t-1}+(1-\lambda)\cdot\mathbb{I}(y_{t}=k),\qquad 0\leq\lambda\leq 1\tag{3}$$

where $\omega_{k,t}$ represents the size of the $k$-th class at time step $t$, and $\mathbb{I}$ is the indicator function. The cost matrix $M_{ij}$ and cost vector $v_i$ are then determined heuristically as follows:

$$M_{ij}=\frac{\omega_{j,t}}{\omega_{i,t}},\qquad v_{i}=\sum_{j=1}^{C}M_{ij}\tag{4}$$

In other words, the prediction probability of a class will be scaled up if it is a relative minority class (in the sense of the adaptively estimated class size $\omega$) and scaled down otherwise. Note that the loss function remains untouched in threshold moving, but the prediction probabilities are scaled by $v$. This approach can adapt to the current stream behavior by passively tracking the changing imbalance status. However, it cannot guarantee optimal performance as it relies on the heuristic form of the cost matrix as well as the hyperparameter $\lambda$. The detailed training procedure of OECV-n is similar to that of OECV, obtained by removing all use of evolution and replacing $v^*$ in Alg. 1 with the cost vector determined by Eqn. 4.

## 3.4 Learning Cost Vector With Dynamic Evolutionary Algorithm

Algorithm 2: Cost Vector Evolution

Input: Buffer $\mathcal{B}$, cost vector population $\mathcal{V}$, online classifier $H^C$, number of neighbors $k$, sampling rate $r$, size of prior population $m$
Output: Optimal cost vector $v^*$, cost vector population $\mathcal{V}$
1. // Maintain population diversity and integrate prior knowledge
2. Generate the human-designed cost vector $v_h$ using $\Omega_t$ by Eqn. 4.
3. Create the prior population $\{v^{(i)}\}_{i=1}^{m}$ based on $v_h$ using Eqn. 5, and add it to $\mathcal{V}$.
4. // Oversampling for data diversity
5. Initialize the augmented buffer $\mathcal{B}'$ with samples from $\mathcal{B}$.
6. for $X_t$ in $\mathcal{B}$ do
7.     for $r - 1$ times do
8.         Find the $k$ nearest neighbors of $X_t$ and randomly select $X'_t$ from them.
9.         Generate a new sample using Eqn. 6 with $\alpha \sim U(0, 1)$, and add it to $\mathcal{B}'$.
10. // Evolution
11. Produce the rough probabilistic predictions $\{p_i\}_{i=1}^{|\mathcal{B}'|}$ for each sample in $\mathcal{B}'$ using $H^C$.
12. For each $v^{(k)}$, produce refined predictions $\{p^{(k)}_i\}_{i=1}^{|\mathcal{B}'|}$ using $\{p_i\}_{i=1}^{|\mathcal{B}'|}$ (Eqn. 1).
13. Calculate the fitness $f^{(k)}$ for $v^{(k)}$ based on $\{p^{(k)}_i\}_{i=1}^{|\mathcal{B}'|}$ and the true labels $\{y_i\}_{i=1}^{|\mathcal{B}'|}$.
14. Evolve $\mathcal{V}$ for one generation by crossover and mutation using $\{f^{(k)}\}_{k=1}^{|\mathcal{V}|}$.
15. Calculate $\{f^{(k)}\}_{k=1}^{|\mathcal{V}|}$ to find the optimal solution $v^*$ by comparing fitness.
16. return $v^*$, $\mathcal{V}$

Compared to designing with time-decay class size, EAs can find cost vectors that optimize performance measures directly. The evolution process, along with two related tricks of the resulting OECV, is illustrated as follows. See the complete algorithm in Alg. 2.

## 3.4.1 Evolution

- **Chromosome Encoding**: The cost vector $v^{(k)}$ is encoded into a chromosome straightforwardly, with the $C$-dimensional vector itself being the chromosome.
- **Fitness Calculation**: The chance of passing genetic information to subsequent generations relies on the fitness of a cost vector. We maintain recent samples in a fixed-size buffer $\mathcal{B}$2 for fitness calculation. $\mathcal{B}$ is enlarged into $\mathcal{B}'$ by oversampling (see the next subsection) before being used for fitness evaluation. Specifically, we first do classification using the latest classifier $H^C$ on $\mathcal{B}'$, resulting in the set of rough probabilistic predictions $\{p_i\}_{i=1}^{|\mathcal{B}'|}$. Each individual $v^{(k)}$ refines the rough predictions to give a set of final predictions $\{p^{(k)}_i\}_{i=1}^{|\mathcal{B}'|}$. $\{p^{(k)}_i\}_{i=1}^{|\mathcal{B}'|}$, along with the set of true labels $\{y_i\}_{i=1}^{|\mathcal{B}'|}$, is then used to calculate a performance metric as the fitness $f^{(k)}$ of $v^{(k)}$. With the set of fitness values $\{f^{(k)}\}_{k=1}^{|\mathcal{V}|}$, the optimal individual (cost vector) can be determined straightforwardly. Note that the performance metric used here is the corresponding offline metric (e.g., G-mean) instead of the online metric (e.g., online G-mean) so that the fitness calculation is not affected by the order of samples in $\mathcal{B}'$.
- **Genetic Operator**: EA employs genetic operators to produce new cost vectors by crossover and mutation based on the fitness values of individuals. Any single-objective genetic operator may be used in the current framework. If the generation of new cost vectors at Line 14 in Alg. 2 is removed, while the selection of the optimal individual in Line 15 is retained, we get a comparison algorithm, OECV-ea, as demonstrated in the ablation study. In this case, OECV-ea can be used to show whether OECV works by finding better individuals with evolution instead of simply relying on the buffered data to select a good solution from a large number of candidates.

## 3.4.2 Maintain Population Diversity And Integrate Prior Knowledge

The cost vector designed by time-decay class size as in OECV-n can be used to guide the EA. This benefits OECV by integrating prior knowledge of the imbalance status and preventing it from converging to a temporary optimal solution. Analogous in spirit to the time-decay class size approach, the dynamic evolutionary algorithm also acts passively to counter the effect of concept drift, i.e., it does not detect the concept drift directly. We want to emphasize that this approach may not be the best choice in more complicated scenarios since it currently only focuses on class-prior concept drift. How to adapt to more complicated drift scenarios is beyond the scope of this work. Specifically, we add $m$ different cost vectors $\{v^{(i)}\}_{i=1}^{m}$, randomly generated from $v_h$ of Eqn. 4 in a heuristic way:

$$\mathbf{v}^{(i)}=\mathbf{v}_{h}+\mathbf{w},\qquad w_{j}\sim U\!\left(0,\frac{i}{m}\right)\tag{5}$$

Recall that in the definition of the cost vector (Eqn. 1), we require each dimension of $\mathbf{v}^{(i)}$ to be in $[0, 1]$ and the dimensions to sum up to 1. Therefore, each dimension of $\mathbf{v}^{(i)}$ is clipped to $[0, 1]$ and re-normalized.
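A minimal sketch of Eqns. 3-5 (assuming NumPy; the decay factor lam = 0.99 is an illustrative choice, not a value prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_class_sizes(omega, y_t, lam=0.99):
    # Eqn. 3: time-decay class sizes (lam is an illustrative default).
    onehot = np.eye(len(omega))[y_t]
    return lam * omega + (1.0 - lam) * onehot

def heuristic_cost_vector(omega):
    # Eqn. 4: M_ij = w_j / w_i and v_i = sum_j M_ij, normalized so v sums to 1.
    M = omega[None, :] / omega[:, None]
    v = M.sum(axis=1)
    return v / v.sum()

def prior_population(v_h, m):
    # Eqn. 5: m perturbed copies of v_h, clipped to [0, 1] and re-normalized.
    population = []
    for i in range(1, m + 1):
        w = rng.uniform(0.0, i / m, size=v_h.shape)
        v = np.clip(v_h + w, 0.0, 1.0)
        population.append(v / v.sum())
    return population
```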
## 3.4.3 Oversampling For Data Diversity

To ensure an accurate fitness calculation, an oversampling trick for enhancing data diversity is applied to B. This creates an augmented buffer B′. Specifically, we expand B to r times its original size by generating r − 1 samples $\{(X_t^{(i)}, y_t^{(i)})\}_{i=1}^{r-1}$ ($r \in \mathbb{N}^+$) for each sample $(X_t, y_t)$:

$$X_t^{(i)} = X_t + \alpha \cdot (X_t' - X_t), \qquad y_t^{(i)} = y_t \qquad (6)$$

where α ∼ U(0, 1) and $X_t'$ is randomly selected from the k nearest neighbors of $X_t$.

²Certain storage requirements are generally acceptable in the literature on online class imbalance learning (Qin et al., 2021; Ren et al., 2019; Cano & Krawczyk, 2022). However, if additional storage is unavailable, an adaptive generative model can be used to generate samples in place of the buffer. In this work, we focus on the current extra storage scheme for simplicity.

## 3.4.4 Computational Complexity Analysis

We are aware of the potentially high computational complexity of OECV in practice, including time complexity and space complexity. Therefore, in this section, we provide a formal description. See also Appendix A for a time complexity comparison between our method and the baselines.

Denote the population size as |V| and the buffer size as |B|. OECV requires the storage of a buffer of data of size |B| × (d + 1), where d denotes the number of data features, i.e., the number of stored samples times the number of features plus one (for the class label). Since the temporary synthesized samples in the augmented buffer B′ can be processed one by one without storing everything in memory, their memory consumption is negligible, so the oversampled data is not counted toward the extra storage.

The overall time complexity of OECV is linear in the length of the stream, the same as existing works such as Wang et al. (2016), Qin et al. (2021), and LI et al. (2023). At each time step, the time complexity includes the consumption of both the classifier and the cost vector. The update of the classifier is a constant that depends on its own rules. For the update of the cost vector, the oversampling on B takes O(|B| × (r − 1)) time. Then, |V| individuals perform prediction and evaluation on B′, which takes O(|V| × |B| × r) time in total. The crossover, mutation, and selection operations are based on fitness and are method-dependent; they generally take O(|V|) and are much faster than the fitness calculation of the previous step. Summarizing the above steps, and recalling that the cost vector is updated once every f time steps, the overall time complexity is O(rT|V||B|/f) for the whole data stream, where T is the length of the stream.

While evolutionary algorithms are well known for their high computational cost, our scheme of using the cost vector in a post hoc way allows a much more efficient fitness evaluation. To see this, a forward calculation for prediction is enough to give the fitness, which is attributed to the decoupling of the two layers, where the training of the classifier is totally independent of the cost vector. Therefore, we only need to evaluate how well the cost vectors correct the current well-trained classifier, without any time-consuming retraining at each time step. This drastically decreases the time for fitness calculation and makes OECV practical.
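To illustrate why one update is cheap, here is a sketch of the oversampling step (Eqn. 6) and the forward-pass fitness evaluation discussed above. It assumes the refinement of Eqn. 1 amounts to elementwise scaling of the rough probabilities by v; the G-mean fitness and all names are illustrative, not the exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample(X, y, r, k):
    """Expand the buffer B to r times its size by k-NN interpolation (Eqn. 6)."""
    X_aug, y_aug = [X], [y]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                  # exclude the sample itself
    nn = np.argsort(dists, axis=1)[:, :k]            # k nearest neighbors per sample
    for _ in range(r - 1):
        picks = nn[np.arange(len(X)), rng.integers(0, k, len(X))]
        alpha = rng.uniform(0.0, 1.0, (len(X), 1))
        X_aug.append(X + alpha * (X[picks] - X))     # X_t + alpha * (X'_t - X_t)
        y_aug.append(y)                              # labels are copied unchanged
    return np.concatenate(X_aug), np.concatenate(y_aug)

def fitness(v, proba, y, n_classes):
    """Forward-pass fitness: refine cached rough predictions with v and score G-mean."""
    pred = (proba * v).argmax(axis=1)                # assumed form of Eqn. 1
    recalls = [np.mean(pred[y == c] == c) for c in range(n_classes) if np.any(y == c)]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))
```

Every candidate v(k) is scored against the same cached rough predictions (one call to the classifier on B′), so the classifier is never retrained during evolution.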
## 4 Experimental Studies

This section evaluates OECV from four aspects: comparing it to SOTA methods, testing the effectiveness of the EA, evaluating runtime efficiency, and exploring its inner working mechanism.

## 4.1 Experimental Setup

We use 30 datasets in total, as shown in Table 1, including 10 streaming datasets (Elec, Abrupt, Gradual, Incremental1, Luxembourg, NOAA, Ozone, Airlines, Covtype, Incremental2, available in the USP-DS repository (Souza et al., 2020)) and 20 real-world offline datasets (the remaining 20 datasets in Table 1, available in the Keel repository (Derrac et al., 2015)). The 20 offline datasets are processed in a streaming way to simulate online scenarios. The overall static imbalance ratio of each dataset illustrates the severity of class imbalance, while the class imbalance ratio also fluctuates throughout the online learning scenario.

The initial 30% of samples of each stream are used for model initialization in an offline fashion, to align with the setting in LI et al. (2023). The initialization samples are further split into two datasets of equal size for training the classifier and the cost vector separately. In this stage, the cost vector population evolves for 10 generations to give an initial population for later online training. The buffer size |B| for OECV is fixed at 200 samples, and the oversampling rate is set to 5 for all datasets. Offline G-mean is used for fitness evaluation on the augmented buffer. The cost vector evolves every 5 samples (i.e., f = 5), with the number of individuals set to 25. We employ DE/best/1/L (Opara & Arabas, 2019) as the genetic operator. Evolutionary algorithms are straightforward to implement by directly adopting existing Python packages (such as geatpy³, which was used in our experiments). All hyperparameters related to genetic operators are set to their default values without tuning. Specifically, the scaling factor of DE is set to 0.5, and exponential crossover is applied with the probability of crossover set to 0.7.

³https://geatpy.github.io/
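For illustration, below is a generic sketch of DE/best/1 mutation with exponential crossover under the settings above (scaling factor F = 0.5, crossover probability CR = 0.7). This is not geatpy's internal implementation, and the fitness-based survivor selection is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_best_1_exp(pop, fit, F=0.5, CR=0.7):
    """One generation of DE/best/1 with exponential crossover on the population."""
    n, C = pop.shape
    best = pop[np.argmax(fit)]
    trials = pop.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])      # DE/best/1 mutation
        j, L = rng.integers(C), 0                    # exponential crossover copies a
        while True:                                  # contiguous block of mutant genes
            trials[i, (j + L) % C] = mutant[(j + L) % C]
            L += 1
            if L >= C or rng.random() >= CR:
                break
        trials[i] = np.clip(trials[i], 0.0, 1.0)
        trials[i] /= trials[i].sum()                 # keep the cost-vector constraint
    return trials
```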
Table 1: Overview of the datasets. "#Data" denotes the total number of samples in the dataset, "#Fea" denotes the number of features, "#Class" denotes the number of classes, and IR denotes the overall static imbalance ratio, computed as the ratio between the largest and smallest class sizes.

| Dataset | #Data | #Fea | #Class | IR | Dataset | #Data | #Fea | #Class | IR | Dataset | #Data | #Fea | #Class | IR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Elec | 5000 | 8 | 2 | 1.6 | Abalone1 | 2338 | 8 | 2 | 39.3 | Win1 | 691 | 11 | 2 | 68.1 |
| Abrupt | 5000 | 33 | 6 | 4.0 | Abalone2 | 1622 | 8 | 2 | 49.7 | Win2 | 1599 | 11 | 2 | 29.2 |
| Gradual | 5000 | 33 | 6 | 171.2 | Car1 | 1728 | 6 | 2 | 24.0 | Win3 | 656 | 11 | 2 | 35.4 |
| Incremental1 | 5000 | 33 | 6 | 1.0 | Car2 | 1728 | 6 | 2 | 25.6 | Win4 | 1482 | 11 | 2 | 58.3 |
| Luxembourg | 1901 | 31 | 2 | 1.06 | Kddcup | 2225 | 41 | 2 | 100.1 | Win5 | 900 | 11 | 2 | 44.0 |
| NOAA | 5000 | 8 | 2 | 2.4 | Kr | 2901 | 6 | 2 | 26.6 | Yeast1 | 947 | 8 | 2 | 30.6 |
| Ozone | 2534 | 72 | 2 | 14.8 | Segment | 2308 | 19 | 2 | 6.0 | Yeast2 | 1484 | 8 | 10 | 92.6 |
| Airlines | 5000 | 7 | 2 | 2.1 | Shuttle1 | 3316 | 9 | 2 | 66.7 | Yeast3 | 1484 | 8 | 2 | 8.1 |
| Covtype | 5000 | 54 | 7 | 7.0 | Shuttle2 | 1829 | 9 | 2 | 13.9 | Yeast4 | 1484 | 8 | 2 | 32.7 |
| Incremental2 | 5000 | 33 | 6 | 25.4 | Thyroid | 720 | 21 | 3 | 39.2 | Yeast5 | 1484 | 8 | 2 | 41.4 |

We compare OECV with four SOTA online multi-class imbalance learning methods: MOOB, MUOB (Wang et al., 2016), AI-WSELM (Qin et al., 2021), and BEDCOE (LI et al., 2023). The total number of base learners is set to 10, following (Wang et al., 2016; LI et al., 2023). All methods adhere to a strict online learning setup. A multilayer perceptron serves as the base classifier for all methods except AI-WSELM, following the setup in Wang et al. (2016). We set the chunk size of AI-WSELM to 300, which is higher than our extra storage of 200. Prequential G-mean with a fading factor of 0.99 is selected as the performance metric, following Wang et al. (2018) and LI et al. (2023). Mean performance across 10 runs is evaluated on the remaining samples after the initialization stage.

Friedman tests (Demšar, 2006) are used to statistically compare competing methods across datasets. The null hypothesis (H0) posits that all models are equivalent in terms of the predictive performance metric. The alternative hypothesis (H1) suggests that at least one pair of methods differs significantly. If H0 is rejected, the Conover test (Conover & Iman, 1979) is conducted as the post-hoc test.
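For reference, one common way to implement the prequential G-mean with a fading factor is to keep exponentially faded per-class counts; a minimal sketch under that assumption, with illustrative names:

```python
import numpy as np

class PrequentialGMean:
    """Prequential G-mean with a fading factor: per-class recalls are computed from
    exponentially faded counts, so recent predictions dominate the metric."""

    def __init__(self, n_classes, fading=0.99):
        self.fading = fading
        self.seen = np.zeros(n_classes)      # faded count of examples per true class
        self.correct = np.zeros(n_classes)   # faded count of correct predictions

    def update(self, y_true, y_pred):
        self.seen *= self.fading
        self.correct *= self.fading
        self.seen[y_true] += 1.0
        self.correct[y_true] += float(y_pred == y_true)

    def value(self):
        mask = self.seen > 0
        if not mask.any():
            return 0.0
        recalls = self.correct[mask] / self.seen[mask]
        return float(np.prod(recalls) ** (1.0 / mask.sum()))
```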
## 4.2 Performance Comparison

We can see from Table 2(a) that, in terms of G-mean, OECV performs the best on 14 out of 30 datasets and the 2nd best on 8 datasets. Friedman tests at significance level 0.05 reject H0 with p-value 1.11 × 10⁻³, showing a significant difference between methods. Average ranks ("avgRank") across datasets are reported to show how well each method performs compared to the others. The average rank of OECV is 1.967, being the best. Post-hoc tests are then conducted to detect whether OECV differs significantly from the competitors, with OECV chosen as the control method. Post-hoc comparisons show that OECV significantly outperforms all of the competitors.

We can draw two observations from Table 2 on when OECV gains an advantage and when it does not. Firstly, we notice that when the number of classes is large, e.g., on the Gradual, Incremental1, and Yeast2 datasets, our method does not perform the best compared to the other baselines. Further analysis of the Spearman correlation (Fieller et al., 1957) shows a correlation coefficient of 0.49 (moderate) between the number of classes and the rank on the 30 datasets (the higher the rank, the worse the performance), a positive correlation. This verifies that our method generally performs better when a small number of classes is present. This is because the dimensionality of the cost vector equals the number of classes, and a larger cost vector is intuitively more difficult to optimize, especially with limited time and memory. A remedy for this issue calls for a more sophisticated algorithm design and is left to future work.

Secondly, we find that our method performs better when the stream is highly skewed, i.e., has a large imbalance ratio. For example, on Win3, Win4, and Win5, our method performs the best among the baselines by a large margin. Similarly, an analysis of the Spearman correlation shows a correlation coefficient of −0.29 (weak) between the imbalance ratio and the rank on the 30 datasets, a negative correlation, which confirms our conjecture that a highly imbalanced stream favors OECV. We speculate that, in this case, *ad hoc* imbalance estimation, such as the time-decay imbalance ratio (used in MOOB, MUOB, and BEDCOE), cannot capture the complicated overall imbalance status well. This downgrades the performance of the baselines by using a misleading imbalance indicator. In contrast, our method seeks an optimal cost vector directly by considering the performance metric, without consulting a heuristically estimated imbalance status. This could explain why OECV outperforms other methods under high imbalance.

Table 2: Performance comparison in terms of G-mean (%). Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last row lists the average ranks (avgRank) of each model across datasets in each subtable. Part (a) compares SOTA methods and the proposed OECV. A significant difference against OECV is highlighted in yellow. Part (b) reports the ablation results between variants of OECV.
(a) Performance comparison

| Dataset | AI-WSELM | MOOB | MUOB | BEDCOE | OECV |
|---|---|---|---|---|---|
| Elec | 78.2±1.6 | 90.9±0.2 | 88.7±0.4 | 95.5±0.1 | 83.7±0.9 |
| Abrupt | 66.2±1.4 | 60.2±1.7 | 60.4±2.2 | 60.0±0.4 | 62.8±0.6 |
| Gradual | 0.0±0.0 | 22.4±9.1 | 0.1±0.3 | 34.8±20.4 | 8.5±4.2 |
| Incremental1 | 46.0±0.7 | 53.8±0.6 | 48.5±2.0 | 52.9±0.4 | 46.4±1.5 |
| Luxembourg | 85.5±2.4 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 71.3±0.8 | 65.3±0.7 | 64.6±0.6 | 68.2±0.7 | 73.1±0.5 |
| Ozone | 65.0±2.9 | 72.3±1.8 | 78.0±0.6 | 70.6±1.3 | 77.1±1.7 |
| Airlines | 50.8±1.0 | 34.6±2.8 | 47.6±1.6 | 50.6±0.4 | 51.8±0.9 |
| Covtype | 0.0±0.0 | 65.4±0.8 | 0.0±0.0 | 64.6±1.2 | 28.6±1.5 |
| Incremental2 | 0.8±0.2 | 30.8±5.0 | 0.9±1.2 | 40.9±1.6 | 15.6±1.6 |
| Abalone1 | 43.6±5.2 | 55.0±0.9 | 64.5±3.8 | 59.1±0.8 | 67.8±4.3 |
| Abalone2 | 48.0±9.3 | 4.6±0.0 | 26.8±8.2 | 33.2±0.0 | 38.7±7.6 |
| Car1 | 80.4±2.9 | 33.3±4.3 | 56.3±4.9 | 44.5±5.0 | 78.2±2.2 |
| Car2 | 96.5±2.9 | 74.9±0.7 | 79.8±3.7 | 74.4±1.5 | 96.1±1.0 |
| Kddcup | 78.1±11.8 | 100.0±0.0 | 95.9±3.5 | 100.0±0.0 | 100.0±0.0 |
| Kr | 94.3±1.7 | 94.4±0.7 | 90.5±1.8 | 90.2±0.7 | 94.7±1.3 |
| Segment | 98.7±0.4 | 98.9±0.1 | 93.0±0.6 | 99.0±0.0 | 99.1±0.1 |
| Shuttle1 | 100.0±0.0 | 99.4±0.6 | 97.9±1.7 | 99.0±0.9 | 99.9±0.0 |
| Shuttle2 | 99.4±0.1 | 99.6±0.0 | 99.8±0.1 | 99.7±0.1 | 99.7±0.0 |
| Thyroid | 29.0±14.0 | 38.9±2.7 | 0.6±0.0 | 56.7±2.2 | 71.6±1.5 |
| Win1 | 29.1±34.1 | 6.8±0.0 | 6.8±0.0 | 36.5±36.4 | 80.6±1.2 |
| Win2 | 39.0±4.8 | 15.4±5.1 | 62.0±5.0 | 27.3±1.4 | 59.2±3.6 |
| Win3 | 26.2±11.3 | 22.1±2.4 | 19.6±10.0 | 26.5±2.8 | 79.9±1.2 |
| Win4 | 9.7±10.7 | 43.2±11.6 | 16.7±9.5 | 27.7±1.6 | 50.6±5.7 |
| Win5 | 22.8±19.3 | 32.8±4.7 | 11.0±4.2 | 14.7±0.0 | 53.3±7.1 |
| Yeast1 | 47.8±8.0 | 32.0±0.5 | 36.1±13.8 | 33.2±4.1 | 48.0±18.9 |
| Yeast2 | 28.0±6.0 | 0.1±0.1 | 0.0±0.0 | 8.8±4.2 | 0.2±0.4 |
| Yeast3 | 81.5±2.3 | 89.2±0.3 | 89.8±1.2 | 87.5±0.3 | 87.8±1.0 |
| Yeast4 | 72.9±6.6 | 86.5±0.8 | 82.4±9.5 | 71.7±2.8 | 81.7±5.7 |
| Yeast5 | 70.3±3.8 | 64.1±1.9 | 51.7±7.1 | 53.1±3.0 | 86.5±1.7 |
| avgRank | 3.35 | 3.133 | 3.517 | 3.033 | 1.967 |

(b) Ablation studies

| Dataset | OECV-n | OECV | OECV-ea |
|---|---|---|---|
| Elec | 83.1±0.4 | 83.7±0.9 | 83.6±0.9 |
| Abrupt | 62.0±0.7 | 62.8±0.6 | 62.6±0.9 |
| Gradual | 15.8±2.5 | 8.5±4.2 | 4.3±2.8 |
| Incremental1 | 45.9±1.2 | 46.4±1.5 | 46.2±1.2 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.0±0.5 | 73.1±0.5 | 72.9±0.5 |
| Ozone | 71.8±1.9 | 77.1±1.7 | 76.1±1.6 |
| Airlines | 50.7±0.5 | 51.8±0.9 | 51.7±0.8 |
| Covtype | 38.7±1.1 | 28.6±1.5 | 26.6±1.3 |
| Incremental2 | 21.2±1.8 | 15.6±1.6 | 13.0±1.0 |
| Abalone1 | 55.7±3.0 | 67.8±4.3 | 59.0±5.7 |
| Abalone2 | 25.6±0.1 | 38.7±7.6 | 33.1±5.9 |
| Car1 | 77.0±3.1 | 78.2±2.2 | 77.5±2.0 |
| Car2 | 94.7±1.2 | 96.1±1.0 | 95.2±0.9 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 91.5±0.8 | 94.7±1.3 | 92.2±1.3 |
| Segment | 99.1±0.1 | 99.1±0.1 | 99.4±0.1 |
| Shuttle1 | 99.9±0.0 | 99.9±0.0 | 99.9±0.0 |
| Shuttle2 | 99.7±0.0 | 99.7±0.0 | 99.6±0.0 |
| Thyroid | 68.5±3.8 | 71.6±1.5 | 73.6±2.2 |
| Win1 | 79.9±0.1 | 80.6±1.2 | 80.6±0.9 |
| Win2 | 48.1±0.9 | 59.2±3.6 | 49.6±1.9 |
| Win3 | 39.0±4.2 | 79.9±1.2 | 60.6±12.9 |
| Win4 | 18.9±7.7 | 50.6±5.7 | 29.1±4.1 |
| Win5 | 53.3±5.6 | 53.3±7.1 | 41.0±7.4 |
| Yeast1 | 16.5±0.9 | 48.0±18.9 | 15.5±0.2 |
| Yeast2 | 0.0±0.0 | 0.2±0.4 | 0.0±0.0 |
| Yeast3 | 85.0±1.1 | 87.8±1.0 | 86.8±0.8 |
| Yeast4 | 68.3±4.0 | 81.7±5.7 | 75.0±3.7 |
| Yeast5 | 84.8±0.1 | 86.5±1.7 | 81.8±2.8 |
| avgRank | 2.467 | 1.333 | 2.2 |
## 4.3 Ablation Study

Two comparison models, OECV-n and OECV-ea, have been built in Section 3.3 and Section 3.4, which differ from OECV only in the way the cost vector is learned. They are employed here to study the effectiveness of the EA. We would expect the performance of OECV, with the full assistance of evolutionary optimization, to be the best. The performance of OECV-ea should be in the middle: while evolution is not used, several candidate cost vectors are still under consideration, with the best one selected using extra data. The performance of OECV-n should be the worst, because only human knowledge is used. If this occurs, we can conclude that the EA used for optimizing the cost vector is crucial for dealing with class imbalance, and that the extra data in the buffer is not the determinative reason for the performance improvement.

![12_image_0.png](12_image_0.png)

Figure 3: Prequential G-mean, imbalance ratio, and standard deviation of fitness of the population on the Ozone dataset. The higher the standard deviation, the greater the diversity. The imbalance ratio is calculated by time-decay class sizes (Eqn. 3).

Table 2(b) shows the results in terms of G-mean. The three methods are compared to each other, with Wilcoxon signed rank tests (Wilcoxon, 1992) used to determine whether there are significant differences between them. We can see that the average rank of OECV (1.333) is better than that of OECV-n (2.467) and OECV-ea (2.2). The Wilcoxon signed rank tests reject H0 with p-values 0.0036 and 9.62 × 10⁻⁵, respectively, meaning OECV is significantly superior to OECV-n and OECV-ea. In the comparison between OECV-ea and OECV-n, the average rank of OECV-ea (2.2) is better than that of OECV-n (2.467), but the Wilcoxon signed rank test fails to reject H0 with p-value 0.178, meaning there is no significant difference between OECV-ea and OECV-n.

From the comparison between OECV and OECV-n, we see that eliminating the whole EA strategy significantly degrades performance. However, this gain could in principle be caused by the extra data from the buffer used in OECV. We can see from the comparison between OECV-ea and OECV, as well as OECV-ea and OECV-n, that the extra data does not play a determinative role. To see this, note that OECV-ea also uses the extra data to select an optimal cost vector with the performance metric as the objective, yet it does not perform significantly better than OECV-n and performs significantly worse than OECV. This means it is the EA, rather than the extra data, that makes OECV outperform SOTA methods, showing the effectiveness of the EA.
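The statistical protocol of this section can be reproduced with SciPy; a small sketch using a handful of the per-dataset G-mean means from Table 2(b) as paired scores:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# One G-mean score per dataset for each variant (first rows of Table 2(b)).
oecv    = np.array([83.7, 62.8, 8.5, 46.4, 73.1])
oecv_n  = np.array([83.1, 62.0, 15.8, 45.9, 73.0])
oecv_ea = np.array([83.6, 62.6, 4.3, 46.2, 72.9])

# Friedman test across datasets: H0 says all variants perform equivalently.
stat, p = friedmanchisquare(oecv, oecv_n, oecv_ea)

# Pairwise Wilcoxon signed rank test (paired by dataset) as the follow-up comparison.
w_stat, w_p = wilcoxon(oecv, oecv_ea)
```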
## 4.4 Further Discussions

We explore two related questions to assess the working mechanism of OECV.

## 4.4.1 Analysis On Population Diversity

We explore whether OECV can maintain population diversity over time instead of converging. The population diversity enables OECV to track the optimal cost vector instead of converging to a fixed solution. We present the standard deviation of individual fitness in Fig. 3, together with a further analysis of the Spearman correlation (Fieller et al., 1957). The result shows correlation coefficients of 0.594 (moderate) and 0.629 (strong) between the absolute difference of the imbalance ratio (i.e., the absolute value of the difference between two neighboring class imbalance ratios) and the standard deviation (std) of fitness for OECV and OECV-ea, respectively, both positive correlations. It also shows a high correlation coefficient of 0.869 between the std of fitness of OECV and that of OECV-ea. We can draw two observations: 1) The diversity adapts to the data stream behavior. This means OECV and OECV-ea can expand the exploration of new cost vectors (high std) during a concept drift, where the imbalance ratio changes drastically, while adopting temporary elitists by leveraging learned knowledge about class imbalance (low std) during a steady stream, where the imbalance ratio is stable. 2) Despite similar diversity and changing behavior, OECV outperforms OECV-ea. This indicates the superiority of the EA in that it can maintain a population of cost vectors with higher quality under the same diversity.

## 4.4.2 Analysis On EA-Based Cost Vector

We explore how the cost vector found by the EA-based method outperforms that of OECV-n. We define the weight ratio (WR) as v1/v0 to visualize the cost vector in a binary classification scenario in Fig. 4, where vi represents the i-th dimension of the cost vector.

![13_image_0.png](13_image_0.png)

Figure 4: Prequential G-mean, imbalance ratio, and weight ratio on the Airlines dataset. The imbalance ratio is calculated by time-decay class sizes (Eqn. 3).

Analogous to the imbalance ratio, the WR serves as a belief about the imbalance level indicated by the cost vector. We analyze the Spearman correlation between the WR of the three variants and the imbalance ratio, yielding correlation coefficients of 0.971, 0.897, and 0.887 for OECV-n, OECV-ea, and OECV, respectively, indicating strong correlations. This means the cost vectors found by the EA also reflect beliefs about class imbalance, while some of these beliefs are sacrificed to seek more appropriate values of the cost vector in finding the optimal solution. Besides, Fig. 4 illustrates the challenge of finding the optimal solution with *ad hoc* assumptions: while OECV outperforms OECV-n, its WR fluctuates compared to that of OECV-n. This suggests that relying solely on the imbalance ratio is insufficient for identifying the best cost vector. The dynamic evolutionary algorithm addresses this limitation by directly setting the performance measure as the objective and avoiding heuristic reliance.
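A sketch of the WR analysis, assuming traces of binary cost vectors and time-decay imbalance ratios are logged over the stream; the values below are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

# Logged traces: one binary cost vector per update step, plus the time-decay
# imbalance ratio omega_1 / omega_0 from Eqn. 3 at the same steps.
cost_vectors = np.array([[0.55, 0.45], [0.60, 0.40], [0.52, 0.48], [0.58, 0.42]])
imbalance_ratio = np.array([0.82, 0.67, 0.92, 0.71])

weight_ratio = cost_vectors[:, 1] / cost_vectors[:, 0]  # WR = v_1 / v_0
rho, p = spearmanr(weight_ratio, imbalance_ratio)       # agreement with the heuristic
```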
## 5 Conclusion

This article introduces a novel approach, Online Evolutionary Cost Vector (OECV), along with two variants, to tackle the online class imbalance issue by eliminating the heuristic assumptions about class imbalance widely used in existing methods. OECV instead tries to maximize performance on any specified performance metric directly. This is achieved by adopting a dynamic evolutionary algorithm for online model evolution. The model is explicitly deconstructed into two layers: an online classifier for rough probabilistic prediction and a cost vector for refining the decision boundary. The cost vector is the only part subject to the dynamic evolutionary algorithm, which is used to directly optimize any specific performance metric. This bi-level architecture is motivated by viewing the cost vector as a hyperparameter of the threshold moving method and the evolutionary algorithm as an approach to fine-tune this hyperparameter. It is based on the assumption that the cost vector, utilized to adjust the decision boundary, has an optimal value that yields the best online metric for a dynamic base classifier. A dynamic evolutionary algorithm is employed to find a value superior to the human-designed counterpart. Cost vectors designed by the time-decay class size are integrated into the prior population to sustain population diversity and integrate prior knowledge. To mitigate overfitting, an oversampling method is used to augment the buffer and attain more beneficial evolutionary results. Comparisons with SOTA methods, ablation studies, and runtime comparisons demonstrate the validity and efficiency of our approach. Analysis of the working mechanism of OECV reveals how it can generate a superior cost vector compared to the human-designed counterpart.

We want to emphasize that the potential of the OECV framework extends beyond the class imbalance setting, and it merits further exploration in various other classification tasks. High performance across a broad range of metrics unrelated to class imbalance can be achieved with only slight adjustments to the cost vector. For instance, OECV can simultaneously serve multi-objective purposes by optimizing for multiple metrics, including accuracy, recall, and F1-score. Another direction for future work is handling potential label noise. Specifically, when some labels are corrupted, the samples in the buffer, as well as in the oversampled buffer, will also contain corrupted labels. This degrades the optimization of the cost vector and deserves a dedicated design.

## References

Alessio Bernardo, Heitor Murilo Gomes, Jacob Montiel, Bernhard Pfahringer, Albert Bifet, and Emanuele Della Valle. C-smote: Continuous synthetic minority oversampling for evolving data streams. In *2020 IEEE International Conference on Big Data (Big Data)*, pp. 483–492. IEEE, 2020.

Albert Bifet and Ricard Gavalda. Learning from time-changing data with adaptive windowing. In *Proceedings of the 2007 SIAM international conference on data mining*, pp. 443–448. SIAM, 2007.

Alberto Cano and Bartosz Krawczyk. Kappa updated ensemble for drifting data stream mining. *Machine Learning*, 109:175–218, 2020.

Alberto Cano and Bartosz Krawczyk. Rose: robust online self-adjusting ensemble for continual learning on imbalanced drifting data streams. *Machine Learning*, 111(7):2561–2599, 2022.

Peng Cao, Dazhe Zhao, and Osmar Zaiane. An optimized cost-sensitive svm for imbalanced data learning. In *Pacific-Asia conference on knowledge discovery and data mining*, pp. 280–292. Springer, 2013.

Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. *Journal of artificial intelligence research*, 16:321–357, 2002.

William Jay Conover and Ronald L Iman. On multiple-comparisons procedures. *Los Alamos Sci. Lab. Tech. Rep. LA-7677-MS*, 1:14, 1979.

Janez Demšar. Statistical comparisons of classifiers over multiple data sets. *The Journal of Machine Learning Research*, 7:1–30, 2006.

J Derrac, S Garcia, L Sanchez, and F Herrera. Keel data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework.
*J. Mult. Valued Log. Soft Comput*, 17:255–287, 2015. Shuya Ding, Bilal Mirza, Zhiping Lin, Jiuwen Cao, Xiaoping Lai, Tam V Nguyen, and Jose Sepulveda. Kernel based online learning for imbalance multiclass classification. *Neurocomputing*, 277:139–148, 2018. Dennis J Drown, Taghi M Khoshgoftaar, and Naeem Seliya. Evolutionary sampling and software quality modeling of high-assurance systems. *IEEE Transactions on Systems, Man, and Cybernetics-Part A:* Systems and Humans, 39(5):1097–1107, 2009. Pedro G Espejo, Sebastián Ventura, and Francisco Herrera. A survey on the application of genetic programming to classification. *IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)*, 40(2):121–144, 2009. Edgar C Fieller, Herman O Hartley, and Egon S Pearson. Tests for rank correlation coefficients. i. *Biometrika*, 44(3/4):470–481, 1957. João Gama, Indr˙e Žliobait˙e, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept drift adaptation. *ACM computing surveys (CSUR)*, 46(4):1–37, 2014. John Hancock, Justin M Johnson, and Taghi M Khoshgoftaar. A comparative approach to threshold optimization for classifying imbalanced data. In 2022 IEEE 8th International Conference on Collaboration and Internet Computing (CIC), pp. 135–142. IEEE, 2022. Guang-Bin Huang, Nan-Ying Liang, Hai-Jun Rong, Paramasivan Saratchandran, and Narasimhan Sundararajan. On-line sequential extreme learning machine. *Computational Intelligence*, 2005:232–237, 2005. Kun Jiang, Jing Lu, and Kuiliang Xia. A novel algorithm for imbalance data classification based on genetic algorithm improved smote. *Arabian journal for science and engineering*, 41:3255–3266, 2016. Zhenhao Jiang, Tingting Pan, Chao Zhang, and Jie Yang. A new oversampling method based on the classification contribution degree. *Symmetry*, 13(2):194, 2021. Taghi M Khoshgoftaar, Naeem Seliya, and Dennis J Drown. Evolutionary data analysis for the class imbalance problem. *Intelligent Data Analysis*, 14(1):69–88, 2010. Jakub Klikowski and Michał Woźniak. Employing one-class svm classifier ensemble for imbalanced data stream classification. In Computational Science–ICCS 2020: 20th International Conference, Amsterdam, The Netherlands, June 3–5, 2020, Proceedings, Part IV 20, pp. 117–127. Springer, 2020. Paweł Ksieniewicz. The prior probability in the batch classification of imbalanced data streams. *Neurocomputing*, 452:309–316, 2021. Matjaz Kukar, Igor Kononenko, et al. Cost-sensitive learning with neural networks. In *ECAI*, volume 15, pp. 88–94. Citeseer, 1998. Camelia Lemnaru and Rodica Potolea. Evolutionary cost-sensitive balancing: A generic method for imbalanced classification problems. In EVOLVE-A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation VI, pp. 194–209. Springer, 2017. Shuxian LI, Liyan SONG, Yiu-Ming CHEUNG, and Xin YAO. Bedcoe: Borderline enhanced disjunct cluster based oversampling ensemble for online multi-class imbalance learning. In *ECAI 2023: 26th European* Conference on Artificial Intelligence, September 30–October 4, 2023, Kraków, Poland, Including 12th Conference on Prestigious Applications of Intelligent Systems (PAIS 2023): Proceedings, pp. 1414–1421. IOS Press BV, 2023. Xu-Ying Liu and Zhi-Hua Zhou. Learning with cost intervals. In *Proceedings of the 16th ACM SIGKDD* international conference on Knowledge discovery and data mining, pp. 403–412, 2010. Karol R Opara and Jarosław Arabas. Differential evolution: A survey of theoretical analyses. 
Swarm and evolutionary computation, 44:546–558, 2019. Wenbin Pei, Bing Xue, Mengjie Zhang, Lin Shang, Xin Yao, and Qiang Zhang. A survey on unbalanced classification: How can evolutionary computation help? *IEEE Transactions on Evolutionary Computation*, 2023. Todd Perry, Mohamed Bader-El-Den, and Steven Cooper. Imbalanced classification using genetically optimized cost sensitive classifiers. In *2015 IEEE Congress on Evolutionary Computation (CEC)*, pp. 680–687. IEEE, 2015. Jiongming Qin, Cong Wang, Qinhong Zou, Yubin Sun, and Bin Chen. Active learning with extreme learning machine for online imbalanced multiclass classification. *Knowledge-Based Systems*, 231:107385, 2021. Siqi Ren, Wen Zhu, Bo Liao, Zeng Li, Peng Wang, Keqin Li, Min Chen, and Zejun Li. Selection-based resampling ensemble algorithm for nonstationary imbalanced stream data learning. *Knowledge-Based* Systems, 163:705–722, 2019. Miguel Rocha, Paulo Cortez, and José Neves. Evolution of neural networks for classification and regression. Neurocomputing, 70(16-18):2809–2816, 2007. Victor S Sheng and Charles X Ling. Thresholding for making classifiers cost-sensitive. In *Aaai*, volume 6, pp. 476–481, 2006. Olivier Sigaud and Stewart W Wilson. Learning classifier systems: a survey. *Soft Computing*, 11:1065–1078, 2007. Vinicius MA Souza, Denis M dos Reis, Andre G Maletzke, and Gustavo EAPA Batista. Challenges in benchmarking stream learning algorithms with real-world data. *Data Mining and Knowledge Discovery*, 34: 1805–1858, 2020. Yanmin Sun, Mohamed S Kamel, and Yang Wang. Boosting for learning multiple classes with imbalanced class distribution. In *Sixth international conference on data mining (ICDM'06)*, pp. 592–602. IEEE, 2006. Tobias Voigt, Roland Fried, Michael Backes, and Wolfgang Rhode. Threshold optimization for classification in imbalanced data in a problem of gamma-ray astronomy. *Advances in Data Analysis and Classification*, 8:195–216, 2014. Boyu Wang and Joelle Pineau. Online bagging and boosting for imbalanced data streams. *IEEE Transactions* on Knowledge and Data Engineering, 28(12):3353–3366, 2016. Liwen Wang, Yicheng Yan, and Wei Guo. Ensemble online weighted sequential extreme learning machine for class imbalanced data streams. In *2021 2nd International Symposium on Computer Engineering and* Intelligent Communications (ISCEIC), pp. 81–86. IEEE, 2021. Shuo Wang, Leandro L Minku, and Xin Yao. Dealing with multiple classes in online class imbalance learning. In *IJCAI*, pp. 2118–2124, 2016. Shuo Wang, Leandro L Minku, and Xin Yao. A systematic study of online class imbalance learning with concept drift. *IEEE transactions on neural networks and learning systems*, 29(10):4802–4821, 2018. Zhaoyang Wang and Shuo Wang. Online automated machine learning for class imbalanced data streams. In 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2023. Frank Wilcoxon. Individual comparisons by ranking methods. In *Breakthroughs in Statistics: Methodology* and Distribution, pp. 196–202. Springer, 1992. Yan Yan, Tianbao Yang, Yi Yang, and Jianhui Chen. A framework of online learning with imbalanced streaming data. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 31, 2017. Chong Zhang, Kay Chen Tan, and Ruoxu Ren. Training cost-sensitive deep belief networks on imbalance data problems. In *2016 international joint conference on neural networks (IJCNN)*, pp. 4362–4367. IEEE, 2016. Chong Zhang, Kay Chen Tan, Haizhou Li, and Geok Soon Hong. 
A cost-sensitive deep belief network for imbalanced classification. *IEEE transactions on neural networks and learning systems*, 30(1):109–122, 2018.

Zhi-Hua Zhou and Xu-Ying Liu. Training cost-sensitive neural networks with methods addressing the class imbalance problem. *IEEE Transactions on knowledge and data engineering*, 18(1):63–77, 2005.

Weiwei Zong, Guang-Bin Huang, and Yiqiang Chen. Weighted extreme learning machine for imbalance learning. *Neurocomputing*, 101:229–242, 2013.

## A Running Time Comparison

Since the EA is known for its high time complexity, we conduct a runtime experiment to show the practicality of OECV. All experiments are benchmarked on a server configured with an Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz. The geometric average of runtime across datasets is reported because runtime scales vary across datasets. Specifically, supposing N datasets are used, we report $\sqrt[N]{\prod_{i=1}^{N} t_i}$, where $t_i$ represents the runtime on the i-th dataset.
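The reported aggregate is the plain geometric mean of per-dataset runtimes; for example:

```python
import numpy as np

t = np.array([4.5, 24.5, 4.4, 75.8])        # illustrative per-dataset runtimes (s)
geo_mean = float(np.exp(np.log(t).mean()))  # equals (prod t_i) ** (1 / N)
```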
Two key observations can be made from the results in Table 3. Firstly, while some methods exhibit significantly shorter runtimes, such as MUOB and OECV-n, this comes at the expense of inferior performance, as evidenced in Table 2(a). Secondly, our approach demonstrates remarkable efficiency: OECV achieves the best rank with a tolerably short runtime compared to other SOTA methods. This validates the time efficiency and practicality of OECV despite its use of an EA.

Table 3: Comparison between methods in terms of runtime in seconds. The geometric average of runtime is shown in the last row.

| Dataset | AI-WSELM | MOOB | MUOB | BEDCOE | OECV-n | OECV | OECV-ea |
|---|---|---|---|---|---|---|---|
| Elec | 6.9±1.2 | 49.8±1.4 | 22.5±0.6 | 134.2±3.4 | 6.5±0.1 | 47.5±0.3 | 53.6±3.9 |
| Abrupt | 13.3±2.6 | 51.2±1.5 | 11.2±0.2 | 292.8±20.3 | 6.5±0.3 | 81.4±3.4 | 75.5±1.2 |
| Gradual | 15.0±3.2 | 52.9±0.4 | 5.1±0.1 | 212.8±5.1 | 6.7±0.2 | 110.3±20.5 | 74.9±1.4 |
| Incremental1 | 15.1±3.2 | 49.7±0.2 | 16.2±0.4 | 383.4±5.8 | 6.5±0.2 | 101.5±2.9 | 76.1±1.2 |
| Luxembourg | 5.3±0.1 | 16.4±0.1 | 12.6±0.1 | 28.2±0.5 | 2.5±0.3 | 38.0±2.1 | 27.6±0.7 |
| NOAA | 10.0±0.2 | 48.4±0.2 | 23.8±0.2 | 307.8±13.2 | 7.0±0.7 | 53.9±0.6 | 49.6±0.7 |
| Ozone | 8.0±0.1 | 38.5±1.7 | 6.3±0.3 | 139.4±56.7 | 4.5±0.1 | 55.9±1.8 | 42.7±0.7 |
| Airlines | 12.9±0.4 | 49.0±0.9 | 25.8±0.3 | 364.2±6.4 | 7.3±0.5 | 52.8±0.6 | 47.1±0.4 |
| Covtype | 27.8±0.3 | 58.7±1.0 | 5.3±0.0 | 252.7±4.7 | 7.3±0.2 | 115.4±1.7 | 80.6±1.0 |
| Incremental2 | 17.5±0.5 | 62.4±0.7 | 5.6±0.0 | 379.0±7.1 | 7.2±0.2 | 107.5±3.2 | 75.1±0.9 |
| Abalone1 | 2.4±0.0 | 24.2±0.4 | 4.0±0.2 | 73.4±3.1 | 3.3±0.1 | 25.1±0.4 | 22.7±0.3 |
| Abalone2 | 1.6±0.0 | 16.0±0.4 | 2.5±0.1 | 52.2±1.6 | 2.4±0.2 | 17.4±0.6 | 15.6±0.2 |
| Car1 | 1.6±0.0 | 22.5±0.8 | 4.6±0.8 | 76.5±5.0 | 2.5±0.1 | 18.3±0.3 | 16.4±0.2 |
| Car2 | 1.6±0.0 | 21.3±0.4 | 3.7±0.6 | 46.2±0.7 | 2.5±0.1 | 18.6±0.5 | 16.3±0.1 |
| Kddcup | 7.3±0.3 | 21.1±0.2 | 3.1±0.1 | 36.4±0.8 | 3.3±0.1 | 43.9±2.4 | 31.9±0.6 |
| Kr | 3.8±0.1 | 36.4±0.8 | 5.7±0.4 | 65.9±0.8 | 4.3±0.2 | 30.8±0.6 | 27.4±0.3 |
| Segment | 6.3±0.2 | 31.1±0.6 | 8.9±0.3 | 52.6±1.8 | 3.5±0.4 | 41.4±1.8 | 29.4±0.8 |
| Shuttle1 | 6.4±0.4 | 39.0±1.2 | 6.0±0.5 | 55.4±0.9 | 4.9±0.2 | 38.1±0.5 | 32.5±0.4 |
| Shuttle2 | 3.8±0.2 | 22.9±0.6 | 4.5±0.1 | 31.9±0.8 | 2.8±0.6 | 21.2±0.5 | 17.7±0.2 |
| Thyroid | 2.0±0.1 | 10.1±0.1 | 1.0±0.1 | 37.1±0.6 | 1.2±0.3 | 12.2±0.8 | 8.5±0.3 |
| Win1 | 1.5±0.1 | 7.7±0.2 | 0.9±0.1 | 19.4±0.3 | 1.0±0.2 | 8.4±0.3 | 6.8±0.1 |
| Win2 | 3.6±0.2 | 17.0±0.9 | 3.0±0.2 | 64.6±1.2 | 2.3±0.2 | 18.6±0.5 | 15.5±0.2 |
| Win3 | 1.4±0.1 | 8.1±0.4 | 1.2±0.4 | 23.2±1.1 | 1.0±0.1 | 7.5±0.3 | 6.4±0.1 |
| Win4 | 3.3±0.1 | 16.7±0.4 | 2.2±0.4 | 37.2±1.0 | 2.1±0.1 | 17.0±0.3 | 14.3±0.1 |
| Win5 | 2.0±0.1 | 11.1±0.6 | 1.3±0.1 | 27.7±0.9 | 1.3±0.1 | 10.4±0.3 | 8.7±0.1 |
| Yeast1 | 1.9±0.1 | 13.1±0.8 | 1.7±0.3 | 36.5±0.8 | 1.4±0.1 | 10.2±0.3 | 9.2±0.1 |
| Yeast2 | 3.2±0.1 | 22.5±0.5 | 1.8±0.3 | 150.2±5.2 | 2.1±0.3 | 20.7±0.4 | 15.8±0.1 |
| Yeast3 | 3.1±0.1 | 17.6±0.3 | 4.7±0.2 | 57.5±1.0 | 2.1±0.1 | 15.7±0.2 | 14.5±0.1 |
| Yeast4 | 3.1±0.2 | 17.7±0.5 | 2.4±0.2 | 40.4±0.6 | 2.3±0.6 | 15.6±0.2 | 14.3±0.2 |
| Yeast5 | 3.0±0.1 | 17.4±0.4 | 2.3±0.3 | 42.5±0.7 | 2.1±0.2 | 15.9±0.5 | 14.2±0.1 |
| G-mean time | 4.501 | 24.471 | 4.378 | 75.8 | 3.074 | 28.517 | 23.66 |

## B Performance Comparison In Terms Of Balanced Accuracy

In Table 4, we include a performance comparison in terms of balanced accuracy, complementary to Section 4.2. We can see from Table 4 that, in terms of balanced accuracy, OECV performs the best on 12 out of 30 datasets and the 2nd best on 8 datasets. Friedman tests (Demšar, 2006) at the significance level 0.05 reject H0 with p-value 4.21 × 10⁻³, showing that there is a significant difference between methods. The average rank of OECV is 2.167, being the best. Post-hoc tests are then conducted to investigate whether OECV differs significantly from the competitors, with OECV chosen as the control method. Post-hoc comparisons show that OECV significantly outperforms all of the competitors except BEDCOE, for which the p-value is 0.052, only marginally higher than 0.05.
We conjecture this is because the optimization objective is set to G-mean instead of balanced accuracy in OECV, making the algorithm unaware of this performance metric.

## C Ablation Studies In Terms Of Balanced Accuracy

Table 4 shows the predictive performance of the three models in terms of balanced accuracy. The three methods are compared to each other, with Wilcoxon signed rank tests (Wilcoxon, 1992) used to determine whether there are significant differences between them. We can see from Table 4 that, in terms of balanced accuracy, the average rank of OECV (1.55) is better than that of OECV-n (2.233) and OECV-ea (2.217). Wilcoxon signed rank tests reject H0 with p-values 0.042 and 4.98 × 10⁻⁴, respectively, meaning OECV is significantly superior to OECV-n and OECV-ea. This indicates that eliminating the EA strategy significantly degrades predictive performance in terms of balanced accuracy, showing its effectiveness.

Table 4: Performance comparison in terms of balanced accuracy (%). Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last row lists the average ranks (avgRank) of each model across datasets in each subtable. Part (a) reports the comparison between SOTA methods and the proposed OECV. A significant difference against OECV is highlighted in yellow. Part (b) reports the ablation results between variants of OECV.

(a) Performance comparison

| Dataset | AI-WSELM | MOOB | MUOB | BEDCOE | OECV |
|---|---|---|---|---|---|
| Elec | 79.5±1.7 | 91.0±0.2 | 88.9±0.4 | 95.5±0.1 | 84.2±0.9 |
| Abrupt | 68.2±1.1 | 65.9±0.6 | 67.3±0.6 | 63.7±0.3 | 64.6±0.6 |
| Gradual | 34.1±0.4 | 63.1±0.4 | 20.2±0.8 | 64.1±1.3 | 52.3±1.0 |
| Incremental1 | 48.5±0.6 | 58.6±0.4 | 58.1±0.6 | 54.7±0.4 | 48.2±1.3 |
| Luxembourg | 85.6±2.4 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 72.0±0.7 | 66.0±0.6 | 64.9±0.6 | 69.0±0.6 | 73.2±0.5 |
| Ozone | 67.7±2.2 | 74.9±1.3 | 78.3±0.6 | 74.1±1.0 | 77.4±1.6 |
| Airlines | 51.6±0.6 | 51.6±0.5 | 51.1±0.7 | 51.7±0.4 | 52.2±0.8 |
| Covtype | 21.1±2.9 | 70.6±0.4 | 16.4±3.0 | 70.3±0.9 | 38.6±1.1 |
| Incremental2 | 30.1±0.5 | 49.3±0.4 | 25.2±1.9 | 49.4±1.0 | 40.0±0.6 |
| Abalone1 | 60.6±2.3 | 65.6±0.6 | 66.5±2.9 | 68.0±0.5 | 71.9±2.5 |
| Abalone2 | 61.0±3.3 | 51.6±0.2 | 45.2±3.4 | 56.2±0.2 | 54.2±3.1 |
| Car1 | 82.1±2.5 | 53.7±1.1 | 62.4±6.0 | 57.4±2.2 | 79.0±2.1 |
| Car2 | 96.7±2.6 | 77.3±0.5 | 81.2±3.7 | 77.7±1.1 | 96.2±1.0 |
| Kddcup | 83.5±7.3 | 100.0±0.0 | 96.1±3.3 | 100.0±0.0 | 100.0±0.0 |
| Kr | 94.5±1.6 | 94.4±0.7 | 91.0±1.6 | 90.6±0.7 | 94.7±1.2 |
| Segment | 98.7±0.4 | 98.9±0.1 | 93.3±0.6 | 99.0±0.0 | 99.1±0.1 |
| Shuttle1 | 100.0±0.0 | 99.4±0.6 | 98.0±1.7 | 99.1±0.9 | 99.9±0.0 |
| Shuttle2 | 99.4±0.1 | 99.6±0.0 | 99.8±0.1 | 99.7±0.1 | 99.7±0.0 |
| Thyroid | 54.6±4.6 | 53.9±2.8 | 34.7±0.0 | 63.9±1.9 | 75.0±1.5 |
| Win1 | 62.6±14.6 | 53.0±0.1 | 53.4±0.0 | 65.5±15.7 | 83.3±0.5 |
| Win2 | 54.9±1.7 | 51.0±0.9 | 65.0±2.5 | 52.6±0.6 | 64.6±2.0 |
| Win3 | 52.4±2.8 | 51.8±0.8 | 51.5±3.5 | 52.1±1.0 | 80.2±1.2 |
| Win4 | 52.7±1.8 | 59.8±4.6 | 50.1±2.3 | 54.4±0.5 | 59.5±4.4 |
| Win5 | 55.5±5.0 | 53.3±2.0 | 49.4±1.3 | 49.6±0.3 | 58.5±3.8 |
| Yeast1 | 60.1±3.7 | 56.7±0.4 | 53.1±5.8 | 57.1±1.0 | 59.0±7.7 |
| Yeast2 | 45.7±2.7 | 39.4±2.6 | 10.8±1.7 | 41.2±0.8 | 39.9±1.1 |
| Yeast3 | 82.3±2.0 | 89.3±0.3 | 90.1±1.0 | 87.8±0.3 | 87.9±1.0 |
| Yeast4 | 76.7±4.8 | 88.9±0.8 | 84.6±7.4 | 76.7±2.0 | 83.4±4.3 |
| Yeast5 | 75.3±2.4 | 73.5±1.1 | 65.8±4.3 | 66.1±1.9 | 86.8±1.6 |
| avgRank | 3.1 | 3.1 | 3.717 | 2.917 | 2.167 |

(b) Ablation studies

| Dataset | OECV-n | OECV | OECV-ea |
|---|---|---|---|
| Elec | 83.7±0.4 | 84.2±0.9 | 84.1±0.8 |
| Abrupt | 64.4±0.7 | 64.6±0.6 | 64.7±0.8 |
| Gradual | 59.2±0.9 | 52.3±1.0 | 50.6±1.1 |
| Incremental1 | 47.9±1.1 | 48.2±1.3 | 48.0±1.1 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.1±0.5 | 73.2±0.5 | 73.0±0.5 |
| Ozone | 73.6±1.4 | 77.4±1.6 | 76.7±1.5 |
| Airlines | 51.6±0.6 | 52.2±0.8 | 52.2±0.9 |
| Covtype | 50.1±1.1 | 38.6±1.1 | 33.9±1.2 |
| Incremental2 | 43.2±0.6 | 40.0±0.6 | 36.4±0.7 |
| Abalone1 | 65.8±1.4 | 71.9±2.5 | 67.2±2.7 |
| Abalone2 | 54.0±0.3 | 54.2±3.1 | 54.4±1.8 |
| Car1 | 79.1±2.4 | 79.0±2.1 | 78.8±1.7 |
| Car2 | 94.9±1.1 | 96.2±1.0 | 95.3±0.9 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 91.8±0.7 | 94.7±1.2 | 92.5±1.2 |
| Segment | 99.1±0.1 | 99.1±0.1 | 99.4±0.1 |
| Shuttle1 | 99.9±0.0 | 99.9±0.0 | 99.9±0.0 |
| Shuttle2 | 99.7±0.0 | 99.7±0.0 | 99.6±0.0 |
| Thyroid | 73.1±2.7 | 75.0±1.5 | 77.1±2.1 |
| Win1 | 83.5±0.1 | 83.3±0.5 | 83.8±0.6 |
| Win2 | 60.3±0.4 | 64.6±2.0 | 59.7±1.2 |
| Win3 | 56.4±2.2 | 80.2±1.2 | 66.0±7.7 |
| Win4 | 51.2±1.5 | 59.5±4.4 | 51.1±2.2 |
| Win5 | 63.8±2.7 | 58.5±3.8 | 54.8±4.3 |
| Yeast1 | 53.2±0.3 | 59.0±7.7 | 51.6±0.7 |
| Yeast2 | 40.0±1.7 | 39.9±1.1 | 36.4±1.7 |
| Yeast3 | 85.5±1.0 | 87.9±1.0 | 87.0±0.7 |
| Yeast4 | 73.8±3.0 | 83.4±4.3 | 78.5±2.7 |
| Yeast5 | 85.7±0.1 | 86.8±1.6 | 83.2±2.2 |
| avgRank | 2.233 | 1.55 | 2.217 |
We follow a similar procedure to compare OECV-ea and OECV-n. In terms of balanced accuracy, the average rank of OECV-ea (2.217) is better than that of OECV-n (2.233). The Wilcoxon signed-rank test fails to reject H0 with p-value 0.838, meaning there is no significant difference between OECV-ea and OECV-n. This indicates that using extra samples in the buffer is, by itself, insufficient to find a significantly better cost vector. In other words, although our method uses extra data, this is not the determinative reason why OECV can outperform SOTA methods.

## D Continuous Performance Throughout Time

Figure 5 shows performance comparisons over various time steps on two representative datasets in terms of G-mean and balanced accuracy. Similar patterns were observed on other datasets but are not included here due to space constraints. We can see that OECV consistently outperforms most other methods across most time steps in terms of both G-mean and balanced accuracy. This demonstrates the continuous effectiveness of our approach in improving performance over time.

![19_image_0.png](19_image_0.png)

Figure 5: Continuous performance comparison throughout time on representative datasets in terms of G-mean and balanced accuracy.

For the ablation studies, we show the continuous performance over time in Figure 6, in terms of G-mean and balanced accuracy.
We notice that removing the evolutionary cost vector strategy leads to a continual decline in performance across most time steps. As a result, we assert that using the EA is crucial in our approach.

![20_image_0.png](20_image_0.png)

Figure 6: Continuous performance comparison of the ablation studies throughout time on representative datasets in terms of G-mean and balanced accuracy.

## E Hyperparameter Analysis

To balance performance and computational cost, we introduced a few hyperparameters in OECV. The role of each hyperparameter is straightforward and does not require heavy fine-tuning. In this section, we provide a detailed discussion of the sensitivity of the population size, oversampling rate, buffer size, and updating frequency of the cost vector. We also investigate the influence of the pre-training ratio. Note that the pre-training stage is not necessary in our method and is added to make a fair comparison, since LI et al. (2023) requires a pre-training setup. The ratio of 30% in the main experiments was chosen arbitrarily and set to be the same for all compared methods without any tuning.

## E.1 Population Size

We include a further experiment on the sensitivity of the population size setting in OECV. Fixing the other original hyperparameter settings of OECV, we manually alter only the population size (i.e., the number of individuals) to get four comparison methods: Pop-25 (original setting), Pop-50, Pop-100, and Pop-200, standing for OECV with a population size of 25, 50, 100, and 200, respectively. The detailed comparison setting remains the same as in the experiments of the main paper. We report the performance in terms of G-mean in Table 5 (a) and the performance in terms of balanced accuracy in Table 5 (b).

The results show that increasing the population size does not boost performance significantly; however, the time complexity increases correspondingly. This can be because the problem scale is commonly small in an online learning setting, meaning a small number of individuals can already find a good enough cost vector. We conclude that OECV is not sensitive to this hyperparameter within a certain range. This is also why we applied a relatively small population size in our main experiments: this setting significantly improves performance compared to the baseline methods while not incurring a long updating delay. In offline learning, where the problem scale is much larger, especially when the number of classes is larger, a larger population size should be applied. We leave the exploration of our method in an offline setting to future work.

## E.2 Oversampling Rate

We show the influence of the oversampling rate r in OECV by manually altering only r to get three comparison methods: r = 1 (i.e., no oversampling), r = 3, and r = 5 (original setting). The detailed comparison setting remains the same as in the experiments of the main paper. We report the performance in terms of G-mean in Table 6 (a) and the performance in terms of balanced accuracy in Table 6 (b).

The results show that increasing the oversampling rate boosts performance consistently; however, the time complexity also increases. Intuitively, a larger r enhances the sample diversity in the buffer and allows a more accurate fitness evaluation, but makes the fitness evaluation slower. One can use a larger r to obtain further performance improvement, but r = 5 is good enough to make the fitness evaluation both accurate and efficient.
## E.3 Buffer Size

We show the influence of the buffer size |B| in OECV by manually altering only the buffer size to get three comparison methods: |B| = 50, |B| = 100, and |B| = 200 (original setting). The detailed comparison setting remains the same as in the experiments of the main paper. We report the performance in terms of G-mean in Table 7 (a) and the performance in terms of balanced accuracy in Table 7 (b).

Table 5: Performance comparison between OECV with different population sizes in terms of G-mean (%) on the left and balanced accuracy (%) on the right. Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs.

(a) G-mean

| Dataset | Pop-25 | Pop-50 | Pop-100 | Pop-200 |
|---|---|---|---|---|
| Elec | 83.7±7.8 | 83.9±7.8 | 83.8±7.9 | 84.1±7.9 |
| Abrupt | 62.8±3.5 | 63.2±3.5 | 63.1±3.6 | 63.1±3.6 |
| Gradual | 8.5±15.6 | 13.2±19.3 | 24.8±23.6 | 5.5±12.5 |
| Incremental1 | 46.4±5.4 | 46.1±5.7 | 46.2±5.6 | 46.3±5.4 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.1±4.0 | 73.0±3.9 | 72.9±3.9 | 73.0±4.0 |
| Ozone | 77.1±5.6 | 77.8±5.9 | 77.3±6.0 | 77.0±5.9 |
| Airlines | 51.8±4.7 | 51.7±4.7 | 51.8±4.8 | 51.8±4.7 |
| Covtype | 28.6±15.4 | 36.0±18.4 | 39.0±18.4 | 40.3±18.1 |
| Incremental2 | 15.6±19.5 | 17.2±19.4 | 18.5±17.1 | 20.5±19.3 |
| Abalone1 | 67.8±16.6 | 66.8±16.8 | 67.4±16.7 | 71.9±15.3 |
| Abalone2 | 38.7±24.4 | 51.8±20.2 | 38.2±22.2 | 38.1±20.2 |
| Car1 | 78.2±9.9 | 78.1±9.8 | 78.5±9.9 | 77.6±10.0 |
| Car2 | 96.1±2.0 | 96.2±2.0 | 95.8±1.9 | 95.9±1.7 |
| Kddcup | 100.0±0.0 | 94.9±10.9 | 100.0±0.1 | 98.1±4.5 |
| Kr | 94.7±2.8 | 93.8±3.7 | 93.6±3.7 | 95.1±2.1 |
| Segment | 99.1±0.6 | 99.1±0.6 | 99.2±0.6 | 99.0±0.6 |
| Shuttle1 | 99.9±0.2 | 98.1±6.1 | 99.9±0.2 | 99.8±0.4 |
| Shuttle2 | 99.7±0.6 | 99.6±0.6 | 99.6±0.6 | 99.7±0.6 |
| Thyroid | 71.6±19.2 | 74.4±19.7 | 74.5±19.8 | 76.8±20.3 |
| Win1 | 80.6±18.5 | 84.5±13.8 | 80.3±18.4 | 81.9±15.7 |
| Win2 | 59.2±15.2 | 58.8±12.2 | 61.9±11.3 | 60.1±12.8 |
| Win3 | 79.9±8.2 | 81.5±6.2 | 80.8±6.6 | 80.3±6.6 |
| Win4 | 50.6±20.0 | 64.4±11.9 | 46.6±25.6 | 49.4±25.5 |
| Win5 | 53.3±14.0 | 59.0±9.9 | 57.5±13.3 | 62.8±10.4 |
| Yeast1 | 48.0±29.4 | 51.1±22.6 | 49.2±26.1 | 53.6±22.7 |
| Yeast2 | 0.2±3.0 | 0.1±2.2 | 0.0±0.0 | 0.0±0.9 |
| Yeast3 | 87.8±3.2 | 86.5±3.7 | 86.6±4.0 | 87.2±3.4 |
| Yeast4 | 81.7±13.8 | 86.9±5.6 | 86.5±9.7 | 89.7±4.2 |
| Yeast5 | 86.5±5.8 | 86.6±4.8 | 86.2±5.0 | 85.5±5.0 |
| avgRank | 2.583 | 2.467 | 2.6 | 2.35 |
| Time cost | ×1 | ×1.11 | ×1.30 | ×1.73 |
The results show that increasing the buffer size boosts performance consistently; however, both the time complexity and the storage complexity increase. Intuitively, a larger buffer makes more samples available to the cost vector and allows a more accurate fitness evaluation, but makes the fitness evaluation slower. One can use a larger buffer to obtain further performance improvement, but |B| = 200 is good enough to make the fitness evaluation both accurate and efficient.

## E.4 Updating Frequency

We show the influence of the updating frequency f of the cost vector in OECV by manually altering only f to get three comparison methods: f = 5 (original setting), f = 10, and f = 20. The detailed comparison setting remains the same as in the experiments of the main paper. We report the performance in terms of G-mean in Table 8 (a) and the performance in terms of balanced accuracy in Table 8 (b).
The results show that decreasing the updating frequency f boosts performance consistently; however, the time complexity increases. Intuitively, a smaller f makes the updating frequency more aligned with that of the classifier in the lower layer. This reduces the probability of an updating delay and of sub-optimal solutions. One can use a smaller f to obtain further performance improvement, but f = 5 is good enough, and we pick this value to save the runtime of OECV.

Table 6: Performance comparison between OECV with different oversampling rates in terms of G-mean (%) on the left and balanced accuracy (%) on the right. Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs.

(a) G-mean

| Dataset | r = 1 | r = 3 | r = 5 |
|---|---|---|---|
| Elec | 83.8±7.4 | 83.9±7.7 | 83.7±7.8 |
| Abrupt | 62.3±3.4 | 62.6±3.6 | 62.8±3.5 |
| Gradual | 8.5±15.5 | 8.5±15.6 | 8.5±15.6 |
| Incremental1 | 46.2±5.6 | 46.2±5.6 | 46.4±5.4 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 72.8±4.0 | 72.9±3.9 | 73.1±4.0 |
| Ozone | 75.2±5.8 | 76.6±5.8 | 77.1±5.6 |
| Airlines | 51.6±5.0 | 51.7±4.7 | 51.8±4.7 |
| Covtype | 28.2±15.2 | 28.3±15.2 | 28.6±15.4 |
| Incremental2 | 15.7±19.3 | 15.9±19.5 | 15.6±19.5 |
| Abalone1 | 63.0±19.2 | 66.8±16.7 | 67.8±16.6 |
| Abalone2 | 29.8±24.9 | 37.1±24.4 | 38.7±24.4 |
| Car1 | 77.6±9.8 | 78.1±9.9 | 78.2±9.9 |
| Car2 | 89.2±10.9 | 96.7±2.7 | 96.1±2.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 91.1±3.5 | 93.9±2.7 | 94.7±2.8 |
| Segment | 99.4±0.5 | 99.2±0.6 | 99.1±0.6 |
| Shuttle1 | 99.9±0.1 | 99.9±0.2 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.7±0.6 | 99.7±0.6 |
| Thyroid | 72.8±19.4 | 72.0±19.1 | 71.6±19.2 |
| Win1 | 81.0±18.7 | 80.6±18.5 | 80.6±18.5 |
| Win2 | 56.7±15.4 | 56.7±15.3 | 59.2±15.2 |
| Win3 | 80.5±8.4 | 80.8±7.6 | 79.9±8.2 |
| Win4 | 48.8±20.9 | 50.2±20.1 | 50.6±20.0 |
| Win5 | 49.9±14.4 | 53.1±13.9 | 53.3±14.0 |
| Yeast1 | 42.1±31.0 | 47.6±29.1 | 48.0±29.4 |
| Yeast2 | 0.2±3.0 | 0.2±3.0 | 0.2±3.0 |
| Yeast3 | 86.5±3.3 | 87.8±3.6 | 87.8±3.2 |
| Yeast4 | 80.1±13.7 | 79.5±14.7 | 81.7±13.8 |
| Yeast5 | 84.4±8.5 | 83.9±7.8 | 86.5±5.8 |
| avgRank | 2.4 | 1.967 | 1.633 |
| Time cost | ×1 | ×3.43 | ×4.20 |

(b) Balanced accuracy

| Dataset | r = 1 | r = 3 | r = 5 |
|---|---|---|---|
| Elec | 84.3±6.5 | 84.4±6.7 | 84.2±6.8 |
| Abrupt | 64.4±2.2 | 64.5±2.2 | 64.6±2.2 |
| Gradual | 52.3±4.7 | 52.3±4.6 | 52.3±4.6 |
| Incremental1 | 47.9±4.3 | 48.0±4.2 | 48.2±4.1 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.0±3.8 | 73.0±3.6 | 73.2±3.8 |
| Ozone | 75.9±5.7 | 76.9±5.8 | 77.4±5.6 |
| Airlines | 52.1±3.9 | 52.1±3.7 | 52.2±3.6 |
| Covtype | 38.7±13.9 | 38.5±13.9 | 38.6±13.9 |
| Incremental2 | 40.1±6.0 | 40.0±5.9 | 40.0±5.9 |
| Abalone1 | 69.5±8.8 | 71.3±8.0 | 71.9±7.9 |
| Abalone2 | 51.9±9.8 | 53.6±9.6 | 54.2±9.8 |
| Car1 | 78.8±7.0 | 79.0±7.1 | 79.0±7.0 |
| Car2 | 90.3±5.9 | 96.8±2.6 | 96.2±2.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 91.5±3.1 | 94.0±2.6 | 94.7±2.7 |
| Segment | 99.4±0.5 | 99.2±0.6 | 99.1±0.6 |
| Shuttle1 | 99.9±0.1 | 99.9±0.2 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.7±0.6 | 99.7±0.6 |
| Thyroid | 76.6±9.0 | 75.5±8.7 | 75.0±9.1 |
| Win1 | 83.9±15.3 | 83.4±15.2 | 83.3±15.2 |
| Win2 | 63.4±7.6 | 63.0±7.5 | 64.6±7.8 |
| Win3 | 81.0±8.4 | 81.2±7.8 | 80.2±8.3 |
| Win4 | 59.6±10.5 | 59.5±10.2 | 59.5±10.2 |
| Win5 | 57.3±10.6 | 58.6±10.2 | 58.5±10.3 |
| Yeast1 | 57.7±17.5 | 59.1±18.0 | 59.0±18.4 |
| Yeast2 | 39.9±4.7 | 39.9±4.7 | 39.9±4.7 |
| Yeast3 | 86.7±2.9 | 87.9±3.1 | 87.9±2.8 |
| Yeast4 | 82.1±7.9 | 81.6±8.3 | 83.4±7.9 |
| Yeast5 | 85.3±7.4 | 84.6±6.9 | 86.8±5.6 |
| avgRank | 2.2 | 2 | 1.8 |
| Time cost | ×1 | ×3.43 | ×4.20 |
48.0±29.4 Yeast2 0.2±3.0 0.2±3.0 0.2±3.0 Yeast3 86.5±3.3 87.8±3.6 87.8±3.2 Yeast4 80.1±13.7 79.5±14.7 81.7±13.8 Yeast5 84.4±8.5 83.9±7.8 86.5±5.8 avgRank 2.4 1.967 1.633 Time cost ×1 ×3.43 ×4.20 | | | r = 1 | | r = 3 | r = 5 | | | | | 84.3±6.5 | | 84.4±6.7 | 84.2±6.8 | | | | | 64.4±2.2 | | 64.5±2.2 | 64.6±2.2 | | | | | 52.3±4.7 | | 52.3±4.6 | 52.3±4.6 | | | | | 47.9±4.3 | | 48.0±4.2 | 48.2±4.1 | | | | | 100.0±0.0 100.0±0.0 100.0±0.0 73.0±3.8 73.0±3.6 73.2±3.8 75.9±5.7 76.9±5.8 77.4±5.6 52.1±3.9 52.1±3.7 52.2±3.6 38.7±13.9 38.5±13.9 38.6±13.9 40.1±6.0 40.0±5.9 40.0±5.9 69.5±8.8 71.3±8.0 71.9±7.9 51.9±9.8 53.6±9.6 54.2±9.8 78.8±7.0 79.0±7.1 79.0±7.0 90.3±5.9 96.8±2.6 96.2±2.0 100.0±0.0 100.0±0.0 100.0±0.0 91.5±3.1 94.0±2.6 94.7±2.7 99.4±0.5 99.2±0.6 99.1±0.6 99.9±0.1 99.9±0.2 99.9±0.2 99.7±0.6 99.7±0.6 99.7±0.6 76.6±9.0 75.5±8.7 75.0±9.1 83.9±15.3 83.4±15.2 83.3±15.2 63.4±7.6 63.0±7.5 64.6±7.8 81.0±8.4 81.2±7.8 80.2±8.3 59.6±10.5 59.5±10.2 59.5±10.2 57.3±10.6 58.6±10.2 58.5±10.3 57.7±17.5 59.1±18.0 59.0±18.4 39.9±4.7 39.9±4.7 39.9±4.7 86.7±2.9 87.9±3.1 87.9±2.8 82.1±7.9 81.6±8.3 83.4±7.9 85.3±7.4 84.6±6.9 86.8±5.6 2.2 2 1.8 ×1 ×3.43 ×4.20 | | | | layer. This reduces the probability of updating delay of and sub-optimal solution. One can use a smaller f to get further performance improvement, but f = 5 is good enough and we pick this value to save the runtime of OECV. ## E.5 **Pre-Training Ratio** We can show the influence of the ratio of the dataset for pretraining in OECV by manually altering only the pretraining ratio to get three comparison methods: *Ratio* = 0 (begin from scratch), *Ratio* = 0.1, and Ratio = 0.3 (original setting). The detailed comparison setting remains the same as in the experiments of the main paper. Note the model is evaluated only on the remaining stream after the pretraining stage. We report the performance in terms of G-mean in Table 9 (a) and the performance in terms of balanced accuracy in Table 9 (b). The result does not show an obvious relation between the pretraining ratio and performance in our method. Indeed, this hyperparameter is not an essential part of our method, and OECV can start from scratch Table 7: Performance comparison between OECV with different buffer size in terms of G-mean (%) on the left and balanced accuracy (%) on the right. Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs. 
(a) G-mean

| Dataset | \|B\| = 50 | \|B\| = 100 | \|B\| = 200 |
|---|---|---|---|
| Elec | 87.6±7.2 | 85.8±7.4 | 83.7±7.8 |
| Abrupt | 62.7±3.2 | 62.3±3.4 | 62.8±3.5 |
| Gradual | 8.2±14.9 | 8.3±15.2 | 8.5±15.6 |
| Incremental1 | 46.0±5.2 | 46.0±5.5 | 46.4±5.4 |
| Luxembourg | 100.0±0.1 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 71.3±3.9 | 72.4±3.8 | 73.1±4.0 |
| Ozone | 74.6±6.1 | 75.6±5.9 | 77.1±5.6 |
| Airlines | 51.8±4.8 | 52.1±4.8 | 51.8±4.7 |
| Covtype | 26.3±14.2 | 27.5±14.8 | 28.6±15.4 |
| Incremental2 | 15.5±19.0 | 15.5±19.3 | 15.6±19.5 |
| Abalone1 | 58.9±18.8 | 59.7±19.6 | 67.8±16.6 |
| Abalone2 | 23.0±22.4 | 29.5±21.4 | 38.7±24.4 |
| Car1 | 75.6±9.1 | 76.9±9.6 | 78.2±9.9 |
| Car2 | 97.5±1.7 | 94.3±3.0 | 96.1±2.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 93.8±2.8 | 91.0±3.5 | 94.7±2.8 |
| Segment | 98.9±0.7 | 99.1±0.6 | 99.1±0.6 |
| Shuttle1 | 99.9±0.2 | 99.9±0.2 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.6±0.6 | 99.7±0.6 |
| Thyroid | 56.3±17.7 | 65.8±19.3 | 71.6±19.2 |
| Win1 | 79.9±19.9 | 80.4±18.9 | 80.6±18.5 |
| Win2 | 57.1±16.2 | 55.2±15.6 | 59.2±15.2 |
| Win3 | 69.8±12.0 | 79.1±8.9 | 79.9±8.2 |
| Win4 | 39.9±24.2 | 50.0±20.8 | 50.6±20.0 |
| Win5 | 52.1±14.9 | 52.2±13.9 | 53.3±14.0 |
| Yeast1 | 41.7±28.1 | 42.3±26.8 | 48.0±29.4 |
| Yeast2 | 0.2±3.0 | 0.2±3.0 | 0.2±3.0 |
| Yeast3 | 84.2±4.3 | 86.0±4.2 | 87.8±3.2 |
| Yeast4 | 76.1±17.6 | 77.5±16.1 | 81.7±13.8 |
| Yeast5 | 82.0±7.9 | 85.0±6.6 | 86.5±5.8 |
| avgRank | 2.533 | 2.15 | 1.317 |
| Time cost | ×1 | ×1.28 | ×1.77 |

(b) Balanced accuracy

| Dataset | \|B\| = 50 | \|B\| = 100 | \|B\| = 200 |
|---|---|---|---|
| Elec | 87.8±6.3 | 86.2±6.5 | 84.2±6.8 |
| Abrupt | 64.6±2.3 | 64.2±2.1 | 64.6±2.2 |
| Gradual | 52.1±4.7 | 52.1±4.7 | 52.3±4.6 |
| Incremental1 | 47.7±4.0 | 47.7±4.1 | 48.2±4.1 |
| Luxembourg | 100.0±0.1 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 71.4±3.6 | 72.5±3.5 | 73.2±3.8 |
| Ozone | 75.5±5.8 | 76.2±5.9 | 77.4±5.6 |
| Airlines | 52.2±3.7 | 52.5±3.7 | 52.2±3.6 |
| Covtype | 37.5±13.4 | 37.9±13.5 | 38.6±13.9 |
| Incremental2 | 39.0±5.6 | 39.5±5.8 | 40.0±5.9 |
| Abalone1 | 66.7±7.8 | 67.2±8.6 | 71.9±7.9 |
| Abalone2 | 50.4±7.5 | 50.5±7.0 | 54.2±9.8 |
| Car1 | 77.1±5.9 | 77.9±6.6 | 79.0±7.0 |
| Car2 | 97.5±1.7 | 94.4±3.0 | 96.2±2.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| Kr | 93.9±2.6 | 91.2±3.2 | 94.7±2.7 |
| Segment | 98.9±0.7 | 99.1±0.6 | 99.1±0.6 |
| Shuttle1 | 99.9±0.2 | 99.9±0.2 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.6±0.6 | 99.7±0.6 |
| Thyroid | 63.7±9.1 | 70.2±11.2 | 75.0±9.1 |
| Win1 | 83.2±15.9 | 83.3±15.4 | 83.3±15.2 |
| Win2 | 64.2±8.0 | 61.9±8.0 | 64.6±7.8 |
| Win3 | 71.4±11.3 | 79.5±9.0 | 80.2±8.3 |
| Win4 | 56.6±10.4 | 60.4±10.0 | 59.5±10.2 |
| Win5 | 59.4±10.3 | 58.1±9.9 | 58.5±10.3 |
| Yeast1 | 58.3±15.8 | 56.6±15.2 | 59.0±18.4 |
| Yeast2 | 39.9±4.7 | 39.9±4.7 | 39.9±4.7 |
| Yeast3 | 84.5±3.8 | 86.2±3.8 | 87.9±2.8 |
| Yeast4 | 79.5±8.9 | 80.3±8.4 | 83.4±7.9 |
| Yeast5 | 82.9±7.4 | 85.4±6.4 | 86.8±5.6 |
| avgRank | 2.417 | 2.167 | 1.417 |
| Time cost | ×1 | ×1.28 | ×1.77 |

## F **More Experimental Comparison With Comparable Storage Budget**

Except for the first baseline, AI-WSELM (Qin et al., 2021), which requires extra storage just like ours, the other three baselines MOOB, MUOB (Wang et al., 2016) and BEDCOE (LI et al., 2023) do not have this requirement. In this section, we compare with an additional baseline named Online SMOTE Bagging (SmoteOB) (Wang & Pineau, 2016) that also uses extra storage, to demonstrate the superiority of OECV when the compared method enjoys comparable or even higher storage requirements. SmoteOB oversamples using training samples within a sliding window, and we set the size of the sliding window to 100 for each class (i.e., at least 200 samples to be stored for all classes), which is equal to or larger than ours.

Table 8: Performance comparison between OECV with different updating frequencies of the cost vector in terms of G-mean (%) in panel (a) and balanced accuracy (%) in panel (b). Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs.
(a) G-mean

| Dataset | f = 5 | f = 10 | f = 20 |
|---|---|---|---|
| Elec | 83.7±7.8 | 83.6±7.7 | 83.6±7.6 |
| Abrupt | 62.8±3.5 | 62.8±4.1 | 62.6±4.6 |
| Gradual | 8.5±15.6 | 4.5±8.1 | 11.0±13.6 |
| Incremental1 | 46.4±5.4 | 46.1±5.5 | 46.1±5.7 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.1±4.0 | 72.9±4.1 | 72.7±4.0 |
| Ozone | 77.1±5.6 | 77.1±5.8 | 76.8±5.8 |
| Airlines | 51.8±4.7 | 51.8±4.7 | 51.7±4.8 |
| Covtype | 28.6±15.4 | 28.4±15.7 | 37.8±18.4 |
| Incremental2 | 15.6±19.5 | 12.4±15.3 | 13.5±16.6 |
| Abalone1 | 67.8±16.6 | 69.7±14.9 | 65.4±18.4 |
| Abalone2 | 38.7±24.4 | 39.9±22.4 | 50.7±20.7 |
| Car1 | 78.2±9.9 | 77.4±9.9 | 76.6±10.1 |
| Car2 | 96.1±2.0 | 96.3±1.7 | 97.1±1.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 95.6±9.7 |
| Kr | 94.7±2.8 | 96.1±2.3 | 95.5±1.9 |
| Segment | 99.1±0.6 | 99.1±0.6 | 99.1±0.5 |
| Shuttle1 | 99.9±0.2 | 99.9±0.2 | 98.4±5.1 |
| Shuttle2 | 99.7±0.6 | 99.7±0.6 | 99.6±0.6 |
| Thyroid | 71.6±19.2 | 72.8±19.5 | 72.9±19.6 |
| Win1 | 80.6±18.5 | 79.9±18.6 | 74.4±24.8 |
| Win2 | 59.2±15.2 | 59.2±15.2 | 57.7±15.0 |
| Win3 | 79.9±8.2 | 80.6±7.1 | 81.3±6.9 |
| Win4 | 50.6±20.0 | 46.7±20.3 | 63.7±15.9 |
| Win5 | 53.3±14.0 | 49.8±14.6 | 50.3±14.5 |
| Yeast1 | 48.0±29.4 | 46.6±25.3 | 46.0±24.2 |
| Yeast2 | 0.2±3.0 | 0.0±1.6 | 0.0±1.4 |
| Yeast3 | 87.8±3.2 | 87.2±3.5 | 87.1±3.3 |
| Yeast4 | 81.7±13.8 | 87.2±9.5 | 86.6±5.5 |
| Yeast5 | 86.5±5.8 | 86.1±5.2 | 83.4±6.6 |
| avgRank | 1.717 | 2.0 | 2.283 |
| Time cost | ×2.57 | ×1.36 | ×1 |

(b) Balanced accuracy

| Dataset | f = 5 | f = 10 | f = 20 |
|---|---|---|---|
| Elec | 84.2±6.8 | 84.1±6.7 | 84.1±6.6 |
| Abrupt | 64.6±2.2 | 64.6±2.0 | 64.5±2.1 |
| Gradual | 52.3±4.6 | 50.7±4.3 | 43.1±6.6 |
| Incremental1 | 48.2±4.1 | 48.0±4.0 | 47.9±4.1 |
| Luxembourg | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 |
| NOAA | 73.2±3.8 | 73.0±3.8 | 72.8±3.7 |
| Ozone | 77.4±5.6 | 77.3±5.8 | 77.1±5.8 |
| Airlines | 52.2±3.6 | 52.2±3.6 | 52.1±3.7 |
| Covtype | 38.6±13.9 | 37.5±13.5 | 46.4±8.4 |
| Incremental2 | 40.0±5.9 | 28.6±8.6 | 27.4±10.2 |
| Abalone1 | 71.9±7.9 | 72.9±7.8 | 69.8±9.1 |
| Abalone2 | 54.2±9.8 | 53.6±8.8 | 55.6±14.5 |
| Car1 | 79.0±7.0 | 78.3±7.1 | 77.5±7.5 |
| Car2 | 96.2±2.0 | 96.4±1.7 | 97.2±1.0 |
| Kddcup | 100.0±0.0 | 100.0±0.0 | 96.1±8.1 |
| Kr | 94.7±2.7 | 96.1±2.2 | 95.5±1.9 |
| Segment | 99.1±0.6 | 99.1±0.6 | 99.1±0.5 |
| Shuttle1 | 99.9±0.2 | 99.9±0.2 | 98.5±4.4 |
| Shuttle2 | 99.7±0.6 | 99.7±0.6 | 99.6±0.6 |
| Thyroid | 75.0±9.1 | 76.2±9.5 | 76.2±9.7 |
| Win1 | 83.3±15.2 | 82.7±15.2 | 75.6±23.4 |
| Win2 | 64.6±7.8 | 64.2±7.8 | 63.3±7.7 |
| Win3 | 80.2±8.3 | 81.0±7.3 | 81.7±7.1 |
| Win4 | 59.5±10.2 | 56.3±9.3 | 67.2±10.9 |
| Win5 | 58.5±10.3 | 56.2±9.5 | 56.8±9.5 |
| Yeast1 | 59.0±18.4 | 57.0±16.6 | 55.1±17.3 |
| Yeast2 | 39.9±4.7 | 30.0±5.0 | 28.6±5.9 |
| Yeast3 | 87.9±2.8 | 87.3±3.1 | 87.2±2.8 |
| Yeast4 | 83.4±7.9 | 87.8±5.9 | 86.9±5.2 |
| Yeast5 | 86.8±5.6 | 86.3±5.1 | 84.0±6.2 |
| avgRank | 1.617 | 1.95 | 2.433 |
| Time cost | ×2.57 | ×1.36 | ×1 |

We report the performance in terms of G-mean in Table 10 (a) and the performance in terms of balanced accuracy in Table 10 (b). We can draw the observation that OECV outperforms SmoteOB with a similar time cost. The same analysis from the main paper explains that our method performs better in cases where few classes are present and the stream is highly imbalanced. This illustrates that our method can not only outperform baselines with no extra storage requirement but also outperform baselines that use extra storage, verifying the effectiveness of OECV.

Table 9: Performance comparison between OECV with different pretraining ratios in terms of G-mean (%) in panel (a) and balanced accuracy (%) in panel (b). Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs.

(a) G-mean

| Dataset | Ratio = 0 | Ratio = 0.1 | Ratio = 0.3 |
|---|---|---|---|
| Elec | 84.7±6.9 | 83.3±7.1 | 83.7±7.8 |
| Abrupt | 65.4±9.6 | 61.8±8.2 | 62.8±3.5 |
| Gradual | 14.7±25.2 | 7.9±15.4 | 8.5±15.6 |
| Incremental1 | 51.1±9.7 | 48.2±5.4 | 46.4±5.4 |
| Luxembourg | 95.2±11.0 | 97.9±4.0 | 100.0±0.0 |
| NOAA | 63.1±5.7 | 69.5±4.1 | 73.1±4.0 |
| Ozone | 77.2±9.1 | 75.3±9.9 | 77.1±5.6 |
| Airlines | 50.9±4.3 | 49.9±3.7 | 51.8±4.7 |
| Covtype | 27.2±23.8 | 25.7±20.5 | 28.6±15.4 |
| Incremental2 | 19.7±23.2 | 14.2±20.5 | 15.6±19.5 |
| Abalone1 | 72.9±10.6 | 67.1±9.8 | 67.8±16.6 |
| Abalone2 | 46.7±15.4 | 51.8±12.2 | 38.7±24.4 |
| Car1 | 56.7±7.5 | 71.5±7.1 | 78.2±9.9 |
| Car2 | 77.7±14.7 | 92.8±2.5 | 96.1±2.0 |
| Kddcup | 98.9±2.5 | 97.4±10.1 | 100.0±0.0 |
| Kr | 91.5±10.7 | 93.9±3.9 | 94.7±2.8 |
| Segment | 93.4±10.3 | 98.8±0.6 | 99.1±0.6 |
| Shuttle1 | 98.3±5.7 | 99.2±0.9 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.6±0.5 | 99.7±0.6 |
| Thyroid | 51.7±16.0 | 54.3±21.3 | 71.6±19.2 |
| Win1 | 71.1±18.9 | 88.2±11.9 | 80.6±18.5 |
| Win2 | 62.2±11.3 | 49.8±17.5 | 59.2±15.2 |
| Win3 | 62.8±20.7 | 27.6±28.8 | 79.9±8.2 |
| Win4 | 46.5±23.5 | 37.8±29.9 | 50.6±20.0 |
| Win5 | 60.4±13.4 | 29.9±29.0 | 53.3±14.0 |
| Yeast1 | 64.1±9.6 | 42.6±16.8 | 48.0±29.4 |
| Yeast2 | 5.1±9.1 | 0.7±7.0 | 0.2±3.0 |
| Yeast3 | 86.0±8.1 | 85.2±6.5 | 87.8±3.2 |
| Yeast4 | 92.2±8.8 | 93.4±2.9 | 81.7±13.8 |
| Yeast5 | 77.5±18.5 | 50.7±32.3 | 86.5±5.8 |
| avgRank | 1.917 | 2.467 | 1.617 |
| Time cost | ×1.27 | ×1.17 | ×1 |

(b) Balanced accuracy

| Dataset | Ratio = 0 | Ratio = 0.1 | Ratio = 0.3 |
|---|---|---|---|
| Elec | 85.0±6.5 | 83.8±6.7 | 84.2±6.8 |
| Abrupt | 67.0±7.6 | 64.4±3.1 | 64.6±2.2 |
| Gradual | 52.6±10.8 | 51.3±4.7 | 52.3±4.6 |
| Incremental1 | 53.0±8.8 | 49.6±4.6 | 48.2±4.1 |
| Luxembourg | 95.4±10.2 | 97.9±3.8 | 100.0±0.0 |
| NOAA | 63.7±4.6 | 69.7±3.7 | 73.2±3.8 |
| Ozone | 77.6±8.0 | 76.2±7.2 | 77.4±5.6 |
| Airlines | 51.2±3.7 | 50.3±3.4 | 52.2±3.6 |
| Covtype | 35.8±26.8 | 35.7±22.8 | 38.6±13.9 |
| Incremental2 | 40.9±10.6 | 41.3±7.2 | 40.0±5.9 |
| Abalone1 | 73.4±9.8 | 69.0±5.6 | 71.9±7.9 |
| Abalone2 | 51.9±9.1 | 54.9±7.6 | 54.2±9.8 |
| Car1 | 57.2±7.1 | 72.1±6.6 | 79.0±7.0 |
| Car2 | 78.8±10.9 | 92.9±2.5 | 96.2±2.0 |
| Kddcup | 98.9±2.4 | 97.9±5.9 | 100.0±0.0 |
| Kr | 91.7±10.1 | 94.0±3.5 | 94.7±2.7 |
| Segment | 93.7±9.3 | 98.8±0.6 | 99.1±0.6 |
| Shuttle1 | 98.4±5.3 | 99.2±0.9 | 99.9±0.2 |
| Shuttle2 | 99.7±0.6 | 99.6±0.5 | 99.7±0.6 |
| Thyroid | 55.7±9.7 | 64.7±7.3 | 75.0±9.1 |
| Win1 | 74.4±12.2 | 88.8±11.0 | 83.3±15.2 |
| Win2 | 63.3±10.5 | 55.4±7.1 | 64.6±7.8 |
| Win3 | 64.4±19.0 | 53.9±12.4 | 80.2±8.3 |
| Win4 | 55.6±15.8 | 57.6±13.2 | 59.5±10.2 |
| Win5 | 62.4±12.7 | 51.7±16.1 | 58.5±10.3 |
| Yeast1 | 65.4±8.6 | 50.8±5.9 | 59.0±18.4 |
| Yeast2 | 25.4±10.5 | 39.0±5.3 | 39.9±4.7 |
| Yeast3 | 86.3±7.4 | 85.4±6.0 | 87.9±2.8 |
| Yeast4 | 92.5±8.1 | 93.5±2.9 | 83.4±7.9 |
| Yeast5 | 78.2±16.9 | 65.2±13.7 | 86.8±5.6 |
| avgRank | 2.117 | 2.367 | 1.517 |
| Time cost | ×1.27 | ×1.17 | ×1 |

Table 10: Performance comparison between OECV and SmoteOB in terms of G-mean (%) in panel (a) and balanced accuracy (%) in panel (b). Each entry is the mean±std performance across 10 runs. The best performance on each dataset is highlighted in bold, and the 2nd best performance is highlighted in italics. The last two rows list the average ranks (avgRank) of each model across datasets, as well as the relative average time costs.

(a) G-mean

| Dataset | OECV | SmoteOB |
|---|---|---|
| Elec | 83.7±0.9 | 87.5±0.4 |
| Abrupt | 62.8±0.6 | 51.4±13.0 |
| Gradual | 8.5±4.2 | 8.7±4.6 |
| Incremental1 | 46.4±1.5 | 51.4±0.7 |
| Luxembourg | 100.0±0.0 | 99.6±0.2 |
| NOAA | 73.1±0.5 | 64.6±0.3 |
| Ozone | 77.1±1.7 | 77.7±1.7 |
| Airlines | 51.8±0.9 | 35.6±1.9 |
| Covtype | 28.6±1.5 | 51.0±3.5 |
| Incremental2 | 15.6±1.6 | 4.8±1.2 |
| Abalone1 | 67.8±4.3 | 57.1±1.7 |
| Abalone2 | 38.7±7.6 | 41.0±1.9 |
| Car1 | 78.2±2.2 | 89.8±1.9 |
| Car2 | 96.1±1.0 | 78.8±2.7 |
| Kddcup | 100.0±0.0 | 71.9±0.7 |
| Kr | 94.7±1.3 | 88.2±1.6 |
| Segment | 99.1±0.1 | 95.3±0.5 |
| Shuttle1 | 99.9±0.0 | 99.9±0.1 |
| Shuttle2 | 99.7±0.0 | 99.5±0.1 |
| Thyroid | 71.6±1.5 | 52.1±0.5 |
| Win1 | 80.6±1.2 | 70.4±24.3 |
| Win2 | 59.2±3.6 | 51.7±0.6 |
| Win3 | 79.9±1.2 | 62.5±1.0 |
| Win4 | 50.6±5.7 | 68.3±2.9 |
| Win5 | 53.3±7.1 | 78.7±0.4 |
| Yeast1 | 48.0±18.9 | 55.5±1.3 |
| Yeast2 | 0.2±0.4 | 9.8±7.4 |
| Yeast3 | 87.8±1.0 | 84.7±0.5 |
| Yeast4 | 81.7±5.7 | 93.1±0.2 |
| Yeast5 | 86.5±1.7 | 83.0±1.0 |
| avgRank | 1.42 | 1.58 |
| Time cost | ×1.32 | ×1 |

(b) Balanced accuracy

| Dataset | OECV | SmoteOB |
|---|---|---|
| Elec | 84.2±0.9 | 87.9±0.5 |
| Abrupt | 64.6±0.6 | 62.5±0.9 |
| Gradual | 52.3±1.0 | 47.4±1.1 |
| Incremental1 | 48.2±1.3 | 55.4±0.7 |
| Luxembourg | 100.0±0.0 | 99.6±0.2 |
| NOAA | 73.2±0.5 | 66.1±0.3 |
| Ozone | 77.4±1.6 | 78.4±1.4 |
| Airlines | 52.2±0.8 | 49.2±0.5 |
| Covtype | 38.6±1.1 | 63.3±1.6 |
| Incremental2 | 40.0±0.6 | 43.3±0.8 |
| Abalone1 | 71.9±2.5 | 62.8±1.4 |
| Abalone2 | 54.2±3.1 | 57.0±0.9 |
| Car1 | 79.0±2.1 | 90.2±1.8 |
| Car2 | 96.2±1.0 | 81.7±2.1 |
| Kddcup | 100.0±0.0 | 78.8±0.6 |
| Kr | 94.7±1.2 | 89.4±1.4 |
| Segment | 99.1±0.1 | 95.4±0.5 |
| Shuttle1 | 99.9±0.0 | 99.9±0.1 |
| Shuttle2 | 99.7±0.0 | 99.5±0.1 |
| Thyroid | 75.0±1.5 | 59.8±0.4 |
| Win1 | 83.3±0.5 | 79.5±10.3 |
| Win2 | 64.6±2.0 | 60.8±0.4 |
| Win3 | 80.2±1.2 | 66.3±0.7 |
| Win4 | 59.5±4.4 | 72.2±2.1 |
| Win5 | 58.5±3.8 | 80.0±0.3 |
| Yeast1 | 59.0±7.7 | 58.3±1.0 |
| Yeast2 | 39.9±1.1 | 44.1±1.2 |
| Yeast3 | 87.9±1.0 | 86.6±0.5 |
| Yeast4 | 83.4±4.3 | 93.2±0.2 |
| Yeast5 | 86.8±1.6 | 83.6±0.9 |
| avgRank | 1.38 | 1.62 |
| Time cost | ×1.32 | ×1 |
Review 1:
Summary: This paper addresses online class imbalance learning. The proposed approach employs the threshold moving method (Section 2), where in standard approaches the output of the model prediction is modified by predefined weights. In the proposed approach, these weights are optimized during training by utilizing an evolutionary approach. The proposed approach works as follows. The baseline classifier is trained independently of the weighting scheme, using online stream data. The weight vector is optimized so that the validation performance is maximized, where the validation performance is measured by keeping online data in a buffer and evaluating the performance on it. The proposed approach is compared with several baselines on 30 datasets. The reported results show its promising performance over the compared approaches.
Strengths and Weaknesses:
Strengths:
A simple (i.e., naive) hybrid approach to enhance the performance of online class imbalance learning.
Comparisons on 30 datasets are performed.
Weaknesses:
The proposed approach requires a huge buffer to keep validation data for evaluating the performance. It looks like it contradicts the standard setting of online learning.
Related to the above point, it is not clear whether the comparison with the baseline approaches is fair. Do they also use such huge memory?
Though the performance is evaluated on 30 datasets, the analysis is missing. Therefore, it is not clear when and why the proposed approach is promising or not promising compared to the other baselines.
Requested Changes:
P3 "These methods heavily rely on ad hoc heuristics and hyperparameters, like safety degree (Jiang et al., 2021), sampling rate (Wang & Pineau, 2016), borderline factor, and disjunct factor (Ren et al., 2019), which potentially hinder optimal performance." As I am not familiar with these approaches, I could not tell from what is written how complicated it is to choose a reasonable parameter. As this is one of the main motivations of the proposed approach, please elaborate on it.
P5 "The overall bi-level optimization problem is stated as ..." It is wrong. The lower-level loss is defined by ell_1, whereas the upper level is defined by ell_2. Moreover, although the authors say that the resulting problem is a bi-level optimization problem, as the lower-level optimization is independent of the upper level, it looks like just a single-level optimization with a dynamic objective, where the objective is changing due to the training of the classifier.
P6 "In other words, the model faces heavier penalties for misclassifying class i to class j as class j gets larger or class i gets smaller." I am confused here. If I understand correctly, the cost function is untouched and only the output of the model is modified by M.
P8 "The buffer size |B| for OECV is fixed at 200 samples, and the oversampling rate is set to 3 for all datasets" This contradicts the setting of online learning from streaming data described in Section 3.1. 200 out of a dataset size of at most 5000 (less than 2000 for half of the datasets tested in this paper) is definitely not a small buffer. And it is even enlarged by a factor of 3, which already amounts to the size of the whole dataset for several test datasets. If one can have such a huge buffer, why not use this buffer to learn the lower-level classifier? Isn't it possible to easily approximate the approaches for offline class imbalance learning by using such a buffer?
Do the compared approaches listed in Section 4.1 use the same amount of memory? Please compare the memory usage and execution time as well as the performance.
P9 How are the datasets split into training and test datasets? How many data points are there for each? Doesn't the proposed approach overfit to the
P10 Though the proposed approach is compared with baselines on a large number of datasets, analysis is missing. When is the proposed approach promising over the other approaches and when is it not?
Broader Impact Concerns: no concern
==================================================
Review 2:
Summary: The paper proposes Online Evolutionary Cost Vector (OECV), a method that aims at handling imbalanced scenarios in online learning using an evolutionary algorithm (EA). The paper proposes a dynamic EA to adaptively learn the cost vector in the presence of concept drift, which unifies EA and online class imbalance learning within a bi-level optimization framework by applying a cost vector. The first layer, an online classifier, offers a rough probabilistic prediction and updates by its own rule. The second layer, a cost vector, refines the rough prediction and undergoes a dynamic optimization process via dynamic EA. OECV's performance is tested on 30 real-world datasets, in comparison to four state-of-the-art (SOTA) methods.
Strengths and Weaknesses:
Strengths:
- This paper studies an important problem of online imbalanced learning, and proposes a reasonable mechanism to address this problem that relies on bi-level optimization using EA
- The paper is in general well written, structured and presented
Weaknesses:
- In the experiments, there seem to be some arbitrary decisions that are loosely justified. For example, the initial 30% of samples of each stream are used for model initialization in an offline fashion. How is this rate decided? More importantly, what is the impact of this initial sample size and 'pre-training' mechanism on the overall effectiveness of the proposed method? The same question applies to other decisions such as the buffer size and the oversampling rate. Are these data-dependent, method-dependent? What is the impact these would have on the performance of OECV?
- Some ablation studies seem to be missing, in particular those related to the hyperparameters mentioned in the previous item
- In the experiments, no analyses are provided in terms of the computational cost (memory or computation) with respect to other SOTA methods. How expensive, computationally, would running the proposed OECV be? This is extremely important considering the well-known cost of EA algorithms.
Requested Changes:
- Please add ablation studies for hyperparameters such as the samples used for initialization, buffer size and oversampling rate, and draw some conclusions related to the impact of those on the performance of your method
- Please justify all your technical decisions (such as those related to the hyperparameters mentioned in the previous item) as you present your main experiments
- Please provide results on the computational cost of the proposed method compared to others
Broader Impact Concerns: No broader impact concerns.
==================================================
Review 3:
Summary: The paper introduces a novel online learning framework that can handle class imbalance and concept drift by updating a cost vector using evolutionary algorithms. The authors model the online learning task as a bi-level optimization problem where the lower-level optimization involves minimizing a normal classification loss, while the upper-level optimization involves learning a cost vector using evolutionary algorithms to correct classifier predictions in the presence of data drift. The algorithm is empirically validated on 30 real-world datasets and compared with 4 SOTA baselines.
Strengths and Weaknesses:
Strengths:
1. The use of evolutionary algorithms to cast the class imbalance problem as a hyperparameter optimization problem is an interesting and novel approach that can be applied to a wide range of performance metrics
2. The experimental results show that the approach generally outperforms the SOTA baselines, and the ablation studies show that the gains are indeed due to the use of evolutionary algorithms.
Weaknesses:
1. The mismatch in the update frequencies of the upper- and lower-level optimization problems may lead to sub-optimal choices of the cost vector (since the classifier is updated more frequently).
2. The oversampling approach appears to be a) susceptible to overfitting if the same samples from the buffer $\mathcal{B}$ are used to train the classifier and then added to $\mathcal{B'}$ to update the cost vector, and b) susceptible to noise in the class labels, since the effect of erroneous labels will be amplified by the generated samples in (5) (if the original sample is corrupted then all the generated samples will also be corrupted since they have the same label).
Requested Changes:
1. Please comment on the weaknesses above. I do not see them as dealbreakers, but if my concerns are valid I would like to see them at least discussed in the paper.
2. Consider adding examples of the kinds of online performance metrics (especially non-differentiable ones) that could be considered, in paragraph 2 of the introduction.
3. Can you explain what is "ad-hoc" about the heuristics chosen by the related works cited in Section 2.1.1 and why these could hinder the performance?
4. The cost-vector-based correction for class imbalance appears to assume that the classifier is well calibrated (misclassified samples will be corrected if their predicted class probabilities are low). Will this actually be the case in online settings where the classifier is trained with a small number of samples and so may become overconfident? Can any calibration approaches (like Platt scaling) be used to address this issue?
5. There is a reference to Line 16 in Algorithm 2 in Section 3.4.1, but the algorithm block only has 15 lines. Please fix this.
6. While the comparisons between OECV-ea and OECV as well as OECV-n and OECV-ea seem to suggest that extra data does not play a determinative role, it isn't clear if it is at all useful. Would it be possible to add a version which uses the evolutionary algorithm but does not use the extra data, to see if having the extra data makes any material difference?
7. Why is a high correlation between standard deviation and class imbalance a good thing?
Broader Impact Concerns: N/A
==================================================
Towards Measuring Predictability: To which extent data-driven approaches can extract deterministic relations from data, exemplified with time series prediction and classification

Anonymous authors

Paper under double-blind review

## Abstract

Minimizing loss functions is one important ingredient for machine learning to fit parameters such that machine learning models extract relations hidden in the data. The smaller the loss function value on various splittings of a dataset, the better the machine learning model is assumed to perform. However, datasets are usually generated by dynamics consisting of deterministic components, where relations are clearly defined and consequently learnable, as well as stochastic parts, where outcomes are random and thus not predictable. Depending on the amplitude of the deterministic and stochastic processes, the best achievable loss function value varies and is usually not known in real data science scenarios. In this research, a statistical framework is developed that provides measures to address the predictability of a target given the available input data and, after training a machine learning model, how much of the deterministic relations have been missed by the model. Consequently, the presented framework allows to differentiate model errors into unpredictable parts regarding the given input and a systematic miss of deterministic relations. The work extends the definition of model success or failure as well as of the convergence of a training process. Moreover, it is demonstrated how such measures can enrich the procedure of model training and guide the combination of different models. The framework is showcased with time series data on different synthetic and real-world datasets. The implementation of the used models and measures for quantifying the deterministic relations is provided via the git repository .... (the repository will be published and the link will be provided in case of acceptance, but for the review process it is provided as a supplementary zip-file)

## 1 Introduction

Data analysis and the application of the corresponding insights work if there are reliable and stable relations between the measured quantities and their model. However, since any measurement may not be error-free, or the dynamics that determine the values of the considered quantities may be inherently noisy to a certain extent, measurement values do not purely reflect the relations to be investigated. Furthermore, the input data could miss relevant information, e.g., relevant features are not provided or even not measured, to model the output data, such that, regarding the given input data, some parts of the output are unpredictable due to the lack of information. Usually, real-world datasets are not binary in terms of being predictable, meaning that there are some deterministic patterns plus patterns which cannot be inferred from the provided historic data, like financial markets based solely on their history, and therefore there is no chance for any model to predict them with total accuracy. Depending on the relative magnitude of these elements, the prediction error can vary even for a successful model. We extend the term of model success by the model's ability to extract or learn, respectively, all the available deterministic relations. Consequently, we argue that the magnitude of prediction errors alone does not unequivocally signify model success or failure.
One key issue before any data analysis, machine learning (ML) model training, or information extraction from a dataset is testing if there are such reliable relations between the quantities in the dataset of interest. Such tests deliver valuable insights to evaluate whether efforts for further analysis are reasonable at all. A second key aspect, after an iteration of data analysis or information extraction like the training of an ML model, is testing if there is information left to extract or if all the reliable information is extracted in terms of deterministic relations between input and output data. If all relations are extracted or learnt, respectively, the deviations between the predictions given the input data and the target (ground truth) are supposed to be stochastically independent of the input data. Thus, given that input, additional analysis on this input-target relation might not reveal further insights or improve accuracy in terms of extracting more deterministic relations. Consequently, further training might not improve a model in terms of learning these deterministic relations. Following the outline above, in the presented work, we provide a framework to address the following, which is illustrated in Figure 1:

- We utilize measures to evaluate the stochastic dependence and information content between random variables modeling the data to evaluate to what extent the data is interconnected and to quantify extractable information based on defined input and output (target) variables assembled from the dataset.

- After fitting a model, we use these measures to estimate if there is still information left to extract. This evaluation is done by testing the stochastic independence and information content between the input and the deviations of the model output from the ground truth. For this purpose, we investigate the relation of random variables that model the corresponding quantities.

![1_image_0.png](1_image_0.png)

Figure 1: Graphical abstract representing the main concept of the presented work to analyze for information content and information extracted or learnt, respectively, by a model.

These bullet points provide the foundation of the framework that we endeavor to establish for characterizing failure cases in prediction modeling. A failure case is herein defined as a scenario in which a prediction model fails to capture the essential, meaning deterministic, dynamics of the target variable(s), given the input information. This definition of failure allows us to decide whether there is potential for improvement or whether the prediction model has already achieved the best possible accuracy given the data. Our mathematical framework provides an additional insight for model evaluation and extends the current performance measures of ML models. Furthermore, our framework can consequently provide an explanation in case of bad model accuracy, by analyzing whether the deviations still contain information or consist only of parts that are unpredictable from the perspective of the input data. The framework is not limited to a specific model architecture and thus is model-agnostic, since we only inspect the dependence between the input and the model errors, which does not require knowledge of the inner function of the model. The implementation of this general framework provided with this work uses the mutual information, the chi-square test of independence and the Pearson correlation to investigate the existence of relations between random variables.
One variable could be an input feature and one variable could be an output feature to be predicted by an ML method. In case of several input or output variables, the sum of all pairwise considerations between input and output variables serves as a measure for the information to extract. This approach is entirely data-driven and needs no prior knowledge about the data and its distribution. Our framework is not limited to these tests, and we provide pros and potential cons of the pairwise approach in the Discussion, as well as references to other implementations of mutual information estimators in the Related Works section. Consequently, other methods that test for information content or stochastic dependence can be used and easily included into our modularized git repository that is provided with this work. One of the main intentions of this work is to apply such measures to demonstrate how to evaluate potential model success as well as to differentiate model inaccuracies into systematic failures and unpredictable parts with respect to the input data. The choice of the measures and their implementations themselves are not the focus of this work. In analogy to the idea of using information and stochastic dependence measures for feature selection, we apply the same methods to the input and the deviations between the model prediction and the ground truth. Our approach for measuring the predictability of output based on input variables is similar to feature selection methods, like the minimal-redundancy-maximal-relevance (mRMR) method of Peng et al. (2005), which is based on mutual information. If the input and these deviations take their values totally independently of each other, it means that there is nothing left to learn for an ML method, regardless of the loss value, which might be high or low depending on the magnitude of the unpredictable component. Evaluating the deviations for dependence on the input has several benefits for model training:

- Further convergence criterion: Apart from utilizing the loss function for convergence, we can stop training whenever there is no information or dependence left between the input and the deviation of a model prediction from the ground truth.

- Hyperparameter tuning: We can stop a grid search whenever we have identified a hyperparameter set with which the corresponding model has extracted all the information on the training or validation set.

- Data efficiency: The total amount of training data can be used for fitting model parameters, since no data is required as an additional set for early stopping; overfitting can be detected directly on the training data with our framework.

- Decide if an inaccuracy measured by a loss function value is due to unpredictable parts given the input data or reflects a systematic failure of the model to capture relevant information. Thus, we can decide if a loss function value is a corresponding lower bound for the given dataset, since no further deterministic relations remain within the dataset.

- We provide a loss-function-agnostic framework to evaluate the performance of ML models, which can be used as a model selection criterion.

Firstly, we showcase our general framework with an application to time series forecasting. The basic procedure including our framework is the following.
After quantifying the level of predictability by inspecting the input and the ground truth target, we can perform an exactly analogous test on the input and the residuals, which define the deviations as ground truth minus prediction, to see whether they are still predictable given the input. We use supervised ML techniques to predict future parts of the time series. Secondly, we showcase the application for time series classification similarly, where there is nominal target data and the deviations between ground truth and model output are defined accordingly in Section 2.3 and Section 4.4. Among different data modalities such as text, images, and time series, our insight is that in time series data, the existence of a clear relationship between input and output is often not directly noticeable. Therefore, in this work, we focus specifically on time series analysis. For instance, in natural language processing (NLP), we typically do not encounter a sequence of random words or tokens. Similarly, in image datasets, depending on the task (e.g., classification, segmentation), the relationship between input and output usually exists a priori, and one would typically not spend time proving such existence, but would instead directly train a model to solve the task at hand. However, in time series analysis, the underlying dynamics of a process need to be learned from the measurements. Sometimes, the history of measurements provides little to no information about the future, leading to future samples being mainly or at least partially independent of past samples and therefore being unpredictable. For example, consider the prediction of stock prices using time series data. Past stock prices may not always provide a clear indication of future prices due to the complexity of market dynamics and the influence of external factors such as economic events and investor sentiment. Therefore, accurately predicting future stock prices requires understanding and modeling the underlying patterns and dynamics in the time series data, which may not be directly evident from historical observations alone. Without a stopping criterion for the improvement of a model in such scenarios, one could spend a huge amount of time on learning dynamics which either do not exist (such as pure noise) or are impossible to learn because the given history shares no or little information in that regard. Therefore, in such cases, we should know the upper bound of the model's performance to avoid trying to improve it while further improvement is not possible. One example is the prediction of workloads, mainly network traffic in datacenters: "measurement studies found two key results with implications for the network design. First, the traffic patterns inside a data center are highly divergent..., and they change rapidly and unpredictably." Greenberg et al. (2009). A more recent manifesto, Buyya et al. (2018), re-iterated the brittleness of existing *"demand estimation and workload prediction methods"*, leaving it as an open question if *"Machine Learning (ML) and Artificial Intelligence (AI) methods could fully address this shortcoming"*. Therefore, in this paper we would like to quantifiably answer this question: to what extent can data-driven approaches extract deterministic relations from data? Our solution is simple yet intuitive and effective. By checking the independence of the residual error from the given input, we are able to judge if further improvement of models and results is still possible.
Paper outline The work is organized as follows: The main theoretical concepts that are used in the proposed framework are described in detail in Section 2. This includes a precise definition of the measure of mutual information and the chi-square test of independence, as well as an equinumeric discretization scheme for features with a continuous co-domain. Furthermore, a stacking architecture is defined that describes how models may be combined to iteratively extract all the information. A section on related methods follows this Methods section and addresses how our work extends related works using mutual information, ensemble learning and time series prediction. Applications of the information extraction evaluation of a model are showcased in Section 4 with different experiments in the area of time series prediction and classification. These experiments include a basic proof of concept, an analysis of the influence of the loss function on the information extraction depending on the structure of the noise, additional convergence criteria based on the presented framework, stacking of models, as well as detecting distribution shifts caused by the existence of different deterministic relations. In the Discussion, we provide assumptions and limitations of our current approach. Furthermore, we sketch further applications of our framework in the area of unstructured data. Moreover, efficient model size reduction is also discussed by ranking subparts of a model with stochastic measures, since the presented framework is general and can be applied to any function generating output from input data.

## 2 Theoretical Background And Methods

In this section we provide the necessary background and mathematically establish our framework.

## 2.1 Basic Concept For Unpredictability In A Nutshell

**Definition of unpredictability** The random variable $Y$ is deemed unpredictable with respect to an information set given by the random variable $X$ if the conditional distribution $P(Y = y|X = x)$ aligns with the unconditional distribution $P(Y = y)$:

$$P(Y = y|X = x) = P(Y = y) \quad (1)$$

for all $x$ and $y$. Specifically, when $X$ comprises the past realizations of $Y$, Equation 1 suggests that having knowledge about these past realizations does not enhance the predictive accuracy of $Y$. It is important to note that this form of unpredictability in $Y$ is an inherent attribute, unrelated to any prediction algorithm (Bezbochina et al., 2023). One famous example of an unpredictable time series is white noise, where samples are independently and identically distributed (iid); the independence of future samples from past samples makes it essentially unpredictable and therefore training an ML model is pointless. In this case, the best predictor in terms of the $L^2$-loss is a mean predictor, suggesting that further investing in improving the prediction model is fruitless. Please note that while unpredictable data and noisy data are related, they are not equivalent. Noise represents a specific subset of unpredictability. On the other hand, if Equation 1 does not hold true, it suggests that, given $X$, it is reasonable to train an ML model to predict $Y$. The more the distributions on the two sides of Equation 1 deviate from each other, the more chance we have to train a model with a high accuracy to predict $Y$ based on $X$. In the scope of this work, we call $X$ the context/input variable and $Y$ the target/output that we want to predict. In this work, we use the terms input and context interchangeably, and analogously the terms output and target.
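To make the white-noise example concrete, the following minimal sketch (our own illustration, not code from the provided repository; the lag-one setup, the bin count, and the use of `sklearn.metrics.mutual_info_score` are illustrative assumptions) estimates the dependence between consecutive samples of a purely random and a purely deterministic series:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

def lag1_mutual_information(series, n_bins=10):
    """Mutual information (in nats) between consecutive samples,
    after simple equal-width binning of past and future values."""
    past, future = series[:-1], series[1:]
    past_d = np.digitize(past, np.histogram_bin_edges(past, bins=n_bins))
    future_d = np.digitize(future, np.histogram_bin_edges(future, bins=n_bins))
    return mutual_info_score(past_d, future_d)

white_noise = rng.normal(size=10_000)                        # iid samples
deterministic = np.sin(np.linspace(0, 400 * np.pi, 10_000))  # fully determined

print(lag1_mutual_information(white_noise))    # close to 0 (up to estimation bias)
print(lag1_mutual_information(deterministic))  # clearly positive
```

Up to finite-sample estimation bias, the white-noise estimate is close to zero, while the deterministic signal yields a clearly positive value, indicating that a model trained on its history can extract deterministic relations.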
In the following part, we explain how to measure and quantify the concept we have introduced so far.

**Measuring predictability** Assuming Equation 1 is satisfied, we can derive the joint distribution

$$P(Y = y, X = x) = P(Y = y)P(X = x) \quad (2)$$

for all $x$ and $y$ by utilizing the definition of conditional probability. However, in practical data science scenarios, Equation 2 barely holds entirely true, even in the case of independence of $X$ and $Y$, due to noise or numerical errors. Therefore, it is crucial to quantify the degree to which the independence assumption is met and to introduce statistical concepts to decide, based on a level of significance, whether the hypothesis of independence can be rejected. Our approach to quantifying the independence of these random variables is grounded in Equation 2, where we measure the deviation between its left- and right-hand side. Although in general any measure of deviation can be used, in this work we mainly focus on the Kullback–Leibler divergence and the chi-square test of independence as measures of this deviation. In the former case, such a deviation can be calculated based on the mutual information formula given by

$$I(X;Y) = D_{\mathrm{KL}}\left(P_{(X,Y)} \,\|\, P_X \otimes P_Y\right)$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence, $P_X \otimes P_Y$ denotes the outer product distribution, and $P_{(X,Y)}$ is the joint distribution (see Murphy (2022), Section 2.2.4, Figure 2.3). A higher value of mutual information represents a higher predictability of $Y$ based on $X$, which aligns with the definition of mutual information, measuring the information gained about $Y$ through observation of $X$.

## Quantifying Measure Of Success

We define a prediction as successful when the deviations between model output and ground truth, e.g., the residual defined as the ground truth minus the prediction, are stochastically independent of the context/input information. In case of independence, the deviations contain no information shared with the context, rendering them effectively unpredictable based on the provided context information. Consequently, no further improvements are possible, marking the prediction as successful. It is crucial to note that as long as some mutual information remains, there is potential for enhancements, whether through selecting a different model or loss function, or adjusting various parameters. For the sake of assessment, lower values of mutual information suggest better prediction quality. We remark that the scope of this work does not include the development of new methods to calculate, estimate or approximate mutual information, but the application of such methods to improve ML training. In the Related Work section, we provide references to such methods. For the demonstration of the application, we provide one implementation that worked for our experiments. However, our framework does not rely on a specific implementation of mutual information, and thus any method to calculate mutual information can be used. Furthermore, our provided git repository is modularized, ensuring that it can be easily extended by further functions calculating mutual information (or any other measure calculating stochastic dependence or information content). A rigorous mathematical formulation, including details about the provided implementation of the framework, is given in the following.

## 2.2 Foundations And Implementation Details For The Predictability Framework

In this section, we explain our framework in detail.
**Measures for stochastic independence and information content** We are given $n \in \mathbb{N}$ input features in the format $x \in \mathbb{R}^n$ and $m \in \mathbb{N}$ output features in the format $y \in \mathbb{R}^m$. Each feature is modeled as a random variable taking values upon measurement. Consequently, the set of input random variables is given by $X = \{x_1, \ldots, x_n\}$ with $x_i : \mathbb{R} \to Z_{x_i}$, $t \mapsto x_i(t)$, $i \in \{1, \ldots, n\}$, where $t$ is a time point of measurement and $Z_{x_i} := \{z^{k}_{x_i}\}_{k=1,\ldots,l_{x_i}}$, $l_{x_i} \in \mathbb{N}$, is a set of discrete events. Such an event can be defined by the random variable taking a value between two predefined values. Analogously, the set of output random variables is defined by $Y = \{y_1, \ldots, y_m\}$ with $y_j : \mathbb{R} \to Z_{y_j}$, $t \mapsto y_j(t)$, $j \in \{1, \ldots, m\}$, where $Z_{y_j} := \{z^{k}_{y_j}\}_{k=1,\ldots,l_{y_j}}$, $l_{y_j} \in \mathbb{N}$, is a set of discrete events. Our framework holds not only for random variables with a discrete co-domain but also for random variables with a continuous co-domain. We will later present an algorithm that discretizes random variables with a continuous co-domain equinumerically. Furthermore, independent of the topology of the co-domain, we define an information or dependence measure by

$$\Phi : X \times Y \to \mathbb{R}, \quad (x_1, \ldots, x_n, y_1, \ldots, y_m) \mapsto \Phi(x_1, \ldots, x_n, y_1, \ldots, y_m)$$

that describes how much information, resp. dependence, exists between the input and the output. An example for such a measure can be the stochastic independence of multiple real-valued random variables, see, e.g., (Gallager, 2013, 1.3.4). In this work, we focus on a specific structure of $\Phi$, which is a pairwise test between input and output random variables, summing up each value of the pairwise measure. The measure $\Phi$ is given by

$$\Phi\left(x_{1},...,x_{n},y_{1},...,y_{m}\right):=\sum_{i=1}^{n}\sum_{j=1}^{m}\phi\left(x_{i},y_{j}\right)$$

where $\phi : X \times Y \to \mathbb{R}$, $(x_i, y_j) \mapsto \phi(x_i, y_j)$ for each $i \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, m\}$. We are aware that a pairwise test might be an approximation of the actual value of the measure for the deterministic relations, e.g., as in the case of the stochastic independence of multiple real-valued random variables. However, this approximation provides the advantage of much lower computational costs, in particular for large $n$ and $m$, as discussed in Breitenbach et al. (2022). It is one outcome of this work that this approximation is a useful measure to estimate the deterministic connections between input and output as well as between input and the model deviations from the ground truth, and moreover to estimate the learning success of a model, one consequence of which is the reduction of deterministic relations between the input dataset and the model deviations from the ground truth. A similar consideration holds for the mutual information, where a precise calculation of the joint probability can be very costly in case of many input and output variables and where other mutual information estimators exist to circumvent this issue; please see the Discussion and Related Works sections for references and further details. In the present work, we focus on the mutual information and the chi-square test of independence between two random variables as measures, which are both explained later in detail. However, the presented work is generic and can also be executed with different measures for independence, like Pearson's correlation coefficient as defined in, e.g., Breitenbach et al. (2023) for random variables.
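The pairwise structure of $\Phi$ can be sketched as follows (our own illustration, not the repository's code); here $\phi$ is instantiated as the normalized mutual information formally defined in the next subsection, and for brevity equal-width binning is used instead of the equinumeric scheme of Algorithm 1:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def phi_normalized_mi(x, y, n_bins=10):
    """Pairwise measure phi: mutual information of two discretized features,
    normalized via log base a = l_y so that it is bounded from above by 1."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins))
    l_y = len(np.unique(yd))  # number of observed events of y
    return mutual_info_score(xd, yd) / np.log(l_y) if l_y > 1 else 0.0

def big_phi(X, Y, pairwise=phi_normalized_mi):
    """Phi: sum of the pairwise measure over all n*m (x_i, y_j) pairs.
    X has shape (N, n) and Y has shape (N, m) for N measurements."""
    return sum(pairwise(X[:, i], Y[:, j])
               for i in range(X.shape[1]) for j in range(Y.shape[1]))
```

The same function can be applied once to (input, target) to quantify the extractable information and once to (input, residuals) after training to quantify what a model has missed.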
We remark that, in terms of testing the hypothesis that input and model deviations are independent of each other, it is beneficial to rely on several tests, e.g., if the requirements for the application of one test are not fulfilled. In the following part, we explain the main ingredients of the present framework to analyze for deterministic relations. Although these concepts might be well known, we repeat them here for the convenience of the reader, since they are central for this work and a precise definition consistent with this work facilitates its understanding.

**Mutual information** The probability $P(x_i = z^{k_1}_{x_i})$ describes the likeliness that the outcome of $x_i$ equals the event $z^{k_1}_{x_i}$ for any $i \in \{1, \ldots, n\}$ and any $k_1 \in \{1, \ldots, l_{x_i}\}$; analogously for $P(y_j = z^{k_2}_{y_j})$ for any $j \in \{1, \ldots, m\}$ and $k_2 \in \{1, \ldots, l_{y_j}\}$. The probability $P(x_i = z^{k_1}_{x_i} \wedge y_j = z^{k_2}_{y_j})$ describes the likeliness that the outcome of $x_i$ equals the event $z^{k_1}_{x_i}$ and the outcome of $y_j$ equals the event $z^{k_2}_{y_j}$ for any $i \in \{1, \ldots, n\}$, $j \in \{1, \ldots, m\}$, $k_1 \in \{1, \ldots, l_{x_i}\}$ and $k_2 \in \{1, \ldots, l_{y_j}\}$. Based on this definition, we can define the mutual information for a pair of random variables $x_i$ and $y_j$ as one example for $\phi$ as follows:

$$I\left(x_{i},y_{j}\right):=\sum_{k_{1}=1}^{l_{x_{i}}}\sum_{k_{2}=1}^{l_{y_{j}}}P\left(x_{i}=z_{x_{i}}^{k_{1}}\wedge y_{j}=z_{y_{j}}^{k_{2}}\right)\log_{a}\left(\frac{P\left(x_{i}=z_{x_{i}}^{k_{1}}\wedge y_{j}=z_{y_{j}}^{k_{2}}\right)}{P\left(x_{i}=z_{x_{i}}^{k_{1}}\right)P\left(y_{j}=z_{y_{j}}^{k_{2}}\right)}\right)$$

for any $i \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, m\}$ with the basis $a \in \mathbb{N} \setminus \{1\}$ of the logarithm. The mutual information describes how much information about the outcome of $y_j$ we gain given the outcome of $x_i$. If the outcome of $x_i$ is independent of $y_j$, namely

$$P\left(x_{i}=z_{x_{i}}^{k_{1}}\right)=P\left(x_{i}=z_{x_{i}}^{k_{1}}\,|\,y_{j}=z_{y_{j}}^{k_{2}}\right):=\frac{P\left(x_{i}=z_{x_{i}}^{k_{1}}\wedge y_{j}=z_{y_{j}}^{k_{2}}\right)}{P\left(y_{j}=z_{y_{j}}^{k_{2}}\right)},$$

for all $k_1 \in \{1, \ldots, l_{x_i}\}$ and $k_2 \in \{1, \ldots, l_{y_j}\}$, where $P(x_i = z^{k_1}_{x_i} \,|\, y_j = z^{k_2}_{y_j})$ is the conditional probability that $x_i = z^{k_1}_{x_i}$ under the condition that $y_j = z^{k_2}_{y_j}$, we expect zero mutual information since $\log_a 1 = 0$. The mutual information is bounded from below by 0. The upper bound depends on the number of events of $y_j$. In order to normalize the mutual information such that it is bounded from above by 1 for any $y_j$, we define the corresponding log base $a = l_{y_j}$. We remark that in the case where $y_j$ is replaced by the corresponding model deviation, the basis is defined according to the number of different events of the model deviation regarding the ground truth. The normalization is in particular important to compare the information content between $x_i$ and $y_j$ with the remaining information content between $x_i$ and the corresponding deviation between model output and ground truth after the training of the ML model, in order to estimate the information extraction of the model from the dataset. As a next example, we introduce the chi-square test of independence of two random variables.

## Chi-Square Test Of Independence

In case the chi-square test of independence is taken as the measure for stochastic independence, $\phi$ returns 1 if the corresponding random variables of the pair are not independent of each other, else 0. Let us have $N \in \mathbb{N}$ measurements, where at each measurement the values of all random variables are determined.
As a next example, we introduce the chi-square test of independence of two random variables.

## Chi-Square Test Of Independence

In case the chi-square test of independence is taken as the measure for stochastic independence, $\phi$ returns 1 if the corresponding random variables of the pair are not independent of each other, and 0 otherwise. Let there be $N \in \mathbb{N}$ measurements, where at each measurement the values of all random variables are determined. For any fixed $i \in \{1,...,n\}$ and $j \in \{1,...,m\}$, we define

$$P\left(x_{i}=z_{x_{i}}^{k_{1}}\wedge y_{j}=z_{y_{j}}^{k_{2}}\right):={\frac{O_{k_{1}k_{2}}}{N}}$$

where $O_{k_1 k_2} \in \mathbb{N}$ is the number of observed measurements where $x_i = z_{x_i}^{k_1}$ and $y_j = z_{y_j}^{k_2}$ for the corresponding $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$. Then, we have

$$P\left(x_{i}=z_{x_{i}}^{k_{1}}\right)=\sum_{k_{2}=1}^{l_{y_{j}}}{\frac{O_{k_{1}k_{2}}}{N}}{\mathrm{~and~}}P\left(y_{j}=z_{y_{j}}^{k_{2}}\right)=\sum_{k_{1}=1}^{l_{x_{i}}}{\frac{O_{k_{1}k_{2}}}{N}}$$

with $N = \sum_{k_2=1}^{l_{y_j}} \sum_{k_1=1}^{l_{x_i}} O_{k_1 k_2}$. Under the hypothesis that the random variables $x_i$ and $y_j$ are stochastically independent, the number of expected measurements $E_{k_1 k_2} \in \mathbb{R}$ where $x_i = z_{x_i}^{k_1}$ and $y_j = z_{y_j}^{k_2}$ is given by

$$E_{k_{1}k_{2}}:=P\left(x_{i}=z_{x_{i}}^{k_{1}}\right)P\left(y_{j}=z_{y_{j}}^{k_{2}}\right)N$$

for the corresponding $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$. Consequently, if $x_i$ and $y_j$ are independent, it is necessary that the observed and expected numbers of measurements equal each other for all $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$. The chi-square statistic, given by

$$\chi^{2}:=\sum_{k_{1}=1}^{l_{x_{i}}}\sum_{k_{2}=1}^{l_{y_{j}}}\frac{\left(O_{k_{1}k_{2}}-E_{k_{1}k_{2}}\right)^{2}}{E_{k_{1}k_{2}}},\tag{3}$$

equals zero if $O_{k_1 k_2}$ and $E_{k_1 k_2}$ equal each other for all $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$, and quantifies the deviation otherwise. However, due to the presence of noise, even under independence of $x_i$ and $y_j$ it might hold that $\left(O_{k_1 k_2} - E_{k_1 k_2}\right)^2 > 0$ for some $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$. Consequently, we need to estimate from a distribution how likely the observed chi-square value is under the hypothesis of independence. If the observed value is too unlikely, we rather assume that the opposite of our hypothesis of independence is the case, meaning the variables depend on each other and there exists a dependence between the random variables in taking their values. It can be proven that $\chi^2$ is chi-square distributed with $(l_{x_i}-1)\left(l_{y_j}-1\right)$ degrees of freedom, see, e.g., (Rao, 1973, 6d.2) or (Georgii, 2015, 11.3). One important assumption is that the term $\left(O_{k_1 k_2} - E_{k_1 k_2}\right)/\sqrt{E_{k_1 k_2}}$ is approximately normally distributed, which is usually sufficiently the case if $E_{k_1 k_2} \geq 5$ for all $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$, see, e.g., McHugh (2013) or (Greenwood & Nikulin, 1996, page 21). Based on this distribution, we can calculate a p-value for the observed $\chi^2$ value: the probability of getting a higher chi-square value than the observed one. If the p-value is too small (in this work, lower than the level of significance of 0.01), we reject the hypothesis and assume that the random variables do not take their values independently of each other. Our measure of dependence is the number of chi-square tests that indicate dependence of the tested pair while testing each pair of input and output variables $(x_i, y_j)$ for each $i \in \{1,...,n\}$ and $j \in \{1,...,m\}$. However, since the number of chi-square tests is given by $nm$, which can become large, we use an adapted p-value to lower the risk of wrongly rejected hypotheses, which would result in assuming dependence too often: we use the Bonferroni correction, dividing our level of significance of 0.01 by the number $nm$ of chi-square tests.
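A minimal sketch of this dependence count (our own illustration; it uses `scipy.stats.chi2_contingency` rather than the authors' implementation, and assumes constant variables were already filtered out as in step 2(b) of Algorithm 1 below):

```python
import numpy as np
from scipy.stats import chi2_contingency

def count_dependent_pairs(X, Y, alpha=0.01):
    """Count pairs (x_i, y_j) whose chi-square test rejects independence,
    using a Bonferroni-corrected level of significance alpha / (n * m).
    X, Y: integer event labels of shape (N, n) and (N, m)."""
    n, m = X.shape[1], Y.shape[1]
    threshold = alpha / (n * m)            # Bonferroni correction
    dependent = 0
    for i in range(n):
        for j in range(m):
            # Contingency table O_{k1 k2} of observed joint event counts.
            x_idx = np.unique(X[:, i], return_inverse=True)[1]
            y_idx = np.unique(Y[:, j], return_inverse=True)[1]
            table = np.zeros((x_idx.max() + 1, y_idx.max() + 1))
            np.add.at(table, (x_idx, y_idx), 1.0)
            _, p_value, _, _ = chi2_contingency(table, correction=False)
            dependent += int(p_value < threshold)  # phi(x_i, y_j) in {0, 1}
    return dependent
```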
## Discretization Scheme For Co-Domains Of Random Variables

In the next part, we describe how to discretize real-valued continuous random variables, meaning random variables whose co-domain is continuous. We take Algorithm 4 from Breitenbach et al. (2022). We remark that our algorithm also works for real-valued discrete random variables without any change; consequently, no separation between discretized and continuous random variables is necessary. The algorithm provides an adaptive discretization scheme for each random variable: the boundaries of the bins, into which the co-domain of a random variable is divided and which define the events like $z_{x_i}^{k_1}$ or $z_{y_j}^{k_2}$, are not set equidistantly but rather equinumerically. With equinumeric, we mean that each bin has, if possible, the same number of data points, which balances the likeliness of each event.

**Algorithm 1** Discretize the co-domain of random variables

1. Set $\rho \in \mathbb{N}$, the minimum number of data points per bin.
2. For any random variable $v$:
   (a) Determine the minimum value $m$ and the maximum value $M$ of all measured values of $v$.
   (b) If $M \leq m$: skip $v$.
   (c) If $M > m$:
      i. Sort the measured data points of $v$ in ascending order.
      ii. Go through the data points in ascending order. Determine the range of a bin such that there are at least $\rho$ data points within the current bin and the value of the last data point of the current bin is smaller than the first one of the next bin.
      iii. If there are fewer than $\rho$ data points left: join these data points with the last bin that has at least $\rho$ data points.

Algorithm 1 works as follows. The parameter $\rho \in \mathbb{N}$ determines the minimum number of data points per bin into which the co-domain of a random variable is divided. The parameter $\rho$ influences the marginal probabilities $P\left(x_i = z_{x_i}^{k_1}\right)$ and $P\left(y_j = z_{y_j}^{k_2}\right)$ of the joint distribution $P\left(x_i = z_{x_i}^{k_1} \wedge y_j = z_{y_j}^{k_2}\right)$, and consequently the corresponding expected frequency $E_{k_1 k_2}$ as well as the number of bins for each random variable. Increasing $\rho$ will increase the quantity $E_{k_1 k_2}$, which might be useful if $E_{k_1 k_2} < 5$ for some $k_1 \in \{1,...,l_{x_i}\}$ and $k_2 \in \{1,...,l_{y_j}\}$. Consequently, this discretization scheme is beneficial for our implemented chi-square test and can be used without any restriction for the calculation of the mutual information as well. We need to keep in mind that a too big $\rho$ might lead to a too coarse discretization, deleting information from the continuous random variable. In the case of the chi-square test, one strategy might be to start with a small $\rho$ where $E_{k_1 k_2} < 5$ for some $k_1$ and $k_2$ and to increase $\rho$ until $E_{k_1 k_2} \geq 5$ for all $k_1$ and $k_2$. A further advantage of this scheme is that, by always ensuring a minimum number of data points in each bin of each marginal distribution, the denominator in the mutual information formula is never zero, which could happen with an equidistant discretization strategy. In step 2(a), we determine the minimum and the maximum of the available data points of the corresponding random variable $v$ in order to filter out constant functions in step 2(b); without any variation, such random variables do not provide any information for predicting something for which at least two different kinds of events are necessary. For non-constant random variables, we sort the data points of the random variable in ascending order. Once this is done, we can go through the points in ascending order and determine the boundaries of the bins such that at least $\rho$ data points are included in a bin. If there are several data points with equal values, we include all these points in the current bin, such that the first data point in the next bin is larger than all data points in the bin before. If the remaining data points that are not yet associated with a bin number fewer than $\rho$, we include them in the bin with the largest upper bound.
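A minimal sketch of this equinumeric binning (our own rendering of Algorithm 1; variable and function names are ours):

```python
import numpy as np

def equinumeric_bins(v, rho):
    """Return integer event labels for the measured values v (1-D array),
    binned so that each bin holds at least rho points and ties never
    straddle a bin boundary (Algorithm 1)."""
    v = np.asarray(v, dtype=float)
    if v.max() <= v.min():                 # constant variable: skip (step 2b)
        return None
    sorted_v = np.sort(v)
    edges, count, i = [], 0, 0             # upper edges of closed bins
    while i < len(sorted_v):
        j = i                              # absorb all ties of the current value
        while j + 1 < len(sorted_v) and sorted_v[j + 1] == sorted_v[i]:
            j += 1
        count += j - i + 1
        if count >= rho and j + 1 < len(sorted_v):
            edges.append((sorted_v[j] + sorted_v[j + 1]) / 2.0)
            count = 0                      # close the bin between distinct values
        i = j + 1
    if 0 < count < rho and edges:          # leftovers: join the last closed bin
        edges.pop()
    return np.digitize(v, np.array(edges))
```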
The binning generated by Algorithm 1 can be used both for the chi-square test, to calculate the corresponding marginal and joint probabilities, and for the mutual information, as we do in the implementation provided with this work.

## Stochastic Independence As A Test Of Hypothesis

We conclude this section with a remark about the distribution of test statistics. Even under the hypothesis of independent data/random variables, there are some variations by coincidence that lead to a distribution of the test statistic. Consequently, we need the probability of obtaining, under the assumption of independent random variables (no deterministic relation between input and output), a test statistic value bigger than the observed one, so that we can exclude that the relation in the measured data is merely a coincidence. Then, we can decide if a certain observed test statistic is too unlikely under the assumption of independent random variables, in which case we should rather assume that the opposite is true, meaning that there are some deterministic relations by whose action the random variables do not take their values independently of each other. Based on such a statistical view, all models whose deviations from the ground truth cannot be rejected as being uncorrelated with the input are equally good in terms of extracting the deterministic relations. While the chi-square value $\chi^2$ is chi-square distributed with the corresponding degrees of freedom under certain assumptions, with which we can evaluate the observed chi-square value regarding the likeliness of being generated by two independent random variables, we are not aware of such a result for the mutual information. However, to get a threshold such that we can decide, based on a level of significance, whether an observed mutual information value is too unlikely under the assumption of independent data (input-output relation), we can determine a distribution of mutual information numerically with the following permutation procedure. The idea is to shuffle the association of value pairs between input and output based on the available data according to $(x_i(t), y_j(\pi(t)))$ for all $i \in \{1,...,n\}$ and $j \in \{1,...,m\}$, where $\pi : T \to T$, $t \mapsto \pi(t)$ is a bijective map, called a permutation, and $T$ is the set of all time points of measurements of the data points. The random association of pairs from different measurements is assumed to provide a distribution of mutual information of independent input-output data, against which we can evaluate how likely the observed mutual information is under a random input-output association. Even in case there is a strong deterministic relation, the shuffling is supposed to ensure that the corresponding dependence in taking the values is randomized. If the observed mutual information value is unlikely according to the distribution of mutual information values numerically determined from the dataset, we should rather assume that the reason for the mutual information value is the non-random mechanisms relating input and output values, or the corresponding random variables, respectively, in case the output is replaced by the model deviation.
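A sketch of this permutation procedure (our own illustration, reusing the `normalized_mutual_information` sketch from above; the decision rule with the level of significance follows in the next paragraph):

```python
import numpy as np

def mi_permutation_pvalue(xi, yj, n_perm=100, rng=None):
    """p-value of the observed mutual information under the hypothesis of
    independence, estimated by shuffling the input-output association."""
    rng = np.random.default_rng(rng)
    yj = np.asarray(yj)
    observed = normalized_mutual_information(xi, yj)
    null_values = np.empty(n_perm)
    for k in range(n_perm):
        pi = rng.permutation(len(yj))      # bijective map pi: T -> T
        null_values[k] = normalized_mutual_information(xi, yj[pi])
    # fraction of shuffled values at least as large as the observed one
    return float(np.mean(null_values >= observed))
```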
The observed mutual information value is determined to be unlikely under the hypothesis of independent data if less than a predefined percentage of the mutual information values (the level of significance; in this work 5%) generated with the randomly shuffled data is bigger than the observed mutual information value. The distribution is generated by calculating the mutual information value for several random permutations as described above; in this work, we calculate the mutual information 100 times with shuffled data. The same procedure can be used to calculate a distribution for the chi-square value defined in (3), in case the assumptions justifying the application of the chi-square distribution for $\chi^2$ are violated. With such a stochastic framework in place, we are able to make concrete decisions on whether further improvement is possible given the data or whether all the learnable relations are already extracted.

## 2.3 Stacking Architecture

As we see in the Results section 4, different properties of a model influence its capability of extracting deterministic relations. Consequently, we provide here an architecture to combine different models with different properties to systematically extract the deterministic relations. Roughly, models are stacked together where all models get the same input; however, each model tries to learn only what the models in the stack so far have not extracted in terms of deterministic relations. We need to differentiate two cases. The first case is where the target is of ordinal character. Here, the random variables modeling the deviations of the model output from the ground truth are defined by the difference between the model output and the ground truth; in this case, the model deviations are termed residuals. In case the target is of nominal character, the random variables that model the deviations of the model are defined as follows: if the prediction of the model is not correct, the random variable modeling the deviation of the model from the ground truth takes the value of the class label that would have been correct; if the model is right, the corresponding random variable takes a value that does not represent a model class, e.g., a negative integer. Next, we explain the stacking procedure in detail for both cases. We are given the data $\{(x_i, y_i), i \in I\}$ where $x_i$ is the vector-valued input, $y_i$ the corresponding vector-valued output, and $I$ is a finite subset of the natural numbers $\mathbb{N}$. With $x$ and $y$ we denote the corresponding vectors of random variables that take the corresponding values for a given $i$. First, we check if $x$ and $y$ are stochastically dependent. If yes, we train the first model/part of the model stack to best fit the prediction $\bar{y}$ to $y$ given $x$. With $\bar{y}$, we denote a (vector-valued) random variable taking the values $\bar{y}_i$ for the corresponding $i \in I$. For a regression task, i.e., with ordinal target data, the procedure looks as follows. If $\Delta y^1 := y - \bar{y}$ is not independent of $x$, there is still some information left that can be extracted. Consequently, we fit a model, which may have properties different from the current model, to learn these relations between $x$ and the residuals $\Delta y^1$. In other words, the purpose of the second model on top of the first model is to correct the prediction of the first model. The output of the two models is then the prediction of the first model $\bar{y}$ plus the correction $\Delta y^1$. In the next iteration, we test the residuals $\Delta y^2 := y - \bar{y} - \Delta y^1$ for stochastic dependence on $x$.
This procedure can be repeated until the corresponding residuals are stochastically independent of $x$. We call this procedure stacking of models; it can be generalized as follows, such that a new model on top of a stack learns to correct the prediction of the previous stack. The procedure is illustrated in Figure 2.

![10_image_0.png](10_image_0.png)

Figure 2: Stacking architecture for problems with ordinal target data.

To generalize, we define

$$\Delta y^{j}:=y-\sum_{k=0}^{j-1}\Delta y^{k}$$

where $\Delta y^{0} := \bar{y}$, for any $j \in \mathbb{N}$. In a purely deterministic scenario, we would extend the stacking until $\Delta y^{j} = 0$. In a real scenario, where there are unpredictable parts in the data, our definition of "no further improvement possible" is that $\Delta y^{j}$ is stochastically independent of $x$ based on a statistical test or measure. The random variable representing the output of the total stack is denoted by

$${\tilde{y}}:=\sum_{k=0}^{j-1}\Delta y^{k},$$

taking the corresponding values $\tilde{y}_i$ upon the input $x_i$ for $i \in I$, where $j$ is the smallest number such that $\Delta y^{j}$ is stochastically independent of $x$. Consequently, the stacking stops if $y - \tilde{y}$ is independent of $x$. This definition also works for discrete ordinal data, where the differences, resp. residuals, take only discrete values. We remark that this architecture does not require more inference time, since each model of the stack does not depend on the output of the other models; all perform their inference on the same input data and can thus be run in parallel. For nominal target data (where the difference between class labels has no meaning), the stacking architecture works analogously, except that the definition of the deviation of the model from the ground truth is different; please compare with Figure 3. The deviation of the model output from the ground truth is defined by a random variable $\theta^{j}$, $j \in \mathbb{N}$, that takes the value of the correct class in case of a wrong prediction by the model/stack below, and a value that does not represent a discrete class of the ground truth in case the prediction of the model/stack below is correct. If $\theta^{j}$ is independent of the input $x$ for some $j$, we can stop stacking more models. As long as $\theta^{j}$ depends on the input $x$, where $\theta^{1}$ is based on the predictions of the first model, there is a deterministic relation that another model can learn in order to predict $\theta^{j}$, which models a correction to the prediction of the model/stack below. In such a case, the stack is extended by another model predicting $\theta^{j}$. Based on the prediction $\bar{y}^{j}$ of the model/stack below and the prediction for the corresponding $\theta^{j}$, we can decide during inference which prediction to take. In case $\theta^{j}$ equals a class label, we take the value of $\theta^{j}$ predicted by the corresponding subsequent model in the stacking architecture as the value of $\bar{y}^{j+1}$. Otherwise, i.e., if $\theta^{j}$ equals a number not representing a class label, the value of $\bar{y}^{j+1}$ equals that of $\bar{y}^{j}$. The final prediction of the stack is denoted by $\bar{y}$, where all the variables represent the vector-valued case as above. We remark that the co-domain of the random variable $\theta^{j}$ is not one-hot encoded, while the model outputs $\bar{y}^{j}$ should be, due to the nominal character. However, since there is a bijection between the co-domain of $\theta^{j}$ and the corresponding one-hot encoding, the information content or statistical independence between the random variable $\theta^{j}$, with a one-dimensional co-domain consisting of integer numbers, and the input exists as well.

![11_image_0.png](11_image_0.png)

Figure 3: Stacking architecture for problems with nominal target data.
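As a concrete illustration of the ordinal/regression case, the following sketch (our own assumptions: a generic scikit-learn-style fit/predict interface and an abstract independence test, e.g., one of the stochastic tests above) fits and evaluates such a stack:

```python
import numpy as np

def fit_stack(models, X, y, residuals_independent):
    """Greedily fit a stack: model k learns the residual left by models 0..k-1.
    models: iterable of unfitted regressors with fit/predict (assumption).
    residuals_independent: callable (X, r) -> bool implementing one of the
    stochastic tests above (chi-square count or MI permutation test)."""
    fitted, target = [], y.copy()
    for model in models:
        model.fit(X, target)                # learn what is left to extract
        target = target - model.predict(X)  # Delta y^j: remaining residual
        fitted.append(model)
        if residuals_independent(X, target):
            break                           # nothing deterministic is left
    return fitted

def predict_stack(fitted, X):
    """y_tilde = sum_k Delta y^k; each model runs on the same input X,
    so inference can be parallelized across the stack."""
    return np.sum([m.predict(X) for m in fitted], axis=0)
```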
A greedy implementation of both stacking concepts above is to combine models randomly and to check, after adding each new model, that $\Delta y^{j}$ shares less information with the input than $\Delta y^{j-1}$ does, to ensure that the new model on top of the stack is beneficial. This is a test that the new model successfully extracts remaining information rather than guessing instead of really filling in what is missing in terms of deterministic relations.

## 3 Related Work

## Mutual Information Estimation And Applications

Besides non-parametric methods to estimate mutual information, such as Kraskov et al. (2004) and Gretton et al. (2005), a more recent neural-network-based estimation of mutual information has been proposed by Belghazi et al. (2018). Such methods provide an alternative to the Algorithm 1-based calculation of the mutual information. However, we remark that our implementation can directly be used for the chi-square test as well, such that we can also consider two statistical tests to decide whether input and model deviations are independent. DeepInfoMax (DIM) Hjelm et al. (2019) as well as contrastive predictive coding (CPC) Oord et al. (2018) maximize mutual information between raw data and its compressed representation to find a better representation of raw images. Also, Chen et al. (2016) employ mutual information to find a disentangled and interpretable representation of images. The work of Brakel & Bengio (2017) likewise focuses on finding a disentangled/independent representation of images and uses mutual information as a measure of independence, or rather to enforce this independence. Our framework can be directly applied to the corresponding generated representations to test whether the remaining model inaccuracy is only caused by unpredictable parts like noise. For more details, please see the Discussion about the case of unstructured input data.

## Ensemble Learning

Our proposed stacking procedure can be considered an extension of ensemble learning methods such as Wortsman et al. (2022). However, our approach extends them in two key facets. Firstly, we refrain from the indiscriminate combination of multiple models: our framework triggers a combination only if it confirms the presence of potential for further improvement in terms of further learnable/extractable deterministic relations. Secondly, our approach is characterized by progressiveness: we do not assign each model the task of learning the ground truth but rather focus on capturing what remains unlearned by the stack of previous models. This progressiveness not only contributes to the efficacy of our stacking strategy but also enables each model to concentrate on specific tasks that are not covered by other models. For this purpose, we stack models in such a way that the input is given to all models in a stack, and each model attempts to correct the errors left by the preceding models. In order to enable models to extract information that the preceding models could not, the models should vary in their properties, like the model parameters, the loss function they are trained with, or the architecture itself.

## Information Bottleneck

Another application of our framework is to extend the information bottleneck concept Saxe et al. (2019); Kawaguchi et al. (2023).
The basic idea of the information bottleneck is to maximize the mutual information between a sought data representation and the corresponding output data and, at the same time, to minimize the mutual information between this representation and the input data. There is a parameter to balance both contradicting requirements. With our framework, we can extend the information bottleneck method by providing a procedure for choosing this balance parameter so as to find a lossless compression of the input data. Starting with a configuration where the model extracts all the information between input and output data, we tune the balance parameter such that the compression is weighted higher, until the model can no longer extract all the information between input and output. With this procedure, we find the threshold of the balance parameter for a lossless compression.

## Time Series Prediction

Besides the above related methods, the showcase of this work is in particular associated with time series prediction. In the past few years, many time series prediction models have been developed to improve prediction performance, such as RNNs, LSTM Memory (2010), GRU Cho et al. (2014), the transformer and its variants Vaswani et al. (2017); Kitaev et al. (2019); Zhou et al. (2021); Liu et al. (2022); Zhang & Yan (2022); Wu et al. (2021); Zhang et al. (2022), state space models such as RKN Becker et al. (2019), S4 Gu et al. (2021), and MTS3 Shaj et al. (2023), as well as simple yet effective linear models such as Zeng et al. (2023). The improvement is measured in terms of the loss function value of the best weight configuration of a model. We extend this evaluation with our framework by introducing another quantity that evaluates how much information is left to extract from the time series given the input length of historic data. Furthermore, our approach aims to establish, before model training, a level that allows evaluating whether a time series is predictable given historic data; this is in particular important for challenging time series. Additionally, there is a discussion that a simple linear layer has superior performance in certain time series prediction scenarios Zeng et al. (2023), notably periodic time series Li et al. (2023). Nevertheless, in all scenarios there remains the question: how can we make sure that the current method has already achieved the lowest possible bound of the error for each time series, or can it still be improved by fitting a better model for each case?

## 4 Applications / Numerical Experiments

In this section, we showcase our framework to analyze time series for their predictability and to measure dependence between input/context and output/target data. Furthermore, we demonstrate how to measure the learning success of models with our framework and use these insights to build model stacks that extract more information from data than single models can. Moreover, we show how we extend the performance evaluation of a model beyond the loss function with our framework and derive further criteria for convergence of the training of a model. Apart from several real-world datasets, we use a synthetic dataset to cleanly demonstrate some of the effects from above, since there we can control the properties of the noise, like the ratio between information and noise, which we do not know in a real dataset.
The synthetic dataset consists of a time series composed of the sine function, $\sin : \mathbb{R} \to \mathbb{R}$, $t \mapsto \sin(t)$, and the random variable $\theta : \mathbb{R} \to \mathbb{R}$, $t \mapsto \theta(t)$, which generates noise by random values, according to

$$y(t) := \sin(t) + a\,\theta(t)$$

where $a > 0$ scales the amplitude of the noise. In order to evaluate performance in an already established metric, we use the normalized root mean squared error (RMSE) defined by

$$\frac{1}{\sigma}\sqrt{\sum_{i=1}^{N}\left(\tilde{y}_i - y_i\right)^2}$$

where $N \in \mathbb{N}$ is the number of measurement points, $\tilde{y}_i$ is the model prediction at the discrete time point $i \in \mathbb{N}$, $y_i$ is the corresponding actual output data, and $\sigma$ is the standard deviation $\sqrt{\sum_{i=1}^{N}\left(\bar{y} - y_i\right)^2}$ calculated on the training dataset, where $\bar{y}$ is the mean of the values $y_i$ of the training dataset. The parameter $\rho$ of Algorithm 1 is set to 5% of the number of measurement points of the corresponding dataset on which the algorithm is performed, unless otherwise stated. We remark that the term residuals is usually used instead of model deviations for the difference between model output and ground truth in the case of ordered target data; in the nominal case, we further use the term model deviation. The model deviation in the nominal case is defined as the class that would have been correct in case the model predicts the wrong class; if the model is correct, the model deviation equals a number that does not represent a class, e.g., the number -1. We will show that for both definitions the stochastic dependence and mutual information with the input decay over training epochs on the training and validation datasets.

## 4.1 Splitting Noise Off From Model Inaccuracy

In practical scenarios where data is obscured by noise, or where some components of the future samples are not predictable from the history, evaluating prediction models becomes challenging. Traditional metrics such as the $L^2$-loss on a validation set may not suffice, due to the uncertainty surrounding the ratio of unpredictable to predictable parts. This uncertainty complicates determining the lower bound of the prediction error beforehand. Therefore, solely relying on loss metrics for model assessment is insufficient, as the source of error could be either model inadequacy/failure or inherent data unpredictability. It is thus important to be able to distinguish between these two cases, because in the former case we have the chance to improve the prediction, whereas in the latter case we cannot. For example, when the power of the noise (as an unpredictable component) equals half of the data power, the lowest possible normalized mean squared error (MSE) on validation is 0.5. However, by employing our framework, which identifies when the model has learned the primary data component effectively, we can stop training and attribute the remaining residual to the initial data noise, although the $L^2$ error is still high. Motivated by this discussion, we start our experiments with data whose first component is fully predictable, such as the sinusoid function, plus some i.i.d. Gaussian noise that is completely unpredictable. In the following, we demonstrate how our framework can be used to distinguish whether the model inaccuracy results from noise rather than from a relation between input and residuals that the current model has not learned yet. Since we need to vary the amplitude of the noise compared to the deterministic relation, which is modeled by the sine function in this case, we choose to work on our synthetic data described above.
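A minimal sketch of the synthetic series and the normalized RMSE (our own illustration; the time step `dt` and function names are assumptions, and the sliding-window construction follows in the next paragraph):

```python
import numpy as np

def synthetic_series(n_points, a, dt=0.1, rng=None):
    """y(t) = sin(t) + a * theta(t) with i.i.d. standard Gaussian noise theta."""
    rng = np.random.default_rng(rng)
    t = np.arange(n_points) * dt
    return np.sin(t) + a * rng.standard_normal(n_points)

def normalized_rmse(y_pred, y_true, sigma):
    """Root of the summed squared errors, divided by the standard deviation
    sigma computed on the training split."""
    return np.sqrt(np.sum((y_pred - y_true) ** 2)) / sigma
```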
For the numerical implementation, we consider a window of size 50: the first 49 samples are the context/input to the model and the 50th sample is the target/output. To generate the dataset, the window is slid over the time series, which was split into training and test/validation sets before the experiment according to a ratio of 0.75/0.25. For the experiments depicted in Table 1, the random variable $\theta$ adds Gaussian noise. According to the table, the chi-square test and the mutual information (at least in some cases) indicate that the MLP model, trained with the $L^2$-loss, has extracted the sine function. We see that all essential information is extracted, because the chi-square test indicates that all residuals are independent of the input. In the case of mutual information, the p-values are greater than 0.01, indicating that, at this level of significance, we cannot reject the hypothesis that input and residuals are independent of each other and that the input does not carry information about the outcome of the residuals. Therefore, in these cases, even a much more sophisticated model would not be able to decrease the error further and might instead overfit to the noise. While the RMSE increases with the amplitude of the noise, suggesting that the model performance becomes worse, our stochastic measures indicate that the model has extracted the sine function as the main deterministic driver of the time series. This can be seen since the $L^2$-norm between the model prediction and the pure sine value, shown as Actual Err in Table 1, is (almost) independent of the noise amplitude. Furthermore, the normalized test RMSE is always close to the relative noise standard deviation, indicating that the deviation between model and data results from the noise and not from a systematic deviation from the sine function. We remark that the first and third columns can only be presented because we exactly know the signal and the noise components separately.

| Rel. noise std | RMSE | Actual Err | Init. dep. var | Res. dep. var | Init. MI | Init. Perm | Res. MI | Res. Perm | pv |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 2.2 × 10⁻⁵ | 2.2 × 10⁻⁵ | 49 | 49* | 27.497 | 0.7385 ± 0.0272 | 27.2781 | 0.4039 ± 0.236 | 0 |
| 0.2715 | 0.2731 | 0.0616 | 49 | 0 | 8.3351 | 0.7076 ± 0.0136 | 0.7301 | 0.7096 ± 0.0114 | 0.05 |
| 0.4927 | 0.4914 | 0.1073 | 49 | 0 | 4.193 | 0.7337 ± 0.0119 | 0.7643 | 0.7332 ± 0.0108 | 0 |
| 0.6465 | 0.6568 | 0.1476 | 47 | 0 | 2.5647 | 0.7024 ± 0.0104 | 0.7472 | 0.7049 ± 0.0095 | 0 |
| 0.7482 | 0.7461 | 0.1682 | 38 | 0 | 1.6659 | 0.7325 ± 0.0092 | 0.7503 | 0.7054 ± 0.0089 | 0 |
| 0.8166 | 0.8184 | 0.1789 | 37 | 0 | 1.2591 | 0.7293 ± 0.0095 | 0.7862 | 0.7296 ± 0.0085 | 0 |
| 0.8611 | 0.8595 | 0.1831 | 30 | 0 | 1.053 | 0.711 ± 0.0077 | 0.7562 | 0.7102 ± 0.0069 | 0 |
| 0.8921 | 0.9068 | 0.1893 | 28 | 1 | 0.9725 | 0.7275 ± 0.0086 | 0.7931 | 0.7278 ± 0.0075 | 0 |
| 0.9148 | 0.9134 | 0.1846 | 19 | 0 | 0.8825 | 0.7109 ± 0.0079 | 0.7605 | 0.7384 ± 0.0082 | 0.01 |
| 0.9306 | 0.9415 | 0.1934 | 11 | 0 | 0.8751 | 0.7341 ± 0.0075 | 0.7521 | 0.7070 ± 0.0073 | 0 |
| 0.9425 | 0.9410 | 0.1952 | 5 | 0 | 0.8153 | 0.7053 ± 0.007 | 0.7652 | 0.7042 ± 0.0085 | 0 |
In the real world, these numbers usually cannot be computed; however, our method can provide the valuable insight of whether there is some relation left to extract or whether the deviation between model and data is rather due to numerical errors.

Table 1: Impact of varying amplitudes of white noise on a sinusoidal signal, assessing its influence on the performance of a simple MLP. The relative power of the noise component, i.e., the square root of the variance of the noise divided by the variance of the total time series, is shown in the first column; it is the theoretical lower bound of the test RMSE. The second column shows the normalized RMSE on the test data. The third column shows the $L^2$-loss between the prediction and the actual clean sinusoid. The fourth column illustrates the dependency of the target on the history evaluated by the chi-square test, which is initially complete until almost half of the power is taken by the noise, and decreases to small numbers when the noise becomes the dominant (94 percent) part of the time series. The fifth column evaluates the dependency of the residual target on the corresponding context, to show how successfully the model reduces these dependencies. The last five columns show the same concept in terms of mutual information: similar to the chi-square columns, we have initial and residual values, as well as two more columns reporting their corresponding lower bound, obtained by random permutation tests. The last column shows the p-value of the observed mutual information under the hypothesis that the residuals are independent of the input given the dataset. *Note:* * This might initially seem counter-intuitive; more plots are given in Figure 10. Due to some numerical error, some periodic patterns (only visible when multiplied by 10000, see the top left plot in Figure 10) remain in the residuals, which are correctly detected as a dependency by the chi-square test. Such cases can be handled by introducing a threshold indicating a very close fit between model output and data in some norm. An alternative could be to impose a minimal bin width in Algorithm 1, which might impact the equinumeric property.

## 4.2 Information Extraction Influenced By Loss Function And Noise Properties

In this part, we show that, depending on the noise properties, the loss function is a key factor for extracting the deterministic relations from the data. For this purpose, we use our synthetic dataset where the random variable $\theta$ is defined by $\theta(t) := \frac{1}{10}\theta_1(t) + 10\,\theta_2(t)$, where $\theta_1$ is Gaussian distributed with mean zero and standard deviation 1 and $\theta_2$ is a Poisson-distributed random variable over the number of peaks (high values of the time series) within a time interval: if at a time point $t$ there is such a random peak, then $\theta_2(t) = 1$, and $\theta_2(t) = 0$ otherwise. In this numerical experiment, the rate of arrival of peaks is $\frac{1}{200}$ peaks per sample (or time between two data points), meaning that we expect one peak after 200 data points on average. Since the peaks only increase the values of the time series, the mean of the noise is greater than zero, indicating asymmetry. In this case, we see in Table 2 that the initial dependence between input and output is totally reduced only by the MLP trained with the $L^1$-loss function, while the MLP trained with the $L^2$-loss function does not extract all deterministic relations. In this case, by saying "we extract the deterministic relations", we mean learning/approximating the sine function with the model.
This extraction has taken place when only noise is left that is independent of the input, as shown in Table 3, where we see that, in the $L^2$-norm, the output of the model trained with the $L^1$-loss function is much closer to the corresponding sine function, evaluated on the test set. In other words, the results of Table 3 show that the model trained with the $L^2$-loss function is prone to irregular peaks. For illustration, plots of the predictions and residuals for the $L^1$- and $L^2$-loss functions are given in Figure 11. We remark that also here the synthetic dataset is useful, since we know the exact functional formula that generates the data apart from the noise. Moreover, this example provides evidence that, once a stochastic measure reports independence between residuals and input, the model has extracted the deterministic relations without including spurious relations from the noise into the learning.

| - | # dep-test + L2 | # dep-test + L1 | Initial MI | Residual MI + L2 | Residual MI + L1 |
|---|---|---|---|---|---|
| Trial 0 | 38 | 0 | 13.37 | 1.053 | 0.7524 |
| Trial 1 | 28 | 1 | 13.567 | 0.9331 | 0.8099 |
| Trial 2 | 49 | 0 | 13.394 | 1.217 | 0.7683 |
| Trial 3 | 24 | 1 | 13.436 | 0.8886 | 0.7912 |
| Trial 4 | 41 | 0 | 13.494 | 1.067 | 0.7451 |

Table 2: The effect of the choice of the loss function in mitigating asymmetric noise effects. All values are reported on the validation data. The initial number of dependent variables is 49 on the test set, i.e., the target depends on all past time series steps. The first two columns (dep-test L2 and dep-test L1) represent the dependence measured by the chi-square test of independence between input and residuals on the test set for models trained with the $L^2$- and $L^1$-loss functions, respectively. The model trained with the $L^1$-loss could better reduce these dependencies. Regarding the mutual information (MI), all p-values are zero; however, as shown in Figure 4, further investigation shows that the p-value rises up to 10% for the $L^1$ model (orange curve) on the validation set, although it always remains zero for the $L^2$ model. That means that the p-value could serve as a convergence criterion, since we cannot reject the hypothesis that the $L^1$ model with the corresponding weights extracts all the deterministic relations.

| - | Actual test RMSE, model trained with L2 | Actual test RMSE, model trained with L1 |
|---|---|---|
| Trial 0 | 0.1114 | 0.04958 |
| Trial 1 | 0.1221 | 0.04567 |
| Trial 2 | 0.1154 | 0.04189 |
| Trial 3 | 0.1039 | 0.0499 |
| Trial 4 | 0.1117 | 0.04799 |

Table 3: Best actual RMSE for models trained with the $L^2$- and $L^1$-loss functions. The lower bound on the error is 0 here, since we compare the prediction with the pure sine function. It is worth noting that all models are trained on the noisy data. The term 'actual' refers to the fact that we report the errors by comparing the prediction with the actual/pure signal.

Another conclusion that we draw from this experiment, in particular from Table 3, is that the minimization of a loss function under the constraints of the model does not necessarily coincide with extracting the real dynamics that underlie a dataset, or with extracting the most information from the dataset, respectively.
For example, the minimum of the $L^2$-loss function subject to the constraints of the model is more distracted from the real dynamics (the sine function) by this specific noise than its $L^1$ counterpart. However, with our framework, we provide a way to measure whether there is something left to extract, e.g., because the loss function used is not suitable for the noise of a dataset. We conclude this section with the following remark: we see that the $L^1$-loss function is less prone to the asymmetric noise than the $L^2$-loss function. Since the $L^0$-loss function weights all deviations from the real data with the same penalty, we formulate the hypothesis that the $L^0$-loss function might perform even better than the $L^1$-loss function. However, since the $L^0$-loss function is discontinuous and thus not differentiable at all, the lack of a numerically efficient optimization algorithm capable of dealing with the discontinuity of the loss function might hinder its broader application. Consequently, a starting point for further research might be to analyze the performance, with regard to information extraction, of loss functions that contain parts cutting off bigger deviations, e.g., $\min(\max(\tilde{y}-y, -\tau), \tau)$, $\tau > 0$, which can be handled with semi-smooth methods, see, e.g., Ulbrich (2011), or, as shown in Breitenbach (2022), by transforming the corresponding optimization problem into a higher-dimensional one to resolve the min- and max-functions into differentiable functions. Such a loss function, e.g., taking the absolute value or the square of the projection above, could be a trade-off between numerical efficiency and robustness against asymmetric noise or outliers, parameterized by the parameter $\tau$. Starting the learning with a big $\tau$ and restarting the optimization with a smaller $\tau$ from the result of the last optimization run, or decreasing $\tau$ within one optimization run, could also accelerate the convergence speed. This procedure could make the prediction more precise with regard to extracting the deterministic relations, assuming that bigger deviations come (mostly) from noise given the input data. Our framework can monitor the effect of $\tau$ with regard to extracting the deterministic relations.
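Such a clipped loss is straightforward to prototype; a minimal sketch in PyTorch (our own illustration, taking the square of the projection above):

```python
import torch

def clipped_l2_loss(y_pred, y_true, tau):
    """Square of the clipped deviation min(max(y_pred - y_true, -tau), tau):
    deviations beyond +-tau contribute a constant penalty tau**2, so isolated
    peaks (asymmetric noise) cannot dominate the objective."""
    clipped = torch.clamp(y_pred - y_true, min=-tau, max=tau)
    return (clipped ** 2).mean()
```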
## 4.3 Stochastic Measures As Convergence Criteria

Next, we show how the mutual information and the chi-square test evolve over training after each epoch, to demonstrate their capability to work as convergence criteria. The procedure is as follows. If input and output are not independent of each other, the training of a model is started. If, after an epoch, input and residuals (or model deviations, respectively) are independent on the training or validation set, more precisely if we cannot reject the hypothesis that input and residuals, resp. model deviations, are stochastically independent, then there is no information left to extract and we can stop the training. The value of the stochastic measures proposed in this work is that we can evaluate whether a flattening of the loss function after some epochs occurs because the training is done in terms of information extraction, or because the convergence speed has merely slowed down. In the case of slowed-down convergence, further patience is worthwhile, since it could be that further information is extracted (now more slowly), or that in later epochs the convergence speed increases again once the right updates for the weights are found. The advantage of a stochastic measure taking the relation between input and model deviations into account is that a lower bound of dependence is known in advance that is theoretically always achievable by a model, namely when all the deterministic relations are extracted, which is by definition independent of the unpredictable parts given the input data. In contrast, for loss functions that do not take this relation into account, no achievable lower bound on the concrete dataset is known in advance. In this experiment, we use the data from Section 4.2 and plot the relevant metrics in Figure 4. We see that when the loss function becomes flat, the corresponding chi-square test is zero and the p-value of the mutual information becomes non-zero, with a magnitude such that we cannot reject the independence of input and residuals. Although we only check pairwise, this experiment shows that our framework provides valuable convergence criteria, as a stagnating loss function also indicates that the model has extracted all the deterministic relations between input and output. The rationale is that if input and residuals or model deviations are independent of each other, it is necessary that a pairwise test also indicates independence. Furthermore, the turning point at which the stochastic measures start to increase marks a condition for early stopping without considering a validation set. The increasing stochastic measures, while the loss function further decreases, show that training after the turning point adapts the model weights to relations that overfit the data in the sense of deterministic relations; the reason is that the loss function minimum does not coincide with the minimum regarding the stochastic measures.

![17_image_0.png](17_image_0.png)

Figure 4: Images showing values of stochastic measures of independence between input and residuals as well as the loss function history.

We provide a further example based on the dataset ETTh2 Zhou et al. (2021), with a similar result depicted in Figure 5. We see, based on the chi-square test, that after some epochs there is already a set of weights based on which the residuals are clearly decoupled from the input. Similarly, we see that the mutual information is close to the value generated by the permutation test, supporting the chi-square results. Furthermore, the increase of the chi-square value and the mutual information on the test set in later epochs might indicate an overfitting, since the minimum of the loss function does not necessarily coincide with the maximum of extracted deterministic relations defined by a stochastic measure; see also Section 4.2.

![18_image_0.png](18_image_0.png)

Figure 5: Images showing values of stochastic measures of independence between input and residuals as well as the loss function history, based on the ETTh2 dataset.

## 4.4 Time Series Classification

In this section, we apply our framework to time series classification (TSC) problems. We remark that, for ordered classes, we can apply the framework where the differences between the discrete values and the ground truth model the residuals. In order to define model deviations from the ground truth in the nominal case, we use the definition for the nominal case as described in the Methods section 2.3: the random variable $\theta$ modeling the deviation of the model classification from the ground truth takes the value of the correct class in case the output of the model is wrong, and -1 otherwise.
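A sketch of this deviation variable $\theta$ for nominal targets (our own illustration; array names are assumptions):

```python
import numpy as np

def nominal_deviation(y_pred, y_true):
    """theta takes the correct class label where the prediction is wrong
    and -1 (a value that is not a class label) where it is correct."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return np.where(y_pred != y_true, y_true, -1)
```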
We apply our framework on a subset of the well-known UCR dataset Dau et al. (2019). More specifically, the dataset DistalPhalanxOutlineCorrect for Figure 6 is a binary classification problem from time series data. Furthermore, the dataset ElectricDevices for Figure 7 consists of seven classes, where we relabeled the classes starting from 0 until all classes are labeled accordingly. In this case, the parameter $\rho$ of Algorithm 1 is set to 10, to cope with the high imbalance of the classes and the fact that over the epochs this imbalance increases, since most of the cases are classified correctly and thus the class labeled with -1 for $\theta$ grows. Results including the cross-entropy loss, the accuracy, and the mutual information per epoch are shown in Figures 6 and 7. In both figures, we see that, while the accuracy is increasing, the dependence of the variable $\theta$ on the input $x$ decreases over the epochs, as it is supposed to, since fewer and fewer cases are predicted incorrectly. In other words, the random variable $\theta$ tends to a constant function, for which the information gain from taking any input variable into account is small/zero. The difference between the two cases is the following. From the results of Figure 6, we can say that over the training epochs we approach a parameter configuration where, at a level of significance of 1%, we cannot reject the hypothesis that the input and the model deviations are independent according to our mutual information measure. In this case, we do not expect a further improvement of the model performance, since it has potentially captured all deterministic relations. This is in contrast to the experiment depicted in Figure 7, where according to our mutual information test we can reject this hypothesis, and thus there are deterministic relations left to extract which may improve the model performance.

![20_image_0.png](20_image_0.png)

Figure 6: Results on a classification problem showcasing our framework, in particular the definition of the model deviation for nominal target data. Numbers in parentheses show the number of parameters of the model in millions. Although the validation accuracy remains below 0.8, the mutual information (MI) analysis shows that the model deviations are already independent of the input.

![21_image_0.png](21_image_0.png)

Figure 7: Results on a classification problem showcasing our framework, in particular the definition of the model deviation for nominal target data. Numbers in parentheses show the number of parameters of the model in millions. The p-value always remains zero in this experiment, for both the training and the validation set, for the mutual information (MI).

## 4.5 Stacking Of Models Systematically Extracts Information And Improves Prediction

Having seen in the previous sections that the loss function influences the information extraction depending on the noise (see in particular Section 4.2), we further investigate in this section how different model properties, like the architecture or the hyperparameters, contribute to capturing different kinds of information, and how such differences can be systematically combined to extract all available deterministic relations in a dataset. For this purpose, we present a general architecture that is not limited to varying only the loss function and is thus supposed to combine the capabilities of different models, as described in the Methods section 2.3. We showcase the efficacy of our framework on real-world time series datasets: a Nasdaq dataset taken from the UCI repository and the M4 competition dataset Makridakis et al. (2020); more specifically, we take the variable DE1.
We train models with the MLP and Nonstationary Transformer (NST) Liu et al. (2022) architectures on the dataset, which is split according to a ratio of 0.7/0.3 into a training and a test set. The input of the models are the past 60 time lags and the output is the next value of the time series (single-step prediction). To this end, an MLP trained with the $L^1$-loss is chosen as the first model and another MLP trained with the $L^2$-loss as the second model. The weights of the last layer of each model in the stack, except the first model, are initially set such that the output of each model is close to zero. The rationale is that, if the prediction from the stack below is correct, only minor corrections building on the previous predictions are necessary. Furthermore, the last layers of the models are chosen for weight rescaling, since the first layers are usually intended for feature extraction. Additionally, the layer-norm operation (dividing by the standard deviation) would cancel the scaling to small values; for details about this part, please see the Appendix, Section C.4. As illustrated in Table 4, none of the individual models successfully rendered the residuals independent of the input. Remarkably, only the combination into stacked models produced an increase of the p-values, enhancing the overall performance. To provide a more comprehensive comparison, results for NST are also included. We see that in this case the MLP stack not only extracts more information than the NST but is even computationally cheaper. In order to exclude that the effect is merely a result of more free parameters, we have included MLPs with about half the free parameters each, indicated by the suffix "0.5". Moreover, only the stacked models provide a non-zero p-value, such that only in this case we cannot reject, at a level of significance of 1%, the hypothesis that input and residuals are independent. Beyond the mutual information and the chi-square test, considerations such as the $L^2$-loss and the learning curves in Figure 8 further support the empirical evidence that stacked models outperform their individual counterparts by having a smaller $L^2$-loss (this comparison is only valid if the last model of the stack is trained with the same loss function as the corresponding single model, which is here the $L^2$-loss). This evidence showcases the capacity of multiple models to learn diverse aspects. However, we remark that a comparison in terms of loss functions and information extraction is tricky, as shown in Section 4.2.
| Metrics | Init MI | Init Perm | Init diff | Res MI | Res Perm | pv | Res diff | Init Chi-square | Res Chi-square |
|---|---|---|---|---|---|---|---|---|---|
| MLP L1-0.5 | 23.038 | 4.167 | 18.871 | 5.256 | 4.617 | 0 | 0.6392 | 60 | 2 |
| MLP L2-0.5 | 23.038 | 4.167 | 18.871 | 5.347 | 4.579 | 0 | 0.7679 | 60 | 8 |
| stacked-0.5 | 23.038 | 4.167 | 18.871 | 4.843 | 4.564 | 0.06 | 0.2798 | 60 | 0 |
| 2nd stacked model-0.5 | 5.256 | 4.617 | 0.6392 | 4.483 | 4.564 | 0.06 | 0.2798 | 2 | 0 |
| MLP L1 | 23.038 | 4.167 | 18.871 | 5.262 | 4.586 | 0 | 0.6758 | 60 | 4 |
| MLP L2 | 23.038 | 4.167 | 18.871 | 5.414 | 4.533 | 0 | 0.8814 | 60 | 5 |
| Stacked MLP | 23.038 | 4.167 | 18.871 | 4.955 | 4.579 | 0.01 | 0.3765 | 60 | 0 |
| 2nd stacked model | 5.262 | 4.543 | 0.7182 | 4.955 | 4.579 | 0.01 | 0.3765 | 4 | 0 |
| NST L1 | 23.038 | 4.167 | 18.871 | 5.267 | 4.577 | 0 | 0.6901 | 60 | 1 |
| NST L2 | 23.038 | 4.167 | 18.871 | 5.468 | 4.564 | 0 | 0.9038 | 60 | 8 |

Table 4: Comparison of performance metrics for various standalone and stacked models. Initial mutual information (MI), permutation-analysis values, and their differences are presented, providing insights into the starting states. Residual metrics, including the mutual information, the permutation values, and the corresponding p-values in the mutual information framework assessing the independence of input and residual output, are also reported. Additionally, chi-square values for both the initial and the residual state are included, indicating the number of correlated input lags of the time series.

![23_image_0.png](23_image_0.png)

Figure 8: The $L^2$-loss comparison for stacked models and single models.

The last example in this section is provided in Table 5, on the Weather dataset in a 1-step-ahead prediction setting. The split setting matches Nie et al. (2022). Except for the prediction length, we use the same training parameters and architecture for PatchTST as used in Nie et al. (2022).

| Metrics | Init MI | Init Perm | Init diff | Res MI | Res Perm | Res diff |
|---|---|---|---|---|---|---|
| PatchTST L2 (0.41M) | 3213 | 153 | 3060 | 659 | 177 | 482 |
| MLP L1 (0.59M) | 3213 | 153 | 3060 | 514 | 174 | 340 |
| NST L1 (3.88M) | 3213 | 153 | 3060 | 608 | 182 | 426 |
| NST L1 (1.05M) | 3213 | 153 | 3060 | 641 | 183 | 458 |
| NST L2 (0.426M) | 3213 | 153 | 3060 | 700 | 182 | 518 |
| NST L1(0.86M) + PatchTST L2(0.41M) | 3213 (659) | 153 (177) | 3060 (482) | 506 | 181 | 325 |
| NST L1(1.13M) + PatchTST L2(0.41M) | 3213 (659) | 153 (177) | 3060 (482) | 516 | 183 | 333 |
| NST L2(0.53M) + MLP L1(0.59M) | 3213 (514) | 153 (171) | 3060 (343) | 400 | 181 | 219 |
| NST L1(0.86M) + MLP L1(0.59M) | 3213 (514) | 153 (171) | 3060 (343) | 441 | 182 | 259 |
| NST L1(0.53M) + MLP L1(0.59M) | 3213 (514) | 153 (171) | 3060 (343) | 442 | 182 | 260 |
| NST L1(2.05M) + MLP L1(0.59M) | 3213 (514) | 153 (171) | 3060 (343) | 448 | 182 | 266 |
| MLP L1(0.59M) + NST L1(1.13M) + PatchTST L2(0.41M) | 3213 (516) | 183 (153) | 3060 (363) | 499 | 182 | 317 |

Table 5: Comparison of performance metrics for various standalone and stacked models on the Weather dataset. Initial mutual information (MI), permutation-analysis values, and their differences are presented, providing insights into the starting states.
Residual metrics, including the mutual information and the values from the permutation test of the mutual information, are also reported. The corresponding p-value assessing the independence of input and residual output is always 0 in all experiments. Additionally, in the first column, the number of parameters of each model is shown in parentheses in millions. In the other columns, the values of the metrics for the last model of the stack alone are shown in parentheses. In the first column, the model on the left is the first one and the model on the right is the last one in the stack.

We remark that a linear combination of loss functions according to $\lambda_1 \|\cdot\|_{L^2}^2 + \lambda_2 \|\cdot\|_{L^1}$ with hyperparameters $\lambda_1, \lambda_2 > 0$ could be an alternative to stacking in terms of loss functions. However, the success of that formulation depends on the right choice of the hyperparameters $\lambda_1$ and $\lambda_2$. Our framework provides an option to choose the hyperparameters such that the most information is extracted from the data, e.g., such that the mutual information between input and residuals is minimized. Please note that finding the best combination, or any hyperparameter optimization, is not the focus of this paper. In the present work, the focus is on showcasing our framework in terms of its potential for applications in various ML use cases. To conclude this section, we show that our stacking framework also works for classification problems. For architectural details, please see the Methods section 2.3 and Section 4.4. In Table 6, we provide numbers based on the ElectricDevices dataset. To further improve the effect of stacking, we see potential in including a corresponding loss function in the training process, such that the parameters can be optimized so that the corresponding ML architecture extracts as many deterministic relations as possible. Regarding this topic, please see the corresponding Discussion part "Alternative for nominal data" and the Conclusion and Future Work section.

| Metrics | Init MI | Init Perm | Init diff | Res MI | Res Perm | Res diff |
|---|---|---|---|---|---|---|
| SVM-rbf kernel | 41.65 | 16.00 | 25.65 | 23.72 | 15.50 | 8.22 |
| SVM-sigmoid kernel | 41.65 | 16.00 | 25.65 | 34.13 | 16.62 | 17.51 |
| SVM-sigmoid + SVM-rbf | 41.65 (23.72) | 16.00 (15.50) | 25.65 (8.22) | 23.23 | 14.41 | 7.82 |

Table 6: Numbers in parentheses show the starting point of the last stacked model. The abbreviation SVM refers to the standard Python sklearn implementation of support vector machines. The p-value assessing the independence of the input and the model deviation from the ground truth always remains zero in all experiments. For a description of the meaning of the columns, please refer to Table 5.

## 4.6 Applying The Stacking Of Models To Multiple Output Variables Exemplified By Multistep Prediction

In this experiment, we demonstrate our framework for multistep prediction. We take the NASDAQ dataset from Kim et al. (2021), analogously to Section 4.5. We choose an input of length 60 to predict a target of length 30. The result is provided in Table 7 and shows that also in this case, with stacking of models, we can systematically extract the information and decouple input and residuals, in contrast to single models.
The evidence is provided by the fact that a stack of MLP and NST models yields the smallest mutual information between input and residuals minus the mutual information generated by coincidence based on the given data (column "Res diff" of Table 7).

| Metrics | Init MI | Init Perm | Init diff | Res MI | Res Perm | Res diff | Init Chi-square | Res Chi-square |
|---------------------------------------------------------|-----------------|------------------|----------------|--------|----------|----------|-----------------|----------------|
| MLP L1 (2.09M) | 617.48 | 123.29 | 494.19 | 169.45 | 135.11 | 34.34 | 1800 | 332 |
| MLP L2 (3.77M) | 617.48 | 123.29 | 494.19 | 163.53 | 135.36 | 28.17 | 1800 | 182 |
| MLP L2 (4.61M) | 617.48 | 123.29 | 494.19 | 162.82 | 135.47 | 27.35 | 1800 | 177 |
| NST L1 (2.67M) | 617.48 | 123.29 | 494.19 | 183.23 | 135.15 | 48.08 | 1800 | 811 |
| NST L2 (2.67M) | 617.48 | 123.29 | 494.19 | 179.47 | 135.28 | 44.19 | 1800 | 729 |
| MLP L2 (0.67M) + MLP L1 (2.09M) | 617.48 (169.45) | 123.287 (135.11) | 494.19 (34.35) | 157.91 | 135.28 | 22.63 | 1800 (332) | 55 |
| MLP L2 (0.64M) + NST L1 (2.67M) | 617.48 (183.23) | 123.287 (135.15) | 494.19 (48.08) | 156.84 | 135.22 | 21.62 | 1800 (811) | 35 |
| MLP L2 (0.64M) + NST L2 (2.67M) | 617.48 (179.47) | 123.287 (135.28) | 494.19 (44.19) | 153.15 | 135.25 | 17.90 | 1800 (729) | 30 |
| MLP L2 (0.09M) + MLP L2 (0.64M) + NST L2 (2.67M) | 617.48 (153.17) | 123.287 (135.28) | 494.19 (17.89) | 151.80 | 135.27 | 16.53 | 1800 (30) | 20 |
| Avg Ensemble (3.4M) | 617.48 | 123.29 | 494.19 | 170.95 | 135.36 | 35.59 | 1800 | 387 |

Table 7: Comparison of performance metrics for various standalone and stacked models. Initial mutual information (MI), permutation analysis values, and their differences are presented, providing insights into the starting states. Residual metrics, including mutual information and values from the permutation test of mutual information, are also reported. Additionally, the number of dependent input lags tested by the chi-square test of independence for both initial and residual states is included. In the first column, the number of parameters of each model is shown in parentheses in millions; the model on the left is the first model and the one on the right is the last model in the stack. In the other columns, the values in parentheses are the metrics of only the last model of the stack. In the last row, we take the average prediction of the three models in the penultimate row when each of those models is separately trained to predict the original ground truth. The p-values assessing the independence of input and residual output based on mutual information always remain zero in all experiments in this table.

## 4.7 Detecting Distribution Shifts

One prevalent issue hindering the advancement of machine learning models towards higher accuracies is distribution shift, meaning that relations that hold within the training set do not hold on the validation set. Several existing works, such as (Zeng et al., 2023, Figure 5) and Kim et al. (2021), in particular (Kim et al., 2021, Figure 3), have confirmed this phenomenon, e.g., on the ETT1 and ETT2 datasets. This section presents a novel insight into this phenomenon, illustrating how our proposed framework can detect and distinguish such cases from mere overfitting to noise. In a typical training scenario, after some epochs, validation loss may gradually start to increase while training loss continues to decrease.
Without prior assurance of the absence of distribution shift, a purely loss-function-based approach that does not include stochastic measures struggles to differentiate between overfitting to the noise in the training data and a (partial) distribution shift due to different deterministic relations between the training and validation set, as both can lead to the similar observation of an increasing loss function on the validation set. Our framework provides a concise solution. Instead of solely monitoring the loss function, tracking mutual information enables us to determine the types of relationships the model is learning. A decrease in mutual information across epochs indicates successful extraction of information, suggesting that the model is learning deterministic relationships within the training set and is not yet overfitting to noise. If it does so as well on the validation set until input and deviations between model and ground truth are independent, then we can stop the training process, since the model might have learned all deterministic relations on the training and validation dataset. In that case, further training might cause overfitting to noise. Similarly, if there is no significant reduction in mutual information despite a decreasing training loss, this may indicate overfitting to the noise in the training data, as the model is fitting the unpredictable elements of the ground truth, which share no mutual information with the input. On our synthetic dataset, the deterministic relations are identical on the training and validation set by construction. In Figure 4, we see for the training based on the $L^2$ loss function that the mutual information on the training and validation set increases simultaneously after reaching its minimum within a few epochs. Here the model fits noise rather than the actual sine function, since the norm of the difference between model prediction and sine function increases simultaneously. In case the deterministic relations in the training data (partially) do not hold true for the validation set, this may lead to an increase in loss, and potentially in mutual information, over epochs on the validation set, while the mutual information on the training set decreases, as we see in Figure 9. In this figure, the deterministic relations learned on the training data do not produce a fitting output of the model on the validation set. In contrast, see Figure 5, where the mutual information between input and residuals on the training and validation set from ETTh2 simultaneously decreases over epochs. Results in Figure 9 show the learning curves when fitting an MLP with the $L^1$ loss function on the residuals of PatchTST (Nie et al., 2022) with the best setting they proposed.

![26_image_0.png](26_image_0.png)

Figure 9: Distribution shift experiment on the ETTh1 dataset.

## 5 Discussion

In this section, we discuss the assumptions and limitations of our implemented approach before we sketch further potential applications.

Assumptions and Limitations: The implemented approach tests the relation between input and output features pairwise. We are aware that, in the case of more than two random variables, there is a difference between, e.g., pairwise stochastic independence and (mutual) stochastic independence (Gallager, 2013, Section 1.3.4). That means that there might be more information in considering, e.g., two input features at once instead of testing for pairwise relations with an output feature.
However, the full consideration, instead of pairwise testing, scales exponentially in terms of computational costs. Consequently, we are aware that the current pairwise approach, which is computationally cheap compared to the full approach, cannot in general provide a complete statement such as "there is no information left to learn". In this regard, the current approach can only be used as an additional metric to evaluate whether training is done and further iterations might not provide a further improvement. This could be the case if a low pairwise measure of the relation between input and output coincides with a small or no improvement of the loss function. However, since a pairwise test is a special case of the full consideration, a pairwise test indicating deterministic relations between input and output implies that there is information left for a model to learn. An analogy is the gradient within an optimization framework: the gradient provides necessary conditions for convergence to a global minimum, which are sufficient to characterize a global optimum only under some conditions. Even in this case, using only necessary conditions provides useful optimization results while keeping the computational costs manageable, which is analogous to the pairwise definition of our stochastic measures. It remains to investigate under which conditions a pairwise consideration is sufficient to test for total stochastic independence of input and output. Furthermore, apart from considering pairwise testing as an approximation of mutual independence, there might be further approximations of mutual independence that could be considered as well to decide on stopping an ML training. Please also see the Related Work section about mutual information estimation for further examples of approximations of the full mutual information instead of a pairwise consideration. Furthermore, we remark that our framework is not limited to a specific choice of stochastic dependence/information measures, and our git repository is designed such that new measures can be included quickly in a modular manner. In particular for time series, we would like to remark that, even under a method that considers (full) mutual independence between input and output, a result of total independence of input and output does not imply that the time series is not predictable. It just says that, with the given input, the time series is not predictable autoregressively. With other features that are related to the quantity measured as a time series, there might be a deterministic relation that can be used for predicting the time series.

Alternative for nominal data: One further option to model deviations of a model from the ground truth in the case of nominal target data could be a multi-dimensional random variable that models the difference between the actual probability distribution (e.g., 1 for the correct class, 0 otherwise) and the predicted distribution from the model for the classification. If the input variables and this difference distribution, which serves as the correction of the prediction towards the correct distribution, are deterministically related, then the correction (difference distribution) could be learned by another model and added to the distribution from the previous stage. After adding, which is all done in the decision module, see Figure 3, the new distribution can be processed with the softmax function for normalization, and the classification may correspond to, e.g., the most likely class.
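To make this concrete, the following is a minimal PyTorch sketch of the described correction scheme; `base_probs` and `correction_model` are hypothetical placeholders for the first stage's predicted class distribution and the second-stage model, and the snippet is a sketch under these assumptions, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def correction_target(y, base_probs, num_classes):
    # Training target for the correction model on each sample:
    # one-hot ground truth minus the predicted distribution of the first stage,
    # i.e., the residual in distribution space (hypothetical helper).
    return F.one_hot(y, num_classes).float() - base_probs

def corrected_prediction(x, base_probs, correction_model):
    diff = correction_model(x)          # predicted difference distribution
    logits = base_probs + diff          # correction added in the decision module
    probs = F.softmax(logits, dim=-1)   # renormalize to a distribution
    return probs.argmax(dim=-1)         # e.g., the most likely class
```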
Extension to unstructured data: For models that extract information from an input like images or text, the input data needs to be transformed into a representation that shares information with the output. Examples are the pixels in a figure where an object of interest moves over different pictures, or tokenized text, where the position of a word can vary without changing the meaning. We could first optimize layer(s) to have the highest mutual information with the ground truth that is to be predicted, analogously to Chen et al. (2016) or Brakel & Bengio (2017), which focus on finding disentangled representations; see the Related Work section about the application of mutual information for further details. This procedure could also foster the application of a pairwise test between each node of such a representation and the output feature, since the disentangled representation extracts potential mutual information considering several input features at once, such that each node of the representation is independent of the others. However, even in the scenario where no specific optimization for mutual information between representations and target ground truth is done, it can be tested whether the nodes of a layer share mutual information with the output, as the nodes may compress the information over several input nodes such that a pairwise test is meaningful, like in an encoder-decoder scenario where such compressing representations are usually present. In such a scenario, such a layer could be considered a purposeful approximation of the mutual independence or full test, as mentioned in the discussion about the limitations above. A specific issue with the output of large language models, and in general with unstructured output data, is that there is sometimes more than one correct class (e.g., tokens associated with synonyms), but only one word is taken by a large language model. In such a case, a solution might be another large language model that identifies synonyms to evaluate whether a model is right. This definition of model correctness thus transforms the unstructured output into a structured one that can be investigated for dependence with other layers/representations. One possible implementation could be that the corresponding random variable takes the value "correct", which could cover several classes defined by the other model. If the model is not correct, this variable could take the number of a correct class, where in terms of synonyms one correct option, according to the evaluating model, is enough. However, also in this unstructured data case, the stacking concept works, as the random variable $\theta_j$ in Figure 3 is defined with the help of a second large language model instead of a simple rule-based definition from the known tabular data. Such an investigation can help to answer the question of how small a model can be and where most information is learned/extracted, in order to make large language or even multimodal models more efficient. For more details, see the next paragraph about cutting down models. Another advantage of calculating stochastic measures between representations of a (pre-)trained model and the ground truth belonging to a specific task, like a classification, is to find out which representations of which model have the highest dependence on the corresponding task, which could efficiently enrich the way (pre-)trained models are selected from the pool of available models.
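As a minimal NumPy sketch of such a test, assuming the binned (histogram-based) estimate of pairwise mutual information used elsewhere in this work; the function names, the bin count, and the ranking helper are illustrative assumptions rather than our exact implementation:

```python
import numpy as np

def pairwise_mi(node_activations, target, bins=20):
    # Histogram-based pairwise mutual information (in nats) between one node
    # of a representation and a (discrete or discretized) target.
    joint, _, _ = np.histogram2d(node_activations, target, bins=bins)
    p_xy = joint / joint.sum()                 # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of the node
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of the target
    mask = p_xy > 0
    return (p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum()

def rank_nodes(representation, target, bins=20):
    # Rank all nodes of a representation (samples x nodes) by their
    # dependence on the task target, highest mutual information first.
    scores = [pairwise_mi(representation[:, j], target, bins)
              for j in range(representation.shape[1])]
    return np.argsort(scores)[::-1]
```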
Cutting down models to structures extracting most information: If we perform a training with usual loss functions without any additional optimization for mutual information in specific layers, we could also identify the structures within a trained model that extract most information and the subparts that contribute only marginally to, e.g., an embedding. While in this work the idea of stacking models has been discussed so far, zooming further into a model's architecture would help us analyze the model itself regarding its subunits (e.g., layers, embeddings, attention mechanisms, etc.). The approach is to apply mutual information as in the scenario of multidimensional input and output (see Section 4.6). The output can be filled with the data to predict or with the output of other subunits, like an embedding, upstream towards the output of the total model. If there is no contribution or only a minor one, we could cut the corresponding subunit out. By ranking subunits with such measures, we have a clear procedure for which subunits to exclude, instead of randomly selecting some for cutting out before retraining the model. The training could start from the current model parameters to just fine-tune the remaining layers. Training from scratch is also possible but could delete a lot of parameter values that are still valid. The procedure can be repeated iteratively, even to the point where model size is balanced against a (small) drop of accuracy. Our framework can additionally help to find a lossless pruning, similar to the Related Work section about the information bottleneck, by testing whether the stochastic measures between the input, or any representation of it, and the model deviations get worse after a cutoff. Such investigations might facilitate an understanding of the problem-specific relevant structures and keep model sizes efficient without lowering their accuracy. As an example, we could test how large, e.g., large language or multimodal models need to be and which structures extract the most information, similar to Liu et al. (2024). Further, we remark that with a measure of self-similarity, like the Pearson correlation, we could probably identify identical structures, or structures that behave similarly, e.g., in a sequential arrangement of layers that have the same output behavior, which should be ranked low since they only pass information through. Similar work is done in Gromov et al. (2024) to identify layers with a similar behavior that can be cut off. However, with mutual information, we can also investigate structures that do not behave similarly, namely how much single parts of a branched structure contribute to the part where these branches come together. If a training fails, we could also use such methods to test which structures fail. An example is a cascade of layers with one layer where input and output do not share any information anymore. This disruption of the routing of information could mean that this layer acts like a constant function, which may delete important information.

Stacking software pipeline: Apart from finding relevant properties of a model to vary, like size, depth, loss function, etc., another aspect is that we assume the stack to be constant once trained: parameters are kept constant while only the new model on top of the stack is trained.
It remains for future research to investigate whether, e.g., training all parameters of the whole stack after training the new model on top of the stack might benefit accuracy before testing whether another model for the top of the stack is needed. One important application of such a pipeline is to facilitate a precise time series prediction for a concrete single time series and to provide this capability to a broad audience, even outside the ML community, that applies predictions of time series, like weather forecasts. Another use case is improving therapies whose effect depends on time-varying patient-specific parameters. By taking, e.g., the daily rhythm of gene expression in humans into account, as argued in the concept of chronotherapy (Zhang et al., 2014), a precise prediction of expression levels may improve the effect of therapies and can be one building block for personalized medicine.

## 6 Conclusion And Future Work

In this work, a framework for measuring the predictability of an input-output relation was developed. Furthermore, it was shown how the information extraction of an ML model from this input-output relation can be measured. Based on this framework, a stacking architecture was presented, which is able to extract information systematically in case a single model fails to do so. Moreover, it was demonstrated how the corresponding stochastic measures for predictability can be used to extend the current definition of model convergence and training success. The total framework was showcased with time series prediction and classification on synthetic and real-world datasets. The presented framework provides measures to evaluate the existence of deterministic relations that a model can extract and how successful a model has been in extracting them. Promising further research might be the development of a loss function that fits the model directly to the deterministic relations, sorting out unpredictable parts like noise, in contrast to currently common loss functions, like $L^1$, $L^2$, cross entropy, or KL divergence, which fit the model's output distribution to the data distribution without differentiating whether a data point is influenced mainly by, e.g., noise or by the deterministic relations. According to the presented framework, the mutual information between input and the model deviations might serve as such a sought loss function. However, for the implementation there are some challenges left that require further research to overcome. One challenge is that the current implementation is not differentiable, since minor changes in the position of a residual can change the bin it belongs to among the bins generated during the calculation of the mutual information. A suitable smoothing might facilitate the application of a corresponding loss function within a numerically efficient algorithm that requires a smooth loss function. This challenge of non-smoothness is already described in (Oord et al., 2018, Section 2.1). There are smooth approximations of mutual information, like Belghazi et al. (2018) or Franzese et al. (2023); however, these approximations could become computationally too costly, since an ML model needs to be trained in every epoch to approximate the mutual information between input and model deviations. A further challenge could be that there might be instabilities in estimating the mutual information, as reported in Choi & Lee (2022).
Such an inaccuracy in the estimation of the mutual information for a loss function could cause divergence of the optimization procedure and would thus not improve the model's capability to extract more deterministic relations, differentiating them from unpredictable parts like noise, given the input data. Further research on a smooth and computationally cheap approximation of mutual information is promising in order to focus the ML training on the deterministic relations encoded in the data.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Philipp Becker, Harit Pandya, Gregor Gebhardt, Cheng Zhao, C. James Taylor, and Gerhard Neumann. Recurrent kalman networks: Factorized inference in high-dimensional deep feature spaces. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 544–552. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/becker19a.html.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In *International Conference on Machine Learning*, pp. 531–540. PMLR, 2018.

Alexandra Bezbochina, Elizaveta Stavinova, Anton Kovantsev, and Petr Chunaev. Enhancing predictability assessment: An overview and analysis of predictability measures for time series and network links. *Entropy*, 25(11):1542, 2023.

Philemon Brakel and Yoshua Bengio. Learning independent features with adversarial nets for non-linear ICA. *arXiv preprint arXiv:1710.05050*, 2017.

Tim Breitenbach. On the SQH method for solving optimal control problems with non-smooth state cost functionals or constraints. *Journal of Computational and Applied Mathematics*, 415:114515, 2022.

Tim Breitenbach, Lauritz Rasbach, Chunguang Liang, and Patrick Jahnke. A principal feature analysis. *Journal of Computational Science*, 58:101502, 2022.

Tim Breitenbach, Bartosz Wilkusz, Lauritz Rasbach, and Patrick Jahnke. On a method for detecting periods and repeating patterns in time series data with autocorrelation and function approximation. *Pattern Recognition*, 138:109355, 2023.

Rajkumar Buyya, Satish Narayana Srirama, Giuliano Casale, Rodrigo Calheiros, Yogesh Simmhan, Blesson Varghese, Erol Gelenbe, Bahman Javadi, Luis Miguel Vaquero, Marco A. S. Netto, Adel Nadjaran Toosi, Maria Alejandra Rodriguez, Ignacio M. Llorente, Sabrina De Capitani Di Vimercati, Pierangela Samarati, Dejan Milojicic, Carlos Varela, Rami Bahsoon, Marcos Dias De Assuncao, Omer Rana, Wanlei Zhou, Hai Jin, Wolfgang Gentzsch, Albert Y. Zomaya, and Haiying Shen. A manifesto for future generation cloud computing: Research directions for the next decade. *ACM Computing Surveys*, 51(5):105:1–105:38, November 2018. ISSN 0360-0300. doi: 10.1145/3241737. URL http://doi.acm.org/10.1145/3241737.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. *Advances in Neural Information Processing Systems*, 29, 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 1724–1734.
Association for Computational Linguistics, 2014.

Kwanghee Choi and Siyeong Lee. Combating the instability of mutual information-based losses via regularization. In *Uncertainty in Artificial Intelligence*, pp. 411–421. PMLR, 2022.

Hoang Anh Dau, Anthony Bagnall, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, and Eamonn Keogh. The UCR time series archive. *IEEE/CAA Journal of Automatica Sinica*, 6(6):1293–1305, 2019.

Giulio Franzese, Mustapha Bounoua, and Pietro Michiardi. Minde: Mutual information neural diffusion estimation. *arXiv preprint arXiv:2310.09031*, 2023.

Robert G. Gallager. *Stochastic Processes: Theory for Applications*. Cambridge University Press, 2013. doi: 10.1017/CBO9781139626514.

Hans-Otto Georgii. *Stochastik: Einführung in die Wahrscheinlichkeitstheorie und Statistik*. Walter de Gruyter GmbH & Co KG, 2015.

Albert Greenberg, James R Hamilton, Navendu Jain, Srikanth Kandula, Changhoon Kim, Parantap Lahiri, David A Maltz, Parveen Patel, and Sudipta Sengupta. VL2: a scalable and flexible data center network. In *Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication*, pp. 51–62, 2009.

Priscilla E Greenwood and Michael S Nikulin. *A guide to chi-squared testing*, volume 280. John Wiley & Sons, 1996.

Arthur Gretton, Ralf Herbrich, Alexander Smola, Olivier Bousquet, and Bernhard Schölkopf. Kernel methods for measuring independence. *Journal of Machine Learning Research*, 6:2075–2129, 2005.

Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts. The unreasonable ineffectiveness of the deeper layers. *arXiv preprint arXiv:2403.17887*, 2024.

Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In *International Conference on Learning Representations*, 2021.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=Bklr3j0cKX.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997.

Kenji Kawaguchi, Zhun Deng, Xu Ji, and Jiaoyang Huang. How does information bottleneck help deep learning? In *International Conference on Machine Learning*, pp. 16049–16096. PMLR, 2023.

Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In *International Conference on Learning Representations*, 2021.

Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In *International Conference on Learning Representations*, 2019.

Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. *Physical Review E*, 69(6):066138, 2004.

Zhe Li, Shiyi Qi, Yiduo Li, and Zenglin Xu. Revisiting long-term time series forecasting: An investigation on linear mapping. *arXiv preprint arXiv:2305.10721*, 2023.

Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. *Advances in Neural Information Processing Systems*, 35:9881–9893, 2022.
Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, et al. Mobilellm: Optimizing sub-billion parameter language models for on-device use cases. *arXiv preprint arXiv:2402.14905*, 2024.

Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 competition: 100,000 time series and 61 forecasting methods. *International Journal of Forecasting*, 36(1):54–74, 2020.

Mary L McHugh. The chi-square test of independence. *Biochemia Medica*, 23(2):143–149, 2013.

Kevin P Murphy. *Probabilistic Machine Learning: An Introduction*. MIT Press, 2022. Available at https://github.com/probml/pml-book/releases/latest/download/book1.pdf.

Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In *The Eleventh International Conference on Learning Representations*, 2022.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018.

Hanchuan Peng, Fuhui Long, and Chris Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 27(8):1226–1238, 2005.

Calyampudi Radhakrishna Rao. *Linear Statistical Inference and Its Applications*, volume 2. Wiley, New York, 1973.

Andrew M Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D Tracey, and David D Cox. On the information bottleneck theory of deep learning. *Journal of Statistical Mechanics: Theory and Experiment*, 2019(12):124020, 2019.

Vaisakh Shaj, Saleh Gholam Zadeh, Ozan Demir, Luiz Ricardo Douat, and Gerhard Neumann. Multi time scale world models. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023.

Michael Ulbrich. *Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces*. SIAM, 2011.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017.

Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In *International Conference on Machine Learning*, pp. 23965–23998. PMLR, 2022.

Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. *Advances in Neural Information Processing Systems*, 34:22419–22430, 2021.

Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 11121–11128, 2023.

Ray Zhang, Nicholas F Lahens, Heather I Ballance, Michael E Hughes, and John B Hogenesch. A circadian gene expression atlas in mammals: implications for biology and medicine. *Proceedings of the National Academy of Sciences*, 111(45):16219–16224, 2014.

Xiyuan Zhang, Xiaoyong Jin, Karthick Gopalswamy, Gaurav Gupta, Youngsuk Park, Xingjian Shi, Hao Wang, Danielle C.
Maddix, and Bernie Wang. First de-trend then attend: Rethinking attention for time-series forecasting. In *NeurIPS '22 Workshop on All Things Attention: Bridging Different Perspectives on Attention*, 2022. URL https://openreview.net/forum?id=GLc8Rhney0e.

Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In *The Eleventh International Conference on Learning Representations*, 2022.

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11106–11115, 2021.

## A Models' Settings For Numerical Experiments

This section of the appendix describes the architectures of the neural networks (NNs) used in the experiments. Throughout this appendix, we give the architecture of each MLP as a list with the number of nodes per layer; the number of layers equals the length of the list. Unless specified differently, all activation functions are ReLU and all initial learning rates are 1e-4.

Section 4.1: All NNs are MLPs with ReLU activation functions.
MLP layers: [49, 490, 980, 1]
Activation function: ReLU
Initial learning rate: 1e-4
0.506M parameters

Section 4.2: All NNs are MLPs with ReLU activation functions.
MLP layers: [49, 490, 700, 490, 1]
Activation function: ReLU
Initial learning rate: 1e-4
0.712M parameters

Figure 6 (time series classification):
MLP CrossEntropy (0.84M)
MLP layers: [80, 720, 720, 360, 2]

Figure 7 (time series classification):
NST CrossEntropy (0.69M)
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 128
Dropout: 0.1
MLP CrossEntropy (0.85M)
MLP layers: [96, 720, 720, 360, 7]

Table 4:
MLP L1-0.5 (0.025M) & MLP L2-0.5 (0.025M):
Layers of both models: [60, 80, 120, 80, 1]
Activation function: ReLU
Initial learning rate: 1e-4
MLP L1 (0.05M) & MLP L2 (0.05M):
Layers of both models: [120, 80, 180, 1]
NST L1 (0.05M) & NST L2 (0.05M):
Number of encoder layers: 1
Number of decoder layers: 1
Number of heads: 2
d_model: 40
Dropout: 0.1

Table 5: Here are the details of the pool of used models (MLPs and Transformers) in the one-step-ahead prediction experiment on the Weather dataset. For all non-stationary transformers (NSTs), the dropout is set to 0.1. The initial learning rate for all models at the first stage of the stack is 1e-4 and for the second and third stages it is 1e-5. The dropout for NSTs is 0.1 and for PatchTST 0.2; there is no dropout for the MLPs. The size of the hidden layer following the attention head (d_ff) in the transformers is set to the provided default values, i.e., 4*d_model for NSTs and 2*d_model for PatchTST. Please note that PatchTST uses the vanilla Transformer encoder as its core architecture (Nie et al., 2022) and therefore the number of decoder layers is zero.
PatchTST L2 (0.41M):
Number of encoder layers: 3
Number of decoder layers: 0
Number of heads: 16
d_model: 128
Dropout: 0.2

MLP L1 (0.59M):
MLP layers: [96, 720, 720, 1]

NST L1 (3.88M):
Number of encoder layers: 2
Number of decoder layers: 2
Number of heads: 8
d_model: 256
Dropout: 0.1

NST L1 (1.05M):
Number of encoder layers: 2
Number of decoder layers: 2
Number of heads: 8
d_model: 128
Dropout: 0.1

NST L1 (0.86M) on PatchTST L2 (0.41M):
NST architecture:
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 128
Dropout: 0.1
PatchTST architecture:
Number of encoder layers: 3
Number of decoder layers: 0
Number of heads: 16
d_model: 128
Dropout: 0.2

NST L1 (1.13M) on PatchTST L2 (0.41M):
NST architecture:
Number of encoder layers: 2
Number of decoder layers: 2
Number of heads: 8
d_model: 128
Dropout: 0.1
PatchTST architecture:
Number of encoder layers: 3
Number of decoder layers: 0
Number of heads: 16
d_model: 128
Dropout: 0.2

NST L2 (0.53M) on MLP L1 (0.59M):
MLP layers: [96, 720, 720, 1]
NST architecture:
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 6
d_model: 96
Dropout: 0.1

NST L1 (0.86M) on MLP L1 (0.59M):
MLP layers: [96, 720, 720, 1]
NST architecture:
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 128
Dropout: 0.1

NST L1 (2.05M) on MLP L1 (0.59M):
MLP layers: [96, 720, 720, 1]
NST architecture:
Number of encoder layers: 4
Number of decoder layers: 4
Number of heads: 8
d_model: 128
Dropout: 0.1

MLP L1 (0.59M) on NST L1 (1.13M) on PatchTST L2 (0.41M):
MLP layers: [96, 720, 720, 1]
NST architecture:
Number of encoder layers: 2
Number of decoder layers: 2
Number of heads: 8
d_model: 128
Dropout: 0.1
PatchTST architecture:
Number of encoder layers: 3
Number of decoder layers: 0
Number of heads: 16
d_model: 128
Dropout: 0.2

Table 6: In this experiment, two simple SVM models are used:
SVM with rbf kernel
SVM with sigmoid kernel

Table 7: Here are the details of the pool of used models (MLPs and Transformers) in the multistep-ahead prediction experiment on the NASDAQ-DE1 dataset.
MLP L1 (2.09M):
MLP layers: [60, 360, 3440, 240, 30]
MLP L2 (3.77M):
MLP layers: [60, 720, 3440, 360, 30]
MLP L2 (4.61M):
MLP layers: [60, 900, 3440, 420, 30]
NST L1 & NST L2 (2.67M):
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 256
Dropout: 0.1
MLP L2 (0.67M) on MLP L1 (2.09M):
MLP L2 layers: [60, 360, 1080, 240, 30]
MLP L1 layers: [60, 360, 3440, 240, 30]
MLP L2 (0.64M) on NST L1 (2.67M):
MLP L2 layers: [60, 360, 1020, 240, 30]
NST L1 architecture:
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 256
Dropout: 0.1
MLP L2 (0.09M) on MLP L2 (0.64M) on NST L2 (2.67M):
NST L2 architecture:
Number of encoder layers: 2
Number of decoder layers: 1
Number of heads: 8
d_model: 256
Dropout: 0.1
MLP L2 (0.64M) layers: [60, 360, 1020, 240, 30]
MLP L2 (0.09M) layers: [60, 720, 60, 30]

## A.1 Data Description

Here is a description of the datasets used in our experiments:

(1) *ETT* (Zhou et al., 2021) contains seven features, including the oil temperature and six power load features. ETTh indicates the ETT data with a granularity of 1 hour and ETTm the ETT data with a granularity of 15 minutes.

(2) *Weather*¹ is recorded every 10 minutes for the whole year of 2020 and contains 21 meteorological indicators such as humidity and air temperature.
(3) *Nasdaq* consists of 82 variables, including important indices of markets around the world, the prices of major companies in the U.S. market, treasury bill rates, etc. It is measured daily, with a total of 1984 data samples for each variable. We set the corresponding input length to 60, similar to Kim et al. (2021). In this work, we conduct our experiments on the DE1 variable.

(4) *ElectricDevices* is a subset of the UCR time series classification dataset. It consists of seven different classes, each representing a specific type of electric device, which is to be predicted from the time series.

(5) *DistalPhalanxOutlineCorrect* is a subset of the UCR time series classification dataset. This dataset focuses on the classification of outlines of distal phalanx bones from time series.

¹https://www.bgc-jena.mpg.de/wetter/

## B Further Tables And Figures

![38_image_0.png](38_image_0.png)

Figure 10: Plots of model outputs (prediction), residuals and the data (ground truth) of the experiments of Section 4.1.

![39_image_0.png](39_image_0.png)

Figure 11: Plots of model outputs (prediction), residuals and the data (ground truth) of the experiments of Section 4.2.

![39_image_1.png](39_image_1.png)

Figure 12: Plots of model outputs (prediction), residuals and the data (ground truth) of the ETTh2 dataset.

| Relative noise std (theoretical bound) | Normalized Test RMSE (experimental error) | Initial R (Pearson) | Residual R (Pearson) |
|------------------------------------------|---------------------------------------------|-----------------------|------------------------|
| 0 | 0.000023 | 30.764 | 0 |
| 0.2715 | 0.2737 | 28.538 | 0 |
| 0.4919 | 0.5043 | 23.129 | 0 |
| 0.6465 | 0.6687 | 17.494 | 0.9446 |
| 0.7499 | 0.7683 | 13.197 | 0 |
| 0.8161 | 0.8287 | 10.455 | 0 |
| 0.8614 | 0.8718 | 7.868 | 0 |
| 0.8923 | 0.901 | 5.561 | 1 |
| 0.9149 | 0.9206 | 4.664 | 0 |
| 0.9308 | 0.9427 | 3.723 | 0 |
| 0.9429 | 0.9373 | 2.882 | 0 |

| - | Initial Pearsonr | Residual Pearsonr + L2 | Residual Pearsonr + L1 |
|---------|--------------------|--------------------------|--------------------------|
| Trial 0 | 15.33 | 0 | 0 |
| Trial 1 | 16.925 | 0 | 0 |
| Trial 2 | 15.25 | 0 | 0 |
| Trial 3 | 17.104 | 0 | 0 |
| Trial 4 | 15.449 | 0 | 0 |

Table 8: Pearson correlation results for Section 4.1. The test consists of the sum of the absolute values of the Pearson correlation between each input and output feature. However, the correlation measure is only considered if the p-value is smaller than 0.01; otherwise, the corresponding input and output pair is considered uncorrelated. In the first column, there is the relative noise, which is the square root of the variance of the noise divided by the total signal. The second column provides the normalized RMSE as defined in Section 4. The third column is the sum of the absolute values of the Pearson correlation between the input and the single-step forecast as output, and the fourth column is analogous to the third column, where the single-step forecast is replaced by the corresponding residual (model prediction minus data). Since the residuals are less correlated, or even decorrelated, with the input, the model extracted the deterministic relations measured by the Pearson correlation test.

Table 9: The effect of the choice of the loss function in mitigating the effect of asymmetric noise in terms of distracting a model from extracting/learning the deterministic relations. The experiments are conducted in Section 4.2. The Pearson test is described in Table 8.
While the chi-square test and the mutual information test depicted in Table 2 reflect the better fit of the model trained with the $L^1$ loss function to the real data, see Table 3, the Pearson test does not, which can be traced back to its limitation of testing only linear correlations, whereas chi-square and mutual information are generalizations in terms of measuring stochastic independence.

## C Transformer Architecture Explanation

In this section, we explain the transformer architecture in more detail. Specifically, we explain the building blocks as depicted in (Vaswani et al., 2017, Figure 1).

## C.1 Token Embedding

In the transformer model, each token of the input sequence is first represented as a dense vector called a token embedding. This embedding captures the semantic meaning of the token in the context of the task being performed. Each token is encoded by a vector $z \in \mathbb{R}^{d_z}$ one-hot encoding the corresponding token, where $d_z \in \mathbb{N}$ is the number of different tokens generated from the vocabulary. The token embedding is obtained by applying a linear transformation $A \in \mathbb{R}^{C \times d_z}$ to the one-hot encoded representation of tokens $z$, where $C \in \mathbb{N}$ is the number of dimensions of the transformer's internal representation of the embeddings ($C$ is sometimes also denoted embedding_size). The entries of $A$ are learnable weights optimized during the training process. The mapping $A : \mathbb{R}^{d_z} \to \mathbb{R}^{C}$, $z \mapsto Az$ can be implemented as:

token_embedding = nn.Linear(config.vocab_size, config.n_embed)

where config.vocab_size $= d_z$ is the size of the vocabulary (number of unique tokens), and config.n_embed is the desired embedding dimension $C$. Consequently, this mapping turns the input shape $(B, T, d_z)$ into the output shape of the token embedding $(B, T, C)$, where $B$ represents the batch size and $T$ represents the sequence length of the input (number of tokens). The transformation $A$ is applied to each token and each element of a batch by $z(b,:,:) \mapsto z(b,:,:)A$ for each $b \in \{1, \ldots, B\}$, where $z \in \mathbb{R}^{B \times T \times d_z}$ and $z(b,:,:) := (z_{bik})_{i \in \{1,\ldots,T\},\, k \in \{1,\ldots,d_z\}}$ provides the one-hot encoded representation for each token and each element of the batch. By using batches, several inputs can be considered at once.

For the specific case of time series modeling, the token embedding is replaced by the following. A convolutional neural network (CNN) is used for generating the embedding. The number of input channels of the CNN equals 1 in an autoregressive scenario (only historic parts of the time series itself are used to predict future parts of the time series) but can be set to any feature number $F \in \mathbb{N}$, which is measured at each time point, in case, e.g., an output is predicted from several input time series. Each time window for each channel, which cuts out each part of the $F$ time series of length $T$ and which is used as the input for the prediction, is represented by a vector of size $T$. These vectors from the sliding window are transformed by a one-dimensional CNN into the space $\mathbb{R}^C$. The padding is set such that the input length $T$ equals the output length of the CNN. We remark that the shift by one time point is not necessary and can be increased such that the token number in the embedding is smaller than the number of time points used as the input for the transformer. In our case, each filter (convolution) of the CNN is applied to each output, controlled by the parameter groups=1 and a common choice of kernel size 3. To achieve this transformation, we employ a one-dimensional convolutional layer in our implementation.
Specifically, a 1D convolutional layer with an input channel size of $F$ and an output channel size of $C$ (the embedding dimension of the transformer) can be utilized:

nn.Conv1d(in_channels=F, out_channels=C)

This convolutional layer applies a set of learnable filters across the temporal dimension $T$ of the input data, extracting relevant patterns and features. It is important to note that the kernel size, padding, and stride parameters of the convolutional layer can be adjusted to ensure that the output length matches the input length $T$. For more details, see the PyTorch documentation, e.g., https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html. In general, the transformation (token embedding by a CNN) is applied as

$$\mathrm{CNN} : \mathbb{R}^{T \times F} \to \mathbb{R}^{T \times C}, \quad z(b,:,:) \mapsto \mathrm{CNN}(z(b,:,:))$$

to each element $z(b,:,:) \in \mathbb{R}^{T \times F}$, $z(b,:,:) := (z_{bik})_{i \in \{1,\ldots,T\},\, k \in \{1,\ldots,F\}}$, of a batch $z \in \mathbb{R}^{B \times T \times F}$, numerated by $b \in \{1, \ldots, B\}$ with the batch size $B \in \mathbb{N}$, (in parallel) generating a tensor of dimension $(B, T, C)$. For more details about the general framework of token embedding, see, e.g., Zhou et al. (2021).

## C.2 Positional Embedding

The purpose of a positional embedding is to include information about the position of a token relative to the other tokens of the input into the total embedding of each token. The positional embedding is a vector of dimension $C$ and is added to the token embeddings to provide information about the relative positions of tokens. There are several methods available to encode positional information, see, e.g., Vaswani et al. (2017).

## C.3 Attention Head Operation

The attention head is the essential building block of a transformer. Each attention head forms one layer, where the output of one layer is processed by a subsequent layer until the output of the final layer is processed by a linear layer to obtain a corresponding output, see, e.g., (Vaswani et al., 2017, Figure 1). Let us consider a single attention head operation in a transformer model. In this operation, we have the input tensor $x \in \mathbb{R}^{B \times T \times C}$. The tensor $x$ can be the one after the token embedding (including positional encoding) or the output of a previous layer. We remark that each layer in this presentation preserves the format $(B, T, C)$. The input tensor $x$ is next transformed into different representations via linear transformation matrices. These matrices contain the learnable weights and are given by $W^K \in \mathbb{R}^{C \times C}$, $W^Q \in \mathbb{R}^{C \times C}$ and $W^V \in \mathbb{R}^{C \times C}$. We define tensors for key ($K$), query ($Q$), and value ($V$) by the linear mappings

$$K(b,\cdot,\cdot) := x(b,\cdot,\cdot)W^{K} \in \mathbb{R}^{T \times C}, \quad Q(b,\cdot,\cdot) := x(b,\cdot,\cdot)W^{Q} \in \mathbb{R}^{T \times C}, \quad V(b,\cdot,\cdot) := x(b,\cdot,\cdot)W^{V} \in \mathbb{R}^{T \times C}$$

for each $b \in \{1, \ldots, B\}$, where $x(b,\cdot,\cdot) \in \mathbb{R}^{T \times C}$ such that $x(b,i,k) = x_{bik}$ for all $b \in \{1,\ldots,B\}$, $i \in \{1,\ldots,T\}$ and $k \in \{1,\ldots,C\}$. Each of the matrices $W^K$, $W^Q$ and $W^V$ is an instance of a linear layer and can be implemented with PyTorch as follows:

nn.Linear(C, C, bias=False)

We remark that $C$ may be called embedding_size. In order to quantify the attention of token $i \in \{1,\ldots,T\}$, represented by its query ($Q$), with regard to token $j \in \{1,\ldots,T\}$ of the input, represented by its key ($K$), the dot product between the query ($Q$) and key ($K$) tensors is calculated for each $b \in \{1,\ldots,B\}$ over the vector embedding for each token pair $i, j \in \{1,\ldots,T\}$.
This operation can be represented as

$$\Theta:\{1,\ldots,B\}\times\{1,\ldots,T\}\times\{1,\ldots,T\}\rightarrow\mathbb{R},\quad(b,i,j)\mapsto\Theta(b,i,j):=\frac{1}{\sqrt{C}}\sum_{k=1}^{C}Q_{b,i,k}\cdot K_{b,j,k}\tag{4}$$

where $b$ represents the batch index, $i$ and $j$ represent the positions in the sequence (input), and $k$ represents the embedding dimension. The sum is scaled by $C^{-0.5}$. One reason for dividing the sum by the square root of the embedding dimension is given in the section about the Softmax function below, see Section C.3.1. We implement the mapping $\Theta$ using the key ($K$) and query ($Q$) tensors and the Einstein summation convention as follows:

$$\Theta = \frac{1}{\sqrt{C}}\,\texttt{torch.einsum('bik,bjk->bij', Q, K)}$$

## C.3.1 Softmax

After calculating the similarities between keys and queries, the purpose of the Softmax function is to normalize the similarity scores. A high similarity between the query of token $i$ and the key of token $j$ is a proxy for a high association or attention that token $i$ has to token $j$, meaning the connection is important for predicting the corresponding output. Due to the monotonicity of the Softmax function, a bigger similarity score between the corresponding key and query results in a bigger value, called the attention between the corresponding tokens, compared to smaller ones. The Softmax function in our case is defined by

$$\mathrm{Softmax}:\{1,\ldots,B\}\times\{1,\ldots,T\}\times\{1,\ldots,T\}\to\mathbb{R},\quad(b,i,j)\mapsto\mathrm{Softmax}(b,i,j):=\frac{\mathrm{e}^{\Theta(b,i,j)}}{\sum_{l=1}^{T}\mathrm{e}^{\Theta(b,i,l)}}$$

Here, the Softmax function is applied along the last dimension, ensuring that the attention weights sum up to 1 along this dimension. This normalization means that fixing a batch number $b$ and a token number $i$ of the input provides a normalized attention score over all the other token numbers $j \in \{1,\ldots,T\}$. The implementation applies the corresponding Softmax function along dim=-1 to the tensor $\Theta_{b,i,j} := \Theta(b,i,j)$ for all $b \in \{1,\ldots,B\}$ and $i, j \in \{1,\ldots,T\}$.

Next, we explain the normalization by $C^{-0.5}$ in (4). For large numbers, the Softmax function is approximately a constant function and changes in the weights of the transformer model do not result in a significant change of the attention. Depending on the embedding dimension $C$, the corresponding sum scales. For an illustration, see, e.g., (Vaswani et al., 2017, footnote 4). The scaling of the sum by $C^{-0.5}$ ensures staying in a range where the Softmax has a larger steepness, so that changes in the weights of the transformer result in significant changes of the attention values. Similarly, see (Vaswani et al., 2017, Section 3.2.1).

## C.3.2 Weighted Aggregation

We apply the attention scores to the value tensor ($V$) to obtain a weighted sum. The attention tensor is defined by $\mathcal{A}_{b,i,j} := \mathrm{Softmax}(b,i,j)$ for all $b \in \{1,\ldots,B\}$ and $i, j \in \{1,\ldots,T\}$ such that $\mathcal{A} \in \mathbb{R}^{B \times T \times T}$. The weighted sum of attention scores is given by

$$\mathcal{A}(b,\cdot,\cdot)V(b,\cdot,\cdot)\in\mathbb{R}^{T\times C}$$

for each $b \in \{1,\ldots,B\}$ and turns the output of an attention head into the tensor format $(B, T, C)$. The output tensor of the attention head $H \in \mathbb{R}^{B \times T \times C}$ is defined by

$$H_{b,i,k}:=\sum_{l=1}^{T}\mathcal{A}_{b,i,l}V_{b,l,k}$$

for all $b \in \{1,\ldots,B\}$, $i \in \{1,\ldots,T\}$ and $k \in \{1,\ldots,C\}$.
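The single attention head described in Sections C.3 to C.3.2 can be summarized in the following minimal PyTorch sketch; the class name and example dimensions are illustrative, and the snippet is a sketch of the operations above rather than our exact implementation.

```python
import math
import torch
import torch.nn as nn

class SingleAttentionHead(nn.Module):
    """Minimal sketch of the single attention head of Section C.3."""
    def __init__(self, C):
        super().__init__()
        self.W_K = nn.Linear(C, C, bias=False)
        self.W_Q = nn.Linear(C, C, bias=False)
        self.W_V = nn.Linear(C, C, bias=False)
        self.C = C

    def forward(self, x):                       # x: (B, T, C)
        K, Q, V = self.W_K(x), self.W_Q(x), self.W_V(x)
        # Equation (4): scaled dot product between queries and keys.
        theta = torch.einsum('bik,bjk->bij', Q, K) / math.sqrt(self.C)
        A = torch.softmax(theta, dim=-1)         # attention weights, (B, T, T)
        # Weighted aggregation of the values (Section C.3.2).
        H = torch.einsum('bil,blk->bik', A, V)   # output, (B, T, C)
        return H

# Usage with illustrative dimensions:
# B, T, C = 4, 16, 32
# head = SingleAttentionHead(C); H = head(torch.randn(B, T, C))
```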
## C.3.3 Multi-Head Attention

Optionally, in order to calculate attention on different subspaces of keys and queries for the same input in each layer, there is multi-head attention, which takes only projected parts of keys and queries. In the multi-head attention formalism, the output of each head is concatenated along the last dimension, which is the embedding dimension:

$$\mathrm{Concat}\big((B, T, C/n\_heads), \ldots, (B, T, C/n\_heads)\big) = (B, T, C)$$

where $n\_heads$ is the number of attention heads. Specifically, the output of each head is calculated with the following weight matrices $W^K_h \in \mathbb{R}^{C \times C/n\_heads}$, $W^Q_h \in \mathbb{R}^{C \times C/n\_heads}$, $W^V_h \in \mathbb{R}^{C \times C/n\_heads}$ and $W^O \in \mathbb{R}^{C \times C}$, where $h \in \{1, \ldots, n\_heads\}$. Furthermore, $n\_heads$ and $C$ are chosen such that the quotient $C/n\_heads$ is an integer. The matrices calculate the corresponding projections of keys, queries and values as follows:

$$K^h(b,\cdot,\cdot) := K(b,\cdot,\cdot)W^K_h \in \mathbb{R}^{T \times C/n\_heads}, \quad Q^h(b,\cdot,\cdot) := Q(b,\cdot,\cdot)W^Q_h \in \mathbb{R}^{T \times C/n\_heads}, \quad V^h(b,\cdot,\cdot) := V(b,\cdot,\cdot)W^V_h \in \mathbb{R}^{T \times C/n\_heads}$$

for all $b \in \{1, \ldots, B\}$. The weight matrices $W^K_h$, $W^Q_h$ and $W^V_h$ are each implemented with a linear layer according to

nn.Linear(C, head_size, bias=False)

where head_size is calculated as

$$head\_size = \left\lfloor \frac{embedding\_size}{n\_heads} \right\rfloor$$

and where for $W^O$ the number head_size is replaced by $C$. Applying the procedure of the single-head attention for each $h \in \{1, \ldots, n\_heads\}$, replacing each $K$ by the corresponding $K^h$, each $Q$ by the corresponding $Q^h$ and each $V$ by the corresponding $V^h$, provides the tensor of each attention head $H^h \in \mathbb{R}^{B \times T \times C/n\_heads}$, where in (4) the sum index runs only from 1 to $C/n\_heads$. After processing all attention heads, the outputs are concatenated along the last dimension, resulting in a tensor of shape $(B, T, C)$. The output of the multi-head attention is given by

$$H(b,\cdot,\cdot) := \mathrm{Concat}\big(H^1(b,\cdot,\cdot), \ldots, H^{n\_heads}(b,\cdot,\cdot)\big)W^O \in \mathbb{R}^{T \times C}$$

for each $b \in \{1, \ldots, B\}$.

## C.4 Adding And Layer Normalization

After the attention head operation, residual connections and layer normalization are applied. The adding of the input and the output of a layer (residual learning, He et al. (2016)) is crucial for maintaining an information flow and eases the training of deep networks. The residual connection involves adding the input tensor $x$ to the output $H$ of the multi-head attention operation according to $x + H$, where $+$ denotes an element-wise addition. The addition helps to pass the original information from the input through all the layers in a sequential layer architecture while also incorporating the information learned by the attention mechanism. A prerequisite is that the attention head preserves the input format. Following the residual learning operation, layer normalization is applied to stabilize the learning process (Ba et al., 2016) according to

$$\mathrm{LayerNorm}(z) := \frac{z - E(z)}{\sqrt{Var(z) + \epsilon}}$$

where $z$ is the output of the previous layer, $E(z)$ is the mean over all the values of the output of the previous layer (mean over the elements of $z$) and $Var(z)$ is the corresponding variance. Since the square root is not differentiable at 0, a small constant $\epsilon > 0$ keeps the numerical implementation stable in case of a small variance of $z$. Moreover, the constant $\epsilon$ avoids division-by-zero errors. More details about the implementation can be found under https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html.
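As a minimal PyTorch sketch of the residual connection followed by layer normalization, with nn.Identity standing in for an arbitrary attention sublayer and the dimensions chosen for illustration only:

```python
import torch
import torch.nn as nn

B, T, C = 4, 16, 32
x = torch.randn(B, T, C)
attention = nn.Identity()   # placeholder for a (multi-head) attention sublayer
ln = nn.LayerNorm(C)        # normalizes over the embedding dimension C

H = attention(x)            # sublayer output, same shape (B, T, C) as x
out = ln(x + H)             # element-wise residual addition, then normalization per token
```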
Due to its construction, the normalization is done for each element of the batch. We remark that in our implementation the normalization is applied over the embedding dimension ($C$) separately for each token of the input ($T$ dimension), meaning that $z \in \mathbb{R}^C$. This is reasonable since the feed-forward networks (explained in Subsection C.5) are applied over d_model ($C$) for each element of the batch ($B$) and each position of the input length ($T$).

## C.5 Feed Forward Network (FFN)

After the normalization step of the attention head's residual learning, each token's representation is passed through a feed-forward neural network (FFN). Such an FFN consists of two linear transformations separated by a non-linear activation function $g : \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$, $m, n \in \mathbb{N}$, $z \mapsto g(z)$, such as ReLU (Rectified Linear Unit; $g = \max(0, \cdot)$ applied element-wise), according to

$$\mathrm{FFN}\left(x(b,t,\cdot)\right) = g\left(x(b,t,\cdot)W_1 + b_1\right)W_2 + b_2 \in \mathbb{R}^{C}$$

for each $b \in \{1,\ldots,B\}$ and $t \in \{1,\ldots,T\}$, where $x \in \mathbb{R}^{B \times T \times C}$ is the output from the previous operation in the architecture, $W_1 \in \mathbb{R}^{C \times d_{ff}}$, $d_{ff} \in \mathbb{N}$, in our implementation $d_{ff} = 4C$, and $W_2 \in \mathbb{R}^{d_{ff} \times C}$. Furthermore, $b_1 \in \mathbb{R}^{d_{ff}}$, which is the same bias for each $t$, in contrast to a bias in $\mathbb{R}^{T \times d_{ff}}$ where for each $t$ there would be another bias, and $b_2 \in \mathbb{R}^C$ are learnable parameters. These operations are applied by PyTorch's linear layer, https://pytorch.org/docs/stable/generated/torch.nn.Linear.html. The activation function $g$ introduces non-linearity to the model, enabling it to learn non-linear patterns from the data. In the transformer architecture, an FFN is also followed by residual learning and layer normalization as described above in Subsection C.4.

## C.6 Masking

Looking at (Vaswani et al., 2017, Figure 1), masking is one of the essential building blocks located within the decoder (explained in Subsection C.7). The masking of values of the attention weights $\Theta$ has the purpose of forcing the attention between certain tokens to zero, i.e., not allowing them to interact or to extract information from the interaction. Due to the iterative application of the transformer for text generation, we would like to force the attention mechanism so that a token only considers tokens backwards in time (that come earlier in a sentence). This backward orientation helps to generate representations of tokens that collect information from tokens that are already there and prevents generating representations that make use of tokens that come after that token in a sentence. By the procedure of masking, the representations of tokens are more unified, independent of the input length. As an example: "I am hungry and thus I go to a restaurant." Although "hungry" probably should get a lot of attention with "restaurant" without masking, in an iterative application of the transformer, the word "hungry" in "I am hungry and thus I go" would not be useful, since most of its attention was on "restaurant", which is not there yet. However, with masking we force the transformer to find embeddings and representations such that the word "hungry" gets a useful representation to predict the next token during learning, independent of whether the input is "I am hungry and thus I go to a restaurant." or "I am hungry and thus I go". A different interpretation of the masking can be causality in a use case where the sequence of events is of importance, forcing attention only to historic events. We implement the masking, affecting only backwards interaction, by a lower triangular mask $M \in \mathbb{R}^{T \times T}$; a minimal sketch of this procedure follows below.
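The sketch assumes the attention weights `theta` of shape $(B, T, T)$ from Equation (4); the variable names and example dimensions are illustrative.

```python
import torch

B, T = 2, 5
theta = torch.randn(B, T, T)                   # attention weights from Equation (4)
M = torch.tril(torch.ones(T, T))               # lower triangular mask: 1 on/below diagonal
theta_masked = theta.masked_fill(M == 0, float('-inf'))
A = torch.softmax(theta_masked, dim=-1)        # attention is exactly zero above the diagonal
```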
This matrix is applied to the attention weights $\Theta$, generating masked attention weights. The tokens of the input are counted in the second dimension of the tensor $\Theta$, where the third dimension accounts for the dimension of the current vector representation ($T$ or $C$, depending on the current representation). Consequently, the lower triangular matrix (1 on the diagonal and below, 0 above) allows token 1 to have a non-zero attention only with token 1, token 2 can interact with token 2 and token 1, and so on until token $T$. Subsequently, for each $b \in \{1, ..., B\}$, the entries of the attention weight tensor $\Theta(b, :, :)$ are replaced by $-\infty$ where $M$ equals zero. The attention tensor $\Theta$ is thus transformed into the masked attention weight tensor $\Theta^M$. The $-\infty$ forces the corresponding Softmax calculation to zero in the corresponding positions, meaning that the corresponding token does not pay attention to the corresponding other tokens.

## C.7 Encoder And Decoder

A transformer architecture typically has two main components: the encoder and the decoder. Next, we explain both components according to (Vaswani et al., 2017, Figure 1). Usually, each layer in the encoder consists of an attention head followed by residual learning and normalization, whose output feeds into two linear transformations separated by a non-linear activation function (the FFN of Subsection C.5), also followed by residual learning and normalization. For the decoder, a layer consists of a masked attention unit (self-attention), followed by residual learning and layer normalization, followed by an attention unit where keys and values are calculated from the corresponding encoder layer (cross-attention). The rest of the layer is as in an encoder layer. The repetition of encoder and decoder layers is the main building block of the transformer. The encoder processes the input sequence — the text input or, for time series prediction, the historic time series (the time series itself or other time series of features) — generating a vector representation that captures the contextual information of each token, which applies to the time series case as well, as discussed in Subsection C.1. The decoder, on the other hand, takes the encoded representations and generates an output sequence. It also consists of multiple layers, each containing self-attention mechanisms and cross-attention mechanisms. The self-attention mechanisms help the decoder focus on different parts of its input sequence, while the cross-attention mechanisms allow it to incorporate information from the encoder's output. For the case of iteratively generating the next token in text generation, the prediction target is typically the next token in a sequence. Since text generation requires several predicted tokens, the input of the decoder grows by the predicted token after each iteration. For inference, the next token is predicted. Also during training, the model is trained to predict the next token given the previous tokens in the input sequence. The number of input tokens for the decoder is given by $L \in \mathbb{N}$. The input to the encoder can be passed to the input of the decoder. If tokens are generated iteratively, the number $L$ is supposed to be bigger than $T$ in such cases, where $T$ is the input length of the encoder.
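The wiring of one encoder layer and one decoder layer can be sketched with PyTorch's built-in modules as follows; this is a minimal illustration with placeholder sizes, not the exact architecture used in this work.

```python
import torch
import torch.nn as nn

B, T, L, C, n_heads = 2, 16, 8, 32, 4

# One encoder layer and one decoder layer as in (Vaswani et al., 2017, Figure 1);
# in a full model both would be stacked several times.
enc_layer = nn.TransformerEncoderLayer(d_model=C, nhead=n_heads,
                                       dim_feedforward=4 * C, batch_first=True)
dec_layer = nn.TransformerDecoderLayer(d_model=C, nhead=n_heads,
                                       dim_feedforward=4 * C, batch_first=True)

src = torch.randn(B, T, C)  # embedded encoder input
tgt = torch.randn(B, L, C)  # embedded decoder input

memory = enc_layer(src)  # encoder output ("memory" for the cross-attention)

# Causal mask (Subsection C.6): decoder position i attends only to positions <= i.
causal_mask = nn.Transformer.generate_square_subsequent_mask(L)

out = dec_layer(tgt, memory, tgt_mask=causal_mask)  # shape (B, L, C)
```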
If the iterative output of the decoder becomes longer than a maximum size $\tilde{L} \in \mathbb{N}$ for the decoder's input — such a maximum can exist due to hardware limitations on calculating attention between all pairs of tokens for such an input length — then only the latest $\tilde{L}$ tokens are used as input for the decoder. If the sequence is shorter, the corresponding positions are masked out as explained in Subsection C.6. For translation, the language of the encoder's input can be different from the language of the decoder's output/input. In such a case, the encoder's input may not be passed to the decoder's input, and the input of the decoder is iteratively generated by several applications of the transformer. In any case, the output of the decoder $x \in \mathbb{R}^{B \times L \times C}$ is transformed by a linear map $\bar{W} \in \mathbb{R}^{C \times d_z}$ with bias $\bar{b} \in \mathbb{R}^{d_z}$ such that the last dimension fits the number of available tokens from a dictionary, according to

$$\bar{x}(b, i, :) := x(b, i, :)\bar{W} + \bar{b} \in \mathbb{R}^{d_z}$$

for each $b \in \{1, ..., B\}$ and $i \in \{1, ..., L\}$. Then, the Softmax function is applied to the last slice $L$ of the tensor $\bar{x}$ according to

$$\mathrm{Softmax} : \{1, ..., B\} \times \{1, ..., d_z\} \to \mathbb{R}, \quad (b, s) \mapsto \mathrm{Softmax}(b, s) := \frac{e^{\bar{x}_{b,L,s}}}{\sum_{l=1}^{d_z} e^{\bar{x}_{b,L,l}}}$$

to obtain a probability distribution over all possible tokens, from which the most likely token is chosen as the following token for each $b \in \{1, ..., B\}$. To include some variety in choosing the next token, we can slightly perturb this distribution (e.g., by introducing a temperature parameter) such that tokens that are close to the most likely token under the unperturbed distribution can also become the chosen one. For time series forecasting tasks, the prediction target may vary depending on the application. It could be the next value in the time series sequence, multiple future values, or even a binary classification indicating whether certain conditions will be met in the future. The basic concept is that a linear transformation $\tilde{W} \in \mathbb{R}^{C \times E}$, $E \in \mathbb{N}$, with a bias $\tilde{b} \in \mathbb{R}^E$ transforms the output of the decoder into the output format that corresponds to the prediction target, such as the number of features or the number of classes, the latter then being turned into a probability over classes by a corresponding Softmax function. In this work, we focus on time series prediction. As discussed in Zhou et al. (2021), it is advantageous to generate a multistep prediction (which includes a singlestep prediction) not by an iterative application of the transformer as explained above, but to provide the prediction at once, meaning to provide the prediction of length $L \in \mathbb{N}$ with a single application of the transformer. As a consequence, the training is done with a direct multistep loss. The rationale behind generating the prediction at once is to avoid error accumulation within the multistep-ahead time series prediction task. Considering (Liu et al., 2022, Algorithm 4), we define the input for the decoder by $x \in \mathbb{R}^{B \times (\frac{T}{2}+L) \times F}$, where $L \in \mathbb{N}$ is the number of steps within the multistep prediction (the prediction length) and $F \in \mathbb{N}$ is the number of features, analogously to the token embedding for time series prediction tasks described in Subsection C.1. While for the encoder the initialization is the input sequence, the initialization values for the decoder are as follows.
The first $\frac{T}{2}$ slices of the decoder input $x$ are filled with the last $\frac{T}{2}$ slices of the input of the encoder $\tilde{x} \in \mathbb{R}^{B \times T \times F}$ according to

$$x_{b,i,f} = \tilde{x}_{b,\frac{T}{2}+i,f} \quad \text{for all } b \in \{1, ..., B\},\ i \in \{1, ..., \tfrac{T}{2}\},\ f \in \{1, ..., F\}.$$

The last slices of the decoder are initialized with zeros according to

$$x_{b,i,f} = 0 \quad \text{for all } b \in \{1, ..., B\},\ i \in \{\tfrac{T}{2}+1, ..., \tfrac{T}{2}+L\},\ f \in \{1, ..., F\}.$$

This representation is embedded, see Subsection C.1, and processed as shown in (Vaswani et al., 2017, Figure 1) by a number of layers within the transformer. The output of the decoder, again denoted by $x \in \mathbb{R}^{B \times (\frac{T}{2}+L) \times C}$, is transformed by a linear mapping according to

$$P(b, i, :) := x(b, i, :)\tilde{W} + \tilde{b} \in \mathbb{R}^E$$

for each $b \in \{1, ..., B\}$ and $i \in \{1, ..., \frac{T}{2}+L\}$, where $\tilde{W} \in \mathbb{R}^{C \times E}$, $\tilde{b} \in \mathbb{R}^E$ and $P \in \mathbb{R}^{B \times (\frac{T}{2}+L) \times E}$. In our application, where we predict the time series from its history, $F = E = 1$. The output after the linear transformation represents the $L$-step prediction and is given by

$$P(b, i, e) \quad \text{for all } i \in \{\tfrac{T}{2}+1, ..., \tfrac{T}{2}+L\}$$

for each element of the batch $b \in \{1, ..., B\}$ and dimension $e \in \{1, ..., E\}$. Based on the output, loss functions are calculated with respect to the corresponding ground truth.

## D Architecture For Multilayer Perceptrons For Time Series Prediction

In this section, we describe the multilayer perceptron (MLP) architecture that we use for the time series prediction in this work. There is evidence that MLPs are also very powerful models for predicting time series (Zeng et al. (2023)). Iteratively, an input tensor $x \in \mathbb{R}^{B \times F \times T}$, with $B \in \mathbb{N}$ as the batch size, $T \in \mathbb{N}$ as the length of the historic input of the time series for the prediction and $F \in \mathbb{N}$ as the number of features, is transformed into the output tensor $y \in \mathbb{R}^{B \times F \times L}$, where $L \in \mathbb{N}$ is the length of the multistep prediction (which includes singlestep prediction, where $L = 1$). In between, there can be several hidden layers. All layers have the following structure, taking an input tensor $z_{d-1} \in \mathbb{R}^{B \times F \times N_d}$ with a certain number of nodes ("neurons") $N_d \in \mathbb{N}$, where $d \in \{1, ..., n\}$, $n \in \mathbb{N}$ is the number of layers, $N_1 = T$, $N_{n+1} = L$ and $z_0 := x$. The layer $d$ is defined by the function

$$M_d : \mathbb{R}^{N_d} \to \mathbb{R}^{N_{d+1}}, \quad z_{d-1}(b, f, :) \mapsto M_d\left(z_{d-1}(b, f, :)\right) := g_d\left(z_{d-1}(b, f, :)W_d + b_d\right)$$

for each $b \in \{1, ..., B\}$ and $f \in \{1, ..., F\}$, where $g_d : \mathbb{R}^{F \times N_{d+1}} \to \mathbb{R}^{F \times N_{d+1}}$ is a pointwise applied non-linear activation function for each $d \in \{1, ..., n\}$ — like the ReLU function, where $g_d(z) = \max(0, z)$ — $z_{d-1} \in \mathbb{R}^{B \times F \times N_d}$ is the output from layer $d-1$ and the input for layer $d$, $W_d \in \mathbb{R}^{N_d \times N_{d+1}}$ and $b_d \in \mathbb{R}^{N_{d+1}}$. The operation $z_{d-1}(b, f, :)W_d$ is the common matrix-vector multiplication for any $d \in \{1, ..., n\}$, $b \in \{1, ..., B\}$, $f \in \{1, ..., F\}$. We remark that $g_d$ can be, but does not have to be, a different function for each layer. For each $b \in \{1, ..., B\}$ and $f \in \{1, ..., F\}$, we have that $y(b, f, :) := M_n(z_{n-1}(b, f, :))$. In this formulation, all weights in each layer $d$ are the same for all features. This is the implementation we provide, and it is used in Zeng et al. (2023). However, in the examples within the present work, we have $F = 1$. To implement a version that has different weights for each feature in each layer $d$, we just need to reformulate the input of the layers by $z_{d-1,f}(b, :) = z_{d-1}(b, f, :) \in \mathbb{R}^{N_d}$ for all $f \in \{1, ..., F\}$.
Accordingly, the definition of the layers looks like

$$M_{d,f} : \mathbb{R}^{N_d} \to \mathbb{R}^{N_{d+1}}, \quad z_{d-1,f}(b, :) \mapsto M_{d,f}\left(z_{d-1,f}(b, :)\right) := g\left(z_{d-1,f}(b, :)W_{d,f} + b_{d,f}\right),$$

where applying the definitions separately to each feature leads to $F$ different MLPs whose weight matrices and biases can differ per feature. In order to introduce cross learning, where information from one feature can influence the prediction of other features, we need to reshape the three-dimensional tensor $(B, F, N_d)$ to $(B, F N_d)$ for some layers, where a corresponding weight matrix $W_d^* \in \mathbb{R}^{F N_d \times F N_{d+1}}$ can mix information from different features. In a multi-layer architecture, we can combine cross learning and learning per feature in different layers, assembling them in one model by reshaping outputs into the corresponding formats — from $(B, F, N_d)$ to $(B, F N_d)$ or from $(B, F N_d)$ to $(B, F, N_d)$ — after one layer and before the next, depending on the learning type to switch to. With Pytorch, such layers are implemented with nn.Linear(n, m, bias = True), where $n \in \mathbb{N}$ is the dimension of the input and $m \in \mathbb{N}$ is the dimension of the output. Setting the bias parameter to True adds a learnable bias, while False fixes the bias to zero.
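To make this concrete, the following is a minimal sketch (not the exact implementation used in this work) of such an MLP with weights shared across features, together with the reshaping trick for a cross-learning layer; the hidden size and tensor dimensions are placeholders.

```python
import torch
import torch.nn as nn

B, F, T, L = 32, 1, 96, 24  # batch, features, history length, prediction length

class TimeSeriesMLP(nn.Module):
    """Sketch of an MLP mapping (B, F, T) to (B, F, L); weights shared over features."""
    def __init__(self, T: int, L: int, hidden: int = 128):
        super().__init__()
        # nn.Linear acts on the last dimension, so the same W_d and b_d are
        # applied for every batch element b and every feature f.
        self.net = nn.Sequential(
            nn.Linear(T, hidden), nn.ReLU(),  # M_1 with g_1 = ReLU
            nn.Linear(hidden, L),             # M_2 (no activation on the output)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, F, T) -> (B, F, L)

x = torch.randn(B, F, T)
y = TimeSeriesMLP(T, L)(x)  # (B, F, L)

# Cross-learning variant for one layer: flatten the features into the last
# dimension so one weight matrix W* of shape (F*T, F*L) can mix information
# across features, then reshape back.
cross = nn.Linear(F * T, F * L)
y_cross = cross(x.reshape(B, F * T)).reshape(B, F, L)
```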
# Self-Training: A Survey

Anonymous authors
Paper under double-blind review

## Abstract

Semi-supervised algorithms aim to learn prediction functions from a small set of labeled training examples and a large set of unlabeled observations. Because these approaches are relevant in many applications, they have received a lot of interest in both academia and industry. Among the existing techniques, self-training methods have undoubtedly attracted greater attention in recent years. These models are designed to find the decision boundary on low density regions without making additional assumptions about the data distribution, and use the unsigned output score of a learned classifier, or its margin, as an indicator of confidence. The working principle of self-training algorithms is to learn a classifier iteratively by assigning pseudo-labels to the set of unlabeled training samples with a margin greater than a certain threshold. The pseudo-labeled examples are then used to enrich the labeled training data and to train a new classifier in conjunction with the labeled training set. In this paper, we present self-training methods for binary and multi-class classification as well as their variants and two related approaches, namely consistency-based approaches and transductive learning. We also provide brief descriptions of self-supervised learning and reinforced self-training, two distinct approaches despite their similar names. Finally, we present the most popular applications where self-training is employed. For pseudo-labeling, fixed thresholds usually lead to subpar results, highlighting the significance of dynamic thresholding. Moreover, reducing pseudo-label noise enhances generalization and class differentiation. Performance is also impacted by augmenting the initial set of labeled training samples. These findings highlight the complex interplay in self-training efficacy between threshold selection, noise control, and labeled training set size. They emphasize the need for meticulous parameter tuning and data preprocessing to fully exploit semi-supervised learning's potential, and they pave the way for future research in refining methodologies and expanding applicability across domains. To the best of our knowledge, this is the first thorough and complete survey on self-training.

## 1 Introduction

Semi-supervised learning has risen to prominence within the machine learning domain, tackling the core challenge of making inferences from both the structure of the unlabeled data and the label information from existing labeled training sets (Altun et al., 2005). The framework is particularly useful in scenarios where there are limited labeled examples but an abundance of unlabeled data available for training. This is highly relevant in a range of applications, such as computer vision, natural language processing and speech recognition, where the acquisition of labeled data can be a costly endeavor (Yu et al., 2022; Cheng et al., 2023; Gheini et al., 2023; Qu et al., 2023).

## 1.1 Central Hypothesis

In general, it remains unclear how unlabeled data can be used in training and what value it can bring. The basic assumption in semi-supervised learning, called *smoothness*, stipulates that two examples in a high density region should have identical class labels (Chapelle et al., 2010; Amini & Usunier, 2015). This means that if two points are part of the same group or cluster, their class labels will most likely be the same.
If they are separated by a low density zone, on the other hand, their desired labels should be different. Hence, if the examples of the same class form a partition, unlabeled training data might aid in determining the partition boundary more efficiently than if just labeled training examples were utilized.

## 1.2 Three Main Semi-Supervised Learning Families

There are three main families of semi-supervised methods, each with its own adaptation of the smoothness hypothesis. These adaptations are usually referred to as assumptions, albeit loosely, since they rather represent different paradigms for implementing semi-supervised learning. Data clustering uses a mixture model and assigns class labels to groups using the labeled data they include, and it constitutes the working principle of generative approaches (Nigam et al., 2006; Kingma et al., 2014). The cluster assumption, which underpins these approaches, asserts that if two examples are in the same group, they are likely to be in the same class (Figure 1 (a)). This hypothesis may be explained as follows: if a group is formed by a large number of instances, it is rare that they belong to different classes. This does not imply that a class is constituted by a single group of examples, but rather that two examples from distinct classes are unlikely to be found in the same cluster (Abu-Mostafa, 1995). If we consider the partitions of instances to be high density areas, a form of the cluster assumption known as low density separation entails determining the decision boundary over low density regions (Figure 1 (b)), and it constitutes the basis of discriminant techniques. The main difference between generative and discriminant techniques is that discriminant approaches find the prediction function directly, without making any assumption on how the data are generated (Amini & Gallinari, 2002; Grandvalet & Bengio, 2004; Oliver et al., 2018). Density estimation is often based on a notion of distance, which may become meaningless for high dimensional spaces. A third hypothesis, known as the manifold assumption, stipulates that in high-dimensional spaces, instances reside on low-dimensional topological spaces that are locally Euclidean (Figure 1 (c)), which is supported by a variety of semi-supervised models called graphical approaches (Belkin & Niyogi, 2004; Chong et al., 2020).

Figure 1: Illustration of three main hypotheses made in semi-supervised learning: (a) cluster assumption, (b) low-density separation and (c) manifold assumption.

## 1.3 Compatibility

Although semi-supervised algorithms have been successfully applied in many situations, there have been cases where unlabeled data have been shown to have no effect on the performance of a learning task (Singh et al., 2008). Several attempts have been made in recent years to investigate the value of unlabeled data in the training process (Castelli & Cover, 1995; Li & Zhou, 2011), and the capacity of semi-supervised learning approaches to generalize (Rigollet, 2007; Maximov et al., 2018). The bulk of these studies are founded on the notion of *compatibility* defined by Balcan & Blum (2006), and they strive to exhibit the connection between the marginal data distribution and the target function to be learned. According to these findings, unlabeled data will be beneficial for training only if such a relationship exists.
In generative approaches, the marginal distribution is viewed as a mixture of class conditional distributions, and when compared to the supervised case, semi-supervised learning has been shown to achieve lower finite-sample error bounds in some general cases, or a faster rate of error convergence in others (Castelli & Cover, 1995; Rigollet, 2007; Maximov et al., 2018; Singh et al., 2008). In this line, Ben-David et al. (2008) showed that accessing the marginal distribution on unlabeled training data would not provide sample size guarantees superior to those obtained by supervised learning unless very strong assumptions about the conditional distribution of class labels are made. For graph-based approaches, Niyogi (2013) and Altun (2005) provided a context in which such algorithms may be studied and perhaps justified; the key finding of these studies is that unlabeled data can help learning in some situations by explicitly defining the structure of the data through a manifold. Finally, discriminant approaches mostly embed a margin maximization method that searches for the decision boundary in low-density regions by pushing it away from the unlabeled data (Joachims, 1999). In this survey, we focus on self-training algorithms that follow this principle by assigning pseudo-labels to high-confidence unlabeled training examples and including these pseudo-labeled samples in the learning process. While various surveys have explored semi-supervised learning in recent years (Van Engelen & Hoos, 2020; Yang et al., 2023), none have specifically emphasized self-training, which has emerged as the predominant approach in the field, widely applied across various applications.

## 1.4 Paper Structure

The remainder of this paper is organized as follows. In Section 2, we go over the self-training method in detail. First, we present the framework and notations used throughout the paper in Section 2.1, then we describe the general self-training algorithm in Section 2.2, also introduced in Algorithm 1. Then, we describe pseudo-labeling methods and their variants in Section 2.3, and we discuss self-training with two classifiers in Section 2.4. Those methods are summed up in Table 1. Finally, we provide some insights into current theoretical studies in Section 2.6. Other related approaches are described in Section 3. First, we detail the transductive learning context in Section 3.1, and the consistency-based approaches in Section 3.2. Going beyond traditional semi-supervised learning, we investigate the extension of self-training to domain adaptation in Section 2.5, delve into self-supervised learning in Section 3.3, and explore reinforced self-training in Section 3.4. Section 4 reviews applications of self-training methods in different domains, such as natural language processing in Section 4.1, computer vision in Section 4.2 and, more generally, knowledge-driven applications in Section 4.3, with speech recognition, anomaly detection, and genomics and proteomics. The views and future prospects are discussed in Section 5.

## 2 Self-Training

Within this section, we present the fundamental aspects of the self-training approach. Initially, we introduce the framework and notation, followed by a comprehensive exploration of the core concept behind the self-training algorithm, which is further delineated in Algorithm 1. In Section 2.3 and Section 2.4, we present significant contributions directly linked to the standard algorithm. We organize these contributions effectively in Table 1.
To conclude, we delve into the theoretical aspects in Section 2.6.

## 2.1 Semi-Supervised Framework And Notations

We consider classification problems where the input and the output spaces are respectively $\mathcal{X} \subseteq \mathbb{R}^d$ and $\mathcal{Y} = \{-1, +1\}$ or $\mathcal{Y} = \{1, \ldots, K\}$. We further suppose available a set of labeled training examples $S = (\mathbf{x}_i, y_i)_{1\leqslant i\leqslant m} \in (\mathcal{X} \times \mathcal{Y})^m$ generated from a joint probability distribution $P(\mathbf{x}, y)$ (denoted as $\mathcal{D}$) and a set of unlabeled training examples $X_{\mathcal{U}} = (\mathbf{x}_i)_{m+1\leqslant i\leqslant m+u} \in \mathcal{X}^u$ supposed to be drawn from the marginal distribution $P(\mathbf{x})$. The classic case corresponds to $m \ll u$, and the issue is thrown into the unsupervised learning framework if $S$ is empty. The opposite extreme scenario is when $X_{\mathcal{U}}$ is empty and the problem is reduced to supervised learning. Given a hypothesis set of functions $\mathcal{H}$ mapping $\mathcal{X}$ to $\mathcal{Y}$, the learner receives a labeled set $S$ and an unlabeled set $X_{\mathcal{U}}$ and outputs a hypothesis $h \in \mathcal{H}$ which is assumed to have a generalization error $R(h) = \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}[\mathbb{1}_{h(\mathbf{x})\neq y}]$ smaller than if just $S$ was used to find the prediction function, where by $\mathbb{1}_\pi$ we denote the indicator function equal to 1 if the predicate $\pi$ is true and 0 otherwise. In practice, classifiers are defined based on a scoring function $f$ from a class of functions $\mathcal{F} = \{f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}\}$, and for an example $\mathbf{x}$ the corresponding classification function $h$ outputs the class for which the score of $f$ is the highest:

$$h(\mathbf{x}) = \operatorname{argmax}_{y\in\mathcal{Y}} f(\mathbf{x}, y).$$

We define the margin $\rho_f(\mathbf{x}, y)$ of a function $f$ for an example $\mathbf{x} \in \mathcal{X}$ and a class $y \in \mathcal{Y}$ as

$$\rho_f(\mathbf{x}, y) = f(\mathbf{x}, y) - \max_{y'\neq y} f(\mathbf{x}, y').$$

In the binary case, $\mathcal{Y} = \{-1, +1\}$, we define the unsigned margin of a classification function $f \in \mathcal{F}$ over an example $\mathbf{x} \in \mathcal{X}$ (d'Alché Buc et al., 2001; Amini et al., 2008) as

$$m_f(\mathbf{x}) = |\rho_f(\mathbf{x}, +1)|.$$

In the multi-class classification case, $\mathcal{Y} = \{1, \ldots, K\}$, the unsigned margin (d'Alché Buc et al., 2001; Feofanov et al., 2019) is defined as

$$m_f(\mathbf{x}) = \sum_{y\in\mathcal{Y}} f(\mathbf{x}, y)\,\rho_f(\mathbf{x}, y).$$

The maximization of the unsigned margin tends to find a decision boundary that passes through low density regions and hence follows the low density separation assumption.

## 2.2 Self-Training: The Idea

Self-training, also known as decision-directed or self-taught learning machine, is one of the earliest approaches in semi-supervised learning (Scudder, 1965; Fralick, 1967) that has risen in popularity in recent years. To determine the decision boundary on low density regions, the idea behind self-training algorithms is to consider a pseudo-labeling strategy for assigning pseudo-labels to the examples of $X_{\mathcal{U}}$. This strategy can be characterized by a function, called *pseudo-labeler*:

$$\Phi_\ell : \mathcal{X} \times \mathcal{F} \to \mathcal{X} \times \mathcal{Y}.$$

We denote by $\tilde{y}$ the pseudo-label of an unlabeled $\mathbf{x} \in X_{\mathcal{U}}$ for a score function $f \in \mathcal{F}$ assigned by $\Phi_\ell$, and by $X_{\bar{\mathcal{U}}}$ the set of pseudo-labeled examples. The self-learning strategy is an iterative wrapper algorithm that starts by learning a supervised classifier on the labeled training set $S$. Then, at each iteration, the current classifier selects a part of the unlabeled data, $X_{\bar{\mathcal{U}}}$, and assigns pseudo-labels to them using the classifier's predictions. These pseudo-labeled unlabeled examples are removed from $X_{\mathcal{U}}$ and a new supervised classifier is trained over $S \cup X_{\bar{\mathcal{U}}}$, by considering these pseudo-labeled unlabeled data as additional labeled examples.
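To make the wrapper concrete, the following is a minimal sketch of this iterative loop with a scikit-learn classifier; using the maximum class probability as the confidence measure and the fixed threshold of 0.9 are illustrative choices standing in for the margin-based pseudo-labeling policies discussed below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_l, y_l, X_u, threshold=0.9, max_iter=20):
    """Sketch of the iterative self-training wrapper (cf. Algorithm 1)."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(max_iter):
        clf = LogisticRegression().fit(X_l, y_l)   # train on S ∪ X_Ū
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)             # scores on unlabeled data
        conf = proba.max(axis=1)
        pseudo = clf.classes_[proba.argmax(axis=1)]
        keep = conf >= threshold                   # pseudo-labeling policy Φ_ℓ
        if not keep.any():                         # no confident example left
            break
        # Move confident examples from X_U to the pseudo-labeled set.
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pseudo[keep]])
        X_u = X_u[~keep]
    return clf, X_l, y_l, X_u
```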
Concretely, the classifier $h \in \mathcal{H}$ learned at the current iteration is the one that minimizes a regularized empirical loss over $S$ and $X_{\bar{\mathcal{U}}}$:

$$\frac{1}{m}\sum_{(\mathbf{x},y)\in S}\ell(h(\mathbf{x}), y) + \frac{\gamma}{|X_{\bar{\mathcal{U}}}|}\sum_{(\mathbf{x},\tilde{y})\in X_{\bar{\mathcal{U}}}}\ell(h(\mathbf{x}), \tilde{y}) + \lambda\|h\|^2$$

where $\ell : \mathcal{Y} \times \mathcal{Y} \to [0, 1]$ is an instantaneous loss, most often chosen to be the cross-entropy loss, $\gamma$ is a hyperparameter for controlling the impact of pseudo-labeled data in learning, and $\lambda$ is the regularization hyperparameter. This process of pseudo-labeling and learning a new classifier continues until the unlabeled set $X_{\mathcal{U}}$ is empty or there is no more unlabeled data to pseudo-label. The pseudo-code of the self-training algorithm is shown in Algorithm 1.

## Algorithm 1. Self-Training

Input: $S = (\mathbf{x}_i, y_i)_{1\leqslant i\leqslant m}$, $X_{\mathcal{U}} = (\mathbf{x}_i)_{m+1\leqslant i\leqslant m+u}$.
$k \leftarrow 0$, $X_{\bar{\mathcal{U}}} \leftarrow \emptyset$.
repeat
  Train $f^{(k)}$ on $S \cup X_{\bar{\mathcal{U}}}$
  $\Pi_k \leftarrow \{\Phi_\ell(\mathbf{x}, f^{(k)}),\ \mathbf{x} \in X_{\mathcal{U}}\}$ ▷ Pseudo-labeling
  $X_{\bar{\mathcal{U}}} \leftarrow X_{\bar{\mathcal{U}}} \cup \Pi_k$
  $X_{\mathcal{U}} \leftarrow X_{\mathcal{U}} \setminus \{\mathbf{x} \mid (\mathbf{x}, \tilde{y}) \in \Pi_k\}$
  $k \leftarrow k + 1$
until $X_{\mathcal{U}} = \emptyset \,\vee\, \Pi_k = \emptyset$
Output: $f^{(k)}$, $X_{\mathcal{U}}$, $X_{\bar{\mathcal{U}}}$

## 2.3 Pseudo-Labeling Strategies

At each iteration, self-training selects just a portion of the unlabeled data for pseudo-labeling; otherwise, all unlabeled examples would be pseudo-labeled after the first iteration, which would actually result in a classifier with performance identical to the initial classifier (Chapelle et al., 2010). Thus, the implementation of self-training raises the following question: how do we determine the subset of examples to pseudo-label? A classical assumption, which stems from the low density separation hypothesis, is to suppose that the classifier learned at each step makes the majority of its mistakes on observations close to the decision boundary. In the case of binary classification, preliminary research suggested assigning pseudo-labels only to unlabeled observations for which the current classifier is the most confident (Tür et al., 2005). Hence, considering thresholds $\theta^-$ and $\theta^+$ defined for respectively the negative and the positive classes, the pseudo-labeler assigns a pseudo-label $\tilde{y}$ to an instance $\mathbf{x} \in X_{\mathcal{U}}$ such that:

$$\tilde{y} = \begin{cases} +1, & \text{if } f(\mathbf{x}, +1) \geqslant \theta^+, \\ -1, & \text{if } f(\mathbf{x}, +1) \leqslant \theta^-, \end{cases} \tag{1}$$

and $\Phi_\ell(\mathbf{x}, f) = (\mathbf{x}, \tilde{y})$. An unlabeled example $\mathbf{x}$ that does not satisfy the conditions in equation 1 is not pseudo-labeled, i.e. $\Phi_\ell(\mathbf{x}, f) = \emptyset$. Intuitively, thresholds should be set to high absolute values, as pseudo-labeling examples with low confidence would increase the chances of assigning wrong labels. However, thresholds of very high value imply excessive trust in the confidence measure underlying the model, which, in reality, can be biased due to the small labeled sample size. Using several iterations also makes the situation more intricate, as at every iteration the optimal threshold might be different. One way to select the thresholds is to set them equal to the average of respectively the positive and the negative predictions (Tür et al., 2005). In this line, and in the context of multi-class classification, Lee (2013) used Neural Networks as the supervised classifier and chose the most confident class to infer pseudo-labels for unlabeled data using the current model's outputs. The pseudo-labeled examples were then added to the labeled training set and treated in the same way as labeled examples. Zou et al. (2018) adapted the idea of Tür et al.
(2005) for multi-class classification by not choosing thresholds but rather fixing a proportion $p$ of the most confident unlabeled data to be pseudo-labeled, and then increasing this proportion at each iteration of the algorithm until $p = 0.5$ was reached. Following this idea, Cascante-Bonilla et al. (2021) revisited the concept of pseudo-labeling by discussing the iterative process of assigning pseudo-labels to unlabeled data and emphasized the resilience of pseudo-labeling to out-of-distribution samples. Zhang et al. (2021) also proposed an adaptation of curriculum learning to pseudo-labeling, which entails learning a model using easy-to-learn observations before moving on to more complex ones. The principle is that at step $k$ of the algorithm, the pseudo-labeler selects unlabeled examples having predictions that are in the $(1-\alpha_k)$-th percentile of the distribution of the maximum probability predictions, assumed to follow a Pareto distribution, where $\alpha_k \in [0, 1]$ is a hyperparameter that varies from 0 to 1 in increments of 0.2. Considering the distribution of predictions over unlabeled data and majority vote classifiers, such as Random Forest or Adaboost (Schapire et al., 1997), it is possible to automatically choose a threshold for pseudo-labeling. Formally, the learning of a majority vote classifier with partially labeled data can be defined as follows. After observing the training set $S \cup X_{\bar{\mathcal{U}}}$, the task of the learner is to choose a posterior distribution $Q$ over a set of hypotheses $\mathcal{H}$ such that the *Q-weighted majority vote* classifier $B_Q$, defined by:

$$\forall \mathbf{x} \in \mathcal{X},\ B_Q(\mathbf{x}) = \operatorname{argmax}_{y\in\mathcal{Y}} \mathbb{E}_{h\sim Q}\left[\mathbb{1}_{h(\mathbf{x})=y}\right],$$

will have the smallest possible risk on examples of $X_{\mathcal{U}}$. The associated *Gibbs* classifier, $G_Q$, is defined as the random choice of a classifier $h$ according to the distribution $Q$, and its error over an unlabeled set $X_{\mathcal{U}}$ is given by:

$$\hat{R}_u(G_Q) = \frac{1}{u}\sum_{\mathbf{x}'\in X_{\mathcal{U}}} \mathbb{E}_{h\sim Q}[\mathbb{1}_{h(\mathbf{x}')\neq y'}],$$

where, for every unlabeled example $\mathbf{x}' \in X_{\mathcal{U}}$, we refer to $y'$ as its true unknown class label. For binary and multi-class classification respectively, Amini et al. (2008) and Feofanov et al. (2019) showed that a tight upper-bound on the Gibbs classifier's risk, holding with high probability over the random choice of $S$ and $X_{\mathcal{U}}$, guarantees a tight bound on the error of the Bayes classifier over the unlabeled training set:

$$\hat{R}_u(B_Q) = \frac{1}{u}\sum_{\mathbf{x}'\in X_{\mathcal{U}}} \mathbb{1}_{B_Q(\mathbf{x}')\neq y'}.$$

This bound is mainly based on the distribution of predictions over unlabeled data, and the derivations can be extended to bound the risk of voted majority classifiers having margins greater than a threshold $\theta$, $\hat{R}_{u\wedge\theta}(B_Q)$, defined as:

$$\hat{R}_{u\wedge\theta}(B_Q) = \frac{1}{u}\sum_{\mathbf{x}'\in X_{\mathcal{U}}} \mathbb{1}_{B_Q(\mathbf{x}')\neq y' \,\wedge\, m_{B_Q}(\mathbf{x}')>\theta},$$

with a slight abuse of notation for $m_{B_Q}$.
One of the practical aspects that arises from this result is the possibility to specify a threshold $\theta$ which minimizes an upper-bound of the conditional risk of a voted majority classifier $B_Q$ over the unlabeled training set $X_{\mathcal{U}}$, defined as:

$$\hat{R}_{u|\theta}(B_Q) = \frac{\hat{R}_{u\wedge\theta}(B_Q)}{\frac{1}{u}\sum_{\mathbf{x}\in X_{\mathcal{U}}} \mathbb{1}_{m_{B_Q}(\mathbf{x})\geqslant\theta}},$$

where the denominator is the proportion of the unlabeled examples with confidence higher than the threshold $\theta$, and the numerator is the joint Bayes risk on $X_{\mathcal{U}}$. Thus, the criterion can be interpreted as a trade-off between the number of examples going to be pseudo-labeled and the error they induce. Furthermore, these bounds are shown to be tight in the case where the majority vote classifier makes its errors mostly in low margin regions. Feofanov et al. (2019) demonstrated that this technique outperforms conventional fixed-threshold pseudo-labeling strategies on different multi-class classification problems. Chen et al. (2022) highlighted two major issues with self-training: the snowball effect of cascading pseudo-labeling mistakes, and the random sampling of tiny samples (called data bias). The authors suggest two-phase solutions to address these problems for image classification using deep learning. First, they proposed a classification head to separate the creation and use of pseudo-labels in order to reduce training errors: an additional head is utilized to receive the pseudo-labels and carry out training on unlabeled data, while the default head is used for classification and pseudo-labeling.

## 2.4 Self-Training With Two Classifiers

In the wake of works utilizing only a single classifier in self-training algorithms, new studies have been proposed with the use of two classifiers, where each model learns on the output of the other (Xie et al., 2020b; Chen et al., 2021; Karamanolakis et al., 2021). Most of these techniques are based on the idea of consensus in predictions between two classifiers and were inspired by the seminal work of Blum & Mitchell (1998), who proposed the co-training algorithm. In co-training, examples are defined by two modalities that are comparable but not entirely correlated. Each view of an example is expected to contain complementary information about the data and, if there are enough labeled training data, each of them is supposed to be sufficient for learning. The main principle is to learn a classifier on each view, taking initially the available labeled examples as the training set. Then, one of the classifiers assigns pseudo-labels to unlabeled data, and the other one uses them to retrain the model by including them into its training set. At each iteration, the classifiers alternately switch their roles, thereby co-training each other. As for self-training algorithms with a single classifier, this procedure continues until there are no more unlabeled instances to be pseudo-labeled. In practice, several studies artificially generated the two modalities for classification problems where examples are *mono-viewed* and described by a vector representation. These approaches create the two modalities out of one by selecting at random the set of features that should correspond to each modality from the initial set of features, and their efficiency was empirically proved on various applications (Yaslan & Cataltepe, 2010). Co-training can thus be thought of as a form of self-training: rather than training a classifier on its own previous predictions, each classifier is trained on the predictions of another classifier that was trained on the predictions of the former. Without splitting the input feature set, Chen et al. (2021) proposed Cross Pseudo Supervision for semantic segmentation in images. This method employs two neural networks as supervised classifiers having the same images as inputs. Each neural network is learned at every mini-batch by considering the pseudo-labels generated by the other network for unlabeled instances as ground-truths. In multi-task learning, Ghiasi et al. (2021) proposed to independently train specialized teachers using labeled datasets. These teachers then label an unlabeled dataset, creating a multi-task pseudo-labeled dataset. Subsequently, a student model is trained using the pseudo-labeled dataset, employing multi-task learning to learn from various datasets and tasks simultaneously. Finally, the feature representations of the student model are evaluated across six vision tasks, including image recognition, to assess its performance and generalization capabilities. The learnability of co-training was studied under the PAC framework (Valiant, 1984), which also accounts for noise in the class labels of unlabeled examples caused by pseudo-labeling. The injection of noisy labels in the pseudo-labeling step is in fact inherent to any self-training algorithm. Taking noisy labels into account when training a model was first considered in supervised learning, following the paradigm of learning with an imperfect supervisor, in which the training data contains an unknown portion of imperfect labels (Natarajan et al., 2013; Han et al., 2018). Most of these studies tackle this problem from an algorithmic point of view, employing regularization (Miyato et al., 2019) or estimating mislabeling errors by modeling the transition probability between noisy and true labels (Patrini et al., 2017). Table 1 summarizes the main self-training approaches presented so far by emphasizing their key aspects.

| | Single | Double | Binary | Multi-class | Fixed | Optimized | Account |
|---|---|---|---|---|---|---|---|
| Scudder [1965] | ✓ | − | ✓ | − | ✓ | − | − |
| Joachims [1999] | ✓ | − | ✓ | − | ✓ | − | − |
| Amini et al. [2008] | ✓ | − | ✓ | − | − | ✓ | − |
| Hadjadj et al. [2023] | ✓ | − | ✓ | − | − | ✓ | ✓ |
| Tur et al. [2005] | ✓ | − | − | ✓ | ✓ | − | − |
| Xie et al. [2020a] | ✓ | − | − | ✓ | ✓ | − | − |
| Cascante et al. [2021] | ✓ | − | − | ✓ | ✓ | − | − |
| Chen et al. [2022] | ✓ | − | − | ✓ | ✓ | − | ✓ |
| Feofanov et al. [2019] | ✓ | − | − | ✓ | − | ✓ | − |
| Zhang et al. [2021] | ✓ | − | − | ✓ | − | ✓ | − |
| Blum et al. [1998] | − | ✓ | ✓ | − | ✓ | − | ✓ |
| Yaslan et al. [2010] | − | ✓ | − | ✓ | ✓ | − | − |
| Tarvainen and Valpola [2017] | − | ✓ | − | ✓ | ✓ | − | − |
| Xie et al. [2020b] | − | ✓ | − | ✓ | ✓ | − | − |
| Karamanolakis et al. [2021] | − | ✓ | − | ✓ | ✓ | − | − |
| Chen et al. [2021] | − | ✓ | − | ✓ | ✓ | − | − |
| Ghiasi et al. [2021] | − | ✓ | − | ✓ | ✓ | − | − |
| Du et al. [2022] | − | ✓ | − | ✓ | ✓ | − | − |

Table 1: A summary of principal self-training algorithms, based on pseudo-labeling with one or two classifiers, introduced in Section 2.3 and 2.4. Columns indicate the base classifier (single or double), the classification setting (binary or multi-class), the pseudo-labeling threshold (fixed or optimized), and whether label noise is accounted for.
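As a schematic illustration of the co-training loop described above (not the exact protocol of Blum & Mitchell (1998)), the following sketch assumes the two views are given as separate feature matrices; the shared pseudo-label pool and the fixed confidence threshold of 0.9 are illustrative simplifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(X1, X2, y, U1, U2, threshold=0.9, max_iter=10):
    """Schematic co-training: (X1, X2) are the two views of the labeled data,
    (U1, U2) of the unlabeled data; each classifier pseudo-labels confident
    unlabeled examples, which then also serve as training data for the other."""
    clf1, clf2 = LogisticRegression(), LogisticRegression()
    for _ in range(max_iter):
        clf1.fit(X1, y)                       # each classifier sees one view
        clf2.fit(X2, y)
        if len(U1) == 0:
            break
        p1, p2 = clf1.predict_proba(U1), clf2.predict_proba(U2)
        # An example is pseudo-labeled if either classifier is confident on it;
        # the more confident of the two provides the label.
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        labels = np.where(p1.max(axis=1) >= p2.max(axis=1),
                          clf1.classes_[p1.argmax(axis=1)],
                          clf2.classes_[p2.argmax(axis=1)])
        keep = conf >= threshold
        if not keep.any():
            break
        # Move the selected examples (both views) into the training pool.
        X1, X2 = np.vstack([X1, U1[keep]]), np.vstack([X2, U2[keep]])
        y = np.concatenate([y, labels[keep]])
        U1, U2 = U1[~keep], U2[~keep]
    return clf1, clf2
```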
## 2.5 Self-Training Under Domain Shift

Recently, self-training has expanded its scope beyond semi-supervised learning and has found extensive application to learning problems where the available data is subject to a distribution shift. In unsupervised domain adaptation, where the objective is to transfer knowledge from a labeled source domain to an unlabeled target one, self-training has become a popular alternative to discrepancy minimization methods (Ganin et al., 2016). In this case, self-training aims to progressively correct the domain shift by adding more and more pseudo-labeled target examples to the source training set. This is particularly relevant for gradual domain adaptation, where unlabeled instances from intermediate domains are available (Shi & Liu, 2024). When intermediate domains are not given, it is important to ensure that pseudo-labeled target examples are reliable and are not biased towards the source data. While Inoue et al. (2018) and Zou et al. (2018) approached this issue by carefully choosing a pseudo-labeling policy, Saito et al. (2017) learn a representation via a tri-training scheme, in which the student is trained on target data pseudo-labeled by the agreement of two teachers. Liu et al. (2021) alternate between two gradient steps: (1) training a source classification head that generates pseudo-labels, and (2) training a target classification head using pseudo-labeled data under the constraint that it predicts well on source data. As the discrepancy between the source and the target can be large, the prediction confidence may exhibit a strong bias, failing to distinguish between correct and wrong pseudo-labels. Therefore, several works focus specifically on model calibration and uncertainty estimation, including negative entropy regularization (Zou et al., 2019), Monte-Carlo dropout (Mukherjee & Awadallah, 2020), and the prediction agreement of diversified linear heads (Odonnat et al., 2024).

## 2.6 Theoretical Studies

Several studies have recently looked into the theoretical properties of self-training algorithms. In this line, Wei et al. (2021) suggest a new concept of *expansion*, defined as the quantity of data dispersion in an example's neighborhood, where the term *neighborhood* refers to adding adversarial perturbations (Miyato et al., 2019) or augmentations (Sohn et al., 2020) to the example. The study establishes distributional guarantees of self-training when the label distribution meets such expansion properties and classes are suitably separated according to neighborhoods. It derives finite-sample guarantees for Deep Neural Networks (DNNs) by combining these distributional guarantees with DNN generalization bounds. Experiments with a Generative Adversarial Network (GAN) are also used to verify the expansion assumption. Frei et al. (2022) examine a self-training algorithm with linear models for binary classification, using gradient-based optimization of the cross-entropy loss after supervised learning with a small number of samples. The data distribution is a mixture model with concentration and anti-concentration properties. The authors show that utilizing $O(d/\epsilon^2)$ unlabeled observations in the self-training algorithm, with $d$ the number of input variables, suffices to achieve the classification error of the Bayes-optimal classifier up to an $\epsilon$ error, if the initial pseudo-labeling strategy has a classification error smaller than an absolute constant $C_{err}$.
Furthermore, the authors demonstrate that a constant number of labeled examples is sufficient for optimal performance in a self-training algorithm, by showing that with only $O(d)$ labeled examples, the standard gradient descent algorithm can learn a pseudo-labeling strategy with a classification error no greater than $C_{err}$. Zhang et al. (2022) study the generalization ability of self-training in the case where the base classifier is a two-layer neural network with the second-layer weights all fixed to one, assuming that the ground truth is realizable, the labels are observed without noise, and the labeled and unlabeled instances are drawn from two isotropic Gaussian distributions. The authors show that, given some plausible assumptions about the initial point and the amount of unlabeled training examples, the algorithm converges to the ground truth with fewer observations than needed when no unlabeled data is provided. The reader can refer to Zhong et al. (2017) for a broader context. Zhang et al. (2022) extend their main result to a more general setting, where it is shown that the model still converges towards a given convex combination of the ground truth and the initial point, and is guaranteed to outperform the initial supervised model, without fixing any requirement on the number of labeled training examples. Hadjadj et al. (2023) propose a first bound over the misclassification error of a self-training method which utilizes half-spaces as the base classifier, in the case where the class labels of examples are supposed to be corrupted by a Massart noise model. Under this assumption, it is shown that the use of unlabeled data in the proposed self-training algorithm does not degrade the performance of the first half-space trained over the labeled training data. Sportisse et al. (2023) study the identifiability of self-training approaches. In addressing the bias in the conventional risk estimator, the proposed method, named Inverse Propensity Weighting, involves assigning weights to examples based on the inverse of their propensity scores, representing the probability of a class label being observed. The study introduces two estimators for the missing data mechanism, one of which is derived through the maximization of the observed likelihood. Furthermore, a likelihood ratio test is suggested to evaluate the informativeness of the labels, determining whether they exhibit non-random missing patterns. Some other works have studied self-training from a theoretical perspective when a distribution shift takes place. Chen et al. (2020) prove that self-training can help to avoid spurious features, while Kumar et al. (2020) derived an upper-bound on the error of self-training in the case of gradual shifts.

## 3 Related And Unrelated Approaches

In semi-supervised learning, there are two main other areas of research that are related to self-training. The first, known as *transductive learning*, is based on the low density separation assumption and tends to give class labels only for the set of unlabeled training samples. The second method, referred to as *consistency learning*, uses classifier predictions over unlabeled data as a confidence indicator and constrains model outputs to be comparable for similar unlabeled examples without assigning pseudo-labels. In this section, we also go a bit further and introduce different contexts where self-training has been used and extended.
First, we present *self-supervised learning*, which, despite its name being similar to self-training, is an entirely separate technique that employs unlabeled data to train or pre-train a model. Finally, we introduce *reinforced self-training*, which merges elements of reinforcement learning with self-training principles by integrating a scoring function based on a learned reward model and employing offline reinforcement learning objectives for model fine-tuning.

## 3.1 Transductive Learning

The goal of transductive learning, as previously stated, is to assign pseudo-labels to samples from an unlabeled training set, $X_{\mathcal{U}}$. As this set is finite, the considered function class $\mathcal{F}$, for finding the transductive prediction function, is also finite. $\mathcal{F}$ can be defined using a nested structure according to the structural risk minimization principle, $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \ldots \subseteq \mathcal{F}$ (Vapnik, 1998). Transductive techniques often employ the distribution of unsigned margins of unlabeled examples to guide the search for a prediction function, limiting it to following the low density separation assumption in order to find the best function class among the current ones. Transductive approaches also assume that the function class's structure should reflect prior knowledge of the learning problem at hand, and that it should be built in such a way that the correct prediction of class labels of labeled and unlabeled training examples is contained in a function class $\mathcal{F}_j$ of small size with high probability. In particular, Derbeko et al. (2004) show that with high probability the error on the unlabeled training set of a function from a function class $\mathcal{F}_j$ is bounded by its empirical loss on the labeled training set plus a complexity term that depends on the number of labeled examples $m$, the number of unlabeled examples $u$, and the size of the function class $\mathcal{F}_j$. The Transductive Support Vector Machine (TSVM) (Joachims, 1999), developed for the binary case, is based on this paradigm, and looks for the optimal hyperplane in a feature space that best separates the labeled examples while avoiding high density areas. TSVM does this by building a structure on a function class $\mathcal{F}$ and sorting the outputs of unlabeled samples by their margins. The solutions to the associated optimization problem are the pseudo-labels of unlabeled examples for which the resulting hyperplane separates the examples of both labeled and unlabeled training sets with the largest margin. Shi et al. (2018) extended this idea to the multi-class classification case with Neural Networks. Similar to TSVM, class labels of unlabeled examples are treated as variables, and the algorithm tries to determine their optimal values, along with the optimal NN parameter set, obtained by minimizing a cross-entropy loss estimated over both labeled and unlabeled training sets through an iterative training process. The authors employ the MinMax Feature regularization to constrain the neural network to learn features of same-class images to be close, and features of different classes to be separated by a preset margin, in order to overcome incorrect label estimations on outliers and noisy samples.

## 3.2 Consistency-Based Approaches

Early studies in this line, see for example Zhu et al. (2003) for binary classification, were proposed to learn a single classifier defined from a scoring function $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ penalized for quick changes in its predictions.
The similarity matrix $W = [W_{ij}]_{1\leqslant i,j\leqslant u}$, constructed over the unlabeled training data, is used to measure the similarity between instances. The penalization is mostly expressed as a regularization term in the learning objective. As an example, adapting the work of Zhu et al. (2003) to multi-class classification, the penalization term can be written as:

$$\Omega_{\mathbf{W}}(X_{\mathcal{U}}) = \sum_{i,j=1}^{u} W_{ij}\,\|f(\mathbf{x}_{m+i}, .) - f(\mathbf{x}_{m+j}, .)\|^2$$

where, for a given example $\mathbf{x}$, $f(\mathbf{x}, .) = (f(\mathbf{x}, k))_{k\in\mathcal{Y}}$ is the vector of class predictions of $f$. In terms of learning, $\Omega_{\mathbf{W}}$ can be seen as a regularization term, constraining the model to have the same predictions on similar unlabeled instances. Other types of penalization have been studied in the literature. Maximov et al. (2018) suggested an approach that partitions partially labeled data and then uses labeled training samples to identify dense clusters having predominant classes with a fraction of non-predominant classes below a given threshold, extending earlier results on supervised classification (Joshi et al., 2017). In this situation, the proposed penalization term measures the learner's inability to predict the predominant classes of the identified clusters, which in turn constrains the supervised classifier to be consistent with the structure of the dense clusters. In this line, Rangwani et al. (2022) consider non-decomposable metrics with consistency regularization by giving a cost-sensitive framework that consists of minimizing a cost-sensitive error on pseudo-labels and a consistency regularization. They demonstrate theoretically that they can build classifiers that can maximize the required non-decomposable measure more effectively than the original model used to produce pseudo-labels, under comparable data distribution assumptions. Without explicitly stating a penalization term, consistency learning was extended to cases with two classifiers. The Mean-Teacher approach (Tarvainen & Valpola, 2017) is perhaps one of the earliest popular techniques that have been proposed in this context. This method employs Neural Networks (NNs) as supervised classifiers, and it is based on the assumption that two close models with the same input should make the same prediction. One of the models is called the *teacher*, while the other is referred to as the *student*. These two NN models are structurally identical, and their weights are related in that the teacher's weights are an exponential moving average (Laine & Aila, 2017) of the student's weights. In this scenario, the student model is the only one that is trained over the labeled training set, and the consistency loss is computed between the teacher's probability distribution prediction and the student's one, using the mean square error or the Kullback-Leibler divergence. Other studies refined the Mean-Teacher approach with a data-augmentation technique combining two images with random patches to improve prediction consistency (French et al., 2020; Xie et al., 2020a). More recently, Du et al. (2022) provide a two-stage method to reduce label propagation errors: in the first phase, the gradients of the student loss are computed and utilized to update the teacher; in the second stage, the teacher assigns pseudo-labels, which are then utilized to train the current student.
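The following is a minimal sketch of the Mean-Teacher update for a generic PyTorch model; the decay value of 0.99, the toy linear networks, and the MSE variant of the consistency loss are illustrative choices, not the exact configuration of Tarvainen & Valpola (2017).

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    """Teacher weights = exponential moving average of the student weights."""
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(decay).add_(p_s, alpha=1 - decay)

# Setup: two structurally identical networks (here a toy linear model).
student = torch.nn.Linear(10, 3)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is never trained directly

x_unlabeled = torch.randn(16, 10)
# Consistency loss between the two probability distributions (MSE variant).
loss_cons = F.mse_loss(torch.softmax(student(x_unlabeled), dim=-1),
                       torch.softmax(teacher(x_unlabeled), dim=-1))
loss_cons.backward()        # gradients flow into the student only
ema_update(teacher, student)
```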
## 3.3 Self-Supervised Learning

Although similar in name, self-training is a completely different approach from self-supervised learning, which has demonstrated encouraging results and has become an active area of research (Ozbulak et al., 2023). In self-supervised learning, a model acquires the ability to make predictions regarding various facets of its input data, all without the necessity of explicit labeled training data. Rather than depending on labeled data, self-supervised learning harnesses the inherent structure present in the input data and autonomously generates guidance to train the model. This procedure involves the formulation of a pretext task, also referred to as a proxy task, wherein the model is trained to make predictions concerning a specific relationship inherent in the data. For instance, in the domain of computer vision, a pretext task might involve rotating images within a predefined range of angles, followed by training a supervised model to predict these angles. Once the model has undergone training on the pretext task, the knowledge it has gained in this process can be applied to downstream tasks that do require labeled data. Consequently, by learning from extensive amounts of unlabeled data, self-supervised learning empowers the acquisition of robust data representations, capitalizing on the abundant, freely available unlabeled data resources. Common approaches in self-supervised learning include predicting missing parts of an image (Lee et al., 2021), predicting the order of shuffled image patches or their orientation (Shorten & Khoshgoftaar, 2019), reconstructing corrupted images (Fang et al., 2023), filling in missing words in a sentence (Donahue et al., 2020), or predicting future frames in a video sequence (Schiappa et al., 2022). These pretext tasks encourage the model to capture meaningful representations of the input data, which can then be used for various downstream tasks, such as image classification, object detection, or natural language processing.

## 3.4 Reinforced Self-Training

A recent innovative approach, called Reinforced Self-Training (ReST), has emerged, particularly notable for its application in conditional language modeling (Gulcehre et al., 2023; Singh et al., 2023). This approach operates through two distinct loops: the inner loop, called "Improve", which concentrates on refining the policy using a fixed dataset, and the outer loop, called "Grow", which involves expanding the dataset by sampling from the most recent policy. In the domain of conditional language modeling, ReST follows a systematic sequence of steps. Initially, during the Grow phase, the language model policy, originally a supervised policy, generates multiple output predictions for each context, thereby enriching the training dataset. Subsequently, in the Improve stage, the expanded dataset undergoes ranking and filtering using a scoring function. The language model then undergoes fine-tuning on the refined dataset using an offline reinforcement learning objective, with the potential for repeating this process with an increasing filtering threshold. The resultant policy from this iterative process is subsequently employed in the following Grow phase. ReST may find niche suitability in specific applications or scenarios where reinforcement learning principles enhance model performance through learned reward signals.
In contrast, classical self-training techniques possess a broader applicability and have been employed across a wide spectrum of semi-supervised learning tasks without necessitating reinforcement learning frameworks.

## 4 Applications

In this section, we concentrate on the most popular applications where self-training has been employed, although this technique may be extended and applied to a variety of additional machine learning tasks. The goal of our presentation here is not to be thorough, but rather to focus on the main features of self-training that were used in the literature among the selected applications.

## 4.1 Natural Language Processing

Co-training is perhaps one of the preliminary self-training techniques, and it was applied to web page classification (Blum & Mitchell, 1998). In the paper, the content of a web page is divided into two sets of words: those that appear on the page and those that appear in hyperlinks pointing to the page. The main hypothesis here is that each set of words contains sufficient information for the classification task and that there are enough labeled data to efficiently learn two supervised classifiers. Both theoretical and empirical studies of co-training show that if examples have two redundant but not entirely correlated views, then unlabeled data may be used to augment the original labeled training data to find more robust classifiers. However, the drawback of this strategy is that, in general, text data is mono-view. For bag-of-words representations of texts, a solution was to split the set of words into two random sets, considered as two distinct views of a text (Nigam & Ghani, 2000), as mentioned in Section 2.4. However, this idea cannot be generalized to sequential models that could be used as base classifiers in co-training. Other current self-training techniques in NLP are mostly built on the concept of co-training and employ two base classifiers that are learned over each other's predictions. In this line, Wu et al. proposed a Named Entity Recognition (NER) strategy that consists in automatically detecting and classifying named entities, with a first NER model trained on labeled training data serving as a teacher to predict the probability distribution of entity labels for each token in the unlabeled set. The pseudo-labeled data with such soft labels are then used to train a student NER model for the unlabeled set, and the process of pseudo-labeling and training is repeated until convergence, as in co-training. For the task of Relation Extraction (RE), which consists in obtaining a predefined semantic relation between two entities in a given sentence, Yu et al. (2022) proposed an approach which classifies the pseudo-labeled instances generated from a teacher into confident, ambiguous and hard sets. In the training of the student model, the confident and ambiguous instances are subsequently interpreted as positive and set-negative observations, respectively. Lately, Meng et al. (2020) proposed an approach to leverage the power of language models that have been pre-trained on large corpora of text to generate pseudo-labels for unlabeled text data. The pseudo-labeled data, along with a smaller set of labeled data, are then used to train and fine-tune the text classifier, and the process of assigning pseudo-labels and retraining the classifier is repeated until convergence. A challenge that arises when using a single base classifier in self-training for NLP tasks is to minimize the impact of label noise in the pseudo-labeling policy.
A challenge that arises when using a single base classifier in self-training for NLP tasks is to minimize the impact of label noise in the pseudo-labeling policy. To cope with this problem, Zadeh & Rasoul (2010) devised a bootstrapping technique for semantic role labeling that consists in randomly selecting a subset of the most confident samples for pseudo-labeling. In the same vein, for the sentiment analysis task, Gupta et al. (2021) advocated selecting the top most confident samples for pseudo-labeling. However, as we shall see in the next section, the use of a fixed threshold in the pseudo-labeling policy may be suboptimal in general.

## 4.2 Computer Vision

As in NLP, the two variants of self-training with one or two base classifiers, referred to as student and teacher in the literature, are mainly considered for image classification. Most recent approaches use neural networks as base classifiers and rely on these models' ability to learn efficient representations of images, proposing various strategies to either improve the representation or reduce the effect of noise injection during the pseudo-labeling phase of self-training.

The most common strategy with student and teacher base classifiers is arguably the one proposed by Xie et al. (2020b), in which an EfficientNet model trained on labeled ImageNet images is used as a teacher to create pseudo-labels for unlabeled ones. A larger EfficientNet is subsequently employed as a student model and is trained on a mix of labeled and pseudo-labeled images. This training involves altering the input images using various techniques such as dropout, stochastic depth, and data augmentation, the objective being for the model to learn a representation of images that remains consistent despite these alterations. The procedure is repeated by reversing the roles of the student and the teacher. The input of the teacher model is not altered throughout the training process; the main motivation advanced is to keep the pseudo-labels as accurate as possible. Empirical evidence from various image collections demonstrates the effectiveness of this strategy.

Sohn et al. (2020) proposed a self-training approach called FixMatch that combines consistency regularization with a confidence-based mechanism to select high-confidence pseudo-labeled examples for training. The algorithm applies two different data augmentation procedures to the same image, called weak (flip-and-shift) and strong (heavier distortions) augmentations. As in the previous case, these perturbations help to increase diversity and improve the model's robustness on the unlabeled images. The authors introduce a consistency loss term that encourages agreement between the model's hard pseudo-label on the weakly-augmented version and its soft prediction on the strongly-augmented version of the same unlabeled image. They demonstrate that the model learns to produce more trustworthy and accurate results by minimizing the discrepancy between these predictions. In order to decrease the influence of possibly inaccurate pseudo-labels on the learning process, the loss is evaluated only on those unlabeled examples in the batch whose confidence is higher than a fixed threshold. This idea has since been adapted to various related tasks, including object detection, image segmentation (Cheng et al., 2023), remote sensing (Huang et al., 2023) and video anomaly detection (Lv et al., 2023), among others.
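As an illustration, here is a minimal PyTorch-style sketch of the FixMatch unlabeled loss just described; `weak_aug` and `strong_aug` stand in for the flip-and-shift and heavy-distortion augmentations, and training details of the original method are omitted.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_unlab, weak_aug, strong_aug, tau=0.95):
    """Confidence-thresholded consistency loss on a batch of unlabeled images."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(x_unlab)), dim=1)
        conf, pseudo = probs.max(dim=1)        # hard pseudo-labels from the weak view
        mask = (conf >= tau).float()           # keep only confident examples
    logits_strong = model(strong_aug(x_unlab)) # soft prediction on the strong view
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * loss).mean()
```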
Chen et al. (2022) proposed an improvement of FixMatch by introducing two novel features. First, they introduce a separate classification head that is used to assign pseudo-labels and is trained on labeled data only, in order to avoid possible label noise from wrong pseudo-labels. Second, they improve feature learning by introducing an adversarial classification head whose goal is to approximate the worst possible error on unlabeled data.

All these approaches employ a constant predefined threshold across all classes to choose unlabeled data for training, disregarding the varying learning conditions and complexities of different classes. To tackle this concern, Zhang et al. (2021) introduced a curriculum learning technique that utilizes unlabeled data according to the model's learning progress. The essence of this strategy is to dynamically adapt the thresholds of distinct classes at each time step, enabling the inclusion of informative unlabeled data and their corresponding pseudo-labels. This approach has been successfully applied to many domains, including object detection (Li et al., 2022), medical image classification (Peng et al., 2023), human action recognition (Wang et al., 2023) and facial expression identification (Shabbir & Rout, 2023).

## 4.3 Knowledge-Driven Applications

Through the incorporation of domain expertise, recent studies have developed more sophisticated self-training systems that reduce label noise in the pseudo-labeling phase across diverse applications. In the subsequent sections, we consider advances made in this context in the domains of speech recognition, anomaly detection, genomics and proteomics.

## 4.3.1 Speech Recognition

Newly developed methods have introduced filtering mechanisms that are congruent with domain knowledge for end-to-end speech recognition. These mechanisms establish rules that assess pseudo-labels using criteria specific to the domain, for example by using filters to verify whether certain phonetic patterns that are common in the domain are present in the pseudo-labels (Gheini et al., 2023). Similar techniques incorporate phonetic information relevant to the domain to validate pseudo-labels; incorrectly labeled examples that violate phonetic constraints are discarded from training (Ling et al., 2022). Other approaches integrate domain-specific language models in the pseudo-label generation process in order to ensure that the generated labels adhere to the linguistic nuances and terminology of the domain. In this line, Kahn et al. (2020) introduced a self-training approach with one base classifier combined with a language model for pseudo-labeling. Their approach involves tailored filtering methods designed to address common errors arising from sequence-to-sequence models, alongside an inventive ensemble technique for enhancing the breadth of pseudo-label variations. Building upon this idea, Xu et al. (2021) showed that the synergy between self-training and unsupervised pre-training with wav2vec 2.0 (Baevski et al., 2020) offers mutual benefits across diverse scenarios involving labeled and unlabeled data. As in image classification, alternative methods for speech recognition apply data-augmentation techniques, tailored to the unique aspects of the domain, to enhance the robustness of the model's predictions and consequently the quality of the pseudo-labels. In this sense, Bartelds et al. (2023) employed a text-to-speech system to generate audio training data from text-only sources.
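A hedged sketch of this kind of knowledge-driven pseudo-label filtering is given below; `lm_score` and `violates_phonetic_constraints` are hypothetical stand-ins for a domain language model and rule-based phonetic checks, not functions from the cited systems.

```python
def filter_pseudo_labels(pseudo_pairs, lm_score, violates_phonetic_constraints,
                         lm_threshold=-50.0):
    """Keep (audio, transcript) pseudo-labeled pairs only if the transcript
    is plausible under a domain language model and respects phonetic rules."""
    kept = []
    for audio, transcript in pseudo_pairs:
        if violates_phonetic_constraints(transcript):
            continue                         # discard rule-violating labels
        if lm_score(transcript) < lm_threshold:
            continue                         # discard linguistically implausible labels
        kept.append((audio, transcript))
    return kept
```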
## 4.3.2 Anomaly Detection

Leveraging domain knowledge to mitigate label noise in pseudo-labels within self-training approaches has also been considered in anomaly detection. In this case, the understanding of the anomaly patterns and characteristics specific to the domain is incorporated in the model. In this regard, Li et al. (2012) identified common anomaly types, their features and potential sources of noise, and Qu et al. (2023) performed time-domain analysis. Also, Feng et al. (2021) created features that capture domain-specific information for video anomaly detection. It was demonstrated that these features highlight the elements crucial for anomaly detection in videos, leading to their use in enhancing the pseudo-labeling phase within self-training. Alternative strategies focus on simulating anomalies within the unlabeled dataset using domain knowledge. This aids the model in learning from a broad spectrum of anomalies, mitigating the risk of becoming overly specialized in a particular anomaly type (Qiu, 2023).

## 4.3.3 Genomics And Proteomics

Datasets in the fields of genomics and proteomics encompass a variety of characteristics, including gene expression levels, epigenetic markers, and genetic variants. These characteristics have been shown to increase the effectiveness of the features used in self-training approaches, together with the selection of important features and their physiologically coherent transformation. Brubaker et al. (2019) incorporated biological context into feature engineering by integrating unsupervised modeling of datasets related to human disease with a supervised component that concentrated on training with mouse data. In this context, Ravinder (2021) amalgamated expression data from three distinct humanized mouse models that were subjected to live attenuated yellow fever vaccine challenges in self-training with different base classifiers. The results of this study show that self-training coupled with NRG-HIS/Fluc mice exhibited the most favorable outcomes across the tested human cohorts. Additionally, El-Manzalawy et al. (2016) employed self-training in conjunction with in silico bioinformatic tools to predict secreted and protective proteins. This was done to eliminate pseudo-label errors from the identified P. falciparum SEPs obtained through proteomics experiments and to predict new SEPs within the P. falciparum proteome. Huang et al. (2021) applied domain-specific quality control steps to clean and pre-process the data. This included filtering out low-quality samples, normalizing data to account for technical biases, and addressing batch effects that can introduce noise. By doing so, they ensured that the unlabeled data fed into the self-training pipeline is as accurate as possible. Chan et al. (2017) utilized reference databases and annotation resources related to genomics. These resources provide information about genes, functional elements, pathways, and biological processes, and incorporating this information into the pseudo-labeling process has been shown to lead to more accurate predictions by aligning them with known biological knowledge. Yu et al. (2023) applied network analysis techniques to identify interactions between genes and proteins. The authors demonstrated that pathway enrichment analysis can help identify genes that are functionally related and likely to be co-regulated; this information has been shown to guide the self-training process toward more coherent and biologically plausible pseudo-labels.
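As a rough illustration of the quality-control step described by Huang et al. (2021), the following sketch filters and normalizes unlabeled samples before self-training; the per-sample quality score and the threshold are illustrative assumptions, not part of the cited pipeline.

```python
import numpy as np

def qc_filter_and_normalize(X_unlab, quality, min_quality=0.8):
    """Drop low-quality unlabeled samples and standardize each feature
    (e.g., an expression level) before feeding the data into a
    self-training pipeline."""
    keep = quality >= min_quality               # per-sample quality scores in [0, 1]
    X = X_unlab[keep]
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-8
    return (X - mu) / sigma
```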
**General observations.** The key observations from these applications reveal that, in pseudo-labeling, employing fixed thresholds often yields suboptimal outcomes, underscoring the importance of dynamic thresholding for optimal results. Furthermore, accounting for pseudo-label noise improves both generalization and class differentiation. In Appendix A, we show the impact of dynamic thresholding on pseudo-labeling across the general benchmarks proposed in Feofanov et al. (2019) and examine the noise considerations in two image classification collections studied in Chen et al. (2022).

## 5 Conclusion And Perspectives

In this survey, we provided an overview of self-training approaches for semi-supervised learning, which have received increasing attention in recent years. First, we discussed the various strategies that have been proposed for selecting unlabeled samples for pseudo-labeling. We emphasized the significance of considering margin distributions across unlabeled data as a pivotal factor in the development of these strategies. Next, we provided an overview of the diverse variants of self-training explored in the literature, along with related approaches. Furthermore, we examined recent theoretical advances in this research domain and outlined the principal characteristics of self-training employed in several widely recognized applications. Lastly, we explored the impact of fundamental aspects of self-training on a range of benchmark datasets.

While the self-training approach is currently in widespread use, there are extensive opportunities for future research. Presently, the majority of studies have concentrated on perturbation-based deep learning, particularly in the domains of visual, text, and audio applications. However, there exist numerous other domains, such as industrial time-series or medical data, where the application of self-training could prove highly beneficial. Recent research emphasizes the potential of exploring self-training methods from a theoretical standpoint, particularly in addressing the challenge of training a final classifier on data with noisy labels (Hadjadj et al., 2023). It has also been demonstrated that accurately estimating the confidence of pseudo-labels is crucial for effective self-training (Odonnat et al., 2024). Therefore, theoretically establishing the correlation between performance and the level of uncertainty in pseudo-labeling could be a valuable direction for future research, especially in analyzing self-training within the context of learning problems affected by domain shifts.

## References

Y. S. Abu-Mostafa. Machines that learn from hints. *Scientific American Magazine*, 272(4):64–85, 1995.
Y. Altun. *Discriminative Methods for Label Sequence Learning*. PhD thesis, Brown University, USA, 2005.
Y. Altun, D. A. McAllester, and M. Belkin. Margin semi-supervised learning for structured variables. In *Advances in Neural Information Processing Systems 18*, pp. 33–40, 2005.
M.-R. Amini and P. Gallinari. Semi-supervised logistic regression. In *European Conference in Artificial Intelligence - ECAI*, pp. 390–394, 2002.
M.-R. Amini and N. Usunier. *Learning with Partially Labeled and Interdependent Data*. Springer, 2015.
M.-R. Amini, F. Laviolette, and N. Usunier. A transductive bound for the voted classifier with an application to semi-supervised learning. In *Advances in Neural Information Processing Systems 21*, pp. 65–72, 2008.
A. Baevski, Y. Zhou, A. Mohamed, and M. Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Advances in Neural Information Processing Systems*, volume 33, pp. 12449–12460, 2020.
M.-F. Balcan and A. Blum. An augmented PAC model for semi-supervised learning. In *Semi-Supervised Learning*, pp. 396–419. MIT, 2006.
M. Bartelds, N. San, B. McDonnell, D. Jurafsky, and M. Wieling. Making more of little data: Improving low-resource automatic speech recognition using data augmentation. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics*, pp. 715–729, 2023.
M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. *Machine Learning*, 56(1-3):209–239, 2004.
S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In *Conference on Learning Theory - COLT*, pp. 33–44, 2008.
A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In *Conference on Learning Theory - COLT*, pp. 92–100, 1998.
D. K. Brubaker, E. A. Proctor, K. M. Haigis, and D. A. Lauffenburger. Computational translation of genomic responses from experimental model systems to humans. *PLoS Computational Biology*, 15(1):e1006286, 2019.
P. Cascante-Bonilla, F. Tan, Y. Qi, and V. Ordonez. Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning. In *AAAI Conference on Artificial Intelligence*, pp. 6912–6920, 2021.
V. Castelli and T. M. Cover. On the exponential value of labeled samples. *Pattern Recognition Letters*, 16(1):105–111, 1995.
K.-L. Chan, R. Rosli, T. V. Tatarinova, M. Hogan, M. Firdaus-Raih, and L. Eng-Ti Low. Seqping: gene prediction pipeline for plant genomes using self-training gene models and transcriptomic data. *BMC Bioinformatics*, 17, 2017.
O. Chapelle, B. Schölkopf, and A. Zien. *Semi-Supervised Learning*. The MIT Press, 1st edition, 2010.
B. Chen, J. Jiang, X. Wang, P. Wan, J. Wang, and M. Long. Debiased self-training for semi-supervised learning. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 32424–32437, 2022.
X. Chen, Y. Yuan, G. Zeng, and J. Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In *Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 2613–2622, 2021.
Y. Chen, C. Wei, A. Kumar, and T. Ma. Self-training avoids using spurious features under domain shift. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 21061–21071, 2020.
T. Cheng, X. Wang, S. Chen, Q. Zhang, and W. Liu. Boxteacher: Exploring high-quality pseudo labels for weakly supervised instance segmentation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 3145–3154, June 2023.
Y. Chong, Y. Ding, Q. Yan, and S. Pan. Graph-based semi-supervised learning: A review. *Neurocomputing*, 408:216–230, 2020.
F. d'Alché Buc, Y. Grandvalet, and C. Ambroise. Semi-supervised marginboost. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 553–560, 2001.
P. Derbeko, R. El-Yaniv, and R. Meir. Explicit learning curves for transduction and application to clustering and compression algorithms. *Journal of Artificial Intelligence Research*, 22:117–142, 2004.
C. Donahue, M. Lee, and P. Liang. Enabling language models to fill in the blanks. In *58th Annual Meeting of the Association for Computational Linguistics - ACL*, pp. 2492–2501, 2020.
Y. Du, Y. Shen, H. Wang, J. Fei, W. Li, L. Wu, R. Zhao, Z. Fu, and Q. Liu. Learning from future: A novel self-training framework for semantic segmentation. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 4749–4761, 2022.
D. Dua and C. Graff. UCI machine learning repository, 2017. URL https://archive.ics.uci.edu/ml/index.php.
Y. El-Manzalawy, E. E. Munoz, S. E. Lindner, and V. Honavar. Plasmosep: Predicting surface-exposed proteins on the malaria parasite using semisupervised self-training and expert-annotated data. *Proteomics*, 16(23):2967–2976, 2016.
Y. Fang, L. Dong, H. Bao, X. Wang, and F. Wei. Corrupted image modeling for self-supervised visual pre-training. In *The 11th International Conference on Learning Representations - ICLR*, 2023.
J.-C. Feng, F.-T. Hong, and W.-S. Zheng. Mist: Multiple instance self-training framework for video anomaly detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 14009–14018, June 2021.
V. Feofanov, E. Devijver, and M.-R. Amini. Transductive bounds for the multi-class majority vote classifier. In *AAAI Conference on Artificial Intelligence*, pp. 3566–3573, 2019.
S. C. Fralick. Learning to recognize patterns without a teacher. *IEEE Transactions on Information Theory*, 13(1):57–64, 1967.
S. Frei, D. Zou, Z. Chen, and Q. Gu. Self-training converts weak learners to strong learners in mixture models. In *International Conference on Artificial Intelligence and Statistics - AISTATS*, pp. 8003–8021, 2022.
G. French, S. Laine, T. Aila, M. Mackiewicz, and G. D. Finlayson. Semi-supervised semantic segmentation needs strong, varied perturbations. In *British Machine Vision Conference - BMVC*, pp. 1–14, 2020.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. March, and V. Lempitsky. Domain-adversarial training of neural networks. *Journal of Machine Learning Research*, 17(59):1–35, 2016.
M. Gheini, T. Likhomanenko, M. Sperber, and H. Setiawan. Joint speech transcription and translation: Pseudo-labeling with out-of-distribution data. In *Findings of the Association for Computational Linguistics: ACL 2023*, pp. 7637–7650, 2023.
G. Ghiasi, B. Zoph, E. D. Cubuk, Q. V. Le, and T.-Y. Lin. Multi-task self-training for learning general representations. In *International Conference on Computer Vision - ICCV*, pp. 8836–8845, 2021.
Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In *Advances in Neural Information Processing Systems 17*, pp. 529–536, 2004.
C. Gulcehre, T. Le Paine, S. Srinivasan, K. Konyushkova, L. Weerts, A. Sharma, A. Siddhant, A. Ahern, M. Wang, C. Gu, W. Macherey, A. Doucet, O. Firat, and N. de Freitas. Reinforced self-training (ReST) for language modeling. arXiv preprint, 2023.
A. Gupta, S. Menghani, S. K. Rallabandi, and A. W. Black. Unsupervised self-training for sentiment analysis of code-switched data. In *Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching*, 2021.
L. Hadjadj, M.-R. Amini, and S. Louhichi. Self-training of halfspaces with generalization guarantees under massart mislabeling noise model. In *International Joint Conference on Artificial Intelligence - IJCAI*, pp. 3777–3785, 2023.
B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 3124–3132, 2018.
K. Huang, C. Xiao, L. M. Glass, C. W. Critchlow, G. Gibson, and J. Sun. Machine learning applications for therapeutic tasks with genomics data. *Patterns*, 2(10), 2021.
W. Huang, Y. Shi, Z. Xiong, Q. Wang, and X. X. Zhu. Semi-supervised bidirectional alignment for remote sensing cross-domain scene classification. *ISPRS Journal of Photogrammetry and Remote Sensing*, 195:192–203, 2023.
N. Inoue, R. Furuta, T. Yamasaki, and K. Aizawa. Cross-domain weakly-supervised object detection through progressive domain adaptation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5001–5009, 2018.
T. Joachims. Transductive inference for text classification using support vector machines. In *International Conference on Machine Learning - ICML*, pp. 200–209, 1999.
B. Joshi, M.-R. Amini, I. Partalas, F. Iutzeler, and Y. Maximov. Aggressive sampling for multi-class to binary reduction with applications to text classification. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 4235–4243, 2017.
J. Kahn, A. Lee, and A. Hannun. Self-training for end-to-end speech recognition. In *International Conference on Acoustics, Speech and Signal Processing - ICASSP*, pp. 7084–7088, 2020.
G. Karamanolakis, S. Mukherjee, G. Zheng, and A. Awadallah. Self-training with weak supervision. In *North American Chapter of the Association for Computational Linguistics - NAACL*, pp. 845–863, 2021.
D. P. Kingma, S. Mohamed, D. Jimenez Rezende, and M. Welling. Semi-supervised learning with deep generative models. In *Advances in Neural Information Processing Systems - NeurIPS*, 2014.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
A. Kumar, T. Ma, and P. Liang. Understanding self-training for gradual domain adaptation. In *International Conference on Machine Learning*, pp. 5468–5479. PMLR, 2020.
S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In *International Conference on Learning Representations - ICLR*, 2017.
D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In *ICML 2013 Workshop on Challenges in Representation Learning*, 2013.
J. D. Lee, Q. Lei, N. Saunshi, and J. Zhuo. Predicting what you already know helps: Provable self-supervised learning. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 309–323, 2021.
G. Li, X. Li, Y. Wang, W. Yichao, D. Liang, and S. Zhang. Dtg-ssod: Dense teacher guidance for semi-supervised object detection. In *Advances in Neural Information Processing Systems*, volume 35, pp. 8840–8852, 2022.
Y. Li, Y. Yin, L. Liu, S. Pang, and Q. Yu. Semi-supervised gait recognition based on self-training. In *9th International Conference on Advanced Video and Signal-Based Surveillance*, pp. 288–293, 2012.
Y.-F. Li and Z.-H. Zhou. Towards making unlabeled data never hurt. In *Proceedings of the 28th International Conference on Machine Learning*, pp. 1081–1088, 2011.
S. Ling, C. Shen, M. Cai, and Z. Ma. Improving pseudo-label training for end-to-end speech recognition using gradient mask. In *International Conference on Acoustics, Speech and Signal Processing - ICASSP*, pp. 8397–8401, 2022.
H. Liu, J. Wang, and M. Long. Cycle self-training for domain adaptation. *Advances in Neural Information Processing Systems*, 34:22968–22981, 2021.
H. Lv, Z. Yue, Q. Sun, B. Luo, Z. Cui, and H. Zhang. Unbiased multiple instance learning for weakly supervised video anomaly detection. In *Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 8022–8031, 2023.
Y. Maximov, M.-R. Amini, and Z. Harchaoui. Rademacher complexity bounds for a penalized multi-class semi-supervised algorithm. *Journal of Artificial Intelligence Research*, 61:761–786, 2018.
Y. Meng, Y. Zhang, J. Huang, C. Xiong, H. Ji, C. Zhang, and J. Han. Text classification using label names only: A language model self-training approach. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 9006–9017, 2020.
T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 41(8):1979–1993, 2019.
S. Mukherjee and A. Awadallah. Uncertainty-aware self-training for few-shot text classification. *Advances in Neural Information Processing Systems - NeurIPS*, 33:21199–21212, 2020.
N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In *Advances in Neural Information Processing Systems*, pp. 1196–1204, 2013.
K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In *Proceedings of the International Conference on Information and Knowledge Management - CIKM*, pp. 86–93, 2000.
K. Nigam, A. McCallum, and T. Mitchell. Semi-supervised text classification using EM. In *Semi-Supervised Learning*, pp. 32–55. MIT, 2006.
P. Niyogi. Manifold regularization and semi-supervised learning: Some theoretical analyses. *Journal of Machine Learning Research*, 14(1):1229–1250, 2013.
A. Odonnat, V. Feofanov, and I. Redko. Leveraging ensemble diversity for robust self-training in the presence of sample selection bias. In *International Conference on Artificial Intelligence and Statistics - AISTATS*, 2024.
A. Oliver, A. Odena, C. Raffel, E.-D. Cubuk, and I. Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In *Advances in Neural Information Processing Systems - NeurIPS*, 2018.
U. Ozbulak, H. J. Lee, B. Boga, E. Timothy Anzaku, H. Park, A. Van Messem, W. De Neve, and J. Vankerschaver. Know your self-supervised learning: A survey on image-based generative and discriminative training. *Transactions on Machine Learning Research*, 2023. URL https://openreview.net/forum?id=Ma25S4ludQ.
G. Patrini, A. Rozza, A. K. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In *Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 2233–2241, 2017.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, and V. Dubourg. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011.
Z. Peng, D. Zhang, S. Tian, W. Wu, L. Yu, S. Zhou, and S. Huang. Faxmatch: Multi-curriculum pseudo-labeling for semi-supervised medical image classification. *Medical Physics*, 50(5):3210–3222, 2023.
C. Qiu. *Self-Supervised Anomaly Detection with Neural Transformations*. Doctoral thesis, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, 2023.
C. Qu, H. Zhang, R. Zhang, S. Zou, L. Huang, and H. Li. Multiclass anomaly detection of bridge monitoring data with data migration between different bridges for balancing data. *Applied Sciences*, 13(13), 2023.
H. Rangwani, S. Ramasubramanian, S. Takemori, K. Takashi, Y. Umeda, and V. B. Radhakrishnan. Cost-sensitive self-training for optimizing non-decomposable metrics. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 26994–27007, 2022.
D. Ravinder. *Using machine learning to increase the predictive value of humanized mouse models for the human immune response to YFV-17D*. Master of Engineering thesis in Biomedical Engineering, Massachusetts Institute of Technology, 2021.
P. Rigollet. Generalization error bounds in semi-supervised classification under the cluster assumption. *Journal of Machine Learning Research*, 8:1369–1392, 2007.
K. Saito, Y. Ushiku, and T. Harada. Asymmetric tri-training for unsupervised domain adaptation. In *International Conference on Machine Learning*, pp. 2988–2997. PMLR, 2017.
R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In *International Conference on Machine Learning*, pp. 322–330, 1997.
M. C. Schiappa, Y. S. Rawat, and M. Shah. Self-supervised learning for videos: A survey. *ACM Computing Surveys*, 2022.
H. Scudder. Adaptive communication receivers. *IEEE Transactions on Information Theory*, 11(2):167–174, 1965.
N. Shabbir and R. Kumar Rout. Fgbcnn: A unified bilinear architecture for learning a fine-grained feature representation in facial expression recognition. *Image and Vision Computing*, 137, 2023.
L. Shi and W. Liu. Adversarial self-training improves robustness and generalization for gradual domain adaptation. *Advances in Neural Information Processing Systems*, 36, 2024.
W. Shi, Y. Gong, C. Ding, Z. Ma, X. Tao, and N. Zheng. Transductive semi-supervised deep learning using min-max features. In *European Conference on Computer Vision - ECCV*, pp. 311–327, 2018.
C. Shorten and T.-M. Khoshgoftaar. A survey on image data augmentation for deep learning. *Journal of Big Data*, 6(1), 2019.
A. Singh, R. Nowak, and J. Zhu. Unlabeled data: Now it helps, now it doesn't. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 513–521, 2008.
A. Singh, J. Co-Reyes, R. Agarwal, A. Anand, P. Patil, X. Garcia, P. Liu, J. Harrison, J. Lee, K. Xu, A. Parisi, A. Kumar, A. Alemi, A. Rizkowsky, A. Nova, B. Adlam, B. Bohnet, G. Elsayed, H. Sedghi, I. Mordatch, I. Simpson, I. Gur, J. Snoek, J. Pennington, J. Hron, K. Kenealy, K. Swersky, K. Mahajan, L. Culp, L. Xiao, M. Bileschi, N. Constant, R. Novak, R. Liu, T. Warkentin, Y. Qian, Y. Bansal, E. Dyer, B. Neyshabur, J. Sohl-Dickstein, and N. Fiedel. Beyond human data: Scaling self-training for problem-solving with language models. arXiv preprint, 2023.
K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. Raffel, E. Cubuk, A. Kurakin, and C.-L. Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 596–608, 2020.
A. Sportisse, H. Schmutz, O. Humbert, C. Bouveyron, and P.-A. Mattei. Are labels informative in semi-supervised learning? Estimating and leveraging the missing-data mechanism. In *Proceedings of the 40th International Conference on Machine Learning - ICML*, pp. 32521–32539, 2023.
A. Tarvainen and H. Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 1195–1204, 2017.
G. Tür, D. Tür, and R.-E. Schapire. Combining active and semi-supervised learning for spoken language understanding. *Speech Communication*, 45(2):171–186, 2005.
L. G. Valiant. A theory of the learnable. *Communications of the ACM*, 27(11):1134–1142, 1984.
J. E. Van Engelen and H. H. Hoos. A survey on semi-supervised learning. *Machine Learning*, 109(2):373–440, 2020.
V. Vapnik. *Statistical Learning Theory*. Wiley-Interscience, 1998.
C. Wang, J. Luo, Xing L., H. Qi, and Z. Jin. V-dixmatch: A semi-supervised learning method for human action recognition in night video sensing. *IEEE Sensors Journal*, 2023.
C. Wei, K. Shen, Y. Chen, and T. Ma. Theoretical analysis of self-training with deep networks on unlabeled data. In *International Conference on Learning Representations - ICLR*, 2021.
Q. Wu, Z. Lin, B. F. Karlsson, J.-G. Lou, and B. Huang. Single-/multi-source cross-lingual NER via teacher-student learning on unlabeled data in target language. In *Annual Conference of the Association for Computational Linguistics - ACL*, 2020.
Q. Xie, Z. Dai, E. H. Hovy, T. Luong, and Q. Le. Unsupervised data augmentation for consistency training. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 6256–6268, 2020a.
Q. Xie, M.-T. Luong, E. H. Hovy, and Q. V. Le. Self-training with noisy student improves imagenet classification. In *Conference on Computer Vision and Pattern Recognition - CVPR*, pp. 10684–10695, 2020b.
Q. Xu, A. Baevski, T. Likhomanenko, P. Tomasello, A. Conneau, R. Collobert, G. Synnaeve, and M. Auli. Self-training and pre-training are complementary for speech recognition. In *International Conference on Acoustics, Speech and Signal Processing - ICASSP*, pp. 3030–3034, 2021.
X. Yang, Z. Song, I. King, and Z. Xu. A survey on deep semi-supervised learning. *IEEE Transactions on Knowledge and Data Engineering*, 35(9):8934–8954, 2023.
Y. Yaslan and Z. Cataltepe. Co-training with relevant random subspaces. *Neurocomputing*, 73(10-12):1652–1661, 2010.
J. Yu, X. Wang, J. Zhao, C. Yang, and W. Chen. STAD: Self-training with ambiguous data for low-resource relation extraction. In *Proceedings of the 29th International Conference on Computational Linguistics - COLING*, pp. 2044–2054, 2022.
Z. Yu, Y. Su, Y. Lu, Y. Yang, F. Wang, S. Zhang, Y. Chang, K.-C. Wong, and X. Li. Topological identification and interpretation for single-cell gene regulation elucidation across multiple platforms using scMGCA. *Nature Communications*, 14(400), 2023.
K. Zadeh and S. Rasoul. Adapting self-training for semantic role labeling. In *Proceedings of the ACL 2010 Student Research Workshop*, pp. 91–96, 2010.
S. Zagoruyko and N. Komodakis. Wide residual networks. In *Proceedings of the British Machine Vision Conference - BMVC*, 2016.
B. Zhang, Y. Wang, W. Hou, H. Wu, J. Wang, M. Okumura, and T. Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In *Advances in Neural Information Processing Systems - NeurIPS*, pp. 18408–18419, 2021.
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, and J. Xiong. How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. In *International Conference on Learning Representations - ICLR*, 2022.
K. Zhong, Z. Song, P. Jain, P. L. Bartlett, and I. S. Dhillon. Recovery guarantees for one-hidden-layer neural networks. In *International Conference on Machine Learning - ICML*, pp. 4140–4149, 2017.
X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In *International Conference on Machine Learning - ICML*, pp. 912–919, 2003.
Y. Zou, Z. Yu, B. Kumar, and J. Wang. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In *European Conference on Computer Vision - ECCV*, pp. 289–305, 2018.
Y. Zou, Z. Yu, X. Liu, B. Kumar, and J. Wang. Confidence regularized self-training. In *International Conference on Computer Vision - ICCV*, pp. 5981–5990, 2019.

| Data set | # of labeled examples (m) | # of unlabeled examples (u) | Dimension (d) | # of classes (K) |
|------------|------|------|-----|-----|
| Vowel | 99 | 891 | 10 | 11 |
| Protein | 129 | 951 | 77 | 8 |
| PageBlocks | 1094 | 4379 | 10 | 5 |
| Isolet | 389 | 7408 | 617 | 26 |
| HAR | 102 | 10197 | 561 | 6 |
| Pendigits | 109 | 10883 | 16 | 10 |
| Letter | 400 | 19600 | 16 | 26 |
| Fashion | 175 | 69825 | 784 | 10 |
| MNIST | 175 | 69825 | 784 | 10 |

Table 2: Characteristics of the data sets used in our experiments; d and K correspond respectively to the dimension of the input space and the number of classes.

## A Empirical Study

In this section, we evaluate the effectiveness and performance of the self-training algorithm. This assessment is based on the key features presented in the preceding sections and is conducted across multiple benchmark scenarios. Our primary focus is on scenarios characterized by severely limited labeled training data, where the use of complex base classifiers such as deep learning models is unfeasible. Additionally, we address the prevalent scenario where there are sufficient labeled training data to develop an initial supervised complex model.

**The impact of threshold selection.** We first study the effect of automatically selecting the threshold for pseudo-labeling on 9 publicly available data sets proposed for semi-supervised learning (Dua & Graff, 2017). The characteristics of these datasets are presented in Table 2. It is worth noting that certain datasets contain only a limited number of labeled training examples, comprising just a few hundred instances and accounting for less than 1% of the total training examples. This condition underscores the unsuitability of employing complex base classifiers. In the experiments, a Random Forest was instead employed as the base classifier, using the scikit-learn implementation (Pedregosa et al., 2011) with 200 trees of maximum depth, leaving the other parameters at their default values. The primary objective was to assess and contrast the classifier's performance in two scenarios: the supervised scenario (denoted by RF) and the self-training scenario in which pseudo-labeling is conducted automatically following the approach introduced by Feofanov et al. (2019)¹ (denoted by PL∗). Additionally, we investigated the impact of setting the pseudo-labeling threshold to predefined values θ ∈ {0.5, 0.7, 0.9} (denoted by PLθ). The automatic pseudo-labeling strategy selects the threshold that minimizes a bound on the error of the Random Forest classifier over the unlabeled training samples. Results are summarized in Table 3. Experiments are repeated 20 times by randomly choosing the labeled training examples, and ↓ indicates that the performance is statistically worse than the best result, shown in bold, according to the Wilcoxon rank-sum test.
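For reference, the following is a minimal scikit-learn sketch of the fixed-threshold baseline PLθ; the iterative pseudo-labeling schedule is a simplifying assumption, and the automatic threshold selection of PL∗ is available in the repository cited in the footnote rather than reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train_fixed_threshold(X_lab, y_lab, X_unlab, theta=0.9, max_iter=10):
    """PL_theta baseline: pseudo-label unlabeled examples whose predicted
    class probability exceeds theta, then retrain the Random Forest."""
    clf = RandomForestClassifier(n_estimators=200).fit(X_lab, y_lab)
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        probs = clf.predict_proba(X_unlab)
        take = probs.max(axis=1) >= theta
        if not take.any():
            break
        pseudo = clf.classes_[probs[take].argmax(axis=1)]  # map indices to labels
        X_lab = np.concatenate([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~take]
        clf = RandomForestClassifier(n_estimators=200).fit(X_lab, y_lab)
    return clf
```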
These results suggest that the effectiveness of self-training heavily relies on the method used to determine the pseudo-labeling threshold. When the threshold is automatically determined, self-training (i.e. PL∗) can perform competitively, indicating that this approach has the potential to improve results compared to the supervised RF. However, when a fixed threshold is applied, self-training tends to yield inferior results compared to the supervised learning approach. This suggests that an arbitrarily chosen threshold might not effectively capture the underlying patterns in the data for the pseudo-labeling process, leading to suboptimal performance. Moreover, when the threshold is too low, as for θ ∈ {0.5, 0.7}, pseudo-labeling is likely to produce label noise and degrades the performance of self-training with respect to the supervised RF classifier in all cases. When the threshold is too high (i.e. θ = 0.9), self-training becomes competitive compared to RF on Isolet and MNIST, but the quantity of pseudo-labeled examples seems insufficient for efficient learning.

¹ https://github.com/vfeofanov/trans-bounds-maj-vote

| Data set | RF | PL θ=0.5 | PL θ=0.7 | PL θ=0.9 | PL∗ |
|------------|--------------|--------------|--------------|--------------|-------------|
| Vowel | .586 ± .028 | .489↓ ± .016 | .531↓ ± .034 | .576↓ ± .028 | .586 ± .026 |
| Protein | .764↓ ± .032 | .653↓ ± .024 | .687↓ ± .036 | .724↓ ± .018 | .781 ± .034 |
| PageBlocks | .965 ± .003 | .931↓ ± .003 | .964 ± .004 | .965 ± .002 | .966 ± .002 |
| Isolet | .854↓ ± .016 | .648↓ ± .018 | .7↓ ± .04 | .861↓ ± .08 | .875 ± .014 |
| HAR | .851 ± .024 | .76↓ ± .04 | .81↓ ± .041 | .823↓ ± .035 | .854 ± .026 |
| Pendigits | .863↓ ± .022 | .825↓ ± .022 | .839↓ ± .036 | .845↓ ± .024 | .884 ± .022 |
| Letter | .711 ± .011 | .062↓ ± .011 | .651↓ ± .015 | .673↓ ± .015 | .717 ± .013 |
| Fashion | .718 ± .022 | .625↓ ± .014 | .64↓ ± .04 | .68↓ ± .014 | .723 ± .023 |
| MNIST | .798↓ ± .015 | .665↓ ± .012 | .705↓ ± .055 | .823↓ ± .045 | .857 ± .013 |

Table 3: Classification performance (accuracy) on 9 publicly available data sets. Best results are shown in bold, and ↓ indicates that the performance is statistically worse than the best result at the 0.01 significance level.

In summary, the findings emphasize the importance of a dynamic and adaptive threshold selection mechanism when implementing self-training.

**Noise account.** We now consider the case where the initial labeled training set is large enough to train deep neural networks, and we examine the effects of accounting for noise in the pseudo-labeling process, along with dynamic threshold selection, on CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009). Both datasets contain 32x32-pixel RGB images belonging to 10 and 100 classes, respectively; 50000 examples are used for training and 10000 samples for testing. We consider the debiased self-training approach (DST) (Chen et al., 2022) to address the presence of noise in pseudo-labeling, in conjunction with the FlexMatch method (Zhang et al., 2021) for the dynamic determination of the pseudo-labeling threshold. As outlined in Section 2.3, DST involves training a dedicated head on pseudo-labeled examples, allowing the model to implicitly capture and account for the noise inherent in the pseudo-labels. For FlexMatch, we followed the same experimental protocol as Zhang et al. (2021). In this case, a Wide ResNet (WRN) (Zagoruyko & Komodakis, 2016) was used as the base classifier in self-training. Parameter learning was accomplished using stochastic gradient descent (SGD) with a momentum coefficient of 0.9.
The initial learning rate was set to η0 = 0.03 with a cosine learning rate decay schedule η = η0 cos(7πt/16T), where t denotes the current training step and T is the total number of training steps, set to 2^20. Additionally, exponential moving averaging with a momentum of 0.999 was implemented, and the batch size for labeled data was fixed to 64. For DST, we used the code made available by the authors.² We compared FlexMatch with and without the DST approach, denoted respectively by FM and FM+DST. We also compared self-training with a WRN trained in a fully supervised manner. Each experiment was repeated 5 times, changing the seed each time. Figure 2 presents the average accuracy of the different models on the test set for the same number of initial labeled training samples per class, within the set {4, 10, 20, 50}, for both datasets.

On both datasets, accounting for label noise within pseudo-labels leads to improved performance, with the improvement being more pronounced on CIFAR-100. In CIFAR-100, classes are structured into 20 superclasses, each comprising 5 related classes; addressing noise in this more complex task aids class differentiation and enhances the model's ability to generalize. It is worth noting that with a greater number of initial labeled training examples, the gap between the FM and FM+DST approaches narrows, as the model becomes more proficient with the increased labeled data and makes fewer errors in pseudo-labeling.

² https://github.com/thuml/Debiased-Self-Training

Figure 2: Comparisons in terms of accuracy on CIFAR-10 and CIFAR-100 for a varying number of labeled training data. "Supervised" refers to fully supervised learning (m = 50000, u = 0).
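To make the class-wise dynamic thresholding used by FlexMatch in these experiments concrete, the following is a hedged sketch of its curriculum-style threshold adjustment; the normalization and interface are simplified relative to the published implementation of Zhang et al. (2021).

```python
import torch

def flexmatch_thresholds(confidences, pseudo_labels, num_classes, tau=0.95):
    """Class-wise dynamic thresholds: classes that so far receive fewer
    confident pseudo-labels (harder classes) get a lower threshold, so
    more of their unlabeled examples enter training early on."""
    counts = torch.zeros(num_classes)
    for c in range(num_classes):
        counts[c] = ((pseudo_labels == c) & (confidences >= tau)).sum()
    learning_effect = counts / counts.max().clamp(min=1)  # per-class progress in [0, 1]
    return learning_effect * tau                           # tau_c = beta_c * tau
```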
Review 1:

Summary:
- A survey paper focused on self-training within semi-supervised learning. This provides an opportunity for the paper to go deeper, e.g., discuss the theoretical studies of self-training.
- Up-to-date: includes many papers from 2022 and 2023, which is what other survey papers do not cover.

Strengths and Weaknesses:

Strengths:
Although there are already many survey papers on semi-supervised learning, this survey paper focuses on self-training, which makes it a unique survey paper. The survey includes discussions about self-training from various aspects, e.g., theoretical studies of self-training and applications of self-training to different fields such as NLP and computer vision.

Weaknesses:
The title is "Self-training: A survey", but the paper focuses on self-training from the perspective of semi-supervised learning. In my opinion, a more precise title may be "Self-Training in Semi-supervised Learning: A survey". If the paper can expand the discussions about self-training beyond semi-supervised learning, I think the current title will become a great fit. For example, the paper discusses domain adaptation in just one paragraph in the final section, but this should be discussed in depth if the title does not limit the survey to semi-supervised learning. Self-training may also include topics such as knowledge distillation where teacher/student are identical (Furlanello et al., 2018) and is a hot topic in language models (Singh et al., 2023).
Furlanello et al.: https://arxiv.org/abs/1805.04770
Singh et al.: https://arxiv.org/abs/2312.06585

The contribution of Section 6 is not clear. In the first set of experiments (the impact of threshold selection), the experiments seem quite similar to the experiments in Feofanov et al. (2019). Feofanov et al. (2019) already discusses how sensitive the threshold used for pseudo-labeling can be for the final performance. In the second set of experiments: this also seems similar to the experiments in Chen et al. (2022), where they combine FlexMatch and DST. It would be great if the authors can clarify the contributions.

Requested Changes:
I would appreciate it if the authors can read the 2 points I raised in the weaknesses. Some other minor comments are:
- Period at the end of page 1 is missing.
- Page 2: filed --> field
- Period missing at the end of the 2nd last paragraph of page 4
- Table 1: Only the MeanTeacher paper has the method name, while all other papers have author names. Can we make them consistent?
- Period missing, last sentence, 3rd paragraph of Section 7

Broader Impact Concerns:
I do not have any ethical concerns.

==================================================

Review 2:

Summary:
This paper focuses on the self-training framework, which is to learn a classifier iteratively by assigning pseudo-labels to the set of unlabeled training samples with a margin greater than a certain threshold. This paper reviews the self-training methods for binary and multi-class classification as well as their variants and two related approaches. With the empirical studies on CIFAR10/CIFAR100, this paper provides an examination of the impact of factors of the self-training framework. In addition, the related application and future research are discussed.

Strengths and Weaknesses:

Strengths:
- This paper studies the self-training problem which is relevant in many applications.
- This paper provides an investigation of the related work of self-training methods from different perspectives and discusses the application as well as the impact of self-training features.

Weaknesses:
- The major claims or findings of the current survey are not clear, and readers can hardly get the main information that the authors want to present.
- The current presentation can be improved by re-organizing the subsections with an overview to guide the readers to a better understanding of some commonalities and differences.
- The empirical studies and references can be better added for a survey paper.

Other comments/questions:
- Please specify "this subject" at the end of the abstract.
- It could be better to explicitly summarize the results or information of the presentation or examination of the self-training methods or the impact of significant self-training features in the abstract and introduction. Otherwise, the readers can not clearly catch up with the current version's unique discoveries or major claims.
- Please add more references (citations) in the introduction part to support each sentence with claims.
- Is there any rigorous definition for "smoothness"? It could be better to provide some illustrative figures to present the central hypothesis.
- Is there any relationship among the three main semi-supervised learning families? The writing of this part can be further improved by considering summarizing it as a table.
- The current notation of a part of the unlabeled data is not appropriate, can we use $\bar{u}$?
- For each part from Sec.3.1 to Sec.3.3, it could be better to draw some important message after simply explaining each work.
- Except for the benchmark CIFAR10/CIFAR100, are there any other results on different datasets or large-scale datasets (like ImageNet)? The current empirical studies are not very convincing in supporting the discovery.
- The current presentation has a large room to improve by better structuring and summarization.

Requested Changes:
Please consider the above "other comments/questions" part to revise the draft.

Broader Impact Concerns:
N/A

==================================================

Review 3:

Summary:
This paper presents a comprehensive review of self-training methods for both binary and multi-class classification tasks, along with their variants and two related approaches. Additionally, the authors focus on prevalent applications leveraging self-training techniques and suggest avenues for future research. The authors assert that this paper constitutes the first extensive and exhaustive survey on self-training methodologies.

Strengths and Weaknesses:

Strengths:
1. This paper serves as a comprehensive review, featuring a well-structured framework encompassing sections dedicated to framework introduction, methodology, application scenarios, challenges, conclusion, and perspectives.
2. This paper is the first overview and summary of self-training.

Weaknesses:
1. This paper mainly discusses 14 algorithms within self-training. However, the classification of these algorithms lacks clarity. It is suggested that the author explicitly categorize the algorithms to enhance readers' understanding of the research landscape in the self-training domain.
2. The structure of the paper is not sufficiently rigorous. Moreover, the author fails to emphasize the logical relationships between relevant methodologies, leading to suboptimal coherence in the paper.
3. Redundancy issues are apparent in this paper.
For example, the content of the section introduction does not align closely with the subsequent presentation of methodologies. It is recommended that the author streamline the content and dedicate specialized sections to elaborate on the foundational knowledge of semi-supervised learning and self-training.
4. The references exhibit formatting issues, with some papers lacking page numbers. Additionally, the author overlooks the inclusion of several crucial references pertinent to self-training. It is advised that the author rectify the formatting of the references and supplement the paper with the relevant omitted literature.
[a] Self-training with noisy student improves imagenet classification. CVPR 2020
[b] Multi-task self-training for learning general representations. ICCV 2021

Requested Changes:
Refer to the Weaknesses.

Broader Impact Concerns:
None

==================================================

Metareview:

Recommendation: Reject

Comment:
The submission surveyed the self-training approach to semi-supervised learning. After the rebuttal and revision, one reviewer voted for acceptance (nd8o) and the other two voted for rejection. Their post-rebuttal comments can be found below.

> (Reviewer *nd8o*) Thank you for updating the paper. The paper pushed the experiments to the Appendix, but the contributions of the experiments remain unclear. It would be better to explicitly explain the differences between the experiments in this paper and the experiments in Feofanov et al. (2019), Chen et al. (2022).

> (Reviewer *LXk5*) After reading the revision and the reviews of other reviewers, I have reached a decision leaning toward rejection. This decision is grounded in several aspects. Firstly, the authors have not adequately addressed the weaknesses. For example, the logical coherence among pertinent methodologies remains deficient, and there exist missing page numbers or inaccurate publication years in the references. Secondly, the revision does not include experimental results of the self-training algorithms, which is a crucial aspect of the comprehensiveness of a survey paper. Lastly, I agree with Reviewer nd8o's comments regarding the inadequacy of the paper's title, which does not aptly reflect its contents.

> (Reviewer *MFmj*) Summary: This paper focuses on the self-training framework, which is to learn a classifier iteratively by assigning pseudo-labels to the set of unlabeled training samples with a margin greater than a certain threshold. This paper reviews the self-training methods for binary and multi-class classification as well as their variants and two related approaches. With the empirical studies on CIFAR10/CIFAR100, this paper provides an examination of the impact of factors of the self-training framework. In addition, the related application and future research are discussed.
>
> After carefully reading the comments from the other reviewers and the response from the authors, I'm leaning reject for the current version of this work for the following reasons:
>
> (1) The structure of this paper remains incoherent, making the readers lost in the claims from different perspectives although the analysis is comprehensive. For example, the newly revised Section 1.4 can still not emphasize the logical relationship between relevant methodologies. The empirical study seems not closely related to the previous method description.
>
> (2) The author's response fails to address the reviewer's comments clearly.
> There is no point-to-point response to all the reviewers' requested changes, making the second-round review hard to distinguish the differences from the first version and also hard to understand the exact response corresponding to the previous weaknesses.
>
> (3) Some requested changes are not fully addressed, making some claims remain unclear. For example, the authors didn't explicitly explain the "smoothness" and there is limited summarization in each section to convey the major information of the survey results. For the experiments, the empirical study is not sufficient and not insightful; it is also pointed out (by reviewer nd8o) as similar to previous related works.
>
> Overall, the reviewer thinks the current presentation still has room to improve by considering all the unsolved issues. I don't think we could accept the current version of this submission to be published on TMLR.

In particular, I strongly agree with nd8o and suggest you carefully address the comments from nd8o about the title issue and the experiment issue (as a survey paper, you cannot assume that a reader looking at your title knows that self-training is for semi-supervised learning and different from self-supervised learning/training). The first post-rebuttal comment from MFmj may also be very important for improving the quality of this manuscript.

==================================================
# LEAD: Min-Max Optimization From A Physical Perspective

Reyhane Askari Hemmat∗ *reyhane.askari.hemmat@umontreal.ca*
Department of Computer Science and Operations Research, University of Montreal and Mila, Quebec AI Institute

Amartya Mitra∗ *amitr003@ucr.edu*
Department of Physics and Astronomy, University of California, Riverside

Guillaume Lajoie *g.lajoie@umontreal.ca*
Department of Mathematics and Statistics, University of Montreal and Mila, Quebec AI Institute; Canada CIFAR AI Chair

Ioannis Mitliagkas *ioannis@iro.umontreal.ca*
Department of Computer Science and Operations Research, University of Montreal and Mila, Quebec AI Institute; Canada CIFAR AI Chair

∗Equal contribution. A significant part of this work was done while Amartya Mitra was interning at Mila.

Reviewed on OpenReview: *https://openreview.net/forum?id=vXSsTYs6ZB*

## Abstract

Adversarial formulations such as generative adversarial networks (GANs) have rekindled interest in two-player min-max games. A central obstacle in the optimization of such games is the rotational dynamics that hinder their convergence. In this paper, we show that game optimization shares dynamic properties with particle systems subject to multiple forces, and that one can leverage tools from physics to improve optimization dynamics. Inspired by the physical framework, we propose LEAD, an optimizer for min-max games. Next, using Lyapunov stability theory and spectral analysis, we study LEAD's convergence properties in continuous and discrete time settings for a class of quadratic min-max games, demonstrating linear convergence to the Nash equilibrium. Finally, we empirically evaluate our method on synthetic setups and CIFAR-10 image generation to demonstrate improvements in GAN training.

## 1 Introduction

Much of the progress in traditional machine learning can be attributed to the success of gradient-based methods. Modern machine learning systems such as GANs (Goodfellow et al., 2014), multi-task learning, and multi-agent settings (Sener & Koltun, 2018) in reinforcement learning (Bu et al., 2008) require the joint optimization of two or more objectives, which can often be formulated as games. In these *game* settings, best practices and methods developed for single-objective optimization are observed to perform noticeably poorly (Mescheder et al., 2017; Balduzzi et al., 2018b; Gidel et al., 2019). Specifically, they exhibit rotational dynamics in parameter space about the *Nash equilibria* (Mescheder et al., 2017), slowing down convergence. Recent work in game optimization (Wang et al., 2019; Mazumdar et al., 2019; Mescheder et al., 2017; Balduzzi et al., 2018b; Abernethy et al., 2019; Loizou et al., 2020) demonstrates that introducing additional second-order terms in the optimization algorithm helps to suppress these rotations, thereby improving convergence.

Taking inspiration from recent work in single-objective optimization that re-derives existing accelerated methods from a variational perspective (Wibisono et al., 2016; Wilson et al., 2016), in this work, we adopt
Next, using Lyapunov and spectral analysis, we demonstrate linear convergence of our optimizer (LEAD) in both continuous- and discrete-time settings for a class of quadratic min-max games. In terms of empirical performance, LEAD achieves an FID of 10.49 on CIFAR-10 image generation, outperforming existing baselines such as BigGAN (Brock et al., 2018), which is approximately 30 times larger than our baseline ResNet architecture.

What distinguishes LEAD from other second-order optimization methods for min-max games, such as Mescheder et al. (2017); Wang et al. (2019); Mazumdar et al. (2019); Schäfer & Anandkumar (2019), is its computational complexity. All these methods involve Jacobian (or Jacobian-inverse) vector-product computations, commonly implemented using a form of approximation, which makes a majority of them intractable in real-world, large-scale problems. On the other hand, LEAD involves computing only *one block* of the full Jacobian of the gradient vector field multiplied by a vector. This makes our method significantly cheaper and comparable to several first-order methods, as we show in section 6. We summarize our contributions below:

- In section 3, we model gradient descent-ascent as a physical system. Armed with the physical model, we introduce counter-rotational forces to curb the existing rotations in the system. Next, we employ the principle of least action to determine the (continuous-time) dynamics. We then accordingly discretize these resultant dynamics to obtain our optimization scheme, Least Action Dynamics (LEAD).
- In section 4, we use Lyapunov stability theory and spectral analysis to prove a linear convergence of LEAD in continuous- and discrete-time settings for quadratic min-max games.
- Finally, in section 7, we empirically demonstrate that LEAD is computationally efficient. Additionally, we demonstrate that LEAD improves the performance of GANs on different tasks such as 8-Gaussians and CIFAR-10, while comparing the performance of our method against other first- and second-order methods.
- The source code for all the experiments is available at https://github.com/lead-minmax-games/LEAD. Furthermore, we provide a blog post that summarizes our work, available at https://reyhaneaskari.github.io/LEAD.html.

## 2 Problem Setting

**Notation** Continuous-time scalar variables are in uppercase letters (X), discrete-time scalar variables are in lowercase (x) and vectors are in boldface (**A**). Matrices are in blackboard bold (M) and derivatives w.r.t. time are denoted by an over-dot (ẋ). Furthermore, off-diag[M]ᵢ,ⱼ is equal to Mᵢ,ⱼ for i ≠ j, and equal to 0 for i = j, where i, j = 1, 2, . . . , n.

**Setting** In this work, we study the optimization problem of two-player zero-sum games,

$$\min_{X}\max_{Y}f\left(X,Y\right), \tag{1}$$

where f : Rⁿ × Rⁿ → R is assumed to be a convex-concave function which is continuous and twice differentiable w.r.t. X, Y. It is to be noted that although, in developing our framework below, X and Y are assumed to be scalars, our theoretical analysis nevertheless holds for the more general case of vectorial X and Y, as we demonstrate both analytically (Appendix C) and empirically.
## 3 Optimization Mechanics

In our effort to study min-max optimization from a physical perspective, we note from classical physics the following: under the influence of a net force F, the equation of motion of a physical object of mass m is determined by Newton's 2nd Law,

$$m\ddot{X}=F, \tag{2}$$

with the object's coordinate expressed as X_t ≡ X. According to the *principle of least action*¹ (Landau & Lifshitz, 1960), nature "selects" this particular trajectory over other possibilities, as a quantity called the action is extremized along it.

We start with a simple observation that showcases the connection between optimization algorithms and physics. Polyak's heavy-ball momentum (Polyak, 1964) is often perceived from a physical perspective as a ball moving in a "potential" well (the cost function). In fact, it is straightforward to show that Polyak momentum is a discrete counterpart of a continuous-time equation of motion governed by Newton's 2nd Law. For single-objective minimization of an objective function f(x), Polyak momentum follows:

$$x_{k+1}=x_{k}+\beta\left(x_{k}-x_{k-1}\right)-\eta\nabla_{x}f\left(x_{k}\right), \tag{3}$$

where η is the learning rate and β is the momentum coefficient. For simplicity, setting β to one and moving to continuous time, one can rewrite this equation as,

$$\frac{\left(x_{k+\delta}-x_{k}\right)-\left(x_{k}-x_{k-\delta}\right)}{\delta^{2}}=-\frac{\eta}{\delta^{2}}\nabla_{x}f\left(x_{k}\right), \tag{4}$$

and in the limit δ, η → 0, Eq.(4) then becomes (x_k → X(t) ≡ X),

$$m\ddot{X}=-\nabla_{X}f\left(X\right). \tag{5}$$

This is equivalent to Newton's 2nd Law of motion (Eq.(2)) for a particle of mass m = δ²/η, identifying F = −∇_X f(X) (i.e., f(X) acting as a *potential* function (Landau & Lifshitz, 1960)). Thus, Polyak's heavy-ball method, Eq.(3), can be interpreted as an object (ball) of mass m rolling down under a potential f(X) to reach the minimum while accelerating. Armed with this observation, we perform an extension of Eq.(5) to our min-max setup,

$$m\ddot{X}=-\nabla_{X}f\left(X,Y\right),\qquad m\ddot{Y}=\nabla_{Y}f\left(X,Y\right), \tag{6}$$

which represents the dynamics of an object moving under a *curl force* (Berry & Shukla, 2016), F_curl = (−∇_X f, ∇_Y f), in the two-dimensional X–Y plane; see Figure 1 (a) for a simple example where the continuous-time dynamics of a curl force (or, equivalently, a vortex force) diverge away from the equilibrium. Furthermore, it is to be noted that the discretization of Eq.(6) corresponds to Gradient Descent-Ascent (GDA) with momentum 1. The authors of Gidel et al. (2019) found that this optimizer is divergent on the prototypical min-max objective f(X, Y) = XY, thus indicating the need for further improvement. To this end, we note that the failure modes of the optimizer obtained from the discretization of Eq.(6) can be attributed to: (a) an outward rotatory motion of our particle of mass m, accompanied by (b) an increase in its velocity over time. Following these observations, we aim to introduce suitable *counter-rotational* and dissipative forces to our system, in order to tackle (a) and (b) in an attempt to achieve converging dynamics.

¹Also referred to as the Principle of Stationary Action.
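To make this divergence concrete, the following minimal numpy sketch (our own illustration, not code from the paper) discretizes Eq.(6) as GDA with momentum 1 on f(X, Y) = XY; the step size η = 0.1 is an arbitrary demonstration value. The iterates spiral outward, matching the behavior reported by Gidel et al. (2019).

```python
import numpy as np

# Discretization of the curl dynamics (Eq. 6): GDA with momentum 1 on
# f(X, Y) = XY, where grad_X f = Y and grad_Y f = X.
eta, beta = 0.1, 1.0
x = x_prev = 1.0
y = y_prev = 1.0
for k in range(100):
    # Simultaneous update: the right-hand side is evaluated before assignment.
    (x, x_prev), (y, y_prev) = (
        (x + beta * (x - x_prev) - eta * y, x),   # descent step for X
        (y + beta * (y - y_prev) + eta * x, y),   # ascent step for Y
    )
print(np.hypot(x, y))   # distance to the equilibrium (0, 0) blows up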
Specifically, as an initial consideration, we choose to add two ubiquitous forces to our system:

- a magnetic force,
$$F_{\mathrm{mag}}=\Big(-q\nabla_{XY}f\,\dot{Y},\;q\nabla_{XY}f\,\dot{X}\Big), \tag{7}$$
known to produce rotational motion (in charged particles), to counteract the rotations introduced by F_curl. Here, q is the charge imparted to our particle.

- friction,
$$F_{\mathrm{fric}}=\Big(-\mu\dot{X},\;-\mu\dot{Y}\Big), \tag{8}$$
to prevent the increase in velocity of our particle (µ: coefficient of friction).

Assimilating all the above forces F_curl, F_mag and F_fric, the equations of motion (EOMs) of our crafted system become, component-wise,

$$m\ddot{X}=F_{\mathrm{curl},X}+F_{\mathrm{mag},X}+F_{\mathrm{fric},X},\qquad m\ddot{Y}=F_{\mathrm{curl},Y}+F_{\mathrm{mag},Y}+F_{\mathrm{fric},Y}, \tag{9}$$

or, equivalently,

$$m\ddot{X}=-\mu\dot{X}-\nabla_{X}f-q\nabla_{XY}f\,\dot{Y},\qquad m\ddot{Y}=-\mu\dot{Y}+\nabla_{Y}f+q\nabla_{XY}f\,\dot{X}. \tag{10}$$

Without loss of generality, from here on we set the mass of our object to unity. In the rest of this work, we study the above EOMs in continuous and discrete time for min-max games. See Figure 1 (c) for a simple example where the continuous-time dynamics of a particle under all the above forces converge to the equilibrium.

![3_image_0.png](3_image_0.png)

Figure 1: Depiction of the continuous-time dynamics of a particle experiencing different forces. In (a), a particle inside a vortex diverges from the equilibrium. This corresponds to the continuous-time dynamics of gradient descent-ascent with momentum in the bilinear game. In (b), we introduce a counter-rotational magnetic force to the existing system with the vortex force, assuming the particle is charged. The magnetic force is exerted perpendicular to the particle's current direction of motion and affects the rotations induced by the vortex. However, we do not observe convergence, which is expected from a physics perspective: the vortex force is known to increase the particle's speed over time (Berry & Shukla, 2016), while the magnetic force does not affect the particle's speed. Therefore, the particle's velocity keeps increasing over time, preventing convergence. To decrease the particle's velocity and obtain convergence, we introduce friction to the system in (c). As a result, we observe that the friction causes the particle to lose speed and converge.

## 3.1 Discretization

With the continuous-time trajectory of Eq.(10) in hand, we now proceed to discretize it using a combination of Euler's implicit and explicit discretization schemes. To discretize Ẋ = V_X we have,

$$\text{Euler implicit discretization:}\ \ x_{k+1}-x_{k}=\delta v_{k+1}^{x},\qquad\text{Euler explicit discretization:}\ \ x_{k+1}-x_{k}=\delta v_{k}^{x}, \tag{11}$$

where δ is the discretization step-size and k is the iteration step.

**Proposition 1.** *The continuous-time EOMs (10) can be discretized in an implicit-explicit way to yield,*

$$\begin{aligned}x_{k+1}&=x_{k}+\beta\left(x_{k}-x_{k-1}\right)-\eta\nabla_{x}f\left(x_{k},y_{k}\right)-\alpha\nabla_{xy}f\left(x_{k},y_{k}\right)\left(y_{k}-y_{k-1}\right),\\ y_{k+1}&=y_{k}+\beta\left(y_{k}-y_{k-1}\right)+\eta\nabla_{y}f\left(x_{k},y_{k}\right)+\alpha\nabla_{yx}f\left(x_{k},y_{k}\right)\left(x_{k}-x_{k-1}\right),\end{aligned} \tag{12}$$

*where we have defined* α = 2qδ, β = 1 − µδ *and* η = δ² *(Proof in Appendix B).*
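For contrast with the divergent sketch above, a minimal sketch of the update of Eq. 12 on the same game f(x, y) = xy, for which ∇x f = y, ∇y f = x and ∇xy f = ∇yx f = 1. The values η = α = 0.5, β = 0 are our own illustrative choices (consistent with the step sizes suggested later in Theorem 3 for σmax = 1, h = 0), not tuned settings from the paper.

```python
import numpy as np

# LEAD (Eq. 12) on the bilinear game f(x, y) = xy.
eta, beta, alpha = 0.5, 0.0, 0.5
x = x_prev = 1.0
y = y_prev = 1.0
for k in range(100):
    (x, x_prev), (y, y_prev) = (
        (x + beta * (x - x_prev) - eta * y - alpha * (y - y_prev), x),
        (y + beta * (y - y_prev) + eta * x + alpha * (x - x_prev), y),
    )
print(np.hypot(x, y))   # ~1e-15: the rotation is damped and the iterates converge
```

Here the coupling term −α(y_k − y_{k−1}) plays the role of the counter-rotational magnetic force, while β < 1 encodes the friction.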
Taking inspiration from the fact that Eq. 10 corresponds to the trajectory of a charged particle under curl, magnetic and frictional forces, as governed by the principle of least action, we refer to the discrete update rules of Eq. 12 as *Least Action Dynamics (LEAD)* (Algorithm 1 details the pseudo-code of LEAD).

**Understanding the terms in LEAD:** Analyzing our novel optimizer, we note that it consists of three types of terms, namely,

1. **Gradient Descent or Ascent**: −∇xf or ∇yf: each player's immediate direction of improving its own objective.
2. **Momentum**: the standard Polyak momentum term, known to accelerate convergence in optimization and, recently, in smooth games (Gidel et al., 2019; Azizian et al., 2020b; Lorraine & Duvenaud, 2022).
3. **Coupling term**:
$$-\nabla_{xy}f\left(x_{k},y_{k}\right)\left(y_{k}-y_{k-1}\right),\quad\nabla_{yx}f\left(x_{k},y_{k}\right)\left(x_{k}-x_{k-1}\right)$$
The main new term in our method. It captures the first-order interaction between players. This cross-derivative corresponds to the counter-rotational force in our physical model; it allows our method to exert control on rotations.

Algorithm 1 Least Action Dynamics (LEAD)
Input: learning rate η, momentum β, coupling coefficient α.
Initialize: x₀ ← x_init, y₀ ← y_init, t ← 0
while not converged do
  t ← t + 1
  g_x ← ∇x f(x_t, y_t)
  g_xy Δy_t ← ∇y(g_x)(y_t − y_{t−1})
  x_{t+1} ← x_t + β(x_t − x_{t−1}) − η g_x − α g_xy Δy_t
  g_y ← ∇y f(x_t, y_t)
  g_yx Δx_t ← ∇x(g_y)(x_t − x_{t−1})
  y_{t+1} ← y_t + β(y_t − y_{t−1}) + η g_y + α g_yx Δx_t
end while
return (x_{t+1}, y_{t+1})

## 4 Convergence Analysis

We now study the behavior of LEAD on the quadratic min-max game,

$$f\left(\mathbf{X},\mathbf{Y}\right)=\frac{h}{2}||\mathbf{X}||^{2}-\frac{h}{2}||\mathbf{Y}||^{2}+\mathbf{X}^{T}\mathbb{A}\mathbf{Y}, \tag{13}$$

where X, Y ∈ Rⁿ, A ∈ Rⁿ × Rⁿ is a (constant) coupling matrix and h is a scalar constant. Additionally, the Nash equilibrium of the above game lies at X∗ = 0, Y∗ = 0. Let us further define the *vector field* v of the above game f as,

$$\mathbf{v}=\begin{bmatrix}\nabla_{\mathbf{X}}f\left(\mathbf{X},\mathbf{Y}\right)\\ -\nabla_{\mathbf{Y}}f\left(\mathbf{X},\mathbf{Y}\right)\end{bmatrix}=\begin{bmatrix}h\mathbf{X}+\mathbb{A}\mathbf{Y}\\ h\mathbf{Y}-\mathbb{A}^{\top}\mathbf{X}\end{bmatrix}. \tag{14}$$

## 4.1 Continuous Time Analysis

A general way to prove the stability of a dynamical system is to use a Lyapunov function (Hahn et al., 1963; Lyapunov, 1992). A scalar function E_t : Rⁿ × Rⁿ → R is a Lyapunov function of a continuous-time dynamics if ∀ t,

$$(i)\ \mathcal{E}_{t}(\mathbf{X},\mathbf{Y})\geq0,\qquad(ii)\ \dot{\mathcal{E}}_{t}(\mathbf{X},\mathbf{Y})\leq0.$$

The Lyapunov function E_t can be perceived as a generalization of the total energy of the system, and requirement (ii) ensures that this generalized energy decreases along the trajectory of evolution, leading the system to convergence, as we show next. For the quadratic min-max game defined in Eq.(13), Eq.(10) generalizes to,

$$\ddot{\mathbf{X}}=-\mu\dot{\mathbf{X}}-h\mathbf{X}-\mathbb{A}\mathbf{Y}-q\mathbb{A}\dot{\mathbf{Y}},\qquad\ddot{\mathbf{Y}}=-\mu\dot{\mathbf{Y}}-h\mathbf{Y}+\mathbb{A}^{T}\mathbf{X}+q\mathbb{A}^{T}\dot{\mathbf{X}}. \tag{15}$$
**Theorem 1.** *For the dynamics of Eq.(15),*

$$\begin{aligned}\mathcal{E}_{t}&=\frac{1}{2}\left(\dot{\mathbf{X}}+\mu\mathbf{X}+\mu\mathbb{A}\mathbf{Y}\right)^{T}\left(\dot{\mathbf{X}}+\mu\mathbf{X}+\mu\mathbb{A}\mathbf{Y}\right)+\frac{1}{2}\left(\dot{\mathbf{Y}}+\mu\mathbf{Y}-\mu\mathbb{A}^{T}\mathbf{X}\right)^{T}\left(\dot{\mathbf{Y}}+\mu\mathbf{Y}-\mu\mathbb{A}^{T}\mathbf{X}\right)\\&\quad+\frac{1}{2}\left(\dot{\mathbf{X}}^{T}\dot{\mathbf{X}}+\dot{\mathbf{Y}}^{T}\dot{\mathbf{Y}}\right)+\mathbf{X}^{T}(h+\mathbb{A}\mathbb{A}^{T})\mathbf{X}+\mathbf{Y}^{T}(h+\mathbb{A}^{T}\mathbb{A})\mathbf{Y}\end{aligned} \tag{16}$$

*is a Lyapunov function of the system. Furthermore, setting* q = (2/µ) + µ*, we find* Ė_t ≤ −ρE_t *for*

$$\rho\leq\min\left\{\frac{\mu}{1+\mu},\ \frac{2\mu(\sigma_{\min}^{2}+h)}{(1+\sigma_{\min}^{2}+2h)\left(\mu^{2}+\mu\right)+2\sigma_{\min}^{2}}\right\}, \tag{17}$$

*with σ_min being the smallest singular value of A. This consequently ensures linear convergence of the dynamics of Eq. 15,*

$$\boxed{||\mathbf{X}||^{2}+||\mathbf{Y}||^{2}\leq\frac{\mathcal{E}_{0}}{h+\sigma_{min}^{2}}\exp\left(-\rho t\right)}.$$

(Proof in Appendix C.)
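Theorem 1 is easy to check numerically. The sketch below (our own illustration) integrates Eq.(15) with a simple Euler scheme for n = 1, A = a, and evaluates the Lyapunov function of Eq.(16) along the trajectory; the values a = 1, h = 0.5, µ = 1 and the step size dt are arbitrary choices for the demonstration.

```python
import numpy as np

# Euler integration of Eq. (15) for n = 1, A = a, with the Lyapunov
# function of Eq. (16) evaluated along the trajectory.
a, h, mu = 1.0, 0.5, 1.0
q = 2.0 / mu + mu                       # the choice made in Theorem 1
dt = 1e-3
X, Y, VX, VY = 1.0, -1.0, 0.0, 0.0

def energy(X, Y, VX, VY):
    return (0.5 * (VX + mu * X + mu * a * Y) ** 2
            + 0.5 * (VY + mu * Y - mu * a * X) ** 2
            + 0.5 * (VX ** 2 + VY ** 2)
            + (h + a * a) * (X ** 2 + Y ** 2))

for k in range(20001):
    if k % 5000 == 0:
        print(f"t = {k * dt:5.1f}   E_t = {energy(X, Y, VX, VY):.6f}")
    AX = -mu * VX - h * X - a * Y - q * a * VY   # x-equation of Eq. (15)
    AY = -mu * VY - h * Y + a * X + q * a * VX   # y-equation of Eq. (15)
    X, Y = X + dt * VX, Y + dt * VY
    VX, VY = VX + dt * AX, VY + dt * AY
# E_t decays along the trajectory (up to O(dt) discretization error),
# as Theorem 1 guarantees.
```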
## 4.2 Discrete-Time Analysis

In this section, we analyze the convergence behavior of LEAD, Eq.(12), in the case of the quadratic min-max game of Eq.(13) using spectral analysis,

$$\begin{aligned}\mathbf{x}_{k+1}&=\mathbf{x}_{k}+\beta\Delta\mathbf{x}_{k}-\eta h\mathbf{x}_{k}-\eta\mathbb{A}\mathbf{y}_{k}-\alpha\mathbb{A}\Delta\mathbf{y}_{k},\\ \mathbf{y}_{k+1}&=\mathbf{y}_{k}+\beta\Delta\mathbf{y}_{k}-\eta h\mathbf{y}_{k}+\eta\mathbb{A}^{T}\mathbf{x}_{k}+\alpha\mathbb{A}^{T}\Delta\mathbf{x}_{k},\end{aligned} \tag{18}$$

where Δx_k = x_k − x_{k−1}. For brevity, consider the joint parameters ω_t := (x_t, y_t). We start by studying the update operator of simultaneous gradient descent-ascent,

$$F_{\eta}(\omega_{t})=\omega_{t}-\eta\mathbf{v}(\omega_{t-1}),$$

where the vector field is given by Eq. 14. Thus, the fixed point ω∗ of F_η(ω_t) satisfies F_η(ω∗) = ω∗. Furthermore, at ω∗, we have,

$$\nabla F_{\eta}(\omega^{*})=\mathbb{I}_{n}-\eta\nabla\mathbf{v}(\omega^{*}), \tag{19}$$

with I_n being the n × n identity matrix. Consequently, the spectrum of ∇F_η(ω∗) in the quadratic game considered is,

$$\mathrm{Sp}(\nabla F_{\eta}(\omega^{*}))=\{1-\eta h-\eta\lambda\mid\lambda\in\mathrm{Sp}(\text{off-diag}[\nabla\mathbf{v}(\omega^{*})])\}. \tag{20}$$

The next proposition outlines the condition under which the fixed-point operator is guaranteed to converge around the fixed point.

**Proposition 2** (Prop. 4.4.1 (Bertsekas, 1999)). *For the spectral radius,*

$$\rho_{max}:=\rho\{\nabla F_{\eta}(\boldsymbol{\omega}^{*})\}<1, \tag{21}$$

*and for some* ω₀ *in a neighborhood of* ω∗*, the update operator F_η ensures linear convergence to* ω∗ *at a rate,*

$$\Delta_{t+1}\leq\mathcal{O}(\rho+\epsilon)\Delta_{t}\ \forall\ \epsilon>0,$$

*where* Δ_{t+1} := ||ω_{t+1} − ω∗||²₂ + ||ω_t − ω∗||²₂.

Next, we proceed to define the update operator of Eq.(12) as F_LEAD(ω_t, ω_{t−1}) = (ω_{t+1}, ω_t). For the quadratic min-max game of Eq.(13), the Jacobian of F_LEAD takes the form,

$$\nabla F_{\rm LEAD}=\begin{bmatrix}\mathbb{I}_{2n}+\beta\mathbb{I}_{2n}-\left(\eta+\alpha\right)\nabla\mathbf{v}&-\beta\mathbb{I}_{2n}+\alpha\nabla\mathbf{v}\\ \mathbb{I}_{2n}&0\end{bmatrix}. \tag{22}$$

In Theorem 2 below, we find the set of eigenvalues corresponding to the update operator ∇F_LEAD, which are then used in Theorem 3, where we show that for selected values of η and α, LEAD attains a linear rate.

**Theorem 2.** *The eigenvalues of* ∇F_LEAD(ω∗) *are,*

$$\mu_{\pm}=\frac{1-(\eta+\alpha)\lambda+\beta-\eta h\pm\sqrt{\Delta}}{2}, \tag{23}$$

*where* Δ = (1 − (η + α)λ + β − ηh)² − 4(β − αλ) *and* λ ∈ Sp(*off-diag*[∇v(ω∗)])*. Furthermore, for* h, η, |α|, |β| ≪ 1*, we have,*

$$\mu_{+}\approx1-\eta h+\frac{\left(\eta+\alpha\right)^{2}\lambda^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda\left(\frac{\eta+\alpha}{2}\left(\eta h-\beta\right)-\eta\right) \tag{24}$$

*and*

$$\mu_{-}\approx\beta-\frac{\left(\eta+\alpha\right)^{2}\lambda^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda\left(\frac{\eta+\alpha}{2}\left(\beta-\eta h\right)-\alpha\right). \tag{25}$$

(Proof in Appendix D.)

![7_image_0.png](7_image_0.png)

Figure 2: Diagram depicts the positioning of the eigenvalues of GDA in blue (Eq. 19) and those of LEAD (Eqs. (24) and (25) with β = h = 0) in red. Eigenvalues inside the black unit circle imply convergence, such that the closer to the origin, the faster the convergence rate (Prop. 2). Every point on the solid blue and red lines corresponds to a specific choice of learning rate. No choice of learning rate results in convergence for the gradient descent-ascent method, as the blue line is tangent to the unit circle. At the same time, for a fixed value of α, LEAD shifts the eigenvalues (µ₊) into the unit circle, which leads to a convergence rate proportional to the radius of the red dashed circle. Note that LEAD also introduces an extra set of eigenvalues (µ₋) which are close to zero and do not affect convergence.

Theorem 2 states that the LEAD operator has two eigenvalues, µ₊ and µ₋, for each λ ∈ Sp(off-diag[∇v(ω∗)]). Specifically, µ₊ can be viewed as a shift of the eigenvalues of GDA in Eq.(20), while additionally being the leading eigenvalue for small values of h, η, |α| and |β| (see Fig. 2 for a schematic description). Also, for small values of α, µ₊ is the limiting eigenvalue while µ₋ ≈ 0. In the following proposition, we show that, locally, a choice of positive α decreases the spectral radius of ∇F_LEAD(ω∗), defined as,

$$\rho:=\max\{|\mu_{+}|^{2},|\mu_{-}|^{2}\}\;\forall\;\lambda.$$

**Proposition 3.** *For any* λ ∈ Sp(*off-diag*[∇v(ω∗)]),

$$\nabla_{\alpha}\rho\left(\lambda\right)\big|_{\alpha=0}<0\Leftrightarrow\eta\in\left(0,\frac{2}{\mathrm{Im}(\lambda_{max})}\right), \tag{26}$$

*where* Im(λ_max) *is the imaginary component of the largest eigenvalue* λ_max.

(Proof in Appendix E.)
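The shift sketched in Figure 2 is straightforward to reproduce numerically. The snippet below (our own check, with the illustrative values a = 1, h = 0, β = 0, and η = α = 1/(2(σmax + h)) as suggested by Theorem 3) builds ∇F_η from Eq.(19) and ∇F_LEAD from Eq.(22) for a 1-D bilinear game and compares their spectral radii.

```python
import numpy as np

h, a = 0.0, 1.0                        # bilinear game: f(x, y) = (h/2)x^2 - (h/2)y^2 + a x y
J = np.array([[h, a], [-a, h]])        # Jacobian of the vector field v (Eq. 14), n = 1
I2 = np.eye(2)
eta = alpha = 1.0 / (2 * (abs(a) + h)) # the step sizes used in Theorem 3
beta = 0.0

F_gda = I2 - eta * J                   # gradient descent-ascent operator (Eq. 19)
F_lead = np.block([[(1 + beta) * I2 - (eta + alpha) * J, -beta * I2 + alpha * J],
                   [I2, np.zeros((2, 2))]])   # LEAD operator Jacobian (Eq. 22)

rho = lambda M: np.abs(np.linalg.eigvals(M)).max()
print(f"rho(GDA)  = {rho(F_gda):.4f}")   # about 1.118 > 1: divergent
print(f"rho(LEAD) = {rho(F_lead):.4f}")  # about 0.707 < 1: linearly convergent
```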
Having established that a small positive value of α improves the rate of convergence, in the next theorem we prove that for a specific choice of positive α and η in the quadratic game Eq.(13), a linear rate of convergence to its Nash equilibrium is attained.

**Theorem 3.** *Setting* η = α = 1/(2(σ_max(A) + h))*, we have* ∀ ϵ > 0,

$$\Delta_{t+1}\in\mathcal{O}\left(\left(1-\frac{\sigma_{min}^{2}}{4(\sigma_{max}+h)^{2}}-\frac{(1+\beta/2)h}{\sigma_{max}+h}+\frac{3h^{2}}{8(\sigma_{max}+h)^{2}}\right)^{t}\Delta_{0}\right), \tag{27}$$

*where* σ_max (σ_min) *is the largest (smallest) singular value of A and*

$$\Delta_{t+1}:=||\omega_{t+1}-\omega^{*}||_{2}^{2}+||\omega_{t}-\omega^{*}||_{2}^{2}.$$

Theorem 3 ensures linear convergence of LEAD in the quadratic min-max game. (Proof in Appendix F.)

## 5 Comparison Of Convergence Rate For Quadratic Min-Max Game

In this section, we perform a big-O comparison of the convergence rate of LEAD (Eq. 59) with several other existing methods. Table 1 below summarizes the convergence rates for the quadratic min-max game of Eq. 13. For each method that converges at the rate O((1 − r)^t), we report the quantity r (larger r corresponds to faster convergence). We observe that for the quadratic min-max game, given the analysis in Azizian et al. (2020a), for h < σ_max(A) and β > 3h²/(8(σ_max + h)), r_LEAD ≳ r_EG and r_LEAD ≳ r_OG. Furthermore, for the bilinear case, where h = 0, LEAD has a faster convergence rate than EG and OG.

| Method | r |
|---|---|
| Alternating-GDA | $h/2L$ |
| Extra-Gradient (EG) | $\frac{1}{4}\left(h/L+\sigma_{\min}^{2}(\mathbb{A})/16L^{2}\right)$ |
| Optimistic Gradient (OG) | $\frac{1}{4}\left(h/L+\sigma_{\min}^{2}(\mathbb{A})/32L^{2}\right)$ |
| Consensus Optimization (CO) | $h^{2}/2L_{H}^{2}+\sigma_{\min}^{2}(\mathbb{A})/2L_{H}^{2}$ |
| LEAD (Th. 3) | $(1+\beta/2)h/(\sigma_{\max}+h)+\sigma_{\min}^{2}/4(\sigma_{\max}+h)^{2}-3h^{2}/8(\sigma_{\max}+h)^{2}$ |

Table 1: Big-O comparison of convergence rates of LEAD against EG (Korpelevich, 1976), OG (Mertikopoulos et al., 2018) and CO (Mescheder et al., 2017) for the **quadratic min-max game** of Eq. 13. We report the EG, OG and CO rates from the tight analysis in Azizian et al. (2020a) and the Alt-GDA rate from Zhang et al. (2022). For each method that converges at the rate O((1 − r)^t), we report the quantity r (larger r corresponds to faster convergence). Note that L := √2 max{h, σ_max(A)} is the Lipschitz constant of the vector field and L²_H is the Lipschitz-smoothness of ½∥v∥².

## 6 Comparison Of Computational Cost

In this section, we first study several second-order algorithms and perform computational comparisons on an 8-Gaussians generation task. The Jacobian of the gradient vector field v = (∇x f(x, y), −∇y f(x, y)) is given by,

$$\mathbb{J}=\begin{bmatrix}\nabla_{x}^{2}f\left(\boldsymbol{x},\boldsymbol{y}\right)&\nabla_{xy}f\left(\boldsymbol{x},\boldsymbol{y}\right)\\ -\nabla_{yx}f\left(\boldsymbol{x},\boldsymbol{y}\right)&-\nabla_{y}^{2}f\left(\boldsymbol{x},\boldsymbol{y}\right)\end{bmatrix}. \tag{28}$$

Considering player x, a LEAD update requires the computation of the term ∇xy f(x_k, y_k)(y_k − y_{k−1}), thereby involving only one block of the full Jacobian J. On the other hand, Symplectic Gradient Adjustment (SGA) (Balduzzi et al., 2018a) requires the full computation of two Jacobian-vector products, Jv and J⊤v. Similarly, Competitive Gradient Descent (CGD) (Schäfer & Anandkumar, 2019) involves the computation of the term,

$$\left(1+\eta\nabla_{xy}^{2}f(\mathbf{x}_{k},\mathbf{y}_{k})\nabla_{yx}^{2}f(\mathbf{x}_{k},\mathbf{y}_{k})\right)^{-1},$$

along with the Jacobian-vector product,

$$\nabla_{xy}^{2}f(\mathbf{x}_{k},\mathbf{y}_{k})\nabla_{y}f(\mathbf{x}_{k},\mathbf{y}_{k}).$$
While the inverse term is approximated using the conjugate gradient method, it still involves the computation of approximately ten Jacobian-vector products for each update. To explore these comparisons in greater detail and on models with many parameters, we experimentally compare the computational cost of our method with several other second- as well as first-order methods on the 8-Gaussians problem in Figure 3 (architecture reported in Appendix J). We calculate the average wall-clock time (in milliseconds) per iteration. Results are averaged over 1000 iterations, computed on the same architecture and the same machine with forced synchronous execution. All the methods are implemented in PyTorch (Paszke et al., 2017), and SGA is replicated based on the official implementation.²

²SGA official DeepMind implementation (non-zero-sum setting): https://github.com/deepmind/symplectic-gradient-adjustment/blob/master/Symplectic_Gradient_Adjustment.ipynb

![9_image_0.png](9_image_0.png)

Figure 3: Average computational cost per iteration of several well-known methods for (non-saturating) GAN optimization. The numbers are reported on the 8-Gaussians generation task and averaged over 1000 iterations. Note that the y-axis is log-scale. We compare Competitive Gradient Descent (CGD) (Schäfer & Anandkumar, 2019) (using the official CGD optimizer code), Symplectic Gradient Adjustment (SGA) (Balduzzi et al., 2018a), Consensus Optimization (CO) (Mescheder et al., 2017), Extra-gradient with Adam (Extra-Adam) (Gidel et al., 2018), and WGAN with Gradient Penalty (WGAN-GP) (Gulrajani et al., 2017). We observe that the per-iteration time complexity of our method is very similar to Extra-Adam and WGAN-GP and much cheaper than other second-order methods such as CGD. Furthermore, by increasing the size of the hidden dimension of the generator's and discriminator's networks, we observe that the gap between different methods increases.

Furthermore, we observe that the computational cost per iteration of LEAD, while much lower than SGA and CGD, is similar to WGAN-GP and Extra-Gradient. The similarity to Extra-Gradient is due to the fact that for each player, Extra-Gradient requires the computation of a half-step and a full-step, so in total each step requires the computation of two gradients. LEAD also requires the computation of a gradient (∇fx), which is then used to compute (∇fxy) multiplied by (y_k − y_{k−1}). Using PyTorch, we do not need to compute ∇fxy explicitly and then perform the multiplication: given ∇fx, the whole term ∇fxy(y_k − y_{k−1}) is computed using PyTorch's autograd vector-Jacobian product, at the computational cost of a single gradient. Thus, LEAD also requires the computation of two gradients for each step.
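The following sketch (our own minimal PyTorch implementation, not the authors' released code, which is available at the repository linked above) makes this concrete: each coupling term of Eq. 12 is obtained as a single vector-Jacobian product through an already-computed gradient, relying on the symmetry of mixed partial derivatives. The quadratic test game, the values of A and h, and the use of the Theorem-3 step sizes are illustrative assumptions.

```python
import torch

def lead_step(f, x, y, x_prev, y_prev, eta, alpha, beta):
    # One LEAD update (Eq. 12). Each coupling term is a vector-Jacobian
    # product through a first gradient, so it costs about one extra
    # backward pass per player; no Hessian block is ever materialized.
    dx, dy = x - x_prev, y - y_prev
    g_x = torch.autograd.grad(f(x, y), x, create_graph=True)[0]
    g_y = torch.autograd.grad(f(x, y), y, create_graph=True)[0]
    cross_x = torch.autograd.grad(g_y, x, grad_outputs=dy)[0]  # = grad_xy f . dy
    cross_y = torch.autograd.grad(g_x, y, grad_outputs=dx)[0]  # = grad_yx f . dx
    x_new = (x + beta * dx - eta * g_x - alpha * cross_x).detach().requires_grad_()
    y_new = (y + beta * dy + eta * g_y + alpha * cross_y).detach().requires_grad_()
    return x_new, y_new

# Quadratic min-max game of Eq. 13 with n = 2 (A and h are arbitrary test values).
A = torch.tensor([[1.0, 0.5], [0.0, 1.0]])
h = 0.1
f = lambda x, y: 0.5 * h * (x @ x) - 0.5 * h * (y @ y) + x @ A @ y
sigma_max = torch.linalg.svdvals(A)[0].item()
eta = alpha = 1.0 / (2 * (sigma_max + h))   # the step sizes of Theorem 3

x, y = torch.ones(2, requires_grad=True), torch.ones(2, requires_grad=True)
x_prev, y_prev = x.clone(), y.clone()
for t in range(300):
    (x, y), (x_prev, y_prev) = lead_step(f, x, y, x_prev, y_prev, eta, alpha, 0.0), (x, y)
print(x.norm().item(), y.norm().item())   # both decay toward the Nash equilibrium at 0
```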
## 7 Experiments

In this section, we empirically validate the performance of LEAD on several toy as well as large-scale experiments. Furthermore, we extend LEAD based on the Adam algorithm for use in large-scale experiments; see Algorithm 2 for the detailed algorithm.

## 7.1 Adversarial Vs Cooperative Games

In Section 6 we showed that, using auto-grad software tools such as TensorFlow and PyTorch, LEAD can be computed very efficiently, as fast as extra-gradient. In this section we compare the performance of LEAD with several first-order methods in a toy setup inspired by Lorraine & Duvenaud (2022). Consider the following game,

$$\min_{\mathbf{x}}\max_{\mathbf{y}}\ \mathbf{x}^{T}(\boldsymbol{\gamma}\mathbf{A})\mathbf{y}+\mathbf{x}^{T}((\mathbf{I}-\boldsymbol{\gamma})\mathbf{B}_{1})\mathbf{x}-\mathbf{y}^{T}((\mathbf{I}-\boldsymbol{\gamma})\mathbf{B}_{2})\mathbf{y}. \tag{29}$$

The formulation in Eq. 29 enables us to compare the performance of different methods in cooperative games, adversarial games, and any interpolation between the two. Namely, varying γ from 0 to I changes the dynamics of the game from purely cooperative to adversarial. Many real-world applications such as GANs exhibit an analogous range of adversarial-to-cooperative changes during training (Lorraine & Duvenaud, 2022). In Figure 4, we compare LEAD against several methods, including gradient descent-ascent (GDA), extragradient (EG) (Korpelevich, 1976), optimistic gradient (OG) (Mertikopoulos et al., 2018), complex momentum (CM) (Lorraine & Duvenaud, 2022), negative momentum (NM) (Gidel et al., 2019), and positive momentum (PM) (Polyak, 1964). Each method is tuned to optimality for each setup.

![10_image_0.png](10_image_0.png)

Figure 4: Comparison of several methods on the game in Eq. 29. The diagonal matrix γ determines the degree of adversarialness along each direction. Elements on the diagonal are sampled from a uniform distribution, γ_ii ∼ Unif[0, γ_max]. By varying γ_max from 0 to 1, we move from a purely cooperative setup to a hybrid setup with a mixture of cooperative and adversarial games. The spectral radius (shown on the y-axis) determines the convergence rate in this game and is a function of γ_max; the smaller the spectral radius, the faster the convergence rate. A spectral radius of 1 corresponds to non-convergent dynamics. We compare several methods including gradient descent-ascent (GDA), extragradient (EG) (Korpelevich, 1976), optimistic gradient (OG) (Mertikopoulos et al., 2018), complex momentum (CM) (Lorraine & Duvenaud, 2022), negative momentum (NM) (Gidel et al., 2019), and positive momentum (PM) (Polyak, 1964). Each method has been tuned to optimality for each setup. For cooperative games (leftmost), LEAD and positive momentum achieve great performance. In more adversarial settings (rightmost), LEAD performs on par with other game-specific optimization methods (excluding negative momentum, GDA and positive momentum, which diverge). This plot suggests that LEAD is a robust optimizer across different types of games. We conjecture that, for the same reason, LEAD performs desirably in real-world setups such as GANs, where the adversarialness changes dynamically throughout training (Lorraine & Duvenaud, 2022).
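To give a feel for the γ-interpolation, the snippet below (our own sketch) computes the spectral radius of a GDA operator and a LEAD-style operator, built by analogy with Eq. 22, on a 1-D instance of Eq. 29 with A = B₁ = B₂ = 1 and a scalar γ. The step sizes η = α = 0.25, β = 0 are fixed, untuned values, so the numbers are only qualitatively comparable to the tuned curves in Figure 4.

```python
import numpy as np

# 1-D instance of Eq. 29 (A = B1 = B2 = 1, scalar gamma): J is the Jacobian
# of the vector field, K = off-diag[J] its purely adversarial (game) part.
eta, alpha, beta = 0.25, 0.25, 0.0
I2 = np.eye(2)
rho = lambda M: np.abs(np.linalg.eigvals(M)).max()
for gamma in (0.0, 0.25, 0.5, 0.75, 1.0):
    J = np.array([[2 * (1 - gamma), gamma],
                  [-gamma, 2 * (1 - gamma)]])
    K = np.array([[0.0, gamma], [-gamma, 0.0]])
    F_gda = I2 - eta * J                     # one step of simultaneous GDA
    F_lead = np.block([[(1 + beta) * I2 - eta * J - alpha * K, -beta * I2 + alpha * K],
                       [I2, np.zeros((2, 2))]])
    print(f"gamma = {gamma:4.2f}   rho(GDA) = {rho(F_gda):.3f}   rho(LEAD) = {rho(F_lead):.3f}")
```

As in Figure 4, GDA degrades as the game becomes more adversarial and eventually diverges (ρ > 1 at γ = 1), while the LEAD-style operator keeps its spectral radius below 1 across the whole sweep.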
## 7.2 Generative Adversarial Networks

We study the performance of LEAD in zero-sum as well as non-zero-sum settings. See Appendix I for a comparison of LEAD-Adam against vanilla Adam on the generation task of a mixture of 8 Gaussians.

CIFAR-10 DCGAN: We evaluate LEAD-Adam on the task of CIFAR-10 (Krizhevsky & Hinton, 2009) image generation with a non-zero-sum formulation (non-saturating) on a DCGAN architecture similar to Gulrajani et al. (2017). As shown in Table 2, we compare with several first-order and second-order methods and observe that LEAD-Adam outperforms the rest in terms of Fréchet Inception Distance (FID) (Heusel et al., 2017) and Inception Score (IS) (Salimans et al., 2016)³, reaching an FID score of 19.27 ± 0.10, which outperforms OMD (Mertikopoulos et al., 2018) and CGD (Schäfer & Anandkumar, 2019). See Figure 5, which shows the improvement in FID using LEAD-Adam vs vanilla Adam, and Figure 9 for a sample of the generated images.

CIFAR-10 ResNet: Furthermore, we evaluate LEAD-Adam on more complex and deep architectures. We adapt the ResNet architecture of SN-GAN (Miyato et al., 2018). We compare with several existing results on the task of image generation on CIFAR-10 using ResNets; see Table 2 for a full comparison. Note that state-of-the-art performance in recent work such as StyleGAN-based models (Sauer et al., 2022; Kang et al., 2021; Lee et al., 2021) or BigGAN-based models (Brock et al., 2018; Lorraine & Duvenaud, 2022) uses architectures that are 30 times or more larger than the architecture on which we have chosen to test our method. We report our results against a properly tuned version of SN-GAN that achieves an FID of 12.36. Our method obtains a competitive FID of 10.49. We give a detailed description of these experiments and full detail on the architecture and hyper-parameters in Appendix J. See also Figure 10 for a sample of images generated with LEAD-Adam on a ResNet.

³The FID and IS are metrics for evaluating the quality of generated samples of a generative model. Lower FID and higher Inception Score (IS) correspond to better sample quality.

![11_image_0.png](11_image_0.png)

Figure 5: Plot showing the evolution of the FID over 400 epochs for our method (LEAD-Adam) vs vanilla Adam on a DCGAN architecture. It is important to note that, compared to Adam, LEAD-Adam is twice as expensive computationally.

Table 2: Performance of several methods on the CIFAR-10 image generation task. The FID and IS are reported over 50k samples unless mentioned otherwise.

| Method | FID (↓) | IS (↑) |
|---|---|---|
| **DCGAN** | | |
| Adam (Radford et al., 2015) | 24.38 ± 0.13 | 6.58 |
| LEAD-Adam | 19.27 ± 0.10 | 7.58 ± 0.11 |
| CGD-WGAN (Schäfer & Anandkumar, 2019) | 21.3 | 7.2 |
| OMD (Daskalakis et al., 2018) | 29.6 ± 0.19 | 5.74 ± 0.1 |
| **ResNet** | | |
| SNGAN | 12.10 ± 0.31 | 8.58 ± 0.03 |
| LEAD-Adam (ours) | 10.49 ± 0.11 | 8.82 ± 0.05 |
| ExtraAdam (Gidel et al., 2018) | 16.78 ± 0.21 | 8.47 ± 0.1 |
| LA-GAN (Chavdarova et al., 2020) | 12.67 ± 0.57 | 8.55 ± 0.04 |
| ODE-GAN (Qin et al., 2020) | 11.85 ± 0.21 | 8.61 ± 0.06 |
| **Evaluated with 5k samples** | | |
| SN-GAN (DCGAN) (Miyato et al., 2018) | 29.3 | 7.42 ± 0.08 |
| SN-GAN (ResNet) (Miyato et al., 2018) | 21.7 ± 0.21 | 8.22 ± 0.05 |

## 8 Related Work

Game Optimization: With increasing interest in games, significant effort is being spent on understanding common issues affecting optimization in this domain. These issues range from convergence to non-Nash equilibrium points to rotational dynamics around the equilibrium that hamper convergence. The authors of Mescheder et al. (2017) discuss how the eigenvalues of the Jacobian govern the local convergence properties of GANs. They argue that the presence of eigenvalues with zero real part and large imaginary part results in oscillatory behavior, and to mitigate this issue they propose Consensus Optimization (CO). Along similar lines, Balduzzi et al. (2018b); Gemp & Mahadevan (2018); Letcher et al. (2019); Loizou et al. (2020) use the *Hamiltonian* of the gradient vector field to improve convergence in games by disentangling the convergent parts of the dynamics from the rotations. Another line of attack, taken in Schäfer & Anandkumar (2019), is to use second-order information as a regularizer of the dynamics, motivating Competitive Gradient Descent (CGD). In Wang et al. (2019), Follow the Ridge (FtR) is proposed: the authors motivate the use of a second-order term for one of the players (the follower) so as to avoid the rotational dynamics in a sequential formulation of the zero-sum game.
See Appendix K for a full discussion of the comparison of LEAD with other second-order methods. In another approach, Gidel et al. (2019) demonstrate how applying negative momentum to GDA can improve convergence in min-max games, and prove a linear rate of convergence in the case of bilinear games. More recently, Zhang & Wang (2021) have shown the suboptimality of negative momentum in specific settings. Furthermore, in Lorraine & Duvenaud (2022) the authors carry out an extensive study on the effect of momentum in games and specifically show that complex momentum is optimal in many games, ranging from adversarial to non-adversarial settings. Daskalakis et al. (2018) show that extrapolating the next value of the gradient using previous history aids convergence. In the same spirit, Chavdarova et al. (2020) propose LookAhead GAN (LA-GAN) and show that the LookAhead algorithm is a compelling candidate for improving convergence in GANs. Gidel et al. (2018) also explore this line of thought by introducing averaging to develop a variant of the extra-gradient algorithm, Extra-Adam-Averaging. Similar to Extra-Adam-Averaging is SN-EMA (Yazıcı et al., 2019), which builds on SN-GAN and achieves great performance by applying an exponential moving average to the parameters. More recently, Fiez & Ratliff (2021) study the use of different time-scales for each player in zero-sum non-convex, non-concave games. Furthermore, in Rosca et al. (2021) the authors study the dynamics of game optimization in both continuous and discrete time, examine the effects of discretization drift on game performance, suggest a modified continuous-time dynamical system that more closely matches the discrete-time dynamics, and introduce regularizers that mitigate the effects of harmful drift. Lastly, with regard to convergence analysis in games, Zhang et al. (2022) study the convergence of alternating gradient descent-ascent for min-max games, and Golowich et al. (2020) provide last-iterate convergence rates for convex-concave saddle-point problems. Nouiehed et al. (2019) propose a multi-step variant of gradient descent-ascent and show that it can find a game's ϵ-first-order stationary point. Additionally, Azizian et al. (2020a) and Ibrahim et al. (2020) provide spectral lower bounds on the rate of convergence in the bilinear setting for an accelerated algorithm developed in Azizian et al. (2020b) for specific families of bilinear games. Furthermore, Fiez & Ratliff (2020) use Lyapunov analysis to provide convergence guarantees for gradient descent-ascent using timescale separation, and in Hsieh et al. (2020) the authors show that commonly used algorithms for min-max optimization converge to attractors that are not optimal.

Single-objective Optimization and Dynamical Systems: The authors of Su et al. (2014) started a new trend in single-objective optimization by studying the continuous-time dynamics of Nesterov's accelerated method (Nesterov, 2013). Their analysis allowed for a better understanding of the much-celebrated Nesterov's method. In a similar spirit, Wibisono et al. (2016); Wilson et al. (2016) study continuous-time accelerated methods within a Lagrangian framework, analyzing their stability using Lyapunov analysis. These works show that a family of discrete-time methods can be derived from their corresponding continuous-time formalism using various discretization schemes.
Additionally, several recent works (Muehlebach & Jordan, 2019; Bailey & Piliouras, 2019; Maddison et al., 2018; Ryu et al., 2019) cast game optimization algorithms as dynamical systems so as to leverage their rich theory to study the stability and convergence of various continuous-time methods. Nagarajan & Kolter (2017) also analyze the local stability of GANs as an approximate continuous dynamical system.

## 9 Conclusion And Future Direction

In this paper, we leverage tools from physics to propose a novel second-order optimization scheme, LEAD, to address the issue of rotational dynamics in min-max games. By casting min-max game optimization as a physical system, we use the principle of least action to discover an effective optimization algorithm for this setting. Subsequently, with the use of Lyapunov stability theory and spectral analysis, we prove LEAD to be convergent at a linear rate in quadratic min-max games. We supplement our theoretical analysis with experiments on GANs and toy setups, demonstrating improvements over baseline methods. Specifically, for GAN training, we observe that our method outperforms other second-order methods, both in terms of sample quality and computational efficiency.

Our analysis underlines the advantages of physical approaches in designing novel optimization algorithms for games as well as for traditional optimization tasks. It is important to note in this regard that our crafted physical system is one way to model min-max optimization physically; alternative schemes can involve other choices of counter-rotational and dissipative forces, which can be explored in future work. Another direction for future work is to extend the Lyapunov analysis to the general convex-concave setting. Our existing analysis and the promising experimental results thereupon make us believe that the analysis can be extended to this more general setting; nevertheless, such an attempt depends on finding an appropriate Lyapunov function, which may be challenging given the complex game dynamics. One may also pursue other types of games where LEAD may be useful: any two-parameter dynamical system can be interpreted as a two-player game, and since LEAD models second-order interactions of the players, it may be a preferable optimization algorithm for games with higher-order structure. Furthermore, the performance of LEAD on larger GAN architectures such as StyleGAN may be studied in future work.

## Broader Impact Statement

While our contribution is mostly theoretical, our research has the potential to improve the optimization of multi-agent machine learning models, such as generative adversarial networks (GANs). GANs have been very successful in generating realistic images, music, speech and text, and in improving performance on an array of different real-world tasks. On the other hand, GANs can be misused to generate fake news, fake images, and fake voices. Furthermore, a common problem encountered during GAN training is mode collapse, which results in GANs being biased toward generating certain types of data over others, thereby causing data misrepresentation. In this paper, we show that our proposed method can tackle the mode collapse problem, observing improvements over baseline methods. However, we would like to emphasize that practitioners should use our research with caution, as a change of dataset or task might not prevent the mode collapse problem.
## Acknowledgments

The authors would like to thank Mohammad Pezeshki, Gauthier Gidel, Tatjana Chavdarova, Maxime Laborde, Nicolas Loizou, Hugo Berard, Giancarlo Kerg, Manuela Girotti, Adam Ibrahim, Damien Scieur and Michael Mulligan for useful discussions and feedback. This work is supported by NSERC Discovery Grants (RGPIN-2018-04821 and RGPIN-2019-06512), FRQNT Young Investigator Startup Grants (2019-NC-253251 and 2019-NC-257943), a startup grant by IVADO, the Canada CIFAR AI chairs program and a collaborative grant from Samsung Electronics Co., Ltd. Reyhane Askari Hemmat also acknowledges support of the Borealis AI Graduate Fellowship, the Microsoft Diversity Award and the NSERC Postgraduate Scholarships. Amartya Mitra acknowledges support of the internship program at Mila and the UCR Graduate Fellowship. This research was enabled in part by compute resources, software and technical help provided by Mila (mila.quebec) and Compute Canada.

## References

Jacob Abernethy, Kevin A Lai, and Andre Wibisono. Last-iterate convergence rates for min-max optimization. *arXiv preprint arXiv:1906.02027*, 2019.

Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games. In *International Conference on Artificial Intelligence and Statistics*, pp. 2863–2873, 2020a.

Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. Accelerating smooth games by manipulating spectral shapes. *International Conference on Artificial Intelligence and Statistics*, 2020b.

James P Bailey and Georgios Piliouras. Multi-agent learning in network zero-sum games is a hamiltonian system. In *Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems*, pp. 233–241. International Foundation for Autonomous Agents and Multiagent Systems, 2019.

D Balduzzi, S Racaniere, J Martens, J Foerster, K Tuyls, and T Graepel. *deepmind-symplectic-gradient-adjustment*, 2018a. https://github.com/deepmind/symplectic-gradient-adjustment/blob/master/Symplectic_Gradient_Adjustment.ipynb.

D Balduzzi, S Racaniere, J Martens, J Foerster, K Tuyls, and T Graepel. The mechanics of n-player differentiable games. In *ICML*, volume 80, pp. 363–372. JMLR. org, 2018b.

MV Berry and Pragya Shukla. Curl force dynamics: symmetries, chaos and constants of motion. *New Journal of Physics*, 18(6):063018, 2016.

Dimitri P Bertsekas. *Nonlinear programming*. Athena scientific Belmont, 1999.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. *arXiv preprint arXiv:1809.11096*, 2018.

Lucian Bu, Robert Babu, Bart De Schutter, et al. A comprehensive survey of multiagent reinforcement learning. *IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)*, 38(2):156–172, 2008.

Tatjana Chavdarova, Matteo Pagliardini, Martin Jaggi, and Francois Fleuret. Taming gans with lookahead. *arXiv preprint arXiv:2006.14567*, 2020.

Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training gans with optimism. In *International Conference on Learning Representations*, 2018.

Tanner Fiez and Lillian Ratliff. Gradient descent-ascent provably converges to strict local minmax equilibria with a finite timescale separation. *arXiv preprint arXiv:2009.14820*, 2020.

Tanner Fiez and Lillian J Ratliff. Local convergence analysis of gradient descent ascent with finite timescale separation.
In *Proceedings of the International Conference on Learning Representation*, 2021. Jakob Foerster, Richard Y Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, and Igor Mordatch. Learning with opponent-learning awareness. In *Proceedings of the 17th International Conference on* Autonomous Agents and MultiAgent Systems, pp. 122–130. International Foundation for Autonomous Agents and Multiagent Systems, 2018. Ian Gemp and Sridhar Mahadevan. Global convergence to the equilibrium of gans using variational inequalities. arXiv preprint arXiv:1808.01531, 2018. Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. In *International Conference on Learning* Representations, 2018. Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1802–1811, 2019. Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, and Asuman Ozdaglar. Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems. *arXiv preprint arXiv:2002.00057*, 2020. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In *Advances in neural information processing systems*, pp. 5767–5777, 2017. Wolfgang Hahn, Hans H Hosenthien, and H Lehnigk. *Theory and application of Liapunov's direct method*. Prentice-Hall Englewood Cliffs, NJ, 1963. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In *Advances in neural information* processing systems, pp. 6626–6637, 2017. Ya-Ping Hsieh, Panayotis Mertikopoulos, and Volkan Cevher. The limits of min-max optimization algorithms: convergence to spurious non-critical sets. *arXiv preprint arXiv:2006.09065*, 2020. Adam Ibrahim, Waïss Azizian, Gauthier Gidel, and Ioannis Mitliagkas. Linear lower bounds and conditioning of differentiable games. In *International conference on machine learning*, 2020. Minguk Kang, Woohyeon Shim, Minsu Cho, and Jaesik Park. Rebooting acgan: Auxiliary classifier gans with stable training. *Advances in Neural Information Processing Systems*, 34:23505–23518, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. GM Korpelevich. The extragradient method for finding saddle points and other problems. *Matecon*, 12: 747–756, 1976. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. LD Landau and EM Lifshitz. *Course of theoretical physics. vol. 1: Mechanics*. Oxford, 1960. Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, and Ce Liu. Vitgan: Training gans with vision transformers. *arXiv preprint arXiv:2107.04589*, 2021. Alistair Letcher, David Balduzzi, Sébastien Racaniere, James Martens, Jakob N Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. *Journal of Machine Learning Research*, 20(84):1–40, 2019. 
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic hamiltonian gradient methods for smooth games. *ICML*, 2020.

Jonathan P. Lorraine, David Acuna, Paul Vicol, and David Duvenaud. Complex momentum for optimization in games. In *International Conference on Artificial Intelligence and Statistics*, pp. 7742–7765. PMLR, 2022.

Aleksandr Mikhailovich Lyapunov. The general problem of the stability of motion. *International Journal of Control*, 55(3):531–534, 1992.

Chris J Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, and Arnaud Doucet. Hamiltonian descent methods. *arXiv preprint arXiv:1809.05042*, 2018.

Eric V Mazumdar, Michael I Jordan, and S Shankar Sastry. On finding local nash equilibria (and only local nash equilibria) in zero-sum games. *arXiv preprint arXiv:1901.00838*, 2019.

Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. *arXiv preprint arXiv:1807.02629*, 2018.

Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of gans. In *Advances in Neural Information Processing Systems*, pp. 1825–1835, 2017.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. *arXiv preprint arXiv:1802.05957*, 2018.

Michael Muehlebach and Michael Jordan. A dynamical systems perspective on nesterov acceleration. In *International Conference on Machine Learning*, pp. 4656–4662, 2019.

Vaishnavh Nagarajan and J Zico Kolter. Gradient descent gan optimization is locally stable. In *Advances in neural information processing systems*, pp. 5585–5595, 2017.

Yurii Nesterov. *Introductory lectures on convex optimization: A basic course*, volume 87. Springer Science & Business Media, 2013.

Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D Lee, and Meisam Razaviyayn. Solving a class of non-convex min-max games using iterative first order methods. In *Advances in Neural Information Processing Systems*, pp. 14934–14942, 2019.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In *NIPS Autodiff Workshop*, 2017.

Boris T Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4(5):1–17, 1964.

Chongli Qin, Yan Wu, Jost Tobias Springenberg, Andrew Brock, Jeff Donahue, Timothy P Lillicrap, and Pushmeet Kohli. Training generative adversarial networks by solving ordinary differential equations. *arXiv preprint arXiv:2010.15040*, 2020.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. *arXiv preprint arXiv:1511.06434*, 2015.

Mihaela C Rosca, Yan Wu, Benoit Dherin, and David Barrett. Discretization drift in two-player games. In *International Conference on Machine Learning*, pp. 9064–9074. PMLR, 2021.

Ernest K Ryu, Kun Yuan, and Wotao Yin. Ode analysis of stochastic gradient methods with optimism and anchoring for minimax problems and gans. *arXiv preprint arXiv:1905.10899*, 2019.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In *Advances in neural information processing systems*, pp. 2234–2242, 2016.

Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets.
In *ACM SIGGRAPH 2022 Conference Proceedings*, pp. 1–10, 2022.

Florian Schäfer and Anima Anandkumar. Competitive gradient descent. In *Advances in Neural Information Processing Systems*, pp. 7623–7633, 2019.

Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. In *Advances in Neural Information Processing Systems*, pp. 527–538, 2018.

Bin Shi, Simon S Du, Weijie Su, and Michael I Jordan. Acceleration via symplectic discretization of high-resolution differential equations. In *Advances in Neural Information Processing Systems*, pp. 5745–5753, 2019.

Weijie Su, Stephen Boyd, and Emmanuel Candes. A differential equation for modeling nesterov's accelerated gradient method: Theory and insights. In *Advances in Neural Information Processing Systems*, pp. 2510–2518, 2014.

Yuanhao Wang, Guodong Zhang, and Jimmy Ba. On solving minimax optimization locally: A follow-the-ridge approach. In *International Conference on Learning Representations*, 2019.

Andre Wibisono, Ashia C Wilson, and Michael I Jordan. A variational perspective on accelerated methods in optimization. *Proceedings of the National Academy of Sciences*, 113(47):E7351–E7358, 2016.

Ashia C Wilson, Benjamin Recht, and Michael I Jordan. A lyapunov analysis of momentum methods in optimization. *arXiv preprint arXiv:1611.02635*, 2016.

Yasin Yazıcı, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The unusual effectiveness of averaging in gan training, 2019.

Guodong Zhang and Yuanhao Wang. On the suboptimality of negative momentum for minimax optimization. In *International Conference on Artificial Intelligence and Statistics*, pp. 2098–2106. PMLR, 2021.

Guodong Zhang, Yuanhao Wang, Laurent Lessard, and Roger B Grosse. Near-optimal local convergence of alternating gradient descent-ascent for minimax optimization. In *International Conference on Artificial Intelligence and Statistics*, pp. 7659–7679. PMLR, 2022.

## A Appendix

## B Proof Of Proposition 1

Proof. The EOMs of the quadratic game in continuous time (Eq.(10)) can be discretized using a combination of implicit and explicit update steps as (Shi et al., 2019),

$$x_{k+1}-x_{k}=\delta v_{k+1}^{x}, \tag{30a}$$
$$y_{k+1}-y_{k}=\delta v_{k+1}^{y}, \tag{30b}$$
$$v_{k+1}^{x}-v_{k}^{x}=-2q\delta\,\nabla_{xy}f\left(x_{k},y_{k}\right)v_{k}^{y}-\mu\delta v_{k}^{x}-\delta\nabla_{x}f\left(x_{k},y_{k}\right), \tag{30c}$$
$$v_{k+1}^{y}-v_{k}^{y}=2q\delta\,\nabla_{xy}f\left(x_{k},y_{k}\right)v_{k}^{x}-\mu\delta v_{k}^{y}+\delta\nabla_{y}f\left(x_{k},y_{k}\right), \tag{30d}$$

where δ is the discretization step-size. Using Eqns. (30a) and (30b), we can further re-express Eqns. (30c) and (30d) as,

$$\begin{aligned}x_{k+1}&=x_{k}+\beta\Delta x_{k}-\eta\nabla_{x}f\left(x_{k},y_{k}\right)-\alpha\nabla_{xy}f\left(x_{k},y_{k}\right)\Delta y_{k},\\ y_{k+1}&=y_{k}+\beta\Delta y_{k}+\eta\nabla_{y}f\left(x_{k},y_{k}\right)+\alpha\nabla_{xy}f\left(x_{k},y_{k}\right)\Delta x_{k},\end{aligned} \tag{31}$$

where Δx_k = x_k − x_{k−1}, and,

$$\beta=1-\mu\delta,\ \eta=\delta^{2},\ \alpha=2q\delta. \tag{32}$$

## C Continuous-Time Convergence Analysis: Quadratic Min-Max Game

Proof. For the class of quadratic min-max games,

$$f\left(\mathbf{X},\mathbf{Y}\right)=\frac{h}{2}||\mathbf{X}||^{2}-\frac{h}{2}||\mathbf{Y}||^{2}+\mathbf{X}^{T}\mathbb{A}\mathbf{Y}, \tag{33}$$
where X ≡ (X¹, · · · , Xⁿ), Y ≡ (Y¹, · · · , Yⁿ) ∈ Rⁿ and A_{n×n} is a constant positive-definite matrix, the continuous-time EOMs of Eq.(10) become:

$$\ddot{\mathbf{X}}=-\mu\dot{\mathbf{X}}-h\mathbf{X}-\mathbb{A}\mathbf{Y}-q\mathbb{A}\dot{\mathbf{Y}},\qquad\ddot{\mathbf{Y}}=-\mu\dot{\mathbf{Y}}-h\mathbf{Y}+\mathbb{A}^{T}\mathbf{X}+q\mathbb{A}^{T}\dot{\mathbf{X}}. \tag{34}$$

We next define our continuous-time Lyapunov function in this case to be,

$$\begin{aligned}\mathcal{E}_{t}&=\frac{1}{2}\left(\dot{\mathbf{X}}+\mu\mathbf{X}+\mu\mathbb{A}\mathbf{Y}\right)^{T}\left(\dot{\mathbf{X}}+\mu\mathbf{X}+\mu\mathbb{A}\mathbf{Y}\right)+\frac{1}{2}\left(\dot{\mathbf{Y}}+\mu\mathbf{Y}-\mu\mathbb{A}^{T}\mathbf{X}\right)^{T}\left(\dot{\mathbf{Y}}+\mu\mathbf{Y}-\mu\mathbb{A}^{T}\mathbf{X}\right)\\&\quad+\frac{1}{2}\left(\dot{\mathbf{X}}^{T}\dot{\mathbf{X}}+\dot{\mathbf{Y}}^{T}\dot{\mathbf{Y}}\right)+\mathbf{X}^{T}(h+\mathbb{A}\mathbb{A}^{T})\mathbf{X}+\mathbf{Y}^{T}(h+\mathbb{A}^{T}\mathbb{A})\mathbf{Y}\;\geq\;0\ \forall\ t.\end{aligned} \tag{35}$$

The time-derivative of E_t is given by,

$$\begin{aligned}\dot{\mathcal{E}}_{t}&=\left(\dot{\mathbf{X}}+\mu\mathbf{X}+\mu\mathbb{A}\mathbf{Y}\right)^{T}\left(\ddot{\mathbf{X}}+\mu\dot{\mathbf{X}}+\mu\mathbb{A}\dot{\mathbf{Y}}\right)+\left(\dot{\mathbf{Y}}+\mu\mathbf{Y}-\mu\mathbb{A}^{T}\mathbf{X}\right)^{T}\left(\ddot{\mathbf{Y}}+\mu\dot{\mathbf{Y}}-\mu\mathbb{A}^{T}\dot{\mathbf{X}}\right)\\&\quad+\dot{\mathbf{X}}^{T}\ddot{\mathbf{X}}+\dot{\mathbf{Y}}^{T}\ddot{\mathbf{Y}}+2\left(\mathbf{X}^{T}(h+\mathbb{A}\mathbb{A}^{T})\dot{\mathbf{X}}+\mathbf{Y}^{T}(h+\mathbb{A}^{T}\mathbb{A})\dot{\mathbf{Y}}\right).\end{aligned}$$

Substituting the EOMs (34), expanding, and using the fact that XᵀAY is a scalar (so that XᵀAY = YᵀAᵀX), this simplifies to

$$\begin{aligned}\dot{\mathcal{E}}_{t}&=\left(\mu\left(q-\mu\right)-2\right)\left(\mathbf{Y}^{T}\mathbb{A}^{T}\dot{\mathbf{X}}-\mathbf{X}^{T}\mathbb{A}\dot{\mathbf{Y}}\right)-\left(\mu\left(q-\mu\right)-2\right)\left(\mathbf{X}^{T}\mathbb{A}\mathbb{A}^{T}\dot{\mathbf{X}}+\mathbf{Y}^{T}\mathbb{A}^{T}\mathbb{A}\dot{\mathbf{Y}}\right)\\&\quad-\mu\left(\mathbf{X}^{T}(h+\mathbb{A}\mathbb{A}^{T})\mathbf{X}+\mathbf{Y}^{T}(h+\mathbb{A}^{T}\mathbb{A})\mathbf{Y}\right)-\mu\left(\dot{\mathbf{X}}^{T}\dot{\mathbf{X}}+\dot{\mathbf{Y}}^{T}\dot{\mathbf{Y}}\right). \end{aligned}\tag{36}$$

If we now set q = (2/µ) + µ in the above, then µ(q − µ) − 2 = 0, which further leads to,

$$\begin{aligned}\dot{\mathcal{E}}_{t}&=-\mu\left(\mathbf{X}^{T}(h+\mathbb{A}\mathbb{A}^{T})\mathbf{X}+\mathbf{Y}^{T}(h+\mathbb{A}^{T}\mathbb{A})\mathbf{Y}\right)-\mu\left(\dot{\mathbf{X}}^{T}\dot{\mathbf{X}}+\dot{\mathbf{Y}}^{T}\dot{\mathbf{Y}}\right)\\&=-\mu\left(h||\mathbf{X}||^{2}+h||\mathbf{Y}||^{2}+\|\mathbb{A}^{T}\mathbf{X}\|^{2}+\|\mathbb{A}\mathbf{Y}\|^{2}\right)-\mu\left(\|\dot{\mathbf{X}}\|^{2}+\|\dot{\mathbf{Y}}\|^{2}\right)\leq0\ \forall\ t,\end{aligned} \tag{37}$$

exhibiting that the Lyapunov function of Eq.(16) is *asymptotically stable* at all times t. Next, consider the following expression,

$$\begin{aligned}&-\rho\mathcal{E}_{t}-\frac{\rho\mu}{2}\left\|\mathbf{X}-\dot{\mathbf{X}}\right\|^{2}-\frac{\rho\mu}{2}\left\|\mathbf{Y}-\dot{\mathbf{Y}}\right\|^{2}-\frac{\rho\mu}{2}\left\|\dot{\mathbf{X}}-\mathbb{A}\mathbf{Y}\right\|^{2}-\frac{\rho\mu}{2}\left\|\mathbb{A}^{T}\mathbf{X}+\dot{\mathbf{Y}}\right\|^{2}\\&=-\rho\mathcal{E}_{t}-\frac{\rho\mu}{2}\left(||\mathbf{X}||^{2}+||\mathbf{Y}||^{2}\right)+\rho\mu\left(\mathbf{X}^{T}\dot{\mathbf{X}}+\mathbf{Y}^{T}\dot{\mathbf{Y}}\right)-\rho\mu\left(\|\dot{\mathbf{X}}\|^{2}+\|\dot{\mathbf{Y}}\|^{2}\right)\\&\qquad-\rho\mu\left(\mathbf{X}^{T}\mathbb{A}\dot{\mathbf{Y}}-\dot{\mathbf{X}}^{T}\mathbb{A}\mathbf{Y}\right)-\frac{\rho\mu}{2}\left(\|\mathbb{A}^{T}\mathbf{X}\|^{2}+\|\mathbb{A}\mathbf{Y}\|^{2}\right)\\&=-\rho\left(1+\mu\right)\left(\|\dot{\mathbf{X}}\|^{2}+\|\dot{\mathbf{Y}}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2h\right)\left(||\mathbf{X}||^{2}+||\mathbf{Y}||^{2}\right)\\&\qquad-\frac{\rho}{2}\left(\mu^{2}+\mu+2\right)\left(\|\mathbb{A}^{T}\mathbf{X}\|^{2}+\|\mathbb{A}\mathbf{Y}\|^{2}\right)\;\leq\;-\rho\mathcal{E}_{t},\end{aligned} \tag{38}$$

where ρ is some positive constant. This implies that the above expression is negative semi-definite by construction, given µ ≥ 0. Now, for a general square matrix A, we can perform a singular value decomposition (SVD) as A = VᵀSU. Here, U and V are the right and left unitaries of A, while S is a diagonal matrix of singular values (σᵢ) of A. Using this decomposition in Eq.(38) then allows us to write,

$$\begin{aligned}&-\rho\left(1+\mu\right)\left(\|\dot{\mathbf{X}}\|^{2}+\|\dot{\mathbf{Y}}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2h\right)\left(||\mathbf{X}||^{2}+||\mathbf{Y}||^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2\right)\left(\|\mathbb{A}^{T}\mathbf{X}\|^{2}+\|\mathbb{A}\mathbf{Y}\|^{2}\right)\\&=-\rho\left(1+\mu\right)\left(\|\mathbb{V}\dot{\mathbf{X}}\|^{2}+\|\mathbb{U}\dot{\mathbf{Y}}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2h\right)\left(\|\mathbb{V}\mathbf{X}\|^{2}+\|\mathbb{U}\mathbf{Y}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2\right)\left(\|\mathbb{S}\mathbb{V}\mathbf{X}\|^{2}+\|\mathbb{S}\mathbb{U}\mathbf{Y}\|^{2}\right)\\&=-\rho\left(1+\mu\right)\left(\|\dot{\mathcal{X}}\|^{2}+\|\dot{\mathcal{Y}}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2h\right)\left(\|\mathcal{X}\|^{2}+\|\mathcal{Y}\|^{2}\right)-\frac{\rho}{2}\left(\mu^{2}+\mu+2\right)\left(\|\mathbb{S}\mathcal{X}\|^{2}+\|\mathbb{S}\mathcal{Y}\|^{2}\right)\\&=-\sum_{j=1}^{n}\rho\left(1+\mu\right)\left(\left(\dot{\mathcal{X}}^{j}\right)^{2}+\left(\dot{\mathcal{Y}}^{j}\right)^{2}\right)-\sum_{j=1}^{n}\frac{\rho}{2}\left(\left(1+\sigma_{j}^{2}+2h\right)\left(\mu^{2}+\mu\right)+2\sigma_{j}^{2}\right)\left(\left(\mathcal{X}^{j}\right)^{2}+\left(\mathcal{Y}^{j}\right)^{2}\right),\end{aligned} \tag{39}$$

where we have made use of the relations UᵀU = UUᵀ = I_n = VᵀV = VVᵀ, and additionally performed a basis change, X = VX and Y = UY.
Now, we know from Eq.(37) that,

$$\begin{aligned}\dot{\mathcal{E}}_{t}&=-\mu\left(h\|X\|^{2}+h\|Y\|^{2}+\|\mathbb{A}^{T}X\|^{2}+\|\mathbb{A}Y\|^{2}\right)-\mu\left(\|\dot{X}\|^{2}+\|\dot{Y}\|^{2}\right)\\ &=-\mu\left(h\|X\|^{2}+h\|Y\|^{2}+\|\mathbb{U}^{T}\mathbb{S}\mathbb{V}X\|^{2}+\|\mathbb{V}^{T}\mathbb{S}\mathbb{U}Y\|^{2}\right)-\mu\left(\|\mathbb{V}\dot{X}\|^{2}+\|\mathbb{U}\dot{Y}\|^{2}\right)\\ &=-\mu\left(h\|\mathcal{X}\|^{2}+h\|\mathcal{Y}\|^{2}+\|\mathbb{S}\mathcal{X}\|^{2}+\|\mathbb{S}\mathcal{Y}\|^{2}\right)-\mu\left(\|\dot{\mathcal{X}}\|^{2}+\|\dot{\mathcal{Y}}\|^{2}\right)\\ &=-\sum_{j=1}^{n}\mu\left(\sigma_{j}^{2}+h\right)\left(\|\mathcal{X}^{j}\|^{2}+\|\mathcal{Y}^{j}\|^{2}\right)-\sum_{j=1}^{n}\mu\left(\|\dot{\mathcal{X}}^{j}\|^{2}+\|\dot{\mathcal{Y}}^{j}\|^{2}\right)\end{aligned}\tag{40}$$

Comparing the above expression with Eq.(39), we note that a choice of $\rho$ as,

$$\rho\leq\min\left\{\frac{\mu}{1+\mu},\ \frac{2\mu(\sigma_{\min}^{2}+h)}{\left(1+\sigma_{\min}^{2}+2h\right)\left(\mu^{2}+\mu\right)+2\sigma_{\min}^{2}}\right\}\ \forall\,j\in[1,n]\tag{41}$$

implies,

$$\begin{aligned}\dot{\mathcal{E}}_{t}\leq-\rho\mathcal{E}_{t}\ &\Rightarrow\ \mathcal{E}_{t}\leq\mathcal{E}_{0}\exp\left(-\rho t\right)\\ &\Rightarrow\ X^{T}\left(h+\mathbb{A}\mathbb{A}^{T}\right)X+Y^{T}\left(h+\mathbb{A}^{T}\mathbb{A}\right)Y\leq\mathcal{E}_{0}\exp\left(-\rho t\right)\\ &\Rightarrow\ \mathcal{X}^{T}\left(h+\mathbb{S}^{2}\right)\mathcal{X}+\mathcal{Y}^{T}\left(h+\mathbb{S}^{2}\right)\mathcal{Y}\leq\mathcal{E}_{0}\exp\left(-\rho t\right)\\ &\Rightarrow\ \sum_{j=1}^{n}\left(h+\sigma_{j}^{2}\right)\left(\|\mathcal{X}^{j}\|^{2}+\|\mathcal{Y}^{j}\|^{2}\right)\leq\mathcal{E}_{0}\exp\left(-\rho t\right)\\ &\Rightarrow\ \sum_{j=1}^{n}\left(h+\sigma_{j}^{2}\right)\left(\|X^{j}\|^{2}+\|Y^{j}\|^{2}\right)\leq\mathcal{E}_{0}\exp\left(-\rho t\right)\\ \therefore\ \|X\|^{2}+\|Y\|^{2}&\leq\frac{\mathcal{E}_{0}}{h+\sigma_{\min}^{2}}\exp\left(-\rho t\right)\ \forall\,j\end{aligned}\tag{42}$$

Figure 6: **Left:** Contours of the Lyapunov function $\mathcal{E}_{k}$, Eq. (35) (black), and convergence trajectory of LEAD (red) in the quadratic min-max game (Eq.(34)) to the Nash equilibrium (0, 0). **Right**: The evolution of the discrete-time Lyapunov function of Eq. (35) over iterations, confirming $\mathcal{E}_{k}-\mathcal{E}_{k-1}\leq0\ \forall\,k\in\mathbb{N}$.
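As a sanity check on the continuous-time analysis above, the following is a minimal sketch that integrates the EOMs of Eq.(34) for a scalar instance of the game ($\mathbb{A}=a$) with simple Euler steps and monitors the Lyapunov function of Eq.(35). All constants are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: integrate the continuous-time EOMs of Eq. (34) for a scalar
# quadratic game (A = a) and monitor the Lyapunov function of Eq. (35).
h, a, mu = 0.5, 1.0, 0.5
q = 2.0 / mu + mu                        # the choice used to obtain Eq. (37)
dt = 1e-3

def lyapunov(X, Y, Xd, Yd):
    # Scalar instance of Eq. (35); (h + A A^T) reduces to (h + a^2).
    return (0.5 * (Xd + mu * X + mu * a * Y) ** 2
            + 0.5 * (Yd + mu * Y - mu * a * X) ** 2
            + 0.5 * (Xd ** 2 + Yd ** 2)
            + (h + a * a) * (X ** 2 + Y ** 2))

X, Y, Xd, Yd = 1.0, 1.0, 0.0, 0.0        # initial positions and velocities
for step in range(50001):
    if step % 10000 == 0:
        print(f"t={step * dt:5.1f}  E_t={lyapunov(X, Y, Xd, Yd):.6f}")
    Xdd = -mu * Xd - h * X - a * Y - q * a * Yd   # Eq. (34), first line
    Ydd = -mu * Yd - h * Y + a * X + q * a * Xd   # Eq. (34), second line
    Xd, Yd = Xd + dt * Xdd, Yd + dt * Ydd
    X, Y = X + dt * Xd, Y + dt * Yd
# E_t should decay (approximately) exponentially, cf. Eq. (42).
```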
## D Proof Of Theorem 2

Theorem. *The eigenvalues of* $\nabla F_{LEAD}(\omega^{*})$ *about the Nash equilibrium* $\omega^{*}=(x^{*},y^{*})$ *of the quadratic min-max game are,*

$$\mu_{\pm}(\alpha,\beta,\eta)=\frac{1-(\eta+\alpha)\lambda+\beta-\eta h\pm\sqrt{\Delta}}{2}\tag{43}$$

*where* $\Delta=\left(1-\left(\eta+\alpha\right)\lambda+\beta-\eta h\right)^{2}-4\left(\beta-\alpha\lambda\right)$ *and* $\lambda\in\mathrm{Sp}$(*off-diag*$[\nabla\mathbf{v}(\omega^{*})]$)*. Furthermore, for* $h,\eta,|\alpha|,|\beta|\ll1$*, we have,*

$$\mu_{+}^{(i)}(\alpha,\beta,\eta)\approx1-\eta h+\frac{\left(\eta+\alpha\right)^{2}\lambda_{i}^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda_{i}\left(\frac{\eta+\alpha}{2}\left(\eta h-\beta\right)-\eta\right)\tag{44}$$

$$\mu_{-}^{(i)}(\alpha,\beta,\eta)\approx\beta-\frac{\left(\eta+\alpha\right)^{2}\lambda_{i}^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda_{i}\left(\frac{\eta+\alpha}{2}\left(\beta-\eta h\right)-\alpha\right)\tag{45}$$

Proof. For the quadratic game (33), the Jacobian of the vector field $\mathbf{v}$ is given by,

$$\nabla\mathbf{v}\equiv\nabla\begin{bmatrix}\nabla_{x}f(\mathbf{x}_{t},\mathbf{y}_{t})\\ -\nabla_{y}f(\mathbf{x}_{t},\mathbf{y}_{t})\end{bmatrix}=\begin{bmatrix}h\mathbb{I}_{n}&\mathbb{A}\\ -\mathbb{A}^{\top}&h\mathbb{I}_{n}\end{bmatrix}\in\mathbb{R}^{2n}\times\mathbb{R}^{2n}.$$

Let us next define a matrix $\mathbb{D}_{\mathrm{q}}$ as,

$$\mathbb{D}_{\mathrm{q}}=\begin{bmatrix}\nabla_{xy}^{2}f(\mathbf{x},\mathbf{y})&0\\ 0&-\nabla_{xy}^{2}f(\mathbf{x},\mathbf{y})\end{bmatrix}=\begin{bmatrix}\mathbb{A}&0\\ 0&-\mathbb{A}^{\top}\end{bmatrix}\in\mathbb{R}^{2n}\times\mathbb{R}^{2n}\tag{46}$$

Consequently, the update rule for LEAD can be written as:

$$\begin{bmatrix}\mathbf{x}_{t+1}\\ \mathbf{y}_{t+1}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{t}\\ \mathbf{y}_{t}\end{bmatrix}+\beta\begin{bmatrix}\mathbf{x}_{t}-\mathbf{x}_{t-1}\\ \mathbf{y}_{t}-\mathbf{y}_{t-1}\end{bmatrix}-\eta\begin{bmatrix}\nabla_{x}f(\mathbf{x}_{t},\mathbf{y}_{t})\\ -\nabla_{y}f(\mathbf{x}_{t},\mathbf{y}_{t})\end{bmatrix}-\alpha\begin{bmatrix}\nabla_{xy}^{2}f(\mathbf{x}_{t},\mathbf{y}_{t})\Delta\mathbf{y}_{t}\\ -\nabla_{xy}^{2}f(\mathbf{x}_{t},\mathbf{y}_{t})\Delta\mathbf{x}_{t}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{t}\\ \mathbf{y}_{t}\end{bmatrix}+\beta\begin{bmatrix}\mathbf{x}_{t}-\mathbf{x}_{t-1}\\ \mathbf{y}_{t}-\mathbf{y}_{t-1}\end{bmatrix}-\eta\mathbf{v}-\alpha\mathbb{D}_{\mathrm{q}}\begin{bmatrix}\Delta\mathbf{y}_{t}\\ \Delta\mathbf{x}_{t}\end{bmatrix}\tag{47}$$

where $\Delta\mathbf{y}_{t}=\mathbf{y}_{t}-\mathbf{y}_{t-1}$ and $\Delta\mathbf{x}_{t}=\mathbf{x}_{t}-\mathbf{x}_{t-1}$. Next, by making use of the permutation matrix $\mathbb{P}$,

$$\mathbb{P}:=\begin{bmatrix}0&\mathbb{I}_{n}\\ \mathbb{I}_{n}&0\end{bmatrix}\in\mathbb{R}^{2n}\times\mathbb{R}^{2n}\tag{48}$$

we can re-express Eq.(47) as,

$$\begin{aligned}\begin{bmatrix}\omega_{t+1}\\ \omega_{t}\end{bmatrix}&=\begin{bmatrix}\mathbb{I}_{2n}&0\\ \mathbb{I}_{2n}&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}+\beta\begin{bmatrix}\mathbb{I}_{2n}&-\mathbb{I}_{2n}\\ 0&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}-\eta\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}-\alpha\begin{bmatrix}\mathbb{D}_{\mathrm{q}}&0\\ 0&0\end{bmatrix}\begin{bmatrix}\mathbb{P}&-\mathbb{P}\\ 0&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}\\ &=\begin{bmatrix}\mathbb{I}_{2n}&0\\ \mathbb{I}_{2n}&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}+\beta\begin{bmatrix}\mathbb{I}_{2n}&-\mathbb{I}_{2n}\\ 0&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}-\eta\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}-\alpha\begin{bmatrix}\mathbb{D}_{\mathrm{q}}\mathbb{P}&-\mathbb{D}_{\mathrm{q}}\mathbb{P}\\ 0&0\end{bmatrix}\begin{bmatrix}\omega_{t}\\ \omega_{t-1}\end{bmatrix}\end{aligned}$$

where $\omega_{t}\equiv(\mathbf{x}_{t},\mathbf{y}_{t})$. Hence, the Jacobian of $F_{\mathrm{LEAD}}$ is then given by,

$$\begin{aligned}\nabla F_{\text{LEAD}}&=\begin{bmatrix}\mathbb{I}_{2n}&0\\ \mathbb{I}_{2n}&0\end{bmatrix}+\beta\begin{bmatrix}\mathbb{I}_{2n}&-\mathbb{I}_{2n}\\ 0&0\end{bmatrix}-\eta\begin{bmatrix}\nabla\mathbf{v}&0\\ 0&0\end{bmatrix}-\alpha\begin{bmatrix}\mathbb{D}_{\text{q}}\mathbb{P}&-\mathbb{D}_{\text{q}}\mathbb{P}\\ 0&0\end{bmatrix}\\ &=\begin{bmatrix}(1+\beta)\mathbb{I}_{2n}-\eta\nabla\mathbf{v}-\alpha\mathbb{D}_{\text{q}}\mathbb{P}&-\beta\mathbb{I}_{2n}+\alpha\mathbb{D}_{\text{q}}\mathbb{P}\\ \mathbb{I}_{2n}&0\end{bmatrix}\end{aligned}\tag{49}$$

It is to be noted that, for games of the form of Eq.(33), we specifically have,

$$\nabla\mathbf{v}=\mathbb{D}_{\mathrm{q}}\mathbb{P}+h\mathbb{I}_{2n}\quad\text{and}\quad\textit{off-diag}[\nabla\mathbf{v}]=\mathbb{D}_{\mathrm{q}}\mathbb{P}\tag{50}$$

Therefore, Eq.(49) becomes,

$$\nabla F_{\mathrm{LEAD}}=\begin{bmatrix}\left(1+\beta-\eta h\right)\mathbb{I}_{2n}-\left(\eta+\alpha\right)\mathbb{D}_{\mathrm{q}}\mathbb{P}&-\beta\mathbb{I}_{2n}+\alpha\mathbb{D}_{\mathrm{q}}\mathbb{P}\\ \mathbb{I}_{2n}&0\end{bmatrix}$$

We next proceed to study the eigenvalues of this matrix, which determine the convergence properties of LEAD around the Nash equilibrium. Using Lemma 1 of Gidel et al. (2019), we can then write the characteristic polynomial of $\nabla F_{\mathrm{LEAD}}$ as,

$$\begin{aligned}&\det\left(X\mathbb{I}_{4n}-\nabla F_{\mathrm{LEAD}}\right)=0\\ \Rightarrow\ &\det\begin{bmatrix}(X-1)\mathbb{I}_{2n}-(\beta-\eta h)\mathbb{I}_{2n}+(\eta+\alpha)\mathbb{D}_{\mathrm{q}}\mathbb{P}&\beta\mathbb{I}_{2n}-\alpha\mathbb{D}_{\mathrm{q}}\mathbb{P}\\ -\mathbb{I}_{2n}&X\mathbb{I}_{2n}\end{bmatrix}=0\\ \Rightarrow\ &\det\left[\left((X-1)(X-\beta)+X\eta h\right)\mathbb{I}_{2n}+\left(X\eta+X\alpha-\alpha\right)\mathbb{D}_{\mathrm{q}}\mathbb{P}\right]=0\\ \Rightarrow\ &\det\left[\left((X-1)(X-\beta)+X\eta h\right)\mathbb{U}\mathbb{U}^{-1}+\left(X\eta+X\alpha-\alpha\right)\mathbb{U}\Lambda\mathbb{U}^{-1}\right]=0\\ \Rightarrow\ &\det\left[\left((X-1)(X-\beta)+X\eta h\right)\mathbb{I}_{2n}+\left(X\eta+X\alpha-\alpha\right)\Lambda\right]=0\\ \Rightarrow\ &\prod_{i=1}^{2n}\left[(X-1)(X-\beta)+X\eta h+\left(X\eta+\alpha(X-1)\right)\lambda_{i}\right]=0\end{aligned}\tag{51}$$

where, in the above, we have performed an eigenvalue decomposition of $\mathbb{D}_{\mathrm{q}}\mathbb{P}=\mathbb{U}\Lambda\mathbb{U}^{-1}$. Therefore,

$$X^{2}-X\left(1-(\eta+\alpha)\lambda_{i}+\beta-\eta h\right)+\beta-\alpha\lambda_{i}=0,\ \lambda_{i}\in\mathrm{Sp}(\mathbb{D}_{\mathrm{q}}\mathbb{P})$$
$$\Rightarrow X^{(i)}\equiv\mu_{\pm}^{(i)}=\frac{1-(\eta+\alpha)\lambda_{i}+\beta-\eta h\pm\sqrt{\Delta}}{2}\tag{52}$$

with,

$$\Delta=\left(1-\left(\eta+\alpha\right)\lambda_{i}+\beta-\eta h\right)^{2}-4\left(\beta-\alpha\lambda_{i}\right)\tag{53}$$

Furthermore, for $h,\eta,|\beta|,|\alpha|\ll1$, we can approximate the above roots to be,

$$\begin{aligned}\mu_{+}^{(i)}(\alpha,\beta,\eta)&\approx1-\eta h+\frac{\left(\eta+\alpha\right)^{2}\lambda_{i}^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda_{i}\left(\frac{\eta+\alpha}{2}\left(\eta h-\beta\right)-\eta\right)\\ \mu_{-}^{(i)}(\alpha,\beta,\eta)&\approx\beta-\frac{\left(\eta+\alpha\right)^{2}\lambda_{i}^{2}+\eta^{2}h^{2}+\beta^{2}-2\eta h\beta}{4}+\lambda_{i}\left(\frac{\eta+\alpha}{2}\left(\beta-\eta h\right)-\alpha\right)\end{aligned}\tag{54}$$

$\square$

## E Proof Of Proposition 3

Proposition. *For any* $\lambda\in\mathrm{Sp}$(*off-diag*$[\nabla\mathbf{v}(\omega^{*})]$),

$$\left.\nabla_{\alpha}\rho\left(\lambda\right)\right|_{\alpha=0}<0\Leftrightarrow\eta\in\left(0,\frac{2}{\mathrm{Im}(\lambda_{max})}\right),\tag{55}$$

*where* $\mathrm{Im}(\lambda_{max})$ *is the imaginary component of the largest eigenvalue* $\lambda_{max}$.

We observe from Proposition 3 above that for $h,\eta,|\alpha|,|\beta|\ll1$,

$$\rho(\alpha,\eta,\beta):=\max\{|\mu_{+}^{(i)}|^{2},|\mu_{-}^{(i)}|^{2}\}\ \forall\,i=\max\{|\mu_{+}^{(i)}|^{2}\}\ \forall\,i\tag{56}$$

$$\begin{aligned}\therefore\ \left.\nabla_{\alpha}\rho\right|_{\alpha=0}&\approx\max\left\{\left(\frac{\eta^{2}|\lambda_{i}|^{2}-\eta^{2}h^{2}-\beta^{2}}{4}+\eta h\beta-\frac{\left(\eta h-\beta\right)^{2}}{2}-\left(1+\beta\right)\right)\eta|\lambda_{i}|^{2}\right\}\ \forall\,i\\ &\approx\max\left\{\frac{\eta^{3}}{4}|\lambda_{i}|^{4}-\left(1+\beta+\frac{3\beta^{2}}{4}\right)\eta|\lambda_{i}|^{2}\right\}\ \forall\,i\\ &<\max\left\{\left(\frac{\eta^{2}}{4}|\lambda_{i}|^{2}-1\right)\eta|\lambda_{i}|^{2}\right\}\ \forall\,i\end{aligned}\tag{57}$$

where we have retained only terms up to cubic order in $\eta$, $|\beta|$ and $h$.
Hence, choosing $\eta\in\left(0,\frac{2}{\mathrm{Im}(\lambda_{max})}\right)$ ensures:

$$\left.\nabla_{\alpha}\rho\right|_{\alpha=0}<0\ \forall\,i,\tag{58}$$

We thus posit that a choice of a positive $\alpha$ causes the norm of the limiting eigenvalue $\mu_{+}$ of $F_{\mathrm{LEAD}}$ to decrease.

## F Proof Of Theorem 3

Theorem. *Setting* $\eta=\alpha=\frac{1}{2(\sigma_{max}(\mathbb{A})+h)}$*, then we have* $\forall\,\epsilon>0$,

$$\Delta_{t+1}\in\mathcal{O}\left(\left(1-\frac{\sigma_{min}^{2}}{4(\sigma_{max}+h)^{2}}-\frac{(1+\beta/2)h}{\sigma_{max}+h}+\frac{3h^{2}}{8(\sigma_{max}+h)^{2}}\right)^{t}\Delta_{0}\right)\tag{59}$$

*where* $\sigma_{max}$ ($\sigma_{min}$) *is the largest (smallest) singular value of* $\mathbb{A}$*, and* $\Delta_{t+1}:=||\omega_{t+1}-\omega^{*}||_{2}^{2}+||\omega_{t}-\omega^{*}||_{2}^{2}$.

Proof. From Eq.(52), we recall that the eigenvalues of $\nabla F_{\mathrm{LEAD}}\left(\omega^{*}\right)$ for the quadratic game are,

$$\mu_{\pm}^{(i)}(\alpha,\beta,\eta)=\frac{(1-(\alpha+\eta)\lambda_{i}+\beta-\eta h)}{2}\left(1\pm\sqrt{1-\frac{4\left(\beta-\alpha\lambda_{i}\right)}{(1-(\alpha+\eta)\lambda_{i}+\beta-\eta h)^{2}}}\right)\tag{60}$$

with $\lambda_{i}\in\mathrm{Sp}$(off-diag$[\nabla\mathbf{v}(\omega^{*})]$). Now, since in the quadratic-game setting considered, we have,

$$\operatorname{off-diag}[\nabla\mathbf{v}(\omega^{*})]=\mathbb{D}_{\mathrm{q}}\mathbb{P}=\begin{bmatrix}0&\mathbb{A}\\ -\mathbb{A}^{T}&0\end{bmatrix}\tag{61}$$

hence $\lambda_{i}=\pm i\sigma_{i}$, with $\sigma_{i}$ being the singular values of $\mathbb{A}$. This then allows us to write,

$$\mu_{\pm}^{(i)}(\alpha,\beta,\eta)=\frac{(1-(\alpha+\eta)(\pm i\sigma_{i})+\beta-\eta h)}{2}\left(1\pm\sqrt{1-\frac{4\left(\beta-\alpha(\pm i\sigma_{i})\right)}{(1-(\alpha+\eta)(\pm i\sigma_{i})+\beta-\eta h)^{2}}}\right)\tag{62}$$

According to Proposition 2, the convergence behavior of LEAD is determined as $\Delta_{t+1}\leq\mathcal{O}(\rho+\epsilon)\Delta_{t}\ \forall\,\epsilon>0$, where (setting $\eta=\alpha$),

$$\rho:=\max\{|\mu_{+}^{(i)}|^{2},|\mu_{-}^{(i)}|^{2}\}\ \forall\,i=|\mu_{+}^{(i)}|^{2}\ \forall\,i\tag{63}$$

Now, assuming that $\eta$ is small enough such that $\eta^{3}\approx0$ and $\beta^{2}\approx0$, we have,

$$\rho\approx1-\eta^{2}\sigma_{i}^{2}+\frac{3}{2}\eta^{2}h^{2}-\left(2+\beta\right)\eta h\tag{64}$$

Furthermore, using a learning rate $\eta$ as prescribed by Proposition 3, such as $\eta=\alpha=\frac{1}{2(\sigma_{max}(\mathbb{A})+h)}$, we find,

$$r_{\mathrm{LEAD}}=\frac{\sigma_{min}^{2}}{4(\sigma_{max}+h)^{2}}+\frac{(1+\beta/2)h}{\sigma_{max}+h}-\frac{3h^{2}}{8(\sigma_{max}+h)^{2}}\tag{65}$$

Therefore,

$$\Delta_{t+1}\leq\mathcal{O}\left(\left(1-r_{\mathrm{LEAD}}\right)^{t}\Delta_{0}\right)=\mathcal{O}\left(\left(1-\frac{\sigma_{min}^{2}}{4(\sigma_{max}+h)^{2}}-\frac{(1+\beta/2)h}{\sigma_{max}+h}+\frac{3h^{2}}{8(\sigma_{max}+h)^{2}}\right)^{t}\Delta_{0}\right)\tag{66}$$

where $\Delta_{t+1}:=||\omega_{t+1}-\omega^{*}||_{2}^{2}+||\omega_{t}-\omega^{*}||_{2}^{2}$. $\square$

## G Robustness To The Discretization Parameter

In this section, we study the effect of the discretization parameter $\delta$, defined in Eq.(32) and Proposition 1, on the quadratic game defined in Eq.(29),

$$\operatorname*{min}_{\mathbf{x}}\operatorname*{max}_{\mathbf{y}}\mathbf{x}^{T}(\mathbf{\gamma}\mathbf{A})\mathbf{y}+\mathbf{x}^{T}((\mathbf{I}-\mathbf{\gamma})\mathbf{B}_{1})\mathbf{x}-\mathbf{y}^{T}((\mathbf{I}-\mathbf{\gamma})\mathbf{B}_{2})\mathbf{y}.$$

In Figure 7, we provide a grid of experiments to study the effect of discretization on the convergence of the quadratic game explored in Section 7.1 (Adversarial vs. Cooperative games). We observe that for every level of $\gamma_{max}$ that changes the dynamics of the game from adversarial to cooperative, LEAD allows for a wide range of discretization steps that lead to convergence. Note that the discretization step, and consequently the learning rate, can be viewed as a hyper-parameter and consequently requires tuning.

Figure 7: The effect of the discretization parameter ($\delta$) on the convergence of LEAD on the quadratic min-max game defined in Eq.(29). The x-axis changes the game dynamics from cooperative to adversarial. The y-axis shows the range of $\delta$. The color shows the spectral radius, which corresponds to the convergence rate (a smaller spectral radius leads to faster convergence). Convergent dynamics require a spectral radius that is smaller than one. We plot experiments with non-convergent dynamics (spectral radius > 1) in white. Darker colors correspond to faster convergence. We observe that LEAD allows for a wide range of $\delta$ that result in convergent dynamics. Thus, LEAD is robust against the discretization parameter.
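The robustness check of Figure 7 can be reproduced numerically from Eq.(49): build the Jacobian of the LEAD operator, parameterize it through $\delta$ via Eq.(32), and scan the spectral radius. Below is a minimal sketch for a purely adversarial bilinear game ($h=0$); the game size, $\mu$, and the $\delta$ grid are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a Figure 7-style scan: for f(x, y) = x^T A y (h = 0),
# build the LEAD Jacobian of Eq. (49) and sweep the discretization step delta.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
mu = 1.0
q = 2.0 / mu + mu                                    # q = 2/mu + mu, Appendix C

Dq_P = np.block([[np.zeros((n, n)), A], [-A.T, np.zeros((n, n))]])  # off-diag[grad v]
grad_v = Dq_P                                        # h = 0 for the bilinear game
I = np.eye(2 * n)

for delta in [0.01, 0.05, 0.1, 0.2, 0.5]:
    beta, eta, alpha = 1 - mu * delta, delta ** 2, 2 * q * delta    # Eq. (32)
    top_left = (1 + beta) * I - eta * grad_v - alpha * Dq_P
    top_right = -beta * I + alpha * Dq_P
    J = np.block([[top_left, top_right], [I, np.zeros_like(I)]])    # Eq. (49)
    radius = max(abs(np.linalg.eigvals(J)))
    # radius < 1 indicates locally convergent dynamics
    print(f"delta={delta:.2f}  spectral radius={radius:.3f}")
```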
## H LEAD-Adam

Since the Adam algorithm is commonly used in large-scale experiments, we extend LEAD to be used with Adam.

Algorithm 2 Least Action Dynamics Adam (LEAD-Adam)
1: **Input:** learning rate η, momentum β, coupling coefficient α.
2: **Initialize:** x_0 ← x_init, y_0 ← y_init, t ← 0, m^x_0 ← 0, v^x_0 ← 0, m^y_0 ← 0, v^y_0 ← 0
3: **while** not converged **do**
4:   t ← t + 1
5:   g_x ← ∇_x f(x_t, y_t)
6:   g_xy∆y ← ∇_y(g_x)(y_t − y_{t−1})
7:   g^x_t ← g_xy∆y + g_x
8:   m^x_t ← β_1·m^x_{t−1} + (1 − β_1)·g^x_t
9:   v^x_t ← β_2·v^x_{t−1} + (1 − β_2)·(g^x_t)^2
10:  m̂^x_t ← m^x_t/(1 − β_1^t)
11:  v̂^x_t ← v^x_t/(1 − β_2^t)
12:  x_{t+1} ← x_t − η·m̂^x_t/(√(v̂^x_t) + ϵ)
13:  g_y ← ∇_y f(x_{t+1}, y_t)
14:  g_xy∆x ← ∇_x(g_y)(x_{t+1} − x_t)
15:  g^y_t ← g_xy∆x + g_y
16:  m^y_t ← β_1·m^y_{t−1} + (1 − β_1)·g^y_t
17:  v^y_t ← β_2·v^y_{t−1} + (1 − β_2)·(g^y_t)^2
18:  m̂^y_t ← m^y_t/(1 − β_1^t)
19:  v̂^y_t ← v^y_t/(1 − β_2^t)
20:  y_{t+1} ← y_t + η·m̂^y_t/(√(v̂^y_t) + ϵ)
21: **end while**
22: **return** (x, y)

## I 8-Gaussians Generation

We compare our method LEAD-Adam with vanilla Adam (Kingma & Ba, 2014) on the generation task of a mixture of 8 Gaussians. Standard optimization algorithms such as vanilla Adam suffer from mode collapse in this simple task, implying that the generator cannot produce samples from one or several of the distributions present in the real data. Through Figure 8, we demonstrate that LEAD-Adam fully captures all the modes in the real data with both saturating and non-saturating losses.

Figure 8: Performance of LEAD-Adam on the generation task of 8 Gaussians. All samples are shown after 10k iterations. Samples generated using Adam exhibit mode collapse, while LEAD-Adam does not suffer from this issue.
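For reference, the following is a minimal sketch of the 8-Gaussians data distribution used in Appendix J.1 below: eight modes whose means are uniformly spaced on the unit circle, each with variance 0.05. The function name and batch size are illustrative, not the released data-generation code.

```python
import numpy as np

# Minimal sketch of the 8-Gaussians target distribution (Appendix J.1):
# means uniformly spaced on the unit circle, variance 0.05 (std = sqrt(0.05)).
def sample_8_gaussians(batch_size=128, std=np.sqrt(0.05), seed=None):
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * rng.integers(0, 8, size=batch_size) / 8  # pick a mode
    means = np.stack([np.cos(angles), np.sin(angles)], axis=1)    # mode centers
    return means + std * rng.normal(size=(batch_size, 2))

real_batch = sample_8_gaussians(batch_size=128, seed=0)
print(real_batch.shape)  # (128, 2)
```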
## J Experiments And Implementation Details

## J.1 Mixture Of Eight Gaussians

Dataset The real data is generated by 8 Gaussian distributions whose means are uniformly distributed around the unit circle and whose variance is 0.05. The code to generate the data is included in the source code.

Architecture The architectures for the Generator and the Discriminator each consist of four layers of affine transformation, followed by ReLU non-linearity. The weight initialization is PyTorch's default initialization scheme. See a schematic of the architecture in Table 3.

| Generator | Discriminator |
|----------------------------|----------------------|
| Input: z ∈ R^64 ∼ N(0, I) | Input: x ∈ R^2 |
| Linear (64 → 2000) | Linear (2 → 2000) |
| ReLU | ReLU |
| Linear (2000 → 2000) | Linear (2000 → 2000) |
| ReLU | ReLU |
| Linear (2000 → 2000) | Linear (2000 → 2000) |
| ReLU | ReLU |
| Linear (2000 → 2) | Linear (2000 → 1) |

Table 3: Architecture used for the Mixture of Eight Gaussians.

Other Details We use the Adam (Kingma & Ba, 2014) optimizer on top of our algorithm in the reported results. Furthermore, we use a batch size of 128.

## J.2 CIFAR-10 DCGAN

Dataset The CIFAR-10 dataset is available for download at the following link: https://www.cs.toronto.edu/~kriz/cifar.html

Architecture The discriminator has four layers of convolution with LeakyReLU and batch normalization. Also, the generator has four layers of deconvolution with ReLU and batch normalization. See a schematic of the architecture in Table 4.

| Generator | Discriminator |
|---------------------------------------------------|-------------------------------------------------|
| Input: z ∈ R^100 ∼ N(0, I) | Input: x ∈ R^{3×32×32} |
| conv. (ker: 4×4, 100 → 1024; stride: 1; pad: 0) | conv. (ker: 4×4, 3 → 256; stride: 2; pad: 1) |
| Batch Normalization | LeakyReLU |
| ReLU | conv. (ker: 4×4, 256 → 512; stride: 2; pad: 1) |
| conv. (ker: 4×4, 1024 → 512; stride: 2; pad: 1) | Batch Normalization |
| Batch Normalization | LeakyReLU |
| ReLU | conv. (ker: 4×4, 512 → 1024; stride: 2; pad: 1) |
| conv. (ker: 4×4, 512 → 256; stride: 2; pad: 1) | Batch Normalization |
| Batch Normalization | LeakyReLU |
| ReLU | conv. (ker: 4×4, 1024 → 1; stride: 1; pad: 0) |
| conv. (ker: 4×4, 256 → 3; stride: 2; pad: 1) | Sigmoid |
| Tanh | |

Table 4: Architecture used for CIFAR-10 DCGAN.

Other Details For the baseline we use Adam with β1 set to 0.5 and β2 set to 0.99. The generator's learning rate is 0.0002 and the discriminator's learning rate is 0.0001. The same learning rates and momentum were used to train the LEAD model. We also add the mixed derivative term with αd = 0.3 and αg = 0.0. The baseline is a DCGAN with the standard non-saturating loss (non-zero-sum formulation). In our experiments, we compute the FID based on 50,000 samples generated from our model vs. 50,000 real samples.

Samples

Figure 9: Performance of LEAD on the CIFAR-10 image generation task on a DCGAN architecture. **Left**: LEAD achieves FID 19.27. **Right**: Vanilla Adam achieves FID 24.38. LEAD is able to generate better sample qualities for several classes such as ships, horses and birds (red). Best performance is reported after 100 epochs.

## J.3 CIFAR-10 ResNet

Dataset The CIFAR-10 dataset is available for download at the following link: https://www.cs.toronto.edu/~kriz/cifar.html

Architecture See Table 6 for a schematic of the architecture used for the CIFAR-10 experiments with ResNet.

| Gen–Block | Dis–Block |
|---|---|
| Shortcut: Upsample(×2) | Shortcut: downsample: conv. (ker: 1×1, 3_{ℓ=1}/128_{ℓ≠1} → 128; stride: 1), Spectral Normalization, [AvgPool (ker: 2×2, stride: 2)] if ℓ ≠ 1 |
| Residual: Batch Normalization, ReLU, Upsample(×2), conv. (ker: 3×3, 256 → 256; stride: 1; pad: 1), Batch Normalization, ReLU, conv. (ker: 3×3, 256 → 256; stride: 1; pad: 1) | Residual: [ReLU] if ℓ ≠ 1, conv. (ker: 3×3, 3_{ℓ=1}/128_{ℓ≠1} → 128; stride: 1; pad: 1), Spectral Normalization, ReLU, conv. (ker: 3×3, 128 → 128; stride: 1; pad: 1), Spectral Normalization |

Table 5: ResNet blocks used for the ResNet architectures (see Table 6).

Other Details The baseline is a ResNet with non-saturating loss (non-zero-sum formulation). Similar to Miyato et al. (2018), for every generator update, the discriminator is updated 5 times. For both the baseline SNGAN and LEAD-Adam we use a β1 of 0.0 and a β2 of 0.9 for Adam. The baseline SNGAN uses a learning rate of 0.0002 for both the generator and the discriminator. LEAD-Adam also uses a learning rate of 0.0002 for the generator but 0.0001 for the discriminator. LEAD-Adam uses an α of 0.5 and 0.01 for the generator and the discriminator, respectively. Furthermore, we evaluate both the baseline and our method on an exponential moving average of the generator's parameters.

| Generator | Discriminator |
|---------------------------------------------|------------------------|
| Input: z ∈ R^64 ∼ N(0, I) | Input: x ∈ R^{3×32×32} |
| Linear (64 → 4096) | D–ResBlock |
| G–ResBlock | D–ResBlock |
| G–ResBlock | D–ResBlock |
| G–ResBlock | D–ResBlock |
| Batch Normalization | ReLU |
| ReLU | AvgPool (ker: 8×8) |
| conv. (ker: 3×3, 256 → 3; stride: 1; pad: 1) | Linear (128 → 1) |
| Tanh(·) | Spectral Normalization |

Table 6: ResNet architectures used for experiments on CIFAR-10.

In our experiments, we compute the FID based on 50,000 samples generated from our model vs. 50,000 real samples, and we report the mean and variance over 5 random runs. We have provided pre-trained models as well as the source code for both LEAD-Adam and the baseline SNGAN in our GitHub repository.

Samples

Figure 10: Generated samples of LEAD-Adam on CIFAR-10 after 50k iterations on a ResNet architecture. We achieve an FID score of 10.49 using a learning rate of 2e−4 for the generator and the discriminator; α for the generator is set to 0.01 and for the discriminator is set to 0.5.

## K Comparison To Other Methods

In this section, we compare our method with several other second-order methods in the min-max setting. The distinction of LEAD from SGA and LookAhead can be understood by considering the first-order approximation of x_{k+1} = x_k − η∇_x f(x_k, y_k + η∆y_k), where ∆y_k = η∇_y f(x_k + η∆x, y_k).

⁴For FtR, we provide the update for the second player, given that the first player performs gradient descent. Also note that in this table SGA is simplified for the two-player zero-sum game. The non-zero-sum formulation of SGA, such as the one used for GANs, requires the computation of Jv and J⊤v.

Table 7: Comparison of several second-order methods in min-max optimization. Each update rule, corresponding to a particular row, can be constructed by adding cells in that row from Columns 4 to 7 and then multiplying that by the value in Column 1. Furthermore, ∆x_{k+1} = x_{k+1} − x_k, while C = I + η²∇²_{xy}f∇²_{yx}f. We compare the update rules of the first player⁴ for the following methods: Gradient Descent-Ascent (GDA), Least Action Dynamics (LEAD, ours), Symplectic Gradient Adjustment (SGA), Competitive Gradient Descent (CGD), Consensus Optimization (CO), Follow-the-Ridge (FtR) and Learning with Opponent Learning Awareness (LOLA), in a zero-sum game.
| Method | Coefficient | Momentum | Gradient | Interaction-xy | Interaction-xx |
|----------|-------------|------------|------------|------------------|------------------|
| GDA: ∆x_{k+1} = | 1 | 0 | −η∇_x f | 0 | 0 |
| LEAD: ∆x_{k+1} = | 1 | β∆x_k | −η∇_x f | −α∇²_{xy}f ∆y_k | 0 |
| SGA (6): ∆x_{k+1} = | 1 | 0 | −η∇_x f | −ηγ∇²_{xy}f ∇_y f | 0 |
| CGD (53): ∆x_{k+1} = | C⁻¹ | 0 | −η∇_x f | −η²∇²_{xy}f ∇_y f | 0 |
| CO (39): ∆x_{k+1} = | 1 | 0 | −η∇_x f | −ηγ∇²_{xy}f ∇_y f | −ηγ∇²_{xx}f ∇_x f |
| FtR (57): ∆y_{k+1} = | 1 | 0 | η_y∇_y f | η_x (∇²_{yy}f)⁻¹ ∇²_{yx}f ∇_x f | 0 |
| LOLA (15): ∆x_{k+1} = | 1 | 0 | −η∇_x f | −2ηα∇_{xy}f ∇_y f | 0 |

This gives rise to:

$$\begin{aligned}x_{k+1}&=x_{k}-\eta\nabla_{x}f\left(x_{k},y_{k}\right)-\eta^{2}\nabla_{xy}^{2}f\left(x_{k},y_{k}\right)\Delta y,\\ y_{k+1}&=y_{k}+\eta\nabla_{y}f\left(x_{k},y_{k}\right)+\eta^{2}\nabla_{xy}^{2}f\left(x_{k},y_{k}\right)\Delta x,\end{aligned}\tag{67, 68}$$

with ∆x, ∆y corresponding to each player accounting for its opponent's potential next step. However, SGA and LookAhead additionally *model* their opponent as a *naive* learner, i.e., ∆x = −∇_x f(x_k, y_k), ∆y = ∇_y f(x_k, y_k). On the contrary, our method does away with such specific assumptions, instead modeling the opponent based on its most recent move. Furthermore, there is a resemblance between LEAD and OGDA that we would like to address. The first-order Taylor expansion of the difference-in-gradients term of OGDA yields the update (for x):

$$x_{k+1}=x_{k}-\eta\nabla_{x}f-\eta^{2}\nabla_{xy}^{2}f\nabla_{y}f+\eta^{2}\nabla_{xx}^{2}f\nabla_{x}f,\tag{69}$$

which contains an extra second-order term ∇²_{xx}f compared to ours. As noted in Schäfer & Anandkumar (2019), the ∇²_{xx}f term does not systematically aid in curbing the min-max rotations, rather causing convergence to non-Nash points in some settings. For example, let us consider the simple game f(x, y) = γ(x² − y²), where x, y, γ are all scalars, with the Nash equilibrium of this game located at (x∗ = 0, y∗ = 0). For a choice of γ ≥ 6, OGDA fails to converge for any learning rate, while methods like LEAD, Gradient Descent Ascent (GDA) and CGD (Schäfer & Anandkumar, 2019) that do not contain the ∇_{xx}f (∇_{yy}f) term do exhibit convergence. See Figure 11 and Schäfer & Anandkumar (2019) for more discussion.

Figure 11: Figure depicting the convergence/divergence of several algorithms on the game f(x, y) = γ(x² − y²) (Nash equilibrium at x∗ = 0, y∗ = 0). **Left**: For γ = 1, OGDA and LEAD/GDA/CGD (overlaying) are found to converge to the Nash equilibrium. **Right**: For γ = 6, we find that OGDA fails to converge while LEAD/GDA/CGD (overlaying) converge. We conjecture that the reason behind this observation is the existence of the ∇²_{xx}f term in the optimization algorithm of OGDA.
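To make Table 7 concrete, the following is a minimal sketch of three of its rows (GDA, SGA and LEAD) on the bilinear game f(x, y) = x·y, where plain GDA is known to spiral outward. The step sizes below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of three Table 7 update rules on f(x, y) = x * y, where
# grad_x f = y, grad_y f = x and grad_xy f = 1. Constants are illustrative.
eta, gamma_sga, beta, alpha = 0.1, 1.0, 0.5, 0.3

def gda(x, y, xp, yp):
    return x - eta * y, y + eta * x                    # gradient terms only

def sga(x, y, xp, yp):
    # Interaction term -eta*gamma * grad_xy f * grad_y f = -eta*gamma*x (sym. for y).
    return x - eta * y - eta * gamma_sga * x, y + eta * x - eta * gamma_sga * y

def lead(x, y, xp, yp):
    dx, dy = x - xp, y - yp                            # momentum differences
    return (x + beta * dx - eta * y - alpha * dy,
            y + beta * dy + eta * x + alpha * dx)

for name, step in [("GDA", gda), ("SGA", sga), ("LEAD", lead)]:
    x = y = xp = yp = 1.0
    for _ in range(300):
        xn, yn = step(x, y, xp, yp)
        xp, yp, x, y = x, y, xn, yn
    print(f"{name}: distance to Nash = {np.hypot(x, y):.3e}")
# GDA grows slowly away from (0, 0); SGA and LEAD contract toward it.
```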
Review 1:
Summary:
This paper proposes a new optimization algorithm called LEAD for min-max optimization. The algorithm is developed by observing gradient descent dynamics, drawing analogies with a physical system, and modifying it using momentum and a coupling term. The convergence of LEAD is analyzed on a quadratic min-max optimization problem. It converges exponentially and is asymptotically quicker than an extra-gradient method in some regimes. The paper also provides a detailed empirical study of the computational cost of implementing LEAD v/s other methods, showing LEAD can be implemented with two gradient computations at each step. Finally, the paper compares the optimization of LEAD v/s previous methods on a family of synthetic and well-known non-convex tasks, showing either improved or comparable performance. **Overall, I like the writing and the flow of the paper and recommend accepting it.**
Strengths and Weaknesses:
1. The paper is well written. It starts with a discussion of the optimization mechanics, providing good intuition for how the optimization method was developed by introducing two different kinds of "forces" and showing their effect on the final update equations of the algorithm. Making an analogy to Newton's second law and using (for the most part) familiar physical terminology was helpful. I didn't fully understand why the magnetic force does what the authors describe, and it might be good to add an illustrative physical (or abstract) example in the paper.
2. The rotational dynamics hindering the convergence of gradient descent ascent is a well-talked-about phenomenon, and several algorithms have been proposed to avoid it. I found the discussion of the existing algorithms a bit lacking. In particular, while many works were cited, the precise convergence rates were not discussed. This makes it difficult to put the theoretical results in this paper in perspective.
3. The authors don't discuss if their results can be extended to the general quadratic setting and, more broadly, to convex-concave games. I think this is a major limitation of the paper and requires discussion.
Requested Changes:
1. Add an intuitive example discussing the effect of the magnetic forces. This is not a big issue but might help provide further physical intuition.
2. Add a theoretical comparison to other methods. A table earlier in the paper discussing the convergence rates for all the methods would be most desirable. This should be discussed for all the algorithms used in Figure 3. This is an important addition.
3. Discuss any difficulties in extending the results to general convex-concave games.
Broader Impact Concerns: None. I appreciate the authors adding a broader impact section.

==================================================

Review 2:
Summary:
This paper proposes a novel optimization method to solve min-max game optimization problems, and the proposed method leverages physical properties, e.g., the mechanism of forces, to design the proposed optimization methods.
Strengths and Weaknesses:
This paper is very well written and organized. The novelty of the proposed method lies in the connection between momentum-based optimization methods and physical laws. I am not aware of such an explicit proposal in the literature.
$\mathbf{1}.$ The technical analysis is thorough, with the authors considering both discrete and continuous cases, and the convergence analysis is sound and solid.
$\mathbf{2}.$ The novel proposal of connecting momentum with physics for solving min-max games can be considered a solid contribution, as can the intuitive design of the algorithm.

$\mathbf{3}.$ The related work is very well stated, with detailed background information and lines of research. The motivation and physical connection are very well stated with intuitive illustrations, mainly in Section 3. The authors also conduct a good comparison with existing algorithms, especially on the computational cost.

$\mathbf{4}.$ The experiments are very comprehensive as far as I can see, and are supported by extensive details, such as the experimental setup. The appendix has extensive details that help to further understand the empirical experiments. The proposed toy experiment on the min-max problem in Eq. (31) is very helpful in understanding the practical convergence of LEAD compared to other existing algorithms. Following the toy problem, the experiments on GANs are also presented and designed well to support the claims of the paper.

$\mathbf{5}.$ The authors provided extensive details in the appendix, as well as a blog post with a nice illustration in the supplementary material. I found it very helpful for understanding the problem to be solved and the proposed algorithm.

--------------------

Regarding the weaknesses, I defer them to the next section, where I'd like to ask questions and propose a few changes. $\textbf{To highlight my major concern}$, I see the proposed algorithm performs well on top of Adam on a few GANs, but I'd like to understand more about its general applicability during the discussion, as it could potentially be a limitation of the proposed method.

Requested Changes:
There are some trivial and non-trivial issues I found in the paper.

$\mathbf{1}.$ It seems that Figure 1 is not referred to in the text, and I found its graphic/notation a bit confusing. It'd be nice to add its context in the text and make its explanation clearer.

$\mathbf{2}.$ In the experiments, the authors mention "State-of-the-art methods use architectures that are 30 times or more larger than the architecture that we have chosen to test our method on". Is it necessary to evaluate the proposed method on more up-to-date models/instances? If so, it'd be good practice to add additional experiments to the paper. If not, please explain.

$\mathbf{3}.$ I understand there exists a section discussing the computational cost of the proposed algorithm against other existing methods. However, in the experiments, e.g., when comparing Adam vs. LEAD, it would be good to add either the cost of computation (e.g., the number of gradient evaluations) or wall-clock time.

Trivial one:
$\mathbf{4}.$ in Section 7.2 on page 10, "See H for a comparison" is missing "Appendix".

Broader Impact Concerns: The authors stated a possible broader impact concern at the end of the main paper. I do not have any concerns about the ethical implications of this work.
The work then extends Lyapunov analysis from momentum methods to LEAD, proving its linear convergence in both continuous and discrete-time settings for quadratic min-max games. Experimentally, the author demonstrates that LEAD is computationally efficient and improves the performance of GANs against multiple baselines across various tasks. The main contributions are as follows: (1) LEAD is well-motivated by the physical intuition behind the common momentum method. (2) Lyapunov stability analysis guarantees linear convergence speed in both discrete and continuous-time settings for min-max games. (3) Experiments effectively support the performance claims made by the author. Strengths and Weaknesses: Strengths: (1) The motivation rooted in physics is robust and easily comprehensible. (2) The proof is overall clear and persuasive. (3) Computationally efficient – it does not require the entire Jacobian matrix, but only a quarter block of it. This "partial" Jacobian can be computed using auto-grad tools. Weaknesses: The scope of applicability is unclear. This work focuses on two-player quadratic games, specifically requiring an objective function in the form of Eq. 13 in the main text. It is not evident whether the method works for generalized quadratic objective functions, such as objective functions in the form of $X^{T}AX + Y^{T}BY + X^{T}CY$, where A, B, and C are not guaranteed to be positive-definite (in which case the Nash equilibrium may not be zero). Requested Changes: It is difficult to determine how the discretization parameter affects performance. It would be beneficial if you could display your parameters in the experimental section or provide empirical justification. As stated immediately before Equation 19, the fixed point $\omega^*$ satisfies $F_{\eta}(\omega^*) = \omega^*$. Does this lead to Equation 19 when $\nabla v(\omega^*)$ (the second term on the RHS of Eq 19) is equal to zero? The spectrum of $\nabla F_{\eta}(\omega^*)$ is defined in Eq. 20, but the definition of “Sp(off-diag(.))” remains unclear. The attached blog images in the supplementary, which depict the dynamics in the x-y plane, effectively illustrate the outward rotatory motion and increasing velocity. It would be advantageous to include these images in the main text if possible. Broader Impact Concerns: There is no concern to be addressed as far as I know. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper proposes a physics-inspired formalism for two-player zero-sum games and an optimization scheme, called LEAD. The work applies Lyapunov analysis to LEAD, proving its linear convergence in both continuous and discrete-time settings for quadratic min-max games. The connection to physics is well grounded and the theoretical contributions are solid. Besides that, the authors demonstrate experimentally that LEAD is computationally efficient and improves the performance of GANs. The paper contains a lot of details that show evidence of the empirical performance. The reviewers found the contributions of this work are solid. Moreover, the paper is very well written. All reviewers unanimously voted for acceptance. Therefore, the paper is ready for acceptance almost in its current form. Please make sure to remove the red font from the text. I also found a relevant reference that I think should be included in the final version of the paper: https://arxiv.org/abs/2105.13922 ==================================================
# Accountable Textual-Visual Chat Learns To Reject Human Instructions In Image Re-Creation

Zhiwei Zhang *bitzzw@gmail.com* The Chinese University of Hong Kong; Centre for Perceptual and Interactive Intelligence

Yuliang Liu† *ylliu@hust.edu.cn* The Chinese University of Hong Kong; Huazhong University of Science and Technology; Centre for Perceptual and Interactive Intelligence

†Corresponding author: Yuliang Liu.

Reviewed on OpenReview: *https://openreview.net/forum?id=kQmz1BMIYi*

## Abstract

The recent success of ChatGPT and GPT-4 has drawn widespread attention to multimodal dialogue systems. However, there is a lack of datasets in the academic community that can effectively evaluate the multimodal generation capabilities of Visual Language Models (VLMs) in textual-visual chat tasks. In this paper, we address this gap by introducing two novel multimodal datasets: the synthetic CLEVR-ATVC dataset (620K) and the manually pictured Fruit-ATVC dataset (50K). These datasets incorporate both visual and text-based inputs and outputs. Furthermore, to facilitate the accountability of multimodal systems in rejecting human requests, similar to language-based ChatGPT conversations, we introduce specific rules as supervisory signals within the datasets. This allows the trained VLM to provide a yes or no answer after engaging in visual and textual reasoning, accompanied by a language explanation to clarify the reasons behind the inability to execute the given human instruction. Our proposed method involves a two-stage training procedure, which includes training the image auto-encoder and the auto-regressive transformer from scratch. The first stage employs a discrete variational autoencoder (dVAE) to compress each image into concise tokens, which are then combined with text tokens into a single data stream. This stream is subsequently fed into the decoder-based transformer to generate visual re-creations and textual feedback in the second stage. We conduct comprehensive analyses of experimental results, focusing on re-created image quality, answer accuracy, and the model's behavior when faced with uncertainty and imperfect user queries. Through our explorations and findings, we aim to contribute valuable insights into the accountability of textual-visual generative models. Dataset and code are available at https://matrix-alpha.github.io.

## 1 Introduction

Recently, major breakthroughs were made by ChatGPT (OpenAI, 2023a) and GPT-4 (OpenAI, 2023b), which unveiled the emerging potential of conversations between humans and artificial intelligence systems. ChatGPT serves as a chatbot that operates with language as both input and output, while GPT-4 is a multimodal model capable of accepting both image and text inputs and producing text outputs. A successful multimodal generative model should excel in both textual and visual reasoning, generating high-quality text and image feedback. Visual ChatGPT (Chenfei Wu & Duan, 2023) is a pioneering work that combines ChatGPT with a series of pre-trained visual foundation models, enabling text-image chat. Another relevant work, FROMAGe (Jing Yu Koh, 2023), also involves image-text inputs and outputs for multimodal dialogue, with the linear layers fine-tuned and the pre-trained LLM frozen. However, existing datasets lack definitive ground truths to effectively measure the quality of text and images generated in multimodal dialogue systems.
Therefore, there is an urgent need for a dataset that can evaluate the performance of multimodal generative models. In this paper, we aim to address this need by constructing new multimodal datasets that require models to output high-quality images and provide textual feedback while accepting a text-image pair. We introduce the synthetic CLEVR-ATVC dataset (620K), created by rendering images using approximately 200 GPU days, and the real Fruit-ATVC dataset (50K), manually captured and annotated. Another significant contribution of this paper pertains to the issue of responsibility in text-to-image generation models, specifically the need for models to learn how to reject human instructions. Previous works (Doshi-Velez et al., 2017; Loi & Spielkamp, 2021) have highlighted the importance of accountability in AI systems, including the ability to hold decision-makers accountable for adhering to procedural and substantive standards. The responsible management and deployment of AI systems is a crucial and often overlooked topic. ChatGPT (OpenAI, 2023a) implements a similar requirement at deployment time by employing rules defined by OpenAI to restrict certain requests. However, to the best of our knowledge, we are the first to consider the issue of responsibility in textual-visual dialogue models. Previous text-to-image generation methods (Ramesh et al., 2021a; Nichol et al., 2021; Wu et al., 2021; Ramesh et al., 2022; Saharia et al., 2022; Ding et al., 2021) have produced random and uncontrolled outcomes lacking human oversight. While some works have explored the controllability of image generation by focusing on object categories or attributes (e.g., position, pose, color) (Li et al., 2019; Niemeyer & Geiger, 2021; Jiang et al., 2022; Patashnik et al., 2021), such as StyleCLIP (Patashnik et al., 2021), which manipulates visual image attributes using text, the rejection of human instructions and providing explanations for such decisions have not been adequately addressed in the textual-visual dialogue context. Therefore, in this paper, we propose that the machine should be capable of rejecting human instructions and providing explanations, particularly for instructions that cannot be executed or are prohibited. To serve the aforementioned research goals, we introduce a novel task called accountable text-based visual re-creation, which explores whether textual-visual chat models can learn to reject human instructions. This task requires the model to generate both visual re-creations and language-based feedback while accepting a text-image pair. Our datasets include rules as supervision signals to teach the model to reject certain instructions. As illustrated in Figure 1, when a chatbot receives instructions from a human, it responds with a "no" to forbidden or unfeasible instructions, accompanied by an explanation. An instruction may be deemed unfeasible if the corresponding objects mentioned in the text-based query are absent in the visual input. Prohibited instructions are determined manually, taking into account the laws of physics. Prohibited actions encompass both actions that are feasible but not allowed and actions that are simply not possible. In the case of prohibited instructions that cannot be executed, the model should provide additional explanations to clarify the reasons behind their infeasibility. These instructions are elaborated and listed in Table 1.
Our proposed method, as depicted in Figure 1, employs a multimodal generative model comprising an image auto-encoder and an auto-regressive transformer. We propose a two-stage training procedure to train these components from scratch. The first stage involves a discrete variational autoencoder (dVAE) that compresses each image into concise tokens, which are then concatenated with text tokens and fed into the decoder-based transformer responsible for generating the visual re-creations and textual feedback. Additionally, we provide a comprehensive analysis of our experimental results, including assessments of re-created image quality, answer accuracy, and the model's behavior when faced with uncertainty and imperfect user queries. We also compare two different image auto-encoders, VQVAE (Oord et al., 2017) and VQGAN (Esser et al., 2021), and analyze the reasons behind the subpar performance of VQGAN. All the datasets used in this study are publicly available, and we will release the source code for our annotation tool, evaluation tool, implementation of baseline models, metric calculations, and detailed instructions. To summarize, the main contributions of this paper are as follows:

- We construct two multimodal datasets, CLEVR-ATVC (620K) and Fruit-ATVC (50K), which incorporate textual-visual inputs and outputs to evaluate the performance of multimodal generative models.
- We consider the issue of accountability in multimodal generative models by embedding pre-set rules as supervised signals in our datasets. This enables the VLMs to learn to reject human instructions in multimodal conversations.
- We propose a two-stage training procedure for training the image auto-encoder and auto-regressive transformer, aiming to enable the models to learn how to reject human instructions. All of our models are trained from scratch, and their training took 350∼900 GPU days on our constructed datasets.
- We provide extensive qualitative and quantitative results, evaluating the quality of generated images, the accuracy of answers, and the model's ability to handle uncertainty and incomplete queries.

Figure 1: The left figure shows that a multimodal generative model can simultaneously recreate visual input and give textual feedback when faced with human instructions; in particular, it can reject some commands. The right figure illustrates the overall framework of our method. The model is required to generate a re-created image (M) and textual feedback (A) conditioned on the visual input (V) and text-based user query (T), and a language-based explanation is also given for those instructions that cannot be executed and for the prohibited instructions.

The remainder of this paper is organized as follows. Section 2 presents detailed information about our datasets. In Section 3, we introduce our models and training process. We then present extensive experiments and detailed analysis in Section 4 to validate our proposed method. Section 5 provides an overview of related work, and Section 6 concludes the paper. Section 7 pertains to acknowledgments. Additional details of our datasets are provided in the appendix.

## 2 Datasets

In this section, we present the distinctive aspects of our datasets compared to existing ones. We then describe the data collection process. Additionally, we provide an overview of the information contained in the annotation files for our datasets, with more detailed information available in the appendix. Finally, we summarize the dataset statistics in the tables.
## 2.1 Definition Of ATVC

Our proposed dataset and task introduce two key innovations compared to previous methods. Firstly, our multimodal dataset comprises image-text inputs and outputs, which facilitates the development of more effective multimodal generative models. Secondly, our datasets incorporate predefined rules that teach the model to reject certain human instructions, even if they are technically feasible. This gives rise to the accountable text-based visual re-creation (ATVC) task, which can be defined as follows: Given a visual input (V) and a text-based query (T), the multimodal generative model is expected to produce a re-created image (M) and provide a language-based explanation (A) for its decisions. A successful ATVC model should possess the ability to comprehend the user's query and employ language cues to reason about the visual input, while also generating appropriate textual feedback and re-created visual results.

## 2.2 Data Collection

In order to validate the proposal of this paper, we conduct experiments on two distinct datasets. The first dataset, CLEVR-ATVC, is a large-scale synthetic dataset specifically designed to comprehensively evaluate the proposed task. On the other hand, the Fruit-ATVC dataset is utilized to evaluate our method on real-world scenarios. In the following sections, we provide a detailed description of the visual image collection process for both datasets.

CLEVR-ATVC: We utilize the default environmental settings, including object settings and rendering options, from CLEVR (Liu et al., 2019) to generate synthetic images. The rendered images have a size of 256 × 256, with each image containing 3 to 6 objects. In addition, we apply a designed algorithmic filter to exclude scenes with objects located on the image border before rendering. This ensures that all objects in both the original and re-created images are within the boundaries, facilitating the evaluation of experimental results. The synthetic CLEVR-ATVC dataset is generated over approximately 200 GPU days.

Fruit-ATVC: This dataset consists of real-world scenarios where all the images are captured manually by the authors using mobile phones and tripods. The images are taken with various mobile phone models, including iPhone, Huawei, Xiaomi, and others. For the Fruit-ATVC dataset, five researchers contributed to capturing the visual inputs and re-created images. Additionally, we incorporated 12 different scenes to enhance the diversity of the dataset. The diversity of collection scenes is illustrated in the figures provided in the appendix.

## 2.3 Generation Of Textual Query And Feedback

For the CLEVR-ATVC dataset, we employ an automated program that simultaneously generates visual inputs, text-based queries, re-created images, and textual feedbacks. This program saves the visual inputs and re-created images in their respective folders, while the text-based queries, textual feedbacks, and other data information are automatically stored in an annotation file. Each visual input is associated with ten different text-based queries and their corresponding textual feedbacks. Among these queries, six instructions can be executed by the AI system, resulting in appropriate image re-creations, while two instructions cannot be executed due to the objects mentioned in the text-based queries being absent from the visual input. Additionally, two instructions are deliberately prohibited by manual specification. The number of queries of each type approximately follows this ratio, because prohibited instructions cannot be generated for some visual inputs.

The process of generating text-based queries and textual feedbacks for the Fruit-ATVC dataset differs from the aforementioned process: the visual inputs and re-created images are manually captured, while the text-based queries and textual feedbacks are generated semi-automatically using our designed labeling tool. Initially, we collect and organize the visual inputs and re-created images according to predefined rules, ensuring the inclusion of executable and non-executable queries for each visual input. Then, we utilize an annotation interface (illustrated in Figure 2) to assist in generating text-based queries and textual feedbacks, which are automatically saved to an annotation file. The specific process is as follows: we select image pairs consisting of the visual input and the re-created image for executable instructions. These image pairs are
The number of each instruction is approximately the above ratio, because some visual inputs cannot generate prohibited instructions. The process of generating text-based queries and textual feedbacks for the Fruit-ATVC dataset differs from the aforementioned process. Since the visual inputs and re-created images are manually captured, and the text-based queries and textual feedbacks are generated semi-automatically using our designed labeling tool. Initially, we collect and organize the visual inputs and re-created images according to predefined rules, ensuring the inclusion of executable and non-executable queries for each visual input. Then, we utilize an annotation interface (illustrated in Figure 2) to assist in generating text-based queries and textual feedbacks, which are automatically saved to an annotation file. The specific process is as follows: we select image pairs consisting of the visual input and the re-created image for executable instructions. These image pairs are ![4_image_0.png](4_image_0.png) Figure 2: The mention frequency of different fruits and the annotation interface for Fruit-ATVC dataset. | | Textual Feedback | | |--------------------|-----------------------------------------------|----------------------| | Answer Type | Reason for prohibition | Explanation | | 1) No problem. | / | / | | 2) Cannot be done. | / | no mentioned objects | | 3) Forbidden. | cannot put certain object under an object | no mentioned objects | | 4) Forbidden. | cannot put an object on top of certain object | no mentioned objects | Table 1: The detailed information of textual feedback. ![4_image_1.png](4_image_1.png) Figure 3: The main format visualization of data annotation file. The details can be found in the appendix. then imported automatically through the interface depicted in Figure 2, where the object ID, container ID, and query type are manually added to the designated fields on the interface ("fruit_ids", "Ct_ids", and "Query", respectively). The "fruit_ids" field includes all the fruit types depicted in the image, while the "Query" field is filled with the object and container categories involved in the operation. For instance, "0b" represents an apple and a bowl. Finally, by pressing the "ERR" key, text queries and feedback are automatically generated. ## 2.4 Data Annotation In this section, we present the structure of our data annotation file, which follows the format of the COCO dataset (Lin et al., 2014). The main format, depicted in Figure 3, consists of four parts: text-based query (T), visual input (V), re-created image (M), and textual feedback (A). Since each visual input corresponds to multiple text-based queries, we utilize an "idx" value to establish a one-to-one mapping between T, M, and A. The "objects" subkey enumerates the objects present in the visual input. The "explanation" provides a rationale for prohibition and explains the absence of certain objects. Table 1 provides further details on the textual feedback. There are three types of answers: a) no problem; b) the action cannot be performed; and c) the action is prohibited. Two specific instructions are prohibited: placing a certain object under another object and placing an object on top of a certain object. 
The prohibited actions include those that can be | CLEVR-ATVC | | | | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|-----------|-------------------------|-----------| | Size | Train Pairs | Test size | Textual-Visual Feedback | Data type | | 68 GB | 619,741 | 5,000 | Yes | Synthetic | | Please <action1> {size color material} [object] <action1> {size color material} [object]. Please <action2> of {size color material} [object] and {size color material} [object]. Action1 Action2 Size Color Material Object put on top exchange color big red, green, blue, cyan metal cylinder put under exchange position small purple, brown, yellow rubber cube, sphere | | | | | Table 2: The summary information of CLEVR-ATVC dataset. | | Fruit-ATVC | | | | |---------|--------------------------------------------------|-----------|----------------------------|---------------------| | Size | Train Pairs | Test size | Textual-Visual Feedback | Data type | | 168 GB | 27,503 | 1,872 | Yes | Real | | | Please <action1> [object]. | | | | | | Please <action2> of [object] and [object]. | | | | | | Please <action3> [object] <action3> [container]. | | | | | Action1 | Action2 | Action3 | Object | Container | | remove | exchange position | put in | apple, coconut, lemon etc. | plate, bottle, etc. | Table 3: The summary information of Fruit-ATVC dataset. performed and those that cannot be performed. For prohibited instructions that cannot be executed, the model needs to provide further explanations. Please refer to the example given in Table 4. In addition, the results in Table 6 show that we conducted statistics on accuracy of answer type and accuracy of explanation in textual feedback. ## 2.5 Data Statistics Table 2 and Table 3 present the summarized information of our constructed datasets. CLEVR-ATVC consists of a training set with a total of 620,000 pairs and a testing set with 500 visual inputs, each accompanied by 10 queries, resulting in a total of 5,000 pairs. This dataset incorporates four action types: "exchange color", "exchange position", "put under", and "put on top". The rendered images encompass three object shapes, seven object colors, two object materials, and other attributes, leading to hundreds of millions of possible combinations. The text-based queries follow two different templates: the first template is used for "put on top" and "put under" actions, while the second template is employed for "exchange color" and "exchange position" actions. Fruit-ATVC comprises 27,503 training pairs and 1,872 pairs in the testing set. This dataset involves ten different types of fruits, namely apple, pear, coconut, orange, banana, kiwi, mango, avocado, lemon, and pitaya. The frequency of mentions for each fruit is depicted in Figure 2. Additionally, the dataset includes 14 types of containers. Regarding the queries, there are three types of actions: "put a fruit in a container", "exchange positions of the fruits", and "remove the container". ## 3 Method In our method, the generative model includes an image auto-encoder and an auto-regressive transformer (Vaswani et al., 2017) to predict tokens of re-created image and textual feedback. The framework of our method is shown in Figure 1. 
Our method is a two-stage training procedure, similar to Ramesh et al. (2021a).

Image Auto-encoder This stage forces the model to learn the sequential discretized latent space of high-dimensional images in order to utilize the expressive transformer architecture, *i.e.*, learning a codebook to encode the visual input. Given an image of resolution 256 × 256, a quantization model encodes it into 32 × 32 = 1024 discretized latent codes, where the codebook size is 8192. We have tried two alternatives, including the Vector Quantized Variational Autoencoder (VQVAE) (Oord et al., 2017) and VQGAN (Esser et al., 2021). As illustrated in Figure 1, the VQVAE can learn a latent space matrix with discrete learnable variables, which is trained end-to-end via straight-through estimation. The codebook and the VQVAE model can be trained end-to-end using the following objective:

$$\mathcal{L}_{VQ}(G,C,E)=\underbrace{\left\|sg[E(v)]-e\right\|_{2}^{2}}_{\mathcal{L}_{codebook}}+\underbrace{\beta\left\|sg[e]-E(v)\right\|_{2}^{2}}_{\mathcal{L}_{commit}}+\underbrace{\left\|v-G(e)\right\|_{2}^{2}}_{\mathcal{L}_{rec}},\tag{1}$$

where v is the visual input, e is the embedding for learning the codebook-indices, C is the learned codebook, E is the image encoder, and G(e) is the decoder for reconstruction. sg is the stop-gradient operation, and β is a hyper-parameter to balance the commitment loss (Oord et al., 2017). The quantized codebook index is determined by looking up the nearest codebook vector of the input features E(v) in terms of the Euclidean distance. The VQGAN (Esser et al., 2021) is a variant of VQVAE, which adds a perceptual loss and an adversarial loss produced by a patch-based discriminator. It can achieve high resolution while significantly reducing the sequence length. The complete objective for the compression model Q∗ is:

$$Q^{*}=\arg\min_{G,C,E}\max_{D}\ \mathbb{E}_{x\sim p(v)}\left[\mathcal{L}_{VQ}(G,C,E)+\lambda\mathcal{L}_{GAN}(\{G,C,E\},D)\right],\tag{2}$$

where L_GAN is to learn the differences between real and reconstructed images:

$$\mathcal{L}_{GAN}(\{G,C,E\},D)=\left[\log D(v)+\log(1-D(\hat{v}))\right].\tag{3}$$

Auto-regressive Transformer Based on the first stage, the images can be highly compressed into the codebook-indices of their embeddings. In this stage, the image encoder E is fixed. Therefore, V, T, M, and A can be represented by the embedding S of the sequence of tokens. We adopt the axial positional embedding P to process the codebook-indices generated by the image reconstruction module R, which is practically effective for multi-dimensional data. In our implementation, the sequence T_seq of the transformer is sequentially formed by T, V, M, and A, which is defined as follows:

$$T_{seq}=\text{Concat}(\mathbb{S}(\mathcal{O}(T)),\mathbb{P}(\mathbb{R}(V)),\mathbb{P}(\mathbb{R}(M)),\mathbb{S}(\mathcal{O}(A))),\tag{4}$$

which focuses on generating the re-created result and the textual feedback. O represents the tokenize operation. The transformer is designed as a decoder-only model, the same as Child et al. (2019). In our architecture, text tokens have the ability to attend to each image token in any of the self-attention layers. The model employs two types of self-attention masks. The attention masks used for text-to-text attention follow the conventional causal mask. On the other hand, for image-to-image attention, the model applies either a row, column, or convolutional attention mask in different self-attention layers, as explained in Child et al. (2019); Ramesh et al. (2021a). During training, each token is asked to predict the distribution of the next token, which is formulated as an autoregressive next-index prediction. Therefore, it is equivalent to maximizing the log-likelihood of the data representations:

$$\mathcal{L}_{Transformer}=\mathbb{E}_{x\sim p(x)}[-\log p(T_{seq})],\tag{5}$$

where p is the full representation of the likelihood of the possible next indices. Note that for the "cannot" and "forbidden" pairs, as there is no re-created ground truth, the loss of this prediction will not be backpropagated. We have tried to predict a black image or the original visual input for these two cases, but both of them perform much worse than simply ignoring their loss. In the testing procedure, we only need to input T and V for pixel-wise iterative generation until the required length is achieved.
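To make the first-stage objective concrete, the following is a minimal sketch of the nearest-codebook lookup and the VQVAE loss of Eq.(1) with straight-through gradients. The codebook size of 8192 matches the paper; the embedding width, β, and function names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the nearest-neighbour codebook lookup and the VQVAE
# objective of Eq. (1). Codebook: 8192 entries (as in the paper); width 256,
# beta, and names are illustrative assumptions.
codebook = torch.nn.Embedding(8192, 256)

def quantize(z_e):
    # z_e: encoder output E(v) of shape (batch, tokens, 256).
    dist = torch.cdist(z_e, codebook.weight[None])        # Euclidean distances
    idx = dist.argmin(-1)                                  # nearest codebook index
    e = codebook(idx)                                      # quantized embeddings
    # Straight-through estimator: copy gradients from e back to z_e.
    e_st = z_e + (e - z_e).detach()
    return e, e_st, idx

def vq_loss(v, z_e, e, v_rec, beta=0.25):
    l_codebook = F.mse_loss(e, z_e.detach())               # ||sg[E(v)] - e||^2
    l_commit = beta * F.mse_loss(z_e, e.detach())          # beta ||sg[e] - E(v)||^2
    l_rec = F.mse_loss(v_rec, v)                           # ||v - G(e)||^2
    return l_codebook + l_commit + l_rec
```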
Auto-regressive Transformer Based on the first stage, images can be highly compressed into the codebook indices of their embeddings. In this stage, the image encoder E is fixed. Therefore, V, T, M, and A can be represented by the embedding S of the corresponding sequences of tokens. We adopt the axial positional embedding P to process the codebook indices generated by the image reconstruction module R, which is practically effective for multi-dimensional data. In our implementation, the sequence Tseq of the transformer is sequentially formed by T, V, M, and A, defined as follows:

$$T_{seq}=\text{Concat}(\mathbb{S}(\mathcal{O}(T)),\mathbb{P}(\mathbb{R}(V)),\mathbb{P}(\mathbb{R}(M)),\mathbb{S}(\mathcal{O}(A))),\tag{4}$$

which is used to generate the re-created result and the textual feedback. O represents the tokenization operation. The transformer is designed as a decoder-only model, the same as in Child et al. (2019). In our architecture, text tokens can attend to each image token in any of the self-attention layers. The model employs two types of self-attention masks: the masks used for text-to-text attention follow the conventional causal mask, while for image-to-image attention the model applies a row, column, or convolutional attention mask in different self-attention layers, as explained in Child et al. (2019); Ramesh et al. (2021a).

During training, each token is asked to predict the distribution of the next token, which is formulated as autoregressive next-index prediction. Training is therefore equivalent to maximizing the log-likelihood of the data representations:

$$\mathcal{L}_{Transformer}=\mathbb{E}_{x\sim p(x)}\left[-\log p(T_{seq})\right],\tag{5}$$

where p is the full representation of the likelihood over the possible next indices. Note that for the "cannot" and "forbidden" pairs, since there is no re-created ground truth, the loss of the re-creation prediction is not backpropagated. We have tried predicting a black image or the original visual input for these two cases, but both perform much worse than simply ignoring their loss. In the testing procedure, we only need to input T and V for pixel-wise iterative generation until the required length is reached.

## 4 Experiment

In this section, we introduce the experimental details and evaluation metrics, and we analyze the quantitative and qualitative results.

## 4.1 Experimental Details

All experiments are conducted with PyTorch 1.8.1. The model parameters are updated through Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999. We first train the image reconstruction module for 200 epochs with an initial learning rate of 0.001 and a decay rate of 0.99 per epoch; this stage takes about half a day. The number of attention heads, the attention key and value dimensions, the number of layers, and the model dimension are set to 8, 64, 4, and 512, respectively. The text tokenizer operates on a lower-cased byte pair encoding (BPE) representation of the text with a 49,408-token vocabulary (Sennrich et al., 2015). The text sequence is bracketed with [SOS] and [EOS] tokens. The maximum length of the text sequence is set to 64, and the output length of the image codebook is 1024, which results in a total sequence length of 2112 for the transformer model. The second stage is distributively trained over 200 epochs with a fixed learning rate of 0.0003. Unless specified otherwise, all the parameters of the architecture remain the same in the following experiments. The model is trained for approximately 900 GPU days on the CLEVR-ATVC dataset and 350 GPU days on the Fruit-ATVC dataset.

## 4.2 Evaluation Metrics

We evaluate the proposed method using the following two types of metrics.

Image Quality Metric. The first two metrics are the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), which are commonly used to measure the similarity and quantify the reconstruction quality of images. We also adopt the FSIM metric (Zhang et al., 2011), which can be more suitable for the human visual system (HVS) as it uses phase congruency (PC) and gradient magnitude (GM).
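For reference, PSNR and SSIM can be computed with off-the-shelf implementations. The snippet below is a sketch assuming a recent scikit-image version (with the channel_axis argument); FSIM is omitted because scikit-image does not provide it.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(gt, pred):
    """Compute PSNR and SSIM between a ground-truth and a re-created image,
    both given as uint8 RGB arrays of the same shape."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    return psnr, ssim
```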
Human Evaluation. Following previous methods (Lee et al., 2020; Ding et al., 2021; Dong et al., 2017; Zhang et al., 2017; 2018; 2020; Kayser et al., 2021), we also adopt Human Rank (HR) to precisely reveal the performance. HR evaluates whether the synthesized image conforms to subjective expectations (authenticity, degree of matching with the text, etc.). We set three classes for HR: a) if the action is performed perfectly, the result is ranked "A", representing a score of 1; b) if the action is correct but other parts are affected, *e.g.*, a color is mistakenly changed or other irrelevant objects are erased, the result is ranked "B", representing a score of 0.5; c) if the action is incorrect, the result is ranked "C", representing a score of 0. The final HR score is therefore between 0 and 1, and the Full-Match score (FM-Score) indicates that both the re-creation and the answer are correct. For all metrics, higher scores represent better performance.

The "accountable" part is evaluated fully automatically: we use full string matching to measure the accuracy between the ground-truth answers and the final predictions. We also take into account that the wording of the explanation in the textual feedback may vary; for example, both explanations "There is no object1 and object2" and "There is no object2 and object1" are correct. We also designed tools to assist with the human evaluation, and we provide the interface and evaluation criteria in the appendix. In addition, we assigned 100 sets of results to 5 workers for scoring and empirically found that our task usually has an explicit answer that is not easily affected by subjective opinions.

## 4.3 Results And Analyses

In this section, we evaluate the proposed method on the CLEVR-ATVC and Fruit-ATVC datasets. The results cover the image re-creation, the textual feedback, qualitative visualizations, and the model behavior when facing uncertainty.

## 4.3.1 Results On Clevr-Atvc

For the CLEVR-ATVC dataset, the re-created results are shown in Table 5. We can see that the machine achieves an FM-Score of 52.6% (re-creation and textual feedback are both completely correct), which also shows the difficulty of the task.

| Visual (V) | Text (T) | Re-created (M) | Answer (A) | FM-Score |
|------------|----------|----------------|------------|----------|
| (image) | Please exchange the positions of the large brown metal cylinder and the large blue rubber cube. | (image) | No problem. | 1.0 |
| (image) | Please put the large brown metal cylinder on top of the large blue rubber cube. | (image) | No problem. | 0.75 |
| (image) | Please put the small gray rubber cylinder on top of the large yellow metal cube. | / | This action cannot be done. Because there is no large yellow metal cube. | 1.0 |
| (image) | Please put the small gray metal sphere under the small purple rubber cylinder. | / | This action is forbidden. Because you cannot put the sphere under an object, and there is no small gray metal sphere and no small purple rubber cylinder. | 1.0 |

Table 4: Qualitative results on the CLEVR-ATVC dataset.

| PSNR | SSIM   | FSIM  | HR A (%) | HR B (%) | HR C (%) | Score (%) | FM-Score (%) |
|------|--------|-------|----------|----------|----------|-----------|--------------|
| 55.0 | 0.9944 | 0.669 | 25.9     | 16.2     | 58.0     | 39.0      | 52.6         |

Table 5: Image re-creation evaluation on the CLEVR-ATVC dataset.

| Can Num | Can Acc. | Cannot Num | Cannot Type Acc. | Cannot Exp Acc. | Forbidden Num | Forbidden Type Acc. | Forbidden Exp Acc. | Score (%) |
|---------|----------|------------|------------------|-----------------|---------------|---------------------|--------------------|-----------|
| 1662    | 71.6%    | 1997       | 49.4%            | 46.5%           | 1341          | 99.7%               | 58.2%              | 66.2      |

Table 6: Textual feedback evaluation on the CLEVR-ATVC dataset. Type Acc. indicates that the answer correctly recognizes that a query cannot be done or is forbidden; Exp Acc. indicates that the explanation is correct in both the answer type and the reasons.
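The automatic scoring of the textual feedback can be sketched as follows; this is our illustration of the full-string-matching protocol described above (including order-invariant explanations), not the authors' released evaluation script.

```python
def score_feedback(pred, gold_variants):
    """Type Acc.: the predicted answer starts with the correct answer type
    (e.g., 'This action cannot be done.'). Exp Acc.: the full answer,
    including the reasons, exactly matches one acceptable phrasing.
    gold_variants lists every acceptable ordering, e.g. both
    '... no cube and no sphere.' and '... no sphere and no cube.'."""
    norm = lambda s: " ".join(s.lower().split())
    type_ok = any(norm(pred).startswith(norm(g.split(".")[0] + "."))
                  for g in gold_variants)
    exp_ok = any(norm(pred) == norm(g) for g in gold_variants)
    return type_ok, exp_ok
```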
The high SSIM of 0.9944 shows that the model obtains high image reconstruction quality. Based on our observations, the "exchange color" queries are mostly answered correctly, while the "exchange position" queries are somewhat limited by the image reconstruction.

The quantitative results of the textual feedback are shown in Table 6. There are 1662 pairs with "can" queries; among them, 71.6% are answered correctly. For the "cannot" queries, 49.4% of the pairs are accurately recognized as not doable, and 46.5% not only give the correct answer type but also the correct reasons. For the "forbidden" queries, 99.7% are correctly recognized, because the model only needs to complete textual reasoning; however, our task also requires the model to explain why the instruction cannot be executed, which demands both visual and textual reasoning, so only 58.2% correctly explain the reasons. The overall score is the weighted average of the above results. As there are numerous possible answers, random guessing would score less than 1%, so it should be safe to conclude that the proposed method shows impressive results.

Example qualitative results are shown in Table 4. We can conclude that the model can say "No" to instructions and give accurate language-based explanations. In addition, it is worth mentioning that the model also implicitly learns the depth of the image and the sizes of the objects, and thus it knows whether an object is occluded, and by how much, when some objects are manipulated.

## 4.3.2 Results On Fruit-Atvc

For the Fruit-ATVC dataset, the re-created results are shown in Table 8. The final FM-Score is 46.1%, i.e., the model produces both a correct re-creation and correct textual feedback in 46.1% of the cases. These results also demonstrate the difficulty of the task, especially in real scenarios.

| Visual (V) | Text (T) | Re-created (M) | Answer (A) | FM-Score |
|------------|----------|----------------|------------|----------|
| (image) | Please remove the plate. | (image) | No problem. | 1.0 |
| (image) | Please put the lemon in the bowl. | (image) | No problem. | 0.75 |
| (image) | Please exchange the position of the banana and the coconut. | / | This action cannot be done. Because there is no coconut. | 1.0 |
| (image) | Please remove the bottle. | / | This action cannot be done. Because there is no kiwi and no bottle. | 0.75 |

Table 7: Qualitative results on the Fruit-ATVC dataset.

| PSNR | SSIM   | FSIM  | HR A (%) | HR B (%) | HR C (%) | Score (%) | FM-Score (%) |
|------|--------|-------|----------|----------|----------|-----------|--------------|
| 44.1 | 0.9272 | 0.420 | 12.8     | 29.4     | 57.9     | 27.5      | 46.1         |

Table 8: Image re-creation evaluation on the Fruit-ATVC dataset.

The quantitative results of the answers are shown in Table 9. We can see that 78.3% of the "can" queries are answered correctly, and 25.0% of the "cannot" queries are correctly explained. For the "exchange position" queries, we find that although the model can perform the correct re-creation, reconstructing the other objects is challenging.
For the "put in" queries, the errors usually occur by creating a new target fruit without erasing the original one in the image. The re-creation results for "remove" are the best among the three actions. All the results suggest the challenges posed by this dataset of real scenes. Example qualitative results are shown in Table 7.

| Can Num | Can Acc. | Cannot Num | Cannot Type Acc. | Cannot Exp Acc. | Score (%) |
|---------|----------|------------|------------------|-----------------|-----------|
| 950     | 78.3%    | 922        | 77.2%            | 25.0%           | 64.7      |

Table 9: Textual feedback evaluation on the Fruit-ATVC dataset.

| Visual (V) | Text (T) | Re-created (M) | Answer (A) | Ranking |
|------------|----------|----------------|------------|---------|
| (image) | Please exchange the color of the large blue rubber sphere and the small purple rubber cylinder. | (image) | No problem. | A |
| (image) | Please exchange the color of the large purple metal cube and the large red metal cylinder. | (image) | No problem. | A |
| (image) | Please exchange the color of the large cyan metal sphere and the small red rubber cube. | (image) | No problem. | A |
| (image) | Please exchange the position of the small yellow rubber cube and the small purple rubber cube. | (image) | No problem. | A |
| (image) | Please exchange the color of the large purple metal cube and the small blue metal cylinder. | (image) | No problem. | A |
| (image) | Please put the large purple metal cube under the large gray metal sphere. | (image) | No problem. | B |

Table 10: Uncertainty evaluation on the CLEVR-ATVC dataset.

## 4.3.3 Uncertainty Of Image Re-Creation

In this section, we explore whether the model can handle uncertainty appearing in the query. The results show that our method deals well with such uncertainties. As shown in Table 10, all the image re-creation results are ranked A, except for the last row; that ranking of B is due to an object being erased during image reconstruction, not due to uncertainty. For example, as shown in the fourth row of Table 10, our model only exchanges the position of one "small yellow rubber cube", which is consistent with the settings in our constructed dataset: "exchange position" only needs to process one of the objects, whereas "exchange color" needs to exchange the colors of all satisfying objects. The model is able to make different decisions in these different situations, so our method exhibits good behavior.

## 4.4 Single Action Without Answer

In this section, we first evaluate the image auto-encoder method VQVAE (Oord et al., 2017) with short and imperfect queries on the CLEVR-ATVC sub-dataset. We select only one "put on top" action without textual feedback for verification and evaluation. Subsequently, we compare the image auto-encoder methods VQVAE (Oord et al., 2017) and VQGAN (Esser et al., 2021) on the CLEVR-ATVC sub-dataset.
## 4.4.1 Short Or Imperfect Query For Re-Creation

Qian et al. (2021) concluded that a model can memorize entities tied to the form of the input, so we test our model's ability to cope with changes in the input form. A short query removes redundant words from the query, such as "please", "put", and "the", while an imperfect query randomly removes keywords (such as "color", "size", or "shape"). The quantitative results in Table 11 show that the intact queries (long text) perform slightly better than the shortened version (short text). This demonstrates that the model can cope with small changes in the form of the input queries.

| Methods       | Text  | Image     | PSNR | SSIM   | FSIM  | HR A (%) | HR B (%) | HR C (%) | Score (%) |
|---------------|-------|-----------|------|--------|-------|----------|----------|----------|-----------|
| Ours w/ VQVAE | Short | 128 × 128 | 56.3 | 0.9957 | 0.689 | 65.9     | 14.4     | 19.7     | 71.9      |
| Ours w/ VQVAE | Long  | 128 × 128 | 56.6 | 0.9963 | 0.693 | 66.4     | 14.8     | 18.8     | 73.8      |

Table 11: Evaluation results on the CLEVR-ATVC sub-dataset.

Figure 4: Image re-creation results on the CLEVR-ATVC sub-dataset using imperfect queries. The red text represents the short or imperfect query.

The visualization results are shown in Figure 4, with the following observations on the imperfect queries: a) if "blue" is removed from the query, the generated cylinder appears shorter; b) if the query contains a nonexistent "blue" adjective for the cube, the model creates one; c) if both the nonexistent "blue" cube and "cyan" cylinder coexist in the same query, the model seems to mistakenly recognize the cyan cube, and the small gray cylinder is also erased, which is incorrect; d) if we simply remove "cube", the model just dyes a little red on top of the cylinder; e) if we remove "please put", the result remains the original figure; f) if we add an unseen word, *e.g.*, "amazing", to the query, the manipulation can still be achieved. Testing on many more images shows that these unlearned "create" and "dye" behaviors appear commonly, but not always. These findings also indicate that, without incorporating restriction rules into the dataset, the model may fulfill human instructions by generating new objects, which is not the intended outcome and lacks control. Hence, it becomes imperative to govern the model's behavior by incorporating pre-defined rules as supervision signals.

## 4.4.2 Vqvae Vs. Vqgan

As shown in Table 12, although the VQGAN-based image auto-encoder outputs a larger resolution and trains about 4 times faster than its VQVAE counterpart, the latter scores 7.4% higher. The results in Figures 5 and B4 show that the VQGAN-based image encoder frequently changes the colors of objects, erases objects, and adds irrelevant objects to the re-created images. These results demonstrate that VQGAN can produce high-quality images with fewer computational resources, but it appears to lose some of its ability to discriminate object properties in the latent space.

| Methods       | Image     | PSNR | SSIM   | FSIM  | HR A (%) | HR B (%) | HR C (%) | Score (%) |
|---------------|-----------|------|--------|-------|----------|----------|----------|-----------|
| Ours w/ VQVAE | 128 × 128 | 56.6 | 0.9963 | 0.693 | 66.4     | 14.8     | 18.8     | 73.8      |
| Ours w/ VQGAN | 256 × 256 | 53.9 | 0.9947 | 0.604 | 52.7     | 27.4     | 19.9     | 66.4      |

Table 12: Evaluation results on the CLEVR-ATVC sub-dataset.

Figure 5: Analysis of the results of the VQGAN-based image encoder (example queries: "Please put the small cyan cube on top of the small brown cylinder." and "Please put the small yellow cube on top of the small brown cylinder."). VQGAN frequently changes the colors of the objects, which downgrades the performance.

## 5 Related Works

In this section, we first review some representative datasets and methods for vision-and-language tasks, whose inputs and outputs are always unimodal. Next, we introduce prior multimodal dialogue models.
Finally, we briefly summarize recent controllable text-to-image generation methods and the main differences between them and our task.

## 5.1 Vision-And-Language Tasks

Recent years have witnessed the rapid development of vision-and-language tasks. Their development trend can be observed through the construction of the datasets, which can be roughly categorized into two groups: *Vision to Language* and *Language to Vision*. We briefly summarize the relevant datasets and methods.

Vision to Language. There are several sub-tasks in this category, such as *Visual Description Generation* (Ordonez et al., 2011), *Visual Storytelling* (Lin et al., 2014), *Visual Dialog* (Das et al., 2017; Kottur et al., 2019), *Visual Entailment* (Vu et al., 2018; Liu et al., 2020), *Visual Reasoning* (Johnson et al., 2017), *Visual Question Answering* (Antol et al., 2015; Marino et al., 2019), and *Visual Referring Expression* (Liu et al., 2019).

- Visual Description Generation, a.k.a. captioning, is the most well-known task, for which many datasets have been constructed over the past decade to address different scales, scenes, etc. For example, the image captioning datasets SBU1M (Ordonez et al., 2011), Flickr8k (Hodosh et al., 2013), Flickr30k (Young et al., 2014), MSCOCO (Lin et al., 2014), Multi30k-CLID (Elliott et al., 2016), Flickr30k-Entities (Plummer et al., 2015), STAIR Captions (Yoshikawa et al., 2017), Conceptual Captions (Sharma et al., 2018), MSCOCO-Entities (Cornia et al., 2019), and Personality Captions (Shuster et al., 2019), and the video captioning datasets MSVD (Chen & Dolan, 2011), MPII Cooking (Rohrbach et al., 2012), YouCook (Das et al., 2013), TACoS (Regneri et al., 2013), TACoS-MultiLevel (Rohrbach et al., 2014), M-VAD (Torabi et al., 2015), MSR-VTT (Xu et al., 2016), VTW (Zeng et al., 2016), ANetCap (Krishna et al., 2017), YouCook II (Zhou et al., 2018), ANetEntities (Zhou et al., 2019), COIN (Tang et al., 2019), and HowTo100M (Miech et al., 2019), have played an important role in advancing visual captioning.

- Visual Storytelling usually requires methods to generate a series of texts describing a set or a sequence of visual inputs, as shown in several datasets: NYC-Storytelling (Park & Kim, 2015), Disneyland Storytelling (Kim et al., 2015), SIND, and VIST (Huang et al., 2016).

- Visual Dialog can be regarded as a natural conversation system based on an image or video. Examples can be found in existing benchmarks such as VisDial (Das et al., 2017), CLEVR-Dialog (Kottur et al., 2019), and AVSD (Alamri et al., 2018).

- Visual Entailment is introduced in V-SNLI (Vu et al., 2018), SNLI-VE (Xie et al., 2019), and VIOLIN (Liu et al., 2020), which require the method to learn to select the correct premise and the hidden semantic information that can be inferred from the visual contents.

- Visual Reasoning is expected to provide the scene graphs between objects and then to answer questions based on the visual input and relationships. Many datasets, such as CLEVR (Johnson et al., 2017), NLVR (Suhr et al., 2017), CLEVR-CoGenT (Yang et al., 2018), NLVR2 (Suhr et al., 2018), GQA (Hudson & Manning, 2019), VCR (Zellers et al., 2019), and Visual COMET (Park et al., 2020), were constructed between 2017 and 2020, demonstrating that visual reasoning abilities can be effectively learned by deep neural networks (LeCun et al., 2015).

- Visual Question Answering is another representative vision-to-language task, in which the model is expected to generate a text-based answer according to the question and the visual cues.
The most influential dataset, VQA v1.0 (Antol et al., 2015), as well as MovieQA (Tapaswi et al., 2016), TVQA (Lei et al., 2018), OK-VQA (Marino et al., 2019), KVQA (Shah et al., 2019), and VQA v2.0 (Antol et al., 2015), have greatly promoted the development of this task.

- Visual Referring Expression is another task, which requires the method to comprehend referring expressions by showing evidence in the visual images, *i.e.*, using bounding boxes to locate the corresponding objects. Several datasets target this task, such as RefCLEF (Kazemzadeh et al., 2014) and CLEVR-Ref+ (Liu et al., 2019).

Language to Vision. This task aims at image generation based on pure natural language. Currently, there are only a few datasets for this task, such as Oxford-102 (Reed et al., 2016), Caltech-UCSD Birds (CUB) (Reed et al., 2016), Multi-Modal-CelebA-HQ (Xia et al., 2021), Text2Human (Jiang et al., 2022), Laion-400M (Schuhmann et al., 2021), Laion-5B (Schuhmann et al., 2022), and Text2Video (Xu et al., 2016; Li et al., 2018; Hong et al., 2022; Ho et al., 2022; Khachatryan et al., 2023). These datasets are similar to those of the vision-to-language tasks above, and the text-image pairs contain text descriptions of the image content.

The most representative task is text-based image generation, which has recently seen large progress in multimodal learning (Brooks et al., 2023; Li et al., 2023b). Many previous works train GANs with text-conditioning on publicly available image captioning datasets (Tao et al., 2020; Zhang et al., 2021). DALL·E (Ramesh et al., 2021b) uses 250 million text-image pairs to achieve promising results and can even perform zero-shot visual reasoning using prompts. Another zero-shot model, CLIP (Radford et al., 2021), is used to rank and measure the similarity between image and text, as one text prompt usually produces numerous plausible results. The recent DALL·E 2 (Ramesh et al., 2022) uses a diffusion prior on CLIP text latents and cascaded diffusion models to generate high-resolution 1024 × 1024 images. GLIDE (Nichol et al., 2021) also applies guided diffusion to the problem of text-conditional image synthesis. Imagen (Saharia et al., 2022) combines the power of transformer language models with high-fidelity diffusion models to deliver an unprecedented degree of photorealism in text-to-image synthesis.

In contrast to the aforementioned datasets, which typically concentrate on one aspect, our proposed datasets introduce a novel requirement for models to generate both visual re-creations and textual feedback simultaneously. The visual manipulation aspect necessitates not only performing the required actions accurately but also preserving a visually plausible background. Furthermore, the feedback generated by the model needs to be aware of which actions are feasible, which cannot be performed, and which are explicitly prohibited. This requirement ensures that our constructed datasets possess both intrinsic value and present a significant challenge.

## 5.2 Multimodal Dialogue Models

The task of vision-to-language involves using visual samples as input and generating text as output, as seen in tasks like Visual Dialog. Conversely, language-to-vision tasks operate in the opposite direction. In multimodal dialogue methods, the model must reason about multimodal or unimodal inputs and produce multimodal or unimodal outputs.
These methods can be categorized into the following two types§:

- *Multimodal Input and Unimodal Output (MIUO)*: This conversational task is similar to visual dialog, where the input typically consists of visual data and text-based prompts (i.e., multimodal input). The visual language model performs visual reasoning on the image and provides an answer (i.e., unimodal output) (Alayrac et al., 2022; Brooks et al., 2023; Gong et al., 2023; Bo Li, 2023; Li et al., 2023a; OpenAI, 2023b; Su et al., 2023; Yang et al., 2023b; Zhao et al., 2023; Zhu et al., 2023; Liu et al., 2023; Mu et al., 2023; Wang et al., 2023; Zhang et al., 2023). GPT-4 (OpenAI, 2023b) and MultiModal-GPT (Gong et al., 2023) can handle prompts consisting of interleaved visual inputs and text-based queries, generating text outputs. VideoChat (Li et al., 2023a) introduces a video-centric multimodal dialogue dataset, enabling trained models to understand and generate detailed conversations about videos.

- *Multimodal Input and Multimodal Output (MIMO)*: This task requires the model to perform multimodal reasoning and generation simultaneously (Koh et al., 2023; Jing Yu Koh, 2023; Chenfei Wu & Duan, 2023; Yang et al., 2023a). Visual ChatGPT (Chenfei Wu & Duan, 2023) is a pioneering work that combines ChatGPT and a series of pre-trained visual foundation models, allowing them to accept and produce text and images during textual-visual conversations. GILL (Koh et al., 2023) proposes a mapping network that efficiently maps the output embedding space of a frozen text-only language model to that of a frozen generation model (e.g., Stable Diffusion (Rombach et al., 2022)). This mapping only requires fine-tuning a small number of parameters on image-caption pairs for tasks such as image retrieval, novel image generation, and multimodal dialogue. FROMAGe (Jing Yu Koh, 2023) also involves image-text inputs and outputs for multimodal dialogue, with a few linear layers fine-tuned while keeping the pre-trained language model frozen. GPT4Tools (Yang et al., 2023a) introduces an instruction dataset and extends Visual ChatGPT (Chenfei Wu & Duan, 2023) to the image understanding task.

§https://github.com/zzw-zwzhang/Awesome-of-Multimodal-Dialogue-Models
Text2Human (Jiang et al., 2022) can synthesize human images by specifying clothing shapes and textures solely through natural language descriptions. However, the aforementioned approaches focus on controlling the image generation process through composition or disentanglement techniques. In this paper, our objective is to control the outcomes of re-created images, where the system learns to respond with a "no" to commands that are either prohibited or cannot be executed.

## 6 Conclusion

In this paper, we raise an important yet underexplored concern regarding the accountability of multimodal generative models. To tackle this issue, we introduce two novel datasets, namely CLEVR-ATVC and Fruit-ATVC, designed for a unique task called Accountable Text-based Visual Re-creation. The objective of this task is to train Visual Language Models (VLMs) to reject infeasible or prohibited human instructions. Our datasets consist of both visual and textual inputs and outputs, requiring the model to perform visual re-creation while ensuring image quality when the answer is "yes". In cases where the model cannot complete the action or the action is prohibited, it is required to provide an explanation. We provide baseline models, experimental settings, evaluation metrics, and a comprehensive analysis, presenting some promising results. These high-quality datasets can also be utilized in similar tasks, and we hope that our work inspires further research on the accountability problem, encompassing model design, label annotation, and the creation of larger datasets. We firmly believe that addressing the issue of accountability is crucial for advancing the development and deployment of multimodal generative models.

## 7 Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful feedback and the funding agencies listed below for supporting this work. This work is supported in part by the National Key Research and Development Program of China (Project Number 2022YFC2305102). We would like to thank Ying Cai, Xiaoai Sha, and other members for helping label text-image pairs and participating in the human evaluation experiments. Finally, we appreciate all the people who are working on making code, models, and data publicly available.

## References

Huda Alamri, Chiori Hori, Tim K Marks, Dhruv Batra, and Devi Parikh. Audio visual scene-aware dialog (avsd) track for natural language generation in dstc7. In *DSTC7 at AAAI2019 Workshop*, volume 2, 2018.

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022.

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3674–3683, 2018.

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In *Proceedings of the IEEE international conference on computer vision*, pp. 2425–2433, 2015.

Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. *arXiv preprint arXiv:2306.05425*, 2023.
Navaneeth Bodla, Gang Hua, and Rama Chellappa. Semi-supervised fusedgan for conditional image generation. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 669–683, 2018.

Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18392–18402, 2023.

David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In *Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies*, pp. 190–200, 2011.

Tianyi Chen, Yi Liu, Yunfei Zhang, Si Wu, Yong Xu, Feng Liangbing, and Hau San Wong. Semi-supervised single-stage controllable gans for conditional fine-grained image generation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9264–9273, 2021.

Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. *arXiv preprint arXiv:2303.04671*, 2023.

Yu Cheng, Yunyi Jia, Rui Fang, Lanbo She, Ning Xi, and Joyce Chai. Modelling and analysis of natural language controlled robotic systems. *IFAC Proceedings Volumes*, 47(3):11767–11772, 2014.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. *arXiv preprint arXiv:1904.10509*, 2019.

Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. Show, control and tell: A framework for generating controllable and grounded captions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8307–8316, 2019.

Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 326–335, 2017.

Pradipto Das, Chenliang Xu, Richard F Doell, and Jason J Corso. A thousand frames in just a few words: Lingual description of videos through latent topics and sparse object stitching. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2634–2641, 2013.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. *arXiv preprint arXiv:2105.13290*, 2021.

Hao Dong, Simiao Yu, Chao Wu, and Yike Guo. Semantic image synthesis via adversarial learning. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 5706–5714, 2017.

Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Kate Scott, Stuart Schieber, James Waldo, David Weinberger, et al. Accountability of ai under the law: The role of explanation. *arXiv preprint arXiv:1711.01134*, 2017.

Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. Multi30k: Multilingual english-german image descriptions. *arXiv preprint arXiv:1605.00459*, 2016.

Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. CVPR, 2021.

Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. *arXiv preprint arXiv:2305.04790*, 2023.
Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. *arXiv preprint arXiv:2204.03458*, 2022.

Micah Hodosh, Peter Young, and Julia Hockenmaier. Framing image description as a ranking task: Data, models and evaluation metrics. *Journal of Artificial Intelligence Research*, 47:853–899, 2013.

Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. *arXiv preprint arXiv:2205.15868*, 2022.

Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. Visual storytelling. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 1233–1239, 2016.

Drew A Hudson and Christopher D Manning. Gqa: a new dataset for compositional question answering over real-world images. *arXiv preprint arXiv:1902.09506*, 2(3):11, 2019.

Yuming Jiang, Shuai Yang, Haonan Qiu, Wayne Wu, Chen Change Loy, and Ziwei Liu. Text2human: Text-driven controllable human image generation. *ACM Transactions on Graphics (TOG)*, 41(4):1–11, 2022.

Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. Grounding language models to images for multimodal inputs and outputs. *arXiv preprint arXiv:2301.13823*, 2023.

Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2901–2910, 2017.

Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, and Thomas Lukasiewicz. e-vil: A dataset and benchmark for natural language explanations in vision-language tasks. *arXiv preprint arXiv:2105.03761*, 2021.

Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame: Referring to objects in photographs of natural scenes. In *Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)*, pp. 787–798, 2014.

Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Text2video-zero: Text-to-image diffusion models are zero-shot video generators. *arXiv preprint arXiv:2303.13439*, 2023.

Gunhee Kim, Seungwhan Moon, and Leonid Sigal. Ranking and retrieval of image sequences from multiple paragraph queries. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1993–2001, 2015.

Jing Yu Koh, Daniel Fried, and Ruslan Salakhutdinov. Generating images with multimodal language models. *arXiv preprint arXiv:2305.17216*, 2023.

Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. *arXiv preprint arXiv:1903.03166*, 2019.

Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In *Proceedings of the IEEE international conference on computer vision*, pp. 706–715, 2017.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. *Nature*, 521(7553):436–444, 2015.

Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation.
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 5549–5558, 2020.

Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional video question answering. *arXiv preprint arXiv:1809.01696*, 2018.

Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip Torr. Controllable text-to-image generation. *Advances in Neural Information Processing Systems*, 32, 2019.

Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, and Philip HS Torr. Manigan: Text-guided image manipulation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7880–7889, 2020.

KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. *arXiv preprint arXiv:2305.06355*, 2023a.

Yitong Li, Martin Min, Dinghan Shen, David Carlson, and Lawrence Carin. Video generation from text. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. Gligen: Open-set grounded text-to-image generation. *arXiv preprint arXiv:2301.07093*, 2023b.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *European conference on computer vision*, pp. 740–755. Springer, 2014.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. *arXiv preprint arXiv:2304.08485*, 2023.

Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, and Jingjing Liu. Violin: A large-scale dataset for video-and-language inference. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10900–10910, 2020.

Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4185–4194, 2019.

Michele Loi and Matthias Spielkamp. Towards accountability in the use of artificial intelligence for public administrations. In *Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 757–766, 2021.

Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3195–3204, 2019.

Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 2630–2640, 2019.

Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. Embodiedgpt: Vision-language pre-training via embodied chain of thought. *arXiv preprint arXiv:2305.15021*, 2023.

Seonghyeon Nam, Yunji Kim, and Seon Joo Kim. Text-adaptive generative adversarial networks: manipulating images with natural language. *arXiv preprint arXiv:1810.11919*, 2018.

Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12527–12537, 2019.
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. *arXiv preprint arXiv:2112.10741*, 2021.

Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11453–11464, 2021.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. *arXiv preprint arXiv:1711.00937*, 2017.

OpenAI. Chatgpt. *https://openai.com/blog/chatgpt/*, 2023a.

OpenAI. Gpt-4 technical report. *arXiv preprint arXiv:2303.08774*, 2023b.

Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2text: Describing images using 1 million captioned photographs. *Advances in neural information processing systems*, 24:1143–1151, 2011.

Cesc C Park and Gunhee Kim. Expressing an image stream with a sequence of natural sentences. *Advances in neural information processing systems*, 28:73–81, 2015.

Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, and Yejin Choi. Visualcomet: Reasoning about the dynamic context of a still image. In *European Conference on Computer Vision*, pp. 508–524. Springer, 2020.

Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. *arXiv preprint arXiv:2103.17249*, 2021.

Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In *Proceedings of the IEEE international conference on computer vision*, pp. 2641–2649, 2015.

Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. Annotation inconsistency and entity bias in multiwoz. *arXiv preprint arXiv:2105.14150*, 2021.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. *arXiv preprint arXiv:2103.00020*, 2021.

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *International Conference on Machine Learning*, pp. 8821–8831. PMLR, 2021a.

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. *arXiv preprint arXiv:2102.12092*, 2021b.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 2022.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In *International Conference on Machine Learning*, pp. 1060–1069. PMLR, 2016.

Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. *Transactions of the Association for Computational Linguistics*, 1:25–36, 2013.

Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: a stylegan encoder for image-to-image translation. *arXiv preprint arXiv:2008.00951*, 2020.
Anna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Manfred Pinkal, and Bernt Schiele. Coherent multi-sentence video description with variable level of detail. In *German conference on pattern recognition*, pp. 184–195. Springer, 2014.

Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. A database for fine grained activity detection of cooking activities. In *2012 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1194–1201. IEEE, 2012.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10684–10695, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022.

Rosario Scalise, Shen Li, Henny Admoni, Stephanie Rosenthal, and Siddhartha S Srinivasa. Natural language instructions for human–robot collaborative manipulation. *The International Journal of Robotics Research*, 37(6):558–565, 2018.

Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. *arXiv preprint arXiv:2111.02114*, 2021.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. *arXiv preprint arXiv:2210.08402*, 2022.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. *arXiv preprint arXiv:1508.07909*, 2015.

Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. Kvqa: Knowledge-aware visual question answering. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 8876–8884, 2019.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 2556–2565, 2018.

Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of gans for semantic face editing. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9243–9252, 2020.

Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. Engaging image captioning via personality. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12516–12526, 2019.

Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, and Heni Ben Amor. Language-conditioned imitation learning for robot manipulation tasks. *NeurIPS*, 2020.

Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instruction-follow them all. *arXiv preprint arXiv:2305.16355*, 2023.

Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. A corpus of natural language for visual reasoning. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)*, pp. 217–223, 2017.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. *arXiv preprint arXiv:1811.00491*, 2018.

Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video analysis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 1207–1216, 2019.

Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, Xiao-Yuan Jing, Fei Wu, and Bingkun Bao. Df-gan: Deep fusion generative adversarial networks for text-to-image synthesis. *arXiv preprint arXiv:2008.05865*, 2020.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Movieqa: Understanding stories in movies through question-answering. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4631–4640, 2016.

Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. Vision-and-dialog navigation. In *Conference on Robot Learning*, pp. 394–406. PMLR, 2020.

Atousa Torabi, Christopher Pal, Hugo Larochelle, and Aaron Courville. Using descriptive video services to create a large data source for video annotation research. *arXiv preprint arXiv:1503.01070*, 2015.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. *arXiv preprint arXiv:1706.03762*, 2017.

Hoa Trong Vu, Claudio Greco, Aliia Erofeeva, Somayeh Jafaritazehjan, Guido Linders, Marc Tanti, Alberto Testoni, Raffaella Bernardi, and Albert Gatt. Grounded textual entailment. *arXiv preprint arXiv:1806.05645*, 2018.

Teng Wang, Jinrui Zhang, Junjie Fei, Yixiao Ge, Hao Zheng, Yunlong Tang, Zhe Li, Mingqi Gao, Shanshan Zhao, Ying Shan, et al. Caption anything: Interactive image description with diverse multimodal controls. *arXiv preprint arXiv:2305.02677*, 2023.

Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. NÜWA: Visual synthesis pre-training for neural visual world creation. *arXiv preprint arXiv:2111.12417*, 2021.

Weihao Xia, Yujiu Yang, Jing-Hao Xue, and Baoyuan Wu. Tedigan: Text-guided diverse face image generation and manipulation. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2256–2265, 2021.

Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. Visual entailment: A novel task for fine-grained image understanding. *arXiv preprint arXiv:1901.06706*, 2019.

Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 5288–5296, 2016.

Guangyu Robert Yang, Igor Ganichev, Xiao-Jing Wang, Jonathon Shlens, and David Sussillo. A dataset and architecture for visual reasoning with a working memory. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 714–731, 2018.

Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. Gpt4tools: Teaching large language model to use tools via self-instruction. *arXiv preprint arXiv:2305.18752*, 2023a.

Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. *arXiv preprint arXiv:2303.11381*, 2023b.

Yuya Yoshikawa, Yutaro Shigeto, and Akikazu Takeuchi.
Stair captions: Constructing a large-scale japanese image caption dataset. *arXiv preprint arXiv:1705.00823*, 2017.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. *Transactions of the Association for Computational Linguistics*, 2:67–78, 2014.

Oğuz Kaan Yüksel, Enis Simsar, Ezgi Gülperi Er, and Pinar Yanardag. Latentclr: A contrastive learning approach for unsupervised discovery of interpretable directions. *arXiv preprint arXiv:2104.00820*, 2021.

Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6720–6731, 2019.

Kuo-Hao Zeng, Tseng-Hung Chen, Juan Carlos Niebles, and Min Sun. Title generation for user generated videos. In *European conference on computer vision*, pp. 609–625. Springer, 2016.

Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In *Proceedings of the IEEE international conference on computer vision*, pp. 5907–5915, 2017.

Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan++: Realistic image synthesis with stacked generative adversarial networks. *IEEE transactions on pattern analysis and machine intelligence*, 41(8):1947–1962, 2018.

Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 833–842, 2021.

Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang. Fsim: A feature similarity index for image quality assessment. *IEEE transactions on Image Processing*, 20(8):2378–2386, 2011.

Tianjun Zhang, Yi Zhang, Vibhav Vineet, Neel Joshi, and Xin Wang. Controllable text-to-image generation with gpt-4. *arXiv preprint arXiv:2305.18583*, 2023.

Zhiqiang Zhang, Wenxin Yu, Jinjia Zhou, Xuewen Zhang, Ning Jiang, Gang He, and Zhuo Yang. Customizable gan: A method for image synthesis of human controllable. *IEEE Access*, 8:108004–108017, 2020.

Zijia Zhao, Longteng Guo, Tongtian Yue, Sihan Chen, Shuai Shao, Xinxin Zhu, Zehuan Yuan, and Jing Liu. Chatbridge: Bridging modalities with large language model as a language catalyst. *arXiv preprint arXiv:2305.16103*, 2023.

Luowei Zhou, Chenliang Xu, and Jason Corso. Towards automatic learning of procedures from web instructional videos. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Yimin Zhou, Yiwei Sun, and Vasant Honavar. Improving image captioning by leveraging knowledge graphs. In *2019 IEEE winter conference on applications of computer vision (WACV)*, pp. 283–293. IEEE, 2019.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. *arXiv preprint arXiv:2304.10592*, 2023.

## A Additional Data Details

This section provides further information about the data used in our research. We begin by presenting a wider range of example images from the Fruit-ATVC dataset to highlight the diversity of collection scenes. Next, we provide a summary of statistics regarding the various actions found in the datasets.
Subsequently, we elaborate on the process of the human evaluation experiments. Lastly, we introduce the format of the annotation files for our datasets.

## A.1 Diversity Of Fruit-Atvc

Figure A1 shows the diversity of collection scenes in the Fruit-ATVC dataset. The average image resolution is 2700×2700, and the images were taken with different categories of mobile phones, including iPhone, Huawei, Xiaomi, etc. We would like to emphasize that the image quality of this dataset is very high, which is why the original Fruit-ATVC dataset is 168 GB in size. In addition, the backgrounds of the data collection, the fruit categories, and the container types are also very diverse. We believe this dataset can also be used for other image generation tasks. Figure A2 illustrates the distribution of the different actions in our constructed datasets.

Figure A2: The distribution of different actions on the CLEVR-ATVC and Fruit-ATVC datasets.

Figure A3: The visualization of bounding boxes in the CLEVR-ATVC dataset.

## A.2 Annotation File Format

We describe the format of the annotation files for both the CLEVR-ATVC and Fruit-ATVC datasets. Listings 1 and 2 demonstrate the consistent structure of the annotation files, with the primary distinction being the inclusion of detailed 2D and 3D coordinate locations of objects in the CLEVR-ATVC dataset. Figure A3 provides a visual representation of the bounding boxes in the CLEVR-ATVC dataset. It is worth noting that we anticipate the potential use of this dataset for other related tasks, such as compositional visual reasoning.

While we have previously introduced the main structure of the annotation files in the main paper, we now aim to provide additional details. The annotation files begin by providing a summary of the dataset contributors, timestamp, project link, version, and license information. This introductory section ensures comprehensive documentation and proper attribution. Following this, the annotation files include the categories of objects present in the dataset, along with associated object adjectives, actions, and other relevant information. The "images" section contains all the necessary information about the visual inputs. This includes the image index, filename, link, the number of objects in the image, and detailed descriptions of each object present. Each object is listed under the "objects" child list, accompanied by specific properties such as index, category index, and bounding box coordinates. The text-based queries (Q) and textual feedback (A) are included in the "questions" section of the annotation files. The answer types are indicated by numbers ranging from 0 to 2. Specifically, a value of 0, 1, or 2 represents an instruction that cannot be executed, an instruction that can be executed, or a prohibited instruction, respectively. If the action type is 0 or 2, the subsequent "actions" list in the "recreations" section will be empty. Notably, the CLEVR-ATVC dataset provides additional information regarding any changes in the color or position of an object after a displacement or property modification.

## B Human Evaluation Experiments

This section presents examples of the interfaces utilized for facilitating human evaluation.
During the evaluation phase, the visual input, text-based query, re-created image, answer, and ground truths for recreation and textual feedback are automatically imported into the corresponding sections of the auxiliary tool. Evaluators are then only required to assign scores to the respective results and save them. Subsequently, we calculate the final score of the model based on the saved results. As detailed in the main paper, we assigned the same results to different evaluators, and each evaluator provided identical scores for the experimental outcomes. The aforementioned experiments demonstrate that the auxiliary tools we have designed not only significantly reduce the evaluation time required but also ensure the accuracy and reliability of the widely used human evaluation process.

Listing 1: The format of annotation file on CLEVR-ATVC dataset.

```json
{
  "contributor": "Zhiwei Zhang, Yuliang Liu",
  "data_created": "2023",
  "url": "https://matrix-alpha.github.io/",
  "description": "CLEVR-ATVC Dataset",
  "vertion": "1.0",
  "licenses": {
    "id": "1",
    "url": "None",
    "name": "Creative Commons Attribution (CC-BY 4.0)"
  },
  "categories": ["cube, cylinder, sphere"],
  "sizes": ["small, large"],
  "colors": ["gray, red, blue, green, brown, purple, cyan, yellow"],
  "material": ["rubber, metal"],
  "actions": ["put on top, put under, exchange position, exchange color"],
  "images": [
    {
      "image_idx": "0000001",
      "image_filename": "521100_03_000011.png",
      "id": "521100_03_000011",
      "object_number": 3,
      "data_created": "2023-04-27 00:17:14",
      "objects": [
        {
          "index": 0,
          "category_id": 0,
          "size_id": 1,
          "color_id": 3,
          "bbox": [29, 62, 81, 116],
          "3d_coords": [-2.96856045, -1.99158716, 0.6999999],
          "material_id": 1
        },
        { ... },
        { ... }
      ]
    }
  ],
  "questions": [
    {
      "question_idx": "0000001",
      "question_id": "521100_03_000011",
      "question_number": 10,
      "ques": [
        {
          "ques_id": 0,
          "ques_idx": 1,
          "id": "521100_03_000011_0_01",
          "type": 1,
          "Q": ["Please put the cylinder on top of the cube."],
          "A": ["No problem."]
        }
      ],
      "recreations": [
        {
          "rec_idx": "0000001",
          "rec_id": "521100_03_000011",
          "rec_num": 10,
          "actions": [
            {
              "actions_id": 0,
              "actions_idx": 1,
              "rec_filename": "521100_03_000011_0_01.png",
              "object_number": 3,
              "date_created": "2023-04-27 00:17:59",
              "objects": [
                { },
                { },
                {
                  "index": 2,
                  "category_id": 1,
                  "size_id": 0,
                  "color_id": 1,
                  "bbox": [39, 45, 65, 72],
                  "3d_coords": [-2.96856045, -1.99158716, 1.75],
                  "material_id": 0
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```
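To make the structure in Listing 1 concrete, below is a minimal sketch of how such an annotation file could be loaded and iterated over in Python. The key names follow Listing 1, while the file path and the helper function itself are hypothetical and not part of any released tooling.

```python
import json

def iterate_queries(path="clevr_atvc_annotations.json"):  # hypothetical path
    """Print each text-based query (Q) with its answer type and feedback (A)."""
    with open(path) as f:
        ann = json.load(f)
    for entry in ann["questions"]:
        for q in entry["ques"]:
            # type: 0 = cannot be executed, 1 = executable, 2 = prohibited
            print(q["id"], q["type"], q["Q"][0], "->", q["A"][0])

iterate_queries()
```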
{ " c o n t ri b u t o r " : " Zhiwei Zhang , Yuliang Liu " , " d a t a_c re a ted " : " 2 0 2 3 " , " u r l " : " h t t p s : / / matrix−alph a . gi thub . i o / " , " d e s c r i p t i o n " : " F rui t−ATVC D a t a se t " , " v e r t i o n " : " 1 . 0 " , " l i c e n s e s " : { " i d " : " 1 " , " u r l " : "Nnone " , "name " : " C r e a ti v e Commons A t t ri b u ti o n (CC−BY 4 . 0 ) " , } , " f r u i t s " : [ " apple , cocount , orange , banana , kiwi , mango , avocado , e t c . " ] , " c o n t a i n e r s " : [ " pl a t e , bowl , b o t t l e " ] , " a c t i o n s " : [ " put in , exchange p o si ti o n , remove " ] , " images " : [ { "image_idx " : " 0 0 0 0 0 0 1 " , " image_ filename " : " 0 0 0 0 0 0 1. png " , " i d " : " 0 0 0 0 0 0 1 " , " fruit_number " : 3 , " container_number " : 1 , " d a t a_c re a ted " : "2023−04−27 0 0 : 1 0 : 1 1 " , " f r u i t s " : [ { " inde x " : 0 , " c a te g o r y_id " : 0 } { " inde x " : 1 , " c a te g o r y_id " : 6 } { " inde x " : 2 , " c a te g o r y_id " : 8 } ] , " c o n t a i n e r s " : [ { " inde x " : 0 , " c a te g o r y_id " : 2 } ] , } , ] " q u e s ti o n s " : [ { " q u e s ti o n_i d x " : " 0 0 0 0 0 0 1 " , " q u e s ti o n_i d " : " 0 0 0 0 0 0 1 " , " question_number" : 3 , " ques " : [ { " ques_id " : 0 , " ques_idx " : 1 , " i d " : " 0 0 0 0 0 0 1_0_01 " , " type " : 1 , "Q" : [ " Pl e a s e put the kiwi i n the p l a t e . " ] , "A" : [ " No problem . " ] } , ] " r e c r e a t i o n s " : [ { " rec_idx " : " 0 0 0 0 0 0 1 " , " rec_id " : " 0 0 0 0 0 0 1 " , "rec_num " : 3 , " a c t i o n s " : [ { " a c ti o n s_i d " : 0 , " a c ti o n s_i d x " : 1 , " r e c_ fil e n am e " : " 0 0 0 0 0 0 1_0_01 . png " , " fruit_number " : 3 , " container_number " : 1 , " d a te_c re a ted " : "2023−04−27 0 0 : 1 0 : 1 8 " , } , ] } ![30_image_0.png](30_image_0.png) ![30_image_1.png](30_image_1.png) ![30_image_2.png](30_image_2.png) Query: Please put the small red cube on top of the large yellow cylinder. ![30_image_3.png](30_image_3.png) ![30_image_4.png](30_image_4.png) Query: Please put the large green cube on top of the small red cylinder. ![30_image_5.png](30_image_5.png) ![30_image_6.png](30_image_6.png) Query: Please put the large green cube on top of the small red cylinder. ![30_image_7.png](30_image_7.png) ![30_image_8.png](30_image_8.png) Query: Please put the small yellow cube on top of the large blue cylinder. ![30_image_9.png](30_image_9.png) ![30_image_10.png](30_image_10.png) Query: Please put the large brown cube on top of the small green cylinder. Figure B4: More results are to show why VQGAN-based image encoder performs worse on image re-creation. ![31_image_0.png](31_image_0.png) Query: Please exchange the color of the small yellow rubber cube and the large red metal sphere. Answer: No problem. Ground truth: No problem. Human Evaluation 1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0 2. Please rate the type of answer? 1 0 3. Please rate the explanation of answer? 1 0 4. What is the score for textual feedback? [1.0] 5. What is the full-match score? ![31_image_1.png](31_image_1.png) [1.0] Input Re-creation Ground truth Query: Please put the large gray metal cylinder under the small yellow rubber cylinder. Answer: This action cannot be done. Because there is no large gray metal cylinder and no small yellow rubber cylinder. Ground truth: This action cannot be done. Because there is no large gray metal cylinder and no small yellow rubber cylinder. Human Evaluation 1. Please rate the visual re-creation? 
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [1.0]
5. What is the full-match score? [1.0]

Input / Re-creation / Ground truth

Query: Please put the large gray metal cube on top of the large blue metal sphere.
Answer: This action is forbidden. Because you cannot put an object on top of the sphere, and there is no large gray metal cube and no large blue metal sphere.
Ground truth: This action is forbidden. Because you cannot put an object on top of the sphere, and there is no large gray metal cube and no large blue metal sphere.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [1.0]
5. What is the full-match score? [1.0]

Input / Re-creation / Ground truth

Query: Please put the large red metal sphere under the small yellow rubber cube.
Answer: This action is forbidden. Because you cannot put the sphere under an object, and there is no large red metal sphere and no small yellow rubber cube.
Ground truth: This action is forbidden. Because you cannot put the sphere under an object.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [0.5]
5. What is the full-match score? [0.75]

Input / Re-creation / Ground truth

Query: Please put the small yellow metal sphere under the large red metal cube.
Answer: This action is forbidden. Because you cannot put the sphere under an object, and there is no small yellow metal sphere and no small red metal cube.
Ground truth: This action is forbidden. Because you cannot put the sphere under an object, and there is no small yellow metal sphere and no large red metal cube.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [0.5]
5. What is the full-match score? [0.75]

Input / Re-creation / Ground truth

Query: Please put the small green metal cube under the large purple metal sphere.
Answer: No problem.
Ground truth: No problem.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [1.0]
5. What is the full-match score? [0.75]

Input / Re-creation / Ground truth

Query: Please put the large purple metal sphere on top of the large purple metal cylinder.
Answer: This action is forbidden. Because you cannot put the sphere under an object.
Ground truth: No problem.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [0]
5. What is the full-match score? [0]

Input / Re-creation / Ground truth

Query: Please exchange the position of the large yellow metal cube and the large green metal cylinder.
Answer: No problem.
Ground truth: This action cannot be done. Because there is no large yellow metal cube and no large metal cylinder.

Human Evaluation
1. Please rate the visual re-creation? A: 1 B: 0.5 C: 0
2. Please rate the type of answer? 1 0
3. Please rate the explanation of answer? 1 0
4. What is the score for textual feedback? [0]
5. What is the full-match score? [0]
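For reference, the sketch below shows how the reported scores appear to be aggregated, as inferred from the examples above: the textual-feedback score averages the answer-type and explanation ratings, and the full-match score averages the visual and textual scores. This weighting is our reading of the examples rather than a formula stated explicitly in the paper.

```python
# Hypothetical aggregation inferred from the evaluation examples above.
def textual_feedback_score(type_ok: int, explanation_ok: int) -> float:
    # Items 2 and 3 of the evaluation form, each rated 1 or 0.
    return (type_ok + explanation_ok) / 2

def full_match_score(visual: float, textual: float) -> float:
    # Item 1 (visual re-creation, rated 1 / 0.5 / 0) combined with item 4.
    return (visual + textual) / 2

# A correct answer type with a wrong explanation and a perfect re-creation
# gives textual = 0.5 and full match = 0.75, matching the examples above.
print(full_match_score(1.0, textual_feedback_score(1, 0)))  # 0.75
```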
Review 1:

Summary:
The paper proposes two new datasets to evaluate the multimodal generation capabilities of Visual Language Models (VLMs): the synthetic CLEVR-ATVC dataset and the manually pictured Fruit-ATVC dataset. The most notable feature of these datasets is that the model is required to reject textual prompts if they are prohibited according to pre-defined rules or cannot be executed. The authors develop a simple baseline model consisting of a VQVAE and an autoregressive Transformer, and present quantitative and qualitative experimental results for the generated image quality and answers using this baseline. They also evaluate the model's behavior when facing uncertain or imperfect queries.

Strengths and Weaknesses:
+ As the authors mention in the conclusion section, the collected datasets will foster future research on the accountability of VLMs.
+ The proposed baseline seems to manipulate the image input well based on the textual prompt, or reject the prompt if it cannot be executed or is prohibited.

However, the writing should be improved.
- The generation of textual queries and feedback on Fruit-ATVC is unclear. Please elaborate on the interface shown in Figure 2.
- Some of the sections should be represented differently; e.g., Section 4.3.3 and Section 4.4.3 describe redundant issues (VQVAE vs. VQGAN), the descriptions of imperfect queries are mentioned in Section 4.4.1, not Section 4.4.2, and many examples mentioned in Section 4.4.2 are not about imperfect queries, but about queries that cannot be executed or unseen queries.
- The attention mask used in the decoder-based Transformer should be described in more detail. The following sentences in Section 3 are quite unclear: "The transformer is a pure decoder, where text tokens can be attended by each image token in any one of the self-attention layers. Causal mask and full attention mask are used for the text-to-text and image-to-image, respectively."
- The paper addresses the accountability of VLMs, but it is limited in terms of object attributes. The VLMs must also reject prompts requiring unethical behaviors or that cannot be executed in more diverse scenarios, such as a prompt that defies the laws of physics. The authors should address the accountability of VLMs in more detail in the Related Work section.

And most importantly, conducting experiments with only the proposed baseline does not fully demonstrate the model's ability and reduces the reliability of the dataset analysis.

Requested Changes:
Please modify the draft to reflect what I mentioned above. Also, provide additional results using at least two or three other VLMs.

Broader Impact Concerns:
Please add the broader impact of the work.

==================================================

Review 2:

Summary:
The paper points out the lack of a dataset for validating model performance on textual-visual chat tasks and proposes two datasets specifically designed to evaluate the quality of image and text outputs in multimodal dialogue systems. The proposed datasets include CLEVR-ATVC, a synthetic dataset, and Fruit-ATVC, containing real images, both including visual and text-based inputs and outputs. For the first time in the field of multimodal dialogue systems, these datasets incorporate specific rules to enable the model to reject specific human requests. To explore the capability of rejecting human instructions in a textual-visual chat task, the paper proposes an accountable text-based visual re-creation task using the proposed datasets.
This task involves receiving text-image pairs and providing re-created images with language-based feedback, including explanations for the rejection of requests in specific cases. To achieve this task, the paper suggests a two-stage training procedure: training a discrete VAE (dVAE) for compressing each image into short tokens, and training a decoder-based transformer that takes the previously generated image tokens concatenated with text tokens as input for generating the final outputs. The experiments analyze the quality of the re-created images, the accuracy of responses, and the model's behavior when handling imperfect and ambiguous user queries. Additionally, they compare the performance of VQVAE and VQGAN as image auto-encoders, revealing that VQGAN shows lower performance.

Strengths and Weaknesses:
[Strengths]
- The newly proposed datasets seem to have the potential to make a meaningful impact within this research field. Their intended design, which consists of image-text inputs and outputs including prohibited requests, aligns well with current trends in generative dialogue systems. Therefore, evaluations with these datasets are expected to be utilized in future studies.

[Weaknesses]
- Evaluating the ability to reject prohibited requests using the proposed dataset seems to be inappropriate. The exceptionally high 'Type Acc.' (99.7%) for 'forbidden' in Table 6 suggests that recognizing prohibited requests may be a straightforward task. The test samples in the dataset may be too simplistic, diminishing their effectiveness for evaluating the challenges of rejecting prohibited requests.
- The paper does not provide a clear explanation of why Fruit-ATVC, one of the proposed datasets, does not include forbidden rules. Considering the ethical implications of rejecting specific requests on real images, it appears necessary for the paper to articulate the rationale behind the absence of forbidden rules within the Fruit-ATVC dataset.

Requested Changes:
- It seems necessary to have test set requests that use different expressions but hold similar meanings to the forbidden rules present in the training set. This would enable the evaluation of whether the model robustly recognizes forbidden expressions.

Broader Impact Concerns:
- There appear to be no specific ethical issues that need to be particularly considered regarding the paper.

==================================================

Review 3:

Summary:
This paper proposes a new task, ATVC, where models have to generate images and decline generations if it is not possible to do so. The authors create two datasets for this task. They train a model similar to DALL-E on their dataset and analyze their results.

Strengths and Weaknesses:
Strengths
- The task of generating images (while possibly rejecting human instructions) is a new and impactful task that should be studied.
- The dataset is constructed to capture this task well.

Weaknesses
- Clarity of contribution. What distinguishes and motivates this particular multimodal dataset? From my understanding, it is that the dataset measures the quality of generated text and images. The generated text is only required since the authors introduce a new task, ATVC, where the model can reject the request. If my understanding is correct, I think the paper should motivate this new task first and then the accompanying dataset, and make this clearer in the intro.
- Lacking GPT-4 results, though GPT-4 is the motivation.
The motivation for constructing this multimodal dataset is to evaluate the performance of multimodal generative models like ChatGPT and GPT-4. However, there are no results for GPT-4 on this dataset. I understand GPT-4 requires paid access, but given that this is the motivation for the dataset in the first place, these results should be included to justify the motivation. Also, including results on GPT-4 would make this work more impactful and useful to others.

Requested Changes:
Major
- Clarity of contribution, as listed above.

Minor
- Add GPT-4 results.

Questions
- Are the answer types part of the ATVC framework? If so, it would make the paper easier to follow if the answer types were described in Section 2.1.
- "For example, six human instructions can be executed by the AI system and re-created the images accordingly, two instructions cannot be executed and two instructions are forbidden." - Is the number of queries for each action type randomly chosen for each image, or does it always follow the breakdown described in the example?
- What is the difference between an instruction that is forbidden and an instruction that cannot be executed? An instruction that is forbidden also cannot be executed.
- Are the two types of prohibition that an object cannot be put under/on top of another object? Is this prohibition based on physics? And why only these prohibitions of all possible prohibitions? These two prohibitions seem arbitrary.
- In the uncertainty experiments, the model handles it well by choosing one object (if multiple objects are correct). Shouldn't the correct behavior be to clarify with the user which object is correct?
- In the imperfect-query re-creation results, does this mean the model is not robust and cannot handle noisy inputs?

Broader Impact Concerns: NA

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment:
The paper introduces new multimodal datasets/tasks that require models to reject human instructions that may be physically impossible. All three reviewers and the AE appreciated the novelty and importance of the task introduced, the effort in curating the dataset and tasks, and the thorough experiments to benchmark models for these tasks. The paper is recommended for conditional acceptance. I describe some of the remaining issues that need to be addressed below. Additionally, the authors did a commendable job in surveying the related work, and as such the paper is recommended for Survey certification. Congratulations to the authors!

Major comment:
- Given the limited set of actions/names/colors/etc., powerful transformers are expected to memorize these names and patterns, and it is likely that they won't generalize to unseen ones. As an example, see (Qian et al., 2021), where transformer-based dialog models do not generalize to unseen named entities. I suggest that the authors leave out some of the colors/names/actions/etc. or create unseen ones for the test set only, and devise a clear recipe for testing generalization to unseen objects/actions/etc., which is one of the hallmarks of using foundation models for solving new tasks.

Qian, Kun, et al. "Annotation Inconsistency and Entity Bias in MultiWOZ." Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue. 2021.

Minor comments:
- End of page 5: Figure 2.4 -> Figure 3
- Please use \citet{} and \citep{} correctly or the paper will be hard to read. \citet{} should be used when the author name is intended to be part of the sentence, e.g., "X et al.
(2024) introduced a new benchmark." \citep{} should be used when the citation appears in parentheses and is not read as part of the sentence, e.g., "There has been great progress in vision language models (X et al., 2024)."
- Please move the tables and images around so that they appear in the same order as they are referenced in the text, and preferably closer to where they are referenced.
- Please remove the page break between Conclusion and References.
- Several lines in the annotation formats in the appendix don't fit the page. Please fix them.

==================================================
# Tree Ensembles For Contextual Bandits Anonymous authors Paper under double-blind review ## Abstract We propose a new framework for contextual multi-armed bandits based on tree ensembles. Our framework adapts two widely used bandit methods, Upper Confidence Bound and Thompson Sampling, for both standard and combinatorial settings. As part of this framework, we propose a novel method of estimating the uncertainty in tree ensemble predictions. We further demonstrate the effectiveness of our framework via several experimental studies, employing XGBoost and random forest, two popular tree ensemble methods. Compared to the state-of-the-art methods based on decision trees and neural networks, our methods exhibit superior performance in terms of both regret minimization and computational runtime, when applied to benchmark datasets and the real-world application of navigation over road networks. ## 1 Introduction Stochastic *multi-armed bandits* (MABs) (Slivkins, 2019) provide a principled framework for making optimal sequences of decisions under uncertainty. As discussed by Zhu & Van Roy (2023), MABs constitute a vital component of modern recommendation systems for efficient exploration, among many other applications. An important variant known as the *contextual multi-armed bandit* incorporates additional contextual / side information into the decision-making process, allowing for more personalized or adaptive action selection. Following the recent success of (deep) neural networks in solving various machine learning tasks, several methods building on such models have been suggested for finding functional relationships between the context and outcomes of actions to aid the decision-making process (Zhou et al., 2020; Zhang et al., 2021; Zhu & Van Roy, 2023; Osband et al., 2021; Hoseini et al., 2022). Even though sophisticated, these methods can be impractical, time-consuming, and computationally expensive. In light of these challenges, we propose bandit algorithms that utilize *tree ensemble* (TE) methods (Hastie et al., 2009), like *gradient-boosted decision trees* (GBDT) and *random forests*, to comprehend the contextual features, combined with the most popular bandit methods *Upper Confidence Bound* (UCB) and Thompson Sampling (TS) as exploration strategies. Compared with other tree-based and neural network methods, our methods, called TEUCB and TETS, yield superior results on UCI benchmark datasets. Additionally, our methods benefit from more effective learning with less computational overhead. We furthermore extend the framework to the combinatorial contextual setting, a variant of bandits that deals with complex combinatorial sets of arms, and investigate them on the important real-world application of efficient navigation in road networks. ## 1.1 Related Work The concept of applying decision trees to contextual bandit problems has been studied to a limited extent. Elmachtoub et al. (2017) suggest using a single decision tree for each arm available to the agent. Their method is referred to as *TreeBootstrap*, as the concept of bootstrapping is employed in order to resemble the Thompson Sampling process. One issue with their method is its limited ability to achieve the accuracy and robustness of tree ensemble methods, particularly when estimating complex reward functions. The possibility of applying random forests in their framework is discussed, but no concrete methods or experimental results are presented. 
Furthermore, storing and training one tree model per arm does not scale, especially with large action spaces (e.g., in the combinatorial setting), and could potentially lead to excessive exploration for dynamic arm sets, as it cannot attend to what it has learned from the contexts of other arms. In contrast, Féraud et al. (2016) employ a tree ensemble but only use decision trees of unit depth, also known as decision stumps. While tree ensembles with a large number of decision stumps tend to perform well in terms of accuracy, variance reduction, and robustness, restricting the depth in such an extreme way may not be adequate when addressing complex tasks (Hastie et al., 2009). The experimental studies by Hastie et al. (2009) indicate that tree depths of 4 to 8 work well for boosting. Additionally, Féraud et al. (2016) only investigate the bandit algorithm known as *successive elimination*, which usually leads to less efficient exploration compared to UCB and TS, and thus is used less commonly. Moreover, the method, called *Bandit Forest*, requires binary-encoded contextual features. Comparing these two tree-based bandit methods, Elmachtoub et al. (2017) experimentally show that TreeBootstrap tends to outperform Bandit Forest in practice. It should be noted that neither Elmachtoub et al. (2017) nor Féraud et al. (2016) consider combinatorial contextual bandits.

Apart from the two methods mentioned above, the problem has primarily been addressed using other machine learning models. When the reward function is linear w.r.t. the context, *LinUCB* (Li et al., 2010) and Linear Thompson Sampling (LinTS) (Agrawal & Goyal, 2013) have demonstrated good performance. In more complicated cases, the linear assumption no longer holds, and thus more expressive models are needed. In recent years, neural contextual bandits have attracted a lot of attention. As the name suggests, these methods use (deep) neural networks to model the expected rewards for some observed contextual features. Out of several proposed neural contextual bandit algorithms, *NeuralUCB* (Zhou et al., 2020) and *NeuralTS* (Zhang et al., 2021), in particular, have been shown to perform well, while also providing theoretical regret bounds. On the negative side, due to how these methods estimate the uncertainty in predictions, they rely on inverting a matrix of size equal to the number of network parameters, which is usually computationally expensive and time-consuming, especially when the prediction tasks require large neural networks. Zhu & Van Roy (2023) provide a thorough review of different neural bandit methods and suggest a method based on epistemic neural networks (Osband et al., 2021) for more efficient uncertainty modeling, showing its advantages in terms of regret minimization and computational efficiency.

Despite their promising results, we argue that neural networks may not necessarily be the most appropriate models for contextual bandits. For instance, even with epistemic neural networks, training is still resource-intensive. On the other hand, there is a large body of work that shows the effectiveness of tree ensemble methods for regression and classification tasks in the standard supervised learning setting (Borisov et al., 2022; Gorishniy et al., 2021; Grinsztajn et al., 2022), and therefore we believe that their extension to contextual bandits can have great potential.
One less directly related method is *Ensemble Sampling* (Lu & Van Roy, 2017), where the ensembles are solely used for uncertainty estimation and not for modeling the reward functions in bandits. Uncertainty estimates in Ensemble Sampling and similar methods focus on uncertainties in the parameters of the models used to approximate the underlying processes being sampled. In contrast, we sample directly from estimated distributions of the expected rewards given contexts. Our approach aligns more closely with sampling-based bandit methods like LinTS and NeuralTS.

## 2 Background

In this section, we describe the multi-armed bandit problem and its extensions relevant to our work.

## 2.1 Multi-Armed Bandit Problem

The multi-armed bandit (MAB) problem is a sequential decision-making problem under uncertainty, where the decision maker, i.e., the agent, interactively learns from the environment over a horizon of $T$ time steps (interactions). At each time step $t \leq T$, the agent is presented with a set $\mathcal{A}$ of $K$ actions, commonly referred to as arms. Each action $a \in \mathcal{A}$ has an (initially) unknown reward distribution with expected value $\mu_a$. The objective is to maximize the cumulative reward over the time horizon $T$, or more commonly (and equivalently) to minimize the cumulative regret, which is calculated as

$$R(T)\triangleq\sum_{t=1}^{T}(\mu_{a^{*}}-\mu_{a_{t}}),\tag{1}$$

where $\mu_{a^*}$ is the expected reward of the optimal arm $a^*$, and $a_t$ is the arm selected at time step $t$. It is important to note that the agent only receives feedback from the selected arm / action. More specifically, when selecting $a_t$, the agent receives a reward $r_{t,a_t}$ which is sampled from the underlying reward distribution. As previously mentioned, the agent's objective is to minimize the cumulative regret, not to learn the complete reward function for each arm. Thus, the agent has to strike a delicate balance between exploration and exploitation.

## 2.2 Contextual Bandit Problem

The contextual multi-armed bandit problem is an extension of the classical MAB problem described in Section 2.1. The agent is, at each time step $t$, presented with a context vector $\mathbf{x}_{t,a} \in \mathbb{R}^d$ for each action $a \in \mathcal{A}$. For example, a recommendation system could encode a combination of user-related and action-specific features into the context vector $\mathbf{x}_a$. Then, the expected reward of action $a$ at time $t$ is given by an (unknown) function of the context $q : \mathbb{R}^d \to \mathbb{R}$, such that $\mathbb{E}[r_{t,a}] = q(\mathbf{x}_{t,a})$. Learning to generalize the relationship between the contextual features and the expected reward is crucial for effective learning and minimizing the cumulative regret.

## 2.3 Combinatorial Bandits

Combinatorial multi-armed bandits (CMAB) (Cesa-Bianchi & Lugosi, 2012) deal with problems where at each time step $t$, a subset $\mathcal{S}_t$ (called a *super arm*) of the set of all *base arms* $\mathcal{A}$ is selected, instead of an individual arm. The reward of a super arm $\mathcal{S}$ depends on its constituent base arms. When the reward for a super arm is given as a function (e.g., sum) of feedback from its individual base arms (observable if and only if the super arm is selected), it is referred to as *semi-bandit feedback*. As previously, the evaluation metric is the cumulative regret, and for the combinatorial semi-bandit setting (with the sum of base arm feedback as super arm reward) it can be calculated as

$$R(T)\triangleq\sum_{t=1}^{T}\left(\sum_{i\in\mathcal{S}_{t}^{*}}\mu_{i}-\sum_{j\in\mathcal{S}_{t}}\mu_{j}\right),\tag{2}$$

where $\mathcal{S}_t^*$ denotes the optimal super arm at time $t$.
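To make these two regret definitions concrete, the short sketch below computes Equations (1) and (2) from logged expected rewards; the function and variable names are ours and purely illustrative.

```python
def cumulative_regret(mu_opt, mu_chosen):
    """Equation (1): mu_opt[t] is the optimal arm's expected reward at step t,
    mu_chosen[t] the expected reward of the arm actually played."""
    return sum(o - c for o, c in zip(mu_opt, mu_chosen))

def combinatorial_regret(opt_super_arms, chosen_super_arms, mu):
    """Equation (2): super arms are sets of base arms, and mu maps a base arm
    to its expected feedback; super-arm rewards are sums over base arms."""
    return sum(sum(mu[i] for i in opt) - sum(mu[j] for j in chosen)
               for opt, chosen in zip(opt_super_arms, chosen_super_arms))
```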
## 3 Proposed Algorithms

Machine learning models based on decision trees have consistently demonstrated solid performance across various supervised learning tasks (Borisov et al., 2022; Gorishniy et al., 2021; Grinsztajn et al., 2022). An up-to-date overview of tree ensemble methods is provided by Blockeel et al. (2023), where several advantages over other techniques are discussed. For instance, they are known to learn fast from small sets of examples, while simultaneously possessing the capability of handling large data sets, and are computationally efficient to train. Despite the potential benefits, these types of models have not been studied much for contextual bandits.

To the best of our knowledge, there is no previously known work that in a principled way combines tree ensemble models with UCB, which is one of the most effective strategies known for handling the exploration-exploitation dilemma. Further, works that combine tree ensemble methods with Thompson Sampling are limited. We address this research gap and introduce a novel approach for the contextual MAB problem that combines any tree ensemble method with a natural adaptation of these two prominent exploration schemes. Within our framework, we empirically investigate the XGBoost and random forest algorithms and demonstrate their auspicious performance on several benchmark tasks, as well as a combinatorial contextual MAB problem for efficient navigation over a stochastic road network, in Section 4. However, we emphasize that our bandit algorithms are sufficiently general to be employed with any decision tree ensemble.

## 3.1 Tree-Based Weak Learners

The underlying concept of decision tree ensembles is to combine several *weak learners*. Each standalone tree is sufficiently expressive to learn simple relationships and patterns in the data, yet simple enough to prevent overfitting to noise. By combining the relatively poor predictions of a large number of weak learners, the errors they all make individually tend to cancel out, while the true underlying signal is enhanced.

In our notation, a tree ensemble regressor $f$ is a collection of $N$ decision trees $\{f_n\}$, where the trees are collectively fitted to a set of training samples $\{(\mathbf{x}, r)\}$. Each sample consists of a context vector $\mathbf{x}$ and a target value $r$. The fitting procedure can differ between different types of tree ensemble methods, which are often divided into two distinct categories depending on which fitting procedure is employed, i.e., *bagging* or *boosting*. In both variants, the prediction of each tree is determined by assigning the training samples to distinct leaves, where all samples in the same leaf resemble each other in some way, based on the features attended to in the splitting criteria associated with the ancestor nodes of the leaf.

Hence, when fitted to the data, each tree $f_n$ accepts an input vector $\mathbf{x}$ which it assigns to one of its leaves. Every leaf is associated with three values that depend on the samples from the training data assigned to that particular leaf. We denote the output of the $n$th tree by $o_n$, which is this tree's contribution to the total ensemble prediction and depends on which leaf the input vector $\mathbf{x}$ is assigned to. The number of training samples assigned to the same leaf as $\mathbf{x}$ in tree $n$ is denoted by $c_n$. Finally, the sample variance in the output proposed by the individual training samples assigned to the leaf is denoted $s_n^2$. This value represents the uncertainty in the output $o_n$ associated with the leaf and depends not only on the training samples, but also on the particular tree ensemble method used. We elaborate on the calculations of $s_n^2$ for two types of tree ensembles in Sections 3.6 and 3.7. Note that, for the sample variance to exist, we must have at least two samples. This is ensured by requiring the tree to be built such that no leaf has fewer than two training samples assigned to it.

The tree ensemble constructs its total target value prediction $p$ by summing up all of the $N$ individual trees' outputs $o_n$:

$$p(\mathbf{x})=\sum_{i=1}^{N}o_{i}(\mathbf{x}).\tag{3}$$

The outputs $o_n$, in turn, are averages of the $c_n$ outputs suggested by each training sample assigned to the same leaf. These suggested outputs are based on the training samples' target values, and how they are calculated depends on the particular type of tree ensemble used. We always consider the full tree ensemble target value prediction as a sum of the individual tree predictions $o_n$ to unify the notation across all kinds of tree ensembles. Hence, for averaging models, such as random forests, we assume that the leaf values $o_n$ are already weighted by $1/N$.
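To make the quantities $o_n$, $s_n^2$, and $c_n$ concrete, the following is a minimal sketch of how they could be extracted from a fitted scikit-learn random forest (the XGBoost case additionally requires staged predictions, as described in Section 3.6). The helper is illustrative rather than an exact implementation of our method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def leaf_statistics(forest, X_train, y_train, x):
    """For each tree, return (o_n, s_n^2, c_n) for the leaf that x falls into.

    Targets are divided by the number of trees N so that the ensemble
    prediction is the sum of the per-tree outputs o_n (Equation 3).
    """
    N = len(forest.estimators_)
    train_leaves = forest.apply(X_train)              # (n_samples, N) leaf ids
    query_leaves = forest.apply(x.reshape(1, -1))[0]  # (N,) leaf ids for x
    stats = []
    for n in range(N):
        mask = train_leaves[:, n] == query_leaves[n]
        contrib = y_train[mask] / N  # per-sample suggested outputs
        var = contrib.var(ddof=1) if mask.sum() > 1 else 0.0
        stats.append((contrib.mean(), var, int(mask.sum())))
    return stats

# Toy usage on random data (illustrative only).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
rf = RandomForestRegressor(n_estimators=10, min_samples_leaf=2).fit(X, y)
print(leaf_statistics(rf, X, y, X[0])[:2])
```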
## 3.2 Uncertainty Modeling

For bandit problems, keeping track of the uncertainty in the outcomes of different actions is crucial for guiding the decision-making process. In order to form estimates of the uncertainty in the final prediction of the tree ensembles we employ, we make a few assumptions. Firstly, we assume that the output $o_n$ of the $n$th decision tree, given a context $\mathbf{x}$, is an arithmetic average of $c_n$ independent and identically distributed random variables with finite mean $\mu_n$ and variance $\sigma_n^2$. By this assumption, $o_n$ is itself a random variable with mean $\mu_n$ and variance $\sigma_n^2 / c_n$. Also, we can approximate $\mu_n$ and $\sigma_n^2$ by the sample mean and variance, respectively, based on the training samples assigned to the leaf. Moreover, the central limit theorem (CLT) (see e.g., Dodge, 2008) ensures that, as $c_n \to \infty$, the distribution of $o_n$ tends to a Gaussian distribution. Secondly, if we also assume the output of each one of the $N$ trees in the ensemble to be independent of every other tree, we have that the total tree ensemble's target value prediction, which is a sum of $N$ random variables, is normally distributed with mean $\mu = \sum_{i=1}^{N} \mu_i$ and variance $\sigma^2 = \sum_{i=1}^{N} \sigma_i^2$.

We acknowledge that these assumptions may not always hold in practice, but they act here as motivation for the design of our proposed algorithms. Specifically, they entail a straightforward way of accumulating the variances of the individual tree contributions to obtain an uncertainty estimate in the total reward prediction, allowing us to construct efficient exploration strategies. In the following two subsections, we present our approach to doing so with UCB and TS methods, respectively.

## 3.3 Tree Ensemble Upper Confidence Bound

UCB methods act under the principle of *optimism in the face of uncertainty*, and have established themselves as one of the most prominent approaches to handling exploration in bandit problems. A classic example is the UCB1 algorithm (Auer et al., 2002), which builds confidence bounds around the expected reward from each arm depending on the fraction of times they have been played, and for which there are proven upper bounds on the expected regret.
One disadvantage of the method, however, is that it does not take the variance of the observed rewards into account, which may lead to a sub-optimal exploration strategy in practice. In light of this, Auer et al. (2002) further proposed the UCB1-Tuned and UCB1-Normal algorithms, which extend UCB1 by including variance estimates, and demonstrate better performance experimentally. The main difference between the two extended versions is that UCB1-Tuned assumes Bernoulli-distributed rewards, while UCB1-Normal is constructed for Gaussian rewards. At each time step $t$, UCB1-Normal selects the arm $a$ with the maximal upper confidence bound $U_{t,a}$, calculated as

$$U_{t,a}\leftarrow\tilde{\mu}_{t,a}+\sqrt{16\,\tilde{\sigma}_{t,a}^{2}\,\frac{\ln(t-1)}{m_{t,a}}},\tag{4}$$

where $m_{t,a}$ is the number of times arm $a$ has been played so far, and $\tilde{\mu}_{t,a}$ and $\tilde{\sigma}_{t,a}^2$ are the sample mean and variance of the corresponding observed rewards, respectively. For the sample variance to be defined for all arms, they must have been played at least twice first.

By the assumptions we have made on the tree ensembles, the predictions of the expected rewards will be approximately Gaussian. Therefore, we propose an algorithm called Tree Ensemble Upper Confidence Bound (TEUCB) in Algorithm 1, which draws inspiration from UCB1-Normal, and is suitable for both Gaussian and Bernoulli bandits. As seen in lines 27 and 31 of Algorithm 1, the selection rule of TEUCB closely resembles that of UCB1-Normal, but is constructed specifically for contextual MABs where the arms available in each time step are characterized by their context vectors. Therefore, TEUCB considers each individual contribution from all samples in a leaf as a sample of that leaf's output distribution, which is demonstrated in line 14. The total prediction for a context $\mathbf{x}_{t,a}$ is subsequently computed as the sum of sampled contributions from each leaf that $\mathbf{x}_{t,a}$ is assigned to in the ensemble (lines 16, 17, 18).

Beyond yielding better performance in experiments, there are additional benefits associated with considering the sample variances of rewards in the TEUCB method, as discussed regarding UCB1-Normal. Since each tree in the ensemble may be given a different weight, certain trees can contribute more than others to the final reward prediction. This should be accounted for in the total uncertainty as well, since it can otherwise be dominated by high uncertainty estimates from trees of low importance to the prediction. Accounting for the sample variances of the proposed tree contributions individually is a way of preventing such behavior.
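Putting the pieces together, a minimal sketch of the per-arm computation in lines 10-24 of Algorithm 1 is given below, reusing per-leaf statistics of the form produced by the sketch in Section 3.1. The function name and the NumPy-based Gaussian draw are our own illustrative choices.

```python
import math
import numpy as np

def arm_score(leaf_stats, t, nu=1.0, method="UCB", rng=np.random.default_rng()):
    """Score one arm from its per-tree leaf statistics (o_n, s_n^2, c_n)."""
    # Lines 10-18: aggregate into the arm's mean, variance, and sample count.
    mu = sum(o for o, _, _ in leaf_stats)
    var = sum(s2 / c for _, s2, c in leaf_stats)
    count = sum(c for _, _, c in leaf_stats)
    if method == "UCB":
        # Line 21: optimistic upper confidence bound (requires t >= 2).
        return mu + math.sqrt(nu**2 * var * math.log(t - 1) / count)
    # Line 23: Thompson Sampling draw from N(mu, nu^2 * var).
    return rng.normal(mu, nu * math.sqrt(var))
```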
## 3.4 Tree Ensemble Thompson Sampling

The way in which uncertainties are estimated by TEUCB can also be incorporated into Thompson Sampling (Thompson, 1933). In its traditional form, Thompson Sampling selects arms by sampling them from the posterior distribution describing each arm's probability of being optimal, given some (known or assumed) prior distribution and previously observed rewards. This can be achieved by sampling mean rewards from each arm's posterior distribution over expected rewards, and playing the arm with the largest sampled mean reward. Using a TS-based approach, we propose to estimate the uncertainty in the predicted reward given an arm's context in the same way as in the case with UCB. However, in line 23 of Algorithm 1, instead of constructing confidence bounds, we sample mean rewards from the resulting distributions (here, interpreted as posterior distributions). Hence, the main difference from TEUCB is how the uncertainty is used to guide exploration. Due to its similarity with standard Thompson Sampling, we call this algorithm *Tree Ensemble Thompson Sampling* (TETS).

**Algorithm 1** Tree Ensemble Upper Confidence Bound / Tree Ensemble Thompson Sampling

```
 1: Input: Number of rounds T, number of initial random selection rounds T_I,
    number of trees in ensemble N, exploration factor ν,
    tree ensemble regressor f, bandit method (UCB or TS).
 2: for t = 1 to T_I do
 3:   Randomly select and play an arm a_t
 4:   Observe context x_{t,a_t} and reward r_{t,a_t}
 5: end for
 6: for t = T_I + 1 to T do
 7:   Fit tree ensemble f to previously observed context-reward pairs {(x_{i,a_i}, r_{i,a_i})}_{i=1}^{t-1}
 8:   Observe current contexts {x_{t,a}}_{a=1}^{K}
 9:   for a = 1 to K do
10:     Initialize arm parameters: μ̃_{t,a} ← 0, σ̃²_{t,a} ← 0, c_{t,a} ← 0
11:     for n = 1 to N do
12:       Assign leaf values:
13:         (o_{t,a,n}, s_{t,a,n}, c_{t,a,n}) ← f_n(x_{t,a})
14:       Update leaf parameters:
15:         σ̃²_{t,a,n} ← s²_{t,a,n} / c_{t,a,n}
16:         μ̃_{t,a} ← μ̃_{t,a} + o_{t,a,n}
17:         σ̃²_{t,a} ← σ̃²_{t,a} + σ̃²_{t,a,n}
18:         c_{t,a} ← c_{t,a} + c_{t,a,n}
19:     end for
20:     if bandit method is UCB then
21:       U_{t,a} ← μ̃_{t,a} + sqrt(ν² σ̃²_{t,a} ln(t−1) / c_{t,a})
22:     else if bandit method is TS then
23:       r̃_{t,a} ∼ N(μ̃_{t,a}, ν² σ̃²_{t,a})
24:     end if
25:   end for
26:   if bandit method is UCB then
27:     a_t ← argmax_a U_{t,a}
28:   else if bandit method is TS then
29:     a_t ← argmax_a r̃_{t,a}
30:   end if
31:   Play a_t and observe reward r_{t,a_t}
32: end for
```

Regular Thompson Sampling is inherently a Bayesian approach. However, the framework may be extended to include the frequentist perspective as well, and the two views have been unified in a larger set of algorithms called *Generalized Thompson Sampling* (Li, 2013). It should be noted that TETS, as we present it here, is prior-free and should fall under the umbrella of Generalized TS. Although not the focus of this work, the algorithm could be modified to explicitly incorporate and utilize prior beliefs, making it Bayesian in the traditional sense.
## 3.5 Extension To Combinatorial Bandits

The framework outlined in Algorithm 1 is formulated to address the standard contextual MAB problem. However, TEUCB and TETS can easily be extended to the combinatorial semi-bandit setting. The main difference is in the way arms are selected, i.e., super arms instead of individual arms. In Algorithm 1, this corresponds to modifying line 27 to

$$\mathcal{S}_{t}\leftarrow\operatorname{argmax}_{\mathcal{S}}\sum_{a\in\mathcal{S}}U_{t,a},\tag{5}$$

and line 29 to

$$\mathcal{S}_{t}\leftarrow\operatorname{argmax}_{\mathcal{S}}\sum_{a\in\mathcal{S}}\tilde{r}_{t,a}.\tag{6}$$

The selected super arm $\mathcal{S}_t$ at time $t$ is subsequently played in line 31. In this setting, the set of observed context-reward pairs would include all rewards received for each base arm in the selected super arms individually, which are generally more than one per time step.

## 3.6 Adaptation To XGBoost

When using XGBoost regression models, we can extract $o_n$ directly from the individual leaves in the constructed ensemble. The sample variance $s_n^2$ is not accessible directly, but we can easily calculate it. As a GBDT method, XGBoost calculates its outputs during the training procedure as the average difference between the target value and the value predicted from the collection of preceding trees in the ensemble, multiplied with a learning rate. Hence, the trees do not have predetermined weights as in, e.g., random forests. Instead, the relative size of the contributions of an assigned leaf to the final prediction depends on the particular paths that are traversed through the other trees, which may be different for every sample we observe in a leaf. Therefore, we cannot estimate $s_n^2$ from the variance in target values directly.

However, XGBoost is augmented with many useful features, one of them being staged predictions (XGBoost Developers, 2022b). This means that we can propagate the predicted outputs of sub-ensembles up to a certain tree on all data samples and cache these outputs. After recording these, we include the next tree in the prediction as well, without having to start over. By utilizing this technique, $s_n^2$ is easily estimated from the previously observed samples assigned to a particular leaf. Furthermore, $c_n$ is simply the number of such samples. This gives us all the pieces of the puzzle needed to apply XGBoost to the TEUCB and TETS algorithms. In order to reduce bias when calculating $o_n$, $s_n^2$, and $c_n$, one may split the previously observed samples into two distinct data sets: one which is used for building the trees, and a second for the value estimations. A sketch of the staged-prediction computation is given below.
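The following is a minimal sketch of how per-tree contributions could be obtained via staged predictions with the XGBoost library; it assumes a trained `xgb.Booster` and is illustrative rather than an exact implementation of our method.

```python
import numpy as np
import xgboost as xgb

def per_tree_contributions(bst, X):
    """Per-tree outputs o_n for each sample, via staged predictions.

    The contribution of tree n is the difference between the staged
    predictions using the first n and the first n - 1 trees.
    """
    dmat = xgb.DMatrix(X)
    n_trees = bst.num_boosted_rounds()
    staged = np.stack([bst.predict(dmat, iteration_range=(0, n))
                       for n in range(1, n_trees + 1)], axis=1)
    # The first column absorbs the global base score; later columns are deltas.
    return np.concatenate([staged[:, :1], np.diff(staged, axis=1)], axis=1)
```

Grouping these contributions by the leaf assignments returned by `bst.predict(dmat, pred_leaf=True)` then yields the per-leaf quantities $o_n$, $s_n^2$, and $c_n$.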
## 3.7 Adaptation To Random Forest

Similar to XGBoost, random forests can also be incorporated into TEUCB and TETS. This is even less complicated in the case of random forests, as the trees in a random forest do not depend on the predictions made by any of the other trees. Therefore, $s_n^2$ can be directly estimated from the variance in the target values, taking the tree's weight of $1/N$ into account. Hence, all the quantities of interest (i.e., $o_n$, $s_n^2$, and $c_n$) are available from the trees of a random forest by looking at which observed samples are assigned to the terminal nodes. As with XGBoost, one may consider dividing the previously observed samples into different subsets for fitting and value estimation. Alternatively, another way of handling bias with random forests is to consider only the out-of-bag samples for computing $o_n$, $s_n^2$, and $c_n$. The random forest algorithm employs the concept of bagging (resampling with replacement) when building trees, which means roughly one-third of the samples will be omitted in the construction of any particular tree. Therefore, random forests yield a natural way of providing an independent set of data samples for model evaluation without limiting the size of the training data set.

## 4 Experiments

In this section, we evaluate TEUCB and TETS, and compare them against several well-known algorithms for solving the contextual bandit problem. For the implementations, we use the XGBoost library (Chen & Guestrin, 2016) to build gradient-boosted decision trees as our tree ensembles, as well as the version of random forest found in the scikit-learn library (Pedregosa et al., 2011). We first evaluate the algorithms on benchmark datasets. We then study them in the combinatorial contextual setting for solving the real-world problem of navigation over stochastic road networks in Luxembourg. The agent's cumulative regret is calculated by comparing the chosen path to that of an oracle agent, which knows the expected travel duration for each edge given the current time of day. All the algorithms are evaluated w.r.t. cumulative regret averaged over ten random seeds, and we report the average results.

## 4.1 Setup

We use the same procedure as Zhang et al. (2021) to transform a classification task into a contextual MAB problem. In each time step, the environment provides a context vector belonging to an unknown class. For the agent, each class corresponds to an arm, and the goal is to select the arm corresponding to the correct class. A correct selection results in a reward of 1, while an incorrect prediction results in a reward of 0. Since only the context of a single (unknown) arm is provided, the agent must have a procedure for encoding it differently for each arm. This is generally done through positional encoding, where in the case of $K$ arms and $d$ data features, we form $K$ context vectors (each of dimensionality $Kd$) such that $\mathbf{x}_1 = (\mathbf{x}; \mathbf{0}; \cdots; \mathbf{0})$, $\mathbf{x}_2 = (\mathbf{0}; \mathbf{x}; \cdots; \mathbf{0})$, $\cdots$, $\mathbf{x}_K = (\mathbf{0}; \mathbf{0}; \cdots; \mathbf{x})$; a small illustrative sketch of this encoding is given at the end of this subsection. As no features are shared between any of the arms, this setup is an instance of a disjoint model, as opposed to a hybrid model (Li et al., 2010). In the experiments in Section 4.2, we use the disjoint model for LinUCB, LinTS, NeuralUCB, and NeuralTS. However, due to the nature of discrete splitting by decision trees, we can encode the context more effectively using a hybrid model for tree-based methods. There, we simply append the corresponding label as a single character for each arm at the beginning of the context: $\mathbf{x}_k = (k, \mathbf{x})$, and mix numeric and categorical features in the context vectors. For LinUCB, LinTS, NeuralUCB, and NeuralTS, categorical features are one-hot-encoded.

In these experiments, we set the time horizon to 10,000 time steps (except for the *Mushroom* dataset, where a horizon of 8,124 is sufficient to observe it entirely). For each of those time steps, the agents are presented with a feature vector $\mathbf{x}$ drawn randomly without replacement, which they encode for the different arms as described above. Subsequently, the agents predict the rewards for the individual arms according to the algorithms used. TEUCB and TETS (Algorithm 1) are described in Section 3. The TreeBootstrap (Elmachtoub et al., 2017), NeuralUCB (Zhou et al., 2020), NeuralTS (Zhang et al., 2021), LinUCB (Li et al., 2010), and LinTS (Agrawal & Goyal, 2013) algorithms are implemented according to their respective references.
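The sketch below illustrates the disjoint positional encoding described above; it is our own illustrative helper, not part of the benchmark code.

```python
import numpy as np

def disjoint_contexts(x, K):
    """Build K context vectors of dimension K*d from a feature vector x of
    dimension d, placing x in the block of the corresponding arm."""
    d = len(x)
    contexts = np.zeros((K, K * d))
    for k in range(K):
        contexts[k, k * d:(k + 1) * d] = x
    return contexts

# With d = 2 features and K = 3 arms, each context vector has 6 entries.
print(disjoint_contexts(np.array([0.5, -1.0]), 3))
```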
## 4.2 Contextual Bandits

For this experiment, the data is collected from the UCI machine learning repository (Kelly et al., n.d.); an overview of the selected datasets is given in Table 1. *Magic* (Bock, 2007), which is short for Magic Gamma Telescope, contains only numerical features. The same is true for *Shuttle* (Statlog (Shuttle), n.d.), also known as *Statlog*. All features of *Mushroom* (Mushroom, 1987) are categorical, and *Adult* (Becker & Kohavi, 1996) has a balanced distribution between numerical and categorical features.

Table 1: Datasets overview.

| Dataset  | Features | Instances | Classes |
|----------|----------|-----------|---------|
| Adult    | 14       | 48,842    | 2       |
| Magic    | 10       | 19,020    | 2       |
| Mushroom | 22       | 8,124     | 2       |
| Shuttle  | 9        | 58,000    | 7       |

## 4.2.1 Implementation

For all datasets in Table 1, the neural network agents use a network architecture of one hidden layer with 100 neurons. Regarding NeuralUCB, NeuralTS, LinUCB, and LinTS, we search the same sets of hyper-parameters as described by Zhang et al. (2021), who run these agents on the same datasets, and select the parameters with the best performance. One difference in our experiments is that we add a dropout probability of 0.2 when training the neural networks, which seems to have a positive impact on performance. Furthermore, we let each of the networks train for 10 epochs. For all tree ensemble bandits, we use XGBoost and random forest regressors with MSE loss and ensembles of 100 trees.

Initial test runs revealed that the agents are robust w.r.t. different choices of hyper-parameters for XGBoost (XGBoost Developers, 2022a), and consequently, we settle on the default values. The only exception is the maximum tree depth, which we set to 10. The same maximum tree depth is used for the implementation of random forests. For the exploration factor of TEUCB and TETS, we select $\nu = 1$. As for the decision tree (DT) algorithm used for TreeBootstrap, we use the scikit-learn library to employ CART (Breiman et al., 2017), which is free of tunable parameters. This is similar to the implementation presented in the original work on TreeBootstrap (Elmachtoub et al., 2017).

Figure 1: Comparison of contextual MAB algorithms on UCI datasets. Figures 1a, 1b, 1c, and 1d share the same color scheme for consistency, but the legend is only presented in 1a for improved visibility.

Another point we noted during our initial test runs was that accurate predictions seem to be more important than the less biased estimates obtained by splitting the previously observed samples into distinct data sets for ensemble fitting and value calculations, respectively. Therefore, we use all observed context-reward pairs for both purposes in all experiments with TEUCB and TETS.

In order to avoid unnecessary computations and speed up the runs, TEUCB and TETS do not build new tree ensembles from scratch at each time step. Instead, they only consider which leaves the latest observation is assigned to in each tree and update the corresponding $o_n$, $s_n^2$, and $c_n$ parameters. Initially, when new observed samples may have a relatively large effect on the optimal tree ensemble, re-building happens more frequently. However, as the effect that the samples are expected to have on the tree architectures degrades over time, less re-building takes place. More precisely, re-building happens when the function $[8\ln(t)]$ increases by one compared to the previous time step.
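A small sketch of this re-building schedule follows, with the bracket $[\cdot]$ read as rounding down; this reading is our assumption.

```python
import math

def should_rebuild(t: int) -> bool:
    """Re-build when floor(8 * ln(t)) increases relative to step t - 1."""
    return math.floor(8 * math.log(t)) > math.floor(8 * math.log(t - 1))

# Re-builds occur frequently for small t and become progressively sparser,
# since 8 * ln(t) grows ever more slowly with t.
rebuild_steps = [t for t in range(2, 10_000) if should_rebuild(t)]
```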
The experimental results are obtained using an NVIDIA A40 GPU for NeuralUCB and NeuralTS, and a desktop CPU for the other agents. To illustrate the differences in computational performance between the agents, we report the runtime of the neural agents on the same CPU as well.

## 4.2.2 Experimental Results

The results of the experiments described above are presented in Fig. 1 and Table 2. We observe that the tree ensemble methods consistently outperform all other models by a large margin, both with XGBoost and random forest, and yield significantly lower cumulative regrets. Furthermore, TEUCB and TETS tend to perform better than TreeBootstrap on the Adult, Magic, and Mushroom datasets. On the Shuttle dataset, with seven different arms, the TreeBootstrap agents exhibit comparable and even slightly better average performance in terms of regret minimization. It appears that TreeBootstrap's assignment of a separate tree-based model to each arm is beneficial for this problem. However, this comes at the expense of having to fit multiple models in each time step, which is time-consuming and computationally demanding. Furthermore, its inability to generalize the reward predictions of distinct but related arms may in some cases be disadvantageous, especially for larger arm sets.

Table 2: Average regret accumulated by agents after the final step, with standard deviation, and the number of hours required to run one experiment on CPU. Runtime in hours on GPU is also reported where it applies.

| Method                | Metric    | Adult         | Magic         | Mushroom      | Shuttle       |
|-----------------------|-----------|---------------|---------------|---------------|---------------|
| TEUCB-XGBoost         | Mean ± SD | 1532.5 ± 27.8 | 1685.7 ± 27.7 | 69.2 ± 8.6    | 171.4 ± 40.0  |
|                       | CPU (h)   | 0.13          | 0.14          | 0.09          | 0.08          |
| TETS-XGBoost          | Mean ± SD | 1548.0 ± 30.9 | 1703.2 ± 25.2 | 78.6 ± 11.8   | 165.7 ± 31.2  |
|                       | CPU (h)   | 0.13          | 0.14          | 0.09          | 0.08          |
| TEUCB-RF              | Mean ± SD | 1550.6 ± 32.9 | 1678.5 ± 31.3 | 58.2 ± 10.9   | 190.8 ± 68.9  |
|                       | CPU (h)   | 0.12          | 0.13          | 0.10          | 0.12          |
| TETS-RF               | Mean ± SD | 1566.3 ± 34.2 | 1688.6 ± 38.3 | 57.7 ± 8.1    | 166.3 ± 37.1  |
|                       | CPU (h)   | 0.12          | 0.13          | 0.10          | 0.12          |
| NeuralUCB             | Mean ± SD | 1974.0 ± 82.0 | 2155.2 ± 39.5 | 160.8 ± 68.9  | 862.4 ± 428.8 |
|                       | CPU (h)   | 10            | 10            | 7.4           | 8.1           |
|                       | GPU (h)   | 1.2           | 0.68          | 0.37          | 1.4           |
| NeuralTS              | Mean ± SD | 1929.7 ± 68.3 | 2209.5 ± 62.2 | 243.4 ± 168.9 | 853.8 ± 340.3 |
|                       | CPU (h)   | 10            | 10            | 7.4           | 8.1           |
|                       | GPU (h)   | 1.2           | 0.68          | 0.37          | 1.4           |
| LinUCB                | Mean ± SD | 2109.8 ± 51.8 | 2598.4 ± 25.4 | 666.2 ± 14.7  | 1156.3 ± 41.3 |
|                       | CPU (h)   | 0.02          | 0.02          | 0.04          | 0.11          |
| LinTS                 | Mean ± SD | 2253.2 ± 37.7 | 2703.8 ± 30.4 | 892.2 ± 18.6  | 1263.0 ± 22.1 |
|                       | CPU (h)   | 0.02          | 0.02          | 0.04          | 0.11          |
| TreeBootstrap-XGBoost | Mean ± SD | 1693.2 ± 18.8 | 1836.1 ± 36.7 | 91.6 ± 5.6    | 164.1 ± 24.4  |
|                       | CPU (h)   | 0.97          | 1.2           | 0.10          | 0.27          |
| TreeBootstrap-RF      | Mean ± SD | 1584.3 ± 32.5 | 1722.9 ± 28.0 | 82.5 ± 3.3    | 161.9 ± 29.2  |
|                       | CPU (h)   | 3.8           | 4.2           | 1.8           | 4.2           |
| TreeBootstrap-DT      | Mean ± SD | 2187.2 ± 57.1 | 2473.3 ± 35.3 | 168.9 ± 11.0  | 216.0 ± 22.9  |
|                       | CPU (h)   | 0.08          | 0.09          | 0.04          | 0.08          |

Comparing TEUCB vs. TETS, and XGBoost vs. random forest (i.e., the different design choices within our framework), there is no clear winner, as they all tend to perform comparably (and very effectively) on the different datasets. In addition to regret minimization, the CPU experiments indicate that TEUCB and TETS are significantly more efficient than their neural counterparts from a computational perspective. Notably, LinUCB, LinTS, and TreeBootstrap-DT are the most efficient methods in terms of computation.
However, they do not minimize regret as effectively as most other agents in our data sets.

## 4.3 Combinatorial Contextual Bandits

In this section, we investigate the combinatorial contextual bandit methods on a real-world application, where we study two scenarios corresponding to performing the most efficient navigation over the real-world road network of Luxembourg. This problem is crucial with the emergence of electric vehicles to mitigate the so-called range anxiety. Similar navigation problems have recently been studied from a CMAB perspective, but often without contextual information (Åkerblom et al., 2023) or limited to neural bandit methods (Hoseini et al., 2022). CMAB methods are well-suited to the navigation problem since the traversal time of each road segment can be highly stochastic and dependent on local factors (e.g., road works, traffic congestion, stop lights) about which knowledge may be gathered through sequential interactions with the environment.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

(a) Cumulative regret on paths problem instance 1 (b) Cumulative regret on paths problem instance 2

Figure 2: Experimental results on real-world road network navigation in Luxembourg. Fig. 2b shares the legend provided in Fig. 2a.

## 4.3.1 Implementation

We model the road network of Luxembourg via a graph G(V, E), with |V| = 2,247 vertices and |E| = 5,651 edges. The vertices represent intersections in the road network, and the edges represent individual road segments connecting the intersections. In this scenario, edges correspond to base arms, and paths (i.e., ordered sequences of edges) correspond to super arms. Each vertex has a coordinate consisting of longitude, latitude, and altitude values. For all edges e ∈ E, the contextual vector $x_e$ describes each road segment in the network. The agent is presented with a vector containing contextual data according to Table 3.

Edge traversal times have been collected using the Luxembourg SUMO Traffic (LuST) simulation scenario (Codeca et al., 2015). The recorded edge traversal times are used to form kernel density estimators (KDEs) (Weglarczyk, 2018) for each edge. If an edge does not contain any recorded traversals, the expected traversal time is set to the length of the edge divided by the speed limit. At each time step t, the time of day is randomly sampled and used for updating the expected travel times of all edges and the corresponding KDEs. We generate edge-specific feedback by individually sampling travel times from the KDE of each edge on the chosen path.

Table 3: The variables included in the contextual vector describing each edge in the graph, presented to the agent when navigating in the road network.

| Variable | Description |
|---|---|
| $x$ | Start position along x-axis. |
| $y$ | Start position along y-axis. |
| $z$ | Start position along z-axis. |
| $x'$ | End position along x-axis. |
| $y'$ | End position along y-axis. |
| $z'$ | End position along z-axis. |
| $\sqrt{(x - x')^2}$ | Euclidean distance along x-axis. |
| $\sqrt{(y - y')^2}$ | Euclidean distance along y-axis. |
| $\sqrt{(z - z')^2}$ | Euclidean distance along z-axis. |
| speed_limit | Maximum speed limit. |
| stop | A boolean if edge includes a stop. |
| time | The current time of day. |

As the graph contains more than 5,000 edges, the agents need to learn how the contextual features impact the expected travel time; a sketch of the resulting path-selection step is given below.
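As a rough illustration of this super-arm selection, the following sketch (using the networkx library; `predict_time` is a hypothetical stand-in for the fitted regressor applied to the features of Table 3) converts per-edge travel-time predictions into a path via Dijkstra's algorithm. For TEUCB, the weights would instead be optimistic (lower-bound) travel-time estimates, and for TETS, posterior samples.

```python
import networkx as nx

def select_super_arm(G: nx.DiGraph, predict_time, source, target) -> list:
    """Choose the path (super arm) minimizing predicted total travel time.

    Each edge stores its contextual vector under data["context"]; the
    predicted traversal time becomes the edge weight used by Dijkstra.
    """
    for u, v, data in G.edges(data=True):
        data["pred_time"] = max(float(predict_time(data["context"])), 0.0)
    return nx.dijkstra_path(G, source, target, weight="pred_time")
```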
The road types in the graph are highways, arterial roads, and residential streets, with a total length of 955 km. In our experiments, we specifically study two problem instances (characterized by different start and end nodes), referred to as problem instance 1 and problem instance 2. The paths selected by the different agents during a single run are visualized for problem instance 1 in Fig. 3 (in the Appendix). An agent predicts the expected travel time for each of the edges in the graph, and then solves the shortest path problem using Dijkstra's algorithm (Dijkstra, 1959). The agents are evaluated based on the sum of the expected travel times for all edges forming the traversed path compared to the expected travel time of the optimal path $S^*$.

For this experiment, the neural agents utilize a neural network with two hidden layers, both containing 100 fully connected neurons. A dropout probability of 0.2 is used, and after a parameter search over the same sets of values as in Section 4.2.1, we set λ = 0.1 and ν = 0.001. Furthermore, the network is trained over 10 epochs with the option of early stopping if the MSE loss does not keep improving. We implement the tree ensemble bandits utilizing XGBoost and random forest regressors with maximum tree depths of 10, with all other hyper-parameters set to their default values. The number of trees is set to N = 100 for all agents with tree ensembles, and the initial random selection is set to $T_I = 10$. An exploration factor of ν = 1 is used for TEUCB and TETS, and the frequency of re-building their tree ensembles is the same as described in Section 4.2.1.

## 4.3.2 Experimental Results

Fig. 2 shows the results of the TEUCB and TETS methods along with the baselines. The road network and the agents' traversals are presented in Fig. 3 (in the Appendix). The frequency by which an edge has been traversed is indicated by the red saturation, where more traversals correspond to a higher saturation. It is worth noting that LinUCB, LinTS, and TreeBootstrap require excessive exploration as their models are edge-specific. In contrast, TETS, TEUCB, NeuralTS, and NeuralUCB train one model with the possibility of generalizing the expected travel time predictions over different edges. We observe that using both XGBoost and random forest, TEUCB and TETS significantly outperform the other methods. In this setting, if an agent can generalize well what it has learned from one arm's reward distribution to the other arms, it can effectively avoid over-exploration, as indicated by both the regret plots in Fig. 2 and the agents' specific path selections in Fig. 3. Comparing the two tree ensemble methods, XGBoost seems to perform better on the navigation task than random forest within the TEUCB and TETS frameworks.

The neural methods tend to outperform LinUCB and LinTS. Compared to TreeBootstrap, however, the advantage is less clear and seems to depend on the particular time horizon T. NeuralUCB and NeuralTS tend to learn faster, which is unsurprising since they can generalize reward predictions over different specific arms. After an initial period of heavy exploration, however, the TreeBootstrap agents appear to make good path selections more frequently, and yield cumulative regret curves that are more similar in slope to those of TEUCB and TETS. All the agents were run on a single desktop CPU, apart from NeuralUCB and NeuralTS, which were run on an NVIDIA A40 GPU.
NeuralTS and NeuralUCB tend to require highly parameterized networks and substantial hyper-parameter tuning to achieve deliberate exploration, which can make them time-consuming while running and searching for good parameter values. In terms of run-time, the experiments took about an hour on the GPU for a neural agent. It is also noticeable that the neural agents appear to be sensitive to the weight initialization of the networks, which leads to higher variance (see Fig. 2). In contrast, TETS and TEUCB achieve solid results for a large range of parameter settings, yield lower regret, and tend to run at comparable speed on a single CPU (instead of GPU). The experiments with TEUCB and TETS took about 1.5 hours with XGBoost, and 5 hours using random forest as the tree ensemble methods. The run-time of TreeBootstrap also tends to depend heavily on the particular tree model used. With a single decision tree per arm, the experiments took 0.5 hours, which is about the same as LinUCB and LinTS. Using tree ensembles, they took about 4 and 20 hours with XGBoost and random forests, respectively. ## 5 Conclusion We developed a novel framework for contextual multi-armed bandits using tree ensembles. Within this framework, we adapted the two commonly used methods for handling the exploration-exploitation dilemma: UCB and Thompson Sampling. Furthermore, we extended the framework to handle combinatorial contextual bandits, enabling more complex action selection at each time step. To demonstrate the effectiveness of the framework, we conducted experiments on benchmark datasets using the XGBoost and random forest tree ensemble methods. Additionally, we utilized it for navigation over stochastic real-world road networks. ## References Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In *Proceedings of the 30th International Conference on Machine Learning*, Proceedings of Machine Learning Research, pp. 127–135, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. Niklas Åkerblom, Yuxin Chen, and Morteza Haghir Chehreghani. Online learning of energy consumption for navigation of electric vehicles. *Artificial Intelligence*, 317:103879, 2023. doi: 10.1016/j.artint.2023.103879. Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002. ISSN 1573-0565. doi: 10.1023/A:1013689704352. Barry Becker and Ronny Kohavi. Adult. UCI Machine Learning Repository, 1996. URL https://doi.org/ 10.24432/C5XW20. Hendrik Blockeel, Laurens Devos, Benoît Frénay, Géraldin Nanfack, and Siegfried Nijssen. Decision trees: from efficient prediction to responsible ai. *Frontiers in Artificial Intelligence*, 6, 2023. ISSN 2624-8212. R. Bock. MAGIC Gamma Telescope. UCI Machine Learning Repository, 2007. URL https://doi.org/ 10.24432/C52C8B. Vadim Borisov, Tobias Leemann, Kathrin Sessler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. Deep neural networks and tabular data: A survey. *IEEE Transactions on Neural Networks and Learning* Systems, pp. 1–21, 2022. ISSN 2162-2388. doi: 10.1109/tnnls.2022.3229161. Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone. *Classification And Regression Trees*. 10 2017. ISBN 9781315139470. doi: 10.1201/9781315139470. Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. *Journal of Computer and System Sciences*, 78(5):1404–1422, 2012. ISSN 0022-0000. doi: 10.1016/j.jcss.2012.01.001. JCSS Special Issue: Cloud Computing 2011. Tianqi Chen and Carlos Guestrin. 
Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16. ACM, 2016. doi: 10.1145/2939672.2939785. Lara Codeca, Raphael Frank, and Thomas Engel. Luxembourg sumo traffic (lust) scenario: 24 hours of mobility for vehicular networking research. In *2015 IEEE Vehicular Networking Conference (VNC)*, pp. 1–8, 2015. doi: 10.1109/VNC.2015.7385539. E.W. Dijkstra. A note on two problems in connexion with graphs. *Numerische Mathematik*, 1:269–271, 1959. Yadolah Dodge. *The Concise Encyclopedia of Statistics*, pp. 66–68. Springer New York, New York, NY, 2008. ISBN 978-0-387-32833-1. doi: 10.1007/978-0-387-32833-1_50. Adam N Elmachtoub, Ryan McNellis, Sechan Oh, and Marek Petrik. A practical method for solving contextual bandit problems using decision trees. *arXiv preprint arXiv:1706.04687*, 2017. Raphaël Féraud, Robin Allesiardo, Tanguy Urvoy, and Fabrice Clérot. Random forest for the contextual bandit problem. In *Proceedings of the 19th International Conference on Artificial Intelligence and Statistics*, volume 51 of *Proceedings of Machine Learning Research*, pp. 93–101, Cadiz, Spain, 09–11 May 2016. PMLR. Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. In *Advances in Neural Information Processing Systems*, pp. 18932–18943. Curran Associates, Inc., 2021. Leo Grinsztajn, Edouard Oyallon, and Gael Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? In *Advances in Neural Information Processing Systems*, pp. 507–520. Curran Associates, Inc., 2022. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The elements of statistical learning: data mining, inference and prediction. Springer, 2 edition, 2009. Fazeleh Sadat Hoseini, Niklas Åkerblom, and Morteza Haghir Chehreghani. A contextual combinatorial semi-bandit approach to network bottleneck identification. *CoRR*, abs/2206.08144, 2022. doi: 10.48550/ ARXIV.2206.08144. Markelle Kelly, Rachel Longjohn, and Kolby Nottingham. The uci machine learning repository, n.d. Lihong Li. Generalized thompson sampling for contextual bandits. *arXiv preprint arXiv:1310.7163*, 2013. Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In *Proceedings of the 19th international conference on World wide web*. ACM, 2010. doi: 10.1145/1772690.1772758. Xiuyuan Lu and Benjamin Van Roy. Ensemble sampling. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2017. Mushroom. UCI Machine Learning Repository, 1987. URL https://doi.org/10.24432/C5959T. Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, and Benjamin Van Roy. Epistemic neural networks. *arXiv preprint arXiv:2107.08924*, 2021. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Aleksandrs Slivkins. Introduction to multi-armed bandits. Foundations and Trends® *in Machine Learning*, 12(1-2):1–286, 2019. ISSN 1935-8237. doi: 10.1561/2200000068. Statlog (Shuttle). UCI Machine Learning Repository, n.d. URL https://doi.org/10.24432/C5WS31. William R. Thompson. 
On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. *Biometrika*, 25(3/4):285–294, 1933. ISSN 00063444.

Stanislaw Weglarczyk. Kernel density estimation and its application. *ITM Web of Conferences*, 23:00037, 2018. doi: 10.1051/itmconf/20182300037.

XGBoost Developers. Xgboost documentation: Parameters. https://xgboost.readthedocs.io/en/stable/parameter.html, 2022a. [Accessed 2024-02-09].

XGBoost Developers. Xgboost documentation: Prediction. https://xgboost.readthedocs.io/en/stable/prediction.html, 2022b. [Accessed 2024-02-09].

Weitong Zhang, Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural thompson sampling. In *International Conference on Learning Representations*, 2021.

Dongruo Zhou, Lihong Li, and Quanquan Gu. Neural contextual bandits with UCB-based exploration. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 11492–11502. PMLR, 13–18 Jul 2020.

Zheqing Zhu and Benjamin Van Roy. Scalable neural contextual bandit for recommender systems. In *Proceedings of the 32nd ACM International Conference on Information and Knowledge Management*, CIKM '23, pp. 3636–3646, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400701245. doi: 10.1145/3583780.3615048.

## A Appendix

## A.1 Nomenclature

Table 4: Table of notations used in this paper.

| Notation | Description |
|---|---|
| $a$ | Arm/action |
| $a^*$ | Optimal arm |
| $a_t$ | Arm selected at time step $t$ |
| $A$ | Action space, set of all arms |
| $c_n$ | Number of training samples assigned to a leaf in regression tree $n$ |
| $f$ | Tree ensemble regressor consisting of $N$ regression trees fitted in unison |
| $f_n$ | $n$th individual tree of the ensemble |
| $m_{t,a}$ | Number of times arm $a$ has been selected up to time step $t$ |
| $\mathcal{N}(\mu, \sigma^2)$ | Normal distribution with mean $\mu$ and variance $\sigma^2$ |
| $o_n$ | Output value associated with a leaf in regression tree $n$ |
| $q$ | Function of expected reward |
| $r$ | Observed reward |
| $\tilde{r}$ | Estimated reward |
| $R$ | Regret |
| $s_n^2$ | Sample variance of the output value associated with a leaf in regression tree $n$ |
| $S$ | Super arm, a set of arms |
| $S^*$ | Optimal super arm |
| $S_t$ | Super arm selected at time step $t$ |
| $t$ | Time step |
| $U$ | Upper confidence bound |
| $x$ | Context/feature vector |
| $\mu$ | True mean |
| $\tilde{\mu}$ | Estimated mean |
| $\nu$ | Exploration factor |
| $\sigma^2$ | True variance |
| $\tilde{\sigma}^2$ | Estimated variance |
| $\{x_i\}_{i=1}^I$ | List of $I$ entries |

![15_image_0.png](15_image_0.png)

Figure 3: The road network of Luxembourg shows the trajectories of different agents for the experiment on problem instance 1, where the corresponding cumulative regret is presented in Fig. 2a.
# Multi-Objective Bayesian Optimization With Heuristic Objectives For Biomedical And Molecular Data Analysis Workflows

Alina Selega *aselega@lunenfeld.ca*
Lunenfeld-Tanenbaum Research Institute
Vector Institute

Kieran R. Campbell *kierancampbell@lunenfeld.ca*
Lunenfeld-Tanenbaum Research Institute
University of Toronto
Vector Institute
Ontario Institute for Cancer Research

Reviewed on OpenReview: *https://openreview.net/forum?id=QspAcsAyis*

## Abstract

Many practical applications require optimization of multiple, computationally expensive, and possibly competing objectives that are well-suited for multi-objective Bayesian optimization (MOBO). However, for many types of biomedical data, measures of data analysis workflow success are often heuristic and therefore it is not known *a priori* which objectives are useful. Thus, MOBO methods that return the full Pareto front may be suboptimal in these cases. Here we propose a novel MOBO method that adaptively updates the scalarization function using properties of the posterior of a multi-output Gaussian process surrogate function. This approach selects useful objectives based on a flexible set of desirable criteria, allowing the functional form of each objective to guide optimization. We demonstrate the qualitative behaviour of our method on toy data and perform proof-of-concept analyses of single-cell RNA sequencing and highly multiplexed imaging datasets for univariate input optimization.

## 1 **Introduction**

The analysis of high-dimensional biological data is often exploratory and unsupervised. For example, gene expression data may be subject to clustering algorithms to find groups representative of meaningful biological variation. For assays that profile at the patient level, these clusters may represent novel disease subtypes, while for assays at the single-cell level, they may represent novel cell types. Despite the importance of these methods, there is no "one-size-fits-all" approach to the analysis of such data. Instead, there is a myriad of different possible parameter combinations that govern these workflows and lead to variations in the results and interpretation.

For example, in the analysis of single-cell RNA-sequencing (scRNA-seq), a technology that quantifies the expression profile of all genes at single-cell resolution, a common analysis strategy is to cluster the cells to identify groups with biological significance. However, each workflow for doing so has variations with respect to data normalization, cell filtering strategies, and the choice of clustering algorithm and parameters thereof. Changes to these algorithm and parameter choices produce dramatically different results (Germain et al., 2020; Duò et al., 2018) and there is no ground truth available. This motivates an important question: how do we optimize these workflows such that the resulting exploratory analysis best reflects the underlying biology? It is important to note the presence of measurement noise in virtually all biomedical data, which can arise from the technology used for data acquisition or represent underlying biological heterogeneity.

In the adjacent field of supervised machine learning (ML), such optimization over workflows has largely been tackled from the perspective of automated ML (AutoML, He et al. (2021)).
This comprises a diverse set of methods such as Bayesian optimization (Snoek et al., 2012) and Neural Architecture Search (Elsken et al., 2018) that attempt to optimize the success of the model with respect to one or more hyperparameter settings. In this context, success is defined as the model accuracy on a held-out test set, though it can also correspond to the marginal likelihood of the data given the model and hyperparameters. However, in the context of exploratory analysis of genomic data, existing AutoML approaches face three challenges. Firstly, these workflows are almost exclusively unsupervised, meaning there is no notion of accuracy on a test set we may optimize with respect to. Secondly, the majority of methods are not generative probabilistic models (Zappia et al., 2018), so it is impossible to optimize with respect to the marginal or test likelihood. Finally, the objectives used to optimize a workflow are numerous, conflicting, and noisy, owing to the underlying noise present in the raw data, and can be highly subjective, as they are often heuristics.

This is demonstrated by attempts to benchmark clustering workflows of scRNA-seq data. As noted above, there are many parameters that must be set, e.g. which subset of genes and clustering algorithm to use, along with such parameters as resolution in the case of community detection (Germain et al., 2020). However, there is no quantitative way to choose which parameter setting is "best", and so the community turns to a number of heuristic objectives to quantify the performance of a workflow. For example, Cui et al. (2021) attempt to optimize the adjusted Rand index (ARI) with respect to expert annotations and a heuristic based around downsampling rare cell types while minimizing runtime. Germain et al. (2020) similarly consider the ARI but also the average silhouette width to maximize cluster purity. Zhang et al. (2019) consider a range of heuristics including agreement with simulated data and robustness to model misspecification.

However, given that these objectives are all heuristic and open to user preference, there is no guarantee that all of them are *useful* and have maxima that align with the meta-objective at hand, which in the above example is the ability to identify a biologically relevant population of cells. Conversely, some heuristic objectives may be *non-useful*: they are largely noisy and contribute nothing to the overall optimization problem by not aligning with a meta-objective. For example, in an Imaging Mass Cytometry experiment, which also aims to cluster cells, an antibody that quantifies the expression of a given protein may fail entirely, which would not be identified prior to data analysis. In that case, any objective that included that protein's expression would be irrelevant to the meta-objective, but this would not be known up front.

This motivates the central question we attempt to address: how can we adapt AutoML approaches to optimize unsupervised workflows over multiple heuristic objectives that are frequently subjective and conflicting? To begin to tackle this question, we introduce MANATEE (Multi-objective bAyesiaN optimizAtion wiTh hEuristic objEctives). The key idea is that by considering a linear scalarization as a probabilistic weighting over (heuristic) objective inclusion, we may up- or downweight an objective based on desirable or non-desirable properties of its posterior functional form.
Consequently, rather than returning the full Pareto front that may include points (parameter values) that maximize potentially non-useful heuristic objectives, we automatically concentrate on a useful region in accordance with the specified properties. In this work, we evaluate our method only for univariate input optimization; while we designed our framework to be extensible to multivariate inputs, we do not make any claims with regards to performance in that setting here.

The main contributions we present are:

1. Introduce the concept of *behaviours* B of the posterior functional form of the surrogate objective function f that are desirable if a function is *useful* for overall optimization.
2. Suggest an example set of such behaviours that may be inferred from the posterior of a multi-output Gaussian process, if used as the surrogate function.
3. Build upon previous MOBO approaches with random scalarizations that compute the distribution of scalarization weights p(λ), but instead condition on objective behaviours with p(λ|B), inferring which objectives are useful.
4. Construct a set of example objectives measuring workflow success for real molecular imaging and transcriptomic data and show that the proposed procedure compares favourably to existing approaches for univariate input optimization.

## 2 **Background**

## 2.1 **Bayesian Optimization**

Bayesian optimization (BO, see Frazier (2018) and references therein) attempts to optimize a function $g(x) \in \mathbb{R}$ for some $x \in \mathbb{R}^D$ that is, in some sense, expensive to evaluate and for which derivative information is not available, precluding gradient-based approaches. Note that while we set the scene in the general case of a multivariate input, in this work we perform experiments with univariate inputs, though our approach can be extended to D > 1 in the future. Applications of BO have become popular in the tuning of ML hyperparameters (Turner et al., 2021) and indeed entire workflows (Fusi et al., 2018) due to the expensive nature of re-training the models. At their core, BO approaches propose a surrogate function f defined on the same range and domain as g that may be searched efficiently to find points x that either maximize g, reduce uncertainty about f, or both. This leads to the concept of an acquisition function1 acq(x) ∈ R that may be optimized to find the next x at which g may be evaluated. While multiple acquisition functions have been proposed, here we focus on the Upper Confidence Bound (UCB) (Auer, 2002) defined as:

$$\operatorname{acq}_{\mathrm{UCB}}(x)=\mu^{(t)}(x)+\sqrt{\beta_{t}}\,\sigma^{(t)}(x)\qquad(1)$$

where $\mu^{(t)}(x)$ and $\sigma^{(t)}(x)$ are the posterior mean and standard deviation of f at x after t acquisitions from g, while $\beta_t$ is a hyperparameter that controls the balance between exploration and exploitation. While there are many possible choices for the surrogate function f, including deep neural networks (Snoek et al., 2015), a popular choice is a Gaussian process due to its principled handling of uncertainty and capacity to approximate a wide range of functions.

1We will later refer to it as the "single-objective acquisition function".
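As a minimal illustration of Eq. 1 (a sketch only; the surrogate's posterior interface is a hypothetical assumption, not a specific library API), the UCB acquisition can be evaluated on a grid over the input space and maximized directly:

```python
import numpy as np

def acq_ucb(mu: np.ndarray, sigma: np.ndarray, beta_t: float) -> np.ndarray:
    """Upper Confidence Bound of Eq. 1: mu^(t)(x) + sqrt(beta_t) * sigma^(t)(x)."""
    return mu + np.sqrt(beta_t) * sigma

# Hypothetical usage for a univariate input on [a, b] = [0, 1]:
# grid = np.linspace(0.0, 1.0, 512)
# mu, sigma = surrogate_posterior(grid)  # assumed to return posterior mean/sd
# x_next = grid[np.argmax(acq_ucb(mu, sigma, beta_t=2.0))]
```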
## 2.2 **Gaussian Processes**

**Overview** Gaussian processes (GPs) (Williams & Rasmussen, 2006) define a framework for performing inference over nonparametric functions. Let $m(x)$ be a mean function and $k(x, x')$ a positive-definite covariance function for $x, x' \in \mathbb{R}^D$. We define $f(x)$ to be a Gaussian process, denoted $f(x) \sim \mathcal{GP}(m(x), k(x, x'))$, if for any finite-dimensional subset $\mathbf{x} = [x_1, \ldots, x_N]^T \in \mathbb{R}^{N \times D}$, the corresponding function outputs $\mathbf{f} = [f(x_1), \ldots, f(x_N)]$ follow a multivariate Gaussian distribution $p(\mathbf{f}|\mathbf{x}) = \mathcal{N}(\mathbf{0}, \mathbf{K})$, where $\mathbf{K}$ is the covariance matrix with entries $(\mathbf{K})_{ij} = k(x_i, x_j)$ and we have assumed a zero-mean function without loss of generality. The kernel fully specifies the prior over functions, with one popular choice we use throughout the paper being the *exponentiated quadratic* kernel $k(x, x') = \exp\left(-\frac{(x - x')^2}{l^2}\right)$. It is common to model noisy observations $y$ via the likelihood $p(y|f)$, which when taken to be $\mathcal{N}(f, \sigma^2_{\epsilon})$ with noise variance $\sigma^2_{\epsilon}$ leads to the exact marginalization of $f$.

**Multi-output Gaussian processes** GPs may be extended to model K distinct outputs2 via the functions $\{f_k(x)\}_{k=1}^K$ (Bonilla et al., 2007). One construction is to model the full covariance matrix as the Kronecker product between the K × K inter-objective covariance matrix $\mathbf{K}^{\mathrm{IO}}$ and the data covariance matrix:

$$\operatorname{cov}\left(f_{k}(x),f_{k^{\prime}}(x^{\prime})\right)=(\mathbf{K}^{\mathrm{IO}})_{k,k^{\prime}}\,k(x,x^{\prime}).\qquad(2)$$

Here the kernel hyperparameter $l$ is shared across objectives, though in the following we model objective-specific observation noises $\epsilon_k$ with variances $\sigma^2_{\epsilon_k}$ as $p(y_k|f_k) \sim \mathcal{N}(f_k, \sigma^2_{\epsilon_k})$.

2Commonly referred to as *tasks*, we here refer to them as *objectives* given the application.

## 2.3 **Multi-Objective Bayesian Optimization**

**Multi-objective optimization** Multi-objective optimization attempts to simultaneously optimize K objectives $g_1(x), \ldots, g_K(x)$ over $x \in \mathbb{R}^D$, which is common in many real-world settings (Deb, 2014). However, it is rare in practice to be able to optimize all K functions simultaneously and instead it is common to attempt to recover the *Pareto front*. We say a point $x_1$ is *Pareto dominated* by $x_2$ iff $g_k(x_1) \leq g_k(x_2)\ \forall k = 1, \ldots, K$ and $\exists\, k \in 1, \ldots, K$ s.t. $g_k(x_1) < g_k(x_2)$. A point is said to be *Pareto optimal* if it is not dominated by any other point. The *Pareto front* is then defined as the set of Pareto optimal points, which intuitively corresponds to the set of equivalently optimal points given no prior preference between objectives.

**Scalarization functions** One popular approach to multi-objective optimization is the use of scalarization functions (see Chugh (2020) for an overview). A scalarization function $s_{\boldsymbol{\lambda}}(\mathbf{g}(x))$ parameterized by $\boldsymbol{\lambda}$ takes the set of K functions $\mathbf{g}(x) = [g_1(x), \ldots, g_K(x)]$ and outputs a single scalar value to be optimized in lieu of $\mathbf{g}(x)$. Roijers et al. (2013) show that if $s_{\boldsymbol{\lambda}}$ is monotonically increasing in all $g_k(x)$ then the resulting optimum $x^*$ lies on the Pareto front of $\mathbf{g}$. While many scalarization functions exist, one popular choice is the linear scalarization function $s_{\boldsymbol{\lambda}}(\mathbf{g}(x)) = \sum_{k=1}^{K} \lambda_k g_k(x)$, $\lambda_k > 0\ \forall k$. This has the intuitive interpretation that each $\lambda_k$ corresponds to the weight of function k, with a larger relative value pulling the optimum of $s_{\boldsymbol{\lambda}}$ towards the optimum of $g_k$.

**Hypervolume improvement** Another multi-objective optimization approach relies on the notion of *hypervolume* (HV), the volume of the space dominated by a Pareto front and bounded from below by a reference point, which current work assumes to be known by the practitioner (Daulton et al., 2021). HV is used as a metric to assess the quality of a Pareto front and is sought to be maximized in the optimization. Expected HV improvement (EHVI) for a new set of points can be computed using box decomposition algorithms (Yang et al., 2019a).
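For intuition, the dominance definition above can be checked directly on a finite set of evaluated points; a minimal sketch (quadratic in the number of points, for illustration only):

```python
import numpy as np

def pareto_front_mask(Y: np.ndarray) -> np.ndarray:
    """Boolean mask of Pareto optimal rows of Y (n points x K objectives).

    A point is dominated if some other point is >= on every objective
    and strictly > on at least one, matching the definition above.
    """
    n = Y.shape[0]
    optimal = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        optimal[i] = not dominates_i.any()
    return optimal
```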
**Multi-objective Bayesian optimization with scalarizations** Multi-objective Bayesian optimization (MOBO) approaches that use scalarizations operate under the same conditions as BO, where each evaluation of $g_k(x)$ is expensive and derivative information is unavailable. An example method is ParEGO (Knowles, 2006), which randomly scalarizes objectives with augmented Chebyshev scalarization and uses expected improvement. It was recently extended to qNParEGO (Daulton et al., 2020), which supports parallel and constrained optimization in a noisy setting. Unlike hypervolume-based methods, which can struggle with > 5 objectives (Balandat et al., 2021), qNParEGO is more suited for such problems.

![3_image_0.png](3_image_0.png)

Figure 1: Cartoon illustrating proposed desirable behaviours of objectives. *Explainability* captures how much an objective covaries with the parameter x, favouring those with lower observation noise. *Inter-objective agreement* favours objectives that agree with each other. *Maximum not at boundary* determines whether the optimum is contained within the user-specified parameter range.

Paria et al. (2019) propose a MOBO procedure that, rather than maximizing $s_{\boldsymbol{\lambda}}$ for a single $\boldsymbol{\lambda}$, constructs a distribution $p(\boldsymbol{\lambda})$ and minimizes the expected pointwise regret,

$${\mathcal{R}}(\mathbf{X})=\mathbb{E}_{p(\boldsymbol{\lambda})}\left(\operatorname*{max}_{x\in{\mathcal{X}}}s_{\boldsymbol{\lambda}}(\mathbf{g}(x))-\operatorname*{max}_{x\in\mathbf{X}}s_{\boldsymbol{\lambda}}(\mathbf{g}(x))\right),$$

where $\mathcal{X}$ is the feature space of x and $\mathbf{X}$ is the subset of $\mathcal{X}$ lying on the Pareto front to be computed. The exact region of the Pareto front to be considered is governed by $p(\boldsymbol{\lambda})$ and the authors provide a bounding box procedure for the user to select $p(\boldsymbol{\lambda})$, akin in the case of a linear scalarization to asserting *a priori* which objectives k are important. However, to our knowledge, no MOBO approach has proposed a $p(\boldsymbol{\lambda}|\cdot)$, inferred from either the data or the posterior over functions, that adaptively up- or downweights objectives based on desirable properties.

**Multi-objective Bayesian optimization beyond scalarizations** For hypervolume-based methods in the MOBO setting, EHVI has been extended to parallel evaluation of q points, leveraging automatic differentiation and boosting efficiency (Daulton et al., 2020). As EHVI assumes the noise-free case and can be affected in noisy settings, recent work introduced noisy EHVI (NEHVI), which uses its expectation under the posterior distribution of the surrogate function values given noisy observations (Daulton et al., 2021). NEHVI is more robust to noise than other hypervolume-based MOBO methods, is equivalent to EHVI in the noiseless setting, and its parallel formulation (qNEHVI) achieves computational gains and state-of-the-art performance in large batch optimization (Daulton et al., 2021). Another approach to MOBO utilizes uncertainty, overcoming the issue of scalability with the number of objectives faced by hypervolume-based methods. Predictive entropy search for multi-objective optimization (PESMO) aims to minimize entropy of the posterior distribution over the Pareto set (Hernández-Lobato et al., 2016). Max-value Entropy Search for Multi-objective Optimization (MESMO) employs efficient output space entropy search, improving its computation time over PESMO (Belakaria et al., 2019).
Uncertainty-aware Search framework for optimizing Multiple Objectives (USeMO) selects points that maximize a multi-objective measure of uncertainty and outperforms existing methods on problems with up to six objectives with faster convergence (Belakaria et al., 2020).

## 2.4 **Applications Of AutoML And Bayesian Optimization In Molecular Biology And Genomics**

AutoML and BO approaches have previously been successfully applied across multiple problems in molecular biology and genomics. One example is a popular application of BO to generate protein candidates at the sequence level with desirable chemical properties (Yang et al., 2019b). While less common, there are a handful of examples of AutoML applications to hyperparameter optimization in genomics. The GenoML project (Makarious et al., 2021) provides a Python framework centered on open science principles to perform end-to-end AutoML procedures for supervised learning problems in genomics. AutoGeneS (Aliee & Theis, 2021) develops a multi-objective optimization framework for the selection of genes for the deconvolution of bulk RNA-sequencing without relying on marker genes and instead optimizing properties of identified cell clusters. However, to our knowledge, there is no work that tackles the general problem of optimizing bioinformatics and genomics workflows in the absence of well-defined objective functions. In contrast, there are multiple BO techniques that allow a user to express a preference between solutions (González et al., 2017). While these methods could have exciting applications in genomics, we consider an alternative setup, where the user expresses a preference on the functional form of the unseen objectives but does not participate during acquisition.

## 3 **Multi-Objective Bayesian Optimization Over Heuristic Objectives**

## 3.1 **Setup**

We assume we have access to K noisy, heuristic objectives that at acquisition step t return a measurement $y_{kt}$ for an input location $x_t \in \mathcal{X}$, where $\mathcal{X}$ is a compact subset of $\mathbb{R}$ on $[a, b]$. We introduce surrogate functions $f_k(x)$ that we model with a multi-output GP as described in Section 2.2 with a full kernel given by Eq. 2. The choice to fit a multi-output GP to data reflects our prior assumption that the heuristic objectives may have a correlated functional form or operate on similar lengthscales. In settings where these assumptions do not hold, using a multi-output GP may be suboptimal. Our framework is applicable to any scalarization function that uses weightings (e.g. Tchebyshev scalarization (Nakayama et al., 2009)) and here we consider a linear scalarization function over objectives $s_{\boldsymbol{\lambda}}(\mathbf{f}(x)) = \sum_k \lambda_k f_k(x)$. Ultimately, we seek to maximize $\mathbb{E}_{p(\boldsymbol{\lambda}|\cdot)}[s_{\boldsymbol{\lambda}}(\mathbf{f}(x))]$.

## 3.1.1 **Acquisition Functions**

The next point to query $x_{t+1}$ is chosen by maximizing the expectation of the acquisition function. For this we propose two approaches:

- SA (scalarized acquisition): maximize the expectation of the scalarization of the single-objective acquisition function of each objective, $\mathbb{E}_{p(\boldsymbol{\lambda}|\cdot)}[s_{\boldsymbol{\lambda}}(\operatorname{acq}(\mathbf{f}(x)))]$, as per Paria et al. (2019);
- AS (acquisition of scalarized): maximize the expectation of the single-objective acquisition function of the scalarized objectives, $\mathbb{E}_{p(\boldsymbol{\lambda}|\cdot)}[\operatorname{acq}(s_{\boldsymbol{\lambda}}(\mathbf{f}(x)))]$.

We refer to these as acquisition functions and derive expressions for both in Appendix C. While many choices of single-objective acquisition functions are possible, we use $\operatorname{acq}_{\mathrm{UCB}}$ (Eq. 1), the UCB single-objective acquisition function. The SA formulation simplifies to an intuitive interpretation where each objective's UCB function value is weighted by the probability of that objective being useful given its behaviours (Appendix C). The AS formulation takes into account the multi-objective posterior covariance structure (Appendix C) but has a longer computation time that may require approximations when K is large.
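Under this interpretation of the SA formulation (the exact expressions are derived in Appendix C and not reproduced here; the arrays below are assumed precomputed), the acquisition reduces to a probability-weighted sum of per-objective UCB values:

```python
import numpy as np

def acq_sa(ucb: np.ndarray, p_useful: np.ndarray) -> np.ndarray:
    """Expected scalarized acquisition for binary inclusion weights:
    per-objective UCB values (shape K x n_grid) are weighted by
    p(lambda_k = 1 | B_k) and summed over the K objectives."""
    return p_useful @ ucb  # shape (n_grid,)

# Hypothetical usage on a grid over [a, b]:
# x_next = grid[np.argmax(acq_sa(ucb_values, inclusion_probs))]
```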
## 3.2 **Desirable Heuristic Objective Behaviours**

Next, we wish to set $p(\boldsymbol{\lambda}|\cdot)$ to upweight objectives that are inferred as useful based on desirable properties learned from the data. In our framework, we assume that the practitioner has *a priori* no preference over objectives, only that some or all may be useful. Instead, they have a preference over the functional form of the objectives, which is expressed via some desirable properties. We begin by considering what properties of a given heuristic objective $f_k(x)$ may be considered desirable. While many are possible, we suggest three behaviours (Figure 1):

1. **Explainability:** $f_k(x)$ covaries significantly with x (i.e. is explained by x). The justification here is that the practitioner has selected heuristic k assuming it will provide insight into the choice of x, so if there is no correlation then it should be downweighted. Given that the data have been scaled to empirical variance 1, the inferred variance of the fitted observation noise $\sigma^2_{\epsilon_k}$ represents the proportion of variance unexplained by $f_k$, so we define $B_k^{(1)} := \sigma^2_{\epsilon_k}$.

2. **Inter-objective agreement:** $f_k$ shares a similar functional form with $f_{k'}$, $k' \neq k$, with the intuition that it is useful for practitioners to find regions of the input space where multiple heuristics agree. After fitting the multi-output GP, $(\mathbf{K}^{\mathrm{IO}})_{k,k'}$ defines the covariance between objectives k and k′ for k ≠ k′, and $(\mathbf{K}^{\mathrm{IO}})_{k,k}$ defines the variance of objective k. We therefore introduce the inter-objective agreement behaviour as

$$B_{k}^{(2)}:=\sum_{k^{\prime}=1\,:\,k^{\prime}\neq k}^{K}\operatorname*{max}\left(0,\frac{1}{K-1}\frac{(\mathbf{K}^{\mathrm{IO}})_{k,k^{\prime}}}{\sqrt{(\mathbf{K}^{\mathrm{IO}})_{k,k}(\mathbf{K}^{\mathrm{IO}})_{k^{\prime},k^{\prime}}}}\right).\qquad(3)$$

The intuition is that $\frac{(\mathbf{K}^{\mathrm{IO}})_{k,k'}}{\sqrt{(\mathbf{K}^{\mathrm{IO}})_{k,k}(\mathbf{K}^{\mathrm{IO}})_{k',k'}}}$ represents the correlation between objectives k and k′, so $B_k^{(2)}$ represents the average correlation with other objectives while not penalizing negative correlation worse than no correlation.

3. **Maximum not at boundary:** Within $\mathcal{X}$, $f_k$ has a maximum that is not at the boundary of x. The useful range of x is specified by the practitioner. Then, if $f_k$ is maximized by a boundary value of x, then either (i) the optimum is outside of the specified range, conflicting with the practitioner's intuition, or (ii) $f_k$ is unbounded in x, in which case it is not useful for optimization. In the former case, one approach would be to revise the range and repeat the process. Since the derivative of a GP is also a GP, we may identify whether a stationary point exists in $\mathcal{X}$ by searching for the zeros of the posterior mean derivative $\bar{f}'(x)$. We therefore define $B_k^{(3)} := \operatorname{hasmax}(f_k, \mathcal{X})$, where hasmax returns 1 if $f_k$ has a maximum on $\mathcal{X}$ and 0 otherwise by evaluating the derivatives of the posterior mean of the multi-output GP (derived in Appendix D).

We will denote the set of the three behaviours above for a given objective k as $\mathbf{B}_k = \{B_k^{(1)}, B_k^{(2)}, B_k^{(3)}\}$.
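The three behaviours can be read off a fitted multi-output GP; the sketch below assumes the inferred noise variances, the inter-objective covariance matrix, and the posterior mean derivative evaluated on a grid are available (the interfaces are hypothetical). It detects an interior maximum via a positive-to-negative sign change of the derivative, a grid-based stand-in for the exact procedure of Appendix D.

```python
import numpy as np

def compute_behaviours(noise_var, K_io, mean_deriv):
    """Behaviours of Section 3.2 for each of K objectives.

    noise_var: (K,) inferred observation noise variances; with data scaled
        to unit variance this is the unexplained variance, i.e. B1.
    K_io: (K, K) inter-objective covariance matrix of the multi-output GP,
        from which B2 (Eq. 3) is computed.
    mean_deriv: (K, n_grid) posterior mean derivative over a grid on X;
        a +/- sign change indicates an interior maximum, giving B3.
    """
    K = len(noise_var)
    b1 = np.asarray(noise_var, dtype=float)
    sd = np.sqrt(np.diag(K_io))
    corr = K_io / np.outer(sd, sd)
    b2 = np.array([np.maximum(0.0, np.delete(corr[k], k) / (K - 1)).sum()
                   for k in range(K)])
    b3 = np.array([float(np.any((mean_deriv[k, :-1] > 0) &
                                (mean_deriv[k, 1:] < 0)))
                   for k in range(K)])
    return b1, b2, b3
```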
![6_image_0.png](6_image_0.png)

Figure 2: Multi-objective Bayesian optimization with heuristic objectives. A multi-output Gaussian process is fitted (Step 2) to the initial training dataset (Step 1). Objective behaviours are inferred from the posterior form (Step 3) and are used to update the distribution of objective inclusion weights (Step 4). The acquisition function defined as the expectation under the weights distribution is optimized (Step 5) to give the next location to sample objectives at (Step 6). Step 5 shows the scalarized acquisition (SA) function approach.

## 3.3 **Incorporating Desirable Behaviours Into Scalarization Weights**

We next consider how to use the set of behaviours $\mathbf{B} = \{\mathbf{B}_k\}_{k=1}^K$ to parameterize the scalarization probabilities $p(\boldsymbol{\lambda}|\mathbf{B})$. We assume that $\lambda_k$ is a binary variable $\forall k$ that corresponds to whether objective k is *useful* or otherwise, with $p(\lambda_k|\mathbf{B}_k)$ given by a Bernoulli distribution. While this construction initially appears restrictive compared to existing approaches with random scalarizations where $\boldsymbol{\lambda} \in \mathbb{R}^K$, it has a desirable property that maintains its generality (note that we assume $s_{\boldsymbol{\lambda}}$ to be a linear scalarization function). Specifically, optimization under our setup returns Pareto optimal points of $\mathbf{f}$ (proof presented in Appendix E).

**Theorem 3.1.** *If $\mathbb{E}_{p(\lambda_k|\mathbf{B}_k)}[\lambda_k] > 0\ \forall k$, then $x^* = \arg\max_x \mathbb{E}_{p(\boldsymbol{\lambda}|\mathbf{B})}[s_{\boldsymbol{\lambda}}(\mathbf{f}(x))]$ lies on the Pareto front of $\mathbf{f}$.*

However, how to construct $p(\lambda_k = 1|B_k^{(1)}, B_k^{(2)}, B_k^{(3)})$ directly is non-obvious. Instead, we ask how would *each* objective behaviour appear if we knew that objective was useful (or otherwise)? This allows us to specify conditional distributions over behaviours for a useful/non-useful objective, $p(B_k^{(i)}|\lambda_k = 1)$ and $p(B_k^{(i)}|\lambda_k = 0)$ for $i = 1, 2, 3$, and combine them with a prior $p(\lambda_k = 1) = 1 - p(\lambda_k = 0)$ to compute the posterior probability that an objective is useful given its behaviours:

$$p(\lambda_{k}\!=\!1|{\bf B}_{k})=\frac{\prod_{i}p(B_{k}^{(i)}|\lambda_{k}\!=\!1)p(\lambda_{k}\!=\!1)}{\sum_{q=0,1}\prod_{i}p(B_{k}^{(i)}|\lambda_{k}\!=\!q)p(\lambda_{k}\!=\!q)}\qquad(4)$$

With these considerations, we suggest distributions for $p(B_k^{(i)}|\lambda_k)$; however, we emphasize that these are suggestions only and there are many possible choices that would fit the problem.

**Explainability:** For $B_k^{(1)}$, the explainability of objective k (i.e. the proportion of variance unexplained by that function), we assume that if that objective is desirable ($\lambda_k = 1$) then the lower the observation noise, the better, and in the non-desirable case ($\lambda_k = 0$), higher noise is expected. Given the lack of additional assumptions, we appeal to the principle of parsimony and propose a linear relationship of the form:

$$p(B_{k}^{(1)}|\lambda_{k})=\begin{cases}2(1-\lambda_{k})B_{k}^{(1)}+2\lambda_{k}(1-B_{k}^{(1)})&\text{if}\ B_{k}^{(1)}\in[0,1]\\ 0&\text{otherwise.}\end{cases}\qquad(5)$$

**Inter-objective agreement:** For inter-objective agreement $B_k^{(2)}$, our reasoning is that identifying solutions where multiple objectives agree can be desirable for a practitioner. We thus posit that high inter-objective correlation should be more likely for a desirable objective and vice-versa for a non-desirable one, and again a linear relationship is the most parsimonious:

$$p(B_{k}^{(2)}|\lambda_{k})=\begin{cases}2\lambda_{k}B_{k}^{(2)}+2(1-\lambda_{k})(1-B_{k}^{(2)})&\text{if}\ B_{k}^{(2)}\in[0,1]\\ 0&\text{otherwise.}\end{cases}\qquad(6)$$

**Maximum not at boundary:** We propose $B_k^{(3)}|\lambda_k = i \sim \mathrm{Bernoulli}(\pi_i)$, where $\pi_0, \pi_1$ are user-settable hyper-parameters. This means that conditioned on an objective being useful (or otherwise), there is a fixed probability of that objective containing a maximum in the region. In our experiments, we set $\pi_0, \pi_1$ such that a useful objective has a maximum in the region with 75% chance, and a non-useful one with 25% chance.

$$p(B_{k}^{(3)}|\lambda_{k})={\begin{cases}\mathrm{Bernoulli}(\pi_{0})&\mathrm{if}\ \lambda_{k}=0\\ \mathrm{Bernoulli}(\pi_{1})&\mathrm{if}\ \lambda_{k}=1\end{cases}}\qquad(7)$$
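Putting Eqs. 4-7 together, the inclusion probability for one objective can be computed in a few lines. In this sketch, a uniform prior $p(\lambda_k = 1) = 0.5$ is an assumption on our part, and $B_k^{(1)}, B_k^{(2)}$ are assumed to lie in $[0, 1]$ so that the linear densities are valid.

```python
def inclusion_probability(b1: float, b2: float, b3: int,
                          pi0: float = 0.25, pi1: float = 0.75,
                          prior: float = 0.5) -> float:
    """p(lambda_k = 1 | B_k) via Eq. 4 with the likelihoods of Eqs. 5-7."""
    def likelihood(lam: int) -> float:
        l1 = 2 * (1 - lam) * b1 + 2 * lam * (1 - b1)  # Eq. 5
        l2 = 2 * lam * b2 + 2 * (1 - lam) * (1 - b2)  # Eq. 6
        pi = pi1 if lam == 1 else pi0                 # Eq. 7
        l3 = pi if b3 == 1 else 1.0 - pi
        return l1 * l2 * l3

    num = likelihood(1) * prior
    return num / (num + likelihood(0) * (1.0 - prior))
```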
## 3.4 **MANATEE**

Putting these steps together results in the MANATEE framework, an iterative MOBO procedure as outlined in Figure 2 and Algorithm 1. First, the objectives are evaluated at a set of input locations randomly chosen on the parameter space. Second, the multi-output GP surrogate function with covariance given by Eq. 2 is fitted to all objectives. Then, the objective behaviours $\mathbf{B}$ are computed from the surrogate function and the distributions over objective weights that indicate whether each objective is useful are updated. Finally, the updated acquisition function is optimized (see Section 3.1.1 and Appendix C), guiding acquisition of the next point. The procedure is repeated for a predetermined number of steps. The overall "best" point to be used for downstream analysis may be chosen as that which maximizes the scalarized surrogate function.

**Algorithm 1** MANATEE framework

Input: training data $D^{(0)}$; input space $\mathcal{X}$ on region $[a, b]$; objectives $\mathbf{f}(x) = \{f_k(x)\}_{k=1}^K$; acquisition function $A(\mathbf{f}(x), s_{\boldsymbol{\lambda}}, \operatorname{acq})$; behaviours $\mathbf{B} = \{B^{(1)}, B^{(2)}, B^{(3)}\}$; distributions over behaviours for useful/non-useful objective $p(B_k^{(i)}|\lambda_k)$; number of iterations T

for $t \leftarrow 1, \ldots, T$ do
  Fit multi-output Gaussian process $p(\mathbf{f}|D^{(t-1)})$
  for $k \leftarrow 1, \ldots, K$ do
    for $i \leftarrow 1, 2, 3$ do
      Compute objective behaviour $B_k^{(i)}$ from the Gaussian process posterior $\mathbf{f}$ (Section 3.2)
      Compute $p(B_k^{(i)}|\lambda_k)$ (how likely $B_k^{(i)}$ is given objective is useful/non-useful) using Eq. 5, 6, 7
    end for
    Update $p(\lambda_k = 1|\mathbf{B}_k)$ (probability that objective is useful given its behaviours) using Eq. 4
  end for
  Update acquisition function $A(\mathbf{f}(x), s_{\boldsymbol{\lambda}}, \operatorname{acq})$ using objective weights $p(\boldsymbol{\lambda}|\mathbf{B})$ (Section 3.1.1)
  Pick next candidate by maximizing acquisition function $x_t \leftarrow \arg\max_{x\in\mathcal{X}} A(\mathbf{f}(x), s_{\boldsymbol{\lambda}}, \operatorname{acq})$
  Acquire data at new location $y_k(x_t)\ \forall k = 1, \ldots, K$ objectives
  Update training data with new datapoints $D^{(t)} \leftarrow D^{(t-1)} \cup \{(x_t, \mathbf{y}(x_t))\}$
end for

## 3.5 **Baselines For Experiments**

We contrast our method against two baselines and three existing approaches for MOBO: (i) *Random acquisition*: draw $x_t \sim \mathrm{Unif}(0, 1)$ at each iteration, (ii) *Random scalarization*: use identical surrogate and acquisition functions as MANATEE-SA to sample $x_t$ but draw $\lambda_k \sim \mathrm{Unif}(0, 1)$ rather than conditional on $\mathbf{B}$, (iii) qNEHVI (Daulton et al., 2021) with approximate hypervolume computation to facilitate inference over > 5 objectives, (iv) qNParEGO (Daulton et al., 2020), and (v) USeMO (Belakaria et al., 2020).
Since it is not known how noisy real biomedical problems are, we supplement our evaluation including qNEHVI and qNParEGO, specifically designed for MOBO of noisy objectives, with USeMO, an efficient uncertainty-based state-of-the-art approach for many objectives in the noiseless setting (Belakaria et al., 2020).

In our setup not all objectives are considered useful, so quantifying the hypervolume improvement over all objectives as a commonly used performance measure (Daulton et al., 2020; 2021) does not align with the stated goal. Thus, we construct evaluation measures (referred to as *meta-objectives*) that use independent information (e.g. expert labels, see Sections 4.2, 4.3) to quantify the quality of the acquisitions. When a meta-objective $h(x_t)$ is available at every iteration $t = 1, \ldots, T$ with overall maximum $y^* = \max_{x \in \mathcal{X}} h(x)$, to compare among approaches we compute the following metrics: (i) *Cumulative regret:* $\frac{1}{T}\sum_{t=1}^{T}(y^* - h(x_t))$, (ii) *Full regret:* $y^* - \max_{x \in X_{1:T}} h(x)$, and (iii) *Bayes regret:* $\frac{1}{T}\sum_{t=1}^{T}(y^* - \max_{x \in X_{1:t}} h(x))$, where $X_{1:t}$ is the set of x acquired up to time t. Of these, we place most emphasis on cumulative regret as it quantifies how close each method gets to the optimal solution on average. In contrast, the full and Bayes regret quantify how close the "best" acquired point gets to $y^*$ as measured by the max over h of all points acquired so far; however, since the meta-objective h is in general inaccessible for our problem setup (and only used for method comparison), it is impossible to quantify $\max_{x \in X_{1:T}} h(x)$ in practice outside of benchmarking exercises.
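For reference, all three regret metrics follow directly from the acquisition trace; a minimal sketch (benchmarking-only, since it requires the meta-objective values and $y^*$):

```python
import numpy as np

def regret_metrics(h_acquired: np.ndarray, y_star: float) -> dict:
    """Cumulative, full, and Bayes regret for acquisitions x_1, ..., x_T.

    h_acquired: meta-objective values h(x_t) in acquisition order.
    y_star: overall maximum of the meta-objective over X.
    """
    running_best = np.maximum.accumulate(h_acquired)
    return {
        "cumulative_regret": float(np.mean(y_star - h_acquired)),
        "full_regret": float(y_star - running_best[-1]),
        "bayes_regret": float(np.mean(y_star - running_best)),
    }
```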
## 4 **Experiments**

## 4.1 **Toy Data Experiment**

We begin by demonstrating the overall problem setup on toy data on an input space $x \in [0, 1]$. We consider 5 objectives overall: 3 that act as the *useful* objectives, with maxima around that of a meta-objective at 1/4, given by $\sin 2\pi x$, $\max(0, \sin 2\pi x)$, and $\sin 2\pi(x - 0.05)$, and 2 that disagree and act as the *non-useful* objectives, given by $2x$ and $-2x$. Each objective is augmented with noise (Appendix B.1). Note that on real data we do not know *a priori* which objectives are useful3. Further, the meta-objective is not specified: it may be linear, non-linear, and not necessarily a function of the heuristic objectives; it simply needs a maximum at $x \approx 1/4$.

3Otherwise only useful objectives would be included and standard MOBO procedures applied.

![8_image_0.png](8_image_0.png)

Figure 3: **A** 150 random samples of toy data for the 5 objectives, including those on the Pareto front (orange) and otherwise (blue), along with points acquired by MANATEE-SA (red) in an example run. **B** Inclusion probabilities for each of the objectives as a function of acquisition step. Solid line shows the mean and shaded region denotes the 95% confidence interval across runs.

Samples from each of these functions can be seen in Figure 3A (blue points). The overall Pareto front (orange points) spans almost the entire region, including samples at the very right where one of the non-useful linear objective functions has its maximum. However, when applied to this toy problem, MANATEE quickly begins acquiring samples around the joint maxima of the three useful objective functions (red points). Indeed, tracing the inclusion probabilities $p(\lambda_k = 1|\mathbf{B}_k)$ across the iterations (Figure 3B) demonstrates how MANATEE learns to upweight objectives 1-3 while downweighting 4-5. This demonstrates that when we do not know *a priori* which objectives to trust, we may still recover a region of high utility when the Pareto front spans the full space of conflicting objectives.

## 4.2 **Imaging Mass Cytometry Cofactor Selection**

We next apply MANATEE to the selection of cofactors for Imaging Mass Cytometry (IMC) data, a new technology that can measure the expression of up to 40 proteins at subcellular resolution in tissue sections (Giesen et al., 2014). In the analysis of mass cytometry data, a cofactor c is frequently used to normalize the data (Ray & Pyne, 2012; Wagner et al., 2019) via the transformation $\tilde{y} = \sinh^{-1}(y/c)$. However, to our knowledge no systematic approach exists to set the cofactor, and it is typically left as a user-specified parameter.

Here, we consider the standard workflow where (i) the expression data is normalized with a given cofactor c and (ii) the data is clustered using standard methods, with the "best" cofactor being the one that leads to the most biologically relevant cellular populations4. Given that this problem in general has no notion of "test accuracy" with respect to which we could optimize the cofactor, we instead suggest a number of heuristic objectives based around maximizing the correlation of cluster-specific mean expression of known protein marker combinations. For example, the proteins CD19 and CD20 are highly expressed in B lymphocyte cells and lowly expressed in all others. Therefore, if a clustering correctly separates B cells from others, the correlation between the mean CD19 and CD20 expression in each cluster should be high, as the proteins should either be co-expressed or both not expressed (at the origin), as demonstrated in Appendix F.1. We can apply this logic to a range of cell type markers to construct our set of heuristic objectives (Appendix F.2).

Table 1: Results for IMC cofactor optimization experiment. CR: cumulative regret, FR: full regret, BR: Bayes regret. M-SA: MANATEE with scalarized acquisition, M-AS: MANATEE with acquisition of scalarized function, RA: random acquisition, RS: random scalarization. ARI: adjusted Rand index, NMI: normalized mutual information. Values are mean (s.d.).

| Method | ARI CR | ARI FR | ARI BR | NMI CR | NMI FR | NMI BR |
|---|---|---|---|---|---|---|
| M-SA | 0.017(0.005) | 0.003(0.003) | 0.007(0.006) | 0.019(0.009) | 0.002(0.005) | 0.008(0.009) |
| M-AS | 0.021(0.010) | 0.006(0.010) | 0.011(0.011) | 0.025(0.017) | 0.007(0.016) | 0.015(0.017) |
| RA | 0.045(0.001) | 0.024(0.013) | 0.031(0.008) | 0.065(0.003) | 0.026(0.013) | 0.036(0.009) |
| RS | 0.021(0.004) | 0.003(0.004) | 0.008(0.005) | 0.025(0.006) | 0.003(0.006) | 0.008(0.007) |
| qNEHVI | 0.042(0.004) | 0.011(0.007) | 0.021(0.011) | 0.061(0.007) | 0.011(0.007) | 0.026(0.017) |
| qNParEGO | 0.037(0.005) | 0.002(0.003) | 0.017(0.012) | 0.049(0.009) | 0.005(0.003) | 0.023(0.017) |
| USeMO | 0.043(0.003) | 0.010(0.009) | 0.016(0.009) | 0.062(0.005) | 0.012(0.011) | 0.018(0.013) |

To quantify the ability of each clustering to uncover biologically relevant populations, we use expert annotated cell types from Jackson et al. (2020) and assess cluster overlap with the adjusted Rand index (ARI) and normalized mutual information (NMI), which for this experiment form the overall meta-objectives5, in line with prior benchmarking efforts of single-cell clustering (Qi et al., 2020; Kiselev et al., 2017). Note that this is in general unavailable for the analysis of newly generated data and we would *only* have access to the correlation (heuristic) objectives.

4All parameters of the clustering procedure are held constant across cofactors to allow for fair comparison.
5The fact that we can easily specify 2 meta-objectives highlights the ubiquity of the "multiple heuristic objective" issue in bioinformatics.
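To make one such heuristic objective concrete, the sketch below applies the arcsinh normalization and the marker-pair correlation idea described above. It is a simplification: the cluster labels are taken as given, whereas in the actual workflow the data are re-clustered for each candidate cofactor, and the marker column indices are hypothetical.

```python
import numpy as np

def coexpression_objective(y: np.ndarray, labels: np.ndarray,
                           marker_a: int, marker_b: int, c: float) -> float:
    """Normalize raw expression y (cells x proteins) with arcsinh(y / c),
    then correlate the cluster-wise mean expression of a marker pair
    (e.g. CD19/CD20 for B cells)."""
    y_norm = np.arcsinh(y / c)
    clusters = np.unique(labels)
    mean_a = np.array([y_norm[labels == k, marker_a].mean() for k in clusters])
    mean_b = np.array([y_norm[labels == k, marker_b].mean() for k in clusters])
    return float(np.corrcoef(mean_a, mean_b)[0, 1])
```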
The results comparing MANATEE to the alternative methods are shown in Table 1 and the optimization trajectories across acquisitions are shown in Supplementary Figure 4. On the metric of cumulative regret, which, as above, is most relevant for the problem setup at hand, MANATEE-SA outperforms the alternative approaches. On full and Bayes regret, MANATEE performs comparably with the baselines. On cumulative regret, qNEHVI and USeMO are comparable to random acquisition, suggesting that consistently acquiring close-to-optimal solutions over > 5 noisy objectives is challenging even for approximate hypervolume computation, and that a noisy setting is challenging for methods assuming noise-free observations. Interestingly, we find that random scalarization exhibits strong performance on several measures, which may be understood by the fact that the scalarized objective $\sum_k \lambda_k f_k$ naturally places high weight on regions where many objectives agree, mimicking a similar scenario to our inter-objective agreement criterion.

Samples from the objectives along with methods' acquisitions are shown in Supplementary Figure 5. MANATEE samples low cofactor values corresponding to regions where meta-objectives' values are high, while other methods sample throughout the parameter range. Examining the inclusion probabilities shows that MANATEE learns to downweight the CD45/CD20 co-expression objective, which is maximized for high cofactor values (Supplementary Figures 5 and 6). We also show each regret metric as a function of the acquisition step to demonstrate the convergence rate of each method (Appendix A.5, Supplementary Figure 10a). MANATEE-SA reaches low regret values faster than or comparably to other methods. We further performed ablation experiments of each behaviour and found that no single behaviour drives the performance (Appendix A.4). We also performed cross-validation on data splits to demonstrate that MANATEE does not overfit to a given dataset (Appendix A.3).

## 4.3 **Single-Cell RNA-Seq Highly Variable Gene Selection**

Single-cell RNA-sequencing (scRNA-seq, see Hwang et al. (2018) for an overview) quantifies whole-transcriptome gene expression at single-cell resolution. A key step in the analysis of the resulting data is selection of a set of highly variable genes (HVGs) for downstream analysis, typically taken as the "top x%" (Yip et al., 2019), but there are no systematic or quantitative recommendations for selecting this proportion (Luecken & Theis, 2019). Therefore, we apply MANATEE to this problem following a clustering workflow similar to the IMC experiment, but by varying the proportion of HVGs used for the analysis and keeping all other clustering parameters fixed. We again propose a number of co-expression based heuristics (Appendix F.3) and augment these with measures of cluster purity (mean silhouette width, Calinski and Harabasz score, Davies-Bouldin score) previously used in scRNA-seq analysis (Germain et al., 2020); a sketch of these purity measures is given below.
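The purity heuristics named above are available in scikit-learn; the sketch below selects the top fraction of most variable genes and scores a clustering of the reduced matrix. Here k-means is a stand-in assumption for the fixed clustering workflow, and the Davies-Bouldin score is negated so that larger is better for all objectives.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def purity_objectives(expr: np.ndarray, hvg_fraction: float,
                      n_clusters: int = 10) -> dict:
    """Cluster cells on the top `hvg_fraction` most variable genes and
    return the three cluster-purity heuristics."""
    n_keep = max(2, int(hvg_fraction * expr.shape[1]))
    hvg_idx = np.argsort(expr.var(axis=0))[::-1][:n_keep]
    X = expr[:, hvg_idx]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return {
        "silhouette": float(silhouette_score(X, labels)),
        "calinski_harabasz": float(calinski_harabasz_score(X, labels)),
        "neg_davies_bouldin": float(-davies_bouldin_score(X, labels)),
    }
```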
Table 2: Results for the scRNA-seq HVG selection optimization experiment. CR: cumulative regret, FR: full regret, BR: Bayes regret. M-SA: MANATEE with scalarized acquisition, M-AS: MANATEE with acquisition of scalarized function, RA: random acquisition, RS: random scalarization. ARI: adjusted Rand index, NMI: normalized mutual information. Values are mean (s.d.).

| Method   | ARI CR       | ARI FR       | ARI BR       | NMI CR       | NMI FR       | NMI BR       |
|----------|--------------|--------------|--------------|--------------|--------------|--------------|
| M-SA     | 0.126(0.019) | 0.056(0.010) | 0.064(0.012) | 0.125(0.026) | 0.022(0.010) | 0.031(0.013) |
| M-AS     | 0.127(0.026) | 0.058(0.015) | 0.067(0.015) | 0.124(0.036) | 0.023(0.013) | 0.034(0.017) |
| RA       | 0.192(0.020) | 0.053(0.009) | 0.067(0.013) | 0.199(0.025) | 0.025(0.009) | 0.040(0.015) |
| RS       | 0.140(0.018) | 0.049(0.008) | 0.059(0.010) | 0.130(0.021) | 0.014(0.007) | 0.026(0.012) |
| qNEHVI   | 0.186(0.038) | 0.050(0.008) | 0.092(0.045) | 0.191(0.047) | 0.023(0.009) | 0.073(0.057) |
| qNParEGO | 0.161(0.039) | 0.055(0.009) | 0.091(0.048) | 0.158(0.049) | 0.023(0.009) | 0.070(0.064) |
| USeMO    | 0.218(0.031) | 0.057(0.013) | 0.071(0.014) | 0.231(0.040) | 0.027(0.014) | 0.042(0.019) |

For these workflows, no general ground truth clustering or cell types are available. However, a new technology called CITE-seq can simultaneously quantify both the RNA and surface protein expression at the single-cell level (Stoeckius et al., 2017). Given that cell types are traditionally defined by surface protein expression (Oostrum et al., 2019), we use a clustering of the surface protein expression alone as the ground truth, following existing work (Liu et al., 2021). The concordance with this clustering acts as the meta-objective in this experiment, against which we benchmark the proposed approaches. We supply each method with the heuristic objectives above and benchmark the gene proportion acquisitions by contrasting the resulting clusterings with the surface protein-derived ground truth using ARI and NMI as metrics. Once again, these represent only two possible choices of meta-objective and there are many more we could design, highlighting the prevalence of heuristic objectives in the field.

The results are shown in Table 2, the optimization trajectories in Supplementary Figure 7, and regret metrics (Appendix A.5) at each acquisition step in Supplementary Figure 10b. As above, our main focus is on cumulative regret since, when deploying in a real-world scenario, we would not have access to the meta-objective. MANATEE performs favourably on cumulative regret compared to the other approaches, though it has higher full and Bayes regret. MANATEE learns to strongly upweight the Davies-Bouldin objective, which agrees with the meta-objectives and is less noisy, and to downweight the CD3E/CD4 and CD3D/CD8A objectives, which are maximized in a different region to the meta-objectives (Supplementary Figures 8 and 9). This allows our method to acquire proportions of HVGs close to the upper bound of the parameter range, where the meta-objectives are maximized, unlike qNParEGO and USeMO, which do not reach this region (Supplementary Figure 8). Overall, this demonstrates that our method is a promising approach for hyperparameter optimization on real, noisy datasets, achieving competitive performance compared to existing baselines and state-of-the-art methods.
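For reference, the meta-objectives themselves are standard clustering-agreement scores; a minimal sketch follows, assuming `labels` is the clustering obtained with a candidate parameter value and `ground_truth` holds the expert-annotated cell types (IMC) or the surface protein-derived clustering (CITE-seq). The function name is an illustrative choice.

```python
# A minimal sketch of the meta-objective computation (ARI and NMI) comparing
# a candidate clustering against the ground truth described above.
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def meta_objectives(labels, ground_truth):
    return {
        "ARI": adjusted_rand_score(ground_truth, labels),
        "NMI": normalized_mutual_info_score(ground_truth, labels),
    }
```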
Finally, we quantified the effects of different model initializations on the parameter values selected by our method by reanalyzing the outputs of the above experiments, which contained multiple repetitions across different random seeds controlling the initial training set and model initialization. In the IMC cofactor selection experiment, MANATEE-SA and MANATEE-AS most frequently selected the same cofactor value in 99% and 90% of all repetitions, respectively (Supplementary Figure 11a). In the HVG selection experiment, the interquartile range of the percentage values most frequently selected by MANATEE-SA across all repetitions was 5% and that for MANATEE-AS was 8%, while the considered optimization range was between 1% and 50% of genes (Supplementary Figure 11b). All implementation details are provided in Appendix A.1 and the experimental setup is described in Appendix A.2.

## 5 **Discussion**

A common theme here is the subjectivity of parameter setting in biological data analysis workflows. Setting these parameters often involves no heuristic objectives at all, relying instead on iterative data exploration to find a parameter combination that "works". Even when heuristic objectives are involved, such as in the benchmarking analyses of scRNA-seq workflows, the precise choice of which objectives to include is fundamentally subjective too. It is important to note that our proposed approach does not remove subjectivity from the analysis. Many important steps, including the chosen behaviours B and their conditional inclusion distributions p(B|λ), are set by the user. Therefore, it abstracts the subjectivity by a level, changing the question from *"which objectives should I use to benchmark my method?"* to *"what would the behaviour of a good objective function be?"*.

Given that no link is assumed between the specified heuristic objectives and the true meta-objective, and that the choice of desirable objective behaviours is given as an example only, we make no optimality claims about the ability to explore the Pareto front. In our setup, the weighting of the acquisition functions may effectively exclude some objectives and lead to potential under-exploration. This may be mitigated by starting the objective weighting process only after a set number of steps. Furthermore, the inter-objective agreement behaviour may be limiting if a practitioner is interested in jointly suboptimal solutions that still perform reasonably well on all objectives, some of which are competing. If all objectives had their maximum at the boundary, this behaviour would no longer be relevant to the optimization; in such cases, the user could be alerted and asked to re-define which behaviours they want to use. As each behaviour's weight depends on the specification of p(B|λ), there is no guarantee that these would have a similar scale, though we found no one behaviour driving the performance on the problems we considered here. Here we have assumed that all tasks are quantifiable for all observations, though in some Bayesian optimization settings this does not hold (Krause & Ong, 2011). Our framework can be used with other scalarization functions that include a weighting λ, which may be more suitable in some scenarios, but here we performed experiments using linear scalarization. We note that if the true meta-objective is not linear in the observed objectives, then a linear scalarization will be suboptimal.
However, in the context we considered, the true meta-objective may represent complex notions such as the ability to uncover biologically meaningful results, which, while likely nonlinear, is impossible to quantify or specify. Therefore, the linear scalarization represents a trade-off: we gain access to an intuitive weight λk for each function at the cost of not necessarily being theoretically optimal. In addition, MANATEE performed competitively with state-of-the-art methods in our experiments with real data analysis pipelines, which represent cases with an unknown true meta-objective.

As our general approach is applicable to any multi-objective optimization scenario (while tailored to biomedical analyses), we acknowledge that it could be used in highly diverse applications. We note that these may include ethically dubious bioinformatics analyses such as those pertaining to genetic testing of embryos. We strongly caution against any such use without a thorough ethical review process.

There are several extensions that would serve as future steps. We have only considered optimizing $x \in \mathbb{R}$, but future work can use our method to optimize multiple pipeline parameters, as our work generalizes to $x \in \mathbb{R}^D$ for $D > 1$ (Appendix D). There is much current research in BO methods over both continuous and categorical domains (Ru et al., 2020), which may better suit the parameter space of scRNA-seq analysis pipelines (Germain et al., 2020). A lot of research in BO centres on the incorporation of user input and expert opinions to guide optimization (Häse et al., 2021; Abdolshah et al., 2019). While we have explicitly considered the opposite problem, where *a priori* it is not known which objectives should be upweighted, there could be situations where both approaches could be integrated. For example, an expert may provide ratings for the results of each scRNA-seq clustering during optimization. In such settings, these ratings could be integrated into our proposed framework by updating the distributions p(λ|B, Θ) over Θ such that they confer high weights to functions of expert ratings. Finally, we welcome future work evaluating the gains in biological discovery, accuracy, and computation time that arise from using our method to choose values for pipeline parameters.

## Code Availability

Our code is available at https://github.com/camlab-bioml/2022_manatee_paper_analyses.

## Acknowledgments

We thank Dr. Gavia Gray for her advice on PyTorch optimizers and the reviewers for their thoughtful comments and suggestions. This work was supported by a Canadian Institutes of Health Research project grant number PJT175270 to K.R.C, the Canadian Statistical Sciences Institute Ontario Top-up for Postdoctoral Fellows in Data Science to A.S., the Hold'em for Life Oncology Fellowship to A.S., and the Vector Institute Postgraduate Affiliate Award to A.S. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) RGPIN-2020-04083 to K.R.C. This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program to K.R.C.

## References

Majid Abdolshah, Alistair Shilton, Santu Rana, Sunil Gupta, and Svetha Venkatesh. Multi-objective bayesian optimisation with preferences over objectives. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B.
Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 12214–12224, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/a7b7e4b27722574c611fe91476a50238-Abstract.html.

Hananeh Aliee and Fabian J Theis. Autogenes: Automatic gene selection using multi-objective optimization for RNA-seq deconvolution. *Cell Systems*, 2021.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. *Journal of Machine Learning Research*, 3(Nov):397–422, 2002.

Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. Botorch: A framework for efficient monte-carlo bayesian optimization. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/f5b1b89d98b7286673128a5fb112cb9a-Abstract.html.

Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. Noisy, parallel, multi-objective BO in BoTorch with qEHVI, qNEHVI, and qNParEGO. https://botorch.org/tutorials/multi_objective_bo, 2021. Accessed: 2022-01-26.

Syrine Belakaria, Aryan Deshwal, and Janardhan Rao Doppa. Max-value entropy search for multi-objective bayesian optimization. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 7823–7833, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/82edc5c9e21035674d481640448049f3-Abstract.html.

Syrine Belakaria, Aryan Deshwal, Nitthilan Kannappan Jayakodi, and Janardhan Rao Doppa. Uncertainty-aware search framework for multi-objective bayesian optimization. In *The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020*, pp. 10044–10052. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/6561.

Lukas Biewald. Experiment tracking with Weights and Biases, 2020. URL https://www.wandb.com/.

Edwin V. Bonilla, Kian Ming Adam Chai, and Christopher K. I. Williams. Multi-task gaussian process prediction. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis (eds.), *Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007*, pp. 153–160. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/hash/66368270ffd51418ec58bd793f2d9b1b-Abstract.html.

Richard P Brent. *Algorithms for minimization without derivatives*. Courier Corporation, 2013.

Tinkle Chugh. Scalarizing functions in Bayesian multiobjective optimization. In *2020 IEEE Congress on Evolutionary Computation (CEC)*, pp. 1–8. IEEE, 2020.

Yaxuan Cui, Shaoqiang Zhang, Ying Liang, Xiangyun Wang, Thomas N Ferraro, and Yong Chen.
Consensus clustering of single-cell RNA-seq data by enhancing network affinity. *Briefings in Bioinformatics*, 2021.

Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. Differentiable expected hypervolume improvement for parallel multi-objective bayesian optimization. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/6fec24eac8f18ed793f5eaad3dd7977c-Abstract.html.

Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. Parallel Bayesian optimization of multiple noisy objectives with expected hypervolume improvement. *ArXiv preprint*, abs/2105.08195, 2021. URL https://arxiv.org/abs/2105.08195.

Kalyanmoy Deb. Multi-objective optimization. In *Search methodologies*, pp. 403–449. Springer, 2014.

Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. *IEEE Transactions on Evolutionary Computation*, 6(2):182–197, 2002.

Angelo Duò, Mark D Robinson, and Charlotte Soneson. A systematic performance evaluation of clustering methods for single-cell RNA-seq data. *F1000Research*, 7, 2018.

P.T. Eendebak and A.R. Vazquez. Oapackage: A python package for generation and analysis of orthogonal arrays, optimal designs and conference designs. *Journal of Open Source Software*, 2019.

Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. *ArXiv preprint*, abs/1808.05377, 2018. URL https://arxiv.org/abs/1808.05377.

Peter I Frazier. A tutorial on Bayesian optimization. *ArXiv preprint*, abs/1807.02811, 2018. URL https://arxiv.org/abs/1807.02811.

Nicoló Fusi, Rishit Sheth, and Melih Elibol. Probabilistic matrix factorization for automated machine learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada*, pp. 3352–3361, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/b59a51a3c0bf9c5228fde841714f523a-Abstract.html.

Jacob R. Gardner, Geoff Pleiss, Kilian Q. Weinberger, David Bindel, and Andrew Gordon Wilson. Gpytorch: Blackbox matrix-matrix gaussian process inference with GPU acceleration. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada*, pp. 7587–7597, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/27e8e17134dd7083b050476733207ea1-Abstract.html.

Pierre-Luc Germain, Anthony Sonrel, and Mark D Robinson. pipeComp, a general framework for the evaluation of computational pipelines, reveals performant single cell RNA-seq preprocessing tools. *Genome Biology*, 21(1):1–28, 2020.

Charlotte Giesen, Hao AO Wang, Denis Schapiro, Nevena Zivanovic, Andrea Jacobs, Bodo Hattendorf, Peter J Schüffler, Daniel Grolimund, Joachim M Buhmann, Simone Brandt, et al. Highly multiplexed imaging of tumor tissues with subcellular resolution by mass cytometry. *Nature Methods*, 11(4):417–422, 2014.

Javier González, Zhenwen Dai, Andreas C.
Damianou, and Neil D. Lawrence. Preferential bayesian optimization. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1282–1291. PMLR, 2017. URL http://proceedings.mlr.press/v70/gonzalez17a.html.

Yuhan Hao, Stephanie Hao, Erica Andersen-Nissen, William M. Mauck III, Shiwei Zheng, Andrew Butler, Maddie J. Lee, Aaron J. Wilk, Charlotte Darby, Michael Zagar, Paul Hoffman, Marlon Stoeckius, Efthymia Papalexi, Eleni P. Mimitou, Jaison Jain, Avi Srivastava, Tim Stuart, Lamar B. Fleming, Bertrand Yeung, Angela J. Rogers, Juliana M. McElrath, Catherine A. Blish, Raphael Gottardo, Peter Smibert, and Rahul Satija. Integrated analysis of multimodal single-cell data. *Cell*, 2021. doi: 10.1016/j.cell.2021.04.048.

Florian Häse, Matteo Aldeghi, Riley J Hickman, Loïc M Roch, and Alán Aspuru-Guzik. Gryffin: An algorithm for Bayesian optimization of categorical variables informed by expert knowledge. *Applied Physics Reviews*, 8(3):031406, 2021.

Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art. *Knowledge-Based Systems*, 212:106622, 2021.

Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, and Ryan P. Adams. Predictive entropy search for multi-objective bayesian optimization. In Maria-Florina Balcan and Kilian Q. Weinberger (eds.), *Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016*, volume 48 of *JMLR Workshop and Conference Proceedings*, pp. 1492–1501. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/hernandez-lobatoa16.html.

Byungjin Hwang, Ji Hyun Lee, and Duhee Bang. Single-cell RNA sequencing technologies and bioinformatics pipelines. *Experimental & Molecular Medicine*, 50(8):1–14, 2018.

Hartland W Jackson, Jana R Fischer, Vito RT Zanotelli, H Raza Ali, Robert Mechera, Savas D Soysal, Holger Moch, Simone Muenst, Zsuzsanna Varga, Walter P Weber, et al. The single-cell pathology landscape of breast cancer. *Nature*, 578(7796):615–620, 2020.

Vladimir Yu Kiselev, Kristina Kirschner, Michael T Schaub, Tallulah Andrews, Andrew Yiu, Tamir Chandra, Kedar N Natarajan, Wolf Reik, Mauricio Barahona, Anthony R Green, et al. SC3: consensus clustering of single-cell RNA-seq data. *Nature Methods*, 14(5):483–486, 2017.

Joshua Knowles. ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. *IEEE Transactions on Evolutionary Computation*, 10(1):50–66, 2006.

Andreas Krause and Cheng Soon Ong. Contextual gaussian process bandit optimization. In John Shawe-Taylor, Richard S. Zemel, Peter L. Bartlett, Fernando C. N. Pereira, and Kilian Q. Weinberger (eds.), *Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain*, pp. 2447–2455, 2011. URL https://proceedings.neurips.cc/paper/2011/hash/f3f1b7fc5a8779a9e618e1f23a7b7860-Abstract.html.

Xuan Liu, Sara JC Gosline, Lance T Pflieger, Pierre Wallet, Archana Iyer, Justin Guinney, Andrea H Bild, and Jeffrey T Chang. Knowledge-based classification of fine-grained immune cell types in single-cell RNA-seq data. *Briefings in Bioinformatics*, 22(5):bbab039, 2021.

Malte D Luecken and Fabian J Theis. Current best practices in single-cell RNA-seq analysis: a tutorial.
*Molecular Systems Biology*, 15(6):e8746, 2019.

Wesley J Maddox, Maximilian Balandat, Andrew G Wilson, and Eytan Bakshy. Bayesian optimization with high-dimensional outputs. *Advances in Neural Information Processing Systems*, 34:19274–19287, 2021.

Mary B Makarious, Hampton L Leonard, Dan Vitale, Hirotaka Iwaki, David Saffo, Lana Sargent, Anant Dadu, Eduardo Salmerón Castaño, John F Carter, Melina Maleknia, et al. GenoML: Automated machine learning for genomics. *ArXiv preprint*, abs/2103.03221, 2021. URL https://arxiv.org/abs/2103.03221.

Davis J. McCarthy, Kieran R. Campbell, Aaron T. L. Lun, and Quin F. Wills. Scater: pre-processing, quality control, normalisation and visualisation of single-cell RNA-seq data in R. *Bioinformatics*, 33:1179–1186, 2017. doi: 10.1093/bioinformatics/btw777.

Andrew McHutchon. Differentiating Gaussian Processes. *Cambridge (ed.)*, 2013.

Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods for seeking the extremum. *Towards Global Optimization*, 2(117-129):2, 1978.

Hirotaka Nakayama, Yeboon Yun, and Min Yoon. *Sequential approximate multiobjective optimization using computational intelligence*. Springer Science & Business Media, 2009.

Marc Oostrum, Maik Müller, Fabian Klein, Roland Bruderer, Hui Zhang, Patrick GA Pedrioli, Lukas Reiter, Panagiotis Tsapogas, Antonius Rolink, Bernd Wollscheid, et al. Classification of mouse B cell types using surfaceome proteotype maps. *Nature Communications*, 10(1):1–9, 2019.

Biswajit Paria, Kirthevasan Kandasamy, and Barnabás Póczos. A flexible framework for multi-objective bayesian optimization using random scalarizations. In Amir Globerson and Ricardo Silva (eds.), *Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019*, volume 115 of *Proceedings of Machine Learning Research*, pp. 766–776. AUAI Press, 2019. URL http://proceedings.mlr.press/v115/paria20a.html.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 8024–8035, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011.

Ren Qi, Anjun Ma, Qin Ma, and Quan Zou. Clustering and classification methods for single-cell RNA-sequencing data. *Briefings in Bioinformatics*, 21(4):1196–1208, 2020.

Surajit Ray and Saumyadipta Pyne. A computational framework to emulate the human perspective in flow cytometric data analysis. *PloS One*, 7(5):e35693, 2012.

Diederik M Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making.
*Journal of Artificial Intelligence Research*, 48:67–113, 2013.

Bin Xin Ru, Ahsan S. Alvi, Vu Nguyen, Michael A. Osborne, and Stephen J. Roberts. Bayesian optimisation over multiple continuous and categorical inputs. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning Research*, pp. 8276–8285. PMLR, 2020. URL http://proceedings.mlr.press/v119/ru20a.html.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical bayesian optimization of machine learning algorithms. In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger (eds.), *Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States*, pp. 2960–2968, 2012. URL https://proceedings.neurips.cc/paper/2012/hash/05311655a15b75fab86956663e1819cd-Abstract.html.

Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat, and Ryan P. Adams. Scalable bayesian optimization using deep neural networks. In Francis R. Bach and David M. Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015*, volume 37 of *JMLR Workshop and Conference Proceedings*, pp. 2171–2180. JMLR.org, 2015. URL http://proceedings.mlr.press/v37/snoek15.html.

Marlon Stoeckius, Christoph Hafemeister, William Stephenson, Brian Houck-Loomis, Pratip K Chattopadhyay, Harold Swerdlow, Rahul Satija, and Peter Smibert. Simultaneous epitope and transcriptome measurement in single cells. *Nature Methods*, 14(9):865–868, 2017.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. *Biometrika*, 25(3-4):285–294, 1933.

Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. *ArXiv preprint*, abs/2104.10201, 2021. URL https://arxiv.org/abs/2104.10201.

Johanna Wagner, Maria Anna Rapsomaniki, Stéphane Chevrier, Tobias Anzeneder, Claus Langwieder, August Dykgers, Martin Rees, Annette Ramaswamy, Simone Muenst, Savas Deniz Soysal, et al. A single-cell atlas of the tumor and immune ecosystem of human breast cancer. *Cell*, 177(5):1330–1345, 2019.

Christopher K Williams and Carl Edward Rasmussen. *Gaussian processes for machine learning*, volume 2. MIT Press, Cambridge, MA, 2006.

F Alexander Wolf, Philipp Angerer, and Fabian J Theis. Scanpy: large-scale single-cell gene expression data analysis. *Genome Biology*, 19(1):1–5, 2018.

Kaifeng Yang, Michael Emmerich, André Deutz, and Thomas Bäck. Efficient computation of expected hypervolume improvement using box decomposition algorithms. *Journal of Global Optimization*, 75(1):3–34, 2019a.

Kevin K Yang, Zachary Wu, and Frances H Arnold. Machine-learning-guided directed evolution for protein engineering. *Nature Methods*, 16(8):687–694, 2019b.

Shun H Yip, Pak Chung Sham, and Junwen Wang. Evaluation of tools for highly variable gene discovery from single-cell RNA-seq data. *Briefings in Bioinformatics*, 20(4):1583–1589, 2019.

Luke Zappia, Belinda Phipson, and Alicia Oshlack. Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
*PLoS Computational Biology*, 14(6):e1006245, 2018. doi: 10.1371/journal.pcbi.1006245.

Allen W Zhang, Ciara O'Flanagan, Elizabeth A Chavez, Jamie LP Lim, Nicholas Ceglia, Andrew McPherson, Matt Wiens, Pascale Walters, Tim Chan, Brittany Hewitson, et al. Probabilistic cell-type assignment of single-cell RNA-seq for tumor microenvironment profiling. *Nature Methods*, 16(10):1007–1015, 2019.

Luisa M Zintgraf, Timon V Kanters, Diederik M Roijers, Frans Oliehoek, and Philipp Beau. Quality assessment of MORL algorithms: A utility-based approach. In *Benelearn 2015: proceedings of the 24th annual machine learning conference of Belgium and the Netherlands*, 2015.

## A **Implementation Details**

## A.1 **Method Hyperparameters**

Hyperparameters for all methods are summarized in Supplementary Table 3.

**MANATEE** MANATEE (with both the AS and SA acquisition functions) was implemented with PyTorch v. 1.9.0 (Paszke et al., 2019) and gpytorch v. 1.6.0 (Gardner et al., 2018) for the Gaussian process model and inference. Optimization was performed with the LBFGS optimizer. At every acquisition step, the model was initialized and fit to the current training set 5 times and the model with the highest log-likelihood was kept. If fitting failed, the process would be re-tried a maximum of 20 times before halting. Optimization of the acquisition function was initialized with the maximum of 100 random samples. The above implementation details also apply to the random scalarization (RS) baseline. Optimization of the SA and AS acquisition functions was performed with the LBFGS optimizer with line search. For MANATEE, maxima of the posterior mean were identified by computing the first derivative of the posterior mean, finding its zeros using Brent's method (Brent, 2013) as implemented by scipy.optimize.brentq with default parameters, and computing the second derivative at those locations. A candidate was declared a maximum not at the boundary if its second derivative was less than -10 and if the candidate was at least 0.01 units away from the range extrema.
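A minimal sketch of this maximum-not-at-the-boundary check follows, assuming `dmu` and `d2mu` are callables returning the first and second derivatives of the GP posterior mean (Appendix D) at a scalar input; the grid-based root bracketing and the helper name are illustrative choices, not the exact implementation.

```python
# A minimal sketch of the interior-maximum detection described above.
import numpy as np
from scipy.optimize import brentq

def interior_maxima(dmu, d2mu, xmin, xmax, n_grid=200,
                    curvature_bound=-10.0, min_dist=0.01):
    grid = np.linspace(xmin, xmax, n_grid)
    maxima = []
    for a, b in zip(grid[:-1], grid[1:]):
        if dmu(a) * dmu(b) < 0:  # the first derivative changes sign in (a, b)
            root = brentq(dmu, a, b)
            # Accept the candidate if the curvature is strongly negative and
            # it lies at least `min_dist` units away from the range extrema.
            if d2mu(root) < curvature_bound and min(root - xmin, xmax - root) >= min_dist:
                maxima.append(root)
    return maxima
```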
**qNEHVI** The qNEHVI approach was implemented with botorch v. 0.6.1.dev37+g4f0a2889 (Balandat et al., 2020). The development version was used to facilitate usage of qNEHVI with the KroneckerMultiTaskGP model (Maddox et al., 2021). The implementation closely followed the tutorial on multi-objective Bayesian optimization (Balandat et al., 2021). Batch size was set to 1. fit_gpytorch_model was called with max_retries set to 20. Other parameters were set following the tutorial. The reference point was set to the minimum of the initial acquired points. To accommodate > 5 objectives, we used approximate hypervolume computation by setting the alpha parameter according to the heuristic based on the number of objectives proposed in Daulton et al. (2020).

**qNParEGO** The qNParEGO approach was implemented with botorch v. 0.6.1.dev37+g4f0a2889 (Balandat et al., 2020). The implementation closely followed the tutorial (Balandat et al., 2021). Batch size was set to 1. fit_gpytorch_model was called with max_retries set to 20. Other parameters were set following the tutorial.

**USeMO** USeMO was implemented using the code deposited in the http://github.com/belakaria/USeMO repository, with main.py adapted to work with the pipelines for the IMC and scRNA-seq experiments. USeMO was run with the Thompson sampling (TS) acquisition function (Thompson, 1933), as USeMO-TS and USeMO-EI (EI, expected improvement (Mockus et al., 1978)) were shown to outperform existing methods (Belakaria et al., 2020) and TS did not require a hyperparameter to be set, unlike the exploration/exploitation trade-off hyperparameter in the implementation of the EI acquisition function. All other hyperparameters were left as in main.py.

Supplementary Table 3: Method hyperparameters used in the experiments. RS: random scalarization.

| Method           | Parameter             | Value                      | Explanation                                       |
|------------------|-----------------------|----------------------------|---------------------------------------------------|
| MANATEE          | p(λk = 1)             | 0.5                        | Prior over binary λk is set as Bernoulli(0.5) (3) |
| MANATEE          | π1, π0                | 0.75, 0.25                 | Bernoulli hyperparameters for p(Bk \| λk = i)     |
| MANATEE          | δ²f̄/δx² <            | -10                        | Upper bound to call a max                         |
| MANATEE          | Min distance          | 0.01                       | Distance from max to extrema                      |
| MANATEE          | l >                   | 0.1                        | Kernel lengthscale constraint                     |
| MANATEE          | σ²                    | 1                          | Kernel variance                                   |
| MANATEE          | σ²εk >                | 0.01                       | Observation noise variance constraint             |
| MANATEE          | Line search function  | strong_wolfe               | LBFGS optimizer arg                               |
| MANATEE, RS      | UCB βt                | 0.125 log(2t + 1)          | Set as in Paria et al. (2019)                     |
| MANATEE, RS      | GP fits               | 5                          | Model inits and fits at each acquisition          |
| MANATEE, RS      | GP fit re-tries       | 20                         | Max re-tries to fit model at each acquisition     |
| MANATEE, RS      | Acquisition samples   | 100                        | Initial samples from acquisition function         |
| qNEHVI, qNParEGO | MC_SAMPLES            | 128                        | QMC sampler arg, set as in Balandat et al. (2021) |
| qNEHVI, qNParEGO | max_retries           | 20                         | botorch.fit_gpytorch_model arg                    |
| qNEHVI, qNParEGO | batch_range           | (0, −1)                    | QMC sampler arg, set as in Balandat et al. (2021) |
| qNEHVI, qNParEGO | GP model              | KroneckerMultiTaskGP       | Set as in Balandat et al. (2021)                  |
| qNEHVI, qNParEGO | BATCH_SIZE            | 1                          | Points to acquire                                 |
| qNEHVI, qNParEGO | NUM_RESTARTS          | 20                         | Optimization re-starts, set as in Balandat et al. (2021) |
| qNEHVI, qNParEGO | RAW_SAMPLES           | 1024                       | Acquisition samples, set as in Balandat et al. (2021) |
| qNEHVI, qNParEGO | batch_limit           | 5                          | optimize_acqf arg, set as in Balandat et al. (2021) |
| qNEHVI, qNParEGO | maxiter               | 200                        | optimize_acqf arg, set as in Balandat et al. (2021) |
| qNEHVI           | reference point       | sample minimum             | Lower HV bound                                    |
| qNEHVI           | alpha                 | 10^(−8+#objectives)        | Approximate partitioning level                    |
| USeMO            | reference point       | 10^5                       | Set as in github.com/belakaria/USeMO              |
| USeMO            | acquisition function  | TS                         | Thompson sampling (Thompson, 1933)                |
| USeMO            | batch_size            | 1                          | Points to acquire                                 |
| USeMO            | d                     | 1                          | Input dimensionality                              |
| USeMO            | beta                  | log(t^(d/2+2) π² / 0.15)   | Set as in github.com/belakaria/USeMO              |
| USeMO            | algorithm             | NSGA-II (Deb et al., 2002) | Set as in github.com/belakaria/USeMO              |
| USeMO            | function evaluations  | 2500                       | Set as in github.com/belakaria/USeMO              |

## A.2 **Experimental Setup**

Parameters for all experimental procedures are summarized in Supplementary Table 4.

Supplementary Table 4: Parameters of the experimental procedure.

| Experiment | Parameter                | Value |
|------------|--------------------------|-------|
| Toy        | Number of initial points | 5     |
| Toy        | Number of acquisitions   | 30    |
| Toy        | Replicates               | 100   |
| Toy        | xmin                     | 0     |
| Toy        | xmax                     | 1     |
| Toy        | Number of objectives     | 5     |
| IMC        | Number of initial points | 5     |
| IMC        | Number of acquisitions   | 35    |
| IMC        | Replicates               | 98    |
| IMC        | xmin                     | 1     |
| IMC        | xmax                     | 100   |
| IMC        | Number of objectives     | 7     |
| scRNA-seq  | Number of initial points | 5     |
| scRNA-seq  | Number of acquisitions   | 36    |
| scRNA-seq  | Replicates               | 100   |
| scRNA-seq  | xmin                     | 0.01  |
| scRNA-seq  | xmax                     | 0.5   |
| scRNA-seq  | Number of objectives     | 9     |

**Toy experiment** The Pareto front on the toy objectives was computed with the OApackage (Eendebak & Vazquez, 2019).
The initial dataset contained 5 training points at random locations and MANATEE-SA performed 30 acquisition steps. The experiment was repeated 100 times. The range of the optimized parameter x was between 0 and 1. The toy objectives are described in Appendix B.1.

**IMC experiment** At each acquisition step, data was normalized with the acquired cofactor value and clustered. Mean marker expression in each cluster was computed on the data normalized with cofactor = 1. The co-expression objective values were computed as the Pearson correlation between the mean expression of a marker pair across clusters. The experiment was repeated 98 times. The range of the optimized cofactor value was between 1 and 100. The overall meta-objective maximum for ARI and NMI, used to compute regrets, was set as the maximum of ARI/NMI values computed from all acquisitions by all methods (MANATEE-SA, MANATEE-AS, the RA and RS baselines, qNEHVI, qNParEGO, USeMO), including the behaviour ablation methods (MANATEE-SA with leave-one-out behaviours). qNEHVI returned an error and completed fewer than the total of 35 acquisitions on 49 runs; qNParEGO returned an error and completed fewer than the total of 35 acquisitions on 80 runs; all other methods completed all acquisitions on all runs. The objectives are described in Appendix F.2.

Supplementary Figure 4: Optimized cofactor value as a function of acquisition step for all methods. M-SA: MANATEE with scalarized acquisition, M-AS: MANATEE with acquisition of scalarized function, RA: random acquisition, RS: random scalarization. Solid line shows the mean and shaded region denotes the 95% confidence interval.

**scRNA-seq experiment** At each acquisition step, data was subsetted to the top highly variable genes according to the acquired proportion value and clustered. The co-expression objective values were computed as the Pearson correlation between the mean expression of a marker pair across clusters. The experiment was repeated 100 times. The range of the optimized highly variable gene proportion was between 0.01 and 0.5. Unsupervised cluster purity metrics were computed on the PCA transform of the normalized scRNA-seq data. The overall meta-objective maximum for ARI and NMI, used to compute regrets, was set as the maximum of ARI/NMI values computed from all acquisitions by all methods (MANATEE-SA, MANATEE-AS, the RA and RS baselines, qNEHVI, qNParEGO, USeMO), including the behaviour ablation methods (MANATEE-SA with leave-one-out behaviours). qNEHVI returned an error and completed fewer than the total of 36 acquisitions on 64 runs; qNParEGO returned an error and completed fewer than the total of 36 acquisitions on 62 runs; all other methods completed all acquisitions on all runs. The objectives are described in Appendix F.3.

Experimental results were tracked with Weights & Biases (Biewald, 2020). Reported regrets include acquisitions from incomplete runs for qNEHVI and qNParEGO.

Supplementary Figure 5: 150 random samples from the 7 co-expression objectives and the meta-objectives (ARI, NMI) in the IMC cofactor selection experiment. Orange indicates samples on the Pareto front, blue indicates Pareto-dominated samples, and red indicates points acquired by each method. The acquisitions shown for each method are from the run with the highest average ARI value. Pareto optimality of random samples was computed with the OApackage (Eendebak & Vazquez, 2019).
Supplementary Figure 6: Inclusion probabilities for MANATEE-SA and MANATEE-AS for each of the objectives as a function of acquisition step in the IMC cofactor selection experiment. Solid line shows the mean and shaded region denotes the 95% confidence interval across runs.

Supplementary Figure 7: Optimized percentage of highly variable genes as a function of acquisition step for all methods. M-SA: MANATEE with scalarized acquisition, M-AS: MANATEE with acquisition of scalarized function, RA: random acquisition, RS: random scalarization. Solid line shows the mean and shaded region denotes the 95% confidence interval.

## A.3 **Cross-Validation Experiments**

In both experiments, subsampled and processed data (as described in Appendix B) was divided into 70/30% train/test splits in 5-fold cross-validation, where the parameter was optimized with MANATEE-SA only on train and cumulative regrets were computed on train and test. To ensure sufficient data, the CITE-seq data was subsampled to 3000 cells in the cross-validation experiments. For the highly variable gene selection experiment, the same set of highly variable genes (computed according to the proportion value acquired on train at each step) was used when computing regrets on test. For both experiments, the initial dataset contained 5 training points at random locations and MANATEE-SA performed 10 acquisitions. Each cross-validation experiment was repeated 100 times. When computing regrets, the overall meta-objective maximum for ARI and NMI was set as the maximum of ARI/NMI values computed from the acquisitions by MANATEE-SA in each run and each fold. For each run, cumulative regret was averaged over folds. Supplementary Table 5 shows cross-validation cumulative regrets averaged over runs on train and test for both experiments.

Supplementary Table 5: 5-fold cross-validation mean cumulative regret on train and test data splits. IMC denotes the IMC cofactor optimization experiment, scRNA-seq denotes the highly variable gene proportion optimization experiment. ARI: adjusted Rand index, NMI: normalized mutual information.

| Experiment, metric, split | Cumulative regret |
|---------------------------|-------------------|
| IMC ARI train             | 0.015 ± 0.003     |
| IMC ARI test              | 0.017 ± 0.003     |
| IMC NMI train             | 0.021 ± 0.005     |
| IMC NMI test              | 0.023 ± 0.005     |
| scRNA-seq ARI train       | 0.095 ± 0.015     |
| scRNA-seq ARI test        | 0.089 ± 0.015     |
| scRNA-seq NMI train       | 0.126 ± 0.021     |
| scRNA-seq NMI test        | 0.126 ± 0.020     |

## A.4 **Behaviour Ablation Experiments**

In the leave-one-out behaviour experiments, MANATEE-SA was run without one of the behaviours at a time. Supplementary Table 6 shows results for the IMC cofactor selection experiment, with each row indicating the regrets for MANATEE-SA run without the given behaviour. Supplementary Table 7 similarly shows results for the scRNA-seq highly variable gene selection experiment.

Supplementary Figure 8: 150 random samples from the 9 objectives and the meta-objectives (ARI, NMI) in the % HVGs selection experiment. Orange indicates samples on the Pareto front, blue indicates Pareto-dominated samples, and red indicates points acquired by each method. The acquisitions shown for each method are from the run with the highest average ARI value. Pareto optimality of random samples was computed with the OApackage (Eendebak & Vazquez, 2019).

Supplementary Table 6: Results for the behaviour ablation experiments for IMC cofactor optimization. CR: cumulative regret, FR: full regret, BR: Bayes regret. ARI: adjusted Rand index, NMI: normalized mutual information. Values are mean (s.d.).

| Ablated             | ARI CR       | ARI FR       | ARI BR       | NMI CR       | NMI FR       | NMI BR       |
|---------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| Explainability      | 0.016(0.004) | 0.002(0.003) | 0.007(0.004) | 0.017(0.006) | 0.002(0.005) | 0.007(0.007) |
| Inter-obj agreement | 0.018(0.005) | 0.003(0.005) | 0.007(0.006) | 0.020(0.009) | 0.003(0.008) | 0.008(0.010) |
| Max not at boundary | 0.018(0.007) | 0.004(0.006) | 0.008(0.007) | 0.020(0.012) | 0.004(0.011) | 0.009(0.012) |

Supplementary Table 7: Results for the behaviour ablation experiments for scRNA-seq highly variable gene selection. CR: cumulative regret, FR: full regret, BR: Bayes regret. ARI: adjusted Rand index, NMI: normalized mutual information. Values are mean (s.d.).

| Ablated             | ARI CR       | ARI FR       | ARI BR       | NMI CR       | NMI FR       | NMI BR       |
|---------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| Explainability      | 0.126(0.023) | 0.055(0.012) | 0.063(0.012) | 0.121(0.032) | 0.021(0.009) | 0.030(0.012) |
| Inter-obj agreement | 0.129(0.018) | 0.055(0.011) | 0.063(0.012) | 0.126(0.025) | 0.020(0.010) | 0.030(0.014) |
| Max not at boundary | 0.120(0.022) | 0.055(0.012) | 0.063(0.013) | 0.113(0.033) | 0.020(0.010) | 0.030(0.013) |

## A.5 **Computing Regret Curves**

Regret curves for cumulative regret (CR), full regret (FR), and Bayes regret (BR) (denoted CR(t′), FR(t′), BR(t′)) were computed as a function of the acquisition step t′ = 1, . . . , T as follows:

$$\mathrm{CR}(t^{\prime})=\frac{1}{t^{\prime}}\sum_{t=1}^{t^{\prime}}\left(y^{*}-h(x_{t})\right),\;\forall t^{\prime}=1,\ldots,T$$
$$\mathrm{FR}(t^{\prime})=y^{*}-\max_{x\in X_{1:t^{\prime}}}h(x),\;\forall t^{\prime}=1,\ldots,T$$
$$\mathrm{BR}(t^{\prime})=\frac{1}{t^{\prime}}\sum_{t=1}^{t^{\prime}}\left(y^{*}-\max_{x\in X_{1:t}}h(x)\right),\;\forall t^{\prime}=1,\ldots,T$$

where the inner maximum in BR is taken over the acquisitions made up to step t, so that BR averages the simple regret over steps.

Supplementary Figure 9: Inclusion probabilities for MANATEE-SA and MANATEE-AS for each of the objectives as a function of acquisition step in the % HVGs selection experiment. Solid line shows the mean and shaded region denotes the 95% confidence interval across runs.

Note that these cumulative quantities up to step t′ for t′ = 1, . . . , T are normalized by the number of acquisition steps so far, leading to most curves decreasing with the acquisition step. However, these quantities are not guaranteed to decrease with the acquisition step for all methods, as they are computed w.r.t. the meta-objectives (ARI, NMI), while all methods make acquisitions with access only to the heuristic objectives, which do not necessarily agree with the meta-objectives in our setup. The heuristic objectives and meta-objectives are also noisy functions, owing to being computed from real, noisy genomics data.
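A minimal sketch of these regret curves follows, assuming `y_star` is the overall meta-objective maximum and `h` holds the meta-objective values h(x_1), . . . , h(x_T) of the acquisitions in order; the function name is an illustrative choice.

```python
# A minimal sketch of the CR/FR/BR regret curves defined above.
import numpy as np

def regret_curves(y_star: float, h: np.ndarray):
    T = len(h)
    running_best = np.maximum.accumulate(h)  # max of h over X_{1:t}
    cr = np.array([np.mean(y_star - h[:t]) for t in range(1, T + 1)])
    fr = y_star - running_best
    br = np.array([np.mean(y_star - running_best[:t]) for t in range(1, T + 1)])
    return cr, fr, br
```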
Regret curves for all methods for the IMC cofactor selection experiment are shown in Supplementary Figure 10a and those for the % HVGs selection experiment are shown in Supplementary Figure 10b.

Supplementary Figure 10: Normalized cumulative CR, BR, FR for the ARI and NMI meta-objectives for all methods in both experiments. Solid line shows the mean and shaded region denotes the 95% confidence interval. CR: cumulative regret, FR: full regret, BR: Bayes regret. M-SA: MANATEE with scalarized acquisition, M-AS: MANATEE with acquisition of scalarized function, RA: random acquisition, RS: random scalarization. ARI: adjusted Rand index, NMI: normalized mutual information.

Supplementary Figure 11: Boxplots showing modes of the acquisitions made in each run by the two MANATEE variants for a) the IMC cofactor selection experiment and b) the % HVGs selection experiment. The mode of the acquisitions made in a run represents the parameter value selected by a method in that run. Modes were computed on the acquisition values rounded to 2 decimal places. Each run (repetition) of an experiment had a different random seed controlling the initial training set and the initialization of the Gaussian process model. M-SA: MANATEE-SA, M-AS: MANATEE-AS.

## A.6 **Computing Best Achieved Hypervolume**

For each experiment, the hypervolume was computed with the compute_hypervolume() function of the DominatedPartitioning BoTorch class. At each acquisition step, we computed the dominated hypervolume of the dataset acquired so far w.r.t. the reference point. The acquired dataset included the initial points and the acquisitions made up to the current step. All acquisitions were scaled to [0, 1] to account for the different ranges of the objectives. To scale the Calinski and Harabasz score and the Davies-Bouldin score, we computed their upper and lower bounds (note that we define our objective as the negative Davies-Bouldin score) as the maximum and minimum acquired value across all methods in all runs, respectively. We defined the lower bound of the Calinski and Harabasz score and the upper bound of the negative Davies-Bouldin score as 0. For all correlation-based objectives and the silhouette width, we defined the lower and upper bounds as -1 and 1, respectively. For each experiment, we defined the reference point as the lower bound of each objective. Specifically, for the correlation-based objectives and the silhouette width the lower bound was -1, for the Calinski and Harabasz score the lower bound was 0, and for the Davies-Bouldin score the lower bound was set as the minimum of all acquisitions.

The curves showing the best achieved hypervolume at each iteration for all methods in both experiments are shown in Supplementary Figure 12. The best achieved hypervolume at each iteration was computed as the maximum hypervolume at that iteration across all runs. For the cofactor selection experiment, the maxima were computed across 98 runs for MANATEE-SA, RS, RA, and USeMO; 96 runs for MANATEE-AS; 49 successfully terminated runs for qNEHVI; and 17 successfully terminated runs for qNParEGO. For the highly variable gene selection experiment, the maxima were computed across 100 runs for RA and USeMO; 99 runs for MANATEE-SA, MANATEE-AS, and RS; 36 successfully terminated runs for qNEHVI; and 37 successfully terminated runs for qNParEGO.

Supplementary Figure 12: Best achieved hypervolume at each acquisition step for a) the IMC cofactor selection experiment and b) the % HVGs selection experiment. The dominated hypervolume of the dataset acquired so far was computed at each acquisition step and the maximum was selected across all runs. Each run (repetition) of an experiment had a different random seed controlling the initial training set and the initialization of the Gaussian process model. M-SA: MANATEE-SA, M-AS: MANATEE-AS, RA: random acquisition, RS: random scalarization.
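A minimal sketch of this hypervolume computation follows, assuming `Y` is an (n × K) tensor of acquired objective values already scaled to [0, 1] and `ref_point` is a length-K tensor of per-objective lower bounds; the wrapper function is an illustrative choice.

```python
# A minimal sketch of the dominated hypervolume computation described above,
# using the DominatedPartitioning class named in the text.
import torch
from botorch.utils.multi_objective.box_decompositions.dominated import (
    DominatedPartitioning,
)

def dominated_hypervolume(Y: torch.Tensor, ref_point: torch.Tensor) -> float:
    bd = DominatedPartitioning(ref_point=ref_point, Y=Y)
    return bd.compute_hypervolume().item()
```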
## B **Data Processing**

## B.1 **Toy Data**

The 5 objectives in the toy experiment had the following functional forms, all defined on x ∈ [0, 1]:

1. $y_1(x) = \max(0, \sin 2\pi x) + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.3^2)$
2. $y_2(x) = \sin 2\pi(x - 0.05) + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.3^2)$
3. $y_3(x) = \sin 2\pi x + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.3^2)$
4. $y_4(x) = -2x + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.8^2)$
5. $y_5(x) = 2x + \epsilon$, $\epsilon \sim \mathcal{N}(0, 0.1^2)$

## B.2 **IMC Data**

The IMC data and the expert-annotated ground truth clustering used in the experiments come from Jackson et al. (2020). Data was randomly subsampled to 5000 cells and the heavy metal markers that weren't conjugated to antibodies were removed. Data was clustered with the scikit-learn (Pedregosa et al., 2011) k-means algorithm with k = 10.

## B.3 **CITE-Seq Data**

The CITE-seq data used in these experiments come from Stoeckius et al. (2017), retrieved using the SingleCellMultiModal Bioconductor R package with data version 1.0.0. Cell surface antibody expression was normalized using logNormCounts from the scuttle R package (McCarthy et al., 2017) and clustered using Seurat v. 4.1.0 (Hao et al., 2021) with the top 10 principal components as input and the resolution parameter set to 0.8. Intra-cellular single-cell RNA-seq data was filtered for genes with at least 100 reads and further processed with scanpy (Wolf et al., 2018). Data was randomly subsampled to 1000 cells (except for the cross-validation experiment, where it was subsampled to 3000 cells) and normalized using pp.normalize_total with target_sum=1e4, pp.log1p, and pp.scale. scanpy was used to select highly variable genes, compute the neighbourhood graph with 10 neighbours and the top 40 principal components, compute the PCA decomposition with default arguments, and compute Leiden clustering with the resolution parameter set to 0.8.
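A minimal sketch of this scanpy workflow follows, assuming `adata` is an already filtered and subsampled AnnData object and `hvg_prop` is the acquired proportion of highly variable genes; the function name and the exact ordering of the highly variable gene selection relative to scaling are our illustrative choices.

```python
# A minimal sketch of the scRNA-seq processing and clustering described above.
import scanpy as sc

def cluster_with_hvg_proportion(adata, hvg_prop: float):
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    # Keep the top `hvg_prop` proportion of highly variable genes.
    sc.pp.highly_variable_genes(adata, n_top_genes=int(hvg_prop * adata.n_vars))
    adata = adata[:, adata.var.highly_variable].copy()
    sc.pp.scale(adata)
    sc.pp.pca(adata)
    sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
    sc.tl.leiden(adata, resolution=0.8)
    return adata.obs["leiden"]
```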
## C **Derivations Of Acquisition Functions**

## C.1 **Expectation of scalarization of the single-objective acquisition function of objectives (SA)**

We define the SA acquisition function as $\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[s_{\boldsymbol{\lambda}}(\operatorname{acq}_{\mathrm{UCB}}(\mathbf{f}(x)))]$ and derive the following expression:

$$\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[s_{\boldsymbol{\lambda}}(\boldsymbol{\mu}(x)+\sqrt{\beta}\boldsymbol{\sigma}(x))\right]=\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sum_{k=1}^{K}\lambda_{k}(\mu_{k}(x)+\sqrt{\beta}\sigma_{k}(x))\right]$$
$$=\sum_{k=1}^{K}\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\lambda_{k}\right](\mu_{k}(x)+\sqrt{\beta}\sigma_{k}(x))$$
$$=\sum_{k=1}^{K}p(\lambda_{k}=1|\mathbf{B}_{k})(\mu_{k}(x)+\sqrt{\beta}\sigma_{k}(x))$$

For the SA acquisition function, the optimized expression simplifies to the sum over the K objectives of each objective's $\operatorname{acq}_{\mathrm{UCB}}(f_k(x))$ value weighted by the probability of that objective being useful conditioned on its behaviours, $p(\lambda_k = 1|\mathbf{B}_k)$. While the SA formulation only takes into account the posterior variance of each $f_k$, $k = 1, \ldots, K$, fitting a multi-output Gaussian process to data is still desirable, as it ensures that the posterior form of $\mathbf{f}$ reflects our assumptions about the heuristic objectives, namely that they should be maximized by similar values of x.

## C.2 **Expectation of single-objective acquisition function of the scalarized objectives (AS)**

We define the AS acquisition function as $\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[\operatorname{acq}_{\mathrm{UCB}}(s_{\boldsymbol{\lambda}}(\mathbf{f}(x)))]$ and wish to derive the following expression:

$$\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\operatorname{acq}_{\mathrm{UCB}}\left(\sum_{k=1}^{K}\lambda_{k}f_{k}(x)\right)\right]$$

First, notice that

$$(\lambda_{1}f_{1}(x),\ldots,\lambda_{K}f_{K}(x))\sim{\mathcal N}\left(\operatorname{diag}(\boldsymbol{\lambda})\boldsymbol{\mu}(x),\operatorname{diag}(\boldsymbol{\lambda})\Sigma(x)\operatorname{diag}(\boldsymbol{\lambda})\right),$$

where µ(x) is the posterior mean and Σ(x) is the posterior covariance of f evaluated at some x. Then, their sum is distributed as:

$$\sum_{k}\lambda_{k}f_{k}(x)\sim{\mathcal{N}}\left(\sum_{k}\lambda_{k}\mu_{k}(x),\sum_{k}\lambda_{k}^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}\lambda_{j}\Sigma_{ij}(x)\right).$$

Now, we use this to derive:

$$\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\operatorname{acq}_{\operatorname{UCB}}\left(\sum_{k=1}^{K}\lambda_{k}f_{k}(x)\right)\right]$$
$$=\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sum_{k}\lambda_{k}\mu_{k}(x)+\sqrt{\beta}\cdot\sqrt{\sum_{k}\lambda_{k}^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}\lambda_{j}\Sigma_{ij}(x)}\right]$$
$$=\sum_{k}p(\lambda_{k}=1|\mathbf{B}_{k})\mu_{k}(x)+\sqrt{\beta}\cdot\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sqrt{\sum_{k}\lambda_{k}^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}\lambda_{j}\Sigma_{ij}(x)}\right]$$

For the AS acquisition function, the UCB is applied to the weighted sum of $f_k$, $k = 1, \ldots, K$. The optimized expression contains the sum of the posterior means weighted by $p(\lambda_k = 1|\mathbf{B}_k)$ as before, but the variance term takes into account the posterior covariance of f. The proposed acquisition functions can therefore differ, e.g., in regions of the search space where objectives that have negative posterior covariance with the others, but still carry a large weight due to other behaviours, have large posterior variance. In such cases, the SA formulation assigns a larger value to those regions than the AS formulation.

## C.2.1 **Computing The Variance Term**

We compute the expectation of the variance term w.r.t. p(λ|B) by exhaustively considering all K-dimensional binary vectors $\boldsymbol{\lambda}^r$, $r = 1, \ldots, 2^K$, which is feasible in our experiments with K = 7 and K = 9 objectives. Specifically, we evaluate the expectation as:

$$\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sqrt{\sum_{k}\lambda_{k}^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}\lambda_{j}\Sigma_{ij}(x)}\right]=\sum_{\boldsymbol{\lambda}^{r}\in\{0,1\}^{K}}p(\boldsymbol{\lambda}^{r}|\mathcal{B})\sqrt{\sum_{k}(\lambda_{k}^{r})^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}^{r}\lambda_{j}^{r}\Sigma_{ij}(x)}$$
$$=\sum_{\boldsymbol{\lambda}^{r}\in\{0,1\}^{K}}\left(\prod_{k}p(\lambda_{k}=\lambda_{k}^{r}|\mathbf{B}_{k})\right)\sqrt{\sum_{k}(\lambda_{k}^{r})^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}^{r}\lambda_{j}^{r}\Sigma_{ij}(x)}$$

In problems with large K where this approach becomes infeasible, the expectation can be approximated with S Monte Carlo samples $\lambda_{k}^{s}\sim p(\lambda_{k}|\mathbf{B}_{k})$ for all k and $s = 1, \ldots, S$ as follows:

$$\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sqrt{\sum_{k}\lambda_{k}^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}\lambda_{j}\Sigma_{ij}(x)}\right]\approx\frac{1}{S}\sum_{s=1}^{S}\sqrt{\sum_{k}(\lambda_{k}^{s})^{2}\Sigma_{kk}(x)+2\sum_{1\leq i<j\leq K}\lambda_{i}^{s}\lambda_{j}^{s}\Sigma_{ij}(x)}$$
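A minimal sketch of both acquisition values at a single input x follows, assuming `mu` (length K) and `Sigma` (K × K) are the posterior mean and covariance of f(x), `p_incl` holds the inclusion probabilities p(λk = 1|Bk), and `beta` is the UCB trade-off parameter; the function names are illustrative choices.

```python
# A minimal sketch of the SA and AS acquisition functions derived above.
# For binary lambda, lambda^T Sigma lambda equals the variance term above.
import itertools
import numpy as np

def acq_sa(mu, Sigma, p_incl, beta):
    sigma = np.sqrt(np.diag(Sigma))
    return float(np.sum(p_incl * (mu + np.sqrt(beta) * sigma)))

def acq_as(mu, Sigma, p_incl, beta):
    K = len(mu)
    mean_term = float(np.sum(p_incl * mu))
    # Exhaustive expectation of the variance term over all 2^K binary vectors.
    expected_sd = 0.0
    for bits in itertools.product([0, 1], repeat=K):
        lam = np.array(bits, dtype=float)
        prob = float(np.prod(np.where(lam == 1, p_incl, 1 - p_incl)))
        expected_sd += prob * np.sqrt(lam @ Sigma @ lam)
    return mean_term + float(np.sqrt(beta)) * expected_sd
```

The exhaustive loop is feasible for our K = 7 and K = 9 objectives; for larger K it would be replaced by the Monte Carlo approximation above.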
## D **Derivatives Of A Multi-Output Gaussian Process**

The posterior mean of a single-output Gaussian process with N noisy observations y at a new location x∗ is given by (Williams & Rasmussen, 2006):

$$\bar{f}_{*}=\mathbf{K}(x_{*},X)\left(\mathbf{K}(X,X)+\sigma_{\epsilon}^{2}I_{N}\right)^{-1}\mathbf{y}.$$

## D.1 **First Derivative**

For the exponentiated quadratic kernel function, the first derivative of the posterior mean (which is also the mean of the distribution over derivatives of the GP posterior functions) is (McHutchon, 2013):

$$\frac{\delta\bar{f}_{*}}{\delta x_{*}}=\frac{\delta\mathbf{K}(x_{*},X)}{\delta x_{*}}\boldsymbol{\alpha}=-\Lambda^{-1}\tilde{X}^{T}\left(\mathbf{K}(x_{*},X)^{T}\odot\boldsymbol{\alpha}\right),$$
$$\boldsymbol{\alpha}=\left(\mathbf{K}(X,X)+\sigma_{\epsilon}^{2}I_{N}\right)^{-1}\mathbf{y},$$
$$\tilde{X}=\left[x_{*}-x_{1},\ldots,x_{*}-x_{N}\right]^{T}.$$

For D-dimensional input x, the diagonal matrix Λ collects the lengthscales $l_d^2$, $d = 1, \ldots, D$ on the diagonal, $\tilde{X}$ is an N × D matrix, and ⊙ represents element-wise multiplication. The resulting derivative $\frac{\delta\bar{f}_{*}}{\delta x_{*}}$ is D-dimensional, with each element corresponding to the d-th input dimension.

## D.2 **Second Derivative**

The second derivative of the posterior mean is given by

$$\frac{\delta^{2}\bar{f}_{*}}{\delta(x_{*})^{2}}=\frac{\delta}{\delta x_{*}}\frac{\delta\mathbf{K}(x_{*},X)}{\delta x_{*}}\boldsymbol{\alpha}.$$

The second derivative of k(x∗, xi) is a D × D matrix given by (McHutchon, 2013):

$$\frac{\delta^{2}k(x_{*},x_{i})}{\delta(x_{*})^{2}}=\left(-\Lambda^{-1}+\Lambda^{-1}(x_{*}-x_{i})(x_{*}-x_{i})^{T}\Lambda^{-1}\right)k(x_{*},x_{i})$$

We stack these to compute a D × D × N matrix $\frac{\delta^{2}\mathbf{K}(x_{*},X)}{\delta(x_{*})^{2}}$ and multiply it by α to compute the second derivative of the posterior mean. The resulting derivative $\frac{\delta^{2}\bar{f}_{*}}{\delta(x_{*})^{2}}$ is a D × D matrix with (i, j)-th element corresponding to $\frac{\delta^{2}\bar{f}_{*}}{\delta(x_{*})_{i}\delta(x_{*})_{j}}$ (the second derivative w.r.t. dimensions i and j of x∗).

## D.3 **One-Dimensional Input Case**

For one-dimensional input and lengthscale l, the second derivative of the posterior mean w.r.t. x∗ simplifies to:

$$\frac{\delta^{2}\bar{f}_{*}}{\delta x_{*}^{2}}=\left(-\frac{1}{l^{2}}\mathbf{K}(x_{*},X)+\frac{1}{l^{4}}\tilde{X}^{T}\odot\tilde{X}^{T}\odot\mathbf{K}(x_{*},X)\right)\boldsymbol{\alpha}$$

## D.4 **Multi-Output Gaussian Process**

For a multi-output Gaussian process with M objectives, we arrange the observations as an MN × 1 array: $\mathbf{y} = [y_{11}, \ldots, y_{N1}, \ldots, y_{1M}, \ldots, y_{NM}]^{T}$. We also augment the auxiliary matrix $\tilde{X}$ to become an MN × D matrix: $\tilde{X} = [x_{*}-x_{1}, \ldots, x_{*}-x_{N}, \ldots, x_{*}-x_{1}, \ldots, x_{*}-x_{N}]^{T}$. The multi-output kernel is defined as $\mathbf{K}_{\mathrm{multi}} = \mathbf{K}_{\mathrm{IO}} \otimes \mathbf{K}$ and the additive noise term is $D \otimes I_{N}$, where D is the diagonal matrix with the task-specific observation noises. The first derivative of the posterior mean for a multi-output Gaussian process is D-dimensional for each of the M tasks.
## E **Proof Of Theorem**

Theorem 3.1: If $\mathbb{E}_{p(\lambda_{k}|\mathcal{B}_{k})}[\lambda_{k}]>0\ \forall k$, then $x^{*}=\arg\max_{x}\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[s_{\boldsymbol{\lambda}}(f(x))]$ lies on the Pareto front of f.

Proof: Let $\lambda_{k}\in\{0,1\}$ be a binary variable for $k = 1, \ldots, K$ with $\lambda_{k}\sim p(\lambda_{k}|\mathcal{B}_{k})$ given by a Bernoulli distribution parameterized by a function of $\mathcal{B}_{k}$. Let us denote its expectation as $\psi_{k}:=\mathbb{E}_{p(\lambda_{k}|\mathcal{B}_{k})}[\lambda_{k}]>0\ \forall k$ and denote $\boldsymbol{\lambda}=[\lambda_{1},\ldots,\lambda_{K}]$. We further assume $\lambda_{k}$ is independent of $\lambda_{k'}$ conditioned on its behaviours $\mathcal{B}_{k}$ for all k, k′ with k ≠ k′. We also denote the set of all behaviours $\mathcal{B}=\{\mathcal{B}_{k}\}_{k=1}^{K}$. Then, for a linear scalarization function $s_{\boldsymbol{\lambda}}$ and by linearity of expectation:

$$\begin{aligned}\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[s_{\boldsymbol{\lambda}}(f(x))]&=\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}\left[\sum_{k}\lambda_{k}f_{k}(x)\right]=\sum_{k}f_{k}(x)\cdot\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[\lambda_{k}]\\&=\sum_{k}f_{k}(x)\cdot\mathbb{E}_{p(\lambda_{1},\ldots,\lambda_{K}|\mathcal{B}_{1},\ldots,\mathcal{B}_{K})}[\lambda_{k}]\\&=\sum_{k}f_{k}(x)\cdot\sum_{\lambda_{k}}\lambda_{k}\sum_{\lambda_{i}:\,i\in\{1,\ldots,K\}\setminus k}p(\lambda_{1},\ldots,\lambda_{K}|\mathcal{B}_{1},\ldots,\mathcal{B}_{K})\\&=\sum_{k}f_{k}(x)\cdot\sum_{\lambda_{k}\in\{0,1\}}\lambda_{k}\,p(\lambda_{k}|\mathcal{B}_{k})\\&=\sum_{k}f_{k}(x)\cdot\left(1\cdot p(\lambda_{k}=1|\mathcal{B}_{k})+0\cdot p(\lambda_{k}=0|\mathcal{B}_{k})\right)\\&=\sum_{k}f_{k}(x)\,p(\lambda_{k}=1|\mathcal{B}_{k})=\sum_{k}f_{k}(x)\,\psi_{k}\end{aligned}\tag{8}$$

As $\psi_{k}>0\ \forall k$, $\sum_{k}f_{k}(x)\psi_{k}$ is monotonically increasing in all $f_{k}(x)$. If any scalarization function $s_{\boldsymbol{\lambda}}$ is monotonically increasing in all coordinates $f_{k}(x)$, then $x^{*}=\arg\max_{x}s_{\boldsymbol{\lambda}}(f(x))$ lies on the Pareto front of f for $f=\{f_{k}\}_{k=1}^{K}$ (Roijers et al., 2013; Zintgraf et al., 2015; Paria et al., 2019). Thus, the solution to $\max_{x}\mathbb{E}_{p(\boldsymbol{\lambda}|\mathcal{B})}[s_{\boldsymbol{\lambda}}(f(x))]=\max_{x}\sum_{k}f_{k}(x)\psi_{k}$ lies on the Pareto front of f. In other words, maximizing the expected $s_{\boldsymbol{\lambda}}(f(x))$ under p(λ|B) with our construction returns Pareto-optimal solutions.

## F **Cluster Mean Co-Expression As A Heuristic**

F.1 **Overview**

The cluster mean co-expression heuristic is demonstrated in Figure 13. After clustering the single-cell data, we can consider the expression of two proteins which should be *markers* for a given cell type, i.e. they should either both be co-expressed or not expressed. An example of this is shown in Figure 13 (right). Consequently, the correlation in the cluster means is high. Conversely, if the clustering does not capture the cell types well, the correlation in the cluster means will be low (Figure 13, left). The opposite logic applies if two proteins should be mutually exclusively expressed: the correlation of cluster mean expression should be minimized.

F.2 **IMC Experiment**

The protein pairs used to construct co-expression heuristic objectives are listed in Supplementary Table 8.

F.3 **scRNA-seq Experiment**

The gene pairs used to construct co-expression heuristic objectives are listed in Supplementary Table 9.

![33_image_0.png](33_image_0.png)

Supplementary Figure 13: Mean expression of two example proteins after two sets of clustering. Left: the cluster means separate poorly into the double-positive and double-negative populations that would be expected if the two proteins are markers for the same cell type. Right: the ideal situation, where clusters either co-express both proteins simultaneously or not at all.

Supplementary Table 8: Co-expression of marker pairs used as objectives in the IMC cofactor selection experiment.
| Protein pair                   | Co-expression direction | Cell type        |
|--------------------------------|-------------------------|------------------|
| E-Cadherin, pan-Cytokeratin    | +                       | Epithelial       |
| CD45, CD20                     | +                       | B cell           |
| CD45, CD68                     | +                       | Myeloid          |
| CD45, CD3                      | +                       | T cell           |
| Vimentin, Fibronectin          | +                       | Stromal cell     |
| CD19, CD20                     | +                       | B cell           |
| pan-Cytokeratin, Cytokeratin 5 | +                       | Basal epithelial |

Supplementary Table 9: Co-expression of marker pairs used as objectives in the scRNA-seq highly variable gene selection experiment.

| Gene pair   | Co-expression direction | Cell type(s)      |
|-------------|-------------------------|-------------------|
| CD3E, CD4   | +                       | Regulatory T cell |
| CD3D, CD8A  | +                       | Cytotoxic T cell  |
| PTPRC, CD68 | +                       | Myeloid           |
| CD19, MS4A1 | +                       | B cell            |
| CD3D, CD68  | -                       | T/myeloid         |
| CD68, MS4A1 | -                       | Myeloid/B         |
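To make the heuristic concrete, the following is a minimal numpy sketch of our own (not the authors' code; the function name and the sign convention via `direction` are our choices) of the cluster mean co-expression objective for one marker pair:

```python
# A minimal sketch (ours) of the cluster mean co-expression heuristic in F.1.
import numpy as np

def coexpression_heuristic(expr, labels, marker_a, marker_b, direction=+1):
    """Pearson correlation of per-cluster mean expression for a marker pair.

    expr: (cells, markers) expression matrix; labels: cluster id per cell;
    marker_a / marker_b: column indices of the pair; direction: +1 if the pair
    should be co-expressed, -1 if it should be mutually exclusive."""
    clusters = np.unique(labels)
    means_a = np.array([expr[labels == c, marker_a].mean() for c in clusters])
    means_b = np.array([expr[labels == c, marker_b].mean() for c in clusters])
    r = np.corrcoef(means_a, means_b)[0, 1]
    return direction * r  # larger is better under either direction
```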
Review 1: Summary: This paper proposes MANATEE, an acquisition function for multi-objective Bayesian optimization that infers positive qualities of the different objective functions in terms of practitioner preferences, rather than relying solely on the hypervolume or other scalarizations of the several objective functions. In the experiments here, MANATEE focuses on points with low predictive noise (what they call explainability), correlation between objectives, and non-boundary points. The authors perform experiments with synthetic data and two biology tasks – mass cytometry cofactor detection and single-cell gene selection. The method performs reasonably well, both on cumulative regret (Fig. 10) and on the objectives they were looking for (Tables 1 and 2). The biology tasks seem well connected to real problems despite being one-dimensional, and are designed around the concept of meta-objectives, which softly measure optimization goals.
Strengths and Weaknesses: Full disclosure: I have reviewed this paper before and find that most of both my requested changes and those of the other reviewers have been made. Good work overall.
*Strengths:*
- Maximizing for predictive confidence (high explainability) and not moving towards the boundary in multi-objective problems is certainly a good idea.
- Overall, I think I now better understand the subtle idea of a meta-objective that we're slowly moving the optimization towards. This seems generally applicable across many areas of biological design campaigns, especially when our proxy tasks are defective in some way.
- Experimental tasks are realistic and seem grounded in biological motivation.
- BoTorch code is provided in a common framework.
- Thank you for providing the algorithm box.
*Weaknesses:*
- Only one-dimensional problems are considered in the experiments. All things considered, this isn't that bad of an issue.
- The regret curves only somewhat demonstrate superiority over observations (e.g. over the course of a design campaign), which probably mostly reflects the noisiness of the tasks.
- This is somewhat of a fundamental weakness that can probably be addressed in the discussion: inter-objective agreement seems like it might limit the practitioner's need to make tradeoffs between competing objectives, in favor of jointly suboptimal but reasonably performing objective values.
Requested Changes: Requested changes:
(medium) Can you double-check the cumulative regret plots in Fig. 10b? I'm not entirely sure they should be leveling off; shouldn't they continue upwards, or at least asymptote?
(minor) Fig. 10a: label these as cumulative regret, full regret, Bayes regret.
(minor) Figs. 5, 8: maybe only display the Pareto front, or show the other points with a lower alpha value.
(minor) Fig. 7: these should probably all be on the same plot to maximize the visibility of differences.
(medium) I would also suggest that the authors plot the best achieved hypervolume across iterations for the tasks, as that seems to be a common concern.
Appendix D: I still think that this can all be achieved with autograd, but it's really a minor point.
Broader Impact Concerns: n/a

==================================================

Review 2: Summary: I have reviewed the previous version of this submission. The current paper has addressed the outstanding concerns from the last reviewing round satisfactorily. I recommend accepting the paper.
Strengths and Weaknesses: N/A Requested Changes: N/A Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: This resubmission was only reviewed by two reviewers because we were unable to reach one of the original reviewers. Both reviewers find that the resubmission addresses all reviewer comments raised for the original submission and is publishable. ==================================================
# Do We Need To Estimate The Variance In Robust Mean Estimation?

Anonymous authors
Paper under double-blind review

## Abstract

In this paper, we propose self-tuned robust estimators for estimating the mean of heavy-tailed distributions, where heavy-tailed distributions refer to distributions with only finite variances. Our method involves introducing a new loss function that considers both the mean parameter and a robustification parameter. By simultaneously optimizing the empirical loss function with respect to both parameters, the resulting estimator for the robustification parameter can automatically adapt to the unknown data variance, and the resulting mean estimator achieves near-optimal finite-sample performance. Our approach outperforms previous methods in terms of both computational and asymptotic efficiency. Specifically, it does not require cross-validation or Lepski's method to tune the robustification parameter, and the variance of our estimator achieves the Cramér-Rao lower bound.

## 1 Introduction

The success of numerous statistical and learning methods heavily relies on the assumption of light-tailed or sub-Gaussian errors (Wainwright, 2019). A random variable Z is considered to have sub-Gaussian tails if there exist constants $c_{1}$ and $c_{2}$ such that $\mathbb{P}(|Z-\mathbb{E}Z|>t)\leq c_{1}\exp(-c_{2}t^{2})$ for any t ≥ 0. However, in many practical applications, data are often collected with a high degree of noise. For instance, in the context of gene expression data analysis, it has been observed that certain gene expression levels exhibit kurtoses much larger than 3, regardless of the normalization method used (Wang et al., 2015). Furthermore, a recent study on functional magnetic resonance imaging (fMRI) (Eklund et al., 2016) demonstrates that the principal cause of invalid fMRI inferences is that the data do not follow the assumed Gaussian shape. It is therefore important to develop robust statistical methods with desirable statistical performance in the presence of heavy-tailed data.

In this paper, we focus on robust mean estimation, which serves as the foundation for tackling more general problems. Specifically, we consider a generative model where the data $\{y_{i},\ 1\leq i\leq n\}$ are generated according to

$$y_{i}=\mu^{*}+\varepsilon_{i},\ 1\leq i\leq n,\tag{1.1}$$

where the random errors $\varepsilon_{i}\in\mathbb{R}$ are independent and identically distributed (i.i.d.) samples from ε, which follows the law $F_{0}$. We assume that ε has mean zero and only a finite variance, with $\mathbb{E}_{\varepsilon\sim F_{0}}\,\varepsilon=0$ and $\mathbb{E}_{\varepsilon\sim F_{0}}\,\varepsilon^{2}=\sigma^{2}$, where $\mathbb{E}_{\varepsilon\sim F_{0}}$ denotes expectation under the distribution $F_{0}$. When estimating the mean µ∗, the sample mean estimator $\sum_{i=1}^{n}y_{i}/n$ is known to achieve, at best, a polynomial-type nonasymptotic confidence width (Catoni, 2012) when the errors have only finite variances.
This means that there exists a distribution $F=F_{n,\delta}$ for ε with mean 0 and variance σ² such that the following hold simultaneously:

$$\mathbb{P}\left(\left|\sum_{i=1}^{n}\frac{y_{i}}{n}-\mu^{*}\right|\leq\sigma\sqrt{\frac{1}{2n}\cdot\frac{1}{\delta}}\right)\geq1-2\delta,\quad\forall\ \delta\in\left(0,1/2\right);$$
$$\mathbb{P}\left(\left|\sum_{i=1}^{n}\frac{y_{i}}{n}-\mu^{*}\right|\leq\sigma\sqrt{\frac{1}{2n}\cdot\frac{1}{\delta}}\left(1-\frac{2e\delta}{n}\right)^{\frac{n-1}{2}}\right)\leq1-2\delta,\quad\forall\ \delta\in\left(0,(2e)^{-1}\right).$$

In simpler terms, this indicates that the sample mean does not concentrate quickly enough around the true mean when the errors have only finite variances.

Catoni (2012) made an important step towards estimating the mean with faster concentration. Specifically, Catoni introduced a robust mean estimator $\widehat{\mu}(\tau)$, which depends on a tuning parameter τ and whose deviation from the true mean µ∗ scales logarithmically in 1/δ. Specifically, with τ properly tuned, $\widehat{\mu}(\tau)$ satisfies the following concentration inequality:

$$\mathbb{P}\left(|\widehat{\mu}(\tau)-\mu^{*}|\leq c\sigma\sqrt{\frac{1}{n}\cdot\log\left(\frac{1}{\delta}\right)}\right)\geq1-2\delta,\tag{1.2}$$

where c is some constant. We refer to estimators that satisfy this deviation property as sub-Gaussian mean estimators, because they achieve the same performance as if the data were sub-Gaussian. Following Catoni's work, there has been a surge of research on sub-Gaussian estimators using the empirical risk minimization approach in various settings; see Brownlees et al. (2015); Hsu & Sabato (2016); Fan et al. (2017); Avella-Medina et al. (2018); Lugosi & Mendelson (2019b); Lecué & Lerasle (2020); Wang et al. (2021) and Sun et al. (2020), among others. For a recent review, we refer readers to Ke et al. (2019).

To implement Catoni's estimator (Catoni, 2012), there is a tuning parameter τ = τ(σ) that depends on the unknown variance σ² and needs to be carefully tuned. However, in practice, this often involves computationally expensive methods such as cross-validation or Lepski's method (Catoni, 2012). For instance, when using the adaptive Huber estimator (Sun et al., 2020; Avella-Medina et al., 2018) to estimate a d × d covariance matrix entrywise, as many as $d^{2}$ tuning parameters can be involved. If cross-validation or Lepski's method were employed, the computational burden would increase significantly as d grows. Therefore, it is natural to ask the following question:

Is it possible to develop computationally efficient robust mean estimators for distributions with finite but unknown variances?

This paper tackles the aforementioned challenge by proposing a self-tuned robust mean estimator for distributions with only two moments. We propose an empirical risk minimization (ERM) approach based on a novel loss function. The proposed loss function is smooth with respect to both the mean parameter and the robustification parameter. By jointly optimizing both parameters, we prove that the resulting robustification parameter can automatically adapt to the unknowns, and the resulting mean estimator can achieve sub-Gaussian-like performance, as if the data were Gaussian, up to logarithmic terms. Therefore, compared to previous methods, our approach eliminates the need to use cross-validation or Lepski's method to tune the robustification parameters. This significantly boosts the computational efficiency of data analysis in practical applications.
Furthermore, from an asymptotic standpoint, we establish that our proposed estimator is asymptotically efficient, meaning that it achieves the Cramér-Rao lower bound asymptotically (Van der Vaart, 2000).

Related work In addition to the empirical risk minimization (ERM) based methods, median-of-means (MoM) techniques (Devroye et al., 2016; Lugosi & Mendelson, 2019b; Lecué & Lerasle, 2020) are often used to construct robust estimators in the presence of heavy-tailed distributions. A recent survey on median-of-means can be found in the work by Lugosi & Mendelson (2019a). The MoM technique involves randomly splitting the full data into k subsamples and computing the mean of each subsample. The MoM estimator is then obtained as the median of these local estimators. The number of subsamples k is the only user-defined parameter in MoM, and it can be chosen to be independent of the unknowns, making it tuning-free. However, in our experience, MoM often exhibits undesirable numerical performance when compared to ERM-based estimators. To understand this phenomenon, we take an asymptotic viewpoint and compare the asymptotic efficiencies of our estimator and the MoM estimator. We show that the relative efficiency of the MoM estimator with respect to ours is only 2/π ≈ 0.64.

Paper overview Section 2 introduces a novel loss function and presents the empirical risk minimization (ERM) approach. The nonasymptotic theory is presented in Section 3. In Section 4, we compare our proposed estimator with popular alternatives in terms of asymptotic performance. Section 5 provides numerical experiments. Finally, we conclude in Section 6. The supplementary material contains basic calculations, algorithms, a comparison with Lepski's method, proofs of the main results, supporting lemmas, and additional results.

Notation We summarize here the notation that will be used throughout the paper. We use c and C to denote generic constants which may change from line to line. For two sequences of real numbers $\{a_{n},n\geq1\}$ and $\{b_{n},n\geq1\}$, $a_{n}\lesssim b_{n}$ or $a_{n}=O(b_{n})$ denotes $a_{n}\leq Cb_{n}$ for some constant C > 0, and $a_{n}\gtrsim b_{n}$ if $b_{n}\lesssim a_{n}$. We use $a_{n}\propto b_{n}$ to denote that $a_{n}\gtrsim b_{n}$ and $a_{n}\lesssim b_{n}$. $\widetilde{O}$ hides logarithmic terms. The log operator is understood with respect to the base e. For a function f(x, y), we use $\nabla_{x}f(x,y)$ or $\frac{\partial}{\partial x}f(x,y)$ to denote the partial derivative of f(x, y) with respect to x; $\nabla f(x,y)$ denotes the gradient of f(x, y). For a vector $x\in\mathbb{R}^{d}$, $\|x\|_{2}$ denotes its Euclidean norm. For a symmetric positive semi-definite matrix Σ, $\lambda_{\max}(\Sigma)$ denotes its largest eigenvalue.

## 2 A New Loss Function For Self-Tuning

This section introduces a new loss function to robustly estimate the mean of distributions with only finite variances while automatically tuning the robustification parameter. We start with the pseudo-Huber loss (Hastie et al., 2009)

$$\ell_{\tau}(x)=\tau\sqrt{\tau^{2}+x^{2}}-\tau^{2}=\tau^{2}\sqrt{1+x^{2}/\tau^{2}}-\tau^{2},\tag{2.1}$$

where τ serves as a tuning parameter. It exhibits behavior similar to the Huber loss (Huber, 1964), approximating $x^{2}/2$ when |x| is small and resembling a straight line with slope τ for large values of |x|.
To see this, some algebra yields

$$\left\{\begin{array}{ll}{{\frac{\epsilon^{2}-2(1+\epsilon)}{2\epsilon^{2}}x^{2}\leq\ell_{\tau}(x)\leq\frac{x^{2}}{2},}}&{{\mathrm{if~}x^{2}/\tau^{2}\leq4(1+\epsilon)/\epsilon^{2},}}\\{{\frac{\tau|x|}{1+\epsilon}\leq\ell_{\tau}(x)\leq\tau|x|,}}&{{\mathrm{if~}x^{2}/\tau^{2}>4(1+\epsilon)/\epsilon^{2}.}}\end{array}\right.$$

We refer to τ as the robustification parameter because it controls the trade-off between the quadratic loss and the least absolute deviation loss, where the latter induces robustness. In practice, τ is typically tuned using computationally expensive methods such as Lepski's method (Catoni, 2012) or cross-validation (Sun et al., 2020). To avoid these computationally expensive procedures, our goal is to propose a new loss function of both the mean parameter µ and the robustification parameter τ (or its equivalent), so that optimizing over them jointly yields an automatically tuned robustification parameter $\widehat{\tau}$ and thus the corresponding self-tuned mean estimator $\widehat{\mu}(\widehat{\tau})$. Unlike the Huber loss (Sun et al., 2020), the pseudo-Huber loss is a smooth function of τ, making optimization with respect to τ possible.

To motivate the new loss function, let us first consider the estimator $\widehat{\mu}(\tau)$ with τ fixed *a priori*:

$$\widehat{\mu}(\tau)=\underset{\mu}{\operatorname{argmin}}\left\{\frac{1}{n}\sum_{i=1}^{n}\ell_{\tau}(y_{i}-\mu)\right\}.\tag{2.2}$$

We provide an informal result below, with its rigorous version presented as Theorem 3.1 in subsequent sections.

Theorem 2.1 (Informal result). Take $\tau=\sigma\sqrt{n}/z$ with $z=\sqrt{\log(1/\delta)}$, and assume n is sufficiently large. Then for any 0 < δ < 1, with probability at least 1 − δ, we have

$$|{\widehat{\mu}}(\tau)-\mu^{*}|\lesssim\sigma{\sqrt{\frac{\log(2/\delta)}{n}}}.$$

The above result indicates that when $\tau=\sigma\sqrt{n}/z$ with $z=\sqrt{\log(1/\delta)}$, the estimator $\widehat{\mu}(\tau)$ achieves the desired sub-Gaussian performance. The only unknown quantity in τ is the standard deviation σ. In view of this, we treat σ as an unknown parameter v, substitute $\tau=\sqrt{n}v/z$ into (2.1), and obtain

$$\ell(x,v):=\ell_{\tau}(x)=\frac{nv^{2}}{z^{2}}\left(\sqrt{1+\frac{x^{2}z^{2}}{nv^{2}}}-1\right),\tag{2.3}$$

where z is a confidence parameter, because it depends on δ as in the theorem above. Instead of searching for the optimal τ, we will search for the optimal v, which is intuitively expected to be close to the underlying standard deviation σ. We will use the term "robustification parameter" interchangeably for both τ and v, as they are equivalent up to a multiplier.

However, directly minimizing ℓ(x, v) with respect to v leads to meaningless solutions, specifically v = 0 and v = +∞. To avoid these trivialities, we consider a new loss obtained by dividing ℓ(x, v) by v and then adding a linear penalty term av. This will be referred to as the penalized pseudo-Huber loss, formally defined below.

Definition 2.2 (Penalized pseudo-Huber loss). The penalized pseudo-Huber loss $\ell^{\mathrm{p}}(x,v)$ is defined as follows:

$$\ell^{\rm p}(x,v):=\frac{\ell(x,v)+av^{2}}{v}=\frac{nv}{z^{2}}\left(\sqrt{1+\frac{x^{2}z^{2}}{nv^{2}}}-1\right)+av,\tag{2.4}$$

where n is the sample size, z is a confidence parameter, and a is an adjustment factor.
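For concreteness, the penalized pseudo-Huber loss (2.4) and the empirical objective it induces can be written in a few lines. The following is a minimal numpy sketch of our own (the function names are ours); we expose the adjustment factor a as a parameter and default to a = 1/2, which the paper argues below is the optimal choice:

```python
# A minimal sketch (ours) of the penalized pseudo-Huber loss (2.4) and the
# empirical objective L_n it induces.
import numpy as np

def penalized_pseudo_huber(x, v, n, z, a=0.5):
    # l^p(x, v) = (n v / z^2) * (sqrt(1 + x^2 z^2 / (n v^2)) - 1) + a v
    return (n * v / z**2) * (np.sqrt(1.0 + x**2 * z**2 / (n * v**2)) - 1.0) + a * v

def empirical_loss(mu, v, y, z, a=0.5):
    n = len(y)
    return np.mean(penalized_pseudo_huber(y - mu, v, n, z, a))
```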
We thus propose to optimize jointly over µ and v by solving the following ERM problem:

$$\left\{\,\widehat{\mu},\,\widehat{v}\,\right\}=\underset{\mu,\,v}{\operatorname{argmin}}\left\{L_{n}(\mu,v):=\frac{1}{n}\sum_{i=1}^{n}\ell^{p}(y_{i}-\mu,v)\right\}=\underset{\mu,\,v}{\operatorname{argmin}}\,\frac{1}{n}\sum_{i=1}^{n}\left\{\frac{nv}{z^{2}}\left(\sqrt{1+\frac{(y_{i}-\mu)^{2}z^{2}}{nv^{2}}}-1\right)+av\right\}.\tag{2.5}$$

When v is fixed *a priori*, solving the optimization problem above with respect to µ is equivalent to directly minimizing the empirical pseudo-Huber loss in (2.2) with $\tau=v\sqrt{n}/z$. To gain insight into the loss function $L_{n}(\mu,v)$, let us consider its population version:

$$L(\mu,v)=\mathbb{E}L_{n}(\mu,v)={\frac{nv}{z^{2}}}\mathbb{E}\left({\sqrt{1+{\frac{(y-\mu)^{2}z^{2}}{nv^{2}}}}}-1\right)+av.$$

Define the population oracle v∗ as the minimizer of L(µ∗, v) with the true mean µ∗ given *a priori*, that is,

$$v_{*}=\operatorname*{argmin}_{v}L(\mu^{*},v)=\operatorname*{argmin}_{v}\left\{{\frac{nv}{z^{2}}}\mathbb{E}\left({\sqrt{1+{\frac{(y-\mu^{*})^{2}z^{2}}{nv^{2}}}}}-1\right)+av\right\},$$

or equivalently,

$$\left.\nabla_{v}L(\mu^{*},v)\right|_{v=v_{*}}=\left.\left\{\frac{n}{z^{2}}\left(\nabla_{v}\,\mathbb{E}\sqrt{v^{2}+\frac{\varepsilon^{2}z^{2}}{n}}-1\right)+a\right\}\right|_{v=v_{*}}=0.$$

By interchanging the derivative with the expectation, we obtain

$$\mathbb{E}{\frac{v_{*}}{\sqrt{v_{*}^{2}+z^{2}\varepsilon^{2}/n}}}=1-{\frac{az^{2}}{n}}.\tag{2.6}$$

Let $\sigma_{x^{2}}^{2}:=\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq x^{2})\}$, where 1(A) is the indicator function of the set A. Our first key result utilizes the above characterization of v∗ to show how v∗ automatically adapts to the standard deviation σ, promising the effectiveness of our procedure.

Theorem 2.3 (Self-tuning property of v∗). Suppose $n\geq az^{2}$. Then, for any γ ∈ [0, 1), we have

$$\frac{(1-\gamma)\sigma_{\varphi\tau_{*}^{2}}^{2}}{2a}\leq v_{*}^{2}\leq\frac{\sigma^{2}}{2a},$$

where φ = γ/(1 − γ) and $\tau_{*}=v_{*}\sqrt{n}/z$. Moreover, we have $\lim_{n\to\infty}v_{*}^{2}=\sigma^{2}/(2a)$.

The above result indicates that, for any finite sample size $n\geq az^{2}$, the oracle $v_{*}^{2}$ can automatically adapt to the (truncated) variance: it is bounded between the scaled truncated variance $\sigma_{\varphi\tau_{*}^{2}}^{2}/(2a)$ and the scaled variance σ²/(2a). Since the second moment exists, we have $\sigma_{\varphi\tau_{*}^{2}}^{2}\to\sigma^{2}$ as $\varphi\tau_{*}^{2}\to\infty$ by the dominated convergence theorem. For a large sample size n, $\sigma_{\varphi\tau_{*}^{2}}^{2}$ is close to σ², and therefore $v_{*}^{2}$ is approximately between (1 − γ)σ²/(2a) and σ²/(2a). Furthermore, an asymptotic analysis reveals that $\lim_{n\to\infty}v_{*}^{2}=\sigma^{2}/(2a)$. Taking a = 1/2 yields $\lim_{n\to\infty}v_{*}^{2}=\sigma^{2}$, indicating that the oracle $v_{*}^{2}$ with a = 1/2 approximates the true variance in the large-sample limit. This also suggests the optimality of choosing a = 1/2, which is assumed throughout the rest of the paper.

Our next result shows that the proposed empirical loss function is jointly convex in both µ and v. This convexity property allows us to employ standard first-order optimization algorithms to compute the global optima efficiently.

Proposition 2.4 (Joint convexity). The empirical loss function $L_{n}(\mu,v)$ in (2.5) is jointly convex in both µ and v. Furthermore, if there exist at least two distinct data points, the empirical loss function is strictly convex in both µ and v, provided that v > 0.
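The self-tuning property in Theorem 2.3 can be checked numerically: the left-hand side of (2.6) is increasing in v, so the population oracle $v_*$ can be found by bisection once the expectation is approximated by a large sample of the noise. The following is an illustrative sketch of our own (not from the paper; the sample sizes, the Student-t noise, and the choice z² = log(n/δ) are our arbitrary choices):

```python
# An illustrative sketch (ours) that solves the first-order condition (2.6)
# for v_* by bisection; Theorem 2.3 predicts v_*^2 -> sigma^2/(2a).
import numpy as np

def solve_v_star(eps, n, z, a=0.5, lo=1e-6, hi=1e6, iters=200):
    target = 1.0 - a * z**2 / n
    def lhs(v):  # Monte Carlo estimate of E[v / sqrt(v^2 + z^2 eps^2 / n)]
        return np.mean(v / np.sqrt(v**2 + z**2 * eps**2 / n))
    for _ in range(iters):  # lhs(v) is increasing in v, so bisection works
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
eps = rng.standard_t(df=3, size=200_000)  # heavy-tailed with finite variance
n, z = 1000, np.sqrt(np.log(1000 / 0.05))
v_star = solve_v_star(eps, n, z)
print(v_star**2, eps.var())  # with a = 1/2, v_*^2 should approach sigma^2
```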
Lastly, it was brought to our attention that our formulation (2.5) bears a resemblance to the concomitant estimator of Ronchetti & Huber (2009):

$$\operatorname{argmin}_{\mu,v}\left\{{\frac{1}{n}}\sum_{i=1}^{n}\rho\left({\frac{y_{i}-\mu}{v}}\right)v+av\right\},$$

where ρ is any loss function and a is a user-specified constant. Notably, the selection of an appropriate constant a is scarcely addressed in the existing literature. Our motivation stems from a different perspective: we aim to develop robust mean estimators that exhibit improved finite-sample properties in the presence of heavy-tailed data. The empirical loss function $L_{n}$ that we propose can be perceived as an intricately adapted variant of theirs. Specifically, we leverage the smooth pseudo-Huber loss, in which we set the robustification parameter τ as $\tau=v\sqrt{n}/z$ to ensure the sub-Gaussian-like performance of the robust mean estimator, where z serves as a judiciously chosen confidence parameter. Concurrently, we identify the optimal adjustment factor as a = 1/2. To the best of our knowledge, all of these findings are new to the literature.

## 3 Finite-Sample Theory

This section presents the self-tuning property of the estimated robustification parameter and then the finite-sample property of the self-tuned mean estimator. Recall a = 1/2.

3.1 Estimation with a fixed v

With a slight abuse of notation, we use $\widehat{\mu}(v)$ to denote the minimizer of the penalized pseudo-Huber loss in (2.5) with v fixed. Recall that we have used $\widehat{\mu}(\tau)$ to denote the minimizer of the pseudo-Huber loss in (2.2); $\widehat{\mu}(v)$ is equivalent to $\widehat{\mu}(\tau)$ with $\tau=v\sqrt{n}/z$. We start with the theoretical properties of $\widehat{\mu}(v)$. We need the following local strong convexity assumption, which will be verified later in this subsection.

Assumption 1 (Local strong convexity in µ). The empirical loss is locally strongly convex with respect to µ, in the sense that, for any $\mu\in\mathbb{B}_{r}(\mu^{*}):=\{\mu:|\mu-\mu^{*}|\leq r\}$,

$$\operatorname*{inf}_{\mu\in\mathbb{B}_{r}(\mu^{*})}{\frac{\langle\nabla_{\mu}L_{n}(\mu,v)-\nabla_{\mu}L_{n}(\mu^{*},v),\mu-\mu^{*}\rangle}{|\mu-\mu^{*}|^{2}}}\geq\kappa_{\ell}>0,$$

where r > 0 is a local radius parameter.

Theorem 3.1. For any 0 < δ < 1, suppose v > 0 is fixed and let z² = log(1/δ). Assume Assumption 1 holds with some $r\geq r_{0}:=\kappa_{\ell}^{-1}\left(\sigma/(\sqrt{2}v)+1\right)^{2}\sqrt{\log(2/\delta)/n}$. Then, with probability at least 1 − δ, we have

$$|\widehat{\mu}(v)-\mu^{*}|<\frac{1}{\kappa_{\ell}}\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2/\delta)}{n}}=\frac{C}{\kappa_{\ell}}\sqrt{\frac{\log(2/\delta)}{n}},$$

where $C=(\sigma/(\sqrt{2}v)+1)^{2}$ depends only on v and σ.

The above theorem states that, under the local strong convexity assumption, $\widehat{\mu}(v)$ achieves a sub-Gaussian deviation bound when the data have only bounded variances. In particular, if we choose v = σ and apply the theorem, we obtain:

$$|\widehat{\mu}(\sigma)-\mu^{*}|\leq\frac{1}{\kappa_{\ell}}\left(\frac{\sigma}{\sigma}+1\right)^{2}\sqrt{\frac{\log(2/\delta)}{n}}\leq\frac{4}{\kappa_{\ell}}\sqrt{\frac{\log(2/\delta)}{n}}.$$

Assumption 1 requires the loss function to exhibit curvature in a small neighborhood $\mathbb{B}_{r}(\mu^{*})$, while the penalized loss (2.4) transitions from a quadratic function to a linear function roughly at $|x|=\tau\propto\sqrt{n}$. Quadratic functions always have curvature, so intuitively, Assumption 1 holds as long as $\sqrt{n}\gtrsim r\geq r_{0}\propto\sqrt{1/n}$. The latter condition is automatically guaranteed when n is sufficiently large.
Taking r to be the smallest admissible value $r_{0}$ makes Assumption 1 weakest. In other words, in this scenario, the empirical loss function only needs to exhibit curvature in a diminishing neighborhood of µ∗, with radius of order $\sqrt{1/n}$. The following lemma rigorously proves this claim.

Lemma 3.2. Suppose $v\geq v_{0}$. For any 0 < δ < 1, suppose $n\geq C\max\left\{z^{2}(\sigma^{2}+r^{2})/v_{0}^{2},\,\log(1/\delta)\right\}$ for some absolute constant C. Then, with probability at least 1 − δ, Assumption 1 with $\kappa_{\ell}=1/(2v)$ holds uniformly over $v\geq v_{0}>0$.

The first sample complexity condition, $n\geq Cz^{2}(\sigma^{2}+r^{2})/v_{0}^{2}$, comes from the requirement that $\tau_{v_{0}}^{2}:=v_{0}^{2}n/z^{2}\geq C(\sigma^{2}+r^{2})$ in the proof of Lemma 3.2. Recall that the robustification parameter $\tau_{v_{0}}^{2}$ determines the size of the quadratic region. Intuitively, this requirement is minimal in the sense that Assumption 1 can only hold when $\tau_{v_{0}}^{2}$ is larger than $r^{2}$ plus the noise variance σ² (due to stochasticity). As argued before, Assumption 1 holds with any r such that $\sqrt{n}\gtrsim r\gtrsim\sqrt{1/n}$. Thus we can take r to be a constant, and this will not make the sample complexity condition worse. Finally, by combining Lemma 3.2 and Theorem 3.1, we obtain the following result.

Corollary 3.3. Suppose $v\geq v_{0}$. For any 0 < δ < 1, suppose $n\geq C\max\left\{(r^{2}+\sigma^{2})/v_{0}^{2},\,1\right\}\log(1/\delta)$ for some universal constant C, and let z² = log(1/δ). For any $v\geq v_{0}$, we have with probability at least 1 − δ that

$$|\widehat{\mu}(v)-\mu^{*}|\leq2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(4/\delta)}{n}}\lesssim v\sqrt{\frac{1+\log(1/\delta)}{n}}.$$

## 3.2 Self-Tuned Mean Estimators

We proceed to characterize the theoretical properties of the self-tuned mean estimator. We impose the additional constraint that $v_{0}\leq v\leq V_{0}$ and consider the constrained empirical risk minimization problem

$$\{\,{\widehat{\mu}},\,{\widehat{v}}\,\}=\operatorname*{argmin}_{\mu,\,v_{0}\leq v\leq V_{0}}\left\{L_{n}(\mu,v):={\frac{1}{n}}\sum_{i=1}^{n}\ell^{p}(y_{i}-\mu,v)\right\}.\tag{3.1}$$

Indeed, when v is either 0 or ∞, the loss function is no longer smooth or becomes trivial, respectively. In either case, the loss function is not strongly convex in µ, and strong convexity is essential for our analysis. Let $\tau_{v_{0}}=v_{0}\sqrt{n}/z$.

Theorem 3.4 (Self-tuning property). Assume that n is sufficiently large. Let $c_{0}$ and $C_{0}$ be some constants, and suppose $v_{0}<c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq C_{0}\sigma<V_{0}$. For any 0 < δ < 1, take $z^{2}\geq\log(5/\delta)$. Then, with probability at least 1 − δ, we have

$$C_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq\widehat{v}\leq C_{0}\sigma.$$

The above theorem suggests that $\widehat{v}$ automatically adapts to the standard deviation, i.e., $\widehat{v}$ approximates σ, provided that $\sigma_{\tau_{v_{0}}^{2}/2-1}$ approximates σ, which is expected to hold for a large sample size by the dominated convergence theorem. Of course, $\sigma_{\tau_{v_{0}}^{2}}$ cannot be close to σ at any predictable rate under the weak assumption that the data have bounded variances only. We proceed to characterize the finite-sample property of the self-tuned mean estimator $\widehat{\mu}(\widehat{v})$.

Theorem 3.5. Assume that n is sufficiently large. Let $c_{0}$ and $C_{0}$ be some constants, and suppose $v_{0}<c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq C_{0}\sigma<V_{0}$. For any 0 < δ < 1, take $z^{2}=\log(n/\delta)$. Then, with probability at least 1 − δ, we have

$$|{\widehat{\mu}}({\widehat{v}})-\mu^{*}|\leq C\cdot\sigma{\sqrt{\frac{\log(n/\delta)}{n}}},$$

where C is some constant.

The above result indicates that the mean estimator $\widehat{\mu}=\widehat{\mu}(\widehat{v})$ with a self-tuned robustification parameter $\widehat{v}$ achieves the optimal deviation property up to a logarithmic factor.
In practical applications, we recommend setting δ = 0.05, which corresponds to a failure probability of 0.05, or a confidence level of 0.95.

## 4 Comparing With Alternatives

Other than the ERM-based approach, the median-of-means technique (Lugosi & Mendelson, 2019a) is another method to construct robust estimators under heavy-tailed distributions. The MoM mean estimator works as follows:

1. Partition [n] = {1, …, n} into k blocks $\mathcal{B}_{1},\ldots,\mathcal{B}_{k}$, each of size $|\mathcal{B}_{i}|\geq\lfloor n/k\rfloor\geq2$.
2. Compute the sample mean in each block: $z_{j}=\frac{1}{|\mathcal{B}_{j}|}\sum_{i\in\mathcal{B}_{j}}y_{i}$.
3. Take the median of the $z_{j}$'s as the MoM mean estimator $\widehat{\mu}^{\mathrm{MoM}}=\mathrm{med}(z_{1},\ldots,z_{k})$, where med(·) is the median operator.

The following theorem is taken from Lugosi & Mendelson (2019a). Without loss of generality and for simplicity, we shall assume throughout this section that n is divisible by k, so that each block has m = n/k elements.

Theorem 4.1 (Theorem 2 by Lugosi & Mendelson (2019a)). For any δ ∈ (0, 1), if k = ⌈8 log(1/δ)⌉, then, with probability at least 1 − δ,

$$\left|\widehat{\mu}^{\mathrm{MoM}}-\mu^{*}\right|\leq\sigma\sqrt{\frac{32\log(1/\delta)}{n}}.$$

The theorem above indicates that to obtain a sub-Gaussian mean estimator, we only need to choose k = ⌈8 log(1/δ)⌉ when constructing the MoM mean estimator. Thus, the MoM estimator is naturally tuning-free. However, in our numerical experiments, we have observed that the MoM estimator often has inferior numerical performance compared to our proposed estimator. To shed light on this observation, we compare the asymptotic efficiencies of $\widehat{\mu}^{\mathrm{MoM}}$ and our estimator $\widehat{\mu}(\widehat{v})$ in the following two theorems.

Theorem 4.2 (Asymptotic inefficiency of $\widehat{\mu}^{\mathrm{MoM}}$). Fix any 0 < ι ≤ 1. Assume $\mathbb{E}|y_{i}-\mu^{*}|^{2+\iota}<\infty$. Suppose k → ∞ and $k=o\big(n^{\iota/(1+\iota)}\big)$; then

$$\sqrt{n}\left(\widehat{\mu}^{\mathrm{MoM}}-\mu^{*}\right)\leadsto\mathcal{N}\left(0,\frac{\pi}{2}\sigma^{2}\right).$$

Theorem 4.3 (Asymptotic efficiency of our estimator). Fix any 0 < ι ≤ 1. Assume $\mathbb{E}\varepsilon_{i}^{2+\iota}<\infty$ and the same assumptions as in Theorem 3.4. Take z² = 2 log(n). Then

$$\sqrt{n}\left(\widehat{\mu}(\widehat{v})-\mu^{*}\right)\leadsto\mathcal{N}\left(0,\sigma^{2}\right).$$

We emphasize that the MoM mean estimator shares the same asymptotic property as the median estimator (Van der Vaart, 2000), due to the median operation in the third step, and is thus asymptotically inefficient. In sharp contrast, our proposed estimator achieves full asymptotic efficiency. The relative efficiency $e_{\mathrm{r}}$ of MoM with respect to our estimator is

$$e_{\mathrm{r}}\left(\widehat{\mu}^{\mathrm{MoM}},\widehat{\mu}(\widehat{v})\right)=\frac{2}{\pi}\approx0.64.$$

This means that our proposed estimator is more efficient than the MoM estimator in terms of asymptotic performance, partly explaining the empirical success of our method; see Section 5 for details.
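For reference, the three-step MoM construction above takes only a few lines of code. The following is a minimal sketch of our own (the function name and the random block assignment are our choices), with k set as in Theorem 4.1:

```python
# A minimal sketch (ours) of the median-of-means estimator described above.
import numpy as np

def median_of_means(y, delta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    k = min(n, int(np.ceil(8 * np.log(1.0 / delta))))  # number of blocks
    perm = rng.permutation(n)                          # random partition
    blocks = np.array_split(np.asarray(y)[perm], k)
    return float(np.median([b.mean() for b in blocks]))
```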
We explain intuitively why our self-tuned estimator can achieve (near-)optimal performance in both the finite-sample regime and the asymptotic regime. Because our self-tuned estimator in (3.1) is a self-tuned version of the pseudo-Huber estimator in (2.2), we focus on the pseudo-Huber estimator $\widehat{\mu}(\tau)$. Theorem 2.1 suggests that taking $\tau=\sigma\sqrt{n/\log(1/\delta)}$ guarantees the sub-Gaussian performance of $\widehat{\mu}(\tau)$ for finite samples. Meanwhile, as n → ∞, we have $\tau=\sigma\sqrt{n/\log(1/\delta)}\to\infty$, so the pseudo-Huber loss approaches the least-squares loss, which corresponds to the negative maximum likelihood of Gaussian distributions and leads to the asymptotically efficient mean estimator.

For MoM estimators, the situation differs. On the one hand, to attain robustness in the finite-sample regime, the number of blocks k should be at least ⌈8 log(1/δ)⌉, as demonstrated in the proof of Theorem 4.1 by Lugosi & Mendelson (2019a). On the other hand, to approach the sample mean estimator and achieve asymptotic efficiency in the large-sample limit, the number of blocks should diminish to 1 as the sample size n grows. Consequently, optimal finite-sample and asymptotic properties are two contrasting requirements for MoM estimators. In other words, the MoM estimator cannot simultaneously adapt to both regimes (to perform optimally). This contrast seems to arise from the discontinuous nature of the MoM estimator, which cannot smoothly transition from requiring at least k = 3 blocks (for defining the median) to functioning as an empirical mean estimator.

Another popular estimator is the trimmed mean estimator (Lugosi & Mendelson, 2021). The univariate trimmed mean estimator works as follows: (i) split the data points into two subsamples of equal size, (ii) use the first subsample to determine the trimming parameters, and (iii) use the second subsample to construct the trimmed mean estimator. Due to this sample-splitting scheme, the trimmed mean estimator lacks sample efficiency.

![8_image_0.png](8_image_0.png)

Figure 1: Estimation error versus confidence level for our estimator, the sample mean estimator, the MoM estimator, and the trimmed mean estimator.

![8_image_1.png](8_image_1.png)

Figure 2: Empirical 99%-quantile of the estimation error versus a parameter measuring tails and skewness for our estimator, the sample mean estimator, the MoM estimator, and the trimmed mean estimator.

## 5 Numerical Studies

This section examines numerically the finite-sample performance of our proposed mean estimator in the presence of heavy-tailed data. In all of our numerical examples, we take $z=\sqrt{\log(n/\delta)}$ with δ = 0.05, as suggested by Theorem 3.5. This choice ensures that our results hold with probability at least 0.95. We consider the following two distribution settings for the random data point y in order to investigate the robustness and efficiency of the proposed estimator:

1. Normal distribution N(µ, σ²) with mean µ = 1 and a sequence of variances σ² ≥ 1;
2. Skewed generalized t distribution sgt(µ, σ², λ, p, q), with mean µ = 0, a sequence of variances σ² = q/(q − 2) with q > 2, shape p = 2, and skewness λ = 0.75.

For each setting, we generate an independent sample of size n = 100 and compute four mean estimators: our proposed estimator (ours), the sample mean estimator, the MoM mean estimator, and the trimmed mean estimator.

![9_image_0.png](9_image_0.png)

Figure 3: Estimation error versus confidence level for our estimator, cross-validation, and Lepski's method.

Figure 1 displays the α-quantile of the squared estimation error, with α ranging from 0.5 to 1, based on 1000 simulations for both scenarios. For Gaussian data, our estimator performs almost identically to the sample mean estimator, both of which outperform the MoM mean estimator and the trimmed mean estimator.
Since the sample mean estimator is the optimal mean estimator for Gaussian data, this suggests that our estimator does not sacrifice statistical efficiency when the data are Gaussian. In the case of heavy-tailed skewed generalized t distributions, the deviation of the sample mean from the population mean grows rapidly with the confidence level. This is in contrast to the three robust estimators: our estimator, the MoM mean estimator, and the trimmed mean estimator. Our estimator is the only one that consistently outperforms the others in both scenarios.

Figure 2 examines the 99%-quantile of the estimation error versus a distribution parameter that measures the tail behavior and skewness, based on 1000 simulations. Specifically, for Gaussian data, we let σ vary between 1 and 4. For skewed generalized t distributions, we increase the shape parameter q from 2.5 to 4. For Gaussian data, our estimator performs identically to the sample mean estimator, both of which outperform the MoM mean estimator and the trimmed mean estimator. For skewed generalized t distributions with q ≤ 3, all three robust mean estimators outperform, or are as competitive as, the sample mean estimator. When q > 3, the sample mean estimator starts to perform better than both MoM and the trimmed mean estimator. Our proposed estimator, on the other hand, consistently outperforms all other methods across the entire range of parameter values.

We also compare the performance of our proposed method against the pseudo-Huber loss tuned by cross-validation and by Lepski's method. For cross-validation, we choose the best τ, which is equivalent to choosing the best v, from a list of candidates {1, 2, …, 100} using 10-fold cross-validation. For Lepski's method, we follow the appendix and pick V = 2, ρ = 1.2, and s = 50. We run 1000 simulations for the mean estimation problem in Setting 1 with σ² = 1 and sample size n = 100. All studies are performed on a MacBook Pro with an Apple M1 Max and 64 GB of memory. The execution times are summarized in Table 1. Our proposed method is about 90× faster than cross-validation and about 10× faster than Lepski's method. The run times for the sample mean, MoM, and trimmed mean in the same scenario are 0.018, 0.111, and 0.057 seconds, respectively. We also compare the run time of our estimator with increasing sample sizes: for n = 100, 1000, 10,000, and 100,000, the run time is 1.54, 1.58, 3.02, and 25.04 seconds, respectively.

Finally, we compare their statistical performance in both settings with various parameters. The results are summarized in Figure 3 and Figure 4. In both figures, our method and cross-validation have similar performance, with our method slightly better, while Lepski's method does not perform well. We suspect this is because Lepski's method depends on three additional hyperparameters V, ρ, and s, and our choices are perhaps not optimally tuned. This perhaps shows that Lepski's method does not achieve great empirical performance in general.

![10_image_0.png](10_image_0.png)

Figure 4: Empirical 99%-quantile of the estimation error versus a parameter measuring tails and skewness for our estimator, cross-validation, and Lepski's method.

Table 1: Comparing different tuning methods: run time (in seconds) for 1000 simulations in Setting 1 with σ² = 1 and n = 100.
| ours | Lepski's method | cross-validation |
|------|-----------------|------------------|
| 1.5  | 16.7            | 133.5            |

In summary, the most attractive feature of our method is its self-tuning property: (i) it is as efficient as the sample mean estimator for normal distributions and is more robust for asymmetric and/or heavy-tailed distributions; (ii) it incurs much less computational cost than cross-validation and Lepski's method. The latter property is particularly important for large-scale inference with a myriad of parameters to be estimated simultaneously.

## 6 Conclusions

Summary This paper investigates robust mean estimation for distributions with finite variances only. We introduce a novel loss function that depends on both the mean parameter and a robustification parameter. By jointly optimizing these parameters, we demonstrate that, even under only second-moment conditions, the resulting robustification parameter can automatically adapt to the variance. As a result, our mean estimator achieves nearly optimal performance in finite samples, akin to the case of sub-Gaussian data. This distinguishes our approach from previous methods that require cross-validation or Lepski's method to tune the robustification parameter.

Adaptivity In our experience, the performance of MoM estimators is often subpar¹, and our proposed estimator consistently outperforms MoM estimators. As discussed previously, we believe this is because our estimator can perform (near-)optimally in both the finite-sample and large-sample regimes. We shall refer to this ability as "adaptivity to different regimes". The MoM estimator does not naturally enjoy this adaptivity due to its discontinuous nature.

¹We had similar empirical observations in our earlier studies.

The multidimensional case We briefly discuss how to extend the proposed estimator to the multivariate case. Assume model (1.1) but with $y_{i}$, $\mu^{*}$, and $\varepsilon_{i}\in\mathbb{R}^{d}$, where the $\varepsilon_{i}$ are i.i.d. with $\mathbb{E}\varepsilon_{i}=0$ and $\operatorname{cov}(\varepsilon_{i})=\Sigma$. A simple strategy, as recommended by one of the referees, is to apply the univariate estimator coordinate-wise and then combine the coordinates to form our final estimator $\widehat{\mu}$. Then the following proposition holds. Let $\sigma_{kk}^{2}$ be the k-th diagonal entry of Σ, and $\sigma_{kk,x^{2}}^{2}=\mathbb{E}[\varepsilon_{ik}^{2}1(\varepsilon_{ik}^{2}\leq x^{2})]$, where $\varepsilon_{ik}$ is the k-th coordinate of $\varepsilon_{i}$.

Proposition 6.1. Assume that n is sufficiently large. Let $c_{0}$ and $C_{0}$ be some constants, and suppose $v_{0}<c_{0}\sigma_{kk,\tau_{v_{0}}^{2}/2-1}\leq C_{0}\sigma_{kk}<V_{0}$. For any 0 < δ < 1, taking $z^{2}=\log(n/\delta)$, with probability at least 1 − δ, we have

$$\|\widehat{\mu}-\mu^{*}\|_{2}\leq C\sqrt{\frac{\operatorname{tr}(\Sigma)\,\log(nd/\delta)}{n}},$$

where C is some constant.

We also have the following asymptotic result, which states that the multivariate mean estimator also achieves asymptotic efficiency.

Proposition 6.2 (Asymptotic efficiency of the multivariate mean estimator). Fix any 0 < ι ≤ 1. Assume $\max_{1\leq k\leq d}\mathbb{E}\varepsilon_{ik}^{2+\iota}<\infty$ and the same assumptions as in Proposition 6.1. Take z² = 2 log(n). Then

$$\sqrt{n}\left(\widehat{\mu}(\widehat{v})-\mu^{*}\right)\leadsto\mathcal{N}\left(0,\Sigma\right).$$

Limitation One limitation is that the finite-sample performance of our self-tuned estimator depends on unknown constants, which means that the sample complexity cannot be computed in advance for a fixed error level. Moreover, the proposed estimator is only optimal up to a logarithmic term, and it remains unclear whether this logarithmic factor can be removed. Another limitation is the scope of the study. This paper focuses on robust mean estimation since this is the simplest case and the proof is already complicated.
However, it is possible to extend the current work to more general problems, such as regression and matrix estimation problems. We have extended the estimator to the multivariate case above, but such an extension is not optimal; see Lugosi & Mendelson (2019a) for the optimal finite-sample bound. It would also be interesting to study the asymptotic properties of multivariate median-of-means estimators.

## References

Marco Avella-Medina, Heather S Battey, Jianqing Fan, and Quefeng Li. Robust estimation of high-dimensional covariance and precision matrices. *Biometrika*, 105(2):271–284, 2018.

Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. *Concentration Inequalities: A Nonasymptotic Theory of Independence*. Oxford University Press, Oxford, 2013.

Christian Brownlees, Emilien Joly, and Gábor Lugosi. Empirical risk minimization for heavy-tailed losses. *The Annals of Statistics*, 43(6):2507–2536, 2015.

Olivier Catoni. Challenging the empirical mean and empirical variance: A deviation study. In *Annales de l'IHP Probabilités et statistiques*, volume 48, pp. 1148–1185, 2012.

Luc Devroye, Matthieu Lerasle, Gabor Lugosi, and Roberto I Oliveira. Sub-Gaussian mean estimators. *The Annals of Statistics*, 44(6):2695–2725, 2016.

Anders Eklund, Thomas E Nichols, and Hans Knutsson. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. *Proceedings of the National Academy of Sciences*, 113(28):7900–7905, 2016.

Jianqing Fan, Quefeng Li, and Yuyan Wang. Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions. *Journal of the Royal Statistical Society: Series B*, 79(1):247, 2017.

Jianqing Fan, Han Liu, Qiang Sun, and Tong Zhang. I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. *The Annals of Statistics*, 46(2):814, 2018.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*. Springer, NY, 2009.

Daniel Hsu and Sivan Sabato. Loss minimization and parameter estimation with heavy tails. *The Journal of Machine Learning Research*, 17(1):543–582, 2016.

Peter J. Huber. Robust estimation of a location parameter. *The Annals of Mathematical Statistics*, 35(1):73–101, 1964.

Yuan Ke, Stanislav Minsker, Zhao Ren, Qiang Sun, and Wen-Xin Zhou. User-friendly covariance estimation for heavy-tailed distributions. *Statistical Science*, 34(3):454–471, 2019.

Guillaume Lecué and Matthieu Lerasle. Robust machine learning by median-of-means: Theory and practice. *The Annals of Statistics*, 48(2):906–931, 2020.

Gábor Lugosi and Shahar Mendelson. Mean estimation and regression under heavy-tailed distributions: A survey. *Foundations of Computational Mathematics*, 19(5):1145–1190, 2019a.

Gabor Lugosi and Shahar Mendelson. Risk minimization by median-of-means tournaments. *Journal of the European Mathematical Society*, 22(3):925–965, 2019b.

Gábor Lugosi and Shahar Mendelson. Robust multivariate mean estimation: The optimality of trimmed mean. *The Annals of Statistics*, 49(1):393–410, 2021.

Stanislav Minsker. Distributed statistical estimation and rates of convergence in normal approximation. *Electronic Journal of Statistics*, 13(2):5213–5252, 2019.

Elvezio M Ronchetti and Peter J Huber. *Robust Statistics*. John Wiley & Sons, NJ, 2009.

Qiang Sun, Wen-Xin Zhou, and Jianqing Fan. Adaptive Huber regression. *Journal of the American Statistical Association*, 115(529):254–265, 2020.

Aad W Van der Vaart. *Asymptotic Statistics*, volume 3. Cambridge University Press, Cambridge, 2000.
Martin J Wainwright. *High-dimensional Statistics: A Non-asymptotic Viewpoint*, volume 48. Cambridge University Press, Cambridge, 2019.

Lan Wang, Bo Peng, and Runze Li. A high-dimensional nonparametric multivariate test for mean vector. *Journal of the American Statistical Association*, 110(512):1658–1669, 2015.

Lili Wang, Chao Zheng, Wen Zhou, and Wen-Xin Zhou. A new principle for tuning-free Huber regression. *Statistica Sinica*, 31(4):2153–2177, 2021.

# Appendix

## Table Of Contents

- A Basics
- B Population bias
- C An alternating gradient descent algorithm
- D Comparing with Lepski's method
- E Proofs for Section 2: E.1 Proofs for Theorem 2.3; E.2 Proof of Proposition 2.4; E.3 Supporting lemmas
- F Proofs for the fixed v case: F.1 Proof of Theorem 3.1; F.2 Proof of Lemma 3.2; F.3 Proof of Corollary 3.3; F.4 Supporting lemmas
- G Proofs for the self-tuned case: G.1 Proof of Theorem 3.4; G.2 Proof of Theorem 3.5; G.3 Supporting lemmas
- H Proofs for Section 4: H.1 Proof of Theorem 4.2; H.2 Proof of Theorem 4.3; H.3 Consistency of $\widehat{v}$; H.4 Local strong convexity in v; H.5 Supporting lemmas
- I Proofs for Section 6
- J Preliminary lemmas

## A Basics

This section collects basic facts such as the first-order derivatives and the Hessian matrix of the loss function. Let $\tau=v\sqrt{n}/z$ throughout the appendix. Recall that our loss function is

$$L_{n}(\mu,v)=\frac{1}{n}\sum_{i=1}^{n}\ell^{p}(y_{i}-\mu,v)=\frac{1}{n}\sum_{i=1}^{n}\left\{\frac{\sqrt{n}}{z}\sqrt{\frac{nv^{2}}{z^{2}}+(y_{i}-\mu)^{2}}-\left(\frac{n}{z^{2}}-a\right)v\right\}=\frac{1}{n}\sum_{i=1}^{n}\left\{\frac{\sqrt{n}}{z}\left(\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}-\tau\right)+a\cdot\frac{\tau z}{\sqrt{n}}\right\}.$$

The first-order derivatives of $L_{n}(\mu,v)$ are

$$\nabla_{\mu}L_{n}(\mu,v)=-\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\mu}{v\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}}=-\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\mu}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}},$$
$$\nabla_{v}L_{n}(\mu,v)=\frac{1}{n}\sum_{i=1}^{n}\frac{n/z^{2}}{\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}}-\left(\frac{n}{z^{2}}-a\right)=\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}}-1\right)+a,$$

where a = 1/2. The Hessian matrix is

$$H(\mu,v)=\begin{pmatrix}\dfrac{\sqrt{n}}{z}\cdot\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\dfrac{\tau^{2}}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}&\dfrac{n}{z^{2}}\cdot\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\dfrac{\tau(y_{i}-\mu)}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}\\[2mm]\dfrac{n}{z^{2}}\cdot\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\dfrac{\tau(y_{i}-\mu)}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}&\dfrac{n^{3/2}}{z^{3}}\cdot\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\dfrac{(y_{i}-\mu)^{2}}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}\end{pmatrix}.$$

## B Population Bias

Let µ∗(v) be the underlying pseudo-Huber regression coefficient with v fixed *a priori*:

$$\mu^{*}(v)=\operatorname*{argmin}_{\mu}\mathbb{E}L_{n}(\mu,v).$$

Recall that $\tau=v\sqrt{n}/z$.
Let

$$\psi_{v}(x):=\nabla_{x}\ell^{p}(x,v)=\frac{\sqrt{n}x}{z\sqrt{\tau^{2}+x^{2}}},$$
$$h_{v}(x):=\nabla_{x}^{2}\ell^{p}(x,v)=\frac{\sqrt{n}}{z\sqrt{\tau^{2}+x^{2}}}-\frac{\sqrt{n}x^{2}}{z(\tau^{2}+x^{2})^{3/2}}=\frac{\sqrt{n}\tau^{2}}{z(\tau^{2}+x^{2})^{3/2}}.$$

**Assumption 2**. The second-order derivative of $L(\mu,v)=\mathbb{E}\,\ell^{p}(y_{i}-\mu,v)$ satisfies

$$0<\kappa_{\ell}\leq\nabla_{\mu}^{2}L(\mu,v)$$

for any µ ∈ B(r, µ∗) := {µ : |µ − µ∗| ≤ r}, where we use the same κℓ as in Assumption 1 without loss of generality.

Our next proposition shows that the population bias is of order $\sqrt{n}/\tau^{2}$.

Proposition B.1 (Population bias). Assume Assumption 2 holds with $r>\sqrt{n}\sigma^{2}/(2\kappa_{\ell}\tau^{2})$. We have

$$|\mu^{*}(v)-\mu^{*}|\leq\frac{\sigma^{2}}{2z\kappa_{\ell}}\cdot\frac{\sqrt{n}}{\tau^{2}}\lesssim\frac{\sqrt{n}}{\tau^{2}}.$$

Proof of Proposition B.1. Define the bias term $\Delta=\mu^{*}-\mu^{*}(v)$ and the function $h_{v}(\mu)=n^{-1}\sum_{i=1}^{n}\mathbb{E}\{\ell^{p}(y_{i}-\mu,v)\}$. We first assume that |∆| ≤ r. By the first-order optimality of µ∗(v), we have ∇hv(µ∗(v)) = 0, and thus

$$\langle\Delta,\nabla^{2}h_{v}(\overline{\mu})\Delta\rangle=\langle\nabla h_{v}(\mu^{*})-\nabla h_{v}(\mu^{*}(v)),\Delta\rangle=\langle\nabla h_{v}(\mu^{*}),\Delta\rangle=-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\{\psi_{v}(\varepsilon_{i})\}\Delta,\tag{B.1}$$

where $\overline{\mu}=\lambda\mu^{*}+(1-\lambda)\mu^{*}(v)$ for some 0 ≤ λ ≤ 1. Since E(εi) = 0, we have

$$|\mathbb{E}\{-\psi_{v}(\varepsilon_{i})\}|=\frac{\sqrt{n}}{z}\cdot\left|\mathbb{E}\left\{\frac{-\varepsilon_{i}/\tau}{\sqrt{1+\varepsilon_{i}^{2}/\tau^{2}}}\right\}\right|=\frac{\sqrt{n}}{z}\cdot\left|\mathbb{E}\left\{\frac{\tau^{-1}\varepsilon_{i}\left(\sqrt{1+\varepsilon_{i}^{2}/\tau^{2}}-1\right)}{\sqrt{1+\varepsilon_{i}^{2}/\tau^{2}}}\right\}\right|\leq\frac{\sqrt{n}}{2z}\cdot\mathbb{E}\left|\frac{\varepsilon_{i}^{3}/\tau^{3}}{\sqrt{1+\varepsilon_{i}^{2}/\tau^{2}}}\right|\leq\frac{\sqrt{n}\sigma^{2}}{2z\tau^{2}},\tag{B.2}$$

where the first inequality uses the inequality $\sqrt{1+x^{2}}\leq1+x^{2}/2$ and the last inequality uses the fact that $\sqrt{1+\varepsilon_{i}^{2}/\tau^{2}}\geq1\vee|\varepsilon_{i}|/\tau$. Using equality (B.1) together with Assumption 2 and inequality (B.2), and canceling one |∆| factor on both sides, we obtain

$$|\Delta|\leq\frac{\sqrt{n}\sigma^{2}}{2z\kappa_{\ell}\tau^{2}}.$$

We then show that it must indeed hold that |∆| ≤ r. If not, we construct an intermediate solution between µ∗ and µ∗(v), denoted by $\mu_{\eta}^{*}(v)=\mu^{*}+\eta(\mu^{*}(v)-\mu^{*})$: specifically, we can choose some η ∈ (0, 1) so that $|\mu_{\eta}^{*}(v)-\mu^{*}|=r$. Proceeding with the above calculation, we would obtain

$$|\mu_{\eta}^{*}(v)-\mu^{*}|\leq\frac{\sqrt{n}\sigma^{2}}{2z\kappa_{\ell}\tau^{2}}<r.$$

This is a contradiction.

## C An Alternating Gradient Descent Algorithm

This section derives an algorithm to optimize (2.5) with the constraint $v_{0}\leq v\leq V_{0}$. Starting from the initialization $v=v_{\mathrm{init}}$ and $\mu=\mu_{\mathrm{init}}$, we use gradient descent to alternately update the solution sequence $\{(\mu_{k},v_{k}):k\geq1\}$, where $(\mu_{1},v_{1})=(\mu_{\mathrm{init}},v_{\mathrm{init}})$. Specifically, at the working solution $(\mu_{k},v_{k})$, the (k + 1)-th iteration carries out the following two steps:

1. $\mu_{k+1}=\mu_{k}-\eta_{1}\nabla_{\mu}L_{n}(\mu_{k},v_{k})$,
2. $\widetilde{v}_{k+1}=v_{k}-\eta_{2}\nabla_{v}L_{n}(\mu_{k+1},v_{k})$ and $v_{k+1}=\min\{\max\{\widetilde{v}_{k+1},v_{0}\},V_{0}\}$,

where η1 and η2 are the learning rates and

$$\nabla_{\mu}L_{n}(\mu,v)=-\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\mu}{v\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}},$$
$$\nabla_{v}L_{n}(\mu,v)=\frac{1}{n}\sum_{i=1}^{n}\frac{n/z^{2}}{\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}}-\left(\frac{n}{z^{2}}-a\right).$$

We then repeat the above two steps until convergence. We summarize the details in Algorithm 1.

**Algorithm 1** An alternating gradient descent algorithm.
Input: $\mu_{\mathrm{init}}$, $v_{\mathrm{init}}$, $v_{0}$, $V_{0}$, $\eta_{1}$, $\eta_{2}$, $(y_{1},\ldots,y_{n})$
for k = 1, 2, … until convergence do
  $\mu_{k+1}=\mu_{k}-\eta_{1}\nabla_{\mu}L_{n}(\mu_{k},v_{k})$
  $\widetilde{v}_{k+1}=v_{k}-\eta_{2}\nabla_{v}L_{n}(\mu_{k+1},v_{k})$ and $v_{k+1}=\min\{\max\{\widetilde{v}_{k+1},v_{0}\},V_{0}\}$
end for
Output: $\widehat{\mu}=\mu_{k+1}$, $\widehat{v}=v_{k+1}$
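A direct translation of Algorithm 1 into code is given below. This is a runnable sketch of our own (not the authors' release) with fixed learning rates and a crude initialization, both of which are our choices; as noted next, our experiments instead choose the step sizes adaptively:

```python
# A runnable sketch (ours) of Algorithm 1 with fixed learning rates.
import numpy as np

def grad_mu(mu, v, y, z):
    n, r = len(y), y - mu
    return -np.mean(r / (v * np.sqrt(1.0 + z**2 * r**2 / (n * v**2))))

def grad_v(mu, v, y, z, a=0.5):
    n, r = len(y), y - mu
    return np.mean((n / z**2) / np.sqrt(1.0 + z**2 * r**2 / (n * v**2))) - (n / z**2 - a)

def self_tuned_mean(y, delta=0.05, v0=1e-3, V0=1e3, eta1=0.5, eta2=0.5,
                    iters=2000, tol=1e-10):
    y = np.asarray(y, dtype=float)
    z = np.sqrt(np.log(len(y) / delta))    # z^2 = log(n/delta), as in Theorem 3.5
    mu, v = np.median(y), np.std(y) + v0   # crude initialization (our choice)
    for _ in range(iters):
        mu_new = mu - eta1 * grad_mu(mu, v, y, z)
        v_new = float(np.clip(v - eta2 * grad_v(mu_new, v, y, z), v0, V0))
        done = abs(mu_new - mu) + abs(v_new - v) < tol
        mu, v = mu_new, v_new
        if done:
            break
    return mu, v
```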
In practice, the learning rates η1 and η2 can be chosen adaptively. Specifically, in our experiments, we use alternating gradient descent with the Barzilai-Borwein method and backtracking line search.

## D Comparing With Lepski's Method

We compare our method with Lepski's method. The idea of Lepski's method is simple: consider a sequence of confidence intervals obtained by assuming that the variance is bounded by a sequence of bounds $v_{k}$, and pick as the estimator the midpoint of the smallest interval that intersects all the larger ones. We will use Lepski's method to tune the robustification parameter v, and thus $\tau=v\sqrt{n}/z$, in the empirical pseudo-Huber loss:

$$L_{n}^{\mathrm{h}}(\mu,\tau)=\frac{1}{n}\sum_{i=1}^{n}\left(\tau\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}-\tau^{2}\right).$$

Let $v_{\max}$ be an upper bound for σ, and $\tau_{\max}=v_{\max}\sqrt{n}/z$ with $z=\sqrt{\log(1/\delta)}$. Let n be sufficiently large. Then, with probability at least 1 − δ, we have

$$|\widetilde{\mu}(v_{\mathrm{max}})-\mu^{*}|\leq6\,v_{\mathrm{max}}{\sqrt{\frac{\log(4/\delta)}{n}}}=:\epsilon(v_{\mathrm{max}},\delta),$$

where $\widetilde{\mu}(v_{\max})=\operatorname{argmin}_{\mu}L_{n}^{\mathrm{h}}(\mu,\tau_{\max})$. Let $\epsilon(v_{\max},0)=+\infty$ by convention. Clearly, $\epsilon(v_{\max},\delta)$ is homogeneous:

$$\epsilon(v_{\mathrm{max}},\delta)=B(\delta)v_{\mathrm{max}},{\mathrm{~with~}}B(\delta)=6{\sqrt{\frac{\log(4/\delta)}{n}}}.$$

For some parameters V ∈ R, ρ > 1, and s ∈ N, choose for $v_{\max}$ the following distribution:

$$\mathcal{V}(v_{\operatorname*{max}})=\begin{cases}\frac{1}{2s+1},&\mathrm{if}\ v_{\operatorname*{max}}=V\rho^{k},\,k\in\mathbb{Z},\,|k|\leq s,\\ 0,&\mathrm{otherwise}.\end{cases}$$

Consider, for any $v_{\max}$ such that $\epsilon(v_{\max},\delta\mathcal{V}(v_{\max}))<\infty$, the confidence interval

$$I(v_{\max})=\widetilde{\mu}(v_{\max})+\epsilon(v_{\max},\delta\mathcal{V}(v_{\max}))\times[-1,1],$$

where $\epsilon(v_{\max},\delta\mathcal{V}(v_{\max}))=6v_{\max}\sqrt{\frac{\log(4/\delta)+\log(2s+1)}{n}}$. We set $I(v_{\max})=\mathbb{R}$ when $\epsilon(v_{\max},\delta\mathcal{V}(v_{\max}))=+\infty$. Let us consider the non-decreasing family of closed intervals

$$J(v_{1})=\bigcap\left\{I(v_{\operatorname*{max}}):v_{\operatorname*{max}}\geq v_{1}\right\},\,v_{1}\in\mathbb{R}_{+}.$$

Lepski's method picks the center point of the intersection

$$\bigcap\left\{J(v_{1}):v_{1}\in\mathbb{R}_{+},\,J(v_{1})\neq\emptyset\right\}$$

to be the final estimator $\widehat{\mu}_{\mathrm{Lepski}}$. Then the following result holds.

Proposition D.1. Suppose $|\log(\sigma/V)|\leq2s\log(\rho)$. Then, with probability at least 1 − δ,

$$|\widehat{\mu}_{\mathrm{Lepski}}-\mu^{*}|\leq12\rho\sigma{\sqrt{\frac{\log(4/\delta)+\log(2s+1)}{n}}}.$$

If we take the grid fine enough that s = n, then the deviation bound above reduces to

$$12\rho\sigma{\sqrt{\frac{\log(4/\delta)+\log(2n+1)}{n}}},$$

which agrees with the deviation bound for our proposed estimator up to a constant multiplier. Therefore, our proposed estimator is comparable to Lepski's method in terms of the deviation bound. Computationally, our estimator is self-tuned and is thus more efficient than Lepski's method.
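For completeness, the Lepski selection rule above can be sketched as follows. This is our own simplified illustration (not the authors' code): we sweep the candidate bounds $v_{\max}=V\rho^{k}$ from largest to smallest, maintain the running intersection of the confidence intervals (which equals $J(v_{\max})$ at each step), and return the midpoint of the last nonempty intersection. Using `scipy.optimize.minimize_scalar` to compute $\widetilde{\mu}(v_{\max})$ is our generic choice, not the paper's:

```python
# A simplified sketch (ours) of the Lepski selection rule from this appendix.
import numpy as np
from scipy.optimize import minimize_scalar

def pseudo_huber_fit(y, tau):
    # mu_tilde = argmin_mu sum_i (tau * sqrt(tau^2 + (y_i - mu)^2) - tau^2)
    obj = lambda mu: np.sum(tau * np.sqrt(tau**2 + (y - mu)**2) - tau**2)
    return minimize_scalar(obj).x  # convex in mu, so Brent's method suffices

def lepski_mean(y, delta=0.05, V=2.0, rho=1.2, s=50):
    n = len(y)
    z = np.sqrt(np.log(1.0 / delta))
    eps = lambda vmax: 6 * vmax * np.sqrt((np.log(4 / delta) + np.log(2 * s + 1)) / n)
    lo, hi, center = -np.inf, np.inf, None
    for k in range(s, -s - 1, -1):         # v_max = V * rho^k, largest first
        vmax = V * rho**k
        mu = pseudo_huber_fit(y, vmax * np.sqrt(n) / z)
        new_lo = max(lo, mu - eps(vmax))
        new_hi = min(hi, mu + eps(vmax))
        if new_lo > new_hi:                 # intersection became empty: stop
            break
        lo, hi = new_lo, new_hi
        center = 0.5 * (lo + hi)
    return center
```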
## E Proofs For Section 2

## E.1 Proof of Theorem 2.3

Proof of Theorem 2.3. We prove first the finite-sample result and then the asymptotic result. Recall that $\tau_{*}=v_{*}\sqrt{n}/z$.

Proving the finite-sample result. On one side, if $v_{*}=0$, then by the definition of $v_{*}$ it satisfies
$$1-\frac{az^{2}}{n}=\mathbb{E}\frac{\sqrt{n}\,v_{*}}{\sqrt{nv_{*}^{2}+z^{2}\varepsilon^{2}}}=0,$$
which is a contradiction. Thus $v_{*}>0$. Using the convexity of $1/\sqrt{1+x}$ for $x>-1$ and Jensen's inequality yields
$$1-\frac{az^{2}}{n}=\mathbb{E}\frac{\sqrt{n}\,v_{*}}{\sqrt{nv_{*}^{2}+z^{2}\varepsilon^{2}}}=\mathbb{E}\frac{1}{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}\geq\frac{1}{\sqrt{1+z^{2}\sigma^{2}/(nv_{*}^{2})}}\geq1-\frac{z^{2}\sigma^{2}}{2nv_{*}^{2}},$$
where the last inequality uses the inequality $(1+x)^{-1/2}\geq1-x/2$, that is, Lemma J.4 (i) with $r=-1/2$. This implies
$$v_{*}^{2}\leq\frac{\sigma^{2}}{2a}.$$
On the other side, using the concavity of $\sqrt{x}$, we obtain, for any $\gamma\in[0,1)$, that
$$\begin{aligned}1-\frac{az^{2}}{n}&=\mathbb{E}\frac{\sqrt{n}\,v_{*}}{\sqrt{nv_{*}^{2}+z^{2}\varepsilon^{2}}}=\mathbb{E}\frac{1}{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}\\
&\leq\sqrt{\mathbb{E}\frac{1}{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}\\
&\leq\sqrt{\mathbb{E}\left\{\left(1-(1-\gamma)\frac{z^{2}\varepsilon^{2}}{nv_{*}^{2}}\right)1\left(\frac{z^{2}\varepsilon^{2}}{nv_{*}^{2}}\leq\frac{\gamma}{1-\gamma}\right)+\frac{1}{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}\,1\left(\frac{z^{2}\varepsilon^{2}}{nv_{*}^{2}}>\frac{\gamma}{1-\gamma}\right)\right\}}\\
&\leq\sqrt{1-(1-\gamma)\,\mathbb{E}\left\{\frac{z^{2}\varepsilon^{2}}{nv_{*}^{2}}\,1\left(\frac{z^{2}\varepsilon^{2}}{nv_{*}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right\}}\\
&=\sqrt{1-(1-\gamma)\,\frac{\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq\gamma\tau_{*}^{2}/(1-\gamma))\}}{nv_{*}^{2}/z^{2}}},\end{aligned}\tag{E.1}$$
where the second inequality uses Lemma E.1, that is,
$$(1+x)^{-1}\leq1-(1-\gamma)x,\quad\text{for any }x\in\left[0,\frac{\gamma}{1-\gamma}\right].$$
Squaring both sides of (E.1) and using the fact that $n\geq az^{2}$ together with Lemma J.4 (i) with $r=2$, i.e., $(1+x)^{r}\geq1+rx$ for $x\geq-1$ and $r\in\mathbb{R}\setminus(0,1)$, we obtain
$$1-\frac{2az^{2}}{n}\leq\left(1-\frac{az^{2}}{n}\right)^{2}\leq1-(1-\gamma)\,\frac{\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq\gamma\tau_{*}^{2}/(1-\gamma))\}}{nv_{*}^{2}/z^{2}},$$
or equivalently
$$v_{*}^{2}\geq\frac{\sigma_{\varphi\tau_{*}^{2}}^{2}}{2a},$$
where $\varphi=\gamma/(1-\gamma)$ and $\sigma_{c}^{2}:=\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq c)\}$ denotes the truncated second moment, a notation used repeatedly below. Combining the upper and lower bounds for $v_{*}^{2}$ completes the proof of the finite-sample result.

Proving the asymptotic result. The above implies that $v_{*}<\infty$ for any $n\geq az^{2}$. By the definition of $v_{*}$, we have
$$\frac{az^{2}}{n}=1-\mathbb{E}\frac{1}{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}.\tag{E.2}$$
We must have $nv_{*}^{2}/z^{2}\to\infty$. Otherwise, assume $\limsup_{n\to\infty}nv_{*}^{2}/z^{2}\leq M<\infty$. Taking $n\to\infty$, the left-hand side of (E.2) goes to 0 while the right-hand side is lower bounded as
$$1-\mathbb{E}\frac{1}{\sqrt{1+\varepsilon^{2}/M}}\geq1-\sqrt{\mathbb{E}\left(\frac{1}{1+\varepsilon^{2}/M}\right)}\geq1-\sqrt{1-\frac{\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq M)\}}{2M}}>0,$$
where the first two inequalities follow from the argument leading to (E.1) with $\gamma=1/2$, and the last quantity is a positive constant independent of $n$ since $\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq M)\}>0$ (note also $\mathbb{E}\{\varepsilon^{2}1(\varepsilon^{2}\leq M)\}\leq M$, so the square root is well defined). This is a contradiction. Thus $nv_{*}^{2}/z^{2}\to\infty$. Multiplying both sides of (E.2) by $n$, taking $n\to\infty$, and using the dominated convergence theorem, we obtain
$$az^{2}=\lim_{n\to\infty}\mathbb{E}\left(n\cdot\frac{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}-1}{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}\right)=\lim_{n\to\infty}\mathbb{E}\left(n\cdot\frac{1}{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}}\cdot\frac{\sqrt{1+z^{2}\varepsilon^{2}/(nv_{*}^{2})}-1}{z^{2}\varepsilon^{2}/(2nv_{*}^{2})}\cdot\frac{z^{2}\varepsilon^{2}}{2nv_{*}^{2}}\right)=\frac{z^{2}\,\mathbb{E}\varepsilon^{2}}{2\lim_{n\to\infty}v_{*}^{2}},$$
and thus $\lim_{n\to\infty}v_{*}^{2}=\sigma^{2}/(2a)$.
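As a numerical sanity check of the asymptotic result (ours, assuming Gaussian ε for concreteness), one can solve the defining equation (E.2) for $v_{*}$ by Monte Carlo together with a root finder and verify that $v_{*}^{2}$ approaches $\sigma^{2}/(2a)$:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
a, sigma = 0.5, 2.0                                  # so sigma^2/(2a) = 4
eps = rng.normal(0.0, sigma, size=2_000_000)         # Monte Carlo draws of epsilon

def v_star(n, z2):
    # root of E[1/sqrt(1 + z^2 eps^2/(n v^2))] - (1 - a z^2/n) in v
    h = lambda v: np.mean(1 / np.sqrt(1 + z2 * eps**2 / (n * v**2))) - (1 - a * z2 / n)
    return brentq(h, 1e-3, 1e3)

for n in [10**2, 10**3, 10**4, 10**5]:
    v = v_star(n, z2=np.log(n))
    print(n, v**2, sigma**2 / (2 * a))               # v_*^2 -> sigma^2/(2a)
```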
## E.2 Proof Of Proposition 2.4

Proof of Proposition 2.4. The convexity proof consists of two steps: (1) prove that $L_{n}(\mu,v)$ is jointly convex in $\mu$ and $v$; (2) prove that $L_{n}(\mu,v)$ is strictly convex, provided that there are at least two distinct data points.

To show that $L_{n}(\mu,v)=n^{-1}\sum_{i=1}^{n}\ell^{p}(y_{i}-\mu,v)$ in (2.5) is jointly convex in $\mu$ and $v$, it suffices to show that each $\ell^{p}(y_{i}-\mu,v)$ is jointly convex in $\mu$ and $v$. Recall that $\tau=v\sqrt{n}/z$. The Hessian matrix of $\ell^{p}(y_{i}-\mu,v)$ is
$$H_{i}(\mu,v)=\frac{\sqrt{n}}{z}\cdot\frac{1}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}\begin{bmatrix}\tau^{2}&(\sqrt{n}/z)\,\tau(y_{i}-\mu)\\ (\sqrt{n}/z)\,\tau(y_{i}-\mu)&(\sqrt{n}/z)^{2}\,(y_{i}-\mu)^{2}\end{bmatrix}\succeq0,$$
which is positive semi-definite (it is a nonnegative multiple of a rank-one outer product). Thus $L_{n}(\mu,v)$ is jointly convex in $\mu$ and $v$.

We proceed to show (2). Because the Hessian matrix $H(\mu,v)$ of $L_{n}(\mu,v)$ satisfies $H(\mu,v)=n^{-1}\sum_{i=1}^{n}H_{i}(\mu,v)$ and each $H_{i}(\mu,v)$ is positive semi-definite, we only need to show that $H(\mu,v)$ is of full rank. Without loss of generality, assume that $y_{1}\neq y_{2}$. Then
$$H_{1}(\mu,v)+H_{2}(\mu,v)=\frac{\sqrt{n}}{z}\cdot\sum_{i=1}^{2}\frac{1}{\left(\tau^{2}+(y_{i}-\mu)^{2}\right)^{3/2}}\begin{bmatrix}\tau^{2}&(\sqrt{n}/z)\,\tau(y_{i}-\mu)\\ (\sqrt{n}/z)\,\tau(y_{i}-\mu)&(\sqrt{n}/z)^{2}\,(y_{i}-\mu)^{2}\end{bmatrix}.$$
Some algebra yields
$$\det\left(H_{1}(\mu,v)+H_{2}(\mu,v)\right)=\frac{n^{2}\tau^{2}}{z^{4}}\cdot\frac{(y_{1}-y_{2})^{2}}{(\tau^{2}+(y_{1}-\mu)^{2})^{3/2}(\tau^{2}+(y_{2}-\mu)^{2})^{3/2}}\neq0$$
for any $v>0$ (and thus $\tau>0$) and $\mu\in\mathbb{R}$, provided that $y_{1}\neq y_{2}$. Therefore $H_{1}(\mu,v)+H_{2}(\mu,v)$ is of full rank, and thus so is $H(\mu,v)$, provided $v>0$, $\mu\in\mathbb{R}$, and $y_{1}\neq y_{2}$.

Lemma E.1. Let $0\leq\gamma<1$. For any $0\leq x\leq\gamma/(1-\gamma)$, we have
$$(1+x)^{-1}\leq1-(1-\gamma)x.$$

Proof of Lemma E.1. To prove the lemma, it suffices to prove, for any $\gamma\in[0,1)$, that
$$1\leq(1+x)-(1-\gamma)x(1+x),\quad\forall\;0\leq x\leq\frac{\gamma}{1-\gamma},$$
which is equivalent to
$$x\left(x-\frac{\gamma}{1-\gamma}\right)\leq0,\quad\forall\;0\leq x\leq\frac{\gamma}{1-\gamma}.$$
The above inequality always holds, and this completes the proof.
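The rank-one structure of each $H_{i}$ displayed above, and the determinant formula for $H_{1}+H_{2}$, can be checked numerically; the following snippet (ours) does so for arbitrary parameter values.

```python
import numpy as np

def H_i(y_i, mu, v, n, z):
    """Rank-one Hessian of ell^p(y_i - mu, v), in the outer-product form above."""
    tau = v * np.sqrt(n) / z
    u = np.array([tau, (np.sqrt(n) / z) * (y_i - mu)])
    c = (np.sqrt(n) / z) / (tau**2 + (y_i - mu) ** 2) ** 1.5
    return c * np.outer(u, u)

n, z, mu, v = 50, 2.0, 0.3, 1.1
y1, y2 = 1.0, -0.7
H = H_i(y1, mu, v, n, z) + H_i(y2, mu, v, n, z)
tau = v * np.sqrt(n) / z
det_formula = (n**2 * tau**2 / z**4) * (y1 - y2) ** 2 / (
    (tau**2 + (y1 - mu) ** 2) ** 1.5 * (tau**2 + (y2 - mu) ** 2) ** 1.5)
print(np.linalg.det(H), det_formula)   # the two values agree
print(np.linalg.eigvalsh(H))           # both eigenvalues are strictly positive
```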
## F Proofs For The Fixed v Case

This section collects proofs for Theorem 3.1, Lemma 3.2, and Corollary 3.3. Recall that $\tau=v\sqrt{n}/z$, and the gradients with respect to $\mu$ and $v$ are
$$\nabla_{\mu}L_{n}(\mu,v)=-\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\mu}{v\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}}=-\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\mu}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}},$$
$$\nabla_{v}L_{n}(\mu,v)=\frac{1}{n}\sum_{i=1}^{n}\frac{n/z^{2}}{\sqrt{1+z^{2}(y_{i}-\mu)^{2}/(nv^{2})}}-\left(\frac{n}{z^{2}}-a\right)=\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}}-1\right)+a.$$

## F.1 Proof of Theorem 3.1

Proof of Theorem 3.1. Let $\tau=v\sqrt{n}/z$. Because $\widehat{\mu}(v)$ is a stationary point of $L_{n}(\mu,v)$, we have
$$\frac{\partial}{\partial\mu}L_{n}(\widehat{\mu}(v),v)=-\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\widehat{\mu}(v)}{v\sqrt{1+z^{2}(y_{i}-\widehat{\mu}(v))^{2}/(nv^{2})}}=-\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\widehat{\mu}(v)}{\sqrt{\tau^{2}+(y_{i}-\widehat{\mu}(v))^{2}}}=0.$$
Let $\Delta=\widehat{\mu}(v)-\mu^{*}$. We first assume that $|\Delta|\leq r_{0}\leq r$. Using Assumption 1, we obtain
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|^{2}\leq\left\langle\frac{\partial}{\partial\mu}L_{n}(\widehat{\mu}(v),v)-\frac{\partial}{\partial\mu}L_{n}(\mu^{*},v),\,\widehat{\mu}(v)-\mu^{*}\right\rangle\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\,|\widehat{\mu}(v)-\mu^{*}|,$$
or equivalently
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|.$$
Applying Lemma F.1 with the fact that $|\mathbb{E}\,\tau\varepsilon_{i}/(\tau^{2}+\varepsilon_{i}^{2})^{1/2}|\leq\sigma^{2}/(2\tau)$, we obtain with probability at least $1-2\delta$
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\left|\frac{\sqrt{n}}{\tau}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq\frac{\sqrt{n}}{z\tau}\left(\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{\tau\log(1/\delta)}{3n}+\frac{\sigma^{2}}{2\tau}\right),$$
or equivalently
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\sqrt{\frac{2\log(1/\delta)}{z^{2}\tau^{2}/\sigma^{2}}}+\frac{\log(1/\delta)}{3z\sqrt{n}}+\frac{\sqrt{n}\sigma^{2}}{2z\tau^{2}}.$$
Since $\tau=v\sqrt{n}/z$, we have
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\left(\frac{\sqrt{2}\sigma}{v}+\frac{\sqrt{\log(1/\delta)}}{3z}\right)\sqrt{\frac{\log(1/\delta)}{n}}+\frac{1}{2}\cdot\frac{\sigma^{2}}{v^{2}}\cdot\frac{z}{\sqrt{n}}.$$
Taking $z=\sqrt{\log(w/\delta)}$ then yields
$$\begin{aligned}\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|&\leq\left(\frac{\sqrt{2}\sigma}{v}+\frac{\sqrt{\log(1/\delta)}}{3\sqrt{\log(w/\delta)}}\right)\sqrt{\frac{\log(1/\delta)}{n}}+\frac{1}{2}\cdot\frac{\sigma^{2}}{v^{2}}\cdot\sqrt{\frac{\log(w/\delta)}{n}}\\
&\leq\left(\frac{\sqrt{2}\sigma}{v}+\frac{1}{3}+\frac{1}{2}\cdot\frac{\sigma^{2}}{v^{2}}\right)\sqrt{\frac{\log(w/\delta)}{n}}\\
&<\left(1+\frac{\sigma}{\sqrt{2}v}\right)^{2}\sqrt{\frac{\log(w/\delta)}{n}}=:\kappa_{\ell}r_{0}\leq\kappa_{\ell}r\end{aligned}$$
for any $0\leq\delta<1$. Moving $\kappa_{\ell}$ to the right-hand side gives the desired bound.

We then show that $|\Delta|\leq r_{0}$ must hold. If not, we construct an intermediate solution between $\mu^{*}$ and $\widehat{\mu}(v)$, denoted by $\mu_{\eta}=\mu^{*}+\eta(\widehat{\mu}(v)-\mu^{*})$; specifically, we can choose some $\eta\in(0,1)$ so that $|\mu_{\eta}-\mu^{*}|=r_{0}$. Repeating the above calculation for $\mu_{\eta}$, we obtain
$$|\mu_{\eta}-\mu^{*}|\leq\frac{1}{\kappa_{\ell}}\cdot\left(\frac{\sqrt{2}\sigma}{v}+\frac{1}{3}+\frac{1}{2}\cdot\frac{\sigma^{2}}{v^{2}}\right)\sqrt{\frac{\log(w/\delta)}{n}}<r_{0}=\frac{1}{\kappa_{\ell}}\cdot\left(1+\frac{\sigma}{\sqrt{2}v}\right)^{2}\sqrt{\frac{\log(w/\delta)}{n}},$$
which is a contradiction. Thus it must hold that $|\Delta|\leq r_{0}$. Taking $w=1$ and using a change of variable $2\delta\to\delta$ complete the proof.

## F.2 Proof Of Lemma 3.2

Proof of Lemma 3.2. We prove that, with probability at least $1-\delta$, Assumption 1 with $\kappa_{\ell}=1/(2v)$ holds uniformly over $v\geq v_{0}$. Recall that $\tau=v\sqrt{n}/z$. For notational simplicity, let $\Delta=\mu-\mu^{*}$ and $\tau_{v_{0}}=v_{0}\sqrt{n}/z$. It follows that
$$\begin{aligned}\langle\nabla_{\mu}L_{n}(\mu,v)-\nabla_{\mu}L_{n}(\mu^{*},v),\,\Delta\rangle&=\left\langle\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{y_{i}-\mu}{z\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}},\,\Delta\right\rangle\\
&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\tau^{2}}{z(\tau^{2}+(y_{i}-\widetilde{\mu})^{2})^{3/2}}\,\Delta^{2},\end{aligned}$$
where $\widetilde{\mu}$ is some convex combination of $\mu^{*}$ and $\mu$, that is, $\widetilde{\mu}=(1-\lambda)\mu^{*}+\lambda\mu$ for some $\lambda\in[0,1]$.
Obviously we have $|\widetilde{\mu}-\mu^{*}|=\lambda|\Delta|\leq|\Delta|\leq r$. Since
$$(y_{i}-\widetilde{\mu})^{2}\leq2\varepsilon_{i}^{2}+2\lambda^{2}\Delta^{2}\leq2\varepsilon_{i}^{2}+2\Delta^{2}\leq2\varepsilon_{i}^{2}+2r^{2},$$
the above displayed equality implies
$$\begin{aligned}\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{\langle\nabla_{\mu}L_{n}(\mu,v)-\nabla_{\mu}L_{n}(\mu^{*},v),\,\mu-\mu^{*}\rangle}{|\mu-\mu^{*}|^{2}}&\geq\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{2}}{(\tau^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}\\
&=\frac{\sqrt{n}}{z}\cdot\frac{\tau^{2}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{(\tau^{2}+2r^{2})^{3/2}}{(\tau^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}\\
&\geq\frac{\sqrt{n}}{z}\cdot\frac{\tau^{2}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\left(\mathbb{E}\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}-\sqrt{\frac{\log(1/\delta)}{2n}}\right)\\
&=\frac{\sqrt{n}}{z}\cdot\frac{\tau^{2}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\left(\mathrm{I}-\sqrt{\frac{\log(1/\delta)}{2n}}\right),\end{aligned}\tag{F.1}$$
where the last inequality uses Lemma F.2 (applied uniformly over $\tau\geq\tau_{v_{0}}$). It remains to lower bound I. Using the convexity of $1/(1+x)^{3/2}$ and Jensen's inequality, we obtain
$$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}=\mathbb{E}\frac{1}{(1+2\varepsilon_{i}^{2}/(\tau_{v_{0}}^{2}+2r^{2}))^{3/2}}\geq\frac{1}{(1+2\sigma^{2}/(\tau_{v_{0}}^{2}+2r^{2}))^{3/2}}=\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\sigma^{2})^{3/2}}.$$
Plugging the above lower bound into (F.1) and using the facts that
$$\frac{\tau^{3}}{(\tau^{2}+2r^{2})^{3/2}}\geq\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}\ \text{ for }\tau\geq\tau_{v_{0}}\quad\text{and}\quad\frac{\tau^{3}}{(\tau^{2}+2r^{2})^{3/2}}\leq1,$$
we obtain with probability at least $1-\delta$
$$\begin{aligned}\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{\langle\nabla_{\mu}L_{n}(\mu,v)-\nabla_{\mu}L_{n}(\mu^{*},v),\,\mu-\mu^{*}\rangle}{|\mu-\mu^{*}|^{2}}&\geq\frac{\sqrt{n}}{z}\cdot\frac{\tau^{2}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\left(\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\sigma^{2})^{3/2}}-\sqrt{\frac{\log(1/\delta)}{2n}}\right)\\
&=\frac{\sqrt{n}}{z\tau}\left(\frac{\tau^{3}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\sigma^{2})^{3/2}}-\frac{\tau^{3}}{(\tau^{2}+2r^{2})^{3/2}}\cdot\sqrt{\frac{\log(1/\delta)}{2n}}\right)\\
&\geq\frac{\sqrt{n}}{z\tau}\left(\frac{1}{(1+(2r^{2}+2\sigma^{2})/\tau_{v_{0}}^{2})^{3/2}}-\sqrt{\frac{\log(1/\delta)}{2n}}\right)\\
&=\frac{1}{v}\left(\frac{1}{(1+(2r^{2}+2\sigma^{2})/\tau_{v_{0}}^{2})^{3/2}}-\sqrt{\frac{\log(1/\delta)}{2n}}\right)\geq\frac{1}{2v},\end{aligned}$$
provided $\tau_{v_{0}}^{2}\geq4r^{2}+4\sigma^{2}$ and $n\geq C\log(1/\delta)$ for some large enough absolute constant $C$.

## F.3 Proof Of Corollary 3.3

Proof of Corollary 3.3. Recall $z=\sqrt{\log(w/\delta)}$ and let
$$r=2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}}.$$
If $n\geq C\max\{(r^{2}+\sigma^{2})/v_{0}^{2},\,1\}\log(1/\delta)$, which is guaranteed by the conditions of the corollary, then with probability at least $1-\delta$, Assumption 1 holds with $\kappa_{\ell}=1/(2v)$ uniformly over $v\geq v_{0}$. Denote this probability event by $\mathcal{E}$. If Assumption 1 holds, then by Theorem 3.1 we have
$$\mathbb{P}\left(|\widehat{\mu}(v)-\mu^{*}|\leq2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}}\ \middle|\ \mathcal{E}\right)\geq1-\delta.$$
Thus
$$\begin{aligned}\mathbb{P}\left(|\widehat{\mu}(v)-\mu^{*}|>2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}}\right)&=\mathbb{P}\left(|\widehat{\mu}(v)-\mu^{*}|>2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}},\ \mathcal{E}\right)\\
&\qquad+\mathbb{P}\left(|\widehat{\mu}(v)-\mu^{*}|>2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}},\ \mathcal{E}^{c}\right)\\
&\leq\mathbb{P}\left(|\widehat{\mu}(v)-\mu^{*}|>2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}}\ \middle|\ \mathcal{E}\right)+\mathbb{P}(\mathcal{E}^{c})\leq2\delta.\end{aligned}$$
Then with probability at least $1-2\delta$, we have
$$|\widehat{\mu}(v)-\mu^{*}|\leq2v\left(\frac{\sigma}{\sqrt{2}v}+1\right)^{2}\sqrt{\frac{\log(2w/\delta)}{n}}.$$
Using a change of variable $2\delta\to\delta$ finishes the proof.

This subsection collects two supporting lemmas that are used earlier in this section.

Lemma F.1. Let $\varepsilon_{i}$ be i.i.d. random variables such that $\mathbb{E}\varepsilon_{i}=0$ and $\mathbb{E}\varepsilon_{i}^{2}=\sigma^{2}$. For any $0\leq\delta\leq1$, with probability at least $1-2\delta$, we have
$$\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}-\mathbb{E}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{\tau\log(1/\delta)}{3n}.$$
Proof of Lemma F.1. The random variables $Z_{i}:=\tau\varepsilon_{i}/(\tau^{2}+\varepsilon_{i}^{2})^{1/2}$ with $\mu_{z}=\mathbb{E}Z_{i}$ and $\sigma_{z}^{2}=\operatorname{var}(Z_{i})$ are bounded i.i.d. random variables such that
$$\begin{aligned}|Z_{i}|&=\left|\tau\varepsilon_{i}/(\tau^{2}+\varepsilon_{i}^{2})^{1/2}\right|\leq|\varepsilon_{i}|\wedge\tau\leq\tau,\\
|\mu_{z}|&=|\mathbb{E}Z_{i}|=\left|\mathbb{E}\left(\tau\varepsilon_{i}/(\tau^{2}+\varepsilon_{i}^{2})^{1/2}\right)\right|\leq\frac{\sigma^{2}}{2\tau},\\
\mathbb{E}Z_{i}^{2}&=\mathbb{E}\left(\frac{\tau^{2}\varepsilon_{i}^{2}}{\tau^{2}+\varepsilon_{i}^{2}}\right)\leq\sigma^{2},\\
\sigma_{z}^{2}&:=\operatorname{var}(Z_{i})=\mathbb{E}\left(\tau\varepsilon_{i}/(\tau^{2}+\varepsilon_{i}^{2})^{1/2}-\mu_{z}\right)^{2}=\mathbb{E}\left(\frac{\tau^{2}\varepsilon_{i}^{2}}{\tau^{2}+\varepsilon_{i}^{2}}\right)-\mu_{z}^{2}\leq\sigma^{2}.\end{aligned}$$
For third and higher order absolute moments, we have
$$\mathbb{E}|Z_{i}|^{k}=\mathbb{E}\left|\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|^{k}\leq\sigma^{2}\tau^{k-2}\leq\frac{k!}{2}\sigma^{2}(\tau/3)^{k-2},\quad\text{for all integers }k\geq3.$$
Using Lemma J.2 with $v=n\sigma^{2}$ and $c=\tau/3$, we have for any $t\geq0$
$$\mathbb{P}\left(\left|\sum_{i=1}^{n}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}-\sum_{i=1}^{n}\mathbb{E}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\geq\sqrt{2n\sigma^{2}t}+\frac{\tau t}{3}\right)\leq2\exp(-t).$$
Taking $t=\log(1/\delta)$ yields, for any $0\leq\delta\leq1$,
$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\frac{\tau\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{\tau\log(1/\delta)}{3n}\right)\geq1-2\delta.$$
This completes the proof.

Lemma F.2. For any $0<\delta<1$, with probability at least $1-\delta$,
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}-\mathbb{E}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}\geq-\sqrt{\frac{\log(1/\delta)}{2n}}.$$
Moreover, with probability at least $1-\delta$, it holds uniformly over $\tau\geq\tau_{v_{0}}\geq0$ that
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}\geq\mathbb{E}\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}-\sqrt{\frac{\log(1/\delta)}{2n}}.$$

Proof of Lemma F.2. The random variables $Z_{i}=Z_{i}(\tau):=\tau^{3}/(\tau^{2}+\varepsilon_{i}^{2})^{3/2}$ with $\mu_{z}=\mathbb{E}Z_{i}$ and $\sigma_{z}^{2}=\operatorname{var}(Z_{i})$ are bounded i.i.d. random variables such that
$$0\leq Z_{i}=\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}\leq1.$$
Therefore, using Lemma J.1 with $v=n$ yields, for any $t\geq0$,
$$\mathbb{P}\left(\sum_{i=1}^{n}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}-\sum_{i=1}^{n}\mathbb{E}\left(\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}\right)\leq-\sqrt{\frac{nt}{2}}\right)\leq\exp(-t).$$
Taking $t=\log(1/\delta)$ yields, for any $0<\delta\leq1$,
$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left(\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}\right)>-\sqrt{\frac{\log(1/\delta)}{2n}}\right)>1-\delta.$$
The second result follows from the fact that $Z_{i}(\tau)$ is an increasing function of $\tau$. Specifically, we have with probability at least $1-\delta$
$$\begin{aligned}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}}{(\tau^{2}+\varepsilon_{i}^{2})^{3/2}}&\geq\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}\\
&=\mathbb{E}\left(\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}\right)+\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}-\mathbb{E}\left(\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}\right)\\
&\geq\mathbb{E}\left(\frac{\tau_{v_{0}}^{3}}{(\tau_{v_{0}}^{2}+\varepsilon_{i}^{2})^{3/2}}\right)-\sqrt{\frac{\log(1/\delta)}{2n}}.\end{aligned}$$
This finishes the proof.
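The Bernstein-type bound of Lemma F.1 is easy to probe by simulation; the following sketch (ours, with $t$-distributed errors normalized so that $\sigma=1$) estimates the probability that the deviation exceeds the stated bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, sigma, delta, reps = 500, 5.0, 1.0, 0.1, 20_000
# t(4) has variance 2; divide by sqrt(2) so that E eps^2 = sigma^2 = 1
eps = rng.standard_t(df=4, size=(reps, n)) / np.sqrt(2)
zi = tau * eps / np.sqrt(tau**2 + eps**2)
dev = np.abs(zi.mean(axis=1) - zi.mean())   # pooled mean as a proxy for E Z_i
bound = sigma * np.sqrt(2 * np.log(1/delta) / n) + tau * np.log(1/delta) / (3*n)
print((dev <= bound).mean())                # should be at least 1 - 2*delta = 0.8
```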
## G Proofs For The Self-Tuned Case

This section collects proofs for Theorems 3.4 and 3.5.

## G.1 Proof Of Theorem 3.4

Proof of Theorem 3.4. Recall that $\tau=v\sqrt{n}/z$. For simplicity, let $\widehat{\tau}=\widehat{v}\sqrt{n}/z$. Define the profile loss $L_{n}^{\mathrm{pro}}(v)$ as
$$L_{n}^{\mathrm{pro}}(v):=L_{n}(\widehat{\mu}(v),v)=\min_{\mu}L_{n}(\mu,v).$$
Its first-order gradient is
$$\nabla L_{n}^{\mathrm{pro}}(v)=\nabla_{v}L_{n}(\widehat{\mu}(v),v)=\frac{\partial}{\partial v}\widehat{\mu}(v)\cdot\frac{\partial}{\partial\mu}L_{n}(\mu,v)\Big|_{\mu=\widehat{\mu}(v)}+\frac{\partial}{\partial v}L_{n}(\mu,v)\Big|_{\mu=\widehat{\mu}(v)}=\frac{\partial}{\partial v}L_{n}(\mu,v)\Big|_{\mu=\widehat{\mu}(v)},\tag{G.1}$$
where we use the fact that $\partial/\partial\mu\,L_{n}(\mu,v)|_{\mu=\widehat{\mu}(v)}=0$, implied by the stationarity of $\widehat{\mu}(v)$.

Assuming that the constraint is inactive. We first assume that the constraint is not active at any stationary point $\widehat{v}$, that is, any stationary point $\widehat{v}$ is an interior point of $[v_{0},V_{0}]$, i.e., $\widehat{v}\in(v_{0},V_{0})$. By the joint convexity of $L_{n}(\mu,v)$ and the convexity of $L_{n}^{\mathrm{pro}}(v)$, $(\widehat{\mu}(\widehat{v}),\widehat{v})$ and $\widehat{v}$ are stationary points of $L_{n}(\mu,v)$ and $L_{n}(\widehat{\mu}(v),v)$, respectively. Thus we have
$$\frac{\partial}{\partial\mu}L_{n}(\mu,v)\Big|_{(\mu,v)=(\widehat{\mu}(\widehat{v}),\widehat{v})}=-\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{y_{i}-\widehat{\mu}(\widehat{v})}{\sqrt{\widehat{\tau}^{2}+(y_{i}-\widehat{\mu}(\widehat{v}))^{2}}}=0,$$
$$\frac{\partial}{\partial v}L_{n}(\mu,v)\Big|_{(\mu,v)=(\widehat{\mu}(\widehat{v}),\widehat{v})}=\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\widehat{\tau}}{\sqrt{\widehat{\tau}^{2}+(y_{i}-\widehat{\mu}(\widehat{v}))^{2}}}-\left(\frac{n}{z^{2}}-a\right)=0,$$
$$\nabla L_{n}^{\mathrm{pro}}(v)\Big|_{v=\widehat{v}}=\frac{\partial}{\partial v}L_{n}(\widehat{\mu}(v),v)\Big|_{v=\widehat{v}}=\frac{\partial}{\partial v}L_{n}(\mu,v)\Big|_{(\mu,v)=(\widehat{\mu}(\widehat{v}),\widehat{v})}=0,$$
where the first two equalities concern partial derivatives of $L_{n}(\mu,v)$ and the last concerns the derivative of the profile loss $L_{n}(\widehat{\mu}(v),v)$. Recall that $\tau=\sqrt{n}\,v/z$. Let $f(\tau)=z^{2}\nabla L_{n}^{\mathrm{pro}}(v)/n$, that is,
$$f(\tau)=\frac{1}{n}\sum_{i=1}^{n}\frac{\tau}{\sqrt{\tau^{2}+(y_{i}-\widehat{\mu}(v))^{2}}}-\left(1-\frac{az^{2}}{n}\right).$$
In other words, $\widehat{\tau}=\sqrt{n}\,\widehat{v}/z$ satisfies $f(\widehat{\tau})=0$. We now split the proof for the inactive-constraint case into two steps.

Step 1: Proving $\widehat{v}\leq C_{0}\sigma$ **for some universal constant** $C_{0}$. We will employ a proof by contradiction. Assume there exists some $v$ such that $v>(1+\epsilon)\sqrt{r^{2}+\sigma^{2}}$ and $\nabla L_{n}^{\mathrm{pro}}(v)=0$; or equivalently, there exists some $\tau$ such that
$$\tau>(1+\epsilon)\sqrt{r^{2}+\sigma^{2}}\,\sqrt{n}/z=:\bar{\tau}\quad\text{and}\quad f(\tau)=0,\tag{G.2}$$
where $\epsilon$ and $r$ are to be determined later.
Let $\tau_{v_{0}}=v_{0}\sqrt{n}/z$. Then, provided that $n$ is large enough, Lemma 3.2 implies that Assumption 1 with $r$ and $\kappa_{\ell}=1/(2v)$ holds uniformly over $v\geq v_{0}$ conditional on the following event:
$$\mathcal{E}_{1}:=\left\{\frac{1}{n}\sum_{i=1}^{n}\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\frac{(\tau_{v_{0}}^{2}+2r^{2})^{3/2}}{(\tau_{v_{0}}^{2}+2r^{2}+2\varepsilon_{i}^{2})^{3/2}}\geq-\sqrt{\frac{\log(1/\delta)}{2n}}\right\}.$$
Conditional on the intersection of the event $\mathcal{E}_{1}$ and the following event
$$\mathcal{E}_{2}:=\left\{\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq C\cdot\frac{V_{0}}{v_{0}}\cdot\frac{\log(n/\delta)}{n}\right\},$$
where $z\lesssim\sqrt{\log(n/\delta)}$ and $C$ is some constant, and following the proof of Theorem 3.1, for any fixed $v$ and thus $\tau$, we have
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|.$$
Thus, for any $v$ such that $v_{0}\vee\bar{v}_{0}:=v_{0}\vee(1+\epsilon)\sqrt{r^{2}+\sigma^{2}}<v<V_{0}$, we have on $\mathcal{E}_{2}$ that
$$\sup_{v_{0}\vee\bar{v}_{0}<v<V_{0}}\kappa_{\ell}(v)\,|\widehat{\mu}(v)-\mu^{*}|\leq\sup_{v\in[v_{0},V_{0}]}\kappa_{\ell}(v)\,|\widehat{\mu}(v)-\mu^{*}|\leq\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq C\cdot\frac{V_{0}}{v_{0}}\cdot\frac{\log(n/\delta)}{z\sqrt{n}},$$
which, by Lemma 3.2, yields
$$\sup_{v\in[v_{0},V_{0}]}|\widehat{\mu}(v)-\mu^{*}|\leq2C\cdot\frac{V_{0}^{2}}{v_{0}}\cdot\frac{\log(n/\delta)}{z\sqrt{n}}=:r.$$
The above $r$ can be further refined by using the finer lower bound $\bar{v}_{0}$ of $v$ instead of $v_{0}$, but we use $v_{0}$ for simplicity. Let $\Delta=\mu^{*}-\widehat{\mu}(v)$, so that $|\Delta|\leq r$. Let the event $\mathcal{E}_{3}$ be
$$\mathcal{E}_{3}:=\left\{\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}-\mathbb{E}\left(\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}\right)\leq\sqrt{\frac{2(r^{2}+\sigma^{2})\log(1/\delta)}{n\bar{\tau}^{2}}}+\frac{\log(1/\delta)}{3n}\right\}.$$
Thus, on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}$ and using the fact that $1-1/\sqrt{1+x}$ is an increasing function, we have
$$\begin{aligned}f(\tau)&=\frac{az^{2}}{n}-\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\tau^{2}+(\Delta+\varepsilon_{i})^{2}}-\tau}{\sqrt{\tau^{2}+(\Delta+\varepsilon_{i})^{2}}}\geq\frac{az^{2}}{n}-\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\tau^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\tau}{\sqrt{\tau^{2}+2(r^{2}+\varepsilon_{i}^{2})}}\\
&>\frac{az^{2}}{n}-\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}\qquad(\tau>\bar{\tau})\\
&\geq\frac{az^{2}}{n}-\mathbb{E}\left(\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}\right)-\left\{\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}-\mathbb{E}\left(\frac{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}-\bar{\tau}}{\sqrt{\bar{\tau}^{2}+2(r^{2}+\varepsilon_{i}^{2})}}\right)\right\}\\
&\geq\frac{az^{2}}{n}-\left(\frac{r^{2}+\sigma^{2}}{\bar{\tau}^{2}}+\sqrt{\frac{2(r^{2}+\sigma^{2})\log(1/\delta)}{n\bar{\tau}^{2}}}+\frac{\log(1/\delta)}{3n}\right)\\
&=\frac{z^{2}}{n}\left(a-\frac{\log(1/\delta)}{3z^{2}}\right)-\left(\frac{z^{2}}{(1+\epsilon)^{2}n}+\sqrt{\frac{2z^{2}\log(1/\delta)}{(1+\epsilon)^{2}n^{2}}}\right)\qquad(\text{definition of }\bar{\tau})\\
&\geq\frac{z^{2}}{n}\left(a-\frac{1}{3}-\frac{1}{(1+\epsilon)^{2}}-\sqrt{\frac{2}{(1+\epsilon)^{2}}}\right)\qquad(z^{2}\geq\log(1/\delta))\\
&\geq0,\end{aligned}$$
provided that
$$\frac{1}{1+\epsilon}\leq\frac{\sqrt{1+2(a-1/3)}-1}{\sqrt{2}},$$
or equivalently
$$\epsilon\geq\frac{\sqrt{4a+2/3}+\sqrt{2}+2/3-2a}{2(a-1/3)}=:\epsilon(a).$$
In other words, conditional on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}$ and taking $\epsilon\geq\epsilon(a)$, we have $f(\tau)>0$ for $\tau>\bar{\tau}:=(1+\epsilon)\sqrt{r^{2}+\sigma^{2}}\,\sqrt{n}/z$. This contradicts (G.2), and thus
$$\widehat{\tau}\leq(1+\epsilon)\sqrt{r^{2}+\sigma^{2}}\,\sqrt{n}/z.$$
If $a=1/2$ and conditional on the same event, $\epsilon(1/2)\leq9$, so we may take $\epsilon=9$. If $n$ is large enough that $12\sigma\geq10\sqrt{r^{2}+\sigma^{2}}$, then conditional on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}$ we have $v_{0}\leq\widehat{v}\leq C_{0}\sigma$, where $C_{0}=12$.

Step 2: Proving $\widehat{v}\geq c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}$ **for some constant** $c_{0}$. We will again employ a proof by contradiction. Let
$$g(\tau):=\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\tau}{\sqrt{\tau^{2}+(\Delta+\varepsilon_{i})^{2}}}\right)^{2}-\left(1-\frac{az^{2}}{n}\right)^{2}.$$
Assume there exists some $v$ such that $v<\mathsf{c}$ and $\frac{\partial}{\partial v}L_{n}(\widehat{\mu}(v),v)=0$; or equivalently, assume there exists some $\tau$ such that
$$\tau<\mathsf{c}\sqrt{n}/z=:\underline{\tau}\quad\text{and}\quad g(\tau)=0.\tag{G.3}$$
It is impossible that $\mathsf{c}\leq v_{0}$, because any stationary point $v$ is in $(v_{0},V_{0})$. Thus $\mathsf{c}>v_{0}$. Let $\Delta=\widehat{\mu}(v)-\mu^{*}$. Thus, on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}$, using the facts that $\sqrt{x}$ is a concave function and $1/\sqrt{1+y/x}$ is an increasing function of $x$, we have
$$\begin{aligned}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau}{\sqrt{\tau^{2}+(\Delta+\varepsilon_{i})^{2}}}&=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{\sqrt{1+(\Delta+\varepsilon_{i})^{2}/\tau^{2}}}\\
&\leq\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1+(\Delta+\varepsilon_{i})^{2}/\tau^{2}}}\\
&\leq\sqrt{1-\frac{1}{n}\cdot\frac{1}{2\tau^{2}}\sum_{i=1}^{n}(\Delta+\varepsilon_{i})^{2}\,1\left((\Delta+\varepsilon_{i})^{2}\leq\tau^{2}\right)}.\end{aligned}$$
By the proof from step 1, on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}$ we have
$$\sup_{v\in[v_{0},V_{0}]}|\widehat{\mu}(v)-\mu^{*}|\leq r,$$
where $r$ is the same as in step 1. Then
$$\begin{aligned}g(\tau)&\leq1-\frac{1}{n}\cdot\frac{1}{2\tau^{2}}\sum_{i=1}^{n}(\Delta+\varepsilon_{i})^{2}\,1\left((\Delta+\varepsilon_{i})^{2}\leq\tau^{2}\right)-\left(1-\frac{az^{2}}{n}\right)^{2}\\
&<\frac{2az^{2}}{n}-\frac{1}{n}\cdot\frac{1}{2\tau^{2}}\sum_{i=1}^{n}(\Delta+\varepsilon_{i})^{2}\,1\left((\Delta+\varepsilon_{i})^{2}\leq\tau^{2}\right)\qquad(\text{as long as }az^{2}/n>0)\\
&\leq\frac{2az^{2}}{n}-\frac{1}{n}\cdot\frac{1}{2\tau^{2}}\sum_{i=1}^{n}\left(\varepsilon_{i}^{2}+2\Delta\varepsilon_{i}\right)1\left(\varepsilon_{i}^{2}\leq2^{-1}\tau^{2}-r^{2}\right)\\
&\leq\frac{2az^{2}}{n}-\frac{1}{2\tau^{2}}\left(\frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\tau^{2}}{2}-r^{2}\right)-\frac{2}{n}\sum_{i=1}^{n}r|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\tau^{2}}{2}-r^{2}\right)\right)\\
&=\frac{2az^{2}}{n}-\frac{1}{2\tau^{2}}\left(\mathrm{I}-2r\cdot\mathrm{II}\right).\end{aligned}$$
Define the probability event $\mathcal{E}_{4}$ as $\mathcal{E}_{4}:=\mathcal{E}_{41}\cap\mathcal{E}_{42}$, where
$$\mathcal{E}_{41}:=\left\{\frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\geq\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sigma_{\underline{\tau}^{2}/2}\sqrt{\frac{\underline{\tau}^{2}\log(1/\delta)}{n}}-\frac{\underline{\tau}^{2}\log(1/\delta)}{6n}\right\},$$
$$\mathcal{E}_{42}:=\left\{\frac{1}{n}\sum_{i=1}^{n}|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\leq\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)+\sqrt{\frac{2\sigma_{\underline{\tau}^{2}/2}^{2}\log(1/\delta)}{n}}+\frac{\underline{\tau}\log(1/\delta)}{3\sqrt{2}n}\right\}.$$
If $n$ is sufficiently large such that
$$r^{2}\leq\epsilon_{0}\lesssim\left(\frac{\log n+\log(1/\delta)}{z\sqrt{n}}\right)^{2}\leq1,\qquad\frac{r}{\underline{\tau}^{2}}\left(\sigma_{\underline{\tau}^{2}/2}^{2}+\sqrt{\frac{2\sigma_{\underline{\tau}^{2}/2}^{2}\log(1/\delta)}{n}}+\frac{\underline{\tau}\log(1/\delta)}{3\sqrt{2}n}\right)\leq\frac{1}{12}\,\frac{\log(1/\delta)}{n},$$
then conditional on $\mathcal{E}_{4}$ we have
$$\mathrm{I}\geq\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sigma_{\underline{\tau}^{2}/2}\sqrt{\frac{\underline{\tau}^{2}\log(1/\delta)}{n}}-\frac{\underline{\tau}^{2}\log(1/\delta)}{6n},$$
$$\mathrm{II}\leq\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)+\sqrt{\frac{2\sigma_{\underline{\tau}^{2}/2}^{2}\log(1/\delta)}{n}}+\frac{\underline{\tau}\log(1/\delta)}{3\sqrt{2}n}.$$
Thus, conditional on $\mathcal{E}_{4}$, we have
$$\begin{aligned}g(\tau)&<\frac{2az^{2}}{n}-\frac{1}{2\tau^{2}}\left(\mathrm{I}-2r\cdot\mathrm{II}\right)\\
&\leq\frac{2az^{2}}{n}-\frac{1}{2\tau^{2}}\left(\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sigma_{\underline{\tau}^{2}/2}\sqrt{\frac{\underline{\tau}^{2}\log(1/\delta)}{n}}-\frac{\underline{\tau}^{2}\log(1/\delta)}{6n}\right)\\
&\qquad+\frac{r}{\tau^{2}}\left(\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)+\sqrt{\frac{2\sigma_{\underline{\tau}^{2}/2}^{2}\log(1/\delta)}{n}}+\frac{\underline{\tau}\log(1/\delta)}{3\sqrt{2}n}\right)\\
&\leq\frac{2az^{2}}{n}-\frac{\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{2\tau^{2}}+\frac{\sigma_{\underline{\tau}^{2}/2}\sqrt{\log(1/\delta)}}{2\tau\sqrt{n}}+\frac{\log(1/\delta)}{12n}+\frac{\log(1/\delta)}{12n}\\
&\leq\frac{z^{2}}{n}\left(2a+\frac{\log(1/\delta)}{z^{2}}\cdot\frac{1}{6}\right)-\frac{\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{2\tau^{2}}+\frac{\sigma_{\underline{\tau}^{2}/2}\sqrt{\log(1/\delta)}}{2\tau\sqrt{n}}\\
&=\frac{z^{2}}{2n}\left(4a+\frac{\log(1/\delta)}{z^{2}}\cdot\frac{1}{3}-\frac{\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{\mathsf{c}^{2}}+\frac{\sigma_{\underline{\tau}^{2}/2}}{\mathsf{c}}\cdot\frac{\sqrt{\log(1/\delta)}}{z}\right)\qquad(\tau=\mathsf{c}\sqrt{n}/z)\\
&\leq\frac{z^{2}}{2n}\left(4a+\frac{1}{3}-\frac{\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{\mathsf{c}^{2}}+\frac{\sigma_{\underline{\tau}^{2}/2}}{\mathsf{c}}\right)\qquad(z^{2}\geq\log(1/\delta))\\
&\leq0,\end{aligned}$$
for any $\mathsf{c}$ such that
$$\mathsf{c}\leq\frac{\sigma_{\underline{\tau}^{2}/2}}{2(4a+1/3)}\left(\sqrt{1+\frac{4(4a+1/3)\,\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{\sigma_{\underline{\tau}^{2}/2}^{2}}}-1\right).$$
In other words, conditional on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{4}$ and taking any $\mathsf{c}$ satisfying the above inequality, we have
$$g(\tau)<0\ \text{for any}\ \tau<\underline{\tau}=\mathsf{c}\sqrt{n}/z.$$
This is a contradiction. Thus $\widehat{\tau}\geq\underline{\tau}=\mathsf{c}\sqrt{n}/z$, or equivalently $\widehat{v}\geq\mathsf{c}>v_{0}$. Using the inequality
$$\sqrt{1+x}-1\geq1(x\geq3)+\frac{x}{3}\,1(0\leq x<3)\geq\frac{x}{3}\wedge1\quad\forall\;x\geq0,$$
we have
$$\begin{aligned}\frac{\sigma_{\underline{\tau}^{2}/2}}{2(4a+1/3)}\left(\sqrt{1+\frac{4(4a+1/3)\,\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{\sigma_{\underline{\tau}^{2}/2}^{2}}}-1\right)&=\frac{3\sigma_{\underline{\tau}^{2}/2}}{14}\left(\sqrt{1+\frac{28\,\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{3\sigma_{\underline{\tau}^{2}/2}^{2}}}-1\right)\qquad(a=1/2)\\
&\geq\frac{3\sigma_{\underline{\tau}^{2}/2}}{14}\left(\frac{28\,\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{9\sigma_{\underline{\tau}^{2}/2}^{2}}\wedge1\right)=\frac{2\sigma_{\underline{\tau}^{2}/2-\epsilon_{0}}^{2}}{3\sigma_{\underline{\tau}^{2}/2}}\wedge\frac{3\sigma_{\underline{\tau}^{2}/2}}{14}\\
&\geq\frac{1}{5}\left(\frac{\sigma_{\underline{\tau}^{2}/2-1}}{\sigma_{\underline{\tau}^{2}/2}}\wedge1\right)\sigma_{\underline{\tau}^{2}/2-1}\geq\frac{1}{5}\left(\frac{\sigma_{\tau_{v_{0}}^{2}/2-1}}{\sigma_{\tau_{v_{0}}^{2}/2}}\wedge1\right)\sigma_{\tau_{v_{0}}^{2}/2-1}.\end{aligned}$$
Therefore we can take $\mathsf{c}=5^{-1}(\sigma_{\tau_{v_{0}}^{2}/2-1}/\sigma_{\tau_{v_{0}}^{2}/2}\wedge1)\,\sigma_{\tau_{v_{0}}^{2}/2-1}$. Thus, on the event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{4}$, we have
$$\widehat{v}\geq\mathsf{c}:=c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},$$
where $c_{0}=5^{-1}(\sigma_{\tau_{v_{0}}^{2}/2-1}/\sigma_{\tau_{v_{0}}^{2}/2}\wedge1)$. This finishes the proof of step 2.

Proving that the constraint is inactive. If $\widehat{v}\notin(v_{0},V_{0})$, then $\widehat{v}\in\{v_{0},V_{0}\}$. Suppose $\widehat{v}=v_{0}$; then $\widehat{v}=v_{0}<\mathsf{c}$. Recall that $\tau_{v_{0}}=v_{0}\sqrt{n}/z$. Then we must have $f(\tau_{v_{0}})\geq0$, and thus $g(\tau_{v_{0}})\geq0$. However, conditional on the probability event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{4}$, repeating the above analysis in step 2 would yield $g(\tau_{v_{0}})<0$. This is a contradiction. Therefore $\widehat{v}\neq v_{0}$. Similarly, conditional on the probability event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}$, we can obtain $\widehat{v}\neq V_{0}$. Therefore, conditional on the probability event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4}$, the constraint must be inactive, i.e., $\widehat{v}\in(v_{0},V_{0})$.
Using the first result of Lemma F.2 with $\tau^{2}$ and $\varepsilon_{i}^{2}$ replaced by $\tau_{v_{0}}^{2}+2r^{2}$ and $2\varepsilon_{i}^{2}$ respectively, Lemma G.1, Lemma G.2 with $\tau^{2}$ and $w_{i}^{2}$ replaced by $\bar{\tau}^{2}$ and $2(r^{2}+\varepsilon_{i}^{2})$ respectively, and Lemma G.3, we obtain
$$\mathbb{P}(\mathcal{E}_{1})\geq1-\delta,\ \mathbb{P}(\mathcal{E}_{2})\geq1-\delta,\ \mathbb{P}(\mathcal{E}_{3})\geq1-\delta,\ \mathbb{P}(\mathcal{E}_{4})\geq1-2\delta,$$
and thus
$$\mathbb{P}(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4})\geq1-5\delta.$$
Putting the above results together, and using Lemmas G.1 and G.3, we obtain with probability at least $1-5\delta$ that
$$c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq\widehat{v}\leq C_{0}\sigma.$$
Using a change of variable $5\delta\to\delta$ completes the proof.

## G.2 Proof Of Theorem 3.5

Proof of Theorem 3.5. On the probability event $\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4}$, where the $\mathcal{E}_{k}$'s are defined as in the proof of Theorem 3.4, we have
$$c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq\widehat{v}\leq C_{0}\sigma.$$
Following the proof of Theorem 3.1, for any fixed $v$ and thus $\tau$, we have
$$\kappa_{\ell}|\widehat{\mu}(v)-\mu^{*}|\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|.$$
For any $v$ such that $c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq v\leq C_{0}\sigma$ and any $z>0$, using Lemma G.1 but with $v_{0}$ and $V_{0}$ replaced by $c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}$ and $C_{0}\sigma$ respectively, we obtain with probability at least $1-\delta$
$$\begin{aligned}\sup_{v\in[c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},\,C_{0}\sigma]}\kappa_{\ell}(v)\,|\widehat{\mu}(v)-\mu^{*}|&\leq\sup_{v\in[c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},\,C_{0}\sigma]}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\\
&\leq\frac{\sigma}{c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}}\sqrt{\frac{2\log(n/\delta)}{n}}+\frac{1}{z}\,\frac{\log(n/\delta)}{\sqrt{n}}+\frac{\sigma^{2}}{2c_{0}^{2}\sigma_{\tau_{v_{0}}^{2}/2-1}^{2}}\,\frac{z}{\sqrt{n}}+\frac{3(C_{0}\sigma-c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1})}{c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}}\,\frac{1}{z\sqrt{n}},\end{aligned}$$
which yields
$$\sup_{v\in[c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},\,C_{0}\sigma]}|\widehat{\mu}(v)-\mu^{*}|\leq C\sigma\,\frac{\log(n/\delta)\vee z^{2}\vee1}{z\sqrt{n}},$$
where $C$ is some constant depending only on $\sigma/\sigma_{\tau_{v_{0}}^{2}/2-1}$, $c_{0}$, and $C_{0}$. Putting the above pieces together, if $\log(1/\delta)\leq z^{2}\leq\log(n/\delta)$, we obtain with probability at least $1-6\delta$ that
$$|\widehat{\mu}(\widehat{v})-\mu^{*}|\leq\sup_{v\in[c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},\,C_{0}\sigma]}|\widehat{\mu}(v)-\mu^{*}|\leq C\cdot\sigma\,\frac{\log(n/\delta)\vee1}{z\sqrt{n}}.$$
Using a change of variable $6\delta\to\delta$ and then setting $z^{2}=\log(n/\delta)$ gives
$$|\widehat{\mu}(\widehat{v})-\mu^{*}|\leq\sup_{v\in[c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1},\,C_{0}\sigma]}|\widehat{\mu}(v)-\mu^{*}|\leq C\cdot\sigma\sqrt{\frac{\log(n/\delta)}{n}}$$
with a slightly different constant $C$, provided that $\log(n/\delta)\geq1$, i.e., $n\geq e\delta$. This completes the proof.

We collect the supporting lemmas, namely Lemmas G.1, G.2, and G.3, in this subsection.

Lemma G.1. Let $0<\delta<1$. Suppose $\sigma\lesssim V_{0}$ and $z\lesssim\sqrt{\log(n/\delta)}$. Then, with probability at least $1-\delta$, we have
$$\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq C\cdot\frac{V_{0}}{v_{0}}\cdot\frac{\log(n/\delta)}{n},$$
where $C$ is some constant.

Proof of Lemma G.1. To prove the uniform bound over $[v_{0},V_{0}]$, we adopt a covering argument. For any $0<\epsilon\leq1$, there exists an $\epsilon$-cover $\mathcal{N}$ of $[v_{0},V_{0}]$ such that $|\mathcal{N}|\leq3(V_{0}-v_{0})/\epsilon$. Let $\tau_{w}=w\sqrt{n}/z$.
Then, for every $v\in[v_{0},V_{0}]$, there exists a $w\in\mathcal{N}\subset[v_{0},V_{0}]$ such that $|w-v|\leq\epsilon$ and
$$\begin{aligned}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|&\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}\right|+\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\\
&\leq\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}-\mathbb{E}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}\right]\right|+\left|\mathbb{E}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}\right]\right|\\
&\qquad+\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}-\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\\
&=\mathrm{I}+\mathrm{II}+\mathrm{III}.\end{aligned}$$
For II, we have
$$\mathrm{II}\leq\frac{\sqrt{n}}{z}\cdot\frac{\sigma^{2}}{2\tau_{w}^{2}}\leq\frac{z\sigma^{2}}{2v_{0}^{2}\sqrt{n}}.$$
For III, using the inequality
$$\left|\frac{x}{\sqrt{\tau_{w}^{2}+x^{2}}}-\frac{x}{\sqrt{\tau^{2}+x^{2}}}\right|\leq\frac{|\tau_{w}-\tau|}{2\,(|\tau_{w}|\wedge|\tau|)},$$
we obtain
$$\mathrm{III}\leq\frac{\sqrt{n}}{z}\cdot\frac{\epsilon}{2(w\wedge v)}\leq\frac{\sqrt{n}}{z}\cdot\frac{\epsilon}{2v_{0}}.$$
We then bound I. For any fixed $\tau_{w}$, applying Lemma F.1 with the fact that $|\mathbb{E}\,\tau_{w}\varepsilon_{i}/(\tau_{w}^{2}+\varepsilon_{i}^{2})^{1/2}|\leq\sigma^{2}/(2\tau_{w})$, we obtain with probability at least $1-2\delta$
$$\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}-\mathbb{E}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}\right]\right|\leq\frac{\sqrt{n}}{z\tau_{w}}\left(\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{\tau_{w}\log(1/\delta)}{3n}\right)\leq\frac{\sigma}{v_{0}}\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{1}{z}\,\frac{\log(1/\delta)}{\sqrt{n}},$$
where $\tau_{v_{0}}=v_{0}\sqrt{n}/z$. Therefore, putting the above pieces together and using the union bound, we obtain with probability at least $1-6\epsilon^{-1}(V_{0}-v_{0})\delta$
$$\begin{aligned}\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|&\leq\sup_{w\in\mathcal{N}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}-\mathbb{E}\left[\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau_{w}^{2}+\varepsilon_{i}^{2}}}\right]\right|+\frac{z\sigma^{2}}{2v_{0}^{2}\sqrt{n}}+\frac{\sqrt{n}}{z}\cdot\frac{\epsilon}{2v_{0}}\\
&\leq\frac{\sigma}{v_{0}}\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{1}{z}\,\frac{\log(1/\delta)}{\sqrt{n}}+\frac{\sigma^{2}}{2v_{0}^{2}}\,\frac{z}{\sqrt{n}}+\frac{\sqrt{n}}{z}\cdot\frac{\epsilon}{2v_{0}}.\end{aligned}$$
Taking $\epsilon=6(V_{0}-v_{0})/n$, we obtain with probability at least $1-n\delta$
$$\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq\frac{\sigma}{v_{0}}\sqrt{\frac{2\log(1/\delta)}{n}}+\frac{1}{z}\,\frac{\log(1/\delta)}{\sqrt{n}}+\frac{\sigma^{2}}{2v_{0}^{2}}\,\frac{z}{\sqrt{n}}+\frac{3(V_{0}-v_{0})}{v_{0}}\,\frac{1}{z\sqrt{n}}.$$
Thus, with probability at least $1-\delta$, we have
$$\sup_{v\in[v_{0},V_{0}]}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\frac{\varepsilon_{i}}{z\sqrt{\tau^{2}+\varepsilon_{i}^{2}}}\right|\leq\frac{\sigma}{v_{0}}\sqrt{\frac{2\log(n/\delta)}{n}}+\frac{1}{z}\,\frac{\log(n/\delta)}{\sqrt{n}}+\frac{\sigma^{2}}{2v_{0}^{2}}\,\frac{z}{\sqrt{n}}+\frac{3(V_{0}-v_{0})}{v_{0}}\,\frac{1}{z\sqrt{n}}\leq C\cdot\frac{V_{0}}{v_{0}}\cdot\frac{\log(n/\delta)}{z\sqrt{n}},$$
provided $z\lesssim\sqrt{\log(n/\delta)}$, where $C$ is a constant depending only on $\sigma^{2}/(v_{0}V_{0})$. When $v_{0}$ and $V_{0}$ are taken symmetrically around 1, $v_{0}V_{0}$ is close to 1. Multiplying both sides by $z/\sqrt{n}$ finishes the proof.

Lemma G.2. Let $w_{i}$ be i.i.d. copies of $w$. For any $0<\delta<1$, with probability at least $1-\delta$,
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{\tau^{2}+w_{i}^{2}}-\tau}{\sqrt{\tau^{2}+w_{i}^{2}}}-\mathbb{E}\left(\frac{\sqrt{\tau^{2}+w_{i}^{2}}-\tau}{\sqrt{\tau^{2}+w_{i}^{2}}}\right)\leq\sqrt{\frac{\log(1/\delta)\,\mathbb{E}w_{i}^{2}}{n\tau^{2}}}+\frac{\log(1/\delta)}{3n}.$$
Proof of Lemma G.2. The random variables
$$Z_{i}=Z_{i}(\tau):=\frac{\sqrt{\tau^{2}+w_{i}^{2}}-\tau}{\sqrt{\tau^{2}+w_{i}^{2}}}=\frac{\sqrt{1+w_{i}^{2}/\tau^{2}}-1}{\sqrt{1+w_{i}^{2}/\tau^{2}}}$$
with $\mu_{z}=\mathbb{E}Z_{i}$ and $\sigma_{z}^{2}=\operatorname{var}(Z_{i})$ are bounded i.i.d. random variables such that
$$0\leq Z_{i}\leq1\wedge\frac{w_{i}^{2}}{2\tau^{2}}.$$
Moreover, we have
$$\mathbb{E}Z_{i}^{2}\leq\frac{\mathbb{E}w_{i}^{2}}{2\tau^{2}},\qquad\sigma_{z}^{2}:=\operatorname{var}(Z_{i})\leq\frac{\mathbb{E}w_{i}^{2}}{2\tau^{2}}.$$
For third and higher order absolute moments, we have
$$\mathbb{E}|Z_{i}|^{k}\leq\frac{\mathbb{E}w_{i}^{2}}{2\tau^{2}}\leq\frac{k!}{2}\cdot\frac{\mathbb{E}w_{i}^{2}}{2\tau^{2}}\cdot\left(\frac{1}{3}\right)^{k-2},\quad\text{for all integers }k\geq3.$$
Therefore, using Lemma J.2 with $v=n\,\mathbb{E}w_{i}^{2}/(2\tau^{2})$ and $c=1/3$ yields, for any $t>0$,
$$\mathbb{P}\left(\sum_{i=1}^{n}\frac{(1+w_{i}^{2}/\tau^{2})^{1/2}-1}{(1+w_{i}^{2}/\tau^{2})^{1/2}}-\sum_{i=1}^{n}\mathbb{E}\left(\frac{(1+w_{i}^{2}/\tau^{2})^{1/2}-1}{(1+w_{i}^{2}/\tau^{2})^{1/2}}\right)\geq\sqrt{\frac{tn\,\mathbb{E}w_{i}^{2}}{\tau^{2}}}+\frac{t}{3}\right)\leq\exp(-t).$$
Taking $t=\log(1/\delta)$ yields, for any $0<\delta<1$,
$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{(1+w_{i}^{2}/\tau^{2})^{1/2}-1}{(1+w_{i}^{2}/\tau^{2})^{1/2}}-\mathbb{E}\left(\frac{(1+w_{i}^{2}/\tau^{2})^{1/2}-1}{(1+w_{i}^{2}/\tau^{2})^{1/2}}\right)\leq\sqrt{\frac{\log(1/\delta)\,\mathbb{E}w_{i}^{2}}{n\tau^{2}}}+\frac{\log(1/\delta)}{3n}\right)>1-\delta.$$
This finishes the proof.

Lemma G.3. For any $0<\delta<1$, we have with probability at least $1-\delta$ that
$$\frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\geq\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sigma_{\underline{\tau}^{2}/2}\sqrt{\frac{\underline{\tau}^{2}\log(1/\delta)}{n}}-\frac{\underline{\tau}^{2}\log(1/\delta)}{6n}.$$
For any $0<\delta<1$, we have with probability at least $1-\delta$ that
$$\frac{1}{n}\sum_{i=1}^{n}|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\leq\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)+\sqrt{\frac{2\sigma_{\underline{\tau}^{2}/2}^{2}\log(1/\delta)}{n}}+\frac{\underline{\tau}\log(1/\delta)}{3\sqrt{2}n}.$$
Consequently, with probability at least $1-2\delta$, the above two inequalities hold simultaneously.

Proof of Lemma G.3. We prove the first two results one by one; the last result follows directly from the first two.

First result. Let $Z_{i}=\varepsilon_{i}^{2}\,1(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2})$. The random variables $Z_{i}$ with $\mu_{z}=\mathbb{E}Z_{i}$ and $\sigma_{z}^{2}=\operatorname{var}(Z_{i})$ are bounded i.i.d. random variables such that
$$\begin{aligned}|Z_{i}|&=\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\underline{\tau}^{2}/2,\\
|\mu_{z}|&=|\mathbb{E}Z_{i}|=\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\sigma_{\underline{\tau}^{2}/2}^{2},\\
\mathbb{E}Z_{i}^{2}&=\mathbb{E}\,\varepsilon_{i}^{4}\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}/2,\\
\sigma_{z}^{2}&:=\operatorname{var}(Z_{i})=\mathbb{E}(Z_{i}-\mu_{z})^{2}\leq\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}/2.\end{aligned}$$
For third and higher order absolute moments, we have
$$\mathbb{E}|Z_{i}|^{k}=\mathbb{E}\left|\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\right|^{k}\leq\frac{\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}}{2}\left(\frac{\underline{\tau}^{2}}{2}\right)^{k-2}\leq\frac{k!}{2}\,\frac{\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}}{2}\left(\frac{\underline{\tau}^{2}}{6}\right)^{k-2},\quad\text{for all integers }k\geq3.$$
Using Lemma J.2 with $v=n\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}/2$ and $c=\underline{\tau}^{2}/6$, we have for any $t>0$
$$\mathbb{P}\left(\sum_{i=1}^{n}\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sum_{i=1}^{n}\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\leq-\sqrt{n\underline{\tau}^{2}\sigma_{\underline{\tau}^{2}/2}^{2}t}-\frac{\underline{\tau}^{2}t}{6}\right)\leq\exp(-t).$$
Taking $t=\log(1/\delta)$ yields the desired result.

Second result. With an abuse of notation, let $Z_{i}=|\varepsilon_{i}|\,1(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2})$. The random variables $Z_{i}$ with $\mu_{z}=\mathbb{E}Z_{i}$ and $\sigma_{z}^{2}=\operatorname{var}(Z_{i})$ are bounded i.i.d. random variables such that
$$\begin{aligned}|Z_{i}|&=|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\underline{\tau}/\sqrt{2},\\
|\mu_{z}|&=|\mathbb{E}Z_{i}|=\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\sqrt{2}\,\sigma_{\underline{\tau}^{2}/2}^{2}/\underline{\tau},\\
\mathbb{E}Z_{i}^{2}&=\mathbb{E}\,\varepsilon_{i}^{2}\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\leq\sigma_{\underline{\tau}^{2}/2}^{2},\\
\sigma_{z}^{2}&:=\operatorname{var}(Z_{i})=\mathbb{E}(Z_{i}-\mu_{z})^{2}\leq\sigma_{\underline{\tau}^{2}/2}^{2}.\end{aligned}$$
For third and higher order absolute moments, we have
$$\mathbb{E}|Z_{i}|^{k}=\mathbb{E}\left||\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\underline{\tau}^{2}/2-r^{2}\right)\right|^{k}\leq\sigma_{\underline{\tau}^{2}/2}^{2}\left(\frac{\underline{\tau}}{\sqrt{2}}\right)^{k-2}\leq\frac{k!}{2}\,\sigma_{\underline{\tau}^{2}/2}^{2}\left(\frac{\underline{\tau}}{3\sqrt{2}}\right)^{k-2},\quad\text{for all integers }k\geq3.$$
Using Lemma J.2 with $v=n\sigma_{\underline{\tau}^{2}/2}^{2}$ and $c=\underline{\tau}/(3\sqrt{2})$, we have for any $t>0$
$$\mathbb{P}\left(\sum_{i=1}^{n}|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)-\sum_{i=1}^{n}\mathbb{E}\,|\varepsilon_{i}|\,1\left(\varepsilon_{i}^{2}\leq\frac{\underline{\tau}^{2}}{2}-r^{2}\right)\geq\sqrt{2n\sigma_{\underline{\tau}^{2}/2}^{2}t}+\frac{\underline{\tau}t}{3\sqrt{2}}\right)\leq\exp(-t).$$
Taking $t=\log(1/\delta)$ yields the desired result.

## H Proofs For Section 4

This section collects proofs for results in Section 4.

## H.1 Proof of Theorem 4.2

Proof of Theorem 4.2. First, the MoM estimator $\widehat{\mu}^{\mathrm{MoM}}=M(z_{1},\ldots,z_{k})$ is equivalent to
$$\operatorname*{argmin}_{\mu}\sum_{j=1}^{k}|z_{j}-\mu|.$$
For any $x\in\mathbb{R}$, let $\ell(x)=|x|$ and define $L(x)=\mathbb{E}\,\ell'(x+Z)$ where $Z\sim\mathcal{N}(0,1)$ and
$$\ell'(x)=\begin{cases}1,&\text{if }x>0,\\ 0,&\text{if }x=0,\\ -1,&\text{otherwise}.\end{cases}$$
If the assumptions of Theorem 4 of Minsker (2019) are satisfied, we obtain, after some algebra, that
$$\sqrt{n}\left(\widehat{\mu}^{\mathrm{MoM}}-\mu^{*}\right)\leadsto\mathcal{N}\left(0,\frac{\mathbb{E}(\ell'(Z))^{2}}{(L'(0))^{2}}\right).$$
Some algebra derives that
$$\frac{\mathbb{E}(\ell'(Z))^{2}}{(L'(0))^{2}}=\frac{\pi\sigma^{2}}{2}.$$
It remains to check the assumptions there. Assumptions (1), (4), and (5) trivially hold. Assumption (2) can be verified by using the following Berry–Esseen bound.

Fact H.1. Let $y_{1},\ldots,y_{m}$ be i.i.d. random copies of $y$ with mean $\mu$, variance $\sigma^{2}$, and $\mathbb{E}|y-\mu|^{2+\iota}<\infty$ for some $\iota\in(0,1]$. Then there exists an absolute constant $C$ such that
$$\sup_{t\in\mathbb{R}}\left|\mathbb{P}\left(\sqrt{m}\,\frac{\bar{y}-\mu}{\sigma}\leq t\right)-\Phi(t)\right|\leq C\,\frac{\mathbb{E}|y-\mu|^{2+\iota}}{\sigma^{2+\iota}m^{\iota/2}}.$$
It remains to check Assumption (3). Because $g(m)\lesssim m^{-\iota/2}$, we have $\sqrt{k}\,g(m)\lesssim\sqrt{k}\,m^{-\iota/2}\to0$ if $k=o(n^{\iota/(1+\iota)})$ as $n\to\infty$. Thus Assumption (3) holds if $k=o(n^{\iota/(1+\iota)})$ and $k\to\infty$. This completes the proof.
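The limiting variance $\pi\sigma^{2}/2$ in Theorem 4.2 can be illustrated by a short Monte Carlo experiment (ours; the block count $k$ is a fixed illustrative choice, so the agreement with the asymptotic value is only approximate):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, reps = 10_000, 20, 5_000
sigma = 1.5
# t(5) has variance 5/3; rescale so that var(y_i) = sigma^2
y = sigma * rng.standard_t(df=5, size=(reps, n)) / np.sqrt(5 / 3)
block_means = y.reshape(reps, k, n // k).mean(axis=2)   # k blocks of size n/k
mom = np.median(block_means, axis=1)                    # median-of-means
print(n * mom.var(), np.pi * sigma**2 / 2)              # empirical vs pi*sigma^2/2
```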
## H.2 Proof Of Theorem 4.3

In this subsection, we state and prove a stronger result than Theorem 4.3, namely Theorem H.2. Theorem 4.3 can then be proved following the same proof under the assumption that $\mathbb{E}|\varepsilon_{i}|^{2+\iota}<\infty$ for any prefixed $0<\iota\leq1$.

Theorem H.2. Assume the same assumptions as in Theorem 3.4. Take $z^{2}=2\log(n)$. If $\mathbb{E}\varepsilon_{i}^{4}<\infty$, then
$$\sqrt{n}\,(\widehat{\theta}-\theta^{*})\leadsto\mathcal{N}(0,\Sigma),\quad\text{where }\widehat{\theta}=(\widehat{\mu},\widehat{v})^{\mathrm{T}},\ \theta^{*}=(\mu^{*},\sigma)^{\mathrm{T}},\ \Sigma=\begin{bmatrix}\sigma^{2}&\sigma\,\mathbb{E}\varepsilon_{i}^{3}/2\\ \sigma\,\mathbb{E}\varepsilon_{i}^{3}/2&(\sigma^{2}\mathbb{E}\varepsilon_{i}^{4}-\sigma^{6})/4\end{bmatrix}.$$

Proof of Theorem H.2. Now we are ready to analyze the self-tuned mean estimator $\widehat{\mu}=\widehat{\mu}(\widehat{v})$. For any $0<\delta<1$, following the proof of Theorem 3.4, we obtain with probability at least $1-\delta$ that
$$|\widehat{\mu}(\widehat{v})-\mu^{*}|\leq\sup_{v\in[v_{0},V_{0}]}|\widehat{\mu}(v)-\mu^{*}|\leq2C\cdot\frac{V_{0}^{2}}{v_{0}}\cdot\frac{\log(n/\delta)}{z\sqrt{n}}.$$
Taking $z^{2}\geq\log(n/\delta)$ with $\delta=1/n$ in the above inequality, we obtain $\widehat{\mu}\to\mu^{*}$ in probability. Theorem H.3 implies that $\widehat{v}\to\sigma$ in probability. Thus $\|\widehat{\theta}-\theta^{*}\|_{2}\to0$ in probability, where
$$\widehat{\theta}=(\widehat{\mu},\widehat{v})^{\mathrm{T}},\quad\theta^{*}=(\mu^{*},\sigma)^{\mathrm{T}}.$$
Using Taylor's theorem for vector-valued functions, we obtain
$$\nabla L_{n}(\widehat{\theta})=0=\nabla L_{n}(\theta^{*})+H_{n}(\theta^{*})(\widehat{\theta}-\theta^{*})+\frac{R_{2}(\widetilde{\theta})}{2}\big(\widehat{\theta}-\theta^{*}\big)^{\otimes2},$$
where $\otimes$ denotes the tensor product. Let $\tau_{\sigma}=\sigma\sqrt{n}/z$. We say that $X_{n}$ and $Y_{n}$ are asymptotically equivalent, denoted $X_{n}\simeq Y_{n}$, if both $X_{n}$ and $Y_{n}$ converge in distribution to the same random variable/vector. Rearranging, we obtain
$$\sqrt{n}\left(\widehat{\theta}-\theta^{*}\right)\simeq[H_{n}(\theta^{*})]^{-1}\left(-\sqrt{n}\,\nabla L_{n}(\theta^{*})\right)=\begin{bmatrix}\dfrac{\sqrt{n}}{z}\cdot\dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\tau_{\sigma}^{2}}{(\tau_{\sigma}^{2}+\varepsilon_{i}^{2})^{3/2}}&\dfrac{n}{z^{2}}\cdot\dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\tau_{\sigma}\varepsilon_{i}}{(\tau_{\sigma}^{2}+\varepsilon_{i}^{2})^{3/2}}\\[2mm] \dfrac{n}{z^{2}}\cdot\dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\tau_{\sigma}\varepsilon_{i}}{(\tau_{\sigma}^{2}+\varepsilon_{i}^{2})^{3/2}}&\dfrac{n^{3/2}}{z^{3}}\cdot\dfrac{1}{n}\sum_{i=1}^{n}\dfrac{\varepsilon_{i}^{2}}{(\tau_{\sigma}^{2}+\varepsilon_{i}^{2})^{3/2}}\end{bmatrix}^{-1}\begin{pmatrix}\mathrm{I}\\ \mathrm{II}\end{pmatrix}\simeq\begin{bmatrix}\sigma&0\\ 0&\sigma^{3}\end{bmatrix}\begin{pmatrix}\mathrm{I}\\ \mathrm{II}\end{pmatrix},$$
where
$$\mathrm{I}:=\sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}},\qquad\mathrm{II}:=\sqrt{n}\cdot\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}-\sqrt{n}\,a,$$
and the second $\simeq$ uses the fact that
$$H_{n}(\theta^{*})\ \xrightarrow{\mathrm{a.s.}}\ \begin{bmatrix}\frac{1}{\sigma}&0\\ 0&\frac{1}{\sigma^{3}}\end{bmatrix}.$$
We proceed to derive the asymptotic property of $(\mathrm{I},\mathrm{II})^{\mathrm{T}}$. For I, we have
$$\begin{aligned}\mathrm{I}&=\sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}-\mathbb{E}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right]\right)+\sqrt{n}\cdot\mathbb{E}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right]\\
&\leadsto\mathcal{N}\left(0,\ \lim_{n\to\infty}\operatorname{var}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right]\right)+\lim_{n\to\infty}\sqrt{n}\cdot\mathbb{E}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right].\end{aligned}$$
It remains to calculate
$$\lim_{n\to\infty}\mathbb{E}\left(\frac{\sqrt{n}\,\tau_{\sigma}\varepsilon_{i}}{\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right)\quad\text{and}\quad\lim_{n\to\infty}\operatorname{var}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right].$$
For the former term, if there exists some $0<\iota\leq1$ such that $\mathbb{E}|\varepsilon_{i}|^{2+\iota}<\infty$, then, using the fact that $\mathbb{E}\varepsilon_{i}=0$, we have
$$\left|\mathbb{E}\left(\frac{\sqrt{n}\,\tau_{\sigma}\varepsilon_{i}}{\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right)\right|=\sqrt{n}\,\tau_{\sigma}\cdot\left|\mathbb{E}\left\{\frac{\tau_{\sigma}^{-1}\varepsilon_{i}\left(\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1\right)}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right\}\right|\leq\frac{\sqrt{n}\,\tau_{\sigma}}{2}\cdot\mathbb{E}\left|\frac{\varepsilon_{i}^{3}/\tau_{\sigma}^{3}}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right|\leq\frac{\sqrt{n}\,\mathbb{E}|\varepsilon_{i}|^{2+\iota}}{2\tau_{\sigma}^{1+\iota}}\to0,\tag{H.1}$$
where the first inequality uses Lemma J.4 (ii) with $r=1/2$, that is, $\sqrt{1+x}\leq1+x/2$ for $x\geq-1$, and the second uses $\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}\geq1\vee|\varepsilon_{i}|/\tau_{\sigma}$. For the latter term, we have
$$\lim_{n\to\infty}\operatorname{var}\left[\frac{\tau_{\sigma}\varepsilon_{i}}{\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\right]=\lim_{n\to\infty}\mathbb{E}\left[\frac{\tau_{\sigma}^{2}\varepsilon_{i}^{2}}{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}\right]=\sigma^{2},$$
by the dominated convergence theorem. Thus $\mathrm{I}\leadsto\mathcal{N}(0,1)$.

For II, recall $a=1/2$; using the facts that
$$\lim_{n\to\infty}\frac{n}{z^{2}}\cdot\mathbb{E}\left(\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)=\lim_{n\to\infty}\frac{n}{z^{2}}\cdot\mathbb{E}\left(\frac{1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\varepsilon_{i}^{2}/(2\tau_{\sigma}^{2})}\cdot\frac{\varepsilon_{i}^{2}}{2\tau_{\sigma}^{2}}\right)=\frac{1}{2},$$
$$\lim_{n\to\infty}\sqrt{n}\cdot\left(\frac{n}{z^{2}}\cdot\mathbb{E}\left(\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)-\frac{1}{2}\right)=0,$$
we have
$$\begin{aligned}\mathrm{II}&=\sqrt{n}\cdot\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}-\sqrt{n}\cdot\frac{1}{2}\\
&\simeq\sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}-\mathbb{E}\left(\frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)\right)\\
&\simeq\mathcal{N}\left(0,\ \lim_{n\to\infty}\operatorname{var}\left(\frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)\right).\end{aligned}$$
If $\mathbb{E}\varepsilon_{i}^{4}<\infty$, then
$$\lim_{n\to\infty}\operatorname{var}\left(\frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)=\frac{\mathbb{E}\varepsilon_{i}^{4}}{4\sigma^{4}}-\frac{1}{4},$$
and thus $\mathrm{II}\simeq\mathcal{N}\left(0,(\mathbb{E}\varepsilon_{i}^{4}/\sigma^{4}-1)/4\right)$. For the cross covariance, we have
$$\lim_{n\to\infty}\operatorname{cov}\left(\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}},\ \frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)=\lim_{n\to\infty}\mathbb{E}\left(\frac{\tau_{\sigma}\varepsilon_{i}}{\sigma\sqrt{\tau_{\sigma}^{2}+\varepsilon_{i}^{2}}}\cdot\frac{n}{z^{2}}\cdot\frac{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+\varepsilon_{i}^{2}/\tau_{\sigma}^{2}}}\right)=\frac{\mathbb{E}\varepsilon_{i}^{3}}{2\sigma^{3}}.$$
Thus
$$\sqrt{n}\,(\widehat{\theta}-\theta^{*})\leadsto\mathcal{N}(0,\Sigma),$$
where
$$\Sigma=\begin{bmatrix}\sigma&0\\ 0&\sigma^{3}\end{bmatrix}\begin{bmatrix}1&\mathbb{E}\varepsilon_{i}^{3}/(2\sigma^{3})\\ \mathbb{E}\varepsilon_{i}^{3}/(2\sigma^{3})&(\mathbb{E}\varepsilon_{i}^{4}/\sigma^{4}-1)/4\end{bmatrix}\begin{bmatrix}\sigma&0\\ 0&\sigma^{3}\end{bmatrix}=\begin{bmatrix}\sigma^{2}&\sigma\,\mathbb{E}\varepsilon_{i}^{3}/2\\ \sigma\,\mathbb{E}\varepsilon_{i}^{3}/2&(\sigma^{2}\mathbb{E}\varepsilon_{i}^{4}-\sigma^{6})/4\end{bmatrix}.$$
Therefore, for $\widehat{\mu}$ alone, we have
$$\sqrt{n}\,(\widehat{\mu}-\mu^{*})\leadsto\mathcal{N}(0,\sigma^{2}).$$
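The conclusion $\sqrt{n}(\widehat{\mu}-\mu^{*})\leadsto\mathcal{N}(0,\sigma^{2})$ and the consistency $\widehat{v}\to\sigma$ can be probed empirically. The sketch below (ours) reuses the `alternating_gd` routine from the sketch in Section C, with Gaussian errors so that $\mathbb{E}\varepsilon_{i}^{3}=0$; the replication counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, sigma = 2000, 500, 1.0
stats = []
for _ in range(reps):
    y = rng.normal(0.0, sigma, size=n)
    # delta = 1/n^2 gives z^2 = 2 log(n), as in Theorem H.2
    mu_hat, v_hat = alternating_gd(y, delta=1.0 / n**2)
    stats.append((np.sqrt(n) * mu_hat, v_hat))
m, v = np.array(stats).T
print(m.var(), sigma**2)   # variance of sqrt(n)(mu_hat - mu*) is close to sigma^2
print(v.mean(), sigma)     # v_hat concentrates around sigma
```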
## H.3 Consistency of v̂

This subsection proves that $\widehat{v}$ is a consistent estimator of $\sigma$. Recall that
$$\nabla_{v}L_{n}(\mu,v)=\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}}-1\right)+a,$$
where $a=1/2$. We emphasize that, although Theorem H.2 assumes higher moments, the following proof only needs the second moment assumption $\sigma^{2}=\mathbb{E}\varepsilon_{i}^{2}<\infty$.

Theorem H.3 (Consistency of $\widehat{v}$). Assume the same assumptions as in Theorem 3.4. Take $z^{2}\geq\log(n)$. Then
$$\widehat{v}\longrightarrow\sigma\quad\text{in probability}.$$

Proof of Theorem H.3. By the proof of Theorem 3.4, we obtain with probability at least $1-\delta$ that the following two results hold simultaneously:
$$\sup_{v\in[v_{0},V_{0}]}|\widehat{\mu}(v)-\mu^{*}|\leq2C\cdot\frac{V_{0}^{2}}{v_{0}}\cdot\frac{\log(n/\delta)}{z\sqrt{n}}=:r,\tag{H.2}$$
$$v_{0}<c_{0}\sigma_{\tau_{v_{0}}^{2}/2-1}\leq\widehat{v}\leq C_{0}\sigma<V_{0},\tag{H.3}$$
provided that $z^{2}\geq\log(5/\delta)$ and $n$ is large enough. Therefore, the constraint in the optimization problem (3.1) is not active, and thus
$$\nabla_{v}L_{n}(\widehat{\mu},\widehat{v})=0.$$
Using Lemma H.4 together with the equality above, we obtain with probability at least $1-\delta$ that
$$\begin{aligned}\frac{c_{0}}{V_{0}^{3}}|\widehat{v}-\sigma|^{2}&\leq\frac{c_{0}}{\widehat{v}^{3}\vee\sigma^{3}}|\widehat{v}-\sigma|^{2}\leq\rho_{\ell}|\widehat{v}-\sigma|^{2}\\
&\leq\langle\nabla_{v}L_{n}(\widehat{\mu},\widehat{v})-\nabla_{v}L_{n}(\widehat{\mu},\sigma),\widehat{v}-\sigma\rangle\\
&\leq|\nabla_{v}L_{n}(\widehat{\mu},\sigma)|\,|\widehat{v}-\sigma|\\
&=\left|\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\widehat{\mu})^{2}}}-1\right)+a\right|\,|\widehat{v}-\sigma|.\end{aligned}$$
Plugging (H.2) into the above inequality and canceling $|\widehat{v}-\sigma|$ on both sides, we obtain with probability at least $1-2\delta$ that
$$\begin{aligned}\frac{c_{0}}{V_{0}^{3}}|\widehat{v}-\sigma|&\leq\left|\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\widehat{\mu})^{2}}}-1\right)+a\right|\leq\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\frac{n}{z^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}-1\right)+a\right|\\
&\leq\frac{n}{z^{2}}\cdot\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)-\mathbb{E}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)\right|\\
&\qquad+\frac{n}{z^{2}}\cdot\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\mathbb{E}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)-\frac{az^{2}}{n}\right|=:\mathrm{I}+\mathrm{II}.\end{aligned}$$
It remains to bound terms I and II. We start with term II. Let $r_{i}^{2}=(y_{i}-\mu)^{2}$. We have
$$\begin{aligned}\mathrm{II}&=\frac{n}{z^{2}}\cdot\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\mathbb{E}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)-\frac{az^{2}}{n}\right|\\
&=\max\left\{\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left(\frac{n}{z^{2}}\cdot\mathbb{E}\frac{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}}-a\right),\ \sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left(a-\frac{n}{z^{2}}+\frac{n}{z^{2}}\,\mathbb{E}\frac{1}{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}}\right)\right\}=:\mathrm{II}_{1}\vee\mathrm{II}_{2}.\end{aligned}$$
In order to bound II, we bound II₁ and II₂ respectively. For term II₁, using Lemma J.4 (ii), i.e., $(1+x)^{r}\leq1+rx$ for $x\geq-1$ and $r\in(0,1)$, and $a=1/2$, we have
$$\begin{aligned}\mathrm{II}_{1}&=\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left(\frac{n}{z^{2}}\cdot\mathbb{E}\frac{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}}-a\right)\\
&\leq\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left\{\frac{n}{z^{2}}\cdot\left(1+\mathbb{E}\frac{r_{i}^{2}}{2\tau_{\sigma}^{2}}-1\right)-a\right\}\\
&\leq\frac{n}{z^{2}}\cdot\mathbb{E}\frac{\varepsilon_{i}^{2}+2r|\varepsilon_{i}|+r^{2}}{2\tau_{\sigma}^{2}}-\frac{1}{2}\\
&\leq\frac{r}{\sigma}\left(1+\frac{r}{2\sigma}\right)\leq\frac{2r}{\sigma},\end{aligned}$$
if $n$ is large enough that $r\leq2\sigma$. To bound II₂, we need Lemma E.1.
Specifically, for any $0\leq\gamma<1$, we have
$$(1+x)^{-1}\leq1-(1-\gamma)x,\quad\text{for any }0\leq x\leq\frac{\gamma}{1-\gamma}.$$
Using this result, we obtain
$$\begin{aligned}\mathbb{E}\frac{1}{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}}&\leq\sqrt{\mathbb{E}\frac{1}{1+r_{i}^{2}/\tau_{\sigma}^{2}}}\qquad(\text{concavity of }\sqrt{x})\\
&\leq\sqrt{\mathbb{E}\left\{\left(1-(1-\gamma)\frac{r_{i}^{2}}{\tau_{\sigma}^{2}}\right)1\left(\frac{r_{i}^{2}}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)+\frac{1}{1+r_{i}^{2}/\tau_{\sigma}^{2}}\,1\left(\frac{r_{i}^{2}}{\tau_{\sigma}^{2}}>\frac{\gamma}{1-\gamma}\right)\right\}}\\
&\leq\sqrt{1-(1-\gamma)\,\mathbb{E}\left\{\frac{r_{i}^{2}}{\tau_{\sigma}^{2}}\,1\left(\frac{r_{i}^{2}}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right\}}\qquad(\text{Lemma E.1})\\
&\leq\sqrt{1-(1-\gamma)\,\mathbb{E}\left\{\frac{\varepsilon_{i}^{2}-2r|\varepsilon_{i}|+r^{2}}{\tau_{\sigma}^{2}}\,1\left(\frac{2(\varepsilon_{i}^{2}+r^{2})}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right\}}\qquad(\forall\ \mu\in\mathbb{B}_{r}(\mu^{*}))\\
&\leq1-\frac{1-\gamma}{2}\,\mathbb{E}\left\{\frac{\varepsilon_{i}^{2}-2r|\varepsilon_{i}|+r^{2}}{\tau_{\sigma}^{2}}\,1\left(\frac{2(\varepsilon_{i}^{2}+r^{2})}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right\},\end{aligned}$$
where the first inequality uses the concavity of $\sqrt{x}$, the third inequality uses Lemma E.1, and the last inequality uses $\sqrt{1-x}\leq1-x/2$ for $x\in[0,1]$, provided that
$$(1-\gamma)\,\mathbb{E}\left(\frac{\varepsilon_{i}^{2}-2r|\varepsilon_{i}|+r^{2}}{\tau_{\sigma}^{2}}\,1\left(\frac{2(\varepsilon_{i}^{2}+r^{2})}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right)\leq(1-\gamma)\,\frac{(\sigma+r)^{2}}{\tau_{\sigma}^{2}}\leq1.$$
Thus term II₂ can be bounded as
$$\begin{aligned}\mathrm{II}_{2}&=\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left(a-\frac{n}{z^{2}}+\frac{n}{z^{2}}\cdot\mathbb{E}\frac{1}{\sqrt{1+r_{i}^{2}/\tau_{\sigma}^{2}}}\right)\\
&\leq a-\frac{n}{z^{2}}+\frac{n}{z^{2}}\left(1-\frac{1-\gamma}{2}\,\mathbb{E}\left\{\frac{\varepsilon_{i}^{2}-2r|\varepsilon_{i}|+r^{2}}{\tau_{\sigma}^{2}}\,1\left(\frac{2(\varepsilon_{i}^{2}+r^{2})}{\tau_{\sigma}^{2}}\leq\frac{\gamma}{1-\gamma}\right)\right\}\right)\\
&\leq a-\frac{1-\gamma}{2\sigma^{2}}\cdot\mathbb{E}\varepsilon_{i}^{2}+\frac{1-\gamma}{2\sigma^{2}}\cdot2r\cdot\mathbb{E}|\varepsilon_{i}|\\
&\leq a-\frac{1-\gamma}{2}+\frac{r(1-\gamma)}{\sigma}=\frac{\gamma}{2}+\frac{r(1-\gamma)}{\sigma}.\qquad(a=1/2)\end{aligned}$$
Combining the upper bounds for II₁ and II₂, we obtain
$$\mathrm{II}\leq\max\{\mathrm{II}_{1},\mathrm{II}_{2}\}\leq\frac{\gamma}{2}+\frac{2r}{\sigma}\to0,$$
if $\gamma=\gamma(n)\to0$. We proceed to bound I. Recall that
$$\mathrm{I}=\frac{n}{z^{2}}\cdot\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)-\mathbb{E}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)\right|.$$
For any $0<\epsilon\leq2r$, there exists an $\epsilon$-cover $\mathcal{N}\subseteq\mathbb{B}_{r}(\mu^{*})$ of $\mathbb{B}_{r}(\mu^{*})$ such that $|\mathcal{N}|\leq6r/\epsilon$. Then, for any $\mu\in\mathbb{B}_{r}(\mu^{*})$, there exists an $\omega\in\mathcal{N}$ such that $|\omega-\mu|\leq\epsilon$, and
$$\begin{aligned}&\left|\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)-\mathbb{E}\left(1-\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}}\right)\right|\\
&\quad\leq\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}-\mathbb{E}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}\right|\\
&\qquad+\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+(y_{i}-\mu)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\mu)^{2}/\tau_{\sigma}^{2}}}-\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}\right|\\
&\qquad+\left|\mathbb{E}\frac{\sqrt{1+(y_{i}-\mu)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\mu)^{2}/\tau_{\sigma}^{2}}}-\mathbb{E}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}\right|\\
&\quad=\mathrm{I}_{1}+\mathrm{I}_{2}+\mathrm{I}_{3}.\end{aligned}$$
For I₁, using Lemma G.2 yields with probability at least $1-2\delta$ that
$$\mathrm{I}_{1}\leq\sqrt{\frac{\mathbb{E}(y_{i}-\omega)^{2}\log(1/\delta)}{n\tau_{\sigma}^{2}}}+\frac{\log(1/\delta)}{3n}\leq\sqrt{\frac{2(\sigma^{2}+r^{2})\log(1/\delta)}{n\tau_{\sigma}^{2}}}+\frac{\log(1/\delta)}{3n}\leq\frac{2z\sqrt{\log(1/\delta)}}{n}+\frac{\log(1/\delta)}{3n},$$
provided $r^{2}\leq\sigma^{2}$.
Let
$$g(x)=-\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(x+\varepsilon_{i})^{2}}}.$$
Using the mean value theorem and the inequality $|x/(1+x^{2})^{3/2}|\leq1/2$, we obtain
$$|g(x)-g(y)|=\left|\frac{1}{n}\sum_{i=1}^{n}\frac{(\widetilde{x}+\varepsilon_{i})/\tau_{\sigma}}{(1+(\widetilde{x}+\varepsilon_{i})^{2}/\tau_{\sigma}^{2})^{3/2}}\cdot\frac{x-y}{\tau_{\sigma}}\right|\leq\frac{|x-y|}{2\tau_{\sigma}},$$
where $\widetilde{x}$ is some convex combination of $x$ and $y$. Then we have
$$\mathrm{I}_{2}=\left|\frac{1}{n}\sum_{i=1}^{n}\frac{(\widetilde{\Delta}+\varepsilon_{i})/\tau_{\sigma}}{(1+(\widetilde{\Delta}+\varepsilon_{i})^{2}/\tau_{\sigma}^{2})^{3/2}}\cdot\frac{\Delta_{\mu}-\Delta_{\omega}}{\tau_{\sigma}}\right|\leq\frac{\epsilon}{2\tau_{\sigma}},$$
where $\widetilde{\Delta}$ is some convex combination of $\Delta_{\omega}=\mu^{*}-\omega$ and $\Delta_{\mu}=\mu^{*}-\mu$. For I₃, a similar argument yields
$$\mathrm{I}_{3}=\left|\mathbb{E}\left(\frac{(\widetilde{\Delta}+\varepsilon_{i})/\tau_{\sigma}}{(1+(\widetilde{\Delta}+\varepsilon_{i})^{2}/\tau_{\sigma}^{2})^{3/2}}\right)\cdot\frac{\Delta_{\mu}-\Delta_{\omega}}{\tau_{\sigma}}\right|\leq\mathbb{E}|\widetilde{\Delta}+\varepsilon_{i}|\cdot\frac{\epsilon}{\tau_{\sigma}^{2}}\leq\frac{\epsilon\sqrt{2(r^{2}+\sigma^{2})}}{\tau_{\sigma}^{2}},$$
where the last inequality uses Jensen's inequality, i.e., $\mathbb{E}|\widetilde{\Delta}+\varepsilon_{i}|\leq\sqrt{\mathbb{E}(\widetilde{\Delta}+\varepsilon_{i})^{2}}\leq\sqrt{2(r^{2}+\sigma^{2})}$. Putting the above pieces together and using the union bound, we obtain with probability at least $1-12\epsilon^{-1}r\delta$
$$\begin{aligned}\mathrm{I}&\leq\frac{n}{z^{2}}\cdot\sup_{\omega\in\mathcal{N}}\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}-\mathbb{E}\frac{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}-1}{\sqrt{1+(y_{i}-\omega)^{2}/\tau_{\sigma}^{2}}}\right|+\frac{n}{z^{2}}\cdot\frac{\epsilon}{2\tau_{\sigma}}\left(1+\frac{2\sqrt{2(r^{2}+\sigma^{2})}}{\tau_{\sigma}}\right)\\
&\leq\frac{2\sqrt{\log(1/\delta)}}{z}+\frac{\log(1/\delta)}{3z^{2}}+\frac{\epsilon\sqrt{n}}{\sigma z},\end{aligned}$$
provided that
$$2\sqrt{2(r^{2}+\sigma^{2})}\leq\tau_{\sigma}.$$
Putting the above results together, we obtain with probability at least $1-(12r/\epsilon+2)\delta$ that
$$|\widehat{v}-\sigma|\lesssim\mathrm{I}+\mathrm{II}\leq\frac{2\sqrt{\log(1/\delta)}}{z}+\frac{\log(1/\delta)}{3z^{2}}+\frac{\epsilon\sqrt{n}}{\sigma z}+\frac{\gamma}{2}+\frac{2r}{\sigma}.$$
Let $C'=24CV_{0}^{2}/v_{0}$. Therefore, taking $\epsilon=1/\sqrt{n}$, $\delta=1/\log n$, and $z^{2}\geq\log(n)$, we obtain with probability at least
$$1-\frac{C'\big(\sqrt{\log n}+\log\log n/\sqrt{\log n}\big)+2}{\log n}$$
that
$$|\widehat{v}-\sigma|\lesssim\sqrt{\frac{\log\log n}{\log n}}+\frac{\log\log n}{\log n}+\frac{1}{\sqrt{\log n}}+\gamma+r\to0.$$
Therefore $\widehat{v}\to\sigma$ in probability. This finishes the proof.

## H.4 Local Strong Convexity In v

In this section, we first present the local strong convexity of the empirical loss function with respect to $v$, uniformly over a neighborhood of $\mu^{*}$.

Lemma H.4 (Local strong convexity in $v$). Let $\mathbb{B}_{r}(\mu^{*})=\{\mu:|\mu-\mu^{*}|\leq r\}$. Assume $r=r(n)=o(1)$. Let $0<\delta<1$ and let $n$ be sufficiently large. Take $\varpi$ such that $\max\{\varpi r\sqrt{n},\varpi\}\to0$ and $\varpi\sqrt{n}\to\infty$.
Then, with probability at least 1 − δ, we have

$$\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{\langle\nabla_{v}L_{n}(\mu,v)-\nabla_{v}L_{n}(\mu,\sigma),v-\sigma\rangle}{|v-\sigma|^{2}}\geq\rho_{\ell}=\frac{\sigma_{c\varpi^{2}/(4z^{2})}^{2}}{2(v^{3}\vee\sigma^{3})}\geq\frac{c_{0}}{v^{3}\vee\sigma^{3}},$$

where c and c₀ are some constants.

Proof of Lemma H.4. Recall τ = v√n/z. For notational simplicity, write $\tau_\sigma=\sigma\sqrt{n}/z$, $\tau_{v_0}=v_0\sqrt{n}/z$, $\tau_\varpi=\varpi\sqrt{n}/z$, and ∆ = µ* − µ. It follows that

$$\begin{aligned}
\langle\nabla_{v}L_{n}(\mu,v)-\nabla_{v}L_{n}(\mu,\sigma),v-\sigma\rangle&=\frac{n}{z^{2}}\left\langle\frac{1}{n}\sum_{i=1}^{n}\frac{\tau}{\sqrt{\tau^{2}+(y_{i}-\mu)^{2}}}-\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma}}{\sqrt{\tau_{\sigma}^{2}+(y_{i}-\mu)^{2}}},\ v-\sigma\right\rangle\\
&=\frac{n^{3/2}}{z^{3}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{(y_{i}-\mu)^{2}}{(\tilde{\tau}^{2}+(y_{i}-\mu)^{2})^{3/2}}\,|v-\sigma|^{2}\\
&\geq\frac{n^{3/2}}{z^{3}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{(y_{i}-\mu)^{2}}{((\tau\vee\tau_{\sigma})^{2}+(y_{i}-\mu)^{2})^{3/2}}\,|v-\sigma|^{2},
\end{aligned}$$

where $\tilde{\tau}$ is some convex combination of τ and $\tau_\sigma$, that is, $\tilde{\tau}=(1-\lambda)\tau_{\sigma}+\lambda\tau$ for some λ ∈ [0, 1]. Because τ³x²/(τ² + x²)^{3/2} is an increasing function of τ, if $\tau_\varpi\leq\tau\vee\tau_\sigma$, we have

$$\begin{aligned}
\frac{\langle\nabla_{v}L_{n}(\mu,v)-\nabla_{v}L_{n}(\mu,\sigma),v-\sigma\rangle}{|v-\sigma|^{2}}&\geq\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{(\tau\vee\tau_{\sigma})^{3}(y_{i}-\mu)^{2}}{(\tau^{2}\vee\tau_{\sigma}^{2}+(y_{i}-\mu)^{2})^{3/2}}\\
&\geq\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}.
\end{aligned}$$

Thus

$$\begin{aligned}
\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{\langle\nabla_{v}L_{n}(\mu,v)-\nabla_{v}L_{n}(\mu,\sigma),v-\sigma\rangle}{|v-\sigma|^{2}}&\geq\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\cdot\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}\\
&\geq\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\cdot\left(\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}-\sup_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}-\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}\right|\right)\\
&=\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\cdot(\mathrm{I}-\mathrm{II}).
\end{aligned}$$

It remains to lower bound I and upper bound II. We start with I. Let f(x) = x/(1 + x)^{3/2}, which satisfies

$$f(x)\geq\begin{cases}\epsilon x&x\leq c_{\epsilon}\\ 0&x>c_{\epsilon},\end{cases}$$

and let $Z=(y-\mu)^2/\tau_\varpi^2$, in which y ∼ yᵢ. Suppose $r^2\leq c_\epsilon\tau_\varpi^2/4$; then we have

$$\begin{aligned}
\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}&=\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\mathbb{E}\left(\frac{\tau_{\varpi}^{2}Z}{(1+Z)^{3/2}}\right)\\
&\geq\epsilon\cdot\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\mathbb{E}\left[(y-\mu)^{2}\mathbf{1}\left((y-\mu)^{2}\leq c_{\epsilon}\tau_{\varpi}^{2}\right)\right]\\
&\geq\epsilon\cdot\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\mathbb{E}\left[(y-\mu)^{2}\mathbf{1}\left(\varepsilon^{2}\leq c_{\epsilon}\tau_{\varpi}^{2}/2-r^{2}\right)\right]\\
&\geq\epsilon\cdot\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\left\{\mathbb{E}\left[(\Delta^{2}+\varepsilon^{2})\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right]-\frac{8\Delta\sigma^{2}}{c_{\epsilon}\tau_{\varpi}^{2}}\right\}\\
&\geq\epsilon\cdot\left\{\mathbb{E}\left[\varepsilon^{2}\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right]-\frac{8r\sigma^{2}}{c_{\epsilon}\tau_{\varpi}^{2}}\right\}.
\end{aligned}$$

We then proceed with II. For any 0 < γ ≤ 2r, there exists a γ-cover N of B_r(µ*) such that |N| ≤ 6r/γ. Then for any µ ∈ B_r(µ*) there exists an ω ∈ N such that |ω − µ| ≤ γ, and thus

$$\begin{aligned}
&\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}-\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}\right|\\
&\quad\leq\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}-\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}\right|\\
&\qquad+\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}-\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}\right|\\
&\qquad+\left|\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}-\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\mu)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\mu)^{2})^{3/2}}\right|\\
&\quad=\mathrm{II}_{1}+\mathrm{II}_{2}+\mathrm{II}_{3}.
\end{aligned}$$
For II₁, Lemma H.5 implies, with probability at least 1 − 2δ,

$$\mathrm{II}_{1}\leq\sqrt{\frac{2\tau_{\varpi}^{2}\,\mathbb{E}(y_{i}-\omega)^{2}\log(1/\delta)}{3n}}+\frac{\tau_{\varpi}^{2}\log(1/\delta)}{3\sqrt{3}\,n}\leq\sqrt{\frac{2\tau_{\varpi}^{2}(\sigma^{2}+r^{2})\log(1/\delta)}{3n}}+\frac{\tau_{\varpi}^{2}\log(1/\delta)}{3\sqrt{3}\,n}.$$

Let

$$g(x)=\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}(x+\varepsilon_{i})^{2}}{(\tau^{2}+(x+\varepsilon_{i})^{2})^{3/2}}.$$

Using the mean value theorem and the inequality that |τ²x/(τ² + x²)^{3/2}| ≤ 1/√3, we obtain

$$|g(x)-g(y)|=\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}(\tilde{x}+\varepsilon_{i})\left(\tau^{2}-(\tilde{x}+\varepsilon_{i})^{2}\right)}{(\tau^{2}+(\tilde{x}+\varepsilon_{i})^{2})^{5/2}}(x-y)\right|\leq\frac{\tau}{\sqrt{3}}|x-y|.$$

Then we have

$$\mathrm{II}_{2}=\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(\tilde{\Delta}+\varepsilon_{i})\left(\tau_{\varpi}^{2}-(\tilde{\Delta}+\varepsilon_{i})^{2}\right)}{(\tau_{\varpi}^{2}+(\tilde{\Delta}+\varepsilon_{i})^{2})^{5/2}}(\Delta_{\omega}-\Delta_{\mu})\right|\leq\frac{\tau_{\varpi}\gamma}{\sqrt{3}},$$

where $\tilde{\Delta}$ is some convex combination of $\Delta_\omega=\mu^*-\omega$ and $\Delta_\mu=\mu^*-\mu$. For II₃, we have

$$\mathrm{II}_{3}=\left|\mathbb{E}\left(\frac{\tau_{\varpi}^{3}(\tilde{\Delta}+\varepsilon_{i})\left(\tau_{\varpi}^{2}-(\tilde{\Delta}+\varepsilon_{i})^{2}\right)}{(\tau_{\varpi}^{2}+(\tilde{\Delta}+\varepsilon_{i})^{2})^{5/2}}\right)(\Delta_{\omega}-\Delta_{\mu})\right|\leq\gamma\,\mathbb{E}|\tilde{\Delta}+\varepsilon_{i}|\leq\gamma\sqrt{\mathbb{E}\left(\tilde{\Delta}+\varepsilon_{i}\right)^{2}},$$

where the last inequality uses Jensen's inequality. Putting the above pieces together and using the union bound, we obtain with probability at least 1 − 12γ^{−1}rδ

$$\begin{aligned}
\mathrm{II}&\leq\sup_{\omega\in\mathcal{N}}\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}-\mathbb{E}\frac{\tau_{\varpi}^{3}(y_{i}-\omega)^{2}}{(\tau_{\varpi}^{2}+(y_{i}-\omega)^{2})^{3/2}}\right|+\frac{\tau_{\varpi}\gamma}{\sqrt{3}}+\gamma\sqrt{r^{2}+\sigma^{2}}\\
&\leq\sqrt{\frac{2\tau_{\varpi}^{2}(r^{2}+\sigma^{2})\log(1/\delta)}{3n}}+\frac{\tau_{\varpi}^{2}\log(1/\delta)}{3\sqrt{3}\,n}+\frac{\tau_{\varpi}\gamma}{\sqrt{3}}+\gamma\sqrt{r^{2}+\sigma^{2}}\\
&=\sqrt{r^{2}+\sigma^{2}}\left(\sqrt{\frac{2\varpi^{2}\log(1/\delta)}{3z^{2}}}+\gamma\right)+\frac{\varpi^{2}\log(1/\delta)}{3\sqrt{3}\,z^{2}}+\frac{\varpi\gamma\sqrt{n}}{\sqrt{3}}.
\end{aligned}$$

Combining the bounds for I and II yields, with probability at least 1 − δ,

$$\begin{aligned}
\inf_{\mu\in\mathbb{B}_{r}(\mu^{*})}\frac{\langle\nabla_{v}L_{n}(\mu,v)-\nabla_{v}L_{n}(\mu,\sigma),v-\sigma\rangle}{|v-\sigma|^{2}}&\geq\frac{n^{3/2}}{z^{3}(\tau\vee\tau_{\sigma})^{3}}\Bigg\{\epsilon\left(\mathbb{E}\left[\varepsilon^{2}\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right]-\frac{8r\sigma^{2}}{c_{\epsilon}\tau_{\varpi}^{2}}\right)\\
&\qquad-\sqrt{r^{2}+\sigma^{2}}\left(\sqrt{\frac{2\varpi^{2}\log(1/\delta)}{3z^{2}}}+\gamma\right)-\frac{\varpi^{2}\log(1/\delta)}{3\sqrt{3}\,z^{2}}-\frac{\varpi\gamma\sqrt{n}}{\sqrt{3}}\Bigg\}\\
&\geq\frac{1}{2(v\vee\sigma)^{3}}\,\mathbb{E}\left[\varepsilon^{2}\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right],
\end{aligned}$$

where ϵ, ϖ, γ, n are picked such that ϵ = 3/4, γ = 12r, and

$$\epsilon\left(\mathbb{E}\left[\varepsilon^{2}\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right]-\frac{8\sigma^{2}z^{2}}{c_{\epsilon}\varpi^{2}n}\right)-\sqrt{r^{2}+\sigma^{2}}\left(\sqrt{\frac{2\varpi^{2}\log(1/\delta)}{3z^{2}}}+\gamma\right)-\frac{\varpi^{2}\log(1/\delta)}{3\sqrt{3}\,z^{2}}-\frac{\varpi\gamma\sqrt{n}}{\sqrt{3}}\geq\frac{1}{2}\mathbb{E}\left[\varepsilon^{2}\mathbf{1}\left(\varepsilon^{2}\leq\frac{c_{\epsilon}\tau_{\varpi}^{2}}{4}\right)\right]\geq\frac{1}{4}\sigma.$$

For example, we can pick ϖ such that max{ϖr√n, ϖ} → 0 and ϖ√n → ∞ as n → ∞. This completes the proof.

This subsection proves a supporting lemma that is used to prove Lemma H.4.

Lemma H.5. Let wᵢ be i.i.d. copies of w. For any 0 < δ < 1, we have

$$\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}-\mathbb{E}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}\geq-\sqrt{\frac{2\tau^{2}\,\mathbb{E}w_{i}^{2}\log(1/\delta)}{3n}}-\frac{\tau^{2}\log(1/\delta)}{3\sqrt{3}\,n},\quad\text{with prob. }1-\delta,$$

$$\left|\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}-\mathbb{E}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}\right|\leq\sqrt{\frac{2\tau^{2}\,\mathbb{E}w_{i}^{2}\log(1/\delta)}{3n}}+\frac{\tau^{2}\log(1/\delta)}{3\sqrt{3}\,n},\quad\text{with prob. }1-2\delta.$$

Proof of Lemma H.5. We only prove the first result; the second follows similarly. The random variables $Z_i=Z_i(\tau):=\tau^3w_i^2/(\tau^2+w_i^2)^{3/2}$, with $\mu_z=\mathbb{E}Z_i$ and $\sigma_z^2=\operatorname{var}(Z_i)$, are bounded i.i.d.
random variables such that

$$0\leq Z_{i}=\tau^{3}w_{i}^{2}/(\tau^{2}+w_{i}^{2})^{3/2}\leq w_{i}^{2}\wedge\frac{\tau^{2}}{\sqrt{3}}\wedge\frac{\tau|w_{i}|}{\sqrt{3}}.$$

Moreover, we have

$$\mathbb{E}Z_{i}^{2}=\mathbb{E}\left(\frac{\tau^{6}w_{i}^{4}}{(\tau^{2}+w_{i}^{2})^{3}}\right)\leq\frac{\tau^{2}\,\mathbb{E}w_{i}^{2}}{3},\quad\sigma_{z}^{2}:=\operatorname{var}(Z_{i})\leq\frac{\tau^{2}\,\mathbb{E}w_{i}^{2}}{3}.$$

For third and higher order absolute moments, we have

$$\mathbb{E}|Z_{i}|^{k}=\mathbb{E}\left|\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}\right|^{k}\leq\frac{\tau^{2}\,\mathbb{E}w_{i}^{2}}{3}\cdot\left(\frac{\tau^{2}}{\sqrt{3}}\right)^{k-2}\leq\frac{k!}{2}\cdot\frac{\tau^{2}\,\mathbb{E}w_{i}^{2}}{3}\cdot\left(\frac{\tau^{2}}{3\sqrt{3}}\right)^{k-2},\quad\text{for all integers }k\geq3.$$

Therefore, using Lemma J.2 with $v=n\tau^2\,\mathbb{E}w_i^2/3$ and $c=\tau^2/(3\sqrt{3})$ yields that for any t ≥ 0,

$$\mathbb{P}\left(\sum_{i=1}^{n}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}-\sum_{i=1}^{n}\mathbb{E}\left(\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}\right)\leq-\sqrt{\frac{2n\tau^{2}\,\mathbb{E}w_{i}^{2}\,t}{3}}-\frac{\tau^{2}t}{3\sqrt{3}}\right)\leq\exp(-t).$$

Taking t = log(1/δ) yields that for any 0 < δ < 1,

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left(\frac{\tau^{3}w_{i}^{2}}{(\tau^{2}+w_{i}^{2})^{3/2}}\right)>-\sqrt{\frac{2\tau^{2}\,\mathbb{E}w_{i}^{2}\log(1/\delta)}{3n}}-\frac{\tau^{2}\log(1/\delta)}{3\sqrt{3}\,n}\right)\geq1-\delta.$$

This finishes the proof.

## I Proofs For Section 6

We first prove Proposition 6.1.

Proof of Proposition 6.1. The proof directly follows from Theorem 3.5 and the union bound.

Next, we prove Proposition 6.2.

Proof of Proposition 6.2. We only sketch the proof, as most of it follows from that of Theorem H.2. By Proposition 6.1 and taking z² = 2 log n, we obtain ‖µ̂ − µ*‖₂ → 0 in probability. Similarly, following the proof of Theorem H.3, we obtain ‖v̂ − σ‖₂ → 0 in probability, where $\widehat{v}=(\widehat{v}_1,\ldots,\widehat{v}_d)^T$ and $\sigma=(\sigma_{11},\ldots,\sigma_{dd})^T$. With a slight overload of notation, let $L_n(\mu)=L_n(\mu,\sigma)$. Let $\tau_{\sigma_k}=\sigma_{kk}\sqrt{n}/z$. Then, following the proof of Theorem H.2, we obtain

$$\sqrt{n}\left(\widehat{\mu}-\mu^{*}\right)\simeq[H_{n}(\mu^{*})]^{-1}\left(-\sqrt{n}\,\nabla L_{n}(\mu^{*})\right)=[H_{n}(\mu^{*})]^{-1}\begin{pmatrix}\sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}\varepsilon_{i1}}{\sigma_{11}\sqrt{\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2}}}\\ \vdots\\ \sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}\varepsilon_{id}}{\sigma_{dd}\sqrt{\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2}}}\end{pmatrix},$$

where

$$H_{n}(\mu^{*})=\begin{pmatrix}\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}^{2}}{(\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2})^{3/2}}&&0\\ &\ddots&\\ 0&&\frac{\sqrt{n}}{z}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}^{2}}{(\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2})^{3/2}}\end{pmatrix},$$

so that

$$\sqrt{n}\left(\widehat{\mu}-\mu^{*}\right)\simeq\begin{pmatrix}\sigma_{11}&&0\\ &\ddots&\\ 0&&\sigma_{dd}\end{pmatrix}\begin{pmatrix}\sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}\varepsilon_{i1}}{\sigma_{11}\sqrt{\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2}}}\\ \vdots\\ \sqrt{n}\cdot\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}\varepsilon_{id}}{\sigma_{dd}\sqrt{\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2}}}\end{pmatrix}=:\Lambda\mathrm{I},$$

where $\Lambda=\operatorname{diag}(\sigma_{11},\ldots,\sigma_{dd})$. It remains only to derive the asymptotic distribution of the term I:

$$\mathrm{I}=\sqrt{n}\cdot\left[\begin{pmatrix}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}\varepsilon_{i1}}{\sigma_{11}\sqrt{\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2}}}\\ \vdots\\ \frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}\varepsilon_{id}}{\sigma_{dd}\sqrt{\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2}}}\end{pmatrix}-\mathbb{E}\begin{pmatrix}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}\varepsilon_{i1}}{\sigma_{11}\sqrt{\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2}}}\\ \vdots\\ \frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}\varepsilon_{id}}{\sigma_{dd}\sqrt{\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2}}}\end{pmatrix}\right]+\sqrt{n}\cdot\mathbb{E}\begin{pmatrix}\frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{11}}\varepsilon_{i1}}{\sigma_{11}\sqrt{\tau_{\sigma_{11}}^{2}+\varepsilon_{i1}^{2}}}\\ \vdots\\ \frac{1}{n}\sum_{i=1}^{n}\frac{\tau_{\sigma_{dd}}\varepsilon_{id}}{\sigma_{dd}\sqrt{\tau_{\sigma_{dd}}^{2}+\varepsilon_{id}^{2}}}\end{pmatrix}=\mathrm{I}_{1}+\mathrm{I}_{2}.$$

Again, following the proof of Theorem H.2, the ℓ₂ norm of the second term goes to 0.
For the first term I₁, the summands are centered i.i.d. vectors with bounded second moments, so the multivariate central limit theorem applies and I₁ is asymptotically normal. Thus we have

$$\sqrt{n}\left(\widehat{\mu}-\mu^{*}\right)\rightsquigarrow\mathcal{N}(0,\Sigma).$$

This finishes the proof.

## J Preliminary Lemmas

This section collects preliminary lemmas that are frequently used in the proofs of the main results and supporting lemmas. We first state Hoeffding's inequality and then present a form of Bernstein's inequality. We omit their proofs and refer interested readers to Boucheron et al. (2013).

Lemma J.1 (Hoeffding's inequality). Let $Z_1,\ldots,Z_n$ be independent real-valued random variables such that $a\leq Z_i\leq b$ almost surely. Let $S_n=\sum_{i=1}^n(Z_i-\mathbb{E}Z_i)$ and $v=n(b-a)^2$. Then for all t ≥ 0,

$$\mathbb{P}\left(S_{n}\geq\sqrt{vt/2}\right)\leq e^{-t},\quad\mathbb{P}\left(S_{n}\leq-\sqrt{vt/2}\right)\leq e^{-t},\quad\mathbb{P}\left(|S_{n}|\geq\sqrt{vt/2}\right)\leq2e^{-t}.$$

Lemma J.2 (Bernstein's inequality). Let $Z_1,\ldots,Z_n$ be independent real-valued random variables such that

$$\sum_{i=1}^{n}\mathbb{E}Z_{i}^{2}\leq v,\quad\sum_{i=1}^{n}\mathbb{E}|Z_{i}|^{k}\leq\frac{k!}{2}vc^{k-2}\ \text{ for all }k\geq3.$$

If $S_n=\sum_{i=1}^n(Z_i-\mathbb{E}Z_i)$, then for all t ≥ 0,

$$\mathbb{P}\left(S_{n}\geq\sqrt{2vt}+ct\right)\leq e^{-t},\quad\mathbb{P}\left(S_{n}\leq-(\sqrt{2vt}+ct)\right)\leq e^{-t},\quad\mathbb{P}\left(|S_{n}|\geq\sqrt{2vt}+ct\right)\leq2e^{-t}.$$

Proof of Lemma J.2. This lemma is a two-sided extension of Theorem 2.10 of Boucheron et al. (2013). The proof follows from a similar argument to that used in the proof of Theorem 2.10, and is thus omitted.

Our third lemma concerns the localized Bregman divergence for convex functions. It was first established in Fan et al. (2018). For any loss function L, define the Bregman divergence and the symmetric Bregman divergence as

$$D_{L}(\beta_{1},\beta_{2})=L(\beta_{1})-L(\beta_{2})-\langle\nabla L(\beta_{2}),\beta_{1}-\beta_{2}\rangle,\qquad D_{L}^{s}(\beta_{1},\beta_{2})=D_{L}(\beta_{1},\beta_{2})+D_{L}(\beta_{2},\beta_{1}).$$

Lemma J.3. For any $\beta_\eta=\beta^*+\eta(\beta-\beta^*)$ with η ∈ (0, 1] and any convex loss function L, we have

$$D_{L}^{s}(\beta_{\eta},\beta^{*})\leq\eta D_{L}^{s}(\beta,\beta^{*}).$$

Our fourth lemma in this section concerns three basic inequalities that are frequently used in the proofs.

Lemma J.4. The following inequalities hold:

(i) $(1+x)^r\geq1+rx$ for $x\geq-1$ and $r\in\mathbb{R}\setminus(0,1)$;

(ii) $(1+x)^r\leq1+rx$ for $x\geq-1$ and $r\in(0,1)$;

(iii) $(1+x)^r\leq1+(2^r-1)x$ for $x\in[0,1]$ and $r\in\mathbb{R}\setminus(0,1)$.
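For concreteness, the following short display spells out the one specialization of Lemma J.4 (iii) that is invoked in the variance-consistency proof of Section H; it is implied by, though not written out in, the lemma itself.

```latex
% Lemma J.4 (iii): (1+x)^r <= 1 + (2^r - 1) x  for x in [0,1], r in R \ (0,1).
% Specializing to r = -1 gives the bound used in Section H:
\[
  (1+x)^{-1} \;\le\; 1 + \left(2^{-1}-1\right)x \;=\; 1-\frac{x}{2},
  \qquad x\in[0,1],
\]
% and the bound is tight at both endpoints:
\[
  x=0:\ (1+0)^{-1}=1=1-\tfrac{0}{2},
  \qquad
  x=1:\ (1+1)^{-1}=\tfrac12=1-\tfrac12 .
\]
```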
Review 1:
Summary: This work studies the problem of how to estimate the mean of a distribution with finite variance given i.i.d. samples and proposes a self-tuned estimator which does not require cross-validation or Lepski's method to tune hyper-parameters. The estimator is based on jointly minimizing a newly defined penalized pseudo-Huber loss function over both the estimation variable $\mu$ and robustness variable $\nu$. For finite-sample theory, an estimation error bound of order $O(\sqrt{\log(n)/n})$ is established, where $n$ is the number of samples. To compare with the existing estimators, median-of-means (MoM) and the trimmed mean estimator, the asymptotic efficiency of the proposed estimator is studied. In numerical experiments, the proposed method achieves lower estimation errors on datasets generated from skewed generalized t distributions, compared with the sample mean, MoM, trimmed mean, cross validation, and Lepski's method, and it is also more computationally efficient than the last two methods.
Strengths and Weaknesses:
Strengths
(1) The problem of mean estimation of skewed or heavy-tailed distributions is fundamental and of wide interest. Thus, an efficient method that is not overly complicated could have a broad impact.
(2) The main idea of the proposed self-tuned estimator, jointly minimizing over the estimated mean $\mu$ and robustness parameter $\nu$, is clearly explained. Theorem 2.3 further justifies minimization over $\nu$ by proving that, given the ground truth $\mu_{\star}$, the optimal $\nu_{\star}$ can converge to the distribution variance $\sigma$ as $n$ tends to infinity.
(3) Conclusions on the finite-sample estimation error bound and asymptotic error bound provide theoretical guarantees on the performance of the estimators. These theoretical results also justify the advantage of the proposed estimator over existing methods.
Weaknesses/questions
(1) The motivation for defining the penalized pseudo-Huber function is to avoid the trivial solutions $0$ and $\infty$. Then why is the constraint $\nu_0 \leq \nu \leq V_0$ still needed to guard $\hat{\nu}$ from $0$ and $\infty$ in (3.1)? Since problem (3.1) is the actual optimization problem to be solved, why not just use the pseudo-Huber function? How do we choose $\nu_0$ and $V_0$ in practice? Is cross-validation still needed?
(2) The theorems can be stated more clearly.
- In Theorem 3.1, both $z$ and $r$ depend on $\delta$, but after the definitions of $z$ and $r$ it says that for any $\delta$ the error bound holds with probability $1-\delta$. Are they the same $\delta$? If so, it'd be better to put the phrase ``for any $\delta$'' up front, before defining $z$ and $r$. The same issue is also in Lemma 3.2.
- In Theorem 3.1, the error bound actually equals the radius $r$. It'd be clearer to use the same notation and state $|\hat{\mu} - \mu^\star| < r$. Also, is there any intuitive explanation of why the error bound equals the radius in the assumption?
- In Lemma 3.2 and Corollary 3.3, one assumption is $n \geq C\max(z^2(\sigma^2 + r^2)/v_0^2, \log (1/\delta))$, but $r$ also depends on $n$, so what is the requirement on $n$ for this assumption to hold? It would be better to plug $r$ into this inequality to derive a condition on $n$.
(3) In Theorem 3.4 and Theorem 3.5, one assumption is $c_0 \sigma_{v_0^2 n/z^2} \leq C_0 \sigma$. What is the requirement on $n$ for this assumption to hold? It'd be better to discuss this issue at least for some commonly encountered distributions.
The key question here is whether or not this assumption imposes a stricter condition on $n$.
(4) In the numerical section, what is the method/solver actually used to solve problem (3.5)? How does this method/solver scale as the number of samples $n$ increases? This would determine the runtime efficiency of the proposed estimator.
(5) In the numerical section, is there a reason not to compare all methods in the same plot? For example, can Figure 1 and Figure 3 be merged together? It'd be better to also report the runtimes of the sample mean, MoM, and trimmed mean.
Requested Changes: The points in the Weaknesses/questions are my suggested adjustments. In my view, (1), (3), and (4) are critical, while the others would strengthen the work. Besides, there are some minor issues.
- Should the $\hat{\mu}(\tau)$ in (2.2) be $\tilde{\mu}(\tau)$?
- Is the $\hat{\mu}(\tau)$ in Theorem 2.1 the solution of the pseudo-Huber loss or the penalized pseudo-Huber loss?
- In the last paragraph of section 2, what is the loss function $\ell$? Should it be $\rho$ instead?
Broader Impact Concerns: I have no concern about the ethical implications of the work.
==================================================
Review 2:
Summary: This paper addresses the question of computationally efficient, heavy-tailed, one-dimensional mean estimation for distributions with finite but unknown covariance. A new estimator is proposed for this task, which adapts to the unknown variance and achieves near-optimal finite-sample performance. Their estimator is shown to be asymptotically efficient (achieving the CR lower bound), in comparison to existing approaches for the problem, and can be computed efficiently. Numerical experiments demonstrating the superiority of the proposed approach compared to other methods are also provided.
The idea behind the construction is the following: the Huber loss can be used for this task with known variance (i.e., penalize the mean estimate by the expected squared loss for small loss values and the mean absolute deviation for large loss values). Further, it is possible to smooth/convexify the Huber loss, which yields the pseudo-Huber loss. Both depend on some proxy for the variance, which is unknown. The authors then add an extra optimization parameter for this variance proxy and augment the pseudo-Huber loss so that the expected risk is minimized when this parameter is a decent proxy for the true variance.
Strengths and Weaknesses:
Strengths:
- The self-tuning estimator and the fact that it overcomes the unknown-variance issue are interesting to me, and I wonder if this can be characterized as a special case of a more general approach. The idea is novel (I, at least, have not seen this approach before) and constitutes a good contribution.
- The paper is well written. The technical writing is solid, and the exposition/motivation/outlook are at a good level as well. I enjoyed reading it.
- The efficiency result provides a nice theoretical distinction between the proposed estimator and the MoM method. The argument is reminiscent of that for the classical sample mean vs. median estimator for the expectation (it seems appropriate to mention this and provide a brief discussion). This result provides a theoretical explanation of the improved performance observed empirically.
- Speaking of the experiments, the practical performance of the estimator is solid, beating others consistently (though see the second weakness; it is not clear that they couldn't be improved).
Weaknesses:
- The conditions for the finite-sample guarantees from Theorems 3.4 and 3.5 seem quite subtle and are hard to interpret. Could these conditions be relaxed to more natural and primitive assumptions?
- The MoM estimator is dead simple, more computationally efficient, and solves the formulated problem. The authors admit this but emphasize the asymptotic performance as a key benefit of their approach. However, the guarantees of the self-tuning approach depend on some subtle assumptions, and the finite-sample performance depends on unknown constants. This means that the sample complexity cannot be computed in advance for a fixed error. This should be mentioned in the limitations section.
- A glaring limitation of the approach is the fact that it only treats the one-dimensional case. I find it odd that the authors do not even comment on the extension to $d>1$. For example, they can apply the current method coordinate-wise to obtain an L^2 error of $\sqrt{d}$ times the individual error, perhaps with some $\log d$ factor for a union bound. I encourage the authors to examine this case and, if possible, provide an analysis. The attained risk may not be optimal but is still worth mentioning. Either way, the limitations part of the Conclusion section should discuss the extension to higher dimensions and the roadblocks towards it.
- It is not clear that the MoM and trimmed mean estimators have been optimized for asymptotic efficiency. I wonder if the constant of 8 which appears in the MoM case could be reduced for large n in a way that would make it more efficient. This idea is implemented in the proposed approach and leads to its superiority. It seems appropriate to try to check whether existing approaches can be adapted, or to prove a formal limitation result.
- My sense is that this approach is unlikely to scale to high dimensions (in general, M-estimation like the Huber loss usually does not work for high-dimensional robust statistics). Did the authors try looking at the $d>1$ case? The authors motivate their work by mentioning `high-dimensional data analysis' in the first sentence of the intro. It would be appropriate to treat the multivariate case, or at least adequately discuss it and explain the roadblocks towards the extension. This too should be added to the limitations section.
Overall: Despite the above weaknesses, the paper is interesting and appears correct. If the authors could get finite-sample performance guarantees with more interpretable assumptions, and/or argue convincingly that MoM or similar approaches could not be easily improved, I think it will qualify for acceptance.
Requested Changes: Beyond addressing the weaknesses above, here are a few more secondary comments:
- I suggest setting $\alpha=0.5$ from the get-go and perhaps commenting on why this is the right choice. The current approach (which leaves it as a free parameter and then arrives at this choice at the end of the paragraph following Theorem 2.3) does not contribute to clarity or generality.
- In regard to Theorems 4.2-4.3, do the authors have intuition about what is the distinctive characteristic of the proposed approach that enables its asymptotic efficiency, compared to the MoM estimator? Adding such a discussion would be a welcome addition. Another small point: I suggest being consistent with how $\iota\in(0,1]$ (vs.
$0<\iota\leq 1$) is written in those theorems.
- $v_0$ should be fixed at the start of Lemma 3.2. Currently, it appears out of nowhere.
- In Eq. (3.1), $\hat{v}$ should be just $v$.
- Can the authors provide an interpretation of the variance ratio assumption from Theorem 3.4? Perhaps identifying primitive sufficient conditions or providing an example that verifies it would be helpful.
Broader Impact Concerns: No concerns.
==================================================
Review 3:
Summary: This paper proposes an algorithm for robust mean estimation. The proposed method can automatically estimate the variance, and theoretical justification of its advantage over median-of-means (MoM) is provided. Numerical results also demonstrate its advantage.
Strengths and Weaknesses:
Pros: This paper provides rigorous theoretical justification of the performance of the proposed method (Theorem 4.2) and its advantage over MoM (Theorem 4.1). The mathematical statements are clear and easy to understand.
Cons: My two major concerns are about the contribution and the topic of this paper:
* This paper studies robust mean estimation and provides a lot of theoretical justification. However, in my point of view, this paper is more suitable for a statistics journal than an ML journal, given the lack of real-data analysis. Mean estimation is a very traditional statistical problem.
* The theory mainly considers the univariate case and does not consider the multivariate case. A theory in a multivariate scenario may provide more insight into the relationship between the mean estimation task performance and the covariance structure of the distribution, i.e., for the asymptotic normality results in Theorems 4.1 and 4.2, how are the asymptotic distributions related to the covariance structure?
Besides the two main concerns, the following are some minor comments on the writing:
* I would suggest mentioning "heavy-tailed distribution" in either the abstract or the title.
* The first paragraph of the paper is a little bit misleading. It mentions some applications in high-dimensional statistics. However, starting from the second paragraph, it talks about a mean estimation problem for a univariate random variable.
* In the first sentence of the abstract, "...self-tuned robust estimators for estimating the mean of distributions with only finite variances", does "with only finite variances" refer to the robust estimator, or to the distribution?
* In "the resulting estimator for the robustification parameter can adapt to the unknown variance automatically", what does the "variance" refer to?
Requested Changes: Please provide some more results on the multivariate asymptotic distribution.
Broader Impact Concerns: NA
==================================================
# POMRL: No-Regret Learning-To-Plan With Increasing Horizons

Khimya Khetarpal∗ † khimya.khetarpal@mail.mcgill.ca
Department of Computer Science, McGill University

Claire Vernade∗‡ claire.vernade@gmail.com
University of Tuebingen

Brendan O'Donoghue bodonoghue@google.com
Google DeepMind

Satinder Singh Baveja baveja@google.com
Google DeepMind

Tom Zahavy tomzahavy@google.com
Google DeepMind

Reviewed on OpenReview: *https://openreview.net/forum?id=brGgOAXYtr*

## Abstract

We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying relatedness across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks become more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.

## 1 Introduction

Meta-learning (Caruana, 1997; Baxter, 2000; Thrun & Pratt, 1998; Finn et al., 2017; Denevi et al., 2018) offers a powerful paradigm to leverage past experience to reduce the sample complexity of learning future related tasks. *Online meta-learning* considers a sequential setting, where the agent progressively accumulates knowledge and uses past experience to learn good priors and to quickly adapt within each task (Finn et al., 2019; Denevi et al., 2019). Robots acting in the real world, for instance, need to be responsive to and robust against perturbations inherent in the environment dynamics and in their decision making. When the tasks share a structure, i.e., have similar transition dynamics and are related, such approaches enable progressively faster convergence, or equivalently better model accuracy with better sample complexity (Schmidhuber & Huber, 1991; Thrun & Pratt, 1998; Baxter, 2000; Finn et al., 2017; Balcan et al., 2019).

∗Equal Contribution. †Work partially done during an internship at DeepMind. Now at DeepMind. ‡Work partially done at DeepMind.

In model-based reinforcement learning (RL), the agent uses an estimated model of the environment to plan actions ahead towards the goal of maximizing rewards. A key component in the agent's decision making is the horizon used during planning. In general, an *evaluation horizon* is imposed by the task itself, but the learner may want to use a different and potentially shorter *guidance horizon*. In the discounted setting, the size of the evaluation horizon is of order $(1-\gamma_{\rm eval})^{-1}$, for some discount factor $\gamma_{\rm eval}\in(0,1)$, and the agent may use $\gamma\neq\gamma_{\rm eval}$ for planning. For instance, a classic result known as Blackwell Optimality (Blackwell, 1962) states that there exists a discount factor $\gamma^\star$ and a corresponding optimal policy such that the policy is also optimal for any greater discount factor $\gamma\geq\gamma^\star$. Thus, an agent that plans with $\gamma=\gamma^\star$ will be optimal for any $\gamma_{\rm eval}>\gamma^\star$.
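The Blackwell-optimality phenomenon is easy to observe numerically. The following is a minimal sketch, not part of the paper's experiments: the MDP, random seed, and the helper name `greedy_policy` are ours, and the code simply sweeps the discount factor and reports when the value-iteration-greedy policy stops changing.

```python
import numpy as np

def greedy_policy(P, R, gamma, iters=2000):
    """Greedy (optimal) policy of a tabular MDP via value iteration."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * (P @ V)   # P has shape (S, A, S); Q has shape (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

rng = np.random.default_rng(0)
S, A = 10, 2
P = rng.dirichlet(np.ones(S), size=(S, A))  # random transition model
R = rng.uniform(size=(S, A))                # rewards in [0, 1]

# Beyond some threshold gamma*, the greedy policy stops changing.
prev = None
for gamma in np.linspace(0.1, 0.99, 10):
    pi = greedy_policy(P, R, gamma)
    changed = prev is not None and not np.array_equal(pi, prev)
    print(f"gamma={gamma:.2f}  changed={changed}  policy={pi}")
    prev = pi
```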
In the Arcade Learning Environment (Bellemare et al., 2013) a discount factor of $\gamma_{\rm eval}=1$ is used for evaluation, but typically a smaller γ is used for training (Mnih et al., 2015). Using a smaller discount factor acts as a regularizer (Amit et al., 2020; Petrik & Scherrer, 2008; Van Seijen et al., 2009; François-Lavet et al., 2019; Arumugam et al., 2018) and reduces planner over-fitting in random MDPs (Arumugam et al., 2018). Indeed, the choice of planning horizon plays a significant role in computation (Kearns et al., 2002), optimality (Kocsis & Szepesvári, 2006), and the complexity of the policy class (Jiang et al., 2015). In addition, meta-learning discount factors has led to significant improvements in performance (Xu et al., 2018; Zahavy et al., 2020; Flennerhag et al., 2021; 2022; Luketina et al., 2022).

When doing model-based RL with a learned model, the optimal guidance planning horizon, called the *effective horizon* by Jiang et al. (2015), depends on the accuracy of the model, and thus on the amount of data used to estimate it. Jiang et al. (2015) show that when data is scarce, a guidance discount factor $\gamma<\gamma_{\rm eval}$ should be preferred for planning. The reason for this is straightforward: if the model used for planning is inaccurate, then errors will tend to accumulate along the planned trajectory. A shorter effective planning horizon will accumulate less error and may lead to better performance, even when judged using the true $\gamma_{\rm eval}$. While that work treated only the batch, single-task setting, the question of the effective planning horizon remains open in the online meta-learning setting, where the agent accumulates knowledge from many tasks, with limited interactions within each task.

In this work, we consider a *meta-reinforcement-learning* problem made of a sequence of **related tasks**. We leverage this structural task similarity to obtain model estimators with faster convergence as more tasks are seen. The central question of our work is: Can we meta-learn the model across tasks and adapt the effective planning horizon accordingly? We take inspiration from the *Average Regret-Upper-Bound Analysis* [ARUBA] framework (Khodak et al., 2019) to generalize planning loss bounds to the meta-RL setting. A high-level, intuitive outline of our approach is presented in Fig. 1. **Our main contributions** are as follows:

- We formalize planning in a model-based meta-RL setting as an *average planning loss* minimization problem, and we propose an algorithm to solve it.
- Under a structural *task-similarity* assumption, we prove a novel high-probability task-averaged regret upper bound on the planning loss of our algorithm, inspired by ARUBA. We also demonstrate a way to learn the task-similarity parameter σ on-the-fly. To the best of our knowledge, this is the first formal (ARUBA-style) analysis to show that meta-RL can be more efficient than RL.
- Our theoretical result highlights a new dependence of the planning horizon on the size of the within-task data m and on the number of tasks T. This observation allows us to propose two heuristics to adapt the planning horizon given the overall sample size.

## 2 Preliminaries

**Reinforcement Learning.** We consider tabular Markov Decision Processes (MDPs) $M=\langle\mathcal{S},\mathcal{A},R,P,\gamma_{\rm eval}\rangle$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, and we denote the set cardinalities by $S=|\mathcal{S}|$ and $A=|\mathcal{A}|$.
For each state $s\in\mathcal{S}$ and each available action $a\in\mathcal{A}$, the probability vector $P(\cdot\mid s,a)$ defines a transition model over the state space and is a probability distribution in a set of feasible models $\mathcal{D}_P\subset\Delta_S$, where $\Delta_S$ is the probability simplex of dimension $S-1$. We denote by $\Sigma\leq1$ the diameter of $\mathcal{D}_P$. A policy is a function $\pi:\mathcal{S}\to\mathcal{A}$ and it characterizes the agent's behavior.

![2_image_0.png](2_image_0.png)

Figure 1: **Effective Planning Horizons in Meta-Reinforcement Learning.** The agent faces a sequence of tasks with transition vectors $(P^t)_{t\in[T]}$ (probability vectors represented by blue dots) all close to each other (σ < Σ = 1). The agent builds a transition model for each task and plans with these inaccurate models. By using data from previous tasks, the agent meta-learns an initialization of the model ($\hat{P}^{o,t}$), which leads to better planning in new, related but unseen tasks. We show an improved average regret upper bound that scales with the task-similarity parameter σ and inversely with the number of tasks T: as knowledge accumulates, uncertainty diminishes, and the agent can plan with longer horizons. All tasks $P^t\sim\mathbb{P}$ are centered at some fixed but unknown $P^o$, depicted here by the shaded red dot indicated by the arrow.

We consider the bounded reward setting, i.e., $R\in[0,R_{\max}]$, and without loss of generality we set $R_{\max}=1$ (unless stated otherwise). Given an MDP, or task, M, for any policy π, let $V^\pi_{M,\gamma}\in\mathbb{R}^S$ be the value function when evaluated in MDP M with discount factor $\gamma\in(0,1)$ (potentially different from $\gamma_{\rm eval}$), defined as $V^\pi_{M,\gamma}(s)=\mathbb{E}\left[\sum_{t=0}^\infty\gamma^tR_{s_t}\mid s_0=s\right]$. The goal of the agent is to find an optimal policy, $\pi^\star_{M,\gamma}=\arg\max_\pi\mathbb{E}_{s\sim\rho}V^\pi_{M,\gamma}(s)$, where ρ > 0 is any positive measure, denoted $\pi^\star$ when there is no ambiguity. For given state and action spaces and reward function $(\mathcal{S},\mathcal{A},R)$, we denote by $\Pi_\gamma$ the set of *potentially* optimal policies for discount factor γ: $\Pi_\gamma=\{\pi\mid\exists\,P\ \text{s.t.}\ \pi=\pi^\star_{M,\gamma}\ \text{where}\ M=\langle\mathcal{S},\mathcal{A},R,P,\gamma\rangle\}$. We use Big-O notation, $O(\cdot)$ and $\tilde{O}(\cdot)$, to hide respectively universal constants and poly-logarithmic terms in T, S, A and δ > 0 (the confidence level).

**Model-based Reinforcement Learning.** In practice, the true model of the world is unknown and must be estimated from data. One approach to approximately solve the optimization problem above is to construct a model $\langle\hat{R},\hat{P}\rangle$ from data, then find $\pi^\star_{\hat{M},\gamma}$ for the corresponding MDP $\hat{M}=\langle\mathcal{S},\mathcal{A},\hat{R},\hat{P},\gamma\rangle$. This approach is called model-based RL or *certainty-equivalence (CE) control*.

**Planning with inaccurate models.** In this setting, Jiang et al. (2015) define the planning loss as the gap in expected return in MDP M when using $\gamma\leq\gamma_{\rm eval}$ and the optimal policy for an approximate model $\hat{M}$:

$$\mathcal{L}(\hat{M},\gamma\mid M,\gamma_{\mathrm{eval}})=\|V_{M,\gamma_{\mathrm{eval}}}^{\pi_{M,\gamma_{\mathrm{eval}}}^{\star}}-V_{M,\gamma_{\mathrm{eval}}}^{\pi_{\hat{M},\gamma}^{\star}}\|_{\infty}.$$

Thus, the **optimal effective planning horizon** $(1-\gamma^\star)^{-1}$ is defined using the discount factor that minimizes the planning loss, *i.e.*, $\gamma^\star:=\arg\min_{0\leq\gamma\leq\gamma_{\rm eval}}\mathcal{L}(\hat{M},\gamma\mid M,\gamma_{\rm eval})$.

Theorem 1 (Jiang et al. (2015)). *Let M be an MDP with non-negative bounded rewards and evaluation discount factor $\gamma_{\rm eval}$. Let $\hat{M}$ be the approximate MDP comprising the true reward function of M and the approximate transition model $\hat{P}$, estimated from m > 0 samples for each state-action pair.*
*Then, with probability at least 1 − δ,*

$$\left\|V_{M,\gamma_{\mathrm{eval}}}^{\pi_{M,\gamma_{\mathrm{eval}}}^{\star}}-V_{M,\gamma_{\mathrm{eval}}}^{\pi_{\hat{M},\gamma}^{\star}}\right\|_{\infty}\leq\frac{\gamma_{\mathrm{eval}}-\gamma}{(1-\gamma_{\mathrm{eval}})(1-\gamma)}+\frac{2\gamma R_{\mathrm{max}}}{(1-\gamma)^{2}}\left(\sqrt{\frac{\Sigma}{2m}\log\frac{2SA|\Pi_{\gamma}|}{\delta}}\right),\tag{1}$$

*where Σ is upper-bounded by 1 as $P,\hat{P}\in\Delta_S$.*

This result holds for a count-based model estimator (i.e., the empirical average of observed transitions) given by a generative model for each pair (s, a). It gives an upper bound on the planning loss as a function of the

![3_image_0.png](3_image_0.png)

Figure 2: On the role of incorporating a ground-truth prior of the transition model on the planning horizon. The planning loss is a function of the discount factor γ and is impacted by incorporating prior knowledge. The learner has m = 5, 10, 20, 50 samples per task to estimate the model, corresponding to the curves in each subfigure. Inspecting any of the subfigures, we observe that larger values of m lead to lower planning loss and a larger effective discount factor. Besides, inspecting one value of m across tasks (e.g., m = 5), we see that the same effect (lower planning loss and larger effective discount) occurs when the learner puts more weight on the ground-truth prior through α.

guidance discount factor γ < 1. The result decomposes the loss into two terms: the constant bias, which decreases as γ tends to $\gamma_{\rm eval}$, and the variance (or uncertainty) term, which increases with γ but decreases as $1/\sqrt{m}$. As m → ∞ that second factor vanishes, but in the low-sample regime the optimal effective planning horizon should trade off both terms.

**Illustration.** These effects are illustrated in Fig. 2 on a simple 10-state, 2-action random MDP. The leftmost plot uses the simple count-based model estimator and reproduces the results from Jiang et al. (2015). We then incorporate the true prior (mean model $P^o$ as in Fig. 1 and defined above Eq. 3 in Assumption 1) in the estimator with a growing mixing factor α ∈ (0, 1): $\hat{P}(m)=\alpha P^o+(1-\alpha)\frac{\sum_iX_i}{m}$. We observe that increasing the weight α ∈ (0, 1) on good prior knowledge enables longer planning horizons and lower planning loss.

**Online Meta-Learning and Regret.** We consider an online meta-RL problem where an agent is presented with a sequence of tasks $M_1,M_2,\ldots,M_T$, where for each $t\in[T]$, $M_t=\langle\mathcal{S},\mathcal{A},P^t,R,\gamma_{\rm eval}\rangle$; that is, the MDPs only differ from each other by the transition matrix (dynamics model) $P^t$. The learner must sequentially estimate the model $\hat{P}^t$ for each task t from a batch of m transitions simulated for each state-action pair¹. Its goal is to minimize the average planning loss, also expressed in the form of the task-averaged regret suffered in planning and defined as

$$\bar{\mathcal{L}}(\hat{M}_{1:T},\gamma\mid M_{1:T},\gamma_{\rm eval})=\frac{1}{T}\sum_{t=1}^T\mathcal{L}(\hat{M}_t,\gamma\mid M_t,\gamma_{\rm eval})=\frac{1}{T}\sum_{t=1}^T\left\|V_{M_t,\gamma_{\rm eval}}^{\pi_{M_t,\gamma_{\rm eval}}^{\star}}-V_{M_t,\gamma_{\rm eval}}^{\pi_{\hat{M}_t,\gamma}^{\star}}\right\|_\infty\tag{2}$$

Note that the reference MDP for each term is the true $M_t$, and the discount factor γ is the same in all tasks. One can see this objective as a stochastic dynamic regret: at each task $t\in[T]$, the learner competes against the optimal policy for the *current* true model, as opposed to competing against the best fixed policy in hindsight used in classical definitions of regret. Note that **our dynamic regret is different from the one considered in ARUBA** (Khodak et al., 2019).
They consider the fully online setting where the data is observed as an arbitrary stream within each task, and each comparator is simply the minimum of the within-task loss in hindsight. In our model, however, access to a simulator (see Sec. 2) allows us to get i.i.d. transition samples as a batch at the beginning of each task, and consequently we define our regret with respect to the true generating parameter. One key consequence of this difference is that their regret bounds cannot be directly applied to our setting, and we prove new results further below.

¹So a total of mSA samples.

## 3 Planning With Online Meta-Reinforcement Learning

We here formalize planning in a model-based meta-RL setting. We start by specifying all our assumptions in Sec. 3.1, including our main assumption about task relatedness (Assumption 1), and present our approach and the proposed algorithms POMRL and ada-POMRL in Sec. 3.2. Our main result is a high-probability upper bound on the average planning loss under the assumed task relatedness, presented as Theorem 2.

## 3.1 Assumptions

In many real-world scenarios, such as robotics, agents are required to be responsive to changes in the environment and, at the same time, robust against perturbations inherent in the environment dynamics and their decision making. In such practical scenarios, the key reason to employ meta-learning is for the learner to leverage the task-similarity (or task variance) across tasks. Bounded task similarity is becoming a core assumption in the analysis of recent meta-learning (Khodak et al., 2019) and multi-task (Cesa-Bianchi et al., 2021) online learning algorithms.

Assumption 1 (Structural Assumption Across Tasks: Task Relatedness). *In this work, we exploit the structural assumption that for all $t\in[T]$, $P^t\sim\mathbb{P}$, centered at some fixed but unknown $P^o\in\Delta_S^{\mathcal{S}\times\mathcal{A}}$ and such that for any (s, a),*

$$\|P_{s,a}^{t}-P_{s,a}^{o}\|_{\infty}\leq\sigma=\max_{(s,a)}\sigma(s,a)\quad a.s.\tag{3}$$

This also implies that $\max_{t,t'}\|P^t_{s,a}-P^{t'}_{s,a}\|_\infty\leq2\sigma$, and that the meta-distribution $\mathbb{P}$ is bounded within a small subset of the simplex. It is immediate to extend our results under a high-probability assumption instead of the almost-sure statement above. In our experiments, we will use Gaussian or Dirichlet priors over the simplex, whose moments are bounded with high probability, not almost surely. Importantly, we will say that a multi-task environment is *strongly structured* when σ < Σ, *i.e.*, when the effective diameter of the models is smaller than that of the entire feasible space.

Assumption 2 (Access to a Simulator). *We assume that for each task $t\in[T]$ we have access to a simulator of transitions (Kearns et al., 2002) providing m i.i.d. samples $(X^{t,i}_{s,a})_{i=1..m}\in\mathcal{S}^m\sim P^t(\cdot\mid s,a)$ (categorical distribution).*

Next, for simplicity we assume throughout that the rewards are known and focus on learning and planning with an approximate dynamics model. Additionally estimating the reward is a straightforward extension of our analysis and would not change the implications of our main result.

Assumption 3 (Known Rewards). *Given a distribution of tasks, we assume that the rewards are known.*

## 3.2 Our Approach

With access to a simulator (Assumption 2), for each (s, a) we can compute an empirical estimator: for each $s'\in[S]$, $\bar{P}^t_{s,a}(s')=\frac{1}{m}\sum_{i=1}^m\mathds{1}\{X^{t,i}_{s,a}=s'\}$, which naturally satisfies $\sum_{s'}\bar{P}^t_{s,a}(s')=1$.
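To make the notation concrete, here is a minimal sketch of this count-based estimator (function and variable names are ours, assuming the simulator of Assumption 2 returns integer next-state indices):

```python
import numpy as np

def empirical_model(samples, num_states):
    """Count-based estimator bar-P^t from m next-state samples per (s, a).

    samples: integer array of shape (S, A, m); samples[s, a, i] is the i-th
             next state drawn from the simulator P^t(. | s, a).
    Returns: array of shape (S, A, num_states) whose (s, a) rows are the
             empirical transition probabilities, each summing to 1.
    """
    S, A, m = samples.shape
    P_bar = np.zeros((S, A, num_states))
    for s in range(S):
        for a in range(A):
            states, counts = np.unique(samples[s, a], return_counts=True)
            P_bar[s, a, states] = counts / m
    return P_bar
```

The regularized estimator of Eq. (4) below then shrinks each such row toward a meta-learned prior.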
We perform meta-RL by alternating between minimizing a batch *within-task* regularized least-squares loss and an outer-loop step in which we optimize the regularization to optimally balance the bias and variance of the next estimator.

**Estimating the dynamics model via regularized least squares.** We adapt the standard technique of a meta-learned regularizer (see, e.g., Baxter (2000) and Cella et al. (2020) for supervised learning and bandits, respectively) to this model estimation problem. At each round t, the **current model** $\hat{P}^t_{(s,a)}$ is estimated by minimizing **a regularized least-squares loss**: for a given **regularizer** $h_t$ (to be specified below)² and parameter $\lambda_t>0$, for each $(s,a)\in\mathcal{S}\times\mathcal{A}$ we solve

$$\hat{P}^{t}_{(s,a)}=\operatorname*{arg\,min}_{P_{(s,a)}\in\Delta_{S}}\Big\|\underbrace{\frac{1}{m}\sum_{i=1}^{m}\mathds{1}\{X^{t,i}_{s,a}\}}_{\text{empirical transition prob.}}-P_{(s,a)}\Big\|_{2}^{2}+\lambda_{t}\|P_{(s,a)}-h_{t}\|_{2}^{2},\tag{4}$$

²In principle, this loss is well defined for any regularizer $h_t$, but we specify a meta-learned one and prove that it induces good performance.
←− γ-Selection-Procedure(m, αt*, σ, T, S, A*) π ? Pˆt,γ∗ ← Planning(Pˆt(m)) // Output: π $\uparrow^\star_{\rho t,\gamma\in\mathcal{I}}$ Pˆt,γ∗ Update Pˆo,t+1, αt+1 =1 σ2(1+1/t)m+1 // meta-update AoM (Eq. 5) and mixing rate $\mathrm{d}\mathbf{a}\mathbf{t}$ ## 3.3 Average Regret Bound For Planning With Online-Meta-Learning Our main theoretical result below controls the average regret of POMRL (σ), a version of Alg. 1 with additional knowledge of the underlying task relatedness, *i.e.*, the true σ > 0. Theorem 2. Using the notation of Theorem 1, we bound the average planning loss equation 2 for *POMRL* (σ): $$\tilde{\cal L}\leq\frac{\gamma_{\mathrm{real}}-\gamma}{(1-\gamma_{\mathrm{real}})(1-\gamma)}+\frac{2\gamma S}{(1-\gamma)^{2}}\tilde{O}\left(\frac{\sigma+\sqrt{\frac{1}{T}}\left(\sigma+\sqrt{\sigma^{2}+\frac{\Sigma}{m}}\right)}{\sigma^{2}m+1}+\frac{\sigma^{2}m\sqrt{\frac{\Sigma}{m}}}{\sigma^{2}m+1}\right)$$ $\mathbf{M}$ $\mathbf{a}\mathbf{a}\mathbf{b}$ (6) $$\frac{1}{2}$$ . with probability at least 1 − δ*, where* σ 2 < 1 *is the measure of the task-similarity and* σ = max(s,a) σ(*s, a*). The proof of this result is provided in Appendix D and relies on a new concentration bound for the metalearned model estimator. The last term on the r.h.s. corresponds to the uncertainty on the dynamics. First we verify that if T = 1 and m grows large, the second term dominates and is equivalent to O˜( qΣ m ) (as σ 2/(σ 2m + 1) → 0), which is similar to that of Jiang et al. (2015) as there is no meta-learning, with an additional O( 1 m ) but second order term due to the introduced bias. Then, if m is fixed and small, for small enough values of σ 2(typically σ < 1/ √m), the first term dominates and the r.h.s. boils down to O˜(σ + √ 1 m )/ √T . This highlights the interplay of our structural assumption parameter σ and the amount of data m available at each round. The regimes of the bound for various similarity levels are explored empirically in Sec. 5 (Q3). We also show the dependence of the regret upper bound on m and T for a fixed σ, in Appendix Fig. F3. Implications for degree of task-similarity *i.e.*, σ **values.** Our bound suggests that the degree of improvement you can get from meta learning scales with the task similarity σ instead of the set size Σ. Thus, for σ ≤ Σ, performing meta learning with Algorithm 1 guarantees better learning measured via our improved regret bound when there is underlying structure in the problem space which we formalize through Eq. 3. Should σ be large, the techniques will still hold and our bounds will simply scale accordingly. When σ = 0**, all tasks are exactly the same.** Indeed, the mixing rate αt ≈ 1 for all t, so our algorithm boils down to returning the average of means Pˆo,t for each task, which simply corresponds to solving the tasks as a continuous, uninterrupted stream of batches from the nearly same model that Pˆo,t aggregates. Unsurprisingly, our bound recovers that of (Jiang et al., 2015, Theorem 1): the bound below reflects that we have to estimate only one model in a space of "size" Σ with mT samples. $$\vec{\mathcal{L}}\leq\frac{\gamma_{\mathrm{{\small{eval}}}}-\gamma}{(1-\gamma_{\mathrm{{\small{eval}}}})(1-\gamma)}+\frac{2\gamma S}{(1-\gamma)^{2}}\partial\left(\sqrt{\frac{\Sigma}{m T}}\right)$$ $$\text{(7)}$$. When σ = 1**, then** σ = Σ = 1**, then the meta-learning assumption is not relevant but our bound** remains valid and gracefully degrades to reflect it. We need to estimate T models each with m samples. 
Then the second term √ 1 m reflects the usual estimation error for each task while the first term is an added bias (second order in 1m ) due to our regularization to our mean prior P othat is not relevant here. $$\bar{\cal L}\leq\frac{\gamma_{\rm{enz1}}-\gamma}{(1-\gamma_{\rm{enz1}})(1-\gamma)}+\frac{2\gamma S}{(1-\gamma)^{2}}\,\bar{O}\bigg{(}\frac{1}{m}\Big{(}1+\frac{1}{\sqrt{T}}(1+\sqrt{1+\frac{1}{m}})\Big{)}+\frac{1}{\sqrt{m}}\bigg{)}\tag{8}$$ Connections to ARUBA. As explained earlier, our metric is not directly comparable to that of ARUBA (Khodak et al., 2019) but it is interesting to make a parallel with the high-probability average regret bounds proved in their Theorem 5.1. They also obtain an upper bound in O˜(1/ √m + 1/ √mT) if one upper bounds their average within-task regret U¯ ≤ B √m. Remark 1 (Role of the task similarity σ in Eq. 2). When σ > 0, POMRL naturally integrates each new data batch into the model estimation. The knowledge of σ is necessary to obtain this exact and intuitive update rule, and our theory only covers POMRL equipped with this prior knowledge, but we discuss how to learn and plug-in σˆt in practice. Note that it would be possible to extend our result to allow for using the empirical variance estimator with tools like the Bernstein inequality, but we believe this it out of the scope of this work as it would essentially give a similar bound as obtained in Theorem 2 *with an additional lower order term in* O(1/T)*, and it would not provide much further intuition on the meta-planning problem we study.* ## 4 Practical Considerations: Adaption On-The-Fly In this section we propose a variant of POMRL that meta learns the task similarity parameter, which we call ada-POMRL . We compare the two algorithms empirically in a 10 state, 2 action MDP with closely related tasks with a total of T = 15 tasks (details of the experiment setup are deferred to Sec. 5). Performance of **POMRL** . Recall that POMRL is primarily learning the regularizer and assumes the knowledge of the underlying task similarity (i.e. σ). We observe in Fig. 3 that with each round t ∈ T POMRL is able to plan better as it learns and adapts the regularizer to the incoming tasks. The convergence rate and final performance corroborates with our theory. Can we also meta-learn the task-similarity parameter? In practice, the parameter σ may not be known and must be estimated online and plugged in (see Appendix C for details). Alg. 2 ada-POMRL uses Welford's algorithm to compute an online estimate of the variance after every task using the model estimators, and simply plugs-in this estimate wherever POMRL was using the true value. From the perspective of ada-POMRL , POMRL is an "oracle", i.e. the underlying task-similarity is known. However, in most practical scenarios, the learner does not have this information a priori. ![7_image_0.png](7_image_0.png) We compare empirically POMRL and ada-POMRL on a strongly structured problem (σ ≈ 0.01) in Fig. 3 and observe that meta-learning the underlying task relatedness allows ada-POMRL to adapt to the incoming tasks accordingly. Adaptation on-the-fly with ada-POMRL comes at a cost *i.e.*, the performance gap in comparison to POMRL but eventually converges albeit with a slower rate. This is intuitive and a similar theoretical guarantee applies (See Remark 1). 
Figure 3: **ada-POMRL** enables meta-learning the task-similarity on-the-fly with a performance gap for the initial set of tasks as compared to the oracle POMRL , but improves with more tasks This online estimation of σ means that ada-POMRL now requires an initial value for σˆ1, which is a choice left to the practitioner, but will only affect the results of a finite number of tasks at the beginning. Using σˆ1 too small will give a slightly increased weight to the prior in initial tasks, which is not desirable as the latter is not yet learned and will result in an increased bias. On the other hand, setting σˆ1 too large (i.e close to 1/2) will decrease the weight of the prior and increase the variance of the returned solution; in particular, in cases where the true σ is small, a large initialization will slow down convergence and we observe empirical larger gaps between POMRL and ada-POMRL . In the extreme case where σ ≈ 0, a large initialization will drastically slow down ada-POMRL as it will take many tasks before it *discovers* that the optimal behavior is essentially to aggregate the batches. Algorithm 2: ada-POMRL - Planning with Online Meta-Reinforcement Learning Input: Initialize Pˆo,1to uniform, (ˆσ)1 as a matrix of size S × A,α1 = 0. for *task* t ∈ [T] do for t th batch of m *samples* do Pˆt(m) = (1 − αt) 1 m Pm i=1 Xi + αtPˆo,t // regularized least squares minimizer. γ ? ←− γ-Selection-Procedure(m, αt, σt*, T, S, A*) π ? Pˆt,γ? ← Planning(Pˆt(m)) Output: π ? Pˆt,γ? Update Pˆo,t+1, σˆt+1 ←− Welford's online algorithm( ˆσo)t, Pˆo,t+1, Pˆo,t// meta-update AoM (Eq. 5) and task-similarity parameter. Update αt+1 =1 σˆt+12(1+1/t)m+1 // meta-update mixing rate, plug max(σS×A) Tasks vary only in certain states and actions. Thus far, we considered a *uniform* notion of task similarity as Eq. 3 holds for any (*s, a*). However, in many practical settings the transition distribution might remains the same for most part of the state space but only vary on some states across different tasks. These scenarios are hard to analyse in general because local changes in the model parameters do not always imply changes in the optimal value function nor necessarily modify the optimal policy. Our Theorem 2 still remains valid, but it may not be tight when the meta-distribution has non-uniform noise levels. More precisely Theorem 1 in Appendix D remains locally valid for each (*s, a*) pair and one could easily replace the uniform σ with local σ(s,a), but this cannot directly imply a stronger bound on the average planning loss. Indeed, in our experiments, in both POMRL and ada-POMRL , the parameter σ and σˆ respectively, are S × A matrices of state-action dependent variances resulting in state-action dependent mixing rate αt. ## 5 Experiments We now study the empirical behavior of planning with online meta-learning in order to answer the following questions: Q1.Does meta-learning a good initialization of the dynamics model facilitate improved planning accuracy for the choice of γ = γeval? (Sec. 5.1) Q2.Does meta-learning a good initialization of the dynamics model enables longer planning horizons? (Sec. 5.2) Q3.How does performance depend on the amount of shared structure across tasks *i.e.*, σ? (Sec. 5.3) Source code is provided in the supplementary material. Setting: For each experiment, we fix a mean model P o ∈ ∆ S×A S(see below how), and for each new task t ∈ [T], we sample P tfrom a Dirichlet distribution3centered at P o. As prescribed by theory (see Sec.3.2), we set4 σ ≈ 0.01 . 
Note that σ and σ̂, respectively, are S × A matrices of state-action dependent variances that capture the directional variance, as we used Dirichlet distributions as priors and these have non-uniform variance levels in the simplex, depending on how close to the simplex boundary the mean is located. Aligned with our theory, we use the max of the σ matrices, resulting in the aforementioned single scalar value. As in Jiang et al. (2015), $P^o$ (and each $P^t$) characterizes a random chain MDP with S = 10 states⁵ and A = 2 actions, which is drawn such that, for each state–action pair, the transition function $P(s,a,s')$ is constructed by randomly choosing k = 5 states whose probability is set to 0. Then we draw the value of the S − k remaining states uniformly in [0, 1] and normalize the resulting vector.

## 5.1 Meta-Reinforcement Learning Leads To Improved Planning Accuracy For γeval. [Q1.]

We consider the aforementioned problem setting with a total of T = 15 closely related tasks and focus on the planning loss gains due to improved model accuracy. We fix γ = γeval, a rather naive γ-Selection-Procedure, and show the planning loss of POMRL (Alg. 1) against the following **baselines**: 1) **Oracle Prior Knowledge** knows a priori the underlying task structure $(P^o, \sigma)$ and uses an estimator (Eq. 4) with exact regularizer $P^o$ and optimal mixing rate $\alpha_t = \frac{1}{\sigma^2(1+1/t)m+1}$, 2) **Without Meta-Learning** simply uses $\hat{P}^t = \bar{P}^t$, the count-based estimated model using the m samples seen in each task, 3) **POMRL** (Alg. 1) meta-learns the regularizer but knows a priori the underlying task structure, and 4) **ada-POMRL** (Alg. 2) meta-learns not only the regularizer, but also the underlying task-similarity online. The oracle is a strong baseline that provides a minimally inaccurate model and should play the role of an "empirical lower bound". For all baselines, the number of samples per task is m = 5. Results are averaged over 100 independent runs. We also propose and empirically validate competitive heuristics for γ-Selection-Procedure in Sec. 6. In addition, we run another baseline called Aggregating(α = 1), which simply ignores the meta-RL structure and plans assuming there is a single task (see Appendix F.2).

Inspecting Fig. **4(a)**, we can see that our approach ada-POMRL (green) results in decreasing per-task planning loss as more tasks are seen, and decreasing variance as the estimated model gets more stable and approaches the optimal value returned by the Oracle Prior Knowledge baseline (blue). On the contrary, without meta-learning (red), the agent struggles to cope as it faces new tasks every round, and its performance does not improve. ada-POMRL gradually improves as more tasks are seen while adapting to the learned task-similarity on-the-fly, which is the primary cause of the performance gap between ada-POMRL and POMRL. Importantly, requiring no prior knowledge about the underlying task relatedness makes for a more practical algorithm with the same theoretical guarantees (see Sec. 4). Recall that Oracle Prior Knowledge is a strong baseline, as it corresponds to both known task relatedness and a known regularizer.

## 5.2 Meta-Learning The Underlying Task Relatedness Enables Longer Planning Horizons. [Q2.]

We run ada-POMRL for T = 15 (with σ ≈ 0.01) as above and report planning losses for a range of guidance discount factors γ. Results are averaged over 100 independent runs and displayed in Fig. 4(b).
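For reference, the sketch below illustrates the experimental pipeline just described: sampling a random chain MDP following the construction above, and measuring the per-task planning loss by planning with certainty equivalence at guidance discount γ and evaluating at γeval. This is our illustration, not the paper's code; the function names, the reward input R, and the iteration count are assumptions.

```python
import numpy as np

def sample_random_mdp(S=10, A=2, k=5, rng=None):
    """Random chain MDP as in Jiang et al. (2015): per (s, a), zero out k random
    next states, draw the rest uniformly in [0, 1], and normalize."""
    rng = np.random.default_rng(rng)
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            keep = rng.choice(S, size=S - k, replace=False)
            p = np.zeros(S)
            p[keep] = rng.uniform(size=S - k)
            P[s, a] = p / p.sum()
    return P

def value_iteration(P, R, gamma, iters=2000):
    """Optimal values and greedy policy for transitions P (S x A x S), rewards R (S x A)."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = np.max(R + gamma * (P @ V), axis=1)
    return V, np.argmax(R + gamma * (P @ V), axis=1)

def planning_loss(P_true, P_hat, R, gamma, gamma_eval=0.99):
    """max_s [V*(s) - V^{pi_hat}(s)] at gamma_eval, where pi_hat is planned
    in P_hat with guidance discount gamma (certainty equivalence)."""
    S = P_true.shape[0]
    V_star, _ = value_iteration(P_true, R, gamma_eval)
    _, pi_hat = value_iteration(P_hat, R, gamma)
    P_pi = P_true[np.arange(S), pi_hat]            # S x S transition under pi_hat
    R_pi = R[np.arange(S), pi_hat]
    V_pi = np.linalg.solve(np.eye(S) - gamma_eval * P_pi, R_pi)
    return np.max(V_star - V_pi)
```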
³The variance of this distribution is controlled by its coefficient parameters $\alpha_{1:S}$: the larger they are, the smaller the variance. More details on our choices are given in Appendix F.1. Dirichlet distributions with small variance satisfy the high-probability version of our structural assumption 3 for $\sigma = \max_i \sigma_i$.
⁴Our priors are multivariate Dirichlet distributions in dimension S, so we divide the theoretical rate by S to ensure the max is bounded by $1/\sqrt{m}$. See App. F for implementation details.
⁵We provide additional experiments with varying size of the state space in Appendix Fig. F5.

Figure 4: **Planning with Online Meta-Learning. (a) Per-task planning loss** of our algorithms POMRL and ada-POMRL compared to an Oracle and a Without Meta-Learning baseline. All methods use a fixed γ = γeval = 0.99. (b) **ada-POMRL's planning loss** decreases as more tasks are seen. Markers denote the γ that minimizes the planning loss in the respective tasks. Error bars show standard error. (c) ada-POMRL's empirically optimal guidance discount factor (right y-axis) depicts the effective planning horizon, *i.e.*, the one that minimizes the planning loss. The optimal γ, a.k.a. the effective planning horizon, is larger with online meta-learning. The planning loss (left y-axis) shows the minimum planning loss achieved by the agent in that round T. Results are averaged over 100 independent runs and error bars represent 1 standard deviation.

We observe in Fig. 4(b) that when the agent has seen fewer tasks T, an intermediate value of the discount is optimal, *i.e.*, one that minimizes the task-averaged planning loss (γ* < 0.5). In the presence of strong underlying structure across tasks, **as the agent sees more tasks, the effective planning horizon** (γ* > 0.7) **shifts to a larger value** - one that is closer to the gamma used for evaluation (γeval = 0.99). As we incorporate the knowledge of the underlying task distribution, *i.e.*, the meta-learned initialization of the dynamics model, we note that the adaptive mixing rate αt puts increasing amounts of weight on the shared task-knowledge. Note that this conforms to the effect of increasing weight on the model initialization that we observed in Fig. 2. As predicted by theory, the per-task planning loss decreases as T grows and is minimized for progressively larger values of γ, meaning longer planning horizons (see Fig. 4(c)). In addition, Appendix Fig. F4 depicts the effective planning horizon individually for the ada-POMRL, Oracle, and Without Meta-Learning baselines.

## 5.3 POMRL And Ada-POMRL Perform Consistently Well For Varying Task-Similarity. [Q3.]

We have thus far studied scenarios where the learner can exploit strong task relatedness, *i.e.*, σ ≈ 0.01 < 1/(S√m) (for low data per task, *i.e.*, m = 5) is small, and we now illustrate the other regimes discussed in Section 3.2. We show that our algorithms remain consistently good for all amounts of task-similarity. We let σ vary to cover the **three regimes**: σ ≈ 0.01 corresponding to fast convergence, σ = 0.025 in the intermediate regime (needing longer T), and σ = 0.047 the loosely structured case, where we do not expect meta-learning to help improve model accuracy much. The small inset figures in Fig. 5 represent the task distribution in the simplex. In all cases, ada-POMRL estimates σ online and we report the planning losses for a range of γ's. Inspecting Fig. 5, we observe that in the presence of closely related tasks (Fig.
5(a)), all methods perform well (except Without Meta-Learning). As the underlying task relatedness decreases (the intermediate regime in Fig. 5(b)), both POMRL and ada-POMRL remain consistent in their performance as compared to the Oracle Prior Knowledge baseline. When the underlying tasks are loosely related (as in Fig. 5(c)), ada-POMRL and POMRL can still perform well in comparison to the other baselines. Next, we discuss the planning loss plots for ada-POMRL for the three cases, shown in Figures 5(d), 5(e), and 5(f), respectively. An intermediate value of task-similarity (Fig. 5(e)) still leads to gains, albeit at a lower speed of convergence. In contrast, a large value of σ = 0.047 indicates little relatedness across tasks, resulting in minimal gains from meta-learning, as seen in Fig. 5(f). The learner struggles to learn a good initialization of the model dynamics as there is no natural one. All planning loss curves remain U-shaped and overall higher, with an intermediate optimal guidance γ value (≈ 0.5). However, ada-POMRL does not do worse overall than the initial run T = 1, meaning that while there is not a significant improvement, our method does not hurt performance in loosely related tasks⁶. Recall that ada-POMRL has no a priori knowledge of the number of tasks (T) or the underlying task relatedness (σ), *i.e.*, adaptation is on-the-fly.

Figure 5: POMRL and ada-POMRL **are robust to varying task-similarity** σ for a small fixed amount of data m = 5 available at each round t ∈ [T]. Panel columns show, left to right: Strong Structure (a,d), Medium Structure (b,e), and Loosely Structured (c,f). A small value of σ reflects the fact that tasks are closely related to each other and share a good amount of structure, whereas a much larger value indicates loosely related tasks (simplex plots illustrate the meta-distribution in dimension 2). In the former case, meta-learning the shared structure alongside a good model initialization leads to most gains. In the latter, the learner struggles to cope with new unseen tasks which differ significantly. Error bars represent 1 standard deviation of uncertainty across 100 independent runs.

## 6 Adaptation Of Planning Horizon γ

We now propose and empirically validate two heuristics to design an adaptive schedule for γ, based on existing work (Sec. 6.1) and on our average regret upper bound (Sec. 6.2).

## 6.1 Schedule Adapted From Dong Et Al. (2021) [γ = f(m, αt, σt, T)]

Dong et al. (2021) study a continuous, never-ending RL setting. They divide time into growing phases $(T_t)_{t\ge0}$ and tune a discount factor $\gamma_t = 1 - 1/T_t^{1/5}$. We adapt their schedule to our problem, where time is already naturally divided into tasks: for each t ≥ 0, we define the phase size $T_t$ and the corresponding $\gamma_t$ as

$$T_{0}=m,\quad T_{t}=\frac{SA}{L}\big(\underbrace{(1-\alpha_{t})m+\alpha_{t}m(t-1)}_{\text{effective sample size}}\big),\quad\gamma_{t}=1-\frac{1}{T_{t}^{1/5}},$$

where L is the maximum trajectory length. The size of each $T_t$, t ≥ 1, is controlled by an "effective sample size" which combines the current task's samples and the samples observed so far, as used to construct our estimator in POMRL.
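A minimal sketch of this adapted schedule (our illustration; the function name is ours):

```python
def dong_gamma(t, m, alpha_t, S, A, L):
    """Adapted schedule of Dong et al. (2021): phase size T_t driven by an
    effective sample size, and gamma_t = 1 - 1 / T_t^(1/5)."""
    if t == 0:
        T_t = m
    else:
        eff = (1 - alpha_t) * m + alpha_t * m * (t - 1)  # effective sample size
        T_t = (S * A / L) * eff
    return 1.0 - 1.0 / T_t ** 0.2
```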
## 6.2 Using The Upper Bound To Guide The Schedule [γ = min{1, γ0 + γ̃}]

Having a second look at Theorem 2, we see that the r.h.s. is a function of γ of the form

$$U:\gamma\mapsto\frac{1}{1-\gamma_{\mathrm{eval}}}+\frac{1}{\gamma-1}+C_{m,T,S,A,\sigma,\delta}\,\frac{\gamma}{(1-\gamma)^{2}}\,,$$

⁶The theoretical bound may lead one to think that the average planning loss is higher due to the introduced bias, but in practice we do not observe that, which means our bound is pessimistic on the second-order terms.

where the first part $\frac{1}{1-\gamma_{\mathrm{eval}}}+\frac{1}{\gamma-1}$ is positive and monotonically decreasing on $(0, \gamma_{\mathrm{eval}})$, and the last term is positive and monotonically increasing on (0, 1). We simplify and scale this constant, keeping only problem-related terms: $C_t = \frac{1}{\sqrt{t}}\cdot\frac{\sigma + \frac{1}{\sqrt{m}}}{\sigma^2 m + 1} + \frac{\sigma^2 m}{\sigma^2 m + 1}\cdot\frac{1}{\sqrt{m}}$, which is of the order of the constant in equation 6. Strictly speaking, optimizing γ using the function U with constant C does not lead to a principled analytical value, because U is derived from an upper bound that may be loose and may not reflect the true shape of the loss w.r.t. γ, but we may use the resulting growth schedule to guide our choices online. In general, the existence of a strict minimum for U in (0, 1) is not always guaranteed: depending on the value of $C \approx C_{m,T,S,A,\sigma}$, the function may be monotonic and the minimum may be on the edges. We give explicit ranges in the proposition below, proved in Appendix E.

Proposition 1. *The existence of a strict minimum in (0, 1) is determined by $C = C_{m,T,S,A,\sigma,\delta}$ (which can be computed) as follows:*

$$\hat{\gamma}=\begin{cases}0&\text{if }C\geq1\\ 1&\text{if }C<1/2\\ \frac{1-C}{1+C}&\text{otherwise, i.e., if }1/2\leq C<1\end{cases}$$

We use these values as a guide. Typically, when T = 1 and m is small, the multiplicative term C is large and the bound is not really informative (concentration has not happened yet), so γ should be small, potentially close to but not equal to zero. As a heuristic, we propose to simply offset γ̃ by an additional γ0, such that the guidance discount factor is γ = min{1, γ0 + γ̃}, where γ0 should be reasonably chosen by the practitioner to allow for some short-horizon planning at the beginning of the interaction. Empirically, γ0 ∈ (0.25, 0.50) seems reasonable for our random MDP setting, as it corresponds to the empirical minima in Fig. 4(b).
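A minimal sketch of this heuristic (our illustration; γ0 is the practitioner-chosen offset):

```python
def gamma_schedule(C, gamma0=0.3):
    """Guidance discount gamma_tilde from Proposition 1, offset by gamma0:
    gamma = min(1, gamma0 + gamma_tilde)."""
    if C >= 1:
        gamma_tilde = 0.0
    elif C < 0.5:
        gamma_tilde = 1.0
    else:
        gamma_tilde = (1 - C) / (1 + C)
    return min(1.0, gamma0 + gamma_tilde)
```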
Figure 6: **Adapting the planning horizon during online meta-learning reduces planning loss. (a)** Planning with online meta-learning shows that all baselines outperform using the constant discount factor γeval. (b) A zoomed-in plot of the average planning loss over the progression of tasks T shows competitive performance, with the proposed schedule γ = f(m, αt, σt, T) beating best-fixed as more tasks are seen. The γ schedule γ = min{1, γ0 + γ̃}, using the upper bound as guidance, beats best-fixed and is very competitive with the dynamic-best baseline. (c) Using the upper bound to guide the schedule significantly outperforms γeval, shown for γ0 ∈ (0.25, 0.50). Error bars depict 1 standard error over 600 independent runs.

## 6.3 Empirical Validation

Next, we empirically test the proposed schedules for the adaptation of discount factors. We consider the setup described in Sec. 5 with 15 tasks in a 10-state, 2-action random MDP distribution of tasks with σ ≈ 0.01. In Fig. 6, we plot the planning loss obtained by POMRL with our schedules, a fixed γeval, and two strong baselines: *best fixed*, which considers the best fixed value of the discount over all tasks estimated in hindsight, and *dynamic best*, which considers the best choice if we had used the optimal γ* in each round, as in Fig. 4(c). It is important to note that *dynamic best* is a lower bound that we cannot outperform. We observe in Fig. 6(a) that γeval results in a very high loss, potentially corresponding to trying to plan too far ahead despite model uncertainty. Upon inspecting Fig. 6(b), we observe that the proposed γ = f(m, αt, σt, T) obtains similar performance to *best fixed* and is within the significance range of the lower bound. Our second heuristic, γ = min{1, γ0 + γ̃}, obtains similarly good performance, as seen in Fig. 6(b). Fig. 6(c) shows the effect of different values of γ0 in the prescribed range. These results provide evidence that it is possible to adapt the planning horizon as a function of the problem's structure (meta-learned task-similarity) and sample sizes. Adapting the planning horizon online in full generality remains an open problem and is beyond the scope of our work.

## 7 Discussion And Future Work

We presented connections between planning with inaccurate models and online meta-learning via a high-probability task-averaged regret upper bound on the planning loss that primarily depends on the task-similarity σ, as opposed to the entire search space Σ. Algorithmically, we demonstrate that the agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. Meta-learning the underlying task similarity and a good initialization of the transition model across tasks enables longer planning horizons.

**Beyond the tabular case:** Function approximation is at the heart of practical RL, so a natural question is how to extend our work to parametrized models. For linear MDPs, Müller & Pacchiano (2022) recently derived regret bounds in the fixed-horizon setting for an algorithm using meta-regularizers similar to ours. One question is whether this idea could be extended to infinite horizons, and further to non-linear, richer representations. Another, and perhaps deeper, question is around designing and evaluating better planning strategies. Should we revisit this line of work in light of the planning loss rather than the regret? **On- or Off-Policy Meta-Learning without a simulator:** Realistic problem settings in RL involve using sequentially learnt policies to collect data instead of a simulator. One direction could be to extend our approach to model-based RL algorithms via meta-gradient updates as in ARUBA or MAML, and seek regret guarantees induced by our concentration results. **Non-stationary meta-distribution:** Many real-world scenarios have (slow or sudden) drifts in the underlying distribution itself, e.g., weather. A promising future direction is to consider non-stationary environments where the optimal initialization varies over time.

## Acknowledgments

The authors would like to thank Zheng Wen, Andras Gyorgy, and anonymous reviewers for valuable feedback on a draft of this paper, and Sebastian Flennerhag, David Abel, and Benjamin Van Roy for insightful discussions during the course of this project. C. Vernade is funded by the Deutsche Forschungsgemeinschaft (DFG) under both the project 468806714 of the Emmy Noether Programme and under Germany's Excellence Strategy - EXC number 2064/1 - Project number 390727645. CV also thanks the International Max Planck Research School for Intelligent Systems (IMPRS-IS).

## References

Ron Amit, Ron Meir, and Kamil Ciosek. Discount factor as a regularizer in reinforcement learning. In *International Conference on Machine Learning*, pp. 269–278. PMLR, 2020.
Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, and Michael L Littman. Mitigating planner overfitting in model-based reinforcement learning. arXiv preprint arXiv:1812.01129, 2018. Maria-Florina Balcan, Mikhail Khodak, and Ameet Talwalkar. Provable guarantees for gradient-based meta-learning. In *International Conference on Machine Learning*, pp. 424–433. PMLR, 2019. Jonathan Baxter. A model of inductive bias learning. *Journal of artificial intelligence research*, 12:149–198, 2000. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, 2013. David Blackwell. Discrete dynamic programming. *The Annals of Mathematical Statistics*, pp. 719–726, 1962. Rich Caruana. Multitask learning. *Machine learning*, 28(1):41–75, 1997. Leonardo Cella, Alessandro Lazaric, and Massimiliano Pontil. Meta-learning with stochastic linear bandits. In *International Conference on Machine Learning*, pp. 1360–1370. PMLR, 2020. Nicolò Cesa-Bianchi, Pierre Laforgue, Andrea Paudice, and Massimiliano Pontil. Multitask online mirror descent. *arXiv preprint arXiv:2106.02393*, 2021. Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. *Advances in Neural Information Processing Systems*, 31, 2018. Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, and Massimiliano Pontil. Online-within-online meta-learning. Advances in Neural Information Processing Systems, 32, 2019. Shi Dong, Benjamin Van Roy, and Zhengyuan Zhou. Simple agent, complex environment: Efficient reinforcement learning with agent state. *arXiv preprint arXiv:2102.05261*, 2021. William Fedus, Carles Gelada, Yoshua Bengio, Marc G Bellemare, and Hugo Larochelle. Hyperbolic discounting and learning over multiple horizons. *arXiv preprint arXiv:1902.06865*, 2019. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International Conference on Machine Learning*, pp. 1126–1135. PMLR, 2017. Chelsea Finn, Aravind Rajeswaran, Sham Kakade, and Sergey Levine. Online meta-learning. In International Conference on Machine Learning, pp. 1920–1930. PMLR, 2019. Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, and Satinder Singh. Bootstrapped meta-learning. *arXiv preprint arXiv:2109.04504*, 2021. Sebastian Flennerhag, Tom Zahavy, Brendan O'Donoghue, Hado van Hasselt, András György, and Satinder Singh. Optimistic meta-gradients. In *Sixth Workshop on Meta-Learning at the Conference on Neural* Information Processing Systems, 2022. Vincent François-Lavet, Guillaume Rabusseau, Joelle Pineau, Damien Ernst, and Raphael Fonteneau. On overfitting and asymptotic bias in batch reinforcement learning with partial observability. *Journal of* Artificial Intelligence Research, 65:1–30, 2019. Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. The dependence of effective planning horizon on model accuracy. In *Proceedings of the 2015 International Conference on Autonomous Agents and* Multiagent Systems, pp. 1181–1189. International Foundation for Autonomous Agents and Multiagent Systems, 2015. Michael Kearns, Yishay Mansour, and Andrew Y Ng. A sparse sampling algorithm for near-optimal planning in large markov decision processes. *Machine learning*, 49(2):193–208, 2002. Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. 
Adaptive gradient-based meta-learning methods. *arXiv preprint arXiv:1906.02717*, 2019. Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In *European Conference on Machine Learning*, pp. 282–293. Springer, 2006. Jelena Luketina, Sebastian Flennerhag, Yannick Schroecker, David Abel, Tom Zahavy, and Satinder Singh. Meta-gradients in non-stationary environments. In *ICLR Workshop on Agent Learning in Open-Endedness*, 2022. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529, 2015. Robert Müller and Aldo Pacchiano. Meta learning MDPs with linear transition models. In *International Conference on Artificial Intelligence and Statistics*, pp. 5928–5948. PMLR, 2022. Junhyuk Oh, Matteo Hessel, Wojciech M Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder Singh, and David Silver. Discovering reinforcement learning algorithms. *arXiv preprint arXiv:2007.08794*, 2020. Marek Petrik and Bruno Scherrer. Biasing approximate dynamic programming with a lower discount factor. *Advances in Neural Information Processing Systems*, 21, 2008. Joelle Pineau. The machine learning reproducibility checklist. *arXiv*, 2019. Juergen Schmidhuber and Rudolf Huber. Learning to generate artificial fovea trajectories for target detection. *International Journal of Neural Systems*, 2(01n02):125–134, 1991. Terence Tao and Van Vu. Random matrices: universality of local spectral statistics of non-hermitian matrices. *The Annals of Probability*, 43(2):782–874, 2015. Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In *Learning to Learn*, pp. 3–17. Springer, 1998. Harm Van Seijen, Hado Van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and empirical analysis of expected sarsa. In *2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning*, pp. 177–184. IEEE, 2009. Zhongwen Xu, Hado van Hasselt, and David Silver. Meta-gradient reinforcement learning. *arXiv preprint arXiv:1805.09801*, 2018. Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado P van Hasselt, David Silver, and Satinder Singh. A self-tuning actor-critic algorithm. *Advances in Neural Information Processing Systems*, 33, 2020.

## A Additional Related Work

Discount Factor Adaptation. For almost all real-world applications, RL agents operate in environments far larger than the agent's capacity in terms of both computation and memory (e.g., the internet). Inevitably, it becomes crucial to adapt the planning horizon over time, as opposed to using a relatively longer planning horizon from the start (which can be both expensive and sub-optimal). This has been extensively studied in the context of planning with inaccurate models in reinforcement learning (Jiang et al., 2015; Arumugam et al., 2018). Dong et al. (2021) introduced a schedule for γ that we take inspiration from in Section 6.1. They consider a 'never-ending RL' problem in the infinite-horizon, average-regret setting in which the true discount is 1, but show that adopting a different, smaller discount value proportional to the time in the agent's life results in significant gains. Their focus and contributions are different from ours, as they are interested in asymptotic rates, but we believe the connection between our findings is an interesting avenue for future research.
Meta-Learning and Meta-RL, or *learning-to-learn*, has shown tremendous success in the online discovery of different aspects of an RL algorithm, ranging from hyper-parameters (Xu et al., 2018) to complete objective functions (Oh et al., 2020). In recent years, many deep RL agents (Fedus et al., 2019; Zahavy et al., 2020) have gradually used higher discounts, moving away from the traditional approach of using a fixed discount factor. However, to the best of our knowledge, existing works do not provide a formal understanding of why this helps agents perform better, especially across varied tasks. Our analysis is motivated by the aforementioned empirical success of adapting the discount factor. While there has been significant progress in meta-learning-inspired meta-gradient techniques in RL (Xu et al., 2018; Zahavy et al., 2020; Flennerhag et al., 2021), they are largely focused on *empirical analysis*, leaving a lot of room for in-depth insights about the source of the underlying gains.

## B Closed-Form Solution Of The Regularized Least Squares

We note that each $\hat{P}$ should be understood as $\hat{P}_{(s,a)}(s')$.

$$\nabla\ell(P|h)=-\frac{2}{m}\sum_{i=1}^{m}(X^{i}-P)+2\lambda(P-h)$$
$$\nabla\ell(P|h)=0\iff P(1+\lambda)=\frac{\sum_{i}X^{i}}{m}+\lambda h$$
$$\hat{P}_{(s,a)}(s^{\prime}|h)=\frac{1}{1+\lambda}\frac{\sum_{i}X^{i}}{m}+\frac{\lambda}{1+\lambda}h\tag{9}$$
$$=\alpha h+(1-\alpha)\frac{\sum_{i}X^{i}}{m}\quad\text{where }\alpha=\frac{\lambda}{1+\lambda}\tag{10}$$

Derivation of Mixing Rate αt: To choose αt, we want to minimize the MSE of the final estimator.

$$\mathbb{E}_{X\sim P^{t}}\left(\hat{P}^{t}-P^{t}\right)^{2}=\mathbb{E}_{X\sim P^{t}}\left(\alpha_{t}h_{t}+(1-\alpha_{t})\bar{P}^{t}-P^{t}\right)^{2}$$
$$=\mathbb{E}_{X\sim P^{t}}\left(\alpha_{t}(h_{t}-P^{t})+(1-\alpha_{t})(\bar{P}^{t}-P^{t})\right)^{2}$$
$$=\alpha_{t}^{2}(h_{t}-P^{t})^{2}+(1-\alpha_{t})^{2}\,\mathbb{E}_{X\sim P^{t}}\left(\bar{P}^{t}-P^{t}\right)^{2}$$

where the cross term $2\alpha_{t}(h_{t}-P^{t})(1-\alpha_{t})\,\mathbb{E}_{X\sim P^{t}}\big[\bar{P}^{t}-P^{t}\big]=0$ since $\mathbb{E}[\bar{P}^{t}]=P^{t}$. This is the classic bias-variance decomposition of an estimator, and we see that the choice of $h_t$ plays a role, as does the variance of $\bar{P}^{t}$, which is upper bounded by 1/m (because each $X^{i,t}$ is bounded in (0, 1)). For instance, for the choice $h_t = P^o$, by our structural assumption 3 we get:

$$\mathbb{E}_{X\sim P^{t}}\left(\hat{P}^{t}-P^{t}\right)^{2}\leq\alpha^{2}\sigma^{2}+(1-\alpha)^{2}\frac{1}{m},$$

and we minimize this upper bound in α to obtain the mixing coefficient with the smallest MSE: $\alpha^{*}=\frac{1}{\sigma^{2}m+1}$, or equivalently $\lambda^{*}=\frac{1}{\sigma^{2}m}$. Recall this is the within-task estimator's variance where we consider the true $P^o$. In practice, however, we meta-learn the prior, so for t > 1, $h_t = \hat{P}_{o,t} = \frac{1}{t-1}\sum_{j=1}^{t-1}\bar{P}^{j}$. Intuitively, as m and t grow large, $\hat{P}_{o,t} \to P^{o}$ and we retrieve the result above (we show this formally to prove Eq. 20 in the proof of our main theorem).
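A minimal Python sketch of the resulting shrinkage estimator with the MSE-optimal mixing rate (our illustration; the function name is ours):

```python
def shrinkage_model(samples_mean, h, sigma, m):
    """Regularized least-squares model estimate (Eq. 10) with the
    MSE-optimal mixing rate alpha* = 1 / (sigma^2 * m + 1):
    P_hat = alpha * h + (1 - alpha) * empirical mean."""
    alpha = 1.0 / (sigma**2 * m + 1.0)
    return alpha * h + (1.0 - alpha) * samples_mean
```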
To obtain a simple expression for αt, we minimize the "meta-MSE" of our estimator:

$$\mathbb{E}_{P^{t}\sim\mathcal{P}}\left(\hat{P}^{t}-P^{t}\right)^{2}=\alpha_{t}^{2}\,\mathbb{E}_{P^{t}\sim\mathcal{P}}\mathbb{E}_{X\sim P^{t}}\left(h_{t}-P^{t}\right)^{2}+(1-\alpha_{t})^{2}\,\mathbb{E}_{P^{t}\sim\mathcal{P}}\mathbb{E}_{X\sim P^{t}}\left(\bar{P}^{t}-P^{t}\right)^{2}$$
$$\leq\alpha_{t}^{2}\,\mathbb{E}_{P^{t}\sim\mathcal{P}}\left(\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}-P^{o}+P^{o}-P^{t}\right)^{2}+(1-\alpha_{t})^{2}\frac{1}{m}$$
$$\leq\alpha_{t}^{2}\sigma^{2}(1+1/t)+(1-\alpha_{t})^{2}\frac{1}{m},$$

where in the last inequality we upper bounded the variance of $\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}$ (the "denoised" $\hat{P}_{o,t}$) by $\sigma^{2}/t$, since each $P^{t}$ is bounded in $[P^{o}-\sigma, P^{o}+\sigma]$ by our structural assumption. Minimizing this last upper bound in αt leads to $\alpha_{t}=\frac{1}{\sigma^{2}(1+1/t)m+1}\leq\alpha^{*}$, with $\alpha_{t}\to\alpha^{*}$ as $t\to\infty$. This means that the uncertainty on the prior implies that its weight in the estimator is smaller, but it eventually converges at a fast rate to the optimal value (when the exact optimal prior is known). This inequality holds with probability 1 − δ because we use the concentration of $\hat{P}_{o,t}$ (see the proof of Theorem 1 below).

## C Online Estimation

Online Estimation of the Prior. At each task, the learner gets m interactions per state-action pair. At task t = 1, the learner can compute the prior based on the samples seen so far, i.e.:

$$\hat{P}_{o}^{t=1}(s^{\prime}|s,a)=\frac{\{\sum_{i=1}^{m}X_{i}\}_{t=1}}{m}$$

At subsequent tasks,

$$\hat{P}_{o}^{t=2}(s^{\prime}|s,a)=\frac{\{\sum_{i=1}^{2m}X_{i}\}_{t=1:2}}{2m}=\frac{1}{2}\Big(\frac{\{\sum_{i=1}^{m}X_{i}\}}{m}+\frac{\{\sum_{i=m+1}^{2m}X_{i}\}}{m}\Big)=\frac{1}{2}\Big(\hat{P}_{o}^{t=1}(s^{\prime}|s,a)+\frac{\{\sum_{i=m+1}^{2m}X_{i}\}}{m}\Big)\,.$$

Similarly,

$$\hat{P}_{o}^{t=3}(s^{\prime}|s,a)=\frac{\{\sum_{i=1}^{3m}X_{i}\}_{t=1:3}}{3m}=\frac{\{\sum_{i=1}^{2m}X_{i}\}_{t=1:2}+\{\sum_{i=2m+1}^{3m}X_{i}\}_{t=3}}{3m}=\frac{1}{3}\left(\frac{2\{\sum_{i=1}^{2m}X_{i}\}_{t=1:2}}{2m}+\frac{\{\sum_{i=2m+1}^{3m}X_{i}\}_{t=3}}{m}\right)$$
$$\implies\hat{P}_{o}^{t}(s^{\prime}|s,a)=\frac{1}{t}\left((t-1)\hat{P}_{o}^{t-1}(s^{\prime}|s,a)+\frac{\sum_{i=(t-1)m+1}^{tm}X_{i}}{m}\right)$$

Therefore,

$$\hat{P}_{o}^{t}(s^{\prime}|s,a)=\Big(1-\frac{1}{t}\Big)\hat{P}_{o}^{t-1}(s^{\prime}|s,a)+\Big(\frac{1}{t}\Big)\frac{\sum_{i=(t-1)m+1}^{tm}X_{i}}{m}\tag{11}$$

Online Estimation of the Variance. Similarly, we can derive the online estimate of the variance:

$$(\hat{\sigma}_{o}^{2})^{t}=(\hat{\sigma}_{o}^{2})^{t-1}+\frac{(X_{mt}-\hat{P}_{o}^{t-1})(X_{mt}-\hat{P}_{o}^{t})-(\hat{\sigma}_{o}^{2})^{t-1}}{t}\tag{12}$$

Since the above method is numerically unstable, we employ Welford's online algorithm for the variance estimate.
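A one-line sketch of the prior recursion in Eq. 11 (our illustration; the function name is ours):

```python
def update_prior(t, prior, batch_mean):
    """Online estimate of the mean model (Eq. 11):
    P_o^t = (1 - 1/t) * P_o^{t-1} + (1/t) * (mean of the t-th task's m samples)."""
    return (1.0 - 1.0 / t) * prior + (1.0 / t) * batch_mean
```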
## D Concentration Bounds And Proof Of Theorem 2

## D.1 Proof Of Theorem 2

We begin the proof by decomposing each term of the loss:

Lemma 1. *For a task t denoted by M, and its estimate denoted by $\hat{M}$, ∀s ∈ S,*

$$V_{P^{t},\gamma_{\mathrm{eval}}}^{\pi_{P^{t},\gamma_{\mathrm{eval}}}^{*}}(s)-V_{P^{t},\gamma_{\mathrm{eval}}}^{\pi_{\hat{P}^{t},\gamma}^{*}}(s)=\underbrace{V_{P^{t},\gamma_{\mathrm{eval}}}^{\pi_{P^{t},\gamma_{\mathrm{eval}}}^{*}}(s)-V_{P^{t},\gamma}^{\pi_{P^{t},\gamma_{\mathrm{eval}}}^{*}}(s)}_{A_{t}}+\underbrace{V_{P^{t},\gamma}^{\pi_{P^{t},\gamma_{\mathrm{eval}}}^{*}}(s)-V_{P^{t},\gamma_{\mathrm{eval}}}^{\pi_{\hat{P}^{t},\gamma}^{*}}(s)}_{B_{t}}\tag{13}$$

We are going to bound each term separately. The term $A_t$ corresponds to the bias constant due to using γ instead of γeval and was already bounded by Jiang et al. (2015):

Lemma 2 (Jiang et al. (2015)). *For any MDP $\hat{M}$ with rewards in $[0, R_{\max}]$, ∀π : S → A and γ ≤ γeval,*

$$V_{\hat{M},\gamma}^{\pi}\leq V_{\hat{M},\gamma_{\mathrm{eval}}}^{\pi}\leq V_{\hat{M},\gamma}^{\pi}+\frac{\gamma_{\mathrm{eval}}-\gamma}{(1-\gamma_{\mathrm{eval}})(1-\gamma)}R_{\max}$$

We denote $C(\gamma)=\frac{\gamma_{\mathrm{eval}}-\gamma}{(1-\gamma_{\mathrm{eval}})(1-\gamma)}R_{\max}$ and notice that $\sum_{t}A_{t}/T\leq C(\gamma)$, which bounds the first part of the average loss. To bound the second term $B_t$, we first use Lemma 3 (Equation 18) in Jiang et al. (2015) to upper bound

$$V_{P^{t},\gamma}^{\pi_{P^{t},\gamma_{\mathrm{eval}}}^{*}}(s)-V_{P^{t},\gamma_{\mathrm{eval}}}^{\pi_{\hat{P}^{t},\gamma}^{*}}(s)\leq2\max_{s\in S,\pi\in\Pi_{R,\gamma}}\big|V_{P^{t},\gamma}^{\pi}(s)-V_{\hat{P}^{t},\gamma}^{\pi}(s)\big|\tag{14}$$
$$\leq2\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|Q_{P^{t},\gamma}^{\pi}(s,a)-Q_{\hat{P}^{t},\gamma}^{\pi}(s,a)\big|\tag{15}$$

Using Lemma 4 from Jiang et al. (2015), and noticing that in our setting we do not estimate R so $\hat{R}=R$, $Q_{P^{t},\gamma}^{\pi}(s,a)=R(s,a)+\gamma\langle P^{t}(s,a,\cdot),V_{P^{t},\gamma}^{\pi}\rangle$ and $Q_{\hat{P}^{t},\gamma}^{\pi}(s,a)=R(s,a)+\gamma\langle\hat{P}^{t}(s,a,\cdot),V_{\hat{P}^{t},\gamma}^{\pi}\rangle$, we have

$$\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|Q_{P^{t},\gamma}^{\pi}(s,a)-Q_{\hat{P}^{t},\gamma}^{\pi}(s,a)\big|\leq\frac{1}{1-\gamma}\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|\gamma\langle\hat{P}^{t}(s,a,\cdot),V_{P^{t},\gamma}^{\pi}\rangle-\gamma\langle P^{t}(s,a,\cdot),V_{P^{t},\gamma}^{\pi}\rangle\big|\tag{16}$$

Notice that we are comparing the value functions of two different MDPs, which is non-trivial, and we leverage the result of Jiang et al. (2015). We refer the reader to the proof of Lemma 4 therein for the intermediate steps. Now summing over tasks, we have

$$\frac{\sum_{t}B_{t}}{T}\leq\frac{1}{T}\sum_{t=1}^{T}\frac{2}{1-\gamma}\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|\gamma\langle\hat{P}^{t}(s,a,\cdot),V_{P^{t},\gamma}^{\pi}\rangle-\gamma\langle P^{t}(s,a,\cdot),V_{P^{t},\gamma}^{\pi}\rangle\big|$$
$$\leq\frac{2\gamma}{1-\gamma}\,\frac{1}{T}\sum_{t=1}^{T}\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|\langle\hat{P}^{t}(s,a,\cdot)-P^{t}(s,a,\cdot),\,V_{P^{t},\gamma}^{\pi}\rangle\big|$$
$$\leq\frac{2\gamma}{1-\gamma}\,\frac{1}{T}\sum_{t=1}^{T}\sum_{s^{\prime}\in[S]}\max_{\substack{s\in S,a\in A,\\ \pi\in\Pi_{R,\gamma}}}\big|\hat{P}^{t}(s,a,s^{\prime})-P^{t}(s,a,s^{\prime})\big|\,\big\|V_{P^{t},\gamma}^{\pi}\big\|_{\infty}$$
$$\leq\frac{2R_{\max}\gamma}{(1-\gamma)^{2}}\,\frac{S}{T}\sum_{t=1}^{T}\max_{s,s^{\prime}\in S,\,a\in A}\big|\hat{P}^{t}(s,a,s^{\prime})-P^{t}(s,a,s^{\prime})\big|$$

where we upper-bounded the value function by $R_{\max}/(1-\gamma)$ and one sum over S by $S\times\max_{s^{\prime}\in S}$. Note that this step differs from Jiang et al. (2015) and allows us to boil the bound down to an average (worst-case) estimation error of the transition model. We finally upper bound the r.h.s. using Theorem 1, stated and proved below.

Remark 2. *In Jiang et al. (2015), the argument is slightly more direct and involves directly controlling the deviations of the scalar random variables $R(s,a)+\gamma\langle\hat{P}^{t}(s,a,\cdot),V_{\hat{P}^{t},\gamma}^{\pi}\rangle$, arguing that each is bounded and centered at $Q_{P^{t},\gamma}^{\pi}(s,a)$. This approach is followed by taking a union bound over the policy space $\Pi_{R,\gamma}$ and results in a factor $\log|\Pi_{R,\gamma}|$ under the square root. We could have followed this approach and obtained a similar result, but we made the alternative choice above as we believe it is informative. In our case, this factor is replaced (and upper bounded) by the extra S term. As a result, we lose the direct dependence on the size of the policy class, which is a function of γ and should play a role in the bound. In turn, and at the price of this extra looseness, we get a slightly more "exploitable" bound (see our heuristic for a gamma schedule in Section 6). It is easy and straightforward to adapt our concentration bound below to directly bound $R(s,a)+\gamma\langle\hat{P}^{t}(s,a,\cdot),V_{\hat{P}^{t},\gamma}^{\pi}\rangle-Q_{P^{t},\gamma}^{\pi}(s,a)$ as in Jiang et al. (2015), and one would obtain a similar bound as Eq. 6 without the factor S, but with an extra $\log|\Pi_{R,\gamma}|$.*

## D.2 Concentration Of The Model Estimator

To avoid clutter in the notation of this section, we drop the $(s,a,s')$ everywhere, as we did in Appendix B above. All definitions of $\hat{P}$ and $\hat{P}_{o}$ are as stated in the latter section.
Theorem 1. *With probability at least 1 − δ:*

$$\max_{s,a,s^{\prime}}|\hat{P}^{t}-P^{t}|\leq\frac{1}{\sigma^{2}m+1}\left(\sqrt{\frac{\log(\frac{6T}{\delta})\log(\frac{TS^{2}A}{\delta})\Big(\sigma^{2}+\frac{\Sigma\log^{2}(\frac{6T^{2}}{\delta})}{m}\Big)}{T}}+\sigma\sqrt{\frac{\log(\frac{3S^{2}AT}{\delta})}{T}}+2\sigma\right)+\frac{\sigma^{2}m}{2\sigma^{2}m+1}\sqrt{\frac{\Sigma\log(\frac{3TS^{2}A}{\delta})}{2m}}\tag{17}$$

For any t ∈ [T], (s, a, s′) and π ∈ $\Pi_{R,\gamma}$, define $\hat{P}^{t,*}=\alpha_{t}P^{o}+(1-\alpha_{t})\bar{P}_{m}^{t}$, the optimally regularized estimator (using the true unknown $P^{o}$ for each t). We have

$$\left|\hat{P}^{t}-P^{t}\right|\leq\left|\hat{P}^{t}-\hat{P}^{t,*}\right|+\left|\hat{P}^{t,*}-P^{t}\right|\leq\underbrace{\alpha_{t}\left|\hat{P}_{o,t}-P^{o}\right|}_{(A)}+\underbrace{(1-\alpha_{t})\left|\bar{P}_{m}^{t}-P^{t}\right|}_{(B)}+\underbrace{\alpha_{t}\left|P^{o}-P^{t}\right|}_{\leq\,\sigma\ \text{by assum.}}\tag{18}$$

Bounding Term A. Substituting the estimator $\hat{P}^{t}=(1-\alpha_{t})\frac{1}{m}\sum_{i=1}^{m}X_{i}+\alpha_{t}\hat{P}_{o,t}$,

$$A\leq\alpha_{t}\left(\Big|\hat{P}_{o,t}-\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}\Big|+\Big|\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}-P^{o}\Big|\right)\leq\frac{1}{\sigma^{2}m+1}\left(\underbrace{\Big|\hat{P}_{o,t}-\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}\Big|}_{A_{1}}+\underbrace{\Big|\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}-P^{o}\Big|}_{A_{2}}\right)$$

where αt is simply upper bounded by its initial value $\frac{1}{\sigma^{2}m+1}$ and we introduced the *denoised* (expected) average $\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}=\mathbb{E}_{P^{1},\ldots,P^{t-1}}\hat{P}_{o,t}$. Indeed, by assumption, $\mathbb{E}_{P\sim\mathcal{P}}\,\frac{1}{t-1}\sum_{j=1}^{t-1}P^{j}=P^{o}$ and the variance of this estimator is bounded by $\sigma^{2}/(t-1)$ by our structural assumption. This allows us to naturally bound $A_2$ using Hoeffding's inequality for bounded random variables: with probability at least 1 − δ/3,

$$\max_{s,a,s^{\prime}}A_{2}\leq\sigma\sqrt{\frac{\log(6S^{2}AT/\delta)}{T}}\tag{19}$$

We now bound $A_1$:

$$A_{1}=\left|\frac{1}{t-1}\sum_{j}(\bar{P}_{m}^{j}-P^{j})\right|$$

We note that the quantity inside $A_1$ is a martingale $M_{t}=\sum_{j=1}^{t-1}Z_{j}$, where $Z_{j}=\bar{P}_{m}^{j}-P^{j}$, such that each increment is bounded with high probability: for each j, $|Z_{j}|\leq c_{j}$ w.p. $1-\frac{\delta}{6T^{2}}$, where $c_{j}=\frac{\sqrt{\Sigma}\log(\frac{6T^{2}}{\delta})}{\sqrt{m}}$. Moreover, the differences $|Z_{j}-Z_{j+1}|$ are also bounded with high probability:

$$|Z_{j}-Z_{j+1}|\leq|P^{j}-P^{j+1}|+|\bar{P}^{j}-\bar{P}^{j+1}|\leq2\sigma+2c_{j}=D_{j}=2\left(\sigma+\frac{\sqrt{\Sigma}\log(\frac{6T^{2}}{\delta})}{\sqrt{m}}\right).$$

Then by (Tao & Vu, 2015, Prop. 34), for any ε > 0,

$$P\Bigg(\Big|\frac{M_{t}}{t-1}\Big|\geq\frac{\epsilon}{t-1}\sqrt{\sum_{j=1}^{t-1}D_{j}^{2}}\Bigg)\leq2\exp(-2\epsilon^{2})+\sum_{j=1}^{t-1}\frac{\delta}{6T^{2}}$$

Choosing $\epsilon=\sqrt{\frac{1}{2}\log(\frac{12T}{\delta})}$, we get

$$P\left(\Bigg|\frac{1}{t-1}\sum_{j=1}^{t-1}\bar{P}_{m}^{j}-P^{j}\Bigg|\geq\sqrt{\frac{\big(\sigma+\frac{\sqrt{\Sigma}\log(\frac{6T^{2}}{\delta})}{\sqrt{m}}\big)^{2}\log(\frac{6T}{\delta})}{T}}\right)\leq\frac{\delta}{6T}+\frac{\delta}{6T}=\frac{\delta}{3T}\,.$$

With a union bound as before, we get that with probability at least 1 − δ/3,

$$A_{1}\leq\sqrt{\frac{\log(\frac{6T}{\delta})\log(\frac{TS^{2}A}{\delta})\Big(\sigma^{2}+\frac{\Sigma\log^{2}(\frac{6T^{2}}{\delta})}{m}\Big)}{T}}\tag{20}$$

because $\big(\sigma+\sqrt{\tfrac{\Sigma}{m}}\big)^{2}\geq\sigma^{2}+\tfrac{\Sigma}{m}$.
By combining Eq. 20 and Eq. 19, we get:

$$\max_{s,a,s^{\prime}}\alpha_{t}|\hat{P}_{o,t}-P^{o}|\leq\frac{1}{\sigma^{2}m+1}\left(\sqrt{\frac{\log(\frac{6T}{\delta})\log(\frac{TS^{2}A}{\delta})\Big(\sigma^{2}+\frac{\Sigma\log^{2}(\frac{6T^{2}}{\delta})}{m}\Big)}{T}}+\sigma\sqrt{\frac{\log(3S^{2}AT/\delta)}{T}}\right)\tag{21}$$

## Bounding Term B

Term B is simply the concentration of the average of bounded variables $\bar{P}_{m}^{t}=\frac{1}{m}\sum_{i}X_{i}$, whose variance is bounded by 1. So by Hoeffding's inequality, and a union bound, with probability at least 1 − δ/3,

$$\max_{s,a,s^{\prime}}|\bar{P}_{m}^{t}-P^{t}|\leq\sqrt{\frac{\Sigma\log(3TS^{2}A/\delta)}{2m}}$$

To bound term B, it remains to upper bound 1 − αt for all t ∈ [T]:

$$1-\alpha_{t}=\frac{\sigma^{2}(1+\frac{1}{t})m}{\sigma^{2}(1+\frac{1}{t})m+1}\leq\frac{\sigma^{2}m}{2\sigma^{2}m+1}$$

We get that with probability 1 − δ/3,

$$\max_{s,a,s^{\prime}}(B)\leq\frac{\sigma^{2}m}{2\sigma^{2}m+1}\sqrt{\frac{\Sigma\log(3TS^{2}A/\delta)}{2m}}\tag{22}$$

## Combining All Bounds

To conclude, we combine the bounds on the terms in Eq. 18, replacing with Eq. 21 and Eq. 22, and with a union bound we get that with probability 1 − δ,

$$\max_{s,a,s^{\prime}}|\hat{P}^{t}-P^{t}|\leq\frac{1}{\sigma^{2}m+1}\left(\sqrt{\frac{\log(\frac{6T}{\delta})\log(\frac{TS^{2}A}{\delta})\Big(\sigma^{2}+\frac{\Sigma\log^{2}(\frac{6T^{2}}{\delta})}{m}\Big)}{T}}+\sigma\sqrt{\frac{\log(3S^{2}AT/\delta)}{T}}+2\sigma\right)+\frac{\sigma^{2}m}{2\sigma^{2}m+1}\sqrt{\frac{\Sigma\log(3TS^{2}A/\delta)}{2m}}\tag{23}$$

## Discussion

The bound has four main terms, respectively in $\tilde{O}(\sqrt{\tfrac{1}{mT}})$, $\tilde{O}(\sqrt{\tfrac{1}{T}})$, $\tilde{O}(\tfrac{1}{m})$ and $\tilde{O}(\sqrt{\tfrac{1}{m}})$, all scaled by factors depending on σ² and m. A first remark is that when m is large and T = 1, the last part in $\tilde{O}(\sqrt{\tfrac{1}{m}})$ dominates due to the factor $\frac{\sigma^{2}m}{\sigma^{2}m+1}\to1$, while the coefficient of the first two terms goes to 0 fast (in $1/(\sigma^{2}m)$).

## E Proof Of Proposition 1

We study the function U defined by

$$U:\gamma\mapsto\frac{1}{1-\gamma_{\mathrm{eval}}}+\frac{1}{\gamma-1}+C_{m,T,S,A,\sigma,\delta}\,\frac{\gamma}{(1-\gamma)^{2}}\,,$$

where γeval is a fixed constant and $C:=C_{m,T,S,A,\sigma,\delta}$ is seen as a parameter whose value controls the general "shape" of the function. We differentiate with respect to γ:

$$\frac{dU}{d\gamma}=-\frac{-C(\gamma+1)+(1-\gamma)}{(1-\gamma)^{3}}.$$

We see that the sign of the derivative is affected by the value of the parameter C:
- If ∀γ ∈ (0, 1), $-C(\gamma+1)+(1-\gamma)>0$, which holds iff $-2C+1>0$, i.e., C < 1/2, then U is monotonically decreasing on (0, 1) and the minimum is reached at γ = 1.
- Similarly, if ∀γ ∈ (0, 1), $-C(\gamma+1)+(1-\gamma)<0$, which holds iff C ≥ 1, then U is monotonically increasing on (0, 1) and the minimum is reached at γ = 0.
- Finally, if C ∈ (1/2, 1), the minimum exists inside (0, 1) and is reached for

$$-C\gamma-C+1-\gamma=0\iff\gamma=\gamma^{*}=\frac{1-C}{1+C}$$

## F Experiments: Implementation Details, Ablations & Additional Results

## F.1 Implementation Details

We consider a Dirichlet distribution of tasks such that all tasks t ∈ [T], $P^{t}\sim\mathcal{P}$, are centered at some fixed mean $P^{o}\in\Delta_{S}^{S\times A}$, as shown in Figure F1. The mean of the task distribution $P^{o}$ is chosen as a sampled random MDP, and the variance of this distribution is determined such that $\|P_{s,a}^{t}-P_{s,a}^{o}\|_{\infty}\leq\sigma<1$. Next, we compute the variance of this distribution as $\sigma_{i}=\frac{\tilde{\alpha}_{i}(1-\tilde{\alpha}_{i})}{\alpha_{0}+1}$, where $\tilde{\alpha}_{i}=\frac{\alpha_{i}}{\alpha_{0}}$ and $\alpha_{0}=\sum_{i}^{S}\alpha_{i}$.
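For concreteness, a small Python helper implementing the formula above (our illustration; the function name is ours):

```python
import numpy as np

def dirichlet_sigma(alpha):
    """Per-coordinate variance of a Dirichlet(alpha) prior, as stated in F.1:
    sigma_i = tilde_a_i * (1 - tilde_a_i) / (alpha_0 + 1), tilde_a_i = alpha_i / alpha_0."""
    alpha = np.asarray(alpha, dtype=float)
    alpha_0 = alpha.sum()
    tilde = alpha / alpha_0
    return tilde * (1.0 - tilde) / (alpha_0 + 1.0)
```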
Figure F1: **Dirichlet Task Distribution** for S = 3 states, with Dir(α) where α = [15, 15, 15], resulting in a task-similarity measure of approximately σ = 0.0129.

## F.2 Ablations

We also run ablations with **Aggregating(α = 1)**, a naive baseline that simply ignores the meta-RL structure and plans assuming there is a single task. We observe in Fig. F2 that the aggregating baseline works on par with our method POMRL, which is intuitive when the tasks are strongly related to each other, as in this case. However, as the underlying task structure decreases, we note that Aggregating(α = 1), acting as though there is one single task, is problematic and suffers from a non-vanishing bias: for each new task there is on average an error that does not go to zero. More importantly, the Aggregating(α = 1) baseline cannot have the same guarantees as POMRL and ada-POMRL.

Figure F2: **Ablations for the efficacy of POMRL and ada-POMRL for varying task-similarity**, depicting the effect of the task-similarity parameter σ for a small fixed amount of data m = 5 available at each round. We run another baseline called Aggregating (orange) that simply ignores the meta-RL structure and acts as if it is all one single task. In the presence of strong structure, meta-learning the shared structure alongside a good model initialization leads to most gains, and even naively aggregating the tasks' transitions might seem to work well. However, such a naive method is not reliable as the underlying task similarity decreases - the learner struggles to cope with new unseen tasks which differ significantly, and the planning loss does not improve. Error bars represent 1 standard deviation of uncertainty across 100 independent runs.

Figure F3: **Effect of m and T on the Average Regret Upper Bound on Planning:** for a fixed value of task similarity σ, the bound depends on the number of samples per task m and the number of tasks T. (a) For m = T, a smaller loss is obtained with a very small discount factor. This implies that with a lot of uncertainty it is not interesting to plan too far ahead. (b) For m ≫ T, each task has enough samples to inform itself, resulting in slightly larger effective discount factors. Not a lot is gained in this scenario from meta-learning. (c) m ≪ T is the most interesting case, as the samples seen in each individual task are very limited due to small m. However, there are many more tasks, resulting in large gains from leveraging shared structure across tasks.

## F.3 Additional Experiments

We examine more properties of ada-POMRL, namely the **Effect of m and T on the Planning Loss** in Fig. F3, **Individual Baselines' Performance** in Fig. F4, and **Varying State Space |S|, m, and T** in Fig. F5.

Figure F4: **Planning with Online Meta-Learning - Baselines.** (a) **ada-POMRL**: meta updates include learning $P^o$, σ, and α as a function of tasks. (b) **Oracle Prior Knowledge** considers the optimal α, the true mean of the task distribution $P^o$, and the actual underlying task similarity σ as known a priori. (c) **Without Meta-Learning** estimates the transition kernel in each round without any meta-learning. All baselines are obtained with T = 15 tasks and m = 5 samples per task.

## F.4 Reproducibility

We follow the reproducibility checklist by Pineau (2019) to ensure this research is reproducible.
For all algorithms presented, we include a clear description of the algorithm, and source code is included with these supplementary materials. For any theoretical claims, we include a statement of the result, a clear explanation of any assumptions, and complete proofs of any claims. For all figures that present empirical results, we include the empirical details of how the experiments were run, a clear definition of the specific measure or statistics used to report results, and a description of results with the standard error in all cases.

## F.5 Computing And Open Source Libraries

All experiments were conducted using Google Colab instances.⁷

⁷https://colab.research.google.com/

Figure F5: **Varying the size of the state space S, the number of samples per task m, and the number of tasks T, on the Task-averaged Regret Upper Bound on Planning:** for a fixed value of task similarity σ, we note that despite a larger state space we observe the same effects, i.e., (a,d,g) for m = T, a smaller loss is obtained with a very small discount factor, i.e., there is a lot of uncertainty and an inability to plan too far ahead; (b,e,h) for m ≫ T, each task has enough samples to inform itself, resulting in slightly larger effective discount factors, and not a lot is gained in this scenario from meta-learning; (c,f,i) m ≪ T is the most interesting case, as the samples seen in each individual task are very limited due to small m, and meta-learning has the most significant gains in this case by leveraging the structure across tasks. Results are averaged over 20 independent runs and error bars represent 1 standard deviation.
Review 1:
Summary: This paper considers an environment where a reinforcement learning (RL) agent faces a sequence of related tasks. The objective is to use an online meta-learning approach that learns and exploits the underlying similarities across tasks to effectively adapt to every new task (through planning in a model-based approach). The paper proposes a practical algorithm for the stated objective and theoretically provides a task-averaged regret upper-bound on the planning loss. This bound shows that the regret decreases with the number of tasks and the associated task similarity. Further, the paper provides two heuristics for selecting slowly increasing discount factors in this online meta-learning approach. Empirically, the paper demonstrates that meta-learning an initialization of the transition model and a distribution (underlying similarity) across tasks leads to improved accuracy for planning in each new (unseen) task, and to planning ahead (i.e., planning using long horizons).
Strengths and Weaknesses:
Strengths: The paper is well-written and clear. The paper presents its contributions clearly and does a nice job in situating the work in the literature. The problem addressed is important and fundamental, with a large set of possible real-world applications (though it would be nice for the authors to discuss some of them, preferably right in the introduction where they are motivating the work).
Weaknesses: I think the paper has some very stringent assumptions that reduce the scope of the work. Assumptions of a stationary task distribution, similarity of tasks, the tabular setting, and assumptions of known rewards (in several places) are very limiting. The authors could have provided some detailed discussion regarding relaxing some of these assumptions, at least empirically. Nonetheless, I agree that this is a novel theoretical effort and such limiting assumptions may be required in establishing the problem framework and providing the first line of results.
Requested Changes: I only have some minor comments on the paper. I encourage the authors to incorporate as many of these as possible. I would like to recommend acceptance of the paper, and the requested changes here are not critical to changing my recommendation.
1) Assumptions: The paper introduces several assumptions inline in the text at various places, which hurts the flow of reading and causes some misunderstandings. For example, in Section 3.2, the authors state that "we shall assume throughout that the rewards are known", and in Section 2, "we assume that we have access to a simulator". I recommend that the authors list their set of assumptions formally early on and refer back to these assumptions in the appropriate places in the text.
2) Throughout the paper it would be useful to have some real-world examples that motivate the problem setting. I think there can be some immediate applications to domains such as robotics. The authors can use such examples to provide intuition in several places in the paper.
3) I had trouble understanding Equation 4 and connecting it back to the text around it. I would encourage the authors to explain each term and variable used in this equation precisely, along with the necessary intuitions.
4) In several places, the authors use the term "task similarity structure" or "underlying structure". The meaning of "structure" here was unclear to me. It would be helpful to define/introduce this term early on (or during the first usage) in the paper.
5) In Figure 1, I did not get the meaning of the (blue) circles and arrows in the left-most figure. It would be helpful for the authors to clarify that in the description of this figure.
6) Minor: Section 2: consequently to define -> consequently we define; Section 4: comes comes -> comes; Section 4: will gives a -> will give a; Section 5: several places: dynamics model -> the dynamics model; Section 5.1: estimator -> an estimator.
Broader Impact Concerns: No concerns.
==================================================
Review 2:
Summary: The paper develops model-based methods in meta-reinforcement learning. Its contributions are fourfold:
- they conceptualize and formalize the problem of planning under model uncertainty
- they prove regret bounds in the above formalization
- they propose an algorithm to solve the meta-reinforcement learning setting. In particular, they present versions that learn the task-similarity parameter and the planning horizon
Strengths and Weaknesses:
Strengths:
- the paper is well-executed. The write-up is very clean, the structure is correct, and the research questions are well-stated
- the problem studied in the paper, planning in a meta-reinforcement learning setting, is of significant interest
- conceptualization and formalization of the problem, even if unsurprising, is valuable for the community
- the provided regret bounds further solidify the theoretical contribution
- the experimental part nicely illustrates the theoretical part
Weaknesses:
- the whole analysis is concerned with the tabular setting, and even where it is most useful, the 'size' of the environment is modest. This makes it hard to assess the long-term significance. I understand that providing any theory for the case when approximators are used is probably very hard. Nevertheless, even a simple empirical evaluation would be of value.
Requested Changes: Discuss the expected impact of the results on the field. For example, running empirical evaluations in a non-tabular setting (even simplified) might, in my opinion, significantly strengthen the paper.
Broader Impact Concerns: n/a
==================================================
Review 3:
Summary: In this work, the authors study online meta-learning. Specifically, they study a problem in which an agent sequentially tackles related tasks. The agent uses data acquired from previous tasks to learn an initial guess of the current task model. The proposed analysis is inspired by the Average Regret-Upper-Bound Analysis (ARUBA) framework, adapted to the meta-learning scenario. Regret upper bounds are derived and numerical results on a synthetic experiment are provided. Source code for the experimental evaluation is provided.
Strengths and Weaknesses:
Strengths. The paper is well written and clearly positioned with respect to the state of the art. The topic addressed is relevant to the community and the results introduced in the paper appear to be correct.
Weaknesses. Some of the assumptions in the paper could use stronger motivation. Not saying that they are unreasonable, but it would help to clarify their necessity to the reader and how to overcome them in practice (e.g., the assumption of known rewards, access to a simulator of transitions, etc.). Numerical evaluations are provided for synthetic data. It would be interesting to see the method applied to real datasets instead of a synthetic experiment.
Requested Changes: I find the paper to be of reasonable quality.
I'd suggest some of the modifications mentioned in "Weaknesses", but overall I found this work to be well-written and worthy of publication.
Broader Impact Concerns: None.
==================================================
Metareview:
Recommendation: Accept as is
Comment: The paper proposes a new model-based meta-learning technique for RL. While the technique is only demonstrated in the context of synthetic tabular domains, it lends itself to a strong theoretical analysis. More specifically, regret bounds are provided under suitable assumptions. The applicability of those assumptions was questioned by the reviewers, but they are common for a theoretical analysis. Overall, this work makes several important theoretical contributions that advance the state of the art of meta-RL and our understanding of the various factors that influence performance in meta-RL.
==================================================
# Summary Statistic Privacy In Data Sharing Anonymous authors Paper under double-blind review ## Abstract Data sharing between different parties has become increasingly common across industry and academia. An important class of privacy concerns that arises in data sharing scenarios regards the underlying distribution of data. For example, the total traffic volume of data from a networking company can reveal the scale of its business, which may be considered a trade secret. Unfortunately, existing privacy frameworks (e.g., differential privacy, anonymization) do not adequately address such concerns. In this paper, we propose *summary statistic privacy*, a framework for analyzing and protecting these summary statistic privacy concerns. We propose a class of quantization mechanisms that can be tailored to various data distributions and statistical secrets, and analyze their privacy-distortion tradeoffs under our framework. We prove corresponding lower bounds on the privacy-utility tradeoff, which match the tradeoffs of the quantization mechanism under certain regimes, up to small constant factors. Finally, we demonstrate that the proposed quantization mechanisms achieve better privacy-distortion tradeoffs than alternative privacy mechanisms on real-world datasets. ## 1 **Introduction** Data sharing between organizations is an important driver for many use cases, including data-driven product development (Lee & Whang, 2000), industry-wide coordination efforts (e.g., cybersecurity (Choucri et al., 2016), law enforcement (Jacobs & Blitsa, 2008)), and the creation of benchmarks for evaluating scientific progress (Deng et al., 2009; Reiss et al., 2011; Luo et al., 2021). For example, network traces shared from customers to networking vendors enable vendors to debug and improve products (Yin et al., 2022; cai). Medical data shared between hospitals (Esteban et al., 2017; Warren et al., 2019) enables them to develop new machine-learning-based diagnosis algorithms collaboratively (Chaibub Neto et al., 2019). Data shared by researchers allow their research to be reproducible by others (Deng et al., 2009; Lin et al., 2020). In recent years, data sharing has grown into its own sub-industry (e.g., data marketplaces on platforms such as Databricks and Snowflake). Shared data can take many forms, including processed or scrubbed raw data (Reiss et al., 2012; Google, 2018; Commission, 2018; Warren et al., 2019), aggregate analytics, and/or synthetic data (Liu & Wu, 2022). However, *summary statistics* of the shared data may leak sensitive information (Suri & Evans, 2021; Suri et al., 2023). For example, *property inference* attacks allow an attacker to infer properties about the individuals in the training dataset of a released machine learning model (Ateniese et al., 2015; Ganju et al., 2018; Zhang et al., 2021; Mahloujifar et al., 2022; Chaudhari et al., 2022). A video content provider that shares video session data may wish to hide the total or mean traffic volume, which could be used to infer the company's total revenue (Manousis et al., 2021). A cloud provider that shares cluster performance traces may not want to reveal the proportions of different server types that the cloud provider owns, which are regarded as business secrets (Lin et al., 2020). Note that this information (total/mean traffic volume, proportions of data types) cannot be inferred from any single record, but is inherent to the overall data distribution (or the aggregate dataset). 
Unfortunately, existing privacy metrics and privacy-preserving data sharing algorithms do not adequately address these *summary statistic privacy concerns*. They either focus on protecting the privacy of individual records in a database (e.g., differential privacy (Dwork et al., 2006), anonymization (Reiss et al., 2012), sub-sampling (Reiss et al., 2012)), or are designed for algorithms that release low-dimension statistical queries of the dataset instead of the entire dataset (Zhang et al., 2022; Makhdoumi et al., 2014; Issa et al., 2019).

![1_image_0.png](1_image_0.png)

Figure 1: Problem overview. The data holder produces released data and wants to hide *statistical secrets* of the original data. The data user requires that the utility of the released data be good. The attacker (who could be the data user) also observes the released data and wants to guess the *secrets* of the original data. Note that we focus on secrets about the *underlying distribution* (e.g., the mean, quantile, or standard deviation of a specific data *column*). In comparison, many existing frameworks (e.g., differential privacy (Dwork et al., 2006), anonymization (Reiss et al., 2012), sub-sampling (Reiss et al., 2012)) protect information from individual samples (rows). Our end goal is to provide a *summary statistic privacy toolbox* for data holders to use. The summary statistic privacy toolbox contains data release mechanisms for a set of pre-defined secrets and data distributions. Data holders can choose the mechanism according to the secret that they want to hide and the closest data distribution.

For example, differential privacy (DP) (Dwork et al., 2006), a *de facto* privacy definition, evaluates how much individual samples influence the final output of an algorithm. Assume that a video content provider has a dataset of daily page views that they want to release, and they are concerned about the *mean* page views (as this implies the revenue). A typical DP algorithm (Wasserman & Zhou, 2010) would add noise (e.g., Laplace) to the individual page view counts. This process does not change the *mean* of the entire data in expectation. Indeed, DP mechanisms have been shown not to protect summary statistics (Ateniese et al., 2015) (in fact, they are designed to preserve them). See more discussion in §2.2. Hence, a privacy framework is needed for *defining, analyzing, and protecting summary statistic privacy* concerns in data sharing settings. Early work in this space has aimed to obfuscate only between two possible data distributions (Suri & Evans, 2021; Suri et al., 2023), or has been implicitly designed for low-dimensional query release (Zhang et al., 2022). In this paper, we aim to design a general summary statistic privacy framework that can apply to general data release settings. At a high level, the proposed framework works as follows (detailed formulation in §3). A data holder first chooses one or more secrets, which are mathematically defined as functions of the data holder's data distribution. For example, a video analytics company might choose the mean daily observed traffic as a secret quantity. Then, the data holder obfuscates their data according to some *mechanism* and releases the output (Fig. 1). Our framework quantifies the privacy of this mechanism by analyzing the probability that a worst-case attacker can infer the data holder's true secret after observing the output.
To capture the utility of released data, we define the *distortion* of a mechanism as the worst-case distance (where the distance metric can be chosen by the data holder or data user) between the original and released data distributions. Our goal is to design data release mechanisms that control tradeoffs between privacy and distortion. ## 1.1 **Contributions** Our contributions are as follows. - **Formulation (§3):** We formalize the notion of summary statistic privacy and propose privacy and distortion metrics tailored to data sharing applications. Intuitively, we define privacy as a worst-case adversary's probability of guessing a secret function of the underlying data distribution. We define distortion as the worst-case distributional distance1 between the original data distribution and the released, perturbed data distribution. Precise definitions are in §3. - **Mechanism design (§5):** We propose a class of mechanisms that achieve summary statistic privacy called *quantization mechanisms*, which intuitively quantize a data distribution's parameters2 into bins. We present a *sawtooth technique* for theoretically analyzing the quantization mechanism's privacy tradeoff under various types of secret functions and data distributions (§5.3). Intuitively, the sawtooth technique exploits the geometry of the distribution parameter(s) to divide the parametric space into two regions: one in which privacy risk is small and analytically tractable, and another in which privacy risk can be high, but which occurs with low probability. The method is named after the boundary of the tractable region, which has a sawtooth shape. We use the sawtooth technique to analyze the quantization mechanism under various secret functions and data distributions (summary in Table 1). For most of these case studies, we provide concrete upper bounds characterizing the exact privacy-distortion tradeoff under a family of priors over the true data distribution parameters. For the remaining case studies, we provide a dynamic programming algorithm that efficiently and numerically instantiates the quantization mechanism. - **Lower bounds (§4):** We derive general lower bounds on distortion given a privacy budget for any mechanism. These bounds depend on both the secret function and the data distribution. We then instantiate the lower bounds for each of our case studies to show that for the case studies we analyze theoretically in Table 1, our proposed quantization mechanism achieves a privacy-distortion tradeoff within a small constant factor of optimal (usually 3) in the regime where quantization bins are small relative to the overall support set of the distribution parameters. - **Empirical evaluation (§7):** We give empirical results showing how to use summary statistic privacy to release a real dataset, and how to evaluate the corresponding summary statistic privacy metric. We show that the proposed quantization mechanism achieves better privacy-distortion tradeoffs than other natural privacy mechanisms. This paper is only a first step in the study of summary statistic privacy. Our formulation has many limitations and leaves many questions unanswered (§9). Still, we hope it will draw attention to what we believe to be an important privacy concern and research question. ## 2 **Motivation And Related Work** In this section, we discuss motivating scenarios where summary statistic privacy is a concern (§2.1), and why existing privacy frameworks are not able to capture and protect summary statistic privacy (§2.2).
## 2.1 **Motivating Scenarios**

Whether sharing data models (e.g., classifiers (Ateniese et al., 2015; Ganju et al., 2018; Mahloujifar et al., 2022; Chaudhari et al., 2022), generative models (Zhou et al., 2021)) or datasets (e.g., cluster traces (Wilkes, 2020; Cortez et al., 2017; Luo et al., 2021), video session data (Jiang et al., 2016; Manousis et al., 2021), network flow datasets (Zeng, 2017)), data sharing can leak *sensitive global properties of the data distribution*. Examples include:

S1. Business strategies can be leaked from data. As mentioned before, cluster trace datasets (Wilkes, 2020; Cortez et al., 2017; Luo et al., 2021) are very useful in the systems community. However, cluster traces can reveal strategic enterprise choices, such as the fraction of server types in use (Lin et al., 2020). Such information reflects the company's business strategy and should be kept secret from competitors and vendors. Note that simply removing the server type from the dataset is not a good option, as server type is an important feature for the downstream applications of the dataset (e.g., for predicting future CPU/memory usage).

1 In this work, we consider Wasserstein-1 distance and total variation distance (§3), though our formulation can accommodate other distance metrics. 2 We assume data distributions are drawn from a parametric family; more details in §3.

![3_image_0.png](3_image_0.png)

Figure 2: An illustrative example of why some privacy frameworks are not suitable for summary statistic privacy. Assume that we want to protect the *mean* of the data. A typical differential privacy algorithm (Wasserman & Zhou, 2010) would add zero-mean noise (e.g., Laplace noise) to the bins. Anonymization (Wilkes, 2020) removes sensitive features (e.g., names of users) from data but leaves other features the same. Sub-sampling (Reiss et al., 2012) down-samples the dataset. None of these mechanisms changes the expected mean of the data, and thus an attacker can still guess the mean with a small (expected) error. See §2.2 for the discussion of other privacy mechanisms.

S2. Business scales can be leaked from data. For example, networking datasets that contain traffic measurements or raw records are another common type of data (e.g., the Meta flow trace dataset (Zeng, 2017), the Wikipedia Web Traffic Dataset (Google, 2018), the video session data used in Manousis et al. (2021)). While being useful, the total (or mean) traffic volume in these datasets (e.g., the number of transferred bytes in a network, the number of page views of websites, viewership values of video delivery systems) can reveal the scale of the business, such as the number of users and the revenue of the company. Indeed, due to these concerns, it is common practice to hide the actual traffic volumes of sensitive proprietary datasets even in research papers (e.g., removing the actual traffic values in Manousis et al. (2021)).

S3. System capabilities can also be revealed. For instance, the cluster trace datasets mentioned before (Wilkes, 2020; Cortez et al., 2017; Luo et al., 2021) contain the CPU and memory usage of servers. It is likely that the maximum value of memory usage is close to the memory size of the system. Such system capabilities could be used by adversaries to launch attacks (e.g., denial-of-service attacks). Due to these concerns, some companies use customized techniques to obfuscate system capabilities before data release (e.g., normalizing system usage (Wilkes, 2020)).

S4.
Company sentiment or performance can be leaked [Example 1 from Mahloujifar et al. (2022)]. A company releases a spam classifier trained on company emails. However, using property inference, an attacker is able to infer the aggregate sentiment of those emails (positive/negative). If the fraction of negative emails is high, it suggests that company morale is low, which is sensitive.

## 2.2 **Existing Privacy Frameworks Are Insufficient For Summary Statistic Privacy**

Most existing privacy frameworks or mechanisms are not suitable for summary statistic privacy because they either focus on protecting individual records in the data (e.g., differential privacy (Dwork et al., 2006), anonymization (Wilkes, 2020), sub-sampling (Reiss et al., 2012)) (Fig. 1), or are designed for algorithms that release low-dimension statistical queries of the dataset instead of the entire dataset (e.g., attribute privacy (Zhang et al., 2022), maximal leakage (Issa et al., 2019), privacy funnel (Makhdoumi et al., 2014)). We divide the relevant work into three categories: approaches that are based on indistinguishability over candidate distributions or inputs, industry heuristics, and information-theoretic approaches.

## 2.2.1 **Indistinguishability Approaches**

This class of approaches provides privacy by ensuring that pairs of input datasets or data distributions are indistinguishable. These approaches are typically motivated by differential privacy (Dwork et al., 2006). Differential privacy (DP) (Dwork et al., 2006) is one of the most popular privacy notions. A random mechanism $\mathcal{M}$ is $(\epsilon, \delta)$-differentially-private if for any neighboring datasets $D_0$ and $D_1$ (i.e., $D_0$ and $D_1$ differ in one sample), and any set $S \subseteq \mathit{range}(\mathcal{M})$, we have $\mathbb{P}\left(\mathcal{M}(D_0) \in S\right) \leq e^{\epsilon} \cdot \mathbb{P}\left(\mathcal{M}(D_1) \in S\right) + \delta$. In our data sharing scenarios, we could apply the DP framework by treating $\mathcal{M}$ as the data release mechanism that reads the original dataset and outputs the released dataset. However, the privacy concerns of DP and our suggested framework are completely different: we aim to hide functions of *a distribution*, while DP aims to hide whether *any given sample* contributed to the shared data. For example, suppose that we want to release the data in Fig. 2 while protecting its mean. A typical differential privacy algorithm (Wasserman & Zhou, 2010) would add zero-mean noise (e.g., Laplace noise) to the bins. This process does not change the expected mean of the data, and therefore, the attacker is still able to derive an unbiased estimator of the mean from the released data. Indeed, we will show through experiments in §7 that this DP mechanism is not effective in hiding statistical secrets. There exist generalizations of DP for protecting more general random variables (besides individual samples) (Chatzikokolakis et al., 2013). However, a DP guarantee strong enough that any two datasets with different secrets are indistinguishable based on the released dataset implies that the released dataset has bad utility. For example, suppose that the original distributions are Gaussian distributions $\mathcal{N}(\mu, \sigma^2)$, and the secret is the mean of the distribution µ. Two distributions with different secrets could have very different $\sigma^2$. To make any two distributions with different secrets (e.g., $\mathcal{N}(0, 1)$ and $\mathcal{N}(1, 100)$) indistinguishable based on the released dataset, we must destroy information about the true σ.
While relaxations like metric differential privacy may help (Chatzikokolakis et al., 2013), they also introduce new challenges, e.g., how to choose the metric function that maps dataset distance to a privacy parameter. Attribute privacy (Zhang et al., 2022) considers a privacy concern similar to ours: it tries to protect a function of a sensitive column in the dataset (named *dataset attribute privacy*) or a sensitive parameter of the underlying distribution from which the data is sampled (named *distribution attribute privacy*). Attribute privacy addresses the previously-mentioned shortcomings of vanilla DP under the *pufferfish privacy framework* (Kifer & Machanavajjhala, 2014). Roughly, an algorithm is said to satisfy dataset/distribution attribute privacy if for any two different ranges of a secret function value (e.g., the fraction of server type A is in [0.1, 0.2) or [0.2, 0.3)), the distributions of the algorithm output do not differ too much. Attribute privacy constrains the set of candidate distributions a priori, which prevents the problem we discussed earlier, in which vanilla DP requires the addition of unbounded noise (Zhang et al., 2021). Although their privacy concerns are highly related to ours, attribute privacy focuses on algorithms that output *a statistical query of the dataset* instead of the entire dataset. We could apply their framework to analyze full-dataset-sharing algorithms, but due to the high dimensionality of the dataset, attribute privacy needs to add substantial noise, which harms utility (§7). Distribution privacy (Kawamoto & Murakami, 2019) is a closely related notion, which releases a full data distribution under DP-style indistinguishability guarantees. Roughly, for any two input distributions $\theta_0$ and $\theta_1$ from a pre-defined set of candidate distributions, a distribution-private mechanism outputs a distribution $\mathcal{M}(\theta_i)$ such that for any set S in the output space, we have $\mathbb{P}[\mathcal{M}(\theta_i) \in S] \leq e^{\epsilon}\, \mathbb{P}[\mathcal{M}(\theta_{1-i}) \in S] + \delta$. This formulation is stronger than what we need; by obfuscating the whole distribution, we inherently protect the private information in question. However, mechanisms that protect distribution privacy may add more noise than is required to protect only the select secret(s). A recent work by Chen and Ohrimenko (Chen & Ohrimenko, 2022) proposes mechanisms for distribution privacy, and we observe exactly this trend experimentally in §7; the noise added by the mechanisms in Chen & Ohrimenko (2022) is larger than what we require with summary statistic privacy. Distribution inference (Suri & Evans, 2021; Suri et al., 2023) is very closely related to our goals. As in our setting, the data holder is trying to protect a secret function of its data (or data distribution). To this end, it sets up a hypothesis test in which the adversary must choose whether the released model (or data) comes from one of two fixed data distributions, which are derived from an underlying public data distribution. These two distributions are assumed to be known both to the attacker and the defender. In many practical settings, it may be difficult to establish a reasonable pair of candidate distributions; moreover, this approach is not directly aligned with the data holder's goal, which is simply to hide some secret quantities - not to render the full data distribution indistinguishable from another (the latter is closer to distribution privacy).
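As a concrete illustration of why record-level mechanisms fail to hide distributional secrets (the argument of Fig. 2), consider the following minimal simulation. The page-view numbers and noise scale are hypothetical, and the snippet is an illustrative sketch rather than any specific DP implementation: averaging the noised records still recovers the secret (the mean) almost exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily page-view counts (the secret is their mean).
daily_views = rng.poisson(lam=5000, size=365).astype(float)

# DP-style record-level obfuscation: add zero-mean Laplace noise to each record.
noisy_views = daily_views + rng.laplace(loc=0.0, scale=100.0, size=daily_views.shape)

# The attacker's plain average of the released records is an unbiased
# estimator of the true mean, so the "secret" survives the noising.
print(f"true mean:  {daily_views.mean():.1f}")
print(f"noisy mean: {noisy_views.mean():.1f}")  # nearly identical
```

Sub-sampling and anonymization behave analogously: they leave the expected value of the mean untouched.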
## 2.2.2 **Industry Heuristics**

Industry heuristics are algorithms that are commonly used in industrial data sharing settings. They may not provide provable privacy guarantees, and indeed, many of these heuristics have been broken in practice. Examples include **anonymization**, which removes certain attributes (e.g., names of patients in medical data, names of jobs in a cluster dataset) (Reiss et al., 2012); anonymization is widely used in the release of datasets (e.g., Wilkes (2020)). However, it does not change the distribution of the remaining attributes. Another example is **sub-sampling**, which works by sampling the original datasets at the level of individual records (Reiss et al., 2012). The intuition is that by reducing the number of samples, less information is leaked. However, sub-sampling does not change statistical properties of the distribution.

## 2.2.3 **Information-Theoretic Approaches**

The third category of defenses is information-theoretic. These approaches have a similar goal to ours and typically rely on (or relate to) the mutual information between problem variables. Maximal leakage (Issa et al., 2019) is an information-theoretic framework for quantifying the leakage of sensitive information. We denote by X the random variable of the data to be shared (which may contain sensitive information), and by Y the random variable of the information that is processed from X and is accessible to the attacker. Having observed Y, the attacker's goal is to guess a secret function of X denoted by U, and the guess is denoted by $\hat{U}$. Based on this setup, the Markov chain $U - X - Y - \hat{U}$ holds. Maximal leakage $\mathcal{L}$ from X to Y is defined as

$$\mathcal{L}\left(X \to Y\right) = \sup_{U - X - Y - \hat{U}} \log \frac{\mathbb{P}\left(U = \hat{U}\right)}{\max_u P_U(u)} \tag{1}$$

where the sup is taken over U (i.e., considering the worst-case secret) and $\hat{U}$ (i.e., considering the strongest attacker). Intuitively, Eq. (1) evaluates the ratio (in nats) of the probabilities of guessing the secret U correctly with and without observing Y. To apply maximal leakage in a data sharing scenario, we may regard X as the original dataset, Y as the released dataset, and U as the secret (e.g., the fraction of a specific server type). However, this formulation is still unsuitable for the following reasons. (1) Maximal leakage only considers discrete U and $\hat{U}$ over a finite alphabet. Note that this is a critical assumption for making sure that $\mathbb{P}(U = \hat{U})$ in the definition (Eq. (1)) is nonzero. However, in our problem, secrets typically have continuous support (e.g., §2.1). (2) Maximal leakage assumes that the secret to protect U is unknown a priori and therefore considers the worst-case leakage among all possible secrets. However, in our problem, data holders know what secret they want to protect. Although we cannot directly use maximal leakage in our problem, its core idea can be useful for extending our framework (see §9). Privacy funnel (Makhdoumi et al., 2014) is another popular information-theoretic privacy framework. As with maximal leakage, we denote by X the random variable of the data that may contain sensitive information U, and by Y the random variable of the information that is processed from X and is accessible to the attacker. The privacy funnel framework evaluates privacy leakage with the mutual information I(U; Y), and the utility of Y with the mutual information I(X; Y). To find a good privacy-preserving data processing strategy $P_{Y|X}$, the privacy funnel solves the optimization

$$\min_{P_{Y|X}: I(X;Y) \geq R} I(U; Y)\,,$$

where R is a desired threshold on the utility of Y.
To apply it in data sharing problems, we could regard X as the original data, Y as the released data, and U as the secret the data holder wants to protect (e.g., the fraction of a specific server type). However, mutual information is not a good metric for either privacy or utility. On the privacy front, prior work has shown that I(U; Y) can be reduced while allowing the attacker to guess U correctly from Y with higher probability (see Example 1 in Issa et al. (2019)). On the utility front, higher mutual information I(X; Y) does not mean that the released data Y is a useful representation of X. For example, Y could be an arbitrary one-to-one transformation of X. In that case, I(X; Y) is maximized, but the data structure could be completely destroyed. In addition, the privacy funnel (Makhdoumi et al., 2014) only considers X and Y with discrete supports, which is too restrictive for our setting.

## 3 **Summary Statistic Privacy Formulation**

Notation. We denote random variables with uppercase English letters or upright Greek letters (e.g., X, μ), and their realizations with italicized lowercase letters (e.g., x, µ). For a random variable X, we denote its probability density function (PDF), or, in the case of discrete random variables, its probability mass function (PMF), as $f_X$, and its distribution measure as $\omega_X$. If a random variable X is drawn from a parametric family (e.g., Gaussian with specified mean and covariance), the parameters will be denoted with a subscript of X, i.e., the above notations become $X_\theta$, $f_{X_\theta}$, $\omega_{X_\theta}$ respectively for parameters $\theta \in \mathbb{R}^q$, where $q \geq 1$ denotes the dimension of the parameters. In addition, we denote $f_{X|Y}$ as the conditional PDF or PMF of X given another random variable Y. We use $\mathbb{Z}$, $\mathbb{Z}_{>0}$, $\mathbb{N}$, $\mathbb{R}$, $\mathbb{R}_{>0}$ to denote the set of integers, positive integers, natural numbers, real numbers, and positive real numbers respectively.

Original data. Consider a data holder who possesses a dataset of n samples $\mathcal{X} = \{x_1, \ldots, x_n\}$, where for each $i \in [n]$, $x_i \in \mathbb{R}^p$ is drawn i.i.d. from an underlying distribution. We assume the distribution comes from a parametric family, and the parameter vector $\theta \in \mathbb{R}^q$ of the distribution fully specifies the distribution. That is, $x_i \sim \omega_{X_\theta}$, where we further assume that θ is itself a realization of a random parameter vector Θ, and $\omega_\Theta$ is the probability measure for Θ. We will discuss how to relax the assumption on this prior distribution of θ in §9. We assume that the data holder knows θ (and hence knows its full data distribution $\omega_{X_\theta}$); our results and mechanisms generalize to the case when the data holder only possesses the dataset $\mathcal{X}$ (see §6). For example, suppose the original data samples come from a Gaussian distribution. We have $\theta = (\mu, \sigma)$, and $X_\theta \sim \mathcal{N}(\mu, \sigma)$. $\omega_\Theta$ (or $f_\Theta$) describes the prior distribution over $(\mu, \sigma)$. For example, if we know a priori that the mean of the Gaussian is drawn from a uniform distribution between 0 and 1, and σ is always 1, we could have $f_\Theta(\mu, \sigma) = \mathbb{I}(\mu \in [0, 1]) \cdot \delta(\sigma - 1)$, where $\mathbb{I}(\cdot)$ is the indicator function, and δ is the Dirac delta function. In practice, the underlying distribution can be much more complicated than a Gaussian. In general, the data can be multi-dimensional (i.e., p > 1). We study one-dimensional data as a starting point (§3.2).

Statistical secrets to protect. We assume the data holder wants to hide $\ell \in \mathbb{Z}_{>0}$ *secrets* of the original data distribution. Since the true data distribution is fully specified by the parameter vector θ, these secrets can be expressed as a function $g(\theta): \mathbb{R}^q \to \mathbb{R}^\ell$.
In the Gaussian example $X_\theta \sim \mathcal{N}(\mu, \sigma)$, suppose the random variable $X_\theta$ represents the traffic volume experienced by an enterprise in a day. The data holder may wish to hide the mean traffic per day, in which case g(·) would be the mean of the distribution, i.e., $g(\mu, \sigma) = \mu$. In this example, we are hiding only one secret (the mean), so ℓ = 1. In general, the secret can be any (vector-valued) function that can be deterministically computed from θ. As shown in Fig. 1, the secret could be derived from one feature (e.g., the mean salary) or computed from multiple features (e.g., the mean salary of males). The secrets could also be multi-dimensional (e.g., the mean salary and the fraction of males). In this paper, we present general results for one-dimensional secrets (i.e., ℓ = 1) and defer a discussion of higher-dimensional secrets to future work (see §9).

Data release mechanism. The data holder releases data by passing the private parameter θ through a data release mechanism $\mathcal{M}_g$. That is, for a given θ, the data holder first draws internal randomness $z \sim \omega_Z$, and then releases another distribution parameter $\theta' = \mathcal{M}_g(\theta, z)$, where $\mathcal{M}_g$ is a deterministic function, and $\omega_Z$ is a fixed distribution from which z is sampled. Note that we assume both the input and output of $\mathcal{M}_g$ are distribution parameters. It is straightforward to generalize to the case when the input and/or output are datasets of samples (see §6). For example, in the Gaussian case discussed above, the data release mechanism can be $\mathcal{M}_g((\mu, \sigma), z) = (\mu + z, \sigma)$ where $z \sim \mathcal{N}(0, 1)$. That is, this mechanism shifts the mean of the Gaussian by a random amount drawn from a standard Gaussian distribution and keeps the variance unchanged.

Threat model. We assume that the attacker knows the parametric family from which our data is drawn, but does not know the initial parameter θ. The attacker is also assumed to know the data release mechanism $\mathcal{M}_g$ and its output $\theta'$, but not the realization of the data holder's internal randomness z. The attacker guesses the initial secret $g(\theta)$ based on the released parameter $\theta'$ according to an estimate $\hat{g}(\theta')$. $\hat{g}$ can be either random or deterministic, and we assume no computational bounds on the adversary. For instance, in the running Gaussian example, an attacker may choose $\hat{g}(\mu', \sigma') = \mu'$. When the data holder releases a dataset of samples instead of the parameter $\theta'$, this formulation can be used to upper bound the attacker's performance in correctly guessing the secret, since estimation error on the released distribution parameter is induced by the finite number of samples in the released dataset.

## 3.1 **Metrics**

Privacy metric. The data holder wishes to prevent an attacker from guessing its secrets. We define our privacy metric $\Pi_{\epsilon,\omega_\Theta}$ as the attacker's probability of guessing the secret(s) to within a tolerance ϵ, taken worst-case over all attackers $\hat{g}$:

$$\Pi_{\epsilon,\omega_\Theta} \triangleq \sup_{\hat{g}} \; \mathbb{P}\left(|\hat{g}(\theta') - g(\theta)| \leq \epsilon\right)\,. \tag{2}$$

The probability is taken over the randomness of the original data distribution ($\theta \sim \omega_\Theta$), the data release mechanism ($z \sim \omega_Z$), and the attacker strategy ($\hat{g}$).

Distortion metric. The main goal of data sharing is to provide useful data; hence, we (and data holders and users) want to understand how much the released data distorts the original data.
We define the distortion ∆ of a mechanism as the worst-case distance between the original distribution and the released distribution:

$$\Delta \triangleq \sup_{\theta \in \mathrm{Supp}(\omega_\Theta),\; \theta',\; z \in \mathrm{Supp}(\omega_Z):\; \mathcal{M}_g(\theta, z) = \theta'} d\left(\omega_{X_\theta} \| \omega_{X_{\theta'}}\right), \tag{3}$$

where d is a general distance metric defined over distributions. The choice of the distance metric depends on the data type and potentially on the applications that stakeholders care about. For example, if the data holders or users have concrete metrics that they want to preserve (e.g., the difference between the mean salaries of males and females in Fig. 1), they could use this quantity as the distance metric. Otherwise, one can use statistical distance metrics between distributions (e.g., total variation distance, Wasserstein distance). In this paper, we adopt Wasserstein-1 distance for continuous distributions and total variation (TV) distance for discrete distributions. These distances are often used for evaluating data quality (e.g., Yin et al. (2022); Lin et al. (2020)) and as the distance metric in neural network design (e.g., Arjovsky et al. (2017); Lin et al. (2018)). Note that the definition in Eq. (3) can be extended to data release mechanisms that take datasets as inputs and/or outputs.

Objective. To summarize, the data holder's objective is to choose a data release mechanism that minimizes the distortion metric ∆ subject to a constraint on the privacy $\Pi_{\epsilon,\omega_\Theta}$:

$$\min_{\mathcal{M}_g} \; \Delta \qquad \text{subject to} \quad \Pi_{\epsilon,\omega_\Theta} \leq T. \tag{4}$$

The alternative formulation, $\min_{\mathcal{M}_g} \Pi_{\epsilon,\omega_\Theta}$ subject to $\Delta \leq T$, is analyzed in App. A. The optimal data release mechanism for Eq. (4) depends on the secrets, the distance metric d in Eq. (3), and the characteristics of the original data. We envision a *summary statistic privacy toolbox* (Fig. 1) that encodes data release mechanisms for a list of predefined secrets, d, and data distributions. Data holders specify the secret function they want to protect and the desired distance metric; the toolbox then selects the data distribution parametric family that most closely reflects the holder's raw data and uses the corresponding data release mechanism to process the raw data for sharing.

## 3.2 **Scope Of This Work**

## 3.2.1 **Simplifying Assumptions**

Although our formulation supports a wide range of distribution distance metrics, secret functions, and parametric families of data distributions, we make simplifying assumptions as a starting point on this problem.

Distortion metric. As discussed in §3.1, we use Wasserstein-1 and TV as the distance metrics for continuous and discrete distributions respectively in the case studies (§6). We leave the discussion of other metrics to §9.

The type and the number of secrets. Our formulation supports general statistical secrets, as long as they are a (possibly vector-valued) function of the data distribution. In this paper, we start by assuming that the secret is one-dimensional, and discuss several natural secret functions in §6.

The dimension and distribution of the data. Although our formulation includes multi-dimensional data, in this paper, we consider one-dimensional distributions as a starting point.
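The two metrics above are also easy to estimate empirically for a fixed mechanism. The sketch below is a toy sanity check under illustrative assumptions (a uniform prior on the mean of a Gaussian with fixed σ, an additive-Gaussian-noise mechanism, and a single natural attacker); because Eq. (2) takes a supremum over all attackers, the empirical success rate only lower-bounds $\Pi_{\epsilon,\omega_\Theta}$, and since the noise is unbounded we report a high quantile of the distance instead of the worst case in Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n_trials = 0.05, 100_000

# Toy prior: theta = mu ~ Uniform[0, 1), with sigma fixed; secret g(theta) = mu.
mu = rng.uniform(0.0, 1.0, size=n_trials)

# Illustrative mechanism (not one of our quantization mechanisms):
# additive Gaussian noise on the mean.
mu_released = mu + rng.normal(0.0, 0.1, size=n_trials)

# One natural attacker simply reports the released mean: g_hat(theta') = mu'.
success = np.mean(np.abs(mu_released - mu) <= eps)
print(f"empirical attack success (lower bound on Pi): {success:.3f}")

# For Gaussians differing only in their mean, W1 equals |mu - mu'|.
w1 = np.abs(mu_released - mu)
print(f"0.999-quantile of W1 distortion: {np.quantile(w1, 0.999):.3f}")
```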
## 3.2.2 **Research Questions**

We aim to answer two questions: Q1 What are the fundamental limits on the tradeoff between privacy and distortion? Q2 Do there exist data release mechanisms that can match or approach these fundamental limits? In general, these questions can have different answers for different choices of the distance metric in Eq. (3), different parametric families of data distributions, and different secret functions. In §4 and §5, we first present general results that do not depend on the data distribution or secret function. We then present case studies for specific secrets and data distributions to build up our initial *summary statistic privacy toolbox* in §6.

## 4 **General Lower Bound On Privacy-Distortion Tradeoffs**

Given a privacy budget T, we first present a lower bound on distortion that applies *regardless of the prior distribution of data* $\omega_\Theta$ and *regardless of the secret* g. As discussed in §3.2, we assume that the secret is scalar (i.e., ℓ = 1), but the data distribution can have arbitrary dimension.

Theorem 1 (Lower bound of privacy-distortion tradeoff). *Let $D(X_{\theta_1}, X_{\theta_2}) \triangleq \frac{1}{2} d\left(\omega_{X_{\theta_1}} \| \omega_{X_{\theta_2}}\right)$, where $d(\cdot\|\cdot)$ is defined in the line after Eq. (3). Further, let $R(X_{\theta_1}, X_{\theta_2}) \triangleq |g(\theta_1) - g(\theta_2)|$ and*

$$\gamma \triangleq \inf_{\theta_1, \theta_2 \in \mathrm{Supp}(\omega_\Theta)} \frac{D\left(X_{\theta_1}, X_{\theta_2}\right)}{R\left(X_{\theta_1}, X_{\theta_2}\right)}. \tag{5}$$

*For any $T \in (0, 1)$, when $\Pi_{\epsilon,\omega_\Theta} \leq T$,*

$$\Delta > \left(\left\lceil \frac{1}{T} \right\rceil - 1\right) \cdot 2\gamma\epsilon\,. \tag{6}$$

The proof is shown below. From Thm. 1 we see that the lower bound on distortion is inversely correlated with the privacy budget and positively correlated with the guess tolerance ϵ. The quantity γ in Eq. (5) can be thought of as a conversion factor that bounds the translation from probability of detection to distributional distance. Note that we have not made γ exact, as its form depends on the type of the secret and the prior distribution of data. We will instantiate it in the case studies in §6.

Proof. Our proof proceeds by constructing an ensemble of attackers, such that at least one of them will be correct by construction. We do this by partitioning the space of possible secret values, and having each attacker output the midpoint of one of the subsets of the partition. We then use the fact that each attacker can be correct with probability at most T, combined with γ, which intuitively relates the distance between distributions to the distance between their secrets, to derive the claim. Recall that θ is the true private parameter vector, and $\theta'$ is the released parameter vector output by the data release mechanism.

$$T \geq \Pi_{\epsilon,\omega_\Theta} = \sup_{\hat{g}} \mathbb{P}\left(\hat{g}(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon]\right) = \sup_{\hat{g}} \mathbb{E}\left(\mathbb{P}\left(\hat{g}(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon] \,\middle|\, \theta'\right)\right) = \mathbb{E}\left(\sup_{\hat{g}} \mathbb{P}\left(\hat{g}(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon] \,\middle|\, \theta'\right)\right), \tag{7}$$
where Eq. (7) is due to the following facts: (1) LHS ≤ RHS, because for any θ′ and any fixed attacker $\hat{g}$, $\mathbb{P}\left(\hat{g}(\theta') \in [g(\theta)-\epsilon, g(\theta)+\epsilon] \,\middle|\, \theta'\right) \leq \sup_{\hat{g}} \mathbb{P}\left(\hat{g}(\theta') \in [g(\theta)-\epsilon, g(\theta)+\epsilon] \,\middle|\, \theta'\right)$; (2) RHS ≤ LHS, because $\hat{g}$ can only depend on θ′, so we can map each θ′ in the RHS to an attacker achieving the inner supremum and obtain the same value, since the expectation is taken over θ′. Thus, there exists θ′ s.t. $\sup_{\hat{g}} \mathbb{P}\left(\hat{g}(\theta') \in [g(\theta)-\epsilon, g(\theta)+\epsilon] \,\middle|\, \theta'\right) \leq T$. Let

$$L_{\theta'} \triangleq \inf_{\theta \in \mathrm{Supp}(\omega_\Theta),\, z:\, \mathcal{M}_g(\theta, z) = \theta'} g(\theta)\,, \qquad R_{\theta'} \triangleq \sup_{\theta \in \mathrm{Supp}(\omega_\Theta),\, z:\, \mathcal{M}_g(\theta, z) = \theta'} g(\theta)\,.$$

We can define a sequence of attackers and a constant N such that $\hat{g}_i(\theta') = L_{\theta'} + (i + 0.5) \cdot 2\epsilon$ for $i \in \{0, 1, \ldots, N-1\}$ and $L_{\theta'} + 2N\epsilon \geq R_{\theta'} > L_{\theta'} + 2(N-1)\epsilon$ (Fig. 3). From the above, we have

$$T \cdot N \geq \sum_i \mathbb{P}\left(\hat{g}_i(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon] \,\middle|\, \theta'\right) \geq 1.$$

Therefore, we have $N \geq \lceil \frac{1}{T} \rceil$, and

$$R_{\theta'} - L_{\theta'} > \left(\left\lceil \frac{1}{T} \right\rceil - 1\right) \cdot 2\epsilon\,. \tag{8}$$

![10_image_0.png](10_image_0.png)

Figure 3: The construction of attackers for the proof of Thm. 1. The 2ϵ ranges of $\hat{g}_0, \ldots, \hat{g}_{N-1}$ jointly cover the entire range of possible secrets $[L_{\theta'}, R_{\theta'}]$. The probability of guessing the secret correctly for any attacker is ≤ T. Therefore, $R_{\theta'} - L_{\theta'} > \left(\lceil \frac{1}{T} \rceil - 1\right) \cdot 2\epsilon$ (Eq. (8)).

Then we have

$$\Delta \geq \sup_{\theta \in \mathrm{Supp}(\omega_\Theta),\, z \in \mathrm{Supp}(\omega_Z):\, \mathcal{M}_g(\theta, z) = \theta'} d\left(\omega_{X_\theta} \| \omega_{X_{\theta'}}\right) \geq \sup_{\theta_i \in \mathrm{Supp}(\omega_\Theta),\, z_i:\, \mathcal{M}_g(\theta_i, z_i) = \theta'} D\left(X_{\theta_1}, X_{\theta_2}\right) \tag{9}$$
$$> \left(\left\lceil \frac{1}{T} \right\rceil - 1\right) \cdot 2\gamma\epsilon\,, \tag{10}$$

where in Eq. (9), $\theta_i$ for $i \in \{1, 2\}$ denotes two arbitrary parameter vectors in the support space that the mechanism maps to θ′; Eq. (9) comes from the triangle inequality, and Eq. (10) utilizes $R_{\theta'} - L_{\theta'} > \left(\lceil \frac{1}{T} \rceil - 1\right) \cdot 2\epsilon$ and the definition of γ.
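Before instantiating the bound in the case studies, a quick worked example (our own sketch; the formal instantiations are in §6 and App. C) shows how γ is computed. Take secret = mean and $d = W_1$ over Gaussian distributions, and assume the support contains distributions that differ only in their means. Since $W_1\left(\mathcal{N}(\mu_1, \sigma_1), \mathcal{N}(\mu_2, \sigma_2)\right) \geq |\mu_1 - \mu_2|$, with equality when $\sigma_1 = \sigma_2$,

$$\gamma = \inf_{\theta_1, \theta_2 \in \mathrm{Supp}(\omega_\Theta)} \frac{\frac{1}{2}\, d\left(\omega_{X_{\theta_1}} \| \omega_{X_{\theta_2}}\right)}{|\mu_1 - \mu_2|} = \frac{1}{2}, \qquad \text{so} \qquad \Delta > \left(\left\lceil \frac{1}{T} \right\rceil - 1\right) \cdot \epsilon,$$

which matches the form of the mean-secret lower bound in Cor. 1 (§6.1).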
## 5 **Data Release Mechanisms**

We first present in §5.1 the *quantization mechanism*, a template for data release mechanisms used in the case studies of §6. The quantization mechanism can be instantiated differently for different secret functions and data distributions. We show in §5.2 techniques for instantiating the quantization mechanism, either based on theoretical insights or numerically. Finally, we give some intuition in §5.3 about how to analyze the quantization mechanism. These insights will be used in our case studies (§6) to show that we can sometimes match the lower bounds from §4 up to small constant factors.

## 5.1 **The Quantization Mechanism**

At a high level, the quantization mechanism follows two steps:

1. **Offline Phase:** Partition the space of parameters $\mathrm{Supp}(\Theta)$ into carefully-chosen bins.
2. **Online Phase:** For an observed data distribution parameter θ, deterministically release the quantized parameters, according to the partition from the Offline Phase.

More precisely, we first divide the set of possible distribution parameters $\mathrm{Supp}(\Theta)$ into subsets $S_i$ such that $\cup_{i \in \mathcal{I}} S_i \supseteq \mathrm{Supp}(\Theta)$ and $S_{i_1} \cap S_{i_2} = \emptyset$ for $i_1 \neq i_2$, where $\mathcal{I}$ is the (possibly uncountable) set of indices of the subsets. For $\theta \in \mathrm{Supp}(\Theta)$, $I(\theta)$ is the index of the set that θ belongs to; in other words, we have $I(\theta) = i$, where $\theta \in S_i$. The mechanism first looks up which set θ belongs to (i.e., $I(\theta)$), then *deterministically* releases a parameter $\theta^*_{I(\theta)}$ that corresponds to the set. Here, $\theta^*_i$ for $i \in \mathcal{I}$ denotes another parameter. In short, our data release mechanism has the form

$$\mathcal{M}_g(\theta, z) = \theta^*_{I(\theta)}\,.$$

Note that the policy is fully determined by $S_i$ and $\theta^*_i$. In the remainder of the paper, we will show different ways of instantiating the quantization mechanism to approach the lower bound in §4. Intuitively, quantization mechanisms will have bounded distortion as long as $d\left(\omega_{X_\theta} \| \omega_{X_{\theta^*_{I(\theta)}}}\right)$ is bounded for all $\theta \in \mathrm{Supp}(\Theta)$. At the same time, they obfuscate the secret, as different data distributions within the same set are mapped to the same released parameter. It turns out this simple *deterministic* mechanism is sufficient to achieve (order-)optimal privacy-distortion trade-offs in many cases, as opposed to DP, where randomness is required to achieve the guarantees (Dwork et al., 2006) (examples in the case studies, §6).

## 5.2 **Algorithms For Instantiating The Quantization Mechanism**

To implement the quantization mechanism, we need to define the quantization bins $S_i$ and the released parameter per bin $\theta^*_i$. Depending on the data distribution, the secret function, and the quantization mechanism parameters, the mechanism can have very different privacy-distortion tradeoffs. We present two methods for selecting quantization parameters: (1) an analytical approach, and (2) a numeric approach.

(1) Analytical approach. In some cases, outlined in the case studies of §6 and the appendices, we can find analytical expressions for $S_i$ and $\theta^*_i$ while (near-)optimally trading off privacy for distortion. This is usually possible when the lower bound depends on the problem parameters in a particular way. For example, for the Gaussian distribution where $\theta = (\mu, \sigma)$, when secret = standard deviation, we can work out the lower bound from Thm. 1 (details in App. G). Note that the lower bound is tight if our mechanism minimizes

$$\frac{D\left(X_{\mu_1,\sigma_1}, X_{\mu_2,\sigma_2}\right)}{R\left(X_{\mu_1,\sigma_1}, X_{\mu_2,\sigma_2}\right)} = \sqrt{\frac{1}{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\mu_1-\mu_2}{\sigma_1-\sigma_2}\right)^2} - \left(\frac{\mu_1-\mu_2}{\sigma_1-\sigma_2}\right)\left(\frac{1}{2} - \Phi\left(\frac{\mu_1-\mu_2}{\sigma_1-\sigma_2}\right)\right) \tag{11}$$

where $D(X_{\theta_1}, X_{\theta_2})$ and $R(X_{\theta_1}, X_{\theta_2})$ are defined in Thm. 1, and Φ denotes the CDF of the standard Gaussian distribution.
That is, for any true parameters $\mu_1$ and $\sigma_1$, the mechanism should always choose to release $\mu_2$ and $\sigma_2$ such that Eq. (11) is as small as possible. The exact form of Eq. (11) is not important for now; notice instead that the problem parameters $(\mu_i, \sigma_i)$ take the same form every time they appear in this equation. We define $t(\theta_1, \theta_2) = \frac{\mu_1 - \mu_2}{\sigma_1 - \sigma_2}$ to be that form.3 Next, we find the $t(\theta_1, \theta_2)$ that minimizes Eq. (11):

$$t_0 \triangleq \operatorname*{arg\,inf}_{t(\theta_1, \theta_2)} \frac{D\left(X_{\theta_1}, X_{\theta_2}\right)}{R\left(X_{\theta_1}, X_{\theta_2}\right)}.$$

For instance, in our Gaussian example, we can write $t_0$ as

$$t_0 = \operatorname*{arg\,inf}_{t(\theta_1, \theta_2)} \sqrt{\frac{1}{2\pi}}\, e^{-\frac{1}{2}(t(\theta_1, \theta_2))^2} - (t(\theta_1, \theta_2))\left(\frac{1}{2} - \Phi\left(t(\theta_1, \theta_2)\right)\right),$$

which can be solved numerically. Finally, we can choose $S_i$ and $\theta^*_i$ to be sets for which $t(\theta, \theta^*_i) = t_0,\ \forall \theta \in S_i$. Using this rule, we derive the mechanism:

$$\mathcal{S}_{\mu,i} = \left\{\left(\mu + t_0 \cdot t,\ \underline{\sigma} + (i + 0.5) \cdot s + t\right) \,\middle|\, t \in \left[-\frac{s}{2}, \frac{s}{2}\right)\right\}\,,$$
$$\theta^*_{\mu,i} = \left(\mu,\ \underline{\sigma} + (i + 0.5) \cdot s\right)\,,$$
$$\mathcal{I} = \left\{(\mu, i) \mid i \in \mathbb{N}, \mu \in \mathbb{R}\right\},$$

where s is a hyper-parameter of the mechanism that divides $(\overline{\sigma} - \underline{\sigma})$, and $\underline{\sigma}, \overline{\sigma}$ are lower and upper bounds of σ. For our Gaussian example, the resulting sets $\mathcal{S}_{\mu,i}$ for the quantization mechanism are shown in Fig. 4; the space of possible parameters is divided into infinitely many subsets $\mathcal{S}_{\mu,i}$, each consisting of a diagonal line segment (parallel blue lines in Fig. 4). The space of possible σ values is divided into segments of length s, which correspond to the horizontal bands in Fig. 4. The fact that the intervals $\mathcal{S}_{\mu,i}$ are diagonal lines arises from choosing $t(\theta_1, \theta_2) = \frac{\mu_1 - \mu_2}{\sigma_1 - \sigma_2}$; each interval corresponds to a set of points that satisfy $t(\theta_1, \theta_2) = t_0$, i.e., with slope $1/t_0$. We will see how to use this construction to obtain upper bounds on privacy-distortion tradeoffs in §5.3.

3 Indeed, for many of the case studies in §6, t(θ) takes an analogous form; we will see the implications of this in the analysis of the upper bound in §5.3.

(2) Numeric approach. In some cases, the above procedure may not be possible. To this end, we present a dynamic programming algorithm to numerically compute the quantization mechanism parameters. This algorithm achieves an optimal privacy-distortion tradeoff (Bellman, 1966) among the class of quantization algorithms with finite precision and continuous intervals $S_i$. We use this algorithm in some of the case studies in §6. We present our dynamic programming algorithm for univariate data distributions. We assume $\mathrm{Supp}(\Theta) = [\underline{\theta}, \overline{\theta}]$, where $\underline{\theta}, \overline{\theta}$ are lower and upper bounds of θ, respectively. We consider the class of quantization mechanisms such that $S_i = [\underline{\theta}_i, \overline{\theta}_i)$, i.e., each subset of parameters is a continuous range. Furthermore, we explore mechanisms such that $\underline{\theta}_i, \overline{\theta}_i, \theta^*_i \in \{\underline{\theta}, \underline{\theta} + \kappa, \underline{\theta} + 2\kappa, \ldots, \overline{\theta}\}$, where κ is a hyper-parameter that encodes numeric precision (and therefore divides $(\overline{\theta} - \underline{\theta})$). For example, if we want to hide the mean of a Geometric random variable with $\underline{\theta} = 0.1$ and $\overline{\theta} = 0.9$, we could consider three-decimal-place precision, i.e., κ = 0.001 and $\underline{\theta}_i, \overline{\theta}_i, \theta^*_i \in \{0.100, 0.101, 0.102, \ldots, 0.900\}$.
Since ∆ (Eq. (3)) is defined as the *worst-case* distortion, whereas $\Pi_{\epsilon,\omega_\Theta}$ (Eq. (2)) is defined as a *probability* related to the original data distribution, optimizing $\Pi_{\epsilon,\omega_\Theta}$ given bounded ∆ (Eq. (12)) is easier to solve than the final goal of optimizing ∆ given bounded $\Pi_{\epsilon,\omega_\Theta}$ (Eq. (4)):

$$\min_{\mathcal{M}_g} \; \Pi_{\epsilon,\omega_\Theta} \qquad \text{subject to} \quad \Delta \leq T. \tag{12}$$

Observing that in Eq. (4) the optimal value of $\min_{\mathcal{M}_g} \Delta$ is a monotonically decreasing function w.r.t. the threshold T, we can use a binary search algorithm (shown in App. B) to reduce problem Eq. (4) to problem Eq. (12). It calls an algorithm that finds the optimal quantization mechanism with numerical precision over continuous intervals under a distortion budget T (i.e., solving Eq. (12)). This problem can be solved by a dynamic programming algorithm. Let $pri(t^*)$, for $t^* \in \{\underline{\theta}, \underline{\theta} + \kappa, \underline{\theta} + 2\kappa, \ldots, \overline{\theta}\}$, be the minimal privacy $\Pi_{\epsilon,\omega_\Theta}$ we can get for $\{X_\theta : \theta \in [\underline{\theta}, t^*)\}$ such that ∆ ≤ T. Denote $\mathcal{D}(\theta_1, \theta_2)$ as the minimal distortion a quantization mechanism can achieve under the quantization bin $[\theta_1, \theta_2)$; we have

$$\mathcal{D}(\theta_1, \theta_2) = \inf_{\theta \in \mathbb{R}^q} \sup_{\theta'' \in [\theta_1, \theta_2)} d\left(\omega_{X_{\theta''}} \| \omega_{X_\theta}\right),$$

where $d(\cdot\|\cdot)$ is defined in Eq. (3). We also denote $\mathcal{D}^*(\theta_1, \theta_2) = \operatorname*{arg\,inf}_{\theta \in [\theta_1, \theta_2)} \sup_{\theta'' \in [\theta_1, \theta_2)} d\left(\omega_{X_{\theta''}} \| \omega_{X_\theta}\right)$. If the prior over parameters is $f_\Theta$, we have the Bellman equation

$$pri(t^*) = \min_{\theta \in [\underline{\theta},\, t^* - \kappa]:\ \mathcal{D}(\theta, t^*) \leq T} \frac{\int_{\underline{\theta}}^{\theta} f_\Theta(t)\, \mathrm{d}t}{\int_{\underline{\theta}}^{t^*} f_\Theta(t)\, \mathrm{d}t} \cdot pri(\theta) + \frac{\int_{\theta}^{t^*} f_\Theta(t)\, \mathrm{d}t}{\int_{\underline{\theta}}^{t^*} f_\Theta(t)\, \mathrm{d}t} \cdot \mathcal{P}(\theta, t^*)$$

with the initial state $pri(\underline{\theta}) = 0$, where

$$\mathcal{P}(\theta, t^*) = \mathbb{P}\left(\hat{g}^*(\theta') \in [g(\theta_0) - \epsilon,\, g(\theta_0) + \epsilon] \,\middle|\, \theta_0 \in [\theta, t^*],\, \theta'\right) = \sup_{t_1, t_2:\ \sup_{t', t'' \in [t_1, t_2]} |g(t'') - g(t')| = 2\epsilon} \frac{\int_{\max\{t_1, \theta\}}^{\min\{t_2, t^*\}} f_\Theta(t)\, \mathrm{d}t}{\int_{\theta}^{t^*} f_\Theta(t)\, \mathrm{d}t}.$$

Here, θ′ is the released parameter when the private parameter $\theta_0 \in [\theta, t^*]$, and $\hat{g}^*$ is the optimal attack strategy. The full algorithm is listed in Alg. 1. The time complexity of this algorithm is $O\left(\left(\left(\overline{\theta} - \underline{\theta}\right)/\kappa\right)^2 \cdot C_\mathcal{D} \cdot C_\mathcal{P} \cdot C_I\right)$, where $C_\mathcal{D}$ is the time complexity for computing $\mathcal{D}$ and $\mathcal{D}^*$, $C_\mathcal{P}$ is the time complexity for computing $\mathcal{P}$, and $C_I$ is the time complexity for computing the integrals in the Bellman equation. In our case studies, $\mathcal{D}$ and $\mathcal{D}^*$ can be computed in $C_\mathcal{D} = O\left(\left(\overline{\theta} - \underline{\theta}\right)/\kappa\right)$, and $\mathcal{P}$ and the integrals can be computed in closed form within constant time, i.e., $C_\mathcal{P} = C_I = O(1)$.

Algorithm 1: Dynamic-programming-based data release mechanism for single-parameter distributions.

Input: parameter range $[\underline{\theta}, \overline{\theta}]$; prior over the parameter $f_\Theta$; distortion budget T; step size κ (which divides $\overline{\theta} - \underline{\theta}$).

1: $pri(\underline{\theta}) \leftarrow 0$
2: $\mathcal{I}(\underline{\theta}) \leftarrow \emptyset$
3: for $t^* \leftarrow \underline{\theta} + \kappa,\ \underline{\theta} + 2\kappa,\ \ldots,\ \overline{\theta}$ do
4:   $pri(t^*) \leftarrow \infty$
5:   min_t ← NULL
6:   for $\theta \leftarrow t^* - \kappa,\ \ldots,\ \underline{\theta}$ do
7:     if $\mathcal{D}(\theta, t^*) > T$ then
8:       break
9:     $p \leftarrow \frac{\int_{\underline{\theta}}^{\theta} f_\Theta(t)\,\mathrm{d}t}{\int_{\underline{\theta}}^{t^*} f_\Theta(t)\,\mathrm{d}t} \cdot pri(\theta) + \frac{\int_{\theta}^{t^*} f_\Theta(t)\,\mathrm{d}t}{\int_{\underline{\theta}}^{t^*} f_\Theta(t)\,\mathrm{d}t} \cdot \mathcal{P}(\theta, t^*)$
10:    if $p < pri(t^*)$ then
11:      $pri(t^*) \leftarrow p$
12:      min_t ← θ
13:  if min_t is not NULL then
14:    $S_{t^*} \leftarrow [\text{min\_t}, t^*)$
15:    $\theta'_{t^*} \leftarrow \mathcal{D}^*(\text{min\_t}, t^*)$
16:    $\mathcal{I}(t^*) \leftarrow \mathcal{I}(\text{min\_t}) \cup \{t^*\}$
17: if $pri(\overline{\theta}) = \infty$ then
18:   ERROR: No answer
19: return $pri(\overline{\theta})$, $\{S_i : i \in \mathcal{I}(\overline{\theta})\}$, $\{\theta'_i : i \in \mathcal{I}(\overline{\theta})\}$
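To make Alg. 1 concrete, the following is a minimal Python sketch for a simple special case: a scalar location parameter with secret $g(\theta) = \theta$, a uniform prior, and $W_1$ distance, so that $\mathcal{D}(\theta_1, \theta_2) = (\theta_2 - \theta_1)/2$ (achieved by the midpoint, which serves as $\mathcal{D}^*$) and $\mathcal{P}(\theta, t^*) = \min\{2\epsilon,\, t^* - \theta\}/(t^* - \theta)$. All names are illustrative, and the closed forms above are assumptions specific to this special case rather than the general implementation.

```python
import math

def dp_quantization(theta_lo, theta_hi, kappa, T, eps):
    """Sketch of Alg. 1 for a scalar location parameter (secret = the
    parameter itself) with a uniform prior and W1 distance.
    Returns (optimal privacy Pi, bins as (lo, hi, released midpoint))."""
    n = round((theta_hi - theta_lo) / kappa)
    grid = [theta_lo + i * kappa for i in range(n + 1)]
    pri = {0: 0.0}    # pri[j]: best privacy when Supp(Theta) = [theta_lo, grid[j])
    prev = {0: None}  # back-pointers for reconstructing the chosen bins
    for j in range(1, n + 1):              # t* = grid[j]
        pri[j], prev[j] = math.inf, None
        for i in range(j - 1, -1, -1):     # candidate bin [grid[i], grid[j])
            width = grid[j] - grid[i]
            if width / 2 > T:              # D(theta, t*) = width/2 exceeds budget
                break
            p_bin = min(2 * eps, width) / width  # optimal attacker on this bin
            # Bellman recursion, weighted by the (uniform) prior mass
            p = ((grid[i] - theta_lo) * pri[i] + width * p_bin) / (grid[j] - theta_lo)
            if p < pri[j]:
                pri[j], prev[j] = p, i
    if prev[n] is None:
        raise ValueError("no feasible quantization under this distortion budget")
    bins, j = [], n                        # reconstruct bins from back-pointers
    while j > 0:
        i = prev[j]
        bins.append((grid[i], grid[j], (grid[i] + grid[j]) / 2))
        j = i
    return pri[n], list(reversed(bins))

# Example: parameter in [0, 1], precision 0.01, distortion budget T = 0.05,
# attacker tolerance eps = 0.02; bins of width 0.1 give Pi = 0.4.
privacy, bins = dp_quantization(0.0, 1.0, 0.01, T=0.05, eps=0.02)
print(privacy, bins[:2])
```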
![14_image_0.png](14_image_0.png)

Figure 4: We separate the space of possible parameters into two regions (yellow and green) and bound the attacker's success rate on each region separately. The blue lines represent examples of $\mathcal{S}_{\mu,i}$.

When dynamic programming is not practical (e.g., in high-dimensional problems), we also provide a greedy algorithm in App. B as a baseline and show the empirical comparison between these two algorithms in the case studies (Apps. E, G and H).

## 5.3 **Technique For Analyzing The Quantization Mechanism**

We next provide an overview of techniques for analyzing the quantization mechanism, both for privacy and for distortion. We use these techniques for the analysis in our case studies, where we will make the expressions and claims more precise. For concreteness, we will recall the Gaussian example from §5.2, for which we have already derived a mechanism. The mechanism presented in §5.2 can be interpreted geometrically as follows. Over the square of possible parameter values µ and σ (Fig. 4), the mechanism selects intervals $\mathcal{S}_{\mu,i}$ that consist of short diagonal line segments (e.g., blue line segments in Fig. 4). When the true distribution parameters fall in one of these intervals, the mechanism releases the midpoint of the interval. We find that many of our case studies naturally give rise to the same form of t(θ). As a result, all of the case studies we analyze theoretically (with multiple parameters) have mechanisms that instantiate the intervals $\mathcal{S}_{\mu,i}$ as diagonal lines, as shown in Fig. 4. The sawtooth technique, which we present next, can be used to analyze the privacy of all such mechanism instantiations. More precisely, the following pattern of quantization mechanism admits diagonal line intervals, and can be analyzed with the sawtooth technique (§6 and Apps. E and G):

$$\mathcal{S}_{\mu,i} = \left\{\left(\mu + t_0 \cdot t,\ \underline{\sigma} + (i + 0.5) \cdot s + t\right) \,\middle|\, t \in \left[-\frac{s}{2}, \frac{s}{2}\right)\right\}\,,$$
$$\theta^*_{\mu,i} = \left(\mu,\ \underline{\sigma} + (i + 0.5) \cdot s\right)\,,$$
$$\mathcal{I} = \left\{(\mu, i) \mid i \in \mathbb{N}, \mu \in \mathbb{R}\right\},$$

where s is a hyper-parameter of the mechanism that denotes the quantization bin size and divides $(\overline{\sigma} - \underline{\sigma})$, and $t_0$ is a constant that can be determined by the mechanism design strategy described in §5.2.

(1) Privacy analysis. For ease of illustration, we assume that the support of the parameters is $\mathrm{Supp}(\Theta) = \left\{(a, b) \,\middle|\, a \in [\underline{\mu}, \overline{\mu}), b \in [\underline{\sigma}, \overline{\sigma})\right\}$, but the analysis can be generalized to any case. In Fig. 4, we separate the space of possible data parameters into two regions, represented by the yellow and green colors. The yellow regions $S_{yellow}$ constitute right triangles with height s and width $|t_0|s$. The green region $S_{green}$ is the rest of the parameter space. The high-level idea of our proof is as follows. Note that for any parameter $\theta \in S_{green}$, there exists a quantization bin $\mathcal{S}_{\mu,i}$ s.t. $\theta \in \mathcal{S}_{\mu,i}$ and $\mathcal{S}_{\mu,i} \subset S_{green}$. This occurs because the mechanism intervals (blue lines in Fig. 4) all have the same slope and span at most s in the σ direction.
As such, each interval is either fully in the green region, or fully in the yellow region. Since we know the length of each bin, we can upper bound the attack success rate when $\theta \in S_{green}$. While the attacker can be more successful in the yellow region, the probability that $\theta \in S_{yellow}$ is small. Hence, we can upper bound the overall attacker's success rate (i.e., $\Pi_{\epsilon,\omega_\Theta}$). More specifically, let the optimal attacker be $\hat{g}^*$. We have

$$\Pi_{\epsilon,\omega_\Theta} = \mathbb{P}\left(\hat{g}^*(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon]\right)$$
$$= \int_{\theta \in S_{green}} p(\theta)\, \mathbb{P}\left(\hat{g}^*(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon]\right) d\theta + \int_{\theta \in S_{yellow}} p(\theta)\, \mathbb{P}\left(\hat{g}^*(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon]\right) d\theta$$
$$< \sup_{\theta \in S_{green}} \mathbb{P}\left(\hat{g}^*(\theta') \in [g(\theta) - \epsilon, g(\theta) + \epsilon]\right) + \int_{\theta \in S_{yellow}} p(\theta)\, d\theta,$$

where p(θ) denotes the prior density of θ. The first term can be bounded away from 1 due to the carefully chosen $t_0$. The second term is bounded away from 1 because the size of $S_{yellow}$ is relatively small. The formal justification is given in Prop. 2 and Apps. C.4.2, F.2 and G.4.

(2) Distortion analysis. For the distortion performance, it is straightforward to show that $\Delta = \sup_{\theta \in \mathrm{Supp}(\Theta)} d\left(\omega_{X_\theta} \| \omega_{X_{\theta^*_{I(\theta)}}}\right)$, where $\theta^*_{I(\theta)}$ is the released parameter when the original parameter is θ. This quantity can often be derived directly from the mechanism and parameter support.

## 6 **Case Studies**

In this section, we instantiate the general results on concrete distributions and secrets (mean §6.1, quantile §6.2; we defer standard deviation and discrete distribution fractions to Apps. G and H). See Table 1 for a summary of each setting we consider, and a pointer to any theoretical results. Our results in each setting generally include a privacy lower bound, a concrete instantiation of the quantization mechanism, and a privacy-distortion analysis of the data release mechanisms. In §6.3, we will discuss how to extend the data release mechanisms to the case where data holders only have data samples and do not know the parameters of the underlying distributions. These data release mechanisms serve as the initial version of the *summary statistic privacy toolbox* (Fig. 1).

| Secret | Continuous Distribution: Gaussian, Uniform, Exponential (order-optimal mechanism) | Ordinal Distribution: Geometric, Binomial, Poisson (Alg. 1 and Alg. 3) | Categorical Distribution (order-optimal mechanism) |
|---|---|---|---|
| Mean | §6.1 | App. E | Not applicable |
| Quantile | §6.2 and App. F | Not applicable | Not applicable |
| Standard Deviation | App. G.1 | App. G.2 | Not applicable |
| Fraction | Not applicable | App. H.1 | App. H.2 |

Table 1: Summary of the case studies.

## 6.1 **Secret = Mean**

In this section, we discuss how to protect the mean of a distribution for general continuous distributions. We start with a lower bound.

Corollary 1 (Privacy lower bound, secret = mean of a continuous distribution). *Consider the secret function $g(\theta) = \int_x x f_{X_\theta}(x)\, \mathrm{d}x$.*
*For any $T \in (0, 1)$, when $\Pi_{\epsilon,\omega_\Theta} \leq T$, we have $\Delta > \left(\lceil \frac{1}{T} \rceil - 1\right) \cdot \epsilon$.*

![16_image_0.png](16_image_0.png)

Figure 5: Illustration of the data release mechanism for continuous distributions when secret = mean.

The proof is in App. C.1. We next design a data release mechanism that achieves a tradeoff close to this bound.

Data release mechanism. We first consider continuous distributions that can be parameterized with a location parameter, where the prior distribution of the location parameter is uniform and independent of other factors. We relax this assumption to Lipschitz-continuous priors in App. D.1. For now, we assume the following:

Assumption 1. *The distribution parameter vector θ can be written as $(u, v)$, where $u \in \mathbb{R}$, $v \in \mathbb{R}^{q-1}$, and for any $u \neq u'$, $f_{X_{u,v}}(x) = f_{X_{u',v}}(x - u + u')$. The prior over distribution parameters is $f_{U,V}(a, b) = f_U(a) \cdot f_V(b)$, where $f_U(a) = \frac{1}{\overline{u} - \underline{u}}\, \mathbb{I}\left(a \in [\underline{u}, \overline{u})\right)$.*

Examples include the Gaussian, Laplace, and uniform distributions, as well as shifted distributions (e.g., shifted exponential, shifted log-logistic). Using the strategy from §5.2, we derive the following quantization mechanism.

Mechanism 1 (For secret = mean of a continuous distribution). *The parameters of the data release mechanism are*

$$\mathcal{S}_{i,v} = \left\{(t, v) \,\middle|\, t \in [\underline{u} + i \cdot s,\ \underline{u} + (i + 1) \cdot s)\right\}, \tag{13}$$
$$\theta^*_{i,v} = \left(\underline{u} + (i + 0.5) \cdot s,\, v\right), \tag{14}$$
$$\mathcal{I} = \left\{(i, v) : i \in \{0, 1, \ldots, N-1\},\, v \in \mathrm{Supp}(\omega_V)\right\}, \tag{15}$$

*where s is a hyper-parameter of the mechanism that divides $(\overline{u} - \underline{u})$ and $N = \frac{\overline{u} - \underline{u}}{s} \in \mathbb{N}$.*

Fig. 5 shows an example when the original data distribution is Gaussian, i.e., $X_\theta \sim \mathcal{N}(u, v)$, and $u \in [\underline{\mu}, \overline{\mu})$. Intuitively, our data release mechanism "quantizes" the range of possible mean values into segments of length s. It then shifts the mean of the private distribution $f_{X_{u,v}}$ to the midpoint of its corresponding segment, and releases the resulting distribution. This simple deterministic mechanism is able to achieve an order-optimal privacy-distortion tradeoff in some cases, as shown below.

Proposition 1. *Under Asm. 1, Mech. 1 has $\Pi_{\epsilon,\omega_\Theta} \leq \frac{2\epsilon}{s}$ and $\Delta = \frac{s}{2} < 2\Delta_{opt}$, where $\Delta_{opt}$ is the minimal distortion an optimal data release mechanism can achieve given the privacy Mech. 1 achieves.*

The proof is in App. C.2. The two takeaways from this proposition are that: (1) the data holder can use s to control the trade-off between distortion and privacy, and (2) the mechanism is order-optimal with multiplicative factor 2.

## 6.2 **Secret = Quantiles**

S3 in §2.1 explains how quantiles of continuous distributions can reveal sensitive information. In this section, we show how to protect them for a typical continuous distribution: the (shifted) exponential distribution. We analyze the Gaussian and uniform distributions in App. F. We choose these distributions as a starting point of our analysis as many distributions in real-world data can be approximated by one of these distributions. In our analysis, the parameters of (shifted) exponential distributions are denoted by:

- Exponential distribution: θ = λ, where λ is the scale parameter. In other words, $f_{X_\lambda}(x) = \frac{1}{\lambda} e^{-x/\lambda}$.
- The shifted exponential distribution generalizes the exponential distribution with an additional shift parameter h: θ = (λ, h). In other words, $f_{X_{\lambda,h}}(x) = \frac{1}{\lambda} e^{-(x-h)/\lambda}$.

As before, we first present a lower bound.
Corollary 2 (Privacy lower bound, secret = α-quantile of a continuous distribution). *Consider the secret function g(θ) = α-quantile of $f_{X_\theta}$. For any T ∈ (0, 1), when $\Pi_{\epsilon,\omega_{\Theta}} \leq T$, we have $\Delta > \left(\lceil\frac{1}{T}\rceil - 1\right)\cdot 2\gamma\epsilon$, where γ is defined as follows:*

- *Exponential:*
$$\gamma=-\frac{1}{2\ln\left(1-\alpha\right)}.$$
- *Shifted exponential:*
$$\gamma=\begin{cases}\frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{-1}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[0,1-e^{-1})\\ \frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{0}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[1-e^{-1},1)\end{cases},$$
*where $W_{-1}$ and $W_0$ are Lambert W functions.*

The proof is in App. C.3. Next, we provide data release mechanisms for each of the distributions that achieve trade-offs close to these bounds.

Mechanism 2 (For secret = quantile of a continuous distribution). *We design mechanisms for each of the distributions. In both cases, s > 0 is the quantization bin size chosen by the operator to divide $\overline{\lambda}-\underline{\lambda}$, where $\overline{\lambda}$ and $\underline{\lambda}$ are upper and lower bounds of λ.*

- *Exponential:*
$$\mathcal{S}_{i}=\left[\underline{\lambda}+i\cdot s,\ \underline{\lambda}+(i+1)\cdot s\right),\qquad\theta_{i}^{*}=\underline{\lambda}+(i+0.5)\cdot s,\qquad\mathcal{I}=\mathbb{N}.$$
- *Shifted exponential:*
$$\mathcal{S}_{i,h}=\left\{\left(\underline{\lambda}+\left(i+0.5\right)s+t,\ h-t_{0}\cdot t\right)|t\in\left[-\frac{s}{2},\frac{s}{2}\right)\right\},\qquad\theta_{i,h}^{*}=\left(\underline{\lambda}+\left(i+0.5\right)s,\ h\right),\qquad\mathcal{I}=\left\{(i,h)|i\in\mathbb{N},h\in\mathbb{R}\right\},$$
*where*
$$t_{0}=\begin{cases}-1-\ln\left(1-\alpha\right)-W_{-1}\left(-\frac{\ln\left(1-\alpha\right)+1}{2(1-\alpha)e}\right)&\left(\alpha\in[0,1-e^{-1})\right)\\ -1-\ln\left(1-\alpha\right)-W_{0}\left(-\frac{\ln\left(1-\alpha\right)+1}{2(1-\alpha)e}\right)&\left(\alpha\in[1-e^{-1},1)\right)\end{cases}.$$

For the privacy-distortion trade-off analysis of Mech. 2, we assume that the parameters of the original data are drawn from a uniform distribution with lower and upper bounds. Precisely,

Assumption 2. *The prior over distribution parameters is:*
- *Exponential: λ follows the uniform distribution over $[\underline{\lambda}, \overline{\lambda})$.*
- *Shifted exponential: (λ, h) follows the uniform distribution over $\left\{(a,b)\,|\,a\in[\underline{\lambda},\overline{\lambda}),\ b\in[\underline{h},\overline{h})\right\}$.*

We relax Asm. 2 to Lipschitz priors and analyze the privacy-distortion trade-off of Mech. 2 under that relaxation in App. D.2.
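Before stating the trade-off achieved by Mech. 2, a quick arithmetic check of the lower bound may help (our own worked example, not from the paper). For the exponential case with α = 0.5 (the median),
$$\gamma=-\frac{1}{2\ln(1-0.5)}=\frac{1}{2\ln 2}\approx 0.72,$$
so a privacy budget of T = 0.5 in Corollary 2 yields $\Delta > \left(\lceil\frac{1}{0.5}\rceil-1\right)\cdot 2\gamma\epsilon = 2\gamma\epsilon \approx 1.44\epsilon$: any mechanism that keeps the attacker's success probability at most one half must distort the released distribution by at least roughly 1.44ϵ under the paper's distortion metric.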
Proposition 2. *Under Asm. 2, Mech. 2 has the following $\Pi_{\epsilon,\omega_{\Theta}}$ and ∆ value/bound.*

- *Exponential:*
$$\Pi_{\epsilon,\omega_{\Theta}}=\frac{2\epsilon}{-\ln\left(1-\alpha\right)s},\qquad\Delta=\frac{1}{2}s<2\Delta_{opt}.$$
- *Shifted exponential:*
$$\Pi_{\epsilon,\omega_{\Theta}}<\frac{2\epsilon}{\left|\ln\left(1-\alpha\right)+t_{0}\right|s}+\frac{\left|t_{0}\right|s}{\overline{h}-\underline{h}},\qquad\Delta=\frac{s}{2}\left(t_{0}-1\right)+se^{-t_{0}}<\left(2+\frac{\left|t_{0}\right|\cdot\left|\ln\left(1-\alpha\right)+t_{0}\right|s^{2}}{\epsilon\left(\overline{h}-\underline{h}\right)}\right)\Delta_{opt}.$$
*Under the high-precision regime where $\frac{s^{2}}{\overline{h}-\underline{h}}\to0$ as $s,\left(\overline{h}-\underline{h}\right)\to\infty$, when α ∈ [0.01, 0.25] ∪ [0.75, 0.99], ∆ satisfies*
$$\lim_{\frac{s^{2}}{\overline{h}-\underline{h}}\to0}\Delta<3\Delta_{opt}.$$
*$\Delta_{opt}$ is the optimal achievable distortion given the privacy achieved by Mech. 2, and $t_0$ is a constant defined in Mech. 2.*

The proof is in App. C.4. Note that the quantization bin size s cannot be too small, or the attacker can always successfully guess the secret within a tolerance ϵ (i.e., $\Pi_{\epsilon,\omega_{\Theta}} = 1$). Therefore, for the "high-precision" regime, we consider the asymptotic scaling as both s and $\overline{h}-\underline{h}$ grow. Prop. 2 shows that the quantization mechanism is order-optimal with multiplicative factor 2 for the exponential distribution. For the shifted exponential distribution, order-optimality holds asymptotically in the high-precision regime.

## 6.3 **Extending Data Release Mechanisms For Dataset Input/Output**

The data release mechanisms discussed in previous sections assume that data holders know the distribution parameter of the original data. In practice, data holders often only have a dataset of samples from the data distribution and do not know the parameters of the underlying distributions. As mentioned in §3, our data release mechanisms can be easily adapted to handle dataset input/output. The high-level idea is that the data holder can estimate the distribution parameters θ from the data samples, find the corresponding quantization bin $\mathcal{S}_i$ according to the estimated parameters, and then modify the original samples as if they were sampled according to the released parameter $\theta^{*}_i$. For brevity, we only present the concrete procedure for secret=mean on continuous distributions as an example. For a dataset X = {x1, . . . , xn}, the procedure is:

1. Estimate the mean from the data samples: $\hat{\mu} = \frac{1}{n}\sum_{i\in[n]} x_i$.
2. According to Eq. (13), compute the index of the corresponding set: $i = \lfloor\frac{\hat{\mu}-\underline{\mu}}{s}\rfloor$.
3. According to Eq. (14), change the mean of the data samples to $\mu_{target} = \underline{\mu} + (i+0.5)\cdot s$. This can be done by the sample-wise operation $x_i' = x_i - \hat{\mu} + \mu_{target}$.
4. The released dataset is $\mathcal{M}_g(\mathcal{X}, z) = \{x_1', \ldots, x_n'\}$.

Note that this mechanism applies to samples. Therefore, it can be applied either to the original data, or as an add-on to existing data sharing tools (Esteban et al., 2017; Lin et al., 2020; Yin et al., 2022; Jordon et al., 2018; Yoon et al., 2019). For example, it can be used to modify synthetically-generated samples after they are generated, or to modify the training dataset for a generative model, or to directly modify the original data for releasing.
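The four steps above admit a direct implementation. Below is a minimal Python sketch (our own illustration; `release_mean_protected` and the example values are assumptions, not the authors' code):

```python
import numpy as np

def release_mean_protected(x, mu_low, s):
    """Dataset-level version of Mech. 1 for secret = mean (Sec. 6.3).

    Shifts all samples so that their empirical mean sits at the midpoint
    of its length-s quantization bin; mu_low is the lower bound of the
    mean's prior support.
    """
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()                      # step 1: estimate the mean
    i = int((mu_hat - mu_low) // s)        # step 2: bin index, Eq. (13)
    mu_target = mu_low + (i + 0.5) * s     # step 3: bin midpoint, Eq. (14)
    return x - mu_hat + mu_target          # step 4: released samples

x = np.random.default_rng(0).normal(loc=3.7, scale=1.0, size=1000)
y = release_mean_protected(x, mu_low=-10.0, s=2.0)
print(x.mean(), y.mean())  # the released mean is snapped to a bin midpoint
```

The same pattern applies to the other secrets: estimate the parameter, look up its quantization bin, and re-center the samples accordingly.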
## 7 **Experiments**

In the previous sections, we theoretically demonstrated the privacy-distortion tradeoffs of our data release mechanisms in some special case studies. In this section, we focus on *orthogonal* questions through real-world experiments: (1) how well our data release mechanisms perform when the assumptions do not hold in practice, and (2) why existing privacy frameworks are not suitable for summary statistic privacy (which we explained qualitatively in §2.2).

Datasets. We use three real-world datasets to simulate each of the motivating scenarios in §2.1.

1. Wikipedia Web Traffic Dataset (WWT) (Google, 2018) contains the daily page views of 145,063 Wikipedia web pages in 2015-2016. To preprocess it for our experiments, we remove the web pages with an empty page view record on any day (117,277 left), and compute the mean page views across all dates for each web page. Our goal is to release the page views (i.e., a 117,277-dimensional vector) while protecting the **mean of the distribution** (which reveals the business scales of the company, §2.1).

2. Google Cluster Trace Dataset (GCT) (Reiss et al., 2011) contains usage logs (e.g., CPU/memory) of an internal Google cluster with 12.5k machines in 2011. We use the "platform ID" field of the dataset, which represents the "microarchitecture and chipset version of the machine" (Reiss et al., 2011). Our goal is to release another distribution of platform IDs while protecting the **fraction** of one specific platform ID (which reveals business strategy, §2.1).

3. Measuring Broadband America Dataset (MBA) (Commission, 2018) contains network statistics (including network traffic counters) collected by the United States Federal Communications Commission from homes across the United States. We select the average network traffic (GB/measurement) from AT&T clients as our data. Our goal is to release a copy of this data while hiding the **0.95-quantile** (which reveals the network capability, §2.1).

Baselines. We compare our mechanisms discussed in §6 with three popular mechanisms proposed in prior work (§2.2): differentially-private density estimation (Wasserman & Zhou, 2010) (shortened to DP), the attribute-private Gaussian mechanism (Zhang et al., 2022) (shortened to AP), and the Wasserstein mechanism for distribution privacy (Chen & Ohrimenko, 2022) (shortened to DistP). Note that these mechanisms are not designed for our problem setting; in that sense, these experiments are not a fair comparison. Nonetheless, we include them simply to illustrate that prior techniques are not sufficient *for our problem setting*.

For a dataset of samples X = {x1, ..., xn}, DP works by: (1) dividing the space into m bins $B_1, \ldots, B_m$;⁴ (2) computing the histogram $C_i = \sum_{j=1}^{n} \mathbb{I}(x_j \in B_i)$; (3) adding noise to the histogram, $D_i = \max\left(0,\ C_i + \mathrm{Laplace}(0, \epsilon^2)\right)$, where Laplace(0, ϵ²) denotes random noise from the Laplace distribution with mean 0 and variance ϵ²; and (4) normalizing the histogram, $p_i = \frac{D_i}{\sum_{j=1}^{m} D_j}$. We can then draw $y_i$ according to the histogram and release Y = {y1, ..., yn} with differential privacy guarantees. AP works by releasing $\mathcal{Y} = \left\{x_i + \mathcal{N}(0, \epsilon^2)\right\}_{i=1}^{n}$.⁵ DistP works by releasing $\mathcal{Y} = \left\{x_i + \mathrm{Laplace}(0, \epsilon^2)\right\}_{i=1}^{n}$.⁶

⁴ In the Google Cluster Trace Dataset, the bins are already pre-specified (i.e., the platform IDs), so this step is skipped.
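For concreteness, here is a minimal sketch of the DP baseline described above (our own illustration, not the authors' code). Note that NumPy parameterizes Laplace noise by its scale b (variance 2b²), so passing ϵ as the scale below is a simplification of the variance-ϵ² noise in the text:

```python
import numpy as np

def dp_histogram_release(x, m, eps, seed=None):
    """DP baseline sketch (Wasserman & Zhou, 2010), as described in Sec. 7:
    histogram -> clipped Laplace noise -> normalize -> resample."""
    rng = np.random.default_rng(seed)
    counts, edges = np.histogram(x, bins=m)                  # steps (1)-(2)
    noisy = np.maximum(0, counts + rng.laplace(0, eps, m))   # step (3); scale, not variance
    p = noisy / noisy.sum()                                  # step (4)
    # Draw released samples: pick a bin with probability p, then a
    # uniform point inside that bin.
    idx = rng.choice(m, size=len(x), p=p)
    return rng.uniform(edges[idx], edges[idx + 1])

x = np.random.default_rng(1).exponential(scale=2.0, size=5000)
y = dp_histogram_release(x, m=50, eps=0.5, seed=2)
```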
Note that for each of these mechanisms, the noise parameters would normally be set carefully to match the desired privacy guarantees (e.g., differential privacy). In our case, since our privacy metric is different, it is unclear how to set the noise parameters for a fair privacy comparison. For this reason, we evaluate different settings of the noise parameters, and measure the empirical tradeoffs.

Metrics. Our privacy and distortion metrics depend on the prior distribution of the original data θ ∼ $\omega_\Theta$ (though the mechanism does not). In practice (and also in these experiments), the data holder only has one dataset. Therefore, we cannot empirically evaluate the proposed privacy and distortion metrics, and resort to surrogate metrics to bound our true privacy and distortion.

Surrogate privacy metric. For an original dataset X = {x1, ..., xn} and the released dataset Y = {y1, ..., yn}, we define the surrogate privacy metric $\tilde{\Pi}_{\epsilon,\omega_{\Theta}}$ as the error of an attacker who guesses the secret of the released dataset as the true secret: $\tilde{\Pi}_{\epsilon,\omega_{\Theta}} \triangleq -\left|g\left(\mathcal{X}\right) - g\left(\mathcal{Y}\right)\right|$, where g(D) is the mean of D, the fraction of a specific platform ID in D, and the 0.95-quantile of D in the WWT, GCT, and MBA datasets respectively. Note that in the definition of $\tilde{\Pi}_{\epsilon,\omega_{\Theta}}$, a minus sign is added so that a smaller value indicates stronger privacy, as in the privacy metric Eq. (2). This simple attacker strategy is in fact a good proxy for evaluating the privacy $\Pi_{\epsilon,\omega_{\Theta}}$ due to the following facts. (1) For our data release mechanisms for these secrets (Mechs. 1, 2 and 5), when the prior distribution is uniform, this strategy is actually optimal, so there is a direct mapping between $\tilde{\Pi}_{\epsilon}$ and $\Pi_{\epsilon,\omega_{\Theta}}$. (2) For AP applied to protecting the mean of the data (i.e., the Wikipedia Web Traffic Dataset experiments), this strategy gives an unbiased estimator of the secret. (3) For DP and AP in the other cases, this strategy may not give an unbiased estimator of the secret, but it yields an *upper bound* on the attacker's error.

Surrogate distortion metric. We define our surrogate distortion metric as the distance between the two datasets: $\tilde{\Delta} \triangleq d\left(p_{\mathcal{X}} \| p_{\mathcal{Y}}\right)$, where $p_{\mathcal{D}}$ denotes the empirical distribution of a dataset D, and d is defined as in our formulation in §3 (i.e., Wasserstein-1 distance for continuous distributions in WWT and MBA, and TV distance for discrete distributions in GCT). This metric evaluates how much the mechanism distorts the dataset. In fact, we can deduce a theoretical lower bound on the surrogate privacy and distortion metrics for secret = mean/fractions (shown later in Fig. 6) using similar techniques as the proofs in the main paper (see App. C.5).

Figure 6: Privacy (lower is better) and distortion (lower is better) of AP, DP, DistP, and ours on (a) the Wikipedia Web Traffic Dataset (secret=mean), (b) the Google Cluster Trace Dataset (secret=categorical fraction), and (c) the Measuring Broadband America Dataset (secret=quantile). Each point represents one instance of a data release mechanism with one hyper-parameter. "Lower bound" is the theoretical lower bound of the achievable region. Our data release mechanisms achieve a better privacy-distortion tradeoff than AP, DP, and DistP.

⁵ In the Google Cluster Trace Dataset, the Gaussian noise N(0, ϵ²) is added to the counts of the different platform IDs. We then normalize the counts and sample released platform IDs from this categorical distribution.

⁶ In the Google Cluster Trace Dataset, the Laplace noise Laplace(0, ϵ²) is added to the counts of the different platform IDs. We then normalize the counts and sample released platform IDs from this categorical distribution.
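Before turning to the results, the surrogate metrics above are straightforward to compute for the continuous-data cases; the following is a minimal sketch (our own illustration; the helper `surrogate_metrics` is an assumption, not from the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def surrogate_metrics(x, y, secret):
    """Empirical surrogate privacy and distortion (Sec. 7, Metrics)."""
    privacy = -abs(secret(x) - secret(y))       # \tilde{\Pi}: attacker guesses g(Y)
    distortion = wasserstein_distance(x, y)     # \tilde{\Delta} for continuous data
    return privacy, distortion

# Example with secret = 0.95-quantile, as in the MBA experiment.
x = np.random.default_rng(2).exponential(scale=2.0, size=5000)
y = x - x.mean() + 3.0                          # some released copy of the data
print(surrogate_metrics(x, y, secret=lambda d: np.quantile(d, 0.95)))
```

For the categorical case (GCT), the distortion line would instead compute the total variation distance between the two empirical histograms.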
## 7.1 **Results**

We enumerate the hyper-parameters of each method (bin size and ϵ for DP, ϵ for AP and DistP, and s for ours). For each method and each hyper-parameter, we compute the surrogate privacy and distortion metrics. The results are shown in Fig. 6 (bottom left is best); each data point represents one realization of the mechanism $\mathcal{M}_g$ under a distinct hyper-parameter setting. Two takeaways are below.

(1) Our data release mechanisms have good privacy-distortion trade-offs even when the assumptions do not hold. We envision that data holders can choose the data release mechanisms in the toolbox (Fig. 1) that match their needs. However, in practical scenarios, the data distributions supported in the toolbox may not always match real data exactly. Our data release mechanisms for means (i.e., Mech. 1 used in WWT) and fractions (i.e., Mech. 5 used in GCT) support general continuous distributions and categorical distributions respectively, and therefore there is no such distribution gap. Indeed, even for these surrogate metrics, our Mech. 1 and Mech. 5 are also optimal (see App. C.5). This is visualized in Figs. 6a and 6b, where we can see that our data release mechanisms match the theoretical lower bound of the trade-off. However, our data release mechanism for quantiles (i.e., Mech. 2 used in Fig. 6c) is order-optimal only when the distributions are within certain classes (§6.2). Observing that network traffic in MBA follows a one-sided fat-tailed distribution (not shown), we apply the data release mechanism for the exponential distribution (Mech. 2) to this dataset. Despite the distribution mismatch, our data release mechanism still achieves a good privacy-distortion tradeoff compared to DP, AP, and DistP (Fig. 6c). More discussion is below.

(2) Our data release mechanisms achieve a better privacy-distortion trade-off than DP, AP, and DistP. AP and DistP directly add Gaussian/Laplace noise to each sample. This process does not change the mean of the distribution in expectation. Therefore, Figure 6 shows that AP and DistP have a bad privacy-distortion tradeoff. DP quantizes (bins) the samples before adding noise. Quantization has a better property in terms of protecting the mean of the distribution, and therefore we see that DP has a better privacy-distortion tradeoff than AP and DistP, but still worse than ours. Note that in Fig. 6c, a few of the DP instances have better privacy-distortion trade-offs than ours. This is not an indication that DP is fundamentally better. Instead, it is due to the randomness in DP (from the added Laplace noise): some realizations of the specific noise in this experiment happened to lead to a better trade-off. Another instance of the DP algorithm could lead to a bad trade-off, and therefore DP's achievable trade-off points are widely spread.

In summary, these results confirm our intuition in §2.2 that DP, AP, and DistP are not suitable for summary statistic privacy (which is expected; they are designed for a different objective). As such, the quantization mechanism (under the summary statistic privacy framework) gives better practical protection for summary statistic privacy. Additional results on downstream tasks are in App. I.

## 8 **Limitations**

This work has several important limitations, some of which relate to the framework itself, while others are specific to the mechanisms and results we prove. We outline several of these limitations.

## 8.1 **Limitations Of The Framework**

Prior knowledge of distribution.
The current privacy metric $\Pi_{\epsilon,\omega_{\Theta}}$ depends on the prior distribution of the parameters $\omega_\Theta$, which is typically unknown. The consequence is that if a mechanism is analyzed under a mismatched prior, it may lead a data holder to over- or under-estimate their privacy parameter.

Composition guarantees. Another limitation of the current privacy metric $\Pi_{\epsilon,\omega_{\Theta}}$ is that it does not provide composition guarantees; in other words, if one applies a summary-statistic-private mechanism υ times, we cannot easily bound the privacy parameter of the υ-fold composed mechanism. In contrast, composition is an important and desirable property exhibited by differential privacy (Dwork et al., 2006). The lack of composition can be problematic in situations where a data holder wants to release a dataset (or correlated datasets) multiple times.

## 8.2 **Limitations Of The Analysis And Mechanisms**

Our analysis in this work considers the simplest set of cases, which are neither fully representative of how real data holders release data, nor of the secrets they wish to hide.

Number of secrets. In this work, we studied the case where the data holder only wishes to hide a single secret. In practice, data holders often want to hide multiple properties of their underlying data.

The dimension and the type of data distributions. Although our lower bounds in Section 4 apply to general prior distributions, we analyze the quantization mechanism under a limited set of one-dimensional distributions (Table 1) in which the different parameters of the distribution are drawn independently of each other. An interesting direction for future work is to design mechanisms that have good tradeoffs under prior distributions with correlated parameters.

## 9 **Discussion And Future Work**

We introduce *summary statistic privacy* for defining, analyzing, and protecting against summary statistic privacy concerns in data sharing applications. This framework can be used to analyze the leakage of statistical information and the privacy-distortion trade-offs of data release mechanisms (§3 and §4). Our data release mechanisms can be used to protect statistical information (§5 and §6). However, as discussed in §8, this paper leaves many questions unanswered. Several of these pose interesting directions for future work.

Approximation error. We studied a number of data distributions and prior distributions in this work. However, an interesting question is to bound the error in the privacy and distortion metrics as a function of the approximation error incurred when describing either the original data distribution or the prior.

Extensions. As described in §8, one limitation of the current privacy metric $\Pi_{\epsilon,\omega_{\Theta}}$ is that it depends on the prior distribution of the parameters $\omega_\Theta$, which is unknown in many applications. Motivated by maximal leakage (Issa et al., 2019) (§2.2), one possibility is to consider a *normalized* privacy metric:

$$\Pi_{\epsilon,\omega_{\Theta}}^{\prime}\triangleq\operatorname*{sup}_{\omega_{\Theta}}\;\log{\frac{\Pi_{\epsilon,\omega_{\Theta}}}{\operatorname*{sup}_{\hat{g}}\;\mathbb{P}\left({\hat{g}}\left(\omega_{\Theta}\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)}},$$

where $\hat{g}(\omega_\Theta)$ is an attacker that knows the prior distribution but does not see the released data, and the denominator is the probability that the strongest such attacker guesses the secret within tolerance ϵ. Similar to maximal leakage, we consider the worst-case leakage among all possible priors.
This normalized $\Pi'_{\epsilon,\omega_{\Theta}}$ captures how much additional "information" the released data provides to the attacker in the worst case (see also inferential privacy (Ghosh & Kleinberg, 2016)). This privacy definition is so strong that we will not be able to achieve good privacy and reasonable distortion at the same time.

Proposition 3. *Let $\overline{\Delta} \triangleq \frac{1}{2}\sup_{\theta_1,\theta_2\in Supp(\omega_\Theta)} d\left(\omega_{X_{\theta_1}}\|\omega_{X_{\theta_2}}\right)$. There exists no $\mathcal{M}_g$ such that $\Pi'_{\epsilon,\omega_{\Theta}} < \log 2$ and $\Delta < \overline{\Delta}$.*

The proof is in App. C.6. It would be interesting to further study the feasibility of such a formulation, for instance by changing the utility metric to an expected distortion, rather than a worst-case one.

## References

The CAIDA UCSD anonymized internet traces. https://www.caida.org/catalog/datasets/passive_dataset. Accessed: 2022-01-30.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *International Conference on Machine Learning*, pp. 214–223. PMLR, 2017.

Giuseppe Ateniese, Luigi V Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers. *International Journal of Security and Networks*, 10(3):137–150, 2015.

Richard Bellman. Dynamic programming. *Science*, 153(3731):34–37, 1966.

Elias Chaibub Neto, Abhishek Pratap, Thanneer M Perumal, Meghasyam Tummalacherla, Phil Snyder, Brian M Bot, Andrew D Trister, Stephen H Friend, Lara Mangravite, and Larsson Omberg. Detecting the impact of subject characteristics on machine learning-based diagnostic applications. *NPJ Digital Medicine*, 2(1):1–6, 2019.

Konstantinos Chatzikokolakis, Miguel E Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. Broadening the scope of differential privacy using metrics. In *Privacy Enhancing Technologies: 13th International Symposium, PETS 2013, Bloomington, IN, USA, July 10-12, 2013. Proceedings 13*, pp. 82–102. Springer, 2013.

Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, and Jonathan Ullman. Snap: Efficient extraction of private properties with poisoning. In *2023 IEEE Symposium on Security and Privacy (SP)*, pp. 1935–1952. IEEE Computer Society, 2022.

Michelle Chen and Olga Ohrimenko. Protecting global properties of datasets with distribution privacy mechanisms. *arXiv preprint arXiv:2207.08367*, 2022.

Nazli Choucri, Stuart Madnick, and Priscilla Koepke. Institutions for cyber security: International responses and data sharing initiatives. *Cambridge, MA: Massachusetts Institute of Technology*, 2016.

Federal Communications Commission. Raw data - measuring broadband america - seventh report, 2018. https://www.fcc.gov/reports-research/reports/measuring-broadband-america/raw-data-measuring-broadband-america-seventh.

Eli Cortez, Anand Bonde, Alexandre Muzio, Mark Russinovich, Marcus Fontoura, and Ricardo Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In *Proceedings of the 26th Symposium on Operating Systems Principles*, pp. 153–167, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006.*
*Proceedings 3*, pp. 265–284. Springer, 2006.

Cristóbal Esteban, Stephanie L Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional GANs. *arXiv preprint arXiv:1706.02633*, 2017.

Karan Ganju, Qi Wang, Wei Yang, Carl A Gunter, and Nikita Borisov. Property inference attacks on fully connected neural networks using permutation invariant representations. In *Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security*, pp. 619–633, 2018.

Arpita Ghosh and Robert Kleinberg. Inferential privacy guarantees for differentially private mechanisms. *arXiv preprint arXiv:1603.01508*, 2016.

Google. Web traffic time series forecasting, 2018. https://www.kaggle.com/c/web-traffic-time-series-forecasting.

Ibrahim Issa, Aaron B Wagner, and Sudeep Kamath. An operational approach to information leakage. *IEEE Transactions on Information Theory*, 66(3):1625–1657, 2019.

James B Jacobs and Dimitra Blitsa. Sharing criminal records: The United States, the European Union and Interpol compared. *Loy. LA Int'l & Comp. L. Rev.*, 30:125, 2008.

Junchen Jiang, Vyas Sekar, Henry Milner, Davis Shepherd, Ion Stoica, and Hui Zhang. CFA: A practical prediction system for video QoE optimization. In *13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16)*, pp. 137–150, 2016.

James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar. PATE-GAN: Generating synthetic data with differential privacy guarantees. In *International Conference on Learning Representations*, 2018.

Yusuke Kawamoto and Takao Murakami. Local obfuscation mechanisms for hiding probability distributions. In *Computer Security–ESORICS 2019: 24th European Symposium on Research in Computer Security, Luxembourg, September 23–27, 2019, Proceedings, Part I 24*, pp. 128–148. Springer, 2019.

Daniel Kifer and Ashwin Machanavajjhala. Pufferfish: A framework for mathematical privacy definitions. *ACM Transactions on Database Systems (TODS)*, 39(1):1–36, 2014.

Hau L Lee and Seungjin Whang. Information sharing in a supply chain. *International Journal of Manufacturing Technology and Management*, 1(1):79–93, 2000.

Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. *Advances in Neural Information Processing Systems*, 31, 2018.

Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, and Vyas Sekar. Using GANs for sharing networked time series data: Challenges, initial promise, and open questions. In *Proceedings of the ACM Internet Measurement Conference*, pp. 464–483, 2020.

Terrance Liu and Zhiwei Steven Wu. Private synthetic data with hierarchical structure. *arXiv preprint arXiv:2206.05942*, 2022.

Shutian Luo, Huanle Xu, Chengzhi Lu, Kejiang Ye, Guoyao Xu, Liping Zhang, Yu Ding, Jian He, and Chengzhong Xu. Characterizing microservice dependency and performance: Alibaba trace analysis. In *Proceedings of the ACM Symposium on Cloud Computing*, pp. 412–426, 2021.

Saeed Mahloujifar, Esha Ghosh, and Melissa Chase. Property inference from poisoning. In *2022 IEEE Symposium on Security and Privacy (SP)*, pp. 1120–1137. IEEE, 2022.

Ali Makhdoumi, Salman Salamatian, Nadia Fawaz, and Muriel Médard. From the information bottleneck to the privacy funnel. In *2014 IEEE Information Theory Workshop (ITW 2014)*, pp. 501–505. IEEE, 2014.

Antonis Manousis, Harshil Shah, Henry Milner, Yan Li, Hui Zhang, and Vyas Sekar. The shape of view: an alert system for video viewership anomalies. In *Proceedings of the 21st ACM Internet Measurement Conference*, pp.
245–260, 2021.

Charles Reiss, John Wilkes, and Joseph L Hellerstein. Google cluster-usage traces: format + schema. *Google Inc., White Paper*, pp. 1–14, 2011.

Charles Reiss, John Wilkes, and Joseph L Hellerstein. Obfuscatory obscanturism: making workload traces of commercially-sensitive systems safe to release. In *2012 IEEE Network Operations and Management Symposium*, pp. 1279–1286. IEEE, 2012.

Anshuman Suri and David Evans. Formalizing and estimating distribution inference risks. *arXiv preprint arXiv:2109.06024*, 2021.

Anshuman Suri, Yifu Lu, Yanjin Chen, and David Evans. Dissecting distribution inference. In *First IEEE Conference on Secure and Trustworthy Machine Learning*, 2023.

Leigh R Warren, Jonathan Clarke, Sonal Arora, and Ara Darzi. Improving data sharing between acute hospitals in England: an overview of health record system distribution and retrospective observational analysis of inter-hospital transitions of care. *BMJ Open*, 9(12):e031637, 2019.

Larry Wasserman and Shuheng Zhou. A statistical framework for differential privacy. *Journal of the American Statistical Association*, 105(489):375–389, 2010.

John Wilkes. Google cluster-usage traces v3. Technical report, Google Inc., Mountain View, CA, USA, April 2020. Posted at https://github.com/google/cluster-data/blob/master/ClusterData2019.md.

Yucheng Yin, Zinan Lin, Minhao Jin, Giulia Fanti, and Vyas Sekar. Practical GAN-based synthetic IP header trace generation using NetShare. In *Proceedings of the ACM SIGCOMM 2022 Conference*, pp. 458–472, 2022.

Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time-series generative adversarial networks. *Advances in Neural Information Processing Systems*, 32, 2019.

James Hongyi Zeng. Data sharing on traffic pattern inside Facebook's data center network - Meta Research, Jan 2017. URL https://research.facebook.com/blog/2017/01/data-sharing-on-traffic-pattern-inside-facebooks-datacenter-network/.

Wanrong Zhang, Shruti Tople, and Olga Ohrimenko. Leakage of dataset properties in multi-party machine learning. In *USENIX Security Symposium*, pp. 2687–2704, 2021.

Wanrong Zhang, Olga Ohrimenko, and Rachel Cummings. Attribute privacy: Framework and mechanisms. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, pp. 757–766, 2022.

Junhao Zhou, Yufei Chen, Chao Shen, and Yang Zhang. Property inference attacks against GANs. *arXiv preprint arXiv:2111.07608*, 2021.

## Appendix

## A **Analysis Of The Alternative Formulation**

In this section, we present the alternative formulation of minimizing the privacy metric $\Pi_{\epsilon,\omega_{\Theta}}$ subject to a constraint on the distortion ∆:

$$\min_{\mathcal{M}_g}\ \Pi_{\epsilon,\omega_{\Theta}}\quad\text{subject to}\quad\Delta\le T\tag{16}$$

Theorem 2 (Lower bound of privacy-distortion tradeoff). *Let $D(X_{\theta_1}, X_{\theta_2}) \triangleq \frac{1}{2} d\left(\omega_{X_{\theta_1}}\|\omega_{X_{\theta_2}}\right)$, where d(·∥·) is defined in Eq. (3). Further, let $R(X_{\theta_1}, X_{\theta_2}) \triangleq |g(\theta_1) - g(\theta_2)|$, and let $\gamma \triangleq \inf_{\theta_1,\theta_2\in Supp(\omega_\Theta)} \frac{D(X_{\theta_1},X_{\theta_2})}{R(X_{\theta_1},X_{\theta_2})}$. For any T > 0, when ∆ ≤ T, we have $\Pi_{\epsilon,\omega_{\Theta}} \geq \lceil\frac{T}{2\gamma\epsilon}\rceil^{-1}$.*

Proof. For any θ′, we have
$$\begin{aligned}T\geq\Delta&\geq\sup_{\theta\in Supp(\omega_{\Theta}),\,z\in Supp(\omega_{Z}):\,\mathcal{M}_{g}(\theta,z)=\theta'} d\left(\omega_{X_{\theta}}\|\omega_{X_{\theta'}}\right)\\&\geq\sup_{\theta_{i}\in Supp(\omega_{\Theta}),\,z_{i}:\,\mathcal{M}_{g}(\theta_{i},z_{i})=\theta'} D\left(X_{\theta_{1}},X_{\theta_{2}}\right)\qquad(17)\\&\geq\gamma\cdot\sup_{\theta_{i}\in Supp(\omega_{\Theta}),\,z_{i}:\,\mathcal{M}_{g}(\theta_{i},z_{i})=\theta'} R\left(X_{\theta_{1}},X_{\theta_{2}}\right),\end{aligned}$$
where Eq. (17) comes from the triangle inequality.
Let
$$L_{\theta'}\triangleq\inf_{\theta\in Supp(\omega_{\Theta}),\,z:\,\mathcal{M}_{g}(\theta,z)=\theta'} g\left(\theta\right),\qquad R_{\theta'}\triangleq\sup_{\theta\in Supp(\omega_{\Theta}),\,z:\,\mathcal{M}_{g}(\theta,z)=\theta'} g\left(\theta\right).$$
From the above result, we know that $R_{\theta'} - L_{\theta'} \leq \frac{T}{\gamma}$. We can define a sequence of attackers such that $\hat{g}_i(\theta') = L_{\theta'} + (i + 0.5)\cdot 2\epsilon$ for $i \in \left\{0, 1, \ldots, \lceil\frac{T}{2\gamma\epsilon}\rceil - 1\right\}$ (Fig. 7).

Figure 7: The construction of attackers for the proof of Thm. 2. The 2ϵ ranges of $\hat{g}_0, \ldots, \hat{g}_{\lceil T/(2\gamma\epsilon)\rceil-1}$ jointly cover the entire range of possible secrets $[L_{\theta'}, R_{\theta'}]$. Therefore, there exists one attacker whose probability of guessing the secret correctly within ϵ is $\geq \lceil\frac{T}{2\gamma\epsilon}\rceil^{-1}$ (Eq. (18)).

We have
$$\sum_{i}\mathbb{P}\left(\hat{g}_{i}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\geq1,$$
and therefore,
$$\max_{i}\mathbb{P}\left(\hat{g}_{i}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\geq\lceil\frac{T}{2\gamma\epsilon}\rceil^{-1},\tag{18}$$
which implies that
$$\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\geq\lceil\frac{T}{2\gamma\epsilon}\rceil^{-1}.$$
Therefore, we have
$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)\\&=\sup_{\hat{g}}\mathbb{E}\left(\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&=\mathbb{E}\left(\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&\geq\lceil\frac{T}{2\gamma\epsilon}\rceil^{-1}.\end{aligned}$$

## B **Binary Search And Greedy Algorithms For Designing Quantization Mechanism**

We use the binary search algorithm in Alg. 2 to search for the distortion budget that matches the privacy budget under the optimal data release mechanism.

Algorithm 2: Data release mechanism with privacy budget.
Input: Parameter range: $[\underline{\theta}, \overline{\theta})$; Privacy budget: T; Distortion budget search range: $[\underline{B}, \overline{B}]$; Step size: κ (which divides $\overline{\theta} - \underline{\theta}$); Precision: η
1 **while** $\overline{B} - \underline{B} \geq \eta$ **do**
2  pri, S, θ′ ← Algorithm-1$\left(\underline{\theta}, \overline{\theta}, \frac{\underline{B}+\overline{B}}{2}, \kappa\right)$
3  **if** pri > T **then**
4   $\underline{B} \leftarrow \frac{\underline{B}+\overline{B}}{2}$
5  **else**
6   $\overline{B} \leftarrow \frac{\underline{B}+\overline{B}}{2}$
7 **return** *Data release mechanism parameters*: S, θ′

We provide the greedy algorithm in Alg. 3. In this algorithm, we greedily select the ranges of θ for each $\mathcal{S}_i$ in order. The left end point of the first range is the parameter lower bound (Line 2). We then scan across all possible right end points such that the distortion for this range will not exceed the budget T (Line 8), and pick the one that gives the minimal attacker confidence (Line 10). After deciding the range of θ, we set the released distribution for this range (Line 16), and then move on to the next range (Line 21).
The time complexity of this algorithm is $O\left(\left(\frac{\overline{\theta}-\underline{\theta}}{\kappa}\right)^{2}\cdot C_{D}\cdot C_{P}\right)$, the same as the dynamic programming algorithm.

Algorithm 3: Greedy-based data release mechanism for single-parameter distributions.
Input: Parameter range: $[\underline{\theta}, \overline{\theta})$; Prior over parameter: $f_\Theta$; Distortion budget: T; Step size: κ (which divides $\overline{\theta} - \underline{\theta}$)
1 I ← ∅
2 L ← $\underline{\theta}$
3 *privacy* ← 0
4 **while** L < $\overline{\theta}$ **do**
5  min_p ← ∞
6  min_R ← NULL
7  R ← L
8  **while** R ≤ $\overline{\theta}$ and D(L, R) ≤ T **do**
9   p ← P(L, R)
10   **if** p ≤ min_p **then**
11    min_p ← p
12    min_R ← R
13   R ← R + κ
14  **if** min_R *is not NULL* **then**
15   $\mathcal{S}_L$ ← {$X_\theta$ : θ ∈ (L, min_R]}
16   $\theta'_L$ ← D(L, min_R)
17   I ← I ∪ {L}
18   *privacy* ← $\frac{\int_{\underline{\theta}}^{L} f_\Theta(t)\,dt}{\int_{\underline{\theta}}^{min\_R} f_\Theta(t)\,dt}$ · *privacy* + $\frac{\int_{L}^{min\_R} f_\Theta(t)\,dt}{\int_{\underline{\theta}}^{min\_R} f_\Theta(t)\,dt}$ · min_p
19  **else**
20   ERROR: No answer
21  L ← min_R
22 **return** *privacy*, {$\mathcal{S}_i$ : i ∈ I}, {$\theta'_i$ : i ∈ I}

## C **Proofs**

## C.1 Proof Of **Corollary 1**

Proof. For any $X_{\theta_1}, X_{\theta_2}$, we have
$$\begin{aligned}D\left(X_{\theta_{1}},X_{\theta_{2}}\right)&=\frac{1}{2}d_{\text{Wasserstein-1}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)\\&\geq\frac{1}{2}\left|g\left(\theta_{1}\right)-g\left(\theta_{2}\right)\right|\qquad(19)\\&=\frac{1}{2}R\left(X_{\theta_{1}},X_{\theta_{2}}\right),\end{aligned}$$
where Eq. (19) comes from Jensen's inequality. Therefore, we have $\gamma = \inf_{\theta_1,\theta_2\in Supp(\omega_\Theta)} \frac{D(X_{\theta_1},X_{\theta_2})}{R(X_{\theta_1},X_{\theta_2})} \geq \frac{1}{2}$. The result then follows from Thm. 1.

## C.2 Proof Of **Prop. 1**

Proof. For any released parameter θ′ = (u′, v′), there exists i ∈ {0, ..., N−1} such that $u' = \underline{u} + (i + 0.5)\cdot s$. We have
$$\begin{aligned}\sup_{\hat{g}}\ \mathbb{P}&\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\\&=\sup_{\hat{g}}\int_{\underline{u}+i\cdot s}^{\underline{u}+(i+1)\cdot s}f_{U|U'}\left(u|u'\right)\cdot\left(\int_{u-\epsilon}^{u+\epsilon}f_{\hat{g}(u',v')}\left(h\right)dh\right)du\\&=\sup_{\hat{g}}\int_{\underline{u}+i\cdot s-\epsilon}^{\underline{u}+(i+1)\cdot s+\epsilon}f_{\hat{g}(u',v')}(h)\cdot\left(\int_{\hat{g}\left(f_{X_{u',v'}}\right)-\epsilon}^{\hat{g}\left(f_{X_{u',v'}}\right)+\epsilon}f_{U|U'}\left(u|u'\right)du\right)dh\\&\leq\sup_{\hat{g}}\int_{\underline{u}+i\cdot s-\epsilon}^{\underline{u}+(i+1)\cdot s+\epsilon}\frac{2\epsilon}{s}\cdot f_{\hat{g}(u',v')}(h)\,dh\\&\leq\frac{2\epsilon}{s}.\end{aligned}$$
Therefore, we have
$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)\\&=\sup_{\hat{g}}\mathbb{E}\left(\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&=\mathbb{E}\left(\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&\leq\frac{2\epsilon}{s}.\end{aligned}$$
For the distortion, we can easily get that $\Delta = \frac{s}{2}$. According to Corollary 1, we have $\Delta_{opt} > \left(\lceil\frac{1}{\Pi_{\epsilon,\omega_\Theta}}\rceil - 1\right)\cdot\epsilon \geq \epsilon$. We can get that
$$\begin{aligned}\Delta&=\Delta_{\mathrm{opt}}+\Delta-\Delta_{\mathrm{opt}}\\&<\Delta_{\mathrm{opt}}+\Delta-\left(\lceil\frac{1}{\Pi_{\epsilon,\omega_{\Theta}}}\rceil-1\right)\cdot\epsilon\\&\leq\Delta_{\mathrm{opt}}+\epsilon+\Delta-\frac{\epsilon}{\Pi_{\epsilon,\omega_{\Theta}}}\\&\leq\Delta_{\mathrm{opt}}+\epsilon\\&\leq2\Delta_{\mathrm{opt}}.\end{aligned}$$

## C.3 Proof Of **Corollary 2**

## C.3.1 **Exponential Distribution**

Proof. Let $X_{\lambda_1}, X_{\lambda_2}$ be two exponential random variables. We have
$$\frac{D\left(X_{\lambda_{1}},X_{\lambda_{2}}\right)}{R\left(X_{\lambda_{1}},X_{\lambda_{2}}\right)}=\frac{\frac{1}{2}\left(\lambda_{1}-\lambda_{2}\right)}{-\ln\left(1-\alpha\right)\left(\lambda_{1}-\lambda_{2}\right)}=-\frac{1}{2\ln\left(1-\alpha\right)}.\tag{20}$$
Therefore, we can get that
$$\gamma=-\frac{1}{2\ln\left(1-\alpha\right)}.$$

## C.3.2 **Shifted Exponential Distribution**

Proof. Let $X_{\lambda_1,h_1}, X_{\lambda_2,h_2}$ be random variables from shifted exponential distributions. Let λ2 ≤ λ1 without loss of generality. Let $a = \frac{\lambda_1}{\lambda_2}$ and $b = \left(h_1/\lambda_1 - h_2/\lambda_2\right)\lambda_2$.
We can get that $f_{X_{\lambda_1,h_1}}(x) = a f_{X_{\lambda_2,h_2}}\left(a(x+b)\right)$, and
$$\begin{aligned}D\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)&=\frac{1}{2}d_{\text{Wasserstein-1}}\left(\omega_{X_{\lambda_{1},h_{1}}}\|\omega_{X_{\lambda_{2},h_{2}}}\right)\\&=\frac{1}{2}\int_{h_{1}}^{+\infty}\left|x-\frac{x}{a}-b\right|f_{X_{\lambda_{1},h_{1}}}\left(x\right)dx\\&=\frac{\lambda_{2}}{2\lambda_{1}}\int_{h_{1}}^{+\infty}\left|\left(1/\lambda_{2}-1/\lambda_{1}\right)x+h_{1}/\lambda_{1}-h_{2}/\lambda_{2}\right|e^{-\frac{1}{\lambda_{1}}\left(x-h_{1}\right)}dx\\&=\begin{cases}\frac{1}{2}\left(h_{2}-h_{1}+\lambda_{2}-\lambda_{1}\right)-e^{\frac{h_{2}-h_{1}}{\lambda_{2}-\lambda_{1}}}\left(\lambda_{2}-\lambda_{1}\right)&\left(h_{1}<h_{2}\right)\\\frac{1}{2}\left(h_{1}-h_{2}+\lambda_{1}-\lambda_{2}\right)&\left(h_{1}\geq h_{2}\right)\end{cases},\end{aligned}\tag{21}$$
$$R\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)=\left|\ln\left(1-\alpha\right)\left(\lambda_{1}-\lambda_{2}\right)+h_{2}-h_{1}\right|.$$

When $h_1 < h_2$, let $t = \frac{h_2-h_1}{\lambda_1-\lambda_2} \in (0, +\infty)$. We have
$$\begin{aligned}\frac{D\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)}{R\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)}&=\frac{h_{2}-h_{1}+\lambda_{2}-\lambda_{1}-2e^{\frac{h_{2}-h_{1}}{\lambda_{2}-\lambda_{1}}}\left(\lambda_{2}-\lambda_{1}\right)}{2\left|\ln\left(1-\alpha\right)\left(\lambda_{1}-\lambda_{2}\right)+h_{2}-h_{1}\right|}\\&=\frac{t+2e^{-t}-1}{2\left|\ln\left(1-\alpha\right)+t\right|}\\&\geq\begin{cases}\frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{-1}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[0,1-e^{-1})\\\frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{0}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[1-e^{-1},1)\end{cases},\end{aligned}$$
where $W_{-1}$ and $W_0$ are Lambert W functions. "=" is achieved when
$$t=t_{0}\triangleq\begin{cases}-1-\ln\left(1-\alpha\right)-W_{-1}\left(-\frac{\ln\left(1-\alpha\right)+1}{2(1-\alpha)e}\right)&\left(\alpha\in\left[0,1-e^{-1}\right)\right)\\-1-\ln\left(1-\alpha\right)-W_{0}\left(-\frac{\ln\left(1-\alpha\right)+1}{2(1-\alpha)e}\right)&\left(\alpha\in\left[1-e^{-1},1\right)\right)\end{cases}.$$

When $h_1 \geq h_2$, let $t = \frac{h_1-h_2}{\lambda_1-\lambda_2} \in (0, +\infty)$. We have
$$\begin{aligned}\frac{D\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)}{R\left(X_{\lambda_{1},h_{1}},X_{\lambda_{2},h_{2}}\right)}&=\frac{h_{1}-h_{2}+\lambda_{1}-\lambda_{2}}{2\left|\ln\left(1-\alpha\right)\left(\lambda_{1}-\lambda_{2}\right)+h_{2}-h_{1}\right|}\\&=\frac{t+1}{2\left|\ln\left(1-\alpha\right)-t\right|}\\&\geq\min\left\{\frac{1}{2},-\frac{1}{2\ln\left(1-\alpha\right)}\right\}.\end{aligned}$$
Therefore, we can get that
$$\gamma=\begin{cases}\frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{-1}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[0,1-e^{-1})\\\frac{1}{2}\left|1+\frac{\ln(1-\alpha)+1}{W_{0}\left(-\frac{\ln(1-\alpha)+1}{2(1-\alpha)e}\right)}\right|&\alpha\in[1-e^{-1},1)\end{cases}.$$

## C.4 Proof Of **Prop. 2**

## C.4.1 **Exponential Distribution**

Proof. The proof of ∆ and $\Pi_{\epsilon,\omega_\Theta}$ is the same as App. C.2, except that we use the D(·,·) and R(·,·) from Eq. (20). For $\Delta_{opt}$, we have $\Delta_{opt} > \left(\lceil\frac{1}{\Pi_{\epsilon,\omega_\Theta}}\rceil - 1\right)\cdot 2\gamma\epsilon \geq 2\gamma\epsilon$, where $\gamma = -\frac{1}{2\ln(1-\alpha)}$. We can get that
$$\begin{aligned}\Delta&=\Delta_{\mathrm{opt}}+\Delta-\Delta_{\mathrm{opt}}\\&<\Delta_{\mathrm{opt}}+\Delta-\left(\lceil\frac{1}{\Pi_{\epsilon,\omega_{\Theta}}}\rceil-1\right)\cdot2\gamma\epsilon\\&\leq\Delta_{\mathrm{opt}}+2\gamma\epsilon+\Delta-\frac{2\gamma\epsilon}{\Pi_{\epsilon,\omega_{\Theta}}}\\&=\Delta_{\mathrm{opt}}+2\gamma\epsilon\\&\leq2\Delta_{\mathrm{opt}}.\end{aligned}$$

## C.4.2 **Shifted Exponential Distribution**

Proof. We first focus on the proof for $\Pi_{\epsilon,\omega_\Theta}$. In Fig. 8, we separate the space of possible data parameters into two regions represented by yellow and green colors. The yellow regions $S_{yellow}$ constitute right triangles with height s and width $|t_0|s$. The green region $S_{green}$ is the rest of the parameter space. The high-level idea of our proof is as follows. Note that for any parameter θ ∈ $S_{green}$, there exists an $\mathcal{S}_{i,h}$ s.t. θ ∈ $\mathcal{S}_{i,h}$ and $\mathcal{S}_{i,h} \subset S_{green}$. Therefore, we can bound the attack success rate if θ ∈ $S_{green}$. At the same time, the probability of θ ∈ $S_{yellow}$ is bounded. Therefore, we can bound the overall attacker's success rate (i.e., $\Pi_{\epsilon,\omega_\Theta}$). More specifically, let the optimal attacker be $\hat{g}^{*}$.
Figure 8: The construction for the proof of Prop. 2 for shifted exponential distributions. We separate the space of possible parameters into two regions (yellow and green) and bound the attacker's success rate on each region separately.

We have
$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)\\&=\int_{\theta\in S_{green}}p(\theta)\,\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta+\int_{\theta\in S_{yellow}}p(\theta)\,\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta\\&<\frac{2\epsilon}{\left|\ln\left(1-\alpha\right)+t_{0}\right|s}+\frac{\left|t_{0}\right|s}{\overline{h}-\underline{h}}.\end{aligned}$$

For the distortion, it is straightforward to get that $\Delta = \frac{s}{2}(t_0 - 1) + se^{-t_0}$ from Eq. (21), and $\Delta_{opt} > \left(\lceil\frac{1}{\Pi_{\epsilon,\omega_\Theta}}\rceil - 1\right)\cdot 2\gamma\epsilon \geq 2\gamma\epsilon$, where γ is defined in Corollary 2. Denote $\zeta = \frac{2\epsilon}{|\ln(1-\alpha)+t_0|s} + \frac{|t_0|s}{\overline{h}-\underline{h}} - \Pi_{\epsilon,\omega_\Theta}$. We can get that
$$\left(\Pi_{\epsilon,\omega_\Theta}+\zeta-\frac{|t_0|s}{\overline{h}-\underline{h}}\right)\cdot\Delta=2\gamma\epsilon$$
and
$$\begin{aligned}\Delta&=\Delta_{\text{opt}}+\Delta-\Delta_{\text{opt}}\\&<\Delta_{\text{opt}}+\Delta-\left(\lceil\frac{1}{\Pi_{\epsilon,\omega_\Theta}}\rceil-1\right)\cdot2\gamma\epsilon\\&\leq\Delta_{\text{opt}}+2\gamma\epsilon+\Delta-\frac{2\gamma\epsilon}{\Pi_{\epsilon,\omega_\Theta}}\\&=\Delta_{\text{opt}}+2\gamma\epsilon+\frac{\frac{|t_0|s}{\overline{h}-\underline{h}}-\zeta}{\frac{2\epsilon}{|\ln(1-\alpha)+t_0|s}+\frac{|t_0|s}{\overline{h}-\underline{h}}-\zeta}\cdot\Delta\\&<\Delta_{\text{opt}}+2\gamma\epsilon+\frac{\frac{|t_0|s}{\overline{h}-\underline{h}}}{\frac{2\epsilon}{|\ln(1-\alpha)+t_0|s}+\frac{|t_0|s}{\overline{h}-\underline{h}}}\cdot\Delta.\end{aligned}$$
Therefore,
$$\begin{aligned}\Delta&<\left(1+\frac{|t_{0}|\cdot|\ln\left(1-\alpha\right)+t_{0}|s^{2}}{2\epsilon\left(\overline{h}-\underline{h}\right)}\right)\left(\Delta_{\text{opt}}+2\gamma\epsilon\right)\\&\leq\left(2+\frac{|t_{0}|\cdot|\ln\left(1-\alpha\right)+t_{0}|s^{2}}{\epsilon\left(\overline{h}-\underline{h}\right)}\right)\Delta_{\text{opt}}.\end{aligned}$$
$t_0$ is bounded when $\alpha \in [0, c_1] \cup [1 - \frac{1}{e}, c_2]$, where $c_1 \in \left(0, 1 - \frac{1}{e}\right)$ and $c_2 \in \left(1 - \frac{1}{e}, 1\right)$. Therefore, when α ∈ [0.01, 0.25] ∪ [0.75, 0.99], we can get that
$$\lim_{\frac{s^{2}}{\overline{h}-\underline{h}}\to0}\Delta<\lim_{\frac{s^{2}}{\overline{h}-\underline{h}}\to0}\left(2+\frac{|t_{0}|\cdot|\ln\left(1-\alpha\right)+t_{0}|s^{2}}{\epsilon\left(\overline{h}-\underline{h}\right)}\right)\Delta_{\mathrm{opt}}<3\Delta_{\mathrm{opt}}.$$

## C.5 **Proofs For The Surrogate Metrics**

## C.5.1 **Secret=Mean**

For any $p_{\mathcal{Y}}$, we have
$$\tilde{\Delta}=d_{\mathrm{Wasserstein-1}}\left(p_{\mathcal{X}}\|p_{\mathcal{Y}}\right)\geq\left|\frac{1}{n}\sum_{i=1}^{n}x_{i}-\frac{1}{n}\sum_{i=1}^{n}y_{i}\right|=-\tilde{\Pi}_{\epsilon,\omega_{\Theta}}.$$
For $p_{\mathcal{Y}}$ released from our mechanism (§6.3), we have $\tilde{\Delta} = d_{\mathrm{Wasserstein-1}}\left(p_{\mathcal{X}}\|p_{\mathcal{Y}}\right) = \left|\frac{1}{n}\sum_{i=1}^{n} x_i - \frac{1}{n}\sum_{i=1}^{n} y_i\right| = -\tilde{\Pi}_{\epsilon,\omega_\Theta}$.

## C.5.2 **Secret=Fraction**

Assume that we want to protect the fraction of class j, and fraction(D, j) denotes the fraction of sample j in the dataset D. For any $p_{\mathcal{Y}}$, we have $\tilde{\Delta} = d_{\mathrm{TV}}\left(p_{\mathcal{X}}\|p_{\mathcal{Y}}\right) \geq \left|\mathrm{fraction}\left(\mathcal{X}, j\right) - \mathrm{fraction}\left(\mathcal{Y}, j\right)\right| = -\tilde{\Pi}_{\epsilon,\omega_\Theta}$. For $p_{\mathcal{Y}}$ released from our mechanism (Mech. 5), we have $\tilde{\Delta} = d_{\mathrm{TV}}\left(p_{\mathcal{X}}\|p_{\mathcal{Y}}\right) = \left|\mathrm{fraction}\left(\mathcal{X}, j\right) - \mathrm{fraction}\left(\mathcal{Y}, j\right)\right| = -\tilde{\Pi}_{\epsilon,\omega_\Theta}$.

## C.6 Proof Of **Prop. 3**

Proof. We prove by contradiction.
For any two parameters $\theta_1, \theta_2 \in Supp(\omega_\Theta)$, we can construct a prior distribution $\mathbb{P}(\theta = \theta_1) = \mathbb{P}(\theta = \theta_2) = \frac{1}{2}$. Because $\Pi'_{\epsilon,\omega_\Theta} < \log 2$, we have
$$\sup_{\hat{g}}\ \mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)<1$$
under this prior distribution. Therefore, there exist θ′ and $z_1, z_2 \in Supp(\omega_Z)$ s.t. $\mathcal{M}_g(\theta_1, z_1) = \mathcal{M}_g(\theta_2, z_2) = \theta'$. According to the triangle inequality, we have $\max\left(d\left(\omega_{X_{\theta_1}}\|\omega_{X_{\theta'}}\right), d\left(\omega_{X_{\theta_2}}\|\omega_{X_{\theta'}}\right)\right) \geq \frac{1}{2}d\left(\omega_{X_{\theta_1}}\|\omega_{X_{\theta_2}}\right)$. Therefore, we have $\Delta \geq \overline{\Delta}$, which gives a contradiction.

## D **Privacy-Distortion Performance Of Data Release Mechanism With Relaxed Assumption**

## D.1 Privacy-Distortion Performance of Mech. 1 **with Relaxed Assumption**

We relax Asm. 1 as follows.

Assumption 3. *The distribution parameter vector θ can be written as (u, v), where u ∈ $\mathbb{R}$, v ∈ $\mathbb{R}^{q-1}$, and for any u ≠ u′, $f_{X_{u,v}}(x) = f_{X_{u',v}}(x - u' + u)$. The prior over distribution parameters is $f_{U,V}(a, b) = f_U(a)\cdot f_V(b)$, where Supp(U) = $[\underline{u}, \overline{u})$, and $f_U$ is L-Lipschitz continuous and has lower bound $\underline{c}$.*

Based on Asm. 3, the privacy-distortion performance of Mech. 1 is shown below.

Proposition 4. *Under Asm. 3, Mech. 1 has $\Delta = \frac{s}{2}$ and $\Pi_{\epsilon,\omega_\Theta} \leq \frac{2\epsilon\left[\underline{c}+\mathcal{L}\left(s-x^{*}-\epsilon\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}}$, where $x^{*} = s + \frac{\underline{c}}{\mathcal{L}} - \epsilon - \sqrt{\left(\frac{\underline{c}}{\mathcal{L}}-\epsilon\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}}}$.*

Proof. We first provide the following lemma.

Lemma 1. *For an L-Lipschitz continuous function f(x), x ∈ $[\underline{x}, \overline{x}]$, with $\inf_{x\in[\underline{x},\overline{x}]} f(x) \geq \underline{c} \geq 0$, it satisfies*
$$\sup_{x'\in[\underline{x},\overline{x}-\delta]}\frac{\int_{x'}^{x'+\delta}f(x)\mathrm{d}x}{\int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x}\leq\frac{\delta\left[\underline{c}+\mathcal{L}\left(\overline{x}-x^{*}-\frac{\delta}{2}\right)\right]}{\underline{c}\left(\overline{x}-\underline{x}\right)+\frac{\mathcal{L}}{2}\left(\overline{x}-x^{*}\right)^{2}},$$
*where $x^{*}=\overline{x}+\frac{\underline{c}}{\mathcal{L}}-\frac{\delta}{2}-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}}-\frac{\delta}{2}\right)^{2}+\frac{2\underline{c}(\overline{x}-\underline{x})}{\mathcal{L}}}$.*

For any released parameter θ′ = (u′, v′), there exists i ∈ {0, ..., N−1} such that $u' = \underline{u} + (i + 0.5)\cdot s$. We have
$$\begin{aligned}\sup_{\hat{g}}\ \mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)&=\sup_{\hat{g}}\int_{\underline{u}+i\cdot s}^{\underline{u}+(i+1)\cdot s}f_{U|U'}\left(u|u'\right)\cdot\left(\int_{u-\epsilon}^{u+\epsilon}f_{\hat{g}(u',v')}\left(h\right)dh\right)du\\&=\sup_{\hat{g}}\int_{\underline{u}+i\cdot s-\epsilon}^{\underline{u}+(i+1)\cdot s+\epsilon}f_{\hat{g}(u',v')}(h)\cdot\left(\int_{\hat{g}\left(f_{X_{u',v'}}\right)-\epsilon}^{\hat{g}\left(f_{X_{u',v'}}\right)+\epsilon}f_{U|U'}\left(u|u'\right)du\right)dh.\end{aligned}$$
For $\int_{\hat{g}(f_{X_{u',v'}})-\epsilon}^{\hat{g}(f_{X_{u',v'}})+\epsilon} f_{U|U'}(u|u')\,du$, denote
$$x_{1}=\max\left(0,\ \hat{g}\left(f_{X_{u',v'}}\right)-\epsilon-\underline{u}-i\cdot s\right),\qquad x_{2}=\min\left(\hat{g}\left(f_{X_{u',v'}}\right)+\epsilon-\underline{u}-i\cdot s,\ s\right);$$
we have
$$\int_{\hat{g}\left(f_{X_{u',v'}}\right)-\epsilon}^{\hat{g}\left(f_{X_{u',v'}}\right)+\epsilon}f_{U|U'}\left(u|u'\right)\mathrm{d}u=\frac{\int_{x_{1}}^{x_{2}}f_{U}\left(\underline{u}+i\cdot s+x\right)\mathrm{d}x}{\int_{0}^{s}f_{U}\left(\underline{u}+i\cdot s+x\right)\mathrm{d}x}.$$
$f_U(\underline{u}+i\cdot s+x)$ is L-Lipschitz and has lower bound $\underline{c}$; moreover, $x_2 - x_1 \leq 2\epsilon$ and $x_1, x_2 \in [0, s]$.
According to Lemma 1, we have
$$\int_{\hat{g}\left(f_{X_{u',v'}}\right)-\epsilon}^{\hat{g}\left(f_{X_{u',v'}}\right)+\epsilon}f_{U|U'}\left(u|u'\right)\mathrm{d}u=\frac{\int_{x_{1}}^{x_{2}}f_{U}\left(\underline{u}+i\cdot s+x\right)\mathrm{d}x}{\int_{0}^{s}f_{U}\left(\underline{u}+i\cdot s+x\right)\mathrm{d}x}\leq\frac{2\epsilon\left[\underline{c}+\mathcal{L}\left(s-x^{*}-\epsilon\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}},$$
where $x^{*}=s+\frac{\underline{c}}{\mathcal{L}}-\epsilon-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}}-\epsilon\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}}}$. Therefore, we can get that
$$\begin{aligned}\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)&\leq\sup_{\hat{g}}\int_{\underline{u}+i\cdot s-\epsilon}^{\underline{u}+(i+1)\cdot s+\epsilon}\frac{2\epsilon\left[\underline{c}+\mathcal{L}\left(s-x^{*}-\epsilon\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}}\cdot f_{\hat{g}(u',v')}(h)\,\mathrm{d}h\\&\leq\frac{2\epsilon\left[\underline{c}+\mathcal{L}\left(s-x^{*}-\epsilon\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}}.\end{aligned}$$
Therefore, we have
$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)\\&=\sup_{\hat{g}}\mathbb{E}\left(\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&=\mathbb{E}\left(\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&\leq\frac{2\epsilon\left[\underline{c}+\mathcal{L}\left(s-x^{*}-\epsilon\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}}.\end{aligned}$$
For the distortion, we can easily get that $\Delta = \frac{s}{2}$.

## D.1.1 Proof Of **Lemma 1**

Without loss of generality, we assume that $f(\overline{x}) \geq f(\underline{x})$. Based on simple geometric analysis, we can get that when $\frac{\int_{x'}^{x'+\delta}f(x)\mathrm{d}x}{\int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x}$ achieves its supremum, as illustrated in Fig. 9, $f(x) = \underline{c}$ for $x \in [\underline{x}, x'']$, $x' = \overline{x}-\delta$, and $f(x) = \underline{c} + \mathcal{L}(x - x'')$ for $x \in [x'', \overline{x}]$, where $x'' \in [\underline{x}, x']$. In this case, we can get that
$$\frac{\int_{x'}^{x'+\delta}f(x)\mathrm{d}x}{\int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x}=\frac{\delta\left[\underline{c}+\mathcal{L}\left(\overline{x}-x''-\frac{\delta}{2}\right)\right]}{\underline{c}\left(\overline{x}-\underline{x}\right)+\frac{\mathcal{L}}{2}\left(\overline{x}-x''\right)^{2}}\triangleq h\left(x''\right),$$
where $x'' \in [\underline{x}, x']$.

Figure 9: Illustration of f(x) when $\int_{x'}^{x'+\delta}f(x)\mathrm{d}x \big/ \int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x$ achieves its supremum.

When $x'' = \overline{x} + \frac{\underline{c}}{\mathcal{L}} - \frac{\delta}{2} - \sqrt{\left(\frac{\underline{c}}{\mathcal{L}}-\frac{\delta}{2}\right)^{2}+\frac{2\underline{c}(\overline{x}-\underline{x})}{\mathcal{L}}} \triangleq x^{*}$, $h(x'')$ achieves its supremum. Therefore, we have
$$\sup_{x'\in[\underline{x},\overline{x}-\delta]}\frac{\int_{x'}^{x'+\delta}f(x)\mathrm{d}x}{\int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x}\leq\sup_{f}\sup_{x'\in[\underline{x},\overline{x}-\delta]}\frac{\int_{x'}^{x'+\delta}f(x)\mathrm{d}x}{\int_{\underline{x}}^{\overline{x}}f(x)\mathrm{d}x}=\frac{\delta\left[\underline{c}+\mathcal{L}\left(\overline{x}-x^{*}-\frac{\delta}{2}\right)\right]}{\underline{c}\left(\overline{x}-\underline{x}\right)+\frac{\mathcal{L}}{2}\left(\overline{x}-x^{*}\right)^{2}}.$$

## D.2 Privacy-Distortion Performance Of Mech. 2 **With Relaxed Assumption**

We relax Asm. 2 as follows.

Assumption 4.
*The prior over distribution parameters is as specified below.*

- *Exponential: Supp(λ) = $[\underline{\lambda}, \overline{\lambda})$, and $f_\lambda$ is L-Lipschitz continuous and has lower bound $\underline{c}$.*
- *Shifted exponential: Supp(λ, h) = $\left\{(a,b)\,|\,a\in[\underline{\lambda},\overline{\lambda}),\ b\in[\underline{h},\overline{h})\right\}$, $f_{\lambda,h}(a,b) = f_\lambda(a)\cdot f_h(b)$, and $f_\lambda$ (resp. $f_h$) is $\mathcal{L}_\lambda$-Lipschitz (resp. $\mathcal{L}_h$-Lipschitz) and has lower bound $\frac{k_\lambda}{\overline{\lambda}-\underline{\lambda}}$ with $k_\lambda \in (0,1]$ (resp. $\frac{k_h}{\overline{h}-\underline{h}}$ with $k_h \in (0,1]$).*

Based on Asm. 4, the privacy-distortion performance of Mech. 2 is shown below.

Proposition 5. *Under Asm. 4, Mech. 2 has the following ∆ and $\Pi_{\epsilon,\omega_\Theta}$ value/bound.*

- *Exponential:*
$$\Delta=\frac{1}{2}s,\qquad\Pi_{\epsilon,\omega_\Theta}\leq\frac{\frac{2\epsilon}{-\ln(1-\alpha)}\cdot\left[\underline{c}+\mathcal{L}\left(s-x^{*}+\frac{\epsilon}{\ln(1-\alpha)}\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}},$$
*where $x^{*}=s+\frac{\underline{c}}{\mathcal{L}}+\frac{\epsilon}{\ln(1-\alpha)}-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}}+\frac{\epsilon}{\ln(1-\alpha)}\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}}}$.*

- *Shifted exponential:*
$$\Delta=\frac{s}{2}\left(t_{0}-1\right)+se^{-t_{0}},$$
$$\Pi_{\epsilon,\omega_\Theta}<\frac{\frac{2\epsilon}{|\ln(1-\alpha)+t_{0}|}\cdot\left[\underline{c}+\mathcal{L}_{\lambda,h}\left(\frac{s}{2}-t^{*}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}\right)\right]}{\underline{c}s+\frac{\mathcal{L}_{\lambda,h}}{2}\left(\frac{s}{2}-t^{*}\right)^{2}}+M\left(\overline{h}-\underline{h},\frac{k_{h}}{\overline{h}-\underline{h}},\mathcal{L}_{h},1\right)\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)|t_{0}|s,$$
*where $\underline{c}=\frac{k_{h}k_{\lambda}}{\left(\overline{h}-\underline{h}\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)}$, the function M satisfies*
$$M\left(x,c,\mathcal{L},\mathcal{A}\right)=\begin{cases}\frac{\mathcal{A}}{x}+\frac{\mathcal{L}x}{2},&\text{if }c\leq\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\\ c+\sqrt{2\mathcal{L}\left(\mathcal{A}-cx\right)},&\text{if }c>\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\end{cases},$$
$$\mathcal{L}_{\lambda,h}=\mathcal{L}_{\lambda}\cdot M\left(\frac{\overline{h}-\underline{h}}{|t_{0}|},\frac{k_{h}}{\overline{h}-\underline{h}},|t_{0}|\mathcal{L}_{h},\frac{1}{|t_{0}|}\right)+|t_{0}|\mathcal{L}_{h}\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right),$$
*and*
$$t^{*}=\frac{s}{2}+\frac{\underline{c}}{\mathcal{L}_{\lambda,h}}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}_{\lambda,h}}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}_{\lambda,h}}}.$$
*The $t_0$ parameter is defined in Mech. 2.*

## D.2.1 Proof Of Prop. 5 **For Exponential Distribution**

It is straightforward to get the formula for ∆ from Eq. (20). Here we focus on the proof for $\Pi_{\epsilon,\omega_\Theta}$. Similar to the proof in App. D.1, according to Lemma 1, we have
$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\mathbb{E}\left(\sup_{\hat{g}}\mathbb{P}\left(\hat{g}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\Big|\theta'\right)\right)\\&\leq\sup_{i\in\mathbb{N},\,t'\in\mathbb{R}}\frac{\int_{\max\{0,t'\}}^{\min\left\{s,\ t'-\frac{2\epsilon}{\ln(1-\alpha)}\right\}}f_{\lambda}\left(\underline{\lambda}+i\cdot s+t\right)dt}{\int_{0}^{s}f_{\lambda}\left(\underline{\lambda}+i\cdot s+t\right)dt}\\&\leq\frac{\frac{2\epsilon}{-\ln(1-\alpha)}\cdot\left[\underline{c}+\mathcal{L}\left(s-x^{*}+\frac{\epsilon}{\ln(1-\alpha)}\right)\right]}{\underline{c}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}},\end{aligned}$$
where $x^{*}=s+\frac{\underline{c}}{\mathcal{L}}+\frac{\epsilon}{\ln(1-\alpha)}-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}}+\frac{\epsilon}{\ln(1-\alpha)}\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}}}$.

## D.2.2 Proof Of Prop. 5 **For Shifted Exponential Distribution**

It is straightforward to get the formula for ∆ from Eq. (21). Here we focus on the proof for $\Pi_{\epsilon,\omega_\Theta}$. As in App. C.4.2, we can bound the attack success rate $\Pi_{\epsilon,\omega_\Theta}$ as
$$\Pi_{\epsilon,\omega_\Theta}<\sup_{\theta\in S_{green}}\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)+\int_{\theta\in S_{yellow}}p(\theta)d\theta.$$
As for the first term $\sup_{\theta\in S_{green}}\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)$, we can get that
$$\sup_{\theta\in S_{green}}\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)=\sup_{i\in\mathbb{N},\,h,t'\in\mathbb{R}}\frac{\int_{\max\left\{-\frac{s}{2},t'\right\}}^{\min\left\{\frac{s}{2},\ t'+\frac{2\epsilon}{|\ln(1-\alpha)+t_{0}|}\right\}}f_{\lambda,h}\left(\underline{\lambda}+(i+0.5)\cdot s+t,\ h-t_{0}\cdot t\right)dt}{\int_{-\frac{s}{2}}^{\frac{s}{2}}f_{\lambda,h}\left(\underline{\lambda}+(i+0.5)\cdot s+t,\ h-t_{0}\cdot t\right)dt}.$$
To analyze the above term, we provide the following lemma.

Lemma 2. *For an L-Lipschitz continuous function f(x), x ∈ $[\underline{x}, \overline{x}]$, if $\int_{\underline{x}}^{\overline{x}} f(x)dx = \mathcal{A}$ and $\inf_{x\in[\underline{x},\overline{x}]} f(x) \geq \underline{c}$, it satisfies*
$$\sup_{x\in[\underline{x},\overline{x}]}f(x)\leq\begin{cases}\frac{\mathcal{A}}{\overline{x}-\underline{x}}+\frac{\mathcal{L}\left(\overline{x}-\underline{x}\right)}{2},&\text{if }\underline{c}\leq\frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}\left(\overline{x}-\underline{x}\right)}{2}\\\underline{c}+\sqrt{2\mathcal{L}\left(\mathcal{A}-\underline{c}\left(\overline{x}-\underline{x}\right)\right)},&\text{if }\underline{c}>\frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}\left(\overline{x}-\underline{x}\right)}{2}\end{cases}\triangleq M\left(\overline{x}-\underline{x},\underline{c},\mathcal{L},\mathcal{A}\right).$$
The proof is in App. D.2.3.
Since $f_{\lambda,h}\left(\underline{\lambda}+(i+0.5)\cdot s+t,\ h-t_0\cdot t\right) = f_\lambda\left(\underline{\lambda}+(i+0.5)\cdot s+t\right)\cdot f_h\left(h-t_0\cdot t\right)$, according to Lemma 2, we can get that $f_{\lambda,h}$ is $\mathcal{L}_{\lambda,h}$-Lipschitz continuous, where
$$\mathcal{L}_{\lambda,h}=\mathcal{L}_{\lambda}\cdot M\left(\frac{\overline{h}-\underline{h}}{|t_{0}|},\frac{k_{h}}{\overline{h}-\underline{h}},|t_{0}|\mathcal{L}_{h},\frac{1}{|t_{0}|}\right)+|t_{0}|\mathcal{L}_{h}\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right).$$
We can also get that
$$\inf_{a\in\left[\underline{\lambda},\overline{\lambda}\right),\,b\in\left[\underline{h},\overline{h}\right)}f_{\lambda,h}\left(a,b\right)\geq\frac{k_{h}k_{\lambda}}{\left(\overline{h}-\underline{h}\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)}\triangleq\underline{c}.$$
Therefore, according to Lemma 1, we can get that
$$\begin{aligned}\sup_{\theta\in S_{green}}\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)&=\sup_{i\in\mathbb{N},\,h,t'\in\mathbb{R}}\frac{\int_{\max\left\{-\frac{s}{2},t'\right\}}^{\min\left\{\frac{s}{2},\ t'+\frac{2\epsilon}{|\ln(1-\alpha)+t_{0}|}\right\}}f_{\lambda,h}\left(\underline{\lambda}+(i+0.5)\cdot s+t,\ h-t_{0}\cdot t\right)dt}{\int_{-\frac{s}{2}}^{\frac{s}{2}}f_{\lambda,h}\left(\underline{\lambda}+(i+0.5)\cdot s+t,\ h-t_{0}\cdot t\right)dt}\\&\leq\frac{\frac{2\epsilon}{|\ln(1-\alpha)+t_{0}|}\cdot\left[\underline{c}+\mathcal{L}_{\lambda,h}\left(\frac{s}{2}-t^{*}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}\right)\right]}{\underline{c}s+\frac{\mathcal{L}_{\lambda,h}}{2}\left(\frac{s}{2}-t^{*}\right)^{2}},\end{aligned}$$
where $t^{*}=\frac{s}{2}+\frac{\underline{c}}{\mathcal{L}_{\lambda,h}}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}-\sqrt{\left(\frac{\underline{c}}{\mathcal{L}_{\lambda,h}}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}\right)^{2}+\frac{2\underline{c}s}{\mathcal{L}_{\lambda,h}}}$, $\mathcal{L}_{\lambda,h}$ is as above, and $\underline{c}=\frac{k_{h}k_{\lambda}}{\left(\overline{h}-\underline{h}\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)}$.

As for $\int_{\theta\in S_{yellow}} p(\theta)d\theta$, we have
$$\begin{aligned}\int_{\theta\in S_{yellow}}p(\theta)d\theta&\leq M\left(\overline{h}-\underline{h},\frac{k_{h}}{\overline{h}-\underline{h}},\mathcal{L}_{h},1\right)\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right)\cdot\int_{\theta\in S_{yellow}}d\theta\\&=M\left(\overline{h}-\underline{h},\frac{k_{h}}{\overline{h}-\underline{h}},\mathcal{L}_{h},1\right)\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)|t_{0}|s.\end{aligned}$$
Above all, we can get that
$$\begin{aligned}\Pi_{\epsilon,\omega_\Theta}&<\sup_{\theta\in S_{green}}\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)+\int_{\theta\in S_{yellow}}p(\theta)d\theta\\&\leq\frac{\frac{2\epsilon}{|\ln(1-\alpha)+t_{0}|}\cdot\left[\underline{c}+\mathcal{L}_{\lambda,h}\left(\frac{s}{2}-t^{*}-\frac{\epsilon}{|\ln(1-\alpha)+t_{0}|}\right)\right]}{\underline{c}s+\frac{\mathcal{L}_{\lambda,h}}{2}\left(\frac{s}{2}-t^{*}\right)^{2}}+M\left(\overline{h}-\underline{h},\frac{k_{h}}{\overline{h}-\underline{h}},\mathcal{L}_{h},1\right)\cdot M\left(\overline{\lambda}-\underline{\lambda},\frac{k_{\lambda}}{\overline{\lambda}-\underline{\lambda}},\mathcal{L}_{\lambda},1\right)\cdot\left(\overline{\lambda}-\underline{\lambda}\right)|t_{0}|s,\end{aligned}$$
where $M(\cdot,\cdot,\cdot,\cdot)$, $\underline{c}$, $\mathcal{L}_{\lambda,h}$, and $t^{*}$ are defined as above.

## D.2.3 Proof Of **Lemma 2**

Without loss of generality, we assume that $f(\overline{x}) \geq f(\underline{x})$. Based on simple geometric analysis, we can get that there are two patterns under which $\sup_{x\in[\underline{x},\overline{x}]} f(x)$ achieves its supremum, which are shown in Fig. 10.

Figure 10: Two patterns when $\sup_{x\in[\underline{x},\overline{x}]} f(x)$ achieves its supremum.

For pattern 1, $f(\underline{x}) = c_1 \geq \underline{c}$, $f(\overline{x}) = c_1 + \mathcal{L}(\overline{x}-\underline{x})$, and $\int_{\underline{x}}^{\overline{x}} f(x)\mathrm{d}x = \left(c_1+\frac{\mathcal{L}}{2}(\overline{x}-\underline{x})\right)\cdot(\overline{x}-\underline{x}) = \mathcal{A}$. Therefore, when $\underline{c} \leq \frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}(\overline{x}-\underline{x})}{2}$, we have
$$\sup_{f}\sup_{x\in[\underline{x},\overline{x}]}f(x)=c_{1}+\mathcal{L}(\overline{x}-\underline{x})=\frac{\mathcal{A}}{\overline{x}-\underline{x}}+\frac{\mathcal{L}\left(\overline{x}-\underline{x}\right)}{2}.$$
For pattern 2, $f(\underline{x}) = \underline{c}$, $f(x) = \underline{c}+\mathcal{L}(x-x')$ for $x \geq x'$, where $x' \in (\underline{x}, \overline{x}]$, and $\int_{\underline{x}}^{\overline{x}} f(x)\mathrm{d}x = \underline{c}(\overline{x}-\underline{x})+\frac{\mathcal{L}}{2}(\overline{x}-x')^{2} = \mathcal{A}$.
Therefore, when $\underline{c}>\frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}(\overline{x}-\underline{x})}{2}$, we have

$$\sup_{f}\sup_{x\in[\underline{x},\overline{x}]}f(x)=\underline{c}+\mathcal{L}(\overline{x}-x')=\underline{c}+\sqrt{2\mathcal{L}\left(\mathcal{A}-\underline{c}\left(\overline{x}-\underline{x}\right)\right)}.$$

Above all, we can get that

$$\sup_{x\in[\underline{x},\overline{x}]}f(x)\leq\sup_{f}\sup_{x\in[\underline{x},\overline{x}]}f(x)=\begin{cases}\frac{\mathcal{A}}{\overline{x}-\underline{x}}+\frac{\mathcal{L}(\overline{x}-\underline{x})}{2},&\text{if }\underline{c}\leq\frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}(\overline{x}-\underline{x})}{2}\\\underline{c}+\sqrt{2\mathcal{L}\left(\mathcal{A}-\underline{c}\left(\overline{x}-\underline{x}\right)\right)},&\text{if }\underline{c}>\frac{\mathcal{A}}{\overline{x}-\underline{x}}-\frac{\mathcal{L}(\overline{x}-\underline{x})}{2}\end{cases}.$$

$\square$

## E **Discrete Distribution With Secret = Mean**

Here, we consider three typical examples of discrete distributions: geometric distributions, binomial distributions, and Poisson distributions with parameter $\theta$. More specifically, the original distribution is

$$\mathbb{P}\left(X_{\theta}=k\right)=\begin{cases}\left(1-\theta\right)^{k}\theta&\text{(geometric distribution)}\\\binom{n}{k}\theta^{k}\left(1-\theta\right)^{n-k}&\text{(binomial distribution)}\\\frac{\theta^{k}e^{-\theta}}{k!}&\text{(Poisson distribution)}\end{cases}$$

where $n$ stands for the number of trials in the binomial distribution. The support of the parameter is $\mathrm{Supp}\left(\Theta\right)=\left\{X_{\theta}:\theta\in\left[\underline{\theta},\overline{\theta}\right]\right\}$, where $\left[\underline{\theta},\overline{\theta}\right]\subseteq(0,1)$ for the geometric and binomial distributions, and $\left[\underline{\theta},\overline{\theta}\right]\subseteq(0,\infty)$ for the Poisson distribution. We first analyze the lower bound.

Corollary 3 (Privacy lower bound, secret = mean of a discrete distribution). *Consider the secret function* $g\left(\theta\right)=\sum_{x}xf_{X_{\theta}}\left(x\right)$. For any $T\in(0,1)$, when $\Pi_{\epsilon,\omega_{\Theta}}\leq T$, *we have* $\Delta>\left(\lceil\frac{1}{T}\rceil-1\right)\cdot2\gamma\epsilon$, *where the value of* $\gamma$ *depends on the type of the distribution:*

- *Geometric:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{\left(1-\theta_{2}\right)^{h(\theta_{1},\theta_{2})}-\left(1-\theta_{1}\right)^{h(\theta_{1},\theta_{2})}}{2\left(\frac{1}{\theta_{2}}-\frac{1}{\theta_{1}}\right)},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\log(\theta_{2})-\log(\theta_{1})}{\log(1-\theta_{1})-\log(1-\theta_{2})}\rfloor+1$.

- *Binomial:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{I_{1-\theta_{2}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)-I_{1-\theta_{1}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)}{2n\left(\theta_{1}-\theta_{2}\right)},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor k'\rfloor$, $k'=n\ln\left(\frac{1-\theta_{2}}{1-\theta_{1}}\right)\Big/\ln\left(\frac{\theta_{1}\left(1-\theta_{2}\right)}{\theta_{2}\left(1-\theta_{1}\right)}\right)$, and $I$ *represents the regularized incomplete beta function.*

- *Poisson:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{2}\right)-Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{1}\right)}{2\left(\theta_{1}-\theta_{2}\right)},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\theta_{1}-\theta_{2}}{\ln(\theta_{1})-\ln(\theta_{2})}\rfloor+1$ and $Q$ *is the regularized gamma function.*

The proof is in App. E.1. The above lower bounds can be computed numerically. Since these distributions only have one parameter, we can use Alg. 1 and Alg. 3 to derive a data release mechanism. The performance of the greedy-based and dynamic-programming-based data release mechanisms for each distribution is shown in Fig. 11.
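Because the constants in Corollary 3 are only characterized variationally, they have to be evaluated numerically in practice. The sketch below is our own illustration (not code from the paper) of a simple grid search for $\gamma$ in the Poisson case, assuming that $Q$ in the statement is the regularized upper incomplete gamma function, i.e., SciPy's `gammaincc`:

```python
import numpy as np
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)

def gamma_poisson(theta_lo, theta_hi, grid=300):
    # Grid search for the Corollary 3 constant gamma in the Poisson case.
    # The proof takes theta_1 > theta_2 without loss of generality.
    thetas = np.linspace(theta_lo, theta_hi, grid)
    best = np.inf
    for j, t2 in enumerate(thetas[:-1]):
        for t1 in thetas[j + 1:]:
            h = np.floor((t1 - t2) / (np.log(t1) - np.log(t2))) + 1
            num = gammaincc(h, t2) - gammaincc(h, t1)  # = 2 * D(X_t1, X_t2)
            best = min(best, num / (2.0 * (t1 - t2)))
    return best

# Example usage: the lower bound reads Delta > (ceil(1/T) - 1) * 2 * gamma * eps
# with gamma = gamma_poisson(0.5, 5.0).
```

The geometric and binomial cases follow the same pattern with their respective $D$ and $R$ expressions.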
Figure 11: Privacy-distortion performance of Alg. 1 and Alg. 3 for geometric, binomial and Poisson distribution when secret = mean.

As we can observe, the distortion that the dynamic-programming-based data release mechanism achieves is always smaller than or equal to that of the greedy-based data release mechanism.

## E.1 Proof Of **Corollary 3**

## E.1.1 **Geometric Distribution**

Proof. Let $X_{\theta_{1}}$ and $X_{\theta_{2}}$ be two geometric random variables with parameters $\theta_{1}$ and $\theta_{2}$ respectively. We assume that $\theta_{1}>\theta_{2}$ without loss of generality. Let $k'$ satisfy $\left(1-\theta_{1}\right)^{k'}\theta_{1}=\left(1-\theta_{2}\right)^{k'}\theta_{2}$ and $k_{0}=\lfloor k'\rfloor+1$. Then we can get that

$$D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)=\frac{1}{2}\left(1-\theta_{2}\right)^{k_{0}}-\frac{1}{2}\left(1-\theta_{1}\right)^{k_{0}},$$

$$R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{\theta_{2}}-\frac{1}{\theta_{1}}.$$

Therefore, we have

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{\left(1-\theta_{2}\right)^{k_{0}}-\left(1-\theta_{1}\right)^{k_{0}}}{2\left(\frac{1}{\theta_{2}}-\frac{1}{\theta_{1}}\right)}.$$

The rest follows from Thm. 1. $\square$

## E.1.2 **Binomial Distribution**

Proof. Let $X_{\theta_{1}}$ and $X_{\theta_{2}}$ be two binomial random variables with parameters $\theta_{1}$ and $\theta_{2}$ respectively, with fixed number of trials $n$. We assume that $\theta_{1}>\theta_{2}$ without loss of generality. Let $k'$ satisfy $\binom{n}{k'}\theta_{1}^{k'}\left(1-\theta_{1}\right)^{n-k'}=\binom{n}{k'}\theta_{2}^{k'}\left(1-\theta_{2}\right)^{n-k'}$ and $k_{0}=\lfloor k'\rfloor$. We can get that

$$D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)=\frac{1}{2}I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-\frac{1}{2}I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right),$$

$$R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=n\left(\theta_{1}-\theta_{2}\right),$$

where $I$ represents the regularized incomplete beta function. Therefore, we have

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right)}{2n\left(\theta_{1}-\theta_{2}\right)}.$$

The rest follows from Thm. 1. $\square$

## E.1.3 **Poisson Distribution**

Proof. Let $X_{\theta_{1}}$ and $X_{\theta_{2}}$ be two Poisson random variables with parameters $\theta_{1}$ and $\theta_{2}$ respectively. We assume that $\theta_{1}>\theta_{2}$ without loss of generality. Let $k'$ satisfy $\theta_{1}^{k'}e^{-\theta_{1}}=\theta_{2}^{k'}e^{-\theta_{2}}$ and $k_{0}=\lfloor k'\rfloor+1$. Then we can get that

$$D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)=\frac{1}{2}Q\left(k_{0},\theta_{2}\right)-\frac{1}{2}Q\left(k_{0},\theta_{1}\right),$$

$$R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\theta_{1}-\theta_{2},$$

where $Q$ is the regularized gamma function. Therefore, we have

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{Q\left(k_{0},\theta_{2}\right)-Q\left(k_{0},\theta_{1}\right)}{2\left(\theta_{1}-\theta_{2}\right)}.$$

The rest follows from Thm. 1. $\square$

## F **More Distributions With Secret = Quantiles**

In this section, we discuss how to protect the quantiles for typical examples of continuous distributions: Gaussian distributions and uniform distributions.
In our analysis, their parameters are denoted by: - Gaussian distributions: θ = (*µ, σ*), where *µ, σ* are the mean and the standard deviation of the Gaussian distribution. - Uniform distributions: θ = (*m, n*), where *m, n* denote the lower and upper bound of the uniform distribution. In other words, Xm,n is a random variable from uniform distribution U ([*m, n*]). As before, we first present the lower bound. Corollary 4 (Privacy lower bound, secret = α-quantile of a continuous distribution). Consider the secret function g (θ) = α*-quantile of* fXθ . For any T ∈ (0, 1), when Πϵ,ωΘ ≤ T*, we have* ∆ >⌈ 1 T ⌉ − 1· 2γϵ, where the value of γ *depends on the type of the distributions:* - *Gaussian:* $$\gamma=\operatorname*{min}_{t}{\frac{\sqrt{\frac{1}{2\pi}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right)}{\left|t+Q_{\alpha}\right|}},$$ where Φ *denotes the CDF of the standard Gaussian distribution and* Qα ≜ Φ −1(α). - *Uniform:* $$\gamma=\begin{cases}\sqrt{\alpha^{2}-\alpha+\frac{1}{2}}+\alpha-\frac{1}{2}&\alpha\leq0.5\\ \sqrt{\alpha^{2}-\alpha+\frac{1}{2}}-\alpha+\frac{1}{2}&\alpha>0.5\end{cases}.$$ . The proof is in App. F.1. The bound for uniform is in closed form, while the bound for Gaussian can be computed numerically. Next, we provide data release mechanisms for each of the distributions. Here, we assume that the parameters of the original data are drawn from a uniform distribution with lower and upper bounds. In more details, we make the following assumptions. Assumption 5. *The prior over distribution parameters as specified below.* - Gaussian: (μ, σ) follows the uniform distribution over n(a, b)| a ∈-µ, µ, b ∈ [σ, σ) o. - Uniform: (M, N) follows the uniform distribution over (a, b)| a ∈ [m, m), b ∈ [m, m)*, a < b* . Mechanism 3 (For secret = quantile of a continuous distribution). We design mechanisms for each of the distributions. - *Gaussian:* $$\mathcal{S}_{\mu,i}=\left\{\left(\mu+t_{0}\cdot t,\underline{{{\sigma}}}+(i+0.5)\cdot s+t\right)|t\in\left[-\frac{s}{2},\frac{s}{2}\right)\right\}\,,$$ $$\theta_{\mu,i}^{*}=\left(\mu,\underline{{{\sigma}}}+(i+0.5)\cdot s\right)\quad,$$ $$\mathcal{I}=\left\{(\mu,i):i\in\mathbb{N},\mu\in\mathbb{R}\right\},$$ o , where s is a hyper-parameter of the mechanism that divides (σ − σ) and $$t_{0}=\arg\min_{t}{\frac{\sqrt{{\frac{1}{2\pi}}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right)}{\left|t+Q_{\alpha}\right|}}.$$ . - *Uniform:* *divides* ($\overline{\sigma}-\sigma$) c. Sm,i = $\left(m-t_{0}\cdot t,m+(i+0.5)\cdot s+t\right)|t\in\left(-\frac{s}{2(t_{0}+1)},\frac{s}{2(t_{0}+1)}\right]\right)$. $\theta^{*}_{m,i}=(m,m+(i+0.5)\cdot s)$, $\mathcal{I}=\{(m,i)\,|\,i\in\mathbb{Z}_{>0},m\in\mathbb{R}\}$, where t0 =1 1 l −1 for $$l=\begin{cases}\alpha+{\sqrt{\alpha^{2}-\alpha+{\frac{1}{2}}}}&\alpha\leq0.5\\ \alpha-{\sqrt{\alpha^{2}-\alpha+{\frac{1}{2}}}}&\alpha>0.5\end{cases}.$$ and s > 0 *is a hyper-parameter of the mechanism that divides* (m − m). These data release mechanisms achieve the following ∆ and Πϵ,ωΘ . Proposition 6. Under Asm. 5, Mech. 
3 has the following ∆ and Πϵ,ωΘ *value/bound.* - *Gaussian:* * _Odissanti_: $$\Pi_{\epsilon,\omega_{0}}<\frac{2\epsilon}{|t_{0}+Q_{\alpha}|s}+\frac{|t_{0}|s}{\overline{\mu}-\underline{\mu}},$$ $$\Delta=\frac{s}{2}\sqrt{\frac{2}{\pi}}e^{-\frac{1}{2}t_{0}^{2}}-\frac{t_{0}s}{2}\left(1-2\Phi\left(t_{0}\right)\right)<\left(2+\frac{|t_{0}|\cdot|t_{0}+Q_{\alpha}|s^{2}}{\left(\overline{\mu}-\underline{\mu}\right)\epsilon}\right)\Delta_{opt}.$$ _Under the "high-precision" regime where $\frac{s^{2}}{\overline{\mu}-\underline{\mu}}\to0$ as $s,\left(\overline{\mu}-\underline{\mu}\right)\to\infty$, $\Delta$ satisfies_ $$\lim_{\overline{\mu}-\underline{\mu}\to0}\Delta<3\Delta_{opt}.$$ - *Uniform:* Πϵ,ωΘ <2ϵ (t0 + 1) |(1 − α)t0 − α|s +2s · t0 (t0 + 1) (m − m) +s 2 2 (m − m) 2 , ∆ = t 2 0 + 1s 4(t0 + 1)2 < 2 + |(1 − α)t0 − α|s ϵ (t0 + 1) · 2s · t0 (t0 + 1) (m − m) +s 2 2 (m − m) 2 !! ∆opt. Under the "high-precision" regime where s 2 m−m → 0 as s,(m − m) → ∞, ∆ satisfies lim sup s2 m−m →0 ∆ < 3∆opt. The t0 parameter is defined in Mech. 3 *for each distribution.* The proof is in App. F.2. For Gaussian distribution, we relax Asm. 5 and analyze the privacy-distortion performance of Mech. 3 in App. F.3. For both distributions, we consider the "high-precision" regime. The two takeaways are that: (1) data holder can use s to control the trade-off between distortion and privacy, and (2) the mechanism is order-optimal with multiplicative factor 3. ## F.1 Proof Of **Corollary 4** F.1.1 **Gaussian Distribution** Proof. Let Xµ1,σ2 , Xµ2,σ2 be two Gaussian random variables with means µ1, µ2 and sigmas σ1, σ2 respectively. Let Φ denotes the CDF of the standard Gaussian distribution and let Φ −1(α) ≜ Qα. When σ1 = σ2, we have $${\frac{D\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}{R\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}}={\frac{{\frac{1}{2}}|\mu_{1}-\mu_{2}|}{|\mu_{1}+\sigma Q_{\alpha}-(\mu_{2}+\sigma Q_{\alpha})|}}={\frac{1}{2}}.$$ When σ1 ̸= σ2, we assume σ2 > σ1 without loss of generality. Let a = σ1 σ2 and b = σ2 σ1 µ1 − µ2. Let a = σ1 σ2 and b = σ2 σ1 µ1 − µ2. We can get that fXµ1,σ1 (x) = afXµ2,σ2 (a (x + b)), and D (Xµ1,σ1 , Xµ2,σ2 ) = 12 dWasserstein-1 ωXµ1,σ1 ∥ωXµ2,σ2 = 1 2 Z +∞ −∞ x − x a − b fXµ1,σ1 (x) dx = (µ1 − µ2) Φ µ1 − µ2 σ2 − σ1 − 1 2 + r1 2π (σ2 − σ1) e − 12 µ1−µ2 σ2−σ1 2, (22) R (Xµ1,σ1 , Xµ2,σ2 ) = |µ1 + σ1Qα − (µ2 + σ2Qα)| = |(µ1 − µ2) + (σ1 − σ2) Qα|. Let µ1−µ2 σ1−σ2 ≜ t, we can get that $${\frac{D\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}{R\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}}={\frac{\sqrt{\frac{1}{2\pi}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right)}{\left|t+Q_{\alpha}\right|}}\triangleq h\left(t\right).$$ Since limt→∞ = 1 2 , we have min mint h (t), 1 2 = mint h (t), and therefore we can get that $$\gamma=\operatorname*{min}_{t}h\left(t\right).$$ ## F.1.2 **Uniform Distribution** Proof. Let Xm1,n1 , Xm2,n2 be two uniform random variables. Let FXm1,n1 , FXm2,n2 be their CDFs, and let m2 ≥ m1 without loss of generality. We can get that D (Xm1,n1 , Xm2,n2 ) = 12 dWasserstein-1 ωXm1,n1 ∥ωXm2,n2 = 1 2 Z +∞ −∞ |FXm1,n1 (x) − FXm2,n2 (x)|dx = ( m2−m1+n2−n1 4n2 ≥ n1 (m2−m1) 2+(n1−n2) 2 4(m2−m1+(n1−n2)) n2 < n1 , (23) R (Xm1,n1 , Xm2,n2 ) = |m2 + α (n2 − m2) − [m1 + α (n1 − m1)]| = |(1 − α) (m2 − m1) + α (n2 − n1)|. 
When n2 = n1, we have $${\frac{D\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}{R\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}}={\frac{m_{2}-m_{1}}{4\left(1-\alpha\right)\left(m_{2}-m_{1}\right)}}={\frac{1}{4\left(1-\alpha\right)}}.$$ When n2 > n1, let t1 = m2−m1 n2−n1 ∈ [0, +∞), we have $$\begin{split}\frac{D\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}{R\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}&=\frac{1}{4}\frac{m_{2}-m_{1}+n_{2}-n_{1}}{\left(1-\alpha\right)\left(m_{2}-m_{1}\right)+\alpha\left(n_{2}-n_{1}\right)}\\ &=\frac{1}{4}\frac{t_{1}+1}{\left(1-\alpha\right)t_{1}+\alpha}\\ &=\frac{1}{4\left(1-\alpha\right)}\left(1+\frac{1-2\alpha}{1-\alpha}\cdot\frac{1}{t_{1}+\frac{\alpha}{1-\alpha}}\right)\\ &\geq\begin{cases}\frac{1}{4(1-\alpha)}&\alpha\leq0.5\\ \frac{1}{4\alpha}&\alpha>0.5\end{cases}.\end{split}$$ When n2 < n1, let t2 = m2−m1 n1−n2 ∈ (0, +∞), we have D (Xm1,n1 , Xm2,n2 ) R (Xm1,n1 , Xm2,n2 ) = 1 4 (m2 − m1) 2 + (n1 − n2) 2 (m2 − m1 + (n1 − n2)) · 1 |(1 − α) (m2 − m1) − α (n1 − n2)| = 1 4 t 2 2 + 1 (t2 + 1)|(1 − α)t2 − α| ≥ qα2 − α + 1 2 + α − 1 2α ≤ 0.5 qα2 − α + 1 2 − α + 1 2α > 0.5 . "=" achieves when t2 =1 1 l −1 ≜ t0, where $$l=\begin{cases}\alpha+{\sqrt{\alpha^{2}-\alpha+{\frac{1}{2}}}}&\alpha\leq0.5\\ \alpha-{\sqrt{\alpha^{2}-\alpha+{\frac{1}{2}}}}&\alpha>0.5\end{cases}.$$ Therefore we can get that $$\gamma=\begin{cases}\sqrt{\alpha^{2}-\alpha+\frac{1}{2}}+\alpha-\frac{1}{2}&\alpha\leq0.5\\ \sqrt{\alpha^{2}-\alpha+\frac{1}{2}}-\alpha+\frac{1}{2}&\alpha>0.5\end{cases}.$$ ## F.2 Proof Of **Prop. 6** F.2.1 **Gaussian Distribution** Proof. We first focus on the proof for Πϵ,ωΘ . In Fig. 12, we separate the space of possible data parameters into two regions represented by yellow and green colors. The yellow regions S*yellow* constitute right triangles with height s and width |t0|s. The green region S*green* is the rest of the parameter space. The high-level idea of our proof is as follows. Note that for any parameter θ ∈ S*green*, there exists a Sµ,i s.t. θ ∈ Sµ,i and Sµ,i ⊂ S*green*. Therefore, we can bound the attack success rate if θ ∈ S*green*. At the same time, the probability of θ ∈ S*yellow* is bounded. Therefore, we can bound the overall attacker's success rate (i.e., Πϵ,ωΘ ). More specifically, let the optimal attacker be gˆ ∗. ![46_image_0.png](46_image_0.png) Figure 12: The construction for proof of Prop. 6 for Gaussian distributions. We separate the space of possible parameters into two regions (yellow and green) and bound the attacker's success rate on each region separately. We have $$\Pi_{\epsilon,\omega_{\Theta}}=\mathbb{P}\left(\hat{g}^{*}\left(\theta^{\prime}\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)$$ $$=\int_{\theta\in S_{g r e e n}}p(\theta)\mathbb{P}\left(\hat{g}^{*}\left(\theta^{\prime}\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta$$ $$\quad+\int_{\theta\in S_{y r e l o w}}p(\theta)\mathbb{P}\left(\hat{g}^{*}\left(\theta^{\prime}\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta$$ $$<\frac{2\epsilon}{|t_{0}+Q_{\alpha}|s}+\frac{|t_{0}|s}{\overline{\mu}-\underline{\mu}}.$$ For the distortion, it is straightforward to get that ∆ = s2 q2 π e − 12 t 2 0 − t0s 2 (1 − 2Φ (t0)) from Eq. (22), and ∆opt > ⌈1 Πϵ,ωΘ ⌉ − 1 ·2γϵ ≥ 2γϵ, where γ is defined in Corollary 4. 
We can get that Πϵ,ωΘ − |t0|s µ−µ ·∆ < 2γϵ and ∆ = ∆opt + ∆ − ∆opt < ∆opt + ∆ − ⌈1 Πϵ,ωΘ ⌉ − 1 · 2γϵ ≤ ∆opt + 2γϵ + ∆ −2γϵ Πϵ,ωΘ < ∆opt + 2γϵ + |t0|s µ−µ 2ϵ |t0+Qα|s + |t0|s µ−µ · ∆ = 1 + |t0| · |t0 + Qα|s 2 2ϵµ − µ ! (∆opt + 2γϵ) ≤ 2 + |t0| · |t0 + Qα|s 2 ϵµ − µ ! ∆opt. ## F.2.2 **Uniform Distribution** Proof. We first focus on the proof for Πϵ,ωΘ . ![47_image_0.png](47_image_0.png) Figure 13: The construction for proof of Prop. 6 for uniform distributions. We separate the space of possible parameters into two regions (yellow and green) and bound the attacker's success rate on each region separately. In Fig. 13, we separate the space of possible data parameters into two regions represented by yellow and green colors. The yellow regions S*yellow* constitute triangles with height st0 t0+1 and width s (except for the right-bottom triangle with height and width s). The green region S*green* is the rest of the parameter space. The high-level idea of our proof is as follows. Note that for any parameter θ ∈ S*green*, there exists a Sµ,i s.t. θ ∈ Sµ,i and Sµ,i ⊂ S*green*. Therefore, we can bound the attack success rate if θ ∈ S*green*. At the same time, the probability of θ ∈ S*yellow* is bounded. Therefore, we can bound the overall attacker's success rate (i.e., Πϵ,ωΘ ). More specifically, let the optimal attacker be gˆ ∗. We have Πϵ,ωΘ = P (ˆg ∗(θ ′) ∈ [g (θ) − ϵ, g (θ) + ϵ]) = Z θ∈Sgreen p(θ)P (ˆg ∗(θ ′) ∈ [g (θ) − ϵ, g (θ) + ϵ]) dθ + Z θ∈Syellow p(θ)P (ˆg ∗(θ ′) ∈ [g (θ) − ϵ, g (θ) + ϵ]) dθ <2ϵ (t0 + 1) |(1 − α)t0 − α|s +2s · t0 (t0 + 1) (m − m) +s 2 2 (m − m) 2 . The second term 2s·t0 (t0+1)(m−m) bounds the probability of the yellow region except for the right-bottom triangle, and the last term s 2 2(m−m) 2 is the probability of the right-bottom triangle. For the distortion, it is straightforward to get that ∆ = (t 2 0+1)s 4(t0+1)2 from Eq. (23), and ∆opt > ⌈1 Πϵ,ωΘ ⌉ − 1 · 2γϵ ≥ 2γϵ, where γ is defined in Corollary 4. We can get that Πϵ,ωΘ −2s·t0 (t0+1)(m−m) −s 2 2(m−m) 2 · ∆ < 2γϵ and ∆ = ∆opt + ∆ − ∆opt < ∆opt + ∆ − ⌈1 Πϵ,ωΘ ⌉ − 1 · 2γϵ ≤ ∆opt + 2γϵ + ∆ −2γϵ Πϵ,ωΘ < ∆opt + 2γϵ + 2s·t0 (t0+1)(m−m) +s 2 2(m−m) 2 2ϵ(t0+1) |(1−α)t0−α|s +2s·t0 (t0+1)(m−m) +s2 2(m−m) 2 · ∆ = 1 + |(1 − α)t0 − α|s 2ϵ (t0 + 1) 2s · t0 (t0 + 1) (m − m) +s 2 2 (m − m) 2 (∆opt + 2γϵ) ≤ 2 + |(1 − α)t0 − α|s ϵ (t0 + 1) · 2s · t0 (t0 + 1) (m − m) +s 2 2 (m − m) 2 ∆opt. When s 2 m−m → 0 as s,(m−m) → ∞, we can get that s 3 (m−m) 2 → 0. Therefore, in this case, lim sup s2 m−m →0∆ < 3∆opt. ## F.3 Privacy-Distortion Performance Of Mech. 3 **With Relaxed Assumption** For Gaussian distribution, we relax Asm. 5 as follows. Assumption 6. *The prior over Gaussian distribution parameters satisfies Supp* (μ, σ) = (a, b)|a ∈-µ, µ, b ∈ [σ, σ) , fμ,σ (*a, b*) = fμ (a) · fσ (b), and fμ (a) (resp. fσ (b)) is Lµ*-Lipschitz* (resp. Lσ*-Lipschitz) and has lower bound* kµ µ−µ with kµ ∈ (0, 1] (resp. kσ σ−σ with kσ ∈ (0, 1]). Based on Asm. 6, the Privacy-distortion performance of Mech. 3 is shown below. $\square$ Proposition 7. Under Asm. 6, Mech. 
3 has the following ∆ and Πϵ,ωΘ *value/bound:* ∆ = s2 r2 π e − 12 t 2 0 − t0s 2(1 − 2Φ (t0)), Πϵ,ωΘ < 2ϵ |t0+Qα| · hc + Lµ,σ s2 − t ∗ −ϵ |t0+Qα| i cs + Lµ,σ 2s2 − t ∗2 + M µ − µ,kµ µ − µ ,Lµ, 1 · M σ − σ,kσ σ − σ ,Lσ, 1 · (σ − σ)|t0|s, where c =kµkσ (µ−µ)·(σ−σ) , function M *satisfies* $$M\left(x,c,\mathcal{L},\mathcal{A}\right)=\begin{cases}\frac{\mathcal{A}}{x}+\frac{\mathcal{L}x}{2},&\text{if}c\leq\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\\ c+\sqrt{2\mathcal{L}\left(\mathcal{A}-c x\right)},&\text{if}c>\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\end{cases},$$ $\mathcal{L}_{n,\sigma}=\mathcal{L}_{\sigma}\cdot M\left(\frac{\eta_{\sigma}}{1+\eta_{\sigma}},\frac{\xi_{\sigma}}{2-\xi_{\sigma}},|t_{0}|\mathcal{L}_{\mu},\frac{1}{\eta_{0}}\right)+|t_{0}|\mathcal{L}_{\mu}\cdot M\left(\overline{\sigma}-\underline{\xi},\frac{\xi_{\sigma}}{2-\xi_{\sigma}},\mathcal{L}_{\sigma},1\right)$, $and\ t^{*}=\frac{\xi}{2}+\frac{\xi}{\mathcal{L}_{\mu,\sigma}}-\frac{\xi}{|t_{0}+\mathcal{L}_{\sigma}|}-\sqrt{\left(\frac{\xi_{\sigma}}{\mathcal{L}_{\mu,\sigma}}-\frac{\xi_{\sigma}}{|t_{0}+\mathcal{L}_{\sigma}|}\right)^{2}+\frac{2\xi_{\sigma}}{\mathcal{L}_{\sigma,\sigma}}}$. Proof. It is straightforward to get the formula for ∆ from Eq. (22). Here we focus on the proof for Πϵ,ωΘ . Similar to App. D.2.2, based on Lemma 1 and Lemma 2, we can get that sup θ∈Sgreen P (ˆg ∗(θ ′) ∈ [g (θ) − ϵ, g (θ) + ϵ]) = sup i∈N,µ,t′∈R R mins2 ,t′+ 2ϵ |t0+Qα| max{− s2 ,t′}fμ,σ (µ + t0 · t, σ + (i + 0.5) · s + t) dt R s2 − s2 fμ,σ (µ + t0 · t, σ + (i + 0.5) · s + t) dt ≤ 2ϵ |t0+Qα| · hc + Lµ,σ s2 − t ∗ −ϵ |t0+Qα| i cs + Lµ,σ 2s2 − t ∗2, where t ∗ = s 2 +c Lµ,σ −ϵ |t0+Qα| − rc Lµ,σ −ϵ |t0+Qα| 2+ 2cs Lµ,σ , Lµ,σ = Lσ ·M µ−µ |t0| , kµ µ−µ , |t0|Lµ,1 |t0| +|t0|Lµ · M σ − σ, kσ σ−σ ,Lσ, 1 , and c =kµkσ (µ−µ)·(σ−σ) . As for Rθ∈Syellow p(θ)dθ, we have Z θ∈Syellow p(θ)dθ ≤ M µ − µ,kµ µ − µ ,Lµ, 1 · M σ − σ,kσ σ − σ ,Lσ, 1 · Z θ∈Syellow dθ = M µ − µ,kµ µ − µ ,Lµ, 1 · M σ − σ,kσ σ − σ ,Lσ, 1 · (σ − σ)|t0|s. Above all, we can get that Πϵ,ωΘ < sup θ∈Sgreen P (ˆg ∗(θ ′) ∈ [g (θ) − ϵ, g (θ) + ϵ]) + Z θ∈Syellow p(θ)dθ. ≤ 2ϵ |t0+Qα| · hc + Lµ,σ s2 − t ∗ −ϵ |t0+Qα| i cs + Lµ,σ 2s2 − t ∗2 + M µ − µ,kµ µ − µ ,Lµ, 1 · M σ − σ,kσ σ − σ ,Lσ, 1 · (σ − σ)|t0|s, . where M(·, ·, ·, ·), c,Lµ,σ, t∗ are defined as above. ## G **Case Study With Secret = Standard Deviation** In this section, we discuss how to protect standard deviation for several continuous and discrete distributions. ## G.1 **Continuous Distributions** We consider the same distributions discussed in §6.2 and App. F: Gaussian, uniform, and (shifted) exponential distributions. Corollary 5 (Privacy lower bound, secret = standard deviation of a continuous distribution). *Consider* the secret function g (θ) = *standard deviation of* fXθ . For any T ∈ (0, 1), when Πϵ,ωΘ ≤ T*, we have* ∆ >⌈ 1 T ⌉ − 1· 2γϵ, where the value of γ *depends on the type of the distributions:* - *Gaussian:* $$\gamma=\operatorname*{min}_{t}{\sqrt{\frac{1}{2\pi}}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right),$$ where Φ *denotes the CDF of the standard Gaussian distribution.* - *Uniform:* γ = √3 4 . - *Exponential:* γ = 1 2 . - *Shifted exponential:* γ = ln 2 2 . The proof is in App. G.3. The bounds for Gaussian can be computed numerically, while the bounds for all other distributions are in closed form. Next, we present the data release mechanism for these distributions and the secret under the same assumption as Asm. 2. Mechanism 4 (For secret = standard deviation of a continuous distribution). 
*We design mechanisms for* each of the distributions. - *Gaussian:* $$\begin{array}{c}{{\mathcal{S}_{\mu,i}=\left\{\left(\mu+t_{0}\cdot t,\underline{{{\sigma}}}+(i+0.5)\cdot s+t\right)|t\in\left[-\frac{s}{2},\frac{s}{2}\right)\right\}\quad,}}\\ {{\theta_{\mu,i}^{\mathrm{s}}=\left(\mu,\underline{{{\sigma}}}+(i+0.5)\cdot s\right)\quad,}}\\ {{\mathcal{I}=\left\{\left(\mu,i\right)|i\in\mathbb{N},\mu\in\mathbb{R}\right\},}}\end{array}$$ where s is a hyper-parameter of the mechanism that divides (σ − σ) and $$t_{0}=\arg\operatorname*{min}_{t}{\sqrt{\frac{1}{2\pi}}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right).$$ - *Uniform:* $\mathcal{S}_{m,i}=\left\{(m-t,m+(i+0.5)\cdot s+t)\,|t\in\left(-\frac{s}{4},\frac{s}{4}\right]\right\}\;\;,$ $\theta_{m,i}^{*}=\left(m,m+(i+0.5)\cdot s\right)\;\;\;,$ $\mathcal{I}=\left\{(m,i)\,|i\in\mathbb{Z}_{>0},m\in\mathbb{R}\right\},$ where s > 0 *is a hyper-parameter of the mechanism that divides* (m − m). - *Exponential:* $$\begin{array}{l}{{S_{i}=\left[\underline{{{\lambda}}}+i\cdot s,\underline{{{\lambda}}}+(i+1)\cdot s\right)}}\\ {{\theta_{i}^{*}=\underline{{{\lambda}}}+(i+0.5)\cdot s\;\;,}}\\ {{\mathcal{I}=\mathbb{N},}}\end{array}$$ where s > 0 *is a hyper-parameter of the mechanism that divides* λ − λ. - *Shifted exponential:* $$\begin{array}{c}{{\mathcal{S}_{i,h}=\left\{\left(\underline{{{\lambda}}}+\left(i+0.5\right)s+t,h-\ln2\cdot t\right)|t\in\left[-\frac{s}{2},\frac{s}{2}\right)\right\}\quad,}}\\ {{\theta_{i,h}^{*}=\left(\underline{{{\lambda}}}+\left(i+0.5\right)s,h\right)\quad,}}\\ {{\mathcal{I}=\left\{(i,h)|i\in\mathbb{N},h\in\mathbb{R}\right\},}}\end{array}$$ where s > 0 *is a hyper-parameter of the mechanism that divides* λ − λ. These data release mechanisms achieve the following ∆ and Πϵ,ωΘ . Proposition 8. Under Asm. 2, Mech. 4 has the following ∆ and Πϵ,ωΘ *value/bound.* * _Gaussian_: $$\Pi_{\epsilon,\omega_{0}}<\frac{2\epsilon}{s}+\frac{|t_{0}|s}{\overline{\mu}-\mu},$$ $$\Delta=\frac{s}{2}\sqrt{\frac{2}{\pi}}e^{-\frac{1}{2}t_{0}^{2}}-\frac{t_{0}s}{2}\left(1-2\Phi\left(t_{0}\right)\right)<\left(2+\frac{|t_{0}|s^{2}}{\left(\overline{\mu}-\mu\right)\epsilon}\right)\Delta_{opt},$$ _where $t_{0}$ is defined in **Mech. 4**. Under the "high-precision" regime where $\frac{s^{2}}{\overline{\mu}-\mu}\to0$ as → 0 as s,(µ − µ) → ∞, ∆ satisfies $$\operatorname*{lim}_{\begin{array}{c}{{\frac{s^{2}}{\mu-\underline{{\mu}}}}\to0}\end{array}}\Delta<3\Delta_{o p t}.$$ - *Uniform:* $$\Pi_{e,\omega_{0}}<\frac{4\sqrt{3}e}{s}+\frac{s}{(\overline{m}-\underline{m})}+\frac{s^{2}}{2\,(\overline{m}-\underline{m})^{2}},$$ $$\Delta=\frac{s}{8}<\left(2+\frac{s}{2\sqrt{3}e}\cdot\left(\frac{s}{\overline{m}-\underline{m}}+\frac{s^{2}}{2\,(\overline{m}-\underline{m})^{2}}\right)\right)\Delta_{opt}.$$ _Under the "high-precision" regime where $\frac{s^{2}}{\overline{m}-\underline{m}}\to0$ as $s,(\overline{m}-\underline{m})\to\infty$, $\Delta$ satisfies_ $$\lim_{\begin{array}{c}\underline{s^{2}}\\ \overline{m}-\underline{m}\end{array}}\Delta<3\Delta_{opt}.$$ - *Exponential:* $$\Pi_{\epsilon,\omega_{\Theta}}=$$ $$\Delta=$$. 
$$=\frac{2\epsilon}{s},$$

$$=\frac{1}{2}s<2\Delta_{opt}.$$

- *Shifted exponential:*

$$\Pi_{\epsilon,\omega_{\Theta}}<\frac{2\epsilon}{s}+\frac{s\ln2}{\overline{h}-\underline{h}},$$

$$\Delta=\frac{s\ln2}{2}<\left(2+\frac{s^{2}\ln2}{\epsilon\left(\overline{h}-\underline{h}\right)}\right)\Delta_{opt}.$$

Under the "high-precision" regime where $\frac{s^{2}}{\overline{h}-\underline{h}}\to0$ as $s,(\overline{h}-\underline{h})\to\infty$, $\Delta$ satisfies

$$\lim_{\frac{s^{2}}{\overline{h}-\underline{h}}\to0}\Delta<3\Delta_{opt}.$$

The proof is in App. G.4. For Gaussian, exponential and shifted exponential distributions, we relax Asm. 2 and analyze the privacy-distortion performance of Mech. 4 in App. G.5. From these propositions, we have similar takeaways as the α-quantile case (§6.2): (1) the data holder can use $s$ to control the trade-off between distortion and privacy, and (2) the mechanism is order-optimal under the "high-precision" regime.

## G.2 **Discrete Distributions**

Here, we consider the same discrete distributions studied in App. E: geometric distributions, binomial distributions, and Poisson distributions. We first analyze the lower bound.

Corollary 6 (Privacy lower bound, secret = standard deviation of a discrete distribution). *Consider the secret function* $g\left(\theta\right)=$ *standard deviation of* $f_{X_{\theta}}$. For any $T\in(0,1)$, when $\Pi_{\epsilon,\omega_{\Theta}}\leq T$, *we have* $\Delta>\left(\lceil\frac{1}{T}\rceil-1\right)\cdot2\gamma\epsilon$, *where the value of* $\gamma$ *depends on the type of the distribution:*

- *Geometric:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{\left(1-\theta_{2}\right)^{h(\theta_{1},\theta_{2})}-\left(1-\theta_{1}\right)^{h(\theta_{1},\theta_{2})}}{2\left(\frac{\sqrt{1-\theta_{2}}}{\theta_{2}}-\frac{\sqrt{1-\theta_{1}}}{\theta_{1}}\right)},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\log(\theta_{2})-\log(\theta_{1})}{\log(1-\theta_{1})-\log(1-\theta_{2})}\rfloor+1$.

- *Binomial:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{I_{1-\theta_{2}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)-I_{1-\theta_{1}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)}{2\left|\sqrt{n\theta_{2}\left(1-\theta_{2}\right)}-\sqrt{n\theta_{1}\left(1-\theta_{1}\right)}\right|},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor k'\rfloor$, $k'=n\ln\left(\frac{1-\theta_{2}}{1-\theta_{1}}\right)\Big/\ln\left(\frac{\theta_{1}\left(1-\theta_{2}\right)}{\theta_{2}\left(1-\theta_{1}\right)}\right)$, and $I$ *represents the regularized incomplete beta function.*

- *Poisson:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{2}\right)-Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{1}\right)}{2\left(\sqrt{\theta_{1}}-\sqrt{\theta_{2}}\right)},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\theta_{1}-\theta_{2}}{\ln(\theta_{1})-\ln(\theta_{2})}\rfloor+1$ and $Q$ *is the regularized gamma function.*

The proof is in App. G.6. The above lower bounds can be computed numerically. Since these distributions only have one parameter, we can use Alg. 1 and Alg. 3 to derive a data release mechanism. The performance of the greedy-based and dynamic-programming-based data release mechanisms for each distribution is shown in Fig. 14.

Figure 14: Privacy-distortion performance of Alg. 1 and Alg.
3 for binomial and Poisson distribution when secret = standard deviation. ## G.3 Proof Of **Corollary 5** G.3.1 **Gaussian Distribution** Proof. Let Xµ1,σ2 , Xµ2,σ2 be two Gaussian random variables with means µ1, µ2 and sigmas σ1, σ2 respectively, where σ1 ̸= σ2. Let Φ denotes the CDF of the standard Gaussian distribution. We can get that $$\begin{array}{c}{{D\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)=\left(\mu_{1}-\mu_{2}\right)\left(\Phi\left(\frac{\mu_{1}-\mu_{2}}{\sigma_{2}-\sigma_{1}}\right)-\frac{1}{2}\right)}}\\ {{+\sqrt{\frac{1}{2\pi}}\left(\sigma_{2}-\sigma_{1}\right)e^{-\frac{1}{2}\left(\frac{\mu_{1}-\mu_{2}}{\sigma_{2}-\sigma_{1}}\right)^{2}},}}\\ {{R\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)=|\sigma_{1}-\sigma_{2}|.}}\end{array}$$ Let µ1−µ2 σ1−σ2 ≜ t, we can get that $${\frac{D\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}{R\left(X_{\mu_{1},\sigma_{1}},X_{\mu_{2},\sigma_{2}}\right)}}={\sqrt{\frac{1}{2\pi}}}e^{-{\frac{1}{2}}t^{2}}-t\left({\frac{1}{2}}-\Phi\left(t\right)\right)\triangleq h\left(t\right).$$ Therefore we can get that $$\gamma=\operatorname*{min}_{t}h\left(t\right).$$ ## G.3.2 **Uniform Distribution** Proof. Let Xm1,n1 , Xm2,n2 be two uniform random variables. Let FXm1,n1 , FXm2,n2 be their CDFs, and let m2 ≥ m1 without loss of generality. We can get that D (Xm1,n1 , Xm2,n2 ) = 12 dWasserstein-1 ωXm1,n1 ∥ωXm2,n2 = 1 2 Z +∞ −∞ |FXm1,n1 (x) − FXm2,n2 (x)|dx = ( m2−m1+n2−n1 4n2 ≥ n1 (m2−m1) 2+(n1−n2) 2 4(m2−m1+(n1−n2)) n2 < n1 , R (Xm1,n1 , Xm2,n2 ) = 1 √12 (n1 − m1) −1 √12 (n2 − m2) =1 √12 |m2 − m1 − (n2 − n1)|. Therefore, we can get that when n2 ≥ n1, we have $${\frac{D\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}{R\left(X_{m_{1},n_{1}},X_{m_{2},n_{2}}\right)}}={\frac{\sqrt{3}}{2}}{\frac{m_{2}-m_{1}+n_{2}-n_{1}}{\left|m_{2}-m_{1}-\left(n_{2}-n_{1}\right)\right|}}$$ $$\geq{\frac{\sqrt{3}}{2}}.$$ When n2 < n1, we have D (Xm1,n1 , Xm2,n2 ) R (Xm1,n1 , Xm2,n2 ) = √3 2 (m2 − m1) 2 + (n1 − n2) 2 (m2 − m1 + (n1 − n2))2 = √3 2 (m2 − m1) 2 + (n1 − n2) 2 (m2 − m1) 2 + (n1 − n2) 2 + 2 (m2 − m1) (n1 − n2) ≥ √3 2· (m2 − m1) 2 + (n1 − n2) 2 2 h(m2 − m1) 2 + (n1 − n2) 2i = √3 4 . Therefore we can get that $$\gamma={\frac{\sqrt{3}}{4}}.$$ ## G.3.3 **Exponential Distribution** Proof. Let Xλ1 , Xλ2 be two exponential random variables. We have $${\frac{D\left(X_{\lambda_{1}},X_{\lambda_{2}}\right)}{R\left(X_{\lambda_{1}},X_{\lambda_{2}}\right)}}={\frac{{\frac{1}{\lambda_{1}}}-{\frac{1}{\lambda_{2}}}}{2\left({\frac{1}{\lambda_{1}}}-{\frac{1}{\lambda_{2}}}\right)}}={\frac{1}{2}}.$$ Therefore we can get that $$\gamma={\frac{1}{2}}.$$ ## G.3.4 **Shifted Exponential Distribution** Proof. Let Xλ1,h1 , Xλ2,h2 be random variables from shifted exponential distributions. Let λ2 ≤ λ1 without loss of generality. Let a = λ1 λ2 and b = (h1/λ1 − h2/λ2) λ2. We can get that fXλ1,h1 (x) = afXλ2,h2 (a (x + b)), and D (Xλ1,h1 , Xλ2,h2 ) = 12 dWasserstein-1 ωXλ1,h1 ∥ωXλ2,h2 = 1 2 Z +∞ h1 x − x a − b fXλ1,h1 (x) dx = λ2 2λ1 Z +∞ h1 |(1/λ2 − 1/λ1) x + h1/λ1 − h2/λ2| e − 1 λ1 (x−h1)dx = (1 2 (h2 − h1 + λ2 − λ1) − e h2−h1 λ2−λ1 (λ2 − λ1) (h1 < h2) 1 2 (h1 − h2 + λ1 − λ2) (h1 ≥ h2) , R (Xλ1,h1 , Xλ2,h2 ) = λ1 − λ2. (24) $$(24)$$ When λ1 = λ2 and h1 ̸= h2, we have D(Xλ1,h1 ,Xλ2,h2 ) R(Xλ1,h1 ,Xλ2,h2 ) = ∞. When λ1 ̸= λ2 and h1 < h2, let t = h2−h1 $\leq h_2$, let $t=\frac{1}{\lambda_1-\lambda_2}\in(0,\,t\infty)$. 
We have $$\begin{aligned} \frac{D\left(X_{\lambda_1,h_1}, X_{\lambda_2,h_2}\right)}{R\left(X_{\lambda_1,h_1}, X_{\lambda_2,h_2}\right)}&=\frac{h_2-h_1+\lambda_2-\lambda_1-2e^{\frac{h_2-h_1}{\lambda_2-\lambda_1}}\left(\lambda_2-\lambda_1\right)}{2\left(\lambda_1-\lambda_2\right)}\\ &=\frac{t+2e^{-t}-1}{2}\\ &\geq\frac{\ln2}{2}.\nonumber \end{aligned}$$ $t_2=\ln2$. ∈ (0, +∞). We have "=" achieves when t = t0 = ln 2. When λ1 ̸= λ2 and h1 ≥ h2, we have $\geq h_2$, we have $$\frac{D\left(X_{\lambda_1,h_1},X_{\lambda_2,h_2}\right)}{R\left(X_{\lambda_1,h_1},X_{\lambda_2,h_2}\right)}=\frac{h_1-h_2+\lambda_1-\lambda_2}{2\left(\lambda_1-\lambda_2\right)}\geq\frac{\lambda_1-\lambda_2}{2\left(\lambda_1-\lambda_2\right)}=\frac{1}{2}.$$ that Therefore we can get that $$\gamma={\frac{\ln2}{2}}.$$ ## G.4 Proof Of **Prop. 8** The proof outline is almost the same as the ones in App. C.4 and App. F.2. We omit the details and point to the proof sections where we can adapt from. ## G.4.1 **Gaussian Distribution** The proof is the same as App. F.2.1, except that we use the D (·, ·) and R (·, ·) from App. G.3.1. ## G.4.2 **Uniform Distribution** The proof is the same as App. F.2.2, except that we use the D (·, ·) and R (·, ·) from App. G.3.2. ## G.4.3 **Exponential Distribution** The proof is the same as App. C.4.1, except that we use the D (·, ·) and R (·, ·) from App. G.3.3. ## G.4.4 **Shifted Exponential Distribution** The proof is the same as App. C.4.2, except that we use the D (·, ·) and R (·, ·) from App. G.3.4. G.5 Privacy-Distortion Performance of Mech. 4 **with Relaxed Assumption** Based on Asm. 6 and Asm. 4, the Privacy-distortion performance of Mech. 4 is shown below. Proposition 9. Under Asm. 6 and Asm. 4, Mech. 4 has the following ∆ and Πϵ,ωΘ *value/bound.* - *Gaussian:* ∆ = s2 r2 π e − 12 t 2 0 − t0s 2(1 − 2Φ (t0)), Πϵ,ωΘ < 2ϵ ·-c + Lµ,σ s2 − t ∗ − ϵ cs + Lµ,σ 2s2 − t ∗2 + M µ − µ,kµ µ − µ ,Lµ, 1 · M σ − σ,kσ σ − σ ,Lσ, 1 · (σ − σ)|t0|s, where t0 is defined in *Mech. 4,* c =kµkσ (µ−µ)·(σ−σ) , function M *satisfies* $$M\left(x,c,\mathcal{L},\mathcal{A}\right)=\begin{cases}\frac{\mathcal{A}}{x}+\frac{\mathcal{L}x}{2},&\text{if}c\leq\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\\ c+\sqrt{2\mathcal{L}\left(\mathcal{A}-c x\right)},&\text{if}c>\frac{\mathcal{A}}{x}-\frac{\mathcal{L}x}{2}\end{cases},$$ $\mathcal{L}_{\mu,\sigma}=\mathcal{L}_{\sigma}M\left(\frac{\eta-\mu}{|\eta_{\sigma}|},\frac{b_{\sigma}}{b_{\sigma}^{2}-b_{\sigma}^{2}},|t_{0}|\mathcal{L}_{\sigma},\frac{1}{|\eta_{\sigma}|}\right)+|t_{0}|\mathcal{L}_{\sigma}M\left(\eta-\underline{\sigma},\frac{b_{\sigma}}{b_{\sigma}^{2}-b_{\sigma}^{2}},\mathcal{L}_{\sigma},1\right)$, and $t^{*}=\frac{1}{2}+\frac{\mu}{\mathcal{L}_{\sigma,\sigma}}-e-\sqrt{\left(\frac{\mu}{\mathcal{L}_{\sigma,\sigma}}-e\right)^{2}+\frac{2\mu}{\mathcal{L}_{\sigma,\sigma}}}$. 
$\bullet$_Exponential:_ $$\Delta=\frac{1}{2}s,$$ $$\Pi_{\epsilon,\omega\Theta}\leq\frac{2\epsilon\cdot\left[\underline{{{c}}}+\mathcal{L}\left(s-x^{*}+\epsilon\right)\right]}{\underline{{{c}}}s+\frac{\mathcal{L}}{2}\left(s-x^{*}\right)^{2}},$$ where x ∗ = s + $$\cdot\,{\frac{c}{\mathcal{L}}}+\epsilon-{\sqrt{\left({\frac{c}{\mathcal{L}}}+\epsilon\right)^{2}+{\frac{2c s}{\mathcal{L}}}}}.$$ - *Shifted exponential:* ∆ = s ln 2 2, Πϵ,ωΘ < 2ϵ ·-c + Lλ,h s2 − t ∗ − ϵ cs + Lλ,h 2s2 − t ∗2 + ln 2 · M h − h,kh h − h ,Lh, 1 · M λ − λ,kλ λ − λ ,Lλ, 1 ·λ − λs, (h−h)·(λ−λ) , function M satisfies where c =khkλ M (x, c,L, A) = (A x + Lx 2 , if c ≤ A x − Lx 2 c +p2L(A − cx), if c > A x − Lx 2 , Lλ,h = LλM h−h ln 2 , kh h−h , ln 2 · Lh, 1 ln 2 + ln 2 · LhM λ − λ, kλ λ−λ , Lλ, 1 , and t ∗ =s2 +c Lλ,h− ϵ − rc Lλ,h − ϵ 2+ 2cs Lλ,h . The proofs are the same as App. F.3, App. D.2.1 and App. D.2.2, except that we use the D (·, ·), and R (·, ·) from App. G.3.1, App. G.3.3, and App. G.3.4. ## G.6 Proof Of **Corollary 6** G.6.1 **Geometric Distribution** Proof. Let Xθ1 and Xθ2 be two Geometric random variables with parameters θ1 and θ2 respectively. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy (1 − θ1) k ′ θ1 = (1 − θ2) k ′ θ2 and k0 = ⌊k ′⌋ + 1. Then we can get that $$\begin{array}{c}{{D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)}}\\ {{=\frac{1}{2}\left(1-\theta_{2}\right)^{k_{0}}-\frac{1}{2}\left(1-\theta_{1}\right)^{k_{0}},}}\\ {{R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{\sqrt{1-\theta_{2}}}{\theta_{2}}-\frac{\sqrt{1-\theta_{1}}}{\theta_{1}}.}}\end{array}$$ Therefore, we can get that $$\gamma=\operatorname*{inf}_{\underline{{{\theta}}}<\theta_{1}<\theta_{2}\leq\overline{{{\theta}}}}\frac{\left(1-\theta_{2}\right)^{k_{0}}-\left(1-\theta_{1}\right)^{k_{0}}}{2\left(\frac{\sqrt{1-\theta_{2}}}{\theta_{2}}-\frac{\sqrt{1-\theta_{1}}}{\theta_{1}}\right)}.$$ $$\square$$ ## G.6.2 **Binomial Distribution** Proof. Let Xθ1 and Xθ2 be two binomial random variables with parameters θ1 and θ2 respectively with fixed number of trials n. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy n k′ θ k ′ 1 (1 − θ1) n−k ′ = n k′ θ k ′ 2 (1 − θ1) n−k ′ and k0 = ⌊k ′⌋. We can get that $$\begin{array}{c}{{D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)}}\\ {{=\frac{1}{2}I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-\frac{1}{2}I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right),}}\\ {{R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\left|\sqrt{n\theta_{2}\left(1-\theta_{2}\right)}-\sqrt{n\theta_{1}\left(1-\theta_{1}\right)}\right|,}}\end{array}$$ where $n$ is the $n$-th order parameter. where I represents the regularized incomplete beta function. Therefore, we can get that $$\gamma=\operatorname*{inf}_{\underline{{{\theta}}}<\theta_{1}<\theta_{2}\leq\overline{{{\theta}}}}\frac{I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right)}{\left|\sqrt{n\theta_{2}\left(1-\theta_{2}\right)}-\sqrt{n\theta_{1}\left(1-\theta_{1}\right)}\right|}.$$ ## G.6.3 **Poisson Distribution** Proof. Let Xθ1 and Xθ2 be two Poisson random variables with parameters θ1 and θ2 respectively. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy θ k ′ 1 e −θ1 = θ k ′ 2 e −θ2 and k0 = ⌊k ′⌋ + 1. 
Then we can get that

$$D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)=\frac{1}{2}Q\left(k_{0},\theta_{2}\right)-\frac{1}{2}Q\left(k_{0},\theta_{1}\right),$$

$$R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\sqrt{\theta_{1}}-\sqrt{\theta_{2}},$$

where $Q$ is the regularized gamma function. Therefore, we can get that

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{Q\left(k_{0},\theta_{2}\right)-Q\left(k_{0},\theta_{1}\right)}{2\left(\sqrt{\theta_{1}}-\sqrt{\theta_{2}}\right)}.$$

$\square$

## H **Case Study With Secret = Fraction**

As indicated in S1 in §2.1, the fraction of discrete distributions can reveal sensitive information. In this section, we first present the results for ordinal distributions, where there is a specific formula for the fraction at each bin (i.e., the binomial, Poisson, and geometric distributions that we discussed in Apps. E and G.2). We then present the results for categorical distributions, where there is no constraint on the fractions of the bins so long as they are normalized.

## H.1 **Ordinal Distribution**

Here, we consider the same three discrete distributions studied in Apps. E and G.2: geometric distributions, binomial distributions, and Poisson distributions. We first analyze the lower bound. We assume that the secret is the fraction of the $j$-th bin.

Corollary 7 (Privacy lower bound, secret = fraction of an ordinal distribution). *Consider the secret function* $g\left(\theta\right)=f_{X_{\theta}}\left(j\right)$. For any $T\in(0,1)$, when $\Pi_{\epsilon,\omega_{\Theta}}\leq T$, *we have* $\Delta>\left(\lceil\frac{1}{T}\rceil-1\right)\cdot2\gamma\epsilon$, *where the value of* $\gamma$ *depends on the type of the distribution:*

- *Geometric:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{\left(1-\theta_{2}\right)^{h(\theta_{1},\theta_{2})}-\left(1-\theta_{1}\right)^{h(\theta_{1},\theta_{2})}}{2\left|\left(1-\theta_{2}\right)^{j}\theta_{2}-\left(1-\theta_{1}\right)^{j}\theta_{1}\right|},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\log(\theta_{2})-\log(\theta_{1})}{\log(1-\theta_{1})-\log(1-\theta_{2})}\rfloor+1$.

- *Binomial:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{I_{1-\theta_{2}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)-I_{1-\theta_{1}}\left(n-h(\theta_{1},\theta_{2}),1+h(\theta_{1},\theta_{2})\right)}{2\left|\binom{n}{j}\theta_{2}^{j}\left(1-\theta_{2}\right)^{n-j}-\binom{n}{j}\theta_{1}^{j}\left(1-\theta_{1}\right)^{n-j}\right|},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor k'\rfloor$, $k'=n\ln\left(\frac{1-\theta_{2}}{1-\theta_{1}}\right)\Big/\ln\left(\frac{\theta_{1}\left(1-\theta_{2}\right)}{\theta_{2}\left(1-\theta_{1}\right)}\right)$, and $I$ *represents the regularized incomplete beta function.*

- *Poisson:*

$$\gamma=\inf_{\underline{\theta}<\theta_{1}<\theta_{2}\leq\overline{\theta}}\frac{Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{2}\right)-Q\left(h\left(\theta_{1},\theta_{2}\right),\theta_{1}\right)}{2\left|\frac{\theta_{1}^{j}e^{-\theta_{1}}}{j!}-\frac{\theta_{2}^{j}e^{-\theta_{2}}}{j!}\right|},$$

where $h\left(\theta_{1},\theta_{2}\right)=\lfloor\frac{\theta_{1}-\theta_{2}}{\ln(\theta_{1})-\ln(\theta_{2})}\rfloor+1$ and $Q$ *is the regularized gamma function.*

The proof is in App. H.3. The above lower bounds can be computed numerically. Since these distributions only have one parameter, we can use Alg. 1 and Alg. 3 to derive a data release mechanism. The performance of the greedy-based and dynamic-programming-based data release mechanisms for each distribution is shown in Fig. 15.

Figure 15: Privacy-distortion performance of Alg. 1 and Alg. 3 for geometric, binomial and Poisson distribution when secret = fraction.
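As with the earlier corollaries, these constants are easiest to obtain by grid search. Below is a sketch for the geometric case; it is our own illustration (the function name is hypothetical), and its output can be plugged into Alg. 1 or Alg. 3:

```python
import numpy as np

def gamma_geometric_fraction(theta_lo, theta_hi, j, grid=300):
    # Grid search for the Corollary 7 constant gamma (geometric distribution,
    # secret = fraction of the j-th bin). The proof takes theta_1 > theta_2.
    thetas = np.linspace(theta_lo, theta_hi, grid)
    best = np.inf
    for i, t2 in enumerate(thetas[:-1]):
        for t1 in thetas[i + 1:]:
            h = np.floor((np.log(t2) - np.log(t1))
                         / (np.log(1 - t1) - np.log(1 - t2))) + 1
            num = (1 - t2) ** h - (1 - t1) ** h          # = 2 * D(X_t1, X_t2)
            den = 2 * abs((1 - t2) ** j * t2 - (1 - t1) ** j * t1)
            if den > 0:                                   # skip degenerate pairs
                best = min(best, num / den)
    return best

# Example usage: gamma_geometric_fraction(0.1, 0.9, j=2)
```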
## H.2 **Categorical Distribution** In this section, we consider categorical distributions where the fraction of each bin can be changed freely (as long as they are normalized). We assume that θ = (p1, p2*, . . . , p*C ) s.t. pi ∈ [0, 1] ∀i ∈ [C] and Pi pi = 1. Note that this is completely different from the distributions discussed in App. H.1 where the parameter of the distribution is one-dimensional. We first analyze the lower bound. Without loss of generality, we assume that we want to protect the fraction of the j-th bin, i.e. pj . Corollary 8 (Privacy lower bound, secret = fraction of a general discrete distribution). *Consider the secret* function g (θ) = p1. For any T ∈ (0, 1), when Πϵ,ωΘ ≤ T*, we have* ∆ >⌈ 1 T ⌉ − 1· ϵ. The proof is in App. H.4. Next, we present the data release mechanism under the following assumption. Assumption 7. The prior distribution of (p1, . . . , pC ) *is a uniform distribution over all the probability* simplex {(p1, . . . , pC )|pi ∈ [0, 1) ∀i ∈ [C] and Pi pi = 1}. Mechanism 5 (For secret = fraction of a categorical distribution). *The parameters of the mechanism are* as follows. $$\mathcal{S}_{p_{1},\ldots,p_{C}}=\left\{\left(p_{1}-\frac{t}{C-1},\ldots,p_{j-1}-\frac{t}{C-1},p_{j}+t,\right.\right.$$ $$\left.\left.p_{j+1}-\frac{t}{C-1},\ldots,p_{C}-\frac{t}{C-1}\right)\right|t\in\left[-\frac{s}{2},\frac{s}{2}\right)\right\}$$ $$\left.\theta_{p_{1},\ldots,p_{C}}^{*}=\left(p_{1}-T,\ldots,p_{j-1}-T,p_{j}+\left(C-1\right)T,\right.\right.$$ $$\left.p_{j+1}-T,\ldots,p_{C+1}-T\right)\ \,$$ where T = min {p1, . . . , pj−1, pj+1, . . . , pC , 0}*, and* $1,\:p_{j+1},\:\ldots,\:p_{C},0\},\:\:and\:\:$ . $$\mathcal{I}=\Bigg{\{}\left(p_{1},\ldots,p_{C}\right)\left|\forall i\ p_{i}\in\left(-\frac{s}{2\left(C-1\right)},1\right],\sum_{i}p_{i}=1,\right.$$ $p_{j}=\left(k+0.5\right)s,$ where $k\in\left\{0,1,\ldots,C-1\right\}\Bigg{\}}.$ Here s > 0 *is a hyper-parameter of the mechanism that divides 1.* This data release mechanism achieves the following privacy-distortion trade-off. Proposition 10. Under Asm. 7, Mech. 5 has the following Πϵ,ωΘ and ∆ *value/bound.* $$\begin{array}{c}{{\Pi_{\epsilon,\omega\Theta}<\frac{2\epsilon}{s}+1-\left(1-\frac{s}{C-1}\right)^{C-1},}}\\ {{\Delta=\frac{s}{2}<\left(2+\frac{s}{\epsilon}\right)\Delta_{o p t}.}}\end{array}$$ Under the regime sup (s) → Aϵ, where A is a constant larger than 2, ∆ *satisfies* $$\operatorname*{lim}_{\operatorname*{sup}(s)\to\mathcal{A}\epsilon}\Delta<(2+\mathcal{A})\Delta_{o p t}.$$ ∆opt is the minimal distortion an optimal data release mechanism can achieve given the privacy *Mech. 5* achieves. The proof is in App. H.5. To ensure that Πϵ,ωΘ < 1, s should satisfy s > 2ϵ. According to Prop. 10, the mechanism is order-optimal with multiplicative factor 2 + A when sup (s) → Aϵ, where A > 2. ## H.3 Proof Of **Corollary 7** H.3.1 **Geometric Distribution** Proof. Let Xθ1 and Xθ2 be two Geometric random variables with parameters θ1 and θ2 respectively. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy (1 − θ1) k ′ θ1 = (1 − θ2) k ′ θ2 and k0 = ⌊k ′⌋ + 1. 
Then we can get that $$\begin{array}{c}{{D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\left\|\omega_{X_{\theta_{2}}}\right.\right)}}\\ {{=\frac{1}{2}\left(1-\theta_{2}\right)^{k_{0}}-\frac{1}{2}\left(1-\theta_{1}\right)^{k_{0}},}}\\ {{R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\left|\left(1-\theta_{2}\right)^{j}\theta_{2}-\left(1-\theta_{1}\right)^{j}\theta_{1}\right|.}}\end{array}$$ Therefore, we can get that $$\gamma=\operatorname*{inf}_{\frac{\theta<\theta_{1}<\theta_{2}<\overline{{\theta}}}{2}}\frac{\left(1-\theta_{2}\right)^{k_{0}}-\left(1-\theta_{1}\right)^{k_{0}}}{\left|\left(1-\theta_{2}\right)^{j}\theta_{2}-\left(1-\theta_{1}\right)^{j}\theta_{1}\right|}\ \ .$$ ## H.3.2 **Binomial Distribution** Proof. Let Xθ1 and Xθ2 be two binomial random variables with parameters θ1 and θ2 respectively with fixed number of trials n. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy n k′ θ k ′ 1 (1 − θ1) n−k ′ = n k′ θ k ′ 2 (1 − θ2) n−k ′ and k0 = ⌊k ′⌋. We can get that $$\begin{array}{c}{{D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\|\omega_{X_{\theta_{2}}}\right)}}\\ {{=\frac{1}{2}I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-\frac{1}{2}I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right),}}\\ {{R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=n\left(\theta_{1}-\theta_{2}\right),}}\end{array}$$ where I represents the regularized incomplete beta function. Therefore, we can get that $$\gamma=\operatorname*{inf}_{\underline{{{\theta}}}<\theta_{1}<\theta_{2}\leq\overline{{{\theta}}}}\frac{I_{1-\theta_{2}}\left(n-k_{0},1+k_{0}\right)-I_{1-\theta_{1}}\left(n-k_{0},1+k_{0}\right)}{2\left|\binom{n}{j}\theta_{2}^{j}\left(1-\theta_{2}\right)^{n-j}-\binom{n}{j}\theta_{1}^{j}\left(1-\theta_{1}\right)^{n-j}\right|}.$$ ## H.3.3 **Poisson Distribution** Proof. Let Xθ1 and Xθ2 be two Poisson random variables with parameters θ1 and θ2 respectively. We assume that θ1 > θ2 without loss of generality. Let k ′satisfy θ k ′ 1 e −θ1 = θ k ′ 2 e −θ2 and k0 = ⌊k ′⌋ + 1. Then we can get that $$\begin{array}{c}{{D\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\frac{1}{2}d_{\mathrm{TV}}\left(\omega_{X_{\theta_{1}}}\left\|\omega_{X_{\theta_{2}}}\right.\right)}}\\ {{=\frac{1}{2}Q\left(k_{0},\theta_{2}\right)-\left.\frac{1}{2}Q\left(k_{0},\theta_{1}\right),}}\\ {{R\left(X_{\theta_{1}},X_{\theta_{2}}\right)=\left|\frac{\theta_{1}^{j}e^{-\theta_{1}}}{j!}-\frac{\theta_{2}^{j}e^{-\theta_{2}}}{j!}\right|,}}\end{array}$$ $$\square$$ where Q is the regularized gamma function. Therefore, we can get that $$\gamma=\operatorname*{inf}_{\underline{{{\theta}}}<\theta_{1}<\theta_{2}\leq\overline{{{\theta}}}}\frac{Q\left(k_{0},\theta_{2}\right)-Q\left(k_{0},\theta_{1}\right)}{2\left|\frac{\theta_{1}^{j}e^{-\theta_{1}}}{j!}-\frac{\theta_{2}^{j}e^{-\theta_{2}}}{j!}\right|}.$$ ## H.4 Proof Of **Corollary 8** Proof. Let Xp 1 1 ,p12 ,...,p1C and Xp 2 1 ,p22 ,...,p2C be two categorical random variables. We have $$D\left(X_{p_{1}^{1},p_{2}^{1},\ldots,p_{C}^{1}},X_{p_{1}^{2},p_{2}^{2},\ldots,p_{C}^{2}}\right)$$ $$=\frac{1}{2}d_{\rm TV}\left(\omega_{X_{p_{1}^{1},p_{2}^{1},\ldots,p_{C}^{1}}}\|\omega_{X_{p_{1}^{2},p_{2}^{2},\ldots,p_{C}^{2}}}\right)$$ $$\geq\frac{1}{2}\left|p_{j}^{1}-p_{j}^{2}\right|,\tag{25}$$ $$R\left(X_{p_{1}^{1},p_{2}^{1},\ldots,p_{C}^{1}},X_{p_{1}^{2},p_{2}^{2},\ldots,p_{C}^{2}}\right)$$ $$=\left|p_{j}^{1}-p_{j}^{2}\right|.$$ Therefore, we can get that $$\gamma\geq{\frac{1}{2}}...$$ ## H.5 Proof Of **Prop. 10** Proof. 
We first focus on the proof for $\Pi_{\epsilon,\omega_{\Theta}}$. We separate the space of possible data parameters into two regions:

$$\mathcal{S}_{1}=\left\{(p_{1},\ldots,p_{C})\,\middle|\,p_{i}\in\left[\frac{s}{2(C-1)},1-\frac{s}{2(C-1)}\right]\ \forall i\in[C]\text{ and }\sum_{i}p_{i}=1\right\}$$

and $\mathcal{S}_{2}=\{(p_{1},\ldots,p_{C})\,|\,p_{i}\in[0,1)\ \forall i\in[C]\text{ and }\sum_{i}p_{i}=1\}\setminus\mathcal{S}_{1}$.

The high-level idea of our proof is as follows. Note that for any parameter $\theta\in\mathcal{S}_{1}$, there exists a $\mathcal{S}_{p_{1},\ldots,p_{C}}$ s.t. $\theta\in\mathcal{S}_{p_{1},\ldots,p_{C}}$ and $\mathcal{S}_{p_{1},\ldots,p_{C}}\subset\mathcal{S}_{1}$. Therefore, we can bound the attack success rate if $\theta\in\mathcal{S}_{1}$. At the same time, the probability of $\theta\in\mathcal{S}_{2}$ is bounded. Therefore, we can bound the overall attacker's success rate (i.e., $\Pi_{\epsilon,\omega_{\Theta}}$). More specifically, let the optimal attacker be $\hat{g}^{*}$. We have

$$\begin{aligned}\Pi_{\epsilon,\omega_{\Theta}}&=\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)\\&=\int_{\theta\in\mathcal{S}_{1}}p(\theta)\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta+\int_{\theta\in\mathcal{S}_{2}}p(\theta)\mathbb{P}\left(\hat{g}^{*}\left(\theta'\right)\in\left[g\left(\theta\right)-\epsilon,g\left(\theta\right)+\epsilon\right]\right)d\theta\\&<\frac{2\epsilon}{s}+\left(1-\left(1-\frac{s}{C-1}\right)^{C-1}\right).\end{aligned}$$

For the distortion, it is straightforward to get that $\Delta=\frac{s}{2}$ from Eq. (25), and $\Delta_{opt}>\left(\lceil\frac{1}{\Pi_{\epsilon,\omega_{\Theta}}}\rceil-1\right)\cdot\epsilon\geq\epsilon$ from Corollary 8. We can get that $\left(\Pi_{\epsilon,\omega_{\Theta}}-\left(1-\left(1-\frac{s}{C-1}\right)^{C-1}\right)\right)\cdot\Delta<\epsilon$ and

$$\begin{aligned}\Delta&=\Delta_{opt}+\Delta-\Delta_{opt}\\&<\Delta_{opt}+\Delta-\left(\lceil\tfrac{1}{\Pi_{\epsilon,\omega_{\Theta}}}\rceil-1\right)\cdot\epsilon\\&\leq\Delta_{opt}+\epsilon+\Delta-\frac{\epsilon}{\Pi_{\epsilon,\omega_{\Theta}}}\\&<\Delta_{opt}+\epsilon+\frac{1-\left(1-\frac{s}{C-1}\right)^{C-1}}{\frac{2\epsilon}{s}+1-\left(1-\frac{s}{C-1}\right)^{C-1}}\cdot\Delta\\&=\left(1+\frac{s}{2\epsilon}\left(1-\left(1-\frac{s}{C-1}\right)^{C-1}\right)\right)\left(\Delta_{opt}+\epsilon\right)\\&\leq\left(2+\frac{s}{\epsilon}\left(1-\left(1-\frac{s}{C-1}\right)^{C-1}\right)\right)\Delta_{opt}\\&<\left(2+\frac{s}{\epsilon}\right)\Delta_{opt}.\end{aligned}$$

$\square$

## I **Additional Results**

In this section, we provide additional results on how released data from our mechanisms can support downstream applications.

We consider the salaries of people with Master's and PhD degrees in the Kaggle dataset https://www.kaggle.com/datasets/rkiattisak/salaly-prediction-for-beginer. We plot its histogram in Fig. 16.

Figure 16: Histogram of the salary dataset.

We can see that there are two peaks. They correspond to people with age<=40 and age>40 (see Fig. 17).

Figure 17: Histogram of the salary dataset for people with age <= 40 and > 40.

Assume the goal is to release this dataset and preserve the salary difference between people with age<=40 and age>40, while protecting the mean salaries. We can apply our mechanism for the mean (§6.3) on this dataset. The histogram of the released data is shown in Fig. 18.

Figure 18: Histogram of the salary dataset after applying the mechanism in §6.3.

Data receivers can obtain the salary difference between people with age<=40 and age>40 accurately by computing the difference between the two peaks, while the mean salaries are protected under our mechanism. Here we use the salary difference between people with age<=40 and age>40 as an example. In general, any downstream task that depends only on the "shape" of the distribution will not be affected by our mechanism, since our mechanism shifts all samples by the same amount.
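For concreteness, here is a minimal sketch of this release step. It is our own illustration of the §6.3 mechanism, assuming quantization bins of width $s$ anchored at 0; the function name and the column handling are assumptions, not the paper's code. Every sample is shifted by one common offset so that the released mean lands at the center of its bin, which leaves all pairwise differences, and hence the two peaks, untouched.

```python
import numpy as np

def release_mean_quantized(x, s):
    # Shift all samples by the same offset so the released mean equals the
    # center of its width-s quantization bin; the distribution's shape,
    # and thus any statistic of differences, is preserved exactly.
    mu = x.mean()
    mu_released = (np.floor(mu / s) + 0.5) * s
    return x + (mu_released - mu)

# Example with the salary data (file and column names are assumptions):
# salaries = pd.read_csv("salary.csv")["Salary"].to_numpy()
# released = release_mean_quantized(salaries, s=10_000)
# The distance between the age<=40 and age>40 modes is identical in
# `salaries` and `released`.
```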
Review 1: Summary: This paper studies the problem of sharing data when some of the properties of the underlying distribution are sensitive and must be kept private. They define notions of what “privacy” and “distortion” mean in this setting and design algorithms that achieve the optimal trade-off between these notions as they define them.

Strengths and Weaknesses: Comments: This paper is well-written and easy to follow. I did not read all the proofs in fine detail, but I did not see any technical errors. My main concern with this paper is the use of the metric defined in eqn (2) as a measure of privacy, and the ability of the quantization mechanism to provide meaningful privacy guarantees.

- The guarantee does not compose. That is, if I release the data (or summary statistics) using the quantization mechanism twice with two different quantizations, the combination of the two results may leak a lot more than intended. This makes this technique difficult to use in practice unless the original dataset is discarded after a single use.
- My understanding is that the bound on the privacy metric in the examples given in section 6 only holds when a particular prior belief is held by the adversary. The authors discuss this towards the end of the paper. This seems like a major drawback since, in practice, one would not know the adversary's prior.

I'm not convinced that there is not a variant of DP that is appropriate in this setting (although I agree that none of the versions that the authors discuss are appropriate). For example, in the example where the mean is the secret, one could protect the mean, up to a variation of size Delta, by adding noise Lap(Delta/eps) to the mean. This would likely have worse distortion than the quantization mechanism, but possibly not by too much, and would provide composable privacy guarantees. I suspect one could formalize the guarantee in this particular setting by defining two databases in the definition of DP to be “neighboring” if their means differ by at most Delta. Since the notion of summary statistic privacy presented in this paper seems much weaker than a definition like this, I think there should be convincing evidence that this weakening is necessary.

The motivation of protecting business secrets that are properties of the underlying distribution seems reasonable. However, I didn't think the authors talked enough about what they hoped to maintain about the database, and whether or not the goals were fundamentally at odds (although I guess the lower bound on the distortion indicates that the goals are at odds).

The experiments show that the quantization mechanism achieves the optimal privacy/distortion trade-off, but is the released data useful for downstream tasks? The authors don't really address this in their discussion of examples. The example of wanting to release the salary difference between men and women without revealing the mean salaries seems like an achievable goal, and would perhaps have been a useful task on which to measure success.

The authors state several times that standard DP (and the variants they discuss) solves a different problem than the one the authors are attempting to address. However, they still use this as a comparison point in the experiments. This isn't a fair comparison since these methods are basically designed to preserve summary statistics. Thus, the concentration of the DP-based methods in the low-distortion, low-“privacy” regime in Fig 6 is a feature, not a bug.
Something like adding noise to the summary statistic (as described above for the mean) seems like a much fairer comparison.

Minor comments:
- The citation for DP should be "Calibrating Noise to Sensitivity in Private Data Analysis" by Dwork, McSherry, Nissim and Smith. It's not true that DP has focused on low-dimensional statistical queries; there is a good amount of work on DP synthetic data (although I agree that it is usually in a different setting to this paper's focus).
- It would have been helpful to have the formulation of summary statistic privacy before the discussion of indistinguishability approaches to provide a comparison point (since I was familiar with these approaches and so wanted a comparison point). In particular, I struggled to understand throughout this section what the authors hoped to retain about the data.
- Just before eq 5, it is stated that d is defined in eqn 3, but it is not.

Requested Changes:
- Evaluate performance on downstream tasks to provide evidence that the produced synthetic data is useful.
- Discuss whether or not there exists a variant of DP that could be appropriate (e.g. by defining databases as neighboring if their "secret" summary statistics are close?). This would provide composability of the privacy guarantee.

Broader Impact Concerns: I have no ethical concerns.

==================================================

Review 2:
Summary: The paper proposes a new approach to provide privacy and quantify privacy-accuracy (here, distortion) trade-offs when the goal is to protect dataset-level properties or attributes (such as, for example, the mean value of some column of a dataset). More details on the contributions in "strengths".

Strengths and Weaknesses: The paper proposes a new approach to provide privacy and quantify privacy-accuracy (here, distortion) trade-offs when the goal is to protect dataset-level properties or attributes (such as, for example, the mean value of some column of a dataset).

First, in terms of strengths:
- Within their framework, the authors provide a lower bound (Theorem 1) on the privacy-distortion trade-off. This lower bound comes with nice and intuitive properties in how it scales with the privacy budget, and a parameter $\gamma$ that measures how much a change in the value of the secret (the dataset-level property we are trying to protect) changes the data distribution. Intuitively, if a small change in the secret creates a large change in the generated data, privacy is hard to protect information-theoretically.
- Sections 6.1 and 6.2 are nice in that the authors show a couple of cases of practical interest (computing means and quantiles) where they develop mechanisms that approximately achieve the lower bound on privacy-distortion trade-offs.

However, I have major concerns about the paper, its motivation, and how it situates itself in the literature. My concerns are below:

1) I am a bit confused about the discussion of other privacy techniques. Some points here:
- My understanding is that the authors claim that differential privacy is inappropriate here for several reasons. One of the reasons is that it protects the data of each individual data point, rather than some property of the entire dataset. But I want to point out here that group privacy can translate a notion of privacy at the level of each individual data point to the level of the whole dataset. I found the discussion there inaccurate and confusing, and I think it may need to be made more careful.
- The authors acknowledge work on releasing dataset-level statistics while preserving privacy (e.g. Zhang et al., 2022, which I will focus on because I am more familiar with this paper than others). One critique the authors make is that they focus on "low-dimension statistical queries of the dataset instead of the entire dataset". But this is also what the current paper is doing (aside from Section 6.3, which only contains an overall framework with no real analysis or evidence that it does something reasonable), so I do not quite understand the novelty. I also think the authors missed the point of Zhang et al. 2022: the point of that paper is to deal with correlations across different columns of a dataset, and the problem of "how do I release information about the dataset while not revealing information about one secret in the dataset, and deal with the fact that these two things might be correlated". The current paper takes a different point of view of "how do I reveal information about the secret but not too much information" (e.g., in Section 6.1, the protected secret is the mean, and the technique releases quantized estimates of the mean to reduce the information released, if I understood well). This comparison is, I believe, apples-to-oranges. Also, on that note, Section 7 claims to compare their results to those of Zhang et al. 2022, but the considered benchmark (simply adding Gaussian noise to the secret) is a simplified and inaccurate version of what Zhang et al. 2022 do (where the Gaussian noise has to be carefully computed to account for correlation between the query we want to answer, F(X), and the dataset-level attributes we are aiming to protect). This leads me to points 2 and 3 below.

2) A major issue seems to be that the paper does not seem to deal with correlations. To give an example of this, I will go to the end of p7/beginning of p8 and the corresponding Gaussian example, as well as protecting the mean in Section 6.1. There, the goal is to hide the mean of a distribution (the secret). Say that I focus on a Gaussian distribution like in p7-8, and I apply the mechanism proposed by the authors (deterministic quantization) on the mean. Now imagine that I know there is correlation between the mean and the variance of my Gaussian distribution (e.g. the prior is a joint distribution of $(\mu,\sigma)$ rather than two independent distributions); then, no matter what I do in terms of quantization to protect the mean, if I perfectly release the variance, I release information about the mean through the correlation between mean and variance. This, to the best of my knowledge, seems not to be taken into account in the paper (which focuses only on quantizing the secret itself without thinking about/modeling what information is released by other correlated attributes), which is a step back compared to previous work (e.g. Zhang et al 2022, etc.) and potentially a major oversight from a privacy point of view.

3) Finally, I am not sure the problem asked by the paper makes practical sense. Effectively, what the paper asks for is "can I release a dataset-level property without releasing much information about said dataset-level property?". It is effectively asking for something that is impossible regardless of the frameworks/techniques used: reveal an accurate statistic about, say, the mean of the population, without anyone learning information about that statistic. You can only have one of those at a time.
The quantization mechanism for mean estimation, for example, to have good distortion/accuracy, must narrow down a small bin in which the true parameter belongs; doing this effectively reveals the mean/secret itself to high accuracy, and no reasonable privacy is then possible. Of course, you can still quantify how much distortion you can get versus privacy and try to understand this trade-off in a fine-grained manner like the paper does, but I think this is missing a next step to be of practical interest here. Something that could be useful, for example, is to say (this is a possible direction, but may not be the right one): "I have access to n agents, each agent has a secret, and I do not want to learn the secret of any specific agent... but I am interested in learning the average secret over all agents" (note this is what DP already does, but a comparison could be interesting in terms of the different metric of privacy proposed here). Also note that there are no composition guarantees here, unlike in DP (for example); in fact, it is not hard to show that I could run two quantization mechanisms with relatively big values of the step sizes s and s' that are very close to each other (which, in the authors' framework, would provide good privacy), yet for some values of the secret I could narrow down a very small bin to which it belongs (e.g., the secret belongs in [u + s, u + s']), providing close to no privacy.

4) Some of the results feel incomplete. E.g., section 5, where the privacy analysis is written in terms of a hard-to-interpret integral, with a handwavy justification that some terms are bounded away from 1. Such arguments need to be made formal and precise for the paper to be self-contained.

For the reasons above, I am voting to reject the paper. There is an interesting high-level information-theoretic approach, but the motivation and the actual implementation (e.g. in the case studies) have, I believe, some serious issues.

Requested Changes: I do not think the paper can really be changed in a way that addresses my concerns without completely changing the results. I am sorry to say that I currently do not see a path to acceptance, unless I am seriously misunderstanding something about the contribution of the paper (in which case I would be happy to revise my review).

Broader Impact Concerns: N/A. The point is to protect privacy, so the goal of the paper is well aligned with broader impact considerations.

==================================================

Review 3:
Summary: This paper aims to find a way to protect the privacy of statistical metrics when data is released to the public. The authors argue that the current methods used to protect privacy are not effective enough. To address this issue, the authors have developed a new framework that allows data holders to release data while maintaining privacy. They have also calculated the maximum amount of distortion that can occur. Based on this calculation, they have created mechanisms for data release that balance privacy protection and data accuracy. The authors have conducted experiments to show that their approach outperforms existing methods when measured by their specific definitions of privacy and distortion.

Strengths and Weaknesses: Improving the privacy of data releases, especially in the field of machine learning, is crucial given the current state of data privacy. One way to achieve this is by exploring alternative definitions of privacy that can enhance the effectiveness of data release mechanisms.
The paper argues that using differential privacy does not protect the statistics of the data release because, in expectation, for example, the mean of the data is still the same. However, the cited work does not exactly show this. Based on my understanding, differential privacy does not work in expectation, and you have to pay a privacy cost every time you run the mechanism. Especially since the experiment section of this work also shows that differential privacy provides reasonable protection against the defined attacks. Moreover, Census data uses differential privacy [R1]. It would be necessary for the work to come up with an example where, for instance, differential privacy does not protect the target statistics.

One of the main parts of the paper is Theorem 1; however, I had a hard time following the proof. It would benefit the work to do a proofread. In Equation 7, I did not follow how the authors went from Sup E[P()] to E[Sup P()]. The argument in the next paragraph also does not read well to me. The authors argue RHS <= LHS because "ĝ behaves according to θ′", but as far as I understand, ĝ can be any attack; therefore I can fix ĝ to a fixed deterministic function, making it independent of θ′. Is there anything specific that I am missing? Again, in equation 9 the authors use θ, θ′, θi, θ1, θ2, which is confusing.

In general, the main downside of the work is the fact that this framework uses a specific property of the data as the privacy leakage. What happens if the data holder initially only cares about the mean of the data but at a later date finds out that another property of the data is very important? The authors should discuss such scenarios more in the paper.

[R1] Cohen, Aloni, et al. "Census TopDown: The impacts of differential privacy on redistricting." arXiv preprint arXiv:2203.05085 (2022).

Requested Changes:
* Add a convincing example of why frameworks such as DP, with the correct sensitivity and mechanism, cannot protect statistics such as the mean under the surrogate metrics used in the work.
* Improve the proofs.
* Explore the failures of this privacy framework.

Broader Impact Concerns: no ethical issues

==================================================

Metareview:
Recommendation: Reject
Comment: All three reviewers felt that the work does not yet adequately meet the TMLR acceptance requirement "Are the claims made in the submission supported by accurate, convincing and clear evidence?" Since the other claims in the paper would need to be re-evaluated in light of such a revision, acceptance with minor revision was felt to be inappropriate. The authors are encouraged to revise based on the stated critiques and to consider resubmitting. Indeed, beyond the definition itself and within the framework proposed, the reviewers were more positive. One reviewer also said that the motivation is interesting and was positive about a solution to this problem.

==================================================
# Divine: Diverse-Inconspicuous Feature Learning To Mitigate Abridge Learning

Anonymous authors
Paper under double-blind review

## Abstract

Deep learning algorithms aim to minimize overall error and exhibit impressive performance on test datasets across various domains. However, they often struggle with out-of-distribution data samples. We posit that deep models primarily focus on capturing the prominent features beneficial for the task while neglecting other subtle yet discriminative features. This phenomenon is referred to as *Abridge Learning*. To address this issue and promote a more comprehensive learning process from data, we introduce a novel *DIVerse and INconspicuous feature lEarning* (DIVINE) approach aimed at counteracting Abridge Learning. DIVINE embodies a holistic learning methodology, effectively utilizing data by engaging with its diverse dominant features. Through experiments conducted on ten datasets, including MNIST, CIFAR10, CIFAR100, TinyImageNet, and their corrupted and perturbed counterparts (CIFAR10-C, CIFAR10-P, CIFAR100-C, CIFAR100-P, TinyImageNet-C, and TinyImageNet-P), we demonstrate that DIVINE encourages the learning of a rich set of features. This, in turn, boosts the model's robustness and its ability to generalize. On perturbed out-of-distribution datasets, DIVINE achieves a mean Flip Rate (mFR) of 5.36%, 3.10%, and 21.85% on the CIFAR10-P, CIFAR100-P, and TinyImageNet-P datasets, respectively. In contrast, Abridge Learning on the CIFAR10-P, CIFAR100-P, and TinyImageNet-P datasets yields 6.53%, 11.75%, and 31.90% mFR, respectively.

## 1 Introduction

Deep learning algorithms have achieved tremendous success in several tasks including image classification (Dehghani et al., 2023), (Su et al., 2023), object detection (Wang et al., 2023b), (Zong et al., 2022), (Tan et al., 2020), and segmentation (Fang et al., 2023), (Xie et al., 2020). However, the robustness and generalizability of these algorithms in real-world scenarios is still an open problem. Supervised learning tasks in deep neural networks primarily focus on maximizing classification accuracy by learning the easiest solutions/patterns that exist in the entire dataset (Geirhos et al., 2020). In other words, models take *shortcuts* by learning only the dominant input features that are sufficient for confident classification (Li et al., 2023), (Ilyas et al., 2019), (Pezeshki et al., 2021). This results in the model ignoring other useful and distinct features that could be helpful for classification. Therefore, as shown in Figure 1, these models often fail to classify *out-of-distribution* samples. In this research, we term the above-mentioned learning process "Abridge Learning". Formally, Abridge Learning (AL) is defined as the "learning process in which a model learns only the dominant input features while failing to learn other useful input features for the target task". The solution obtained by the Abridge Learning process lacks generalizability and is not suitable for deployment in real-world scenarios. To mitigate the problem of Abridge Learning, one solution is to identify the inconspicuous discriminative input features and use them to learn a diverse unified model that is generalizable to unseen real-world datasets. In an ideal training process, a model should identify and learn all the input features present; however, the identification of all the input features is intractable.
Therefore, it is important to identify a set of diverse input features that can provide a generalized solution.

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of *Abridge Learning* (AL) and the proposed method. A model conventionally trained with AL learns only the 'paw' input feature and ignores other features. This results in failure of the model on an image in which the 'paw' feature is missing. The proposed DIVINE learns 'paw' along with the feature 'ear', resulting in successful classification of the image where the 'paw' feature is missing and the 'ear' feature is present.

Research Contributions: Existing methods address the problem of robustness and lack of generalizability in models; however, they do not explicitly focus on learning the inconspicuous features from the dataset that are also discriminative in nature. This leads to the learning of shortcut features that work well for a given task on the given data but cannot hold up under real-world variations in the data. To bridge this gap, we propose a novel learning method termed DIVINE, i.e., Diverse-Inconspicuous Feature Learning, which mitigates this problem by removing the shortcuts and learning a diverse set of input features. The objective of the proposed method is two-fold: (a) identification of the minimum number of diverse inconspicuous discriminative input features, as illustrated in Figure 1, and (b) training of a unified model that learns the identified features for enhanced generalizability. The diverse features identified using the proposed method are disjoint, which in turn maximizes the diverse learning of the model. Experiments for image classification tasks on the CIFAR10, MNIST, CIFAR10-C, and CIFAR100-C datasets show that DIVINE generalizes well and can be applied to different machine learning tasks.

## 2 Related Work

The notion of "Abridge Learning" applies broadly to the training of deep learning models. In recent research, this learning scheme has been termed shortcut learning1 (Geirhos et al., 2020), where the model finds shortcuts to minimize the loss, primarily by picking up only the dominant features in the input. This has been the traditional way of training models. Carter et al. (2021) have shown that only 5% spurious pixel subsets are enough for confident prediction, leading to over-interpretation of pixels and resulting in shortcut learning. Such shortcuts were also recently identified in a medical imaging task (Oakden-Rayner et al., 2020), where the model picked up undesired tumor patterns in the dataset. It gave a deceptively reasonable performance, which went undetected because of the chosen evaluation metric. This implies that such spurious correlations can also remain unobserved because of the selected evaluation metrics. Lapuschkin et al. (2019) discussed Clever Hans strategies in the domains of Computer Vision and Arcade Gaming. They generated heatmaps and showed the patterns the model focuses on while taking shortcuts. Various drawbacks of the shortcut learning process have recently been identified and addressed by the community. One such learning process, termed Gradient Starvation (Pezeshki et al., 2021), shows that a deep learning model extracts a subset of features to minimize the training loss, while the gradients from other potentially essential features starve. Du et al. (2020) proposed the CREX method to avoid shortcuts; it regularizes the subset of features annotated by experts. Wang et al.
(2023a) proposed DFM-X, which uses prior knowledge from previous models when training the target model to enhance its generalizability and robustness. Zhang et al. (2023) proposed the SADA method, which generates images for data augmentation corresponding to highly sensitive frequencies. Gao et al. (2023) proposed a DDA-based adaptation method, which learns a diffusion model on the source data and projects the target input to the source during testing. Guo et al. (2023) proposed a method to reduce the sensitivity of vision transformers to patch-based corruptions. Ross & Doshi-Velez (2018) found that networks trained with input-gradient regularization are more robust and generalizable. A similar approach to making models more robust involves regularizing the gradient norm of the model output with respect to the inputs; such Jacobian-based regularizers have been shown to significantly improve classification accuracy (Varga et al., 2017). Another approach incorporating the Jacobian in deep models is Jacobian Adversarially Regularized Networks (JARN), where the model's Jacobian is optimized by adversarial regularization (Chan et al., 2020). Similar insights have been utilized to evaluate the robustness of deep models on data containing random and adversarial input perturbations (Hoffman et al., 2019). Input perturbations and corruptions in the data are widely studied in the literature to evaluate the robustness of models (Rusak et al., 2020), (Taori et al., 2020). Different approaches (Bai et al., 2021; Strisciuglio & Azzopardi, 2022; Krueger et al., 2021; Benkert et al., 2022; Machireddy et al., 2022; Timpl et al., 2022; Chefer et al., 2022) are used to mitigate the effect of such distribution shifts. For example, Hendrycks et al. (2021) proposed a data augmentation method to address this problem, which includes geographic location as well as camera operation. Another work discusses the impact of manipulating batch normalization statistics for corrupted data to improve model performance (Schneider et al., 2020).

## 3 Diverse-Inconspicuous Feature Learning

As shown in Figure 2, the proposed DIVINE algorithm is a two-part learning approach. In the first part, the proposed algorithm identifies and then suppresses the learned input features via a dominance feature map in order to identify other inconspicuous input features. The dominance feature map represents the dominance of the identified input features for classification. In the second part, a unified model is trained using these dominance feature maps to learn all the identified input features for enhanced generalizability. It should be noted that the problem of identifying all the inconspicuous features is intractable; hence, the proposed method identifies a set of diverse inconspicuous features that maximizes the learning of the model. As a result, the proposed algorithm is able to alleviate the problem of Abridge Learning and improve generalizability to out-of-distribution data samples. Let X = {(x1, y1), (x2, y2), ..., (xn, yn)} be a dataset with original training images and their corresponding labels. For simplicity, let x be an image of the dataset X with label y in one-hot form. Consider a model fX(.; θ) with parameters θ. This model is trained on the dataset X using the cross-entropy loss function, optimized over the parameters θ.
$$\min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathbf{X}}\left[-y^{T}\log f_{\mathbf{X}}(x;\theta)\right]\tag{1}$$

where fX(x; θ) outputs the probability vector for a given input image x. As mentioned earlier, the objective is to identify diverse inconspicuous input features via dominance feature maps, followed by training a unified diverse model for enhanced generalizability. Here, we first discuss the method for identifying inconspicuous features, followed by the training of the unified model.

1In the literature, Shortcut Learning is the broad term for the various ways in which a model takes shortcuts by learning unintended strategies to minimize the loss. Abridge Learning is a sub-problem of Shortcut Learning that deals only with the case where the model picks up only dominant cues and ignores other relevant features in the input data.

![3_image_0.png](3_image_0.png)

Figure 2: Pipeline of the proposed method for learning diverse and inconspicuous features. (a) Illustrates the process of identifying inconspicuous input features. Steps 1 and 2 involve training the model with original images from the dataset using a cross-entropy loss function. Steps 3 and 4 depict the computation of the dominance matrix for each image. In step 5, a feature-suppressed dataset is derived from the dominance matrix and dominance feature maps. The final step involves training the model with the feature-suppressed dataset to identify inconspicuous input features. (b) Demonstrates the unified model training process with original images.

## 3.1 Identification Of Inconspicuous Input Features

Let F(x) = {F1(x), F2(x), ..., Fr(x)} be the set of all input features present in the image x that can be learned by the model using the loss function in Eq. 1. Each feature Fi(x) in the feature set F is a combination of input image pixels that can be learned by the model for classification. For example, in Figure 1, one of the image features (in the feature set learned by the model) consists of the pixels of the paw region. From the literature (Geirhos et al., 2020), (Pezeshki et al., 2021), it is known that during training, a model learns the most dominant input features present in the dataset. Hence, the first step is to identify the dominance of each pixel in an input image x for classification. For this purpose, we use the input-output Jacobian method (Chan et al., 2020), (Hoffman et al., 2019), which computes the dominance of each input image pixel on the model's output decision. Mathematically, for a given image x with input perturbation ϵ, the Taylor series expansion of the function f(x + ϵ; θ) is:

$$f(x+\epsilon;\theta)=f(x;\theta)+\epsilon{\frac{d f(x;\theta)}{d x}}+{\mathcal{O}}(\epsilon^{2})\tag{2}$$

The higher-order terms can be neglected for very small perturbations. Hence, the above equation becomes:

$$f(x+\epsilon;\theta)\approx f(x;\theta)+\epsilon{\frac{d f(x;\theta)}{d x}}\tag{3}$$

where the term $\frac{df(x;\theta)}{dx}$ represents the input-output Jacobian matrix. Since we compute the Jacobian with respect to the true class only, we term the output matrix the Dominance matrix, denoted by $D_1(x)=\frac{df(x;\theta)}{dx}$. Large values (both positive and negative) in the dominance matrix represent higher dominance of the corresponding image pixels in the input image x. In other words, the model's decision is highly dependent on the input image pixels with high dominance values in the dominance matrix.
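For concreteness, the dominance-matrix computation can be sketched in a few lines of PyTorch. This is a minimal sketch rather than the authors' released code: the function name is ours, and differentiating the true-class logit (rather than the softmax probability) is an assumption the text leaves open.

```python
import torch

def dominance_matrix(model, x, y):
    """Sketch of Eqs. (2)-(3): input-output Jacobian restricted to the true class.

    model: classifier returning logits of shape (B, num_classes)
    x:     input images of shape (B, C, H, W)
    y:     integer ground-truth labels of shape (B,)
    Returns D(x) with the same shape as x; large |values| mark dominant pixels.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Jacobian with respect to the true class only, as described above.
    true_class_score = logits.gather(1, y.unsqueeze(1)).sum()
    (jac,) = torch.autograd.grad(true_class_score, x)
    return jac
```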
Once we obtain the dominance matrix, the next step is to identify the image pixels with higher dominance values. Let p be the percentage of the most dominant image pixels. The combination of these identified pixels represents the first identified input feature F1(x). To obtain F1(x), a mask M1(x) is created using the following function:

$$M_{1}(x)=\left\{\begin{array}{ll}1&\mbox{if}\quad|D_{1}(x)|\geq t\\ 0&\mbox{otherwise}\end{array}\right.\tag{4}$$

Here, t is the threshold obtained by sorting the dominance values of the dominance matrix and keeping the top p percent of pixels. Next, we compute the element-wise multiplication of the mask with the input image x to obtain the feature F1(x). Mathematically, it is written as:

$$F_{1}(x)=M_{1}(x)\odot x\tag{5}$$

After obtaining the feature F1(x), the next step is to compute the dominance feature map Dm1(x) corresponding to the identified feature F1(x). These maps are used during unified model training. Here, the dominance feature map Dm1(x) represents the dominance values of the identified feature F1(x), i.e., of the combination of identified dominant pixels. The dominance map Dm1(x) is obtained by:

$$D_{m_{1}}(x)=M_{1}(x)\odot D_{1}(x)\tag{6}$$

Next, we identify other diverse inconspicuous input features. Conceptually, diversity can be achieved by identifying features that are completely different from one another. Thus, we enforce the identified features to be disjoint, i.e., Fi(x) ∩ Fj(x) = ϕ for i ̸= j. For this purpose, we suppress the identified input feature F1(x) in the input image x. This is done by setting to zero the input image pixels corresponding to the non-zero dominance values in the dominance feature map Dm1(x). Mathematically, it is written as:

$$x_{s_{1}}=\left\{\begin{array}{ll}x&\mbox{if}\quad|D_{m_{1}}(x)|=0\\ 0&\mbox{if}\quad|D_{m_{1}}(x)|\neq0\end{array}\right.\tag{7}$$

where xs1 is the output image with the suppressed feature F1(x). The above-mentioned process is applied to all the training images in the dataset X. This results in a new dataset Xs1 with suppressed features obtained corresponding to all the images. It is important to note that suppressing the identified features in all the images of the dataset X enforces the model to learn other features. To identify other input features, we use the feature-suppressed dataset Xs1. For this purpose, we train a separate model using the loss function mentioned in Equation 1. Then, the process from Equations 2 to 7 is repeated using Xs1 in place of X to obtain Xs2. This process is repeated to identify input features until the stopping criterion is met. Figure 3 shows samples of the original and feature-suppressed datasets corresponding to the MNIST dataset. The details of the stopping criterion are discussed in Subsection 3.6.
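The sketch below strings Eqs. (4)–(7) together for one identification-and-suppression iteration, reusing the dominance_matrix sketch from earlier in this subsection. It is illustrative only: the per-element thresholding and the interpretation of p as a fraction are our assumptions, since the text does not fix channel handling or other implementation details.

```python
import torch

def identify_and_suppress(model, x, y, p=0.03):
    """One identification-and-suppression iteration (Eqs. 4-7), as a sketch.

    Keeps the top fraction p of elements by |dominance| as the identified
    feature, then zeroes those pixels to obtain the feature-suppressed image.
    Returns (x_s, dom_map): the suppressed image and the dominance feature map.
    """
    d = dominance_matrix(model, x, y)          # Eqs. (2)-(3), sketched earlier
    flat = d.abs().flatten(1)
    k = max(1, int(p * flat.shape[1]))
    t = flat.topk(k, dim=1).values[:, -1]      # per-image threshold t (Eq. 4)
    mask = (d.abs() >= t.view(-1, 1, 1, 1)).float()
    dom_map = mask * d                         # Eq. (6): D_m = M ⊙ D
    x_s = x * (1.0 - mask)                     # Eq. (7): zero the dominant pixels
    return x_s, dom_map
```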
## 3.2 Diverse Unified Model Learning

Once the input features are identified, the next objective is to train a unified model that learns all the identified input features. For this purpose, we use the dominance feature maps corresponding to all the identified features. Let k be the number of input features identified for each image x. This means we have k dominance feature maps, i.e., Dm1(x), Dm2(x), ..., Dmk(x). To provide supervision during training, all the dominance feature maps are combined into a single unified map by adding them together. Mathematically, it is written as:

$$D_{m}(x)=D_{m_{1}}(x)+D_{m_{2}}(x)+...+D_{m_{k}}(x)\tag{8}$$

![5_image_0.png](5_image_0.png)

Figure 3: Samples of original images and the corresponding intermediate feature-suppressed images from the MNIST dataset obtained using the proposed method.

where Dm(x) is the unified dominance feature map. Let fu(.; θ) be the unified model with parameters θ to be trained on the dataset X with original images. In order to enforce the model to learn all the identified input features, the following objective function is used:

$$\min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathbf{X}}\left[-y^{T}\log f_{u}(x;\theta)+L_{D}(x)\right]\tag{9}$$

Here, the first term represents the standard cross-entropy loss, while the second term LD(x) represents the dominance loss. The dominance loss enforces the model to focus on the identified input features. Let Du(x) be the dominance matrix corresponding to the unified model fu(x; θ). To compute the dominance loss, the squared Euclidean distance between the unified dominance feature map Dm(x) and the dominance matrix Du(x) is minimized. Mathematically, the loss LD(x) is written as:

$$L_{D}(x)=||D_{m}(x)-D_{u}(x)||_{2}^{2}\tag{10}$$

Therefore, the final objective function is written as:

$$\min_{\theta}\ \mathbb{E}_{(x,y)\sim\mathbf{X}}\left[-y^{T}\log f_{u}(x;\theta)+||D_{m}(x)-D_{u}(x)||_{2}^{2}\right]\tag{11}$$

Since the unified model learns inconspicuous and diverse input features, it not only alleviates the effect of Abridge Learning but also enables the model to generalize to out-of-distribution samples.
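A minimal sketch of the objective in Eq. (11) follows, assuming the dominance loss is computed via double backpropagation and that the unified maps D_m(x) from Eq. (8) have been precomputed and stored per training image; the function name and the logit-gradient choice are ours, not fixed by the paper.

```python
import torch
import torch.nn.functional as F

def divine_loss(model, x, y, unified_map):
    """Sketch of Eq. (11): cross-entropy plus the dominance loss of Eq. (10).

    unified_map: D_m(x) from Eq. (8), i.e., the sum of the k dominance
                 feature maps collected during the identification phase.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # D_u(x): dominance matrix of the unified model; create_graph=True keeps
    # the Jacobian differentiable so it can itself be optimized over theta.
    score = logits.gather(1, y.unsqueeze(1)).sum()
    (d_u,) = torch.autograd.grad(score, x, create_graph=True)
    dominance = ((unified_map - d_u) ** 2).flatten(1).sum(dim=1).mean()
    return ce + dominance
```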
## 3.3 Experimental Setup

The primary objective of this paper is to remove the *shortcuts* learned by the model via learning inconspicuous and diverse input features. Our hypothesis is based on the observation that existing algorithms learn dominant features while ignoring other relevant features within the data distribution. The performance of such models suffers when the dominant features are distorted or suppressed, because the inductive bias of the model rests on the dominant features only. To address this, the proposed DIVINE algorithm is designed to learn the dominant features along with inconspicuous features, reducing the dependence on the dominant features alone. Since the proposed learning process also introduces the model to suppressed features, the training becomes diverse in nature, which ensures a generalized inductive bias of the final trained model. In order to validate this hypothesis, experiments are performed for Abridge Learning on the feature-suppressed datasets corresponding to the MNIST (LeCun et al., 1998), CIFAR10 (Krizhevsky et al., 2009), and TinyImageNet (Le & Yang, 2015) datasets. These feature-suppressed datasets are obtained by suppressing the identified input features (described in Section 3.1).

Table 1: Comparison of classification accuracy (%) of existing algorithms and DIVINE on the original and feature-suppressed datasets.

| Dataset | Test Set | Abridge Learning | Jacobian Regularization | Random Suppression | DIVINE |
|---|---|---|---|---|---|
| MNIST | Original | 99.21 | 85.80 | 98.67 | 97.31 |
| | Xs1 | 66.53 | 76.52 | 85.56 | 91.41 |
| | Xs2 | 50.98 | 70.18 | 68.10 | 84.98 |
| | Xs3 | 40.87 | 62.97 | 54.24 | 75.20 |
| | Average | 64.40 | 73.87 | 71.64 | 87.22 |
| CIFAR10 | Original | 82.38 | 81.63 | 81.73 | 80.34 |
| | Xs1 | 64.16 | 65.90 | 63.97 | 67.41 |
| | Xs2 | 52.63 | 54.88 | 52.01 | 58.18 |
| | Xs3 | 44.45 | 46.87 | 45.19 | 51.83 |
| | Average | 60.98 | 62.32 | 60.72 | 64.44 |
| CIFAR100 | Original | 74.63 | 75.95 | 75.46 | 74.57 |
| | Xs1 | 37.44 | 48.31 | 45.21 | 56.31 |
| | Xs2 | 27.84 | 39.61 | 36.86 | 49.79 |
| | Xs3 | 15.22 | 26.28 | 14.71 | 33.49 |
| | Average | 39.23 | 44.64 | 43.06 | 53.54 |
| Tiny-ImageNet | Original | 60.59 | 49.06 | 51.66 | 54.59 |
| | Xs1 | 40.46 | 39.29 | 43.71 | 43.42 |
| | Xs2 | 27.35 | 31.91 | 33.92 | 32.67 |
| | Xs3 | 21.67 | 27.28 | 27.94 | 26.66 |
| | Average | 37.51 | 36.88 | 39.30 | 39.33 |

To further analyze the applicability of the proposed algorithm in real-world scenarios, we perform an experiment evaluating the unified model on out-of-distribution samples. These samples are taken from the corrupted datasets CIFAR10-C, CIFAR100-C, and TinyImageNet-C and the perturbed datasets CIFAR10-P, CIFAR100-P, and TinyImageNet-P. The corrupted datasets contain 15 different corruptions corresponding to the CIFAR10, CIFAR100, and TinyImageNet datasets, respectively. We discuss the details of the corrupted and perturbed datasets employed for evaluation in the supplementary material.

## 3.4 Existing Approaches For Comparison

Since the proposed method involves the suppression of different input pixels under the supervision of dominance maps, we perform random suppression of input features for comparison. In the random suppression method, we randomly drop p% of pixels during training. Further, we compare our method with Jacobian Regularization. In the literature, Jacobian Regularization (Chan et al., 2020) is primarily used to reduce the sensitivity of input image pixels in order to enhance the robustness of models. We use this regularization method with the standard cross-entropy loss function.

## 3.5 Evaluation Metrics

We compute the classification accuracy to evaluate model performance on the MNIST and CIFAR10 datasets. To evaluate the performance on the corrupted images, we report the Relative Corruption Error (Relative CE) and Relative Mean Corruption Error (Relative mCE) (Hendrycks & Dietterich, 2018).

$$\text{Relative}\ CE_{c}^{f}=\frac{\sum_{s=1}^{5}E_{s,c}^{f}-E_{original}^{f}}{\sum_{s=1}^{5}E_{s,c}^{b}-E_{original}^{b}}\tag{12}$$

where f denotes the model to be evaluated, b denotes the baseline model obtained using Abridge Learning (AL), $E^{f}_{original}$ and $E^{b}_{original}$ denote the errors obtained on original images for the model to be evaluated and the AL model, and $E^{f}_{s,c}$ and $E^{b}_{s,c}$ denote the error rates on corruption c at severity level s for the model to be evaluated and the AL model, respectively. A lower Relative CE indicates a higher performance over the baseline. To evaluate the performance on the perturbed datasets, we measure the probability that two consecutive frames with different intensities of perturbation have "flipped" or mismatched predictions. This is termed the mean Flip Rate (mFR) (Hendrycks et al., 2019).
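For reference, both metrics can be sketched as follows, assuming error rates are collected per severity level and predictions per perturbation frame; this mirrors Eq. (12) and the flip-rate description above, and omits the baseline normalization used in the original benchmark of Hendrycks et al.

```python
import numpy as np

def relative_ce(err_f, err_f_clean, err_b, err_b_clean):
    """Eq. (12): Relative Corruption Error for one corruption type c.

    err_f, err_b: arrays of error rates at severity levels s=1..5 for the
                  evaluated model f and the Abridge Learning baseline b.
    err_*_clean:  error rates on the original (uncorrupted) test set.
    """
    return (np.sum(err_f) - err_f_clean) / (np.sum(err_b) - err_b_clean)

def flip_rate(preds):
    """Flip Rate: fraction of consecutive perturbed frames whose predictions
    'flip'. preds has shape (num_frames, num_images), integer class labels."""
    return float(np.mean(preds[1:] != preds[:-1]))
```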
Table 2: Comparison of relative corruption error obtained using existing and proposed methods for each corruption type on the CIFAR10-C and CIFAR100-C datasets.

| Corruptions | Jacobian Regularization (CIFAR10-C) | Random Suppression (CIFAR10-C) | DIVINE (CIFAR10-C) | Jacobian Regularization (CIFAR100-C) | Random Suppression (CIFAR100-C) | DIVINE (CIFAR100-C) |
|---|---|---|---|---|---|---|
| Defocus Blur | 102.35 | 108.77 | 92.90 | 105.88 | 108.38 | 102.08 |
| Contrast | 100.19 | 103.55 | 99.64 | 103.86 | 114.32 | 122.66 |
| Pixelate | 95.09 | 113.08 | 75.58 | 108.59 | 106.95 | 74.61 |
| Snow | 91.57 | 81.15 | 81.38 | 97.17 | 99.32 | 78.91 |
| Fog | 105.04 | 111.62 | 99.17 | 111.01 | 110.61 | 110.52 |
| Glass Blur | 96.11 | 87.62 | 73.69 | 98.51 | 98.49 | 82.99 |
| Brightness | 106.40 | 133.70 | 101.00 | 129.75 | 136.48 | 125.60 |
| Elastic | 103.84 | 109.48 | 97.95 | 107.35 | 106.55 | 96.89 |
| Frost | 100.47 | 113.05 | 88.12 | 104.53 | 111.80 | 83.74 |
| JPEG | 86.40 | 93.95 | 70.86 | 93.23 | 91.82 | 73.24 |
| Shot Noise | 95.65 | 111.77 | 72.62 | 101.47 | 107.27 | 76.46 |
| Impulse Noise | 91.70 | 92.17 | 73.02 | 105.16 | 96.99 | 78.15 |
| Zoom Blur | 99.16 | 111.06 | 95.85 | 97.47 | 98.42 | 89.50 |
| Gaussian Noise | 95.49 | 110.70 | 73.32 | 101.74 | 107.97 | 79.18 |
| Motion Blur | 105.37 | 116.31 | 99.40 | 107.86 | 113.26 | 98.38 |
| Relative mCE | 98.32 | 106.29 | 86.30 | 104.90 | 107.24 | 91.53 |

## 3.6 Selection Criteria For Number Of Feature Maps

In order to decide the number of features, we compute the running average of the classification accuracy obtained on the original and feature-suppressed datasets using the AL method. We iterate the computation of feature maps at most 3 times, provided that the running average of the classification accuracy does not fall below 50 percent of the classification accuracy obtained on the original dataset.

## 4 Results And Analysis

The proposed DIVINE algorithm is evaluated on two types of datasets: (i) feature-suppressed datasets and (ii) corruptions. In the first set of evaluations, we validate our assertion of "Abridge Learning" using the MNIST, CIFAR10, CIFAR100, and TinyImageNet datasets. For corruptions, we showcase results on the CIFAR10-C, CIFAR100-C, and TinyImageNet-C datasets. We further compare the DIVINE algorithm with a recent algorithm proposed by Carter et al. (2021).

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 4: Visualizations of the semantically relevant features learned by the model. (a) shows the strokes learned by the model, which distinguish digit 8 from 2, and (b) shows the feature-suppressed image. Similarly, (c) and (d) show the distinguishing strokes learned by the model and the feature-suppressed image.

Table 3: Classification accuracy (%) obtained using Jacobian Regularization (JR), Random Suppression (RS), and the DIVINE algorithm on the TinyImageNet-C dataset for different corruptions.
| Corruptions | Jacobian Regularization | Random Suppression | DIVINE |
|---|---|---|---|
| Defocus Blur | 21.07 | 19.97 | 29.43 |
| Contrast | 12.37 | 12.51 | 18.70 |
| Pixelate | 27.98 | 31.79 | 39.53 |
| Snow | 25.16 | 26.83 | 32.80 |
| Fog | 21.46 | 23.37 | 33.09 |
| Glass Blur | 28.33 | 30.36 | 32.14 |
| Brightness | 27.02 | 28.33 | 36.14 |
| Elastic | 21.23 | 21.47 | 30.89 |
| JPEG | 26.30 | 29.07 | 37.25 |
| Shot Noise | 35.77 | 37.68 | 43.09 |
| Impulse Noise | 29.71 | 34.09 | 39.14 |
| Zoom Blur | 20.07 | 18.83 | 28.20 |
| Gaussian Noise | 34.75 | 36.99 | 41.84 |
| Motion Blur | 21.47 | 22.11 | 30.87 |
| Mean | 25.19 | 26.67 | **33.79** |

## 4.1 Evaluation On Feature-Suppressed Datasets

This experiment is performed to validate that a model trained using conventional methods relies on the dominant features (which are easy to learn) during classification, thereby reducing the performance of the model when these dominant input features are missing. The performance of the AL model trained on original images is evaluated on the testing sets of the original and feature-suppressed datasets, i.e., X, Xs1, Xs2, and Xs3. Dataset Xs1 has images with one suppressed dominant input feature in each image. Similarly, datasets Xs2 and Xs3 have images with two and three suppressed dominant input features in each image, respectively. Table 1 shows the performance of the AL models corresponding to the MNIST, CIFAR10, CIFAR100, and TinyImageNet datasets. It is observed that the performance of the AL models degrades significantly on the feature-suppressed datasets. For instance, the performance of the AL model trained on the MNIST dataset drops from 99.21% to 66.53% on the feature-suppressed dataset Xs1 (a 32.68% drop), and further degrades to 50.98% on the feature-suppressed dataset Xs2. This shows that the performance of models trained using conventional methods is heavily dependent on the dominant input features. On the other hand, the performance of the unified model drops by only 5.90% when evaluated on Xs1. As seen in Table 1, the unified model trained using the proposed DIVINE algorithm performs well on the feature-suppressed datasets. In Figure 5, the first row depicts the feature-suppressed images obtained for a sample of the class 'horse'. From xs1, it is apparent that the model focuses on the pixels associated with the body of the horse (dominant features). However, after two iterations, the model starts focusing on other (inconspicuous) features such as the tail of the horse. This highlights that the DIVINE algorithm learns both dominant and inconspicuous features, not depending only on the dominant features for classification. It is also observed that the models learn semantically relevant features in the feature-suppressed datasets.

![8_image_0.png](8_image_0.png)

Figure 5: A few sample original images and the corresponding intermediate feature-suppressed images on the CIFAR10 dataset obtained using the proposed method.

Table 4: mFR (%) obtained using Abridged Learning and DIVINE on the CIFAR10-P and CIFAR100-P datasets for different perturbations.

| Perturbations | Abridged Learning (CIFAR10) | DIVINE (CIFAR10) | Abridged Learning (CIFAR100) | DIVINE (CIFAR100) |
|---|---|---|---|---|
| Brightness | 1.33 | 1.22 | 2.98 | 1.03 |
| Gaussian Noise | 5.21 | 3.86 | 15.3 | 2.21 |
| Motion Blur | 11.26 | 9.9 | 14.74 | 2.1 |
| Rotate | 8.29 | 6.24 | 11.12 | 3.09 |
| Scale | 9.55 | 7.99 | 13.2 | 4.87 |
| Shot Noise | 6.4 | 4.82 | 17.96 | 2.9 |
| Snow | 3.75 | 2.96 | 6.59 | 1.15 |
| Tilt | 3.13 | 2.49 | 5.52 | 1.55 |
| Translate | 15.63 | 13.53 | 28.34 | 11.79 |
| Zoom Blur | 0.79 | 0.67 | 1.78 | 0.32 |
| Overall mFR | 6.534 | 5.368 | 11.753 | 3.101 |
Figure 4 (a) & (b) show that the discriminative stroke of digit '8' (highlighted in red) is the most dominant feature differentiating it from digit '2', and it is therefore suppressed. Similar observations can be made for digits '4' and '1' in Figure 4 (c) & (d). The visualization of the dominance matrix computed for both the AL and unified models is shown in Figure 6. It is observed that the dominance matrices computed for the AL models and the models trained using feature-suppressed datasets are focused on specific input features. On the other hand, the dominance matrix of the unified model is focused on all identified input features. This results in the high performance of the unified model on the feature-suppressed datasets. Results of the unified model are compared with random suppression and the Jacobian regularization method (Chan et al., 2020); both approaches are used to enhance the robustness of models. It is observed that the existing approaches perform better than the AL model, especially on the MNIST dataset. However, their performance is not as good as that obtained using the unified model. In the random suppression method, there is no supervision for the model to learn a diverse set of features. While in the Jacobian regularization method the model reduces its dependency on the dominant features during training, it is not able to learn the inconspicuous features.

Ablation Study for parameter p: On increasing the value of p from 3% to 10%, the performance of the unified model degrades on the feature-suppressed datasets. Since the model prediction depends only on a few input pixels, setting a higher value of p results in suppression of dominant as well as other input features, which in turn decreases the performance of the unified model on the feature-suppressed datasets.

## 4.2 Evaluation On Corruptions And Perturbations

This experiment is performed to evaluate the generalizability and robustness of the proposed unified model on out-of-distribution images. The performance on corruptions and perturbations is discussed below:

![9_image_0.png](9_image_0.png)

Figure 6: Illustration of the dominant features and inconspicuous features obtained in the MNIST dataset. Sample original images x and the corresponding dominance matrices D1(x), D2(x), D3(x), and Du(x). D1(x) and Du(x) are obtained for the AL and unified models trained on the original dataset. In contrast, D2(x) and D3(x) are obtained for the models trained on the feature-suppressed datasets.

Table 5: mFR (%) obtained using Abridged Learning and the DIVINE algorithm on the TinyImageNet-P dataset for different perturbations.

| Perturbations | Abridged Learning | Unified Model |
|---|---|---|
| Brightness | 12.09 | 9.53 |
| Gaussian Noise | 28.56 | 18.58 |
| Gaussian Noise V3 | 56.82 | 42.36 |
| Rotate | 45.49 | 30.19 |
| Shear | 37.83 | 24.19 |
| Shot Noise V2 | 41.11 | 26.99 |
| Snow | 15.66 | 11.26 |
| Speckle Noise | 26.4 | 15.26 |
| Speckle Noise V3 | 53.97 | 38.08 |
| Translate | 41.92 | 31.51 |
| Gaussian Blur | 5.17 | 3.91 |
| Gaussian Noise V2 | 47.06 | 32.98 |
| Motion Blur | 7.66 | 5.36 |
| Scale | 40.3 | 29.87 |
| Shot Noise | 32.09 | 20.14 |
| Shot Noise V3 | 50.29 | 34.85 |
| Spatter | 17.5 | 12.22 |
| Speckle Noise V2 | 44.1 | 28.65 |
| Tilt | 25.26 | 15.45 |
| Zoom Blur | 8.73 | 5.74 |
| Overall mFR | 31.9005 | **21.856** |

Performance on Corruption: We have reported the relative corruption error (Relative CE) and the relative mean corruption error (Relative mCE) on the CIFAR10-C and CIFAR100-C datasets. The results are shown in Table 2.
The proposed unified model outperforms the random suppression and Jacobian Regularization methods on the corruption datasets, giving a Relative mCE of **86.30** and **91.53** on the CIFAR10-C and CIFAR100-C datasets, respectively. On comparing the performance on individual corruptions of the CIFAR10-C and CIFAR100-C datasets, the proposed unified model trained with the DIVINE method outperforms the other methods on 14 corruptions (excluding the snow corruption on CIFAR10-C and the contrast corruption on CIFAR100-C) and gives comparable performance on the snow and contrast corruptions on the CIFAR10-C and CIFAR100-C datasets, respectively. Table 1 shows that the performance of all the existing methods is comparable on the original images. However, the random suppression and Jacobian regularization methods fail to generalize well on out-of-distribution images. This happens because existing approaches focus on learning dominant features only, which may be absent or distorted in the corrupted images. The proposed model performs better as it has learned diverse and inconspicuous features which are helpful for classification.

To test our method on a large-scale dataset, we computed the performance on the TinyImageNet-C dataset and report the results in Table 3. On the TinyImageNet-C dataset, DIVINE yields an absolute improvement of 25.54% and 19.02% over the random suppression and Jacobian regularization methods, respectively, as shown in Table 3. We observe the robustness of the proposed DIVINE algorithm against a variety of corruptions. These results are illustrated for three different learning methods, namely Jacobian Regularization, Random Suppression, and the proposed DIVINE algorithm. From the table, it is clearly visible that the proposed algorithm outperforms the other algorithms on all corruptions. We also achieve significantly better mean classification accuracy using the proposed DIVINE algorithm. This clearly demonstrates the applicability of the DIVINE algorithm to large-scale datasets as well.

Performance on Perturbations: We have reported the Flip Rate (FR) and the overall mean flip rate (Overall mFR) on the CIFAR10-P, CIFAR100-P, and TinyImageNet-P datasets. The results are shown in Tables 4 and 5. The proposed unified model outperforms Abridge Learning on the perturbed datasets, giving an overall mFR of 5.36%, **3.10%**, and **21.85%** on the CIFAR10-P, CIFAR100-P, and TinyImageNet-P datasets, respectively.

![11_image_0.png](11_image_0.png)

Figure 7: Sample images corrupted with impulse noise and the corresponding dominance matrices obtained using Jacobian regularization, random suppression, and the proposed unified model (best viewed in color).

On comparing the performance on individual perturbations across all datasets, the proposed unified model trained with the DIVINE method outperforms Abridge Learning. The visualization of the dominance matrices obtained for the unified model and the existing approaches on the corrupted images of the CIFAR10 dataset is shown in Figure 7. We can see that the unified model focuses on multiple input features/regions, while the existing approaches fail to focus diversely. On observing the relative corruption error corresponding to the impulse noise corruption in Table 2, it is found that the error for the proposed method is 73.02, which is 18.68 and 19.15 lower than the Jacobian regularization and random suppression methods, respectively.
This shows the applicability of the proposed method in real-world scenarios where external corruptions are common. Additionally, the proposed method can be used in combination with existing approaches for improving robustness.

Comparison with Carter et al. (NeurIPS 2021): Carter et al. (2021) have shown that only 5% spurious pixel subsets in an image are enough for confident prediction. These pixel subsets may be meaningless to humans and lead to over-interpretation by the model. This validates our assumption of Abridge Learning during training of the model. The authors further used an ensemble method to mitigate the problem of Abridge Learning. In order to showcase the effectiveness of the proposed DIVINE algorithm, we have also compared its performance on the CIFAR10-C dataset. The DIVINE algorithm achieves a relative mCE of 86.30, which outperforms the ensemble method (Carter et al., 2021) by a margin of 8.62. In general, ensemble methods reduce overfitting and improve model performance. However, they do not necessarily make the model robust towards out-of-distribution samples. By enforcing the learning of inconspicuous input features, the DIVINE algorithm offers better robustness.

## 4.3 Ablation Experiment To Visualize The Trend Of Parameter P

We conduct a series of experiments for varying values of p, specifically at 0.5%, 1.5%, 3%, 6%, and 10%. The trend observed in average classification accuracy for these values is depicted in Figure 8, and detailed results are presented in Table 1.

![11_image_1.png](11_image_1.png)

Figure 8: Plot representing the trend line of the average classification accuracy corresponding to the baseline and the proposed algorithm for different values of p on the MNIST dataset.

From this representation, we notice a slight decrease in accuracy at p=0.5% and p=1.5% for the baseline model, indicating that dominant features are still present in the feature-suppressed datasets at these lower p values. However, at p=3%, there is a noticeable decline in the baseline model's performance, highlighting the successful elimination of dominant features in the dataset. Consequently, we have selected p=3% as the optimal value for conducting our experiments.

## 4.4 Computational Runtime

We have calculated the temporal cost associated with the MNIST, CIFAR10, CIFAR100, and TinyImageNet datasets during the first phase. In this phase, the model undergoes training on the dataset with suppressed features, and this process is repeated three times. The MNIST dataset requires 10 epochs for training, with each epoch taking approximately 14 seconds to complete. The entire training process for MNIST, therefore, takes around 420 seconds, equivalent to 7 minutes. In the case of the CIFAR10 and CIFAR100 datasets, each epoch lasts about 61 seconds, while for the TinyImageNet dataset, it is around 935 seconds per epoch. These time durations present opportunities for optimization, possibly through the application of methods like the one proposed by Wang et al. (2020), which involves pruning the network at the initial stage, prior to training.

Table 6: Training time of Abridged Learning and the proposed DIVINE method on the CIFAR10 dataset.

| | Abridged Learning | DIVINE |
|---|---|---|
| Time per Epoch (in seconds) | 73 | 177 |
| Total Training Time (in minutes) | 25 | 60 |
It is important to note that the time required for inference (testing) remains consistent between the proposed unified model and the AL model. Further, we compute the time complexity of Abridge Learning and the proposed DIVINE algorithm for each epoch as well as the total training time. From Table 6 we observe that the overhead in training time is 104 seconds per epoch, which is mostly spent on the computation of Jacobians. Since the proposed DIVINE method increases the computation time over Abridge Learning, we consider this a limitation of the proposed method; the minimization of this overhead can be explored in future work.

## 5 Limitations

We highlight the following limitations of the proposed DIVINE algorithm:

- Though the proposed method mitigates Abridge Learning, it is computationally expensive, as it utilizes multiple iterations to identify inconspicuous and diverse features and trains multiple models on the same data.
- The proposed idea is validated only for the image modality in the classification setting. However, the proposed approach can be extended to more modalities and different tasks.
- The proposed method DIVINE is not applicable to those Abridge Learning problems where the dominant features are not semantic. For example, in the ColoredMNIST dataset, the spurious correlation is due to the color attribute of the digits, which is not a semantic feature, and DIVINE is capable only of highlighting the semantic features in the input data.

## 6 Conclusion

Conventional deep learning algorithms typically prioritize enhancing classification accuracy, which can result in inadequate learning and poor generalization to out-of-distribution samples. In this paper, we introduce a unique, holistic learning approach called *Diverse-Inconspicuous Feature Learning* (DIVINE), which focuses on maximizing learning from a given set of inputs and learning as many discriminative features as possible. We validate DIVINE's effectiveness through extensive experiments across various datasets, including MNIST, CIFAR10, CIFAR10-C, CIFAR10-P, CIFAR100-C, CIFAR100-P, TinyImageNet, TinyImageNet-C, and TinyImageNet-P. The results reveal that the dominance maps generated via our method provide superior guidance for learning a rich set of input features. Consequently, our model demonstrates enhanced generalizability and robustness, particularly in the presence of out-of-distribution samples. We posit that this comprehensive style of learning ensures more reliable model predictions, especially in real-world situations where data corruption and distribution shifts can significantly impair performance.

## References

Haoyue Bai, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye, Han-Jia Ye, S-H Gary Chan, and Zhenguo Li. Decaug: Out-of-distribution generalization via decomposed feature representation and semantic augmentation. *AAAI Conference on Artificial Intelligence*, 2021.

Ryan Benkert, Mohit Prabhushankar, and Ghassan AlRegib. Forgetful active learning with switch events: Efficient sampling for out-of-distribution data. In *IEEE International Conference on Image Processing*, pp. 2196–2200, 2022.

Brandon Carter, Siddhartha Jain, Jonas W Mueller, and David Gifford. Overinterpretation reveals image classification model pathologies. *Advances in Neural Information Processing Systems*, 34, 2021.

Alvin Chan, Yi Tay, Yew Soon Ong, and Jie Fu. Jacobian adversarially regularized networks for robustness. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=Hke0V1rKPS.
Hila Chefer, Idan Schwartz, and Lior Wolf. Optimizing relevance maps of vision transformers improves robustness. *Advances in Neural Information Processing Systems*, 2022.

Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In *International Conference on Machine Learning*, pp. 7480–7512. PMLR, 2023.

Mengnan Du, Ninghao Liu, Fan Yang, and Xia Hu. Learning credible dnns via incorporating prior knowledge and model local explanation. *Knowledge and Information Systems*, pp. 1–28, 2020.

Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 19358–19369, 2023.

Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, and Dequan Wang. Back to the source: Diffusion-driven adaptation to test-time corruption. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11786–11796, 2023.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020.

Yong Guo, David Stutz, and Bernt Schiele. Improving robustness of vision transformers by reducing sensitivity to patch corruptions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4108–4118, 2023.

Dan Hendrycks and Thomas G Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. *arXiv preprint arXiv:1807.01697*, 2018.

Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. *arXiv preprint arXiv:1912.02781*, 2019.

Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 8340–8349, 2021.

Judy Hoffman, Daniel A Roberts, and Sho Yaida. Robust learning with jacobian regularization. *arXiv preprint arXiv:1908.02729*, 2019.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In *Advances in Neural Information Processing Systems*, pp. 125–136, 2019.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In *International Conference on Machine Learning*, pp. 5815–5826. PMLR, 2021.

Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. *Nature Communications*, 10(1):1–8, 2019.

Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. *CS 231N*, 7(7):3, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.
Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.

Zhiheng Li, Ivan Evtimov, Albert Gordo, Caner Hazirbas, Tal Hassner, Cristian Canton Ferrer, Chenliang Xu, and Mark Ibrahim. A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 20071–20082, 2023.

Amrutha Machireddy, Ranganath Krishnan, Nilesh Ahuja, and Omesh Tickoo. Continual active adaptation to evolving distributional shifts. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3444–3450, 2022.

Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. In *Proceedings of the ACM Conference on Health, Inference, and Learning*, pp. 151–159, 2020.

Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. *Advances in Neural Information Processing Systems*, 34, 2021.

Andrew Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Evgenia Rusak, Lukas Schott, Roland S Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, and Wieland Brendel. A simple way to make neural networks robust against diverse image corruptions. In *European Conference on Computer Vision*, pp. 53–69. Springer, 2020.

Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. *Advances in Neural Information Processing Systems*, 33, 2020.

Nicola Strisciuglio and George Azzopardi. Visual response inhibition for increased robustness of convolutional networks to distribution shifts. In *NeurIPS Workshop on Distribution Shifts: Connecting Methods and Applications*, 2022.

Weijie Su, Xizhou Zhu, Chenxin Tao, Lewei Lu, Bin Li, Gao Huang, Yu Qiao, Xiaogang Wang, Jie Zhou, and Jifeng Dai. Towards all-in-one pre-training via maximizing multi-modal mutual information. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15888–15899, 2023.

Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, June 2020.

Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. *Advances in Neural Information Processing Systems*, 33, 2020.

Lukas Timpl, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, and Olga Saukh. Understanding the effect of sparsity on neural networks robustness. *arXiv preprint arXiv:2206.10915*, 2022.

Dániel Varga, Adrián Csiszárik, and Zsolt Zombori. Gradient regularization improves accuracy of discriminative models. *arXiv preprint arXiv:1712.09936*, 2017.

Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=SkgsACVKPH.

Shunxin Wang, Christoph Brune, Raymond Veldhuis, and Nicola Strisciuglio.
Dfm-x: Augmentation by leveraging prior knowledge of shortcut learning. *arXiv preprint arXiv:2308.06622*, 2023a. Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14408–14419, 2023b. Yutong Xie, Jianpeng Zhang, Hao Lu, Chunhua Shen, and Yong Xia. Sesv: Accurate medical image segmentation by predicting and correcting errors. *IEEE Transactions on Medical Imaging*, 40(1):286–296, 2020. Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, and Pingkun Yan. When neural networks fail to generalize? a model sensitivity perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 11219–11227, 2023. Zhuofan Zong, Guanglu Song, and Yu Liu. Detrs with collaborative hybrid assignments training. arXiv preprint arXiv:2211.12860, 2022.
Review 1:

Summary: This paper studies shortcut learning and proposes the Diverse and Inconspicuous Learning (DIVINE) method to maximize learning from a given set of inputs and to learn as many discriminative features as possible. Empirical results demonstrate its better generalizability and robustness.

Strengths and Weaknesses:

Strengths
- The problem investigated by this paper is very important.
- This paper is well-written and well-organized.

Weaknesses
- The abridge learning setting also seems to be suitable for OOD detection/generalization benchmarks. The related experiments should be included here.
- For Eq. (2), it is a common Taylor expansion, not a "Jacobian". In my view, the Jacobian is just a special matrix.
- There exist some typos, making the paper hard for readers to understand. For example, Eq. (8) misses a "+". Some figures in this paper are rough.
- It is not clear how to decide the number of feature maps.

Requested Changes: Please refer to Weaknesses.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper studied a problem called "Abridge Learning", which was defined as "a sub-problem of Shortcut Learning that deals only with the problem where the model picks up only dominant cues and ignores other relevant features from the input data." The author proposed a method called "Diverse-Inconspicuous Feature Learning" for this problem which "removes the shortcuts and learning a diverse set of input features." This method was evaluated on the MNIST, CIFAR-10, CIFAR-100, and TinyImageNet datasets.

Strengths and Weaknesses:

Strengths:
- Some people might be interested in masking the input to improve the generalization, and this paper showed a potential direction.

Weaknesses:
- The definition of the so-called "abridge learning" was very unclear. There was no clear mathematical definition of this problem, which prevented people from following this research direction.
- The methodology seems ad hoc and lacks theoretical support.
- The proposed method is limited to visual features. Being $0$ is regarded as an absence of a "feature". The experiments only used image datasets.
- The math writing needs to be improved. For example:
  - The definition of "dataset" $\mathbf{X}$ was not given.
  - Eq. (1) is not a loss function but an optimal parameter.
  - Eq. (2) is not a Jacobian but the Taylor expansion.
  - "$f(x; \theta)$ outputs the probability vector," but the Jacobians were computed "with respect to the true class only." The author used the same notation for both.
- No discussion on the limitations of this work.

Requested Changes:

Minor issues:
- "Abridged Learning" was sometimes used, which I assume was a typo.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: This paper proposes DIVINE, which is used to make a model more robust to out-of-distribution corruptions to inputs. The method is based on suppressing input features that have a high relative impact on model classification according to the Jacobian. These suppression masks are created in multiple rounds to create multiple versions of feature-suppressed data. A "unified" model is finally trained over the combination of suppressed data and a regularization term based on the suppression masks to get DIVINE. The evaluation is performed over MNIST, CIFAR10, CIFAR100, and TinyImageNet with corruptions.

Strengths and Weaknesses:

Strengths:
* The method is intuitive and seems simple to implement.
* Evaluations show improvements over baselines.

Weaknesses:
* The writing is hard to follow. There is no summary of results in the abstract, so the reader does not know why they should read the paper. Section 3 spends 11 equations discussing DIVINE, but these equations weren't much more helpful than reading Figure 2 and cause high reader fatigue. The evaluation has many details that are best left to an appendix.
* Figures aren't self-contained. Table 1, for example, isn't highlighting results/conclusions, nor even describing the columns. Captions are not capable of being interpreted without referencing the text. Captions do not have a conclusion.
* Spacing/formatting is suboptimal. Page 7, for example, has equations, text, and figures with very different context. Figure 4 is introduced on page 7 but is mentioned in text on page 10.
* Figure 7 seems similar to me for the 3 methods with respect to the feature diversity. Can you explain what the reader should be looking at?
* Tables 2 and 3 seem like the same table with different input datasets and different metrics, but this is not obvious to the reader. These tables and their captions are not easy to read or interpret.
* The experiments limit application of the method to few real-world scenarios (e.g., beyond MNIST/CIFAR10). TinyImageNet is a welcome addition, but experiments on the standard ImageNet dataset are more rigorous. Experiments also seem ad hoc.

Nits:
* The dot for multiplication in Equations 2/3 can be removed. The dot is not used consistently in other equations (e.g., Equation 9) and it looks worse with the dot.
* Quotes for "out-of-distribution" in the abstract are unnecessary. There is also an extra quote at the end.
* Equation 1 is using X for a dataset of inputs, x, and labels, y. Use a different symbol for the dataset, like D, to avoid confusion with inputs.

Requested Changes:

Critical:
* Please add scaffolding across the document to summarize key results and ideas before going into details. Fix all writing issues in the weaknesses. Ideally, move specific details (e.g., hyperparameters) to an appendix. It should be clear to the reader what each figure and section are about without reading the entire paper.

Strengthen:
* Add full ImageNet results.

Broader Impact Concerns: N/A

==================================================

Review 4:

Summary: This paper considers the problem of Abridge learning, a subproblem of the more general shortcut learning discussed previously in the literature. Abridge learning refers to situations when the model focuses only on a small subset of the most prominent features and as such fails to generalize to OOD situations when these features are not sufficient. The authors propose the DIVINE approach that relies on iteratively training a classifier and masking out the most prominent features detected by the classifier. Finally, DIVINE combines all the retrieved masks to train the final model. The authors check the performance of the method on several standard vision datasets.

Strengths and Weaknesses:

Strengths:
- The paper considers an important problem, i.e. how to guide models to rely on a diverse set of features rather than any particular strong feature.
- DIVINE, the proposed method, is simple and intuitive.
- The experimental results show that the proposed method works quite well on OOD corrupted data.
Additionally, the authors show some interesting analysis and ablation studies, investigating what type of features the network learns and how the $p$ parameter impacts the performance.
- The paper is clearly written and easy to follow.

Weaknesses:
- The method seems to be very expensive computationally since it requires additional retraining and computing the Jacobian w.r.t. the inputs. The authors touch on this in Section 4.4, but I think a more thorough discussion is needed. First of all, the presentation of this issue should be improved -- e.g. although these stats can be inferred from the paper, a table comparing the training time required by the baselines and DIVINE would be nice, and providing total training time on top of the per-epoch time would help the reader. Second, additional data would be useful -- e.g. what's the overhead of the method in each training step? In particular, I'd be curious to find out how slow the Jacobian computation is and how much time it takes to generate the feature-suppressed datasets.
- Checking the performance on corrupted OOD data is interesting by itself, but I think additional experiments would strengthen the paper. For example, how would the method perform on datasets with spurious correlations such as Colored MNIST [1] or domain generalization datasets [2]?
- The authors do not investigate the statistical significance of their results -- in particular, how many seeds were used to train each model? Can you provide the standard deviation?
- The baselines used in the paper are relatively old. It would be great to compare DIVINE to newer methods such as Chefer et al. 2022.

Minor comments:
- The first sentence of the abstract says: "Deep learning algorithms aim to minimize overall classification error". I would say this is an oversimplification as not all deep learning tasks deal with classification.
- "We provide a detailed analysis of its trend in the supplementary." -- but these results are in Section 4.3 and there do not seem to be any supplementary materials, am I correct?

[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.
[2] Gulrajani, I., & Lopez-Paz, D. (2020). In search of lost domain generalization. arXiv preprint arXiv:2007.01434.

Requested Changes: The changes I would like to see are related to the weaknesses listed above:
- Please present more data on the computational complexity of DIVINE.
- Please discuss the statistical significance of the results (number of seeds etc.).
- Please explain if this method can be used with datasets containing spurious correlations such as Colored MNIST. Of course, experimental evaluation would be best, but a discussion of this point would also be nice.

Broader Impact Concerns: N/A

==================================================

Review 5:

Summary: This paper describes Abridge Learning, the effect whereby the feature extractor in a DNN tends to overlook a part of the discriminative features (this is also called shortcut learning). To mitigate this challenge, DIVINE is proposed to counteract Abridge Learning. Experiments on some standard benchmarks show the proposed method works well and can boost the model's robustness.

Strengths and Weaknesses:

Strengths:
1. This paper is well-written; it is enjoyable to read.
2. The core idea in DIVINE is intuitive and interesting.
3. The targeted problem is valuable and important for deep learning algorithms.

Weaknesses:
1. Feature learning has recently become a popular topic in deep learning theory, and feature learning is a main contribution of this paper. Is there any connection between the proposed method and theory?
2. The proposed approach directly sets to zero the input image pixels corresponding to the non-zero dominance values in the dominance feature matrix, which is easy but somewhat unreasonable.
3. The learning process is called "Abridge Learning". It is not very clear to me why the new learning setup should be newly defined.
4. It is also not clear how to select the number of identified feature maps.
5. In Eq. 8, why choose the "sum" operation to fuse the identified feature maps?
6. The experimental results in Table 1 are not very convincing. The proposed DIVINE directly changes pixel values in pixel space. It is not very reasonable to verify the model's performance on feature-suppressed datasets.

Requested Changes: Please refer to Weaknesses.

Broader Impact Concerns: No

==================================================

Metareview:

Recommendation: Reject

Comment: The reviewers' opinions are somewhat divided. However, it is noteworthy that they unanimously agree that the paper discusses an important issue and that the proposed method is rational and straightforward. Yet, due to the lack of more comprehensive and in-depth theoretical or experimental analysis, the conclusions of the paper are not sufficiently validated. Therefore, we recommend the authors consider a major revision and resubmission to further refine the paper.

==================================================
# Reducing Predictive Feature Suppression In Resource-Constrained Contrastive Image-Caption Retrieval

Maurits Bleeker *m.j.r.bleeker@uva.nl*
University of Amsterdam

Andrew Yates *a.c.yates@uva.nl*
University of Amsterdam

Maarten de Rijke *m.derijke@uva.nl*
University of Amsterdam

Reviewed on OpenReview: *https://openreview.net/forum?id=T1XtOqrVKn*

## Abstract

To train image-caption retrieval (ICR) methods, contrastive loss functions are a common choice of optimization function. Unfortunately, contrastive ICR methods are vulnerable to predictive feature suppression. Predictive features are features that correctly indicate the similarity between a query and a candidate item. However, in the presence of multiple predictive features during training, encoder models tend to suppress redundant predictive features, since these features are not needed to learn to discriminate between positive and negative pairs. While some predictive features are redundant during training, these features might be relevant during evaluation. We introduce an approach to reduce predictive feature suppression for resource-constrained ICR methods: *latent target decoding* (LTD). We add an additional decoder to the contrastive ICR framework, to reconstruct the input caption in a latent space of a general-purpose sentence encoder, which prevents the image and caption encoder from suppressing predictive features. We implement the LTD objective as an optimization constraint, to ensure that the reconstruction loss is below a bound value while primarily optimizing for the contrastive loss. Importantly, LTD does not depend on additional training data or expensive (hard) negative mining strategies. Our experiments show that, unlike reconstructing the input caption in the input space, LTD reduces predictive feature suppression, measured by obtaining higher recall@k, r-precision, and nDCG scores than a contrastive ICR baseline. Moreover, we show that LTD should be implemented as an optimization constraint instead of a dual optimization objective. Finally, we show that LTD can be used with different contrastive learning losses and a wide variety of resource-constrained ICR methods.

## 1 Introduction

Image-caption retrieval (ICR) is the task of using an image or a caption as a query and ranking a set of candidate items in the other modality. Both the images and captions are mapped into a shared latent space by two encoders, which correspond to the two modalities. These encoders usually do not share parameters and are typically optimized with a contrastive loss function (Faghri et al., 2018; Lee et al., 2018; Li et al., 2019; Wang et al., 2019; Messina et al., 2020b; Chen et al., 2020a; Liu et al., 2020; Wang et al., 2020; Messina et al., 2020a; Jia et al., 2021; Diao et al., 2021; Yu et al., 2021). How well an ICR method generalizes beyond the specific training data depends on the features that the method has learned during training. The contrastive loss explicitly learns the similarity between positive (matching) candidates, while pushing away negative (non-matching) candidates during training. In the ideal situation, the contrastive objective optimizes the image and caption encoder such that both encoders extract all relevant information from the
caption and image that is needed for matching the positive candidates during evaluation. However, it is not defined upfront what information is needed during evaluation for retrieving the correct item among a set of candidates.

![1_image_0.png](1_image_0.png)

Figure 1: Visualization of *predictive feature suppression* using two examples from the MS-COCO captions (COCO) dataset. $x_I^i$ and $x_{C_j}^i$ are input *images* and *captions*, respectively, and $z_I^i$ and $z_{C_j}^i$ are the latent representations of the image and caption. We use cosine similarity as the similarity metric. The objective of a contrastive loss is to optimize the similarity scores on the diagonal of the similarity matrix, while minimizing the off-diagonal scores. In this small-scale training setup, if both the image and caption encoder only extract the concepts **kitten** and **dog**, the remaining concepts in both the images and captions are irrelevant to predicting the correct matching scores between the images and captions. The input features that are not needed to predict a match between an image and a caption are likely to be *suppressed* by the encoder. However, these features might be relevant during evaluation to predict a correct match.

Predictive feature suppression. Hermann & Lampinen (2020) show that, in the presence of two *predictive* features that redundantly predict the output label of the input data, a deep neural model preferentially represents one of the two predictive features while the other feature is suppressed. In this work, we define predictive feature suppression for ICR as the suppression of features by an encoder network during training that would be useful to correctly predict the match between a query and the positive candidate at inference time. For contrastive training tasks, the features that are relevant for matching the query with the positive candidate (i.e., the predictive features) mainly depend on the negative candidates in the training batch. Only optimizing the contrastive InfoNCE loss does not guarantee avoidance of *shortcut features* that suppress certain (predictive) input features, and the learned features mainly depend on the difficulty of the discrimination task (Robinson et al., 2021). Especially in a resource-constrained training setup, it is likely that the majority of the input features in the caption and image are redundant for learning the similarity between matching images and captions, due to the limited number of negative samples available to contrast with. The contrastive optimization objective is easy to solve by only using a small subset of the predictive input features of the images and captions. Suppressing predictive features during training is an undesirable side-effect of contrastive representation learning in a resource-constrained training setup, since some of these features might be needed during evaluation to retrieve the matching candidate. In Figure 1, we provide a visual example of predictive feature suppression in a resource-constrained contrastive image-text matching setup.

How to prevent predictive feature suppression. To increase the difficulty of a contrastive discrimination task, one can increase the batch size during training in order to increase the probability of having difficult in-batch negative samples (Chen et al., 2020c; Qu et al., 2021). It is, therefore, not surprising that most progress on the two widely used ICR benchmark evaluation sets, Flickr30k (F30k) (Young et al., 2014) and MS-COCO captions (COCO) (Lin et al., 2014), has recently been made by using large-scale image-text matching training, mainly in combination with transformer network architectures (Jia et al., 2021; Yuan et al., 2021).
Using more data and larger model architectures improves performance but comes with a significant extra computational cost, both in terms of data needed for training and the number of parameters that need to be optimized. The two benchmark datasets for ICR, the F30k and COCO datasets, are relatively small in terms of training samples compared to the training data of state-of-the-art pre-trained ICR or image-text matching methods (Jia et al., 2021; Yuan et al., 2021). When an ICR method is trained from scratch using these benchmark datasets only, for example, in a resource-constrained training setup, scaling up the size of a batch is not a feasible solution to increase performance, due to the limited data size of F30k and COCO or due to the lack of computational resources. Hence, it is important to develop algorithms that can improve the effectiveness of ICR methods in a *resource-constrained* training setup, without relying on more data and more compute to achieve this.

A method to increase the difficulty of the contrastive objective that does not rely on the size of the dataset to reduce predictive feature suppression is to directly mine *hard* negative examples for each query over the entire dataset, rather than relying on an increased batch size to include difficult negative examples. The disadvantage of hard-negative mining is that it can be computationally expensive (Chen et al., 2020b). Moreover, the COCO dataset contains many visually similar images (Parekh et al., 2020); when a similar image is mined as a hard negative, it will be considered as a negative w.r.t. the query, which may create conflicting and incorrect supervision signals.

The autoencoding paradigm (Hinton & Salakhutdinov, 2006) provides an alternative solution to reduce predictive feature suppression by learning latent data representations that contain as much of the important input features as possible. Using the information bottleneck principle, the encoder should compress the input information into a low-dimensional representation while preserving as much as possible of the input features. Combining autoencoding with contrastive learning should prevent the image and caption encoder from learning features that are only needed to solve the contrastive optimization objective. Therefore, a logical step is to add a decoder to the learning algorithm that decodes the original input from either the caption or image representation (or both). However, adding a decoder on top of the image representations, as in (Li et al., 2020a), is sub-optimal for the ICR task. The captions provided for each image are already a dense summary of the image; reconstructing every pixel in the image results in image representations that contain too much local information, which is irrelevant for the ICR task. A more natural choice would be to decode the input caption rather than the image, but adding a decoder on top of the caption representations might not result in a reduction of predictive feature suppression. Strong textual decoders can reduce a reconstruction loss by mainly relying on the learned language model (Lu et al., 2021). The input for this decoder (the latent caption representation) can mostly be ignored while correctly decoding the input sequence.

Our proposed solution. To address the disadvantages of current approaches to mitigating predictive feature suppression, viz.
(i) high costs (in terms of compute and data), and (ii) reconstruction of the input caption and images in the input space, we introduce *latent target decoding* (LTD). For each caption in the training set, we generate a *latent target* representation by using a general-purpose sentence encoder. We train an image and caption encoder that can be trained in a resource-constrained setup, using a standard contrastive learning objective. Next to that, we add an extra decoder to the learning algorithm. We decode the information of the caption in a latent space of the sentence encoder. Thus, the decoder cannot rely on learning a dataset-specific language model to decode the input, and the caption representation learned by the caption encoder should contain all input features that are needed to decode the latent target. By reconstructing this latent target representation we aim to reduce predictive feature suppression by the caption encoder, which should result in representations that generalize better to the evaluation task. See Figure 2 in Section 3 for a high-level overview of our LTD method.

LTD only requires an additional target representation for each caption and a simple feed-forward decoder network, and can be combined with any ICR method that uses a separate caption and image encoder to compute a global representation of the input data. LTD does not depend on (i) additional training data, (ii) extra manual data annotation or (hard) negative mining, or (iii) significantly more computational resources. In this work, we focus on *resource-constrained* ICR methods that are trained from scratch on the F30k or COCO dataset on a single GPU. If we were to add LTD to the learning algorithm, the overall training objective would become a multi-task loss: a contrastive and a reconstruction loss. However, multi-task losses are difficult to optimize (Malkiel & Wolf, 2021). The reconstruction loss should serve as an extra regularizer rather than the main learning objective. We also do not want the caption encoder to mainly focus on the reconstruction objective, since that can harm the contrastive utility of the representations. Therefore, we implement LTD as an optimization constraint. In this manner, we can target a specific value for that loss function. The main training objective is to minimize the contrastive loss, given the constraint that the reconstruction loss is below a certain bound value. Similar to (Rezende & Viola, 2018; van Rozendaal et al., 2020), we implement the reconstruction loss constraint using a Lagrange multiplier (Platt & Barr, 1987); the two losses are scaled automatically such that the reconstruction bound is met, while minimizing the contrastive loss.

## Our Main Findings.

1. The proposed constraint-based LTD reduces predictive feature suppression and improves the generalizability of learned representations, as it outperforms ICR baselines that are only optimized by using a contrastive loss. We measure the reduction of predictive feature suppression by using the standard evaluation metrics for the ICR task.
2. Implementing LTD as a dual loss, as opposed to an optimization constraint, does not reduce predictive feature suppression. Our analyses suggest that optimizing the reconstruction loss only until a specific bound value is met results in better evaluation performance than minimizing the reconstruction loss as a dual loss.
3. LTD can be used in combination with different contrastive losses, for example, InfoNCE (van den Oord et al., 2018) and the triplet loss (Faghri et al., 2018), and it can be combined with a wide variety of ICR methods that can be optimized in a resource-constrained setup, such as VSRN (Li et al., 2019) and TERN (Messina et al., 2020b).

Below, we first cover related work, then introduce the proposed LTD method, before presenting our experimental setup and discussing the outcomes of our experiments, and concluding.

## 2 Related Work

## 2.1 Image-Caption Retrieval

Neural architectures for ICR. We focus on ICR methods that compute a global representation for both image and caption. In general, an ICR method consists of two encoders: one to encode the image and one to encode the caption into a latent representation (Faghri et al., 2018; Li et al., 2019; Chun et al., 2021; Jia et al., 2021). Most work on ICR focuses on new network architectures to learn multi-modal feature representations. State-of-the-art results have been obtained using graph neural networks (Li et al., 2019; Wang et al., 2020; Liu et al., 2020; Diao et al., 2021) to represent visual relations in scenes as a graph, or attention mechanisms to align words in the caption with specific regions in the input image (Lee et al., 2018; Chen et al., 2019; Wang et al., 2019; Chen et al., 2020a; Yu et al., 2021; Zhang et al., 2022). Li et al. (2019) combine a caption encoder-decoder with the image encoder to add extra training signals to the learning algorithm. These methods are only trained and evaluated on the F30k and COCO datasets. Recently, there has been a shift to transformer-based (Vaswani et al., 2017) network architectures for both the image and caption encoder. Messina et al. (2020b;a) introduce a transformer-based network architecture solely trained for the ICR task. Since then, several transformer-based methods have been introduced (Lu et al., 2019; Chen et al., 2020d; Li et al., 2020b; 2021; Jia et al., 2021; Li et al., 2022); some combine the image and caption encoder into one unified architecture. These methods are all (pre-)trained on a large amount of additional training data and most are not trained for the ICR task specifically, but as general-purpose vision-text models.

Hard negative mining. Few publications have looked into the improvement of contrastive optimization for ICR methods. Faghri et al. (2018) introduce a new formulation of the triplet loss that only considers the hardest negative candidate in the training batch instead of all negative candidates, which significantly improved the evaluation scores on the ICR benchmarks. Since then, many ICR methods (Lee et al., 2018; Li et al., 2019; Wang et al., 2020; Liu et al., 2020; Messina et al., 2020b;a; Chen et al., 2020a; Diao et al., 2021; Yu et al., 2021) have used this loss function for optimization. Chen et al. (2020b) introduce an offline hard-negative mining approach for ICR in order to overcome the limitations of in-batch negative mining. Instead of mining an in-batch hard negative, they mine additional negative candidates, for each query, over the entire training set to learn from so-called harder-to-distinguish negatives.

One-to-many problem. Chun et al. (2021) focus on the one-to-many problem in ICR. An image can be described by many captions. However, most methods in ICR learn one representation for the image, which should match with a number of different captions.
They propose a probabilistic ICR method, where images and captions are represented as probability distributions in a shared latent space instead of a point representation. Although their method does not focus on contrastive optimization, it addresses predictive feature suppression by learning a distribution over features instead of a point prediction of features. Chun et al. (2021) also propose the plausible match metric, a heuristic for identifying missing positive examples by finding images that contain similar objects (i.e., plausible matches) and considering these in the evaluation. Biten et al. (2022) propose the semantic adaptive margin (SAM). Instead of using the binary relevance annotations between images and captions (of the F30k and COCO datasets) for the triplet loss computation, the authors propose an adaptive margin to model the many-to-many relation between images and captions. The standard triplet loss uses a fixed margin parameter $\alpha$. SAM dynamically assigns a unique distance value to the triplets in the training batch, based on the semantic similarity between an image and caption. In contrast, in this work we do not change the formulation of the contrastive loss. We add an extra optimization objective to the learning algorithm to prevent predictive feature suppression.

## Upshot

Unlike most previous work, we do not focus on the network architecture to improve ICR performance. Similar to (Chun et al., 2021), we focus on small-scale learning set-ups to train an ICR method from scratch to show the strength of our method in a resource-constrained setting. Our proposed approach incorporates autoencoding into the learning algorithm in order to reconstruct the input caption to reduce predictive feature suppression.

## 2.2 Contrastive Representation Learning

Contrastive learning losses are used to learn discriminative representations of the input data that can be used to contrast positive and negative pairs of information in a latent space. These loss functions have made a big impact in representation learning, whether self-supervised (van den Oord et al., 2018; Chen et al., 2020c) or supervised (Radford et al., 2021; Karpukhin et al., 2020). Although ICR is a supervised contrastive learning task, some of the theoretical findings about self-supervised contrastive learning apply to supervised settings as well.

Self-supervised contrastive learning. A common approach to learn general-purpose representations in a self-supervised manner is to create two (matching) views of the same (or similar) data point(s) by applying different augmentations (Chen et al., 2020c) or by splitting the data into parts (van den Oord et al., 2018) (i.e., predicting the future). The two positive views are contrasted with other negative samples in the training batch. The goal is to learn encoders that are invariant under these augmentations and that can discriminate between positive and negative pairs. How well self-supervised representations generalize to different settings, after training, is often assessed using a down-stream evaluation task, such as object classification (Chen et al., 2020c) or speaker identification (van den Oord et al., 2018). Some work examines data augmentation to learn strong feature representations. Good augmentations retain task-relevant information while removing task-irrelevant nuisances (Tian et al., 2020). The main purpose of removing task-irrelevant nuisances is to prevent encoders from using this information as predictive features during training. Xiao et al.
(2021) show that the features needed to learn good representations depend on the down-stream task. ICR does not depend on augmentations to generate positive and negative pairs. These pairs are given by the annotations of the benchmark datasets (Young et al., 2014; Lin et al., 2014). The difficulty of the discrimination task (and hence the learned features) mainly depends on which candidates are present in the training batch. The generalizability of contrastive learning methods is also influenced by the number of (hard) negatives present in a training batch. In general, the larger the number of in-batch negatives, the higher the down-stream evaluation performance (Chen et al., 2020c). Some work has focused on methods to increase the number of negatives during training (He et al., 2020) or on applying hard-negative mining strategies to increase the number of hard negatives in the batch (Lindgren et al., 2021; Xiong et al., 2021). Since we are focusing on a resource-constrained setup in this work, scaling up the batch size to increase the number of (hard) negatives is not a feasible solution. Moreover, the COCO dataset contains many visually similar images (Parekh et al., 2020). Mining visually similar images as (hard) negatives will result in a suboptimal supervision signal, which makes hard-negative mining also not a feasible approach to reduce shortcut feature suppression. Shortcut feature representations. Robinson et al. (2021) show that the contrastive InfoNCE loss (van den Oord et al., 2018) does not guarantee avoidance of shortcut feature representations. Shortcut feature representations are solutions that suppress predictive input features, i.e., a shortcut to discriminate between matching/non-matching candidates. The features learned by the InfoNCE loss depend on the difficulty of instance discrimination during training. If the instance discrimination task is easy to solve during training, the model will learn shortcut features. Especially in a resource-constrained ICR training setup, the contrastive objective is easy to solve, since there is only a limited number of (hard) negative samples in the training batch, which will result in shortcuts/predictive feature suppression. Feature suppression among competing features. Chen et al. (2021) introduce the notion of feature suppression among *competing features*. Chen et al. (2020c) show that, for example, the SimCLR method (Chen et al., 2020c) when trained without the crop or color augmentation (which randomly crops or shifts the color distribution of an image), shows a significant drop in performance on the down-stream evaluation task. Apparently, the (color) pixel distribution of the image is a sufficient predictive feature to match two views of the same image during training. However, these features do not generalize well to a down-stream evaluation task, such as object classification. The desired predictive features of the input image (i.e., the object class and its properties) are suppressed by competing features (i.e., the color distribution of the image). Chen et al. (2021) refer to this phenomenon as *feature suppression among competing features*. Feature suppression among competing features is closely related to work by Hermann & Lampinen (2020), who show that in the presence of multiple redundantly predictive features, deep neural models prefer one of the features over the other, while the other feature is suppressed. Chen et al. (2021) add artificially generated features (i.e., MNIST digits) as an extra overlay to images. 
They show that the "easy" predictive features (the MNIST digits) are preferred by a deep neural encoder model over the real predictive features (i.e., the object class) when optimizing with a contrastive learning loss. Chen et al. (2021) conclude that contrastive losses rely on easy-to-detect features that solve the contrastive objective, while suppressing the remaining (partly irrelevant) information.

Predictive feature suppression is a prominent problem in resource-constrained contrastive ICR. Captions often describe multiple aspects of a scene. However, in a resource-constrained contrastive setup, only one (or a few) of the aspects that are described in the caption is likely to be sufficient to match with the positive candidate (i.e., the image) during training, due to the limited number of negative candidates in the training batch. To mitigate this problem of predictive feature suppression for resource-constrained contrastive ICR, we need an extra optimization objective that is independent of the negative samples in the training batch.

Autoencoding. An approach to reduce predictive feature suppression is autoencoding (Hinton & Salakhutdinov, 2006). Autoencoding can be combined with a contrastive learning loss and reduces predictive feature suppression without depending on sampling (hard) negative candidates. To learn high-quality text sequence embeddings for the dense passage retrieval task, Lu et al. (2021) add a weak decoder on top of a document encoder to reconstruct the original document. To make image encoders more robust against shortcut features, Li et al. (2020a) add a decoder on top of the image encoder to decode the input image.

## Upshot

To reduce predictive feature suppression in a resource-constrained ICR task, we introduce latent target decoding (LTD). LTD reduces predictive feature suppression without focusing on the difficulty of the contrastive discrimination task. LTD requires neither a large number of negative samples nor hard negative mining strategies. Unlike other methods that reconstruct the input data, we reconstruct the input information of the caption in a latent space instead of the input space.

## 3 Method

In Table 4 in Appendix A, we provide an overview of the symbols and variables used throughout this work. We start with preliminaries and then discuss the InfoNCE contrastive loss and why autoencoding captions in the input space is not a solution to reduce predictive feature suppression. Finally, we introduce latent target decoding (LTD) to reduce predictive feature suppression for resource-constrained ICR. In Figure 2 we provide an overview of LTD.

![6_image_0.png](6_image_0.png)

Figure 2: Overview of latent target decoding (LTD). The baseline (left) consists of a general image-caption retrieval framework with an image and caption encoder. The encoders are trained by using the contrastive InfoNCE loss. To reduce predictive feature suppression we add latent target decoding to the baseline ICR method (right). This extra decoder decodes the information of the input caption in a latent space of a general-purpose sentence encoder. The decoder is not used during inference, which we indicate by the dashed line around the model $h_\omega(\cdot)$.

## 3.1 Preliminaries And Notation

## 3.1.1 Notation

We follow the notation introduced in previous work (Chen et al., 2020c; Brown et al., 2020). For the ICR task we use a multi-modal dataset $\mathcal{D} = \{(x_I^i, x_{C_1}^i, \dots, x_{C_k}^i)\}_{i=1}^N$. This dataset consists of $N$ image-caption tuples.
Each tuple contains one image $x_I^i$ and $k$ captions $x_{C_j}^i$, where $1 \leq j \leq k$, that describe the scene depicted in the image. At each training iteration, we randomly sample a batch $\mathcal{B}$ of image-caption pairs from $\mathcal{D}$. Per image, one of the $k$ captions is sampled per training iteration; together, this image and caption form a positive (or matching) image-caption pair. Each caption is used once during a training epoch. The image and caption encoder are trained for two tasks: *image-to-text* (i2t) and *text-to-image* (t2i) retrieval. Thus, each image and caption in $\mathcal{B}$ is used as a query $\mathbf{q}$. We denote the matching candidate in the other modality as $\mathbf{v}^+$. All other candidates in $\mathcal{B}$, in the other modality, are considered as negative candidates $\mathbf{v}^-$. The set of all negative candidates for query $\mathbf{q}$ in batch $\mathcal{B}$ is $\mathcal{S}_q^-$, where $\mathbf{v}^- \in \mathcal{S}_q^-$.

## 3.1.2 Contrastive Baseline Model

The baseline (BL) ICR framework in this work consists of two encoders. The image encoder $f_\theta(\cdot)$ takes an image $x_I^i$ as input and encodes this image into a latent representation $z_I^i := f_\theta(x_I^i)$. The caption encoder $g_\phi(\cdot)$ takes a caption as input and encodes this caption into a latent representation $z_{C_j}^i := g_\phi(x_{C_j}^i)$. The vectors $z_{C_j}^i$ and $z_I^i$ have the same dimensionality and are normalized on the unit sphere. The encoders are only optimized by minimizing a contrastive learning loss $\mathcal{L}_{con}$. Our goal is not to obtain the highest possible evaluation performance, but to show the strength of LTD on resource-constrained training setups.

## 3.2 Contrastive Loss

To train the image and caption encoder, we use the InfoNCE contrastive loss (van den Oord et al., 2018; Chen et al., 2020c). The InfoNCE loss is a popular loss function for self-supervised representation learning (Chen et al., 2020c; He et al., 2020) and multi-modal representation learning (Radford et al., 2021; Yuan et al., 2021; Jia et al., 2021). To keep the notation simple and intuitive, we use $\mathbf{q}$ and $\mathbf{v}$ for the latent representations computed by the caption and image encoder and not $z_{C_j}$ and $z_I$. The InfoNCE loss is defined as follows:

$$\mathcal{L}_{con} = \frac{1}{|\mathcal{B}|} \sum_{(\mathbf{q}, \mathbf{v}^+) \in \mathcal{B}} -\log \frac{\exp(\mathbf{q}^T \mathbf{v}^+ / \tau)}{\exp(\mathbf{q}^T \mathbf{v}^+ / \tau) + \sum_{\mathbf{v}^- \in \mathcal{S}_q^-} \exp(\mathbf{q}^T \mathbf{v}^- / \tau)}. \tag{1}$$

$\mathcal{L}_{con}$ in Eq. 1 is minimized when, given a query $\mathbf{q}$, the cosine similarity score with the positive candidate $\mathbf{v}^+$ is high (i.e., $\approx 1$), while the similarity scores with the negative candidates $\mathbf{v}^- \in \mathcal{S}_q^-$ in the batch are as low as possible; $\tau$ serves as a temperature parameter. The main objective of a contrastive learning loss for the ICR task is to learn data representations that can be used to discriminate between similar and dissimilar image-caption pairs. However, there is no constraint that enforces the encoders to learn representations that contain all available input information to make this discrimination, which is what we add.
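The following is a minimal PyTorch sketch of Eq. 1 for a batch of matching image-caption pairs; it is an illustration rather than the implementation from our repository, and the default temperature value and the symmetric averaging over the i2t and t2i directions are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z_img, z_cap, tau=0.07):
    """InfoNCE loss of Eq. 1 for a batch of L2-normalised embeddings.

    z_img, z_cap: (B, d) tensors; row i of both tensors forms a positive
    pair, and all other rows in the batch act as in-batch negatives.
    Every image and caption is used as a query (i2t and t2i).
    """
    sim = z_img @ z_cap.t() / tau                  # (B, B) similarity matrix
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_i2t = F.cross_entropy(sim, targets)       # images as queries
    loss_t2i = F.cross_entropy(sim.t(), targets)   # captions as queries
    return 0.5 * (loss_i2t + loss_t2i)
```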
## 3.2.1 Gradient W.R.T. The Input Representations

To show some important properties of the InfoNCE loss, we provide the derivative of $-\mathcal{L}_{con}$ w.r.t. the input in Eq. 2 (Chen et al., 2020c):

$$Z(\mathbf{q}, \mathbf{v}) = \frac{\exp(\mathbf{q}^T \mathbf{v} / \tau)}{\exp(\mathbf{q}^T \mathbf{v}^+ / \tau) + \sum_{\mathbf{v}^- \in \mathcal{S}_{\mathbf{q}}^-} \exp(\mathbf{q}^T \mathbf{v}^- / \tau)} \tag{2a}$$

$$\frac{\partial \mathcal{L}_{con}}{\partial \mathbf{q}} \tau = (1 - Z(\mathbf{q}, \mathbf{v}^+))\, \mathbf{v}^+ - \sum_{\mathbf{v}^- \in \mathcal{S}_{\mathbf{q}}^-} Z(\mathbf{q}, \mathbf{v}^-)\, \mathbf{v}^- \tag{2b}$$

$$\frac{\partial \mathcal{L}_{con}}{\partial \mathbf{v}^+} \tau = (1 - Z(\mathbf{q}, \mathbf{v}^+))\, \mathbf{q} \tag{2c}$$

$$\frac{\partial \mathcal{L}_{con}}{\partial \mathbf{v}^-} \tau = -Z(\mathbf{q}, \mathbf{v}^-)\, \mathbf{q}. \tag{2d}$$

$Z(\mathbf{q}, \mathbf{v})$ returns the similarity score of candidate $\mathbf{v}$ w.r.t. the query $\mathbf{q}$, normalized by the sum of similarity scores of all candidates in the batch $\mathcal{B}$. The full derivations of the derivative of $-\mathcal{L}_{con}$ are provided in Appendix B. Based on Eq. 2, we infer the following properties:

1. The update w.r.t. the query $\mathbf{q}$ (Eq. 2b) is a weighted sum over the positive candidate $\mathbf{v}^+$ and all negatives $\mathbf{v}^- \in \mathcal{S}_q^-$. The query representation $\mathbf{q}$ will be pulled closer to $\mathbf{v}^+$, while being pushed away from all $\mathbf{v}^- \in \mathcal{S}_q^-$. The weight value of each candidate, $Z(\mathbf{q}, \mathbf{v})$ (Eq. 2a), depends on the similarity score with the query.
2. $\mathbf{v}^+$ (Eq. 2c) will be pulled closer to the query representation (and the other way around).
3. All negatives $\mathbf{v}^-$ (Eq. 2d) will be pushed away from the query representation (and the other way around).

Without contrasting with negative candidates, the encoders will learn a trivial solution where latent representations collapse to a single point in the latent space (Jing et al., 2021). This means that the learned representation mainly depends on contrasting with negative candidates during training. If the representations $\mathbf{v}$ only contain a subset of the predictive input features (which still minimize the contrastive training objective), the query representation $\mathbf{q}$ (in the other modality) will be updated to match/mismatch these representations.

The contrastive InfoNCE objective itself does not guarantee that all the predictive features in the input data are learned (Robinson et al., 2021) and mainly relies on easy-to-detect features to contrast between positive and negative pairs (Chen et al., 2021). Importantly, the query and candidate representations are in different modalities and therefore generated by different encoders. Hence, the update of the query and candidate representations is based on *fixed* representations in the other modality (e.g., the caption encoder can only try to match/not match with the representations of the image encoder and vice versa). By adding a constraint on the representations of one of the two modalities, the other modality encoder will follow automatically. Therefore, in order to prevent predictive feature suppression for the caption modality in a resource-constrained ICR setting, we add a constraint to the learning algorithm that forces the caption representation to be projected into the latent space of a general-purpose sentence encoder.

## 3.3 Autoencoding Reconstruction Objective

Autoencoding (Hinton & Salakhutdinov, 2006) is a natural choice for learning latent data representations that contain most of the important input features without relying on hard negative samples. To reconstruct the input caption from the encoded latent representation $z_{C_j}^i$, we introduce a decoder network $h_\omega(\cdot)$:

$$\tilde{x}_{C_j}^i := h_\omega(z_{C_j}^i). \tag{3}$$

The decoder network $h_\omega(\cdot)$ takes the latent caption representation as input and outputs a reconstruction of the input caption $\tilde{x}_{C_j}^i$. To decode the input sequence from the latent representation, this latent representation should be a dense representation of the entire input sequence.
The reconstruction loss $\mathcal{L}_{rec}$ of a sequence of tokens $x_1, \dots, x_n$ of length $n$ is the negative log-likelihood of the input data:

$$\mathcal{L}_{rec} = -\sum_{t=1}^{n} \log\ p(x_t \mid x_{t-1:1}, z_{C_j}^i). \tag{4}$$

Based on Eq. 4 it is clear that each predicted token $x_t$ in the sequence is conditioned on: (i) the latent caption representation $z_{C_j}^i$, and (ii) the already predicted sequence $x_{t-1:1}$. As discussed in Section 1, a strong decoder will mainly rely on the learned language model and language patterns to decode the input sequence (Lu et al., 2021). This implies that the input sequence can be decoded correctly while mainly ignoring $z_{C_j}^i$, especially when $t$ is large. Therefore, decoding the caption sequence in the input space is not guaranteed to reduce predictive feature suppression.

## 3.4 Latent Target Decoding

In Section 3.2 we argued why the contrastive InfoNCE loss is prone to predictive feature suppression, and in Section 3.3 we discussed why decoding a caption in the input space will not prevent predictive feature suppression. In this section, we introduce *latent target decoding* (LTD). LTD decodes the semantics of the input caption in a latent space of a general-purpose sentence encoder to reduce predictive feature suppression, which can be used in combination with a contrastive loss. LTD addresses the issues of decoding the caption in the input space. See Figure 2 for a high-level overview of LTD for ICR.

For each caption $x_{C_j}^i$ in the training dataset we generate $y_{C_j}^i$, a *latent target representation*. The vector $y_{C_j}^i$ is a dense vector representation. We assume that this vector contains all the (semantic) information of the caption, captured by a general-purpose language encoder. We use our decoding network $h_\omega$ to decode $y_{C_j}^i$ instead of the input caption. By reconstructing a vector representation of the caption instead of the original input sequence, the reconstruction is not conditioned on the already predicted sequence of tokens. The latent target decoder assumes conditional independence of each feature in the latent target. Therefore, the decoder cannot rely on conditional (language model) patterns in the data to reconstruct the input semantics. This implies that we force the decoder to rely completely on $z_{C_j}^i$ to decode the latent target representation. LTD reduces predictive feature suppression by reconstructing the latent target from the caption embedding.

To combine LTD with a contrastive loss, it is necessary to compute the similarity score between a *global* representation of the entire caption and the image embedding. ICR methods that compute the similarity score by using fragments of the caption and regions in the image cannot be combined with LTD as introduced in this work. If there is no global representation of the caption, it is not possible to enforce that all the semantic information from the target encoder will be distilled into the caption representation that is used for computing the similarity score.

## 3.4.1 Target Decoding Network

To decode $y_{C_j}^i$ we use a three-layer feed-forward decoder network:

$$h_\omega(z_{C_j}^i) = \mathbf{W}^{(3)} \sigma\left(\mathbf{W}^{(2)} \sigma\left(\mathbf{W}^{(1)} z_{C_j}^i\right)\right), \tag{5}$$

where $\sigma$ is the ReLU non-linearity; $h_\omega$ takes the latent caption representation $z_{C_j}^i$ as input and maps it to a reconstruction of the latent target representation $\tilde{y}_{C_j}^i$.
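A minimal PyTorch sketch of the decoder in Eq. 5 is given below; the hidden width and the use of bias terms are assumptions, as Eq. 5 only fixes the depth and the ReLU non-linearities.

```python
import torch.nn as nn

class LatentTargetDecoder(nn.Module):
    """Three-layer feed-forward decoder h_omega of Eq. 5."""

    def __init__(self, d_z, d_y, d_hidden=1024):
        super().__init__()
        # d_z: dimension of the caption embedding z_C,
        # d_y: dimension of the latent target y_C,
        # d_hidden: hidden width (an assumption, not fixed by Eq. 5).
        self.net = nn.Sequential(
            nn.Linear(d_z, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_y),
        )

    def forward(self, z_cap):
        return self.net(z_cap)  # reconstruction of the latent target
```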
## 3.4.2 Loss Function

To train $h_\omega$, we use the cosine distance between $\tilde{y}_{C_j}^i$ and $y_{C_j}^i$ as reconstruction loss $\mathcal{L}_{rec}$:

$$\mathcal{L}_{rec}=1-\frac{\tilde{\mathbf{y}}_{C_{j}}^{i}}{\left\|\tilde{\mathbf{y}}_{C_{j}}^{i}\right\|}\cdot\frac{\mathbf{y}_{C_{j}}^{i}}{\left\|\mathbf{y}_{C_{j}}^{i}\right\|}.\tag{6}$$

Minimizing the cosine distance is equivalent to minimizing the mean squared error of two vectors normalized on the unit sphere (Chen & He, 2021; Grill et al., 2020). By introducing an extra loss criterion, the overall training objective becomes a dual optimization problem. The dual loss $\mathcal{L}_{dual}$ is defined as follows:

$$\mathcal{L}_{dual}=\mathcal{L}_{con}+\beta\mathcal{L}_{rec},\tag{7}$$

where $\beta$ serves as a balancing parameter to scale the two losses.

## 3.4.3 Constraint-Based Optimization

By adding LTD to the learning framework, we introduce an extra loss component $\mathcal{L}_{rec}$. To effectively minimize $\mathcal{L}_{dual}$, we have to find the right value for the balancing parameter $\beta$ in Eq. 7. This may require a considerable amount of manual tuning, and often one specific value for $\beta$ does not generalize to different training settings. Besides that, $\mathcal{L}_{rec}$ is not the main training objective for the ICR tasks. The main reason we add $\mathcal{L}_{rec}$ to the learning algorithm is to reduce predictive feature suppression caused by solely optimizing the contrastive loss. We therefore argue that implementing LTD as an optimization *constraint* (Rezende & Viola, 2018; van Rozendaal et al., 2020), as opposed to an optimization *objective*, might be more effective. Our goal, then, is to minimize the contrastive loss $\mathcal{L}_{con}$ given the constraint that the reconstruction loss is lower than or equal to a certain bound value $\eta$:

$$\min_{\theta,\psi,\omega}\mathcal{L}_{con}\ \text{subject to}\ \mathcal{L}_{rec}\leq\eta.\tag{8}$$

We can implement this optimization constraint in combination with gradient descent by using the method of Lagrange multipliers:

$$\max_{\lambda}\min_{\theta,\psi,\omega}\mathcal{L}_{lag}=\mathcal{L}_{con}+\lambda\left(\frac{\mathcal{L}_{rec}}{\eta}-1\right).\tag{9}$$

The optimization objective is to minimize $\mathcal{L}_{lag}$ w.r.t. the model parameters $\theta, \psi, \omega$, while maximizing $\mathcal{L}_{lag}$ w.r.t. the multiplier $\lambda$. The multiplier $\lambda$ is tuned automatically by using stochastic gradient ascent with momentum. By optimizing $\lambda$ with stochastic gradient ascent, the two losses are balanced automatically during training such that the reconstruction constraint is met, while the contrastive loss is minimized by gradient descent.
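For concreteness, a minimal sketch of how the loss in Eq. 6 and the constraint-based update of Eqs. 8–9 could be implemented is given below. The hyperparameter values mirror those reported in Section 4.2 ($\lambda$ initialized at 1, clamped to [0, 100], learning rate 5e−3, momentum and dampening 0.9), but the function names and the use of a sign-flipped SGD step for gradient ascent are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(y_tilde, y):
    """Eq. 6: cosine distance between the reconstruction and the latent target."""
    return 1.0 - F.cosine_similarity(y_tilde, y, dim=-1).mean()

# Lagrange multiplier for the constraint L_rec <= eta (Eqs. 8 and 9),
# optimized by stochastic gradient *ascent* with momentum.
lam = torch.tensor(1.0, requires_grad=True)
lam_optimizer = torch.optim.SGD([lam], lr=5e-3, momentum=0.9, dampening=0.9)

def training_step(model_optimizer, l_con, l_rec, eta=0.2):
    """One max-min update of Eq. 9: descent on the model parameters, ascent on lambda."""
    l_lag = l_con + lam * (l_rec / eta - 1.0)
    model_optimizer.zero_grad()
    lam_optimizer.zero_grad()
    l_lag.backward()
    lam.grad.neg_()              # flip the sign so SGD performs ascent on lambda
    model_optimizer.step()
    lam_optimizer.step()
    with torch.no_grad():
        lam.clamp_(0.0, 100.0)   # keep the multiplier within its bounds
    return l_lag.detach()
```

Flipping the sign of $\lambda$'s gradient turns the standard SGD update into gradient ascent, so a single backward pass serves both the min and the max step of Eq. 9.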
## 3.4.4 Choice of Latent Target Representation

To generate the latent target $y_{C_j}$, we use a Sentence-BERT transformer model (Reimers & Gurevych, 2019; Song et al., 2020).1 Sentence-BERT is a general-purpose sentence encoder that is trained on a large amount of data to capture the semantic input information. Thus, we expect these embeddings to be more general than those we learn for the resource-constrained contrastive ICR task, which makes them a suitable choice for the latent target representations $y_{C_j}$.

1https://huggingface.co/sentence-transformers/all-mpnet-base-v2

## 3.5 LTD vs. Teacher-Student Framework

LTD is somewhat similar to a teacher-student framework used with knowledge distillation (Hinton et al., 2015). Indeed, the target generator can be seen as a teacher network and the caption encoder in combination with the target decoder as a student network. However, in contrast with knowledge distillation, the goal of LTD is not to closely mimic a teacher network. Instead, the goal is to learn caption representations that can be used for multi-modal contrastive-based retrieval while extracting as much of the textual semantic input information of the caption as possible.

## 4 Experimental Setup

We design experiments aimed at showing: (i) a reduction of predictive feature suppression by using LTD, with a focus on the ICR task; (ii) the advantages of LTD over reconstructing the caption in the input space; (iii) the benefit of constraint-based optimization of LTD over dual loss optimization; and (iv) the generalizability of LTD to different contrastive losses and resource-constrained ICR methods that use different encoder network architectures. To facilitate reproducibility and further research of our work, we include the code with our paper.2

2https://github.com/MauritsBleeker/reducing-predictive-feature-suppression/

## 4.1 Datasets

For training and evaluating our ICR method, we use the two common ICR benchmark datasets: Flickr30k (F30k) (Young et al., 2014) and MS-COCO captions (COCO) (Lin et al., 2014). The F30k dataset contains 31,000 image-caption tuples. We use the train, validation, and test split from (Karpathy & Fei-Fei, 2015), with 29,000 images for training, 1,000 for validation, and 1,000 for testing. COCO consists of 123,287 image-caption tuples. We use the train, validation, and test split from (Karpathy & Fei-Fei, 2015); we do not use the 1k test setup. Both F30k and COCO come with k = 5 captions per image. We also use the crisscrossed captions (CxC) dataset, which extends the COCO validation and test set with additional annotations of similar captions and images (Parekh et al., 2020), so as to evaluate whether LTD improves the evaluation scores by retrieving semantically similar candidates.

## 4.2 Implementation Details

Unless otherwise specified, we use the following architectures for the target decoder, image encoder, and caption encoder. We use similar network architectures for the image and caption encoder as the ones used in (Chun et al., 2021), which are simple network architectures that can be trained using a limited amount of training data.

Image encoder. For the image encoder, we use a pre-trained ResNet-50 (He et al., 2016) network. We apply average pooling on the last convolutional layer followed by a projection head to map the image feature vector into a shared multi-modal latent space; the projection head has two feed-forward layers and a ReLU non-linearity.

Caption encoder. For the caption encoder, we use a bi-directional, single-layer, GRU network (Cho et al., 2014). We use pre-trained GloVe embeddings (Pennington et al., 2014) as word embeddings. We use a similar projection head as for the image encoder (which does not share parameters) to map the caption embedding into the shared latent space.

Target decoding network. For the target decoding network, we use a three-layer feed-forward network as in Eq. 5. To generate the latent target representations, we use the HuggingFace *all-MiniLM-L6-v2* Sentence-BERT implementation. The target decoding network is trained together with the image and caption encoder. The target decoding network is not used during evaluation.

Input decoding network. We compare *latent* target decoding (LTD) with *input* target decoding (ITD), which reconstructs the input caption in the input space (i.e., the input tokens).
For ITD, we use a single-layer GRU (Cho et al., 2014) decoder that reconstructs the input tokens in the caption (as explained in Section 3.3). We train the word embeddings for ITD from scratch. ITD is optimized with the negative log-likelihood loss (Eq. 4).

Training. Similar to (Chun et al., 2021), we use 30 warm-up and 30 fine-tune epochs, a batch size of 128, and a cosine annealing learning rate schedule with an initial learning rate of 2e−4. The Lagrange multiplier is initialized with a value of 1, bounded between 0 and 100, and is optimized by stochastic gradient ascent with a fixed learning rate of 5e−3 and a momentum (to prevent λ from fluctuating too much) and dampening value of α = 0.9. When we use $\mathcal{L}_{dual}$, we set β to 1. For the InfoNCE loss, we use a temperature value τ of 0.05. Evaluation scores of ICR methods tend to differ depending on the random seed used during training (Rao et al., 2022); to improve robustness, we apply stochastic weight averaging (SWA) (Izmailov et al., 2018); we take the average of 5 checkpoints, stored during the last 10% of the training iterations of each epoch. For the reconstruction constraint bound η, we try several values for all experiments, η ∈ {0.05, 0.1, 0.15, 0.2, 0.25, 0.3}. When we apply ITD we use η ∈ {0.5, 1, 2, 3, 4, 5, 6}. All results are based on the best-performing value of η.

Generalizability to different network architectures. LTD is a general method that can be combined with any global representation contrastive ICR method. To show that LTD works in combination with different network architectures, we apply LTD with multiple ICR methods that can be trained in a resource-constrained setup. To cover a wide spectrum of network architectures, we choose two methods that use different network architectures for the image and caption encoder.

VSRN. The visual semantic reasoning network (VSRN) (Li et al., 2019) consists of an image and caption encoder that both compute a global representation of each input modality. The caption encoder consists of a single directed GRU, similar to the caption encoder used in (Faghri et al., 2018). The image encoder takes a set of pre-computed regions of interest as input generated by a ResNet-101 (He et al., 2016) backbone, pre-trained on Visual Genome (Krishna et al., 2017). This set of pre-computed visual features is considered a fully connected graph of regions in the input image. To perform reasoning on the graph of visual features, a multi-layer graph convolutional network (Kipf & Welling, 2017) is used. Finally, to obtain one global representation of the entire image, a GRU is used to aggregate the regions of interest into one single representation. We use the same learning rate schedule and number of training epochs as in (Li et al., 2019) and we use the model implementation as provided by the authors.3 However, we modify the original VSRN model on two points: (i) We use the same caption encoder as described in Section 4.2 instead of the *single* directed GRU used for the original VSRN model and use a hidden dimension of 1024 instead of 2048 for the caption and image representations. The goal of this work is not to show which specific network architectures perform best for the ICR task, but to show the generalizability of LTD in combination with different encoder networks. (ii) The original VSRN model also comes with an additional caption decoder that decodes the input caption from the visual features.

3https://github.com/KunpengLi1994/VSRN
In this work, we investigate the reduction of predictive feature suppression for the general ICR framework, consisting of two encoders optimized by using a contrastive loss. If we were to add LTD to the original VSRN method, we would have two reconstruction objectives and a contrastive loss. The main reason we use VSRN is its use of graph convolutional networks in the image encoder, which shows the generalizability of LTD in combination with different encoder networks. Therefore, we remove this caption decoder from the learning algorithm.

TERN. The transformer reasoning network (TERN) (Messina et al., 2020b) is a transformer-based ICR method that is solely trained and evaluated on the COCO dataset. TERN consists of a pre-trained BERT (Devlin et al., 2018) caption encoder. The image encoder takes a set of pre-computed regions of interest as input (similar to VSRN) and consists of a stack of four transformer layers. The pre-computed features are only available for the COCO dataset. Next, both the image and caption features are pushed through a stack of shared transformer layers. Although the weights of the last part of the caption and image encoder are shared, there is no (cross) attention between the two modalities; the representations of the images and captions are still computed independently. The image and caption CLS token is used as a global representation of both the image and the caption. We use the same learning rate schedule, dropout rate, number of training epochs, and data augmentations as in (Messina et al., 2020b) and use the model implementation as provided by the authors.4

4https://github.com/mesnico/TERN

In this work, we use ICR methods that are trained from scratch on the F30k and COCO datasets and that are trainable on a single GPU. This does not mean that no weights of our encoders are initialized with pre-trained parameters. However, we only use pre-trained weights that are trained on uni-modal task(s), and not for image-text matching specifically.

## 4.3 Evaluation Metrics

To measure the reduction of predictive feature suppression, we evaluate how well the learned encoders generalize to the ICR evaluation task. The more predictive features the encoders learn to capture, the better these encoders are able to retrieve the correct candidate given a query. The standard evaluation metric for ICR is the recall@k metric, with k = {1, 5, 10}. During training, we evaluate the model after each training epoch on the validation set. Similar to (Faghri et al., 2018; Lee et al., 2018; Li et al., 2019; Chen et al., 2020a; Chun et al., 2021), we select the model with the highest score on the validation set (using the sum of all recall@k scores as a metric) for evaluation on the test set.

Recall@k. For ICR, recall@k is implemented as the fraction of how many times a matching candidate is present in the top-k ranking (Karpathy & Fei-Fei, 2015; Parekh et al., 2020). For the t2i task, there is only one matching image per query caption. For the i2t task, there are 5 matching captions per image. Only the highest-ranked caption per query is used for calculating the recall scores. When using the CxC annotations, for both i2t and t2i, we take the highest-ranked candidate.

R-precision. When extending the COCO dataset with the CxC annotations, we have one or more matching candidates per query for both i2t and t2i. Like Chun et al.
(2021), we also use r-precision (R-P) for evaluation; it measures the precision for a top-r ranking, where r is the number of matching candidates given a query.

nDCG. The standard evaluation metric for the ICR task, recall@k, mainly measures if the positive candidate is present in the top k of the ranking. However, this does not provide much insight into the overall quality of the ranking. To address the limitations of only using recall@k, Messina et al. (2020b) started using nDCG as an additional evaluation metric for t2i retrieval. However, for t2i retrieval there is only one positive image per query caption. To generate more positive images per caption query, images that have captions with a high overlap with the query caption are also considered positive. As similarity measurements between the captions, ROUGE-L (Lin, 2004) and SPICE (Anderson et al., 2016) are used. There are multiple re-annotations of COCO available that provide multiple matching images per caption query; see, for example, (Parekh et al., 2020). However, these re-annotations are not used by Messina et al. (2020b) to compute the nDCG scores. To keep the evaluation consistent, we use the same relevance labels as used in (Messina et al., 2020b). The nDCG relevance labels are only available for COCO and not for F30k.

| # | Method | Loss | i2t R@1 | i2t R@5 | i2t R@10 | i2t R-P | t2i R@1 | t2i R@5 | t2i R@10 | t2i R-P | ROUGE-L | SPICE |
|---|--------|------|---------|---------|----------|---------|---------|---------|----------|---------|---------|-------|
| *F30k* | | | | | | | | | | | | |
| 1.1 | BL | $\mathcal{L}_{con}$ | 47.4 | 75.9 | 84.8 | 0.34 | 33.9 | 65.2 | 76.6 | - | - | - |
| 1.2 | BL + ITD | $\mathcal{L}_{dual}$, β=1 | 45.7 | 74.0 | 84.4 | 0.33 | 33.7 | 65.1 | 75.8 | - | - | - |
| 1.3 | BL + ITD | $\mathcal{L}_{lag}$, η=6 | 36.6 | 66.8 | 76.5 | 0.28 | 27.8 | 59.1 | 71.0 | - | - | - |
| 1.4 | BL + LTD | $\mathcal{L}_{dual}$, β=1 | 46.1 | 75.3 | 84.1 | 0.34 | 34.0 | 65.9 | 77.4 | - | - | - |
| 1.5 | BL + LTD | $\mathcal{L}_{lag}$, η=0.2 | **49.6** | **78.7** | **86.4** | **0.37** | **36.7** | **68.4** | **79.3** | - | - | - |
| *COCO* | | | | | | | | | | | | |
| 2.1.1 | BL | $\mathcal{L}_{con}$ | 33.7 | 64.4 | 76.6 | 0.24 | 24.2 | 53.5 | 67.0 | - | 0.6487 | 0.5729 |
| 2.2.1 | BL + ITD | $\mathcal{L}_{dual}$, β=1 | 32.7 | 64.4 | 76.3 | 0.24 | 24.2 | 53.8 | 67.6 | - | 0.6496 | 0.5733 |
| 2.3.1 | BL + ITD | $\mathcal{L}_{lag}$, η=4 | 28.4 | 59.2 | 72.2 | 0.22 | 22.0 | 50.4 | 64.5 | - | 0.6424 | 0.5638 |
| 2.4.1 | BL + LTD | $\mathcal{L}_{dual}$, β=1 | 34.2 | 64.7 | 76.6 | 0.25 | 25.0 | 54.3 | 67.9 | - | 0.6510 | 0.5756 |
| 2.5.1 | BL + LTD | $\mathcal{L}_{lag}$, η=0.15 | **36.0** | **66.5** | **78.1** | **0.26** | **26.2** | **56.2** | **69.4** | - | **0.6531** | **0.5786** |
| *CxC* | | | | | | | | | | | | |
| 2.1.2 | BL | $\mathcal{L}_{con}$ | 36.1 | 68.1 | 80.2 | 0.22 | 26.7 | 57.6 | 71.0 | 0.23 | - | - |
| 2.2.2 | BL + ITD | $\mathcal{L}_{dual}$, β=1 | 35.0 | 68.0 | 79.7 | 0.22 | 26.6 | 58.0 | 71.6 | 0.23 | - | - |
| 2.3.2 | BL + ITD | $\mathcal{L}_{lag}$, η=4 | 31.0 | 62.9 | 75.8 | 0.20 | 24.6 | 54.8 | 68.9 | 0.21 | - | - |
| 2.4.2 | BL + LTD | $\mathcal{L}_{dual}$, β=1 | 36.6 | 68.1 | 79.9 | 0.23 | 27.6 | 58.8 | 72.0 | 0.24 | - | - |
| 2.5.2 | BL + LTD | $\mathcal{L}_{lag}$, η=0.15 | **38.4** | **70.4** | **81.5** | **0.24** | **28.9** | **60.4** | **73.3** | **0.25** | - | - |

Table 1: Recall@k, nDCG, and r-precision (R-P) evaluation scores for the F30k and COCO datasets (including the CxC annotations).
We evaluate three loss functions: $\mathcal{L}_{con}$, $\mathcal{L}_{dual}$, and $\mathcal{L}_{lag}$. We use three methods: the contrastive ICR baseline (BL), BL + input target decoding (ITD), and BL + latent target decoding (LTD). Boldface indicates the highest value for each evaluation metric per dataset. '-' indicates that it is not possible to compute the evaluation score for that dataset/experiment since the annotations are not available.

## 5 Results

In Section 5.1, we compare a contrastive ICR baseline with the same baseline combined with LTD. Next, in Section 5.2 we ask if similar results can be achieved with ITD as with LTD. In Section 5.3, we investigate the role of the optimization constraint and compare constraint-based LTD with LTD optimized as a dual loss. Finally, in Section 5.4, we ask whether LTD can be used in combination with a different contrastive loss function, and in Section 5.5 we show that LTD can be combined with a wide variety of resource-constrained ICR methods.

## 5.1 Contrastive ICR Baseline vs. Baseline + LTD

In Table 1 we compare the contrastive ICR baseline, which is optimized by using the contrastive loss $\mathcal{L}_{con}$ defined in Eq. 1, with the baseline combined with LTD. Based on Table 1, rows 1.1, 1.4, 2.1.1, 2.4.1, 2.1.2, and 2.4.2, we observe that LTD optimized as a dual loss ($\mathcal{L}_{dual}$) with β = 1 does not convincingly (or only with a small margin) outperform the baseline ICR method, which is optimized solely in a contrastive manner, in terms of recall@k, nDCG, and r-precision for both datasets and both i2t and t2i. In contrast, when we implement LTD as an optimization constraint, by using $\mathcal{L}_{lag}$ (rows 1.5, 2.5.1, and 2.5.2), we observe that LTD consistently outperforms the baseline ICR methods on both F30k and COCO for both tasks (i2t and t2i) with a large margin. An increase in recall also comes with an increase in the r-precision scores and nDCG scores. Hence, features learned by constraint-based LTD perform better on the evaluation task, which is an indication of the reduction of predictive feature suppression.

Figure 3: Overview of constraint-based optimization on the evaluation metric and the optimization objectives. (a) Trajectory of the recall sum on the validation set. (b) Trajectory of λ during training for different values of η; a log scale is used for the y-axis. (c) Trajectory of the reconstruction loss $\mathcal{L}_{rec}$. (d) Trajectory of the contrastive loss $\mathcal{L}_{con}$. We train $\mathcal{L}_{lag}$ with five different values of η ∈ {0.05, 0.10, 0.15, 0.20, 0.25}. Training steps are shown in units of 1e5.

## 5.2 Latent Target Decoding vs. Input Target Decoding

As argued in Section 3.3, decoding the caption in the input space will probably not result in a reduction of predictive feature suppression due to overfitting of the learned language model. To empirically show this, we also implemented a decoder that decodes tokens of the input caption (ITD) to reduce predictive feature suppression (see Section 4.2 for details). Based on rows 1.2, 2.2.1, and 2.2.2 in Table 1, we conclude that implementing ITD as a dual loss does not result in improved recall@k scores, for most values of k, compared to the contrastive baseline.
Surprisingly, when we implement ITD as an optimization constraint (with η = 6 for F30k and η = 4 for COCO; other values of η do not yield improvements), the evaluation scores are even lower (rows 1.3, 2.3.1, and 2.3.2) than when implemented as a dual loss. We conclude that: (i) ITD does not reduce predictive feature suppression for ICR, and (ii) implementing ITD as an optimization constraint even hurts performance.

## 5.3 The Role of the Optimization Constraint

What is the role of the optimization constraint when minimizing $\mathcal{L}_{rec}$ and what is the effect on the evaluation scores compared to using $\mathcal{L}_{dual}$? In Figure 3b we plot the trajectory of λ for different values of η ∈ {0.05, 0.1, 0.15, 0.2, 0.25} during training on the COCO dataset. We also provide (i) the trajectory of the evaluation score (recall sum) over the validation set during training (Figure 3a), (ii) the trajectory of the reconstruction loss for different values of η and when optimized without using a constraint ($\mathcal{L}_{dual}$) (Figure 3c), and (iii) the trajectory of the contrastive loss for different values of η (Figure 3d). Based on Figure 3 we observe:

1. λ increases until the optimization constraint is met (i.e., the bound η). The closer the reconstruction loss is to η, the slower the increase of λ. When the reconstruction constraint is met, λ decreases to 0 (Figure 3b).
2. λ is positive again when the reconstruction loss becomes higher than the bound η (Figure 3b, purple line).
3. The reconstruction loss converges to the value of η (Figure 3c). However, it is not possible to meet every value of η; e.g., η = 0.05 is too low for the model to achieve.
4. A lower reconstruction loss does not necessarily result in higher evaluation scores (Figure 3a). E.g., the recall sum is higher for η = 0.15 than for η = 0.1 or η = 0.05.
5. The value and the development of the contrastive loss do not depend much on the value of the reconstruction loss (Figure 3d). E.g., a model optimized with $\mathcal{L}_{con}$ has the same contrastive loss trajectory as a model that is optimized with $\mathcal{L}_{lag}$ or $\mathcal{L}_{dual}$. Hence, the contrastive loss on its own does not provide a good indication of the performance on the evaluation task. Similar trajectories of the contrastive loss result in different evaluation scores (hence different learned representations).

When we implement LTD as a dual loss, there is always a gradient from the reconstruction loss w.r.t. the parameters of the caption encoder, until the reconstruction loss is 0. This is not the case when we implement LTD as a reconstruction constraint. When the constraint is met, λ drops to zero and there is no gradient anymore from the reconstruction loss. We can conclude that a constant gradient from the reconstruction loss does not improve the learned representations of the caption encoder in terms of evaluation scores. The evaluation scores are higher when there is only a gradient until a certain reconstruction bound η is met.

## 5.4 Generalizability w.r.t. Contrastive Loss

In Section 3.2 we argued that the InfoNCE loss is prone to predictive feature suppression. A popular choice of contrastive loss function for ICR methods is the triplet loss with in-batch hard-negative mining (Faghri et al., 2018). The triplet loss with in-batch hard-negative mining is a special case of the InfoNCE loss, with exactly one positive and one negative candidate (Khosla et al., 2020). Therefore, our line of reasoning in Section 3.2 holds for the triplet loss too.
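For reference, a minimal sketch of the triplet loss with in-batch hard-negative mining could look as follows; it assumes L2-normalized embeddings where row i of the image matrix matches row i of the caption matrix, and the function name is illustrative:

```python
import torch

def triplet_loss_hard_negatives(img, cap, margin=0.2):
    """Sketch of the triplet loss with in-batch hard-negative mining
    (Faghri et al., 2018). img, cap: (B, d) L2-normalized embeddings."""
    scores = img @ cap.t()               # (B, B); entry [i, j] = similarity(img_i, cap_j)
    pos = scores.diag()                  # similarity scores of the matching pairs
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    neg = scores.masked_fill(mask, float('-inf'))  # exclude positives from mining
    hardest_cap = neg.max(dim=1).values  # hardest negative caption per image
    hardest_img = neg.max(dim=0).values  # hardest negative image per caption
    loss = (margin + hardest_cap - pos).clamp(min=0) \
         + (margin + hardest_img - pos).clamp(min=0)
    return loss.mean()
```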
To show the strength and generalizability of LTD to other contrastive losses, we run the same experiments as in Section 5.1 (only for LTD, not for ITD), with the triplet loss instead of the InfoNCE loss as $\mathcal{L}_{con}$. To prevent the triplet loss from collapsing to the trivial solution, we added a batch normalization layer after the projection head, for both the image and caption encoder; we use a margin value of α = 0.2 (Faghri et al., 2018; Li et al., 2019; Messina et al., 2020b). Based on Table 1, we can observe that the highest recall@k scores also come with the highest r-precision and nDCG scores. Since the main goal of this experiment is to show that LTD can be used in combination with different contrastive losses, we only evaluate recall@k.

Table 2 provides the recall@k scores for the F30k and COCO datasets. For both F30k and COCO the triplet loss with constraint-based LTD (see rows 3.1.3 and 3.2.3) results in higher evaluation scores than the InfoNCE loss with constraint-based LTD (see Table 1, rows 1.5 and 2.5.1). Our goal here is not to identify the best contrastive loss for ICR or LTD, but to show the generalizability of LTD to different contrastive losses. Moreover, using the triplet loss as $\mathcal{L}_{con}$ (see row 3.2.1) results in expected evaluation scores on the COCO dataset (given the reproducibility work in (Bleeker & de Rijke, 2022)). Surprisingly, however, the evaluation scores for the F30k dataset while using the triplet loss as $\mathcal{L}_{con}$ (see row 3.1.1) are lower than expected (when compared to Table 1, row 1.1). It is unclear why we observe these low evaluation scores for the F30k dataset when only using the triplet loss as $\mathcal{L}_{con}$. In contrast, we observe that constraint-based LTD in combination with the triplet loss drastically improves the evaluation scores for the F30k dataset, which shows the strength of constraint-based LTD for improving ICR evaluation scores and also making the triplet loss more robust to predictive feature suppression and feature collapsing.

| # | Method | Loss | i2t R@1 | i2t R@5 | i2t R@10 | t2i R@1 | t2i R@5 | t2i R@10 |
|---|--------|------|---------|---------|----------|---------|---------|----------|
| *F30k* | | | | | | | | |
| 3.1.1 | BL | $\mathcal{L}_{con}$ | 12.8 | 33.2 | 45.4 | 11.1 | 32.6 | 46.6 |
| 3.1.2 | BL + LTD | $\mathcal{L}_{dual}$, β = 1 | 17.1 | 42.7 | 56.4 | 13.1 | 40.5 | 55.7 |
| 3.1.3 | BL + LTD | $\mathcal{L}_{lag}$, η = 0.2 | **54.7** | **81.5** | **88.3** | **40.8** | **71.3** | **80.6** |
| *COCO* | | | | | | | | |
| 3.2.1 | BL | $\mathcal{L}_{con}$ | 37.1 | 67.1 | 78.2 | 27.8 | 56.6 | 69.3 |
| 3.2.2 | BL + LTD | $\mathcal{L}_{dual}$, β = 1 | 37.4 | 67.8 | 79.1 | 28.2 | 57.2 | 70.5 |
| 3.2.3 | BL + LTD | $\mathcal{L}_{lag}$, η = 0.2 | **39.1** | **69.3** | **80.6** | **29.6** | **59.4** | **72.2** |

Table 2: Recall@k evaluation scores for the F30k and COCO datasets. For experiments 3.∗, we use the triplet loss with in-batch hard-negative mining, as defined in (Faghri et al., 2018), instead of the InfoNCE loss (van den Oord et al., 2018) for $\mathcal{L}_{con}$. Boldface indicates the highest value for each evaluation metric per dataset.

## 5.5 Generalizability w.r.t. Network Architectures

In this section, we consider whether LTD can be used in combination with different resource-constrained ICR methods that use different network architectures. To answer this question we use LTD in combination with the VSRN and TERN methods.
In line with the observations in Table 1, we observe in Table 3 that for both VSRN and TERN: (i) constraint-based LTD outperforms the contrastive baseline, and (ii) constraint-based LTD results in a stronger performance improvement than implementing LTD as a dual loss. In line with the results in (Messina et al., 2020b), the VSRN baseline outperforms the TERN baseline in terms of recall@k. However, the difference in nDCG scores between the two models is relatively small. Although constraint-based LTD outperforms both the baseline and LTD implemented as a dual loss, the improvements gained by using LTD in combination with VSRN are less convincing than for TERN. The most consistent improvement in evaluation scores is obtained by combining LTD with TERN, which is a fully transformer-based ICR method. This is a substantially different architecture from the one in Table 1 and VSRN, and such transformer network architectures are the most prominent network architectures these days for multi-modal tasks (Radford et al., 2021; Chen et al., 2020d; Lu et al., 2019; Yuan et al., 2021). When only a limited amount of training data is available and one wants to make use of (partly) pre-trained transformer networks for multi-modal contrastive learning, constraint-based LTD can help to significantly improve the evaluation scores for ICR. Furthermore, TERN makes use of a pre-trained BERT (Devlin et al., 2018) model as a caption encoder. BERT is a general-purpose text encoder pre-trained on text only. An open question remains why one would train a caption encoder at all, while a (frozen) general-purpose sentence encoder is used to generate the latent targets for LTD; why not use the target generator directly as a caption encoder? The results in Table 3 show that fine-tuning a general-purpose language encoder (i.e., BERT) with a contrastive loss as caption encoder results in lower evaluation scores than fine-tuning the caption encoder in combination with constraint-based LTD. This shows that LTD helps to extract features (i.e., not suppress these features) from the input data that are relevant for the ICR task and that are not captured by either the pre-trained BERT model or by only using the contrastive optimization objective. In Appendix C, we provide three qualitative ranking results for i2t retrieval using TERN and samples from the COCO test set.
Based on the examples it is clear that a baseline TERN does not represent specific concepts (i.e., predictive features) that are needed to rank the correct captions at the top of the ranking, while TERN optimized with constraint-based LTD does represent these predictive features.

| # | Method | Loss | i2t R@1 | i2t R@5 | i2t R@10 | t2i R@1 | t2i R@5 | t2i R@10 | ROUGE-L | SPICE |
|---|--------|------|---------|---------|----------|---------|---------|----------|---------|-------|
| *VSRN, F30k* | | | | | | | | | | |
| 5.1.1 | BL | $\mathcal{L}_{con}$ | 60.3 | 85.6 | 90.8 | 44.9 | 74.0 | 83.3 | - | - |
| 5.1.2 | BL + LTD | $\mathcal{L}_{dual}$, β = 1 | 59.6 | 86.6 | 91.6 | 44.3 | 74.7 | 83.5 | - | - |
| 5.1.3 | BL + LTD | $\mathcal{L}_{lag}$, η = 0.25 | **61.9** | **86.8** | **92.1** | **45.3** | **76.4** | **84.4** | - | - |
| *VSRN, COCO* | | | | | | | | | | |
| 5.2.1 | BL | $\mathcal{L}_{con}$ | 47.0 | 76.9 | 87.0 | 34.5 | 65.7 | 78.1 | 0.6779 | 0.6080 |
| 5.2.2 | BL + LTD | $\mathcal{L}_{dual}$, β = 1 | 45.9 | 77.8 | 87.6 | 34.6 | 66.2 | 78.3 | 0.6791 | 0.6088 |
| 5.2.3 | BL + LTD | $\mathcal{L}_{lag}$, η = 0.15 | **47.6** | **79.0** | **87.8** | **35.0** | **66.7** | **78.9** | **0.6797** | **0.6112** |
| *TERN, COCO* | | | | | | | | | | |
| 5.3.1 | BL | $\mathcal{L}_{con}$ | 41.2 | 72.6 | 83.6 | 31.0 | 61.9 | 74.7 | 0.6648 | 0.5926 |
| 5.3.2 | BL + LTD | $\mathcal{L}_{dual}$, β = 1 | 42.3 | 74.3 | 84.4 | 31.4 | 62.7 | 75.4 | 0.6684 | 0.5993 |
| 5.3.3 | BL + LTD | $\mathcal{L}_{lag}$, η = 0.2 | **44.1** | **74.8** | **85.7** | **33.6** | **64.6** | **76.9** | **0.6727** | **0.6059** |

Table 3: Recall@k and nDCG evaluation scores for the F30k and COCO datasets using the VSRN and TERN network architectures. Boldface indicates the highest value for each evaluation metric per dataset. '-' indicates that it is not possible to compute the evaluation score for that dataset/experiment since the annotations are not available.

## 6 Conclusion

We have presented latent target decoding, a novel approach to reduce predictive feature suppression for contrastive resource-constrained ICR methods. Instead of reconstructing the captions in the input space, LTD reduces predictive feature suppression by reconstructing the input caption in the latent space of a general-purpose sentence encoder. By reconstructing the caption in this latent space, it becomes more difficult for the image and caption encoder to suppress predictive features that are not needed to solve the contrastive optimization objective.

Main findings. Our results show that constraint-based LTD obtains higher evaluation scores than both a contrastive ICR baseline and LTD implemented as a dual loss. This implies that we are able to reduce predictive feature suppression (and hence improve evaluation performance) by using constraint-based LTD, which does not require additional image-text training data or hard-negative mining strategies. Furthermore, we show that constraint-based LTD consistently results in a bigger improvement in evaluation scores than implementing LTD as a dual loss. These results suggest that, instead of simply minimizing both the contrastive and reconstruction loss, better evaluation scores can be obtained by only optimizing the reconstruction loss until a certain bound value η is met. Finally, we show that constraint-based LTD can be combined with different contrastive learning losses and a wide variety of resource-constrained ICR methods.

Implications. The results of our work show that in a resource-constrained setup the evaluation performance of contrastive ICR methods can be substantially improved by using constraint-based LTD, without relying on more training data or hard-negative mining strategies.
We, therefore, argue that, in a resource-constrained setup, LTD should be part of the standard ICR framework to mitigate the problem of predictive feature suppression. Furthermore, we argue that when one uses an additional reconstruction objective to reduce predictive feature suppression, one should consider implementing this objective as an optimization constraint instead of a dual loss.

Limitations. In this work, we use a general-purpose sentence encoder to generate our latent target representation $y_{C_j}$. However, we need to assume that this latent target representation contains the relevant information of the input caption. Furthermore, the availability of a general-purpose sentence encoder is not always guaranteed (e.g., when working with low-resource languages). For the ICR task, the predictive features are the features needed to retrieve the positive item from a set of candidates. We, therefore, measure the reduction of predictive feature suppression by using the standard ranking evaluation metrics, such as recall@k, r-precision, and nDCG. However, we do not explicitly know which features are causing the observed improvement in the evaluation scores by using LTD.

Future work. We have several directions for future work. First, we plan to examine if the choice of different target generators will result in different ICR evaluation scores. Moreover, we also want to look into generating latent target representations without relying on a pre-trained sentence encoder. Another promising direction for future work is an analysis of the exact role of the optimization constraint. In Section 5.3 we examine the role of the optimization constraint when training the image and caption encoder. When the optimization constraint η is met, λ (i.e., the balancing parameter of the two losses) drops to zero and the reconstruction loss no longer provides a gradient (Figure 3b). Although λ is (close to) zero for the majority of the training after the constraint is met, the evaluation scores on the validation set remain higher than when optimizing with $\mathcal{L}_{dual}$, with β = 1 (Figure 3a). This suggests that a constant gradient from the reconstruction loss does not benefit the training process, which is the case if LTD is implemented as a dual loss. We plan future research into the role of the optimization constraint by trying constraint-based optimization for other multi-task optimization problems.

In this work, we focus on the reduction of predictive feature suppression for resource-constrained ICR methods. In Section 1 we argued that predictive feature suppression is less of an issue for models that are trained with large batch sizes since more information is needed to match the query with the positive candidate (due to a large number of negative candidates). However, it remains a promising direction for further research to investigate if and how constraint-based LTD can be used for either large-scale contrastive image-text representation learning or for fine-tuning. Prominent large-scale image-text matching methods, such as ALIGN (Jia et al., 2021), use noisy image-text pairs scraped from the internet. It is unclear if the target generator will provide useful targets (and hence a training signal) when the caption has a weak relation with its matching image (which is possible for noisy image-text pairs). It might be the case that the target generator mainly provides useful supervision signals when using high-quality human-curated datasets, such as F30k and COCO.
Finally, we suggest working on methods to measure which features are responsible for the gained improvement in evaluation performance. A logical choice would be to use feature attribution methods. However, different feature attribution methods tend to disagree with each other, for both RNN and transformer-based models (Neely et al., 2021). Therefore, the choice of feature attribution method will influence the analyses of which predictive features are better captured by using LTD. To gain further insights into which features are captured by the learned encoders, we recommend developing task-specific feature attribution methods that can measure the reduction of predictive feature suppression directly. ## Acknowledgments We thank the reviewers and the action editor for their valuable comments and suggestions. We thank Maartje ter Hoeve for reviewing the text. This research was supported by the Nationale Politie and the Hybrid Intelligence Center through the Netherlands Organisation for Scientific Research. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. ## References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. SPICE: semantic propositional image caption evaluation. In *ECCV*, pp. 382–398, 2016. Ali Furkan Biten, Andrés Mafla, Lluís Gómez, and Dimosthenis Karatzas. Is an image worth five sentences? A new look into semantics for image-text matching. In *WACV*, pp. 2483–2492, 2022. Maurits Bleeker and Maarten de Rijke. Do lessons from metric learning generalize to image-caption retrieval? In *ECIR*, pp. 535–551, 2022. Andrew Brown, Weidi Xie, Vicky Kalogeiton, and Andrew Zisserman. Smooth-AP: Smoothing the path towards large-scale image retrieval. In *ECCV*, pp. 677–694, 2020. Hui Chen, Guiguang Ding, Zijia Lin, Sicheng Zhao, and Jungong Han. Cross-modal image-text retrieval with semantic consistency. In *ACM MM*, pp. 1749–1757, 2019. Hui Chen, Guiguang Ding, Xudong Liu, Zijia Lin, Ji Liu, and Jungong Han. IMRAM: iterative matching with recurrent attention memory for cross-modal image-text retrieval. In *CVPR*, pp. 12652–12660, 2020a. Tianlang Chen, Jiajun Deng, and Jiebo Luo. Adaptive offline quintuplet loss for image-text matching. In ECCV, pp. 549–565, 2020b. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, pp. 1597–1607, 2020c. Ting Chen, Calvin Luo, and Lala Li. Intriguing properties of contrastive losses. In *NeurIPS*, pp. 11834–11845, 2021. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *CVPR*, pp. 15750–15758, 2021. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: universal image-text representation learning. In *ECCV*, pp. 104–120, 2020d. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. In *EMNLP Workshop on Syntax, Semantics and* Structure in Statistical Translation, pp. 103–111, 2014. Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, and Diane Larlus. Probabilistic embeddings for cross-modal retrieval. In *CVPR*, pp. 8415–8424, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, pp. 4171–4186, 2018. 
Haiwen Diao, Ying Zhang, Lin Ma, and Huchuan Lu. Similarity reasoning and filtration for image-text matching. In *AAAI*, pp. 1218–1226, 2021.

Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, and Sanja Fidler. VSE++: improving visual-semantic embeddings with hard negatives. In *BMVC*, pp. 12, 2018.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent - A new approach to self-supervised learning. In *NeurIPS*, pp. 21271–21284, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *CVPR*, pp. 770–778, 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In *CVPR*, pp. 9726–9735, 2020.

Katherine L. Hermann and Andrew K. Lampinen. What shapes feature representations? Exploring datasets, architectures, and training. In *NeurIPS*, pp. 9995–10006, 2020.

Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. *Science*, 2006.

Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry P. Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In *UAI*, pp. 876–885, 2018.

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In *ICML*, pp. 4904–4916, 2021.

Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. In *ICLR*, 2021.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In *CVPR*, pp. 3128–3137, 2015.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In *EMNLP*, pp. 6769–6781, 2020.

Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. In *NeurIPS*, pp. 18661–18673, 2020.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *ICLR*, 2017.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. *International Journal of Computer Vision*, 123:32–73, 2017.

Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In *ECCV*, pp. 212–228, 2018.

Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven C. H. Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In *NeurIPS*, pp. 9694–9705, 2021.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In *ICML*, pp. 12888–12900, 2022.
Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, and Yun Fu. Visual semantic reasoning for image-text matching. In *ICCV*, pp. 4653–4661, 2019.

Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, and Dina Katabi. Making contrastive learning robust to shortcuts. *arXiv preprint arXiv:2012.09962*, 2020a.

Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. In *ECCV*, pp. 121–137, 2020b.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, 2004.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In *ECCV*, pp. 740–755, 2014.

Erik Lindgren, Sashank J. Reddi, Ruiqi Guo, and Sanjiv Kumar. Efficient training of retrieval models using negative cache. In *NeurIPS*, pp. 4134–4146, 2021.

Chunxiao Liu, Zhendong Mao, Tianzhu Zhang, Hongtao Xie, Bin Wang, and Yongdong Zhang. Graph structured network for image-text matching. In *CVPR*, pp. 10918–10927, 2020.

Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In *NeurIPS*, pp. 13–23, 2019.

Shuqi Lu, Di He, Chenyan Xiong, Guolin Ke, Waleed Malik, Zhicheng Dou, Paul Bennett, Tie-Yan Liu, and Arnold Overwijk. Less is more: Pre-training a strong siamese encoder using a weak decoder. In *EMNLP*, pp. 2780–2791, 2021.

Itzik Malkiel and Lior Wolf. MTAdam: Automatic balancing of multiple training loss terms. In *EMNLP*, pp. 10713–10729, 2021.

Nicola Messina, Giuseppe Amato, Andrea Esuli, Fabrizio Falchi, Claudio Gennaro, and Stéphane Marchand-Maillet. Fine-grained visual textual alignment for cross-modal retrieval using transformer encoders. *ACM Trans. Multim. Comput. Commun. Appl.*, 17:128:1–128:23, 2020a.

Nicola Messina, Fabrizio Falchi, Andrea Esuli, and Giuseppe Amato. Transformer reasoning network for image-text matching and retrieval. In *ICPR*, pp. 5222–5229, 2020b.

Michael Neely, Stefan F. Schouten, Maurits Bleeker, and Ana Lucic. Order in the court: Explainable AI methods prone to disagreement. *arXiv preprint arXiv:2105.03287*, 2021.

Zarana Parekh, Jason Baldridge, Daniel Cer, Austin Waters, and Yinfei Yang. Crisscrossed captions: Extended intramodal and intermodal semantic similarity judgments for MS-COCO. In *EACL*, pp. 2855–2870, 2020.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In *EMNLP*, pp. 1532–1543, 2014.

John C. Platt and Alan H. Barr. Constrained differential optimization. In *NIPS*, pp. 612–621, 1987.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In *NAACL-HLT*, pp. 5835–5847, 2021.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *ICML*, pp. 8748–8763, 2021.

Jun Rao, Fei Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, and Dacheng Tao. Where does the performance improvement come from?
- A reproducibility concern about image-text retrieval. In *SIGIR*, pp. 2727–2737, 2022. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese bert-networks. In EMNLP-IJCNLP, pp. 3980–3990, 2019. Danilo J. Rezende and Fabio Viola. Generalized ELBO with constrained optimization, GECO. In *NeurIPS* Workshop on Bayesian Deep Learning, 2018. Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, and Suvrit Sra. Can contrastive learning avoid shortcut solutions? In *NeurIPS*, pp. 4974–4986, 2021. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mpnet: Masked and permuted pre-training for language understanding. In *NeurIPS*, pp. 16857–16867, 2020. Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning? In *NeurIPS*, 2020. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Ties van Rozendaal, Guillaume Sautière, and Taco S. Cohen. Lossy compression with distortion constrained optimization. In *CVPR Workshops*, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *NeurIPS*, pp. 5998–6008, 2017. Sijin Wang, Ruiping Wang, Ziwei Yao, Shiguang Shan, and Xilin Chen. Cross-modal scene graph matching for relationship-aware image-text retrieval. In *WACV*, pp. 1497–1506, 2020. Zihao Wang, Xihui Liu, Hongsheng Li, Lu Sheng, Junjie Yan, Xiaogang Wang, and Jing Shao. CAMP: cross-modal adaptive message passing for text-image retrieval. In *ICCV*, pp. 5763–5772, 2019. Tete Xiao, Xiaolong Wang, Alexei A. Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. In *ICLR*, 2021. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In *ICLR*, 2021. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014. Tan Yu, Yi Yang, Yi Li, Lin Liu, Hongliang Fei, and Ping Li. Heterogeneous attention network for effective and efficient cross-modal retrieval. In *SIGIR*, pp. 1146–1156, 2021. Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. *arXiv preprint arXiv:2111.11432*, 2021. Kun Zhang, Zhendong Mao, Quan Wang, and Yongdong Zhang. Negative-aware attention framework for image-text matching. In *CVPR*, pp. 15640–15649, 2022. 
## A Notation and Variables

| Symbol | Explanation |
|--------|-------------|
| $\mathcal{L}_{con}$ | Symbol for the contrastive loss. In this work we either use the InfoNCE loss (van den Oord et al., 2018) or a triplet loss with in-batch hard-negative mining (Faghri et al., 2018). |
| $\mathcal{L}_{rec}$ | Symbol for the reconstruction loss of the input caption (i.e., decoding loss). In this work we use the negative cosine similarity when using embeddings (latent target decoding) or the log-likelihood when we reconstruct the tokens of the input captions (input target decoding). |
| $\mathcal{L}_{dual}$ | Symbol for the sum of the contrastive loss and the reconstruction loss (i.e., the dual loss). The reconstruction loss is scaled by $\beta$. |
| $\mathcal{L}_{lag}$ | Symbol for the sum of the contrastive loss and the reconstruction loss, where the reconstruction loss is implemented as a Lagrange multiplier optimization constraint. |
| $\mathbf{q}$ | Vector representation of a query, either an image or a caption. |
| $\mathbf{v}, \mathbf{v}^+, \mathbf{v}^-$ | Vector representation of a candidate. Given a query $\mathbf{q}$, a candidate is either matching ($\mathbf{v}^+$) or not matching ($\mathbf{v}^-$). Candidates are either images or captions. |
| $\mathcal{D}$ | Dataset consisting of $N$ image-caption tuples; each image $i \in N$ comes with $k$ captions. |
| $x_I^i$ | Input image $i$. |
| $x_{C_j}^i$ | Input caption $j$ that describes image $i$. |
| $z_I^i$ | Latent representation of image $i$. |
| $z_{C_j}^i$ | Latent representation of caption $j$ that describes image $i$. |
| $\eta$ | Reconstruction bound (or threshold). The reconstruction loss is only minimized up to the value of $\eta$. |
| $\lambda$ | The Lagrange multiplier. |
| $\beta$ | Balancing parameter to balance (or scale) two losses when using the dual loss. |
| $\mathcal{B}$ | Batch with training samples. |
| $\mathcal{S}_{\mathbf{q}}^-$ | The set of all negative candidates $\mathbf{v}^-$, in a training batch, given query $\mathbf{q}$. |
| $\tau$ | Temperature parameter to scale the logits (i.e., cosine similarity) for the InfoNCE loss. |
| $\alpha$ | Margin parameter for the triplet loss. |
| $y_{C_j}^i$ | Latent target representation (i.e., semantic embedding produced by a Sentence-BERT/language encoder) for caption $j$ that describes image $i$. |
| $\tilde{y}_{C_j}^i$ | Reconstructed latent target representation by the decoder network (i.e., LTD), for caption $j$ that describes image $i$. |
| $\tilde{x}_{C_j}^i$ | Reconstruction of the input tokens (i.e., ITD) by the decoder network for caption $j$ that describes image $i$. |
| $f_\theta(\cdot)$ | Image encoder parameterized by $\theta$. Takes as input $x_I^i$. Outputs a latent (global) image representation $z_I^i$. |
| $g_\phi(\cdot)$ | Caption encoder parameterized by $\phi$. Takes as input $x_{C_j}^i$. Outputs a (global) latent caption representation $z_{C_j}^i$. |
| $h_\omega(\cdot)$ | Decoder network parameterized by $\omega$. Takes as input $z_{C_j}^i$. Outputs a reconstruction of the input caption, either $\tilde{y}_{C_j}^i$ (LTD) or $\tilde{x}_{C_j}^i$ (ITD). |

Table 4: Overview of the notation and variables used throughout this work.
## B Gradient of the InfoNCE Loss w.r.t. the Query and Candidates

We start with the definition of the InfoNCE loss (van den Oord et al., 2018) using the notation introduced in Section 3, for one query-candidate pair $(\mathbf{q}, \mathbf{v}^+)$ and a set of negative candidates $\mathcal{S}_{\mathbf{q}}^-$:

$$\mathcal{L}_{con}=-\log\frac{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)}{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}\tag{10a}$$

$$=-\mathbf{q}^{T}\mathbf{v}^{+}/\tau+\log\Big(\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)\Big)\tag{10b}$$

$$-\mathcal{L}_{con}=\mathbf{q}^{T}\mathbf{v}^{+}/\tau-\log\Big(\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)\Big).\tag{10c}$$

Next, we take the derivative of $-\mathcal{L}_{con}$ w.r.t. $\mathbf{q}$ (as also provided in (Chen et al., 2020c)):

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{q}}=\mathbf{v}^{+}/\tau-\frac{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)\,\mathbf{v}^{+}/\tau+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)\,\mathbf{v}^{-}/\tau}{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}\tag{11a}$$

$$=\mathbf{v}^{+}/\tau-\frac{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)}{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}\,\mathbf{v}^{+}/\tau-\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\frac{\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}\,\mathbf{v}^{-}/\tau.\tag{11b}$$

Now let us define $Z(\mathbf{q}, \mathbf{v})$ (similar to Eq. 2a in Section 3):

$$Z(\mathbf{q},\mathbf{v})=\frac{\exp(\mathbf{q}^{T}\mathbf{v}/\tau)}{\exp(\mathbf{q}^{T}\mathbf{v}^{+}/\tau)+\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}\exp(\mathbf{q}^{T}\mathbf{v}^{-}/\tau)}.\tag{12}$$

Next, we plug $Z(\mathbf{q}, \mathbf{v})$ into Eq. 11:

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{q}}=\mathbf{v}^{+}/\tau-Z(\mathbf{q},\mathbf{v}^{+})\mathbf{v}^{+}/\tau-\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}Z(\mathbf{q},\mathbf{v}^{-})\mathbf{v}^{-}/\tau\tag{13a}$$

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{q}}\tau=\mathbf{v}^{+}-Z(\mathbf{q},\mathbf{v}^{+})\mathbf{v}^{+}-\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}Z(\mathbf{q},\mathbf{v}^{-})\mathbf{v}^{-}\tag{13b}$$

$$=\big(1-Z(\mathbf{q},\mathbf{v}^{+})\big)\mathbf{v}^{+}-\sum_{\mathbf{v}^{-}\in\mathcal{S}_{\mathbf{q}}^{-}}Z(\mathbf{q},\mathbf{v}^{-})\mathbf{v}^{-}.\tag{13c}$$

In a similar way, we can take the derivative of $-\mathcal{L}_{con}$ w.r.t. $\mathbf{v}^+$ and $\mathbf{v}^-$:

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{v}^{+}}=\mathbf{q}/\tau-Z(\mathbf{q},\mathbf{v}^{+})\mathbf{q}/\tau\tag{14a}$$

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{v}^{+}}\tau=\big(1-Z(\mathbf{q},\mathbf{v}^{+})\big)\mathbf{q}.\tag{14b}$$

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{v}^{-}}=-Z(\mathbf{q},\mathbf{v}^{-})\mathbf{q}/\tau\tag{15a}$$

$$-\frac{\partial\mathcal{L}_{con}}{\partial\mathbf{v}^{-}}\tau=-Z(\mathbf{q},\mathbf{v}^{-})\mathbf{q}.\tag{15b}$$
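The closed-form gradients in Eqs. 13c, 14b, and 15b can be checked numerically against automatic differentiation. The following minimal sketch compares the analytic expressions with PyTorch autograd; the dimensions, seed, and variable names are arbitrary illustrative choices:

```python
import torch

torch.manual_seed(0)
d, n_neg, tau = 8, 4, 0.05
q = torch.randn(d, requires_grad=True)
v_pos = torch.randn(d, requires_grad=True)
v_neg = torch.randn(n_neg, d, requires_grad=True)

# InfoNCE loss for a single query (Eq. 10a).
logits = torch.cat([(q @ v_pos).view(1), v_neg @ q]) / tau
loss = -torch.log_softmax(logits, dim=0)[0]
loss.backward()

# Closed-form gradients (Eqs. 13c, 14b, 15b), with Z as in Eq. 12.
with torch.no_grad():
    Z = torch.softmax(logits, dim=0)   # Z[0] = Z(q, v+); Z[1:] = Z(q, v-)
    grad_q = -((1 - Z[0]) * v_pos - Z[1:] @ v_neg) / tau
    grad_pos = -(1 - Z[0]) * q / tau
    grad_neg = Z[1:, None] * q / tau   # +Z(q, v-) q / tau for each negative

print(torch.allclose(q.grad, grad_q, atol=1e-6))        # True
print(torch.allclose(v_pos.grad, grad_pos, atol=1e-6))  # True
print(torch.allclose(v_neg.grad, grad_neg, atol=1e-6))  # True
```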
## C Ranking Examples

![25_image_0.png](25_image_0.png)

Figure 4: Three query images from the COCO test set. For each query image we show the top 5 captions retrieved by TERN. We compare the TERN baseline (BL) and TERN optimized in combination with constraint-based LTD (BL + LTD). Ground-truth (i.e., matching) captions are indicated in green. Captions in red do not match the query image.

In Figure 4 we provide three query images from the COCO test set and the top 5 captions retrieved by TERN. We compare the TERN baseline (BL) and TERN optimized in combination with constraint-based LTD (BL + LTD). We selected three examples with a large difference in precision@5 between BL and BL + LTD. For all three examples, it is clear that the baseline ICR method misses a concept (i.e., *predictive feature(s)*) that is needed to rank the ground-truth captions in the top 5. In the left example, the best-matching captions according to the BL ignore that the man in the image *takes a photo*. In the middle example, the best-matching captions according to the BL do not match on the fact that there is a *cup/mug*. In the right example, the best-matching captions do not contain the concept of *teddy bears*. Clearly, an ICR method optimized in combination with LTD is able to match images and captions based on more fine-grained features than a baseline ICR method.
Review 1: Summary: This work proposes latent target decoding, a method to improve representations learnt through pretraining with contrastive learning. Typical contrastive losses do not enforce constraints that require learnt representations to contain all available information. LTD is motivated as an optimization constraint which measures the reconstruction quality of an input caption in the latent space of a language encoder. The paper shows the positive effect of LTD on various contrastive learning setups for image-text retrieval. Strengths and Weaknesses: The paper is well written, and the proposed approach is well motivated and easy to follow. I appreciate the contributions, but hold some concerns with the experimental results. I believe that the current experiment findings do not comprehensively justify the benefits of LTD over vanilla contrastive learning. ## Strengths: - The proposed approach LTD is well motivated. The paper explains in detail why feature suppression is an unintended consequence of previous contrastive learning frameworks, and why other methods (such as autoencoding) are insufficient. - The proposed approach is implemented as an optimization constraint, which minimizes hyperparameter tuning. - LTD is general: it works for both InfoNCE and triplet loss, which most contrastive pretraining methods use. ## Weaknesses: - I am not sure that the derivations presented in eq 2b, 2c and 2d are correct. I get a slightly different result when computing it. Can you please provide details for the derivatives? - It’s suggested in the paper that LTD is agnostic to the choice of the language encoder. Could you show some experiments with other pretrained encoders, and show how the results vary with different models? - The architecture used is not competitive with most SOTA contrastive learning methods. In particular, the caption encoder is a single layer GRU, which seems to be significantly worse off in general compared to the larger transformers used in CLIP and ALIGN. It’s unclear if LTD will generalize to stronger contrastive learning setups. - The generalization experiments are run on VSRN and TERN, which are fine. However, I’m not sure why a CLIP-like model was not used instead, since that appears to be one of the stronger open sourced baselines used in many contrastive learning tasks. - The batch size used in experiments seems to be 128 (from the anonymized code). As mentioned by the authors, most SOTA contrastive learning approaches benefit from large batch sizes (for example, CLIP uses a batch size of 32768). Can you show some form of scaling curve as batch size increases, to motivate that LTD will still be beneficial at larger scales? ## Minor issues - “Upshot” (page 4 and 6) is a strange (and I believe non-standard) way to phrase the conclusion. Consider removing this header. - Missing space on page 8: “(Robinson et al., 2021)and” Requested Changes: ## Critical for acceptance: - Show experiments with more competitive contrastive learning setups such as CLIP - Include experiments showing the effect of LTD over different batch sizes. It is important to know that the proposed method will still be helpful at scale, and its effects will not be washed out with larger batch sizes (which many SOTA models use) and larger model sizes. ## Would strengthen the work: - Provide the intermediate steps for derivation of the gradients in eq 2b, 2c, and 2d - Show results with sentence/language encoders other than Sentence-BERT - Include hyperparameter details into the main text. 
The authors mention that it is the same as Chun et al., 2021, but certain details (such as batch size) are not found in Chun et al., 2021 either.

Broader Impact Concerns: There are no major ethical or impact concerns introduced in this paper. This work is about contrastive training to learn features, and the datasets used are fairly standard in the community.

==================================================

Review 2:

Summary: The paper discusses the image-caption retrieval (ICR) task. Commonly the task is solved with contrastive learning. This approach might pick up on only a few "predictive features" (i.e., features that allow discriminative learning between positive and negative pairs) during training. The authors propose to add a decoder head to reconstruct the input caption embedding (namely, latent target decoding). Main ideas: i) reconstruct the latent space of the caption instead of the caption directly. ii) use the reconstruction as a constraint instead of a dual optimization. Evaluation is done on image-to-text and text-to-image tasks on the COCO, COCO CxC, and F30k datasets with two different architectures, VSRN and TERN.

Strengths and Weaknesses:

Strengths:

(+) The idea is interesting. Contrastive models tend to focus on a low number of discriminative features (Robinson et al. 2021). Pushing the representation to keep information relevant for reconstruction should help it avoid relying only on discriminative features.

(+) The paper suggests an important design choice: the reconstruction loss should require similarity in the latent space (of the caption) instead of decoding the tokens directly.

(+) The authors show two variations of the regularization, namely a dual loss (calibrated with a hyperparameter) and a constraint-based version that does not require intensive tuning and can instead optimize a multiplier.

(+) The related work section is comprehensive.

Weaknesses: The evidence that the method does what it intends to do, i.e., reducing predictive feature suppression, needs to be stronger.

(W1) Improvement on retrieval metrics does not prove the cause. The authors did not show any evidence that the reason for the improvement is related to "reducing feature suppression": there is no analysis that shows the problem exists. The paper should be more self-contained; there is no qualitative analysis or metrics related to this specific issue.

(W2) There is no comparison to baseline results. The paper only compares against re-implementations. This limits the ability to get a real sense of how robust the method is. Why not follow the evaluation setups presented by Robinson et al. 2021?

(W3) The resource-constrained setup is not defined or justified. Since the method can be applied to existing architectures, the authors can also fine-tune existing large-scale solutions employing their new loss. Such an analysis is not provided.

(W4) In general, contrastive loss has been found to be an efficient way to acquire zero-shot capabilities by training on large-scale data (Radford et al., 2021). It is interesting to show whether the new loss idea can improve this capability. It is especially interesting since Radford et al., 2021 argued that they could not achieve the same zero-shot capabilities in the reconstruction setup.

Requested Changes: Overall the idea is interesting, but the current results only show that the direction is promising; this is not yet a thorough study. Changes should address the weaknesses. Specifically, the authors should focus on defining the problem - with metrics that assess it and qualitative analysis.
The improvements should show both on the proposed metrics and qualitatively. An interesting analysis could be employing this method on large-scale models (e.g., CLIP) with fine-tuning. Further, it might be a matter of styling, but the writing can be more concise; e.g., these two sentences repeat the same thing: "Implementing LTD as dual loss, as opposed to an optimization constraint, does not reduce predictive feature suppression. Our analyses suggest that optimizing the reconstruction loss only until a specific bound value is met results in better evaluation performance than minimizing the reconstruction loss as a dual loss." Also, there are some typos, like the word `viz' at the end of page 2.

Broader Impact Concerns: It may be interesting to study if predictive features can cause undesirable biases, and if this new suggestion fixes that or makes it worse.

==================================================

Review 3:

Summary: This paper proposes latent target decoding (LTD) to reduce predictive feature suppression for resource-constrained image-caption retrieval (ICR) methods. A decoder is added to the contrastive ICR framework to reconstruct the input caption in a latent space, preventing the image and caption encoders from suppressing predictive features. The LTD objective is implemented as an optimization constraint, so that the reconstruction loss is optimized alongside the contrastive loss. Experiments are conducted on two popular ICR benchmark datasets to show the effectiveness of the proposed method.

Strengths and Weaknesses:

Strengths:
1. The studied topic, i.e., image-caption retrieval, is a popular research task to bridge the cross-modal gap. The motivation is clear, and the idea to investigate resource-constrained image-caption retrieval is novel.
2. The proposed method, i.e., latent target decoding (LTD), is well presented and easy to follow. Generally, LTD is reasonable and makes good sense.
3. The reported results are superior to the compared baselines. Different aspects of the proposed method are analyzed in experiments.

Weaknesses and suggestions:
1. The compared baselines are insufficient. There are many recent image-caption retrieval methods, such as "Negative-Aware Attention Framework for Image-Text Matching", "Is an image worth five sentences? A new look into semantics for image-text matching", and "Cross-Modal Image-Text Retrieval with Semantic Consistency". Introducing and comparing with them in both the related work and the experimental part is necessary.
2. Some detailed retrieval results are expected. Please give both successful and failed cases and analyze the reasons.
3. A visualized example would better show the motivation.
4. There are many notations and equations. A table summarizing the used variables would make it easier for readers to understand.

Requested Changes: Please see the weaknesses above to revise this paper.

Broader Impact Concerns: None.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: While most issues raised during reviewing were addressed, several questions by ApZs were not incorporated or replied to. Minimally, there are some lingering concerns about how to evaluate if the proposed approach improves robustness (either additional evaluations or metrics). zvm2 also requests some additional visualization of examples.
While additional experiments would be an ideal inclusion for ApZs, minimally, the authors should try to adjust claims or indicate places for future work to address the listed weaknesses/concerns. ==================================================
# Graph-Based Multi-ODE Neural Networks For Spatiotemporal Traffic Forecasting

Zibo Liu∗ *zbliu@vt.edu*

Parshin Shojaee *parshinshojaee@vt.edu*

Chandan K. Reddy *reddy@cs.vt.edu*

Department of Computer Science, Virginia Tech, Arlington, VA

∗Corresponding author

Reviewed on OpenReview: *https://openreview.net/forum?id=Oq5XKRVYpQ*

## Abstract

There is a recent surge in the development of spatio-temporal forecasting models in the transportation domain. Long-range traffic forecasting, however, remains a challenging task due to the intricate and extensive spatio-temporal correlations observed in traffic networks. Current works primarily rely on road networks with graph structures and learn representations using graph neural networks (GNNs), but this approach suffers from the over-smoothing problem in deep architectures. To tackle this problem, recent methods introduced the combination of GNNs with residual connections or neural ordinary differential equations (ODE). However, current graph ODE models face two key limitations in feature extraction: (1) they lean towards global temporal patterns, overlooking local patterns that are important for unexpected events; and (2) they lack dynamic semantic edges in their architectural design. In this paper, we propose a novel architecture called Graph-based Multi-ODE Neural Networks (GRAM-ODE) which is designed with multiple connective ODE-GNN modules to learn better representations by capturing different views of complex local and global dynamic spatio-temporal dependencies. We also add some techniques like shared weights and divergence constraints into the intermediate layers of distinct ODE-GNN modules to further improve their communication towards the forecasting task. Our extensive set of experiments conducted on six real-world datasets demonstrate the superior performance of GRAM-ODE compared with state-of-the-art baselines as well as the contribution of different components to the overall performance. The code is available at https://github.com/zbliu98/GRAM-ODE

## 1 Introduction

Spatio-temporal forecasting is one of the main research topics studied in the context of temporally varying spatial data, which is commonly seen in many real-world applications such as traffic networks, climate networks, urban systems, etc. (Jiang & Luo, 2022; Du et al., 2017; Jones, 2017; Longo et al., 2017). In this paper, we investigate the problem of traffic forecasting, in which the goal is to statistically model and identify historical traffic patterns in conjunction with the underlying road networks to predict the future traffic flow. This task is challenging primarily due to the intricate and extensive spatio-temporal dependencies in traffic networks, also known as intra-dependencies (i.e., temporal correlations within one traffic series) and inter-dependencies (i.e., spatial correlations among correlated traffic series). In addition to this, frequent events such as traffic peaks or accidents lead to the formation of non-stationary time-series among different locations, thus posing challenges for the prediction task.

Traffic forecasting is a spatio-temporal prediction task that exploits both the location and event data collected by sensors. Traditional methods such as AutoRegressive Integrated Moving Average (ARIMA) and Support Vector Machine (SVM) algorithms only consider the temporal patterns and ignore the corresponding spatial relations (Jeong et al., 2013; Williams & Hoel, 2003; Van Der Voort et al., 1996).
![1_image_0.png](1_image_0.png)

Figure 1: An overview of the proposed model alongside prior models. The sub-figures depict traffic data over time, with blue nodes representing recording points and lines representing roads. The orange dashed line shows the common global temporal pattern. Our model incorporates local temporal patterns represented by the purple dashed line, node-edge correlations depicted by red arrows, and dynamic edge-edge correlations displayed by green arcs.

Statistical and classical machine learning methods suffer from limitations in learning complex spatio-temporal interactions, and thus deep learning models were later introduced for this task. Early examples of deep learning approaches that capture spatial and temporal relations are FC-LSTM (Shi et al., 2015), where the authors integrate CNN and LSTM modules, and ST-ResNet (Zhang et al., 2017), which uses deep residual CNNs for both spatial and temporal views. However, these CNN-based methods are developed for grid data and cannot account for traffic road networks, which are more akin to graph structures. Hence, researchers in the domain have recently developed Graph Neural Network (GNN) based approaches for effectively learning graph-based representations. Examples of these graph-based models are: STGCN (Yu et al., 2018), which utilizes complete convolutional structures for both temporal and spatial views; and DCRNN (Li et al., 2018b), which combines the diffusion process with bi-directional random walks on directed graphs in order to capture spatio-temporal dependencies. However, such GNN-based methods cannot capture long-range spatio-temporal relations or develop deeper representations due to their limitation of over-smoothing (Lan et al., 2022; Li et al., 2018a; Zhou et al., 2020). Deep GNN over-smoothing occurs when a GNN model with a deeper architecture tends to lose its discriminative ability and learns similar node representations for all nodes, making it challenging to learn richer representations and investigate more complex graph structures. In order to address this problem, researchers have introduced combining GNNs with residual or skip connections (Chen et al., 2020), which are connections that bypass one or more layers and allow information to flow more freely through the network, thereby improving the network's ability to learn more complex temporal relations. The Neural Ordinary Differential Equation (NODE) is a type of deep learning model that uses continuous-time dynamics to learn more powerful representations of time series data. NODEs can also be used to address over-smoothing in GNNs by providing a more flexible and expressive model architecture for capturing temporal relations. Recently, this was studied by the STGODE (Fang et al., 2021) model, which integrates GNNs with NODEs (Chen et al., 2018) to model the dynamics of traffic over time. This combination can derive a continuous GNN (CGNN) with continuous temporal connections toward alleviating the over-smoothing problem. Nevertheless, current CGNN models still encounter the following limitations: (1) Previous approaches in this area tend to overemphasize the global temporal structures while undervaluing local patterns, which are often crucial for predictions in the presence of unexpected traffic events, e.g., a road has a higher chance of getting clogged shortly after a car accident.
This will cause drivers to switch to a faster route, and thus the traffic flow on this road may significantly drop for a short period of time. Therefore, ignoring these local temporal patterns may cause significant problems in the final predictions for roads facing unexpected events. (2) The dynamic correlation of traffic nodes is ignored by existing approaches. In other words, models usually do not consider the dynamic semantic spatial correlations (i.e., dynamic semantic edges) in their architecture. (3) Several baselines use vanilla aggregations, such as average pooling, to combine latent features learned from multiple streams, which ignores the high-dimensional feature interactions. To overcome the above limitations, we propose a novel framework called GRAM-ODE, GRAph-based Multi-ODE Neural Networks. First, in order to balance the consideration of global and local temporal patterns, we design new ODE modules for the local temporal patterns in addition to the existing ODE module for the global temporal patterns (Fang et al., 2021), using different sizes of temporal kernels (represented as purple and orange dashed lines in Fig. 1). Specifically, for local dependencies, we assign ODE functions to the local kernel's output that approximate local patterns, and then concatenate the results. These global and local ODE modules are depicted in more detail in Fig. 3(a) and Fig. 3(b), respectively. Second, we add a new ODE module to our model to consider the dynamic correlation of traffic edges as well as nodes. In other words, at each time step, we find the intermediate dynamic spatial correlations based on the embedding representations of nodes (i.e., dynamic semantic edges), and then construct a new ODE module to approximate patterns of semantic edge-to-edge correlations over time (represented with different edge patterns over time in Fig. 1). More details regarding this dynamic semantic edge ODE module are provided in Fig. 3(c). Third, we also design a nonlinear aggregation paradigm and a multi-head attention mechanism across different ODE modules (represented in Fig. 3(d)) and different streams of traffic graphs (represented in the last layer of Fig. 2), respectively. Through these two operations, we account for high-dimensional correlations and similarities of latent features corresponding to different ODE modules and traffic graphs. By doing so, we let the model select and fuse latent features from different views for the forecasting task. Also, since our proposed GRAM-ODE includes multiple ODE modules, we ensure that these different modules are not isolated and have effective connectivity in the intermediate layers. To do so, we design coupled ODE modules in two ways: (1) adding a similarity constraint between the semantic parts of the local and global modules to ensure these two semantic embeddings do not diverge from one another (represented with red marks in Fig. 3); (2) sharing weights between the global node-based and edge-based ODE modules (represented with green marks in Fig. 3). Therefore, we create a coupled graph-based multi-ODE structure, GRAM-ODE, in which all modules are designed to effectively connect with each other for the downstream application of traffic prediction. The major contributions of this work are summarized below.
- *Developing a new ODE module for capturing local temporal patterns.* Due to the importance of short-term temporal dependencies in traffic prediction in the case of unexpected events, we develop a new ODE module for short-term dependencies in addition to the current ODE block for global temporal patterns.
- *Developing a new ODE module for the dynamic semantic edges.* In addition to ODE blocks for traffic nodes, which model the dynamic node-to-node correlations over time, we also add a new ODE module for the dynamic semantic edges based on node representations (i.e., dynamic spatial correlations) to model the edge-to-edge correlations over time.
- *Building effective connections between different ODE modules (coupled multi-ODE blocks).* To build effective interactions between multiple ODE modules, we consider shared weights for node-based and edge-based ODE modules as well as adaptive similarity constraints for the outputs of local and global temporal ODE modules.
- *Designing a new aggregation module with multi-head attention across features of different streams.* To enhance the model's awareness in the selection and fusion of different ODE modules as well as the streams corresponding to different types of traffic graphs, we design a multi-head attention mechanism at the aggregation layers.

The rest of this paper is organized as follows. Section 2 summarizes existing traffic forecasting works based on machine learning, graph-based, and NODE-based methods. Section 3 provides some of the required preliminaries and explains the problem formulation. Section 4 covers the details of our proposed GRAM-ODE method and its different components. Our experimental evaluation, including both quantitative and qualitative comparison results, ablation studies, and robustness assessments, is reported in Section 5. Finally, Section 6 concludes the paper.

## 2 Related Works

## 2.1 Machine Learning And Deep Learning Methods

Researchers have employed traditional statistical and machine learning methods for the task of traffic forecasting. Some prominent example models are: (1) K-Nearest Neighbor (KNN) (Zhang et al., 2013), which predicts the traffic of a node based on its k nearest neighbors; (2) ARIMA (Van Der Voort et al., 1996; Alghamdi et al., 2019), which integrates the autoregressive model with a moving average operation; and (3) SARIMA (Williams & Hoel, 2003), which adds to ARIMA the specific ability to recognize seasonal patterns. Many of these machine learning models only consider the temporal dependencies and ignore the spatial information. Also, they are usually based on human-designed features and have limitations in learning informative features for the intended task. Later, deep learning methods became popular due to their ability to consider the complex and high-dimensional spatio-temporal dependencies through richer representations. Early examples of deep learning models considering the spatial and temporal relations are FC-LSTM (Shi et al., 2015), ConvLSTM (Xingjian et al., 2015), ST-ResNet (Zhang et al., 2017), ST-LSTM (Liu et al., 2016), and STCNN (He et al., 2019), which are usually based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to account for spatial and temporal information. However, these models are developed for grid-based data and disregard the graph structure of traffic road networks. Due to this limitation, researchers moved into the application of graph-based deep learning models like graph neural networks.
## 2.2 Graph-Based Methods

Graphs provide vital knowledge about the spatial, temporal, and spatio-temporal relationships that can potentially improve the performance of the final model. Recently, researchers employed the graph convolution network (GCN) (Kipf & Welling, 2017) to model the spatio-temporal interactions. DCRNN (Li et al., 2018b) is an early example that utilizes a diffusion GCN with bi-directional random walks to model the spatial correlations, as well as a GRU (Gated Recurrent Unit) based network to model the temporal correlations. The GRU is an RNN-based method which is neither effective nor efficient in modeling long-range temporal dependencies. To address this limitation, works such as STGCN (Yu et al., 2018) and GraphWaveNet (Wu et al., 2021) utilize convolutional operations for both the spatial and temporal domains. After the rise of attention-based models in deep learning, researchers realized that these two models still have some limitations in learning spatial and temporal correlations due to the limited capability of convolutional operations in capturing high-dimensional correlations. Therefore, two attention layers were later employed in ASTGCN (Guo et al., 2019) to capture the spatial and temporal correlations. However, these models are limited in capturing local relations and may lose local information due to the sensitivity of representations to the dilation operation. STSGCN (Song et al., 2020) used localized spatio-temporal subgraphs to enhance prior models in terms of capturing the local correlations. However, since the model formulation does not incorporate global information, this model still had limitations when it came to long-term forecasting and dealing with data that included missing entries or noise. In addition to the spatial graphs from predefined road networks, STFGNN (Li & Zhu, 2021) later introduced the use of Dynamic Time Warping (DTW) for data-driven spatial networks, which helped the model learn representations from both data-driven and domain-driven views. STFGNN also utilized a new fusion module to capture spatial and temporal graphs in parallel. However, due to the over-smoothing problem of deep GNNs (which happens when a GNN with a deeper architecture learns similar node representations for all nodes), current GNN-based models incur some difficulties in learning rich spatio-temporal representations with deep layers.

## 2.3 Neural Differential Equation Methods

Neural Differential Equations (NDEs) (Kidger, 2022) provide a new perspective of optimizing neural networks in a continuous manner using differential equations (DEs). In other words, DEs help to create a continuous depth generation mechanism for neural networks, provide a high-capacity function approximation, and offer strong priors on model space.
There are a few types of NDEs: (1) neural ordinary differential equations (NODEs), such as ODE-based RNN models (Habiba & Pearlmutter, 2020; Lechner & Hasani, 2022) (e.g., ODE-RNN, ODE-GRU, ODE-LSTM) and latent ODE models (Rubanova et al., 2019); (2) neural controlled differential equations (NCDEs) (Choi et al., 2022; Kidger et al., 2020; Morrill et al., 2021), which are usually used to learn functions for the analysis of irregular time-series data; and (3) neural stochastic differential equations (NSDEs) (Kidger et al., 2021b;a), which are usually employed for generative models that can represent complex stochastic dynamics. More recently, NODEs have been employed for the traffic forecasting task (Fang et al., 2021; Su et al., 2022; Pu et al., 2022). For example, STGODE (Fang et al., 2021) was proposed to address the aforementioned over-smoothing problem of GNN-based models. STGODE provides a new perspective of continuous depth generation by employing neural graph ODEs to capture the spatio-temporal dependencies. However, complex spatio-temporal correlations such as local temporal dependencies and dynamic node-edge communications are not captured by this model. In this study, we employ the idea of the neural graph ODE for multiple coupled ODE-GNN modules which take into account the local temporal patterns and dynamic spatio-temporal dependencies, and as a result, can provide a better function approximation for forecasting in the presence of complex spatio-temporal correlations.

| Notation | Description |
|---|---|
| G | Traffic graph |
| V | Set of nodes |
| E | Set of edges |
| A^C | Connection adjacency matrix |
| A^SE | DTW adjacency matrix |
| D | Degree matrix |
| Â | Normalized adjacency matrix |
| fe / fg / fl | Message generation for edge/global/local neural ODEs |
| H(t) | ODE function (integrated from 0 to t) initialized with H(0) |
| He(t) / Hg(t) / Hl(t) | Edge/global/local ODE function features |
| X | Historical time series data |
| Y | Future time series data |
| Ŷ | Predicted future time series |
| L | Length of historical time series |
| L′ | Length of target time series |
| N | Number of nodes, \|V\| |
| C | Number of channels |
| H | Input of multi ODE-GNN |
| A | Updated adjacency matrix with shared spatial weights |
| Te / Tg / Tl | Updated edge/global/local temporal shared weights |
| EM | Edge message |
| GM | Global message |
| LM | Local message |
| HAi(t) | Features of a single embedded time step in the local ODE-GNN |
| L′′ | Length of the embedded temporal window in the local ODE-GNN |
| K(i) | Local ODE-GNN's output for the i-th embedded temporal step |
| e | Learnable threshold parameter in message filtering |
| pn | Embeddings of the n-th ODE module in the aggregation layer |
| H′ | Output of the multi ODE-GNN's aggregation layer |
| Wr / br | Weight matrix/bias vector in the update layer |
| H^l_tcn | Hidden states of the l-th TCN layer |
| C^l | Embedding size of the l-th TCN layer |
| W^l | Convolutional kernel of the l-th TCN layer |
| d^l | Exponential dilation rate of the l-th TCN layer |
| Wq / Wk / Wv | The attention query/key/value weight matrices |
| bq / bk / bv | The attention query/key/value bias vectors |
| h | Number of attention heads |
| X′_i | Attention output of the i-th head |
| δ | Error threshold in the Huber loss |

Table 1: Notations used in this paper.
## 3 Problem Formulation

This section explains the required preliminaries and definitions for GRAM-ODE, and then defines the main problem statement. All notations used in this paper are summarized in Table 1.

## 3.1 Definitions

Definition 1: *Traffic Graphs.* We consider the traffic network as a graph G = (V, E, A), where V is the set of nodes, E is the set of edges, and A ∈ R^{N×N} is the adjacency matrix with |V| = N. In this paper, we use two types of graphs: the connection map graph and the DTW (dynamic time warping) graph. In the connection map graph, the adjacency matrix, denoted by A^C, represents the roads and actual connectivity among traffic nodes with binary values:

$$A_{i,j}^{C}={\begin{cases}1,&{\mathrm{if~}}v_{i}{\mathrm{~and~}}v_{j}{\mathrm{~are~neighbors}}\\0,&{\mathrm{otherwise}}\end{cases}}\tag{1}$$

where vi and vj refer to traffic nodes i and j in the graph. In the DTW graph, the adjacency matrix, denoted by A^SE, is generated from the Dynamic Time Warping (DTW) algorithm (Berndt & Clifford, 1994), which calculates the distance between the two time-series corresponding to a pair of nodes:

$$A_{i,j}^{SE}={\begin{cases}1,&DTW\left(X^{i},X^{j}\right)<\epsilon\\0,&{\mathrm{otherwise}}\end{cases}}\tag{2}$$

where X^i and X^j refer to the time-series data of nodes i and j, respectively, and ϵ identifies the sparsity ratio of the adjacency matrix. Notably, DTW is more effective than other point-wise similarity metrics (such as the Euclidean distance) due to its sensitivity to shape and pattern similarities.

Definition 2: *Graph Normalization.* The adjacency matrix A ∈ {A^C, A^SE} ⊂ R^{N×N} is normalized with D^{−1/2} A D^{−1/2}, where D is the degree matrix of A. As shown in Eq. (3), the self-loop identity matrix I is incorporated in the normalization to avoid negative eigenvalues:

$${\hat{A}}=\alpha\left(I+D^{-{\frac{1}{2}}}A D^{-{\frac{1}{2}}}\right)\tag{3}$$

where Â is the normalized adjacency matrix, and α ∈ (0, 1) confines the eigenvalue range to [0, α].

Definition 3: *Graph-based Neural ODE.* The standard formulation of a GNN-based continuous-time ODE function is defined in Eq. (4). It takes the initial value H(0), temporal integrals from 0 to a given time t, the traffic graph G, and the neural ODE's network parameters θ:

$${\mathcal{H}}(t)={\mathcal{H}}(0)+\int_{0}^{t}f({\mathcal{H}}(s),s;{\mathcal{G}},\theta)\,\mathrm{d}s\tag{4}$$

where f is the process that generates the semantic message of the neural ODE, parameterized by θ, to model the hidden dynamics of graph G. Since the structure of our proposed method consists of multiple ODE-GNN blocks, we have different variants of the graph neural ODE in Eq. (4), which are explained in more detail in Section 4.

Definition 4: *Tensor n-mode multiplication.* We use a subscript to identify tensor-matrix multiplication along a specific dimension, as shown below:

$$({\mathcal{B}}\times_{2}{\mathcal{C}})_{ilk}=\sum_{j=1}^{n_{2}}{\mathcal{B}}_{ijk}{\mathcal{C}}_{jl}\tag{5}$$

where B ∈ R^{N1×N2×N3} and C ∈ R^{N2×N′2}, so B ×2 C ∈ R^{N1×N′2×N3}. The n-mode tensor-matrix multiplication is denoted as ×n with the n-th subscript.
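To make Definitions 1, 2, and 4 concrete, the following is a minimal NumPy sketch of the graph construction and normalization steps, with the n-mode product expressed as an einsum. This is an illustrative implementation under our own naming, not the authors' released code; the quadratic DTW recursion is the textbook variant:

```python
import numpy as np

def dtw_distance(x, y):
    """Classical DTW between two 1-D series (O(len(x) * len(y)) recursion)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_adjacency(X, eps):
    """Eq. (2): X is (N, L) with one series per node; threshold eps gives A^SE."""
    N = X.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            A[i, j] = 1.0 if dtw_distance(X[i], X[j]) < eps else 0.0
    return A

def normalize(A, alpha=0.8):
    """Eq. (3): alpha * (I + D^{-1/2} A D^{-1/2}), guarding zero-degree nodes."""
    deg = A.sum(1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return alpha * (np.eye(A.shape[0]) + d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])

# Eq. (5), the 2-mode tensor-matrix product, as an einsum:
B_t, C_t = np.random.rand(4, 5, 6), np.random.rand(5, 7)
out = np.einsum("ijk,jl->ilk", B_t, C_t)   # (B x_2 C) has shape (4, 7, 6)
```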
## 3.2 Problem Statement

The spatio-temporal forecasting problem is described as learning a mapping function F that transforms the current historical spatio-temporal data X = (Xt−L+1, Xt−L+2, ..., Xt) into the future spatio-temporal data Y = (Xt+1, Xt+2, ..., Xt+L′), where L and L′ denote the lengths of the historical and target time-series, respectively. In the traffic forecasting problem, we have a historical tensor X ∈ R^{B×N×L×C} and a traffic graph G, where B is the batch size, N is the number of nodes, L is the input temporal length, and C is the number of input features (e.g., traffic speed, flow, density). The goal is to find Ŷ = F(X, f, G), in which F is the overall forecasting network and f corresponds to the different graph-based neural ODE processes.

## 4 GRAM-ODE

## 4.1 Overview

The overview of our proposed GRAM-ODE is shown in Fig. 2, which is composed of two streams of operations: (i) the DTW-based graph (top row), constructed from semantic similarities, and (ii) the connection map graph (bottom row), constructed from geographical spatial connectivity. These two types of adjacency matrices are fed into the model separately to capture spatial correlations from both data-driven and geographical domain-driven views. They are later integrated with a multi-head attention mechanism in the final layer. As shown in Fig. 2, we have three parallel channels for each graph, each of which has two consecutive GRAM-ODE layers with a multi ODE-GNN block sandwiched between two blocks of temporal dilated convolution (TCN).

![6_image_0.png](6_image_0.png)

Figure 2: A graphical overview of the proposed GRAM-ODE framework. The DTW-based graph is obtained from the data at the blue triangle mark. The connection map is obtained from the distances among recording nodes at the blue diamond mark. These two traffic graphs are separately imported into the model, which consists of three parallel channels of two consecutive GRAM-ODE layers. Each layer contains a multi ODE-GNN block (explained in Fig. 3) sandwiched between two temporal convolution networks (TCNs). Outputs of the two streams are then aggregated with an Attention Module (AM) in the final layer.

The final features of each graph from different channels are then concatenated and imported into the attention module (AM), which is designed to effectively fuse these two separate sets of operations by taking into account all the high-dimensional relations relevant to the intended forecasting task. We provide further details about each of these components in the subsections below.
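Before the detailed subsections, a compact structural sketch of this two-stream, three-channel pipeline may help. Everything below is a stand-in: the multi ODE-GNN block is replaced by a single graph convolution, the TCNs by plain `Conv2d` layers, and the attention module by `nn.MultiheadAttention`, so only the wiring, not the math, reflects GRAM-ODE; all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class GRAMODELayer(nn.Module):
    """One GRAM-ODE layer: TCN -> (stubbed) multi ODE-GNN -> TCN."""
    def __init__(self, c_in, c_hid, A_hat):
        super().__init__()
        self.register_buffer("A_hat", A_hat)               # (N, N) normalized adjacency
        self.tcn_in = nn.Conv2d(c_in, c_hid, (1, 3), padding=(0, 1))
        self.theta = nn.Linear(c_hid, c_hid)
        self.tcn_out = nn.Conv2d(c_hid, c_hid, (1, 3), padding=(0, 1))

    def forward(self, x):                                  # x: (B, C, N, L)
        h = torch.relu(self.tcn_in(x))
        h = torch.einsum("nm,bcml->bcnl", self.A_hat, h)   # stand-in for the ODE-GNN block
        h = self.theta(h.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return torch.relu(self.tcn_out(h))

class GRAMODE(nn.Module):
    """Two graph streams x three channels of two GRAM-ODE layers, fused by attention."""
    def __init__(self, c_in, c_hid, L_in, L_out, A_se, A_c, n_channels=3):
        super().__init__()
        self.streams = nn.ModuleList(
            nn.ModuleList(nn.Sequential(GRAMODELayer(c_in, c_hid, A),
                                        GRAMODELayer(c_hid, c_hid, A))
                          for _ in range(n_channels))
            for A in (A_se, A_c))
        d = 2 * n_channels * c_hid                         # concatenated feature width
        self.fuse = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d * L_in, L_out)

    def forward(self, x):                                  # x: (B, C, N, L)
        feats = [chan(x) for stream in self.streams for chan in stream]
        h = torch.cat(feats, dim=1)                        # (B, d, N, L)
        B, d, N, L = h.shape
        h = h.permute(0, 2, 3, 1).reshape(B * N, L, d)     # time tokens per node
        h, _ = self.fuse(h, h, h)                          # attention-based fusion (AM stand-in)
        return self.head(h.reshape(B, N, L * d))           # (B, N, L_out) flow forecast

N, L = 20, 12
A = torch.rand(N, N).round()                               # toy adjacency matrices
model = GRAMODE(c_in=3, c_hid=8, L_in=L, L_out=12, A_se=A, A_c=A)
print(model(torch.randn(2, 3, N, L)).shape)                # torch.Size([2, 20, 12])
```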
## 4.2 GRAM-ODE Layer

Each GRAM-ODE layer consists of a multi ODE-GNN block placed between two TCNs. Fig. 3 illustrates the details of our proposed multi ODE-GNN block, in which we combine multiple ODE modules to take into account all the essential temporal and spatial patterns from different viewpoints and extract informative features from their fusion. In the multi ODE-GNN block, we design three types of message passing operations, a message filtering constraint, as well as a new aggregation module to consider high-dimensional correlations.

## 4.2.1 Message Passing Layer

Current models primarily focus on node-based global temporal correlations and overlook short-term temporal patterns as well as dynamic edge-based correlations in their formulation. Hence, we design three types of message passing processes to formulate ODE modules corresponding to global, local, and edge-based temporal dependencies.

**Shared Weights:** Although node-based and edge-based features have distinct semantic meanings, they can still exhibit common spatial and temporal patterns. In order to consider these common patterns in our problem formulation, we design a shared weight matrix between the node-based and edge-based global temporal ODE modules (shown in Fig. 3(a) and 3(c)). We define two weight matrices for these shared spatial and temporal weights. The shared spatial weight, denoted by M, is added to the normalized adjacency matrix as Â = Â + M, where M is initialized randomly from the normal distribution. The shared temporal weights, denoted by Ws1, Ws2, are added to the node-based and edge-based global temporal modules, as given in Eqs. (6) and (7). To consider the local temporal patterns in the model formulation, we apply different sizes of temporal kernels Wl1, Wl2 in the local temporal module, as shown in Eq. (8).

![7_image_0.png](7_image_0.png)

Figure 3: An overview of the multi ODE-GNN block, which consists of three ODE modules for (a) global, (b) local, and (c) edge-based temporal dependencies, as well as (d) a new aggregation layer. The inputs and outputs of the multi ODE-GNN block are displayed with the H and H′ blocks on the left and right sides of the diagram. The shared weights among different ODE modules are marked in green, and a constraint that limits the divergence of embeddings is marked in red. AM denotes the Attention Module defined in Section 4.3.

$${\mathcal{T}}_{e}=({\mathcal{H}}_{e}(t)\times_{4}W_{s1})^{T}({\mathcal{H}}_{e}(t)\times_{4}W_{s2})\tag{6}$$

$${\mathcal{T}}_{g}=({\mathcal{H}}_{g}^{T}(t)\times_{3}W_{s1})({\mathcal{H}}_{g}^{T}(t)\times_{3}W_{s2})^{T}\tag{7}$$

$${\mathcal{T}}_{l}=({\mathcal{H}}_{l}^{T}(t)\times_{3}W_{l1})({\mathcal{H}}_{l}^{T}(t)\times_{3}W_{l2})^{T}\tag{8}$$

where Ws1, Ws2 ∈ R^{L×L} represent the global temporal shared weights, with L referring to the global temporal length; Wl1, Wl2 ∈ R^{1×1} represent the local temporal weights for each time step in the embedded temporal space; and He(t), Hg(t), and Hl(t) represent the features of the edge, global, and local ODE modules, respectively.

**Global and Local Message Passing:** Fig. 3(a) represents the global message passing, whose goal is to model long-term node-based temporal patterns in a continuous manner. Eqs. (9) and (10) represent the global message processing operations and return the global message GM. Additionally, Fig. 3(b) represents the local message passing operations on local temporal data to emphasize the importance of short-term temporal patterns. In this module, we first use a self-attention mechanism with dense projection layers to create the input in a lower-dimensional embedded temporal space, considering all possible high-dimensional correlations (similar to the attention module (AM) explained in more detail in Section 4.3). Then, the features of each time stamp in the embedded temporal inputs are separately imported into the local ODE functions, encouraging the network to learn implicit local dependencies. These features at each embedded time step are denoted by HAi ∈ R^{B×N×1×C}, where i ∈ {0, 1, 2, 3, ..., L′′ − 1} and L′′ is the size of the embedded temporal window. As shown in Eqs. (13) and (15), at each embedded time step, HAi is used as the initial value to formulate the temporal dependencies of the future t = L/L′′ time stamps, which are returned as K(i). Then, the outputs are concatenated to form the final local message LM = K(0)||K(1)|| ... ||K(L′′ − 1).
$$\mathbf{GM}=GlobalMessagePassing({\mathcal{H}}_{g}(0),{\hat{\mathcal{A}}},{\mathcal{T}}_{g},{\mathcal{W}})={\mathcal{H}}_{g}(0)+\int_{0}^{t}f_{g}({\mathcal{H}}_{g}(\tau),{\hat{\mathcal{A}}},{\mathcal{T}}_{g},{\mathcal{W}})\,\mathrm{d}\tau\tag{9}$$

$$f_{g}={\mathcal{H}}_{g}(t)\times_{2}({\hat{\mathcal{A}}}-I)+((S({\mathcal{T}}_{g})-I){\mathcal{H}}_{g}^{T}(t))^{T}+{\mathcal{H}}_{g}(t)\times_{4}({\mathcal{W}}-I)\tag{10}$$

$$\mathbf{LM}=LocalMessagePassing(ATT,{\mathcal{H}}_{l}(0),{\hat{\mathcal{A}}},{\mathcal{T}}_{l},{\mathcal{W}})=K(0)||K(1)||\ldots||K(L^{\prime\prime}-1)\tag{11}$$

$${\mathcal{H}}_{A_{0}},{\mathcal{H}}_{A_{1}},\ldots,{\mathcal{H}}_{A_{L^{\prime\prime}-1}}=ATT({\mathcal{H}}_{l}(0))\tag{12}$$

$$K(i)=F_{l}({\mathcal{H}}_{A_{i}},t_{0})||F_{l}({\mathcal{H}}_{A_{i}},t_{1})||\ldots||F_{l}({\mathcal{H}}_{A_{i}},t_{L/L^{\prime\prime}-1})\tag{13}$$

$$F_{l}({\mathcal{H}}_{A_{i}},t_{j})={\mathcal{H}}_{A_{i}}+\int_{0}^{t_{j}}f_{l}({\mathcal{H}}_{A_{i}}(\tau),{\hat{\mathcal{A}}},{\mathcal{T}}_{l},{\mathcal{W}})\,\mathrm{d}\tau\tag{14}$$

$$f_{l}={\mathcal{H}}_{A_{i}}(t)\times_{2}({\hat{\mathcal{A}}}-I)+((S({\mathcal{T}}_{l})-I){\mathcal{H}}_{A_{i}}^{T}(t))^{T}+{\mathcal{H}}_{A_{i}}(t)\times_{4}({\mathcal{W}}-I)\tag{15}$$

**Edge Message Passing:** Fig. 3(c) depicts the edge message passing procedures, which take into account the dynamic edge-based temporal patterns in addition to the node-based correlations. We first create the initial edge features He(0) ∈ R^{B×N×N×L} from the node representation H ∈ R^{B×N×L×C} by taking an average over the C channels and copying the N dimension. Eqs. (16) and (17) represent the edge message passing operations, which return the edge message EM:

$$\mathbf{EM}=EdgeMessagePassing({\mathcal{H}}_{e}(0),{\hat{\mathcal{A}}},{\mathcal{T}}_{e})={\mathcal{H}}_{e}(0)+\int_{0}^{t}f_{e}({\mathcal{H}}_{e}(\tau),{\hat{\mathcal{A}}},{\mathcal{T}}_{e})\,\mathrm{d}\tau\tag{16}$$

$$f_{e}={\mathcal{H}}_{e}(t)\times_{2}({\hat{\mathcal{A}}}-I)+{\mathcal{H}}_{e}(t)(S({\mathcal{T}}_{e})-I)\tag{17}$$

In Eqs. (9)-(17), Hg, Hl, and He represent the features of the global, local, and edge ODE modules, respectively; Tg, Tl, and Te represent the global, local, and edge temporal weights formulated in Eqs. (6)-(8); Â represents the updated adjacency matrix based on shared spatial weights; W ∈ R^{C×C} represents the weight matrix for modeling the interaction of different channels; S(·) is the sigmoid operation; and I is the identity matrix.

## 4.2.2 Message Filtering

To prevent the semantic parts of the local and global modules from diverging from one another, we add a message filtering constraint into the problem formulation (shown with a red mark in Fig. 3). Eq. (18) shows that this message filtering constraint works like a two-way clipping operation, in which we clip the local semantic embeddings if they are significantly larger or smaller than the global semantic embeddings:

$$LM={\begin{cases}GM+e,&{\mathrm{if~}}LM>GM+e\\GM-e,&{\mathrm{if~}}LM<GM-e\\LM,&{\mathrm{otherwise}}\end{cases}}\tag{18}$$

where LM and GM represent the local and global semantic embeddings, and e represents a learnable noise parameter initialized from the normal distribution.
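A small sketch of the global message passing of Eqs. (9)-(10) and the filtering of Eq. (18) is given below. It uses a fixed-step Euler integrator in place of the adaptive ODE solver typically used with NODEs, omits the shared spatial weight M, and treats `e` as a plain scalar rather than a learnable parameter; all names are illustrative:

```python
import torch

def f_g(H, A_hat, T_g, W):
    """Eq. (10): time derivative of the global node features H (B, N, L, C),
    with A_hat (N, N), T_g (L, L), W (C, C)."""
    N, L, C = A_hat.shape[0], T_g.shape[0], W.shape[0]
    spatial  = torch.einsum("bnlc,nm->bmlc", H, A_hat - torch.eye(N))
    temporal = torch.einsum("lk,bnkc->bnlc", torch.sigmoid(T_g) - torch.eye(L), H)
    channel  = torch.einsum("bnlc,cd->bnld", H, W - torch.eye(C))
    return spatial + temporal + channel

def global_message(H0, A_hat, T_g, W, t=1.0, steps=8):
    """Eq. (9), with fixed-step Euler integration standing in for the ODE solver."""
    H, dt = H0, t / steps
    for _ in range(steps):
        H = H + dt * f_g(H, A_hat, T_g, W)
    return H

def message_filter(LM, GM, e):
    """Eq. (18): clip the local message to lie within e of the global message."""
    return torch.clamp(LM, min=GM - e, max=GM + e)

B, N, L, C = 2, 5, 12, 4
H0 = torch.randn(B, N, L, C)
GM = global_message(H0, torch.rand(N, N), torch.randn(L, L), torch.randn(C, C))
LM = message_filter(torch.randn(B, N, L, C), GM, e=0.5)
```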
## 4.2.3 Aggregation Layer

Fig. 3(d) represents our aggregation paradigm designed for combining the output features of the three ODE modules. Instead of employing a single add or pooling operation in the aggregation layer, we use a nonlinear matrix multiplication operation to account for key correlations in higher dimensions. Eq. (19) represents our designed aggregation, in which the output of each ODE module is multiplied by the sum of the other modules normalized by a softmax operation. This gated aggregation has several benefits: (i) it enables the selection of features that are more crucial for forecasting; (ii) it allows for non-linear aggregation; and (iii) it contains a linear gradient path, reducing the risk of vanishing gradients.

$$H^{\prime}=Aggregation(GM,LM,EM)={\frac{1}{2K}}\sum_{m}^{K}\sum_{n\neq m}^{K}p_{m}\odot softmax(p_{n})\tag{19}$$

where H′ represents the output of the multi ODE-GNN aggregation layer; LM, GM, and EM represent the local, global, and edge-based semantic embeddings, respectively; K = 3 refers to the three ODE modules (p0 = GM, p1 = LM, p2 = EM); and the ⊙ operation represents point-wise multiplication.

## 4.2.4 Update Layer

After obtaining the aggregated information, we add a residual connection around the multi ODE-GNN block to update the node information of the GNN. To achieve this, the input of the multi ODE-GNN block is remapped to the aggregated output using a fully connected layer, as shown below:

$$H^{\prime\prime}=Update(H^{\prime},H)=\alpha*Sigmoid(W_{r}H+b_{r})+\beta*H^{\prime}\tag{20}$$

where H and H′ represent the inputs and outputs of the multi ODE-GNN block, respectively; Wr and br denote the weight matrix and bias vector of the residual fully connected layer; and α and β are hyperparameters identifying the combination weights in the residual connection.

## 4.2.5 Temporal Convolutional Network

Since the problem considered in this paper is spatio-temporal in nature, we need to consider the temporal correlations of nodes in addition to their spatial relations. To model the temporal correlations, we utilize temporal convolutional networks (TCNs), which usually offer more efficient training, more stable gradients, and faster response to dynamic changes compared to recurrent neural networks (RNNs). The TCN architecture adopts dilated convolutional operations with 1-D kernels along the time axis:

$$H_{tcn}^{l}={\begin{cases}{\mathcal{X}},&l=0\\Sigmoid\left(W^{l}\odot_{d^{l}}H_{tcn}^{l-1}\right),&l=1,2,\ldots,L\end{cases}}\tag{21}$$

where X ∈ R^{B×N×L×C} is the input of the TCN; H^l_tcn ∈ R^{B×N×L×C^l} is the latent output with C^l as the embedding size of the l-th TCN layer; W^l represents the convolutional kernel of the l-th layer, with ⊙_{d^l} denoting the dilated convolution; and d^l is the exponential dilation rate used to expand the receptive field during the convolution. We usually take d^l = 2^{l−1} in the temporal dilated convolution.
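A minimal sketch of such a dilated temporal convolution stack follows. It mirrors the structure of Eq. (21) (sigmoid activations, exponentially growing dilation) but uses causal left-padding to keep the temporal length fixed, which is our own simplification:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedTCN(nn.Module):
    """Stack of 1-D dilated convolutions along the time axis, as in Eq. (21),
    with dilation d_l = 2^(l-1) per layer."""
    def __init__(self, channels, n_layers=3, kernel=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, (1, kernel), dilation=(1, 2 ** l))
            for l in range(n_layers))

    def forward(self, x):                      # x: (B, C, N, L)
        for conv in self.convs:
            pad = conv.dilation[1] * (conv.kernel_size[1] - 1)
            x = torch.sigmoid(conv(F.pad(x, (pad, 0))))   # causal left-padding
        return x

x = torch.randn(2, 64, 170, 12)                # (batch, channels, nodes, time)
print(DilatedTCN(64)(x).shape)                 # torch.Size([2, 64, 170, 12])
```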
Algorithm 1 provides the pseudocode for the GRAM-ODE layer, sequentially passing the input through a TCN, the multi ODE-GNN block, and another TCN. The previously explained steps of the multi ODE-GNN block, including initialization, message passing, message filtering, aggregation, and update, are summarized in lines 4 - 26.

Algorithm 1: GRAM-ODE Layer
Input: Node Information X, Traffic Graph G
Output: Updated Node Information X̂
1 # TCN Block
2 H ← TCN(X)
3 # Find Initial Values
4 Hg(0) ← H
5 HAi(0) ← ATT(H)
6 Ĥ ← Mean(H) # average over the channel dimension
7 He(0) ← Repeat(Ĥ) # repeat N times
8 # Edge Message Passing
9 EM ← He(t) ×2 (Â − I) + He(t)(S(Te) − I)
10 # Global Message Passing
11 GM ← Hg(t) ×2 (Â − I) + ((S(Tg) − I)Hg^T(t))^T + Hg(t) ×4 (W − I)
12 # Local Message Passing
13 LM ← HAi(t) ×2 (Â − I) + ((S(Tl) − I)HAi^T(t))^T + HAi(t) ×4 (W − I)
14 # Message Filtering
15 if LM > GM + e then
16   LM ← GM + e
17 else if LM < GM − e then
18   LM ← GM − e
19 else
20   LM ← LM
21 end
22 # Aggregate Multi ODE-GNNs
23 p0, p1, p2 ← GM, LM, EM
24 H′ ← (1/2K) Σ_m Σ_{n≠m} pm ⊙ softmax(pn)
25 # Update
26 H′′ ← α ∗ Sigmoid(WrH + br) + β ∗ H′
27 # TCN Block
28 X̂ ← TCN(H′′)

## 4.3 Attention Module

We use the attention mechanism in the last layer of GRAM-ODE to effectively aggregate the final learned embeddings of the two traffic graphs (i.e., the DTW-based graph and the connection map graph) in a way that is better aligned with the forecasting objective. The attention module (AM) is designed to replace the previous fully connected layers while capturing the correlations of high-dimensional information. In this module, we first concatenate the embeddings of the two graphs and then compute the attention scores between them. Eqs. (22) and (23) describe this attention operation:

$$Q=XW_{q}+b_{q},\ K=XW_{k}+b_{k},\ V=XW_{v}+b_{v}\tag{22}$$

$$X_{i}^{\prime}=softmax\left({\sqrt{\frac{h}{C^{\prime}}}}*(Q_{i}^{T}K_{i})\right)V_{i}\tag{23}$$

where Wq(bq), Wk(bk), and Wv(bv) represent the attention query, key, and value weight matrices (bias vectors), respectively; X is the input of the attention module; h is the number of heads; and √(h/C′) is the normalization factor with C′ representing the embedding dimension. Hence, Q:{Q1, Q2, ..., Qh}, K:{K1, K2, ..., Kh}, V:{V1, V2, ..., Vh} denote the query, key, and value sets for the multiple attention heads. Finally, the output of the attention module X′_i can be mapped to the feature space of the original data for prediction with a linear dense layer.

## 4.4 Loss Function

In regression problem settings (such as traffic flow forecasting), the Huber loss between the real and predicted values is widely used in the literature. It combines the merits of the L1 and L2 loss functions. The Huber loss is a piece-wise function consisting of two parts: (1) a squared term with a small and smooth gradient when the difference between the true and predicted values is small (i.e., less than a threshold value δ), and (2) a restricted-gradient term when the true and predicted values are far from each other. Eq. (24) gives the standard form of the Huber loss:

$$L({\hat{\mathcal{Y}}},{\mathcal{Y}})={\begin{cases}{\frac{1}{2}}({\hat{\mathcal{Y}}}-{\mathcal{Y}})^{2},&|{\hat{\mathcal{Y}}}-{\mathcal{Y}}|\leq\delta\\\delta|{\hat{\mathcal{Y}}}-{\mathcal{Y}}|-{\frac{1}{2}}\delta^{2},&{\mathrm{otherwise}}\end{cases}}\tag{24}$$

where δ is a hyperparameter set to the intended threshold, Y is the true future spatio-temporal data, and Ŷ is the predicted future data.
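Eq. (24) is straightforward to implement; a minimal sketch, which matches the built-in `torch.nn.HuberLoss` up to the reduction (the tensor shapes below are illustrative):

```python
import torch

def huber(y_hat, y, delta=1.0):
    """Eq. (24): squared error below the threshold delta, linear above it."""
    err = (y_hat - y).abs()
    quad = 0.5 * err ** 2
    lin = delta * err - 0.5 * delta ** 2
    return torch.where(err <= delta, quad, lin).mean()

y_hat, y = torch.randn(8, 170, 12), torch.randn(8, 170, 12)
print(torch.allclose(huber(y_hat, y), torch.nn.HuberLoss(delta=1.0)(y_hat, y)))  # True
```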
Algorithm 2 provides the complete pseudocode of GRAM-ODE training. In each optimization step of this algorithm, the outputs of all GRAM-ODE layers across the parallel channels are concatenated, and the concatenated outputs of the two graph types are then fed into the attention module for better aggregation and prediction.

Algorithm 2: GRAM-ODE training
Input: Historical Data X, Future Data Y, Traffic Graph G
Output: Forecast Model with parameter θ
![10_image_1.png](10_image_1.png)

## 5 Our Experiments

We conduct experiments on six real-world datasets and seven baseline models to evaluate the effectiveness of our proposed GRAM-ODE and its components for the traffic forecasting task.

## 5.1 Datasets

We show the performance results of our model on six widely used public benchmark traffic datasets¹: PEMS03, PEMS04, PEMS07, and PEMS08 released by (Song et al., 2020), as well as PEMS-BAY (Li et al., 2017) and METR-LA (Jagadish et al., 2014). The first four datasets (PEMS03, PEMS04, PEMS07, PEMS08) are collected based on the California districts they represent (District 3, District 4, District 7, and District 8, respectively). PEMS-BAY covers the San Francisco Bay Area, and METR-LA focuses on the traffic data of the Los Angeles metropolitan area. All these datasets collect three features (flow, occupation, and speed) at each location point over a period of time (with 5-minute time intervals). The spatial connection network for each dataset is constructed using the existing road network. The details of the data statistics are shown in Table 2. To use these datasets in experiments, we pre-process the features by z-score normalization.

¹Datasets are downloaded from the STSGCN github repository https://github.com/Davidham3/STSGCN/

| Data | PEMS03 | PEMS04 | PEMS07 | PEMS08 | PEMS-BAY | METR-LA |
|---|---|---|---|---|---|---|
| Location | CA, USA | | | | | |
| Time Span | 9/1/2018 - 11/30/2018 | 1/1/2018 - 2/28/2018 | 5/1/2017 - 8/31/2017 | 7/1/2016 - 8/31/2016 | 1/1/2017 - 5/31/2017 | 3/1/2012 - 6/30/2012 |
| Time Interval | 5 min | | | | | |
| Sensors | 358 | 307 | 883 | 170 | 325 | 207 |
| Edges | 547 | 340 | 866 | 295 | 2,369 | 1,515 |
| Time Steps | 26,208 | 16,992 | 28,224 | 17,856 | 52,116 | 34,272 |

Table 2: Basic statistics of the datasets used in our experiments.

## 5.2 Evaluation Metrics

We use the Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE) metrics to evaluate the spatio-temporal forecasting. These metrics are defined as follows:

$$MAE={\frac{1}{n}}\sum_{i=1}^{n}|y_{i}-{\hat{y}}_{i}|,\quad MAPE=\Big({\frac{1}{n}}\sum_{i=1}^{n}\Big|{\frac{y_{i}-{\hat{y}}_{i}}{y_{i}}}\Big|\Big)*100\%,\quad RMSE={\sqrt{{\frac{1}{n}}\sum_{i=1}^{n}\left(y_{i}-{\hat{y}}_{i}\right)^{2}}}.$$
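These three metrics are simple to compute; a minimal NumPy reference (in practice, zero-valued ground-truth entries are typically masked out before computing MAPE):

```python
import numpy as np

def mae(y, y_hat):  return np.mean(np.abs(y - y_hat))
def mape(y, y_hat): return np.mean(np.abs((y - y_hat) / y)) * 100.0
def rmse(y, y_hat): return np.sqrt(np.mean((y - y_hat) ** 2))

y, y_hat = np.array([10.0, 20.0, 30.0]), np.array([12.0, 18.0, 33.0])
print(mae(y, y_hat), mape(y, y_hat), rmse(y, y_hat))  # ~2.33, ~13.33, ~2.38
```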
## 5.3 Baselines

We compare our proposed GRAM-ODE with the following baselines.

- **ARIMA** (Box & Pierce, 1970): The Auto-Regressive Integrated Moving Average is one of the most well-known statistical models for time-series analysis.
- **DCRNN** (Li et al., 2018b): The Diffusion Convolutional Recurrent Neural Network utilizes diffusion graph convolutional networks with bidirectional random walks on directed graphs, and a seq2seq gated recurrent unit (GRU), to capture spatial and temporal dependencies, respectively.
- **STGCN** (Yan et al., 2018): The Spatio-Temporal Graph Convolutional Network combines graph-structure convolutions with 1D temporal convolutional kernels to capture spatial dependencies and temporal correlations, respectively.
- **GraphWaveNet** (Wu et al., 2021): GraphWaveNet integrates adaptive graph convolution with 1D dilated causal convolution to capture spatio-temporal dependencies.
- **STSGCN** (Song et al., 2020): Spatio-Temporal Synchronous Graph Convolutional Networks decompose the problem into multiple localized spatio-temporal subgraphs, assisting the network in better capturing spatio-temporal local correlations and accounting for the various heterogeneities in spatio-temporal data.
- **STFGNN** (Li & Zhu, 2021): The Spatio-Temporal Fusion Graph Neural Network uses the Dynamic Time Warping (DTW) algorithm to obtain features, and follows STSGCN (Song et al., 2020) in using a sliding window to capture spatial, temporal, and spatio-temporal dependencies.
- **STGODE** (Fang et al., 2021): Spatio-Temporal Graph ODE Networks attempt to bridge continuous differential equations to the node representations of road networks in the area of traffic forecasting.

| Dataset | Metric | ARIMA | DCRNN | STGCN | GraphWaveNet | STSGCN | STFGNN | STGODE | GRAM-ODE |
|---|---|---|---|---|---|---|---|---|---|
| PEMS03 | MAE | 33.51 | 18.18 | 17.48 | 19.85 | 17.48 | 16.77 | <u>16.50</u> | **15.72** |
| | MAPE(%) | 33.78 | 18.91 | 17.15 | 19.31 | 16.78 | <u>16.30</u> | 16.69 | **15.98** |
| | RMSE | 47.59 | 30.31 | 30.12 | 32.94 | 29.21 | 28.34 | <u>27.84</u> | **26.40** |
| PEMS04 | MAE | 33.73 | 24.70 | 22.70 | 25.45 | 21.19 | <u>20.84</u> | <u>20.84</u> | **19.55** |
| | MAPE(%) | 24.18 | 17.12 | 14.59 | 17.29 | 13.90 | <u>13.02</u> | 13.77 | **12.66** |
| | RMSE | 48.80 | 38.12 | 35.55 | 39.70 | 33.65 | <u>32.51</u> | 32.82 | **31.05** |
| PEMS07 | MAE | 38.17 | 25.30 | 25.38 | 26.85 | 24.26 | 23.46 | <u>22.99</u> | **21.75** |
| | MAPE(%) | 19.46 | 11.16 | 11.08 | 12.12 | 10.21 | **9.21** | 10.14 | <u>9.74</u> |
| | RMSE | 59.27 | 38.58 | 38.78 | 42.78 | 39.03 | <u>36.60</u> | 37.54 | **34.42** |
| PEMS08 | MAE | 31.09 | 17.86 | 18.02 | 19.13 | 17.13 | 16.94 | <u>16.81</u> | **16.05** |
| | MAPE(%) | 22.73 | 11.45 | 11.40 | 12.68 | 10.96 | <u>10.60</u> | 10.62 | **10.58** |
| | RMSE | 44.32 | 27.83 | 27.83 | 31.05 | 26.80 | 26.22 | <u>25.97</u> | **25.17** |
| PEMS-BAY | MAE | 3.38 | 2.07 | 2.49 | <u>1.95</u> | 2.11 | 2.02 | 2.30 | **1.67** |
| | MAPE(%) | 8.30 | 4.90 | 5.79 | <u>4.61</u> | 4.96 | 4.79 | <u>4.61</u> | **3.83** |
| | RMSE | 6.50 | 4.74 | 5.69 | <u>4.48</u> | 4.85 | 4.63 | 4.89 | **3.34** |
| METR-LA | MAE | 6.90 | 3.60 | 4.59 | <u>3.53</u> | 3.65 | 3.55 | 3.75 | **3.44** |
| | MAPE(%) | 17.40 | 10.50 | 12.70 | <u>10.01</u> | 10.67 | 10.56 | 10.26 | **9.38** |
| | RMSE | 13.23 | 7.60 | 9.40 | <u>7.37</u> | 7.81 | 7.47 | <u>7.37</u> | **6.64** |

Table 3: Performance comparison of GRAM-ODE and baselines on six benchmark datasets. A lower MAE/MAPE/RMSE indicates better performance. The **best** results are in bold and the second-best are underlined.

## 5.4 Experimental Settings

Following previous works in this domain, we perform experiments by splitting the entire dataset 6:2:2 into train, validation, and test sets. This split follows the temporal order, using the first 60% of the time length for training and the subsequent 20% each for validation and testing. We use the past one hour to predict the future one hour; since the time interval of data collection is 5 minutes, L, L′ = 12 temporal data points. Based on Table 2, the number of sensors, and therefore |V|, differs across datasets.
The DTW threshold (ϵ) in Eq. (2) is 0.1; the number of channels (C) in the historical data is 3 (i.e., flow, speed, and occupation) and in the embedding space is 64. The shared temporal weights Ws1, Ws2 ∈ R^{12×12} are initialized randomly from a normal distribution. The length of the latent space for the input of the local ODE block is L′′ = 4, and in the final attention module, the number of attention heads is h = 12. During training, we use learning rates of 10⁻⁴, 10⁻⁴, 10⁻⁵, 10⁻⁵, 10⁻⁴, and 10⁻⁴ for the PEMS03, PEMS04, PEMS07, PEMS08, PEMS-BAY, and METR-LA datasets, respectively. The optimizer is AdamW. All experiments are implemented using PyTorch (Paszke et al., 2019) and trained on a Quadro RTX 8000 GPU with 48GB of memory.
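To make the windowing described above concrete, the sketch below builds (past hour → next hour) sample pairs from a z-score-normalized series of shape (T, |V|, C) with L = L′ = 12. It is our own illustration under assumed naming, not the released data pipeline.

```python
import numpy as np

def build_windows(data: np.ndarray, L: int = 12, L_out: int = 12):
    """Slice a (T, num_nodes, num_channels) series into (X, Y) pairs.

    X has shape (N, L, num_nodes, C) and Y has shape (N, L_out, num_nodes, C),
    where each sample uses L past steps to predict the next L_out steps.
    """
    T = data.shape[0]
    xs, ys = [], []
    for t in range(T - L - L_out + 1):
        xs.append(data[t : t + L])
        ys.append(data[t + L : t + L + L_out])
    return np.stack(xs), np.stack(ys)

# Temporal 6:2:2 split is applied along the time axis before windowing:
# n_train = int(0.6 * T); n_val = int(0.2 * T)
```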
## 5.5 Experimental Results

Our proposed GRAM-ODE outperforms the other baselines on all datasets in Table 3, except for the MAPE metric on PEMS07, where it is slightly greater than that of STFGNN. ARIMA performs considerably worse than the other baselines, likely because it ignores the graph structure of spatio-temporal data. GraphWaveNet performs relatively poorly, possibly due to its limited capability in stacking spatio-temporal layers and expanding the receptive field learned from 1D CNN temporal kernels. DCRNN uses bi-directional random walks and a GRU to model spatial and temporal information, respectively, but its relatively low modeling efficacy and efficiency for long-range temporal information may contribute to its subpar performance. STGCN uses a GCN for the spatial domain and 1D dilated convolutional operations for the temporal domain, but may lose local information due to the dilation operation, while its absence of attention-based operations and the limited capability of convolutional operations in modeling high-dimensional spatial, temporal, and spatio-temporal correlations may also result in relatively poor performance. STSGCN captures local correlations with localized spatio-temporal subgraphs but may miss global information and thus perform poorly in long-range forecasting and with data containing missing entries or noise. STFGNN uses DTW for latent spatial networks and performs well, but is limited in learning comprehensive spatio-temporal correlations. STGODE uses Neural ODEs to capture spatio-temporal dynamics and achieves very good performance compared to the other baselines, but still lacks the ability to capture complex spatio-temporal correlations, balance local and global patterns, and model dynamic interactions between its components.

## 5.6 Case Study

We selected two nodes from the PEMS08 road network to conduct a case study for the qualitative demonstration of results. As Fig. 4 shows, the curves predicted by our proposed GRAM-ODE (red curve) are more closely aligned with the ground truth than those of STGODE (grey curve). The ground truth at node 109 has more fluctuations compared to node 17, which makes the forecasting task more difficult. We can also observe that our model responds faster to these abrupt fluctuations. This highlights the effectiveness of the local ODE-GNN block in our model architecture, which helps the model better learn the local fluctuations.

Figure 4: The comparison of traffic flow forecasting between our proposed GRAM-ODE and STGODE visualized for node 17 (left column) and node 109 (right column) of the PEMS08 dataset.

## 5.7 Ablation Study

To investigate the effect of different components of GRAM-ODE, we conduct ablation experiments on PEMS04 and PEMS08 with several different variants of our model.

(1) **Base:** In this model, data only passes through a TCN, an ODE block, and another TCN module. The ODE block only contains the global ODE-GNN with a fully connected layer after it. The output of the TCN is then fed to a simple fully connected layer instead of an attention module.

(2) **+E:** Beyond (1), this model adds the dynamic edge correlations with an edge-based ODE-GNN block and aggregates its outputs with those of the global ODE-GNN block through a simple weighted sum operation.

(3) **+L:** Beyond (2), this model adds the local ODE-GNN block with different temporal kernels to better consider the local temporal patterns. The outputs of the local ODE modules are aggregated with those of the other ODE modules through a weighted sum operation.

(4) **+share:** Compared to (3), this model uses shared temporal weights between the edge and global ODE-GNN modules and uses shared spatial weights among all three ODE-GNN modules: the global, local, and edge modules.

(5) **+cons:** Beyond (4), this model adds an adaptive message filtering constraint to restrict the divergence of embeddings from the local and global ODE modules.

(6) **+agg:** Beyond (5), this model replaces the weighted sum aggregation with the newly designed aggregation module explained in Eq. (19).

(7) **+res:** Beyond (6), this model adds the intra-block residual connections between the outputs and inputs of the multi ODE-GNN blocks (explained in Eq. (20)).

(8) **GRAM-ODE:** Beyond (7), this model replaces the last linear layer with the attention module to better combine the features learned from different traffic graphs and parallel channels of GRAM-ODE layers (given by Eqs. (22) and (23)).

Figure 5: Ablation experiment results with different configurations of GRAM-ODE on PEMS03 (top row) and PEMS08 (bottom row) datasets.

Fig. 5 shows the results of the ablation experiments with the MAE, MAPE, and RMSE metrics. It can be observed that the edge and local ODE-GNN modules both enhance the feature representation in the traffic forecasting task. The model variant '+E' improves the performance of the base model across all metrics and both datasets. This shows that the simple node-based global ODE-GNN is insufficient for learning informative features and that adding the edge-based ODE-GNN module can considerably help the performance. Without any further techniques, the model gets worse when only adding '+L' beyond '+E'. However, after adding the shared-weight and divergence-constraint techniques, the model usually gets better across all metrics. The shared weights are applied in the spatial and temporal operations of the global node-based and edge-based ODE-GNN blocks to take into account the dynamic correlations of edges as well as nodes in the model formulation. The constraint is added to prevent the local and global ODE-GNN embeddings from deviating from one another. In this figure, we can also observe the impact of the aggregation layer, the residual-based update layer, as well as the designed attention module. It appears that, among these three elements, adding the attention module (AM) will always result in better performance, which is consistent with our hypothesis that the attention mechanism makes the model more effective in considering all the high-dimensional correlations during feature extraction.
## 5.8 Robustness Study

To evaluate the robustness of our model, we add noise to the historical input of the training data, which can potentially mimic uncertainties and biases that arise during the data collection process. The added noise follows a zero-mean i.i.d. Gaussian distribution with fixed variance, i.e., N(0, σ²), where σ² ∈ {2, 4}. We conduct the robustness analysis across different values of η ∈ {0.1, 0.2, 0.3, 0.4}, representing the ratio of training data impacted by noise. In other words, η = 0 captures the performance without any noise. Fig. 6 presents the results of the robustness comparisons between GRAM-ODE and STGODE on the PEMS04 dataset across all three metrics (MAE, MAPE, and RMSE). It can be observed that GRAM-ODE is more robust than STGODE under the different levels of added noise η = 0.1, 0.2, 0.3, 0.4, which is probably due to the powerful spatio-temporal feature extraction gained by the multiple ODE-GNN modules. We can also notice that, when the noise levels are high (with σ² = 4 and η = 0.4), GRAM-ODE can still beat many baseline models listed in Table 3, demonstrating the significant benefit of incorporating various ODE modules in our framework, which can improve robustness.

Figure 6: Robustness comparison of GRAM-ODE and STGODE on the PEMS04 dataset.
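For concreteness, a minimal sketch of this corruption protocol follows, under our own naming rather than the released code: zero-mean Gaussian noise with variance σ² is added to the inputs of a randomly selected fraction η of the training samples.

```python
import torch

def corrupt_training_inputs(x: torch.Tensor, eta: float, sigma2: float) -> torch.Tensor:
    """Add zero-mean Gaussian noise N(0, sigma2) to a fraction eta of samples.

    x: historical input of shape (num_samples, L, num_nodes, num_channels).
    """
    x = x.clone()
    n = x.shape[0]
    idx = torch.randperm(n)[: int(eta * n)]            # samples to corrupt
    noise = torch.randn_like(x[idx]) * sigma2 ** 0.5   # std = sqrt(sigma2)
    x[idx] += noise
    return x

# e.g., x_noisy = corrupt_training_inputs(x_train, eta=0.2, sigma2=2.0)
```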
## 6 Conclusion

In this paper, we propose Spatio-Temporal Graph Multi-ODE Neural Networks (GRAM-ODE) for traffic forecasting. In this model, multiple coupled ODE-GNN blocks are used to capture complex spatio-temporal dependencies from different views and learn better representations. We also add several techniques to further improve the communication between different ODE-GNN blocks, including weight sharing, advanced aggregation, and a divergence constraint. Extensive experiments on six real-world datasets show the superior performance of our proposed model compared to other state-of-the-art models, as well as the effectiveness of each component. Future work may focus on investigating model compression techniques to reduce model size without sacrificing performance, exploring distributed computing strategies for efficiency, and evaluating GRAM-ODE's applicability to other spatio-temporal applications like climate modeling or social network analysis.

## References

Taghreed Alghamdi, Khalid Elgazzar, Magdi Bayoumi, Taysseer Sharaf, and Sumit Shah. Forecasting traffic congestion using arima modeling. In *2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC)*, pp. 1227–1232, 2019. doi: 10.1109/IWCMC.2019.8766698.

Donald J Berndt and James Clifford. Using dynamic time warping to find patterns in time series. In *KDD Workshop*, volume 10, pp. 359–370. Seattle, WA, USA, 1994.

George EP Box and David A Pierce. Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. *Journal of the American Statistical Association*, 65(332):1509–1526, 1970.

Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1725–1735. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/chen20v.html.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf.

Jeongwhan Choi, Hwangyong Choi, Jeehyun Hwang, and Noseong Park. Graph neural controlled differential equations for traffic forecasting. *Proceedings of the AAAI Conference on Artificial Intelligence*, 36(6):6367–6374, Jun. 2022. doi: 10.1609/aaai.v36i6.20587. URL https://ojs.aaai.org/index.php/AAAI/article/view/20587.

Shengdong Du, Tianrui Li, Xun Gong, Yan Yang, and Shi-Jinn Horng. Traffic flow forecasting based on hybrid deep learning framework. In *2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)*, pp. 1–6, 2017. doi: 10.1109/ISKE.2017.8258813.

Zheng Fang, Qingqing Long, Guojie Song, and Kunqing Xie. Spatial-temporal graph ode networks for traffic flow forecasting. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, KDD '21, pp. 364–373, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467430. URL https://doi.org/10.1145/3447548.3467430.

Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. *Proceedings of the AAAI Conference on Artificial Intelligence*, 33(01):922–929, Jul. 2019. doi: 10.1609/aaai.v33i01.3301922. URL https://ojs.aaai.org/index.php/AAAI/article/view/3881.

Mansura Habiba and Barak A. Pearlmutter. Neural ordinary differential equation based recurrent neural network model. 2020. URL https://arxiv.org/abs/2005.09807.

Zhixiang He, Chi-Yin Chow, and Jia-Dong Zhang. Stcnn: A spatio-temporal convolutional neural network for long-term traffic prediction. In *2019 20th IEEE International Conference on Mobile Data Management (MDM)*, pp. 226–233. IEEE, 2019.

H. V. Jagadish, Johannes Gehrke, Alexandros Labrinidis, Yannis Papakonstantinou, Jignesh M. Patel, Raghu Ramakrishnan, and Cyrus Shahabi. Big data and its technical challenges. *Commun. ACM*, 57(7):86–94, Jul 2014. ISSN 0001-0782. doi: 10.1145/2611567. URL https://doi.org/10.1145/2611567.

Young-Seon Jeong, Young-Ji Byon, Manoel Mendonca Castro-Neto, and Said M. Easa. Supervised weighting-online learning algorithm for short-term traffic flow prediction. *IEEE Transactions on Intelligent Transportation Systems*, 14(4):1700–1707, 2013. doi: 10.1109/TITS.2013.2267735.

Weiwei Jiang and Jiayun Luo. Graph neural network for traffic forecasting: A survey. *Expert Systems with Applications*, 207:117921, 2022. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2022.117921. URL https://www.sciencedirect.com/science/article/pii/S0957417422011654.

Nicola Jones. How machine learning could help to improve climate forecasts. *Nature*, 548(7668):379–379, Aug 2017. ISSN 1476-4687. doi: 10.1038/548379a. URL https://doi.org/10.1038/548379a.

Patrick Kidger. On neural differential equations. *CoRR*, abs/2202.02435, 2022. URL https://arxiv.org/abs/2202.02435.

Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 6696–6707. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-Paper.pdf.
Patrick Kidger, James Foster, Xuechen Li, and Terry Lyons. Efficient and accurate gradients for neural SDEs. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural* Information Processing Systems, 2021a. URL https://openreview.net/forum?id=b2bkE0Qq8Ya. Patrick Kidger, James Foster, Xuechen Li, Harald Oberhauser, and Terry Lyons. Neural {sde}s made easy: {SDE}s are infinite-dimensional {gan}s, 2021b. URL https://openreview.net/forum?id=padYzanQNbg. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. Shiyong Lan, Yitong Ma, Weikang Huang, Wenwu Wang, Hongyu Yang, and Pyang Li. DSTAGNN: Dynamic spatial-temporal aware graph neural network for traffic flow forecasting. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th* International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pp. 11906–11917. PMLR, 17–23 Jul 2022. Mathias Lechner and Ramin Hasani. Mixed-memory RNNs for learning long-term dependencies in irregularly sampled time series, 2022. Mengzhang Li and Zhanxing Zhu. Spatial-temporal fusion graph neural networks for traffic flow forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5):4189–4196, May 2021. doi: 10.1609/ aaai.v35i5.16542. URL https://ojs.aaai.org/index.php/AAAI/article/view/16542. Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semisupervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018a. ISBN 978-1-57735-800-8. Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting, 2017. URL https://arxiv.org/abs/1707.01926. Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In *International Conference on Learning Representations*, 2018b. URL https://openreview.net/forum?id=SJiHXGWAZ. Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal lstm with trust gates for 3d human action recognition. In *European conference on computer vision*, pp. 816–833. Springer, 2016. Antonella Longo, Marco Zappatore, Mario Bochicchio, and Shamkant B. Navathe. Crowd-sourced data collection for urban monitoring via mobile sensors. *ACM Trans. Internet Technol.*, 18(1), oct 2017. ISSN 1533-5399. doi: 10.1145/3093895. URL https://doi.org/10.1145/3093895. James Morrill, Patrick Kidger, Lingyi Yang, and Terry Lyons. Neural controlled differential equations for online prediction tasks, 2021. URL https://arxiv.org/abs/2106.11028. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances* in Neural Information Processing Systems, volume 32. 
Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf.

Bin Pu, Jiansong Liu, Yan Kang, Jianguo Chen, and Philip S. Yu. Mvstt: A multiview spatial-temporal transformer network for traffic-flow forecasting. *IEEE Transactions on Cybernetics*, pp. 1–14, 2022. doi: 10.1109/TCYB.2022.3223918.

Yulia Rubanova, Ricky TQ Chen, and David K Duvenaud. Latent ordinary differential equations for irregularly-sampled time series. *Advances in Neural Information Processing Systems*, 32, 2019.

Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf.

Chao Song, Youfang Lin, Shengnan Guo, and Huaiyu Wan. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 914–921, 2020.

Yuqiao Su, Bin Ren, and Kunhua Zhang. Graph ode recurrent neural networks for traffic flow forecasting. In *2022 IEEE 5th International Conference on Electronics and Communication Engineering (ICECE)*, pp. 178–182, 2022. doi: 10.1109/ICECE56287.2022.10048605.

Mascha Van Der Voort, Mark Dougherty, and Susan Watson. Combining kohonen maps with arima time series models to forecast traffic flow. *Transportation Research Part C: Emerging Technologies*, 4(5):307–318, 1996.

Vivek Veeriah, Naifan Zhuang, and Guo-Jun Qi. Differential recurrent neural networks for action recognition. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 4041–4049, 2015.

Billy M Williams and Lester A Hoel. Modeling and forecasting vehicular traffic flow as a seasonal arima process: Theoretical basis and empirical results. *Journal of Transportation Engineering*, 129(6):664–672, 2003.

Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang. Graph wavenet for deep spatial-temporal graph modeling. In *Proceedings of the 28th International Joint Conference on Artificial Intelligence*, IJCAI'19, pp. 1907–1913. AAAI Press, 2021. ISBN 9780999241141.

Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In *Thirty-Second AAAI Conference on Artificial Intelligence*, 2018.

Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence*, IJCAI-18, pp. 3634–3640. International Joint Conferences on Artificial Intelligence Organization, 7 2018. doi: 10.24963/ijcai.2018/505.

Junbo Zhang, Yu Zheng, and Dekang Qi. Deep spatio-temporal residual networks for citywide crowd flows prediction. *Proceedings of the AAAI Conference on Artificial Intelligence*, 31(1), Feb. 2017. doi: 10.1609/aaai.v31i1.10735. URL https://ojs.aaai.org/index.php/AAAI/article/view/10735.
Lun Zhang, Qiuchen Liu, Wenchen Yang, Nai Wei, and Decun Dong. An improved k-nearest neighbor model for short-term traffic flow prediction. *Procedia - Social and Behavioral Sciences*, 96:653–662, 2013. ISSN 1877-0428. doi: https://doi.org/10.1016/j.sbspro.2013.08.076. URL https://www.sciencedirect.com/science/article/pii/S1877042813022027. Intelligent and Integrated Sustainable Multimodal Transportation Systems, Proceedings from the 13th COTA International Conference of Transportation Professionals (CICTP2013).

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. *AI Open*, 1:57–81, 2020. ISSN 2666-6510. doi: https://doi.org/10.1016/j.aiopen.2021.01.001. URL https://www.sciencedirect.com/science/article/pii/S2666651021000012.

## Appendix A Prediction Variance

## A.1 Initialization Randomness

Table 4 presents the average and standard deviation of the results of multiple runs of our model using five different random seeds. The results demonstrate that GRAM-ODE's performance exhibits only minor variance across different seeds. Moreover, despite these variations, GRAM-ODE continues to outperform the other baselines across the different benchmark datasets, as is also evident when comparing to the baseline results in Table 3.

| Dataset | Metric | GRAM-ODE |
|---|---|---|
| PEMS03 | MAE | 15.72 ± 0.29 |
| | MAPE(%) | 15.98 ± 0.31 |
| | RMSE | 26.40 ± 0.42 |
| PEMS04 | MAE | 19.55 ± 0.12 |
| | MAPE(%) | 12.66 ± 0.29 |
| | RMSE | 31.05 ± 0.25 |
| PEMS07 | MAE | 21.75 ± 0.30 |
| | MAPE(%) | 9.74 ± 0.21 |
| | RMSE | 34.42 ± 0.39 |
| PEMS08 | MAE | 16.05 ± 0.20 |
| | MAPE(%) | 10.58 ± 0.13 |
| | RMSE | 25.17 ± 0.14 |
| PEMS-BAY | MAE | 1.67 ± 0.02 |
| | MAPE(%) | 3.83 ± 0.03 |
| | RMSE | 3.34 ± 0.03 |
| METR-LA | MAE | 3.44 ± 0.08 |
| | MAPE(%) | 9.38 ± 0.11 |
| | RMSE | 6.64 ± 0.05 |

Table 4: GRAM-ODE performance variance for different random seeds over all datasets.

## A.2 Training Randomness

We also conducted experiments using three distinct random seeds across various cross-validation splits on the PEMS-BAY dataset (due to time constraints, we were only able to run this experiment for one dataset). The results of these experiments can be found in Table 5. As stated in the manuscript, we follow the previous works and divide the datasets into train/val/test splits with a 6:2:2 ratio. Given that the data is a time series, we can create different cross-validation splits for the train and test data as follows: T_X, TX_, _TX, XT_, _XT, X_T, where T and X refer to the train and test sets. Despite the minor variance in performance across different cross-validation splits and model initialization seeds, we can observe that GRAM-ODE still outperforms the other baselines on this dataset.

Table 5: GRAM-ODE performance variance for different cross-validation splits across different model initialization seeds on the PEMS-BAY dataset.
| CV Split | RMSE | MAE | MAPE | Seed ID |
|---|---|---|---|---|
| T,_,X | 3.39 | 1.70 | 3.95 | 0 |
| | 3.39 | 1.69 | 3.91 | 1 |
| | 3.39 | 1.70 | 3.92 | 2 |
| T,X,_ | 3.38 | 1.69 | 3.97 | 0 |
| | 3.39 | 1.68 | 3.96 | 1 |
| | 3.38 | 1.68 | 3.94 | 2 |
| _,T,X | 3.26 | 1.62 | 3.67 | 0 |
| | 3.24 | 1.60 | 3.63 | 1 |
| | 3.27 | 1.62 | 3.68 | 2 |
| X,T,_ | 2.97 | 1.59 | 3.44 | 0 |
| | 3.00 | 1.63 | 3.49 | 1 |
| | 2.98 | 1.59 | 3.45 | 2 |
| _,X,T | 3.41 | 1.82 | 4.05 | 0 |
| | 3.40 | 1.80 | 4.01 | 1 |
| | 3.41 | 1.82 | 4.03 | 2 |
| X,_,T | 3.18 | 1.72 | 3.79 | 0 |
| | 3.21 | 1.75 | 3.83 | 1 |
| | 3.18 | 1.71 | 3.78 | 2 |

## B Efficiency And Scalability

Figure 7 comprises two subfigures that demonstrate the efficiency and scalability of our proposed GRAM-ODE. Fig. 7(a) presents a Pareto plot illustrating the trade-off between RMSE forecasting performance and inference time for various methods, including our GRAM-ODE. Although our multi-ODE framework exhibits a larger inference time, it delivers superior RMSE performance compared to other methods. This highlights that, in use cases where prediction accuracy is of paramount importance, the enhanced performance of our GRAM-ODE can outweigh the increased computational complexity. Fig. 7(b) showcases the percentage performance gain achieved by GRAM-ODE in comparison to the best-performing baseline. The dataset size on the x-axis is calculated by multiplying the squared number of nodes by the number of timesteps. As the dataset size increases, we observe that GRAM-ODE provides relatively better performance gains, indicating that our multi-ODE framework is scalable to larger datasets and higher-dimensional inputs. This scalability further emphasizes the advantages of GRAM-ODE, especially when handling more extensive and complex data.

Figure 7: Efficiency and scalability of GRAM-ODE. (a) RMSE and inference time trade-off for various methods on the PEMS03 dataset. (b) Relative RMSE gain by GRAM-ODE in comparison to the best-performing baseline across datasets of varying sizes.

## C Exploring The Outputs Of Various Ode Modules

We analyzed the outputs of the different ODE modules in the aftermath of a traffic jam event, when fluctuations and changes are more pronounced. Our observations suggest that the distinct ODE modules contribute to varying performance and predictions, likely resulting from the different features they learn from multiple perspectives. In particular, Fig. 8 shows that shortly after the traffic jam, when traffic flow has sudden increases, the outputs of the Local Module (LM) appear to align more closely with the ground truth. As traffic flow stabilizes, both the Local Module (LM) and Global Module (GM) outputs exhibit better alignment with the ground truth. However, as traffic flow gradually decreases, the Global Module (GM) outputs start to diverge from the ground truth, while the Edge Module (EM) and Local Module (LM) outputs remain more consistent with the actual data. Therefore, as traffic flow changes, the alignment of the outputs with the ground truth varies across modules, indicating that each module captures different aspects of the data.

Figure 8: Outputs of different ODE modules during traffic fluctuations. (a) Varying performance and predictions of the Local Module (LM), Global Module (GM), and Edge Module (EM) in response to changes in traffic flow. (b) Normalized distance of different prediction curves with the ground truth at each time step.
Review 1:

Summary: This paper provides a graph convolutional network (GCN) for traffic forecasting, called GRAM-ODE. It claims the previous works ignore the local temporal embedding and edge-based dynamic embedding. It then proposes the multi-ODE layer, including sublayers that process the global temporal, local temporal, and edge-based features, respectively. An attention-based aggregation layer is also applied to merge the advanced embeddings from two traffic graphs. The experiments show: 1) GRAM-ODE surpasses the selected baselines; 2) the effectiveness of the key components in GRAM-ODE; 3) the robustness of the trained GRAM-ODE.

Strengths and Weaknesses:

**Strengths:** Traffic forecasting is a very interesting and challenging research topic. This paper has described the background of the topic, with a good motivation to make the graph embedding of the traffic network more informative. The proposed method is based on ordinary differential equations (ODEs), which are a commonly used component in traffic forecasting. The technical part is fairly sound and not difficult to understand. The experiments contain various methods for traffic forecasting, and the ablation study has verified the effect of the components. This work also considers noisy data and evaluates the robustness of the proposed method, which I think is a very practical point and deserves research in traffic forecasting.

**Weaknesses:** Despite the above merits, I have to say this paper still has room to further improve.

1. Some descriptions are ambiguous. For example, the authors claim that "Despite different semantic meanings corresponding to node-based and edge-based features, they can share some important spatial and temporal patterns." What kinds of spatial and temporal patterns are meant is not mentioned. It would be good to clarify this with a concrete example.

2. The novelty is not very high. The proposed method is based on ODEs, which are a very common component. Some works have proposed similar approaches for traffic forecasting, such as [1] and [2] below, which are not mentioned in the literature review. [1] even considers the local and global temporal embeddings and the edge-based dynamic embedding, which is very close to this work.

[1] MVSTT: A Multiview Spatial-Temporal Transformer Network for Traffic-Flow Forecasting
[2] Graph ODE Recurrent Neural Networks for Traffic Flow Forecasting

3. The experimental setting is not well described. How each baseline is implemented should have been suitably mentioned.

4. Analysis of the power of the proposed approach. Beyond the evaluation metrics used, the authors provide no analyses of the training and testing efficiency of the approach, which I think is important for understanding the performance in a more comprehensive way.

5. The limitations of this work and the future work are ignored.

Requested Changes: The following points are necessary to be addressed for meeting the level of acceptance. 1. I strongly recommend that the authors present analyses of the training and testing efficiency of the proposed approach. 2. The above highly related works should be added to the literature review or excluded for some stated reason. 3. The experimental setting should be introduced with more words. The limitations and the future work should be added.

Broader Impact Concerns: I have no concerns on the ethical implications of the work.

==================================================

Review 2:

Summary: This paper focuses on the task of spatio-temporal traffic forecasting.
Given the recent advances of graph ODEs and their limitations on the task, the authors propose a multi-ODE network (GRAM-ODE). The proposed method seems to be able to capture complex local and global dynamics from the spatio-temporal dependencies.

Strengths and Weaknesses: This paper focuses on the task of spatio-temporal traffic forecasting. Given the recent advances of graph ODEs and their limitations on the task, the authors propose a multi-ODE network (GRAM-ODE). The proposed method seems to be able to capture complex local and global dynamics from the spatio-temporal dependencies. The following are my concerns about this work, which I hope the authors can address (some of which are minor concerns):

1. I would recommend that the authors add some citations in the first sentence of the Introduction.
2. The different arrows in Fig. 1 are barely readable.
3. For the results shown in Table 3, I would suggest the authors also report the standard deviations for a better understanding of the significance. Ideally, averaging the results of multiple runs of cross-validation and reporting the average with the standard deviation would be best.
4. Given the audience of this journal, I think Eq. 25 might not be necessary.
5. I wonder if there are other publicly available datasets that can be used for evaluation, which could add more diversity to the datasets.
6. Given the multi-ODE design of the proposed method, I think it is necessary to also compare with baselines in terms of scalability and efficiency, for example, a complexity analysis of the proposed method, a runtime comparison of the proposed method against baselines, etc.
7. In the introduction, the authors mentioned that the different ODE modules proposed in this work were designed to capture different knowledge, such as local temporal patterns or semantic edges. How do we validate these claims? While there are experiments showing better overall performance and ablation studies showing that each designed component contributes to the performance, I think it is still hard to conclude that one design specifically captures some knowledge.
8. Some of the different colors in Fig. 5 are barely separable; please consider changing to more separable colors or patterns for better readability.
9. The manuscript seems to be missing a careful proofread and typo check.

Requested Changes: Please address the concerns in the previous section.

Broader Impact Concerns: n/a

==================================================

Review 3:

Summary: This paper introduces Graph-based Multi-ODE (GRAM-ODE), a neural network architecture for spatio-temporal traffic forecasting. The model consists of novel components for addressing temporal patterns at different scales (e.g., short-term vs. long-term) and for including semantic edge information. The model further uses multi-head attention pooling to aggregate features (while prior work in this area primarily used simpler aggregation modules such as sum- or max-pooling). The method is validated on four traffic prediction datasets and shows benefits over prior works.
Strengths and Weaknesses: **Strengths:** * Novel architectural contributions are reasonably well-motivated * Apparent (but potentially marginal) improvements on an existing traffic prediction task * Detailed ablation study and study of model robustness to noise **Weaknesses:** * Experimental protocol does not support claims/hypotheses well: no variance is reported; datasets are of very limited diversity/size * Insufficient clarity and quality of writing and exposition **Regarding clarity of writing:** Figures: * Figure 2 is unclear. It is difficult to get an overview of the model. * Figure 3 is similarly unclear. What do the blue rectangles represent? What does “AM” stand for? What does it mean if a rectangle is shaded in a particular way (some are colored with a gradient from dark blue to blue, whereas others are from light blue to white)? Writing: * “To fuse these two separate sets of operations more intelligently” -> this is a vague statement: how would one determine whether a particular fusion operation is more or less intelligent? * The paper goes straight into very specific details around the newly introduced weight sharing, before introducing an overview of the model: I would recommend that the paper starts with a concise overview of the model and then introduces the particular novel components. Lastly, there are frequent typos and grammatical inconsistencies in the paper. A few examples below: * Table 1 (notations): inconsistent lower-/upper-casing; “numbers of channels” * “Identical matrix” -> “identity matrix” **Regarding experiments:** Experiments: * Results look largely comparable to prior work. It is unclear whether benefit stems from a better choice of hyperparameters or from modeling novelties * No error bars or indication of variance * 4 small datasets from the same provider, i.e. it is unclear how this model behaves on vastly different road networks, e.g. of larger size, in different continents, cities etc. Requested Changes: * [Critical] The experimental protocol needs to be revisited: the authors should not only analyze variance of results across the different cross-validation splits, but should also report variance for different model initialization seeds, i.e. how much variance the results have based on randomness in model training/initialization. Only results that are significantly different from baselines or model variants (in terms of ablations) can be considered for testing the initial hypotheses made in the paper. * [Critical] The claims and hypotheses that the paper proposes need to be revisited after improving the experimental protocol. It is quite likely that several claims made in the current version of the paper will not survive this test. For example, already in the current version of the paper, the effect of special handling for local temporal patterns (“+L” ablation) is minimal or even negative. * [Critical] The clarity of the model definition and figures need to be significantly improved. I suggest a major revision of the method section, in line with the potential revision of the claims/hypotheses. * [Highly recommended] The authors should consider a wider range of datasets and/or tasks to more convincingly test their hypotheses and to show more evidence for their claims. * [Highly recommended] The authors should consider reducing the length of the paper to 12 pages (excl. appendix) or less; the current version of the paper is far less concise than it could be. 
Improving conciseness would also significantly help with clarity issues that the paper currently has. Other question: Regarding the shared spatial weight: as this is a dense matrix (with parameters initialized from a Gaussian distribution) that is being added to the adjacency matrix, this means that all message passing operations will be dense thereafter. This seems to be infeasible for large graphs, which would generally be the case for real-world applications of traffic modeling. Could the authors comment on this? Broader Impact Concerns: Not applicable. The work is an incremental architectural improvement on an existing task and model, i.e. no Broader Impact Statement is required. ================================================== Metareview: Recommendation: Accept as is Comment: All reviewers agree that the paper is useful and the ideas are interesting. Although the idea may seem straightforward, it has never been systematically studied before, and potentially is very useful for spatio-temporal modelling of signals on the graph. ==================================================
# Bagel: A Benchmark For Assessing Graph Neural Network Explanations

Anonymous authors Paper under double-blind review

## Abstract

Evaluating interpretability approaches for graph neural networks (GNNs) specifically is known to be challenging due to the lack of a commonly accepted benchmark. Given a GNN model, several interpretability approaches exist to explain GNN models with diverse (sometimes conflicting) evaluation methodologies. In this paper, we propose a benchmark for evaluating the explainability approaches for GNNs called Bagel. In Bagel, we first propose four diverse GNN explanation evaluation regimes - 1) *faithfulness*, 2) *sparsity*, 3) *correctness*, and 4) *plausibility*. We reconcile multiple evaluation metrics in the existing literature and cover diverse notions for a holistic evaluation. Our graph datasets range from citation networks and document graphs to graphs from molecules and proteins. We conduct an extensive empirical study on four GNN models and nine post-hoc explanation approaches for node and graph classification tasks. We release both the benchmarks and reference implementations and make them available at https://anonymous.4open.science/r/Bagel-benchmark-F451/.

## 1 Introduction

Graph neural networks (GNNs) (Veličković et al., 2018; Kipf & Welling, 2017; Klicpera et al., 2019; Xu et al., 2019; Hamilton et al., 2017) are representation learning techniques that encode structured information into a low dimensional space using a feature aggregation mechanism over the node neighborhoods. GNNs have shown state-of-the-art performance across many scientific fields in various important downstream applications, such as molecular data analysis, drug discovery, toxic molecule detection, and community clustering (Dong et al., 2022; Gaudelet et al., 2021; Ying et al., 2018). There have been benchmarks and datasets for the interpretability of machine learning models (Wiegreffe & Marasović, 2021; Liu et al., 2021). The rising number of applications of GNNs in several sensitive domains like medicine and healthcare (Dong et al., 2022; Lu & Uddin, 2021) necessitates the need to explain their decision-making process.

GNNs are inherently black-box and non-interpretable. Moreover, due to the complex interplay of node features and neighborhood structure in the decision-making process, general explanation approaches (Lundberg & Lee, 2017; Ribeiro et al., 2016; Singh & Anand, 2020) cannot be trivially applied to graph models. Consequently, several explanation techniques (Ying et al., 2019; Funke et al., 2022; Vu & Thai, 2020; Yuan et al., 2020; Schnake et al., 2021; Huang et al., 2020; Schlichtkrull et al., 2020; Yuan et al., 2021) have been proposed for GNNs in the last few years.

A known challenge in developing explanation techniques is the evaluation of the quality of explanations. This challenge also extends to the evaluation of explainability approaches for GNNs and is the focus of this work. Existing approaches usually focus on a certain aspect of evaluation, sometimes even performed on synthetic datasets. For example, some works employ synthetic datasets with an already-known subgraph (sometimes referred to as the ground truth reason or simply the ground truth). Explanations are then evaluated based on their agreement with the ground truth. Such an evaluation is sometimes flawed, as one cannot always guarantee that the GNN actually used the seeded subgraph in its decision-making process (Faber et al., 2021).
Besides, there is no standardized procedure for comparing different GNN explanations. For example, feature attribution methods can generate soft masks (feature importance as a distribution) or hard masks (boolean selections) over features. Comparing hard and soft mask explanations needs a common and standardized protocol. Finally, the checks for *human plausibility* and correctness have been ignored in the evaluation of GNN explainers. Human plausibility checks if a model predicts *right for the right reason*. On the other hand, the correctness of an explanation checks if the explainer is able to isolate spurious correlations and biases that are intentionally added to the training data as a proxy for biases present in real-world data.

Figure 1: An overview of the Bagel benchmark.

To address the issues of a holistic evaluation and to contribute a resource to the growing community on GNN explainability, we developed Bagel, a benchmark platform for evaluating explanation approaches for graph neural networks (GNNs). Bagel, as depicted in Figure 1, covers diverse datasets (Sen et al., 2008; Debnath et al., 1991; Borgwardt et al., 2005; Zaidan & Eisner, 2008) from the literature, a range of standardized metrics, and a modular, extendable framework for the execution and evaluation of GNN explanation approaches, along with initial implementations of recent popular explanation methods. In Table 1, we show the combination of metrics and datasets that we use in our benchmark. Bagel includes:

- Four diverse evaluation notions that evaluate the *faithfulness*, *sparsity*, *correctness*, and *plausibility* of GNN explanations on real-world datasets. While the first three metrics focus on evaluating the explainers, plausibility checks whether explanations are human congruent.
- Besides the widely used datasets for measuring the faithfulness of explanations, Bagel contains new datasets for measuring the plausibility of explanation approaches.
- We unify multiple evaluations, metrics, domains, and datasets into an easy-to-use format that reduces the entry barrier for evaluating new approaches to explain GNNs. Additionally, we provide an extendable library to implement and evaluate GNN explainers.
- We conduct an extensive evaluation of GNNExplainer (GNNExp) (Ying et al., 2019), PGMExplainer (PGM) (Vu & Thai, 2020), Zorro (Funke et al., 2022), Grad (Simonyan et al., 2013), GradInput (Simonyan et al., 2013), Integrated Gradients (IG) (Sundararajan et al., 2017), SmoothGrad (Smilkov et al., 2017), CAM (Pope et al., 2019), and GradCAM (Pope et al., 2019) in Bagel.

| Task | Dataset | Faithfulness (RDT-Fidelity) | Faithfulness (Suff. & Comp.) | Sparsity | Correctness | Plausibility |
|---|---|---|---|---|---|---|
| Node | Cora | ✓ | ✗ | ✓ | ✓ | ✗ |
| | CiteSeer | ✓ | ✗ | ✓ | ✓ | ✗ |
| | PubMed | ✓ | ✗ | ✓ | ✗ | ✗ |
| | ogbn-arxiv | ✓ | ✗ | ✓ | ✗ | ✗ |
| | Synthetic | ✓ | ✗ | ✓ | ✓ | ✓ |
| Graph | Movie Reviews | ✗ | ✓ | ✗ | ✗ | ✓ |
| | MUTAG | ✓ | ✓ | ✓ | ✗ | ✗ |
| | PROTEINS | ✓ | ✓ | ✓ | ✗ | ✗ |
| | ENZYMES | ✓ | ✓ | ✓ | ✗ | ✗ |

Table 1: Datasets and metrics.

We show that there is no clear winner among the GNN explanation methods, with the multiple metrics considered offering nuanced interpretations of the GNN explanation methods. We finally note that evaluating the effectiveness of explanations is an intrinsically human-centric task that ideally requires human studies. However, the goal of Bagel is to provide a fast and accurate evaluation strategy, which is often desirable for developing new explainability techniques using empirical evaluation metrics before the human trial stage. The code and the datasets used in our benchmark are available at https://anonymous.4open.science/r/Bagel-benchmark-F451/.

## 2 Background And Preliminaries

Graph Neural Networks. Let G(V, E) be a graph, where V is the set of nodes and E is the set of edges. Let A ∈ {0, 1}^{n×n} be the adjacency matrix of the graph, where n is the number of nodes in the graph, with A_{ij} = 1 if there is an edge between nodes i and j and 0 otherwise. Let X ∈ R^{n×d} be the feature matrix, where d is the number of features. For a given node v ∈ V, x_v denotes its feature vector and N_v is the set of its neighbors. We denote the GNN model trained on the given graph by f. For each layer ℓ, the representation of node v is obtained by aggregating and transforming the representations of its neighboring nodes at layer ℓ − 1:

$$\mathbf{h}_{v}^{(\ell)}=\text{AGG}\left(\left\{\mathbf{x}_{v}^{(\ell-1)},\left\{\mathbf{x}_{u}^{(\ell-1)}\mid u\in\mathcal{N}_{v}\right\}\right\}\right),\quad\mathbf{x}_{v}^{(\ell)}=\text{TRANSFORM}\left(\mathbf{h}_{v}^{(\ell)},W^{(\ell)}\right),\tag{1}$$

where W^{(ℓ)} represents the weight matrix at layer ℓ. The aggregation function AGG is dependent on the GNN model.
For example, graph convolution network (GCN) (Kipf & Welling, 2017) uses a degree-weighted aggregation of neighborhood features, whereas graph attention network (GAT) (Veličković et al., 2018) learns neighborhood weights via an attention mechanism. The prediction can be obtained at the final layer using a *softmax function*. An additional pooling layer is applied for *graph classification* tasks before applying the *softmax function*.

Computational Graph. We note that for the task of node classification, for any node v, the subgraph taking part in the computation of the neighborhood aggregation operation, see (1), fully determines the information used by the GNN during inference time to predict its class. In particular, for a k-layer GNN, we refer to the subgraph induced on the nodes in the k-hop neighborhood of v as its *computational graph*. Note that for the task of graph classification the computational graph would be the entire graph.

## 2.1 Post-Hoc Explanations And Evaluation For GNNs

GNN Explanation. Post-hoc explainers for GNNs produce feature and local structure attributions, where a combination of a masked set of nodes, edges, and features is retrieved as an explanation. To compute the explanation for a k-layer GNN, the k-hop neighborhood (i.e., its computational graph) of the node is utilized. For an explanation S, the explanation mask M(S) is computed over the input nodes/edges and the features in the computational graph. Note that M(S) could be a binary or a continuous mask and contains the importance scores for the corresponding nodes/features. We note that, as different explainers return either node or edge importance scores, for a consistent comparison we convert edge masks to node masks.

Bagel currently consists of 3 classes of post-hoc explanation techniques: *gradient based*, *perturbation based*, and *surrogate model* approaches. The gradient-based methods (Simonyan et al., 2013; Sundararajan et al., 2017; Smilkov et al., 2017; Pope et al., 2019) are the simplest approaches for generating an explanation for any differentiable trained model. In these approaches, the importance scores for the explanation are usually computed using gradients of the input. The perturbation-based approaches (Funke et al., 2022; Ying et al., 2019; Luo et al., 2020; Yuan et al., 2021; Schlichtkrull et al., 2020) learn the important features and structural information by observing the predictive power of the model when noise is added to the input. The surrogate-based approaches (Vu & Thai, 2020; Huang et al., 2020; Zhang et al., 2021) learn a simple interpretable model for the local neighborhood of a query node and its prediction. The explanations generated by this simple model are treated as the explanations of the original model. We note that Bagel is, in general, applicable for any explainer which returns binary (hard) or continuous (soft) importance scores (as depicted in Figure 1) for the input features/nodes/edges as an explanation.
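As an illustration of two operations described above — extracting a node's computational graph and converting an edge mask into a node mask — the following PyTorch Geometric sketch may help. The helper names are ours and not from the Bagel library, and the max-over-incident-edges aggregation is one plausible choice of ours, since the conversion rule is not specified here.

```python
import torch
from torch_geometric.utils import k_hop_subgraph

def computational_graph(node_idx: int, k: int, edge_index: torch.Tensor, num_nodes: int):
    """Return the nodes and edges of the k-hop computational graph of node_idx."""
    subset, sub_edge_index, _mapping, _edge_sel = k_hop_subgraph(
        node_idx, k, edge_index, relabel_nodes=True, num_nodes=num_nodes
    )
    return subset, sub_edge_index

def edge_mask_to_node_mask(edge_mask: torch.Tensor, edge_index: torch.Tensor,
                           num_nodes: int) -> torch.Tensor:
    """Aggregate edge importance scores into node scores (max over incident edges)."""
    node_mask = torch.zeros(num_nodes)
    for (u, v), w in zip(edge_index.t().tolist(), edge_mask.tolist()):
        node_mask[u] = max(node_mask[u], w)
        node_mask[v] = max(node_mask[v], w)
    return node_mask
```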
## 2.2 Related Work On Evaluation For Post-Hoc Explanations

Evaluation of explanation methods for any predictive model is inherently tricky. Specifically, when evaluating already trained models, we are faced with the *lack of true explanations*. Collecting true explanations (sometimes referred to as ground truth) for GNNs is even more challenging due to the abstract nature of the input graphs. Moreover, depending on the explanation collection method, it is not always clear if the model used the ground truth in its decision-making process. Nevertheless, some current works employ small synthetic datasets seeded with a ground truth subgraph. Consequently, metrics such as *explanation accuracy* (Sanchez-Lengeling et al., 2020; Ying et al., 2019) were proposed, which measure the agreement of the found explanation with the ground truth. Observing the false optimism of the accuracy metric for small explanation subgraphs, Funke et al. (2022) proposed the use of precision instead of accuracy.

We need models that are not only accurate, but right for the right reasons. Towards this, we exploit text datasets consisting of right reasons or *human rationales* (DeYoung et al., 2019) to construct GNN models. Note that, while recently popular in the NLP community, comparison with human rationales is missing in current GNN explanation approaches. To address this gap, we introduce a metric called *plausibility*, which measures the agreement of explanations with the human rationales. Plausibility can be used in conjunction with the *faithfulness* metric, which actually evaluates the explainer.

An important notion for evaluating explanations is *faithfulness*, where the key idea is to measure how much the explanation characterizes the model's working. To measure faithfulness, Sanchez-Lengeling et al. (2020) degrade model performance by damaging the training dataset and measuring how each explanation method responds. The lack of ground truth again limits such a measure. Pope et al. (2019) proposed to compute faithfulness as the difference of accuracy (or predicted probability) between the original predictions and the new predictions after masking out the input features found by the explanation. This was called *Fidelity* in their work. As the features cannot be removed in their entirety to measure their impact, Funke et al. (2022) proposed RDT-Fidelity, based on rate distortion theory, defined as the expected predictive score of an explanation over all possible configurations of the non-explanation features. To measure faithfulness for different explanation types, Bagel uses RDT-Fidelity in addition to two complementary fidelity metrics similar to the one in Pope et al. (2019) and inspired by DeYoung et al. (2019), for the case when only node/edge level explanations are provided.

An important criterion to measure the goodness of an explanation is its size. For example, the full input is also a faithful explanation. However, humans find shorter explanations easier to analyze and reason about. Works such as Pope et al. (2019) measure the sparsity of an explanation as the fraction of features selected by the explainer. Noting that this definition is not directly applicable for soft-mask approaches, Funke et al. (2022) propose to quantify sparsity as the entropy over the normalized distribution of explanation masks. We use the entropy-based sparsity metric as it can be applied both for hard and soft masking approaches.

The authors in Sanchez-Lengeling et al. (2020) argued that the explanation should be stable under input perturbations. In particular, for graph classification, they perturbed test graphs by adding a few nodes/edges such that the final prediction remains the same as that for the unperturbed graph. The lower the change in the explanation under perturbations, the better the stability. A challenge here is that there is no principled way to find the perturbations. For example, a part of the explanation might be altered under random perturbations even if the prediction is unchanged.
In the following, we will see that the faithfulness metric of RDT-Fidelity already accounts for explanation stability without altering the explanation. We also note that there have been other benchmarks to study the robustness of GNN models (Fan et al., 2021; Zheng et al., 2021). However, we focus on explaining GNN model predictions and not robustness. Having said this, we affirm that Bagel could be used in a complementary manner to these existing benchmarks to test trustworthy GNN models.

## 3 Bagel: A Unified Framework For Evaluating Explanations

We now present our framework Bagel for evaluating GNN explanations. Specifically, Bagel unifies existing and our proposed notions into 2 main classes. In the *first class* of measures, we aim at evaluating the explanation methods in the sense of whether they truly describe the model's workings. The first class includes three metrics: *faithfulness*, *sparsity* and *correctness*. Faithfulness determines if an explanation alone can replicate the model's behavior. Sparsity focuses on rewarding shorter explanations. Correctness determines if the explanation method is able to detect any injected correlations responsible for altering the model's behavior. The metrics in the *second class* are aimed at evaluating the GNN model itself. Here we propose *plausibility*, which measures how close the decision-making process of the trained model (as revealed by explanations) is to human rationales.

## 3.1 Faithfulness: Can Explanations Approximate Model's Behavior?

The key idea here is to evaluate the ability of the explanation to characterize the model's working. Unlike previous works, we argue that there is no single measure for faithfulness which can be effectively used for all kinds of datasets and explanations. Consequently, we propose a set of two measures to quantify faithfulness depending on the dataset/explanation type.

- Rate distortion based fidelity. The fidelity of an explanation is usually measured by the ability of an explanation to approximate the model behavior (Ribeiro et al., 2016). For explanations which contain the feature attributions with or without structure attributions, we use the rate distortion theory based metric proposed in Funke et al. (2022) to measure the fidelity of an explanation. In short, a subgraph of the node's computational graph and its set of features are relevant for a classification decision if the expected classifier score remains nearly the same when randomizing the remaining features. Let X denote the input nodes and features of the computational graph. In particular, X corresponds to the matrix of nodes in the computational graph and their corresponding feature values. As we use node and feature explanation masks, we compute the final M(S) corresponding to some explanation S by an element-wise product of the node and feature masks. The *RDT-Fidelity* of explanation S with respect to the GNN f, input X and the noise distribution N is then given by

$$\mathcal{F}(\mathcal{S})=\mathbb{E}_{Y_{\mathcal{S}}|Z\sim\mathcal{N}}\left[\mathbb{1}_{f(X)=f(Y_{\mathcal{S}})}\right],\tag{2}$$

where the perturbed input is given by

$$Y_{\mathcal{S}}=X\odot M(\mathcal{S})+Z\odot(\mathbb{1}-M(\mathcal{S})),\quad Z\sim\mathcal{N},\tag{3}$$

where ⊙ denotes an element-wise multiplication, $\mathbb{1}$ a matrix of ones of the corresponding size, and N is a noise distribution. We choose the noise distribution as the global empirical distribution of the features, i.e., we sample the values from the underlying training data distribution.
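A minimal Monte-Carlo sketch of Eqs. (2)-(3) follows. It assumes each feature entry of the noise Z is resampled from its feature column's empirical distribution in a provided pool (one plausible reading of the global empirical distribution); the function names and the stub model are illustrative only.

```python
import numpy as np

def rdt_fidelity(f, X, mask, noise_pool, n_samples=100, seed=0):
    """Monte-Carlo estimate of RDT-Fidelity, Eqs. (2)-(3).

    f: classifier mapping a feature matrix to a predicted class,
    X: (n, d) features of the computational graph,
    mask: (n, d) explanation mask M(S) with entries in [0, 1],
    noise_pool: (m, d) feature rows (e.g. the training features)
    acting as the empirical noise distribution N.
    """
    rng = np.random.default_rng(seed)
    y_orig = f(X)
    agree = 0
    for _ in range(n_samples):
        # Z ~ N: resample every entry from its column's empirical distribution
        rows = rng.integers(0, noise_pool.shape[0], size=X.shape)
        Z = noise_pool[rows, np.arange(X.shape[1])]
        Y_s = X * mask + Z * (1.0 - mask)      # Eq. (3)
        agree += int(f(Y_s) == y_orig)         # indicator in Eq. (2)
    return agree / n_samples

# toy usage: a stub "model" that thresholds the mean of feature 0
f = lambda M: int(M[:, 0].mean() > 0)
X = np.ones((4, 3))
mask = np.zeros((4, 3)); mask[:, 0] = 1.0      # explanation keeps feature 0 only
pool = np.random.default_rng(1).normal(size=(100, 3))
print(rdt_fidelity(f, X, mask, pool))           # 1.0: feature 0 is never perturbed
```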
The purpose of adding noise is not to replace the unimportant features of the input with 0; rather, their values should not matter. Replacing unimportant features with 0 may cause side effects: in some datasets, the value 0 may carry some semantic meaning or introduce a bias towards some pooling strategy, for example, min-pooling. Also, drawing the noise from the global feature distribution ensures that the perturbed data points are still in the same distribution as the original data (Hooker et al., 2019).

Connection to explanation stability. As shown in Funke et al. (2022), explanations with high RDT-Fidelity are highly stable. A high fidelity score implies that the explanation has high predictive power under perturbations of the rest of the input. Unlike the strategy of Sanchez-Lengeling et al. (2020) to evaluate explanation stability, here it is ensured that the explanation itself is never altered.

The special case of dense feature representations. For some datasets it is more appropriate to consider only structure-based explanations. For example, when the features themselves are dense representations extracted using some black-box embedding method, feature explanations as well as feature perturbations might not make much sense. It is then more appropriate to check the abilities of the explanation with the rest of the nodes/edges removed and the features kept intact. Towards that, we employ the following measures of comprehensiveness and sufficiency, also used in DeYoung et al. (2019).

- Comprehensiveness and Sufficiency. For explanations which contain only nodes and/or edges, we adapt the comprehensiveness and sufficiency measures of DeYoung et al. (2019) for GNNs. Let G be the graph and G′ ⊆ G be the explanation graph with the important (attribution) nodes/edges. In particular, G′ is generated by removing all nodes/edges from G which are not part of the explanation. Let f be the trained GNN model and f(G)_j be the prediction made by the GNN for the jth class, where j is the predicted class. We measure fidelity by *comprehensiveness* (which answers the question: were all nodes/edges in the graph needed to make a prediction selected?) and *sufficiency* (are the extracted nodes/edges sufficient to arrive at the original prediction?):

$$\textit{sufficiency}=f\left(\mathcal{G}\right)_{j}-f\left(\mathcal{G}^{\prime}\right)_{j},\quad\textit{comprehensiveness}=f\left(\mathcal{G}\right)_{j}-f\left(\mathcal{G}\backslash\mathcal{G}^{\prime}\right)_{j}\tag{4}$$

A positive value of sufficiency implies that the probability prediction of f on G is higher than that on G′, which tells us that the nodes/edges in G′ are not sufficient to reach the same or a better prediction. A negative sufficiency score points out that the model f has a better prediction on G′ than on G, which signifies that the explainer was successful in eliminating certain noisy nodes, leading to better performance. Similar arguments hold for comprehensiveness. In short, these measures are not symmetric. A high *comprehensiveness* value shows that the prediction is most likely because of the explanation G′, and a low *comprehensiveness* value shows that G′ is mostly not responsible for the prediction. Since most of the explainers retrieve soft masks, we employ aggregated *comprehensiveness* and *sufficiency* measures.
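Before the aggregation over top-k bins described next, here is a minimal sketch of the two per-explanation scores of Eq. (4), assuming a model f that maps a graph to a vector of class probabilities; the edge-set graph representation and the stub model are purely illustrative.

```python
def suff_comp(f, G, G_expl, G_rest, j):
    """Sufficiency and comprehensiveness of Eq. (4).

    f: model mapping a graph to class probabilities,
    G: input graph, G_expl: explanation subgraph G',
    G_rest: complement graph G minus G', j: class predicted on G.
    """
    p_full = f(G)[j]
    sufficiency = p_full - f(G_expl)[j]        # f(G)_j - f(G')_j
    comprehensiveness = p_full - f(G_rest)[j]  # f(G)_j - f(G \ G')_j
    return sufficiency, comprehensiveness

# toy usage: graphs as edge sets, stub "model" scoring class 0 by edge count
f = lambda g: [len(g) / 10.0, 1.0 - len(g) / 10.0]
G = {(0, 1), (1, 2), (2, 3)}
G_expl = {(0, 1)}
print(suff_comp(f, G, G_expl, G - G_expl, j=0))  # approximately (0.2, 0.1)
```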
In particular, we divide the soft masks into |B| = 5 bins by using the top k ∈ B = {1%, 5%, 10%, 20%, 50%} of the explanation with respect to the soft mask values (Samek et al., 2016). The aggregated *sufficiency* is defined as $\frac{1}{|B|}\left(\sum_{k=1}^{|B|}f(\mathcal{G})_{j}-f(\mathcal{G}^{\prime}_{k})_{j}\right)$. The aggregated *comprehensiveness* is defined in a similar fashion.

Remark: We would like to point out that the subgraph G′ can have disconnected components or even sometimes isolated nodes. The main issue here is that when we convert soft masks to hard masks, we might lose the connectivity among the important nodes. It depends on the explainer whether it imposes a restriction of returning a connected component. As we are only evaluating the explainer, we do not add any additional constraint. Also, since this is a graph classification task, all nodes (from any component) are used towards the prediction in the global pooling. For a more holistic evaluation, we return the aggregated faithfulness across a set of important explanation subgraphs selected across a range of hard thresholds.

## 3.2 Sparsity: Are The Explanations Non-Trivial?

High faithfulness ensures that the explanation approximates the model behavior well. However, the complete input completely determines the model behavior. Thus, explanation sparsity is an important criterion for evaluation. Let p be the normalized distribution of explanation (feature) masks. Then the sparsity of an explanation is given by the entropy H(p) and is bounded from above by log(|M|), where M corresponds to the complete set of features or nodes. While the entire input can be a faithful explanation, it is important to evaluate an explanation with respect to its size. A shorter explanation is easier to analyze and is more human-understandable. We adopt the entropy-based definition of sparsity as in Funke et al. (2022) because of its applicability to both soft and hard explanation masks. In particular, let p denote the normalized distribution of node/edge/feature masks. We compute the sparsity of an explanation as the entropy over the mask distribution: $H(p) = -\sum_{\phi \in M} p(\phi)\log p(\phi)$.

## 3.3 Correctness: Can The Explanations Detect Externally Injected Correlations?

While the above measures ensure that the given explanation is predictive, certain applications might need explanations for model debugging, for example, to detect any spurious correlations picked up by the model, thereby increasing model bias. Towards that, we measure the correctness of an explanation in terms of its ability to recognize the *externally injected correlations* which alter the model's decision. A switch in the model's decision is evidence of the use of these injected correlations in the actual decision-making process. In particular, we first choose a set of incorrectly labelled nodes, V. To each such node v, we add edges to the nodes in the training data which have the same label as v. We call such edges *decoys*. We retrain the GNN model with the perturbed data. We measure the correctness of explanation S for nodes in V which are now correctly predicted in terms of precision and recall of the decoys in the returned explanation:

$$Precision_{C}=\frac{N_{de}}{N_{e}},\quad Recall_{C}=\frac{N_{de}}{N_{d}},$$

where N_{de} is the number of decoys in the obtained explanation, N_d is the total number of decoys injected, and N_e is the size of the retrieved explanation. Note that our proposed approach of using injected correlations is different from using a synthetic graph with seeded ground truth; in particular, for the seeded graph approach, it is not always clear if the ground truth is actually picked up by the model to make its decision.
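The sparsity and correctness scores above are straightforward to compute; a minimal sketch follows. It assumes explanations are given as non-negative mask values (for sparsity) and as node sets with decoys identified via the nodes they attach (an assumption, since decoys are edges while explanations here are node masks); the names and toy values are illustrative.

```python
import numpy as np

def sparsity(mask, eps=1e-12):
    """Entropy-based sparsity H(p) over the normalized mask distribution.

    Lower entropy means a sparser, more concentrated explanation;
    the maximum is log(|M|) for a uniform mask.
    """
    p = np.asarray(mask, dtype=float)
    p = p / (p.sum() + eps)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def correctness(explanation_nodes, decoy_nodes):
    """Precision_C and Recall_C of the injected decoys (Sec. 3.3)."""
    expl, decoys = set(explanation_nodes), set(decoy_nodes)
    n_de = len(expl & decoys)                        # decoys found in the explanation
    precision = n_de / len(expl) if expl else 0.0    # N_de / N_e
    recall = n_de / len(decoys) if decoys else 0.0   # N_de / N_d
    return precision, recall

# toy usage
print(sparsity([0.7, 0.2, 0.1]))             # ~0.80, concentrated mask
print(sparsity([1.0, 1.0, 1.0, 1.0]))        # log(4) ~ 1.39, uniform mask
print(correctness({1, 2, 3, 4}, {2, 3, 9}))  # ~ (0.5, 0.667)
```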
## 3.4 Plausibility: How Close Is The Model's Decision Process To Human Rationales?

Human congruence or plausibility (Lei et al., 2016; Lage et al., 2019; Strout et al., 2019) tries to establish how close or congruent the trained model is to human rationales for solving a predictive task. Trained models often exhibit the *clever-hans effect*; that is, predictive models can adopt spurious correlations in the training data or, due to misplaced inductive biases, produce right results for the wrong reasons. Towards this, data is collected from humans for perceptive tasks where humans explicitly provide their rationales. These human rationales are in turn used as ground truth for evaluating if trained models are right for the right reasons. In Figure 2, we showcase a movie review and the explanations generated (in red) by different explainers. The true label for this review is negative, and the GCN makes the correct prediction for the review.

Figure 2: An anecdotal example of explanations generated by different explainers. The figure shows the review text ("The first problem that fair game has is the casting of supermodel cindy crawford in the lead role. not that cindy does that bad... sure william is n't a bad actor. unfortunately he just does n't demonstrate it all in this movie...") with the words selected by the human rationales, GNNExp, Grad and CAM highlighted in red. The respective plausibility scores for the current example for GNNExp, Grad and CAM are 0.50, 0.54 and 0.61, respectively. We observe that the explanation of CAM agrees best with the human rationales.

For applications where obtaining human rationales is indeed possible, we propose the use of *token-level F1* for binary explanation masks and the area under the precision-recall curve (AUPRC) for soft masks. The tokens are words in the input text and are modelled as nodes in the graph. The human rationales are binary masks over the nodes. The token-level F1 score is computed as the macro-F1 for predicted binary explanation masks, where the human rationales serve as the true labels. For predicted soft explanation masks, we additionally measure the area under the precision-recall curve. Rather than fixing a threshold, AUPRC provides us a measure of the precision-recall trade-off across different decision thresholds.
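A minimal sketch of both plausibility scores follows, using scikit-learn's f1_score and average_precision_score (the latter as the AUPRC summary); binarizing the soft mask at its mean mirrors the choice described in Section 5.4, and the toy masks are illustrative.

```python
import numpy as np
from sklearn.metrics import f1_score, average_precision_score

def plausibility(rationale, soft_mask):
    """Token-level macro-F1 and AUPRC against human rationales (Sec. 3.4).

    rationale: binary human-rationale mask over the tokens/nodes,
    soft_mask: explainer importance scores over the same tokens.
    The soft mask is binarized at its mean for the F1 score.
    """
    soft_mask = np.asarray(soft_mask, dtype=float)
    hard_mask = (soft_mask > soft_mask.mean()).astype(int)
    token_f1 = f1_score(rationale, hard_mask, average="macro")
    auprc = average_precision_score(rationale, soft_mask)
    return token_f1, auprc

# toy usage: 5 tokens, the first two marked relevant by humans
print(plausibility([1, 1, 0, 0, 0], [0.9, 0.4, 0.3, 0.1, 0.0]))  # (1.0, 1.0)
```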
The reader might have noticed that this metric is similar to the explanation accuracy in earlier works. We argue against the use of the term *accurate* to measure plausibility, as similarity to human rationales does not always guarantee that the model has learnt an explanation which contains the reasoning of the model itself and not only that of the humans.

## 4 Experimental Setup

Models and Explainers. We demonstrate the use and advantage of the proposed framework by evaluating 9 explanation methods over 8 datasets and 4 GNN models. Currently, our benchmark consists of these GNN models: graph convolutional networks (GCN) (Kipf & Welling, 2017), graph attention network (GAT) (Veličković et al., 2018), the approximation of personalized propagation of neural predictions (APPNP) (Klicpera et al., 2019), and graph isomorphism network (GIN) (Xu et al., 2019). The models were chosen based on their differences in (i) exploiting inductive biases (based on different feature aggregation strategies), (ii) test performance (see Tables 9 and 10 in the Appendix) and (iii) response to injected correlations (see Table 5 and the corresponding discussion). Further details on the training of the GNNs are available in Appendix M.

We perform experiments with perturbation-based approaches like GNNExplainer (GNNExp) (Ying et al., 2019) and Zorro (Funke et al., 2022), surrogate methods like PGM-Explainer (PGM) (Vu & Thai, 2020), and gradient-based approaches like Grad (Simonyan et al., 2013), GradInput (Simonyan et al., 2013), Integrated Gradient (IG) (Sundararajan et al., 2017), SmoothGrad (Smilkov et al., 2017), CAM (Pope et al., 2019) and GradCAM (Pope et al., 2019). GNNExp returns soft feature masks and edge masks. We transform the edge masks into node masks, in which we equally distribute the edge importance score to both nodes sharing the edge. Further details of these explainers are available in Appendix B. As already mentioned, Bagel is extendable, and more approaches and explainers can easily be added.

## 4.1 Datasets

We now describe the new and existing datasets used in our evaluation framework and the corresponding rationale.

New Dataset for Plausibility. To measure the plausibility of an explanation, we first require the corresponding human rationales. Since the existing graph datasets do not have such annotated information, we transform a text sentiment prediction task into a graph classification task. Specifically, we adopt the Movie Reviews dataset (Zaidan & Eisner, 2008) from the ERASER benchmark (DeYoung et al., 2019). The task is binary classification, namely the differentiation between positive and negative movie reviews.

Construction of Movie Reviews Dataset. Each input instance or review is a passage of text, typically with multiple sentences. Each input review is annotated by humans such that the annotation reflects the actual "human" reasons for predicting the sentiment of the review. These annotations are extractive pieces of text, and we call them human rationales. We transform sentences into graphs using the graph-of-words approach (Rousseau & Vazirgiannis, 2013). As a pre-processing step, we remove stopwords, such as "the" or "a". The complete list of used stopwords is included in our repository. Each word is represented as a node, and all words within a sliding window of three are connected via edges. As features, we use the output of a pre-trained Glove model (Pennington et al., 2014). Figure 3 provides an example of a graph from the Movie Reviews dataset.
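A minimal sketch of this graph-of-words construction follows. It assumes the window of three covers a word and the two words following it, and that repeated words map to a single node; the stopword list is simplified and the per-node GloVe feature lookup is left out.

```python
def graph_of_words(tokens, stopwords, window=3):
    """Build a graph-of-words: each word becomes a node, and all words
    within a sliding window of `window` are connected via edges."""
    words = [t.lower() for t in tokens if t.lower() not in stopwords]
    index = {w: i for i, w in enumerate(dict.fromkeys(words))}  # one node per word
    edges = set()
    for i, w in enumerate(words):
        for u in words[i + 1 : i + window]:     # the next window-1 words
            if u != w:
                edges.add(tuple(sorted((index[w], index[u]))))
    return list(index), edges

# toy usage
tokens = "the casting of supermodel cindy crawford in the lead role".split()
nodes, edges = graph_of_words(tokens, stopwords={"the", "of", "in"})
print(nodes)   # ['casting', 'supermodel', 'cindy', 'crawford', 'lead', 'role']
print(sorted(edges))
```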
![7_image_0.png](7_image_0.png)

Figure 3: An example of text-to-graph generation. The graph corresponds to the input sentences *"' romeo and juliet ' , and ' the twelfth night ' . it is easier for me to believe that he had a wet dream and that 's how all his plays develop , but please spare me all of this unnecessary melodrama."*

Dataset to measure Comprehensiveness and Sufficiency. Note that the measures comprehensiveness and sufficiency are only applicable for node/edge explanations. We use the above-described Movie Reviews dataset for these two measures too. The rationale is that the node features are generated using Glove and are not human-understandable. In this case, a structure-based explanation would be more meaningful than a feature-based one. Further, we evaluate comprehensiveness and sufficiency on the MUTAG, PROTEINS and ENZYMES datasets.

Datasets for Correctness. We employ two citation datasets, Cora (Sen et al., 2008) and CiteSeer (Sen et al., 2008). After injecting correlations/decoys corresponding to incorrectly labeled nodes as described in Section 3.3, we re-train the GNN model. The rationale behind adding homophily-increasing correlations is the observation from previous works (Khosla et al., 2019; Zhu et al., 2020) that GNN performance increases with higher homophily. A model will have picked up these correlations if the previously incorrect nodes are now correctly predicted. We further evaluate the correctness of the explanation only for the newly correctly predicted nodes.

Datasets for RDT-Fidelity. We perform the *RDT-Fidelity* evaluation on both *node classification* and *graph classification* tasks. At the node level, we use the citation datasets Cora, CiteSeer, PubMed and ogbn-arxiv (Hu et al., 2020a) and the synthetic datasets of Ying et al. (2019). Table 9 shows the dataset statistics and GNN performances. For the node classification task, we randomly select 300 nodes for Cora, CiteSeer and PubMed, and 1000 nodes for ogbn-arxiv. We choose a smaller sample of nodes to get explanations due to the longer run times of certain explainers. The sample is chosen randomly to avoid any biases. Moreover, in Figure 11, we additionally provide the standard deviation of explainer performance corresponding to the node samples. For the graph classification task, we use the MUTAG (Debnath et al., 1991), PROTEINS (Borgwardt et al., 2005) and ENZYMES (Morris et al., 2020) datasets. We select 50 graphs for both the MUTAG and PROTEINS datasets and 200 graphs for the ENZYMES dataset.

## 5 Result Analysis

## 5.1 Faithfulness

| Mask | Method | Cora GCN | Cora GAT | Cora GIN | Cora APPNP | CiteSeer GCN | CiteSeer GAT | CiteSeer GIN | CiteSeer APPNP | PubMed GCN | PubMed GAT | PubMed GIN | PubMed APPNP |
|------|--------|------|------|------|------|------|------|------|------|------|------|------|------|
| Hard | Zorro | 0.97 | 0.97 | 0.96 | 0.97 | 0.97 | 0.97 | 0.97 | 0.96 | 0.96 | 0.97 | 0.97 | 0.96 |
| Hard | PGM | 0.84 | 0.77 | 0.60 | 0.89 | 0.92 | 0.93 | 0.73 | 0.95 | 0.78 | 0.69 | 0.74 | 0.96 |
| Soft | GNNExp | 0.71 | 0.66 | 0.52 | 0.65 | 0.68 | 0.69 | 0.51 | 0.62 | 0.67 | 0.73 | 0.67 | 0.72 |
| Soft | Grad | 0.15 | 0.18 | 0.19 | 0.17 | 0.17 | 0.19 | 0.28 | 0.18 | 0.37 | 0.43 | 0.42 | 0.37 |
| Soft | GradInput | 0.15 | 0.18 | 0.18 | 0.16 | 0.16 | 0.18 | 0.26 | 0.17 | 0.36 | 0.42 | 0.42 | 0.36 |
| Soft | SmoothGrad | 0.44 | 0.42 | 0.38 | 0.50 | 0.54 | 0.57 | 0.45 | 0.62 | 0.52 | 0.53 | 0.67 | 0.59 |
| Soft | IG | 0.45 | 0.47 | 0.26 | 0.51 | 0.53 | 0.70 | 0.45 | 0.62 | 0.52 | 0.56 | 0.68 | 0.59 |
| - | Empty Expl. | 0.15 | 0.18 | 0.18 | 0.16 | 0.16 | 0.18 | 0.26 | 0.17 | 0.36 | 0.42 | 0.42 | 0.36 |
| - | Random Expl. | 0.63 | 0.60 | 0.42 | 0.55 | 0.59 | 0.57 | 0.52 | 0.52 | 0.75 | 0.67 | 0.67 | 0.70 |

Table 2: Results for *RDT-Fidelity* for node classification. Column groups are the Cora, CiteSeer and PubMed datasets with the GCN, GAT, GIN and APPNP models.
RDT-Fidelity. In Table 2, we compare the RDT-Fidelity scores of various explanation methods. A common feature of Zorro and PGM is that they both learn the explanations from a sampled local dataset. The local dataset is created by perturbing the features of nodes from the computational graph (the neighborhood of the query nodes). While they employ different optimization strategies to find explanations, the result is a stable explanation, which is also reflected in our results. The gradient-based explanations achieve the lowest fidelity. We also choose empty and random explanations as baselines. In an empty explanation, the importance scores for all nodes and features are set to 0. In the case of a random explanation, we select node/feature masks randomly from a uniform distribution. We observe that the empty explanation performs similarly to GradInput. The reason for this is that the explanation mask output by GradInput is close to a zero vector.

| Method | GCN Suff. | GCN Comp. | GAT Suff. | GAT Comp. | GIN Suff. | GIN Comp. | APPNP Suff. | APPNP Comp. |
|------------|------|-------|------|------|------|------|------|------|
| GNNExp | 0.56 | -0.01 | 0.47 | 0.03 | 0.24 | 0.32 | 0.48 | 0.07 |
| Grad | 0.02 | 0.39 | 0.15 | 0.16 | 0.25 | 0.31 | 0.11 | 0.20 |
| GradInput | 0.07 | 0.36 | 0.14 | 0.16 | 0.22 | 0.33 | 0.12 | 0.24 |
| SmoothGrad | 0.08 | 0.34 | 0.15 | 0.23 | 0.25 | 0.27 | 0.14 | 0.23 |
| IG | 0.11 | 0.38 | 0.16 | 0.21 | 0.23 | 0.27 | 0.16 | 0.24 |
| CAM | 0.41 | 0.01 | 0.11 | 0.14 | 0.26 | 0.26 | 0.39 | 0.05 |
| GradCAM | 0.02 | 0.29 | 0.06 | 0.27 | 0.21 | 0.27 | 0.07 | 0.28 |

Table 3: Faithfulness as *comprehensiveness* (Comp.) and *sufficiency* (Suff.) measured for the Movie Reviews dataset. Low sufficiency and high comprehensiveness indicate high faithfulness.

For the graph classification task (see Table 11 in the Appendix), all methods, including the gradient-based approaches, perform relatively well except for the GCN model. PGM shows more consistent performance across all models and datasets. We leave out Zorro as it is not applicable for the graph classification task.

Comprehensiveness and Sufficiency. We evaluate the faithfulness of explanations for the Movie Reviews dataset using the aggregated *comprehensiveness* and *sufficiency* measures. The results for soft-mask explanations are shown in Table 3. GradCAM has the lowest *sufficiency*, which suggests that its explanations are sufficient to mimic the prediction of the GNN models. On the other hand, the explanations generated by GradCAM with GCN and GIN suggest that there still exist important parts of the input outside of the explanations which are required to approximate the GNN's prediction. On the other hand, GNNExp, which so far outperformed the gradient-based explanations for the node classification task, shows the worst sufficiency and comprehensiveness. Even if we use the complete feature set and only the node masks to evaluate explanations, the node masks for GNNExp are learned together with the feature explanations. This differs from the gradient-based approaches, which ignore feature and structure explanation trade-offs. The current performance of GNNExp indicates that it might not be appropriate to use entangled feature and structure explanations independently.
We report the *comprehensiveness* and *sufficiency* on the MUTAG, PROTEINS and ENZYMES datasets in Appendix D (see Tables 13 to 16). We observe that there is no single explainer which consistently outperforms the others with all GNNs on these 3 molecule datasets. In Table 16, we report the *sufficiency* and *comprehensiveness* on the molecule datasets trained with the GCN model. We calculate the *sufficiency* and *comprehensiveness* when the edge masks are used to generate induced subgraphs with different thresholds. GNNExp-Edge represents GNNExp when the edge masks are directly used as the explanations. GradEdge represents gradients over edges. We also use random edge masks as a baseline. GNNExp-Edge performs best on the MUTAG and ENZYMES datasets and GradEdge on the PROTEINS dataset.

## 5.2 Sparsity

As already mentioned, a complete input could already do well on all faithfulness measures. Therefore, we further look for sparser explanations. The results for node sparsity for explanations in the node classification task are provided in Table 4. For the hard masking approaches (Zorro and PGM), Zorro outperforms PGM with all GNN models except for GIN. Conversely, there is no clear winner for the soft mask approaches. The high sparsity scores for the soft-masking approaches imply a near-uniform node attribution and consequently lower interpretability. In general, the faithfulness and sparsity of an explanation should be analyzed together. A uniformly distributed explanation mask could already provide an explanation with high faithfulness, as it leads to using the complete input as an explanation. We also report feature sparsity for node classification in Appendix E (in Table 17). We observe similar trends for feature-level sparsity, where Zorro performs best on almost all datasets, except for SmoothGrad when GIN is trained on CiteSeer. We further report node sparsity on MUTAG, PROTEINS and ENZYMES in Appendix C (in Table 12), where PGM performs best on all three datasets.

| Mask | Method | Cora GCN | Cora GAT | Cora GIN | Cora APPNP | CiteSeer GCN | CiteSeer GAT | CiteSeer GIN | CiteSeer APPNP | PubMed GCN | PubMed GAT | PubMed GIN | PubMed APPNP |
|------|--------|------|------|------|------|------|------|------|------|------|------|------|------|
| Hard | Zorro | 1.58 | 1.59 | 2.17 | 1.48 | 1.26 | 1.09 | 1.58 | 1.07 | 1.51 | 1.31 | 2.18 | 1.25 |
| Hard | PGM | 2.06 | 1.82 | 1.66 | 1.99 | 1.47 | 1.59 | 1.10 | 1.54 | 1.64 | 1.16 | 1.62 | 2.93 |
| Soft | GNNExp | 2.48 | 2.49 | 2.56 | 2.51 | 1.67 | 1.67 | 1.70 | 1.68 | 2.70 | 2.71 | 2.71 | 2.71 |
| Soft | Grad | 2.48 | 2.34 | 2.25 | 2.35 | 1.70 | 1.61 | 1.55 | 1.60 | 2.91 | 2.76 | 3.11 | 2.73 |
| Soft | GradInput | 2.53 | 2.43 | 2.23 | 2.41 | 1.61 | 1.58 | 1.54 | 1.52 | 3.02 | 2.94 | 3.41 | 2.81 |
| Soft | SmoothGrad | 2.48 | 2.52 | 2.91 | 2.31 | 1.77 | 1.77 | 1.93 | 1.66 | 2.89 | 3.02 | 3.23 | 2.54 |
| Soft | IG | 2.49 | 2.50 | 2.84 | 2.31 | 1.76 | 1.77 | 1.91 | 1.66 | 2.84 | 2.89 | 3.06 | 2.58 |
| - | Random Expl. | 7.71 | 7.71 | 7.71 | 7.71 | 7.92 | 7.92 | 7.92 | 7.92 | 9.69 | 9.69 | 9.69 | 9.69 |

Table 4: Results for sparsity (computed as entropy over the mask distribution) for node classification. The lower the score, the sparser the explanation.

## 5.3 Correctness

The correctness results corresponding to different models and explainers are reported for Cora (in Table 6) and CiteSeer (in Table 18). We report precision, recall and F1 score by choosing the top k nodes for the soft explanations. For the hard-mask approaches, the number of returned nodes is listed under |S|. Note that the number of decoys added per node is 10.
For Table 6 and Table 18 we use k = 20.

| Model | Cora ✗ | Cora ✓ | Cora ↑(%) | CiteSeer ✗ | CiteSeer ✓ | CiteSeer ↑(%) |
|-------|--------|--------|-----------|------------|------------|---------------|
| GCN | 88 | 79 | 89.7 | 329 | 229 | 69.6 |
| GAT | 86 | 85 | 98.8 | 311 | 301 | 96.7 |
| GIN | 6 | 6 | 100 | 56 | 56 | 100 |
| APPNP | 73 | 70 | 95.8 | 280 | 252 | 90.0 |

Table 5: The number of incorrectly labelled nodes (✗) decreases after the addition of decoys. The number of newly correctly labelled nodes after injecting decoys is listed under ✓, and their percentage under ↑(%).

In Table 5, the effect of the decoys can be seen: most of the earlier incorrectly classified nodes are now correctly classified, except for GCN on CiteSeer. We also observe that the number of selected nodes for GIN is very low for the Cora dataset (i.e., only a few nodes were initially incorrectly labeled). GNNExp outperforms all other explainers in detecting the injected correlations for both Cora and CiteSeer (detailed results moved to Table 18 in the Appendix due to space constraints). Comparing soft mask and hard mask approaches in this setting is tricky, as for some approaches, like Zorro, we cannot control the explanation size. For example, for GAT, Zorro retrieved an explanation of size 40. A precision of 0.25 shows that it found all 10 injected correlations. The lack of a feature ranking, as available in soft mask approaches, makes it difficult to evaluate hard mask approaches for correctness. For a fairer evaluation, we further plot the performance of the soft mask approaches with different k in Appendix H. For example, GNNExp shows a large improvement when we increase the size of the explanation to 15. It is not surprising to see that the performance degrades when we increase the size of the explanation further, since it had already captured all injected decoys; now it returns some irrelevant nodes in the explanation. Furthermore, in Tables 20 and 21, we use the mean as a threshold to generate hard masks. As the mean threshold turns out to be very low for all approaches, almost all nodes of the computational graph are selected as the explanation. Consequently, we observe a very low correctness score (when measured in terms of precision).

## 5.4 Plausibility

Table 7 shows the *plausibility* scores computed for explanations of different GNN models. Recall that we compare explanations with human rationales to compute plausibility. The average size of the human rationales over the test dataset is 165. To compute the token-level F1 score, we use the mean as a threshold to generate hard masks from soft masks. We observe that all explainers assign the best plausibility scores to GCN. GIN obtains the second-best plausibility scores. We also observe that the overall difference in the plausibility scores over models is relatively small, with some exceptions like the combination of GIN and GNNExp. The corresponding explanation also has the largest size. This further highlights the issues of soft-hard mask conversion. AUPRC scores, which directly use the soft masks, are more stable. One surprising fact in these results is that even though the other GNN models achieve higher test accuracy than GIN (see Table 10 in the Appendix), their explanations overall have similar plausibility as those for GIN, except for GNNExp.
| Mask | Method | GCN P@k | GCN R@k | GCN F1 | GCN Size | GAT P@k | GAT R@k | GAT F1 | GAT Size | GIN P@k | GIN R@k | GIN F1 | GIN Size | APPNP P@k | APPNP R@k | APPNP F1 | APPNP Size |
|------|--------|------|------|------|----|------|------|------|----|------|------|------|----|------|------|------|----|
| Hard | Zorro | 0.19 | 0.80 | 0.30 | 45 | 0.25 | 0.83 | 0.37 | 40 | 0.26 | 0.45 | 0.27 | 33 | 0.22 | 0.79 | 0.33 | 38 |
| Hard | PGM | 0.11 | 0.22 | 0.15 | 20 | 0.18 | 0.36 | 0.24 | 20 | 0.18 | 0.36 | 0.25 | 20 | 0.19 | 0.38 | 0.25 | 20 |
| Soft | GNNExp | 0.42 | 0.84 | 0.56 | 20 | 0.44 | 0.88 | 0.59 | 20 | 0.50 | 1.00 | 0.67 | 20 | 0.34 | 0.67 | 0.58 | 20 |
| Soft | Grad | 0.23 | 0.46 | 0.31 | 20 | 0.29 | 0.58 | 0.39 | 20 | 0.30 | 0.60 | 0.40 | 20 | 0.33 | 0.67 | 0.45 | 20 |
| Soft | GradInput | 0.16 | 0.32 | 0.21 | 20 | 0.28 | 0.56 | 0.34 | 20 | 0.30 | 0.60 | 0.40 | 20 | 0.28 | 0.56 | 0.38 | 20 |
| Soft | SmoothGrad | 0.12 | 0.25 | 0.16 | 20 | 0.24 | 0.48 | 0.32 | 20 | 0.50 | 1.00 | 0.67 | 20 | 0.22 | 0.43 | 0.29 | 20 |
| Soft | IG | 0.16 | 0.32 | 0.22 | 20 | 0.24 | 0.49 | 0.33 | 20 | 0.50 | 1.00 | 0.67 | 20 | 0.28 | 0.55 | 0.37 | 20 |

Table 6: Correctness of the explanation for node classification on the Cora dataset. We use k = 20. Size denotes the explanation size |S|.

In such cases, an application user might want to look in more detail at specific correctly labeled instances to check if the model imitates human reasoning.

| Mask | Method | GCN auprc | GCN F1 | GCN Size | GAT auprc | GAT F1 | GAT Size | GIN auprc | GIN F1 | GIN Size | APPNP auprc | APPNP F1 | APPNP Size |
|------|--------|------|------|-----|------|------|-----|------|------|-----|------|------|-----|
| Hard | PGM | - | 0.42 | 25 | - | 0.43 | 25 | - | 0.43 | 25 | - | 0.43 | 25 |
| Soft | GNNExp | 0.46 | 0.54 | 168 | 0.43 | 0.54 | 149 | 0.45 | 0.35 | 410 | 0.45 | 0.53 | 158 |
| Soft | Grad | 0.44 | 0.52 | 265 | 0.38 | 0.51 | 158 | 0.40 | 0.52 | 156 | 0.38 | 0.50 | 255 |
| Soft | GradInput | 0.39 | 0.51 | 221 | 0.37 | 0.50 | 154 | 0.39 | 0.51 | 154 | 0.37 | 0.50 | 227 |
| Soft | SmoothGrad | 0.40 | 0.52 | 219 | 0.37 | 0.50 | 154 | 0.40 | 0.52 | 172 | 0.38 | 0.50 | 221 |
| Soft | IG | 0.37 | 0.49 | 225 | 0.37 | 0.50 | 188 | 0.39 | 0.51 | 186 | 0.38 | 0.50 | 219 |
| Soft | CAM | 0.54 | 0.61 | 224 | 0.40 | 0.51 | 177 | 0.44 | 0.55 | 156 | 0.44 | 0.53 | 195 |
| Soft | GradCAM | 0.67 | 0.34 | 175 | 0.67 | 0.35 | 191 | 0.67 | 0.34 | 166 | 0.67 | 0.34 | 188 |

Table 7: Plausibility for the Movie Reviews dataset measured by AUPRC and F1 score (macro). Size represents the average size |S| of the explanations generated by the explainers.

We now provide a concrete example of how the plausibility metric can be used in conjunction with the faithfulness metric to evaluate the model's decision-making process. From our previous example in Figure 2, we choose Grad, which achieves the best faithfulness score on this example. In Figure 4, we compare the explanations of different models as provided by the Grad explainer and compare the explanations based on plausibility. We observe that for GCN it achieves the highest faithfulness and plausibility scores.
Figure 4: An example to illustrate the use of *plausibility* in conjunction with *faithfulness* to select the model that best agrees with human rationales. The figure shows the review text from Figure 2 with the words selected by the Grad explanations for GCN, GAT and APPNP highlighted. We compare different models for Grad explanations because Grad explanations are highly faithful. Grad explanations over GCN agree best with human rationales.

![12_image_0.png](12_image_0.png)

Figure 5: An overview of performances on the synthetic dataset. For each metric, the higher the better.

## 5.5 All Metrics On Synthetic Dataset

In Table 8, we report performances over all metrics on the synthetic dataset using GCN. We use the synthetic dataset proposed by Ying et al. (2019), where a five-node house graph is attached to randomly selected nodes and works as the ground truth. Since the ground truths are the reasons for the prediction by design of the dataset construction, we treat the ground truth nodes as decoys to compute correctness. Further details on the dataset are available in Appendix A. We train a 3-layer GCN model on the synthetic dataset. We evaluate all metrics on 700 randomly selected nodes. We added SubgraphX (Yuan et al., 2021) for this experiment. For the hard-mask approaches Zorro and SubgraphX, we cannot retrieve explanations of a pre-defined size. For PGM and all soft mask approaches, we choose the top k nodes as the explanation. We use k = 10. Similar to the real-world datasets, Zorro generates the most faithful explanations for the synthetic dataset. Among the gradient-based approaches, SmoothGrad generates the most faithful explanations. The low sparsity and high precision of the explanations show that the explanations generated by SubgraphX are small and correct as well. Further, we observe that the gradient-based approaches like GradInput, SmoothGrad and IG assign a high plausibility score to GCN. In Figure 5, we plot all four metrics on the synthetic dataset. Since lower is better for the used sparsity metric, we use the reciprocal of the original sparsity in the plot.

## 6 Conclusion

We develop a unified, modular, extendable benchmark called Bagel to evaluate GNN explanations on four diverse axes: 1) *faithfulness*, 2) *sparsity*, 3) *correctness*, and 4) *plausibility*. Faithfulness measured via *RDT-Fidelity* can be employed for a wide set of tasks and datasets. We note that high RDT-Fidelity also implies high explanation stability. The *comprehensiveness* and *sufficiency* measures should be used to evaluate the faithfulness of structure-based explanations where perturbing features might not be feasible. It is important to measure the sparsity of the explanation to avoid the extreme case of using the whole input as an explanation. Correctness should be used with care, as injecting appropriate correlations to change a model's decision is not always straightforward.
| Mask | Method | RDT-Fidelity | Sparsity | P@k | R@k | auprc | F1 |
|------|--------|--------------|----------|------|------|-------|------|
| Hard | Zorro | 0.98 | 1.68 | 0.40 | 0.69 | na | 0.55 |
| Hard | PGM | 0.67 | 2.28 | 0.38 | 0.73 | na | 0.50 |
| Hard | SubgraphX | 0.82 | 1.25 | 0.72 | 0.53 | na | 0.57 |
| Soft | GNNExp | 0.74 | 3.65 | 0.37 | 0.75 | 0.56 | 0.50 |
| Soft | Grad | 0.84 | 2.60 | 0.46 | 0.92 | 0.69 | 0.61 |
| Soft | GradInput | 0.72 | 2.13 | 0.49 | 0.99 | 0.74 | 0.66 |
| Soft | SmoothGrad | 0.92 | 2.17 | 0.49 | 0.99 | 0.74 | 0.66 |
| Soft | IG | 0.81 | 2.06 | 0.49 | 0.99 | 0.74 | 0.66 |
| - | Zero Exp. | 0.50 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| - | Random Exp. | 0.83 | 7.05 | 0.01 | 0.01 | 0.01 | 0.01 |

Table 8: All metrics on the synthetic dataset. We use k = 10. RDT-Fidelity measures faithfulness, P@k and R@k correctness, and auprc and F1 plausibility.

Plausibility measures the joint utility of the explanation method and the trained GNN model with respect to human rationales. Assuming that the generated explanations are faithful to the model, one can use plausibility to check the model's congruence to human rationales. This means that a loss of plausibility can be either due to human-incongruent correlations or due to non-faithfulness of the explainer. To fully interpret the results of plausibility, one should first check explanation faithfulness.

## Broader Impact Statement

By providing a unified evaluation framework, we hope to have a positive impact on the further development and holistic evaluation of explainability techniques for graph neural networks. Given the growing applications of GNNs in various sensitive domains, such assessment benchmarks are essential to provide different perspectives on the provided explanation. On the other hand, automatic deployment of GNN explanations has been shown to lead to leakage of data privacy (Olatunji et al., 2022). A more general perspective on privacy-transparency trade-offs in graph machine learning is provided in Khosla (2022).

## References

Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. *Bioinformatics*, 21(suppl_1):i47–i56, 2005.

Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. *Journal of Medicinal Chemistry*, 34(2):786–797, 1991.

Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. Eraser: A benchmark to evaluate rationalized nlp models. *arXiv preprint arXiv:1911.03429*, 2019.

Thi Ngan Dong, Stefanie Mucke, and Megha Khosla. Mucomid: A multitask graph convolutional learning framework for mirna-disease association prediction. *IEEE/ACM Transactions on Computational Biology and Bioinformatics*, 2022.

Lukas Faber, Amin K. Moghaddam, and Roger Wattenhofer. When comparing to ground truth is wrong: On evaluating gnn explanation methods. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, pp. 332–341, 2021.

Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, and Charu Aggarwal. Jointly attacking graph neural network and its explanations. *arXiv preprint arXiv:2108.03388*, 2021.

Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric.
In *ICLR Workshop on Representation Learning on Graphs and Manifolds*, 2019.

Thorben Funke, Megha Khosla, Mandeep Rathee, and Avishek Anand. Zorro: Valid, sparse, and stable explanations in graph neural networks. *IEEE Transactions on Knowledge and Data Engineering*, 2022.

Thomas Gaudelet, Ben Day, Arian R Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy B R Hayter, Richard Vickers, Charles Roberts, Jian Tang, David Roblin, Tom L Blundell, Michael M Bronstein, and Jake P Taylor-King. Utilizing graph machine learning within drug discovery and development. *Briefings in Bioinformatics*, 22(6), 05 2021. ISSN 1477-4054. doi: 10.1093/bib/bbab159. URL https://doi.org/10.1093/bib/bbab159. bbab159.

William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In *NIPS*, 2017.

Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. In *Advances in Neural Information Processing Systems*, pp. 9737–9748, 2019.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. *arXiv preprint arXiv:2005.00687*, 2020a.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. *arXiv preprint arXiv:2005.00687*, 2020b.

Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, and Yi Chang. Graphlime: Local interpretable model explanations for graph neural networks. *arXiv preprint arXiv:2001.06216*, 2020.

M. Khosla, V. Setty, and A. Anand. A comparative study for unsupervised network representation learning. *TKDE*, 2019.

Megha Khosla. Privacy and transparency in graph machine learning: A unified perspective. In *Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI) at International Conference on Information and Knowledge Management (CIKM'22)*, 2022.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *ICLR*, 2017.

Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. *International Conference on Learning Representations (ICLR)*, 2019.

Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, and Finale Doshi-Velez. An evaluation of the human-interpretability of explanation. *arXiv preprint arXiv:1902.00006*, 2019.

Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. *arXiv preprint arXiv:1606.04155*, 2016.

Yang Liu, Sujay Khandagale, Colin White, and Willie Neiswanger. Synthetic benchmarks for scientific research in explainable machine learning. *arXiv preprint arXiv:2106.12543*, 2021.

Haohui Lu and Shahadat Uddin. A weighted patient network-based framework for predicting chronic diseases using graph neural networks. *Scientific Reports*, 11(1):1–12, 2021.

Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In *Advances in Neural Information Processing Systems*, pp. 4765–4774, 2017.

Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. *arXiv preprint arXiv:2011.04573*, 2020.

Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs.
In *ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)*, 2020. URL www.graphlearning.io.

Iyiola E Olatunji, Mandeep Rathee, Thorben Funke, and Megha Khosla. Private graph extraction via feature explanations. *arXiv preprint arXiv:2206.14724*, 2022.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 1532–1543, 2014.

Phillip E Pope, Soheil Kolouri, Mohammad Rostami, Charles E Martin, and Heiko Hoffmann. Explainability methods for graph convolutional neural networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10772–10781, 2019.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1135–1144, 2016.

François Rousseau and Michalis Vazirgiannis. Graph-of-word and tw-idf: new approach to ad hoc ir. In *Proceedings of the 22nd ACM International Conference on Information & Knowledge Management*, pp. 59–68, 2013.

Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. *IEEE Transactions on Neural Networks and Learning Systems*, 28(11):2660–2673, 2016.

Benjamin Sanchez-Lengeling, Jennifer Wei, Brian Lee, Emily Reif, Peter Wang, Wesley Wei Qian, Kevin McCloskey, Lucy Colwell, and Alexander Wiltschko. Evaluating attribution for graph neural networks. *Advances in Neural Information Processing Systems*, 33, 2020.

Michael Sejr Schlichtkrull, Nicola De Cao, and Ivan Titov. Interpreting graph neural networks for nlp with differentiable edge masking. *arXiv preprint arXiv:2010.00577*, 2020.

Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T Schütt, Klaus-Robert Müller, and Grégoire Montavon. Higher-order explanations of graph neural networks via relevant walks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(11):7581–7596, 2021.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. *AI Magazine*, 29(3):93–93, 2008.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. *arXiv preprint arXiv:1312.6034*, 2013.

Jaspreet Singh and Avishek Anand. Model agnostic interpretability of rankers via intent modelling. In *Conference on Fairness, Accountability, and Transparency*, 2020.

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *arXiv preprint arXiv:1706.03825*, 2017.

Julia Strout, Ye Zhang, and Raymond J Mooney. Do human rationales improve machine explanations? *arXiv preprint arXiv:1905.13714*, 2019.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *Proceedings of the 34th International Conference on Machine Learning - Volume 70*, pp. 3319–3328. JMLR.org, 2017.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. *ICLR*, 2018.

Minh N. Vu and My T. Thai. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks.
In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020*, 2020.

Sarah Wiegreffe and Ana Marasović. Teach me to explain: A review of datasets for explainable nlp. *arXiv preprint arXiv:2102.12060*, 2021.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In *International Conference on Learning Representations*, 2019.

Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 974–983, 2018.

Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnn explainer: A tool for post-hoc explanation of graph neural networks. *arXiv preprint arXiv:1903.03894*, 2019.

Hao Yuan, Jiliang Tang, Xia Hu, and Shuiwang Ji. Xgnn: Towards model-level explanations of graph neural networks. In *KDD '20*, pp. 430–438. Association for Computing Machinery, 2020.

Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. *arXiv preprint arXiv:2102.05152*, 2021.

Omar Zaidan and Jason Eisner. Modeling annotators: A generative approach to learning from annotator rationales. In *Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing*, pp. 31–40, 2008.

Yue Zhang, David Defazio, and Arti Ramesh. Relex: A model-agnostic relational model explainer. In *Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 1042–1049, 2021.

Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, and Jie Tang. Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning. In J. Vanschoren and S. Yeung (eds.), *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks*, volume 1, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/6cdd60ea0045eb7a6ec44c54d29ed402-Paper-round2.pdf.

Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. *Advances in Neural Information Processing Systems*, 33:7793–7804, 2020.
Review 1:

Summary: The authors propose a benchmark for GNN explanations. They study a large number of popular explainers, different models, and different datasets. They propose 4 different metrics (faithfulness, correctness, sparsity, and plausibility) that aim to capture various important aspects of what makes a good explanation.

Strengths and Weaknesses:

Strengths:
- The problem addressed in the paper is important since there is a pressing need for a unified and fair comparison of GNN explainer methods.
- The benchmark shifts the focus away from toy datasets with planted explanations (which is the default evaluation).

Weaknesses:
- The authors transform edge masks into node masks by equally distributing the edge importance score. This is a major drawback. First, there is a bias towards high-degree nodes. A high-degree node with many unimportant edges may even end up being more important than a low-degree node with a few important edges. Second, this implicitly assumes that the importance scores are additive. Finally, this is not evaluating the explanation methods on their own terms, which might have been defined with edge importance in mind, making it unfair, contrary to the motivation. More importantly, it is not necessary since some metrics can handle edge importance.
- It is not clear whether sparsity is a useful metric. The size of the true explanation -- the true subgraph that the model uses for prediction -- may vary for different graphs, and moreover may vary with the size of the graph (and the size of the k-hop neighborhood for node-level tasks). As an extreme example, we can create a model that randomly selects a subgraph of a varying size, throws away the rest of the graph and then makes a prediction. The true explanation should recover this subgraph regardless of the sparsity.
- In general none of the metrics are very convincing. Sparsity, plausibility, sufficiency and comprehensiveness have different issues as explained above and in the detailed comments below. Correctness applies only to node-level homophilic tasks. RDT-Fidelity seems somewhat useful, but one can argue it tells us more about model stability than about the explanation. Moreover, it only applies to the node features. Extensions with a perturbation to edge features would be interesting.

Further detailed comments:
- The framing in the abstract and introduction of the paper is misleading. Based on the wording and Figure 1, it appears as though all metrics are applicable to all datasets and all tasks, but this is not the case.
- For example, plausibility is only applicable where a human rationale is available. The authors reuse the movie reviews dataset and construct an ad-hoc graph based on the text (all words within a sliding window of three are connected via edges). While this might be a good toy experiment, it's hard to draw useful conclusions that generalize since e.g. the distribution of graphs is completely different from other "naturally" occurring graphs such as molecules.
- The correctness metric only applies to node-level tasks. Moreover, it's only suitable for homophilic graphs (and models).
- It is not clear to me that the plausibility metric conceptually fits with the rest of the metrics or that it belongs in the proposed benchmark. As the authors state, plausibility measures how close the decision-making process of the trained model (as revealed by explanations) is to human rationales. However, this is a statement about the model and not the explainer.
In other words, the explainer may be perfectly accurate, but this score can still be low because the way the model makes decisions is different. This is not to say that this metric is useless or that it has to be omitted, but at the very least this distinction compared to the other metrics needs to be discussed in more detail.
- As currently defined, sufficiency promotes high-confidence predictions. As the authors state, a negative score signifies a higher $f(G')_j$. However, this does not seem to be desirable in all cases. For example, let's say that the graph is on the decision boundary between two classes (and that this is correct due to aleatoric uncertainty, i.e., the true $p(y|G)$ is uncertain). Then, we want the predicted distribution on the subgraph to also be similar. One measure for this would be the KL divergence between the predicted distributions over classes for $G$ and $G'$. A similar argument applies to comprehensiveness.
- Relatedly, you might be able to manipulate these metrics by simply applying temperature scaling on the softmax, although this is a property of the model and not the explainer.
- In Eqs. 2 and 3 the authors say that they use the global empirical distribution of the features as the noise distribution. First, it is not clear how exactly this is computed, and more details need to be provided. Second, it is not well justified why this is a good choice and what its implicit bias is.
- Since some explainers produce hard masks, it is not clear why aggregated sufficiency/comprehensiveness makes sense for them. Moreover, in the aggregation equal weight is given to all bins, which is not necessarily justified.
- The authors do not discuss the case of negative importance.

Minor comments:
- It might be helpful to give a short introduction to the evaluated explainers (e.g. in the appendix).
- The authors claim that they develop a unified, modular and extendable benchmark, but it is not clear what the modules are and how easy it is to extend them.
- The authors state that "there is no clear winner in GNN explanation methods". One can argue that based on the results Zorro is one candidate since, for the models where it is applicable, it does consistently perform in the top.

Requested Changes:
- Evaluate node- and edge-level importance as given by the explainers without transformations. This means that the metrics would need to be adapted.
- Pareto plots that show the trade-offs between different metrics and from which it is easier to read off whether a certain explainer dominates others.

Broader Impact Concerns: None.

==================================================

Review 2:

Summary: The authors provide a framework for evaluating explanations of graph neural networks. They provide metrics to check whether a GNN explanation is faithful, sparse, correct and/or plausible. They apply their metrics using different datasets, models and explanation methods.

Strengths and Weaknesses:

pros:
- The paper is clearly written and discusses a very important subject.
- The authors provide a good and intuitive experimental setup where different properties of different explanation methods are examined.

cons:
- The references are not complete. The paper is about evaluating explanation methods for GNNs, yet only very few GNN explanation methods are provided. In my opinion, the authors only use and cite 3 different explanation methods: GNNExpl, Zorro, PGM. The other explanation methods are not GNN-specific.
Yet there exists a large landscape of explanation methods that are particularly designed for GNNs, to name a few:

SubgraphX: On Explainability of Graph Neural Networks via Subgraph Exploration, Hao Yuan et al. 2021.
GNN-LRP: Higher-Order Explanations of Graph Neural Networks via Relevant Walks, Schnake et al. 2021.
GraphMask: Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking, Michael Schlichtkrull et al. 2021.
PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks, Minh et al., 2020.

- The authors restrict themselves to GNN explanation methods that consider nodes or edges. There exist other approaches that consider walks (GNN-LRP), subgraphs (SubgraphX) or sequences of subgraphs (GraphMask), which are neither cited nor discussed. If the authors claim to develop metrics for general GNN explanation methods, this has to be mentioned. Particularly because some evaluation metrics would not be directly applicable. For example, the subgraph search G' in faithfulness becomes much more complex when not considering only edges or nodes.
- It remains unclear how the metrics are restricted to GNN explanations. Isn't this adaptable to any neural network, or even any ML model? Please discuss this.
- The actual contribution seems quite mild to me. Particularly because parts of faithfulness and sparsity have already been discussed in previous work (e.g. in Pope et al.). So in this view only correctness and plausibility are new to me. Please be clearer about what is new and maybe also provide experiments which show how your metrics differ from those that already exist.

Requested Changes: My requested changes are provided with the description of the weaknesses, above. A minor change: I think in the first line of page 4 you mean \mathcal{G}' and \mathcal{G} rather than just G' and G.

Broader Impact Concerns: The paper doesn't seem to be integrated sufficiently in the existing research landscape, since many references are missing. In addition, as said above, the actual contribution seems quite mild or not sufficiently discussed.

==================================================

Review 3:

Summary: This work presents a benchmark for evaluating graph neural network explanations. The benchmark contains multiple metrics and datasets. Most metrics are from existing works, but the paper seems to propose a new metric named "correctness" which checks whether the explanations can detect externally injected correlations.

Strengths and Weaknesses:

Strengths:
- This work provides a benchmark which may help future works to evaluate explanation methods.
- The work constructed a Movie Reviews dataset using NLP to construct graphs which have corresponding human rationales for measuring plausibility.

Weaknesses:
- The contribution is limited as a significant part of this paper is about combining existing metrics and datasets.
- For a benchmark work, this paper is not comprehensive enough to contain multiple different datasets for every metric. Metrics such as comprehensiveness, sufficiency, and plausibility are only evaluated on the single Movie Reviews dataset created in this work.
- A new "correctness" metric is proposed, but the soundness of the new metric is not justified by extensive experiments or theory, i.e., why this is a good metric and why having a high "correctness" is desirable for explanation methods.

----Post rebuttal updates----

Thanks to the authors for the response.
Constructing the graph classification dataset from the text classification task is indeed a contribution of this paper. But I am not quite convinced about the soundness of plausibility and correctness:
- Plausibility: I don't think a good interpretation should match human rationales, as the interpretation should faithfully interpret the behavior of the model, which may make inferences in a totally different way compared to humans;
- Correctness: In this metric, the paper mentions "We retrain the GNN model with the perturbed data". I think it is hard to tell whether a good interpretation should identify the decoys if the models are retrained, which is a complex process. The authors provided some explanation in the author response, but the soundness of the metric is still not justified by experiments or theory.

I think the paper may be improved to better justify the soundness of the metrics or propose more sound metrics.

Requested Changes:
- [Critical] Include multiple different datasets for every metric.
- [Critical] Justify the new "correctness" metric.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Reject

Comment: The paper proposes a new benchmark framework, called BAGEL, for evaluating explanations. BAGEL consists of evaluation criteria and datasets, some of which are novel. In the experiments, popular explanation methods are evaluated in the framework. The current manuscript simply introduces the set of measures and datasets in BAGEL, and applies them to some explanation methods. What is missing is support for the usefulness of BAGEL, i.e., the importance of the measures and datasets, and convincing readers that BAGEL will help push the field of XAI for GNNs. Although BAGEL includes a new evaluation measure, the authors failed to convince reviewers about its usefulness. Some previous papers proposed sets of evaluation measures, and the authors selected a particular new set. What is missing in the current manuscript is a discussion of what the problems with the previous sets were, and how the selected ones remedy them. Also, the paper should justify its choice of measures by arguing their optimality, necessity, and sufficiency (not necessarily rigorously though). The authors should discuss what property is overlooked if one of the BAGEL measures is missing, why adding any of the unselected existing measures is redundant, and what happens if you replace one BAGEL measure with a similar existing one, etc. Without such arguments, readers wouldn't be convinced that BAGEL would significantly contribute to XAI. Similar arguments should be given for the selected datasets. How do the selected datasets complement each other, and why is adding another dataset redundant? I'd expect that the authors would really show the benefit of BAGEL in a future version, so that they can convince readers about the usefulness of BAGEL.

==================================================
# Dynamic Regret Analysis Of Safe Distributed Online Optimization For Convex And Non-Convex Problems

Ting-Jui Chang *chang.tin@northeastern.edu*
Department of Mechanical and Industrial Engineering
Northeastern University

Sapana Chaudhary *sapanac@tamu.edu*
Department of Electrical and Computer Engineering
Texas A&M University

Dileep Kalathil *dileep.kalathil@tamu.edu*
Department of Electrical and Computer Engineering
Texas A&M University

Shahin Shahrampour *s.shahrampour@northeastern.edu*
Department of Mechanical and Industrial Engineering
Northeastern University

Reviewed on OpenReview: *https://openreview.net/forum?id=xiQXHvL1eN*

## Abstract

This paper addresses safe distributed online optimization over an unknown set of linear safety constraints. A network of agents aims at jointly minimizing a global, *time-varying* function, which is only partially observable to each individual agent. Therefore, agents must engage in *local* communications to generate a *safe* sequence of actions competitive with the best minimizer sequence in hindsight, and the gap between the performance of the two sequences is quantified via dynamic regret. We propose distributed safe online gradient descent (D-Safe-OGD) with an exploration phase, where all agents estimate the constraint parameters collaboratively to build estimated feasible sets, ensuring the action selection safety during the optimization phase. We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3}\sqrt{\log T} + T^{1/3}C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence. We further prove a dynamic regret bound of $O(T^{2/3}\sqrt{\log T} + T^{2/3}C_T^*)$ for certain non-convex problems, which establishes the first dynamic regret bound for a *safe distributed* algorithm in the *non-convex* setting.

## 1 Introduction

Online learning (or optimization) is a sequential decision-making problem modeling a repeated game between a learner and an adversary (Hazan, 2016). At each round $t$, $t \in [T] \triangleq \{1, \ldots, T\}$, the learner chooses an action $\mathbf{x}_t$ in a convex set $\mathcal{X} \subseteq \mathbb{R}^d$ using the information from previous observations and suffers the loss $f_t(\mathbf{x}_t)$, where the function $f_t : \mathcal{X} \to \mathbb{R}$ is chosen by the adversary. Due to the sequential nature of the problem, a commonly used performance metric is *regret*, defined as the difference between the cumulative loss incurred by the learner and that of a benchmark comparator sequence. When the comparator sequence is fixed, this metric is called *static* regret, defined as follows

$$\mathbf{Reg}_{T}^{s}\triangleq\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t})-\min_{\mathbf{x}\in{\mathcal{X}}}\sum_{t=1}^{T}f_{t}(\mathbf{x}).\tag{1}$$

Static regret is well-studied in the online optimization literature. In particular, it is well-known that online gradient descent (OGD) achieves an $O(\sqrt{T})$ (respectively, $O(\log T)$) static regret bound for convex (respectively, exp-concave and strongly convex) problems (Zinkevich, 2003; Hazan et al., 2007), and these bounds are optimal in the sense of matching the lower bound of regret in the respective problems (Hazan, 2016). For non-convex problems, however, we expect that the standard regret notion used in the convex setting may not be a tractable measure for gauging the algorithm performance. For example, in the context of online non-convex optimization, Hazan et al. (2017) quantified the regret in terms of the norm of (projected) gradients, consistent with the stationarity measure in offline optimization. More recently, Ghai et al.
(2022) showed that under certain geometric and smoothness conditions, OGD applied to non-convex functions is an approximation of online mirror descent (OMD) applied to convex functions under a reparameterization. In view of this equivalence, OGD achieves an $O(T^{2/3})$ static regret as defined in (1). A more stringent benchmark for measuring the performance of online optimization algorithms is the *dynamic* regret (Besbes et al., 2015; Jadbabaie et al., 2015), defined as

$$\mathbf{Reg}_{T}^{d}\triangleq\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t})-\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t}^{*}),\tag{2}$$

where $\mathbf{x}_t^* \triangleq \operatorname{argmin}_{\mathbf{x}\in\mathcal{X}} f_t(\mathbf{x})$. It is well-known that dynamic regret scales linearly with $T$ in the worst-case scenario, when the function sequence fluctuates drastically over time. Therefore, various works have adopted a number of variation measures to characterize the dynamic regret bound. We provide a detailed review of these measures in Section 2 and describe the *safe distributed* online optimization problem (which is the focus of this work) in the next section.

## 1.1 Safe Distributed Online Optimization

There are two distinctive components that make "safe distributed online optimization" more challenging than standard centralized online optimization:

(i) **Distributed Setting:** Distributed online optimization has been widely applied to robot formation control (Dixit et al., 2019b), distributed target tracking (Shahrampour & Jadbabaie, 2017), and localization in sensor networks (Akbari et al., 2015). In this setting, a network of $m$ agents aims at solving the online optimization problem collectively. The main challenge is that the time-varying function sequence is only partially observable to each individual agent. Each agent $j \in [m]$ receives (gradient) information about the "local" function $f_{j,t}(\cdot)$, but the objective is to control the dynamic regret of each agent with respect to the global function $f_t(\cdot) = \sum_{i=1}^{m} f_{i,t}(\cdot)$, i.e.,

$$\mathbf{Reg}_{j,T}^{d}\triangleq\sum_{t=1}^{T}f_{t}(\mathbf{x}_{j,t})-\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t}^{*})=\sum_{t=1}^{T}\sum_{i=1}^{m}f_{i,t}(\mathbf{x}_{j,t})-\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t}^{*}).\tag{3}$$

Therefore, despite the availability of only local information, the action sequence of agent $j$ is evaluated in the global function and is compared to the global minimizer sequence (a small numerical illustration is sketched at the end of this subsection). It is evident that agents must communicate with each other (subject to a graph/network constraint) to approximate the global function, which is common to distributed problems. The discussion on the network structure and communication protocols is provided in Section 3.3.

(ii) **Safety Constraints:** The literature on distributed online optimization has mostly focused on problems where the constraint set $\mathcal{X}$ is known, and less attention has been given to problems with *unknown* feasible sets (see Section 2 for a comprehensive literature review). However, in many real-world applications, this set represents certain safety constraints that are *unknown* in advance. Examples include voltage regulation constraints in power systems (Dobbe et al., 2020), transmission bandwidth in communication networks due to human safety considerations (Luong et al., 2019), and the stabilizing action set in robotics applications (Åström & Murray, 2010). In these scenarios, one needs to perform parameter estimation to *learn* the safety constraints while ensuring that the regret is as small as possible.
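To make the per-agent regret (3) concrete, the following minimal sketch evaluates hypothetical action sequences in the global loss and compares them against the global minimizer sequence. All problem data here is synthetic and purely illustrative; for quadratic local losses, the global minimizer at each round is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
m, T, d = 4, 50, 3                          # agents, rounds, dimension

# Hypothetical local losses f_{i,t}(x) = ||x - theta[i, t]||^2.
theta = rng.normal(size=(m, T, d))

def global_loss(x, t):
    # f_t(x) = sum_{i=1}^m f_{i,t}(x)
    return sum(np.sum((x - theta[i, t]) ** 2) for i in range(m))

# For these losses, the global minimizer x*_t is the average of theta[:, t].
x_star = theta.mean(axis=0)                 # shape (T, d)

# Placeholder action sequences, standing in for the output of a
# distributed online algorithm such as D-Safe-OGD.
x_agents = rng.normal(size=(m, T, d))

# Dynamic regret (3) of each agent j, and the path-length C_T^* of x*_t.
regret = [sum(global_loss(x_agents[j, t], t) - global_loss(x_star[t], t)
              for t in range(T)) for j in range(m)]
path_length = np.linalg.norm(np.diff(x_star, axis=0), axis=1).sum()
print(regret, path_length)
```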
## 1.2 Contributions

In this work, we address the problem of *distributed* online optimization with *unknown* linear safety constraints. In particular, the constraint set

$$\mathcal{X}^{s}\triangleq\{\mathbf{x}\in\mathbb{R}^{d}:\mathbf{A}\mathbf{x}\leq\mathbf{b}\},$$

is linear, where $\mathbf{A}$ is *unknown* and must be learned by agents to subsequently choose their actions from this set. The superscript $s$ in $\mathcal{X}^s$ alludes to safety. Our specific objective is to study *dynamic* regret (2) for both convex and non-convex problems when the set $\mathcal{X}^s$ is unknown to agents. Our contributions are three-fold:

1) We propose and analyze the safe distributed online gradient descent (D-Safe-OGD) algorithm, which has two phases (exploration and optimization). In the exploration phase, agents individually explore and jointly estimate the constraint parameters in a distributed fashion. Then, each agent constructs a feasible set with its own estimate, which ensures the action selection safety with high probability (Lemma 2). Since the estimates are only local, in the optimization phase, agents apply distributed OGD projected to *different* feasible sets, which brings forward an additional technical challenge. We tackle this using the geometric property of linear constraints (Fereydounian et al., 2020) as well as the sensitivity analysis of perturbed optimization problems with second order regular constraints (Bonnans et al., 1998), which allows us to quantify the distance between projections of a point to two different sets that are "close enough" to each other (Lemma 12).

2) We analyze D-Safe-OGD in the *convex* setting. Due to the challenge discussed in the previous item, we cannot directly apply existing results on distributed online optimization with a common feasible set. The agents must use the exploration time to estimate their own feasible sets, during which they incur linear regret. Therefore, after striking a trade-off between the exploration and optimization phases, we prove (Theorem 3) a dynamic regret bound of $O(T^{2/3}\sqrt{\log T} + T^{1/3}C_T^*)$, where

$$C_{T}^{*}\triangleq\sum_{t=2}^{T}\|\mathbf{x}_{t}^{*}-\mathbf{x}_{t-1}^{*}\|,\tag{4}$$

is the *path-length*, defined with respect to the global minimizer sequence (Mokhtari et al., 2016; Jadbabaie et al., 2015). If the problem is centralized (single agent) and the comparator sequence is fixed, i.e., $\mathbf{x}_t^* = \mathbf{x}^*$, our result recovers the static regret bound of Chaudhary & Kalathil (2022).

3) We further analyze D-Safe-OGD in the *non-convex* setting. We draw upon the idea of the algorithmic equivalence between OGD and OMD (Ghai et al., 2022) to establish that in certain problem geometries (Assumptions 6-9), D-Safe-OGD can be approximated by a distributed OMD algorithm applied to a reparameterized "convex" problem. We prove that the dynamic regret is upper bounded by $O(T^{2/3}\sqrt{\log T} + T^{2/3}C_T^*)$ in Theorem 5, which is the first dynamic regret bound for a *safe distributed* algorithm in the *non-convex* setting. If the problem is centralized (single agent) and the comparator sequence is fixed, i.e., $\mathbf{x}_t^* = \mathbf{x}^*$, our result recovers the static regret bound of Ghai et al. (2022) (disregarding log factors).

## 1.3 Summary Of The Technical Analysis

We now elaborate on the technical contribution of this work:

## 1) **Estimation Error**

To learn the constraint parameters in a distributed way, in the early stage of D-Safe-OGD (i.e., Algorithm 1), agents apply random actions to collectively form a least squares (LS) problem.
Then, a distributed optimization algorithm is applied to jointly learn the constraint parameters. Though we apply EXTRA (Shi et al., 2015) in this work, we note that this part can be replaced by other distributed optimization methods suitable for smooth, strongly convex problems. Based on existing results on LS estimators (Theorem 7), matrix concentration inequalities for random matrices (Theorem 8), and the convergence rate of EXTRA (Theorem 10), we quantify the estimation error bound for each agent in Lemma 2.

## 2) **Gap Between Projections On Two Estimated Feasible Sets**

As mentioned earlier, since the estimated feasible sets are different across agents, in our technical analysis we need to quantify the gap between projections on different estimated sets to derive the final regret bounds. To tackle this, in Lemma 12 we show that if we consider the projection problem for one agent as a quadratic program with second-order cone constraints, the projection problem for any other agent can be cast as a perturbed version of the first problem. Then, we build on the sensitivity analysis of optimization problems (Theorem 13) to show that the difference between the projected points (optimal solutions of the original and perturbed problems) is of the same order as the difference between agents' estimates of the constraint parameters.

## 3) **Regret Bounds For Convex And Non-Convex Cases**

Building on the previous points, we extend the analysis of the distributed OGD in Yan et al. (2012) to the safe setup (Lemma 14), from which we derive dynamic regret bounds for both convex and non-convex cases. We note that for the non-convex (centralized) case, Ghai et al. (2022) showed that if certain geometric properties are satisfied, OGD on non-convex costs can be well-approximated as OMD on convex costs through reparameterization (Lemma 15). We establish in Lemma 17 that this approximation also holds for the safe, distributed setup with an extra error term due to the fact that the estimated feasible sets are different across agents. The proofs of our results are provided in the Appendix.

## 2 Related Literature

I) Centralized Online Optimization: For static regret, as defined in (1), it is well-known that the optimal regret bound is $O(\sqrt{T})$ (respectively, $O(\log T)$) for convex (respectively, exp-concave and strongly convex) problems (Hazan, 2016). For dynamic regret, various regularity measures have been considered. Let us first define the dynamic regret with respect to a general comparator sequence $\{\mathbf{u}_t\}_{t=1}^{T}$ and the corresponding regularity measure called path-length as follows

$$\begin{split}\mathbf{Reg}_{T}^{d}(\mathbf{u}_{1},\ldots,\mathbf{u}_{T})&\triangleq\sum_{t=1}^{T}f_{t}(\mathbf{x}_{t})-\sum_{t=1}^{T}f_{t}(\mathbf{u}_{t}),\\ C_{T}(\mathbf{u}_{1},\ldots,\mathbf{u}_{T})&\triangleq\sum_{t=2}^{T}\|\mathbf{u}_{t}-\mathbf{u}_{t-1}\|.\end{split}\tag{5}$$

The regret definition above is widely known as *universal* dynamic regret. Zinkevich (2003) showed that for convex functions, OGD attains an upper bound of $O(\sqrt{T}(1 + C_T))$ on the universal dynamic regret (as defined in (5)). This universal dynamic regret bound was later improved to $O(\sqrt{T(1 + C_T)})$ using expert advice by Zhang et al. (2018a).
More recently, for the class of comparator sequences with path-length $\sum_{t=2}^{T}\|\mathbf{u}_t - \mathbf{u}_{t-1}\|_1 \le C$, Baby & Wang (2021) presented, for exp-concave losses, universal dynamic regret bounds of $\tilde{O}(d^{3.5}\max\{T^{1/3}C^{2/3}, 1\})$ when $C \ge 1/T$, and of $O(d^{1.5}\log(T))$ otherwise, where $d$ is the problem dimension and $\tilde{O}(\cdot)$ hides the logarithmic factors. Later, Baby & Wang (2022) presented an $\tilde{O}(\max\{d^{1/3}T^{1/3}C^{2/3}, d\})$ universal dynamic regret bound for strongly convex losses. For dynamic regret defined with respect to the minimizer sequence (as in (2)), Mokhtari et al. (2016) showed a regret bound of $O(C_T^*)$ for strongly convex and smooth functions with OGD. A related notion of higher-order path-length defined as $C_{p,T}^* \triangleq \sum_{t=2}^{T}\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|^p$ has also been considered by several works. When the minimizer sequence $\{\mathbf{x}_t^*\}_{t=1}^{T}$ is uniformly bounded, $O(C_{p,T}^*)$ implies $O(C_{q,T}^*)$ for $q < p$. Zhang et al. (2017a) proved that with multiple gradient queries per round, the dynamic regret is improved to $O(\min\{C_T^*, C_{2,T}^*\})$.

II) Distributed Online Optimization: Yan et al. (2012) studied distributed OGD for online optimization and proved a static regret bound of $O(\sqrt{T})$ (respectively, $O(\log T)$) for convex (respectively, strongly-convex) functions. Distributed online optimization for time-varying network structures was then considered in (Mateos-Núnez & Cortés, 2014; Akbari et al., 2015; Hosseini et al., 2016). Shahrampour & Jadbabaie (2018) proposed a distributed OMD algorithm with a dynamic regret bound of $O(\sqrt{T}(1 + C_T^*))$. Dixit et al. (2019a) considered time-varying network structures and showed that distributed proximal OGD achieves a dynamic regret of $O(\log T(1 + C_T^*))$ for strongly convex functions. Zhang et al. (2019) developed a method based on gradient tracking and derived a regret bound in terms of $C_T^*$ and a gradient path-length. More recently, Eshraghi & Liang (2022) showed that the dynamic regret for strongly convex and smooth functions can be improved to $O(1 + C_T^*)$ using both primal and dual information boosted with multiple consensus steps. The non-convex case has also been recently studied by Lu & Wang (2021), where the regret is characterized by the first-order optimality condition. On the other hand, for the projection-free setup, Zhang et al. (2017b) presented a distributed online conditional gradient algorithm with a static regret bound of $O(T^{3/4})$ and a communication complexity of $O(T)$. The communication complexity was further improved to $O(\sqrt{T})$ in (Wan et al., 2020). Nevertheless, the works mentioned in (I) and **(II)** consider neither long-term nor safety constraints, which are discussed next.

Table 1: Related works on centralized and distributed constrained online optimization for general functions with regret and constraint violation (CV) guarantees. Let $\mathbf{g}(\mathbf{x}) = (g_1(\mathbf{x}), g_2(\mathbf{x}), \ldots, g_n(\mathbf{x}))^\top$ be the vector formed by $n$ convex constraints. Let $c \in (0, 1)$, $\alpha_0 > 1$ and $[a]_+ = \max\{0, a\}$. Problem type 'C' stands for centralized, 'D' stands for distributed (or decentralized), 'CNX' stands for convex cost functions, and 'N-CNX' stands for non-convex cost functions. Notes: † : The CV bound in (Yu & Neely, 2020) can be further reduced to $O(1)$ under a Slater's condition assumption.
| CV Type | Problem | Reference | Regret Type | Regret Bound | CV Bound |
|---|---|---|---|---|---|
| $\sum_{t=1}^{T} g_i(\mathbf{x}_t)\ \forall i \in [n]$ | C, CNX | (Mahdavi et al., 2012) | Static | $O(\sqrt{T})$ | $O(T^{3/4})$ |
| $\sum_{t=1}^{T} \max_{i\in[n]} g_i(\mathbf{x}_t)$ | C, CNX | (Jenatton et al., 2016) | Static | $O(T^{\max\{c,1-c\}})$ | $O(T^{1-c/2})$ |
| $\sum_{t=1}^{T} [g_i(\mathbf{x}_t)]_+\ \forall i \in [n]$ | C, CNX | (Yuan & Lamperski, 2018) | Static | $O(T^{\max\{c,1-c\}})$ | $O(T^{1-c/2})$ |
| $\sum_{t=1}^{T} \|[\mathbf{g}(\mathbf{x}_t)]_+\|$ | C, CNX | (Yi et al., 2021) | Static | $O(T^{\max\{c,1-c\}})$ | $O(T^{(1-c)/2})$ |
| $\sum_{t=1}^{T} \|[\mathbf{g}(\mathbf{x}_t)]_+\|$ | C, CNX | (Yi et al., 2021) | Dynamic | $O(\sqrt{T(1+C_T)})$ | $O(\sqrt{T})$ |
| $\sum_{t=1}^{T} g_i(\mathbf{x}_t)\ \forall i \in [n]$ | C, CNX | (Yu & Neely, 2020) | Static | $O(\sqrt{T})$ | $O(T^{1/4})^{\dagger}$ |
| $\big[\sum_{t=1}^{T} g_i(\mathbf{x}_{j,t})\big]_+,\ i = 1,\ \forall j \in [m]$ | D, CNX | (Yuan et al., 2017) | Static | $O(T^{1/2+c/2})$ | $O(T^{1-c/4})$ |
| $\sum_{t=1}^{T}\sum_{j=1}^{m}\sum_{i=1}^{n} [\mathbf{a}_i^\top \mathbf{x}_{j,t}]_+$ | D, CNX | (Yuan et al., 2020) | Static | $O(\sqrt{T})$ | $O(T^{3/4})$ |
| $\sum_{t=1}^{T}\sum_{j=1}^{m}\sum_{i=1}^{n} [g_i(\mathbf{x}_{j,t})]_+$ | D, CNX | (Yuan et al., 2021) | Static | $O(T^{\max\{c,1-c\}})$ | $O(T^{1-c/2})$ |
| $\frac{1}{m}\sum_{i=1}^{m}\sum_{t=1}^{T} [\mathbf{g}_t(\mathbf{x}_{i,t})]_+$ | D, CNX | (Yi et al., 2022) | Dynamic | $O((\alpha_0^2 T^{1-c} + T^{c}(1+C_T))/\alpha_0)$ | $O(\sqrt{(\alpha_0+1)T^{2-c}})$ |
| $\sum_{t=1}^{T} \|[\mathbf{A}\mathbf{x}_t - \mathbf{b}]_+\|$ | C, CNX | (Chaudhary & Kalathil, 2022) | Static | $O(\sqrt{\log(T)}\,T^{2/3})$ | 0 |
| $\sum_{t=1}^{T}\sum_{j=1}^{m} \|[\mathbf{A}\mathbf{x}_{j,t} - \mathbf{b}]_+\|$ | D, CNX | This work | Dynamic | $O(T^{2/3}\sqrt{\log T} + T^{1/3}C_T^*)$ | 0 |
| $\sum_{t=1}^{T}\sum_{j=1}^{m} \|[\mathbf{A}\mathbf{x}_{j,t} - \mathbf{b}]_+\|$ | D, N-CNX | This work | Dynamic | $O(T^{2/3}\sqrt{\log T} + T^{2/3}C_T^*)$ | 0 |

III) Constrained Online Optimization: Practical systems come with inherent system-imposed constraints on the decision variable. Some examples of such constraints are inventory/budget constraints in the one-way trading problem (Lin et al., 2019) and time-coupling constraints in networked distributed energy systems (Fan et al., 2020). For a known constraint set, projecting the decisions back to the constraint set is a natural way to incorporate constraints in online convex optimization (OCO). Here, projected OGD for general cost functions achieves $O(\sqrt{T})$ static regret. However, for complex constraints, projection can induce a computational burden. An early work by Hazan & Kale (2012) solves the constrained optimization problem by replacing the quadratic convex program with a simpler linear program using Frank-Wolfe. To better understand the following references, let $\mathcal{X} = \{\mathbf{x} \in X \subseteq \mathbb{R}^d : \mathbf{g}(\mathbf{x}) \leq \mathbf{0}\}$, where $\mathbf{g}(\mathbf{x}) = (g_1(\mathbf{x}), g_2(\mathbf{x}), \ldots, g_n(\mathbf{x}))^\top$ is the vector formed by $n$ convex constraints, and $X$ is a closed convex set. The work by Mahdavi et al. (2012) proposed to use a simpler closed-form projection in place of the true desired projection, attaining $O(\sqrt{T})$ static regret with $\sum_{t=1}^{T} g_i(\mathbf{x}_t) \leq O(T^{3/4})$ constraint violation $\forall i \in [n]$. Thus, their method achieves optimal regret with a lower computational burden at the cost of incurring constraint violations. The follow-up work by Jenatton et al. (2016) proposed an adaptive step-size variant of Mahdavi et al. (2012) with $O(T^{\max\{c,1-c\}})$ static regret and $O(T^{1-c/2})$ constraint violation for $c \in (0, 1)$. These bounds were further improved in Yu & Neely (2020) with a static regret bound of $O(\sqrt{T})$ and a constraint violation bound of $O(T^{1/4})$. Here, the constraint violation is further reduced to $O(1)$ when the $g_i(\mathbf{x})$ satisfy Slater's condition.
The work by Yuan & Lamperski (2018) considered stricter 'cumulative' constraint violations of the form $\sum_{t=1}^{T}[g_i(\mathbf{x}_t)]_+\ \forall i \in [n]$ and proposed algorithms with $O(T^{\max\{c,1-c\}})$ static regret and $O(T^{1-c/2})$ 'cumulative' constraint violation for $c \in (0, 1)$. For strongly convex functions, Yuan & Lamperski (2018) proved $O(\log(T))$ static regret, and the constraint violation of the respective form is $O(\sqrt{\log(T)T})$. More recently, the work by Yi et al. (2021) proposed an algorithm with $O(T^{\max\{c,1-c\}})$ regret and $\sum_{t=1}^{T}\|[\mathbf{g}(\mathbf{x}_t)]_+\| \leq O(T^{(1-c)/2})$ 'cumulative' constraint violation. For strongly convex functions, Yi et al. (2021) reduced both the static regret and constraint violation bounds to $O(\log(T))$. Further, Yi et al. (2021) presented a bound of $O(\sqrt{T(1 + C_T)})$ for dynamic regret with an $O(\sqrt{T})$ 'cumulative' constraint violation. The algorithms in Mahdavi et al. (2012); Jenatton et al. (2016); Yu & Neely (2020); Yuan & Lamperski (2018); Yi et al. (2021) employ some flavor of online primal-dual algorithms. A series of recent works (Sun et al., 2017; Chen et al., 2017; Neely & Yu, 2017; Yu et al., 2017; Cao & Liu, 2018; Liu et al., 2022) have also dealt with time-varying constraints. Yu et al. (2017) specifically work with 'stochastic' time-varying constraints. More recent works (Yuan et al., 2017; 2020; 2021; Yi et al., 2022) have looked at distributed OCO with long-term constraints. The work by Yuan et al. (2017) proposed a consensus-based primal-dual sub-gradient algorithm with $O(T^{1/2+\beta_0})$ regret and $O(T^{1-\beta_0/2})$ constraint violation for $\beta_0 \in (0, 0.5)$. A single constraint function was considered in (Yuan et al., 2017), where the constraint violation is of the form $\big[\sum_{t=1}^{T} g_i(\mathbf{x}_{j,t})\big]_+,\ i = 1,\ \forall j \in [m]$. Yuan et al. (2020) proposed algorithms for distributed online linear regression with $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violation. Here, the constraint violation takes the form $\sum_{t=1}^{T}\sum_{j=1}^{m}\sum_{i=1}^{n}[\mathbf{a}_i^\top\mathbf{x}_{j,t}]_+$, where $\mathbf{a}_i, i \in [n]$ is a constraint vector. Another primal-dual algorithm was presented in Yuan et al. (2021) with $O(T^{\max\{1-c,c\}})$ regret and $O(T^{1-c/2})$ constraint violation of the form $\sum_{t=1}^{T}\sum_{j=1}^{m}\sum_{i=1}^{n}[g_i(\mathbf{x}_{j,t})]_+$ for $c \in (0, 1)$. In all of (Yuan et al., 2017; 2020; 2021), the constraint functions are known a priori. More recently, Yi et al. (2022) proposed algorithms for distributed OCO with time-varying constraints, and for the stricter 'network' constraint violation metric of the form $\frac{1}{m}\sum_{i=1}^{m}\sum_{t=1}^{T}[\mathbf{g}_t(\mathbf{x}_{i,t})]_+$. The algorithm in (Yi et al., 2022) gives a dynamic regret of $O((\alpha_0^2 T^{1-c} + T^{c}(1 + C_T))/\alpha_0)$ with $O(\sqrt{(\alpha_0 + 1)T^{2-c}})$ constraint violation for $\alpha_0 > 1$ and $c \in (0, 1)$. Additionally, constrained distributed OCO with coupled inequality constraints is considered in (Yi et al., 2020a;b); with bandit feedback on the cost function in (Li et al., 2020); and with partial constraint feedback in (Sharma et al., 2021). For more references in this problem space, we refer the readers to the survey in (Li et al., 2022).

IV) Safe Online Optimization: Safe online optimization is a fairly nascent field with only a few works studying per-time safety in optimization problems. Amani et al. (2019); Khezeli & Bitar (2020) study the problem of safe linear bandits giving $O(\log(T)\sqrt{T})$ regret with no constraint violation, albeit under an assumption that a lower bound on the distance between the optimal action and the safe set's boundary is known. Without the knowledge of such a lower bound, Amani et al. (2019) show $O(\log(T)T^{2/3})$ regret.
Safe convex and non-convex optimization is studied in (Usmanova et al., 2019; Fereydounian et al., 2020). Safety in the context of OCO is studied in Chaudhary & Kalathil (2022) with a regret of $O(\sqrt{\log(T)}T^{2/3})$.

Remark 1. *Different from the works listed above, we study the problem of safe distributed online optimization with unknown linear constraints. We consider both convex and non-convex cost functions.*

## 3 Preliminaries

## 3.1 Notation

| Symbol | Meaning |
|--------|---------|
| $[m]$ | The set $\{1, 2, \ldots, m\}$ for any integer $m$ |
| $\|\cdot\|_F$ | Frobenius norm of a matrix |
| $\|\mathbf{X}\|_{\mathbf{V}}$ | $\sqrt{\operatorname{trace}(\mathbf{X}^\top\mathbf{V}\mathbf{X})}$, for a matrix $\mathbf{X}$ and a positive-definite matrix $\mathbf{V}$ |
| $\Pi_{\mathcal{X}}[\cdot]$ | The operator for the projection to set $\mathcal{X}$ |
| $[\mathbf{A}]_{ij}$ | The entry in the $i$-th row and $j$-th column of $\mathbf{A}$ |
| $[\mathbf{A}]_{i,:}$ | The $i$-th row of $\mathbf{A}$ |
| $[\mathbf{A}]_{:,j}$ | The $j$-th column of $\mathbf{A}$ |
| $\mathbf{1}$ | The vector of all ones |
| $\mathbf{e}_i$ | The $i$-th basis vector |
| $J_f(\mathbf{x})$ | The Jacobian of a mapping $f(\cdot)$ at $\mathbf{x}$ |

## 3.2 Strong Convexity And Bregman Divergence

Definition 1. *A function $f : \mathcal{X} \to \mathbb{R}$ is $\mu$-strongly convex ($\mu > 0$) over the convex set $\mathcal{X}$ if*

$$f(\mathbf{x})\geq f(\mathbf{y})+\nabla f(\mathbf{y})^{\top}(\mathbf{x}-\mathbf{y})+{\frac{\mu}{2}}\|\mathbf{x}-\mathbf{y}\|^{2},\quad\forall\mathbf{x},\mathbf{y}\in{\mathcal{X}}.$$

Definition 2. *For a strongly convex function $\phi(\cdot)$, the Bregman divergence w.r.t. $\phi(\cdot)$ over $\mathcal{X}$ is defined as*

$${\mathcal{D}}_{\phi}(\mathbf{x},\mathbf{y})\triangleq\phi(\mathbf{x})-\phi(\mathbf{y})-\nabla\phi(\mathbf{y})^{\top}(\mathbf{x}-\mathbf{y}),\quad\mathbf{x},\mathbf{y}\in{\mathcal{X}}.$$

## 3.3 Network Structure

The underlying network topology is governed by a symmetric doubly stochastic matrix $\mathbf{P}$, i.e., $[\mathbf{P}]_{ij} \geq 0$, $\forall i, j \in [m]$, and each row (or column) sums to one. If $[\mathbf{P}]_{ij} > 0$, agents $i$ and $j$ are considered neighbors, and agent $i$ assigns the weight $[\mathbf{P}]_{ij}$ to agent $j$ when they communicate with each other. We assume that the graph structure captured by $\mathbf{P}$ is connected, i.e., there is a (potentially multi-hop) path from any agent $i$ to another agent $j \neq i$. Each agent is considered as a neighbor of itself, i.e., $[\mathbf{P}]_{ii} > 0$ for any $i \in [m]$. These constraints on the communication protocol imply a geometric mixing bound for $\mathbf{P}$ (Liu, 2008), such that $\sum_{j=1}^{m}\big|[\mathbf{P}^k]_{ji} - \frac{1}{m}\big| \leq \sqrt{m}\beta^k$, for any $i \in [m]$, where $\beta$ is the second largest singular value of $\mathbf{P}$.

Remark 2. *In all of the algorithms proposed in the paper, we will see $\mathbf{P}$ as an input. This does not contradict the decentralized nature of the algorithms, as agent $i$ only requires the knowledge of $[\mathbf{P}]_{ji} > 0$ for any $j$ in its neighborhood. The knowledge of $\mathbf{P}$ is not global, and each agent only has local information about it.*

## 4 Safe Set Estimation

To keep the regret small, we first need to identify the linear safety constraints. It is impossible to learn the safety constraints if the algorithm receives no information that can be used to estimate the unknown constraints (Chaudhary & Kalathil, 2022). In our problem setup, we assume that the algorithm receives noisy observations of the form

$${\hat{\mathbf{x}}}_{i,t}=\mathbf{A}\mathbf{x}_{i,t}+\mathbf{w}_{i,t}\quad\forall i\in[m],$$

at every time step $t$, where the nature of the noise $\mathbf{w}_{i,t}$ is described below. Here, $\mathbf{A} \in \mathbb{R}^{n\times d}$, $\mathbf{b} \in \mathbb{R}^n$, and $n$ is the number of constraints. Note that all agent updates are synchronous.

## 4.1 Assumptions

We make the following assumptions common to both the convex and the non-convex problem settings.

Assumption 1. *The set $\mathcal{X}^s$ is a closed polytope, hence, convex and compact. Also, $\|\mathbf{x}\| \leq L$, $\forall\mathbf{x}\in\mathcal{X}^s$, and $\max_{i\in[n]}\|[\mathbf{A}]_{i,:}\|_2 \leq L_A$.*
Assumption 2. *The constraint noise sequence $\{\mathbf{w}_{i,t}, t \in [T]\}$ is R-sub-Gaussian with respect to the filtration $\{\mathcal{F}_{i,t}, t \in [T]\}$, i.e., $\forall t \in [T]$, $\forall i \in [m]$, $\mathbb{E}[\mathbf{w}_{i,t}|\mathcal{F}_{i,t-1}] = \mathbf{0}$, and for any $\sigma \in \mathbb{R}$ and any $\mathbf{x}$ we have*

$$\mathbb{E}[\exp(\sigma\mathbf{x}^{\top}\mathbf{w}_{i,t})\,|\,\mathcal{F}_{i,t-1}]\leq\exp(\sigma^{2}R^{2}\|\mathbf{x}\|^{2}/2).$$

Assumption 3. *Every agent has knowledge of a safe baseline action $\mathbf{x}^s \in \mathcal{X}^s$ such that $\mathbf{A}\mathbf{x}^s = \mathbf{b}^s < \mathbf{b}$. The agents are aware of $\mathbf{x}^s$ and $\mathbf{b}^s$ and thus, the safety gap $\Delta_s \triangleq \min_{i\in[n]}(b_i - b_i^s)$, where $b_i$ (respectively, $b_i^s$) denotes the $i$-th element of $\mathbf{b}$ (respectively, $\mathbf{b}^s$).*

The first assumption is typical in online optimization, and the second assumption on the noise is standard. The third assumption stems from the requirement to be absolutely safe at every time step. The assumption warrants the need for a safe starting point, which is readily available in most practical problems of interest. Similar assumptions can be found in previous literature on safe linear bandits (Amani et al., 2019; Khezeli & Bitar, 2020), safe convex and non-convex optimization (Usmanova et al., 2019; Fereydounian et al., 2020), and safe online convex optimization (Chaudhary & Kalathil, 2022).

## 4.2 Explore And Estimate

In this section, we present an algorithmic subroutine, Algorithm 1, for agents to obtain sufficiently good local estimates of $\mathcal{X}^s$ before beginning to perform OGD. For the first $T_0$ time steps, each agent safely explores around the baseline action $\mathbf{x}^s$. Each exploratory action is a $\gamma$-weighted combination of the baseline action and an i.i.d. random vector $\boldsymbol{\zeta}_{i,t}$. Here, for agent $i \in [m]$ at time step $t \in [T_0]$, $\gamma \in [0, 1)$, and $\boldsymbol{\zeta}_{i,t}$ is a zero-mean i.i.d. random vector with $\|\boldsymbol{\zeta}_{i,t}\| \leq L$ and $\mathrm{Cov}(\boldsymbol{\zeta}_{i,t}) = \sigma_\zeta^2\mathbf{I}$. Performing exploration in this manner ensures the per-time-step safety requirement, as noted in Lemma 1. The proof of the lemma is immediate from the assumptions.

Lemma 1. *(Lemma 1 in Chaudhary & Kalathil (2022)) Let Assumptions 1-3 hold. With $\gamma = \frac{\Delta_s}{LL_A}$, $\mathbf{A}\mathbf{x}_{i,t} \leq \mathbf{b}$ for each $\mathbf{x}_{i,t} = (1-\gamma)\mathbf{x}^s + \gamma\boldsymbol{\zeta}_{i,t}$, $\forall i \in [m], t \in [T_0]$.*

Once the data collection phase is finished, each agent $i \in [m]$ constructs the local function $l_i(\mathbf{A})$ of the form

$$l_{i}(\mathbf{A})\triangleq\sum_{t=1}^{T_{0}}\left\|\mathbf{A}\mathbf{x}_{i,t}-{\hat{\mathbf{x}}}_{i,t}\right\|^{2}+{\frac{\lambda}{m}}\left\|\mathbf{A}\right\|_{F}^{2}.$$

Then, for time steps $t \in [T_0 + 1, T_0 + T_1]$, Alg. **EXTRA** (Shi et al., 2015) is used to solve the global Least Squares (LS) estimation problem $\sum_{i=1}^{m} l_i(\mathbf{A})$ in a distributed fashion with a proper choice of $\alpha$.

Lemma 2. *Suppose Assumptions 1-2 hold. Let Algorithm 1 run with $T_0 = \Omega(\frac{L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta}))$ for data collection and $T_1 = \Theta(\log T^{\rho})$, where $\rho$ is a positive constant. Denote the final output of the algorithm as $\widehat{\mathbf{A}}_i$ for agent $i \in [m]$. Then, with probability at least $(1 - 2\delta)$, we have $\forall k \in [n]$ and $\forall i, j \in [m]$*

$$\begin{split}&\|[\widehat{\mathbf{A}}_{i}]_{k,:}-[\mathbf{A}]_{k,:}\|\leq\frac{1}{T^{\rho}}+\frac{R\sqrt{d\log\left(\frac{1+mT_{0}L^{2}/\lambda}{\delta/n}\right)}+\sqrt{\lambda}L_{A}}{\sqrt{\frac{1}{2}m\gamma^{2}\sigma_{\zeta}^{2}T_{0}}},\\&\|[\widehat{\mathbf{A}}_{i}]_{k,:}-[\widehat{\mathbf{A}}_{j}]_{k,:}\|\leq\frac{2}{T^{\rho}},\end{split}\tag{6}$$

*where $[\widehat{\mathbf{A}}_i]_{k,:}$ and $[\mathbf{A}]_{k,:}$ are the $k$-th rows of $\widehat{\mathbf{A}}_i$ and $\mathbf{A}$, respectively.*

It is worth noting that the safety gap $\Delta_s$ affects the estimation error according to Lemma 2.
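The explore-and-estimate phase can be sketched as follows. This is a minimal, centralized stand-in that solves the regularized LS problem $\sum_i l_i(\mathbf{A})$ in closed form rather than via EXTRA, on a hypothetical instance ($\mathbf{A}$, $\mathbf{b}$, and a baseline action placed at the origin so that $\mathbf{b}^s = \mathbf{0} < \mathbf{b}$); it also checks the per-step safety guarantee of Lemma 1 empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, T0 = 3, 5, 4, 5000
A = rng.normal(size=(n, d))                    # unknown constraint matrix
b = np.ones(n)                                 # safe set {x : Ax <= b}
x_s = np.zeros(d)                              # baseline at the origin: A x_s = 0 < b
L = 1.0                                        # bound on ||zeta||
L_A = np.linalg.norm(A, axis=1).max()          # max row norm of A
Delta_s = (b - A @ x_s).min()                  # safety gap
R, lam = 0.1, 1.0
gamma = Delta_s / (L * L_A)

# Exploration: x_{i,t} = (1 - gamma) x_s + gamma zeta_{i,t},
# with zeta zero-mean and uniform in the ball of radius L.
zeta = rng.normal(size=(m * T0, d))
zeta *= L * rng.uniform(size=(m * T0, 1)) ** (1 / d) / np.linalg.norm(zeta, axis=1, keepdims=True)
X = (1 - gamma) * x_s + gamma * zeta           # all exploratory actions, stacked
assert (X @ A.T <= b).all()                    # per-step safety (Lemma 1)

# Noisy observations xhat = A x + w, and the closed-form minimizer of
# sum_t ||A x_t - xhat_t||^2 + lam ||A||_F^2 (the global LS objective).
Y = X @ A.T + R * rng.normal(size=(m * T0, n))
A_hat = Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(d))
print(np.linalg.norm(A_hat - A, axis=1).max()) # row-wise error shrinks as T0 grows
```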
As we mentioned earlier, the exploratory action is $\mathbf{x}_{i,t} = (1-\gamma)\mathbf{x}^s + \gamma\boldsymbol{\zeta}_{i,t}$, where the coefficient $\gamma = \frac{\Delta_s}{LL_A}$. Intuitively, if $\Delta_s$ is larger, we can put more weight on the exploration through $\boldsymbol{\zeta}_{i,t}$, which is beneficial to the estimation accuracy. We see from Equation (6) that when $\gamma$ is larger, the estimation error bound is tighter.

Let us also discuss the computational complexity of Algorithm 1. The time cost of the data-collection phase is $O(mdT_0)$, assuming that it takes $O(d)$ time to compute each action. For the estimation phase, to perform a single update, each agent spends $O(mnd)$ time for the calculation of the weighted average and $O(T_0nd)$ time for the gradient computation. Accordingly, the total time cost of Algorithm 1 is $O(mT_1(mnd + T_0nd))$.

Let us now define the estimated safe set for each agent $i \in [m]$. Let the parameter estimate for agent $i \in [m]$ at the end of the first $T_0 + T_1$ time steps be denoted by $\widehat{\mathbf{A}}_i$. For each row $k \in [n]$ of $\widehat{\mathbf{A}}_i$, a ball centered at $[\widehat{\mathbf{A}}_i]_{k,:}$ with a radius of $B_r$ can be defined as follows

$$\mathcal{C}_{i,k}\triangleq\big\{\mathbf{a}\in\mathbb{R}^{d}:\big\|\mathbf{a}-[\widehat{\mathbf{A}}_{i}]_{k,:}\big\|\leq B_{r}\big\},\tag{7}$$

where $B_r \triangleq \frac{1}{T^{\rho}} + \frac{R\sqrt{d\log\left(\frac{1+mT_0L^2/\lambda}{\delta/n}\right)}+\sqrt{\lambda}L_A}{\sqrt{\frac{1}{2}m\gamma^2\sigma_\zeta^2 T_0}}$ is the estimation error bound of Lemma 2. The true parameter $[\mathbf{A}]_{k,:}$ lies inside the set $\mathcal{C}_{i,k}$ with a high probability. Now, using (7) the safe estimated set for agent $i \in [m]$ can be constructed as follows:

$${\widehat{\mathcal{X}}}_{i}^{s}\triangleq\{\mathbf{x}\in\mathbb{R}^{d}:{\tilde{\mathbf{a}}}_{k}^{\top}\mathbf{x}\leq b_{k},\ \forall{\tilde{\mathbf{a}}}_{k}\in{\mathcal{C}}_{i,k},\ \forall k\in[n]\}.\tag{8}$$

The safe estimated set above will be used by each agent for the projection step in subsequent algorithms.

## 5 Dynamic Regret Analysis For The Convex Setting

During the first $T_0 + T_1$ time steps in Algorithm 1, agents do not expend any effort to minimize the regret. This is due to the fact that without the knowledge of the feasible set, they cannot perform any projection. In this section, we propose D-Safe-OGD, which allows agents to carry out safe distributed online optimization, and we analyze D-Safe-OGD in the *convex* setting. D-Safe-OGD is summarized in Algorithm 2, where in the exploration phase, all agents collaboratively estimate the constraint parameters based on Algorithm 1, and then each agent constructs the feasible set based on its own estimate.

Algorithm 1 Distributed Constraint Parameter Estimation

1: **Require:** number of agents $m$, doubly stochastic matrix $\mathbf{P} \in \mathbb{R}^{m\times m}$, $\tilde{\mathbf{P}} \triangleq \frac{\mathbf{I}+\mathbf{P}}{2}$, hyper-parameters $\alpha$, $\gamma$ and $\lambda$, data-collection duration $T_0$, constraint-estimation duration $T_1$, a strictly feasible point $\mathbf{x}^s$ (safe baseline action).

2: **Explore around baseline action:**

3: for $t = 1, 2, \ldots, T_0$ do

4: for $i = 1, 2, \ldots, m$ do

5: Select action $\mathbf{x}_{i,t} = (1-\gamma)\mathbf{x}^s + \gamma\boldsymbol{\zeta}_{i,t}$

6: Receive noisy observation $\hat{\mathbf{x}}_{i,t} = \mathbf{A}\mathbf{x}_{i,t} + \mathbf{w}_{i,t}$

7: **end for**

8: **end for**

9: **Form local functions using collected data:**

$$l_{i}(\mathbf{A})\triangleq\sum_{t=1}^{T_{0}}\left\|\mathbf{A}\mathbf{x}_{i,t}-{\hat{\mathbf{x}}}_{i,t}\right\|^{2}+{\frac{\lambda}{m}}\|\mathbf{A}\|_{F}^{2}.$$

10: **Use EXTRA** (Shi et al., 2015) **to solve the global LS problem** $\sum_{i=1}^{m} l_i(\mathbf{A})$ **in a distributed fashion:**

11: Randomly generate $\widehat{\mathbf{A}}_i^{T_0} \in \mathbb{R}^{n\times d}$ for all $i \in [m]$.
12: $\forall i \in [m]$, $\widehat{\mathbf{A}}_i^{T_0+1} = \sum_{j=1}^{m}[\mathbf{P}]_{ji}\widehat{\mathbf{A}}_j^{T_0} - \alpha\nabla l_i(\widehat{\mathbf{A}}_i^{T_0})$, where the gradient is computed based on the following expression:

$$\nabla l_{i}(\mathbf{A})=\sum_{t=1}^{T_{0}}\left[2\mathbf{A}\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\top}-2{\hat{\mathbf{x}}}_{i,t}\mathbf{x}_{i,t}^{\top}\right]+{\frac{2\lambda}{m}}\mathbf{A}.$$

13: for $t = T_0, \ldots, T_0 + T_1 - 2$ do

14: for $i = 1, 2, \ldots, m$ do

15: $\widehat{\mathbf{A}}_i^{t+2} = \sum_{j=1}^{m}2[\tilde{\mathbf{P}}]_{ji}\widehat{\mathbf{A}}_j^{t+1} - \sum_{j=1}^{m}[\tilde{\mathbf{P}}]_{ji}\widehat{\mathbf{A}}_j^{t} - \alpha\left[\nabla l_i(\widehat{\mathbf{A}}_i^{t+1}) - \nabla l_i(\widehat{\mathbf{A}}_i^{t})\right]$.

16: **end for**

17: **end for**

In the optimization phase, the network applies distributed OGD, where all agents first perform gradient descent with their local gradients, and then they communicate their iterates with neighbors based on the network topology imposed by $\mathbf{P}$. We note that the projection operator of each agent is defined w.r.t. the local estimated feasible set (line 8 of Algorithm 2), thereby making the feasible sets close enough but slightly different from each other. Therefore, previous regret bounds for distributed online optimization over a common feasible set (e.g., (Yan et al., 2012; Hosseini et al., 2016; Shahrampour & Jadbabaie, 2018; Eshraghi & Liang, 2022)) are not immediately applicable. We tackle this challenge by exploiting the geometric property of linear constraints (Fereydounian et al., 2020) as well as the sensitivity analysis of perturbed optimization problems with second order regular constraints (Bonnans et al., 1998), and we present an upper bound on the dynamic regret in terms of the path-length regularity measure. We adhere to the following standard assumption in the context of OCO:

Assumption 4. *The cost functions $f_{i,t}$ are convex $\forall i \in [m]$ and $\forall t \in [T]$, and they have a bounded gradient, i.e., $\|\nabla f_{i,t}(\mathbf{x})\| \leq G$ for any $\mathbf{x} \in \mathcal{X}^s$.*

Theorem 3. *Suppose Assumptions 1-4 hold and $T = \Omega\big(\big(\frac{L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta})\big)^{3/2}\big)$. By running Algorithm 2 with $\gamma \leq \frac{\Delta_s}{LL_A}$, $\eta = \Theta(T^{-1/3})$, $T_0 = \Theta(T^{2/3})$ and $T_1 = \Theta(\log T)$, we have with probability at least $(1 - 2\delta)$ that $\mathbf{x}_{i,t} \in \mathcal{X}^s$, $\forall i \in [m], t \in [T]$, and*

$$\mathbf{Reg}_{i,T}^{d}=O\left(T^{2/3}{\sqrt{\log(T/\delta)}}+{\frac{\beta}{(1-\beta)}}T^{2/3}+T^{1/3}C_{T}^{*}\right),\;\forall i\in[m].$$

Theorem 3 establishes a dynamic regret bound for D-Safe-OGD that is at least $O(T^{2/3}\sqrt{\log T})$, and for a large enough path-length the bound becomes $O(T^{1/3}C_T^*)$. We can also see the impact of the network topology through $\beta$, the second largest singular value of $\mathbf{P}$. When the network connectivity is stronger (i.e., $\beta$ is smaller), the regret bound is tighter.

Algorithm 2 Distributed Safe OGD with linear constraints

1: **Require:** number of agents $m$, doubly stochastic matrix $\mathbf{P} \in \mathbb{R}^{m\times m}$, hyper-parameters $\alpha, \gamma, \eta, \delta, \lambda$, time horizon $T$, a strictly feasible point $\mathbf{x}^s$.

2: Specify $T_0$ and $T_1$ based on the given hyper-parameters and run Algorithm 1 to learn agents' estimates $\{\widehat{\mathbf{A}}_i\}_{i\in[m]}$ in a distributed fashion.

3: For all $i \in [m]$, construct the safe set $\widehat{\mathcal{X}}_i^s$ from the estimate $\widehat{\mathbf{A}}_i$.

4: **Distributed online gradient descent over different feasible sets:**

5: Let $T_s \triangleq (T_0 + T_1 + 1)$ and initialize all agents at the same point $\mathbf{x}_{i,T_s} = \mathbf{x}_{T_s}$ chosen randomly.

6: for $t = T_s, \ldots, T$ do

7: for $i = 1, 2, \ldots, m$ do

8: $\mathbf{y}_{i,t} = \Pi_{\widehat{\mathcal{X}}_{i}^{s}}\left[\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\right]$.

9: **end for**

10: For all $i \in [m]$,

11: $$\mathbf{x}_{i,t+1}=\sum_{j=1}^{m}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}.$$

12: **end for**
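Below is a minimal sketch of the optimization phase of Algorithm 2 (lines 6-12). It assumes the local estimates $\widehat{\mathbf{A}}_i$ and the radius $B_r$ are already available from Algorithm 1, and it uses the fact that the set in (8) can be written with second-order cone constraints, $[\widehat{\mathbf{A}}_i]_{k,:}\mathbf{x} + B_r\|\mathbf{x}\| \le b_k$ for all $k$, since $\sup_{\|\mathbf{a} - \hat{\mathbf{a}}\| \le B_r}\mathbf{a}^\top\mathbf{x} = \hat{\mathbf{a}}^\top\mathbf{x} + B_r\|\mathbf{x}\|$. The losses, network, and constants here are hypothetical, and the projection is delegated to a generic solver rather than a dedicated conic routine.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
m, T, d, n, eta, Br = 4, 30, 3, 2, 0.1, 0.05

# Hypothetical local estimates A_hat[i] (as produced by Algorithm 1), a common
# known b > 0, and quadratic local losses f_{i,t}(x) = ||x - theta[i, t]||^2.
A_hat = rng.normal(size=(m, n, d))
b = np.ones(n)
theta = rng.normal(size=(m, T, d))

# Ring network with self-loops: a symmetric doubly stochastic P with [P]_ii > 0.
P = 0.5 * np.eye(m) + 0.25 * (np.roll(np.eye(m), 1, 0) + np.roll(np.eye(m), -1, 0))

def project(z, Ai):
    """Euclidean projection onto {x : Ai[k] @ x + Br * ||x|| <= b[k] for all k}."""
    cons = [{"type": "ineq",
             "fun": lambda x, a=a, bk=bk: bk - a @ x - Br * np.linalg.norm(x)}
            for a, bk in zip(Ai, b)]
    obj = lambda x: 0.5 * np.sum((x - z) ** 2)
    return minimize(obj, np.zeros_like(z), constraints=cons, method="SLSQP").x

x = np.zeros((m, d))                            # common (safe) initialization
for t in range(T):
    grads = 2 * (x - theta[:, t])               # local gradients of f_{i,t}
    y = np.stack([project(x[i] - eta * grads[i], A_hat[i]) for i in range(m)])
    x = P.T @ y                                 # consensus: x_{i,t+1} = sum_j [P]_ji y_{j,t}
print(x)
```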
For the dependence on other parameters, we refer readers to the exact upper bound expression (Equation (38)).

Corollary 4. *Suppose that the comparator sequence is fixed over time, i.e., $\mathbf{x}_t^* = \mathbf{x}^*$, $\forall t \in [T]$. Then, the individual regret bound is $O(T^{2/3}\sqrt{\log T})$, which recovers the static regret bound of the centralized case in (Chaudhary & Kalathil, 2022) in terms of order.*

Remark 3. *Note that when $\mathbf{A}$ is known, there is no estimation error, and the trade-off in terms of $\eta$ and $T_0$ no longer exists. In other words, the agents do not incur the initial regret of $T_0 + T_1 = O(T^{2/3})$, caused by estimation. Then, by choosing $\eta = \Theta(\frac{1}{\sqrt{T}})$, the resulting bound is $O(\sqrt{T}(1 + C_T^*))$, which recovers the result of Shahrampour & Jadbabaie (2018) in terms of order.*

Remark 4. *In the proof of Theorem 3, the regret bound is shown to be $O(T_0 + \frac{1}{\eta} + \frac{1}{\eta}C_T^* + \frac{T\sqrt{\log T_0}}{\sqrt{T_0}} + \frac{\beta\eta T}{(1-\beta)})$. If agents have the knowledge of $C_T^*$, by setting $\eta = \Theta(T^{-1/2}\sqrt{C_T^*})$, the regret bound in Theorem 3 is improved to $O(T^{2/3}\sqrt{\log T} + \sqrt{TC_T^*})$, which enjoys a better dependence on $C_T^*$. Though this choice of step size is non-adaptive, we conjecture that using techniques such as expert advice (Zhang et al. (2018a)) or adaptive step sizes (Jadbabaie et al. (2015)), one can improve the regret bound, which is an interesting future direction.*

## 6 Dynamic Regret Analysis For The Non-Convex Setting

In this section, we study the *non-convex* setting for safe distributed online optimization. Even for offline optimization in the non-convex setting, the standard metric for the convergence analysis is often stationarity, i.e., characterizing the decay rate of the gradient norm. In online optimization, we can also expect that standard regret notions, used in the convex setting, may not be tractable for understanding the algorithm performance. However, in a recent work by Ghai et al. (2022), the authors studied an algorithmic equivalence property between OGD and OMD for certain problem geometries, in the sense that OGD applied to non-convex problems can be approximated by OMD applied to convex functions under reparameterization, using which a sub-linear static regret bound is guaranteed. More specifically, for a centralized problem, suppose that there is a bijective non-linear mapping $q$, such that $\mathbf{u}_t = q(\mathbf{x}_t)$, and consider the OGD and OMD updates

Centralized OGD: $$\mathbf{x}_{t+1}=\operatorname{argmin}_{\mathbf{x}\in\mathcal{X}}\left\{\nabla f_{t}(\mathbf{x}_{t})^{\top}(\mathbf{x}-\mathbf{x}_{t})+\frac{1}{2\eta}\left\|\mathbf{x}-\mathbf{x}_{t}\right\|^{2}\right\},\tag{9}$$

Centralized OMD: $$\mathbf{u}_{t+1}=\operatorname{argmin}_{\mathbf{u}\in\mathcal{X}'}\left\{\nabla\tilde{f}_{t}(\mathbf{u}_{t})^{\top}(\mathbf{u}-\mathbf{u}_{t})+{\frac{1}{\eta}}{\mathcal{D}}_{\phi}(\mathbf{u},\mathbf{u}_{t})\right\},\tag{10}$$

where $\mathcal{X}'$ is the image of $\mathcal{X}$ under the mapping $q$. Ghai et al. (2022) quantified the deviation $\|\mathbf{u}_{t+1} - q(\mathbf{x}_{t+1})\|$ as $O(\eta^{3/2})$ under the following technical assumptions (together with boundedness of gradient norms):

Assumption 5. *There exists a bijective mapping $q : \mathcal{X} \to \mathcal{X}'$ such that $[\nabla^2\phi(\mathbf{u})]^{-1} = J_q(\mathbf{x})J_q(\mathbf{x})^{\top}$ where $\mathbf{u} = q(\mathbf{x})$.*

Assumption 6. *Let $W > 1$ be a constant. Assume that $q(\cdot)$ is $W$-Lipschitz, $\phi(\cdot)$ is 1-strongly convex and smooth with its first and third derivatives upper bounded by $W$. The first and second derivatives of $q^{-1}(\cdot)$ are also bounded by $W$. For all $\mathbf{u} \in \mathcal{X}'$, $\mathcal{D}_\phi(\mathbf{u}, \cdot)$ is $W$-Lipschitz over $\mathcal{X}'$.*

Examples that satisfy these assumptions are provided in Section 3.1 of Ghai et al. (2022).
For example, if $\phi$ is the negative entropy (respectively, log barrier), we can use a quadratic (respectively, exponential) reparameterization for $q$. Amid & Warmuth (2020) showed that in the continuous-time setup, when Assumption 5 holds, the mirror descent regularization induced by $\phi$ can be transformed back to the Euclidean regularization by $q^{-1}$, which implies the equivalence between OMD for convex functions and OGD for non-convex functions. This is due to the fact that higher than second order factors vanish in continuous time, and this assures that the mirror flow and the reparameterized gradient flow coincide. Though the exact equivalence does not hold in the discrete-time case, Ghai et al. (2022) showed that OGD for non-convex functions can still be approximated as OMD for convex functions, and the corresponding static regret bound is $O(T^{2/3})$ under the assumption that $f_t(\mathbf{x}) = \tilde{f}_t(q(\mathbf{x}))$, where $\tilde{f}_t(\cdot)$ is convex. However, we need more technical assumptions to handle the discrete-time setup as higher order terms are relevant and must be judiciously analyzed. Ghai et al. (2022) characterized the sufficient condition to achieve Assumption 5, which entails an implicit OMD reparameterization for a non-convex OGD. We state these (two assumptions) by tailoring them to our problem setting:

Assumption 7. *$\|\nabla\tilde{f}_{i,t}(\mathbf{u})\| \leq G_F$ for all $\mathbf{u} \in \mathcal{X}^{s\prime}$ and $\sup_{\mathbf{u},\mathbf{z}\in\mathcal{X}^{s\prime}}\mathcal{D}_\phi(\mathbf{u},\mathbf{z}) \leq D'$.*

Assumption 8. *Properties of the mapping $q(\cdot)$:*

- *There exists a mapping $q(\cdot)$ such that $f_{i,t}(\mathbf{x}) = \tilde{f}_{i,t}(q(\mathbf{x}))$, where $\tilde{f}_{i,t}(\cdot)$ is convex.*

- *$q(\cdot)$ is a $C^3$-diffeomorphism, and $J_q(\mathbf{x})$ is diagonal.*

- *For any $\mathcal{X} \subset \mathcal{X}^s$ which is compact and convex, $\mathcal{X}' \triangleq q(\mathcal{X})$ is convex and compact.*

We again refer the reader to Section 3.1 of Ghai et al. (2022) for examples related to Assumption 8. In this work, we extend this equivalence to "distributed" variants of OGD and OMD under the additional complexity that the constraint set is unknown, and it can only be approximated via Algorithm 1. Our focus is on analyzing the effect of (i) the constraint estimation as well as (ii) the distributed setup in non-convex online learning, and we also generalize the analysis of Ghai et al. (2022) to the *dynamic* regret. For the technical analysis of the non-convex setting, we also use the following assumption.

Assumption 9. *Let $\mathbf{u}$ and $\{\mathbf{y}_i\}_{i=1}^{m}$ be vectors in $\mathbb{R}^d$. The Bregman divergence satisfies the separate convexity in the following sense*

$${\mathcal{D}}_{\phi}(\mathbf{u},\sum_{i}^{m}\alpha_{i}\mathbf{y}_{i})\leq\sum_{i}^{m}\alpha_{i}{\mathcal{D}}_{\phi}(\mathbf{u},\mathbf{y}_{i}),$$

*where $\boldsymbol{\alpha} \in \Delta_m$ is on the $m$-dimensional simplex.*

This assumption is satisfied by commonly used Bregman divergences, e.g., the Euclidean distance and the KL divergence. We refer interested readers to (Bauschke & Borwein, 2001; Shahrampour & Jadbabaie, 2018) for more information. In the following theorem, we prove that with high probability, the dynamic regret bound of D-Safe-OGD is $O(T^{2/3}\sqrt{\log T} + T^{2/3}C_T^*)$.

Theorem 5. *Suppose Assumptions 1-3 and 6-9 hold and $T = \Omega\big(\big(\frac{L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta})\big)^{3/2}\big)$. By running Algorithm 2 with $\gamma \leq \frac{\Delta_s}{LL_A}$, $\eta = \Theta(T^{-2/3})$, $T_0 = \Theta(T^{2/3})$ and $T_1 = \Theta(\log T)$, we have with probability at least $(1 - 2\delta)$ that $\mathbf{x}_{i,t} \in \mathcal{X}^s$, $\forall i \in [m], t \in [T]$, and $\mathbf{Reg}_{i,T}^{d} = O(T^{2/3}\sqrt{\log(T/\delta)} + T^{2/3}C_{T}^{*})$, $\forall i \in [m]$.*

The complete proof is provided in the Appendix, and the dependence on other problem parameters can be found in Equation (54).
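As a quick numerical check of the reparameterization in Assumption 5, the sketch below pairs the negative-entropy mirror map with an elementwise quadratic map $q(\mathbf{x}) = \mathbf{x}^2/4$ (the specific scaling $1/4$ is our choice for the check, in the spirit of the quadratic reparameterization mentioned above) and verifies $[\nabla^2\phi(\mathbf{u})]^{-1} = J_q(\mathbf{x})J_q(\mathbf{x})^\top$.

```python
import numpy as np

# Negative entropy phi(u) = sum_i (u_i log u_i - u_i) on the positive orthant,
# paired with the elementwise quadratic map u = q(x) = x^2 / 4.
x = np.abs(np.random.default_rng(3).normal(size=5)) + 0.1
u = x ** 2 / 4

hess_phi_inv = np.diag(u)        # [grad^2 phi(u)]^{-1} = diag(1/u)^{-1} = diag(u)
J_q = np.diag(x / 2)             # Jacobian of q at x (diagonal, as in Assumption 8)

# Assumption 5: [grad^2 phi(u)]^{-1} equals J_q(x) J_q(x)^T.
assert np.allclose(hess_phi_inv, J_q @ J_q.T)
print("Assumption 5 holds for (negative entropy, elementwise quadratic q).")
```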
The idea is to show that the distributed OMD and distributed OGD iterates are close enough to each other if the reference points of both updates are identical, i.e., $\mathbf{u}_{i,t} = q(\mathbf{x}_{i,t})$ for all $i \in [m]$. Then, distributed OGD can be viewed as a perturbed version of distributed OMD, and under the assumption of convexity of $\tilde{f}_{i,t}$ the regret bound can be established. Also, as mentioned in the convex case (Remark 4), we conjecture that the dependence of the regret bound on the path-length can be improved to $\sqrt{C_T^*}$. We further have the following corollary that shows our result is a valid generalization of Ghai et al. (2022) to the distributed, *dynamic* setting.

Corollary 6. *Suppose that the comparator sequence is static over time, i.e., $\mathbf{x}_t^* = \mathbf{x}^*$, $\forall t \in [T]$. Then, the individual regret bound becomes $O(T^{2/3}\sqrt{\log T})$, which recovers the static regret bound of Ghai et al. (2022) up to log factors.*

It is worth noting that though in the convex case the estimation of unknown constraints exacerbates the regret bound (due to the $O(T^{2/3})$ time spent on exploration), for the non-convex case, the resulting bound still matches the static regret of Ghai et al. (2022), where there is no estimation error. In other words, there is no trade-off in this case as the static regret (without estimation error) is $O(T^{2/3})$ (Ghai et al., 2022) (disregarding log factors).

## 7 Discussion On Regret Bounds In Terms Of Other Regularity Measures

In this work, we focused on dynamic regret bounds characterized by $C_T^*$, the path-length of the minimizer sequence. It is worth noting that existing works have also presented regret bounds in terms of other regularity measures, which capture the properties of the function sequence from different perspectives. Such measures include (1) the function variation $V_T \triangleq \sum_{t=2}^{T}\sup_{\mathbf{x}\in\mathcal{X}}|f_t(\mathbf{x}) - f_{t-1}(\mathbf{x})|$ (Besbes et al., 2015), (2) the predictive path-length $C_T'(\mathbf{u}_1, \ldots, \mathbf{u}_T) \triangleq \sum_{t=2}^{T}\|\mathbf{u}_t - \Psi_t(\mathbf{u}_{t-1})\|$ (Hall & Willett, 2013), where $\Psi_t$ is a given dynamics, and (3) the gradient variation $D_T \triangleq \sum_{t=1}^{T}\|\nabla f_t(\mathbf{x}_t) - \mathbf{m}_t\|^2$ (Rakhlin & Sridharan, 2013), where $\mathbf{m}_t$ is a predictable sequence computed by the learner. Besbes et al. (2015) proposed a restarting OGD algorithm and showed that when the learner has access to only noisy gradients, the expected dynamic regret is bounded by $O(T^{2/3}(V_T + 1)^{1/3})$ for convex functions and $O(\log T\sqrt{T(1 + V_T)})$ for strongly convex functions. The above mentioned regularity measures are not directly comparable to each other. In this regard, Jadbabaie et al. (2015) provided a dynamic regret bound in terms of $C_T^*$, $D_T$ and $V_T$ for the adaptive optimistic OMD algorithm. Also, Chang & Shahrampour (2021) revisited OGD with multiple gradient queries per iteration (in the unconstrained setup) and established the regret bound of $O(\min\{V_T, C_T^*, C_{2,T}^*\})$ for strongly convex and smooth functions. Dynamic regret has also been studied for functions with a parameterizable structure (Ravier et al., 2019) as well as composite convex functions (Ajalloeian et al., 2020). Besides the dynamic regret, a relevant regret measure called *adaptive* regret (Hazan & Seshadhri, 2009) for a contiguous time interval $T_{sub}$ is defined as follows

$$\mathbf{Reg}_{T}^{a}(T_{sub})\triangleq\max_{[i,i+T_{sub}-1]\subset[T]}\left(\sum_{t=i}^{i+T_{sub}-1}f_{t}(\mathbf{x}_{t})-\min_{\mathbf{x}\in\mathcal{X}}\sum_{t=i}^{i+T_{sub}-1}f_{t}(\mathbf{x})\right).\tag{11}$$

Zhang et al.
Zhang et al. (2018b) analyzed the connection between adaptive regret and dynamic regret and provided adaptive algorithms with provably small dynamic regret for convex, exponentially concave, and strongly convex functions. In contrast to the aforementioned works, where a projection operator is needed, Wan et al. (2021) proposed a projection-free online method replacing the projection step with multiple linear optimization steps. Without assuming smoothness, they proved dynamic regret bounds of $O(\max\{T^{2/3}V_T^{1/3}, \sqrt{T}\})$ and $O(\max\{\sqrt{TV_T\log T}, \log T\})$ for convex and strongly convex functions, respectively. On the other hand, Wan et al. (2023) considered the case of smooth convex functions and improved the dynamic regret bound from $O(\sqrt{T(1 + V_T + \sqrt{D_T})})$ to $O(\sqrt{T(1 + V_T)})$.

## Conclusion

In this work, we considered safe distributed online optimization with an unknown set of linear constraints. The goal of the network is to ensure that the action sequence selected by each agent, which only has partial information about the global function, is competitive with the centralized minimizers in hindsight without violating the safety constraints. To address this problem, we proposed D-Safe-OGD, where, starting from a safe region, all agents perform exploration to estimate the unknown constraints in a distributed fashion. Then, distributed OGD is applied over the feasible sets formed by the agents' estimates. For convex functions, we proved a dynamic regret bound of $O(T^{2/3}\sqrt{\log T} + T^{1/3}C^*_T)$, which recovers the static regret bound of Chaudhary & Kalathil (2022) for the centralized case (single agent). Then, we showed that for the non-convex setting, the dynamic regret is upper bounded by $O(T^{2/3}\sqrt{\log T} + T^{2/3}C^*_T)$, which recovers the static regret bound of Ghai et al. (2022) for the centralized case (single agent) up to log factors. Possible future directions include improving the regret using adaptive techniques and/or deriving comprehensive regret bounds in terms of other variation measures, such as $V_T$.

## Acknowledgements

The authors gratefully acknowledge the support of the National Science Foundation (NSF). T-J. Chang and S. Shahrampour were supported by NSF ECCS-2136206. The work of S. Chaudhary and D. Kalathil was supported in part by grants NSF CAREER-EPCN-2045783 and NSF CNS-1955696.

## References

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. *Advances in Neural Information Processing Systems*, 24, 2011.

Amirhossein Ajalloeian, Andrea Simonetto, and Emiliano Dall'Anese. Inexact online proximal-gradient method for time-varying convex optimization. In *American Control Conference (ACC)*, pp. 2850–2857, 2020.

Mohammad Akbari, Bahman Gharesifard, and Tamás Linder. Distributed online convex optimization on time-varying directed graphs. *IEEE Transactions on Control of Network Systems*, 4(3):417–428, 2015.

Sanae Amani, Mahnoosh Alizadeh, and Christos Thrampoulidis. Linear stochastic bandits under safety constraints. *Advances in Neural Information Processing Systems*, 32, 2019.

Ehsan Amid and Manfred KK Warmuth. Reparameterizing mirror descent as gradient descent. *Advances in Neural Information Processing Systems*, 33:8430–8439, 2020.

Karl Johan Åström and Richard M Murray. *Feedback Systems*. Princeton University Press, 2010.

Dheeraj Baby and Yu-Xiang Wang. Optimal dynamic regret in exp-concave online learning. In *Conference on Learning Theory*, pp. 359–409. PMLR, 2021.

Dheeraj Baby and Yu-Xiang Wang.
Optimal dynamic regret in proper online learning with strongly convex losses and beyond. In *International Conference on Artificial Intelligence and Statistics*, pp. 1805–1845. PMLR, 2022. Heinz H Bauschke and Jonathan M Borwein. Joint and separate convexity of the bregman distance. In Studies in Computational Mathematics, volume 8, pp. 23–36. Elsevier, 2001. Omar Besbes, Yonatan Gur, and Assaf Zeevi. Non-stationary stochastic optimization. *Operations research*, 63(5): 1227–1244, 2015. J Frédéric Bonnans, Roberto Cominetti, and Alexander Shapiro. Sensitivity analysis of optimization problems under second order regular constraints. *Mathematics of Operations Research*, 23(4):806–831, 1998. Xuanyu Cao and KJ Ray Liu. Online convex optimization with time-varying constraints and bandit feedback. IEEE Transactions on automatic control, 64(7):2665–2680, 2018. Ting-Jui Chang and Shahin Shahrampour. On online optimization: Dynamic regret analysis of strongly convex and smooth problems. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35(8):6966–6973, 2021. Sapana Chaudhary and Dileep Kalathil. Safe online convex optimization with unknown linear safety constraints. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 36(6):6175–6182, 2022. Tianyi Chen, Qing Ling, and Georgios B Giannakis. An online convex optimization approach to proactive network resource allocation. *IEEE Transactions on Signal Processing*, 65(24):6350–6364, 2017. Rishabh Dixit, Amrit Singh Bedi, Ketan Rajawat, and Alec Koppel. Distributed online learning over time-varying graphs via proximal gradient descent. In *2019 IEEE 58th Conference on Decision and Control (CDC)*, pp. 2745– 2751. IEEE, 2019a. Rishabh Dixit, Amrit Singh Bedi, Ruchi Tripathi, and Ketan Rajawat. Online learning with inexact proximal online gradient descent algorithms. *IEEE Transactions on Signal Processing*, 67(5):1338–1352, 2019b. Roel Dobbe, Patricia Hidalgo-Gonzalez, Stavros Karagiannopoulos, Rodrigo Henriquez-Auba, Gabriela Hug, Duncan S Callaway, and Claire J Tomlin. Learning to control in power systems: Design and analysis guidelines for concrete safety problems. *Electric Power Systems Research*, 189:106615, 2020. Nima Eshraghi and Ben Liang. Improving dynamic regret in distributed online mirror descent using primal and dual information. In *Learning for Dynamics and Control Conference*, pp. 637–649. PMLR, 2022. Shuai Fan, Guangyu He, Xinyang Zhou, and Mingjian Cui. Online optimization for networked distributed energy resources with time-coupling constraints. *IEEE Transactions on Smart Grid*, 12(1):251–267, 2020. Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, and Hamed Hassani. Safe learning under uncertain objectives and constraints. *arXiv preprint arXiv:2006.13326*, 2020. Udaya Ghai, Zhou Lu, and Elad Hazan. Non-convex online learning via algorithmic equivalence. *Advances in Neural* Information Processing Systems (NeurIPS), 35:22161–22172, 2022. Eric Hall and Rebecca Willett. Dynamical models and tracking regret in online convex programming. In *International* Conference on Machine Learning, pp. 579–587. PMLR, 2013. Elad Hazan. Introduction to online convex optimization. *Foundations and Trends in Optimization*, 2(3-4):157–325, 2016. Elad Hazan and Satyen Kale. Projection-free online learning. In *International Coference on International Conference* on Machine Learning (ICML), pp. 1843–1850, 2012. Elad Hazan and Comandur Seshadhri. Efficient learning algorithms for changing environments. 
In *International* Conference on Machine Learning (ICML), pp. 393–400, 2009. Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. *Machine* Learning, 69(2-3):169–192, 2007. Elad Hazan, Karan Singh, and Cyril Zhang. Efficient regret minimization in non-convex games. In International Conference on Machine Learning (ICML), pp. 1433–1441. PMLR, 2017. Saghar Hosseini, Airlie Chapman, and Mehran Mesbahi. Online distributed convex optimization on dynamic networks. IEEE Transactions on Automatic Control, 61(11):3545–3550, 2016. Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online optimization: Competing with dynamic comparators. In *Artificial Intelligence and Statistics*, pp. 398–406, 2015. Rodolphe Jenatton, Jim Huang, and Cédric Archambeau. Adaptive algorithms for online convex optimization with long-term constraints. In *International Conference on Machine Learning*, pp. 402–411. PMLR, 2016. Kia Khezeli and Eilyan Bitar. Safe linear stochastic bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06):10202–10209, 2020. Jueyou Li, Chuanye Gu, Zhiyou Wu, and Tingwen Huang. Online learning algorithm for distributed convex optimization with time-varying coupled constraints and bandit feedback. *IEEE transactions on cybernetics*, 2020. Xiuxian Li, Lihua Xie, and Na Li. A survey of decentralized online learning. *arXiv preprint arXiv:2205.00473*, 2022. Qiulin Lin, Hanling Yi, John Pang, Minghua Chen, Adam Wierman, Michael Honig, and Yuanzhang Xiao. Competitive online optimization under inventory constraints. *Proceedings of the ACM on Measurement and Analysis of* Computing Systems, 3(1):1–28, 2019. Jun S Liu. *Monte Carlo strategies in scientific computing*. Springer Science & Business Media, 2008. Qingsong Liu, Wenfei Wu, Longbo Huang, and Zhixuan Fang. Simultaneously achieving sublinear regret and constraint violations for online convex optimization with time-varying constraints. ACM SIGMETRICS Performance Evaluation Review, 49(3):4–5, 2022. Kaihong Lu and Long Wang. Online distributed optimization with nonconvex objective functions: Sublinearity of first-order optimality condition-based regret. *IEEE Transactions on Automatic Control*, 67(6):3029–3035, 2021. Nguyen Cong Luong, Dinh Thai Hoang, Shimin Gong, Dusit Niyato, Ping Wang, Ying-Chang Liang, and Dong In Kim. Applications of deep reinforcement learning in communications and networking: A survey. *IEEE Communications Surveys & Tutorials*, 21(4):3133–3174, 2019. Mehrdad Mahdavi, Rong Jin, and Tianbao Yang. Trading regret for efficiency: online convex optimization with long term constraints. *The Journal of Machine Learning Research*, 13(1):2503–2528, 2012. David Mateos-Núnez and Jorge Cortés. Distributed online convex optimization over jointly connected digraphs. IEEE Transactions on Network Science and Engineering, 1(1):23–37, 2014. Aryan Mokhtari, Shahin Shahrampour, Ali Jadbabaie, and Alejandro Ribeiro. Online optimization in dynamic environments: Improved regret rates for strongly convex problems. In IEEE 55th Conference on Decision and Control (CDC), pp. 7195–7201, 2016. Michael J Neely and Hao Yu. Online convex optimization with time-varying constraints. arXiv preprint arXiv:1702.04783, 2017. Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Conference on Learning Theory, pp. 993–1019. PMLR, 2013. Robert J Ravier, A Robert Calderbank, and Vahid Tarokh. 
Prediction in online convex optimization for parametrizable objective functions. In *IEEE 58th Conference on Decision and Control (CDC)*, pp. 2455–2460, 2019.

Shahin Shahrampour and Ali Jadbabaie. An online optimization approach for multi-agent tracking of dynamic parameters in the presence of adversarial noise. In *American Control Conference (ACC)*, pp. 3306–3311, 2017.

Shahin Shahrampour and Ali Jadbabaie. Distributed online optimization in dynamic environments using mirror descent. *IEEE Transactions on Automatic Control*, 63(3):714–725, 2018.

Pranay Sharma, Prashant Khanduri, Lixin Shen, Donald J Bucci, and Pramod K Varshney. On distributed online convex optimization with sublinear dynamic regret and fit. In *2021 55th Asilomar Conference on Signals, Systems, and Computers*, pp. 1013–1017. IEEE, 2021.

Wei Shi, Qing Ling, Gang Wu, and Wotao Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. *SIAM Journal on Optimization*, 25(2):944–966, 2015.

Wen Sun, Debadeepta Dey, and Ashish Kapoor. Safety-aware algorithms for adversarial contextual bandit. In *International Conference on Machine Learning*, pp. 3280–3288. PMLR, 2017.

Joel A Tropp et al. An introduction to matrix concentration inequalities. *Foundations and Trends in Machine Learning*, 8(1-2):1–230, 2015.

Ilnura Usmanova, Andreas Krause, and Maryam Kamgarpour. Safe convex learning under uncertain constraints. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 2106–2114. PMLR, 2019.

Yuanyu Wan, Wei-Wei Tu, and Lijun Zhang. Projection-free distributed online convex optimization with $O(\sqrt{T})$ communication complexity. In *International Conference on Machine Learning*, pp. 9818–9828. PMLR, 2020.

Yuanyu Wan, Bo Xue, and Lijun Zhang. Projection-free online learning in dynamic environments. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 10067–10075, 2021.

Yuanyu Wan, Lijun Zhang, and Mingli Song. Improved dynamic regret for online Frank-Wolfe. *arXiv preprint arXiv:2302.05620*, 2023.

Feng Yan, Shreyas Sundaram, SVN Vishwanathan, and Yuan Qi. Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties. *IEEE Transactions on Knowledge and Data Engineering*, 25(11):2483–2493, 2012.

Xinlei Yi, Xiuxian Li, Lihua Xie, and Karl H Johansson. Distributed online convex optimization with time-varying coupled inequality constraints. *IEEE Transactions on Signal Processing*, 68:731–746, 2020a.

Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Henrik Johansson. Distributed bandit online convex optimization with time-varying coupled inequality constraints. *IEEE Transactions on Automatic Control*, 66(10):4620–4635, 2020b.

Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Johansson. Regret and cumulative constraint violation analysis for online convex optimization with long term constraints. In *International Conference on Machine Learning*, pp. 11998–12008. PMLR, 2021.

Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl H Johansson. Regret and cumulative constraint violation analysis for distributed online constrained convex optimization. *IEEE Transactions on Automatic Control*, 2022.

Hao Yu and Michael J Neely. A low complexity algorithm with $O(\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. *Journal of Machine Learning Research*, 21(1):1–24, 2020.

Hao Yu, Michael Neely, and Xiaohan Wei.
Online convex optimization with stochastic constraints. *Advances in Neural Information Processing Systems*, 30, 2017.

Deming Yuan, Daniel WC Ho, and Guo-Ping Jiang. An adaptive primal-dual subgradient algorithm for online distributed constrained optimization. *IEEE Transactions on Cybernetics*, 48(11):3045–3055, 2017.

Deming Yuan, Alexandre Proutiere, and Guodong Shi. Distributed online linear regressions. *IEEE Transactions on Information Theory*, 67(1):616–639, 2020.

Deming Yuan, Alexandre Proutiere, and Guodong Shi. Distributed online optimization with long-term constraints. *IEEE Transactions on Automatic Control*, 67(3):1089–1104, 2021.

Jianjun Yuan and Andrew Lamperski. Online convex optimization for cumulative constraints. *Advances in Neural Information Processing Systems*, 31, 2018.

Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, and Zhi-Hua Zhou. Improved dynamic regret for non-degenerate functions. In *Advances in Neural Information Processing Systems*, pp. 732–741, 2017a.

Lijun Zhang, Shiyin Lu, and Zhi-Hua Zhou. Adaptive online learning in dynamic environments. In *Advances in Neural Information Processing Systems*, pp. 1323–1333, 2018a.

Lijun Zhang, Tianbao Yang, Zhi-Hua Zhou, et al. Dynamic regret of strongly adaptive methods. In *International Conference on Machine Learning*, pp. 5882–5891. PMLR, 2018b.

Wenpeng Zhang, Peilin Zhao, Wenwu Zhu, Steven CH Hoi, and Tong Zhang. Projection-free distributed online learning in networks. In *International Conference on Machine Learning*, pp. 4054–4062. PMLR, 2017b.

Yan Zhang, Robert J Ravier, Michael M Zavlanos, and Vahid Tarokh. A distributed online convex optimization algorithm with improved dynamic regret. In *2019 IEEE 58th Conference on Decision and Control (CDC)*, pp. 2449–2454. IEEE, 2019.

Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In *Proceedings of the 20th International Conference on Machine Learning*, pp. 928–936, 2003.

## A Appendix

In this section, we provide the proofs of our theoretical results. In Section A.1, we state the results we use in our analysis. Section A.2 includes the proof of the estimation error bound in Lemma 2. In Sections A.3 and A.4, we provide the proofs for Theorem 3 (convex case) and Theorem 5 (non-convex case), respectively.

## A.1 Preliminaries

Theorem 7. *(Theorem 2 in Abbasi-Yadkori et al. (2011)). Let* $\{\mathcal{F}_t\}_{t=0}^\infty$ *be a filtration and* $\{w_t\}_{t=1}^\infty$ *be a real-valued stochastic process, where* $w_t$ *is* $\mathcal{F}_t$*-measurable and conditionally* $R$*-sub-Gaussian for some* $R \geq 0$*. Let* $\{\mathbf{x}_t\}_{t=1}^\infty$ *be an* $\mathbb{R}^d$*-valued stochastic process such that* $\mathbf{x}_t$ *is* $\mathcal{F}_{t-1}$*-measurable. Let* $\mathbf{V}_T \triangleq \sum_{t=1}^T \mathbf{x}_t\mathbf{x}_t^\top + \lambda\mathbf{I}$*, where* $\lambda > 0$*. Define* $y_t = \mathbf{a}^\top\mathbf{x}_t + w_t$*; then* $\hat{\mathbf{a}}_T = \mathbf{V}_T^{-1}\sum_{t=1}^T y_t\mathbf{x}_t$ *is the* $\ell_2$*-regularized least squares estimate of* $\mathbf{a}$*. Assume* $\|\mathbf{a}\| \leq L_A$ *and* $\|\mathbf{x}_t\| \leq L$, $\forall t$*. Then, for any* $\delta \in (0,1)$*, with probability* $(1-\delta)$*, the true parameter* $\mathbf{a}$ *lies in the following set:*

$$\left\{\mathbf{a}\in\mathbb{R}^{d}:\left\|\mathbf{a}-{\hat{\mathbf{a}}}_{T}\right\|_{\mathbf{V}_{T}}\leq R{\sqrt{d\log\left({\frac{1+T L^{2}/\lambda}{\delta}}\right)}}+{\sqrt{\lambda}}L_{A}\right\},$$

*for all* $T \geq 1$.
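For intuition, here is a minimal simulation (not part of the paper) of the confidence set in Theorem 7: we form the $\ell_2$-regularized least-squares estimate from noisy linear measurements and compare the $\|\cdot\|_{\mathbf{V}_T}$-weighted error to the stated radius; all problem data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, lam, R, L_A = 3, 500, 1.0, 0.1, 1.0
a_true = rng.uniform(-0.5, 0.5, d)

X = rng.uniform(-1.0, 1.0, (T, d))           # actions x_t with bounded norm
y = X @ a_true + R * rng.standard_normal(T)  # noisy linear measurements (R-sub-Gaussian)

V = X.T @ X + lam * np.eye(d)                # regularized Gram matrix V_T
a_hat = np.linalg.solve(V, X.T @ y)          # l2-regularized least-squares estimate

delta = 0.05
L = np.max(np.linalg.norm(X, axis=1))
beta = R * np.sqrt(d * np.log((1 + T * L**2 / lam) / delta)) + np.sqrt(lam) * L_A

err_V = np.sqrt((a_hat - a_true) @ V @ (a_hat - a_true))  # ||a_hat - a||_{V_T}
print(f"||a_hat - a||_V = {err_V:.3f}  vs  radius beta = {beta:.3f}")
```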
Theorem 8. *(Theorem 5.1.1 in Tropp et al. (2015)). Consider a finite sequence* $\{\mathbf{X}_t\}$ *of independent, random, positive semi-definite matrices of dimension* $d$*. Assume that* $\lambda_{\max}(\mathbf{X}_t) \leq L$, $\forall t$*. Define* $\mathbf{Y} \triangleq \sum_t \mathbf{X}_t$ *and denote* $\lambda_{\min}(\mathbb{E}[\mathbf{Y}])$ *by* $\mu$*. Then, we have*

$$\mathbb{P}(\lambda_{\operatorname*{min}}(\mathbf{Y})\leq\epsilon\mu)\leq d\exp\big(-(1-\epsilon)^{2}{\frac{\mu}{2L}}\big),{\mathrm{~for~any~}}\epsilon\in(0,1).$$

Now, let us define the *shrunk* version of the polytope as follows

$${\mathcal{X}}_{\mathrm{in}}^{s}\triangleq\{\mathbf{x}\in\mathbb{R}^{d}:[\mathbf{A}]_{k,:}\mathbf{x}+\tau_{\mathrm{in}}\leq b_{k},\ \forall k\in[n]\},\ \text{for some}\ \tau_{\mathrm{in}}>0.\tag{12}$$

Lemma 9 (Lemma 1 in Fereydounian et al. (2020)). *Consider a positive constant* $\tau_{\mathrm{in}}$ *such that* $\mathcal{X}^s_{\mathrm{in}}$ *is non-empty. Then, for any* $\mathbf{x} \in \mathcal{X}^s$,

$$\|\Pi_{\mathcal{X}_{\mathrm{in}}^{s}}(\mathbf{x})-\mathbf{x}\|\leq{\frac{\sqrt{d}\tau_{\mathrm{in}}}{C(\mathbf{A},\mathbf{b})}},\tag{13}$$

*where* $C(\mathbf{A},\mathbf{b})$ *is a positive constant that depends only on the matrix* $\mathbf{A}$ *and the vector* $\mathbf{b}$.

Theorem 10. *(Theorem 3.7 in Shi et al. (2015)) Let us consider the following notation for the EXTRA algorithm, where* $\mathbf{x}_{i,k}$ *is the iterate of agent* $i$ *at time* $k$*:*

$$\mathbf{X}_{k}=\begin{bmatrix}\mathbf{x}_{1,k}^{\top}\\ \vdots\\ \mathbf{x}_{m,k}^{\top}\end{bmatrix},\quad\mathbf{x}^{*}=\operatorname*{argmin}_{\mathbf{x}}\Big\{\sum_{i=1}^{m}f_{i}(\mathbf{x})\Big\},\quad\mathbf{X}^{*}=\begin{bmatrix}\mathbf{x}^{*\top}\\ \vdots\\ \mathbf{x}^{*\top}\end{bmatrix},\quad\mathbf{f}(\mathbf{X})=\sum_{i=1}^{m}f_{i}(\mathbf{x}_{i}).$$

*A convex function* $h(\cdot)$ *is restricted strongly convex w.r.t. a point* $\mathbf{y}$ *if there exists* $\mu > 0$ *such that*

$$\langle\nabla h(\mathbf{x})-\nabla h(\mathbf{y}),\mathbf{x}-\mathbf{y}\rangle\geq\mu\|\mathbf{x}-\mathbf{y}\|^{2},\ \forall\mathbf{x}.$$

*Suppose that the gradient of* $\mathbf{f}(\mathbf{X})$ *w.r.t.* $\mathbf{X}$ *is Lipschitz continuous with a constant* $L_f$ *and* $\mathbf{f}(\mathbf{X}) + \frac{1}{4\alpha}\|\mathbf{X}-\mathbf{X}^*\|^2_{\tilde{\mathbf{P}}-\mathbf{P}}$ *is restricted strongly convex w.r.t.* $\mathbf{X}^*$ *with a constant* $\mu_g$*. Then, with a proper step size* $\alpha < \frac{2\mu_g\lambda_{\min}(\tilde{\mathbf{P}})}{L_f^2}$*, there exists* $\varsigma > 0$ *such that* $\|\mathbf{X}_k - \mathbf{X}^*\|^2_{\tilde{\mathbf{P}}}$ *converges to 0 at the* $R$*-linear rate of* $O((1+\varsigma)^{-k})$.
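The following sketch (ours, not from the paper) illustrates the EXTRA update used in the consensus phase of Algorithm 1, on stand-in local least-squares objectives; the ring network, step size, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, alpha, K = 4, 2, 0.01, 3000  # conservatively small step size

# Local quadratic losses l_i(x) = 0.5 * ||B_i x - c_i||^2 (stand-ins for the
# regularized least-squares objectives l_i(A) in Algorithm 1)
B = 0.5 * rng.standard_normal((m, 5, d))
c = rng.standard_normal((m, 5))
grad = lambda X: np.stack([B[i].T @ (B[i] @ X[i] - c[i]) for i in range(m)])

# Symmetric doubly stochastic mixing matrix W for a ring; W_tilde = (I + W)/2
W = np.zeros((m, m))
for i in range(m):
    W[i, i], W[i, (i - 1) % m], W[i, (i + 1) % m] = 0.5, 0.25, 0.25
W_tilde = 0.5 * (np.eye(m) + W)

X_prev = np.zeros((m, d))
X_curr = W @ X_prev - alpha * grad(X_prev)   # first EXTRA step
for _ in range(K):
    # x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha * (grad^{k+1} - grad^k)
    X_next = (np.eye(m) + W) @ X_curr - W_tilde @ X_prev \
             - alpha * (grad(X_curr) - grad(X_prev))
    X_prev, X_curr = X_curr, X_next

x_star = np.linalg.lstsq(B.reshape(-1, d), c.reshape(-1), rcond=None)[0]
print("distance of each agent to the global minimizer:",
      np.linalg.norm(X_curr - x_star, axis=1))
```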
## A.2 Safe Distributed Set Estimation

Proof of Lemma 2. Let $\mathbf{V}_{T_0} \triangleq \sum_{i=1}^m\sum_{t=1}^{T_0}\mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top$ and $\mathbf{V} = \mathbf{V}_{T_0} + \lambda\mathbf{I}$. Let $\widehat{\mathbf{A}}$ be the solution of $\operatorname{argmin}_{\mathbf{A}}\sum_{i=1}^m l_i(\mathbf{A})$, and let $[\widehat{\mathbf{A}}]_{k,:}$ and $[\mathbf{A}]_{k,:}$ be the $k$-th rows of $\widehat{\mathbf{A}}$ and $\mathbf{A}$, respectively. Based on Theorem 7, we have with probability at least $(1-\delta)$,

$$\|[\widehat{\mathbf{A}}]_{k,:}-[\mathbf{A}]_{k,:}\|_{\mathbf{V}}\leq R{\sqrt{d\log{\big(}{\frac{1+m T_{0}L^{2}/\lambda}{\delta/n}}{\big)}}}+{\sqrt{\lambda}}L_{A},\ \forall k\in[n].\tag{14}$$

Knowing that $\forall i\in[m]$, $\forall t\in[T_0]$, $\mathbf{x}_{i,t} = (1-\gamma)\mathbf{x}^s + \gamma\zeta_{i,t}$, we have $\lambda_{\max}(\mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top) \leq L^2$ and $\mathbb{E}[\mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top] = (1-\gamma)^2\mathbf{x}^s\mathbf{x}^{s\top} + \gamma^2\sigma_\zeta^2\mathbf{I} \succeq \gamma^2\sigma_\zeta^2\mathbf{I}$. Therefore, we have

$$\lambda_{\operatorname*{min}}(\mathbb{E}[\mathbf{V}_{T_{0}}])=\lambda_{\operatorname*{min}}\Big(\sum_{i=1}^{m}\sum_{t=1}^{T_{0}}\mathbb{E}[\mathbf{x}_{i,t}\mathbf{x}_{i,t}^{\top}]\Big)\geq m T_{0}\gamma^{2}\sigma_{\zeta}^{2}.\tag{15}$$

Based on (15) and Theorem 8, we have

$$\mathbb{P}\Big(\lambda_{\operatorname*{min}}(\mathbf{V}_{T_{0}})\leq\epsilon\lambda_{\operatorname*{min}}(\mathbb{E}[\mathbf{V}_{T_{0}}])\Big)\leq d\exp\big(-(1-\epsilon)^{2}\frac{m T_{0}\gamma^{2}\sigma_{\zeta}^{2}}{2L^{2}}\big).\tag{16}$$

By setting $\epsilon = \frac{1}{2}$ and $T_0 \geq \frac{8L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta})$, from (16) we have

$$\mathbb{P}\big{(}\lambda_{\operatorname*{min}}(\mathbf{V})\geq{\frac{1}{2}}m T_{0}\gamma^{2}\sigma_{\zeta}^{2}\big{)}\geq\mathbb{P}\big{(}\lambda_{\operatorname*{min}}(\mathbf{V}_{T_{0}})\geq{\frac{1}{2}}m T_{0}\gamma^{2}\sigma_{\zeta}^{2}\big{)}\geq(1-\delta).\tag{17}$$

Combining equations (14) and (17), we have with probability at least $(1-2\delta)$,

$$\|[\widehat{\mathbf{A}}]_{k,:}-[\mathbf{A}]_{k,:}\|\leq{\frac{R{\sqrt{d\log\left({\frac{1+m T_{0}L^{2}/\lambda}{\delta/n}}\right)}}+{\sqrt{\lambda}}L_{A}}{{\sqrt{{\frac{1}{2}}m\gamma^{2}\sigma_{\zeta}^{2}T_{0}}}}},\ \forall k\in[n].\tag{18}$$

Let agent $i$'s local estimate of $\mathbf{A}$ at time $t \in [T_0+1, T_0+T_1]$ returned by the **EXTRA** algorithm (Shi et al., 2015) be denoted by $\widehat{\mathbf{A}}_i^t$. Next, we upper bound the distance between $\widehat{\mathbf{A}} = \operatorname{argmin}_{\mathbf{A}}\sum_{i=1}^m l_i(\mathbf{A})$ and $\widehat{\mathbf{A}}_i^t$ based on Theorem 10, as follows. Based on the definition of $l_i(\mathbf{A})$ and considering the vectorized version of $\mathbf{A}$, the Hessian matrix has the following expression

$$\nabla^2 l_i(\mathbf{A}) = \sum_{t=1}^{T_0} 2\begin{bmatrix}\mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top & & \\ & \ddots & \\ & & \mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top\end{bmatrix} + \frac{2\lambda}{m}\mathbf{I} \preceq 2\Big(T_0L^2 + \frac{\lambda}{m}\Big)\mathbf{I},$$

where the inequality is due to the boundedness of the baseline action and the noise vector. From the above, we know $\sum_{i=1}^m l_i(\mathbf{A})$ is Lipschitz smooth with constant $2(T_0L^2 + \frac{\lambda}{m})$ and strongly convex with constant $\frac{2\lambda}{m}$, so by selecting a step size $\alpha < \frac{(\lambda/m)\lambda_{\min}(\tilde{\mathbf{P}})}{(T_0L^2+\frac{\lambda}{m})^2}$ as suggested by Theorem 10, there exists a $\tau \in (0,1)$ such that

$$\|[\widehat{\mathbf{A}}_{i}^{t}]_{k,:}-[\widehat{\mathbf{A}}]_{k,:}\|\leq\nu\tau^{(t-T_{0})},\ \forall i\in[m],k\in[n],t\in[T_{0}+1,\ldots,T_{0}+T_{1}],\tag{19}$$

where $\nu > 0$ is a constant. Based on (18), (19) and our choice of $T_1$ ($T_1 = (-\log\tau)^{-1}\log(\nu T^\rho)$), for $k\in[n]$, $t\in[T_0+1,\ldots,T_0+T_1]$ and $i,j\in[m]$, we have

$$\|[\widehat{\mathbf{A}}_{i}^{t}]_{k,:}-[\mathbf{A}]_{k,:}\|\leq\|[\widehat{\mathbf{A}}_{i}^{t}]_{k,:}-[\widehat{\mathbf{A}}]_{k,:}\|+\|[\widehat{\mathbf{A}}]_{k,:}-[\mathbf{A}]_{k,:}\|\leq\frac{1}{T^{\rho}}+\frac{R\sqrt{d\log\left(\frac{1+mT_{0}L^{2}/\lambda}{\delta/n}\right)}+\sqrt{\lambda}L_{A}}{\sqrt{\frac{1}{2}m\gamma^{2}\sigma_{\zeta}^{2}T_{0}}},\tag{20}$$

and

$$\|[\widehat{\mathbf{A}}_{i}^{t}]_{k,:}-[\widehat{\mathbf{A}}_{j}^{t}]_{k,:}\|\leq\|[\widehat{\mathbf{A}}_{i}^{t}]_{k,:}-[\widehat{\mathbf{A}}]_{k,:}\|+\|[\widehat{\mathbf{A}}]_{k,:}-[\widehat{\mathbf{A}}_{j}^{t}]_{k,:}\|\leq\frac{2}{T^{\rho}}.\tag{21}$$
Lemma 11. *Define*

$$\mathcal{B}_{r}\triangleq\frac{1}{T^{\rho}}+\frac{R\sqrt{d\log\left(\frac{1+m T_{0}L^{2}/\lambda}{\delta/n}\right)}+\sqrt{\lambda}L_{A}}{\sqrt{\frac{1}{2}m\gamma^{2}\sigma_{\zeta}^{2}T_{0}}}.$$

*For each agent* $i$*, construct* $\widehat{\mathcal{X}}^s_i$ *based on* (8) *with* $\mathcal{C}_{i,k}$ *following from* (7)*. By running Algorithm 1 with user-specified* $T_0 = \Omega(\frac{L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta}))$ *and* $T_1 = \Theta(\log T^\rho)$*, there exists a mutual shrunk polytope (see the definition in* (12)*) subset* $\mathcal{X}^s_{\mathrm{in}}$ *(with* $\tau_{\mathrm{in}} = 2\mathcal{B}_r L$*) for* $\widehat{\mathcal{X}}^s_i$, $\forall i\in[m]$*, with probability at least* $(1-2\delta)$.

Proof of Lemma 11. Consider a mutual shrunk polytope subset $\mathcal{X}^s_{\mathrm{in}}$ ($\tau_{\mathrm{in}} = 2\mathcal{B}_rL$). Based on Lemma 2, with probability at least $1-2\delta$, we have for any $\mathbf{x}\in\mathcal{X}^s_{\mathrm{in}}$,

$$\begin{aligned}
[\widehat{\mathbf{A}}_{i}]_{k,:}\mathbf{x}+\mathcal{B}_{r}\|\mathbf{x}\|&=[\mathbf{A}]_{k,:}\mathbf{x}+([\widehat{\mathbf{A}}_{i}]_{k,:}-[\mathbf{A}]_{k,:})\mathbf{x}+\mathcal{B}_{r}\|\mathbf{x}\|\\
&\leq[\mathbf{A}]_{k,:}\mathbf{x}+\|[\widehat{\mathbf{A}}_{i}]_{k,:}-[\mathbf{A}]_{k,:}\|\|\mathbf{x}\|+\mathcal{B}_{r}\|\mathbf{x}\|\\
&\leq[\mathbf{A}]_{k,:}\mathbf{x}+2\mathcal{B}_{r}\|\mathbf{x}\|\leq[\mathbf{A}]_{k,:}\mathbf{x}+2\mathcal{B}_{r}L\leq b_{k},\ \forall k\in[n]\ \text{and}\ \forall i\in[m],
\end{aligned}\tag{22}$$

which implies that $\mathcal{X}^s_{\mathrm{in}} \subset \widehat{\mathcal{X}}^s_i$, $\forall i$.

Lemma 12. *For each agent* $i$*, construct* $\widehat{\mathcal{X}}^s_i$ *based on* (8) *with* $\mathcal{C}_{i,k}$ *following from* (7)*. By running Algorithm 1 with user-specified* $T_0 = \Omega(\frac{L^2}{m\gamma^2\sigma_\zeta^2}\log(\frac{d}{\delta}))$ *and* $T_1 = \Theta(\log T^\rho)$*, we have for any point* $\mathbf{x}$,

$$\|\Pi_{\widehat{\mathcal{X}}_{i}^{s}}(\mathbf{x})-\Pi_{\widehat{\mathcal{X}}_{j}^{s}}(\mathbf{x})\|\leq O\Big(\frac{1}{T^{\rho}}\Big),\ \forall i,j\in[m].\tag{23}$$

Before we discuss the proof of Lemma 12, for the sake of completeness, we provide the formal statement of Theorem 3.1 in Bonnans et al. (1998), which is used in the derivation of Lemma 12. We first define the notation used in (Bonnans et al., 1998); note that this notation is only locally defined for the statement of Theorem 3.1. The work of Bonnans et al. (1998) focuses on the sensitivity analysis of parametric optimization problems of the form

$$(P_{\mathbf{u}}):\operatorname*{min}_{\mathbf{x}\in{\mathcal{X}}}f(\mathbf{x},\mathbf{u})\ {\mathrm{subject~to~}}G(\mathbf{x},\mathbf{u})\in{\mathcal{K}},$$

where $\mathcal{X}$ is a finite dimensional space, $\mathcal{U}$ is a Banach space, $\mathcal{K}$ is a closed subset of a Banach space $\mathcal{Y}$, and $f$ and $G$ are twice continuously differentiable mappings from $\mathcal{X}\times\mathcal{U}$ to $\mathbb{R}$ and $\mathcal{Y}$, respectively. The optimization problem is considered to be unperturbed when $\mathbf{u} = 0$. Given $\mathbf{u}$, the feasible set, optimal value, and set of optimal solutions of $(P_{\mathbf{u}})$ are denoted as follows

$$\begin{array}{l}{{\Phi(\mathbf{u})\triangleq\{\mathbf{x}\in{\mathcal{X}}:G(\mathbf{x},\mathbf{u})\in{\mathcal{K}}\},}}\\ {{v(\mathbf{u})\triangleq\operatorname*{inf}\{f(\mathbf{x},\mathbf{u}):\mathbf{x}\in\Phi(\mathbf{u})\},}}\\ {{{\mathcal{S}}(\mathbf{u})\triangleq\operatorname*{argmin}\{f(\mathbf{x},\mathbf{u}):\mathbf{x}\in\Phi(\mathbf{u})\}.}}\end{array}$$

A point $\mathbf{x}\in\mathcal{X}$ is called an $\epsilon$-optimal solution of $(P_{\mathbf{u}})$ if $\mathbf{x}\in\Phi(\mathbf{u})$ and $f(\mathbf{x},\mathbf{u})\leq v(\mathbf{u})+\epsilon$.

| Notation | Description |
|---|---|
| $\mathcal{Y}^*$ | Dual space of $\mathcal{Y}$ |
| $\operatorname{dist}(\mathbf{y},\mathcal{X})$ | The minimum distance from point $\mathbf{y}$ to set $\mathcal{X}$: $\inf\{\|\mathbf{y}-\mathbf{x}\| : \mathbf{x}\in\mathcal{X}\}$ |
| $T_{\mathcal{K}}(\mathbf{y})$ | The tangent cone to the set $\mathcal{K}$ at the point $\mathbf{y}\in\mathcal{K}$: $\{\mathbf{h}\in\mathcal{Y} : \operatorname{dist}(\mathbf{y}+t\mathbf{h},\mathcal{K}) = o(t)\}$ |
| $N_{\mathcal{K}}(\mathbf{y})$ | The normal cone to the set $\mathcal{K}$ at the point $\mathbf{y}\in\mathcal{K}$: $\{\mathbf{y}^*\in\mathcal{Y}^* : \langle\mathbf{y}^*,\mathbf{h}\rangle\leq 0,\ \forall\mathbf{h}\in T_{\mathcal{K}}(\mathbf{y})\}$ |
| $Df(\mathbf{x},\mathbf{u})$ | Derivative of $f$ |
| $D_{\mathbf{x}}f(\mathbf{x},\mathbf{u})$ | Partial derivative of $f$ w.r.t. $\mathbf{x}$ |
| $D_{\mathbf{xx}}f(\mathbf{x},\mathbf{u})$ | Second order derivative of $f$ w.r.t. $\mathbf{x}$ |
| $Df(\mathbf{x}',\mathbf{u}')(\mathbf{x},\mathbf{u})$ | The linear function based on the derivative at $(\mathbf{x}',\mathbf{u}')$ |
| $L(\mathbf{x},\lambda,\mathbf{u})$ | The Lagrangian $f(\mathbf{x},\mathbf{u})+\langle\lambda,G(\mathbf{x},\mathbf{u})\rangle$, $\lambda\in\mathcal{Y}^*$ |
| $\Lambda_{\mathbf{u}}(\mathbf{x})$ | $\{\lambda\in N_{\mathcal{K}}(G(\mathbf{x},\mathbf{u})) : D_{\mathbf{x}}L(\mathbf{x},\lambda,\mathbf{u})=0\}$ |
| $\{\mathcal{X}_1+\mathcal{X}_2\}$ | $\cup\{\mathbf{x}_1+\mathbf{x}_2\}$, $\mathbf{x}_1\in\mathcal{X}_1$, $\mathbf{x}_2\in\mathcal{X}_2$ |
| $\operatorname{int}(\mathcal{X})$ | The interior of the set $\mathcal{X}$ |
We also define the following notation to present the theorem statement. To study the first order differentiability of the optimal value function $v(\mathbf{u})$, for a given direction $\mathbf{d}\in\mathcal{U}$ and the optimal solution of the unperturbed problem $\mathbf{x}_0\in\mathcal{S}(0)$, Bonnans et al. (1998) consider the linearization of the family of problems $(P_{t\mathbf{d}})$ and its dual as follows

$$(PL_{\mathbf{d}}):\min_{\mathbf{h}}Df(\mathbf{x}_{0},0)(\mathbf{h},\mathbf{d})\ \text{subject to}\ DG(\mathbf{x}_{0},0)(\mathbf{h},\mathbf{d})\in T_{\mathcal{K}}(G(\mathbf{x}_{0},0)),$$

$$(DL_{\mathbf{d}}):\operatorname*{max}_{\lambda\in\Lambda_{0}(\mathbf{x}_{0})}D_{\mathbf{u}}L(\mathbf{x}_{0},\lambda,0)\mathbf{d}.$$

Theorem 13. *(Theorem 3.1 in Bonnans et al. (1998)) Let* $\bar{\mathbf{x}}(t)$ *be an* $O(t^2)$*-optimal trajectory of* $(P_{t\mathbf{d}})$ *converging to a point* $\mathbf{x}_0\in\Phi(0)$ *as* $t\to0$*. Assume* $v(PL_{\mathbf{d}})$ *to be finite. Suppose that the following conditions hold:*

1. $\mathbf{x}_0$ *satisfies the directional constraint qualification, which is implied if* $0\in\operatorname{int}\{G(\mathbf{x}_0,0)+D_{\mathbf{x}}G(\mathbf{x}_0,0)\mathcal{X}-\mathcal{K}\}$.
2. $v(t\mathbf{d})\leq v(0)+t\,v(PL_{\mathbf{d}})+O(t^2)$, $t\geq0$ *(Equation 3.4 in (Bonnans et al., 1998)).*
3. *The strong second order sufficient condition (Equation 3.1 in (Bonnans et al., 1998)) holds, which is implied if*
$$\sup_{\lambda\in\mathcal{S}(DL_{\mathbf{d}})}D_{\mathbf{xx}}^{2}L(\mathbf{x}_{0},\lambda,0)(\mathbf{h},\mathbf{h})>0,\ \forall\mathbf{h}\in C(\mathbf{x}_{0})\backslash\{0\},$$
*where* $C(\mathbf{x}_0)$ *denotes the critical cone.*

*Then* $\bar{\mathbf{x}}(t)$ *is Lipschitz stable at* $\mathbf{x}_0$*, i.e., for* $t\geq0$, $\|\bar{\mathbf{x}}(t)-\mathbf{x}_0\|=O(t)$.

Proof. (Proof of Lemma 12) The key idea is to leverage Theorem 13, which quantifies the sensitivity of the optimal solution of a "perturbed" optimization problem. More specifically, it is shown that the distance between the original optimal solution and the optimal solution of the perturbed problem is upper-bounded by the magnitude of the perturbation. First, we show that $\forall i\in[m]$, the projection problem $\Pi_{\widehat{\mathcal{X}}^s_i}(\mathbf{x})$ can be formulated as a quadratic program with second-order cone constraints. The definition of $\widehat{\mathcal{X}}^s_i$ has the following equivalent expression

$$\widehat{\mathcal{X}}_{i}^{s}\triangleq\{\mathbf{x}\in\mathbb{R}^{d}:\widehat{\mathbf{a}}_{k}^{\top}\mathbf{x}\leq b_{k},\ \forall\widehat{\mathbf{a}}_{k}\in\mathcal{C}_{i,k},\ \forall k\in[n]\}=\{\mathbf{x}\in\mathbb{R}^{d}:\max_{\widehat{\mathbf{a}}_{k}\in\mathcal{C}_{i,k}}\widehat{\mathbf{a}}_{k}^{\top}\mathbf{x}\leq b_{k},\ \forall k\in[n]\}=\{\mathbf{x}\in\mathbb{R}^{d}:[\widehat{\mathbf{A}}_{i}]_{k,:}\mathbf{x}+\mathcal{B}_{r}\|\mathbf{x}\|\leq b_{k},\ \forall k\in[n]\},\tag{24}$$

where each second-order cone inequality $[\widehat{\mathbf{A}}_i]_{k,:}\mathbf{x}+\mathcal{B}_r\|\mathbf{x}\|\leq b_k$ can be equivalently written as a linear matrix inequality (LMI):

$$[\widehat{\mathbf{A}}_{i}]_{k,:}\mathbf{x}+\mathcal{B}_{r}\|\mathbf{x}\|\leq b_{k}\Leftrightarrow\mathbf{G}^{k}(\mathbf{x},\widehat{\mathbf{A}}_{i})\triangleq\begin{bmatrix}(b_{k}-[\widehat{\mathbf{A}}_{i}]_{k,:}\mathbf{x})&\mathcal{B}_{r}\mathbf{x}^{\top}\\ \mathcal{B}_{r}\mathbf{x}&(b_{k}-[\widehat{\mathbf{A}}_{i}]_{k,:}\mathbf{x})\mathbf{I}\end{bmatrix}\succeq0.\tag{25}$$
For simplicity, we define the following block-diagonal matrix

$${\mathcal G}(\mathbf{x},\widehat{\mathbf{A}}_{i})\triangleq{\begin{bmatrix}\mathbf{G}^{1}(\mathbf{x},\widehat{\mathbf{A}}_{i})&&&\\ &\mathbf{G}^{2}(\mathbf{x},\widehat{\mathbf{A}}_{i})&&\\ &&\ddots&\\ &&&\mathbf{G}^{n}(\mathbf{x},\widehat{\mathbf{A}}_{i})\end{bmatrix}}.$$

Considering the intersection of all LMIs, we have

$$\widehat{\mathcal{X}}_{i}^{s}\triangleq\{\mathbf{x}\in\mathbb{R}^{d}:{\mathcal{G}}(\mathbf{x},\widehat{\mathbf{A}}_{i})\succeq0\}.\tag{26}$$

Based on (26), for a point $\mathbf{x}\in\mathbb{R}^d$, we have $\Pi_{\widehat{\mathcal{X}}^s_i}(\mathbf{x}) = \mathbf{x}+\xi_i$, where $\xi_i$ is derived by solving the following optimization problem

$$\xi_{i}=\operatorname*{argmin}_{\xi}\ \xi^{\top}\xi,\quad\mathrm{s.t.}\ \mathcal{G}(\mathbf{x}+\xi,\widehat{\mathbf{A}}_{i})\succeq0.\tag{27}$$

Based on Lemma 2, we have $\|[\widehat{\mathbf{A}}_i]_{k,:}-[\widehat{\mathbf{A}}_j]_{k,:}\| = O(\frac{1}{T^\rho})$, $\forall i,j\in[m]$ and $\forall k\in[n]$. Therefore, $[\widehat{\mathbf{A}}_j]_{k,:}$ can be expressed as $[\widehat{\mathbf{A}}_i]_{k,:}+\psi_k$, where $\|\psi_k\| = O(\frac{1}{T^\rho})$. With this expression, the projection $\Pi_{\widehat{\mathcal{X}}^s_j}(\mathbf{x}) = \mathbf{x}+\xi_j$ can be formulated as a perturbed version of the optimization (27), where the perturbation is parameterized in terms of $\psi = [\psi_1,\ldots,\psi_n]$ as follows:

$$\xi_{j}=\operatorname*{argmin}_{\xi}\ \xi^{\top}\xi,\quad\mathrm{s.t.}\ \mathcal{G}(\mathbf{x}+\xi,\widehat{\mathbf{A}}_{i}+\psi)\succeq0.\tag{28}$$

To show that $\|\Pi_{\widehat{\mathcal{X}}^s_i}(\mathbf{x})-\Pi_{\widehat{\mathcal{X}}^s_j}(\mathbf{x})\| = \|\xi_i-\xi_j\| = O(\|\psi\|) = O(\frac{1}{T^\rho})$, we apply Theorem 13, for which three conditions need to be satisfied: directional constraint qualification (DCQ), Equation 3.4 in Bonnans et al. (1998), and the strong second-order sufficient condition (we refer readers to Bonnans et al. (1998) for detailed definitions).

- **DCQ**: A sufficient condition for DCQ is constraint qualification (CQ) (see the definition in Bonnans et al. (1998)), which is satisfied in our problem formulation if the first-order approximation of $\mathcal{G}(\mathbf{x}+\xi,\widehat{\mathbf{A}}_i+\psi)$ w.r.t. the variable $\xi$ can be positive-definite. Noting that $\mathcal{G}(\mathbf{x}+\xi,\widehat{\mathbf{A}}_i+\psi)$ is an affine function of $\xi$, the first-order approximation is exactly the original function. Now suppose that $\forall i\in[m]$, $\widehat{\mathcal{X}}^s_i$ has a strictly feasible point (this is implied by the existence of the mutual shrunk polytope), which means there exists a $\hat{\xi}$ such that $\mathcal{G}(\mathbf{x}+\hat{\xi},\widehat{\mathbf{A}}_i+\psi)$ is positive-definite; then CQ is satisfied.
- **Equation 3.4 in Bonnans et al. (1998)**: In Bonnans et al. (1998), the authors provided sufficient conditions for Equation 3.4: DCQ and second-order regularity (Definition 2.2 in Bonnans et al. (1998)). DCQ, as mentioned previously, holds in our case, and second-order regularity holds for semi-definite optimization, which is the case for our problem setup.
- **Second-order sufficient conditions**: The strong second-order sufficient condition (Equation 3.1 in Bonnans et al. (1998)) has an alternative form (Equation 3.3 in Bonnans et al. (1998)), which is satisfied in our problem setup since the Hessian of the Lagrangian is $2\mathbf{I}$, which is positive-definite.

Since all the conditions above are met, the lemma is proved by applying Theorem 13.
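As an aside, the projection (27) is straightforward to compute with an off-the-shelf conic solver. The following minimal sketch (not part of the paper) projects a point onto an estimated safe set of the form (24); it assumes the cvxpy package, and the problem data and confidence radius are illustrative.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
d, n = 2, 3
A_hat = rng.uniform(-1.0, 1.0, (n, d))  # agent i's estimate of A
b = np.ones(n)
B_r = 0.05                               # stand-in for the radius B_r of Lemma 11

def project(x0):
    """Euclidean projection of x0 onto {x : A_hat[k] @ x + B_r * ||x|| <= b[k]}."""
    x = cp.Variable(d)
    cons = [A_hat[k] @ x + B_r * cp.norm(x, 2) <= b[k] for k in range(n)]
    cp.Problem(cp.Minimize(cp.sum_squares(x - x0)), cons).solve()
    return x.value

print(project(np.array([2.0, 2.0])))
```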
## A.3 Convex Part

Lemma 14. *Let Algorithm 2 run with step size* $\eta>0$ *and define* $\bar{\mathbf{x}}_t\triangleq\frac{1}{m}\sum_{i=1}^m\mathbf{x}_{i,t}$ *and* $\bar{\mathbf{y}}_t\triangleq\frac{1}{m}\sum_{i=1}^m\mathbf{y}_{i,t}$*. Under Assumptions 1 to 3 and the fact that gradients are bounded, i.e.,* $\|\nabla f_{i,t}(\mathbf{x})\|\leq G$ *for any* $\mathbf{x}\in\mathcal{X}^s$*, we have that* $\forall i\in[m]$

$$\left\|\bar{\mathbf{x}}_{t}-\mathbf{x}_{i,t}\right\|\leq\left(O({\frac{1}{T^{\rho}}})+2\eta G\right){\frac{\sqrt{m}\beta}{1-\beta}}.$$

Proof. For presentation simplicity, we define the following matrices: $\mathbf{X}_t\triangleq[\mathbf{x}_{1,t},\ldots,\mathbf{x}_{m,t}]$, $\mathbf{Y}_t\triangleq[\mathbf{y}_{1,t},\ldots,\mathbf{y}_{m,t}]$, $\mathbf{G}_t\triangleq[\nabla f_{1,t}(\mathbf{x}_{1,t}),\ldots,\nabla f_{m,t}(\mathbf{x}_{m,t})]$, and $\mathbf{R}_t\triangleq[\mathbf{r}_{1,t},\ldots,\mathbf{r}_{m,t}]$, where $\mathbf{r}_{i,t}\triangleq\mathbf{y}_{i,t}-\big(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\big)$. Then, the update can be expressed as $\mathbf{X}_t=\mathbf{Y}_{t-1}\mathbf{P}=\big(\mathbf{X}_{t-1}-\eta\mathbf{G}_{t-1}+\mathbf{R}_{t-1}\big)\mathbf{P}$. Expanding the update recursively, we have

$$\mathbf{X}_{t}=\mathbf{X}_{T_{s}}\mathbf{P}^{(t-T_{s})}-\eta\sum_{l=1}^{t-T_{s}}\mathbf{G}_{t-l}\mathbf{P}^{l}+\sum_{l=1}^{t-T_{s}}\mathbf{R}_{t-l}\mathbf{P}^{l}.\tag{29}$$

Since $\mathbf{P}$ is doubly stochastic, we have $\mathbf{P}^k\mathbf{1}=\mathbf{1}$ for all $k\geq1$. Based on the geometric mixing bound of $\mathbf{P}$ and the above equation, we get

$$\begin{aligned}
\|\bar{\mathbf{x}}_t-\mathbf{x}_{i,t}\| &= \Big\|\mathbf{X}_t\big(\tfrac{1}{m}\mathbf{1}-\mathbf{e}_i\big)\Big\| \\
&\leq \Big\|\bar{\mathbf{x}}_{T_s}-\mathbf{X}_{T_s}[\mathbf{P}^{(t-T_s)}]_{:,i}\Big\| + \eta\sum_{l=1}^{t-T_s}\Big\|\mathbf{G}_{t-l}\big(\tfrac{1}{m}\mathbf{1}-[\mathbf{P}^l]_{:,i}\big)\Big\| + \sum_{l=1}^{t-T_s}\Big\|\mathbf{R}_{t-l}\big(\tfrac{1}{m}\mathbf{1}-[\mathbf{P}^l]_{:,i}\big)\Big\| \\
&\leq \sum_{l=1}^{t-T_s}(\eta G)\sqrt{m}\beta^l + \sum_{l=1}^{t-T_s}\big(O(\tfrac{1}{T^\rho})+\eta G\big)\sqrt{m}\beta^l \leq \Big(O(\tfrac{1}{T^\rho})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta},
\end{aligned}$$

where $\|\bar{\mathbf{x}}_{T_s}-\mathbf{X}_{T_s}[\mathbf{P}^{t-T_s}]_{:,i}\|=0$ by the identical initialization of all agents with the same action at $T_s$, and the other inequality is based on Lemma 12 as follows

$$\begin{aligned}
\left\|\mathbf{r}_{i,t}\right\|&=\left\|\mathbf{y}_{i,t}-\left(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\right)\right\|\\
&\leq\Big\|\sum_{j}[\mathbf{P}]_{ji}\Pi_{\widehat{\mathcal{X}}_{i}^{s}}[\mathbf{y}_{j,t-1}]-\Big(\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t-1}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\Big)\Big\|\\
&\leq O(T^{-\rho})+\eta G.
\end{aligned}$$
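To visualize Lemma 14, the following toy simulation (ours; the quadratic local losses and a common box in place of the estimated polytopes are simplifying assumptions) runs the optimization phase of Algorithm 2 and reports the network disagreement, which stays small and scales with $\eta$ as the lemma predicts.

```python
import numpy as np

rng = np.random.default_rng(4)
m, d, eta, T = 4, 2, 0.05, 300
P = np.zeros((m, m))                    # doubly stochastic ring weights
for i in range(m):
    P[i, i], P[i, (i - 1) % m], P[i, (i + 1) % m] = 0.5, 0.25, 0.25

theta = rng.uniform(-0.5, 0.5, (m, d))  # local losses f_{i,t}(x) = 0.5*||x - theta_i||^2
proj = lambda x: np.clip(x, -1.0, 1.0)  # box standing in for the estimated safe sets

X = np.zeros((m, d))
for t in range(T):
    Y = np.stack([proj(X[i] - eta * (X[i] - theta[i])) for i in range(m)])
    X = P.T @ Y                          # x_{i,t+1} = sum_j [P]_{ji} y_{j,t}

disagreement = np.max(np.linalg.norm(X - X.mean(axis=0), axis=1))
print(f"max_i ||x_i - xbar|| = {disagreement:.4f}")  # O(eta), per Lemma 14
```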
Proof of Theorem 3. First, we decompose the individual regret of agent $j$ into three terms:

$$\sum_{t}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-\sum_{t}\sum_{i}f_{i,t}(\mathbf{x}_{t}^{*})=\underbrace{\sum_{t=1}^{T_{s}-1}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})}_{\text{Term I}}+\underbrace{\sum_{t=T_{s}}^{T}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})}_{\text{Term II}}+\underbrace{\sum_{t=T_{s}}^{T}\sum_{i}f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})-f_{i,t}(\mathbf{x}_{t}^{*})}_{\text{Term III}},\tag{30}$$

where $\tilde{\mathbf{x}}^*_t$ is the projection of $\mathbf{x}^*_t$ onto $\mathcal{X}^s_{\mathrm{in}}$, which is a mutual subset of $\{\widehat{\mathcal{X}}^s_i\}_{i\in[m]}$ with $\tau_{\mathrm{in}}=2\mathcal{B}_rL$ based on Equation (22) in Lemma 11. We now proceed to bound each term.

The upper bound of Term I: Note that by choosing $\gamma\leq\frac{\Delta^s}{LL_A}$, we have $\forall i\in[m]$ and $t\in[1,\ldots,T_0+T_1]$

$$[\mathbf{A}]_{k,:}\mathbf{x}_{i,t}=[\mathbf{A}]_{k,:}\left((1-\gamma)\mathbf{x}^{s}+\gamma\zeta_{i,t}\right)\leq(1-\gamma)b_{k}^{s}+\Delta^{s}\leq(1-\gamma)b_{k}^{s}+(b_{k}-b_{k}^{s})<b_{k},\tag{31}$$

which implies the safeness of the action. Based on the Lipschitz property of the function sequence, we have

$$\sum_{t=1}^{T_{s}-1}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})\leq\sum_{t=1}^{T_{s}-1}\sum_{i}G\big{\|}\mathbf{x}_{j,t}-\mathbf{x}_{t}^{*}\big{\|}\leq2GLm(T_{0}+T_{1}).\tag{32}$$

The upper bound of Term II: Based on the update rule, $\forall i\in[m]$ and $t\in[T_s,\ldots,T]$ we have

$$\begin{aligned}
f_{i,t}(\mathbf{x}_{i,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})&\leq\nabla f_{i,t}(\mathbf{x}_{i,t})^{\top}(\mathbf{x}_{i,t}-\tilde{\mathbf{x}}_{t}^{*})\\
&=\frac{1}{\eta}\Big[\frac{1}{2}\eta^{2}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2}\|\mathbf{x}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}-\frac{1}{2}\|\mathbf{x}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}\Big]\\
&\leq\frac{1}{\eta}\Big[\frac{1}{2}\eta^{2}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2}\|\mathbf{x}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}-\frac{1}{2}\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}\Big]\\
&=\frac{1}{\eta}\Big[\frac{1}{2}\eta^{2}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2}\Big\|\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t-1}-\tilde{\mathbf{x}}_{t}^{*}\Big\|^{2}-\frac{1}{2}\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}\Big]\\
&\leq\frac{1}{\eta}\Big[\frac{1}{2}\eta^{2}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2}\sum_{j}[\mathbf{P}]_{ji}\|\mathbf{y}_{j,t-1}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}-\frac{1}{2}\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}\Big],
\end{aligned}\tag{33}$$

where the second inequality is due to the projection property that $\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}^*_t\|\leq\|\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})-\tilde{\mathbf{x}}^*_t\|$, and the third inequality is due to the convexity of the square function. Based on Equation (33) and Lemma 14, we have

$$\begin{aligned}
f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})&=f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{i,t})+f_{i,t}(\mathbf{x}_{i,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})\\
&\leq G\|\mathbf{x}_{j,t}-\mathbf{x}_{i,t}\|+f_{i,t}(\mathbf{x}_{i,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})\\
&\leq2G\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}+\frac{1}{2}\eta\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2\eta}\sum_{j}[\mathbf{P}]_{ji}\|\mathbf{y}_{j,t-1}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}-\frac{1}{2\eta}\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}.
\end{aligned}\tag{34}$$

Summing Equation (34) over $i$, we get

$$\begin{aligned}
\sum_{i}\big(f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})\big)&\leq2mG\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}+\frac{\eta}{2}\sum_{i}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2\eta}\sum_{j}\|\mathbf{y}_{j,t-1}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}-\frac{1}{2\eta}\sum_{i}\|\mathbf{y}_{i,t}-\tilde{\mathbf{x}}_{t}^{*}\|^{2}\\
&=2mG\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}+\frac{\eta}{2}\sum_{i}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2\eta}\sum_{i}\Big(\|\mathbf{y}_{i,t-1}\|^{2}-\|\mathbf{y}_{i,t}\|^{2}+2(\mathbf{y}_{i,t}-\mathbf{y}_{i,t-1})^{\top}\tilde{\mathbf{x}}_{t}^{*}\Big).
\end{aligned}\tag{35}$$

Summing Equation (35) over $t\in[T_s,\ldots,T]$, we have

$$\begin{aligned}
\sum_{t=T_{s}}^{T}\sum_{i}\big(f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})\big)\leq\ &\frac{\eta}{2}\sum_{t=T_{s}}^{T}\sum_{i}\|\nabla f_{i,t}(\mathbf{x}_{i,t})\|^{2}+\frac{1}{2\eta}\sum_{i}\|\mathbf{y}_{i,T_{s}-1}\|^{2}+\frac{1}{\eta}\Big(\sum_{i}\mathbf{y}_{i,T}^{\top}\tilde{\mathbf{x}}_{T}^{*}-\sum_{i}\mathbf{y}_{i,T_{s}-1}^{\top}\tilde{\mathbf{x}}_{T_{s}-1}^{*}\Big)\\
&+\frac{1}{\eta}\sum_{t=T_{s}-1}^{T-1}\sum_{i}(\tilde{\mathbf{x}}_{t}^{*}-\tilde{\mathbf{x}}_{t+1}^{*})^{\top}\mathbf{y}_{i,t}+2TmG\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}.
\end{aligned}\tag{36}$$

The upper bound of Term III: Based on Lemma 9, we have for any $\mathbf{x}^*_t\in\mathcal{X}^s$ and its projection onto $\mathcal{X}^s_{\mathrm{in}}$, denoted by $\tilde{\mathbf{x}}^*_t$, that

$$\sum_{t=T_{s}}^{T}\sum_{i}\left(f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})-f_{i,t}(\mathbf{x}_{t}^{*})\right)\leq\sum_{t=T_{s}}^{T}\sum_{i}G\|\tilde{\mathbf{x}}_{t}^{*}-\mathbf{x}_{t}^{*}\|\leq m T G{\frac{2{\sqrt{d}}L \mathcal{B}_{r}}{C(\mathbf{A},\mathbf{b})}}.\tag{37}$$

Substituting Equations (32), (36) and (37) into Equation (30), we get

$$\begin{aligned}
\sum_{t}\sum_{i}\big(f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})\big)\leq\ &2GLm(T_{0}+T_{1})+\frac{\eta mTG^{2}}{2}+\frac{1}{2\eta}\sum_{i}\|\mathbf{y}_{i,T_{s}-1}\|^{2}+\frac{1}{\eta}\Big(\sum_{i}\mathbf{y}_{i,T}^{\top}\tilde{\mathbf{x}}_{T}^{*}-\sum_{i}\mathbf{y}_{i,T_{s}-1}^{\top}\tilde{\mathbf{x}}_{T_{s}-1}^{*}\Big)\\
&+\frac{1}{\eta}\sum_{t=T_{s}-1}^{T-1}\sum_{i}(\tilde{\mathbf{x}}_{t}^{*}-\tilde{\mathbf{x}}_{t+1}^{*})^{\top}\mathbf{y}_{i,t}+2TmG\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}+mTG\frac{2\sqrt{d}L\mathcal{B}_{r}}{C(\mathbf{A},\mathbf{b})},
\end{aligned}\tag{38}$$

which is $O\big(T_0+T_1+\frac{1}{\eta}+\frac{1}{\eta}C^*_T+T\frac{\sqrt{\log T_0}}{\sqrt{T_0}}+\frac{\beta\eta T}{1-\beta}\big)$, and the final regret bound is derived by substituting the choices of $\eta$ and $T_0$ into the above.

## A.4 Non-Convex Part

Lemma 15 (Lemma 4 in Ghai et al. (2022)). *Suppose Assumptions 5, 6, 7 hold and* $\mathbf{u}_t=q(\mathbf{x}_t)$*; then* $\|q(\mathbf{x}_{t+1})-\mathbf{u}_{t+1}\|=O(W^4G_F^{3/2}\eta^{3/2})$ *based on the following update rules:*

$$\mathbf{u}_{t+1}=\operatorname*{argmin}_{\mathbf{u}\in\mathcal{X}^{s^{\prime}}}\left\{\nabla\tilde{f}_{t}(\mathbf{u}_{t})^{\top}\mathbf{u}+\frac{1}{\eta}\mathcal{D}_{\phi}(\mathbf{u},\mathbf{u}_{t})\right\},\qquad\mathbf{x}_{t+1}=\operatorname*{argmin}_{\mathbf{x}\in\mathcal{X}^{s}}\left\{\nabla f_{t}(\mathbf{x}_{t})^{\top}\mathbf{x}+\frac{1}{2\eta}\|\mathbf{x}-\mathbf{x}_{t}\|^{2}\right\}.$$

Theorem 16 (Theorem 7 in Ghai et al. (2022)). *Given a convex and compact domain* $\mathcal{X}\subset\mathcal{X}^s$*, and a not necessarily convex loss* $f_t(\cdot)$ *satisfying Assumption 7, when Assumption 8 is met, there exists an OMD instance with convex loss* $\tilde{f}_t(\cdot)$*, a convex domain, and a strongly convex regularizer* $\phi$ *satisfying Assumption 5.*
Lemma 17. *Suppose Assumptions 5-7 hold and* $\mathbf{u}_{i,t}=q(\mathbf{x}_{i,t})$, $\forall i\in[m]$*; then*

$$\left\|q(\mathbf{x}_{i,t+1})-\mathbf{u}_{i,t+1}^{\prime}\right\|=O\Big(\frac{1}{T^{2\rho}}+\eta^{3/2}\Big),$$

*based on the following update rules:*

$$\mathbf{z}_{i,t}=\operatorname*{argmin}_{\mathbf{u}\in\widehat{\mathcal{X}}_{i}^{s\prime}}\left\{\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})^{\top}\mathbf{u}+\frac{1}{\eta}\mathcal{D}_{\phi}(\mathbf{u},\mathbf{u}_{i,t})\right\},\qquad\mathbf{u}_{i,t+1}^{\prime}=\sum_{j}[\mathbf{P}]_{ji}\mathbf{z}_{j,t},$$
$$\mathbf{y}_{i,t}=\operatorname*{argmin}_{\mathbf{x}\in\widehat{\mathcal{X}}_{i}^{s}}\left\{\nabla f_{i,t}(\mathbf{x}_{i,t})^{\top}\mathbf{x}+\frac{1}{2\eta}\|\mathbf{x}-\mathbf{x}_{i,t}\|^{2}\right\},\qquad\mathbf{x}_{i,t+1}=\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}.\tag{39}$$

Proof. We first upper bound $\|q(\mathbf{x}_{i,t+1})-\mathbf{u}'_{i,t+1}\|$ as follows

$$\|q(\mathbf{x}_{i,t+1})-\mathbf{u}_{i,t+1}^{\prime}\|\leq\Big\|\sum_{j}[\mathbf{P}]_{ji}\mathbf{z}_{j,t}-\sum_{j}[\mathbf{P}]_{ji}q(\mathbf{y}_{j,t})\Big\|+\Big\|\sum_{j}[\mathbf{P}]_{ji}q(\mathbf{y}_{j,t})-q\Big(\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}\Big)\Big\|.\tag{40}$$

To bound the second term, we consider the Taylor expansion of $q(\mathbf{y})$ w.r.t. a point $\hat{\mathbf{y}}$ in the convex hull of $\{\mathbf{y}_{i,t}\}_i$:

$$\begin{aligned}
\Big\|\sum_{j}[\mathbf{P}]_{ji}q(\mathbf{y}_{j,t})-q\Big(\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}\Big)\Big\|\leq\ &\Big\|\sum_{j}[\mathbf{P}]_{ji}\Big(q(\hat{\mathbf{y}})+J_{q}(\hat{\mathbf{y}})(\mathbf{y}_{j,t}-\hat{\mathbf{y}})+O\big(\|\mathbf{y}_{j,t}-\hat{\mathbf{y}}\|^{2}\big)\Big)\\
&-\Big(q(\hat{\mathbf{y}})+J_{q}(\hat{\mathbf{y}})\Big(\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}-\hat{\mathbf{y}}\Big)+O\Big(\big\|\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}-\hat{\mathbf{y}}\big\|^{2}\Big)\Big)\Big\|\\
\leq\ &O\Big(\sum_{j}[\mathbf{P}]_{ji}\|\mathbf{y}_{j,t}-\hat{\mathbf{y}}\|^{2}\Big)+O\Big(\big\|\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}-\hat{\mathbf{y}}\big\|^{2}\Big)\leq O(D^{2}),
\end{aligned}\tag{41}$$

where $D$ denotes the diameter of the convex hull of $\{\mathbf{y}_{i,t}\}$ and is upper bounded as follows

$$\begin{aligned}
D&\triangleq\max_{(i,j)}\|\mathbf{y}_{i,t}-\mathbf{y}_{j,t}\|\\
&=\max_{(i,j)}\Big\|\Pi_{\widehat{\mathcal{X}}_{i}^{s}}\big(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\big)-\Pi_{\widehat{\mathcal{X}}_{j}^{s}}\big(\mathbf{x}_{j,t}-\eta\nabla f_{j,t}(\mathbf{x}_{j,t})\big)\Big\|\\
&\leq\max_{(i,j)}\Big\|\Pi_{\widehat{\mathcal{X}}_{i}^{s}}\big(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\big)-\Pi_{\widehat{\mathcal{X}}_{j}^{s}}\big(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\big)\Big\|+\Big\|\big(\mathbf{x}_{i,t}-\eta\nabla f_{i,t}(\mathbf{x}_{i,t})\big)-\big(\mathbf{x}_{j,t}-\eta\nabla f_{j,t}(\mathbf{x}_{j,t})\big)\Big\|\\
&\leq O(\tfrac{1}{T^{\rho}})+2\Big(O(\tfrac{1}{T^{\rho}})+2\eta G\Big)\frac{\sqrt{m}\beta}{1-\beta}+2G\eta=O(\tfrac{1}{T^{\rho}}+\eta).
\end{aligned}\tag{42}$$

The first inequality follows from the non-expansiveness of the projection, i.e., $\|\Pi_{\mathcal{X}}(\mathbf{x})-\Pi_{\mathcal{X}}(\mathbf{y})\|\leq\|\mathbf{x}-\mathbf{y}\|$ for any $\mathbf{x},\mathbf{y}$ and a closed convex set $\mathcal{X}$, and the last inequality is based on Lemma 12, Lemma 14, and the Lipschitz continuity of the function sequence. Substituting Equations (41) and (42) into Equation (40), and based on Lemma 15, we have

$$\begin{split}\|q(\mathbf{x}_{i,t+1})-\mathbf{u}^{\prime}_{i,t+1}\|&\leq\Big\|\sum_{j}[\mathbf{P}]_{ji}\mathbf{z}_{j,t}-\sum_{j}[\mathbf{P}]_{ji}q(\mathbf{y}_{j,t})\Big\|+\Big\|\sum_{j}[\mathbf{P}]_{ji}q(\mathbf{y}_{j,t})-q\Big(\sum_{j}[\mathbf{P}]_{ji}\mathbf{y}_{j,t}\Big)\Big\|\\ &\leq O(W^{4}G_{F}^{3/2}\eta^{3/2})+O\Big(\frac{1}{T^{2\rho}}+\eta^{2}\Big)=O\Big(\frac{1}{T^{2\rho}}+\eta^{3/2}\Big),\end{split}\tag{43}$$

when $\eta$ is small enough.
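For intuition, the following sketch (ours) compares one step of the two coupled updates in (39) numerically, with the negative-entropy regularizer, $q(\mathbf{x})=\mathbf{x}^2$, and no projections; the unconstrained setting, the $\eta/4$ step correction, and all data are simplifying illustrative assumptions. The one-step gap shrinks faster than linearly in $\eta$, consistent with the $O(\eta^{3/2})$ bound of Lemma 17.

```python
import numpy as np

rng = np.random.default_rng(5)
m, d = 4, 3
P = np.zeros((m, m))                  # doubly stochastic ring weights
for i in range(m):
    P[i, i], P[i, (i - 1) % m], P[i, (i + 1) % m] = 0.5, 0.25, 0.25

u_star = rng.uniform(0.2, 1.0, (m, d))
grad = lambda U: U - u_star           # gradients of f_tilde_i(u) = 0.5*||u - u_star_i||^2

X0 = rng.uniform(0.5, 1.5, (m, d))    # common start; u_{i,t} = q(x_{i,t}) = x_{i,t}^2
for eta in [0.1, 0.05, 0.025]:
    U0 = X0 ** 2
    Z = U0 * np.exp(-eta * grad(U0))          # one OMD step (negative entropy)
    U1 = P.T @ Z                               # mirror-space consensus: u'_{i,t+1}
    Y = X0 - (eta / 4) * 2 * X0 * grad(X0**2)  # one OGD step on x -> f_tilde(x^2)
    X1 = P.T @ Y                               # primal-space consensus: x_{i,t+1}
    print(eta, np.max(np.abs(U1 - X1 ** 2)))   # gap decays superlinearly in eta
```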
Proof of Theorem 5. As in the proof of Theorem 3, we decompose the individual regret into three terms:

$$\sum_{t}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-\sum_{t}\sum_{i}f_{i,t}(\mathbf{x}_{t}^{*})=\underbrace{\sum_{t=1}^{T_{s}-1}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})}_{\text{Term I}}+\underbrace{\sum_{t=T_{s}}^{T}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})}_{\text{Term II}}+\underbrace{\sum_{t=T_{s}}^{T}\sum_{i}f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})-f_{i,t}(\mathbf{x}_{t}^{*})}_{\text{Term III}},\tag{44}$$

where $\tilde{\mathbf{x}}^*_t$ is the projection of $\mathbf{x}^*_t$ onto $\mathcal{X}^s_{\mathrm{in}}$, which is a mutual subset of $\{\widehat{\mathcal{X}}^s_i\}_{i\in[m]}$ with $\tau_{\mathrm{in}}=2\mathcal{B}_rL$ based on Equation (22).

The upper bound of Term I: Similar to the proof of the convex part, during the estimation phase $\gamma$ is less than $\frac{\Delta^s}{LL_A}$ to ensure the safeness of each agent's actions, and based on the Lipschitz property we have

$$\sum_{t=1}^{T_{s}-1}\sum_{i}f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})=\sum_{t=1}^{T_{s}-1}\sum_{i}\tilde{f}_{i,t}\big{(}q(\mathbf{x}_{j,t})\big{)}-\tilde{f}_{i,t}\big{(}q(\mathbf{x}_{t}^{*})\big{)}\leq\sum_{t=1}^{T_{s}-1}\sum_{i}G_{F}W\|\mathbf{x}_{j,t}-\mathbf{x}_{t}^{*}\|\leq2G_{F}WLm(T_{0}+T_{1}).\tag{45}$$

The upper bound of Term II: Define $\widehat{\mathcal{X}}^{s\prime}_i\triangleq\{q(\mathbf{x})\,|\,\mathbf{x}\in\widehat{\mathcal{X}}^s_i\}$ (similarly for $\mathcal{X}^s_{\mathrm{in}}$ and $\mathcal{X}^s$). Then, for any $q(\tilde{\mathbf{x}}^*_t)=\tilde{\mathbf{u}}^*_t\in\mathcal{X}^{s\prime}_{\mathrm{in}}$, based on Equation (39), we have

$$\begin{aligned}
\eta\big(f_{i,t}(\mathbf{x}_{i,t})-f_{i,t}(\tilde{\mathbf{x}}_{t}^{*})\big)&=\eta\big(\tilde{f}_{i,t}(\mathbf{u}_{i,t})-\tilde{f}_{i,t}(\tilde{\mathbf{u}}_{t}^{*})\big)\leq\eta\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})^{\top}(\mathbf{u}_{i,t}-\tilde{\mathbf{u}}_{t}^{*})\\
&=\big(\nabla\phi(\mathbf{u}_{i,t})-\nabla\phi(\mathbf{z}_{i,t})-\eta\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\big)^{\top}(\tilde{\mathbf{u}}_{t}^{*}-\mathbf{z}_{i,t})+\big(\nabla\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{u}_{i,t})\big)^{\top}(\tilde{\mathbf{u}}_{t}^{*}-\mathbf{z}_{i,t})+\eta\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})^{\top}(\mathbf{u}_{i,t}-\mathbf{z}_{i,t})\\
&\leq\big(\nabla\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{u}_{i,t})\big)^{\top}(\tilde{\mathbf{u}}_{t}^{*}-\mathbf{z}_{i,t})+\eta\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})^{\top}(\mathbf{u}_{i,t}-\mathbf{z}_{i,t})\\
&=\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})-\mathcal{D}_{\phi}(\mathbf{z}_{i,t},\mathbf{u}_{i,t})+\eta\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})^{\top}(\mathbf{u}_{i,t}-\mathbf{z}_{i,t})\\
&\leq\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})-\mathcal{D}_{\phi}(\mathbf{z}_{i,t},\mathbf{u}_{i,t})+\frac{1}{2}\|\mathbf{u}_{i,t}-\mathbf{z}_{i,t}\|^{2}+\frac{\eta^{2}}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&\leq\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})+\frac{\eta^{2}}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&=\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t}^{\prime})+\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t}^{\prime})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})+\frac{\eta^{2}}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&\leq\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t}^{\prime})+\sum_{j}[\mathbf{P}]_{ji}\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{j,t-1})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})+\frac{\eta^{2}}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2},
\end{aligned}\tag{46}$$

where the second inequality is based on the optimality of $\mathbf{z}_{i,t}$, the fourth inequality is due to the strong convexity of $\phi(\cdot)$, and the fifth inequality is based on Assumption 9. Based on Theorem 16, Lemma 17, and the Lipschitz assumption on $\mathcal{D}_\phi$, we have

$$\|{\mathcal{D}}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t})-{\mathcal{D}}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{u}_{i,t}^{\prime})\|\leq W\|\mathbf{u}_{i,t}-\mathbf{u}_{i,t}^{\prime}\|\leq O\big{(}W(\frac{1}{T^{2\rho}}+\eta^{3/2})\big{)}.\tag{47}$$

And based on Lemma 14, we get

$$\max_{i,j\in[m]}\|\mathbf{u}_{i,t}-\mathbf{u}_{j,t}\|=\max_{i,j\in[m]}\|q(\mathbf{x}_{i,t})-q(\mathbf{x}_{j,t})\|=O(W\eta).\tag{48}$$

With Equations (46), (47) and (48), we derive

$$\begin{aligned}
\tilde{f}_{i,t}(\mathbf{u}_{j,t})-\tilde{f}_{i,t}(\tilde{\mathbf{u}}_{t}^{*})&=\tilde{f}_{i,t}(\mathbf{u}_{j,t})-\tilde{f}_{i,t}(\mathbf{u}_{i,t})+\tilde{f}_{i,t}(\mathbf{u}_{i,t})-\tilde{f}_{i,t}(\tilde{\mathbf{u}}_{t}^{*})\\
&\leq G_{F}\|\mathbf{u}_{i,t}-\mathbf{u}_{j,t}\|+O\Big(W\big(\frac{1}{\eta T^{2\rho}}+\eta^{1/2}\big)\Big)+\frac{1}{\eta}\sum_{j}[\mathbf{P}]_{ji}\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{j,t-1})-\frac{1}{\eta}\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})+\frac{\eta}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&\leq O\big(G_{F}W\eta\big)+O\Big(W\big(\frac{1}{\eta T^{2\rho}}+\eta^{1/2}\big)\Big)+\frac{1}{\eta}\sum_{j}[\mathbf{P}]_{ji}\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{j,t-1})-\frac{1}{\eta}\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})+\frac{\eta}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}.
\end{aligned}\tag{49}$$

Based on the definition of the Bregman divergence, we have the following relationship

$$\begin{aligned}
\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t-1})-\mathcal{D}_{\phi}(\tilde{\mathbf{u}}_{t}^{*},\mathbf{z}_{i,t})&=(\nabla\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{z}_{i,t-1}))^{\top}(\tilde{\mathbf{u}}_{t}^{*}-\mathbf{z}_{i,t})+\mathcal{D}_{\phi}(\mathbf{z}_{i,t},\mathbf{z}_{i,t-1})\\
&=(\nabla\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{z}_{i,t-1}))^{\top}\tilde{\mathbf{u}}_{t}^{*}+\big(\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{z}_{i,t})^{\top}\mathbf{z}_{i,t}\big)-\big(\phi(\mathbf{z}_{i,t-1})-\nabla\phi(\mathbf{z}_{i,t-1})^{\top}\mathbf{z}_{i,t-1}\big).
\end{aligned}\tag{50}$$

Summing Equation (49) over $i$ and using Equation (50), we get

$$\begin{aligned}
\sum_{i}\big(\tilde{f}_{i,t}(\mathbf{u}_{j,t})-\tilde{f}_{i,t}(\tilde{\mathbf{u}}_{t}^{*})\big)\leq\ &O\big(mG_{F}W\eta\big)+O\Big(mW\big(\frac{1}{\eta T^{2\rho}}+\eta^{1/2}\big)\Big)+\sum_{i}\frac{\eta}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&+\frac{1}{\eta}\sum_{i}\Big[(\nabla\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{z}_{i,t-1}))^{\top}\tilde{\mathbf{u}}_{t}^{*}+\big(\phi(\mathbf{z}_{i,t})-\nabla\phi(\mathbf{z}_{i,t})^{\top}\mathbf{z}_{i,t}\big)-\big(\phi(\mathbf{z}_{i,t-1})-\nabla\phi(\mathbf{z}_{i,t-1})^{\top}\mathbf{z}_{i,t-1}\big)\Big].
\end{aligned}\tag{51}$$

Then, by summing Equation (51) over $[T_s,\ldots,T]$, we have

$$\begin{aligned}
\sum_{t=T_{s}}^{T}\sum_{i}\big(\tilde{f}_{i,t}(\mathbf{u}_{j,t})-\tilde{f}_{i,t}(\tilde{\mathbf{u}}_{t}^{*})\big)\leq\ &O\big(mTG_{F}W\eta\big)+O\Big(mTW\big(\frac{1}{\eta T^{2\rho}}+\eta^{1/2}\big)\Big)+\sum_{t=T_{s}}^{T}\sum_{i}\frac{\eta}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&+\frac{1}{\eta}\Bigg[\sum_{t=T_{s}-1}^{T-1}\sum_{i}(\tilde{\mathbf{u}}_{t}^{*}-\tilde{\mathbf{u}}_{t+1}^{*})^{\top}\nabla\phi(\mathbf{z}_{i,t})+\sum_{i}\nabla\phi(\mathbf{z}_{i,T})^{\top}\tilde{\mathbf{u}}_{T}^{*}-\sum_{i}\nabla\phi(\mathbf{z}_{i,T_{s}-1})^{\top}\tilde{\mathbf{u}}_{T_{s}-1}^{*}\Bigg]\\
&+\frac{1}{\eta}\sum_{i}\Big[\big(\phi(\mathbf{z}_{i,T})-\nabla\phi(\mathbf{z}_{i,T})^{\top}\mathbf{z}_{i,T}\big)-\big(\phi(\mathbf{z}_{i,T_{s}-1})-\nabla\phi(\mathbf{z}_{i,T_{s}-1})^{\top}\mathbf{z}_{i,T_{s}-1}\big)\Big].
\end{aligned}\tag{52}$$
The upper bound of Term III: Based on Lemma 9, we have for any $\mathbf{x}^*_t\in\mathcal{X}^s$ and its projection $\tilde{\mathbf{x}}^*_t$ onto $\mathcal{X}^s_{\mathrm{in}}$

$$\sum_{t=T_{s}}^{T}\sum_{i}\left(\tilde{f}_{i,t}(q(\tilde{\mathbf{x}}_{t}^{*}))-\tilde{f}_{i,t}(q(\mathbf{x}_{t}^{*}))\right)\leq\sum_{t=T_{s}}^{T}\sum_{i}G_{F}W\|\tilde{\mathbf{x}}_{t}^{*}-\mathbf{x}_{t}^{*}\|\leq mTG_{F}W\,\frac{2\sqrt{d}L\mathcal{B}_{r}}{C(\mathbf{A},\mathbf{b})}.\tag{53}$$

Substituting Equations (45), (52) and (53) into Equation (44), the final regret bound follows as

$$\begin{aligned}
\sum_{t=1}^{T}\sum_{i}\big(f_{i,t}(\mathbf{x}_{j,t})-f_{i,t}(\mathbf{x}_{t}^{*})\big)\leq\ &O\big(mTG_{F}W\eta\big)+O\Big(mTW\big(\frac{1}{\eta T^{2\rho}}+\eta^{1/2}\big)\Big)+\sum_{t=T_{s}}^{T}\sum_{i}\frac{\eta}{2}\|\nabla\tilde{f}_{i,t}(\mathbf{u}_{i,t})\|^{2}\\
&+\frac{1}{\eta}\Bigg[\sum_{t=T_{s}-1}^{T-1}\sum_{i}(\tilde{\mathbf{u}}_{t}^{*}-\tilde{\mathbf{u}}_{t+1}^{*})^{\top}\nabla\phi(\mathbf{z}_{i,t})+\sum_{i}\nabla\phi(\mathbf{z}_{i,T})^{\top}\tilde{\mathbf{u}}_{T}^{*}-\sum_{i}\nabla\phi(\mathbf{z}_{i,T_{s}-1})^{\top}\tilde{\mathbf{u}}_{T_{s}-1}^{*}\Bigg]\\
&+2G_{F}WLm(T_{0}+T_{1})+\frac{1}{\eta}\sum_{i}\Big[\big(\phi(\mathbf{z}_{i,T})-\nabla\phi(\mathbf{z}_{i,T})^{\top}\mathbf{z}_{i,T}\big)-\big(\phi(\mathbf{z}_{i,T_{s}-1})-\nabla\phi(\mathbf{z}_{i,T_{s}-1})^{\top}\mathbf{z}_{i,T_{s}-1}\big)\Big]\\
&+mTG_{F}W\frac{2\sqrt{d}L\mathcal{B}_{r}}{C(\mathbf{A},\mathbf{b})}\\
=\ &O\Big(T_{0}+T_{1}+T\sqrt{\eta}+T\frac{\sqrt{\log T_{0}}}{\sqrt{T_{0}}}+\frac{1}{\eta}+\frac{1}{\eta}\sum_{t=T_{s}}^{T}\|\tilde{\mathbf{u}}_{t}^{*}-\tilde{\mathbf{u}}_{t+1}^{*}\|\Big),
\end{aligned}\tag{54}$$

where the final regret bound is proved by applying the specified $\eta$ and $T_0$. By choosing $\rho$ as a large enough number, $\frac{1}{\eta T^{2\rho}}$ is dominated by $\eta^{1/2}$.
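As a quick numeric sanity check (ours, not from the paper) that the parameter choices $\eta = \Theta(T^{-2/3})$ and $T_0 = \Theta(T^{2/3})$ indeed balance the terms in Equation (54) at the advertised $O(T^{2/3}\sqrt{\log T})$ rate, the snippet below prints each term's ratio to $T^{2/3}\sqrt{\log T}$ for several horizons; all ratios stay bounded as $T$ grows.

```python
import numpy as np

for T in [1e4, 1e6, 1e8]:
    eta, T0 = T ** (-2 / 3), T ** (2 / 3)
    terms = {
        "T0": T0,
        "T*sqrt(eta)": T * np.sqrt(eta),
        "T*sqrt(log T0 / T0)": T * np.sqrt(np.log(T0) / T0),
        "1/eta": 1.0 / eta,
    }
    scale = T ** (2 / 3) * np.sqrt(np.log(T))  # target rate T^{2/3} sqrt(log T)
    print(int(T), {k: round(v / scale, 3) for k, v in terms.items()})
```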
Review 1:
Summary: This paper studies safe distributed online optimization over an unknown set of linear constraints. Dynamic regret bounds for convex and non-convex loss functions have been established. Overall, the topic of the paper is interesting and the paper is well written.
Strengths and Weaknesses: The constraint information is unknown, which is different from most of the existing literature on online distributed optimization under inequality constraints. The regret bounds are in the dynamic setting and match those of the centralized counterpart. Some details regarding the computational complexity, learning rate, and non-convexity need to be further clarified.
Requested Changes:
1. It is suggested to provide the explicit expression of $\nabla l_{i}(\widehat{A}_{i}^{t})$ in Algorithm 1 and perform a detailed analysis of the computational complexity of Algorithm 1.
2. Please specify the choice of the step size $\alpha$ in Lemma 2.
3. The learning rate $\eta$ in Algorithm 2 requires the information of $T$, which is global; is it possible to adopt an adaptive learning rate that depends only on $t$?
4. The authors should specify what type of non-convexity is considered in Section 6. In addition, why is the definition of the dynamic regret identical to that of the convex setting?
5. Does the safe baseline action in Assumption 3 affect the accuracy of the parameter estimation? I think more lines should be added to discuss this assumption.
Broader Impact Concerns: No concerns
==================================================
Review 2:
Summary: The paper studies safe distributed online optimization. In particular, a network of agents optimizes a global, time-varying function subject to linear unknown constraints. Each agent observes only local information about the objective function and can communicate with its neighbors. Differently from the objective function, the linear safety constraints are common to all the agents. The authors provide an algorithm that achieves an $O(T^{2/3} \sqrt{\log T}+ T^{1/3} C^*_T)$ dynamic regret bound for convex functions, where $C_T^*$ denotes the path length of the best minimizer sequence. Moreover, they provide an algorithm for some non-convex problems.
Strengths and Weaknesses: This work provides the first algorithm for distributed safe optimization that guarantees that the constraints are not violated at any round with high probability. However, from the technical point of view, it is not clear what the main contributions of the paper are, since most of the results are heavily based on previous work.
Regarding Algorithm 1. The use of a convex combination between a known safe action and a zero-mean random vector is not novel, and the second part of the parameter estimation algorithm uses a known algorithm.
Regarding Algorithm 2. The analysis seems quite close to previous work, once you notice the existence of a mutual shrunk polytope.
Regarding the non-convex setting. The main goal of the study of this setting is to understand why online gradient descent works well in some specific non-convex settings. It is not clear what the significance of extending the analysis to distributed algorithms with unknown safety constraints is.
Requested Changes: Please highlight the technical challenges you face in extending distributed algorithms with known constraints to your setting.
Broader Impact Concerns: No concerns
==================================================
Review 3:
Summary: This paper investigates the problem of safe distributed online optimization, in which all local learners need to satisfy an unknown set of linear safety constraints. Both convex functions and a certain type of non-convex functions are considered, and an algorithm called D-Safe-OGD is proposed. Moreover, the authors prove that it can attain a dynamic regret bound of $O(T^{2/3}\sqrt{\log T}+T^{1/3}C_T^\ast)$ for convex functions, and a dynamic regret bound of $O(T^{2/3}\sqrt{\log T}+T^{2/3}C_T^\ast)$ for the certain type of non-convex functions.
Strengths and Weaknesses:
# Strengths
1) To the best of my knowledge, this paper is the first work to study the problem of safe distributed online optimization.
2) Compared with existing studies on centralized safe online optimization, this paper provides a more careful analysis to handle the difference of safe sets estimated by these local learners.
3) The dynamic regret analysis and the distributed extension of the existing non-convex online optimization are new to me.
# Weaknesses
1) The main idea to handle the unknown safety constraints is almost the same as that of Chaudhary & Kalathil (2022), though they only consider the centralized setting. It seems that the distributed setting does not bring significant challenges for satisfying the unknown safety constraints, which makes this paper incremental to some extent.
2) This paper only provides the upper bounds on dynamic regret. It is not clear whether these bounds are optimal or not.
3) In the theoretical analysis, the authors have utilized some useful conclusions from previous studies, but the introduction of these results is not easy for readers to check, e.g., the application of Theorem 3.1 in Bonnans et al. (1998) (below Eq. (27) in this submission), the application of Theorem 3.7 in Shi et al. (2015) (below Eq. (17) in this submission).
Moreover, as listed below, this paper needs to make some important modifications, and the current version cannot be accepted.
Requested Changes:
1) The authors should provide a more formalized introduction to Theorem 3.1 in Bonnans et al. (1998) and Theorem 3.7 in Shi et al. (2015) before using them.
2) In the first inequality of Eq. (41), the authors have utilized $\|\Pi_{X}(x)-\Pi_{X}(y)\|\leq\|x-y\|$ for any $x,y$, which is not very straightforward. So, it would be better if the authors could provide detailed explanations.
3) On page 3, Besbes et al. (2015) only establish an $O(\sqrt{T(1+V_T)}\log T)$ dynamic regret bound for strongly convex functions.
4) Lemma 1 is almost the same as Lemma 1 in Chaudhary & Kalathil (2022). So, the authors should also give some credit to Chaudhary & Kalathil (2022) when introducing this lemma.
5) The upper bounds presented by this paper ignore all other constants except $T$, $\delta$, and $C_T^\ast$. It would be better if the authors could also provide these omitted constants.
6) When discussing the related work on dynamic regret, the authors should clearly distinguish between the general dynamic regret with respect to any sequence of comparators and the worst-case one studied in this paper.
7) Some related studies on the worst-case dynamic regret [1][2][3] are missed, and should also be discussed.
8) Some related studies on distributed online optimization with complex constraints [5][6] are missed, and should also be discussed.
[1] Zhang et al. Dynamic Regret of Strongly Adaptive Methods. In ICML, 2018.
[2] Wan et al. Projection-Free Online Learning in Dynamic Environments. In AAAI, 2021.
[3] Wan et al. Improved Dynamic Regret for Online Frank-Wolfe. In arXiv:2302.05620, 2023.
[4] Zhang et al. Projection-free Distributed Online Learning in Networks. In ICML, 2017.
[5] Wan et al. Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity. In ICML, 2020.
Broader Impact Concerns: I have no concerns about the ethical implications of the work.
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment: The problem being studied and the technical results are new. The majority of the reviewers found the results of interest. One expert reviewer has concerns about the "relatively weak technical contribution" of the work, as most of the techniques used for the algorithm and its analysis are from existing work. The authors highlighted the challenges they had to overcome, but the reviewer's concern about significance remains. As per TMLR evaluation criteria https://jmlr.org/tmlr/editorial-policies.html#evaluation, the work meets both the "soundness" rule and the "interest" rule. The "interest" is clear from the positive subset of the reviewers, and I went through the paper with interest too. Overall, I am happy to recommend the paper for publication at TMLR. I have some more comments for the authors' consideration as a minor revision:
1. The optimality in each parameter of the problem should be discussed more clearly. The authors are frank that the regret bound they obtained might not be optimal in $C_T^*$. But is the $T^{1/3}$ term from Theorem 3 optimal (when the third term dominates)? I am not sure if the conjecture that the optimal dependence on $C_T^*$ should be $\sqrt{C_T^*}$ in Remark 4 makes sense, because when $C_T^* = T$ the expression is superlinear.
2. I encourage the authors to add a "technical summary" after the summary of results at the end of the introduction, to comment on the proof techniques (which ones are new to this paper, which ones are adapted from existing work, and which ones are borrowed directly from existing work). An upfront discussion of the techniques helps the theoretical audience to quickly understand the "meat" of the work and determine whether some of the techniques from this paper can be useful for them elsewhere.
3. The literature review on dynamic regret is a bit messy. It introduced too many different versions of "path lengths" and different settings. It might make sense to categorize the discussion a bit, and also to defer very involved discussion to the end of the paper (or even to the appendix). For example, the dynamic regret guarantee for the pointwise minimizer sequence $C_T^*$ is very different from (and much weaker than) the universal dynamic regret that competes with all comparator sequences. Recent work has obtained optimal universal dynamic regret bounds for strongly convex / exp-concave losses (e.g., Baby and Wang, COLT'2021; AISTATS'2022), which are qualitatively very different from the $O(C_T^*)$ bounds of Mokhtari et al. (2016). Generally speaking, in almost all stochastic / adversarial cases $C_T^* = O(T)$, e.g., linear regression, logistic regression, universal portfolio... the comparator sequence is not instantiated as the pointwise optimal sequence, but rather as another, more slowly changing sequence in the interest of a better bias/variance tradeoff.
Related to the above, the authors also introduce many other variants of the path length: $V_T$, $C'_T$, and squared variants. These are all fine, but dumping them on readers while they are still trying to understand what *this paper* is about is a bit distracting. This is especially so because none of these alternative path lengths are used in this paper. Usually such a literature discussion should focus on work that is directly relevant to the current work, e.g., either work that provides lower bounds for this setting, or work from which the authors have borrowed settings / techniques / algorithmic ideas. A longer discussion of the related work can be deferred to the end, after the main results are presented.

==================================================
# MACRPO: Multi-Agent Cooperative Recurrent Policy Optimization

Anonymous authors Paper under double-blind review

## Abstract

This work considers the problem of learning cooperative policies in multi-agent settings with partially observable and non-stationary environments without a communication channel. We focus on improving information sharing between agents and propose a new multi-agent actor-critic method called *Multi-Agent Cooperative Recurrent Proximal Policy Optimization* (MACRPO). We propose two novel ways of integrating information across agents and time in MACRPO: First, we use a recurrent layer in the critic's network architecture and propose a new framework that uses a meta-trajectory to train the recurrent layer. This allows the network to learn the cooperation and dynamics of interactions between agents, and also to handle partial observability. Second, we propose a new advantage function that incorporates other agents' rewards and value functions by controlling the level of cooperation between agents with a parameter. This control parameter is suitable for environments in which the agents are unable to fully cooperate with each other. We evaluate our algorithm on three challenging multi-agent environments with continuous and discrete action spaces: DeepDrive-Zero, Multi-Walker, and the Particle environment. We compare the results with several ablations and state-of-the-art multi-agent algorithms such as MAGIC, IC3Net, CommNet, GA-Comm, QMIX, MADDPG, and RMAPPO, and also single-agent methods with shared parameters between agents such as IMPALA and APEX. The results show superior performance against the other algorithms. The code is available online at https://github.com/kargarisaac/macrpo.

## 1 Introduction

While reinforcement learning (RL) (Kaelbling et al., 1996) has gained popularity in policy learning, many problems that require coordination and interaction between multiple agents cannot be formulated as single-agent reinforcement learning. Examples of such scenarios include self-driving cars (Shalev-Shwartz et al., 2016), autonomous intersection management (Dresner & Stone, 2008), multiplayer games (Berner et al., 2019; Vinyals et al., 2019), and distributed logistics (Ying & Dayong, 2005). Solving these kinds of problems with single-agent RL is problematic, because the interaction between agents and the non-stationary nature of the environment due to multiple learning agents cannot be taken into account (Hernandez-Leal et al., 2019; Lazaridis et al., 2020). Multi-agent reinforcement learning (MARL) and cooperative learning between several interacting agents can be beneficial in such domains and have been extensively studied (Nguyen et al., 2020; Hernandez-Leal et al., 2019). However, when several agents interact with each other in an environment without real-time communication, the lack of communication deteriorates policy learning. To alleviate this problem, we propose to share information during training to learn a policy that implicitly considers other agents' intentions and interacts with them in a cooperative manner. For example, in applications like autonomous driving, knowing about other cars' intentions at an intersection can improve performance, safety, and collaboration between agents.
A standard paradigm for multi-agent planning is centralized training and decentralized execution (CTDE) (Kraemer & Banerjee, 2016; Foerster et al., 2016; Lowe et al., 2017; Foerster et al., 2018; Xiao et al., 2021), also taken in this work. We propose a new cooperative multi-agent reinforcement learning algorithm, an extension of Proximal Policy Optimization (PPO), called *Multi-Agent Cooperative Recurrent Proximal Policy Optimization*

![1_image_0.png](1_image_0.png)

Figure 1: Different frameworks for information sharing. Our proposed method and the standard approach for information sharing between agents are shown in separate boxes. Blue arrows are for ours, and the red ones are for the standard parameter-sharing approach. After the agents collect trajectories, ours, in addition to sharing parameters between agents, uses the meta-trajectory to train the critic's LSTM layer. This allows the critic to learn the interaction between agents along the trajectories through its hidden state. In contrast, the literature approach, which does parameter sharing, uses the separate trajectories collected by the agents to train the LSTM layer. For more details about the network architectures, please see Fig 2.

(MACRPO). MACRPO combines and shares information across multiple agents in two ways: First, in the network architecture, using a long short-term memory (LSTM) layer trained on a meta-trajectory created from the trajectories collected by all agents, as shown in Fig 1. This allows the critic to learn the cooperation and dynamics of interactions between agents, and also to handle partial observability. Second, in the advantage function estimator, by considering other agents' rewards and value functions. MACRPO uses a centralized training and decentralized execution paradigm in which the centralized critic network uses extra information during the training phase and switches between agents sequentially to predict the value of a state for each agent. At execution time, only the actor networks are used, and each learned policy (actor network) only uses its local information (i.e., its observation) and acts in a decentralized manner. Moreover, in environments with multiple agents that learn simultaneously during training, each agent's policy and the dynamics of the environment, from each agent's perspective, are constantly changing. This causes the non-stationarity problem (Hernandez-Leal et al., 2019; Xiao et al., 2021). To reduce this effect, MACRPO uses an on-policy approach and the most recently collected data from the environment. In summary, our contributions are as follows: (1) proposing a cooperative on-policy centralized training and decentralized execution framework that is applicable to both discrete and continuous action spaces; (2) sharing information across agents in two ways: a recurrent component in the network architecture that uses a combination of trajectories collected by all agents, and an advantage function estimator that uses a weighted combination of the rewards and value functions of individual agents, with a control parameter that can be used to change the cooperation level between agents in MARL problems; (3) evaluating the method on three cooperative multi-agent tasks: the DeepDrive-Zero (Quiter, 2020), Multi-Walker (Gupta et al., 2017), and Particle (Mordatch & Abbeel, 2018) environments, demonstrating similar or superior performance compared to the state-of-the-art. The rest of this paper is organized as follows.
The review of related work in Section 2 shows that while MARL has been extensively studied, existing approaches do not address the dynamics of interaction between agents in detail. In Section 3, we provide the required background on Markov games and Proximal Policy Optimization. The problem definition and the proposed method are described in Section 4, with emphasis on the two innovations: the meta-trajectory for recurrent network training and the joint advantage function. Then, Section 5 presents an empirical evaluation in three multi-agent environments showing superior performance of the proposed approach compared to the state-of-the-art. Finally, in Section 6 we conclude that implicit information sharing can be used to improve cooperation between agents, while discussing its limitations in settings with a high number of agents.

## 2 Related Work

The most straightforward and perhaps most popular approach to solving multi-agent tasks is to use single-agent RL with several independent learning agents. Some prior works compared the performance of cooperative agents to independent agents, and tried independent Q-learning (Tan, 1993) and PPO with an LSTM layer (Bansal et al., 2017), but these did not work well in practice (Matignon et al., 2012). Also, Zhao et al. (2020) tried to learn a joint value function for two agents and used PPO with an LSTM layer to improve performance in a multi-agent setting. In order to use single-agent RL methods in the multi-agent setting, improve performance, and speed up the learning procedure, some works used parameter sharing between agents (Gupta et al., 2017; Terry et al., 2020b). Especially in self-play games, it is common to use the current or older versions of the policy for the other agents (Berner et al., 2019). We compare our proposed method with several state-of-the-art single-agent RL approaches with shared parameters between agents proposed in Terry et al. (2020b) in the experiments section. Our way of training the LSTM layer in the critic differs from the parameter sharing used in the literature in that, instead of using separate LSTMs for each agent, the LSTM layer in our method has a shared hidden state, which is updated using a combination of all agents' information. This lets the LSTM layer learn about the dynamics of interaction and cooperation between agents and across time.

In addition to using single-agent RL methods with or without parameter sharing, other works have focused on designing multi-agent RL algorithms for multi-agent settings. In multi-agent environments, considering communication between agents and information sharing leads to the design of multi-agent methods (Niu et al., 2021; Singh et al., 2019; Liu et al., 2020; Sukhbaatar et al., 2016; Dutta et al., 2005; Da Silva & Costa, 2019; Kash et al., 2011). The communication channel is often limited, leading to methods that try to optimize the communication, including the message structure (Mao et al., 2020; Kullu et al., 2017). However, in some environments there is no explicit communication channel between agents. For example, consider an autonomous driving environment without connections between cars. Finding a solution that addresses this problem and mitigates the effect of the lack of communication is necessary. A recently popularized paradigm to share information between agents is centralized training and decentralized execution. In general, we can categorize these approaches into two groups: value-based and actor-critic-based.
In value-based methods, the idea is to train a centralized value function and then extract value functions for each agent from it to act in a decentralized manner at execution time (Sunehag et al., 2018; Rashid et al., 2018). On the other hand, actor-critic-based methods have actor and critic networks (Lowe et al., 2017; Foerster et al., 2018). The critic network has access to data from all agents and is trained in a centralized way, but the actors only have access to their local information and can act independently at execution time. The actors can be independent with individual weights (Lowe et al., 2017) or share the policy with shared weights (Foerster et al., 2018). In this work, we use an actor-critic-based method with centralized training and decentralized execution, providing two innovations to improve information sharing without a communication channel between agents during execution.

RMAPPO (Yu et al., 2022) is a method close to ours, which uses the CTDE framework. They make no mention of recurrent neural networks (RNNs) in their paper, but their code contains recurrent layers. In addition, they concentrate primarily on adapting PPO components such as clipping, mini-batching, batch size, value normalization, and value function input representation for multi-agent environments. The distinction between our work and theirs is the meta-trajectory we generate from the data of all agents and the specific manner in which we employ the RNN layer, whereas they employ CTDE and RNNs as usual, without a combined trajectory as input. To share information, they use a shared policy between all agents, which is similar to what we do in addition to the meta-trajectory idea. Also, they have a shared reward function for all agents, which is the sum of all agents' rewards without any cooperation control parameter. In addition, their implementation and benchmark environments all use discrete action spaces, while we test our method on both discrete and continuous action spaces. In Foerster et al. (2018), which is another work close to ours, the actor is recurrent, but the critic is a feed-forward network, whereas our actor and critic are both recurrent, and the recurrent layer in our critic plays a crucial role in our method. Their method is also for settings with discrete action spaces, whereas we test our method on three environments with both discrete and continuous action spaces. ROLA (Xiao et al., 2021) is another work close to ours. They use LSTMs in both actor and critic networks. Additionally, ROLA employs both centralized and individual asymmetric critics that estimate individual advantage values using local history and/or state information. In contrast, we construct a meta-trajectory which contains not only the history of each agent, but also the history of the interaction between agents and the environment's dynamics. In addition, we propose a novel advantage function estimator which combines all agents' advantage functions, where the cooperation level of the agents can be changed based on the problem using a control parameter. Durugkar et al. (2020) also combine an agent-specific reward and an environment-specific reward to accomplish a shared task. They consider a framework that uses a linear mixing scheme to balance individual preferences and task rewards. They demonstrate that in their test environments, a small amount of selfishness, rather than full cooperation, can be advantageous and facilitate team learning.
In our test environments and with our framework, full cooperation among agents yields superior performance. Depending on the environment, the appropriate amount of cooperation and selfishness can differ. Another similar work to ours, and one of the most popular MARL methods, is the multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), which proposed a similar framework with centralized training and decentralized execution. They tested their method on some Particle environments (Mordatch & Abbeel, 2018). Their approach differs from ours in the following ways: (1) They do not have an LSTM (memory) layer in their networks, whereas the LSTM layer in the critic network plays a critical role in our method; it helps to learn the interaction and cooperation between agents and also mitigates the partial observability problem. (2) They tested MADDPG on Multi-Agent Particle Environments with discrete action spaces, but we test our method in both continuous and discrete action space environments. (3) They consider separate critic networks for each agent, which is beneficial for competitive scenarios, whereas we use a single critic network and consider cooperative tasks. (4) Their method is off-policy with a replay buffer, and they combat the non-stationarity problem via centralized training. In contrast, our approach, in addition to centralized training, is an on-policy method without a replay buffer, allowing the networks to use the most recent data from the environment. We will compare our method with MADDPG and show that ours has comparable or superior performance. Wang et al. (2020) extend the MADDPG idea and add a recurrent layer to the networks, but they have separate actors and critics for the agents, similar to MADDPG; the recurrent hidden states of the critics are isolated, and there is no combination of information in them. They also tested their method on only one environment with a discrete action space.

We target problems where agents attempt to collaboratively maximize the sum of all agents' expected rewards but where each agent receives its own reward. We do not specifically consider the credit assignment problem for multi-agent games where all agents share a team reward. The proposed algorithm can be applied to such problems, but it is not designed for them.

## 3 Background

## 3.1 Markov Games

In this work, we consider a multi-agent extension of partially observable Markov decision processes (MPOMDPs) (Gmytrasiewicz & Doshi, 2005), also called partially observable Markov games (Littman, 1994). This can also be modeled as a partially observable stochastic game (POSG) (Hansen et al., 2004). A Markov game for $N$ agents is defined by a set of states $S$ describing the possible configurations of all agents, a set of actions $U_1, \ldots, U_N$, and a set of observations $O_1, \ldots, O_N$ for each agent. The probability distribution of the next state as a function of the current state and actions is determined by a Markovian transition function $T : S \times U_1 \times \ldots \times U_N \to S$. Each agent $i$ uses a stochastic policy $\pi_{\theta_i} : O_i \times U_i \to [0, 1]$, parametrized by $\theta_i$, to choose an action. Upon the state transition, the agent receives a scalar reward $r_i : S \times U_i \to \mathbb{R}$. We consider games where the total reward can be decomposed into individual agent rewards $r_i$. Each agent $i$ aims to maximize the rewards of all agents in a cooperative way (Lowe et al., 2017).
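To make this interface concrete, the following minimal Python sketch illustrates an $N$-agent partially observable Markov game and a decentralized rollout in which each agent acts only on its own observation. The class and function names are illustrative assumptions of this sketch, not part of any existing library.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Minimal sketch of the N-agent partially observable Markov game of
# Section 3.1. All names are illustrative; no specific library is assumed.

@dataclass
class MarkovGame:
    n_agents: int
    reset: Callable[[], List[object]]  # -> initial observations o^1..o^N
    step: Callable[[List[object]], Tuple[List[object], List[float], bool]]
    # joint action (u^1..u^N) -> (next observations, rewards r^1..r^N, done)

def rollout(game: MarkovGame, policies, horizon: int):
    """Collect one episode; each agent i acts on its own observation only."""
    observations = game.reset()
    trajectories = [[] for _ in range(game.n_agents)]  # tau_i = (o, u, r, ...)
    for _ in range(horizon):
        actions = [pi(o) for pi, o in zip(policies, observations)]
        next_obs, rewards, done = game.step(actions)
        for i in range(game.n_agents):
            trajectories[i].append((observations[i], actions[i], rewards[i]))
        observations = next_obs
        if done:
            break
    return trajectories
```

The per-agent tuples collected here correspond directly to the trajectories $\tau_i^k$ used for training in Section 4.2.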
## 3.2 Proximal Policy Optimization

Proximal Policy Optimization (PPO) is a family of policy gradient methods for solving reinforcement learning problems, which alternate between sampling data through interaction with the environment, and optimizing a surrogate objective function using stochastic gradient descent while limiting the deviation from the policy used to collect the data (Schulman et al., 2017). PPO aims to maximize the clipped expected improvement of the policy

$$L^{CLIP}(\theta)=\hat{\mathbb{E}}_{t}\left[\min\left(f_{t}(\theta)\hat{A}_{t},\,\text{clip}(f_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t}\right)\right]$$

where $\hat{A}_t$ is the advantage obtained by Generalized Advantage Estimation (GAE), $\epsilon$ is a hyperparameter, and $f_t(\theta)$ denotes the probability ratio $f_t(\theta) \equiv \frac{\pi_{\theta}(u_t|o_t)}{\pi_{\theta_{old}}(u_t|o_t)}$ for importance sampling. The clipping prevents excessively large policy updates. In addition to the expected improvement, the total objective function for PPO incorporates a loss function for a critic network required for GAE and an entropy bonus term to encourage exploration, resulting in the total objective (Schulman et al., 2017)

$$L_{t}^{CLIP+VF+S}(\theta)=\hat{\mathbb{E}}_{t}\left[L_{t}^{CLIP}(\theta)-c_{1}L_{t}^{VF}(\theta)+c_{2}S[\pi_{\theta}](o_{t})\right]\tag{1}$$

where $c_1, c_2$ are weight factors, $S$ denotes the entropy bonus, and $L_t^{VF}$ is a squared-error loss for the critic

$$L_{t}^{VF}(\theta)=(V_{\theta}(o_{t})-V_{t}^{targ})^{2}\tag{2}$$

In the above equations, $V_\theta(o_t)$ is the state-value function and $\theta$ denotes the combined parameter vector of the actor and critic networks. PPO uses multiple epochs of minibatch updates for each set of sampled interactions.

## 4 Method

In this section, we first explain the problem setting and outline our proposed solution. We then proceed to describe its two main components: a critic based on a recurrent neural network trained on the proposed meta-trajectory, and advantage estimation using weighted rewards with a control parameter for the cooperation level between agents, ending with a summary of the proposed algorithm.

## 4.1 Problem Setting And Solution Overview

Information sharing across agents can help to improve performance and speed up learning (Gupta et al., 2017; Foerster et al., 2018; Terry et al., 2020b). In this work, we focus on improving information sharing between agents in multi-agent settings beyond just sharing parameters across actors. We propose the Multi-Agent Cooperative Recurrent Proximal Policy Optimization (MACRPO) algorithm, a cooperative multi-agent algorithm that uses the centralized training and decentralized execution framework. To improve information sharing between agents, MACRPO, in addition to parameter sharing, uses two novel ideas: (a) a recurrent critic architecture that is trained using a meta-trajectory created by combining the trajectories collected by all agents (Section 4.2), and (b) an advantage function estimator that combines the rewards and value functions of individual agents using a control parameter that can be employed to alter the degree of cooperation amongst agents (Section 4.3).

## 4.2 MACRPO Framework

The proposed MACRPO framework consists of one recurrent actor, similar to Foerster et al. (2018), and one recurrent critic network, as illustrated in Fig. 1. To account for the partial observability of multi-agent settings, we use recurrent LSTM layers in both the actor and critic networks to allow integration of information over time.
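As a minimal illustration of such a recurrent network (the exact stacks are described next and shown in Fig 2), the following PyTorch-style sketch shows an actor with an embedding, an LSTM, and a linear head. All dimensions and the Gaussian-policy head are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """Sketch of an Embedding -> LSTM -> Linear actor; dimensions are hypothetical."""

    def __init__(self, obs_dim: int = 32, hidden: int = 64, action_dim: int = 3):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden)    # observation embedding
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, action_dim)  # e.g., mean of a Gaussian policy

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); state carries memory across time-steps
        x = torch.relu(self.embed(obs_seq))
        x, state = self.lstm(x, state)
        return self.head(x), state
```

The critic follows the same stack but, as described below, is fed a combined sequence from all agents rather than a single agent's trajectory.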
The actor network architecture is composed of a stack of Embedding, LSTM, and Linear layers and is trained using the trajectories collected by all agents. We denote the shared weights of the actors by $\theta_a$ and use the same, latest weights for all agents. The behaviors of different agents vary because of stochasticity and differences in their inputs. Denoting the trajectory data for episode $k$ with length $T$ for agent $i$ as

$$\tau_{i}^{k}=(o_{1}^{i},u_{1}^{i},r_{1}^{i},\ldots,o_{T}^{i},u_{T}^{i},r_{T}^{i}),$$

the training data for the actor is then $D_A = (\tau_1^1, \ldots, \tau_i^k, \ldots)$.

![5_image_0.png](5_image_0.png)

![5_image_1.png](5_image_1.png)

Figure 2: Actor and critic network architectures. (a) Actor network architecture for agent $i$, which uses the trajectory collected by the agent itself; (b) the centralized critic network architecture, which uses the created meta-trajectory. Note that $u$, $v$, and $o$ denote action, value, and observation, respectively. Also, superscripts and subscripts show the agent number and time-step, respectively.

To allow the critic network, which is also a stack of Embedding, LSTM, and Linear layers, to integrate information across agents and time, we take all agents' trajectories in each roll-out and concatenate them in a sequence to create a meta-trajectory, and train the critic network using it (see Fig. 1). To remove the dependency on the order of agents, we randomize the order of agents at each meta-trajectory generation phase. Whenever we create a meta-trajectory, we pick one order, create the meta-trajectory, and train the network with it; the order therefore stays the same within that meta-trajectory. We then change the order of agents for the creation of the next meta-trajectory, but it again remains the same for that entire meta-trajectory. Similar to the training data for the actor, we can define the training data for the critic network. Denoting the meta-trajectory for episode $k$ with length $T$ for $N$ agents as

$$\mu^{k}=(o_{1}^{1},\ldots,o_{1}^{N},u_{1}^{1},\ldots,u_{1}^{N},r_{1}^{1},\ldots,r_{1}^{N},\ldots,o_{T}^{1},\ldots,o_{T}^{N},u_{T}^{1},\ldots,u_{T}^{N},r_{T}^{1},\ldots,r_{T}^{N})$$

the training data for the critic is then $D_C = (\mu^1, \ldots, \mu^k, \ldots)$. Through this meta-trajectory, the critic network receives information from all agents and captures the agents' history, the interactions between them, and the environment dynamics, all captured by the hidden state. In other words, MACRPO is able to consider temporal dynamics using the LSTM layer, which incorporates a history of states and actions across all agents. Modeling temporal dynamics allows the latent space to model differential quantities, such as the rate of change (derivative) of the distance between two agents, and integral quantities, such as the running average of the distance. Additionally, the hidden state of recurrent networks can be viewed as a communication channel that allows information to flow between agents to create richer training signals for the actors during training. The network updates the hidden state at each time-step using the previous hidden state and the data from agent $i$ at that time-step. The network architectures for the actor and critic are shown in Fig 2. It is important to note that the critic network is only needed during training and that the optimized policy can be deployed using only the actor, such that the agents are able to operate in a fully distributed manner without communication.
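A minimal sketch of the meta-trajectory construction described above could look as follows. The per-time-step grouping of observations, actions, and rewards mirrors the definition of $\mu^k$, while the function name and the flat data layout are assumptions of this sketch.

```python
import random

def build_meta_trajectory(trajectories, seed=None):
    """Combine per-agent trajectories into one critic training sequence.

    trajectories: list over N agents, each a list of T (o, u, r) tuples
    collected in the same episode. The agent order is randomized once per
    meta-trajectory and then held fixed within it, as in Section 4.2.
    """
    rng = random.Random(seed)
    order = list(range(len(trajectories)))
    rng.shuffle(order)  # randomize agent order for this meta-trajectory only
    T = len(trajectories[0])
    meta = []
    for t in range(T):
        # group all agents' observations, then actions, then rewards at time t
        meta.extend(trajectories[i][t][0] for i in order)
        meta.extend(trajectories[i][t][1] for i in order)
        meta.extend(trajectories[i][t][2] for i in order)
    return meta
```

Because the order is re-randomized only between meta-trajectories, the critic's hidden state sees a consistent agent ordering within each training sequence.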
## 4.3 Objective Function

In addition to the LSTM layer, we propose a novel advantage function estimator based on weighted discounted returns, using a parameter which controls the agents' cooperation level and integrates information across agents. We take $V_t^{targ}$ in Equation (2) to be the discounted return and propose to calculate it for agent $i$ at time $t$ as

$$R_{t}^{i}=\overline{r}_{t}+\gamma\overline{r}_{t+1}+\cdots+\gamma^{T-t+1}\overline{V}(o_{T}^{i})\tag{3}$$

## Algorithm 1 MACRPO

1: Randomly initialize the actor and critic networks' parameters $\theta_a$ and $\theta_c$
2: for iteration=1, 2, ... do
3: &nbsp;&nbsp;for environment=1, 2, ..., E do
4: &nbsp;&nbsp;&nbsp;&nbsp;Run all N agents with the latest trained weights in the environment for T time-steps and collect data
5: &nbsp;&nbsp;&nbsp;&nbsp;Combine the trajectories collected by all agents according to Fig 1
6: &nbsp;&nbsp;&nbsp;&nbsp;Compute discounted returns and advantage estimates using Equations (3, 5)
7: &nbsp;&nbsp;**end for**
8: &nbsp;&nbsp;for epoch=1, ..., K do
9: &nbsp;&nbsp;&nbsp;&nbsp;for minibatch=1, ..., M do
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Calculate the loss functions using Equations (7, 8)
11: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update actor and critic parameters via Adam
12: &nbsp;&nbsp;&nbsp;&nbsp;**end for**
13: &nbsp;&nbsp;**end for**
14: **end for**

where

$$\overline{r}_{t}=\frac{r_{t}^{i}+\beta\sum_{j\neq i}r_{t}^{j}}{N},\quad\overline{V}(o_{T}^{i})=\frac{V(o_{T}^{i})+\beta\sum_{j\neq i}V(o_{T}^{j})}{N}\tag{4}$$

where $r_t^i$ is the reward for agent $i$ at time $t$, $\gamma$ is the discount factor, $\beta$ is the cooperation control parameter used for the rewards of the other agents, and $V(o_T^i)$ is the value of the final state of agent $i$. The advantage for each agent $i$ is then calculated as

$$\hat{A}_{t}^{i}=\delta_{t}^{i}+(\gamma\lambda)\delta_{t+1}^{i}+\ldots+(\gamma\lambda)^{T-t+1}\delta_{T-1}^{i}\tag{5}$$

where

$$\delta_{t}^{i}=\frac{1}{N}\Big[r_{t}^{i}+\gamma V(o_{t+1}^{i})-V(o_{t}^{i})+\beta\sum_{j\neq i}\big(r_{t}^{j}+\gamma V(o_{t+1}^{j})-V(o_{t}^{j})\big)\Big]\tag{6}$$

where $\lambda$ is the temporal difference factor of the GAE algorithm, and $V(o_t^i)$ is the state-value at time $t$ for agent $i$. The intuition behind the weighting is that each agent's own rewards are likely to be affected most by its own action choice, but the actions taken by other agents can also affect the reward. In addition, the $\beta$ parameter can be interpreted as a control parameter for the cooperation level between agents. This heuristic is related to credit assignment between agents and provides a trade-off between optimizing the policy considering only individual rewards ($\beta = 0$, no cooperation between agents), which could lead to a sub-optimal total reward when individual rewards are in conflict with each other, and optimizing the policy using the sum of all rewards ($\beta = 1$, full cooperation between agents), which could make the assignment of credit between agents challenging. One should note that policy optimization is performed across all agents such that, in the end, the expected rewards over all agents are maximized, independent of the choice of $\beta$. MACRPO uses separate networks for the actor and critic. Therefore, the objective functions of the actor and critic networks are separate, in contrast to PPO.
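Before turning to the objectives, Equations (5) and (6) can be illustrated with the following NumPy sketch, which computes the β-weighted TD errors and accumulates them with the standard backward (γλ) recursion of GAE. The array shapes and the function name are assumptions of this sketch.

```python
import numpy as np

def macrpo_advantages(rewards, values, beta, gamma=0.99, lam=0.95):
    """Beta-weighted TD errors (Eq. 6) accumulated into advantages (Eq. 5).

    rewards: array of shape (N, T), per-agent rewards r_t^i
    values:  array of shape (N, T+1), critic values V(o_t^i) incl. bootstrap
    """
    N, T = rewards.shape
    # Per-agent one-step TD errors: r_t^i + gamma * V(o_{t+1}^i) - V(o_t^i)
    td = rewards + gamma * values[:, 1:] - values[:, :-1]
    # Eq. (6): own TD error plus beta-weighted TD errors of the other agents
    total = td.sum(axis=0, keepdims=True)
    delta = (td + beta * (total - td)) / N
    # Eq. (5): backward (gamma * lambda) accumulation over the episode
    adv = np.zeros_like(delta)
    running = np.zeros(N)
    for t in reversed(range(T)):
        running = delta[:, t] + gamma * lam * running
        adv[:, t] = running
    return adv
```

Setting `beta=0` recovers the standard per-agent GAE (up to the 1/N scale), while `beta=1` averages all agents' TD errors, matching the two extremes discussed above.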
The actor's objective function in the shared-weights case is defined as

$$L_{t}^{CLIP+S}(\theta_{a})=\hat{\mathbb{E}}_{t}[L_{t}^{CLIP}(\theta_{a})+cS[\pi_{\theta_{a}}](o_{t})]\tag{7}$$

and the critic objective function is

$$L_{t}^{VF}(\theta_{c})=(V_{\theta_{c}}(o_{t})-V_{t}^{targ})^{2}\tag{8}$$

where $\theta_c$ are the parameters of the critic. A parallelized version of the MACRPO algorithm is shown in Algorithm 1.

## 5 Experiments

This section presents empirical results that compare the performance of our proposed method, MACRPO, with several ablations to see the effect of each proposed novelty. We also compare our method with recent advanced RL methods in both the single-agent domain with shared parameters between agents (Gupta et al., 2017; Terry et al., 2020b) and the multi-agent domain, such as MAGIC (Niu et al., 2021), IC3Net (Singh et al., 2019), GA-Comm (Liu et al., 2020), CommNet (Sukhbaatar et al., 2016), MADDPG (Lowe et al., 2017), RMAPPO (Yu et al., 2022), and QMIX (Rashid et al., 2018).

## 5.1 Test Environments

We test our method in three MARL environments. In two of them, the DeepDrive-Zero (Quiter, 2020) and Multi-Walker (Terry et al., 2020a) environments, the action space is continuous, and in the third, the Particle environment (Mordatch & Abbeel, 2018), the action space is discrete. Fig 3 shows these three environments.

![7_image_0.png](7_image_0.png)

Figure 3: Considered MARL simulation environments: (a) DeepDrive-Zero environment: an unprotected left-turn scenario, (b) Multi-Walker environment, (c) Particle environment: cooperative navigation.

DeepDrive-Zero Environment: There are several autonomous driving simulators which can be used for multi-agent simulation (Dosovitskiy et al., 2017; Santara et al., 2021; Quiter, 2020). In this work, we use DeepDrive-Zero (Quiter, 2020) because we do not need to deal with image data and we need a fast simulation environment for training. DeepDrive-Zero is a very fast 2D simulation environment for self-driving cars which uses a bike model for the cars. We use the unsignalized intersection scenario in this work, shown in Fig 3a. To test our algorithm, we consider two cars in the environment: one starts from the south and follows the green waypoints to make an unprotected left turn, and the other starts from the north and follows the orange waypoints to the south. The agents need to learn to cooperate and negotiate to reach their destinations without any collision.

Multi-Walker Environment: The Multi-Walker environment is a multi-agent continuous control locomotion task introduced in Gupta et al. (2017). The environment contains agents (bipedal walkers) that can actuate the joints in each of their legs and convey objects on top of them. Fig 3b shows a snapshot from the environment.

Cooperative Navigation in Particle Environment: Using the particle environment package from OpenAI (Lowe et al., 2017), we created a new environment based on the cooperative navigation environment. This new environment consists of N agents and N landmarks, and the agents must avoid collisions and cooperate to reach and cover all landmarks. Fig 3c shows the simulation environment. See Appendix A for more details about the environments.

## 5.2 Ablation Study

Four ablations were designed to evaluate each novelty. The name of each variant indicates whether it uses a feed-forward or LSTM network and how information is shared in that variant.
In all cases, the parameter sharing proposed in Gupta et al. (2017) and Terry et al. (2020b) was used:

FF-NIC *(feed-forward multi-layer perceptron (MLP) network + no information combination)*: two feed-forward neural networks for the actor and critic. The GAE is calculated using the single-agent PPO GAE equation (Schulman et al., 2017). There is no LSTM layer or combination of reward and value functions for information sharing in this case.

FF-ICA *(feed-forward MLP network + information combination using the advantage estimation function)*: This case is similar to the previous one, but the GAE is calculated using Equation (5) to show the effect of mixing reward and value functions for information sharing. There is no LSTM layer in this case either.

LSTM-NIC *(LSTM network + no information combination)*: two networks with LSTM layers for the actor and critic. There is no information sharing between agents through the GAE calculation or the LSTM's hidden state. The GAE is calculated using the single-agent PPO GAE equation (Schulman et al., 2017).

LSTM-ICA *(LSTM network + information combination using the advantage estimation function but not through the LSTM layer)*: This case is identical to the previous one, but the GAE is calculated using Equation (5).

LSTM-ICF *(LSTM network + information sharing using both the advantage estimation function and an LSTM layer in the critic network; the full proposed method)*: two networks with LSTM layers for the actor and critic. In addition to parameter sharing between actors, information is integrated through both the advantage estimation function and the LSTM's hidden state in the centralized critic network, as shown in Fig 1.

Also, to see the effect of the $\beta$ value in Equations (4, 6), the proposed method was evaluated with different $\beta$ values, corresponding to different cooperation levels between agents. All experiments were repeated with identical random seeds for each method to reduce the effect of randomness. The hyperparameters used in MACRPO for the three environments are detailed in Appendix C.

DeepDrive-Zero Environment: We ran all ablations for ten random seeds in the DeepDrive-Zero environment to test our proposed method. We used self-play in the simulations and used the latest set of parameters for the actors in each episode. The results are shown in Fig 4a. The x-axis shows the number of training iterations. In each iteration, we ran 100 parallel environments for 3000 steps and collected data. Next, we updated the actor and critic networks using the collected data. After each iteration, we ran the agents for 100 episodes, took the mean of these episodes' rewards (the sum of all agents' rewards), and plotted it. The shaded area shows one standard deviation of the episode rewards. The hyperparameters used in the MACRPO algorithm are listed in Table 2 in Appendix C. The proposed algorithm, LSTM-ICF, outperforms the ablations. The next best performances are for LSTM-ICA and FF-ICA, which are almost the same. Moreover, information integration in the advantage function, in both FF-ICA and LSTM-ICA, improves the performance compared to FF-NIC and LSTM-NIC; however, the achieved performance gain in the feed-forward case is higher. FF-ICA surpasses LSTM-NIC, which shows the effectiveness of sharing information across agents through the proposed advantage function, even without an LSTM layer. Furthermore, adding the LSTM layer as another level of information integration, LSTM-ICF, boosts performance compared to FF-ICA.
Fig 4b shows the analysis of the effect of different $\beta$ values in Equations (3, 4, 6). The best performance is for $\beta = 1$, which corresponds to full cooperation between agents, and as the value of $\beta$, the agents' cooperation level, is reduced, the agents' performance decreases. We demonstrate the effect of different $\beta$ values in this environment; for the other environments, results are provided for $\beta \in \{0, 1\}$ only. To achieve smooth driving performance, a curriculum-based learning method with a gradual increase of the reward-factor weights was used. The weights of jerk, G-force, steering angle change, acceleration change, and going out of the lane in the reward function were gradually increased to 3.3 × 10−6, 0.1, 3, 0.05, and 0.3, respectively. We then added termination of episodes for lane violations to force the cars to stay between the lanes. After curriculum learning and smoothing of the driving behavior, the cars follow the waypoints to reach their destinations. The car that starts from the south and wants to make a left turn yields nicely to the other agent if they reach the intersection simultaneously and then makes the left turn; if it has time to cross the intersection before the other agent arrives, it does so. A video of the final result can be found in the supplementary materials.

Multi-Walker Environment: We ran 20 parallel environments for 2500 time-steps during each update iteration of the Multi-Walker environment. After each iteration, we ran the agents for 100 episodes and plotted the mean of these episodes' rewards. Each episode's reward is the sum of all the agents' rewards. Ten different random seeds were used for each ablation. We also used the latest set of parameters for all actors. The hyperparameters used in the MACRPO algorithm are listed in Table 2 in Appendix C.

![9_image_0.png](9_image_0.png)

Figure 4: Simulation results in the DeepDrive-Zero environment. (a) Mean episode reward for different ablations, (b) mean episode reward for different $\beta$ values. The shaded area shows one standard deviation.

![9_image_1.png](9_image_1.png)

Figure 5: Simulation results in the Multi-Walker and Particle environments for different ablations. (a) Multi-Walker simulation results, (b) Particle environment simulation results.

Fig 5a shows a massive performance improvement of our proposed method, LSTM-ICF with $\beta = 1$, compared to the ablations. LSTM-ICF with $\beta = 0$, information integration through only the LSTM layer, has the next best performance. After these two, LSTM-ICA, which does information integration using the advantage estimation function, performs better than the FF-ICA, FF-NIC, and LSTM-NIC cases. The effect of the $\beta$ value and of information sharing through the advantage estimation function on performance improvement can be seen as we move from LSTM-ICF with $\beta = 0$ to LSTM-ICF with $\beta = 1$, and from FF-NIC to FF-ICA. By comparing FF-ICA and LSTM-ICF, we can also see the impact of information integration using the LSTM layer. Note that the $\beta$ value in FF-ICA is equal to 1. A video of the trained model can be found in the supplementary materials.

Cooperative Navigation in Particle Environment: In the Particle environment, in each iteration, we ran 20 parallel environments to collect data for 2500 time-steps and used that data to update the network. The agents were then evaluated using the trained weights for 100 episodes. We ran the simulation with six random seeds. The MACRPO hyperparameters are shown in Table 2 in Appendix C. The results for this environment are depicted in Fig 5b.
Similar to the other two environments, the proposed LSTM-ICF with $\beta = 1$ outperforms the ablations. The next best performance is achieved by LSTM-ICF with $\beta = 0$, which only uses the LSTM layer trained on the created meta-trajectory. Moreover, LSTM-ICA's performance is almost identical to that of LSTM-ICF with $\beta = 0$. This shows that both novel ideas yield the same performance gain over LSTM-NIC. These results show that the cases with an LSTM layer perform better than the feed-forward ones, even in the FF-ICA case, which integrates information through the advantage function. A video of the trained model can be found in the supplementary materials.

| Method | DeepDrive-Zero | Multi-Walker | Particle |
|-------------------|-------|----------------|------------|
| DQN | 4 | -100000 | -151.8 |
| RDQN | 6 | -100000 | -153.2 |
| A2C | 0.5 | -27.6 | -148.6 |
| DDPG | 2 | -57.8 | - |
| PPO | 16 | 41 | -144.3 |
| SAC | -1.5 | -16.9 | -143.7 |
| TD3 | -1 | -8 | - |
| APEX-DQN | 8 | -100000 | -136.2 |
| APEX-DDPG | 14 | -23 | - |
| IMPALA | -0.66 | -88 | -155.2 |
| MADDPG | -0.1 | -96 | -98.3 |
| QMIX | -0.9 | -24 | -155.6 |
| MAGIC | 3.1 | - | -114 |
| IC3Net | 2.1 | - | -117 |
| GA-Comm | 1.9 | - | -119 |
| CommNet | 1.6 | - | -115 |
| RMAPPO | -0.43 | - | -131 |
| Ours (β = 0) | 17.3 | 24.2 | -100.7 |
| Ours (full model) | 23.7 | 47.8 | -95.8 |

Table 1: Comparison of the performance of our method with state-of-the-art approaches. Numbers show the average reward in each environment over ten random seeds, except for the Multi-Walker environment, which uses 1000 random seeds.

Both ideas were evaluated in the ablation study, and the results clearly demonstrate the effect of the proposed ideas on performance improvement. The ablation studies provide evidence that the findings are not spurious, but are associated with the proposed enhancements. $\beta = 1$ corresponds to the total reward over all agents, the optimization goal. However, it is known that such a team reward causes a credit assignment problem, since each agent's contribution to the team reward can differ. For this reason, we wanted to experimentally study whether $\beta$ values less than one would alleviate the credit assignment problem to the extent that the suboptimality of the reward would be overcome. According to the results of the experiment, this was not the case, and $\beta = 1$ gave the best performance. Moreover, as the results illustrate, both proposed ideas result in a performance gain, but it is not the same for all environments. In the DeepDrive-Zero environment, information integration through advantage function estimation improves the performance slightly more than the LSTM layer. However, in the Multi-Walker environment, the LSTM layer is more effective, and in the Particle environment, their effects are almost the same.

## 5.3 Comparison To State-Of-The-Art Methods

We compared the proposed method with several state-of-the-art algorithms in each environment. Our method is compared against several single-agent baselines with shared parameters across agents (DQN, RDQN, A2C, DDPG, PPO, SAC, TD3, APEX-DQN, APEX-DDPG, and IMPALA), which were tested in Terry et al. (2020b). We also compared our method to state-of-the-art multi-agent approaches such as MAGIC (Niu et al., 2021), IC3Net (Singh et al., 2019), GA-Comm (Liu et al., 2020), CommNet (Sukhbaatar et al., 2016), MADDPG (Lowe et al., 2017), RMAPPO (Yu et al., 2022), and QMIX (Rashid et al., 2018).
The architectures and hyperparameters used for PPO, IMPALA, A2C, SAC, APEX-DQN, Rainbow-DQN, DQN, APEX-DDPG, DDPG, TD3, and QMIX are taken from Terry et al. (2020b), which has an open-source implementation 1. For MADDPG (Lowe et al., 2017), we used the original source code 2; for RMAPPO (Yu et al., 2022), we used their open-source code 3; and for MAGIC, IC3Net, CommNet, and GA-Comm, we used the open-source implementation 4. We performed hyperparameter tuning using grid search to optimize the performance of each method. Note that some of the official implementations of the baselines used here do not support both discrete and continuous action spaces, and we did not modify the code; the non-reported results for some baselines in the paper's tables and charts are due to this. In addition, we used a discretized version of the DeepDrive-Zero environment for algorithms with discrete action spaces, which may cause poor performance. Each agent's goal in MACRPO is to maximize the total reward of all agents, while the goal of the other methods is to maximize each agent's own reward without considering other agents' rewards in their objective function. To provide a fairer comparison, we also report the result for our method when $\beta = 0$. The results are shown in Table 1. The table contains some empty fields due to the fact that some algorithms do not support continuous or discrete action spaces. See Appendix B for more details.

DeepDrive-Zero Environment: In this environment, our full method and also the case with $\beta = 0$ achieved the highest average reward. The next best was PPO with parameter sharing between agents, followed by APEX-DQN and APEX-DDPG. A version of the environment with a discretized action space was used for algorithms with discrete action spaces.

Multi-Walker Environment: As in the previous environment, the proposed method outperformed the other methods by a large margin, with an average reward of 47.8. PPO with parameter sharing had the second-best performance, with a maximum average reward of 41. Our method with $\beta = 0$ achieved the third-best average reward. Some algorithms do not support continuous action spaces and are marked with a dash in the table.

Cooperative Navigation in Particle Environment: As in both previous environments, our approach outperformed the other approaches in this environment as well, although the difference from MADDPG was minor. Our method with $\beta = 0$ is in third place, after MADDPG by a small margin. We used a categorical distribution instead of a multivariate Gaussian distribution in this environment with its discrete action space. Algorithms that only support continuous action spaces were not tested in this environment and are marked with a dash in the table. Adapting these algorithms for discrete-action environments could be achieved using the same trick, but we did not change the standard implementations of the baselines.

It is evident from the reported results that RMAPPO's performance in the DeepDrive-Zero environment is not satisfactory and that it is average in the Particle environment. As the current implementation of RMAPPO does not support continuous action spaces, we could not test this method in the Multi-Walker environment. Additionally, we conducted limited hyperparameter searches for RMAPPO; since this method aims to recommend a set of modifications and hyperparameters that improve PPO's performance for multi-agent systems, we did not deviate too far from the main hyperparameters.
MADDPG also does not perform well in the DeepDrive-Zero and Multi-Walker environments; however, it performs well in the Particle environment. All hyperparameters for each algorithm are included in Appendix C. The results show that the performance benefit given by the two proposed ways of sharing information across agents is significant, such that the method outperforms state-of-the-art algorithms.

1 https://github.com/parametersharingmadrl/parametersharingmadrl
2 https://github.com/openai/maddpg
3 https://github.com/marlbenchmark/on-policy
4 https://github.com/CORE-Robotics-Lab/MAGIC

## 6 Conclusion & Future Work

In this paper, MACRPO, a centralized training and decentralized execution framework for multi-agent cooperative settings, was presented. The framework is applicable to both discrete and continuous action spaces. In addition to parameter sharing across agents, this framework integrates information across agents and time in two novel ways: in the network architecture and in the advantage estimation function. An ablation study in three environments revealed that both ways of information sharing are beneficial. Furthermore, the method was compared to state-of-the-art multi-agent algorithms such as MAGIC, IC3Net, CommNet, GA-Comm, QMIX, and MADDPG, as well as single-agent algorithms that share parameters between agents, such as IMPALA and APEX. The results showed that the proposed algorithm performs significantly better than state-of-the-art algorithms. A single recurrent network summarizing the state of all agents may be problematic when the number of agents is large. A potential solution to this problem could be an attention mechanism with which an agent learns which other agents to pay attention to, warranting further study to realize the potential of the proposed approach with a high number of agents.

## References

Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. *arXiv preprint arXiv:1710.03748*, 2017.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. *arXiv preprint arXiv:1912.06680*, 2019.

Felipe Leno Da Silva and Anna Helena Reali Costa. A survey on transfer learning for multiagent reinforcement learning systems. *Journal of Artificial Intelligence Research*, 64:645–703, 2019.

Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. In *Conference on Robot Learning*, pp. 1–16. PMLR, 2017.

Kurt Dresner and Peter Stone. A multiagent approach to autonomous intersection management. *Journal of Artificial Intelligence Research*, 31:591–656, 2008.

Ishan Durugkar, Elad Liebman, and Peter Stone. Balancing individual preferences and shared objectives in multiagent reinforcement learning. *Good Systems-Published Research*, 2020.

Partha Sarathi Dutta, Nicholas R Jennings, and Luc Moreau. Cooperative information sharing to improve distributed learning in multi-agent systems. *Journal of Artificial Intelligence Research*, 24:407–463, 2005.

Jakob Foerster, Ioannis Alexandros Assael, Nando De Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In *Advances in Neural Information Processing Systems*, pp. 2137–2145, 2016.

Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson.
Counterfactual multi-agent policy gradients. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Piotr J Gmytrasiewicz and Prashant Doshi. A framework for sequential planning in multi-agent settings. *Journal of Artificial Intelligence Research*, 24:49–79, 2005.

Jayesh K Gupta, Maxim Egorov, and Mykel Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. In *International Conference on Autonomous Agents and Multiagent Systems*, pp. 66–83. Springer, 2017.

Eric A Hansen, Daniel S Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In *AAAI*, volume 4, pp. 709–715, 2004.

Pablo Hernandez-Leal, Bilal Kartal, and Matthew E Taylor. A survey and critique of multiagent deep reinforcement learning. *Autonomous Agents and Multi-Agent Systems*, 33(6):750–797, 2019.

Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. *Journal of Artificial Intelligence Research*, 4:237–285, 1996.

Ian A Kash, Eric J Friedman, and Joseph Y Halpern. Multiagent learning in large anonymous games. *Journal of Artificial Intelligence Research*, 40:571–598, 2011.

Landon Kraemer and Bikramjit Banerjee. Multi-agent reinforcement learning as a rehearsal for decentralized planning. *Neurocomputing*, 190:82–94, 2016.

Kurtulus Kullu, Uğur Güdükbay, and Dinesh Manocha. ACMICS: an agent communication model for interacting crowd simulation. *Autonomous Agents and Multi-Agent Systems*, 31(6):1403–1423, 2017.

Aristotelis Lazaridis, Anestis Fachantidis, and Ioannis Vlahavas. Deep reinforcement learning: A state-of-the-art walkthrough. *Journal of Artificial Intelligence Research*, 69:1421–1471, 2020.

Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In *Machine Learning Proceedings 1994*, pp. 157–163. Elsevier, 1994.

Yong Liu, Weixun Wang, Yujing Hu, Jianye Hao, Xingguo Chen, and Yang Gao. Multi-agent game abstraction via graph attention neural network. In *The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020*, pp. 7211–7218. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/6211.

Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In *Advances in Neural Information Processing Systems*, pp. 6379–6390, 2017.

Hangyu Mao, Zhengchao Zhang, Zhen Xiao, Zhibo Gong, and Yan Ni. Learning multi-agent communication with double attentional deep reinforcement learning. *Autonomous Agents and Multi-Agent Systems*, 34(1):1–34, 2020.

Laëtitia Matignon, Guillaume J. Laurent, and Nadine Le Fort-Piat. Independent reinforcement learners in cooperative Markov games: a survey regarding coordination problems. *The Knowledge Engineering Review*, 27:1–31, 2012.

Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In *Thirty-Second AAAI Conference on Artificial Intelligence*, 2018.

Thanh Thi Nguyen, Ngoc Duy Nguyen, and Saeid Nahavandi. Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications. *IEEE Transactions on Cybernetics*, 2020.

Yaru Niu, Rohan Paleja, and Matthew Gombolay.
Multi-agent graph-attention communication and teaming. In *Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems*, pp. 964–973, 2021.

Craig Quiter. Deepdrive Zero, June 2020. URL https://doi.org/10.5281/zenodo.3871907.

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder De Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. *arXiv preprint arXiv:1803.11485*, 2018.

Anirban Santara, Sohan Rudra, Sree Aditya Buridi, Meha Kaushik, Abhishek Naik, Bharat Kaul, and Balaraman Ravindran. Madras: Multi agent driving simulator. *Journal of Artificial Intelligence Research*, 70:1517–1555, 2021.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017.

Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. *arXiv preprint arXiv:1610.03295*, 2016.

Amanpreet Singh, Tushar Jain, and Sainbayar Sukhbaatar. Learning when to communicate at scale in multiagent cooperative and competitive tasks. In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019*. OpenReview.net, 2019. URL https://openreview.net/forum?id=rye7knCqK7.

Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain*, pp. 2244–2252, 2016. URL https://proceedings.neurips.cc/paper/2016/hash/55b1927fdafef39c48e5b73b5d61ea60-Abstract.html.

Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinícius Flores Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning based on team reward. In *AAMAS*, pp. 2085–2087, 2018.

Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In *Proceedings of the Tenth International Conference on Machine Learning*, pp. 330–337, 1993.

Justin K Terry, Benjamin Black, Mario Jayakumar, Ananth Hari, Luis Santos, Clemens Dieffendahl, Niall Williams, Praveen Ravi, Yashas Lokesh, Caroline Horsch, and Dipam Patel. PettingZoo. https://github.com/PettingZoo-Team/PettingZoo, 2020a. GitHub repository.

Justin K Terry, Nathaniel Grammel, Ananth Hari, Luis Santos, Benjamin Black, and Dinesh Manocha. Parameter sharing is surprisingly useful for multi-agent deep reinforcement learning. *arXiv preprint arXiv:2005.13625*, 2020b.

Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019.

Rose E Wang, Michael Everett, and Jonathan P How. R-MADDPG for partially observable environments and limited communication. *arXiv preprint arXiv:2002.06684*, 2020.

Yuchen Xiao, Xueguang Lyu, and Christopher Amato. Local advantage actor-critic for robust multi-agent deep reinforcement learning. In *2021 International Symposium on Multi-Robot and Multi-Agent Systems (MRS)*, pp. 155–163. IEEE, 2021.

Wang Ying and Sang Dayong.
Multi-agent framework for third party logistics in e-commerce. *Expert Systems with Applications*, 29(2):431–436, 2005.

Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao, Yu Wang, Alexandre Bayen, and Yi Wu. The surprising effectiveness of PPO in cooperative multi-agent games. In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*, 2022. URL https://openreview.net/forum?id=YVXaxB6L2Pl.

Weiwei Zhao, Hairong Chu, Xikui Miao, Lihong Guo, Honghai Shen, Chenhao Zhu, Feng Zhang, and Dongxin Liang. Research on the multiagent joint proximal policy optimization algorithm controlling cooperative fixed-wing UAV obstacle avoidance. *Sensors*, 20(16):4546, 2020.

## A Detailed Environment Descriptions

DeepDrive-Zero Environment: The observation space is a vector with continuous values. Each agent in the environment receives some information about itself, as well as information about the other agents. This information can come from modules like perception, localization, and an HD map in a self-driving car, and be used by the decision-making and control modules. The observation vector for each agent contains information about the agent itself, such as the distance and angle to waypoints, velocity, acceleration, and distances to the left and right lanes, and also information about the other agents, such as the velocity of the other agent relative to the ego agent, the velocity and acceleration of the other car, the angles to the corners of the other agent, and the distances to the corners of the other agent. Each action vector element is continuous, from -1 to 1: steering, acceleration, and braking. Negative acceleration can be used to reverse the car, and the network outputs are scaled to reflect physically realistic values. This environment also has a discretized version that we used for discrete-action methods. The reward function is a weighted sum of several terms, such as speed, reaching the destination, collision, G-force, jerk, steering angle change, acceleration change, and staying in the lane. Initially, we used 0.5, 1, 4, 1 × 10−7, 6 × 10−6, 0.0001, 0.0001, and 0.001 as weights, and then used curriculum learning to smooth the driving behavior.

Multi-Walker Environment: To keep the package balanced and move it as far to the right as possible, the walkers must coordinate their movements. A positive reward is given to each walker locally, based on the change in the package distance summed with 130 times the change in the walker's position. A walker is given a reward of -100 if it falls, and all walkers receive a reward of -100 if the package falls, while moving forward has a reward of 1. By default, an episode ends whenever a walker or the package falls, or when the walkers reach the edge of the terrain. The action space is continuous, with four values for the torques applied to each walker's legs. The observation vector for each walker is a 32-dimensional vector that contains information about nearby walkers as well as data from noisy LiDAR sensors.

Cooperative Navigation in Particle Environment: We assign each agent a landmark and calculate its local reward based on its proximity to its landmark and its collisions with other agents. As a result, agents have different reward values, not one shared reward. Each agent's observation consists of its position and velocity, as well as the relative positions of the other agents and landmarks. There are five discrete actions in the action space: up, down, left, right, and no move. After 25 time-steps, the episode ends.
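As a hedged illustration of the per-agent reward just described for cooperative navigation, the sketch below returns the negative distance to the assigned landmark minus a penalty per collision. The collision radius and penalty constants are assumptions of this sketch, not the environment's actual values.

```python
import numpy as np

def particle_reward(agent_pos, landmark_pos, other_positions,
                    collision_radius=0.15, collision_penalty=1.0):
    """Illustrative per-agent reward: proximity to the assigned landmark
    minus a penalty for each collision with another agent. Constants are
    hypothetical, not taken from the environment implementation."""
    r = -np.linalg.norm(agent_pos - landmark_pos)
    for p in other_positions:
        if np.linalg.norm(agent_pos - p) < collision_radius:
            r -= collision_penalty
    return r
```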
## B Comparison To State Of The Art Methods

To get a better idea of the performance of the state-of-the-art algorithms, the mean episode reward for different baseline algorithms in the test environments is shown in Figure 6.

![16_image_0.png](16_image_0.png)
(a)
![16_image_1.png](16_image_1.png)
(b)
![16_image_2.png](16_image_2.png)
(c)

Figure 6: Analysis of baseline algorithms proposed in Terry et al. (2020b) in three environments: (a) DeepDrive-Zero, (b) Multi-Walker, and (c) Particle environments. Each panel plots the mean episode reward.

## C Hyperparameters

Hyperparameters used in MACRPO for the three environments are described in Table 2.

Table 2: MACRPO hyperparameters for three MARL environments

| Param. | DeepDrive-Zero | Multi-Walker | Particle |
|---------------------------------------|--------|----------------|------------|
| actor hidden size | 64 | 32 | 128 |
| critic hidden size | 128 | 32 | 128 |
| batch size | 512 | 32 | 1500 |
| discount | 0.99 | 0.99 | 0.99 |
| GAE lambda | 0.94 | 0.95 | 0.95 |
| PPO clip | 0.15 | 0.3 | 0.2 |
| PPO epochs | 4 | 4 | 10 |
| max grad norm | 1.0 | 1.0 | 1.0 |
| entropy factor | 0.001 | 0.01 | 0.01 |
| learning rate | 0.0002 | 0.001 | 0.005 |
| recurrent sequence length (time-step) | 20 | 40 | 3 |
| no. of recurrent layers | 1 | 1 | 1 |

The architecture and hyperparameters used for the other baselines are taken from Terry et al. (2020b), with some fine-tuning to get better performance, and are shown in Tables 3, 4, and 5. Some hyperparameter values are constant across all RL methods for all environments; these constant values are reported in Table 6. We used the source code for all algorithms from Terry et al. (2020b), except for MADDPG, for which we used the original implementation (Lowe et al., 2017).
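To make the tables that follow concrete, here is a minimal sketch of how one row of Table 3 could be fed to the training library. It assumes the (older) RLlib/Tune API underlying the baselines of Terry et al. (2020b); the environment name `"multiwalker_v0"` is a hypothetical placeholder, while the hyperparameter values themselves come from Tables 3 and 6.

```python
# Sketch only: PPO configuration for the Multi-Walker column of Table 3,
# assuming the RLlib/Tune API version used by the baselines.
import ray
from ray import tune

ppo_config = {
    "env": "multiwalker_v0",              # hypothetical registered env name
    "sample_batch_size": 100,
    "train_batch_size": 5000,
    "sgd_minibatch_size": 500,
    "lambda": 0.95,
    "kl_coeff": 0.5,
    "entropy_coeff": 0.01,
    "num_sgd_iter": 10,
    "vf_clip_param": 10.0,
    "clip_param": 0.1,
    "vf_share_layers": True,
    "clip_rewards": True,
    "batch_mode": "truncate_episodes",
    "gamma": 0.99,                           # constant across methods (Table 6)
    "model": {"fcnet_hiddens": [400, 300]},  # MLP hidden layers (Table 6)
    "num_workers": 8,                        # worker threads (Table 6)
    "num_envs_per_worker": 8,                # envs per worker (Table 6)
}

if __name__ == "__main__":
    ray.init()
    tune.run("PPO", config=ppo_config, stop={"training_iteration": 100})
```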
Table 3: Hyperparameters for three MARL environments

| RL method | Hyperparameter | DeepDrive-Zero | Multi-Walker | Particle |
|-----------|----------------|----------------|--------------|----------|
| PPO | sample_batch_size | 100 | 100 | 100 |
| | train_batch_size | 5000 | 5000 | 5000 |
| | sgd_minibatch_size | 500 | 500 | 1000 |
| | lambda | 0.95 | 0.95 | 0.95 |
| | kl_coeff | 0.5 | 0.5 | 0.5 |
| | entropy_coeff | 0.01 | 0.01 | 0.001 |
| | num_sgd_iter | 10 | 10 | 50 |
| | vf_clip_param | 10.0 | 10.0 | 1.0 |
| | clip_param | 0.1 | 0.1 | 0.5 |
| | vf_share_layers | True | True | True |
| | clip_rewards | True | True | False |
| | batch_mode | truncate_episodes | truncate_episodes | truncate_episodes |
| IMPALA | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 512 | 512 | 512 |
| | lr_schedule | [[0, 5e-3], [2e7, 1e-12]] | [[0, 5e-3], [2e7, 1e-12]] | [[0, 5e-3], [2e7, 1e-12]] |
| | clip_rewards | True | True | False |
| A2C | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 512 | 512 | 512 |
| | lr_schedule | [[0, 7e-3], [2e7, 1e-12]] | [[0, 7e-3], [2e7, 1e-12]] | [[0, 7e-3], [2e7, 1e-12]] |
| SAC | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 512 | 512 | 512 |
| | Q_model | {activation: relu, layer_sizes: [266, 256]} | {activation: relu, layer_sizes: [266, 256]} | {activation: relu, layer_sizes: [266, 256]} |
| | optimization | {actor_lr: 0.0003, critic_lr: 0.0003, entropy_lr: 0.0003} | {actor_lr: 0.0003, critic_lr: 0.0003, entropy_lr: 0.0003} | {actor_lr: 0.0003, critic_lr: 0.0003, entropy_lr: 0.0003} |
| | clip_actions | False | False | False |
| | exploration_enabled | True | True | True |
| | no_done_at_end | True | True | True |
| | normalize_actions | False | False | False |
| | prioritized_replay | False | False | False |
| | soft_horizon | False | False | False |
| | target_entropy | auto | auto | auto |
| | tau | 0.005 | 0.005 | 0.005 |
| | n_step | 1 | 1 | 5 |
| | evaluation_interval | 1 | 1 | 1 |
| | metrics_smoothing_episodes | 5 | 5 | 5 |
| | target_network_update_freq | 1 | 1 | 1 |
| | learning_starts | 1000 | 1000 | 1000 |
| | timesteps_per_iteration | 1000 | 1000 | 1000 |
| | buffer_size | 100000 | 100000 | 100000 |

Table 4: Hyperparameters for DeepDrive-Zero, Multi-Walker, and Particle environments

| RL method | Hyperparameter | DeepDrive-Zero | Multi-Walker | Particle |
|-----------|----------------|----------------|--------------|----------|
| APEX-DQN | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 32 | 512 | 5000 |
| | learning_starts | 1000 | 1000 | 1000 |
| | buffer_size | 100000 | 100000 | 100000 |
| | dueling | True | True | True |
| | double_q | True | True | True |
| Rainbow-DQN | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 32 | 512 | 1000 |
| | learning_starts | 1000 | 1000 | 1000 |
| | buffer_size | 100000 | 100000 | 100000 |
| | n_step | 2 | 2 | 2 |
| | num_atoms | 51 | 51 | 51 |
| | v_min | 0 | 0 | 0 |
| | v_max | 1500 | 1500 | 1500 |
| | prioritized_replay | True | True | True |
| | dueling | True | True | True |
| | double_q | True | True | True |
| | parameter_noise | True | True | True |
| | batch_mode | complete_episodes | complete_episodes | complete_episodes |
| Plain DQN | sample_batch_size | 20 | 20 | 20 |
| | train_batch_size | 32 | 512 | 5000 |
| | learning_starts | 1000 | 1000 | 1000 |
| | buffer_size | 100000 | 100000 | 100000 |
| | dueling | False | False | False |
| | double_q | False | False | False |
| QMIX | buffer_size | 10000 | 3000 | 100000 |
| | gamma | 0.99 | 0.99 | 0.99 |
| | critic_lr | 0.001 | 0.0005 | 0.001 |
| | lr | 0.001 | 0.0005 | 0.001 |
| | grad_norm_clip | 10 | 10 | 10 |
| | optim_alpha | 0.99 | 0.99 | 0.99 |
| | optim_eps | 0.00001 | 0.05 | 0.00001 |
| | epsilon_finish | 0.02 | 0.05 | 0.02 |
| | epsilon_start | 1.0 | 1.0 | 1.0 |
| MADDPG | lr | 0.001 | 0.0001 | 0.01 |
| | batch_size | 64 | 512 | 500 |
| | num_envs | 1 | 64 | 1 |
| | num_cpus | 1 | 8 | 1 |
| | buffer_size | 1e5 | 1e5 | 1e5 |
| | steps_per_update | 4 | 4 | 4 |

Table 5: Hyperparameters for DeepDrive-Zero and Multi-Walker

| RL method | Hyperparameter | DeepDrive-Zero | Multi-Walker |
|-----------|----------------|----------------|--------------|
| APEX-DDPG | sample_batch_size | 20 | 20 |
| | train_batch_size | 512 | 512 |
| | lr | 0.0001 | 0.0001 |
| | beta_annealing_fraction | 1.0 | 1.0 |
| | exploration_fraction | 0.1 | 0.1 |
| | final_prioritized_replay_beta | 1.0 | 1.0 |
| | n_step | 3 | 3 |
| | prioritized_replay_alpha | 0.5 | 0.5 |
| | learning_starts | 1000 | 1000 |
| | buffer_size | 100000 | 100000 |
| | target_network_update_freq | 50000 | 50000 |
| | timesteps_per_iteration | 2500 | 25000 |
| Plain DDPG | sample_batch_size | 20 | 20 |
| | train_batch_size | 512 | 512 |
| | learning_starts | 5000 | 5000 |
| | buffer_size | 100000 | 100000 |
| | critics_hidden | [256, 256] | [256, 256] |
| TD3 | sample_batch_size | 20 | 20 |
| | train_batch_size | 512 | 512 |
| | critics_hidden | [256, 256] | [256, 256] |
| | learning_starts | 5000 | 5000 |
| | pure_exploration_steps | 5000 | 5000 |
| | buffer_size | 100000 | 100000 |

Table 6: Variables set to constant values across all RL methods for all environments

| Variable | Value set in all RL methods |
|----------|-----------------------------|
| # worker threads | 8 |
| # envs per worker | 8 |
| gamma | 0.99 |
| MLP hidden layers | [400, 300] |
Review 1:

Summary: The authors study cooperative multi-agent reinforcement learning (but with defined individual rewards), propose an algorithm combining existing components in a new manner, and perform an experimental comparison across three domains where they show their algorithm having the best performance. A strength is that it works on both continuous and discrete domains. A key point in their architecture is to feed a combined trajectory from all actors into a centralized critic to enable it to better learn about the interactions. I am aware of MARL works with individual rewards where individually embedding each trajectory and combining them has been utilized for purposes of scaling to many agents, dealing with a variable number of agents, and permutation invariance within groups, but I am not sure to what extent centralized critics for coop-MARL have utilized this technique. The original VDN underlying QMIX had comparisons with agents embedding and combining each trajectory into one LSTM and then outputting individual heads in a DQN-style approach, which also differs from the use in the paper. Another utilized technique is making the individuals other-regarding (a common topic/technique in sequential social dilemmas) and having the reward (and values) used for learning incorporate a term that is the average for the other agents. In the case of team rewards, this term would not make a difference, as they all have the same reward.

Strengths and Weaknesses: As the introduced algorithm is incremental, in that it combines existing ideas, the experimental evaluation is very important. The experimental comparison is performed on three domains. They show that they outperform the state-of-the-art, though one immediately notices the state-of-the-art losing to a simple single-agent algorithm like PPO, which is at first confusing. QMIX, usually a strong performer in coop-MARL, is terrible in this comparison. A reason for this is probably that it has been applied to action-discretized versions of continuous domains, which might lead to very bad performance. They have two continuous domains and one discrete. They say that they do not show results for the continuous-action methods in the discrete domain, though for MADDPG they do apply it with a categorical distribution instead of the Gaussian used in continuous domains; they do not do the same for DDPG and APEX-DDPG, and I am not sure whether there is a good reason for this. Further, as much recent coop-MARL work has been performed on the SCII micromanagement suite of tasks, based on which the VDN/QMIX/... class of algorithms has continued its development, it would have been good to see how the newly proposed approach compares there, where the notion of QMIX being state-of-the-art comes from. I find the comparison a bit unconvincing, as the chosen state-of-the-art coop-MARL methods perform much worse than generic single-agent algorithms on the chosen domains as they have been applied. Is there some particular weakness of all existing coop-MARL approaches that is highlighted by these domains, and does the new approach address it? Is there no stronger coop-MARL approach for these domains, or application/setup of the chosen approaches? Besides the comparison to other approaches, the authors provide a nice ablation study of their design choices.

Requested Changes: Add a SCII comparison.
Look into finding coop-MARL baselines that outperform PPO, or explain why a simple single-agent algorithm is the second-best algorithm across the domains chosen for evaluating coop-MARL approaches. If there is no reason why the same method used to evaluate MADDPG on the discrete domain cannot be applied to DDPG and APEX-DDPG, then please complete the table in this manner.

Broader Impact Concerns: No

==================================================

Review 2:

Summary: This paper proposes a version of MAPPO that uses a recurrent network for the critic. They also add a term that adds the rewards of the other agents to a particular agent's reward, inducing cooperation. These changes are very incremental. Experiments perform equally well as or outperform baselines, but do not compare to MAPPO.

Strengths and Weaknesses: The main strength of this paper is the experimental results, where the method performs better than some baselines. However, notably absent is MAPPO (https://arxiv.org/abs/2103.01955), which is not even cited. Since the method is a very small modification of MAPPO but isn't compared to MAPPO, I do not think members of the community will find this work interesting as is. Perhaps with strong comparisons against MAPPO, and principled reasons why the method performs better than MAPPO, people would find it interesting.

Requested Changes: Add experiments comparing to MAPPO. Acknowledgements should be anonymized.

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This paper deals with cooperative multi-agent RL where each agent is given its own reward signal by the environment. In order to encourage cooperation in such a setting, one prevalent approach that avoids an explicit communication channel is the paradigm of centralized training and decentralized execution (CTDE). This paper considers a CTDE actor-critic setting. In particular, it uses a PPO-based actor-critic setup where the critic is centralized and the actors are decentralized. This paper claims to propose two innovations to this setup: 1. Use of an LSTM in the critic. 2. Scaling and adding other agents' rewards to each agent in order to encourage coordination. It tests these two modifications in 3 domains, and compares to some previous CTDE algorithms, as well as some independent RL algorithms.

Strengths and Weaknesses:

### Strengths:
* The explanation of the proposed techniques is clear for the most part.
* Comparisons to related work are done well for the most part.
* Code has been shared openly (though without anonymization).
* The experimental evaluation on 3 domains with various baselines is well done.
* The ablations testing the proposed components are, for the most part, evaluating these additions appropriately.

### Weaknesses and Questions:
* I am not convinced that either of the proposed innovations has not been tried before. Details below.
* R-MAPPO [1] already uses PPO in a CTDE setting with an RNN in the critic. It also combines the individual agent information for a more informed critic. At the very least, this work should be compared to, and should be explained and contrasted with appropriately in related work.
* The idea of combining individual agent rewards with the shared team rewards is also not unique. Previous work has balanced individual objectives and shared rewards. Look at [2] for example.
* On page 5, the paper states: `To remove the dependency to the order of agents, we randomize the order by permuting all inputs to the LSTM layer according to a random order of agents at each time step`. The utility of this permutation of the order at each time step is unclear to a reader. Won't this make the problem of encouraging coordination unnecessarily hard for the centralized critic? If the ordering is changed at each time step, the critic won't be able to keep track of which observations belong to which agents. Is there some nuance that has been overlooked?
* The experiments seem to indicate that $\beta=1$, meaning that the agents are learning with a fully shared team reward, is the setting that does best. This result also seems to bear out across domains. Given this result, should one of the insights of this paper be that teams of agents should be trained with shared rewards if cooperation is paramount? How would this result contrast with that from [2]?
* Could the mixed reward scheme be tested on some of the other baselines that are compared with, to quantify the benefit of this reward sharing mechanism?
* In related work, the paper claims that MADDPG and follow-up work are tested on environments with discrete action spaces. This cannot be true, since DDPG is an algorithm specific to continuous action spaces.

### References:
[1] Yu, C., Velu, A., Vinitsky, E., Wang, Y., Bayen, A. and Wu, Y., 2021. The surprising effectiveness of PPO in cooperative, multi-agent games. arXiv preprint arXiv:2103.01955.
[2] Durugkar, I., Liebman, E. and Stone, P., 2021, January. Balancing individual preferences and shared objectives in multiagent reinforcement learning. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence (pp. 2505-2511).

Requested Changes:
* Response to the questions above.
* Clarification as to how the proposed contributions differ from the approaches in [1] and [2] from the section above. If these approaches are found to be related, a comparison with them in the experiments should also be included.
* Updating the related work section to more accurately depict MADDPG.
* In Background, the paper introduces partially observable Markov games as MPOMDPs. While this is an acceptable way to denote that each agent is in a POMDP from its own perspective, a more elegant framework to communicate the setting would be partially observable stochastic games (POSGs) [3]. This change is up to the discretion of the authors.
* In the ablations, an ablation with a centralized critic but no RNN.
* Short descriptions of the environments should be in the main document. In the appendix, more details about each environment would be helpful, such as the exact dimensionality of the state and action spaces, the number of agents trained, etc.

### References:
[3] Hansen, E.A., Bernstein, D.S. and Zilberstein, S., 2004, July. Dynamic programming for partially observable stochastic games. In AAAI (Vol. 4, pp. 709-715).

Broader Impact Concerns: Effective multi-agent RL training could help scale the deployment of intelligent agents that learn while interacting with the environment. This paper essentially considers a problem of training agents to maximize social good. The necessity that agents with super-human policies do not act solely in their own interest is of wide societal interest. These points are not discussed at all in the manuscript.
Some discussion along these lines, with perhaps some reference to works that try to incorporate social good in the training mechanisms, would be good to have. ================================================== Metareview: Recommendation: Reject Comment: As outlined in Claims and Evidence, reviewers were not sufficiently convinced by the experimental evaluation, and thus unanimously agreed that the paper is not yet ready for publication. The paper would be strengthened by bolstering reader confidence in the experimental results, in particular in explaining why (via intuition building or further experiments) previously SOTA MARL methods seem to perform worse than single-agent methods. Along these lines, reviewers suggest further hyperparameter tuning of baselines and comparison on the SCII suite of micromanagement tasks. ==================================================
# Bit-By-Bit: Investigating The Vulnerabilities Of Binary Neural Networks To Adversarial Bit Flipping

Shamik Kundu∗1, Sanjay Das∗1, Sayar Karmakar2, Arnab Raha3, Souvik Kundu3, Yiorgos Makris1, Kanad Basu1

1University of Texas at Dallas, 2University of Florida, 3Intel Corporation

∗Authors have equal contributions. This work is supported by the Semiconductor Research Corporation (SRC GRC Task: 3243.001).

## Abstract

Binary Neural Networks (BNNs), operating with ultra-low-precision weights, incur a significant reduction in storage and compute cost compared to traditional Deep Neural Networks (DNNs). However, the vulnerability of such models to various hardware attacks is yet to be fully unveiled. Towards understanding the potential threat imposed on such highly efficient models, in this paper, we explore a novel adversarial attack paradigm pertaining to BNNs. Specifically, we assume the attack is executed during the deployment phase, prior to inference, to achieve malicious intentions via manipulation of accessible network parameters. We aim to accomplish a graceless degradation in BNN accuracy to a point where the fully functional network behaves as a random output generator at best, thus subverting the confidence in the system. To this end, we propose an Outlier Gradient-based Evolutionary (OGE) attack, which learns to inject a minimal number of critical bit flips into the pre-trained binary network weights, to introduce classification errors in the inference execution. To the best of our knowledge, this is the first work that leverages the outlier gradient weights to orchestrate a hardware-based bit-flip attack that is highly effective against the typically resilient low-quantization BNNs. Exhaustive evaluations on popular image recognition datasets, including Fashion-MNIST, CIFAR10, GTSRB, and ImageNet, demonstrate that OGE can cause misclassification of up to 68.1% of test images by flipping as few as 150 binary weights, out of 10.3 million in a BNN architecture. Code is open sourced at: https://github.com/isnadnr/OGE.

## 1 Introduction

A commitment to reducing size and compute demands has led to ultra-low-precision BNNs, featuring one-bit weights and activations (-1 or +1). Introduced by Courbariaux et al. (2016), BNNs drastically improve power efficiency and inference latency by minimizing memory access and using fast bit-wise operations instead of complex matrix multiplications. These improvements come while maintaining classification accuracy comparable to high-precision Deep Neural Networks (DNNs) (Qin et al., 2020; Yuan & Agaian, 2021). BNNs are favored for Machine Learning as a Service (MLaaS) across hardware platforms (Sanyal et al., 2018). Recent advancements in deep learning have integrated low-precision DNNs into critical domains like facial recognition (Dong et al., 2019) and autonomous driving (Eykholt et al., 2018). However, this widespread adoption of DNNs has ushered in a concerning surge in adversarial attempts, exploiting network vulnerabilities through backdoor and inference attacks (Saha et al., 2020; Xie et al., 2019; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017). In contrast to these known attack vectors, there exists a relatively uncharted territory: an innovative attack paradigm centered on the manipulation of pre-trained model weights (Breier et al., 2018). Executing such an attack hinges on a rather menacing threat model, assuming that the adversary possesses unrestricted access to a device's memory, enabling direct parameter alterations within a deployed model to serve adversarial objectives. Given that the deployed DNN is stored
in a binarized format in memory, attackers can tamper with model parameters employing techniques like the Row-Hammer Attack (Agoyan et al., 2010; Kim et al., 2014b) and the Laser Beam Attack (Selmke et al., 2015), as illustrated in Figure 1. While bit-flip attacks have demonstrated their potential to wreak havoc at the system level, those targeting the control path are comparatively easier to address, as they are integral to the overall system's integrity. Conversely, datapath attacks, which surreptitiously manipulate accuracy, operate in stealth mode, posing a significant challenge in detection. An instance of misclassification during inference may be erroneously dismissed as an intrinsic network characteristic when it is, in fact, a consequence of a stealthy bit-flip attack.

Existing research has demonstrated that it is feasible to change the model weights via bit flipping to accomplish malicious intentions (Rakin et al., 2019; 2020; 2022; Bai et al., 2021). However, these techniques are either based on heuristic strategies (Rakin et al., 2019), or focused on identifying critical weights in high-precision DNNs (Bai et al., 2021). Since BNNs are emerging as promising candidates to be deployed in security-critical, high-assurance environments, exploring the vulnerability of such low-precision networks is imperative to estimate their adversarial impact. These BNNs are generally believed to be resilient against bit-flip attacks owing to their limited parameter space (Rakin et al., 2021). This resiliency may be attributed to the fact that the magnitude of error introduced by flipping a bit in a BNN is minimal. Since massive degradation is inherently circumvented in the case of BNNs, existing attack strategies that exploit the large bit-width of the weights to orchestrate the attack are not effective on the ultra-low-precision binary weights in a BNN. Such inherent tolerance can only be disrupted with a significant number of bit flips (usually over 39× more than for a DNN, as we demonstrate in this paper) (Rakin et al., 2019; 2021). However, manipulating a plethora of bits disrupts the stealthiness of the attack. As a result, attacking such extremely low-precision networks becomes particularly challenging.¹

¹As the weights in a BNN are binarized to be represented using a single bit (+1 or −1), flipping a bit corresponds to a weight flip in the network. Hence, we use both terms interchangeably in the remainder of the paper.

Figure 1: Threat model demonstrating the bit-flip attack.

Our Contributions: In this paper, we challenge the conventional wisdom of treating BNNs as inherently robust against malicious bit flips. We devise a technique to determine a diminutive set of the most vulnerable binary weights in the BNN, flipping which furnishes a significant reduction in accuracy, to a point where the network is deemed a random output generator. In this direction, we propose a novel Outlier Gradient-based Evolutionary (OGE) attack framework that systematically identifies the most vulnerable set of binary weights to be attacked in a pre-trained BNN. To this end, we reformulate the task of obtaining the most vulnerable layers in the BNN by ranking the network loss for top-k gradient bit flips in each layer. Subsequently, the outlier gradients from the vulnerable layers are isolated to be provided as input to an evolutionary algorithm. This optimizes the search space iteratively using an entropy-based objective loss function. Optimization of this loss function drives the drop in inference accuracy of the BNN.
In our proposed OGE framework, the subset selection of outlier gradient weights aids in circumventing the challenge posed while attacking a BNN, by enabling the evolutionary algorithm to obtain the critical weights from this vulnerable search space. To the best of our knowledge, this is the first work that systematically explores the vulnerability in a BNN via adversarial bit flipping, instead of a heuristic strategy. This paper specifically makes the following key contributions:

- We, for the first time, have discovered the significance of outlier gradient weights in a binary context and linked their influence to substantial reductions in classification accuracy within a Binary Neural Network (BNN) due to bit flips.
- Based on the understanding from the outlier gradients, we propose the Outlier Gradient-based Evolutionary (OGE) attack, a novel bit-flip attack framework that utilizes an evolutionary search approach, coupled with an isolation forest-based outlier detection framework, for furnishing the most critical set of binary weights in a BNN.
- We perform extensive experiments on complex image recognition datasets, including ImageNet, and evaluate the efficiency of our OGE attack on both binary convolutional neural networks and binarized vision transformers. Our results demonstrate that up to 68.1% of the test images could be misclassified to an incorrect class by flipping as few as 150 binary weights, out of 10.3 million in the BNN architecture, which is almost 84% fewer bit flips than the state-of-the-art bit-flip attack.

## 2 Background & Related Works

## 2.1 Binary Neural Networks (BNNs)

BNNs are gaining prominence as highly promising candidates for deployment in resource-constrained edge devices, primarily due to their exceptional efficiency in terms of storage and computation. Unlike full-precision networks that demand a 32-bit floating-point representation, BNNs operate with one-bit weights and neuron activations, taking the form of -1 or +1. To shed light on the inner workings of this approach, consider the binarization function employed in BinaryNet, a state-of-the-art binary convolutional neural network introduced by Courbariaux et al. (2016). This function considers the sign bit of the real-valued variable, and its representation can be articulated as follows:

$$x^{b}=\mathrm{sign}(x)=\begin{cases}+1&\text{if }x\geq0\\-1&\text{otherwise}\end{cases}$$

where $x^b$ is the binarized variable (weight or activation) and $x$ is the real-valued variable. This function, which binarizes network parameters into -1 and +1, paves the way for a significant departure from traditional compute-intensive multiplication operations. Instead, it replaces these operations with a simple one-bit XNOR operation, followed by a simple bit-counting operation for accumulation. Consequently, the Multiply-Accumulate (MAC) operation, which is highly compute-intensive in the realm of neural networks, is executed with remarkable efficiency in Binary Neural Networks (BNNs). This efficiency stands in stark contrast to the computationally demanding floating-point operations prevalent in full-precision Deep Neural Networks (DNNs).
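As a minimal illustration, and not the paper's implementation, the following sketch shows the binarization function above and the XNOR-plus-popcount dot product it enables, with weights and activations in {-1, +1} packed into integer bit masks:

```python
# Sketch of BNN binarization and the XNOR + popcount dot product.
# {-1, +1} values are packed as bits {0, 1}; the MAC reduces to bitwise ops.
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """sign(x) with the paper's convention: +1 if x >= 0, else -1."""
    return np.where(x >= 0, 1, -1)

def xnor_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed into ints.

    XNOR counts matching bits m; the {-1,+1} dot product equals 2*m - n.
    """
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ w_bits) & mask).count("1")
    return 2 * matches - n

# Check against a float dot product on random vectors.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(16))
w = binarize(rng.standard_normal(16))
pack = lambda v: sum(1 << i for i, b in enumerate(v) if b > 0)
assert xnor_dot(pack(a), pack(w), 16) == int(a @ w)
```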
In addition to these computational advantages, BNNs also deliver substantial energy savings and remarkable speedups, with reported improvements of up to 7× (Courbariaux et al., 2016). Remarkably, these performance gains do not come at the cost of classification accuracy, as demonstrated by various studies (Yuan & Agaian, 2021). These combined attributes position BNNs as an optimal solution for edge-based inference tasks, where the trifecta of efficiency, speed, and accuracy is of paramount importance. Furthermore, in addition to the computational and efficiency advantages, the inherent reduced precision in BNNs contributes to enhancing the network's resilience. To illustrate, a weight of +1 can flip to −1, or vice versa, thereby furnishing a maximum error magnitude of 2. On the other hand, in the case of a high-precision DNN, e.g., an 8-bit quantized network, a bit flip in the most significant bit (sign bit) can manipulate a specific weight value of +123 to −5, thus engendering a massive error magnitude. This resilience extends over and above that of traditional high-precision DNNs, especially when faced with attacks aimed at injecting errors by tampering with the network parameters (as we demonstrate later in this paper; see also the short numerical illustration below).

## 2.2 Bit-Flip Attacks

Memory bit-flip attacks, often executed through techniques like the Row-Hammer Attack, have garnered recognition as an established threat model. This threat has been extensively investigated and documented in the existing literature, as exemplified by Kim et al. (2014a); Razavi et al. (2016). Given that the pre-trained network parameters of a Deep Neural Network (DNN) reside in memory before deployment at the edge, the occurrence of such bit-flip attacks raises legitimate concerns and presents substantial challenges concerning the security and reliability of DNNs. This threat becomes even more pressing in security-critical edge applications, where implementing sophisticated defense mechanisms becomes a formidable task due to resource limitations. Within the existing body of literature, researchers have explored various methods for injecting faults into neural networks, targeting elements such as biases, weights, and activation functions, with the ultimate aim of inducing both targeted and untargeted misclassifications during the inference process. It is worth noting that these prior techniques were primarily designed to attack full-precision Deep Neural Network (DNN) models. At the edge, however, many DNN implementations operate in quantized precision, as emphasized by Wu et al. (2016), rendering them inherently robust to variations in network parameters. In response to this challenge, efficient algorithms have been developed and put into practice to identify the most vulnerable bits within a quantized DNN, as demonstrated in Rakin et al. (2019; 2022); Bai et al. (2021). The Bit Flip Attack (BFA) utilizes a heuristic progressive search strategy, wherein network layers are iteratively examined to pinpoint the layer that contributes the most to network loss for a certain number of bit flips during each iteration, as elucidated in Rakin et al. (2019). Subsequently, the selected bits are flipped within the most influential layer, and the model is updated, thereby undermining network accuracy. Targeted-BFA (TBFA) represents an extension of the BFA technique, aiming to manipulate data samples with the intent of targeting specific classes, employing the same progressive search approach, as documented in Rakin et al. (2022); Bai et al. (2021).
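To make the error-magnitude contrast from Section 2.1 concrete, the following short sketch, our illustration assuming an int8 two's-complement encoding, flips the sign bit of an 8-bit weight and negates a binary weight:

```python
# Flipping one bit in an int8 weight vs. in a binary weight.
import numpy as np

w8 = np.int8(123)                      # 0b01111011
flipped = np.int8(w8 ^ np.int8(-128))  # flip bit 7, the sign bit
print(w8, "->", flipped)               # 123 -> -5, error magnitude 128

w_bin = 1                              # BNN weight in {-1, +1}
print(w_bin, "->", -w_bin)             # 1 -> -1, error magnitude 2
```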
It is important to emphasize that the bit-flip attacks discussed earlier primarily employ heuristic strategies and are tailored to high-precision Deep Neural Networks (DNNs). These attacks rely on exploiting computations that involve network parameters with high-precision values to achieve their intended goals, such as manipulating the network's behavior and causing misclassifications. However, these attacks are not effective when directed at low-precision binary network weights, which operate using only one-bit values, as opposed to the multi-bit values present in high-precision DNNs. The unique characteristics of BNNs, including their limited precision and binary parameters, make them less susceptible to the types of attacks that target high-precision DNNs. While some recent research has focused on mitigating adversarial noise at the input of BNNs by leveraging characteristic hardware attributes, the impact of bit-flip-based attacks has largely been ignored (Bhattacharjee & Panda, 2020; Kim et al., 2022).

## 3 Proposed OGE Attack Framework

In this section, we begin by defining the threat model, which outlines the standard assumptions regarding the capabilities of the adversary. We then propose a novel bit-flip attack framework, pertaining to BNNs, that furnishes the most critical binary weights in the pre-trained network, which, when flipped, cause significant degradation in classification accuracy.

## 3.1 Threat Model

Within the realm of Machine Learning as a Service (MLaaS), individuals upload their DNN models to a platform that may carry inherent security risks, in exchange for increased computational resources. In this setting, DNN inferences and other applications, which may be under the control of potential attackers, often share server hardware components, such as the last-level cache and main memory. While attackers lack explicit permission to read or write data within the user's memory, existing tools empower them to exploit side channels, thereby gaining access to sensitive information. This discussion categorizes threats into three levels based on the extent of knowledge that attackers can extract from the victim.

## 3.1.1 Full Knowledge Attack

A full-knowledge attack represents the most critical scenario. In this scenario, attackers can harness powerful tools, such as Cache Side-Channel Attacks targeting the Last-Level Cache (LLC), as detailed in Yarom & Falkner (2014); Kayaalp et al. (2016), to retrieve memory data stored in the cache. Although the throughput of such a channel is limited (less than 1 MB/s, as noted in Hong et al. (2018)), it poses a significant risk to the model, especially if sensitive data within the memory is exposed. In this context, we assume that the model's weights may become vulnerable to the attackers. Moreover, attackers are well-informed about the proposed defense methods, like randomized rotation, and have the capability to enhance their attack strategies if any leakage persists. Given the impracticality of a full-knowledge attack due to the limited throughput of this channel, it remains a severe but technically challenging threat.

## 3.1.2 White Box Attack

The attackers lack the ability to directly access data within the memory, but they can make educated estimations regarding the locations of the most susceptible bits, allowing them to potentially execute Rowhammer attacks.
There are two primary reasons for this: 1) The majority of commercial models and datasets are open source, and model users often download pre-trained models, customizing them through transfer learning (Marcelino, 2018). These customized models inherit vulnerabilities from the open-source models, effectively providing attackers with a source of information. 2) Attackers can employ hardware tools to monitor memory access patterns, as documented in Hu et al. (2020); Yan et al. (2020), enabling them to accurately deduce the model's architecture. Armed with this knowledge, attackers can locally train their own models, which would share the same vulnerabilities as the user's model. This white-box attack approach is both practical and perilous, and it serves as the baseline attack model for the rest of this paper.

## 3.1.3 Black Box Attack

When attackers are unable to acquire any knowledge about the model, an alternative approach known as the black-box adversarial attack (Cheng et al., 2018) becomes a viable option. In the black-box attack, the objective is to compromise the model without any prior understanding of its architecture or weight distribution. However, this method demands a substantial number of attempts to be effective. One effective attack within this category harnesses the Rowhammer vulnerability and is termed the Random High-Bit-Flip Attack. In this attack, the attacker randomly selects weights within the DNN model and flips the Most Significant Bit (MSB), or sign bit. Since these attacks are inherently random in nature, they necessitate a substantial number of bit flips to achieve a notable decrease in the classification accuracy of the network, as we demonstrate later in this paper.

Following this description of our threat model, we introduce our proposed Outlier Gradient-based Evolutionary (OGE) attack. This attack is characterized by a systematic weight selection process, which unfolds through a three-step approach. The details of each step, which collectively constitute the OGE attack, are elaborated in the subsequent sections.

## 3.2 Gradient-Based Layer Selection

A traditional BNN architecture is composed of several convolutional and fully connected layers, consisting of millions of parameters. An exhaustive search over this plethora of binary weights would be computationally impractical from an adversarial perspective. To this end, we reformulate the task of obtaining the most vulnerable layers in the BNN by ranking the network loss for top-k gradient bit flips in each layer. In order to obtain the gradients with respect to the latent weights for each mini-batch in the test dataset, we adopt the cost function as a standard categorical cross-entropy loss for $c$ classes, which can be represented as:

$$C=C(Y,g(X,w))=-\frac{1}{n}\sum_{i=1}^{n}\sum_{r=1}^{c}I(Y_{i}=r)\log(p_{ir})=-\frac{1}{n}\sum_{i=1}^{n}\sum_{r=1}^{c}y_{i}^{r}\log\hat{y}_{i}^{r}\tag{1}$$

where $y_i^r$ is the one-hot coded label and $\hat{y}_i^r = p_{ir}$ is the softmax probability for the $i$-th observation to fall in the $r$-th class, depending on the feature values $X$ and weights $w$. In particular, for classic multinomial logit regression with $c$ classes, $w$ is replaced by the coefficient vector and $p_{ir}$ takes the following form:

$$\hat{y}_{i}^{r}=p_{ir}=\mathbb{P}(Y_{i}=r)=\frac{\exp(\beta_{r}^{\prime}X_{i})}{\sum_{s=1}^{c}\exp(\beta_{s}^{\prime}X_{i})}\tag{2}$$

Here, $X_i$ is the feature vector.
However, it has been a common practice to replace the simplistic multinomial logit structure $\beta_s^{\prime} X_i$ on the feature vectors by a suitable neural network structure. In this paper, we consider the same with $g(X, w)$, assuming a DNN structure. This is further extended to accommodate binary weights pertaining to BNNs. First, we demonstrate how to compute the gradients for a proxy real-valued network structure. The corresponding classification probability takes the following form:

$$\hat{y}_{i}^{r}=p_{ir}=\mathbb{P}(Y_{i}=r)=\frac{\exp\{(NN_{W,b}(X_{i}))_{r}\}}{\sum_{s=1}^{c}\exp\{(NN_{W,b}(X_{i}))_{s}\}}\tag{3}$$

where $(NN_{W,b}(X))_r$ refers to the $r$-th co-ordinate of

$$NN_{W,b}(X)=W_{1}\sigma_{1}(W_{2}\cdots\sigma_{d-1}(W_{d}X+b_{d})\cdots)+b_{1}\tag{4}$$

Here $NN_{W,b}(X)$ is a $d$-layer neural network with $c$-dimensional output, and $\sigma_i$ are the corresponding activations for the connection between layer $i$ and $i+1$. To calculate the gradient of $C$ in Equation 1 for any entry of the $d$ layers of weight matrices, let us consider a particular weight $w$. Let us fix $i$ and consider the $n=1$ case for the cost function $C$. We write $\hat{y}_{i}^{r} = p_{ir} = \exp(z_r)/\sum_s \exp(z_s)$, where $z_r = (NN_{W,b}(X))_r$. Then,

$$\frac{\delta C}{\delta z_{r}}=-\frac{y_{r}}{\hat{y}_{r}}\frac{\delta\hat{y}_{r}}{\delta z_{r}}+\sum_{s\neq r}\frac{-y_{s}}{\hat{y}_{s}}\frac{\delta\hat{y}_{s}}{\delta z_{r}}=-\frac{y_{r}}{\hat{y}_{r}}(\hat{y}_{r}-\hat{y}_{r}^{2})+\sum_{s\neq r}\frac{-y_{s}}{\hat{y}_{s}}(-\hat{y}_{r}\hat{y}_{s})=\hat{y}_{r}\sum_{s}y_{s}-y_{r}=\hat{y}_{r}-y_{r}\tag{5}$$

Thus, by the chain rule, we can easily compute $\frac{\delta C}{\delta w}$ as follows:

$$\frac{\delta C}{\delta w}=\sum_{t=1}^{c}\frac{\delta C}{\delta z_{t}}\frac{\delta z_{t}}{\delta w}\tag{6}$$

Here, we omit the detail of computing $\frac{\delta z_t}{\delta w}$, as it heavily depends on the network structure involving the specific weight $w$. Note that, for a BNN, the gradient computation is difficult, since the sign function is not even continuous, let alone differentiable. However, we adopt a similar strategy, as delineated in Helwegen et al. (2019), to argue that the computation for the real-valued proxy network suffices for all practical purposes. Upon training the model, the gradients of all the trainable convolutional and dense layers are evaluated on the test data. The corresponding weights in each layer are ranked from highest to lowest, based on their absolute gradient values. To understand the rationale behind this ranking keeping the layer fixed, consider a specific example of a 3-depth neural net. In light of Equation 6, it suffices to compute $\frac{\delta z_t}{\delta w}$. We rewrite $z$ as:

$$z=W_{1}v_{1}+b_{1},\quad v_{1}=\sigma_{1}(v_{2}),\quad v_{2}=W_{2}v_{3}+b_{2}\tag{7}$$

Now choose $w$ to be one of the entries of $W_2$, say $w_{2,1,1}$. The entries of the $\frac{\delta z}{\delta w_{2,1,1}}$ vector look as follows:

$$\frac{\delta z_{t}}{\delta w_{2,1,1}}=\frac{\delta z_{t}}{\delta W_{1}}\frac{\delta\sigma_{1}(v_{2})}{\delta v_{2}}\frac{\delta v_{2}}{\delta w_{2,1,1}}\tag{8}$$

This calculation clearly shows that for all entries of the $W_2$ matrix, the other two factors remain the same, whereas if we choose a weight from a different layer, then the number of factors varies significantly, making two different layers seemingly incomparable.
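Because the sign convention in Equation 5 is easy to get wrong, the following short check, our addition rather than the paper's code, verifies $\delta C/\delta z_r = \hat{y}_r - y_r$ numerically by finite differences:

```python
# Finite-difference verification of Equation 5 for softmax cross-entropy.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, y):
    return -np.sum(y * np.log(softmax(z)))

rng = np.random.default_rng(0)
z = rng.standard_normal(5)
y = np.eye(5)[2]                      # one-hot label for class 2

analytic = softmax(z) - y             # Equation 5: y_hat - y
eps = 1e-6
numeric = np.array([
    (cross_entropy(z + eps * np.eye(5)[r], y) - cross_entropy(z, y)) / eps
    for r in range(5)
])
assert np.allclose(analytic, numeric, atol=1e-4)
```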
Thus, for our loss/cost gradient computations in Equations 5 and 6, we select the weights having the highest absolute gradient values to flip. Therefore, in this step, the top-k (k is a pre-determined integer value) gradient weight bits of each layer are flipped simultaneously, while keeping all the other layers unaffected. The corresponding effect on the network is evaluated based on the loss (Equation 1). Similarly, we calculate these losses for all the other layers independently. Next, the layers are ranked to detect those that contribute the most to the network loss. This process is executed with different k-values, and the results are analyzed to detect the most vulnerable layers, as demonstrated later in Section 4.2.1.

## 3.3 Outlier Weight-Subset Selection

Once the vulnerable layers are detected, a weight subset is selected from those layers to be provided as input to the next step in the evolutionary framework, as described in Section 3.4. To this end, isolation forest, an outlier detection technique for multivariate data (Liu et al., 2008), is applied to the weights of a layer to find the weights with outlying absolute gradients. In this step, we iterate the isolation forest applied on the computed $\delta C/\delta w$ at the final $\hat{w}$. For every mini-batch $b_k$ and bit $i$, we compute $\delta C_k/\delta w|_{w=\hat{w}'_i}$ for $i \in \{1, \cdots, n\}$, $k \in \{1, \cdots, B\}$, where $\hat{w}'_i$ denotes flipping the $i$-th bit of the final gradient vector $\hat{w} \in \mathbb{R}^n$, and $\delta C_k/\delta w$ denotes the gradient computed for the $k$-th mini-batch. Here, $C$ from Equation 1 is modified by restricting the index to the specific mini-batch. For any bit $x$ and batch $b$, we compute the outlier score $s_{x,b}$ as follows: $s_{x,b} = 2^{-\frac{E(h_b(x))}{c(n)}}$, where $E(h_b(x))$ is the average tree height across the isolation forest for $x$ when the data input is restricted to the batch $b$, and $c(n) = 2H_{n-1} - 2(n-1)/n$, with $H_n$ denoting the $n$-th harmonic number. For each fixed bit $x$, we define the aggregated outlier-based frequency score as:

$$sf_{x}=\#\{b : s_{x,b}\geq s_{y,b}\ \text{for all}\ y\neq x\}\tag{9}$$

The scores $sf_1, \cdots, sf_n$ are sorted in descending order as:

$$sf_{(n)}>sf_{(n-1)}>\cdots>sf_{(1)}\tag{10}$$

We denote their corresponding bits as $v_n, v_{n-1}, \cdots, v_1$, where this nomenclature means that $v_n$ is the most vulnerable bit, $v_{n-1}$ is the second most, and so on. We obtain a specific number of outliers by modifying the contamination parameter of the algorithm. The value of the parameter depends on the size of the weight subset. For instance, if we want to select $n_0$ weights from the total number of weights $N$, then the contamination parameter provided is $k_0 = n_0/N$, and we select $\{v_n, \cdots, v_{n-n_0+1}\}$.

Algorithm 1 Evolutionary Optimization
Input: Generated outlier locations $W = [w_1, \cdots, w_{n_0}]$, MaxGen, P, Q, R, S
Output: ResultSolution
1: SolutionList = EmptyList
2: for i in 1....Q do
3:  Solution = choose P weights randomly from W
4:  Append Solution to SolutionList
5: end for
6: while MaxGen ≠ 0 do
7:  MaxGen = MaxGen − 1
8:  LossList = EmptyList
9:  for i in 1....Q do
10:   Loss = CalculateLoss(SolutionList[i])
11:   Append Loss to LossList
12:  end for
13:  Sort SolutionList in descending order using LossList
14:  SolutionList = SolutionList[:R]
15:  for i in 1....S do
16:   a1 = RandomChoice(SolutionList)
17:   a2 = RandomChoice(SolutionList)
18:   W′ = Set(a1 + a2)
19:   Solution = choose P items randomly from W′
20:   Append Solution to SolutionList
21:  end for
22:  Q = R + S
23: end while
24: ResultSolution = SolutionList[0]
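A minimal end-to-end sketch of the two steps above, our reconstruction rather than the authors' released code, appears below: isolation-forest outlier selection on the absolute gradients (Section 3.3) followed by the evolutionary search of Algorithm 1. The callable `loss_after_flipping`, which should return the network loss with a given set of weight bits flipped, is a hypothetical stand-in, and the default parameter values are illustrative.

```python
# Sketch of OGE's outlier selection + evolutionary search, under the
# assumptions stated in the lead-in.
import random
import numpy as np
from sklearn.ensemble import IsolationForest

def select_outlier_bits(abs_grads: np.ndarray, n0: int) -> np.ndarray:
    """Return indices of the n0 weights with outlying |gradient| values."""
    N = len(abs_grads)
    forest = IsolationForest(contamination=n0 / N, random_state=0)
    labels = forest.fit_predict(abs_grads.reshape(-1, 1))  # -1 marks outliers
    return np.flatnonzero(labels == -1)

def evolutionary_search(W, loss_after_flipping, MaxGen=80, P=200, Q=50,
                        R=10, S=40):
    """Algorithm 1: evolve P-sized flip sets drawn from the outlier set W."""
    solutions = [random.sample(list(W), P) for _ in range(Q)]
    for _ in range(MaxGen):
        # Rank solutions by induced loss (higher loss = stronger attack).
        solutions.sort(key=loss_after_flipping, reverse=True)
        parents = solutions[:R]
        children = []
        for _ in range(S):
            a1, a2 = random.choice(parents), random.choice(parents)
            pool = list(set(a1) | set(a2))       # pooled parent indices (W')
            children.append(random.sample(pool, P))
        solutions = parents + children           # Q = R + S for next round
    return max(solutions, key=loss_after_flipping)

# Example wiring (hypothetical gradient vector and loss oracle):
# idx = select_outlier_bits(np.abs(grads), n0=1000)
# best_flip_set = evolutionary_search(idx, loss_after_flipping)
```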
Once these vulnerable weights are selected, we proceed to the following step, where this subset of weights is provided as input to the evolutionary algorithm.

## 3.4 Weight Search Space Optimization

Inspired by Darwinian natural evolution, various algorithms have been designed to solve constrained and unconstrained optimization problems with an adaptive heuristic approach based on natural selection (Vikhar, 2016; Slowik & Kwasnicka, 2020). The randomness associated with generating new solutions, coupled with their qualitative ranking and the ability to obtain an acceptable solution within a limited search space, makes this evolutionary approach an algorithm of choice over other conventional optimization techniques (Kachitvichyanukul, 2012). In this paper, we design an evolutionary algorithm for optimizing a search space to contain the most vulnerable binary weight indices. The outlier set detected in Section 3.3 is taken as the solution space for the optimization. We consider the cross-entropy loss in Equation 1 as the fitness function and evaluate it for each generated solution by flipping all its binary weight bits simultaneously. The proposed evolutionary optimization approach is outlined in Algorithm 1. The subset of selected binary weight indices (W) and the variables MaxGen, P, Q, R, and S (defined as follows) are provided as inputs to the algorithm. First, the evolutionary algorithm generates Q solutions by randomly choosing P weight indices from W for each solution *(lines 1-4)*. Following this, the network loss (fitness value) for each generated solution is evaluated and collected in a list (LossList) *(lines 8-11)*. Specifically, CalculateLoss is computed by evaluating Equation 1 on the network NN(W, b) with the weight bits in the given solution flipped. Thereafter, the solutions are ranked based on their fitness values, and the best R solutions are kept for the next iteration *(lines 13-14)*. These R solutions are then leveraged to generate S new solutions by randomly selecting two solutions (a1, a2) from the set of R solutions. The weight indices contained in these solutions are pooled together (W′), from which P items are randomly selected to generate the new solution *(lines 15-21)*. Here, we adopt a traditional two-parent scheme in accordance with existing research (Lee, 2002; Sivanandam & Deepa, 2008). Subsequently, the total of R + S solutions is used for the next iteration *(line 22)*. The algorithm execution stops after MaxGen iterations, to obtain a solution that contains the most critical binary weights in the network *(lines 6-7, 23)*.

## 4 Experimental Results

## 4.1 Experimental Setup

To assess the effectiveness of our proposed OGE attack framework, we employ the widely used Binary Neural Network architecture, BinaryNet (Courbariaux et al., 2016). This network has been trained on three distinct image recognition datasets: Fashion-MNIST, CIFAR10, and GTSRB. The initial classification accuracies obtained upon training are as follows: Fashion-MNIST yields a baseline accuracy of 92%, CIFAR10 achieves 80%, and GTSRB reaches an impressive 98%. To gauge the efficiency of the OGE attack scheme, we employ the following key metrics: the number of bit flips (Nflip) and the Relative Accuracy Drop (RAD). The RAD quantifies the drop in classification accuracy of the network after it has been subjected to Nflip bit flips.
This metric is evaluated in relation to the baseline classification accuracy of the specific network-dataset configuration under consideration. Our objective is to identify the minimum number of weight flips (Nflip) that maximally degrades the classification accuracy of the BinaryNet architecture, thereby revealing the vulnerability of the model to this form of attack.

## 4.2 Efficiency Of The Proposed OGE Attack

## 4.2.1 Gradient-Based Layer Selection

In this experiment, we vary the parameter k in selecting the top-k gradient values from each layer of the BinaryNet architecture. Subsequently, by flipping the corresponding binary weights, we observe the network loss to obtain the most vulnerable layers for each network-dataset configuration. The results are outlined in Figure 2. As demonstrated in Figure 2a, the top-200 weight flips furnish cumulative network losses of 0.79 and 0.67 in layers conv1 (first convolutional layer) and fc3 (last fully connected layer), respectively, on the Fashion-MNIST dataset. Except for these two, all other layers in the network furnish a loss below 0.5, which is denoted in the figure as *Other Layers*. The network loss increases with k; BinaryNet furnishes losses of 2.31 and 1.22 for layers conv1 and fc3, respectively, for the top-800 weight bit flips. An identical trend is observed for the other two datasets, as shown in Figures 2b and 2c. Since the layers conv1 and fc3 exhibit the maximum network loss compared to all other layers in the network, we consider the binary weights from these two layers of the network to obtain the critical weight bits, flipping which accomplishes the adversarial intent.

![8_image_0.png](8_image_0.png)
![8_image_1.png](8_image_1.png)

Figure 2: Variation of network loss for increasing top-k bit flips across different layers in BinaryNet.

![8_image_2.png](8_image_2.png)

Figure 3: Variation in accuracy w.r.t. iterative generations for different numbers of outlier gradient weights.

## 4.2.2 Weight-Subset Selection Using Isolation Forest

In this experiment, we vary the subset of binary weights (n) to be selected from the vulnerable layers of the network, obtained in Section 4.2.1. The contamination parameter that determines the cardinality of the subset is varied to obtain the weights with outlier gradients. Correspondingly, this binary weight subset is utilized to analyze the efficiency of the evolutionary search algorithm, by providing this subset as its input solution space. Figure 3a demonstrates the solution optimization space that furnishes the increase in network loss and the reduction in accuracy with an increasing number of outlier gradient weights on the Fashion-MNIST dataset. After 100 iterations, the network exhibits a RAD of 55.32% with 1000 outliers and 300 bit flips. Under identical bit flips, the RAD reaches 50.56% and 47.34% for 2000 and 3000 outliers, respectively. We observe similar trends for the CIFAR10 and GTSRB datasets as well, as represented in Figures 3b and 3c, respectively. The solution with 1000 outliers saturates the search optimization algorithm faster than the remaining outlier sets, and furnishes a higher reduction in accuracy compared to the rest. Hence, we choose this configuration to be provided as input to the evolutionary algorithm in the subsequent step.

## 4.2.3 Weight Search Space Optimization

In this section, we first showcase the impact of tuning the evolutionary algorithm, highlighting its capability to optimize the attack strategy.
Subsequently, we present the effectiveness of the attack across a range of scenarios involving different numbers of bit flips within the Binary Neural Network (BNN) architecture, illustrating its capacity to induce significant reductions in classification accuracy.

Tuning the Evolutionary Algorithm: In this experiment, we vary the maximum number of iterations (MaxGen) used to execute the evolutionary algorithm, as discussed in Section 3.4. Setting an appropriate MaxGen factor is highly crucial, since early termination of the evolutionary algorithm will result in a non-optimal solution. Figure 4a demonstrates the variation in accuracy drop with the increase in the number of iterated generations for all three datasets. As outlined in the figure, on the Fashion-MNIST dataset, a solution containing 200 weight indices from the conv1 and fc3 layers, when flipped, furnishes a RAD of 36.85% after 10 iterations, which further increases to 50.9% after 80 iterations. Similarly, for the CIFAR10 and GTSRB datasets, we observe an analogous trend in the accuracy drop. The reduction in accuracy saturates beyond 80 iterations on all three datasets. This motivates us to utilize MaxGen = 80 iterations for the evolutionary algorithm.

Accuracy Drop with Varying Nflip: In this experiment, we vary the number of binary weights (Nflip) to be flipped, as obtained from our proposed OGE attack. As shown in Figure 4b, the accuracy relatively drops by 31.5% for 100 bit flips on the Fashion-MNIST dataset, and the drop further increases to 64.56% on flipping 400 weights, out of 10.3 million parameters in the model. CIFAR10 follows a similar accuracy reduction trend, reaching a 50.625% RAD for 400 bit flips. Similarly, with GTSRB, on flipping 400 binary weights, BinaryNet furnishes a minimal classification accuracy of only 0.625%, with a RAD of 99.4%, at which point the model can be termed a random output generator at best. Therefore, our proposed OGE framework is able to subvert the confidence in the system, thereby demonstrating the efficiency of the adversarial bit-flip attack.

![9_image_0.png](9_image_0.png)

Figure 4: Relative accuracy degradation for a varying number of (a) iterations of the evolutionary algorithm and (b) weight bit flips in the OGE attack.

## 4.3 Comparison With State-Of-The-Art

In this section, we compare the OGE attack with existing attack strategies, as captured in Table 1. We first compare our strategy with a random bit-flip attack that randomly selects weights from the BinaryNet architecture. The corresponding reduction in accuracy is observed for all three datasets. In the case of Fashion-MNIST, as many as 50000 random bit flips furnish a minuscule 0.2% degradation in accuracy, while our proposed OGE approach, with only 200 bit flips, renders a relative accuracy drop of 54.3%. Identical trends are observed for the other two datasets as well. Therefore, random bit flips are ineffective in reducing the confidence in BNNs, which can be attributed to their inherent robustness arising from the limited parameter space.

Table 1: Comparing the efficiency of the proposed OGE attack on BNNs against existing adversarial bit flip-based attacks.
| Dataset | O-Acc (%) | Method | Affected vulnerable layers | Nflip | PA-Acc (%) | RAD (%) |
|---------|-----------|--------|----------------------------|-------|------------|---------|
| Fashion-MNIST | 92.07 | Random attack | All layers | 50000 | 91.9 | 0.2 |
| | | BFA (Rakin et al., 2019) | conv1, fc3, conv3 | 500 | 45.2 | 50.8 |
| | | OGE | conv1, fc3 | 200 | 42.1 | 54.3 |
| CIFAR10 | 80.3 | Random attack | All layers | 50000 | 80.05 | 0.3 |
| | | BFA (Rakin et al., 2019) | conv1, fc3, conv2, fc1 | 740 | 38.1 | 52.5 |
| | | OGE | conv1, fc3 | 400 | 39.5 | 50.6 |
| GTSRB | 98.7 | Random attack | All layers | 50000 | 98.5 | 0.2 |
| | | BFA (Rakin et al., 2019) | conv1, fc3, fc1, conv3 | 950 | 42.8 | 56.6 |
| | | OGE | conv1, fc3 | 150 | 31.48 | 68.1 |

The state-of-the-art technique in the domain of untargeted adversarial bit flipping in DNNs is the Bit Flip Attack (BFA) (Rakin et al., 2019). BFA iteratively searches for the most vulnerable layer in the network in a progressive, heuristic manner using gradient-based bit flips. Next, on flipping the weight bits in the chosen layer, the model is updated with the modified weights and used for the next iteration. Thus, the network loss increases with each iteration, which in turn degrades the overall classification accuracy of the network. We adopted this BFA strategy to attack BNNs and compare its efficacy with our proposed OGE attack. While BFA requires approximately 500 binary weight flips to obtain a relative accuracy drop (RAD) of 50.8% in the case of Fashion-MNIST, the OGE approach achieves a higher accuracy reduction with only 200 bit flips. We observe an identical trend for the CIFAR10 dataset as well. The efficiency of our proposed OGE attack is best observed in the case of the GTSRB dataset. While BFA requires 950 bit flips to furnish a 56.6% relative degradation in accuracy, our OGE attack engenders a much higher RAD of 68.1% with a minimal 150 bit flips in the BinaryNet architecture. Hence, with almost 84% fewer bit flips, the OGE attack achieves even higher degradation in network accuracy compared to the state-of-the-art, which demonstrates the efficiency of our proposed attack. The proposed OGE framework also furnishes an attack runtime advantage of 3.56× on average over this state-of-the-art BFA method.

## 4.4 Alternate BNN Architectures On ImageNet

In order to evaluate the prowess of our proposed OGE attack framework, we have performed experiments on recent state-of-the-art BNN architectures (as demonstrated in Figure 5) that are trained on the ImageNet dataset (Deng et al., 2009). The corresponding baseline classification accuracies are 73.6% for QuickNet, 58.7% for XNORNet, 65.4% for BiRealNet, 61.38% for LQNet (ResNet-18), 62.73% for BinaryResNet18, and 60.6% for BinaryAlexNet. QuickNet demonstrates the highest vulnerability among all the networks, owing to inherent model attributes arising from depthwise separable convolutions and parametric ReLU operations. XNORNet, on the contrary, exhibits the highest resilience against the OGE bit-flip attack. This can be attributed to the approximate convolutions using binary operators, which mask the impact of the attack on the ensuing classification accuracy of the network. The proposed attack, when compared against the BFA and random bit-flip attacks, furnishes the highest RAD over the other two attack strategies, thereby demonstrating the efficiency of our proposed approach.
![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

![10_image_2.png](10_image_2.png)

Figure 5: Efficiency of the OGE attack on state-of-the-art BNN architectures trained on ImageNet.

## 4.5 Evaluations On Novel Architectures

To evaluate the effectiveness of OGE on novel architectures like transformers used in natural language processing (NLP) tasks such as sequence classification, we implement the framework on a binary-precision BERT architecture, termed BinaryBert (Bai et al., 2020). We analyze BinaryBert's performance on the MRPC and MNLI datasets (Wang et al., 2018). We assess performance using the relative accuracy drop (RAD) and the relative drop in F1 score, both of which indicate the model's performance degradation: higher drops signify lower model performance. Our attack scheme significantly reduces model accuracy, achieving a 50.8% RAD in BinaryBert on the MNLI dataset with just 1500 bit-flips (0.000018% of total parameters). Additionally, it consistently yields over a 90% relative drop in F1 score, with bit-flips totaling less than 0.000014% of total parameters. This highlights the effectiveness of OGE.

## 4.6 Effectiveness Against State-Of-The-Art (SOTA) Defences

We have identified SOTA defenses against bit-flip attacks on deep neural networks (DNNs) (He et al., 2020; Özdenizci & Legenstein, 2022; Wang et al., 2023; Chitsaz et al., 2023). These approaches aim to safeguard full-precision DNNs by either quantizing the model (He et al., 2020; Chitsaz et al., 2023), encrypting the output coding scheme (Özdenizci & Legenstein, 2022), or modifying the model architecture (Wang et al., 2023). However, in the subsequent discussion, we elaborate on why these defenses prove ineffective against our proposed OGE attack framework designed specifically for targeting binary neural networks (BNNs).

The approach outlined in He et al. (2020) suggests using binarization-aware training to convert DNNs into binary form as a defense against bit-flip attacks. However, the OGE framework has shown that binary neural networks can still be effectively attacked with minimal bit-flips, rendering this defense strategy insufficient against our proposed attack. On the other hand, the defense method proposed in Özdenizci & Legenstein (2022) employs an output code-matching strategy to make targeted class attacks more challenging. Yet, the OGE framework takes a broader approach, targeting all classes, making this defense tactic ineffective against our OGE attack. Similarly, the Aegis defense strategy advocated in Wang et al. (2023) proposes a multi-exit approach using multiple internal classifiers. However, if the adversary targets the initial layers before any internal classifier, erroneous activations occur across all classifiers, rendering the Aegis defense ineffective against our OGE strategy. Furthermore, the defense mechanism outlined in Chitsaz et al. (2023) suggests using learned quantization of DNN layers to enhance resilience during inference. Despite this, our study reveals that even fully binarized models remain vulnerable to bit-flip attacks. While a defense strategy based on multiple quantization levels does not directly apply to our investigation, it is plausible to consider, given that targeting a BNN represents the most extreme quantization scenario.

## 4.7 Comparison With Recent Bit-Flip Attack Baselines

The research outlined in Chen et al. (2021); Bai et al. (2022) illustrates Trojan attacks on DNNs employing bit-flip methodologies.
These attacks pinpoint critical bits within a DNN which, when attacked with meticulously crafted Trojan patterns, lead to misclassifications into alternative target classes, serving adversarial intentions. In the search for these critical bits within the model, these works rely on the magnitude of high-precision (float16, float32, int8, etc.) weights, which certain optimization techniques exploit to identify the crucial bits. Conversely, BNN weights are restricted solely to +1 or -1, each represented by a single bit. This renders the optimization techniques utilized in previous studies ineffective in converging and uncovering critical bits within BNNs. Additionally, the attacks discussed in Chen et al. (2021); Bai et al. (2022) are targeted attacks aimed at inputs of specific classes or patterns, which causes the overall drop in model accuracy to be minimal. In contrast, our OGE strategy targets all classes indiscriminately, resulting in substantial drops in model accuracy and thus demonstrating the efficacy of the attack.

## 5 Ablation Studies

In order to evaluate the efficiency of our OGE attack framework, we performed an extensive ablation study by varying the different design parameters of our algorithm. As summarized in this section, the OGE attack exhibits consistent trends across the various ablation studies on all three datasets - Fashion-MNIST, CIFAR10, and GTSRB. The corresponding results are demonstrated in Figures 6, 7, and 8, respectively.

Ablation study for layer selection: In this experiment, we performed an ablation study for layer selection, as demonstrated in Figure 7a for the CIFAR10 dataset. We observed that executing the OGE attack on selected layers furnishes a much higher degradation in classification accuracy compared to a scenario where the attack is executed without layer selection. Identical results are obtained for the other two datasets as well. This demonstrates the importance of layer selection in orchestrating an effective bit flip attack on the BNN.

Ablation study on outlier detection: In this experiment, we varied the outlier detection approach used to select a subset of the binary weights, considering four different techniques: (1) random weight selection, (2) weights with top-k gradients, (3) gradient-based outlier detection with a One-Class SVM (OCSVM) using the standard rbf kernel, and (4) Isolation Forest, which is used in the proposed OGE attack.

![12_image_0.png](12_image_0.png)

![12_image_1.png](12_image_1.png)

![12_image_2.png](12_image_2.png)

Figure 6: Ablation studies performed by varying OGE design parameters for the Fashion-MNIST dataset.

![12_image_3.png](12_image_3.png)

Figure 7: Ablation studies performed by varying OGE design parameters for the CIFAR10 dataset.

As demonstrated in Figure 7b, outlier detection with Isolation Forest exhibits the highest RAD compared to the other weight subset selection techniques. When evaluated on the Fashion-MNIST and GTSRB datasets, OGE demonstrated similar results for the Isolation Forest technique, thereby demonstrating the efficiency of the proposed approach. Furthermore, we performed experiments with different kernels of OCSVM - (1) *poly*, (2) *sigmoid*, and (3) *rbf* - and observed that the rbf kernel furnishes the highest RAD among the three OCSVM kernels (as demonstrated in Figure 9 of the Appendix). However, our Isolation Forest method furnishes an even higher degradation in accuracy when compared to the OCSVM method with the rbf kernel.
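To make the compared selection techniques concrete, below is a minimal sketch of gradient-based outlier selection with Isolation Forest (Liu et al., 2008) versus the OCSVM baseline, using scikit-learn. The gradient array, the 1% outlier rate, and the `nu` value are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def select_outlier_weights(grads: np.ndarray, method: str = "iforest",
                           rate: float = 0.01) -> np.ndarray:
    """Return indices of weights whose loss gradients are outliers.

    grads: per-weight gradient magnitudes for the selected layers,
    shaped (num_weights, 1). The 1% outlier rate is an assumption
    for illustration only.
    """
    if method == "iforest":
        detector = IsolationForest(contamination=rate, random_state=0)
    else:  # OCSVM baseline from the ablation; rbf performed best among its kernels
        detector = OneClassSVM(kernel="rbf", nu=rate)
    labels = detector.fit_predict(grads)  # -1 marks an outlier
    return np.where(labels == -1)[0]

grads = np.abs(np.random.randn(10_000, 1))  # stand-in for real loss gradients
candidate_bits = select_outlier_weights(grads)
```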
![12_image_4.png](12_image_4.png)

Figure 9: Variation in classification accuracy under the OGE attack for different kernels of OCSVM, compared with the performance of the Isolation Forest technique.

Ablation study with optimization algorithms: We also performed an ablation study on three optimization techniques - (1) Particle Swarm Optimization (PSO), (2) Grey Wolf Optimization (GWO), and (3) the proposed evolutionary optimization algorithm - as shown in Figure 7c. The evolutionary approach outperforms both PSO and GWO over 100 iterations across all three datasets. Similar results are obtained when we repeat identical experiments on the Fashion-MNIST and GTSRB datasets. These comprehensive ablation studies substantiate our design choices for the proposed OGE attack framework, which has proven its efficiency in targeting and compromising the robust binary weights found within a BNN.

![13_image_0.png](13_image_0.png)

Figure 8: Ablation studies performed by varying OGE design parameters for the GTSRB dataset.

![13_image_1.png](13_image_1.png)

Figure 10: (a) Impact of the bit flip attack and (b) the proposed defense against the OGE attack on BNNs.

## 6 Discussion

Since a BNN demands an extremely low amount of resources at the edge, it has the potential to be deployed in next-generation mission-critical applications, *e.g.*, in an autonomous vehicle, as represented in Figure 10a. The decisions ascertained by this network, for instance, detecting a street sign with a speed limit of 35 MPH, enable the vehicle to drive as intended. However, under such an adversarial bit-flip attack, the network might incorrectly infer the limit to be 85 MPH, which would direct the vehicle to reach uncontrollable speeds. Since the decisions furnished by such networks are beyond human control, the impact of such attacks can end in catastrophic circumstances, including the loss of human lives. Therefore, it is imperative to explore the vulnerabilities in such extremely low-precision networks and address the manifestation of adversarial attempts that can subvert the confidence of the system.

In order to thwart such attacks, we propose an adaptive defense mechanism. When manifested in a network, this defense strategy aims to jeopardize the underlying strategy of the OGE attack and hence is termed adaptive. The proposed defense utilizes the concept of an XOR-cipher, inspired by the concept of logic locking (Yasin & Sinanoglu, 2017). Once the BNN is trained, a critical subset of vulnerable BNN parameters can be encrypted with the XOR-cipher, which, on providing the correct set of keys, will furnish the original accuracy during inference. However, an incorrect key set will exhibit a graceless degradation in accuracy. Therefore, the attacker, without knowing the key, will obtain a network having extremely low baseline accuracy, beyond which attacking the network further is impractical from an adversarial perspective. Figure 10b demonstrates a high-level representation of this proposed defense strategy. As exhibited in existing research, the multiplication operation in a BNN is accomplished by a logical XNOR operation between the activation (A) and the binary weight (W), resulting in the product A ⊙ W, where ⊙ denotes XNOR (Courbariaux et al., 2016). With the implementation of the defense strategy, the weight W is XOR-ed with a key bit (k) prior to its multiplication with the input activation. Hence, the product obtained can be represented as A ⊙ (W ⊕ k), where ⊕ denotes XOR.
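Below is a minimal sketch of the locked binary multiplication described above, assuming a 0/1 bit encoding of the ±1 weights; the function names are hypothetical, and the sketch abstracts away the hardware-level XNOR implementation.

```python
def xnor(a: int, b: int) -> int:
    """Binary 'multiplication' in a BNN: XNOR of two bits (0 encodes -1, 1 encodes +1)."""
    return 1 - (a ^ b)

def locked_product(activation: int, stored_weight: int, key: int) -> int:
    """Defended product A XNOR (W' XOR k), where W' = W XOR k is the
    encrypted weight stored in memory. Only the correct key bit recovers
    the original product; guessing m key bits succeeds with probability
    1/2**m (~7e-46 for m = 150).
    """
    return xnor(activation, stored_weight ^ key)

A, W, k = 1, 0, 1
stored = W ^ k                                        # what an attacker observes in memory
assert locked_product(A, stored, k) == xnor(A, W)     # correct key: original behavior
assert locked_product(A, stored, 1 - k) != xnor(A, W) # wrong key: the product flips
```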
Reverse engineering the key is also highly improbable in this scenario: locking m vulnerable binary weights leaves the adversary a success probability of only $1/2^m$ (*e.g.*, approximately $7 \times 10^{-46}$ for 150 encrypted weights). Hence, with this infinitesimal probability, it is almost impossible for the adversary to obtain the correct keys and, subsequently, the accurate set of pre-trained network weights needed to execute the attack.

Bottlenecks: While the defense approach holds the promise of countering the OGE attack, there is a practical hurdle to implementing it. The issue stems from the fact that implementing this defense would require provisions at the system level where Binary Neural Networks (BNNs) are executed. Specifically, every position where the weights are mapped would need provisions to encrypt those particular weights. Since BNNs are designed to achieve efficient multiplication operations using XNOR gates, adding an XOR gate corresponding to each XNOR gate would substantially increase the area and power overheads necessary for implementing this defense strategy. This addition would nearly double the resources required for processing, significantly compromising the efficiency and effectiveness of such BNNs. The primary motivation for using BNNs is to make them well-suited for deployment at the edge, where resource constraints and efficiency are paramount. Consequently, implementing a defense strategy that introduces such substantial overhead would undermine the very purpose of deploying BNNs at the edge. This underscores the challenge in developing defense mechanisms for these types of attacks that are both effective and practical, without sacrificing the key advantages that BNNs offer. It highlights the urgent need for innovative defense strategies that can effectively thwart such carefully crafted attacks while remaining compatible with inherently robust BNNs and practical for edge deployment.

## 7 Conclusion

In this work, we propose OGE, an adversarial bit flip attack designed specifically for Binary Neural Networks (BNNs). Our attack aims to disrupt the classification accuracy of these networks, even within their low-precision settings, consequently undermining the system's confidence in its predictions. Our results demonstrate that it is possible to significantly subvert the classification accuracy of a BNN, achieving a Relative Accuracy Drop (RAD) of 68.1% using the OGE methodology. Impressively, this level of disruption can be achieved by flipping as few as 150 weights within a BNN. We achieve this through a systematic approach based on outlier gradients and evolutionary techniques, emphasizing the efficiency of our proposed attack. To defend against such attacks, a potential mitigation strategy could involve the use of an XOR-cipher to protect a critical subset of vulnerable bits. However, it is important to note that this defense approach may introduce significant computational and resource overheads, as discussed earlier. Nonetheless, it represents a potential avenue for further exploration in future research, as we continue to grapple with the challenge of securing low-precision networks like BNNs against sophisticated adversarial attacks.

## References

Michel Agoyan, Jean-Max Dutertre, Amir-Pasha Mirbaha, David Naccache, Anne-Lise Ribotta, and Assia Tria. How to flip a bit? In *2010 IEEE 16th International On-Line Testing Symposium*, pp. 235–239. IEEE, 2010.

Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King.
BinaryBERT: Pushing the limit of BERT quantization. *arXiv preprint arXiv:2012.15701*, 2020.

Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, and Shu-Tao Xia. Targeted attack against deep neural networks via flipping limited weight bits. In *ICLR*, 2021.

Jiawang Bai, Kuofeng Gao, Dihong Gong, Shu-Tao Xia, Zhifeng Li, and Wei Liu. Hardly perceptible trojan attack against neural networks with bit flips. In *European Conference on Computer Vision*, pp. 104–121. Springer, 2022.

Abhiroop Bhattacharjee and Priyadarshini Panda. SwitchX: Gmin-Gmax switching for energy-efficient and robust implementation of binary neural networks on ReRAM xbars. *arXiv preprint arXiv:2011.14498*, 2020.

Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. Practical fault attack on deep neural networks. In *Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security*, pp. 2204–2206, 2018.

Huili Chen, Cheng Fu, Jishen Zhao, and Farinaz Koushanfar. ProFlip: Targeted trojan attack with progressive bit flips. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 7718–7727, 2021.

Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. *arXiv preprint arXiv:1807.04457*, 2018.

Kamran Chitsaz, Goncalo Mordido, Jean-Pierre David, and François Leduc-Primeau. Training DNNs resilient to adversarial and random bit-flips by learning quantization ranges. *Transactions on Machine Learning Research*, 2023.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. *arXiv preprint arXiv:1602.02830*, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255. IEEE, 2009.

Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, and Jun Zhu. Efficient decision-based black-box adversarial attacks on face recognition. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7714–7722, 2019.

Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1625–1634, 2018.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Zhezhi He, Adnan Siraj Rakin, Jingtao Li, Chaitali Chakrabarti, and Deliang Fan. Defending and harnessing the bit-flip based adversarial weight attack. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14095–14103, 2020.

Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. *Advances in Neural Information Processing Systems*, 32, 2019.

Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, and Tudor Dumitraş. Security analysis of deep neural networks operating in the presence of cache side-channel attacks. *arXiv preprint arXiv:1810.03487*, 2018.
Xing Hu, Ling Liang, Shuangchen Li, Lei Deng, Pengfei Zuo, Yu Ji, Xinfeng Xie, Yufei Ding, Chang Liu, Timothy Sherwood, et al. DeepSniffer: A DNN model extraction framework based on learning architectural hints. In *Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems*, pp. 385–399, 2020.

Voratas Kachitvichyanukul. Comparison of three evolutionary algorithms: GA, PSO, and DE. *Industrial Engineering and Management Systems*, 11(3):215–223, 2012.

Mehmet Kayaalp, Nael Abu-Ghazaleh, Dmitry Ponomarev, and Aamer Jaleel. A high-resolution side-channel attack on last-level cache. In *Proceedings of the 53rd Annual Design Automation Conference*, pp. 1–6, 2016.

Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. In *2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA)*, pp. 361–372, 2014a. doi: 10.1109/ISCA.2014.6853210.

Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. *ACM SIGARCH Computer Architecture News*, 42(3):361–372, 2014b.

Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, and Priyadarshini Panda. Gradient-based bit encoding optimization for noise-robust binary memristive crossbar. In *2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)*, pp. 1111–1114. IEEE, 2022.

Shane Lee. Genetic algorithms. *Orthogonal Arrays*, 2002.

Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In *2008 Eighth IEEE International Conference on Data Mining*, pp. 413–422. IEEE, 2008.

Pedro Marcelino. Transfer learning from pre-trained models. *Towards Data Science*, 10:23, 2018.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1765–1773, 2017.

Ozan Özdenizci and Robert Legenstein. Improving robustness against stealthy weight bit-flip attacks by output code matching. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13388–13397, 2022.

Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, and Nicu Sebe. Binary neural networks: A survey. *Pattern Recognition*, 105:107281, 2020.

Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. Bit-flip attack: Crushing neural network with progressive bit search. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, pp. 1211–1220, 2019.

Adnan Siraj Rakin, Zhezhi He, and Deliang Fan. TBT: Targeted neural network attack with bit trojan. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13198–13207, 2020.

Adnan Siraj Rakin, Li Yang, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, and Deliang Fan. RA-BNN: Constructing robust & accurate binary neural network to simultaneously defend adversarial bit-flip attack and improve accuracy. *arXiv preprint arXiv:2103.13813*, 2021.

Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, and Deliang Fan. T-BFA: Targeted bit-flip adversarial weight attack. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(11):7928–7939, 2022. doi: 10.1109/TPAMI.2021.3112932.
Kaveh Razavi, Ben Gras, Erik Bosman, Bart Preneel, Cristiano Giuffrida, and Herbert Bos. Flip feng shui: Hammering a needle in the software stack. In *25th USENIX Security Symposium (USENIX Security 16)*, pp. 1–18, 2016.

Aniruddha Saha, Akshayvarun Subramanya, and Hamed Pirsiavash. Hidden trigger backdoor attacks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 11957–11965, 2020.

Amartya Sanyal, Matt Kusner, Adria Gascon, and Varun Kanade. TAPAS: Tricks to accelerate (encrypted) prediction as a service. In *International Conference on Machine Learning*, pp. 4490–4499. PMLR, 2018.

Bodo Selmke, Stefan Brummer, Johann Heyszl, and Georg Sigl. Precise laser fault injections into 90 nm and 45 nm SRAM cells. In *International Conference on Smart Card Research and Advanced Applications*, pp. 193–205. Springer, 2015.

SN Sivanandam and SN Deepa. Genetic algorithms. In *Introduction to Genetic Algorithms*, pp. 15–37. Springer, 2008.

Adam Slowik and Halina Kwasnicka. Evolutionary algorithms and their applications to engineering problems. *Neural Computing and Applications*, 32(16):12363–12379, 2020.

Pradnya A. Vikhar. Evolutionary algorithms: A critical review and its future prospects. In *2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC)*, pp. 261–265, 2016. doi: 10.1109/ICGTSPICC.2016.7955308.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018.

Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, and Chao Zhang. Aegis: Mitigating targeted bit-flip attacks against deep neural networks. In *32nd USENIX Security Symposium (USENIX Security 23)*, pp. 2329–2346, 2023.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*, 2016.

Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. DBA: Distributed backdoor attacks against federated learning. In *International Conference on Learning Representations*, 2019.

Mengjia Yan, Christopher W Fletcher, and Josep Torrellas. Cache telepathy: Leveraging shared resource attacks to learn DNN architectures. In *29th USENIX Security Symposium (USENIX Security 20)*, pp. 2003–2020, 2020.

Yuval Yarom and Katrina Falkner. FLUSH+RELOAD: A high resolution, low noise, L3 cache side-channel attack. In *23rd USENIX Security Symposium (USENIX Security 14)*, pp. 719–732, 2014.

Muhammad Yasin and Ozgur Sinanoglu. Evolution of logic locking. In *2017 IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC)*, pp. 1–6, 2017. doi: 10.1109/VLSI-SoC.2017.8203496.

Chunyu Yuan and Sos S Agaian. A comprehensive review of binary neural network. *arXiv preprint arXiv:2110.06804*, 2021.
Review 1:

Summary:
The paper presents an adversarial attack framework called OGE (Outlier Gradient-based Evolutionary attack) specifically designed for Binary Neural Networks (BNNs). The authors propose a multi-stage attack strategy that leverages gradient-based layer selection, weight-subset selection using Isolation Forest, and weight search space optimization through an evolutionary algorithm. The effectiveness of the OGE attack is evaluated on several datasets and different BinaryNet architectures.

Strengths and Weaknesses:
Strengths:
1. The authors conduct experiments on multiple datasets as well as architectures. The comparison with other baselines demonstrates the effectiveness of the proposed attack.
2. The proposed attack is easy to follow and the proposed framework seems natural.

Weaknesses:
1. Although the authors claim the efficiency of the proposed attack, it is difficult to see the corresponding analysis of efficiency in the methodology part.
2. The effectiveness of the proposed attack is not evaluated on adversarially robust BNNs, such as [1,2,3].
3. The compared baselines are out of date. Some advanced flip attacks should be compared, such as [4,5].

[1]. Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching. CVPR 2022.
[2]. Defending and Harnessing the Bit-Flip based Adversarial Weight Attack. CVPR 2020.
[3]. Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges. TMLR 2023.
[4]. ProFlip: Targeted Trojan Attack With Progressive Bit Flips. ICCV 2021.
[5]. Hardly Perceptible Trojan Attack Against Neural Networks with Bit Flips. ECCV 2022.

Requested Changes:
1. Please discuss the efficiency of the proposed attack.
2. Please evaluate the effectiveness of the proposed attack on some advanced adversarially robust BNNs.
3. Please compare with recent flip attack baselines.

Broader Impact Concerns: I do not see any broader impact concerns.

==================================================

Review 2:

Summary:
Binary Neural Networks (BNNs) offer reductions in storage and compute costs compared to traditional Deep Neural Networks (DNNs) by employing ultra-low precision weights. However, their vulnerability to hardware attacks remains a concern, prompting the exploration of adversarial attack paradigms targeting BNNs during deployment to degrade accuracy and undermine confidence in the system. The proposed Outlier Gradient-based Evolutionary (OGE) attack introduces minimal critical bit flips in pre-trained binary network weights, resulting in up to a 68.1% relative drop in test accuracy across various datasets, with just 150 binary weight flips out of millions in a BNN architecture.

Strengths and Weaknesses:
Strengths:
- The Outlier Gradient-based Evolutionary (OGE) attack appears to be novel to the best of my knowledge.
- The presented attack performances show the vulnerability of BNNs to the proposed attack.
- The introduced minimal critical bit flips to fool a BNN are interesting.

Weaknesses:
- The paper did not evaluate any efficiency against defenses.
- This work is outdated. This work was submitted in 2024 and cites only 3 works from 2023, of which only one is related to BNNs.
- Given the above, I doubt that the authors have truly compared against the state-of-the-art, especially since the works they compare to are from 2019.
- The community has seen various new architectures, Vision-Transformers, MLP-Mixer, Diffusion models, etc.
None of these models were tested with the proposed attack.

Requested Changes:
- Evaluate the attack against SOTA defenses.
- Update the entire paper to the state of research of 2024; this includes comparing to the latest BNN attacks.
- Provide evaluations on novel architectures (Vision-Transformers, MLP-Mixer, Diffusion models).

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary:
The authors propose an adversarial weight bit-flip attack algorithm for binary neural networks. Prior studies considered low-bit quantization a strong defense in itself against the existing heuristic attacks on high-precision quantized NNs. This work introduces an attack tailored for the binary case, which challenges the existing belief that weight binarization is a strong defense against such attacks. The authors' main proposal is the so-called "outlier gradient-based evolutionary (OGE) attack" algorithm, which mainly relies on the information of outlier gradients to restrict its layer/weight search space and detect the set of most vulnerable bits in the BNN. Several experiments are performed across various datasets and binary DNN architectures.

Strengths and Weaknesses:
Strengths:
- The clear contribution of this work is the weight bit-flip attack algorithm tailored for the binary quantization case, which is new.
- Experimental results scale to the ImageNet level and demonstrate the effectiveness of the proposed approach.
- The narrative is clear and the storyline is communicated well.

Weaknesses:
- The scope of the work and the actual utility of a BNN bit-flip attack are highly limited. However, I still believe this paper provides a technically valid contribution in its own niche.

Requested Changes:
- An addition (e.g., a table) regarding the time the algorithm requires to identify the most vulnerable bits would provide a good comparison against BFA (not only Alg. 1, but the complete process starting from gradient-based layer selection).
- In the middle of the Discussion section, the authors suddenly start proposing an adaptive defense mechanism with a logic-locked circuit against this vulnerability. Perhaps this should be separated and included in a short additional subsection.
- The paper sufficiently discusses existing weight bit-flip attacks on neural networks, but says too little about existing defenses against these challenges. Some notable studies that the authors could discuss in their work:
"Defending and harnessing the bit-flip based adversarial weight attack." CVPR 2020.
"Improving robustness against stealthy weight bit-flip attacks by output code matching." CVPR 2022.
"Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks." USENIX 2023.
"Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges." TMLR 2023.
- Minor comments: The reference citation style of the paper is generally inconsistent and should be re-checked (e.g., use of parentheses in the Author et al. natbib format). There are also several typos and uppercase-lowercase mix-ups here and there.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The reviewers raised several points that were addressed in the authors' rebuttal. However, the authors did not upload a revision of the paper in accordance with their answers. The authors should upload a revised manuscript that includes the additional results and discussions provided in the rebuttal.
==================================================
# Breaking The Spurious Causality Of Conditional Generation Via Fairness Intervention With Corrective Sampling

Junhyun Nam *junh.nam@samsung.com*
Samsung Advanced Institute of Technology (SAIT)∗

Sangwoo Mo *swmo@kaist.ac.kr*
Korea Advanced Institute of Science & Technology (KAIST)

Jaeho Lee *jaeho.lee@postech.ac.kr*
Pohang University of Science and Technology (POSTECH)

Jinwoo Shin *jinwoos@kaist.ac.kr*
Korea Advanced Institute of Science & Technology (KAIST)

Reviewed on OpenReview: *https://openreview.net/forum?id=VV4zJwLwI7*

## Abstract

To capture the relationship between samples and labels, conditional generative models often inherit spurious correlations from the training dataset. This can result in label-conditional distributions that are imbalanced with respect to another latent attribute. To mitigate this issue, which we call spurious causality of conditional generation, we propose a general two-step strategy. (a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset. (b) Corrective Sampling (CS): explicitly filter the generated samples and ensure that they follow the desired latent attribute distribution. We have designed the fairness intervention to work for various degrees of supervision on the spurious attribute, including unsupervised, weakly-supervised, and semi-supervised scenarios. Our experimental results demonstrate that FICS can effectively resolve spurious causality of conditional generation across various datasets.

## 1 Introduction

Deep generative models are capable of producing synthesized visual content that closely resembles the training data, but sometimes in undesirable ways. Similar to classifiers, generative models can be affected by dataset bias (Torralba & Efros, 2011; Buolamwini & Gebru, 2018) inherited from the training set, resulting in significantly fewer samples of minority groups compared to majority group samples (Zhao et al., 2018; Salminen et al., 2020; Jain et al., 2021). This group imbalance is usually amplified in the generated dataset compared to the original training dataset, as noted by Tan et al. (2020). The exacerbated imbalance in generated samples can lead to various fairness issues in downstream applications, such as underrepresentation of minority groups in content creation (Choi et al., 2020; Tan et al., 2020; Jalal et al., 2021) or harm to classifiers trained with generated samples (Xu et al., 2018; Sattigeri et al., 2018; Xu et al., 2019).

Conditional generative models face a different type of fairness concern due to the association of the input condition and spurious attributes in data generation. Specifically, the input-conditional distribution of generated data may be imbalanced with respect to some latent attribute, replicating the spurious correlation present in the training dataset. For instance, a BigGAN (Brock et al., 2019) model trained on the CelebA (Liu et al., 2015) dataset with the "blondness" attribute as a class label may generate severely imbalanced samples in terms of "gender," with the vast majority of samples generated under the blond class being female (see Figure 1).

∗Work done at KAIST

![1_image_0.png](1_image_0.png)

Figure 1: **Motivation.** Conditional generative models trained on datasets with spuriously correlated attributes may generate severely imbalanced samples when conditioned on the correlated attribute.
This behavior is particularly problematic in conditional generation, as there is a causal relationship between the act of conditioning (intervention) and the distribution of generated samples, which can amplify biased beliefs held by users. We tackle this problem, which we refer to as spurious causality of conditional generation.

It has also recently been reported that popular text-to-image generation models exhibit gender and cultural biases (Barr, 2022). When prompted with specific occupations like "childcare worker," the generated images are highly imbalanced in terms of gender and race.

The reproduction of spurious correlations by conditional generative models raises a deeper fairness concern compared to imbalances in unconditional generative models. Input conditioning (i.e., providing the class label "blond") can be seen as an intervention from the perspective of counterfactual causality (Pearl, 2010). Once a spurious correlation is incorporated into conditional generative models, changing the input (intervention) leads to a change in the distribution of spuriously correlated latent attributes of the conditionally generated samples, forming a cause-effect relationship. This false sense of causation in conditional generation, which we refer to as *spurious causality of conditional generation*, is a significant ethical threat, as this misconception is likely to propagate when using generative models (Mariani et al., 2018).1

To the best of our knowledge, no previous work has aimed to address the spurious causality of conditional generation. Previous studies on fair generative models have focused on addressing majority bias by either (a) reducing the discrepancy from the data distribution (Grover et al., 2019; Mo et al., 2019b; Lee et al., 2021; Yu et al., 2020; Humayun et al., 2022) or (b) following the desired fair distribution specified by explicit labels of sensitive attributes (Tan et al., 2020) or a reference set of desiderata (Choi et al., 2020). A naive solution could be to extend the previous work on majority bias by applying it to each condition. Instead, we propose to leverage the conditional structure to tackle the problem.

Contribution. We propose a general framework, coined *fairness intervention with corrective sampling* (FICS), to mitigate the severe yet underexplored problem of spurious causality in conditional generative models. FICS consists of two components. (1) Fairness intervention (FI): emphasize samples suffering from the spurious correlation in the dataset at the training phase. (2) Corrective sampling (CS): explicitly filter samples after the training phase so that the trained generator follows the desired distribution. FICS provides a unified approach to resolving spurious causality of conditional generation for a wide range of scenarios, with various degrees of supervision available on the latent attributes and various target ratios of latent attribute distributions.

We focus on the task of class-conditional image generation using conditional generative adversarial networks (Mirza & Osindero, 2014, cGANs) and apply FICS to unsupervised, weakly-supervised, and semi-supervised setups. The goal is to generate conditional distributions whose latent attribute distribution is either identical to that of the training dataset or some desired fair distribution. Our experiments show that FICS is an effective remedy for spurious causality of conditional generation.
In particular, we train BigGAN (Brock et al., 2019) on CelebA (Liu et al., 2015) and Waterbirds (Sagawa et al., 2020), and ResNetGAN (Gulrajani et al., 2017) on Colored MNIST (Arjovsky et al., 2019). FICS consistently improves upon the conditional extensions of previous fair generation methods in both unsupervised and supervised scenarios, in terms of both sample quality and the minority attribute occurrence ratio.

1We use the term "spurious causality" as an analogy to the term "spurious correlation" used in classification contexts.

## 2 Related Work

Generative models. Generative models have synthesized high-fidelity samples in various domains, e.g., image (Karras et al., 2021), video (Yu et al., 2022), and language (Brown et al., 2020). Numerous methods have been proposed, including generative adversarial networks (Goodfellow et al., 2014, GANs) and diffusion models (Dhariwal & Nichol, 2021). Among these, GANs are known to offer both high sample quality and fast sampling (Xiao et al., 2022). In addition, conditional GANs (cGANs) utilize additional supervision such as class labels (Mirza & Osindero, 2014), text descriptions (Ramesh et al., 2022; Ding et al., 2022), or reference images (Zhu et al., 2017; Mo et al., 2019a) as input conditions to further extend their applicability. Specifically, cGANs control the synthesized outputs and improve sample quality by restricting the output space for each condition. Despite their utility, we find that cGANs often suffer from a severe yet underexplored fairness issue. We investigate this issue, coined spurious causality of conditional generation, and propose a remedy for it.

Fairness in generative models. The fairness issue in generative models has received considerable attention due to its practical impact. Most prior work on fair generation focuses on addressing the majority bias, which is the tendency of generative models to learn the imbalances inherited from the training dataset; this bias is often amplified during training (Tan et al., 2020). One line of work considers an unsupervised setup, where the group identities of samples are unknown. Approaches in this direction handle bias by using discrepancy metrics corrected with density ratios as the optimization objective (Grover et al., 2019; Mo et al., 2019b; Lee et al., 2021) or by imposing regularizations to enforce a uniform learning of the data manifold (Yu et al., 2020; Humayun et al., 2022). Another line of work learns the fair distribution in a supervised manner (Xu et al., 2018; Sattigeri et al., 2018; Xu et al., 2019; Tan et al., 2020; Choi et al., 2020). However, previous works only focus on the majority bias, and the spurious correlation over multiple attributes has not been investigated.

Spurious correlation of classifiers. Spurious correlations in training datasets can cause classifiers to perform poorly on out-of-distribution test sets where these correlations do not hold (Geirhos et al., 2020). To mitigate spurious correlation, several methods have been proposed, such as invariant learning (Arjovsky et al., 2019) and distributionally robust optimization (Sagawa et al., 2020), but these methods require explicit supervision on the spuriously correlated attribute, which is often expensive and time-consuming to obtain.
Weaker forms of supervision, such as fair validation sets (Bahng et al., 2020; Nam et al., 2020; Creager et al., 2021; Liu et al., 2021), a small set of attribute-annotated samples (Nam et al., 2022; Sohoni et al., 2021), or pre-trained vision-language models (Kim et al., 2023), have also been explored. Specifically, this line of work focuses on identifying and mitigating poorly performing samples. Our method shares the same philosophy as these debiasing methods for spurious correlation but instead focuses on tackling the spurious causality issue in conditional generative models.

Causality in generative models. While our paper focuses on achieving fairness between multiple attributes in the vision domain, research has also been conducted in the tabular domain with a similar focus. In the tabular domain, each data point typically has all attribute labels, including the sensitive attribute of interest, making it easier to directly observe all attributes (Xu et al., 2018). Moreover, some research assumes prior knowledge of the causal structure among the attributes and leverages the parent attribute to generate its child attribute (Xu et al., 2019; van Breugel et al., 2021). However, our study addresses more flexible scenarios with varying levels of supervision on the sensitive attribute.

Relation to counterfactual fairness. Our framework shares some similarities with counterfactual fairness (Kusner et al., 2017) in terms of its ability to conduct counterfactual analysis on specific attributes. However, our framework performs the do-operation on the class attribute and focuses on the distribution of the sensitive attribute in the generated images. In contrast, counterfactual fairness aims to achieve fair prediction by performing the do-operation on the sensitive attribute. Therefore, the methods used to achieve counterfactual fairness are not directly applicable to the problem of interest in our framework.

Spurious correlations beyond class labels. While we focus on the class attributes in this paper, spurious correlation also appears in different domains, e.g., keyword bias (Moon et al., 2021) or scene bias (Mo et al., 2021). It can be problematic for generative models using such conditions, e.g., text-to-image models can be biased toward some keywords. Here, we remark that our principle of leveraging the spurious correlation of classifiers to mitigate the spurious causality of conditional generative models can be applied in general.

![3_image_1.png](3_image_1.png)

(a) data generating process

![3_image_0.png](3_image_0.png)

(b) spurious causality of cGAN

Figure 2: **Causal graphs.** Although there is no causal relationship between the class attribute Y and the sensitive attribute A, conditioning on the class to generate data can impact the sensitive attribute of the generated data. As a result, users of cGANs who observe the generated data and its sensitive attribute may falsely perceive that manipulating the class attribute alters the sensitive attribute of the generated data, as long as a spurious correlation exists between the class attribute and the sensitive attribute.

## 3 Spurious Causality Of Conditional GAN

Conditional generative models aim to approximate the ground-truth distribution pdata(x|y) of data x ∈ X conditioned on a label y ∈ Y using a model distribution pgen(x|y). For brevity, we assume that the models are conditional generative adversarial networks (Mirza & Osindero, 2014, cGAN).
cGAN trains a generator G : (z, y) ↦ x, which generates data x conditioned on a latent variable z ∼ p(z) drawn from a fixed prior distribution p(z) and a label y ∼ pdata(y). This procedure defines the generator distribution pgen(x|y). Simultaneously, cGAN trains a discriminator D : (x, y) ↦ r ∈ (0, 1), which predicts whether the data-label pair (x, y) comes from the data distribution (r = 1) or the generator distribution (r = 0). Formally, the training objective of cGAN is:

$$\mathcal{L}_{\mathrm{cGAN}}(G,D):=\mathbb{E}_{y\sim p_{\mathrm{data}}(y)}\big[\mathbb{E}_{x\sim p_{\mathrm{data}}(x|y)}\log D(x,y)+\mathbb{E}_{z\sim p(z)}\log(1-D(G(z,y),y))\big].$$

We consider training cGANs on a training dataset with spurious correlations. Specifically, we assume that there is a sensitive attribute a(x) ∈ A associated with the data x, which is correlated but not causally related to the label y. We observe that the model distribution pgen(x|y), when naively trained, learns to replicate the spurious correlation present in the training dataset. More specifically, we define the label-conditional probability of sensitive attributes for the model distribution pgen(x|y) as follows:

$$p_{\mathrm{gen}}(a|y)=\int_{\mathcal{X}}\mathbf{1}[a=a(x)]\,p_{\mathrm{gen}}(x|y)\;\mathrm{d}x$$

and define pdata(a|y) similarly. We observe that the model distribution pgen(x|y) is trained in a way that pgen(a|y) is significantly more imbalanced than pdata(a|y).

Such exaggerated reproduction of spurious correlation by conditional generative models is a serious problem, as the model itself now provides a causal link between the act of "conditioning on the label" and the "sensitive attribute of the generated sample." Indeed, the model can be viewed as a black box on which a counterfactual experiment can be performed by intervening on the label with everything else fixed (e.g., the latent variable z). Such analysis will lead to the conclusion that there exists a causal structure (Pearl, 2010) between the label and the spurious attribute of the generated samples.

This problem, which we call *spurious causality of conditional generation*, is potentially more problematic than the spurious correlation learned by classifiers. Namely, conditional generative models can propagate harmful stereotypes, such as the belief that the attribute "female" is caused by being "blond," to users who directly observe the effect of providing conditions to the generated samples.

![4_image_0.png](4_image_0.png)

Figure 3: **Method overview.** The proposed FICS is a general two-stage framework that aims to address the spurious causality of conditional generation. It involves fairness intervention during the training phase to encourage the generator to synthesize minority samples (FI), followed by corrective sampling during the generation phase (CS). The specific steps depend on the type of fairness supervision.

Our goal is to address the spurious causality of conditional generative models by learning a balanced conditional distribution pgen(x|y) that yields a desired sensitive attribute distribution pgen(a|y). The desired distribution may vary depending on the user intention: (a) *Approximating the data distribution* pdata(a|y). As previously discussed, standard GAN training often yields a more imbalanced model distribution pgen(a|y) than the data distribution pdata(a|y). Our aim is to learn a model distribution that closely approximates the data distribution, i.e., pgen(a|y) ≈ pdata(a|y). (b) *Approximating a fair distribution* pfair(a|y).
Our aim is to approximate a sensitive attribute distribution that aligns with specific fairness criteria. For instance, we may enforce the attribute to follow a fair distribution, i.e., pgen(a|y) ≈ pfair(a) for some desired pfair(a).

In addition to achieving the desired pgen(a|y), we also consider achieving a *fair* generative quality for each label as an auxiliary goal. For this purpose, we also report the (fair) Intra-FID for the compared methods.

## 4 Breaking The Spurious Causality Of Conditional GAN

## 4.1 Overall Framework

We describe the proposed FICS (Figure 3), a general strategy for mitigating the spurious causality of conditional generation. In essence, FICS comprises two mechanisms to conditionally synthesize samples with desired attribute distributions, with each component being designed based on the level of supervision available on the latent attributes.

- *Fairness Intervention* (FI): Encourage the generator to synthesize minority samples during the training phase. The specific intervention rules depend on the degree of supervision available on the latent attributes.
- *Corrective Sampling* (CS): Apply rejection sampling to the synthesized samples during the generation phase. The criteria for rejection depend on the desired goal: following the data distribution or a fair distribution.

We consider two different scenarios and give concrete versions of FICS for each case. First, an unsupervised setup aims to recover the data distribution by fixing the bias of the current model. Second, a supervised setup aims to follow a desired fair distribution described by additional supervision, such as a reference set of the desired distribution or labels of sensitive attributes. The following subsections illustrate how to design the fairness intervention and corrective sampling for the unsupervised and supervised scenarios.

## 4.2 Unsupervised: Learning The Data Distribution

In the unsupervised scenario, we aim to reduce the bias in the generator by encouraging it to generate more samples that are likely to be mispredicted by a biased classifier. This approach is motivated by the observation that there is a strong correlation between the misprediction of the biased classifier and the generation frequency of the biased cGAN. We first introduce this observation and then describe how to adjust the training and inference of the cGAN to mitigate this bias.

![5_image_0.png](5_image_0.png)

Figure 4: **Observation.** The relationship between the biased classifier and the generator is depicted by the percentile ranks of the classification loss of generated samples, with the x and y coordinates of a point representing the percentile rank of its classification loss among blond minority samples and all blond samples, respectively. For instance, a sample with a loss at the 20th percentile among blond male samples has a loss at approximately the 70th percentile among all blond samples, indicating that blond male samples typically have a higher classification loss among all blond samples. The attributes "gender" and "wearing eyeglasses" exhibit spurious correlation with "blondness," while the attribute "smiling" does not.

## 4.2.1 Observation

Our observation (Figure 4) suggests the usefulness of having a separately trained classifier to mitigate the spurious correlation of conditional generative models.
Specifically, we train a classifier and a conditional generative model on the CelebA (Liu et al., 2015) dataset and focus on the relationship between the "blondness" attribute and the "gender," "wearing eyeglasses," and "smiling" attributes. When conditioned on blond, the dataset is severely imbalanced with respect to the gender and wearing eyeglasses attributes. Unlike these two attributes, there is no clear correlation between blondness and smiling.

From the trained model, we observe that the generated samples classified as the minority group are more likely to have a higher classification loss. For example, among samples generated under the blond condition, a sample with a higher classification loss is more likely to be male. Specifically, we compare the percentile rank of a given sample among all blond samples with its rank among blond male samples. For instance, a sample at the 20th percentile among blond male samples has a loss at around the 70th percentile among all blond samples. We observed a similar pattern for the "wearing eyeglasses" attribute, which is spuriously correlated with the "blondness" attribute. In contrast, the "smiling" attribute, which has no clear correlation with the "blondness" attribute, does not show a distinct loss distribution between all blond samples and blond, non-smiling samples. This observation suggests that the classification loss of generated samples may contain useful information for identifying minority group samples, even in cases where the generator and classifier are trained separately and receive no supervision about the correlated attributes (e.g., "gender," "wearing eyeglasses").

## 4.2.2 Training And Sampling

Fairness intervention. Motivated by our observation, we intervene in the generator G to encourage the synthesis of samples that are difficult for the biased classifier fb : x ↦ y to classify. These samples are likely to have minority attributes, i.e., low p(a|y). To achieve this, we apply a regularizer that encourages the generator to minimize the (sum of the) cross-entropy loss over the wrong targets y′ ≠ y, rather than maximizing the loss on the correct target, to ensure convexity. This approach encourages the generator to generate more minority samples that are hard to classify. Formally, our objective is:

$$\min_{G}\max_{D}\ \mathcal{L}_{\mathrm{cGAN}}+\lambda\cdot\sum_{y^{\prime}\neq y}\mathsf{CE}(f_{b}(G(z,y)),y^{\prime}),\tag{1}$$

where λ is a weight for the regularizer and CE denotes the cross-entropy loss. We simply set λ = 0.01 since it worked well for all our experiments. We pretrain the classifier fb and fix it during the training of the cGAN.

Corrective sampling. In the unsupervised scenario, the regularizer used in the first step can distort the training distribution. Therefore, we use discriminator rejection sampling to correct the distribution of the generated samples. Although the regularizer in Eq. (1) increases the weight of the minority samples, it does not guarantee that the generative model follows the data distribution. Thus, we apply corrective sampling to recover the original data distribution.

Rejection sampling is a commonly used technique to sample from a probability distribution p when the ratio p/q is known and sampling from q is feasible. This technique involves sampling x from q and accepting the samples with a probability proportional to p/q. This procedure is equivalent to sampling from p.
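As a reference, below is a minimal numeric sketch of this acceptance rule; the upper bound M on the ratio p/q is an assumption, since in practice the ratio must be bounded (or clipped) for the acceptance probability to be valid.

```python
import random

def rejection_sample(sample_q, ratio_p_over_q, n, M=20.0):
    """Draw n samples distributed according to p, given a sampler for q and
    the density ratio p(x)/q(x) bounded above by M: accept x ~ q with
    probability (p(x)/q(x)) / M.
    """
    accepted = []
    while len(accepted) < n:
        x = sample_q()
        if random.random() < ratio_p_over_q(x) / M:
            accepted.append(x)
    return accepted

# Toy usage: resample a uniform proposal q on [0, 1] toward p(x) = 2x.
samples = rejection_sample(random.random, lambda x: 2.0 * x, n=1000, M=2.0)
```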
In our scenario, we sample x from pgen and accept the samples with probability proportional to pdata/pgen, which is equivalent to sampling from pdata. We estimate the density ratio using the optimal discriminator $D^* = \frac{p_{\mathrm{data}}}{p_{\mathrm{data}}+p_{\mathrm{gen}}}$ for a given generator G, following discriminator rejection sampling (Azadi et al., 2018). In practice, we use the discriminator D of the cGAN as a proxy for D∗.

## 4.3 Supervised: Learning The Fair Distribution

In the supervised scenario, we intervene in the generator to upsample the minority samples described by a fair reference set (weakly-supervised) or by labels of sensitive attributes (semi-supervised). We illustrate how to train the cGAN given each form of supervision.

## 4.3.1 Weakly-Supervised

Fairness intervention. Assuming that we have a reference set Sref representing the desired fair distribution, where pfair(x|y) is the empirical distribution of Sref, we intervene in the generator to follow the reference set using the density ratio between the reference and data distributions. To achieve this, we train a binary classifier fref : x ↦ r ∈ (0, 1) to distinguish whether a sample belongs to the reference (r = 1) or data (r = 0) distribution, similar to the density ratio trick used in GANs. The (unnormalized) resampling probability is given by:

$$w(x)=\frac{p(f_{\mathrm{ref}}(x)=1)}{M\,p(f_{\mathrm{ref}}(x)=0)},\tag{2}$$

where M > 0 is some large constant that ensures w(x) ≤ 1. During training, we draw samples from $p^{\prime}_{\mathrm{data}}(x) := \frac{w(x)}{W} p_{\mathrm{data}}(x)$, where W is a normalizing factor that makes $p^{\prime}_{\mathrm{data}}(x)$ a probability density function.

Corrective sampling. We train an additional discriminator D′, which is responsible for distinguishing between the reference and generator distributions. Once trained, we use rejection sampling (similar to the unsupervised scenario) to correct the samples to align with the desired fair distribution pfair(x|y).

## 4.3.2 Semi-Supervised

Fairness intervention. Assuming we have a set of samples with sensitive attribute labels L := {(x, y, a)} and a set of unlabeled samples U := {(x, y)}, we first train an attribute classifier fa : x ↦ a using the labeled data L, which can also leverage the unlabeled data U via semi-supervised learning techniques. Using this classifier, we estimate the population of each group g = (y, a) over all samples, constructing the pseudo-labeled dataset D := L ∪ Û, where Û := {(x, y, â) : (x, y) ∈ U} and â := fa(x). To balance the occurrence of each group, we intervene in the generator by resampling the training data with a probability inversely proportional to the group population. Specifically, the (unnormalized) resampling probability is given by:

$$w(x)=\frac{1}{|\{(x^{\prime},y^{\prime},a^{\prime})\in\mathcal{D}:(y^{\prime},a^{\prime})=(y,f_{a}(x))\}|}.\tag{3}$$

As in the weakly-supervised scenario, we draw samples from $p^{\prime}_{\mathrm{data}}(x) = \frac{w(x)}{W} p_{\mathrm{data}}(x)$ during training.

Corrective sampling. Similar to the unsupervised case, we correct the generator distribution to match the desired fair distribution pfair(a|y) via rejection sampling. However, we cannot simply reuse the discriminator D from the cGAN, since it is trained with a biased data distribution that is resampled by Eq. (3). Thus, we train an additional discriminator D′ to estimate the density ratio between the unbiased data and generator distributions, using D′ to recover the original data distribution pdata(x|y).
After recovering $p_{\mathrm{data}}(x|y)$, we apply another rejection sampling stage to control the desired group ratio $p_{\mathrm{fair}}(a|y)$ using the predicted attribute $f_a(x)$. This two-stage rejection sampling approach corrects the samples to follow the desired fair distribution.

## 5 Experiments

## 5.1 Experimental Setup

Datasets and models. We evaluate our framework on the CelebA (Liu et al., 2015), Colored MNIST (Arjovsky et al., 2019), and Waterbirds (Sagawa et al., 2020) datasets. The CelebA dataset consists of 162,770 face images of celebrities and has 40 attribute annotations. We select two attributes that are spuriously correlated, one as an input condition and the other to evaluate the occurrence ratio of the minority attribute. The Colored MNIST dataset consists of 40,000 digit images with binary labels (0 for digits 0-4, 1 for digits 5-9). Each digit is colored either red or green to impose a spurious correlation between binary labels and colors. We do not flip the final label, unlike the original construction. The Waterbirds dataset consists of 4,795 bird images with two types of birds (waterbirds and landbirds) on two types of backgrounds (water and land); bird type is used as an input condition, and the background is used to evaluate the occurrence ratio of the minority attribute. In all our experiments, we use BigGAN (Brock et al., 2019) for the conditional generative models and ResNet-50 (He et al., 2016) for the classifiers. For the Colored MNIST experiments, we use ResNetGAN (Gulrajani et al., 2017) for the conditional generative models and a 4-layer convolutional network for the classifiers.

Evaluation metrics. We evaluate the conditional generative models in two aspects: (a) sample quality, i.e., p(x|y), and (b) attribute balancing, i.e., p(a|y). For (a) in unsupervised scenarios, we use the intra-class Fréchet Inception distance (Intra-FID) (Heusel et al., 2017), which measures both sample quality and diversity of the conditional distribution. Specifically, Intra-FID measures the distance between the feature distributions of the data and generated samples. For supervised scenarios, we use a fair version of Intra-FID (Fair Intra-FID), which measures the distance between the fair distribution and the generated samples. For (b), we report the occurrence ratio of the minority (or sensitive) attribute, which should follow the ratio of the data and fair distributions for unsupervised and supervised scenarios, respectively. We also report the occurrence ratio of the (training/reference) dataset as estimated by the attribute classifier, since the classifier's predictions can under- or over-estimate the true ratio.

Baselines. We extend the prior (unconditional) fair generation methods as the baselines for fair conditional generation, i.e., modify their formulation from p(x) to p(x, y). We consider three representative methods, one for each of the unsupervised (Lee et al., 2021), weakly-supervised (Choi et al., 2020), and supervised (Tan et al., 2020) scenarios. For the unsupervised method, we consider Dia-GAN (Lee et al., 2021), which resamples training data based on an estimated log-density ratio to emphasize minority samples. For the weakly-supervised method, we consider Choi et al. (2020), which gives a different weight to each sample based on a density-ratio classifier trained to distinguish the training and reference datasets. For the supervised method, we consider FairGen (Tan et al., 2020), a latent variable modification method based on an attribute classifier trained on the latent variable space. See Appendix B for a detailed explanation.
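To ground the evaluation protocol above, the following is a minimal sketch of how the minority occurrence ratio could be estimated: classify conditionally generated samples with a pretrained attribute classifier and report the fraction predicted as the minority attribute. `generator` and `attr_classifier` are hypothetical stand-ins for the trained cGAN generator and the evaluation attribute classifier (e.g., predicting "male").

```python
import torch

@torch.no_grad()
def minority_occurrence_ratio(generator, attr_classifier, y, z_dim=128,
                              n_samples=10_000, batch_size=100,
                              minority_class=1):
    """Fraction (%) of y-conditional samples predicted to carry the
    minority attribute (e.g., "male" among blond-conditional samples)."""
    n_minority = 0
    n_batches = n_samples // batch_size
    for _ in range(n_batches):
        z = torch.randn(batch_size, z_dim)
        y_batch = torch.full((batch_size,), y, dtype=torch.long)
        x = generator(z, y_batch)                # conditionally generated batch
        pred = attr_classifier(x).argmax(dim=1)  # predicted sensitive attribute
        n_minority += (pred == minority_class).sum().item()
    return 100.0 * n_minority / (n_batches * batch_size)
```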
Computation time. Training the original BigGAN on CelebA for 200k iterations takes 30 hours on a single TITAN Xp GPU and 40 CPU cores (Intel Xeon CPU E5-2630 v4). FICS takes longer to train due to the additional classifier: about 1.6× in our training setup.

Table 1: **Unsupervised results (CelebA).** Comparison of unsupervised fair conditional generation methods under CelebA, using (a) "blondness" or (b) "wearing lipstick" attributes as a class condition. We report the Intra-FID (↓) and minority occurrence ratio (%) of generated samples. We report the minority occurrence ratio of the training dataset and the values estimated by the attribute classifier. Our method achieves the best of both worlds: almost matching the minority occurrence ratio of the original dataset (estimated by the classifier) while producing high-quality samples. The lowest Intra-FID and the closest minority occurrence ratio to the estimated one are marked in bold.

(a) Blondness

| | Intra-FID (↓) | | Minor in Blond (%) | | |
|---|---|---|---|---|---|
| | Blond | Non-blond | Male | Eyeglasses | No lipstick |
| Dataset | – | – | 5.72 | 1.66 | 18.78 |
| Estimated | – | – | 5.85 | 1.85 | 17.61 |
| cGAN (Mirza & Osindero, 2014) | 1.86 | 1.64 | 3.97 | 0.88 | 11.04 |
| Dia-GAN (Lee et al., 2021) | 1.88 | 1.84 | 5.06 | 1.49 | 14.10 |
| FICS (ours) | 1.78 | 1.38 | 6.13 | 1.90 | 16.34 |

(b) Wearing lipstick

| | Intra-FID (↓) | | Minor in Lipstick (%) | | |
|---|---|---|---|---|---|
| | Lipstick | No lipstick | Male | Eyeglasses | Hat |
| Dataset | – | – | 0.58 | 1.05 | 1.25 |
| Estimated | – | – | 0.80 | 1.21 | 1.87 |
| cGAN (Mirza & Osindero, 2014) | 1.36 | 1.67 | 0.28 | 0.76 | 1.04 |
| Dia-GAN (Lee et al., 2021) | 1.36 | 1.66 | 0.30 | 0.86 | 1.09 |
| FICS (ours) | 1.24 | 1.41 | 0.33 | 1.02 | 1.53 |

Table 2: **Unsupervised results (Colored MNIST and Waterbirds).** Comparison of unsupervised fair conditional generation methods on Colored MNIST and on the two classes of Waterbirds. We report the Intra-FID (↓) and minority occurrence ratio (%) of generated samples.

| | Colored MNIST | | Waterbirds | | Landbirds | |
|---|---|---|---|---|---|---|
| | Intra-FID (↓) | Minor (%) | Intra-FID (↓) | Minor (%) | Intra-FID (↓) | Minor (%) |
| Dataset | – | 20.00 | – | 5.03 | – | 5.00 |
| Estimated | – | 20.00 | – | 8.72 | – | 8.96 |
| cGAN (Mirza & Osindero, 2014) | 10.31 | 0.16 | 44.68 | 5.52 | 24.73 | 4.69 |
| Dia-GAN (Lee et al., 2021) | 10.20 | 0.05 | 44.32 | 5.54 | 24.80 | 4.66 |
| FICS (ours) | 2.78 | 23.50 | 41.80 | 6.75 | 23.46 | 4.89 |

## 5.2 Unsupervised Experiments

We first demonstrate the results in the unsupervised scenario, where the goal is to reduce the discrepancy between the generator and data distributions. For CelebA, we use "blondness" and "wearing lipstick" as input conditions. For each condition, we choose {male, eyeglasses, no lipstick} and {male, eyeglasses, hat} as the corresponding minority attributes. We report the Intra-FID and minority occurrence ratio (compared to the data distribution) of the baselines and FICS in Table 1 and Table 2.

Table 3: **Supervised results.** Comparison of supervised fair conditional generation methods where the "blondness" attribute is used as a class condition. We report the Fair Intra-FID (↓) and minority occurrence ratio (%) of generated samples. The lowest Fair Intra-FID and the closest minority occurrence ratio to the estimated one are marked in bold.
| | Target Male ratio: 30% | | Target Male ratio: 50% | |
|---|---|---|---|---|
| | Fair Intra-FID (↓) Blond | Male in Blond (%) | Fair Intra-FID (↓) Blond | Male in Blond (%) |
| Reference | – | 30.00 | – | 50.00 |
| Estimated | – | 28.60 | – | 47.15 |
| Weakly-supervised: Choi et al. (2020) | 10.71 | 7.60 | 10.70 | 18.34 |
| Weakly-supervised: FICS (ours) | 10.36 | 15.65 | 10.24 | 27.04 |
| Semi-supervised: FairGen (Tan et al., 2020) | 12.71 | 29.54 | 11.98 | 49.76 |
| Semi-supervised: FICS (ours) | 10.56 | 25.72 | 11.68 | 47.11 |

Figure 5: **Visualization.** Blond-conditionally generated samples from the original BigGAN and our method, under the semi-supervised scenario with a target male ratio of 50%. Our method creates diverse blond male images, unlike the original BigGAN, which is biased toward female images.

In the CelebA dataset, the vanilla cGAN suffers from the spurious causality issue, significantly amplifying the imbalance of minority attributes; e.g., it only creates 3.97% blond males while the original dataset contains 5.85%. Dia-GAN relieves this amplification and increases the minority occurrence; however, it often worsens the Intra-FID, i.e., it loses diversity as a cost of the boosted minority. In contrast, our proposed method, FICS, achieves the best of both worlds: it improves the Intra-FID in all considered cases and recovers the minority occurrence of the data distribution, e.g., producing 6.13% blond males, almost matching the 5.85% of the training dataset.

We observe a similar tendency in the Colored MNIST and Waterbirds datasets. In the Colored MNIST dataset, both the vanilla cGAN and Dia-GAN fail to generate a sufficient number of minority group samples. In contrast, FICS achieves a minority occurrence ratio close to the training dataset. In the Waterbirds dataset, the vanilla cGAN suffers from the spurious causality issue, providing only 5.52% and 4.69% minority samples for waterbirds and landbirds, respectively. Dia-GAN shows similar results to the vanilla cGAN, resulting from almost identical sample weights. FICS shows improved Intra-FID and the closest minority occurrence to that of the training set for both classes.

Table 4: **Ablation studies.** Comparison of (a) unsupervised and (b) semi-supervised fair conditional generation methods with or without each component of FICS. For (b), the "blondness" attribute is used as a class condition and 50% male in blond as the desired distribution. The lowest (Fair) Intra-FID and the closest minority occurrence ratio to the estimated one are marked in bold. Both fairness intervention (FI) and corrective sampling (CS) are beneficial.

| | (a) Unsupervised | | (b) Semi-supervised | |
|---|---|---|---|---|
| | Intra-FID (↓) | Male in Blond (%) | Fair Intra-FID (↓) | Male in Blond (%) |
| cGAN | 1.86 | 3.97 | 13.16 | 3.97 |
| FI only | 1.80 | 6.02 | 12.30 | 18.23 |
| CS only | 1.85 | 5.02 | 12.91 | 38.49 |
| FICS (ours) | 1.78 | 6.13 | 11.68 | 47.11 |

## 5.3 Supervised Experiments

We demonstrate the results for the supervised scenario, where the goal is to follow the fair distribution. We conduct experiments on the CelebA dataset, using "blondness" as an input condition and "gender" as a sensitive attribute.
Recall that the original data distribution has only a few blond males; we aim to learn a fair distribution that produces {30%, 50%} male images conditioned on blond. We report the Fair Intra-FID and minority occurrence ratio (which should be similar to the desired fair distribution) of the baselines and our proposed framework. Table 3 shows the results for the weakly- and semi-supervised scenarios; we discuss the details and observations in the remainder of this section.

Weakly-supervised. We use the same 4,000 samples (but without sensitive attribute annotations) as the reference set for the weakly-supervised setup. We compare our method with Choi et al. (2020), and use the same binary classifier that distinguishes the reference and data distributions for both methods; recall that we resample training data with the classifier while Choi et al. reweight the loss. The results show that our method outperforms Choi et al. in all considered cases, in both Fair Intra-FID (i.e., generation quality) and minority occurrence ratio (i.e., attribute balancing). Here, both methods still have a gap with the semi-supervised methods since they rely on a weaker form of supervision. Still, we remark that our method significantly reduces the gap between the semi-supervised and weakly-supervised scenarios, particularly in terms of the minority occurrence ratio.

Semi-supervised. We use 4,000 labeled (sensitive attribute) samples with the same attribute occurrence ratio as the desired fair distribution for the semi-supervised setup. We compare our method with FairGen, and use the same attribute classifier for both methods, which is trained by a semi-supervised learning technique called FixMatch (Sohn et al., 2020). The results show that our method achieves a better Fair Intra-FID than FairGen, while both methods show comparable minority occurrence ratios. Recall that FairGen modifies the prior distribution to generate minority samples; it distorts the prior toward the low-density area of the original Gaussian distribution. Thus, FairGen often generates low-fidelity samples (though satisfying the desired attribute), leading to a worse (higher) Fair Intra-FID.

Visualization. Figure 5 visualizes the blond-conditional samples generated by the vanilla BigGAN and by the one debiased with FICS. We use the semi-supervised version with a target male ratio of 50% for our method. While the original BigGAN mostly creates female images, our method creates diverse blond male images.

## 5.4 Ablation Study

We conduct an ablation study to see the contribution of each component in our semi-supervised method. The results are shown in Table 4. With fairness intervention during training (based on pseudo-labeling and group balancing), we observe gains in both Fair Intra-FID and minority occurrence ratio, but a large gap remains between the achieved and desired occurrence ratios. With corrective sampling after training, we observe a reasonable male occurrence ratio, though there is still room for improvement. With the full set of components, our semi-supervised method further improves both the minority occurrence ratio and the Fair Intra-FID.

## 6 Conclusion

We highlight the issue of spurious causality of conditional generation, an important yet underexplored fairness problem. To address this issue, we propose the FICS framework, which leverages intervened training and corrected sampling.
We hope that our work inspires further research on spurious correlations and generative models, particularly on the spurious causality of text-to-image generative models.

## Broader Impact Statement

Although we alleviate some of the spurious causality of conditional generative models, the model can still retain unrecognized biases. Since algorithmic debiasing cannot remove all possible biases from the model, users should carefully validate the model for their use case.

## Acknowledgments

This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); No. 2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics), Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1F1A1075067).

## References

Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.

Samaneh Azadi, Catherine Olsson, Trevor Darrell, Ian Goodfellow, and Augustus Odena. Discriminator rejection sampling. In *International Conference on Learning Representations*, 2018.

Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learning de-biased representations with biased representations. In *International Conference on Machine Learning*, 2020.

Kyle Barr. AI image generators routinely display gender and cultural bias. *Gizmodo*, November 2022. URL https://gizmodo.com/ai-dall-e-stability-ai-stable-diffusion-1849728302.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In *International Conference on Learning Representations*, 2019.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, 2020.

Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on Fairness, Accountability and Transparency*, 2018.

Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, and Stefano Ermon. Fair generative modeling via weak supervision. In *International Conference on Machine Learning*, 2020.

Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In *International Conference on Machine Learning*, 2021.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. In *Advances in Neural Information Processing Systems*, 2021.

Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. *arXiv preprint arXiv:2204.14217*, 2022.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing Systems*, 2014.
Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric J Horvitz, and Stefano Ermon. Bias correction of learned generative models using likelihood-free importance weighting. In *Advances in Neural Information Processing Systems*, 2019.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In *Advances in Neural Information Processing Systems*, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2016.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In *Advances in Neural Information Processing Systems*, 2017.

Ahmed Imtiaz Humayun, Randall Balestriero, and Richard Baraniuk. Magnet: Uniform sampling from deep generative network manifolds without retraining. In *International Conference on Learning Representations*, 2022.

Niharika Jain, Alberto Olmo, Sailik Sengupta, Lydia Manikonda, and Subbarao Kambhampati. Imperfect imaGANation: Implications of GANs exacerbating biases on facial data augmentation and snapchat selfie lenses. In *ICLR Workshop on Synthetic Data Generation - Quality, Privacy, Bias*, 2021.

Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alex Dimakis, and Eric Price. Fairness for image generation with uncertain sensitive attributes. In *International Conference on Machine Learning*, 2021.

Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In *Advances in Neural Information Processing Systems*, 2021.

Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, and Jinwoo Shin. Bias-to-text: Debiasing unknown visual biases through language interpretation. *arXiv preprint arXiv:2301.11104*, 2023.

Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In *Advances in Neural Information Processing Systems*, 2017.

Jinhee Lee, Haeri Kim, Youngkyu Hong, and Hye Won Chung. Self-diagnosing GAN: Diagnosing underrepresented samples in generative adversarial networks. In *Advances in Neural Information Processing Systems*, 2021.

Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In *International Conference on Machine Learning*, 2021.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *IEEE/CVF International Conference on Computer Vision*, 2015.

Giovanni Mariani, Florian Scheidegger, Roxana Istrate, Costas Bekas, and Cristiano Malossi. BAGAN: Data augmentation with balancing GAN. *arXiv preprint arXiv:1803.09655*, 2018.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. *arXiv preprint arXiv:1411.1784*, 2014.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In *International Conference on Learning Representations*, 2018.

Sangwoo Mo, Minsu Cho, and Jinwoo Shin. InstaGAN: Instance-aware image-to-image translation. In *International Conference on Learning Representations*, 2019a.

Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, and Jinwoo Shin. Mining gold samples for conditional GANs. In *Advances in Neural Information Processing Systems*, 2019b.

Sangwoo Mo, Minsu Cho, and Jinwoo Shin.
Freeze the discriminator: a simple baseline for fine-tuning GANs. *arXiv preprint arXiv:2002.10964*, 2020.

Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, and Jinwoo Shin. Object-aware contrastive learning for debiased scene representation. In *Advances in Neural Information Processing Systems*, 2021.

Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, and Jinwoo Shin. Masker: Masked keyword regularization for reliable text classification. In *AAAI Conference on Artificial Intelligence*, 2021.

Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: De-biasing classifier from biased classifier. In *Advances in Neural Information Processing Systems*, 2020.

Junhyun Nam, Jaehyung Kim, Jaeho Lee, and Jinwoo Shin. Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation. In *International Conference on Learning Representations*, 2022.

Judea Pearl. An introduction to causal inference. *International Journal of Biostatistics*, 2010.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. *arXiv preprint arXiv:2204.06125*, 2022.

Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In *International Conference on Learning Representations*, 2020.

Joni Salminen, Soon-gyo Jung, Shammur Chowdhury, and Bernard J. Jansen. Analyzing demographic bias in artificially generated facial pictures. In *Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems*, 2020.

Prasanna Sattigeri, Samuel C Hoffman, Vijil Chenthamarakshan, and Kush R Varshney. Fairness GAN. *arXiv preprint arXiv:1805.09910*, 2018.

Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In *Advances in Neural Information Processing Systems*, 2020.

Nimit Sohoni, Maziar Sanjabi, Nicolas Ballas, Aditya Grover, Shaoliang Nie, Hamed Firooz, and Christopher Ré. Barack: Partially supervised group robustness with guarantees. *arXiv preprint arXiv:2201.00072*, 2021.

Shuhan Tan, Yujun Shen, and Bolei Zhou. Improving the fairness of deep generative models without retraining. *arXiv preprint arXiv:2012.04842*, 2020.

Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2011.

Boris van Breugel, Trent Kyono, Jeroen Berrevoets, and Mihaela van der Schaar. Decaf: Generating fair synthetic data using causally-aware generative networks. In *Advances in Neural Information Processing Systems*, 2021.

Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the generative learning trilemma with denoising diffusion GANs. In *International Conference on Learning Representations*, 2022.

Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu. FairGAN: Fairness-aware generative adversarial networks. In *IEEE International Conference on Big Data (Big Data)*, 2018.

Depeng Xu, Yongkai Wu, Shuhan Yuan, Lu Zhang, and Xintao Wu. Achieving causal fairness through generative adversarial networks. In *International Joint Conferences on Artificial Intelligence*, 2019.

Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, and Mario Fritz. Inclusive GAN: Improving data and minority coverage in generative models. In *European Conference on Computer Vision*, 2020.
Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, and Jinwoo Shin. Generating videos with dynamics-aware implicit generative adversarial networks. In *International Conference on Learning Representations*, 2022.

Shengjia Zhao, Hongyu Ren, Arianna Yuan, Jiaming Song, Noah Goodman, and Stefano Ermon. Bias and generalization in deep generative models: An empirical study. In *Advances in Neural Information Processing Systems*, 2018.

Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. In *Advances in Neural Information Processing Systems*, 2020.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In *IEEE/CVF International Conference on Computer Vision*, 2017.

## A Implementation Details

## A.1 Classifiers

Network architectures. For CelebA and Waterbirds, we use the torchvision implementation of ResNet-50 starting from ImageNet-pretrained weights. For Colored MNIST, we use a 4-layer convolutional network with channel sizes of 32, 32, 64, 64, kernel sizes of 5, 3, 3, 3, and strides of 2, 1, 2, 1.

Biased classifiers for FICS. It is well known that a classifier trained with ERM on a biased dataset inherits the spurious correlations existing in the dataset (Sagawa et al., 2020; Liu et al., 2021). Thus, we did not apply any additional technique to amplify the spurious correlation. To handle class imbalance in the dataset, we use group DRO (Sagawa et al., 2020) to ensure a low worst-class error. We use the SGD optimizer with a learning rate of 0.001 and momentum 0.9 for 100 epochs with batch size 64. In addition, we use ℓ2 regularization of 0.0005 and early stopping based on the best validation accuracy.

Attribute classifiers for evaluation. For the attribute classifiers used in evaluation, we train the models with group DRO to ensure a low worst-group error. We use the SGD optimizer with a learning rate of 0.0001 and momentum 0.9 for 15 epochs with batch size 64. In addition, we use ℓ2 regularization of 0.01 and early stopping based on the best validation accuracy. Tables 5 and 6 report the accuracy of the attribute classifiers on the CelebA and Waterbirds datasets, respectively. The attribute classifier for classifying color on the Colored MNIST dataset showed 100% accuracy for all groups.

Table 5: Accuracy of the attribute classifiers used for evaluation on the CelebA dataset.

| | Blond | | Non-blond | |
|---|---|---|---|---|
| Attribute of interest | Present | Absent | Present | Absent |
| Male | 86.11 | 97.18 | 97.20 | 96.08 |
| Eyeglasses | 96.77 | 99.30 | 92.91 | 99.56 |
| Lipstick | 82.47 | 83.79 | 86.83 | 92.10 |

Table 6: Accuracy of the attribute classifiers used for evaluation on the Waterbirds dataset.

| | Waterbirds | | Landbirds | |
|---|---|---|---|---|
| Attribute of interest | Water | Land | Water | Land |
| Background | 94.74 | 90.98 | 89.70 | 95.93 |

Reference set classifiers for weak supervision. For the reference set classifiers, we follow the details of the original paper (Choi et al., 2020). We use the Adam optimizer with a learning rate of 0.0001 without any ℓ2 regularization for 15 epochs with batch size 64. We use the same reference set classifiers for both the weakly-supervised baseline of Choi et al. (2020) and our weakly-supervised version. We use α = 0.3 for a target male ratio of 30% and α = 0.5 for a target male ratio of 50% for the weakly-supervised baseline.
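As a concrete reference for how the reference-set classifier above could be turned into sample weights, the following is a minimal sketch of the (unnormalized) resampling weight of Eq. (2). `f_ref` is a hypothetical stand-in for the trained binary classifier, assumed to output the probability that a sample belongs to the reference set.

```python
import torch

@torch.no_grad()
def resampling_weight(f_ref, x, M=10.0):
    """Unnormalized resampling weight of Eq. (2). `f_ref(x)` is assumed to
    return p(f_ref(x) = 1), i.e., the probability that x belongs to the
    reference set; M is a large constant keeping the weights at most one."""
    p_ref = f_ref(x)                     # shape: (batch,)
    return p_ref / (M * (1.0 - p_ref))   # w(x) = p(r = 1) / (M * p(r = 0))
```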
Attribute classifiers for semi-supervision. We use 4,000 samples with sensitive attribute annotations and the rest of the samples in the training set without annotations to train the sensitive attribute classifier. We use FixMatch (Sohn et al., 2020), a recent state-of-the-art semi-supervised learning technique, to train the sensitive attribute classifier. We only use random horizontal flips for data augmentation and use a fixed threshold of 0.95 for τ. We use the SGD optimizer with a learning rate of 0.001 and momentum 0.9 for 100 epochs with batch size 64. In addition, we use ℓ2 regularization of 0.005 and early stopping based on the best validation accuracy.

## A.2 Generative Models

Network architectures. For the generative models, we implement our code based on the PyTorch-StudioGAN repository². We use BigGAN for the CelebA and Waterbirds experiments, and ResNetGAN with spectral normalization (Miyato et al., 2018) for the Colored MNIST experiments. For CelebA and Colored MNIST, we train our generative models from scratch. For Waterbirds, we train our model starting from the ImageNet-pretrained weights provided by the PyTorch-StudioGAN repository.

Training details. For all the baselines and our method on the CelebA dataset, we train BigGAN with the same configurations. We train the models using the Adam optimizer with a learning rate of 0.0002, β1 = 0, β2 = 0.999, for a total of 200k iterations with batch size 32. We update the discriminator 4 times per generator step. For our Waterbirds experiments, we train models using the Adam optimizer with a learning rate of 0.0001 for the generator and 0.0002 for the discriminator, β1 = 0, β2 = 0.999, for a total of 20k iterations with batch size 128. We update the discriminator 2 times per generator step. We use DiffAug (Zhao et al., 2020) to supplement the limited amount of training data. For the Colored MNIST experiments, we train the models using the Adam optimizer with a learning rate of 0.0002, β1 = 0.5, β2 = 0.999, for a total of 20k iterations with batch size 64. We update the discriminator 5 times per generator step.

Dia-GAN. For the CelebA dataset, we train 150k iterations for phase 1 and use the log-density ratios from iterations 140k to 150k to reweight samples for phase 2. For the Waterbirds dataset, we train 5k iterations for phase 1 and use the log-density ratios from iterations 500 to 5k to reweight samples for phase 2. For the Colored MNIST dataset, we train 4k iterations for phase 1 and use the log-density ratios from iterations 100 to 4k to reweight samples for phase 2. Following the original paper, we clip the sample weights to the range from 0.01 to 0.5.

FICS. For the CelebA dataset, we first pretrain for 100k iterations with the vanilla configuration and then finetune for the next 100k iterations. For finetuning, we freeze the first 4 layers of the discriminator using FreezeD (Mo et al., 2020) and add the proposed regularizer. We also start to resample after 100k iterations when supervision is provided. For the Waterbirds and Colored MNIST datasets, we did not perform any additional finetuning before FICS training. For the λ of the FICS regularizer, we tuned over {0.002, 0.005, 0.01, 0.02} and use the value with the best FID.
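As a reference for the resampling step used when supervision is provided, the following is a minimal sketch of the group-balanced resampling of Eq. (3), implemented with PyTorch's `WeightedRandomSampler`. `dataset`, `labels`, and `pseudo_attrs` are hypothetical stand-ins; `pseudo_attrs` would come from the FixMatch-trained attribute classifier $f_a$.

```python
from collections import Counter

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def group_balanced_loader(dataset, labels, pseudo_attrs, batch_size=32):
    """DataLoader drawing each example with probability inversely
    proportional to the size of its (class, pseudo-attribute) group,
    mirroring the resampling of Eq. (3)."""
    groups = list(zip(labels.tolist(), pseudo_attrs.tolist()))
    counts = Counter(groups)
    weights = torch.tensor([1.0 / counts[g] for g in groups],
                           dtype=torch.double)
    sampler = WeightedRandomSampler(weights, num_samples=len(groups),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```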
## B Baselines

We extend the prior (unconditional) fair generation methods as the baselines for fair conditional generation, i.e., modify their formulation from p(x) to p(x, y). We consider three representative methods for the unsupervised (Lee et al., 2021), supervised (Tan et al., 2020), and weakly-supervised (Choi et al., 2020) scenarios.

Unsupervised. Unsupervised fair generation aims to promote underrepresented samples and reduce the gap between the data and generator distributions. For instance, Dia-GAN (Lee et al., 2021) resamples the training data based on the log-density ratio $\log p_{\mathrm{data}}(x)/p_{\mathrm{gen}}(x)$ estimated by the discriminator.

Supervised. Given annotations of a sensitive attribute a ∈ A, a straightforward solution for fair conditional generation is conditioning on both y and a, as one can explicitly control the generated attributes using p(x|y, a). However, this joint-condition approach is hard to train when the data is imbalanced: the data is too scarce for the minority group (y, a). As an alternative, FairGen (Tan et al., 2020) proposes a latent modification approach that converts an unconditional generative model into a conditional model by learning a prior distribution $p_a(z)$ that restricts the samples to p(x|a), unlike the original prior p(z), which samples from p(x). Note that the supervised method performs well if the target attribute a is known; however, it requires heavy annotation costs and cannot prevent unknown biases.

Weakly-supervised. Choi et al. (2020) consider a weaker form of fairness supervision, assuming a reference set $S_{\mathrm{ref}}$ with empirical distribution $p_{\mathrm{ref}}(x) \approx p_{\mathrm{fair}}(x)$. Specifically, they reweight the training data with the density ratio $p_{\mathrm{ref}}(x)/p_{\mathrm{data}}(x)$ estimated by a classifier distinguishing the reference and data distributions. Note that the weakly-supervised method heavily depends on the collection of the reference set; it is often unrealistic to collect an oracle reference set in practice.

²https://github.com/POSTECH-CVLab/PyTorch-StudioGAN

## C Additional Experiments

## C.1 The Relationship Between The Biased Classifier And The Generator

Figure 6: The relationship between the biased classifier and the generator, represented through classification-loss percentile ranks of generated samples from (a) the vanilla BigGAN (Brock et al., 2019) and (b) FICS (ours). Given a point, its x and y coordinates represent the percentile rank of its classification loss among blond minority samples and among all blond samples, respectively. The correlation between the classification loss and the chance of being a minority sample still remains after FICS training.

To validate the effectiveness of our proposed regularization approach, we investigated the correlation between the classification loss and the probability of a sample being a minority at the end of FICS training. We present the results in Figure 6, where we plot the classification-loss percentile ranks as in Figure 3. As the number of minority samples generated by FICS training increased, we observed that a sample with a loss ranking in the 20th percentile among blond male samples had a relatively lower rank in the FICS-trained cGAN compared to the vanilla cGAN. However, we found that the correlation between the classification loss and the likelihood of a sample being a minority persisted even after FICS training. Based on these findings, we conclude that our regularizer, which is based on the observed correlation, consistently encouraged the cGAN to generate minority samples throughout the entire training process.
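For completeness, the following is a minimal sketch of the percentile-rank computation behind Figure 6: for each blond-male sample, compare the rank of its classification loss among blond males with its rank among all blond samples. `losses` and `is_male` are hypothetical arrays used for illustration.

```python
import numpy as np

def percentile_rank(value, population):
    # Percentile rank of `value` within `population`, in [0, 100].
    return 100.0 * np.mean(population <= value)

def rank_pairs(losses, is_male):
    """For each blond-male sample, return (rank among blond males,
    rank among all blond samples) of its classification loss."""
    male_losses = losses[is_male]
    return np.array([(percentile_rank(l, male_losses),
                      percentile_rank(l, losses)) for l in male_losses])
```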
Review 1:

Summary: The authors introduce FICS, a generative model that learns a fair distribution (wrt a protected feature) of the data. In particular, fairness is defined as being robust against "spurious causality" whereby a model may wrongly learn relationships such as hair colour -> gender (the example being blonde -> female). FICS learns a fair distribution through an adversarial term in their loss function where the distribution is corrected by "worsening" the ability to classify a protected attribute from the representation.

Strengths and Weaknesses:

STR.
- the authors acknowledge the importance of causality in a fairness definition and use it to their benefit. I like this perspective and would advocate more research on this topic.
- It seems the proposed method can be used post-hoc after training other generative models (though this needs confirming -> I asked a question in this vein below)

WKN.
- The proposed idea seems quite related to counterfactual fairness which should probably be discussed and compared against since it is quite a popular paper (https://proceedings.neurips.cc/paper/2017/hash/a486cd07e4ac3d270571622f4f316ec5-Abstract.html)
- following the above, there are quite some relevant papers missing from the authors' comparison (some of them also causal GANs which are trained to promote fair sampling). I listed them below.
- on p4 the authors introduce the notion of "spurious causality", whereby a generative model introduces a causal link based on spuriously correlated variables. I don't understand why we need such new terminology; what the authors describe as spurious causality is still spurious correlation, so why would this be different? Nothing in a GAN makes a correlation causal, just as there is nothing in a GAN that makes a spurious correlation spuriously causal.
- Fig 2 shows that we sample from the model by first performing a do-operation. Later in sec. 4.3.2 we see that the authors translate this do-operation by simply weighting the likelihood of each sample, such that the complete sample matches a fair distribution. How is this causal? Especially when p^{do} is given (in the form of a reference distribution), I'm completely at a loss why the proposed model is in any way causally more aware than subsampling from another model.
- There are more fairness notions than simple class imbalance (see the refs I listed below); how would this model translate those into a new ref. distribution? Furthermore, am I correctly understanding that the model requires retraining whenever we want to update our fairness definition?

### Refs.

van Breugel, B., Kyono, T., Berrevoets, J., & van der Schaar, M. (2021). Decaf: Generating fair synthetic data using causally-aware generative networks. Advances in Neural Information Processing Systems, 34, 22221-22233.

Lu Zhang, Yongkai Wu, and Xintao Wu. A causal framework for discovering and removing direct and indirect discrimination. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3929–3935, 2017. doi: 10.24963/ijcai.2017/549. URL https://doi.org/10.24963/ijcai.2017/549.

Flavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. Optimized data pre-processing for discrimination prevention, 2017.

Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact.
In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268, 2015.

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pages 325–333. PMLR, 2013.

Depeng Xu, Yongkai Wu, Shuhan Yuan, Lu Zhang, and Xintao Wu. Achieving causal fairness through generative adversarial networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019.

Requested Changes: Please consider the weaknesses I listed above and provide a new insight/answer to rebut those. Some additional questions:
- How would this approach work on tabular data? Causality is definitely more evolved in a tabular domain.
- Couldn't you just use _any_ generative model in combination with subsampling?

Broader Impact Concerns: I don't think there are any BI concerns.

==================================================

Review 2:

Summary: This work aims to solve the unfairness/spurious correlations caused by data imbalance in conditional generative models. For example, BigGAN conditioned on the attribute "blond" will generate female samples more frequently. It formulates the problem as approximating p(a|y) from the original dataset. It proposes different corrective sampling methods for different settings. Experiments show the proposed method can achieve high quality as well as mitigate the imbalance issue in conditional generation.

Strengths and Weaknesses:

S
- This work investigates an important fairness problem -- the amplified bias (imbalance) in conditional generative models.
- This work covers the unsupervised, weakly-supervised, and semi-supervised settings.

W
- It is not clear, from eq. (2) and the paragraphs following it, why conditional generative models always amplify the imbalance conditioned on a certain attribute in the dataset.
- Although, empirically, with a cGAN trained with the original loss, the authors find that the imbalance in sampling is correlated with the classification loss, note that this correlation is a property of the cGAN model trained with the original loss. It is not clear why training the cGAN model with eq. (3) can guarantee that more balanced samples would be generated.
- The presentation can be improved.

Requested Changes:
- In the unsupervised setting, the authors use a classifier to classify minority and majority groups; how did they train this classifier, and how is this unsupervised if a classifier is trained in a supervised manner?
- Can the authors explain, in the rejection sampling of the unsupervised setting, why the samples have to be accepted with probability proportional to p_{data}/p_{gen}?
- Is eq. (2) correct? It looks to me like the RHS is not a probability density function.
- Does FICS always take the same training time under the different settings (unsupervised, weakly-supervised, and semi-supervised)?
- The authors did not specify how w(x) will be used in resampling.

Broader Impact Concerns: NA

==================================================

Review 3:

Summary: This paper studies bias mitigation for conditional generative models in an unsupervised setting where the sensitive attribute is unknown. The idea is to build a classifier separately on the training data and use the classification loss to indicate underrepresented samples. Then, a regularizer is added to the loss of the generative model to promote the generation of samples with large classification losses.
Finally, rejection sampling is used to recover the original distribution.

Strengths and Weaknesses:

Strength
- The paper studies the unsupervised setting, which is different from prior work on fair generation where the sensitive attribute is given.
- The idea of using the classification loss to infer underrepresented samples is reasonable.

Weakness
- To explain the motivation of the paper, the authors introduce the concept of spurious causality, which is not quite convincing to me. To create a causal path from $Y$ (the act of conditioning on the label) to $A$ (the sensitive attribute), the authors assume that $A$ is a function of $X$. However, in reality we often assume that $X$ is a function of $A$, since sensitive attributes like gender and race represent the inherent natures of individuals. In addition, the paper does not explain why spurious causality is more problematic for downstream classifiers than spurious correlations.
- The proposed method FICS is actually decoupled from the proposed spurious causality concept and is directly based on the observation that "the generated samples that are classified as the minority group are more likely to have higher classification loss".

Requested Changes: It would be helpful for comprehension if the authors could add causal graphs in Section 3 to explain the difference between spurious causality and spurious correlation.

Broader Impact Concerns: None.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment:

### Summary
The paper identifies the problem of spurious correlations in conditional generative models (such as cGANs). The authors term this problem "spurious causality", as users may use such models to make causal deductions. They propose solutions to this problem for both the unsupervised and supervised settings, by minimising the effect of such spurious correlations on the cGAN objective.

### Reviewer Recommendations
Two reviewers found the paper to be interesting and were overall supportive of acceptance. A third reviewer had concerns about the usage of the term "spurious causality", since this does not refer to any causal structure in the original data, and the techniques proposed are about mitigating spurious correlation (and do not use any techniques from causality). I note that one of the positive reviewers had also raised the question of the difference between spurious correlation and causality.

### AE Recommendation
From my review of the paper, I find the points raised by all reviewers to be valid: the paper tackles what appears to be a novel problem, offers solutions for a few different settings, and shows good empirical results. It is also well-written. At the same time, the notion of "spurious causality" appears to be equivalent to the established notion of "spurious correlation", but specialised for the setting of conditional generative models. While I do understand the authors' motivation for using the former, there are some reasons to argue against it:
- as two reviewers were unclear on this point, it is plausible that other members of the TMLR audience might also be confused about the precise distinction to regular spurious correlations.
- the methods proposed are entirely in line with the spurious correlations literature, and are not apparently generalisable to other causality problems.
- in the precise examples used to evaluate the method, it is not immediately clear that an end-user might make causal inferences based on the suggested interventions.
e.g., on Waterbirds, it is not apparent that an end-user might make inferences about the type of bird affecting the background. Even on CelebA, it is not obvious that one might infer that "Blondness" _causes_ "Eyeglasses". (I understand that there may be other scenarios where such causal inference could be more plausible.)

On the other hand, if the problem were simply referred to as "spurious causality for generative models", it would not affect any of the technical details of the methods and experiments. While perhaps more verbose, even if one were to retain usage of the term "spurious causality", it would be appropriate to qualify that it is "spurious causality for conditional generation". This is because one could imagine encountering a problem of spurious causality in regular (i.e., non-generative) classification problems, where one attempts to make causal inferences using techniques from the causality literature.

Our recommendation is thus for the paper to be accepted, subject to a softening of the usage of the term "spurious causality". Our suggestion is for the authors to primarily use a term such as "spurious causality for generative models", or a suitable abbreviation. (The discussion on causality could perhaps be contained in one or two focussed sub-sections.) While the term "spurious causality" is used at various points in the paper, the total usage is about 20 times; thus, we believe this terminology change can be accommodated in a minor revision to be reviewed by the AE.

==================================================
# Equivariant Finite Normalizing Flows

Anonymous authors
Paper under double-blind review

## Abstract

Generative modelling seeks to uncover the underlying factors that give rise to observed data; these factors can often be modeled as the natural symmetries that manifest themselves through invariances and equivariances under certain transformation laws. However, current approaches to representing these symmetries are couched in the formalism of continuous normalizing flows that require the construction of equivariant vector fields, inhibiting their simple application to conventional higher dimensional generative modelling domains like natural images. In this paper, we focus on building equivariant normalizing flows using a finite number of layers. We first theoretically prove the existence of an equivariant map for compact groups whose actions are on compact spaces. We further introduce three new equivariant flows: G-Residual Flows, G-Coupling Flows, and G-Inverse Autoregressive Flows that elevate classical Residual, Coupling, and Inverse Autoregressive Flows with equivariant maps to a prescribed group G. Our construction of G-Residual Flows is also universal, in the sense that we prove a G-equivariant diffeomorphism can be exactly represented by a G-residual flow. Finally, we complement our theoretical insights with demonstrative experiments, for the first time on image datasets like CIFAR-10, and show that G-Equivariant Finite Normalizing Flows lead to increased data efficiency, faster convergence, and improved likelihood estimates.

## 1 Introduction

Many data generating processes are known to have underlying symmetries that govern the resulting data. Indeed, understanding symmetries within data can lead to powerful insights, such as conservation laws in physics, which are a direct result of the celebrated Noether's theorem, and the understanding of the natural equivariances of structured objects such as sequences, sets, and graphs. Representing such geometric priors as inductive biases in deep learning architectures has been a core design principle in many equivariant neural network models, leading to large gains in parameter efficiency and convergence speeds in supervised learning tasks (Cohen & Welling, 2016; Weiler & Cesa, 2019). Perhaps more importantly, equivariant models also enjoy a data-independent form of generalization as they are guaranteed to respect the encoded equivariances regardless of the amount of data available during training.

For generative modelling tasks like exact density estimation, the development of equivariant architectures currently requires the careful design of invariant potential functions whose gradients give rise to equivariant time-dependent vector fields. These methods belong to a larger family of models called Continuous Normalizing Flows (CNFs) (Chen et al., 2018) and, while elegant in theory, impose several theoretical and practical limitations. For instance, each equivariant map within a CNF must be globally Lipschitz, and in practice, we need to use off-the-shelf ODE solvers for forward integration, which may require hundreds of vector field evaluations to reach a suitable level of numerical accuracy. CNFs are also susceptible to other sources of numerical errors, such as computing gradients via backward integration, which is prone to producing noisy gradients that lead to computationally expensive training times and inferior final results (Gholami et al., 2019).
Moreover, calculating the log density requires computing the divergence of the vector field, which for computational tractability is estimated via Hutchinson's trace estimator, whose variance scales linearly with dimension, restricting its applicability to smaller systems in some applications (Köhler et al., 2020). Consequently, such numerical challenges inhibit the simple application of equivariant continuous flows to traditional generative modelling domains such as invariant density estimation over images, which are of significantly higher dimension than previously considered datasets.

Normalizing flows composed of finitely many layers, i.e., where each layer is an invertible map, are an attractive alternative to CNFs as they do not, in general, suffer the aforementioned numerical challenges, nor do they require external ODE solvers. However, imbuing symmetries within previously proposed flow layers is highly non-trivial, as for certain architectures the equivariance and invertibility constraints may be incompatible. Furthermore, even if both constraints are satisfied, the resulting function class modelled by the flow may not be universal, diminishing its ease of application when compared to non-equivariant finite flows. As a result, one may ask whether it is even possible to build equivariant finite layers that are also invertible, such that classical finite normalizing flows (e.g., RealNVP, Inverse Autoregressive Flows, Residual Flows) can be imbued with powerful inductive biases that preserve the natural symmetries found in data.

Present work. In this paper, we consider the general problem of building equivariant diffeomorphisms as finite layers in an Equivariant Finite Normalizing Flow (EFNF). To do so, we first ask the fundamental question—an open problem despite the existence of continuous equivariant flows—of whether such a map even exists between two invariant measures. We affirmatively answer this open question when both the group G and the space on which it acts are compact. Leveraging our theoretical insights, we design G-Residual and G-Affine Coupling Flows that are not only invertible but also equivariant by construction. Moreover, our G-Affine Coupling allows constructing equivariant instantiations of two popular flow architectures, RealNVP (Dinh et al., 2017) and the Inverse Autoregressive Flow (Kingma et al., 2016). In sharp contrast with past efforts on equivariant flows, our proposed layers are designed in the language of Steerable CNNs, which defines a very general notion of equivariant maps via convolutions on homogeneous spaces, of which $\mathbb{R}^n$ is only one example. Consequently, our proposed flows are more naturally equipped to handle conventional datasets in generative modelling, such as images, which are invariant to subgroups of the Euclidean group in two dimensions, E(2). On the theoretical front, we study the representational capabilities of G-Residual flows, and show that any equivariant diffeomorphism on $\mathbb{R}^n$ can be exactly represented by a corresponding G-residual flow when operating on an augmented space, i.e., zero-padding the input. Our key contributions are summarized as follows:

- We propose two novel G-equivariant layers, G-Affine Coupling and G-Residual, using which we build G-equivariant finite normalizing flows.
- We prove the existence of an equivariant diffeomorphism between two invariant measures when the group and the space it acts on are compact.
- We study the representational capabilities of our proposed equivariant finite flows.
  We show, by example, that while G-coupling flows cannot be applied to all input data types, G-residual flows are universal. Specifically, we prove the universality of G-residual flows by demonstrating that any G-equivariant diffeomorphism on $\mathbb{R}^n$ can be exactly represented by a G-residual flow.
- We conduct experiments on both synthetic invariant densities and higher dimensional image datasets such as Rotation MNIST and augmented CIFAR-10, and highlight the benefits of EFNFs.

## 2 Background And Preliminaries

## 2.1 Equivariance And Linear Steerability

In this section, we briefly review the necessary facts regarding equivariance. First, recall the main definition:

Definition 2.1. Let X and Y be two sets with an action of a group G. A map ϕ : X → Y is called G-equivariant if it respects the action, i.e., g · ϕ(x) = ϕ(g · x), ∀g ∈ G and x ∈ X. A map χ : X → Y is called G-invariant if χ(x) = χ(g · x), ∀g ∈ G and x ∈ X.

Modelling equivariant maps becomes important when we have a group action on the input and we want to "preserve" this action during the transformation. For example, if we have an image, we would like its intermediate representations to transform accordingly every time we rotate the image itself. Cohen & Welling (2016) introduced steerable CNNs satisfying this property. In particular, an (ideal) image can be thought of as a function $f : \mathbb{R}^2 \to \mathbb{R}^K$, where K is the number of channels. The set of all possible images $\mathcal{F}$ forms a vector space. Given a representation $R$ of a group $G$, one can define its action on the set of all images via the *induced representation* $[\mathrm{Ind}_{G}^{(\mathbb{R}^2,+)\rtimes G}\,R]$ of $E(2)$. Note that the group E(2) contains elements $tg$, where $t \in (\mathbb{R}^2, +)$ and $g \in O(2)$. We can then simply write the action on $\mathcal{F}$ by $[g \cdot f](x) = R_g f(g^{-1}(x - t))$, where $R_g \in GL(\mathbb{R}^c)$ is a representation and c is the number of channels. Clearly, input images transform as scalar fields, in which case $R_g$ is the trivial representation, but intermediate layers in an equivariant network may transform under other types such as the regular representation. If one considers a feature space $\mathcal{F}'$ (a vector space with a G-action), then a convolution layer $\phi : \mathcal{F} \to \mathcal{F}'$ is called steerable if it is G-equivariant; its kernel κ can be built using irreducible representations of G (Weiler & Cesa, 2019).

## 2.2 Normalizing Flows

In this section, we recall the most relevant constructions of Normalizing Flows and set notation. A more extensive overview of Normalizing Flows is given in (Kobyzev et al., 2020; Papamakarios et al., 2019). To model a target density, one can choose a simple base density and then transform it with a parameterized diffeomorphism (a differentiable invertible map whose inverse is also differentiable). A *normalizing flow* is such a way to model a diffeomorphism such that its Jacobian determinant and inverse are easily computable (i.e., faster than $O(n^3)$). More formally, starting from a sample from a base distribution, z ∼ q(z), and a diffeomorphism $f : \mathbb{R}^n \to \mathbb{R}^n$, the density p(z′) of z′ = f(z) is determined by the chain rule as:

$$\log p(z^{\prime})=\log q(z)-\log\left|\det\frac{\partial f}{\partial z}\right|.\tag{1}$$

There are multiple ways to construct normalizing flows. In this work we consider flows which are compositions of a finite number of elementary diffeomorphisms ($f = f_K \circ f_{K-1} \circ \cdots \circ f_1$), so-called *discrete flows*, whose output density is determined by

$$\ln p(z_K)=\ln p(z_0)-\sum_{k=1}^{K}\ln\left|\det\frac{\partial f_k}{\partial z_{k-1}}\right|.$$

We will also focus on the case when each $f_k$ is an affine flow (Dinh et al., 2017; Rezende & Mohamed, 2015) due to their simplicity and expressiveness in capturing complex distributions.
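To make the change-of-variables computation above concrete, the following is a minimal PyTorch sketch of evaluating the model log-density of a discrete flow by accumulating per-layer log-determinants. The `flow_layers` interface and `base_log_prob` are hypothetical stand-ins, not the architecture proposed in this paper.

```python
import torch

def flow_log_prob(x, flow_layers, base_log_prob):
    """Model log-density of a discrete flow f = f_K o ... o f_1: map x back
    to the base space and accumulate the per-layer log |det Jacobian| terms
    of the inverse maps."""
    z, total_log_det = x, torch.zeros(x.shape[0])
    for layer in reversed(flow_layers):
        # Each layer is assumed to expose inverse(z) -> (z_prev, log|det J|)
        # where the log-determinant is that of the inverse map f_k^{-1}.
        z, log_det = layer.inverse(z)
        total_log_det = total_log_det + log_det
    return base_log_prob(z) + total_log_det
```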
We will also focus on the case where each $f_i$ is an affine flow (Dinh et al., 2017; Rezende & Mohamed, 2015) due to their simplicity and expressiveness in capturing complex distributions.

RealNVP. Dinh et al. (2017) introduced real-valued non-volume preserving (RealNVP) flows, where affine coupling layers are stacked together. These layers update a part of the input vector using a function that is simple to invert, but which depends on the remainder of the input vector in a complex way. Formally, given an n-dimensional input x and $d < n$, the output y of an affine coupling layer $\mathcal{C} : \mathbb{R}^n \to \mathbb{R}^n$ is given by:

$$y_{1:d} = x_{1:d} \tag{2}$$
$$y_{d+1:n} = x_{d+1:n} \odot \exp(s(x_{1:d})) + t(x_{1:d}). \tag{3}$$

One can immediately write its inverse, which is itself another coupling layer. For the remainder of the paper we will use $\mathcal{C} = \mathrm{concat}(\mathcal{C}_1, \mathcal{C}_2)$ to refer to the coupling layer operations on the first partition (identity map) and second partition of the input (equation 3) respectively.

Inverse Autoregressive Flows. Affine coupling flows using the RealNVP architecture enjoy computationally efficient evaluation and sampling, but often require a longer chain of transformations than other flow-based models to achieve comparable performance. An extension of the RealNVP architecture which transforms the entire input in a single pass, while iterating over all feature indices, is the Inverse Autoregressive Flow (IAF) model (Kingma et al., 2016). Specifically, an IAF model $\mathcal{I}$ with affine coupling can be defined over each index i, using again scale and translation networks s and t, as follows:

$$\mathcal{I}(x)_i = s_i(x_{<i}) \cdot x_i + t_i(x_{<i}). \tag{4}$$

As the i-th output depends on all previous indices $< i$, the Jacobian of an IAF layer is lower triangular and can be computed in linear time.

Residual Normalizing Flows. An orthogonal approach to designing expressive finite normalizing flows is to instead consider residual networks as a form of Euler discretization of an ODE:

$$x_{t+1} = x_t + h_t(x_t) \tag{5}$$

Here, $x_t$ represents the activations at a given layer t (or time). A sufficient condition for invertibility is then $\mathrm{Lip}(h_t) < 1, \ \forall t = 1, \ldots, T$. A ResNet whose $h_t(x_t)$ satisfies this invertibility condition is known as an i-ResNet in the literature (Behrmann et al., 2019). In practice, satisfying the Lipschitz constraint means constraining the spectral norm of every weight matrix—i.e. $\mathrm{Lip}(h) < 1$ if $\|W_i\|_2 < 1$. While i-ResNets are guaranteed to be invertible, there is no analytic form of the inverse. However, the inverse can be obtained by a fixed-point iteration, using the output of a layer as the starting point for the iteration. The Jacobian determinant can likewise be estimated efficiently.
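As a small illustration of this fixed-point inversion, here is a sketch under our own toy parameterization $h(x) = \tanh(Wx)$ (which is 1-Lipschitz composed with W, so $\mathrm{Lip}(h) \le \|W\|_2$); it is not the architecture used later in the paper:

```python
import numpy as np

def residual_layer(x, W):
    """i-ResNet layer x + h(x) with h(x) = tanh(W x); invertible when ||W||_2 < 1."""
    return x + np.tanh(W @ x)

def invert_by_fixed_point(y, W, n_iter=50):
    """Solve y = x + h(x) via the Banach fixed-point iteration x <- y - h(x),
    using the layer output y itself as the starting point."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - np.tanh(W @ x)
    return x

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / np.linalg.norm(W, 2)       # enforce spectral norm < 1
x = rng.standard_normal(4)
y = residual_layer(x, W)
x_rec = invert_by_fixed_point(y, W)
print(np.max(np.abs(x - x_rec)))      # ~1e-16: the inversion succeeds
```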
## 3 Existence Of Equivariant Diffeomorphisms

In this paper we are most concerned with the generative modelling of symmetric densities defined over Euclidean spaces. More specifically, assume that we have a group G acting on $\mathbb{R}^n$ and our goal is to model a G-invariant density p. Adopting the normalizing flows approach, one needs to understand how to pick the base density q and how it should be pushed to the desired target p. The basic result opening the topic of equivariant normalizing flows is the following theorem.

Theorem 1. (Papamakarios et al., 2019; Köhler et al., 2020, Theorem 1) Let p be the density function of a flow-based model with transformation $\phi : \mathbb{R}^n \to \mathbb{R}^n$ and base density q. If ϕ is a G-equivariant map and q is a G-invariant density with respect to a group G, then p is also a G*-invariant density on* $\mathbb{R}^n$.

The proof can be found in (Papamakarios et al., 2019), with an analogous result in (Köhler et al., 2020). Theorem 1 gives us a general recipe to construct an equivariant normalizing flow by choosing an appropriate invertible map ϕ that is equivariant with respect to G. A small technical consideration is the choice of the base distribution q, which must be invariant with respect to G; for example, the standard normal distribution is invariant to rotations about the origin and reflections. The next question one could ask is whether such a construction of an equivariant map ϕ is always possible. In particular, let p and q be two G-invariant densities. Does there always exist a G-equivariant map ϕ that pushes forward one density into the other (i.e., $p\,dx = \phi_*(q\,dx)$)? We can prove a more restricted result in the case when the group G is compact and the ambient space is not $\mathbb{R}^n$ but a compact manifold M.

Theorem 2. *Let G be a compact group with a smooth action on a connected compact smooth orientable manifold M, with or without boundary. Let µ and ν be two G-invariant volume forms representing the given orientation. Assume that $\int_M \mu = \int_M \nu$. Then there exists a G-equivariant diffeomorphism ϕ such that $\phi^*\mu = \nu$.*

Proof Sketch. The full proof is detailed in §A; here we provide a proof sketch for the case when M is a closed manifold. The proof can be obtained as an equivariant modification of Moser's trick (Moser, 1965). First, the equality of integrals implies the existence of a form η such that ν = µ + dη. Without loss of generality, we can assume that the form η is G-invariant (if not, we can average it over the group action and consider the averaged form instead). Then, we can connect the volume forms ν and µ by a segment: $\mu_t = \mu + t\,d\eta$ for t ∈ [0, 1]. Note that $\mu_0 = \mu$ and $\mu_1 = \nu$. We want to find a 1-parameter continuous family of diffeomorphisms (an isotopy) $\{\phi_t\}_{t \in [0,1]}$ such that:

$$\phi_t^* \mu_t = \mu_0. \tag{6}$$

As the manifold is compact, an isotopy can be generated by the flow of a time-dependent vector field $v_t$ (Spivak, 1999, Chapter 5). As deduced in the computation given in equations 21–24 in Appendix §A, the desired equation 6 will be satisfied if:

$$i_{v_t}\mu_t + \eta = 0. \tag{7}$$

Because $\mu_t$ is non-degenerate, we can solve this equation pointwise. As a result, we obtain a unique smooth vector field $v_t$. The compactness of M allows us to integrate $v_t$ into the flow $\phi_t$. Since $\mu_t$ and η are G-invariant, so is $v_t$, and integrating $v_t$ results in a G-equivariant diffeomorphism as required.

Theorem 2 guarantees the existence of an equivariant diffeomorphism that pushes forward a base density to any desired target. Crucially, this means that if ϕ is within the representation capability of a chosen flow model, then our goal of modelling invariant densities via equivariant diffeomorphisms is justified. Note that an alternative proof of this fact is given in Katsman et al. (2021). They proved the existence of the diffeomorphism by integrating a constructed vector field in the direction of decreasing KL divergence between the source and the target distributions, which requires fixing a Riemannian structure over the manifold. In contrast, our result and proof follow a different logic and are more geometric. In §4.2 we show that the equivariant diffeomorphism ϕ can always be represented exactly using a G-residual flow operating on an augmented space.
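Theorem 1's recipe can be checked numerically on a toy example. The sketch below is our own construction (not from the paper): a radial map on $\mathbb{R}^2$ is SO(2)-equivariant, so pushing the rotation-invariant standard normal through it should yield equal log densities at y and at any rotation of y. The inversion by bisection and the finite-difference Jacobian are purely illustrative.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def f(x):
    """A toy SO(2)-equivariant diffeomorphism on R^2: radial maps commute with
    rotations, so f(R x) = R f(x)."""
    return x * (1.0 + 0.5 * np.tanh(np.linalg.norm(x)))

def log_pushforward(y, eps=1e-6):
    """log p(y) for y = f(z), z ~ N(0, I): invert the monotone radial profile by
    bisection and estimate log|det df/dz| with central finite differences."""
    ry, lo, hi = np.linalg.norm(y), 0.0, np.linalg.norm(y)
    for _ in range(100):                       # solve r (1 + 0.5 tanh r) = ||y||
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * (1 + 0.5 * np.tanh(mid)) < ry else (lo, mid)
    z = y / (1.0 + 0.5 * np.tanh(lo))
    J = np.column_stack([(f(z + eps * e) - f(z - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
    return -0.5 * z @ z - np.log(2 * np.pi) - np.log(abs(np.linalg.det(J)))

y = np.array([0.7, -1.2])
print(log_pushforward(y), log_pushforward(rot(1.3) @ y))   # agree up to numerics
```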
## 4 Constructing G-Equivariant Normalizing Flows

The existence of a G-equivariant diffeomorphism—while providing a solid theoretical foundation—gives no instruction on how we can easily construct such maps in practice. We now outline and demonstrate practical methods for building G-equivariant maps in the language of discrete normalizing flows. Specifically, we are interested in crafting normalizing flow models that take an invariant prior density to an invariant target density where the invariance properties are known *a priori*. More precisely, this means that each invertible function $f_i : \mathbb{R}^n \to \mathbb{R}^n$ in our flow must additionally be an equivariant map with respect to a prescribed group G acting on $\mathbb{R}^n$. Mathematically, this requires each $f_i$ to satisfy the following transformation law:

$$f_i(R_g x) = T_g f_i(x), \quad \forall x \in \mathbb{R}^n, \ \forall g \in G,$$

where $R_g$ and $T_g$ are two representations of the group element g ∈ G. For the remainder of the paper we will take the group G to be a subgroup of the Euclidean group in n dimensions, $E(n) \cong (\mathbb{R}^n, +) \rtimes O(n)$, which can be constructed from the translation group and the orthogonal group in n dimensions via the semi-direct product. For instance, natural images—despite being objects in $\mathbb{R}^n$—may transform along isometries of the plane $\mathbb{R}^2$ (e.g. rotations, reflections, and translations), which are captured by the group E(2). Consequently, to understand how data transforms under prescribed equivariance constraints it is equally important to understand how data can be best represented. In §4.1 we outline one possible avenue for representing data as a combination of a chosen base manifold such as $\mathbb{R}^2$ and natural fibers (e.g. channels in an image) that assign a value to every point on the base. We then exploit this construction in §4.4 and §4.2 to build G-Affine coupling and G-residual flows respectively, while proving the universality of the latter in §4.3. Finally, we close the section with a construction of linear equivariant maps, which are more of theoretical than practical interest.

## 4.1 Exploiting Geometric Structure

In classical approaches to normalizing flows, including prior work on equivariant CNFs, it is customary to treat each input as a point residing in some n-dimensional space by vectorizing the input $\hat{x} = \mathrm{Vec}(x)$. However, a seemingly innocuous operation like vectorizing, without corresponding constraints on the model, also destroys exploitable geometric structure within the data—e.g. rotation equivariance—hampering the overall generative modelling task. Fig. 1 illustrates this phenomenon on an image of a planar circle which is rotated by π/2 rotations. Clearly, a planar circle is invariant to any continuous rotation in SO(2), but when discretized into four quadrants it transforms according to the finite subgroup C4.

Figure 1: All possible actions of the group C4 on a 2D image of a circle. Each quadrant is labelled from 1–4.

We can now treat this 2d planar circle as a point in $\mathbb{R}^4$ by discretizing and labelling each quadrant from 1 to 4. Now any rotation in $\mathbb{R}^4$ can be modelled as an element g ∈ SO(4), but notice that the action of π/2 planar rotations corresponds to specific permutations of the quadrant labels—i.e. $R_g \in GL(4)$ is a permutation matrix. In fact, these are the *only* permissible permutations out of a possible 4! permutations.
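This observation is easy to verify directly. The short sketch below (our illustration) builds the four permutation matrices realizing the C4 action on the quadrant-labelled circle in $\mathbb{R}^4$:

```python
import numpy as np

def c4_representation(k):
    """Permutation matrix R_{g^k} in GL(4) for the k-th power of the C4 generator:
    a pi/2 rotation sends quadrant 1 -> 2 -> 3 -> 4 -> 1 (a cyclic permutation)."""
    R = np.zeros((4, 4))
    for i in range(4):
        R[(i + k) % 4, i] = 1.0
    return R

# The group has exactly 4 elements -- far fewer than the 4! = 24 permutations.
group = [c4_representation(k) for k in range(4)]
x = np.array([1.0, 2.0, 3.0, 4.0])        # the labelled quadrants as a point in R^4
for R in group:
    print(R @ x)                          # the only permissible permutations of x
# Closure check: composing any two elements stays inside the group.
assert all(any(np.allclose(A @ B, C) for C in group) for A in group for B in group)
```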
This important observation dictates that any equivariant normalizing flow should assign equal likelihoods of the 2d planar circle to all transformations under C4, but not to all possible permutations of the quadrants. How can we build normalizing flows that are capable of exploiting the rich underlying geometry found in data modalities like images? We leverage the theory of steerable CNNs (Cohen & Welling, 2016), which provides a thorough treatment of building equivariant linear maps on homogeneous spaces. More prominently, it allows us to reason over different types of geometric structures present in data in the language of associated vector bundles. For example, in the case of equivariant generative modelling of images, we can consider $\mathbb{R}^2$ to be the base space and each channel (e.g. RGB) as a scalar field that transforms independently. In fact, the theory of steerable G-CNNs is applicable not just to scalar fields but also to vector and tensor fields, making them an ideal tool to study general equivariant maps. The principal benefit of this design choice is that any equivariance constraint on a given layer can be represented as a convolution kernel with an analogous constraint on a given linear operator. We now introduce two novel equivariant layers that serve as building blocks for constructing G-equivariant normalizing flows.

## 4.2 G-Residual Normalizing Flows

As we demonstrate in §4.4, regular coupling transformations can be made equivariant by modifying a vanilla coupling layer into a G-coupling layer. This raises the question of whether such a layer exists for the other families of normalizing flows, namely residual flows. We now show that this is indeed possible by introducing a G-residual layer. Let us consider residual networks of the form:

$$\phi^i(x) = x + h^i(x)$$

where each sub-network $h^i : \mathbb{R}^d \to \mathbb{R}^d$ at layer i is parametrized by its weight matrix $W_i$. A composition of maps $\phi^i$—i.e. $\phi = (\phi^1 \circ \phi^2 \circ \cdots \circ \phi^T)$—is then a deep residual network. If the map h also satisfies the condition Lip(h) < 1, then each $\phi^i$ is also invertible (Behrmann et al., 2019). If h is additionally equivariant with respect to a group G, then the map $\phi^i$—and by extension ϕ—is also an equivariant map, and we call the equivariant map $\phi^i$ a G-Residual layer.

Proposition 1. Let $\phi^i : \mathbb{R}^n \to \mathbb{R}^n$ be a G-residual layer as defined above. Let R be a representation for the group G. If $h^i : \mathbb{R}^n \to \mathbb{R}^n$ is a G-equivariant map, then $\phi^i$ is also a G-equivariant map—i.e. $\phi^i(R_g x) = R_g \phi^i(x)$—with respect to the group G.

Proof. We begin by writing the result of first transforming the input to a G-residual layer as follows:

$$\begin{aligned}
\phi^i(R_g x) &= R_g x + h^i(R_g x) \quad &(8)\\
&= R_g x + R_g h^i(x) \quad &(9)\\
&= R_g (x + h^i(x)) \quad &(10)\\
&= R_g \phi^i(x) \quad &(11)
\end{aligned}$$
$\square$

It is important to note that our chosen way of enforcing invertibility, by controlling the Lipschitz constant of every layer, is fully compatible with equivariance. That is, point-wise multiplication of the weight matrices by $c/\sigma$ commutes with any group representation $R_g$.
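As a minimal concrete instance of Proposition 1 (using the regular representation of a cyclic group rather than the paper's steerable-CNN implementation), the sketch below builds a $C_8$-equivariant residual layer: a circulant map composed with a pointwise nonlinearity commutes with cyclic shifts, and rescaling the kernel enforces the Lipschitz condition.

```python
import numpy as np

def circular_corr(x, kernel):
    """h(x)_i = sum_j kernel_j * x_{(i+j) mod n}: a circulant linear map that
    commutes with cyclic shifts, hence C_n-equivariant."""
    n = len(x)
    return np.array([sum(kernel[j] * x[(i + j) % n] for j in range(len(kernel)))
                     for i in range(n)])

def g_residual_layer(x, kernel):
    """phi(x) = x + h(x) with h = tanh o circular_corr. Invertibility holds since
    Lip(h) <= sum(|kernel|) < 1 (tanh is 1-Lipschitz; circulant op-norm bound)."""
    return x + np.tanh(circular_corr(x, kernel))

rng = np.random.default_rng(0)
kernel = rng.standard_normal(3)
kernel *= 0.9 / np.abs(kernel).sum()           # enforce the Lipschitz constraint
x, k = rng.standard_normal(8), 3               # a signal and a shift g = generator^3
lhs = g_residual_layer(np.roll(x, k), kernel)  # phi(R_g x)
rhs = np.roll(g_residual_layer(x, kernel), k)  # R_g phi(x)
print(np.max(np.abs(lhs - rhs)))               # ~0: the layer is C_8-equivariant
```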
## 4.3 Representation Capabilities Of G-Residual Flows

For any G-equivariant flow it is natural to ask about its representational power in the space of functions and distributions. Unfortunately, it is well known that i-ResNets, neural ODEs, and, as a result, conventional residual flows are unable to approximate arbitrary diffeomorphisms without the use of auxiliary dimensions—i.e. zero-padding (Zhang et al., 2020; Dupont et al., 2019). For example, the function f(x) = −x cannot be approximated by an i-ResNet. An analogous question for G-residual flows is whether they can approximate any G-equivariant diffeomorphism on $\mathbb{R}^n$. We now show it is possible to construct—in a similar manner to Zhang et al. (2020)—a G-residual flow that exactly represents any G-equivariant diffeomorphism. To do so, we first extend the action of G to the augmented space in a trivial manner. Given an action of a group G on a vector space V, then for any other vector space V′, we can define a G-action on V ⊕ V′ by: $g \cdot [v, v'] \mapsto [g \cdot v, v']$. This allows us to extend a G-action to the padded space, where $V = V' = \mathbb{R}^n$. We can then extend Theorem 6 of Zhang et al. (2020) to the equivariant case.

Theorem 3. *Assume that G acts on $\mathbb{R}^n$. For any G-equivariant Lipschitz continuous diffeomorphism $\phi : \mathbb{R}^n \to \mathbb{R}^n$ there exists a G-residual flow on the padded space $\psi : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$, where the G-action is extended to the padding of $\mathbb{R}^n$ as above, such that $\psi([x, 0]) = [\phi(x), 0]$ for any $x \in \mathbb{R}^n$.*

Proof Sketch. The full proof is outlined in §B. As Zhang et al. (2020) show, there exists a residual flow on the padded space $\psi : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ such that $\psi([x, 0]) = [\phi(x), 0]$. We must now show that this flow is equivariant with respect to the extended G-action on $\mathbb{R}^{2n}$. Indeed, by G-equivariance of ϕ and the definition of the extended G-action we have:

$$\psi(g \cdot [x, 0]) = \psi([g \cdot x, 0]) = [\phi(g \cdot x), 0] = [g \cdot \phi(x), 0] = g \cdot [\phi(x), 0] = g \cdot \psi([x, 0]).$$
$\square$

Theorem 3 shows that a G-residual flow operating on an augmented space is sufficiently powerful to exactly map a G-invariant prior to a G-invariant target. What Theorem 3 does not address, however, is the ease of optimization and the resulting training dynamics due to the interplay of the Lipschitz and equivariance constraints, which are needed for invertibility and for guaranteeing the pushforward to be an invariant density.

## 4.4 G-Coupling Normalizing Flows

Coupling layers form the backbone of countless normalizing flow architectures due to their simplicity in computing the change in volume, as well as having an analytic inverse. Thus it is tempting to consider whether such coupling layers can be made G-equivariant. To construct such a flow we need to impose some restrictions on the flow function and the representation of the group. We outline these in the following proposition.

Proposition 2. Let $\mathcal{C} : \mathbb{R}^n \to \mathbb{R}^n$ be a coupling layer with functions s and t defined as follows:

$$y_{1:d} = x_{1:d} \tag{12}$$
$$y_{d+1:n} = x_{d+1:n} \odot s(x_{1:d}) + t(x_{1:d}). \tag{13}$$

Also let R be an n-dimensional representation of the group G. Assume that n = 2d and R is a completely reducible diagonal permutation representation: $R(g) = (R_g, R_g)$, where $R_g$ is a d × d permutation matrix. If s and t are G-equivariant then the coupling layer $\mathcal{C}$ is G-equivariant.

Proof. The equivariance condition for the flow, $R(g)\mathcal{C}(x) = \mathcal{C}(R(g)x)$, ∀g ∈ G, written explicitly on the second partition gives:

$$\mathcal{C}(R(g)x)_{d+1:n} = R_g x_{d+1:n} \odot s(R_g x_{1:d}) + t(R_g x_{1:d}) \tag{14}$$

Equation 14 outlines which restrictions on the functions $s, t : \mathbb{R}^d \to \mathbb{R}^d$ are sufficient to obtain an equivariant coupling flow. Using the assumption on the representation R, one has: $R_g(\mathcal{C}_2(x)) = R_g x_{d+1:n} \odot s(R_g x_{1:d}) + t(R_g x_{1:d})$, where $\mathcal{C}_2$ is the coupling layer operating on the second partition of the input vector, as described in equation 3.
Since permutation matrices satisfy the identity $R_g(x \odot y) = (R_g x) \odot (R_g y)$, it is sufficient to take s and t to be G-equivariant. Using this, and the fact that the identity map is trivially equivariant, the overall coupling layer $\mathcal{C}$ is also equivariant.

Remark 1. From equation 14 we already see that the representation R cannot be irreducible. To preserve equivariance we need a non-linearity equivariant to permutations (for example, element-wise exp as in equation 3). Furthermore, because of the non-commutativity of the Hadamard product with general matrix multiplication, R cannot be an arbitrary representation. Indeed, this is the key rationale used by Köhler et al. (2020) to justify the negative claim that G-equivariant coupling flows do not exist. However, as we prove, if we employ the permutation representation then the Hadamard product commutes with the group action, and as a result we obtain a G-equivariant coupling layer. The immediate consequence of needing to use permutation representations is that G must be finite, which eliminates compact groups such as SO(2). However, in practice, due to discretization and aliasing of signals, finite subgroups of E(2) under the regular representation—which act by permutations themselves—form a large class of groups that can be modelled using G-coupling flows. Our proposed definition of the G-coupling layer can be seen as the most general equivariant coupling layer, and is in fact a strict generalization of previous efforts when G is taken to be the permutation group (Rasul et al., 2019; Biloš & Günnemann, 2021). The G-coupling layer, while being equivariant, is limited by the fact that the representation R of the group cannot be irreducible. In practice, we can take representations of the group independently for each channel attached to the base space (e.g. RGB channels in an image, where the base space is $\mathbb{R}^2$). However, when such a decoupling is not possible, a G-coupling equivariant flow cannot learn the desired target density.

## 4.5 G-Inverse Autoregressive Flows

In a similar manner to G-coupling flows, we can construct Inverse Autoregressive Flows that respect the desired symmetry constraint. However, unlike non-equivariant IAF models, to bake in non-trivial symmetries we consider k equal partitions of the input, which allows us to define a layer in a G-equivariant IAF flow.

Proposition 3. Let $\mathcal{I}(x)_i : \mathbb{R}^d \to \mathbb{R}^d$ be the i-th block transformation of an IAF layer with scale and translation functions s and t as defined above. Also let R be an n-dimensional representation of the group G. Assume that n = k · d and R is a completely reducible diagonal permutation representation: $R(g) = (R_{g_1}, R_{g_2}, \cdots, R_{g_k})$, such that each $R_{g_i}$, i ∈ [k], is a d × d permutation matrix. If s and t are G-equivariant then the IAF layer $\mathcal{I}$ is G-equivariant.

Proof Sketch. The result follows from the G-equivariance of an affine coupling layer (Proposition 2) applied k times, once for each partition i ∈ [k] of the input to the IAF layer. A sketch of the underlying G-coupling construction is given below.
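The following sketch (our toy instantiation, not the paper's steerable implementation) checks Proposition 2 numerically: a cyclic group acts by the same permutation on both halves of the input, and equivariant networks for s and t yield an equivariant coupling layer.

```python
import numpy as np

def shift(x, k):
    """Regular representation of the cyclic group C_d on R^d: a cyclic shift."""
    return np.roll(x, k)

def equivariant_net(x):
    """A toy permutation-equivariant map: a pointwise nonlinearity plus the mean,
    which commutes with any permutation of coordinates (hence with C_d shifts)."""
    return np.tanh(x) + x.mean()

def g_coupling(x, d):
    """Equations (12)-(13) with s = exp(equivariant_net) and t = equivariant_net;
    the exp keeps the scale positive so the layer stays invertible."""
    x1, x2 = x[:d], x[d:]
    y2 = x2 * np.exp(equivariant_net(x1)) + equivariant_net(x1)
    return np.concatenate([x1, y2])

rng = np.random.default_rng(0)
d, k = 4, 2
x = rng.standard_normal(2 * d)
Rx = np.concatenate([shift(x[:d], k), shift(x[d:], k)])   # R(g) = (R_g, R_g)
y = g_coupling(x, d)
lhs = g_coupling(Rx, d)                                   # C(R(g) x)
rhs = np.concatenate([shift(y[:d], k), shift(y[d:], k)])  # R(g) C(x)
print(np.max(np.abs(lhs - rhs)))        # ~0: the coupling layer is equivariant
```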
## 4.6 Invertible Equivariant Linear Maps

In the previous sections we considered imbuing equivariance into the coupling and residual flows; yet it is well known in the literature that a linear map between induced representations that is equivariant must be a convolution with steerable kernels (Cohen et al., 2018). One may then ask whether such maps can also be invertible, enabling the construction of linear equivariant flows. Here we present the theory in its abstract form with no insight into practical instantiations—i.e. the group G is not any specific group like E(2)—and the reader may choose to skip this section and move directly to §6 to avoid interruptions in the flow of exposition.

To achieve the goal of building linear equivariant maps, let us first consider the space of all linear maps $\mathrm{Hom}(\mathcal{F}_n, \mathcal{F}_{n+1})$—not necessarily equivariant—between arbitrary layers n and n + 1. The set of equivariant linear maps, also called intertwiners, forms a vector space $\mathcal{H} := \mathrm{Hom}_G(\mathcal{F}_n, \mathcal{F}_{n+1})$ cut out by the following constraint:

$$\mathcal{H} := \{ f \in \mathrm{Hom}(\mathcal{F}_n, \mathcal{F}_{n+1}) \mid f R_{n,g} = R_{n+1,g} f \quad \forall g \in G \}.$$

It is well known that, under mild assumptions, a continuous linear map can be written using a continuous kernel $\kappa : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{K_n \times K_{n+1}}$:

$$[\kappa \cdot T](x) = \int_{\mathbb{R}^n} \kappa(x, y)\, T(y)\, dy \tag{15}$$

Here $K_n$ and $K_{n+1}$ are the dimensionalities of the feature (field) attached at each point in the respective layer's feature space. When $\mathcal{F}_n$ and $\mathcal{F}_{n+1}$ are taken to be induced representations, the equivariance constraint results in a one-argument kernel, which can be thought of as a convolution-like integral (Cohen et al., 2018; Weiler et al., 2018; Weiler & Cesa, 2019). The kernel is then subject to the following linear constraint:

$$\kappa(gx) = R_{n+1,g}\, \kappa(x)\, R_{n,g}^{-1}, \tag{16}$$

where $R_{n,g}$ is a representation of G, and whenever numerically feasible the kernel can be built via basis functions using irreducible representations of G.

Invertible Kernels. An observant reader may also recognize equation 15 as the definition of an integral transform whose input is a function. Such transformations need not be invertible, but whenever an inverse exists the inverse transform must satisfy:

$$\int_{\mathbb{R}^n} \kappa^{-1}(y - x)\, \kappa(y' - x)\, dy = \delta(y - y') \tag{17}$$

A few well-known integral transforms include the Fourier and Laplace transforms, each of which is completely determined by the choice of kernel, and interestingly both transforms are invertible. It is worth noting that for invertibility to hold it is necessary for the kernel to be non-zero everywhere; for example, such a condition is needed in the standard convolution theorem for the Fourier transform. Thus these kernels not only satisfy the linear constraint in equation 16 but also the invertibility constraint in equation 17.

Matrix Exponential Equivariant Flows. An alternative to finding a kernel that is simultaneously equivariant and invertible is to impose invertibility on the overall operation. As demonstrated by Hoogeboom et al. (2020), when the convolution operation is expanded as matrix multiplication with an appropriately expanded kernel, $z = \kappa * x = K\vec{x}$, acting on a vectorized input $\vec{x}$, invertibility corresponds to the invertibility of K. In general, expanded equivariant kernels need not be invertible, but the matrix exponential of K—i.e. $e^K$—is guaranteed to be invertible and also preserves equivariance (Hoogeboom et al., 2020; Xiao & Liu, 2020). The matrix exponential-based flow is thus another example of an equivariant linear flow, though the matrix exponential must be numerically approximated using a truncated power series.
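A small sketch of the matrix-exponential idea for the cyclic group (our illustration; circulant matrices stand in for expanded equivariant kernels): $e^K$ is invertible with inverse $e^{-K}$, its log-determinant is exactly $\mathrm{tr}(K)$, and exponentiation preserves commutation with the group representation.

```python
import numpy as np
from scipy.linalg import expm

def circulant(first_row):
    """Circulant matrices commute with cyclic shifts, so they are C_n-equivariant
    linear maps; the same then holds for their matrix exponential."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

rng = np.random.default_rng(0)
n = 6
K = circulant(rng.standard_normal(n))
flow = expm(K)                          # invertible by construction: inverse is expm(-K)

# log|det| of the flow is exactly trace(K) -- no O(n^3) determinant needed.
sign, logdet = np.linalg.slogdet(flow)
print(np.allclose(logdet, np.trace(K)))          # True

# Equivariance: expm(K) commutes with the shift representation.
P = circulant(np.eye(n)[1])             # permutation matrix for a one-step shift
print(np.allclose(flow @ P, P @ flow))  # True
```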
## 5 Related Work

Imbuing normalizing flow models with symmetries was first studied concurrently by Köhler et al. (2020) and Rezende et al. (2019). The former approach utilizes continuous normalizing flows (CNFs) and is closest in spirit to the G-residual flows proposed in this paper. The second approach uses the Hamiltonian formalism from physics and decouples a system into its generalized position and momentum, for which a finite-time invertible solution can be found using leapfrog integration. As a result, such an approach can be seen as a specific, continuous-time instantiation of the proposed G-coupling flow. Flow-based generative models have also seen applications with regard to specific symmetry groups, most notably the symmetric group, which manifests itself through permutation invariance—for example, when data is comprised of sets (e.g. a dataset of N point clouds), where permutation invariance is known as exchangeability of finite data. At present, the only flow-based models that handle this specific case utilize a coupling-based transform (Rasul et al., 2019; Bender et al., 2020), but they can be thought of as another instantiation of the G-coupling layer. Outside of permutations, equivariant flows have also been constructed for E(n) (Satorras et al., 2021), which in contrast to this work requires global equivariance and is more suitable for application domains such as molecular dynamics, where a global coordinate frame can be easily assigned. Lastly, building equivariant generative models beyond flows has also been a nascent but active direction of research, with equivariant score matching (De Bortoli et al., 2022) and diffusion models (Hoogeboom et al., 2022; Xu et al., 2022; Igashov et al., 2022), which may find applications in molecular generation. Finally, the study of equivariances in theoretical physics has a long and celebrated history. Recently, the application of flow-based generative models has risen in prominence in the study of such systems, e.g. sampling in lattice gauge theory (Kanwar et al., 2020) and on the SU(N) group (Boyda et al., 2020); most recently, Katsman et al. (2021) construct equivariant manifold flows—extending Euclidean-space equivariance results in Papamakarios et al. (2019)—for isometry groups of a given manifold.

## 6 Experiments

We evaluate the ability of our Equivariant Finite Flows on both synthetic toy datasets and higher dimensional image datasets in Rotation MNIST and CIFAR-10. For baselines, we use classical RealNVP-based coupling flows and residual flows, which are non-equivariant but are trained with heavy data augmentation sampled uniformly at random from the group. In practice, when constructing G-residual flows the input and output of a layer must be a scalar field which transforms according to the trivial representation, which at first might appear limiting. However, the function h inside a residual layer can itself have intermediate representations that are not scalar fields (e.g. regular representations or irreps), as long as the output of h also maps to a scalar field. Such a design principle—where intermediate layers can include more complex representations than the trivial one—is common in steerable neural networks (Weiler & Cesa, 2019). Moreover, we also point out that our equivariance theory is constructed for n-dimensional spaces; to apply it to natural images we embed our data over the base space $\mathbb{R}^2$ with channels being the fibers. As a result, we are able to use steerable convolutions for subgroups of O(2), which act on the base space $\mathbb{R}^2$, and because of the embedding in $\mathbb{R}^n$, equivariance is preserved.

## 6.1 Synthetic Experiments

We first consider toy datasets of a mixture of 8 Gaussians and 4 concentric rings in $\mathbb{R}^2$.
The empirical distribution for each dataset contains both rotation and reflection symmetries, and as a result we would like the pushforward of a G-invariant prior (e.g. a bivariate normal distribution) to remain invariant under the action of G. For a fair comparison, we allocate a parameter budget of 2k to each model and train for 20k steps using Adam with default hyperparameters (Kingma & Ba, 2014). Note that, while conventional coupling flows could also be employed here, their equivariant counterparts preserve equivariance along the channel dimensions, thus hindering their application to single-channel data. In Fig. 2 we visualize the learned density of non-equivariant and equivariant residual flows for the groups C16 and D16. We observe that the equivariance constraint, in such low parameter regimes, enables the learned density not only to respect the data symmetries but also to produce sharper transitions between high- and low-density regions, as found in the target density.

Figure 2: Density estimation over toy data in $\mathbb{R}^2$ using non-equivariant and G-residual flows. For 8 Gaussians we notice that G-equivariant Resflows have less smearing of the probability mass on individual modes. For the spirals dataset we observe that all Resflows with 2k parameters struggle to completely separate the high- and low-density regions, but the D16 Resflow is able to respect the data symmetry better.

## 6.2 Image Datasets

We now consider density estimation over images as found in Rotation MNIST and CIFAR-10. Note that CIFAR-10 by itself does not have rotation symmetry but reflection symmetry, and as a result—for a fair comparison—we construct an augmented dataset where each input is transformed via an element from G drawn uniformly at random (Weiler & Cesa, 2019). In Fig. 3 we visualize generated samples from G-residual flows trained on three variations of the MNIST dataset, equipped with D8, C16, and the translation group T symmetries respectively.

Figure 3 (caption fragment): ...the group is C16. In the first four rows we notice rotated digits about the center of the image and a few reflections about the vertical axis. The bottom two rows illustrate the translation group T, implemented by considering the input space as a torus and using vanilla convolutions with circular padding. Here we notice generated digits splicing at the boundary of the image, indicating translation symmetry.

Figure 4: Ablation study on augmented CIFAR-10 using G-coupling flows with discrete subgroups of SO(2) (left) and O(2) (right).

In Table 1 we report bits per dim for non-equivariant and G-residual flows, as well as their respective parameter counts. We observe that G-residual flows produce slightly worse bits per dim but compensate with large gains in parameter efficiency—i.e. the non-equivariant baselines require 227% and 195% more parameters on Rotation MNIST and CIFAR-10 respectively. We hypothesize that G-residual flows, while universal, are harder to optimize due to the potentially destructive interference between the Lipschitz and equivariance constraints. Moreover, due to the equivariance constraints on the kernel, each forward pass requires us to rebuild the kernel, which adds a computational overhead inhibiting larger model sizes. Furthermore, in the specific case of G-residual flows these kernels must also be Lipschitz (i.e. the matrix W used in the convolution operation must have bounded norm), which is enforced via the spectral or induced matrix norm.
Simultaneously satisfying both the Lipschitz and equivariance constraints leads to an increased training overhead, as both are themselves optimization problems internal to the overall problem of maximizing the log-likelihood of the data. In practice, we also find that the equivariance and Lipschitz constraints can often be antagonistic to each other: forcibly enforcing equivariance can push the network to increase its Lipschitz constant to compensate, which can lead to training instability. In these cases, it is important to enforce the Lipschitz constraint more aggressively after each iteration, in contrast to vanilla residual flows, which trade off enforcing this constraint only every N iterations in favor of training speed.

| Model | Rotation MNIST (bpd) | Rotation MNIST (params) | CIFAR-10 (bpd) | CIFAR-10 (params) |
|------------|------|------------|------|------------|
| RealNVP | - | - | 4.87 | 6.2 × 10^6 |
| C4-RealNVP | - | - | 4.66 | 1.09 × 10^6 |
| Resflow | 1.65 | 9.78 × 10^3 | 3.87 | 2.83 × 10^4 |
| G-Resflow | 2.07 | 2.99 × 10^3 | 4.04 | 9.57 × 10^3 |

Table 1: Density estimation over images in Rotation MNIST and CIFAR-10. Each cell reports bits per dim as well as the total number of parameters used.

For CIFAR-10 we additionally perform an ablation study using G-coupling flows for subgroups of SO(2) and O(2). Note that because our G-coupling flows must use the permutation representation, in theory we can only model discrete subgroups of SO(2). Consequently, as exact equivariance to permutation representations is a rigid constraint on the expressivity of G-coupling flows, we implement soft equivariance using regular G-convolutions in place of vanilla convolutions. We report the average group test bits per dimension, which computes the mean bits per dimension assigned to a test datapoint across all elements in a finite subgroup (e.g. C16) of a larger continuous group. Fig. 4 (left) illustrates G-coupling flow models on finite subgroups of SO(2), while Fig. 4 (right) repeats the same procedure for O(2). In both ablation experiments we find that all G-coupling flows converge significantly faster, and to lower average group bits per dim, than the non-equivariant coupling flow. Interestingly, we also observe that increasing the equivariance constraint—e.g. C4 vs. C16—leads to improved test bits per dim values, highlighting the benefits of modelling invariant densities using equivariant maps.

## 7 Conclusion

In this paper, we study the problem of building equivariant diffeomorphisms on Euclidean spaces using finite layers. We first theoretically prove the existence of an equivariant map for compact groups acting on compact spaces, laying principled foundations for the modelling of invariant densities using equivariant flows. We introduce Equivariant Finite Normalizing Flows, which enable the construction of classical normalizing flows with the added guarantee of equivariance to a group G. To build our EFNFs we introduce G-coupling and G-residual layers, which elevate classical coupling and residual flows to their equivariant counterparts. Empirically, we find that G-residual flows enjoy significant parameter efficiency but also incur a small drop in performance on density estimation tasks over images. G-coupling flows, on the other hand, are limited in their applicability to domains which contain more than a single channel (e.g. RGB images), but achieve faster convergence to lower average test bits per dim on augmented CIFAR-10.
While we proved the existence of equivariant maps between invariant densities on compact spaces, many data domains with symmetries are in fact non-compact, and proving the existence of an equivariant map in this setting is a natural direction for future work.

## References

Augustin Banyaga. Formes-volume sur les variétés à bord. *Enseignement Math.*, 2(20):127–131, 1974.

Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. In *International Conference on Machine Learning*, pp. 573–582. PMLR, 2019.

Christopher Bender, Kevin O'Connor, Yang Li, Juan Garcia, Junier Oliva, and Manzil Zaheer. Exchangeable generative models with flow scans. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 10053–10060, 2020.

Marin Biloš and Stephan Günnemann. Scalable normalizing flows for permutation invariant densities. In *International Conference on Machine Learning*, pp. 957–967. PMLR, 2021.

Denis Boyda, Gurtej Kanwar, Sébastien Racanière, Danilo Jimenez Rezende, Michael S Albergo, Kyle Cranmer, Daniel C Hackett, and Phiala E Shanahan. Sampling using su(n) gauge equivariant flows. *arXiv preprint arXiv:2008.05456*, 2020.

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. *arXiv preprint arXiv:1806.07366*, 2018.

Ricky TQ Chen, Jens Behrmann, David K Duvenaud, and Jörn-Henrik Jacobsen. Residual flows for invertible generative modeling. *Advances in Neural Information Processing Systems*, 32, 2019.

Taco Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. *arXiv preprint arXiv:1811.02017*, 2018.

Taco S Cohen and Max Welling. Steerable cnns. *arXiv preprint arXiv:1612.08498*, 2016.

Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, and Arnaud Doucet. Riemannian score-based generative modeling. *arXiv preprint arXiv:2202.02763*, 2022.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. In *The 5th International Conference on Learning Representations (ICLR)*, Toulon, 2017.

Emilien Dupont, Arnaud Doucet, and Yee Whye Teh. Augmented neural odes. *arXiv preprint arXiv:1904.01681*, 2019.

Amir Gholami, Kurt Keutzer, and George Biros. Anode: Unconditionally accurate memory-efficient gradients for neural odes. *arXiv preprint arXiv:1902.10298*, 2019.

Emiel Hoogeboom, Victor Garcia Satorras, Jakub M Tomczak, and Max Welling. The convolution exponential and generalized sylvester flows. *arXiv preprint arXiv:2006.01910*, 2020.

Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022.

Ilia Igashov, Hannes Stärk, Clément Vignac, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael Bronstein, and Bruno Correia. Equivariant 3d-conditional diffusion models for molecular linker design. *arXiv preprint arXiv:2210.05274*, 2022.

Gurtej Kanwar, Michael S Albergo, Denis Boyda, Kyle Cranmer, Daniel C Hackett, Sébastien Racanière, Danilo Jimenez Rezende, and Phiala E Shanahan. Equivariant flow-based sampling for lattice gauge theory. *Physical Review Letters*, 125(12):121601, 2020.

Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser Nam Lim, and Christopher M De Sa. Equivariant manifold flows. In *Advances in Neural Information Processing Systems*, volume 34, pp. 10600–10612. Curran Associates, Inc., 2021.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. *Advances in Neural Information Processing Systems*, 29, 2016.

Ivan Kobyzev, Simon Prince, and Marcus Brubaker. Normalizing flows: An introduction and review of current methods. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2020. doi: 10.1109/TPAMI.2020.2992934.

Jonas Köhler, Leon Klein, and Frank Noé. Equivariant flows: exact likelihood generative learning for symmetric densities. *arXiv preprint arXiv:2006.02425*, 2020.

J. Moser. On the volume elements on a manifold. *Transactions of the American Mathematical Society*, 120:286–294, 1965.

George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. *arXiv preprint arXiv:1912.02762*, 2019.

Kashif Rasul, Ingmar Schuster, Roland Vollgraf, and Urs Bergmann. Set flow: A permutation invariant normalizing flow. *arXiv preprint arXiv:1909.02775*, 2019.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *Proceedings of the 32nd International Conference on Machine Learning*. ACM, 2015.

Danilo Jimenez Rezende, Sébastien Racanière, Irina Higgins, and Peter Toth. Equivariant hamiltonian flows. *arXiv preprint arXiv:1909.13739*, 2019.

Victor Garcia Satorras, Emiel Hoogeboom, Fabian B Fuchs, Ingmar Posner, and Max Welling. E(n) equivariant normalizing flows for molecule generation in 3d. *arXiv preprint arXiv:2105.09016*, 2021.

M. Spivak. *A Comprehensive Introduction to Differential Geometry*, volume 1. 1999. ISBN 9780914098706.

Maurice Weiler and Gabriele Cesa. General e(2)-equivariant steerable cnns. *arXiv preprint arXiv:1911.08251*, 2019.

Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. *arXiv preprint arXiv:1807.02547*, 2018.

Steven H. Weintraub. *Differential Forms: Theory and Practice*. Academic Press, 2014. ISBN 9780123944030.

Changyi Xiao and Ligang Liu. Generative flows with matrix exponential. In *International Conference on Machine Learning*, pp. 10452–10461. PMLR, 2020.

Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geometric diffusion model for molecular conformation generation. *arXiv preprint arXiv:2203.02923*, 2022.

Han Zhang, Xi Gao, Jacob Unterman, and Tom Arodz. Approximation capabilities of neural odes and invertible residual networks. In *International Conference on Machine Learning*, pp. 11086–11095. PMLR, 2020.

## A Existence Of The Equivariant Map

## A.1 Proof Of Theorem 2

Proof. Here we provide the necessary details for the sketch of the proof given earlier. The proof can be obtained as an equivariant modification of Moser's trick (Moser, 1965; Banyaga, 1974). First, we need to recall two facts: 1) for any compact connected closed oriented smooth manifold, its top cohomology group is one-dimensional; 2) for any compact connected oriented smooth manifold with nonempty boundary, its top cohomology group is zero (for the proof of these two statements see, for example, (Weintraub, 2014, Theorem 8.3.10)). Hence, because we are additionally given that $\int_M \mu = \int_M \nu$, there exists $\eta \in \Omega^{n-1}(M)$ such that ν = µ + dη.
In case M has a nonempty boundary, we automatically get an additional condition on the form η: its integral over the boundary must vanish:

$$0 = \int_M \nu - \int_M \mu = \int_M d\eta = \int_{\partial M} \eta. \tag{18}$$

The last equality is Stokes' theorem. Without loss of generality, we can assume that the form η is G-invariant. Indeed, because of the naturality of the action (denote it R), one has:

$$d R_g^* \eta = R_g^* d\eta = R_g^*(\nu - \mu) = \nu - \mu, \ \forall g \in G. \tag{19}$$

The last equality holds because the forms ν and µ are given to be G-invariant. Then, if we fix the Haar measure on G (which we can do because G is compact), we can average the form η over the group and consider this averaged form instead of η. By construction, it is G-invariant. Then, connect the volume forms ν and µ by a segment: $\mu_t = \mu + t\,d\eta$ for t ∈ [0, 1]. Clearly, $\mu_0 = \mu$ and $\mu_1 = \nu$. By construction, each $\mu_t$ is a G-invariant n-form and $\int_M \mu_0 = \int_M \mu_t$. We want to find a 1-parameter continuous family of diffeomorphisms (an isotopy) $\{\phi_t\}_{t \in [0,1]}$ such that

$$\phi_t^* \mu_t = \mu_0. \tag{20}$$

Indeed, substituting t = 1 gives the desired result. As the manifold is compact, an isotopy can be generated by the flow of a time-dependent vector field $v_t$ (Spivak, 1999, Chapter 5). Let us differentiate equation 20 with respect to t. The right-hand side gives zero, while the left-hand side is:

$$\frac{d}{dt}(\phi_t^* \mu_t) = \phi_t^* \left( \mathcal{L}_{v_t} \mu_t + \frac{d}{dt}\mu_t \right), \tag{21}$$

where $\mathcal{L}$ is the Lie derivative. Note that this equation is a chain rule. Recall Cartan's formula:

$$\mathcal{L}_v = d i_v + i_v d, \tag{22}$$

where $i_v$ is the interior product. Using this, the fact that $d\mu_t = 0$ (because it is a top form), and the computation $\frac{d}{dt}\mu_t = d\eta$, we have:

$$\frac{d}{dt}(\phi_t^* \mu_t) = \phi_t^*(d i_{v_t} \mu_t + d\eta) = \phi_t^* d(i_{v_t}\mu_t + \eta). \tag{23}$$

This will be equal to zero if

$$i_{v_t}\mu_t + \eta = 0. \tag{24}$$

Because $\mu_t$ is non-degenerate, we can solve equation 24 pointwise. As a result, we obtain a unique smooth vector field $v_t$. The compactness of M allows us to integrate $v_t$ into the flow $\phi_t$. In the case when the manifold M has a boundary, we additionally get that the restriction of the vector field to the boundary is zero, $v_t|_{\partial M} = 0$ (Banyaga, 1974), hence $\phi_t|_{\partial M} = \mathrm{Id}$. Since $\mu_t$ and η are G-invariant, so is the vector field $v_t$. The integration of $v_t$ results in a G-equivariant diffeomorphism.

## B Representation Capabilities Of G-Residual Layer

Proof. First, we need to reproduce the existential result of Zhang et al. (2020). We show that for any Lipschitz continuous diffeomorphism $\phi : \mathbb{R}^n \to \mathbb{R}^n$ there exists a residual flow on the padded space $\psi : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ such that $\psi([x, 0]) = [\phi(x), 0]$. Denote the Lipschitz constant of ϕ by L, and set $T = \lfloor L \rfloor + 2$. Consider the function $\delta(x) = \frac{\phi(x) - x}{T}$. Clearly δ(x) is smooth and its Lipschitz constant is at most $(L + 1)/T < 1$. Hence, the map of the padded space $\mathbb{R}^{2n} \to \mathbb{R}^{2n}$ given by:

$$\psi_0 : [x, y] \mapsto [x, y] + [0, \delta(x)] \tag{25}$$

is an i-ResNet. In particular, $\psi_0([x, 0]) = [x, \delta(x)]$.
Now let us consider the maps

$$\psi_i : [x, y] \mapsto [x, y] + \left[\tfrac{T}{T+1}\, y,\; 0\right], \quad \text{where } i = 1, \ldots, T+1. \tag{26}$$

The residual part has Lipschitz constant $T/(T+1) < 1$, hence each of these maps is also an i-ResNet. By the definition of δ, we have:

$$\psi_{T+1} \circ \cdots \circ \psi_1 \circ \psi_0([x, 0]) = [\phi(x), \delta(x)]. \tag{27}$$

Finally, consider the map:

$$\psi_{T+2} : [x, y] \mapsto [x, y] + [0, -\delta(x)]. \tag{28}$$

Similarly to $\psi_0$, it is an i-ResNet. Overall, denote the composition of all $\psi_i$, $i = 0, \ldots, T+2$, by ψ. We have shown that ψ is a ResFlow (as a composition of i-ResNets), and by construction, $\psi([x, 0]) = [\phi(x), 0]$.

Now we must show that if the diffeomorphism ϕ is additionally taken to be G-equivariant, then the ResFlow ψ constructed above is also G-equivariant with respect to the extended G-action on $\mathbb{R}^{2n}$. Indeed, by G-equivariance of ϕ and the definition of the extended G-action we have:

$$\psi(g \cdot [x, 0]) = \psi([g \cdot x, 0]) = [\phi(g \cdot x), 0] = [g \cdot \phi(x), 0] = g \cdot [\phi(x), 0] = g \cdot \psi([x, 0]).$$
$\square$
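The construction can be checked numerically. The sketch below (ours) instantiates it for $\phi(x) = -x$, the standard example of a diffeomorphism no unpadded i-ResNet can represent; ϕ is linear, hence equivariant to any linear group action, and $L = \mathrm{Lip}(\phi) = 1$.

```python
import numpy as np

phi = lambda x: -x
L = 1.0
T = int(np.floor(L)) + 2            # ensures Lip(delta) <= (L + 1)/T < 1
delta = lambda x: (phi(x) - x) / T

psi0 = lambda x, y: (x, y + delta(x))                    # eq. (25)
psi_mid = lambda x, y: (x + (T / (T + 1)) * y, y)        # eq. (26)
psi_last = lambda x, y: (x, y - delta(x))                # eq. (28)

x0 = np.array([0.3, -1.7])
x, y = psi0(x0, np.zeros_like(x0))
for _ in range(T + 1):              # T + 1 middle layers accumulate T * delta(x0)
    x, y = psi_mid(x, y)
# At this point [x, y] = [phi(x0), delta(x0)], matching eq. (27).
x, y = psi_last(x, y)
print(np.allclose(x, phi(x0)), np.allclose(y, 0.0))      # True True
```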
## C Experimental Details

## C.1 G-Residual Flows Architecture

The architecture we used for our G-residual flows is built off the codebase of the original Residual Flows architecture (Chen et al., 2019). To maintain equivariance we replace the LipSwish activation function with a point-wise non-linearity (e.g. Pointwise-ReLU (Weiler & Cesa, 2019)). We also remove the input LogitTransform layer, as we found it leads to training instability. Finally, we also convert the activation normalization layer to an equivariant one, and we use it before and after each residual block. Each residual connection consists of:

ReLU → 3 × 3 Conv → ReLU → 1 × 1 Conv → ReLU → 3 × 3 Conv

Each convolution layer contains 128 hidden units; while the input and output transform under the trivial representation, intermediate layers transform under the regular representation. The main differentiating factor from vanilla residual flows is that the kernels in each of the convolution layers are built with respect to the specific kernel constraints of a given group, as described in the appendix of Weiler & Cesa (2019). It is important to note that the effective parameters are those corresponding to the learnable basis coefficients for each convolution kernel, not all the weights themselves. We now outline the specific architectures for each dataset used in the main paper:

RotationMNIST. Image → 8 × ResBlock → Image

FlipRotMNIST. Image → 8 × ResBlock → Image

Cifar10. For Cifar10, as observed in (Weiler & Cesa, 2019), we use 5 × 5 convolution kernels as opposed to 3 × 3 in the MNIST datasets. Image → 12 × ResBlock → Image

## C.2 G-Coupling Flows Architecture

Unlike the non-equivariant RealNVP architecture, which makes use of various masking patterns and the multi-scale architecture, we cannot directly use these in building G-coupling flows as they would break equivariance. Instead, G-coupling flows employ channel-wise masking such that the group acts on each channel independently. For our non-linearity we use the Pointwise ReLU in a similar manner to G-residual flows. Each G-coupling block consists of:

s : 4 × [3 × 3 Conv → InnerBatchNorm → ReLU] → 3 × 3 Conv → InnerBatchNorm → ReLU
t : 4 × [3 × 3 Conv → InnerBatchNorm → ReLU] → 3 × 3 Conv → InnerBatchNorm → ReLU

Each convolution layer contains 128 hidden units, and the last convolution layer in every block brings the input type back to a scalar field—i.e. a valid image.

Cifar10. For Cifar10 we use the following architecture and train with a weight decay of 1e−4: Image → 8 × G-Coupling Layer → Image
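For completeness, here is a minimal sketch of how the average group test bits per dimension metric from §6.2 could be computed. This is our own illustration with a stand-in Gaussian "flow"; the function and variable names are illustrative and not from the released code.

```python
import numpy as np

def average_group_bits_per_dim(log_prob, test_images, group_actions):
    """Mean bits/dim assigned to each test point across all elements of a finite
    subgroup (e.g. C16). `log_prob` maps an image batch to natural-log densities;
    `group_actions` is one callable per group element acting on an image batch."""
    n_dims = np.prod(test_images.shape[1:])
    bpds = []
    for act in group_actions:
        ll = log_prob(act(test_images))              # nats per image
        bpds.append(-ll / (n_dims * np.log(2.0)))    # convert to bits per dim
    return np.mean(bpds)

# Example with an isotropic-Gaussian stand-in density and the C4 rotations.
log_prob = lambda x: -0.5 * (x**2).sum(axis=(1, 2, 3)) \
                     - 0.5 * np.prod(x.shape[1:]) * np.log(2 * np.pi)
c4 = [lambda x, k=k: np.rot90(x, k=k, axes=(1, 2)) for k in range(4)]
imgs = np.random.default_rng(0).standard_normal((8, 28, 28, 1))
print(average_group_bits_per_dim(log_prob, imgs, c4))
```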
Review 1: Summary: This paper proposes a new method for modelling invariant probability densities using flow models. The basic idea, following previous work, is to learn an equivariant diffeomorphism. For this purpose new invertible equivariant layers are introduced. In practice, these work only for discrete groups with permutation matrix representations. The paper proves universality of this type of model for modelling invariant densities. Experiments on synthetic data and small images are performed. Strengths and Weaknesses: The theoretical part of the paper is quite strong. I particularly like the universality proof for compact groups, which is a fundamental contribution. The method itself is natural enough, though unfortunately it requires working with discrete groups and permutation representations, which is more restrictive than general compact groups. Finally, the experimental section is a bit limited, both in terms of scope (only small image datasets) and results (it seems that more work is required to make it work really well). Although I think this is a nice paper with a lot of potential, I do have some concerns that in my view need to be addressed before publication: - On the top of page 3 (section 2.1), the explanation of how the group acts on the space of images is not entirely correct. Indeed G might act in this way, but in general it can also act on the feature vectors R^K via some representation of the stabilizer subgroup H. E.g. a 2D vector field would transform as rho(r) f(r^{-1} (x - t)) for some roto-translation g=(r,t) and rho a 2x2 rotation matrix. This is called the representation of SE(2) induced by the representation rho of SO(2). - The assumption in proposition 2 that R_g is a permutation matrix only works for discrete groups. So it does not work for the main example of SE(2). This is mentioned later in Remark 1. It seems more transparent to change the narrative of the paper to be about discrete subgroups of SE(2), so that this doesn't come as a surprise / disappointment to the reader. - It is also not clear how the method can deal with input data that does not transform by a permutation representation, since the layers are constrained to have the same input and output representation. For example, in the toy dataset, the data live in R^2, but we consider an action of C16/D16. The standard action of these groups on R^2 by rotations/reflections is not a permutation representation. Are you converting the 2D vectors to a higher dimensional rep? This needs to be explained in the paper. - Images transform as scalar fields. The flow layers must have the same input/output representations, so the outputs are also scalar fields. Steerable filters for scalar-to-scalar convolutions are isotropic, which is a bit limiting. Is this correct, and how do you implement the network for MNIST/CIFAR? - The details of network architectures are not included. Some further minor issues: - It was not clear to me until page 2 that "finite" refers to the depth of the network (finite number of layers), rather than the symmetry group. It would be helpful to say this more explicitly in the abstract. - The first paragraph of 4.1 is a bit confusing. The operation of vectorizing does not change anything - if the original vector space has a group representation then the vectorized one has an isomorphic representation. Of course one has to remember and use this representation. I don't follow the story about SO(2), SO(4), C4, GL(4). The main message - that one should use the geometric structure (i.e.
that the data is a field, acted on by a group via a certain representation) is clear enough though. - Proposition 1 is only true if h^i is equivariant wrt the same input and output representation of G, called R_g. You use this in equation 9, h^i(R_g x) = R_g h^i(x), whereas general equivariance would be h^i(R_g x) = T_g h^i(x). So prop 1 needs an additional assumption. - Typo: page 6. "principle benefit" -> "principal benefit" - In eq 14, the brackets (R_g x)_1:d should be removed as in the equation below, to R_g x_1:d, because R_g is size d x d. - Sec 4.6: "It is well known in the literature that a linear map that is equivariant must be a convolution with steerable kernels". This is only true if the input and output representations are induced representations, i.e. the input/output data transforms as a field over a base space. - Sec 4.6, it is not correct to say H = Hom(F_n, F_n+1) since H is a subspace. I would use H = Hom_G(F_n, F_n+1). - Eq 16: note here that the two groups G and H used in the definition of induced representation need to be distinguished. For the case G = E(2) we have H = O(2), and the constraint in eq. 16 only pertains to H. Requested Changes: Please fix all major and minor issues mentioned above Broader Impact Concerns: no concerns ================================================== Review 2: Summary: In this work the authors introduce discrete normalising flows acting on $R^d$ which are $G$-equivariant, leading to $G$-invariant probability distributions when combined with a $G$-invariant base distribution. In particular, they extend residual flows to $G$-equivariant residual flows with $G \subseteq E(n)$, and similarly for coupling and autoregressive flows to their $G$-equivariant counterparts with G acting via a permutation matrix, hence finite subgroups $G \subseteq O(n)\ltimes T(n)$. They introduce practical implementations for these flows by using steerable CNNs for the flows' components (residual connection or $s$ and $t$ neural networks for coupling and autoregressive flows). They theoretically show that (on a padded space) G-residual flows are universal approximators. They also show the existence of a G-equivariant flow between two G-invariant densities (when both G and the space it is acting on are compact). They empirically show the parameter efficiency of the introduced flows in contrast with standard flows, on several image datasets with $E(3)$ symmetries. Strengths and Weaknesses: ### Strengths First, I found this work pretty well written and quite easy to read. The paper is well organised, and most concepts are well presented. It is nice to start with Theorem 2 which shows the existence of G-equivariant flows. Although reasonably simple, Proposition 2, which shows a universal approximator result for G-residual flows, is welcome. The proposed flows are both simple and practical, building on prior work for the actual implementation of the flows' individual components. ### Weaknesses Not really a weakness per se, but with regard to writing, I feel that the sketch of proof provided in Section 3 could be enhanced, as some concepts/notations aren't introduced. I have a few questions on this proof in the following section of this review. I believe that the main weakness of this work is really the empirical demonstration of the practical use of the proposed methods. Section 6 convinced me that the introduced flows can be more parameter efficient than their counterpart but not much more.
I am making more detailed suggestions in the following section, but in short: - There seems to be an issue with the proposed models when increasing the number of parameters, as stated by the authors themselves: 'G-residual flows while universal are harder to optimize'. It is unclear what the specific issue is and whether it is only related to ResFlows or also coupling/autoregressive flows (no Lipschitz constraint) - It is lacking some sanity checks, such as showing that an O(2)-equivariant flow indeed leads to an O(2)-invariant density; this would be particularly relevant in Section 6.1. - Similarly, in Section 6.2, it would be worth training models on non G-augmented datasets but evaluating them on a G-augmented dataset to check that the G-invariant models are over-performing. - Lastly, there are limited baselines being assessed, namely ResFlows and Real-NVP; CNFs would be particularly relevant since they are being (rightfully) criticised in the introduction. Training CNFs is well-known to be slow so a fixed computational budget could be set for a fair assessment. - Yet, there seems to be a gap between the reported performance (for baselines) and what has been published in the literature, likely due to the use of smaller networks, but that is unspecified. Requested Changes: In what follows, I wrote a mixture of questions and suggestions, not necessarily strict 'Requested Changes', but more like opening a discussion on a few (hopefully relevant) points. - Title: Why 'finite'? I would suggest swapping with 'discrete' - Section 1 - "equivariant models also enjoy a data-independent form of generalization as they are guaranteed to respect the encoded equivariances regardless of the amount of data available during training." -> reference? - The authors are motivating discrete flows in opposition to continuous flows due to the Lipschitz constraint on the vector field, but isn't this similar to the constraints for ResFlows? - Section 2 - The notion of group 'representation' is mentioned without being introduced. 'Note that the defined action is linear' -> I assume that is because *linear* representations are considered. May be useful to clarify this point. - Section 3 - Theorem 2 - As stressed by the authors, the result from Theorem 2 is for compact space, as opposed to the earlier motivation with Euclidean space. Would it be possible to prove a negative result for R^d or would the authors conjecture that it still holds? Where does the current proof break? - I feel that the current sketch of proof could be enhanced - Worth defining what an 'isotopy' is - Should state that $\mu_1=\nu$. - 'As the manifold is compact, an isotopy can be generated by the flow of a time-dependent vector field' -> reference? - I am not certain to fully follow the proof. The existence of a vector field which induces a flow is shown, which itself allows interpolating between the two measures of interest. Then it is stated 'Since $\mu_t$ and $\nu$ are G-invariant, so is the vector field v_t', but how come $\mu_t$ is invariant? Also $\phi_t$ is G-equivariant iff $v_t$ is G-equivariant, not G-*invariant*? - Section 4 - 4.1: 'any equivariance constraint on a given layer can be represented as a convolution kernel with an analogous constraint' -> on a given *linear* operator - 4.2: Does the residual connection force the output representation of $h_i$ to be of the same type as $x$, that is of type-1? Can this be limiting?
  - 4.4: The constraint on the group representation is that it is a permutation matrix, and since all finite groups are subgroups of the permutation group, this includes $C_n$ and $D_n$ groups. Yet, is this group acting on half of the space, or on the full space? I am struggling to understand how the latter could be true. Perhaps an illustration on a 2d image would be useful to clarify this.
  - 4.6: So Eq 17 is necessary but not sufficient to impose invertibility of the linear map? How is the invertibility of the operators $s$ and $t$ in G-coupling flows enforced then?
- Section 5
  - A few relevant works are missing from the related work section. I would suggest mentioning/discussing [A, Appendix L.3], which proposed G-invariant score-based models.
- Section 6
  - 6.1
    - A 'parameter budget of 2k' sounds quite small. In particular, none of the models accurately fit the spirals dataset. Additionally, although the empirical results show that in this parameter regime the standard ResFlow fails to accurately fit the mixture of Gaussians dataset, and as such that G-invariant NFs are more parameter efficient, I would be interested in seeing whether the ResFlow would be able to learn the invariance with more parameters.
    - Why not have an O(2)-ResFlow? Then we could check that the density is exactly O(2) invariant.
  - 6.2
    - Similarly here, although the parameter efficiency gain is important, what is preventing the fitting of larger G-ResFlows? Is it a computational bottleneck? Or are numerical instabilities arising (from the constrained choice of non-linearity)?
    - Looking at Table 1 from [B], the reported bits/dim on (non-augmented) MNIST and CIFAR-10 are 0.97 and 3.28 respectively; why such a gap with the reported 3.87 for ResFlow in Table 1 of this work?
    - Why not include the G-coupling results from the ablation study in Table 1? Similarly, I would be interested in having results for G-ResFlow for finite subgroups of $O(2)$ and $SO(2)$.
    - Here too the reported bits/dim of 3.49 for Real-NVP from [B] is different from what we can read in Figure 4 in this work.
    - It would be worth additionally showing, in Table 1, results on an augmented test dataset for models trained on a non-augmented dataset, as a sanity check, since then the G-equivariant model should easily outperform its counterpart.
    - What is the reason for not empirically assessing Real-NVP and the introduced G-coupling flow methods?
    - Although this requires more work, it would be of interest to compare the proposed approach to G-invariant continuous normalising flows and G-invariant score-based models.

[A] De Bortoli et al., Riemannian Score-Based Generative Modeling, 2022.
[B] Ricky T. Q. Chen et al., Residual Flows for Invertible Generative Modeling, 2019.

Broader Impact Concerns: I do not have concerns on that matter.

==================================================

Review 3:

Summary: This paper proposes a variety of ways to extend so-called "discrete" normalizing flows to respect various symmetries through equivariance. In particular, the paper primarily focuses on generalizing residual, coupling, and autoregressive normalizing flow architectures to be equivariant to some group $G$.

Strengths and Weaknesses:

Strengths
-----------
+ The paper provides a relatively thorough treatment of equivariant flows and how to construct equivariant versions of classical architectures.
+ The paper provides good theoretical justification of said models with some proofs of universality/representation power.
Weaknesses
--------------
- I have several problems with the theoretical results (elaborated below).
- The paper, as written, is often confusing / under-specified in several areas.
- The experiments do not provide a good empirical backing of the central claims.

Requested Changes:

Theoretical Results
----------------------
* A stronger version of Theorem 2 appears in Katsman et al. 2021 (in particular they use heat flow to construct a gradient flow that couples probability measures on compact manifolds). Furthermore, as Theorem 2 uses Moser's coupling, the actual result can be seen by inspection and does not warrant such an in-depth proof. In particular, the Moser vector field is built from the distributions to be coupled, and, by doing so, one can see that invariant distributions lead to an equivariant vector field. Finally, the authors should note that Theorem 2 is heavily divorced from the actual applications of the paper. In particular, the results are purely geometric and crucially do not generalize to compact sets in R^n (in particular, compact sets with nonzero volume in R^n are not compact manifolds but rather compact manifolds with boundary).
* Proposition 1 follows from simple linear algebra facts. It should not be included as a proposition; perhaps a remark.
* Theorem 3 follows directly from definitions as well. It should be shortened, similar to Proposition 1.
* The results of Proposition 2 should note the fact that the permutation condition is pretty restrictive. In particular, if the representation of each group element is a permutation matrix, this implies that the group G is discrete (and perhaps there are other conditions I'm missing). This is needed since many previous results such as Kohler et al. 2020 operate on continuous Lie groups like SO(3).
* Similar to the above statement, Proposition 3 should also have this note.

Unclear
---------
* For the background introduction to RealNVP, I'm assuming that the "D" variable should be "n".
* The authors should mention that Theorem 1 relies on G being a subgroup of E(n), a fact that they later use.
* I'm unsure what the purpose of Section 4.1 is. I can see that it serves as some motivating factor for equivariance, but this should either be included in a non-contribution section or made more concise.
* For Proposition 2, I'm assuming that the n=2d should be replaced with D=2d as in the background section.
* The authors should precisely introduce the groups C16, D16, etc. in the experimental section.

Experiments
--------------
* The results in Figure 2 are not convincing. For example, the C16 and D16 results for concentric circles do not appear to be invariant with respect to the groups. This leads me to suspect at least several implementation bugs.
* The authors should also mention more about which architectures they use for the equivariant networks. In particular, this is the non-identity part in the residual flows and the s, t networks in the RealNVP-like architecture. In particular, they should be specific about which transformations are used and the hidden dimension of each.
* The results in Figure 3 are not very convincing, as the generated images are not very good. Since this is MNIST, one should expect better image generation.
* The results in Table 1 are not convincing. In particular, since one wants to have a lower BPD, I'm unsure how ResFlow being better than G-ResFlow at a higher parameter count shows a contribution.
Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Reject

Comment: The reviewers' opinions ranged from strong reject to borderline. On the positive side, two out of three reviewers found the theoretical contribution of interest and appreciated the quality of the exposition. On the negative side, one reviewer questioned the originality of the theoretical contribution, and no reviewer was fully convinced by the empirical results. In conclusion, it seems that the paper does not fully meet TMLR's criteria for acceptance at this stage. However, TMLR would consider a resubmission that addressed the following concerns:
- The relationship between Theorem 2 and previous work should be clarified. If Theorem 2 has been proven by Katsman et al. (2021), it ought to be attributed to them, with this paper being clear that its own contribution is a new proof. If not, the paper should make clear what the differences are.
- The experimental evaluation should be strengthened, in order to evaluate the proposed models more convincingly and support the experimental findings more conclusively.
- Both the theory and the experiments should clarify whether the proposed flows are equivariant or approximately equivariant. If the former, the experiments should establish that unambiguously. If the latter, the theory should make it clear early on.

==================================================
# Exploiting Category Names For Few-Shot Classification With Vision-Language Models

Anonymous authors
Paper under double-blind review

## Abstract

Vision-language foundation models pretrained on large-scale data benefit many visual understanding tasks. Notably, many vision-language models build two encoders (visual and textual) that can map the two modalities into the same embedding space. As a result, the learned representations achieve good zero-shot performance on tasks like image classification. However, when there are only a few examples per category, the potential of large vision-language models is not fully realized, mainly due to the disparity between the vast number of parameters and the relatively limited amount of training data. This paper shows that we can significantly improve the performance of few-shot classification by using the category names to initialize the classification head. More interestingly, we can borrow non-perfect category names, or even names from a foreign language, to improve the few-shot classification performance compared with random initialization. With the proposed category name initialization method, our model obtains state-of-the-art performance on several few-shot image classification benchmarks (e.g., 87.37% on ImageNet and 96.08% on Stanford Cars, both using five-shot learning). Additionally, we conduct an in-depth analysis of category name initialization, explore the point at which the benefits of category names decrease, examine how distillation techniques can enhance the performance of smaller models, and investigate other pivotal factors and intriguing phenomena in the realm of few-shot learning. Our findings offer valuable insights and guidance for future research endeavors.

## 1 Introduction

In recent years, large vision-language models have opened doors to many new applications and provided new perspectives on existing problems. The advantages of large vision-language models stem from learning on widely available images with surrounding texts, as well as from exploiting the capacity of transformer networks (Dosovitskiy et al., 2021) to model web-scale image-text data. Radford et al. (2021) first proposed CLIP for vision-language modeling, which was followed by numerous works, including ALIGN (Jia et al., 2021), LiT (Zhai et al., 2022b), Flamingo (Alayrac et al., 2022), Florence (Yuan et al., 2021), CoCa (Yu et al., 2022), etc. The development of vision-language models provides novel perspectives on few-shot learning. This paper considers the problem of few-shot classification in the new light of large vision-language models. Researchers have found that models pretrained on ImageNet can be easily transferred by finetuning on a new classification task (Huh et al., 2016). Similarly, we can take the vision encoder from the pretrained vision-language model and finetune it with a few examples. Since state-of-the-art vision-language models were pretrained on billions of web images and texts, such finetuning often outperforms models trained on ImageNet, with better robustness and generalization capabilities. Moreover, large vision-language models can be adapted to more downstream tasks with fewer labeled data. Despite the capability of the text branch in pretrained vision-language models, it is not optimally utilized when directly fine-tuning the vision component for downstream image classification tasks. Additionally, the large size of these models can lead to over-fitting when trained on limited data.
In addition to the above approach, we exploit another source of information in vision-language models that traditional models have overlooked. Such new information comes from the category names in downstream image classification tasks.

![1_image_0.png](1_image_0.png)

| Category ID | Category Name | Name in Spanish |
|---|---|---|
| | "tench", "goldfish", "great white shark", "tiger shark", … | "tenca", "pez de colores", "gran tiburón blanco", "tiburón tigre", … |

Figure 1: Comparing one-shot classification accuracy on ImageNet using different category information. The typical way of finetuning using images with their category IDs does not work well for one-shot learning with big models. With the information on the category names of training images, we develop a new initialization approach that significantly boosts the performance of vision-language models in few-shot learning. Interestingly, using non-English names can still help even though the model was pre-trained using images and English text data pairs.

Because vision-language models can generate powerful representations for images and texts, we will show that by utilizing semantic category names for initialization, vision-language models can be transferred better with few examples in downstream tasks. As summarized in Figure 1, this paper explores several scenarios: (1) randomly initializing a classification head; (2) initializing a classification head with category names; (3) initializing a classification head with other heuristics such as class digits or even non-English category names. Note that (1) corresponds to the scenario where we only know the category ID (e.g., class 0, class 1, ..., class N) without knowing the meaning of each category. In contrast, (2) implicitly parses the information from category names such as "tench" and "goldfish". The pretrained language model can process these label names to provide a better initialization for model adaptation. Compared to (2), (3) provides different types of category name information. The main difference between scenario (1) and the others is that (1) does not utilize text/language information from the categories. In scenario (1), the backbone network is initialized from the pretrained model weights, and the classification head is randomly initialized. We set (1) to be our baseline as it is the most common model adaptation method. We leverage the pretrained language model for the other scenarios to parse the text information in the provided categories. Specifically, we pair all category names with prompts and extract the average text embedding as the weight to initialize the classification head. The second scenario is called category name initialization (CNI), and it achieves the best performance among all these scenarios when finetuning using one-shot ImageNet data, as shown in Figure 1. In this paper, we conduct extensive experiments exploring few-shot performance on ImageNet (Deng et al., 2009), Cifar100 (Krizhevsky, 2009), Oxford Flowers (Nilsback & Zisserman, 2008), Stanford Cars (Krause et al., 2013), etc. Using the powerful pretrained models, we sweep hyper-parameters such as learning rates, training layers, weight regularization, etc., and find a stable recipe for few-shot learning that can significantly outperform the state-of-the-art in many classification tasks.
Notably, we achieve a one-shot top-1 accuracy of 86.15% and a five-shot top-1 accuracy of 87.37% on ImageNet, which outperforms many other approaches using the same or more training examples. More interestingly, in this work, we demonstrate that:

- Category name initialization can significantly boost the finetuning performance in few-shot settings, outperforming many other initialization or fine-tuning methods. However, the contribution of category names diminishes when there are a sufficiently large number of training images.
- Leveraging the proposed category name initialization can speed up convergence compared to random initialization.
- In scenarios where a user does not speak English, we find that non-English category names still help with few-shot learning. For example, we can use Spanish category names to initialize the network, which is more effective than random initialization.
- A larger pretrained model can further boost the few-shot performance of a small model through model distillation. We have achieved a 1.01% performance boost using 1% labeled images from ImageNet.
- The selection of finetuning layers is crucial to the performance. Empirically, finetuning the last few layers is much better than full model finetuning in a few-shot setting. On the other hand, finetuning the entire network works better when the training data is sufficient.
- We explore additional factors that impact few-shot learning, specifically the learning rate and weight regularization. We provide a comprehensive guide on determining the optimal learning rate and analyze the interesting effects of incorporating L2 weight regularization into few-shot learning.

## 2 Related Work

The human vision system can surprisingly learn from only a few examples. More amazingly, one may learn more effectively by knowing the new species' names. For example, people who have seen "fish" and "cat" before can quickly understand what "catfish" means with or without the help of additional images. Motivated by this phenomenon, few-shot learning has been extensively studied in computer vision (Fei-Fei et al., 2006; Hariharan & Girshick, 2017). Since deep CNNs became popular, a common practice is to train a deep CNN on ImageNet and then transfer the model to downstream tasks (Huh et al., 2016). However, transferring a pretrained ImageNet model requires hundreds or thousands of images. When there are only a few examples per category, few-shot learning with pretrained ImageNet models is inferior to models trained with enough in-domain data. Recently, there has been increasing interest in utilizing vision-language models for visual zero-shot learning, a problem closely related to few-shot learning. CLIP (Radford et al., 2021) is a pioneering work in large-scale vision-language modeling. Unlike previous works in vision-language representation (Donahue et al., 2015; Vinyals et al., 2015), CLIP collects image-text pairs from the Web, which contain diversified semantics in a weakly supervised fashion. In addition, CLIP is built on large-scale contrastive learning, which maps images and text into the same subspace. Through this, the model can map textual class names to images, hence performing image classification in a zero-shot manner. The approach of CLIP was followed by ALIGN (Jia et al., 2021), Flamingo (Alayrac et al., 2022), LiT (Zhai et al., 2022b), Florence (Yuan et al., 2021), FLAVA (Singh et al., 2022), SimVLM (Wang et al., 2022) and CoCa (Yu et al., 2022).
Among these works, ALIGN, Florence, FLAVA, and LiT are based on contrastive learning. Flamingo chooses to optimize a generative loss with gated cross-attention layers. Finally, CoCa integrates contrastive and generative losses into one framework. Although training CoCa seems the most challenging among all these vision-language works, it obtains consistently better results in many tasks. In the literature, CLIP, LiT, ALIGN, Florence, FLAVA, and CoCa have demonstrated promising results with zero-shot learning. However, the potential of these models for few-shot learning is not well exploited. Li et al. (2022) construct a benchmark and toolkit named Elevater for evaluating the transferability of vision-language models using different training samples. Radford et al. (2021) point out that using a few training examples could improve the effective robustness while undermining the relative robustness. Most few-shot learning algorithms are trained exclusively on image data, ignoring valuable text information that could enhance the learning process. Flamingo has emerged as a promising approach for addressing this issue: it utilizes few-shot interleaved prompts with gated cross-attention layers to improve few-shot learning. Zhou et al. (2022a) propose context optimization (CoOp) to model text in prompts through continuous representations. Zhou et al. (2022b) propose CoCoOp, which extends CoOp by further learning a lightweight neural network to generate an input-conditional token (vector) for each image. In addition, a series of prior-based methods utilize CLIP priors with a cache model. CLIP-Adapter (Gao et al., 2021) combines zero-shot visual or language embeddings with corresponding finetuning features to improve performance. TIP-Adapter (Zhang et al., 2022) constructs adapters using a key-value cache model from few-shot training sets and updates their prior knowledge through feature retrieval. TIP-X (Udandarao et al., 2022) further constructs an affinity matrix by measuring the KL divergence between test and few-shot samples, which removes direct reliance on the uncalibrated image-image similarities. APE (Zhu et al., 2023) explores the trilateral affinities between the test image, the prior cache model, and textual representations, and only trains a lightweight category-residual module. Among these approaches, TIP-Adapter, TIP-X, and APE are training-free, while CoOp, CoCoOp, CLIP-Adapter, and APE-T (Zhu et al., 2023) require training. Klein et al. (2014) suggest that using a Fisher vector derived from other distributions can improve accuracy in central computer vision tasks. Category names have also been exploited in image-text tasks, such as visual grounding (Wang et al., 2017) and visual question answering (Gupta et al., 2017). In these methods, the text embeddings of the category names and the image embedding are extracted separately by two branches. Then their inner product is calculated as the similarity score between an image region and an object category. This paper demonstrates that leveraging category names for initialization can significantly enhance the few-shot performance of the CoCa model without bells and whistles. Our approach outperforms Flamingo and CLIP and establishes a new state-of-the-art for both ImageNet and several other datasets with fewer training examples.
## 3 Approach

In this section, we first briefly review CoCa (Yu et al., 2022), one of the state-of-the-art vision-language models, and then discuss two initialization strategies for finetuning tasks: the standard random initialization and the new category name initialization (CNI).

![3_image_0.png](3_image_0.png)

Figure 2: An overview of the CoCa pretraining and finetuning. (a) The pretraining of CoCa relies on mapping image and text pairs into the same space for embedding alignment, where the image and text embeddings are extracted through an image encoder and a unimodal text decoder, respectively. The image pooler is used to customize the image embedding for different tasks. (b) We append a randomly initialized linear projector to the image pooler and initialize the image encoder from pretrained weights. (c) We construct text sequences by pairing all C category names with N different prompts. Via the pretrained unimodal decoder, we can compute the text embeddings for all text sequences (with a total number of N × C), each of which is a D-dimensional vector. The normalized average embeddings can be used to initialize the linear projector's weight.

## 3.1 Revisiting CoCa Pretraining

Unlike other recent vision-language models, CoCa adopts an encoder-decoder model architecture to learn the generic vision and multi-modal representations. As shown in Figure 2 (a), CoCa encodes images into latent representations via an encoder network (e.g., vision transformer (ViT) (Dosovitskiy et al., 2021)) and encodes text representations via a unimodal decoder. We append an image pooler after the image encoder to customize the image representations. Practically, CoCa adopts a cascade design by using two image poolers, i.e., a generative image pooler and a contrastive image pooler. The motivation for this design comes from the preliminary experimental result that a single pooled image embedding helps vision recognition tasks while more visual tokens benefit multi-modal understanding tasks. Following Lee et al. (2019), both generative and contrastive image poolers are single multi-head attention layers with different numbers of learnable queries, enabling the model to pool embeddings of different lengths. They can also customize visual representations for different tasks and training objectives. For simplicity and clarity, we depict them in one box named *Image Pooler*. On the other hand, CoCa uses a unimodal decoder to extract text-only embeddings. It cascades multi-modal decoder layers cross-attending to image embeddings to learn multi-modal image-text representations. CoCa is pretrained on image-text pairs using two objective functions. The first is the contrastive loss, where the image representations are contrasted against the paired text representations. The contrastive loss enables cross-modal representation alignment. The other is the image-captioning loss, which requires the model to autoregressively predict the tokenized texts by maximizing the conditional likelihood. The resulting CoCa can thus generate both unimodal visual/textual embeddings and multi-modal joint embeddings. The unimodal visual output generated by the encoder and the unimodal textual output generated by the unimodal decoder are aligned in the same vector space and thus can be used to map images with their class names in a zero-shot manner. Here, we focus on reusing these two components to initialize for few-shot learning.
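To make the alignment objective concrete, the snippet below gives a minimal sketch of a symmetric contrastive loss of the kind described above. It is an illustrative PyTorch re-implementation, not CoCa's actual (Lingvo/TensorFlow) code: the fixed temperature value is an assumption, and the captioning loss is omitted.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    image_emb, text_emb: (B, D) unimodal outputs of the image encoder and
    the unimodal text decoder; matched pairs share the same batch index.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Contrast each image against all texts in the batch, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Because both towers are trained into the same space, zero-shot classification reduces to taking the inner product between a normalized image embedding and the normalized text embeddings of the class names.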
## 3.2 Finetuning CoCa

Random initialization. One straightforward model adaptation approach is to add a randomly initialized linear projector upon the pretrained model and selectively finetune the model (all or part of the layers), as depicted in Figure 2 (b). Following the approach used by CLIP (Radford et al., 2021) and CoCa (Yu et al., 2022), we first use an image pooler to obtain the aggregated image embedding $H \in \mathbb{R}^D$ and then apply a linear projector to get the prediction $Y \in \mathbb{R}^C$,

$$Y=\mathrm{softmax}(WH+b),\qquad(1)$$

where $W \in \mathbb{R}^{C \times D}$ and $b \in \mathbb{R}^C$ are the learnable weight and bias of the linear projector. Here $W$ and $b$ are randomly initialized, while the image encoder and generative image pooler are initialized from the pretrained weights. Table 7 summarizes the number of parameters of the different modules of CoCa.

Category name initialization. We argue that the above random initialization ignores the potential of the language model for model adaptation. In contrast, we propose category name initialization to maximize the capacity of the pretrained unimodal decoder. First, we pair all category names (whose total number is C) with N different prompts as the text inputs. For example, pairing the category name "tench" with the prompt "A bad photo of {}" gives us the text sequence "A bad photo of tench". Next, we compute the text embeddings for all these N × C text sequences via the unimodal decoder. As the text embedding for each text input is a D-dimensional vector, we obtain a text embedding tensor with a shape of N × C × D. Following the previous work CLIP (Radford et al., 2021), we compute the average over different prompts and perform L2 normalization to obtain the average embeddings of shape C × D. Unlike random initialization, we initialize the weight W with the average embeddings and the bias b with a zero vector in the linear projector. We initialize the image encoder and the image pooler from the pretrained model weights to enable zero-shot inference of the category name initialized model (a short sketch of the procedure is given at the end of this subsection).

Discussion. Category name initialization is model-agnostic, making it applicable to other foundation models that utilize contrastive loss. Vision-language models trained with contrastive learning inherently yield a two-tower representation, where the text tower's output is embedded into the image tower's embedding space. This shared embedding space allows for cosine distance computation through the inner product of normalized embedding vectors. Consequently, the text embeddings of category names can effectively initialize the visual classifier. With category name initialization, the model can maintain its zero-shot performance even before fine-tuning, avoiding starting from scratch and undergoing a lengthy fine-tuning process. In contrast to CoOp (Zhou et al., 2022a), where prompts are learnable variables, the prompts for each downstream image classification task are fixed. In Section 4.3, we will demonstrate that context optimization is less effective than category name initialization. TIP-Adapter (Zhang et al., 2022) calculates predicted logits by measuring the affinity between embeddings of the test image and cached training images, as well as textual embeddings. In Section 4.4, we will show that using cached image embeddings for initialization leads to poorer performance. CLIP-Adapter (Gao et al., 2021) introduces and fine-tunes two learnable adapters, each consisting of two layers of linear transformations, to transform classifier weights and image features.
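The following is a minimal sketch of the category name initialization described above, again in PyTorch rather than the paper's Lingvo/TensorFlow code; `text_encoder`, `tokenizer`, and `head` are placeholders for the pretrained unimodal decoder, its tokenizer, and the linear projector of Eq. (1).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def category_name_init(text_encoder, tokenizer, class_names, prompts, head):
    """Initialize a C-way linear head from averaged class-name text embeddings."""
    rows = []
    for name in class_names:                       # C categories
        texts = [p.format(name) for p in prompts]  # N prompts, e.g. "A bad photo of {}"
        emb = text_encoder(tokenizer(texts))       # (N, D) text embeddings
        # Average over prompts, then L2-normalize, as described in the text.
        rows.append(F.normalize(emb.mean(dim=0), dim=-1))
    head.weight.copy_(torch.stack(rows))           # W <- (C, D) average embeddings
    head.bias.zero_()                              # b <- 0
```

Since the rows of W are the same normalized text embeddings used for zero-shot inference, the initialized model reproduces the zero-shot classifier before any finetuning step.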
However, we found that using text embeddings of category names to initialize the classifier and finetuning the final few layers is the most effective method for few-shot learning, without the need for overly complex designs. Layer selection will be discussed in Section 4.7.

In practice, we may not always have the names of all categories. For example, when the finetuning service is provided to users from another country with different languages, the user may use category names in a foreign language or even digital labels for each category. Interestingly, although trained only with English texts, CoCa uses word piece and sentence piece models as its tokenizer and thus can compute the embedding for any text sequence without raising out-of-vocabulary errors. In Section 4.4, we will compare the impact of different variants of category name initialization.

## 4 Experiments

In this section, we first describe the details of our experimental setups, and then present our experimental results as well as key findings with comprehensive analysis.

## 4.1 Experimental Setup

Data. We conducted finetuning experiments on several widely-used image classification datasets, including ImageNet (Deng et al., 2009), ImageNet-V2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021a), ImageNet-A (Hendrycks et al., 2021b), ImageNet-Sketch (Wang et al., 2019), Cifar100 (Krizhevsky, 2009), Oxford Flowers (Nilsback & Zisserman, 2008), Stanford Cars (Krause et al., 2013), Country-211 (Radford et al., 2021), Food-101 (Bossard et al., 2014), FGVC Aircraft (Maji et al., 2013), EuroSAT (Helber et al., 2019), and Oxford-IIIT Pets (Parkhi et al., 2012). To account for different few-shot settings, we randomly sampled a specific portion of data from each dataset. For instance, in one-shot ImageNet, we only chose one image from the ImageNet training data for each category. Despite this sampling, we evaluated all models on the entire testing set. Following the existing benchmark (Li et al., 2022), we employed the same text prompts (https://github.com/Computer-Vision-in-the-Wild/Elevater_Toolkit_IC/blob/main/vision_benchmark/datasets/prompts.py) for evaluating all methods for a fair comparison. CoCa (Yu et al., 2022) is pretrained using the JFT-3B (Zhai et al., 2022a) and ALIGN (Jia et al., 2021) datasets. During the pretraining stage, all near-domain examples (3.6M images) are removed following the strict de-duplication procedures (Zhai et al., 2022a; Jia et al., 2021).

Optimization. We use the Adafactor optimizer (Shazeer & Stern, 2018) with β1 = 0.9, β2 = 0.999, and a weight decay ratio of 0.01. All input images are first rescaled to 580 × 580 and then randomly cropped to the size of 540 × 540. We further apply RandAugment (Cubuk et al., 2020) and label smoothing in our data preprocessing pipeline (a sketch of an equivalent pipeline is given below). Our model is implemented in the Lingvo framework using TensorFlow (Shen et al., 2019).

Hyper-parameters. The choice of batch size depends on the dataset and its number of categories. When the total number of training examples is relatively small, using a large batch size may not be feasible. However, using the largest possible batch size for efficient training is generally desirable. For instance, in the case of ImageNet, which consists of 1000 categories, we opt for a batch size of 512. This decision is based on the consideration that we have a substantial total number of training images, either 1000 (for one-shot tasks) or 5000 (for five-shot tasks). Therefore, using a batch size of 512, we can efficiently utilize the available computational resources during training. For datasets with a smaller number of categories, the batch size is adjusted accordingly; for instance, in the case of Cifar-100, where there are 100 categories, we choose a batch size of 256 for the five-shot setting and 64 for the one-shot setting.
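As an illustration of the optimization setup above, a torchvision equivalent of the preprocessing pipeline might look as follows; the original is implemented in Lingvo/TensorFlow, and the label-smoothing coefficient shown is an assumed value, since the paper does not report it.

```python
import torch.nn.functional as F
from torchvision import transforms

# Sketch of the input pipeline: rescale, random crop, RandAugment.
train_transform = transforms.Compose([
    transforms.Resize((580, 580)),   # rescale to 580 x 580
    transforms.RandomCrop(540),      # random 540 x 540 crop
    transforms.RandAugment(),        # RandAugment (Cubuk et al., 2020)
    transforms.ToTensor(),
])

# Label smoothing is applied in the classification loss, e.g.:
# loss = F.cross_entropy(logits, labels, label_smoothing=0.1)  # 0.1 is assumed
```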
Model cost. The computational cost of training a model depends on the model size and the chosen training batch size. To provide specific examples, when fine-tuning CoCa-base on ImageNet (five-shot), we utilized a 4x4 Jellyfish TPU with a batch size of 512, and the training process took approximately 6 hours. Similarly, when fine-tuning CoCa-2B on Cifar-100 (five-shot), we employed a 4x4 Dragonfish TPU with a batch size of 256, and the training duration was around 9 hours.

## 4.2 Improving CoCa In Few-Shot Classification

Table 1: Few-shot results on ImageNet and its variants. We use IN as the abbreviation for ImageNet, and CNI for category name initialization. The second column shows how much training data per class is used for finetuning. 0 shot means the pretrained vision-language model is directly evaluated without finetuning. Full means the entire training set has been used. All the numbers under the last five columns denote the top-1 test accuracy.

| Model | Shot | IN | IN-V2 | IN-R | IN-A | IN-Sketch |
|---|---|---|---|---|---|---|
| MAE (He et al., 2022) | full | - | - | 66.50 | 76.70 | 50.90 |
| CLIP (ViT-B/16) (Radford et al., 2021) | 0 | 68.40 | 62.60 | 77.60 | 50.00 | 48.20 |
| CLIP (ViT-B/16) (Radford et al., 2021) | full | 79.90 | 69.80 | 70.80 | 46.40 | 46.90 |
| CLIP (ViT-L/14) (Radford et al., 2021) | 0 | 76.20 | 70.10 | 88.90 | 77.2 | 60.20 |
| CLIP (ViT-L/14) (Radford et al., 2021) | full | 85.20 | 75.80 | 85.30 | 76.10 | 58.70 |
| CLIP+Adapter (ResNet-50) (Gao et al., 2021) | 0 | 55.50 | - | - | - | - |
| CLIP+Adapter (ResNet-50) (Gao et al., 2021) | 1 | 58.10 | - | - | - | - |
| CLIP+Adapter (ResNet-50) (Gao et al., 2021) | 4 | 59.50 | - | - | - | - |
| CLIP+CoOp (ViT-B/16) (Zhou et al., 2022a) | 0 | 58.18 | - | - | - | - |
| CLIP+CoOp (ViT-B/16) (Zhou et al., 2022a) | 1 | 58.00 | - | - | - | - |
| CLIP+CoOp (ViT-B/16) (Zhou et al., 2022a) | 4 | 60.01 | - | - | - | - |
| Tip-Adapter-F (ResNet-50) (Zhang et al., 2022) | 0 | 60.33 | - | - | - | - |
| Tip-Adapter-F (ResNet-50) (Zhang et al., 2022) | 1 | 61.32 | - | - | - | - |
| Tip-Adapter-F (ResNet-50) (Zhang et al., 2022) | 4 | 62.52 | - | - | - | - |
| WiSE-FT (ViT-L/14) (Wortsman et al., 2022) | full | 85.30 | 76.90 | 89.80 | 79.70 | 63.00 |
| Flamingo-3B (Alayrac et al., 2022) | 1 | 70.90 | - | - | - | - |
| Flamingo-3B (Alayrac et al., 2022) | 5 | 72.70 | - | - | - | - |
| Flamingo-80B (Alayrac et al., 2022) | 1 | 71.90 | - | - | - | - |
| Flamingo-80B (Alayrac et al., 2022) | 5 | 77.30 | - | - | - | - |
| CoCa-base (Yu et al., 2022) | 0 | 82.26 | 76.22 | 93.16 | 76.17 | 71.12 |
| CoCa-base+CNI (Ours) | 1 | 82.35 | 76.47 | 93.37 | 77.00 | 71.61 |
| CoCa-base+CNI (Ours) | 5 | 83.58 | 77.23 | 93.22 | 77.23 | 71.35 |
| CoCa-2B (Yu et al., 2022) | 0 | 86.09 | 80.39 | 96.19 | 89.39 | 77.12 |
| CoCa-2B+CNI (Ours) | 1 | 86.15 | 80.57 | 96.62 | 90.12 | 77.49 |
| CoCa-2B+CNI (Ours) | 5 | 87.37 | 81.66 | 96.41 | 89.68 | 77.39 |

State-of-the-art on ImageNet and its variants. We use the pretrained CoCa model and apply category name initialization. We then compare our method against previous works on ImageNet and its variants, including ImageNet-V2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021a), ImageNet-A (Hendrycks et al., 2021b) and ImageNet-Sketch (Wang et al., 2019). As shown in Table 1, CoCa-2B+CNI has achieved state-of-the-art few-shot classification results on all these benchmarks.
Surprisingly, the one-shot and five-shot performance of CoCa-base is even better than the performance of some other recent methods finetuned on the whole dataset.

Table 2: Comparing with the state-of-the-art on multiple classification benchmarks. CNI stands for category name initialization, and RI means random initialization. Our model obtains the state-of-the-art few-shot learning performance with less training data than others.

| Model | Shot | Cifar100 | Oxford Flowers | Stanford Cars | Country-211 | Food-101 | FGVC Aircraft | EuroSAT | Oxford-IIIT Pets |
|---|---|---|---|---|---|---|---|---|---|
| MAE (He et al., 2022) | 5 | 21.20 | 50.90 | 6.30 | 2.80 | 7.70 | 7.00 | 64.60 | 17.20 |
| MAE (He et al., 2022) | 20 | 43.50 | 71.90 | 25.50 | 4.40 | 30.40 | 29.90 | 74.10 | 60.00 |
| MAE (He et al., 2022) | full | 68.30 | 72.00 | 37.20 | 10.10 | 65.10 | 39.10 | 94.80 | 81.60 |
| CAE (Chen et al., 2022) | 5 | 38.30 | 70.30 | 8.70 | 3.50 | 18.60 | 14.30 | 76.70 | 37.30 |
| CAE (Chen et al., 2022) | 20 | 55.10 | 81.20 | 27.50 | 5.50 | 35.70 | 32.60 | 89.00 | 63.30 |
| CAE (Chen et al., 2022) | full | 78.90 | 81.20 | 40.40 | 11.40 | 67.40 | 40.80 | 96.70 | 79.80 |
| MoCo-v3 (Chen et al., 2021) | 5 | 60.50 | 79.50 | 13.40 | 4.80 | 36.60 | 11.80 | 77.10 | 76.20 |
| MoCo-v3 (Chen et al., 2021) | 20 | 75.50 | 89.50 | 49.50 | 7.60 | 59.30 | 38.20 | 84.80 | 86.40 |
| MoCo-v3 (Chen et al., 2021) | full | 85.30 | 89.50 | 63.00 | 13.70 | 78.00 | 48.00 | 95.90 | 91.40 |
| DeiT (Touvron et al., 2021) | 5 | 61.50 | 82.70 | 27.60 | 4.40 | 41.90 | 24.10 | 62.50 | 87.80 |
| DeiT (Touvron et al., 2021) | 20 | 73.70 | 92.70 | 68.80 | 6.20 | 61.50 | 34.10 | 90.70 | 91.90 |
| DeiT (Touvron et al., 2021) | full | 89.60 | 92.40 | 83.00 | 14.10 | 84.50 | 59.30 | 98.20 | 93.90 |
| ViT (Dosovitskiy et al., 2021) | 5 | 75.40 | 99.20 | 27.60 | 6.80 | 59.00 | 22.70 | 70.00 | 89.60 |
| ViT (Dosovitskiy et al., 2021) | 20 | 84.00 | 99.20 | 53.90 | 11.50 | 81.70 | 40.50 | 86.50 | 92.60 |
| ViT (Dosovitskiy et al., 2021) | full | 89.80 | 99.20 | 67.50 | 16.60 | 89.60 | 47.80 | 96.00 | 94.80 |
| CLIP (Radford et al., 2021) | 5 | 71.10 | 94.20 | 73.60 | 21.70 | 89.70 | 36.00 | 76.70 | 90.50 |
| CLIP (Radford et al., 2021) | 20 | 75.40 | 96.8 | 73.60 | 25.20 | 90.60 | 48.10 | 86.60 | 92.30 |
| CoCa-2B (Yu et al., 2022) | 0 | 77.19 | 92.04 | 94.37 | 42.15 | 94.79 | 44.83 | 49.74 | 97.88 |
| CoCa-2B+RI | 1 | 5.69 | 40.78 | 14.29 | 1.71 | 1.26 | 12.24 | 56.84 | 61.95 |
| CoCa-2B+RI | 5 | 7.49 | 84.71 | 86.31 | 19.06 | 62.45 | 27.21 | 82.38 | 78.61 |
| CoCa-2B+CNI | 1 | 77.89 | 98.45 | 95.29 | 42.44 | 94.91 | 58.33 | 75.06 | 97.93 |
| CoCa-2B+CNI | 5 | 78.62 | 99.25 | 96.08 | 44.52 | 95.50 | 69.29 | 85.78 | 98.12 |

State-of-the-art on other benchmarks. In addition to ImageNet and its variants, we show that our method can achieve state-of-the-art few-shot performance on other image classification benchmarks, including Cifar100 (Krizhevsky, 2009), Oxford Flowers (Nilsback & Zisserman, 2008) and Stanford Cars (Krause et al., 2013), Country-211 (Radford et al., 2021), Food-101 (Bossard et al., 2014), FGVC Aircraft (Maji et al., 2013), EuroSAT (Helber et al., 2019), and Oxford-IIIT Pets (Parkhi et al., 2012). By examining Table 2, it becomes apparent that our CoCa-2B model outperforms many other approaches, even when trained with fewer data. The performance gain results from the category name initialization, which serves as a strong foundation that enables the model to achieve better results with only a few examples. To gain a deeper understanding of this phenomenon, we provide an analysis of the category name initialization in the following section.
## 4.3 Analysis Of Category Name Initialization

This section delves deeper into how the proposed category name initialization helps large vision-language models in few-shot learning. Vision-language models are adept at zero-shot inference without seeing any training examples from downstream tasks. However, the zero-shot performance heavily depends on the domain gap and data distribution, and thus varies across downstream tasks. By leveraging a few training examples from the target domain, the pretrained vision-language models can adapt to the target domain.

Improvement upon zero-shot performance. We first examine how category name initialization improves zero-shot performance. As illustrated in Table 1 and Table 2, category name initialization enhances performance across all datasets. The improvement in performance from zero-shot to five-shot varies depending on the dataset. For instance, CoCa-2B on ImageNet sees a 1.32% increase in performance, whereas EuroSAT sees a 36.04% gain. CoCa's zero-shot performance on ImageNet leaves less room for few-shot learning. Nonetheless, the performance gain achieved through our category name initialization is noteworthy, as some other methods may not achieve comparable improvements, as discussed below.
Although showing better performance than linear probing and full finetuning, the one- or five-shot performance of CoOp is slightly inferior to zero-shot CoCa. This suggests that learning contextual prompts does not significantly improve CoCa's few-shot performance. On the other hand, category name initialization effectively improves the few-shot performance, which is challenging when the zero-shot performance of CoCa is significantly higher than that of other counterparts such as CLIP (Radford et al., 2021) and FLAVA (Singh et al., 2022). Category name initialization vs. random initialization. To gain a deeper understanding of the advantages of category name initialization, we compared it with random initialization. Comparing the last three rows in Table 2, we can observe that the few-shot classification results using random initialization are worse than the zero-shot classification with pretrained CoCa. However, employing category name initialization would effectively use those few training examples and boost performance. Figure 3 provides a more detailed comparison of the optimization process using the two initialization methods. By meticulously tuning the parameters, we set the initial learning rate to 1e-5 for category name initialization and 5e-5 for random initialization. Employing category name initialization results in a better starting model with higher test accuracy than random initialization. Furthermore, the model utilizing category name initialization converges faster than random initialization. This can be attributed to the fact that the test accuracy while using random initialization continues to increase even after 250 epochs, whereas the accuracy achieved with category name initialization plateaus around 200 epochs when fine-tuning on ImageNet. Similarly, the one-shot test accuracy on Cifar-100 converges within 100 epochs by employing category name initialization, while the counterpart using random initialization converges after 300 epochs. ![9_image_0.png](9_image_0.png) Figure 3: Comparison of test accuracy over the training epoch. We finetune the CoCa-base model with category name initialization or random initialization. Category name initialization provides better initial test accuracy and helps the model converge better and faster than random initialization. ## 4.4 Exploring Different Initialization Approaches In real-world scenarios, we cannot always guarantee the availability of perfect category names for every classification task. Sometimes we may only have digital labels such as class "1", "2", and so on, while in other cases, users may not be fluent in English. In such scenarios, it is crucial to evaluate how the model performs with different versions of category names. Table 4 compares the performance of using no category names (i.e., random initialization) with various variants of category names. The most straightforward approach is to use digits (class 1, 2, and so on) as category names. However, this approach provides little semantic information and does not improve few-shot performance. Conversely, category names in English and other languages significantly enhance few-shot recognition. This is surprising because CoCa was trained on English-only text with limited knowledge of other languages. 
Nevertheless, due to the sentence piece tokenizer (Kudo & Richardson, 2018) and token sharing, our method can still benefit from foreign-language transfer, resulting in better performance than random initialization, even though the performance of these foreign-language names is not as good as that of English names.

Table 4: Comparison of category name initialization using digits or different languages. We use the same pretrained CoCa-base model for all category name initializations. The numbers below are top-1 test accuracy on ImageNet.

| Category Name Initialization | Zero-shot | One-shot | Five-shot |
|---|---|---|---|
| No | N/A | 59.17 | 79.33 |
| Digits | 0.10 | 53.60 | 78.75 |
| Korean | 22.89 | 53.71 | 79.53 |
| Russian | 43.59 | 53.43 | 79.55 |
| German | 29.24 | 63.15 | 79.90 |
| Spanish | 34.38 | 79.87 | 80.05 |
| English | 82.26 | 82.35 | 83.58 |

Inspired by the aforementioned observation, we hypothesize that initialization with only partial category information can still yield benefits. To test this hypothesis, we randomly selected 50% of the category names for initialization while using random initialization for the remaining names. The results are shown in Table 5, where it can be seen that using 50% of the names still improves the one-shot accuracy over random initialization from 59.17% to 66.82%, and the five-shot accuracy from 79.33% to 80.67%. This indicates that our method has potential as a valuable tool in situations where within-domain labels are incomplete or expressed in different languages.

Table 5: Comparing the performance of using all category names or 50% of the names (the other half are initialized with random vectors) for initialization. The numbers below are top-1 test accuracy on ImageNet.

| Initialization | Zero-shot | One-shot | Five-shot |
|---|---|---|---|
| No category name | N/A | 59.17 | 79.33 |
| 50% category names | 44.36 | 66.82 | 80.67 |
| 100% category names | 82.26 | 82.35 | 83.58 |

Another question that arises is whether we can apply a similar initialization approach using image embeddings instead of text. To test this hypothesis, we select one representative image per class from ImageNet, resulting in 1000 images for 1000 categories, and use the pretrained CoCa-base model to extract 1000 embedding vectors. We then initialize the linear projector of our few-shot model with these image embeddings, which we call image embedding initialization (IEI). We compare the performance of IEI (using one example image per category) with CNI (using category names but no images) and present the accuracy of the initialized models (without finetuning) in Table 6. The results indicate that IEI performs worse than CNI, suggesting that embedding category names is more robust than embedding a single image. Moreover, we compute the average of the IEI and CNI weights to create a new initialization vector and find that the average weight's performance lies in the middle of IEI and CNI. A sketch of these partial and mixed initialization variants is given below.

Table 6: Comparing top-1 accuracy of image embedding initialization (IEI) and category name initialization (CNI) on ImageNet.

| Initialization | Accuracy (%) |
|---|---|
| IEI | 47.16 |
| 0.5 × IEI + 0.5 × CNI | 61.84 |
| CNI | 82.26 |
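The snippet below sketches the two variants just evaluated. It assumes `cni_w` and `iei_w` hold the (C, D) normalized text- and image-based weight matrices and `known` is a boolean mask marking classes whose names are available; these names and the random-fallback scale are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

C, D = cni_w.shape  # cni_w: (C, D) text-embedding weights, iei_w: (C, D) image-embedding weights

# Table 5 setting: keep the CNI rows whose class name is known,
# fall back to small random vectors for the remaining classes.
w_partial = torch.where(known[:, None], cni_w, 0.02 * torch.randn(C, D))

# Table 6 setting: average the image- and text-based weights, then re-normalize.
w_mixed = F.normalize(0.5 * iei_w + 0.5 * cni_w, dim=-1)
```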
## 4.5 Limitations

After comparing different initialization approaches, one question that arises is whether category name initialization continues to be helpful with more training data. We investigate this by fine-tuning pretrained vision-language models using varying numbers of training images. To demonstrate the effectiveness, we establish a baseline for comparison by using random initialization. We utilize two different pretrained CoCa models, CoCa-base and CoCa-2B, and fine-tune them on ImageNet and Cifar100 using different amounts of training data. As shown in Figure 4, category name initialization outperforms random initialization across different datasets, model architectures, and numbers of training data. However, the contribution of category name initialization diminishes as more training data is provided.

Another limitation of the proposed category name initialization is that it relies on category names to initialize the classification head. While it can significantly improve few-shot image classification accuracy, it may not be applicable in all scenarios. For example, in domains where category names are not available or are not reliable, the proposed method may not be effective.

![11_image_0.png](11_image_0.png)

Figure 4: Comparison of test accuracy over different percentages of training images. Category name initialization outperforms random initialization over different datasets, model architectures, and numbers of training data.

## 4.6 Model Distillation

We first show that category name initialization can be used for models of different scales by carrying out few-shot experiments using two different pretrained CoCa architectures, CoCa-base and CoCa-2B, with different amounts of training data. After discarding the unimodal and multi-modal text decoders, CoCa-base and CoCa-2B contain 96M and 1B parameters for downstream image classification tasks (see Table 7). As shown in Table 8, we can observe the trend that bigger models do better and more shots help.

Table 7: Number of parameters of different modules.

| Module | CoCa-base | CoCa-2B |
|---|---|---|
| Image encoder | 85,999,872 | 1,011,740,288 |
| Image pooler | 19,095,296 | 63,843,648 |
| Linear projector | 769,000 | 1,409,000 |

Table 8: Top-1 test accuracy on ImageNet for different model scales and amounts of training data.

| Model | Zero-shot | One-shot | Five-shot | 1% |
|---|---|---|---|---|
| CoCa-2B | 86.19 | 86.15 | 87.37 | 87.90 |
| CoCa-base | 82.26 | 82.35 | 83.58 | 83.80 |
| + distillation | - | - | - | 84.81 |

As larger models tend to perform better, it is natural to consider knowledge distillation, which involves using the predictions of a teacher model to guide the training of a student model. In this work, we use the CoCa-2B model finetuned with 1% of the ImageNet images as the teacher model and CoCa-base as our student model. In addition to the 1% labeled ImageNet images, we use other unlabeled images for knowledge distillation. During the finetuning process, we freeze the teacher model weights and update the student model weights using two loss objectives. The first objective is the supervised loss, where we compute the cross entropy between the student model predictions and the labels for the 1% labeled ImageNet images. The second objective is the distillation loss, computed over all unlabeled data. Unlike few-shot finetuning, where only the last few layers are finetuned, we finetune the entire student model here since the distillation loss is computed over many unlabeled images. Table 8 shows that by distilling from the larger finetuned teacher model, CoCa-base achieves a 1.01% improvement in accuracy (from 83.80% to 84.81%). A sketch of this two-objective training step is given below.
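The following is a minimal sketch of one training step combining the two objectives described above. The distillation weight `alpha` and temperature `tau` are assumptions, since the paper does not state the exact form of its distillation loss.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, labeled_batch, unlabeled_batch,
                      alpha=1.0, tau=1.0):
    """Supervised cross entropy on labeled data + KL to a frozen teacher on unlabeled data."""
    x_l, y = labeled_batch
    sup_loss = F.cross_entropy(student(x_l), y)          # 1% labeled ImageNet images

    with torch.no_grad():                                # teacher weights stay frozen
        t_prob = F.softmax(teacher(unlabeled_batch) / tau, dim=-1)
    s_logp = F.log_softmax(student(unlabeled_batch) / tau, dim=-1)
    distill_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    return sup_loss + alpha * distill_loss
```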
## 4.7 Ablation Studies

In this section, we analyze several important factors that influence the few-shot performance. We conduct our ablation study using CoCa-base as the model.

Finetuning layers. We evaluate the performance of the CoCa-base model on ImageNet (Deng et al., 2009) in various few-shot learning scenarios, with different finetuning layers selected. We compare the results to a baseline using random initialization. In our notation, P denotes the image pooler and L denotes the linear projector. For both category name initialization and random initialization, we experiment with three different optimization strategies: 1) optimizing only the linear projector (L); 2) optimizing both the image pooler (P) and the linear projector (L); and 3) optimizing all layers. Note that we have extensively tried various hyper-parameters (such as the initial learning rate) and present the optimal values for each setting. The results presented in Table 9 indicate that, for random initialization, the best performance is achieved by finetuning both the image pooler and the linear projector under all settings, compared to the other two optimization strategies. To enhance the few-shot learning performance, we experiment with the category name initialization discussed in Section 3.2. In contrast to random initialization, we initialize the linear projector using the average text embeddings of the category names. As shown in Table 10, this initialization method significantly improves few-shot recognition performance. Moreover, we observe that finetuning P + L is the most effective optimization strategy for few-shot settings, while finetuning all layers performs better with more training data.

Table 9: Comparison of different finetuning layers for random initialization. P: image pooler; L: linear projector; All: all layers. The best performance of each column is in **bold**.

| Finetuning Layers | One-shot | Five-shot | 1% | 100% |
|---|---|---|---|---|
| L | 49.38 | 69.64 | 76.53 | 85.62 |
| P + L | 57.49 | 79.33 | 81.48 | 88.22 |
| All | 43.77 | 60.90 | 79.75 | 86.03 |

Table 10: Comparison of different finetuning layers for category name initialization. P: image pooler; L: linear projector; All: all layers. The best performance of each column is in **bold**.

| Finetuning Layers | One-shot | Five-shot | 1% | 100% |
|---|---|---|---|---|
| L | 82.35 | 81.03 | 81.67 | 86.16 |
| P + L | 82.35 | 83.58 | 83.91 | 88.25 |
| All | 82.28 | 82.63 | 83.63 | 88.35 |

Learning rates. We analyze the influence of the initial learning rate on few-shot learning. We set a batch size of 512, freeze the image encoder, and adopt a cosine learning rate schedule for the final three layers. Figure 5 presents the top-1 test accuracy on ImageNet using different initial learning rates. A small initial learning rate (5e-6) results in a slow convergence rate, while a larger learning rate (5e-5) achieves faster convergence. However, despite reaching the highest test accuracy within 1000 training steps, the finetuning becomes unstable, as the test accuracy declines right after the peak value. Using an even larger learning rate (5e-4) eliminates the initial rising phase altogether, resulting in a downward trend of test accuracy. By contrast, selecting an appropriate learning rate (1e-5) is the key to stable and rapid few-shot finetuning; a sketch of the corresponding P + L setup is given below.
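The configuration below sketches the P + L setting with a cosine schedule. Module names (`image_pooler`, `linear_projector`) are illustrative, and AdamW stands in for the Adafactor optimizer used in the paper.

```python
import torch

def configure_finetuning(model, lr=1e-5, total_steps=10_000):
    """Freeze the image encoder; train only the image pooler and linear projector."""
    for p in model.parameters():
        p.requires_grad = False                 # freeze everything by default
    trainable = list(model.image_pooler.parameters()) + \
                list(model.linear_projector.parameters())
    for p in trainable:
        p.requires_grad = True                  # unfreeze the P + L layers
    optimizer = torch.optim.AdamW(trainable, lr=lr, weight_decay=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)
    return optimizer, scheduler
```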
L2 weight regularization. Out of all the few-shot settings, one-shot learning is the most unique and intriguing. As illustrated in Figure 6, the one-shot test accuracy (in red) on ImageNet decreases during finetuning even with category name initialization, unlike the five-shot accuracy (in blue). Using only one training image per class can easily distort the decision boundary, as illustrated in Figure 7. Without L2 regularization, the decision boundary of the finetuned model is easily distorted by the limited training examples, resulting in a degradation from zero-shot performance. However, by applying L2 weight regularization for one-shot learning, the decision boundary does not deviate much from that of the pretrained model. This is reflected in the steady increase of test accuracy from 82.26% to 82.35%, as depicted by the yellow curve in Figure 6. Although the performance gain is small, it is still noteworthy since the information provided by one-shot data is limited in helping a pretrained model. On the other hand, applying L2 weight regularization in five-shot learning can adversely affect model adaptation, as shown by the green curve. The reason is that L2 weight regularization, acting as an additional constraint, restricts the model from learning new knowledge from the training data when sufficient information is available to refine the decision boundary of the pretrained model. It should be noted that all of the aforementioned phenomena depend on using category name initialization: the decision boundary lacks discriminative power without it, so adding L2 weight regularization would have no meaningful effect. A sketch of this regularizer is given after Figure 7 below.

![13_image_1.png](13_image_1.png)

Figure 6: The effect of L2 weight regularization for one-shot and five-shot learning. We plot the top-1 test accuracy of CoCa-base on ImageNet vs. the training step. The L2 weight regularization is beneficial to one-shot learning but harmful to five-shot learning.

![14_image_0.png](14_image_0.png)

Figure 7: Visualization of the decision boundary in one-shot learning. From left to right, the first subfigure displays the decision boundary of the pretrained model, while the second and third subfigures show the finetuned model without and with L2 weight regularization, respectively. Each model was trained using only one training example per class, with three classes retained for simplicity. The decision boundary does not shift significantly when finetuning on the one-shot dataset with L2 regularization. This indicates that the model's generalization ability is improved, as it is less likely to overfit the training examples.
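The following is a minimal sketch of the L2 weight regularizer discussed above. The paper does not spell out its exact form, so penalizing the squared distance between the current weights and the (category-name-initialized) starting weights is an assumption; it matches the described behavior of keeping the decision boundary close to that of the pretrained model.

```python
# A hedged sketch of L2 weight regularization for one-shot finetuning. We
# assume a penalty on the squared distance between current and initial
# weights; whether the paper uses this form or plain weight decay is not
# specified, so treat this as an illustration.
import torch

def l2_reg_loss(model, init_params, lam=1e-2):
    reg = 0.0
    for name, p in model.named_parameters():
        if p.requires_grad:
            reg = reg + (p - init_params[name]).pow(2).sum()
    return lam * reg

# Usage: snapshot the initialization once, then add the penalty each step.
# init_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = task_loss + l2_reg_loss(model, init_params)
```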
## 5 Conclusion

This paper has studied the few-shot classification problem using large vision-language models. Since it is hard to optimize large vision-language models with a few training examples, we propose exploring category names to initialize the classification head, which significantly improves performance. In addition, we have also investigated the conditions under which the category names help. We demonstrate that borrowing other non-perfect category names or even names from a foreign language can also help the few-shot classification of vision-language models, and is better than randomly initializing the classification head. However, the contribution of category names diminishes when the number of training samples becomes large. This paper obtains state-of-the-art few-shot performance on numerous benchmarks, including ImageNet, ImageNet-V2, ImageNet-R, ImageNet-A, ImageNet-Sketch, Cifar100, Oxford Flowers, Stanford Cars, Country-211, Food101, FGVC Aircraft, EuroSAT, and Oxford-IIIT Pets. Our few-shot classification result is even better than many previous works that have employed the whole training set.

## References

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. In *Neural Information Processing Systems (NeurIPS)*, 2022. 1, 3, 7

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In *European Conference on Computer Vision (ECCV)*, 2014. 6, 8

Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, and Jingdong Wang. Context autoencoder for self-supervised representation learning. *ArXiv*, abs/2202.03026, 2022. 8

Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 9620–9629, 2021. 8

Ekin Dogus Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. In *IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)*, pp. 3008–3017, 2020. 6

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, K. Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255, 2009. 2, 6, 13

Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 2625–2634, 2015. 3

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations (ICLR)*, 2021. 1, 4, 8

Li Fei-Fei, Robert Fergus, and Pietro Perona. One-shot learning of object categories. *IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)*, 28(4):594–611, 2006. 3
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. *ArXiv*, abs/2110.04544, 2021. 3, 6, 7

Tanmay Gupta, Kevin J. Shih, Saurabh Singh, and Derek Hoiem. Aligned image-word representations improve inductive transfer across vision-language tasks. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 4223–4232, 2017. 4

Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 3018–3027, 2017. 3

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15979–15988, 2022. 7, 8

Patrick Helber, Benjamin Bischke, Andreas R. Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing*, 12:2217–2226, 2019. 6, 8

Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Lixuan Zhu, Samyak Parajuli, Mike Guo, Dawn Xiaodong Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 8320–8329, 2021a. 6, 7

Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Xiaodong Song. Natural adversarial examples. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15257–15266, 2021b. 6, 7

Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? *ArXiv*, abs/1608.08614, 2016. 1, 3

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In *International Conference on Machine Learning (ICML)*, pp. 4904–4916, 2021. 1, 3, 6

Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. Fisher vectors derived from hybrid gaussian-laplacian mixture models for image annotation. *ArXiv*, abs/1411.7399, 2014. 4

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *IEEE International Conference on Computer Vision Workshops (ICCVW)*, pp. 554–561, 2013. 2, 6, 8

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 2, 6, 8

Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In *Conference on Empirical Methods in Natural Language Processing (EMNLP)*, 2018. 10

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In *International Conference on Machine Learning (ICML)*, 2019. 5

Chengkun Li, Haotian Liu, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Yong Jae Lee, Houdong Hu, Zicheng Liu, and Jianfeng Gao. Elevater: A benchmark and toolkit for evaluating language-augmented visual models. *ArXiv*, abs/2204.08790, 2022. 3, 6

Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. *ArXiv*, abs/1306.5151, 2013. 6, 8
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *Indian Conference on Computer Vision, Graphics & Image Processing*, pp. 722–729, 2008. 2, 6, 8

Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 3498–3505, 2012. 6, 8

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning (ICML)*, 2021. 1, 3, 5, 6, 7, 8, 9

Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? In *International Conference on Machine Learning (ICML)*, 2019. 6, 7

Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In *International Conference on Machine Learning (ICML)*, pp. 4596–4604. PMLR, 2018. 6

Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, et al. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. *ArXiv*, abs/1902.08295, 2019. 6

Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15638–15650, 2022. 3, 9

Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In *International Conference on Machine Learning (ICML)*, 2021. 8

Vishaal Udandarao, Ankush Gupta, and Samuel Albanie. Sus-x: Training-free name-only transfer of vision-language models. *ArXiv*, abs/2211.16198, 2022. 4

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 3156–3164, 2015. 3

Haohan Wang, Songwei Ge, Eric P. Xing, and Zachary Chase Lipton. Learning robust global representations by penalizing local predictive power. In *Neural Information Processing Systems (NeurIPS)*, 2019. 6, 7

Liwei Wang, Yin Li, Jing Huang, and Svetlana Lazebnik. Learning two-branch neural networks for image-text matching tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 41:394–407, 2017. 4

Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. In *International Conference on Learning Representations (ICLR)*, 2022. 3

Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust finetuning of zero-shot models. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 7959–7971, 2022. 7

Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. *ArXiv*, abs/2205.01917, 2022. 1, 3, 4, 5, 6, 7, 8

Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel C. F.
Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. *ArXiv*, abs/2111.11432, 2021. 1, 3 Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1204–1213, 2022a. 6 Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In *IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), pp. 18123–18133, 2022b. 1, 3 Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. Tip-adapter: Training-free clip-adapter for better vision-language modeling. In *European Conference on* Computer Vision (ECCV), 2022. 4, 6, 7 Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. *International Journal on Computer Vision (IJCV)*, 130:2337–2348, 2022a. 3, 5, 7, 9 Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for visionlanguage models. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2022b. 3 Xiangyang Zhu, Renrui Zhang, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, and Peng Gao. Not all features matter: Enhancing few-shot clip with adaptive prior refinement. *ArXiv*, abs/2304.01195, 2023. 4
Review 1:
Summary:
The following work evaluates the effectiveness of using pretrained image and text embeddings as initialization weights for training few- or zero-shot classifiers. The authors evaluate the effectiveness of the initialization setup with respect to recent advances in large-scale pretrained LLMs. The approach is compared against standard initialization schemes that do not leverage linguistic structure. Further ablations evaluate performance under the multilingual scenario as well as diminishing returns when additional data is available.
Strengths and Weaknesses:
Strengths:
+ detailed hyperparameter sweep as well as identifying the point of diminishing returns
+ principled approach to leverage pretrained multimodal embedding models
+ perhaps a useful revisiting of a fundamental technique in vision-language literature in the context of more recent advances
+ highlights a pretty notable issue in the initialization approach of the prior work they build upon
Weaknesses:
- As previously mentioned, embeddings-as-classifiers [3] has been a standard approach in vision-language literature since (and possibly before) the introduction of word2vec. The authors should include relevant literature from phrase-grounding [1] works. There also exist prior works with the same or similar category name initialization setup, such as [2].
- While the multilingual experiments are interesting, I think it would be pretty straightforward to run a translator to map everything to English, effectively eliminating the performance gap?
- Another potential issue that applies to both this work and its predecessor is that of dataset leakage. I did not find any discussion regarding potential overlap of pretraining data and test data in either this manuscript or that of CoCa. While the approach itself is principled, any leakage would invalidate the reported numbers.
[1] Wang et al. Learning Two-Branch Neural Networks for Image-Text Matching Tasks. PAMI 2018
[2] T Gupta, K Shih, S Singh, D Hoiem. Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks. ICCV 2017
[3] Klein, B., Lev, G., Sadeh, G., and Wolf, L. (2014). Fisher vectors derived from hybrid gaussian-laplacian mixture models for image annotation
Requested Changes:
- Include discussion of prior embeddings-as-classifiers approaches from older vision-language literature.
- Address concerns regarding potential overlap between pretraining data and test data
Broader Impact Concerns:
The authors did not include a broader impact statement
==================================================
Review 2:
Summary:
The paper addresses the few-shot classification problem using pretrained large Vision Language Models (VLMs). In particular, the authors build their method on top of the Contrastive Captioning (CoCa) [Yu et al., 2022] model, a state-of-the-art VLM. CoCa relies on a contrastive objective from vision and unimodal text encoders as well as on a captioning objective from the same vision encoder but a multimodal text encoder. After pretraining on web-level noisy image-text pairs, CoCa performs downstream visual (image, video recognition) and multimodal (retrieval, captioning, etc.) tasks by finetuning the encoders or training small attention and linear classification layers on top of the frozen encoders. The current work proposes a modification in dealing with the downstream tasks by coming up with a clever initialization mechanism (known as Category Name Initialization or CNI) for the linear classifier.
The linear classifier may or may not be trained depending on whether the downstream task is performed zero-shot or not. Given a $D$-dimensional output from the encoders and $C$ classes, the linear classifier needs to have $C\times D$ learnable weights. For each of the $C$ classes, the authors propose to obtain $N$ embeddings ($D$-dimensional) by passing each category name along with $N$ different but predefined prompts through the unimodal text encoder. For $C$ category or class names, these give a $C\times D$ matrix which, when properly normalized, can act as the initialization weights for the linear classifier. The authors performed evaluations on image classification on different datasets in both zero- and few-shot scenarios and have shown the superiority of their approach. The authors have also performed good ablation experiments. While the experiments are well thought out, a few missing experiments are of concern. As the work is based on top of CoCa, the main competitor seems to be CoOp [Zhou et al., 2022], another school of parameter-efficient tuning of VLMs that uses learnable prompts. So, extensive comparison with this is required. I'm detailing it below (in 'Requested Changes').
Strengths and Weaknesses:
Strengths:
- The work provides a new perspective of using information contained in class names for zero/few shot adaptation of large pretrained VLMs towards downstream tasks.
- The number of experiments is large, along with well thought out and analyzed ablations.
Weaknesses:
- One of the major weaknesses actually is related to the second strength. Though the number of experiments is large, it might not be comprehensive. Many datasets used in CoOp are not evaluated. A related task on which CoCa with random initialization has shown very good performance is video classification. This experiment is completely missing from the proposed approach. Having this would help understand the broad applicability of the approach. Some important CoCa baselines on datasets where CoCa has not shown results but CNI does are also missing.
- The writing is generally good, but I must say things are sloppy at certain places (including duplication of lines).
- Details of some tables/figures are required.
I'm detailing more on the weaknesses in the 'Requested Changes' point below.
Requested Changes:
- Table 2: This is where experiments with datasets not specific to CoCa but rather mostly specific to CoOp are performed. A fairer comparison would be to compare CoCa-base and CoCa-2B with random initialization of the task-specific attentional pooling on top of the CoCa encoder. In CoCa, this is termed the Frozen-feature evaluation. This would show the difference it makes to have CNI vis-a-vis random initialization. Though this is done on ImageNet and its variants (Table 3), diverse datasets like the ones examined in CoOp would show the robustness of the approach. CoOp does a fantastic job by showing its performance on 11 different and diverse datasets. While the proposed work has picked a few of them, the omission of the rest raises the question of what the performance of CNI would be on these missing datasets.
- Category name initialization vs. random initialization (Page 8) and Figure 3: Very little detail is provided here and in the associated Figure 3. Which dataset is this? Is it for a single dataset only, or an average over the 11 datasets on which CoOp does its evaluation? This would be a meaningful analysis only if it is done on these 11 datasets and on ImageNet and its variants.
- CoCa has shown very good performance on video recognition on the Kinetics and MiT datasets as well as multimodal classification tasks like VQA or Visual Entailment. Experimenting on these tasks and comparing with related works would increase the applicability of the approach.
- As the main contribution of the paper is in initializing a linear classifier, Xavier initialization [a] can be a good comparison to perform.
- Page 5 mentions normalizing to obtain the average embeddings, but what sort of normalization is performed is not detailed.
- Writing issues:
- Introduction, first paragraph, last line: 'provides novel perspectives of thinking of few-example learning' -> 'provides novel perspectives on few-example learning'
- Page 3: Not sure what is meant by 'could improve the effectiveness robustness while undermining the relative robustness'.
- Page 4: '… are training required' – not proper usage.
- Section 3.2: 'the number parameters for' – 'of' is missing.
- Page 12: 'Figure 5 presents the top-1 test accuracy on ImageNet using different initial learning rates' – This line is duplicated.
- Page 3 (last paragraph, first line): CoOp's citation is wrong.
[a] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010.
Broader Impact Concerns:
The work does not seem to be concerning on ethical grounds. However, a small statement on this would always be helpful.
==================================================
Review 3:
Summary:
This article addresses the problem of few-shot image classification using vision-language models. To do so, first, the model is pretrained in a similar way to CLIP. Then, the image classification head is finetuned to predict the classes. Then, it is finetuned again using the weights produced by some promptings of the language model with sentences that contain the actual names of the image classes.
The article clearly states its contributions:
- In the few-shot scenario, the proposed model that initializes the linear projection weights with the embeddings of prompts containing the class names improves the SOTA.
- Their model converges quicker than random initialization
- It also works with embeddings from languages other than English
- Other minor contributions
Strengths and Weaknesses:
The article presents a significant contribution to the field of few-shot image classification. The paper proposes a method that leverages the category names to initialize the classification head, which leads to significant improvements in performance. The strengths of the article are numerous. Firstly, the paper is highly relevant. Few-shot image classification is a challenging task with practical applications in various domains, such as medicine and robotics. The proposed method can significantly improve the performance of vision-language models on this task, which has important implications for real-world scenarios. Secondly, the article is built on top of state-of-the-art methods. The authors use a large-scale vision-language foundation model and extend it with a novel category name initialization method for the classification head. This approach builds on existing techniques and improves them to achieve better performance. Thirdly, the paper is evaluated on over 13 different few-shot image classification benchmarks.
This extensive evaluation demonstrates the effectiveness and generalizability of the proposed method across various datasets and tasks. Fourthly, the article is well-written, with a clear and concise presentation of the proposed method, experiments, and results. The authors provide detailed explanations and visualizations, making it easy to understand the proposed approach and its performance. Fifthly, the results of the proposed method are impressive. The paper shows that the category name initialization method significantly improves the performance of the vision-language model on various few-shot image classification benchmarks. In particular, the proposed approach achieves state-of-the-art performance on ImageNet and Stanford Cars datasets. Finally, the paper includes meaningful ablation experiments that help to understand the contribution of each component of the proposed method. The authors provide a thorough analysis of the results, demonstrating the effectiveness of the proposed method and providing insights into its limitations. One weakness of the article is that the proposed method relies on category names to initialize the classification head. While this approach can significantly improve few-shot classification performance, it may not be applicable in all scenarios. For example, in domains where category names are not available or are not reliable, the proposed method may not be effective. Another potential weakness is that the paper does not provide a detailed analysis of the computational and memory requirements of the proposed method. While the authors mention that their method is efficient, it would be useful to have a more thorough discussion of the computational costs associated with the proposed approach, particularly given the large size of vision-language models. Additionally, the paper does not explore the impact of hyperparameter choices on the performance of the proposed method. While the authors provide some information on the hyperparameters used in their experiments, a more detailed investigation of the sensitivity of the method to different hyperparameters would be valuable. Requested Changes: Explain more about hyperparameters and computational cost. I miss comparison with other SOTA methods like MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting Broader Impact Concerns: No ethical concerns. ================================================== Metareview: Recommendation: Reject Comment: Thanks for your submission. This paper was borderline, with two reviewers leaning accept and one leaning reject. From my read of the reviews, it seems that there are still some unaddressed concerns, particularly from reviewer 6zmr. In particular, after the rebuttal period this reviewer feels that there are two important experiments missing. I'm going to paste in a comment that the reviewer made to me after the rebuttal period which was not visible to you, as it neatly summarizes the issues that this reviewer has with the paper as is: "As I said in my review CoOp [Zhou et al., 2022] being in another school of parameter efficient tuning that uses learnable prompts, is one of the crucial competitors. In five important datasets (Caltech101, Flowers102, SUN397, DTD, UCF101) where CoOp showed their performance, the proposed approach did not show any comparison as I asked in the review. However, they have shown comparison on 6 selective datasets out of the total 11. 
The missing experiments are important to get a wholesome view of the usefulness of the proposed approach as one of them is unique in the sense that it is a video dataset while another is a texture dataset. The second missing experiment is related to the main approach (CoCa) on which it is built on. CoCa has shown very good performance on video recognition and other multimodal classification as well. As I told in my review, these are also classification tasks and falls in the purview of comparison with the proposed classification approach. With the computation effort mentioned as a response to one of the fellow reviewers, I think, these experiments could have been performed. In summary, though the work has shown many experiments, with the missing experiments it seems to me as lacking the completeness to be accepted in TMLR in its current form." These seem to be the main concerns, as the other reviewers have indicated that they're happy with the rebuttal provided by the authors. I would like to see these concerns addressed in a major revision to the paper, with another round of reviewing, before having this paper accepted to TMLR. I think addressing these concerns will make the paper considerably stronger. ==================================================
# Not All Tasks Are Equal - Task Attended Meta-Learning For Few-Shot Learning

Anonymous authors
Paper under double-blind review

## Abstract

Meta-learning (ML) has emerged as a promising direction in learning models under constrained resource settings like few-shot learning. The popular approaches for ML either learn a generalizable initial model or a generic parametric optimizer through batch episodic training. In this work, we study the importance of tasks in a batch for ML. We hypothesize that the common assumption in batch episodic training, where each task in a batch has an equal contribution to learning an optimal meta-model, need not be true. We propose to weight the tasks in a batch according to their "importance" in improving the meta-model's learning. To this end, we introduce a training curriculum called task attended meta-training to learn a meta-model from weighted tasks in a batch. The task attention module is a standalone unit and can be integrated with any batch episodic training regimen. Comparison of task-attended ML models with their non-task-attended counterparts on complex datasets, performance improvement of the proposed curriculum over state-of-the-art task scheduling algorithms on noisy datasets, and a cross-domain few-shot learning setup validate its effectiveness.

## 1 Introduction

The ability to infer knowledge and discover complex representations from data has made deep learning models widely popular in the machine learning community. However, these models are data-hungry, often requiring large volumes of labeled data for training. Collection and annotation of such large amounts of training data may not be feasible for many real-life applications, especially in domains that are inherently data constrained, like medical and satellite image classification, drug toxicity estimation, etc. Meta-learning (ML) has emerged as a promising direction for learning models in such settings, where only a limited amount (few shots) of labeled training data is available. A typical ML algorithm employs an episodic training regimen that differs from the training procedure of conventional learning tasks. This episodic meta-training regimen is backed by the assumption that a machine learning model quickly generalizes to novel unseen data with minimal fine-tuning when trained and tested under similar circumstances (Vinyals et al., 2016). To facilitate such a generalization capacity, a meta-training phase is undertaken, where the model is trained to optimize its performance on several homogeneous tasks/episodes randomly sampled from a dataset. Each episode or task is a learning problem in itself. In the few-shot setting, each task is a classification problem, a collection of K support (train) and Q query (test) samples corresponding to each of the N classes. Task-specific knowledge is learned using the support data, and meta-knowledge across the tasks is learned using query samples, which essentially encodes "how to learn a new task effectively." The learned meta-knowledge is generic and agnostic to tasks from the same distribution. It is typically characterized in two different forms: either as an optimal initialization for the machine learning model or as a learned parametric optimizer. Under the optimal initialization view, the learned meta-knowledge represents an optimal prior over the model parameters that is equidistant, but close, to the optimal parameters for all individual tasks.
This enables the model to rapidly adapt to unseen tasks from the same distribution (Finn et al., 2017; Li et al., 2017; Jamal & Qi, 2019). Under the parametric optimizer view, meta-knowledge pertaining to the traversal of the loss surface of tasks is learned by the meta-optimizer. Through learning task-specific and task-agnostic characteristics of the loss surface, a parametric optimizer can thus effectively guide the base model to traverse the loss surface and achieve superior performance on unseen tasks from the same distribution (Ravi & Larochelle, 2017).

Initialization-based ML approaches accumulate the meta-knowledge by simultaneously optimizing over a batch of tasks. On the other hand, a parametric optimizer sequentially accumulates meta-knowledge across individual tasks. The sequential accumulation process leads to a long oscillatory optimization trajectory and a bias towards the last task, limiting the parametric optimizer's task-agnostic potential. However, recently meta-knowledge has been accumulated in a batch mode even for the parametric optimizer (Aimen et al., 2021). Further, under such batch episodic training (for both the initialization and optimization views), the common assumption in ML that the randomly sampled episodes of a batch contribute equally to improving the learned meta-knowledge need not hold good. Due to the latent properties of the sampled tasks in a batch and the model configuration, some tasks may be better aligned with the optimal meta-knowledge than others. We hypothesize that proportioning the contribution of a task as per its alignment towards the optimal meta-knowledge can improve the meta-model's learning. This is analogous to classical machine learning techniques like sample re-weighting, which, however, operate at sample granularity. In re-weighting, samples leading to false positives are prioritized and therefore replayed. Hence, the latent properties due to which a sample is prioritized are explicitly defined. For complex task distributions, explicitly handcrafting the notion of "importance" of a task would be hard. To this end, we propose a task attended meta-training curriculum that employs an attention module that learns to assign weights to the tasks of a batch with experience. The attention module is parametrized as a neural network that takes meta-information, in terms of the model's performance on the tasks in a batch, as input and learns to associate weights to each of the tasks according to their contribution in improving the meta-model. Overall, we make the following contributions:

- We propose a task attended meta-training strategy wherein different tasks of a batch are weighted according to their "importance" defined by the attention module. This attention module is a standalone unit that can be integrated into any batch episodic training regimen.
- We extend the empirical investigation of the batch-mode parametric optimizer (MetaLSTM++) to complex datasets like miniImagenet, FC100, and tieredImagenet and validate its efficiency over its sequential counterpart (MetaLSTM).
- We conduct extensive experiments on the miniImagenet, FC100, and tieredImagenet datasets and compare ML algorithms like MAML, MetaSGD, ANIL, and MetaLSTM++ with their task-attended counterparts to validate the effectiveness of the task attention module and its coupling with any batch episodic training regimen.
- We compare the proposed training curriculum with task-disagreement resolving approaches like TAML (Jamal & Qi, 2019) and conflict-averse gradient descent (Liu et al., 2021a) and validate the goodness of the proposed hypothesis. We extend these task-disagreement based approaches to the meta-learning regimen for a fair comparison.
- We further compare the task-attended curriculum with state-of-the-art task scheduling approaches and also show the merit of the proposed approach on the miniImagenet-noisy dataset and a cross-domain few-shot learning (CDFSL) setup.
- We perform exhaustive empirical analysis and visual inspections to decipher the working of the task attention module.

## 2 Related Work

Transfer learning and meta-learning are two approaches that are commonly used to address few-shot learning problems. Transfer learning involves learning generalizable representations from larger datasets and models, and then using simple algorithms like fine-tuning to adapt to the specific task at hand. On the other hand, meta-learning approaches aim to find an algorithmic solution to few-shot learning. Due to their simplicity, transfer learning approaches scale well with larger image sizes and deeper models. In contrast, meta-learning approaches are memory intensive, which has become a barrier to scaling them to larger image sizes and deeper backbones (Dumoulin et al., 2021). Addressing the computational issues of meta-learning approaches and scaling them to larger support sets, deeper backbones, and larger image sizes is a concurrent area of research (Bronskill et al., 2021; Shin et al., 2021). We leave the integration of our approach with these techniques to enhance scalability to future work. Equipped with deeper backbones and larger image sizes, transfer learning approaches achieve high performance, particularly in cross-domain settings (Bronskill et al., 2021; Guo et al., 2020; Dhillon et al., 2019; Dumoulin et al., 2021). However, a line of literature (Bronskill et al., 2021) suggests meta-learning approaches may be better suited for constrained test settings. This is because transfer learning relies on large pre-trained feature extractors and may require hundreds of optimization steps and careful hyperparameter tuning to perform well (Bronskill et al., 2021; Kolesnikov et al., 2020). For example, the Meta-dataset Transfer approach (Triantafillou et al., 2019) finetunes all parameters of a ResNet18 feature backbone with a cosine classifier head for 200 optimization steps. Similarly, BiT (Kolesnikov et al., 2020) finetunes the feature backbone with a linear head, sometimes for up to 20,000 optimization steps, to acquire state-of-the-art performance on the VTAB dataset. Further, transfer learning approaches require significant hyper-parameter tuning on validation sets of each downstream task, which also adds to the cost. On the other hand, meta-learning approaches can generalize to unseen meta-test tasks with just a few adaptation steps and often with little or no hyperparameter tuning (Bronskill et al., 2021). While transfer learning may be a better choice in some contexts, meta-learning can be a practical option in cases where computational resources are limited or when the task needs to be adapted on the fly. Overall, both approaches have their own strengths and can be useful in different settings.
Our work focuses on a resource-constrained setting, where the number of support instances and the computing available for meta-test adaptation are limited. As a result, our study is confined to meta-learning setups.

ML literature is profoundly diverse and may broadly be classified into *initialization* (Finn et al., 2017; Li et al., 2017; Jamal & Qi, 2019; Raghu et al., 2020; Rusu et al., 2019; Sun et al., 2019) and *optimization* approaches (Ravi & Larochelle, 2017), depending on the meta-knowledge. However, these approaches assume a uniform contribution of tasks in learning a meta-model. In supervised learning, assigning non-uniform priorities to samples is not new (Kahn & Marshall, 1953; Shrivastava et al., 2016). Self-paced learning (Kumar et al., 2010) and hard example mining (Shrivastava et al., 2016) have popularly been used to reweight the samples, and various attributes like losses, gradients, and uncertainty have been used to assign priorities to samples (Lin et al., 2017; Zhao & Zhang, 2015; Chang et al., 2017). Zhao & Zhang (2015) introduce importance sampling to reduce variance and improve the convergence rate of stochastic optimization algorithms over uniform sampling. They theoretically prove that the reduction in variance is possible if the sampling distribution depends on the norm of the gradients of the loss function. Chang et al. (2017) conclude that mini-batch SGD for classification is improved by emphasizing the uncertain examples. Lin et al. (2017) propose a reshaped cross-entropy loss (focal loss) that down-weights the loss of confidently classified samples. Nevertheless, assigning non-uniform priorities to tasks in meta-learning is under-explored and has recently drawn attention (Kaddour et al., 2020; Gutierrez & Leonetti, 2020; Liu et al., 2020; Yao et al., 2021; Arnold et al., 2021). Gutierrez & Leonetti (2020) propose the Information-Theoretic Task Selection (ITTS) algorithm to filter training tasks that are distinct from each other and close to the tasks of the target distribution. This algorithm results in a smaller pool of training tasks. A model trained on the smaller subset learns better than one trained on the original set. On the other hand, Kaddour et al. (2020) propose probabilistic active meta-learning (PAML) that learns probabilistic task embeddings. Scores are assigned to these embeddings to select the next task presented to the model. These algorithms are, however, specific to meta-reinforcement learning (meta-RL). On the contrary, our focus is on the few-shot classification problem. Liu et al. (2020) propose a greedy class-pair potential-based adaptive task sampling strategy wherein task selection depends on the difficulty of all class-pairs in a task. This sampling technique is static and operates at a class granularity. On the other hand, our approach is dynamic and operates at a task granularity. Assigning non-uniform weights to samples prevents overfitting on corrupt data points (Ren et al., 2018b; Jiang et al., 2018). Ren et al. (2018b) used gradient directions to re-weight the data points, and Jiang et al. (2018) learned a curriculum on examples using a mentor network. However, these approaches assume the availability of abundant labeled data. Yao et al. (2021) extend Jiang et al. (2018) to the few-shot learning setup.
They propose an adaptive task scheduler (ATS) to predict the sampling probability of tasks from a candidate pool containing a subset of tasks sampled from the original (noisy or imbalanced) task distribution, similar to Jiang et al. (2018). Thus, the sampling probabilities of the tasks are (approximately) global. Another global task sampling approach is Uniform Sampling (Arnold et al., 2021), built on the premise that task difficulty (defined as the negative log-likelihood of the model on the task) approximately follows a normal distribution and is transferred across model parameters during training. They also find that sampling uniformly over episode difficulty outperforms other sampling schemes like curriculum, easy, and hard mining. Our work is different from these approaches (ATS and Uniform Sampling) as we do not propose a global task sampling strategy but a dynamic task-batch re-weighting mechanism for the current meta-model update. We hypothesize that a task's importance depends on the data contained in it and the current meta-model's configuration. For example, in the initial stage of the meta-model's training, coarse-grained tasks (tasks composed of semantically distinct classes) may have higher importance than fine-grained tasks (tasks composed of comparable classes), while this behavior may reverse as training progresses. Further, our approach differs from Uniform Sampling in the definition of task difficulty, i.e., we neither explicitly handcraft the notion of task difficulty nor assume a normal distribution over it. Instead, we let an attention network learn suitable weights for the tasks in a batch. Although ATS also dynamically learns the task sampling priority, it maintains a candidate pool to satisfy the global task priority criteria, causing overhead. Further, it performs an additional warm start to the scheduler, utilizes more task batches in a run, and uses REINFORCE for reward estimation; therefore, it is more expensive than the proposed approach. Contrary to our idea is TAML (Jamal & Qi, 2019) - a meta-training curriculum that enforces equity across the tasks in a batch. We show that weighting the tasks according to their "importance", and hence utilizing the diversity present in a batch given the meta-model's current configuration, offers better performance than enforcing equity in a batch of tasks.

## 3 Preliminary

In a typical ML setting, the principal dataset $\mathcal{D}$ is divided into disjoint meta-sets $\mathcal{M}$ (meta-train set), $\mathcal{M}_v$ (meta-validation set), and $\mathcal{M}_t$ (meta-test set) for training the model, tuning its hyperparameters, and evaluating its performance, respectively. Every meta-set is a collection of tasks $\mathcal{T}$ drawn from the joint task distribution $P(\mathcal{T})$, where each task $\mathcal{T}_i$ consists of a support set $D_i = \{(x_k^c, y_k^c)_{k=1}^{K}\}_{c=1}^{N}$ and a query set $D_i^* = \{(x_q^{*c}, y_q^{*c})_{q=1}^{Q}\}_{c=1}^{N}$. Here $(x, y)$ represents a (sample, label) pair, $N$ is the number of classes, and $K$ and $Q$ are the number of samples belonging to each class in the support and query set, respectively. According to this support-query characterization, $\mathcal{M}$, $\mathcal{M}_v$, and $\mathcal{M}_t$ can be represented as $\{(D_i, D_i^*)\}_{i=1}^{M}$, $\{(D_i, D_i^*)\}_{i=1}^{R}$, and $\{(D_i, D_i^*)\}_{i=1}^{S}$, where $M$, $R$, and $S$ are the total numbers of tasks in $\mathcal{M}$, $\mathcal{M}_v$, and $\mathcal{M}_t$, respectively. During meta-training, the meta-model $\theta$ is adapted on $D_i$ of all tasks in a batch $\{\mathcal{T}_i\}_{i=1}^{B}$ of size $B$, $T$ times, to obtain $\phi_i^T$. The adaptation occurs through gradient descent or a parametric update on the train loss $L$ using learning rate $\alpha$. The adapted model $\phi_i^T$ is then evaluated on $D_i^*$ to obtain the test loss $L^*$, which, along with learning rate $\beta$, is used to update $\theta$. The output of this episodic training is either an optimal prior or a parametric optimizer, both aiming to facilitate the rapid adaptation of the model on unseen tasks from $\mathcal{M}_t$. A detailed note on initialization and optimization approaches is deferred to the supplementary material. A minimal sketch of this episodic task structure is given below.
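To make the support-query structure concrete, the following is a minimal sketch of N-way K-shot task sampling under the notation above; `images_by_class` and the sampling routine are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of N-way K-shot episodic task sampling as set up in the
# Preliminary section: each task pairs a support set (N classes x K shots)
# with a query set (N classes x Q shots). `images_by_class` is a hypothetical
# mapping from class label to a list of examples.
import random

def sample_task(images_by_class, N=5, K=1, Q=15):
    classes = random.sample(list(images_by_class), N)
    support, query = [], []
    for label, c in enumerate(classes):
        samples = random.sample(images_by_class[c], K + Q)
        support += [(x, label) for x in samples[:K]]   # D_i
        query += [(x, label) for x in samples[K:]]     # D_i*
    return support, query

def sample_task_batch(images_by_class, B=4, **kw):
    # A task-batch {T_i}_{i=1}^B of size B.
    return [sample_task(images_by_class, **kw) for _ in range(B)]
```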
## 4 Task Attention In Meta-Learning

A common assumption under the batch-wise episodic training regimen adopted by ML is that each task in a batch has an equal contribution in improving the learned meta-knowledge. However, this need not always be true. It is likely that, given the current configuration of the meta-model, some tasks may be more important for the meta-model's learning. A contributing factor to this difference is that tasks sampled from complex data distributions can be profoundly diverse. The diversity and latent properties of the tasks, coupled with the model configuration, may induce some tasks to be better aligned with the optimal meta-knowledge than others. The challenging aspect in the meta-learning setting is to define the "importance" and associate weights to the tasks of a batch proportional to their contribution to improving the meta-knowledge. As human beings, we *learn* to associate importance to events subject to meta-information about the events and prior experience. This motivates us to define a learnable module that can map the meta-information of tasks to their importance weights.

![4_image_0.png](4_image_0.png)

Figure 1: Computational graph of the forward pass of the meta-model using the task attended meta-training curriculum. The output of this procedure is a meta-model $\theta^n$. Gradients are propagated through solid lines and restricted through dashed lines.

## 4.1 Characteristics Of Meta-Information

Given a task-batch $\{\mathcal{T}_i\}_{i=1}^{B}$, the task attention module takes as input meta-information about each task $\mathcal{T}_i$ in the batch, defined as the four-tuple below:

$$\mathcal{I}=\left\{\left(\,||\nabla_{\phi_{i}^{T}}L^{*}(\phi_{i}^{T})||,\;L^{*}(\phi_{i}^{T}),\;A^{*}(\phi_{i}^{T}),\;\frac{L^{*}(\phi_{i}^{T})}{L^{*}(\phi_{i}^{0})}\,\right)\right\}_{i=1}^{B}\tag{1}$$

where, corresponding to each task $i$ in the batch, $||\nabla_{\phi_i^T}L^*(\phi_i^T)||$ denotes the gradient norm, $L^*(\phi_i^T)$ and $A^*(\phi_i^T)$ are the test loss and accuracy of the adapted model respectively, and $\frac{L^*(\phi_i^T)}{L^*(\phi_i^0)}$ is the ratio of the model's test loss post and prior to adaptation.

## 4.1.1 Gradient Norm

Let $P = \{\phi_i^T\}_{i=1}^{B}$ be the parameters of the models obtained after adapting the initial model (for $T$ iterations) on the support data $\{D_i\}_{i=1}^{B}$ of tasks $\{\mathcal{T}_i\}_{i=1}^{B}$. Also, let $G = \{\nabla_{\phi_i^T}L^*(\phi_i^T)\}_{i=1}^{B}$ be the gradients of the adapted model parameters w.r.t. the query losses $\{L^*(\phi_i^T)\}_{i=1}^{B}$. The gradient norm $\{||\nabla_{\phi_i^T}L^*(\phi_i^T)||\}_{i=1}^{B}$ is the L2 norm of the gradients and quantifies the magnitude of the consolidated displacement of the adapted model parameters during a gradient descent update on the query data. A larger gradient norm on the query dataset could indicate that the model has either not learned the support set or has overfitted it. Hence the model is less generalizable on the query set compared to models with a low gradient norm. The gradient norm therefore carries information about the convergence and generalizability of the adapted models, which has been theoretically studied in (Li et al., 2019). A sketch of how this four-tuple can be computed is given below.
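The following is a minimal PyTorch-style sketch of computing the four-tuple of equation (1) for one adapted model. The `meta_model` and `adapted` handles are assumed to be modules returning logits; this is an illustration rather than the authors' code.

```python
# A sketch of computing the meta-information four-tuple of equation (1) for
# one task: (gradient norm, test loss, test accuracy, loss-ratio).
import torch
import torch.nn.functional as F

def meta_information(meta_model, adapted, query_x, query_y):
    # Query loss of the meta-model before adaptation, L*(phi_i^0).
    with torch.no_grad():
        loss_pre = F.cross_entropy(meta_model(query_x), query_y)

    # Query loss and accuracy of the adapted model, L*(phi_i^T), A*(phi_i^T).
    logits = adapted(query_x)
    loss_post = F.cross_entropy(logits, query_y)
    acc = (logits.argmax(dim=-1) == query_y).float().mean()

    # Gradient norm ||grad_{phi_i^T} L*(phi_i^T)|| over adapted parameters.
    grads = torch.autograd.grad(loss_post, list(adapted.parameters()),
                                retain_graph=True)
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))

    loss_ratio = loss_post / (loss_pre + 1e-12)
    return grad_norm, loss_post, acc, loss_ratio
```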
## 4.1.2 Test Loss

$\{L^*(\phi_i^T)\}_{i=1}^{B}$ represents the empirical error (cross-entropy loss) of the adapted base models on unseen query instances and hence characterizes their generalizability. Unlike the gradient norm, which characterizes generalizability in parameter space, the query loss quantifies generalizability in the output space as the divergence between the real and predicted probability distributions. As $\{L^*(\phi_i^T)\}_{i=1}^{B}$ is a key component in the meta-update equation, it is an important factor influencing the meta-model's learning. Further, test errors of classes have been widely used to determine their "easiness or hardness" (Bengio et al., 2009; Liu et al., 2021b; Arnold et al., 2021). Thus $\{L^*(\phi_i^T)\}_{i=1}^{B}$ acquaints the attention module with the generalizability aspect of the task models and their influence in updating the meta-model.

## 4.1.3 Test Accuracy

$\{A^*(\phi_i^T)\}_{i=1}^{B}$ corresponds to the accuracies of $\{\phi_i^T\}_{i=1}^{B}$ on $\{D_i^*\}_{i=1}^{B}$, scaled in the range [0, 1]. $A^*(\phi_i^T)$ evaluates the thresholded predictions (predicted labels), unlike $L^*(\phi_i^T)$, which evaluates the confidence of the model's predictions on the true class labels. Two task models may predict the same class labels but differ in the confidence of the predictions. In such scenarios, neither loss nor accuracy is individually sufficient to comprehend this relationship among the tasks, so the combination of these two entities is more reflective of the nature of the learned task models.

## 4.1.4 Loss-ratio

Let $L^*(\phi_i^0)$ be the loss of $\theta$ on $D_i^*$, and $L^*(\phi_i^T)$ be the loss of the adapted model $\phi_i^T$ on $D_i^*$. The loss-ratio $\frac{L^*(\phi_i^T)}{L^*(\phi_i^0)}$ is representative of the relative progress of the meta-model on each task.
Higher values (> 1) of the loss-ratio suggest that adapting $\theta$ to $D_i$ has an adverse effect on generalizing it to $D_i^*$ (negative impact), while lower values (< 1) indicate the benefit of adapting $\theta$ on $D_i$ (positive impact). A loss-ratio of exactly one signifies that adaptation brings no additional benefit (neutral impact). Therefore, the loss-ratio provides information regarding the impact of adaptation on each task for a given meta-model.

## 4.2 Task Attention Module

We learn a task attention module parameterized by $\delta$ which attends to the tasks that contribute more to the model's learning, i.e., the objective of the task attention module is to learn the relative importance of each task in the batch for the meta-model's learning. Thus the output of the module is a $B$-dimensional vector $w = [w_1, \ldots, w_B]$ (with $\sum_{i=1}^{B} w_i = 1$ and $w_i \geq 0$ for all $\mathcal{T}_i$) quantifying the attention score (weight $w_i$) for each task. The attention vector $w$ is multiplied with the corresponding task losses of the adapted models $L^*(\phi_i^T)$ on the held-out datasets $D_i^*$ to update the meta-model $\theta$:

$$\theta^{t+1}\leftarrow\theta^{t}-\beta\nabla_{\theta^{t}}\sum_{i=1}^{B}w_{i}L^{*}(\phi_{i}^{T})\tag{2}$$

After the meta-model is updated using the weighted task losses, we evaluate the goodness of the generated attention weights. We sample a new batch of tasks $\{D_j, D_j^*\}_{j=1}^{B}$ and adapt a base-model $\phi_j$ using the updated meta-model $\theta^{t+1}$ on the train data $\{D_j\}$ of each task. The mean test loss of the adapted models $\{\phi_j^T\}_{j=1}^{B}$ reflects the goodness of the weights assigned by the attention module in the previous iteration. The attention module $\delta$ is thus updated using the gradients flowing back into it w.r.t. this mean test loss. The attention network is trained simultaneously with the meta-model in an end-to-end fashion using the update rule:

$$\delta^{t+1}\leftarrow\delta^{t}-\gamma\nabla_{\delta^{t}}\sum_{j=1}^{B}L^{*}(\phi_{j}^{T})\tag{3}$$

where $\phi_j^T$ is adapted from $\theta^{t+1}$ and $\gamma$ is the learning rate.

## 4.3 Task Attended Meta-Training Algorithm

We demonstrate the meta-training curriculum using the proposed task attention in Figure 1 and formally summarize it in Algorithm 1 below. A detailed explanation is presented in Figure 7 in the appendix.

Algorithm 1: Task Attended Meta-Training
Input: Dataset: $\mathcal{M} = \{D_i, D_i^*\}_{i=1}^{M}$; Models: meta-model $\theta$, base-model $\phi$, attention module $\delta$; Learning rates: $\alpha$, $\beta$, $\gamma$; Parameters: iterations $n_{iter}$, batch size $B$, adaptation steps $T$
Output: Meta-model $\theta$
1  Initialization: $\theta$, $\delta$ ← random initialization
2  for iteration in $n_{iter}$ do
3    $\{\mathcal{T}_i\}_{i=1}^{B} = \{D_i, D_i^*\}_{i=1}^{B}$ ← Sample task-batch($\mathcal{M}$)
4    for all $\mathcal{T}_i$ do
5      $\phi_i^0 \leftarrow \theta$
6      $L^*(\phi_i^0)$, _ ← evaluate($\phi_i^0$, $D_i^*$)  ▷ Compute loss and accuracy of the input model on the given dataset.
7      $\phi_i^T$ = adapt($\phi_i^0$, $D_i$)
8      $L^*(\phi_i^T)$, $A^*(\phi_i^T)$ ← evaluate($\phi_i^T$, $D_i^*$)
9    end
10   $[w_i]_{i=1}^{B}$ ← Att_module$\left(\left\{\frac{L^*(\phi_i^T)}{L^*(\phi_i^0)},\, A^*(\phi_i^T),\, ||\nabla_{\phi_i^T}L^*(\phi_i^T)||,\, L^*(\phi_i^T)\right\}_{i=1}^{B}\right)$
11   $\theta \leftarrow \theta - \beta\nabla_{\theta}\sum_{i=1}^{B} w_i L^*(\phi_i^T)$
12   $\{D_j, D_j^*\}_{j=1}^{B}$ ← Sample task-batch($\mathcal{M}$)
13   for all $\mathcal{T}_j$ do
14     $\phi_j^0 \leftarrow \theta$
15     $\phi_j^T$ = adapt($\phi_j^0$, $D_j$)
16   end
17   $\delta \leftarrow \delta - \gamma\nabla_{\delta}\sum_{j=1}^{B} L^*(\phi_j^T)$
18 end
19 Return $\theta$
20 Function adapt($\phi_i^t$, $D_i$):
21   $\theta \leftarrow \phi_i^t$
22   if $\theta$ is optimal-initialization then
23     for t = 1 to T do
24       $\phi_i^{t+1} \leftarrow \phi_i^t - \alpha\nabla_{\phi_i^t}L(\phi_i^t)$
25     end
26   end
27   else if $\theta$ is parametric-optimizer then
28     for t = 1 to T do
29       $\phi_i^{t+1} \leftarrow \theta\left(L(\phi_i^t), \nabla_{\phi_i^t}L(\phi_i^t)\right)$  ▷ Parameter updates given by the cell state of $\theta$.
30     end
31   end
32   Return $\phi_i^T$

As with the classical meta-training process, we first sample a batch of tasks from the task distribution. For each task $\mathcal{T}_i$, we adapt the base-model $\phi_i$ using the train data $D_i$ for $T$ time-steps (line 7 and lines 20-32 in Algorithm 1). Specifically, for initialization approaches, adaptation is performed by gradient descent on the train loss $L$ (lines 22-26 in Algorithm 1). However, for optimization approaches, the current loss and gradients are input to the meta-model $\theta$, which outputs the updated base-model parameters (lines 27-31 in Algorithm 1). Then we compute the meta-information about the adapted model corresponding to each task. It comprises the loss $L^*(\phi_i^T)$, accuracy $A^*(\phi_i^T)$, loss-ratio $\frac{L^*(\phi_i^T)}{L^*(\phi_i^0)}$, and gradient norm $||\nabla_{\phi_i^T}L^*(\phi_i^T)||$ on the test data $D_i^*$. This meta-information corresponding to each task in a batch is given as input to the task attention module (Figure 1 - Label: 2), which outputs the attention vector (line 10 in Algorithm 1).
## **4.3 Task Attended Meta-Training Algorithm**

We demonstrate the meta-training curriculum using the proposed task attention in Figure 1 and formally summarize it in Algorithm 1; a detailed explanation is presented in Figure 7 in the appendix. As with the classical meta-training process, we first sample a batch of tasks from the task distribution. For each task $T_i$, we adapt the base-model $\phi_i$ using the train data $D_i$ for T time-steps (line 7 and lines 20-32 in Algorithm 1). Specifically, for initialization approaches, adaptation is performed by gradient descent on the train loss L (lines 22-26 in Algorithm 1), whereas for optimization approaches, the current loss and gradients are input to the meta-model θ, which outputs the updated base-model parameters (lines 27-31 in Algorithm 1). We then compute the meta-information about the adapted model corresponding to each task: the loss $L^*(\phi_i^T)$, accuracy $A^*(\phi_i^T)$, loss-ratio $\frac{L^*(\phi_i^T)}{L^*(\phi_i^0)}$, and gradient norm $\|\nabla_{\phi_i^T} L^*(\phi_i^T)\|$ on the test data $D_i^*$. This meta-information for each task in the batch is given as input to the task attention module (Figure 1, Label 2), which outputs the attention vector (line 10 in Algorithm 1). The attention vector and test losses $\{L^*(\phi_i^T)\}_{i=1}^B$ are used to update the meta-model parameters θ according to equation 2 (line 11 in Algorithm 1, Figure 1, Label 4). We then sample a new batch of tasks $\{D_j, D_j^*\}_{j=1}^B$ and adapt the base-models $\{\phi_j^T\}_{j=1}^B$ using the updated meta-model (lines 12-16 in Algorithm 1, Figure 1, Label 5). We compute the mean test loss over the adapted base-models $\{L^*(\phi_j^T)\}_{j=1}^B$, which is then used to update the parameters of the task attention module δ according to equation 3 (line 17 in Algorithm 1, Figure 1, Label 6).

The attention network is designed as a stand-alone module that learns the mapping from the meta-information space to the importance of the tasks in a batch. The meta-model is learned according to equation 2 and aims to minimize the weighted loss. It is important to decouple the learning of the attention network from that of the meta-model: if there were information flow from the task attention module to the meta-model, the latter could reduce its weighted loss by learning an initialization that is suboptimal but for which the task attention network assigns lower weights. This would introduce an undesirable bias into the learning process. To circumvent this bias, we restrict the flow of gradients to the meta-model θ through the task attention module δ by enforcing $\nabla_\theta \big( w_i L^*(\phi_i^T) \big) = w_i \nabla_\theta L^*(\phi_i^T)$, i.e., $\nabla_\theta w_i$ is not computed. Moreover, gradients flowing through the attention network to the meta-model would create additional computational overhead. Specifically, the term $\nabla_\theta \sum_i w_i L^*(\phi_i^T)$ from equation 2 can be expanded as follows:

$$\nabla_{\theta}\sum_{i}w_{i}L^{*}(\phi_{i}^{T})=\sum_{i}\nabla_{\theta}\big(w_{i}L^{*}(\phi_{i}^{T})\big)=\underbrace{\sum_{i}w_{i}\nabla_{\theta}L^{*}(\phi_{i}^{T})}_{\text{Term 1}}+\underbrace{\sum_{i}L^{*}(\phi_{i}^{T})\nabla_{\theta}w_{i}}_{\text{Term 2}}$$

The $\nabla_\theta w_i$ in Term 2 is computationally expensive, as $\nabla_\theta w_i = \nabla_\delta w_i \cdot \nabla_I \delta \cdot \nabla_\phi I \cdot \nabla_\theta \phi$. Restricting the gradient flow avoids these additional computations. We also note that the meta-model and the attention network are each updated only once during a training iteration, although on different batches of tasks.

## **5 Experiments And Results**

We conduct experiments to demonstrate the merit of the task attention across multiple datasets, training setups, and learning paradigms. We verify that the proposed regimen can be integrated with various ML approaches like MAML, MetaSGD, MetaLSTM++, and ANIL, and further show its superiority over state-of-the-art task-scheduling and conflict-resolving approaches. We also analyze the attention network.

## **5.1 Dataset And Implementation Details**

In line with the state-of-the-art literature (Sun et al., 2020; Arnold et al., 2021), we use miniImagenet, FC100, and tieredImagenet for evaluating the effectiveness of the proposed attention module, as they are challenging datasets comprising highly diverse tasks. We also test the efficacy of the proposed approach on a noisy dataset (miniImagenet-noisy) and under cross-domain few-shot learning (CDFSL) on miniImagenet → CUB-200 and miniImagenet → FGVC-Aircraft. The details of the datasets are presented in the supplementary material.

We use a 4-layer CNN from (Finn et al., 2017) as the base model and a two-layer LSTM (Ravi & Larochelle, 2017) for the parametric optimizer. The architecture of the task-attention module is illustrated in Figure 2 and described as follows. The task attention module is implemented as a 4-layer neural network. The first layer performs a 1×1 convolution over the input (meta-information) of size B×4, where B denotes the meta-batch size, producing a vector of size B×1 as output. This vector is then passed through two fully connected layers with 32 hidden nodes, each followed by a ReLU activation. The output is then passed through a fully connected layer with B nodes, followed by a softmax activation to produce the normalized attention weights.
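A minimal PyTorch sketch of this architecture (ours; the layer sizes follow the description above, while the class and argument names are illustrative) is:

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """4-layer task-attention network: input is the B x 4 meta-information
    matrix (one 4-tuple per task); output is a B-dimensional attention
    vector that sums to one."""
    def __init__(self, batch_size: int, hidden: int = 32):
        super().__init__()
        # 1x1 convolution mixing the 4 meta-information channels per task,
        # reducing B x 4 to B x 1.
        self.mix = nn.Conv1d(in_channels=4, out_channels=1, kernel_size=1)
        self.fc = nn.Sequential(
            nn.Linear(batch_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, batch_size), nn.Softmax(dim=-1),
        )

    def forward(self, meta_info: torch.Tensor) -> torch.Tensor:
        x = self.mix(meta_info.t().unsqueeze(0))  # (B, 4) -> (1, 1, B)
        return self.fc(x.flatten())               # (B,), sums to one

w = TaskAttention(batch_size=4)(torch.randn(4, 4))  # e.g., a batch of 4 tasks
```

The 1×1 convolution mixes the four meta-information components of each task independently, before the fully connected layers relate the tasks within the batch.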
![7_image_0.png](7_image_0.png)

Figure 2: Architecture of the task-attention module.

Table 1: Comparison of few-shot classification performance of MAML and TA-MAML on the miniImagenet dataset with meta-batch sizes 4, 6, and 8 for 5 and 10 way (1 and 5 shot) settings. The ± represents the 95% confidence intervals over 300 tasks. Algorithms denoted by ∗ are rerun with their optimal hyper-parameters on our experimental setup. We observe that TA-MAML consistently performs better than MAML, and an increase in the number of tasks in a batch improves the performance of both MAML and TA-MAML.

| Model | 5 Way 1 Shot | 5 Way 5 Shot | 10 Way 1 Shot | 10 Way 5 Shot |
|---|---|---|---|---|
| **Batch Size 4** | | | | |
| MAML∗ | 46.10 ± 0.19 | 60.16 ± 0.17 | 29.42 ± 0.11 | 41.98 ± 0.10 |
| TA-MAML∗ | 48.36 ± 0.23 | 62.48 ± 0.18 | 31.15 ± 0.11 | 43.70 ± 0.09 |
| **Batch Size 6** | | | | |
| MAML∗ | 47.72 ± 1.041 | 63.45 ± 1.083 | 31.55 ± 0.626 | 46.27 ± 0.64 |
| TA-MAML∗ | 49.14 ± 1.211 | 65.26 ± 0.956 | 32.62 ± 0.635 | 46.67 ± 0.63 |
| **Batch Size 8** | | | | |
| MAML∗ | 47.68 ± 1.20 | 63.81 ± 0.98 | 31.54 ± 0.66 | 46.15 ± 0.58 |
| TA-MAML∗ | 50.35 ± 1.22 | 65.69 ± 1.08 | 32.00 ± 0.68 | 48.33 ± 0.63 |

We perform a grid search over 30 different configurations for 5000 iterations to find the optimal hyper-parameters for each setting. The search space is shared across all meta-training algorithms and datasets. The meta, base, and attention model learning rates are sampled from log-uniform distributions over the ranges $[10^{-4}, 10^{-2}]$, $[10^{-2}, 5 \times 10^{-1}]$, and $[10^{-4}, 10^{-2}]$, respectively (see the appendix for more details). The hyperparameter λ for TAML (Theil) is sampled from a log-uniform distribution over the range $[10^{-2}, 1]$. For CA-MAML, c is set to 0.5. The meta-batch size is set to 4 for all settings (Finn et al., 2017; Jamal & Qi, 2019); however, we study its impact in Table 1. All models were trained for 55000 iterations (early stopping was employed for tieredImagenet) using the optimal set of hyper-parameters and an Adam optimizer (Kingma & Ba, 2015). All experimental results and comparisons correspond to our re-implementation of the ML algorithms integrated into the learn2learn library (Arnold et al., 2020) to ensure fairness and uniformity. We believe that integrating the proposed attention module and additional ML algorithms into the learn2learn library will benefit the ML community. We perform individual hyperparameter tuning for all the models over the same hyperparameter space to ensure a fair comparison. The source code is publicly available at https://github.com/taskattention/task-attended-metalearning.git.
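For instance, one configuration of the random search can be drawn with a log-uniform sampler along these lines (a small sketch under the ranges stated above; the function and key names are ours):

```python
import math
import random

def log_uniform(lo: float, hi: float) -> float:
    """Sample from a log-uniform distribution over [lo, hi]."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

# One of the 30 searched configurations: meta, base, and attention
# learning rates drawn from their respective ranges.
config = {"meta_lr": log_uniform(1e-4, 1e-2),
          "base_lr": log_uniform(1e-2, 5e-1),
          "att_lr":  log_uniform(1e-4, 1e-2)}
```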
## **5.2 Influence Of Task Attention On Meta-Training**

The literature reports significant variations in the meta-test performances of various ML approaches (Table 7 in the supplementary material). The reported average meta-test accuracies of MAML on the miniImagenet dataset range from 46.47% to 48.70% (55.16% to 64.39%) for the 5 way 1 shot (5 shot) setting. A careful analysis reveals that the differing experimental setups account for the observed variation. Experimental setups (Finn et al., 2017; Oreshkin et al., 2018; Oh et al., 2020) differ in the number of examples per class in the query set, the number of gradient-descent steps in the inner loop, the meta-batch size, inductive or transductive batch normalization, etc. We conduct two sets of experiments to test the proposed task attention model's efficacy in a fair manner. The first set of experiments uses the train and test setups reported in the literature (denoted using #). The second set uses our setup (denoted using ∗), which has the same train and test conditions. Specifically, we set the query examples per class to 15 and the gradient steps to 5 for both the meta-train and meta-test phases. However, for the 10 way 5 shot setting, we use only 2 gradient steps to reduce the computational burden. More query examples per class (15) during the meta-test provide a robust estimate of the model's generalizability. Further, setting the gradient steps to 5 (or 2) better evaluates the quick-adaptation capabilities of a learned prior.

As task attention (TA) is a standalone module, it can be integrated with any batch episodic training regimen. We, therefore, use MetaLSTM++ (the batch mode of MetaLSTM) for our experiments. In (Aimen et al., 2021), the authors demonstrated the merit of MetaLSTM++ over MetaLSTM only on the Omniglot dataset. We extend this empirical investigation by comparing the performance of MetaLSTM and MetaLSTM++ on complex datasets like miniImagenet, FC100, and tieredImagenet (Table 2). It is evident from the results that batch-wise episodic training is more effective than sequential episodic training.

We also compare the performance of models trained with the TA meta-training regimen against their non-TA counterparts on both (our and reported, wherever available) setups. Specifically, we compare MAML, MetaSGD, MetaLSTM++, and ANIL with their task-attended versions in 5 and 10 way (1 and 5 shot) settings on the miniImagenet, FC100, and tieredImagenet datasets and report the results in Table 2. We consider 300 meta-test tasks for all approaches unless specified otherwise. For ANIL and its task-attended counterpart, we consider 1000 testing tasks.
Table 2: Test accuracy (%) of ML approaches and their task-attended (TA) counterparts on the miniImagenet, FC100, and tieredImagenet datasets for 5 and 10 way (1 and 5 shot) settings. Models trained with the TA regimen perform better than their non-task-attended counterparts across all settings and datasets.

| Model | 5 Way 1 Shot | 5 Way 5 Shot | 10 Way 1 Shot | 10 Way 5 Shot |
|---|---|---|---|---|
| **miniImagenet** | | | | |
| MAML# (Finn et al., 2017) | 48.07 ± 1.75 | 63.15 ± 0.91 | - | - |
| CA-MAML# (Liu et al., 2021a) | 47.86 ± 2.50 | 64.27 ± 1.26 | - | - |
| TAML# (Jamal & Qi, 2019) | 51.77 ± 1.86 | 65.6 ± 0.93 | - | - |
| TA-MAML# | 53.80 ± 1.85 | 66.11 ± 0.11 | - | - |
| MAML∗ | 46.10 ± 0.19 | 60.16 ± 0.17 | 29.42 ± 0.11 | 41.98 ± 0.10 |
| TAML∗ | 46.26 ± 0.21 | 53.40 ± 0.14 | 29.76 ± 0.11 | 36.88 ± 0.10 |
| TA-MAML∗ | 48.36 ± 0.23 | 62.48 ± 0.18 | 31.15 ± 0.11 | 43.70 ± 0.09 |
| MetaSGD# (Li et al., 2017) | 50.47 ± 1.87 | 64.03 ± 0.94 | - | - |
| TA-MetaSGD# | 52.60 ± 0.25 | 67.54 ± 0.12 | - | - |
| MetaSGD∗ | 47.65 ± 0.21 | 61.60 ± 0.17 | 30.09 ± 0.10 | 42.22 ± 0.11 |
| TA-MetaSGD∗ | 49.28 ± 0.20 | 63.37 ± 0.16 | 31.50 ± 0.11 | 44.06 ± 0.10 |
| MetaLSTM∗ | 41.48 ± 1.02 | 58.87 ± 0.94 | 28.62 ± 0.64 | 44.03 ± 0.69 |
| MetaLSTM++∗ | 48.00 ± 0.19 | 62.73 ± 0.17 | 31.16 ± 0.09 | 45.46 ± 0.10 |
| TA-MetaLSTM++∗ | 49.18 ± 0.17 | 64.89 ± 0.16 | 32.07 ± 0.11 | 46.66 ± 0.09 |
| ANIL# (Raghu et al., 2020) | 46.7 ± 0.4 | 61.5 ± 0.5 | - | - |
| TA-ANIL# | 49.53 ± 0.41 | 63.73 ± 0.33 | - | - |
| ANIL∗ | 46.92 ± 0.62 | 58.68 ± 0.54 | 28.84 ± 0.34 | 40.95 ± 0.32 |
| TA-ANIL∗ | 48.84 ± 0.62 | 60.80 ± 0.55 | 31.14 ± 0.34 | 42.52 ± 0.34 |
| **FC100** | | | | |
| MAML∗ | 36.40 ± 0.38 | 46.76 ± 0.21 | 23.93 ± 0.14 | 31.14 ± 0.07 |
| TAML∗ | 38.00 ± 0.26 | 48.05 ± 0.13 | 21.60 ± 0.14 | 33.19 ± 0.07 |
| TA-MAML∗ | 39.86 ± 0.25 | 49.56 ± 0.13 | 25.46 ± 0.15 | 36.06 ± 0.08 |
| MetaSGD∗ | 33.46 ± 0.23 | 43.96 ± 0.13 | 21.40 ± 0.15 | 30.59 ± 0.07 |
| TA-MetaSGD∗ | 35.66 ± 0.25 | 49.49 ± 0.12 | 23.80 ± 0.15 | 32.08 ± 0.07 |
| MetaLSTM∗ | 37.20 ± 0.26 | 47.89 ± 0.13 | 21.70 ± 0.14 | 32.11 ± 0.07 |
| MetaLSTM++∗ | 38.60 ± 0.23 | 49.82 ± 0.12 | 22.80 ± 0.14 | 33.46 ± 0.08 |
| TA-MetaLSTM++∗ | 41.53 ± 0.28 | 51.17 ± 0.13 | 25.33 ± 0.15 | 34.18 ± 0.08 |
| ANIL∗ | 34.08 ± 1.29 | 44.74 ± 0.68 | 20.65 ± 0.77 | 27.93 ± 0.42 |
| TA-ANIL∗ | 38.06 ± 1.26 | 46.94 ± 0.69 | 23.27 ± 0.79 | 28.29 ± 0.40 |
| **tieredImagenet** | | | | |
| MAML# (Oh et al., 2020) | 47.44 ± 0.18 | 64.70 ± 0.14 | - | - |
| TA-MAML# | 51.90 ± 0.19 | 69.43 ± 0.18 | - | - |
| MAML∗ | 44.40 ± 0.49 | 57.07 ± 0.22 | 27.40 ± 0.25 | 34.30 ± 0.14 |
| TAML∗ | 46.40 ± 0.40 | 56.80 ± 0.23 | 26.40 ± 0.25 | 34.40 ± 0.15 |
| TA-MAML∗ | 48.40 ± 0.46 | 60.40 ± 0.25 | 31.00 ± 0.26 | 37.60 ± 0.15 |
| MetaSGD∗ | 52.80 ± 0.44 | 62.35 ± 0.26 | 31.90 ± 0.27 | 44.16 ± 0.15 |
| TA-MetaSGD∗ | 56.20 ± 0.45 | 64.56 ± 0.24 | 33.20 ± 0.29 | 47.12 ± 0.16 |
| MetaLSTM∗ | 37.00 ± 0.44 | 59.83 ± 0.25 | 29.80 ± 0.28 | 39.28 ± 0.13 |
| MetaLSTM++∗ | 47.60 ± 0.49 | 63.24 ± 0.25 | 30.70 ± 0.27 | 47.97 ± 0.16 |
| TA-MetaLSTM++∗ | 49.00 ± 0.44 | 66.15 ± 0.23 | 32.10 ± 0.27 | 51.35 ± 0.17 |
| ANIL∗ | 45.08 ± 1.37 | 59.71 ± 0.77 | 29.32 ± 0.83 | 42.76 ± 0.50 |
| TA-ANIL∗ | 45.96 ± 1.32 | 60.96 ± 0.72 | 32.68 ± 0.92 | 47.56 ± 0.51 |

From Table 2, we observe that models trained with the TA regimen generalize better to the unseen meta-test tasks than their non-task-attended versions across all settings on all datasets.
Note that the proposed task attention mechanism does not aim to surpass state-of-the-art meta-learning algorithms; rather, it provides new insight into the batch episodic meta-training regimen, which, to our knowledge, is common to all meta-learning algorithms.

We also compare the performance of TA-MAML against TAML, a meta-training regimen that forces the meta-model to be equally close to all the tasks. The results, presented in Table 2, suggest that TA-MAML performs better than TAML on all benchmarks across all settings. Note that both TAML and TA-MAML are approaches built upon MAML to address the inequality/diversity of tasks in a batch. Our aim is thus to compare TAML and TA-MAML, not to assess the efficacy of TAML when meta-trained using task attention. Liu et al. (2021a) proposed an optimization method to neutralize conflicts of an average model with individual tasks in a multi-task learning setup. Specifically, they find an optimal update vector that lies in the proximity of the average gradient across the batch of tasks without conflicting with any task gradient. This method is similar to (Jamal & Qi, 2019) in a meta-learning setup, which constrains the losses of tasks towards the average loss on a task batch. As the update vector is constrained to be close to the average gradient vector on a task batch, the information flow from certain useful tasks to the meta-model may decrease. We note that we extend (Liu et al., 2021a) to a meta-learning setup by computing the average and weighted-average gradients on the query loss of the adapted models instead of a model from the previous iteration (as in a multi-task setup). Table 2 demonstrates that the proposed attention mechanism generalizes better to unseen tasks than conflict-averse gradient descent adapted for a meta-learning setup (CA-MAML). Our approach utilizes a non-linear model to extract knowledge from multiple meta-information components to learn the weights, which helps it outperform TAML and CA-MAML.

We investigate the influence of the TA meta-training regimen on the model's convergence by analyzing the trend of the model's validation accuracy over iterations. Figure 3 depicts the mean validation accuracy over 300 tasks on the miniImagenet and tieredImagenet datasets for the 5 way 1 shot setting across training iterations. We observe that models meta-trained with the TA regimen tend to achieve higher or at-par performance in fewer iterations than the corresponding models meta-trained with the non-TA regimen.

Figure 3: Mean validation accuracies of MAML∗ (Col-1), MetaSGD∗ (Col-2), and MetaLSTM++∗ (Col-3) across 300 tasks with/without attention in the 5 way 1 shot setting on the miniImagenet (Row-1) and tieredImagenet (Row-2) datasets.

## **5.3 Comparison With Sampling Approaches**

We compare our proposed approach with ATS (Yao et al., 2021) and uniform sampling (Arnold et al., 2021) and demonstrate that our weighting mechanism imparts better generalizability to the meta-model than global weighting of the tasks. Yao et al. (2021) ascertained the merit of ATS over the Greedy class-pair (GCP) technique (Liu et al., 2020) on the miniImagenet dataset. We extend this comparison and show in Table 3 that the proposed approach performs better than the state-of-the-art ATS and GCP in both 1 and 5 shot settings. We also observe that the TA mechanism performs better than uniform sampling on the miniImagenet dataset in 1 and 5 shot settings for MAML and ANIL. ATS has been designed for noisy and imbalanced task distributions.
Table 3: Comparison (Test Accuracy (%)) of task attention with GCP, ATS, and uniform sampling for MAML and MetaSGD (or ANIL) on the miniImagenet dataset, and with various sampling techniques for ANIL on the miniImagenet-noisy dataset, for 5 way 1 and 5 shot settings. For miniImagenet, algorithms denoted by ∗ and # are rerun with the optimal hyper-parameters on our and the reported experimental setups, respectively.

| Model | 5 Way 1 Shot | 5 Way 5 Shot |
|---|---|---|
| **miniImagenet** | | |
| MAML with GCP# | 46.92 ± 0.83 | 63.28 ± 0.66 |
| MAML with ATS# | 47.89 ± 0.77 | 64.07 ± 0.70 |
| MAML+UNIFORM (Offline)# | 46.67 ± 0.63 | 62.09 ± 0.55 |
| MAML+UNIFORM (Online)# | 46.70 ± 0.61 | 61.62 ± 0.54 |
| TA-MAML∗ (Ours) | 48.36 ± 0.23 | 62.48 ± 0.18 |
| TA-MAML# (Ours) | 53.80 ± 1.85 | 66.11 ± 0.11 |
| MetaSGD with GCP# | 47.77 ± 0.75 | 63.50 ± 0.71 |
| MetaSGD with ATS# | 48.59 ± 0.79 | 64.79 ± 0.74 |
| TA-MetaSGD∗ (Ours) | 49.28 ± 0.20 | 63.37 ± 0.16 |
| TA-MetaSGD# (Ours) | 52.60 ± 0.25 | 67.54 ± 0.12 |
| ANIL+UNIFORM (Offline)# | 46.93 ± 0.62 | 62.75 ± 0.60 |
| ANIL+UNIFORM (Online)# | 46.82 ± 0.63 | 62.63 ± 0.59 |
| TA-ANIL∗ (Ours) | 48.84 ± 0.62 | 60.80 ± 0.55 |
| TA-ANIL# (Ours) | 49.53 ± 0.41 | 63.73 ± 0.33 |
| **miniImagenet-noisy** | | |
| Uniform | 41.67 ± 0.80 | 55.80 ± 0.71 |
| SPL | 42.13 ± 0.79 | 56.19 ± 0.70 |
| Focal Loss | 41.91 ± 0.78 | 53.58 ± 0.75 |
| GCP | 41.86 ± 0.75 | 54.63 ± 0.72 |
| PAML | 41.49 ± 0.74 | 52.45 ± 0.69 |
| DAML | 41.26 ± 0.73 | 55.46 ± 0.70 |
| ATS | 44.21 ± 0.76 | 59.50 ± 0.71 |
| TA-ANIL∗ (Ours) | 45.17 ± 0.23 | 62.15 ± 1.01 |

So, we compare the proposed approach with GCP, ATS, and other sampling techniques on the miniImagenet-noisy dataset (Yao et al., 2021) and report the results in Table 3. We observe that task attention outperforms all scheduling algorithms on the miniImagenet-noisy dataset. As ATS is the most competitive baseline for the proposed method on the miniImagenet-noisy dataset, we compare TA-ANIL and ATS on varying noise ratios of the miniImagenet dataset in the 5 way 1 shot setting (Table 4). We observe that the proposed method outperforms ATS on all noise ratios except 0.8. Note that the algorithm used for all sampling approaches is ANIL.
## **5.4 Effectiveness Of Task Attention In CDFSL Setup**

Table 4: Comparative analysis (Test Accuracy (%)) of ANIL integrated with ATS and the proposed method on the miniImagenet-noisy dataset with varying noise ratios for the 5 way 1 shot setting. BNS is the best non-adaptive scheduler.

| Model | Noise 0.2 | Noise 0.4 | Noise 0.6 | Noise 0.8 |
|---|---|---|---|---|
| ANIL with Uniform | 43.46 ± 0.82 | 42.92 ± 0.78 | 41.67 ± 0.80 | 36.53 ± 0.73 |
| ANIL with BNS | 44.04 ± 0.81 | 43.36 ± 0.75 | 42.13 ± 0.79 | 38.21 ± 0.75 |
| ANIL with ATS | 45.55 ± 0.80 | 44.50 ± 0.86 | 44.21 ± 0.76 | 42.18 ± 0.73 |
| TA-ANIL∗ (Ours) | 47.98 ± 0.26 | 46.69 ± 0.22 | 45.17 ± 0.23 | 40.35 ± 1.14 |

Classical meta-learning approaches assume that the meta-train and meta-test data belong to the same distribution, such that the meta-trained model extends its knowledge to the meta-test set. This is, however, not always the case. Differences in data-acquisition techniques, or the evolution of data with time, may cause a discrepancy between the meta-train and meta-test distributions. This realistic setting is popularly termed cross-domain few-shot learning (CDFSL) (Guo et al., 2020). We conducted experiments to show the merit of the proposed approach in the CDFSL setup. Specifically, we train a model using the TA meta-training regimen on the miniImagenet dataset and meta-test it on the CUB-200, FGVC-Aircraft, Describable Textures, and Omniglot datasets from Metadataset (Triantafillou et al., 2019). The results reported for 5 way 1 and 5 shot settings in Table 5 indicate that the proposed approach outperforms the state-of-the-art task-scheduling approach (uniform sampling, wherever applicable) or the non-task-attended counterparts (for Omniglot) in the CDFSL setup by a large margin. As some classes of Imagenet overlap with Metadataset, we also conduct experiments on the diverse VTAB dataset (Zhai et al., 2019), which does not share classes with the Imagenet (and consequently the miniImagenet) dataset. We note that some VTAB sub-datasets like Sun397 are quite memory intensive, and others like Patch Camelyon, Retinopathy, etc., have fewer classes. In the interest of time and resources, we meta-train a conv4 model on the miniImagenet dataset and evaluate it on a few feasible sub-datasets covering all three domains: Natural, Specialized, and Structured. Specifically, we investigate the merit of the proposed approach on Natural sub-datasets like DTD, CIFAR FC100, Flowers102, and SVHN, Specialized sub-datasets like EuroSAT and Resisc45, and Structured sub-datasets like dSprites_location and dSprites_orientation. We have kept Describable Textures as a part of Metadataset and Flowers102 as a component of the VTAB dataset, according to (Dumoulin et al., 2021). We convert the selected VTAB sub-datasets to a few-shot setup (5-way 1 and 5 shot tasks) and evaluate task-attended MAML (TA-MAML) and its vanilla version (MAML) on 300 tasks. Our experiments (Table 5) demonstrate that task attention allows MAML to better generalize to unseen, diverse out-of-distribution VTAB meta-test sets.
Table 5: Comparison (Test Accuracy (%)) of TA-MAML with uniform sampling or its non-task-attended counterpart (MAML) on the Metadataset and VTAB datasets for 5 way 1 and 5 shot settings.

| Dataset | Model | 5 Way 1 Shot | 5 Way 5 Shot |
|---|---|---|---|
| **Metadataset** | | | |
| CUB-200 | MAML+UNIFORM (Online)# | 35.84 ± 0.54 | 46.67 ± 0.55 |
| CUB-200 | TA-MAML# (Ours) | 42.87 ± 1.18 | 57.49 ± 0.99 |
| FGVC-Aircraft | MAML+UNIFORM (Online)# | 26.62 ± 0.39 | 34.41 ± 0.44 |
| FGVC-Aircraft | TA-MAML# (Ours) | 29.42 ± 0.78 | 36.34 ± 0.86 |
| Describable Textures | MAML+UNIFORM (Online)# | 31.84 ± 0.49 | 40.81 ± 0.44 |
| Describable Textures | TA-MAML# (Ours) | 31.98 ± 0.98 | 44.39 ± 0.79 |
| Omniglot | MAML# | 72.40 ± 1.43 | 86.81 ± 0.99 |
| Omniglot | TA-MAML# (Ours) | 78.73 ± 1.08 | 88.92 ± 0.76 |
| **VTAB** | | | |
| CIFAR FC100 | MAML# | 35.49 ± 1.95 | 44.42 ± 0.83 |
| CIFAR FC100 | TA-MAML# (Ours) | 38.87 ± 1.90 | 46.57 ± 0.85 |
| Flowers102 | MAML# | 51.93 ± 1.59 | 75.22 ± 0.48 |
| Flowers102 | TA-MAML# (Ours) | 61.86 ± 1.72 | 77.49 ± 0.16 |
| SVHN | MAML# | 20.93 ± 1.01 | 22.42 ± 0.88 |
| SVHN | TA-MAML# (Ours) | 21.73 ± 1.09 | 24.20 ± 0.78 |
| EuroSAT | MAML# | 45.80 ± 1.49 | 62.0 ± 0.71 |
| EuroSAT | TA-MAML# (Ours) | 51.67 ± 1.62 | 66.69 ± 0.70 |
| Resisc45 | MAML# | 33.60 ± 1.49 | 42.07 ± 0.37 |
| Resisc45 | TA-MAML# (Ours) | 35.20 ± 1.21 | 46.27 ± 0.39 |
| dSprites_location | MAML# | 36.67 ± 1.55 | 48.91 ± 0.84 |
| dSprites_location | TA-MAML# (Ours) | 39.93 ± 1.33 | 56.48 ± 0.95 |
| dSprites_orientation | MAML# | 20.86 ± 1.81 | 22.89 ± 0.95 |
| dSprites_orientation | TA-MAML# (Ours) | 24.27 ± 1.18 | 22.92 ± 0.93 |

## **5.5 Ablation Studies**

To examine the significance of each input given to the task attention model, we conduct an ablation study on 5 way 1 and 5 shot TA-MAML on the miniImagenet dataset and report the results in Table 6. We observe that all the components of meta-information contribute to the learning of a more generalizable meta-model.

Table 6: Effect of ablating components of meta-information in TA-MAML∗ for 5 way 1 and 5 shot settings on the miniImagenet dataset (× denotes an ablated input).

| Grad norm | Loss | Loss-ratio | Accuracy | 5 way 1 shot | 5 way 5 shot |
|---|---|---|---|---|---|
| × | × | × | × | 46.10 ± 0.19 | 60.16 ± 0.17 |
| × | | | | 47.30 ± 0.16 | 60.48 ± 0.16 |
| | × | | | 47.62 ± 0.17 | 62.17 ± 0.17 |
| | | × | | 48.10 ± 0.18 | 60.90 ± 0.20 |
| | | | × | 47.30 ± 0.18 | 61.52 ± 0.16 |
| | | | | 48.36 ± 0.23 | 62.48 ± 0.18 |

To further support this observation, we investigate the relationship between the meta-information and the weights assigned by the task attention module by analyzing the mean Pearson correlation of each of the components (four-tuple) of the meta-information with the attention vector across the training iterations. This is depicted in Figure 4 for TA-MAML in 5 way 1 and 5 shot settings on the miniImagenet dataset. We observe that the loss-ratio and loss are positively correlated with the attention vector, while accuracy and gradient norm are negatively correlated.

Figure 4: Mean Pearson correlation of TA-MAML∗ in the 5 way 1 shot (left) and 5 shot (right) settings on miniImagenet.

In the 5 way 5 shot setting, we observe that the correlation pattern is comparable to the 5 way 1 shot setting, but the mean correlation value of the gradient norm across iterations is lower than in the 5 way 1 shot setting. This could be because the 5 way 5 shot setting is richer in data than the 5 way 1 shot setting, which allows better learning and therefore yields lower average values of the gradient norm (Section 4.1.1). The critical observation, however, is that the meta-information components have a weak correlation with the attention weights, indicating that the TA module does not trivially follow any single component of meta-information. We also analyze the ranks of the tasks for maximum and minimum values of loss, loss-ratio, accuracy, and gradient norm in a batch, as per the weights across training iterations, and describe the results in the supplementary material. The rank analysis reinforces the same observation. We also ascertain the decreasing trend of the mean weighted loss across iterations in the supplementary material.
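The correlation analysis above uses the standard Pearson coefficient; a minimal sketch (ours) of the per-component computation is:

```python
import torch

def pearson(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between two 1-D tensors (e.g., one
    meta-information component and the attention vector of a batch)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm())

# Mean correlation of one component across training iterations, assuming
# `history` holds (component_t, weights_t) pairs collected during training:
# mean_corr = torch.stack([pearson(c_t, w_t) for c_t, w_t in history]).mean()
```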
## **5.6 Analysis Of Attention Network**

To gain further insights into the operation of the attention module, we also examine the trend of the attention vector (Figure 5) while meta-training TA-MAML in 5 way 1 and 5 shot settings on the miniImagenet dataset. We plot the maximum and minimum attention scores assigned to the tasks of a batch across iterations, together with a few weighted task batches in the 5 way 1 shot setting for illustration. We note that the weighted task batches are only intended to demonstrate the change in the tasks' attention scores across iterations; the next experiment presents a more rigorous analysis studying the relationship between the classes in a task and the attention scores assigned.

We note that the mean attention score is always 0.25, as we follow a meta-batch size of 4. We observe that the TA module's output follows an interesting trend. Initially, the TA module assigns almost uniform weights to all the tasks of a batch; however, as the iterations increase, it assigns unequal scores to the tasks in a batch, preferring some over the others. This suggests that during the initial phases of the meta-model's training, all tasks contribute equally towards learning a *generic structure* of the meta-knowledge. As the meta-model's learning proceeds, learning the further *fine-grained meta-knowledge structure* requires prioritizing those tasks in a batch that are potentially better aligned with learning the optimal meta-knowledge. We study the computational burden imposed by the TA regimen in the appendix.
![13_image_0.png](13_image_0.png)

Figure 5: Trend of the attention vector in 5 way 1 shot (left) and 5 shot (right) settings on the miniImagenet dataset for TA-MAML∗.

We further decipher the functioning of the black-box attention network by analyzing the qualitative relation between the weights and the classes of the task batches (Figure 6). In Figure 6, the left column (col-1) corresponds to cases where the assignment of attention scores to the tasks is human-interpretable, whereas the right column (col-2) refers to uninterpretable attention scores. From the human perspective, tasks containing images from similar classes are hard to distinguish and are assigned higher attention scores, indicated by red bounding boxes (Figure 6, col-1). Specifically, in (col-1, row-1), task 2 is regarded as most important, possibly because it includes three breeds of dogs, followed by task 4, which comprises two species of fish. However, the aforementioned is not a hard constraint, as there are some task batches (Figure 6, col-2) in which the distribution of weights cannot be explained qualitatively.

![14_image_0.png](14_image_0.png)

Figure 6: Explanations of the TA module in TA-MAML∗ on miniImagenet. Left Col) Higher weights accredited to tasks with comparable classes, marked by red bounding boxes. Right Col) Association of weights and task data is qualitatively uninterpretable. Rows correspond to the batches.

## **6 Conclusion**

In this work, we have shown that the batch-wise episodic training regimen adopted by ML strategies can benefit from leveraging knowledge about the importance of tasks within a batch. Unlike prior approaches that assume uniform importance for each task in a batch, we propose task attention as a way to learn the relevance of each task according to its alignment with the optimal meta-knowledge. We have validated the effectiveness of task attention by augmenting it to popular initialization- and optimization-based ML strategies. We have demonstrated through experiments on the miniImagenet, FC100, and tieredImagenet datasets that augmenting task attention helps attain better generalization to unseen tasks from the same distribution while requiring fewer iterations to converge. We also show that task attention is meritorious over existing task-scheduling algorithms, even in noisy and CDFSL setups. We also conduct an exhaustive empirical analysis of the distribution of attention weights to study the nature of the meta-knowledge and the task attention module. We leave the theoretical motivation of the meta-information components and the proof of convergence of the proposed curriculum as part of our future work. We believe that this end-to-end attention-based meta-training paves the way towards efficient and automated meta-training.

## **7 Broader Impact**

We acknowledge that transfer and metric approaches like (Kolesnikov et al., 2020; Triantafillou et al., 2019; Bronskill et al., 2021; Dvornik et al., 2020) use more advanced backbones, while our approach is limited to a basic architecture (Conv4) and gradient-based methods. We clarify that though our approach is extendable to any episodic curriculum (including metric approaches, with minor design changes), we choose gradient-based approaches like MAML and ANIL as they are domain-agnostic, in contrast to metric learning. However, we leave the investigation of attention mechanisms for metric approaches, and for domains such as reinforcement learning or regression problems with gradient approaches, to future work.
Unfortunately, due to computational and storage restrictions, we are unable to experiment with deeper backbones and large image sizes for gradient-based methods. We therefore limit the scope of our study to the stated algorithms, datasets, and conditions, and leave the scalability aspect to future work. We, however, point to the existing literature (Chen et al., 2018) that compares vanilla transfer learning (with no Imagenet pretraining or data augmentation) for a conv4 backbone with episodic training (MAML) under fair conditions. Chen et al. have demonstrated that MAML performs better than vanilla transfer learning under fair conditions for the conv4 architecture. However, transfer learning scales much better with the architecture than MAML (or other episodic methods) (Chen et al., 2018). Nevertheless, transfer learning (TL) is a good solution for few-shot learning (especially with Imagenet pretraining and larger backbones), and translating attention to TL for a few-shot setup is a promising direction for further research. An attention module, in this case, could be used to reweigh the examples instead of the tasks, and it could be trained using a smaller validation data pool. Also, sampling a validation pool from a combination of distributions (transduction) is worth exploring. We leave these extensions for future work. We acknowledge that, similar to (Yao et al., 2021; Wu et al., 2022; Raghu et al., 2020), our study is limited to understanding the fundamentals of episodic training rather than developing an algorithm that surpasses the state-of-the-art approach for few-shot learning.

## **References**

Mayank Agarwal, Mikhail Yurochkin, and Yuekai Sun. On sensitivity of meta-learning to support data. *Advances in Neural Information Processing Systems*, 34:20447–20460, 2021.

Aroof Aimen, Sahil Sidheekh, Vineet Madan, and Narayanan C Krishnan. Stress testing of meta-learning approaches for few-shot learning. In *AAAI Workshop on Meta-Learning and MetaDL Challenge*, 2021.

Antreas Antoniou, Harri Edwards, and Amos Storkey. How to train your MAML. In *International Conference on Learning Representations*, 2019.

Sébastien Arnold, Guneet Dhillon, Avinash Ravichandran, and Stefano Soatto. Uniform sampling over episode difficulty. *Advances in Neural Information Processing Systems*, 34:1481–1493, 2021.

Sébastien M. R. Arnold, Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. learn2learn: A library for meta-learning research. *CoRR*, 2020.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *International Conference on Machine Learning (ICML)*, pp. 41–48. ACM, 2009.

John Bronskill, Daniela Massiceti, Massimiliano Patacchiola, Katja Hofmann, Sebastian Nowozin, and Richard Turner. Memory efficient meta-learning with large images. *Advances in Neural Information Processing Systems*, 34:24327–24339, 2021.

Haw-Shiuan Chang, Erik G. Learned-Miller, and Andrew McCallum. Active bias: Training more accurate neural networks by emphasizing high variance samples. In *Advances in Neural Information Processing Systems 30*, pp. 1002–1012, 2017.

Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In *International Conference on Learning Representations*, 2018.

Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In *International Conference on Learning Representations*, 2019.
Vincent Dumoulin, Neil Houlsby, Utku Evci, Xiaohua Zhai, Ross Goroshin, Sylvain Gelly, and Hugo Larochelle. A unified few-shot classification benchmark to compare transfer and meta learning approaches. In *NeurIPS Datasets and Benchmarks Track (Round 1)*, 2021.

Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Selecting relevant features from a multi-domain representation for few-shot classification. In *European Conference on Computer Vision*, pp. 769–786. Springer, 2020.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *ICML*, 2017.

Yunhui Guo, Noel C Codella, Leonid Karlinsky, James V Codella, John R Smith, Kate Saenko, Tajana Rosing, and Rogerio Feris. A broader study of cross-domain few-shot learning. In *European Conference on Computer Vision*, pp. 124–141. Springer, 2020.

Ricardo Luna Gutierrez and Matteo Leonetti. Information-theoretic task selection for meta-reinforcement learning. In *Advances in Neural Information Processing Systems 33*, 2020.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 1997.

Muhammad Abdullah Jamal and Guo-Jun Qi. Task agnostic meta-learning for few-shot learning. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 11719–11727, 2019.

Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In *ICML*, pp. 2309–2318. PMLR, 2018.

Jean Kaddour, Steindór Sæmundsson, and Marc Peter Deisenroth. Probabilistic active meta-learning. In *Advances in Neural Information Processing Systems 33*, 2020.

H. Kahn and A. W. Marshall. Methods of reducing sample size in Monte Carlo computations. *Operations Research*, 1953.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations*, 2015.

Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big Transfer (BiT): General visual representation learning. In *European Conference on Computer Vision*, pp. 491–507. Springer, 2020.

M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In *Advances in Neural Information Processing Systems 23*, pp. 1189–1197. Curran Associates, Inc., 2010.

Jian Li, Xuanyuan Luo, and Mingda Qiao. On generalization error bounds of noisy gradient methods for non-convex learning. In *International Conference on Learning Representations*, 2019.
Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-SGD: Learning to learn quickly for few-shot learning, 2017.

Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In *IEEE International Conference on Computer Vision (ICCV)*, pp. 2999–3007, 2017.

Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. *Advances in Neural Information Processing Systems*, 34:18878–18890, 2021a.

Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, and Steven C. H. Hoi. Adaptive task sampling for meta-learning. In *ECCV*, 2020.

Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In *ICML*, 2021b.

Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, and Se-Young Yun. BOIL: Towards representation change for few-shot learning. In *International Conference on Learning Representations*, 2020.

Boris N. Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In *Advances in Neural Information Processing Systems 31*, pp. 719–729, 2018.

Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. In *International Conference on Learning Representations*, 2020.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In *International Conference on Learning Representations*, 2017.

Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few-shot classification. In *International Conference on Learning Representations*, 2018a.

Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In *ICML*, pp. 4331–4340. PMLR, 2018b.

Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In *International Conference on Learning Representations*, 2019.

Jaewoong Shin, Hae Beom Lee, Boqing Gong, and Sung Ju Hwang. Large-scale meta-learning with continual trajectory shifting. In *International Conference on Machine Learning*, pp. 9603–9613. PMLR, 2021.

Abhinav Shrivastava, Abhinav Gupta, and Ross B. Girshick. Training region-based object detectors with online hard example mining. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 761–769, 2016.
Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 403–412, 2019.

Qianru Sun, Yaoyao Liu, Zhaozheng Chen, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning through hard tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2020.

Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-Dataset: A dataset of datasets for learning to learn from few examples. In *International Conference on Learning Representations*, 2019.

Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In *Advances in Neural Information Processing Systems 29*, pp. 3630–3638, 2016.

Yichen Wu, Long-Kai Huang, and Ying Wei. Adversarial task up-sampling for meta-learning. In *Advances in Neural Information Processing Systems*, 2022.

Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, and Chelsea Finn. Meta-learning with an adaptive task scheduler. *Advances in Neural Information Processing Systems*, 2021.

Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. A large-scale study of representation learning with the visual task adaptation benchmark. *arXiv preprint arXiv:1910.04867*, 2019.

Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In *ICML*, pp. 1–9. JMLR.org, 2015.

## **8 Appendix**

## **8.1 Preliminary**

## **8.1.1 Meta-Knowledge As An Optimal Initialization**

When the meta-knowledge is a generic initialization of the model parameters learned through experience over various tasks, it is enforced to be close to each individual training task's optimal parameters. A model initialized with such an optimal prior quickly adapts to unseen tasks from the same distribution during meta-testing. **MAML** (Finn et al., 2017) employs a nested iterative process to learn the task-agnostic optimal prior θ. In the inner iterations, representing the task-adaptation steps, θ is separately fine-tuned for each meta-training task $T_i$ of a batch using $D_i$ to obtain $\phi_i$ through gradient descent on the train loss L with learning rate α. Specifically, $\phi_i$ is initialized as θ and updated T times using $\phi_i \leftarrow \phi_i - \alpha \nabla_{\phi_i} L(\phi_i)$, resulting in the adapted model $\phi_i^T$. In the outer loop, meta-knowledge is gathered by optimizing θ over the loss $L^*$ computed with the task-adapted model parameters $\phi_i^T$ on the query dataset $D_i^*$. Specifically, during meta-optimization, $\theta \leftarrow \theta - \beta \nabla_\theta \sum_{i=1}^B L^*(\phi_i^T)$ for a task batch of size B and learning rate β.
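For concreteness, a minimal functional sketch of MAML's nested updates on toy linear-regression tensors (ours, not the paper's implementation) is:

```python
import torch

def mse(w_vec, x, y):                       # task loss L
    return ((x @ w_vec - y) ** 2).mean()

theta = torch.zeros(5, requires_grad=True)  # optimal initialization θ
alpha, beta, B, T = 0.01, 0.001, 4, 5

# B toy tasks: (D_i train x/y, D*_i query x/y)
tasks = [(torch.randn(10, 5), torch.randn(10),
          torch.randn(10, 5), torch.randn(10)) for _ in range(B)]

meta_loss = 0.0
for x_tr, y_tr, x_q, y_q in tasks:
    phi = theta                             # ϕ_i^0 ← θ
    for _ in range(T):                      # inner loop on D_i
        g, = torch.autograd.grad(mse(phi, x_tr, y_tr), phi, create_graph=True)
        phi = phi - alpha * g               # ϕ_i ← ϕ_i − α ∇ L(ϕ_i)
    meta_loss = meta_loss + mse(phi, x_q, y_q)   # L*(ϕ_i^T) on D*_i

g_theta, = torch.autograd.grad(meta_loss, theta)  # outer (meta) gradient
theta = (theta - beta * g_theta).detach().requires_grad_()
```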
**MetaSGD** (Li et al., 2017) improves upon MAML by learning parameter-specific learning rates α in addition to the optimal initialization, in a similar nested iterative procedure. Meta-knowledge is gathered by optimizing θ and α in the outer loop using the loss $L^*$ computed on the query set $D_i^*$. Specifically, during meta-optimization, $(\theta, \alpha) \leftarrow (\theta, \alpha) - \beta \nabla_{(\theta,\alpha)} \sum_{i=1}^B L^*(\phi_i^T)$. Learning dynamic learning rates for each parameter of a model makes MetaSGD faster and more generalizable than MAML: a single adaptation step is sufficient to adjust the model to a new task.

The performance of MAML is attributed to the reuse of features across tasks rather than the rapid learning of new tasks (Raghu et al., 2020). Exploiting this characteristic, **ANIL** freezes the feature-backbone layers $(1, \ldots, l-1)$ and only adapts the classifier layer $l$ in the inner loop T times. Specifically, during adaptation, $\phi_i^l \leftarrow \phi_i^l - \alpha \nabla_{\phi_i^l} L(\phi_i^l)$. During meta-optimization, $\theta^{1,\ldots,l} \leftarrow \theta^{1,\ldots,l} - \beta \nabla_{\theta^{1,\ldots,l}} \sum_{i=1}^B L^*(\phi_i^{l^T})$, i.e., all layers are learned in the outer loop. Freezing the feature backbone during adaptation reduces the overhead of computing the gradient through the gradient (differentiating through the inner loop), and thereby heavier backbones can be used for feature extraction.

**TAML** (Jamal & Qi, 2019) suggests that the optimal prior learned by MAML may still be biased towards some tasks. They propose to reduce this bias and enforce equity among the tasks by explicitly minimizing the inequality among the performances of the tasks in a batch. The inequality, defined using statistical measures such as the Theil index, Atkinson index, Generalized Entropy index, and Gini coefficient over the performances of the tasks in a batch, is used as a regularizer while gathering the meta-knowledge. For the baseline comparison in our experiments, we use the Theil index for TAML owing to its best average results. Specifically, during meta-optimization, $\theta \leftarrow \theta - \beta \nabla_\theta \sum_{i=1}^B \left[ L^*(\phi_i^T) + \lambda \frac{L^*(\phi_i^0)}{\bar{L}^*(\phi_i^0)} \ln \frac{L^*(\phi_i^0)}{\bar{L}^*(\phi_i^0)} \right]$ (for the TAML-Theil index), where B is the number of tasks in a batch, $L^*(\phi_i^0)$ is the loss incurred by the initial model $\phi_i^0$ on the query set $D_i^*$ of task $T_i$, and $\bar{L}^*(\phi_i^0)$ is the average query loss of the initial model on a batch of tasks. As TAML enforces equity of the optimal prior towards the meta-train tasks, it counters adaptation, which leads to slow and unstable training that depends largely on λ.
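As an illustration, a minimal sketch of the Theil-index regularizer over a batch of initial query losses (our reading of the formula above; the function name is ours) is:

```python
import torch

def theil_index(initial_losses: torch.Tensor) -> torch.Tensor:
    """Theil index of the initial query losses {L*(phi_i^0)} of a task batch:
    zero when all tasks incur the same loss, larger under inequality."""
    ratio = initial_losses / initial_losses.mean()
    return (ratio * ratio.log()).mean()

# e.g., added to the meta-objective as: meta_loss + lam * theil_index(losses)
```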
## **8.1.2 Meta-Knowledge As A Parametric Optimizer**

A regulated gradient-based optimizer gathers task-specific and task-agnostic meta-knowledge to traverse the loss surfaces of the tasks in the meta-train set during meta-training. A base model guided by such a learned parametric optimizer quickly finds its way to a minimum, even for unseen tasks sampled from the same distribution during meta-testing. **MetaLSTM** (Ravi & Larochelle, 2017) is a recurrent parametric optimizer θ that mimics the gradient-based optimization of a base model ϕ. This recurrent optimizer is an LSTM (Hochreiter & Schmidhuber, 1997) and is inherently capable of performing two-level learning due to its architecture. During adaptation of $\phi_i$ on $D_i$, θ takes meta-information of $\phi_i$, characterized by its current loss L and gradients $\nabla_{\phi_i} L$, as input and outputs the next set of parameters for $\phi_i$. This adaptation procedure is repeated T times, resulting in the adapted base-model $\phi_i^T$. Internally, the cell state of θ corresponds to $\phi_i$, and the cell-state update of θ resembles a learned and controlled gradient update: the emphasis on the previous parameters and on the current update is regulated by the learned forget and input gates, respectively. While adapting $\phi_i$ to $D_i$, information about the trajectory on the loss surface across the adaptation steps is captured in the hidden states of θ, representing the task-specific knowledge. During meta-optimization, θ is updated based on the loss $L^*(\phi_i^T)$ of the adapted model computed on the query set $D_i^*$ to garner the meta-knowledge across tasks. Specifically, during meta-optimization, $\theta \leftarrow \theta - \beta \nabla_\theta L^*(\phi_i^T)$. MetaLSTM updates the parametric optimizer θ after adapting the base model ϕ to each task. This causes θ to follow the optima of all adapted base models, leading to an elongated and fluctuating optimization trajectory that is biased towards the last task. **MetaLSTM++** (Aimen et al., 2021) circumvents these issues, as θ is updated by the aggregate query loss of the adapted models on a batch of tasks. Batch updates smoothen the optimization trajectory of θ and eliminate its bias towards the last task. Specifically, during meta-optimization, $\theta \leftarrow \theta - \beta \nabla_\theta \sum_{i=1}^B L^*(\phi_i^T)$.

## **8.2 Detailed Explanation Of The Proposed Approach**

![20_image_0.png](20_image_0.png)

Figure 7: [Best viewed in color] Workflow of the proposed training curriculum.

We explain the proposed approach through Figure 1, Figure 7, Algorithm 1, and the equations. We first sample a batch of tasks (B) from a random pool of data (Figure 7, Label 1). For each task, the base-model $\phi_i$ is adapted using the support data $D_i$ for T time-steps (line 7 and lines 20-32 in Algorithm 1, Figure 7, Label 3). Specifically, the adaptation is done using gradient descent on the train loss L for initialization approaches (lines 22-26 in Algorithm 1, Figure 7, GD), or the current loss and gradients are input to the meta-model θ for optimization approaches, which then outputs the updated base-model parameters (lines 27-31 in Algorithm 1, Figure 7, PO). The meta-information (I) corresponding to each task in the batch is then calculated (Figure 7, Label 4); it includes the loss, accuracy, loss-ratio, and gradient norm of the adapted models on the query data. This is given as input to the task attention module (Figure 1, Label 2; Figure 7, Label 5), which outputs the attention vector (line 10 in Algorithm 1, Figure 7, Label 6). The attention vector and test losses are used to update the meta-model parameters θ according to equation 2 (line 11 in Algorithm 1, Figure 1, Label 4; Figure 7, Label 7). A new batch of tasks is then sampled, and the base-models are adapted using the updated meta-model (lines 12-16 in Algorithm 1, Figure 1, Label 5). The mean test loss over the adapted base-models is calculated and used to update the parameters of the task attention module δ according to equation 3.

## **8.3 Experiments**

## **8.3.1 Datasets Details**

The **miniImagenet** dataset (Vinyals et al., 2016) comprises 600 color images of size 84 × 84 from each of 100 classes sampled from the Imagenet dataset.
The 100 classes are split into 64, 16, and 20 classes for meta-training, meta-validation, and meta-testing, respectively. **miniImagenet-noisy** (Yao et al., 2021) is constructed from the miniImagenet dataset with the additional constraint that tasks have noisy support labels and clean query labels. The noise in the support labels is introduced by symmetry flipping, and the default noise ratio is 0.6. The **Fewshot Cifar 100 (FC100)** dataset (Oreshkin et al., 2018) has been created from the Cifar 100 object-classification dataset. It contains 600 color images of size 32 × 32 for each of 100 classes grouped into 20 super-classes. Among the 100 classes, 60 classes belonging to 12 super-classes correspond to the meta-train set, 20 classes from 4 super-classes to the meta-validation set, and the rest to the meta-test set. **tieredImagenet** (Ren et al., 2018a) is a more challenging benchmark for few-shot image classification. It contains 779,165 color images sampled from 608 classes of Imagenet, grouped into 34 super-classes. These super-classes are divided into 20, 6, and 8 disjoint sets for meta-training, meta-validation, and meta-testing. **Metadataset** (Triantafillou et al., 2019) comprises 10 freely available, diverse datasets: Aircraft, CUB-200-2011, Describable Textures, Fungi, ILSVRC-2012, MSCOCO, Omniglot, Quick Draw, Traffic Signs, and VGG Flower. We utilized the CUB-200, FGVC-Aircraft, Describable Textures, and Omniglot datasets from Metadataset. The **VTAB dataset** (Zhai et al., 2019) is a more diverse dataset than Metadataset, proposed to avoid overlap between the classes of its sub-datasets and the Imagenet dataset. VTAB comprises 19 datasets divided into three domains, Natural, Specialized, and Structured, depending on the type of images. The Natural group contains the Caltech101, CIFAR100, DTD, Flowers102, Pets, Sun397, and SVHN sub-datasets, while the Specialized group consists of remote-sensing datasets like EuroSAT and Resisc45 and medical datasets like Retinopathy and Patch Camelyon. The Structured group contains object-counting or 3D-depth-prediction datasets like Clevr/count, Clevr/distance, dSprites/location, dSprites/orientation, SmallNORB/azimuth, SmallNORB/elevation, DMLab, and KITTI/distance. We considered Natural sub-datasets like DTD, CIFAR FC100, Flowers102, and SVHN, Specialized sub-datasets like EuroSAT and Resisc45, and Structured sub-datasets like dSprites_location and dSprites_orientation for cross-domain experimentation. Following (Dumoulin et al., 2021), we have kept Describable Textures as a part of Metadataset and Flowers102 as a component of the VTAB dataset.

## **8.3.2 Ablation Studies**

We analyze the ranks of the tasks for maximum and minimum values of loss, loss-ratio, accuracy, and gradient norm in a batch with respect to the attention weights throughout the meta-training of TA-MAML in 5 way 1 and 5 shot settings on the miniImagenet dataset (Figures 8 and 9). Specifically, the highest-weighted task is given rank one, and the least-weighted task in a batch is given the last rank. We observe that the TA module does not assign maximum weight to the tasks with maximum or minimum values of test loss, loss-ratio, gradient norm, or accuracy throughout meta-training.
Thus, the TA module does not trivially learn to assign weights to the tasks based on any single component of the meta-information but learns useful latent information from all the components to assign importance to the tasks in a batch.

## **8.3.3 Relation Of Weights With Meta-Information**

In Figure 10, we illustrate the trend of the mean weighted loss across iterations for TA-MAML in 5 way 1 shot and 5 shot settings on the miniImagenet dataset. The trend indicates that the average weighted loss decreases over the meta-training iterations. The shaded region represents a 95% confidence interval over 100 tasks.

## **8.3.4 Computational Overhead**

The training time for all scheduling/sampling approaches is expected to be higher than their non-scheduling/sampling counterparts. We observe a three-fold increase in the training time from the vanilla setting for a model trained with our strategy, and a two-fold increase in the training time if a non-neural scheduling approach (Liu et al., 2021a) is employed. However, our approach significantly outperforms vanilla ML approaches and all state-of-the-art scheduling strategies on various datasets, training setups, and learning paradigms (Tables 2, 3, 4 and 5). As training is typically performed offline, the increased computational overhead is expected to be permissible. Further, ours, as well as the other approaches, perform vanilla fine-tuning during meta-testing (i.e., no task attention, neural scheduling or conflict-resolving mechanism is employed during meta-testing), resulting in comparable test time (15-20 seconds on 300 tasks for MAML 5-way 1- and 5-shot setups). We also note that we do not pre-train the attention network, unlike state-of-the-art schedulers like ATS.

![22_image_0.png](22_image_0.png)

Figure 8: Rank analysis of tasks for maximum and minimum values of loss, loss-ratio, accuracy and grad norm throughout the training of TA-MAML∗ for the 5 way 1 shot setting on the miniImagenet dataset.

Table 7: Comparison of few-shot classification performance of MAML and ANIL reported in the original papers (denoted by #) and the re-implementations by others on the miniImagenet dataset for 5 way 1 and 5 shot settings. The highest and lowest accuracies for an approach are represented in blue and red, respectively.

| Model | 1 Shot | 5 Shot |
|---|---|---|
| MAML# (Finn et al., 2017) | 48.07 ± 1.75 | 63.15 ± 0.91 |
| MAML (Antoniou et al., 2019) | 48.25 ± 0.62 | 64.39 ± 0.31 |
| MAML (Raghu et al., 2020) | 46.9 ± 0.2 | 63.1 ± 0.4 |
| MAML (Chen et al., 2018) | 46.47 ± 0.82 | 62.71 ± 0.71 |
| MAML (Oh et al., 2020) | 47.44 ± 0.23 | 61.75 ± 0.42 |
| MAML (Agarwal et al., 2021) | 47.13 ± 8.78 | 57.69 ± 7.92 |
| MAML (Arnold et al., 2021) | 46.88 ± 0.60 | 55.16 ± 0.55 |
| ANIL# (Raghu et al., 2020) | 46.7 ± 0.4 | 61.5 ± 0.5 |
| ANIL (Oh et al., 2020) | 47.82 ± 0.20 | 63.04 ± 0.42 |
| ANIL (Arnold et al., 2021) | 46.59 ± 0.60 | 63.47 ± 0.55 |

![23_image_0.png](23_image_0.png)

Figure 9: Rank analysis of tasks for maximum and minimum values of loss, loss-ratio, accuracy and grad norm throughout the training of TA-MAML∗ for the 5 way 5 shot setting on the miniImagenet dataset.

![23_image_1.png](23_image_1.png)

Figure 10: Trend analysis of weighted loss across meta-training iterations for TA-MAML∗ on 5 way 1 shot (left) and 5 shot (right) settings on the miniImagenet dataset. Iterations are in thousands.
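To make the workflow of Section 8.2 concrete, below is a minimal sketch, assuming PyTorch, of how per-task meta-information could be turned into an attention vector and a weighted meta-loss. All module and variable names are hypothetical; the exact architecture of the attention network is not specified in this appendix, so a small feed-forward network with a softmax over the batch is used here as a stand-in.

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Hypothetical task-attention module: maps per-task meta-information
    (query loss, accuracy, loss-ratio, gradient norm) to one weight per
    task in the batch."""
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, meta_info: torch.Tensor) -> torch.Tensor:
        # meta_info: (B, 4) -> attention weights (B,), normalized over the batch
        scores = self.net(meta_info).squeeze(-1)
        return torch.softmax(scores, dim=0)

def weighted_meta_loss(query_losses: torch.Tensor,
                       meta_info: torch.Tensor,
                       attention: TaskAttention) -> torch.Tensor:
    """Attention-weighted aggregate of the adapted models' query losses,
    replacing the uniform mean over the B tasks in a batch."""
    w = attention(meta_info)          # attention vector (Figure 7 - Label 6)
    return (w * query_losses).sum()   # drives the meta-model update (eq. 2)
```

In this sketch the weighted loss drives the update of the meta-model parameters θ (the paper's equation 2), while the attention parameters δ are themselves updated from the mean query loss of base-models adapted with the updated θ (equation 3).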
## **8.3.5 Hyperparameter Details**

The tables below list the hyperparameters for the miniImagenet, FC100, and tieredImagenet datasets; Setting N.K denotes the N way K shot setting.

**miniImagenet**

| Setting | Model | base lr | meta lr | attention lr | lambda |
|---|---|---|---|---|---|
| 5.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0748 |
| | TA-MAML∗ | 0.0763 | 0.0005 | 0.0004 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0529 | 0.0011 | 0.0004 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0012 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0012 | 0.0031 | - |
| | ANIL | 0.3000 | 0.0006 | - | - |
| | TA-ANIL∗ | 0.0763 | 0.0005 | 0.0004 | - |
| 5.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.7916 |
| | TA-MAML∗ | 0.0763 | 0.0005 | 0.0004 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0529 | 0.0011 | 0.0004 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0012 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0004 | 0.0001 | - |
| | ANIL | 0.3000 | 0.0006 | - | - |
| | TA-ANIL∗ | 0.0763 | 0.0005 | 0.0004 | - |
| 10.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.2631 |
| | TA-MAML∗ | 0.2551 | 0.0015 | 0.0001 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0627 | 0.0008 | 0.0013 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0015 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0009 | 0.0015 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2551 | 0.0015 | 0.0001 | - |
| 10.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0741 |
| | TA-MAML∗ | 0.2551 | 0.0015 | 0.0001 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0627 | 0.0008 | 0.0013 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0036 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0024 | 0.0002 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2551 | 0.0015 | 0.0001 | - |

**FC100**

| Setting | Model | base lr | meta lr | attention lr | lambda |
|---|---|---|---|---|---|
| 5.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0164 |
| | TA-MAML∗ | 0.2826 | 0.0003 | 0.0024 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0349 | 0.0008 | 0.0001 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0010 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0002 | 0.0074 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2826 | 0.0003 | 0.0024 | - |
| 5.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0153 |
| | TA-MAML∗ | 0.2826 | 0.0003 | 0.0024 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0349 | 0.0008 | 0.0001 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0002 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0007 | 0.0003 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2826 | 0.0003 | 0.0024 | - |
| 10.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0794 |
| | TA-MAML∗ | 0.2353 | 0.0002 | 0.0001 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.2583 | 0.0029 | 0.0007 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0021 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0005 | 0.0014 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2826 | 0.0003 | 0.0024 | - |
| 10.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.0193 |
| | TA-MAML∗ | 0.2353 | 0.0002 | 0.0001 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.2583 | 0.0029 | 0.0007 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0004 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0004 | 0.0090 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.2826 | 0.0003 | 0.0024 | - |

**tieredImagenet**

| Setting | Model | base lr | meta lr | attention lr | lambda |
|---|---|---|---|---|---|
| 5.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.3978 |
| | TA-MAML∗ | 0.0261 | 0.0005 | 0.0015 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0944 | 0.0003 | 0.0002 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0002 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0010 | 0.0006 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.0261 | 0.0005 | 0.0015 | - |
| 5.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.7733 |
| | TA-MAML∗ | 0.0261 | 0.0005 | 0.0015 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0944 | 0.0003 | 0.0002 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0009 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0012 | 0.0001 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.0261 | 0.0005 | 0.0015 | - |
| 10.1 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.4752 |
| | TA-MAML∗ | 0.0821 | 0.0002 | 0.0006 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0512 | 0.0007 | 0.0018 | - |
| | MetaLSTM | - | 0.005 | - | - |
| | MetaLSTM++ | - | 0.0011 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0018 | 0.0002 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.0821 | 0.0002 | 0.0006 | - |
| 10.5 | MAML | 0.5000 | 0.0030 | - | - |
| | TAML | 0.5000 | 0.0030 | - | 0.2501 |
| | TA-MAML∗ | 0.0821 | 0.0002 | 0.0006 | - |
| | MetaSGD | 0.5000 | 0.0030 | - | - |
| | TA-MetaSGD∗ | 0.0512 | 0.0007 | 0.0018 | - |
| | MetaLSTM | - | 0.0050 | - | - |
| | MetaLSTM++ | - | 0.0024 | - | - |
| | TA-MetaLSTM++∗ | - | 0.0015 | 0.0019 | - |
| | ANIL | 0.5000 | 0.0030 | - | - |
| | TA-ANIL∗ | 0.0821 | 0.0002 | 0.0006 | - |
Review 1:

Summary: In this paper, the authors investigate the importance of each task sampled via episodic sampling while training a model using meta-learning. Towards this, they propose to use an auxiliary network, which, based on certain features from each task (norm-of-gradients, val-acc, val-loss, etc.), learns a relative weighting for each task in a segment and uses that weighting to adjust the contribution of each task in the meta-learning training loss. Authors show through a variety of (relatively small-scale) experiments that adding such a learned task importance module in the meta-learning training paradigm can improve the performance of the model.

Strengths and Weaknesses:

Strengths
------------
* Overall, I like the idea when it comes to learning the importance of each task using certain characteristics from the task (which are properly ablated in the paper) and incorporating that weighting in the end-to-end meta-learning procedure. I also agree with the claim of the paper and associated experiments that this proposed technique is generic and can be added to any meta-learning algorithm that uses episodic sampling.
* I like the breadth of the experiments performed in terms of core results, cross-domain few-shot results, comparing against uniform sampling and the other paper, ablation studies etc.

Weaknesses
---------------
* As (probably) suggested by the previous reviewer, the experiments are done with a toy network and primarily compare against MAML, which is a method from 2017 and is quite far from state-of-the-art few-shot learning methods. For cross-domain, the paper only evaluates the method on a selected set of datasets. In order to truly understand the efficacy of the proposed method, the work needs to be applied to a reasonably sized network (e.g. ResNet-18/34 or ViT-B/32) and compared against state-of-the-art few-shot learning methods on Mini/Tiered-ImageNet and on the full Meta-Dataset test-suite. Outperforming MAML or ANIL is not sufficient at this point.
* The actual method is not well explained, e.g. I don't understand how the task weights are computed and then plugged back into the meta-learning procedure. Typically, in an episodic training regime, we randomly sample one task, meta-train using it, then sample another task, meta-train with that, and so on. But in this case, the authors need to have a set of tasks first to compute the relative importance over those, and only then can those tasks be used in the meta-training process. I am not clear about the specifics of how this is implemented - are you sampling N tasks first, computing the importance score, meta-learning using it and then again picking N new tasks? If so, how large is N in your implementation? Apologies if this is mentioned in the paper and I missed it.
* I do not understand why the task importance module is dubbed as task attention. It seems to me like a feed-forward network which is taking a set of features per task and applying a softmax layer on the output. Is there any attention between the tasks (e.g. self attention) or some other form of attention (e.g. cross-attention) at play here?

Requested Changes: Please see my comments regarding weaknesses.
* The proposed method needs to be explained better; the diagram can be made more intuitive and the whole workflow, e.g. how the tasks are getting sampled and being fed into the end-to-end learner, should be explained in a more succinct manner.
* Experimental results need to be performed using networks which are used in practice and should compare against SOTA methods on the benchmark tasks. For example, [this paper](https://arxiv.org/pdf/1911.04623.pdf) from 2019 achieves 64/81 1-shot/5-shot performance on Mini-ImageNet whereas this paper reports 53/66 (MAML+TA, Table 3).
* [Minor] Connecting the proposed method to existing methods from the ML/NLP literature on attention will help the reader better understand where the attention part is coming from.

Broader Impact Concerns: N/A.

==================================================

Review 2:

Summary: The paper investigates if per-element importance weights on batch elements can improve performance in a meta-learning setting. The paper proposes to add an attention network that assigns different importance weights to batch elements in a meta-learning framework. The attention network is trained end-to-end with the meta-learner and uses various hand-crafted features, such as the norm of the gradient and the ratio of losses before/after learning, to derive these weights. The proposed method can be combined with various meta-learners and was evaluated with MAML, MetaSGD, MetaLSTM, and ANIL.

Strengths and Weaknesses:

Strengths:
Considerable evaluations and ablation studies. I appreciate the authors' efforts to clarify the difference in performance from published/reproduced MAML, for example, as well as the parameter tuning details.
Intuitively convincing ablations/analyses. The authors show that attention weights do not have a clear interpretation in all cases. Also, I like the nice plot in Figure 5, which reinforces the intuition that all samples are equally important early in training but certain samples are revealed to be more "difficult" only later in training.

Weaknesses:
Lack of evaluation on MetaDataset, one of the largest datasets for meta-learning evaluation.
The performance seems to be far below recent works, which are far simpler algorithmically. For example "Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification (SUR)" by Dvornik et al. 2020 achieves 81.19 ± 0.41%, 63.93 ± 0.63% on 5-way 5-shot and 1-shot mini-ImageNet, compared to the authors' best performing model TA-MAML at 65.69 ± 1.08%, 50.35 ± 1.22%. SUR uses no meta-learning at all, simply supervised training with nearest neighbor-like classification.

Requested Changes: Please explain the gap in performance with SUR and why meta-learning is still a promising approach to few-shot learning.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: In this paper the authors propose a training schedule and attention module for weighting the tasks in a meta-batch based on their importance. The authors compare the proposed method on various datasets (e.g. miniImageNet, FC100, tieredImageNet, etc), methods (e.g. MAML, ANIL, MetaSGD), conditions (e.g. cross-domain), and against concurrent sampling approaches (e.g. ATS, GCP). Ablations are presented to highlight the importance of each component provided to the attention module. Overall the paper has improved from the previous version, and the empirical evidence is now more solid. However, I have some doubts regarding the overall impact of the work that I have discussed below in more detail.

Strengths and Weaknesses:

Strengths
---------
- The contribution of a task in the learning dynamics is an important topic that has not received a lot of attention.
- The authors provide a large number of experiments that give a good overview of the capabilities of the proposed method.
- The paper is overall clear, and has significantly improved from the earlier version.

Weaknesses
----------
1. My main concern regards the novelty and overall impact of the paper. A recent line of work has shown that standard fine-tuning methods (often called "transfer learning" methods) can achieve state-of-the-art accuracy on a variety of visual classification problems (e.g. Big Transfer, Kolesnikov et al., 2020). The performance gains obtained with the proposed method seem marginal when compared with the performances of those state-of-the-art fine-tuners. Note that other classes of meta-learners (e.g. ProtoNets, CNAPs-variants, etc.) have also proved to be able to scale to much more challenging benchmarks with good performance (Bronskill et al., 2021). All these methods are able to scale to large datasets/benchmarks such as MetaDataset (Triantafillou et al. 2019) and VTAB (Zhai et al. 2019), using deep architectures (e.g. ResNet50 or larger) and large images, while this seems more complicated for the proposed approach. My point here is that while the authors show improvements over standard MAML-like approaches, if we zoom out and consider the most recent developments in the field then those improvements become less significant. It is not clear to me how this paper can be framed when we consider this larger perspective. I would like to see a discussion by the authors about this point.
2. As a corollary of the previous point, while the method has been tested under domain shift (Section 5.4), the nature of the shift is rather limited. The method has been trained on miniImagenet and tested on CUB-200 and FGVC-Aircraft, which are composed of objects from similar classes. Generalization in this setting may be easier. I am not sure that the proposed method would be able to generalize well to very different tasks. Training on miniImagenet (or ImageNet) and testing on VTAB may be necessary to prove this point.

References
----------
Bronskill, J., Massiceti, D., Patacchiola, M., Hofmann, K., Nowozin, S., & Turner, R. (2021). Memory efficient meta-learning with large images. Advances in Neural Information Processing Systems, 34, 24327-24339.
Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., & Houlsby, N. (2020, August). Big transfer (BiT): General visual representation learning. In European Conference on Computer Vision (pp. 491-507). Springer, Cham.
Triantafillou, E., Zhu, T., Dumoulin, V., Lamblin, P., Evci, U., Xu, K., ... & Larochelle, H. (2019). Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096.
Zhai, X., Puigcerver, J., Kolesnikov, A., Ruyssen, P., Riquelme, C., Lucic, M., ... & Houlsby, N. (2019). A large-scale study of representation learning with the visual task adaptation benchmark. arXiv preprint arXiv:1910.04867.

Requested Changes: Refer to the points described in the "Weaknesses" section above.

Broader Impact Concerns: None.

==================================================

Metareview:

Recommendation: Reject

Comment: An early version of this paper was rejected with encouragement to resubmit. This is the resubmitted version. The paper was reviewed by three reviewers (who are not reviewers of the early version). Unfortunately all three reviewers recommended "leaning reject" after reviewing the rebuttal.
Reviewer 8WoW thinks that the paper did not shed enough light on the underlying importance of the various tasks in the batch, and the paper does not seem to provide enough empirical evidence on the performance, flexibility, and scalability of the method. Reviewer yWta thinks that the experiments are in a regime which is largely irrelevant today: small-scale networks, outdated meta-learning algorithms (e.g. MAML). Reviewer uLbm thinks that the proposed algorithm needs to either have enough technical novelty/insight or needs to do better compared to existing non-meta-learning few-shot classification methods. I must say that I appreciate the authors' insight and efforts. But I also think the reviewers' criticisms are valid. Given the overall negative reviews, I am afraid that I have to recommend reject.

==================================================
# Implicit Ensemble Training For Efficient And Robust Multiagent Reinforcement Learning

Macheng Shen macshen@mit.edu
Department of Mechanical Engineering
Massachusetts Institute of Technology

Jonathan P. How jhow@mit.edu
Department of Aeronautics and Astronautics
Massachusetts Institute of Technology

Reviewed on OpenReview: https://openreview.net/forum?id=LfTukxzxTj

## Abstract

An important issue in competitive multiagent scenarios is the distribution mismatch between training and testing caused by variations in other agents' policies. As a result, policies optimized during training are typically sub-optimal (possibly very poor) in testing. Ensemble training is an effective approach for learning robust policies that avoid significant performance degradation when competing against previously unseen opponents. A large ensemble can improve diversity during the training, which leads to more robust learning. However, the computation and memory requirements increase linearly with respect to the ensemble size, which is not scalable as the ensemble size required for learning a robust policy can be quite large (Czarnecki et al., 2020). This paper proposes a novel parameterization of a policy ensemble based on a deep latent variable model with a multi-task network architecture, which represents an ensemble of policies implicitly within a single network. Our implicit ensemble training (IET) approach strikes a better trade-off between ensemble diversity and scalability compared to standard ensemble training. We demonstrate in several competitive multiagent scenarios in the board game and robotic domains that our new approach improves robustness against unseen adversarial opponents while achieving higher sample-efficiency and requiring less computation.

## 1 Introduction

In competitive multiagent scenarios, agents learn concurrently (Lowe et al., 2017) and the learned policy of each agent depends on the joint policy of all the other learning agents. One challenge resulting from such multiagent learning is that the policy learned from training might not perform well in the testing environment, where the opponents' policies could be significantly different from those of the training opponents (distribution shift between training and testing). Even worse, the testing opponents could be trained to exploit the weaknesses of the policy learned from the training (Gleave et al., 2019). One effective approach to mitigate the performance degradation from training to testing is ensemble training, which has been applied in many previous works (Bansal et al., 2017; Lowe et al., 2017; Silver et al., 2016; Jaderberg et al., 2019; Vinyals et al., 2019) and has shown improved robustness against previously unseen opponents compared with training a single policy per agent. In ensemble training, each agent has multiple policies as in (Lowe et al., 2017; Jaderberg et al., 2019; Vinyals et al., 2019) or keeps multiple copies of previous policies as in (Silver et al., 2016; Bansal et al., 2017), from which one policy for each agent is sampled to play against each other. As a result, each policy is optimized against a distribution of the other agents' policies, which effectively reduces the distribution shift between training and testing (because a single policy can be regarded as a single Dirac-delta distribution centered at one point within the policy space).
However, one drawback of applying this ensemble training technique is the significant increase in computation and memory consumption due to the learning and storage of multiple policies. Besides, the number of policies required for guaranteed policy strength improvement is a function of the non-transitivity dimension of the multiagent scenario, which could be on the order of tens for simple games such as Tic-Tac-Toe and more than thousands for complicated real-world games (Czarnecki et al., 2020). As a result, this ensemble training approach could become intractable due to constraints on computational resources and/or end up learning weak policies due to insufficient policy diversity. In this paper, we propose a novel implicit ensemble training (IET) approach by formulating ensemble training as a multitask learning problem.¹ Instead of maintaining multiple policies explicitly with independent neural networks, our IET approach uses a multitask network architecture with a learnable conditional latent variable distribution to implicitly represent a policy ensemble. The multitask network architecture improves sample-efficiency via parameter sharing, and the conditional latent variable captures the diversity of the policy. Our contributions are: 1) we identify the cause of inefficiency in standard ensemble training; 2) we develop a novel algorithm that extends ensemble training with latent variables and a multitask network, resulting in a better trade-off between policy diversity and scalability, which is demonstrated by improved policy robustness with fewer samples and less computation compared with standard ensemble training.

## 2 Preliminaries

## 2.1 Markov Games

We use Markov Games (Littman, 1994) as the decision-making framework in our work. A Markov game for N agents is defined by a set of states $\mathcal{S}$ describing the possible configurations of all agents, a set of actions $\mathcal{A}_1, \ldots, \mathcal{A}_N$, and a set of observations $\mathcal{O}_1, \ldots, \mathcal{O}_N$ for each agent. Each agent has a stochastic policy $\pi_i : \mathcal{O}_i \times \mathcal{A}_i \mapsto [0, 1]$ and a reward function $r_i : \mathcal{S} \times \mathcal{A}_i \mapsto \mathbb{R}$.

## 2.2 Multiagent Reinforcement Learning (MARL)

The objective of each agent is to maximize its own cumulative reward $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$ with discount factor γ and time horizon T (Lowe et al., 2017). As a result, the learning problem is formulated as finding a joint policy $\pi = \{\pi_i\}_{i=1:N}$, where each policy maximizes its expected cumulative reward,

$$J_{i}=\mathbb{E}_{s\sim p^{\pi},\,a_{i}\sim\pi_{i},\,a_{-i}\sim\pi_{-i}}\left[R_{i}(s,a)\right],\tag{1}$$

with $p^{\pi}$ being the transition dynamics and the subscript $-i$ denoting the set $\{j \mid j \neq i,\ j = 1, 2, \ldots, N\}$.

## 2.3 MARL With Policy Distribution

In practice, learning with a single joint policy typically leads to over-fitting between policies and poor generalization to previously unseen policies. A variety of empirical studies (Lowe et al., 2017; Silver et al., 2016; Jaderberg et al., 2019; Vinyals et al., 2019; Lanctot et al., 2017; Bansal et al., 2017; Vinitsky et al., 2020) and a recent theoretical study (Czarnecki et al., 2020) suggest that it is necessary to maintain a diverse set of policies for each agent to improve the strength of the joint policy learned via MARL. Therefore, instead of learning a single joint policy, we consider the following objective function that learns a distribution of policies for each agent,

$$J_{i}=\mathbb{E}_{s\sim p^{\pi},\,a\sim\pi,\,\pi\sim\mathcal{P}(\Pi)}\left[R_{i}(s,a)\right],\tag{2}$$

where $\mathcal{P}$ is a joint distribution over the joint policy space $\Pi = \Pi_1 \times \Pi_2 \times \cdots \times \Pi_N$.
Each agent is learning its own policy distribution Πi to optimize its objective Ji subject to the joint distribution Π.

¹code: https://github.com/MachengShen/ImplicitEnsembleTraining

Note that the feasibility set of Eq. 2 contains that of Eq. 1, which is analogous to the relationship between a mixed-strategy Nash Equilibrium and a pure-strategy Nash Equilibrium (Nash et al., 1950). This relationship also suggests that Eq. 2 is a more general learning objective than Eq. 1. One simple parametrization of $\mathcal{P}_j(\Pi_j)$ is a uniform distribution over a policy ensemble of fixed size (a set of independent policies, each parameterized by a neural network), such as in (Lowe et al., 2017; Bansal et al., 2017; Vinitsky et al., 2020). Policy space response oracle (PSRO) (Lanctot et al., 2017) is another parametrization, with an expanding set of policies. At each iteration, a new policy produced by an oracle, via solving for the (approximate) best response against a mixture of other agents' policies, is added to the policy ensemble. PSRO leads to better convergence towards the Nash Equilibrium in poker games than self-play (a single policy) and fictitious self-play (a uniform distribution over previous policies). Despite the better convergence property, these ensemble approaches are not scalable because the computation increases linearly with respect to the ensemble size, which can be exponential with respect to the complexity of the multiagent interactions (Czarnecki et al., 2020) to guarantee policy improvement. This limitation motivates developing a new parametrization of the policy distribution with better parameter-efficiency, which leads to higher sample-efficiency for training.

## 3 New Approach

In this section, we present a parameter-efficient parametrization of the policy distribution, which we call an implicit ensemble. We start by discussing the relationship between a uniform ensemble and a latent-conditioned policy, and then show how our implicit ensemble approach extends the uniform ensemble for higher parameter-efficiency and policy diversity.

## 3.1 Generalization Of Ensemble Training As Latent-Conditioned Policy

Ensemble training with a uniform distribution over a fixed-sized set of policies is a common approach, e.g. in (Lowe et al., 2017; Bansal et al., 2017; Vinitsky et al., 2020), for improving the robustness of policies in MARL. In ensemble training, each agent's policy πi is an ensemble of K sub-policies, each denoted as $\pi_{\theta_i^{(k)}}$ or $\pi_i^{(k)}$ and parameterized by separate neural network parameters $\theta_i^{(k)}$. The generative process for the ensemble training policy πi is

$$k\sim\text{unif}(1,K)=\frac{1}{K}\sum_{j=1}^{K}\delta(k-j),\qquad\theta_{i}|k\sim\delta(\theta_{i}-\theta_{i}^{(k)}),\qquad a_{i}|\theta_{i}\sim f_{\theta_{i}}(o_{i}),\tag{3}$$

where δ is the Dirac-delta distribution. One interpretation of Eq. 3 is that πi is a conditional policy on a uniform discrete latent random variable k,

$$a_{i}|k\sim\bar{f}(o_{i},\theta_{i}^{(1)},\theta_{i}^{(2)},\ldots,\theta_{i}^{(K)};k)=f_{\theta_{i}^{(k)}}(o_{i}).\tag{4}$$

Eq. 4 suggests that ensemble training can be interpreted as a latent-conditioned policy (Haarnoja et al., 2018; Petangoda et al., 2019; Lee et al., 2019) conditioned on the discrete random variable k coming from a uniform distribution.
However, this parametrization is inefficient because only one out of the K sets of sub-policy parameters $[\theta_i^{(1)}, \theta_i^{(2)}, \ldots, \theta_i^{(K)}]$ is activated per sampling of ai. As a result, the rollout from executing one set of sub-policy parameters cannot be used to optimize the remaining K − 1 sets of sub-policy parameters, which reduces the sample efficiency by a factor of K.

## 3.2 Implicit Ensemble Training

Our IET generalizes ensemble training via two steps: 1) relax the discrete random variable k ∈ {1, 2, . . . , K} into a continuous latent variable $c \in \mathbb{R}^L$ with a learned distribution that adaptively captures the diversity of the policy ensemble; 2) replace the K independent neural networks with a unified modular network architecture with parameter ϕ that improves parameter-efficiency through knowledge sharing between sub-modules within the network. The continuous relaxation also makes the policy differentiable with respect to the latent variable, thus making it possible to synthesize new policies from the learned skills represented by the sub-modules within the modular network by perturbing the latent variable distribution.

![3_image_0.png](3_image_0.png)

Figure 1: Connections between a standard ensemble and our proposed implicit ensemble: In the standard ensemble (left), a random index is sampled and the corresponding network is selected to output an action; while in our implicit ensemble (right), a random Gaussian variable is sampled and encoded into a latent vector by a shaping network, which is then passed to a conditional policy network to output an action.

The generative process of this implicit ensemble is

$$z\sim\mathcal{N}(\mathbf{0},\mathbb{I}_{L\times L}),\qquad c|z=g_{\psi}(z),\qquad a_{i}|c\sim h_{\phi}(o_{i};c).\tag{5}$$

In Eq. 5, a random Gaussian noise vector z is sampled from the standard multivariate Gaussian distribution (sampled once at the beginning of each episode and kept fixed during the episode), then passed through a shaping network parameterized by ψ to output a latent condition variable c. Finally, the action is sampled from a policy that is parameterized by ϕ and conditioned on the latent condition variable c. Both ψ and ϕ are learnable parameters that are optimized end-to-end with respect to the reinforcement learning objective Eq. 2. The implicit ensemble Eq. 5 is a more flexible parameterization than the ensemble training Eq. 3. In fact, the latter is a special case of the former, where the distribution of the latent variable c collapses into a discrete distribution (corresponding to the uniform discrete distribution of k), and the shared parameter ϕ is divided into K disjoint sets of parameters, each of which is activated in the policy network when one of the discrete values of c is drawn. In contrast to the ensemble training formulation Eq. 3, where the ensemble size K is a hyperparameter to control the diversity of the (ensemble) policy, Eq. 5 does not contain this explicit hyperparameter. Instead, the diversity of the policy is captured adaptively by the learned latent distribution parameterized by ψ: a complicated multi-modal distribution corresponds to high diversity, while a simple uni-modal distribution corresponds to low diversity.

## 3.3 Model Architecture

Fig. 1 shows the model architecture for implementing the IET. There are two major components within this architecture: 1) a shaping network, corresponding to the mapping gψ(·) in Eq. 5
that transforms the Gaussian noise vector z into the latent condition variable c; and 2) a conditional policy network, corresponding to the parametrization hϕ. The shaping network is responsible for adding diversity to the conditional policy to improve the robustness of the learned policy, and the conditional policy network improves parameter-efficiency via knowledge sharing. The detailed design of these two networks is presented in the following sections.

## 3.3.1 Shaping Network

The shaping network takes a Gaussian noise vector $z \in \mathbb{R}^L$ as input and transforms z into a latent condition variable c, which modifies the standard multivariate Gaussian distribution into a learned (complicated) distribution. We use a feedforward network with 2 fully-connected layers followed by a Softmax layer to parameterize this shaping network.

## 3.3.2 Multi-Tasking Network

Recent studies (Yang et al., 2020; Devin et al., 2017; Huang et al., 2020) found that a modular architecture is a parameter-efficient way of learning multi-tasking policies. In ensemble training, the multi-tasking requirement naturally arises from the fact that each policy is optimized against the ensemble of policies of the other agents. To improve the parameter-efficiency of ensemble training, we use the modular architecture proposed in (Yang et al., 2020) as our conditional policy network for multi-tasking. We describe the details of this modular network in the Appendix. However, this specific network architecture is not the only choice for effective knowledge sharing. We show in Section 4.6 that a commonly used multi-head network is also effective for parameter-efficient ensemble training.

## 3.4 Model Training

Algorithm 1 shows the pseudocode of IET. At the beginning of each environment episode, each agent samples its own latent vector zi and then executes its conditional policy till the end of the episode. The latent vectors are fixed within each episode so that agents' policies are consistent across time steps. The model parameters ψi and ϕi are trained through policy gradient updates, which maximize the long-term discounted cumulative reward objective for each agent.

Algorithm 1 Implicit Ensemble Training
Require: Number of training episodes M, a set of agents with index in {1, . . . , N}
1: Initialize network parameters ψi, ϕi, i ∈ {1, . . . , N} for each agent
2: for j = 1 : M do
3:   for i = 1 : N do
4:     Sample latent vector $z_i \sim \mathcal{N}(\mathbf{0}, \mathbb{I}_{L\times L})$
5:   end for
6:   Rollout with each agent sampling its action via $a_i \sim h_{\phi_i}(o_i; g_{\psi_i}(z_i))$
7:   Update network parameters ψi, ϕi with policy gradient
8: end for
9: return ψi, ϕi

Although Algorithm 1 shows that each agent independently samples its private latent vector, it is beneficial to share the latent vectors between agents in certain situations. For example, consider a mixed cooperative-competitive scenario where two teams of agents compete against each other. Agents within the same team are incentivized to communicate their latent vectors or condition their policies on the same latent vector (which could be achieved without communication by using a common random seed) to achieve effective coordination. However, since the focus of this paper is on policy robustness, later sections of this work focus on experimental evaluation of the proposed IET approach in competitive scenarios involving two agents.

## 4 Evaluation

## 4.1 Scenarios

We evaluate our approach on two types of 2-player (which we refer to as the blue agent and the red agent hereafter) multiagent scenarios.
- **Board-game**: turn-based games implemented in the PettingZoo multiagent environment² (Terry et al., 2020) and the RLCard toolkit³ (Zha et al., 2019): **Connect Four**, **Leduc Hold'em**, and **Texas Hold'em (Limit)**.
- **RoboSchool-Racer**: continuous problems modified from the robot racing scenarios in the RoboSchool environment⁴: **Ant** and **Cheetah**, where we decompose each robot into front and rear parts, and assign opposite rewards to each part. As such, the front is learning to move forward while the rear is learning to move backward.

²https://www.pettingzoo.ml
³https://github.com/datamllab/rlcard
⁴https://github.com/openai/roboschool

## 4.2 Baselines

We compare our approach with the following baselines:

1. **Single Policy Training** (SPT): a standard multiagent training approach wherein each agent optimizes only one policy.
2. **Simple Ensemble Training** (SET): a standard ensemble training approach wherein each agent optimizes an ensemble of policies against each other. We use an ensemble size of 3 for each agent (an ensemble of 3 policies has been shown to improve robustness significantly in previous works (Lowe et al., 2017) and (Bansal et al., 2017)), which also increases the amount of computation by a factor of 3.
3. **Minimax Training** (MT): achieves worst-case robustness through optimizing one's policy against the worst-case opponents, which is formulated as a minimax optimization. We use the one-step approximation approach in (Li et al., 2019) with the Multiagent Actor-attention-critic (MAAC) algorithm (Iqbal & Sha, 2019).

## 4.3 Implementation Detail

We used the RLlib (Liang et al., 2018) implementation of Proximal Policy Optimization (PPO) with a minibatch size of 256 and a learning rate of $5 \times 10^{-5}$. We use independent networks for the policy and the value function approximations, and set the following hyperparameters for IET: L = 10 for the latent condition variable dimension; H = 64 for the hidden layer dimension in the shaping network; n = 2 and m = 2 for the number of layers and number of modules of the modular network; D = 64 and d = 64 for the embedding and module hidden dimensions. For the other approaches, each of the policy and the value networks consists of two fully-connected layers with 256 hidden units. In the simple ensemble training setting, each sub-policy within an ensemble only has a probability of 1/3 of being selected for execution. If the same number of environment rollouts were used to train the simple ensemble as used for the other training settings, the simple ensemble policy would perform poorly due to insufficient training. Therefore, we roll out two additional environment simulations (but counted as one training step when training the simple ensemble) for a fair comparison, at the cost of additional computational overhead.

To evaluate the robustness of the learned policies, we adopted a similar approach as in (Vinyals et al., 2019; Gleave et al., 2019) by training an independent exploiter agent. Specifically, we launched two concurrent threads, one for the training, the other for the testing, and repeated the following steps:

1. Train the blue agent and the red agent in the training thread for one training epoch.
2. Copy the blue agent's policy to the testing thread and freeze it.
3. Train the red exploiter agent in the testing thread against the fixed blue agent.

As a result, the red exploiter agent learns to exploit any weakness of the blue policy, and the corresponding reward is an informative indicator of the adversarial robustness of the blue policy.
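Before turning to the results, the following is a minimal sketch, assuming PyTorch, of the IET policy parameterization described in Sections 3.3 and 4.3 (L = 10, H = 64). The modular conditional network of (Yang et al., 2020) is replaced here by a plain MLP that concatenates the observation with the latent condition variable, and the observation/action dimensions are placeholders, not the actual environment dimensions.

```python
import torch
import torch.nn as nn

L, H = 10, 64  # latent dimension and shaping-network hidden size (Section 4.3)

class ShapingNetwork(nn.Module):
    """g_psi: two fully-connected layers followed by a Softmax (Section 3.3.1)."""
    def __init__(self, latent_dim: int = L, hidden: int = H):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim), nn.Softmax(dim=-1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)  # latent condition variable c

class ConditionalPolicy(nn.Module):
    """h_phi(o; c): stand-in for the modular network -- a plain MLP on [o, c]."""
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = L):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim))

    def forward(self, obs: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, c], dim=-1))  # action logits

# Per-episode sampling as in Algorithm 1: z is drawn once and held fixed.
g, h = ShapingNetwork(), ConditionalPolicy(obs_dim=42, act_dim=7)  # placeholder dims
z = torch.randn(L)      # z ~ N(0, I) at the start of the episode
c = g(z)                # shaped latent, fixed for the whole episode
obs = torch.zeros(42)   # placeholder observation
action = torch.distributions.Categorical(logits=h(obs, c)).sample()
```

Because z is held fixed for the whole episode, each draw of z behaves like selecting one member of a continuous policy ensemble, rather than injecting per-step exploration noise as entropy-regularized RL does (see Section 5.3).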
## 4.4 Results And Comparisons

This section discusses experimental results and compares baseline approaches with our proposed IET approach.

## 4.4.1 Adversarial Robustness

We investigate the adversarial robustness of the learned blue policy by training a red exploiter agent against the frozen blue agent policy. Table 1 shows the average testing reward, the best testing reward, and the average reward gap between training and testing. We compare our IET approach with two baselines. One baseline is our IET approach with the input latent noise fixed as a zero vector, and the other baseline is a standard ensemble with 10 policies. For the IET settings, we ran experiments across 25 random seeds. However, we found that, due to the difficulty of balancing the relative strength between the blue and the red agents, sometimes (for some random seeds) the blue agent's training reward converges to near -1.0, and so does the testing reward. Since we are interested in investigating the robustness of the learned policy (whether a strong training performance is sufficient to guarantee a strong testing performance), we filter out those seeds that lead to poor training performance (converged training reward lower than -0.5). There turned out to be 19 remaining valid seeds for each of the two IET settings, and the results are evaluated only across these valid seeds. As for the SET setting, we found that both the training and the testing reward are not very sensitive to random seeds, so we report the results across 4 random seeds. Our IET approach outperforms the baselines for both the mean testing reward and the best testing reward, especially for the best testing reward, where our IET reached a near-optimal testing reward for 4 out of the 19 valid seeds.

Table 1: Testing reward against exploiter and reward degradation

| Connect Four | IET (fixed latent) | SET (10) | IET |
|---|---|---|---|
| Mean testing reward | -0.40 ± 0.08 | -0.21 ± 0.06 | -0.06 ± 0.14 |
| Best testing reward | -0.16 | -0.03 | 1.0 |
| Mean training/testing reward gap | 0.70 ± 0.18 | 0.17 ± 0.08 | 0.42 ± 0.14 |

## 4.5 Competition Scores Between Training Settings

To evaluate the out-of-distribution robustness, we also include the competition scores between the blue agent and the red exploiter agent, calculated as $(r_{\text{blue}} - r_{\text{red}})/2$, for each pair of training settings in Table 2. The blue agent uses the policy learned in the training thread, while the red agent uses the exploiter policy learned in the testing thread. As a result, the diagonal scores are in favor of the red agent, because the red exploiter policy is trained against the corresponding blue policy, but not the other way around. The diagonal scores measure the adversarial robustness of the learned blue policy. In contrast, the off-diagonal scores measure the out-of-distribution robustness of the blue policy, because neither the blue policy nor the red policy has been trained against the other. However, drawing conclusions directly from Table 2 about the relative strength of the policies learned by the three training settings is difficult, because the competition scores show that there is no single policy that dominates all the rest. This non-transitive nature of real-world games has also been observed in previous work (Czarnecki et al., 2020).

Table 2: Competition scores between SPT, SET, and IET.
Higher score implies that the blue policy is stronger than the red policy. The last column lists the lowest scores over the columns for each row; our IET achieves the highest such scores among the training settings.

| Connect Four | SPT | SET | IET | Lowest across columns |
|---|---|---|---|---|
| SPT | -1.0±0.0 | -0.16±0.04 | 0.29±0.04 | -1.0±0.0 |
| SET | 0.35±0.04 | -0.98±0.01 | 0.65±0.03 | -0.98±0.01 |
| IET | 1.0±0.0 | 1.0±0.0 | -0.33±0.04 | -0.33±0.04 |

| Leduc Hold'em | SPT | SET | IET | Lowest across columns |
|---|---|---|---|---|
| SPT | -0.63±0.12 | -0.34±0.09 | 0.12±0.07 | -0.63±0.12 |
| SET | 0.55±0.14 | -0.04±0.1 | -0.07±0.09 | -0.07±0.09 |
| IET | 0.47±0.12 | 0.11±0.09 | -0.03±0.09 | -0.03±0.09 |

| Texas Hold'em | SPT | SET | IET | Lowest across columns |
|---|---|---|---|---|
| SPT | -1.47±0.32 | -0.12±0.11 | 0.0±0.05 | -1.47±0.32 |
| SET | -0.6±0.27 | -0.16±0.18 | 0.05±0.10 | -0.6±0.27 |
| IET | 0.39±0.24 | -0.09±0.16 | 0.13±0.12 | -0.09±0.16 |

To quantitatively evaluate the robustness of the learned blue policy, we provide two metrics: (1) Normalized score: we find the worst-case reward gap between blue and red, $\min_{\pi_r\in\Pi}\left[r_{\text{blue}}(\pi_b, \pi_r) - r_{\text{red}}(\pi_b, \pi_r)\right]$ with $\Pi = \{\text{SPT, SET, IET}\}$ (higher is better), and normalize it to [0, 1]. (2) Nash policy probability: we follow the approach proposed in (Balduzzi et al., 2018) by solving for the Nash Equilibrium (NE) (Holt & Roth, 2004) of the meta-game (Tuyls et al., 2020), which involves a row-player and a column-player whose actions are selecting which row/column policy to execute from the three available policies. The NE of the meta-game is a pair of stochastic policies. The probability of a policy being selected by the meta-player measures the strength (robustness) of this policy. We show these two metrics in Table 3 and Table 4. The results show that our IET approach consistently outperforms the other ones across all the scenarios.

Table 3: Normalized scores that measure the adversarial robustness.

| Scenarios / Settings | SPT | SET | IET |
|---|---|---|---|
| Connect Four | 5e-3 | 1e-2 | 0.33 |
| Leduc Hold'em | 0.18 | 0.46 | 0.48 |
| Texas Hold'em | 0.0 | 0.2 | 0.46 |

Table 4: Nash policy probability of the meta-player that measures the overall strength of the policy.

| Scenarios / Settings | SPT | SET | IET |
|---|---|---|---|
| Connect Four | 0.0 | 0.38 | 0.62 |
| Leduc Hold'em | 0.0 | 0.41 | 0.59 |
| Texas Hold'em | 0.0 | 0.0 | 1.0 |

## 4.5.1 Comparison With The Minimax Robust Approach

We also show in Table 5 the comparison of the ensemble training approaches with the minimax robust learning approach (Li et al., 2019), which maximizes the worst-case reward to obtain adversarial robustness. We use the RoboSchool environment because the minimax approach requires differentiability with respect to the action (therefore continuous-action environments). As the complexity of the environments increases, we see that the scores become noisier due to the difficulty the reinforcement learning algorithm has in optimizing the policy (indicated by the fact that fewer minimal scores across the columns are achieved at the diagonal).
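As an aside on how the Nash policy probabilities reported in Tables 4 and 6 can be computed: below is a minimal sketch, assuming NumPy/SciPy, that treats the meta-game as a two-player zero-sum matrix game and solves it as a linear program. The exact meta-game construction in (Balduzzi et al., 2018) may differ (e.g. maximum-entropy NE on an antisymmetric matrix), and the payoff matrix in the usage example is made up.

```python
import numpy as np
from scipy.optimize import linprog

def nash_meta_policy(payoff: np.ndarray) -> np.ndarray:
    """Maximin mixed strategy of the row meta-player for a zero-sum meta-game
    whose entry payoff[i, j] is the score of row policy i vs column policy j;
    in the zero-sum case this coincides with the Nash Equilibrium strategy."""
    n, m = payoff.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                    # maximize v <=> minimize -v
    A_ub = np.hstack([-payoff.T, np.ones((m, 1))])  # v - p^T A[:, j] <= 0, all j
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                               # probabilities sum to one
    b_eq = np.ones(1)
    bounds = [(0, None)] * n + [(None, None)]       # p >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]                                # Nash meta-policy probabilities

# Illustrative (made-up) payoff over the meta-actions [SPT, SET, IET]:
print(nash_meta_policy(np.array([[0.0, -1.0, -1.0],
                                 [1.0,  0.0, -0.5],
                                 [1.0,  0.5,  0.0]])))
```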
Table 5: Competition scores between SPT, SET, IET, and MT. IET achieves the best adversarial robustness in Ant as well as the best out-of-distribution robustness in both Ant and **Cheetah**.

| Ant | SPT | SET | IET | MT | Lowest across columns |
|---|---|---|---|---|---|
| SPT | 8.42±0.26 | 26.2±0.82 | 1.95±0.54 | -4.49±2.86 | -4.49±2.86 |
| SET | -35.3±1.54 | 5.09±1.0 | -23.7±0.86 | -24.5±2.03 | -35.3±1.54 |
| IET | 4.13±0.84 | 33.64±1.03 | 15.2±0.38 | 37.7±0.81 | **4.13±0.84** |
| MT | -25.37±1.73 | -2.53±0.90 | -13.85±1.44 | -25.7±0.50 | -25.7±0.50 |

| Cheetah | SPT | SET | IET | MT | Lowest across columns |
|---|---|---|---|---|---|
| SPT | -15.54±0.68 | -15.95±0.72 | -7.16±0.34 | 4.26±0.22 | -15.95±0.72 |
| SET | -9.05±0.21 | -12.60±0.64 | -1.76±0.41 | 6.45±0.29 | -12.60±0.64 |
| IET | -9.22±0.22 | -11.88±0.27 | 0.24±0.37 | 8.16±0.24 | -11.88±0.27 |
| MT | -13.71±0.59 | -20.12±0.65 | -4.02±0.50 | 4.84±0.38 | -20.12±0.65 |

Again, we show the Nash meta-policy probability in Table 6, which suggests our IET approach achieves the best robustness. The fact that the minimax robust approach does not work very well could be a result of two reasons: 1) the minimax formulation may not be a good approach to achieve out-of-distribution robustness, since it optimizes for the worst-case reward, which may lead to overly conservative behaviors that fail to exploit the weaknesses of the opponent; 2) the one-step solution technique for the minimax problem proposed in (Li et al., 2019) is an approximate solution, which may find a sub-optimal solution due to the difficulty of selecting a suitable step-size parameter, as has been reported in previous work (Sun et al., 2021).

Table 6: Nash meta-policy probability in the RoboSchool scenarios.

| Scenarios / Settings | SPT | SET | IET | MT |
|---|---|---|---|---|
| Ant | 0.0 | 0.37 | 0.63 | 0.0 |
| Cheetah | 0.0 | 0.0 | 1.0 | 0.0 |

## 4.6 Ablation Studies

To further understand the role of the shaping network and the multi-tasking network, we conduct two additional experiments: 1) we show that the shaping network learns a non-trivial mapping from the Gaussian noise to the latent variable only when diversity is required, by visualizing the mappings represented by the shaping networks learned in multiagent scenarios and comparing them with the one learned in a single-agent scenario; 2) we show that we can improve the sample-efficiency of the PSRO algorithm by sharing the intermediate layers within a multi-tasking network, and that the optimal performance is obtained at a medium level of sharing.

## 4.6.1 Policy Diversity

To verify our conjecture that the robustness of our IET approach is a result of IET's capability to present a diverse set of policies, we show the histograms of the action probabilities (Fig. 3) in the Connect Four game, and the corresponding training and testing rewards (Fig. 2). Fig. 2a shows that the IET agent's testing reward increases significantly at around 60 million training steps.
Fig. 3a and 3b show the action probability histograms of the IET agent at 15 million training steps, which are almost uni-modal. This lack of diversity explains the poor testing reward during the early training stage (0-60M steps). In contrast, Fig. 3c and 3d show the histograms at 60 million training steps, which span a much wider range of action probabilities. Specifically, Fig. 3c corresponds to the first time step of the game only, where the agent observation is always the empty board (deterministic). Therefore, the wide range of action probabilities is a result of the policy conditioning on the latent noise. Fig. 3e and 3f show the histograms at 75 million training steps, which are also uni-modal. Notice that the corresponding testing reward shown in Fig. 2 is near optimal. This result suggests that the IET agent has learned a strong policy (further evidence is that the probability of taking action 3, which corresponds to placing the first chip in the middle of the board, is near 1.0; it is known that the optimal strategy in Connect Four for the first player is to place the chip in the middle), which is in accordance with the fact that the optimal policy in Connect Four is a deterministic policy. Our results are also in accordance with the findings in (Czarnecki et al., 2020) that this Connect Four game has a spinning-top structure: when the policy is very weak (e.g. at 15M steps) or very strong (e.g. at 75M steps), there is little diversity; but within the intermediate region of the policy strength spectrum (e.g. at 60M steps), there are many diverse policies, and training against a sufficiently diverse set of policies will lead to improved policy strength. For comparison, Fig. 3g and Fig. 3h show the histograms of a SET agent of ensemble size 10, where the SET agent's action probabilities only span a few discrete values, implying less diversity compared with the IET agent at 60 million training steps. These results explain the superior testing reward achieved by the IET agent.

![9_image_0.png](9_image_0.png)

(a) IET agent reward (b) SET (size 10) agent reward

Figure 2: Training and testing rewards of the IET agent (left) and the SET agent of ensemble size 10 (right) in the Connect Four scenario.

## 4.6.2 Ablation On The Levels Of Sharing

We investigate how the level of sharing within the multi-tasking network influences the sample-efficiency. As our approach is not restricted to the modular network architecture, for the convenience of ablation, we instead use a more intuitive multi-head multi-tasking network architecture consisting of L = 5 fully connected layers, where the first Lsharing layers are shared and the last L − Lsharing layers are independent for each policy. We show in Fig. 4a, 4b, 4c and 4d the exploitability and its area under the curve (AUC) of the joint policy when running the PSRO algorithm with the multi-tasking network at different sharing levels. Lsharing = 0 (independent policies) corresponds to standard PSRO, and Lsharing = 5 (identical policies) corresponds to self-play. We see that the best exploitability descent happens at Lsharing = 2, while the two extremes (Lsharing = 0, 5) perform poorly. This observation suggests a trade-off between knowledge sharing (positive transfer) and loss of flexibility (negative transfer), which is commonly observed in multitask learning. This ablation study also verifies our design purpose that the multi-tasking network in our implicit ensemble approach is responsible for improving the sample-efficiency via sharing parameters.
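A minimal sketch of the multi-head multi-tasking network used in this ablation, assuming PyTorch (the hidden width and input/output dimensions are placeholders): the first Lsharing of the L = 5 fully-connected layers are shared across policies, and the remaining layers form policy-specific heads.

```python
import torch
import torch.nn as nn

def make_multihead_net(n_policies: int, obs_dim: int, act_dim: int,
                       l_sharing: int, hidden: int = 64, n_layers: int = 5):
    """First `l_sharing` of the L = n_layers fully-connected layers are shared
    across all policies; the remaining layers form policy-specific heads."""
    dims = [obs_dim] + [hidden] * (n_layers - 1) + [act_dim]
    def block(lo: int, hi: int) -> nn.Sequential:
        layers = []
        for k in range(lo, hi):
            layers.append(nn.Linear(dims[k], dims[k + 1]))
            if k < n_layers - 1:          # no activation after the output layer
                layers.append(nn.ReLU())
        return nn.Sequential(*layers)     # an empty Sequential acts as identity
    shared = block(0, l_sharing)
    heads = nn.ModuleList(block(l_sharing, n_layers) for _ in range(n_policies))
    return shared, heads

# l_sharing = 0 recovers standard PSRO (fully independent policies);
# l_sharing = 5 collapses all policies into one, i.e., self-play.
shared, heads = make_multihead_net(n_policies=4, obs_dim=16, act_dim=3, l_sharing=2)
logits = heads[1](shared(torch.randn(16)))  # forward pass through policy 1
```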
![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

(a) IET 15M training steps (empty board) (b) IET 15M training steps (c) IET 60M training steps (empty board) (d) IET 60M training steps (e) IET 75M training steps (empty board) (f) IET 75M training steps (g) SET (size 10) 60M training steps (empty board) (h) SET (size 10) 60M training steps

Figure 3: Histograms of action probabilities corresponding to the rewards shown in Fig. 2 at snapshots of the IET agent (at 15, 60, 75 million training steps) and the SET agent of ensemble size 10 (at 60 million training steps): 3a, 3c, 3e and 3g show the histogram of action probabilities at the first time step of the game (empty board in Connect Four); 3b, 3d, 3f and 3h show the histogram of action probabilities accumulated across all the time steps of the game. The IET agent's policy at 60M training steps covers a wide range of action probabilities.

![11_image_0.png](11_image_0.png)

Figure 4: Exploitability (interpreted as a distance from Nash Equilibrium (Lanctot et al., 2017)) and the area under the curve (AUC) of the joint policies learned through running PSRO with the multi-task network at different sharing levels. The optimal exploitability descent happens at medium sharing levels (Lsharing = 2). Scenarios: First Sealed Auction (FSA) and Leduc Hold'em (LH), implementations from OpenSpiel (https://github.com/deepmind/open_spiel).

Table 7 shows the scaling of network parameters and the representational power (policy diversity) of different approaches to parameterize the policy ensemble, where $K_{\text{policy-net}}$ denotes the number of parameters of the policy network, $K_{\text{head}}$ and $K_{\text{shared-net}}$ denote the number of parameters of the policy head and shared network in the multi-head multi-task network, and N denotes the number of policies within the ensemble. For the standard simple ensemble approach that uses independent networks for each policy, the total number of parameters scales linearly with respect to the number of policies N. For the multi-head ensemble that uses independent heads with a shared base network to parameterize policies, the number of parameters also scales linearly, where the scaling coefficient is $K_{\text{head}}$. For a small $K_{\text{head}}$, the parameter complexity is low. However, in this case, since the shared network becomes the dominant part of this multi-head network, the diversity of the policy ensemble parameterized by this multi-head network is compromised. In contrast, for a large $K_{\text{head}}$, the policy diversity is improved, but the parameter complexity scales poorly. For the implicit ensemble that uses a single shared network with a latent random variable to parameterize the policy distribution, the parameter complexity is constant. The policy diversity arises from the randomness of the continuous latent variable. As a result, the implicit ensemble can represent a continuous distribution of policies.

Table 7: Parameter scaling and policy diversity of different approaches to parameterize the policy ensemble.

| Approaches | Simple Ensemble | Multi-head Ensemble | Implicit Ensemble |
|---|---|---|---|
| Parameter complexity | $K_{\text{policy-net}} N$ (Linear) | $K_{\text{head}} N + K_{\text{shared-net}}$ (Linear) | $K_{\text{shared-net}}$ (Constant) |
| Policy diversity | N distinct policies | N distinct policies | Continuous distribution of policies |
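To illustrate the scaling in Table 7 numerically, here is a tiny sketch; all parameter counts in the usage example are made-up placeholders.

```python
def ensemble_param_counts(N: int, K_policy_net: int, K_head: int,
                          K_shared_net: int) -> dict:
    """Total parameter counts for the three parameterizations in Table 7."""
    return {
        "simple ensemble":     K_policy_net * N,           # linear in N
        "multi-head ensemble": K_head * N + K_shared_net,  # linear in N
        "implicit ensemble":   K_shared_net,               # constant in N
    }

# Made-up counts: only the first two parameterizations grow with N.
print(ensemble_param_counts(N=10, K_policy_net=100_000,
                            K_head=5_000, K_shared_net=80_000))
```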
## 5 Related Works

This work is at the intersection of robust MARL and multi-task RL.

## 5.1 Robust MARL

Ensemble training for robust MARL has been studied in many previous works such as (Lowe et al., 2017; Bansal et al., 2017; Jaderberg et al., 2017; 2019; Vinyals et al., 2019). These approaches focus on improving the robustness of the learned policies, regardless of the associated increase in computational complexity. In contrast, our work focuses on improving the efficiency of ensemble training without sacrificing robustness. Another approach for robust MARL is minimax policy optimization, in which each agent optimizes its policy assuming all the other agents and the environment dynamics are adversarial (see (Zhang et al., 2020; Li et al., 2019)). One difficulty with this approach is the requirement of solving the nested minimax optimization, which is typically approximated by optimizing the inner minimization for one step only (Li et al., 2019). In addition, the minimax formulation tends to result in overly conservative policies because of the pessimistic assumption about the other agents.

## 5.2 Multi-Task Reinforcement Learning

Our work is related to multi-task reinforcement learning (MTRL). MTRL aims to improve learning efficiency by knowledge sharing across tasks (D'Eramo et al., 2019). Recent works (Huang et al., 2020; Yang et al., 2020; Devin et al., 2017) found that the modular policy network is an effective architecture for improving parameter-efficiency via sharing of learned modules. However, these MTRL works focus on solving the single-agent multi-task problem. In contrast, our work leverages the multi-tasking network architecture combined with latent conditional policy learning (Haarnoja et al., 2018; Lee et al., 2019) to improve the efficiency of ensemble-based robust MARL. The innovation of our approach is re-formulating ensemble training as multi-task learning while using a latent variable to preserve policy diversity, which is essential for robust MARL.

## 5.3 Entropy-Regularized Reinforcement Learning

At first glance, our approach has similarities to entropy-regularized RL with a Gaussian policy (Haarnoja et al., 2017; Wei et al., 2018), which has the benefit of improved exploration. The key difference is that entropy-regularized RL samples a random Gaussian noise at each time step, while our approach samples a policy from a continuous policy space by sampling a latent Gaussian noise at the beginning of each episode and keeps it fixed across the whole episode. This difference is analogous to the difference between a stochastic behavioral strategy and a mixed strategy (Walker & Wooders, 2006) in game theory. Although adding randomness at each time step is helpful for smoothing the RL learning objective to avoid local minima (Ahmed et al., 2019), it is not as effective as randomizing over the policy for creating diversity, which enables our IET approach to improve policy robustness.

## 5.4 Stochastic Neural Network

Stochastic neural networks, such as networks with dropout layers (Gal & Ghahramani, 2016) and bootstrapped networks (Osband et al., 2016), are another efficient way of parameterizing a policy distribution. Compared with these approaches, our IET parametrization is more flexible and expressive. The dropout approach in (Gal & Ghahramani, 2016), for example, involves the dropout rate as a hyper-parameter, which determines the diversity of the policy ensemble.
As we discussed earlier, the required policy diversity for learning a robust policy distribution in real-world games also evolves with the training stage and policy strength, which makes it difficult to select a fixed dropout rate beforehand. In contrast, our approach enables adaptive policy diversity that is arguably more suitable for learning in these settings.

## 6 Conclusions

This paper proposes IET, an implicit ensemble training approach that effectively reduces the computational complexity of ensemble training while maintaining the policy diversity needed for learning robust multiagent policies. In contrast to previous ensemble training approaches that require optimizing multiple policy networks, our IET approach optimizes a single shared network, which requires less computation and memory. Numerical results show that our approach improves both the learning efficiency and the robustness of the learned policy. We identify two promising future directions to extend our work: 1) Improving the stability of our IET training. Our current approach does not impose any regularization on the learned representation of the latent conditional vector, which sometimes leads to mode collapse. A mutual information regularizer might be helpful to avoid mode collapse. 2) Our approach can be straightforwardly extended to scenarios where two teams of agents compete with each other. The latent vector can be shared across agents within the same team, acting as a shared signal from which team coordination can naturally emerge.

## References

Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy on policy optimization. In *International Conference on Machine Learning*, pp. 151–160. PMLR, 2019.

David Balduzzi, Karl Tuyls, Julien Pérolat, and Thore Graepel. Re-evaluating evaluation. *CoRR*, abs/1806.02643, 2018. URL http://arxiv.org/abs/1806.02643.

Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. *arXiv preprint arXiv:1710.03748*, 2017.

Wojciech Marian Czarnecki, Gauthier Gidel, Brendan Tracey, Karl Tuyls, Shayegan Omidshafiei, David Balduzzi, and Max Jaderberg. Real world games look like spinning tops. *arXiv preprint arXiv:2004.09468*, 2020.

Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. Sharing knowledge in multi-task deep reinforcement learning. In *International Conference on Learning Representations*, 2019.

Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. In *2017 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 2169–2176. IEEE, 2017.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In *International Conference on Machine Learning*, pp. 1050–1059. PMLR, 2016.

Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell. Adversarial policies: Attacking deep reinforcement learning. *arXiv preprint arXiv:1905.10615*, 2019.

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In *International Conference on Machine Learning*, pp. 1352–1361. PMLR, 2017.

Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, and Sergey Levine. Latent space policies for hierarchical reinforcement learning. In *International Conference on Machine Learning*, pp. 1851–1860. PMLR, 2018.
Charles A Holt and Alvin E Roth. The Nash equilibrium: A perspective. *Proceedings of the National Academy of Sciences*, 101(12):3999–4002, 2004.

Wenlong Huang, Igor Mordatch, and Deepak Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In *International Conference on Machine Learning*, pp. 4455–4464. PMLR, 2020.

Shariq Iqbal and Fei Sha. Actor-attention-critic for multi-agent reinforcement learning. In *International Conference on Machine Learning*, pp. 2961–2970. PMLR, 2019.

Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, et al. Population based training of neural networks. *arXiv preprint arXiv:1711.09846*, 2017.

Max Jaderberg, Wojciech Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Castaneda, Charles Beattie, Neil Rabinowitz, Ari Morcos, Avraham Ruderman, et al. Human-level performance in 3D multiplayer games with population-based reinforcement learning. *Science*, 364(6443):859–865, 2019.

Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. *arXiv preprint arXiv:1711.00832*, 2017.

Alex X Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. *arXiv preprint arXiv:1907.00953*, 2019.

Shihui Li, Yi Wu, Xinyue Cui, Honghua Dong, Fei Fang, and Stuart Russell. Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 4213–4220, 2019.

Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. RLlib: Abstractions for distributed reinforcement learning. In *International Conference on Machine Learning*, pp. 3053–3062. PMLR, 2018.

Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In *Machine Learning Proceedings 1994*, pp. 157–163. Elsevier, 1994.

Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. *arXiv preprint arXiv:1706.02275*, 2017.

John F Nash et al. Equilibrium points in n-person games. *Proceedings of the National Academy of Sciences*, 36(1):48–49, 1950.

Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. *Advances in Neural Information Processing Systems*, 29, 2016.

Janith C Petangoda, Sergio Pascual-Diaz, Vincent Adam, Peter Vrancx, and Jordi Grau-Moya. Disentangled skill embeddings for reinforcement learning. *arXiv preprint arXiv:1906.09223*, 2019.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

Chuangchuang Sun, Dong-Ki Kim, and Jonathan P How. Romax: Certifiably robust deep multiagent reinforcement learning via convex relaxation. *arXiv preprint arXiv:2109.06795*, 2021.

Justin K Terry, Benjamin Black, Ananth Hari, Luis Santos, Clemens Dieffendahl, Niall L Williams, Yashas Lokesh, Caroline Horsch, and Praveen Ravi. PettingZoo: Gym for multi-agent reinforcement learning.
*arXiv preprint arXiv:2009.14471*, 2020.

Karl Tuyls, Julien Perolat, Marc Lanctot, Edward Hughes, Richard Everett, Joel Z Leibo, Csaba Szepesvári, and Thore Graepel. Bounds and dynamics for empirical game theoretic analysis. *Autonomous Agents and Multi-Agent Systems*, 34(1):1–30, 2020.

Eugene Vinitsky, Yuqing Du, Kanaad Parvate, Kathy Jang, Pieter Abbeel, and Alexandre Bayen. Robust reinforcement learning using adversarial populations. *arXiv preprint arXiv:2008.01825*, 2020.

Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019.

Mark Walker and John Wooders. Mixed strategy equilibrium. *Retrieved December*, 19:2006, 2006.

Ermo Wei, Drew Wicke, David Freelan, and Sean Luke. Multiagent soft Q-learning. *arXiv preprint arXiv:1804.09817*, 2018.

Ruihan Yang, Huazhe Xu, Yi Wu, and Xiaolong Wang. Multi-task reinforcement learning with soft modularization. *arXiv preprint arXiv:2003.13661*, 2020.

Daochen Zha, Kwei-Herng Lai, Yuanpu Cao, Songyi Huang, Ruzhe Wei, Junyu Guo, and Xia Hu. RLCard: A toolkit for reinforcement learning in card games. *arXiv preprint arXiv:1910.04376*, 2019.

Kaiqing Zhang, Tao Sun, Yunzhe Tao, Sahika Genc, Sunil Mallya, and Tamer Basar. Robust multi-agent reinforcement learning with model uncertainty. *Advances in Neural Information Processing Systems*, 33, 2020.

## A Appendix A

We use the soft modular network proposed in (Yang et al., 2020) as the multi-tasking sub-network in our IET approach. The modular network is composed of a base network and a routing network. The base network is a modular network that has n layers, and each layer has m modules. Each module is a feedforward network whose input and output are d-dimensional vectors. The routing network takes the observation and the latent condition variable as inputs, and it then outputs n normalized weight vectors, one for each layer, for weighting the modules of the base policy network. The weight vectors $w^{j} \in \mathbb{R}^{m^{2}}$ are calculated as

$$\begin{array}{c}{{p^{1}=W_{d}^{1}\left(\sigma\left(F\left(o\right)\cdot H\left(c\right)\right)\right),}}\\ {{p^{j+1}=W_{d}^{j}\left(\sigma\left(W_{u}^{j}\,p^{j}\cdot\left(F\left(o\right)\cdot H\left(c\right)\right)\right)\right),}}\\ {{w^{j}=\mathrm{Softmax}(p^{j}),}}\end{array}\tag{6}$$

where F and H are the embedding layers, which map the observation vector o and the latent condition variable c into D-dimensional embeddings, and σ is the activation function (we use the ReLU activation). $W_{u}$ and $W_{d}$ are fully-connected layers of size $\mathbb{R}^{D\times m^{2}}$ and $\mathbb{R}^{m^{2}\times D}$, respectively. The base network takes the observation vector o as input and outputs policy logits / value, with the following relationship between the input of the i-th module in the (j+1)-th layer and the output of the l-th module in the j-th layer:

$$f_{i}^{j+1}=\sum_{l=1}^{m}w_{i,l}^{j}\left(\sigma\left(W_{l}^{j}\,\hat{f}_{l}^{j}\right)\right),\tag{7}$$

where $W_{l}^{j} \in \mathbb{R}^{d\times d}$ are the learnable module parameters.
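As a rough illustration of Equations (6) and (7), the following PyTorch sketch implements the routing computation and the weighted module composition. It is a simplified reading rather than our exact implementation: the input projection into the module width, the final pooling of module outputs, and the default values of n, m, d, and D are assumptions made to keep the example self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModularNet(nn.Module):
    """Sketch of the soft modular network: n layers of m modules (each a
    d-to-d feedforward block) plus a routing network producing per-layer
    weights w^j = Softmax(p^j) in R^{m*m}. Sizes are illustrative."""

    def __init__(self, obs_dim, cond_dim, n=3, m=2, d=32, D=32):
        super().__init__()
        self.n, self.m = n, m
        self.emb_o = nn.Linear(obs_dim, D)    # F: observation embedding
        self.emb_c = nn.Linear(cond_dim, D)   # H: latent-condition embedding
        self.in_proj = nn.Linear(obs_dim, d)  # project o to the module width d
        # W_d^j: D -> m^2 (one per layer) and W_u^j: m^2 -> D (Eq. 6).
        self.W_d = nn.ModuleList(nn.Linear(D, m * m) for _ in range(n))
        self.W_u = nn.ModuleList(nn.Linear(m * m, D) for _ in range(n - 1))
        # Module parameters W_l^j in R^{d x d} (Eq. 7).
        self.blocks = nn.ModuleList(
            nn.ModuleList(nn.Linear(d, d) for _ in range(m)) for _ in range(n)
        )

    def forward(self, o, c):
        g = self.emb_o(o) * self.emb_c(c)          # F(o) . H(c), elementwise
        p = self.W_d[0](F.relu(g))                 # p^1 (Eq. 6)
        feats = [self.in_proj(o)] * self.m         # first-layer module inputs
        for j in range(self.n):
            w = F.softmax(p, dim=-1).view(self.m, self.m)   # w^j = Softmax(p^j)
            # Eq. 7: f_i^{j+1} = sum_l w^j_{i,l} * sigma(W_l^j f_l^j)
            outs = [F.relu(self.blocks[j][l](feats[l])) for l in range(self.m)]
            feats = [sum(w[i, l] * outs[l] for l in range(self.m))
                     for i in range(self.m)]
            if j < self.n - 1:                     # p^{j+1} (Eq. 6)
                p = self.W_d[j + 1](F.relu(self.W_u[j](p) * g))
        return sum(feats)                          # pooled output (output head omitted)

net = SoftModularNet(obs_dim=84, cond_dim=8)
out = net(torch.randn(84), torch.randn(8))
```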
Review 1:
Summary: The authors introduce a more compute-efficient way to induce policy diversity in agents and show that these agents show increased robustness in the face of novel players in multi-agent games (MARL). Specifically, they introduce implicit ensemble training (IET), which parameterizes an ensemble as a goal-conditioned network with the goal coming from a source of external noise. They compare this approach to single-policy agents, simple ensembles (i.e. N discrete networks), and adversarially trained agents on several two-player adversarial games. They also analyze the effect of cross-player parameter sharing, though how this connects to their primary contribution is somewhat unclear.
Strengths and Weaknesses:
## Strengths
* The introduction to MARL was well-written
* The problem statement was quite clear and well-motivated -- I agree that we need more compute-efficient ensembles
* The approach chosen seems simple to implement
## Weaknesses
1) Everything in your problem framing suggests that IET shouldn't help training-time performance: a single policy is as expressive for best-response. IET should *only* help for notions of robustness and generalization. However, your results show that IET does significantly boost performance during training (perhaps more so than at test time...). This can be explained by the large differences between the network architecture of IET and the baseline methods. Ideally, all methods should use the same architecture. Since the modular architecture used by IET is inherently multi-task, perhaps keeping the conditioning variable constant for baseline methods would be sufficient.
2) Prior work has shown the general difficulty in learning diverse sets of policies, even in the single-agent regime (e.g. "Diversity is all you need"). I'm highly dubious that merely conditioning on a random variable will yield policies that are diverse w.r.t. that random variable without any explicit objective pushing it to do so (e.g. the mutual information between the random variable and the resultant behavior). The only evidence you show for policy diversity is the increased performance (which could have other causes, e.g. 1)) and the T-SNE plots of the random variable in different games. Interpreting the complexity of T-SNE images is already a fraught undertaking, but even if we assume the conditioning variable is more diverse in harder environments, that isn't what we care about. We care about policy diversity: the extent to which a change in the conditioning yields a meaningful change in behavior. Without such evidence, how can we be sure that your explanation of the performance improvements is correct?
3) The experimental results could be better formatted. The top and bottom rows of Figures 2-4 should be redundant, since one player's performance fully determines the performance of the other player, so the bottom rows should be removed. I'm a bit confused why they aren't perfectly symmetric, e.g. shouldn't Figure 3a bottom just be the negation of Figure 3a top? The axes aren't shared across columns, making cross-method comparison difficult -- why not just have a single figure comparing IET to both baselines?
4) How does Section 4.6.2 tie into the rest of the paper? Cross-player parameter sharing is another way to reduce computation, but it seems orthogonal to IET and I'd suggest removing it.
5) Ablate aspects of your approach. Specifically, IET without modular networks and IET without the shaping network. Choices for the input noise (why Gaussian, not categorical?)
and the shaping network output activation should at least be explained and potentially ablated.
Requested Changes:
Level of criticality: ** -> **** (least to most critically important for acceptance)
**** Evidence that IET helps test-time performance when controlling for train-time performance
*** Evidence that IET yields a diverse set of policies, at least comparable to explicit ensemble methods.
** Replace 4.6.2 with ablations of IET
** Remove the T-SNE stuff or explain how your interpretation is justified
** Improved experimental presentation as per 3)
Broader Impact Concerns: None

==================================================

Review 2:
Summary: This paper addresses the problem of robustness to distribution mismatches between training and testing in multi-agent settings. Previous works demonstrated the effectiveness of ensemble training to overcome the distribution mismatch, but the computation and memory complexity scale linearly with the size of the ensemble. This paper proposes an implicit ensemble training approach, called IET, in which a single policy architecture is conditioned on a latent variable to produce diverse behaviors, effectively simulating an ensemble of policies. This simple procedure is evaluated on a series of board games and continuous control tasks, in which IET shows robustness to distribution mismatches and improved sample efficiency thanks to parameter sharing.
Strengths and Weaknesses:
Strengths
- (Simplicity of the Approach) The paper proposes a simple and reasonable approach to achieve robustness to distribution mismatches while preserving sample and computation efficiency;
- (Experimental Analysis) The experimental analysis includes various domains and a rich set of metrics, illustrations, and ablations;
- (Clarity of the Presentation) The paper is well-written and easy to follow;
- (Related Work) The paper does a good job in relating the presented contributions with previous works.
Weaknesses
- (Significance of the Experiments) The statistical significance of the reported results is mostly unclear. The text does not explicitly state the number of seeds employed in the experiments, nor the meaning of shaded areas in the plots and performance ranges in the tables;
- (Clarity of the Results) The experimental results are not always strong. The experimental section tries to convey too much information instead of focusing on a few crucial results.
General Comment
The paper tackles a relevant problem with a very reasonable approach, by borrowing ideas from multi-task learning to improve the efficiency of ensemble training. Having read the first three sections I was sold on the value of the methodology and the benefits it can provide. However, the experimental section is slightly underwhelming, as it struggles to clearly validate the promises of the previous sections, perhaps trying to convey too much information. While I am not an expert in the multi-agent domain, I think this paper provides a valuable contribution overall, even if the paper could be improved in a couple of aspects (more details below).
Minor Comments
- In Figure 2, it is not immediately clear what is changing between the top row and the bottom row of plots;
- There is a duplicated reference for (Czarnecki, 2020).
Requested Changes:
Critical
- (Significance of the Results) I encourage the authors to detail the number of seeds employed in the experiments, and to comment on the statistical significance of the reported results.
Somewhat Critical
- (Main Experimental Result) To my understanding, the main message the paper is trying to convey is that IET could provide similar robustness w.r.t. distribution mismatches as training with large ensembles, while avoiding the computation and memory blow-up. However, the experimental analysis only provides a comparison with a rather small ensemble (of three policies). I would suggest the authors center their experimental section on a main experiment that compares the robustness achieved by IET with standard ensembles of increasing size. It would be especially crucial to show how many policies are needed to match the robustness of IET, while detailing the computation and memory requirements of the explicit ensemble.
Strengthening the Work
- (Analysis of the End-to-End Training) I am not completely convinced it is safe to train the diverse module end-to-end. What is preventing the diverse module from collapsing to a Dirac delta, especially if the game admits a pure strategy equilibrium? I think that a deeper discussion on the pros/cons of end-to-end training could nicely complement the presentation of the IET approach.
- (Minmax Comparison) I was actually expecting the comparison against the MT baseline to act as a sanity check, i.e., to show that IET is not far off the robustness that can be achieved by an "ideal" method, while being much more efficient in practice. Instead, the comparison against MT is quite unclear, and it seems to underline the flaws of the baseline rather than the merits of IET. I would suggest the authors provide a comparison in a much simpler domain where MT can perform well, or move the comparison to the appendix to free some space for more interesting results.
Broader Impact Concerns: I do not believe this paper needs to explicitly address the potential ethical implications of the work.

==================================================

Review 3:
Summary: This paper studies the topic of ensemble training, which is an effective approach for learning robust policies that avoid significant performance degradation when competing against previously unseen opponents, since a large ensemble can improve diversity during training, which leads to more robust learning. This paper proposes a parameterization trick for a policy ensemble based on a deep latent variable model with a multi-task network architecture, and demonstrates in several competitive multiagent scenarios in the board game and robotic domains that the proposed approach improves robustness against unseen adversarial opponents while achieving higher sample-efficiency and less computation.
Strengths and Weaknesses:
**Strengths**
● A novel solution to a real problem in ensemble methods is proposed. By proposing a single model that represents different agents dependent on the sampled latent variable, they are able to mitigate the major issue of computation scaling linearly with individual models for each ensemble agent.
● Much of the paper is well-written and also well-motivated.
● There has been a good initial attempt at the evaluation. For example, I believe the selection of environments is reasonably complex and I found the ablation studies, in particular on the levels of sharing, to be thought-provoking for the usage of single models in ensemble methods.
**Weaknesses**
● In section 3.4 there is discussion on how the latent vector can be used in the mixed cooperative-competitive setting in which two teams of agents compete against each other.
However, there is no example of this in the evaluation section. In my opinion, if it could be shown that the IET method is able to produce effective coordination between agents in a team, then this would greatly increase the benefit and appeal provided by IET.
● There are aspects of the conditional variable that are unclear to me. Mainly, the sampling of the latent vector and the outputted latent condition variable do not seem to have any relevance to the observations of the environment nor the opponent that is being trained against. Understandably, I may not be fully grasping what is happening - but my instinct would be that the conditioning variable that essentially dictates how the agent is going to play in the environment should be influenced by the opponent in some way. Please could you clarify where I may be incorrect, or how the conditioning variable is influenced by the opponent?
● In my opinion, the related work and how this work fits in misses two critical areas.
● Firstly, the discussion with respect to PSRO methods is limited and does not mention any examples of extensions to PSRO that aim to deal with scalability problems. For example - P2SRO [McAleer et al., 2020], NAC [Feng et al., 2021]. In addition, I view PSRO-style methods as the major competitors to this approach in terms of performance and would expect there, in general, to have been a wider discussion of PSRO methods that are not just focused on improved scalability - Rectified PSRO [Balduzzi et al., 2019], Diverse PSRO [Nieves et al., 2021], JPSRO [Marris et al., 2021].
● Secondly, NeuPL [Liu et al., 2022] was introduced recently and I am surprised this has not been discussed by this work. NeuPL additionally utilises a single conditional model that conditions on the strategy of the opponent in order to improve the transfer of skills between agents. It would be great to hear the authors' thoughts on how IET improves over NeuPL and also where it maybe falls short of NeuPL.
● In my opinion, the evaluation section is not extensive enough in terms of baselines.
● Firstly, a similar weakness to the one above, but I think the major competitors to IET are PSRO methods. It would be great to see an environment where the number of PSRO policies needed becomes intractable, whilst IET is able to handle the problem due to its ability to represent a larger number of policies more efficiently.
● Similarly again, it would be great to see a comparison to NeuPL if feasible.
● In terms of the Simple Ensemble Training, the setup seems rather basic. For example, an ensemble size of 3 feels small in comparison to the amount of diversity that I believe IET can show, but also why is uniform sampling over the ensemble utilised? Is not using something like the Nash distribution over the policies more standard, and potentially a fairer comparison?
● Personally, I found figures 2, 3, and 4 confusing to understand, most notably due to their poor legends.
Requested Changes:
● Improvements to the related work section as mentioned in the weaknesses.
● Update figures 2, 3, and 4 for improved clarity about what each line represents and make it clearer why these figures show good performance of IET.
● Improvements to the baselines used as mentioned in the weaknesses.
● Experiment looking at team settings in mixed competitive-cooperative environments.
Broader Impact Concerns: None

==================================================

Metareview:
Recommendation: Accept with minor revision
Comment: The paper studies how to address distributional robustness in the multi-agent RL (MARL) setting against novel opponents, and proposes to replace network ensembles, which can be computationally expensive to scale, with a single latent-conditioned policy with external noise. Two reviewers recommended "leaning accept" while one reviewer recommended "leaning reject". A "leaning accept" also indicates a borderline decision. Given the recommendations, my proposal is "Accept", mainly because:
- This is a simple, easy-to-understand and easy-to-apply method with a clear audience (MARL). Other papers may adopt this method as a future baseline.
- Reviewers appear satisfied with the efforts in the rebuttal: e.g., many of Reviewer Cc15's comments were addressed with new experiments (e.g. Jacobian norm), Reviewer mj1G's feedback on the lack of larger ensembles as baselines was addressed with a new experiment with ensembles of size 5/10 and additional related work was discussed, and Reviewer obGm's concern on statistical significance is partially addressed.
However, this "Accept" is borderline, and there are worries about the work. Please provide these two additional revisions:
- Statistical significance of the results: the results are quite noisy and the whole validity of the method/domain seems to be questioned by Reviewer obGm recommending "leaning reject" as well as Reviewer Cc15 recommending "leaning accept" with a careful note. Could you follow up with additional experiments to convince these two reviewers about the statistical significance of the results, e.g., by running more seeds or qualifying the claims?
- Lack of discussion of the literature on efficient ensembles: replacing ensembles with a stochastic network is not novel. It has been debated/studied in many papers on Bayesian deep learning (sometimes applied to sparse-reward exploration problems in RL), e.g. dropout [1] vs bootstrapped DQN [2]. These papers and other work should be referenced and added to the related work section.
[1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a bayesian approximation: Representing model uncertainty in deep learning." international conference on machine learning. PMLR, 2016.
[2] Osband, Ian, et al. "Deep exploration via bootstrapped DQN." Advances in neural information processing systems 29 (2016).

==================================================
# Find Your Friends: Personalized Federated Learning With The Right Collaborators

Anonymous authors Paper under double-blind review

## Abstract

In the traditional federated learning setting, a central server coordinates a network of clients to train one global model. However, the global model may serve many clients poorly due to data heterogeneity. This problem can be mitigated when participating clients learn personalized models that can better serve their own needs. By noting that each client's distribution can be represented as a mixture of all clients' distributions, we derive a principled algorithm based on expectation maximization. Our framework, FedeRiCo, estimates the utilities of other participants' models on each client's data so that everyone can select the right collaborators for learning. As a result, each client can learn as much or as little from other clients as is optimal for its local data distribution. Additionally, we theoretically analyze the convergence of FedeRiCo and empirically demonstrate its communication efficiency even in the fully decentralized setting. Our algorithm outperforms other federated, personalized, and/or decentralized approaches on several benchmark datasets, being the *only* approach that consistently performs better than training with local data alone.

## 1 Introduction

Figure 1: **Left:** Noisy data points generated for clients along a sine curve (solid) where the x and y axes are the input and target, respectively. The FedAvg model (dotted line) fails to adapt to the local data seen by each client, in contrast to our FedeRiCo personalized models (dashed lines). **Right:** The collaboration weights learned by FedeRiCo to average participant outputs for each client. Clients learn to collaborate only with similar neighbours.

Federated learning (FL) (McMahan et al., 2017) offers a framework in which a central server-side model is collaboratively trained across decentralized client datasets. This approach has been successfully implemented in practice for developing machine learning models without direct access to client data, which is crucial in heavily regulated industries such as banking and healthcare (Long et al., 2020; Sadilek et al., 2021). For example, multiple hospitals that collect patient data may desire to merge their datasets for increased diversity and size, but are unable to do so due to privacy regulations.

Statistical heterogeneity (Zhao et al., 2018; Adnan et al., 2022) is a major and common practical challenge in FL, where each client may hold different data distributions. Traditional FL methods like Federated Averaging (FedAvg) (McMahan et al., 2017) have demonstrated promising performance with homogeneous client data. However, these methods often struggle to handle statistical heterogeneity for two main reasons. Firstly, the variation in client distributions can lead to divergences in weight updates during training (Zhao et al., 2018). Secondly, it can be challenging for a single global model to provide optimal performance across all clients during inference. As an illustrative example, consider a simple scenario where each client seeks to fit a linear model to limited data on an interval of the sine curve, as shown in Figure 1. This is analogous to the FL setting where several participating clients would like to collaborate, but each client only has access to data from its own data distribution.
It is clear that no single linear model can adequately describe the entire joint dataset, so a global model learned by FedAvg can perform poorly, as shown by the dotted line. Ideally, each client should benefit from collaboration by increasing the effective size and diversity of data available, but in practice, forcing everyone to use the same global model without proper personalization can hurt performance on their own data distribution (Kulkarni et al., 2020; Tan et al., 2022).

To address this, we propose **Fede**rating with the Right Collaborators (FedeRiCo), a novel personalized framework that allows every client to find other participants with similar data distributions to collaborate with. As illustrated in Figure 1, FedeRiCo enables each client to choose the *right collaborators*; clients are able to correctly leverage information from neighboring clients when it is beneficial to do so. The final personalized models can serve the local distributions well, as demonstrated in the left plot. More specifically, FedeRiCo assumes that each client has an underlying data distribution, and exploits the hidden relationships among the clients' data. By selecting the most relevant clients, each client can collaborate as much or, maybe more importantly, as little as they need, and learn a personalized mixture model to fit the local data. Additionally, FedeRiCo can achieve this in a fully decentralized manner that is not beholden to any central authority (Li et al., 2021a; Huang et al., 2021; Kalra et al., 2023).

Our contributions We propose FedeRiCo, a novel personalized FL framework based on expectation-maximization (EM). We also show that with some common assumptions, our algorithm is guaranteed to converge. Through extensive experiments on several benchmark datasets, we demonstrate that our approach finds good client collaboration and outperforms other methods in the non-i.i.d. setting.

Paper outline The rest of the paper is organized as follows. In Section 2 we discuss related approaches towards personalized federated learning. Section 3 describes our algorithm formulation, its relationship to expectation-maximization, and an efficient protocol for updating clients. We provide experimental results in Section 4, and conclude in Section 5.

## 2 Related Work For Personalized Federated Learning

Meta-learning Federated learning can be interpreted as a meta-learning problem, where the goal is to extract a global meta-model based on data from several clients. This meta-model can be learned using, for instance, the well-known Federated Averaging (FedAvg) algorithm (McMahan et al., 2017), and personalization can then be achieved by locally fine-tuning the meta-model (Jiang et al., 2019). Later studies explored methods to learn improved meta-models. Khodak et al. (2019) proposed ARUBA, a meta-learning algorithm based on online convex optimization, and demonstrated that it can improve upon FedAvg's performance. PerFedAvg (Fallah et al., 2020) uses the Model Agnostic Meta-Learning (MAML) framework to build the initial meta-model. However, MAML requires computing or approximating the Hessian term and can therefore be computationally prohibitive. Acar et al. (2021) adopted gradient correction methods to explicitly de-bias the meta-model from the statistical heterogeneity of client data and achieved sample-efficient customization of the meta-model.
Model regularization / interpolation Several works improve personalization performance by regularizing the divergence between the global and local models (Hanzely & Richtárik, 2020; Li et al., 2021b; Huang et al., 2021; Zhang et al., 2022). Similarly, PFedMe (Dinh et al., 2020) formulates personalization as a proximal regularization problem using Moreau envelopes. FML (Shen et al., 2020) adopts knowledge distillation to regularize the predictions between local and global models and handle model heterogeneity. In recent work, SFL (Chen et al., 2022a) also formulates personalization as a bi-level optimization problem with an additional regularization term on the distance between local models and their neighbor models according to a connection graph. Specifically, SFL adopts a GCN to represent the connection graph and learns the graph as part of the optimization to encourage useful client collaborations. Introduced by Mansour et al. (2020) as one of the three methods for achieving personalization in FL, model interpolation involves mixing a client's local model with a jointly trained global model to build personalized models for each client. Deng et al. (2020) further derive generalization bounds for mixtures of local and global models.

Multi-task learning Personalized FL naturally fits into the multi-task learning (MTL) framework. MOCHA (Smith et al., 2017) utilizes MTL to address both systems and statistical heterogeneity but is restricted to simple convex models. VIRTUAL (Corinzia et al., 2019) is a federated MTL framework for non-convex models based on a hierarchical Bayesian network formed by the central server and the clients, and inference is performed using variational methods. SPO (Cui et al., 2022) applies Specific Pareto Optimization to identify the optimal collaborator sets and learn a hypernetwork for all clients. While also aiming to identify necessary collaborators, SPO adopts a centralized FL setting with clients jointly training the hypernetwork. In contrast, our work focuses on decentralized FL where clients aggregate updates from collaborators and jointly make predictions.

In a similar spirit to our work, Marfoq et al. (2021) assume that the data distribution of each client is a mixture of several underlying distributions/components. Federated MTL is then formulated as a problem of modeling the underlying distributions using Federated Expectation-Maximization (FedEM). Clients jointly update a set of several global models, also known as component models, and each maintains a customized set of weights for prediction, corresponding to the mixing coefficients of the underlying distributions. One shortcoming of FedEM is that it uses an instance-level weight assignment during training but a client-level weight assignment at inference time. As a concrete example, consider a client consisting of a 20%/80% data mixture from distributions A and B. FedEM will learn two models, one for each distribution. Given a new data point at inference time, the client will always predict 0.2 · predA + 0.8 · predB, *regardless of whether* the data point came from distribution A or B. This is caused by the mismatched behaviour between training and inference time. In contrast, FedeRiCo naturally considers a client-level weight assignment for both training and inference in a decentralized setting.

Other approaches Clustering-based approaches are also popular for personalized FL (Sattler et al., 2020; Ghosh et al., 2020; Mansour et al., 2020).
Such personalization lacks flexibility since each client can only collaborate with other clients within the same cluster. FedFomo (Zhang et al., 2021) interpolates the model updates of each client with those of other clients to improve local performance. FedPer (Arivazhagan et al., 2019) divides the neural network model into base and personalization layers. Base layers are trained jointly, whereas personalization layers are trained locally. Self-FL (Chen et al., 2022b) balances local training and global training objectives from an uncertainty perspective.

## 3 Federated Learning With The Right Collaborators

## 3.1 Problem Formulation

We consider a federated learning (FL) scenario with K clients. Let $[K] := \{1, 2, \ldots, K\}$ denote the set of positive integers up to K. Each client $i \in [K]$ has a local dataset $D_i = \{(\mathbf{x}_s^{(i)}, y_s^{(i)})\}_{s=1}^{n_i}$, where $n_i$ is the number of examples for client i, and the input $\mathbf{x}_s \in \mathcal{X}$ and output $y_s \in \mathcal{Y}$ are drawn from a joint distribution $\mathcal{D}_i$ over the space $\mathcal{X} \times \mathcal{Y}$. The goal of personalized FL is to find a prediction model $h_i : \mathcal{X} \mapsto \mathcal{Y}$ that can perform well on the local distribution $\mathcal{D}_i$ for each client.

One of the main challenges in personalized FL is that we do not know if two clients i and j share the same underlying data distribution. If their data distributions are vastly different, forcing them to collaborate is likely to result in worse performance compared to local training without collaboration. Our method, **Fede**rating with the Right Collaborators (FedeRiCo), is designed to address this problem so that each client can choose to collaborate or not, depending on their data distributions. For better exposition, Section 3.2 first demonstrates how our algorithm works in the centralized setting with a central server. Then Section 3.3 presents several enhancements of FedeRiCo so that it works even in the fully decentralized setting with minimal communication overhead.

## 3.2 FedeRiCo In Centralized Settings

Note that every client's local distribution $\mathcal{D}_i$ can always be represented as a mixture of $\{\mathcal{D}_j\}_{j=1}^{K}$ with some weights $\boldsymbol{\pi}_i = [\pi_{i1}, \ldots, \pi_{iK}] \in \Delta_K$, where $\Delta_K$ is the $(K-1)$-dimensional simplex¹. Let $z_i$ be the latent assignment variable of client i, and $\Pi := [\boldsymbol{\pi}_1, \ldots, \boldsymbol{\pi}_K]^{\top}$ be the prior with $\Pi_{ij} = \Pr(z_i = j)$. Suppose that the conditional probability $p_i(y|\mathbf{x})$ satisfies $-\log p_i(y|\mathbf{x}) = \ell(h_{\phi_i^*}(\mathbf{x}), y) + c$ for some parameters $\phi_i^* \in \mathbb{R}^d$, loss function $\ell : \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}^+$, and normalization constant c. Using the stacked notation $\Phi^* = [\phi_1^*, \ldots, \phi_K^*] \in \mathbb{R}^{d \times K}$, Figure 2 shows the graphical model of how the local dataset is generated. Our goal is to learn the parameters $\Theta := (\Phi, \Pi)$ by maximizing the log-likelihood:

Figure 2: Graphical model (plate over the K clients, with latent assignment $z_i$ generating the local data $D_i$ under the shared parameters $\Phi^*$).

$$f(\Theta):=\frac{1}{n}\log p(D;\Theta)=\frac{1}{n}\sum_{i=1}^{K}\log p(D_{i};\Theta)=\frac{1}{n}\sum_{i=1}^{K}\log\sum_{z_{i}=1}^{K}p(D_{i},z_{i};\Theta),\tag{1}$$

where $D := \cup_i D_i$ and $n := \sum_i n_i$. One standard approach to optimization with latent variables is expectation maximization (EM) (Dempster et al., 1977). The corresponding variational lower bound is given by²

$$\mathcal{L}(q,\Theta):=\frac{1}{n}\sum_{i}\mathbb{E}_{q(z_{i})}[\log p(D_{i},z_{i};\Theta)]+C,\tag{2}$$

where C is a constant not depending on Θ and q is an alternative distribution. To obtain concrete objective functions suitable for optimization, we further assume that the marginal distributions satisfy $p_i(\mathbf{x}) = p(\mathbf{x})$, $\forall i \in [K]$.
Similar to Marfoq et al. (2021), we adopted this assumption as it allows us to narrow our focus to discriminative modelling. With this assumption, we perform the following updates at each iteration t:

E-step: Calculate the client weights $w_{ij}$, which represent the latent variable assignment z for each client, by finding their best alternative distribution $q^*$:

$$w^{(t)}_{ij}:=q^{*(t)}(z_{i}=j)\propto\Pi^{(t-1)}_{ij}\exp\left[-\sum_{s=1}^{n_{i}}\ell\left(h_{\phi^{(t-1)}_{j}}(\mathbf{x}^{(i)}_{s}),\ y^{(i)}_{s}\right)\right].\tag{3}$$

M-step: Given the posterior $q^{*(t)}$ from the E-step, maximize $\mathcal{L}$ w.r.t. $\Theta = (\Phi, \Pi)$:

$$\Pi_{ij}^{(t)}=w_{ij}^{(t)}\quad\text{and}\quad\Phi^{(t)}\in\operatorname*{argmin}_{\Phi}\sum_{i=1}^{K}\widehat{\mathcal{L}}_{w,i}(\Phi),\tag{4}$$

$$\text{where}\quad\widehat{\mathcal{L}}_{w,i}(\Phi):=\sum_{j=1}^{K}\frac{w_{ij}^{(t)}}{n_{i}}\sum_{s=1}^{n_{i}}\ell\left(h_{\phi_{j}}(\mathbf{x}_{s}^{(i)}),\ y_{s}^{(i)}\right).\tag{5}$$

Note that the prior $\Pi_{ij}^{(t)}$ for the next iteration will be updated using the posterior $w_{ij}^{(t)}$ from the current iteration. Due to the update in Equation (4), we refer to $\Pi_{ij}^{(t)}$ or $w_{ij}^{(t)}$ as client weights. For each client, the priors $\Pi_{ij}^{(t-1)}$ from the previous round can be stored locally and used to update the posterior $w_{ij}^{(t)}$ once it receives the model $\Phi^{(t-1)}$ from the server. $\Phi^{(t)}$ in the M-step, however, is trickier to compute since each client can potentially update Φ towards different directions due to data heterogeneity amongst the clients. Bear in mind that each client can only see its local data $D_i$ in the federated setting. To stabilize optimization and avoid overfitting from client updates, we rely on small gradient steps in lieu of full optimization in each round. Since solving for $\Phi^{(t)}$ exactly is computationally expensive, we approximate it using gradient descent:

1. Each participating client computes the local gradient $\nabla\widehat{\mathcal{L}}_{w,i}(\Phi^{(t-1)})$ on its local dataset $D_i$ with fixed weights $w_{ij}^{(t)}$, and sends the gradient back to the server.

2. The server aggregates the gradients and updates the central model using a step size η > 0:

$$\Phi^{(t)}=\Phi^{(t-1)}-\eta\sum_{j=1}^{K}\frac{n_{j}}{n}\cdot\nabla\widehat{\mathcal{L}}_{w,j}(\Phi^{(t-1)}).\tag{6}$$

Finally, at inference time, each client uses $\widehat{h}_i(\mathbf{x}) = \sum_j w_{ij}^{(t)} h_{\phi_j^{(t)}}(\mathbf{x})$ for prediction.

Remark 1 The posterior $w_{ij}^{(t)}$ (or equivalently the prior in the next iteration, $\Pi_{ij}^{(t)}$) reflects the importance of model $\phi_j$ on the data $D_i$. When $w_{ij}^{(t)}$ is one-hot with a one in the i-th position, client i can perform learning by itself without collaborating with others. When $w_{ij}^{(t)}$ is more diverse, client i can find the right collaborators with useful models $\phi_j$. Such flexibility enables each client to make its own decision on whether or not to collaborate with others, hence the name of our algorithm.

Remark 2 Unlike prior work (Mansour et al., 2020; Marfoq et al., 2021), our assignment variable z and probability Π are on the client level. If we assume that all clients share the same prior (i.e., there is only a vector π instead of a matrix Π), the algorithm would be similar to HypCluster (Mansour et al., 2020). Marfoq et al. (2021) used a similar formulation to ours, but their assignment variable z is on the instance level: every data point (instead of every client) comes from a mixture of distributions. Such an approach can cause several issues at inference time, as the assignment for a novel data point is unknown. We refer the interested readers to Section 2 and Section 4 for further comparison.

¹ One-hot $\boldsymbol{\pi}_i$ is always feasible, but other mixtures may exist. When $\boldsymbol{\pi}_i$ is one-hot with a one in the i-th position, each client learns alone without collaboration. While this "mixture" is trivial, it is an important case to have available for circumstances in which all other clients' distributions are truly different and collaboration would be detrimental.

² All derivations of this section can be found in Appendix A.
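As a minimal illustration of one EM round, the NumPy sketch below computes the E-step posterior of Equation (3) from each model's total loss on a client's data (in log-space for numerical stability) and then applies the prior update of Equation (4). The loss values are made up purely for illustration.

```python
import numpy as np

def e_step(prior_row, losses_row):
    """E-step (Eq. 3) for one client i: posterior over which model phi_j
    explains D_i, from the prior Pi_i and the total loss of each model on D_i.
    Computed in log-space and shifted by the max for numerical stability."""
    log_w = np.log(prior_row + 1e-12) - losses_row
    log_w -= log_w.max()
    w = np.exp(log_w)
    return w / w.sum()

# Illustrative numbers: 3 clients, losses[i, j] = total loss of model j on D_i.
prior = np.full((3, 3), 1 / 3)
losses = np.array([[2.0, 9.0, 2.2],
                   [8.0, 1.0, 7.5],
                   [2.1, 9.5, 1.9]])
w = np.stack([e_step(prior[i], losses[i]) for i in range(3)])

# M-step (Eqs. 4-6): the prior is set to the posterior, and each model phi_j
# is moved along the w-weighted sum of client gradients; at inference, client i
# predicts with the mixture sum_j w[i, j] * h_{phi_j}(x).
prior = w.copy()
print(np.round(w, 3))  # clients 0 and 2 put weight on models 0 and 2; client 1 on model 1
```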
Theoretical convergence Under some regularity assumptions, our algorithm converges as follows:

Theorem 3.1. *(Convergence) Under Assumptions B.1–B.6, when the clients use SGD with learning rate $\eta = \frac{a_0}{\sqrt{T}} > 0$, and the number of rounds $T \geq a_0^2 \cdot \max\{8L^2, 16L^2\beta^2\}$, the iterates of our algorithm satisfy*

$$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left\|\nabla_{\Phi}f(\Phi^{t},\Pi^{t})\right\|_{F}^{2}\leq\mathcal{O}\left(\frac{1}{\sqrt{T}}\right),\tag{7}$$

*and*

$$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left[\Delta_{\Pi}f(\Phi^{t},\Pi^{t})\right]\leq\mathcal{O}\left(\frac{1}{T^{3/4}}\right),\tag{8}$$

*where the expectation is over the random batch samples and* $\Delta_{\Pi}f(\Phi^{t},\Pi^{t}) := f(\Phi^{t},\Pi^{t}) - f(\Phi^{t},\Pi^{t+1}) \geq 0$.

Due to their length, the technical assumptions, further details, and the complete proof are deferred to Appendix B. The above theorem shows that the gradient with respect to the model parameters Φ and the improvement over the mixing coefficients Π become small as we increase the number of rounds T, thus converging to a stationary point of the negative log-likelihood objective f.

## 3.3 A Communication-Efficient Protocol In Decentralized Settings

So far, we have discussed how FedeRiCo works in the centralized setting; however, this setting presents several challenges (Lian et al., 2017; Li et al., 2020). Especially for cross-silo FL, there may not exist a third party that all clients trust and that has the computational resources to act as a central server (Kalra et al., 2023). Centralization also creates a single point of failure, both for the ongoing communication between clients and for client privacy via data leaks (Beltrán et al., 2022). To surmount these challenges, we propose several enhancements for FedeRiCo to work in the fully decentralized setting with minimal communication. Specifically, we tackle the bottlenecks in both the E-step (3) and the M-step (4), since they require joint information from all models Φ. The pseudocode of the complete algorithm is provided in Algorithm 1, which is the implementation we use in our experiments.

E-step For client i, the key missing quantities needed to compute (3) are the losses $\ell(\phi_j^{(t-1)})$, or likelihoods $p(D_i \mid z_i = j; \Phi^{(t-1)})$, of other clients' models $\phi_j$, $j \neq i$. Since the models Φ are being updated slowly, one can expect that $\ell(\phi_j^{(t-1)})$ will not be significantly different from the loss $\ell(\phi_j^{(t-2)})$ of the previous iteration. Therefore, each client can maintain a list of losses for all the clients, sample a subset of clients in each round using a sampling scheme S (e.g., ϵ-greedy sampling as discussed later), and only update the losses of the chosen clients. In our experiments, we show that even sampling only one other client would be sufficient, making FedeRiCo have the same communication requirement on average as some centralized algorithms like FedAvg.

M-step To make clear how Φ is updated in the M-step, we focus on the update to a specific client's model $\phi_i$.
According to (5) and (6), the update to $\phi_i$ is given by

$$-\eta\sum_{j=1}^{K}w_{ji}^{(t)}\sum_{s=1}^{n_{j}}\nabla_{\phi_{i}}\ell\left(h_{\phi_{i}}(\mathbf{x}_{s}^{(j)}),\ y_{s}^{(j)}\right).\tag{9}$$

Note that the aggregation is based on $w_{ji}^{(t)}$ instead of $w_{ij}^{(t)}$. Intuitively, this suggests $\phi_i$ should be updated based on how the model is being used by *other clients* rather than how client i itself uses it. If $\phi_i$ does not appear to be useful to any clients, i.e., $w_{ji}^{(t)} = 0, \forall j$, it does not get updated. Therefore, whenever client i is sampled by another client j using the sampling scheme S, it will send $\phi_i$ to j, and receive the gradient update $g_{ij} := w_{ji}^{(t)} \sum_{s=1}^{n_j} \nabla_{\phi_i} \ell(h_{\phi_i}(\mathbf{x}_s^{(j)}), y_s^{(j)})$ from client j. One issue here is that $g_{ij}$ is governed by $w_{ji}^{(t)}$, which could be arbitrarily small, leading to no effective update to $\phi_i$. We will show how this can be addressed by using an ϵ-greedy sampling scheme.

Sampling scheme S We deploy an ϵ-greedy scheme where, in each round, each client uniformly samples clients with probability ϵ ∈ [0, 1] and samples the client(s) with the highest posterior(s) otherwise. This allows a trade-off between emphasizing gradient updates from high-performing clients (small ϵ) versus receiving updates from clients uniformly to find potential collaborators (large ϵ). The number M of sampled clients (neighbors) per round and ϵ can be tuned based on the specific problem instance. We will show the effect of varying these hyperparameters in the experiments.

Tracking the losses for the posterior The final practical consideration is the computation of the posterior $w_{ij}^{(t)}$. From the E-step (3) and the M-step (4), one can see that $w_{ij}^{(t)}$ is the softmax transformation of the negative accumulative loss $L_{ij}^{(t)} := \sum_{\tau=1}^{t-1} \ell_{ij}^{(\tau)}$ over rounds (see Appendix A for the derivation). However, the accumulative loss can be sensitive to noise and initialization. If one of the models, say $\phi_j$, performs slightly better than other models for client i at the beginning of training, then client i is likely to sample $\phi_j$ more frequently, thus enforcing the use of $\phi_j$ even when better models exist. To address this, we instead keep track of an exponential moving average of the loss with a momentum parameter β ∈ [0, 1), $\widehat{L}_{ij}^{(t)} = (1-\beta)\widehat{L}_{ij}^{(t-1)} + \beta \ell_{ij}^{(t)}$, and compute $w_{ij}^{(t)}$ using $\widehat{L}_{ij}^{(t)}$. This encourages clients to seek new collaborators rather than focusing on existing ones.
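The sketch below illustrates the ϵ-greedy sampling scheme S together with the exponential-moving-average loss tracking for a single client; it corresponds to the E-step bookkeeping in Algorithm 1 below. It reflects one plausible reading of the scheme (the ϵ coin is flipped once per round, and ties are broken arbitrarily), and only the sampled neighbors' losses are refreshed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(w_row, i, m, eps):
    # Epsilon-greedy scheme S for client i: with probability eps sample m other
    # clients uniformly; otherwise take the m clients with the highest posterior.
    others = [j for j in range(len(w_row)) if j != i]
    if rng.random() < eps:
        return list(rng.choice(others, size=m, replace=False))
    return sorted(others, key=lambda j: -w_row[j])[:m]

def track_and_reweigh(L_ema, prev_losses, fresh_losses, sampled, beta):
    # Only the sampled neighbors' losses are re-evaluated; the rest carry over.
    losses = prev_losses.copy()
    for j in sampled:
        losses[j] = fresh_losses[j]
    # Exponential moving average, then softmax of the negative tracked loss.
    L_ema = (1 - beta) * L_ema + beta * losses
    w = np.exp(-(L_ema - L_ema.min()))   # shift by the min for numerical stability
    return L_ema, losses, w / w.sum()

# Illustrative round for client 0 among 4 clients.
w_row = np.array([0.4, 0.3, 0.2, 0.1])
sampled = sample_neighbors(w_row, i=0, m=1, eps=0.3)
L_ema, losses, w_row = track_and_reweigh(
    L_ema=np.zeros(4), prev_losses=np.zeros(4),
    fresh_losses=np.array([1.0, 5.0, 2.0, 4.0]), sampled=sampled, beta=0.6)
```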
Algorithm 1: FedeRiCo: Federating with the Right Collaborators

Input: Client local datasets $\{D_i\}_{i=1}^K$, number of communication rounds T, number of neighbors M, ϵ-greedy sampling probability ϵ, momentum for exponential-moving-average loss tracking β, learning rate η.
Output: Client models $\{\phi_i\}_{i=1}^K$ and client weights $w_{ij}$.
// Initialization
1 Randomly initialize $\{\phi_i\}_{i=1}^K$;
2 for client $C_i$ in $\{C_i\}_{i=1}^K$ do
3   Initialize $\widehat{L}_{ij}^{(0)} = 0$, $\ell_{ij}^{(0)} = 0$, $w_{ij}^{(0)} = \frac{1}{K}$;
4 end
5 for iterations t = 1 ... T do
6   for client $C_i$ in $\{C_i\}_{i=1}^K$ do
7     Sample M neighbors of this round, $B_t$, according to ϵ-greedy selection w.r.t. $w_{ij}^{(t-1)}$;
8     Send $\phi_i$ to other clients that sampled $C_i$;
9     Receive $\phi_j$ from sampled neighbors $B_t$;
      // E-step
10    $\ell_{ij}^{(t)} = \ell_{ij}^{(t-1)}$; // Keep the loss from the previous round
11    for b in $B_t$ do
12      $\ell_{ib}^{(t)} = \sum_{s=1}^{n_i} \ell\big(h_{\phi_b^{(t)}}(\mathbf{x}_s^{(i)}),\ y_s^{(i)}\big)$; // Update the sampled ones
13    end
14    $\widehat{L}_{ij}^{(t)} = (1-\beta)\widehat{L}_{ij}^{(t-1)} + \beta \ell_{ij}^{(t)}$; // Update exponential moving averages
15    $w_{ij}^{(t)} = \frac{\exp(-\widehat{L}_{ij}^{(t)})}{\sum_{j'=1}^{K} \exp(-\widehat{L}_{ij'}^{(t)})}$;
      // M-step
16    for $C_b$ in $B_t$ do // Could also do multiple gradient steps instead
17      Compute and send $g_{bi} = w_{ib}^{(t)} \nabla_{\phi_b} \sum_{s=1}^{n_i} \ell\big(h_{\phi_b}(\mathbf{x}_s^{(i)}),\ y_s^{(i)}\big)$ to $C_b$;
18    end
19    for $C_j$ that sampled $C_i$ do
20      Receive $g_{ij} = w_{ji}^{(t)} \sum_{s=1}^{n_j} \nabla_{\phi_i} \ell\big(h_{\phi_i}(\mathbf{x}_s^{(j)}),\ y_s^{(j)}\big)$;
21    end
22    $\phi_i^{(t)} = \phi_i^{(t-1)} - \eta \sum_j g_{ij}$; // Or any other gradient-based method
23  end
24 end

## 4 Experiments

## 4.1 Experimental Settings

We conduct a range of experiments to evaluate the performance of our proposed FedeRiCo on multiple datasets. Additional experiment details and results can be found in Appendix C.

Datasets We compare different methods on several real-world datasets. We evaluate on image classification tasks with the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Office-Home³ (Venkateswara et al., 2017) datasets. In particular, we consider a non-IID data partition among clients by first splitting data by labels into several groups with disjoint label sets. Each group is considered a distribution, and each client samples from one distribution to form its local data. For each client, we randomly divide the local data into 80% training data and 20% test data.

Baseline methods We compare our FedeRiCo to several federated learning baselines. FedAvg (McMahan et al., 2017) trains a single global model for every client. We also compare to other personalized FL approaches, including FedAvg with local tuning (FedAvg+) (Jiang et al., 2019), Clustered FL (Sattler et al., 2020), FedEM (Marfoq et al., 2021)⁴, and FedFomo (Zhang et al., 2021), as well as a local training baseline. All accuracy results are reported as the mean and standard deviation across different random data splits and random training seeds. Unless specified otherwise, we allow each client to communicate with 3 other clients (neighbors) per round, with ϵ = 0.3 and momentum β = 0.6 as the default hyperparameters for FedeRiCo in all experiments. For FedEM, we let all clients train 4 components jointly for the learned distribution, which provides sufficient capacity to accommodate different numbers of label groups (or data distributions). For FedFomo, we hold out 20% of the training data for client weight calculations. For FedAvg+, we follow Marfoq et al. (2021) and update the local model with 1 epoch of local training.

Training settings For all models, we use the Adam optimizer with learning rate 0.01. CIFAR experiments use 150 rounds of training, while Office-Home experiments use 400 rounds. CIFAR-10 results are reported across 5 different data splits and 3 different training seeds for each data split. CIFAR-100 and Office-Home results are reported across 3 different data splits, each with a different training seed.

³ This dataset is publicly available for research purposes only.
⁴ We use implementations from https://github.com/omarfoq/FedEM for Clustered FL and FedEM.
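As a concrete illustration of the label-based non-IID partition described under Datasets, the sketch below splits the label set into disjoint groups (one distribution per group) and lets each client sample its local data from a single group. The round-robin assignment of clients to groups and all sizes are illustrative assumptions, not our exact splitting code.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_group_partition(labels, n_groups, n_clients, n_per_client):
    """Split classes into n_groups disjoint label groups and draw each
    client's local data from a single group (one distribution per group)."""
    classes = rng.permutation(np.unique(labels))
    groups = np.array_split(classes, n_groups)
    client_idx = []
    for i in range(n_clients):
        g = groups[i % n_groups]                  # client i's distribution (assumed round-robin)
        pool = np.flatnonzero(np.isin(labels, g))
        client_idx.append(rng.choice(pool, size=n_per_client, replace=False))
    # Each client's local data is then split 80%/20% into train/test.
    return client_idx

# e.g. CIFAR-10-style labels, 4 distributions, 8 clients
labels = rng.integers(0, 10, size=50_000)
splits = label_group_partition(labels, n_groups=4, n_clients=8, n_per_client=1000)
```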
## 4.2 Performance Comparison

Table 1: Accuracy (in percentage) with different numbers of data distributions (2, 3, or 4) per dataset. Best results in bold.

| Method | CIFAR-10 (2) | CIFAR-10 (3) | CIFAR-10 (4) | CIFAR-100 (2) | CIFAR-100 (3) | CIFAR-100 (4) | Office-Home (2) | Office-Home (3) | Office-Home (4) |
|---|---|---|---|---|---|---|---|---|---|
| FedAvg | 11.44±3.28 | 11.73±3.68 | 13.93±5.74 | 21.28±5.04 | 17.41±3.27 | 18.36±3.68 | 66.58±1.88 | 53.36±4.21 | 51.25±4.37 |
| FedAvg+ | 12.45±8.46 | 29.86±17.85 | 45.65±21.61 | 29.95±1.07 | 35.33±1.77 | 36.17±3.27 | 80.21±0.68 | 81.88±0.91 | 84.50±1.37 |
| Local Training | 40.09±2.84 | 55.27±3.11 | 69.03±7.05 | 16.60±0.64 | 25.99±2.38 | 31.05±1.68 | 76.76±0.23 | 83.30±0.32 | 88.05±0.44 |
| Clustered FL | 11.50±3.65 | 15.24±5.79 | 16.43±5.17 | 20.93±3.57 | 23.15±7.04 | 15.15±0.60 | 66.58±1.88 | 53.36±4.21 | 51.25±4.37 |
| FedEM | 41.21±10.83 | 55.08±6.71 | 63.61±9.93 | 26.25±2.40 | 24.11±7.36 | 19.23±2.58 | 22.59±1.95 | 28.72±1.83 | 22.46±3.99 |
| FedFomo | 42.24±8.32 | 59.45±5.57 | 71.05±6.09 | 12.15±0.57 | 20.49±2.90 | 24.53±2.77 | 78.61±0.78 | 82.57±0.24 | 87.86±0.77 |
| FedeRiCo | **56.61±2.51** | **69.76±2.25** | **78.22±4.80** | **30.95±1.62** | **39.19±1.64** | **41.41±1.07** | **83.56±0.49** | **90.28±0.75** | **93.76±0.12** |

The performance of each FL method is shown in Table 1. Following the settings introduced by Marfoq et al. (2021), each client is evaluated on its own local testing data, and the average accuracies weighted by local dataset sizes are reported. We observe that FedeRiCo has the best performance across all datasets and numbers of data distributions. Here, local training can be seen as an indicator of whether other methods benefit from client collaboration, as local training involves no collaboration at all. We observe that our proposed FedeRiCo is the only method that consistently outperforms local training, meaning that FedeRiCo is the only method that consistently encourages effective client collaborations. Notably, both FedEM and FedFomo perform comparably well to FedeRiCo on CIFAR-10 but worse when the dataset becomes more complex, like CIFAR-100. This indicates that building the right collaborations among clients becomes a harder problem for more complex datasets. Moreover, FedEM can become worse as the number of distributions increases, even worse than local training, showing that it is increasingly hard for clients to participate effectively under the FedEM framework for complex problems with more data distributions. In addition, Clustered FL has similar performance to FedAvg, indicating that it is hard for Clustered FL to split into the right clusters. In Clustered FL (Sattler et al., 2020), every client starts in the same cluster, and clusters only split when the FL objective is close to a stationary point, i.e., the norm of the averaged gradient update from all clients inside the cluster is small. Therefore, in a non-i.i.d. setting like ours, the averaged gradient update might always be noisy and large, as clients with different distributions push diverse updates to the clustered model. As a result, cluster splitting rarely happens, which makes Clustered FL behave similarly to FedAvg in practice.
## 4.3 Client Collaboration

In this section, we investigate client collaboration by plotting the personalized client weights $w_{ij}^{(t)}$ of FedeRiCo over training. With different client data distributions, we show that FedeRiCo assigns more weight to similar clients coming from the same distribution. As observed in Figure 3, similar clients collaborate to make the final predictions. For example, clients 3, 4 and 7 use a mixture of predictions from each other (in light blue), while client 0 only uses itself for prediction, as it is the only client coming from distribution 0 (in dark blue) in this particular random split. On the contrary, as shown in Figure 4, even with 4 components, FedEM fails to use all of them for predictions for the 4 different data distributions. Notably, clients 2, 3, 4, 6 and 7, coming from two different distributions, use only component 3 for prediction, while component 0 is never used by any client. Based on this, we find that FedeRiCo better encourages the clients to collaborate with other similar clients and less with dissimilar clients. Each client can collaborate as much or as little as it needs. Additionally, as all the non-similar clients have a weight of (almost) 0, each client only needs a few models for prediction.

Figure 3: Client weights over time of FedeRiCo with CIFAR100 data and four different client distributions. Clients are color coded by their private data's distribution.

Figure 4: Component weights over training for FedEM with 4 components, on CIFAR100 data with 4 different client distributions. Clients are color coded by their private data's distribution.

Figure 5: Client weights on CIFAR-10 with two different client distributions. **Left two:** Client weights with accumulative loss. **Right two:** Client weights with exponential moving average.

## 4.4 Effect Of Using Exponential Moving Average Loss

Here, we visualize the effect of using the exponential moving average loss by plotting client weights with both the accumulative loss and the exponential moving average loss in Figure 5.⁵ We observe that with the accumulative loss, the client weights quickly converge to one-hot, while with the exponential moving average loss, the client weights are more distributed over similar clients. This corresponds to our expectation stated in Section 3.3: clients using the exponential moving average loss are expected to seek more collaboration compared to those using the accumulative loss. It is also worth noting that although the client weight vectors become one-hot part way through training with the accumulative loss, clients are not learning in isolation. In fact, clients 0, 3, 4, and 5, who share a common data distribution, all end up using client 5's model only, whereas clients 1, 2, 6, and 7 end up using client 6's model only. We show the weights of all clients in Figure 8 of Appendix D.

⁵ We used uniform sampling for the accumulative loss (ϵ = 1) as most of the client weights are 0 after a few rounds.
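The qualitative difference between the two weight updates is easy to reproduce. The following sketch contrasts the softmax of the accumulated losses with the softmax of their exponential moving average; the toy losses are our own construction, not the paper's data:

```python
import numpy as np

def weights_accumulative(losses_per_round):
    """Weights from the accumulative loss: softmax of -sum_t l^(t).
    The sum grows with t, so small per-round loss gaps get amplified
    and the weights collapse towards one-hot."""
    L = np.sum(losses_per_round, axis=0)
    z = np.exp(-(L - L.min()))
    return z / z.sum()

def weights_ema(losses_per_round, beta=0.6):
    """Weights from the exponential moving average of the losses; the
    magnitude of L_hat stays bounded, keeping the weights spread out."""
    L_hat = np.zeros(losses_per_round.shape[1])
    for l in losses_per_round:
        L_hat = (1 - beta) * L_hat + beta * l
    z = np.exp(-(L_hat - L_hat.min()))
    return z / z.sum()

# Two similar clients (mean losses 1.0 and 1.2) observed over 50 rounds:
rng = np.random.default_rng(0)
losses = np.stack([rng.normal([1.0, 1.2], 0.05) for _ in range(50)])
print(weights_accumulative(losses))  # close to one-hot, e.g. [~1.0, ~0.0]
print(weights_ema(losses))           # spread out, e.g. [~0.55, ~0.45]
```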
## 4.5 Hyperparameter Sensitivity

In this section, we explore the effect of the hyperparameters of our proposed FedeRiCo.

**Effect of ϵ-greedy sampling** Here we show the effect of different ϵ values. Recall that each client deploys an ϵ-greedy selection strategy. The smaller the value of ϵ, the greedier the client is in selecting the most relevant collaborators with high weights, leading to less exploration. Figure 6 shows the accuracy over training with different ϵ values on the Office-Home dataset. One can see that there is a trade-off between exploration and exploitation. If ϵ is too high (e.g., ϵ = 1, uniform sampling), then the estimates of the likelihoods/losses are more accurate. However, some gradient updates will vanish because the client weight is close to zero (see Section 3.3), resulting in slow convergence. On the other hand, if ϵ is too small, the client may miss some important collaborators due to a lack of exploration. As a result, we use a moderate ϵ = 0.3 in all experiments.

Figure 6: Test accuracy with different hyperparameters: sampling ϵ (**left**), number of sampled neighbors (**middle**), momentum β (**right**).

Figure 7: Client weights learned by client 0 with different momentum values β on the client weight update.

**Effect of the number of sampled neighbors** We plot accuracy with the number of neighbors M ∈ {0, 1, 3, 5, 7} on CIFAR100 using 4 different client distributions, where M = 0 is similar to Local Training as no collaboration happens. As shown in Figure 6, when the number of neighbors increases, FedeRiCo converges more slowly, as each client receives more updates on other clients' models. While a smaller number of neighbors seems to have a lower final accuracy, we notice that even with M = 1, we still observe a significant improvement compared to no collaboration. Therefore, we use M = 3 neighbors in our experiments, as it has reasonable performance and communication cost.

**Effect of client weight momentum** We plot the overall test accuracy of client 0 on the Office-Home dataset with 4 different data distributions over β ∈ {0.1, 0.3, 0.6, 0.9} in Figure 6, and similarly for the client weights in Figure 7. With smaller β, as shown in Figure 7, we observe a smoother update of the client weights, which is expected, as the old tracking loss dominates the new one. Although various values produce similar final client weights, a bigger β can lead to more drastic changes in early training. However, one shouldn't pick a very small β just because it produces smoother weights. As shown in Figure 6, the algorithm may converge more slowly with smaller β. Therefore, we use β = 0.6, as it encourages smoother updates while maintaining good convergence speed.

In Appendix D, Table 4 shows the effect of using a Dirichlet data split, which is a common choice in FL (Marfoq et al., 2021). Our method still outperforms all benchmarks. Finally, in Figure 10, we consider whether there is a benefit to better optimization in the M-step of our method by increasing the number of local epochs. While there is some improvement when using more optimization, there is a trade-off with increased computation time.

## 5 Conclusion And Discussion

In this paper, we proposed FedeRiCo, a novel framework for personalized FL derived from EM for non-i.i.d. data. We evaluated FedeRiCo across different datasets and demonstrated that FedeRiCo outperforms multiple personalized FL baselines and encourages clients to collaborate with similar clients, i.e., the right collaborators.

**Federated learning architectures** Traditional FL methods, such as Federated Averaging (FedAvg) (McMahan et al., 2017), primarily focus on a centralized FL architecture, where a single central server coordinates and aggregates updates from all clients. However, this centralized setting is heavily dependent on the central server and is thus subject to several challenges in practice.
Firstly, the presence of a commonly trusted central server may not always be guaranteed. Additionally, the server may fail or be maliciously attacked, disrupting the entire system (Li et al., 2020). Furthermore, as previously mentioned, when all clients simultaneously communicate with the server, the communication burden on the server can be substantial (Lian et al., 2017). To mitigate these challenges, decentralized FL (Dai et al., 2022; Beltrán et al., 2022) has emerged as a viable solution for addressing single points of failure and reducing the communication bandwidth on the server.

**Security implications** Security is a significant concern in FL frameworks, particularly regarding malicious attacks (Blanco-Justicia et al., 2021), which may be further exacerbated in a decentralized setting without a central authority. One potential solution is to integrate existing trust mechanisms, such as blockchain (Qin et al., 2022; Short et al., 2020) and homomorphic encryption (Nguyen & Thai, 2022), with FL frameworks (Kairouz et al., 2021) to ensure secure collaboration.

**Privacy implications** A key motivation of federated learning's original development was to protect clients' privacy by avoiding sharing private data (McMahan et al., 2017). However, FL schemes, centralized or decentralized, do not provide an explicit guarantee of privacy, since sharing model parameters (or updates) may also reveal important information about the data (Carlini et al., 2019; Lyu et al., 2022). To address these concerns, a promising direction is to apply differentially private mechanisms in FL training to provide privacy guarantees (Wei et al., 2020; Truex et al., 2020; Liu et al., 2022), which invoke a trade-off between utility and privacy (Chen et al., 2022c; Bietti et al., 2022). Fortunately, most existing FL frameworks, including FedeRiCo, are compatible with differential privacy.

## References

Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, and Venkatesh Saligrama. Debiasing model updates for improving personalized federated training. In *Proceedings of the 38th International Conference on Machine Learning*, 2021.

Mohammed Adnan, Shivam Kalra, Jesse C. Cresswell, Graham W. Taylor, and Hamid R. Tizhoosh. Federated learning and differential privacy for medical image analysis. *Scientific reports*, 12(1):1–10, 2022.

Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers. *arXiv preprint arXiv:1912.00818*, 2019.

Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Pedro Miguel Sánchez Sánchez, Sergio López Bernal, Gérôme Bovet, Manuel Gil Pérez, Gregorio Martínez Pérez, and Alberto Huertas Celdrán. Decentralized federated learning: Fundamentals, state-of-the-art, frameworks, trends, and challenges. *arXiv preprint arXiv:2211.08413*, 2022.

Alberto Bietti, Chen-Yu Wei, Miroslav Dudik, John Langford, and Steven Wu. Personalization improves privacy-accuracy tradeoffs in federated learning. In *Proceedings of the 39th International Conference on Machine Learning*, 2022.

Alberto Blanco-Justicia, Josep Domingo-Ferrer, Sergio Martínez, David Sánchez, Adrian Flanagan, and Kuan Eeik Tan. Achieving security and privacy in federated learning systems: Survey, research challenges and future directions. *Engineering Applications of Artificial Intelligence*, 106:104468, 2021.

Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song.
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. In Proceedings of the 28th USENIX Conference on Security Symposium, SEC'19, pp. 267–284, 2019. Fengwen Chen, Guodong Long, Zonghan Wu, Tianyi Zhou, and Jing Jiang. Personalized federated learning with a graph. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pp. 2575–2582, 7 2022a. Huili Chen, Jie Ding, Eric William Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, and Tao Zhang. Self-aware personalized federated learning. In *Advances in Neural Information Processing Systems*, 2022b. Wei-Ning Chen, Christopher A Choquette Choo, Peter Kairouz, and Ananda Theertha Suresh. The fundamental price of secure aggregation in differentially private federated learning. In *Proceedings of the 39th* International Conference on Machine Learning, 2022c. Luca Corinzia, Ami Beuret, and Joachim M Buhmann. Variational federated multi-task learning. arXiv preprint arXiv:1906.06268, 2019. Sen Cui, Jian Liang, Weishen Pan, Kun Chen, Changshui Zhang, and Fei Wang. Collaboration equilibrium in federated learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 241–251, 2022. Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, and Dacheng Tao. DisPFL: Towards communication-efficient personalized federated learning via decentralized sparse training. In *Proceedings of the 39th International* Conference on Machine Learning, 2022. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society: Series B (Methodological)*, 39(1):1–22, 1977. Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020. Canh T. Dinh, Nguyen Tran, and Josh Nguyen. Personalized Federated Learning with Moreau Envelopes. Advances in Neural Information Processing Systems, 33:21394–21405, 2020. Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In Advances in Neural Information Processing Systems, volume 33, pp. 3557–3568, 2020. Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. An efficient framework for clustered federated learning. In *Advances in Neural Information Processing Systems*, volume 33, pp. 19586–19597, 2020. Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516, 2020. Yutao Huang, Lingyang Chu, Zirui Zhou, Lanjun Wang, Jiangchuan Liu, Jian Pei, and Yong Zhang. Personalized cross-silo federated learning on non-IID data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35-9, pp. 7865–7873, 2021. Yihan Jiang, Jakub Konečn`y, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. *arXiv preprint arXiv:1909.12488*, 2019. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. 
Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1–2):1–210, 2021. ISSN 1935-8237. Shivam Kalra, Junfeng Wen, Jesse C. Cresswell, Maksims Volkovs, and H. R. Tizhoosh. Decentralized federated learning through proxy model sharing. *Nature Communications*, 14(1):2899, May 2023. ISSN 2041-1723. doi: 10.1038/s41467-023-38569-4. URL https://doi.org/10.1038/s41467-023-38569-4. Mikhail Khodak, Maria-Florina F Balcan, and Ameet S Talwalkar. Adaptive gradient-based meta-learning methods. In *Advances in Neural Information Processing Systems*, volume 32, 2019. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Viraj Kulkarni, Milind Kulkarni, and Aniruddha Pant. Survey of personalization techniques for federated learning. In 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), pp. 794–797. IEEE, 2020. Chengxi Li, Gang Li, and Pramod K Varshney. Decentralized federated learning via mutual knowledge transfer. *IEEE Internet of Things Journal*, 2021a. Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. *IEEE signal processing magazine*, 37(3):50–60, 2020. Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In *International Conference on Machine Learning*, pp. 6357–6368, 2021b. Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. Advances in neural information processing systems, 30, 2017. Ken Liu, Shengyuan Hu, Steven Wu, and Virginia Smith. On privacy and personalization in cross-silo federated learning. In *Advances in Neural Information Processing Systems*, 2022. Guodong Long, Yue Tan, Jing Jiang, and Chengqi Zhang. Federated learning for open banking. In Federated learning, pp. 240–254. Springer, 2020. Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, and S Yu Philip. Privacy and robustness in federated learning: Attacks and defenses. *IEEE Transactions on Neural Networks* and Learning Systems, 2022. Yishay Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. Three approaches for personalization with applications to federated learning. *arXiv preprint arXiv:2002.10619*, 2020. Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. Federated multitask learning under a mixture of distributions. *Advances in Neural Information Processing Systems*, 34, 2021. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273–1282. PMLR, 2017. Truc Nguyen and My T Thai. 
Preserving privacy and security in federated learning. *arXiv preprint* arXiv:2202.03402, 2022. Zhen Qin, Shuiguang Deng, Xueqiang Yan, Schahram Dustdar, and Albert Y Zomaya. Secure and efficient decentralized federated learning with data representation protection. *arXiv preprint arXiv:2205.10568*, 2022. Adam Sadilek, Luyang Liu, Dung Nguyen, Methun Kamruzzaman, Stylianos Serghiou, Benjamin Rader, Alex Ingerman, Stefan Mellem, Peter Kairouz, Elaine O. Nsoesie, Jamie MacFarlane, Anil Vullikanti, Madhav Marathe, Paul Eastham, John S. Brownstein, Blaise Aguera y. Arcas, Michael D. Howell, and John Hernandez. Privacy-first health research with federated learning. *npj Digital Medicine*, 4(1), September 2021. Felix Sattler, Klaus-Robert Müller, and Wojciech Samek. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. *IEEE Transactions on Neural Networks and* Learning Systems, 32(8):3710–3722, 2020. Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, and Chao Wu. Federated mutual learning. *arXiv preprint arXiv:2006.16765*, 2020. Andrew Ronald Short, Helen C Leligou, Michael Papoutsidakis, and Efstathios Theocharis. Using blockchain technologies to improve security in federated learning systems. In *2020 IEEE 44th Annual Computers,* Software, and Applications Conference (COMPSAC), pp. 1183–1188. IEEE, 2020. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. Federated multi-task learning. Advances in neural information processing systems, 30, 2017. Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. Towards personalized federated learning. *IEEE* Transactions on Neural Networks and Learning Systems, 2022. Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, and Wenqi Wei. LDP-Fed: Federated learning with local differential privacy. In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 61–66, 2020. Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *Proceedings of the IEEE Conference on Computer Vision* and Pattern Recognition, pp. 5018–5027, 2017. Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H Yang, Farhad Farokhi, Shi Jin, Tony QS Quek, and H Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15:3454–3469, 2020. Michael Zhang, Karan Sapra, Sanja Fidler, Serena Yeung, and Jose M. Alvarez. Personalized federated learning with first order model optimization. In *International Conference on Learning Representations*, 2021. Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, and Yunfeng Shao. Personalized federated learning via variational bayesian inference. In *International Conference on Machine Learning*, pp. 26293–26310. PMLR, 2022. Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated Learning with Non-IID Data. *arXiv preprint arXiv:1806.00582*, 2018. ## A Derivations Variational lower bound Here we derive the variational lower bound Equation (2) for the log-likelihood objective Equation (1). 
For each $i \in [K]$,

$$\log\sum_{z_{i}=1}^{K}p(D_{i},z_{i};\Theta)=\log\sum_{z_{i}=1}^{K}q(z_{i})\cdot\frac{p(D_{i},z_{i};\Theta)}{q(z_{i})}$$
$$=\log\mathbb{E}_{q(z_{i})}\left[\frac{p(D_{i},z_{i};\Theta)}{q(z_{i})}\right]$$
$$\geq\mathbb{E}_{q(z_{i})}\left[\log\frac{p(D_{i},z_{i};\Theta)}{q(z_{i})}\right]$$
$$=\mathbb{E}_{q(z_{i})}\left[\log p(D_{i},z_{i};\Theta)\right]-\mathbb{E}_{q(z_{i})}[\log q(z_{i})],\tag{14}$$

where $q$ is an alternative distribution, the inequality is due to Jensen's inequality, and the last term $\mathbb{E}_{q(z_i)}[\log q(z_i)]$ is a constant independent of the parameters $\Theta$.

**Derivations of the EM steps** Given the assumptions in the main text about $p_i(y|x)$ and $p_i(x)$, we know that

$$-\log p(D_{i}|z_{i}=j;\Phi)=\sum_{s=1}^{n_{i}}\left[\ell\big(h_{\phi_{j}}(\mathbf{x}_{s}^{(i)}),y_{s}^{(i)}\big)-\log p(\mathbf{x}_{s}^{(i)})\right]+c.$$

- **E-step:** Find the best $q^*$ for each client given the current parameters $\Theta^{(t-1)}$:

$$w_{ij}^{(t)}:=q^{*(t)}(z_{i}=j)=p(z_{i}=j|D_{i};\Theta^{(t-1)})\tag{15}$$
$$=\frac{p(z_{i}=j|\Pi^{(t-1)})\cdot p(D_{i}|z_{i}=j;\Phi^{(t-1)})}{\sum_{j'=1}^{K}p(z_{i}=j'|\Pi^{(t-1)})\cdot p(D_{i}|z_{i}=j';\Phi^{(t-1)})}\tag{16}$$
$$=\frac{\Pi_{ij}^{(t-1)}\cdot p(D_{i}|z_{i}=j;\Phi^{(t-1)})}{\sum_{j'=1}^{K}\Pi_{ij'}^{(t-1)}\cdot p(D_{i}|z_{i}=j';\Phi^{(t-1)})}\tag{17}$$
$$\propto\Pi_{ij}^{(t-1)}\exp\left[-\sum_{s=1}^{n_{i}}\ell\left(h_{\phi_{j}^{(t-1)}}(\mathbf{x}_{s}^{(i)}),y_{s}^{(i)}\right)\right].\tag{18}$$

Then the variational lower bound becomes

$$\mathcal{L}(q^{(t)},\Theta)=\frac{1}{n}\sum_{i}\sum_{j}w^{(t)}_{ij}\cdot\log p(D_{i},z_{i}=j;\Theta)+C\tag{19}$$
$$=\frac{1}{n}\sum_{i}\sum_{j}w^{(t)}_{ij}\cdot\left(\log p(z_{i}=j;\Pi)+\log p(D_{i}|z_{i}=j;\Phi)\right)+C\tag{20}$$
$$=\frac{1}{n}\sum_{i}\sum_{j}w^{(t)}_{ij}\cdot\left(\log\Pi_{ij}+\log p(D_{i}|z_{i}=j;\Phi)\right)+C.\tag{21}$$

- **M-step:** Given the posterior $w_{ij}^{(t)}$ from the E-step, we need to maximize $\mathcal{L}$ w.r.t. $\Theta = (\Phi, \Pi)$. For the priors $\Pi$, we can optimize each row $i$ of $\Pi$ individually, since they are decoupled in Equation (21). Note that each row of $\Pi$ is also a probability distribution, so the optimal solution is given by $\Pi_{ij}^{(t)} = w_{ij}^{(t)}$. This is because the first term of Equation (21) for each $i$ is the negative cross-entropy, which is maximized when $\Pi_{ij}$ matches $w_{ij}^{(t)}$. Optimizing Equation (21) w.r.t. $\Phi$ gives

$$\Phi^{(t)}\in\operatorname*{arg\,max}_{\Phi}\mathcal{L}(q^{(t)},\Theta)=\operatorname*{arg\,min}_{\Phi}\frac{1}{n}\sum_{i=1}^{K}\sum_{j=1}^{K}w_{ij}^{(t)}\sum_{s=1}^{n_{i}}\ell\left(h_{\phi_{j}}(\mathbf{x}_{s}^{(i)}),\;y_{s}^{(i)}\right).\tag{22}$$

**Posterior and accumulative loss** Here we show an alternative implementation of Equation (3) using the accumulative loss. To shorten notation, let $\ell_{ij}^{(t)} := \sum_{s=1}^{n_i} \ell\big(h_{\phi_j^{(t)}}(\mathbf{x}_s^{(i)}), y_s^{(i)}\big)$. Combining Equation (3) and Equation (4) gives

$$w_{ij}^{(t)}=p(z_{i}=j|D_{i};\Theta^{(t-1)})\tag{23}$$
$$\propto w_{ij}^{(t-1)}\exp\left[-\ell_{ij}^{(t-1)}\right]\tag{24}$$
$$\propto w_{ij}^{(t-2)}\exp\left[-\left(\ell_{ij}^{(t-2)}+\ell_{ij}^{(t-1)}\right)\right].\tag{25}$$

We can see that it accumulates the losses of previous models (e.g., $\phi_j^{(t-2)}$, $\phi_j^{(t-1)}$, and so on) inside the exponential. Therefore, assuming the uniform prior $\Pi_{ij}^{(0)} = 1/K$ for all $j$, $w^{(t)}$ is the softmax transformation of the negative of the accumulative loss $L_{ij}^{(t)} := \sum_{\tau=1}^{t-1} \ell_{ij}^{(\tau)}$ up until round $t$.
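The telescoping in Equations (23)-(25) can be verified numerically. The sketch below (our own construction, not the paper's code) checks that the recursive update starting from a uniform prior coincides with the softmax of the negative accumulative loss:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 6
losses = rng.uniform(0.0, 2.0, size=(T, K))      # l_ij^(t) for one client i

# Recursive update: w^(t) ∝ w^(t-1) * exp(-l^(t-1)), from a uniform prior
w = np.full(K, 1.0 / K)
for t in range(1, T):
    w = w * np.exp(-losses[t - 1])
    w = w / w.sum()

# Direct form: softmax of the negative accumulative loss L = sum_{tau<t} l^(tau)
L = losses[:T - 1].sum(axis=0)
w_direct = np.exp(-L) / np.exp(-L).sum()

assert np.allclose(w, w_direct)                  # identical up to normalization
```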
## B Convergence Proof

This section provides the missing proof of our convergence result (Theorem 3.1). At a high level, we will show that our algorithm in Section 3.2 can be viewed as an instantiation of Federated Surrogate Optimization (Marfoq et al., 2021, Algo. 3). Therefore, we can apply the general convergence result of Federated Surrogate Optimization (Marfoq et al., 2021, Thm. 3.2′) to obtain our convergence guarantee. Toward that end, Lemma B.7 shows that the local objective

$$f_{i}(\Theta)=f_{i}(\Phi,\pi_{i}):=-\frac{1}{n_{i}}\log p(D_{i}|\Phi,\pi_{i})=-\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\log p(x_{i}^{(s)},y_{i}^{(s)}|\Phi,\pi_{i}),\tag{26}$$

admits a *partial first-order surrogate* near $(\Phi^{(t-1)}, \Pi^{(t-1)})$ (Marfoq et al., 2021, Def. 1), given by

$$g_{i}^{(t)}(\Phi,\Pi):=g_{i}^{(t)}(\Phi,\pi_{i})\tag{27}$$
$$:=\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sum_{j=1}^{K}q_{j}^{(t)}\left[\ell\left(h_{\phi_{j}}(x_{i}^{(s)}),y_{i}^{(s)}\right)-\log p_{j}(x_{i}^{(s)})-\log\pi_{ij}+\log q_{j}^{(t)}-c\right],\tag{28}$$

so that our algorithm shares the same form as Federated Surrogate Optimization. Then Theorem 3.1 follows from Marfoq et al. (2021, Thm. 3.2′) under suitable assumptions.

To start, we adapt Assumptions 2 to 7 of Marfoq et al. (2021) to our setting as follows:

**Assumption B.1.** $\forall i \in [K]$, $p_i(x) = p(x)$.

**Assumption B.2.** The conditional probability $p_i(y|x)$ satisfies

$$-\log p_{i}(y|x)=\ell(h_{\phi_{i}^{*}}(x),y)+c,\tag{29}$$

for some parameters $\phi_i^* \in \mathbb{R}^d$, loss function $\ell: \mathcal{Y}\times\mathcal{Y} \mapsto \mathbb{R}^+$ and normalization constant $c$.

Let $f(\Phi,\Pi) := -\frac{1}{n}\log p(D;\Phi,\Pi)$ be the log-likelihood objective as in Equation (1).

**Assumption B.3.** $f$ is bounded below by $f^* \in \mathbb{R}$.

**Assumption B.4 (Smoothness and bounded gradient).** For all $x, y$, the function $\phi \mapsto \ell(h_\phi(x), y)$ is $L$-smooth, twice continuously differentiable, and has bounded gradient: there exists $B < \infty$ such that $\|\nabla_\phi \ell(h_\phi(x), y)\| \leq B$.

**Assumption B.5 (Unbiased gradients and bounded variance).** Each client $i \in [K]$ can sample a random batch $\xi$ and compute an unbiased estimator $g_i(\phi, \xi)$ of the local gradient with bounded variance, i.e., $\mathbb{E}_\xi[g_i(\phi,\xi)] = \frac{1}{n_i}\sum_{s=1}^{n_i} \nabla\ell(h_\phi(x_i^{(s)}), y_i^{(s)})$ and $\mathbb{E}_\xi\big\|g_i(\phi,\xi) - \frac{1}{n_i}\sum_{s=1}^{n_i}\nabla\ell(h_\phi(x_i^{(s)}), y_i^{(s)})\big\|^2 \leq \sigma^2$.

**Assumption B.6 (Bounded dissimilarity).** There exist $\beta$ and $G$ such that for any set of weights $\gamma \in \Delta^K$:

$$\sum_{i=1}^{K}\frac{n_{i}}{n}\left\|\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sum_{j=1}^{K}\gamma_{j}\nabla\ell(h_{\mathbf{\phi}}(\mathbf{x}_{i}^{(s)}),y_{i}^{(s)})\right\|^{2}\leq G^{2}+\beta^{2}\left\|\frac{1}{n}\sum_{i=1}^{K}\sum_{s=1}^{n_{i}}\sum_{j=1}^{K}\gamma_{j}\nabla\ell(h_{\mathbf{\phi}}(\mathbf{x}_{i}^{(s)}),y_{i}^{(s)})\right\|^{2}.\tag{30}$$

**Lemma B.7 (Partial first-order surrogate).** $g_i^{(t)}$ *is a partial first-order surrogate of* $f_i$ *near* $(\Phi^{(t-1)}, \Pi^{(t-1)})$.

*Proof.* In the following, we verify that $g_i^{(t)}$ satisfies the three conditions of a partial first-order surrogate near $(\Phi^{(t-1)}, \Pi^{(t-1)})$:

1. $g_i^{(t)}(\Phi,\Pi) \geq f_i(\Phi,\Pi)$, $\forall t, \Phi, \Pi$;
2. $r_i^{(t)}(\Phi,\Pi) := g_i^{(t)}(\Phi,\Pi) - f_i(\Phi,\Pi)$ is differentiable and $\widetilde{L}$-smooth w.r.t. $\Phi$ (for some $\widetilde{L} < \infty$). Moreover, $r_i^{(t)}(\Phi^{(t-1)}, \Pi^{(t-1)}) = 0$ and $\nabla_\Phi r_i^{(t)}(\Phi^{(t-1)}, \Pi^{(t-1)}) = 0$;
3. $g_i^{(t)}(\Phi,\Pi^{(t-1)}) - g_i^{(t)}(\Phi,\Pi) = \mathsf{d}(\Pi^{(t-1)}, \Pi)$ for all $\Phi$ and $\Pi \in \operatorname{argmin}_{\Pi'} g_i^{(t)}(\Phi,\Pi')$, where $\mathsf{d}$ is non-negative and $\mathsf{d}(\Pi,\Pi') = 0$ if $\Pi = \Pi'$.
To simplify notation, define the following (the dependency on round $t$ is omitted when it is clear from context):

$$q_{j}:=q_{i}(z_{i}=j),\qquad\mathcal{L}_{j}:=\sum_{s=1}^{n_{i}}\ell\left(h_{\phi_{j}}(x_{i}^{(s)}),y_{i}^{(s)}\right),\qquad\gamma_{j}:=p_{i}(z_{i}=j|D_{i},\Phi,\pi_{i}).$$

**Condition 1** To start verifying the first condition,

$$g_{i}(\Phi,\pi_{i})=\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sum_{j=1}^{K}q_{j}\left[\ell\left(h_{\phi_{j}}(x_{i}^{(s)}),y_{i}^{(s)}\right)-\log p_{j}(x_{i}^{(s)})-\log\pi_{ij}+\log q_{j}-c\right]\tag{34}$$
$$=\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sum_{j}q_{j}\left[-\log\left(p_{j}(y_{i}^{(s)}|x_{i}^{(s)},\phi_{j})\cdot p_{j}(x_{i}^{(s)})\cdot p_{i}(z_{i}=j)\right)+\log q_{j}\right]\tag{35}$$
$$=\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sum_{j}q_{j}\left[-\log p_{i}\left(x_{i}^{(s)},y_{i}^{(s)},z_{i}=j\,\big|\,\Phi,\pi_{i}\right)+\log q_{j}\right]\tag{36}$$
$$=\frac{1}{n_{i}}\sum_{j}q_{j}\left[-\log p_{i}(D_{i},z_{i}=j|\Phi,\pi_{i})+\log q_{j}\right].\tag{37}$$

Then

$$r_{i}(\Phi,\pi_{i})=g_{i}(\Phi,\pi_{i})-f_{i}(\Phi,\pi_{i})\tag{38}$$
$$=\frac{1}{n_{i}}\,\mathcal{KL}\left(q(\cdot)\;\big\|\;p_{i}(\cdot|D_{i},\Phi,\pi_{i})\right),\tag{39}$$

where $\mathcal{KL}$ is the KL-divergence. This verifies the first condition of partial first-order surrogates, since the KL-divergence is non-negative.

**Condition 2** Now we verify the second condition. Note that $r_i$ is twice continuously differentiable due to Assumption B.4. With Assumption B.1,

$$\gamma_{j}=p_{i}(z_{i}=j|D_{i},\Phi,\pi_{i})=\frac{\exp\left[-\mathcal{L}_{j}+\log\pi_{ij}\right]}{\sum_{j'}\exp\left[-\mathcal{L}_{j'}+\log\pi_{ij'}\right]},\tag{40}$$
$$\nabla_{\phi_{j'}}\gamma_{j}=\begin{cases}(-\gamma_{j}+\gamma_{j}^{2})\nabla\mathcal{L}_{j}&\text{if }j'=j,\\ \gamma_{j}\gamma_{j'}\nabla\mathcal{L}_{j'}&\text{if }j'\neq j,\end{cases}\tag{41}$$

where $\nabla\mathcal{L}_j$ is shorthand for $\nabla_{\phi_j}\mathcal{L}_j$. Then

$$\nabla_{\phi_{j'}}r_{i}=\frac{1}{n_{i}}\nabla_{\phi_{j'}}\sum_{j}(-q_{j}\log\gamma_{j})\qquad\text{(definition of KL)}\tag{42}$$
$$=\frac{1}{n_{i}}\sum_{j}-\frac{q_{j}}{\gamma_{j}}\nabla_{\phi_{j'}}\gamma_{j}\tag{43}$$
$$=\frac{1}{n_{i}}\Big[q_{j'}(1-\gamma_{j'})-\sum_{j\neq j'}q_{j}\gamma_{j'}\Big]\nabla\mathcal{L}_{j'}\qquad(j=j'\text{ vs. }j\neq j')\tag{44}$$
$$=\frac{1}{n_{i}}\left[q_{j'}(1-\gamma_{j'})-(1-q_{j'})\gamma_{j'}\right]\nabla\mathcal{L}_{j'}\qquad\Big(\sum_{j}q_{j}=1\Big)\tag{45}$$
$$=\frac{1}{n_{i}}(q_{j'}-\gamma_{j'})\nabla\mathcal{L}_{j'}.\tag{46}$$

The Hessian of $r_i$, $\mathbf{H}(r_i) \in \mathbb{R}^{dK\times dK}$, w.r.t. $\Phi$ is a block matrix, with blocks given by

$$\left(\mathbf{H}(r_{i})\right)_{j,j'}=\begin{cases}\frac{1}{n_{i}}\left[(q_{j}-\gamma_{j})\mathbf{H}(\mathcal{L}_{j})+(\gamma_{j}-\gamma_{j}^{2})(\nabla\mathcal{L}_{j})(\nabla\mathcal{L}_{j})^{\top}\right]&\text{when }j=j',\\ -\frac{1}{n_{i}}\gamma_{j}\gamma_{j'}(\nabla\mathcal{L}_{j})(\nabla\mathcal{L}_{j'})^{\top}&\text{when }j\neq j',\end{cases}\tag{47}$$

where $\mathbf{H}(\mathcal{L}_j) \in \mathbb{R}^{d\times d}$ is the Hessian of $\mathcal{L}_j$ w.r.t. $\phi_j$. Introduce block matrices $\widetilde{\mathbf{H}}, \widehat{\mathbf{H}} \in \mathbb{R}^{dK\times dK}$ as

$$\widetilde{\mathbf{H}}_{j,j'}=\begin{cases}\frac{1}{n_{i}}(\gamma_{j}-\gamma_{j}^{2})(\nabla\mathcal{L}_{j})(\nabla\mathcal{L}_{j})^{\top}&\text{when }j=j',\\ -\frac{1}{n_{i}}\gamma_{j}\gamma_{j'}(\nabla\mathcal{L}_{j})(\nabla\mathcal{L}_{j'})^{\top}&\text{when }j\neq j',\end{cases}\qquad\widehat{\mathbf{H}}_{j,j'}=\begin{cases}\frac{1}{n_{i}}(q_{j}-\gamma_{j})\mathbf{H}(\mathcal{L}_{j})&\text{when }j=j',\\ \mathbf{0}&\text{when }j\neq j'.\end{cases}\tag{48}$$

Since $q_j, \gamma_j \in [0,1]$ and $\ell$ is $L$-smooth by Assumption B.4, we have $-L\cdot\mathbf{I}_{dK} \preccurlyeq \widehat{\mathbf{H}} \preccurlyeq L\cdot\mathbf{I}_{dK}$. Using Lemma B.8 (see below), we have $\mathbf{0} \preccurlyeq \widetilde{\mathbf{H}} \preccurlyeq B^2\cdot\mathbf{I}_{dK}$ (note that $\nabla\mathcal{L}_j$ is the sum of $n_i$ individual gradients and $\mathbf{H}(r_i)$ has the factor $1/n_i$). As a result, $-\widetilde{L}\cdot\mathbf{I}_{dK} \preccurlyeq \mathbf{H}(r_i) \preccurlyeq \widetilde{L}\cdot\mathbf{I}_{dK}$ (where $\widetilde{L} = L + B^2 < \infty$) and therefore $r_i$ is $\widetilde{L}$-smooth.
Finally, $q_j^{(t)} = p_i(z_i=j|D_i,\Phi^{(t-1)},\pi_i^{(t-1)})$, $\forall t > 0$ by the algorithm, which means

$$r_{i}^{(t)}(\Phi^{(t-1)},\Pi^{(t-1)})=r_{i}^{(t)}(\Phi^{(t-1)},\pi_{i}^{(t-1)})=0.\tag{49}$$

Additionally, from Equation (39) we know that $r_i^{(t)}(\Phi,\pi_i)$ is a (non-negative) KL-divergence for all $\Phi, \Pi$. Recall that $r_i^{(t)}$ is differentiable. It follows that $\Phi^{(t-1)}$ is a minimizer of the function $\{\Phi \mapsto r_i^{(t)}(\Phi,\pi_i^{(t-1)})\}$ and

$$\nabla_{\Phi}r_{i}^{(t)}(\Phi^{(t-1)},\pi_{i}^{(t-1)})=0.\tag{50}$$

This verifies the second condition of the partial first-order surrogate.

**Condition 3** Note that $\pi_i^{(t)} = \operatorname{argmin}_{\pi} g_i^{(t)}(\Phi,\pi)$ due to the choice of $q_i^{(t)}$ by the algorithm. Then for any $\pi_i$ and $i \in [K]$,

$$g_{i}^{(t)}(\Phi,\pi_{i})-g_{i}^{(t)}(\Phi,\pi_{i}^{(t)})=\sum_{j}q_{j}^{(t)}(\log\pi_{ij}^{(t)}-\log\pi_{ij})\tag{51}$$
$$=\sum_{j}\pi_{ij}^{(t)}(\log\pi_{ij}^{(t)}-\log\pi_{ij})=\mathcal{KL}(\pi_{i}^{(t)}\|\pi_{i}),\tag{52}$$

which is non-negative and equals zero iff $\pi_i^{(t)} = \pi_i$. This verifies the third condition of the partial first-order surrogate. $\square$

The following lemma was used when verifying the second condition above.

**Lemma B.8.** *Suppose* $\mathbf{g}_1, \ldots, \mathbf{g}_K \in \mathbb{R}^d$ *and* $\gamma = (\gamma_1,\ldots,\gamma_K) \in \Delta^K$. *The block matrix* $\mathbf{H} \in \mathbb{R}^{dK\times dK}$ *with*

$$\mathbf{H}_{j,j'}=\begin{cases}(\gamma_{j}-\gamma_{j}^{2})\,\mathbf{g}_{j}\mathbf{g}_{j}^{\top}&\text{when }j=j',\\ -\gamma_{j}\gamma_{j'}\,\mathbf{g}_{j}\mathbf{g}_{j'}^{\top}&\text{when }j\neq j',\end{cases}$$

*is positive semi-definite (PSD). If in addition* $\|\mathbf{g}_j\| \leq B < \infty$, $\forall j \in [K]$, *then* $\mathbf{H} \preccurlyeq B^2\cdot\mathbf{I}_{dK}$.

*Proof.* Let $\mathbf{x} = [\mathbf{x}_1,\ldots,\mathbf{x}_K] \in \mathbb{R}^{dK}$. Then

$$\mathbf{x}^{\top}\mathbf{H}\mathbf{x}=\sum_{j,j'=1}^{K}\mathbf{x}_{j}^{\top}\mathbf{H}_{j,j'}\mathbf{x}_{j'}\tag{53}$$
$$=\sum_{j=1}^{K}\mathbf{x}_{j}^{\top}\mathbf{H}_{j,j}\mathbf{x}_{j}+\sum_{j=1}^{K}\sum_{j'\neq j}\mathbf{x}_{j}^{\top}\mathbf{H}_{j,j'}\mathbf{x}_{j'}\tag{54}$$
$$=\sum_{j=1}^{K}(\gamma_{j}-\gamma_{j}^{2})\,(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})^{2}-\sum_{j=1}^{K}\sum_{j'\neq j}\gamma_{j}\gamma_{j'}\,(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})(\mathbf{x}_{j'}^{\top}\mathbf{g}_{j'})\tag{55}$$
$$=\sum_{j=1}^{K}\gamma_{j}(1-\gamma_{j})\,(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})^{2}-\sum_{j=1}^{K}\gamma_{j}(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})\sum_{j'\neq j}\gamma_{j'}(\mathbf{x}_{j'}^{\top}\mathbf{g}_{j'})\tag{56}$$
$$=\sum_{j=1}^{K}\gamma_{j}\sum_{j'\neq j}\gamma_{j'}\,(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})^{2}-\sum_{j=1}^{K}\gamma_{j}(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})\sum_{j'\neq j}\gamma_{j'}(\mathbf{x}_{j'}^{\top}\mathbf{g}_{j'})\tag{57}$$
$$=\sum_{j=1}^{K}\gamma_{j}(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})\sum_{j'\neq j}\gamma_{j'}\left(\mathbf{x}_{j}^{\top}\mathbf{g}_{j}-\mathbf{x}_{j'}^{\top}\mathbf{g}_{j'}\right)\tag{58}$$
$$=\sum_{j=1}^{K}\gamma_{j}(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})\sum_{j'=1}^{K}\gamma_{j'}\left(\mathbf{x}_{j}^{\top}\mathbf{g}_{j}-\mathbf{x}_{j'}^{\top}\mathbf{g}_{j'}\right)\tag{59}$$
$$=\sum_{j=1}^{K}\gamma_{j}(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})^{2}-\Big(\sum_{j=1}^{K}\gamma_{j}\,\mathbf{x}_{j}^{\top}\mathbf{g}_{j}\Big)^{2}\tag{60}$$
$$=\mathbb{E}_{j\sim\gamma}[(\mathbf{x}_{j}^{\top}\mathbf{g}_{j})^{2}]-\left(\mathbb{E}_{j\sim\gamma}[\mathbf{x}_{j}^{\top}\mathbf{g}_{j}]\right)^{2}\tag{61}$$
$$=\mathbb{V}_{j\sim\gamma}[\mathbf{x}_{j}^{\top}\mathbf{g}_{j}]\geq0,\tag{62}$$

where we have repeatedly applied $\sum_j \gamma_j = 1$, and $\mathbb{E}, \mathbb{V}$ denote expectation and variance, treating $\mathbf{x}_j^\top\mathbf{g}_j$ as a random variable. As a result, $\mathbf{H}$ is PSD.

Suppose in addition $\|\mathbf{g}_j\| \leq B < \infty$, $\forall j \in [K]$. Using the Cauchy-Schwarz inequality, we have

$$-B\cdot\|\mathbf{x}_{j}\|\leq-\|\mathbf{x}_{j}\|\cdot\|\mathbf{g}_{j}\|\leq\mathbf{x}_{j}^{\top}\mathbf{g}_{j}\leq\|\mathbf{x}_{j}\|\cdot\|\mathbf{g}_{j}\|\leq B\cdot\|\mathbf{x}_{j}\|.\tag{63}$$

Since $\|\mathbf{x}_j\| \leq \|\mathbf{x}\|$, $\forall j \in [K]$, we have

$$-B\cdot\|\mathbf{x}\|\leq\mathbf{x}_{j}^{\top}\mathbf{g}_{j}\leq B\cdot\|\mathbf{x}\|.\tag{64}$$

Finally, with Popoviciu's inequality on variances, we have

$$\mathbf{x}^{\top}\mathbf{H}\mathbf{x}=\mathbb{V}_{j\sim\gamma}[\mathbf{x}_{j}^{\top}\mathbf{g}_{j}]\leq\frac{1}{4}\big(B\cdot\|\mathbf{x}\|+B\cdot\|\mathbf{x}\|\big)^{2}=B^{2}\|\mathbf{x}\|^{2},\tag{65}$$

which means $\mathbf{H} \preccurlyeq B^2\,\mathbf{I}_{dK}$. $\square$
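The variance identity at the heart of Lemma B.8 is easy to check numerically. The following sketch (our own construction, for illustration only) assembles $\mathbf{H}$ for random $\gamma$ and $\mathbf{g}_j$ and confirms that $\mathbf{x}^\top\mathbf{H}\mathbf{x} = \mathbb{V}_{j\sim\gamma}[\mathbf{x}_j^\top\mathbf{g}_j]$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 3, 5
g = rng.normal(size=(K, d))            # g_1, ..., g_K
gamma = rng.dirichlet(np.ones(K))      # gamma in the simplex

# Assemble the block matrix H from the lemma
H = np.zeros((d * K, d * K))
for j in range(K):
    for jp in range(K):
        if j == jp:
            block = (gamma[j] - gamma[j] ** 2) * np.outer(g[j], g[j])
        else:
            block = -gamma[j] * gamma[jp] * np.outer(g[j], g[jp])
        H[j * d:(j + 1) * d, jp * d:(jp + 1) * d] = block

# x^T H x should equal Var_{j~gamma}[x_j^T g_j], hence be non-negative
x = rng.normal(size=(K, d))
quad = x.reshape(-1) @ H @ x.reshape(-1)
s = np.array([x[j] @ g[j] for j in range(K)])
var = (gamma * s ** 2).sum() - ((gamma * s).sum()) ** 2
print(quad, var)                       # equal and non-negative, as the lemma states
assert np.isclose(quad, var)
```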
As shown above, our algorithm can be seen as an instantiation of Federated Surrogate Optimization (Marfoq et al., 2021, Algo. 3). As a result, we have the following convergence result, derived from Marfoq et al. (2021, Thm. 3.2′).

**Theorem 3.1 (Convergence).** *Under Assumptions B.1-B.6, when the clients use SGD with learning rate $\eta = \frac{a_0}{\sqrt{T}} > 0$, and the number of rounds satisfies $T \geq a_0^2 \cdot \max\{8L^2, 16L^2\beta^2\}$, the iterates of our algorithm satisfy*

$$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla_{\Phi}f(\Phi^{t},\Pi^{t})\|_{F}^{2}\leq\mathcal{O}\left(\frac{1}{\sqrt{T}}\right),\qquad\text{and}\qquad\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\Delta_{\Pi}f(\Phi^{t},\Pi^{t})]\leq\mathcal{O}\left(\frac{1}{T^{3/4}}\right),$$

*where the expectation is over the random batch samples and* $\Delta_\Pi f(\Phi^t,\Pi^t) := f(\Phi^t,\Pi^t) - f(\Phi^t,\Pi^{t+1}) \geq 0$.

## C Additional Experiment Details

**Dataset** To speed up training, we take 10% and 15% of the training data from CIFAR-10 and CIFAR-100, respectively. For the Office-Home dataset, we merge images from all domains to get the training dataset and use the features extracted from the penultimate layer of a ResNet-18 pretrained on ImageNet.

**Models and methods** For CIFAR-10, we use the CNN2 from Shen et al. (2020) with three 3×3 convolution layers (each with 128 channels, followed by 2×2 max pooling and ReLU activation) and one FC layer. For CIFAR-100, we use ResNet-18 as in Marfoq et al. (2021). For Office-Home, the model is an MLP with two hidden layers (1000 and 200 hidden units). The batch size is 50 for CIFAR and 100 for Office-Home. For FedFomo, we use 5 local epochs in CIFAR-100 to adapt to the noisiness of training, and 1 local epoch per communication round for all other experiments.

**Settings** CIFAR experiments use 8 clients and Office-Home experiments use 10 clients.

**Computational resources and software** We summarize the computational resources used for the experiments in Table 2 and the software versions in Table 3.

Table 2: Summary of computational resources.

| Memory | CPU | GPU |
|---|---|---|
| 700GB | Intel(R) Xeon(R) Platinum 8168@2.70GHz | 8 Tesla V100-SXM2 |

Table 3: Software versions.

| Operating System | Python | Pytorch | mpi4py |
|---|---|---|---|
| Ubuntu 18.04.5 | 3.9 | 1.9.0 | 3.1.2 |

## D Additional Experimental Results

**Client weights with accumulative loss** Figure 8 provides more complete information compared to Figure 5 from Section 4.4 of the main text. It shows the client weights over training for each of the 8 clients using the accumulative loss (only clients 0 and 1 were shown in Figure 5). We find that clients 0, 3, 4, and 5, who share a common data distribution, all end up using client 5's model only, whereas clients 1, 2, 6, and 7 end up using client 6's model only. Hence, although the client weight vectors become one-hot part way through training, clients are not learning in isolation and are still benefiting from their collaborators.

Figure 8: Client weights over time of FedeRiCo with accumulative loss.

**Dirichlet data split** In Table 4 below we compare FedeRiCo with the other baselines on the Office-Home dataset using a different data-splitting approach. Specifically, we first partition the data labels into 4 clusters and then distribute data within the same clusters across different clients using a symmetric Dirichlet distribution with parameter 0.4, as in FedEM (Marfoq et al., 2021)⁶. As a result, each client contains a slightly different mixture of the 4 distributions. The results are reported over a single run.

⁶ We use the implementation from https://github.com/omarfoq/FedEM.
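A minimal sketch of this splitting procedure is given below; the function name and implementation details are our own reconstruction of the description above, not the FedEM code. Table 4 follows the sketch.

```python
import numpy as np

def dirichlet_split(labels, n_clients, n_clusters=4, alpha=0.4, seed=0):
    """Partition labels into clusters, then distribute each cluster's samples
    across clients with a symmetric Dirichlet(alpha) distribution.
    Returns a list of sample-index arrays, one per client."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    clusters = np.array_split(rng.permutation(classes), n_clusters)
    client_idx = [[] for _ in range(n_clients)]
    for cluster in clusters:
        idx = rng.permutation(np.where(np.isin(labels, cluster))[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))  # mixture proportions
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for c, part in enumerate(np.split(idx, cuts)):
            client_idx[c].extend(part)
    return [np.array(ci) for ci in client_idx]

# e.g., 10 clients over a toy label vector with 20 classes
labels = np.random.default_rng(1).integers(0, 20, size=5000)
splits = dirichlet_split(labels, n_clients=10)
print([len(s) for s in splits])  # uneven, cluster-biased local datasets
```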
| Method | FedAvg | FedAvg+ | Local Training | Clustered FL | FedEM | FedFomo | FedeRiCo |
|---|---|---|---|---|---|---|---|
| Accuracy | 69.73±11.02 | 71.20±24.41 | 68.32±19.43 | 69.73±11.02 | 47.15±25.43 | 75.78±6.20 | 83.90±4.11 |

Table 4: Accuracy of different algorithms with the Office-Home dataset and Dirichlet distribution.

**Client collaboration** Here we include more client weight plots, similar to Figure 3 in the main text, of our proposed FedeRiCo on CIFAR100 with four client distributions, to show how the level of collaboration evolves over training. As shown in Figure 9, clients from the same distribution have higher client weights and hence more collaboration, as expected.

Figure 9: Client weights over time of FedeRiCo with CIFAR100 data and four different client distributions. Clients are color coded by their private data's distribution.

Figure 10: Accuracy vs. number of local epochs.

**Number of local epochs** We compared the average training accuracy over clients for different amounts of optimization in each M-step. As shown in Figure 10, when the number of local epochs is increased and the M-step is better optimized, the test accuracy can converge to a better result. This can be characterized as a trade-off between performance and training time.
Review 1:

Summary:
In order to tackle data heterogeneity, this paper presents a methodology to perform personalised federated learning (FL) by allowing each client $i \in [K]$ to learn a personalised model $\phi_i$. To model data heterogeneity, the authors assume that each client's local distribution $D_i$ can be expressed as a mixture of other clients' distributions, that is $D_i = \sum_{j=1}^K \pi_{ij} D_j$, where $\pi_{ij}$ stands for a local weight vector. Similar to statistical inference methods in mixture models (e.g. Gaussian ones), the authors introduce a set of latent variables representing the distribution from which the data has been generated, and rely on expectation maximisation to infer both model parameters and latent variables. Convergence guarantees are provided and the benefits of the proposed approach are illustrated on a set of experiments involving 3 data sets. Compared to prior and related works, the novel contributions brought by this paper are:
* The combination of local weights $\pi_{ij}$ at the client level with the mixture representation at the client level. Closest works are Mansour et al., 2020 and Marfoq et al., 2021.
* Empirical assessment of the benefits of the proposed methodology, referred to as FedeRiCo.

Strengths and Weaknesses:
Strengths:
* The methodology proposed by the authors is clear.
* The authors pointed out several issues with the naive version of FedeRiCo and proposed workarounds to cope with them, such as (i) the communication bottleneck of updating both the clients' weights (E-step) and the local model parameters, and (ii) convergence issues associated with the clients' sampling scheme, which have been addressed using a smoothing mechanism.
* The numerical assessment is thorough.

Despite these strengths, I have some major concerns regarding the paper in its current state, see below.

Weaknesses:
* Albeit not the main criterion to accept this submission, the novelty of the proposed methodology is rather incremental with respect to prior works, especially Marfoq et al., 2021.
* The appendix lacks rigor and clarity, especially the proof of Theorem 3.1. First, I understand that the main assumptions and the proof techniques B1-6 come from Marfoq et al., 2021, but I would expect at the beginning of Section B a brief recap of the proof organisation. For example, I do not understand why you should verify the three conditions of partial first-order surrogates. Second, while it is indeed mandatory to provide high-level theoretical results in the main paper, I would like to have more rigorous results in the Appendix. Notably, (i) replace "after a sufficient number of rounds $T$" with a clear lower bound on $T$; (ii) replace $O(.)$ in (7) and (8) by exact upper bounds.
* Regarding experimental results, I would like to have figures comparing the different approaches w.r.t. the communication overhead (e.g. the size of the vectors to be transmitted from one party to another) and to add HypCluster (Mansour et al., 2020) to the set of competitors, as it is a specific instance of the proposed methodology.

Requested Changes: See my comments in the Weaknesses section above.

Broader Impact Concerns: N/A.

==================================================

Review 2:

Summary:
In this paper, the authors propose a personalized extension of the FedEM method proposed by Marfoq et al. The authors provide a convergence guarantee for the proposed method. Empirical results on CIFAR also show that the proposed method seems to achieve stronger performance.
Strengths and Weaknesses:
Strengths:
- This work is focusing on personalized federated learning, which is an important area of study.
- The paper organization is clear.

Weaknesses:
1. **Presentation:** I found the presentation unclear in many places, which makes understanding the method hard.
- The authors seem to assume that $\mathcal{D}_i$ can be represented as a linear combination of $K$ distributions, where $K$ coincides with the number of clients. Isn't that a special case of the original Marfoq et al. paper where they assume $M$ distributions?
- In that case, is $\Phi_i$ the personalized model for a distribution or the personalized model for a client? It seems to be the personalized model for a distribution, but based on Equation 4, all $K$ models should be broadcast to all the clients, which could introduce a gigantic amount of communication in a cross-device setting where there are thousands/millions of users.
- The optimization objective is not clearly stated. Could the authors explain what should be minimized / maximized in Equation 2?
2. **Effectiveness:** Given the presentation ambiguity, I don't think I'm convinced by the effectiveness of the current method. It seems that for this personalized optimization problem, the minimizer should be for every client to learn its own local model, i.e. $w_{ij}=1$ iff $i=j$. Please correct me if I'm wrong. Given my understanding above, it is unclear to me why there is a huge difference between local training and the proposed method. If there is any, it might be due to the fact that in Algorithm 1, the solver does not find the exact solution but just takes a gradient update step. However, that would mean the benefits come from the way of optimization instead of the objective itself.
3. **Theory:** Beyond that, could the authors explain how we benefit from using personalization theoretically, compared to learning a global model under a mixture of distributions (Marfoq et al.)? It seems that the convergence rate is identical. But then it is hard to explain why the empirical difference between the two methods is large. If the authors could provide any explanation, that would be great.
4. **Decentralization:** The motivation for decentralization is unclear to me. Why couldn't we use SMC when we don't have a trusted server? Is decentralization itself a contribution of this work?
5. **Experiments:** Having FedAvg on CIFAR10 be nearly random guessing seems strange to me. What's more concerning is that FedAvg+, which is FedAvg with local finetuning, is much worse than local training on CIFAR10, which does not make sense, since in that case FedAvg should just provide local training with a different initialization. Could the authors explain why this is the case? Beyond that, it would be interesting to see how the methods could perform in real-world cross-device FL cases where you have thousands of clients [1]. A bunch of baselines are also missing, including pFedMe [2], Ditto [3], etc.

[1] Caldas, Sebastian, et al. "Leaf: A benchmark for federated settings." arXiv preprint arXiv:1812.01097 (2018).
[2] T Dinh, C., Tran, N., & Nguyen, J. (2020). Personalized federated learning with moreau envelopes. Advances in Neural Information Processing Systems, 33, 21394-21405.
[3] Li, T., Hu, S., Beirami, A., & Smith, V. (2021, July). Ditto: Fair and robust federated learning through personalization. In International Conference on Machine Learning (pp. 6357-6368). PMLR.

Requested Changes: See weaknesses above.
In summary, I would like the authors to provide:
- Explanations for my concerns and questions above.
- A reorganization of the technical parts so that the notations and assumptions are clear.
- Additional experiments on larger-scale datasets and more baselines.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary:
The paper proposes a variant of personalized federated learning where local clients compute a weighted average of the gradients of other clients. The weights are determined by the loss of other clients' models on its own data.

Strengths and Weaknesses:
Strength:
- personalized federated learning is an important problem
- the paper proves convergence of the proposed method

Weaknesses:
- the setting of this paper is not perfectly sound (see requested changes)
- the differences to the work of Marfoq, et al. 2021 are not clear
- the paper does not discuss and evaluate privacy aspects and communication complexity of the proposed approach

Requested Changes:
- In the introduction, the paper argues that "forcing everyone to use the same global model without proper personalization can hurt performance on their own data distribution". This is exemplified using a set of linear models for parts of a non-linear curve: each local model can fit its part well, but there is no good overall linear model for the curve. While this argument makes sense for hypothesis classes of limited capacity, it seems unjustified in the interpolation regime, where for every (practical) dataset there exists at least one hypothesis that reaches negligible empirical error - this encompasses a large part of deep learning. In this setup one could argue that there exists a global model that achieves optimal performance across all local clients. Of course, it might be impractical to find it via federated learning, but I think this argument needs to be strengthened, because in its current form it appears to be wrong.
- In Section 3.2 the paper argues that "every client's local distribution $D_i$ can always be represented as a mixture". Without further assumptions, this is wrong. Assume some input $X=\{1,2,3\}$; then already for the marginal over $X$ there is a simple counterexample. Assume for $3$ clients local distributions $p_i(x)=1$ if $x=i$ and $0$ otherwise. Then any $p_i$ cannot be represented as a mixture of the other two. This is a serious problem, since the entire paper hinges on this assumption. Moreover, it appears to me that without such explicit assumptions, the impossibility result in Marfoq, et al. 2021 should apply to this paper as well and no gain over local learning plus a constant should be possible.
- In Remark 2 in Section 3, the paper states that Marfoq, et al. 2021 (NeurIPS) differs from this work in that every data point comes from a mixture of distributions. This seems to be a misrepresentation of the work of Marfoq, et al. 2021, which has a similar setting as this work: each client has an individual data distribution $D_t$ which is a mixture of a set of base distributions. The main difference is that Marfoq, et al. 2021 make this assumption explicit, while this paper assumes that each client's distribution is a mixture of the other clients' distributions. This work even uses the assumptions from Marfoq, et al. 2021 for its convergence proof. I would argue that a more thorough discussion of the similarities and differences between these two works is necessary. This should include a discussion of the difference in experimental setup, since Marfoq, et al.
2021 reports consistently higher performance on CIFAR10 and CIFAR100 (in both cases higher than the performance achieved by the method proposed in this paper), while the results of local training match with those reported in this work for $4$ distributions. - An often cited benefit of federated learning over other means of collaboration is its privacy benefit, which is not addressed in this paper. It would be great to have a clear discussion about the privacy issues of sharing all models with all clients. - The paper provides several tweaks to the approach to improve communication-efficiency. The paper, however, provides no theoretical or empirical evaluation of the communication complexity. Since communication efficiency is one of the selling points of federated learning, such an evaluation seems necessary. - The performance comparison in table 1 lacks a centralized baseline. Since a main claim of the paper is that a single, centralized model is inferior to personalized local models, this comparison is essential. Broader Impact Concerns: I have no concerns about the broader impact. ================================================== Metareview: Recommendation: Reject Comment: I want to apologize to the authors for the significant delay in taking the final decision. This was notably due to one of the reviewers becoming unresponsive (despite many reminders) and never submitting his/her recommendation. Nevertheless, there was a consensus among the other two reviewers to reject the paper for the reasons explained in the "Claims And Evidence" part above. As Action Editor, after having reviewed the entire discussion and the reviewers' recommendations, I agree with their arguments. ==================================================
# A Rigorous Study Of The Deep Taylor Decomposition

Leon Sixt *leon.sixt@fu-berlin.de*
Department of Computer Science
Freie Universität Berlin

Tim Landgraf *tim.landgraf@fu-berlin.de*
Department of Computer Science
Freie Universität Berlin

Reviewed on OpenReview: *https://openreview.net/forum?id=Y4mgmw9OgV*

Code Repository: *https://github.com/berleon/A-Rigorous-Study-Of-The-Deep-Taylor-Decomposition*

## Abstract

Saliency methods attempt to explain deep neural networks by highlighting the most salient features of a sample. Some widely used methods are based on a theoretical framework called Deep Taylor Decomposition (DTD), which formalizes the recursive application of the Taylor Theorem to the network's layers. However, recent work has found these methods to be independent of the network's deeper layers and to appear to respond only to lower-level image structure. Here, we investigate the DTD theory to better understand this perplexing behavior and find that the Deep Taylor Decomposition is equivalent to the basic gradient×input method when the Taylor root points (an important parameter of the algorithm chosen by the user) are locally constant. If the root points are locally input-dependent, then one can justify any explanation. In this case, the theory is under-constrained. In an empirical evaluation, we find that DTD roots do not lie in the same linear regions as the input, contrary to a fundamental assumption of the Taylor theorem. The theoretical foundations of DTD were cited as a source of reliability for the explanations. However, our findings urge caution in making such claims.

## 1 Introduction

Post-hoc explanations are popular for explaining Machine Learning models as they do not require changing the model's architecture or training procedure. In particular, feature attribution methods are widely used. They assign a saliency score to each input dimension, reflecting its relevance for the model's output. For images, the saliency scores can be visualized as heatmaps (see Figure 2). Evaluating post-hoc explanations is challenging because it is inherently circular: As we do not understand the internal workings of the model, which we are trying to explain, we cannot judge the quality of the explanation. The situation is further complicated as many methods simplify the model's complexity to render explanations accessible to the human eye. For example, most methods focus on the local neighborhood of an input sample, and rely on assumptions such as linearity (e.g. gradient-based methods) or independence of the input features (e.g. approximation of Shapley values (Štrumbelj & Kononenko, 2011; 2014; Lundberg & Lee, 2017; Kumar et al., 2020)). These factors and the complexity of deep neural networks make it difficult to assess whether an explanation is correct or not. We cannot disentangle failures of the explanation method from unexpected behavior of the model. While it is acceptable for methods to introduce simplifications or rely on assumptions, their existence, purpose, and violation should be made transparent. In the best case, a method would be based on a solid theoretical foundation providing guarantees regarding an explanation's correctness.

Such a theoretical foundation is the *Deep Taylor Decomposition* (DTD, Montavon et al. (2017)). DTD recursively applies the Taylor Theorem to the network's layers and backpropagates modified gradients to the input, thereby computing the input's relevance.
It was used as the theoretical foundation of LRP (Bach et al., 2015), a popular method to explain image models. LRP was repeatedly advertised as a sound and reliable explanation technique (Montavon et al., 2019; Samek et al., 2021b; Holzinger et al., 2022). For example, Holzinger et al. (2022) stated: "The main advantages of LRP are its high computational efficiency [...], its theoretical underpinning making it a trustworthy and robust explanation method [...], and its long tradition and high popularity [...]."

However, Sixt et al. (2020) has shown that certain LRP and DTD backpropagation rules create explanations partially independent of the model's parameters: the explanation will remain the same even if the last layer's parameters are randomized. The theoretical analysis in (Sixt et al., 2020) revealed that the propagation matrices, which correspond to the layers' Jacobian matrices, are all positive, and their product converges to a rank-1 matrix quickly. To obtain the saliency map, the result is usually normalized, and thereby even the last single degree of freedom is lost. Thus, the explanation does not change when explaining a different class or when the parameters of the deeper layers are changed.

This perplexing behavior questions the consistency of DTD directly. While Sixt et al. (2020) described the convergence to the rank-1 matrix in detail, the failure was not related to DTD's theory, such as the choice of root points¹ and the recursive application of the Taylor Theorem. Here, we fill this gap: *Can we identify flaws in DTD's theory that would explain the perplexing behavior of ignoring the network's parameters? Does DTD provide transparency regarding its assumptions and guarantees about the explanation's correctness?*

Before we approach these questions, we summarize the relevant background of the Deep Taylor Decomposition in Section 3. For completeness, we start with the well-known Taylor theorem and then discuss how the theorem connects to DTD's relevances. We then continue with stating the recursive application of the Taylor Theorem formally and recapitulate the so-called *train-free DTD* approximation, which allows computing layer-wise relevances efficiently.

In Section 4, we present our theoretical analysis of DTD. In particular, we contribute: **(C1)** a proof that the root points must be contained in the same linear region as the input; **(C2)** we generalize a previous observation about LRP0 (Shrikumar et al., 2016): if the layers' root points are chosen locally constant w.r.t. the layers' input, then DTD's relevances take a similar form as input×gradient; **(C3)** DTD is underconstrained: if the root points depend on the layers' input, then the Deep Taylor Decomposition can be used to create any arbitrary explanation; **(C4)** we also find that DTD cannot be extended easily to analytic activation functions (e.g. Softplus) without introducing complex higher-order derivatives with the same order as the number of network layers.

¹ Following Montavon et al. (2017), we name the points used for the Taylor Theorem *root points*. For example, for a function $f: \mathbb{R} \to \mathbb{R}$, the first-order Taylor approximation is $f(x) \approx f(\tilde{x}) + f'(\tilde{x})(x - \tilde{x})$, where $\tilde{x}$ is the root point.
In an empirical evaluation (Section 5), we apply the theoretical insights from the previous section and study the *train-free DTD* approximation in several experiments: **(C5)** the train-free DTD does not enforce the root points to be located in the valid local linear region of the network; **(C6)** we also validate this empirically using a small multi-layered perceptron, where we find a substantial number of samples having roots located outside the valid local linear region; **(C7)** additionally, we include a reproducibility study of (Arras et al., 2022), which claimed that DTD's explanations would not suffer from the problems reported in Sixt et al. (2020). This reproducibility study also highlights DTD's black-box character and how difficult it is to evaluate explanation quality empirically.

Given the theoretical and empirical evidence, we conclude that DTD obscures its simplifications and violates its own assumptions. DTD is under-constrained and even allows justifying virtually any explanation.

## 2 Related Work

The theoretical analysis of explanation methods is a small research area. PatternAttribution (Kindermans et al., 2018) investigated the insensitivity of DTD rules to input noise and then proposed a way to learn the root points from data. Other lines of work are the manipulation of saliency maps (Dombrowski et al., 2019; Viering et al., 2019; Wang et al., 2020), the runtime complexity of explanation methods (Waeldchen et al., 2021), or explanation methods with provable guarantees (Chen et al., 2019).

Previous works have analyzed the theoretical properties of various saliency methods. For example, the insensitivity of Guided-Backprop (Springenberg et al., 2014) was analyzed in (Nie et al., 2018), and (Lundstrom et al., 2022) found flaws in the theoretical motivation of integrated gradients (Sundararajan et al., 2017). (Kumar et al., 2020) discussed the issues arising from an independence assumption between input variables, often introduced in sampling algorithms (Štrumbelj & Kononenko, 2011; 2014; Lundberg & Lee, 2017). In (Shah et al., 2021), it was empirically analyzed and proven for a specific dataset that the gradient's magnitude does not correspond to relevant features. Our work also analyzes the theoretical properties of saliency methods but differs from previous works as it focuses on the DTD and LRP methods. Although different review articles (Montavon et al., 2018; 2019; Samek et al., 2021b;a) and extensions of LRP and Deep Taylor (Binder et al., 2016; Kohlbrenner et al., 2020; Hui & Binder, 2019; Ali et al., 2022) have been published, none discussed the theoretical issues brought forward in our manuscript.

¹ Following Montavon et al. (2017), we name the points used for the Taylor Theorem *root points*. For example, for a function $f: \mathbb{R} \to \mathbb{R}$, the first-order Taylor approximation is $f(x) \approx f(\tilde{x}) + f'(\tilde{x})(x - \tilde{x})$, where $\tilde{x}$ is the root point.

## 3 Background

In this section, we provide the necessary background on Deep Taylor Decomposition to understand the theoretical analysis in Section 4. We mainly reproduce the derivations given in (Montavon et al., 2017; 2018). Where we comment on the derivations, we do so in the *Remark* paragraphs.

## 3.1 Taylor Theorem For Multivariate Functions

The Taylor Theorem for multivariate functions can be concisely stated using multi-index notation. A multi-index $\alpha \in \mathbb{N}_0^k$ is a vector of non-negative integers ($\alpha = [\alpha_1, \ldots, \alpha_k]$). The following operations are defined as:
$$\alpha! = \alpha_1! \, \alpha_2! \cdots \alpha_k!, \quad |\alpha| = \sum_{i=1}^{k} \alpha_i, \quad x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_k^{\alpha_k}, \quad \partial^\alpha f = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \, \partial x_2^{\alpha_2} \cdots \partial x_k^{\alpha_k}},$$

where $x \in \mathbb{R}^k$ and $f : \mathbb{R}^k \to \mathbb{R}$. The following theorem is adapted from Folland (2002, Theorem 2.68):

**Theorem 1** (Multivariate Taylor Theorem). *Suppose $f : \mathbb{R}^d \to \mathbb{R}$ is of differentiability class $C^k$ on an open convex set $S$. If $x \in S$ and $\tilde{x} \in S$, then:*

$$f(x) = \sum_{|\alpha| \leq k} \frac{\partial^\alpha f(\tilde{x})}{\alpha!} (x - \tilde{x})^\alpha + g_k(x, \tilde{x}), \tag{1}$$

*where the remainder is given by:*

$$g_k(x, \tilde{x}) = k \sum_{|\alpha| = k} \frac{(x - \tilde{x})^\alpha}{\alpha!} \int_0^1 (1 - t)^{k-1} \Big[ \partial^\alpha f\big(t x + (1 - t)\tilde{x}\big) - \partial^\alpha f(\tilde{x}) \Big] \, dt. \tag{2}$$

As the Deep Taylor Decomposition focuses on neural networks with ReLU activations, we will mainly look at the first-order Taylor Theorem:

$$f(x) = f(\tilde{x}) + \frac{\partial f(x)}{\partial x}\bigg|_{x = \tilde{x}} \cdot (x - \tilde{x}), \tag{3}$$

where $\tilde{x}$ is the root point, and $|_{x = \tilde{x}}$ denotes the gradient evaluated at the root point $\tilde{x}$. The higher-order terms are zero due to the local linearity of ReLU networks. As the Taylor Theorem requires $f \in C^1$ (i.e., all partial derivatives $\partial f(x)/\partial x$ must be continuous in the local neighborhood $S$), the root point must be within the same linear region as the input.

**Definition 1** (Linear Region). *A linear region of a function $f : \mathbb{R}^d \to \mathbb{R}$ is the set $N_f(x)$ of all points $x'$ that (1) have the same gradient as $x$: $\nabla f(x) = \nabla f(x')$, and (2) can be reached from $x$ without passing through a point $a$ with a different gradient, i.e., $\nabla f(a) \neq \nabla f(x)$.*

In the case of a ReLU network, the approximation error cannot be bounded when selecting a root point $\tilde{x} \notin N_f(x)$, as that linear region's gradient might differ substantially.
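The linear-region condition can be checked numerically, at least in its necessary form of equal gradients. The following PyTorch sketch (an illustration under assumed toy settings, not code from our experiments; the architecture is an arbitrary choice) tests whether a candidate root point shares the input's gradient:

```python
import torch

torch.manual_seed(0)
# A small ReLU network f: R^10 -> R (arbitrary illustrative architecture).
net = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
    torch.nn.Linear(10, 1),
)

def grad_f(x: torch.Tensor) -> torch.Tensor:
    """Gradient of the scalar network output w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()
    return x.grad

def maybe_same_linear_region(x, root, atol=1e-6):
    """Necessary condition (1) of Definition 1: equal gradients.
    The path condition (2) is not checked here."""
    return bool(torch.allclose(grad_f(x), grad_f(root), atol=atol))

x = torch.randn(10)
print(maybe_same_linear_region(x, x + 1e-4 * torch.randn(10)))  # typically True
print(maybe_same_linear_region(x, torch.randn(10)))             # typically False
```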
## 3.2 Taylor Theorem And Relevances

In the previous section, we recapitulated the Taylor Theorem. We now discuss how it can be used to compute input relevances. A common approach to explaining a deep neural network is to contrast the network's output with a similar point predicted differently. A user could then study the pairs $(x, f(x))$ and $(\tilde{x}, f(\tilde{x}))$ and relate the input differences to the output differences. To guide the user's attention, it would be desirable to highlight which changes between $x$ and $\tilde{x}$ were responsible for the difference in the output. The first observation in (Montavon et al., 2017) is that the network's output differences can be redistributed to the input by using the Taylor Theorem. If the point $\tilde{x}$ is in the local neighborhood $N_f(x)$, we can use the first-order Taylor Theorem (equation 3) to write the difference $f(x) - f(\tilde{x})$ as:

$$f(x) - f(\tilde{x}) = \frac{\partial f(x)}{\partial x}\bigg|_{x = \tilde{x}} \cdot (x - \tilde{x}). \tag{4}$$

The relevance of the input $R : \mathbb{R}^d \to \mathbb{R}^d$ is then defined to be the point-wise product of the partial derivatives with the input differences:

$$R(x) = \frac{\partial f(x)}{\partial x}\bigg|_{x = \tilde{x}} \odot (x - \tilde{x}). \tag{5}$$

While this would be a simple way to compute the relevances, the following reasons are given in (Montavon et al., 2017; 2019) to not directly apply the Taylor Theorem to the network output:

1. **Adversarial perturbations** (Szegedy et al., 2013): Small input perturbations can lead to a large change in the output. Therefore, the difference in the output might be enormous but $|x - \tilde{x}|$ tiny and uninterpretable.
2. **Finding a root point might be difficult**: "It is also not necessarily solvable due to the possible non-convexity of the minimization problem" (Montavon et al., 2017).
3. **Shattered gradients** (Balduzzi et al., 2017): "While the function value $f(x)$ is generally accurate, the gradient of the function is noisy" (Montavon et al., 2019).

**Remark 1.** We want to point out that the more general problem seems to be that the local linear regions are tiny; in the worst case, the number of linear regions grows exponentially with the depth of the network (Arora et al., 2018; Xiong et al., 2020; Montufar et al., 2014). This restricts the valid region for the root point to a small neighborhood around the input.

## 3.3 Deep Taylor: Recursive Application Of The Taylor Theorem

The main idea of (Montavon et al., 2017) is to recursively apply the Taylor Theorem to each network layer. Before we present this in detail, we first need to clarify the notation of an $n$-layered ReLU network:

**Definition 2** (ReLU network). *An $n$-layered ReLU network $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}_{\geq 0}$ is the composition of $n$ functions $f = f_n \circ \ldots \circ f_1$, where each function $f_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_{l+1}}_{\geq 0}$ has the form $f_l(a_l) = [W_l a_l]^+$, and where $[.]^+$ is the ReLU activation.*

Instead of directly calculating the relevance of the input as done in the previous section, we can apply the Taylor Theorem to the final network layer and then apply it again to the resulting relevance. By recursively applying the Taylor Theorem to each individual layer, we can calculate the relevance of the input. As the base case of the recursion, the relevance of the network output is set to the value of the explained logit $f_{[\xi]}(x) = a_{n+1[\xi]}$:

$$R^{n+1}(a_{n+1}) = a_{n+1[\xi]}, \tag{6}$$

where $R^{n+1}$ denotes the relevance of the $(n+1)$-th network activation. We use superscripts for the relevance functions, as their individual dimensions are often indexed, as in $R^{n+1}_{[j]}$.
Suppose that we already know the relevance function $R^{l+1}(a_{l+1}) \in \mathbb{R}^{d_{l+1}}$ for the layer $l+1$. We can then calculate the relevance of $a_l$ (the input to layer $l$) to the $j$-th coordinate of $R^{l+1}(a_{l+1})$:

$$R^{l+1}_{[j]}(a_{l+1}) = R^{l+1}_{[j]}\big(f_l(\tilde{a}_l)\big) + \frac{\partial R^{l+1}_{[j]}\big(f_l(a_l)\big)}{\partial a_l}\bigg|_{a_l = \tilde{a}_l(a_l)} \cdot \big(a_l - \tilde{a}_l(a_l)\big), \tag{7}$$

where we used $a_{l+1} = f_l(a_l)$. The root point is selected in dependence on the layer's input $a_l$, i.e., it is a function $\tilde{a}_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_l}$. The total relevance of the input to layer $l$ is given by the sum over all $d_{l+1}$ hidden neurons.

**Definition 3** (Recursive Taylor). *Given a function $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}$, which can be written as a composition of $n$ functions $f = f_n \circ \ldots \circ f_1$ with $f_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_{l+1}}$, the input to each function $f_l$ is denoted by $a_l$, and $a_{n+1}$ is $f$'s output. Additionally, a root point function $\tilde{a}_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_l}$ is defined for each layer, which must only return admissible values $\tilde{a}_l(a_l) \in N_{R^l}(a_l)$ with $\tilde{a}_l(a_l) \neq a_l$. Then, the base case is given by $R^{n+1}(a_{n+1}) = a_{n+1[\xi]}$, and the relevance function $R^l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_l}$ of layer $l \leq n$ is recursively defined by:*

$$R^l(a_l) = \sum_{j=1}^{d_{l+1}} \left( \frac{\partial R^{l+1}_{[j]}\big(f_l(a_l)\big)}{\partial a_l}\bigg|_{a_l = \tilde{a}^{(j)}_l(a_l)} \odot \Big( a_l - \tilde{a}^{(j)}_l(a_l) \Big) \right) \tag{8}$$

The above definition corresponds to equation 6 in (Montavon et al., 2017). Except for the root point selection, which will be discussed in the following sections, definition 3 contains all the information needed to implement the recursive decomposition using an automatic differentiation library. Exemplary pseudo-code can be found in Algorithm 1.
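To make the recursion concrete, here is a minimal PyTorch rendering of Definition 3 (a sketch under our notation, not the paper's reference implementation; `root_fn` is a placeholder for the user-chosen, and under-constrained, root point function):

```python
import torch

def recursive_taylor_relevance(layers, x, xi, root_fn):
    """Relevance following Definition 3 / equation 8.
    layers: list of callables f_l; xi: explained output index;
    root_fn(l, a_l): returns the root point for layer l (user's choice)."""
    n = len(layers)

    def relevance(l, a):
        if l == n:                        # base case: R^{n+1}(a) = a[xi]
            return a[xi]
        root = root_fn(l, a).detach().requires_grad_(True)
        # R^{l+1} evaluated at f_l of the root point.
        r_next = relevance(l + 1, layers[l](root))
        total = torch.zeros_like(a)
        for j in range(r_next.flatten().numel()):   # sum over d_{l+1} neurons
            (grad,) = torch.autograd.grad(
                r_next.flatten()[j], root, retain_graph=True, create_graph=True)
            total = total + grad * (a - root)       # eq. (8): gradient at root, elementwise
        return total

    return relevance(0, x)

# Usage with a toy 2-layer ReLU network and a constant zero root point
# (zero is generally not an admissible root, as discussed in Section 3.4).
torch.manual_seed(0)
lin1, lin2 = torch.nn.Linear(6, 6), torch.nn.Linear(6, 3)
layers = [lambda t: torch.relu(lin1(t)), lin2]
R = recursive_taylor_relevance(layers, torch.randn(6), xi=0,
                               root_fn=lambda l, a: torch.zeros_like(a))
```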
Before continuing with the approximations of the Deep Taylor Decomposition, we want to make a few remarks:

**Remark 2** (No Axiomatic Motivation). Only some vague arguments are provided to motivate the recursive application of the Taylor Theorem:

"The deep Taylor decomposition method is inspired by the divide-and-conquer paradigm, and exploits the property that the function learned by a deep network is decomposed into a set of simpler subfunctions, either enforced structurally by the neural network connectivity, or occurring as a result of training." - Montavon et al. (2017, Section 3)

In contrast, Shapley values (Shapley, 1951) are motivated by four axiomatic properties, which are uniquely fulfilled by the Shapley values. A comparable set of axioms with a uniqueness result does not exist for the Deep Taylor Decomposition.

## 3.4 Deep Taylor Decomposition For A One-Layer Network And DTD's Rules

In the previous section, we introduced the recursive application of the Taylor Theorem. For a concrete example, we now discuss how DTD is applied to a one-layered network. This will also explain the propagation rules of DTD. This subsection corresponds to Section 4 in Montavon et al. (2017), and we refer to the corresponding equations with the notation (DTD eq. 11).

The one-layered network consists of a linear layer with a ReLU activation followed by a sum-pooling layer, i.e. $f : \mathbb{R}^d \to \mathbb{R}$, $f(x) = \sum_j [Wx + b]^+_{j}$, where $[.]^+$ is the ReLU activation. We denote the output of the ReLU layer by $h(x) = [Wx + b]^+$ and the sum-pooling layer by $y(h) = \sum_j h_j$. For this subsection, we denote the relevance functions by $R^x$, $R^h$, and $R^y$ for the input, hidden, and output layer, respectively. The relevance of the final layer is simply given by the network's output (equation 6; DTD eq. 8):

$$R^y(h(x)) = y(h(x)). \tag{9}$$

DTD suggests selecting a root point $\tilde{h}$ such that $y(\tilde{h}) = 0$. The advantage of $y(\tilde{h}) = 0$ is that the network's output $y(h)$ is absorbed into the first-order term (i.e., $f(\tilde{x}) = 0$ in equation 3). We then have $y(h) = \frac{\partial R^y(\tilde{h})}{\partial \tilde{h}} \cdot (h - \tilde{h})$, such that the network output is fully redistributed to the hidden layer's relevance. Additionally, the root point should be a valid input to the layer. As $y(h)$'s input comes from the ReLU layer $h(x)$, it is positive, and only $\tilde{h} = 0$ solves $\sum_j \tilde{h}_j = 0$. The derivative is $\partial R^y(\tilde{h})/\partial \tilde{h} = \partial y(\tilde{h})/\partial \tilde{h} = \vec{1}$, and therefore we can use equation 8 to write the relevance of the ReLU layer's output as (DTD eq. 10):

$$R^h(x) = \frac{\partial R^y(h)}{\partial h}\bigg|_{h = 0} \odot h = h. \tag{10}$$

As $h$ is the ReLU output, we can also write $R^h(x)$ as (DTD eq. 11):

$$R^h(x) = [Wx + b]^+. \tag{11}$$

The next step is to connect the input relevance $R^x(x)$ with the ReLU neurons' relevances $R^h$. We use equation 8 and apply the Taylor theorem to the relevance of each hidden neuron $h_{[j]}$:

$$R^x(x) = \sum_{j=1}^{d} \left( \frac{\partial R^h_{[j]}(x)}{\partial x}\bigg|_{x = \tilde{x}^{(j)}} \odot \big(x - \tilde{x}^{(j)}\big) \right) = \sum_{j=1}^{d} \Big( w_j \odot \big(x - \tilde{x}^{(j)}\big) \Big), \tag{12}$$

where we used that the derivative of the hidden neuron $h_{[j]}$ w.r.t. the input is the weight vector $w_j = W_{[j:]}$.

**Relevance Propagation Rules** For the root point $\tilde{x}$, we could, in theory, select any point in the half-space $w_j^T \tilde{x} + b_j > 0$, as they are all valid according to the Taylor Theorem. However, as it is beneficial to fully redistribute the relevance, DTD proposes selecting a point that sets the $j$-th neuron's relevance to zero, i.e., any point on the hyperplane $w_j^T \tilde{x} + b_j = 0$. The non-differentiability at the ReLU hinge is resolved by picking the gradient from the case $w_j^T \tilde{x} + b_j > 0$. As there is no unique solution, DTD derives different root points by starting at the input $x$ and moving along a direction $v_j$, i.e., $\tilde{x}^{(j)} = x - t v_j$ with $t \in \mathbb{R}$. The root point $\tilde{x}^{(j)}$ is then the intersection of the line $x - t v_j$ with the hyperplane $w_j^T x + b_j = 0$. Combining these equations yields $t = \frac{w_j^T x + b_j}{w_j^T v_j}$ and therefore the root point $\tilde{x}^{(j)} = x - \frac{w_j^T x + b_j}{w_j^T v_j} v_j$. Substituting this into equation 12 yields (DTD-Appendix eq. 6, 7):

$$R^x(x) = \sum_{j=1}^{d} \left( w_j \odot \frac{w_j^T x + b_j}{w_j^T v_j} v_j \right) = \sum_{j=1}^{d} \left( w_j \odot \frac{v_j}{w_j^T v_j} R^h_{[j]}(x) \right). \tag{13}$$

While almost all choices of $v_j$ yield a root point with $R^h_{[j]}(\tilde{x}^{(j)}) = 0$ (except $v_j \perp w_j$), a few special directions exist (see the sketch after this list for the corresponding root points):

- The *$w^2$-rule* chooses the closest root point in the $L_2$ metric: $v_j = w_j$. This yields the root point $\tilde{x}^{(j)} = x - \frac{w_j}{w_j^T w_j} R^h_{[j]}(x)$ and the relevance propagation rule $R^x(x) = \sum_{j=1}^{d} \frac{w_j^2}{w_j^T w_j} R^h_{[j]}(x)$, where $w_j^2 = w_j \odot w_j$.
- The *$z^+$-rule* uses a direction that always yields a positive root: $v_j = \mathbb{1}_{w_j \geq 0} \odot x$, which is preferred for positive inputs (e.g. from ReLU activations). The resulting root point is $\tilde{x}^{(j)} = x - \frac{\mathbb{1}_{w_j \geq 0} \odot x}{w_j^T (\mathbb{1}_{w_j \geq 0} \odot x)} R^h_{[j]}(x)$, and the relevance propagation rule is $R^x(x) = \sum_{j=1}^{d} \frac{z_j^+}{\sum_i z^+_{j i}} R^h_{[j]}(x)$, where $z_j^+ = \mathbb{1}_{w_j \geq 0} \odot x \odot w_j$.
- The *gamma-rule* proposed in Montavon et al. (2019) uses the search direction $v_j = (1 + \gamma \mathbb{1}_{w_j \geq 0}) \odot x$, where $\gamma \in \mathbb{R}^+$. The corresponding relevance propagation rule is $R^x(x) = \sum_{j=1}^{d} \frac{w_j \odot v_j}{w_j^T v_j} R^h_{[j]}(x)$ with this choice of $v_j$. In the limit $\gamma \to \infty$, the gamma-rule becomes the $z^+$-rule.
- A special case is the *LRP0 rule*, which does not use any search direction to find a root point but simply chooses $\tilde{x} = 0$. Although zero is not a valid root point in general, it was shown that LRP0 corresponds to gradient×input (Shrikumar et al., 2016; Ancona et al., 2018; Kindermans et al., 2016). The *LRPε rule* is an extension of LRP0 that adds a small $\varepsilon$ to the denominator to increase numerical stability.
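The rules differ only in the search direction $v_j$. The following sketch (with arbitrary illustrative values, not the paper's code) computes the corresponding root points for a single linear layer and verifies that each sets the $j$-th neuron's pre-activation, and thus its relevance, to zero:

```python
import torch

torch.manual_seed(0)
d = 5
W, b = torch.randn(d, d), -torch.rand(d)     # non-positive biases, as DTD suggests
x = torch.rand(d)                             # positive input, e.g. a ReLU output

def root_point(j, rule, gamma=0.5):           # the gamma value is illustrative
    """x_root = x - ((w_j^T x + b_j) / (w_j^T v_j)) * v_j for the chosen rule."""
    w = W[j]
    if rule == "w2":
        v = w
    elif rule == "z+":
        v = (w >= 0).float() * x
    elif rule == "gamma":
        v = (1 + gamma * (w >= 0).float()) * x
    return x - (w @ x + b[j]) / (w @ v) * v

for rule in ["w2", "z+", "gamma"]:
    x_root = root_point(0, rule)
    # The j-th pre-activation vanishes at the root point:
    print(rule, float(W[0] @ x_root + b[0]))  # approximately 0
```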
## 3.4.1 Which Rule Should Be Chosen?

The computed relevance values depend substantially on the rule. For example, the LRP0 rule can produce negative relevance values, whereas the $z^+$-rule always returns positive relevance values. In Montavon et al. (2017, Sec. 4.1, 4.2, 4.3), the input domain was used as the primary selection criterion: for example, it was suggested to pick the $z^+$-rule for the domain $\mathbb{R}^+_0$ and the $w^2$-rule for the domain $\mathbb{R}$. The input domain is not a sufficient selection criterion for the root points, as it does not provide a unique solution. In the later work Montavon et al. (2019), other selection criteria were proposed for deep neural networks, which we analyze in Section 3.5.1. For now, we can conclude that no principled way to pick the roots and rules exists.

Figure 1: Local linear regions of a randomly initialized neural network (3 layers, ReLU, 2 inputs, 10 hidden neurons). The biases are initialized (a) non-positive and (b) unrestricted. The gradients are visualized as arrows for a random selection of points.

## 3.4.2 Non-Positive Biases

In Montavon et al. (2017), it was proposed to constrain the biases of the linear layers to be non-positive, i.e. $b_j \leq 0$. The main motivation was to guarantee that the origin is a root point of the function $f$. However, this is not the case, as the following simple counter-example shows. Suppose the bias is $b = -\vec{1}$. Then $f(\vec{0}) = 0$, as $[W\vec{0} - \vec{1}]^+ = 0$, but the origin $\vec{0}$ is not a valid root point, as the gradient is zero there. In contrast, any input $x$ with $f(x) > 0$ has a non-zero gradient and is therefore in a different linear region. In Figure 1, we visualize the local regions of a small 3-layered network for non-positive and unrestricted biases.
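The counter-example is easy to reproduce. A minimal sketch (the positive weight matrix is an arbitrary choice, purely for illustration):

```python
import torch

torch.manual_seed(0)
W = torch.rand(4, 4)             # positive weights, so large inputs activate all neurons
b = -torch.ones(4)               # non-positive bias b = -1, as in the counter-example

def f(x):
    return torch.relu(x @ W.T + b).sum()

zero = torch.zeros(4, requires_grad=True)
f(zero).backward()
print(float(f(torch.zeros(4))))  # 0.0: the origin is a root of f ...
print(zero.grad)                 # ... but the gradient there is all zeros

x = torch.full((4,), 2.0, requires_grad=True)
f(x).backward()                  # for this W, f(x) > 0 and the gradient is non-zero,
print(x.grad)                    # so x lies in a different linear region than the origin
```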
## 3.5 DTD For Deep Neural Networks: The Training-Free Relevance Model

Applying the recursive Taylor decomposition to a one-layered network yielded a set of easily applied relevance propagation rules, which allowed skipping the explicit computation of the root points. Of course, it would be desirable to skip the computation of roots for deep neural networks too. As a solution, Montavon et al. (2017) proposed a so-called *training-free* relevance model. We follow the derivation from the review article (Montavon et al., 2018). Let $R^{l+1}(a_{l+1})$ be the relevance computed for an upper layer. Montavon et al. (2017) then makes the following assumption:

**Assumption 1** (Positive Linear Relevance). *The relevance of the upper layer $R^{l+1}(a_{l+1})$ can be written as $R^{l+1}(a_{l+1}) = a_{l+1} \odot c_{l+1}$, where $c_{l+1} \in \mathbb{R}^+$ should be a constant and positive vector.*

As $R^{l+1}(a_{l+1}) = c_{l+1} \odot [W_l a_l + b_l]^+$, we can construct a so-called relevance neuron:

$$\hat{R}^{l+1}(a_{l+1}) = [\hat{W}_{l+1} a_{l+1} + \hat{b}_{l+1}]^+, \tag{14}$$

where we pulled $c$ into the layer's parameters: $\hat{W}_{l+1} = W_{l+1} \odot C_{l+1}$, where $C_{l+1} = [c_{l+1}, \ldots, c_{l+1}]$ is a repeated version of $c_{l+1}$, and $\hat{b}_{l+1} = b_{l+1} \odot c_{l+1}$. This formulation is similar to the relevance of the hidden layer of the one-layer network in equation 11. The difference is that the root point and search direction are based on the modified weights $\hat{W}_{l+1}$ and $\hat{b}_{l+1}$. Using $\hat{w}_j = \hat{W}_{l+1[j:]} = c_{l+1[j]} W_{l+1[j:]}$, we can write the general relevance propagation rule of equation 13 as:

$$\hat{R}^l(a_l) = \sum_{j=1}^{d} \left( \frac{\hat{w}_j \odot v_j}{\hat{w}_j^T v_j} \hat{R}^{l+1}_{[j]}\big(f_l(a_l)\big) \right) = \sum_{j=1}^{d} \left( \frac{w_j \odot v_j}{w_j^T v_j} \hat{R}^{l+1}_{[j]}\big(f_l(a_l)\big) \right), \tag{15}$$

where the $c_{l+1[j]}$ cancel out. The corresponding root point would be $\tilde{a}^{(j)}_l = a_l - \frac{w_j^T a_l + b_j}{w_j^T v_j} v_j$. Interestingly, this derivation recovers the one-layer case from equation 13. Thus, Montavon et al. (2018) argue that all the rules from the linear case (Section 3.4) can also be applied to a deep neural network. This result can easily be extended to sum-pooling layers, as they are equivalent to a linear layer with weights of value 1.

**Remark 3** (Is it correct that $c_{l+1}$ is constant?). A global constant $c_{l+1}$ cannot exist, as changing the input vector can result in a totally different output, which would change the relevance magnitude. A local approximation of $c_{l+1}$ could be correct if the root points stay within the same local linear region, where the function's gradient $\nabla f$ is locally constant.

**Remark 4** (C5: Is there a mechanism to enforce that the root point $\tilde{a}^{(j)}_l \in N_{R^l}(a_l)$?). The root point corresponding to equation 15 would be $\tilde{a}^{(j)}_l = a_l - \frac{w_j^T a_l + b_j}{w_j^T v_j} v_j$. Will this root point be in the local region $N_f(a_l)$ of $a_l$? Probably not, as there is no mechanism enforcing this. We test this in more detail in the empirical evaluation.

## 3.5.1 Which Root Points To Choose For A Deep Neural Network?

In Section 3.4.1, we already discussed that there is no principled way to select the root points or the corresponding rules. For deep neural networks, DTD (Montavon et al., 2017) originally proposed picking the $z^+$-rule for all layers except the first one. In the more recent work Montavon et al. (2019), it was argued to use a combination of LRP0, LRP-ε, and the γ-rule. This was motivated by rather vague properties such as the "activations' entanglement", "spurious variations", and "spreading of relevance". It is concluded that: "Overall, in order to apply LRP successfully on a new task, it is important to carefully inspect the properties of the neural network layers, and to ask the human what kind of explanation is most understandable for him." - Montavon et al. (2019, Sec. 10.3). Thus, the choice of the rules lies in the hands of the user, who might choose any rule or root point.

Kohlbrenner et al. (2020) introduced a similar combination of rules as *LRP-Composite*: for the convolutional layers, they used the $z^+$-rule (or LRPα1β0), and for the fully-connected layers LRP0. As an improvement over using the $z^+$-rule for all layers, they found that this combination of rules did not suffer from class-insensitivity, i.e., the saliency maps do change when the explained output class is changed. However, it must be noted that this combination relies on particular properties of convolutional neural networks; specifically, there is little information mixing between more distant locations. Furthermore, the explanations are still insensitive to the later convolutional layers: the $z^+$-rule creates a fixed saliency map for the convolutional layers, which, however, can be scaled by the output of the LRP0 rule. For example, if the final convolutional output has shape (8, 8), then the saliency map can be scaled in an 8×8 grid.
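In matrix form, one step of the train-free model with the $z^+$-rule reduces to a few lines. The sketch below is a common LRP-style formulation consistent with equation 15 (an illustration with arbitrary toy weights, not the exact code of any library); it also shows that the rule approximately conserves relevance for non-negative inputs:

```python
import torch

def zplus_backward(W, a, relevance_out):
    """One train-free z+ propagation step through a linear layer:
    R_i = a_i * sum_j w_ji^+ * R_j / (sum_i' a_i' * w_ji'^+)."""
    Wp = W.clamp(min=0)                  # positive part of the weights
    z = a @ Wp.T + 1e-9                  # per-neuron denominators, eps for stability
    s = relevance_out / z
    return a * (s @ Wp)

torch.manual_seed(0)
W1, W2 = torch.randn(6, 8), torch.rand(1, 6)    # toy weights, biases omitted
a0 = torch.rand(8)                               # non-negative input
a1 = torch.relu(a0 @ W1.T)
out = a1 @ W2.T
R1 = zplus_backward(W2, a1, out)                 # start from the output relevance
R0 = zplus_backward(W1, a0, R1)
print(float(out.sum()), float(R0.sum()))         # approximately equal (conservation)
```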
## 4 Analysis Of The Recursive Application Of The Taylor Theorem

In the previous sections, we recapitulated how DTD applies the Taylor Theorem recursively to a one-layer and a deep neural network, and explained how the different propagation rules are derived. In this section, we provide a theoretical analysis of the recursive application of the Taylor Theorem. In particular, we study definition 3 from Section 3.3. As this definition is the most general formulation of the DTD theory, the results of our analysis apply to all propagation rules: they are not caused by one specific approximation but are inherent to the recursive application of the Taylor theorem. The following propositions are proven in Appendix A. The main idea of the proofs is to apply the product rule to equation 8 and then analyze the individual terms.

## 4.1 Size Of Admissible Regions For The Root Points Cannot Be Increased

**Proposition 1** (C1: Recursively applying the Taylor Theorem cannot increase the size of admissible regions). *Given a ReLU network $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}_{\geq 0}$, recursive relevance functions $R^l(a_l)$ with $l \in \{1, \ldots, n\}$ according to definition 3, and let $\xi$ index the explained logit. Then it holds for the admissible region $N_{R^l}(a_l)$ for the root points $\tilde{a}^{(j)}_l$ of the relevance function $R^l$ that $N_{R^l}(a_l) \subseteq N_{f_{n[\xi]} \circ \ldots \circ f_l}(a_l)$.*

As the valid region for the root points is restricted by the network $f$, we cannot evade the local linear region. This motivates a simple empirical test in Section 5.1: for each root point, we can check whether it is contained in the correct admissible region. This result also questions the motivation from Section 3.2 that the distance $|x - \tilde{x}|$ might be small, as this distance remains bounded by the local linear region of the network.

## 4.2 Locally Constant Roots Imply Equivalence Of Recursive Taylor And Gradient×Input

It is well known that LRP0 is equivalent to gradient×input for ReLU networks. This was first noted in (Shrikumar et al., 2016) and later also in (Kindermans et al., 2016; Ancona et al., 2018). We prove the following generalization for the recursive application of the Taylor Theorem in Appendix A.1.

**Proposition 2** (C2). *Let $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}_{\geq 0}$ be a ReLU network, $\xi$ be the index of the explained logit, and $R^l(a_l)$ (with $l \in 1 \ldots n+1$) be recursive relevance functions according to definition 3. If the root points $\tilde{a}_l(a_l)$ are locally constant w.r.t. the layer's input ($\forall l \in 1 \ldots n : \partial \tilde{a}_l / \partial a_l = 0$), then:*

$$R(x) = R(\tilde{x}) + \nabla f_{[\xi]}(x) \odot (x - \tilde{x}), \tag{16}$$

*where $x = a_1$ is the input vector and $R(x) = R^1(x)$.*

The similarity with gradient×input can be seen when choosing a root point $\tilde{x} = 0$ such that $R(0) = 0$. Then, the resulting relevance would be $\nabla f_{[\xi]}(x) \odot x$. A fixed root point for each linear region would be a valid and even desirable choice. For example, from an efficiency perspective, it would be preferable to search for a valid root point in each linear region only once. Or one might want to select the one root point corresponding to the lowest network output. We also want to emphasize that no continuity constraint for selecting the root points exists: jumps at the boundaries between linear regions are allowed. This result contradicts DTD's motivation described in Section 3.2, as it explicitly aimed to find something more "stable" than the gradient.
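Proposition 2 can be illustrated numerically. For a bias-free ReLU network (an assumption made here only to get a clean check), gradient×input even sums exactly to the explained logit by positive homogeneity; the sketch below computes $\nabla f_{[\xi]}(x) \odot x$:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                    # bias-free ReLU network (illustrative)
    torch.nn.Linear(8, 8, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(8, 3, bias=False),
)
x = torch.randn(8, requires_grad=True)
xi = 1
net(x)[xi].backward()
relevance = x.grad * x.detach()               # gradient x input

# Bias-free ReLU networks are positively homogeneous, so by Euler's theorem
# the relevance sums exactly to the explained logit: sum_i R_i = f_xi(x).
print(float(relevance.sum()), float(net(x.detach())[xi]))
```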
## 4.3 Locally Dependent Root Points

As the next case, we look at the more general setting of root points depending locally on the layer's input:

**Proposition 3** (C3). *For a ReLU network $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}_{\geq 0}$ with $n$ layers and layer activations $a_l = f_{l-1}(a_{l-1})$, the relevance functions $R^{l-1}(a_{l-1})$ of the recursive application of the Taylor Theorem as given in equation 8 can be written as:*

$$R^{l-1}(a_{l-1}) = \sum_{j=1}^{d_l} \sum_{m=1}^{d_{l+1}} \left[ \left( \frac{\partial f_l(a_l)}{\partial a_{l[j]}} - \frac{\partial \tilde{a}^{(m)}_{l[j]}(a_l)}{\partial a_l} \cdot \frac{\partial f_l(a_l)}{\partial a_l} \right) \cdot \frac{\partial R^{l+1}_{[m]}\big(f_l(a_l)\big)}{\partial f_l(a_l)} \right] \odot \left( a_{l-1} - \tilde{a}^{(j)}_{l-1}(a_{l-1}) \right). \tag{17}$$

The relevance function $R^{l-1}$ is determined by the next layer's Jacobian $\partial f_l(a_l)/\partial a_l$ and also by a term including the root point's Jacobian $\partial \tilde{a}^{(j)}_l / \partial a_l$. Although some directions are recommended, the choice of root point is not restricted per se. It is merely recommended to choose it within the layer's input domain² and such that it minimizes the explained relevance. Any root point can be chosen, as long as it is from the linear region $N_{R^l_{[k]}}(a_l)$. However, this also means that $R^{l-1}(a_{l-1})$ can be influenced arbitrarily through the root point's Jacobian. Therefore, any explanation could be justified. A theory under which anything can be justified is clearly insufficient.

## 4.4 Why Not Use Analytic Activation Functions (Softplus)?

For ReLU networks, the Deep Taylor Decomposition suffers from the problem that the root point must be from the local linear region around the layer input $a_l$. A possible solution would be to use an analytic activation function, e.g. the Softplus activation. This would allow choosing any root point in $\mathbb{R}^{d_l}$, although a sufficiently good approximation might require an unreasonable number of higher-order terms. The main obstacle is that with each decomposition, higher-order derivatives accumulate:

² In Montavon et al. (2017, Section 4.1), the different rules were selected based on the input domain. However, the γ-rule, introduced in the more recent work Montavon et al. (2019), can lead to root points outside the ReLU's input domain $\mathbb{R}^+$: e.g. for $\gamma = 0$, the root point is given by $\tilde{x} = x - \frac{x}{w^T x} R^l(x)$, which can become negative for large relevance values $R^l(x)$.

Table 1: Empirical results of different DTD rules on a small neural network (3 layers, 10 input dimensions, 10 hidden dimensions). They show that the root points picked by the rules are not within the local linear region of the input, as each rule produced values below 100%. Some root points even yield the exact same network output as the original input.

| Evaluation \ Rule | LRP0 | γ = 1.0 | $w^2$ | $z^+$ |
|---|---|---|---|---|
| Same local linear region [expected 100%] | 41.20% | 38.70% | 37.70% | 41.10% |
| Same network output [expected 0%] | 14.62% | 13.85% | 14.01% | 13.99% |
**Proposition 4** (C4). *Let $f : \mathbb{R}^{d_1} \to \mathbb{R}^{d_{n+1}}$ be a neural network that contains an analytic activation function. Then each recursive application of the Taylor Theorem yields a higher-order derivative of the form:*

$$\frac{\partial}{\partial a_l} \left[ \frac{\partial R^{l+1}_{[j]}\big(f_l(a_l)\big)}{\partial a_l} \right]_{a_l = \tilde{a}^{(j)}_l(a_l)} \cdot \Big\{ a_l - \tilde{a}^{(j)}_l(a_l) \Big\}_{[k]}. \tag{18}$$

Thus, for an $n$-layered network, we would get $n$-th order derivatives. The problem is that it is unclear how these chains of higher-order derivatives behave.

## 5 Experiments

## 5.1 (C6) DTD Train-Free

We implemented the train-free DTD using an explicit computation of the root points. The network consists of 3 linear layers, each with a ReLU activation. The input and each layer have 10 dimensions. We initialized the network with random weights and used non-positive biases, as Montavon et al. (2017) suggested (even though we have shown that this does not have the consequences claimed in Montavon et al. (2017)). As we are only interested in disproving claims, it is acceptable to show that there exists one neural network on which the DTD delivers inconsistent results. Therefore, we also did not train the network on any task. We provide pseudocode for our implementation in Algorithm 2. The main simplifications of the implementation are that (1) the relevance of the higher layers is computed with the input of the layer and not the root point, and (2) the root points are computed using the search directions outlined in Section 3.4. We tested our implementation against Captum's implementation of the DTD (Kokhlikyan et al., 2020) and found the deviation to be less than 1 × 10⁻⁸.

Verifying that two points are within the same local region would require showing that their gradients are equal and that there is a path between the two points along which all points also have equal gradients. As the latter is more difficult to show, we only test the necessary condition of equal gradients. We therefore compare the gradient of the input with the gradient of the root points, $|\nabla f(x) - \nabla f(\tilde{x})|$, on 1000 random inputs. The input points were sampled such that the network output is greater than 0.1. The numerical results are reported in Table 1. Fewer than 100% of all root points have gradient differences of zero; thus, root points exist that must be from a different local region. This violates the admissibility requirement of Proposition 1, which requires all root points to be within the function's local region. Although we only show results on an exemplary 3-layered network, the situation would only be worse for more complex networks, as the number of local regions increases exponentially with layer depth (Montufar et al., 2014).

As a second analysis, we tested how the root points influence the network output. One might assume that a root point alters the network output. However, this is not always the case (see row "Same network output" in Table 1). At the very least, these root points do not explain the output of the neural network.
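A reduced version of this test fits in a few lines. The sketch below is a simplification of our experiment (only the first layer's $w^2$-rule root points are tested and the output filter is omitted, so the percentages will differ from Table 1); it checks the necessary equal-gradient condition:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
    torch.nn.Linear(10, 1),
)

def grad_f(x):
    x = x.clone().requires_grad_(True)
    net(x).sum().backward()
    return x.grad

W, b = net[0].weight.detach(), net[0].bias.detach()
same, n = 0, 1000
for _ in range(n):
    x = torch.randn(10)
    w = W[0]                                     # test the first hidden neuron only
    x_root = x - (w @ x + b[0]) / (w @ w) * w    # w2-rule root point
    same += int(torch.allclose(grad_f(x), grad_f(x_root), atol=1e-6))
print(f"{100 * same / n:.1f}% pass the equal-gradient check (expected 100%)")
```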
## 5.2 (C7) Applying Sanity Checks To (Arras et al., 2022)

Figure 2: (1) Input images from the CLEVR-XAI dataset with the LRPα1β0 saliency maps computed for the (2) correct class, (3) an incorrect class, and (4) a different question. The original question is written above. The saliency maps do not change visually when a different output class is explained. However, changing the question highlights other regions.

A recent work (Arras et al., 2022) evaluated different saliency methods on the CLEVR VQA dataset using ground-truth segmentation masks. Interestingly, they found LRPα1β0 (equivalent to the DTD $z^+$-rule) to highlight the object of interest particularly well: "[...] a high connection between the relevance heatmaps and the target objects of each question". This finding seems to contradict Sixt et al. (2020), which found that LRPα1β0 becomes independent of the network's deeper layers. In Arras et al. (2022), it was therefore concluded: "Maybe the phenomenon described in (Sixt et al., 2020) becomes predominant in the asymptotic case of a neural network with a very high number of layers, [...]".

A simple empirical test would have been to check whether LRPα1β0's saliency maps change when the network's last layer is changed. To perform this test, we replicated their setup and trained a relation network (Santoro et al., 2017) on the CLEVR v1.0 dataset (Johnson et al., 2017). The network reached an accuracy of 93.47%, comparable to the 93.3% of (Arras et al., 2022) and the 95.5% of (Santoro et al., 2017). We include more details about the model in Appendix B.

Figure 3: Histogram of absolute differences between the saliency maps for Correct Logit vs. Random Question and Correct Logit vs. Random Logit.

We then compared 1000 LRPα1β0 saliency maps for the correct answer, an incorrect answer (but from the same category), and the correct answer but a different question. It is valid to ask for an explanation of a different class, for example, to understand which evidence is present for *sphere* instead of *cube*. The saliency maps were scaled to the range [0, 1], and the differences were measured using the mean absolute difference. Figure 3 shows a histogram of the differences. While the saliency maps are very similar in both cases, there is more variability when changing the question: for "correct logit vs. random question", there are an order of magnitude more pixels with a difference of ≈ 1. Looking at the resulting saliency maps in Figure 2, one can see that the saliency maps differ quite significantly when changing the question. In contrast, the saliency map of the wrong answer does not change.

First, these results validate the claim in (Sixt et al., 2020) that LRPα1β0 is independent of the network's deeper layers. Second, they indicate that information leaks between the question and LRPα1β0's saliency maps. The reason for this information leakage can be found in the specific architecture of relation networks: pairs are formed between all feature-map locations of the convolutional output. As the feature map has shape (8, 8, 24), 64 · 64 pairs are formed (i.e., height² · width²). Additionally, the question embedding produced by an LSTM layer is concatenated to each pair. This yields triples $(v_{ij}, v_{kl}, o_{\mathrm{LSTM}})$, where $i, j, k, l \in \{1, \ldots, 8\}$ and $v \in \mathbb{R}^{8 \times 8 \times 24}$ is the convolutional stack's output. As the convolutional layers and the LSTM layer are trained together, their representations are aligned. Thus, changing the LSTM embedding changes the internal representation in the subsequent layers. For relevant locations, the question embedding matches the convolutional activations better and therefore leads to higher saliency at those locations. However, the saliency maps still become independent of the network's deeper layers.
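For reference, the difference measure used above is simply the mean absolute difference of $[0, 1]$-rescaled maps; a minimal sketch matching our description (not the exact evaluation code):

```python
import torch

def saliency_difference(s1: torch.Tensor, s2: torch.Tensor) -> float:
    """Mean absolute difference between two saliency maps of equal shape,
    each rescaled to [0, 1] beforehand."""
    def rescale(s):
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return float((rescale(s1) - rescale(s2)).abs().mean())

# Compare e.g. saliency(correct logit) vs. saliency(random logit) and
# saliency(correct logit) vs. saliency(random question), as in Figure 3.
```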
The implications are substantial: for example, if the model's final layers were fine-tuned on a new task, the LRPα1β0 explanation would not change and could not be used to explain this model. Even worse, if the model was altered to predict spheres instead of cubes, the LRP explanation would not reflect this. It is quite fascinating that the LRPα1β0 explanations highlight the right object according to the ground truth but fail to highlight evidence for the wrong object. This result also shows how difficult it is to evaluate explanation methods empirically.

## 6 Conclusion

We have shown that DTD, which has been cited as the theoretical foundation of numerous follow-up post-hoc explanation techniques (Ali et al., 2022; Binder et al., 2016; Kindermans et al., 2018; Arras et al., 2017; Hui & Binder, 2019; Huber et al., 2019; Eberle et al., 2020), exhibits serious flaws that explain why saliency maps created with these methods are independent of the output. From Sixt et al. (2020), we know that the positive matrices produced by the $z^+$-rule converge to a rank-1 matrix. These positive matrices stem from a specific selection of the root point, and as the selection of the root points is not restricted, the $z^+$-rule can be justified by the DTD theory, as could every other explanation, by picking an appropriate root. DTD as a theoretical framework for explanations is under-constrained and must be considered insufficient. Caution is warranted when using explanations derived from this theory.

At the core of the problem, there is no restriction and little guidance on choosing the root points. Under certain conditions (constant root points), DTD reduces to backpropagating the gradient, albeit hidden behind a complex mathematical structure. In the other case (input-dependent root points), DTD leaves open a backdoor through which virtually any explanation can be created by crafting the root point's Jacobian accordingly. This, again, is obfuscated by the theory rather than made transparent.

Since its first arXiv submission (Montavon et al., 2015), the DTD publication has been cited numerous times. Although even the authors have reported class-insensitive behavior (Kohlbrenner et al., 2020; Montavon et al., 2018)³, follow-up works have readily used DTD's key concepts, motivated by the seemingly robust mathematical foundation, instead of searching for the underlying reasons. Furthermore, explanations based on DTD were used in various applications, for example, to validate models (Andresen et al., 2020), gain insights into geoscientific questions (Toms et al., 2020), or conduct user studies (Alqaraawi et al., 2020).

While we were able to discover serious issues with DTD, we do not see how to solve them. We therefore want to point out that other theoretically well-justified methods exist: *(Deletion of Information)* Which information can be deleted without changing the network output? One approach uses noise (Schulz et al., 2020), another discrete deletion of the input (Macdonald et al., 2019). *(Testing prediction capabilities)* We can test whether certain concepts are present in the network by trying to predict them from intermediate features (Kim et al., 2018). *(Model inversion)* How would the input need to change to predict another class? This question can be answered using invertible models (Hvilshøj et al., 2021; Dombrowski et al., 2021) or conditional generative models (Singla et al., 2020).
*(Simple Models)* If a similar performance is achieved by a simpler, more interpretable model, why not simply use that? For example, (Zhang et al., 2021) replaced part of the network with a linear model. All these approaches do not come with a complicated mathematical superstructure; rather, they have a simple and intuitive motivation.

³ "A reduced number of fully-connected layers avoids that the relevance, when redistributed backwards, looses its connection to the concept being predicted." - Montavon et al. (2018, Sec 5.1)

## Broader Impact Statement

Although our work focuses on the theoretical foundations of a particular explanation method, we see broader implications of this work. Our work demonstrates that the theoretical foundations of explanation methods need rigorous analysis before they can support the trust that developers, users, and even regulatory bodies may put in them. This is especially important in the field of explainable AI, since empirically evaluating explanations is difficult. The field offers a variety of explanation methods and ways to test the quality of explanations. We recommend using more than just one method and employing a range of metrics and user tests to make sure explanations are helpful in potentially critical use cases such as medical decision making or the screening of job applications.

## Acknowledgement

We want to thank the reviewers for their helpful feedback, which improved this manuscript further. The computations were performed on the Curta cluster provided by the Zedat, Freie Universität Berlin (Bennett et al., 2020). Finally, we thank Jonas Köhler for discussions about the manuscript idea.

## References

Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, and Lior Wolf. XAI for transformers: Better explanations through conservative propagation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 435–451. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/ali22a.html.

Ahmed Alqaraawi, Martin Schuessler, Philipp Weiß, Enrico Costanza, and Nadia Berthouze. Evaluating saliency map explanations for convolutional neural networks: A user study. In *Proceedings of the 25th International Conference on Intelligent User Interfaces*, IUI '20, pp. 275–285, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450371186. doi: 10.1145/3377325.3377519. URL https://doi.org/10.1145/3377325.3377519.

Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In *International Conference on Learning Representations*, February 2018. URL https://openreview.net/forum?id=Sy21R9JAW.

Niek Andresen, Manuel Wöllhaf, Katharina Hohlbaum, Lars Lewejohann, Olaf Hellwich, Christa Thöne-Reineke, and Vitaly Belik. Towards a fully automated surveillance of well-being status in laboratory mice using deep learning: Starting with facial expression analysis. *PLoS One*, 15(4):e0228059, 2020.

Raman Arora, Amitabh Basu, Poorya Mianjy, and Anirbit Mukherjee. Understanding Deep Neural Networks with Rectified Linear Units. In *International Conference on Learning Representations (ICLR)*, February 2018. URL https://openreview.net/forum?id=B1J_rgWRW.

Leila Arras, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek.
Explaining recurrent neural network predictions in sentiment analysis. *EMNLP 2017*, pp. 159, 2017.

Leila Arras, Ahmed Osman, and Wojciech Samek. CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations. *Information Fusion*, 81:14–40, 2022. ISSN 1566-2535. doi: https://doi.org/10.1016/j.inffus.2021.11.008. URL https://www.sciencedirect.com/science/article/pii/S1566253521002335.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PLoS ONE*, 10(7), 2015. ISSN 19326203. doi: 10.1371/journal.pone.0130140. URL http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0130140&type=printable.

David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The Shattered Gradients Problem: If resnets are the answer, then what is the question? In *Proceedings of the 34th International Conference on Machine Learning*, pp. 342–350. PMLR, July 2017. URL https://proceedings.mlr.press/v70/balduzzi17b.html. ISSN: 2640-3498.

Loris Bennett, Bernd Melchers, and Boris Proppe. Curta: A general-purpose high-performance computer at Zedat, Freie Universität Berlin. http://dx.doi.org/10.17169/refubium-26754, 2020.

Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. In *Artificial Neural Networks and Machine Learning - ICANN 2016*, Lecture Notes in Computer Science, pp. 63–71. Springer International Publishing, 2016. ISBN 978-3-319-44781-0. doi: 10.1007/978-3-319-44781-0_8.

Jianbo Chen, Le Song, Martin J. Wainwright, and Michael I. Jordan. L-shapley and c-shapley: Efficient model interpretation for structured data. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=S1E3Ko09F7.

Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. Explanations can be manipulated and geometry is to blame. *Advances in Neural Information Processing Systems*, 32, 2019.

Ann-Kathrin Dombrowski, Jan E Gerken, and Pan Kessel. Diffeomorphic explanations with normalizing flows. In *ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models*, 2021. URL https://openreview.net/forum?id=ZBR9EpEl6G4.

Oliver Eberle, Jochen Büttner, Florian Kräutli, Klaus-Robert Müller, Matteo Valleriani, and Grégoire Montavon. Building and interpreting deep similarity models. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2020.

G. B. Folland. *Advanced calculus*. Prentice Hall, Upper Saddle River, NJ, 2002. ISBN 978-0-13-065265-2.

Andreas Holzinger, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, and Wojciech Samek. *Explainable AI Methods - A Brief Overview*, pp. 13–38. Springer International Publishing, Cham, 2022. ISBN 978-3-031-04083-2. doi: 10.1007/978-3-031-04083-2_2. URL https://doi.org/10.1007/978-3-031-04083-2_2.

Tobias Huber, Dominik Schiller, and Elisabeth André. Enhancing explainability of deep reinforcement learning through selective layer-wise relevance propagation. In *Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz)*, pp. 188–202. Springer, 2019.

Lucas Y. W. Hui and Alexander Binder. BatchNorm Decomposition for Deep Neural Network Interpretation.
In *Advances in Computational Intelligence*, volume 11507 of Lecture Notes in Computer Science, pp. 280–291. Springer International Publishing, 2019. doi: 10.1007/978-3-030-20518-8_24. URL http://link.springer.com/10.1007/978-3-030-20518-8_24.

Frederik Hvilshøj, Alexandros Iosifidis, and Ira Assent. ECINN: Efficient counterfactuals from invertible neural networks. *ArXiv*, abs/2103.13701, 2021.

Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2901–2910, 2017.

Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In *International Conference on Machine Learning*, pp. 2668–2677. PMLR, 2018.

Pieter-Jan Kindermans, Kristof Schütt, Klaus-Robert Müller, and Sven Dähne. Investigating the influence of noise and distractors on the interpretation of neural networks. Technical Report arXiv:1611.07270, arXiv, November 2016. URL http://arxiv.org/abs/1611.07270.

Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: PatternNet and PatternAttribution. February 2018. URL https://openreview.net/forum?id=Hkn7CBaTW.

Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, and Sebastian Lapuschkin. Towards best practice in explaining neural network decisions with LRP. In *2020 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–7. IEEE, 2020.

Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for PyTorch, 2020.

I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. Problems with shapley-value-based explanations as feature importance measures. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 5491–5500. PMLR, 2020. URL https://proceedings.mlr.press/v119/kumar20e.html.

Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. *Advances in Neural Information Processing Systems*, 30, 2017.

Daniel D Lundstrom, Tianjian Huang, and Meisam Razaviyayn. A rigorous study of integrated gradients method and extensions to internal neuron attributions. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 14485–14508. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/lundstrom22a.html.

Jan Macdonald, Stephan Wäldchen, Sascha Hauch, and Gitta Kutyniok. A rate-distortion framework for explaining neural network decisions, 2019.

Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. *arXiv preprint arXiv:1512.02479*, 2015.
Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, pp. 193–209, 2019. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. *Pattern Recognition*, 65: 211–222, 2017. ISSN 0031-3203. doi: https://doi.org/10.1016/j.patcog.2016.11.008. URL https://www. sciencedirect.com/science/article/pii/S0031320316303582. Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. *Digital Signal Processing*, 73:1–15, February 2018. ISSN 10512004. doi: 10.1016/j. dsp.2017.10.011. URL https://linkinghub.elsevier.com/retrieve/pii/S1051200417302385. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the Number of Linear Regions of Deep Neural Networks. In *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014. URL https://arxiv.org/abs/1402.1869. Weili Nie, Yang Zhang, and Ankit Patel. A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations. In Proceedings of the 35th International Conference on Machine Learning, pp. 3809–3818. PMLR, July 2018. URL https://proceedings.mlr.press/v80/nie18a.html. ISSN: 2640-3498. Wojciech Samek, Leila Arras, Ahmed Osman, Grégoire Montavon, and Klaus-Robert Müller. Explaining the decisions of convolutional and recurrent neural networks. 2021a. Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. *Proceedings of the IEEE*, 109(3):247–278, March 2021b. ISSN 1558-2256. doi: 10.1109/JPROC.2021.3060483. Conference Name: Proceedings of the IEEE. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings. neurips.cc/paper/2017/file/e6acf4b0f69f6f6e60e9a815938aa1ff-Paper.pdf. Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. Restricting the flow: Information bottlenecks for attribution. In *International Conference on Learning Representations*, 2020. URL https: //openreview.net/forum?id=S1xWh1rYwB. Harshay Shah, Prateek Jain, and Praneeth Netrapalli. Do input gradients highlight discriminative features? In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 2046–2059. Curran Associates, Inc., 2021. URL https: //proceedings.neurips.cc/paper/2021/file/0fe6a94848e5c68a54010b61b3e94b0e-Paper.pdf. Lloyd S. Shapley. *Notes on the N-Person Game - II: The Value of an N-Person Game*. RAND Corporation, Santa Monica, CA, 1951. doi: 10.7249/RM0670. Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences, 2016. URL https://arxiv.org/abs/ 1605.01713. Sumedha Singla, Brian Pollack, Junxiang Chen, and Kayhan Batmanghelich. Explanation by progressive exaggeration. 
In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=H1xFWgrFPS.

Leon Sixt, Maximilian Granz, and Tim Landgraf. When explanations lie: Why many modified BP attributions fail. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 9046–9057. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/sixt20a.html.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. *arXiv preprint arXiv:1412.6806*, 2014.

Erik Štrumbelj and Igor Kononenko. A general method for visualizing and explaining black-box regression models. In *International Conference on Adaptive and Natural Computing Algorithms*, pp. 21–30. Springer, 2011.

Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. *Knowledge and Information Systems*, 41(3):647–665, 2014.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *International Conference on Machine Learning*, pp. 3319–3328. PMLR, 2017.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013.

Benjamin A Toms, Elizabeth A Barnes, and Imme Ebert-Uphoff. Physically interpretable neural networks for the geosciences: Applications to earth system variability. *Journal of Advances in Modeling Earth Systems*, 12(9):e2019MS002002, 2020.

Tom Viering, Ziqi Wang, Marco Loog, and Elmar Eisemann. How to manipulate cnns to make them lie: the gradcam case. *arXiv preprint arXiv:1907.10901*, 2019.

Stephan Waeldchen, Jan Macdonald, Sascha Hauch, and Gitta Kutyniok. The computational complexity of understanding binary classifier decisions. *J. Artif. Int. Res.*, 70:351–387, May 2021. ISSN 1076-9757. doi: 10.1613/jair.1.12359. URL https://doi.org/10.1613/jair.1.12359.

Junlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. Gradient-based analysis of NLP models is manipulable. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 247–258, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.24. URL https://aclanthology.org/2020.findings-emnlp.24.

H. Xiong, L. Huang, M. Yu, L. Liu, F. Zhu, and L. Shao. On the Number of Linear Regions of Convolutional Neural Networks. In *ICML*, 2020.

Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A Ehinger, and Benjamin IP Rubinstein. Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 11682–11690, 2021.

## A Proofs

## A.1 Proof Of Propositions 2, 3, And 4

We prove Propositions 2, 3, and 4 together. We start with the partial derivative of the relevance function at layer $l$ and apply the product rule to equation 8:
$$+\underbrace{\frac{\partial}{\partial\mathbf{a}_{l}}\cdot\left[\left.\frac{\partial R_{l\,ij}^{l+1}\left(f_{l}(\mathbf{a}_{l})\right)}{\partial\mathbf{a}_{l}}\right]_{\mathbf{a}_{l}=\mathbf{a}_{l}^{(j)}(\mathbf{a}_{l})}\cdot\left\{\mathbf{a}_{l}-\tilde{\mathbf{a}}_{l}^{(j)}(\mathbf{a}_{l})\right\}_{[x]}\right\}\tag{21}$$ $$=0,\text{for ReLU networks}$$ In this first step, we applied the product rule. For ReLU networks, the higher-order terms are zero. For other networks (Transformer, LSTMs), the higher-order terms will not be zero. The terms which are zero for ReLU networks are exactly the terms from Proposition 4. In the next step, we will apply the chain rule and rewrite Balrks{Bal as the k-standard basis ek (a one-hot vector where the k-th dimension is 1): $$\frac{\partial R_{\{n\}}^{l}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}=\sum_{j=1}^{d_{l+1}}\left(e_{k}-\frac{\partial\tilde{\mathbf{a}}_{\{n\}}^{(j)}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}\right)\cdot\left[\frac{\partial f_{l}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}\cdot\frac{\partial R_{\{n\}}^{l+1}\left(f_{l}(\mathbf{a}_{l})\right)}{\partial f_{l}(\mathbf{a}_{l})}\right]_{\mathbf{a}_{l}=\tilde{\mathbf{a}}_{l}^{(j)}(\mathbf{a}_{l})},$$ $$(22)$$ $$(23)$$ , (22) The next observation is that the gradients inside the r*. . .*sal"a˜ pjq lpalq must be the same for al and the root point a˜ pjq las both are in the same local region of fl ˝ Rl`1 rjs . Therefore, we can safely drop the evaluation of the gradient at the root point (r*. . .*sal"a˜ pjq lpalq ) and write: $$\frac{\partial R_{[k]}^{l}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}=\sum_{j=1}^{d_{l+1}}\left(\mathbf{a}_{j}\right)$$ ¨ $$\frac{\partial f_{l}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l_{[k]}}}-\frac{\partial\hat{\mathbf{a}}_{l_{[k]}}^{(j)}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}\cdot\frac{\partial f_{l}(\mathbf{a}_{l})}{\partial\mathbf{a}_{l}}\Bigg{)}\cdot\frac{\partial R_{l_{i3}}^{l+1}\left(f_{l}(\mathbf{a}_{l})\right)}{\partial f_{l}(\mathbf{a}_{l})}$$ (24) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (25) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (26) $$\begin{array}{l}~~~~~~~~~~~~~~\end{array}$$ (27) . Substituting this result into the definition of Rl´1pal´1q from equation 8 yields the result of Proposition 3. To proof proposition 2, we use Ba˜ pjq lrks palq{Bal " 0 and get: BRl rkspalq Bal" dÿl`1 j"1 Bflpalq Balrks¨ BRl`1 rjspflpalqq Bflpalq(24) " Bflpalq Balrks¨ dÿl`1 j"1 BRl`1 rjspflpalqq Bflpalq(25) " Bflpalq Balrks¨ dÿl`1 j"1 Bfl`1pal`1q Bal`1rjs¨ dÿl`2 i"1 BRl`2 rispfl`1pal`1qq Bfl`1pal`1q(26) " Bflpalq Balrks¨ Bfl`1pal`1q Bal`1¨ dÿl`2 i"1 BRl`2 rispfl`1pal`1qq Bfl`1pal`1q(27) i"1 BRn`1 ris ` f n`1 npanq ˘ " Bflpalq Balrks¨ Bfl`1pal`1q Bal`1¨ . . . ¨ Bf n n´1pan´1q Ban´1¨ dÿn`1 Bf n`1 n panq " Bflpalq Balrks¨ Bfl`1pal`1q Bal`1¨ . . . ¨ Bfn´1pan´1q Ban´1¨ Bfnpanq Ban¨ eξ (29) " ∇flrξspalq (30) $$(31)$$ (28) $\binom{29}{2}$ (30) . Substituting this into the relevance function of the input Rpxq " R1pa1q and using BRpxq Bx ˇ ˇ x"x˜pxq " BRpxq Bx (as x˜ must be in the same linear region), yields: ˇ $$R(\mathbf{x})=R({\hat{\mathbf{x}}}(\mathbf{x}))+\left.{\frac{\partial R(\mathbf{x})}{\partial\mathbf{x}}}\right|_{\mathbf{x}={\hat{\mathbf{x}}}(\mathbf{x})}\odot(\mathbf{x}-{\hat{\mathbf{x}}})=R({\hat{\mathbf{x}}}(\mathbf{x}))+\nabla f_{[t]}(\mathbf{x})\odot(\mathbf{x}-{\hat{\mathbf{x}}}(\mathbf{x})),$$ which finished the proof of Proposition 2. 
## A.2 Admissible Region For The Root Points Of The Relevance Function

We now prove Proposition 1, which is restated here:

Proposition 1 (C1: Recursive Taylor cannot increase the size of admissible regions) *Given a ReLU network* $f:\mathbb{R}^{d_{1}}\to\mathbb{R}^{d_{n+1}}_{\geq 0}$, *recursive relevance functions* $R^{l}(\mathbf{a}_{l})$ *with* $l\in\{1,\ldots,n\}$ *according to Definition 3, and let* $\xi$ *index the explained logit. Then it holds for the admissible region* $\mathcal{N}_{R^{l}}(\mathbf{a}_{l})$ *for the root points* $\tilde{\mathbf{a}}^{(j)}_{l}$ *of the relevance function* $R^{l}$ *that* $\mathcal{N}_{R^{l}}(\mathbf{a}_{l})\subseteq\mathcal{N}_{f^{n}_{\xi}\circ\ldots\circ f_{l}}(\mathbf{a}_{l})$.

Let the root points $\tilde{\mathbf{a}}_{1},\ldots,\tilde{\mathbf{a}}_{n}$ be fixed. We proceed by induction over the number of layers. We start with the induction base case at the final layer. There, we have $R^{n}(\tilde{\mathbf{a}}_{n})=f_{[\xi]}(\tilde{\mathbf{a}}_{n})$, which follows from the recursion base case. Clearly, $\mathcal{N}_{R^{n}}(\tilde{\mathbf{a}}_{n})\subseteq\mathcal{N}_{f^{n}_{\xi}}(\tilde{\mathbf{a}}_{n})$.

Induction step: We assume $\mathcal{N}_{R^{l+1}}(\tilde{\mathbf{a}}_{l+1})\subseteq\mathcal{N}_{f_{l+1}}(\tilde{\mathbf{a}}_{l+1})$. For the layer $l$, the root points must be valid for the function $R^{l+1}(f_{l}(\tilde{\mathbf{a}}_{l}))$. As we know that $\mathcal{N}_{R^{l+1}}(\tilde{\mathbf{a}}_{l+1})\subseteq\mathcal{N}_{f_{l+1}}(\tilde{\mathbf{a}}_{l+1})$, it must also be the case that $\mathcal{N}_{R^{l}}(\tilde{\mathbf{a}}_{l})\subseteq\mathcal{N}_{f_{l}}(\tilde{\mathbf{a}}_{l})$.

## B Details About The Relation Network For The Clevr Dataset

Our code base builds upon a publicly available implementation of relation networks4 and utilizes Captum (Kokhlikyan et al., 2020) for computing LRP$_{\alpha 1\beta 0}$ explanations. We also set up the CLEVR-XAI dataset released on GitHub5.

4https://github.com/rosinality/relation-networks-pytorch
5https://github.com/ahmedmagdiosman/clevr-xai/releases/tag/v1.0

## C Pseudo-Code

## C.1 Full-Backward DTD

Algorithm 1 Pseudocode for the recursive application of the Taylor theorem. The global state contains the following variables: $f_1, \ldots, f_n$, the layer functions of the network; $d_1, \ldots, d_{n+1}$, the dimension of the input to each layer; and $\xi$, the index of the output neuron.

function get_relevance(l: layer index, a_l: the layer input)
    if l = n + 1 then
        return a_l[ξ]
    end if
    ã_l ← find_root_point(f, l, a_l)
    R^{l+1} ← get_relevance(l + 1, f_l(ã_l))
    for j ∈ 1 ... d_{l+1} do
        ã_l.grad ← 0
        R^{l+1}_[j].backward()
        r_j ← ã_l.grad ⊙ (a_l − ã_l)
    end for
    return Σ_{j=1}^{d_{l+1}} r_j
end function

## C.2 Train-Free DTD

Algorithm 2 DTD Train-Free

function find_root_point(l: layer index, a_l: the layer input, R^{l+1}_[j]: the relevance)
    w_j ← W_[j:]
    if z+-rule then v ← a_l ⊙ 1_{w_j ≥ 0}
    else if w²-rule then v ← w_j
    else if γ-rule then v ← a_l ⊙ (1 + γ 1_{w_j ≥ 0})
    ...
    end if
    t ← R^{l+1}_[j] / (w_j^T v)
    return a_l − t v    ▷ Ensures that w_j^T (a_l − ã_l) = w_j^T (R_[j] / (w_j^T v)) v = R_[j]
end function

function get_relevance(l: layer index, a_l: the layer input)
    if l = n + 1 then
        return a_l[ξ]
    end if
    R^{l+1} ← get_relevance(l + 1, f_l(a_l))    ▷ relevance of the input instead of the root point ã_l
    for j ∈ 1 ... d_{l+1} do
        ã^{(j)}_l ← find_root_point(f, l, a_l, R^{l+1}_[j])
        ã^{(j)}_l.grad ← 0
        o ← W_l[j] ã^{(j)}_l + b_l[j]
        o.backward()
        r_j ← ã^{(j)}_l.grad ⊙ (a_l − ã^{(j)}_l)
    end for
    return Σ_{j=1}^{d_{l+1}} r_j
end function
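For concreteness, the rule-dependent branches of find_root_point can be written out in a few lines of NumPy. The sketch below covers only the z+- and w²-rules for a single output neuron of a linear layer; the function signature and names are illustrative simplifications of Algorithm 2, not the exact code used in the experiments.

```python
import numpy as np

def find_root_point(a_l, w_j, relevance_j, rule="z+"):
    """Train-free root point for output neuron j of a linear layer (cf. Algorithm 2).

    a_l: layer input (d,); w_j: j-th weight row (d,); relevance_j: scalar R^{l+1}_[j].
    """
    if rule == "z+":
        v = a_l * (w_j >= 0)   # search direction of the z+-rule
    elif rule == "w2":
        v = w_j                # search direction of the w^2-rule
    else:
        raise ValueError(f"unknown rule: {rule}")
    t = relevance_j / (w_j @ v)
    # By construction, w_j @ (a_l - root) = t * (w_j @ v) = relevance_j.
    return a_l - t * v
```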
Review 1:
Summary:
This paper provides a very technical theoretical study on the shortcomings of the Deep Taylor Decomposition (DTD) as a post-hoc explanation method for deep learning. Specifically, this work builds on prior studies that showed that the DTD method could produce arbitrarily different explanations depending on the root point specified by the user. The main contribution of this work is therefore a series of theoretical arguments that scrutinize this behavior and give a technical explanation for why it happens. In short, the authors show analytically that the DTD method can be arbitrarily manipulated to produce any explanation through the effect of the intermediate layer Jacobians with respect to the root points, and empirically, that the common heuristics used in practice to identify valid root points for the DTD method do not actually satisfy its main assumptions. This series of arguments leads the authors to conclude that the DTD method is not a theoretically sound post-hoc explanation method as it is (i) underspecified in its construction, and (ii) its main assumptions are violated by the heuristics used in practice.
Strengths and Weaknesses:
# Strengths
1. **A solid technical argumentation based on simple math**: The main strength of this work is that it provides theoretically solid arguments to settle a previously observed empirical phenomenon. Although very technical and specific to the DTD technique, the arguments are easy to follow and do not require heavy mathematical machinery. The fact that the root points influence the explanations through the Jacobian of the layer's output might look obvious in retrospect, but it is nice to see some work that provides analytical expressions for this dependence.
2. **Concrete message**: I appreciate that this work focuses on a very specific problem with a clear objective: "Analytically explaining how the DTD method is underspecified and why it can be made to fail in practice"
# Weaknesses
1. **Introduction is too technical**: Although I appreciate that this work is focused on a technical topic and goes straight to its point, I find the starting section to be too technical and hard to follow on a first read due to the use of very specific DTD jargon which has not been introduced before, right from the start (e.g., the $z^+$-rule or LRP).
2. **Abuse of heavy notation**: I find that some sections of the paper could be better off with some polishing of the notation to reduce the overuse of incoherent subscripts and superscripts. For example, in Section 2 the relevances of each layer are denoted as $R^l$ while later on Section 3 they are denoted as $R^f$, $R^{x}$ and $R^{h}$. The distinct use of "Tn" or "An" to denote the different arguments is also a bit confusing.
3. **Hard-to-follow storyline**: The structure of the paper is sometimes a bit hard to follow due to the constant use of technical detours to introduce new concepts. In this regard, I believe that having a less technical introduction where the main issues are broadly discussed, followed by a better-structured discussion of the paper organization, could help with this.
4. **(Minor) Unnecessarily naive experiments**: Although I do not think it is important for the paper's argumentation, I believe there is no reason for the experiments of this work to be performed in such a toy setting.
The authors could very easily provide, at almost no computational cost or engineering effort, the same experiment using some standard classification dataset such as CIFAR-10 and a popular architecture like ResNet18.
Requested Changes:
Overall, I think this paper meets the requirements to be published at TMLR: (i) it is technically sound, and (ii) it may appeal to some people in the community (although in this particular case, it might be a narrow set of the community interested in the technical details of the DTD method). In any case, I still believe the paper could be easily improved, mostly by improving its introduction, structure, and notation, as I listed under "weaknesses" before. Running the experiments on a less toy setting might also make the paper appeal to a broader audience (although I personally believe this is not strictly necessary).
Another way the authors could strengthen the appeal of their technical derivations is if they used their derived expressions explicitly to break a real trained network's DTD explanation by identifying root points that make the computed relevance independent of the output. This would be similar to the methods that compute adversarial examples on a model's explanations, but in this case the authors would make use of their analytically derived expressions, thus providing further insights about the structure of these failure modes.
Broader Impact Concerns:
I see no clear broader impact concerns derived from this very theoretical and technical work.

==================================================

Review 2:
Summary:
This paper undertakes the study of the method called Deep Taylor Expansion (DTE), which is a strategy to provide some kind of post-hoc explanations for the output of neural networks. In particular, the authors present the main ideas (expanding the output in terms of a Taylor expansion), explain how this is simplified when the root points are in the same linear regions, and discuss different strategies on how to do this. Contrary to what one would expect, these different choices are shown to be problematic (e.g. resulting in relevance vectors that can be independent of the input, or that can be obtained to justify any output). The empirical demonstration furthermore shows that in practice some of the underlying assumptions (such as the root corresponding to the same linear region) don't really hold either. In a nutshell, this work attempts to shed some light onto the DTE, clarifying some concepts while disproving some common "myths" about this approach.
Strengths and Weaknesses:
Strengths:
- The paper is clearly written, including an overview of the overall framework and ideas
- The paper is well motivated, and has an important purpose: if explanation methods are to provide further understanding about the predictions of NNs, it is fundamental that this is done in a rigorous and careful way, and not by resorting to loosely motivated 'hacks'.
- The contribution is simple and important.
Weaknesses:
- Very small reorganization of content, and rephrasing of some parts might be good (see below).
- Some related references missing (see below).
Requested Changes:
I very much appreciate this contribution, and the authors' intention to clarify what certain popular approaches do, and what they do not do. I only have a few minor comments:
- As I read through the paper, I thought that some comments on other theoretically-motivated methods for explainability would contribute to the broader picture of the field.
Later, almost at the end, I discovered that the authors do in fact have a section on Related Works which addresses these points. I believe it might be better to relocate that section to near the beginning, perhaps after the introduction. In that way, the reader will understand that several other principled methods exist, and that this paper will only study one family of them (DTE). Explaining this explicitly might provide a nice context for the reader to better understand the contribution of this paper.
- I think there are some works that the authors might find relevant for this paper. E.g.: [Do Input Gradients Highlight Discriminative Features?, Shah et al, 2021] provides very interesting insights showing that methods based on gradients can be provably incorrect, which supports the findings of this paper. Furthermore, the work [From Shapley back to Pearson: Hypothesis Testing via the Shapley Value, Teneggi et al, 2022] shows that Shapley values, contrary to previous interpretations and in a similar spirit to this paper, provide a formal notion of statistical importance by mathematically analyzing what those coefficients estimate.
- In Section 2.2, the authors refer to the exponential number of linear regions of deep nets. While this is true in the worst case, note that more recent analysis has shown that these are significantly fewer, and can even become linear [Complexity of Linear Regions in Deep Networks, Hanin & Rolnick, 2019].
- After Eq (11), in the expression $R^h$, $h$ should be bold.
- First line in section 3.2.1, "principle" -> "principled".
- Last line in first paragraph in Sec 3.2.1, "choice" -> "choose"
- Second paragraph in Sec 3.2.1, could the authors clarify what they mean exactly by "class insensitivity"?
- Proposition 2: The first sentence is missing a subject. Maybe the authors meant "Let f be a Relu network...", or else the sentence should not break before continuing with the assumptions on the root points.
- First line after Prop 2, the zero in $R(0)$ should be bold.
- Last paragraph on page 8, the authors write that "It is merely recommended to choose it within the layer's input domain". What do the authors mean exactly? Anything on $\mathbb{R}^{d_l}$? Or perhaps the authors meant "within the layer's input linear region"?
- Sec 5.1, the authors note that if the difference $||\nabla f(x) - \nabla f(\tilde{x}) ||=0$, the root points are assumed to be in the same linear region. Rather than "assuming" this, the authors might want to explain that this condition is necessary but not sufficient. Thus, showing that this is not the case suffices for the argument they are trying to make.
- Close to the end of page 9: "Less than 100% of all root points have gradient differences are zero,.." -> "Less than 100% of all root points have gradient differences *that* are zero,..".
Broader Impact Concerns:
While not strictly necessary, I would recommend the authors include a Broader Impact Concerns section. This work does not represent any concerns per se, but I think this would be a good opportunity for the authors to raise concerns on using explanation methods that lack the necessary careful and precise theoretical guarantees.

==================================================

Review 3:
Summary:
In the paper "A Rigorous Study of the Deep Taylor Decomposition", the authors provide some technical work related to the generation of post-hoc explanations from trained machine learning models.
The authors point out that some existing methods create explanations that remain constant even if the last layer is randomized. The authors note that the Deep Taylor Decomposition is used to provide the theoretical underpinning of methods like this and establish technical claims relating root points associated with the Deep Taylor Decomposition to the explanations produced.
Strengths and Weaknesses:
A strength of this paper is that it attempts to provide a more thorough theoretical understanding of explanation methods, which are an important problem in ML.
A weakness of this paper is that the writing was not clear to me as a non expert in this subfield. It may be the case that the writing is appropriate for readers who are experienced in this subfield, but I am not such a reader. Here are some aspects that were unclear to me:
- What is the nature of explanations provided by the studied methods?
- What is the z+ rule and what does it mean to say it will "remain the same even if the last layer is randomized"?
- What is the logic used in relating "flaws" in the DTD theory to theoretical explanations? Is the contribution of this paper that certain scenarios do not satisfy assumptions necessary in existing theoretical work for DTD-theory to apply?
- In (T1) the authors say they show that "the root points must be contained in the same linear region as the input". What do they mean by saying "must be"?
- The paper mentions "root points", but it is not clear to me what quantity has roots that are being studied.
These weaknesses make it difficult for me to assess whether the conclusions support the claims.
Requested Changes:
I recommend a substantial rewrite of the introduction (and possibly abstract) so that a reader who is not thoroughly familiar with this subfield can understand the context and the conclusions drawn. I also recommend rewriting throughout the whole paper. It is hard to distinguish what is background material versus what is new. It is also hard to figure out what the authors are intending to do in each section. It would help if the authors at each section led with a paragraph explaining what was going to happen in that section. Without substantial rewriting I do not feel I will understand the work well enough to be able to recommend for acceptance.
Broader Impact Concerns:
None.

==================================================

Metareview:
Recommendation: Accept as is
Comment: This paper undertakes the study of the method called Deep Taylor Expansion (DTE), which is a strategy to provide some kind of post-hoc explanations for the output of neural networks. During the initial review period, reviewers raised some concerns about this paper which have since been addressed by the authors' rebuttal. All reviewers believe this paper meets the requirements to be published at TMLR. Reviewer rZ5T believes that the related works section could be broader, and the authors could have done a better job at situating their contribution in light of other, theoretically-sound, explanation methods. I hope the authors can make these improvements in their camera-ready version.

==================================================
# RIFLE: Imputation And Robust Inference From Low-Order Marginals

Sina Baharlouei *baharlou@usc.edu* University of Southern California

Kelechi Ogudu kogudu@usc.edu University of Southern California

Sze-chuan Suen ssuen@usc.edu University of Southern California

Meisam Razaviyayn razaviya@usc.edu University of Southern California

Reviewed on OpenReview: *https://openreview.net/forum?id=oud7Ny0KQy*

## Abstract

The ubiquity of missing values in real-world datasets poses a challenge for statistical inference and can prevent similar datasets from being analyzed in the same study, precluding many existing datasets from being used for new analyses. While an extensive collection of packages and algorithms has been developed for data imputation, the overwhelming majority perform poorly if there are many missing values and low sample sizes, which are unfortunately common characteristics in empirical data. Such low-accuracy estimations adversely affect the performance of downstream statistical models. We develop a statistical inference framework for regression and classification in the presence of missing data without imputation. Our framework, RIFLE (Robust InFerence via Low-order moment Estimations), estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model. We specialize our framework to linear regression and normal discriminant analysis, and we provide convergence and performance guarantees. This framework can also be adapted to impute missing data. In numerical experiments, we compare RIFLE to several state-of-the-art approaches (including MICE, Amelia, MissForest, KNN-imputer, MIDA, and Mean Imputer) for imputation and inference in the presence of missing values. Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small. RIFLE is publicly available at https://github.com/optimization-for-data-driven-science/RIFLE.

## 1 Introduction

Machine learning algorithms have shown promise when applied to various problems, including healthcare, finance, social data analysis, image processing, and speech recognition. However, this success has mainly relied on the availability of large-scale, high-quality datasets, which may be scarce in many practical problems, especially in medical and health applications (Pedersen et al., 2017; Sterne et al., 2009; Beaulieu-Jones et al., 2018). Moreover, many experiments and datasets in such applications suffer from small sample sizes. Although each individual study may contain only a small number of data points, an increasingly large number of datasets are publicly available. To fully and effectively utilize information on related research questions from diverse datasets, information across various datasets (e.g., different questionnaires from multiple hospitals with overlapping questions) must be combined in a reliable fashion.

![1_image_0.png](1_image_0.png)

Figure 1: Consider the problem of predicting the trait y from the feature vector (x1, . . . , x100). Suppose that we have access to three datasets: the first dataset includes the measurements of (x1, x2, . . . , x40, y) for n1 individuals. The second dataset collects data from another n2 individuals by measuring (x30, . . . , x80), with no measurements of the target variable y. The third dataset contains the measurements of the variables (x70, . . . , x100, y) for n3 individuals.
How should one learn the predictor ŷ = h(x1, . . . , x100) from these three datasets?

After integrating data from different studies, the obtained dataset can contain large blocks of missing values, as the studies may not share the same features (Figure 1). There are three general approaches for handling missing values in statistical inference (classification and regression) tasks. A naïve method is to remove the rows containing missing entries. However, such an approach is not an option when the percentage of missingness in a dataset is high. For instance, as demonstrated in Figure 1, the entire dataset will be discarded if we eliminate the rows with at least one missing entry.

The most common methodology for handling missing values in a learning task is to impute them in a pre-processing stage. The general idea behind data imputation is that the missing values can be predicted using the available data entries and correlated features. Imputation algorithms cover a wide range of methods, including imputing missing entries with the column means (or medians) (Little & Rubin, 2019, Chapter 3), least-squares and linear-regression-based methods (Raghunathan et al., 2001; Kim et al., 2005; Zhang et al., 2008; Cai et al., 2006; Buuren & Groothuis-Oudshoorn, 2010), matrix completion and expectation maximization approaches (Dempster et al., 1977; Ghahramani & Jordan, 1994; Honaker et al., 2011), KNN-based methods (Troyanskaya et al., 2001), tree-based methods (Stekhoven & Bühlmann, 2012; Xia et al., 2017), and methods using different neural network structures. Appendix A presents a comprehensive review of these methods. The imputation of data allows practitioners to run standard statistical algorithms requiring complete data. However, the prediction model's performance can be highly reliant on the accuracy of the imputer. High error rates in the prediction of missing values by the imputer can lead to catastrophic performance of the downstream statistical methods executed on the imputed data.

Another class of methods for inference in the presence of missing values relies on robust optimization over uncertainty sets on the missing entries. Shivaswamy et al. (2006) and Xu et al. (2009) adopt robust optimization to learn the parameters of a support vector machine model. They consider uncertainty sets for the missing entries in the dataset and solve a min-max problem over those sets. The obtained classifiers are robust to the uncertainty of missing entries within the uncertainty regions. In contrast to the imputation-based approaches, the robust classification formulation does not carry the imputation error over to the classification phase. However, finding appropriate intervals for each missing entry is challenging, and it is unclear how to determine the uncertainty range in many real datasets. Moreover, their proposed algorithms are limited to the SVM classifier.

![2_image_0.png](2_image_0.png)

Figure 2: Prediction of the target variable without imputation. RIFLE estimates confidence intervals for low-order (first and second-order) marginals from the input data containing missing values. Then, it solves a distributionally robust problem over the set of all distributions whose low-order marginals are within the estimated confidence intervals.

In this paper, we propose RIFLE (Robust InFerence via Low-order moment Estimations) for the direct inference of a target variable based on a set of features containing missing values. The proposed framework does not require the data to be imputed in a pre-processing stage.
However, it can also be used as a pre-processing tool for imputing data. The main idea of the proposed framework is to estimate the first and second-order moments of the data and their confidence intervals by bootstrapping on the available data matrix entries. Then, RIFLE finds the optimal parameters of the statistical model for the worst-case distribution with low-order moments (mean and variance) within the estimated confidence intervals (see Figure 2). Compared to Shivaswamy et al. (2006); Xu et al. (2009), we estimate uncertainty regions for the low-order marginals using the bootstrap technique. Furthermore, our framework is not restricted to any particular machine learning model, such as support vector machines (Xu et al., 2009).

Contributions: Our main contributions are as follows:

1. We present a distributionally robust optimization framework over the low-order marginals of the training data distribution for inference in the presence of missing values. The proposed framework does not require data imputation as a pre-processing stage. In Section 3 and Section 4, we specialize the framework to ridge regression and classification models as two case studies, respectively. The proposed framework provides a novel strategy for inference in the presence of missing data, especially for datasets with large proportions of missing values.

2. We provide theoretical convergence guarantees and an iteration complexity analysis of the presented algorithms for robust formulations of ridge linear regression and normal discriminant analysis. Moreover, we show the consistency of the prediction under mild assumptions and analyze the asymptotic statistical properties of the solutions found by the algorithms.

3. While the robust inference framework is primarily designed for direct statistical inference in the presence of missing values without performing data imputation, it can also be adopted as an imputation tool. To demonstrate the quality of the proposed imputer, we compare its performance with several widely used imputation packages such as MICE (Buuren & Groothuis-Oudshoorn, 2010), Amelia (Honaker et al., 2011), MissForest (Stekhoven & Bühlmann, 2012), KNN-Imputer (Troyanskaya et al., 2001), MIDA (Gondara & Wang, 2018), and GAIN (Yoon et al., 2018) on real and synthetic datasets. Generally speaking, our method outperforms all of the mentioned packages when the number of missing entries is large.

## 2 Robust Inference Via Estimating Low-Order Moments

RIFLE is based on a distributionally robust optimization (DRO) framework over low-order marginals. Assume that (x, y) ∈ R^d × R follows a joint probability distribution P∗. A standard approach for predicting the target variable y given the input vector x is to find the parameter θ that minimizes the population risk with respect to a given loss function ℓ:

$$\min_{\boldsymbol{\theta}}\quad\mathbb{E}_{(\mathbf{x},y)\sim P^{*}}\left[\ell\big(\mathbf{x},y;\boldsymbol{\theta}\big)\right].\tag{1}$$

Since the underlying distribution of the data is rarely available in practice, the above problem cannot be directly solved. The most common approach for approximating (1) is to minimize the empirical risk with respect to
Thus, to utilize the empirical risk minimization (ERM) framework in the presence of missing values, one can either remove or impute the missing data points in a pre-processing stage. Training via robust optimization is a natural alternative in the presence of missing data. Shivaswamy et al. (2006); Xu et al. (2009) suggest the following optimization problem that minimizes the loss function for the worst-case scenario over the defined uncertainty sets per data points: $$\min_{\mathbf{\theta}}\max_{\{\mathbf{\delta}_{i}\in\mathcal{N}_{i}\}_{i=1}^{n}}\frac{1}{n}\sum_{i=1}^{n}\ell(\mathbf{x}_{i}-\mathbf{\delta}_{i},y_{i};\mathbf{\theta}),\tag{2}$$ where Ni represents the uncertainty region of data point i. Shivaswamy et al. (2006) obtains the uncertainty sets by assuming a known distribution on the missing entries of datasets. The main issue in their approach is that the constraints defined on data points are totally uncorrelated. Xu et al. (2009) on the other hand defines Ni as a "box" constraint around the data point i such that they can be linearly correlated. For this specific case, they show that solving the corresponding robust optimization problem is equivalent to minimizing a regularized reformulation of the original loss function. Such an approach has several limitations: First, it can only handle a few special cases (SVM loss with linearly correlated perturbations on data points). Furthermore, Xu et al. (2009) is primarily designed for handling outliers and contaminated data. Thus, they do not offer any mechanism for the initial estimation of xi when several vector entries are missing. In this work, we instead take a *distributionally robust* approach by considering uncertainty on the data distribution instead of defining an uncertainty set for each data point. In particular, we aim to fit the best parameters of a statistical learning model for the worst distribution in a given uncertainty set by solving the following: $$\operatorname*{min}_{\mathbf{\theta}}\quad\operatorname*{max}_{P\in{\mathcal{P}}}\quad\mathbb{E}_{(\mathbf{x},y)\sim P}[\ell(\mathbf{x},y;\mathbf{\theta})],$$ E(x,y)∼P [`(x, y; θ)], (3) where P is an uncertainty set over the underlying distribution of data. A key observation is that defining the uncertainty set P in (3) is easier and computationally more efficient than defining the uncertainty sets {Ni} n i=1 in (2). In particular, the uncertainty set P can be obtained naturally by estimating low-order moments of data distribution using only available entries. To explain this idea and to simplify the notations, let z = (x, y), µ¯ z , E[z], and C¯ z , E[zzT]. While µ¯ z and C¯ z are typically not known exactly, one can estimate them (within certain confidence intervals) from the available data by simply ignoring missing entries (assuming the missing value pattern is completely at random, e.g., MCAR). Moreover, we can estimate the confidence intervals via bootstrapping. Particularly, we can estimate µ z min, µ z max, Czmin, and Czmax from data such that µ z min ≤ µ¯ z ≤ µ z max and Czmin ≤ C¯ z ≤ Czmax with high probability (where the inequalities for matrices and vectors denote component-wise relations). In Appendix B, we show how a bootstrapping strategy can be used to obtain the confidence intervals described above. 
Given these estimated confidence intervals from data, (3) can be reformulated as

$$\begin{array}{ll}\min_{\boldsymbol{\theta}}&\max_{P}\ \mathbb{E}_{P}[\ell(\mathbf{z};\boldsymbol{\theta})]\\ \text{s.t.}&\boldsymbol{\mu}_{\min}^{\mathbf{z}}\leq\mathbb{E}_{P}[\mathbf{z}]\leq\boldsymbol{\mu}_{\max}^{\mathbf{z}},\\ &\mathbf{C}_{\min}^{\mathbf{z}}\leq\mathbb{E}_{P}[\mathbf{z}\mathbf{z}^{T}]\leq\mathbf{C}_{\max}^{\mathbf{z}}.\end{array}\tag{4}$$

Gao & Kleywegt (2017) utilize a distributionally robust optimization of the form (3) over the set of positive semidefinite (PSD) cones for robust inference under uncertainty. While their formulation considers ℓ2 balls for the constraints on the low-order moments of the data, we use ℓ∞ constraints, which are computationally more natural in the presence of missing entries when combined with bootstrapping. Furthermore, while their method can be applied to general convex losses, it relies on the ellipsoid method and the existence of oracles for performing its steps, which is not applicable in modern high-dimensional problems. Moreover, they assume concavity in the data (the existence of some oracle to return the worst-case data points), which is practically unavailable even for convex loss functions (including the linear regression and normal discriminant analysis models studied in our work).

In Section 3, we study the proposed distributionally robust framework described in (4) for ridge linear regression. We design efficient, provably convergent first-order algorithms to solve the problem and show how the algorithms can be used for both inference and imputation in the presence of missing values. Further, in Appendix F, we study the proposed distributionally robust framework for classification problems under the normality assumption of the features. In particular, we show how Framework (4) can be specialized to robust normal discriminant analysis in the presence of missing values.

## 3 Robust Linear Regression In The Presence Of Missing Values

Let us specialize our framework to the ridge linear regression model. In the absence of missing data, ridge regression finds the optimal regressor parameter θ by solving

$$\min_{\boldsymbol{\theta}}\quad\|\mathbf{X}\boldsymbol{\theta}-\mathbf{y}\|_{2}^{2}+\lambda\|\boldsymbol{\theta}\|_{2}^{2},$$

or equivalently by solving:

$$\min_{\boldsymbol{\theta}}\quad\boldsymbol{\theta}^{T}\mathbf{X}^{T}\mathbf{X}\boldsymbol{\theta}-2\boldsymbol{\theta}^{T}\mathbf{X}^{T}\mathbf{y}+\lambda\|\boldsymbol{\theta}\|_{2}^{2}.\tag{5}$$

Thus, having the second-order moments of the data, C = X^T X and b = X^T y, is sufficient for finding the optimal solution. In other words, it suffices to compute the inner product of any two column vectors ai, aj of X, and the inner product of any column ai of X with the vector y. Since the matrix X and the vector y are not fully observed due to the existence of missing values, one can use the available data (see (24) for details) to compute the point estimators C0 and b0. These point estimators can be highly inaccurate, especially when the number of non-missing rows for two given columns is small. In addition, if the pattern of missing entries does not follow the MCAR assumption, the point estimators are not unbiased estimators of C and b.

## 3.1 A Distributionally Robust Formulation Of Linear Regression

As mentioned above, to solve the linear regression problem, we only need to estimate the second-order moments of the data (X^T X and X^T y).
Thus, the distributionally robust formulation described in (4) is equivalent to the following optimization problem for the linear regression model:

$$\begin{array}{ll}\min_{\boldsymbol{\theta}}&\max_{\mathbf{C},\mathbf{b}}\quad\boldsymbol{\theta}^{T}\mathbf{C}\boldsymbol{\theta}-2\mathbf{b}^{T}\boldsymbol{\theta}+\lambda\|\boldsymbol{\theta}\|_{2}^{2}\\ \text{s.t.}&\mathbf{C}_{0}-c\boldsymbol{\Delta}\leq\mathbf{C}\leq\mathbf{C}_{0}+c\boldsymbol{\Delta},\\ &\mathbf{b}_{0}-c\boldsymbol{\delta}\leq\mathbf{b}\leq\mathbf{b}_{0}+c\boldsymbol{\delta},\\ &\mathbf{C}\succeq0,\end{array}\tag{6}$$

where the last constraint guarantees that the covariance matrix is positive semidefinite. We discuss the procedure for estimating the confidence intervals (b0, C0, δ, and ∆) in Appendix B.

## 3.2 RIFLE For Ridge Linear Regression

Since the objective function in (6) is convex in θ (ridge regression) and concave in b and C (linear), the minimization and maximization sub-problems are interchangeable (Sion et al., 1958). Thus, we can equivalently rewrite Problem (6) as:

$$\begin{array}{ll}\max_{\mathbf{C},\mathbf{b}}&g(\mathbf{C},\mathbf{b})\\ \text{s.t.}&\mathbf{C}_{0}-c\boldsymbol{\Delta}\leq\mathbf{C}\leq\mathbf{C}_{0}+c\boldsymbol{\Delta},\\ &\mathbf{b}_{0}-c\boldsymbol{\delta}\leq\mathbf{b}\leq\mathbf{b}_{0}+c\boldsymbol{\delta},\\ &\mathbf{C}\succeq0,\end{array}\tag{7}$$

where g(C, b) = min_θ θ^T Cθ − 2b^T θ + λ‖θ‖². The function g can be computed in closed form for any given pair (C, b) by setting θ = (C + λI)^{-1}b. Thus, using Danskin's theorem (Danskin, 2012), we can apply projected gradient ascent to the function g to find an optimal solution of (7), as described in Algorithm 1. At each iteration of the algorithm, we first perform one step of projected gradient ascent on the matrix C and the vector b; then we update θ in closed form for the obtained C and b. We initialize C and b using entrywise point estimation on the available rows (see Equation (24) in Appendix B).

Algorithm 1 RIFLE for Ridge Linear Regression in the Presence of Missing Values
1: Input: C0, b0, ∆, δ, T
2: Initialize: C = C0, b = b0.
3: for i = 1, . . . , T do
4:   Update C = Π_{∆+}[C + αθθ^T]
5:   Update b = Π_δ(b − 2αθ)
6:   Set θ = (C + λI)^{-1}b

The projection of b onto the box constraint b0 − cδ ≤ b ≤ b0 + cδ can be done entrywise and has the following closed form:

$$\Pi_{\delta}(\mathbf{b}_{i})=\begin{cases}\mathbf{b}_{i}&\text{if }\mathbf{b}_{0i}-c\delta_{i}\leq\mathbf{b}_{i}\leq\mathbf{b}_{0i}+c\delta_{i},\\ \mathbf{b}_{0i}-c\delta_{i}&\text{if }\mathbf{b}_{i}<\mathbf{b}_{0i}-c\delta_{i},\\ \mathbf{b}_{0i}+c\delta_{i}&\text{if }\mathbf{b}_{0i}+c\delta_{i}<\mathbf{b}_{i}.\end{cases}$$

Theorem 1. *Let* (θ̃, C̃, b̃) *be the optimal solution of* (6), θ∗(b, C) = arg min_θ θ^T Cθ − 2b^T θ + λ‖θ‖², *and* D = ‖C0 − C̃‖²_F + ‖b0 − b̃‖²₂. *Assume that for any given* b *and* C *within the uncertainty (constraint) sets described in* (6), ‖θ∗(b, C)‖ ≤ τ. *Then Algorithm 1 computes an* ε*-optimal solution of the objective function in* (7) *in* O(D(τ + 1)²/(λε)) *iterations.*

Proof. The proof is relegated to Appendix H.

In Appendix C, we show how using Nesterov's acceleration method can improve the convergence rate of Algorithm 1 to O(√(D(τ + 1)²/(λε))). A technical issue with Algorithm 1 and its accelerated version presented in Appendix C is that the projection of C onto the intersection of the box constraints and the set of positive semidefinite matrices (Π_{∆+}[C]) is challenging and cannot be done in closed form.
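A minimal NumPy sketch of the iteration of Algorithm 1 is shown below, with the PSD projection dropped (matching the relaxation discussed next) and with an illustrative step size α; it is a sketch rather than the released implementation.

```python
import numpy as np

def rifle_ridge(C0, b0, Delta, delta, lam, c=1.0, alpha=1e-3, T=1000):
    """Projected gradient ascent on (C, b) with theta updated in closed form."""
    d = b0.shape[0]
    C, b = C0.copy(), b0.copy()
    theta = np.linalg.solve(C + lam * np.eye(d), b)
    for _ in range(T):
        # Ascent steps on g(C, b); by Danskin, the gradients are
        # theta theta^T (w.r.t. C) and -2 theta (w.r.t. b).
        C = np.clip(C + alpha * np.outer(theta, theta), C0 - c * Delta, C0 + c * Delta)
        b = np.clip(b - 2 * alpha * theta, b0 - c * delta, b0 + c * delta)
        # Closed-form minimizer of theta^T C theta - 2 b^T theta + lam ||theta||^2.
        theta = np.linalg.solve(C + lam * np.eye(d), b)
    return theta, C, b
```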
In the implementation of Algorithm 1, we therefore relax the problem by removing the PSD constraint on C, avoiding this complexity and the time-consuming singular value decomposition at each iteration. This relaxation does not drastically change the algorithm's performance, as our experiments in Section 5 show. A more systematic approach is to write the dual of the maximization problem and handle the resulting constrained minimization problem with the Alternating Direction Method of Multipliers (ADMM). The detailed procedure for such an approach can be found in Appendix D. All of these algorithms provably converge to optimal points of Problem (6). In addition to the theoretical convergence, we numerically evaluate the convergence of the resulting algorithms in Appendix K. Further, the proposed algorithms are **consistent**, as discussed in Appendix J.

## 3.3 Performance Guarantees For RIFLE

Thus far, we have discussed how to efficiently solve the robust linear regression problem in the presence of missing values. A natural question in this context concerns the statistical performance of the obtained optimal solution on unseen test data points. Theorem 2 answers this question from two perspectives: assuming that the missing values are distributed completely at random, our estimators are consistent. Moreover, for the finite-sample case, Theorem 2 part (b) states that, with a proper choice of confidence intervals, the test loss of the obtained solution is bounded by its training loss with high probability. Note that the results regarding the performance of the robust estimator hold for the MCAR missing pattern. However, we also perform several experiments on datasets with MNAR patterns in Section 5 to show how RIFLE works in practice on such datasets.

Theorem 2. *Assume the data domain is bounded and that the missing pattern of the data follows MCAR. Let* X_{n×d}, y *be the training data drawn i.i.d. from the ground-truth distribution* P∗ *with low-order moments* C∗ *and* b∗. *Further, assume that each entry of* X *and* y *is missing with probability* p < 1. *Let* (θ̃n, C̃n, b̃n) *be the solution of Problem* (6).

(a) Consistency of the Covariance Estimator: *As the number of data points goes to infinity, the estimated low-order marginals converge to the ground-truth values almost surely. More precisely,*

$$\lim_{n\to\infty}\tilde{\mathbf{C}}_{n}=\mathbb{E}_{P^{*}}[\mathbf{x}\mathbf{x}^{T}]\quad a.s.,\qquad\lim_{n\to\infty}\tilde{\mathbf{b}}_{n}=\mathbb{E}_{P^{*}}[\mathbf{x}y]\quad a.s.\tag{8}$$

(b) *Define*

$$L_{\text{train}}(\tilde{\boldsymbol{\theta}}_{n})=\tilde{\boldsymbol{\theta}}_{n}^{T}\tilde{\mathbf{C}}_{n}\tilde{\boldsymbol{\theta}}_{n}-2\tilde{\mathbf{b}}_{n}^{T}\tilde{\boldsymbol{\theta}}_{n}+\lambda\|\tilde{\boldsymbol{\theta}}_{n}\|_{2}^{2},\qquad L_{\text{test}}(\tilde{\boldsymbol{\theta}}_{n})=\tilde{\boldsymbol{\theta}}_{n}^{T}\mathbf{C}^{*}\tilde{\boldsymbol{\theta}}_{n}-2\mathbf{b}^{*T}\tilde{\boldsymbol{\theta}}_{n}+\lambda\|\tilde{\boldsymbol{\theta}}_{n}\|_{2}^{2},\tag{9}$$

*where* C∗ = E_{(x,y)∼P∗}[xx^T] *and* b∗ = E_{(x,y)∼P∗}[xy] *are the ground-truth second-order moments. Given* V = max_{i,j} Var(X_i X_j) *(the maximum variance of pairwise feature products), with probability at least* 1 − d²V/(2c²∆²n(1 − p)) *we have:*

$$L_{\text{test}}(\tilde{\boldsymbol{\theta}})\leq L_{\text{train}}(\tilde{\boldsymbol{\theta}}),\tag{10}$$

*where* ∆ = min_{i,j}{∆_{ij}} *and* c *is the hyper-parameter controlling the size of the confidence intervals, as presented in* (6).

Proof. The proof is relegated to Appendix H.
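To build intuition for part (b), the following toy check verifies the mechanism behind the bound: whenever the ground-truth moments lie inside the confidence box, the worst-case (training) objective upper-bounds the objective under the true moments. The corner-picking below only illustrates this mechanism; it does not reproduce the exact estimator (C̃n, b̃n) of the theorem.

```python
import numpy as np

def quad_loss(theta, C, b, lam):
    # L(theta) = theta^T C theta - 2 b^T theta + lam ||theta||_2^2
    return theta @ C @ theta - 2 * b @ theta + lam * theta @ theta

rng = np.random.default_rng(0)
d, lam, eps = 5, 1.0, 0.2
S = rng.standard_normal((d, d))
C_true, b_true = S @ S.T, rng.standard_normal(d)   # ground-truth moments
theta = rng.standard_normal(d)                     # an arbitrary estimator

# A confidence box containing the truth, and the loss-maximizing corners:
# pick C entrywise according to the sign of theta_i theta_j, and b
# entrywise according to the sign of theta_i.
C_wc = np.where(np.outer(theta, theta) >= 0, C_true + eps, C_true - eps)
b_wc = np.where(theta >= 0, b_true - eps, b_true + eps)

assert quad_loss(theta, C_true, b_true, lam) <= quad_loss(theta, C_wc, b_wc, lam)
```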
## 3.4 Imputation Of Missing Values And Going Beyond Linear Regression

RIFLE can be used for imputing missing data. To this end, we impute the different features of a given dataset independently. More precisely, to impute each feature containing missing values, we consider it as the target variable y and the rest of the features as the input X in our methodology. Then, we train a model to predict the feature y given X via Algorithm 1 (or its ADMM version, Algorithm 7, in the appendix). Let the obtained optimal solutions be C∗, b∗, and θ∗. For a given missing entry, we can use θ∗ only if all other features in the row of that missing entry are available. However, that is not usually the case in practice, as each row can contain more than one missing entry. Therefore, one can learn a separate model for each missing pattern in the dataset. Let us clarify this point through the example in Figure 1. In this example, we have three different missing patterns (one for each dataset). For missing entries in Dataset 1, the first forty features are available. Let rj denote the vector of the first 40 features in row j, and assume that we aim to impute the entry of feature i ∈ {41, . . . , 100} in row j, denoted by x_ji. To this end, we restrict X to the first 40 features and consider y = xi as the target variable. Then, we run Algorithm 1 on X and y to obtain the optimal C∗, b∗_i, and θ∗_i. Consequently, we impute x_ji as follows:

$$x_{ji}=\mathbf{r}_{j}^{T}\boldsymbol{\theta}_{i}^{*}.$$

We can use the same methodology to impute the missing entries of each feature for the missing patterns in Dataset 2 and Dataset 3. While this approach is reasonable for the missing patterns observed in Figure 1, in many practical problems different rows can have distinct missing patterns. Thus, in the worst case, Algorithm 1 must be executed once for each missing entry. Such an approach is computationally expensive and might be infeasible for large-scale datasets containing large numbers of missing entries. Alternatively, one can perform Algorithm 1 only once to obtain C∗ and b∗ (considered the "worst-case/pessimistic" estimation of the moments). Then, to impute each missing entry, C∗ and b∗ are restricted to the features available in that missing entry's row. Having the restricted C∗ and b∗, the regressor θ∗ can be obtained in closed form (line 6 in Algorithm 1). In this approach, we perform Algorithm 1 once and find the optimal θ∗ for each missing entry based on the estimated C∗ and b∗. This approach can lead to sub-optimal solutions compared to the former approach, but it is much faster and more scalable, as the sketch below illustrates.
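In the rough sketch of this faster variant below, we assume, for illustration only, that C_star stores the worst-case second moments over all columns jointly (features and target), so that both the restricted moment block and the cross-moment vector can be sliced out of it; NaNs mark missing entries.

```python
import numpy as np

def impute_entry(row, target_idx, C_star, lam=1.0):
    """Impute row[target_idx] from the features observed in that row."""
    obs = np.where(~np.isnan(row) & (np.arange(row.size) != target_idx))[0]
    C_sub = C_star[np.ix_(obs, obs)]   # moments restricted to observed features
    b_sub = C_star[obs, target_idx]    # cross moments with the target feature
    # Closed-form regressor, as in line 6 of Algorithm 1.
    theta = np.linalg.solve(C_sub + lam * np.eye(obs.size), b_sub)
    return row[obs] @ theta
```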
Beyond Linear Regression: While the developed methods are primarily designed for ridge linear regression, one can apply non-linear transformations (kernels) to the data to obtain models beyond linear ones. In Appendix E, we show how to extend the developed algorithms to quadratic models. The RIFLE framework applied to the quadratically transformed data is called **QRIFLE**.

## 4 Robust Classification Framework

In this section, we study the proposed framework in (4) for classification tasks in the presence of missing values. Since the target variable y ∈ Y = {1, . . . , M} takes discrete values in classification tasks, we consider the uncertainty sets over the data's first- and second-order marginals given each target value (label) separately. Therefore, the distributionally robust classification over low-order marginals can be described as:

$$\begin{array}{ll}\min_{\mathbf{w}}&\max_{P}\ \mathbb{E}_{P}[\ell(\mathbf{x},y,\mathbf{w})]\\ \text{s.t.}&\boldsymbol{\mu}_{\min,y}\leq\mathbb{E}_{P}[\mathbf{x}|y]\leq\boldsymbol{\mu}_{\max,y}\quad\forall y\in\mathcal{Y},\\ &\boldsymbol{\Sigma}_{\min,y}\leq\mathbb{E}_{P}[\mathbf{x}\mathbf{x}^{T}|y]\leq\boldsymbol{\Sigma}_{\max,y}\quad\forall y\in\mathcal{Y},\end{array}\tag{11}$$

where µmin, µmax, Σmin, and Σmax are the estimated confidence intervals for the first- and second-order moments of the data distribution. Unlike the robust linear regression task in Section 3, the evaluation of the objective function in (11) might depend on higher-order marginals (beyond second-order) due to the nonlinearity of the loss function. As a result, Problem (11) is in general a non-convex, non-concave, intractable min-max optimization problem. For the sake of computational tractability, we restrict the distribution in the inner maximization problem to the set of normal distributions. In the following section, we specialize (11) to quadratic discriminant analysis as a case study. The methodology can be extended to other popular classification algorithms, such as support vector machines and multi-layer neural networks.

## 4.1 Robust Quadratic Discriminant Analysis

Learning a logistic regression model on datasets containing missing values has been studied extensively in the literature (Fung & Wrobel, 1989; Abonazel & Ibrahim, 2018). Besides deleting missing values and imputation-based approaches, Fung & Wrobel (1989) model the logistic regression task in the presence of missing values as a linear discriminant analysis problem, where the underlying assumption is that the predictors follow a normal distribution conditional on the labels. Mathematically speaking, they assume that the data points assigned to a specific label follow a Gaussian distribution, i.e., x|y = i ∼ N(µi, Σ). They use the available data to estimate the parameters of each Gaussian distribution. Therefore, the parameters of the logistic regression model can be assigned based on the estimated parameters of the Gaussian distributions for the different classes. Similar to the linear regression case, the estimates of the means and covariances are unbiased only when the data satisfies the MCAR condition. Moreover, when the number of data points in the dataset is small, the variance of the estimates can be very high. Thus, to train a logistic regression model that is robust to the percentage and type of missing values, we specialize the general robust classification framework formulated in Equation (11) to the logistic regression model. Instead of considering a common covariance matrix for the conditional distributions of x given the labels y (linear discriminant analysis), we consider the more general case where each conditional distribution has its own covariance matrix (quadratic discriminant analysis). Assume that x|y ∼ N(µy, Σy) for y = 0, 1. We aim to find the optimal solution of the following problem:

$$\begin{array}{ll}\min_{\mathbf{w}}\ \max_{\boldsymbol{\mu}_{0},\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{0},\boldsymbol{\Sigma}_{1}}&\mathbb{E}_{\mathbf{x}|y=1\sim N(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})}\Big[-\log\big(\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big]\,\mathbb{P}(y=1)+\mathbb{E}_{\mathbf{x}|y=0\sim N(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})}\Big[-\log\big(1-\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big]\,\mathbb{P}(y=0)\\ \text{s.t.}&\boldsymbol{\mu}_{\min_{0}}\leq\boldsymbol{\mu}_{0}\leq\boldsymbol{\mu}_{\max_{0}},\quad\boldsymbol{\mu}_{\min_{1}}\leq\boldsymbol{\mu}_{1}\leq\boldsymbol{\mu}_{\max_{1}},\\ &\boldsymbol{\Sigma}_{\min_{0}}\leq\boldsymbol{\Sigma}_{0}\leq\boldsymbol{\Sigma}_{\max_{0}},\quad\boldsymbol{\Sigma}_{\min_{1}}\leq\boldsymbol{\Sigma}_{1}\leq\boldsymbol{\Sigma}_{\max_{1}},\end{array}\tag{12}$$

where σ(t) = 1/(1 + exp(−t)) is the sigmoid function. To solve Problem (12), we first focus on the scenario where the target variable has no missing values.
In this case, each data point contributes to the estimation of either (µ1, Σ1) or (µ0, Σ0), depending on its label. Similar to the robust linear regression case, we can apply Algorithm 4 to estimate the confidence intervals for µi, Σi using the data points whose target variable equals i (y = i). The objective function is convex in w, since the logistic loss is convex and the expectation of the loss can be seen as a weighted summation of convex functions. Thus, fixing µ and Σ, the outer minimization problem can be solved with respect to w using standard first-order methods such as gradient descent. Although the robust reformulation of logistic regression stated in (12) is convex in w and concave in µ0 and µ1, the inner maximization problem is intractable with respect to Σ0 and Σ1. We therefore approximate Problem (12) in the following manner:

$$\begin{array}{ll}\min_{\mathbf{w}}\ \max_{\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0},\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1}}&\pi_{1}\mathbb{E}_{\mathbf{x}|y=1\sim N(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})}\Big[-\log\big(\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big]+\pi_{0}\mathbb{E}_{\mathbf{x}|y=0\sim N(\boldsymbol{\mu}_{0},\boldsymbol{\Sigma}_{0})}\Big[-\log\big(1-\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big]\\ \text{s.t.}&\boldsymbol{\mu}_{\min_{0}}\leq\boldsymbol{\mu}_{0}\leq\boldsymbol{\mu}_{\max_{0}},\quad\boldsymbol{\mu}_{\min_{1}}\leq\boldsymbol{\mu}_{1}\leq\boldsymbol{\mu}_{\max_{1}},\\ &\boldsymbol{\Sigma}_{0}\in\{\boldsymbol{\Sigma}_{01},\boldsymbol{\Sigma}_{02},\ldots,\boldsymbol{\Sigma}_{0k}\},\quad\boldsymbol{\Sigma}_{1}\in\{\boldsymbol{\Sigma}_{11},\boldsymbol{\Sigma}_{12},\ldots,\boldsymbol{\Sigma}_{1k}\},\end{array}\tag{13}$$

where π1 = P(y = 1) and π0 = P(y = 0). To compute the optimal µ0 and µ1, we consider:

$$\max_{\boldsymbol{\mu}_{1}}\quad\mathbb{E}_{\mathbf{x}\sim N(\boldsymbol{\mu}_{1},\boldsymbol{\Sigma}_{1})}\Big[-\log\big(\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big]\quad\text{s.t.}\quad\boldsymbol{\mu}_{\min}\leq\boldsymbol{\mu}_{1}\leq\boldsymbol{\mu}_{\max}.\tag{14}$$

Theorem 3. *Let* a[i] *be the* i*-th element of the vector* a. *The optimal solution of Problem* (14) *has the following form:*

$$\boldsymbol{\mu}_{1}^{*}[i]=\begin{cases}\boldsymbol{\mu}_{\max}[i],&\text{if }\mathbf{w}[i]\leq0,\\ \boldsymbol{\mu}_{\min}[i],&\text{if }\mathbf{w}[i]>0.\end{cases}\tag{15}$$

Note that we relaxed (12) by taking the maximization problem over a finite set of Σ estimates. We estimate each Σ by bootstrapping on the available data using Algorithm 4. Define fi(w) as:

$$f_{i}(\mathbf{w})=\pi_{1}\mathbb{E}_{\mathbf{x}\sim N(\boldsymbol{\mu}_{1}^{*},\boldsymbol{\Sigma}_{i1})}\Big[-\log\big(\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big].\tag{16}$$

Similarly, we can define:

$$g_{i}(\mathbf{w})=\pi_{0}\mathbb{E}_{\mathbf{x}\sim N(\boldsymbol{\mu}_{0}^{*},\boldsymbol{\Sigma}_{i0})}\Big[-\log\big(1-\sigma(\mathbf{w}^{T}\mathbf{x})\big)\Big].\tag{17}$$

Since the maximization problem is over a finite set, we can rewrite Problem (13) as:

$$\begin{array}{ll}\min_{\mathbf{w}}\ \max_{i,j\in\{1,\ldots,k\}}f_{i}(\mathbf{w})+g_{j}(\mathbf{w})=\min_{\mathbf{w}}\ \max_{p_{1},\ldots,p_{k},q_{1},\ldots,q_{k}}&\sum_{i=1}^{k}p_{i}f_{i}(\mathbf{w})+\sum_{j=1}^{k}q_{j}g_{j}(\mathbf{w})\\ \text{s.t.}&\sum_{i=1}^{k}p_{i}=1,\quad p_{i}\geq0,\\ &\sum_{j=1}^{k}q_{j}=1,\quad q_{j}\geq0.\end{array}\tag{18}$$

Since the maximum of several functions is not necessarily smooth (differentiable), we add a quadratic regularization term to the maximization problem, which accelerates the convergence rate (Nouiehed et al., 2019):

$$\begin{array}{ll}\min_{\mathbf{w}}\ \max_{p_{1},\ldots,p_{k},q_{1},\ldots,q_{k}}&\sum_{i=1}^{k}p_{i}f_{i}(\mathbf{w})-\delta\sum_{i=1}^{k}p_{i}^{2}+\sum_{j=1}^{k}q_{j}g_{j}(\mathbf{w})-\delta\sum_{j=1}^{k}q_{j}^{2}\\ \text{s.t.}&\sum_{i=1}^{k}p_{i}=1,\quad p_{i}\geq0,\\ &\sum_{j=1}^{k}q_{j}=1,\quad q_{j}\geq0.\end{array}\tag{19}$$

First, we show how to solve the inner maximization problem. Note that the pi's and qi's are independent; we show how to find the optimal pi's, and optimizing with respect to the qi's is similar.
Since the maximization problem is a constrained quadratic program, we can write the Lagrangian as follows:

$$\begin{array}{ll}\max_{p_{1},\ldots,p_{k}}&\sum_{i=1}^{k}p_{i}f_{i}(\mathbf{w})-\delta\sum_{i=1}^{k}p_{i}^{2}-\lambda\Big(\sum_{i=1}^{k}p_{i}-1\Big)\\ \text{s.t.}&p_{i}\geq0.\end{array}\tag{20}$$

Given the optimal λ, the above problem has a closed-form solution with respect to each pi:

$$p_{i}^{*}=\left[\frac{-\lambda+f_{i}}{2\delta}\right]_{+}.$$

Since p∗_i is a non-increasing function of λ, we can find the optimal value of λ using bisection. Algorithm 2 demonstrates how to find an ε-optimal λ and the corresponding p∗_i's efficiently using the bisection idea.

Algorithm 2 Finding the optimal λ and pi's using the bisection idea
1: Initialize: λ_low = 0, λ_high = max_i f_i, p_i = 0 ∀i ∈ {1, 2, . . . , k}.
2: while |Σ_{i=1}^{k} p_i − 1| > ε do
3:   λ = (λ_low + λ_high)/2
4:   Set p_i = [(−λ + f_i)/(2δ)]_+ ∀i ∈ {1, 2, . . . , k}
5:   if Σ_{i=1}^{k} p_i < 1 then
6:     λ_high = λ
7:   else
8:     λ_low = λ

Remark 4. *An alternative method for finding the optimal* λ *and* pi*'s is to first sort the* fi *values in* O(k log k) *time, and then find the smallest* fi *such that setting* λ = fi *makes the sum of the* pi*'s larger than 1 (let* j *be the index of that value). Without loss of generality, assume that* f1 ≤ · · · ≤ fk. *Then* Σ_{i=j}^{k} (−λ + fi)/(2δ) = 1, *which has a closed-form solution with respect to* λ.
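A direct Python transcription of Algorithm 2 is sketched below; the tolerance eps is an illustrative choice, and, as in the algorithm above, the bisection assumes that Σi pi ≥ 1 at λ = 0 so that the search interval [0, max_i f_i] is well-posed.

```python
import numpy as np

def simplex_weights(f, delta, eps=1e-8):
    """Bisection on lambda so that p_i = [(f_i - lambda)/(2 delta)]_+ sums to one."""
    f = np.asarray(f, dtype=float)
    lam_low, lam_high = 0.0, float(np.max(f))
    p = np.zeros_like(f)
    while abs(p.sum() - 1.0) > eps:
        lam = (lam_low + lam_high) / 2.0
        p = np.maximum((f - lam) / (2.0 * delta), 0.0)
        if p.sum() < 1.0:
            lam_high = lam   # mass too small: lambda is too large
        else:
            lam_low = lam    # mass too large: lambda is too small
    return p, lam
```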
*Assume that M = maxi fi. The gradient descent algorithm requires O(k log(M/ε)) gradient evaluations to converge to an ε-optimal saddle point of the optimization problem (23).* In Appendix F, we extend the methodology to the case where y contains missing entries.

## 5 Experiments

In this section, we evaluate RIFLE's performance on a diverse set of inference tasks in the presence of missing values. We compare RIFLE's performance to several state-of-the-art approaches for data imputation on synthetic and real-world datasets. The experiments are designed so that the sensitivity of the model to factors such as the number of samples, data dimension, and the types and proportion of missing values can be evaluated. The description of all datasets used in the experiments can be found in Appendix I.

## 5.1 Evaluation Metrics

We need access to the ground-truth values of the missing entries to evaluate RIFLE and other state-of-the-art imputation approaches. Hence, we artificially mask a proportion of the available data entries and predict them with different imputation methods. A method performs better than others if the predicted missing entries are closer to the ground-truth values. To measure the performance of RIFLE and the existing approaches on a regression task for a given test dataset consisting of N data points, we use the normalized root mean squared error (NRMSE), defined as:

$${\mathrm{NRMSE}}={\frac{\sqrt{{\frac{1}{N}}\sum_{i=1}^{N}(y_{i}-{\hat{y}}_{i})^{2}}}{\sqrt{{\frac{1}{N}}\sum_{i=1}^{N}(y_{i}-{\bar{y}})^{2}}}}$$

where yi, ŷi, and ȳ represent the true value of the i-th data point, the predicted value of the i-th data point, and the average of the true values, respectively. In all experiments, the generated missing entries follow either a missing completely at random (MCAR) or a missing not at random (MNAR) pattern. A discussion of the procedure for generating these patterns can be found in Appendix G.

## 5.2 Tuning Hyper-Parameters Of Rifle

The hyper-parameter c in (7) controls the robustness of the model by adjusting the size of the confidence intervals. This parameter is tuned by performing a cross-validation procedure over the set {0.1, 0.25, 0.5, 1, 2, 5, 10, 20, 50, 100}, and the value with the lowest NRMSE is chosen. The default value in the implementation is c = 1, since it consistently performs well across different experiments. Furthermore, λ, the hyper-parameter of the ridge regression regularizer, is tuned over the set {0.01, 0.1, 0.5, 1, 2, 5, 10, 20, 50} by choosing 20% of the data as the validation set. To tune K, the number of bootstrap samples for estimating the confidence intervals, we tried 10, 20, 50, and 100; no significant difference in test performance is observed among these values.

Furthermore, we tune the hyper-parameters of the competing packages as follows. For KNN-Imputer (Troyanskaya et al., 2001), we try {2, 10, 20, 50} for the number of neighbors (K) and pick the one with the highest performance. For MICE (Buuren & Groothuis-Oudshoorn, 2010) and Amelia (Honaker et al., 2011), we generate 5 different imputed datasets and pick the one with the highest performance on the test data. MissForest has multiple hyper-parameters. We keep the criterion as "MSE" since our performance evaluation measure is NRMSE. Moreover, we tune the number of iterations and the number of estimators (number of trees) by checking values from {5, 10, 20} and {50, 100, 200}, respectively.
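As a concrete illustration of this tuning protocol, here is a minimal sketch of the grid search for c using a single hold-out split (the `fit_rifle` training routine is a hypothetical placeholder, not the actual package interface), together with the NRMSE metric from Section 5.1:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized root mean squared error (Section 5.1)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.sqrt(np.mean((y_true - y_true.mean()) ** 2))

def tune_c(fit_rifle, X_tr, y_tr, X_val, y_val):
    """Pick the robustness parameter c with the lowest validation NRMSE.
    `fit_rifle(X, y, c=...)` is assumed to return an object with .predict()."""
    grid = [0.1, 0.25, 0.5, 1, 2, 5, 10, 20, 50, 100]
    scores = {c: nrmse(y_val, fit_rifle(X_tr, y_tr, c=c).predict(X_val))
              for c in grid}
    return min(scores, key=scores.get)
```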
We do not change the structure of the neural networks for MIDA (Gondara & Wang, 2018) and GAIN (Yoon et al., 2018), and the default versions are used for imputing the datasets.

## 5.3 Rifle Consistency

In Theorem 2, part (a), we demonstrated that RIFLE is consistent. In Figure 3, we investigate the consistency of RIFLE on synthetic datasets with different proportions of missing values.

![11_image_0.png](11_image_0.png)

Figure 3: Comparing the consistency of RIFLE, MissForest, KNN Imputer, MICE, Amelia, and Expectation Maximization methods on a synthetic dataset containing 40% missing values.

The synthetic data has 50 input features following a jointly normal distribution whose mean entries are randomly chosen from the interval (−100, 100). Moreover, the covariance matrix equals Σ = SST, where the elements of S are randomly picked from (−1, 1). The dimension of S is 50 × 20. The target variable is a linear function of the input features plus zero-mean normal noise with a standard deviation of 0.01. As depicted in Figure 3, RIFLE requires fewer samples to recover the ground-truth parameters of the model compared to MissForest, KNN Imputer, Expectation Maximization (Dempster et al., 1977), and MICE. Amelia performs notably well, since the predictors have a jointly normal distribution and the underlying model is linear. Note that as the number of samples increases, the NRMSE of our framework converges to 0.01, which is the standard deviation of the zero-mean Gaussian noise added to each target value (the dashed line).

## 5.4 Data Imputation Via Rifle

As explained in Section 3, while the primary goal of RIFLE is to learn a robust regression model in the presence of missing values, it can also be used as an imputation tool. We run RIFLE and several state-of-the-art approaches on five datasets from the UCI repository (Dua & Graff, 2017) (Spam, Housing, Clouds, Breast Cancer, and Parkinson) with different proportions of MCAR missing values (the descriptions of the datasets can be found in Appendix I). Then, we compute the NRMSE of the imputed entries. Table 1 shows the performance of RIFLE compared to other approaches on datasets where the proportion of missing values is relatively high ($\frac{n(1-p)}{d} \approx O(1)$). RIFLE outperforms these methods in almost all cases, and performs slightly better than MissForest, which uses a highly non-linear model (random forest) to impute missing values.
| Dataset Name | RIFLE | QRIFLE | MICE | Amelia | GAIN | MissForest | MIDA | EM |
|---------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Spam (30%) | 0.87 ±0.009 | 0.82 ±0.009 | 1.23 ±0.012 | 1.26 ±0.007 | 0.91 ±0.005 | 0.90 ±0.013 | 0.97 ±0.008 | 0.94 ±0.004 |
| Spam (50%) | 0.90 ±0.013 | 0.86 ±0.014 | 1.29 ±0.018 | 1.33 ±0.024 | 0.93 ±0.015 | 0.92 ±0.011 | 0.99 ±0.011 | 0.97 ±0.008 |
| Spam (70%) | 0.92 ±0.017 | 0.91 ±0.019 | 1.32 ±0.028 | 1.37 ±0.032 | 0.97 ±0.014 | 0.95 ±0.016 | 0.99 ±0.018 | 0.98 ±0.017 |
| Housing (30%) | 0.86 ±0.015 | 0.89 ±0.018 | 1.03 ±0.024 | 1.02 ±0.016 | 0.82 ±0.015 | 0.84 ±0.018 | 0.93 ±0.025 | 0.95 ±0.011 |
| Housing (50%) | 0.88 ±0.021 | 0.90 ±0.024 | 1.14 ±0.029 | 1.09 ±0.027 | 0.88 ±0.019 | 0.88 ±0.018 | 0.98 ±0.029 | 0.96 ±0.016 |
| Housing (70%) | 0.92 ±0.026 | 0.95 ±0.028 | 1.22 ±0.036 | 1.18 ±0.038 | 0.95 ±0.027 | 0.93 ±0.024 | 1.02 ±0.037 | 0.98 ±0.017 |
| Clouds (30%) | 0.81 ±0.018 | 0.79 ±0.019 | 0.98 ±0.024 | 1.04 ±0.027 | 0.76 ±0.021 | 0.71 ±0.011 | 0.83 ±0.022 | 0.86 ±0.013 |
| Clouds (50%) | 0.84 ±0.026 | 0.84 ±0.028 | 1.10 ±0.041 | 1.13 ±0.046 | 0.82 ±0.027 | 0.75 ±0.023 | 0.88 ±0.033 | 0.89 ±0.018 |
| Clouds (70%) | 0.87 ±0.029 | 0.90 ±0.033 | 1.16 ±0.044 | 1.19 ±0.048 | 0.89 ±0.035 | 0.81 ±0.031 | 0.93 ±0.044 | 0.92 ±0.023 |
| Breast Cancer (30%) | 0.52 ±0.023 | 0.54 ±0.027 | 0.74 ±0.031 | 0.81 ±0.032 | 0.58 ±0.024 | 0.55 ±0.016 | 0.70 ±0.026 | 0.67 ±0.014 |
| Breast Cancer (50%) | 0.56 ±0.026 | 0.59 ±0.027 | 0.79 ±0.029 | 0.85 ±0.033 | 0.64 ±0.025 | 0.59 ±0.022 | 0.76 ±0.035 | 0.69 ±0.022 |
| Breast Cancer (70%) | 0.59 ±0.031 | 0.65 ±0.034 | 0.86 ±0.042 | 0.92 ±0.044 | 0.70 ±0.037 | 0.63 ±0.028 | 0.82 ±0.035 | 0.67 ±0.014 |
| Parkinson (30%) | 0.57 ±0.016 | 0.55 ±0.016 | 0.71 ±0.019 | 0.67 ±0.021 | 0.53 ±0.015 | 0.54 ±0.010 | 0.62 ±0.017 | 0.64 ±0.011 |
| Parkinson (50%) | 0.62 ±0.022 | 0.64 ±0.025 | 0.77 ±0.029 | 0.74 ±0.034 | 0.61 ±0.022 | 0.65 ±0.014 | 0.71 ±0.027 | 0.69 ±0.022 |
| Parkinson (70%) | 0.67 ±0.027 | 0.74 ±0.033 | 0.85 ±0.038 | 0.82 ±0.037 | 0.69 ±0.031 | 0.73 ±0.022 | 0.78 ±0.038 | 0.75 ±0.029 |

Table 1: NRMSE of imputed entries for RIFLE, QRIFLE, and state-of-the-art imputation methods on five UCI datasets with 30%, 50%, and 70% MCAR missing values.

## 5.5 Sensitivity Of Rifle To The Number Of Samples And Proportion Of Missing Values

In this section, we analyze the sensitivity of RIFLE and other state-of-the-art approaches to the number of samples and the proportion of missing values. We create versions of four real datasets (Spam, Parkinson, Wave Energy Converter, and Breast Cancer) from the UCI Repository (Dua & Graff, 2017) containing 40%, 50%, 60%, 70%, and 80% MCAR missing values, respectively (the descriptions of the datasets can be found in Appendix I).

![12_image_0.png](12_image_0.png)

Figure 4: Performance comparison of RIFLE, MICE, and MissForest on four UCI datasets: Parkinson, Spam, Wave Energy Converter, and Breast Cancer. For each dataset, we count the number of features on which each method outperforms the others.

Given a feature in a dataset containing missing values, we say an imputer wins that feature if the imputation error in terms of NRMSE for that imputer is less than the error of the other imputers. Figure 4 reports the number of features won by each imputer on the created datasets described above. As we observe, the number of wins for RIFLE increases as we increase the proportion of missing values. This observation shows that the sensitivity of RIFLE as an imputer to the proportion of missing values is generally lower than that of MissForest and MICE.
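The win-counting summary behind Figure 4 can be sketched as follows (a simplified illustration assuming the per-feature NRMSE values have already been computed for each imputer; ties are broken by dictionary order here):

```python
import numpy as np

def count_feature_wins(errors):
    """errors: dict mapping imputer name -> array (d,) of per-feature NRMSE.
    An imputer 'wins' a feature when its NRMSE is the smallest there."""
    names = list(errors)
    stacked = np.stack([errors[n] for n in names])  # (n_imputers, d)
    winners = stacked.argmin(axis=0)                # index of best imputer per feature
    return {n: int((winners == i).sum()) for i, n in enumerate(names)}

# e.g. count_feature_wins({"RIFLE": e_r, "MICE": e_m, "MissForest": e_f})
```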
![13_image_0.png](13_image_0.png)

Figure 5: Sensitivity of RIFLE, MissForest, Amelia, KNN Imputer, MIDA, and Mean Imputer to the percentage of missing values on the Drive dataset. Increasing the percentage of missing entries degrades the benchmarks' performance relative to RIFLE. The KNN-Imputer implementation cannot be executed on datasets containing 80% (or more) missing entries. Moreover, Amelia and MIDA do not converge to a solution when the percentage of missing entries is higher than 70%.

Figure 4 does not show how an increase in the proportion of missing values changes the NRMSE of the imputers. Next, we analyze the sensitivity of RIFLE and several imputers to changes in the missing value proportion. Fixing the proportion of missing values, we generate 10 random datasets containing missing values in random locations on the Drive dataset (the descriptions of the datasets are available in Appendix I), and impute the missing values of each dataset with RIFLE, MissForest, Mean Imputation, and MICE. For each experiment, we select 400 data points and report the average NRMSE of the imputed entries. Figure 5 shows the average and standard deviation of the imputers' performances for proportions of missing values ranging from 10% to 90%; RIFLE is less sensitive to the proportion of missing values than the benchmark methods. Finally, in Figure 6, we evaluate RIFLE and the other methods on the BlogFeedback dataset (see Appendix I) containing 40% missing values. The results show that RIFLE's performance is less sensitive to a decreasing number of samples.

## 5.6 Performance Comparison On Real Datasets

In this section, we compare the performance of RIFLE to several state-of-the-art approaches, including MICE (Buuren & Groothuis-Oudshoorn, 2010), Amelia (Honaker et al., 2011), MissForest (Stekhoven & Bühlmann, 2012), KNN Imputer (Raghunathan et al., 2001), and MIDA (Gondara & Wang, 2018). There are two primary ways to predict a continuous target variable in a dataset with many missing values: one can first impute the missing data with a state-of-the-art package and then run a linear regression, or one can directly learn the target variable, as discussed in Section 3. Table 2 compares the performance of mean imputation, MICE, MIDA, MissForest, and KNN to that of RIFLE on three datasets: NHANES, Blog Feedback, and Superconductivity. Both the Blog Feedback and Superconductivity datasets contain 30% MNAR missing values generated by Algorithm 9, with 10000 and 20000 training samples, respectively. The description of the NHANES data and its distribution of missing values can be found in Appendix I.

![14_image_0.png](14_image_0.png)

Figure 6: Sensitivity of RIFLE, MissForest, MICE, Amelia, Mean Imputer, KNN Imputer, and MIDA to the number of samples for the imputation of the Blog Feedback dataset containing 40% MCAR missing values. When the number of samples is limited, RIFLE outperforms the other methods, and its performance is very close to the non-linear imputer MissForest for larger samples.

Efficiency of RIFLE: We run RIFLE for 1000 iterations with a step size of 0.01 in the above experiments. At each iteration, the main operation is to find the optimal θ for any given b and C. The average runtime of each method on each dataset is reported in Table 5 in Appendix L.
The main reason for the time efficiency of RIFLE compared to MICE, MissForest, MIDA, and KNN Imputer is that it directly predicts the target variable without imputing all missing entries.

| Methods | Super Conductivity | Blog Feedback | NHANES |
|-----------------------------|--------------------|-----------------|-----------------|
| Regression on Complete Data | 0.4601 | 0.7432 | 0.6287 |
| RIFLE | 0.4873 ± 0.0036 | 0.8326 ± 0.0085 | 0.6304 ± 0.0027 |
| Mean Imputer + Regression | 0.6114 ± 0.0006 | 0.9235 ± 0.0003 | 0.6329 ± 0.0008 |
| MICE + Regression | 0.5078 ± 0.0124 | 0.8507 ± 0.0325 | 0.6612 ± 0.0282 |
| EM + Regression | 0.5172 ± 0.0162 | 0.8631 ± 0.0117 | 0.6392 ± 0.0122 |
| MIDA Imputer + Regression | 0.5213 ± 0.0274 | 0.8394 ± 0.0342 | 0.6542 ± 0.0164 |
| MissForest | 0.4925 ± 0.0073 | 0.8191 ± 0.0083 | 0.6365 ± 0.0094 |
| KNN Imputer | 0.5438 ± 0.0193 | 0.8828 ± 0.0124 | 0.6427 ± 0.0135 |

Table 2: Normalized RMSE of RIFLE and several state-of-the-art methods on the Superconductivity, Blog Feedback, and NHANES datasets. The first two datasets contain 30% missing not at random (MNAR) missing values in the training phase, generated by Algorithm 9. Each method is applied 5 times to each dataset, and the result is reported as the average performance ± the standard deviation in terms of NRMSE.

Since MICE and MIDA cannot predict values during the test phase without data imputation, we use them in a pre-processing stage to impute the data and then apply linear regression to the imputed dataset. On the other hand, RIFLE, KNN Imputer, and MissForest can predict the target variable without imputing the training dataset. Table 2 shows that RIFLE outperforms all other state-of-the-art approaches on the three datasets. In particular, RIFLE outperforms MissForest even though the underlying model RIFLE uses is simpler (linear) than the non-linear random forest model utilized by MissForest.

| Number of Training Data Points | LDA | Robust LDA | Robust QDA |
|--------------------------------|----------------|----------------|----------------|
| 50 | 52.38% ± 3.91% | 62.14% ± 1.78% | 61.36% ± 1.62% |
| 100 | 61.24% ± 1.89% | 68.46% ± 1.04% | 70.07% ± 0.95% |
| 200 | 73.49% ± 0.97% | 73.35% ± 0.67% | 73.51% ± 0.52% |

Table 3: Sensitivity of linear discriminant analysis, robust LDA (common covariance matrices), and robust QDA (different covariance matrices for the two groups) to the number of training samples.

In Section 4, we discussed how to specialize RIFLE to robust normal discriminant analysis in the presence of missing values. Since the maximization problem over the second moments of the data (Σ) is intractable, we solved the maximization problem over a set of k covariance matrices estimated by bootstrap sampling. To investigate the effect of the choice of k on the performance of the robust classifier, we train robust normal discriminant analysis models for different values of k on two training datasets (Avila and Magic) containing 40% MCAR missing values. The descriptions of the datasets can be found in Appendix I. For k = 1, there is no maximization problem, and the method is thus equivalent to the classifier proposed in Fung & Wrobel (1989). As shown in Figure 7, increasing the number of covariance estimations generally enhances the accuracy of the classifier in the test phase.
However, as shown in Theorem 5, the time required for the training phase grows linearly with the number of covariance estimations.

## 5.6.1 Performance Of Rifle On Classification Tasks

![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png)

Figure 7: Effect of the number of covariance estimations on the performance (left) and run time (right) of robust LDA on the Avila and Magic datasets. Increasing the number of covariance estimations (k) improves the model's accuracy on the test data, but requires a longer training time.

## 5.6.2 Comparison Of Robust Linear Regression And Robust Qda

An alternative to the robust QDA presented in Section 4 is to apply the robust linear regression algorithm (Section 3) and map the solutions to the classes by thresholding (positive values map to label 1 and negative values to label −1). Table 4 compares the performance of the two classifiers on five different datasets. As demonstrated in the table, quadratic discriminant analysis performs better when all features are continuous. This shows that the QDA model relies heavily on the normality assumption, while robust linear regression handles categorical features better than robust QDA.

| Dataset | Feature Type | RIFLE | Robust QDA | MissForest | MICE | KNN Imputer | EM |
|----------------------|--------------|----------------|----------------|----------------|----------------|----------------|----------------|
| Glass Identification | Continuous | 67.12% ± 1.84% | 69.54% ± 1.97% | 65.76% ± 1.49% | 62.48% ± 2.45% | 60.37% ± 1.12% | 68.21% ± 0.94% |
| Annealing | Mixed | 63.41% ± 2.44% | 59.51% ± 2.21% | 64.91% ± 1.35% | 60.66% ± 1.59% | 57.44% ± 1.44% | 59.43% ± 1.29% |
| Abalone | Mixed | 68.41% ± 0.74% | 63.27% ± 0.76% | 69.40% ± 0.42% | 63.12% ± 0.98% | 62.43% ± 0.38% | 62.91% ± 0.37% |
| Lymphography | Discrete | 66.32% ± 1.05% | 58.15% ± 1.21% | 66.11% ± 0.94% | 55.73% ± 1.24% | 57.39% ± 0.88% | 59.55% ± 0.68% |
| Adult | Discrete | 72.42% ± 0.06% | 60.36% ± 0.08% | 70.34% ± 0.03% | 63.30% ± 0.14% | 60.14% ± 0.00% | 60.69% ± 0.01% |

Table 4: Accuracy of RIFLE, robust QDA, MissForest, MICE, KNN-Imputer, and Expectation Maximization (EM) on datasets with discrete, mixed, and continuous features. Robust QDA can perform better than the other methods when the input features are continuous and the target variable is discrete; however, RIFLE attains higher accuracy in the mixed and discrete settings.

Limitations and Future Directions: The proposed framework for robust regression in the presence of missing values is limited to linear models. While in Appendix E we use polynomial kernels to apply non-linear transformations to the data, such an approach can potentially increase the number of missing values in the kernel space generated by the composition of the original features. A future direction is to develop efficient algorithms for non-linear regression models such as multi-layer neural networks, decision tree regressors, gradient boosting regressors, and support vector regression models. In the case of robust classification, the methodology is extendable to losses beyond quadratic discriminant analysis. Unlike the regression case, a limitation of the proposed method for robust classification is its reliance on the Gaussianity assumption on the data distribution (conditioned on each label). A natural extension is to assume that the underlying data distribution follows a mixture of Gaussian distributions.
Conclusion: In this paper, we proposed a distributionally robust optimization framework over distributions whose low-order marginals lie within estimated confidence intervals, for inference on and imputation of datasets in the presence of missing values. We developed algorithms for regression and classification with convergence guarantees. The method's performance is evaluated on synthetic and real datasets with different numbers of samples, dimensions, proportions of missing values, and types of missing values. In most experiments, RIFLE consistently outperforms the existing methods.

## Acknowledgments

This work was supported by the NIH/NSF Grant 1R01LM013315-01, the NSF CAREER Award CCF-2144985, and the AFOSR Young Investigator Program Award FA9550-22-1-0192.
Review 1: Summary: The paper proposes a methodology, called RIFLE, to handle missing data in supervised learning. The principal idea, Equation 4, is a variant of distributionally robust risk minimization: the uncertainty set is parametrized by mean and covariance conditions. The paper concretely instantiates the framework in Equation 4 for linear model-based hypothesis classes (Section 3 and Section 4), and suggests that the framework can be generalized beyond the linear case. Experiments indicate that the methodology performs favorably compared to existing approaches that handle missing data in supervised learning. For instance, Figure 3 shows that RIFLE has high prediction accuracy even in the presence of missing data, and Figure 4 shows that RIFLE performs accurate imputation of missing values. Strengths and Weaknesses: # Strengths Missing data is a prevalent problem, so any methodology to address it has the potential to make a big impact. The proposed methodology is easy to understand, has only a small number of hyperparameters with clear meaning, and the paper adequately discusses ways to set such hyperparameters. The writing is of high quality. Claims are supported by evidence. Sections/paragraphs have thesis statements that guide readers on what to take away. The transitions between different sections are smooth. The figures are very legible: the captions and the types of plots are clear in meaning and corroborate the text. The proofs in the appendix are easy to verify. # Weaknesses The section on how RIFLE can be used to impute missing values (Section 3.4) is not as strong as the rest of the paper. I believe this section will be strengthened with more exposition. The current paper already utilizes Figure 1 - would it be possible to walk through a concrete example using this Figure 1, with more (but not too many) equations? Requested Changes: The primary change I would like to see is about Section 3.4. Currently I don't understand how Algorithm 1 is used to impute missing values. Some secondary changes include: * Figure 3: change the x-axis to a log scale since the values are not equally spaced on the linear scale. This is also a problem in Figure 6. * Section 5.2: the equation pointed to by the hyperlink (11) doesn't contain the hyperparameter c * Reorganize the information in Table 1. I personally find tables hard to read. Perhaps you can convert Table 1 into a boxplot. Time-permitting / future work * Do you have results for linear discriminant analysis? Section 4 explains the methodology, but I don't see numerical demonstrations for this methodology in the main text. Broader Impact Concerns: None ================================================== Review 2: Summary: This paper develops a general framework for robust inference with missing data, by combining distributionally robust optimization and bootstrap confidence intervals. More specifically, the optimization problem is formulated with $\ell_\infty$ constraints on low-order moments, which are computed from the incomplete data with a bootstrap method. The paper illustrates the framework on two examples, ridge regression and quadratic discriminant analysis. The paper provides some theory and conducts performance comparisons through numerical experiments. Strengths and Weaknesses: Strengths: It seems novel to combine the ideas of distributionally robust optimization and statistical inference for low-order moments.
While confidence intervals for individual imputed values may be too wide, the sufficient statistics (i.e., low-order moments) are usually easier to estimate. Weaknesses: The proposed framework has two major weaknesses: (1) In principle, it can only deal with data missing completely at random (MCAR), which is too restrictive. There are a wide variety of methods, including those based on inverse probability weighting and multiple imputation, that can handle data missing at random (MAR), which are more realistic. (2) The distributionally robust approach has a price to pay and inevitably incurs an efficiency loss. The theoretical guarantees developed by the paper are quite weak and not sufficient to understand the trade-off between robustness and efficiency. Also, Theorems 1 and 2 are not mathematically rigorous and contain many incorrect or ambiguous statements. For example: (1) Theorem 1: $\boldsymbol\theta^*(\mathbf{b},\mathbf{C})$ is not defined; it is defined in Appendix H, Lemma 11. The assumption on a constant $\tau$ is questionable in the ridge regression example when $\mathbf{b}$ tends to infinity. (2) Theorem 2, part (a): the limiting statements (8) and (9) do not specify the mode of convergence (e.g., in probability or almost surely). Also, the statements “the estimator proposed in Algorithm 2 converges to its actual value” and “by increasing the number of samples, the size of confidence intervals goes to zero” are not mathematically rigorous. (3) Theorem 2, part (b): the definition of $L_{\text{test}}(\boldsymbol\theta)$ is problematic; if it is minimized over $\boldsymbol\theta$, then it should not depend on $\boldsymbol\theta$. Requested Changes: (1) The authors should rework the theory to make it precise and understandable. Furthermore, stronger theoretical guarantees should be provided, e.g., to quantify the efficiency loss due to distributional robustness. (2) Extensions to the MAR setting should be discussed. Broader Impact Concerns: None ================================================== Review 3: Summary: This paper proposes a framework for prediction with missing values that combines the bootstrap to estimate data distribution characteristics and distributionally robust optimization. Explicit algorithms are given for ridge regression and discriminant analysis with convergence guarantees when the data are missing completely at random. Experiments are done on datasets (mostly from the UCI machine learning repository) with a range of sizes and dimensions. The proposed algorithm outperforms baselines when there is a greater amount of missing data and when the data may be missing not at random. Strengths and Weaknesses: ## Strengths 1. The paper is clearly written and the approach is straightforward to understand. 2. The algorithm has good theoretical support; convergence guarantees are provided. 3. Experimental results show that RIFLE improves over baselines when there is more limited data. ## Weaknesses 1. I feel too much was relegated to the appendix; some discussion of the classification case in the main text and more experiments for the classification setting would strengthen the work. 2. There is no discussion of the computational burden of RIFLE, in particular compared to the baselines, which would be of importance to practitioners. Requested Changes: I think some discussion or analysis on the computational burden of the algorithm is critical. 
Broader Impact Concerns: The authors do not include a broader impact statement, which I think is fine since the work is largely methodological. The authors do not discuss the limitations; I feel that including them would strengthen the work, even if it's just some discussion of future work. ================================================== Metareview: Recommendation: Accept as is Comment: The reviewers all agree that the submission provides a strong contribution to the general framework for robust inference with missing data. The claims are supported and all the concerns are well addressed. Thus I recommend acceptance. ==================================================
# Sparsifying Bayesian Neural Networks With Latent Binary Variables And Normalizing Flows

Anonymous authors Paper under double-blind review

## Abstract

Artificial neural networks are powerful machine learning methods used in many modern applications. A common issue is that they have millions or billions of parameters, and therefore tend to overfit. Bayesian neural networks (BNN) can improve on this since they incorporate parameter uncertainty. Latent binary Bayesian neural networks (LBBNN) further take into account structural uncertainty by allowing the weights to be turned on or off, enabling inference in the joint space of weights and structures. Mean-field variational inference is typically used for computation within such models. In this paper, we will consider two extensions of variational inference for the LBBNN: Firstly, by using the local reparametrization trick (LCRT), we improve computational efficiency. Secondly, and more importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, we learn a more flexible variational posterior than the mean-field Gaussian. Experimental results on real data show that this improves predictive power compared to using mean-field variational inference on the LBBNN method, while also obtaining sparser networks. We also perform two simulation studies. In the first, we consider variable selection in a logistic regression setting, where the more flexible variational distribution improves results. In the second study, we compare predictive uncertainty based on data generated from two-dimensional Gaussian distributions. Here, we argue that our Bayesian methods lead to more realistic estimates of predictive uncertainty.

## 1 Introduction

Modern deep learning architectures can have billions of trainable parameters (Khan et al., 2020). Due to the large number of parameters in the model, the network can overfit, and therefore may not generalize well to unseen data. Further, the large number of parameters gives computational challenges, both for training the network and for prediction. The lottery ticket hypothesis (Frankle & Carbin, 2018; Pensia et al., 2020) states that dense networks contain sparse subnetworks that can achieve the performance of the full network. Even though the construction/training of such subnetworks is not necessarily simpler than training the full network, substantial gains can be obtained in the prediction stage when using sparse subnetworks. Bayesian neural networks (BNN, Neal, 1992; MacKay, 1995; Bishop, 1997) use a rigorous Bayesian methodology to handle parameter and prediction uncertainty. In principle, prior knowledge can be incorporated through appropriate prior distributions, but most approaches within BNN so far only apply very simple convenience priors (Fortuin, 2022). However, approaches where knowledge-based priors are incorporated are starting to appear (Tran et al., 2022; Sam et al., 2024). Due to a more proper procedure for handling uncertainty, the Bayesian approach does, in many cases, result in more reliable solutions with less overfitting and better uncertainty measures. However, this comes at the expense of extremely high computational costs. Until recently, inference on Bayesian neural networks could not scale to large multivariate data due to limitations of standard Markov chain Monte Carlo (MCMC) approaches, the main quantitative procedure used for complex Bayesian inference.
Recent developments of variational Bayesian approaches (Gal, 2016) allow us to approximate the posterior of interest and lead to more scalable methods. In this work, we consider a formal Bayesian approach for obtaining sparse subnetworks by including latent binary variables that allow the weights in the network to be turned on or off. This allows us to model structural uncertainty and makes BNNs more robust to misspecification (Papamarkou et al., 2024). It also opens the door for Bayesian model selection and averaging (Hubin & Storvik, 2019). The computational procedure is based on variational inference. Earlier work (Hubin & Storvik, 2019; Bai et al., 2020; Hubin & Storvik, 2024) has considered similar settings; our main contributions are:

- improving the computational efficiency of LBBNNs through the use of the local reparametrization trick (Kingma et al., 2015);
- extending the class of variational distributions to normalizing flows, allowing dependencies to be modeled;
- demonstrating improvements in predictive power, sparsity, and variable selection through experiments on real and simulated data;
- demonstrating robust performance in uncertainty quantification through the expected calibration error for classification and the pinball loss for regression.

## 1.1 Literature Background

The idea of using a mathematical model to imitate how the brain works was first introduced in McCulloch & Pitts (1943). However, it was not until more recent years that the true power of these models could be harnessed, with the idea of using backpropagation (Rumelhart et al., 1986) to train the model with gradient descent. With the advent of modern GPU architectures, deep neural networks can be scaled to big data and have been very successful on a variety of tasks, including computer vision (Voulodimos et al., 2018) and natural language processing (Young et al., 2018). Modern deep learning architectures can have billions of trainable parameters (Khan et al., 2020). Due to the large number of parameters in the model, the network has the capacity to overfit and therefore may not generalize well to unseen data. Various regularization methods are used to deal with this, such as early stopping (Prechelt, 1998), dropout (Srivastava et al., 2014), and data augmentation (Shorten & Khoshgoftaar, 2019). These techniques are heuristic, and it is therefore not always clear how to use them or how well they work in practice. It is also possible to reduce the number of parameters in the network with pruning. This is typically done with the dense-to-sparse method (Han et al., 2017). Here, a dense network is trained while the importance of the weights (i.e., their magnitude) is recorded; the weights that fall below the sparsity threshold (a hyperparameter) are then removed. In Frankle & Carbin (2018), it is hypothesized that randomly initialized dense networks contain a sparse sub-network (the winning lottery ticket) that can obtain the same test accuracy as the original dense network. Instead of training and pruning once, referred to as one-shot pruning, this process is repeated sequentially several times, removing a certain percentage of the remaining weights each time, which results in networks with a higher degree of sparsity than those found with one-shot pruning. However, this comes at a higher computational cost. Further refinements to this are done in Evci et al.
(2020), where the network starts off dense and the weights with the smallest magnitude are dynamically removed, while new connections are added based on gradient information. Again, these approaches are heuristic and lack a solid theoretical foundation. Another issue with deep learning models is that they often make overconfident predictions. In Szegedy et al. (2013), it was shown that adding a small amount of noise to an image can trick a classifier into making a completely wrong prediction (with high confidence), even though the image looks the same to the human eye. The opposite is also possible: images that are white noise can be classified with almost complete certainty as belonging to a specific class (Nguyen et al., 2015). Bayesian neural networks (BNNs) were presented by Neal (1992), MacKay (1995), and Bishop (1997). They use a rigorous Bayesian methodology to handle parameter and prediction uncertainty, which in many cases results in more reliable solutions with less overfitting. Still, BNNs tend to be heavily over-parameterized and difficult to interpret. It is therefore interesting to consider sparsity-inducing methods from a Bayesian perspective. This is typically done by using sparsity-inducing priors, as in variational dropout (Kingma et al., 2015; Molchanov et al., 2017), which uses the independent log-uniform prior on the weights. This is an improper prior, meaning that it is not integrable and thus not a valid probability distribution. As noted in Hron et al. (2017), using this prior combined with commonly used likelihood functions leads to an improper posterior, meaning that the obtained results cannot be explained from a Bayesian modeling perspective. It is argued that variational dropout should instead be interpreted as penalized maximum likelihood estimation of the variational parameters. Additionally, Gale et al. (2019) find that while variational dropout works well on smaller networks, it is outperformed by the heuristic (non-Bayesian) methods on bigger networks.

![2_image_0.png](2_image_0.png)

Figure 1: A dense network on the left, one possible sparse structure on the right.

Another type of sparsity-inducing prior is the independent scale mixture prior; Blundell et al. (2015) proposed a mixture of two Gaussian densities, where using a small variance for the second mixture component gives many of the weights a prior concentrated around 0. Another possibility is the independent spike-and-slab prior, most commonly used in Bayesian linear regression models. This prior is used in latent binary Bayesian neural networks (LBBNN), introduced by Hubin & Storvik (2019; 2024) and concurrently in Bai et al. (2020). The spike-and-slab prior for a special case of LBBNN with the ReLU activation function was studied from a theoretical perspective in Polson & Ročková (2018). In Hubin & Storvik (2019), it was empirically shown that using this prior induces a very sparse network (around 90% of the weights were removed) while maintaining good predictive power. Using this approach thus takes into account uncertainty about whether each weight is included (structural uncertainty) and uncertainty in the included weights given a structure (parameter uncertainty), allowing for a fully (variational) Bayesian approach to network sparsification (see Figure 1).
In this paper, we show that transforming the variational posterior distribution with normalizing flows can result in even sparser networks while improving predictive power compared to the mean-field approach used in Hubin & Storvik (2019). Additionally, we demonstrate that the flow network handles predictive uncertainty well and performs better than the mean-field methods at variable selection in a logistic regression setting with highly correlated variables, thus demonstrating higher quality in structure learning.

## 2 The Model

Given the explanatory variable $\mathbf{x} \in \mathbb{R}^n$ and the response variable $\mathbf{y} \in \mathbb{R}^m$, a neural network models the conditional distribution

$$y\sim f(\cdot;\eta(x))$$

where the distribution f(·; η) is parameterised by the vector η. The vector η is obtained through a composition of semi-affine transformations:

$$u_{j}^{(l)}=\sigma^{(l)}\bigg(\sum_{i=1}^{n^{(l-1)}}u_{i}^{(l-1)}\gamma_{ij}^{(l)}w_{ij}^{(l)}+b_{j}^{(l)}\bigg),\quad j=1,\ldots,n^{(l)},\;l=1,\ldots,L,\tag{1}$$

with $\eta_j = u_j^{(L)}$. Additionally, $u^{(l-1)}$ denotes the inputs from the previous layer (with $u^{(0)} = x$ corresponding to the explanatory variables), the $w_{ij}^{(l)}$'s are the weights, the $b_j^{(l)}$'s are the bias terms, and $n^{(l)}$ (with $n^{(0)} = n$) is the number of inputs at layer $l$ of a total of $L$ layers. Further, we have the elementwise non-linear activation functions $\sigma^{(l)}$. The additional parameters $\gamma_{ij}^{(l)} \in \{0,1\}$ denote binary inclusion variables for the corresponding weights. Following Polson & Ročková (2018); Hubin & Storvik (2019); Bai et al. (2020), we consider a *structure* to be defined by the configuration of the binary vector γ, with the weights of each structure conditional on this configuration. To consider uncertainty in both structures and weights, we use the spike-and-slab prior, where for each (independent) layer l of the network, we also consider the weights to be independent:

$$p(w_{ij}^{(l)}|\gamma_{ij}^{(l)})=\gamma_{ij}^{(l)}\mathcal{N}(w_{ij}^{(l)};0,(\sigma^{(l)})^{2})+(1-\gamma_{ij}^{(l)})\delta(w_{ij}^{(l)})$$
$$p(\gamma_{ij}^{(l)})=\mathrm{Bernoulli}(\gamma_{ij}^{(l)};\alpha^{(l)}).$$

We will use the nomenclature from Hubin & Storvik (2019) and refer to this as the LBBNN model. Here, δ(·) is the Dirac delta function, which is considered to be zero everywhere except for a spike at zero. In addition, σ² and α denote the prior variance and the prior inclusion probability of the weights, respectively. In practice, we use the same variance and inclusion probability across all layers and weights, but this is not strictly necessary. In principle, one can incorporate knowledge about the importance of individual covariates or their co-inclusion patterns by adjusting the prior inclusion probabilities for the input layer or by specifying hyper-priors. This is common in the Bayesian model selection literature (Fletcher & Fletcher, 2018), but not yet within BNNs.

## 3 Bayesian Inference

The main motivation behind using LBBNNs is that we are able to take into account both structural and parameter uncertainty, whereas standard BNNs are only concerned with parameter uncertainty. By doing inference through the posterior predictive distribution, we average over all possible structural configurations and parameters.
For a new observation ỹ given training data D, we have:

$$p(\tilde{\mathbf{y}}|{\mathcal{D}})=\sum_{\gamma}\int_{\mathbf{w}}p(\tilde{\mathbf{y}}|\mathbf{w},\gamma,{\mathcal{D}})p(\mathbf{w},\gamma|{\mathcal{D}})\,d\mathbf{w}.$$

This expression is intractable due to the ultra-high dimensionality of w and γ, and using Monte Carlo sampling as an approximation is also challenging due to the difficulty of obtaining samples from the posterior distribution p(w, γ|D). Instead of trying to sample from the true posterior, we turn inference into an optimization problem, using variational inference (VI, Blei et al., 2017). The key idea is that we replace the true posterior distribution with an approximation qθ(w, γ), with θ denoting some variational parameters. We learn the variational parameters that make the approximate posterior as close as possible to the true posterior. Closeness is measured through the Kullback-Leibler (KL) divergence,

$${\rm KL}\left[q_{\theta}(\mathbf{w},\mathbf{\gamma})||p(\mathbf{w},\mathbf{\gamma}|{\cal D})\right]=\sum_{\mathbf{\gamma}}\int_{\mathbf{w}}q_{\theta}(\mathbf{w},\mathbf{\gamma})\log\frac{q_{\theta}(\mathbf{w},\mathbf{\gamma})}{p(\mathbf{w},\mathbf{\gamma}|{\cal D})}\,d\mathbf{w}.\tag{2}$$

Minimizing the KL-divergence (with respect to θ) is equivalent to maximizing the evidence lower bound (ELBO):

$$\operatorname{ELBO}(q_{\theta})=\mathbb{E}_{q_{\theta}(\mathbf{w},\mathbf{\gamma})}\left[\log p(\mathcal{D}|\mathbf{w},\mathbf{\gamma})\right]-\operatorname{KL}\left[q_{\theta}(\mathbf{w},\mathbf{\gamma})||p(\mathbf{w},\mathbf{\gamma})\right].\tag{3}$$

The objective is thus to maximize the expected log-likelihood while penalizing the KL divergence between the prior and the variational posterior. How good the approximation becomes depends on the family of variational distributions {qθ, θ ∈ Θ} that is chosen.

## 3.1 Choices Of Variational Families

A common choice (Blundell et al., 2015) for the approximate posterior in (dense) Bayesian neural networks is the mean-field Gaussian distribution. For simplicity of notation, denote now by W the set of weights corresponding to a specific layer. Note that from here on, we drop the layer notation for readability, since the parameters at different layers will always be considered independent in both the variational distribution and the prior. Then

$$q_{\theta}(\mathbf{W})=\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}{\mathcal{N}}(w_{ij};{\tilde{\mu}}_{ij},{\tilde{\sigma}}_{ij}^{2}),$$

where nin and nout denote the number of neurons in the previous and current layer, respectively. Weights corresponding to different layers are assumed independent as well.

![4_image_0.png](4_image_0.png)

Figure 2: On the left, the mean-field variational posterior where the weights are assumed independent. On the right, the latent variational distribution z allows for modeling dependencies between the weights.
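As a point of reference for what follows, such a mean-field Gaussian layer can be implemented with the reparametrization trick in a few lines. This is a generic PyTorch sketch (not the authors' code), with one variational mean and log-standard-deviation per weight; the initialization values are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Fully factorized Gaussian variational layer: W ~ N(mu, sigma^2)."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.log_sigma = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        sigma = torch.exp(self.log_sigma)
        w = self.mu + sigma * torch.randn_like(sigma)  # reparametrized sample
        return F.linear(x, w, self.bias)
```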
The mean-field Gaussian distribution for Bayesian neural networks can be extended to include the binary inclusion variables following Carbonetto & Stephens (2012):

$$q_{\theta}(\mathbf{W}|\mathbf{\Gamma})=\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}\left(\gamma_{ij}\mathcal{N}(w_{ij};\tilde{\mu}_{ij},\tilde{\sigma}_{ij}^{2})+(1-\gamma_{ij})\delta(w_{ij})\right);\tag{4}$$
$$q_{\tilde{\alpha}_{ij}}(\gamma_{ij})=\text{Bernoulli}(\gamma_{ij};\tilde{\alpha}_{ij}).$$

Here, Γ is the set of inclusion indicators corresponding to a specific layer. However, the mean-field Gaussian distribution (Blundell et al., 2015) is typically too simple to capture the complexity of the true posterior distribution. We follow Ranganath et al. (2016) and introduce a set of auxiliary latent variables z to model dependencies between the weights in q, using the following variational posterior distribution:

$$q_{\mathbf{\theta}}(\mathbf{W}|\Gamma,\mathbf{z})=\prod_{i=1}^{n_{in}}\prod_{j=1}^{n_{out}}\left(\gamma_{ij}\mathcal{N}(w_{ij};z_{i}\tilde{\mu}_{ij},\tilde{\sigma}_{ij}^{2})+(1-\gamma_{ij})\delta(w_{ij})\right);\tag{5}$$
$$q_{\tilde{\alpha}_{ij}}(\gamma_{ij})=\text{Bernoulli}(\gamma_{ij};\tilde{\alpha}_{ij}),$$

where z = (z1, ..., znin) follows a distribution qϕ(z). For an illustration of the difference between the two variational distributions in Equation (4) and Equation (5), see Figure 2. The novelty in our suggested variational distribution is to combine both weight and structural uncertainty, in addition to modeling dependencies between the weights. As for W, z is a set of variables related to a specific layer, and independence between layers is assumed for the z's as well. To increase the flexibility of the variational posterior, we apply normalizing flows (Rezende & Mohamed, 2015) to qϕ(z). In general, a normalizing flow is a composition of invertible transformations of some initial (simple) random variable z0,

$$\mathbf{z}_{k}=f_{k}(\mathbf{z}_{k-1}),\quad k=1,...,K.$$

The log density of the transformed variable z = zK is given as

$$\log q(\mathbf{z})=\log q_{0}(\mathbf{z}_{0})-\sum_{k=1}^{K}\log\left|\det\frac{\partial\mathbf{z}_{k}}{\partial\mathbf{z}_{k-1}}\right|.\tag{6}$$

We are typically interested in transformations that have a Jacobian determinant that is tractable and fast to compute, in addition to being highly flexible. Transforming the variational posterior distribution in a BNN with normalizing flows was first done in Louizos & Welling (2017), who coined the term multiplicative normalizing flows (BNN-FLOW), where the transformations were applied in the activation space instead of the weight space. As the weights are of much higher dimension, the number of flow parameters, and thus the number of parameters of the variational distribution, would otherwise explode quickly. We will follow Louizos & Welling (2017) here. The main difference in our work is that by using the variational posterior in Equation (5), we also get sparse networks. For the normalizing flows, we will use the inverse autoregressive flow (IAF), with numerically stable updates, introduced by Kingma et al. (2016). It works by transforming the input in the following way:

$$\begin{aligned} \mathbf{z}_{k-1} &= \text{input} \\ \mathbf{m}_{k}, \mathbf{s}_{k} &= g(\mathbf{z}_{k-1}) \\ \boldsymbol{\kappa}_{k} &= \text{sigmoid}(\mathbf{s}_{k}) \\ \mathbf{z}_{k} &= \boldsymbol{\kappa}_{k}\odot\mathbf{z}_{k-1}+(1-\boldsymbol{\kappa}_{k})\odot\mathbf{m}_{k}, \end{aligned}\tag{7}$$

where g is a neural network and ⊙ denotes elementwise multiplication.
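A single IAF step of this form is straightforward to sketch in PyTorch. Note that for the Jacobian result stated next (Equation (8)) to hold, g must be autoregressive (e.g., built from masked MADE-style layers); the plain fully connected network below is only an illustrative stand-in:

```python
import torch
import torch.nn as nn

class IAFStep(nn.Module):
    """One numerically stable IAF update, Equation (7):
    z_k = kappa * z_{k-1} + (1 - kappa) * m."""

    def __init__(self, dim, hidden=100):
        super().__init__()
        # Placeholder for the autoregressive network g; a real implementation
        # would mask the weights so that (m_i, s_i) depend only on z_{1:i-1}.
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 2 * dim))

    def forward(self, z):
        m, s = self.g(z).chunk(2, dim=-1)
        kappa = torch.sigmoid(s)
        z_new = kappa * z + (1.0 - kappa) * m
        log_det = torch.log(kappa).sum(-1)  # Equation (8), valid when g is masked
        return z_new, log_det
```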
Assuming the neural network in Equation (7) is autoregressive (i.e., zk,i can only depend on zk,1:i−1), we get a lower triangular Jacobian and

$$\log\left|\det\frac{\partial\mathbf{z}_{k}}{\partial\mathbf{z}_{k-1}}\right|=\sum_{i=1}^{n_{in}}\log\kappa_{k,i}.\tag{8}$$

## 3.2 Computing The Variational Bounds

Minimization of the KL in Equation (2) is difficult due to the introduction of the auxiliary variable z in the variational distribution. In principle, z could be integrated out, but in practice this is difficult. Following Ranganath et al. (2016), we instead introduce z as an auxiliary variable also in the posterior distribution by defining p(w, γ, z|D) = p(w, γ|D)r(z|w, γ), where r(z|w, γ) in principle can be any distribution. We then consider the KL divergence in the extended space for (w, γ, z):

$$\text{KL}\left[q(\boldsymbol{w},\boldsymbol{\gamma},\boldsymbol{z})||p(\boldsymbol{w},\boldsymbol{\gamma},\boldsymbol{z}|\mathcal{D})\right]=\int_{\boldsymbol{z}}\sum_{\gamma}\int_{\boldsymbol{w}}q(\boldsymbol{w},\boldsymbol{\gamma},\boldsymbol{z})\log\frac{q(\boldsymbol{w},\boldsymbol{\gamma},\boldsymbol{z})}{p(\boldsymbol{w},\boldsymbol{\gamma},\boldsymbol{z}|\mathcal{D})}\,d\boldsymbol{w}\,d\boldsymbol{z},$$

which, by utilizing the definitions of p(w, γ, z) and q(w, γ, z), can be rewritten as

$$\mathrm{KL}\left[q(\mathbf{w},\mathbf{\gamma},\mathbf{z})||p(\mathbf{w},\mathbf{\gamma},\mathbf{z}|\mathcal{D})\right]=\mathbb{E}_{q(\mathbf{w},\mathbf{\gamma})}\Big[\mathrm{KL}\left[q(\mathbf{w},\mathbf{\gamma}|\mathbf{z})||p(\mathbf{w},\mathbf{\gamma})\right]+\log q(\mathbf{z})\Big]-\mathbb{E}_{q(\mathbf{w},\mathbf{\gamma},\mathbf{z})}\Big[\log p(\mathcal{D}|\mathbf{w},\mathbf{\gamma})+\log r(\mathbf{z}|\mathbf{w},\mathbf{\gamma})\Big]+\log p(\mathcal{D}).\tag{9}$$

As shown in Ranganath et al. (2016),

$$\operatorname{KL}\left[q(\mathbf{w},\gamma)||p(\mathbf{w},\gamma|\mathcal{D})\right]\leq\operatorname{KL}\left[q(\mathbf{w},\gamma,\mathbf{z})||p(\mathbf{w},\gamma,\mathbf{z}|\mathcal{D})\right],\tag{10}$$

giving a looser upper bound than the original (see Ranganath et al. (2016) for a proof), but the dependence structure in the variational posterior distribution can compensate for this.

After doing some algebra, we get the following contribution to the first term within the first expectation in Equation (9) from a specific layer:

$$\sum_{ij}\left(\tilde{\alpha}_{ij}\left(\log\frac{\sigma_{ij}}{\tilde{\sigma}_{ij}}+\log\frac{\tilde{\alpha}_{ij}}{\alpha_{ij}}-\frac{1}{2}+\frac{\tilde{\sigma}_{ij}^{2}+(\tilde{\mu}_{ij}z_{i}-0)^{2}}{2\sigma_{ij}^{2}}\right)+(1-\tilde{\alpha}_{ij})\log\frac{1-\tilde{\alpha}_{ij}}{1-\alpha_{ij}}\right).$$

Since we use autoregressive flows, the contribution to the second term in the first expectation simplifies to

$$\log q(\mathbf{z})=\log q_{0}(\mathbf{z}_{0})-\sum_{k=1}^{K}\sum_{i=1}^{n_{in}}\log\kappa_{k,i}.$$

For the specific choice of r(z|w, γ), we follow Louizos et al. (2017) in choosing

$$r_{B}(\mathbf{z}_{B}|\mathbf{w},\gamma)=\prod_{i=1}^{n_{in}}\mathcal{N}(\nu_{i},\tau_{i}^{2}).$$

We define the dependence of ν = (ν1, ..., νnin) and τ² = (τ²1, ..., τ²nin) on w and γ similarly to Louizos & Welling (2017):

$$\begin{aligned} \boldsymbol{\nu}&=n_{\text{out}}^{-1}(d_{1}s^{T})\mathbf{1},\qquad\text{with }s=\zeta(\mathbf{e}^{T}(\mathbf{w}\odot\boldsymbol{\gamma}))\\ \log\boldsymbol{\tau}^{2}&=n_{\text{out}}^{-1}(d_{2}s^{T})\mathbf{1}. \end{aligned}\tag{11}$$

Here, d1, d2 and e are trainable parameters with the same shape as z.
For ζ, we use hard-tanh¹ as opposed to tanh (used in Louizos & Welling (2017)), as this works better empirically. For the last term of Equation (9), we thus have:

$$\log r\left(\mathbf{z}|\mathbf{w},\boldsymbol{\gamma}\right)=\log r_{B}\left(\mathbf{z}_{B}|\mathbf{w},\boldsymbol{\gamma}\right)+\log\left|\det\frac{\partial\mathbf{z}_{B}}{\partial\mathbf{z}}\right|.$$

This means that we must use two normalizing flows, one to get from z0 to z = zK, and another from z to zB. Here, we have shown the inverse normalizing flow with only one layer, but this can in general be extended to an arbitrary number of layers, just as in Equation (6). For the biases of a given layer, we assume they are independent of the weights and of each other. We use the standard normal prior with the mean-field Gaussian approximate posterior. As we do not use normalizing flows on the biases, we only need to compute the KL-divergence between two Gaussian distributions:

$$\mathrm{KL}\left[q(\mathbf{b})||p(\mathbf{b})\right]=\sum_{j}\left(\log\frac{\sigma_{b_{j}}}{\tilde{\sigma}_{b_{j}}}-\frac{1}{2}+\frac{\tilde{\sigma}_{b_{j}}^{2}+(\tilde{\mu}_{b_{j}}-0)^{2}}{2\sigma_{b_{j}}^{2}}\right).$$

In practice, the ELBO is optimized through a (stochastic) gradient algorithm where the reparametrization trick (Kingma & Welling, 2013) is combined with mini-batching. This involves sampling the large Γ and W matrices.

¹See https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html for a definition.

## 4 Combining LBBNNs With The LCRT And MNF

The variational distribution in Equation (4) (used in both Hubin & Storvik (2019) and Bai et al. (2020)) has two major drawbacks when utilized in deep Bayesian neural networks. Firstly, each forward pass during training requires sampling the large Γ and W matrices, consisting of all the γij's and wij's, to compute the activations for each layer in the network, as opposed to standard BNNs that only require sampling W. Additionally, due to the binary nature of the γij's, they must be approximated with a continuous distribution in order to be able to propagate gradients through them using the reparametrization trick. Here, we will show how to circumvent *both* of these issues by sampling the pre-activations hj (by which we mean the linear combination before the non-linear activation function is applied) given in Equation (1) directly, typically referred to as the local reparametrization trick (Kingma et al., 2015, LCRT). The general idea behind the LCRT is that if we have a sum of independent Gaussian random variables, the sum will also be (exactly) Gaussian. In our variational approximation, we have a sum of independent random variables where each member of the sum is, in turn, the product of a discrete and a continuous random variable. The central limit theorem still holds for such a sum of independent random variables, as long as Lindeberg's condition (Billingsley, 2017) is satisfied. Thus, in order to apply the LCRT in our case, we must compute the mean and variance of the (approximate) Gaussian pre-activations:
$$\mathbb{E}[h_{j}]=\mathbb{E}\left[b_{j}+\sum_{i=1}^{N}o_{i}\gamma_{ij}w_{ij}\right]=\tilde{\mu}_{b_{j}}+\sum_{i=1}^{N}o_{i}\tilde{\alpha}_{ij}\tilde{\mu}_{ij}$$
$$\mathrm{Var}[h_{j}]=\mathrm{Var}\left[b_{j}+\sum_{i=1}^{N}o_{i}\gamma_{ij}w_{ij}\right]=\tilde{\sigma}_{b_{j}}^{2}+\sum_{i=1}^{N}o_{i}^{2}\tilde{\alpha}_{ij}(\tilde{\sigma}_{ij}^{2}+(1-\tilde{\alpha}_{ij})\tilde{\mu}_{ij}^{2}).$$

Here, o denotes the output from the previous layer, consisting of N neurons. We then use the same stochastic variational inference optimization algorithm as in Hubin & Storvik (2024). Another advantage of using the LCRT is that we get a reduction in the variance of the gradient estimates, as shown in Kingma et al. (2015). Note also that the approximations induced by the sampling procedure for h can be considered as an alternative variational approximation directly for p(h|D). For our second extension, we apply normalizing flows in the activation space to increase the flexibility of the variational posterior. When using normalizing flows, the mean and the variance of the pre-activation hj are:

$$\mathbb{E}[h_{j}]=\tilde{\mu}_{b_{j}}+\sum_{i=1}^{N}o_{i}z_{i}\tilde{\alpha}_{ij}\tilde{\mu}_{ij}$$
$$\mathrm{Var}[h_{j}]=\tilde{\sigma}_{b_{j}}^{2}+\sum_{i=1}^{N}o_{i}^{2}\tilde{\alpha}_{ij}(\tilde{\sigma}_{ij}^{2}+(1-\tilde{\alpha}_{ij})z_{i}^{2}\tilde{\mu}_{ij}^{2}).$$

It should be noted that z affects both the mean and the variance of our Gaussian approximation, whereas in Louizos & Welling (2017) it only influences the mean. Louizos & Welling (2017) also sample one z for each observation within the mini-batch. We found that empirically it made no difference in performance to sample only one vector and multiply the same z with each input vector. We do this, as it is more computationally efficient.
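A sketch of the resulting forward pass, combining the flow-adjusted moments above with the LCRT, is given below (the shape conventions are ours: o is (batch, N); alpha, mu, sigma are (N, n_out); z is (N,)):

```python
import torch

def sample_preactivations(o, z, alpha, mu, sigma, mu_b, sigma_b):
    """Draw h ~ N(E[h], Var[h]) directly, per the moments in Section 4."""
    mean = mu_b + (o * z) @ (alpha * mu)
    var = sigma_b ** 2 + (o ** 2) @ (
        alpha * (sigma ** 2 + (1.0 - alpha) * (z ** 2).unsqueeze(-1) * mu ** 2))
    return mean + torch.sqrt(var) * torch.randn_like(mean)
```

Only one Gaussian sample per pre-activation is needed, so the large W and Γ matrices are never sampled explicitly.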
## 5 Experiments

In this section, we demonstrate the robustness of our approach and show improvements with respect to the closest baseline method of Hubin & Storvik (2024) (denoted LBBNN-SSP-MF in their paper, and LBBNN here) with the two approaches proposed in this paper. We consider both full Bayesian model averaging (Hoeting et al., 1999), averaging over the posterior distribution for the inclusion variables, as well as the use of the median probability model (Barbieri et al., 2018), only including weights with posterior probabilities for the inclusion variables above 0.5. Median probability models have the potential of giving huge sparsity gains. We also compare against other reasonable baseline methods. We are not interested in trying to obtain state-of-the-art predictive results at all costs, hence all our experiments are run *without* using ad-hoc tricks commonly found in the Bayesian deep learning literature that often improve performance, such as tempering the posterior (Wenzel et al., 2020) or clipping the variance of the variational posterior distribution as done in Louizos & Welling (2017). Using these tricks (although tempting) would not allow us to evaluate the pure contribution of the methodology. We provide comparisons to a standard BNN, and to the multiplicative normalizing flow method (BNN-FLOW) introduced by Louizos & Welling (2017), as these are closely related to the LBBNN and its two extensions detailed in this paper. The goal is to compare results between frequentist and our Bayesian networks with the same architecture and hyper-parameters. See Figure 3 for a graphical illustration of how the different methods are related to one another.

![8_image_0.png](8_image_0.png)

Figure 3: Illustration of the relations between the different methods considered in this paper. Exactly one design change is present between all direct neighbours, disregarding the directions of the edges.

We have a standard, frequentist neural network without any regularization (ANN), which corresponds to using maximum likelihood to estimate the weights of the network. We also have a frequentist network with L2 regularization, corresponding to the maximum a posteriori (MAP) estimator with the independent Gaussian priors of a standard BNN. We further include in the comparisons a standard BNN approximated with mean-field variational inference. This standard BNN takes into account uncertainty in the weights rather than finding a point estimate, allowing us to evaluate the benefit of doing so compared to the corresponding MAP estimates. From there, we get to the LBBNN method by having an extra inclusion parameter for each weight, allowing for a sparse BNN. For LBBNN, exactly the same parameter priors (slab components) as in the BNN were used, allowing us to evaluate the effects of adding the structural uncertainty. The multiplicative normalizing flow method (BNN-FLOW) is also closely related to a standard BNN, but here, instead of sparsifying the network, we allow the variational posterior distribution to be more flexible than the standard mean-field Gaussian used in BNNs. Further, using the local reparametrization trick (LBBNN-LCRT) is mainly a computational advantage compared to the LBBNN method. Finally, LBBNN-FLOW (proposed in this paper) is related to both BNN-FLOW and LBBNN-LCRT, in the sense that it can learn a sparse BNN and, in addition, have a more flexible posterior distribution than the mean-field Gaussian used in LBBNN-LCRT. In Hubin & Storvik (2019), comprehensive classification experiments show that LBBNNs can sparsify Bayesian neural networks to a large degree while maintaining high predictive power. We demonstrate that increasing the flexibility of the variational posterior with normalizing flows improves both predictive performance and sparsity levels against the mean-field approximations for LBBNN on the datasets considered. Additionally, we perform two simulation studies. In the first one, we consider variable selection in a logistic regression setting, with highly correlated explanatory variables. In the second, we generate data from clusters of two-dimensional Gaussian distributions and compare how the different methods handle predictive uncertainty. All the experiments were coded in Python, using the PyTorch deep learning library (Paszke et al., 2019). In addition to the results reported here, we also perform classification experiments on various tabular datasets, taken from the UCI machine learning repository (Kelly et al., 2023). The results, detailed in Appendix C, demonstrate that our suggested approach also works in these settings. Additionally, for the UCI datasets, as an empirical measure of calibration, we report the expected calibration error (ECE; Guo et al., 2017) on the classification problems. Further, we calculate the pinball loss (Gneiting, 2011) on the regression problems, averaged across levels from 0.05 to 0.95 (see Tables 6 and 10 in Appendix C). On the regression datasets, we include two additional baselines for comparison: Gaussian processes, using an out-of-the-box version of the package from Varvia et al. (2023), and BNNs fitted with Hamiltonian Monte Carlo (HMC), using the package from Sharaf et al. (2020). While the former performed well and on par with our methods, the latter was underperforming.
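For concreteness, the averaged pinball loss can be computed as in the sketch below. Using a single point prediction across all quantile levels is a simplification we make for illustration (rather than level-specific quantile predictions), and the helper name is ours rather than from any package.

```python
import numpy as np


def mean_pinball_loss(y, y_hat, levels=np.arange(0.05, 0.951, 0.05)):
    """Pinball loss averaged across quantile levels 0.05, 0.10, ..., 0.95."""
    diff = np.asarray(y) - np.asarray(y_hat)
    # Pinball loss at level q: mean of max(q * diff, (q - 1) * diff).
    per_level = [np.mean(np.maximum(q * diff, (q - 1.0) * diff)) for q in levels]
    return float(np.mean(per_level))
```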
Our flow method, both when using the full (variational) model averaging and when using the median probability model, demonstrates robust performance (one may, with caution, say even on par or better) compared to the baselines on these datasets. Finally, for the UCI datasets, we additionally check the internal parsimony of all of the Bayesian baselines through the pWAIC1 and pWAIC2 penalties from Gelman et al. (2014), metrics that have also been used as estimates of the effective number of parameters.

## 5.1 Logistic Regression Simulation Study

In this section, we do a variable selection experiment within a logistic regression setting. As logistic regression is just a special case of a neural network with one neuron (and hence one layer), modifying the algorithms is straightforward. We limit ourselves to the logistic regression context to be able to compare to the original baseline method from Carbonetto & Stephens (2012), who have shown that the mean-field variational approximation starts to fail the variable selection task when the covariates are correlated. As we are only interested in comparing the mean-field variational approach against the variational distribution with normalizing flows, we do not include comparisons with more traditional variable selection methods such as Lasso (Tibshirani, 1996) or Elastic Net (Zou & Hastie, 2005). We use the same data as in Hubin & Storvik (2018), consisting of a mix of 20 binary and continuous variables, with a binary outcome, and we have 2 000 observations. The covariates, x, are generated with a strong and complicated correlation structure between many of the variables (see Figure 4). For more details on exactly how the covariates are generated, see appendix B of Hubin & Storvik (2018). The response variable, y, is generated according to the following data-generating process:

$$\eta\sim{\mathcal{N}}(\beta\mathbf{x},0.5)$$

$$y\sim\mathrm{Bernoulli}\left({\frac{\exp(\eta)}{1+\exp(\eta)}}\right),$$

with the regression parameters defined to be:

$$\beta=(-4,0,1,0,0,0,1,0,0,0,1.2,0,37.1,0,0,50,-0.00005,10,3,0).$$

The goal is to train the different methods to select the non-zero elements of β. We consider the parameter βj to be included if the posterior inclusion probability αj > 0.5, i.e. the median probability model of Barbieri & Berger (2004). We fit the different methods 100 times (to the same data), each time computing the true positive rate (TPR) and the false positive rate (FPR). In this experiment, we compare our approaches LBBNN-LCRT and LBBNN-FLOW against the algorithm proposed by Carbonetto & Stephens (2012), denoted as CS henceforth. That method is very similar to LBBNN-LCRT, as it uses the same variational distribution, but in CS, optimization is done with coordinate ascent variational inference and without subsampling from the data. For the normalizing flows, we use flows of length two, with the neural networks having two hidden layers of 100 neurons each. We use a batch size of 400 and train for 500 epochs. We use standard normal priors for the weights and a prior inclusion probability of 0.25 on the inclusion indicators for all three approaches. Hence, we are in the setting of a Bayesian logistic regression, with variable selection. The results are in Table 1.

|          | CS    | LBBNN-LCRT | LBBNN-FLOW |
|----------|-------|------------|------------|
| mean TPR | 0.681 | 0.838      | 0.972      |
| mean FPR | 0.125 | 0.084      | 0.074      |

Table 1: Performance metrics on the logistic regression variable selection simulation study.
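A short sketch of the data-generating process and of how TPR/FPR follow from the median probability model is given below. The choice of seed, the interpretation of 0.5 as the standard deviation of η, and the function names are our own assumptions for illustration; `alpha` stands for the 20 estimated posterior inclusion probabilities from one fitted run.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

beta = np.array([-4, 0, 1, 0, 0, 0, 1, 0, 0, 0,
                 1.2, 0, 37.1, 0, 0, 50, -0.00005, 10, 3, 0])
# Every non-zero coefficient, however small, counts as part of the support.
true_support = beta != 0


def simulate_response(X, beta, rng):
    # eta ~ N(beta x, 0.5); 0.5 interpreted here as the standard deviation.
    eta = rng.normal(X @ beta, 0.5)
    # y ~ Bernoulli(exp(eta) / (1 + exp(eta))).
    return rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))


def tpr_fpr(alpha, threshold=0.5):
    # Median probability model: include beta_j iff alpha_j > 0.5.
    selected = np.asarray(alpha) > threshold
    tpr = (selected & true_support).sum() / true_support.sum()
    fpr = (selected & ~true_support).sum() / (~true_support).sum()
    return tpr, fpr
```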
We also show a bar-plot (Figure 5) for each of the 20 weights over the 100 runs. We see that LBBNN-FLOW performs best, with the highest TPR and the lowest FPR. It is especially good at picking out the correct variables where there is a high correlation between many of them (for example β1 − β6). We might attribute this to the more flexible variational posterior distribution, as opposed to the mean-field Gaussian distribution used in the other three methods. Carbonetto & Stephens (2012) also discuss how the mean-field approach can only be expected to be a good approximation when the variables are independent or at most weakly correlated.

![10_image_0.png](10_image_0.png)

Figure 4: Plots showing the correlation between different variables in the logistic regression simulation study.

![11_image_0.png](11_image_0.png)

Figure 5: Bar-plots showing how often the weights are included over 100 runs.

## 5.2 Predictive Uncertainty

A key motivation behind using a Bayesian approach is its ability to handle predictive uncertainty more accurately than non-Bayesian neural networks. In this experiment, we therefore illustrate how our approaches LBBNN-LCRT and LBBNN-FLOW, as well as Monte Carlo dropout (Gal & Ghahramani, 2016) and a regular (dense) BNN, behave in terms of predictive uncertainty. The purpose of this study is thus illustrative rather than comparative; the methods are not competing here. For this experiment, we simulate 5 clusters of data from two-dimensional Gaussian distributions, with the means and covariances reported in Appendix B. The data is then transformed to be in the range between 0 and 1, for ease of visualization. The task is to classify to the correct class corresponding to a specific cluster. We generate three datasets, with 10, 50, and 200 samples from each class, respectively. For all the methods, we fit a network with one hidden layer consisting of 1000 neurons, meaning we are in a setting where the number of trainable parameters is much larger than the number of observations, a typical scenario for applications of Bayesian neural networks. For dropout, we use a dropout probability of 0.5, and we use 0.5 for the prior inclusion probabilities of LBBNN-LCRT and LBBNN-FLOW. We use flows of length two, with the neural networks consisting of two hidden layers of 50 neurons each. For all the methods, we use 10 samples for model averaging. To measure predictive uncertainty, we generate a test set over a grid over $[0, 1]^2$ and compute the entropy of the predictive distribution for each point in the grid. Maximum entropy is attained when the predictive distribution is uniform, i.e. 0.2 for each class. The results are shown in Figure 6, Figure 7, and Figure 8. With little data, we see a stark difference between dropout and the Bayesian networks. Dropout predictions are highly confident everywhere, except at the decision boundaries between the classes. In contrast, the Bayesian networks exhibit high uncertainty in most areas, especially where little data is observed. When we increase the amount of data, the Bayesian networks gradually become more certain about their predictions, and the entropies (as desired) start to converge towards the data-generative ones, whereas for dropout, with its fixed rate, the uncertainties do not decrease. It should be noted that there is no under-fitting happening, as we have close to 100% accuracy during training for all the methods.

![12_image_0.png](12_image_0.png)

![12_image_1.png](12_image_1.png)

![12_image_3.png](12_image_3.png)

Figure 6: Entropy with 10 samples from each cluster

![12_image_2.png](12_image_2.png)

Figure 7: Entropy with 50 samples from each cluster

![13_image_0.png](13_image_0.png)

![13_image_1.png](13_image_1.png)

Figure 8: Entropy with 200 samples from each cluster
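The entropy maps in Figures 6-8 require only a few lines of code. The following sketch assumes a `model` whose forward pass draws fresh weights and structures on each call (as all the stochastic methods above do); the helper name and grid resolution are illustrative.

```python
import torch


@torch.no_grad()
def predictive_entropy_grid(model, n_samples=10, resolution=100):
    """Entropy of the model-averaged predictive distribution on [0, 1]^2."""
    xs = torch.linspace(0.0, 1.0, resolution)
    grid = torch.cartesian_prod(xs, xs)  # (resolution**2, 2) test points
    # Model averaging: mean of the class probabilities over stochastic
    # forward passes, each pass drawing new weights/structures.
    probs = torch.stack(
        [torch.softmax(model(grid), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    # Entropy of the averaged predictive distribution at each grid point.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```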
As a final observation, we see that the dense BNN typically has slightly less uncertainty than LBBNN with LCRT and FLOW. However, we cannot say much about how good or bad this is, since it is difficult to obtain the true uncertainties for our model; running a reversible jump MCMC (Green & Hastie, 2009) in the settings of an LBBNN of a reasonable size is currently just infeasible computationally. Additionally, we perform an experiment where we generate 10 000 test samples (2 000 from each cluster), after training with 50 samples (10 from each cluster). After training, we compute the entropy of the predictive distribution on the test data and sort the data from lowest to highest entropy. We also sort the samples based on the maximum class probability and compute the cumulative accuracy (with 100 data samples at a time). By that, we mean that we start with the accuracy for the 100 most confident predictions, followed by the 100 next most confident predictions, and so on until we reach the 100 least confident predictions. The results are in Figure 9. With dropout, the maximum class probability is typically very high (i.e. we are extremely certain about which class the sample belongs to). After the first 5000 (sorted) samples, the output probability for the most likely class is at around 95%. With LCRT and FLOW, on the other hand, it has dropped to roughly 50%. This mirrors what we saw earlier: dropout has high certainty most of the time. Despite this, we see that in this experiment the Bayesian methods have higher predictive accuracy than dropout for the cases with the most uncertainty.

![14_image_0.png](14_image_0.png)

Figure 9: Top left, cumulative accuracy (100 samples at a time), where each point is the accuracy for the corresponding data points. Top right, entropy sorted from low to high. Bottom, maximum class probability sorted from high to low.

As a final illustration, we consider an experiment where we take the maximum model-averaged pre-activation output (pre-softmax) of the last layer (i.e. just before applying the softmax function) as a measure instead of using entropy. We use the training data (m = 1000) to generate an empirical confidence interval for the model-averaged pre-activation outputs for all the classes. We use a one-sided 95% confidence interval on the upper bound. During testing, we generate a sample over a grid, now between -1 and 2 in both dimensions, and take the highest model-averaged pre-activation output. We then check whether it falls within the empirical confidence interval or not. The results are shown in Figure 10. We see that even in the regions with extremely low entropy, we can detect out-of-distribution data. This shows that using maximal entropy for out-of-distribution data, as suggested in Louizos & Welling (2017), might not be optimal. However, we still see the potential of BNNs to differentiate between in-domain and out-of-domain uncertainty using the pre-activation values of the output of BNNs. We do not go any further here and leave this topic for future research. We rather continue with some real data examples.
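A sketch of this interval check is given below. It assumes that `model(x)` returns the pre-softmax outputs and that averaging 10 stochastic forward passes gives the model-averaged pre-activations; the function name, the quantile-based construction of the one-sided bound, and the flagging direction are our illustrative reading of the procedure, not a definitive implementation.

```python
import torch


@torch.no_grad()
def flag_ood(model, train_x, test_x, n_samples=10, level=0.95):
    """Flag test points whose maximal model-averaged pre-softmax output
    falls outside a one-sided empirical bound built from training data."""

    def avg_preact(x):
        # Model-averaged pre-activation outputs over stochastic passes.
        return torch.stack([model(x) for _ in range(n_samples)]).mean(dim=0)

    # One-sided 95% empirical upper bound from the training data.
    upper = torch.quantile(avg_preact(train_x).max(dim=-1).values, level)
    # A test point is flagged when its maximal output exceeds the bound.
    return avg_preact(test_x).max(dim=-1).values > upper
```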
![15_image_0.png](15_image_0.png)

Figure 10: Out-of-distribution detection, where dark blue corresponds to the OOD data detected by the BNN, and white is the in-distribution data.

## 5.3 Classification Experiments

We perform two classification experiments, one with the same fully connected architecture as in Hubin & Storvik (2019), and the other with a convolutional architecture (see Appendix A for details on how this is implemented, while the specifications of the architecture will be provided later in the text). In both cases, we classify on MNIST (Deng, 2012), FMNIST (Fashion MNIST) (Xiao et al., 2017) and KMNIST (Kuzushiji MNIST) (Clanuwat et al., 2018). MNIST is a database of handwritten digits ranging from 0 to 9. FMNIST consists of ten different fashion items from the Zalando (Europe's largest online fashion retailer) database. Lastly, KMNIST also consists of ten classes, with each one representing one row of Hiragana, a Japanese syllabary. All of these datasets contain 28x28 grayscale images, divided into a training and validation set with 60 000 and 10 000 images respectively. MNIST and FMNIST are well-known and often utilized datasets, so it is easy to compare performance when testing novel algorithms. KMNIST is a somewhat recent addition and is considered a more challenging task than the classical MNIST digits dataset, because each Hiragana can have many different symbols. For the experiments with the fully connected architecture, we have two hidden layers with 400 and 600 neurons respectively, and ReLU (Agarap, 2018) activation functions. For fitting the models, we used the Adam (Kingma & Ba, 2014) optimizer. We use a batch size of 100 and train for 250 epochs. All the experiments are run 10 times, and we report the minimum, median, and maximum predictive accuracy over these 10 runs. In addition to the performance measure (accuracy), we also report the density of the network, defined as the ratio of non-zero weights. The reported density (1 - sparsity) is an average over these 10 runs. For the UCI datasets, we also compute pWAIC1 and pWAIC2 from Gelman et al. (2014), as these can be argued to measure the effective number of parameters of the models, although, according to Gelman et al. (2014), the latter is only valid for normal linear models with large sample size, known variance, and uniform prior distribution. The results are in Tables 7, 8, and 11 in Appendix C. For BNN and BNN-FLOW, we use standard normal priors. For ANN + L2, we use a weight decay of 0.5, inducing a penalized likelihood, which corresponds to MAP (maximum a posteriori probability) solutions of BNN and BNN-FLOW under standard normal priors. For the LBBNN-LCRT and LBBNN-FLOW methods, we use the standard normal prior for the slab components of all the weights and biases in the network, and a prior inclusion probability of 0.10. For both q(z) and r(z|W,Γ), we use flows of length two, where the neural networks consist of two hidden layers with 250 neurons each. For our second classification experiment, we use the LeNet-5 (LeCun et al., 1998) convolutional architecture, but with 32 and 48 filters for the convolutional layers. We use the same priors and normalizing flows as in the previous experiment, and the same datasets. We emphasize that it is possible to use deeper and more complicated architectures (for example ResNet-18 (He et al., 2016)), which may improve the results reported in this paper.
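For reference, a deterministic stand-in for this modified LeNet-5 is sketched below; in the actual models, every layer additionally carries variational weight, inclusion, and (optionally) flow parameters. The paddings and the classical 120/84 fully connected head are assumptions on our part, as only the filter counts (32 and 48) are stated above.

```python
import torch.nn as nn

# LeNet-5-style network with 32 and 48 filters for 28x28 grayscale inputs.
lenet5_like = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 48, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),                       # 48 feature maps of size 5x5
    nn.Linear(48 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 10),                  # ten classes for all three datasets
)
```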
As the goal here is not to try to approach (or *hack*, through tuning and engineering) state-of-the-art results, we do not experiment any further with this. To measure predictive performance, we consider two approaches. First, the fully variational Bayesian model averaging approach, where we average over 100 samples from the variational posterior distribution, taking into account uncertainty in both weights and structures, following Hubin & Storvik (2019). Secondly, we consider the median probability model (Barbieri & Berger, 2004), where we only do model averaging over the weights that have a posterior inclusion probability greater than 0.5, whilst the others are excluded from the model. This allows for significant sparsification of the network. We emphasize that this is possible because we can go back to sampling the weights when doing inference, i.e. we sample only from the weights that have a corresponding inclusion probability greater than 0.5. We also report the density, i.e. the proportion of weights included in the median probability model.

Table 2: Performance metrics (accuracy and density) on the KMNIST, MNIST, FMNIST validation data, for the fully connected architecture. For the accuracies (%), we report the minimum, maximum, and median over the ten different runs. Density is computed as an average over the ten runs. The best median results are bold.

| KMNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 89.22 | 89.59 | 89.98 | 0.113 | 89.43 | 89.76 | 90.21 | 1.000 |
| LBBNN-LCRT | 90.04 | 90.26 | 90.43 | 0.136 | 90.23 | 90.39 | 90.60 | 1.000 |
| LBBNN-FLOW | 90.64 | **91.12** | 91.46 | 0.096 | 91.16 | 91.30 | 91.61 | 1.000 |
| BNN-FLOW | - | - | - | - | 92.02 | 92.28 | 92.61 | 1.000 |
| BNN | - | - | - | - | 92.21 | **92.53** | 92.64 | 1.000 |
| ANN | - | - | - | - | 90.44 | 91.02 | 91.28 | 1.000 |
| ANN + L2 | - | - | - | - | 87.24 | 87.76 | 88.15 | 1.000 |

| MNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 98.01 | 98.10 | 98.20 | 0.098 | 98.03 | 98.14 | 98.23 | 1.000 |
| LBBNN-LCRT | 97.84 | 97.95 | 98.09 | 0.103 | 98.01 | 98.08 | 98.11 | 1.000 |
| LBBNN-FLOW | 98.14 | **98.36** | 98.42 | 0.074 | 98.23 | 98.42 | 98.53 | 1.000 |
| BNN-FLOW | - | - | - | - | 98.43 | **98.58** | 98.63 | 1.000 |
| BNN | - | - | - | - | 98.36 | 98.48 | 98.63 | 1.000 |
| ANN | - | - | - | - | 97.95 | 98.13 | 98.20 | 1.000 |
| ANN + L2 | - | - | - | - | 96.97 | 97.05 | 97.16 | 1.000 |

| FMNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 88.47 | 88.76 | 88.90 | 0.106 | 88.60 | 88.74 | 88.91 | 1.000 |
| LBBNN-LCRT | 87.51 | 87.82 | 87.94 | 0.141 | 87.88 | 87.94 | 88.14 | 1.000 |
| LBBNN-FLOW | 89.49 | **89.70** | 89.88 | 0.097 | 89.52 | 89.80 | 89.92 | 1.000 |
| BNN-FLOW | - | - | - | - | 89.19 | 89.42 | 89.53 | 1.000 |
| BNN | - | - | - | - | 90.07 | **90.20** | 90.43 | 1.000 |
| ANN | - | - | - | - | 88.75 | 89.51 | 89.88 | 1.000 |
| ANN + L2 | - | - | - | - | 86.85 | 87.37 | 87.54 | 1.000 |
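In code, the median probability model amounts to thresholding the inclusion probabilities, as in the minimal sketch below; `alpha` stands for a tensor of posterior inclusion probabilities for one layer, and the function name is illustrative.

```python
import torch


@torch.no_grad()
def median_probability_mask(alpha):
    """Median probability model: keep weight (i, j) iff alpha_ij > 0.5.

    Returns the binary mask used to restrict sampling at inference time,
    together with the resulting density (the fraction of weights kept).
    """
    mask = alpha > 0.5
    return mask, mask.float().mean().item()
```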
For the full variational model averaging approach, we consider the density to be equal to one, since we do not explicitly exclude any weights when computing the predictions (even though a large proportion of the weights may have a small inclusion probability, so that in practice, within any 10 samples over which we are marginalizing, less than 100% of the weights will be used; ideally, one wants to average over more than 10 samples). The results with the fully connected architecture can be found in Table 2, and for the convolutional architecture in Table 3.

Table 3: Performance metrics on the KMNIST, MNIST, FMNIST validation data, with the convolutional architecture. See the caption in Table 2 for more details.

| KMNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 95.13 | 95.52 | 95.89 | 0.359 | 95.21 | 95.48 | 95.78 | 1.000 |
| LBBNN-LCRT | 94.73 | 94.94 | 95.16 | 0.429 | 95.07 | 95.42 | 95.65 | 1.000 |
| LBBNN-FLOW | 95.73 | **95.99** | 96.43 | 0.351 | 96.00 | 96.18 | 96.44 | 1.000 |
| BNN-FLOW | - | - | - | - | 96.14 | **96.42** | 96.64 | 1.000 |
| BNN | - | - | - | - | 95.19 | 95.34 | 95.58 | 1.000 |
| ANN | - | - | - | - | 94.18 | 94.95 | 95.27 | 1.000 |
| ANN + L2 | - | - | - | - | 92.00 | 92.51 | 92.77 | 1.000 |

| MNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 99.22 | 99.26 | 99.35 | 0.353 | 99.21 | 99.28 | 99.33 | 1.000 |
| LBBNN-LCRT | 99.11 | 99.26 | 99.31 | 0.406 | 99.20 | 99.28 | 99.34 | 1.000 |
| LBBNN-FLOW | 99.15 | **99.27** | 99.41 | 0.338 | 99.16 | 99.29 | 99.42 | 1.000 |
| BNN-FLOW | - | - | - | - | 99.26 | **99.32** | 99.41 | 1.000 |
| BNN | - | - | - | - | 99.21 | 99.30 | 99.36 | 1.000 |
| ANN | - | - | - | - | 99.01 | 99.15 | 99.23 | 1.000 |
| ANN + L2 | - | - | - | - | 97.93 | 98.30 | 98.40 | 1.000 |

| FMNIST | Median probability model | | | | Full model averaging | | | |
|------------|-------|-----------|-------|---------|-------|-----------|-------|---------|
| Method | min | median | max | density | min | median | max | density |
| LBBNN | 91.14 | 91.31 | 91.48 | 0.352 | 91.10 | 91.26 | 91.44 | 1.000 |
| LBBNN-LCRT | 90.04 | 90.40 | 90.85 | 0.433 | 90.52 | 90.73 | 91.06 | 1.000 |
| LBBNN-FLOW | 90.52 | **91.54** | 91.75 | 0.367 | 91.38 | 91.71 | 92.04 | 1.000 |
| BNN-FLOW | - | - | - | - | 91.60 | **91.87** | 92.10 | 1.000 |
| BNN | - | - | - | - | 91.04 | 91.60 | 91.99 | 1.000 |
| ANN | - | - | - | - | 90.40 | 91.21 | 91.63 | 1.000 |
| ANN + L2 | - | - | - | - | 87.79 | 88.05 | 88.48 | 1.000 |

Firstly, we see that using the LBBNN-LCRT gives results that are comparable to the baseline LBBNN method, except for FMNIST, where it performs a bit worse both with the fully connected and with the convolutional architecture. It is no surprise that these results are similar, as using the LCRT is mainly a computational advantage. Secondly, we note that the LBBNN-FLOW method performs better than the other two methods, on both convolutional and fully connected architectures, while having the sparsest networks. We also see that LBBNN-FLOW performs well compared to the BNN and BNN-FLOW architectures, especially on the fully connected architecture, where it achieves comparable accuracy even with very sparse networks. The higher density in general on the convolutional architectures is mainly a result of them already being sparse to begin with.
However, these networks could also be sparsified further by using more conservative priors on the inclusion of the weights. The increased predictive power of using normalizing flows comes at a computational cost. With the fully connected architecture, we observed that it took around 4 seconds to train one epoch with LBBNN-LCRT, 13 seconds with LBBNN, and 17 seconds with LBBNN-FLOW on an NVIDIA A10 GPU. On the convolutional architecture, it took 7 seconds per epoch with LBBNN-LCRT, 18 seconds with LBBNN, and 28 seconds with normalizing flows. We note that the frequentist networks perform slightly worse on these datasets with our chosen architectures. The results could likely be improved by adding more regularization, such as dropout or batch-normalization, but we do not do this here, as we are not interested in trying to obtain state-of-the-art results. Naturally, the frequentist networks are much more computationally efficient, as they only have half the parameters of a standard BNN.

## 6 Discussion

We have demonstrated that increasing the flexibility of the variational posterior distribution with normalizing flows improves the predictive power compared to the baseline method (with mean-field posterior) while obtaining sparser networks, despite having a looser variational bound than the mean-field approach. The flow method also performed best on a variable selection problem, demonstrating better structure learning performance, while the mean-field approaches struggle with highly correlated variables. More generally, we argue that Bayesian neural networks (BNNs) are much better at obtaining realistic predictive uncertainty estimates than their frequentist counterparts, as they have higher uncertainty when data is sparse. We do not observe a big difference in the uncertainty estimates obtained with a dense BNN compared to our approaches. Also, calibration of uncertainties in predictive applications is similar, with a slight advantage for the approach proposed in this paper. Unlike dense BNNs, our methods have the additional advantage of being able to perform variable selection. The downside is that LBBNNs have an extra parameter per weight, making them less computationally efficient than dense BNNs. Using normalizing flows is a further computational burden, as we must also optimize over all the extra flow parameters. If uncertainty handling is not needed, one could obtain the minimal number of predictive parameters from the model trained with flows by relying on the posterior means of the median probability model's parameters. This approach is studied for simpler approximations in more detail in Hubin & Storvik (2024), but it is omitted in this paper. In this paper, we use the same prior for all the weights and inclusion indicators, although this is not necessary. A possible avenue of further research could be to vary the prior inclusion probabilities, to induce different sparsity structures, or to incorporate actual prior knowledge about the prior inclusion probabilities of the covariates. Currently, we are taking into account uncertainty in weights and structures, given some neural network architecture. In the future, it may be of interest to see if it is also possible to incorporate uncertainty in the activation functions. By having skip connections to the output, we could learn, with uncertainties, to skip all non-linear layers if a linear function is enough, or if a constant estimate of the parameters of the responses is enough (null model), or whether one needs some nonlinear layers.
This could lead to more transparent Bayesian deep learning models. However, success in that task relies on sufficiently good structure learning, where mean-field approximations are known to not work well (Carbonetto & Stephens, 2012). A possible application is to do a genome-wide association study (GWAS) using our method. Combining LBBNNs and GWAS has been proposed by Demetci et al. (2021); however, this only uses the mean-field posterior. With our normalizing flow approach, we can easily model dependencies within each SNP set, in addition to dependencies between the different SNP sets. Another set of promising applications is recovering structural equations in nonlinear dynamical systems, as a robust and uncertainty-aware alternative to the L1 penalty used in the SINDy approach (Brunton et al., 2016).

## References

Abien Fred Agarap. Deep learning using rectified linear units (ReLU). *arXiv preprint arXiv:1803.08375*, 2018.

Jincheng Bai, Qifan Song, and Guang Cheng. Efficient variational inference for sparse deep learning with theoretical guarantee. *Advances in Neural Information Processing Systems*, 33:466–476, 2020.

Maria Maddalena Barbieri and James O Berger. Optimal predictive model selection. *The Annals of Statistics*, 32(3):870–897, 2004.

Marilena Barbieri, James O Berger, Edward I George, and Veronika Rockova. The median probability model and correlated variables. *arXiv preprint arXiv:1807.08336*, 2018.

Patrick Billingsley. *Probability and Measure*. John Wiley & Sons, 2017.

Christopher M Bishop. Bayesian neural networks. *Journal of the Brazilian Computer Society*, 4(1), 1997.

David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *Journal of the American Statistical Association*, 112(518):859–877, 2017.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In *International Conference on Machine Learning*, pp. 1613–1622. PMLR, 2015.

Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. *Proceedings of the National Academy of Sciences*, 113(15):3932–3937, 2016.

Peter Carbonetto and Matthew Stephens. Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies. *Bayesian Analysis*, 7(1):73–108, 2012.

İlkay Çınar, Murat Koklu, and Şakir Taşdemir. Classification of raisin grains using machine vision and artificial intelligence methods. *Gazi Mühendislik Bilimleri Dergisi*, 6(3):200–209, 2020.

Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. *arXiv preprint arXiv:1812.01718*, 2018.

Paulo Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. Reis. Wine Quality. UCI Machine Learning Repository, 2009. DOI: https://doi.org/10.24432/C56S3T.

Pinar Demetci, Wei Cheng, Gregory Darnell, Xiang Zhou, Sohini Ramachandran, and Lorin Crawford. Multiscale inference of genetic trait architecture using biologically annotated neural networks. *PLoS Genetics*, 17(8):e1009754, 2021.

Li Deng. The MNIST database of handwritten digit images for machine learning research [best of the web]. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.

Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In *International Conference on Machine Learning*, pp. 2943–2952. PMLR, 2020.
David Fletcher. Bayesian model averaging. *Model Averaging*, pp. 31–55, 2018.

Vincent Fortuin. Priors in Bayesian deep learning: A review. *International Statistical Review*, 90(3):563–591, 2022.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *arXiv preprint arXiv:1803.03635*, 2018.

Yarin Gal. *Uncertainty in Deep Learning*. PhD thesis, University of Cambridge, 2016.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In *International Conference on Machine Learning*, pp. 1050–1059. PMLR, 2016.

Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. *arXiv preprint arXiv:1902.09574*, 2019.

Andrew Gelman, Jessica Hwang, and Aki Vehtari. Understanding predictive information criteria for Bayesian models. *Statistics and Computing*, 24:997–1016, 2014.

Tilmann Gneiting. Quantiles as optimal point forecasts. *International Journal of Forecasting*, 27(2):197–207, 2011.

Peter J Green and David I Hastie. Reversible jump MCMC. *Genetics*, 155(3):1391–1403, 2009.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning*, pp. 1321–1330. PMLR, 2017.

Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, and William J. Dally. DSD: Dense-sparse-dense training for deep neural networks, 2017.

David Harrison Jr and Daniel L Rubinfeld. Hedonic housing prices and the demand for clean air. *Journal of Environmental Economics and Management*, 5(1):81–102, 1978.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Jennifer A Hoeting, David Madigan, Adrian E Raftery, and Chris T Volinsky. Bayesian model averaging: a tutorial (with comments by M. Clyde, David Draper and E. I. George, and a rejoinder by the authors). *Statistical Science*, 14(4):382–417, 1999.

Jiri Hron, Alexander G de G Matthews, and Zoubin Ghahramani. Variational Gaussian dropout is not Bayesian. *arXiv preprint arXiv:1711.02989*, 2017.

Aliaksandr Hubin and Geir Storvik. Mode jumping MCMC for Bayesian variable selection in GLMM. *Computational Statistics & Data Analysis*, 127:281–297, Nov 2018. ISSN 0167-9473. doi: 10.1016/j.csda.2018.05.020. URL http://dx.doi.org/10.1016/j.csda.2018.05.020.

Aliaksandr Hubin and Geir Storvik. Combining model and parameter uncertainty in Bayesian neural networks. *arXiv:1903.07594*, 2019.

Aliaksandr Hubin and Geir Storvik. Sparse Bayesian neural networks: Bridging model and parameter uncertainty through scalable variational inference. *Mathematics*, 12(6):788, 2024.

Markelle Kelly, Rachel Longjohn, and Kolby Nottingham. The UCI machine learning repository. https://archive.ics.uci.edu, 2023.

Asifullah Khan, Anabia Sohail, Umme Zahoora, and Aqsa Saeed Qureshi. A survey of the recent architectures of deep convolutional neural networks. *Artificial Intelligence Review*, 53(8):5455–5516, 2020.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes, 2013.

Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick.
*Advances in Neural Information Processing Systems*, 28, 2015.

Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. *Advances in Neural Information Processing Systems*, 29, 2016.

Ron Kohavi. Census Income. UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5GP7S.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.

Christos Louizos and Max Welling. Multiplicative normalizing flows for variational Bayesian neural networks. In *International Conference on Machine Learning*, pp. 2218–2227. PMLR, 2017.

Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. *Advances in Neural Information Processing Systems*, 30:3288–3298, 2017.

David JC MacKay. Bayesian neural networks and density networks. *Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment*, 354(1):73–80, 1995.

Warren S McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. *The Bulletin of Mathematical Biophysics*, 5(4):115–133, 1943.

Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In *International Conference on Machine Learning*, pp. 2498–2507. PMLR, 2017.

S. Moro, P. Rita, and P. Cortez. Bank Marketing. UCI Machine Learning Repository, 2012. DOI: https://doi.org/10.24432/C5K306.

Warwick Nash, Tracy Sellers, Simon Talbot, Andrew Cawthorn, and Wes Ford. Abalone. UCI Machine Learning Repository, 1995. DOI: https://doi.org/10.24432/C55C7W.

Radford M Neal. Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Technical report, Citeseer, 1992.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 427–436, 2015.

IA Ozkan, M Koklu, and Rıdvan Saraçoğlu. Classification of pistachio species using improved k-NN classifier. *Health*, 23:e2021044, 2021.

Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Aliaksandr Hubin, et al. Position paper: Bayesian deep learning in the age of large-scale AI. *arXiv preprint arXiv:2402.00809*, 2024.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, and Dimitris Papailiopoulos. Optimal lottery tickets via subset sum: Logarithmic over-parameterization is sufficient. *Advances in Neural Information Processing Systems*, 33:2599–2610, 2020.

Nicholas G Polson and Veronika Ročková. Posterior concentration for sparse deep learning. *Advances in Neural Information Processing Systems*, 31, 2018.
Lutz Prechelt. Early stopping - but when? In *Neural Networks: Tricks of the Trade*, pp. 55–69. Springer, 1998.

J. R. Quinlan. Credit Approval. UCI Machine Learning Repository, 2007. DOI: https://doi.org/10.24432/C5FS30.

Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In *International Conference on Machine Learning*, pp. 324–333. PMLR, 2016.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International Conference on Machine Learning*, pp. 1530–1538. PMLR, 2015.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. *Nature*, 323(6088):533–536, 1986.

Dylan Sam, Rattana Pukdee, Daniel P Jeong, Yewon Byun, and J Zico Kolter. Bayesian neural networks with domain knowledge priors. *arXiv preprint arXiv:2402.13410*, 2024.

Taysseer Sharaf, Theren Williams, Abdallah Chehade, and Keshav Pokhrel. BLNN: An R package for training neural networks using Bayesian inference. *SoftwareX*, 11:100432, 2020.

Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. *Journal of Big Data*, 6(1):1–48, 2019.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The Journal of Machine Learning Research*, 15(1):1929–1958, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013.

Robert Tibshirani. Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 58(1):267–288, 1996.

Ba-Hien Tran, Simone Rossi, Dimitrios Milios, and Maurizio Filippone. All you need is a good functional prior for Bayesian deep learning. *Journal of Machine Learning Research*, 23(74):1–56, 2022.

UCI. Dry Bean Dataset. UCI Machine Learning Repository, 2020. DOI: https://doi.org/10.24432/C50S4B.

Petri Varvia, Janne Räty, and Petteri Packalen. mgpr: An R package for multivariate Gaussian process regression. *SoftwareX*, 24:101563, 2023.

Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. Deep learning for computer vision: A brief review. *Computational Intelligence and Neuroscience*, 2018, 2018.

Florian Wenzel, Kevin Roth, Bastiaan Veeling, Jakub Swiatkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How good is the Bayes posterior in deep neural networks really? In *International Conference on Machine Learning*, pp. 10248–10259. PMLR, 2020.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.

Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. Recent trends in deep learning based natural language processing. *IEEE Computational Intelligence Magazine*, 13(3):55–75, 2018.

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 67(2):301–320, 2005.

## Supplementary Material

The code used for the experiments can be found in the accompanying zip folder.
## A Convolutional Architectures

For convolutional layers, the variational distribution is defined to be:

$$q_{\mathbf{\theta}}(\mathbf{W}|\mathbf{\Gamma},\mathbf{z})=\prod_{i=1}^{n_{h}}\prod_{j=1}^{n_{w}}\prod_{k=1}^{n_{f}}[\gamma_{ijk}\mathcal{N}(w_{ijk};z_{k}\hat{\mu}_{ijk},\hat{\sigma}_{ijk}^{2})+(1-\gamma_{ijk})\delta(w_{ijk})]\tag{12}$$
$$q_{\hat{\alpha}_{ijk}}(\gamma_{ijk})=\text{Bernoulli}(\gamma_{ijk};\hat{\alpha}_{ijk}),$$

where $n_h$, $n_w$, and $n_f$ denote the height, width, and number of filters in the convolutional kernel. For the convolutional layers, we use the following for the inverse normalizing flows:

$$\begin{array}{l}\boldsymbol{\nu}=\left(\left(\operatorname{Mat}(\mathbf{W}\odot\boldsymbol{\Gamma})\,\mathbf{e}\right)\otimes\mathbf{d}_{1}\right)\frac{1}{n_{h}n_{w}}\\ \log\boldsymbol{\tau}^{2}=\left(\left(\operatorname{Mat}(\mathbf{W}\odot\boldsymbol{\Gamma})\,\mathbf{e}\right)\otimes\mathbf{d}_{2}\right)\frac{1}{n_{h}n_{w}}.\end{array}\tag{13}$$

Here, Mat(·) denotes the matricisation operator (as defined in Louizos & Welling (2017)), i.e. changing the shape of a multidimensional tensor into a matrix.

## B Data For Predictive Uncertainty Experiments

For the predictive uncertainty experiment, we generate data from the following Gaussian distributions:

$G_{1}\sim\mathcal{N}\left(\begin{pmatrix}-8\\ -8\end{pmatrix},\begin{pmatrix}6&-1\\ -1&3.5\end{pmatrix}\right),$

$G_{2}\sim\mathcal{N}\left(\begin{pmatrix}6\\ 6\end{pmatrix},\begin{pmatrix}0&3\\ 3&0\end{pmatrix}\right),$

$G_{3}\sim\mathcal{N}\left(\begin{pmatrix}-7\\ 8\end{pmatrix},\begin{pmatrix}-3&4\\ -5&1\end{pmatrix}\right),$

$G_{4}\sim\mathcal{N}\left(\begin{pmatrix}8\\ -8\end{pmatrix},\begin{pmatrix}0&5\\ 4&2\end{pmatrix}\right),$

$G_{5}\sim\mathcal{N}\left(\begin{pmatrix}0\\ 0\end{pmatrix},\begin{pmatrix}0&9\\ 9&0\end{pmatrix}\right).$

## C Experiments On Tabular Datasets

In this experiment, we consider six classification datasets and three regression datasets. We compare our approach LBBNN-FLOW against LBBNN-LCRT, LBBNN, a dense BNN, BNN-FLOW, ANN+L2, ANN, and Monte Carlo (MC) dropout (with dropout rates corresponding to our prior inclusion probabilities), in addition to Gaussian processes and Hamiltonian Monte Carlo for the regression datasets. For the neural networks (except those trained with HMC), we use a single hidden layer with 500 neurons and again train for 250 epochs with the Adam optimizer. For HMC, we use the default tuning parameters from Sharaf et al. (2020) and run single chains for 100 000 iterations. We also use 10-fold cross-validation, and report the minimum, mean and maximum accuracy over these 10 repetitions, in addition to the mean sparsity. We also report the expected calibration error (ECE), in addition to pWAIC1 and pWAIC2, for the classification datasets, while for the regression datasets we report the RMSE, the pinball loss, pWAIC1 and pWAIC2. For the six classification datasets, most are taken from the UCI machine learning repository. The Credit Approval dataset (Quinlan, 2007) consists of 690 samples with 15 variables, with the response variable being whether someone gets approved for a credit card or not. The Bank Marketing dataset (Moro et al., 2012) consists of data (45211 samples and 17 variables) related to a marketing campaign of a Portuguese banking institution, where the goal is to classify whether the persons subscribed to the service or not.
In addition to this, we use the Census Income dataset (Kohavi, 1996) with 48842 samples and 14 variables, where we try to classify whether someone's income exceeds 50000 dollars per year. Additionally, we have three datasets related to classifying food items. The first, the Raisins dataset (Çınar et al., 2020), consists of 900 samples and 7 variables, where the goal is to classify into two different types of raisins grown in Turkey. Secondly, we use the Dry Beans dataset (UCI, 2020), consisting of 13611 samples, 17 variables, and 7 different types of beans. Lastly, the Pistachio dataset (Ozkan et al., 2021) consists of 2148 samples and 28 variables, with two different types of pistachios. For the regression datasets, we use the Abalone shell data (Nash et al., 1995), the Wine Quality dataset (Cortez et al., 2009), and the Boston Housing data (Harrison Jr & Rubinfeld, 1978). To avoid model misspecification, responses were standardized for the regression datasets. The variance of the responses was then assumed fixed and equal to 1 in all of the models, while the mean parameter was modeled. For the classification datasets, we see that our LBBNN-FLOW method typically performs well compared to the LBBNN baseline, mostly having higher predictive power, also with the median probability model. It also performs well compared to our other baseline methods. On calibration, we see that our method again performs well compared to the baselines. Finally, for pWAIC1 and pWAIC2, we see that using the median probability model typically reduces these metrics. We note that our method often has lower values than BNN-FLOW and BNN, even though these methods have fewer parameters. As mentioned before, however, it is unclear how good of an estimate this is for the effective number of parameters in highly non-linear methods such as neural networks. For the regression examples, our method performs slightly worse than some baselines in terms of RMSE and pinball loss; however, LBBNN-LCRT performs best (on average) on one dataset. Here, it is worth noting that the convergence checks for HMC were not satisfactory, not allowing us to claim that the algorithms have converged. Tuning the parameters of the sampler and increasing the number of chains were not helpful in this case.

Table 4: Performance results on the Credit Approval, Bank Marketing, and Census Income datasets, using 10-fold cross-validation. The minimum, mean and maximum accuracies are reported, in addition to the density. The best results are bold.
| Credit Approval | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 81.16 | 85.22 | 91.30 | 0.431 | 81.16 | 85.80 | 91.30 | 1.000 |
| LBBNN-LCRT | 81.16 | 86.23 | 92.75 | 0.347 | 81.16 | 86.23 | 91.30 | 1.000 |
| LBBNN-FLOW | 84.10 | 88.55 | 94.20 | 0.348 | 82.61 | 87.68 | 91.30 | 1.000 |
| BNN-FLOW | - | - | - | - | 82.61 | 86.23 | 89.86 | 1.000 |
| BNN | - | - | - | - | 78.26 | 83.33 | 88.41 | 1.000 |
| ANN | - | - | - | - | 78.26 | 83.19 | 91.30 | 1.000 |
| ANN + L2 | - | - | - | - | 73.91 | 83.19 | 89.86 | 1.000 |
| ANN + MC dropout | - | - | - | - | 81.16 | 86.52 | 92.75 | 1.000 |

| Bank Marketing | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 89.75 | 90.66 | 91.74 | 0.430 | 89.75 | 90.61 | 91.43 | 1.000 |
| LBBNN-LCRT | 90.75 | 91.27 | 92.16 | 0.347 | 90.60 | 91.27 | 92.16 | 1.000 |
| LBBNN-FLOW | 90.58 | 91.38 | 92.08 | 0.347 | 90.75 | 91.36 | 92.03 | 1.000 |
| BNN-FLOW | - | - | - | - | 90.14 | 91.13 | 91.96 | 1.000 |
| BNN | - | - | - | - | 90.63 | 91.16 | 91.74 | 1.000 |
| ANN | - | - | - | - | 90.93 | 90.97 | 91.62 | 1.000 |
| ANN + L2 | - | - | - | - | 90.75 | 91.15 | 91.77 | 1.000 |
| ANN + MC dropout | - | - | - | - | 90.70 | 91.17 | 92.06 | 1.000 |

| Census Income | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 85.24 | 85.74 | 86.49 | 0.431 | 85.52 | 85.85 | 86.69 | 1.000 |
| LBBNN-LCRT | 85.36 | 85.90 | 86.77 | 0.349 | 85.63 | 85.92 | 86.57 | 1.000 |
| LBBNN-FLOW | 85.60 | 86.04 | 86.43 | 0.349 | 85.57 | 86.05 | 86.49 | 1.000 |
| BNN-FLOW | - | - | - | - | 85.05 | 85.32 | 85.79 | 1.000 |
| BNN | - | - | - | - | 84.77 | 85.27 | 86.45 | 1.000 |
| ANN | - | - | - | - | 84.56 | 85.09 | 85.54 | 1.000 |
| ANN + L2 | - | - | - | - | 84.73 | 85.39 | 85.91 | 1.000 |
| ANN + MC dropout | - | - | - | - | 85.32 | 85.89 | 86.49 | 1.000 |

Table 5: Performance results on the Dry Beans, Pistachio, and Raisin datasets, using 10-fold cross-validation. The minimum, mean and maximum accuracies are reported, in addition to the density. The best results are bold.
| Dry Beans | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 90.88 | 92.65 | 93.90 | 0.442 | 91.25 | 92.80 | 93.82 | 1.000 |
| LBBNN-LCRT | 91.62 | 93.18 | 94.41 | 0.349 | 91.69 | 93.34 | 94.34 | 1.000 |
| LBBNN-FLOW | 89.26 | 92.38 | 93.90 | 0.279 | 89.85 | 92.57 | 94.19 | 1.000 |
| BNN-FLOW | - | - | - | - | 91.32 | 93.05 | 94.19 | 1.000 |
| BNN | - | - | - | - | 91.47 | 93.35 | 94.63 | 1.000 |
| ANN | - | - | - | - | 91.47 | 93.37 | 94.71 | 1.000 |
| ANN + L2 | - | - | - | - | 91.54 | 93.38 | 94.71 | 1.000 |
| ANN + MC dropout | - | - | - | - | 91.40 | 92.96 | 94.04 | 1.000 |

| Pistachio | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 91.12 | 93.46 | 96.26 | 0.433 | 91.12 | 93.36 | 95.33 | 1.000 |
| LBBNN-LCRT | 91.59 | 93.93 | 95.79 | 0.350 | 92.06 | 94.07 | 95.79 | 1.000 |
| LBBNN-FLOW | 91.12 | 93.46 | 95.33 | 0.350 | 91.12 | 93.46 | 95.33 | 1.000 |
| BNN-FLOW | - | - | - | - | 90.65 | 93.60 | 96.26 | 1.000 |
| BNN | - | - | - | - | 92.52 | 94.07 | 96.26 | 1.000 |
| ANN | - | - | - | - | 92.06 | 94.11 | 96.73 | 1.000 |
| ANN + L2 | - | - | - | - | 92.06 | 93.93 | 96.26 | 1.000 |
| ANN + MC dropout | - | - | - | - | 91.12 | 94.16 | 96.26 | 1.000 |

| Raisins | Median probability model | | | | Full model averaging | | | |
|------------------|-------|-------|-------|---------|-------|-------|-------|---------|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 83.33 | 87.00 | 92.22 | 0.439 | 83.33 | 86.78 | 92.22 | 1.000 |
| LBBNN-LCRT | 81.11 | 86.11 | 91.11 | 0.349 | 81.11 | 86.78 | 92.22 | 1.000 |
| LBBNN-FLOW | 83.33 | 86.67 | 92.22 | 0.349 | 82.22 | 86.56 | 92.22 | 1.000 |
| BNN-FLOW | - | - | - | - | 82.22 | 87.22 | 91.11 | 1.000 |
| BNN | - | - | - | - | 81.11 | 87.89 | 92.22 | 1.000 |
| ANN | - | - | - | - | 81.11 | 86.44 | 90.00 | 1.000 |
| ANN + L2 | - | - | - | - | 81.11 | 87.56 | 92.22 | 1.000 |
| ANN + MC dropout | - | - | - | - | 81.11 | 87.00 | 93.33 | 1.000 |

Table 6: Expected calibration error, with minimum, mean, and maximum values obtained using 10-fold cross-validation. MPM denotes the median probability model.
| Method | Credit Approval | Bank Marketing | Census Income | Dry Beans | Pistachio | Raisins |
|------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| LBBNN | (0.056, 0.079, 0.127) | (0.023, 0.031, 0.039) | (0.005, **0.010**, 0.015) | (0.011, 0.016, 0.021) | (0.012, 0.028, 0.047) | (0.043, 0.073, 0.097) |
| LBBNN-MPM | (0.035, 0.080, 0.122) | (0.013, 0.025, 0.033) | (0.006, 0.011, 0.016) | (0.002, 0.015, 0.031) | (0.014, 0.028, 0.043) | (0.035, 0.069, 0.103) |
| LBBNN-LCRT | (0.035, 0.071, 0.097) | (0.001, 0.016, 0.020) | (0.006, **0.010**, 0.012) | (0.008, 0.014, 0.019) | (0.014, **0.022**, 0.034) | (0.034, 0.070, 0.094) |
| LBBNN-LCRT-MPM | (0.015, 0.073, 0.119) | (0.011, 0.015, 0.018) | (0.008, 0.012, 0.016) | (0.007, **0.013**, 0.025) | (0.012, 0.025, 0.035) | (0.057, 0.076, 0.115) |
| LBBNN-FLOW | (0.030, **0.069**, 0.103) | (0.002, **0.007**, 0.015) | (0.007, 0.011, 0.016) | (0.010, 0.017, 0.040) | (0.011, 0.024, 0.045) | (0.018, 0.068, 0.103) |
| LBBNN-FLOW-MPM | (0.025, 0.071, 0.126) | (0.003, 0.008, 0.012) | (0.009, 0.012, 0.015) | (0.008, 0.014, 0.026) | (0.012, 0.030, 0.047) | (0.038, 0.072, 0.096) |
| BNN-FLOW | (0.025, 0.074, 0.123) | (0.006, 0.012, 0.022) | (0.020, 0.027, 0.047) | (0.007, 0.014, 0.029) | (0.010, 0.029, 0.055) | (0.038, 0.069, 0.104) |
| BNN | (0.046, 0.097, 0.195) | (0.010, 0.015, 0.024) | (0.020, 0.025, 0.031) | (0.005, **0.013**, 0.030) | (0.010, 0.039, 0.057) | (0.034, 0.070, 0.103) |
| ANN | (0.072, 0.131, 0.188) | (0.012, 0.017, 0.023) | (0.020, 0.025, 0.031) | (0.007, **0.013**, 0.021) | (0.027, 0.042, 0.056) | (0.040, 0.074, 0.109) |
| ANN + L2 | (0.020, 0.114, 0.178) | (0.013, 0.018, 0.024) | (0.015, 0.020, 0.026) | (0.007, **0.013**, 0.020) | (0.014, 0.040, 0.053) | (0.025, **0.066**, 0.111) |
| ANN + MC dropout | (0.028, 0.079, 0.115) | (0.011, 0.014, 0.017) | (0.008, 0.011, 0.018) | (0.033, 0.039, 0.048) | (0.016, 0.029, 0.051) | (0.027, 0.069, 0.113) |

Table 7: The pWAIC1 metric, on our six classification datasets, where the minimum, mean, and maximum values are obtained with 10-fold cross-validation.

| Method | Credit Approval | Bank Marketing | Census Income | Dry Beans | Pistachio | Raisins |
|------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| LBBNN | (0.620, 1.323, 2.595) | (28.18, 31.15, 34.74) | (22.98, 29.33, 37.14) | (15.82, 23.18, 30.44) | (2.594, 3.277, 4.762) | (0.149, 0.271, 0.668) |
| LBBNN-MPM | (0.000, 0.000, 0.000) | (0.001, 0.002, 0.003) | (0.002, 0.002, 0.003) | (0.001, 0.002, 0.003) | (0.000, 0.000, 0.000) | (0.000, 0.000, 0.000) |
| LBBNN-LCRT | (0.013, 0.026, 0.041) | (0.575, 0.643, 0.725) | (0.997, 1.144, 1.357) | (9.937, 13.32, 20.12) | (0.015, 0.034, 0.052) | (0.011, 0.023, 0.043) |
| LBBNN-LCRT-MPM | (0.011, 0.025, 0.043) | (0.566, 0.645, 0.718) | (1.013, 1.140, 1.180) | (0.002, 0.002, 0.003) | (0.016, 0.032, 0.054) | (0.011, 0.024, 0.045) |
| LBBNN-FLOW | (0.009, 0.021, 0.033) | (0.486, 0.542, 0.608) | (1.045, 1.216, 1.488) | (12.83, 17.66, 26.65) | (0.016, 0.034, 0.058) | (0.011, 0.025, 0.052) |
| LBBNN-FLOW-MPM | (0.009, 0.022, 0.037) | (0.476, 0.544, 0.621) | (1.065, 1.245, 1.501) | (0.331, 0.420, 0.567) | (0.013, 0.033, 0.055) | (0.011, 0.026, 0.055) |
| BNN-FLOW | (0.012, 0.029, 0.054) | (0.523, 0.593, 0.655) | (1.235, 1.398, 1.745) | (0.402, 0.599, 1.089) | (0.017, 0.039, 0.062) | (0.012, 0.023, 0.045) |
| BNN | (0.015, 0.055, 0.255) | (0.580, 0.686, 0.760) | (1.165, 1.352, 1.442) | (0.005, 0.007, 0.009) | (0.019, 0.048, 0.073) | (0.011, 0.022, 0.034) |
| MC dropout | (0.016, 0.058, 0.249) | (0.536, 0.600, 0.687) | (1.057, 1.201, 1.353) | (75.99, 105.3, 133.2) | (0.027, 0.040, 0.059) | (0.011, 0.027, 0.042) |

Table 8: The pWAIC2 metric, on our six classification datasets, where the minimum, mean, and maximum values are obtained with 10-fold cross-validation.
| Method | Credit Approval | Bank Marketing | Census Income | Dry Beans | Pistachio | Raisins |
|------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| LBBNN | (0.649, 1.367, 2.711) | (28.18, 31.73, 35.74) | (23.20, 65.55, 396.7) | (16.69, 24.55, 32.75) | (2.724, 3.485, 5.058) | (0.150, 0.269, 0.647) |
| LBBNN-MPM | (0.000, 0.000, 0.000) | (0.001, 0.002, 0.003) | (0.002, 0.002, 0.003) | (0.001, 0.002, 0.003) | (0.000, 0.000, 0.000) | (0.000, 0.000, 0.000) |
| LBBNN-LCRT | (0.021, 0.058, 0.111) | (1.016, 1.176, 1.437) | (1.588, 2.981, 12.04) | (10.38, 13.86, 20.86) | (0.023, 0.078, 0.138) | (0.015, 0.044, 0.101) |
| LBBNN-LCRT-MPM | (0.017, 0.056, 0.114) | (0.985, 1.188, 1.374) | (1.1639, 2.041, 2.247) | (0.002, 0.002, 0.003) | (0.026, 0.073, 0.151) | (0.016, 0.047, 0.112) |
| LBBNN-FLOW | (0.014, 0.043, 0.072) | (0.780, 0.926, 1.154) | (1.718, 7.906, 21.91) | (13.30, 18.88, 29.37) | (0.025, 0.071, 0.137) | (0.016, 0.058, 0.197) |
| LBBNN-FLOW-MPM | (0.014, 0.047, 0.083) | (0.755, 0.934, 1.193) | (1.781, 8.940, 21.97) | (0.331, 0.421, 0.569) | (0.018, 0.070, 0.126) | (0.016, 0.059, 0.188) |
| BNN-FLOW | (0.020, 0.074, 0.199) | (0.856, 1.049, 1.303) | (2.230, 4.745, 12.83) | (0.403, 0.601, 1.093) | (0.030, 0.097, 0.178) | (0.016, 0.046, 0.123) |
| BNN | (0.024, 1.087, 10.14) | (1.036, 1.307, 1.517) | (2.052, 3.591, 12.34) | (0.005, 0.007, 0.009) | (0.034, 0.137, 0.244) | (0.015, 0.039, 0.064) |
| MC dropout | (0.027, 1.112, 10.13) | (0.905, 1.059, 1.351) | (1.767, 4.148, 12.03) | (101.0, 148.4, 265.8) | (0.054, 0.102, 0.216) | (0.015, 0.064, 0.181) |

Table 9: Root mean squared error, using 10-fold cross-validation. The minimum, mean and maximum are reported, in addition to the density. Best results are bold.
| Abalone | Median probability model | | | | Full model averaging | | | |
|---|---|---|---|---|---|---|---|---|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 0.597 | 0.677 | 0.799 | 0.435 | 0.577 | 0.665 | 0.795 | 1.000 |
| LBBNN-LCRT | 0.564 | 0.644 | 0.736 | 0.350 | 0.560 | 0.641 | 0.722 | 1.000 |
| LBBNN-FLOW | 0.577 | 0.660 | 0.767 | 0.350 | 0.572 | 0.657 | 0.761 | 1.000 |
| BNN-FLOW | - | - | - | - | 0.585 | 0.654 | 0.733 | 1.000 |
| BNN | - | - | - | - | 0.579 | 0.651 | 0.759 | 1.000 |
| ANN | - | - | - | - | 0.575 | 0.657 | 0.801 | 1.000 |
| ANN + L2 | - | - | - | - | 0.572 | 0.652 | 0.764 | 1.000 |
| ANN + MC dropout | - | - | - | - | 0.579 | 0.655 | 0.759 | 1.000 |
| Gaussian process | - | - | - | - | 0.570 | 0.650 | 0.734 | 1.000 |
| BNN-HMC | - | - | - | - | 1.047 | 1.316 | 1.778 | 1.000 |

| Wine Quality | Median probability model | | | | Full model averaging | | | |
|---|---|---|---|---|---|---|---|---|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 0.768 | 0.805 | 0.854 | 0.435 | 0.757 | 0.799 | 0.845 | 1.000 |
| LBBNN-LCRT | 0.742 | 0.782 | 0.820 | 0.350 | 0.741 | 0.780 | 0.820 | 1.000 |
| LBBNN-FLOW | 0.751 | 0.788 | 0.822 | 0.351 | 0.750 | 0.787 | 0.823 | 1.000 |
| BNN-FLOW | - | - | - | - | 0.747 | 0.778 | 0.815 | 1.000 |
| BNN | - | - | - | - | 0.749 | 0.767 | 0.790 | 1.000 |
| ANN | - | - | - | - | 0.740 | 0.761 | 0.792 | 1.000 |
| ANN + L2 | - | - | - | - | 0.746 | 0.763 | 0.789 | 1.000 |
| ANN + MC dropout | - | - | - | - | 0.758 | 0.799 | 0.847 | 1.000 |
| Gaussian process | - | - | - | - | 0.700 | 0.739 | 0.777 | 1.000 |
| BNN-HMC | - | - | - | - | 0.937 | 1.172 | 1.451 | 1.000 |

| Boston Housing | Median probability model | | | | Full model averaging | | | |
|---|---|---|---|---|---|---|---|---|
| Method | min | mean | max | density | min | mean | max | density |
| LBBNN | 0.267 | 0.412 | 0.621 | 0.431 | 0.262 | 0.393 | 0.552 | 1.000 |
| LBBNN-LCRT | 0.245 | 0.374 | 0.560 | 0.350 | 0.233 | 0.366 | 0.534 | 1.000 |
| LBBNN-FLOW | 0.267 | 0.402 | 0.595 | 0.352 | 0.253 | 0.395 | 0.561 | 1.000 |
| BNN-FLOW | - | - | - | - | 0.241 | 0.363 | 0.524 | 1.000 |
| BNN | - | - | - | - | 0.240 | 0.351 | 0.481 | 1.000 |
| ANN | - | - | - | - | 0.247 | 0.364 | 0.597 | 1.000 |
| ANN + L2 | - | - | - | - | 0.234 | 0.346 | 0.487 | 1.000 |
| ANN + MC dropout | - | - | - | - | 0.244 | 0.371 | 0.502 | 1.000 |
| Gaussian process | - | - | - | - | 0.217 | 0.349 | 0.444 | 1.000 |
| BNN-HMC | - | - | - | - | 0.611 | 1.014 | 1.320 | 1.000 |

Table 10: The pWAIC1 and pWAIC2 metrics, on our three regression datasets, where the minimum, mean, and maximum values are obtained with 10-fold cross-validation.
| | pWAIC1 (min, mean, max) | | | pWAIC2 (min, mean, max) | | |
|---|---|---|---|---|---|---|
| Method | Abalone | Wine Quality | Boston Housing | Abalone | Wine Quality | Boston Housing |
| LBBNN | (1.937, 4.486, 14.09) | (6.304, 7.496, 8.617) | (0.037, 0.202, 0.890) | (1.987, 5.206, 20.01) | (6.660, 7.832, 9.642) | (0.038, 0.210, 0.933) |
| LBBNN-MPM | (0.000, 0.001, 0.002) | (0.001, 0.001, 0.003) | (0.000, 0.000, 0.000) | (0.000, 0.001, 0.002) | (0.001, 0.001, 0.003) | (0.000, 0.000, 0.000) |
| LBBNN-LCRT | (0.766, 1.469, 2.228) | (4.978, 5.620, 6.318) | (0.013, 0.128, 0.483) | (0.774, 1.528, 2.595) | (5.112, 5.888, 6.989) | (0.013, 0.133, 0.514) |
| LBBNN-LCRT-MPM | (0.001, 0.001, 0.003) | (0.003, 0.004, 0.006) | (0.000, 0.000, 0.000) | (0.001, 0.001, 0.003) | (0.003, 0.004, 0.006) | (0.000, 0.000, 0.000) |
| LBBNN-FLOW | (0.563, 1.923, 7.515) | (3.220, 4.202, 6.069) | (0.025, 0.223, 1.025) | (0.568, 2.191, 9.925) | (3.273, 4.407, 7.352) | (0.025, 0.238, 1.157) |
| LBBNN-FLOW-MPM | (0.039, 0.147, 0.430) | (0.171, 0.226, 0.268) | (0.001, 0.012, 0.077) | (0.039, 0.147, 0.432) | (0.172, 0.226, 0.269) | (0.001, 0.012, 0.078) |
| BNN-FLOW | (0.074, 0.237, 0.468) | (0.362, 0.609, 2.210) | (0.003, 0.021, 0.062) | (0.075, 0.238, 0.473) | (0.363, 0.632, 2.415) | (0.003, 0.021, 0.062) |
| BNN | (0.002, 0.007, 0.045) | (0.014, 0.019, 0.039) | (0.000, 0.000, 0.000) | (0.002, 0.007, 0.045) | (0.014, 0.019, 0.039) | (0.000, 0.000, 0.000) |
| MC dropout | (7.295, 15.42, 44.78) | (18.70, 22.81, 27.87) | (0.344, 1.721, 6.331) | (7.972, 56.82, 430.4) | (20.35, 29.71, 71.86) | (0.397, 2.658, 12.43) |

Table 11: The mean pinball loss, on our three regression datasets, where the minimum, mean, and maximum values were obtained using 10-fold cross-validation.
| | Mean pinball (min, mean, max) | | |
|---|---|---|---|
| Method | Abalone | Wine Quality | Boston Housing |
| LBBNN | (0.210, 0.236, 0.260) | (0.291, 0.310, 0.325) | (0.100, 0.133, 0.164) |
| LBBNN-MPM | (0.220, 0.242, 0.260) | (0.296, 0.313, 0.328) | (0.097, 0.137, 0.167) |
| LBBNN-LCRT | (0.203, 0.227, 0.251) | (0.289, 0.304, 0.316) | (0.087, 0.121, 0.150) |
| LBBNN-LCRT-MPM | (0.204, 0.228, 0.259) | (0.290, 0.304, 0.318) | (0.089, 0.123, 0.151) |
| LBBNN-FLOW | (0.208, 0.232, 0.253) | (0.289, 0.306, 0.321) | (0.091, 0.132, 0.158) |
| LBBNN-FLOW-MPM | (0.209, 0.231, 0.261) | (0.290, 0.306, 0.319) | (0.092, 0.133, 0.154) |
| MNF | (0.211, 0.232, 0.250) | (0.293, 0.303, 0.317) | (0.085, 0.120, 0.142) |
| BNN | (0.210, 0.230, 0.255) | (0.286, 0.295, 0.307) | (0.084, 0.117, 0.145) |
| ANN | (0.210, 0.231, 0.255) | (0.279, 0.292, 0.301) | (0.087, 0.119, 0.149) |
| ANN + L2 | (0.208, 0.230, 0.255) | (0.283, 0.293, 0.301) | (0.083, 0.114, 0.142) |
| ANN + MC dropout | (0.211, 0.231, 0.252) | (0.295, 0.312, 0.328) | (0.089, 0.125, 0.156) |
| Gaussian process | (0.210, 0.231, 0.252) | (0.267, 0.283, 0.295) | (0.079, 0.117, 0.145) |
| BNN-HMC | (0.358, 0.522, 0.749) | (0.368, 0.464, 0.597) | (0.223, 0.389, 0.503) |
Review 1:

Summary: The authors propose modeling and optimization improvements to latent binary Bayesian neural networks (LBBNN): Bayesian neural networks with a spike-and-slab prior and a variational spike-and-slab posterior on the weights. The authors propose using a conditionally factorized posterior approximation instead of a fully factorized one to improve the expressivity of the variational posterior and use a normalizing flow to model the posterior distribution of the newly introduced conditioning variable. They perform some toy experiments on MNIST-type datasets and synthetic data.

Strengths and Weaknesses:

## Strengths

The authors' description of how they are extending earlier models and inference procedures in Sections 2-4 is clear and easy to follow. Furthermore, they also perform extensive ablation studies, so it is easy to see what improvement is brought about by each of the authors' proposed extensions.

## Weaknesses

I have two main issues with the paper:

- The authors' contribution seems to be very incremental
- The use cases of the authors' method are unclear

The authors' two technical contributions are combining two previous models (LBBNNs and flow posteriors) and deriving a local reparameterization trick (LRT) for their model. However, the combination is straightforward and requires no new technical insight. Similarly, the experimental methodology is correct, but the experiments are standard and do not seem to give any new insight. Overall, I do not think this work would be of interest to the TMLR community.

The above ties in with my second problem with the paper: I could not find much motivation for why the authors developed the model. In particular, it is unclear why inducing sparsity in the model weights is useful, especially given that the new model requires many new parameters for the normalizing flow used in the variational posterior. The authors also do not study the calibration of the model's uncertainty estimates very thoroughly, as they do not compare with relevant methods such as HMC + Gibbs sampling the structure or using a Laplace approximation for the approximate posterior. Furthermore, looking at Figures 6-8, it appears that the authors' proposed model yields uncertainty estimates whose calibration is even worse than the mean-field Gaussian ones, as the predictive entropy for the cyan class is a lot higher than I would expect.

Requested Changes: For me to recommend acceptance, the authors would need to give reasonable motivation for why their particular model is useful for the TMLR community. In particular, there isn't much justification for why sparsity is desirable in any of the experiments conducted by the authors, except for the variable selection experiment. Furthermore, I think the authors should compare the calibration of the predictive uncertainty to that of standard baselines, such as Gaussian processes and HMC-sampled BNNs.

As for the writing, the introduction should be rewritten because, except for the first few sentences, it is mostly a literature review/related works section rather than an introduction. Furthermore, I would suggest the authors clearly lay out their contributions at the end of the introduction section as a bullet-pointed list.

Finally, I have some miscellaneous points:

- What is $f$ above eq 1? Why is $\eta(x)$ called the mean vector?
- There's a sum over $k$ missing in the eq above eq 9
- hard-tanh is not standard terminology; please define it in the text, e.g., as a footnote.
- What do the square brackets mean in the sum in the first equation in section 4?
- The brackets used for the expectation and variance in section 4 are inconsistent, sometimes round, sometimes square.
- Most figures, especially Figure 3, are blurry and pixelated; please change them to vector graphics.
- Switching from BNN to MNF in Table 1 is misleading; an "MNF" is also a BNN. Hence, I suggest the authors change it to MNF-BNN.
- What is $M$ in the TPR and FPR definitions?

Broader Impact Concerns: n/a

==================================================

Review 2:

Summary: The paper motivates, builds, makes efficient, and trains a sparse Bayesian neural network. To do this it uses a spike-and-slab prior, as well as a spike-and-slab posterior with some structural sharing between weights. The shared structural latents are themselves fit with a flow, and the whole model is fit variationally with the local reparameterization trick.

Strengths and Weaknesses: This is a straightforward idea to try; it took some clever tricks to make tractable, and the authors built it and tested it on MNIST, KMNIST, and FMNIST. In this case, the straightforwardness of the idea I think is a strength. Obviously, testing something these days on the MNIST family of models doesn't provide a huge amount of evidence of superiority of a method, but I think in this case the goal was less to claim that these LBBNN networks were superior to alternatives, rather that they can be built and show good performance, and later show qualitatively good predictive uncertainty.

Requested Changes: I would expand the discussion of the local reparameterization trick; I was lost upon first reading how it was used in the LBBNN. I would also like you to comment more on how the full model average had full density, even if the individual or mean predictions were sparse. This seems to run afoul of the usual stories of sparsity, in that many of the weights are unnecessary; here it seems instead that all weights are utilized, just in different elements of the ensemble.

I'd be curious to see some attempt to estimate the effective dimensionality of the networks, perhaps by way of a WAIC (the Widely Applicable Information Criterion of Watanabe) style estimate, i.e. I'd be curious about the gap between the predictive likelihood and the average likelihood, i.e.:

$$ 2 \sum_{i=1}^n \left( \log E_{post}[ p(y_i|x_i,\theta) ] - E_{post}[ \log p(y_i|x_i,\theta) ] \right) $$

which we ought to be able to interpret as an effective number of parameters. I'd be curious if this was similar to the observed densities for the individual samples.

Broader Impact Concerns: No concerns.

==================================================

Review 3:

Summary: The authors combine several existing ideas to extend Bayesian NNs (BNNs): spike-&-slab priors on the weights for sparsity, normalizing flows to represent posterior dependencies, and the "local reparameterization trick" which moves the variational approximation from the weights to the pre-activations. They present results on MNIST and derivatives, and synthetic logistic regression problems.

Strengths and Weaknesses: The paper is reasonably well written, although probably a little longer than it needs to be and could be edited down. The order is also odd in places: the simulated data examples coming before the MNIST(+) experiments, and mentioning the LRT long before describing it.
The introduction of z was a bit jarring to me: is z a model parameter (in which case it should be described before the inference is described), or just an auxiliary variable for q (in which case that should be said explicitly)? What is the domain of z? (The reals, clearly, but that isn't obvious when it's first introduced.)

Combining existing ideas obviously feels somewhat incremental, but is a good fit for TMLR. My main concern with the manuscript is that uncertainty/calibration is only assessed on simulated data, and mostly on the final Gaussian clusters data, which is very simple (only two dimensions in particular). It would be more valuable to assess calibration (e.g. via ECE) on real world data (and ideally regression problems, not just classification). This is the main contribution one would hope for from sophisticated BNNs: I am OK with the authors' argument that outright predictive accuracy shouldn't be the primary goal, but my impression is existing BNNs do no better than simpler approaches (dropout, deep ensembles) for predictive uncertainty and still require post-hoc calibration. I have no idea from the paper whether these extensions are helping address this.

Minor things:

- to me (and anyone who has done much classical stats) LRT = likelihood ratio test, so this is an unfortunate acronym.
- I disagree with the claim that BNNs incorporate prior knowledge. Maybe they could, but they are not used in that way.
- I don't think mean field BNNs should be called "fully Bayesian" since they are only representing a small amount of local uncertainty
- I think you should be comparing to dropout at least (especially when checking calibration)
- for the logistic regression example it would be valuable to assess whether the NF based q correctly models that the correlated features should result in posterior correlation between the corresponding weights
- cut the definition of TPR and FPR, we know what those are

Requested Changes: Look at calibration on the classification datasets they already use, and ideally in some regression settings too. Edit for length, clarity around what exactly z is, and order.

Broader Impact Concerns: NA

==================================================

Metareview:

Recommendation: Reject

Comment: The algorithm proposed here is based on a combination of two existing tricks. The reviewers agree that the idea is simple (almost straightforward, to be honest) but clever: [A9N5: "Combining existing ideas obviously feels somewhat incremental"], [WWin: "This is a straightforward idea to try (...) the straightforwardness of the idea I think is a strength"]. [K8JC] shares this opinion. There are many possible ways to defend such a simple idea: to prove it works with theoretical arguments and/or to provide strong empirical evidence. The authors chose the experimental way.

First, the experimental results are good. However, they don't show a clear improvement over existing methods. The results are totally comparable in terms of predictive accuracy, which is expected [WWin: "Obviously, testing something these days on the MNIST family of models doesn't provide a huge amount of evidence of superiority of a method"]. [A9N5] also points out a lack of calibration check on real data, and in the regression setting. Of course, the benefit expected from the Bayesian approach is uncertainty quantification. But this would be true of many methods (any other Bayesian approach, dropout etc.) and here again, there is no evidence that the proposed method improves on them.
[K8JC: "I am OK with the authors' argument that outright predictive accuracy shouldn't be the primary goal, but my impression is existing BNNs do no better than simpler approaches (dropout, deep ensembles) for predictive uncertainty and still require post-hoc calibration. I have no idea from the paper whether these extensions are helping address this"]? To carricature things, the experiments look like a routine sanity check, but they do not bring any striking evidence of a benefit due to the new algorithm proposed here: [K8JC: "the experiments are standard and do not seem to give any new insight"]. It must be noted that the authors took all these comments very seriously. They provided a detailed reply to all three reviewers together with a deeply revised version of the paper. The new version includes regression examples [A9N5: "The authors did a solid job addressing my concerns"]. This led [A9N5] to (weakly) support publication. The authors also included comparison to standard methods (HMM, GP) for uncertainty quantification. However [K8JC] still believe that this part is not convicing yet (the comparison to HMM seems to be unfair while it's not clear what GP algorithm is used, etc). [K8JC] clearly states he/she might ultimately support publication of the paper, but only after a major revision. Finally, [WWin] recommends to accept the paper. It is thus a tough call. The paper is clearly boarderline. But I believe that the experimental evidence must be stronger to justify publication. I will thus follow reviewer [K8JC] and reject the paper. However, in view of the relatively positive opinion of [WWin,A9N5] I strongly encourage the authors to take into account the additional comments/questions of [K8JC] and to resubmit the paper to TMLR. ==================================================
# Sampling From The Latent Space In Autoencoders: A Simple Way Towards Generative Models?

Anonymous authors

Paper under double-blind review

## Abstract

By sampling from the latent space of an autoencoder and decoding the latent space samples to the original data space, any autoencoder can simply be turned into a generative model. For this to work, it is necessary to model the autoencoder's latent space with a distribution from which samples can be obtained. Several simple possibilities (kernel density estimates, Gaussian distribution) and more sophisticated ones (Gaussian mixture models, copula models, normalization flows) can be thought of and have been tried recently. This study aims to discuss, assess, and compare various techniques that can be used to capture the latent space so that an autoencoder can become a generative model while striving for simplicity. Among them, a new copula-based method, the *Empirical Beta Copula Autoencoder*, is considered. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling or synthesizing new data with specific features.

## 1 Introduction

Generating realistic sample points of various data formats has been of growing interest in recent years. Thus, new algorithms such as *Autoencoders (AEs)* and *Generative Adversarial Networks (GANs)* (Goodfellow et al., 2014) have emerged. GANs use a discriminant model, penalizing the creation of unrealistic data from a generator and learning from this feedback. On the other hand, AEs try to find a low-dimensional representation of the high-dimensional input data and reconstruct from it the original data. To turn an AE into a generative model, the low-dimensional distribution is modeled, samples are drawn, and thereupon new data points in the original space are constructed with the decoder. We call this low-dimensional representation of the data in the autoencoder the *latent space* in the following.

Based on that, Variational Autoencoders (VAEs) have evolved, optimizing for a Gaussian distribution in the latent space (Kingma & Welling, 2014). Adversarial autoencoders (AAEs) utilize elements of both types of generative models, where a discriminant model penalizes the distance of the encoded data from a prior (Gaussian) distribution (Makhzani et al., 2016). However, such strong (and simplifying) distributional assumptions as in the VAE or AAE can have a negative impact on performance, leading to a rich literature coping with the challenge of reducing the gap between approximate and true posterior distributions (e.g., Rezende & Mohamed 2015; Tomczak & Welling 2018; Kingma et al. 2016; Gregor et al. 2015; Cremer et al. 2018; Marino et al. 2018; Takahashi et al. 2019). In this paper we discuss more flexible approaches modeling the latent space without imposing restrictions on the underlying distribution.

Recently, Tagasovska et al. 2019 presented the *Vine Copula Autoencoder (VCAE)*. Their approach comprises two building blocks, an autoencoder and a vine copula which models the dependence structure in latent space. By that, they were able to create realistic, new images with samples from the fitted vine copula model in the latent space. In this work, we want to elaborate on this idea and compare various methods to model the latent space of an autoencoder to turn it into a generative model. To this end, we analyze, amongst others, the usage of *Gaussian mixture models (GMM)* as done by Ghosh et al.
2019, and simple multivariate *Kernel Density Estimates*. Additionally, we introduce a new, non-parametric copula approach, the *Empirical Beta Copula Autoencoder (EBCAE)*. To get a deeper understanding of how this can turn a standard autoencoder into a generative model, we inspect resulting images, check the models for their ability to generalize, and compare additional features.

![1_image_0.png](1_image_0.png)

Figure 1: Function scheme of simple generative autoencoders. 1. An encoder f encodes the data X to a low-dimensional representation Y. 2.1 Y is modeled by Y′. 2.2 Generate new synthetic samples of the latent space by sampling from Y′. 3. Decode the new samples with the decoder g.

In this study we do not aim to beat the latest SOTA generative models but want to shed light on different modeling techniques in the latent space and their characteristics in a rather straightforward autoencoder setting, which may be applied in more sophisticated models as well. Thus, we strive for simplicity and take an alternative route to more and more complex models. We believe that such an analysis in a straightforward setting is essential for understanding the effects from different sampling methods, which may then be applied in more advanced generative models. We also check whether the methods may be a simple alternative to more complex models, such as normalization flows Rezende & Mohamed (2015) or diffusion models (see, e.g., Rombach et al. 2022; Vahdat et al. 2021). More specifically, we use the well-known Real NVP (Dinh et al., 2017) as an example from these more sophisticated machine learning models in the latent space but do not elaborate on these in detail. Note that in contrast to other methods (e.g., as proposed by Oring et al. 2021, Berthelot et al. 2019 or van den Oord et al. 2017), the investigated overall approach does not restrict or change the training of the autoencoder in any form.

All models considered in this work are constructed in three steps, visualized in Figure 1. First, an autoencoder, consisting of an encoder f and a decoder g, is trained to find a low-dimensional representation of the data X. Second, the data in the latent space Y is used to learn the best fitting representation Y′ of it. This is where the examined models differ from each other by using different methods to model the latent space. Finally, we sample from the learned representation of the latent space and feed the samples into the decoder part of the autoencoder, creating new synthetic data samples.

Generative models are a vivid part of the machine learning literature. For example, new GAN developments Varshney et al. (2021); Karras et al. (2021); Lee et al. (2021); Hudson & Zitnick (2021), developments in the field of autoencoders Larsen et al. (2016); Yoon et al. (2021); Zhang et al. (2020); Shen et al. (2020), or developments in variational autoencoders Sohn et al. (2015); Havtorn et al. (2021); Masrani et al. (2019); Xu et al. (2019) are emerging. We again want to emphasize that for the models we consider, no prior is needed, nor is the optimization approach changed, i.e., the latent space is modeled after the training of the autoencoder post-hoc. Thus, the presented approach could be transferred to other, more sophisticated, state-of-the-art autoencoders, as hinted in Ghosh et al. 2020.

The general idea of creating new data by sampling in the latent space of a generative model has already been used by, e.g., Tagasovska et al. 2019; Dai & Wipf 2019; Brehmer & Cranmer 2020 or Ghosh et al.
2020, but to the best of our knowledge, no analysis and comparison of such methods have been made so far. Closely related, more and more researchers specifically address the latent space of generative models in their work (Mishne et al., 2019; Fajtl et al., 2020; Moor et al., 2020; Oring et al., 2021; Hofert et al., 2021). There, especially hierarchical methods as suggested by Maaløe et al. (2019) seem to be promising. Further, autoencoders based on the Wasserstein distance lately achieved excellent results by changing the regularization term of a VAE and using or learning a Gaussian mixture prior (Tolstikhin et al., 2019; Mondal et al., 2021), analogously to our use of Gaussian mixtures fitting the latent space distribution.

This work does not propose a new 'black-box algorithm' for generating data (although we present the new EBCAE) but analyses challenges and possible answers on how autoencoders can be turned into generative models by using well-understood tools of data modeling. One of our main findings is that it is hard to find a trade-off between out-of-bound sampling and creating new pictures. We conclude that besides a pure numerical perspective and looking at new random samples of a generative model with a latent space, the resulting image of the nearest neighbor in the latent space from the training data should be inspected. We demonstrate in our experiments that copula-based approaches may be promising alternatives to traditional modeling methods since they allow for the recombination of marginal distributions from one class with the dependence structure of another class, leading to new possibilities in synthesizing images, and we discuss targeted sampling. Our conclusion is intended to point out relevant aspects to the user and discusses the advantages and disadvantages of the models examined.

The remainder of the paper is structured as follows. Section 2 introduces various methods for modeling the latent space. Besides traditional approaches, copula-based methods are introduced. Section 3 describes the implementation, evaluation, and results of the experiments carried out. In Section 4 we discuss the results and conclude the paper. Last, we provide additional experiments and insights for interested readers in the appendix.

## 2 Modeling The Latent Space

In this section, we want to introduce and reflect on different methods to model the latent space in an autoencoder (Step 2 in Figure 1). All methods aim to fit the low-dimensional data Y as best as possible to be able to create new sample points in the latent space, which leads to new realistic images after passing the decoder. We first recap more 'traditional' statistical tools, followed by copulas as an intuitive and flexible tool for modeling high-dimensional data. We briefly explain how each approach can be used to model data in the latent space and how to obtain samples thereof. Note that we do not introduce our benchmark models, namely the standard plain vanilla VAE and the *Real NVP*, and refer to the original papers instead (Kingma & Welling, 2014; Dinh et al., 2017). Pseudocode of the overall sampling approach is given in the Appendix (Algorithm 2).

## 2.1 Traditional Modeling Methods

We classify the *multivariate Gaussian distribution*, a *Kernel Density Estimation (KDE)*, and a *Gaussian Mixture Model (GMM)* as traditional modeling methods and give a rather short treatment of each below. They are well known and can be studied in various statistics textbooks such as Hastie et al. 2001 or Bishop 2006.
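Throughout Section 2, every candidate model plugs into the same encode-fit-sample-decode loop from Figure 1. The following minimal sketch (not the authors' code; `encoder`, `decoder`, and `latent_model` are placeholder names) illustrates this loop, assuming a trained PyTorch encoder/decoder pair and any latent-space model exposing `fit` and `sample` methods:

```python
# Sketch of the three-step pipeline of Figure 1; all names are illustrative.
import torch

def generate_samples(encoder, decoder, latent_model, X, m):
    with torch.no_grad():
        Y = encoder(X).cpu().numpy()   # step 1: encode data to the latent space
    latent_model.fit(Y)                # step 2.1: fit a distribution to the latent codes
    Y_new = latent_model.sample(m)     # step 2.2: draw m new latent samples, shape (m, d)
    with torch.no_grad():              # step 3: decode the samples to the data space
        X_new = decoder(torch.as_tensor(Y_new, dtype=torch.float32))
    return X_new
```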
## Multivariate Gaussian

The probably simplest method is to assume the data in the latent space to follow a multivariate Gaussian distribution. Thus, we estimate the covariance matrix $\hat{\Sigma}$ and mean vector $\hat{\mu}$ of the data Y. In the second step, we draw samples thereof and pass them through the decoder to generate new images. Note that this is similar to the sampling procedure in a VAE, but without forcing the latent space to be Gaussian during training.

## GMM

The *Gaussian Mixture Model (GMM)* aims to model the density of the latent space by mixing M multivariate Gaussian distributions. Thus, the Gaussian mixture model has the form

$$f(x)=\sum_{m=1}^{M}\alpha_{m}\,\phi(x;\mu_{m},\Sigma_{m})\tag{1}$$

where $\alpha_m$ denotes the mixing parameter and $\phi$ the density of the multivariate normal distribution with mean vector $\mu_m$ and covariance matrix $\Sigma_m$. The model is usually fit by maximum likelihood using the EM algorithm. By combining several Gaussian distributions, it is more flexible than estimating only one Gaussian distribution as above. A GMM can be seen as some kind of kernel method (Hastie et al., 2001), having a rather wide kernel. In the extreme case, i.e., where M equals the number of points the density is estimated on, a Gaussian distribution with zero variance is centered over each point. Kernel density estimation is introduced in the following.

## KDE

Kernel Density Estimation is a well-known non-parametric tool for density estimation. Put simply, a KDE places a density around each data point. The total resulting estimated density is constructed by

$$f(x_0)=\frac{1}{N\lambda}\sum_{i=1}^{N}K_{\lambda}(x_{0},x_{i})\tag{2}$$

with N being the total number of data points, $\lambda$ the bandwidth, and K the used kernel. Note that the choice of bandwidth and kernel can affect the resulting estimated density. Kernel density estimation can be performed on univariate data as well as on multivariate data. In this work, we rely on the most commonly used kernel, the Gaussian kernel, and a bandwidth fitted via *Silverman's rule of thumb* (Silverman, 1986) for the univariate KDEs (i.e., for estimating the marginal distributions of the latent space), while we use a grid search with 10-fold cross-validation in the multivariate case.

We use kernel density estimation in multiple manners throughout this work. First, we use a multivariate KDE to model the density of the data in the latent space itself. In the case of a Gaussian kernel, it can be written as

$$f(x)=\frac{1}{N\sqrt{|2\pi\Sigma|}}\sum_{i=1}^{N}e^{-\frac{1}{2}(x-x_{i})'\Sigma^{-1}(x-x_{i})}\tag{3}$$

where $\Sigma$ represents the covariance matrix of the kernel, i.e., the matrix of bandwidths. Second, we ignore the dependence structure between margins and estimate the univariate densities of each dimension in the latent space by a KDE for each marginal distribution. In this way, we are able to find out whether explicitly modeling the dependence structure is necessary or not. We call that approach the independent modeling approach, also denoted short by *Independent* in the following. Last, we use univariate KDEs for modeling the marginal distributions of each dimension in the latent space and use them in the copula models described below.
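As a concrete illustration of the three traditional models, the following NumPy/scikit-learn sketch fits each of them to latent codes `Y` and draws new samples (illustrative code under the bandwidth and component choices reported later, not the authors' implementation):

```python
# Sketch: fitting the traditional latent-space models to latent codes Y of
# shape (n, d) and sampling m new latent points; all names are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
Y = rng.normal(size=(2000, 10))  # stand-in for real latent codes
m = 2000

# Multivariate Gaussian: estimate mean vector and covariance matrix, then sample.
mu, Sigma = Y.mean(axis=0), np.cov(Y, rowvar=False)
gauss_samples = rng.multivariate_normal(mu, Sigma, size=m)

# GMM with M = 10 components, fitted by maximum likelihood via EM.
gmm = GaussianMixture(n_components=10).fit(Y)
gmm_samples, _ = gmm.sample(m)

# Multivariate Gaussian KDE; bandwidth selected by 10-fold cross-validated grid search.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-1, 1, 20)}, cv=10)
kde = grid.fit(Y).best_estimator_
kde_samples = kde.sample(m)
```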
## 2.2 Copula Based Models

Besides the traditional modeling methods introduced above, we apply copula-based models. In the following, we first introduce copulas as a tool for high-dimensional data, which allows us to model the latent space in our application. Then, we focus on the two copula-based methods to model the latent space of the autoencoder: the *vine copula* and the *empirical beta copula* approach. For detailed introductions to copulas, we refer the reader to Nelsen 2006; Joe 2014; Durante & Sempi 2015. Copulas have been subject to an increasing interest in the *Machine Learning* community over the last decades, see, e.g., Dimitriev & Zhou 2021; Janke et al. 2021; Messoudi et al. 2021; Ma et al. 2021; Letizia & Tonello 2020; Liu 2019; Kulkarni et al. 2018; Tran et al. 2015.

In a nutshell, copula theory enables us to decompose any d-variate distribution function into d marginal univariate distributions and their joint dependence structure, given by the copula function. Thus, copulas "couple" multiple univariate distributions into one joint multivariate distribution. More formally, a d-variate copula $C : [0, 1]^d \rightarrow [0, 1]$ is a d-dimensional joint distribution function whose margins are uniformly distributed on the unit interval. Decomposing and coupling distributions with copulas is formalized in Theorem 2.1, going back to Sklar 1959.

**Theorem 2.1** (Sklar 1959). *Consider a d-dimensional vector of random variables $Y_i = (Y_{i,1}, \ldots, Y_{i,d})$ with joint distribution function $F_Y(y_i) = P(Y_1 \leq y_{i,1}, \ldots, Y_d \leq y_{i,d})$ for $i = 1, \ldots, n$. The marginal distribution functions $F_j$ are defined by $F_j(y_{i,j}) = P(Y_j \leq y_{i,j})$ for $y_{i,j} \in \mathbb{R}$, $i = 1, \ldots, n$ and $j = 1, \ldots, d$. Then, there exists a copula $C$, such that $F_Y(y_1, \ldots, y_d) = C(F_1(y_1), \ldots, F_d(y_d))$ for $(y_1, \ldots, y_d) \in \mathbb{R}^d$.*

Vice versa, using any copula $\tilde{C}$, it follows that $\tilde{F}_Y(y_1, \ldots, y_d) := \tilde{C}(F_1(y_1), \ldots, F_d(y_d))$ is a proper multivariate distribution function. This allows us to construct multivariate distributions with the same dependence structure but different margins, or multivariate distributions with the same margins but different couplings/pairings, i.e., dependence structures.

The simplest estimator is given by the empirical copula. It can be estimated directly on the ranks of each marginal distribution by

$$\hat{C}(\mathbf{u})=\frac{1}{n}\sum_{i=1}^{n}\prod_{j=1}^{d}\mathbf{1}\left\{\frac{r_{i,j}^{(n)}}{n}\leq u_{j}\right\}\tag{4}$$

with $\mathbf{u} = (u_1, \ldots, u_d) \in [0, 1]^d$ and $r_{i,j}^{(n)}$ denoting the rank of each $y_{i,j}$ within $(y_{1,j}, \ldots, y_{n,j})$, i.e.,

$$r_{i,j}^{(n)}=\sum_{k=1}^{n}\mathbf{1}\{y_{k,j}\leq y_{i,j}\}.\tag{5}$$

Note that $\mathbf{u} = (u_1, \ldots, u_d)$ represents a quantile level, hence a scaled rank. Simultaneously, the univariate margins can be estimated using a KDE so that the full distribution of the latent space is accounted for. Note that it is not possible to draw new samples from the empirical copula directly as no random process is involved.

In our applications, the latent space is typically equipped with dimensions ≥ 2. Although a variety of two-dimensional copula models exist, the amount of multivariate (parametric) copula models is somewhat limited. We present two solutions to this problem in the following, namely *vine copulas* and the *empirical beta copula*.
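Before turning to these two models, a small sketch shows how the scaled ranks and the empirical copula of Formulas (4) and (5) could be computed from latent codes (illustrative NumPy code, not the authors' implementation):

```python
# Sketch: pseudo-observations (scaled ranks) and the empirical copula of Formula (4).
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(Y):
    # column-wise ranks r_{i,j}^{(n)} scaled by n, landing in (0, 1]
    return np.apply_along_axis(rankdata, 0, Y) / Y.shape[0]

def empirical_copula(U, Y):
    # C_hat(u) = (1/n) * #{rows whose scaled ranks are componentwise <= u}
    V = pseudo_observations(Y)  # shape (n, d); U has shape (m, d)
    return np.mean(np.all(V[None, :, :] <= U[:, None, :], axis=2), axis=1)
```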
## Vine Copula Autoencoder

Vine copulas decompose the multivariate density as a cascade of bivariate building blocks organized in a hierarchical structure. This decomposition is not unique, and it influences the estimation procedure of the model. Here, we use *regular-vine (r-vine)* models Czado (2019); Joe (2014) to model the 10-, 20-, and 100-dimensional latent spaces of the autoencoders at hand.

An r-vine is built of a sequence of linked trees $T_i = (V_i, E_i)$, with nodes $V_i$ and edges $E_i$ for $i = 1, \ldots, d-1$, and follows distinct construction rules which we present in Appendix B. The d-dimensional copula density can then be written as the product of its bivariate building blocks:

$$c(u_1, \ldots, u_d) = \prod_{i=1}^{d-1} \prod_{e \in E_i} c_{a_e b_e; D_e}\left(u_{a_e|D_e}, u_{b_e|D_e}\right)\tag{6}$$

with conditioning set $D_e$ and conditional probabilities, e.g., $u_{a_e|D_e} = P(U_{a_e} \leq u_{a_e} \mid D_e)$. The conditioning set $D_e$ includes all variables conditioned on at the respective position in the vine structure (see Appendix B). For each resulting two-dimensional copula of conditional variables, any parametric or non-parametric copula model (as done by Tagasovska et al. 2019) can be chosen. However, the construction and estimation of vine copulas is rather complicated. Hence, assuming independence for seemingly unimportant building blocks, so-called truncation, is regularly applied. Because of this, truncated vine copula models do not capture the complete dependence structure of the data, and their usage is not underpinned by asymptotic theory. We refer to Czado (2019); Czado & Nagler (2022); Aas (2016) for reviews of vine copula models.

## Empirical Beta Copula Autoencoder

The *empirical beta copula* (Segers et al., 2017) avoids the problem of choosing a single, parametric multivariate copula model due to its non-parametric nature. Further, and in contrast to the presented vine copula approach, it offers an easy way to model the full, non-truncated multivariate distribution based on the univariate ranks of the joint distribution and, thus, seems to be a reasonable choice to model the latent space.

The empirical beta copula is closely related to the empirical copula (see Formula 4) and is a crucial element of the Empirical-Beta-Copula Autoencoder. It is solely based on the ranks $r_{i,j}^{(n)}$ of the original data Y and can be interpreted as a continuous counterpart of the empirical copula. It is defined by

$$C^{\beta}(\mathbf{u})=\frac{1}{n}\sum_{i=1}^{n}\prod_{j=1}^{d}F_{n,r_{i,j}^{(n)}}(u_{j})\tag{7}$$

for $\mathbf{u} = (u_1, \ldots, u_d) \in [0, 1]^d$, where

$$F_{n,r_{i,j}^{(n)}}(u_{j})=P\big(U_{(r_{i,j}^{(n)})}\leq u_{j}\big)\tag{8}$$

$$=\sum_{p=r_{i,j}^{(n)}}^{n}\binom{n}{p}u_{j}^{p}(1-u_{j})^{n-p}\tag{9}$$

is the cumulative distribution function of a *beta distribution*, i.e., $B(r_{i,j}^{(n)}, n + 1 - r_{i,j}^{(n)})$. As $r_{i,j}$ is the rank of the $i$th element in dimension $j$, $U_{(r_{i,j})}$ represents the $r_{i,j}$th order statistic of n i.i.d. uniformly distributed random variables on [0, 1]. For example, if the rank of the $i$th element in dimension $j$ is 5, $U_{(r_{i,j})} = U_{(5)}$ denotes the 5th order statistic of n i.i.d. uniformly distributed random variables.

The intuition behind the empirical beta copula is as follows: Recall that the marginal distributions of a copula are uniformly distributed on [0, 1] and, hence, the $k$th smallest value of the scaled ranks $r_{i,j}^{(n)}/n$ corresponds to the $k$th order statistic $U_{(k)}$. Such order statistics are known to follow a beta distribution $B(k, n + 1 - k)$ (David & Nagaraja, 2003). Consequently, the mathematical idea of the empirical beta copula is to replace each indicator function of the empirical copula with the cumulative distribution function of the corresponding rank $r_{i,j}^{(n)}$. We argue that the empirical beta copula can be seen as the naturally extended version of the empirical copula; thus, it seems to be a good choice for dependence modeling.
Segers et al. 2017 further demonstrate that the empirical beta copula outperforms the empirical copula both in terms of bias and variance. A theorem stating the asymptotic behavior of the empirical copula is given in Appendix C.

Synthetic samples in the latent space $y'$ are created by reversing the modeling path. First, random samples from the copula model $u = (u_1, \ldots, u_d)$ are drawn. Then, the copula samples are transformed back to the natural scale of the data by the inverse probability integral transform of the marginal distributions, i.e., $y'_j = \hat{F}_j^{-1}(u_j)$, where $\hat{F}_j$ is the estimated marginal distribution and $u_j$ the $j$th element of the copula sample for $j \in \{1, \ldots, d\}$. Algorithm 1 summarizes the procedure.

**Algorithm 1:** Sampling from the Empirical Beta Copula

Input: Sample $Y \subset \mathbb{R}^{n \times d}$, new sample size $m$
begin
&nbsp;&nbsp;&nbsp;&nbsp;Compute rank matrix $R \in \mathbb{R}^{n \times d}$ out of $Y$
&nbsp;&nbsp;&nbsp;&nbsp;Estimate marginals of $Y$ with KDE, $\hat{f}_1(y_1), \ldots, \hat{f}_d(y_d)$
&nbsp;&nbsp;&nbsp;&nbsp;for $i \leq m$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Draw $I$ at random from $\{1, \ldots, n\}$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for $j \leq d$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Draw $u_{I,j} \sim B(R_{I,j},\ n + 1 - R_{I,j})$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Set $u_i = (u_{I,1}, \ldots, u_{I,d})$
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Rescale margins by $Y_i = \big(\hat{F}_1^{-1}(u_{i,1}), \ldots, \hat{F}_d^{-1}(u_{i,d})\big)$
Output: New sample $Y'$ of size $m$
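A possible NumPy translation of Algorithm 1 (a sketch, not the authors' reference implementation) is given below; `marginal_icdfs` is an assumed list of d callables for the inverse marginal CDFs $\hat{F}_j^{-1}$, e.g., obtained from the univariate KDEs:

```python
# Sketch of Algorithm 1: sampling latent codes from the empirical beta copula.
import numpy as np
from scipy.stats import rankdata

def sample_ebc_latent(Y, m, marginal_icdfs, seed=0):
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    # rank matrix R with R[i, j] = rank of y_{i,j} within column j
    R = np.apply_along_axis(lambda col: rankdata(col, method="ordinal"), 0, Y)
    samples = np.empty((m, d))
    for i in range(m):
        I = rng.integers(n)               # draw one observation index at random
        u = rng.beta(R[I], n + 1 - R[I])  # u_j ~ B(r_{I,j}, n + 1 - r_{I,j}) per dimension
        samples[i] = [marginal_icdfs[j](u[j]) for j in range(d)]  # rescale margins
    return samples
```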
We now present the experiments and results of a comparative study including all mentioned methodologies to model the latent space in the next section.

## 3 Experiments

In this section, we present the results of our experiments. We use the same architecture for the autoencoder in all experiments for one dataset but replace the modeling technique for the latent space for all algorithms. The architecture, as well as implementation details, are given in Appendix D. We further include a standard VAE and the Real NVP normalization flow approach modeling the latent space in our experiments to serve as benchmarks.

![6_image_0.png](6_image_0.png)

Figure 2: Comparison of random, synthetic samples of different Autoencoder models row by row for MNIST (left) and CelebA (right). Original input samples are given in the last row.

## 3.1 Setup

We first describe the overall methodology and the usage of the methods proposed in Section 2. We then introduce the used data sets and evaluation framework.

## Methodology

We train an autoencoder consisting of two neural nets, an *encoder* f, and a *decoder* g. The encoder f maps data X from the original space to a lower-dimensional space, while the decoder g maps the low-dimensional data Y from the latent space back to the original space (see Fig. 1). We train both neural nets such that the reconstruction loss is minimized, i.e., that the reconstructed data X′ = g(f(X)) is as similar to the original data X as possible. In the second step, we model the latent space data Y with a multivariate Gaussian distribution, a Gaussian mixture model, kernel density estimates, the two presented copula methods, and the Real NVP. Thus, we fit models with different flexibility and complexity while keeping the training process of the autoencoder untouched. Last, new samples are generated by decoding random samples from the learned model in the latent space. Note that such an approach is only reasonable when the underlying autoencoder has learned a relevant and interesting representation of the data and the latent space is smooth. We demonstrate this in Appendix E.

## Datasets

We conduct experiments on one small-scale, one medium, and one large-scale dataset. The small-scale *MNIST* dataset (LeCun et al., 2010) includes binary images of digits, while the medium-scale *SVHN* dataset (Netzer et al., 2011) contains images of house numbers in Google Street View pictures. The large-scale *CelebA* dataset (Liu et al., 2015) consists of celebrity images covering 40 different face attributes. We split data into a train set and a test set of 2000 samples, which is a commonly used size for evaluation (Tagasovska et al., 2019; Xu et al., 2018). Note that the data sets cover different dimensionalities in the latent space, allowing for a thorough assessment of the methods under investigation.

## Evaluation

Evaluation of results is performed in several ways. First, we visually compare random pictures generated by the models. Second, we evaluate the results with the framework proposed by Xu et al. 2018, since a log-likelihood evaluation is known to be incapable of assessing the quality (Theis et al., 2016) and unsuitable for non-parametric models. Based on their results, we choose five metrics in our experiments: the *earth mover distance (EMD)*, also known as *Wasserstein distance* (Vallender, 1974); the *maximum mean discrepancy (MMD)* (Gretton et al., 2007); the *1-nearest neighbor-based two-sample test (1NN)*, a special case of the classifier two-sample test (Lopez-Paz & Oquab, 2017); the *Inception Score* (Salimans et al., 2016); and the *Fréchet inception distance* (Heusel et al., 2017) (the latter two over ResNet-34 softmax probabilities). In line with Tagasovska et al. 2019 and as proposed by Xu et al. 2018, we further apply the EMD, MMD, and 1NN over feature mappings in the convolution space over ResNet-34 features. For all metrics except the Inception Score, lower values are preferred. For more details on the metrics, we refer to Xu et al. 2018.

Next, we evaluate the ability to generate new, realistic pictures by the different latent space modeling techniques. Therefore, we compare new samples with their nearest neighbor in the latent space stemming from the original data. This shows us whether the learned distribution covers the whole latent space, or stays too close to known examples, i.e., the model does not generalize enough. Finally, we compare other features of the tested models, such as their ability of targeted sampling and of recombining attributes.

## 3.2 Results

In the following, we show results for our various experiments. First, we present visual results for each of the methods investigated to gain a qualitative understanding of their differences. Second, we compare the methods in terms of performance metrics. Third, we evaluate the latent space and nearest neighbors in the latent space. Finally, we address computing times and discuss targeted sampling and recombination of image features.

## Visual Results

Figure 2 shows images generated from each method for MNIST and CelebA. The GMM model is composed of 10 elements, and the KDE is constructed using a Gaussian kernel with a bandwidth fitted via a grid search and 10-fold cross-validation. The specifications of the Real NVPs are given in the Appendix. For the MNIST dataset, we observe the best results for the EBCAE (row 6) and KDE (row 3), while the other methods seem to struggle a bit. For CelebA, our visual observations are slightly different. All methods produce images that are clearly recognizable as faces. However, the Gaussian samples in row 1 and independent margins in row 2 create pictures with some unrealistic artefacts, blurry backgrounds, or odd colors.
This is also the case for the GMM in row 4 and VCAE in row 5, but less severe. We believe that this comes from samples of an empty area in the latent space, i.e., where none of the original input pictures were projected to. In contrast to that, the samples in the latent space of the KDE, EBCAE, and Real NVP stay within these natural bounds, producing good results after passing the decoder (rows 3, 6, 8). Recall that all methods use the same autoencoder and only differ by means of sampling in the latent space. From our observations, we also conclude that the autoencoder for the CelebA dataset is less sensitive toward modeling errors in the latent space since all pictures are clearly recognizable as faces. In contrast, for the MNIST dataset, not all images clearly show numbers. Similar results for SVHN are presented in the Appendix.

## Numerical Results

The numerical results computed from 2000 random samples displayed in Figure 3 prove that dependence truly matters within the latent space. Simultaneously, the KDE, GMM, and EBCAE perform consistently well over all metrics, delivering comparable results to the more complex Real NVP. Especially the EBCAE outperforms the other methods, whereas the VCAE, Gauss model, and VAE usually cluster in the middle.

![8_image_0.png](8_image_0.png)

Figure 3: Performance metrics of generative models on CelebA, reported over epochs, computed from 2000 random samples. Note that they only differ in the latent space sampling and share the same autoencoder.

We further report results over the number of samples in the latent space in Figure 9 in the Appendix. This perspective, unusual at first sight, visualizes the capability to reach good performance even for small sample sizes in the latent space. In a small-sample regime, it is crucial to assess how fast a method adapts to data in the latent space and models it correctly. We see that all methods perform well for small sample sizes, i.e., n = 200. Similar experiments for MNIST and SVHN can be found in Appendix F.

## Nearest Neighbour And Latent Space Evaluation

Next, we evaluate the different modeling techniques in their ability to generate new, realistic images. For this, we focus on pictures from the CelebA dataset in Figure 4. First, we create new, random samples with the respective method (top row) and then compare these with their decoded nearest neighbor in the latent space (middle row). The bottom row displays the latent space nearest neighbor in the original data space before applying the autoencoder. By doing so, we are able to disentangle two effects: first, the effect from purely encoding-decoding an image, and second, the effect of modeling the latent space. Thus, we can check whether new images are significantly different from the input, i.e., whether the distribution modeling the latent space merely reproduces images or generalizes to some extent. We observe that the samples from GMM, VCAE and the Real NVP substantially differ from their nearest neighbors. However, again they sometimes exhibit unrealistic colors and blurry backgrounds. The samples created from KDE and EBCAE look much more similar to their nearest neighbors in the latent space, indicating that these methods do not generalize to the extent of the other methods. However, their samples do not include unrealistic colors or features and seem to avoid sampling from areas where no data point of the original data is present. Thus, they stay in 'natural bounds'. Note that this effect apparently is not reflected in the numerical evaluation metrics.
We, therefore, recommend that, in addition to a quantitative evaluation, a qualitative evaluation of the resulting images should always be performed. To further underpin this point, Figure 5 shows 2-dimensional TSNE embeddings (see, e.g., van der Maaten & Hinton 2008) of the latent space for all six versions of the autoencoder (MNIST). Black points indicate original input data, and colored points are synthetic samples from the corresponding method. We see that the KDE, as well as the EBCAE, stay close to the original space. The samples from the GMM and Real NVP also seem to closely mimic the original data, whereas the other methods fail to do so. This visualization confirms our previous conjecture that some algorithms tend to sample from 'empty' areas in the latent space, leading to unrealistic results.

![9_image_0.png](9_image_0.png)

Figure 4: Nearest neighbor evaluation of the six investigated modeling methods after decoding. **Top row:** Newly generated images. **Middle row:** Nearest neighbor of new image in the latent space of training samples after decoding. **Bottom row:** Original input training picture of nearest neighbor in latent space.

![9_image_1.png](9_image_1.png)

Figure 5: TSNE embeddings of samples in the latent space of the **MNIST** dataset. Points from the original input training data Y are given in black, whereas new, synthetic samples Y′ in the latent space stemming from the different modeling methods are colored.

## Computing Times, Targeted Sampling And Recombination

We also report computing times for learning and sampling of the different models for MNIST and CelebA in Table 1. Unsurprisingly, the more straightforward methods such as Gauss, Independence, KDE, and GMM exhibit the lowest sampling times. The Real NVP shows the highest learning time as a neural network is fitted. However, we expect the difference to be much smaller once trained on an appropriate GPU. The times also reflect how the complexities of the methods scale with the latent space dimensions.

Table 1: Modeling and sampling time in the **CelebA** and **MNIST** dataset of 2000 artificial samples based on a latent space of size n = 2000 in [s].

| | CelebA | CelebA | MNIST | MNIST |
|---|---|---|---|---|
| Method | Learn | Sample | Learn | Sample |
| Gauss | <0.01 | 0.01 | 0.002 | 0.002 |
| Indep. | 4.10 | 0.07 | 0.393 | 0.003 |
| KDE | 75.25 | 0.01 | 13.958 | 0.001 |
| GMM | 1.35 | 0.03 | 0.115 | 0.004 |
| VCAE | 306.97 | 148.48 | 10.345 | 4.590 |
| EBCAE | 3.41 | 59.36 | 0.328 | 5.738 |
| Real NVP | 2541.19 | 3.69 | 341.608 | 0.477 |

Last, we discuss other features of the tested methods, such as targeted sampling and recombination. In contrast to the other techniques, the KDE and EBCAE allow for targeted sampling. Thus, we can generate new images with any desired characteristic directly, e.g., only the digit 'one' in a dataset of images of numbers. In the case of the KDE, this simply works by sampling from the estimated density of the corresponding sub-group. In the case of the EBCAE, we randomly choose among rows in the rank matrix of original samples that share the desired specific attribute, i.e., we sample I in the first for-loop in Algorithm 1 conditional on the sub-group. Thus, newly generated samples stay close to the original input and therefore share the same main characteristics. Other approaches are also possible; however, they need further tweaks to the model, training, or sampling, such as the *conditional variational autoencoder* (Sohn et al., 2015).
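For illustration, targeted sampling with the KDE model reduces to fitting the density on the latent codes of the desired sub-group only; a minimal sketch, assuming attribute labels `labels` aligned with the latent codes (names are illustrative):

```python
# Sketch: targeted sampling with a KDE restricted to one sub-group of the data.
import numpy as np
from sklearn.neighbors import KernelDensity

def targeted_kde_samples(Y, labels, target, m, bandwidth=0.5):
    Y_sub = Y[labels == target]  # latent codes sharing the desired attribute
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(Y_sub)
    return kde.sample(m)         # decode these samples to obtain targeted images
```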
The second feature we discuss is recombination. By using copula-based models (VCAE and EBCAE), we can facilitate the decomposition idea and split the latent space into its dependence structure and margins, i.e., we combine the dependence structure of images with a specific attribute with the marginal distributions of images with different attributes. Therefore, copula-based methods allow controlling the attributes of created samples to some extent. Our experiments suggest that the dependence structure provides the basic properties of an image, while the marginal distributions are responsible for details (see, e.g., Figure 6). However, we want to point out that it is not generally clear what information is embedded in the dependence structure and what information is in the marginal distributions of the latent space. This might also depend on the autoencoder and the dataset at hand. That said, using such a decomposition enables higher flexibility and hopefully fuels new methodological developments in this field.

![10_image_0.png](10_image_0.png)

Figure 6: Samples from recombination experiment with the EBCAE. Glasses are removed by using the marginal distribution of the training data without glasses in the latent space. **Top row:** Samples created with the dependence structure in latent space from samples with glasses and marginal distributions in latent space from samples without glasses. **Middle row:** Nearest neighbor of newly created sample in the training data after decoding. **Bottom row:** Original input picture of nearest neighbor in latent space.

## 4 Discussion

In this section, we discuss the results of our experiments and express some further thoughts. So, is sampling from the latent space a simple way towards generative models? We observed that sampling from the latent space via the investigated methods is indeed a viable approach to turn an autoencoder into a generative model and may be promising for application in more advanced autoencoders. However, each modeling approach in this setting comes with its own restrictions, advantages, and problems. We witness a trade-off between the ability to generalize, i.e., to create genuinely new pictures, and sample quality, i.e., to avoid unrealistic colors or artefacts. In cases where new data points are sampled in the neighborhood of existing points (as in the KDE or EBCAE), the newly generated data stays in somewhat natural bounds and provides realistic, but not completely new, decoded samples. On the other hand, modeling the latent space too generically leads to bad-quality images. We believe this is similar to leaving the feasible set of an optimization problem or sampling from a wrong prior. While being close to actual points of the original latent space, new samples stay within the feasible set. By moving away from these points, the risk of sampling from an unfeasible region and thus creating unrealistic new samples increases. Recombination via a copula-based approach of marginal distributions and dependence structures offers the possibility to detect new feasible regions in the latent space for the creation of realistic images. Also, interpolating by building convex combinations of two points in the latent space seems reasonable. However, without further restrictions during training (see, e.g., the discussion in Ghosh et al. 2020), we cannot principally guarantee proper interpolation results. Further, we observe that the mentioned trade-off is not reflected by the performance metrics.
## 4 Discussion

In this section, we discuss the results of our experiments and express some further thoughts. So, is sampling from the latent space a simple way towards generative models? We observed that sampling from the latent space via the investigated methods is indeed a viable approach to turn an autoencoder into a generative model and may be promising for application in more advanced autoencoders. However, each modeling approach in this setting comes with its own restrictions, advantages, and problems. We witness a trade-off between the ability to generalize, i.e., to create genuinely new pictures, and sample quality, i.e., to avoid unrealistic colors or artefacts. In cases where new data points are sampled in the neighborhood of existing points (as in the KDE or EBCAE), the newly generated data stays within somewhat natural bounds and provides realistic, but not completely new, decoded samples. On the other hand, modeling the latent space too generically leads to low-quality images. We believe this is similar to leaving the feasible set of an optimization problem or sampling from a wrong prior. While being close to actual points of the original latent space, new samples stay within the feasible set. By moving away from these points, the risk of sampling from an infeasible region and thus creating unrealistic new samples increases. Recombination via a copula-based approach of marginal distributions and dependence structures offers the possibility to detect new feasible regions in the latent space for the creation of realistic images. Also, interpolating by building convex combinations of two points in the latent space seems reasonable. However, without further restrictions during training (see, e.g., the discussion in Ghosh et al. 2020), we cannot guarantee proper interpolation results in principle. Further, we observe that the mentioned trade-off is not reflected by the performance metrics. Therefore, we strongly recommend not only checking quantitative results but also finding and analyzing the nearest neighbor in the original data to detect the pure reproduction of pictures. This also reveals that the development of further evaluation metrics could be beneficial.

A closely related issue is the choice of a parametric vs. a non-parametric modeling method in the latent space. Parametric methods can place probability mass in areas of the latent space where no data point of the original input data was observed. Thus, parametric methods are able to generate (truly) new data, subject to their assumptions. However, if the parametric assumption is wrong, the model creates samples from 'forbidden' areas in the latent space, leading to unrealistic images. In spite of this, carefully chosen parametric models can be beneficial, and even a log-likelihood is computable and tractable (although we do not use it for training). Non-parametric methods avoid this human decision and possible source of error completely but are closely bound to the empirical distribution of the given input data. Consequently, such methods can miss important areas of the latent space but create more realistic images. Furthermore, adjusting the parameters of the non-parametric models, such as increasing bandwidths or lowering truncation levels, offers possibilities to slowly overcome these limitations.

Besides the major points above, the EBCAE and KDE offer an easy way of targeted sampling without additional training effort. This can be beneficial for various applications and is not as straightforward with other methods. Lastly, the investigated methods differ in their runtime. While vine copula learning and sampling is very time-intensive in high dimensions, the EBCAE is much faster but still outperformed by the remaining competitors in terms of runtime. For the non-copula methods, the GMM is very fast on both datasets while still capturing the dependence structure to some extent. In contrast, the Real NVP needs more time for training but is rather quick in generating new samples.

To sum up, we can confirm that there are indeed simple methods to turn a plain autoencoder into a generative model, which may then also be beneficial in more complex generative models. We conclude that the optimal method to do so depends on the goals of the user. Besides runtime considerations, the specific application of the autoencoder matters. For example, if one is interested in targeted sampling, the EBCAE or KDE should be applied. Recombination experiments call for a copula-based approach, whereas in all cases, the trade-off between generalization and out-of-bound sampling should be considered.

## References

Kjersti Aas. Pair-copula constructions for financial applications: A review. *Econometrics*, 4(4), 2016. ISSN 2225-1146.

Tim Bedford and Roger M. Cooke. Vines: A new graphical model for dependent random variables. *The Annals of Statistics*, 30(4):1031–1068, 2002. ISSN 00905364.

David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. In *International Conference on Learning Representations*, 2019.

Christopher M. Bishop. *Pattern Recognition and Machine Learning (Information Science and Statistics)*. Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738.

Johann Brehmer and Kyle Cranmer. Flows for simultaneous manifold learning and density estimation. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H.
Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 442–453. Curran Associates, Inc., 2020. Chris Cremer, Xuechen Li, and David Duvenaud. Inference suboptimality in variational autoencoders, 2018. URL https://arxiv.org/abs/1801.03558. Claudia Czado. *Analyzing Dependent Data with Vine Copulas: A Practical Guide With R*. Springer International Publishing, 2019. Claudia Czado and Thomas Nagler. Vine copula based modeling. *Annual Review of Statistics and Its* Application, 9(1):453–477, 2022. doi: 10.1146/annurev-statistics-040220-101153. Bin Dai and David Wipf. Diagnosing and enhancing vae models, 2019. URL https://arxiv.org/abs/1903. 05789. H. A. David and H. N. Nagaraja. *Order Statistics*. John Wiley and Sons, 08 2003. doi: http://dx.doi.org/10. 1002/0471722162. Alek Dimitriev and Mingyuan Zhou. CARMS: Categorical-antithetic-REINFORCE multi-sample gradient estimator. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural* Information Processing Systems, 2021. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. Fabrizio Durante and Carlo Sempi. *Principles of copula theory*. CRC Press LLC, 01 2015. doi: 10.1201/b18674. Jiri Fajtl, Vasileios Argyriou, Dorothy Monekosso, and Paolo Remagnino. Latent bernoulli autoencoder. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine* Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 2964–2974. PMLR, 13–18 Jul 2020. Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, and Bernhard Scholkopf. From variational to deterministic autoencoders. In *International Conference on Learning Representations*, 2020. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc., 2014. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International* Conference on Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pp. 1462–1471, Lille, France, 07–09 Jul 2015. PMLR. Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex Smola. A kernel method for the two-sample-problem. In B. Schölkopf, J. Platt, and T. Hoffman (eds.), *Advances in Neural Information* Processing Systems, volume 19. MIT Press, 2007. Charles R. Harris, K. Jarrod Millman, Stéfan J van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. *Nature*, 585:357–362, 2020. doi: 10.1038/s41586-020-2649-2. Trevor Hastie, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning*. Springer Series in Statistics. 
Springer New York Inc., New York, NY, USA, 2001. Jakob D. Drachmann Havtorn, Jes Frellsen, Soren Hauberg, and Lars Maaløe. Hierarchical vaes know what they don't know. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 4117–4128. PMLR, 18–24 Jul 2021. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing* Systems, volume 30. Curran Associates, Inc., 2017. Marius Hofert, Avinash Prasad, and Mu Zhu. Quasi-random sampling for multivariate distributions via generative neural networks. *Journal of Computational and Graphical Statistics*, 30(3):647–670, 2021. doi: 10.1080/10618600.2020.1868302. Drew A Hudson and Larry Zitnick. Generative adversarial transformers. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of Proceedings of Machine Learning Research, pp. 4487–4499. PMLR, 18–24 Jul 2021. Tim Janke, Mohamed Ghanmi, and Florian Steinke. Implicit generative copulas. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. H. Joe. *Dependence modeling with copulas*. Chapman & Hall/CRC, 01 2014. doi: 10.1201/b17116. Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In *Thirty-Fifth Conference on Neural Information Processing* Systems, 2021. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016. Vaibhav Kulkarni, Natasa Tagasovska, Thibault Vatter, and Benoit Garbinato. Generative models for simulating mobility trajectories. *ArXiv*, 2018. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of *Proceedings of* Machine Learning Research, pp. 1558–1566, New York, New York, USA, 20–22 Jun 2016. PMLR. Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. *ATT Labs [Online].* Available: http://yann.lecun.com/exdb/mnist, 2, 2010. Jinhee Lee, Haeri Kim, Youngkyu Hong, and Hye Won Chung. Self-diagnosing GAN: Diagnosing underrepresented samples in generative adversarial networks. In *Thirty-Fifth Conference on Neural Information* Processing Systems, 2021. Nunzio A. Letizia and Andrea M. Tonello. Segmented generative networks: Data generation in the uniform probability space. *IEEE Transactions on Neural Networks and Learning Systems*, pp. 1–10, 2020. doi: 10.1109/TNNLS.2020.3042380. Weiwei Liu. Copula multi-label learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. 
Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In *5th International Conference* on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. Jiaqi Ma, Bo Chang, Xuefei Zhang, and Qiaozhu Mei. CopulaGNN: Towards integrating representational and correlational roles of graphs in graph neural networks. In *International Conference on Learning* Representations, 2021. Lars Maaløe, Marco Fraccaro, Valentin Liévin, and Ole Winther. Biva: A very deep hierarchy of latent variables for generative modeling, 2019. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders, 2016. Joe Marino, Yisong Yue, and Stephan Mandt. Iterative amortized inference. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of Proceedings of Machine Learning Research, pp. 3403–3412. PMLR, 10–15 Jul 2018. Vaden Masrani, Tuan Anh Le, and Frank Wood. The thermodynamic variational objective. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Copula-based conformal prediction for multi-target regression. *Pattern Recognit.*, 120:108101, 2021. Gal Mishne, Uri Shaham, Alexander Cloninger, and Israel Cohen. Diffusion nets. *Applied and Computational* Harmonic Analysis, 47(2):259–285, 2019. ISSN 1063-5203. doi: https://doi.org/10.1016/j.acha.2017.08.007. Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, and AP Prathosh. Flexae: flexibly learning latent priors for wasserstein auto-encoders. In Cassio de Campos and Marloes H. Maathuis (eds.), *Proceedings* of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, volume 161 of Proceedings of Machine Learning Research, pp. 525–535. PMLR, 27–30 Jul 2021. Michael Moor, Max Horn, Bastian Rieck, and Karsten Borgwardt. Topological autoencoders. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 7045–7054. PMLR, 13–18 Jul 2020. Roger B. Nelsen. *An Introduction to Copulas*. Springer Science+Business Media, Inc., 2006. ISBN 0-38728659-4. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In *NIPS Workshop on Deep Learning and* Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_ housenumbers.pdf. Alon Oring, Zohar Yakhini, and Yacov Hel-Or. Autoencoder image interpolation by shaping the latent space. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 8281–8290. PMLR, 18–24 Jul 2021. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of Proceedings of Machine Learning Research, pp. 1530–1538, Lille, France, 07–09 Jul 2015. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. Improved techniques for training gans. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016. Johan Segers, Masaaki Sibuya, and Hideatsu Tsukahara. The empirical beta copula. Journal of Multivariate Analysis, 155:35–51, 2017. ISSN 0047-259X. doi: doi.org/10.1016/j.jmva.2016.11.010. Tianxiao Shen, Jonas Mueller, Regina Barzilay, and Tommi S. Jaakkola. Educating text autoencoders: Latent representation guidance via denoising. In Proceedings of The thirty-seventh International Conference on Machine Learning, pp. 8719–8729, 2020. B. W. Silverman. *Density estimation for statistics and data analysis / B.W. Silverman*. Chapman and Hall London ; New York, 1986. ISBN 0412246201. Abe Sklar. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8:229–231, 1959. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. Natasa Tagasovska, Damien Ackerer, and Thibault Vatter. Copulas as high-dimensional generative models: Vine copula autoencoders. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. Hiroshi Takahashi, Tomoharu Iwata, Yuki Yamanaka, Masanori Yamada, and Satoshi Yagi. Variational autoencoder with implicit optimal priors. AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019. ISBN 978-157735-809-1. doi: 10.1609/aaai.v33i01.33015066. L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 
Wasserstein auto-encoders, 2019. Jakub Tomczak and Max Welling. Vae with a vampprior. In Amos Storkey and Fernando Perez-Cruz (eds.), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of *Proceedings of Machine Learning Research*, pp. 1214–1223. PMLR, 09–11 Apr 2018. Dustin Tran, David Blei, and Edo M Airoldi. Copula variational inference. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc., 2015. Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In *Neural* Information Processing Systems (NeurIPS), 2021. S. S. Vallender. Calculation of the wasserstein distance between probability distributions on the line. *Theory* of Probability and Its Applications, 18:435–435, 1974. Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6309–6318, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of Machine Learning* Research, 9(86):2579–2605, 2008. Guido Van Rossum and Fred L Drake Jr. *Python reference manual*. Centrum voor Wiskunde en Informatica Amsterdam, 1995. Sakshi Varshney, Vinay Kumar Verma, Srijith P K, Lawrence Carin, and Piyush Rai. CAM-GAN: Continual adaptation modules for generative adversarial networks. In *Thirty-Fifth Conference on Neural Information* Processing Systems, 2021. Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. *Nature Methods*, 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2. Haowen Xu, Wenxiao Chen, Jinlin Lai, Zhihan Li, Youjian Zhao, and Dan Pei. On the necessity and effectiveness of learning the prior of variational auto-encoder, 2019. URL https://arxiv.org/abs/1905. 13452. Qiantong Xu, Gao Huang, Yang Yuan, Chuan Guo, Yu Sun, Felix Wu, and Kilian Q. Weinberger. An empirical study on evaluation metrics of generative adversarial networks. *ArXiv*, abs/1806.07755, 2018. Sangwoong Yoon, Yung-Kyun Noh, and Frank Park. Autoencoding under normalization constraints. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 12087–12097. PMLR, 18–24 Jul 2021. Zijun Zhang, Ruixiang Zhang, Zongpeng Li, Yoshua Bengio, and Liam Paull. Perceptual generative autoencoders. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 11298–11306. PMLR, 13–18 Jul 2020. 
## Appendix A Pseudocode: Overall Sampling Approach

Algorithm 2: Overall sampling approach

Input: Autoencoder with encoder f and decoder g
begin
    Compute the latent space Y by passing the training samples through the encoder f
    for each method do
        Model the latent space by fitting the respective method
        Create new samples Y′ in the latent space by drawing (randomly) from the fitted method
        for each element Y′_i in Y′ do
            Decode Y′_i by passing it through the decoder g
Output: New samples X′

## B Details On The Vine Copula

In the vine copula autoencoder, Tagasovska et al. (2019) use *regular vines (r-vines)*. An r-vine is built from a sequence of linked trees Ti = (Ni, Ei), with nodes Ni and edges Ei for i = 1, . . . , d − 1. A d-dimensional vine tree structure V = (T1, . . . , Td−1) is a sequence of d − 1 trees if (see Czado 2019):

1. Each tree Ti = (Ni, Ei) is connected, i.e., for all nodes a, b ∈ Ni, i = 1, . . . , d − 1, there exists a path n1, . . . , nk ⊂ Ni with a = n1, b = nk.
2. T1 is a tree with node set N1 = {1, . . . , d} and edge set E1.
3. For i ≥ 2, Ti is a tree with node set Ni = Ei−1 and edge set Ei.
4. For i = 2, . . . , d − 1 and {a, b} ∈ Ei it must hold that |a ∩ b| = 1.

An example of a five-dimensional vine tree structure is given below in Figure 7. Note that the structure has to be estimated and multiple structures are possible. For details on vine copula estimation, see Czado (2019); Joe (2014); Bedford & Cooke (2002).

Figure 7: Example of a vine copula tree structure T1 − T4 for five dimensions.

## C Asymptotics Of The Empirical Beta Copula

Theorem C.1 gives the asymptotic behavior of the empirical beta copula.

Theorem C.1 (Asymptotics of the empirical beta copula). *Let the copula* C *have continuous first-order partial derivatives* $\dot{C}_j(u) = \partial C(u)/\partial u_j$ *for each* $j \in \{1, \ldots, d\}$ *on the set* $I_j = \{u \in [0,1]^d : 0 < u_j < 1\}$. *The corresponding empirical copula is denoted as* $C_n$, *with empirical copula process* $\mathbb{G}_n = \sqrt{n}\,\big(C_n(u) - C(u)\big)$, *and the empirical beta copula as* $C_n^{\beta}$, *with empirical beta copula process* $\mathbb{G}_n^{\beta} = \sqrt{n}\,\big(C_n^{\beta}(u) - C(u)\big)$. *Suppose* $\mathbb{G}_n \rightsquigarrow \mathbb{G}$ *as* $n \to \infty$ *in* $\ell^{\infty}([0,1]^d)$, *where* $\mathbb{G}$ *is a limiting process having continuous trajectories almost surely. Then, in* $\ell^{\infty}([0,1]^d)$,

$$\mathbb{G}_n^{\beta} = \mathbb{G}_n + o_p(1), \quad n \longrightarrow \infty.$$

Proof. See Segers et al. (2017), Section 3. $\square$

In short, Theorem C.1 states that the empirical beta copula has the same large-sample distribution as the empirical copula and, thus, converges to the true copula. However, the empirical beta copula performs better for small samples: Segers et al. (2017) demonstrate that the empirical beta copula outperforms the empirical copula both in terms of bias and variance.

## D Implementation

## D.1 Implementation Of The Autoencoder

We implemented the experiments in Python 3.8 (Van Rossum & Drake Jr, 1995) using numpy 1.22.0, scipy 1.7.1, scikit-learn 1.1.0, and pytorch 1.10.1 (Harris et al., 2020; Virtanen et al., 2020; Pedregosa et al., 2011; Paszke et al., 2019). The AEs were trained using the Adam optimizer with a learning rate of 0.001 for MNIST and 0.0005 for SVHN and CelebA. A weight decay of 0.001 was used in all cases. Batch sizes were fixed to 128 (MNIST), 32 (SVHN), and 100 (CelebA) samples for training, while the size of the latent space was set to 10 (MNIST), 20 (SVHN), and 100 (CelebA) according to the size and complexity of the datasets. Training was executed on a separate train set and evaluated on a hold-out test set of 2000 samples, similar to Tagasovska et al. (2019).
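As a complement to the pseudocode in Algorithm 2, here is a minimal end-to-end sketch of the overall sampling approach with a Gaussian mixture model in the latent space; `encoder`, `decoder`, and `x_train` are assumed to be given, and the GMM can be swapped for any of the other latent-space models.

```python
import torch
from sklearn.mixture import GaussianMixture

@torch.no_grad()
def sample_from_autoencoder(encoder, decoder, x_train, n_samples=2000, n_components=10):
    # Step 1: compute the latent space Y by passing training samples through the encoder f.
    latents = encoder(x_train).cpu().numpy()
    # Step 2: model the latent space (here: a Gaussian mixture model).
    gmm = GaussianMixture(n_components=n_components).fit(latents)
    # Step 3: draw new latent samples Y' from the fitted model.
    z_new, _ = gmm.sample(n_samples)
    # Step 4: decode every new latent sample to obtain new images X'.
    return decoder(torch.from_numpy(z_new).float())
```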
For the comparison with the VCAE and for the performance metrics, we resorted to the implementations from Tagasovska et al. (2019) and Xu et al. (2018). The architectures for all networks are described in Appendix D.2. We trained the autoencoders on an NVIDIA Tesla V100 GPU with 10 Intel Xeon Gold 6248 CPUs. The experiments are executed afterward on a PC with an Intel i7-6600U CPU and 20GB RAM.

## D.2 Architectures Of Autoencoders And VAE

We use the same architecture for EBCAE, VCAE, and VAE as described below. All models were trained by minimizing the binary cross-entropy loss.

## MNIST

Encoder:

x ∈ R^(32×32) → Conv32 → BN → ReLU → Conv64 → BN → ReLU → Conv128 → BN → ReLU → FC10

Decoder:

y ∈ R^10 → FC100 → ConvT128 → BN → ReLU → ConvT64 → BN → ReLU → ConvT32 → BN → ReLU → FC1

For all (de)convolutional layers, we used 4 × 4 filters, a stride of 2, and a padding of 1. BN denotes batch normalization, ReLU rectified linear units, and FC fully connected layers. Last, Conv_k denotes a convolution with k filters, and ConvT_k a transposed convolution with k filters.

## SVHN

In contrast to the MNIST dataset, images in SVHN are colored. We do not use any preprocessing for this dataset.

Encoder:

x ∈ R^(3×32×32) → Conv64 → BN → ReLU → Conv128 → BN → ReLU → Conv256 → BN → ReLU → FC100 → FC20

Decoder:

y ∈ R^20 → FC100 → ConvT256 → BN → ReLU → ConvT128 → BN → ReLU → ConvT64 → BN → ReLU → ConvT32 → BN → ReLU → FC1

Notations are the same as described above.

## CelebA

In contrast to the MNIST dataset, images in CelebA are colored. Further, we first take central crops of 140 × 140 and resize the images to a resolution of 64 × 64.

Encoder:

x ∈ R^(3×64×64) → Conv64 → BN → LeakyReLU → Conv128 → BN → LeakyReLU → Conv256 → BN → LeakyReLU → Conv512 → BN → LeakyReLU → FC100 → FC100

Decoder:

y ∈ R^100 → FC100 → Conv512 → BN → ReLU → ConvT256 → BN → ReLU → ConvT128 → BN → ReLU → ConvT64 → BN → ReLU → ConvT32 → BN → ReLU → FC1

LeakyReLU uses a negative slope of 0.2, and padding was set to 0 for the last convolutional layer of the encoder and the first of the decoder. All other notations are the same as described above.
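For concreteness, here is a minimal PyTorch sketch of the MNIST encoder and decoder described above, using 4 × 4 kernels, stride 2, and padding 1 as stated; the reshaping between fully-connected and (de)convolutional layers and the realization of the final FC1 output as a sigmoid-activated transposed convolution are our assumptions, not the exact original implementation.

```python
import torch.nn as nn

# Encoder: 1x32x32 -> Conv32 -> Conv64 -> Conv128 -> FC10 (latent code).
encoder = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),    # 32 x 16 x 16
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),   # 64 x 8 x 8
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(), # 128 x 4 x 4
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, 10),
)

# Decoder: FC -> ConvT128 -> ConvT64 -> ConvT32 -> 1x32x32 output.
decoder = nn.Sequential(
    nn.Linear(10, 128 * 4 * 4), nn.Unflatten(1, (128, 4, 4)),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
    nn.Sigmoid(),  # output in [0, 1], matching the binary cross-entropy loss
)
```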
## D.3 Implementation Of Real NVP

In our study, we used a Real NVP (see Dinh et al. 2017) to model the latent space of the autoencoder and serve as a benchmark. For all datasets, we use spatial checkerboard masking, where the mask has a value of 1 if the sum of the coordinates is odd, and 0 otherwise. For the MNIST dataset, we use four coupling layers with two hidden layers each and 256 features per hidden layer. Similarly, for the SVHN dataset, we also use four coupling layers with two hidden layers each and 256 hidden-layer features. Lastly, for the CelebA dataset, we use four coupling layers with two hidden layers each and 1024 hidden-layer features. For all datasets, we applied a learning rate of 0.0001 and train for 2000 epochs.

## E Image Interpolation Of The Autoencoder

We show that our autoencoder learned a relevant and smooth representation of the data by interpolating in the latent space; thus, modeling the latent space for generating new images is reasonable. For example, consider two images A and B with latent variables y_{A,1}, . . . , y_{A,100} and y_{B,1}, . . . , y_{B,100}. We now interpolate linearly in each dimension between these two values and feed the resulting interpolation to the decoder to get the interpolated images. Each row in Figure 8 shows a clear linear progression in ten steps from the first face on the left to the final face on the right. For example, in the last row, we see a female with blonde hair slowly transforming into a male with a beard. The transition is smooth, and no sharp changes or random images occur in between.

Figure 8: Interpolation in the latent space of samples of the autoencoder.

## F Additional Experiments

## F.1 Numerical Assessment Of Methods On CelebA

Figure 9: Performance metrics of generative models on CelebA, reported over latent space sample size. Note that they only differ in the latent space sampling and share the same autoencoder.

## F.2 Numerical Assessment Of Methods On MNIST

Figure 10: Performance metrics of generative models on MNIST, reported over epochs computed from 2000 random samples. Note that they only differ in the latent space sampling and share the same autoencoder.

Figure 11: Performance metrics of generative models on MNIST, reported over latent space sample size. Note that they only differ in the latent space sampling and share the same autoencoder.

## F.3 Numerical Assessment Of Methods On SVHN

Figure 12: Performance metrics of generative models on SVHN, reported over epochs computed from 2000 random samples. Note that they only differ in the latent space sampling and share the same autoencoder.

Figure 13: Performance metrics of generative models on SVHN, reported over latent space sample size. Note that they only differ in the latent space sampling and share the same autoencoder.

## F.4 Generated Images From SVHN

Figure 14: Comparison of synthetic samples of different autoencoder models. **1st row:** Fitted normal distribution, **2nd row:** Independent margins, **3rd row:** KDE-AE, **4th row:** GMM, **5th row:** VCAE, **6th row:** EBCAE, **7th row:** VAE, **8th row:** Real NVP, **Last row:** original pictures.

## G Code

Code will be provided here. [link to the repository will be inserted]
Review 1:

Summary:
This paper empirically compares different post-hoc methodologies to model the latent space of an autoencoder in order to turn it into a generative model. The empirical beta copula is suggested as an efficient and flexible alternative. These two-step generative models are also benchmarked against a VAE and a RealNVP. Results on three datasets (MNIST, CelebA, and SVHN) illustrate that the two-step approach is viable.

Strengths and Weaknesses:

Strengths:
* An easy-to-read and easy-to-follow comparative empirical analysis.
* An efficient methodology, the EBCAE, is proposed.

Weaknesses:
* Not a new methodology; only the EBC appears to be novel in this context.
* No new experiments, standard and small datasets. No SOTA results.
* No theoretical contribution.

Questions/Suggestions:
* The EBCAE appears able to produce more realistic samples but with less variability, and vice versa for some other methodologies. Which is more important?
* It is not clear why the fitted vine copula performs relatively poorly and why its spanned latent space (Figure 5) is so different from the original data. Could it be an estimation and/or a model-selection issue?

Requested Changes:
* Not enough detail is provided on the estimation techniques and fitted model parameters.
* A more thorough analysis of the *built-in* conditional sampling feature of the EBCAE.
* Clarifying how the image attributes are split between the marginals and the copula for the copula models is interesting. Is this anecdotal?
* Clarify the theoretical suggestion in the last section on the structural link between copulas and NF.

Broader Impact Concerns: NA.

==================================================

Review 2:

Summary:
This paper considers the problem of creating generative models by learning to sample from the latent space of auto-encoders. It also proposes a method called the Empirical Beta Copula Autoencoder, which uses an empirical beta copula for modeling and sampling from the latent space of an auto-encoder. The paper conducts a few experiments to compare several AE-based samplers over several measures.

Strengths and Weaknesses:

Strengths:
1. Considers a critical problem in the AE community.
2. Well written and easy to comprehend.

Weaknesses:
1. This paper does not offer any heretofore unknown information. Most of the observations made in this paper are very well-known to an expert in the community.
2. It provides no theoretical insights into the proposed new model.
3. Comparisons and experiments are rather weak.

Requested Changes:
1. The authors should first provide a solid theoretical grounding for the framework of turning an AE into a sampler. There are several papers that do this. For instance, the authors can refer to https://arxiv.org/abs/1711.01558, https://proceedings.mlr.press/v161/mondal21a.html, or https://arxiv.org/abs/1903.05789. Without these theoretical underpinnings, placing the present work in the proper context is difficult.
2. A lot of important baselines are missing, both in Section 2 and in the experimental comparisons: https://arxiv.org/abs/1705.07120, https://arxiv.org/abs/1903.12436.
3. The authors mention, "Vine copulas offer a solution to this problem and decompose the multivariate density as a cascade of bivariate building blocks organized in a hierarchical structure", but the context does not describe the "problem" that the authors are referring to.
4. The text suddenly jumps to the section on the Empirical Beta Copula Autoencoder without motivating the need for it.
5.
There is no theoretical justification for claiming the advantage of the considered distribution on the latent space.
6. The datasets considered for the experiments are very toy-like. Some experiments on a large-scale corpus such as ImageNet have to be demonstrated.
7. In Fig. 3, the range of the FID seems unreasonable.
8. Does the proposed model induce interpretable semantic directions in the latent space?
9. One important question in the context of AEs as generative models is the following: should one induce a sampler on the latent space jointly with the AE training, or does it have to be done post-hoc as presented in the paper? Studies have shown that the former approach is more principled and yields better results. Did the authors try such approaches?

Broader Impact Concerns: NA

==================================================

Review 3:

Summary:
This paper studies the behaviour of autoencoders with diverse latent space modeling methodologies such as a multivariate Gaussian distribution, a Gaussian mixture model, kernel density estimates, copula-based methods, and the RealNVP. The authors argue that a traditional autoencoder combined with latent space modeling can be an alternative to generative models. The authors conducted experiments on the MNIST, CelebA, and SVHN datasets to verify their claim.

Strengths and Weaknesses:

Strengths:
- The authors investigated various generative-model-based latent space modeling approaches in the autoencoder.
- If I understand correctly, the methodology that the authors utilized in the experiments is straightforward.
- The authors provide appropriate evidence for their claims.

Weaknesses:
- It seems the authors' main point is not much different from modeling data in the reduced-dimension space of dimensionality reduction algorithms. In other words, instead of direct modeling with a GMM, the considered methodology is to (1) first reduce the dimension with an autoencoder, (2) model the reduced-dimension space with a GMM, and (3) revert back to the original data space.
- The paper is written in a way that makes the message hard to understand.

Questions:
- What do the "Independent" and "Independent margins" methodologies stand for? Are they the "independent modeling approach" discussed in the KDE section?
- What is the advantage of the empirical beta copula that the authors proposed (but did not argue as the contribution of this work)?
- All the experiments are conducted in the setting of AE + latent space modeling, except for the VAE and RealNVP, right? Couldn't the same approach be utilized with RealNVP, i.e., AE + RealNVP?

Requested Changes:
- The methodology part in Section 3 should be more specified. What is the protocol of latent space modeling with autoencoders? It would be better if there were additional pseudo-code for the overall learning and sampling processes.
- The experiments should compare stand-alone generative models vs. AE + generative model. For example, a GMM in the data space vs. a GMM in the latent space of the AE.

Broader Impact Concerns: None

==================================================

Review 4:

Summary:
While auto-encoders learn high-quality data representations, they are not generative models. Tagasovska et al. (2019) proposed a method to convert a trained auto-encoder into a generative model using a three-step process: (1) learn an autoencoder; (2) use training data representations as observed samples and learn a distribution in the representation space; (3) sample from the learned representation distribution and decode to sample novel data points.
In this paper, the authors examine different existing methods of learning the representation distribution. The paper aims to understand the different properties of existing methods and analyze the trade-offs. Additionally, they propose a new method, the Empirical Beta Copula Autoencoder (EBCAE). Overall, the paper aims to develop a clearer understanding of how different methods fare against each other and the advantages and trade-offs of using one over the other.

Strengths and Weaknesses:

# Strengths
- I like the overall direction of this work. We need research that attempts to make sense of several possible methods for a particular problem. Such analyses are essential in general and fit perfectly in the scope of TMLR.

# Weaknesses
* I found the paper could have been written better. The manuscript would benefit immensely from a thorough rewrite. For instance:
  * "We argue that imposing restrictions on the distribution should be avoided and that more flexible approaches for modeling the latent space seem beneficial." I cannot figure out what the argument is here. Why should this be better?
  * The authors use latent space and representation interchangeably. Nothing is latent (or "hidden") about the representations of the autoencoders. The mapping from the data point to the representation is deterministic. I suggest that the authors clarify their notation a bit more and set up the context of the paper early.
  * "conditioning set $D_e$ and conditional probabilities, e.g., ..." There needs to be an explanation of the conditioning set. In general, several things were hard to understand about the copula background.
* "For each copula, encoding the dependence of two conditional variables, any bivariate copula model, including non-parametric modeling approaches (as done by Tagasovska et al. 2019) can be chosen." Complex sentences like this can make it harder to follow the authors' argument.
* It felt the authors were very defensive about not proposing a new method, which is perfectly fine. Instead, using the first few pages to map out the landscape is vital to bring out the key points from the analysis. No section clearly articulates the contributions of the paper. What were the specific insights that the authors found in their extensive experiments? One has to dig deep into the paper's experiments section to understand the key takeaways.
* I found Figure 2 extremely hard to parse. There are nine methods. You must match the method to the row whenever you want to compare anything. The core contribution of this work is the analysis; therefore, the analysis should be easier to parse.
* The authors conduct different experiments; however, I sometimes found them hard to follow. I suggest restructuring the experimental section such that the different settings are brought out better. For instance, the setup related to Figure 4 and Figure 5 is hidden in the dense text.

Requested Changes: See the weaknesses section.

Broader Impact Concerns: Not applicable.

==================================================

Metareview:

Recommendation: Reject

Comment: The paper has gone through a thorough review and discussion phase. All reviewers and the undersigned agree that the paper should be rejected. The topic is of general interest and the authors have a well-written and well-scoped article, so if the authors feel that they can fully address the reviewers' and the undersigned's comments, then a new submission building on this paper is welcomed.
The authors are encouraged to work on scoping the contributions they want to convey; for example, the copula-based method seems to be the most novel contribution here. This could be extended in the current setting or used in a different setting to make an entirely new contribution. It is up to you to decide whether you want to submit again.

Your AE

==================================================
# Self-Supervised Graph Representation Learning For Neuronal Morphologies Marissa A. Weis1,2,*, Laura Hansel1, **Timo Lüddecke**1, and **Alexander S. Ecker**1,3 1Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany 2Institute for Theoretical Physics, University of Tübingen, Germany 3Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany **Correspondence: marissa.weis@uni-goettingen.de* Reviewed on OpenReview: https://openreview.net/forum?id=ThhMzfrd6r ## Abstract Unsupervised graph representation learning has recently gained interest in several application domains such as neuroscience, where modeling the diverse morphology of cell types in the brain is one of the key challenges. It is currently unknown how many excitatory cortical cell types exist and what their defining morphological features are. Here we present GraphDINO, a purely data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled large-scale datasets. GraphDINO is a novel transformer-based representation learning method for spatially-embedded graphs. To enable self-supervised learning on transformers, we (1) developed data augmentation strategies for spatially-embedded graphs, (2) adapted the positional encoding and (3) introduced a novel attention mechanism, AC-Attention, which combines attention-based global interaction between nodes and classic graph convolutional processing. We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings that are on par with manual feature-based classification by experts, but without using prior knowledge about the structural features of neurons. Moreover, it outperforms previous approaches on quantitative benchmarks predicting expert labels. Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets. It is applicable beyond neuroscience in settings where samples in a dataset are graphs and graph-level embeddings are desired. ## 1 Introduction The brain is structured into different areas that contain diverse types of neurons (Ascoli et al., 2008). The morphology of cortical neurons is highly complex with widely varying shapes. Cell morphology has long been used to classify neurons into cell types (Ramón y Cajal, 1911), but characterizing neuronal morphologies is still a challenging open research question. Morphological analysis has traditionally been carried out by visual inspection (Ascoli et al., 2008; Defelipe et al., 2013) or by computing a set of predefined, quantitatively measurable features such as number of branching points (Uylings & Van Pelt, 2002; Scorcioni et al., 2008; Oberlaender et al., 2012; Polavaram et al., 2014; Markram et al., 2015; Lu et al., 2015; Gouwens et al., 2019). However, both approaches have deficits: expert assessments have a high variance (Defelipe et al., 2013) and the manual definition of morphological features introduces biases (Wang, 2018), thus calling for more unbiased, data-driven approaches to characterize the morphology of neurons. Recent advances in recording technologies have greatly accelerated data collection and therefore the amount of data available (MICrONS Consortium et al., 2023; Ramaswamy et al., 2015; Scala et al., 2021; Allen Institute, 2016; Peng et al., 2021; Winnubst et al., 2019). 
These developments have opened the floor for data-driven approaches based on unsupervised machine learning methods (Schubert et al., 2019; Elabbady et al., 2022). One form of data representation that is particularly suitable for neurons is representing the skeleton of a neuron as a tree. In such a tree, the root node represents the neuron's cell body and the node features are their 3D locations. The availability of a number of such skeleton datasets has recently sparked some work on graph-level representation learning of neuronal morphologies (Laturnus & Berens, 2021; Zhao et al., 2022; Chen et al., 2022a). Following this line of research and work from the graph learning community (Sun et al., 2020; You et al., 2020), we present an unsupervised graph-level representation learning approach. Our contributions in this paper are fourfold:

1. We propose a new self-supervised model to learn graph-level embeddings for spatial graphs. Unlike previous methods, our approach does not require human annotation or manual feature definition.
2. We introduce a novel attention module that combines transformer-style attention and message passing between neighboring nodes as in graph neural networks.
3. We apply this approach to the classification of excitatory neuronal morphologies and show that it produces clusters that are comparable with known excitatory cell types obtained by manual feature-based classification and expert labeling.
4. We outperform existing approaches based on manual feature engineering and auto-encoding in predicting expert labels.

Our code is available at https://eckerlab.org/code/weis2023/.

## 2 Related Work

## 2.1 Representation Learning For Neuronal Morphologies

Morphology has been used for a long time to classify neurons, by either letting experts visually inspect the cells (Ramón y Cajal, 1911; Defelipe et al., 2013) or by specifying expert-defined features that can be extracted and used as input to a classifier (Oberlaender et al., 2012; Markram et al., 2015; Kanari et al., 2017; Wang, 2018; Kanari et al., 2019; Gouwens et al., 2019) (see Armañanzas & Ascoli (2015) for a review). Ascoli et al. (2008) made an effort to unify the expert-defined features in use. With the advent of new technologies for microscopic imaging, electrical recording, and molecular analysis such as Patch-seq (Cadwell et al., 2015), which allow the simultaneous recording of the transcriptome, electrophysiology, and morphology of whole cells, several works have explored the prediction of cell types from multiple modalities (Gala et al., 2021) or of one modality from another (Cadwell et al., 2015; Scala et al., 2021; Gouwens et al., 2020).

Multiple previous works either hand-engineer or learn a representation of neuronal morphologies. Laturnus & Berens (2021) propose a generative approach involving random walks in graphs to model neuronal morphologies. Schubert et al. (2019) process 2D projections of morphologies with a convolutional neural network (CNN) to learn low-dimensional representations. Seshamani et al. (2020) extract local mesh features around spines and combine them with traditional Sholl analysis (Sholl, 1953). Gouwens et al. (2019) define a set of morphological features based on graphs and perform hierarchical clustering on them. We use the latter as a baseline for a classical approach with pre-defined features and Laturnus & Berens (2021) as a baseline for a model with learned features.
Concurrent work (Zhao et al., 2022) proposes a contrastive graph neural network to learn neuronal embeddings with a focus on retrieval efficiency from large-scale databases. Elabbady et al. (2022) learn representations of neurons based on subcellular features of the somatic region and show that those features are sufficient for classifying cell types on large-scale EM datasets. Chen et al. (2022b) propose a combination of graph-based processing and manually-defined features to learn embeddings of neuronal morphologies using an LSTM-based network and contrastive learning. We compare to the latter in Section 5.8.

## 2.2 Graph Neural Networks (GNNs)

Graph neural networks learn node representations by recursively aggregating information from adjacent nodes as defined by the graph's structure. While early approaches date back over a decade (Scarselli et al., 2009), numerous new variants have recently been introduced for (semi-)supervised settings: relying on convolution over nodes (Duvenaud et al., 2015; Hamilton et al., 2017; Kipf & Welling, 2017), using recurrence (Li et al., 2016), or making use of attention mechanisms (Veličković et al., 2018). A representation for the whole graph is often derived by a readout operation on the node representations, for instance averaging. See Dwivedi et al. (2020) for a recent benchmark on graph neural network architectures.

Transformer-based GNNs. Similar to us, Zhang et al. (2020) and Dwivedi & Bresson (2021) use transformer attention to work with graphs. However, Zhang et al. (2020) compute transformer attention over the nodes of sampled subgraphs, while Dwivedi & Bresson (2021) compute the attention only over local neighbors of nodes, which boils down to a weighted message passing conditioned on node feature similarity, and train with supervision. Unlike these previous approaches, we compute the attention between nodes of the global graph and adapt the transformer attention to consider the adjacency matrix of the graph, which allows the model to take into account both the direct neighbors of a node and all other nodes in the graph. Mialon et al. (2021) consider encoding local sub-structures into their node features and leverage kernels on graphs in their attention as relative positional encodings. Their 1-step random walk (RW) kernel is similar to our AC-Attention mechanism, except that the influence of the adjacency in their attention is not learnable. Ying et al. (2021) propose strategies to adapt positional encodings to graphs in order to leverage the structural information of the graphs with transformer attention. Specifically, they propose to use three different structural encodings: (1) a centrality encoding based on the node degree; (2) edge encodings based on the edge features; and (3) a spatial encoding based on the shortest path between two nodes. For neuronal skeletons, the centrality encoding is not effective, as all nodes besides the soma have a node degree of two or three. Furthermore, the edge encoding is not applicable, since the neuronal graphs do not have edge features. We use Laplacian positional encodings instead, as it was shown that they are beneficial for capturing structural and positional information (Dwivedi & Bresson, 2021) and outperform previously proposed positional encodings (Zhang et al., 2020). We did not use any additional positional encodings such as shortest-path encodings (Ying et al., 2021), but they could be easily integrated into our model. Concurrent to our work, Rampášek et al.
(2022) proposed a two-stream architecture in which transformer attention and message passing are computed in parallel and then combined after each block. In contrast, we propose one combined attention mechanism that subsumes transformer attention and message passing, with a learned trade-off per node between the two settings. Chen et al. (2022a) incorporate structural information into the transformer attention by extracting a subgraph representation around each node before computing attention over nodes.

Self-supervised learning on graphs. Self-supervised learning has proven to be a useful technique for training image feature extractors (Oord et al., 2018; Chen et al., 2020; Chen & He, 2021; Caron et al., 2021) and has been investigated for learning graph (Li et al., 2016; Hassani & Khasahmadi, 2020; Qiu et al., 2020; You et al., 2020; Xu et al., 2021) and node (Veličković et al., 2019) representations. Narayanan et al. (2017) learn graph representations through skip-gram with negative sampling by predicting present sub-graphs. You et al. (2020) propose four data augmentations for contrastive learning of graph-level embeddings. Sun et al. (2020) learn graph-level representations in a contrastive way, by predicting whether a subgraph and a graph representation originate from the same graph. Similarly, Hassani & Khasahmadi (2020) contrast the node features of one view with the graph encoding of a second view and vice versa. They build on graph diffusion networks (Klicpera et al., 2019) and only augment the structure of the graph but not the initial node features. We use Sun et al. (2020) and You et al. (2020) as baselines for graph-level unsupervised representation learning. Qiu et al. (2020) propose a generic pre-training method which uses an InfoNCE objective (Oord et al., 2018) to learn features by distinguishing augmented versions of one subgraph from other subgraphs, with random walks as augmentations. Xu et al. (2021) aim to capture local and global structures for whole-graph representation learning. They rely on an EM-like algorithm to jointly train the assignment of graphs to hierarchical prototypes, the GNN parameters, and the prototypes. Zhu et al. (2021) propose adaptive augmentation, which considers node centrality and importance to generate graph views in a contrastive framework. Similar to our approach, Thakoor et al. (2022) use two encoders, of which only one is trained while the other is an exponential moving average of the first. In contrast to our approach, though, their training objective encourages the *node embeddings* of two augmented versions of the same graph to be similar, not the *graph-level* embedding. Moreover, they use node feature and edge masking as graph augmentations. Unlike most prior work, we contrast two global views of a graph in order to learn a whole-graph representation. Our method operates on spatially embedded graphs, in which nodes correspond to points in 3D space. We make use of this knowledge in the choice of augmentations.

## 3 GraphDINO

Figure 1: A. Self-supervised learning of low-dimensional vector embeddings z1, z2 that capture the essence of the 3D morphology of individual neurons using GraphDINO. Two augmented "views" of the neuron are input into the network, where the weights of one encoder (bottom) are an exponential moving average (EMA) of the other encoder (top). The resulting latent embeddings z are projected to probability distributions p by an MLP. The objective is to maximize the similarity between p1 and p2. B.
An individual neuron is represented by its vector embedding as a point in the D-dimensional vector space.

We propose GraphDINO, a method for self-supervised representation learning of graphs. It is inspired by recent progress in self-supervised representation learning of images, which has been shown to be competitive with supervised learning without relying on labels. The core idea is to enforce that the representations of two augmented versions of the same image are close to each other in latent space. DINO (Caron et al., 2021) is an implementation of this self-supervised learning framework consisting of two encoders with a transformer backbone. To avoid mode collapse, only one encoder is directly trained through backpropagation (student), while the weights of the other encoder (teacher) are an exponential moving average (EMA) of the student's weights. The latent representations z ∈ R^D1 given by the encoders are mapped to probability distributions p ∈ R^D2 by a multi-layer perceptron (MLP) and a subsequent softmax operator, over which the cross-entropy loss is computed (Fig. C.4). For further explanation of DINO, see Appendix C.1.

GraphDINO adapts this self-supervised framework to the data domain of graphs (Fig. 1). In order to use the information given by the connectivity of the graph, we modify the computation of the transformer attention to take the graph adjacency matrix into account and use the graph Laplacian as positional encoding. More specifically, we introduce the following modifications: (1) we incorporate the graph's adjacency matrix into the attention computation; (2) we use the graph Laplacian as positional encoding; (3) we define augmentations suitable for spatial graphs.

Input. Input to the network is the 3D shape of a neuron, which is represented as an undirected graph G = (V, E). V is the set of nodes {v_i}, i = 1, . . . , N, and E the set of undirected edges E = {e_ij = (v_i, v_j)} that connect two nodes v_i, v_j. The features of each node v_i in the graph are encoded into a token using a linear transformation. These tokens are then used as input to the transformer model, which consists of l multi-head attention modules with h heads each.

Attention bias. Key-value-query attention became popular in natural language modeling (Vaswani et al., 2017) and is now routinely used in image models as well (Dosovitskiy et al., 2020). To make use of the information given by the adjacency matrix A ∈ R^(N×N) of the input graph (i.e., the neighborhood of nodes), we bias the attention towards A by adding a learned bias to the attention matrix that is conditioned on the input token values:

$$Attention(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{A})=\sigma\Bigg(\lambda\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}+\gamma\mathbf{A}\Bigg)\,\mathbf{V},\quad\text{with}\;[\lambda_{i},\gamma_{i}]=\exp{(\mathbf{W}x_{i})},\tag{1}$$

where K, Q, V are the keys, queries, and values, which are computed as a learned linear projection of the tokens. σ(·) denotes the softmax function. x_i ∈ R^D is the token of node v_i, W ∈ R^(2×D) is a learned weight matrix, λ, γ ∈ R^N are two factors per node that trade off how much weight is assigned to neighboring nodes versus all other nodes in the graph, and N is the number of nodes. When γ = 0 and λ = 1, the adjacency-conditioned attention (AC-Attention) reduces to regular transformer attention. In the other extreme case (λ = 0), the attention matrix is dominated by A and the transformer attention computation is akin to the message-passing algorithm that is commonly used when working with graphs (Scarselli et al., 2009; Duvenaud et al., 2015; Gilmer et al., 2017). GraphDINO is more flexible than both regular message passing and point-cloud attention, since it can decide how much weight is given to the neighbors of a node while maintaining the flexibility to attend to all other nodes in the graph as well.
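A minimal sketch of the adjacency-conditioned attention of Eq. 1 for a single head and a single graph; the use of a bias in the gating layer and the omission of batching and multiple heads are illustrative simplifications, not the exact original implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACAttention(nn.Module):
    """Adjacency-conditioned attention (Eq. 1), single head, single graph."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.gate = nn.Linear(dim, 2)  # plays the role of W: per-node [lambda_i, gamma_i]
        self.scale = dim ** -0.5

    def forward(self, x, adj):
        # x: (N, dim) node tokens, adj: (N, N) adjacency matrix.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        lam, gam = torch.exp(self.gate(x)).unbind(dim=-1)   # each of shape (N,)
        # Per-node trade-off between global attention and the graph neighborhood.
        scores = lam[:, None] * (q @ k.T) * self.scale + gam[:, None] * adj
        return F.softmax(scores, dim=-1) @ v
```

With `gam = 0` and `lam = 1` this reduces to regular transformer attention; with `lam = 0`, the softmax is dominated by the adjacency matrix and the layer behaves like message passing.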
Positional encoding. Following Dwivedi et al. (2020), we use the normalized graph Laplacian matrix $\mathbf{L}$ as positional encoding, computed as $\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2} = \mathbf{U}^{T}\mathbf{\Lambda}\mathbf{U}$, where $\mathbf{I}$ is the identity matrix, $\mathbf{D}$ the $N \times N$ degree matrix, $\mathbf{A}$ the adjacency matrix, and $\mathbf{U}$ and $\mathbf{\Lambda}$ are the matrices of eigenvectors and eigenvalues, respectively. The positional encodings are the first 32 eigenvectors with largest eigenvalues. Positional encodings are added to the node features after tokenization.

Data augmentation. Data augmentation plays an important role in self-supervised learning and needs to be adapted to the data, since it expresses which invariances should be imposed. Given the spatial neuronal data, we apply the following augmentations: **(1) Subsampling**: We subsample the original graph to a fixed number of n nodes by randomly removing nodes that are not branching points (i.e. nodes connected to more than two other nodes) and connecting the two neighbors of the removed node (a sketch follows Table 1). This facilitates batch processing. Furthermore, this augmentation retains the global structure of the neuron, while altering local structure in the two views. **(2) Rotation**: we perform random 3D rotation around the y-axis, which is orthogonal to the pia. **(3) Jittering**: we randomly translate individual node positions by adding Gaussian noise with $\mathcal{N}(0, \sigma_1)$. **(4) Subgraph deletion**: We identify branches that connect leaf nodes to the last upstream parent node in the graph, i.e. terminal branches that do not split into further branches, and randomly delete n of them starting at a random location along the branch, while maintaining the overall graph structure. **(5) Graph position**: we randomly translate the graph as a whole by adding Gaussian noise with $\mathcal{N}(0, \sigma_2)$ to all nodes. Unlike Caron et al. (2021), we do not differentiate between the augmentations seen by the student and the teacher network.

Table 1: Overview of data augmentations for spatially-embedded graphs such as neuronal skeletons.

| Level    | Augmentations                                  |
|----------|------------------------------------------------|
| Graph    | (1) Subsampling, (2) Rotation, (5) Translation |
| Subgraph | (3) Jittering, (4) Branch deletion             |
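The subsampling augmentation (1) can be sketched as follows, under the assumption that only pass-through nodes (degree 2, i.e. neither leaves nor branching points) are contracted; the actual implementation may differ in details such as protecting the soma node. The function name is ours and `networkx` is used only for brevity.

```python
import random
import networkx as nx

def subsample_graph(g, n_nodes, seed=None):
    """Contract randomly chosen pass-through nodes until n_nodes remain.

    A removed degree-2 node is replaced by an edge between its two neighbors,
    so leaves and branching points (degree > 2) - and thereby the global
    branching structure - are preserved.
    """
    rng = random.Random(seed)
    g = g.copy()
    while g.number_of_nodes() > n_nodes:
        candidates = [v for v in g.nodes if g.degree[v] == 2]
        if not candidates:  # only leaves and branching points left
            break
        v = rng.choice(candidates)
        u, w = g.neighbors(v)   # exactly two neighbors by construction
        g.remove_node(v)
        g.add_edge(u, w)
    return g
```

Because the removal sites are random, applying this twice to the same neuron yields two views that differ in their local node placement but share the neuron's overall branching pattern.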
## 4 Data And Experiments

## 4.1 Synthetic Graphs

To demonstrate that our novel attention mechanism is strictly more powerful than simple all-to-all attention on a graph, we generate a synthetic graph dataset. In this dataset, the five classes share similar node locations but differ in how the nodes are connected. See Appendix A for the detailed generation process. We use this dataset to test the efficacy of our novel attention mechanism, AC-Attention, and the positional encoding.

## 4.2 Neuronal And Tree Graphs

We apply GraphDINO to five publicly available neuronal datasets and one non-neuronal dataset.

Blue Brain Project (BBP): Rat somatosensory cortex. Available from the Neocortical Microcircuit Collaboration Portal of the Blue Brain Project¹ (Ramaswamy et al., 2015), the dataset contains 1,389 neurons from juvenile rat somatosensory cortex. We train GraphDINO without supervision on the 3D dendritic morphologies of all neurons. For evaluation, we use the subset of 616 neurons that have been labeled by experts with cell type and cortical layer. Of these 616 neurons, 286 are excitatory and have been assigned to 14 cell types (Markram et al., 2015). See Appendix C.5.1 for more details on the dataset. We use this dataset to evaluate the capability of GraphDINO to learn useful representations of neuronal morphologies that align with known cell types, to perform ablation experiments on the novel graph augmentation strategies, and to compare to previous work using manually-defined features.

M1 PatchSeq: Mouse motor cortex. The M1 PatchSeq dataset contains 275 excitatory and 371 inhibitory cells from adult mouse primary motor cortex (M1) (Scala et al., 2021).² The excitatory cells (M1 EXC) have been classified into tufted, untufted and other neurons based on their morphology in a previous study (Laturnus & Berens, 2021). We use this dataset to compare to previous work that learns morphological embeddings in a data-driven way. We train GraphDINO without supervision on the 3D dendritic morphologies of the 646 neurons. For evaluation, we follow the evaluation protocol and use the same dataset split as Laturnus & Berens (2021). We additionally report the 5-nearest neighbor accuracy of three additional dataset splits to estimate the variance due to the chosen split, since the test set is very small (60 neurons) and the balanced accuracy is strongly influenced by the morphologically heterogeneous "other" class, which is represented by only six samples in the test set (Laturnus & Berens, 2021).

Allen Brain Atlas (ACT): Mouse visual cortex. As part of the Allen Cell Types Database, the dataset contains 510 neurons from the mouse visual cortex with a broad coverage of types, layers and transgenic lines.³ See Allen Institute (2016) for details on how the dataset was recorded. It comes with a classification of each neuron into spiny, aspiny, or sparsely spiny, where spiny neurons are assumed to be excitatory and all others inhibitory (Gouwens et al., 2019). Additionally, the cortical layer of each neuron is provided.

Brain Image Library (BIL): Whole mouse brain. The Brain Image Library contains 1,741 reconstructed neurons from cortex, claustrum, thalamus, striatum and other brain areas in mice (Peng et al., 2021).⁴

Janelia MouseLight (JML): Whole mouse brain. The Janelia MouseLight platform contains 1,200 projection neurons from the motor cortex, thalamus, subiculum, and hypothalamus (Winnubst et al., 2019).⁵

¹http://microcircuits.epfl.ch/#/main
²https://download.brainimagelibrary.org/3a/88/3a88a7687ab66069/
³http://celltypes.brain-map.org/
⁴https://download.brainimagelibrary.org/biccn/zeng/luo/fMOST/
⁵http://mouselight.janelia.org/

Joint training on ACT, BIL and JML. Following Chen et al. (2022b), for joint training on the ACT, BIL and JML datasets, we rotate the neurons such that the first principal component is aligned with the y-axis. Chen et al. (2022b) group the neurons of the three datasets ACT, BIL and JML into eleven classes based on the cortical layer or brain region they originate from. They then evaluate their learned embeddings on a subset of six (for BIL) or four classes (for ACT and JML) that have a broad coverage across the datasets. See Appendix C.5.4 for further details.
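The alignment step can be sketched as below: center the node coordinates, obtain the first principal component via SVD, and rotate it onto the y-axis. This is our reading of the preprocessing of Chen et al. (2022b); the sign of a principal component is ambiguous, so the actual implementation may resolve the orientation differently.

```python
import numpy as np

def align_first_pc_to_y(coords):
    """Rotate 3D node coordinates (N, 3) so that the first principal
    component of the point cloud maps onto the y-axis."""
    centered = coords - coords.mean(axis=0)
    # principal axes of the node positions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = vt[0]                              # unit-norm first principal component
    y = np.array([0.0, 1.0, 0.0])
    v = np.cross(pc1, y)
    c = float(pc1 @ y)
    if np.isclose(c, -1.0):                  # antiparallel: rotate 180 degrees about x
        r = np.diag([1.0, -1.0, -1.0])
    else:                                    # Rodrigues' formula taking pc1 onto y
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        r = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return centered @ r.T
```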
Botanical Trees. The Trees dataset (Seidel et al., 2021) is a highly diverse dataset comprised of 391 skeletons of trees stemming from 39 different genera and 152 species or breedings. The skeletons were extracted from LIDAR scans of the trees. Nodes of the skeletons have a 3D coordinate associated with them. We normalize the data such that the lowest point (start of the tree trunk) is at (0, 0, 0).

## 4.2.1 Data Preprocessing

Since the objective of GraphDINO is to learn purely from the 3D dendritic morphology of neurons, we normalize each graph such that the soma location is centered at (0, 0, 0) (no cortical depth information is given to the model). Furthermore, axons are removed for all experiments in the paper, because the reconstruction of axonal arbors of excitatory neurons from light microscopy images is difficult due to their small thickness and the long ranges that they cover (Kanari et al., 2019), and thus often unreliable. The input nodes $V$ have features $v_i = [x, y, z]$, where $v_i \in \mathbb{R}^3$ are the spatial xyz-coordinates in micrometers [µm].

## 4.2.2 Training Details

GraphDINO is implemented in PyTorch (Paszke et al., 2019) and trained with the Adam optimizer (Kingma & Ba, 2015). The latent dimensionality of z is 16 for the synthetic graphs and 32 for the neuronal and the botanical tree datasets. For M1 PatchSeq we use a latent dimensionality of 64. See Appendix C.3 for an overview of the hyperparameters used for training on the different datasets. At inference time, the latent embeddings z are extracted from the student network for the unaugmented graphs. We use scikit-learn (Pedregosa et al., 2011) for fitting Gaussian Mixture Models (GMM) and k-nearest neighbor (kNN) classifiers.

## 5 Results

We first establish that GraphDINO works on the synthetic graph dataset and show that our novel AC-Attention is necessary for exploiting information from graph connectivity. Second, we show that our novel augmentation strategies are suitable for spatially-embedded graphs that are tree-structured, and that classical GNN message-passing is not sufficient when graphs have long-ranging branches. Then, we move to the gradually more complex, biological questions of spiny-aspiny differentiation, cell type recovery and consistency with existing labels. To this end, we employ in total five neuronal datasets that encompass two species and range across multiple brain areas. Finally, we compare our model to several previous works based on manually-defined morphological features as well as approaches with learned features. See Appendix B for the application of GraphDINO to a non-neuronal dataset.

Figure 2: t-SNE embedding of latent representations of 3D synthetic graphs shown for one example run per model. Accuracy averaged over five random seeds and given as mean ± standard deviation. "–" means removing one component from the full model.

## 5.1 AC-Attention Recovers Information Encoded By Graph Connectivity

We start by demonstrating the efficacy of our novel AC-Attention module. For this experiment, we use the synthetic graph dataset where classes differ in how nodes are connected, whereas the distribution of node positions does not vary across classes. Therefore, considering the graph structure is necessary to differentiate between the classes (more details in Appendix A). We train GraphDINO on the synthetic graph dataset without labels in three configurations: (1) with AC-Attention, (2) with regular transformer attention and (3) with transformer attention and without positional encoding.
We assess the quality of the learned embedding using the ground truth labels. A linear classifier on the learned embeddings achieves a test set accuracy of (1) 95% ± 4, (2) 29% ± 6, and (3) 20% ± 1, showing that AC-Attention allows us to capture the structure of spatially embedded graphs when the location of the nodes alone does not provide sufficient information. Removing both AC-Attention and the positional encoding results in the classifier performing at chance level. Using only the positional encoding performs slightly better than chance, because the positional encodings contain some information about node connections through the graph Laplacian. To make full use of the information given by the connectivity of the graphs, using AC-Attention is essential (Fig. 2).

## 5.2 Tailored Graph Augmentations Are Well-Suited For Spatially-Embedded Graphs

In self-supervised learning, data augmentation is used to obtain two views that define a positive input pair. The augmentations are chosen to encode invariances that should not change the underlying sample identity. In previous contrastive learning for graphs, these augmentations were for example dropping random edges or masking node features (You et al., 2020). These augmentations are not appropriate for our spatially-embedded graphs, which form a tree and whose only node features are their 3D location in space. Thus, we designed five novel augmentation techniques specifically for spatially embedded graphs such as neural morphologies or botanical trees: subsampling, rotation, node jittering, branch deletion and graph translation (see Section 3).

Table 2: Ablation results on the BBP dataset. Cell-type classification accuracy [%] of our model and ablations averaged over three random seeds and given as mean ± standard deviation. "–" indicates removal of an augmentation or model component.

| Model                   | Accuracy |
|-------------------------|----------|
| GraphDINO               | 65.8 ± 1 |
| – 3D rot.               | 55.4 ± 1 |
| – node jitter           | 64.8 ± 2 |
| – graph translation     | 55.6 ± 2 |
| – drop branch           | 64.6 ± 1 |
| subsampling: 50 nodes   | 60.0 ± 0 |
| subsampling: 200 nodes  | 62.0 ± 3 |
| – adjacency (γ = 0)     | 62.8 ± 1 |
| – attention (λ = 0)     | 59.8 ± 2 |

To test the importance of the individual graph augmentations, we perform a set of ablation experiments using the BBP dataset. We remove one augmentation from our model at a time and evaluate the leave-one-out 5-nearest neighbor accuracy when predicting the expert labels. For the subsampling augmentation we vary the number of retained nodes. Our full model achieves an average accuracy of 65.8% when classifying the excitatory cells into the 12 expert labels (Appendix C.5.1). When removing individual data augmentations, the accuracy decreases (Tab. 2). Especially 3D rotation and graph translation are important augmentation strategies, whose removal leads to substantial performance deterioration.

## 5.3 Message-Passing Is Not Sufficient For Long-Range Graphs

Next, we investigate whether classical message-passing is sufficient to process graphs with long-ranging branches such as neuronal morphologies. To this end, we train GraphDINO using only message passing while removing the global attention (setting λ = 0 in Eq. 1). This decreases the performance to 59.8% (Tab. 2). Additionally, we train InfoGraph (Sun et al., 2020) as a baseline for an unsupervised method that learns graph-level representations and uses GNN message-passing. InfoGraph achieves an accuracy of 48.2% (Tab. 3). Thus, we conclude that using global attention is beneficial in situations where graphs contain long-range branches. Global attention enables information flow between distant (in terms of graph connectivity) nodes that might be close in space or function.
Table 3: Cell-type classification accuracy [%] on the BBP dataset. Performance of our model and InfoGraph (Sun et al., 2020) averaged over three random seeds and given as mean ± standard deviation.

| Model                        | Accuracy |
|------------------------------|----------|
| InfoGraph (Sun et al., 2020) | 48.2 ± 0 |
| GraphDINO                    | 65.8 ± 1 |

## 5.4 Morphological Embeddings Differentiate Between Spiny/Aspiny Cells And Layers

To evaluate the capability of GraphDINO to capture essential features of 3D neuronal shapes purely data-driven, we train GraphDINO on the BBP dataset and use t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008) to map the learned embeddings of the BBP dataset into 2D (Fig. 3) for visualization.

Figure 3: A. t-SNE embedding (perplexity=30) of latent representations of 3D neuronal morphologies of the BBP dataset showing a separation into spiny and aspiny neurons (n = 616). B. t-SNE embedding (perplexity=30) of the latent representations of the morphologies of the spiny neurons colored by the cluster found by our model (n = 286). C. Relative cortical layer distribution of neurons per cluster across L2/3-L6. Higher values are shown in red. D. As B, but neurons colored by their total apical length, revealing an organization of the latent space in terms of morphological properties. E. Log-likelihood of the Gaussian Mixture Model on a held-out test set for spiny neurons, used to select the optimal number of clusters. F. Example neurons for each cluster, with apical dendrites shown in lighter color and basal dendrites in darker color. The soma is indicated by a black circle.

A clear separation between spiny and aspiny neurons can be observed (see Fig. 3A), indicating that our learned representation captures meaningful biological differences of the neuronal morphologies. Interestingly, some of the spiny neurons end up in the aspiny cluster (Fig. 3A bottom right). These are inverted L6 neurons (Fig. 3F Cluster 15), whose size and dendritic tree are morphologically similar to the surrounding aspiny neurons, which also show a downwards bias in the dendritic tree.

## 5.5 Morphological Embeddings Recover Known Excitatory Cell Types

To identify cell types, we fit a Gaussian mixture model (GMM) with a diagonal covariance matrix to our learned representation of the spiny neurons. To determine the number of clusters, we fit 1,000 GMMs with different random seeds using five-fold cross-validation for 2-30 clusters. We average the log-likelihood for each number of clusters over repetitions and folds. We find n = 15 to be the optimal number of clusters (Fig. 3E). Having identified the optimal number of clusters, we re-fit the GMM to the full dataset including all spiny neurons. To avoid picking a particularly good or bad random clustering, we fit 100 models and choose the one that has the highest average adjusted rand index (ARI) to all other clusterings. The spiny neurons cluster nicely into different shapes and layers (Fig. 3F and Appendix Fig. D.6), retrieving known excitatory cell types.
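Model selection and the consensus re-fit can be sketched with scikit-learn as follows (with far fewer repetitions than the 1,000 GMMs used in the paper, and with function names that are ours):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def select_n_clusters(z, n_range=range(2, 31), n_seeds=10, n_folds=5):
    """Pick the number of GMM components by cross-validated log-likelihood."""
    scores = {}
    for n in n_range:
        lls = []
        for seed in range(n_seeds):
            for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(z):
                gmm = GaussianMixture(n, covariance_type="diag", random_state=seed)
                gmm.fit(z[train])
                lls.append(gmm.score(z[test]))  # mean held-out log-likelihood
        scores[n] = np.mean(lls)
    return max(scores, key=scores.get)

def consensus_clustering(z, n_components, n_models=100):
    """Fit several GMMs and keep the one closest (mean ARI) to all others."""
    labels = [GaussianMixture(n_components, covariance_type="diag",
                              random_state=s).fit_predict(z) for s in range(n_models)]
    mean_ari = [np.mean([adjusted_rand_score(li, lj) for lj in labels])
                for li in labels]
    return labels[int(np.argmax(mean_ari))]
```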
The first four spiny clusters contain mainly cells from layer 2/3 (L2/3) (Fig. 3C) and group them by morphology: Cluster 1 contains wide and short neurons from layer 2/3, while L2/3 neurons in cluster 4 are more elongated with a less pronounced apical tuft (Fig. 3F). Clusters 5-7 group cells from layer 4 (L4) (Fig. 3C), differentiating between spiny stellate cells (cluster 5) and atufted L4 neurons (cluster 7) (Fig. 3F). Within layers 5 and 6, neurons are grouped by their size, amount of apical tuft and obliques, as well as the direction of the apical-like dendrites: for instance, cluster 10 groups thick-tufted pyramidal cells from layer 5 and cluster 15 contains inverted L6 neurons (Fig. 3F). Most clusters show a strong preference for grouping cells whose soma position is in a certain layer (Fig. 3C), even though the model - in contrast to the experts who labeled the cells - does not have access to anatomical knowledge such as cortical layer of origin. One exception are pyramidal L6 cells with upward-directed apicals, which separate less well and rather get clustered with L4 and L5 neurons of the same size and similar morphological shape. This is to be expected, as the model only learns to differentiate between different morphologies but has no knowledge about anatomical features such as soma depth.

## 5.6 Data-Driven Clusters Are Consistent With Expert Labels

To compare our data-driven features to manually-designed features, we compute the adjusted rand index (ARI) between our clusters and the expert-identified cell types on the BBP dataset and compare the performance to the clusters based on morphometrics obtained by Gouwens et al. (2019). We achieve an ARI of 0.31 when clustering neurons across all cortical layers together, while using significantly less prior information than Gouwens et al. (2019). In comparison, Gouwens et al. (2019) reached an ARI of 0.27 with a feature space specifically designed for spiny neurons and by splitting the neurons into their cortical layer of origin before performing the clustering. This approach reduces the complexity of the problem significantly, since misassignments across layers are excluded by construction. When performing the clustering like Gouwens et al. (2019) only within the layers, we achieve an ARI of 0.46 (Tab. 4).

Table 4: Adjusted rand index (ARI) between identified clusters and expert labels, for the learned embeddings from GraphDINO and the manually-defined features by Gouwens et al. (2019), when performing the clustering within cortical layers and across the whole cortex.

| Clustering    | Features              | ARI  |
|---------------|-----------------------|------|
| across layers | GraphDINO             | 0.31 |
| within layers | Gouwens et al. (2019) | 0.27 |
|               | GraphDINO             | 0.46 |
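One plausible way to compute the within-layer score in Tab. 4 is to cluster each layer separately and evaluate a single ARI over the combined assignment; this is our reading of the protocol, not code from the paper, and `cluster_fn` is a placeholder for, e.g., the GMM consensus clustering above restricted to one layer.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def within_layer_ari(layers, expert_labels, cluster_fn):
    """Cluster each cortical layer separately (as in Gouwens et al., 2019) and
    compute one ARI over all cells; per-layer cluster ids are offset so they
    remain distinct across layers. cluster_fn(idx) returns integer labels
    for the given subset of cells."""
    pred = np.empty(len(expert_labels), dtype=int)
    offset = 0
    for layer in np.unique(layers):
        idx = np.where(layers == layer)[0]
        labels = cluster_fn(idx)
        pred[idx] = labels + offset
        offset += labels.max() + 1
    return adjusted_rand_score(expert_labels, pred)
```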
## 5.7 Morphological Embeddings Encode Distinct Morphological Features

Laturnus & Berens (2021) classified the M1 EXC dataset (Scala et al., 2021) into three classes based on the presence of an apical tuft (tufted, untufted and others). Following their work, we train a 5-nearest-neighbor classifier on our learned embeddings and show that GraphDINO learns meaningful features to differentiate between the three classes (Tab. 5). Our method outperforms their MorphVAE method as well as a baseline using density maps of the neurons (Laturnus & Berens, 2021). This dataset is rather small and Laturnus & Berens (2021) used only a single train/test split. To estimate how reliable the reported accuracy metrics are, we compute the cross-validated accuracy across multiple different train/test splits, which shows a variability across splits of ±9% (standard deviation; Tab. 5). We conclude that GraphDINO likely outperforms MorphVAE trained without supervision and performs approximately on par with MorphVAE trained fully supervised.

Table 5: Balanced accuracy [%] on the M1 EXC test set using the learned embeddings from either GraphDINO or MorphVAE (Laturnus & Berens, 2021) (mean ± SEM across three runs and mean ± SD across three data splits, respectively). Percentages in brackets indicate the amount of labels used during training for MorphVAE. GraphDINO is trained without labels.

| Model            | Accuracy over runs (mean ± SEM) | Accuracy over splits (mean ± SD) |
|------------------|---------------------------------|----------------------------------|
| MorphVAE (100%)  | 70 ± 5                          | -                                |
| MorphVAE (0%)    | 58 ± 7                          | -                                |
| Density Map (0%) | 60                              | -                                |
| GraphDINO (0%)   | 68 ± 5                          | 71 ± 9                           |

## 5.8 Morphological Embeddings Encode Cortical Regions

TreeMoCo (Chen et al., 2022b) is an LSTM-based model that was concurrently proposed to perform unsupervised representation learning on neuronal graphs. The model takes as input simplified skeletons of neurons that contain only the branching points as nodes. They compute 26 manually-selected features in addition to the xyz-coordinates as node features to describe the morphology of the skeletons between branching points. TreeMoCo is trained on a combination of the datasets BIL, JML and ACT and quantitatively evaluated on the task of predicting the anatomical brain region or cortical layer of origin of the neurons on a subset of the neuronal classes. Chen et al. (2022b) remove 955 neurons from the dataset due to "reconstruction errors" and evaluate on an 80-20% training-test split. Since we did not have access to the exact neurons used for training and evaluation, both in terms of the split and which neurons were removed, we trained unsupervised on the joint dataset and evaluated using 5-fold cross-validation, i.e. splitting the data into five folds, evaluating each fold given the other four folds as training data, and reporting the average performance across folds. For further details regarding the evaluation, see Appendix C.5.4.

GraphDINO performs on par with or better than TreeMoCo and GraphCL when predicting the origin of neurons (Tab. 6). Note that GraphDINO is fully data-driven, while TreeMoCo and GraphCL additionally employ manually extracted node features.

Table 6: Cell-type classification on the TreeMoCo dataset. Performance of our model (GraphDINO) averaged over three random seeds and given as mean ± standard deviation. TreeMoCo and GraphCL performance given as the average accuracy over the last five epochs per dataset. *Results taken from Fig. C1 of Chen et al. (2022b).

| Model     | BIL-6  | JML-4  | ACT    | ACT spiny |
|-----------|--------|--------|--------|-----------|
| TreeMoCo* | 76.9   | 59.7   | 53.9   | -         |
| GraphCL*  | 66.3   | 50.6   | 55.6   | -         |
| GraphDINO | 79 ± 1 | 63 ± 6 | 54 ± 5 | 73 ± 6    |

Note that the evaluation reported by Chen et al. (2022b) uses excitatory and inhibitory neurons at the same time. With this approach, morphologies of neurons of the "same" class can look very different (Fig. C.5). A better proxy task to evaluate the encoding capabilities of the models would be to restrict the evaluation to only excitatory cells. For the ACT dataset this information is available. We therefore repeated the evaluation only on this subset (Tab. 6), which should provide a more meaningful baseline for future studies.
## 6 Limitations

GraphDINO is designed to learn graph-level representations of spatially-embedded tree-structured graphs using self-supervised learning. As we focus on graphs where each node has a location in 3D space and design the data augmentations accordingly, the approach is not expected to work out-of-the-box on graphs that have different node features. AC-Attention is likely to be beneficial in many other scenarios as well, since it can smoothly interpolate between message passing and global attention based on node similarity, but this hypothesis remains to be tested empirically. Data augmentations would need to be adapted to the respective data domain and the invariances that should be encoded, or supervised learning would have to be used instead. The attention mechanism is not tied in any way to the self-supervised learning objective we use.

We encode the desired invariances for neuronal morphologies in GraphDINO via tailored data augmentations. Rotation and translation equivariance could alternatively be built into the architecture of the encoders explicitly. Recent works have proposed such architectures for GNNs (Satorras et al., 2021), as well as for transformers (Fuchs et al., 2020). Adapting these for AC-Attention would be an interesting future research direction.

Computing the full transformer attention matrix has quadratic complexity and might therefore be computationally infeasible for graphs with a large number of nodes. We solve this problem here by subsampling the neuronal skeletons to a smaller number of nodes, which has the added benefit of being a strong data augmentation that keeps the global morphology of the neuron intact while altering the local structure between the two views. However, this approach might not be suitable for all graph datasets. There has been some work on building attention mechanisms that scale linearly with the number of input tokens (Wang et al., 2020; Kitaev et al., 2020; Choromanski et al., 2021), but integrating them with message passing might not be straightforward.

Self-supervised learning has been shown to be most successful when training on large datasets (Bao et al., 2022; Oquab et al., 2023). We equipped GraphDINO with appropriate inductive biases to make it possible to learn on the smaller publicly available neuronal datasets that have been used in previous studies. Nevertheless, applying GraphDINO to neuronal datasets with more samples will likely improve its learning capabilities. With the continual development of better imaging techniques and initiatives like MICrONS (MICrONS Consortium et al., 2023), more large-scale datasets of neuronal morphologies will become available to test this hypothesis.

In terms of neuronal cell type classification, we did not take into account some features that have previously been used to differentiate cell types, such as the shape of the soma (as formerly used for GABAergic interneurons) or spine densities (Ascoli et al., 2008). Future work could focus on incorporating them into our framework. Depending on the type of feature, they could be easily integrated by adding them as features of the graph or as additional node features.

## 7 Conclusion

Increasingly large and complex datasets of neurons have given rise to the need for unbiased and quantitative approaches to cell type classification.
We have demonstrated one such approach that is purely data-driven and self-supervised, and that learns a low-dimensional representation of the 3D shape of a neuron. By using self-supervised learning, we do not pre-specify which cell types to learn and which features to use, thereby reducing bias in the classification process and opening up the possibility to discover new cell types. A similar approach can also be useful in other domains beyond neuroscience, where samples of the dataset are spatial graphs and graph-level embeddings are desired, such as tree classification in forestry. ## Acknowledgments We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS), Tübingen, for supporting Marissa A. Weis. This project has received funding from the European Research Council (ERC) under the European Union's Horizon Europe research and innovation program (Grant agreement No. 101041669). ## References Allen Institute. Allen cell types database technical white paper: Cell morphology and histology. 2016. URL http://help.brain-map.org/download/attachments/8323525/CellTypes_Morph_Overview.pdf. Rubén Armañanzas and Giorgio A. Ascoli. Towards the automatic classification of neurons. Trends in Neurosciences, 38(5):307–318, 2015. Giorgio Ascoli, Lidia Alonso-Nanclares, Stewart Anderson, Germán Barrionuevo, Ruth Benavides-Piccione, Andreas Burkhalter, Gyorgy Buzsáki, Bruno Cauli, Javier Defelipe, and Alfonso Fairen. Petilla terminology: nomenclature of features of GABAergic interneurons of the cerebral cortex. *Nature reviews.* Neuroscience, 9:557–568, 2008. Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEit: BERT pre-training of image transformers. In Proc. of the International Conf. on Learning Representations (ICLR), 2022. Cathryn Cadwell, Athanasia Palasantza, Xiaolong Jiang, Philipp Berens, Qiaolin Deng, Marlene Yilmaz, Jacob Reimer, Shan Shen, Matthias Bethge, Kimberley Tolias, Rickard Sandberg, and Andreas Tolias. Electrophysiological, transcriptomic and morphologic profiling of single neurons using patch-seq. Nature Biotechnology, 34, 2015. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9650–9660, 2021. Dexiong Chen, Leslie O'Bray, and Karsten Borgwardt. Structure-aware transformer for graph representation learning, 2022a. Hanbo Chen, Jiawei Yang, Daniel Maxim Iascone, Lijuan Liu, Lei He, Hanchuan Peng, and Jianhua Yao. Treemoco: Contrastive neuron morphology representation learning. In *Advances in Neural Information* Processing Systems (NeurIPS), 2022b. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *Proc. of the International Conf. on Machine learning (ICML)*, 2020. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021. Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, and Adrian Weller. Rethinking attention with performers. In Proc. of the International Conf. on Learning Representations (ICLR), 2021. 
Javier Defelipe, Pedro López-Cruz, Ruth Benavides-Piccione, Concha Bielza, Pedro Larranaga, Stewart Anderson, Andreas Burkhalter, Bruno Cauli, Alfonso Fairen, Dirk Feldmeyer, Gord Fishell, David Fitzpatrick, Tamás Freund, Guillermo Gonzalez Burgos, Shaul Hestrin, Sean Hill, Patrick Hof, Josh Huang, Edward Jones, and Giorgio Ascoli. New insights into the classification and nomenclature of cortical gabaergic interneurons. *Nature reviews. Neuroscience*, 14, 2013. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv.org*, 2020. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan AspuruGuzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NeurIPS), volume 28, 2015. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv.org*, 2012.09699, 2021. Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. *arXiv.org*, 2003.00982, 2020. Leila Elabbady, Sharmishtaa Seshamani, Shang Mu, Gayathri Mahalingam, Casey M Schneider-Mizell, Agnes Bodor, J Alexander Bae, Derrick Brittain, JoAnn Buchanan, Daniel J Bumbarger, et al. Quantitative census of local somatic features in mouse visual cortex. *bioRxiv*, 2022. Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems (NeurIPS), volume 33, pp. 1970–1981, 2020. Rohan Gala, Agata Budzillo, Fahimeh Baftizadeh, Jeremy Miller, Nathan Gouwens, Anton Arkhipov, Gabe Murphy, Bosiljka Tasic, Hongkui Zeng, Michael Hawrylycz, et al. Consistent cross-modal identification of cortical neurons with coupled autoencoders. *Nature Computational Science*, 1(2):120–127, 2021. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In *Proc. of the International Conf. on Machine learning (ICML)*, pp. 1263–1272, 2017. Nathan Gouwens, Staci Sorensen, Jim Berg, Changkyu Lee, Tim Jarsky, Jonathan Ting, Susan Sunkin, David Feng, Costas Anastassiou, Eliza Barkan, Kris Bickley, Nicole Blesie, Thomas Braun, Krissy Brouner, Agata Budzillo, Shiella Caldejon, Tamara Casper, Dan Castelli, Peter Chong, and Christof Koch. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. *Nature Neuroscience*, 22, 2019. Nathan W. Gouwens, Staci A. Sorensen, Fahimeh Baftizadeh, Agata Budzillo, Brian R. Lee, Tim Jarsky, Lauren Alfiler, Katherine Baker, Eliza Barkan, Kyla Berry, Darren Bertagnolli, Kris Bickley, Jasmine Bomben, Thomas Braun, Krissy Brouner, Tamara Casper, Kirsten Crichton, Tanya L. Daigle, Rachel Dalley, Rebecca A. 
de Frates, Nick Dee, Tsega Desta, Samuel Dingman Lee, Nadezhda Dotson, Tom Egdorf, Lauren Ellingwood, Rachel Enstrom, Luke Esposito, Colin Farrell, David Feng, Olivia Fong, Rohan Gala, Clare Gamlin, Amanda Gary, Alexandra Glandon, Jeff Goldy, Melissa Gorham, Lucas Graybuck, Hong Gu, Kristen Hadley, Michael J. Hawrylycz, Alex M. Henry, DiJon Hill, Madie Hupp, Sara Kebede, Tae Kyung Kim, Lisa Kim, Matthew Kroll, Changkyu Lee, Katherine E. Link, Matthew Mallory, Rusty Mann, Michelle Maxwell, Medea McGraw, Delissa McMillen, Alice Mukora, Lindsay Ng, Lydia Ng, Kiet Ngo, Philip R. Nicovich, Aaron Oldre, Daniel Park, Hanchuan Peng, Osnat Penn, Thanh Pham, Alice Pom, Zoran Popović, Lydia Potekhina, Ramkumar Rajanbabu, Shea Ransford, David Reid, Christine Rimorin, Miranda Robertson, Kara Ronellenfitch, Augustin Ruiz, David Sandman, Kimberly Smith, Josef Sulc, Susan M. Sunkin, Aaron Szafer, Michael Tieu, Amy Torkelson, Jessica Trinh, Herman Tung, Wayne Wakeman, Katelyn Ward, Grace Williams, Zhi Zhou, Jonathan T. Ting, Anton Arkhipov, Uygar Sümbül, Ed S. Lein, Christof Koch, Zizhen Yao, Bosiljka Tasic, Jim Berg, Gabe J. Murphy, and Hongkui Zeng. Integrated morphoelectric and transcriptomic classification of cortical gabaergic cells. *Cell*, 183(4):935– 953.e19, 2020. William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1025–1035, 2017. Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In Proc. of the International Conf. on Machine learning (ICML), volume 119, pp. 4116–4126, 2020. Lida Kanari, Pawel Dlotko, Martina Scolamiero, Ran Levi, Julian C. Shillcock, Kathryn Hess, and Henry Markram. A topological representation of branching neuronal morphologies. *Neuroinformatics*, 16:3 - 13, 2017. Lida Kanari, Srikanth Ramaswamy, Ying Shi, Sebastien Morand, Julie Meystre, Rodrigo Perin, Marwan Abdellah, Yun Wang, Kathryn Hess, and Henry Markram. Objective morphological classification of neocortical pyramidal cells. *Cerebral Cortex*, 29(4):1719–1735, 2019. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *Proc. of the International Conf. on Learning Representations (ICLR)*, 2015. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proc. of the International Conf. on Learning Representations (ICLR), 2017. Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In *Proc. of the* International Conf. on Learning Representations (ICLR), 2020. Johannes Klicpera, Stefan Weiß enberger, and Stephan Günnemann. Diffusion improves graph learning. In Advances in Neural Information Processing Systems (NeurIPS), volume 32, 2019. Sophie C. Laturnus and Philipp Berens. Morphvae: Generating neural morphologies from 3d-walks using a variational autoencoder with spherical latent space. In Proc. of the International Conf. on Machine learning (ICML), volume 139, pp. 6021–6031, 2021. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. Proc. of the International Conf. on Learning Representations (ICLR), 2016. Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. *arXiv.org*, 1608.03983, 2016. Yanbin Lu, Lawrence Carin, Ronald Coifman, William Shain, and Badrinath Roysam. 
Quantitative arbor analytics: unsupervised harmonic co-clustering of populations of brain cell arbors based on l-measure. Neuroinformatics, 13(1):47–63, 2015. Henry Markram, Eilif Muller, Srikanth Ramaswamy, Michael Reimann, Marwan Abdellah, Carlos Aguado, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, Atenekeng Kahou Guy Antoine, Thomas K Berger, Ahmet Bilgili, Nenad Buncic, Athanassia Chalimourda, Giuseppe Chindemi, Jean-Denis Courcol, Fabien Delalondre, Vincent Delattre, and Felix Schürmann. Reconstruction and simulation of neocortical microcircuitry. *Cell*, 163:456–492, 2015. Grégoire Mialon, Dexiong Chen, Margot Selosse, and Julien Mairal. Graphit: Encoding graph structure in transformers, 2021. The MICrONS Consortium, J. Alexander Bae, Mahaly Baptiste, Caitlyn A. Bishop, Agnes L. Bodor, Derrick Brittain, JoAnn Buchanan, Daniel J. Bumbarger, Manuel A. Castro, Brendan Celii, Erick Cobos, Forrest Collman, Nuno Maçarico da Costa, Sven Dorkenwald, Leila Elabbady, Paul G. Fahey, Tim Fliss, Emmanouil Froudarakis, Jay Gager, Clare Gamlin, William Gray-Roncal, Akhilesh Halageri, James Hebditch, Zhen Jia, Emily Joyce, Justin Joyce, Chris Jordan, Daniel Kapner, Nico Kemnitz, Sam Kinn, Lindsey M. Kitchell, Selden Koolman, Kai Kuehner, Kisuk Lee, Kai Li, Ran Lu, Thomas Macrina, Gayathri Mahalingam, Jordan Matelsky, Sarah McReynolds, Elanine Miranda, Eric Mitchell, Shanka Subhra Mondal, Merlin Moore, Shang Mu, Taliah Muhammad, Barak Nehoran, Oluwaseun Ogedengbe, Christos Papadopoulos, Stelios Papadopoulos, Saumil Patel, Xaq Pitkow, Sergiy Popovych, Anthony Ramos, R. Clay Reid, Jacob Reimer, Patricia K. Rivlin, Victoria Rose, Casey M. Schneider-Mizell, H. Sebastian Seung, Ben Silverman, William Silversmith, Amy Sterling, Fabian H. Sinz, Cameron L. Smith, Shelby Suckow, Marc Takeno, Zheng H. Tan, Andreas S. Tolias, Russel Torres, Nicholas L. Turner, Edgar Y. Walker, Tianyu Wang, Adrian Wanner, Brock A. Wester, Grace Williams, Sarah Williams, Kyle Willie, Ryan Willie, William Wong, Jingpeng Wu, Chris Xu, Runzhe Yang, Dimitri Yatsenko, Fei Ye, Wenjing Yin, Rob Young, Szi chieh Yu, Daniel Xenes, and Chi Zhang. Functional connectomics spanning multiple areas of mouse visual cortex. *bioRxiv*, 2023. doi: 10.1101/2021.07.28.454025. Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu, and Shantanu Jaiswal. graph2vec: Learning distributed representations of graphs. *arXiv.org*, 1707.05005, 2017. Marcel Oberlaender, Christiaan P. J. de Kock, Randy M. Bruno, Alejandro Ramirez, Hanno S. Meyer, Vincent J. Dercksen, Moritz Helmstaedter, and Bert Sakmann. Cell Type–Specific Three-Dimensional Structure of Thalamocortical Circuits in a Column of Rat Vibrissal Cortex. *Cerebral Cortex*, 22(10): 2375–2391, 2012. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv.org*, 1807.03748, 2018. Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), 2019. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research (JMLR)*, 12:2825–2830, 2011. Hanchuan Peng, Peng Xie, Lijuan Liu, Xiuli Kuang, Yimin Wang, Lei Qu, Hui Gong, Shengdian Jiang, Anan Li, Zongcai Ruan, Liya Ding, Zizhen Yao, Chao Chen, Mengya Chen, Tanya Daigle, Rachel Dalley, Zhangcan Ding, Yanjun Duan, Aaron Feiner, and Hongkui Zeng. Morphological diversity of single neurons in molecularly defined cell types. *Nature*, 598:174–181, 10 2021. doi: 10.1038/s41586-021-03941-1. Sridevi Polavaram, Todd A Gillette, Ruchi Parekh, and Giorgio A Ascoli. Statistical analysis and data mining of digital reconstructions of dendritic morphologies. *Frontiers in neuroanatomy*, 8:138, 2014. Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. Gcc: Graph contrastive coding for graph neural network pre-training. In Proc. of Conf. on Knowledge Discovery and Data Mining (KDD), pp. 1150–1160, 2020. Srikanth Ramaswamy, Jean-Denis Courcol, Marwan Abdellah, Stanislaw R. Adaszewski, Nicolas Antille, Selim Arsever, Guy Atenekeng, Ahmet Bilgili, Yury Brukau, Athanassia Chalimourda, Giuseppe Chindemi, Fabien Delalondre, Raphael Dumusc, Stefan Eilemann, Michael Emiel Gevaert, Padraig Gleeson, Joe W. Graham, Juan B. Hernando, Lida Kanari, Yury Katkov, Daniel Keller, James G. King, Rajnish Ranjan, Michael W. Reimann, Christian Rössert, Ying Shi, Julian C. Shillcock, Martin Telefont, Werner Van Geit, Jafet Villafranca Diaz, Richard Walker, Yun Wang, Stefano M. Zaninetta, Javier DeFelipe, Sean L. Hill, Jeffrey Muller, Idan Segev, Felix Schürmann, Eilif B. Muller, and Henry Markram. The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex. *Frontiers in Neural Circuits*, 9: 44, 2015. Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer, 2022. Santiago Ramón y Cajal. *Histologie du système nerveux de l'homme et des vertébrés.* 1911. Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. arXiv.org, 2102.09844, 2021. Federico Scala, Dmitry Kobak, Matteo Bernabucci, Yves Bernaerts, Cathryn Cadwell, Jesus Castro, Leonard Hartmanis, Xiaolong Jiang, Sophie Laturnus, Elanine Miranda, Shalaka Mulherkar, Zheng Tan, Zizhen Yao, Hongkui Zeng, Rickard Sandberg, Philipp Berens, and Andreas Tolias. Phenotypic variation of transcriptomic cell types in mouse motor cortex. *Nature*, 598:1–7, 2021. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE Transactions on Neural Networks*, 20(1):61–80, 2009. Philipp Schubert, Sven Dorkenwald, Michal Januszewski, Viren Jain, and Joergen Kornfeld. Learning cellular morphology with neural networks. 
*Nature Communications*, 10:2736, 2019. Ruggero Scorcioni, Sridevi Polavaram, and Giorgio A Ascoli. L-measure: a web-accessible tool for the analysis, comparison and search of digital reconstructions of neuronal morphologies. *Nature protocols*, 3 (5):866–876, 2008. Dominik Seidel, Yonten Dorji, Bernhard Schuldt, Emilie Isasa, and Klaus Körber. Dataset: New insights into tree architecture from mobile laser scanning and geometry analysis. *Dryad*, 2021. doi: https://doi. org/10.5061/dryad.2fqz612n6. Sharmishtaa Seshamani, Leila Elabbady, Casey Schneider-Mizell, Gayathri Mahalingam, Sven Dorkenwald, Agnes Bodor, Thomas Macrina, Daniel Bumbarger, JoAnn Buchanan, Marc Takeno, Wenjing Yin, Derrick Brittain, Russel Torres, Daniel Kapner, Kisuk Lee, Ran Lu, Jingpeng Wu, Nuno daCosta, R. Clay Reid, and Forrest Collman. Automated neuron shape analysis from electron microscopy. *arXiv.org*, 2006.00100, 2020. D. A. Sholl. Dendritic organization in the neurons of the visual and motor cortices of the cat. Journal of Anatomy, 87:387–406, 1953. Fan-Yun Sun, Jordan Hoffmann, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In *Proc. of the International Conf. on* Learning Representations (ICLR), 2020. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. In Proc. of the International Conf. on Learning Representations (ICLR), 2022. Harry BM Uylings and Jaap Van Pelt. Measures for quantifying dendritic arborizations. *Network: computation in neural systems*, 13(3):397, 2002. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research (JMLR), 9(86):2579–2605, 2008. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is All you Need. Advances in Neural Information Processing Systems (NeurIPS), 2017. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In *Proc. of the International Conf. on Learning Representations (ICLR)*, 2018. Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In *Proc. of the International Conf. on Learning Representations (ICLR)*, 2019. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. *arXiv.org*, 2020. Yun Wang. A simplified morphological classification scheme for pyramidal cells in six layers of primary somatosensory cortex of juvenile rats. *IBRO Reports*, 5, 2018. Johan Winnubst, Erhan Bas, Tiago A. Ferreira, Zhuhao Wu, Michael N. Economo, Patrick Edson, Ben J. Arthur, Christopher Bruns, Konrad Rokicki, David Schauder, Donald J. Olbris, Sean D. Murphy, David G. Ackerman, Cameron Arshadi, Perry Baldwin, Regina Blake, Ahmad Elsayed, Mashtura Hasan, Daniel Ramirez, Bruno Dos Santos, Monet Weldon, Amina Zafar, Joshua T. Dudman, Charles R. Gerfen, Adam W. Hantman, Wyatt Korff, Scott M. Sternson, Nelson Spruston, Karel Svoboda, and Jayaram Chandrashekar. Reconstruction of 1,000 projection neurons reveals new cell types and organization of long-range connectivity in the mouse brain. *Cell*, 179(1):268–281.e13, 2019. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2019.07.042. 
Minghao Xu, Hang Wang, Bingbing Ni, Hongyu Guo, and Jian Tang. Self-supervised graph-level representation learning with local and global structure. In Proc. of the International Conf. on Machine learning (ICML), volume 139, pp. 11548–11558, 2021.

Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.

Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. Graph-bert: Only attention is needed for learning graph representations. *arXiv.org*, 2001.05140, 2020.

Jie Zhao, Xuejin Chen, Zhiwei Xiong, Zheng-Jun Zha, and Feng Wu. Graph representation learning for large-scale neuronal morphological analysis. *IEEE Transactions on Neural Networks and Learning Systems*, pp. 1–12, 2022. doi: 10.1109/TNNLS.2022.3204686.

Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Graph contrastive learning with adaptive augmentation. In *Proceedings of the Web Conference 2021*, pp. 2069–2080, 2021.

# Appendices

## A Synthetic Graph Dataset

To test whether our model is able to use information encoded in the connectivity of the graphs, we generate a synthetic graph dataset with five classes that differ in connectivity while having similar node locations. We create this synthetic graph dataset by uniformly sampling 20 mean node positions in 3D space in $[-50, 50]^3$. The mean node locations are shared between the five classes to ensure that the presence of a specific node does not encode class membership. For each class, we construct a distinct graph connectivity as follows: We first randomly sample a root node and two children, then we recursively sample one or two children per child (with a branching probability of 50%) until all 20 nodes are connected. Using this method, we generate 100,000 graphs for the training set and 10,000 graphs each for the validation and test sets (with 1/5 class probability) by sampling node positions from $\mathcal{N}(\mu, \sigma)$, with µ equal to the above drawn means and σ = 10.

Figure A.1: Example classes 1 (A) and 2 (B) of the synthetic graph dataset. Samples within one class share graph connectivity. Samples between classes share mean node locations. Node locations are drawn from $\mathcal{N}(\mu, \sigma)$.

Tab. C.1, Tab. C.2 and Tab. C.3 list the hyperparameters used for experiments on the synthetic graphs.

t-SNE of the learned latent spaces: To visualize the learned latent space, we perform t-SNE with a perplexity of 30 to reduce the embedding to two dimensions (Fig. A.2).

Linear classifier: We train a supervised linear classifier on the extracted embeddings of GraphDINO for 100 epochs with a learning rate of 0.01. To train the classifier, we use the test set that has not been used in training GraphDINO, and split it into 8,000 samples for training the classifier and 2,000 samples for evaluating held-out test set accuracy.

Figure A.2: t-SNE embedding of the synthetic graph dataset colored by class membership for A five runs of GraphDINO with GraphAttention, B five runs of GraphDINO with regular transformer attention, and C five runs of GraphDINO with transformer attention and without positional encoding.
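The generation process described above could look as follows. This is a sketch under our reading of the recursion; exact sampling details, e.g. the order in which frontier nodes receive children, are assumptions.

```python
import numpy as np

def make_class_connectivity(n_nodes=20, p_branch=0.5, rng=None):
    """Random tree: the root gets two children, then each frontier node
    receives one or two children (50% branching) until all nodes are attached."""
    rng = rng if rng is not None else np.random.default_rng()
    order = list(rng.permutation(n_nodes))
    root = order[0]
    edges = [(root, order[1]), (root, order[2])]
    frontier, nxt = [order[1], order[2]], 3
    while nxt < n_nodes:
        parent = frontier.pop(0)
        for _ in range(2 if rng.random() < p_branch else 1):
            if nxt == n_nodes:
                break
            edges.append((parent, order[nxt]))
            frontier.append(order[nxt])
            nxt += 1
    return edges

def sample_graph(means, sigma=10.0, rng=None):
    """Node positions for one sample: N(mu, sigma) around the shared means."""
    rng = rng if rng is not None else np.random.default_rng()
    return means + rng.normal(0.0, sigma, size=means.shape)

rng = np.random.default_rng(0)
means = rng.uniform(-50, 50, size=(20, 3))             # shared across all five classes
class_edges = [make_class_connectivity(rng=rng) for _ in range(5)]
coords = sample_graph(means, rng=rng)                  # one sample of, say, class 0
```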
## B Application To Different Domain: Tree Morphologies

We developed a model that is able to learn graph-level embeddings of spatially-embedded graphs. So far, we have shown that it yields meaningful cell-type clusterings of neuronal morphologies. To show that GraphDINO is applicable to data domains beyond neuronal morphologies, we train our model on 3D skeletons of individual trees (from a forest). The Trees dataset (Seidel et al., 2021) is a highly diverse dataset comprised of 391 skeletons of trees stemming from 39 different genera and 152 species or breedings. The skeletons were extracted from LIDAR scans of the trees. Nodes of the skeletons have a 3D coordinate. We normalize the data such that the lowest point (start of the tree trunk) is at (0, 0, 0). GraphDINO learns a latent space that orders tree morphologies with respect to their size, crown size and crown shape (Fig. B.3, Fig. D.7).

Figure B.3: A. t-SNE embedding of the Trees dataset colored by cluster membership based on GMM clustering with 15 clusters. B. Three example tree morphologies are shown for different clusters.

## C Extended Methods

## C.1 Background: DINO

DINO (Fig. C.4) (Caron et al., 2021) is a method for self-supervised image representation learning. Similar to previous approaches, it consists of two image encoders which process different views of an image. These views are obtained by image augmentation. The training objective is to enforce both encoders to generate the same output distribution when the same input image is shown. This can be implemented by the cross-entropy loss function $\sum_i -q_i \log p_i$. Both encoders are transformers that share the architecture but differ in their weights: one of the encoders is the student encoder, which receives weight updates through gradients of the training objective, while the other encoder's (teacher) weights are an exponential moving average of the student's weights. In contrast to some other self-supervised methods, DINO does not require contrastive (negative) samples. To prevent collapse, i.e. predicting the same distribution independent of the input image, two additional operations on the teacher's predictions are crucial: sharpening by adjusting the softmax temperature, and centering using batch statistics. Besides competitive performance on downstream image classification tasks, another key finding of the paper is that object segmentations emerge in the self-attention when applying DINO training to visual transformer image encoders.
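As a compact illustration, the core of this objective can be sketched as below. The teacher temperature follows Tab. C.1; the student temperature and the momentum values are the DINO defaults and not necessarily those used here, and the centering/schedule details of Caron et al. (2021) are simplified. All names are ours.

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center, t_student=0.1, t_teacher=0.06):
    """Cross-entropy sum_i -q_i log p_i between the sharpened, centered teacher
    targets q and the student predictions p."""
    p_log = F.log_softmax(student_logits / t_student, dim=-1)
    q = F.softmax((teacher_logits - center) / t_teacher, dim=-1).detach()
    return -(q * p_log).sum(dim=-1).mean()

@torch.no_grad()
def update_teacher(student, teacher, center, teacher_logits, m=0.996, m_center=0.9):
    """Teacher weights are an EMA of the student's; the center is an EMA of
    batch statistics of the teacher outputs (prevents collapse)."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1.0 - m)
    center.mul_(m_center).add_(teacher_logits.mean(dim=0), alpha=1.0 - m_center)
    return center
```

Only `dino_loss` is backpropagated, and only through the student; the teacher and the center are updated without gradients after each step.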
## C.2 Data Preprocessing

To speed up data loading during training, we reduce the number of nodes in the graph of each neuron to 1,000 nodes in the same way as when subsampling and ensure that it contains only one connected component. If there are unconnected components, we connect them by adding an edge between the two nodes of two unconnected components that have the smallest distance between their spatial coordinates.

Figure C.4: The DINO method for self-supervised image representation learning (figure adapted from Caron et al. (2021)).

## C.3 Training Details And Hyperparameters

To select hyperparameters, we run three grid searches and pick the best hyperparameters according to the lowest average loss over the BBP and M1 PatchSeq datasets. For the optimization, we run a hyperparameter search over batch size ∈ {32, 64, 128}, learning rate ∈ {10⁻³, 10⁻⁴, 10⁻⁵}, and number of training iterations ∈ {20,000, 50,000, 100,000}. For the augmentation strength, we run a hyperparameter search over jitter variance σ1 ∈ {1.0, 0.1, 0.001}, number of deleted branches n ∈ {1, 5, 10}, and graph position variance σ2 ∈ {0.1, 1.0, 10.0}. For the architecture, we run a hyperparameter search over latent dimension ∈ {16, 32, 64}, number of GraphAttention blocks (depth) ∈ {5, 7, 10}, and number of attention heads per block ∈ {2, 4, 8}.

## C.3.1 Architecture Hyperparameters

Table C.1: Hyperparameters used for the architecture for the different datasets. D1: Dimensionality of latent embedding z. D2: Dimensionality of probability distribution p. PE: Positional encoding. T temp: Softmax temperature of the teacher network.

| Dataset                       | D1   | D2   | # layers   | # heads   | MLP dims   | PE dims   | T temp   |
|-------------------------------|------|------|------------|-----------|------------|-----------|----------|
| Synthetic Graphs              | 16   | 300  | 4          | 4         | 16         | 16        | 0.04     |
| BBP                           | 32   | 1000 | 10         | 8         | 64         | 32        | 0.06     |
| M1 PatchSeq                   | 64   | 1000 | 7          | 8         | 64         | 32        | 0.06     |
| Joint dataset (BIL, JML, ACT) | 32   | 1000 | 7          | 4         | 64         | 32        | 0.06     |
| Trees                         | 32   | 1000 | 7          | 8         | 64         | 32        | 0.06     |

Tab. C.1 lists the hyperparameters used for the architecture for the different datasets. For the synthetic graph dataset, we downscale the network as it is a simpler dataset. DINO (Caron et al., 2021) uses an output dimensionality of 65,536 for p when training on ImageNet (Deng et al., 2009) (1,000 classes). The number of classes in the neuronal datasets is unknown, but previous literature described 14-19 cell types (Gouwens et al., 2019; Markram et al., 2015). Hence, we decrease the number of dimensions D2 of p proportionally to 1,000, approximately retaining the ratio between the number of classes and the number of dimensions.

## C.3.2 Optimization Hyperparameters

The learning rate is linearly increased to the value given in Tab. C.2 during the first 2,000 iterations and then decayed using an exponential decay with rate 0.5 (Loshchilov & Hutter, 2016).

Table C.2: Hyperparameters used for optimization for the different datasets.

| Dataset                       | Iterations   | Batch size   | Learning rate   |
|-------------------------------|--------------|--------------|-----------------|
| Synthetic Graphs              | 100,000      | 512          | 10⁻⁴            |
| BBP                           | 100,000      | 64           | 10⁻⁴            |
| M1 PatchSeq                   | 50,000       | 128          | 10⁻³            |
| Joint dataset (BIL, JML, ACT) | 100,000      | 128          | 10⁻³            |
| Trees                         | 100,000      | 64           | 10⁻⁴            |

## C.3.3 Augmentation Hyperparameters

Table C.3: Augmentation hyperparameters for the different datasets. # nodes: Number of nodes to subsample to. σ1: Variance of node jittering. # DB: Number of deleted branches. σ2: Variance of graph translation.

| Dataset                       | # nodes   | σ1    | # DB   | σ2   |
|-------------------------------|-----------|-------|--------|------|
| Synthetic Graphs              | 15        | 0.1   | 0      | 0    |
| BBP                           | 100       | 0.001 | 10     | 10.0 |
| M1 PatchSeq                   | 100       | 0.1   | 10     | 10.0 |
| Joint dataset (BIL, JML, ACT) | 200       | 1.0   | 5      | 10.0 |
| Trees                         | 200       | 0.1   | 5      | 10.0 |

## C.3.4 Computation

All trainings were performed on a single NVIDIA Quadro RTX 5000 GPU. Training on the neuronal BBP dataset took approximately 10 hours for 100,000 training iterations.

## C.4 Inference

To extract the latent representation per sample, we encode the unaugmented graphs, subsampled to 200 nodes, using the student encoder and extract the latent representation z using the weights of the last iteration of training (no early stopping is used).
## C.5 Evaluation

## C.5.1 Evaluation On BBP

For visualization of the latent space, we use t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008) with PCA initialization, Euclidean distance and a perplexity of 30. For quantitative evaluation, we use the subset of labeled excitatory neurons (n = 286) with the following 14 expert labels: L23-PC, L4-PC, L4-SP, L4-SS, L5-STPC, L5-TTPC1, L5-TTPC2, L5-UTPC, L6-BPC, L6-IPC, L6-TPC-L1, L6-TPC-L4, L6-UTPC, L6-HPC (Markram et al., 2015). For the ablation experiments and the comparison to InfoGraph (Sun et al., 2020), we perform k-nearest neighbor (kNN) classification with k = 5 in a leave-one-out setting predicting the above listed expert labels, with two exceptions: We remove the L6-HPC cells, since there are only three samples in the dataset, and we group the L5-TTPC1 and L5-TTPC2 cells into one class L5-TTPC, following previous work that found that they rather form a continuum than two separate classes (Gouwens et al., 2019; Kanari et al., 2019).

For the clustering analysis and the comparison to Gouwens et al. (2019), we follow Gouwens et al. (2019) and compute the adjusted Rand index between our found clusters and the 14 expert labels. To determine the optimal number of clusters, we use cross-validation to compute the log-likelihood of held-out data of the Gaussian mixture model and choose the number of clusters with the highest log-likelihood. The optimal number of clusters is 15 for the BBP dataset. To perform clustering within cortical layers, we chose the number of clusters per layer based on the number of clusters with the majority of cells from the cortex-wide clustering (Fig. 3): four for layer 2/3, layer 5 and layer 6, and three for layer 4.

## C.5.2 Comparison To InfoGraph (Sun et al., 2020)

We use the official implementation6 to train InfoGraph on the BBP dataset. We perform a hyperparameter search for InfoGraph as detailed in the original publication (Sun et al., 2020) and extend it to include more training epochs in order to train it for approximately the same number of iterations as GraphDINO. In detail, we run a grid search over learning rate (lr) ∈ {10^-2, 10^-3, 10^-4}, number of training epochs ∈ {10, 20, 100, 200, 1,000, 2,000} and number of GNN layers ∈ {4, 8, 12}. We select the hyperparameters based on the lowest unsupervised loss. The chosen hyperparameters are: lr = 0.001, epochs = 1,000 and four GNN layers with a hidden dimensionality of 32. We evaluate the performance of InfoGraph (Sun et al., 2020) using a kNN classifier analogous to the ablation experiments (Appendix C.5.1).

## C.5.3 Comparison To MorphVAE (Laturnus & Berens, 2021)

We follow the evaluation protocol of Laturnus & Berens (2021) and perform k-nearest neighbor (kNN) classification with k = 5 on the learned latent embeddings of the excitatory neurons to predict whether they are untufted, tufted or "other" on the test set (n = 60) and report the balanced accuracy. The "other" class only contains six examples in the test set. To get an estimate of the variance that is due to the chosen data split, we additionally evaluate three further data splits and report the average test set performance over the three splits. We report the performance of MorphVAE as given in Tab. 3 of Laturnus & Berens (2021).
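For reference, the leave-one-out kNN protocol used in the ablations and the baseline comparisons above can be sketched as follows, assuming scikit-learn; z and y stand for the latent embeddings and the expert labels.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut

def knn_leave_one_out_accuracy(z, y, k=5):
    """Leave-one-out kNN classification accuracy on latent embeddings z."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(z):
        clf = KNeighborsClassifier(n_neighbors=k)  # unweighted neighbor votes
        clf.fit(z[train_idx], y[train_idx])
        correct += int(clf.predict(z[test_idx])[0] == y[test_idx][0])
    return correct / len(y)
```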
## C.5.4 Comparison To TreeMoCo (Chen et al., 2022b)

A fair comparison to TreeMoCo proved difficult. We tried to replicate their setting as closely as possible from the information given in the paper as well as by inferring it from their code base7, while trying to set up a fairer benchmark for future works. We downloaded the three datasets BIL, JML and ACT using the official code base of TreeMoCo and used it to assign the class labels L1, L2/3, L4, L5, L6, VPM, CP, VPL, SUB, PRE and MG, plus an "Others" category, as used by Chen et al. (2022b). However, our cell counts slightly differ from those given in Chen et al. (2022b). More specifically, the JML dataset contained 1,200 neurons instead of 1,107. Chen et al. (2022b) removed a substantial amount of neurons (995 of 3,358) from the datasets due to reconstruction errors. Since we did not have access to the identities of these neurons, we trained GraphDINO unsupervised on all cells with more than 200 nodes (n_total = 3,138; n_BIL = 1,739, n_JML = 889, n_ACT = 510) and evaluated on the proposed classes as assigned by the TreeMoCo code base. We replicated the proposed data preprocessing by centering the somata at (0, 0, 0) and aligning the neurons' first principal component to the y-axis.

Chen et al. (2022b) perform the quantitative evaluation on an 80-20% training-test data split. Since we did not have access to the exact split, we performed five-fold cross-validation instead and report the average accuracy over folds. According to the paper, Chen et al. (2022b) perform k-nearest neighbor classification (k = 5 or k = 20 depending on the dataset). We unify the evaluation and report the kNN accuracy with k = 5 for all experiments in this paper. For reference, we list the k = 20 performance in Tab. C.4. In their code base, the implementation of kNN is weighted, where a neighbor's vote is weighted by the cosine similarity of the embeddings. We follow the description in the paper (Chen et al., 2022b) and use standard kNN classification without weighting the neighbors' votes.

6https://github.com/sunfanyunn/InfoGraph
7We additionally tried to reach out to the authors but did not get a reply.

Table C.4: Cell-type classification on the TreeMoCo dataset. Performance of our model (GraphDINO) averaged over three random seeds and given as mean ± standard deviation when using k = 20 for the kNN classifier.

| Model | BIL-6  | ACT    |
|-------|--------|--------|
| Ours  | 78 ± 2 | 54 ± 4 |

![23_image_0.png](23_image_0.png)

Figure C.5: Example neurons labeled as Isocortex 4 of the ACT dataset.

The performances reported by Chen et al. (2022b) are overfitted on the test set: they picked the best test set performance over epochs (for the three datasets separately) (see Fig. C1 in Chen et al. (2022b)). Additionally, they picked whether to use the latent embedding z or the projection head's output p based on the test set performance per dataset. To give an estimate of the less overfitted performance of TreeMoCo (Chen et al., 2022b) (at least with respect to which epoch to evaluate), we report the performance averaged over the last five epochs as given by Fig. C1 (Chen et al., 2022b). Similarly, the performance of GraphCL (You et al., 2020) as reported by Chen et al. (2022b) is picked as the best test set performance per dataset over training epochs. We therefore report the average accuracy over the last five epochs as given by Fig. C1 (Chen et al., 2022b).

## D Complete Cluster Visualizations

In Fig. D.6 and Fig. D.7, we show the cluster assignments of all samples of the excitatory BBP dataset (n = 286) and the Trees dataset (n = 391), respectively.
![24_image_0.png](24_image_0.png)

Figure D.6: Clusters of spiny neurons of the BBP dataset as identified by GMM based on our learned feature space. Apical dendrites are colored lighter, while basal dendrites are shown in a darker color. The soma is marked by a black circle.

![25_image_0.png](25_image_0.png)

Figure D.7: Clusters of trees as identified by GMM based on our learned feature space.
Review 1:

Summary: The paper proposes a self-supervised representation learning approach (based on DINO [Caron et al., 2021]) that obtains graph-level embeddings using a transformer-based architecture. Using graph data augmentation, e.g., with node subsampling, random rotation, node jittering, etc., the model is trained to maximize similarity between the embeddings of different augmented views of an input. The transformer-based architecture uses a novel, modified "adjacency-conditioned" attention mechanism, which imposes a learned bias on the attention values towards the adjacency matrix of the input graph. Experimental validations are thoroughly performed (e.g., on downstream classification tasks) by learning representations from graph-like neuronal morphologies on five publicly available datasets.

Strengths and Weaknesses:

Strengths: GraphDINO is generic to any graph-like data structure with similar node features, and appears to not require heavy tailoring for each problem. The transformer attention mechanism is extended feasibly to graph-like structures with a novel and simple adjacency matrix bias. Experimental analyses are performed in depth and address all related aspects of the analyses and the proposed model.

Weaknesses: Some minor changes are still necessary for clarification, especially in terms of notation for the architecture.

Requested Changes: I think the paper's claims are well supported in the manuscript, and there are no major concerns on my side. Only minor questions below:
- It appears that there is no constraint on the bias factors \lambda and \gamma, which are learned per-node. Would the approach still reasonably work if one reduces it to a single learned parameter vector by setting, e.g., \lambda=1-\gamma, hence reducing the number of parameters in W by D?
- How sensitive is the learning algorithm to the softmax temperature of the teacher encoder?
- Variable p is not defined in the main manuscript or the supplementary until Table C.1, where the chosen values are listed. It is also confusing that it might be referring to the DINO notation in Figure C.4. A clarification on how p relates to/differs from z should be added for the reader.
- In the Figure 1.B caption, the embeddings are stated to lie in a 32-dim vector space, which is not necessarily fixed and can vary. Since this becomes confusing later, perhaps this should be discussed parametrically in the text and clarified in 4.2.2 with the chosen values.
- In Table C.1, the embeddings are chosen to be 64-dimensional for the M1 PatchSeq dataset, although in Section 4.2.2 it was stated that for neuronal datasets it was fixed to 32. This should be corrected. Also, since N denotes something else, in Table C.1, N layers -> \# layers, and N heads -> \# heads.
- Typo on page 9, second line of Section 5.5: "1'000" -> "1,000".

Broader Impact Concerns: No specific concern.

==================================================

Review 2:

Summary: This work proposes a representation method for 3D neuronal morphologies. Seeing neuron skeletons as spatial trees, the paper proposes (i) modifications to the transformer architecture suited to graph processing, (ii) domain-specific data augmentations to be used in the context of DINO, and (iii) an extensive study of the resulting representation.

(i) Self-attention is modified so that it interpolates between global attention and attention limited to neighboring nodes only. This interpolation is learned and depends on the node features.
(ii) Data augmentations are central in non-contrastive SSL since these methods enforce similar embeddings for two different augmented versions of the same sample. Therefore, data augmentation determines to some extent to what modifications the final representation will be invariant. Domain-specific data augmentations are therefore crucial. Here, the authors argue that their embedding should be invariant to (a) tree-specific subsampling, (b) 3D rotation, (c) Gaussian noise added to node coordinates, (d) tree-specific subgraph deletion, and (e) random translations of the node coordinates.

(iii) The authors demonstrate that the learned representation retains many characteristics of interest for practitioners, e.g., recovering known excitatory cell types, or forming clusters that are consistent with expert labels. The learned representations seem to outperform existing baselines. They also perform ablations on their proposed augmentation and attention strategies.

Strengths and Weaknesses:

Strengths:
- This work makes good use of the existing knowledge on SSL and graph transformers: the proposed strategies for domain-specific data augmentation and attention on neuronal graphs make sense.
- The authors did a good job at demonstrating the usefulness of their representation. SSL and graph transformers seem a promising direction for studying neuronal morphologies and perhaps making discoveries from large, unlabelled datasets.
- The paper is overall well written.

Weaknesses:
- The authors could have done a slightly better job at discussing the relevant literature. For example:
- In SSL for graphs, [1] adapted BYOL to graph representation learning.
- The authors mention various works proposing graph transformers. These works contain schemes that interpolate between local and global attention, as well as different position encodings, since Laplacian eigenvectors suffer from issues: why are they not suited to the problem of learning representations of neuronal morphologies?
- In terms of employed methods, there seems to be some room for improvement that could also be discussed in the limitations.
- Taken separately, transformers and SSL are often data-hungry. Here, the data only consists of a few thousand examples: although the need for data is alleviated here since the authors inject some inductive biases into the architecture (namely the adjacency matrix in attention and a non-learned positional encoding), have you considered looking for more data? This may lead to substantial improvements. This could also reduce the instability of the results (error bars seem a bit high).
- In computer vision, having a balanced, hence curated, dataset (in terms of underlying classes) is crucial to achieve state-of-the-art performance with SSL. Have you considered this issue here?
- The authors seek a representation that is invariant to translations and 3D rotations. Instead of enforcing this property through SSL data augmentation, have you considered using equivariant architectures such as [2][3]? ([2] can be adapted to transformers.) This added inductive bias could also alleviate the need for data.
- Could you clarify why masking node features was not a good augmentation strategy in your setup? In SSL, masking is generally simple and beneficial.

[1] Thakoor et al. Large-Scale Representation Learning on Graphs via Bootstrapping.
[2] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) Equivariant Graph Neural Networks.
[3] Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling.
SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks.

Requested Changes: See above (all equally important):
- Discussing some missing work, or cited work that hasn't been used here.
- Discussing limitations and potential improvements.
- Clarification of node feature masking.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: This paper introduces a new self-supervised method for graph representation learning. This learning method is specifically designed for representation learning of neuronal morphologies, but can be generalized to other similar graphs to some extent. The contributions of this paper mainly include: (1) a new self-supervised graph-level feature learning method; (2) a new attention module; (3) a broad set of ablation studies and visualizations, which demonstrate the effectiveness of the proposed method. Overall, the findings of this paper are interesting.

Strengths and Weaknesses:

Strengths:
1. This paper is well written, with clear motivation and organization.
2. The proposed method is novel to some extent, with tailored data augmentation methods for the neuronal skeleton graph (and other similar graphs).
3. This paper offers a lot of ablation studies and visualizations. They are helpful in understanding the benefits of the proposed method, and demonstrate the advantages of this method compared to prior works.

Weaknesses:
1. The novelty of the proposed method is not very significant. The overall idea is built upon DINO. The positional encoding strategy is borrowed from previous work. The major novelty lies in the data augmentation, as well as in the hybrid combination of attention and the adjacency matrix. Overall, I think such technical novelties are not significant.
2. Usually, the DINO method needs a lot of training data in the image domain. In this paper, I find that the proposed method was only trained on a few hundred graphs (e.g., 646). It is surprising that the model trained on such a small dataset still works quite well. Is this due to some kind of overfitting? Or is it possible that the task is not challenging enough? Can we pre-train the model on the synthetic dataset and then fine-tune it on this small dataset (in a self-supervised manner)?

Requested Changes: One interesting question is: can we pre-train the model on the synthetic dataset and then fine-tune it on another small dataset? Is it possible that this pre-training will further boost the performance?

Broader Impact Concerns: I do not see any ethical issues with this work.

==================================================

Metareview:

Recommendation: Accept as is

Comment: All reviewers agree that the manuscript contains interesting results, in particular for the field of computational neuroscience. It also tackles an interesting input modality that has not been considered in this context. The technical novelty is limited since similar graph transformers exist. Nevertheless, the novel application scenario and the strategies to adapt the method to the considered domain are worth publication.

==================================================
# Policy Gradient Algorithms Implicitly Optimize By Continuation

Adrien Bolland adrien.bolland@uliege.be Montefiore Institute, University of Liège
Gilles Louppe g.louppe@uliege.be Montefiore Institute, University of Liège
Damien Ernst dernst@uliege.be Montefiore Institute, University of Liège; LTCI, Telecom Paris, Institut Polytechnique de Paris

Reviewed on OpenReview: https://openreview.net/forum?id=3Ba6Hd3nZt

## Abstract

Direct policy optimization in reinforcement learning is usually solved with policy-gradient algorithms, which optimize policy parameters via stochastic gradient ascent. This paper provides a new theoretical interpretation and justification of these algorithms. First, we formulate direct policy optimization in the optimization by continuation framework. The latter is a framework for optimizing nonconvex functions where a sequence of surrogate objective functions, called continuations, are locally optimized. Second, we show that optimizing affine Gaussian policies and performing entropy regularization can be interpreted as implicitly optimizing deterministic policies by continuation. Based on these theoretical results, we argue that exploration in policy-gradient algorithms consists in computing a continuation of the return of the policy at hand, and that the variance of policies should be a history-dependent function adapted to avoid local extrema rather than to maximize the return of the policy.

## 1 Introduction

Applications where one has to control an environment are numerous, and solving these control problems efficiently is the preoccupation of many researchers and engineers. Reinforcement learning (RL) has emerged as a solution when the environments at hand have complex and stochastic dynamics (Sutton & Barto, 2018). Direct policy optimization, and more particularly (on-policy) policy gradients, are methods that have been successful in recent years. These methods, reviewed by Duan et al. (2016) and Andrychowicz et al. (2020), all consist in parameterizing a policy (most often with a neural network) and adapting the parameters with a local-search algorithm in order to maximize the expected sum of rewards received when the policy is executed, called the return of the policy. We distinguish two basic elements that determine the performance of these methods. The first element is the formalization of the optimization problem. It is defined through two main choices: the (functional) parametrization of the policy and the learning objective function, which mostly relies on adding an entropy regularization term to the return. The second element is the choice of the local-search algorithm to solve the optimization problem - we focus on stochastic gradient ascent methods in this study.

The policy parameterization is the first formalization choice. In theory, there exists an optimal deterministic policy (Sutton & Barto, 2018), which can be optimized by deterministic policy gradient (Silver et al., 2014) with a guarantee of converging towards a stationary solution (Xiong et al., 2022). However, this approach may give poor results in practice as it is subject to convergence towards local optima (Silver et al., 2014). It is therefore usual to optimize stochastic policies, for which this problem is mitigated in practice (Duan et al., 2016; Andrychowicz et al., 2020). For discrete state and action spaces, theoretical guarantees of global convergence hold for softmax or direct policy parameterization (Bhandari & Russo, 2019; Zhang et al., 2021; Agarwal et al., 2020).
In the general case of continuous spaces, these results no longer hold and only convergence towards stationarity can be ensured under strong hypotheses (Bhatt et al., 2019; Zhang et al., 2020b; Bedi et al., 2021). Recently, convergence under milder assumptions was established assuming that the policy follows a heavy-tailed distribution, which guarantees a sufficiently spread distribution of actions (Bedi et al., 2022). Nevertheless, most of the empirical works have focused on (light-tailed) Gaussian policies (Duan et al., 2016; Andrychowicz et al., 2020), for which convergence is thus not ensured in the general case (Bedi et al., 2022). The importance of a sufficiently spread distribution in policy gradient had already been observed in early works and was loosely interpreted as exploration (Lillicrap et al., 2015; Mnih et al., 2016). This concept, originally introduced in bandit theory and value-based RL, where it consists in selecting a suboptimal action to execute in order to refine a statistical estimate (Simon, 1955; Sutton & Barto, 2018), is to our knowledge not well defined for direct policy optimization. Other empirical works also showed that Beta policies outperform Gaussian policies when the set of actions is bounded within an interval (Chou et al., 2017; Fujita & Maeda, 2018). As a side note, another notable advantage of stochastic policies is the possibility to rely on information geometry and use efficient trust-region methods to speed up the local-search algorithms (Shani et al., 2020; Cen et al., 2022). In summary, no consensus has yet been reached on the exact policy parameterization that should be used in practice.

The second formalization choice is the learning objective and more particularly the choice of entropy regularization. Typically, a bonus enforcing the uniformity of the action distribution is added to the rewards in the objective function (Williams & Peng, 1991; Haarnoja et al., 2019). Intuitively, it avoids converging too fast towards policies with small spread, which are subject to being locally optimal. More general entropy regularizations were applied for encouraging high-variance policies while keeping the distribution sparse (Nachum et al., 2016) or for enforcing the uniformity of the state-visitation distribution in addition to the action distribution (Islam et al., 2019). Again, no consensus is reached about the best regularization to use in practice.

The importance of introducing sufficient stochasticity and regularizing entropy is commonly accepted in the community. Some preliminary research has been conducted to develop a theoretical foundation for this observation. Ahmed et al. (2019) proposed an empirical analysis of the impact of the entropy regularization term. They concluded that adding this term yields a smoothed objective function. A local-search algorithm will therefore be less prone to convergence to local optima. This problem was also studied by Husain et al. (2021). They proved that optimizing a policy by regularizing the entropy is equivalent to performing a robust optimization against changes in the reward function. This result was recently reinterpreted by Brekelmans et al. (2022), who deduced that the optimization is equivalent to a game where one player adapts the policy while an adversary adapts the reward.
The research papers reviewed above concentrate solely on learning objectives in the context of entropy regularization, leaving unanswered the question of the relationship between a policy's return and the distribution of actions. This question is of paramount importance for understanding how the formalization of the direct policy optimization problem impacts the resulting control strategy.

In this work, we propose a new theoretical interpretation of the effects of the action distribution on the objective function. Our analysis is based on the theory of optimization by continuation (Allgower & Georg, 1980), which consists in locally optimizing a sequence of surrogate objective functions. The latter are called continuations and are often constructed by filtering the optimization variables in order to remove local optima. Our main contributions are twofold. First, we define a continuation for the return of policies and formulate direct policy optimization in the optimization by continuation framework. Second, based on this framework, we study different formulations, i.e., policy parameterization and entropy regularization, of direct policy optimization. Several conclusions are drawn from the analysis. First, we show that the continuation of the return of a deterministic policy is equal to the return of a Gaussian policy. Second, we show that the continuation of the return of a Gaussian policy equals the return of another Gaussian policy with scaled variance. We then derive from the previous results that optimizing Gaussian policies using policy-gradient algorithms and performing regularization can be interpreted as optimizing deterministic policies by continuation. In this regard, exploration, as it is usually understood in policy gradients, consists in computing the continuation of the return of the policy at hand. Finally, we show that for a more general continuation, the continuation of the return of a deterministic policy equals the return of a Gaussian policy where the variance is a function of the observed history of states and actions. These results provide a new interpretation for the variance of a policy: it can be seen as a parameter of the policy-gradient algorithm instead of an element of the policy parameterization. Moreover, to fully exploit the power of continuations, the variance of a policy should be a history-dependent function iteratively adapted to avoid the local extrema of the return.

Optimization by continuation provides, or aims to provide, two main practical advantages for solving optimization problems. First, these methods smooth the objective function and allow the application of gradient-based optimization methods, as discussed in the literature on variational optimization (Staines & Barber, 2012) and as applied by Murray & Ng (2010) to discrete optimization problems. Second, it enables computing the global optimum of the optimization problem. This global optimum can be reached, for example, assuming that the optimum of the first continuation is simple to compute and that the path of local optima of each continuation leads to the global optimum of the problem (Allgower & Georg, 1980). Theoretical guarantees on convergence are still scarce in the literature. Several works focus on Gaussian continuations, where the continuations are convolutions of the objective function with Gaussian kernels (Mobahi & Fisher III, 2015).
It is particularly useful when the objective function is concave, ensuring smoothness of the surrogate optimization problems and providing convergence guarantees to the globally optimal solution (Nesterov & Spokoiny, 2017). Interestingly, a recent work links these continuations to concave envelopes (Mobahi & Fisher, 2015). Another notable result holds for similar continuations relying on uniform kernels for building the surrogate problems, where convergence to the global solution is guaranteed under certain assumptions on the objective function (Hazan et al., 2016). In the general case, finding the right sequence of continuations for achieving convergence to the global optimum remains heuristic and problem-dependent. The framework of optimization by continuation can nevertheless provide valuable insights, as will be further explored in this work. Despite the lack of theoretical guarantees, optimization by continuation has found successful machine learning applications, including image alignment (Mobahi et al., 2012), greedy layerwise training of neural networks (Bengio, 2009), and neural network training by iteratively increasing the non-linearity of the activation functions (Pathak & Paffenroth, 2019).

To our knowledge, optimization by continuation has never yet been applied to direct policy optimization. However, optimizing a distribution over the policy parameters rather than directly optimizing the policy is a reinforcement learning technique that has been used to perform direct policy optimization (Sehnke et al., 2010; Salimans et al., 2017; Zhang et al., 2020a). Among other things, it decreases the variance of the gradient estimates in some cases. If this distribution over policy parameters is a Gaussian, it is furthermore by definition equivalent to optimizing the policy by Gaussian continuation (Mobahi & Fisher III, 2015). Another method, called reinforcement learning with logistic reward-weighted regression (Wierstra et al., 2008; Peters & Schaal, 2007), consists in optimizing a surrogate objective of the return. The surrogate is the expected utility of the sum of rewards. Originally justified relying on the field of decision theory (Chernoff & Moses, 2012), it can equivalently be seen as an optimization by continuation method.

The paper is organized as follows. In Section 2, the background of direct policy optimization is recalled. The framework for optimizing policies by continuation is developed in Section 3 and theoretical results relating the return of policies to their continuations are presented in Section 4. In Section 5, these results are used for elaborating on the formulations of direct policy optimization. Finally, the results are summarized and further works are discussed in Section 6.

## 2 Theoretical Background

In this section, we recall the background of reinforcement learning in Markov decision processes and discuss the direct policy optimization problem.

## 2.1 Markov Decision Processes

We study problems in which an agent makes sequential decisions in a stochastic environment in order to maximize an expected sum of rewards (Sutton & Barto, 2018). The environment is modeled with an infinite-time Markov Decision Process (MDP) composed of a state space S, an action space A, an initial state distribution with density p0, a transition distribution (dynamic) with conditional density p, a bounded reward function ρ, and a discount factor γ ∈ [0, 1[.
When an agent interacts with the MDP (S, A, p0, p, ρ, γ), first, an initial state s0 ∼ p0(·) is sampled; then, the agent provides at each time step t an action at ∈ A, leading to a new state st+1 ∼ p(·|st, at). Such a sequence of states and actions ht = (s0, a0, . . . , st−1, at−1, st) ∈ H is called a history, and H is the set of all histories of any arbitrary length. In addition, at each time step t, a reward rt = ρ(st, at) ∈ R is observed. A (stochastic) history-dependent policy η ∈ E = H → P(A) is a mapping from the set of histories H to the set of probability measures on the action space P(A), where η(a|h) is the associated conditional probability density of action a given the history h. A (stochastic) Markov policy π ∈ Π = S → P(A) is a mapping from the state space S to the set of probability measures on the action space P(A), where π(a|s) is the associated conditional probability density of action a in state s. Finally, deterministic policies µ ∈ M = S → A are functions mapping an action a = µ(s) ∈ A to each state s ∈ S. We note that for each deterministic policy µ there exists an equivalent Markov policy, where the probability measure is a Dirac measure on the action a = µ(s) in each state s. In addition, for each Markov policy, there exists an equivalent history-dependent policy only accounting for the last state in the history. We therefore write by abuse of notation that M ⊊ Π ⊊ E.

The function J : E → R is defined as the function mapping to any policy η the expected discounted cumulative sum of rewards gathered by an agent interacting in the MDP by sampling actions from the policy η. The value J(η) is called the return of the policy η and is computed as follows:

$$J(\eta)=\mathbb{E}_{\substack{s_{0}\sim p_{0}(\cdot)\\ a_{t}\sim\eta(\cdot|h_{t})\\ s_{t+1}\sim p(\cdot|s_{t},a_{t})}}\left[\sum_{t=0}^{\infty}\gamma^{t}\rho(s_{t},a_{t})\right]\ .\tag{1}$$

An optimal agent follows an optimal policy η∗ maximizing the expected discounted sum of rewards J.

## 2.2 Direct Policy Optimization

Problem statement. Let (S, A, p0, p, ρ, γ) be an MDP and let ηθ ∈ E be a policy parameterized by the real vector θ ∈ R^dΘ. The objective of the optimization problem is to find the optimal parameter θ∗ ∈ R^dΘ such that the return of the policy is maximized:

$$\theta^{*}=\operatorname*{argmax}_{\theta\in\mathbb{R}^{d_{\Theta}}}J(\eta_{\theta})\ .\tag{2}$$

In this work, we consider on-policy policy-gradient algorithms (Andrychowicz et al., 2020). These algorithms optimize differentiable policies with local-search methods using the derivatives of the policies. They iteratively repeat two operations. First, they approximate an ascent direction relying on histories sampled from the policy, with the current parameters, in the MDP. Second, they update these parameters in the ascent direction.

Deterministic Policies. It is possible to approximate a solution of the optimization problem equation (2) for a deterministic policy parameterized with a function approximator µθ ∈ M by stochastic gradient ascent using the deterministic policy gradient theorem (Silver et al., 2014). Nevertheless, optimizing deterministic policies with inadequate exploration (i.e., without sufficient additional policy randomization) usually results in locally optimal policies with poor performance (Silver et al., 2014).
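To ground the notation, the return of equation (1) can be estimated by truncated Monte-Carlo rollouts. In the sketch below, p0, step, rho and policy are assumed callables representing the MDP and the agent; truncating the horizon is justified by γ^t → 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(policy, p0, step, rho, gamma=0.99, horizon=200):
    """One Monte-Carlo sample of the truncated discounted return of eq. (1)."""
    s, g = p0(rng), 0.0
    for t in range(horizon):          # truncation of the infinite sum
        a = policy(s, rng)            # a_t ~ eta(.|h_t)
        g += gamma**t * rho(s, a)     # accumulate gamma^t * rho(s_t, a_t)
        s = step(s, a, rng)           # s_{t+1} ~ p(.|s_t, a_t)
    return g

def estimate_return(policy, p0, step, rho, n_rollouts=100):
    """Average over independent rollouts to approximate J(eta)."""
    return np.mean([rollout_return(policy, p0, step, rho) for _ in range(n_rollouts)])
```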
Gaussian Policies. In direct policy optimization, most of the works focus on learning Markov policies where the actions follow a Gaussian distribution whose mean and covariance matrix are parameterized function approximators (Duan et al., 2016; Andrychowicz et al., 2020). More precisely, a parametrized Gaussian policy π^GP_θ ∈ Π is a policy where the actions follow a Gaussian distribution of mean µθ(s) and covariance matrix Σθ(s) for each state s and parameter θ; it thus has the following density:

$$\pi_{\theta}^{GP}(a|s)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma_{\theta}(s))\ .\tag{3}$$

Affine Policies. A parameterized policy (deterministic or stochastic) is said to be affine if the function approximators used to construct the functional form of the policy are affine functions of the parameter θ. Formally, each function approximator fθ of a history-dependent policy has the following form ∀h ∈ H:

$$f_{\theta}(h)=a(h)^{T}\theta+b(h)\ ,\tag{4}$$

where a and b are functions building features from the histories. Such policies are often considered in theoretical studies (Busoniu et al., 2017) and perform well on complex tasks in practice (Rajeswaran et al., 2017).

## 3 Optimizing Policies By Continuation

In this section, we introduce optimization by continuation methods and formulate direct policy optimization in this framework.

## 3.1 Optimization By Continuation

Optimization by continuation (Allgower & Georg, 1980) is a technique used to optimize nonconvex functions with the objective of avoiding local extrema. A sequence of optimization problems is solved iteratively using the optimum of the previous iteration. Each problem consists in optimizing a deformation of the original function and is typically solved by local search. Through the iterations, the function is less and less deformed. Such a procedure is also sometimes referred to as graduated optimization (Blake & Zisserman, 1987) or optimization by homotopy (Watson & Haftka, 1989), as the homotopy of a function refers to its deformation in topology.

Formally, let f : X → R be the real-valued function to optimize. Let g : Y → R be another real-valued function used for building the deformation of f. Finally, let the conditional distribution function p : X → P(Y) be the mapping from an optimization variable x ∈ X to the set of probability measures P(Y), such that p(y|x) is the associated density function for any random event y ∈ Y given x ∈ X. The continuation of the function f under the distribution p and deformation function g is defined as the function f^p : X → R such that ∀x ∈ X:

$$f^{p}(x)=\mathbb{E}_{y\sim p(\cdot|x)}\left[g(y)\right]\ .\tag{5}$$

For the optimization by continuation described hereafter, there must exist a conditional distribution p∗ for which f^p equals f in the limit as p approaches p∗. A typical example is to choose the function g equal to f, and to use a Gaussian distribution with a constant diagonal covariance matrix for the distribution p. We then have so-called Gaussian continuations (Mobahi & Fisher III, 2015).
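As a concrete illustration of equation (5) with g = f and a Gaussian conditional distribution (the Gaussian continuations just mentioned), the sketch below estimates f^p by Monte-Carlo for a one-dimensional nonconvex function; the test function and sample sizes are arbitrary, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Nonconvex test function: a wide global maximum at x = 2 (value 1.0)
    # and a narrow local maximum at x = -2 (value 0.5). Illustrative only.
    return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2 / 0.05)

def gaussian_continuation(f, x, sigma, n_samples=100_000):
    """Monte-Carlo estimate of f^p(x) = E_{y ~ N(x, sigma^2)}[f(y)], eq. (5)."""
    y = rng.normal(loc=x, scale=sigma, size=n_samples)
    return f(y).mean()

# Larger sigma -> stronger smoothing; as sigma -> 0 the continuation recovers f.
for sigma in (2.0, 0.5, 1e-3):
    print(sigma, gaussian_continuation(f, 0.0, sigma))
```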
Finally, optimizing a function f by continuation involves iteratively locally optimizing its continuation for a sequence of conditional distributions approaching p∗ with decreasing spread. Formally, let p0 ≻ p1 ≻ · · · ≻ pI−1 be a sequence of conditional distributions (monotonically) approaching p∗ with strictly decreasing covariance matrices1. Then, optimizing f by continuation consists in locally optimizing its continuation f^pi with a local-search algorithm initialized at x∗i for each iteration i. This general procedure is summarized in Algorithm 1. Particular instances of this algorithm are described by Hazan et al. (2016) and Shao et al. (2019) for Gaussian continuations.

1 In this work, we consider the L2-norm of functions and the Loewner order over the set of covariance matrices (Siotani, 1967).

In practice, the optimization process can be approximated by performing a limited number of local-search iterations at each step of the optimization by continuation. In the following sections, we consider that each optimization of the continuation f^pi is approximated with a single gradient ascent step and that the continuation distribution sequence p0 ≻ p1 ≻ · · · ≻ pI−1 is constructed by iteratively reducing the variance of the distribution pi. Note that if this variance reduction is sufficiently slow, and the stepsize is well chosen, a single gradient ascent step enables one to accurately approximate x∗i.

Algorithm 1 Optimization by Continuation
1: Provide a sequence of I functions p0 ≻ p1 ≻ · · · ≻ pI−1
2: Provide an initial variable value x∗0 ∈ X for the local search
3: for i = 0, 1, . . . , I − 1 do
4:     x∗i+1 ← Optimize the continuation f^pi by local search initialized at x∗i
5: end for
6: return x∗I
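A minimal sketch of Algorithm 1 for the Gaussian-continuation case, approximating each inner optimization with a few gradient-ascent steps on a Monte-Carlo estimate of the smoothed objective; the test function, schedule and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Same bimodal test function as in the previous sketch (illustrative).
    return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2 / 0.05)

def grad_continuation(f, x, sigma, n_samples=5000, eps=1e-3):
    """Finite-difference gradient of f^{p_i}(x) with common random numbers."""
    noise = rng.normal(0.0, sigma, n_samples)  # shared noise reduces variance
    return (f(x + eps + noise).mean() - f(x - eps + noise).mean()) / (2 * eps)

def optimize_by_continuation(f, x0, sigmas, n_steps=50, lr=0.5):
    """Algorithm 1: track local optima of f^{p_i} as the spread decreases."""
    x = x0
    for sigma in sigmas:              # p_0 > p_1 > ... > p_{I-1}
        for _ in range(n_steps):      # local search initialized at previous x*
            x += lr * grad_continuation(f, x, sigma)
    return x

sigmas = 2.0 * 0.7 ** np.arange(10)   # strictly decreasing spread (illustrative)
print(optimize_by_continuation(f, -2.5, sigmas))  # escapes the local maximum at -2
```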
## 3.2 Continuation Of The Return Of A Policy

The direct policy optimization problem usually consists in maximizing a nonconvex function. Optimization by continuation is thus a good candidate for computing a solution. In this section, we introduce a novel continuation adapted to the return of policies. The return of a policy depends on the probability of a sequence of actions through the product of the densities ηθ(at|ht) of each action at for a given parameter θ, see equation (1). We define the continuation of interest as the expectation of the return where each factor in the product of densities depends on a different parameter vector. This expectation is taken according to a distribution that disturbs these parameter vectors at each time step with a variance depending on the history.

Formally, using the notations from Section 3.1, we optimize the function f that for all x = θ equals the return, f(θ) = J(πθ), over the set X = R^dΘ. Let the covariance function Λ : H → R^{dΘ×dΘ} be a function mapping a history ht ∈ H to a covariance matrix Λ(ht). Let the continuation distribution q be a distribution such that q(θt|θ, Λ(ht)) is the density of θt distributed with mean θ and covariance matrix Λ(ht). Then, let Y = (S × A × R^dΘ)^N be the set of (infinite) sequences of states, actions and parameters, and let p and g, the two functions defining the continuation, be as follows:

$$p(y|x)=p_{0}(s_{0})\prod_{t=0}^{\infty}\eta_{\theta_{t}}(a_{t}|h_{t})\,p_{\theta}(\theta_{t}|h_{t})\,p(s_{t+1}|s_{t},a_{t})\tag{6}$$
$$g(y)=\sum_{t=0}^{\infty}\gamma^{t}\rho(s_{t},a_{t})\ ,\tag{7}$$

where pθ(θt|ht) = q(θt|θ, Λ(ht)), such that the spread of pθ depends on the function Λ. Taken together, the continuation f^q_Λ = f^p of the return of the policy ηθ ∈ E corresponding to the distribution q and covariance function Λ is defined ∀θ ∈ R^dΘ as:

$$f_{\Lambda}^{q}(\theta)=\mathbb{E}_{\substack{s_{0}\sim p_{0}(\cdot)\\ \theta_{t}\sim q(\cdot|\theta,\Lambda(h_{t}))\\ a_{t}\sim\eta_{\theta_{t}}(\cdot|h_{t})\\ s_{t+1}\sim p(\cdot|s_{t},a_{t})}}\left[\sum_{t=0}^{\infty}\gamma^{t}\rho(s_{t},a_{t})\right]\ .\tag{8}$$

Finally, the continuation equation (8) converges towards the return of ηθ in the limit as the covariance function Λ approaches zero, as required in Section 3.1.

This continuation is expected to be well-suited for removing local extrema of the return for three main reasons. First, marginalizing the variables of a function as in our continuation is expected to smooth this function and therefore remove local extrema - the particular case of Gaussian blurring has been widely studied in the literature (Mobahi & Fisher, 2015; Nesterov & Spokoiny, 2017). Second, we underline the interest of considering a continuation in which the disturbance of the policy parameters may vary based on the time step. Indeed, changing the parameter vector of the policy at different time steps (and changing the action distributions) may modify the objective function in significantly different ways. Third, we justify the factorization of the conditional distribution pθ in equation (6) by the causal effect of actions in the MDP. As the actions only influence the rewards to come, the past history is expected to provide a sufficient statistic for disturbing the parameters in order to remove local optima. We therefore chose parameter probabilities conditionally independent given the past history. This history-dependency is encoded through the covariance function Λ in equation (8).

Maximizing f^q_Λ to solve the optimization problem from Algorithm 1 is a complicated task. A common local-search algorithm used in machine learning is stochastic gradient ascent, which is known for performing well on several complex functions depending on many variables (Bottou, 2010). The gradient of f^q_Λ can be computed by Monte-Carlo sampling, applying the reparameterization trick (Goodfellow et al., 2016) for simple continuation distributions or relying on the REINFORCE trick (Williams, 1992) in the more general case. Due to the complex time dependencies of the random events, these vanilla gradient estimates have practical limitations: the estimates may have large variance, the infinite horizon shall be truncated, and the direction provided is computed in the Euclidean space of parameters rather than in a space of distributions (Peters & Schaal, 2008). Finally, the evaluation of the continuation and its derivatives requires one to sample parameter vectors, which may be computationally expensive for complex high-dimensional distributions. The study of different continuation distributions and the application of the optimization procedure from Algorithm 1 to practical problems is left for further works. In this work, we rather rely on the continuation to study existing direct policy optimization algorithms. To this end, we show in the next section that maximizing the continuation defined by equation (8) is equivalent to solving a direct policy optimization problem for another policy, called a mirror policy.
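As a sketch of the REINFORCE-style estimate mentioned above: for a Gaussian continuation distribution with a constant covariance Λ, the score of each disturbed parameter vector is Λ^{-1}(θt − θ), so the gradient of equation (8) can be estimated as below. The rollout helper is an assumed, hypothetical function, and the horizon is truncated.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_gradient(theta, Lam, rollout_with_params, n_rollouts=64):
    """Score-function estimate of the gradient of f_Lambda^q (eq. 8).

    At every time step the parameter vector is redrawn, theta_t ~ N(theta, Lam),
    and grad_theta log q(theta_t | theta, Lam) = Lam^{-1} (theta_t - theta).
    """
    Lam_inv = np.linalg.inv(Lam)
    grad = np.zeros_like(theta)
    for _ in range(n_rollouts):
        # rollout_with_params (assumed given) samples theta_t at each step,
        # executes eta_{theta_t}, and returns the truncated discounted return
        # together with the list of sampled theta_t's.
        ret, thetas = rollout_with_params(theta, Lam, rng)
        score = Lam_inv @ (np.array(thetas) - theta).sum(axis=0)
        grad += ret * score
    return grad / n_rollouts
```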
## 4 Mirror Policies And Continuations

This section is dedicated to the interpretation of the continuation of the return of a policy. We show it equals the return of another policy, called a mirror policy. The existence and closed form of mirror policies are also discussed.

## 4.1 Optimizing By Continuation With Mirror Policies

Definition 1. Let (S, A, p0, p, ρ, γ) be an MDP and let ηθ ∈ E be a history-dependent policy parameterized with the vector θ ∈ R^dΘ. In addition, let f^q_Λ be the continuation of the return of the policy ηθ corresponding to a continuation distribution q and covariance function Λ as defined in equation (8). We call a mirror policy of the original policy ηθ, under the continuation distribution q and covariance function Λ, any history-dependent policy η′θ ∈ E such that ∀θ ∈ R^dΘ:

$$f_{\Lambda}^{q}(\theta)=J(\eta_{\theta}^{\prime})\ .\tag{9}$$

Let us assume we are provided with the continuation f^q_Λ of the return of an original policy ηθ depending on the parameter θ that shall be optimized. In addition, let us assume we can compute a mirror policy η′θ for the original policy ηθ. By Definition 1, the continuation of the original policy equals the return of the mirror policy for all θ. In addition, under smoothness assumptions, all their derivatives are equal too. Therefore, maximizing the continuation of an original policy by stochastic gradient ascent can be performed by maximizing the return of its mirror policy by policy gradient. Applying state-of-the-art policy-gradient algorithms to the mirror policies for optimizing the continuations in Algorithm 1 may alleviate several of the shortcomings of the optimization procedure described earlier.

## 4.2 Existence And Closed Form Of Mirror Policies

In this section, we first show that there always exists a mirror policy. In addition, several closed forms are provided depending on the original policy, the continuation distribution, and the covariance function.

Theorem 1. For any original history-dependent policy ηθ ∈ E parameterized with the vector θ ∈ R^dΘ and for any continuation distribution q and covariance function Λ, there exists a mirror history-dependent policy η′θ ∈ E of the original policy ηθ that writes as:

$$\eta_{\theta}^{\prime}(a|h)=\mathbb{E}_{\theta^{\prime}\sim q(\cdot|\theta,\Lambda(h))}\left[\eta_{\theta^{\prime}}(a|h)\right]\ .\tag{10}$$

Theorem 1 guarantees the existence of mirror policies. Such a mirror policy is a function depending on the same parameters as its original policy but that has a different functional form and may therefore provide actions following a different distribution compared to the original policy. Theorem 1 leads to two important corollary results. First, as demonstrated in Theorem 2 in Appendix A, let η′′ be a mirror policy of η′ and let η′ be a mirror policy of the original policy η of the form of equation (10). Then, there exists a continuation for which η′′ is a mirror policy of the original policy η. It follows that the return of the mirror policy of another mirror policy is itself equal to a continuation of the original policy. Second, Theorem 1 also reveals that for a given original policy and continuation distribution, the variance of the mirror policy is defined through the continuation covariance function Λ.
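Equation (10) also prescribes how to sample actions from a mirror policy: first draw a disturbed parameter vector from q, then draw the action from the original policy under those parameters. A minimal sketch for a Gaussian q; the helper arguments are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mirror_action(theta, Lam_fn, policy_sample, history):
    """Draw a ~ eta'_theta(.|h) = E_{theta' ~ q(.|theta, Lam(h))}[eta_theta'(.|h)]."""
    Lam = Lam_fn(history)                              # history-dependent covariance
    theta_prime = rng.multivariate_normal(theta, Lam)  # theta' ~ q(.|theta, Lam(h))
    return policy_sample(theta_prime, history, rng)    # a ~ eta_{theta'}(.|h)
```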
Furthermore, we recall that the variance of the continuation is a hyperparameter that shall be selected for each iteration of the optimization by continuation, see Section 3. This choice of hyperparameter is thus reflected as the choice of the variance of a mirror policy. The expert making this choice sees the effect of the disturbed parameters on the environment through the variance of the mirror policy. From a practical perspective, it is probably easier to quantify the effect on the local extrema as a function of the variance of the mirror policy rather than as a function of the variance of the continuation.

Property 4.1. Let the original policy πθ ∈ Π be a Markov policy and let the covariance function depend solely on the last state in the history. Then, there exists a mirror Markov policy π′θ ∈ Π.

Property 4.1 is an intermediate result providing sufficient assumptions on the continuation for having mirror Markov policies. Note that for this type of continuation, the parameters of the policy are disturbed independently of the history followed by the agent.

Property 4.2. Let the original policy π^GP_θ ∈ Π be a Gaussian policy as defined in equation (3) with affine function approximators. Let the covariance function depend solely on the last state in the history and let the distribution q be a Gaussian distribution. Then, there exists a mirror Markov policy π′θ ∈ Π such that for all states s ∈ S, it converges towards a Gaussian policy in the limit as the affine coefficients of the covariance matrix Σθ(s) approach zero (∥∇θΣθ(s)∥ → 0):

$$\pi_{\theta}^{\prime}(a|s)\to\mathcal{N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(s))\ ,\tag{11}$$

where Σ′θ(s) = Cθ(s) + Σθ(s) and Cθ(s) = ∇θµθ(s)^T Λ(s) ∇θµθ(s).

Under the assumptions of Property 4.2, a mirror policy can be approached by a policy that only differs from the original one by having a variance which is increased by the term Cθ(s), proportional to the variance of the continuation. In particular, when the variance of the original policy π^GP_θ is solely dependent on the state, then ∥∇θΣθ(s)∥ = 0 and π′θ(a|s) = N(a|µθ(s), Σ′θ(s)). In this case, for any θ, the covariance matrix of this mirror policy is additionally bounded from below such that Σ′θ(s) ⪰ Cθ(s).

Property 4.3. Let the original policy µθ ∈ M be an affine deterministic policy. Let the covariance function depend solely on the last state in the history and let the distribution q be a Gaussian distribution. Then, the Markov policy π^GP′_θ ∈ Π is a mirror policy:

$$\pi_{\theta}^{GP^{\prime}}(a|s)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(s))\ ,\tag{12}$$

where Σ′θ(s) = ∇θµθ(s)^T Λ(s) ∇θµθ(s). Therefore, under some assumptions, disturbing a deterministic policy and optimizing it afterwards can be interpreted as optimizing the continuation of the return of this policy.

Property 4.4. Let the original policy µθ ∈ M be an affine deterministic policy. Let the distribution q be a Gaussian distribution. Then, the policy η′θ ∈ E is a mirror policy:

$$\eta_{\theta}^{\prime}(a|h)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(h))\ ,\tag{13}$$

where Σ′θ(h) = ∇θµθ(s)^T Λ(h) ∇θµθ(s).

Property 4.4 extends Property 4.3 to more general continuation distributions. This extension is used later to justify the interest of optimizing history-dependent policies in order to optimize an underlying deterministic policy by continuation. The theorem and properties are proven in Appendix A.
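Property 4.3 can be verified numerically in one dimension: for an affine policy µθ(s) = θ^T φ(s), Gaussian parameter noise with covariance Λ induces Gaussian actions with variance φ(s)^T Λ φ(s). A minimal sketch, with an arbitrary feature vector and covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

phi = np.array([1.5, -0.3])                   # features of a fixed state s (illustrative)
theta = np.array([0.7, 2.0])
Lam = np.array([[0.2, 0.05], [0.05, 0.1]])    # continuation covariance (illustrative)

# Disturb the parameters, theta' ~ N(theta, Lam), and act deterministically.
thetas = rng.multivariate_normal(theta, Lam, size=200_000)
actions = thetas @ phi                        # a = mu_{theta'}(s) = theta'^T phi(s)

# Property 4.3: the induced action distribution is N(theta^T phi, phi^T Lam phi).
print(actions.mean(), theta @ phi)            # means agree
print(actions.var(), phi @ Lam @ phi)         # variances agree (approx. 0.414)
```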
## 5 Implicit Optimization By Continuation

In this section, two formulations (each consisting of a parameterized policy and a learning objective) used by several policy-gradient algorithms are analyzed, relying on original and mirror policies. In Section 5.1, we show that optimizing each formulation by local search corresponds to optimizing a continuation. The optimized policy is thus the mirror policy of an unknown original policy. We show the existence of the corresponding continuation and original policy and discuss their closed form. This analysis provides a novel interpretation of the state-of-the-art algorithms for direct policy optimization. We discuss the role of stochastic policies in light of this interpretation in Section 5.2.

## 5.1 Gaussian Policies And Regularization

The policy-gradient literature has mainly focused on optimizing two problem formulations by local search - typically with stochastic gradient ascent and (approximate) trust-region methods. First, the vast majority of works focuses on optimizing the return of Gaussian policies (Duan et al., 2016; Andrychowicz et al., 2020). Second, in many formulations this objective function is extended by adding a bonus based on the entropy of the optimized policy (Williams & Peng, 1991; Haarnoja et al., 2019). We show that when optimizing a policy according to these formulations, there exists an (unknown) deterministic original policy and a continuation under which the optimized policy is a mirror policy. Provided with the local-search algorithm from the policy-gradient method, we conclude that optimizing both formulations is equivalent to implicitly optimizing a deterministic policy by continuation.

First, we recall that under Property 4.3, for any affine deterministic policy µθ, there exists an affine Gaussian mirror policy π^GP′_θ as defined by equation (12). In Property 5.1, the converse of Property 4.3 is stated, which answers the following question: under which conditions is a Gaussian policy the mirror policy of an (unknown) deterministic policy? For this converse statement to be true, the transformation between covariance functions in Property 4.3 must be surjective, which is guaranteed if dA ≤ dΘ and ∇θµθ(s) is full rank. The first assumption is always met in practice and the second is met when no action is a deterministic function of the others.

Property 5.1. Let π^GP′_θ be an affine Gaussian policy with mean function µθ, and with covariance function Σ′θ = Σ′ constant with respect to the parameters of the policy (i.e., a function depending solely on the state). If dA ≤ dΘ and if ∇θµθ(s) is full rank, then there exists a continuation, with covariance Λ proportional to Σ′, for which π^GP′_θ is a mirror policy of the original policy µθ.

Entropy regularization ensures that the variance of the policy remains sufficiently large during the optimization process.2 Similar objectives are pursued with maximum entropy reinforcement learning (Haarnoja et al., 2019) or with (approximate) trust-region methods where the trust-region constraint is dualized (Schulman et al., 2015; 2017). Let us consider an affine Gaussian original policy π^GP_θ with constant covariance Σθ = Σ. Under Property 4.2, there exists another affine Gaussian policy π^GP′_θ that is a mirror policy of π^GP_θ. This mirror policy has the same mean function and a covariance function bounded from below by Cθ = C. Property 5.2 provides the converse and answers the following question: under which conditions is a Gaussian policy with sufficiently large covariance the mirror policy of an (unknown and Gaussian) policy? Similar to the previous property, this is guaranteed when dA ≤ dΘ and ∇θµθ(s) is full rank.

2 Formally, for two matrices A and B, we have that A ⪰ B ⇒ |A| ≥ |B| (Siotani, 1967). As the entropy of a Gaussian policy is a concave function of the determinant of the covariance matrix, a bounded covariance matrix implies a bounded entropy. The entropy-regularization learning objective can therefore be interpreted as the Lagrangian relaxation of the latter entropy-bounded optimization problem.
Property 5.2 provides the converse and answers to the question: under which conditions a Gaussian policy with sufficiently large covariance is the mirror policy of an (unknown and Gaussian) policy. Similar to the previous property, this is guaranteed when dA ≤ dΘ and ∇θµθ(s) is full rank. 2Formally, for two matrices A and B, we have that A ⪰ B ⇒ |A*| ≥ |*B| (Siotani, 1967). As the entropy of a Gaussian policy is a concave function of the determinant of the covariance matrix, a bounded covariance matrix implies a bounded entropy. The entropy-regularization learning objective can therefore be interpreted as the Lagrangian relaxation of the latter entropy-bounded optimization problem. Property 5.2. Let π GP ′ θbe an affine Gaussian policy with mean function µθ*, and with covariance function* Σ ′ θ = Σ′ ⪰ C constant with respect to the parameters of the policy (i.e., a function depending solely on the state) and bounded from bellow by C. If dA ≤ dΘ and if ∇θµθ(s) is full rank, then, there exists a continuation, with covariance Λ proportional to C*, for which* π GP ′ θis a mirror policy of an original Gaussian policy π GP θ with the same mean function µθ *and with constant covariance function* Σ ⪯ Σ ′. The two previous properties indicate that a Gaussian policy is guaranteed to be a mirror policy of another policy, Gaussian or deterministic, under some assumptions. If we furthermore guarantee that the continuation covariance decreases during the optimization, policy-gradient algorithms optimizing affine Gaussian policies can be interpreted as algorithms optimizing an original policy by continuation. Let us consider two cases, each corresponding to a problem formulation, where we optimize by policy gradient an affine Gaussian policy π GP ′ θ with covariance function constant with respect to the parameters of the policy. First, we consider the case where its covariance matrix decreases during the optimization through a manual scheduling. In this context, under property 5.1, there exists an original deterministic policy and the covariance of the continuation decreases through the optimization, such that the policy-gradient algorithm optimizes this policy by continuation. Second, we consider the case where the entropy is regularized with a decreasing regularization term (e.g., by scheduling the Lagrange multiplier). Then, as entropy regularization can be seen as a constraint on the covariance of the policy, under property 5.2, there exists an original Gaussian policy and the covariance of the continuation decreases through the optimization, such that the policy-gradient algorithm optimizes this stochastic policy by continuation. Finally, as stated previously and shown in Theorem 2 in Appendix A, optimizing the return of the mirror policy of another mirror policy is equivalent to optimizing a continuation of the original policy. Therefore, policy-gradient algorithms that optimize affine Gaussian policies with both discounted covariance and decreasing regularization by local search can also be interpreted as algorithms optimizing the mean function (i.e., a deterministic policy) of this policy by continuation. We now illustrate how policy-gradient algorithms implicitly optimize by continuation. We take as example an environment in which a car moves in a valley and must reach its lowest point (positioned in x*target*) to maximize the expected sum of rewards gathered by the agent, see Appendix B. 
We assume we want to find the best K-controller, i.e., a deterministic policy $\mu_\theta(x) = \theta \times (x - x_{target})$, where $x$ is the position of the car. Directly optimizing such a policy is in practice subject to converging to a local extremum, as explained hereafter. We thus consider the Gaussian policy $\pi^{GP}_\theta(a|x) = \mathcal{N}(a|\mu_\theta(x), \sigma')$, where $\mu_\theta(x)$ and $\sigma'$ are the mean and variance of the policy, respectively. This policy is a mirror policy of the deterministic policy $\mu_\theta$ under a continuation of variance $\lambda = \sigma'/(x - x_{target})^2$, see Property 4.3. As can be seen in Figure 1, for each value of $\sigma'$, the return of the mirror policy equals the smoothed return of the original deterministic policy $\mu_\theta$. Consequently, optimizing the Gaussian policy by policy gradient is equivalent to optimizing the deterministic policy by continuation. For a well-chosen sequence of $\sigma'$, with a fixed scheduling or with adequate entropy regularization, the successive solutions found by local search will escape the basin of attraction of the suboptimal parameter for any initial parameter of the local search, whereas optimizing the deterministic policy directly would provide suboptimal solutions.

![10_image_0.png](10_image_0.png)

Figure 1: Illustration of the return of the policies $\mathcal{N}(a|\mu_\theta(x), \sigma')$, where $\mu_\theta(x) = \theta \times (x - x_{target})$, for different $\sigma'$ values. The darker the curve, the smaller $\sigma'$, and the darkest one is the return of the deterministic policy $\mu_\theta$. The green dots represent the global maxima and the red dots the local maxima. For some sufficiently large value of $\sigma'$, the return of the policy has a single extremum. For a well-chosen schedule of decreasing $\sigma'$, a local-search algorithm will track the sequence of global extrema and converge towards the optimal deterministic policy.

In this section, we have established an equivalence between the optimization of some policies by policy gradient and the optimization of an underlying policy by continuation. It opens up new questions about the hypothesis space of the (mirror) policy to consider in practice in order to best exploit the properties of continuations. These considerations are made in the next section. We finally recall that a central assumption in the previous results is the affinity of policies. Seemingly restrictive, such policies make it possible to optimally control complex environments in practice (Rajeswaran et al., 2017) and give first-order results for non-affine policies.

## 5.2 Continuations For Interpreting Stochastic Policies

In practice, we know that optimizing stochastic policies tends to converge to a final policy with low variance and better performance than if we had directly optimized a deterministic policy. Practitioners often justify this observation by the need to explore through a stochastic policy. Nevertheless, to our knowledge, this concept inherited from bandit theory is not well defined for direct policy optimization. The previous analysis establishes an equivalence between optimizing stochastic policies with policy-gradient algorithms and optimizing deterministic policies by continuation. Furthermore, as explained in Section 3.2, the continuation equation (8) consists in smoothing the return of this deterministic policy through the continuation distribution. Local optima tend to be removed when the variance of the continuation is sufficiently large.
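The following minimal sketch (in Python; not the authors' code) illustrates this smoothing mechanism numerically. The one-dimensional return landscape `J` below is hypothetical and merely mimics a landscape with a spurious local maximum and a global maximum, as in the car example; the gradient of the continuation is estimated with a standard score-function estimator.

```python
# A minimal, self-contained sketch of optimization by continuation: the toy
# return J is replaced by its Gaussian continuation
# E_{theta' ~ N(theta, sigma^2)}[J(theta')], and stochastic gradient ascent
# tracks its maximum while the smoothing level sigma is annealed.
import numpy as np

rng = np.random.default_rng(0)

def J(theta):
    # Hypothetical return: global maximum at theta = 2, local one at theta = -1.5.
    return np.exp(-(theta - 2.0) ** 2) + 0.6 * np.exp(-4.0 * (theta + 1.5) ** 2)

def smoothed_grad(theta, sigma, n_samples=2000):
    # Unbiased score-function estimate of the gradient of the smoothed return;
    # subtracting the baseline J(theta) reduces variance without adding bias.
    eps = sigma * rng.standard_normal(n_samples)
    return ((J(theta + eps) - J(theta)) * eps).mean() / sigma ** 2

theta = -2.5  # initialized inside the basin of attraction of the local maximum
for sigma in [2.0, 1.0, 0.5, 0.2, 0.05]:  # decreasing continuation schedule
    for _ in range(300):                  # local search at this smoothing level
        theta += 0.2 * smoothed_grad(theta, sigma)
    print(f"sigma={sigma:4.2f}  theta={theta:+.3f}  J(theta)={J(theta):.3f}")
# The iterate escapes the local basin while sigma is large and then tracks the
# global maximizer as sigma shrinks; plain gradient ascent on J itself from
# theta = -2.5 stalls at the local maximum near -1.5.
```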
Optimizing stochastic policies and regularizing the entropy, as in most state-of-the-art policy-gradient algorithms, is therefore expected to avoid local extrema before converging towards policies with small variance. We thus provide a theoretical motivation for the performance reached by algorithms applying exploration as understood in direct policy optimization.

The relationships between optimization by continuation and policy gradient in Section 5.1 have been established relying on Property 4.2 and Property 4.3. They assume continuations where the covariance matrix depends only on the current state and not on the whole observed history. In the general case, Property 4.4 allows one to extend these results by performing an analysis similar to Section 5.1. To be more specific, let us assume an affine Gaussian policy $\pi^{GP'}_\theta$, where the mean $\mu_\theta$ is a function of the state and where the covariance $\Sigma_\theta = \Sigma$ is a function of the history and is constant with respect to $\theta$. Under this assumption, if $d_A \leq d_\Theta$ and $\nabla_\theta\mu_\theta(s)$ is full rank, the return of the policy $\pi^{GP'}_\theta$ is equal to an (unknown) continuation of the mean function $\mu_\theta$ (i.e., a deterministic policy). Furthermore, optimizing the Gaussian policy by policy gradient while discounting the covariance can be interpreted as optimizing the deterministic policy $\mu_\theta$ by continuation. In practice, this result suggests optimizing history-dependent policies by policy gradient to take advantage of the most general regularization of the objective function through implicit continuation. A similar observation was recently made by Mutti et al. (2022) who argued that history-dependent policies are required when more complex regularizations are involved. In parallel, Patil et al. (2022) also discussed the potential advantage of using history-dependent parameterized policies for correctly approximating optimal policies with little representation power.

Finally, a last point has been left open in the previous discussions, namely the update of the covariance matrix of the mirror policies. The latter is defined through the covariance of the continuation. Therefore, the covariance must decrease through the optimization and must be chosen to avoid local optima. One direction to investigate in order to select a variance that removes local extrema is to update the parameters of the policy by following a combination of two directions: the functional gradient of the optimized policy's return with respect to the policy mean and the functional gradient of another measure (to be defined) with respect to the policy variance. An example of heuristic measure for smoothness might be the entropy of the actions and/or states encountered in histories. This strategy obviously does not follow the classical approach when optimizing stochastic policies, where the covariance is adapted by the policy-gradient algorithm to locally maximize the return, and the exact procedure for updating the variance will require future studies. However, the empirical inefficiency of this classical approach has already been highlighted in previous works that improved the performance of policy-gradient algorithms by exploring alternative learning objective functions (Houthooft et al., 2018; Papini et al., 2020).

## 6 Conclusion

In this work, we have studied the problem formulation, i.e., policy parameterization and reward-shaping strategy, when solving direct policy optimization problems.
More particularly, we established connections between formulations of state-of-the-art policy-gradient algorithms and the optimization by continuation framework (Allgower & Georg, 1980). We have shown that algorithms optimizing stochastic policies and regularizing the entropy inherit the properties of optimization by continuation and are thus less subject to converging towards local optima. In addition, the role of the variance of the policies is reinterpreted in this framework: it is a parameter of the optimization procedure to adapt in order to avoid local extrema. Additionally, to inherit the properties of generic continuations, it may be beneficial to consider variances that are functions of the history of states and actions observed at each time step.

Our study leaves several questions open. Firstly, our results rely on several assumptions that may not hold in practice. Specifically, it is unclear how our findings can be generalized to non-affine policies and to alternatives to Gaussian policies. Nonetheless, our results can be extended in cases where we can obtain an analytic expression for the mirror policy outlined in Theorem 1. While finding such an expression may be challenging in general, we can easily extend our conclusions to non-affine policies by considering the first-order approximation. Additionally, our study is focused on Gaussian policies, which are commonly used in continuous state-action spaces. However, for discrete action spaces, a natural choice of policy is a Bernoulli distribution over the actions (or a categorical distribution for more than one action). If the state space is also discrete, this distribution may be parameterized by a table providing the success probability of the Bernoulli distribution for each state. In the case of a Beta continuation distribution, a mirror policy can be derived where actions follow a Beta-binomial distribution in each state, a result known in Bayesian inference as the Beta distribution is a conjugate distribution of the binomial distribution (Bishop & Nasrabadi, 2006). An analysis of this mirror policy would allow us to draw conclusions equivalent to those of the continuous case studied in this paper. Secondly, the study focused on entropy regularization of the policy only. Recent works have underlined the benefits of other regularization strategies that enforce the spread of other distributions, such as the state visitation frequency or the marginal state probability (Hazan et al., 2019; Guo et al., 2021; Mutti et al., 2022). Future research is also needed to better understand the effect of these regularizations on the optimization procedure. Finally, we give a new interpretation for the variance of policies that suggests it shall be updated to avoid local extrema rather than to maximize the return locally. A first strategy for updating the variance is proposed in Section 5.2, which opens the door to further research and new algorithm development.

## 7 Acknowledgments

The authors would like to thank Csaba Szepesvári for the discussion on some mathematical aspects that allowed us to increase the quality of this study. We also thank our colleagues Gaspard Lambrechts, Arnaud Delaunoy, Pascal Leroy, and Bardhyl Miftari for valuable comments on this manuscript. Adrien Bolland gratefully acknowledges the financial support of a research fellowship of the F.R.S.-FNRS.

## References

Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. Optimality and approximation with policy gradient methods in markov decision processes.
In *Conference on Learning Theory*, pp. 64–66. PMLR, 2020. Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy on policy optimization. In *International Conference on Machine Learning*, pp. 151–160. PMLR, 2019. Eugene L Allgower and Kurt Georg. *Numerical continuation methods: an introduction*, volume 13. Springer Series in Computational Mathematics, 1980. Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphael Marinier, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, et al. What matters in on-policy reinforcement learning? a large-scale empirical study. *arXiv preprint arXiv:2006.05990*, 2020. Amrit Singh Bedi, Anjaly Parayil, Junyu Zhang, Mengdi Wang, and Alec Koppel. On the sample complexity and metastability of heavy-tailed policy search in continuous control. *arXiv preprint arXiv:2106.08414*, 2021. Amrit Singh Bedi, Souradip Chakraborty, Anjaly Parayil, Brian M Sadler, Pratap Tokekar, and Alec Koppel. On the hidden biases of policy mirror ascent in continuous action spaces. In International Conference on Machine Learning, pp. 1716–1731. PMLR, 2022. Yoshua Bengio. Learning deep architectures for ai. *Foundations and Trends in Machine Learning*, 2(1): 1–127, 2009. Jalaj Bhandari and Daniel Russo. Global optimality guarantees for policy gradient methods. arXiv preprint arXiv:1906.01786, 2019. Sujay Bhatt, Alec Koppel, and Vikram Krishnamurthy. Policy gradient using weak derivatives for reinforcement learning. In *Conference on Decision and Control (CDC)*, volume 58, pp. 5531–5537. IEEE, 2019. Christopher M Bishop and Nasser M Nasrabadi. *Pattern recognition and machine learning*, volume 4. Springer, 2006. Andrew Blake and Andrew Zisserman. *Visual reconstruction*. MIT press, 1987. Léon Bottou. Large-scale machine learning with stochastic gradient descent. In *Proceedings of COMPSTAT'2010*, pp. 177–186. Springer, 2010. Rob Brekelmans, Tim Genewein, Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Shane Legg, and Pedro Ortega. Your policy regularizer is secretly an adversary. *arXiv preprint arXiv:2203.12592*, 2022. Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst. Reinforcement learning and dynamic programming using function approximators. CRC press, 2017. Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi. Fast global convergence of natural policy gradient methods with entropy regularization. *Operations Research*, 70(4):2563–2578, 2022. Herman Chernoff and Lincoln E Moses. *Elementary decision theory*. Courier Corporation, 2012. Po-Wei Chou, Daniel Maturana, and Sebastian Scherer. Improving stochastic policy gradients in continuous control with deep reinforcement learning using the beta distribution. In International Conference on Machine Learning, pp. 834–843. PMLR, 2017. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In *International Conference on Machine Learning*, pp. 1329–1338. PMLR, 2016. Yasuhiro Fujita and Shin-ichi Maeda. Clipped action policy gradient. In International Conference on Machine Learning, pp. 1597–1606. PMLR, 2018. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT press, 2016. Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Alaa Saade, Shantanu Thakoor, Bilal Piot, Bernardo Avila Pires, Michal Valko, Thomas Mesnard, Tor Lattimore, and Rémi Munos. Geometric entropic exploration. 
*arXiv preprint arXiv:2101.02055*, 2021. Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2019. Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic nonconvex problems. In *International Conference on Machine Learning*, pp. 1833–1841. PMLR, 2016. Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In *International Conference on Machine Learning*, pp. 2681–2691. PMLR, 2019. Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. *Advances in Neural Information Processing Systems*, 31, 2018. Hisham Husain, Kamil Ciosek, and Ryota Tomioka. Regularized policies are reward robust. In *International* Conference on Artificial Intelligence and Statistics, pp. 64–72. PMLR, 2021. Riashat Islam, Zafarali Ahmed, and Doina Precup. Marginalized state distribution entropy regularization in policy optimization. *arXiv preprint arXiv:1912.05128*, 2019. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International Conference on Machine Learning*, pp. 1928–1937. PMLR, 2016. Hossein Mobahi and John W Fisher. On the link between gaussian homotopy continuation and convex envelopes. In *International Workshop on Energy Minimization Methods in Computer Vision and Pattern* Recognition, pp. 43–56. Springer, 2015. Hossein Mobahi and John Fisher III. A theoretical analysis of optimization by gaussian continuation. In Conference on Artificial Intelligence, volume 29. AAAI, 2015. Hossein Mobahi, C Lawrence Zitnick, and Yi Ma. Seeing through the blur. In *Conference on Computer* Vision and Pattern Recognition, pp. 1736–1743. IEEE, 2012. Walter Murray and Kien-Ming Ng. An algorithm for nonlinear optimization problems with binary variables. Computational optimization and applications, 47(2):257–288, 2010. Mirco Mutti, Riccardo De Santi, and Marcello Restelli. The importance of non-markovianity in maximum state entropy exploration. *arXiv preprint arXiv:2202.03060*, 2022. Ofir Nachum, Mohammad Norouzi, and Dale Schuurmans. Improving policy gradient by exploring underappreciated rewards. *arXiv preprint arXiv:1611.09321*, 2016. Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527–566, 2017. Matteo Papini, Andrea Battistello, and Marcello Restelli. Balancing learning speed and stability in policy gradient via adaptive exploration. In *International conference on artificial intelligence and statistics*, pp. 1188–1199. PMLR, 2020. Harsh Nilesh Pathak and Randy Paffenroth. Parameter continuation methods for the optimization of deep neural networks. In *International Conference on Machine Learning And Applications (ICMLA)*, volume 18, pp. 1637–1643. IEEE, 2019. Gandharv Patil, Aditya Mahajan, and Doina Precup. On learning history based policies for controlling markov decision processes. *arXiv preprint arXiv:2211.03011*, 2022. 
Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In *International conference on Machine learning*, volume 24, pp. 745–750, 2007. Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. *Neural networks*, 21(4):682–697, 2008. Aravind Rajeswaran, Kendall Lowrey, Emanuel V Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. *Advances in Neural Information Processing Systems*, 30, 2017. Calyampudi Radhakrishna Rao. *Linear statistical inference and its applications*, volume 2. Wiley New York, 1973. Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. *arXiv preprint arXiv:1703.03864*, 2017. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International Conference on Machine Learning*, pp. 1889–1897. PMLR, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Parameter-exploring policy gradients. *Neural Networks*, 23(4):551–559, 2010. Lior Shani, Yonathan Efroni, and Shie Mannor. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. In *Conference on Artificial Intelligence*, volume 34, pp. 5668–5675. AAAI, 2020. Weijia Shao, Christian Geißler, and Fikret Sivrikaya. Graduated optimization of black-box functions. *arXiv* preprint arXiv:1906.01279, 2019. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In *International Conference on Machine Learning*, pp. 387–395. PMLR, 2014. Herbert A Simon. A behavioral model of rational choice. *The quarterly journal of economics*, 69(1):99–118, 1955. Minoru Siotani. Some applications of loewner's ordering on symmetric matrices. *Annals of the Institute of* Statistical Mathematics, 19:245–259, 1967. Joe Staines and David Barber. Variational optimization. *arXiv preprint arXiv:1212.4507*, 2012. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Layne T Watson and Raphael T Haftka. Modern homotopy methods in optimization. Computer Methods in Applied Mechanics and Engineering, 74(3):289–305, 1989. Daan Wierstra, Tom Schaul, Jan Peters, and Juergen Schmidhuber. Episodic reinforcement learning by logistic reward-weighted regression. In *International Conference on Artificial Neural Networks*, pp. 407– 416. Springer, 2008. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. *Connection Science*, 3(3):241–268, 1991. Huaqing Xiong, Tengyu Xu, Lin Zhao, Yingbin Liang, and Wei Zhang. Deterministic policy gradient: Convergence analysis. In *Conference on Uncertainty in Artificial Intelligence*, volume 28, 2022. Jiaxin Zhang, Hoang Tran, Dan Lu, and Guannan Zhang. A novel evolution strategy with directional gaussian smoothing for blackbox optimization. *arXiv preprint arXiv:2002.03001*, 2020a. Junzi Zhang, Jongho Kim, Brendan O'Donoghue, and Stephen Boyd. 
Sample efficient reinforcement learning with reinforce. In *Conference on Artificial Intelligence*, volume 35, pp. 10887–10895. AAAI, 2021.

Kaiqing Zhang, Alec Koppel, Hao Zhu, and Tamer Basar. Global convergence of policy gradient methods to (almost) locally optimal policies. *SIAM Journal on Control and Optimization*, 58(6):3586–3612, 2020b.

## A Theoretical Results On Mirror Policies

Theorem 1. *For any original history-dependent policy $\eta_\theta \in E$ parameterized with the vector $\theta \in \mathbb{R}^{d_\Theta}$ and for any continuation distribution $q$ and covariance function $\Lambda$, there exists a mirror history-dependent policy $\eta'_\theta \in E$ of the original policy $\eta_\theta$ that writes as:*

$$\eta_{\theta}^{\prime}(a|h)=\mathop{\mathbb{E}}_{\theta^{\prime}\sim q(\cdot|\theta,\Lambda(h))}\left[\eta_{\theta^{\prime}}(a|h)\right]\ . \tag{14}$$

Proof. The continuation $f^q_\Lambda$ is defined in equation (8) as the expectation of the function (7) over the distribution (8) such that:

$$f^q_\Lambda(\theta)=\mathop{\mathbb{E}}_{\substack{s_0\sim p_0(\cdot)\\ \theta_t\sim q(\cdot|\theta,\Lambda(h_t))\\ a_t\sim\eta_{\theta_t}(\cdot|h_t)\\ s_{t+1}\sim p(\cdot|s_t,a_t)}}\left[\sum_{t=0}^{\infty}\gamma^{t}\rho(s_t,a_t)\right] \tag{15}$$
$$=\int p_0(s_0)\left(\prod_{t=0}^{\infty}\eta_{\theta_t}(a_t|h_t)\,q(\theta_t|\theta,\Lambda(h_t))\,p(s_{t+1}|s_t,a_t)\right)\left(\sum_{t=0}^{\infty}\gamma^{t}\rho(s_t,a_t)\right)ds_0\,da_0\,d\theta_0\ldots\ . \tag{16}$$

For the sake of simplifying the notations, let $h = (s_0, a_0, s_1, a_1, \ldots) \in H$ be a history and let $R(h)$ be the discounted sum of rewards computed from this history. Let us reorder the terms of the integral and change the order of integration such that:

$$f^q_\Lambda(\theta)=\int p_0(s_0)\left(\prod_{t=0}^{\infty}\eta_{\theta_t}(a_t|h_t)\,q(\theta_t|\theta,\Lambda(h_t))\,p(s_{t+1}|s_t,a_t)\right)R(h)\;dh\,d\theta_0\ldots \tag{17}$$
$$=\int\left(\prod_{t=0}^{\infty}\eta_{\theta_t}(a_t|h_t)\,q(\theta_t|\theta,\Lambda(h_t))\right)\left(p_0(s_0)\prod_{t=0}^{\infty}p(s_{t+1}|s_t,a_t)\right)R(h)\;dh\,d\theta_0\ldots \tag{18}$$
$$=\int\left(\int\prod_{t=0}^{\infty}\eta_{\theta_t}(a_t|h_t)\,q(\theta_t|\theta,\Lambda(h_t))\;d\theta_0\ldots\right)\left(p_0(s_0)\prod_{t=0}^{\infty}p(s_{t+1}|s_t,a_t)\right)R(h)\;dh\ . \tag{19}$$

In the inner integral over the parameters, each term of the product depends solely on the parameter at a single time step, such that the integral of the product simplifies to the product of the integrals as follows:

$$f^q_\Lambda(\theta)=\int\left(\prod_{t=0}^{\infty}\int\eta_{\theta_t}(a_t|h_t)\,q(\theta_t|\theta,\Lambda(h_t))\;d\theta_t\right)\left(p_0(s_0)\prod_{t=0}^{\infty}p(s_{t+1}|s_t,a_t)\right)R(h)\;dh \tag{20}$$
$$=\int\left(\prod_{t=0}^{\infty}\eta^{\prime}_{\theta}(a_t|h_t)\right)\left(p_0(s_0)\prod_{t=0}^{\infty}p(s_{t+1}|s_t,a_t)\right)R(h)\;dh\;. \tag{21}$$

By definition, the latter equation is equal to the return $J(\eta'_\theta)$ of the policy $\eta'_\theta$ for any parameter vector $\theta$. Therefore, $\eta'_\theta$ is a mirror policy of the original policy $\eta_\theta$ under the process $q$ and covariance matrix $\Lambda$. □

Theorem 2. *Let $q$ be a continuation distribution and let $\Lambda$ be a covariance function as defined in Section 3.2. In addition, let $\eta_\theta$, $\eta'_\theta$ and $\eta''_\theta$ be three parameterized history-dependent policies such that:*

$$\eta_{\theta}^{\prime}(a|h)=\int\eta_{\theta^{\prime}}(a|h)\,q(\theta^{\prime}|\theta,\Lambda(h))\,d\theta^{\prime} \tag{22}$$
$$\eta_{\theta}^{\prime\prime}(a|h)=\int\eta_{\theta^{\prime\prime}}^{\prime}(a|h)\,q(\theta^{\prime\prime}|\theta,\Lambda(h))\,d\theta^{\prime\prime}\;. \tag{23}$$

*Then, $\eta'_\theta$ is a mirror policy of the original policy $\eta_\theta$ and $\eta''_\theta$ is a mirror policy of the original policy $\eta'_\theta$ under continuation distribution $q$ and covariance function $\Lambda$. In addition, there exists a continuation for which $\eta''_\theta$ is a mirror policy of the original policy $\eta_\theta$.*

Proof.
First, $\eta'_\theta$ is a mirror policy of the original policy $\eta_\theta$ and $\eta''_\theta$ is a mirror policy of the original policy $\eta'_\theta$ under continuation distribution $q$ and covariance function $\Lambda$, see Theorem 1. Then, let us substitute equation (22) in equation (23):

$$\eta^{\prime\prime}_{\theta}(a|h)=\int\eta^{\prime}_{\theta^{\prime\prime}}(a|h)\,q(\theta^{\prime\prime}|\theta,\Lambda(h))\;d\theta^{\prime\prime} \tag{24}$$
$$=\int\left(\int\eta_{\theta^{\prime}}(a|h)\,q(\theta^{\prime}|\theta^{\prime\prime},\Lambda(h))\;d\theta^{\prime}\right)q(\theta^{\prime\prime}|\theta,\Lambda(h))\;d\theta^{\prime\prime} \tag{25}$$
$$=\int\eta_{\theta^{\prime}}(a|h)\left(\int q(\theta^{\prime}|\theta^{\prime\prime},\Lambda(h))\,q(\theta^{\prime\prime}|\theta,\Lambda(h))\;d\theta^{\prime\prime}\right)d\theta^{\prime}\;. \tag{26}$$

We thus have that:

$$\eta_{\theta}^{\prime\prime}(a|h)=\int\eta_{\theta^{\prime}}(a|h)\,p_{\theta}(\theta^{\prime}|h)\;d\theta^{\prime} \tag{27}$$
$$p_{\theta}(\theta^{\prime}|h)=\int q(\theta^{\prime}|\theta^{\prime\prime},\Lambda(h))\,q(\theta^{\prime\prime}|\theta,\Lambda(h))\;d\theta^{\prime\prime}\;. \tag{28}$$

The distribution $p_\theta$ is a continuation distribution with a spread depending on the history $h$ through the covariance function $\Lambda$. By Theorem 1, $\eta''_\theta$ is a mirror policy of the original policy $\eta_\theta$. □

Property 4.1. *Let the original policy $\pi_\theta \in \Pi$ be a Markov policy and let the covariance function depend solely on the last state in the history. Then, there exists a mirror Markov policy $\pi'_\theta \in \Pi$.*

Proof. By hypothesis, the covariance matrix only depends on the last state $s_t$ of the history $h_t$, therefore:

$$q(\theta_t|\theta,\Lambda(h_t))=q(\theta_t|\theta,\Lambda(s_t))\;. \tag{29}$$

In addition, the original policy $\pi_\theta$ is a Markov policy, therefore:

$$\eta_\theta(a_t|h_t)=\pi_\theta(a_t|s_t)\;. \tag{30}$$

The closed form of the mirror policy, provided by equation (14), can thus be simplified as:

$$\eta^{\prime}_{\theta}(a_{t}|h_{t})=\int\eta_{\theta_{t}}(a_{t}|h_{t})\,q(\theta_{t}|\theta,\Lambda(h_{t}))\;d\theta_{t} \tag{31}$$
$$=\int\pi_{\theta_{t}}(a_{t}|s_{t})\,q(\theta_{t}|\theta,\Lambda(s_{t}))\;d\theta_{t}\;. \tag{32}$$

The previous equation is independent of $h_t$ knowing $s_t$; there thus exists a Markov mirror policy $\pi'_\theta \in \Pi$ respecting Theorem 1 such that:

$$\eta_{\theta}^{\prime}(a_{t}|h_{t})=\pi_{\theta}^{\prime}(a_{t}|s_{t})\;. \tag{33}$$

□

Property 4.2. *Let the original policy $\pi^{GP}_\theta \in \Pi$ be a Gaussian policy as defined in equation (3) with affine function approximators. Let the covariance function depend solely on the last state in the history and let the distribution $q$ be a Gaussian distribution. Then, there exists a mirror Markov policy $\pi'_\theta \in \Pi$ such that for all states $s \in S$, it converges towards a Gaussian policy in the limit as the affine coefficients of the covariance matrix $\Sigma_\theta(s)$ approach zero ($\|\nabla_\theta\Sigma_\theta(s)\| \to 0$):*

$$\pi_{\theta}^{\prime}(a|s)\to{\mathcal N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(s))\,, \tag{34}$$

*where $\Sigma'_\theta(s) = C_\theta(s) + \Sigma_\theta(s)$ and $C_\theta(s) = \nabla_\theta\mu_\theta(s)^T\,\Lambda(s)\,\nabla_\theta\mu_\theta(s)$.*

Proof.
First, the existence of a Markov mirror policy results from Property 4.1 and is provided by equation (33):

$$\pi_{\theta}^{\prime}(a_{t}|s_{t})=\int\pi_{\theta_{t}}(a_{t}|s_{t})\,q(\theta_{t}|\theta,\Lambda(s_{t}))\,d\theta_{t}\ . \tag{35}$$

In addition, $\pi_{\theta_t}$ and $q$ are Gaussian distributions by hypothesis:

$$\pi_{\theta_{t}}(a_{t}|s_{t})=\mathcal{N}(a_{t}|\mu_{\theta_{t}}(s_{t}),\Sigma_{\theta_{t}}(s_{t})) \tag{36}$$
$$q(\theta_{t}|\theta,\Lambda(s_{t}))=\mathcal{N}(\theta_{t}|\theta,\Lambda(s_{t}))\ , \tag{37}$$

where $\mu_{\theta_t}(s_t)$ and $\Sigma_{\theta_t}(s_t)$ are affine functions of $\theta_t$. Therefore, these functions can be written as follows:

$$\mu_{\theta_{t}}(s_{t})=\left(\nabla_{\theta_{t}}\mu_{\theta_{t}}(s_{t})\right)\theta_{t}+\mu^{\prime}(s_{t}) \tag{38}$$
$$\Sigma_{\theta_{t}}(s_{t})=\left(\nabla_{\theta_{t}}\Sigma_{\theta_{t}}(s_{t})\right)\theta_{t}+\Sigma^{\prime}(s_{t})\;. \tag{39}$$

For any state $s_t$, in the limit as the affine coefficients of the covariance approach zero, the covariance is such that:

$$\lim_{\|\nabla_{\theta_{t}}\Sigma_{\theta_{t}}(s_{t})\|\to0}\Sigma_{\theta_{t}}(s_{t})=\Sigma^{\prime}(s_{t})\;. \tag{40}$$

In this limit, equation (35) consists in marginalizing a conditional linear Gaussian transition model with a Gaussian prior and is such that (Bishop & Nasrabadi, 2006):

$$\lim_{\|\nabla_{\theta}\Sigma_{\theta}(s_{t})\|\to0}\pi_{\theta}^{\prime}(a_{t}|s_{t})=\mathcal{N}\left(a_{t}\,|\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)\theta+\mu^{\prime}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(s_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+\Sigma^{\prime}(s_{t})\right) \tag{41}$$
$$=\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(s_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+\Sigma_{\theta}(s_{t})\right)\,. \tag{42}$$

□

Property 4.3. *Let the original policy $\mu_\theta \in M$ be an affine deterministic policy. Let the covariance function depend solely on the last state in the history and let the distribution $q$ be a Gaussian distribution. Then, the Markov policy $\pi^{GP'}_\theta \in \Pi$ is a mirror policy:*

$$\pi_{\theta}^{GP^{\prime}}(a|s)={\mathcal N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(s))\;, \tag{43}$$

*where $\Sigma'_\theta(s) = \nabla_\theta\mu_\theta(s)^T\,\Lambda(s)\,\nabla_\theta\mu_\theta(s)$.*

Proof. The statement results from the particularization of Property 4.2 to the case of deterministic policies. Let $\pi^{GP}_\theta \in \Pi$ be an affine Gaussian policy with constant covariance matrix for any state $\Sigma_\theta(s_t) = C$. In that case, we have by Property 4.2 that $\pi'_\theta \in \Pi$ is a mirror policy as follows:

$$\pi^{\prime}_{\theta}(a_{t}|s_{t})=\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(s_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+\Sigma_{\theta}(s_{t})\right) \tag{44}$$
$$=\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(s_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+C\right)\;. \tag{45}$$

Taking the limit of $\pi^{GP}_\theta$ as the constant covariance matrix $C$ approaches zero, we get that the original policy from Property 4.2 converges to the one of Property 4.3, namely the deterministic policy $\mu_\theta$.
This implies that the policy $\pi^{GP'}_\theta \in \Pi$ provided by the limit of the mirror policy from Property 4.2, see equation (45), is a mirror policy of the original policy $\mu_\theta$ from Property 4.3:

$$\pi_{\theta}^{GP^{\prime}}(a_{t}|s_{t})=\lim_{C\to0}\pi_{\theta}^{\prime}(a_{t}|s_{t})=\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(s_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)\right)\;. \tag{46}$$

□

Property 4.4. *Let the original policy $\mu_\theta \in M$ be an affine deterministic policy. Let the distribution $q$ be a Gaussian distribution. Then, the policy $\eta'_\theta \in E$ is a mirror policy:*

$$\eta_{\theta}^{\prime}(a|h)={\mathcal N}(a|\mu_{\theta}(s),\Sigma_{\theta}^{\prime}(h))\ , \tag{47}$$

*where $\Sigma'_\theta(h) = \nabla_\theta\mu_\theta(s)^T\,\Lambda(h)\,\nabla_\theta\mu_\theta(s)$.*

Proof. The policy $\mu_{\theta_t}$ is an affine function of the parameter vector $\theta_t$ and can thus be written as follows:

$$\mu_{\theta_{t}}(s_{t})=\left(\nabla_{\theta_{t}}\mu_{\theta_{t}}(s_{t})\right)\theta_{t}+\mu^{\prime}(s_{t})\ . \tag{48}$$

In addition, the samples drawn from the process $q$ are distributed according to a Gaussian distribution:

$$q(\theta_{t}|\theta,\Lambda(h_{t}))=\mathcal{N}(\theta_{t}|\theta,\Lambda(h_{t}))\ . \tag{49}$$

The closed form of the density of the mirror policy, provided by equation (14), is thus simplified as:

$$\eta^{\prime}_{\theta}(a_{t}|h_{t})=\int\eta_{\theta_{t}}(a_{t}|h_{t})\,q(\theta_{t}|\theta,\Lambda(h_{t}))\;d\theta_{t} \tag{50}$$
$$=\int\eta_{\theta_{t}}(a_{t}|h_{t})\,\mathcal{N}(\theta_{t}|\theta,\Lambda(h_{t}))\;d\theta_{t}\;, \tag{51}$$

where $\eta_{\theta_t}$ is the policy under which the action respecting equation (48) has probability one. The policy is a degenerate Gaussian distribution (Rao, 1973): it assigns a Dirac measure to each state, and its (generalized) density function may be approached as follows:

$$\eta_{\theta_{t}}(a_{t}|h_{t})=\lim_{\|\Sigma\|\to0}\mathcal{N}(a_{t}\,|\left(\nabla_{\theta_{t}}\mu_{\theta_{t}}(s_{t})\right)\theta_{t}+\mu^{\prime}(s_{t}),\Sigma)\;. \tag{52}$$

By substitution, we therefore get that the mirror policy $\eta'_\theta$ writes as follows:

$$\eta_{\theta}^{\prime}(a_{t}|h_{t})=\int\eta_{\theta_{t}}(a_{t}|h_{t})\,\mathcal{N}(\theta_{t}|\theta,\Lambda(h_{t}))\,d\theta_{t} \tag{53}$$
$$=\int\lim_{\|\Sigma\|\to0}\mathcal{N}\left(a_{t}\,|\left(\nabla_{\theta_{t}}\mu_{\theta_{t}}(s_{t})\right)\theta_{t}+\mu^{\prime}(s_{t}),\Sigma\right)\mathcal{N}\left(\theta_{t}|\theta,\Lambda(h_{t})\right)\,d\theta_{t}\;. \tag{54}$$

The product of the Gaussian prior over parameters and the linear Gaussian transition model of the actions provides a joint Gaussian distribution of actions and parameters (Bishop & Nasrabadi, 2006), which is degenerate but has a density for the (marginal) Gaussian distribution of actions (Rao, 1973). The density of the mirror policy $\eta'_\theta$ can thus be computed taking the limit of the marginalization:

$$\eta_{\theta}^{\prime}(a_{t}|h_{t})=\lim_{\|\Sigma\|\to0}\int\mathcal{N}\left(a_{t}\,|\left(\nabla_{\theta_{t}}\mu_{\theta_{t}}(s_{t})\right)\theta_{t}+\mu^{\prime}(s_{t}),\Sigma\right)\mathcal{N}\left(\theta_{t}|\theta,\Lambda(h_{t})\right)\,d\theta_{t} \tag{55}$$
$$=\lim_{\|\Sigma\|\to0}\mathcal{N}\left(a_{t}\,|\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)\theta+\mu^{\prime}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(h_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+\Sigma\right) \tag{56}$$
$$=\lim_{\|\Sigma\|\to0}\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(h_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)+\Sigma\right) \tag{57}$$
$$=\mathcal{N}\left(a_{t}\,|\,\mu_{\theta}(s_{t}),\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)^{T}\Lambda(h_{t})\left(\nabla_{\theta}\mu_{\theta}(s_{t})\right)\right)\,. \tag{58}$$

We note that this result can be obtained without working with degenerate Gaussian distributions. The policy is an affine function of the parameters, which follow a Gaussian distribution; the marginal distribution of actions is thus also a Gaussian distribution of the form of equation (58). This distribution is furthermore the one of a mirror policy, see Theorem 1. □
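As a sanity check on the closed forms above, the following minimal sketch (not the authors' code) verifies Property 4.3 by Monte Carlo for a one-dimensional action: sampling parameters from the continuation distribution and passing them through an affine deterministic policy $\mu_\theta(s) = \theta^T\phi(s)$ yields actions whose empirical mean and variance match $\mu_\theta(s)$ and $\nabla_\theta\mu_\theta(s)^T\Lambda(s)\nabla_\theta\mu_\theta(s)$. The feature map `phi` and covariance `Lam` below are hypothetical stand-ins.

```python
# Monte Carlo check of Property 4.3 for a scalar action.
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.7, -1.2, 0.4])          # parameter vector, d_Theta = 3

def phi(s):
    # Hypothetical affine features; here grad_theta mu_theta(s) = phi(s).
    return np.array([1.0, s, s ** 2])

def Lam(s):
    # Hypothetical state-dependent continuation covariance.
    return (0.5 + 0.1 * s ** 2) * np.eye(3)

s = 1.3
# Actions of the mirror policy: a = mu_{theta'}(s) with theta' ~ N(theta, Lam(s)).
actions = rng.multivariate_normal(theta, Lam(s), size=200_000) @ phi(s)

mean_pred = theta @ phi(s)                  # mu_theta(s)
var_pred = phi(s) @ Lam(s) @ phi(s)         # grad^T Lambda grad (Property 4.3)
print(f"empirical mean {actions.mean():+.4f}  vs  predicted {mean_pred:+.4f}")
print(f"empirical var  {actions.var():.4f}   vs  predicted {var_pred:.4f}")
# Both pairs agree up to Monte Carlo error, consistent with the mirror policy
# being Gaussian with the stated covariance.
```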
## B Description Of The Car Environment

In this section, we formalize the reinforcement learning environment that models the movement of a car in a valley with two floors separated by a peak, as depicted in Figure 2. The car always starts at the topmost floor and receives rewards proportional to its depth in the valley. An optimal agent drives the car from the initial position to the lowest floor in the valley by passing the peak. In the following, we describe each element composing the environment.

![20_image_0.png](20_image_0.png)

Figure 2: Valley in which the car moves.

State Space. The state $s_t \in \mathbb{R}^2$ of the environment is composed of two scalar values, namely the position $x_t \in \mathbb{R}$ of the point mass representing the car and its tangent speed $v_t \in \mathbb{R}$.

Action Space. At each time step, the agent controls the force applied on the car through the actions $a_t \in \mathbb{R}$ it executes.

Initial Position. The car always starts at rest at the topmost floor $x_{initial} = -3$ in the valley. The initial state distribution thus provides a probability one to the state

$$x_{0}=x_{initial} \tag{59}$$
$$v_{0}=0\;. \tag{60}$$

Transition Distribution. The continuous motion of the car in the valley is derived from Newton's formula. The valley's analytical description is provided by the function $h$, the car's mass is denoted by $m = 0.5$, gravitational acceleration by $g = 9.81$, and the damping factor by $e = 0.65$. The position $x$ and speed $v$ of the car follow the subsequent continuous-time dynamics as a function of the force $a$:

$$\dot{x}=v \tag{61}$$
$$\dot{v}=\frac{a}{m(1+h'(x)^{2})}-\frac{gh'(x)}{1+h'(x)^{2}}+\frac{v^{2}h'(x)h''(x)}{1+h'(x)^{2}}-ev^{2}\;. \tag{62}$$

The position and force are furthermore bounded to intervals as part of the dynamics such that

$$x\in[x_{m},x_{M}]=[-4,5] \tag{63}$$
$$a\in[a_{m},a_{M}]=[-10,10]\;. \tag{64}$$

Clamped force values are therefore used in equation (62). Similarly, the position is clamped in equation (61). In discrete time, the state $s_{t+1}$ is computed through Euler integration of the continuous-time dynamics, considering an initial position given by the current state $s_t$. The force $a$ remains constant during a discretization time $\Delta = 0.1$ and is equal to the action $a_t$, with an additive noise drawn from $\mathcal{N}(\cdot|0, 1)$ and clamped before integration.

Reward Function. The rewards correspond to the depth of the valley at the current position. The reward function thus solely depends on the position

$$\rho(s_{t},a_{t})=-h(x_{t})\ . \tag{65}$$

Discount Factor. The discount factor equals $\gamma = 0.99$ and the horizon is curtailed to $T = 100$ in each numerical computation.
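The following minimal sketch (not the authors' code) shows one way this environment could be simulated. The constants follow the text above; the valley profile $h$, the number of Euler sub-steps per transition, and the speed clamp are assumptions, since the paper specifies $h$ only through Figure 2.

```python
# Sketch of the car environment: Euler integration of the clamped dynamics.
import numpy as np

m, g, e = 0.5, 9.81, 0.65                   # mass, gravity, damping (from text)
x_m, x_M, a_m, a_M = -4.0, 5.0, -10.0, 10.0
delta, gamma, T, x_init = 0.1, 0.99, 100, -3.0
rng = np.random.default_rng(0)

# Stand-in valley h(x) = 0.05 (x^2 - 9)^2 - 0.5 x: two floors near -3 and 3
# separated by a peak near 0, the floor near 3 being the lowest (assumption).
h = np.polynomial.Polynomial([4.05, -0.5, -0.9, 0.0, 0.05])
h1, h2 = h.deriv(1), h.deriv(2)             # analytic first/second derivatives

def step(x, v, action, n_sub=10):
    a = float(np.clip(action + rng.normal(), a_m, a_M))  # noisy clamped force
    dt = delta / n_sub
    for _ in range(n_sub):                               # Euler integration
        dv = (a / (m * (1.0 + h1(x) ** 2)) - g * h1(x) / (1.0 + h1(x) ** 2)
              + v ** 2 * h1(x) * h2(x) / (1.0 + h1(x) ** 2) - e * v ** 2)
        x = float(np.clip(x + dt * v, x_m, x_M))         # position clamp
        v = float(np.clip(v + dt * dv, -10.0, 10.0))     # stability clamp (assumption)
    return x, v, -h(x)                                   # reward: equation (65)

x, v, ret = x_init, 0.0, 0.0
for t in range(T):                                       # rollout, random forces
    x, v, r = step(x, v, rng.uniform(a_m, a_M))
    ret += gamma ** t * r
print(f"final position {x:+.2f}, discounted return {ret:.2f}")
```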
Review 1:

Summary: The submission offers a novel theoretical perspective for policy-gradient algorithms (PGA). Initially, it casts PGA within the 'optimization by continuation' framework. This approach is tailored for optimizing nonconvex functions by progressively optimizing a series of surrogate objectives known as continuations. Subsequently, it demonstrates that the process of optimizing affine Gaussian policies combined with entropy regularization can be seen as an indirect optimization of deterministic policies through continuation. Drawing from these theoretical findings, it advances that exploration in PGA amounts to determining a continuation of the current policy's return. Finally, it argues that the policy variance should be based on history rather than being put in trade-off with the policy return maximization.

Strengths and Weaknesses:

Strengths:
- Interesting angle to tackle regularization in policy gradient optimization.
- The work is well presented: good motivation, clear introduction, sufficient background, sound mathematical exposition.

Weaknesses:
- The work is limited to a specific model of the policy function class: the affine Gaussian policies.
- I found that the submission was lacking empirical illustration of the theoretical findings: in practice, what are the similarities with the main algorithms? In which kind of MDPs do we observe differences in policy optimization?

As I said before, the submission is remarkably typo-proof, and my only minor remark will be on the choice of \eta to denote policies while stochastic policies are denoted by \pi, and deterministic policies by \mu. I found that it unnecessarily overloaded the notation.

Requested Changes: I find the paper good enough as it is, but I am a bit frustrated by the lack of empirical studies to support it. So my only question would be: why not have some experiments that could both serve as didactic content and as validation of the theory?

Broader Impact Concerns: I do not have concerns about any potential negative impact.

==================================================

Review 2:

Summary: This work shows a link between direct policy optimization methods (in reinforcement learning) and optimization by continuation (in non-convex optimization). Optimization by continuation solves a non-convex optimization problem by solving a sequence of "relaxed" optimization problems, on functions called continuations of the objective function. This work shows that the continuation of the return of the policy is the return of another policy (the mirror policy), and then that optimizing the return by continuation is the same as solving a policy optimization problem for the mirror policy. Then, the authors link more specific policy gradient methods to optimization by continuation. In particular, they show that in the case of an affine-parametrized Gaussian policy with decreasing variance, policy gradient is equivalent to optimization by continuation on the deterministic policy (that just outputs the mean of the Gaussian).

Strengths and Weaknesses:

# Strengths

## Clarity
The paper is clear and well presented.

## Insight
Links between RL methods and optimization are not new, but to my knowledge this specific link was not shown before in the literature. In general, I think that linking works from two different communities is often a valuable contribution, as it can provide new understanding of these methods. In particular, this work proposes directions to better understand entropy regularization in RL.
Entropy is used in many modern RL algorithms, but it is not always clear what its exact function is. This is a studied question in the RL literature, and while this work does not give a definitive answer to it, I think it can be interesting to many researchers in that area. For example, it fits nicely in the direction opened by [1], arguing that entropy is useful because it smooths the optimization landscape of the policy.

# Weaknesses

I think the work is lacking a bit of context on why these results are interesting. In particular, two questions are not addressed (that are not independent):

## How and why is optimization by continuation used in classic optimization?

When introducing OC, it would be valuable to understand why this method would be used in non-convex optimization, for example if it has an inherent theoretical or practical advantage, and maybe give one example where it would be effective. This would make the contribution clearer, because right now it is hard to see the point of this method by itself.

## What are the known convergence results on optimization by continuation, and how would they translate to RL?

I am not a specialist of optimization, but I think it would be interesting to state what are the known theoretical properties of OC. In particular, this could be discussed in relation to Section 5.1. If some policy gradient algorithms can be analyzed through OC, it would be valuable to know if some analysis tools from optimization can be applied to this analysis.

Requested Changes: Proposed adjustments are the answers to the Weaknesses section. I would suggest
- extending the background on OC,
- discussing the theoretical and practical properties of this method, and how they can influence RL algorithm analysis and design.

Broader Impact Concerns: No broader impact concerns (theoretical work).

==================================================

Review 3:

Summary: The submission provides an interpretation of optimizing a Gaussian policy instead of a deterministic policy, or adding entropy regularization, as optimization by continuation (aka graduated optimization), a global search method that smoothens the objective function at monotonically decreasing levels to avoid bad local optima. An important difference between the analysis in this paper and common policy gradient algorithms is that here the covariance matrix of the Gaussian policy is not optimized but should be annealed independently to match the framework of optimization by continuation.

Strengths and Weaknesses: The final decision is left for after discussions. The scope of the paper is appropriate for TMLR. The connection to optimization by continuation is intuitive and is explained in an example environment. The proofs for two of the main results are not presented clearly (comments 1 and 2 in the Correctness section below) and I can only assess the correctness of the results if the authors clarify these proofs in a revision.

Correctness:
1. There are multiple steps to obtain Eq 23 from Eq 22 in the appendix which should be laid out. In particular, I do not understand how the order of the product over t and the expectation over theta are swapped to obtain Eq 23.
2. The discussion for Gaussian policies in Section 5.1 should be stated formally, ideally as propositions. Why does the condition on dimensions and the rank of the gradient satisfy the requirement in this discussion? What does it precisely mean that the covariance is discounted and why does it guarantee a full rank gradient?
3. Section 2.2 mentions that an MDP always has an optimal deterministic policy. This is true for *finite* MDPs. A continuous-action MDP, the problem this paper focuses on, may not have an optimal policy in general (stochastic or deterministic). I recommend removing this statement.
4. The same paragraph cites Silver et al. (2014) to claim deterministic policies tend to get stuck in local optima. Silver et al. in fact mention inadequate exploration as the problem with deterministic policies.
5. The conclusion mentions that a categorical distribution is a natural policy parametrization for a discrete state space. The action space, and not the state space, determines the policy parametrization. A continuous-action discrete-state problem would still naturally use Gaussian or deterministic (rather than categorical) policies.
6. Does the inner loop in the optimization by continuation algorithm continue until convergence or for one or a few steps? If it is until convergence, this difference between this algorithm and policy gradient algorithms should be clarified in the text.

Organization:
1. The results that follow from Theorem 1 are presented as "Property 1", "Property 2", etc., except Theorem 2, which is called a theorem, described informally in the main paper, and proven in a dedicated section in the appendix before the proof for Theorem 1. I think presenting the result like the others would make the paper clearer. The paper is otherwise well organized.

Notation and typos:
1. Even if "p" is the conventional notation for continuation, it's still better to use a different letter here since p is already used for the start distribution and transition kernel.
2. Extra phrase "algorithm algorithmic" before Algorithm 1.
3. In all references to equations, "equation" is repeated.
4. (Below Property 1) Independently *of* the history.
5. From Property 2 onwards, the notation for the covariance matrix changes to have a subscript of theta.
6. (Section 5.1, Gaussian policies) "does optimizing a Gaussian policy is" -> "optimizing a Gaussian policy is". "trough" -> "through".

Requested Changes: The comments in the Correctness section are all critical.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The reviewers are happy with the improvements to the work. They unanimously voted to accept. I know that this is an Accept as is, but I would request one very minor thing. It would be great if you could consider breaking up the very large paragraphs that were added. I can see obvious subparagraphs within those large paragraphs, and they are currently hard to read.

==================================================
# Distribution Embedding Networks For Generalization From A Diverse Set Of Classification Tasks

Lang Liu *liu16@uw.edu*
University of Washington

Mahdi Milani Fard *mmilanifard@google.com*
Google Research

Sen Zhao *senzhao@google.com*
Google Research

Reviewed on OpenReview: *https://openreview.net/forum?id=F2rG2CXsgO*

## Abstract

We propose Distribution Embedding Networks (DEN) for classification with small data. In the same spirit as meta-learning, DEN learns from a diverse set of training tasks with the goal of generalizing to unseen target tasks. Unlike existing approaches which require the inputs of training and target tasks to have the same dimension with possibly similar distributions, DEN allows training and target tasks to live in heterogeneous input spaces. This is especially useful for tabular-data tasks where labeled data from related tasks are scarce. DEN uses a three-block architecture: a covariate transformation block followed by a distribution embedding block and then a classification block. We provide theoretical insights to show that this architecture allows the embedding and classification blocks to be fixed after pre-training on a diverse set of tasks; only the covariate transformation block with relatively few parameters needs to be fine-tuned for each new task. To facilitate training, we also propose an approach to synthesize binary classification tasks, and demonstrate that DEN outperforms existing methods in a number of synthetic and real tasks in numerical studies.

## 1 Introduction

While machine learning has made substantial progress in many technological and scientific applications, its success often relies heavily on large-scale data. However, in many real-world problems, it is costly or even impossible to collect large training sets. For example, in online spam detection, at any time, we may only possess dozens of freshly labeled spam results. In health sciences, we may only have clinical outcomes on a few hundred study subjects. Few-shot learning (FSL) has recently been proposed as a new framework to tackle such small data problems. It has now gained huge attention in many applications such as image classification (Koch et al., 2015; Finn et al., 2017), sentence completion (Vinyals et al., 2016; Munkhdalai et al., 2018), and drug discovery (Altae-Tran et al., 2017); see Wang et al. (2020) for a survey. The core idea behind most FSL methods is to augment the limited training data with prior knowledge, e.g., images of other classes in image classification or similar molecules' assays in drug discovery. In *meta-learning* based FSL, such prior knowledge is formulated as a set of related training tasks assumed to follow the same task distribution (Finn et al., 2017), with the goal that the trained model could quickly adapt to new tasks.

In practice, however, given an arbitrary target task, the degree and nature of its relationship to the available auxiliary training data are often unknown. In this scenario, it is unclear if existing approaches can extract useful information from the training data and improve the performance on the target task. In fact, there is empirical evidence suggesting that learning from unrelated training tasks can lead to negative adaptation (Deleu & Bengio, 2018). In our numerical studies, we also observe similar behavior; existing FSL approaches perform poorly when training tasks are unrelated to the target task.
We first pre-train DEN on training tasks with heterogeneous covariate spaces. We then fine-tune the transformation block on a few labeled examples from the target task, and use the fine-tuned model for classification on the query set.

In this paper, we investigate this important but under-studied setting: few-shot meta-learning with possibly unrelated tasks. We specifically focus on classification tasks with *tabular* data. Unlike machine learning with image and text inputs, large datasets of related tasks are not available for generic tabular-data classification. A key challenge in this setting is that the input, in the form of covariate vectors for training and target tasks, can live in different spaces and follow different distributions with possibly different dimensions. Existing meta-learning techniques often assume a *homogeneous* input space across tasks and thus cannot be directly applied in such cases with *heterogeneous* covariate spaces.

In this work, we propose Distribution Embedding Networks (DEN) for meta-learning on classification tasks with potentially *heterogeneous* covariate spaces. DEN consists of a novel three-block architecture. It first calibrates the raw covariates via a transformation block. A distribution embedding block is then applied to form an embedding vector serving as the "summary" of the target task. Finally, a classification block uses this task embedding vector along with the transformed query features to form a prediction. Since the tasks can be unrelated, we learn a different transformation block for each task to form task-invariant covariates for the rest of the network. In other words, the transformation block is *task-dependent*. We keep the *task-independent* embedding and classification blocks fixed after *pre-training*, and use a few labeled examples from the target task (i.e., the support set) to *fine-tune* the task-dependent transformation block. Since our setting is significantly more challenging than the standard few-shot meta-learning setting due to the heterogeneity among training and target tasks, we assume that we have access to a slightly larger support set compared to the FSL setting (e.g., 50 examples in total across all classes rather than 5 examples per class). We further assume that the support set follows the same distribution as the query set. To address the challenge of variable-length covariates, the classification block is built upon a Deep Sets architecture (Zaheer et al., 2017). Figure 1 shows an overview of the architecture and the training and evaluation mechanisms; a high-level sketch of this mechanism is given below.

To summarize our main contributions: (I) We propose a method for meta-learning with possibly unrelated tabular-data training tasks, an important setting that expands the application of meta-learning but has rarely been investigated in the literature; (II) We propose the three-block architecture, allowing the model to be pre-trained on a large variety of tasks, and then fine-tuned on an unrelated target task; we provide a scenario in which our three-block architecture can perform well; (III) As described in Section 5, we design a procedure to generate artificial tasks for pre-training, and empirically verify its effectiveness when testing on real tasks. This provides a principled way to generate training tasks and alleviates the cost of collecting real training tasks. (IV) We compare DEN with various existing FSL approaches on both simulated and real tasks, showing improved performance in most of the tasks we consider.
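As a high-level illustration of the training and evaluation mechanism of Figure 1, the following sketch (hypothetical PyTorch-style names throughout; not the authors' implementation) contrasts pre-training, where all three blocks are updated on sampled training tasks, with fine-tuning, where only the transformation block is updated on the target task's support set.

```python
# Skeleton of DEN pre-training, fine-tuning, and prediction; `den`,
# `training_tasks` and their methods are hypothetical stand-ins.
import torch

def pretrain(den, training_tasks, steps=10_000, lr=1e-3):
    # Pre-training updates the transformation, embedding and classification
    # blocks jointly on tasks sampled from the training collection.
    opt = torch.optim.Adam(den.parameters(), lr=lr)
    for _ in range(steps):
        task = training_tasks.sample()            # hypothetical task sampler
        support, query = task.sample_batches()    # two batches per step
        loss = den.loss(support, query)           # embed support, score query
        opt.zero_grad(); loss.backward(); opt.step()

def finetune(den, support, steps=200, lr=1e-2):
    # Fine-tuning touches only the (few) transformation-block parameters; the
    # task-independent embedding and classification blocks stay frozen.
    opt = torch.optim.Adam(den.transformation.parameters(), lr=lr)
    for _ in range(steps):
        loss = den.loss(support, support)         # support set plays both roles
        opt.zero_grad(); loss.backward(); opt.step()

def predict(den, support, query_x):
    # At evaluation time the distribution embedding is computed from the
    # support set and reused to classify every query example.
    with torch.no_grad():
        return den(support, query_x)
```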
## 2 Related Work

There are multiple generic techniques applied to the meta-learning problem in the literature (Wang et al., 2020). The first camp learns similarities between pairs of examples (Koch et al., 2015; Vinyals et al., 2016; Bertinetto et al., 2016; Snell et al., 2017; Oreshkin et al., 2018; Wang et al., 2018; Sung et al., 2018; Satorras & Estrach, 2018; Liu et al., 2019; Mishra et al., 2018). For an unlabeled example on a new task, we use its similarity score with labeled examples of the given task for classification. The second camp of optimization-based meta-learning aims to find a good starting point model such that it can quickly adapt to new tasks with a small number of labeled examples from the new task. This camp includes different variants of MAML (Finn et al., 2017; Lee & Choi, 2018; Finn et al., 2018; Grant et al., 2018; Rusu et al., 2019) and Meta-Learner LSTM (Ravi & Larochelle, 2017). More recently, Lee et al. (2020; 2021) proposed to learn task-specific parameters for the loss weight and learning rate for out-of-distribution tasks. Their use of task embedding is conceptually similar to DEN. The third camp is conceptually similar to topic modeling, such as Neural Statistician (Edwards & Storkey, 2017) and CNP (Garnelo et al., 2018), which learn a task-specific (latent) embedding for classification. The final camp utilizes memory (Santoro et al., 2016; Kaiser et al., 2017; Munkhdalai & Yu, 2017; Munkhdalai et al., 2018). Note that all the above methods assume that all training and target tasks are related and share the same input space.

A closely related problem is Domain Generalization (DG), which estimates a functional relationship between the input $x$ and output $y$ given data from different domains (i.e., with different marginal $P(x)$); see, e.g., Wang et al. (2021) and Zhou et al. (2021) for surveys. The core idea behind a large class of DG methods is to learn a domain-invariant feature representation (see, e.g., Muandet et al., 2013; Li et al., 2018a;b; Shankar et al., 2018; Shen et al., 2018), which aligns the marginal $P(x)$ and/or the conditional $P(y|x)$ distributions across multiple domains. In a similar spirit, DEN first adapts to the task via the transformation block and then learns a task-invariant representation via the task-independent embedding block.

Learning from heterogeneous feature spaces has been studied in transfer learning, or domain adaptation (Dai et al., 2008; Yang et al., 2009; Duan et al., 2012; Li et al., 2014; Yan et al., 2017; Zhou et al., 2019); see Day & Khoshgoftaar (2017) for a survey. These approaches only focus on two tasks (source and target), and require the model to learn a transformation mapping to project the source and target tasks into the same space.

Unlike meta-learning and DG methods, DEN is applicable for tasks with *heterogeneous covariate spaces*. This phenomenon is especially prevalent in tabular data tasks, where the number and definition of features could be vastly different across tasks. Iwata & Kumagai (2020) is among the first works that combine meta-learning with heterogeneous covariate spaces. Both their approach and DEN rely on pooling to handle variable-length inputs, using building blocks such as Deep Sets (Zaheer et al., 2017) and Set Transformers (Lee et al., 2019). There are several differences between their approach and DEN. Firstly, DEN uses a covariate transformation block, allowing it to adapt to new tasks more efficiently.
Secondly, their model is permutation invariant in covariates and thus restrictive in model expressiveness, while DEN does not have this restriction. Thirdly, we also provide theoretical insights and justification for our model architecture design.

## 3 Notations

Let $T_1, \ldots, T_M$ be $M$ training tasks. For each training task $T$, we observe an i.i.d. sample $D_T = \{(\mathbf{x}_{T,i}, y_{T,i})\}_{i\in[n_T]}$ from some joint distribution $P_T$, where $[n] = \{1, \ldots, n\}$, $\mathbf{x}_{T,i} \in \mathbb{R}^{d_T}$ is the covariate vector of the $i$-th example, and $y_{T,i} \in [L_T]$ is its associated label. We denote this sample in matrix form by $(\mathbf{X}_T, \mathbf{y}_T)$, where the $j$-th column $\mathbf{x}^j_T \in \mathbb{R}^{n_T}$ is the $j$-th covariate vector. We let $\mathbf{X}_{T,k}$ be the covariate sub-matrix corresponding to examples with label $k$ for $k \in [L_T]$. When the context is clear, we drop the dependency on $T$ for simplicity of the notation, e.g., we write $\mathbf{X}_{T,k}$ as $\mathbf{X}_k$. Let $S$ be a target task that is not contained in the training tasks. We are given a set of labeled examples $(\mathbf{X}_S, \mathbf{y}_S)$, where the sample size $n_S$ is small. We refer to it as the *support set*. The goal is to predict labels for unlabeled examples in the target task, which is called the *query set*. We denote $\mathcal{T} = \{T_1, \ldots, T_M, S\}$.

## 4 Distribution Embedding Networks

We first describe the model architecture of DEN for binary classification in Section 4.1, and then extend DEN to multi-class classification in Section 4.2. Finally, we provide insights into the model architecture and justify its design in Section 4.3.

![3_image_0.png](3_image_0.png)

Figure 2: Block diagram of DEN for binary classification. During pre-training, for each gradient step we sample a task $T \in \{T_1, \ldots, T_M\}$ and two batches of data from the task, $(\mathbf{X}^A_T, \mathbf{y}^A_T)$ and $(\mathbf{X}^B_T, \mathbf{y}^B_T)$. During fine-tuning, we treat the support set as the support batch, using it to derive the distribution embedding and make predictions on the query set, treated as the query batch.

## 4.1 Model Architecture For Binary Classification

To describe the model architecture of DEN for binary classification (illustrated in Figure 2), consider data $(\mathbf{X}, \mathbf{y})$ in a given task $T$. DEN can be decomposed into three major blocks: transformation, embedding and classification. We describe these blocks in this section.

Transforming covariates with task-dependent transformation block. We first transform the covariates via a transformation block, i.e.,

$$\mathbf{Z}=c(\mathbf{X}),\tag{1}$$

where $c: \mathbb{R}^d \to \mathbb{R}^d$ is applied to each row. Specifically, we use a piecewise linear function¹ (PLF) for each covariate, i.e., $c(\mathbf{x}) = (c^1(x^1), \ldots, c^d(x^d))$. PLFs can be optionally constrained to be monotonic, which serves as a form of regularization during training (Gupta et al., 2016). Note that the transformation block is task-dependent—its parameters need to be re-trained for each new task. The goal is that, after applying the corresponding transformation to each task, the relatedness across tasks increases. In contrast, existing meta-learning approaches usually do not have the transformation block and thus require the raw tasks to be related. This is conceptually similar to Muandet et al. (2013), who consider the domain generalization problem and directly learn an invariant feature transformation by minimizing the dissimilarity across domains. In contrast, we incorporate this block into a larger architecture and learn it in an end-to-end fashion. One may instead consider architectures other than PLFs for the transformation block.

¹See Appendix A for the precise definition.
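Concretely, the PLF transformation of a single covariate can be sketched as follows. This is a minimal numpy sketch; the keypoint values below are hypothetical, and in practice the output keypoints would be trainable parameters learned end-to-end (cf. Gupta et al., 2016):

```python
import numpy as np

def plf(x, input_keypoints, output_keypoints):
    """Piecewise linear calibration of one covariate: linearly
    interpolates between fixed input keypoints and (trainable)
    output keypoints; inputs outside the range are clipped to
    the boundary values."""
    return np.interp(x, input_keypoints, output_keypoints)

# Hypothetical example: a 5-keypoint PLF c^j for covariate j.
# A monotonicity constraint corresponds to requiring the output
# keypoints to be non-decreasing.
in_kp = np.linspace(0.0, 1.0, num=5)
out_kp = np.array([0.0, 0.1, 0.5, 0.8, 1.0])   # trainable in practice
z_j = plf(np.array([0.23, 0.77]), in_kp, out_kp)
```

Because each PLF has only a handful of keypoint parameters per covariate, the block as a whole stays small enough to re-fit from a 50-example support set.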
We choose PLFs since they can implement compact one-dimensional non-linearities and can thus be fine-tuned with a small support set. Moreover, they are universal approximators: with enough keypoints, they can approximate any one-dimensional bounded continuous function. In Section 5.3 we show that this PLF transformation is key to ensuring good performance when training and target tasks have heterogeneous covariate spaces. PLFs cannot model interactions between covariates, which is a sacrifice we may have to make in light of the small support set. We study in Section 5.3 the trade-offs between the flexibility of PLFs and the size of the support set.

Summarizing task statistics with task-independent distribution embedding block. The second block in DEN learns a vector that summarizes the task distribution. This is similar to Garnelo et al. (2018). Naïvely, one could learn a non-linear transformation $\phi$ which embeds $(\mathbf{z}, y)$ into a vector of smaller dimension. However, this would not work since the dimension of $\mathbf{z}$ can vary across tasks. Instead, we embed the distribution $P(\mathbf{z}, y)$ of a given task into a vector using the transformed features $\mathbf{Z}$ in the following way. For all $a, b \in [d]$, we derive a distribution embedding of $P(z^a, z^b, y)$ by

$$\mathbf{s}^{a,b}=\left(\overline{h(\mathbf{Z}_{1}^{a,b})},\overline{h(\mathbf{Z}_{2}^{a,b})},\overline{\mathbf{y}}\right),\tag{2}$$

where $h$ is a vector-valued *trainable* function, and the average is taken with respect to the training batch during training and the support set during inference, i.e.,

$$\overline{h(\mathbf{Z}_{k}^{a,b})}=\frac{\sum_{i=1}^{n}h(z_{i}^{a},z_{i}^{b})\mathbf{1}\{y_{i}=k\}}{\sum_{i=1}^{n}\mathbf{1}\{y_{i}=k\}},$$

and $\overline{\mathbf{y}} = \frac{1}{n}\sum_{i=1}^{n} y_i$. Note that $n$ is about 50 in our setting, so the empirical average is close to its population counterpart. Intuitively, we decompose a variable-length feature vector $\mathbf{z}$ into smaller pieces of fixed length 2, and use the $h$ function to learn a pairwise embedding $\mathbf{s}^{a,b}$ for each pair of features $a$ and $b$. This pairwise decomposition allows us to handle variable-length covariates. Note that the same function $h$ is shared across all tasks, and it can be chosen as a few fully connected layers. The distribution embedding of $P(\mathbf{z}, y)$ is thus the set of embeddings of all pairs, $\mathbf{s} = \{\mathbf{s}^{a,b}\}_{a,b\in[d]}$.

Remark 4.1. The length 2 here is arbitrary. We can use pieces of length $r$ for any $r \geq 1$, i.e., obtain an embedding $\mathbf{s}^{t_1,\ldots,t_r}$ of $P(z^{t_1},\ldots,z^{t_r}, y)$ for all $t_1, \ldots, t_r \in [d]$. The larger $r$ is, the more expressive the model will be. We refer to $r$ as the *dependency order*, and experiment with different values in Section 5.3.

Prediction with task-independent classification block. Given a query $\mathbf{x}$, we first transform it via $\mathbf{z} = c(\mathbf{x})$, and decompose the transformed features into sets of feature pairs. We then obtain the distribution embedding vector $\mathbf{s}^{a,b}$ of each pair as in (2). Finally, we obtain the predicted logits using a Deep Sets architecture (Zaheer et al., 2017):

$$q=\Phi(\mathbf{z},\mathbf{s})=\psi\left(\sum_{a,b\in[d]}\varphi\left(\left[\mathbf{z}^{a,b},\mathbf{s}^{a,b}\right]\right)\right),\tag{3}$$

where $\varphi$ is a vector-valued trainable function and $\psi$ is a real-valued trainable function. Both $\varphi$ and $\psi$ are shared across tasks and can be chosen as fully connected layers. The Deep Sets architecture, which aggregates all possible pairs of covariates, is proven to be a universal approximator of set-input functions (Zaheer et al., 2017).
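The following is a minimal numpy sketch of (2) and (3) for binary labels coded 0/1 (a labeled assumption; the paper indexes classes by $k$). The closures `h`, `phi`, and `psi` below are illustrative stand-ins for the trainable networks, not the actual implementation:

```python
import numpy as np

def distribution_embedding(Z, y, h):
    """Pairwise embedding s^{a,b} of P(z^a, z^b, y) as in (2):
    class-conditional means of h over the batch, plus the label
    mean. Assumes both classes appear in the batch."""
    n, d = Z.shape
    S = {}
    for a in range(d):
        for b in range(d):
            hz = h(np.stack([Z[:, a], Z[:, b]], axis=1))  # shape (n, m)
            m1 = hz[y == 1].mean(axis=0)
            m0 = hz[y == 0].mean(axis=0)
            S[(a, b)] = np.concatenate([m1, m0, [y.mean()]])
    return S

def classify(z, S, phi, psi):
    """Deep Sets prediction (3): sum phi over all covariate pairs,
    then apply psi to obtain the logit."""
    d = len(z)
    total = sum(phi(np.concatenate([[z[a], z[b]], S[(a, b)]]))
                for a in range(d) for b in range(d))
    return psi(total)

# Toy stand-ins for the trainable functions (illustrative only):
h = lambda pair: np.tanh(pair)        # vector-valued, here m = 2
phi = lambda v: np.tanh(v)            # vector-valued
psi = lambda v: float(v.sum())        # real-valued
```

Note how the pairwise decomposition makes both functions indifferent to the number of covariates $d$, which is what allows a single shared $h$, $\varphi$, $\psi$ across tasks of different dimensions.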
We note that one may use other set-input architectures to construct the classification block, e.g., Set Transformer (Lee et al., 2019).

## 4.2 Model Architecture For Multiclass Classification

For multiclass classification tasks, we modify the distribution embedding and classification blocks. Specifically, we modify the distribution embedding in (2) as

$$\mathbf{s}^{a,b}=\frac{1}{n}\sum_{i=1}^{n}h\left(\left[z_{i}^{a},z_{i}^{b},\mathbf{v}(y_{i})\right]\right),\tag{4}$$

where $\mathbf{v}: \mathbb{N} \to \mathbb{R}^m$ is a vector-valued, trainable function that is shared across tasks—$\mathbf{v}(k)$ is hence a vector encoding of class $k$—and $h$ is a vector-valued trainable function with input dimension $m + 2$.

For the classification block, we modify the idea of Matching Net (Vinyals et al., 2016), which is also similar to the modification adopted by Iwata & Kumagai (2020). This modification is suitable for tasks with different numbers of label classes. Let $\tilde{\Phi}$ be $\Phi$ in (3) without the last layer, i.e., $\tilde{\Phi}(\mathbf{z}, \mathbf{s}) = \sum_{a,b\in[d]} \varphi([\mathbf{z}^{a,b}, \mathbf{s}^{a,b}])$ is the penultimate-layer embedding of a query set example. We then obtain its class scores by

$$q_{k}=\frac{\sum_{i=1}^{n}\tilde{\Phi}(\mathbf{z},\mathbf{s})^{\top}\tilde{\Phi}(\mathbf{z}_{i},\mathbf{s})\mathbf{1}\{y_{i}=k\}}{\sum_{i=1}^{n}\mathbf{1}\{y_{i}=k\}}.\tag{5}$$

Note that $i$ in the above equation is the index of support set examples ($n$ in total). The score $q_k$ can thus be interpreted as the average dot-product of the penultimate-layer embedding of the given query example with the penultimate-layer embeddings of the support set examples of class $k$. To obtain class probabilities, we apply a softmax on the $q_k$'s.

## 4.3 Rationale For The Architectural Design

For further insights into the model architecture, consider the optimal Bayes classifier:

$$P(y=k\mid\mathbf{x})=\frac{P(\mathbf{x}\mid y=k)P(y=k)}{\sum_{l=1}^{L}P(\mathbf{x}\mid y=l)P(y=l)}.\tag{6}$$

For a given task, if the conditional probability $P(\mathbf{x} \mid y = k)$ belongs to the same family of distributions for all $k \in [L]$, i.e., $\mathbf{x} \mid y = k \sim \phi(\mathbf{x}; \boldsymbol{\theta}_k)$, then the Bayes classifier can be constructed by estimating the parameters $\boldsymbol{\theta}_k$ and $P(y = k)$, and approximating the density $\phi$ as

$$P(y=k\mid\mathbf{x})=\frac{\phi(\mathbf{x};\hat{\boldsymbol{\theta}}_{k})\hat{P}(y=k)}{\sum_{l=1}^{L}\phi(\mathbf{x};\hat{\boldsymbol{\theta}}_{l})\hat{P}(y=l)}.\tag{7}$$

If, additionally, $P(\mathbf{x} \mid y = k)$ belongs to the same family of distributions for all $k \in [L]$ and also all tasks, then the Bayes classifiers for all tasks should have the same functional form; only the parameters $\boldsymbol{\theta}_k$ and $P(y = k)$ differ by task. In this case, we can simply pool the data from all tasks together to estimate $\boldsymbol{\theta}_k$ and $P(y = k)$ for each task, and the task-independent function $\phi$.

However, the distribution family $\phi(\mathbf{x}; \boldsymbol{\theta}_k)$ may vary greatly across tasks. The task-dependent transformation block allows the transformed covariates to be approximately in the same distribution family. Then, we utilize the distribution embedding block to estimate the parameters $\boldsymbol{\theta}_k$ and $P(y = k)$. If all transformed covariates belong to the same distribution family, then the functional form used to estimate the parameters $\boldsymbol{\theta}_k$ and $P(y = k)$ should be identical for all tasks, which justifies our use of a *task-independent* distribution embedding block. Finally, we use the task-independent classification block to approximate the task-independent function $\phi$ and obtain a score for each label.
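As an illustration of the plug-in construction in (7), the following sketch estimates Gaussian class conditionals (an assumed family, matching the Gaussian example given after Definition 4.2 below) and the class priors from a support set:

```python
import numpy as np
from scipy.stats import norm

def plugin_bayes_posterior(x, X, y, num_classes):
    """Plug-in estimate of P(y = k | x) as in (7), assuming (for
    illustration only) independent Gaussian class conditionals with
    a shared per-feature scale. Assumes every class appears in the
    support set (X, y)."""
    sigma = X.std(axis=0) + 1e-8          # shared scale estimate
    log_scores = []
    for k in range(num_classes):
        Xk = X[y == k]
        mu_k = Xk.mean(axis=0)            # parameter estimate theta_k
        loglik = norm.logpdf(x, loc=mu_k, scale=sigma).sum()
        log_scores.append(loglik + np.log(len(Xk) / len(X)))
    s = np.array(log_scores)
    s -= s.max()                          # normalize in log space
    p = np.exp(s)
    return p / p.sum()
```

DEN replaces the hand-chosen Gaussian family above with a learned, shared family: the embedding block plays the role of the parameter estimates and the classification block the role of the density $\phi$.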
The effectiveness of DEN depends crucially on the ability of the transformation block to align the distribution families of covariates across tasks. We study in Section 5.3 the performance of DEN in relation to the flexibility of PLFs. Finally, since the covariate dimension can vary across tasks, we decompose the covariate vector into sub-vectors of fixed length $r$ and apply a Deep Sets architecture to these sub-vectors. In fact, Proposition 4.3 shows that if we consider the following family of densities, then the Bayes classifier must be of the form (3).

Definition 4.2. Let $\{f(\cdot; \boldsymbol{\theta}): \mathbb{R}^r \to \mathbb{R}\}$ be a parametric family of functions (not necessarily densities). For any integer $d \geq r$, we say a function $g$ on $\mathbb{R}^d$ admits an *$f$-expansion* if it factorizes as $g(\mathbf{z}) = \prod_{t_{1:r}\in[d]^{r}} f(\mathbf{z}^{t_{1:r}}; \boldsymbol{\theta}^{t_{1:r}})$, where $\{\boldsymbol{\theta}^{t_{1:r}} \in \mathbb{R}^{\tau}\}$ is a set of parameters.

For instance, if $\mathbf{z} \mid y = 1 \sim \mathcal{N}_d(\boldsymbol{\mu}, \sigma^2 I_d)$, then the conditional density $p(\mathbf{z} \mid y = 1)$ is proportional to

$$\prod_{a=1}^{d}{\frac{1}{\sigma}}\exp\left(-{\frac{\left(z^{a}-\mu^{a}\right)^{2}}{2\sigma^{2}}}\right),$$

which admits an $f$-expansion with $r = 1$ and $\boldsymbol{\theta}^a = (\mu^a, \sigma)$.

Proposition 4.3. Let $(\mathbf{z}, y)$ *be a random vector in* $\mathbb{R}^d \times [L]$ *following some distribution* $P$. *Assume that the conditional density* $p(\mathbf{z} \mid y = k)$ *admits an $f$-expansion for some parametric family of functions* $\{f(\cdot; \boldsymbol{\theta})\}$ *on* $\mathbb{R}^r$ *with parameters* $\{\boldsymbol{\theta}^{t_{1:r}}_{k}\}$. *Then there exist functions* $\psi$ *and* $\varphi$ *such that*

$$P(y=k\mid\mathbf{z})\propto\psi\left(\sum_{t_{1:r}\in[d]^{r}}\varphi(\mathbf{z}^{t_{1:r}},\boldsymbol{\theta}_{k}^{t_{1:r}},\pi_{k})\right),\tag{8}$$

*where* $\mathbf{z}^{t_{1:r}}=(z^{t_{1}},\ldots,z^{t_{r}})$, $\pi_{k}=P(y=k)$, *and* $\psi$ *and* $\varphi$ *only depend on* $f$.

The proof is included in Appendix A. Note that our model (3) has exactly the same structure as the optimal Bayes classifier in (8) with $r = 2$ and $\mathbf{s}$ representing the parameters $\{\boldsymbol{\theta}^{t_{1:r}}_{k}\}$ and marginals $\{\pi_k\}$. Proposition 4.3 shows that, under appropriate conditions, our model class is expressive enough to include the optimal Bayes classifier. This justifies our choice of the distribution embedding (2) and the Deep Sets structure (3). DEN will consequently perform well when learning across tasks in which the conditional distributions of the PLF-transformed features admit the same $f$-expansion. Heuristically, this means DEN is ideally applied to meta-learning settings in which features across tasks can be transformed to have a similar structure.

Table 1: Overview of training and evaluation.

| Step | Input data | Transform | Embedding | Classification |
|------|------------|-----------|-----------|----------------|
| Pre-training | Heterogeneous tasks $\{(\mathbf{x}_{T,i}, y_{T,i})\}_{T,i}$ | Trained | Trained | Trained |
| Fine-tuning | Support set $\{(\mathbf{x}_{S,i}, y_{S,i})\}_{i}$ | Trained $c_S$ | Fixed | Fixed |
| Evaluation | Query $\mathbf{x}_{S,q}$ and support set $\{(\mathbf{x}_{S,i}, y_{S,i})\}_{i}$ | Fixed $c_S$ | Fixed | Fixed |

## 4.4 Training And Inference

Figure 2 shows a high-level summary of our three-block model architecture. The overall training and evaluation procedure is summarized in Table 1. Note that all sets of $r$ inputs use the same $h$ and $\varphi$ functions through parameter sharing, which reduces the size of the model. For DEN applied to tasks with $d$ features, the feature transformation block has $O(d)$ parameters. An embedding block with embedding order $r$, $L$ layers and $H$ hidden nodes per layer has $rH + H^2(L-1)$ parameters. A classification block with $L$ layers and $H$ hidden nodes per layer for both the $\varphi$ and $\psi$ functions has $(r+2)H + 2H^2L$ parameters. This DEN model is roughly of the same size as a $3L$-layer-deep, $H$-node-wide neural network. Empirically, we found that the computational complexity and resource requirements of DEN are similar to those of Vinyals et al. (2016) and Iwata & Kumagai (2020) given similar model sizes.

During pre-training, in each gradient step, we randomly sample a task $T \in \{T_t\}_{t=1}^{M}$ and two batches $A$ and $B$ from $(\mathbf{X}_T, \mathbf{y}_T)$. These two batches are first transformed using PLFs as in (1). We then use (2) or (4) to obtain a distribution embedding $\mathbf{s}_T$, taking the average with respect to the examples in the support batch. Next, we use the distribution embedding to make predictions on the query batch using (3) or (5). Note that during training, $\mathbf{s}_T$ is identical across examples within the same batch, but it could vary (even within the same task) across batches.
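To make the procedure concrete, the following is a minimal sketch of one pre-training step. The `model` object with `transform`, `embed`, `predict`, `loss`, and `update` methods is a hypothetical stand-in for the three blocks and the optimizer, not the actual implementation:

```python
import numpy as np

def pretrain_step(tasks, model, rng, batch_size=50):
    """One pre-training gradient step (Section 4.4): sample a task,
    draw a support batch A and a query batch B, embed A via (2)/(4),
    and predict on B via (3)/(5)."""
    T = tasks[rng.integers(len(tasks))]   # random training task
    X, y = T["X"], T["y"]
    idx = rng.permutation(len(y))
    A, B = idx[:batch_size], idx[batch_size:2 * batch_size]
    Z = model.transform(X)                # task-dependent PLFs, (1)
    s_T = model.embed(Z[A], y[A])         # distribution embedding from A
    logits = model.predict(Z[B], s_T)     # classification block on B
    loss = model.loss(logits, y[B])
    model.update(loss)                    # gradient step on all blocks
    return loss

# Usage: rng = np.random.default_rng(0); pretrain_step(tasks, model, rng)
```

Fine-tuning reuses the same forward pass with the support set as batch A, but updates only the PLF parameters.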
If PLFs can approximately transform the covariates so that they admit the same $f$-expansion, then the rest of the network is task-independent. Thus, after pre-training on $\{T_t\}_{t=1}^{M}$, for each new task $S$, we can fine-tune the transformation block (1) while keeping the weights of the other blocks fixed. Because PLFs only have a small number of parameters, they can be trained on a small support set from the task $S$.

During inference, we first use the learned PLFs in (1) to transform covariates in both the support set and the query set. We then utilize the learned distribution embedding block to obtain $\mathbf{s}_S$, where the average in (2) and (4) is taken over the whole support set. Finally, the embedding $\mathbf{s}_S$ and the PLF-transformed query set covariates are used to classify query set examples using (3) or (5).

## 5 Numerical Studies

In Section 5.1, we use OpenML tabular datasets to demonstrate the performance of DEN on real-world tasks. DEN achieves the best performance among a number of baseline methods. We then introduce in Section 5.2 an approach to simulate binary classification tasks, which allows us to generate a huge number of pre-training examples without the need to collect real tasks. Surprisingly, DEN (and several other methods) trained on simulated data can sometimes outperform those trained on real data. In Section 5.3, we examine the performance of DEN in relation to different architecture and hyper-parameter choices. The findings are: (a) PLFs and fine-tuning are crucial when the training and test tasks are unrelated (e.g., when training tasks are simulated), whereas their effect is insignificant when the tasks are similar, and (b) the performance of DEN is relatively stable for small values of the dependency order $r$ in Remark 4.1.

Baseline methods for comparison with DEN include Matching Net (Vinyals et al., 2016), Proto Net (Snell et al., 2017), TADAM (Oreshkin et al., 2018), PMN (Wang et al., 2018), Relation Net (Sung et al., 2018), CNP (Garnelo et al., 2018), MAML (Finn et al., 2017), BMAML (Finn et al., 2018), T-Net (Lee & Choi, 2018) and Iwata & Kumagai (2020). Hyperparameters of all methods are chosen based on cross-validation on training tasks. Hyperparameters, model structures and implementation details are summarized in Appendix B. Note that we set the dependency order $r = 2$ unless stated otherwise.
For MAML, BMAML and T-Net, we fine-tune the last layer of the base model for 5 epochs on the support set. For the rest of the methods, we train them with episodic training² (Vinyals et al., 2016). For methods that do not readily handle variable-length inputs, we randomly repeat features so that all tasks have the same input length.

| Method | Average Test AUC (%) | % Best |
|--------|----------------------|--------|
| Matching Net | 50.11 (0.04) | 0.00% |
| Proto Net | 71.11 (0.72) | 27.50% |
| PMN | 56.11 (0.65) | 0.75% |
| Relation Net | 51.65 (0.28) | 0.25% |
| CNP | 58.01 (0.69) | 7.00% |
| Iwata & Kumagai (2020) | 70.35 (0.70) | 26.75% |
| MAML | 60.64 (0.82) | 7.00% |
| T-Net | 52.22 (0.41) | 0.50% |
| DEN | 70.12 (0.83) | 30.25% |

Table 2: Average test AUC (standard error) and the percentage of times that each method achieves the best performance on 20 OpenML datasets × 20 repeats.

## 5.1 Results On OpenML Classification Tasks

## 5.1.1 Binary Classification

We compare DEN with baseline methods on 20 OpenML binary classification datasets (Vanschoren et al., 2013), following the setup in Iwata & Kumagai (2020) (see a list of datasets in Appendix C). These datasets have examples ranging from 200 to 1,000,000 and features ranging from 2 to 25. We pre-train DEN and baseline methods on the OpenML datasets in a leave-one-dataset-out fashion. That is, for each of the 20 OpenML datasets chosen as a target task, we pre-train the models on the remaining 19 datasets. For the target task, we randomly select 50 examples to form the support set, and use the rest of the dataset as the query set. We repeat the whole procedure 20 times, and report the average AUC and the percentage of times that each method achieves the best performance in Table 2, based on 20 test sets × 20 repeats.

DEN has an average AUC comparable with the best methods, and achieves the best AUC most frequently. As a comparison, directly training task-specific linear models on the support set of each task gives an average test AUC of 57.15%. Directly training a neural network with 1 hidden layer of 8 hidden nodes on the support set gives an average test AUC of 54.58%. With 2 hidden layers, the AUC drops to 53.00%, and with 3 hidden layers, to 49.74%. DEN significantly outperforms those methods. These results demonstrate the effect of over-fitting for classical methods on small data, and the benefit of DEN over classical methods on those small-data problems. In Section 5.2, we further describe an approach to generate binary classification training tasks through controlled simulation. DEN trained on simulated tasks, surprisingly, outperforms DEN trained on real tasks, and, in fact, achieves the best performance among all competing methods.

## 5.1.2 Multiclass Classification

In addition to binary classification tasks, we also compare DEN with baseline methods on 8 OpenML multi-class classification datasets. These datasets have examples ranging from 400 to 1,000,000, features ranging from 5 to 23, and numbers of classes ranging from 3 to 7. We train DEN and baseline methods on the OpenML datasets in the same leave-one-dataset-out fashion as in the binary classification. Results are summarized in Table 3. DEN achieves the best performance, followed by Matching Net. We also compare against directly training a neural network (NN) on the support set, which achieves decent accuracy, but DEN remains the best method in the majority (61.6%) of cases.
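The leave-one-dataset-out protocol used throughout Section 5.1 can be summarized by the following minimal sketch; `build_model`, `finetune`, and `auc` are hypothetical helpers standing in for pre-training, PLF fine-tuning, and AUC evaluation:

```python
import numpy as np

def leave_one_dataset_out(datasets, build_model, rng, support_size=50):
    """Leave-one-dataset-out evaluation (Section 5.1): pre-train on
    all but one dataset, fine-tune the PLFs on a small support set,
    and test on the remaining examples of the held-out dataset."""
    aucs = {}
    for name, (X, y) in datasets.items():
        train = {k: v for k, v in datasets.items() if k != name}
        model = build_model(train)               # pre-training
        idx = rng.permutation(len(y))
        sup, qry = idx[:support_size], idx[support_size:]
        model.finetune(X[sup], y[sup])           # PLF layer only
        aucs[name] = model.auc(X[qry], y[qry])
    return aucs
```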
²Code is available at https://github.com/google-research/google-research/distribution_embedding_networks.

| Method | Average Test Accuracy (%) | % Best |
|--------|---------------------------|--------|
| Direct NN | 45.68 (1.56) | 13.6% |
| Matching Net | 46.36 (1.55) | 13.8% |
| Proto Net | 33.68 (1.80) | 4.9% |
| PMN | 24.77 (0.63) | 0.0% |
| Relation Net | 33.49 (1.82) | 4.9% |
| CNP | 18.77 (1.53) | 1.1% |
| Iwata & Kumagai (2020) | 24.93 (0.62) | 0.0% |
| DEN | 48.60 (1.51) | 61.6% |

Table 3: Average test accuracy (standard error) and the percentage of times that each method achieves the best performance on 8 OpenML datasets × 20 repeats.

## 5.2 Generate Training Tasks Through Controlled Simulation

In this section, we describe an approach to generate binary classification pre-training tasks based on model aggregation. Compared to pre-training meta-learning models on related real-world datasets, which can be expensive to collect, this synthetic approach can easily give us a huge number of pre-training examples. We will show that meta-learning methods trained with simulated data can, surprisingly, sometimes outperform those trained with real data.

Specifically, we first take seven image classification datasets: CIFAR-10, CIFAR-100 (Krizhevsky, 2009), MNIST (LeCun et al., 2010), Fashion MNIST (Xiao et al., 2017), EMNIST (Cohen et al., 2017), Kuzushiji MNIST (KMNIST; Clanuwat et al., 2018) and SVHN (Netzer et al., 2011). On each dataset, we pick nine equally spaced cutoffs and binarize the labels based on whether the class id is below the cutoff. This gives nine binary classification tasks for each dataset with positive label proportion in $\{0.1, 0.2, \ldots, 0.9\}$. To generate covariates for each task, we build 50 convolutional image classifiers of various model complexities (details in Appendix B) on each of the 63 tasks to predict the binary label. We take the classification scores on the test set as covariates. With these covariates and associated labels, we construct $7 \times 9 = 63$ binary classification tasks $\{T_1, \ldots, T_{63}\}$. Essentially, they are model aggregation tasks, since we are combining 50 classifiers to make a prediction. Note that the accuracy of those image classifiers ranges from below 0.6 to over 0.99, giving rise to covariates ranging widely in their signal-to-noise ratios.

Finally, to augment training data, we apply covariate sampling during pre-training, as sketched below. In each pre-training step, we first randomly sample an integer $C \in \{1, \ldots, 50\}$ and a task from the 63 aggregation tasks $\{T_1, \ldots, T_{63}\}$ described above. Then, among the 50 convolutional image classifiers we built, we randomly pick $C$ of them and use their classification scores as covariates to construct a *sub-task*. Finally, DEN takes labeled examples from this sub-task and uses them for training in this step. We emphasize that although the training tasks are built from image classification datasets, we do not use raw pixel values as covariates in pre-training.
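The covariate sampling step can be sketched as follows; `scores` is assumed to be the matrix of classification scores from the 50 classifiers for one aggregation task:

```python
import numpy as np

def sample_subtask(scores, labels, rng):
    """Covariate sampling for one pre-training step (Section 5.2):
    draw C in {1, ..., 50} and keep C of the 50 score columns as
    the sub-task covariates."""
    C = rng.integers(1, 51)   # upper bound is exclusive
    cols = rng.choice(scores.shape[1], size=C, replace=False)
    return scores[:, cols], labels
```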
OpenML binary classification with simulated training tasks. To illustrate the effectiveness of our data simulation approach, we use the models pre-trained on these data and evaluate them on the same 20 OpenML binary classification tasks of Section 5.1 after fine-tuning. Interestingly, as shown in Table 4, for 5 out of the 9 methods considered, pre-training on the simulated data gives statistically significantly better test AUC. This suggests that the proposed approach to generating training tasks is not only convenient but also effective. Moreover, DEN pre-trained on the simulated data significantly outperforms all methods (whether pre-trained on the simulated data or on OpenML data).

## 5.3 Effect Of Dependency Order, PLF, And Fine-Tuning

We continue our examination of DEN with pre-training on simulated data. In addition to comparing DEN with competing methods, we also study the performance of DEN in relation to its dependency order $r$, and the use of PLFs and fine-tuning. For DEN, we explore three options: 1) fine-tune the PLF layer for 10 epochs, 2) take the PLF layer directly from the last pre-training epoch without fine-tuning, or 3) do not include a PLF layer in DEN and hence no fine-tuning at all.

| Method | Average Test AUC (%) | % Best | Improv. in Test AUC (%) |
|--------|----------------------|--------|-------------------------|
| Matching Net | 53.88 (0.46) | 0.00% | 3.77 (0.46) |
| Proto Net | 71.12 (0.65) | 28.00% | 0.01 (0.97) |
| PMN | 59.04 (0.68) | 1.25% | 2.93 (0.94) |
| Relation Net | 57.97 (0.59) | 0.00% | 6.32 (0.65) |
| CNP | 60.95 (0.69) | 2.50% | 2.94 (0.98) |
| Iwata & Kumagai (2020) | 66.01 (0.74) | 11.50% | -4.34 (1.02) |
| MAML | 61.16 (0.66) | 2.75% | 0.52 (1.05) |
| T-Net | 53.35 (0.46) | 0.25% | 1.13 (0.62) |
| DEN | 74.13 (0.68) | 53.75% | 4.01 (1.07) |

Table 4: Average test AUC (standard error) and the percentage of times that each method achieves the best performance on 20 OpenML datasets × 20 repeats. The models are pre-trained on simulated data. We also report the improvement in average test AUC compared to pre-training on OpenML data.

| Method | Nomao | Puzzles |
|--------|-------|---------|
| Matching Net | 73.18 (2.23) | 62.65 (1.45) |
| Proto Net | 80.56 (0.56) | 73.77 (0.37) |
| TADAM | 82.42 (0.35) | 74.86 (0.25) |
| PMN | 77.00 (3.13) | 57.69 (1.53) |
| Relation Net | 52.32 (1.61) | 63.73 (1.79) |
| CNP | 91.40 (0.36) | 53.20 (0.12) |
| Iwata & Kumagai (2020) | 66.85 (3.23) | 61.32 (0.52) |
| MAML | 78.92 (2.22) | 54.92 (0.54) |
| BMAML | 47.20 (3.40) | 53.49 (1.88) |
| T-Net | 61.42 (3.44) | 54.55 (1.78) |
| DEN w/o PLF w/o FT | 59.01 (1.04) | 68.87 (0.38) |
| DEN w/o FT | 94.42 (0.16) | 70.74 (0.62) |
| DEN | 95.21 (0.10) | 78.11 (0.53) |

Table 5: Average test AUC (standard error) on Nomao and Puzzles data.

## 5.3.1 Tasks With Heterogeneous Covariate Spaces

We study and demonstrate the importance of PLFs when the training and test tasks are heterogeneous. Specifically, we use all 63 simulated tasks described in Section 5.2 for pre-training, and test the performance on two real datasets: Nomao and Puzzles. We give a short description of each dataset in Appendix C. For each dataset, we repeat the whole procedure 20 times and report the average AUC and standard error in Table 5. It is clear that DEN significantly outperforms the other baseline methods. More importantly, the results also show that fine-tuning and PLFs greatly improve the performance of DEN. Since DEN is pre-trained on simulated tasks which are completely unrelated to the target task, this improvement demonstrates the importance of fine-tuning and PLFs when the training and target tasks are heterogeneous. We also examine how the size of the support set affects the appropriate flexibility of the covariate transformation block.
Figure 3 shows that with enough support set examples (e.g., the Puzzles task), having more PLF keypoints benefits the test performance due to the improved ability of task adaptation. However, if the support set is small (e.g., the Nomao task), having a flexible covariate transformation block can even be marginally harmful.

![10_image_0.png](10_image_0.png)

Figure 3: Average test AUC versus the number of PLF keypoints on Nomao and Puzzles data.

| r = 1 | r = 2 | r = 3 | r = 4 | r = 5 | r = 6 | r = 7 | r = 8 |
|-------|-------|-------|-------|-------|-------|-------|-------|
| 94.38 (0.46) | 95.21 (0.10) | 93.80 (0.25) | 92.61 (0.55) | 93.16 (0.45) | 91.22 (0.74) | 88.95 (2.02) | 89.15 (1.34) |

Table 6: Average test AUC (standard error) of DEN on Nomao data with different dependency orders.

Finally, we conduct an ablation study to examine the effect of the dependency order in Remark 4.1 on the test AUC on the Nomao dataset. In particular, we examine DEN with $r \in [8]$. The results in Table 6 show that when $r \leq 5$, the test AUC is relatively stable, with the best performance achieved at $r = 2$, whereas the test AUC is much worse and unstable for larger $r$.

## 5.3.2 Tasks With Homogeneous Covariate Spaces

We study the performance of DEN in the case when the training and target tasks are homogeneous. To ensure task homogeneity, we use the model aggregation tasks described in Section 5.2 for both training and evaluation. Specifically, we use the $5 \times 9 = 45$ tasks described in Section 5.2 derived from CIFAR-10, CIFAR-100, MNIST, Fashion MNIST and EMNIST to train DEN and the other meta-learning methods. We then pick four test tasks from SVHN and KMNIST of different difficulties, where the average AUCs over the 50 classifiers are 68.28%, 78.11%, 91.51%, and 87.58%, respectively. Note that the training and test tasks are totally separate, but they likely follow similar distributions, since the covariates for training and testing are all classifier scores.

For each test task, we randomly select 100 sets of $C$ classifiers among the 50 candidate classifiers, resulting in 100 aggregation sub-tasks. For each aggregation sub-task, we form a support set with 50 labeled examples and a disjoint query set with 8000 examples. We repeat the entire training and fine-tuning process 5 times, and report the average AUC and its standard error. Table 7 shows the results of aggregating an ensemble of $C = 25$ classifiers. Table 8 shows the results where the number of classifiers $C$ to be aggregated is sampled uniformly from $[13, 25]$ and can vary across sub-tasks (100 aggregation sub-tasks × 5 repeats). To allow the baseline methods to take a varying number of covariates, we randomly duplicate some of the $C$ classifiers so that all inputs have 25 covariates. We observe that DEN significantly outperforms the other methods in all tasks, and that DEN without PLFs and/or without fine-tuning is statistically no worse than DEN with fine-tuning on the PLF layer. This suggests that fine-tuning the PLF layer is not necessary when the data distribution is similar among tasks.
| Method | Task 1 | Task 2 | Task 3 | Task 4 |
|--------|--------|--------|--------|--------|
| Matching Net | 70.01 (0.40) | 82.13 (0.05) | 95.29 (0.06) | 93.82 (0.08) |
| Proto Net | 90.95 (0.05) | 89.84 (0.03) | 98.07 (0.01) | 97.40 (0.02) |
| TADAM | 90.98 (0.05) | 89.90 (0.03) | 98.14 (0.01) | 97.57 (0.02) |
| PMN | 86.78 (0.10) | 88.69 (0.03) | 97.46 (0.02) | 96.22 (0.05) |
| Relation Net | 85.39 (0.15) | 88.70 (0.02) | 97.25 (0.02) | 95.55 (0.08) |
| CNP | 86.53 (0.09) | 88.80 (0.02) | 97.50 (0.02) | 96.22 (0.05) |
| Iwata & Kumagai (2020) | 89.33 (0.05) | 89.17 (0.02) | 97.93 (0.01) | 97.73 (0.02) |
| MAML | 86.10 (0.11) | 88.78 (0.03) | 97.48 (0.02) | 96.13 (0.06) |
| BMAML | 71.38 (0.84) | 85.96 (0.21) | 97.04 (0.08) | 95.39 (0.18) |
| T-Net | 86.23 (0.10) | 88.76 (0.03) | 97.47 (0.02) | 96.11 (0.06) |
| DEN w/o PLF w/o Fine-Tuning | 91.53 (0.03) | 90.18 (0.02) | 98.03 (0.01) | 98.37 (0.01) |
| DEN w/o Fine-Tuning | 91.76 (0.03) | 90.20 (0.02) | 98.18 (0.01) | 98.41 (0.01) |
| DEN | 91.80 (0.03) | 89.77 (0.02) | 97.38 (0.01) | 97.23 (0.01) |

Table 7: Average test AUC (standard error) when aggregating 25 classifiers.

| Method | Task 1 | Task 2 | Task 3 | Task 4 |
|--------|--------|--------|--------|--------|
| Matching Net | 75.16 (0.37) | 81.12 (0.08) | 95.86 (0.04) | 93.61 (0.08) |
| Proto Net | 90.02 (0.11) | 89.65 (0.04) | 97.94 (0.02) | 97.57 (0.02) |
| TADAM | 90.20 (0.10) | 89.74 (0.04) | 98.04 (0.01) | 97.84 (0.01) |
| PMN | 85.68 (0.24) | 88.59 (0.04) | 97.30 (0.03) | 95.83 (0.08) |
| Relation Net | 80.85 (0.65) | 88.53 (0.04) | 97.11 (0.04) | 95.03 (0.12) |
| CNP | 84.97 (0.28) | 88.71 (0.04) | 97.31 (0.04) | 95.96 (0.07) |
| Iwata & Kumagai (2020) | 89.26 (0.16) | 89.05 (0.03) | 97.82 (0.02) | 97.75 (0.02) |
| MAML | 85.39 (0.23) | 88.53 (0.04) | 97.32 (0.04) | 95.86 (0.08) |
| BMAML | 59.57 (1.07) | 85.40 (0.24) | 96.73 (0.10) | 94.53 (0.23) |
| T-Net | 85.51 (0.22) | 88.55 (0.04) | 97.35 (0.03) | 95.94 (0.07) |
| DEN w/o PLF w/o Fine-Tuning | 91.06 (0.09) | 89.92 (0.03) | 98.09 (0.01) | 98.24 (0.01) |
| DEN w/o Fine-Tuning | 91.17 (0.09) | 89.95 (0.03) | 97.95 (0.01) | 98.10 (0.01) |
| DEN | 91.29 (0.03) | 89.83 (0.02) | 97.60 (0.03) | 97.47 (0.01) |

Table 8: Average test AUC (standard error) when aggregating a variable number of classifiers.

## 6 Conclusion

In this work, we introduce a novel meta-learning method that can be applied in settings where both the distribution and the number of covariates vary across tasks. This allows us to train the model on a wider range of training tasks and then adapt it to a variety of target tasks. Most other meta-learning techniques do not readily handle such settings. In numerical studies, we demonstrate that the proposed method outperforms a number of meta-learning baselines.

The proposed model consists of three flexible building blocks. Each block can be replaced by more advanced structures to further improve its performance. DEN can also be combined with optimization-based meta-learning methods, e.g., MAML. A limitation of DEN is that it requires calculating the embedding for $d^r$ different combinations of covariates, which is infeasible for high-dimensional tasks. A potential solution is to use a random subset of these combinations. We leave the exploration of these options for future work.
## References Han Altae-Tran, Bharath Ramsundar, Aneesh S Pappu, and Vijay Pande. Low data drug discovery with one-shot learning. *ACS Central Science*, 3(4), 2017. Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In *NIPS*, 2016. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. *NeurIPS Workshop on Machine Learning for Creativity and* Design, 2018. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. EMNIST: Extending MNIST to handwritten letters. In *IJCNN*, 2017. Wenyuan Dai, Yuqiang Chen, Guirong Xue, Qiang Yang, and Yong Yu. Translated learning: Transfer learning across different feature spaces. In *NIPS*, 2008. Oscar Day and Taghi M. Khoshgoftaar. A survey on heterogeneous transfer learning. *Journal of Big Data*, 4, 2017. Tristan Deleu and Yoshua Bengio. The effects of negative adaptation in model-agnostic meta-learning. *arXiv* preprint, 2018. Lixin Duan, Dong Xu, and Ivor W. Tsang. Learning with augmented features for heterogeneous domain adaptation. In *ICML*, 2012. Harrison Edwards and Amos Storkey. Towards a neural statistician. In *ICLR*, 2017. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *ICML*, 2017. Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In *NeurIPS*, 2018. Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S. M. Ali Eslami. Conditional neural processes. In *ICML*, 2018. Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. In *ICLR*, 2018. Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 17(109), 2016. Tomoharu Iwata and Atsutoshi Kumagai. Meta-learning from tasks with heterogeneous attribute spaces. In NeurIPS, 2020. Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. In *ICLR*, 2017. Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In *ICML*, 2015. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Yann LeCun, Corinna Cortes, and Christopher Burges. MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2010. Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, and Sung Ju Hwang. Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks. In *ICLR*, 2020. Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang. Rapid neural architecture search by learning to generate graphs from datasets. In *ICLR*, 2021. Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set Transformer: A framework for attention-based permutation-invariant neural networks. In *ICML*, 2019. Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In *ICML*, 2018. Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In *CVPR*, 2018a. 
Wen Li, Lixin Duan, Dong Xu, and Ivor W. Tsang. Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation. *IEEE Transactions on Pattern Analysis and Machine* Intelligence, 36(6), 2014. Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In *ECCV*, 2018b. Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sungju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. In *ICLR*, 2019. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018. Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant feature representation. In *ICML*, 2013. Tsendsuren Munkhdalai and Hong Yu. Meta networks. In *ICML*, 2017. Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. In *ICML*, 2018. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Boris N. Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: task dependent adaptive metric for improved few-shot learning. In *NeurIPS*, 2018. Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In *ICLR*, 2017. Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In *ICLR*, 2019. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In *ICML*, 2016. Victor Garcia Satorras and Joan Bruna Estrach. Few-shot learning with graph neural networks. In *ICLR*, 2018. Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. Generalizing across domains via cross-gradient training. In *ICLR*, 2018. Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In *AAAI*, 2018. Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In *NIPS*, 2017. Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In *CVPR*, 2018. Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: Networked science in machine learning. *SIGKDD Explorations*, 15(2), 2013. Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In *NIPS*, 2016. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, and Tao Qin. Generalizing to unseen domains: A survey on domain generalization. In *IJCAI*, 2021. Yaqing Wang, Quanming Yao, James T Kwok, and Lionel M Ni. Generalizing from a few examples: A survey on few-shot learning. *ACM Computing Surveys*, 53(3), 2020. Yu-Xiong Wang, Ross B. Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In *CVPR*, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *ArXiv preprint*, 2017. Yuguang Yan, Wen Li, Michael K. P. 
Ng, Mingkui Tan, Hanrui Wu, Huaqing Min, and Qingyao Wu. Learning discriminative correlation subspace for heterogeneous domain adaptation. In *IJCAI*, 2017. Qiang Yang, Yuqiang Chen, Gui-Rong Xue, Wenyuan Dai, and Yong Yu. Heterogeneous transfer learning for image clustering via the social web. In *Joint Conference of ACL and IJCNLP*, 2009. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander J. Smola. Deep sets. In *NIPS*, 2017. Joey Tianyi Zhou, Ivor W. Tsang, Sinno Jialin Pan, and Mingkui Tan. Multi-class heterogeneous domain adaptation. *Journal of Machine Learning Research*, 20, 2019. Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. arXiv preprint, 2021.
Review 1:

Summary: This paper proposes a meta-learning framework that can be used with heterogeneous covariate spaces, where the inputs for each task may be incompatible and/or have different dimensionalities. To handle such heterogeneity, the authors propose to learn a task-specific linear embedding for each task to send them to a more compatible, continuous space across tasks, while learning a task-independent distribution embedding block to summarize the task distribution and obtain a pairwise embedding of the set of features. Such a treatment of the features as a set allows dealing with variable-length inputs that are dimension-wise incompatible. Finally, the authors propose to use a Deep Sets-based classifier to deal with the input feature set. The authors provide theoretical analysis to justify the choice of the architectural design, and further perform experiments on a tabular dataset against existing methods, which show that the proposed framework outperforms them.

Contributions:
- Proposal of a meta-learning framework that can deal with heterogeneous input spaces across tasks.
- Experiments on a tabular data benchmark which show the effectiveness of the proposed method.

Strengths and Weaknesses:

Pros
- The motivation and the focus of the paper, which is to deal with input heterogeneity, are both sound and clear, and to my knowledge, this is the only work on meta-learning that focuses on dealing with heterogeneity in tabular data.
- The experimental comparison against existing methods as well as ablations of the proposed framework show the effectiveness of the proposed framework as well as its individual components.
- The theoretical analysis further justifies the choice of the architectural components (e.g., the Deep Sets-based classifier).
- The paper is overall well-written and well-structured, and both the methodology and the experiments section are easy to follow.

Cons
- This is not the first work that deals with a varying number of inputs, since [Lee et al. 20] and [Lee et al. 21] use set encoding to deal with datasets with different numbers and orders of classes. Although these previous works do not tackle tabular data and do not use task-dependent embeddings, the strategy used in [Lee et al. 20] and [Lee et al. 21] can also be utilized for tabular data with little modification. Thus, they should be acknowledged and compared against. Maybe the proposed DEN could be compared against the L2B framework in [Lee et al. 20] in their any-shot, any-way classification setting, since the goals of the two works are similar. The task embedding using set encoding is further developed in [Lee et al. 21], and is used to generalize across datasets with different sizes and orders of classes, although experiments on NAS may be out of scope for this work.
- The baselines used are somewhat dated, although there is one baseline from a work published in 2020. The authors should compare against more recent state-of-the-art meta-learning models from recent years. Also, they are not direct competitors since they lack mechanisms to handle heterogeneous input distributions.

[Lee et al. 20] Learning to Balance: Bayesian Meta-learning for Imbalanced and Out-of-distribution Tasks, ICLR 2020
[Lee et al. 21] Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets, ICLR 2021

Requested Changes:
- Please discuss [Lee et al. 20] and [Lee et al.
21], and modify the hierarchical set encoding in the two previous works into a flat set encoding and compare its performance on the tabular dataset benchmark, if possible.
- If possible, please compare your methods on the any-shot any-way classification benchmark from [Lee et al. 20].

Broader Impact Concerns: None

==================================================

Review 2:

Summary: The paper introduces a new Distribution Embedding Network that focuses on the traditional tasks studied by meta-learning: training on certain tasks and then generalizing to new ones. Different from the focus of most other meta-learning works, this paper focuses primarily on tabular data, which might be the reason that this method manages to achieve a clear improvement over other existing methods.

Strengths and Weaknesses:
- Strengths
  - The idea of Distribution Embedding Networks is quite interesting, and, to my knowledge, very different from many other meta-learning settings.
  - The empirical performances are extremely strong.
- Weaknesses
  - It's unclear where the power of the empirical performances is derived from; some ablation studies are necessary.
  - There seems to be no apparent reason why the study focuses on tabular data.
  - Proposition 4.3 seems to be a trivial result judging by the proofs in the appendix. Also, the authors need to explain the significance of this result: there seems to be no mathematical significance to the result (which is fine), but then there seems to be no practical significance either. How does the result guide us in using the method in practice?
  - It's intuitively unconvincing that the method can be applied to arbitrary query tasks with arbitrary support tasks, yet it's not clear where the relation is specified. In other words, what are the application scenarios for this method? This also corresponds to the above question that there seems to be a missing link between the theoretical discussion and the practical side of the work.

Requested Changes: Please answer the questions above.

Broader Impact Concerns: no issues noted.

==================================================

Review 3:

Summary: This paper tackles an important problem in meta-learning: the tasks are not necessarily related. By designing a task-specific representation, this paper jointly adopts the task-specific representation and the natural representation Z for prediction. The empirical results on different UCI datasets and simple image datasets demonstrate the effectiveness of the proposed method.

Strengths and Weaknesses:

### Strong points
1. This paper considers an important problem in meta representation learning: the tasks are not necessarily related, so enforcing a meta-prior is not proper.
2. The proposed method is empirically validated on different open-source datasets.

### Weak points
1. The proposed approach is merely a collection of different other approaches. Although TMLR does not emphasize novelty, this paper is still strongly suggested to illustrate the motivation for choosing these components.
2. Another main concern is why one should consider meta-representation learning in the context of tabular or simple image datasets. These datasets (particularly the tabular datasets) could be solved in the "non-deep" regime, e.g., by kernel methods or sparse coding. Based on this argument, the experiments and the proposed approach seem not fully convincing.
3. Concerns about the proposed analysis.

### Detailed comments
1. A major concern is the motivation of the proposed approach.
Please note the motivation of the problem itself is clear and well-appreciated. In the context of UCI datasets and simple image datasets, why do we need to learn a representation through deep neural networks? If the data is UCI or simple images, we could learn the embedding through kernel methods or sparse coding. The corresponding method could be significantly simplified. This reviewer feels rather confused about the motivation of the proposed approach and the evaluated empirical results.
2. The key to encoding the task-specific feature is equations (2-4), while this reviewer has multiple concerns:
- In Eq (2), the task is represented by the label-specific features and the label average. This seems a bit heuristic, since the labels across tasks do not necessarily share the same label space. E.g., for two different binary classification tasks, the same label average is not meaningful.
- In Eq (3, 4), the learned embedding $Z$ and the task-specific embedding $S$ are combined by concatenation; what is the justification for combining information in this manner? Why not a tensor product?
- Besides, why introduce a, b in the embedding S? What is the specific benefit? This incurs both high memory and computational complexity. It seems this paper did NOT report a comparison of time and memory complexity.
3. In Sec 4.3, the analysis is a bit ad-hoc: in Equations (6-7), the P(y) and P(x|y=k) are ground-truth distributions. Unfortunately, in the context of this paper (few-shot settings), adopting the empirical term as an approximation of the ground-truth distribution is incorrect. In fact, Sec 4.3 could not provide a clear explanation of the proposed approach. It seems to be an ad-hoc analysis that does not consider the few-shot or meta-learning problems.
4. In the empirical comparison, since the data is not complex or extremely high-dimensional, comparing with modern deep-network-based approaches seems unfair.

Requested Changes: Indeed I really like the problem setup and the motivation; unfortunately, the proposed approach does not seem to clearly address the problem. In fact, the proposed analysis seems ad-hoc and many components lack clear support or justification. Based on these, I would suggest:
- A clear explanation and justification of each component, particularly why design a meta-representation learning approach? If it is necessary, how can one fully justify that the learned representation is correct? How can one justify that the encoding of the task-specific feature $S$ exactly captures the correct information? The whole proposed approach lacks clear **justifications**.
- A revisit of the evaluated datasets. In general, the motivation of deep meta-learning is to learn a representation. In this paper, apart from accuracy, there is no other proper information/analysis about the representation learning. Why not a simple sparse-coding-based approach on the tabular datasets?
- The derived theory could not support the proposed framework in few-shot settings. The authors are strongly suggested to revise the theory.

Broader Impact Concerns: Not applicable

==================================================
# Integrating Large Language Models In Causal Discovery: A Statistical Causal Approach

Anonymous authors
Paper under double-blind review

## Abstract

In practical statistical causal discovery (SCD), embedding domain expert knowledge as constraints into the algorithm is significant for creating consistent, meaningful causal models, despite the challenges in the systematic acquisition of background knowledge. To overcome these challenges, this paper proposes a novel method for causal inference, in which SCD and knowledge-based causal inference (KBCI) with a large language model (LLM) are synthesized through "statistical causal prompting (SCP)" for LLMs and prior knowledge augmentation for SCD. Experiments have revealed that the results of LLM-KBCI and of SCD augmented with LLM-KBCI approach the ground truth more closely than the SCD result without prior knowledge. It has also been revealed that the SCD result can be further improved if the LLM undergoes SCP. Furthermore, with an unpublished real-world dataset, we have demonstrated that the background knowledge provided by the LLM can improve SCD on this dataset, even if this dataset has never been included in the training data of the LLM. The proposed approach can thus address challenges such as dataset biases and limitations, illustrating the potential of LLMs to improve data-driven causal inference across diverse scientific domains.

## 1 Introduction

## 1.1 Background

Understanding causal relationships is key to comprehending the basic mechanisms in various scientific fields. The statistical causal inference framework, which is widely applied in areas such as medical science, economics, and environmental science, aids this understanding. However, traditional statistical causal inference methods generally rely on an assumed causal graph to determine the existence and strength of causal impacts. To overcome this challenge, data-driven algorithmic methods have been developed as statistical causal discovery (SCD) methods, in both non-parametric (Spirtes et al., 2000; Chickering, 2002; Silander & Myllymäki, 2006; Yuan & Malone, 2013; Huang et al., 2018; Xie et al., 2020) and semi-parametric (Shimizu et al., 2006; Hoyer et al., 2008; Shimizu et al., 2011; Rolland et al., 2022; Tu et al., 2022) approaches. In addition, benchmark datasets have been published for the evaluation of SCD methods (Mooij et al., 2016; Käding & Runge, 2023).

Despite advancements in SCD algorithms, the data-driven acquisition of causal graphs without domain knowledge can be inaccurate. This is generally attributed to a mismatch between the assumptions in SCD and real-world phenomena (Reisach et al., 2021). Moreover, obtaining experimental and systematic datasets sufficient for causal inference is difficult, whereas observational datasets, which are prone to selection bias and measurement errors, are more readily accessible (Abdullahi et al., 2020). Consequently, for more persuasive and reliable validation of causal models, augmentation with domain knowledge plays a critical role (Rohrer, 2017). In addition, with respect to efficiency and precision in SCD, the importance of incorporating constraints on trivial causal relationships into the SCD algorithms has been highlighted (Inazumi et al., 2010; Chowdhury et al., 2023). Causal learning software packages have been augmented with prior knowledge, as demonstrated in "causal-learn"¹ and "LiNGAM"² (Zheng et al., 2023; Ikeuchi et al., 2023).
¹https://github.com/py-why/causal-learn
²https://github.com/cdt15/lingam

![1_image_0.png](1_image_0.png)

Figure 1: Overall framework of the statistical causal prompting in a large language model (LLM) and statistical causal discovery (SCD) with LLM-based background knowledge.

Moreover, the systematic acquisition of domain expert knowledge is a challenging task. Although there are several examples of directed acyclic graphs (DAGs) constructed by domain experts, as demonstrated in health services research (Rodrigues et al., 2022), practical methods for this process have not been proposed. This scenario has recently changed with the rapid progress in the development of high-end large language models (LLMs). Given their high performance in applying the domain knowledge acquired from vast amounts of data during pre-training (OpenAI, 2023; Touvron et al., 2023; Gemini Team, Google, 2023), LLMs are also expected to be applicable to causal reasoning tasks. In fact, several studies have reported trial results of LLM knowledge-based causal inference (LLM-KBCI) (Jiang et al., 2023; Jin et al., 2024; Kıcıman et al., 2023; Zečević et al., 2023; Jiralerspong et al., 2024; Zhang et al., 2024), and in particular, a performance enhancement in non-parametric SCD guided by LLMs has been confirmed (Ban et al., 2023; Vashishtha et al., 2023; Khatibi et al., 2024). However, it remains unclear whether the enhancement in SCD accuracy with background knowledge augmented by LLMs is robustly observed when the SCD task depends on closed data not contained in the pre-training datasets of LLMs, and whether it leads to more statistically valid causal models.

## 1.2 Central Idea Of Our Research

Building on the rapidly evolving techniques for causal inference with LLMs, a novel methodology for SCD is proposed in this paper, in which an LLM prompted with the results of SCD without background knowledge evaluates the probability of causal relationships, considering both domain knowledge and the statistical characteristics suggested by SCD (Figure 1).

In the first step, SCD is executed on a dataset without prior knowledge, and the results of the statistical causal analysis are output. To make maximal use of the expert knowledge acquired in the pre-training process of the LLM, the method of generated knowledge prompting (Liu et al., 2022) is adopted; the process of utilizing the LLM then comprises the second step (knowledge generation) and the third step (knowledge integration). In the second step, knowledge of the causal relationships between all pairs of variables is generated in detail from the domain knowledge of the LLM, based on the zero-shot chain-of-thought (ZSCoT) prompting technique (Kojima et al., 2022). Here, the LLM can be prompted with the results of SCD as supplemental information for LLM inference, with the expectation that in-context learning (Brown et al., 2020) of the SCD results can lead to better statistical performance of the LLM-KBCI. We define this technique as "statistical causal prompting" (SCP). Thereafter, in the third step, the LLM judges whether there is any causal relationship between each pair of variables with "yes" or "no," objectively taking into account the dialogs of the second step. Here, the probabilities of the responses from the LLM are evaluated and transformed into the prior knowledge matrix. This matrix, the output of LLM-KBCI with SCP, is finally re-applied to SCD in the fourth step.
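The four steps can be summarized by the following schematic sketch; `run_scd` and `ask_llm_prob` are hypothetical helpers wrapping the SCD algorithm and the LLM prompting with response-probability evaluation, respectively:

```python
import numpy as np

def scd_with_llm_prior(X, var_names, run_scd, ask_llm_prob):
    """Schematic sketch of the four-step pipeline in Figure 1
    (not the actual implementation)."""
    _, boot_probs = run_scd(X, prior=None)     # step 1: SCD without prior
    d = len(var_names)
    prior = np.zeros((d, d))                   # prior knowledge matrix
    for i in range(d):
        for j in range(d):
            if i == j:
                continue
            # Steps 2-3: ZSCoT knowledge generation with statistical
            # causal prompting, followed by a yes/no judgment; the
            # response probability fills the prior knowledge matrix.
            prior[i, j] = ask_llm_prob(var_names[j], var_names[i],
                                       boot_probs[i, j])
    return run_scd(X, prior=prior)             # step 4: augmented SCD
```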
## 1.3 Our Contribution

The contributions of the demonstration of the proposed method in this paper are as follows:

**(1) Realization of the Synthesis of LLM-KBCI and SCD in a Mutually-Referential Manner** The practical method for realizing the proposed concept of Figure 1 is detailed, and the SCP method is proposed. Experiments were conducted on several open benchmark datasets that have been widely used for the evaluation of SCD algorithms; they all consist of continuous variables.

**(2) Mutual Performance Enhancement of SCD and LLM-KBCI** We demonstrated that augmentation by the LLM with SCP improved the performance of SCD, and that the performance of LLM-KBCI was also enhanced by SCP.

**(3) Enhancement of SCD Performance by SCP** In the experiments, we show that the output of several SCD algorithms augmented by the LLM with SCP can be a causal model superior to that obtained with prompting without SCP, in terms of both domain expertise and statistics.

**(4) Improvement of SCD Results with the Background Knowledge Provided by LLMs, Even If the Dataset Is Not Contained in the Pre-Training Materials** We prepared a closed health screening dataset that is not contained in the pre-training materials of LLMs, and conducted an experiment on a sub-dataset randomly sampled from the entire dataset. Through this experiment, we clearly confirmed that the proposed method robustly leads to causal models that are more statistically valid and consistent with the natural ground truths. This proves that LLMs can indeed contribute to background knowledge augmentation for SCD algorithms in practical situations, even if the dataset used for SCD is not memorized by the LLMs.

## 2 Related Works And Originality In Our Work

**Augmentation of SCD Algorithms with Background Knowledge** As introduced in Section 1.1, several SCD algorithms³ can be systematically augmented with background knowledge, and their software packages are open. For example, as a non-parametric and constraint-based SCD method, the Peter-Clark (PC) algorithm (Spirtes et al., 2000) is augmented with the background knowledge of forced or forbidden directed edges in "causal-learn." "Causal-learn" also provides the Exact Search algorithm (Silander & Myllymäki, 2006; Yuan & Malone, 2013) as a non-parametric and score-based SCD method, which can be augmented with the background knowledge of forbidden directed edges as a super-structure matrix. With respect to semi-parametric approaches, the DirectLiNGAM algorithm (Shimizu et al., 2011) is augmented with prior knowledge of the causal order (Inazumi et al., 2010) in the "LiNGAM" project (Ikeuchi et al., 2023).

**Causal Inference in a Knowledge-Driven Approach with LLMs** In addition to the study on causality detection from natural language texts using language models with additional training datasets (Khetan et al., 2022), the rapid growth of LLMs has made it possible to produce valuable works on causality, including causal inference using LLMs. Attempts have been made to use LLMs for causal inference among a set of variables, prompting with metadata such as the names of variables, and without running the SCD process on the benchmark datasets (Kıcıman et al., 2023; Zečević et al., 2023; Jiralerspong et al., 2024; Zhang et al., 2024).

³ All of the algorithms adopted for the experiments in this paper can be used under the assumptions of a DAG and no hidden confounders.
Adopting a similar approach, the concept of causal modeling agents, which improves the precision of causal graphs through iterated hypothesis generation from LLMs and model fitting to real-world data, has also been proposed (Abdulaal et al., 2024). There are also studies on incorporating LLMs in the process of SCD as an alternative tool for conditional independence tests in the PC algorithm (Cohrs et al., 2023), and for identifying causal graphs beyond the Markov equivalence class (Long et al., 2023). In addition, research has been conducted focusing on the use of LLMs to improve SCD results (Ban et al., 2023; Vashishtha et al., 2023; Khatibi et al., 2024). However, all of these experiments were conducted only on popular benchmark datasets contained in the pre-training datasets of LLMs. Consequently, while acknowledging the valuable foundations laid by previous studies, it has remained uncertain whether the enhancements in SCD accuracy are truly driven by LLMs leveraging their vast knowledge for genuine causal inference, or merely by reproducing the memorized content of the datasets (Vashishtha et al., 2023).

**Originality in Our Work** In contrast to the studies with a similar focus on LLM-guided SCD (Ban et al., 2023; Vashishtha et al., 2023; Khatibi et al., 2024), this study also focuses on constructing the background knowledge in a quantitative manner, based on the response probability of the LLM, which can reflect the credibility of the decision made by the LLM with SCP. In addition, for a semi-parametric SCD method such as LiNGAM, we detail herein how to achieve both statistical validity and natural interpretation with respect to domain knowledge at a maximally high level, by prompting with the causal coefficients and bootstrap probabilities of all the patterns of directed edges. Moreover, we validate the proposed method on an unpublished dataset. This approach has not only demonstrated the practical utility and robustness of our method, but also opened new avenues for supporting the validity and applicability of existing works (Ban et al., 2023; Vashishtha et al., 2023; Khatibi et al., 2024).

## 3 Materials And Methods

## 3.1 Algorithms And Elements For LLM-Augmented Causal Discovery

With respect to practicality, the method of Figure 1 is outlined as Algorithm 1. Following the notation in Algorithm 1, the input elements in the demonstration are explained below.

**Algorithms for Statistical Causal Discovery** For the SCD method S, we adopted the PC algorithm (Spirtes et al., 2000), Exact Search based on the A* algorithm (Yuan & Malone, 2013), and the DirectLiNGAM algorithm (Shimizu et al., 2011), all of which can optionally be augmented with prior knowledge and are open in "causal-learn" (Zheng et al., 2023) and "LiNGAM" (Ikeuchi et al., 2023). Furthermore, we also implement the bootstrap sampling function B of the SCD algorithm to investigate statistical properties such as the bootstrap probabilities pij of the emergence of directed edges from xj to xi. In our experiments, the number of bootstrap resamplings was fixed to 1000.

**Conditions of the LLM and Prompting** For utilizing the LLM as the domain expert, we adopted GPT-4-1106-preview⁴ developed by OpenAI; the temperature, a hyperparameter for adjusting the probability distribution of the output, was fixed to 0.7.
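As a concrete illustration of the bootstrap sampling function B introduced above, the sketch below computes a bootstrap probability matrix P with the lingam package; `DirectLiNGAM.bootstrap` and `BootstrapResult.get_probabilities` are used per our reading of that package's interface, and the data are random placeholders.

```python
# Minimal sketch of the bootstrap probability matrix P (function B in Algorithm 1),
# assuming the bootstrap utilities of the lingam package.
import numpy as np
import lingam

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # placeholder data

model = lingam.DirectLiNGAM()
result = model.bootstrap(X, n_sampling=1000)   # 1000 resamplings, as in the paper
P = result.get_probabilities(min_causal_effect=0.01)
print(P)  # P[i, j]: bootstrap probability that a directed edge x_j -> x_i emerges
```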
⁴ We recognize that there are various kinds of high-performance LLMs from several institutes, and that including a broader range of LLMs could provide valuable insights into the generalizability and scalability of our method across various LLM architectures. However, our goal in this work is to explore the effectiveness of integrating LLMs into SCD via SCP, which requires trials and comparisons of various SCP patterns and SCD algorithms, as described in Section 4. To maintain consistency and control across these trials, it is also important to fix the LLM in the experiments to one with advanced capabilities and state-of-the-art status in various domains. Moreover, the adopted LLM must satisfy specific conditions for our experiments, such as a sufficient maximum input token capacity for the prompting processes and the functionality to obtain the log-probabilities of the output. For these strategic and technical reasons, we adopted GPT-4 in our experiments.

**Algorithm 1** Background knowledge construction with the LLM prompted with the results of the SCD

- Input 1: data X with variables {x1, ..., xn}
- Input 2: SCD method S
- Input 3: function for bootstrap B
- Input 4: response of the domain expert (GPT-4) εGPT4
- Input 5: log probability of the response L(εGPT4)
- Input 6: prompt function for knowledge generation Q(1)ij
- Input 7: prompt function for knowledge integration Q(2)ij
- Input 8: transformation from probability matrix to prior knowledge T
- Input 9: number of times to measure the probability M
- Output: result of SCD with prior knowledge Ĝ on X

1. Ĝ0 = S(X) (SCD result without prior knowledge)
2. P = B(S, X) (bootstrap probability matrix)
3. for i = 1 to n; for j = 1 to n:
   - pii = NaN
   - if i ≠ j:
     - prompt q(1)ij = Q(1)ij(xi, xj, Ĝ0, P)
     - response aij = εGPT4(q(1)ij)
     - prompt q(2)ij = Q(2)ij(q(1)ij, aij)
     - for m = 1 to M: L(m)ij = L(m)(εGPT4(q(2)ij) = "yes")
     - mean probability pij = (1/M) Σ_{m=1}^{M} exp(L(m)ij)
4. probability matrix p = (pij)
5. prior knowledge PK = T(p)
6. return Ĝ = S(X, PK)

The template for the first prompt q(1)ij for knowledge generation is shown in Table 1. This prompting template is based on the underlying principle of the ZSCoT technique⁵ (Kojima et al., 2022), which has been reported as a potential method to enhance the performance of LLM generation tasks; the enhancement is achieved by guiding logical inference and eliciting from the LLM the background knowledge acquired through the pre-training process. Furthermore, in the expectation that in-context learning (Brown et al., 2020) of the SCD results can lead to better statistical performance of LLM-KBCI, the SCD results, e.g., the causal structure and bootstrap probabilities, can be included in ⟨blank 5⟩ and ⟨blank 9⟩; we define this as "statistical causal prompting" (SCP). Because the information used in SCP partially depends on the SCD algorithm, a brief description of the patterns for constructing the contents of ⟨blank 5⟩ and ⟨blank 9⟩ is presented in Section 3.2.

As shown in Table 2, the generated knowledge is integrated in the second prompt, and GPT-4 is required to judge the existence of the causal effect from xj on xi from an objective point of view. Because the response from GPT-4 is restricted to "yes" or "no," it is simple to quantitatively evaluate the level of GPT-4's confidence regarding the existence of the causal relationship, based on both the SCD result and domain knowledge.
⁵ Although the quality of the LLM outputs could be further enhanced, e.g., by fine-tuning with datasets containing fundamental knowledge for causal inference or by Retrieval-Augmented Generation (RAG), we adopt the idea of ZSCoT in order to establish a low-cost and simple method that can be applied universally, independent of the targeted field of causal inference.

Table 1: Prompt template of Q(1)ij(xi, xj, Ĝ0, P) used for the generation of the expert knowledge of the causal effect from xj on xi. The "blanks" enclosed in ⟨ ⟩ are filled with description words. The word for ⟨blank 6⟩ is selected from "a" or "no," depending on the content of ⟨blank 5⟩. The notations of the SCD result without prior knowledge Ĝ0 and the bootstrap probability matrix P are the same as those in Algorithm 1.

> We want to carry out causal inference on ⟨blank 1. The theme⟩, considering ⟨blank 2. The description of all variables⟩ as variables. First, we have conducted the statistical causal discovery with ⟨blank 3. The name of the SCD algorithm⟩, using a fully standardized dataset on ⟨blank 4. The description of the dataset⟩. ⟨blank 5. The information of Ĝ0 or P is included here; the detail of the contents depends on the prompting pattern.⟩ According to the results shown above, it has been determined that there may be ⟨blank 6. a/no⟩ direct impact of a change in ⟨blank 7. The name of xj⟩ on ⟨blank 8. The name of xi⟩ ⟨blank 9. The value of causal coefficients or bootstrap probability⟩. Then, your task is to interpret this result from a domain knowledge perspective and determine whether this statistically suggested hypothesis is plausible in the context of the domain. Please provide an explanation that leverages your expert knowledge on the causal relationship between ⟨blank 7. The name of xj⟩ and ⟨blank 8. The name of xi⟩, and assess the naturalness of this causal discovery result. Your response should consider the relevant factors and provide a reasoned explanation based on your understanding of the domain.

The probability pij of the assertion that there is a causal effect from xj on xi can be output from GPT-4 directly, with the optional function of returning the log-probabilities of the most likely tokens at each token position⁶. Although pij can readily be evaluated from the log probability of the GPT-4 response "yes," there is a slight fluctuation in the log probabilities output by GPT-4 (Andriushchenko et al., 2024). Thus, for the decision of the prior knowledge matrix PK, we adopted the mean probability pij over M single-shot measurements, and we set M = 5 in the experiments. The combination of these prompting techniques can contribute to minimizing the risk of hallucination by the LLM, and a reliable response probability from the LLM for the causal relationship between each pair of variables is expected to be obtained.

Table 2: Prompt template of Q(2)ij(q(1)ij, aij) used for the quantitative evaluation of the probability of GPT-4's assertion that there is a causal effect from xj on xi.

> An expert was asked the question below: ⟨blank 10. q(1)ij⟩ Then, the expert replied with its domain knowledge: ⟨blank 11. aij⟩ Considering objectively this discussion above, if ⟨blank 12. The name of xj⟩ is modified, will it have a direct or indirect impact on ⟨blank 13. The name of xi⟩? Please answer this question with ⟨yes⟩ or ⟨no⟩. No answers except these two responses are needed.
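The probability pij described above can be estimated programmatically from token log-probabilities. The following is a minimal sketch assuming the OpenAI Python SDK's Chat Completions interface with the logprobs option; the prompt string is a placeholder for the knowledge-integration prompt q(2)ij, and whether this exactly matches the sampling procedure detailed in Appendix D is not guaranteed.

```python
# Minimal sketch: mean probability p_ij of a "yes" answer over M single shots,
# assuming the OpenAI Chat Completions API with logprobs enabled.
import math
from openai import OpenAI

client = OpenAI()
M = 5                                   # number of single-shot measurements
q2 = "<knowledge-integration prompt q(2)_ij goes here>"  # placeholder

def yes_probability(prompt: str) -> float:
    """One single-shot estimate of P(answer == 'yes')."""
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=1,
        logprobs=True,
        top_logprobs=5,
    )
    for cand in resp.choices[0].logprobs.content[0].top_logprobs:
        if cand.token.strip().lower() == "yes":
            return math.exp(cand.logprob)
    return 0.0                          # "yes" was not among the top tokens

p_ij = sum(yes_probability(q2) for _ in range(M)) / M
```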
⁶ The technical details of this probability sampling are explained in Appendix D.

For the subsequent SCD with prior knowledge, the probability matrix p is transformed with T, as expressed by Algorithm 2, into the background knowledge matrix. Here, if PKij = Forbidden, the causal effect from xj to xi is forbidden to appear in the SCD result. On the other hand, if PKij = Forced, the causal effect from xj to xi is forced to appear in the SCD result⁷. For the decision of a forbidden or forced causal relationship in PKij, we prepare the probability criterion α1 for the forbidden path and α2 for the forced path. In our experiments, α1 is fixed at 0.05 and α2 at 0.95 as common settings⁸.

**Algorithm 2** Transformation from the probability matrix into the prior knowledge matrix

- Input 1: probability matrix p = (pij)
- Input 2: SCD method S ∈ {PC, Exact Search, DirectLiNGAM}
- Input 3: probability criterion for the forbidden causal relationship α1
- Input 4: probability criterion for the forced causal relationship α2
- Output: prior knowledge matrix PK = (PKij)

1. for i = 1 to n; for j = 1 to n:
   - PKij = Unknown
   - if i = j: PKij = Forbidden
   - else if pij < α1: PKij = Forbidden
   - else if (pij ≥ α2) and (S ≠ Exact Search): PKij = Forced
2. if S = DirectLiNGAM: PK = A(PK) (acyclic transformation)
3. return PK

Furthermore, the differences in the constraints that can be adopted depending on the SCD algorithm must be considered. In the Exact Search algorithm, the constraint of a forced edge cannot be applied. In the case of DirectLiNGAM, because the prior knowledge is used for the decision of the causal order in the algorithm, the prior knowledge matrix must be an "acyclic" adjacency matrix when it is represented in the form of a network graph. Thus, when S = DirectLiNGAM and PK is cyclic, an additional transformation algorithm A is required. In addition, because there can be several acyclic transformation patterns, only one acyclic matrix should be selected according to some criteria. The transformation algorithm and the matrix selection criterion used in this study are explained in Appendix F.

## 3.2 Experiment Patterns Of SCP

Related to ⟨blank 5⟩ and ⟨blank 9⟩ in Table 1, we conducted experiments using several patterns of SCP. The notations of the prompting patterns in the experiments are presented with explanations below:

**Pattern 0: without SCP** This pattern corresponds to the reference for comparison with the other patterns, which include SCD results in their prompts. Because the prompt template shown in Table 1 is not adequate for this pattern, we prepared a different template for Pattern 0, which is shown in Appendix B.

⁷ Although these constraints can be systematically adapted with the augmentation supported in "causal-learn" (Zheng et al., 2023) and "LiNGAM" (Ikeuchi et al., 2023), the detailed meanings of forced and forbidden causal effects differ slightly among the SCD algorithms. The details are described in Appendix E.
⁸ These heuristic thresholds follow the widely accepted conventions of statistical significance levels. Although the specific choice of threshold might influence the formation of the prior knowledge matrix, the probability outputs from LLMs for clear domain knowledge instances are either very high (close to 100%) or very low (close to 0%), ensuring that the essential insights are retained regardless of the threshold.
In Appendix D, the technical details of the sensitivity around these thresholds are described through the evaluation of the standard error of pij, which is obtained from the M single-shot measurements of pij.

**Pattern 1: with the list of edges that appeared in the first SCD** Directed or undirected⁹ edges between xi and xj that emerged in the SCD are listed.

**Pattern 2: with the list of edges with their non-zero bootstrap probabilities** Directed or undirected edges between xi and xj that emerged in the bootstrap process are listed with their bootstrap probabilities.

**Pattern 3: with the list of edges that emerged in the first SCD, with the calculated causal coefficients (only for DirectLiNGAM)** Based on the property of DirectLiNGAM, which outputs the causal coefficients with the DAG discovered by the algorithm, this pattern attempts to elucidate whether more information, such as causal coefficients in addition to Pattern 1, can improve the performance of LLM-KBCI and the subsequent SCD with prior knowledge.

**Pattern 4: with the list of edges with their non-zero bootstrap probabilities and the calculated causal coefficients for the full dataset (only for DirectLiNGAM)** We also attempt this pattern, which contains the largest amount of information from the first SCD, as a mixture of Patterns 2 and 3.

## 3.3 Datasets For The Experiments

Although there are several widely open benchmark datasets with well-known ground truths, particularly for Bayesian-network-based causal structure learning (Scutari & Denis, 2014), several of them are fully or partially composed of categorical or discrete variables. Considering Patterns 3 and 4 in the experiments of this study, because the basic structural causal model assumes that all variables are continuous, it is more effective to adopt benchmark datasets fully composed of continuous variables. Consequently, we selected three benchmark datasets for the experiments, as follows:

1. Auto MPG data (Quinlan, 1993) (five continuous variables),
2. Deutscher Wetterdienst (DWD) climate data (Mooij et al., 2016) (six continuous variables),
3. Sachs protein data (Sachs et al., 2005) (eleven continuous variables).

Furthermore, to demonstrate that GPT-4 can aid SCD with its domain knowledge even if the dataset used in the SCD process, and analytics on that dataset, are not contained in the pre-training data of GPT-4, the proposed methods are also applied to our dataset of health screening results, which has not been disclosed and therefore has not been learned by GPT-4. To demonstrate that the proposed methods can be applied when the dataset contains bias, which may lead to highly inaccurate SCD results, the health screening dataset for this experiment was sub-sampled, and we deliberately chose a subset in which certain biases are still present. Basic information on these datasets, such as the first SCD results and the ground truths, is presented in Appendix C.

## 4 Results And Discussions

## 4.1 Results In Benchmark Datasets

For the interpretation of the experimental results, we evaluate PK (for measuring the performance of LLM-KBCI with the prompts including the first SCD results) and the adjacency matrix obtained in SCD with PK for each pattern (for measuring the performance of SCD augmented with LLM-KBCI), with the structural Hamming distance (SHD), false positive rate (FPR), false negative rate (FNR), precision, and F1 score, using the ground truth adjacency matrix GT as a reference.
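These metrics can all be computed directly from a binary estimated adjacency matrix and the ground-truth matrix. The sketch below is our own implementation of the standard definitions, not the authors' code; conventions differ in detail (e.g., this simple SHD counts a reversed edge twice, and the diagonal is excluded from true negatives), so the exact definitions in Appendix E.3 take precedence.

```python
# Minimal sketch of SHD, FPR, FNR, precision, and F1 from binary
# adjacency matrices (standard definitions; conventions may differ
# slightly from the paper's Appendix E.3).
import numpy as np

def edge_metrics(A: np.ndarray, GT: np.ndarray) -> dict:
    A, GT = A.astype(bool), GT.astype(bool)
    tp = int(np.sum(A & GT))              # correctly recovered directed edges
    fp = int(np.sum(A & ~GT))             # spurious edges
    fn = int(np.sum(~A & GT))             # missed edges
    tn = int(np.sum(~A & ~GT)) - len(A)   # exclude the diagonal
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "SHD": int(np.sum(A != GT)),      # reversed edges counted twice here
        "FPR": fp / (fp + tn) if fp + tn else 0.0,
        "FNR": fn / (fn + tp) if fn + tp else 0.0,
        "precision": precision,
        "F1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
    }
```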
All of these metrics are frequently used to evaluate how close the results of causal structure learning are to the ground truths, and they can be calculated solely from the adjacency matrix and GT, as shown in Appendix E.3. In addition, this paper presents the evaluation of the comparative fit index (CFI) (Bentler, 1990), the root mean square error of approximation (RMSEA) (Steiger & Lind, 1980), and the Bayes information criterion (BIC) (Schwarz, 1978) of the causal structure obtained in SCD with PK, under the assumption of linear-Gaussian data¹⁰, to evaluate the results with respect to the statistical validity of the calculated causal models.

⁹ In the PC algorithm, undirected edges xi − xj can be detected for causal relationships between xi and xj in which the direction cannot be determined. The prompt template reflecting this difference from directed edges is shown in Appendix B.
¹⁰ Although this assumption of linear-Gaussian data for the calculation of the CFI, RMSEA, and BIC does not match the assumption of a non-Gaussian error distribution in LiNGAM, we adopt these indices to evaluate and compare the results with respect to the same statistical method, irrespective of the difference in the SCD algorithms.

For evaluating the effect of PK augmentation by the LLM, the baseline is the result without PK (Baseline A), which serves as a reference for comparison with the SCD results augmented with PK. In contrast, for evaluating the effect of SCP, the results with the prompting in Pattern 0 become the baseline (Baseline B) for comparison with the results obtained in the other SCP patterns. The indices for all patterns on the DWD, Auto MPG, and Sachs datasets are summarized in Table 3.

**Enhancing the performance with prior knowledge augmentation by GPT-4** One of the characteristics in Table 3 is that in most of the cases, the result of SCD augmented with PK is more similar to GT than the first SCD result without prior knowledge (Baseline A). This behavior is interpreted as the knowledge-based improvement of the causal graph by GPT-4 as a domain expert, which is qualitatively consistent with other related works on LLM-guided causal inference (Ban et al., 2023; Vashishtha et al., 2023). Moreover, in many of the cases of the Auto MPG and DWD data, the precision or F1 score is higher after SCD augmented with PK than for the pure PK, i.e., the conclusion of LLM-KBCI, while they are almost comparable in the cases of the Sachs data. From this comparison, it is implied that even if LLM-KBCI is not optimal, the ground truths can be approached more closely by conducting SCD augmented with LLM-KBCI. In addition, the BIC decreases from the SCD result without PK (Baseline A) in almost all the patterns. The aforementioned properties suggest that knowledge-based augmentation by GPT-4 can indeed improve the performance of SCD, in terms of consistency with both domain expert knowledge and the statistical causal structure. However, the amount of improvement can differ depending on the number of variables and the SCD method.

**Dependence on the number of variables** In the case of the Auto MPG data with only five variables, the amount of improvement for each SCD method is almost constant among all the prompting patterns. One of the possible reasons is that with a relatively small number of variables, the amount of information in SCP becomes small, and the difference in the inference performance of GPT-4 among the prompting patterns becomes subtle.
Moreover, because the search space for discovery also becomes smaller as the network shrinks, the SCD algorithm may reach a single optimal solution even if PK differs. On the other hand, in the cases of the DWD data with six variables and the Sachs data with eleven variables, the difference in the amount of improvement becomes clearer depending on the prompting patterns. From this fact, it is implied that there is a threshold number of variables above which the quality of PK and of the SCD results depends on the amount of information included in SCP; for GPT-4, this threshold appears to be around five. Moreover, in many cases of the DWD and Sachs data, in particular for Exact Search or DirectLiNGAM, the precision and F1 score of PK in Pattern 0 (Baseline B) are usually smaller than in any of the other patterns, in which GPT-4 undergoes SCP. This supports the performance enhancement of LLM-KBCI by SCP.

**Prompting pattern dependence on SCD methods** On the other hand, considering the output of the SCD augmented with PK, Pattern 0 (Baseline B) stably indicates relatively high performance among all the patterns, in terms of both domain knowledge and statistical model fitting. Furthermore, the results of Patterns 0 and 1 are almost the same when the PC or the Exact Search algorithm is adopted, and in particular, the result of Pattern 0 with the PC algorithm on the Sachs data is superior to those of the patterns with SCP. Although it is difficult to explain the reason for this behavior, one possible reason, if we focus only on the results on the Sachs data, is that we adopted ground truths partially determined by Bayesian network inference (Sachs et al., 2005). Indeed, the first SCD result with PC is already relatively close to GT, as shown in Figure 8 (a), and we interpret that in this situation, the performance of SCD with PK cannot be improved further.

The scenario differs when DirectLiNGAM is adopted. The performance of SCD with PK in either Pattern 1 or 2 remains superior to that of Pattern 0 (Baseline B) from both the statistical and the domain expert points of view. This implies that SCP can effectively improve the performance of DirectLiNGAM. However, from the analysis of Patterns 3 and 4, in which GPT-4 is prompted with the causal coefficients of the first SCD results, it is also revealed that prompting with a greater amount of statistical information does not always lead to improved SCD results. In particular, while PK in Pattern 4 is closer to the ground truth matrix than that in Pattern 0 on the DWD and Sachs data¹¹, the final SCD result augmented with PK in Pattern 4 is inferior to that of Pattern 0.

¹¹ This fact reinforces the reliability of our interpretation that SCP enhances the performance of LLM-KBCI.

Table 3: Comparison of the SCD results in all the experiment patterns we have conducted. While SHD, FPR, FNR, precision, and F1 score are compared to evaluate how close the results are to the ground truths, CFI, RMSEA, and BIC are compared to evaluate the statistical validity of the calculated causal models. Lower values are superior for SHD, FPR, FNR, RMSEA, and BIC, and higher values for precision, F1 score, and CFI. The numbers in parentheses indicate the SHD, FPR, FNR, precision, and F1 score of PK against the ground truths, for the evaluation of the performance of LLM-KBCI. The values in blue font are the optimal results among Patterns 0–4 for a fixed dataset and SCD algorithm. Baseline A is used for the comparison with the SCD results augmented with PK generated by the LLM, to evaluate the effect of PK augmentation by the LLM.
Baseline B is used for the comparison with the results obtained in the other SCP patterns, to evaluate the effect of SCP. It is implied that on the DWD and Sachs datasets, the outputs of LLM-KBCI in Patterns 1–4 are likely to approach the ground truths more closely than those in Pattern 0. It is also suggested that several of the outputs of Exact Search and DirectLiNGAM in Patterns 1–4 (with SCP) can be superior causal models to those in Pattern 0.

| SCD algorithm | Pattern | SHD↓ | FPR↓ | FNR↓ | Precision↑ | F1 score↑ | CFI↑ | RMSEA↓ | BIC↓ |
|---|---|---|---|---|---|---|---|---|---|
| **1. Auto MPG data with 5 continuous variables** | | | | | | | | | |
| PC | w/o PK (Baseline A) | 8 | 0.40 | 0.80 | 0.11 | 0.14 | 1.00 | 0.00 | 71.65 |
| | Pattern 0 (Baseline B) | 3 (5) | 0.15 (0.25) | 0.20 (0.20) | 0.57 (0.44) | 0.67 (0.57) | 1.00 | 0.07 | 65.62 |
| | Pattern 1 | 4 (7) | 0.15 (0.30) | 0.20 (0.20) | 0.57 (0.40) | 0.67 (0.53) | 1.00 | 0.00 | 59.71 |
| | Pattern 2 | 3 (6) | 0.15 (0.30) | 0.20 (0.20) | 0.57 (0.40) | 0.67 (0.53) | 1.00 | 0.07 | 65.62 |
| Exact Search | w/o PK (Baseline A) | 5 | 0.25 | 0.40 | 0.38 | 0.46 | 1.00 | 0.07 | 71.61 |
| | Pattern 0 (Baseline B) | 4 (5) | 0.20 (0.25) | 0.20 (0.20) | 0.50 (0.44) | 0.62 (0.57) | 1.00 | 0.09 | 71.59 |
| | Pattern 1 | 5 (5) | 0.20 (0.20) | 0.20 (0.20) | 0.50 (0.50) | 0.62 (0.62) | 1.00 | 0.07 | 65.62 |
| | Pattern 2 | 4 (5) | 0.20 (0.25) | 0.20 (0.20) | 0.50 (0.44) | 0.62 (0.57) | 1.00 | 0.09 | 71.59 |
| DirectLiNGAM | w/o PK (Baseline A) | 8 | 0.40 | 0.80 | 0.11 | 0.14 | 1.00 | 0.05 | 77.61 |
| | Pattern 0 (Baseline B) | 3 (5) | 0.15 (0.25) | 0.20 (0.20) | 0.57 (0.44) | 0.67 (0.57) | 1.00 | 0.07 | 65.62 |
| | Pattern 1 | 3 (5) | 0.15 (0.30) | 0.20 (0.20) | 0.57 (0.40) | 0.67 (0.53) | 1.00 | 0.07 | 65.62 |
| | Pattern 2 | 3 (5) | 0.15 (0.25) | 0.20 (0.20) | 0.57 (0.44) | 0.67 (0.57) | 1.00 | 0.07 | 65.62 |
| | Pattern 3 | 4 (6) | 0.20 (0.35) | 0.20 (0.20) | 0.50 (0.36) | 0.62 (0.50) | 1.00 | 0.00 | 71.65 |
| | Pattern 4 | 3 (5) | 0.15 (0.30) | 0.20 (0.20) | 0.57 (0.40) | 0.67 (0.53) | 1.00 | 0.07 | 65.62 |
| **2. DWD climate data with 6 continuous variables** | | | | | | | | | |
| PC | w/o PK (Baseline A) | 9 | 0.20 | 0.83 | 0.14 | 0.15 | 0.90 | 0.22 | 69.32 |
| | Pattern 0 (Baseline B) | 5 (8) | 0.03 (0.20) | 0.67 (0.33) | 0.67 (0.40) | 0.44 (0.50) | 0.71 | 0.36 | 32.70 |
| | Pattern 1 | 5 (9) | 0.03 (0.23) | 0.67 (0.33) | 0.67 (0.36) | 0.44 (0.47) | 0.71 | 0.36 | 32.70 |
| | Pattern 2 | 5 (8) | 0.03 (0.20) | 0.67 (0.33) | 0.67 (0.40) | 0.44 (0.50) | 0.71 | 0.36 | 32.70 |
| Exact Search | w/o PK (Baseline A) | 6 | 0.20 | 0.17 | 0.45 | 0.59 | 0.91 | 0.28 | 92.87 |
| | Pattern 0 (Baseline B) | 5 (8) | 0.10 (0.20) | 0.33 (0.33) | 0.57 (0.40) | 0.62 (0.50) | 0.98 | 0.12 | 58.38 |
| | Pattern 1 | 5 (5) | 0.10 (0.13) | 0.33 (0.17) | 0.57 (0.56) | 0.62 (0.67) | 0.91 | 0.19 | 57.73 |
| | Pattern 2 | 6 (9) | 0.13 (0.23) | 0.33 (0.33) | 0.50 (0.36) | 0.57 (0.47) | 0.91 | 0.20 | 63.58 |
| DirectLiNGAM | w/o PK (Baseline A) | 10 | 0.33 | 0.67 | 0.17 | 0.22 | 1.00 | 0.00 | 99.53 |
| | Pattern 0 (Baseline B) | 4 (8) | 0.07 (0.20) | 0.33 (0.33) | 0.67 (0.40) | 0.67 (0.50) | 1.00 | 0.00 | 52.67 |
| | Pattern 1 | 8 (8) | 0.10 (0.10) | 0.83 (0.83) | 0.25 (0.25) | 0.20 (0.20) | 0.64 | 0.43 | 38.03 |
| | Pattern 2 | 4 (7) | 0.03 (0.17) | 0.50 (0.33) | 0.75 (0.44) | 0.60 (0.53) | 0.98 | 0.09 | 40.80 |
| | Pattern 3 | 5 (6) | 0.10 (0.13) | 0.33 (0.33) | 0.57 (0.50) | 0.62 (0.57) | 0.93 | 0.16 | 57.90 |
| | Pattern 4 | 5 (6) | 0.10 (0.17) | 0.33 (0.16) | 0.57 (0.50) | 0.62 (0.62) | 0.92 | 0.18 | 57.80 |
| **3. Sachs' protein data with 11 continuous variables** | | | | | | | | | |
| PC | w/o PK (Baseline A) | 24 | 0.16 | 0.47 | 0.38 | 0.44 | 0.99 | 0.05 | 294.15 |
| | Pattern 0 (Baseline B) | 15 (19) | 0.04 (0.11) | 0.58 (0.47) | 0.67 (0.48) | 0.52 (0.50) | 0.89 | 0.16 | 166.91 |
| | Pattern 1 | 25 (43) | 0.17 (0.53) | 0.68 (0.16) | 0.26 (0.23) | 0.29 (0.36) | 0.97 | 0.11 | 284.58 |
| | Pattern 2 | 23 (23) | 0.13 (0.24) | 0.74 (0.32) | 0.28 (0.35) | 0.27 (0.46) | 0.97 | 0.09 | 231.27 |
| Exact Search | w/o PK (Baseline A) | 31 | 0.26 | 0.68 | 0.18 | 0.23 | 0.99 | 0.07 | 374.35 |
| | Pattern 0 (Baseline B) | 17 (19) | 0.07 (0.11) | 0.58 (0.47) | 0.53 (0.48) | 0.47 (0.50) | 0.91 | 0.16 | 202.97 |
| | Pattern 1 | 17 (15) | 0.05 (0.06) | 0.68 (0.58) | 0.55 (0.57) | 0.40 (0.48) | 0.87 | 0.20 | 158.26 |
| | Pattern 2 | 16 (14) | 0.11 (0.14) | 0.53 (0.32) | 0.45 (0.48) | 0.46 (0.57) | 0.95 | 0.12 | 257.53 |
| DirectLiNGAM | w/o PK (Baseline A) | 29 | 0.25 | 0.47 | 0.28 | 0.36 | 1.00 | 0.01 | 410.23 |
| | Pattern 0 (Baseline B) | 17 (19) | 0.07 (0.11) | 0.53 (0.47) | 0.56 (0.48) | 0.51 (0.50) | 0.91 | 0.16 | 203.02 |
| | Pattern 1 | 15 (13) | 0.07 (0.08) | 0.47 (0.37) | 0.59 (0.60) | 0.56 (0.62) | 0.88 | 0.18 | 220.32 |
| | Pattern 2 | 22 (21) | 0.14 (0.18) | 0.63 (0.47) | 0.33 (0.36) | 0.35 (0.43) | 0.70 | 0.31 | 269.76 |
| | Pattern 3 | 22 (23) | 0.15 (0.18) | 0.53 (0.53) | 0.38 (0.33) | 0.42 (0.39) | 0.92 | 0.16 | 301.51 |
| | Pattern 4 | 20 (16) | 0.12 (0.16) | 0.53 (0.26) | 0.43 (0.47) | 0.45 (0.57) | 0.89 | 0.19 | 264.97 |

One of the possible reasons for this may be the pruning of the candidate edges suggested by PK in the SCD process. To elucidate this behavior, further research is required on what type and amount of information in SCP can truly maximize the performance of SCD.

![10_image_0.png](10_image_0.png)

Figure 2: Results of DirectLiNGAM on the health screening data. (a) Result without prior knowledge. (b) Result with prior knowledge, which is generated from GPT-4 with SCP in Patterns 2 and 4.
In this randomly selected subset, the DirectLiNGAM result without prior knowledge exhibits the unnatural paths drawn in red in (a), which indicate that "Age" is influenced by "HbA1c." However, using the proposed method, this unnatural behavior is clearly mitigated with the guidance of prior knowledge generated from GPT-4 with SCP, including the values of the causal coefficients in (a) or the bootstrap probabilities of the emergence of directed edges.

## 4.2 Results In Randomly Selected Sub-Sample Of Health Screening Data Excluded From GPT-4 Pre-Training Dataset

From the experimental results on open benchmark datasets alone, it is difficult to assert that the improvement is solely due to the LLM's high performance in KBCI, because it cannot be determined whether the improvement stems from the LLM's recall of the data obtained during the pre-training process on these datasets. Furthermore, assuming realistic situations, it is also important to confirm the robust effectiveness of this method even if the available data for statistical causal inference are limited to observational data, which may be statistically biased, and trivial causal relationships are not apparent in SCD without prior knowledge. Therefore, we also apply the proposed methods to a sub-dataset of health screening results, which has been randomly sampled from the entire dataset¹² and for which the natural ground truths do not appear in the SCD results without PK.

In Figure 2(a), the result of DirectLiNGAM without PK is shown, and unnatural directed edges to "Age" from other variables are suggested, although parts of the expert-knowledge ground truths are the reverses of these edges, such as "Age"→"HbA1c". However, when the causal discovery is assisted with PK generated from GPT-4 with SCP in Patterns 2 and 4, the causal graph becomes more natural: "Age" is not influenced by other variables, and the ground truth "Age"→"HbA1c", which cannot be detected without PK in this randomly selected subset, appears in the causal graph, as shown in Figure 2(b). Because this sub-dataset and the analysis results are not disclosed and have been completely excluded from the pre-training data of GPT-4, GPT-4 cannot respond to prompts asking for the causal relationships merely by reproducing knowledge acquired from the same data. Based on the above, it is verified that the assistance of GPT-4 with SCP can bring the result of SCD closer to the ground truths, even when the dataset has not been learned by GPT-4 and contains bias. Moreover, the effectiveness of the proposed method, demonstrated on both open benchmark datasets and an unpublished health screening dataset, also supports the validity and applicability of existing works (Ban et al., 2023; Vashishtha et al., 2023; Khatibi et al., 2024), in which knowledge-based improvements of causal graphs with LLMs on open benchmark datasets have been demonstrated with their own original methods. By showing versatility across both open and unpublished data, our research further reinforces the validity of these existing works.

For further clarity regarding the behavior of SCP, Table 4 presents the mean probabilities of the positive responses from GPT-4 on the ground-truth causal relationships in the health screening data, together with the statistics of the fitting results for the structures suggested by DirectLiNGAM with SCP.
¹² The details of the sampling method for this experiment are presented in Appendix C.

Table 4: Quantitative evaluation of the characteristic results of SCD with the proposed methods on the subset of health screening data with certain biases. A: The elements of the probability matrix p generated in each prompting pattern for all the appropriate ground-truth causal relationships among these variables. The values enclosed in parentheses are the bootstrap probabilities of the directed edges in the DirectLiNGAM algorithm without PK. It is suggested that the mean probabilities of the positive responses from GPT-4 on the causal relationships are influenced by the results of SCD and the bootstrap probabilities when they are included in SCP. Moreover, all the probabilities of the reversed, unnatural causal relationships in which "Age" is influenced by other factors are extremely close to zero. B: CFI, RMSEA, and BIC evaluated for the model fitting with the structure discovered by DirectLiNGAM, under the assumption of linear-Gaussian data. The values enclosed in parentheses are the statistics of the structure calculated without PK. It can be inferred that SCP with bootstrap probabilities in Patterns 2 and 4 enhanced GPT-4's confidence in "Age"→"DBP" and improved the BIC when compared with Pattern 0.

| | (without PK) | Pattern 0 | Pattern 1 | Pattern 2 | Pattern 3 | Pattern 4 |
|---|---|---|---|---|---|---|
| **A. Probability of reproducing the ground truth from GPT-4** | | | | | | |
| "Age"→"BMI" | (0.166) | 0.901 | 0.076 | 0.093 | 0.306 | 0.037 |
| "Age"→"SBP" | (0.550) | 0.626 | 0.302 | 0.207 | 0.235 | 0.795 |
| "Age"→"DBP" | (0.308) | 0.001 | 0.019 | 0.115 | 0.095 | 0.926 |
| "Age"→"HbA1c" | (0.327) | 0.986 | 0.170 | 0.723 | 0.046 | 0.176 |
| **B. Statistics of linear-Gaussian fitting with the results of DirectLiNGAM** | | | | | | |
| CFI↑ | (1.002) | 0.999 | 0.992 | 0.995 | 0.986 | 0.995 |
| RMSEA↓ | (0.000) | 0.018 | 0.054 | 0.032 | 0.057 | 0.032 |
| BIC↓ | (124.332) | 103.581 | 89.738 | 89.740 | 103.506 | 89.740 |

In Pattern 0 without SCP, the existence of "Age"→"BMI" and "Age"→"HbA1c" is supported with probabilities over 0.90, and these probabilities decrease in the other patterns with SCP. On the other hand, the opposite behavior is also observed for "Age"→"DBP," which is strongly denied by the GPT-4 domain knowledge without SCP in Pattern 0. It is reasonable to interpret that these probability changes with SCP in GPT-4 are induced by the results of SCD and the bootstrap probabilities. For example, focusing on the relationship "Age"→"BMI," the lack of the direct edge in the SCD result with PK shown in Figure 2(b) and the relatively low bootstrap probability of 0.166 in Table 4 may be the causes of the lower probabilities in the SCP patterns than in Pattern 0. By contrast, although the hypothesis of "Age"→"DBP" is not supported by the GPT-4 domain knowledge in Pattern 0, the appearance of the edge in Figure 2(a) and the bootstrap probability of 0.308 are considered to be the causes of the increase in probability through SCP. Considering the aforementioned properties, the probability judgements rendered by GPT-4 regarding causal relationships can be influenced by the SCD results, particularly when a dataset with significant biases is used in the causal discovery. Finally, it is also confirmed that the BIC becomes smaller with the assistance of GPT-4 than without PK.
In particular, the values in Patterns 1, 2, and 4 are lower than in Pattern 0. This suggests that SCP can contribute to the discovery of causal structures with more adequate statistical models.

## 5 Conclusion

In this study, a novel methodology of causal inference, in which SCD and LLM-KBCI are synthesized with SCP and prior knowledge augmentation, was developed and demonstrated. It has been revealed that GPT-4 can cause the output of LLM-KBCI and the SCD result with prior knowledge from LLM-KBCI to approach the ground truth, and that the SCD result can be further improved if GPT-4 undergoes SCP. Furthermore, with an unpublished real-world dataset, we have demonstrated that GPT-4 with SCP can assist SCD with respect to domain expertise and statistics, and that the proposed method is effective even if the dataset has never been included in the training data of the LLM and the sample size is not sufficiently large to obtain reasonable SCD results.

## Limitations Of This Work

We have fixed the LLM in our experiments to GPT-4 for its extensive general knowledge and capabilities in common-sense reasoning, and our experiments on several real-world datasets, which lie largely within the realm of common-sense reasoning, have illustrated the potential utility of the proposed method across various scientific domains. We expect that the proposed method will work even if other LLMs are adopted, since the capability of LLM-KBCI has already been demonstrated in several models (Zečević et al., 2023). Considering the continuous emergence of new LLMs and the rapid updating of existing LLMs, it is expected that the universal effectiveness of the proposed method across different LLMs will be verified.

Moreover, it is also important to recognize the risks of using current LLMs in our methods: hallucinations at Steps 2 and 3 in Figure 1 can worsen the results of LLM-KBCI and SCD. Considering that hallucinations arise from factors such as memorization of training data (McKenna et al., 2023; Carlini et al., 2023), statistical biases (Jones & Steinhardt, 2022; McMilin, 2022), and predictive uncertainty (Yang et al., 2023; Lin et al., 2024), it is essential to be especially careful when our method is applied in highly specialized academic domains (George & Stuhlmueller, 2023; Pal et al., 2023). To reduce these risks when the proposed method is applied in domains with deeper specificity in the future, it is necessary to further improve our method to avoid hallucinations more effectively. Although we have already introduced ZSCoT (Kojima et al., 2022) and generated knowledge prompting (Liu et al., 2022), employing LLMs that are specifically pre-trained with materials on the target domain is one possible solution, or incorporating techniques such as fine-tuning and RAG could be necessary. Therefore, systematic research in the context of optimal LLM-KBCI is also required, on the selection of the optimal LLM for each domain and on the techniques for utilizing the LLMs.

Furthermore, regarding the generalization of the proposed method, we must also remark that although SCP is indeed likely to enhance the performance of SCD more than prompting without SCP, whether this improvement is observed can depend on the datasets used for SCD. Moreover, there remains a discussion on the potential for overfitting in SCP, especially with small or biased datasets.
Although we have shown the effectiveness of SCP on a sub-sample of the health screening dataset, which contains bias, it remains an open question which conditions of the LLMs or prompting techniques can guarantee robustness for small or biased datasets. For more reliable application of the proposed method, further basic research on the effect of causal coefficients or bootstrap probabilities on the results of LLM-KBCI is required.

## Broader Impact Statement

This paper proposed a novel approach that integrates SCD methods with LLMs. This research has the potential to contribute to more accurate causal inference in fields where understanding causality is crucial, such as healthcare, economics, and environmental science. However, the use of LLMs such as GPT-4 necessitates extensive consideration of data privacy and biases. Moreover, it should also be noted that incorrect causal inferences, caused by hallucinations, biased training data, or inappropriate use of LLMs, could lead to significant consequences for society. In particular, these risks arise when the proposed method is used for completely "black-box" decision making. To prevent these risks, the transparency and interpretability of the whole system should be ensured. This study highlights the responsible use of artificial intelligence, considering ethical implications and societal impacts. With appropriate guidelines and ethical standards, the proposed methodology can advance scientific understanding and provide widespread benefits to society.

## References

Ahmed Abdulaal, adamos hadjivasiliou, Nina Montana-Brown, Tiantian He, Ayodeji Ijishakin, Ivana Drobnjak, Daniel C. Castro, and Daniel C. Alexander. Causal modelling agents: Causal graph discovery through synergising metadata- and data-driven reasoning. In *The Twelfth International Conference on Learning Representations*, 2024. URL https://openreview.net/forum?id=pAoqRlTBtY.

Umar I. Abdullahi, Spyros Samothrakis, and M. Fasli. Causal inference with correlation alignment. In *Proc. 2020 IEEE International Conference on Big Data (Big Data)*, pp. 4971–4980, 2020. doi: 10.1109/BigData50022.2020.9378334.

Dawn E. Alley, Luigi Ferrucci, Mario Barbagallo, Stephanie A. Studenski, and Tamara B. Harris. A Research Agenda: The Changing Relationship Between Body Weight and Health in Aging. *The Journals of Gerontology: Series A*, 63(11):1257–1259, 11 2008. ISSN 1079-5006. doi: 10.1093/gerona/63.11.1257. URL https://doi.org/10.1093/gerona/63.11.1257.

Maksym Andriushchenko, Francesco Croce, and Nicolas Flammarion. Jailbreaking leading safety-aligned llms with simple adaptive attacks. *arXiv preprint*, 2024. doi: 10.48550/arXiv.2404.02151.

Taiyu Ban, Lyuzhou Chen, Derui Lyu, Xiangyu Wang, and Huanhuan Chen. Causal structure learning supervised by large language model. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2311.11689.

Peter M. Bentler. Comparative fit indexes in structural models. *Psychological Bulletin*, 107(2):238–246, 1990. doi: 10.1037/0033-2909.107.2.238.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H.
Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings. neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=TatRHT_1cK. Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. Selçuk Candan, and Huan Liu. Evaluation methods and measures for causal learning algorithms. *IEEE Transactions on Artificial Intelligence*, 3(6): 924–943, 2022. doi: 10.1109/TAI.2022.3150264. David Maxwell Chickering. Optimal structure identification with greedy search. *The Journal of Machine* Learning Research, 3:507–554, 2002. URL https://dl.acm.org/doi/10.1162/153244303321897717. Jawad Chowdhury, Rezaur Rashid, and Gabriel Terejanu. Evaluation of induced expert knowledge in causal structure learning by NOTEARS. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2301.01817. Philippa Clarke, Patrick M O'Malley, Lloyd D Johnston, and John E Schulenberg. Social disparities in BMI trajectories across adulthood by gender, race/ethnicity and lifetime socio-economic position: 1986–2004. International Journal of Epidemiology, 38(2):499–509, 10 2008. ISSN 0300-5771. doi: 10.1093/ije/dyn214. URL https://doi.org/10.1093/ije/dyn214. Kai-Hendrik Cohrs, Emiliano Diaz, Vasileios Sitokonstantinou, Gherardo Varando, and Gustau Camps-Valls. Large language models for constrained-based causal discovery. In *AAAI 2024 Workshop on "Are Large* Language Models Simply Causal Parrots?", 2023. URL https://openreview.net/forum?id=NEAoZRWHPN. N. Dubowitz, W. Xue, Q. Long, J. G. Ownby, D. E. Olson, D. Barb, M. K. Rhee, A. V. Mohan, P. I. WatsonWilliams, S. L. Jackson, A. M. Tomolo, T. M. Johnson II, and L. S. Phillips. Aging is associated with increased hba1c levels, independently of glucose levels and insulin resistance, and also with decreased hba1c diagnostic specificity. *Diabetic Medicine*, 31(8):927–935, 2014. doi: https://doi.org/10.1111/dme.12459. Gemini Team, Google. Gemini: A family of highly capable multimodal models. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2312.11805. Charlie George and Andreas Stuhlmueller. Factored verification: Detecting and reducing hallucination in summaries of academic papers. In Tirthankar Ghosal, Felix Grezes, Thomas Allen, Kelly Lockhart, Alberto Accomazzi, and Sergi Blanco-Cuaresma (eds.), Proceedings of the Second Workshop on Information Extraction from Scientific Publications, pp. 107–116, Bali, Indonesia, November 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.wiesp-1.13. URL https://aclanthology.org/2023. wiesp-1.13. Penny Gordon-Larsen, Natalie S. The, and Linda S. Adair. Longitudinal trends in obesity in the united states from adolescence to the third decade of life. *Obesity*, 18(9):1801–1804, 2010. doi: https://doi.org/10. 1038/oby.2009.451. Michael Gurven, Aaron D. Blackwell, Daniel Eid Rodríguez, Jonathan Stieglitz, and Hillard Kaplan. Does blood pressure inevitably rise with age? *Hypertension*, 60(1):25–33, 2012. doi: 10.1161/ HYPERTENSIONAHA.111.189100. Uzma Hasan, Emam Hossain, and Md Osman Gani. A survey on causal discovery methods for i.i.d. and time series data. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. 
David W. Hosmer and Stanley Lemeshow. *Applied logistic regression (Wiley Series in probability and* statistics). Wiley-Interscience Publication, 2 edition, 2000. Patrik Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In *Proc. NIPS 2008*, volume 21 of *Advances in Neural Information* Processing Systems, 2008. URL https://proceedings.neurips.cc/paper_files/paper/2008/file/ f7664060cc52bc6f3d620bcedc94a4b6-Paper.pdf. Biwei Huang, Kun Zhang, Yizhu Lin, Bernhard Schölkopf, and Clark Glymour. Generalized score functions for causal discovery. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18), pp. 1551–1560, 2018. doi: 10.1145/3219819.3220104. Takashi Ikeuchi, Mayumi Ide, Yan Zeng, Takashi Nicholas Maeda, and Shohei Shimizu. Python package for causal discovery based on LiNGAM. *Journal of Machine Learning Research*, 24(14):1–8, 2023. URL http://jmlr.org/papers/v24/21-0321.html. Takanori Inazumi, Shohei Shimizu, and Takashi Washio. Use of prior knowledge in a non-Gaussian method for learning linear structural equation models. In *Proc. International Conference on Latent Variable Analysis* and Signal Separation (LVA/ICA 2010), volume 6365 of *LNCTS (Lecture Notes in Computer Science)*, pp. 221–228, 2010. doi: 10.1007/978-3-642-15995-4_28. Haitao Jiang, Lin Ge, Yuhe Gao, Jianian Wang, and Rui Song. Large language model for causal decision making. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2312.17122. Zhijing Jin, Jiarui Liu, Zhiheng LYU, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona T. Diab, and Bernhard Schölkopf. Can large language models infer causation from correlation? In *The Twelfth* International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id= vqIH0ObdqL. Thomas Jiralerspong, Xiaoyin Chen, Yash More, Vedant Shah, and Yoshua Bengio. Efficient causal graph discovery using large language models. *arXiv preprint*, 2024. doi: 10.48550/arXiv.2402.01207. Erik Jones and Jacob Steinhardt. Capturing failures of large language models via human cognitive biases. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 11785–11799. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ 4d13b2d99519c5415661dad44ab7edcd-Paper-Conference.pdf. Christoph Käding and Jakob Runge. Distinguishing cause and effect in bivariate structural causal models: A systematic investigation. *Journal of Machine Learning Research*, 24(278):1–144, 2023. URL http: //jmlr.org/papers/v24/22-0151.html. Elahe Khatibi, Mahyar Abbasian, Zhongqi Yang, Iman Azimi, and Amir M. Rahmani. Alcm: Autonomous llm-augmented causal discovery framework. *arXiv preprint*, 2024. doi: 10.48550/arXiv.2404.01744. Vivek Khetan, Roshni Ramnani, Mayuresh Anand, Subhashis Sengupta, and Andrew E. Fano. Causal bert: Language models for causality detection between events expressed in text. In Kohei Arai (ed.), Intelligent Computing, pp. 965–980, Cham, 2022. Springer International Publishing. ISBN 978-3-030-80119-9. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In *Proc. NeurIPS 2022*, volume 35 of Advances in Neural Information Processing Systems, pp. 22199–22213, 2022. Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 
Causal reasoning and large language models: Opening a new frontier for causality. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2305.00050. Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. Generating with confidence: Uncertainty quantification for black-box large language models. *Transactions on Machine Learning Research*, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=DWkJCSxKU5. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. Generated knowledge prompting for commonsense reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), volume 1 (Long Papers), pp. 3154–3169, May 2022. doi: 10.18653/v1/2022.acl-long.225. Stephanie Long, Alexandre Piché, Valentina Zantedeschi, Tibor Schuster, and Alexandre Drouin. Causal discovery with language models as imperfect experts. In *ICML 2023 Workshop on Structured Probabilistic* Inference & Generative Modeling, 2023. URL https://openreview.net/forum?id=RXlvYZAE49. Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. Sources of hallucination by large language models on inference tasks. In *The 2023 Conference on Empirical* Methods in Natural Language Processing, 2023. URL https://openreview.net/forum?id=rJhk7Fpnvh. Emily McMilin. Selection bias induced spurious correlations in large language models. In *ICML 2022:* Workshop on Spurious Correlations, Invariance and Stability, 2022. URL https://openreview.net/ forum?id=8nnaDv9dUli. Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. Distinguishing cause from effect using observational data: Methods and benchmarks. Journal of Machine Learning Research, 17(32):1–102, 2016. URL http://jmlr.org/papers/v17/14-518.html. OpenAI. GPT-4 technical report. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2303.08774. Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Med-HALT: Medical domain hallucination test for large language models. In Jing Jiang, David Reitter, and Shumin Deng (eds.), *Proceedings* of the 27th Conference on Computational Natural Language Learning (CoNLL), pp. 314–334, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.conll-1.21. URL https://aclanthology.org/2023.conll-1.21. Lydie N. Pani, Leslie Korenda, James B. Meigs, Cynthia Driver, Shadi Chamany, Caroline S. Fox, Lisa Sullivan, Ralph B. D'Agostino, and David M. Nathan. Effect of Aging on A1C Levels in Individuals Without Diabetes: Evidence from the Framingham Offspring Study and the National Health and Nutrition Examination Survey 2001–2004. *Diabetes Care*, 31(10):1991–1996, 10 2008. ISSN 0149-5992. doi: 10.2337/dc08-0577. URL https://doi.org/10.2337/dc08-0577. R. Quinlan. Auto MPG. UCI Machine Learning Repository, 1993. doi: 10.24432/C5859H. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. P. RaviKumar, A. Bhansali, R. Walia, G. Shanmugasundar, and M. Ravikiran. Alterations in hba1c with advancing age in subjects with normal glucose tolerance: Chandigarh urban diabetes study (cuds). *Diabetic* Medicine, 28(5):590–594, 2011. doi: https://doi.org/10.1111/j.1464-5491.2011.03242.x. Alexander G. Reisach, Christof Seiler, and Sebastian Weichwald. Beware of the simulated DAG! causal discovery benchmarks may be easy to game. In *Proc. 
NeurIPS 2021*, volume 34 of *Advances in Neural Information Processing Systems*, 2021.

Daniela Rodrigues, Noemi Kreif, Anna Lawrence-Jones, Mauricio Barahona, and Erik Mayer. Reflection on modern methods: constructing directed acyclic graphs (DAGs) with domain experts for health services research. *International Journal of Epidemiology*, 51(4):1339–1348, 06 2022. ISSN 0300-5771. doi: 10.1093/ije/dyac135.

J. Rohrer. Thinking clearly about correlations and causation: Graphical causal models for observational data. *Advances in Methods and Practices in Psychological Science*, 1:27–42, 2017. doi: 10.1177/2515245917745629.

Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Dominik Janzing, Bernhard Schölkopf, and Francesco Locatello. Score matching enables causal discovery of nonlinear additive noise models. In *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *PMLR (Proceedings of Machine Learning Research)*, pp. 18741–18753. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/rolland22a.html.

Karen Sachs, Omar Perez, Dana Pe'er, Douglas A. Lauffenburger, and Garry P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. *Science*, 308(5721):523–529, 2005. doi: 10.1126/science.1105809.

Gideon Schwarz. Estimating the dimension of a model. *The Annals of Statistics*, 6(2):461–464, 1978. doi: 10.1214/aos/1176344136.

Marco Scutari and Jean-Baptiste Denis. *Bayesian Networks with Examples in R*. Chapman and Hall, Boca Raton, 2014.

Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. *Journal of Machine Learning Research*, 7(72):2003–2030, 2006. URL https://dl.acm.org/doi/10.5555/1248547.1248619.

Shohei Shimizu, Takanori Inazumi, Yasuhiro Sogawa, Aapo Hyvärinen, Yoshinobu Kawahara, Takashi Washio, Patrik O. Hoyer, and Kenneth Bollen. DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model. *Journal of Machine Learning Research*, 12(33):1225–1248, 2011. URL https://dl.acm.org/doi/10.5555/1953048.2021040.

Tomi Silander and Petri Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In *Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI '06)*, pp. 445–452, 2006.

P. Spirtes, C. Glymour, and R. Scheines. *Causation, Prediction, and Search*. MIT press, 2nd edition, 2000.

Peter Spirtes, Clark Glymour, Richard Scheines, and Robert Tillman. Automated search for causal relations: Theory and practice. In Rina Dechter, Hector Geffner, and Joseph Y. Halpern (eds.), *Heuristics, Probability and Causality: A Tribute to Judea Pearl*, chapter 28, pp. 467–506. College Publications, 2010.

J. H. Steiger and J. C. Lind. Statistically based tests for the number of common factors. In *The Annual Meeting of the Psychometric Society*, 1980.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2307.09288.

Ruibo Tu, Kun Zhang, Hedvig Kjellstrom, and Cheng Zhang. Optimal transport for causal discovery. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=qwBK94cP1y.

Lewis Tunstall, Leandro von Werra, and Thomas Wolf. *Natural Language Processing with Transformers: Building Language Applications with Hugging Face*. O'Reilly Media, Incorporated, 2022. ISBN 1098103246. URL https://books.google.ch/books?id=7hhyzgEACAAJ.

Aniket Vashishtha, Abbavaram Gowtham Reddy, Abhinav Kumar, Saketh Bachu, Vineeth N. Balasubramanian, and Amit Sharma. Causal inference using LLM-guided discovery. In *AAAI 2024 Workshop on "Are Large Language Models Simply Causal Parrots?"*, 2023. URL https://openreview.net/forum?id=4B296hrTIS.

Feng Xie, Ruichu Cai, Biwei Huang, Clark Glymour, Zeng Hao, and Kun Zhang. Generalized independent noise condition for estimating latent variable causal graphs. In *Proc. NeurIPS 2020*, volume 33 of *Advances in Neural Information Processing Systems*, 2020.

Qi Yang, Shreya Ravikumar, Fynn Schmitt-Ulms, Satvik Lolla, Ege Demir, Iaroslav Elistratov, Alex Lavaee, Sadhana Lolla, Elaheh Ahmadi, Daniela Rus, Alexander Amini, and Alejandro Perez. Uncertainty-aware language modeling for selective question answering. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2311.15451.

Yang Claire Yang, Christine E. Walsh, Moira P. Johnson, Daniel W. Belsky, Max Reason, Patrick Curran, Allison E. Aiello, Marianne Chanti-Ketterl, and Kathleen Mullan Harris. Life-course trajectories of body mass index from adolescence to old age: Racial and educational disparities. *Proceedings of the National Academy of Sciences*, 118(17):e2020167118, 2021. doi: 10.1073/pnas.2020167118. URL https://www.pnas.org/doi/abs/10.1073/pnas.2020167118.

Changhe Yuan and Brandon Malone. Learning optimal Bayesian networks: A shortest path perspective. *Journal of Artificial Intelligence Research*, 48:23–65, 10 2013. doi: 10.1613/jair.4039.

Matej Zečević, Moritz Willig, Devendra Singh Dhami, and Kristian Kersting. Causal parrots: Large language models may talk causality but are not causal. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=tv46tCzs83.

Yuzhe Zhang, Yipeng Zhang, Yidong Gan, Lina Yao, and Chen Wang. Causal graph discovery with retrieval-augmented generation based large language models. *arXiv preprint*, 2024. doi: 10.48550/arXiv.2402.15301.
Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. DAGs with NO TEARS: Continuous optimization for structure learning. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/e347c51419ffb23ca3fd5050202f9c3d-Paper.pdf.

Yujia Zheng, Biwei Huang, Wei Chen, Joseph Ramsey, Mingming Gong, Ruichu Cai, Shohei Shimizu, Peter Spirtes, and Kun Zhang. Causal-learn: Causal discovery in Python. *arXiv preprint*, 2023. doi: 10.48550/arXiv.2307.16405.

## A Ethics Review

**Ethical Considerations in Methodology and AI Use** This paper proposes a novel approach that integrates SCD with LLMs. We have thoroughly considered the issues of data privacy and biases associated with the use of LLMs. The proposed methodology enhances the accuracy and efficiency of causal discovery; however, it does not introduce explicit ethical implications beyond those generally applicable to machine learning. We are committed to the responsible use of AI and welcome the scrutiny of the ethics review committee.

**Institutional Review and Consent Compliance of Health Screening Data** The institutional review board approved this study, and in accordance with the double-blind review process of TMLR, the name of the institution has been withheld. The identity of the institution will be disclosed upon successful completion of the peer review. As we only analyzed anonymized data from the database, the need for informed consent was waived.

## B Detail Of Contents In Each Prompting Pattern

In this section, the details of the prompting in each pattern are presented. For Pattern 0, another prompting template is detailed instead of the sentences shown in Table 1. Moreover, for Patterns 1, 2, 3, and 4, the contents filled in ⟨blank 5⟩ and ⟨blank 9⟩ of the prompt template for SCP shown in Table 1 are described.

**For Pattern 0** Compared with the other patterns of SCP, Pattern 0 does not include any results of SCD without prior knowledge. As a result, the prompt template in Table 1 is not suitable for Pattern 0, as it includes the blanks filled with the description of the dataset and the SCD result. Thus, we prepare another prompt template for Pattern 0, which is completely independent of the SCD result and requires GPT-4 to generate the response solely from its domain knowledge. Table 5 presents the prompt template in Pattern 0, which is composed mainly based on the ZSCoT concept. Because it does not include information on the SCD method and relies solely on the domain knowledge in GPT-4, the probability matrix obtained from GPT-4 in Pattern 0 is applied independently of the SCD method.

Table 5: The prompt template of $Q^{(1)}_{ij}(x_i, x_j)$ for Pattern 0 for the generation of the expert knowledge of the causal effect from $x_j$ on $x_i$. The "blanks" enclosed with ⟨ ⟩ are filled with description words of the theme of the causal inference and variable names.

Prompt Template of $q^{(1)}_{ij} = Q^{(1)}_{ij}(x_i, x_j)$ **in Pattern 0 (for all SCD methods)**

We want to carry out causal inference on ⟨blank 1. The theme⟩, considering ⟨blank 2. The description of all variables⟩ as variables. If ⟨blank 7. The name of $x_j$⟩ is modified, will it have a direct impact on ⟨blank 8. The name of $x_i$⟩? Please provide an explanation that leverages your expert knowledge on the causal relationship between ⟨blank 7. The name of $x_j$⟩ and ⟨blank 8. The name of $x_i$⟩.
Your response should consider the relevant factors and provide a reasoned explanation based on your understanding of the domain.

**For Patterns 1–4 (in the case of Exact Search and DirectLiNGAM)** Following the concept of each SCP pattern, the contents filled in ⟨blank 5⟩ shown in Table 1 are summarized in Table 6. In this table, the names of the cause and effect variables are represented as ⟨ cause i ⟩ and ⟨ effect i ⟩ respectively, and the bootstrap probability of this causal relationship in SCD, $P_i$, and the causal coefficient of LiNGAM, $b_i$, can be included in Patterns 2–4. In Patterns 2 and 4, only the causal relationships with $P_i \neq 0$ are listed in ⟨blank 5⟩. In Pattern 3, the causal relationships with $b_i \neq 0$ are listed in ⟨blank 5⟩.

The contents filled in ⟨blank 6⟩ and ⟨blank 9⟩ also depend on the SCP patterns, and are shown in Table 7. Here, we define the bootstrap probability of $x_j \to x_i$ as $P_{ij}$. We also define the causal coefficient of $x_j \to x_i$ in LiNGAM as $b_{ij}$, because the structural equation of LiNGAM is usually defined as follows:

$$x_{i}=\sum_{j\neq i}b_{ij}x_{j}+e_{i}\tag{1}$$

13 Here, the error distribution function $e_i$ is also assumed to be non-Gaussian.

**Prompt Template in case of PC algorithm** Although the causal relationships are ultimately represented only as directed edges in Exact Search and DirectLiNGAM, the situation changes slightly when we adopt the PC algorithm along with the codes in "causal-learn." This is because the PC algorithm can also output undirected edges when the causal direction between a pair of variables cannot be determined. Therefore, we have tentatively decided to include not only directed edges but also undirected edges in SCP. Additionally, we have prepared another prompting template for SCP in the case of PC, as shown in Table 8. This template is slightly modified from the one in Table 1.

| SCP Pattern | Content in ⟨blank 5⟩ |
|---|---|
| Pattern 1: Directed edges | All of the edges suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩; ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩; . . . |
| Pattern 2: Bootstrap probabilities of directed edges | All of the edges with non-zero bootstrap probabilities suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩ (bootstrap probability = $P_1$); ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩ (bootstrap probability = $P_2$); . . . |
| Pattern 3 (DirectLiNGAM only): Non-zero causal coefficients of directed edges | All of the edges and their coefficients of the structural causal model suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩ (coefficient = $b_1$); ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩ (coefficient = $b_2$); . . . |
| Pattern 4 (DirectLiNGAM only): Non-zero causal coefficients and bootstrap probabilities of directed edges | All of the edges with non-zero bootstrap probabilities and their coefficients of the structural causal model suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩ (coefficient = $b_1$, bootstrap probability = $P_1$); ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩ (coefficient = $b_2$, bootstrap probability = $P_2$); . . . |

Table 6: Contents filled in ⟨blank 5⟩ shown in Table 1, when Exact Search or DirectLiNGAM is adopted for the SCD process.
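For concreteness, the following is a minimal Python sketch of how the Pattern-2 text for ⟨blank 5⟩ could be assembled from a bootstrapped SCD result; the function name and the example edge list are hypothetical illustrations, not part of our released code.

```python
# Minimal sketch: assembling the Pattern-2 content of <blank 5> from a
# bootstrapped SCD result. The helper name and the `edges` structure are
# hypothetical illustrations.
def format_blank5_pattern2(edges):
    """edges: list of (cause, effect, bootstrap_probability) with P_i != 0."""
    lines = ["All of the edges with non-zero bootstrap probabilities "
             "suggested by the statistical causal discovery are below:"]
    for cause, effect, prob in edges:
        lines.append(f"{cause} -> {effect} (bootstrap probability = {prob:.2f})")
    return "\n".join(lines)


if __name__ == "__main__":
    example_edges = [("Weight", "Displacement", 0.98),
                     ("Horsepower", "Mpg", 0.61)]
    print(format_blank5_pattern2(example_edges))
```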
| SCP Pattern | Case Classification | ⟨blank 6⟩ | Content in ⟨blank 9⟩ |
|---|---|---|---|
| Pattern 1: Directed edges | $x_j \to x_i$ emerged | a | - (no values are filled in) |
| | $x_j \to x_i$ not emerged | no | - (no values are filled in) |
| Pattern 2: Bootstrap probabilities of directed edges | $P_{ij} \neq 0$ | a | with a bootstrap probability of $P_{ij}$ |
| | $P_{ij} = 0$ | no | - (no values are filled in) |
| Pattern 3 (DirectLiNGAM only): Non-zero causal coefficients of directed edges | $b_{ij} \neq 0$ | a | with a causal coefficient of $b_{ij}$ |
| | $b_{ij} = 0$ | no | - (no values are filled in) |
| Pattern 4 (DirectLiNGAM only): Non-zero causal coefficients and bootstrap probabilities of directed edges | $P_{ij} \neq 0$ and $b_{ij} \neq 0$ | a | with a bootstrap probability of $P_{ij}$, and the coefficient is likely to be $b_{ij}$ |
| | $P_{ij} \neq 0$ and $b_{ij} = 0$ | a | with a bootstrap probability of $P_{ij}$, but the coefficient is likely to be 0 |
| | $P_{ij} = 0$ | no | - (no values are filled in) |

Table 7: Contents filled in ⟨blank 6⟩ and ⟨blank 9⟩ shown in Table 1, when Exact Search or DirectLiNGAM is adopted for the SCD process.

The description for ⟨blank 5⟩ in each SCP pattern is augmented by Table 9, and the description for ⟨blank 6⟩ is similarly augmented by Table 10. As shown in Table 9, the directed and undirected edges are separately listed and clearly distinguished by the edge symbols of "→" for directed edges and "—" for undirected edges. These are then filled in ⟨blank 5⟩. The pairs of variables connected by undirected edges are represented as ⟨ cause or effect i-1 ⟩ and ⟨ cause or effect i-2 ⟩, and the bootstrap probability of the emergence of these relationships is represented as $P^u_i$. On the other hand, the bootstrap probability of ⟨ cause i ⟩ → ⟨ effect i ⟩ is represented as $P^d_i$. The division of the descriptions in ⟨blank 6⟩ is shown in Table 10. The bootstrap probabilities of the appearance of $x_j \to x_i$ and $x_j$ — $x_i$ are respectively represented as $P^d_{ij}$ and $P^u_{ij}$.

Table 8: The prompt template of $Q^{(1)}_{ij}(x_i, x_j, \hat{G}_0, P^d, P^u)$ in the case of the PC algorithm. The "blanks" enclosed with ⟨ ⟩ are filled with description words considering the theme of the causal inference, variable names, and the SCD result with the PC algorithm.

Prompt Template of $q^{(1)}_{ij} = Q^{(1)}_{ij}(x_i, x_j, \hat{G}_0, P^d, P^u)$ **for PC**

We want to carry out causal inference on ⟨blank 1. The theme⟩, considering ⟨blank 2. The description of all variables⟩ as variables. First, we have conducted the statistical causal discovery with the PC (Peter-Clark) algorithm, using a fully standardized dataset on ⟨blank 4. The description of the dataset⟩. ⟨blank 5. Including here the information of $\hat{G}_0$ (for Pattern 1), or $P^d$ and $P^u$ (for Pattern 2). The detail of the contents depends on prompting patterns, both for directed and undirected edges.⟩ According to the results shown above, it has been determined that ⟨blank 6. The detail of the interpretation on whether there is a causal relationship between $x_j$ and $x_i$ from the result shown in blank 5⟩.
Then, your task is to interpret this result from a domain knowledge perspective and determine whether this statistically suggested hypothesis is plausible in the context of the domain. Please provide an explanation that leverages your expert knowledge on the causal relationship between ⟨blank 7. The name of $x_j$⟩ and ⟨blank 8. The name of $x_i$⟩, and assess the naturalness of this causal discovery result. Your response should consider the relevant factors and provide a reasoned explanation based on your understanding of the domain.

| SCP Pattern | Content in ⟨blank 5⟩ |
|---|---|
| Pattern 1: Directed and undirected edges | All of the edges suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩; ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩; . . . In addition to the directed edges above, all of the undirected edges suggested by the statistical causal discovery are below: ⟨ cause or effect 1-1 ⟩ — ⟨ cause or effect 1-2 ⟩; ⟨ cause or effect 2-1 ⟩ — ⟨ cause or effect 2-2 ⟩; . . . |
| Pattern 2: Bootstrap probabilities of directed and undirected edges | All of the edges with non-zero bootstrap probabilities suggested by the statistical causal discovery are below: ⟨ cause 1 ⟩ → ⟨ effect 1 ⟩ (bootstrap probability = $P^d_1$); ⟨ cause 2 ⟩ → ⟨ effect 2 ⟩ (bootstrap probability = $P^d_2$); . . . In addition to the directed edges above, all of the undirected edges suggested by the statistical causal discovery are below: ⟨ cause or effect 1-1 ⟩ — ⟨ cause or effect 1-2 ⟩ (bootstrap probability = $P^u_1$); ⟨ cause or effect 2-1 ⟩ — ⟨ cause or effect 2-2 ⟩ (bootstrap probability = $P^u_2$); . . . |

Table 9: Contents filled in ⟨blank 5⟩ shown in Table 8.
| SCP Pattern | Case Classification | Content in ⟨blank 6⟩ |
|---|---|---|
| Pattern 1: Directed and undirected edges | $x_j \to x_i$ | there may be a direct impact of a change in ⟨blank 7. The name of $x_j$⟩ on ⟨blank 8. The name of $x_i$⟩ |
| | $x_j$ — $x_i$ | there may be a direct causal relationship between ⟨blank 7. The name of $x_j$⟩ and ⟨blank 8. The name of $x_i$⟩, although the direction has not been determined |
| | no edge between $x_i$ and $x_j$ | there may be no direct impact of a change in ⟨blank 7. The name of $x_j$⟩ on ⟨blank 8. The name of $x_i$⟩ |
| Pattern 2: Bootstrap probabilities of directed and undirected edges | $P^d_{ij} \neq 0$ and $P^u_{ij} \neq 0$ | there may be a direct impact of a change in ⟨blank 7. The name of $x_j$⟩ on ⟨blank 8. The name of $x_i$⟩ with a bootstrap probability of $P^d_{ij}$. In addition, it has also been shown above that there may be a direct causal relationship between ⟨blank 7. The name of $x_j$⟩ and ⟨blank 8. The name of $x_i$⟩ with a bootstrap probability of $P^u_{ij}$, although the direction has not been determined |
| | $P^d_{ij} \neq 0$ and $P^u_{ij} = 0$ | there may be a direct impact of a change in ⟨blank 7. The name of $x_j$⟩ on ⟨blank 8. The name of $x_i$⟩ with a bootstrap probability of $P^d_{ij}$ |
| | $P^d_{ij} = 0$ and $P^u_{ij} \neq 0$ | there may be a direct causal relationship between ⟨blank 7. The name of $x_j$⟩ and ⟨blank 8. The name of $x_i$⟩ with a bootstrap probability of $P^u_{ij}$, although the direction has not been determined |
| | $P^d_{ij} = 0$ and $P^u_{ij} = 0$ | there may be no direct impact of a change in ⟨blank 7. The name of $x_j$⟩ on ⟨blank 8. The name of $x_i$⟩ |

Table 10: Contents filled in ⟨blank 6⟩ shown in Table 8.

## C Details Of Datasets Used In Demonstrations

In this section, the details of the datasets used in the main body and the appendix are clarified. The ground truths set for the evaluation of the SCD and LLM-KBCI results for each dataset are presented.

## C.1 Auto MPG Data

Auto MPG data were originally made public in the UCI Machine Learning Repository (Quinlan, 1993), and have been used as a benchmark dataset for causal inference (Spirtes et al., 2010; Mooij et al., 2016). This dataset consists of variables around the fuel consumption of cars. We adopt five variables: "Weight", "Displacement", "Horsepower", "Acceleration" and "Mpg" (miles per gallon). Moreover, the number of points of this dataset in the experiment is 392. The ground truth of causal relationships we adopt in this paper is shown in Figure 3; the original has been shown as the example of the kPC algorithm (Spirtes et al., 2010). The differences from the original study (Spirtes et al., 2010) are presented below:

(1) Loss of "Cylinders" Although there is also a discrete variable "Cylinders" in the original data (Quinlan, 1993), it is omitted in the experiments to focus solely on the continuous variables.

(2) Directed edge from "Weight" to "Displacement" "Weight" and "Displacement" are connected with an undirected edge, which indicates that the direction cannot be determined by the kPC algorithm, although a causal relationship between these two variables is suggested. However, it is empirically acknowledged that large and heavy vehicles use engines with larger displacement to provide sufficient power to match their size. Thus, we tentatively set the direction of the edge between these two variables as "Weight" → "Displacement."

We also recognize that another ground truth was interpreted in the process of reconstructing the Tübingen database for causal-effect pairs (Mooij et al., 2016)14, in which "Mpg" and "Acceleration" were interpreted as variables affected by the other elements. This ground truth seems to be reliable if we do not significantly discriminate between direct and indirect causal effects and target the identification of cause and effect from a pair of variables. However, we adopt the ground truth based on the result from the kPC algorithm, because our target is to approach the true causal graph, including multi-step causal relationships.

![24_image_0.png](24_image_0.png)

Figure 3: Causal graph of ground truth adopted for Auto MPG data in this study.

In Figure 4, the results of basic causal structure analysis by the PC, Exact Search, and DirectLiNGAM algorithms without prior knowledge are presented. Several edges reversed from the ground truths, such as "Mpg" → "Weight", are observed.
14 https://webdav.tuebingen.mpg.de/cause-effect/

![25_image_0.png](25_image_0.png)
![25_image_1.png](25_image_1.png)

Figure 4: Results of SCD on Auto MPG data with (A) PC, (B) Exact Search, and (C) DirectLiNGAM.

## C.2 DWD Climate Data

The DWD climate data were originally provided by the DWD15, and several of the original datasets were merged and reconstructed as a component of the Tübingen database for causal-effect pairs (Mooij et al., 2016). This dataset consists of six variables: "Altitude", "Latitude", "Longitude", "Sunshine" (duration), "Temperature" and "Precipitation". The number of points of this dataset is 349, which corresponds to the number of weather stations in Germany without missing data. Because no ground truth for this dataset has been advocated, except for that in the Tübingen database for causal-effect pairs (Mooij et al., 2016), we adopt it tentatively in this experiment, as shown in Figure 5.

![26_image_0.png](26_image_0.png)

Figure 5: Causal graph of ground truth adopted for DWD climate data in this study.

In Figure 6, the results of basic causal structure analysis by the PC, Exact Search, and DirectLiNGAM algorithms without prior knowledge are presented. In all the causal graphs in Figure 6, several unnatural behaviors are observed, such as "Altitude" being affected by other climate variables, which we interpret as causal relationships reversed from the ground truths.

![26_image_1.png](26_image_1.png)

Figure 6: Results of SCD on DWD climate data with (A) PC, (B) Exact Search, and (C) DirectLiNGAM.

## C.3 Sachs Protein Data

This dataset consists of variables of the phosphorylation levels of proteins and lipids in primary human immune cells, which were originally constructed and analyzed with a non-parametric causal structure learning algorithm by Sachs et al. (2005). It contains 11 continuous variables: "raf", "mek", "plc", "pip2", "pip3", "erk", "akt", "pka", "pkc", "p38" and "jnk". The number of points of this dataset is 7466. The ground truth adopted in this study is almost the same as the interpretation shown in the study by Sachs et al. (2005). The differences from the causal graph visually displayed in the original paper are presented below:

(1) Reversed edge between "pip3" and "plc" Although the directed edge "plc" → "pip3" was detected in the original study, it was denoted as "reversed," meaning that it may point in the direction opposite to the expected edge. Thus, we adopt the causal relationship "pip3" → "plc", which Sachs *et al.* anticipated as true from an expert point of view.

(2) Three missed edges in the original study In the study by Sachs *et al.*, "pip2" → "pkc," "plc" → "pkc," and "pip3" → "akt" did not appear in the Bayesian network inference result, although they were expected to be direct causal relationships from the domain knowledge. We adopt these three edges for the ground truth, considering that they may not appear under certain SCD conditions and assumptions.

![27_image_0.png](27_image_0.png)

Figure 7: Causal graph of ground truth adopted for Sachs protein data in this study.

In Figure 8, the results of the basic causal structure analysis by the PC, Exact Search, and DirectLiNGAM algorithms without prior knowledge are shown.

![28_image_0.png](28_image_0.png)

Figure 8: Results of SCD on Sachs data with (A) PC, (B) Exact Search, and (C) DirectLiNGAM.
## C.4 Health Screening Data (Closed Data And Not Included In GPT-4's Pre-Training Materials)

To confirm that GPT-4 can adequately judge the existence of causal relationships with SCP even if the dataset used in SCD is not included in the pre-training dataset of GPT-4, we have additionally prepared a health screening dataset of workers in engineering and construction contractors, which is not disclosed because of the sensitive, personal nature of the data. It contains seven continuous variables: body mass index ("BMI"), waist circumference ("Waist"), systolic blood pressure ("SBP"), diastolic blood pressure ("DBP"), hemoglobin A1c ("HbA1c"), low-density lipoprotein cholesterol ("LDL"), and age ("Age"). The total number of points of this dataset is 123,151. Although the causal relationships between all pairs of variables are not completely determined, we set two types of ground truths.

(1) Directed edges interpreted as ground truths We empirically set the directed edges below as ground truths.

- "Age"→"BMI" (Clarke et al., 2008; Alley et al., 2008; Gordon-Larsen et al., 2010; Yang et al., 2021)
- at least one of "Age"→"SBP" and "Age"→"DBP" (Gurven et al., 2012)
- "Age"→"HbA1c" (Pani et al., 2008; RaviKumar et al., 2011; Dubowitz et al., 2014)

(2) Variable interpreted as a parent of all other variables "Age" is an unmodifiable background factor. Furthermore, it has been clearly demonstrated in numerous medical studies that aging affects "BMI," "SBP," "DBP," and "HbA1c." Based on this specialized knowledge, we interpret "Age" as a parent of all other variables.

The ground truths introduced above also appear in the result of DirectLiNGAM without prior knowledge, as shown in Figure 9. Although "Age"→"HbA1c" is confirmed in this result, the causal coefficient of this edge is relatively small. Thus, depending on the number of data points or the bias of the dataset, it is possible that this edge does not appear in SCD without prior knowledge. For the experiment, to confirm that GPT-4 can supply SCD with adequate prior knowledge even if a directed edge of the ground truth is not apparent, we have repeated the sampling of 1000 points from the entire dataset until we obtained a subset on which PC, Exact Search, and DirectLiNGAM cannot discern the causal relationship "Age"→"HbA1c" without prior knowledge. The results of the SCD on the subset are shown in Figure 10, and this subset is adopted to confirm the effectiveness of the proposed method. It is confirmed in all SCD results that "Age" → "HbA1c" does not appear, and that "Age" is directly influenced by other variables, which we interpret as unnatural behavior from the domain knowledge.

![30_image_0.png](30_image_0.png)

Figure 9: Causal graph suggested by DirectLiNGAM using full points of the health screening data.

![30_image_1.png](30_image_1.png)

Figure 10: Results of SCD on the randomly selected subsample in the health screening dataset with (A) PC, (B) Exact Search, and (C) DirectLiNGAM.

## D Token Generation Probabilities As Confidence Of Large Language Models And Sensitivity Analysis

In this section, we provide a supplemental explanation of the properties of the mean probability $\overline{p_{ij}}$ that the LLM generates the token "yes" as the response to the prompt asking for the confidence in the causal relationship from $x_j$ to $x_i$, highlighting the properties of the token generation process in Transformer-based LLMs.
We also discuss the mean probability $\overline{r_{ij}}$ that the LLM generates the token "no," and explain the symmetric behavior of $\overline{p_{ij}}$ and $\overline{r_{ij}}$ arising from the empirical relationship $\overline{p_{ij}} + \overline{r_{ij}} = 1$, under the assumption of the "faithfulness" of the LLM to the prompt. Moreover, we quantitatively evaluate the fluctuation of $\overline{p_{ij}}$ with the standard error $SE(\overline{p_{ij}})$ as an index of the sensitivity or reliability of these probabilities measured in our experiments, through regression analysis with a phenomenological distribution function $SE(\overline{p_{ij}}) = a_p - b_p(\overline{p_{ij}} - 0.5)^2$. Finally, the precision of the classification of forbidden causal paths is evaluated through numerical simulations. The discussion in this section not only contributes to the evaluation of the LLM's confidence in the causal relationship between pairs of variables, but also supports the reliability of the LLM's outputs.

## D.1 Details Of Token Generation Probabilities Calculations

As explained in Section 3.1, we quantitatively evaluate the generation probability of the token "yes" from GPT-4 as the confidence of GPT-4 regarding the existence of the causal relationship. Although the details of GPT-4's architecture have not been disclosed, it is naturally assumed that GPT-4 employs a temperature-adjusted softmax function for token generation, considering the following facts:

- The previous GPT series developed by OpenAI are based on the Transformer architecture (Radford et al., 2019; Brown et al., 2020)
- The temperature hyperparameter can be optionally set
- Log-probabilities of the most likely tokens at each token position can be obtained

In the process of text generation in a Transformer, the model first calculates the logit $z_k$ for token $c_k$ ($0 \leq k \leq K$, where $K$ is the number of candidate tokens) as a likelihood score for all candidate tokens based on the prompted text $X$. Then, the conditional probability of the emergence of token $c_a$ with the logit $z_a$, $\mathbb{P}(c_a|X)$, is calculated through the temperature-adjusted softmax function as follows (Tunstall et al., 2022):

$$\mathbb{P}(c_{a}|X,T)=\frac{\exp(z_{a}/T)}{\sum_{k=0}^{K}\exp(z_{k}/T)}\tag{2}$$

Here, $T$ is a temperature hyperparameter that adjusts the conditional probability distribution. Finally, the next token is sampled based on the conditional probability distribution above. In this work, $T$ was fixed to 0.7.

In the third step, the log-probability $L^{(m)}_{ij} = L^{(m)}(\epsilon_{\rm GPT4}(q^{(2)}_{ij}) = \text{"yes"})$ shown in Algorithm 1 is directly sampled $M$ times using the optional function of GPT-4. Considering that the token generation probability is represented by Eq. (2), we have calculated the mean probability $\overline{p_{ij}}$ as follows:

$$\overline{p_{ij}}=\frac{\sum_{m=1}^{M}\exp(L_{ij}^{(m)})}{M}=\frac{\sum_{m=1}^{M}P^{(m)}\left(\text{"yes"}\,\middle|\,q_{ij}^{(2)},T\right)}{M}\tag{3}$$

We have interpreted $\overline{p_{ij}}$ calculated above as the confidence16 of GPT-4 in the existence of the causal effect from $x_j$ on $x_i$.

16 Although it is suggested that this interpretation is valid in common-sense reasoning from our experiments, it should be noted that this probability simply represents the likelihood that the LLM selected the expected token as the answer to the prompt. In this sense, this "confidence" can be biased under some conditions, since it depends on the architecture and the pre-training datasets of the LLM. Therefore, as discussed in Section 5, it is required to further improve our method in order to avoid hallucinations more reliably, especially for applications in highly specialized academic domains.
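As a concrete illustration of Eqs. (2) and (3), the sketch below converts $M$ sampled log-probabilities of the token "yes" into the mean probability $\overline{p_{ij}}$. It assumes the log-probabilities have already been collected through the API's log-probability option; the listed values are placeholders.

```python
import math

# Sketch of Eq. (3): average the exponentiated log-probabilities of the token
# "yes" over M sampled generations. Each entry stands for L_ij^(m) returned by
# the log-probability option; the values below are placeholders.
logprobs_yes = [-0.02, -0.05, -0.01, -0.08, -0.03]  # M = 5 samples

M = len(logprobs_yes)
p_bar_ij = sum(math.exp(L) for L in logprobs_yes) / M  # Eq. (3)
print(f"mean probability of 'yes': {p_bar_ij:.4f}")
```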
![32_image_0.png](32_image_0.png)

Figure 11: (a) The relationship between $\overline{p_{ij}}$ and $\overline{r_{ij}}$. The red line is the result of least squares regression with $\overline{r_{ij}} = \alpha\overline{p_{ij}} + \beta$. The estimated values (± the standard errors) of the coefficients are $\alpha = -1.002 \pm 0.002\ (\simeq -1)$ and $\beta = 0.990 \pm 0.001\ (\simeq 1)$. (b) The relationship between $SE(\overline{p_{ij}})$ and $SE(\overline{r_{ij}})$. The red line is the result of least squares regression with $SE(\overline{r_{ij}}) = \gamma SE(\overline{p_{ij}}) + \delta$. The estimated values (± the standard errors) of the coefficients are $\gamma = 0.933 \pm 0.011\ (\simeq 1)$ and $\delta = 0.0015 \pm 0.0002\ (\simeq 0)$.

## D.2 Confidence In The Absence Of The Causal Relationship

Although the generation probability of the token "no" is not evaluated in the main text, it is valuable to discuss the ideal behavior of this probability for the subsequent discussion of the stability of $\overline{p_{ij}}$. In the same context as $\overline{p_{ij}}$, the mean probability of generating the token "no", $\overline{r_{ij}}$, is calculated as follows:

$$\overline{r_{ij}}=\frac{\sum_{m=1}^{M}P^{(m)}\left(\text{"no"}\,\middle|\,q_{ij}^{(2)},T\right)}{M}\tag{4}$$

Since GPT-4 is required to answer only with "yes" or "no," under the assumption that the response from GPT-4 is faithful to the prompting explained in Table 2, it is expected that $P^{(m)}(\text{"yes"}|q^{(2)}_{ij}, T) + P^{(m)}(\text{"no"}|q^{(2)}_{ij}, T) = 1$. In this section, we define this property as "faithfulness." Therefore, from Eqs. (3) and (4), the ideal relation between $\overline{p_{ij}}$ and $\overline{r_{ij}}$ is expressed as follows:

$$\overline{r_{ij}}=-\overline{p_{ij}}+1\tag{5}$$

Since Eq. (5) can also be expressed in a symmetrical form such as $\overline{r_{ij}} - 0.5 = -(\overline{p_{ij}} - 0.5)$, this property is expected to contribute to a clearer quantitative interpretation of the stability of $\overline{p_{ij}}$. This is particularly important for validating the latter discussion on the fluctuation of $\overline{p_{ij}}$.

## D.3 Stability Of Token Generation Probabilities

For the quantitative evaluation of the stability of $\overline{p_{ij}}$, we compare $\overline{p_{ij}}$ and $\overline{r_{ij}}$, along with their standard errors ($SE(\overline{p_{ij}})$ and $SE(\overline{r_{ij}})$) estimated through trials conducted $M = 5$ times. We temporarily concatenate the results of LLM-KBCI on the datasets of Auto MPG, DWD, and Sachs (for all the prompting patterns and for all the SCD algorithms), and on the health-screening data (for all the prompting patterns with
Therefore, it seems approximately valid to assume the symmetrical relation 7 - 0.5 = - (pij - 0.5). Moreover, Figure 11 (b) shows the relation between SE(p;) and SE(T;), where the red line indicates the result of least squares regression with SE(Ti) = YSE(Ti) + 8. Given the estimated values of the causal coefficients (y = 0.933, 8 = 0.002), it can be inferred that the approximation SE(Fi) ~ SE(TT) holds true. This fact provides further evidence for the approximately symmetric behavior expressed as 7-0.5 =- (pij - 0.5). Figure 12 shows the relation between pij and SE(pi;) in (a) and the relation between TT and SE(ri;) in (b). These two graphs exhibit mutually related characteristics as follows: · In the limit of pij → 1 or Tij → 0, both SE(Pi;) and SE(ri;) approach 0, indicating that 77 and TT become highly stable when confidence in the presence of the causal effect from x; to x; is very high. · In the limit of pij → 0 or rij → 1, both SE(pi;) and SE(ri;) approach 0, indicating that 77 and TT become highly stable when confidence in the absence of the causal effect from x; to x; is very high. Furthermore, the entire form of the pij - SE(pij ) distribution is quantitatively similar to the rij - SE(rij ) distribution. From this fact, it is expected that the functional form of SE(pij ) explained with pij is approximately the same as SE(rij ) explained with rij . Therefore, in order to interpret the stability of the probability easily and clearly, we assume a common distribution function between the pij - SE(pij ) and rij - SE(rij ) distributions, as well as the symmetric behavior shown in Eq. (5), and regress the pij - SE(pij ) distribution with a phenomenological function as follows: $$S E(p_{i j})=a_{p}-b_{p}({\overline{{p_{i j}}}}-0.5)^{2}$$ $$\mathbf{\hat{n}}$$ SE(pij ) = ap − bp(pij − 0.5)2(6) In the same context, the rij - SE(rij ) distribution is regressed with a phenomenological function as follows: $$S E(r_{i j})=a_{r}-b_{r}{\left({\overline{{r_{i j}}}}-0.5\right)}^{2}$$ $\square$ SE(rij ) = ar − br(rij − 0.5)2(7) Ideally, if the symmetrical behavior holds true, the relationship (ap, bp) = (ar, br) is expected. From the analysis results of the least squares regression, the coefficients of Eqs. (6) and (7) are estimated as (ap, bp) = (0.0694, 0.2783) and (ar, br) = (0.0692, 0.2780). Thus, it is confirmed that (ap, bp) ≃ (ar, br) holds true. From the results above, the standard error of pij becomes the maximum value (expected to be 0.062) when pij = 0.5. In our experiments, the fluctuation becomes critical at the criteria α1 and α2 explained in Section 3.1. From the phenomenological analysis shown above, the expected standard errors of pij at the criteria (α1 = 0.05 and α2 = 0.95 in our experiments) are estimated through Eq. (6) to be around 0.013. Therefore, when pij is around the threshold of 0.05 or 0.95, the probability fluctuation around this standard error can lead to a different PK matrix. ## D.4 Effect On The Decision Of Prior Knowledge Matrix For a more detailed analysis of the effect of the fluctuation on the determination of PK, we focus on two indices of sensitivity: (A) the statistically estimated probability that the true value of pij (p true ij ) satisfies the inequality p true ij < α1, and (B) the area under the curve (AUC) calculated through a Monte Carlo simulation. ## (A) Statistically Estimated Probability Of True Confidence For a simple simulation, we assume that SE(pij ) is calculated with Eq. 
## D.4 Effect On The Decision Of Prior Knowledge Matrix

For a more detailed analysis of the effect of the fluctuation on the determination of PK, we focus on two indices of sensitivity: (A) the statistically estimated probability that the true value of $\overline{p_{ij}}$ ($p^{\rm true}_{ij}$) satisfies the inequality $p^{\rm true}_{ij} < \alpha_1$, and (B) the area under the curve (AUC) calculated through a Monte Carlo simulation.

## (A) Statistically Estimated Probability Of True Confidence

For a simple simulation, we assume that $SE(\overline{p_{ij}})$ is calculated with Eq. (6) and $(a_p, b_p) = (0.0694, 0.2783)$ as estimated through the regression analysis in Appendix D.3. Moreover, we assume that the distribution of $p^{\rm true}_{ij}$ is expressed with a Gaussian distribution with mean value $\overline{p_{ij}}$ and standard deviation $SE(\overline{p_{ij}})$, as described in Eq. (8):

$$f_{p_{ij}^{\rm true}}(\overline{p_{ij}},SE(\overline{p_{ij}}),p)=\frac{1}{\sqrt{2\pi}\,SE(\overline{p_{ij}})}\exp\left(-\frac{(p-\overline{p_{ij}})^{2}}{2\,SE(\overline{p_{ij}})^{2}}\right)\tag{8}$$

Under the assumptions above, $P(p^{\rm true}_{ij} < \alpha_1)$, the probability that the true confidence probability $p^{\rm true}_{ij}$ is below the forbidden-edge threshold $\alpha_1$, is calculated as follows:

$$P(p_{ij}^{\rm true}<\alpha_{1})=\int_{0}^{\alpha_{1}}f_{p_{ij}^{\rm true}}(\overline{p_{ij}},SE(\overline{p_{ij}}),p)\,dp\tag{9}$$

Then, $P(p^{\rm true}_{ij} < \alpha_1)$ is numerically calculated using Eq. (9), and the result is shown in Fig. 13 (a). It is observed that when $\overline{p_{ij}} \leq 0.04$, $p^{\rm true}_{ij}$ is almost certainly below the threshold $\alpha_1 = 0.05$, and when $\overline{p_{ij}} \geq 0.08$, $p^{\rm true}_{ij}$ is almost certainly above 0.05. However, between $0.04 \leq \overline{p_{ij}} \leq 0.08$ (around 0.05), $P(p^{\rm true}_{ij} < \alpha_1)$ decreases with the increase in $\overline{p_{ij}}$. Although it is confirmed that the width of the region of this decreasing $P(p^{\rm true}_{ij} < \alpha_1)$ is on the order of $SE(\overline{p_{ij}}) \simeq 0.013$ around $\overline{p_{ij}} = 0.05$, the slope of the decrease is gentler in the region of $\overline{p_{ij}} \geq 0.05$ than in the region of $\overline{p_{ij}} \leq 0.05$. This asymmetric property originates from the phenomenological nonlinear function of $SE(\overline{p_{ij}})$ described in Eq. (6). If the threshold of the forbidden edge $\alpha_1$ is decreased, it is possible to shrink the region of decreasing $P(p^{\rm true}_{ij} < \alpha_1)$ and the asymmetric slope. However, although we have tentatively set $\alpha_1$ to 0.05, the optimal threshold is expected to be discussed in future research from a practical point of view. It should also be noted that a lower $\alpha_1$ leads to a PK matrix which includes more uncertain elements from domain expert knowledge, and can be less effective in the fourth step in Fig. 1 than expected.

![35_image_0.png](35_image_0.png)

Figure 13: (a) The relationship between calculated $P(p^{\rm true}_{ij} < \alpha_1)$ and $\overline{p_{ij}}$ based on Eq. (9). (b) ROC curve estimated through a Monte Carlo simulation based on Eq. (8). The AUC calculated with this ROC curve is 0.899.

## (B) ROC And AUC Estimated Through A Monte Carlo Simulation

Since $p^{\rm true}_{ij}$ cannot be known in our experiments, it is impossible to construct the receiver operating characteristic (ROC) curve directly with the probability data obtained in our experiments. Instead, we have conducted a Monte Carlo simulation, through which $p^{\rm true}_{ij}$ is randomly generated using the distribution described in Eq. (8). Furthermore, for the calculation of the ROC and AUC, we define the conditions for true positive (TP), false positive (FP), true negative (TN), and false negative (FN), in the context of judging whether the causal path from $x_j$ to $x_i$ is forbidden, as follows:

- TP: generated $p^{\rm true}_{ij}$ satisfies $p^{\rm true}_{ij} < \alpha_1$, and experimentally fixed $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} < \alpha_1$.
- FP: generated $p^{\rm true}_{ij}$ satisfies $p^{\rm true}_{ij} \geq \alpha_1$, and experimentally fixed $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} < \alpha_1$.
- TN: generated $p^{\rm true}_{ij}$ satisfies $p^{\rm true}_{ij} \geq \alpha_1$, and experimentally fixed $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} \geq \alpha_1$.
- FN: generated $p^{\rm true}_{ij}$ satisfies $p^{\rm true}_{ij} < \alpha_1$, and experimentally fixed $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} \geq \alpha_1$.
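A minimal sketch of one step of this simulation (for a single fixed $\overline{p_{ij}}$, assuming numpy and the fitted coefficients of Eq. (6)) is given below; the function name is ours.

```python
import numpy as np

# Sketch of one Monte Carlo step: draw p_true from the Gaussian of Eq. (8)
# around a fixed experimental mean p_bar, then classify the forbidden-path
# decision against the threshold alpha_1. Names are ours, not released code.
rng = np.random.default_rng(0)
alpha_1 = 0.05

def classify(p_bar, n_trials=1000):
    se = 0.0694 - 0.2783 * (p_bar - 0.5) ** 2          # Eq. (6)
    p_true = rng.normal(p_bar, se, size=n_trials)      # Eq. (8)
    decided_forbidden = p_bar < alpha_1                # experimentally fixed
    tp = np.sum((p_true < alpha_1) & decided_forbidden)
    fp = np.sum((p_true >= alpha_1) & decided_forbidden)
    tn = np.sum((p_true >= alpha_1) & ~decided_forbidden)
    fn = np.sum((p_true < alpha_1) & ~decided_forbidden)
    return tp, fp, tn, fn

print(classify(0.06))
```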
The practical Monte Carlo simulation has been conducted 1000 times for each $\overline{p_{ij}}$ value, varying $\overline{p_{ij}}$ from 0.032 to 0.10817 in increments of 0.001, and the ROC calculated through this simulation is shown in Fig. 13. The AUC is estimated to be 0.899, indicating excellent discrimination of the forbidden direct causal path (Hosmer & Lemeshow, 2000). From this analysis, it is inferred that, in the realm of common-sense reasoning, the decision process of PK from the confidence of GPT-4 at the third step in Fig. 1, according to the interpretation obtained in the second step, is statistically valid.

17 In this region, $P(p^{\rm true}_{ij} < \alpha_1)$ satisfies $0.001 < P(p^{\rm true}_{ij} < \alpha_1) < 0.999$. Therefore, for the practical calculation of the ROC and AUC, we have assumed that when $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} < 0.032$, $p^{\rm true}_{ij}$ is under $\alpha_1$ and always classified as TP, and that when $\overline{p_{ij}}$ satisfies $\overline{p_{ij}} > 0.108$, $p^{\rm true}_{ij}$ is $\geq \alpha_1$ and always classified as TN.

## D.5 Limitations In Sensitivity Analysis Of Token Generation Probabilities

Although we have discussed the sensitivity of the token generation probabilities as the confidence of GPT-4 above, there are several limitations in elucidating the entire properties of the fluctuation induced by the randomness of the LLMs. For further research to refine our methods, we introduce some additional issues related to the measurement stability in the third step in Fig. 1.

## Sensitivity In The Explanation Obtained In Step 2

Although we have evaluated the probability fluctuation emergent in Step 3 of Fig. 1, $\overline{p_{ij}}$ and $SE(\overline{p_{ij}})$ can also depend on $a_{ij}$, the output in Step 2, based on the relationship $q^{(2)}_{ij} = Q^{(2)}_{ij}(q^{(1)}_{ij}, a_{ij})$. Since it is practically impossible to evaluate the fluctuation of $a_{ij}$ at Step 2 quantitatively, we have discussed the token generation probability and its sensitivity with a fixed $a_{ij}$. Furthermore, we have not discussed in this work the validity of $a_{ij}$, as this is expected to be debated within the domain of research specific to LLMs, which is outside the scope of our study. However, it should be noted that although our method has proven to be effective in many cases through this research, it may be negatively impacted if an LLM makes biased judgments. Therefore, ongoing efforts to eliminate biases in LLMs and prevent hallucinations are required. Additionally, considering the future applications of this method, careful consideration and judgment will continue to be necessary.

## Imperfect Satisfaction Of The Condition Of The "Faithfulness"

In our work, the "faithfulness" is almost confirmed, as seen in Fig. 11. However, there are several points that obviously deviate from the regression line, i.e., from Eq. (5). Specifically, out of a total of 1650 points in the case of our analysis in this section, approximately 1400 points satisfy the relationship $0.99 < \overline{p_{ij}} + \overline{r_{ij}} < 1$. In contrast, for about 80 points, $\overline{p_{ij}} + \overline{r_{ij}} < 0.95$, indicating clear violations of "faithfulness." A violation of this "faithfulness" means that there is a non-negligible generation probability for tokens that are neither "yes" nor "no." One possible explanation for this case is that GPT-4 cannot completely assert "yes" or "no" on the existence of the causal effect from $x_j$ to $x_i$ with the knowledge obtained in Step 2. In more anthropomorphic terms, this could suggest that GPT-4 sometimes struggles with its decision-making.
For more rigid handling in the application of our method, it is required to prevent violations of "faithfulness," or to extend our quantitative treatment of the token generation probability to one that permits the possibility of this behavior.

## Temperature Dependence Of The Fluctuation And The "Faithfulness"

As described in Eq. (2), the probability distribution depends on the temperature parameter. Since a higher temperature leads to larger generation probabilities for tokens that are unexpected at lower temperatures, it is possible that this parameter also has effects on the fluctuation of $\overline{p_{ij}}$ and the "faithfulness." Although we have fixed this parameter at 0.7 in all the experiments in this work, further discussion on the optimal temperature for this task is expected.

## E Composition Of Adjacency Matrices Representing Causal Structure And Evaluation

## E.1 Composition Of Prior Knowledge Matrices

As shown in Algorithm 2, the composition rule of PK depends on the type of SCD method adopted, and the decision criteria for forced and forbidden edges are tentatively set at 0.95 and 0.05, respectively. While PC and DirectLiNGAM can be augmented with constraints for both forced and forbidden directed edges or paths, Exact Search can only be augmented with constraints for forbidden directed edges. In this section, the composition rule for PK is described in detail for all the SCD algorithms we have adopted in this work.

**For PC** As for the matrix representation of PK, the values of the matrix elements are determined as follows:

- Case 1. If $x_i \to x_j$ is forced (i.e., $\overline{p_{ij}} \geq 0.95$), then $PK_{ij}$ is set to 1.
- Case 2. If $x_i \to x_j$ is forbidden (i.e., $\overline{p_{ij}} < 0.05$), then $PK_{ij}$ is set to 0.
- Case 3. If the existence of $x_i \to x_j$ cannot be determined immediately from the domain knowledge generated by GPT-4 (i.e., $0.05 \leq \overline{p_{ij}} < 0.95$), then $PK_{ij}$ is set to −1.

This ternary matrix composition is based on the constraints of the prior knowledge matrix of DirectLiNGAM, which will be explained later, so that the generated PK can be applied to DirectLiNGAM as directly as possible. In the PC algorithm widely available in "causal-learn," prior knowledge is also represented as a matrix: both forced and forbidden edges can be set, while the possibility of other unknown edges is explored. Because of these properties shared with DirectLiNGAM, the prior knowledge for this PC algorithm can likewise be represented as a ternary matrix when needed. Therefore, the composition rule of PK for PC is set to be the same as that for DirectLiNGAM in this work, to treat it as consistently as possible.

**For DirectLiNGAM** Although the criteria for setting the values in PK are the same as those for PC, the definition of the values becomes slightly different. While the prior knowledge for the PC algorithm in the "causal-learn" package corresponds to the existence of directed edges between pairs of variables, the prior knowledge for DirectLiNGAM is determined with the knowledge on directed paths. The values of the matrix elements are determined as below:

- Case 1. If the directed path from $x_i$ to $x_j$ is forced (i.e., $\overline{p_{ij}} \geq 0.95$), then $PK_{ij}$ is set to 1.
- Case 2. If the directed path from $x_i$ to $x_j$ is forbidden (i.e., $\overline{p_{ij}} < 0.05$), then $PK_{ij}$ is set to 0.
- Case 3. If the existence of the directed path from $x_i$ to $x_j$ cannot be determined immediately from the domain knowledge generated by GPT-4 (i.e., $0.05 \leq \overline{p_{ij}} < 0.95$), then $PK_{ij}$ is set to −1.

This ternary matrix composition, using 1, 0 and −1, is indeed implemented in the software package "LiNGAM."
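A compact sketch of Cases 1–3 is given below; the function name and the index convention of the probability matrix follow our notation and are illustrative only.

```python
import numpy as np

# Sketch of Cases 1-3: building the ternary prior-knowledge matrix PK from a
# matrix p_bar of mean "yes" probabilities, with the tentative criteria 0.95
# (forced) and 0.05 (forbidden). Illustrative only.
def build_ternary_pk(p_bar, forced=0.95, forbidden=0.05):
    pk = np.full(p_bar.shape, -1, dtype=int)  # Case 3: undetermined
    pk[p_bar >= forced] = 1                   # Case 1: forced
    pk[p_bar < forbidden] = 0                 # Case 2: forbidden
    return pk


# Example usage with a hypothetical 2x2 probability matrix:
print(build_ternary_pk(np.array([[0.00, 0.97], [0.02, 0.50]])))
```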
**For Exact Search** While PK in the cases of PC and DirectLiNGAM is a ternary matrix, one must be careful that PK in Exact Search is a binary matrix. The values of the matrix elements are determined as below:

- Case 4. If $x_i \to x_j$ is forbidden (i.e., $\overline{p_{ij}} < 0.05$), then $PK_{ij}$ is set to 0.
- Case 5. If $x_i \to x_j$ is forced, or the existence of this causal relationship cannot be determined immediately from the domain knowledge generated by GPT-4 (i.e., $0.05 \leq \overline{p_{ij}}$), then $PK_{ij}$ is set to 1.

It must be carefully noted that, although the definition of $PK_{ij} = 0$ in Case 4 for Exact Search is exactly the same as that in Case 2 for PC and DirectLiNGAM, the definition of $PK_{ij} = 1$ in Case 5 for Exact Search encompasses both Case 1 and Case 3 for PC and DirectLiNGAM. This difference must be taken into account when evaluating PK in comparison with the ground truths, to interpret the results in a unified manner regardless of the SCD methods used.

## E.2 Composition Of Ground Truth Matrix

The representation of ground truths in matrix form can be simply realized using a binary matrix, provided that it is determined whether a directed edge exists for every possible pair of variables in the system. The composition rule for the ground truth matrix GT is as follows:

- If $x_j \to x_i$ exists, then $GT_{ij}$ is set to 1.
- If $x_j \to x_i$ does not exist, then $GT_{ij}$ is set to 0.

The matrix representations of the ground truths of the benchmark datasets of Auto MPG, DWD, and Sachs shown in Appendix C are expressed as follows:

$$GT_{\rm AutoMPG}=\begin{pmatrix}0&0&0&1&0\\ 0&0&1&1&0\\ 1&0&0&0&0\\ 0&0&0&0&0\\ 0&0&1&0&0\end{pmatrix}\tag{10}$$

$$GT_{\rm DWD}=\begin{pmatrix}0&0&0&0&0&0\\ 1&0&0&1&0&1\\ 1&0&0&1&0&0\\ 0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&0\end{pmatrix}\tag{11}$$

$$GT_{\rm Sachs}=\begin{pmatrix}0&0&0&0&0&0&0&1&1&0&0\\ 1&0&0&0&0&0&0&1&1&0&0\\ 0&0&0&0&1&0&0&0&0&0&0\\ 0&0&1&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&1&1&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&1&0&0\\ 0&0&1&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&1&0&0\\ 0&0&0&0&0&0&0&1&1&0&0\end{pmatrix}\tag{12}$$

## E.3 Calculation Of Metrics For Evaluation Of Structural Consistency With Ground Truths (SHD, FPR, FNR, Precision, F1 Score)

Structural metrics such as SHD, FPR, FNR, precision, and F1 score are commonly calculated for performance evaluation in various machine learning and classification contexts, and are compared with the ground truth data. In a similar context, the causal structures inferred by LLM-KBCI and SCD, especially for the benchmark datasets with known ground truths, can also be evaluated using these metrics. For the practical evaluation of the SCD results in this study, we use the ground truth matrices defined for the benchmark datasets in Eqs. (10), (11) and (12) as references, and we measure these metrics using the adjacency matrices that are calculated directly in SCD algorithms or easily transformed from the output causal graphs. Similarly, the calculation of these metrics for evaluating the LLM-KBCI outputs is carried out using PK. However, it must be noted that there can be some debate about the definition of the metrics for PK based on GT, because the definition of the matrix elements of PK shown in Appendix E.1 is partially different from that of GT described in Appendix E.2.
In particular, although $GT_{ij}$ is a binary variable completely determined by whether $x_j \to x_i$ exists or not, $PK_{ij}$ can be set to −1 for PC and DirectLiNGAM and to 1 for Exact Search. This indicates or includes the case where it is not possible to definitively assert the presence or absence of $x_j \to x_i$. Therefore, although there may be a discussion that a reasonable extension of the definitions of these metrics is required for the case above, in this study we evaluate these metrics from $|PK|$, in which both $PK_{ij} = 1$ and $PK_{ij} = -1$ are interpreted as a "tentative assertion of the presence of $x_j \to x_i$" and are treated identically. This processing of PK can also be interpreted as temporarily adopting, for all SCD methods, the composition rule of PK for Exact Search as it is, for the evaluation of SHD, FPR, FNR, precision, and F1 score. With this processing, PK is handled in a unified manner regardless of the SCD methods used. We believe this approach is the best way to maintain the original concept of composing PK, while aiming for consistent discussion across all SCD methods.

**Calculation of SHD** According to the original concept of the structural Hamming distance (SHD), this metric is represented as the total number of edge additions, deletions, or reversals that are needed to convert the estimated graph G′ into its ground truth graph G (Zheng et al., 2018; Cheng et al., 2022; Hasan et al., 2023). As in our study, if the network graphs G and G′ are represented by binary matrices $\mathbf{G}$ and $\mathbf{G'}$, respectively, where all elements are either 0 or 1, then the total numbers of edge additions ($A$), deletions ($D$), and reversals ($R$) can be simply calculated as follows:

$$A(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij})\,\mathbf{1}(G_{ji})\,\mathbf{1}(G^{\prime}_{ij}-1)\tag{13}$$

$$D(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G^{\prime}_{ij})\,\mathbf{1}(G^{\prime}_{ji})\,\mathbf{1}(G_{ij}-1)\tag{14}$$

$$R(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij})\,\mathbf{1}(G_{ji}-1)\,\mathbf{1}(G^{\prime}_{ij}-1)\,\mathbf{1}(G^{\prime}_{ji})\tag{15}$$

Here, we introduce the indicator function $\mathbf{1}(x)$, expressed as follows:

$$\mathbf{1}(x)={\begin{cases}1&{\mathrm{if~}}x=0,\\ 0&{\mathrm{otherwise~}}(x\neq0).\end{cases}}\tag{16}$$

As SHD is defined as $SHD = A + D + R$, it is easily evaluated as follows:

$$SHD(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\left\{\mathbf{1}(G_{ij})\,\mathbf{1}(G_{ji})\,\mathbf{1}(G^{\prime}_{ij}-1)+\mathbf{1}(G^{\prime}_{ij})\,\mathbf{1}(G^{\prime}_{ji})\,\mathbf{1}(G_{ij}-1)+\mathbf{1}(G_{ij})\,\mathbf{1}(G_{ji}-1)\,\mathbf{1}(G^{\prime}_{ij}-1)\,\mathbf{1}(G^{\prime}_{ji})\right\}\tag{17}$$

For the evaluation of the SHD of LLM-KBCI outputs, $SHD(|PK|, GT)$ is calculated with Eq. (17).
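Eqs. (13)–(17) translate directly into a few lines of numpy; the following sketch, with our own function name and $\mathbf{G'}$ written as `G_hat`, is illustrative:

```python
import numpy as np

# Sketch of Eqs. (13)-(17): additions, deletions, reversals, and SHD for
# binary numpy adjacency matrices G_hat (estimate) and G (ground truth).
def shd(G_hat, G):
    ind = lambda x: (x == 0).astype(int)      # indicator 1(x) of Eq. (16)
    A = np.sum(ind(G) * ind(G.T) * ind(G_hat - 1))         # Eq. (13): additions
    D = np.sum(ind(G_hat) * ind(G_hat.T) * ind(G - 1))     # Eq. (14): deletions
    R = np.sum(ind(G) * ind(G.T - 1) * ind(G_hat - 1) * ind(G_hat.T))  # Eq. (15)
    return A + D + R                          # Eq. (17)
```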
Calculation of FPR, FNR, Precision, and F1 score In a similar context to SHD, for the calculation of metrics such as the false positive rate (FPR) and the false negative rate (FNR), we prepare the equations for evaluating the numbers of true positive (TP), false positive (FP), true negative (TN), and false negative (FN) edges as follows:

$$TP(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij}-1)\,\mathbf{1}(G^{\prime}_{ij}-1)\tag{18}$$

$$FP(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij})\,\mathbf{1}(G^{\prime}_{ij}-1)\tag{19}$$

$$TN(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij})\,\mathbf{1}(G^{\prime}_{ij})\tag{20}$$

$$FN(\mathbf{G^{\prime}},\mathbf{G})=\sum_{i,j}\mathbf{1}(G_{ij}-1)\,\mathbf{1}(G^{\prime}_{ij})\tag{21}$$

Then, using Eq. (18)–(21), the definitions of FPR, FNR, precision, and F1 score can be expressed as follows:

$$FPR(\mathbf{G^{\prime}},\mathbf{G})=\frac{FP(\mathbf{G^{\prime}},\mathbf{G})}{TN(\mathbf{G^{\prime}},\mathbf{G})+FP(\mathbf{G^{\prime}},\mathbf{G})}\tag{22}$$

$$FNR(\mathbf{G^{\prime}},\mathbf{G})=\frac{FN(\mathbf{G^{\prime}},\mathbf{G})}{TP(\mathbf{G^{\prime}},\mathbf{G})+FN(\mathbf{G^{\prime}},\mathbf{G})}\tag{23}$$

$$Precision(\mathbf{G^{\prime}},\mathbf{G})=\frac{TP(\mathbf{G^{\prime}},\mathbf{G})}{TP(\mathbf{G^{\prime}},\mathbf{G})+FP(\mathbf{G^{\prime}},\mathbf{G})}\tag{24}$$

$$F_{1}score(\mathbf{G^{\prime}},\mathbf{G})=\frac{2\,TP(\mathbf{G^{\prime}},\mathbf{G})}{2\,TP(\mathbf{G^{\prime}},\mathbf{G})+FN(\mathbf{G^{\prime}},\mathbf{G})+FP(\mathbf{G^{\prime}},\mathbf{G})}\tag{25}$$

For the evaluation of structural metrics such as FPR of LLM-KBCI outputs, FPR(|PK|, GT), FNR(|PK|, GT), Precision(|PK|, GT) and F1 score(|PK|, GT) are calculated with Eq. (22)–(25).

## F Algorithm For Transformation Of Cyclic PK Into Acyclic Adjacency Matrices And Selection Of The Optimal Matrix

As briefly described in Section 3.1, for the case of DirectLiNGAM, acyclicity of PK is also required. Thus, if the PK directly calculated from the probability matrix is cyclic, it must be transformed into an acyclic form. One possible method is to delete the minimum number of edges included in cycles to transform PK into an acyclic matrix. However, it is possible that there are several solutions transformed from the same PK with the minimum manipulation of deleting edges. Therefore, we have decided to carry out causal discovery with DirectLiNGAM for every possible acyclic prior knowledge matrix to select the best acyclic prior knowledge matrix PKA in terms of statistical modeling. The dataset is again fitted with a structural equation model under the constraint of the causal structure explored with DirectLiNGAM, assuming linear-Gaussian data, and the Bayesian Information Criterion (BIC) is calculated. After repeating this process, the acyclic prior knowledge matrix with which the BIC becomes the lowest is selected as PKA. The overall transformation process is described in Algorithm 3, and a simplified code sketch of its edge-removal step is shown below. However, in the practical application of this method, it must be noted that completing the list of acyclic prior knowledge matrices A incurs significant computational costs. Hence, as the number of variables increases, completing this calculation algorithm in a realistic time frame becomes more challenging.
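The following Python sketch illustrates the edge-removal loop at the core of Algorithm 3, using networkx for cycle detection. It is a simplified sketch under our own naming and orientation assumptions; the BIC-based selection among the resulting candidates (the second half of Algorithm 3) is omitted.

```python
from collections import Counter

import networkx as nx
import numpy as np

def acyclic_candidates(pk_cyclic):
    """Sketch of the edge-removal step of Algorithm 3: repeatedly delete the
    most frequent cycle edges until acyclic matrices appear.

    pk[i, j] = 1 is read here as the edge x_j -> x_i (an assumption matching
    the GT convention), so the graph is built from the transposed matrix."""
    frontier = [np.array(pk_cyclic, dtype=int)]
    while True:
        graphs = [nx.from_numpy_array(m.T, create_using=nx.DiGraph) for m in frontier]
        acyclic = [m for m, g in zip(frontier, graphs)
                   if nx.is_directed_acyclic_graph(g)]
        if acyclic:
            # Candidates for PK_A; Algorithm 3 then fits each with DirectLiNGAM
            # and keeps the matrix with the lowest BIC (omitted here).
            return acyclic
        next_frontier = []
        for m, g in zip(frontier, graphs):
            # Collect every edge that participates in a cycle (the set E_m).
            edge_counts = Counter(
                e for cycle in nx.simple_cycles(g)
                for e in zip(cycle, cycle[1:] + cycle[:1])
            )
            top = max(edge_counts.values())
            for (j, i), c in edge_counts.items():  # graph edge (j, i) = x_j -> x_i
                if c == top:  # branch once per most frequent cycle edge (the set F_m)
                    m2 = m.copy()
                    m2[i, j] = 0
                    next_frontier.append(m2)
        frontier = next_frontier
```

Because each iteration branches over all most frequent cycle edges, the number of candidate matrices can grow quickly, which reflects the computational cost noted above.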
For the future generalization and application of our inference method using DirectLiNGAM, the development of more efficient algorithms for transforming a cyclic matrix into an acyclic one is anticipated.

Algorithm 3 Transformation of Cyclic PK into Acyclic Adjacency Matrices and Selection of the Optimal Matrix

Input 1: Cyclic prior knowledge matrix PKC
Input 2: Data X with variables {x1, ..., xn}
Input 3: DirectLiNGAM Algorithm L(X, PK)
Output: Optimal acyclic matrix PKA

Initialize the temporal set for matrices T ← {PKC}
Initialize the temporal set for the numbers of cycles N ← {}
Initialize the temporal set for acyclic matrices A ← {}
repeat
  for matrix Tm ∈ T do
    Count the number of cycles in Tm as Nm
    Add Nm to N
  end for
  if ∃Nm ∈ N : Nm = 0 then
    Detect all Tm which satisfy Nm = 0 and add them to A
  else
    Initialize the temporal set for modified matrices T′ ← {}
    for Tm ∈ T do
      Initialize the set for edges included in cycles Em ← {}
      Initialize the set for edges to be removed Fm ← {}
      Detect all cycles in Tm
      For each detected cycle, identify all the edges that form the cycle
      Add these edges to the set of edges included in cycles Em
      Detect the most frequent edges "xi ← xj" in Em as (i, j)f
      ∀(if , jf ) add (if , jf ) to Fm
      for (if , jf ) ∈ Fm do
        T′m = Tm
        T′m(if , jf ) ← 0
        Add T′m to T′
      end for
    end for
    Replace T with T′
  end if
until A is not empty
Initialize the optimal BIC value B = None
Initialize the optimal acyclic matrix for prior knowledge in DirectLiNGAM Aoptimal BIC = None
for Am ∈ A do
  Calculate the adjacency matrix (with components 0 or 1) of the causal discovery result Adj = L(X, PK = Am)
  Fit X with the structural causal equation model represented in Adj, assuming linear-Gaussian data
  Calculate BIC with Adj and X as Btemp
  if B > Btemp or B = None then
    B ← Btemp
    Aoptimal BIC = Am
  end if
end for
return Aoptimal BIC as PKA

## G Details Of LLM-KBCI Results

It is also valuable to examine the details of the probability matrices generated by LLM-KBCI, both for the basic discussion on whether LLMs can generate a valid interpretation of causality from a domain expert's point of view, and for understanding the characteristics of SCP. In this section, the probability matrices generated by GPT-4 for Auto MPG data and DWD climate data, which are relatively easy to interpret within common daily knowledge, are shown and briefly interpreted. For comparison among the various SCP patterns (Patterns 1–4) using the same SCD method as much as possible, the probability matrices generated by GPT-4 with SCP are shown only for DirectLiNGAM. We also briefly present the probability matrices of LLM-KBCI for the sampled sub-dataset of health screening results.

## G.1 LLM-KBCI For Auto MPG Data

In Table 11, the probabilities of causal relationships of pairs of variables in Auto MPG data are shown. The cells highlighted in green are the ones in which the directed edges are expected to appear from the ground truths shown in Figure 3. For all the prompting patterns, although the probability of "Weight"→"Displacement", which is interpreted as one of the ground truth directed edges, is 0, the probability of the reversed edge "Displacement"→"Weight" is non-zero and over 0.95 in Patterns 1–4. For understanding this behavior and elucidating the true causal relationship between these two variables, further discussion is required, including the possibility of hidden common causes that are excluded from the dataset we have used.
In addition to that, although we do not believe in the existence of the directed edge "Displacement"→"Acceleration," the probability of this causal relationship is over 0.85 for all the prompting patterns. This may be due to the property of the prompting for evaluating the probability. As shown in Table 2, GPT-4 is allowed to judge the existence of both direct and indirect causal relationships, to acquire a positive answer even if any intervening variables are not included in the dataset. However, for example, considering that the probabilities of both "Displacement"→"Horsepower" and "Horsepower"→"Acceleration," which are part of the ground truths, are relatively high, it is also possible that GPT-4 supports the hypothesis of some impact from "Displacement" on "Acceleration" partially due to the confidence in the indirect causal relationship "Displacement"→"Horsepower"→"Acceleration". If one wants to distinguish the direct and indirect causal relationships in the interpretation of the probability matrix, investigation of the response from LLMs for the first prompting may lead to further understanding.

Some differences that can be related to the prompting patterns can also be observed. For example, the probability of "Horsepower"→"Mpg" in Pattern 1 is much smaller than in the other patterns. Moreover, the probabilities of "Horsepower"→"Acceleration" in Patterns 1 and 3 are smaller than in the other patterns, in which the probability of this edge is almost 1. A possible explanation of these behaviors is that the decision-making of GPT-4 is unsettled with SCP, in which the causal structure inferred by DirectLiNGAM shown in Figure 4 (c) is included. As neither "Horsepower"→"Acceleration" nor "Horsepower"→"Mpg" appears in Figure 4 (c), despite the confidence in the existence of these edges from the domain knowledge alone, the decision-making of GPT-4 may become more careful, taking into account the result of SCD. Elucidating what kinds of decision-making of LLMs are likely to be affected by SCP is desirable future work.

## G.2 LLM-KBCI For DWD Climate Data

In Table 12, the probabilities of the causal relationships of pairs of variables in DWD climate data are shown. The cells highlighted in green are the ones in which the directed edges are expected to appear from the ground truths shown in Figure 5. For all the prompting patterns, it is confirmed that all of the probabilities of the causal effects on "Altitude," "Longitude," and "Latitude" from other variables are 0. As these three variables are geographically given and fixed, the interpretation by GPT-4 that they act as parent variables that are not influenced by other factors is completely reasonable.
Although "Altitude" and "Latitude" are somehow influenced according to the result of DirectLiNGAM without prior knowledge as shown in Figure 6 (c), SCP including these unnatural results | | Pattern 0 | | | | | |----------------|----------------|-------|--------------|----------|----------------| | EFFECTED\CAUSE | "Displacement" | "Mpg" | "Horsepower" | "Weight" | "Acceleration" | | "Displacement" | - | 0.000 | 0.000 | 0.000 | 0.000 | | "Mpg" | 0.999 | - | 0.997 | 1.000 | 1.000 | | "Horsepower" | 0.999 | 0.000 | - | 0.000 | 0.000 | | "Weight" | 0.635 | 0.000 | 0.000 | - | 0.000 | | "Acceleration" | 0.996 | 0.023 | 0.998 | 0.998 | - | | | Pattern 1 | | | | | | EFFECTED\CAUSE | "Displacement" | "Mpg" | "Horsepower" | "Weight" | "Acceleration" | | "Displacement" | - | 0.000 | 0.000 | 0.000 | 0.000 | | "Mpg" | 1.000 | - | 0.128 | 0.484 | 0.058 | | "Horsepower" | 1.000 | 0.056 | - | 0.001 | 0.000 | | "Weight" | 0.994 | 0.000 | 0.000 | - | 0.000 | | "Acceleration" | 0.859 | 0.000 | 0.828 | 0.998 | - | | | Pattern 2 | | | | | | EFFECTED\CAUSE | "Displacement" | "Mpg" | "Horsepower" | "Weight" | "Acceleration" | | "Displacement" | - | 0.000 | 0.000 | 0.000 | 0.000 | | "Mpg" | 1.000 | - | 0.999 | 1.000 | 0.984 | | "Horsepower" | 1.000 | 0.000 | - | 0.000 | 0.000 | | "Weight" | 0.997 | 0.000 | 0.000 | - | 0.000 | | "Acceleration" | 0.995 | 0.002 | 0.996 | 0.999 | - | | | Pattern 3 | | | | | | EFFECTED\CAUSE | "Displacement" | "Mpg" | "Horsepower" | "Weight" | "Acceleration" | | "Displacement" | - | 0.000 | 0.000 | 0.000 | 0.000 | | "Mpg" | 0.977 | - | 0.969 | 0.754 | 0.547 | | "Horsepower" | 1.000 | 0.051 | - | 0.696 | 0.010 | | "Weight" | 0.954 | 0.000 | 0.000 | - | 0.000 | | "Acceleration" | 0.981 | 0.000 | 0.435 | 0.809 | - | | | Pattern 4 | | | | | | EFFECTED\CAUSE | "Displacement" | "Mpg" | "Horsepower" | "Weight" | "Acceleration" | | "Displacement" | - | 0.000 | 0.000 | 0.000 | 0.000 | | "Mpg" | 0.995 | - | 0.994 | 0.997 | 0.940 | | "Horsepower" | 0.999 | 0.314 | - | 0.006 | 0.000 | | "Weight" | 0.999 | 0.000 | 0.012 | - | 0.000 | | "Acceleration" | 0.964 | 0.000 | 0.989 | 0.814 | - | has not affected the decision-making by GPT-4. From this behavior, it is inferred that the response regarding axiomatic and self-evident matters from GPT-4 is robust and not likely to be affected by SCP, even if the SCD result exhibits obviously unnatural behaviors. In addition, while "Longitude"→"Temperature," which are assumed to be a ground truth, is not likely to be asserted by GPT-4, "Temperature"→"Precipitation," which is not expected to be a ground truth, is likely to be asserted by GPT-4, across all the prompting patterns. For further interpretation of these unexpected behaviors from our ground truths, investigation of the response generated in the first prompting process is recommended. It is also interesting that although the probabilities of "Longitude"→"Precipitation" are 0 in Patterns 0–2, they become non-zero finite values in Patterns 3 and 4, in which the causal coefficient of this directed edge calculated with DirectLiNGAM is included in SCP. This behavior may be a glimpse that SCP can assist the decision-making of GPT-4 even if it generates an incomplete response on causal relationships with its background knowledge. Table 12: Probabilities of the causal relationships suggested by GPT-4 in DWD climate data. The cells in which the directed edges are expected to appear from the ground truths as shown in Figure 5 are highlighted in green. 
Pattern 0

| EFFECTED\CAUSE | "Altitude" | "Temperature" | "Precipitation" | "Longitude" | "Sunshine" | "Latitude" |
|------------------|--------------|-----------------|-------------------|---------------|--------------|--------------|
| "Altitude" | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| "Temperature" | 1.000 | - | 0.891 | 0.000 | 1.000 | 1.000 |
| "Precipitation" | 1.000 | 0.999 | - | 0.000 | 0.001 | 0.995 |
| "Longitude" | 0.000 | 0.000 | 0.000 | - | 0.000 | 0.000 |
| "Sunshine" | 1.000 | 0.000 | 0.998 | 0.000 | - | 1.000 |
| "Latitude" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 4

| EFFECTED\CAUSE | "Altitude" | "Temperature" | "Precipitation" | "Longitude" | "Sunshine" | "Latitude" |
|------------------|--------------|-----------------|-------------------|---------------|--------------|--------------|
| "Altitude" | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| "Temperature" | 0.982 | - | 0.023 | 0.029 | 0.990 | 0.958 |
| "Precipitation" | 0.826 | 0.987 | - | 0.927 | 0.010 | 0.797 |
| "Longitude" | 0.000 | 0.000 | 0.000 | - | 0.000 | 0.000 |
| "Sunshine" | 0.534 | 0.021 | 0.387 | 0.013 | - | 0.638 |
| "Latitude" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 3

| EFFECTED\CAUSE | "Altitude" | "Temperature" | "Precipitation" | "Longitude" | "Sunshine" | "Latitude" |
|------------------|--------------|-----------------|-------------------|---------------|--------------|--------------|
| "Altitude" | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| "Temperature" | 0.919 | - | 0.016 | 0.003 | 0.615 | 0.973 |
| "Precipitation" | 0.585 | 0.996 | - | 0.175 | 0.002 | 0.008 |
| "Longitude" | 0.000 | 0.000 | 0.000 | - | 0.000 | 0.000 |
| "Sunshine" | 0.039 | 0.000 | 0.001 | 0.875 | - | 0.199 |
| "Latitude" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 2

| EFFECTED\CAUSE | "Altitude" | "Temperature" | "Precipitation" | "Longitude" | "Sunshine" | "Latitude" |
|------------------|--------------|-----------------|-------------------|---------------|--------------|--------------|
| "Altitude" | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| "Temperature" | 0.997 | - | 0.007 | 0.000 | 0.999 | 0.989 |
| "Precipitation" | 0.739 | 0.999 | - | 0.000 | 0.000 | 0.384 |
| "Longitude" | 0.000 | 0.000 | 0.000 | - | 0.000 | 0.000 |
| "Sunshine" | 0.874 | 0.010 | 0.976 | 0.000 | - | 0.981 |
| "Latitude" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 1

| EFFECTED\CAUSE | "Altitude" | "Temperature" | "Precipitation" | "Longitude" | "Sunshine" | "Latitude" |
|------------------|--------------|-----------------|-------------------|---------------|--------------|--------------|
| "Altitude" | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| "Temperature" | 0.384 | - | 0.034 | 0.000 | 0.856 | 0.011 |
| "Precipitation" | 0.025 | 0.999 | - | 0.000 | 0.036 | 0.026 |
| "Longitude" | 0.000 | 0.000 | 0.000 | - | 0.000 | 0.000 |
| "Sunshine" | 0.006 | 0.011 | 0.008 | 0.000 | - | 0.596 |
| "Latitude" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

## G.3 LLM-KBCI For Dataset Of Health Screening Results

In Table 13, the probabilities of the causal relationships of pairs of variables in our sampled sub-dataset of health screening results are shown. The cells highlighted in red are the ones in which the directed edges are expected to appear, as described in Appendix C.4. In contrast, since "Age" is an unmodifiable background factor, it can be concluded that it is not a descendant of any other variables. Therefore, the probabilities in the cells highlighted in blue are expected to be 0. Across all the prompting patterns, it is confirmed that all of the probabilities of the causal effects on "Age" from other variables are indeed 0.
From this fact, it is likely to be regarded by GPT-4 as axiomatic and self-evident that "Age" cannot be affected by other variables, and the judgment of the causal relationships is not influenced by SCP, even if the SCD result exhibits obviously unnatural behaviors as shown in Figure 10.

Table 13: Probabilities of the causal relationships suggested by GPT-4 in the sampled sub-dataset of health screening results. The cells in which the directed edges are expected to appear from the ground truths are highlighted in red. In contrast, the probabilities in the cells highlighted in blue are expected to be zero, since "Age" is expected to be a parent variable for all other variables.

Pattern 0

| EFFECTED\CAUSE | "BMI" | "Waist" | "SBP" | "DBP" | "HbA1c" | "LDL" | "Age" |
|----------------|---------|-----------|---------|---------|-----------|---------|---------|
| "BMI" | - | 0.994 | 0.000 | 0.000 | 0.000 | 0.000 | 0.901 |
| "Waist" | 1.000 | - | 0.000 | 0.000 | 0.000 | 0.000 | 0.353 |
| "SBP" | 0.999 | 0.962 | - | 0.998 | 0.987 | 0.000 | 0.626 |
| "DBP" | 0.998 | 0.995 | 0.993 | - | 0.000 | 0.000 | 0.001 |
| "HbA1c" | 0.998 | 0.998 | 0.000 | 0.000 | - | 0.000 | 0.986 |
| "LDL" | 0.988 | 0.967 | 0.000 | 0.000 | 0.000 | - | 0.002 |
| "Age" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 4

| EFFECTED\CAUSE | "BMI" | "Waist" | "SBP" | "DBP" | "HbA1c" | "LDL" | "Age" |
|----------------|---------|-----------|---------|---------|-----------|---------|---------|
| "BMI" | - | 0.993 | 0.000 | 0.000 | 0.024 | 0.006 | 0.037 |
| "Waist" | 1.000 | - | 0.000 | 0.000 | 0.957 | 0.000 | 0.395 |
| "SBP" | 0.999 | 0.982 | - | 0.001 | 0.998 | 0.000 | 0.795 |
| "DBP" | 0.994 | 0.204 | 0.985 | - | 0.408 | 0.000 | 0.926 |
| "HbA1c" | 0.824 | 0.391 | 0.000 | 0.000 | - | 0.000 | 0.176 |
| "LDL" | 0.485 | 0.403 | 0.000 | 0.000 | 0.000 | - | 0.027 |
| "Age" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 3

| EFFECTED\CAUSE | "BMI" | "Waist" | "SBP" | "DBP" | "HbA1c" | "LDL" | "Age" |
|----------------|---------|-----------|---------|---------|-----------|---------|---------|
| "BMI" | - | 0.003 | 0.000 | 0.000 | 0.868 | 0.923 | 0.306 |
| "Waist" | 1.000 | - | 0.000 | 0.000 | 0.983 | 0.000 | 0.076 |
| "SBP" | 1.000 | 0.855 | - | 0.959 | 0.999 | 0.000 | 0.235 |
| "DBP" | 1.000 | 0.032 | 0.140 | - | 0.021 | 0.000 | 0.095 |
| "HbA1c" | 0.967 | 0.634 | 0.000 | 0.000 | - | 0.000 | 0.046 |
| "LDL" | 0.562 | 0.165 | 0.000 | 0.000 | 0.085 | - | 0.013 |
| "Age" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 2

| EFFECTED\CAUSE | "BMI" | "Waist" | "SBP" | "DBP" | "HbA1c" | "LDL" | "Age" |
|----------------|---------|-----------|---------|---------|-----------|---------|---------|
| "BMI" | - | 0.998 | 0.001 | 0.001 | 0.996 | 0.000 | 0.093 |
| "Waist" | 0.999 | - | 0.000 | 0.003 | 0.959 | 0.007 | 0.099 |
| "SBP" | 0.998 | 0.994 | - | 0.983 | 0.994 | 0.040 | 0.207 |
| "DBP" | 0.997 | 0.975 | 0.984 | - | 0.983 | 0.002 | 0.115 |
| "HbA1c" | 0.982 | 0.608 | 0.002 | 0.000 | - | 0.000 | 0.723 |
| "LDL" | 0.994 | 0.946 | 0.000 | 0.000 | 0.452 | - | 0.171 |
| "Age" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |

Pattern 1

| EFFECTED\CAUSE | "BMI" | "Waist" | "SBP" | "DBP" | "HbA1c" | "LDL" | "Age" |
|----------------|---------|-----------|---------|---------|-----------|---------|---------|
| "BMI" | - | 0.312 | 0.000 | 0.000 | 0.014 | 0.000 | 0.076 |
| "Waist" | 1.000 | - | 0.000 | 0.000 | 0.023 | 0.000 | 0.043 |
| "SBP" | 0.999 | 0.912 | - | 0.999 | 0.997 | 0.000 | 0.302 |
| "DBP" | 0.998 | 0.421 | 0.050 | - | 0.000 | 0.000 | 0.019 |
| "HbA1c" | 0.517 | 0.503 | 0.101 | 0.000 | - | 0.000 | 0.170 |
| "LDL" | 0.008 | 0.527 | 0.000 | 0.000 | 0.000 | - | 0.517 |
| "Age" | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | - |
Review 1:

Summary: The authors introduce a prompting strategy for causal discovery. The intuition is to first use a statistical causal discovery method, pass the result in some form to an LLM to find corrections (using a self-criticising prompting strategy), and rerun the statistical method with the results from the LLM. The authors evaluate their method on 3 datasets (all with a small number of variables), and a new dataset which will not be disclosed.

Strengths and Weaknesses:

Strengths:
- Combining traditional causal discovery methods with LLMs for prior knowledge is an interesting idea.
- Quite a few variants of the method are tried, although there is no conclusive evidence for any of them.
- Using a held-out dataset helps motivate the generality of the method.

Weaknesses:
- The method is not analysed from a theoretical point of view. It is unclear how the LLM could impact traditional issues in causal discovery, or if there are any guarantees on that front.
- It is unclear how or if this method would scale to problems with more variables.
- The reproducibility of the method is low. It is unclear if code will be provided, the fourth dataset will not be shared, and GPT-4 is used, which changes versions on a whim and is closed source (in fact the version used in this paper is already offline).
- The paper mentions several related works also exploring the integration of LLMs and causal discovery, but does not experimentally compare to them or analyse why or when this method would be preferred. The extra changes focusing on quantitative properties in the prompting don't seem to have a major effect experimentally. I am not familiar with this related work and cannot judge the novelty.

Requested Changes:
- The paper would be significantly stronger if the reproducibility is increased through public code and evaluation on an open-source model.
- Under related work it is mentioned that the held-out dataset helps motivate the use of existing work. However, in the experimental section it is not clear where there is a comparison to this work.
- I did not understand the need for measuring log probabilities multiple times. How important is using M=5 for these experiments?
- Is it correct that no in-context learning is used in the prompt?
- It is unclear what 'forbidden or forced causal relationships' are in this context.
- In my experience very few probabilities fall in [0.05, 0.95], so doesn't this hyperparameter setting enforce everything to be either 'forbidden or forced'?
- There are a lot of metrics in table 3. Without the relevant background knowledge it is hard to understand what is going on and what each means.
- If I understand correctly, there is no standard deviation on the result, yet this could be quite high given the randomness of the GPT model with temperature 0.7. This should be mentioned, especially since the relative results on the different patterns seem quite random.
- I did not understand (quickly, at least) what table 4 A was trying to show.

Minor textual changes:
- Abstract: The sentence starting with 'Experiments have revealed' is hard to parse.
- Introduction: The claim that "LLMs can be expected to perform objective evaluation of causal relationships" is strong and should be motivated with a thorough study.
- Introduction: "observed in **inference themes** involving". Unclear what is meant here.
- 1.2: extra space after LLM inference.
- Footnote 3: "Hidden common causes = hidden confounder?"
- 3.1: notations -> notation, the the input elements
- 4.1: 'the baseline result is **that wo**"
- Table 3: the the
- Table 3: I suppose blue highlights the best result; however, in the CFI and RMSEA columns often Baseline A is strongest but not marked.
- Table 3: Is the number in the parentheses (e.g. 5 (8)) meant to be the standard deviation? I suppose not, but it's not clear what it is meant to represent.
- Figure 2b: Is this the output of pattern 2 or 4?

Broader Impact Concerns: na

==================================================

Review 2:

Summary: The paper introduces an approach that combines statistical causal discovery (SCD) with knowledge-based causal inference (KBCI) using large language models (LLMs). The authors present the statistical causal prompting (SCP) method, which employs the domain expertise embedded in LLMs to improve the accuracy of causal models derived from SCD algorithms. The paper validates this method through experiments on benchmark datasets and an unpublished real-world dataset, demonstrating significant enhancements in SCD results when supplemented with LLM-provided background knowledge. Notably, these improvements are achieved even when the dataset is not part of the LLM's training data.

Strengths and Weaknesses:

* Strengths
1. The integration of SCD and LLM-KBCI through SCP is a significant advancement in causal discovery, enhancing model accuracy.
2. The detailed explanation of the algorithms and prompting techniques used provides clarity and replicability to the study.

* Weaknesses
1. While the results are promising, the generalization across different types of datasets and LLMs needs further exploration to establish broader applicability.
2. The paper acknowledges that the transformation of cyclic matrices into acyclic ones for DirectLiNGAM is computationally expensive, which could be a barrier for some applications.

Requested Changes:
1. The authors should discuss the potential impact of using different LLMs and whether the choice of GPT-4 (e.g., over smaller models) introduces any biases or limitations to the findings. They should also add a comparison and discussion between using a probability critic and a natural language critic for the evaluation of causal relationships.
2. Including a sensitivity analysis to understand how variations in the LLM's confidence levels affect causal inference outcomes would strengthen the study.
3. The paper should address the potential for overfitting when using SCP, particularly with smaller datasets or those with inherent biases.

Broader Impact Concerns:
1. The potential for the LLM to inherit and propagate biases present in its training data, and how this might affect causal inference results.
2. The implications of using such a system in high-stakes domains like healthcare or economics, where incorrect causal inferences could lead to significant consequences.
3. A discussion on the transparency and interpretability of the models produced by the integration of LLMs and SCD, and the potential for "black-box" decision-making.

==================================================

Review 3:

Summary: This paper considered the LLM as an expert prior in **data-driven** causal discovery. Specifically, the LLM is used through a set of queries to analyze the plausibility of the causal discovery results. The discovered results seem more human-compatible, compared with traditional methods.

Strengths and Weaknesses:

**Pros**

Integrating proper expert knowledge into causal discovery is quite interesting and promising in discovery science.
Indeed, there is a long-term belief that a human-plausible prior could regularize causal discovery. This paper interestingly considered this and leveraged the LLM as a proxy of the human. I feel like this direction is worthy, promising and exciting.

**Cons**

In general, my concerns are mainly conceptual. I feel like causal discovery may not be a proper name for these kinds of methods. Indeed, I feel like it's more or less Bayes structure discovery. Please see the requested changes for my concerns.

Requested Changes:

**About conceptual contribution**

I do agree this paper has sort of interesting insights. However, I have concerns about such methods in real-world causal discovery such as healthcare. Indeed, the LLM is often trained as a black box and lacks a clear underlying mechanism. Therefore the expert prior should always have a certain bias or misspecification in certain questions. This is crucial in practical causal discovery. For example, in Figure 1, the X-ray to cancer question has a certain ambiguity in a certain sense. Indeed, we cannot causally infer that if someone has cancer, the X-ray will be True. Alternatively, we can infer this relation in probability, i.e., if someone has cancer, the probability of X-ray=True is 95%. This is a much more reasonable argument. If we take a closer look at this example, prior knowledge provides a better Bayes probability rather than the true causal probability. Therefore I think the main contribution should not simply be causal discovery.

Another concern lies in Step 3, the probability estimation from GPT. I also have concerns about creating such probabilistic outputs. As far as I understand, this paper adopted Monte Carlo estimation to measure the probability. Is there any alternative to do so? I feel like the current probability estimation seems a bit awkward.

**Suggestions**

I think this paper itself is worthy of publication. However, the current submission lacks rigor in the context of causal discovery. I carefully checked the limitations section and found it rather limited. I would like this paper to have an effective revision based on:
- Clearly illustrating the limitations and risks of using current LLMs in data-driven causal discovery.
- Showing the failure scenarios in which the LLM worsens the causal discovery.
- Clearly showing how the probability of confidence is obtained, including its sensitivity and reliability.

Broader Impact Concerns: See the requested changes

==================================================

Metareview:

Recommendation: Reject

Comment: Two reviewers out of three recommended the paper to be accepted, even if their recommendation is "leaning accept", while the third reviewer recommended rejection. I went through all reviews and rebuttals from the authors and found that the vast majority of issues raised by the reviewers were addressed and solved in a sensible manner, as two out of three reviewers agree. I also paid particular attention to the review and the arguments from the third reviewer, who recommended rejection. I found the arguments developed there to make sense and to be useful in fostering the discussion. If what is suggested by SVi9 is achieved, it will significantly improve the impact of the authors' interesting work, exactly as SVi9 agrees upon in the last comments to the recommendation. I also think the approach should be made more specific for any case to be analyzed, and that it must not be presented as a general model to solve such complex problems as those in the healthcare domain.
In light of these considerations, I suggest rejecting the paper, but with very strong encouragement to the authors to rework and resubmit after having carefully addressed and managed the SVi9 issues.

==================================================
# Exploring The Potential Of Direct Feedback Alignment For Continual Learning Anonymous authors Paper under double-blind review ## Abstract Real-world applications of machine learning require robustness to shifts in the data distribution over time. A critical limitation of standard artificial neural networks trained with backpropagation (BP) is their susceptibility to catastrophic forgetting: they "forget" prior knowledge when trained on a new task, while biological neural networks tend to be more robust to catastrophic forgetting. While various algorithmic ways of mitigating catastrophic forgetting have been proposed, developing an algorithm that is capable of learning continuously remains an open problem. Motivated by recent theoretical results, here we explore whether a biologically inspired learning algorithm like Direct Feedback Alignment (DFA) can mitigate catastrophic forgetting in artificial neural networks. We train fully-connected networks on several continual learning benchmarks using DFA and compare its performance to vanilla back propagation, random features, and other continual learning algorithms. We find that an inherent bias of DFA, called "degeneracy breaking", leads to low average forgetting on common continual learning benchmarks when using DFA in the Domain-Incremental learning scenario and in the Task-Incremental learning scenario. We show how to control the trade-off between learning and forgetting with DFA, and relate different modes of using DFA to other methods in the field. ## 1 Introduction Neural networks have demonstrated remarkable achievements in various domains, including image recognition (Krizhevsky et al., 2012; LeCun et al., 2015; Simonyan & Zisserman, 2015; He et al., 2016; Dosovitskiy et al., 2021) and natural language processing (Devlin et al., 2019; Howard & Ruder, 2018; Radford et al., 2018; Brown et al., 2020; OpenAI, 2024). However, a significant drawback of traditional neural networks is their susceptibility to catastrophic forgetting (Goodfellow et al., 2015; McCloskey & Cohen, 1989). Catastrophic forgetting refers to a loss in performance on previously learned tasks when the network is trained on new tasks or data distributions. This limitation impedes the ability of neural networks to continuously learn and adapt to evolving environments, limiting their practical applicability in real-world scenarios. Continual learning (CL) focuses on developing algorithms and techniques that enable systems to maintain old knowledge, while also being able to learn new information. This requires resolving the "stability–plasticity dilemma" (Carpenter, 1986; Mermillod et al., 2013): a model needs plasticity to obtain new knowledge and adapt to new environments, while also requiring stability to prevent forgetting of previous information. Typical approaches to CL include dynamic architectures (Razavian et al., 2014), progressive learning (Fayek et al., 2020), regularisation techniques (Kirkpatrick et al., 2017; Zenke et al., 2017), or episodic memory replay (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019; Rebuffi et al., 2017; Shin et al., 2017; Kamra et al., 2017; Seff et al., 2017); see (De Lange et al., 2021) for a review. Meanwhile, biological neural networks do not suffer from catastrophic forgetting nearly as badly as artificial neural networks. 
Consequently, several approaches to mitigate catastrophic forgetting have taken direct inspiration from biology: for example, the complexity of synaptic plasticity in biological neurons inspired regularisation approaches such as Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) and Synaptic Intelligence (Zenke et al., 2017). Here, we continue this line of thought by investigating the potential of a biologically plausible learning algorithm to mitigate catastrophic forgetting: Direct Feedback Alignment (DFA). Proposed by Nøkland (2016) as a biologically plausible algorithm to train neural networks, DFA propagates the error signal through fixed, random feedback connections directly to the hidden layers, thereby solving the weight transport problem that plagues vanilla backpropagation (Grossberg, 1987; Crick, 1989). Decoupling the weight updates of different layers enables parallel and local updates, which are considered key features of synaptic updates in the brain (Bengio et al., 2015). We focus on the potential of DFA not just because of its greater biological plausibility compared to vanilla backpropagation. We are encouraged by the recent analysis of Refinetti et al. (2021), who showed that DFA has a *degeneracy breaking* property. To learn with DFA, the neural network has to first align its weights (to some extent) with the feedback matrices to ensure that the error signal can be backpropagated efficiently (Lillicrap et al., 2016). This alignment has the effect that neural networks trained by DFA always converge to the same region in the loss landscape, independently of their initialisation, and in contrast to networks trained by vanilla backpropagation. In other words, DFA will drive a neural network to a specific region in the loss landscape. The location of this region depends on the feedback matrices, thereby breaking the degeneracy of the solutions of vanilla SGD. In this work, we ask whether we can use the influence of the DFA feedback matrix on the weights learnt by a neural network to mitigate catastrophic forgetting. We formulate and test two different hypotheses for how DFA can facilitate continual learning. Our **first hypothesis** is that using the *same* feedback matrix for different tasks in a continual learning curriculum prevents catastrophic forgetting by implicitly biasing the weights to a single region of the loss landscape, as they always need to align with the same feedback matrix. In this case, DFA would act as an *implicit* regulariser, similar to other algorithms like Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), a popular implicit regularisation technique. We will call this approach **DFA-same**. Our **second hypothesis** is that using *different* feedback matrices for each task will effectively update the weight matrices in different directions for each task, thus preventing catastrophic forgetting. This **DFA-diff** approach is inspired by various CL algorithms that explicitly orthogonalise gradients (Zeng et al., 2019; He & Jaeger, 2018; Bennani & Sugiyama, 2020). This idea can also be motivated from a neuroscientific perspective given recent evidence that neural population codes orthogonalize with learning (Flesch et al., 2022; Failor et al., 2021; Zeng et al., 2019).
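The following minimal numpy sketch makes the two strategies concrete; it follows the DFA update rules formalized in section 2 (Eqs. (2)–(4)), but all names, shapes and the feedback scale are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_feedback(n_out, n_hidden, scale=0.05):
    # Fixed random feedback matrices; `scale` controls the variance that
    # section 3.4 identifies as a plasticity/stability knob (value assumed).
    B1 = rng.uniform(-scale, scale, size=(n_hidden, n_out))
    B2 = rng.uniform(-scale, scale, size=(n_hidden, n_out))
    return B1, B2

def dfa_step(x, y, W1, W2, W3, B1, B2, lr=0.01):
    """One DFA update for a network with two ReLU hidden layers (sketch)."""
    a1 = W1 @ x;  h1 = np.maximum(a1, 0.0)
    a2 = W2 @ h1; h2 = np.maximum(a2, 0.0)
    ay = W3 @ h2
    y_hat = np.exp(ay - ay.max()); y_hat /= y_hat.sum()  # softmax output
    e = y_hat - y                                        # error, Eq. (1)
    # The readout layer is updated as in BP; the hidden layers receive the
    # error through the fixed random matrices B1, B2 instead of the
    # transposed forward weights, following Eqs. (2)-(4).
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer((B2 @ e) * (a2 > 0), h1)
    W1 -= lr * np.outer((B1 @ e) * (a1 > 0), x)
    return W1, W2, W3

# DFA-same: sample (B1, B2) once and reuse them for every task.
# DFA-diff: call sample_feedback() again at each task switch; in high
# dimensions the fresh matrices are approximately orthogonal to the old ones.
```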
Since efficient training of convolutional neural networks (CNNs) with DFA remains an open problem (Crafton et al., 2019; Launay et al., 2020), we focus on fully-connected networks and limit ourselves to simple benchmark datasets based on MNIST and FMNIST where these architectures achieve good performance. In particular, we use the split and the permuted manipulations of MNIST and FMNIST that turn the 10-class classification tasks into a set of successive classification tasks (Kirkpatrick et al., 2017; Zenke et al., 2017). The contributions of this paper are threefold:

1. We empirically show that DFA performs better at Continual Learning than vanilla backpropagation and other baselines, such as random features (RF) and Elastic Weight Consolidation (EWC).

2. We benchmark the performance of different DFA strategies for Continual Learning, either sharing or changing feedback matrices between tasks, and show how the best strategy depends on the type of continual learning task and the network architecture.

3. We show that the scale of the feedback matrix allows one to trade off plasticity vs. stability in continual learning, beyond what can be achieved by choosing layer-specific learning rates.

Exploring the potential of DFA to mitigate catastrophic forgetting both deepens our understanding of the working mechanism behind DFA by testing it in a continual learning setting, and it adds to the tool set that can be tested on continual learning problems. The remainder of this paper is organized as follows: In section 2, we provide the details of the experimental setup. Section 3 presents our results by benchmarking DFA on various continual learning tasks and contrasting its performance with various baselines. Section 4 is dedicated to a discussion of our results and gives some concluding perspectives.

## 2 Methods

Direct Feedback Alignment (DFA) DFA presents a departure from the traditional backpropagation algorithm (Rumelhart et al., 1988), which relies on weight symmetry and precise weight updates for accurate error propagation. Instead, DFA employs random feedback matrices that are decoupled from the forward weight matrices, enabling more flexible weight updates without the constraints imposed by weight symmetry (Nøkland, 2016). To state the DFA weight updates clearly, and to contrast them with vanilla backpropagation, we consider mini-batches of input-output pairs (x, y) that we want the network to learn. For simplicity, we consider a simple network composed of two fully-connected layers and one softmax layer at the end. We denote by Wi, hi and ai the weight matrix, the output and the activations of layer i; the output of the network is yˆ and the activations of the output layer are ay. The error e is the derivative of the loss J and in the case of a cross-entropy loss it is equal to:

$$e=\delta a_{y}=\frac{\partial J}{\partial a_{y}}=\hat{y}-y\tag{1}$$
Denoting ⊙ as the Hadamard product, for BP (on the left) and DFA (on the right), the gradients of the hidden layers are calculated as

$$\delta W_{3}=-e h_{2}^{T}\qquad\qquad\qquad\delta W_{3}=-e h_{2}^{T}\tag{2}$$

$$\delta W_{2}=(W_{3}^{T}e)\odot f^{\prime}(a_{2})\,h_{1}^{T}\qquad\qquad\delta W_{2}=(B_{2}e)\odot f^{\prime}(a_{2})\,h_{1}^{T}\tag{3}$$

$$\delta W_{1}=(W_{2}^{T}\delta a_{2})\odot f^{\prime}(a_{1})\,x^{T}\qquad\qquad\delta W_{1}=(B_{1}e)\odot f^{\prime}(a_{1})\,x^{T}\tag{4}$$

While the final layer (the readout layer) is updated in the same way with both algorithms, the weight updates for the other layers are all different. BP implements the exact gradient for each layer by applying the chain rule to compute the derivatives of the loss; instead, DFA keeps only the error term and substitutes the derivative of the following layer by an entry of the feedback matrix. A detailed theoretical analysis of the evolution of the update dynamics can be found in Refinetti et al. (2021).

In the case of DFA-same, the feedback matrices (B1 and B2) are initialized according to a uniform distribution and kept unchanged during the training of all tasks. According to the degeneracy breaking of DFA discussed in the introduction, DFA-same could alleviate catastrophic forgetting by keeping the weights of the network when training on the second task close to the weights learnt on the first task, etc. In DFA-diff, we use a different feedback matrix for each task, by sampling the new ones from the same uniform distribution used to sample the first one. Our hypothesis is that the different feedback matrices may bias the dynamics of the networks in orthogonal directions of the weight landscape for every task. In fact, due to the large dimensions of the matrix and their sampling from uniform distributions, the new feedback matrices are approximately orthogonal to the previous ones. The gradient alignment will then point in orthogonal directions for every task. Making the gradients explicitly orthogonal to each other is known to help against catastrophic forgetting, and the idea is exploited in methods such as Orthogonal Weight Modification (OWM) (Zeng et al., 2019), conceptor-aided backprop (He & Jaeger, 2018), and orthogonal gradient descent (Bennani & Sugiyama, 2020), because the features of different tasks are learned along orthogonal manifolds, and the weight updates do not interfere with the previous ones. Appendix A confirms that the gradients are indeed orthogonal when two networks are trained on the same dataset with orthogonal feedback matrices.

Benchmark datasets We report results on the FashionMNIST (FMNIST) dataset (Xiao et al., 2017), but our findings are consistent with the MNIST dataset, see appendix C. We do not apply data augmentation techniques, and train the networks on two different CL benchmark tasks:

![3_image_0.png](3_image_0.png)

Figure 1: A One example of the first three tasks in the split (binary classification) and permuted dataset (classification). B Similarity measures of the images in the same class averaged over the different classes of the Split and Permuted datasets. The permuted-FMNIST dataset is the one where the images have the highest similarity among tasks, while split-FMNIST has very different images in the same class. See appendix E for a description of the similarity measures.
C One example of a three-layer architecture used in the Domain-IL and Task-IL scenarios. In the second case, one output node is dedicated to training and evaluating one specific task.

1. Permuted FMNIST (pFMNIST), where we generate a sequence of learning tasks by permuting the pixels of each image; the idea here is to test a model's ability to incrementally learn new information with similar feature representations (Kemker et al., 2017).

2. Split FMNIST (sFMNIST), where we split the original dataset of 10 classes into five smaller datasets with two disjoint classes each. The resulting smaller datasets will have different modalities, so a model trained sequentially on them needs to be able to incrementally learn new information with dramatically different feature representations.

We show an example of the split and permuted FMNIST in fig. 1, panel A.

Experimental setup We test DFA in two continual learning scenarios (van de Ven & Tolias, 2019) (Panel C, fig. 1):

1. In the Domain-IL scenario, the network does not have the information about the task-ID during inference. The whole architecture is shared among the tasks, including the output layer.

2. In the Task-IL scenario, the network can use the task-ID during both training and testing. We implement this approach by having task-specific output layers that are updated only during the training on the corresponding task and used whenever testing on the same task.

![4_image_0.png](4_image_0.png)

Figure 2: Illustration of Average Forgetting (AF) and Average Performance (AP) in the case of backpropagation on the p-FMNIST task in the Task-IL scenario, where the network has a separate head for each task. AP describes the test accuracy of the network, averaged over the tasks, while AF quantifies the average of the difference in accuracy on a given task after training on it and at the end of the whole training trajectory.

We show in Figure 4 and in appendix A that the weight alignment and gradient alignment of DFA are preserved if the output layer is re-initialized. These tests are important because the degeneracy breaking feature of DFA was previously observed in networks with weights sampled from independent distributions, while in this case only the output layers of the networks in comparison are different. In our experiments, we use 3-layer fully-connected networks with 1000 neurons in each hidden layer. We train the networks for a maximum of 1000 epochs and apply early stopping by halting the training as soon as the network exceeds 99% training accuracy. All layers are initialized using the Xavier uniform initialization (Glorot & Bengio, 2010). We choose a logistic activation function in the output layer and ReLU in the other layers. The loss function is cross-entropy.

Evaluation metrics Performance in continual learning is difficult to report with one single measure due to the stability–plasticity trade-off. We will summarize the performances of the algorithms under study using average performance (AP) and average forgetting (AF) (Lopez-Paz & Ranzato, 2022; Chaudhry et al., 2018), which we visualize in fig. 2. To define AF and AP, we consider a number of tasks T and denote by Accij the accuracy of the network on the jth task at the end of training on the ith task, where j, i ∈ {1, . . . , T}.
AP quantifies the performance of the network by calculating the mean test accuracy of the network on each task, directly after training on it:

$$\mathrm{AP}={\frac{1}{T}}\sum_{i=1}^{T}\mathrm{Acc}_{i,i}.\tag{5}$$

AF measures how much the model loses accuracy on previous tasks during continual learning:

$$\mathrm{AF}={\frac{1}{T-1}}\sum_{i=1}^{T-1}\left(\mathrm{Acc}_{i,i}-\mathrm{Acc}_{T,i}\right)\tag{6}$$

Baselines and hyper-parameters tuning To benchmark the performance of DFA, we consider the following additional baselines:

1. Backpropagation is the most important baseline: we train the same fully-connected networks, adding a Dropout layer (Srivastava et al., 2014) after each layer, which we find helps with generalization, in accordance with the literature (Mirzadeh et al., 2020). We perform a grid-search optimization for the learning rate in the range between 1e-2 and 1e-4.

2. Random Features (Rahimi & Recht, 2008; 2009) are fully-connected networks in which we train only the readout layer with BP. The first and the intermediate layers are kept unchanged during the training phase. This model can be applied only in the Task-IL scenario because it requires a different output layer for each task in the testing phase. The motivation for using this model is to understand the performance of a neural network that does not learn data-dependent features, akin to the lazy regime (Chizat et al., 2019). We use a learning rate of 1e-2.

3. Elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) adds a regularisation term on the loss that penalizes the change of the weights that are more important for previous tasks. It achieves this by pre-multiplying the BP weight updates using the inverse of the diagonal approximation of the Fisher information matrix of the model. In our EWC experiments, we chose a learning rate of 1e-3 and an "importance" of 1000, which is another important hyper-parameter for EWC.

The error bars in table 1 are computed by repeating the experiment with 5 random seeds.

![5_image_0.png](5_image_0.png)

Figure 3: Performance of the first task throughout learning all tasks evaluated on the permuted-FMNIST (above) in the Task-IL scenario. DFA-same is more powerful than RF (purple line). Compared to BP, DFA-same has less forgetting in both cases: when we optimize the models for maximum performance and for least forgetting.

## 3 Results

## 3.1 DFA Between The Two Ends Of The Plasticity-Stability Trade-Off

Catastrophic forgetting arises in Backpropagation (Rumelhart et al., 1988) due to the fact that the weights of the network are updated in the direction of the minimum of the loss, irrespective of the previous tasks.

Table 1: Results in terms of Average Performance (AP) and Average Forgetting (AF) in the different datasets and scenarios (Task-IL and Domain-IL) for all the methods (DFA-diff, DFA-same, BP, BP-ablated, EWC, RF). In the first column, the networks are optimized for maximum performance and in the second column for minimum forgetting. We discuss all these results in detail in section 3 and we present them visually in Figure 5. In the case of minimized forgetting, we report here the minimum AP % values above the RF baseline.
| | | Optimized for performance | | Minimizing forgetting | |
|-----------------|------------|------------|------------|------------|------------|
| | | AP | AF | AP | AF |
| Split FMNIST | DFA-diff | 100 ± 0 | 9.8 ± 2 | 93.9 ± 0.1 | 0 ± 0 |
| Task-IL | DFA-same | 100 ± 0 | 29.4 ± 1.2 | 95.4 ± 0 | 0 ± 0 |
| | BP | 100 ± 0 | 15 ± 2.3 | 89.5 ± 0.1 | 0.3 ± 0.3 |
| | BP ablated | 100 ± 0 | 9.7 ± 1.6 | 94 ± 0 | 0 ± 0 |
| | EWC | 99.2 ± 0.2 | 4.5 ± 1.5 | 99.1 ± 0.2 | 3.4 ± 1.8 |
| | RF | 93.4 ± 0.1 | 0 ± 0 | 93.4 ± 0.1 | 0 ± 0 |
| Permuted FMNIST | DFA-diff | 88.5 ± 0.1 | 44.5 ± 2.4 | 83 ± 0 | 0 ± 0 |
| Task-IL | DFA-same | 88.2 ± 0.1 | 50.1 ± 2.5 | 83 ± 0 | 0 ± 0 |
| | BP | 90.0 ± 0.1 | 49.5 ± 2.5 | 83 ± 0 | 11.8 ± 0.8 |
| | BP ablated | 90 ± 0.1 | 47 ± 2 | 82.8 ± 0.1 | 0 ± 0 |
| | EWC | 88.7 ± 0.1 | 46.5 ± 2.7 | 83 ± 0 | 8.7 ± 1.2 |
| | RF | 80.9 ± 0.1 | 0 ± 0 | 80.9 ± 0.1 | 0 ± 0 |
| Split FMNIST | DFA-diff | 100 ± 0 | 45.8 ± 2.4 | 94.0 ± 0 | 35.56 ± 1.6 |
| Domain-IL | DFA-same | 100 ± 0 | 35.5 ± 0.2 | 98.2 ± 0 | 32.1 ± 1.6 |
| | BP | 100 ± 0 | 37.8 ± 0.4 | 94.5 ± 0.1 | 35.8 ± 0.6 |
| | BP ablated | 100 ± 0 | 40.5 ± 1.7 | 98.4 ± 0.1 | 34.7 ± 0.7 |
| | EWC | 100 ± 0 | 46.9 ± 4.8 | 98.7 ± 0.4 | 40.4 ± 3.1 |
| Permuted FMNIST | DFA-diff | 88.3 ± 0 | 71.8 ± 1.2 | 82.3 ± 0.1 | 29.1 ± 0.9 |
| Domain-IL | DFA-same | 88.2 ± 0.1 | 45.8 ± 1.9 | 85.8 ± 0 | 26.2 ± 1.9 |
| | BP | 89.7 ± 0.1 | 57.6 ± 1.1 | 85.4 ± 0.1 | 23.1 ± 1.5 |
| | BP ablated | 88.7 ± 0.1 | 49.4 ± 0.4 | 84 ± 0.1 | 26.2 ± 2.4 |
| | EWC | 80.7 ± 0.2 | 55.7 ± 2.5 | 80.7 ± 0.2 | 55.7 ± 2.5 |

In this case the network is flexible to fit the task at hand, but forgetting is high. Random features are at the other extreme: since only the weights in the output layer are trained on the task, it results in a more rigid method that retains previous performance. BP and random features therefore delineate the two extremes between which we would like to interpolate: we would like DFA to be less susceptible to catastrophic forgetting than BP, while being able to learn data-dependent features that allow it to beat the performance of random features.

We report the results of our experiments that we obtain by optimizing the hyper-parameters for maximum AP or for minimum AF in table 1. When we train DFA and BP under the same training conditions, we can see that DFA can reach the same performance of 100% accuracy as BP on the split dataset (where the tasks are binary classifications) and almost reach it on the permuted dataset (where the tasks consist of 10-class classifications), where DFA achieves 88.2 ± 0.1¹ and BP 89.7 ± 0.1. Thus, DFA proves to be able to adapt the network to learn the new tasks efficiently. In terms of forgetting, DFA achieves 10% less AF than BP in both the split and permuted datasets in the Domain-IL scenario. This trend is conserved when the experiments are performed with the MNIST datasets (see appendix C). The advantage of DFA in the Domain-IL scenario is in accordance with the hypothesis that DFA learns close representations for different tasks. In fact, the Domain-IL scenario has a single output layer shared among all the tasks, and this requires similar representations among different tasks in order to correctly classify the previous ones.

¹ We notice that letting DFA train for more epochs, it can achieve the same level as BP, but we decided to keep a maximum of 1000 epochs for all methods.
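For reference, the two metrics reported in table 1 can be computed from the matrix of per-task accuracies as in the minimal numpy sketch below (Eqs. (5)–(6)); the function name and the accuracy-matrix layout are our illustrative assumptions.

```python
import numpy as np

def ap_af(acc):
    """Average Performance and Average Forgetting from Eqs. (5)-(6).

    acc[i, j]: test accuracy on task j at the end of training on task i,
    for i, j = 0, ..., T-1 (a T x T matrix; illustrative sketch)."""
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]
    ap = np.mean([acc[i, i] for i in range(T)])                      # Eq. (5)
    af = np.mean([acc[i, i] - acc[T - 1, i] for i in range(T - 1)])  # Eq. (6)
    return ap, af
```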
In the Task-IL scenario, the average forgetting of BP improves with respect to the Domain-IL scenario by 8% and 22% for the Split and Permuted FMNIST datasets respectively. On the other hand, DFA improves only by 6% on the split dataset and becomes worse, with AF going from 45.8% to 50.1%, on the permuted-FMNIST. This trend brings DFA to perform worse than BP on the split dataset and comparably to BP on the permuted dataset². The accuracy trend of DFA can be explained in light of our hypothesis: if the output layer is specialized for every task, keeping the representations similar to each other can bring a disadvantage, as the previous representations are "overwritten".

Regarding the comparison of Random Features and DFA with minimized forgetting, we find that DFA can achieve a better Average Performance than a Random Feature model, both in the split and permuted datasets, while maintaining zero forgetting, just as RF. Specifically, DFA shows AP at least 2% higher than RF (see table 1, column "Minimizing forgetting"). One example is shown in fig. 3, where DFA reaches 83% accuracy while Random Features reaches 80.9%. The fact that DFA reaches higher performance than RF means that the weights of the first two layers are updated, and the null forgetting values of DFA mean that this weight update can happen in such a way that the information of the previous tasks is conserved.

Overall, we find that DFA can embody different behaviours according to the value of its hyper-parameters. It can almost reach the same generalization performance as BP while displaying less forgetting in the majority of the cases; and it can achieve zero forgetting like RF, while generalizing *better* than RF. It thus occupies a sweet spot between the plastic BP and the static RF.

## 3.2 Extension Of DFA To The Task-IL Scenario

We showed that the standard DFA cannot benefit from the introduction of a specialized output layer for every task (Task-IL scenario). We propose an extension of DFA in which there is a different feedback matrix for every task: in fig. 4, we show that in this setting the representations of one dataset are learned in different manifolds. According to our hypothesis, this phenomenon can be extended to the learning of different datasets, and this would allow learning new tasks with less interference with previous representations. We expect that the combination of a dedicated feedback matrix alongside a dedicated output layer can reduce forgetting in the Task-IL scenario. In the remainder, we will denote the standard DFA as DFA-same and this extension as DFA-diff.

We find that the main differences with respect to DFA-same are in the Average Forgetting measure and are in accordance with the hypothesis: when optimized for performance and in the Task-IL scenario, DFA-diff has 9.8% ± 2% AF. This value is around 20% lower than the one of DFA-same and 5% lower than BP's. In the Domain-IL scenario, on the other hand, DFA-diff has at least 8% more forgetting than BP and 10% more than DFA-same. When optimized for minimal forgetting, DFA-diff also achieves null forgetting in the Task-IL scenario, but it shows lower performance than DFA-same (up to 4.2% less than DFA-same in the split Domain-IL).

## 3.3 Comparing DFA With Elastic Weight Consolidation

In the Domain-IL scenario there is an advantage of DFA-same over EWC of about 10% AF. This could be due to the underlying hypothesis that in DFA-same the implicit regularisation given by the feedback matrix is kept constant.
On the other hand, the regularisation factor in EWC is different for every new task, as explained in section 2.

²The comparable value of AF between DFA and BP in the Task-IL permuted case masks the advantage of DFA in terms of forgetting of the earliest tasks. We show in fig. 3 the trend of forgetting of the first task, where a clear advantage can be noticed. In fig. 7 (appendix B) we report the forgetting trends of all the tasks, where one can see that DFA forgets more the tasks seen one or two stages before, but is more stable for the earlier ones.

![8_image_0.png](8_image_0.png)

Figure 4: Cosine similarity of the change in activations due to the n-th training vs. the change since the first initialization. Values are averaged over the first two layers of the network and over the images of the test dataset. The network is trained repeatedly on the same dataset (FMNIST), while the output layer is re-initialized at the beginning of every session. The overall change in activation in DFA-same is always aligned with the change taking place in every single training, while in DFA-diff it is less and less aligned with the changes due to subsequent trainings.

On the permuted dataset, EWC has a performance that is intermediate between DFA-same and DFA-diff. In fact, EWC can be considered a method in between DFA-same and DFA-diff, because the "pulling forces" in the loss will not be parallel to the ones of the previous tasks, as in DFA-same, but not orthogonal either, as in DFA-diff. On the split datasets, on the other hand, EWC tends to be better than DFA-diff. In the Task-IL scenario, EWC is the best method both when we evaluate on MNIST (see appendix C) and on FMNIST. In the Domain-IL scenario, it is the best only in the experiments on MNIST. This might be due to the difficulty of finding the correct hyper-parameters that would make EWC performant on FMNIST.

## 3.4 Ablation Experiments

In order to investigate the reasons behind the success of DFA in some of the CL scenarios, we look for the underlying mechanism that most influences DFA's stability against forgetting. To achieve minimum AF, we performed a grid search over the hyper-parameters and found that forgetting is reduced with a smaller variance of the feedback matrix, for both DFA-same and DFA-diff (see fig. 5). This impact is easily explained by observing that the entries of the feedback matrix are a multiplicative factor in the update rule of layers 1 and 2 (see section 2, right column of equations 3 and 4); thus, the smaller the values in the matrix, the smaller the updates of the weights of these two layers. The output layer is updated exactly as in BP, so it is not affected by the variance of the feedback matrix (the sketch at the end of this section makes this mechanism explicit). In the limit of a matrix filled with zeroes, DFA approaches Random Feature behaviour, where only the readout layer is updated. We therefore performed an ablation experiment with BP in which we reduced the learning rate of the first two layers, to mimic the effect of feedback matrices with reduced variance, while keeping the learning rate of the readout layer fixed and similar to the one used for DFA. We show the results of the ablated experiment on all the datasets in fig. 5. We find that lowering the learning rate of the intermediate layers indeed brings BP towards lower AF and lower AP, similarly to the effect of lowering the variance of DFA's feedback matrix. In fact, BP with reduced learning rates can surpass DFA's stability in some cases, for example in the Task-IL permuted case.
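To make the update rule just described explicit, here is a minimal numpy sketch of a single DFA step for a three-layer network in the spirit of Nøkland (2016); the layer sizes, loss, and names are our assumptions, not the exact implementation used here. The error is projected to the hidden layers by fixed random feedback matrices, so shrinking their entries directly shrinks the updates of layers 1 and 2, while the readout layer is updated exactly as in BP.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 784, 256, 10, 0.01
feedback_scale = 0.05  # std of feedback entries; smaller values shrink the updates of W1, W2

# Trainable weights, and fixed random feedback matrices that are never updated.
W1 = rng.normal(0.0, 0.05, (d_h, d_in))
W2 = rng.normal(0.0, 0.05, (d_h, d_h))
W3 = rng.normal(0.0, 0.05, (d_out, d_h))
B1 = rng.normal(0.0, feedback_scale, (d_h, d_out))
B2 = rng.normal(0.0, feedback_scale, (d_h, d_out))

def dfa_step(x, y):
    """One DFA update on a single example (x: input vector, y: one-hot target)."""
    global W1, W2, W3
    a1 = W1 @ x;  h1 = np.maximum(a1, 0.0)           # layer 1 (ReLU)
    a2 = W2 @ h1; h2 = np.maximum(a2, 0.0)           # layer 2 (ReLU)
    logits = W3 @ h2
    p = np.exp(logits - logits.max()); p /= p.sum()  # softmax readout
    e = p - y                                        # output error
    # Hidden layers: the error is routed through the fixed feedback matrices,
    # so their entries are a multiplicative factor in the updates of W1 and W2.
    W1 -= lr * np.outer((B1 @ e) * (a1 > 0), x)
    W2 -= lr * np.outer((B2 @ e) * (a2 > 0), h1)
    W3 -= lr * np.outer(e, h2)                       # readout: exactly as in BP
```

With B1 = B2 = 0 this reduces to the random-features regime, in which only the readout layer learns.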
In all the other cases, BP-ablated can reach an AF similar to DFA's, underlining the importance of the variance of the feedback matrix for the stability of DFA. Crucially, DFA still retains higher accuracy for old tasks in the long run: as we show in fig. 7 of appendix B, the accuracy drop for the case of Task-IL permuted FMNIST in DFA is bounded by a maximum of 59%, while BP-ablated reaches 67% forgetting.

![9_image_0.png](9_image_0.png)

Figure 5: AF-AP plots for all datasets (by rows) and scenarios (by columns). In each plot, the increasing size of the marker for DFA and DFA-diff indicates an increase in the variance of the feedback matrix (from 1.8 × 10⁻³ to 1.8), and for BP an increase of the learning rate of the first two layers (from 10⁻⁸ to 10⁻³); the learning rate of the third layer is kept at 0.01 to match the value used in DFA. We can notice that for smaller values of the feedback-matrix variance, DFA reaches smaller AF at the price of lower AP (smaller dots moving towards the top-left part of the plots, which culminates in the RF regime). The same trend is followed by BP when the learning rate of the first two layers decreases (BP ablated).

## 4 Conclusions

We have explored the potential of a biologically inspired learning algorithm, direct feedback alignment, to alleviate catastrophic forgetting in artificial neural networks. By comparing the performance of DFA and various baseline algorithms on several benchmarks, we found that DFA has potential as an algorithm for continual learning. For example, we found that DFA-same can alleviate catastrophic forgetting better than BP and EWC in the Domain-IL scenario. This result is consistent with our first hypothesis, whereby DFA-same imposes weight alignment in the presence of different datasets. We saw that the networks are indeed able to exploit this to mitigate catastrophic forgetting. According to our hypothesis, DFA-same works as an implicitly regularised method like EWC, but with the advantage of keeping exactly the same constraints on the weights, while EWC adjusts the constraints on the weights after every new dataset encountered.

Empirical evaluation also shows that DFA with a different feedback matrix for each task in the curriculum has an advantage over EWC and BP in the Task-IL scenario. This is the architectural scenario in which the output layer is specific to every task. This allows learning the features on different manifolds in weight space, while still permitting an efficient encoding of the features for a successful classification of the different tasks. In continual learning, when the features are learned on different manifolds, learning new tasks does not interfere with the separating hyperplanes of the previous tasks, so this can be an advantage in the Task-IL scenario. The drastic improvement of DFA-diff in the Task-IL scenario, in contrast to the minor improvement of DFA-same in this setting, corroborates our second hypothesis, whereby gradient alignment extends to the case of different datasets, allowing DFA-diff to learn in different manifolds. This approach can thus be employed as an effective mitigation of catastrophic forgetting.

Finally, we tested gradient alignment in two specific experiments, and we find it visible both when DFA is trained on the same series of datasets starting from different initialisations with the same feedback matrices (appendix A, panel C) and when DFA is trained on the same dataset repeatedly but with different feedback matrices (fig. 4).
While backpropagation with layer-specific learning rates in the first two layers (BP ablated) has significantly lower overall forgetting than DFA-same in the case of the Task-IL permuted dataset, DFA-same retained its advantage with respect to BP and BP ablated for the tasks that were processed at the beginning (see fig. 7 in appendix B and fig. 3). The design of BP ablated itself was inspired by the impact of the scale of DFA's feedback matrix on forgetting. The advantage of DFA-same over this method is further evidence that weight alignment is taking place and is beneficial against catastrophic forgetting, beyond the impact of the feedback matrix on the learning rate.

In the future, it will be interesting to investigate how the architecture of the networks, and specifically their width and depth, influences the plasticity-stability trade-off curves. In the case of the split dataset, the forgetting is heavily dependent on the similarity of consecutive tasks (see the larger error bars in fig. 5). On top of curriculum learning, it might also be beneficial to use convolutional layers before the fully-connected part, to replace the highly variable distribution of pixels with more rationalized features (Crafton et al., 2019).

We tested the continual learning scenarios where the model is given the task ID at test time (Task-IL) or not (Domain-IL). The next challenge is the class-incremental learning scenario, where the model has to infer the task ID when solving a task, as is the case, for example, in many reinforcement learning scenarios. Increasing the biological plausibility of DFA is another interesting direction. Investigating the viability of DFA extensions that move towards more local update rules, like the one analysed by Dellaferrera et al. (2021), would be particularly interesting.

## References

Yoshua Bengio, Dong-Hyun Lee, Jörg Bornschein, and Zhouhan Lin. Towards biologically plausible deep learning. *ArXiv*, abs/1502.04156, 2015.

Mehdi Abbana Bennani and Masashi Sugiyama. Generalisation guarantees for continual learning with orthogonal gradient descent. *ArXiv*, abs/2006.11942, 2020.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.

Gail Carpenter. A massively parallel architecture for a self-organizing neural pattern recognition machine. *Computer Vision, Graphics, and Image Processing*, 37:54–115, 11 1986. doi: 10.1016/S0734-189X(87)80014-2.

Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, and Philip H. S. Torr. *Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence*, pp. 556–572. Springer International Publishing, 2018. ISBN 9783030012526. doi: 10.1007/978-3-030-01252-6_33.

Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. *ArXiv*, abs/1812.00420, 2019.

Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 2933–2943.
Curran Associates, Inc., 2019.

Brian Crafton, Abhinav Parihar, Evan Gebhardt, and Arijit Raychowdhury. Direct feedback alignment with sparse connections for local learning, 2019.

Francis Crick. The recent excitement about neural networks. *Nature*, 337(6203):129–132, 1989.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(7):3366–3385, 2021.

Giorgia Dellaferrera, Stanislaw Wozniak, Giacomo Indiveri, Angeliki Pantazi, and Evangelos Eleftheriou. Learning in deep neural networks using a biologically inspired optimizer, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.

Samuel W. Failor, Matteo Carandini, and Kenneth D. Harris. Learning orthogonalizes visual cortical population codes. *bioRxiv*, 2021. doi: 10.1101/2021.05.23.445338.

Haytham M. Fayek, Lawrence Cavedon, and Hong Ren Wu. Progressive learning: A deep learning framework for continual learning. *Neural Networks*, 128:345–357, 2020.

Timo Flesch, Keno Juechems, Tsvetomira Dumbalska, Andrew Saxe, and Christopher Summerfield. Orthogonal representations for robust context-dependent task performance in brains and neural networks. *Neuron*, 110(7):1258–1270.e11, 2022. ISSN 0896-6273. doi: 10.1016/j.neuron.2022.01.005.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington (eds.), *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, volume 9 of *Proceedings of Machine Learning Research*, pp. 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.

Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks, 2015.

Stephen Grossberg. Competitive learning: From interactive activation to adaptive resonance. *Cognitive Science*, 11(1):23–63, 1987.

David Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. *Neural Computation*, 16:2639–2664, 01 2005. doi: 10.1162/0899766042321814.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Xu He and Herbert Jaeger. Overcoming catastrophic interference using conceptor-aided backpropagation. In *International Conference on Learning Representations*, 2018.

Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification, 2018.

Nitin Kamra, Umang Gupta, and Yan Liu. Deep generative dual memory network for continual learning. *ArXiv*, abs/1710.10368, 2017.

Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks, 2017.
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of Sciences*, 114:3521–3526, 2017.

Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 3519–3529. PMLR, 09–15 Jun 2019.

A. Krizhevsky, I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. In *Advances in Neural Information Processing Systems*, pp. 1097–1105, 2012.

Julien Launay, Iacopo Poli, François Boniface, and Florent Krzakala. Direct feedback alignment scales to modern deep learning tasks and architectures. *Advances in Neural Information Processing Systems*, 33:9346–9360, 2020.

Yann LeCun, Y. Bengio, and Geoffrey Hinton. Deep learning. *Nature*, 521:436–444, 05 2015. doi: 10.1038/nature14539.

Timothy P. Lillicrap, Daniel Cownden, Douglas Blair Tweed, and Colin J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. *Nature Communications*, 7, 2016.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In *NIPS*, 2017.

Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Volume 24 of *Psychology of Learning and Motivation*, pp. 109–165. Academic Press, 1989. doi: 10.1016/S0079-7421(08)60536-8.

Martial Mermillod, Aurélia Bugaiska, and Patrick Bonin. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects. *Frontiers in Psychology*, 4:504, 08 2013. doi: 10.3389/fpsyg.2013.00504.

Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, and Hassan Ghasemzadeh. Understanding the role of training regimes in continual learning, 2020.

Arild Nøkland. Direct feedback alignment provides learning in deep neural networks. In *NIPS*, 2016.

OpenAI. GPT-4 technical report, 2024.

Alec Radford, Karthik Narasimhan, et al. Improving language understanding by generative pre-training, 2018.

A. Rahimi and B. Recht. Random features for large-scale kernel machines. In *Advances in Neural Information Processing Systems*, pp. 1177–1184, 2008.

A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In *Advances in Neural Information Processing Systems*, pp. 1313–1320, 2009.

Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. *2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops*, pp. 512–519, 2014.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, G. Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5533–5542, 2017.

Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, and Sebastian Goldt. Align, then memorise: the dynamics of learning with feedback alignment.
*Journal of Physics A: Mathematical and Theoretical*, 55, 2021.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. *Learning representations by backpropagating errors*, pp. 696–699. MIT Press, Cambridge, MA, USA, 1988. ISBN 0262010976.

Ari Seff, Alex Beatson, Daniel Suo, and Han Liu. Continual learning in generative adversarial nets. *ArXiv*, abs/1705.08395, 2017.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In *NIPS*, 2017.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In *International Conference on Learning Representations*, 2015.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The Journal of Machine Learning Research*, 15(1):1929–1958, 2014.

Gido M. van de Ven and Andreas S. Tolias. Three scenarios for continual learning, 2019.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.

Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Continual learning of context-dependent processing in neural networks. *Nature Machine Intelligence*, 1(8):364–372, August 2019. ISSN 2522-5839. doi: 10.1038/s42256-019-0080-x.

Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. *Proceedings of Machine Learning Research*, 70:3987–3995, 2017.

## A Degeneracy Breaking of DFA

Previous work on DFA (Refinetti et al., 2021) and FA (Lillicrap et al., 2016) has shown that DFA brings the gradients and the weights to align with the feedback matrix. This phenomenon is called degeneracy breaking because, among all the combinations of the weights that can fit the dataset, DFA selects one that is closest to the one aligned with the feedback matrix. **Gradient Alignment** and **Weight Alignment** were previously measured on the same dataset, starting from different initialization seeds. Here, we show the overlap of the weights (in panel A) and the overlap of the gradients (in panel B) among networks with the same/different initialization seed for the weights (yellow/purple in the first column of panel A) and the same/different feedback matrix (DFA-same/DFA-diff). In the training on Task 1, all the networks have the same feedback matrix.

![14_image_0.png](14_image_0.png)

Figure 6: A: Dot product of the weights at the end of training on the different tasks (averaged over the first two layers, Domain-IL architecture) of two networks trained with DFA-same or DFA-diff, in contrast with a reference network trained with DFA-same. **Weight Alignment** can be appreciated mostly by looking at the second row, the case in which the networks are initialized from different seeds (seed 1 for the reference network and seed 2 for the network in the second row): the weights at first do not overlap, but because the networks are trained with the same feedback matrix, the weights overlap more and more. B: Overlap of gradients, averaged over two different checkpoints, during the training on the same dataset (split FMNIST, Task-IL architecture) of two networks trained with DFA-same or DFA-diff, in contrast with a reference network trained with DFA-same.
**Gradient Alignment** is more visible in the first and the third rows: the gradients in panel B become orthogonal (overlap equal to 0) starting from the second task, as soon as the feedback matrix changes, irrespective of whether the networks are initialized in the same way (first row) or differently (third row).

## B DFA-Same Versus BP and BP-Ablated in the Long Run

In order to answer the question, "Is DFA's mitigation of catastrophic forgetting solely due to the small learning rate in the first two layers?", we propose a deeper comparison with BP ablated in the context of the permuted dataset. The same results apply to the split dataset. In fig. 7 below, we display the accuracy of all the tasks during the CL pipeline (training one task at a time while always monitoring the accuracy of the other tasks; in the Task-IL scenario, the corresponding output layer is used both when training and when testing a task). We compare the case in which BP ablated has the AF most similar to DFA-same's, and we find that DFA has the tendency to forget more than BP and BP ablated after a few stages, but is more stable in the long run, for example after 6 tasks in the Task-IL scenario and after 1 or 9 tasks in the Domain-IL scenario; the black and grey lines in the plot of DFA-same visually mark the point where the advantage starts. We conclude that a small learning rate in the first two layers is not always enough to reproduce the stability of DFA-same, and this advantage in the long run might be due to the alignment of the weights.

![15_image_0.png](15_image_0.png)

Figure 7: Accuracy interpolation lines along the stages. On the left (red lines), we show the results for the Task-IL scenario: even though DFA-same has higher average forgetting than BP and BP-ablated, it has lower forgetting for the first three (vs. BP) or four (vs. BP ablated) tasks. DFA-same retains accuracy better than BP and BP-ablated in the long run, while it forgets more in the shorter run. This is also true in the Domain-IL scenario (blue lines, on the right), in which DFA has better average forgetting than BP and worse than BP-ablated.

## C Results Evaluated on MNIST

The plots displaying the results of the methods applied to the MNIST datasets in the Task-IL scenario can be found in fig. 8. The points in the circle indicate the Split-MNIST results; the others refer to the results on Permuted-MNIST.

![16_image_0.png](16_image_0.png)

Figure 8: The results are consistent with those on FMNIST. In the Task-IL split-MNIST, DFA lies in an intermediate position between RF and BP, slightly behind EWC; in the permuted Task-IL, DFA-diff has less forgetting than all the other methods while achieving the same AP as BP. In Domain-IL (panel on the right), DFA-same is comparable or equal in AF to BP, while DFA-diff has the worst performance; EWC is at an intermediate level between DFA-same and DFA-diff on the permuted dataset, while it is superior to DFA on the split datasets.

## D Random Seed Impact on the Accuracy of the First Learned Task

![17_image_0.png](17_image_0.png)

Figure 9: Complete illustration of forgetting of the first learned task for all methods (accuracy % on the y-axis, stage number on the x-axis). The figures display one method at a time, in the following order: DFA-same, DFA-diff, BP-ablated, BP, EWC; permuted FMNIST above and split FMNIST below. The dotted lines are the runs optimized for minimizing forgetting.
In EWC permuted FMNIST, the blue dotted line coincides with the blue solid line.

## E Similarity Measures of Datasets

In fig. 1, panel B, we analyze the similarity of the datasets arising from the division into the different tasks. We take as rows all the images of one class in the dataset corresponding to task 1 (with their pixels in the columns) and compare this matrix to the matrix of images of the same class in the next dataset. We average over all pairs of datasets and over the different classes. The similarity measures we apply (described below) quantify similarity from a purely geometrical perspective, and from this point of view we find that the tasks have classes more similar to each other in the permuted dataset. By construction, the reshuffling of the pixels does not change the mean and the standard deviation of the distributions of the pixels. Nevertheless, the split dataset is easier because it has only two classes, and in Task-IL learning the difference in distribution can be an advantage. In fact, the average forgetting values in the Task-IL scenario are overall much smaller than the average forgetting on the permuted dataset. On the other hand, the permuted dataset has only slightly more forgetting in the Domain-IL scenario in the case in which the models are optimized for performance.

We use the following similarity measures:

1. Cosine similarity: dot product of the matrices reshaped into vectors, normalized by the L2 norms of the two vectors.
2. Canonical Correlation Analysis (CCA) (Hardoon et al., 2005): a similarity measure which is invariant to affine transformations (any invertible linear transformation, including scaling, rotations, and translations).
3. Centered Kernel Alignment (CKA) (Kornblith et al., 2019): a similarity measure which is invariant to orthogonal transformations, a subset of the invertible linear transformations.

CCA is 1 when the two matrices are linearly equivalent (they are linked by an affine transformation) and 0 when they are not; CKA is 1 in the case that the two matrices are orthogonally equivalent (they can be linked by a rotation). With the plot in fig. 1, we show that the permuted dataset is composed of images with distributional similarity among the tasks. This also holds from the point of view of the representations of the different tasks inside the network. We report in table 2 the similarity of the activation matrices (input images in the rows, neurons in the columns). In the permuted dataset there is a larger overlap (especially in the case with a shared output layer) than in the disjoint datasets.

## F Analysis of the Representations

Since the training is driven by both the architecture (weights, activation functions) and the data distribution, it can be useful to inspect how the activations of the network vary during training on different tasks. In neuroscience, similarity measures of the activations of neurons have proven to be a valid tool for quantifying differences in the activation patterns of neurons. These methods have been applied to ANNs to compare the internal representations of networks across layers, training epochs, initialization seeds, training algorithms, architectures, and datasets. We showed that the gradients and weights align in two networks with different weight initializations and the same feedback matrix (fig. 4 and appendix A). In this part, we provide an analysis of the representation overlaps.
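For concreteness, here is a minimal numpy sketch of two of the measures used in these appendices: the cosine similarity between flattened matrices, and linear CKA following Kornblith et al. (2019). CCA is omitted for brevity, and any convention beyond the cited definitions is our assumption.

```python
import numpy as np

def cosine_similarity(A, B):
    """Dot product of the matrices reshaped into vectors, L2-normalized."""
    a, b = A.ravel(), B.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (examples in rows)."""
    X = X - X.mean(axis=0)  # center each column (pixel or neuron)
    Y = Y - Y.mean(axis=0)
    return (np.linalg.norm(Y.T @ X, "fro") ** 2
            / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

# Example: images of one class from two tasks, pixels in the columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 784))
Y = X[:, rng.permutation(784)]  # a pixel permutation, as in the permuted dataset
print(cosine_similarity(X, Y), linear_cka(X, Y))
```

Since a pixel permutation is an orthogonal transformation, linear CKA is close to 1 in this example, while the cosine similarity of the flattened matrices is close to 0.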
In table 2 we report the dot product of the differences in activations of the first two layers, obtained by feeding the test images of two subsequent datasets into the network at the same stage of training, averaged over all pairs of adjacent datasets and stages of training. It is evident that in the split datasets the activations of subsequent tasks do not overlap at all, while in the permuted dataset some overlap is present.

Table 2: Dot product of the changes in activations (overlap between different datasets) for the Task-IL and Domain-IL scenarios.

| Dataset | Method | Task-IL | Domain-IL |
|----------|----------|---------|-----------|
| Split | BP | 0 | 0 |
| | DFA-same | 0 | 0 |
| | DFA-diff | 0 | 0 |
| | EWC | 0 | 0 |
| Permuted | BP | 0.045 | 0.87 |
| | DFA-same | 0.08 | 0.96 |
| | DFA-diff | 0.09 | 0.97 |
| | EWC | 0.06 | 0.92 |
# Detecting Incidental Correlation in Multimodal Learning via Latent Variable Modeling

Taro Makino *taro@nyu.edu*
NYU Center for Data Science

Yixin Wang *yixinw@umich.edu*
University of Michigan

Krzysztof J. Geras *k.j.geras@nyu.edu*
NYU Grossman School of Medicine

Kyunghyun Cho *kyunghyun.cho@nyu.edu*
NYU Center for Data Science
Prescient Design, Genentech
CIFAR LMB

Reviewed on OpenReview: *https://openreview.net/forum?id=QoRo9QmOAr*

## Abstract

Multimodal neural networks often fail to utilize all modalities. They subsequently generalize worse than their unimodal counterparts, or make predictions that only depend on a subset of modalities. We refer to this problem as *modality underutilization*. Existing work has addressed this issue by ensuring that there are no systematic biases in dataset creation, or that our neural network architectures and optimization algorithms are capable of learning modality interactions. We demonstrate that even when these favorable conditions are met, modality underutilization can still occur in the small data regime. To explain this phenomenon, we put forth a concept that we call *incidental correlation*. It is a spurious correlation that emerges in small datasets, despite not being a part of the underlying data generating process (DGP). We develop our argument using a DGP under which multimodal neural networks must utilize all modalities, since all paths between the inputs and target are causal. This represents an idealized scenario that often fails to materialize. Instead, due to incidental correlation, small datasets sampled from this DGP have higher likelihood under an alternative DGP with spurious paths between the inputs and target. Multimodal neural networks that use these spurious paths for prediction fail to utilize all modalities. Given its harmful effects, we propose to detect incidental correlation via latent variable modeling. We specify an identifiable variational autoencoder such that the latent posterior encodes the spurious correlations between the inputs and target. This allows us to interpret the Kullback-Leibler divergence between the latent posterior and prior as the severity of incidental correlation. We use an ablation study to show that identifiability is important in this context, since we derive our conclusions from the latent posterior. Using experiments with synthetic data, as well as with VQA v2.0 and NLVR2, we demonstrate that incidental correlation emerges in the small data regime, and leads to modality underutilization. Practitioners of multimodal learning can use our method to detect whether incidental correlation is present in their datasets, and determine whether they should collect additional data.

## 1 Introduction

Multimodal learning refers to jointly modeling data from different modalities, such as images, speech, and text (Ngiam et al., 2011; Baltrusaitis et al., 2019; Liang et al., 2022). While the goal is to learn modality interactions that are useful for downstream tasks, multimodal neural networks often generalize worse than their unimodal counterparts (Jabri et al., 2016; Poliak et al., 2018; Jean & Cho, 2019; Wang et al., 2020a; Wu et al., 2020; 2022), or make predictions that only depend on a subset of modalities (Agrawal et al., 2016; Goyal et al., 2017; Agrawal et al., 2018; Cadène et al., 2019; Hessel & Lee, 2020). Since these are both symptoms of multimodal neural networks failing to utilize all modalities, we treat them as the same problem, and refer to it as *modality underutilization*.
Existing work has addressed this issue by ensuring that there are no systematic biases in dataset creation (Goyal et al., 2017; Poliak et al., 2018; Suhr et al., 2019; Hudson & Manning, 2019), or that our neural network architectures and optimization algorithms are able to learn modality interactions (Hazirbas et al., 2017; Zeng et al., 2019; Song et al., 2020; Valada et al., 2020; Wang et al., 2020a;b; Wu et al., 2022). We demonstrate that even under these favorable conditions, modality underutilization can still occur in the small data regime. This has important implications for practitioners of multimodal learning. Even if they carefully design their data collection procedure to be free of biases, and use proven neural network architectures and optimization methods, they are still vulnerable to modality underutilization if they do not collect enough data.

We explain this phenomenon using a concept that we call *incidental correlation*. Incidental correlation is a spurious correlation that emerges in small datasets, despite not being a part of the underlying data generating process (DGP). For example, suppose the ground-truth DGP consists of two independent Bernoulli random variables. If we sample a small dataset from this DGP and compute the correlation between the variables, it is likely to be significantly different from zero. In other words, the dataset has higher likelihood under the alternative DGP than under the ground-truth DGP that it was sampled from.

To connect this idea to modality underutilization, we put forth an idealized DGP under which modality underutilization cannot occur. We then explain how this ideal scenario breaks down in the small data regime. The DGP is a causal graphical model (Pearl, 2009), since we consider the causal relationships between the variables. There is an unobserved variable u that gives rise to two observable input modalities x and x′. These input modalities then cause the target y. We refer to this DGP as the *multimodal generating process* (MGP), and show its graph in Fig. 1a. Under the MGP, we have

$$p(\mathbf{y} \mid do(\mathbf{x}), do(\mathbf{x}')) = p(\mathbf{y} \mid \mathbf{x}, \mathbf{x}'). \tag{1}$$

In causal language, p(y | do(x), do(x′)) is identifiable, meaning that it is obtainable by estimating p(y | x, x′) from observational data. Therefore, if we estimate p(y | x, x′) and use it to predict the target given the inputs, the prediction is purely based on the causal paths x → y and x′ → y. Such a model successfully utilizes all modalities.

![1_image_0.png](1_image_0.png)
![1_image_1.png](1_image_1.png)

Figure 1: (a) Multimodal generating process (MGP). All paths between the inputs and target are causal. (b) Spurious generating process (SGP). The edge from u to y unblocks spurious paths between the inputs and target, which lead to modality underutilization.

We argue that modality underutilization occurs when this ideal scenario fails to materialize. Suppose we sample a small dataset from the MGP, where u and y are conditionally independent given the inputs. Due to incidental correlation, our dataset can have higher likelihood under an alternative DGP with an edge from u to y. This additional edge makes u and y conditionally dependent given the inputs. We call this alternative DGP the *spurious generating process* (SGP), and show its graph in Fig. 1b. The edge from u to y unblocks the spurious paths x ← u → y and x′ ← u → y, and prevents Eq. 1 from holding.
Consequently, if we estimate p(y | x, x ′) and use it to predict the target given the inputs, we can no longer tell whether the predictions are based on the causal paths, or the spurious ones. This ambiguity can lead to modality underutilization. For example, a model can rely solely on the spurious path x ← u → y for prediction, thus failing to utilize the x ′ modality. It is beneficial to detect incidental correlation, given that it leads to modality underutilization. The main challenge in doing so is that it involves the unobserved variable u. We circumvent this difficulty by using latent variable modeling. We specify an identifiable variational autoencoder (VAE) (Kingma & Welling, 2014) such that the relationship between the latent variable z and the inputs captures the incidental correlation between u and the target. This allows us to interpret the Kullback-Leibler (KL) divergence between the latent posterior and prior as the severity of incidental correlation. We elaborate on this procedure in Sec. 4. Since we derive our conclusions from the latent posterior, it is important to prevent learning an arbitrary latent space. We therefore use a VAE with a mixture prior and a piecewise affine decoder to achieve identifiability (Kivva et al., 2022). In order to assess its importance, we compare these results to those derived using a vanilla VAE (Kingma & Welling, 2014), which is unidentifiable. Our ablation study demonstrates that identifiability is necessary for drawing correct conclusions. We conduct experiments to empirically verify our main claim, which is that even if the underlying DGP is free of systematic biases, incidental correlation emerges in the small data regime, and leads to modality underutilization. Our experiments consist of a toy problem with synthetic data and small neural networks, as well as realistic settings with VQA v2.0 (Goyal et al., 2017) and NLVR2 (Suhr et al., 2019), and a state-of-the-art model called FIBER (Dou et al., 2022). In all cases, our experiments consist of two stages. The first stage shows that even when incidental correlation is absent in the large data regime, it emerges as we reduce the size of our datasets. We do this by training a VAE, and showing that the KL divergence between the latent posterior and prior is approximately zero for large datasets, and increases as the datasets become smaller. This occurs because the latent variable encodes spurious correlations between the inputs and target that are present in small datasets, and lead the latent posterior to diverge from the prior. The second stage demonstrates that the emergence of incidental correlation leads to modality underutilization. To show this, we train a multimodal neural network and an ensemble of unimodal neural networks, and use their generalization gap as a proxy for modality underutilization. The multimodal neural network generalizes better than the unimodal ensemble in the large data regime, but this difference erodes as the datasets become smaller. We attribute this to spurious paths between the inputs and target that fail to generalize. Practitioners of multimodal learning can use our method to detect whether incidental correlation is present in their datasets, and determine whether they should collect more data. This is useful in realistic situations, when practitioners encounter modality underutilization in data-constrained settings. While our experiments involve a large dataset, this was for the purpose of experimental control. 
We showed that even when incidental correlation is absent in the large data regime, it emerges when we reduce the size of our datasets. Data is less abundant in many real-world applications, and this is where our method provides value.

## 2 Related Work

## 2.1 Modality Underutilization

Modality underutilization is a long-standing open problem, and existing work has generally addressed it from two different directions. The first direction is to ensure that there are no systematic biases in dataset creation. This is motivated by the fact that many benchmark datasets intended to require multimodal reasoning could unwittingly be solved with a subset of modalities. Poliak et al. (2018) showed in natural language inference that tasks designed to require the use of a hypothesis and a context could be solved with hypothesis-only baselines. A similar problem occurs in visual question answering (VQA) (Antol et al., 2015). Instead of answering a question about a specific image, VQA models often predict the same answer for a given question across a wide variety of images (Agrawal et al., 2016; Jabri et al., 2016; Agrawal et al., 2018; Cadène et al., 2019). To counter this problem, many multimodal benchmark datasets have been proposed that are carefully constructed to be free of systematic biases (Goyal et al., 2017; Hudson & Manning, 2019). We show that this precaution is not enough, and that incidental correlation can be detected in such datasets in the small data regime.

The second direction is to ensure that our neural network architectures and optimization algorithms are capable of learning modality interactions. In terms of neural network architectures, the majority take into account the idiosyncrasies of the specific modalities and prediction tasks (Hazirbas et al., 2017; Zeng et al., 2019; Song et al., 2020; Valada et al., 2020; Wang et al., 2020b), but generic approaches also exist (Jean & Cho, 2019; Baevski et al., 2022). Regarding optimization algorithms, Wang et al. (2020a) proposed to mitigate modality underutilization by averaging gradients across modalities, while Wu et al. (2022); Sun et al. (2021) took the approach of balancing the modality-specific learning rates. Meanwhile, Gat et al. (2020) introduced a regularization term based on functional entropy to balance the contributions of each modality.

## 2.2 Modeling Unobserved Confounding

Our work is related to causal inference approaches that address unobserved confounding between multiple treatments and a single outcome (Wang et al., 2017; Frot et al., 2018; Tran & Blei, 2018; Heckerman, 2018; Janzing & Schölkopf, 2018; Wang & Blei, 2019; Ranganath & Perotte, 2018; D'Amour, 2019; Ćevid et al., 2020; Puli et al., 2020; Agrawal et al., 2021; Wang & Blei, 2021). These studies leveraged the correlations among the multiple treatments as observable signatures for inferring the shared unobserved confounder. We also draw inspiration from Makino et al. (2022), who adjusted for unobserved confounding in the context of multitask learning, where there is a single input and multiple targets.

## 2.3 Identifiable Latent Variable Modeling

Informally, a latent variable model is identifiable if the true joint distribution over the observed and latent variables can be learned from data. The precise definition is context-dependent, and we provide ours in the appendix. Identifiability is important in the context of this work, since we derive our conclusions from the latent posterior.
The theory originated in nonlinear ICA (Hyvärinen & Morioka, 2016; 2017; Hyvärinen et al., 2019), which was extended to VAEs in Khemakhem et al. (2020). These works assumed there is an observed auxiliary variable that the latent prior factorizes over, such as a timestamp or a class label. Since auxiliary variables are not always available, there have been efforts to relax this requirement. Wang et al. (2021) achieved this with Brenier maps and input convex neural networks, and Moran et al. (2022) did so using a sparse model in which each dimension of the observed data depends on a small subset of the latent variables. There also exist approaches based on the principle of independent mechanisms from causality (Gresele et al., 2021; Lachapelle et al., 2022). We adopt the assumptions of Kivva et al. (2022), who achieve identifiability without auxiliary information via a mixture prior and a piecewise affine decoder. This approach is appealing to us because it is consistent with conventional modeling choices that scale to high-dimensional data (Jiang et al., 2017; Falck et al., 2021). It has also been shown to perform well in terms of various empirical measures of identifiability (Willetts & Paige, 2021). The topic of identifiability is also related to disentanglement, which can be thought of as identifiability up to permutation and component-wise transformation (Higgins et al., 2017; Li & Mandt, 2018; Locatello et al., 2019; Bengio et al., 2020; Bai et al., 2021; Lachapelle et al., 2022). Our work also joins a direction of research that demonstrates the practical utility of identifiability. While this topic has been relatively unexplored (Hyvärinen et al., 2023), Lopez et al. (2023) demonstrated its importance for modeling single-cell genomics data.

## 3 Incidental Correlation

We explain modality underutilization using a concept that we call incidental correlation. We define this as a spurious correlation that emerges in the small data regime, despite not being a part of the underlying DGP. Small datasets that are sampled from the MGP, where u and y are conditionally independent given the inputs, can have higher likelihood under the SGP, where this independence does not hold. In the SGP, the correlation between the inputs and target flows through the causal paths x → y and x′ → y, as well as the spurious paths x ← u → y and x′ ← u → y. As we discussed in Sec. 1, these spurious paths contribute to modality underutilization. The severity of this effect depends on the degree to which u and y are conditionally dependent given the inputs. In order to explain this, let us define y in the SGP as

$$\mathbf{y} := f_{\mathbf{y}}(\mathbf{x}, \mathbf{x}', g(\mathbf{u}), \epsilon_{\mathbf{y}}), \tag{2}$$

where ϵy is exogenous noise, and g(u) is a function that determines the degree to which u and y are conditionally dependent given the inputs. On one extreme, if g(u) is a constant function, u and y are conditionally independent given the inputs. This makes the SGP equivalent to the MGP. On the other extreme, if g(u) = y, u and y are perfectly dependent given the inputs. In this case, u and y collapse into a single variable, which we refer to as y. In the corresponding graph shown in Fig. 2a, the edges between the inputs and target are bidirectional, forming a Markov random field.
The data distribution is given by

$$p(\mathbf{x}, \mathbf{x}', \mathbf{y}) \propto \exp(\phi_{\mathbf{x}}(\mathbf{x}) + \phi_{\mathbf{x}'}(\mathbf{x}') + \phi_{\mathbf{y}}(\mathbf{y}) + \phi_{\mathbf{x},\mathbf{y}}(\mathbf{x}, \mathbf{y}) + \phi_{\mathbf{x}',\mathbf{y}}(\mathbf{x}', \mathbf{y})),$$

where the ϕ's are potential functions. Under this distribution, the prediction for y is given by

$$\operatorname*{arg\,max}_{\mathbf{y}} \; \phi_{\mathbf{y}}(\mathbf{y}) + \phi_{\mathbf{x},\mathbf{y}}(\mathbf{x}, \mathbf{y}) + \phi_{\mathbf{x}',\mathbf{y}}(\mathbf{x}', \mathbf{y}).$$

If ϕy(y) is uniform, and thus constant with respect to y, this corresponds to an ensemble of the two unimodal neural networks ϕx,y(x, y) and ϕx′,y(x′, y). In other words, multimodal learning is equivalent to unimodal ensembling in this particularly degenerate case of the SGP. In practice, the severity of incidental correlation is likely to fall somewhere between these two extremes.

![4_image_1.png](4_image_1.png)
![4_image_0.png](4_image_0.png)

Figure 2: (a) When u and y collapse into a single variable, which we call y, the graph becomes a Markov random field because the edges between the inputs and target are bidirectional. (b) Selection bias is induced by conditioning on the collider S, which makes u and y conditionally dependent given the inputs.

Incidental correlation is not the only way that u and y can become conditionally dependent given the inputs. Another way this can occur is due to selection bias, where there is an additional node S ∈ {0, 1} that represents a selection criterion (Bareinboim & Pearl, 2012). We show the corresponding graph in Fig. 2b. Data is sampled from a DGP, and is included in the dataset when S = 1. Since S is a child of u and y, this induces a collider bias and makes u and y conditionally dependent given the inputs. Selection bias is an example of a systematic bias, so we do not consider it in this work. While systematic biases are indeed problematic, the purpose of our work is to show that even under favorable conditions where there are no systematic biases, modality underutilization can occur in the small data regime due to incidental correlation.

## 4 Detecting Incidental Correlation

Suppose we have a multimodal dataset $\mathcal{D} = \{\mathbf{x}^{(n)}, \mathbf{x}'^{(n)}, \mathbf{y}^{(n)}\}_{n=1}^{N}$, where N is the size of the dataset. It is beneficial to be able to detect whether incidental correlation is present in D, since if it is, spurious correlations between the inputs and target can lead to modality underutilization. We propose to detect incidental correlation using latent variable modeling. Our method is based on the following high-level idea, which we elaborate on in the following paragraphs. If there is no incidental correlation in D, then it has highest likelihood under the MGP, and {x, x′} is sufficient for predicting the target. In contrast, if incidental correlation is present, D has higher likelihood under the SGP, and {x, x′} is no longer sufficient for predicting the target. We therefore introduce a latent variable z, and predict the target based on {x, x′, z}. z encodes any additional information that is useful for predicting the target, given the inputs. This additional information corresponds to the spurious correlations between the inputs and target.
To achieve this, we specify a conditional VAE (Kingma & Welling, 2014; Kingma et al., 2014; Sohn et al., 2015) with an encoder q(z | x, x′, y) and a decoder p(y | x, x′, z), and maximize the lower bound of log p(y | x, x′), which is

$$\mathbb{E}_{q(\mathbf{z} \mid \mathbf{x}, \mathbf{x}', \mathbf{y})}[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{x}', \mathbf{z})] - D_{KL}(q(\mathbf{z} \mid \mathbf{x}, \mathbf{x}', \mathbf{y}) \parallel p(\mathbf{z})). \tag{3}$$

This is called the evidence lower bound (ELBO), and is derived in the appendix. The KL term in the ELBO acts as a regularizer, and discourages putting information in z unless necessary. This means that if there are no spurious correlations between the inputs and targets, the encoder does not need to do anything, and the KL term is driven to zero. However, if incidental correlation is present in D, then the spurious correlations between the inputs and target are represented by z, and the KL term increases. We can therefore interpret the KL divergence between the latent posterior and prior as the severity of incidental correlation. Another way to interpret this is as a conditional independence test. We are using the KL term to determine whether u and y are conditionally independent given the inputs, without having to observe u. If the KL divergence between the latent posterior and prior is close to zero, this leads us to conclude that incidental correlation is absent from D.

In order to prevent false negatives, we need to take precautions to avoid a common failure mode of VAEs called posterior collapse. Posterior collapse refers to the latent posterior being independent of the data, and equaling the prior. This has often been attributed to optimization issues specific to VAEs (Dieng et al., 2019; Lucas et al., 2019; Razavi et al., 2019), but was linked to latent variable identifiability by Wang et al. (2021). Posterior collapse is not possible when the true model does not exhibit posterior collapse, and is identifiable up to an invertible affine transformation. In the context of an observed variable x and a latent variable z, Wang et al. (2021) defined unidentifiability as p(x | z) = p(x), and posterior collapse as p(z | x) = p(z). The authors showed that the former implies the latter. Conversely, if the true model satisfies p(x | z) ≠ p(x), then its posterior does not collapse. If we estimate the true model up to an invertible affine transformation, then p(x | z) ≠ p(x) holds for our model, and its posterior does not collapse. In other words, a non-collapsed posterior cannot be transformed into a collapsed posterior by an invertible affine transformation.

Therefore, we adopt the assumptions of Kivva et al. (2022) to achieve identifiability up to an invertible affine transformation. Informally, the two assumptions are that the latent prior is a Gaussian mixture, and that the decoder is piecewise affine. In the appendix, we state the relevant assumptions and identifiability results of Kivva et al. (2022). The latent prior is given by

$$p(\mathbf{z}) = \sum_{k=1}^{K} P(C = k)\, \mathcal{N}(\mathbf{z}; \mu_k, \Sigma_k),$$

where K is the number of components, and C is a categorical variable. The corresponding graph is shown in Fig. 3.
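To make this objective concrete, below is a minimal PyTorch sketch of the conditional ELBO in Eq. 3 with a learnable Gaussian-mixture prior and a piecewise affine (ReLU) decoder, where the KL term is estimated by single-sample Monte Carlo as log q(z | x, x′, y) − log p(z). The sizes, binary target, and all names are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class ConditionalVAE(nn.Module):
    def __init__(self, d_x=1, d_z=16, n_components=16, hidden=128):
        super().__init__()
        # Encoder q(z | x, x', y); y is assumed binary here, as in the toy problem.
        self.encoder = nn.Sequential(
            nn.Linear(2 * d_x + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2 * d_z))
        # Decoder p(y | x, x', z); ReLU keeps it piecewise affine, as identifiability requires.
        self.decoder = nn.Sequential(
            nn.Linear(2 * d_x + d_z, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Learnable Gaussian-mixture prior p(z).
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.mu = nn.Parameter(torch.randn(n_components, d_z))
        self.log_sigma = nn.Parameter(torch.zeros(n_components, d_z))

    def prior(self):
        return D.MixtureSameFamily(
            D.Categorical(logits=self.logits),
            D.Independent(D.Normal(self.mu, self.log_sigma.exp()), 1))

    def elbo(self, x, x_prime, y):
        mu_q, log_sigma_q = self.encoder(torch.cat([x, x_prime, y], -1)).chunk(2, -1)
        q = D.Independent(D.Normal(mu_q, log_sigma_q.exp()), 1)
        z = q.rsample()  # reparameterized sample from the latent posterior
        log_lik = D.Bernoulli(
            logits=self.decoder(torch.cat([x, x_prime, z], -1))).log_prob(y).sum(-1)
        kl = q.log_prob(z) - self.prior().log_prob(z)  # Monte Carlo KL estimate
        return (log_lik - kl).mean()
```

Training maximizes `elbo` (i.e., minimizes its negative) with a stochastic optimizer; after convergence, the average of the `kl` term over the dataset is the quantity we read off as the severity of incidental correlation.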
![6_image_0.png](6_image_0.png)

Figure 3: Our VAE has a mixture prior over z, where C is the mixture component.

A key requirement in specifying our model is that z and y are conditionally dependent given {x, x′}, since this is what allows us to model incidental correlation with the latent posterior. This can also be achieved with an alternative model with edges from z to {x, x′}. These edges are present in the SGP, as seen in Fig. 1b. However, it is unnecessary to include these edges in our model, since {x, x′} are always conditioned on. The only unblocked path from z to y conditioned on {x, x′} is the edge from z to y, so we chose to make z marginally independent of {x, x′} in our model. This has the practical benefit of simplifying the implementation of the KL divergence between the latent posterior and prior.

To summarize, we train an identifiable VAE in order to detect incidental correlation, where the latent posterior encodes the spurious correlations between the inputs and target. This allows us to interpret the KL divergence between the latent posterior and prior as the severity of incidental correlation. This value should be zero when datasets have higher likelihood under the MGP relative to the SGP, since the inputs are sufficient for modeling the target. However, this value will be positive if, due to incidental correlation, there is additional spurious information that is predictive of the target.

There are three conditions under which we are correct to interpret the KL divergence between the latent posterior and prior as the severity of incidental correlation. First, if our causal graph in Fig. 1b correctly describes the true DGP, then any information in addition to {x, x′} that is useful for predicting y is spurious. Second, if the VAE is identifiable, the latent posterior represents this spurious information. Third, if the VAE is trained until convergence, the KL term is minimized, meaning that it is only positive when the spurious information is present.

## 5 Experiments

## 5.1 Toy Problem

We begin by verifying our claims with a toy problem with synthetic data. Our toy problem is designed so that in the large data regime, multimodal learning successfully utilizes all modalities. We then reduce the size of the dataset, while keeping everything else the same. This gradually induces incidental correlation, which subsequently leads to modality underutilization.

The prediction task is inspired by the exclusive or (XOR) operation. XOR is a logical connective that outputs true if exactly one of two input conditions is true. It is relevant from the perspective of comparing multimodal and unimodal learning, because knowing only one of the two input conditions bears no information on the output value. The DGP is

$$\begin{aligned}
\mathbf{u} &\sim \mathcal{N}(\mathbf{0}, I), \\
\mathbf{x} &\sim \mathcal{N}(\mathbf{u}_J + b \cdot \mathbf{1}, \sigma^2 I), \\
\mathbf{x}' &\sim \mathcal{N}(\mathbf{u}_{-J}, \sigma^2 I), \\
Y &:= \operatorname{Ber}(\sigma(100\, \mathbf{x}^\top \mathbf{x}')),
\end{aligned}$$

where $\mathbf{u} \in \mathbb{R}^{2D}$, $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^{D}$, and $Y \in \{0, 1\}$. Here, we set b = 1.5 and σ = 0.1, and provide additional results with b = 3 and σ = 1 in the appendix. The mean of x is a random subset of u that is offset by b · 1. This offset serves an important purpose, which we elaborate on in the next paragraph. The set of indices J is a randomly sampled half of the full set of indices {1, . . . , 2D}. We denote the remaining half of indices as −J = {1, . . . , 2D} \ J. This DGP is an MGP, since the target is defined without involving u. Since its definition includes an interaction between all modalities, classifying the target requires using all modalities.
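For reference, a minimal numpy sketch of sampling from this DGP; the random index split and the logistic sigmoid follow the definitions above, while the function and variable names are our own.

```python
import numpy as np

def sample_toy_dgp(n, d=1, b=1.5, sigma=0.1, seed=0):
    """Sample n draws of (x, x', y, u) from the toy MGP defined above."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(2 * d)
    J, not_J = perm[:d], perm[d:]               # random halves of {1, ..., 2D}
    u = rng.normal(size=(n, 2 * d))             # u ~ N(0, I)
    x = rng.normal(u[:, J] + b, sigma)          # x ~ N(u_J + b·1, σ²I)
    x_prime = rng.normal(u[:, not_J], sigma)    # x' ~ N(u_{-J}, σ²I)
    s = np.sum(x * x_prime, axis=1)
    p = 0.5 * (1.0 + np.tanh(50.0 * s))         # numerically stable σ(100·xᵀx')
    y = rng.binomial(1, p)                      # Y ~ Ber(σ(100·xᵀx'))
    return x, x_prime, y, u

x, x_prime, y, u = sample_toy_dgp(n=400)
```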
However, as we will show, a spurious solution emerges in the small data regime due to the randomness of sampling. In order to explain this visually, we illustrate the prediction task in Fig. 4 in the simplest setting with scalar inputs, i.e. D = 1. As seen in Fig. 4a, in the large data regime, a multimodal neural network can learn to predict Y = 1 when the inputs have the same sign, and Y = 0 otherwise. In contrast, a unimodal ensemble cannot express this decision rule because it considers each input separately. Therefore, multimodal learning generalizes better than unimodal learning in the large data regime. Fig. 4b shows that the situation changes in the small data regime. Since we offset the mean of X by b = 1.5, a unimodal neural network can accurately classify Y while only looking at X′. Therefore, the predictive advantage of multimodal learning over unimodal learning deteriorates in the small data regime.

![7_image_1.png](7_image_1.png)
![7_image_0.png](7_image_0.png)

Figure 4: Toy problem with scalar inputs. (a) In the large data regime, both modalities are required in order to classify the target. (b) In the small data regime, the target can be classified using a single modality.

In the following experiments, we sample several datasets from the DGP that vary only in the number of examples N ∈ {100, 400, 1600, 6400, 25600}. We then randomly select 60% as the training set, 20% as the validation set, and the remaining 20% as the test set. We train for a certain number of epochs M that depends on N, and early stop when the validation loss does not improve for ⌊0.1 · M⌋ epochs. A table with the values of M and N is in the appendix. We use a batch size of 128 for all experiments. We show the results for D = 1 here, and those for D = 16 in the appendix. For each N, we train ten models starting from different randomly-initialized parameters, and report the mean and standard deviation.

## 5.1.1 Detecting Incidental Correlation with u Observed

We detect incidental correlation in two ways: using a simple heuristic with u observed, and with our VAE-based detection method described in Sec. 4, where u is unobserved. In both cases, we show that incidental correlation emerges in the small data regime. The following heuristic with u observed serves as a sanity check, since if we cannot detect incidental correlation with u observed, we should not be able to detect it when it is unobserved. We define an alternative DGP where Y is redefined as

$$Y := \operatorname{Ber}(\sigma(\tau \mathbf{x}^\top \mathbf{x}' + \alpha^\top \mathbf{u})).$$

This alternative DGP is an SGP, since there is an edge from u to Y. We then compute

$$\operatorname*{arg\,max}_{\alpha} \sum_{n=1}^{N} \log P(y^{(n)} \mid \mathbf{x}^{(n)}, \mathbf{x}'^{(n)}, \alpha^\top \mathbf{u}^{(n)})$$

on the training set, and report the value of ∥α∥ corresponding to the maximum log-likelihood on the validation set. The only trainable parameter is $\alpha \in \mathbb{R}^{2D}$, which we optimize using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10⁻³. Since D was sampled from an MGP, where u and Y are conditionally independent given the inputs, α⊤u should play no part in maximizing the likelihood. Therefore, we can interpret ∥α∥ as the degree of incidental correlation. The results are shown in Fig. 5a. As expected, ∥α∥ is centered near zero in the large data regime, and increases as the dataset becomes smaller. This confirms that incidental correlation emerges in the small data regime.
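A minimal PyTorch sketch of this heuristic follows, assuming τ is fixed at its ground-truth value of 100 and omitting the validation-based model selection; all names and the inline data generation are our own.

```python
import torch

torch.manual_seed(0)
n, d = 100, 1                                   # a small dataset with D = 1
u = torch.randn(n, 2 * d)
x = u[:, :d] + 1.5 + 0.1 * torch.randn(n, d)    # J taken as the first half, for brevity
x_prime = u[:, d:] + 0.1 * torch.randn(n, d)
y = torch.bernoulli(torch.sigmoid(100.0 * (x * x_prime).sum(-1)))

alpha = torch.zeros(2 * d, requires_grad=True)  # the only trainable parameter
opt = torch.optim.Adam([alpha], lr=1e-3)
for _ in range(2000):
    logits = 100.0 * (x * x_prime).sum(-1) + u @ alpha  # tau fixed at 100 (assumption)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
print(alpha.norm().item())  # larger ||alpha|| indicates more incidental correlation
```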
![8_image_0.png](8_image_0.png)

(a) With u observed, the norm of the optimal α increases as we reduce the size of the dataset.

![8_image_1.png](8_image_1.png)

(b) With u unobserved, the KL divergence between the latent posterior and prior increases as we reduce the size of the dataset. This pattern is more discernible with the identifiable VAE.

Figure 5: Detecting incidental correlation on the toy problem with D = 1, b = 1.5, and σ = 0.1. Reducing the size of the dataset makes incidental correlation emerge, both when u is observed and unobserved.

## 5.1.2 Detecting Incidental Correlation With u Unobserved

Given the success of our sanity check with u observed, we move on to detecting incidental correlation with our VAE-based method, where u is unobserved. Our VAE consists of an encoder and a decoder, which are both implemented using MLPs with two hidden layers with 128 neurons each, and ReLU activations. This choice of activation function in the decoder satisfies an assumption for achieving identifiability (Kivva et al., 2022). We additionally train a vanilla VAE (Kingma & Welling, 2014) with the same hyperparameters except for sigmoid activations in the decoder, which violates the assumptions of Kivva et al. (2022). We refer to this as the unidentifiable VAE. This ablation study allows us to check whether identifiability helps us draw correct conclusions based on the latent posterior.

The two VAEs have the same encoder, and their decoders are also the same except for the choice of activation function. Their biggest difference is in their prior over the latent variable z ∈ R^{16}. For the encoder, two MLPs each take as input the concatenation of {x, x′, y} and return the mean and covariance parameters of the multivariate Gaussian distribution q(z | x, x′, y). For the decoder, an MLP takes as input the concatenation of {x, x′, z} and outputs the values of the categorical distribution p(y | x, x′, z). As for the prior, in our identifiable VAE, we use a Gaussian mixture prior to satisfy an assumption from Kivva et al. (2022). There are 16 mixture components, where each component is Gaussian with diagonal covariance. The mixture components are uniformly initialized, and the Gaussian parameters are Xavier normal initialized. All parameters of the mixture prior are updated during training. The prior in our unidentifiable VAE is Gaussian with zero mean and unit covariance. The parameters of the prior are updated during training for the identifiable VAE, but not for the unidentifiable VAE. Both VAEs are trained with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10^{-4}.

Our results in Fig. 5b show that for the identifiable VAE, the KL divergence between the latent posterior and prior is centered near zero in the large data regime, but increases as the dataset becomes smaller. This means that incidental correlation emerges in the small data regime, as we expected. This relationship does not hold for the unidentifiable VAE, whose posterior collapses. This supports the theory of Kivva et al. (2022), and confirms the importance of achieving identifiability when drawing conclusions based on the latent posterior.
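A minimal sketch of the identifiable VAE described above, assuming PyTorch. The initialization is simplified relative to the uniform/Xavier scheme in the text, and the single-sample Monte Carlo estimate of the KL term is one concrete choice, since the KL to a Gaussian mixture has no closed form.

```python
import torch
import torch.nn as nn
import torch.distributions as td

class IdentifiableCVAE(nn.Module):
    """Conditional VAE with a trainable Gaussian mixture prior over z."""

    def __init__(self, x_dim, y_dim=1, z_dim=16, hidden=128, n_components=16):
        super().__init__()
        def mlp(d_in, d_out):
            return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d_out))
        # Encoder q(z | x, x', y): separate heads for mean and log-variance.
        self.enc_mu = mlp(2 * x_dim + y_dim, z_dim)
        self.enc_logvar = mlp(2 * x_dim + y_dim, z_dim)
        # Decoder p(y | x, x', z); ReLU activations keep it piecewise affine.
        self.dec = mlp(2 * x_dim + z_dim, y_dim)
        # Trainable mixture prior p(z) (simplified initialization).
        self.mix_logits = nn.Parameter(torch.zeros(n_components))
        self.prior_mu = nn.Parameter(0.1 * torch.randn(n_components, z_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_components, z_dim))

    def prior(self):
        comp = td.Independent(
            td.Normal(self.prior_mu, (0.5 * self.prior_logvar).exp()), 1)
        return td.MixtureSameFamily(td.Categorical(logits=self.mix_logits), comp)

    def forward(self, x, x_prime, y):
        # x, x_prime: (B, x_dim); y: (B, y_dim).
        h = torch.cat([x, x_prime, y], dim=-1)
        q = td.Independent(
            td.Normal(self.enc_mu(h), (0.5 * self.enc_logvar(h)).exp()), 1)
        z = q.rsample()
        logits = self.dec(torch.cat([x, x_prime, z], dim=-1))
        # Monte Carlo KL(q || p): no closed form against a mixture prior.
        kl = q.log_prob(z) - self.prior().log_prob(z)
        return logits, kl
```

The unidentifiable variant would differ only in its decoder activations (sigmoid instead of ReLU) and in using a fixed standard Gaussian prior.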
## 5.1.3 Comparing Multimodal And Unimodal Learning

Having detected incidental correlation, we move on to comparing multimodal and unimodal learning in its presence. Our multimodal neural network is implemented using an MLP which takes as input the concatenation of {x, x′}, and outputs the values of the Bernoulli distribution p(y | x, x′). The ensemble of unimodal neural networks consists of two MLPs. The MLPs separately take as input x and x′, and output the values of p(y | x) and p(y | x′). These unimodal predictions are averaged to form the ensemble prediction. All of these MLPs have two hidden layers with 128 neurons each, and ReLU activations. They are trained with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10^{-3}.

We interpret the difference in log-probability of the target given the inputs between the multimodal neural network and the unimodal ensemble as a proxy for modality underutilization. We refer to this difference as the generalization gap, and use it as the vertical axis in our results in Fig. 6. Our results show that the multimodal neural network generalizes better than the unimodal ensemble in the large data regime, but this difference erodes when datasets become smaller. This serves as strong evidence that incidental correlation contributes to modality underutilization.

![9_image_0.png](9_image_0.png)

Figure 6: Comparing multimodal and unimodal learning on the toy problem with D = 1, b = 1.5, and σ = 0.1. Multimodal learning generalizes better than unimodal learning in the large data regime, but the difference narrows as we reduce the size of the dataset.
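A minimal sketch of the comparison metric, assuming PyTorch and the Bernoulli targets of the toy problem; the function names are our own.

```python
import torch

def ensemble_log_prob(net_x, net_xp, x, x_prime, y):
    """Log-probability of the target under the unimodal ensemble: the two
    unimodal predictive distributions are averaged before taking the log."""
    p_x = torch.sigmoid(net_x(x)).squeeze(-1)
    p_xp = torch.sigmoid(net_xp(x_prime)).squeeze(-1)
    p = (0.5 * (p_x + p_xp)).clamp(1e-6, 1.0 - 1e-6)
    y = y.float()
    return (y * p.log() + (1.0 - y) * (1.0 - p).log()).mean()

# The generalization gap (proxy for modality underutilization) is then the
# multimodal test log-probability minus the ensemble's:
# gap = multimodal_log_prob - ensemble_log_prob(net_x, net_xp, x_te, xp_te, y_te)
```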
## 5.2 VQA v2.0 and NLVR2

Having verified our claims on the toy problem, we move on to realistic settings with two popular benchmark visual language datasets called VQA v2.0 (Goyal et al., 2017) and NLVR2 (Suhr et al., 2019), and a state-of-the-art neural network architecture called FIBER (Dou et al., 2022). Both of these datasets are designed to be less systematically biased than their respective first iterations. Therefore, they represent ideal test beds for us to demonstrate that incidental correlation can occur in the absence of systematic dataset biases. Similar to our experiments with the toy problem, our experiments in this section consist of two stages, where we first detect the presence of incidental correlation, and then compare the generalization gap between multimodal and unimodal learning.

**VQA v2.0** We experiment with VQA v2.0 (Goyal et al., 2017), which is a revised version of VQA v1.0 (Antol et al., 2015) that is constructed to be free of systematic biases. Models trained on VQA v1.0 were frequently observed to ignore the image when answering a question. This is thought to occur because the dataset exhibits strong correlations between the questions and answers. To remedy this, the authors of VQA v2.0 ensured that each question in the dataset is associated with a pair of similar images that result in two different answers. The dataset consists of images and questions as the inputs, and answers as the target. Each target is a vector representing the 3,129 most common answers in the training and validation sets. Each element of this vector represents an answer, and the values are in [0, 1] to reflect the labelers' uncertainty. The log-probability of the target is the sum of log-probabilities of the individual answers. This reflects standard practice when working with this dataset.

We construct several versions of the dataset that vary only in their number of examples. To do so, we first merge the training and validation set to use as the full set of examples. We do not include the test set, because the ground-truth labels are not publicly available for it. The creators of the dataset indexed these examples in terms of unique images, where each image is associated with a variable-sized set of questions and answers. To sample a dataset, we randomly sample N ∈ {500, 2000, 8000, 32000, 128000} unique images from the full set of 217,522, and split them into training, validation, and test sets, using a 60%, 20%, and 20% split. Thus, the dataset sampling procedure is consistent across all choices of N. Similarly to the toy problem, in the following experiments we train for a certain number of epochs M for a given N, whose values are provided in the appendix. In all cases, we early stop when the validation loss stops improving for 20 epochs. We use a batch size of 512 for all experiments. For each N, we train five models from different random initializations, and report the mean and standard deviation.

**NLVR2** We additionally experiment with NLVR2 (Suhr et al., 2019), which is designed to be less systematically biased than NLVR (Suhr et al., 2017), its previous iteration. NLVR2 represents the binary classification task of answering a true or false question regarding a pair of images. Therefore, there are three inputs, where two are images and one is text. We merge the training, validation, and test examples together, for a total of 59,677. We then randomly sample N ∈ {200, 800, 3200, 12800, 51200} examples from the full set, and split them into training, validation, and test sets again using a 60%, 20%, and 20% split.

For both datasets, instead of processing the raw image and text inputs, we use the embeddings produced by a state-of-the-art neural network called FIBER (Dou et al., 2022) that was pretrained on a diverse set of datasets and tasks. We use a version of the model that was released by the authors that uses a Swin Transformer as the image backbone (Liu et al., 2021), and RoBERTa as the text backbone (Liu et al., 2019). The image and text embeddings are both in R^{768}. For VQA v2.0, we denote the image embedding as x, and the text embedding as x′. For NLVR2, we denote the two image embeddings as x1 and x2, and the text embedding as x′. During training, we freeze the weights of the image and text backbones that produce the embeddings.

## 5.2.1 Detecting Incidental Correlation

In order to detect incidental correlation, we train identifiable and unidentifiable VAEs, and report the KL divergence between the latent posterior and prior on the test set, using the weights that minimize the validation loss. The VAEs have encoders and decoders that are both implemented using MLPs with two hidden layers with 512 neurons each. The MLPs are parameterized with ReLU activations, except for the decoder of the unidentifiable VAE, which uses sigmoid activations. The choice of sigmoid activations leads the unidentifiable VAE to violate the piecewise affine decoder assumption in Kivva et al. (2022).

For both VAEs, the latent variable z is in R^{512}. The identifiable VAE has a mixture prior with 128 components, where each component is Gaussian with diagonal covariance. Both the categorical distribution over the components, and the mean and diagonal entries of the components are randomly initialized and updated during training. The components are uniformly initialized, and the Gaussian parameters are Xavier normal initialized. The prior for the unidentifiable VAE is Gaussian with mean zero and unit covariance, and its parameters are not updated during training. Both VAEs are optimized with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10^{-4}.

In Fig. 7, we show our results for detecting incidental correlation in VQA v2.0 and NLVR2. The conclusions are similar to what we saw for the toy problem.
With the identifiable VAE, the KL divergence between the latent posterior and prior is close to zero in the large data regime. This implies that the dataset is consistent with the MGP, where there are no spurious paths between the inputs and targets. However, the KL term increases as we reduce the size of the dataset. This is because small datasets have higher likelihood under the SGP, where there are spurious paths between the inputs and target. These spurious correlations are encoded into the latent posterior, making it diverge from the prior.

In contrast, the results with the unidentifiable VAE are unclear. For both VQA v2.0 and NLVR2, the posterior of the unidentifiable VAE collapses when it does not for its identifiable counterpart. Also, the non-monotonicity of the KL term with respect to dataset size makes detecting incidental correlation difficult. Similar to the toy problem, these results support the theory of Kivva et al. (2022), and demonstrate that identifiability is critical when drawing conclusions based on the latent posterior.

![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png) ![11_image_3.png](11_image_3.png)

Figure 7: Results on VQA v2.0 and NLVR2 for detecting incidental correlation. Incidental correlation emerges in the small data regime, and this pattern is more discernible with the identifiable VAE.

## 5.2.2 Comparing Multimodal And Unimodal Learning

We now compare how a multimodal neural network generalizes relative to a unimodal ensemble. This generalization gap is a proxy for modality underutilization. For both datasets, the multimodal and unimodal neural networks are implemented as MLPs with two hidden layers with 512 units each, and ReLU activations, which are trained with the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10^{-4}. For VQA v2.0, the multimodal neural network takes in the concatenation of {x, x′} and outputs log p(y | x, x′). Since there are three inputs in NLVR2, the multimodal neural network takes in the concatenation of {x1, x2, x′} and outputs log p(y | x1, x2, x′). The unimodal ensemble for VQA v2.0 consists of two MLPs that separately take in x and x′, and output p(y | x) and p(y | x′). Similarly, for NLVR2, three MLPs take in x1, x2 and x′, and output p(y | x1), p(y | x2), and p(y | x′). In both cases, the ensemble output is the log of the average of the unimodal outputs.

Our results in Fig. 8 show that for both VQA v2.0 and NLVR2, the multimodal neural network generalizes better than the unimodal ensemble in the large data regime. This represents the ideal setting where multimodal learning works as intended. However, as datasets become smaller, multimodal learning no longer generalizes better than the unimodal ensemble. This is because incidental correlation makes the inputs and target spuriously correlated, and the multimodal neural network learns to predict the target using these spurious paths.

![11_image_4.png](11_image_4.png) ![11_image_5.png](11_image_5.png)

(a) VQA v2.0

(b) NLVR2

Figure 8: Results on VQA v2.0 and NLVR2 for comparing multimodal and unimodal learning. The generalization gap between multimodal and unimodal learning narrows in the small data regime.

## 6 Conclusion

We have demonstrated using both synthetic and real data that incidental correlation emerges in the small data regime, and leads to modality underutilization.
The most important takeaway is that when pursuing multimodal learning, it is not enough to just worry about systematic dataset biases, or neural network architectures and optimization algorithms. Our results show that even in an ideal setting where these concerns are met, and multimodal neural networks should successfully utilize all modalities, they fail to do so in the small data regime. In addition to the existing concerns, we must also acquire a sufficient amount of data in order for multimodal learning to be successful. When data is scarce, it may be futile to try to improve multimodal learning without resolving the issue of incidental correlation. Our proposed method enables practitioners to check whether incidental correlation is present in their datasets, and determine whether they need to collect more data. So far, we have reported the issue of incidental correlation, and shown that it is problematic for multimodal learning. However, we have not yet proposed a way to mitigate it. We believe this is a promising direction for future work, since this can improve the efficacy of multimodal learning in data-scarce settings. One example is to use our method in the acquisition step of active learning. We can collect data to reduce the KL divergence between the latent posterior and prior, which corresponds to collecting data that does not exhibit incidental correlation. ## Acknowledgments This work was supported by grants from the National Institutes of Health (P41EB017183), the National Science Foundation (HDR-1922658, CHE-2231174, DMS-2310831), the Gordon and Betty Moore Foundation (9683), Hyundai Motor Company (Uncertainty in Neural Sequence Modeling), Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI), and the Office of Naval Research (N00014-23-1-2590). ## References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the behavior of visual question answering models. In *EMNLP*, 2016. Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. Don't just assume; look and answer: Overcoming priors for visual question answering. In *CVPR*, 2018. Raj Agrawal, Chandler Squires, Neha Prasad, and Caroline Uhler. The decamfounder: Non-linear causal discovery in the presence of hidden variables. *arXiv*, 2021. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In *ICCV*, 2015. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. data2vec: A general framework for self-supervised learning in speech, vision and language. In *ICML*, 2022. Junwen Bai, Weiran Wang, and Carla P. Gomes. Contrastively disentangled sequential variational autoencoder. In *NeurIPS*, 2021. Tadas Baltrusaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal machine learning: A survey and taxonomy. *IEEE Trans. Pattern Anal. Mach. Intell.*, 2019. Elias Bareinboim and Judea Pearl. Controlling selection bias in causal inference. In *AISTATS*, 2012. Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Nan Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher J. Pal. A meta-transfer objective for learning to disentangle causal mechanisms. In *ICLR*, 2020. Rémi Cadène, Corentin Dancette, Hédi Ben-Younes, Matthieu Cord, and Devi Parikh. Rubi: Reducing unimodal biases in visual question answering. In *NeurIPS*, 2019. Domagoj Ćevid, Peter Bühlmann, and Nicolai Meinshausen. 
Spectral deconfounding via perturbed sparse linear models. *The Journal of Machine Learning Research*, 2020.

Alexander D'Amour. On multi-cause approaches to causal inference with unobserved confounding: Two cautionary failure cases and a promising alternative. In *AISTATS*, 2019.

Adji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. Avoiding latent variable collapse with generative skip models. In *AISTATS*, 2019.

Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. Coarse-to-fine vision-language pre-training with fusion in the backbone. In *NeurIPS*, 2022.

Fabian Falck, Haoting Zhang, Matthew Willetts, George Nicholson, Christopher Yau, and Chris C. Holmes. Multi-facet clustering variational autoencoders. In *NeurIPS*, 2021.

Benjamin Frot, Preetam Nandy, and Marloes H Maathuis. Learning directed acyclic graphs with hidden variables via latent Gaussian graphical model selection. *arXiv*, 2018.

Itai Gat, Idan Schwartz, Alexander G. Schwing, and Tamir Hazan. Removing bias in multi-modal classifiers: Regularization by maximizing functional entropies. In *NeurIPS*, 2020.

Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In *CVPR*, 2017.

Luigi Gresele, Julius von Kügelgen, Vincent Stimper, Bernhard Schölkopf, and Michel Besserve. Independent mechanism analysis, a new concept? In *NeurIPS*, 2021.

Caner Hazirbas, Lingni Ma, Csaba Domokos, and Daniel Cremers. Fusenet: Incorporating depth into semantic segmentation via fusion-based cnn architecture. In *ACCV*, 2017.

David Heckerman. Accounting for hidden common causes when inferring cause and effect from observational data. *arXiv*, 2018.

Jack Hessel and Lillian Lee. Does my multimodal model learn cross-modal interactions? it's harder to tell than you might think! In *EMNLP*, 2020.

Irina Higgins, Loïc Matthey, Arka Pal, Christopher P. Burgess, Xavier Glorot, Matthew M. Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In *ICLR*, 2017.

Drew A. Hudson and Christopher D. Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. In *CVPR*, 2019.

Aapo Hyvärinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. In *NIPS*, 2016.

Aapo Hyvärinen and Hiroshi Morioka. Nonlinear ICA of temporally dependent stationary sources. In *AISTATS*, 2017.

Aapo Hyvärinen, Hiroaki Sasaki, and Richard E. Turner. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In *AISTATS*, 2019.

Aapo Hyvärinen, Ilyes Khemakhem, and Ricardo Pio Monti. Identifiability of latent-variable and structural-equation models: from linear to nonlinear. *arXiv*, 2023.

Allan Jabri, Armand Joulin, and Laurens van der Maaten. Revisiting visual question answering baselines. In *ECCV*, 2016.

Dominik Janzing and Bernhard Schölkopf. Detecting confounding in multivariate linear models via spectral analysis. *Journal of Causal Inference*, 2018.

Sébastien Jean and Kyunghyun Cho. Context-aware learning for neural machine translation. *arXiv*, 2019.

Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In *IJCAI*, 2017.

Ilyes Khemakhem, Diederik P.
Kingma, Ricardo Pio Monti, and Aapo Hyvärinen. Variational autoencoders and nonlinear ICA: A unifying framework. In *AISTATS*, 2020. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR*, 2015. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In *ICLR*, 2014. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In *NIPS*, 2014. Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, and Bryon Aragam. Identifiability of deep generative models under mixture priors without auxiliary information. In *NeurIPS*, 2022. Sébastien Lachapelle, Pau Rodríguez, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In *CLeaR*, 2022. Yingzhen Li and Stephan Mandt. Disentangled sequential autoencoder. In *ICML*, 2018. Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. Foundations and recent trends in multimodal machine learning: Principles, challenges, and open questions. *arXiv*, 2022. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. *arXiv*, 2019. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *ICCV*, 2021. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In *ICML*, 2019. Romain Lopez, Natasa Tagasovska, Stephen Ra, Kyunghyun Cho, Jonathan K. Pritchard, and Aviv Regev. Learning causal representations of single cells via sparse mechanism shift modeling. In *CLeaR*, 2023. James Lucas, George Tucker, Roger B Grosse, and Mohammad Norouzi. Don't blame the elbo! a linear vae perspective on posterior collapse. In *NeurIPS*, 2019. Taro Makino, Krzysztof J. Geras, and Kyunghyun Cho. Generative multitask learning mitigates targetcausing confounding. In *NeurIPS*, 2022. Gemma E Moran, Dhanya Sridhar, Yixin Wang, and David M Blei. Identifiable deep generative models via sparse decoding. In *TMLR*, 2022. Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. Multimodal deep learning. In *ICML*, 2011. Judea Pearl. *Causality*. Cambridge university press, 2009. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In *NAACL-HLT*, 2018. Aahlad Manas Puli, Adler J. Perotte, and Rajesh Ranganath. Causal estimation with functional confounders. In *NeurIPS*, 2020. Rajesh Ranganath and Adler J. Perotte. Multiple causal inference with latent confounding. *arXiv*, 2018. Ali Razavi, Aäron van den Oord, Ben Poole, and Oriol Vinyals. Preventing posterior collapse with delta-vaes. In *ICLR*, 2019. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In *NIPS*, 2015. Sijie Song, Jiaying Liu, Yanghao Li, and Zongming Guo. Modality compensation network: Cross-modal adaptation for action recognition. *IEEE Trans. Image Process.*, 2020. Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. A corpus of natural language for visual reasoning. In ACL, 2017. 
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In *ACL*, 2019.

Ya Sun, Sijie Mai, and Haifeng Hu. Learning to balance the learning rates between various modalities via adaptive tracking factor. *IEEE Signal Process. Lett.*, 2021.

Dustin Tran and David M. Blei. Implicit causal models for genome-wide association studies. In *ICLR*, 2018.

Abhinav Valada, Rohit Mohan, and Wolfram Burgard. Self-supervised model adaptation for multimodal semantic segmentation. *Int. J. Comput. Vis.*, 2020.

Jingshu Wang, Qingyuan Zhao, Trevor Hastie, and Art B Owen. Confounder adjustment in multiple hypothesis testing. *Annals of Statistics*, 2017.

Weiyao Wang, Du Tran, and Matt Feiszli. What makes training multi-modal classification networks hard? In *CVPR*, 2020a.

Yikai Wang, Wenbing Huang, Fuchun Sun, Tingyang Xu, Yu Rong, and Junzhou Huang. Deep multimodal fusion by channel exchanging. In *NeurIPS*, 2020b.

Yixin Wang and David M Blei. The blessings of multiple causes. *Journal of the American Statistical Association*, 2019.

Yixin Wang and David M. Blei. A proxy variable view of shared confounding. In *ICML*, 2021.

Yixin Wang, David M. Blei, and John P. Cunningham. Posterior collapse and latent variable nonidentifiability. In *NeurIPS*, 2021.

Matthew Willetts and Brooks Paige. I don't need u: Identifiable non-linear ICA without side information. *arXiv*, 2021.

Nan Wu, Stanislaw Jastrzebski, Jungkyu Park, Linda Moy, Kyunghyun Cho, and Krzysztof J. Geras. Improving the ability of deep neural networks to use information from multiple views in breast cancer screening. In *MIDL*, 2020.

Nan Wu, Stanislaw Jastrzebski, Kyunghyun Cho, and Krzysztof J. Geras. Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks. In *ICML*, 2022.

Jin Zeng, Yanfeng Tong, Yunmu Huang, Qiong Yan, Wenxiu Sun, Jing Chen, and Yongtian Wang. Deep surface normal estimation with hierarchical RGB-D fusion. In *CVPR*, 2019.

## A Appendix

## A.1 Derivations

The ELBO is

$$\begin{aligned}
\log p(y\mid\mathbf{x},\mathbf{x}^{\prime}) &= \log\int p(y,z\mid\mathbf{x},\mathbf{x}^{\prime})\,dz \\
&= \log\mathbb{E}_{q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)}\!\left[\frac{p(y,z\mid\mathbf{x},\mathbf{x}^{\prime})}{q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)}\right] \\
&\geq \mathbb{E}_{q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)}\!\left[\log\frac{p(y\mid\mathbf{x},\mathbf{x}^{\prime},z)\,p(z)}{q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)}\right] \\
&= \mathbb{E}_{q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)}[\log p(y\mid\mathbf{x},\mathbf{x}^{\prime},z)] - D_{\mathrm{KL}}(q(z\mid\mathbf{x},\mathbf{x}^{\prime},y)\,\|\,p(z)),
\end{aligned}$$

where the inequality is Jensen's, and p(z | x, x′) = p(z) since z is marginally independent of the inputs in our model.
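A minimal sketch of how this bound is used as a training objective, assuming PyTorch and a model in the shape of the VAE sketched in Sec. 5.1.2; the function name is our own.

```python
import torch.nn.functional as F

def negative_elbo(model, x, x_prime, y):
    """Single-sample estimate of the negative ELBO: expected reconstruction
    term plus KL(q(z | x, x', y) || p(z))."""
    logits, kl = model(x, x_prime, y)  # e.g. the IdentifiableCVAE sketched earlier
    recon = F.binary_cross_entropy_with_logits(
        logits.squeeze(-1), y.squeeze(-1).float(), reduction="none")
    return (recon + kl).mean()
```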
## A.2 Results From Kivva Et Al. (2022)

We adopt the following definition of identifiability from Kivva et al. (2022). Let f♯P denote the pushforward measure of P by f. Let P be a family of probability distributions on R^m and F be a family of functions f : R^m → R^n.

1. For (P, f) ∈ P × F we say that the prior P is *identifiable (from f♯P) up to an affine transformation* if for any (P′, f′) ∈ P × F such that f♯P ≡ f′♯P′ there exists an invertible affine map h : R^m → R^m such that P′ = h♯P (i.e., P′ is the pushforward measure of P by h).

2. For (P, f) ∈ P × F we say that the pair (P, f) is *identifiable (from f♯P) up to an affine transformation* if for any (P′, f′) ∈ P × F such that f♯P ≡ f′♯P′ there exists an invertible affine map h : R^m → R^m such that f′ = f ◦ h^{−1} and P′ = h♯P.

f is said to be *weakly injective* if (i) there exists x_0 ∈ R^n and δ > 0 s.t. |f^{−1}({x})| = 1 for every x ∈ B(x_0, δ) ∩ f(R^m), and (ii) {x ∈ R^n : |f^{−1}({x})| = ∞} ⊆ f(R^m) has measure zero with respect to the Lebesgue measure on f(R^m). According to Kivva et al. (2022), ReLU networks are generally weakly injective under simple assumptions on their architecture. Assuming that the prior over the latent variable Z is a Gaussian mixture that factorizes over an unobserved discrete variable U, and the decoder is piecewise affine and weakly injective, then P(U, Z) is identifiable from the observed data distribution P(X) up to an affine transformation of Z.

## A.3 Implementation Details

In our experiments, we trained for a certain number of epochs M that depends on the size of the dataset N. Here are the values of M and N for the toy problem, as well as for VQA v2.0 and NLVR2.

Table 1: Toy problem

| N      | M     |
|--------|-------|
| 100    | 1,000 |
| 400    | 1,000 |
| 1,600  | 500   |
| 6,400  | 200   |
| 25,600 | 100   |

Table 2: VQA v2.0

| N       | M     |
|---------|-------|
| 500     | 1,000 |
| 2,000   | 1,000 |
| 8,000   | 500   |
| 32,000  | 200   |
| 128,000 | 100   |

Table 3: NLVR2

| N      | M     |
|--------|-------|
| 200    | 1,000 |
| 800    | 1,000 |
| 3,200  | 500   |
| 12,800 | 200   |
| 51,200 | 100   |

## A.4 Additional Experimental Results

![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png)

Figure 9: Detecting incidental correlation on the toy problem with D = 16, b = 1.5, and σ = 0.1.

![17_image_2.png](17_image_2.png)

Figure 10: Comparing multimodal and unimodal learning on the toy problem with D = 16, b = 1.5, and σ = 0.1.

![17_image_3.png](17_image_3.png) ![17_image_4.png](17_image_4.png)

Figure 11: Detecting incidental correlation on the toy problem with D = 1, b = 3, and σ = 1.

![17_image_5.png](17_image_5.png)

Figure 12: Comparing multimodal and unimodal learning on the toy problem with D = 1, b = 3, and σ = 1.

![18_image_1.png](18_image_1.png) ![18_image_0.png](18_image_0.png)

Figure 13: Detecting incidental correlation on the toy problem with D = 16, b = 3, and σ = 1.

![18_image_2.png](18_image_2.png)

Figure 14: Comparing multimodal and unimodal learning on the toy problem with D = 16, b = 3, and σ = 1.

![18_image_4.png](18_image_4.png) ![18_image_3.png](18_image_3.png)

Figure 15: Detecting incidental correlation on VQA v2.0 and NLVR2 with a smaller model with latent dimension 128 and 32 components in the mixture prior.
Review 1:

Summary: This paper studies the problem of detecting spurious correlations between different data modalities in the small sample size regime. The proposed method uses a conditional VAE model with a Gaussian mixture latent space for a response conditioned on two inputs, with a latent variable that represents a possible unobserved variable influencing the inputs and response. The ground truth situation is that the response is conditionally independent of the unobserved latent variable given the inputs, but it can become conditionally dependent in the small sample regime. The proposed method uses the KL term of VAE learning as an indicator of spurious correlations from low sample sizes. High KL is interpreted to mean that spurious correlations are being represented by z, and low KL is interpreted to mean that spurious correlations are not being learned because the encoder is not needed. Experiments on synthetic data and VQAv2 show that KL decreases with sample size.

Strengths and Weaknesses:

*Strengths:*
* The problem of understanding when multimodal models (or even unimodal models) make predictions based on spurious attributes of the data distribution is a very interesting and relevant direction.
* The intuitive direction of trying to model possible confounding information using the latent space of a VAE is a promising direction.

*Weaknesses:*
* A central part of the experiments involves comparing what is called an identifiable VAE with a learnable mixture latent space and a so-called unidentifiable standard VAE, based on the theoretical results from Kivva et al. 2022. I am not an expert on identifiable VAEs but from my understanding of the claims in Kivva et al. 2022 Section 3.4 "Classical VAE" it appears that classical VAEs are considered identifiable and a special case of the mixture VAE. Why does the paper claim that the classical VAE is unidentifiable in the context of Kivva et al.? I am confused on this point and it is central to the experiment section.
* The scope of the method and experiments seems very limited. The main method simply uses the KL divergence term of VAE learning to detect whether spurious correlations exist for small sample sizes. However, there is no heuristic presented for determining whether a sample size is sufficiently large to be free from spurious correlations. The method also does not include a constructive way of reducing spurious correlations when they are detected. Better practical applications would make the method more appealing.
* It is not entirely clear that spurious correlations are the reason why KL divergence is high for low sample sizes. Could there be other aspects of small sample sizes that cause this?
* The problem of small sample size learning in the context of multimodal models might be of limited interest. Investigating the method at scale would better display its potential.

Requested Changes:
* Please clarify why the vanilla VAE is presented as unidentifiable, especially with reference to Section 3.4 of Kivva et al. 2022.
* Is there a way to apply the method to large scale datasets which might not be balanced? For example, improving the balancing on VQA v1.

Broader Impact Concerns: n/a

==================================================

Review 2:

Summary: The paper addresses a key problem in multimodal learning known as "modality underutilization," where multimodal neural networks fail to utilize all modalities, leading to weaker generalization or biased predictions.
The authors introduce a concept called "incidental correlation" that leads to modality underutilization. The authors further propose a method to detect incidental correlation through latent variable modeling. They specify an identifiable variational autoencoder (VAE), allowing them to interpret the Kullback-Leibler divergence between the latent posterior and prior as the severity of incidental correlation. The paper conducts experiments with synthetic data and the VQA v2.0 dataset, validating the authors' theory of incidental correlation and demonstrating its effect on modality underutilization.

Strengths and Weaknesses:
- strengths
  - the paper discusses the important problem of "modality underutilization" and introduces a new concept called "incidental correlation".
  - the idea of using a VAE to identify "incidental correlation" is interesting
- weaknesses
  - the writing of the paper has multiple unclear remarks, for example
    - it is unclear how "incidental correlation" is different from well-known terms such as spurious correlation, confounding factors, or dataset bias, etc.
    - the authors create multiple terms such as "Multimodal generating process" and "Spurious generating process"; while the authors suggest that these two terms refer to two different scenarios, differentiated mainly by whether p(y|d(x)) equals p(y|x) or not, the names "Multimodal" and "Spurious" are not even orthogonal to each other.
    - it's unclear why "Small datasets that are sampled from the MGP, where u and y are conditionally independent given the inputs, can have higher likelihood under the SGP, where this independence does not hold". This seems to be a key assumption used in the paper; while this is an often observed phenomenon in practice, it does not seem to hold when the authors put it into their theoretical framework, if we are talking about uniform sampling.
    - "g(u) is a function that determines the amount of information that passes from u to y given the inputs": not sure what "amount of information" refers to.
  - the synthetic experiments setting seems too limited
    - would the synthetic results be different if we test under more varied settings of the parameters (1.5, 0.1)?
    - if the data are uniformly sampled, don't Figure 4(a) and Figure 4(b) have the same error rate when using a single modality? Then why would a model favor one in one case over the other in the other?
  - regarding the real dataset experiment
    - there are tons of datasets that are proven to have spurious features, e.g. [1]. Probably the authors can demonstrate the usage of their methods on more datasets with known spurious features; the results might be more convincing.
    - it might be better to demonstrate accuracy-improving experiments on popular datasets.

[1]. Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis

Requested Changes: Please address the weaknesses listed above.

Broader Impact Concerns: none noted

==================================================

Review 3:

Summary: Based on the observation that multimodal learning usually does not generalise better than unimodal learning, the authors hypothesize that this is because small datasets create the possibility of spurious correlations with an unobserved confounder in the small-data regime. The authors propose a method for detecting this \emph{incidental correlation} and show the plausibility of their hypothesis and the effectiveness of their method.
Strengths and Weaknesses:

## Strengths:
* The paper is super well written and I understand fully and approve of (most of) what's been done, the definition of incidental correlation and why this may hurt performance, and the broader context in which this paper may be important or limited (e.g. the case of Fig 2b).
* The experiments carefully validated their hypotheses and results are consistent across datasets and problems.
* Their method is practical and can be easily used for many problems.

## Weaknesses
* I understand the authors want to \emph{detect} incidental correlations, but do not further provide a method for improving methods that were affected by it. This is a tiny bit disappointing (although the authors are aware of it based on Conclusions).
* I feel that, although it is possible that a more complicated model (with an arrow from $u$ to $z$) can become a winning model in the small-data regime, there isn't any explanation or illustration of how this dependence could arise. This is more out of curiosity, but it would help strengthen the authors' hypotheses.
* There is a problem with the CVAE setup: the prior does not depend on the conditioning variables.
* The distinction between the identifiable and unidentifiable CVAEs needs more explanation. The cited paper of Kivva (2022) is quite dense, so the authors should explain briefly why choosing a single, fixed prior makes the CVAE unidentifiable, or just state the assumption needed for identifiability.

Requested Changes:

## Critical:
1. Please justify the use of an independent prior in the CVAE setup. Logically, u and y are conditionally independent given x and x'; this implies that KL[q(z|x, x', y)||p(z|x, x')] = 0, not KL[q(z|x, x', y)||p(z)] = 0. This is quite important to explain, or run experiments to justify which one is better empirically.
2. Please address this question regarding Section 4: is it the case that (at least theoretically) posterior collapse is not possible when the model is identifiable? And is it the case that the method of Kivva et al (2022) can always achieve identifiability? If so, under what conditions? Because if this is the case, then we can have conditions under which the identifiable model can avoid posterior collapse. This is conceptually an important chain of reasoning.
3. Section 5.2.1. A Gaussian mixture with 128 components is quite large, and the dimensionality of the z-space is also huge. Can the authors run some experiments with lower dimensions? Or is it not important? This helps practitioners know what kind of dimensionalities they should expect to run to detect incidental correlations.

## Suggestions
I have a few suggestions that may help readers understand the paper better
1. typo: page 2, 2nd last para. line 4: "edge makes which makes"
2. Writing: in Section 5, it would help if the authors could allude to the readers that there will be a few experiments with different techniques (with u observed and unobserved, and comparing multimodal vs unimodal learning). Even better, let us know why you are running these different comparisons that serve very similar purposes.
3. typo: Sec. 5.1.2, last para, first line: "show that the for the "
4. second to last line of page 8: if I understand correctly, it should be the log-probability of the "true" target, right? It doesn't hurt to clarify a bit more.
5. All figures: what are the errorbars? Are there any reasons not to run statistical tests on claims like "... is large when datasize is small"?
Could we do some t-test on comparison with zero, or on the slope of the lines?

Broader Impact Concerns: Don't think there are any substantial concerns.

==================================================

Review 4:

Summary: This work raises a concern that not all modalities are used to boost more accurate predictions in multimodal learning. Though existing works address the issue by ensuring no systematic biases in the data creation or by enlarging the neural network's capacity, the authors point out that such *modality underutilization* could still occur in the small data regime. They propose the assumption that instead of the ground-truth underlying data generating process, it's more likely that a different *spurious* data generating process is learnt. The authors propose an identifiable variational autoencoder, which conditions on the multimodal observations $x, x'$ and further uses the target $y$ to learn the posterior for a latent variable $z$ (encoder). The decoder reads $x, x', z$ to predict the target $y$. The KL divergence between the learnt posterior and the Gaussian mixture prior is defined as the severity of incidental correlation. A toy problem with synthetic data is illustrated to demonstrate the proposed method. The identifiable VAE is also further validated on the real-world dataset VQA 2.0.

Strengths and Weaknesses:

Pros:
1. The paper is written clearly and the motivations are clearly stated.
2. The method is concise and easy to plug in. From the description in the paper, the work could generalize to a variety of application domains.
3. One detailed toy example is provided to explain how the proposed method works, and it's further validated on a real-world dataset, VQA 2.0.

Cons:
1. Besides detecting whether incidental correlation is present or not, to determine the need for more data, I think another direction would be more interesting: whether the dataset is large enough to get rid of the incidental correlation issue. Furthermore, whether this work can propose the potential directions of which data to further collect.
2. Small datasets are more likely to suffer from the incidental correlation issue. While an unobserved var $u$ is a possibility, I would intuitively suspect another scenario more: small amounts of samples are not enough to depict the data distribution, so it's not that another DGP is more likely to be learnt, but instead one was "randomly" selected from multiple "eligible" DGPs. Also, I think it's a bit hard to determine if adding $u$ is enough to narrow down the DGP candidates to the ground-truth one.
3. Close-to-zero KL divergence is also often known as "over-regularization" (or posterior collapse as you mentioned). Though the Gaussian mixture is proposed to alleviate the issue, how do you pre-define or learn these Gaussians? Are they learnt and defined case-by-case as in the toy example?
4. Eq. (3) has a typo in $q(z|x,x',z)$. Pls correct.
5. While I generally agree all modalities should be used if necessary, I wonder if there are some cases where fewer modalities can make accurate predictions, and forcing the model to use all modalities hurts the performance on the contrary?
6. You mentioned multiple modalities like text, vision, speech, etc. in the introduction. Given that you place your work as a detection method, I think it's better to test the method on more datasets.

Requested Changes:
1. I think the Gaussian mixture is a critical module in this work. Pls elaborate on how you define or learn the Gaussian mixture. Is it dataset-dependent?
2.
It would be more convincing if you could test your method on more real-world datasets or incorporate more modalities to demonstrate its generalizability.
3. For latent variable modeling and disentangled representation learning related works, I would suggest adding [1,2] into the discussion.

[1] Li, Y. and Mandt, S., 2018, July. Disentangled sequential autoencoder. In International Conference on Machine Learning (pp. 5670-5679). PMLR.
[2] Bai, J., Wang, W. and Gomes, C.P., 2021. Contrastively disentangled sequential variational autoencoder. Advances in Neural Information Processing Systems, 34, pp.10105-10118.

Broader Impact Concerns: The Broader Impact Statement section is not presented, and I don't think it is applicable to this paper.

==================================================

Metareview:

Recommendation: Accept as is

Comment: All reviewers thought (1) the submission is well written, and (2) the problem studied here is important. Most reviewers did not engage in the discussion, unfortunately. Two reviewers remained unconvinced after reading the authors' responses. Two sticking points were: (i) *The experiments could be expanded to include more real-world datasets*: a new dataset was added in the latest revision. The synthetic data were designed to study the problem in the limited data regime, so I don't think this is an issue given the scope of the claims. (ii) *Whether the KL divergence between the posterior and prior is an appropriate metric to measure spurious correlation*: additional discussion has been added and experimental results have shown the utility of this metric. All other requested changes have been incorporated and discussed. I recommend acceptance.

==================================================
# Robust And Explainable Deep Hedging With Linearized Neural Network

Anonymous authors Paper under double-blind review

## Abstract

Deep hedging is a promising approach to risk management for financial derivatives through deep learning, yet it remains hindered by complex, resource-intensive training and the challenge of effectively integrating deep neural networks with hedging optimization. To overcome these issues, we introduce a robust and efficient linearized neural network architecture, seamlessly integrated with Black-Scholes' Delta, to streamline deep learning-based hedging optimization (DHLNN). Our approach enhances both the efficiency and interpretability of hedging strategies in derivative markets. The proposed model shows strong resilience to market fluctuations, effectively addresses action-dependence challenges, and achieves faster convergence compared to existing methods. Extensive simulations confirm the superior performance and cost-effectiveness of our method, under varying market conditions, when compared to state-of-the-art deep hedging models. These findings underscore the potential of DHLNN to significantly improve both convergence and hedging performance in derivative markets.

## 1 Introduction

The evolution of trading practices has transcended traditional, model-driven pricing and hedging techniques. In the dynamic landscape of contemporary financial markets, a more sophisticated, data-driven approach has become indispensable. Deep neural networks (DNNs) have emerged as powerful tools in this context, offering unprecedented representational power. By harnessing the strengths of DNNs alongside advancements in modern optimization algorithms, we can advance beyond the conventional reliance on Greeks as the primary risk management tool, paving the way for more robust and promising hedging strategies Buehler et al. (2019).

Deep hedging has emerged as the state-of-the-art framework for automating hedging operations Buehler et al. (2019), offering effective risk mitigation with an inherent consideration for path dependence. Notably, the No-Transaction Band network (NTB) has been proposed as a rapid method for determining optimal hedging strategies, enabling hedgers to respond swiftly to customer demands by providing faster and more accurate quotes Imaki et al. (2021). However, translating the deep hedging concept into practical application presents significant challenges. One of the most critical challenges in deep hedging is action-dependence, where the optimal hedging position at a subsequent time step is influenced by the current action. This dependency forces the neural network to navigate an expansive function space, complicating the optimization process Kallsen & Muhle-Karbe (2015). Furthermore, the non-convex nature of the loss surface in neural networks, riddled with saddle points, exacerbates this difficulty, often leading to convergence issues despite numerous gradient updates Karakida et al. (2019). For hedgers who must execute orders in a timely and accurate manner, this lack of convergence poses a significant risk.

In response to these challenges, several promising directions for enhancing deep hedging emerge from recent studies in machine learning. Analyzing the training dynamics of wide-width shallow networks has provided valuable insights into the optimization behavior of neural networks Mei et al. (2018); Novak et al. (2022).
Moreover, task-specific design integration into neural networks has frequently led to substantial advancements, as evidenced by breakthroughs in various deep learning applications Krizhevsky et al. (2012); Vaswani et al. (2017). These approaches underscore the potential for enhancing the convergence and stability of deep hedging models. Nevertheless, the landscape of deep hedging remains fraught with complexities. The inclusion of transaction costs, for instance, adds another layer of difficulty. Hedging in the presence of market frictions, such as transaction costs, necessitates sophisticated adjustments to the hedging strategy, which may not be adequately captured by conventional deep learning models. Additionally, ensuring that these models remain robust and reliable in dynamic and unpredictable market conditions remains an ongoing challenge. To address these challenges, we propose a novel approach that integrates a linearized neural network architecture with the well-established Black-Scholes' Delta. This design mitigates the complexities associated with action-dependence by making hedge positions a function of current market conditions rather than the entire history of previous actions. As a result, our model can efficiently manage high-volume orders, enabling rapid and precise adjustments to hedge positions in response to market changes. This capability is particularly critical in volatile markets or when handling large transaction volumes, where delays in decision-making can lead to significant losses. Moreover, our approach introduces an optimization strategy that spans the entire price trajectory of the underlying asset from the inception of the derivative to its maturity. This holistic method ensures that the sequence of hedge positions is optimized as a whole, capturing the full dynamics of the hedging process. By optimizing across the entire trajectory, our model delivers more consistent and effective management of derivative positions, ensuring alignment with the overall objective of minimizing risk or cost throughout the life of the derivative. In summary, the contributions of this work are as follows: - We propose a novel framework based on neural network linearization that addresses the core challenges of deep hedging. This framework offers a robust and interpretable solution by blending the strengths of neural networks with the clarity of linear models. - Our approach introduces an optimization strategy that spans the entire price trajectory, ensuring a holistic and effective method for managing derivative positions. This strategy is crucial for accurately capturing the dynamics of hedging over time. - By integrating the linearized neural network architecture with Black-Scholes' Delta, we overcome the challenge of action-dependence. This integration facilitates the management of high-volume orders and the rapid injection of liquidity into derivative markets, thereby enhancing overall market efficiency. - Extensive simulations demonstrate the resilience of our proposed approach under varying market conditions. The results highlight the model's superior convergence speed and robust performance, making it a more resilient and efficient approach to risk management in derivative markets. The rest of this paper is organized as follows. Section 2 introduces the background and related work. Section 3 formulates the hedging optimization problem, followed by Section 4 which explicates the proposed method. 
Section 6 shows the results where the underlying asset price follows geometric Brownian motion. The concluding remarks are given in Section 7.

## 2 Related Work

Derivatives are financial instruments whose value is derived from underlying assets such as equities, bonds, and currencies. Frequent trading of derivatives often leads to the initiation of short positions, which necessitates sophisticated hedging strategies to mitigate the associated risks. The value of a derivative closely tracks the fluctuations in the underlying asset, exhibiting a level of sensitivity that necessitates strategic interventions by hedgers. They manage this variability by executing transactions that precisely adjust their exposure to the underlying asset, thereby offsetting risk. This delicate balance between derivatives and their underlying assets emphasizes the intricate mechanics necessary for effective hedging within the complex landscape of financial markets.

In a frictionless and fully efficient market, perfect hedging can theoretically be achieved by employing the Black-Scholes replicating portfolio Black & Scholes (1973). However, the Black-Scholes framework assumes that derivatives can be perfectly replicated using underlying instruments, effectively rendering them redundant in such an idealized market Davis et al. (1993). In contrast, real-world markets are characterized by various frictions, such as transaction costs, which significantly complicate the optimization of hedging strategies Collins & Fabozzi (1991). Extending the Black-Scholes model to account for transaction costs has led to the development of discretely rebalanced portfolios Leland (1985), as well as approaches that approximate the optimal hedge in the mean-variance sense Grannan & Swindle (1996). Additionally, utility indifference pricing has been employed in several works to derive optimal hedging strategies Monoyios (2004); Henderson & Hobson (2004); Carmona (2008). Despite these advancements, achieving a truly optimal hedge that fully accounts for transaction costs remains an unresolved challenge.

While there is extensive literature on pricing models employing neural networks, only a limited number of studies have explored the application of neural networks to hedging strategies Ruf & Wang (2019). Buehler et al. (2019) demonstrated that deep hedging algorithms could effectively hedge a European option in the presence of market frictions, using convex risk measures as the loss function. Imajo et al. (2021) proposed a neural network architecture that incorporates widely accepted financial inductive biases, such as the amplitude and time-scale invariance of stock time series, to enhance performance. Furthermore, hybrid models combining parametric option pricing approaches like the Black-Scholes model with non-parametric machine learning techniques have been proposed to improve predictive accuracy by incorporating inductive biases that categorize options based on moneyness and time-to-maturity Das & Padhy (2017). Jang & Lee (2019) introduced generative Bayesian neural networks, consistent with the no-arbitrage pricing structure, to achieve better calibration and prediction performance for American option pricing compared to classical models. Neural networks have also been applied to analyze volatility surface movements, offering deeper insights into market dynamics Cao et al. (2020).
In response to the unresolved challenges in deep hedging, we introduce an efficient method that enhances training by fully leveraging the versatile representation capabilities of neural network models, while simultaneously harnessing the explainability of linear models as an approximation of the neural network. Our approach addresses the complexities of action-dependence, demonstrates resilience to varying market conditions, and exhibits superior convergence speed. By focusing on the linearization of neural network models and integrating them with deep hedging strategies, the proposed DHLNN model offers a more efficient and interpretable approach to optimizing hedge strategies in derivative markets. Not only does DHLNN outperform traditional hedging methods, but it also surpasses state-of-the-art deep hedging baselines, particularly in volatile market conditions, making it a valuable tool for traders and financial institutions aiming to enhance their hedging strategies and reduce risk.

## 3 Hedging Optimization

In this section, we formulate the optimal hedging problem in three steps. First, we describe the derivative market considering transaction costs. Next, we construct the optimal derivative hedging objective, which aims to mitigate liabilities by minimizing a convex risk measure. Finally, we formulate the optimal hedging problem in terms of the certainty equivalent of a convex risk measure, specifically utilizing the exponential utility framework within the context of indifference pricing.

The payoff of a derivative depends on the price trajectory of the underlying asset Hull & White (1987). When hedgers enter into derivative contracts, they commit to settling the resulting payoff at maturity. This commitment involves the obligation to either buy or sell the underlying asset to the derivative holder on the agreed-upon date at the specified price. The pricing of a derivative contract is intricately connected to the price movements of the underlying asset, providing hedgers with crucial insights for determining their optimal position. Consequently, hedgers expose themselves to potentially higher risks, as they must fulfill their liabilities when the derivatives are exercised. This can result in losses exceeding the initial options premium paid. Therefore, careful consideration of the underlying asset's price dynamics is imperative for effective risk management in derivative trading. The objective of hedging optimization is to design the hedging positions over the underlying asset price trajectories to minimize the capital required to be injected into the portfolio to balance their liabilities.

## 3.1 The Derivative Market Formulation

In financial markets, a sole tradable asset is often represented by its Weighted Average Price (WAP), which provides a comprehensive measure of its value. The order book, which lists all outstanding buy (bid) and sell (ask) orders, is the primary source of information for calculating the WAP. We examine a market characterized by discrete time steps

$$t=\{t_{0}=0,t_{1},t_{2},\cdots,t_{n}=T\}.\tag{1}$$

The WAP at a given time t_i, denoted by P_{t_i}, incorporates both the prices and the sizes of the orders. To calculate the WAP, we first determine the weighted average bid price and the weighted average ask price.
The weighted average bid price $P_{\mathrm{bid},t_i}$ is given by
$$P_{\mathrm{bid},t_{i}}=\frac{\sum_{j}(\mathrm{Bid\ Price}_{j}\times\mathrm{Bid\ Size}_{j})}{\sum_{j}\mathrm{Bid\ Size}_{j}},\tag{2}$$
where $j$ indexes the different bid prices and sizes. Similarly, the weighted average ask price $P_{\mathrm{ask},t_i}$ is calculated as
$$P_{\mathrm{ask},t_{i}}=\frac{\sum_{k}(\mathrm{Ask\ Price}_{k}\times\mathrm{Ask\ Size}_{k})}{\sum_{k}\mathrm{Ask\ Size}_{k}},\tag{3}$$
where $k$ indexes the different ask prices and sizes. The WAP $P_{t_i}$ is then determined by averaging the weighted average bid and ask prices as
$$P_{t_{i}}=\frac{P_{\mathrm{bid},t_{i}}+P_{\mathrm{ask},t_{i}}}{2}.\tag{4}$$
Over the period $0\leq t_{i}\leq T$, the series of WAPs for a sole tradable asset is represented as
$$P=\{P_{t_{i}}\mid P_{t_{i}}>0\}_{0\leq t_{i}\leq T}.\tag{5}$$
The WAP provides a fairer valuation of the asset by incorporating the sizes of the orders, which is crucial for making informed trading decisions. It offers a clearer picture of market liquidity, helping traders understand the market's depth and the potential impact of large orders.

When dealing with financial derivatives, it is important to specify how and when the payoff is settled. In our model, we assume that the payoff is settled at maturity $t=T$ and discounted at the zero risk-free rate. Maturity refers to the specific date on which the derivative contract expires. At this point, the obligations of the contract are fulfilled. Settlement at maturity means that any profits or losses from the derivative are realized at this end date. For example, in the case of a European call option, the holder has the right to buy the underlying asset at the strike price only on the maturity date $T$. Discounting is a financial process where future cash flows are adjusted to their present value. This is essential because a dollar today is worth more than a dollar tomorrow due to the potential earning capacity of the money. Discounting at the zero risk-free rate implies that we are using this rate to calculate the present value of the derivative's payoff at maturity, as in
$$P_{v}=\frac{P_{T}}{(1+r)^{T}},\tag{6}$$
where $P_v$ is the present value, $P_T$ is the future value (the payoff at maturity), $r$ is the risk-free rate, and $T$ is the time until maturity. Hence, under the assumption of a zero risk-free rate, the payoff at maturity does not need to be adjusted for present value, simplifying our calculations. The liabilities of the hedger holding a European call option can be expressed as
$$Z=\max(P_{T}-P_{s},0),\tag{7}$$
where $P_T$ is the price of the asset at maturity $T$ and $P_s$ is the strike price. The liabilities of a Lookback call option can be expressed as
$$Z=\max\left(\max\{P_{t_{i}}\mid P_{t_{i}}>0\}_{0\leq t_{i}\leq T}-P_{s},0\right),\tag{8}$$
where the maximum price $\max\{P_{t_i}\mid P_{t_i}>0\}_{0\leq t_i\leq T}$ is observed over the entire time period from $t_0$ to $T$. Mitigating investment risk involves strategically employing financial instruments or market strategies to counteract potential adverse price movements. Hedgers manage the risk associated with their liabilities by trading the underlying asset of the derivative.
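To make the order-book quantities and liabilities above concrete, the following is a minimal NumPy sketch of (2)-(8); the function and argument names are illustrative, not part of the model itself.

```python
import numpy as np

def wap(bid_prices, bid_sizes, ask_prices, ask_sizes):
    """Weighted average price from the order book, following (2)-(4)."""
    p_bid = np.sum(bid_prices * bid_sizes) / np.sum(bid_sizes)  # (2)
    p_ask = np.sum(ask_prices * ask_sizes) / np.sum(ask_sizes)  # (3)
    return 0.5 * (p_bid + p_ask)                                # (4)

def european_call_payoff(path, strike):
    """Liability Z = max(P_T - P_s, 0) of a European call, as in (7)."""
    return max(path[-1] - strike, 0.0)

def lookback_call_payoff(path, strike):
    """Liability of a lookback call on the running maximum, as in (8)."""
    return max(np.max(path) - strike, 0.0)
```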
The position of the underlying asset held by the hedger at each time step is indicated by a signed numerical quantity denoted as
$$\delta=\{\delta_{t_{i}}\mid\delta_{t_{i}}\in\mathbb{R}\}_{0\leq t_{i}\leq T},\tag{9}$$
where $\delta_0=0$. The hedgers need to determine $\delta_{t_i}$ based on the information available before $t_i$. The market imposes a transaction cost that is proportional to traded values, characterized by a cost rate $c\in[0,1)$. The total transaction costs incurred by the hedger until $t_{i+1}$ are denoted by
$$C_{t_{i+1}}=C_{t_{i}}+cP_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|,\tag{10}$$
where $C_0=0$. In practical scenarios, transaction costs comprise a range of factors, including the bid-ask spread, commission fees, temporary market impact, and other associated expenses.

## 3.2 Derivative Hedging Objective

Investors strategically hedge one investment by executing trades in another. The overall objective of investment hedging is to mitigate the impact of adverse events. For derivative instruments, hedgers employ a trading strategy denoted as $\delta$ for the underlying assets to offset the associated liabilities. Since all trading operations are self-financed, the liabilities of the hedger are reflected by the amount of additional cash $P_L$ injected into the portfolio. It is noteworthy that a negative cash injection implies the potential for the hedger to extract cash, which corresponds to a positive terminal portfolio value. Importantly, the primary objective is not terminal portfolio value maximization over the hedging strategy. Instead, the ultimate goal is to hedge the liability $Z$ at $T$ through strategic trading in the underlying asset, where the hedging objective should adhere to:
$$P_{L}-Z+\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right)=0.\tag{11}$$
We consider the mid-price of the underlying asset denoted by $P=\{P_{t_i}\mid P_{t_i}>0\}_{0\leq t_i\leq T}$. The hedger must establish an optimality criterion for the hedging strategy to provide the least amount of cash required to supplement the liabilities, implementing the optimal hedge while accounting for various costs and constraints. We formulate criteria to describe the target of hedging optimization, employing convex risk measures defined as
$$\min_{\delta}\quad\rho\Big(-Z+\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right)\Big),\tag{12}$$
where $\rho$ is a monotonically decreasing convex function with the cash-invariance property Ben-Tal & Teboulle (2007). The solution to (12) is referred to as the optimal hedging strategy for the position $-Z$. This strategy determines the minimal capital required to be added to the risky position, rendering it acceptable according to the risk measure. In essence, hedging optimization aims to find the minimum charge that the hedger needs to implement in order to make the terminal position acceptable when employing optimal hedging.
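The terminal position inside the risk measure (12) can be sketched in a few lines; the following is an illustrative NumPy implementation under the discrete market of Section 3.1, with `payoff` standing for the liability $Z$ and `cost_rate` for $c$.

```python
import numpy as np

def hedged_pnl(path, positions, payoff, cost_rate):
    """Terminal position -Z + sum_i delta_{t_i} dP_{t_i} - costs,
    i.e. the argument of the risk measure rho in (12)."""
    dP = np.diff(path)                              # P_{t_{i+1}} - P_{t_i}
    gains = np.sum(positions[:-1] * dP)             # sum_i delta_{t_i} * dP_{t_i}
    trades = np.abs(np.diff(positions))             # |delta_{t_{i+1}} - delta_{t_i}|
    costs = np.sum(cost_rate * path[:-1] * trades)  # proportional cost flow, cf. (10)
    return -payoff + gains - costs
```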
## 3.3 Certainty Equivalent Of Convex Risk Measures In Indifference Pricing

To achieve indifference between the position $-Z$ and the scenario without liability, which represents the acceptable case with no capital injection, we introduce the indifference price $q(Z)$ as:
$$q(Z)=\inf_{\delta}\ \rho\Big(-Z+\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right)\Big)-\inf_{\delta}\ \rho\Big(\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right)\Big).\tag{13}$$
Indifference pricing provides a natural and versatile framework for characterizing the optimality of hedging strategies and determining the fair price of derivatives in the presence of market frictions. The indifference price $q(Z)$ can be interpreted as the amount of capital that needs to be injected into the portfolio to eliminate the liability $Z$: injecting $q(Z)$ achieves the same expected utility as the case without liability. The optimal hedging and pricing problem is thus converted into an optimization problem over the convex risk measure. According to the exponential utility $U(x)=-\exp(-\lambda x)$, we define a continuous, non-decreasing, and convex loss function $L$ as
$$L(x)=e^{\lambda x}-\frac{1+\log(\lambda)}{\lambda},\tag{14}$$
where $\lambda>0$ is a risk aversion coefficient. Then, we establish an optimized certainty equivalent of the convex risk measure. This measure can be formulated as an optimization problem, aiming to minimize the certainty equivalent under the given loss function $L$:
$$\rho_{Z}=\inf_{\theta\in\mathbb{R}}\Big\{\theta+\mathbb{E}\Big[L\Big(Z-\theta-\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right)\Big)\Big]\Big\},\tag{15}$$
where $\theta$ is an optimization variable. Substituting the loss function $L(x)=e^{\lambda x}-\frac{1+\log(\lambda)}{\lambda}$ into the optimization problem, we get
$$\rho_{Z}=\inf_{\theta\in\mathbb{R}}\Big\{\theta+\mathbb{E}\big[e^{\lambda(Z-\theta-\sum_{t_{i}=0}^{T-1}(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}))}\big]-\frac{1+\log(\lambda)}{\lambda}\Big\}.\tag{16}$$
We denote $X$ as follows
$$X=Z-\sum_{t_{i}=0}^{T-1}\left(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}\right),\tag{17}$$
which simplifies our problem to
$$\rho_{Z}=\inf_{\theta\in\mathbb{R}}\Big\{\theta+\mathbb{E}[e^{\lambda(X-\theta)}]-\frac{1+\log(\lambda)}{\lambda}\Big\}.\tag{18}$$
We solve the optimization problem by differentiating with respect to $\theta$ and setting the derivative to zero to find the optimal $\theta$:
$$\frac{\partial}{\partial\theta}\Big\{\theta+\mathbb{E}[e^{\lambda(X-\theta)}]-\frac{1+\log(\lambda)}{\lambda}\Big\}=1-\lambda\mathbb{E}[e^{\lambda(X-\theta)}]=0.\tag{19}$$
Thus, we have
$$\lambda\mathbb{E}[e^{\lambda(X-\theta)}]=1\quad\Rightarrow\quad e^{-\lambda\theta}\,\mathbb{E}[e^{\lambda X}]=\frac{1}{\lambda}.\tag{20}$$
Taking the logarithm on both sides, we get
$$\log\mathbb{E}[e^{\lambda X}]-\lambda\theta=\log\Big(\frac{1}{\lambda}\Big)=-\log(\lambda),\tag{21}$$
which leads to
$$\theta=\frac{1}{\lambda}\log\big(\lambda\,\mathbb{E}[e^{\lambda X}]\big).\tag{22}$$
Substituting the optimal $\theta$ from (22) back into (18) yields, up to an additive constant $-\log(\lambda)/\lambda$ that affects neither the optimal strategy nor the indifference price, the entropic risk measure
$$\rho_{Z}=\frac{1}{\lambda}\log\Big(\lambda\,\mathbb{E}\big[e^{\lambda(Z-\sum_{t_{i}=0}^{T-1}(\delta_{t_{i}}\Delta P_{t_{i}}-C_{t_{i}}|\delta_{t_{i+1}}-\delta_{t_{i}}|P_{t_{i}}))}\big]\Big).\tag{23}$$
This completes the detailed derivation of converting the optimal hedging and pricing problem into an optimization problem using a convex risk measure with an exponential utility function. The fair price $q(Z)$ corresponds to the cash amount that renders the hedger indifferent between settling a liability with $q(Z)$ and having no liability, assuming optimal hedging. The optimal convex risk measure is construed as the residual risk of the derivative post-optimal hedging. Building on the foundation of the optimal hedge within the framework of the hedger's risk measure, the hedger quotes a price that compensates for the remaining risk after hedging.
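On Monte Carlo samples, the loss (14) and the closed form (23) reduce to a few lines. The sketch below is illustrative, with `X` holding samples of the residual defined in (17); no particular sampling scheme is assumed here.

```python
import numpy as np

def exp_loss(x, lam):
    """The loss L(x) = exp(lam * x) - (1 + log(lam)) / lam of (14)."""
    return np.exp(lam * x) - (1.0 + np.log(lam)) / lam

def entropic_risk(X, lam):
    """Sample analogue of (23): rho_Z = (1/lam) * log(lam * E[exp(lam * X)]),
    where X collects Monte Carlo samples of the residual in (17)."""
    return np.log(lam * np.mean(np.exp(lam * X))) / lam
```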
## 4 Deep Hedging With Linearized Neural Network

In addressing challenges related to convergence difficulties and non-explanatory training dynamics in neural network hedging, we introduce an approach known as deep hedging with a linearized neural network (DHLNN). This strategy leverages the outputs of a linearized neural network alongside the Black-Scholes delta to formulate an effective hedging strategy. The key innovation lies in transforming the complex, infinite-dimensional task of identifying an optimal hedging strategy into a more manageable, finite-dimensional challenge. Specifically, our focus is on determining optimal parameters for the linearized neural network, streamlining the overall optimization process. Additionally, the integration of the Black-Scholes delta enhances efficiency and holds the potential for substantial reductions in transaction costs. A significant advantage of employing a linearized neural network is its enhanced robustness in dynamic market conditions. This not only streamlines the hedging optimization process but also provides a clearer and more interpretable understanding of the training dynamics. This approach aligns well with the practicalities of financial markets, making it a valuable tool for risk management. In this section, we first formulate the neural model architecture. Second, we design the hedging model with robust and explainable properties. Finally, we provide the analysis of the model approximation by linearization.

## 4.1 Market Information

In structuring the market information, we incorporate three crucial factors for derivative pricing and hedging: the log moneyness of the underlying asset, the expiration time, and the volatility of the underlying asset. These factors provide a comprehensive representation of the market information relevant to derivative valuation and hedging.

Log moneyness is a measure that compares the current price of the underlying asset $P_{t_i}$ to the strike price $P_s$ of the derivative. It is calculated as the natural logarithm of the ratio between these two prices, $\log\frac{P_{t_i}}{P_s}$. This measure indicates whether an option is in-the-money, at-the-money, or out-of-the-money. In-the-money means the option would have a positive payoff if exercised today; at-the-money means the current price is roughly equal to the strike price; and out-of-the-money means it would not be profitable to exercise the option at present. Using the logarithm standardizes this measure and makes it symmetric for different strike prices, which aids in neural network training by mitigating issues related to scale differences.

The expiration time $T-t_i$ represents the remaining time until the derivative contract reaches its maturity $T$. This factor is critical because the value of a derivative, particularly an option, is highly sensitive to time. As the expiration date approaches, the time value of the option decreases. The closer the expiration date, the lesser the impact of potential future price movements on the derivative's value, making it an essential input for pricing models.

Volatility, denoted by $\sigma$, measures the degree of variation of the underlying asset's price over time. It reflects the market's expectation of the asset's price fluctuation and is a fundamental input in the valuation of derivatives. Higher volatility increases the potential for the underlying asset's price to move significantly, thus increasing the value of options. Conversely, lower volatility implies less price movement, reducing the option's value. For neural networks, volatility provides a critical measure of risk and price dynamics, influencing the model's ability to predict and hedge the value of derivatives accurately.

While these three factors are not the only market information available, they are representative and encompass much of the essential data needed for derivative pricing and hedging. Many other features can be derived from these factors to represent different aspects of market information. By including log moneyness, expiration time, and volatility, our neural network model captures the fundamental elements affecting the pricing and hedging of derivatives. This comprehensive approach ensures that the model is well-equipped to handle the complexities of financial markets, providing accurate and reliable outputs for risk management and trading strategies.

## 4.2 Model Function Formulation

The neural network processes market information as inputs $\mathbf{x}_{t_i}$ and produces a permissible band for the subsequent position, effectively addressing the issue of position-dependence. Let $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ denote the weights and biases of the $l$-th hidden layer, where $l=1,\ldots,L-1$. The parameters of the neural network model are represented by $\mathbf{w}=\{\mathbf{W}^{(l)},\mathbf{b}^{(l)}\}_{l=1}^{L-1}$ and $\mathbf{a}_1,\mathbf{a}_2\in\mathbb{R}^m$. Given an input $\mathbf{x}_{t_i}$, the neural network processes this input through $L-1$ hidden layers. The transformation starts with the first hidden layer, which computes
$$\mathbf{h}^{(1)}=\sigma(\mathbf{W}^{(1)}\mathbf{x}_{t_{i}}+\mathbf{b}^{(1)}).\tag{24}$$
Subsequent hidden layers apply the transformation
$$\mathbf{h}^{(l)}=\sigma(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\quad\text{for}\quad l=2,\ldots,L-1.\tag{25}$$
Finally, the output layer applies a linear transformation to the outputs of the last hidden layer. This transformation is weighted by the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$ to produce the two outputs $f_1(\mathbf{x}_{t_i})$ and $f_2(\mathbf{x}_{t_i})$.
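As an illustration of the forward pass (24)-(25) with two scalar outputs, the following PyTorch sketch uses the four-hidden-layer, 64-neuron configuration reported in Section 6; the tanh activation and the class name `BandNet` are assumptions of the sketch, not prescribed by the model.

```python
import torch
import torch.nn as nn

class BandNet(nn.Module):
    """Sketch of the network of Section 4.2: hidden layers as in (24)-(25)
    and a linear output layer producing f_1(x_{t_i}) and f_2(x_{t_i})."""

    def __init__(self, in_dim=3, width=64, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]  # h^(l) = sigma(W^(l) h^(l-1) + b^(l))
            d = width
        self.hidden = nn.Sequential(*layers)
        self.out = nn.Linear(width, 2)                  # weighted by a_1 and a_2

    def forward(self, x):
        f = self.out(self.hidden(x))
        return f[..., 0], f[..., 1]                     # f_1, f_2

# x_{t_i} = (log-moneyness, time to maturity, volatility), as in Section 4.1
net = BandNet()
f1, f2 = net(torch.tensor([[0.0, 30 / 365, 0.2]]))
```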
The model function, denoted as $f:\mathbb{R}^{d}\to\mathbb{R}^{2}$, with $L$ hidden layers of width $m$, is defined as
$$\Big\{f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\prod_{l=1}^{L}\Big[\sigma(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\Big]\Big\}_{k=1,2}.\tag{26}$$
The neural network representation facilitates the translation of the hedging problem into the optimization of a set of parameters $\mathbf{w}=\{\mathbf{W}^{(l)},\mathbf{b}^{(l)}\}_{l=1}^{L-1}$ of the neural network, guided by the convex risk measure in (23). The minimizer of (23) provides an approximate value for the indifference price, as demonstrated in (13). This comprehensive methodology offers a promising avenue for addressing challenges in neural network hedging within the context of financial markets.

The neural network's design effectively tackles the challenge of position-dependence, which refers to the difficulty of making predictions or decisions that are influenced by the specific position or state of a financial instrument. By using this approach, the model is able to incorporate various market factors and dependencies into its predictions, providing a flexible and adaptive mechanism for financial decision-making. This capability is crucial in financial contexts where market conditions and positions can vary widely, ensuring that the network's outputs remain relevant and accurate across different scenarios.

## 4.3 Robust And Explainable Hedging Model Design

Designing a robust and explainable hedging model involves developing a strategy that not only effectively mitigates risks but also provides clear insights into its decision-making process. To make the model more robust to dynamic market information and explainable in its training dynamics, we introduce a linearized neural network coupled with the Black-Scholes delta to formulate an effective hedging strategy.
## 4.3.1 Training Dynamic

We establish the dynamics of the model weights throughout the updating trajectory. To explicate the training dynamics of the transition from the current model $\mathbf{w}^{t}$ to a new model $\mathbf{w}$, we introduce an auxiliary variable
$$\mathbf{z}=\mathbf{w}^{t}+\tau(\mathbf{w}-\mathbf{w}^{t}),\tag{27}$$
where $0\leq\tau\leq1$. With $\mathbf{z}$ representing the model parameters, we obtain the first-order derivative of the model function as
$$\Big\{\frac{df_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})}{d\tau}=\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})\Big\}_{k=1,2}.\tag{28}$$
Integrating both sides over $\tau$ from 0 to 1 gives
$$\Big\{\int_{0}^{1}df_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})=\int_{0}^{1}\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})\,d\tau\Big\}_{k=1,2}.\tag{29}$$
By the fundamental theorem of calculus, the left-hand side equals the change of the model function along the trajectory,
$$\Big\{f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})=\int_{0}^{1}\frac{df_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})}{d\tau}\,d\tau\Big\}_{k=1,2},\tag{30}$$
and combining with (29), we have
$$\Big\{f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})=\int_{0}^{1}\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})\,d\tau\Big\}_{k=1,2},\tag{31}$$
which can be rewritten as
$$\Big\{f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})=\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})+\int_{0}^{1}\big(\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big)^{T}(\mathbf{w}-\mathbf{w}^{t})\,d\tau\Big\}_{k=1,2}.\tag{32}$$
Then, according to
$$\Big|\int_{0}^{1}\big(\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big)^{T}(\mathbf{w}-\mathbf{w}^{t})\,d\tau\Big|\leq\int_{0}^{1}\Big|\big(\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big)^{T}(\mathbf{w}-\mathbf{w}^{t})\Big|\,d\tau,\tag{33}$$
and by the Cauchy-Schwarz inequality, we obtain
$$\Big\{\Big|\big(\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big)^{T}(\mathbf{w}-\mathbf{w}^{t})\Big|\leq\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|_{2}\cdot\big\|\mathbf{w}-\mathbf{w}^{t}\big\|_{2}\Big\}_{k=1,2}.\tag{34}$$

## 4.3.2 Linearized Hedging Model

The Hessian captures the second-order behavior of the function, essentially measuring the rate of change of the gradient. If the Hessian is bounded, the gradient does not change too rapidly, which is crucial for our approximation. The rate of change in the gradients can be quantified by the Hessian of the model function at the current point $\mathbf{w}^{t}$, denoted by $\{\nabla^{2}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\}_{k=1,2}$. Since the Hessian represents the changing rate of the gradient with respect to the parameter variable $\mathbf{w}$, the approximated change induced in the gradient can be expressed as
$$\Big\{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})\big\|\approx\big\|\mathbf{z}-\mathbf{w}^{t}\big\|\cdot\big\|\nabla^{2}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|\Big\}_{k=1,2}.\tag{35}$$
If $\nabla^{2}f_{k}(\mathbf{x},\mathbf{w},\mathbf{a}_{k})$ is bounded, we can ensure that the relative change in the gradient is small. Specifically, if the Hessian remains constant along the trajectory, the relative change in the gradient due to this perturbation is
$$\frac{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})\big\|}{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|}\approx\frac{\big\|\mathbf{z}-\mathbf{w}^{t}\big\|\cdot\big\|\nabla^{2}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|}{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|}.\tag{36}$$
This ratio quantifies how much the gradient changes when moving from $\mathbf{w}^{t}$ to another point $\mathbf{z}$: it is proportional to the magnitude of the perturbation $\mathbf{w}-\mathbf{w}^{t}$ and inversely proportional to the magnitude of the gradient $\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\|$. If this ratio is small, the gradients at $\mathbf{w}^{t}$ and $\mathbf{z}$ are similar in magnitude and direction. As long as the Hessian remains bounded, the relative change in the gradient will be sufficiently small, ensuring that the linear approximation of the model function is accurate.
According to the analysis in Section 5, we define $\beta_k$ as
$$\beta_{k}=\max\left(A(M_{2}H^{2}+M_{1})^{L},\,AM_{2}^{L}\right),\tag{37}$$
and we have
$$\Big\{\big\|\nabla^{2}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|\leq\beta_{k}\Big\}_{k=1,2}.\tag{38}$$
Combining with (27) and (35), we have
$$\Big\{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|_{2}\leq\tau\beta_{k}\big\|\mathbf{w}-\mathbf{w}^{t}\big\|_{2}\Big\}_{k=1,2}.\tag{39}$$
According to (39), we have
$$\Big\{\Big|\int_{0}^{1}\big(\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big)^{T}(\mathbf{w}-\mathbf{w}^{t})\,d\tau\Big|\leq\frac{\beta_{k}}{2}\big\|\mathbf{w}-\mathbf{w}^{t}\big\|_{2}^{2}\Big\}_{k=1,2}.\tag{40}$$
The relative change in the gradient is then
$$\frac{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})-\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{z},\mathbf{a}_{k})\big\|}{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|}\leq\frac{\beta_{k}\big\|\mathbf{z}-\mathbf{w}^{t}\big\|}{\big\|\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})\big\|}.\tag{41}$$
When the relative change in the gradient is small, the linear approximation $\hat{f}_k$ accurately represents the behavior of $f_k$ around $\mathbf{w}^{t}$. As long as the Hessian remains bounded, the relative change in the gradient is small for small perturbations $\mathbf{w}-\mathbf{w}^{t}$. This ensures that the linear approximation defined as
$$\hat{f}_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})+\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})\tag{42}$$
is accurate. According to the above analysis, the model function can be rewritten as
$$f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})+\nabla f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}^{t},\mathbf{a}_{k})^{T}(\mathbf{w}-\mathbf{w}^{t})+\mathcal{O}\Big(\frac{\beta_{k}}{2}\big\|\mathbf{w}-\mathbf{w}^{t}\big\|_{2}^{2}\Big).\tag{43}$$
If the relative change is bounded and small for small $\mathbf{w}-\mathbf{w}^{t}$, the higher-order terms can be considered insignificant, ensuring that the linear model $\hat{f}_k$ closely approximates $f_k$. Combining the bounds, the accuracy can be quantified as
$$\big|f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-\hat{f}_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\big|\leq\frac{\beta_{k}}{2}\big\|\mathbf{w}-\mathbf{w}^{t}\big\|^{2}.\tag{44}$$
This shows that the error in the linear approximation is quadratic in the perturbation size $\mathbf{w}-\mathbf{w}^{t}$ and proportional to the bound $\beta_k$ on the Hessian. Since the error scales with $\|\mathbf{w}-\mathbf{w}^{t}\|^{2}$, it becomes negligible for small perturbations. Therefore, the model function $f_k$ is very close to its linear approximation when the Hessian is bounded. The optimal hedging involves refraining from making trades when the hedge position falls within a designated no-trade interval. When the hedge ratio reaches either endpoint of this interval, the most effective strategy is to execute trades to either keep the hedge ratio on the boundary or bring it back within the interval.
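The first-order model (42) can be evaluated without materializing the full gradient by a Jacobian-vector product. The following is a minimal PyTorch sketch, with a stand-in two-output network; the helper name `linearize` is illustrative.

```python
import torch
import torch.nn as nn
from torch.func import functional_call, jvp

def linearize(model, w_t, x):
    """Return hat{f}(w) = f(x; w^t) + grad_w f(x; w^t)^T (w - w^t),
    the first-order model of (42), evaluated via a Jacobian-vector product."""
    def f(params):
        return functional_call(model, params, (x,))

    def f_lin(w):
        dw = {k: w[k] - w_t[k] for k in w_t}   # the perturbation w - w^t
        out, tangent = jvp(f, (w_t,), (dw,))   # f(x; w^t) and nabla f(x; w^t)^T (w - w^t)
        return out + tangent

    return f_lin

# A stand-in two-output network; w_t freezes the current parameters.
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))
w_t = {k: v.detach() for k, v in net.named_parameters()}
f_hat = linearize(net, w_t, torch.randn(1, 3))
w = {k: v.detach() for k, v in net.named_parameters()}  # here w = w^t, so dw = 0
y = f_hat(w)                                            # equals f(x; w^t)
```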
## 4.4 Anchor Hedge Strategy Using Black-Scholes Delta

The anchor hedge strategy uses the Black-Scholes delta, a derivative of the Black-Scholes formula commonly used in financial mathematics to hedge options. This strategy leverages the sensitivity of the option's price to small changes in the price of the underlying asset, known as the delta, to create a hedged position. The variable $bs_{t_i}$ is defined as
$$bs_{t_{i}}=\frac{\log\big(\frac{P_{t_{i}}}{P_{s}}\big)+\frac{\sigma^{2}(T-t_{i})}{2}}{\sigma\sqrt{T-t_{i}}},\tag{45}$$
where $P_{t_i}$ is the current price of the underlying asset at time $t_i$, $P_s$ is the strike price of the option, $\sigma$ is the volatility of the underlying asset's returns, $T$ is the time to maturity of the option, and $t_i$ is the current time. This equation calculates the standardized distance between the logarithm of the current price and the strike price, adjusted for the time remaining until maturity and the volatility. It essentially measures how far the current price is from the strike price in standard deviation units, taking into account the time decay. The Black-Scholes delta $\delta^{bs}_{t_i}$ is given by
$$\delta_{t_{i}}^{bs}=\frac{1}{2}\left(1+\frac{2}{\sqrt{\pi}}\int_{0}^{bs_{t_{i}}}e^{-t^{2}}dt\right).\tag{46}$$
This represents the hedge ratio, or the sensitivity of the option's price to small changes in the price of the underlying asset. The delta is the probability that the option will end up in-the-money, meaning that it will have a positive payoff at expiration. The term $\frac{2}{\sqrt{\pi}}\int_{0}^{bs_{t_{i}}}e^{-t^{2}}dt$ is the error function $\operatorname{erf}(bs_{t_i})$, which is related to the normal distribution and helps in computing the associated probabilities.

The delta $\delta^{bs}_{t_i}$ ranges from 0 to 1. When the underlying asset price $P_{t_i}$ is much higher than the strike price $P_s$, $bs_{t_i}$ will be large and $\delta^{bs}_{t_i}$ will approach 1, indicating a high probability that the option will end up in-the-money. Conversely, if $P_{t_i}$ is much lower than $P_s$, $bs_{t_i}$ will be negative and large in magnitude, and $\delta^{bs}_{t_i}$ will approach 0, indicating a low probability of the option being in-the-money. The anchor hedge strategy uses this delta to hedge an option position: by holding $\delta^{bs}_{t_i}$ units of the underlying asset for each option, the trader creates a hedged position that mitigates the risk from small movements in the underlying asset's price. The delta is recalculated periodically as $P_{t_i}$ and $t_i$ change, to maintain the hedge.

## 4.5 Model Function And Position Adjustment Strategy

The model function outputs define the boundaries for the upper and lower gaps derived from the anchor hedge strategy. Specifically, we define:
$$\Delta_{t_{i}}^{l}=\delta_{t_{i}}^{bs}-\hat{f}_{1}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{1}),\qquad\Delta_{t_{i}}^{u}=\delta_{t_{i}}^{bs}+\hat{f}_{2}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{2}),\tag{47}$$
where $\delta^{bs}_{t_i}$ is the Black-Scholes delta, and $\hat{f}_1$ and $\hat{f}_2$ are the linearized model functions representing the lower and upper adjustments to the delta, respectively. The subsequent position, $\delta_{t_{i+1}}$, is determined by:
$$\delta_{t_{i+1}}=\min\{\max\{\Delta_{t_{i}}^{l},\delta_{t_{i}}\},\Delta_{t_{i}}^{u}\}.\tag{48}$$
This equation ensures that the new position is constrained within the defined upper and lower bounds, providing a controlled and adaptive hedge adjustment. In this conceptual framework, the introduced feature addresses the challenge of position-dependence by ensuring that the inputs to the neural network remain independent of the current position. This enhancement significantly improves the training process.
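A minimal Python sketch of the anchor delta (45)-(46) and the band update (47)-(48) follows; `f1` and `f2` stand for the outputs of the linearized model functions, and the function names are illustrative.

```python
from math import erf, log, sqrt

def anchor_delta(P, Ps, sigma, tau):
    """Black-Scholes anchor delta following (45)-(46); tau = T - t_i."""
    bs = (log(P / Ps) + 0.5 * sigma ** 2 * tau) / (sigma * sqrt(tau))  # (45)
    return 0.5 * (1.0 + erf(bs))                                       # (46)

def next_position(delta_prev, delta_bs, f1, f2):
    """No-trade band update of (47)-(48): trade only when the previous
    position leaves the interval [delta_bs - f1, delta_bs + f2]."""
    lower, upper = delta_bs - f1, delta_bs + f2                        # (47)
    return min(max(lower, delta_prev), upper)                          # (48)
```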
The strategy encoded by the neural network leverages a transaction band, which is cost-effective as it restricts transactions within the specified band, thereby minimizing unnecessary trading costs. The theoretical foundation supporting this design is robust, demonstrating that the strategy outlined in equations (47)-(48) is optimal not only for European options but also for a broader class of utilities and derivatives, including exotic options Imaki et al. (2021).

## 5 Analysis Of Approximation Quality In Hedging Model

To simplify the analysis, we assume each variable in $\{\mathbf{a}_k\}_{k=1,2}$ is sampled uniformly at random and exclude it from the training procedure, focusing solely on the weight matrices $\mathbf{W}^{(l)}$ and bias vectors $\mathbf{b}^{(l)}$, which make up the variable $\mathbf{w}=\{\mathbf{W}^{(l)},\mathbf{b}^{(l)}\}_{l=1}^{L-1}$. We consider the following assumptions: the derivative of the activation function $|\sigma'(z)|$ is bounded by $M_1$ for all $z$, the input vectors $\mathbf{h}^{(l-1)}$ are bounded such that $\|\mathbf{h}^{(l-1)}\|\leq H$, and the elements $\mathbf{a}_{k,j}$ are constants with a maximum absolute value $A=\max_j|\mathbf{a}_{k,j}|$.

## 5.1 Gradient Magnitude Analysis

To compute the gradient of the model function with respect to the weight matrices $\mathbf{W}^{(l)}$ and bias vectors $\mathbf{b}^{(l)}$, we use the chain rule, where the terms $\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})$ capture the propagation of the gradient through all previous layers. The gradient with respect to the weight matrices $\mathbf{W}^{(l)}$ is derived as
$$\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\nabla_{\mathbf{h}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\cdot\nabla_{\mathbf{W}^{(l)}}\mathbf{h}^{(l)},\tag{49}$$
where
$$\nabla_{\mathbf{W}^{(l)}}\mathbf{h}^{(l)}=\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}.\tag{50}$$
Combining (49) and (50), we have
$$\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}.\tag{51}$$
The gradient with respect to the bias vectors $\mathbf{b}^{(l)}$ is derived as
$$\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\nabla_{\mathbf{h}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\cdot\nabla_{\mathbf{b}^{(l)}}\mathbf{h}^{(l)},\tag{52}$$
where
$$\nabla_{\mathbf{b}^{(l)}}\mathbf{h}^{(l)}=\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)}).\tag{53}$$
Combining (52) and (53), we have
$$\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)}).\tag{54}$$
The product term $\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})$ can be bounded by $M_{1}^{l-1}$. Next, consider $\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}$.
Using the bounds $|\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})|\leq M_1$ and $\|\mathbf{h}^{(l-1)}\|\leq H$, we get
$$|\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}|\leq M_{1}H.\tag{55}$$
Then, considering the sum over $m$ terms, we have
$$\left\|\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}\right\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|M_{1}^{l-1}M_{1}H.\tag{56}$$
Since the $\mathbf{a}_{k,j}$ are constants and $A$ is the maximum of their absolute values,
$$\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|\leq A\sqrt{m}.\tag{57}$$
Combining these bounds, we get
$$\|\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|\leq\frac{1}{\sqrt{m}}\cdot A\sqrt{m}\cdot M_{1}^{l}\cdot H=AM_{1}^{l}H.\tag{58}$$
Similarly, for the bias vectors, given the gradient expression
$$\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)}),\tag{59}$$
we can follow a similar analysis. We have
$$\left\|\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|M_{1}^{l}.\tag{60}$$
Using the same argument for $|\mathbf{a}_{k,j}|$ as in (57), we get
$$\|\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|\leq\frac{1}{\sqrt{m}}\cdot A\sqrt{m}\cdot M_{1}^{l}=AM_{1}^{l}.\tag{61}$$
The bound of the gradient for both the weight matrices and the bias vectors can be summarized as
$$\|\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|\leq AM_{1}^{l}H,\tag{62}$$
and
$$\|\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|\leq AM_{1}^{l}.\tag{63}$$
Combining the effects from both the weights and biases, we can write the overall bound on the gradient with respect to the model parameters $\mathbf{w}$ as
$$\|\nabla_{\mathbf{w}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|\leq AM_{1}^{l}(H+1).\tag{64}$$
The analysis above offers essential insights into the gradient behavior within the hedging model's training process. By applying the chain rule, it is evident that the gradient magnitudes $\|\nabla_{\mathbf{w}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})\|$ are exponentially influenced by the network's depth $l$. This highlights the importance of selecting appropriate initializations and activation functions. When the magnitude of the activation function's gradient $M_1$ is controlled around 1, the gradient magnitudes tend to stabilize, converging to a constant value as the network depth increases. This behavior strongly supports the subsequent analysis, demonstrating that our approximation maintains high quality, closely aligning with the linearized hedging model.
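The gradient magnitudes analyzed above are straightforward to inspect numerically. The sketch below prints per-layer gradient norms for a small tanh network (for tanh, $M_1=1$), purely as an illustration of the quantities bounded in (62)-(63); the architecture is a stand-in, not the model of Section 4.2.

```python
import torch
import torch.nn as nn

# Inspect ||grad_{W^(l)} f||, the quantity bounded by A * M1^l * H in (62).
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
x = torch.randn(8, 3)
net(x).sum().backward()
for l, layer in enumerate(m for m in net if isinstance(m, nn.Linear)):
    print(f"layer {l}: ||grad_W|| = {layer.weight.grad.norm():.4f}")
```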
## 5.2 Gradient Changing Rate Analysis

To analyze the gradient changing rate, we focus on the rate at which the gradient of the loss function changes with respect to the model parameters. This involves examining the difference in gradients between two different iterations or parameter settings. We start with the gradient of the model function $f_k(\mathbf{x}_{t_i},\mathbf{w},\mathbf{a}_k)$ with respect to the weight matrices $\mathbf{W}^{(l)}$ and bias vectors $\mathbf{b}^{(l)}$. To measure the gradient changing rate, we consider the difference between the gradients at two different parameter settings, $\mathbf{w}$ and $\mathbf{w}'$. For the weight matrices, the gradient difference is
$$\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}=\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}',\mathbf{a}_{k}).\tag{65}$$
Substituting the gradient expressions, we get
$$\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}-\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}'^{(i)}\mathbf{h}'^{(i-1)}+\mathbf{b}'^{(i)})\right)\cdot\sigma'(\mathbf{W}'^{(l)}\mathbf{h}'^{(l-1)}+\mathbf{b}'^{(l)})\cdot\mathbf{h}'^{(l-1)}\right).\tag{66}$$
Using the triangle inequality, we can bound the difference as
$$\|\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|\left\|\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\cdot\mathbf{h}^{(l-1)}-\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}'^{(i)}\mathbf{h}'^{(i-1)}+\mathbf{b}'^{(i)})\right)\cdot\sigma'(\mathbf{W}'^{(l)}\mathbf{h}'^{(l-1)}+\mathbf{b}'^{(l)})\cdot\mathbf{h}'^{(l-1)}\right\|.\tag{67}$$
Similarly, for the bias vectors, the gradient difference is
$$\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}=\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w},\mathbf{a}_{k})-\nabla_{\mathbf{b}^{(l)}}f_{k}(\mathbf{x}_{t_{i}},\mathbf{w}',\mathbf{a}_{k}).\tag{68}$$
Substituting the gradient expressions, we get
$$\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\left(\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})-\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}'^{(i)}\mathbf{h}'^{(i-1)}+\mathbf{b}'^{(i)})\right)\cdot\sigma'(\mathbf{W}'^{(l)}\mathbf{h}'^{(l-1)}+\mathbf{b}'^{(l)})\right).\tag{69}$$
Using the triangle inequality, we can bound the difference as
$$\|\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|\left\|\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}^{(i)}\mathbf{h}^{(i-1)}+\mathbf{b}^{(i)})\right)\cdot\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})-\left(\prod_{i=1}^{l-1}\sigma(\mathbf{W}'^{(i)}\mathbf{h}'^{(i-1)}+\mathbf{b}'^{(i)})\right)\cdot\sigma'(\mathbf{W}'^{(l)}\mathbf{h}'^{(l-1)}+\mathbf{b}'^{(l)})\right\|.\tag{70}$$
Let $\Delta\mathbf{W}^{(l)}=\mathbf{W}^{(l)}-\mathbf{W}'^{(l)}$ and $\Delta\mathbf{b}^{(l)}=\mathbf{b}^{(l)}-\mathbf{b}'^{(l)}$.
Given the bounds $|\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})|\leq M_{1}$, $\|\mathbf{h}^{(l-1)}\|\leq H$, and $|\mathbf{a}_{k,j}|\leq A$, we can derive
$$\|\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}AM_{1}^{l-1}M_{1}H\|\Delta\mathbf{W}^{(l)}\|=AM_{1}^{l}H\|\Delta\mathbf{W}^{(l)}\|,\tag{71}$$
and
$$\|\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}AM_{1}^{l-1}M_{1}\|\Delta\mathbf{b}^{(l)}\|=AM_{1}^{l}\|\Delta\mathbf{b}^{(l)}\|.\tag{72}$$
To include the Hessian matrix in our analysis, we use the Taylor expansion around the parameter setting $\mathbf{w}$:
$$\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{w}')\approx\nabla_{\mathbf{W}^{(l)}}f_{k}(\mathbf{w})+\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}(\mathbf{w})\cdot(\mathbf{W}'^{(l)}-\mathbf{W}^{(l)}),\tag{73}$$
where $\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}(\mathbf{w})$ denotes the Hessian matrix of the model function with respect to $\mathbf{W}^{(l)}$ at $\mathbf{w}$. Therefore, the gradient difference is
$$\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}=\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}(\mathbf{w})\cdot(\mathbf{W}'^{(l)}-\mathbf{W}^{(l)}).\tag{74}$$
The norm of this gradient difference is
$$\|\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}\|=\left\|\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}(\mathbf{w})\cdot(\mathbf{W}'^{(l)}-\mathbf{W}^{(l)})\right\|.\tag{75}$$
Similarly, for the bias vectors, we have
$$\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}=\nabla_{\mathbf{b}^{(l)}}^{2}f_{k}(\mathbf{w})\cdot(\mathbf{b}'^{(l)}-\mathbf{b}^{(l)}).\tag{76}$$
The norm of the gradient difference for the bias vectors is
$$\|\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}\|=\left\|\nabla_{\mathbf{b}^{(l)}}^{2}f_{k}(\mathbf{w})\cdot(\mathbf{b}'^{(l)}-\mathbf{b}^{(l)})\right\|.\tag{77}$$
With $\Delta\mathbf{W}^{(l)}=\mathbf{W}^{(l)}-\mathbf{W}'^{(l)}$ and $\Delta\mathbf{b}^{(l)}=\mathbf{b}^{(l)}-\mathbf{b}'^{(l)}$, and given the bounds on the Hessian, we can derive
$$\|\Delta\nabla_{\mathbf{W}^{(l)}}f_{k}\|\leq\|\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}(\mathbf{w})\|\cdot\|\Delta\mathbf{W}^{(l)}\|,\tag{78}$$
and
$$\|\Delta\nabla_{\mathbf{b}^{(l)}}f_{k}\|\leq\|\nabla_{\mathbf{b}^{(l)}}^{2}f_{k}(\mathbf{w})\|\cdot\|\Delta\mathbf{b}^{(l)}\|.\tag{79}$$

## 5.3 Hessian Bound Analysis

To bound the Hessian of the model function $f_k(\mathbf{x},\mathbf{w},\mathbf{a}_k)$ with respect to the weight matrices $\mathbf{W}^{(l)}$ and bias vectors $\mathbf{b}^{(l)}$, we need to consider the maximum values that the second-order partial derivatives can attain. First, for the weight matrices $\mathbf{W}^{(l)}$, the second-order partial derivatives are given by
$$\nabla^{2}_{\mathbf{W}^{(l)}}f_{k}(\mathbf{x},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\prod_{l=1}^{L}\left[\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\mathbf{h}^{(l-1)}(\mathbf{h}^{(l-1)})^{T}+\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right].\tag{80}$$
For the bias vectors $\mathbf{b}^{(l)}$, the second-order partial derivatives are
$$\nabla_{\mathbf{b}^{(l)}}^{2}f_{k}(\mathbf{x},\mathbf{w},\mathbf{a}_{k})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\prod_{l=1}^{L}\left[\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right].\tag{81}$$
We make the further assumption that the second derivative of the activation function is bounded, with $|\sigma''(z)|\leq M_{2}$ for all $z$.
The norm of each term in the product can be bounded by
$$\left\|\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\mathbf{h}^{(l-1)}(\mathbf{h}^{(l-1)})^{T}\right\|\leq M_{2}H^{2}\tag{82}$$
and
$$\left|\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right|\leq M_{1}.\tag{83}$$
Thus, the norm of the product term is
$$\left\|\prod_{l=1}^{L}\left[\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\mathbf{h}^{(l-1)}(\mathbf{h}^{(l-1)})^{T}+\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right]\right\|\leq\left(M_{2}H^{2}+M_{1}\right)^{L}.\tag{84}$$
Since the $\mathbf{a}_{k,j}$ are constants, the sum is bounded by
$$\left\|\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\prod_{l=1}^{L}\left[\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\mathbf{h}^{(l-1)}(\mathbf{h}^{(l-1)})^{T}+\sigma'(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right]\right\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|(M_{2}H^{2}+M_{1})^{L}.\tag{85}$$
Letting $A=\max_{j}|\mathbf{a}_{k,j}|$, we get
$$\left\|\nabla_{\mathbf{W}^{(l)}}^{2}f_{k}\right\|\leq\frac{1}{\sqrt{m}}mA(M_{2}H^{2}+M_{1})^{L}=A(M_{2}H^{2}+M_{1})^{L}.\tag{86}$$
Next, consider the Hessian for the bias vectors. Using the bound on $\sigma''$, we have
$$\left\|\prod_{l=1}^{L}\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right\|\leq M_{2}^{L}.\tag{87}$$
The sum is bounded by
$$\left\|\frac{1}{\sqrt{m}}\sum_{j=1}^{m}\mathbf{a}_{k,j}\prod_{l=1}^{L}\sigma''(\mathbf{W}^{(l)}\mathbf{h}^{(l-1)}+\mathbf{b}^{(l)})\right\|\leq\frac{1}{\sqrt{m}}\sum_{j=1}^{m}|\mathbf{a}_{k,j}|M_{2}^{L}.\tag{88}$$
Combining with $A=\max_{j}|\mathbf{a}_{k,j}|$, we get
$$\left\|\nabla_{\mathbf{b}^{(l)}}^{2}f_{k}\right\|\leq\frac{1}{\sqrt{m}}mAM_{2}^{L}=AM_{2}^{L}.\tag{89}$$
Moreover, the mixed second-order partial derivatives $\nabla^{2}_{\mathbf{b}^{(l)},\mathbf{W}^{(l)}}f_{k}$ and $\nabla^{2}_{\mathbf{W}^{(l)},\mathbf{b}^{(l)}}f_{k}$ are zero due to the structure of the model function and the independence of the weight matrices $\mathbf{W}^{(l)}$ and the bias vectors $\mathbf{b}^{(l)}$. The weight matrix $\mathbf{W}^{(l)}$ and the bias vector $\mathbf{b}^{(l)}$ at the same layer $l$ do not directly interact in the second-order sense. Therefore, their partial derivatives are computed independently, resulting in zero mixed second-order derivatives:
$$\frac{\partial}{\partial\mathbf{b}^{(l)}}\left(\frac{\partial f_{k}}{\partial\mathbf{W}^{(l)}}\right)=0\quad\text{and}\quad\frac{\partial}{\partial\mathbf{W}^{(l)}}\left(\frac{\partial f_{k}}{\partial\mathbf{b}^{(l)}}\right)=0.\tag{90}$$
The overall Hessian norm is bounded by considering the maximum bounds of the components:
$$\left\|\nabla^{2}f_{k}(\mathbf{x},\mathbf{w},\mathbf{a}_{k})\right\|\leq\max\left(A(M_{2}H^{2}+M_{1})^{L},\,AM_{2}^{L}\right).\tag{91}$$

## 6 Numerical Results

We conducted numerical simulations and experiments to evaluate the effectiveness of our proposed hedging method. The strategies compared include the proposed Deep Hedging with Linearized Neural Networks (DHLNN), Deep Hedging with Neural Tangent Bootstrap (DHNTB), Deep Hedging with Multi-Layer Perceptron (DHMLP), and Black-Scholes Delta Hedging (BSDH). For comparison, we utilized the BSDH strategy, a fundamental technique in options trading that involves continuously adjusting the number of shares in the underlying asset to maintain a delta-neutral portfolio Hull & Basu (2016); Broadie et al. (1999).
This approach minimizes risk from price fluctuations in the underlying asset and serves as a standard benchmark in our experiments. We also included state-of-the-art deep hedging baselines, DHMLP and DHNTB, as described in previous studies Buehler et al. (2019); Imaki et al. (2021). Our neural network model was designed with four hidden layers, each containing 64 neurons, to strike a balance between performance and computational efficiency.

## 6.1 Datasets Used In Experiments

We use two different sources of data to evaluate the performance of our designed deep hedging strategy, i.e., simulated price trajectories from Imaki et al. (2021) and price trajectories from real market data from Andrew Meyer (2021). Based on these price trajectories, we applied two kinds of options to the hedging strategies, i.e., European Call Options and Lookback Call Options.

## 6.1.1 Price Trajectories

We first simulate market data by modeling the spot prices of the underlying assets following a geometric Brownian motion. The use of simulated market data is crucial because real market data often cannot cover the full range of conditions necessary to comprehensively test a hedging strategy. By simulating market conditions, we can evaluate the robustness of our hedging strategy across all possible scenarios it may encounter in reality, ensuring that it performs well even under extreme market behaviors. To rigorously assess the strategy, we manipulate key parameters such as the volatility and drift of the spot price, and the transaction cost rate. At each time step, the relevant information considered includes log-moneyness, time to maturity, and volatility. The derivatives are set with a maturity of $T=30/365$, with time steps defined as $t=0,1/365,\ldots,30/365$. For each iteration, we generate 10,000 Monte Carlo trajectories to represent the underlying asset prices, which are then used as training data samples Imaki et al. (2021). To ensure a rigorous evaluation, we simulate an additional set of 10,000 trajectories to serve as testing data. The final assessment of our strategy's effectiveness is based on the terminal hedging performance evaluated using these testing data trajectories.

The real market data set comprises highly detailed financial data for numerous stocks across various sectors, containing hundreds of millions of rows. Central to this data is the order book, an electronic ledger of buy and sell orders for specific securities, meticulously organized by price levels. The order book is indispensable for market research as it mirrors market demand and supply, offering a real-time snapshot of trading intentions. From the order book data, we can derive several crucial statistics to assess market liquidity and stock valuation. One of the most significant metrics is the bid-ask spread, which is calculated from the best bid and offer prices. This spread is essential for understanding the immediate costs associated with trading. Another vital metric is the Weighted Average Price (WAP), which plays a critical role in our analysis as it serves as our measure for price trajectories. WAP is computed by considering both the price and volume of top-level bids and offers, providing a comprehensive, instantaneous valuation that reflects the true market conditions. By accounting for market supply and demand, WAP allows us to track price movements more accurately and assess the statistical volatility of the assets over time.
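A minimal sketch of the simulated data pipeline and one of the evaluation metrics used in this section follows: geometric Brownian motion trajectories with the maturity and trajectory count quoted above, and an expected-shortfall estimator over hedging PNL samples. The drift `mu` and tail level `alpha` are illustrative choices, not values fixed by the text.

```python
import numpy as np

def simulate_gbm(n_paths=10_000, n_steps=30, T=30 / 365, mu=0.0, sigma=0.1, P0=1.0):
    """Monte Carlo price trajectories under geometric Brownian motion."""
    dt = T / n_steps
    z = np.random.randn(n_paths, n_steps)
    log_inc = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    paths = P0 * np.exp(np.cumsum(log_inc, axis=1))
    return np.concatenate([np.full((n_paths, 1), P0), paths], axis=1)

def expected_shortfall(pnls, alpha=0.05):
    """Average loss in the worst alpha-tail of the hedging PNL distribution."""
    tail = np.sort(pnls)[: max(1, int(alpha * len(pnls)))]
    return -float(tail.mean())
```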
## 6.1.2 Options In Experiments

In our experiments, we apply two distinct types of options, i.e., the European Call Option and the Lookback Call Option, to evaluate the performance of our hedging strategy. The choice of these two options is strategic, as each provides unique insights into the strategy's effectiveness under different market conditions and complexities.

The first option we use is the European Call Option. This is a standard derivative where the holder has the right, but not the obligation, to purchase the underlying asset at a predetermined strike price on a specified maturity date. The European Call Option is a fundamental choice for testing because of its simplicity and prevalence in financial markets. By using this option, we can evaluate our hedging strategy's ability to manage risk in a straightforward scenario, where the primary concern is the asset's price at the time of maturity. This option helps us understand how well the strategy performs under typical market conditions without the complexity of path-dependent features.

The second option employed is the Lookback Call Option, which is a more complex, path-dependent derivative. Unlike the European Call Option, the payoff of a Lookback Call Option depends on the maximum price of the underlying asset during the option's life, allowing the holder to "look back" and choose the most favorable price. This characteristic makes the Lookback Call Option particularly sensitive to the entire price trajectory of the underlying asset, not just the price at maturity. By testing our hedging strategy with this option, we can assess its robustness in more challenging scenarios, where the price path plays a critical role. This option is ideal for evaluating how well the strategy handles scenarios involving significant price volatility and provides a more comprehensive test of its effectiveness.

## 6.2 Experiments With Stochastically Simulated Market Data

We first provide compelling evidence of DHLNN's effectiveness in managing transaction costs while preserving hedging performance. This capability is especially crucial in high-cost environments, where traditional and other deep hedging strategies falter. The analysis highlights the importance of developing adaptive and robust hedging models, capable of maintaining efficiency across varying transaction cost scenarios.

![17_image_0.png](17_image_0.png)

Figure 1: Comparison of hedging performance across different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} for a European option with a strike price of 1.2. The analysis, conducted over 50 training epochs, focuses on the distribution of hedging PNL with volatility set at 0.1.

In Fig. 1, the analysis compares the hedging performance across different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} for a European option with a strike price of 1.2. The evaluation is based on the distribution of hedging profit and loss (PNL) after training the deep hedging models for 50 epochs, with volatility set at 0.1. As transaction costs increase, the hedging PNL distributions exhibit a clear degradation in performance across all models. This trend highlights the sensitivity of hedging strategies to transaction costs, a crucial factor in real-world trading scenarios.
The results illustrate that the proposed DHLNN method consistently outperforms the baselines, particularly at higher transaction cost levels. This superior performance indicates the effectiveness of DHLNN in minimizing transaction costs while maintaining robust hedging performance. When transaction costs are as low as 2 × 10−3, the differences between the models are relatively modest, with all strategies showing strong hedging performance. However, as transaction costs rise, the performance gap widens significantly. The baseline models, such as BSDH, experience a noticeable decline in PNL distribution as transaction costs increase. At the 6 × 10−3 cost level, the number of trials with losses beyond −0.02 becomes prominent, indicating that BSDH is less efficient at higher transaction costs. This decline underscores the limitations of traditional hedging strategies in environments with substantial transaction costs. On the other hand, DHLNN's ability to adapt to different transaction cost levels demonstrates its robustness and flexibility, particularly when compared to state-of-the-art deep hedging models like DHMLP and DHNTB, which have more spread-out distributions, indicating less efficient handling of transaction costs.

![18_image_0.png](18_image_0.png)

Figure 2: Hedging performance comparison over different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} of the underlying asset for a European option with strike 1.2 after 50 training epochs, showing the Entropic Loss and Expected Shortfall of hedging PNL; volatility is set to 0.1.

Fig. 2 presents a detailed analysis of the hedging performance of various strategies under different transaction costs 2 × 10−3, 4 × 10−3, 6 × 10−3, and 8 × 10−3 for a European option with a strike price of 1.2. The metrics evaluated include Entropic Loss and Expected Shortfall, both of which are crucial for understanding the effectiveness of hedging strategies in managing risk and minimizing potential losses. In the case of Entropic Loss, as shown in Fig. 2(a), which reflects the risk-adjusted return of each strategy, DHLNN consistently outperforms the other strategies across all levels of transaction costs. For instance, at the highest transaction cost (8 × 10−3), DHLNN records an Entropic Loss just above 1.005, significantly lower than that of the other strategies. This indicates that DHLNN manages risk more effectively, optimizing the trade-off between risk and return. Meanwhile, BSDH exhibits the highest Entropic Loss, particularly at higher transaction costs, reaching approximately 1.015, underscoring its inefficiency due to continuous rebalancing, which incurs greater costs. DHMLP and DHNTB perform moderately well but do not achieve the same efficiency as DHLNN, especially as transaction costs rise. The Expected Shortfall analysis shown in Fig. 2(b), which assesses the worst-case scenario risk by evaluating the average loss in the tail of the distribution, further highlights the robustness of DHLNN. Across all transaction costs, DHLNN maintains the lowest Expected Shortfall, with values around 0.018 even at the highest transaction cost level (8 × 10−3). This indicates DHLNN's superior ability to mitigate extreme losses. BSDH, on the other hand, shows the highest Expected Shortfall, particularly at higher transaction costs, where it reaches around 0.034, demonstrating its vulnerability to extreme losses.
DHMLP and DHNTB offer better performance than BSDH but still fall short of the robustness displayed by DHLNN, with increasing Expected Shortfall as transaction costs rise.

![19_image_0.png](19_image_0.png)

![19_image_1.png](19_image_1.png)

Figure 3: Comparison of convergence performance for deep hedging models across different training epochs {10, 20, 30, 40} on a European option with a strike price of 1.2, showing the distribution of hedging PNL. The transaction cost is set at 2 × 10−3 and volatility at 0.1.

![20_image_0.png](20_image_0.png)

Figure 4: Comparison of convergence performance across different training epochs {10, 20, 30, 40} for deep hedging models applied to a European option with a strike price of 1.2. The evaluation focuses on Entropic Loss and Entropic Risk Measure of hedging PNL. The transaction cost is set at 2 × 10−3 and volatility at 0.1.

In Fig. 3, we compare the PNL distributions of various deep hedging models across different training epochs {10, 20, 30, 40} for a European option with a strike price of 1.2, under a transaction cost of 2 × 10−3 and a fixed volatility of 0.1. The DHLNN strategy consistently shows a tightly concentrated PNL distribution across all training epochs, demonstrating its strong ability to minimize hedging errors and manage risk effectively. As training progresses, the PNL distribution of DHLNN becomes even more concentrated around zero, indicating stable and reliable performance with increased training, which reflects its robust learning capability. In contrast, the DHMLP and DHNTB strategies display broader PNL distributions, particularly in the earlier epochs, indicating higher variability and less effective risk management. However, as training continues, these models gradually improve, with their PNL distributions becoming narrower, though they do not reach the same level of concentration as DHLNN. This suggests that while DHMLP and DHNTB benefit from additional training, they require more epochs to achieve a similar performance level. The traditional BSDH strategy shows the least concentrated PNL distributions, with significant tails that suggest a higher likelihood of large deviations, underscoring its limitations in effectively hedging the option. DHLNN outperforms the other models, demonstrating superior risk management and consistency in PNL distribution across different training stages.

We then analyze how different deep hedging models, including DHLNN, DHNTB, and DHMLP, evolve and optimize their performance across increasing training epochs, focusing on the Entropic Loss and Entropic Risk Measure shown in Fig. 4. In the early stages of training, at 10 and 20 epochs, these models are still in the process of adjusting their parameters to fit the market dynamics present in the training data. During this phase, both Entropic Loss and Entropic Risk Measure are relatively high, reflecting the initial challenges these models face in effective hedging and risk management. As training progresses to around 30 epochs, DHLNN shows substantial improvement, with significant reductions in both Entropic Loss and Entropic Risk Measure. This suggests that DHLNN has effectively optimized its hedging strategy, resulting in more stable and profitable outcomes, with Entropic Loss potentially decreasing by about 20%. By 30 epochs, DHNTB and DHMLP typically reach a plateau, where additional training to 40 epochs yields diminishing returns, indicating that they have reached their limits for the given market conditions. Meanwhile, our proposed DHLNN continues to improve, with much better performance.
In contrast, baseline strategies like BSDH, which do not adapt through learning, show stable but generally higher Entropic Loss and Risk Measures, underscoring the advantages of deep learning-based approaches like DHLNN in dynamic market conditions.

We then evaluate the hedging strategies based on the Expected Shortfall of the hedging PNL, as depicted in Fig. 5. The DHLNN model shows marked improvement in minimizing Expected Shortfall as training epochs increase. Initially, the Expected Shortfall is relatively high, reflecting greater tail risk, but it stabilizes at a lower level after 30 epochs, indicating DHLNN's enhanced ability to manage extreme market scenarios effectively. This rapid convergence suggests DHLNN is highly efficient at learning optimal hedge adjustments to minimize large potential losses.

![21_image_0.png](21_image_0.png)

![21_image_1.png](21_image_1.png)

Figure 5: Convergence performance comparison over different training epochs {10, 20, 30, 40} for deep hedging models on a European option with strike 1.2, for the Expected Shortfall of hedging PNL; the transaction cost is 2 × 10−3 and volatility is set to 0.1.

In comparison, DHMLP and DHNTB also show a decrease in Expected Shortfall with more training epochs, though their improvement rate is slower than DHLNN's. By around 30 epochs, their Expected Shortfall levels off, indicating diminishing returns with further training. This suggests that while DHMLP and DHNTB can reduce tail risk, they are less efficient than DHLNN. The traditional BSDH strategy shows consistent Expected Shortfall across all epochs; however, its Expected Shortfall values are generally higher than those of the deep hedging models, particularly DHLNN. The results underscore DHLNN's clear advantage in reducing Expected Shortfall, demonstrating its superior risk management capabilities, especially in mitigating extreme losses, which is crucial for maintaining financial stability in volatile markets.

In Fig. 6, we analyze the PNL distributions of various hedging strategies, including the traditional BSDH, the state-of-the-art deep hedging baselines DHMLP and DHNTB, and our proposed DHLNN, under different volatility levels of the underlying asset price, i.e., 0.1, 0.2, 0.3, and 0.4. At the lowest volatility of 0.1, all strategies show relatively tight PNL distributions, indicative of a stable market. However, DHLNN already exhibits a clear edge by maintaining a more concentrated distribution around the zero PNL mark, suggesting more precise hedging and minimized deviations from expected outcomes. As volatility rises to 0.2, the PNL distributions widen, reflecting increased market uncertainty for all baselines. DHLNN continues to outperform the other strategies, with a more focused PNL distribution, indicating better management of moderate market fluctuations and enhanced protection against potential losses. At a volatility of 0.3, the differences between the strategies become more pronounced. DHLNN's PNL distribution remains relatively narrow, showcasing its robustness in more volatile conditions. In contrast, the BSDH strategy shows significantly wider distributions, indicating reduced hedging effectiveness. While DHMLP and DHNTB perform better than traditional methods, they still do not match DHLNN's performance. In the highest volatility scenario of 0.4, all strategies exhibit their widest PNL distributions, but DHLNN leads with a distribution that remains more concentrated than the others.
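For intuition about how such PNL distributions are produced, here is a minimal, self-contained sketch of the BSDH baseline: Black-Scholes delta hedging of a short European call along simulated geometric-Brownian-motion paths with proportional transaction costs. The zero interest rate and all parameter values are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import norm

def bs_delta(S, K, sigma, tau):
    # Black-Scholes call delta with zero interest rate; tau = time to maturity.
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def bsdh_pnl(n_paths=10_000, n_steps=30, S0=1.0, K=1.2, sigma=0.1,
             T=30 / 365, cost=2e-3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate GBM paths of the underlying (zero drift).
    z = rng.standard_normal((n_paths, n_steps))
    log_incr = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.concatenate([np.zeros((n_paths, 1)),
                                    np.cumsum(log_incr, axis=1)], axis=1))
    pnl = -np.maximum(S[:, -1] - K, 0.0)        # short the call: pay the payoff
    prev_delta = np.zeros(n_paths)
    for t in range(n_steps):
        delta = bs_delta(S[:, t], K, sigma, T - t * dt)
        pnl += delta * (S[:, t + 1] - S[:, t])              # hedge gains
        pnl -= cost * np.abs(delta - prev_delta) * S[:, t]  # proportional trading cost
        prev_delta = delta
    pnl -= cost * np.abs(prev_delta) * S[:, -1]             # unwind at maturity
    return pnl

pnl = bsdh_pnl()
print(pnl.mean(), pnl.std())
```

Rerunning this with larger `sigma` or `cost` widens the PNL histogram, which is exactly the effect visible in the figures above; learned strategies differ in that they trade off rebalancing frequency against transaction costs.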
Overall, these results highlight DHLNN's resilience in highly volatile markets, where the risk of extreme losses is greatest. DHLNN consistently demonstrates fewer extreme losses and more stable performance across all volatility levels, outperforming both traditional and deep hedging baselines. Fig. 7 compares the performance of hedging strategies, including the traditional BSDH, the deep hedging methods DHMLP and DHNTB, and our proposed DHLNN, focusing on Entropic Loss and Entropic Risk Measure across four volatility scenarios: 0.1, 0.2, 0.3, and 0.4.

![22_image_0.png](22_image_0.png)

![22_image_1.png](22_image_1.png)

Figure 6: Hedging performance comparison of PNL distributions over different volatilities of the underlying asset price {0.1, 0.2, 0.3, 0.4} for a European option with strike 1.2 and transaction cost 5 × 10−3, with 50 training epochs.

![22_image_2.png](22_image_2.png)

Figure 7: Comparison of hedging performance across various volatilities of the underlying asset price {0.1, 0.2, 0.3, 0.4} for a European option with a strike price of 1.2. The analysis focuses on Entropic Loss and Entropic Risk Measure of the hedging PNL, with a transaction cost of 5 × 10−3 and 50 training epochs.

As shown in Fig. 7(a), Entropic Loss increases with rising volatility across all strategies, indicating heightened market risk. At a volatility of 0.1, all strategies show minimal Entropic Loss, with DHLNN clearly outperforming the others, suggesting its superior risk management under stable conditions. As volatility rises to 0.2, Entropic Loss increases for all strategies, but DHLNN maintains a lower Entropic Loss than BSDH, DHMLP, and DHNTB, demonstrating its effectiveness in managing escalating risk. In highly volatile environments (0.3 and 0.4), DHLNN consistently achieves the lowest Entropic Loss, underscoring its robustness in volatile markets. The Entropic Risk Measure, presented in Fig. 7(b), offers a broader view of strategy effectiveness under varying volatility. At volatilities of 0.1 and 0.2, DHLNN shows a clear advantage over all other strategies, exhibiting an Entropic Risk Measure close to 0. In more volatile scenarios, the Entropic Risk Measure rises sharply, especially for BSDH, while DHLNN consistently exhibits a lower risk measure, indicating better risk-adjusted performance.

## 6.3 Experiments With Real Market Data

Lookback options, which base their payoff on the maximum price of the underlying asset during the option's life, present significant hedging challenges due to their path dependency (a minimal sketch of this payoff is given below). This complexity requires sophisticated hedging strategies capable of adapting to the entire price history of the underlying asset. Traditional strategies like BSDH, which rely on continuous rebalancing based on the current asset price, struggle with Lookback options because they do not account for the historical price extremes that determine the payoff. These limitations highlight the need for more advanced approaches. Our analysis compares deep hedging strategies, specifically DHMLP, DHNTB, and our proposed DHLNN, that are better suited to manage the path dependency and nonlinearities inherent in Lookback options. In Fig. 8, we evaluate deep hedging strategies for Lookback options, focusing on PNL distributions under varying transaction costs. The strategies compared are DHMLP, DHNTB, and the proposed DHLNN, across transaction costs of 2 × 10−3, 4 × 10−3, 6 × 10−3, and 8 × 10−3.
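As referenced above, the payoff's dependence on the running maximum is what makes these contracts path-dependent; the sketch below (with illustrative simulation parameters, not the paper's data) makes this explicit for a fixed-strike lookback call.

```python
import numpy as np

def lookback_call_payoff(paths, strike=1.0):
    # Fixed-strike lookback call: payoff depends on the running maximum
    # of the whole path, not just the terminal price.
    running_max = paths.max(axis=1)
    return np.maximum(running_max - strike, 0.0)

# Illustrative GBM paths (zero drift, volatility 0.1, 30 daily steps).
rng = np.random.default_rng(0)
sigma, dt, n_paths, n_steps = 0.1, 1 / 365, 10_000, 30
z = rng.standard_normal((n_paths, n_steps))
log_incr = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z
paths = np.exp(np.concatenate([np.zeros((n_paths, 1)),
                               np.cumsum(log_incr, axis=1)], axis=1))

payoff = lookback_call_payoff(paths, strike=1.0)
# Unlike a European call, two paths with the same terminal price can have
# different payoffs, so the hedge must track the entire price history.
print(payoff.mean())
```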
At the lowest cost (2 × 10−3), all strategies show tight PNL distributions, with DHLNN displaying superior concentration near zero PNL, highlighting its efficiency. As costs increase to 4 × 10−3, the PNL distributions widen, but DHLNN remains more concentrated, indicating resilience. With a further rise to 6 × 10−3, the impact of transaction costs becomes more pronounced, yet DHLNN continues to outperform in maintaining a tighter distribution. At the highest cost (8 × 10−3), while all strategies exhibit their widest distributions, DHLNN still demonstrates robust risk management, outperforming the other methods. These results underscore DHLNN's advantages in managing Lookback options, consistently showing tighter PNL distributions and better resilience to rising costs.

Fig. 9 assesses hedging performance based on Entropic Loss and Entropic Risk Measure across the same cost levels. Initially, DHMLP and DHNTB perform well, but as costs rise, their effectiveness declines, with increased Entropic Loss and Risk Measure. DHLNN, however, maintains superior performance, managing the trade-offs between risk and transaction costs effectively. At the highest transaction cost, the baselines struggle significantly, while DHLNN continues to manage risks effectively, demonstrating robustness. Fig. 10 compares Expected Shortfall across the same cost levels. DHLNN consistently shows minimal Expected Shortfall, even as costs rise, while DHMLP and DHNTB become increasingly vulnerable. This highlights DHLNN's robustness and adaptability in managing tail risk for Lookback options, making it a more reliable strategy under varying cost scenarios.

In Fig. 11, we compare the convergence performance of DHMLP, DHNTB, and our proposed DHLNN for a Lookback option with a strike price of 1.0, across training epochs {10, 20, 30, 40} under a transaction cost of 2 × 10−3. We evaluate how each model's hedging performance evolves with increased training on the Lookback options. Initially, DHMLP performs well, showing a PNL distribution that is tighter and closer to zero than DHNTB's. However, as training extends to 30 and 40 epochs, DHNTB improves significantly, surpassing DHMLP in performance. This suggests that DHNTB benefits more from extended training, refining its hedging strategies over time. Despite these improvements, our proposed DHLNN consistently outperforms both DHMLP and DHNTB across all training settings. DHLNN quickly converges to a tighter PNL distribution, indicating superior risk management and more effective hedging strategies, even with fewer training epochs. This efficiency highlights DHLNN's robustness in managing Lookback options.

![24_image_0.png](24_image_0.png)

![24_image_1.png](24_image_1.png)

Figure 8: Hedging performance comparison over different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} of the underlying asset for a Lookback option with strike 1.0 and 50 training epochs, for the distribution of hedging PNL.

![24_image_2.png](24_image_2.png)

Figure 9: Comparison of hedging performance across various transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} for a Lookback option with a strike price of 1.0. The evaluation uses 50 training epochs and focuses on the Entropic Loss and Entropic Risk Measure of the hedging PNL.
![25_image_0.png](25_image_0.png)

Figure 10: Comparison of hedging performance across various transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} for a Lookback option with a strike price of 1.0, focusing on the Expected Shortfall of the hedging PNL.

![25_image_1.png](25_image_1.png)

![25_image_2.png](25_image_2.png)

![25_image_3.png](25_image_3.png)

Figure 11: Comparison of convergence performance across different training epochs {10, 20, 30, 40} for deep hedging models applied to a Lookback option with a strike price of 1.0. The analysis considers a transaction cost of 2 × 10−3.

![26_image_0.png](26_image_0.png)

Figure 12: Convergence performance comparison over different training epochs {10, 20, 30, 40} for deep hedging models on a Lookback option with strike 1.0, for the Entropic Loss and Entropic Risk Measure of hedging PNL; the transaction cost is 2 × 10−3.

![26_image_1.png](26_image_1.png)

![26_image_2.png](26_image_2.png)

Figure 13: Convergence performance comparison of deep hedging models over different training epochs {10, 20, 30, 40} for a Lookback option with a strike price of 1.0. The analysis focuses on the Expected Shortfall of the hedging PNL, with a transaction cost of 2 × 10−3.

Fig. 12 compares Entropic Loss and Entropic Risk Measure for the same models and training settings. DHLNN consistently outperforms DHMLP and DHNTB, with noticeable improvements evident by 20 epochs. The rapid convergence of DHLNN, as shown by the low and stable metrics at 20 epochs, demonstrates its efficiency in learning optimal hedging strategies with minimal training effort. Beyond 20 epochs, DHLNN's performance stabilizes, indicating that it has already converged to its optimal state. While DHNTB shows improvements with extended training, it requires more epochs to surpass DHMLP, which initially manages risk more effectively. The results show DHLNN's ability to achieve superior risk management and hedging performance with significantly fewer training epochs than DHMLP and DHNTB. DHLNN's rapid convergence and consistent performance across all metrics highlight its advantage in delivering reliable and efficient hedging strategies for complex derivatives like Lookback options.

Fig. 13 analyzes Expected Shortfall across the same models and training epochs. Initially, DHMLP outperforms DHNTB in managing tail risk, as indicated by lower Expected Shortfall values. However, as training progresses, DHNTB surpasses DHMLP, suggesting that it requires more training to optimize its strategies effectively. In contrast, DHLNN consistently achieves the lowest Expected Shortfall values, with significant improvements evident by 20 epochs. The rapid stabilization of DHLNN's Expected Shortfall values indicates its efficiency in learning robust hedging strategies with minimal training. Beyond 20 epochs, further improvements are marginal, demonstrating DHLNN's quick convergence to an optimal state. The results highlight DHLNN's resilience and reliability in managing extreme market movements, especially when training resources are limited.

## 7 Conclusion

In conclusion, our work introduces a linearized neural network architecture that seamlessly integrates with the Black-Scholes delta, offering significant potential to reduce transaction costs and establish robust hedge positions in derivative markets.
This approach enhances both the robustness and interpretability of hedging strategies while preserving the powerful representational capabilities inherent in neural networks. By framing the model training procedure within the context of convex risk measures, we have established a transparent and mathematically sound framework for risk assessment and the optimization of hedging strategies. One critical future research area is the development of robust, explainable, and adaptive deep hedging strategies that can dynamically adjust to evolving market conditions. As financial markets continue to become more complex and interconnected, the need for hedging strategies that provide both robustness against market volatility and clarity in their decision-making processes becomes increasingly important. Enhancing the adaptive capabilities of these models to offer more responsive and dynamic hedging strategies remains an open challenge, and addressing it will be key to advancing the practical applicability of deep hedging techniques in real-world scenarios.

## References

Optiver. Optiver realized volatility prediction. Kaggle competition, 2021. URL https://kaggle.com/competitions/optiver-realized-volatility-prediction.

Aharon Ben-Tal and Marc Teboulle. An old-new concept of convex risk measures: The optimized certainty equivalent. *Mathematical Finance*, 17(3):449–476, 2007.

Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. *Journal of Political Economy*, 81(3):637–654, 1973.

Mark Broadie, Paul Glasserman, and Shing-Gang Kou. Connecting discrete and continuous path-dependent options. *Finance and Stochastics*, 3:55–82, 1999.

Hans Buehler, Lukas Gonon, Josef Teichmann, and Ben Wood. Deep hedging. *Quantitative Finance*, 19(8):1271–1291, 2019.

Jay Cao, Jacky Chen, and John Hull. A neural network approach to understanding implied volatility movements. *Quantitative Finance*, 20(9):1405–1413, 2020.

René Carmona. *Indifference pricing: theory and applications*. Princeton University Press, 2008.

Bruce M Collins and Frank J Fabozzi. A methodology for measuring transaction costs. *Financial Analysts Journal*, 47(2):27–36, 1991.

Shom Prasad Das and Sudarsan Padhy. A new hybrid parametric and machine learning model with homogeneity hint for European-style index option pricing. *Neural Computing and Applications*, 28:4061–4077, 2017.

Mark HA Davis, Vassilios G Panas, and Thaleia Zariphopoulou. European option pricing with transaction costs. *SIAM Journal on Control and Optimization*, 31(2):470–493, 1993.

Erik R Grannan and Glen H Swindle. Minimizing transaction costs of option hedging strategies. *Mathematical Finance*, 6(4):341–364, 1996.

Vicky Henderson and David Hobson. Utility indifference pricing - an overview. *Volume on Indifference Pricing*, 2004.

John Hull and Alan White. The pricing of options on assets with stochastic volatilities. *The Journal of Finance*, 42(2):281–300, 1987.

John C Hull and Sankarshan Basu. *Options, futures, and other derivatives*. Pearson Education India, 2016.

Kentaro Imajo, Kentaro Minami, Katsuya Ito, and Kei Nakagawa. Deep portfolio optimization via distributional prediction of residual factors. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 213–222, 2021.

Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami, and Kei Nakagawa.
No-transaction band network: A neural network architecture for efficient deep hedging. *arXiv preprint arXiv:2103.01775*, 2021.

Huisu Jang and Jaewook Lee. Generative Bayesian neural network model for risk-neutral pricing of American index options. *Quantitative Finance*, 19(4):587–603, 2019.

Jan Kallsen and Johannes Muhle-Karbe. Option pricing and hedging with small transaction costs. *Mathematical Finance*, 25(4):702–723, 2015.

Ryo Karakida, Shotaro Akaho, and Shun-ichi Amari. Universal statistics of fisher information in deep neural networks: Mean field approach. In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 1032–1041. PMLR, 2019.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in Neural Information Processing Systems*, 25, 2012.

Hayne E Leland. Option pricing and replication with transactions costs. *The Journal of Finance*, 40(5):1283–1301, 1985.

Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. *Proceedings of the National Academy of Sciences*, 115(33):E7665–E7671, 2018.

Michael Monoyios. Option pricing with transaction costs using a Markov chain approximation. *Journal of Economic Dynamics and Control*, 28(5):889–913, 2004.

Roman Novak, Jascha Sohl-Dickstein, and Samuel S Schoenholz. Fast finite width neural tangent kernel. In *International Conference on Machine Learning*, pp. 17018–17044. PMLR, 2022.

Johannes Ruf and Weiguan Wang. Neural networks for option pricing and hedging: a literature review. *arXiv preprint arXiv:1911.05620*, 2019.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017.

## A Performance Comparison With Additional Simulations

## A.1 Simulations For Lookback Options

![29_image_0.png](29_image_0.png)

![29_image_1.png](29_image_1.png)

Figure 14: Comparison of hedging performance across different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3}, with volatility set at 0.1.

![30_image_0.png](30_image_0.png)

![30_image_1.png](30_image_1.png)

Figure 15: Comparison of hedging performance across different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3}, with volatility set at 0.1.

![31_image_0.png](31_image_0.png)

Figure 16: Comparison of hedging performance across different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3}, with volatility set at 0.1.

![32_image_0.png](32_image_0.png)

Figure 17: Comparison of convergence performance for deep hedging models across different training epochs {20, 40, 60, 80} on a European option with a strike price of 1.2.

![33_image_0.png](33_image_0.png)

![33_image_1.png](33_image_1.png)

Figure 18: Hedging performance comparison over different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} of the underlying asset for a Lookback option with strike 1.2 and 50 training epochs, for the distribution of hedging PNL; volatility is set to 0.1.
![33_image_2.png](33_image_2.png)

Figure 19: Hedging performance comparison over different transaction costs {2 × 10−3, 4 × 10−3, 6 × 10−3, 8 × 10−3} of the underlying asset for a Lookback option with strike 1.2 and 50 training epochs, for the Entropic Loss and Expected Shortfall of the hedging PNL; volatility is set to 0.1.

![34_image_0.png](34_image_0.png)

![34_image_1.png](34_image_1.png)

Figure 20: Comparison of convergence performance across different training epochs {10, 20, 30, 40} for deep hedging models applied to a Lookback option with a strike price of 1.0.

![35_image_0.png](35_image_0.png)

Figure 21: Comparison of convergence performance across different training epochs {10, 20, 30, 40} for deep hedging models applied to a Lookback option with a strike price of 1.0.

![36_image_0.png](36_image_0.png)

![36_image_1.png](36_image_1.png)

![36_image_2.png](36_image_2.png)

Figure 22: Hedging performance comparison of PNL distributions over different volatilities of the underlying asset price {0.1, 0.2, 0.3, 0.4} for a Lookback option with strike 1.0 and transaction cost 2 × 10−3, with 100 training epochs.

![36_image_3.png](36_image_3.png)

Figure 23: Hedging performance comparison over different volatilities of the underlying asset price {0.1, 0.2, 0.3, 0.4} for a Lookback option with strike 1.0, for the Entropic Loss and Entropic Risk Measure of hedging PNL; transaction cost 2 × 10−3, with 100 training epochs.
# On Equivalences Between Weight And Function-Space Langevin Dynamics

Anonymous authors Paper under double-blind review

## Abstract

Approximate inference for overparameterized Bayesian models appears challenging, due to the complex structure of the posterior. To address this issue, a recent line of work has investigated the possibility of directly conducting approximate inference in the "function space", the space of prediction functions. This paper provides an alternative perspective to this problem, by showing that for many models - including a simplified neural network model - Langevin dynamics in the overparameterized "weight space" induces equivalent function-space trajectories to certain Langevin dynamics procedures in function space. Thus, the former can already be viewed as a function-space inference algorithm, with its convergence unaffected by overparameterization. We provide simulations on Bayesian neural network models and discuss the implications of the results.

## 1 Introduction

Consider a common Bayesian predictive modeling setting: we are provided with i.i.d. observations $\mathcal{D} := \{(x_i, y_i)\}_{i=1}^n$ where $x_i \in \mathcal{X}$, $y_i \in \mathbb{R}$ and $\mathcal{X}$ denotes the input space; a likelihood model $p(\{y_i\} \mid \{x_i\}, \theta) = \prod_{i=1}^n p(y_i \mid f(x_i; \theta))$ determined by a prediction function $f(\cdot\,;\theta)$; and a prior $\pi_\theta(\mathrm{d}\theta)$. We are interested in the predictive distribution $p(y_* \mid x_*, \mathcal{D}) = \int \pi_{\theta\mid\mathcal{D}}(\mathrm{d}\theta)\, p(y_* \mid x_*, \theta)$, induced by the posterior $\pi_{\theta\mid\mathcal{D}}$.

Modern machine learning models are often overparameterized, meaning that multiple parameters may define the same likelihood. For example, in Bayesian neural network (BNN) models where $\theta \in \mathbb{R}^d$ denotes the network *weights*, we can obtain a combinatorial number of equivalent parameters by reordering the neurons, after which $f(\cdot\,;\theta)$, and thus the likelihood, remain unchanged. Consequently, the posterior measure exhibits complex structures and becomes hard to approximate; for example, its Lebesgue density may contain a large number of global maxima. Starting from Sun et al. (2019); Wang et al. (2019); Ma et al. (2019), a recent literature investigates the possibility of simplifying inference by approximating a *function-space posterior*. Concretely, let $\mathcal{A}: \mathbb{R}^d \to \mathcal{F} \subset \mathbb{R}^{\mathcal{X}},\ \theta \mapsto f(\cdot\,;\theta)$ denote a "parameterization map". Then

$$p(y_{*}\mid x_{*},{\cal D})=\int\pi_{\theta\mid{\cal D}}({\rm d}\theta)\,p(y_{*}\mid f(x_{*};\theta))=\int({\cal A}_{\#}\,\pi_{\theta\mid{\cal D}})({\rm d}f)\,p(y_{*}\mid f(x_{*}))=\int\pi_{f\mid{\cal D}}({\rm d}f)\,p(y_{*}\mid f(x_{*})),$$

where $\mathcal{A}_\#(\cdot)$ refers to the pushforward measure (Villani, 2009, p. 11), and $\pi_{f\mid\mathcal{D}}$ denotes the function-space posterior defined by the prior $\mathcal{A}_\#\pi_\theta =: \pi_f$ and likelihood $p(y \mid x, f) = p(y \mid f(x))$. As shown above, $\pi_{f\mid\mathcal{D}}$ is sufficient for prediction. Moreover, it often has simpler structures: for example, for ultrawide BNN models with a Gaussian $\pi_\theta$, $\pi_f$ may converge to a Gaussian process (GP) prior (Lee et al., 2018; Matthews et al., 2018; Yang, 2019), in which case $\pi_{f\mid\mathcal{D}}$ will also converge to a GP posterior. Thus, it is natural to expect approximate inference to be easier in function space.

While the intuition has been appealing, existing works on function-space inference tend to be limited by theoretical issues: principled applications may require full-batch training (Sun et al., 2019), Gaussian likelihood (Shi et al., 2019), or specifically constructed models (Ma et al., 2019; Ma & Hernández-Lobato, 2021).
Many approaches rely on approximations to the function-space prior, which can make the functional KL divergence unbounded (Burt et al., 2020). Additionally, there is a lack of understanding about optimization convergence, or the expressivity of the variational families used. In contrast, gradient-based MCMC methods, such as Hamiltonian Monte Carlo (HMC) or Langevin dynamics (LD)-based algorithms, can be applied to a broad range of models. Their convergence behaviors are well understood (Roberts & Tweedie, 1996; Villani, 2009), and intriguingly, their performance often appears to be satisfying on massively overparameterized models (Zhang et al., 2019; Izmailov et al., 2021), even though they are implemented in weight space. This paper bridges the two lines of approaches by showing that

- In various overparameterized models, including two simplified BNN models (Sec. 2.1 and Ex. 2.3), weight-space Langevin dynamics (LD) is equivalent to a reflected / Riemannian LD procedure in function space, defined by the pushforward metric.
- For practical feed-forward network models, *a possible consequence* of the equivalence still appears to hold in simulations (Sec. 3): weight-space LD produces predictive distributions that appear to approach the functional posterior, at a rate that does not depend on the degree of overparameterization.

The equivalence has important implications: it means that principled function-space inference has always been possible and in use. Thus, explicit consideration of function-space posteriors *alone* will not be sufficient to guarantee improvement over existing approaches, and more careful analyses are necessary to justify possible improvement. We also discuss how further insights into the behavior of weight-space LD could be gained by comparing the pushforward metric with the prior (Sec. 2.2).

It should be noted that in several scenarios, it has been established that overparameterization does not necessarily hinder the convergence of LD. Moitra & Risteski (2020) prove that polynomial convergence can be possible for a family of *locally* overparameterized models, despite the non-convexity introduced by the overparameterization.1 Dimensionality-independent convergence has also been established for infinite-width NNs in the mean-field regime (e.g., Mei et al., 2019), even though its implication for practical, finite-width models is less clear. More broadly, at a high level our work is also related to past works that studied alternative inference schemes for models that exhibit some redundancy in the parameterization (Papaspiliopoulos et al., 2007; Yu & Meng, 2011). We are unaware of strict equivalence results as provided in this paper, but we should also emphasize that it is not their technical sophistication that makes them interesting; it is rather *their implications for BNN inference, which appear underappreciated*: the results justify the use of LD as an effective function-space inference procedure, in settings that match or generalize previous work. For example, Example 2.1 covers overparameterized linear models, and many popular approaches (e.g., Osband et al., 2018; He et al., 2020) are only justified in this setting. Our results contribute to the understanding of the real-world performance of BNN models, as they provide theoretical support for the hypothesis that inference may be good enough in many applications, and is not necessarily the limiting factor in a predictive modeling workflow.
In this aspect, our results complement a long line of existing work which examined the influence of the likelihood, the prior, and data augmentation in BNN applications, with an emphasis on classification tasks with clean labels; see Aitchison (2020); Wenzel et al. (2020); Fortuin et al. (2021), to name a few.

## 2 Equivalence Between Weight And Function-Space Langevin Dynamics

Suppose the prior measure $\pi_\theta$ is supported on an open subset of $\mathbb{R}^d$ and has Lebesgue density $p_\theta$. The weight-space posterior $\pi_{\theta\mid\mathcal{D}}$ can be recovered as the stationary measure of the (weight-space) Langevin dynamics

$$\mathrm{d}\theta_{t}=\nabla_{\theta}(\log p(\mathbf{Y}\mid\theta_{t},\mathbf{X})+\log p_{\theta}(\theta_{t}))\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}B_{t},\tag{WLD}$$

where we write $\mathbf{X} := \{x_i\}_{i=1}^n$, $\mathbf{Y} := \{y_i\}_{i=1}^n$ for brevity.

1This result is still not fully unimpeded by overparameterization, as it quantifies convergence to the weight-space posterior, which necessarily requires traversal through all symmetric regions.
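For concreteness, the following is a minimal NumPy sketch (an illustration, not the paper's implementation) of the unadjusted Langevin algorithm, i.e., an Euler-Maruyama discretization of (WLD), on a toy overparameterized linear model; the iterates are pushed forward to function space via the parameterization map. All data and hyperparameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized linear model: f = A @ theta with d > k, so many weights
# theta induce the same function (an Example 2.1 (ii)-type setting).
n, d, k = 20, 10, 3
A = rng.standard_normal((k, d)) / np.sqrt(d)   # fixed linear parameterization map
X = rng.standard_normal((n, k))                # design matrix in function coordinates
theta_true = rng.standard_normal(d)
y = X @ (A @ theta_true) + rng.standard_normal(n)   # Gaussian likelihood, unit noise

def grad_log_post(theta):
    # Gaussian likelihood + N(0, I) prior; both gradients in closed form.
    resid = y - X @ (A @ theta)
    return A.T @ (X.T @ resid) - theta

eta = 1e-3                                      # ULA step size
theta = np.zeros(d)
fn_samples = []
for t in range(20_000):
    theta = theta + eta * grad_log_post(theta) + np.sqrt(2 * eta) * rng.standard_normal(d)
    if t >= 10_000 and t % 10 == 0:
        fn_samples.append(A @ theta)            # push weight samples to function space
fn_samples = np.array(fn_samples)
# The function-space samples approximate the k-dimensional posterior over
# f = A theta, even though the chain runs in the d-dimensional weight space.
print(fn_samples.mean(axis=0))
```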
The pushforward measure $\mathcal{A}_\#\pi_\theta =: \pi_f$ provides a prior in function space. Combining $\pi_f$ and the likelihood leads to a posterior, $\pi_{f\mid\mathcal{D}}$. When the function space $\mathcal{F} := \operatorname{supp}\pi_f$ is a Riemannian manifold2 of dimensionality $k \le d$, it is intuitive that we could sample from $\pi_{f\mid\mathcal{D}}$ by simulating a Riemannian Langevin dynamics on $\mathcal{F}$ (Girolami & Calderhead, 2011). In coordinate form:

$$\mathrm{d}{\tilde{f}}_{t}=V({\tilde{f}}_{t})\,\mathrm{d}t+{\sqrt{2G^{-1}({\tilde{f}}_{t})}}\,\mathrm{d}B_{t},\tag{FLD}$$

where $\tilde{f}_t \in \mathbb{R}^k$ is the coordinate of $f_t \in \mathcal{F}$, $G^{-1}(\tilde f) = (g^{ij})_{i,j\in[k]}$ is the inverse of the metric matrix and $g_{ij}$ are the local representations of the metric (Lee, 2018, p. 13), $\mathrm{d}B_t$ is the standard Brownian motion, and

$$V^{i}(\tilde{f})=\sum_{j=1}^{k}g^{ij}\partial_{j}\Big(\log p(\mathbf{Y}\mid\tilde{f},\mathbf{X})+\log{\frac{\mathrm{d}\pi_{f}}{\mathrm{d}\mu_{\mathcal{F}}}}(\tilde{f})-{\frac{\log|G|}{2}}\Big)+\sum_{j=1}^{k}\partial_{j}g^{ij}.$$

In the above, $\mu_{\mathcal{F}}$ denotes the corresponding *Riemannian measure* (Do Carmo, 1992, p. 45), and $p(\mathbf{Y}\mid\tilde f,\mathbf{X}) := p(\mathbf{Y}\mid f(\mathbf{X}))$ denotes the likelihood of the function $f$ corresponding to $\tilde f$.

We are interested in possible equivalences between the induced function-space trajectory of (WLD), $\{\mathcal{A}\theta_t\}$, and the trajectory of possibly generalized versions of (FLD), with the metric defined as the pushforward of the Euclidean metric by $\mathcal{A}$ or its generalization. By equivalence we mean that for any $k \in \mathbb{N}$ and $\{t_i\}_{i=1}^k \subset [0,\infty)$, $\{\mathcal{A}\theta_{t_i}\}_{i=1}^k$ and $\{f_{t_i}\}_{i=1}^k$ are equal in distribution. When it holds, an algorithm that simulates (WLD) for a time period of $T$ and returns $\mathcal{A}\theta_T$ *can be equivalently viewed as a "function-space inference algorithm"*, as it is then equivalent (in distribution) to the simulation of (FLD), which does not have to be defined w.r.t. an overparameterized model.

We will first illustrate the equivalence on linear models (Sec. 2.1) which, while technically simple, provide intuition and formally cover NN models in the "kernel regime" (Woodworth et al., 2020). We will then discuss the role of the pushforward metric in (FLD) (Sec. 2.2), and analyze general models in which overparameterization can be characterized by group actions (Sec. 2.3).

## 2.1 Overparameterized Linear Models

The following is the easiest example where the equivalence can be demonstrated:

Example 2.1 (equivalence in linear models; see Appendix B.1 for details). Suppose the map $\mathcal{A}$ is linear. For expository simplicity, further assume that $\pi_\theta = \mathcal{N}(0, I)$, and that the input space $\mathcal{X} = \{x_1, x_2, \ldots, x_K\}$ has finite cardinality $K$, so that any function can be represented by a $K$-dimensional vector $(f(x_1), \ldots, f(x_K))$, and $\mathcal{A}$ can be identified as a matrix $A \in \mathbb{R}^{K\times d}$.

(i) If $A$ is a bijection (i.e., $d = K$ and $A$ is invertible), the above vector representation will provide a coordinate for $\mathcal{F}$. In this coordinate, the metric matrix $G$ is $(AA^\top)^{-1}$ (see e.g., Bai et al., 2022). (FLD) with this metric reduces to

$$\mathrm{d}\bar{f}_{t}=(AA^{\top})\nabla_{\bar{f}}\left(\log p(\mathbf{Y}\mid\bar{f}_{t},\mathbf{X})-\frac{1}{2}\|A^{-1}\bar{f}_{t}\|_{2}^{2}\right)\mathrm{d}t+\sqrt{2AA^{\top}}\,\mathrm{d}B_{t}.\tag{1}$$

By Itô's lemma, the above SDE also describes the evolution of $\mathcal{A}\theta_t$, for $\theta_t$ following (WLD).

(ii) The equivalence continues to hold in the overparameterized case (e.g., when $d > K$): consider the decomposition $\mathbb{R}^d = \mathrm{Ran}(A^\top) \oplus \mathrm{Ker}(A)$. Then the evolution of $\theta_t$ in (WLD) "factorizes" along the decomposition: the likelihood gradient is fully contained in $\mathrm{Ran}(A^\top)$ and thus only influences $\mathrm{Proj}_{\mathrm{Ran}(A^\top)}\theta_t$, whereas $\mathrm{Proj}_{\mathrm{Ker}(A)}\theta_t$ has no influence on $\mathcal{A}\theta_t$. Therefore, we can describe the evolution of the former independently, thereby reducing to the exactly parameterized case.

The second case above provides the first intuition on why (WLD) is not necessarily influenced by overparameterization. While technically simple, it is relevant as it covers random feature models, which only require replacing $\mathbf{X}$ with preprocessed features. Random feature models formally include infinitely wide DNNs in the "kernel regime" (Jacot et al., 2018), where the pushforward metric converges to a constant value. As referenced before, many popular procedures for BNN inference are only justified in this regime.

2See Appendix A.2 for a conceptual review of relevant notions in Riemannian geometry.

## 2.2 The Pushforward Metric

The pushforward metric that instantiates our (FLD) is an important object in the study of DNNs, in which it is named the "neural tangent kernel" (NTK, Jacot et al., 2018). It acts as a preconditioner in our function-space dynamics, and makes a similar appearance in the analysis of gradient descent (GD), where its preconditioning effect is often believed to be desirable (Arora et al., 2019a;b; Lee et al., 2019). As cited before, for BNN models with a Gaussian $\pi_\theta$, the function-space prior can converge to a Gaussian process (the "NNGP", Lee et al., 2018) as the network width goes to infinity. The NTK is closely related to the covariance kernel of the NNGP; they are equivalent if only the last layer of the DNN is learnable, and for more general models may still share the same Mercer eigenfunctions (Arora et al., 2019a, App. H). When the two kernels are close and the BNN model is correctly specified, it can be informally understood that (FLD) *may enjoy good convergence properties*, by drawing parallels to the analyses of GD (Arora et al., 2019a; Lee et al., 2019);3 consequently, the approximate posterior will have good predictive performance. However, for very deep networks, the Mercer spectra of the two kernels can be very different (Arora et al., 2019a, Fig. 4), in which case we can expect (FLD) to have poor convergence. The above discussions immediately apply to (WLD) when it is equivalent to (FLD) or its suitable variants. More generally, however, it can still be helpful to check for significant differences between the NNGP and NTK kernels when using (WLD), as part of a prior predictive check (Box, 1980) process.
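As part of such a check, one can compare the empirical NTK with a Monte-Carlo NNGP estimate. The sketch below is an illustration with an assumed two-layer tanh network (not the architecture used in the experiments): it computes the pushforward-metric Gram matrix $JJ^\top$ from the network Jacobian, and a matching NNGP covariance estimate from prior draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, width = 5, 2, 64
X = rng.standard_normal((n, d_in))

# Two-layer network f(x) = v^T tanh(W x) / sqrt(width), weights at init.
W = rng.standard_normal((width, d_in)) / np.sqrt(d_in)
v = rng.standard_normal(width)

def jacobian(x):
    # Gradient of f(x) w.r.t. all weights (W, v), flattened.
    h = np.tanh(W @ x)
    df_dv = h / np.sqrt(width)
    df_dW = np.outer(v * (1 - h**2), x) / np.sqrt(width)
    return np.concatenate([df_dW.ravel(), df_dv])

J = np.stack([jacobian(x) for x in X])   # (n, num_params)
ntk = J @ J.T                             # empirical NTK Gram = pushforward metric
print(np.round(ntk, 3))

# Monte-Carlo NNGP covariance: covariance of f(X) over fresh prior draws.
def forward(X, W, v):
    return np.tanh(X @ W.T) @ v / np.sqrt(width)

fs = np.stack([forward(X, rng.standard_normal((width, d_in)) / np.sqrt(d_in),
                       rng.standard_normal(width)) for _ in range(2000)])
print(np.round(np.cov(fs.T), 3))          # compare structure/spectrum with ntk
```

A large discrepancy between the two Gram matrices (e.g., in their eigenstructure) would flag the regime where the discussion above predicts poor (FLD) convergence.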
Such a check is especially relevant for deeper models, because in certain initialization regimes, both kernels can have pathological behavior as the network depth increases (Schoenholz et al., 2016; Hayou et al., 2019).

## 2.3 Overparameterization Via Group Actions

It is often the case that overparameterization can be characterized by group actions; in other words, there exists some group $H$ on $\mathbb{R}^d$ s.t. any two parameters $\theta, \theta' \in \mathbb{R}^d$ induce the same function $\mathcal{A}\theta = \mathcal{A}\theta'$ if and only if they belong to the same orbit. In such cases, we can identify $\mathcal{F}$ as the quotient space $\mathbb{R}^d/H$ and the map $\mathcal{A}: \mathbb{R}^d \to \mathcal{F}$ as the quotient map, and it is desirable to connect (WLD) to possibly generalized versions of (FLD) on $\mathcal{F}$. This subsection presents such results.

To introduce our results, we first recall some basic notions in group theory. (Additional background knowledge is presented in Appendix A.) Let $H$ be a Lie group. The unit element of $H$ is denoted as $e$, and we use $\phi_1\phi_2 \in H$ to denote the group operation of $\phi_1, \phi_2 \in H$. An *action* of $H$ on $\mathbb{R}^d$ is a map $\Gamma: H \times \mathbb{R}^d \to \mathbb{R}^d$, s.t. for all $\phi_1, \phi_2 \in H$ and $p \in \mathbb{R}^d$, we have $\Gamma(e, p) = p$ and $\Gamma(\phi_1, \Gamma(\phi_2, p)) = \Gamma(\phi_1\phi_2, p)$, where $e \in H$ denotes the identity. We use $\phi\cdot p$ to denote $\Gamma(\phi, p)$ for simplicity. For any $\phi \in H$, introduce the map $\Gamma_\phi: \mathbb{R}^d \to \mathbb{R}^d,\ p \mapsto \phi\cdot p$. Then the action is *free* if $\Gamma_\phi$ has no fixed point for all $\phi \neq e$, *proper* if the preimage of any compact set under the map $(\phi, p) \mapsto \phi\cdot p$ is also compact, and *smooth* if $\Gamma_\phi$ is smooth for each $\phi \in H$. An *orbit* is defined as $H\cdot p := \{\phi\cdot p : \phi \in H\}$ where $p \in \mathbb{R}^d$.

Analysis of free group actions The quotient manifold theorem (Lee, 2012, Theorem 21.10) guarantees that the quotient space $\mathbb{R}^d/H$ is a smooth manifold if the action is smooth, proper and free. To define the pushforward metric on $\mathcal{F}$, we further assume that the action is isometric, i.e., $\Gamma_\phi$ is an isometry for every $\phi \in H$. Under this condition, a metric on $\mathcal{F}$ can be defined as4

$$\langle \mathrm{d}\mathcal{A}|_{p}(u),\ \mathrm{d}\mathcal{A}|_{p}(v)\rangle_{T_{\mathcal{A}p}\mathcal{F}} := \langle u, v\rangle_{\mathbb{R}^{d}}, \qquad \forall p \in \mathbb{R}^{d},\ u, v \in T_{p}(H\cdot p)^{\perp} \subset \mathbb{R}^{d}.$$

3For the kernel regime and a Gaussian likelihood, a precise analysis can be possible: the evolution of $f_t(\mathbf{X})$ factorizes along the eigenvectors of the NTK Gram matrix. We forgo it for brevity.

4It is well-defined since $\mathrm{d}\mathcal{A}|_p$ is an isomorphism between $T_p(H\cdot p)^\perp$ and $T_{\mathcal{A}p}\mathcal{F}$, and the isometry assumption ensures that the definition is independent of the choice of $p$ in the orbit (Lee, 2018).
For example, a drift term depending on the mean curvature vector of the orbit may be introduced when pushing a Brownian motion using the quotient map (JE, 1990), and when the mean curvature vanishes, the equivalence will continue to hold, as shown in our Example 2.1 (ii). Analysis for non-free group actions is primarily complicated by the fact that the quotient space is no longer a manifold in general (Satake, 1956). Still, as we show in Example 2.3, similar results can be established under the action of symmetric groups. We now provide a concrete, albeit artificial, example in which the equivalence implies fast convergence to the function-space posterior. It also highlights that VI and MCMC methods can have different behavior on overparameterized models, and that for VI methods it may still be necessary to explicitly account for overparameterization. While recent works have made similar observations (e.g., Sun et al., 2019), and provided some examples (Wang et al., 2019; Kurle et al., 2022), our example may provide additional insight: Example 2.2 (LD vs. particle-based VI on torus). Let Aθ := ([θ1], . . . , [θd])*, where* [a] := a − bac ∈ [0, 1). Let πθ, πf have constant densities, and the negative log likelihood be unimodal and locally strongly convex. Then we have F = T d, the d-dimensional torus, and by Proposition *2.1,* (WLD) is equivalent to Riemannian LD on F*. As* T d*is a compact manifold,* (FLD) enjoys exponential convergence (Villani, 2009), and so does the induced function-space measure of (WLD). Particle-based VI methods approximate the weight-space posterior with an empirical distribution of particles {θ (i)}M i=1, and update the particles iteratively. Consider the W-SGLD method in Chen et al. *(2018): its* update rule resembles (WLD)*, but with the diffusion term replaced by a deterministic "repulsive force" term,* v˜t(θ)dt*, where* $$\tilde{v}_{t}(\theta):=\sum_{j=1}^{M}\frac{\nabla_{\theta^{(j)}}k_{h}(\theta,\theta^{(j)})}{\sum_{k=1}^{M}k_{h}(\theta^{(j)},\theta^{(k)})}+\frac{\sum_{j=1}^{M}\nabla_{\theta^{(j)}}k_{h}(\theta,\theta^{(j)})}{\sum_{k=1}^{M}k_{h}(\theta,\theta^{(k)})},$$ and kh is a radial kernel with bandwidth h*. Formally, in the infinite-particle, continuous time limit, as* h → 0, both v˜tdt and the diffusion term implements the Wasserstein gradient of an entropy functional (Carrillo et al., 2019), and W-SGLD and LD are formally equivalent (Chen et al., *2018).* The asymptotic equivalence between (WLD) *and W-SGLD breaks down in this example: whereas* (WLD) induces a function-space measure that quickly converges to πf|D, this is not necessarily true for W-SGLD. Indeed, its induced function-space measure may well collapse to a point mass around the MAP, regardless of the number of particles. To see this, let θ ∗ ∈ [0, 1)d*be any MAP solution so that* ∇θ log p(Y | X, θ∗)p(θ ∗) = 0. Then for any fixed h = O(1), as M → ∞*, the configuration* {θ (i,M) = (1010M i, 0*, . . . ,* 0) + θ ∗}M i=1 *will* constitute an approximate stationary point for the W-SGLD update. This is because the posterior gradient term is always zero, but the repulsive force term vanishes due to the very large distances between particles in weight space. Past works have noted the pathologies of particle-based VI in high dimensions (Zhuo et al., 2018; Ba et al., 2021), but this example is interesting as it does not require an increasing dimensionality. Rather, it is global overparameterization that breaks the asymptotic convergence to LD. 
Analysis of non-free group actions As we have shown in Example 2.2, Proposition 2.1 already demonstrates some equivalence between (WLD) and (FLD) in the presence of global overparameterization. It can also be combined with Example 2.1 (ii) to construct models exhibiting both local and global overparameterization. Still, we present a more relevant example below, which is a BNN model exhibiting permutational symmetry. Note that the model will still be different from practical models, in particular because it precludes continuous symmetry. However, it allows for a non-constant NTK, which is an important feature of effective NN models for high-dimensional data (see e.g., Ghorbani et al., 2019; Wei et al., 2019).

Example 2.3 (simplified BNN model). Consider the model $f(x;\theta) := \sum_{i=1}^d \sin(\theta_i x)$, which is a two-layer BNN with the second layer frozen at initialization.

Let the prior support $\operatorname{supp}\pi_\theta$ be contained in $(0, +\infty)^d$. Then by the linear independence of sine functions, for $\mathcal{A}\theta = \mathcal{A}\theta'$ to hold, $\theta'$ must be a permutation of $\theta$, and thus the symmetry in this model can be described by the symmetric group $S_d$ consisting of all permutations on the set $\{1, \ldots, d\}$. The action of $S_d$ on the weight space $\mathbb{R}^d$ is non-free, and the function space is a manifold with boundary, namely the polyhedral cone $C_d := \{\theta \in \mathbb{R}^d : \theta_1 \le \theta_2 \le \cdots \le \theta_d\}$.

Let $\{\theta_t\}$ be the trajectory of (WLD) and $p_t$ denote the distribution of $\theta_t$. Appendix B.3 proves that the pushforward distribution $\tilde p_t := \mathcal{A}_\# p_t$ follows the Fokker-Planck equation with the Neumann boundary condition:

$$\begin{cases}\partial_{t}\tilde{p}_{t}(\theta)=-\nabla\cdot(\tilde{p}_{t}(\theta)\nabla_{\theta}(\log p(\mathbf{Y}\mid\theta,\mathbf{X})+\log p_{\theta}(\theta)))+\Delta\tilde{p}_{t}(\theta),&\theta\in\mathcal{F}^{\circ}\\ \partial\tilde{p}_{t}(\theta)/\partial v=0,&v\in N_{\theta},\,\theta\in\partial\mathcal{F},\end{cases}\tag{2}$$

where $\partial\mathcal{F}$ and $\mathcal{F}^\circ$ are the boundary and the interior of $\mathcal{F}$, respectively, and $N_\theta$ is the set of inward normal vectors of $\mathcal{F}$ at $\theta$. The evolution of $\tilde p_t$ is closely related to the *reflected Langevin dynamics* in $\mathcal{F}$ (Sato et al., 2022), which keeps its trajectory in $\mathcal{F}$ by reflecting it at $\partial\mathcal{F}$; when the posterior is strongly log-concave in $C_d$, the connection suggests that the function-space measure $\tilde p_t$ may enjoy fast convergence.5 In contrast, convergence of (WLD) to the weight-space posterior may be much slower, as it will have to visit an exponential number of equivalence classes.

We note that mixture models exhibit a similar permutational invariance, and their inferential and computational issues have been extensively studied (Celeux et al., 2000; Frühwirth-Schnatter, 2001; Jasra et al., 2005; Frühwirth-Schnatter, 2006). However, those works typically focus on mixing in the parameter (i.e., weight) space, which is different from our work, which only concerns the function space.

## 3 Numerical Study

While our theoretical results have covered two simplified BNN models, the models are still different from those employed in practice. In this section we present numerical experiments that evaluate the efficacy of (WLD)-derived algorithms on practical BNN models. While they cannot provide direct evidence on the equivalence between (WLD) and (FLD), they still validate a possible consequence of it, as we expect (FLD) to have good convergence properties (when the NTK and the NNGP kernels are not too different, Sec. 2.2) and (WLD) will inherit such a property if the equivalence holds.

We will experiment on two setups, a toy 1D regression dataset (Sec. 3.1) and a collection of semi-synthetic datasets adapted from the UCI regression datasets (Sec. 3.2). We note that it is impossible to implement (WLD) exactly, as it is a continuous-time process; thus, we will experiment with the Metropolis-adjusted Langevin algorithm (MALA, Roberts & Stramer, 2002) and the unadjusted Langevin algorithm (ULA, Grenander & Miller, 1994), which are two standard, widely used algorithms derived from LD.6 More importantly, it is difficult to directly validate the equivalence between (WLD) and (FLD) empirically, as the latter involves the function-space prior, which cannot be computed or approximated efficiently; for this reason we have resorted to *indirect experiments*. The experiments also share a similar goal with our theoretical analysis, which is to understand the behavior of (WLD)-derived algorithms on BNN models. In this aspect they complement previous works, by investigating practical, finite-width NN models and eliminating the influence of possible model misspecification in the evaluation.

5For a bounded convex domain and a smooth boundary, we can prove that (2) describes the density evolution of the reflected LD, and its convergence rate has also been established (Bubeck et al., 2018, Proposition 2.6).

6Briefly, ULA simulates a discretization of (WLD) and MALA corrects for the bias arising from the discretization. For simplicity, we may refer to them as simulating (WLD) in the following.
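To make the distinction between the two samplers concrete, here is a minimal sketch of one MALA step; dropping the accept/reject correction recovers ULA. The toy target and step size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mala_step(theta, log_post, grad_log_post, eta):
    """One MALA step: Langevin proposal + Metropolis-Hastings correction.
    Returning `prop` unconditionally (no accept/reject) would be plain ULA."""
    noise = rng.standard_normal(theta.shape)
    prop = theta + eta * grad_log_post(theta) + np.sqrt(2 * eta) * noise

    def log_q(to, frm):
        # log density (up to constants) of the Gaussian Langevin proposal.
        diff = to - frm - eta * grad_log_post(frm)
        return -np.sum(diff**2) / (4 * eta)

    log_alpha = (log_post(prop) - log_post(theta)
                 + log_q(theta, prop) - log_q(prop, theta))
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return theta, False

# Toy target: standard Gaussian posterior over theta in R^5.
log_post = lambda th: -0.5 * np.sum(th**2)
grad_log_post = lambda th: -th
theta = np.ones(5)
accepts = 0
for _ in range(5_000):
    theta, ok = mala_step(theta, log_post, grad_log_post, eta=0.1)
    accepts += ok
print(accepts / 5_000, theta)
```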
We will experiment on two setups, a toy 1D regression dataset (Sec. 3.1) and a collection of semi-synthetic datasets adapted from the UCI regression datasets (Sec. 3.2). We note that it is impossible to implement (WLD) exactly, as it is a continuous-time process; thus, we will experiment with Metropolis-adjusted Langevin algorithm (MALA, Roberts & Stramer, 2002) and unadjusted Langevin algorithm (ULA, Grenander & Miller, 1994), which are two standard, widely-used algorithms derived from LD.6 More importantly, it is difficult to directly validate the equivalence between (WLD) and (FLD) empirically, as the latter involves the function-space prior which cannot be computed or approximated efficiently; for this reason we have resorted to *indirect experiments*. The experiments also share a similar goal to our theoretical analysis, which is to understand the behavior of (WLD)-derived algorithms on BNN models. In this aspect they complement previous works, by investigating practical, finite-width NN models and eliminating the influence of possible model misspecification in the evaluation. 5For a bounded convex domain and a smooth boundary, we can prove that (2) describes the density evolution of the reflected LD, and its convergence rate has also been established (Bubeck et al., 2018, Proposition 2.6). 6Briefly, ULA simulates a discretization of (WLD) and MALA corrects for the bias arising from the discretization. For simplicity, we may refer to them as simulating (WLD) in the following. ## 3.1 Sample Quality On A Toy Dataset We first consider BNN inference on a toy 1D regression dataset, and check if the function-space measure induced by simulating (WLD) (i.e., the distribution of Aθt) appears to converge at a similar rate, across models with increasing degree of overparameterization. 1. we will visualize the pointwise credible intervals of Aθt, which are informative about one-dimensional marginal distributions of the function-space measure; 2. when the training sample size n is small, we approximately evaluate the approximation quality of (n+ 1)- dimensional marginal distributions of f(Xe) := (f(x1), . . . , f(xn), f(x∗)) with f ∼ πf|D, by estimating the kernelized Stein discrepancy (KSD, Liu et al., 2016; Chwialkowski et al., 2016) between the marginal distribution q(f(Xe)) (where f = Aθt and θt follows (WLD)), and the true marginal posterior p(f(Xe)) (where f ∼ πf|D). KSD is often used for measuring sample quality (Gorham & Mackey, 2017; Anastasiou et al., 2023). We use the U-statistic estimator in Liu et al. (2016, Eq. 14), which only requires the specification of a kernel in R n+1, samples from q and the *score function* of p(f(Xe)). Importantly, we can estimate the score since it admits the following decomposition: $$\nabla_{f(\mathbf{X}_{e})}\log p=\nabla_{f(\mathbf{X}_{e})}\Big{(}\log\frac{d\pi_{f(\mathbf{X}_{e})}}{d\mu_{Leb}}+\log p(\mathbf{Y}\mid f(\mathbf{X}_{e}))\Big{)}$$ $$=\nabla_{f(\mathbf{X}_{e})}\Big{(}\log\frac{d\pi_{f(\mathbf{X}_{e})}}{d\mu_{Leb}}+\log p(\mathbf{Y}\mid f(\mathbf{X}))\Big{)},\qquad\qquad\text{(since$\mathbf{X}\subset\mathbf{X}_{e}$)}\tag{3}$$ where πf(Xe) denotes the respective marginal distribution of πf , and µLeb denotes the Lebesgue measure. We estimate the first term by fitting nonparametric score estimators (Zhou et al., 2020) on prior samples. The second term can be evaluated in closed form. ![6_image_0.png](6_image_0.png) Figure 1: 1D regression: visualization of the induced function-space measure of MALA after I iterations. 
We use feed-forward networks with factorized Gaussian priors, and the standard initialization scaling: $f(x;\theta) := f^{(L)}(f^{(L-1)}(\cdots f^{(0)}(x)))$, where

$$f^{(l)}(h^{(l-1)}):=\sigma^{(l)}\left(B^{(l)}h^{(l-1)}+b^{(l)}\right),\ \ \ \mbox{vec}(B^{(l)})\sim\mathcal{N}\left(0,(\dim h^{(l-1)})^{-1}I\right),\ \ \ b^{(l)}\sim\mathcal{N}(0,0.2I),\tag{4}$$

and the activation functions $\sigma^{(l)}$ are SELU (Klambauer et al., 2017) for the hidden layers ($l < L$) and the identity map for the output layer ($l = L$).

![7_image_0.png](7_image_0.png)

Figure 2: 1D regression: estimated $\sqrt{\text{KSD}}$ between the LD predictive distribution $q(f(\mathbf{X}_e))$ and the approximate function-space posterior $p(f(\mathbf{X}_e))$. We simulate 1000 LD chains. For the approximate posterior, we estimate the prior score term in (3) using $5 \times 10^6$ samples.

We vary the network depth $L \in \{2, 3\}$, and the width of all hidden layers $W \in [20, 500]$. The training data is generated as follows: the inputs consist of $\lfloor 2n/3\rfloor$ evenly spaced points on $[-2.5, -0.5]$, and the remaining points are evenly placed on $[1, 2]$. The output is sampled from $p(y \mid x) = \mathcal{N}(x\sin(1.5x) + 0.125x^3,\ 0.01)$. We use $n = 7$ for visualization, and $n = 3$ for KSD evaluation. The difference is due to challenges in approximating our KSD: (3) involves score estimation, and in our case we further need the estimate to generalize to out-of-distribution inputs (approximate posterior as opposed to prior samples); both are extremely challenging tasks in high dimensions.7

We simulate (WLD) with MALA, and evaluate the induced function-space samples for varying numbers of iterations. The step size is set to $0.025/nW$, so that the function-space updates have a similar scale. We visualize the posterior approximations in Fig. 1 and Fig. 4, and report the approximate KSD in Fig. 2. As we can see, the convergence appears to happen at a similar rate, which is a possible consequence of the equivalence results.

## 3.2 Average-Case Predictive Performance On Semi-Synthetic Data

The previous experiments cannot scale to larger datasets due to the aforementioned challenges in estimating the KSD. To investigate the behavior of (WLD)-derived algorithms on more realistic datasets, we turn to less direct experiments and check whether, *in the absence of model misspecification*, (WLD)-derived algorithms lead to competitive predictive performance. Our experiments use semi-synthetic datasets adapted from the UCI machine learning repository. Specifically, we modify the UCI datasets by keeping the input data and replacing the output with samples from the model likelihood $p(y \mid x, f_0)$, where $f_0 = f_{\mathrm{BNN}}(\cdot\,;\theta_0)$ is sampled from the BNN prior:

$$\theta_0 \sim \pi_\theta, \qquad y \mid x \sim p(y = \cdot \mid f_{\mathrm{BNN}}(x; \theta_0)).\tag{5}$$

We consider Gaussian (resp. Laplacian) likelihoods and check whether an approximate posterior mean (resp. median) estimator, constructed using the (WLD)-derived algorithms, has a competitive *average-case* performance across randomly sampled $\theta_0$. This will happen if the weight-space algorithms provide a reasonably accurate approximation to the function-space posterior, since the *exact* posterior mean (resp. median) estimator will minimize the similarly defined *average-case* risk

$$\hat{f} \mapsto \mathbb{E}_{f_0\sim\pi_f}\,\mathbb{E}_{\mathbf{Y}\sim p(\cdot\mid f_0(\mathbf{X}))}\,\mathbb{E}_{x_*\sim p_x,\,y_*\sim p(\cdot\mid f_0(x_*))}\,\ell(\hat{f}(x_*), y_*),\tag{6}$$

7Without strong differentiability assumptions, the error of score estimation may suffer from a curse of dimensionality (Tsybakov & Zaiats, 2009).
where $\ell$ denotes the loss function derived from the model likelihood, and $\hat f$ denotes any estimator that maps the data $(\mathbf{X}, \mathbf{Y})$ to a prediction function (we dropped the dependency on the data for readability). Therefore, competitive predictive performance of the approximate predictor will provide evidence on the quality of the posterior approximation. Note that by using a semi-synthetic data generating process, we can allow the input features to have realistic distributions, while avoiding the possible influence of model misspecification, which cannot be ruled out in past works that experiment on real-world data (Wenzel et al., 2020; Fortuin et al., 2021).

We estimate the average-case risk (6) for MALA and ULA, using a Monte-Carlo estimate; the full procedure is summarized as Algorithm 1 in the appendix. We instantiate (6) with Gaussian and Laplacian likelihoods, which correspond to the square loss and the absolute error loss, respectively. The feed-forward network architecture $f_{\mathrm{BNN}}$ follows Sec. 3.1, varying $L \in \{2, 3\}$, $W \in \{50, 200\}$, and we use 80% of the samples for training and 20% for testing. To understand the performance of the (WLD)-derived algorithms, we report the *Bayes error*, which is the minimum possible average-case risk attainable with *infinite samples*; we also include a baseline that replaces the (WLD)-derived algorithm with an ensemble gradient descent (GD) procedure for a maximum a posteriori (MAP) estimate. For all methods, the step size is selected from $\{\eta/2nW : \eta \in \{1, 0.5, 0.1, 0.05, 0.01, 0.005\}\}$ such that the average acceptance rate of the first 200 MALA iterations is closest to 0.7, where $n$ denotes the size of the training set.

![9_image_0.png](9_image_0.png)

Figure 3: Semi-synthetic experiment: estimated average-case risk (6) under different choices of likelihood, for L = 2, W = 200. Shade indicates standard deviation across 8 independent replications.

We plot the Bayes error and the estimated average-case risk against the number of iterations in Fig. 3 and Figs. 5-6 in the appendix, and report the performance at the best iteration in Tables 1-3. As we can see, across all settings, MALA and ULA lead to a similar predictive performance to GD, and all of them attain errors close to the Bayes error, especially for datasets with larger training sets. As it is well known that GD methods perform well on DNN models (Du et al., 2018; Allen-Zhu et al., 2019; Mei et al., 2019; Arora et al., 2019a), these results provide further evidence on the efficacy of the (WLD)-derived algorithms.

## 4 Conclusion

In this work we have investigated the function-space behavior of weight-space Langevin-type algorithms on overparameterized models. Across multiple settings that encompass simplified BNN models, we have established the equivalence of the function-space pushforward of weight-space LD to its various function-space counterparts. Within their scope, the equivalence results allow us to view weight-space LD as a function-space inference procedure, and to understand its behavior by examining the preconditioner in the equivalent function-space dynamics. Numerical experiments provide additional evidence for the efficacy of Langevin-type algorithms on practical feed-forward network models.

## References

Laurence Aitchison. A statistical theory of cold posteriors in deep neural networks. *arXiv preprint arXiv:2008.05912*, 2020.

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In *International Conference on Machine Learning*, pp. 242–252. PMLR, 2019.

Andreas Anastasiou, Alessandro Barp, François-Xavier Briol, Bruno Ebner, Robert E Gaunt, Fatemeh Ghaderinezhad, Jackson Gorham, Arthur Gretton, Christophe Ley, Qiang Liu, et al.
Stein's method meets computational statistics: a review of some recent developments. *Statistical Science*, 38(1):120–139, 2023.

Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In *International Conference on Machine Learning*, pp. 322–332. PMLR, 2019a.

Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. *Advances in Neural Information Processing Systems*, 32, 2019b.

Jimmy Ba, Murat A Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, and Tianzong Zhang. Understanding the variance collapse of SVGD in high dimensions. In *International Conference on Learning Representations*, 2021.

Qinxun Bai, Steven Rosenberg, and Wei Xu. Understanding natural gradient in Sobolev spaces. *arXiv preprint arXiv:2202.06232*, 2022.

George EP Box. Sampling and Bayes' inference in scientific modelling and robustness. *Journal of the Royal Statistical Society: Series A (General)*, 143(4):383–404, 1980.

Sébastien Bubeck, Ronen Eldan, and Joseph Lehec. Sampling from a log-concave distribution with projected Langevin Monte Carlo. *Discrete & Computational Geometry*, 59:757–783, 2018.

David R Burt, Sebastian W Ober, Adrià Garriga-Alonso, and Mark van der Wilk. Understanding variational inference in function-space. *arXiv preprint arXiv:2011.09421*, 2020.

José Antonio Carrillo, Katy Craig, and Francesco S Patacchini. A blob method for diffusion. *Calculus of Variations and Partial Differential Equations*, 58(2):1–53, 2019.

Gilles Celeux, Merrilee Hurn, and Christian P Robert. Computational and inferential difficulties with mixture posterior distributions. *Journal of the American Statistical Association*, 95(451):957–970, 2000.

Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, and Liqun Chen. A unified particle-optimization framework for scalable Bayesian sampling. *arXiv preprint arXiv:1805.11659*, 2018.

Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In *International Conference on Machine Learning*, pp. 2606–2615. PMLR, 2016.

Manfredo Perdigao Do Carmo. *Riemannian geometry*, volume 6. Springer, 1992.

Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes overparameterized neural networks. *arXiv preprint arXiv:1810.02054*, 2018.

David Steven Dummit and Richard M Foote. *Abstract algebra*, volume 3. Wiley Hoboken, 2004.

Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard Turner, Mark van der Wilk, and Laurence Aitchison. Bayesian neural network priors revisited. *arXiv preprint arXiv:2102.06571*, 2021.

Sylvia Frühwirth-Schnatter. Markov chain Monte Carlo estimation of classical and dynamic switching and mixture models. *Journal of the American Statistical Association*, 96(453):194–209, 2001.

Sylvia Frühwirth-Schnatter. *Finite mixture and Markov switching models*, volume 425. Springer, 2006.

Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Limitations of lazy training of two-layers neural networks. *arXiv preprint arXiv:1906.08899*, 2019.

Mark Girolami and Ben Calderhead.
Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. In *International Conference on Machine Learning*, pp. 1292–1301. PMLR, 2017.

Ulf Grenander and Michael I Miller. Representations of knowledge in complex systems. *Journal of the Royal Statistical Society: Series B (Methodological)*, 56(4):549–581, 1994.

Brian C Hall. *Lie groups, Lie algebras, and representations*. Springer, 2013.

Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. Exact convergence rates of the neural tangent kernel in the large depth limit. *arXiv e-prints*, pp. arXiv–1905, 2019.

Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian deep ensembles via the neural tangent kernel. *Advances in Neural Information Processing Systems*, 33:1010–1022, 2020.

Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, and Andrew Gordon Gordon Wilson. What are Bayesian neural network posteriors really like? In *International Conference on Machine Learning*, pp. 4629–4640. PMLR, 2021.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. *Advances in Neural Information Processing Systems*, 31, 2018.

Ajay Jasra, Chris C Holmes, and David A Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. 2005.

J. E. Pauwels. Riemannian submersions of Brownian motions. *Stochastics: An International Journal of Probability and Stochastic Processes*, 29(4):425–436, 1990.

Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. *Advances in Neural Information Processing Systems*, 30, 2017.

Richard Kurle, Ralf Herbrich, Tim Januschowski, Yuyang Bernie Wang, and Jan Gasthaus. On the detrimental effect of invariances in the likelihood for variational inference. *Advances in Neural Information Processing Systems*, 35:4531–4542, 2022.

Jaehoon Lee, Jascha Sohl-dickstein, Jeffrey Pennington, Roman Novak, Sam Schoenholz, and Yasaman Bahri. Deep neural networks as Gaussian processes. In *International Conference on Learning Representations*, 2018.

Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. *Advances in Neural Information Processing Systems*, 32, 2019.

John M. Lee. *Introduction to Smooth Manifolds*. Springer, 2012.

John M Lee. *Introduction to Riemannian manifolds*. Springer, 2018.

Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In *International Conference on Machine Learning*, pp. 276–284. PMLR, 2016.

Chao Ma and José Miguel Hernández-Lobato. Functional variational inference based on stochastic process generators. *Advances in Neural Information Processing Systems*, 34:21795–21807, 2021.

Chao Ma, Yingzhen Li, and José Miguel Hernández-Lobato. Variational implicit processes. In *International Conference on Machine Learning*, pp. 4222–4233. PMLR, 2019.

Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. *arXiv preprint arXiv:1804.11271*, 2018.

Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. In *Conference on Learning Theory*, pp. 2388–2464. PMLR, 2019.
Ankur Moitra and Andrej Risteski. Fast convergence for Langevin diffusion with manifold structure. *arXiv preprint arXiv:2002.05576*, 2020.

Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. *Advances in Neural Information Processing Systems*, 31, 2018.

Omiros Papaspiliopoulos, Gareth O Roberts, and Martin Sköld. A general framework for the parametrization of hierarchical models. *Statistical Science*, pp. 59–73, 2007.

Gareth O Roberts and Osnat Stramer. Langevin diffusions and Metropolis-Hastings algorithms. *Methodology and Computing in Applied Probability*, 4:337–357, 2002.

Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. *Bernoulli*, pp. 341–363, 1996.

Ichirô Satake. On a generalization of the notion of manifold. *Proceedings of the National Academy of Sciences*, 42(6):359–363, 1956.

Kanji Sato, Akiko Takeda, Reiichiro Kawai, and Taiji Suzuki. Convergence error analysis of reflected gradient Langevin dynamics for globally optimizing non-convex constrained problems. *arXiv preprint arXiv:2203.10215*, 2022.

Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. *arXiv preprint arXiv:1611.01232*, 2016.

Jiaxin Shi, Mohammad Emtiyaz Khan, and Jun Zhu. Scalable training of inference networks for Gaussian-process models. In *International Conference on Machine Learning*, pp. 5758–5768. PMLR, 2019.

Shengyang Sun, Guodong Zhang, Jiaxin Shi, and Roger Grosse. Functional variational Bayesian neural networks. *arXiv preprint arXiv:1903.05779*, 2019.

A B Tsybakov and Vladimir Zaiats. *Introduction to Nonparametric Estimation*. Springer Series in Statistics. Springer, New York, NY, 2009.

Cédric Villani. *Optimal transport: old and new*, volume 338. Springer, 2009.

Ziyu Wang, Tongzheng Ren, Jun Zhu, and Bo Zhang. Function space particle optimization for Bayesian neural networks. In *International Conference on Learning Representations*, 2019.

Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs. their induced kernel. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

Florian Wenzel, Kevin Roth, Bastiaan S Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How good is the Bayes posterior in deep neural networks really? *arXiv preprint arXiv:2002.02405*, 2020.

Blake Woodworth, Suriya Gunasekar, Jason D Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In *Conference on Learning Theory*, pp. 3635–3673. PMLR, 2020.

Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. *arXiv preprint arXiv:1902.04760*, 2019.

Yaming Yu and Xiao-Li Meng. To center or not to center: That is not the question—an ancillarity–sufficiency interweaving strategy (ASIS) for boosting MCMC efficiency. *Journal of Computational and Graphical Statistics*, 20(3):531–570, 2011.

Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient MCMC for Bayesian deep learning. *arXiv preprint arXiv:1902.03932*, 2019.
Yuhao Zhou, Jiaxin Shi, and Jun Zhu. Nonparametric score estimators. In *International Conference on Machine Learning*, pp. 11513–11522. PMLR, 2020.

Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. Message passing Stein variational gradient descent. In *International Conference on Machine Learning*, pp. 6018–6027. PMLR, 2018.

## A Background Knowledge

## A.1 Groups

A *group* $H$ is a set equipped with an operation $\Psi : H \times H \to H$. It satisfies the following properties:

- There is a unit element $e$ in $H$ such that $\Psi(e, \phi) = \Psi(\phi, e) = \phi$ for every $\phi \in H$.
- Every element $\phi \in H$ has an inverse $\phi^{-1} \in H$ such that $\Psi(\phi, \phi^{-1}) = \Psi(\phi^{-1}, \phi) = e$.
- The operation $\Psi$ is associative, i.e., for $\phi_1, \phi_2, \phi_3 \in H$, it holds that $\Psi(\phi_1, \Psi(\phi_2, \phi_3)) = \Psi(\Psi(\phi_1, \phi_2), \phi_3)$.

For simplicity, we use $\phi_1\phi_2$ to denote $\Psi(\phi_1, \phi_2)$ for every $\phi_1, \phi_2 \in H$. A group $H$ is a *Lie group* if it is a smooth manifold (Hall, 2013), and we say $H$ is *discrete* if for each $\phi \in H$ there exists a neighborhood $U_\phi \ni \phi$ containing only $\phi$.

## A.2 Riemannian Manifolds

To aid understanding, we provide a conceptual introduction of some relevant notions in Riemannian geometry in this section. We refer interested readers to Do Carmo (1992); Lee (2018) for rigorous definitions.

A $k$-dimensional manifold $M$ is a topological space locally resembling a $k$-dimensional Euclidean space. Specifically, for any $p \in M$, there exist open neighborhoods $M \supset U \ni p$ and $\mathbb{R}^k \supset V \ni 0$ and a homeomorphism $\psi : V \to U$. The map $\psi$ is called a *coordinate map near* $p$ if $\psi(0) = p$, and the *coordinate* of a point $q \in U$ is $\psi^{-1}(q)$. If two coordinate maps $\psi_1, \psi_2$ overlap, we require the transition $\psi_2^{-1} \circ \psi_1$ to be smooth.

The tangent space at $p \in M$ is a $k$-dimensional linear space "orthogonal to $M$", denoted as $T_pM$. A Riemannian structure equips the tangent space $T_pM$ with an inner product $\langle\cdot,\cdot\rangle_p$ for every $p \in M$. Given a coordinate map $\psi : V \to U$ and $a \in V$, the differential $\mathrm{d}\psi|_a$ at $a$ is a linear bijection between $\mathbb{R}^k$ and $T_{\psi(a)}M$. Let $e_1, e_2, \ldots, e_k \in \mathbb{R}^k$ be the standard basis of $\mathbb{R}^k$; then $\{E_i(a) := (\mathrm{d}\psi|_a)(e_i)\}_{i\in[k]}$ is a basis of $T_{\psi(a)}M$. We call $g_{ij}(a) := \langle E_i(a), E_j(a)\rangle_{\psi(a)}$ the *coordinate representation of the Riemannian metric* and $G(a) := (g_{ij}(a))_{i,j\in[k]} \in \mathbb{R}^{k\times k}$ the *coordinate representation of the metric matrix*. For simplicity, we omit the dependence on $a$ of $g_{ij}(a)$ and $G(a)$. Also, under a coordinate $\psi : V \to U$, the *volume* of a set $R \subset U$ is defined as (Do Carmo, 1992, p. 45)

$$\mathrm{Vol}(R) := \int_{\psi^{-1}(R)} \sqrt{|G|}\,\mathrm{d}\mu_{Leb},$$

where $|G|$ denotes the determinant of $G$, and $\mu_{Leb}$ is the $k$-dimensional Lebesgue measure. The measure $\mathrm{d}\mu_M := \sqrt{|G|}\,\mathrm{d}\mu_{Leb}$ is called the *Riemannian measure* or the *volume form*, and is independent of the choice of the coordinate $\psi$.

**Example A.1.** *As an example, let $M := T^2 := \{(\cos\alpha, \sin\alpha, \cos\beta, \sin\beta) : \alpha, \beta \in [0, 2\pi)\}$ be the two-dimensional torus considered in Example 2.2. For any $p = (\cos\alpha, \sin\alpha, \cos\beta, \sin\beta) \in T^2$, we can find a coordinate map $\psi(\zeta, \xi) := (\cos(\alpha+\zeta), \sin(\alpha+\zeta), \cos(\beta+\xi), \sin(\beta+\xi)) \in \mathbb{R}^4$ with $(\zeta, \xi) \in V := (-1, 1) \times (-1, 1)$. The tangent space $T_pT^2 = p + \{(-t\sin\alpha, t\cos\alpha, -s\sin\beta, s\cos\beta) : t, s \in \mathbb{R}\}$ is the plane orthogonal to $p$. For $t_1, t_2, s_1, s_2 \in \mathbb{R}$ and tangent vectors $v_1 = p + (-t_1\sin\alpha, t_1\cos\alpha, -s_1\sin\beta, s_1\cos\beta)$ and $v_2 = p + (-t_2\sin\alpha, t_2\cos\alpha, -s_2\sin\beta, s_2\cos\beta)$, we can define an inner product $\langle v_1, v_2\rangle_p := (v_1 - p)^\top(v_2 - p) = t_1t_2 + s_1s_2$.*
*The differential of $\psi$ is $(\mathrm{d}\psi|_0)(t, s) = (-t\sin\alpha, t\cos\alpha, -s\sin\beta, s\cos\beta)$, mapping the basis $e_1 = (1, 0)$, $e_2 = (0, 1)$ in $\mathbb{R}^2$ to $E_1 = p + (-\sin\alpha, \cos\alpha, 0, 0)$, $E_2 = p + (0, 0, -\sin\beta, \cos\beta) \in T_pT^2$. The coordinate representation of the metric is such that $g_{ij}(0) = 1$ if $i = j$ and $g_{ij}(0) = 0$ otherwise, and the metric matrix is $G(0) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.*

## B Proofs

## B.1 Details In Example 2.1

As mentioned in this example, when the input space $\mathcal{X}$ has finite cardinality $K \in \mathbb{N}$, a function $f : \mathcal{X} \to \mathbb{R}$ can be identified with the vector $(f(x_1), f(x_2), \ldots, f(x_K)) \in \mathbb{R}^K$. Conversely, for a vector $v = (v_1, v_2, \ldots, v_K) \in \mathbb{R}^K$ we can define a function $f_v : \mathcal{X} \to \mathbb{R}$ such that $f_v(x_j) = v_j$ for $j = 1, \ldots, K$. In the remaining part, we will represent the function $f_v \in \mathbb{R}^{\mathcal{X}}$ by the vector $v \in \mathbb{R}^K$.

*Proof of Example 2.1.* Since the map $\mathcal{A}$ is linear, there exists a matrix $A \in \mathbb{R}^{K\times d}$ such that $\mathcal{A}\theta = A\theta$ for every $\theta \in \mathbb{R}^d$. Now the claim (i) follows by Proposition 2.1 (b): clearly, in this case $H = \{e\}$ fulfills the conditions in the proposition.

We now turn to (ii), where $\mathcal{A}$ is a surjection with $d > K$. Let $\mathrm{Ker}(\mathcal{A}) := \mathrm{Ker}(A) := \{x \in \mathbb{R}^d : Ax = 0\}$ be the kernel of $A$, and $\mathrm{Ran}(A^\top) := \{A^\top y \in \mathbb{R}^d : y \in \mathbb{R}^K\}$ be the range of $A^\top$; then the space $\mathbb{R}^d$ has the orthogonal decomposition $\mathbb{R}^d = \mathrm{Ker}(A) \oplus \mathrm{Ran}(A^\top)$, and there exists an orthogonal matrix $Q \in \mathbb{R}^{d\times d}$ such that $Q(\mathrm{Ker}(A)) = \{0\}^k \times \mathbb{R}^{d-k}$ and $Q(\mathrm{Ran}(A^\top)) = \mathbb{R}^k \times \{0\}^{d-k}$, where $k := \dim \mathrm{Ran}(A^\top) \le K < d$. Then, $\theta \in \mathbb{R}^d$ has the representation $\theta = Q^\top(\theta^\parallel, \theta^\perp)$ for $\theta^\parallel \in \mathbb{R}^k$ and $\theta^\perp \in \mathbb{R}^{d-k}$. Under this representation, $\mathcal{A}$ becomes $\tilde{\mathcal{A}} := \mathcal{A} \circ Q^\top$ and $\tilde{\mathcal{A}}|_{\mathbb{R}^k\times\{0\}^{d-k}}$ is a bijection. We can also define a reduced likelihood $\tilde{p}(\mathbf{Y} \mid \theta^\parallel, \mathbf{X}) := p(\mathbf{Y} \mid Q^\top(\theta^\parallel, 0), \mathbf{X})$; then

$$\mathrm{d}\begin{pmatrix}\theta_t^\parallel \\ \theta_t^\perp\end{pmatrix} = \mathrm{d}(Q\theta_t) = \left(Q\nabla_\theta \log p(\mathbf{Y}\mid\theta_t,\mathbf{X}) - Q\theta_t\right)\mathrm{d}t + \sqrt{2}\,Q\,\mathrm{d}B_t \qquad (p_\theta = \mathcal{N}(0, I_d))$$
$$= \left(\begin{pmatrix}\nabla_{\theta^\parallel} \log \tilde{p}(\mathbf{Y}\mid\theta_t^\parallel,\mathbf{X}) \\ 0\end{pmatrix} - \begin{pmatrix}\theta_t^\parallel \\ \theta_t^\perp\end{pmatrix}\right)\mathrm{d}t + \sqrt{2}\,\mathrm{d}\tilde{B}_t \qquad (Q\,\mathrm{d}B_t \overset{d}{=} \sqrt{QQ^\top}\,\mathrm{d}B_t = \mathrm{d}\tilde{B}_t),$$

where $\tilde{B}_t$ denotes another Brownian motion. Therefore, $\theta_t^\parallel$ and $\theta_t^\perp$ factorize to independent processes, and the equivalence holds by applying the result of (i) to $\theta_t^\parallel$ and $\tilde{\mathcal{A}}$ (and noting that $\tilde{\mathcal{A}}\theta_t^\parallel = \mathcal{A}\theta_t$).

## B.2 Proof Of Proposition 2.1

*Proof of Proposition 2.1.* By definitions, for any $f \in \mathcal{F}$, there exists some $\theta \in \mathbb{R}^d$ and one of its neighborhoods $N$ such that $f = \mathcal{A}\theta$, and, for $U = \mathcal{A}(N)$, $(U, \mathcal{A}|_N)$ forms a coordinate chart. On this chart, the coordinate matrix of the pushforward metric tensor equals identity, by its definition. Thus, the coordinate representation of (FLD) reduces to

$$\mathrm{d}\theta_t = \nabla_\theta\left(\log p(\mathbf{Y}\mid\theta_t,\mathbf{X}) + \log\frac{d\pi_f}{d\mu_{\mathcal{F}}}\right)\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,$$

and it differs from (WLD) only in the prior term. When condition (a) in the proposition holds, the prior is uniform, so the gradient vanishes. When condition (b) holds, the group is trivial and the quotient map $\mathcal{A}$ is a bijection. Thus, it suffices to show that for all $\theta \in \mathrm{supp}\,\pi_\theta$, we have

$$\frac{d\pi_f}{d\mu_{\mathcal{F}}}(\mathcal{A}\theta) = \frac{d\pi_\theta}{d\mu_{Leb}}(\theta) = p_\theta(\theta),$$

where $\mu_{Leb}$ denotes the Lebesgue measure.
By the change of measure formula, the above will be implied by

$$\pi_f \overset{(i)}{=} \mathcal{A}_\#\pi_\theta, \qquad \mu_{\mathcal{F}} \overset{(ii)}{=} \mathcal{A}_\#\mu_{Leb}.$$

(i) is the definition of $\pi_f$. For (ii), let $\zeta : \mathcal{F} \to \mathbb{R}$ be any measurable function with a compact support, $\{(U_i = \mathcal{A}(N_i), \mathcal{A}|_{N_i}) : i \in [h]\}$ be a finite chart covering of $\mathrm{supp}\,\zeta$, and $\{\rho_i\}$ be a corresponding partition of unity. Then

$$\int_{\mathcal{F}} \zeta(f)\,\mu_{\mathcal{F}}(\mathrm{d}f) = \sum_{i=1}^h \int_{N_i} (\rho_i\zeta)(\mathcal{A}(\theta))\sqrt{|G(\theta)|}\,\mu_{Leb}(\mathrm{d}\theta) = \int_{\mathcal{A}^{-1}(\mathrm{supp}\,\zeta)} \zeta(\mathcal{A}(\theta))\,\mu_{Leb}(\mathrm{d}\theta).$$

This establishes (ii), and thus completes the proof.

## B.3 Details In Example 2.3

Recall the definition of the cone $C_d := \{x \in \mathbb{R}^d : x_1 \le x_2 \le \ldots \le x_d\}$, and the group $S_d$ that consists of all permutations of length $d$. An action of $S_d$ on $\mathbb{R}^d$ can be naturally defined, under which we have $C_d = \mathbb{R}^d / S_d$.

We introduce a few additional notations. For $x \in \mathbb{R}^d$, the *stabilizer subgroup* is defined as $\mathrm{Stab}_{S_d}\,x := \{\varphi \in S_d : \varphi \cdot x = x\}$, and the orbit is $S_d \cdot x := \{\varphi \cdot x : \varphi \in S_d\}$. A vector $n_x \in \mathbb{R}^d$ is an *inward normal vector* of $C_d$ at $x$ if $\langle n_x, y - x\rangle \ge 0$ holds for all $y \in C_d$. Denote by $N_x$ the set of all inward normal vectors of $C_d$ at $x$. For any $f : \mathbb{R}^d \to \mathbb{R}$, define the function

$$\tilde{f} : C_d \to \mathbb{R}, \quad \tilde{f}(x) := \frac{1}{|S_d|}\sum_{\varphi\in S_d} f(\varphi\cdot x). \tag{7}$$

When $f$ is the density function of a measure $\pi$ on $\mathbb{R}^d$, the pushforward measure under the quotient map $\mathbb{R}^d \to C_d$ has the density function $\tilde{f}$. The following lemma shows that the directional derivative of $\tilde{f}$ along the normal direction vanishes.

**Lemma B.1.** *Let $x \in C_d$ and assume $f$ is differentiable at every $y \in S_d\cdot x$. Then*

$$D_v\tilde{f}(x) = \frac{1}{|S_d|}\sum_{y:=\psi\cdot x\in S_d\cdot x} D_{\psi\cdot W_x(v)}f(y), \quad\text{where}\ \ W_x(v) := \sum_{\varphi\in\mathrm{Stab}\,x}\varphi\cdot v,$$

*where $D_v$ denotes the directional derivative along $v$. Moreover, $W_x(v) = v$ for $x \in C_d^\circ$ and $v \in \mathbb{R}^d$, and $W_x(v) = 0$ for $x \in \partial C_d$ and $v \in N_x$.*

We postpone the proof of the above lemma to the end of this section, and first present the following lemma, which implies the invariance of the Fokker-Planck equation under orthogonal transformations.

**Lemma B.2.** *Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be two functions and $Q \in \mathbb{R}^{d\times d}$ be an orthogonal matrix; then $[\nabla(f\circ Q)]^\top\nabla(g\circ Q) = [(\nabla f)^\top\nabla g]\circ Q$ and $\Delta(f\circ Q) = \Delta f\circ Q$, in which $Q$ is also regarded as a linear map $Q : \mathbb{R}^d \to \mathbb{R}^d$.*

*Proof.* Note that $\nabla(f\circ Q) = Q^\top(\nabla f\circ Q)$. Let $Q_i$ be the $i$-th column of $Q$; then

$$[\nabla(f\circ Q)]^\top\nabla(g\circ Q) = \sum_{i=1}^d (\nabla f\circ Q)^\top Q_iQ_i^\top(\nabla g\circ Q) = (\nabla f\circ Q)^\top(\nabla g\circ Q).$$

A similar result also holds for the Laplacian:

$$\Delta(f\circ Q) = \sum_{i=1}^d \partial_i\partial_i(f\circ Q) = \sum_{i,j=1}^d \partial_i(\partial_j f\circ Q)q_{ji} = \sum_{i,j,k=1}^d (\partial_k\partial_j f\circ Q)q_{ji}q_{ki}.$$

As $Q$ is orthogonal, we know $\sum_{i=1}^d q_{ji}q_{ki} = \delta_{jk}$, which completes the proof. $\square$

As the pushforward measure $\mathcal{A}_\#p$ has density $\tilde{p}$, the following proposition establishes the equivalence result claimed in the text.

**Proposition B.1.** *Let $p : \mathbb{R}^d \to \mathbb{R}$ be any function that is invariant under the action of $S_d$, and let $X_t$ follow the Langevin dynamics on $\mathbb{R}^d$,*

$$\mathrm{d}X_t = \nabla\log p(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t.$$
*Then, the pushforward density $\tilde{p}_t$ of $X_t$ will evolve as*

$$\begin{cases}\partial_t\tilde{p}_t = -\nabla\cdot(\tilde{p}_t\nabla\log p) + \Delta\tilde{p}_t, & \text{in } C_d^\circ,\\ \dfrac{\partial\tilde{p}_t}{\partial v}(x) = 0, & \forall v\in N_x,\ x\in\partial C_d.\end{cases}$$

*Proof.* Let $p_t$ be the density of the distribution of $X_t$; then it follows the Fokker-Planck equation

$$\partial_t p_t = -\nabla\cdot(p_t\nabla\log p) + \Delta p_t = -(\nabla p_t)^\top\nabla\log p - p_t\Delta\log p + \Delta p_t.$$

For $\varphi\in S_d$, we denote by $P_\varphi\in\mathbb{R}^{d\times d}$ the corresponding matrix such that $\varphi\cdot x = P_\varphi x$ for every $x\in\mathbb{R}^d$. Then, $P_\varphi$ is an orthogonal matrix, and by Lemma B.2

$$\partial_t(p_t\circ P_\varphi) = (\partial_t p_t)\circ P_\varphi = -(\nabla p_t\cdot\nabla\log p)\circ P_\varphi - (p_t\Delta\log p)\circ P_\varphi + (\Delta p_t)\circ P_\varphi$$
$$= -[\nabla(p_t\circ P_\varphi)]^\top\nabla\log p - (p_t\circ P_\varphi)\Delta\log p + \Delta(p_t\circ P_\varphi),$$

where the first equation is because $P_\varphi$ is independent of $t$, and the last equation follows from Lemma B.2 and $\log p\circ P_\varphi = \log p$. Therefore, we obtain the equation for $\tilde{p}_t$:

$$\partial_t\tilde{p}_t = \frac{1}{|S_d|}\sum_{\varphi\in S_d}\partial_t(p_t\circ P_\varphi) = \frac{1}{|S_d|}\sum_{\varphi\in S_d}\left(-\nabla\cdot((p_t\circ P_\varphi)\nabla\log p) + \Delta(p_t\circ P_\varphi)\right) \tag{8}$$
$$= -\nabla\cdot(\tilde{p}_t\nabla\log p) + \Delta\tilde{p}_t.$$

Combining with Lemma B.1 yields the boundary condition

$$\frac{\partial\tilde{p}_t}{\partial v}(x) = 0, \quad \forall v\in N_x,\ x\in\partial C_d. \tag{9}$$

*Proof of Lemma B.1.* Since the group action is linear (i.e., $\varphi\cdot(x+y) = \varphi\cdot x + \varphi\cdot y$ and $\varphi\cdot(tx) = t\,\varphi\cdot x$), we have

$$D_v\tilde{f}(x) = \lim_{t\to 0+}\frac{1}{t}\left(\tilde{f}(x+tv) - \tilde{f}(x)\right) = \frac{1}{|S_d|}\sum_{\varphi\in S_d} D_{\varphi\cdot v}f(\varphi\cdot x).$$

To simplify the above summation, we introduce the coset $\varphi\,\mathrm{Stab}\,x := \{\varphi\psi : \psi\in\mathrm{Stab}\,x\}$ for each $\varphi\in S_d$, and the set of cosets $S_d/\mathrm{Stab}\,x := \{\varphi\,\mathrm{Stab}\,x : \varphi\in S_d\}$. Clearly, any two cosets are either equal or disjoint, and the group $S_d$ is partitioned by $S_d/\mathrm{Stab}\,x$. The orbit-stabilizer theorem (Dummit & Foote, 2004, p. 114) states that the map $\varphi\,\mathrm{Stab}\,x\mapsto\varphi\cdot x$ is a bijection between the cosets $S_d/\mathrm{Stab}\,x$ and the orbit $S_d\cdot x$, and thus[^8]

$$\begin{aligned}
D_v\tilde{f}(x) &= \frac{1}{|S_d|}\sum_{\varphi\in S_d} D_{\varphi\cdot v}f(\varphi\cdot x)
= \frac{1}{|S_d|}\sum_{C=\psi\,\mathrm{Stab}\,x\,\in\, S_d/\mathrm{Stab}\,x}\ \sum_{\varphi\in C} D_{\varphi\cdot v}f(\varphi\cdot x) &&\text{(partition)}\\
&= \frac{1}{|S_d|}\sum_{y:=\psi\cdot x\in S_d\cdot x}\ \sum_{\varphi\in\psi\,\mathrm{Stab}\,x} D_{\varphi\cdot v}f(\varphi\cdot x) &&(\psi\,\mathrm{Stab}\,x\mapsto\psi\cdot x\ \text{bijective})\\
&= \frac{1}{|S_d|}\sum_{y:=\psi\cdot x\in S_d\cdot x}\ \sum_{\varphi'\in\mathrm{Stab}\,x} D_{\psi\cdot(\varphi'\cdot v)}f(y) &&(\varphi' := \psi^{-1}\varphi)\\
&= \frac{1}{|S_d|}\sum_{y:=\psi\cdot x\in S_d\cdot x} D_{\psi\cdot W_x(v)}f(y). &&\text{(linearity of } D_{(\cdot)}f)
\end{aligned}$$

This proves the first claim. For any interior point $x\in C_d^\circ$, we have $\mathrm{Stab}\,x = \{e\}$ and thus $W_x(v) = v$. For any boundary point $x\in\partial C_d$, the stabilizer subgroup is non-trivial, and it remains to show that $W_x(v) = 0$ for normal vectors.

[^8]: It can be verified that the proof is independent of the choice of $\psi$.
| Likelihood | Algorithm | boston | concrete | energy | kin8nm | naval | power plant | wine | yacht |
|------------|-----------|--------|----------|--------|--------|-------|-------------|-------|-------|
| Gaussian   | MALA      | 0.067  | 0.058    | 0.056  | 0.055  | 0.051 | 0.050       | 0.063 | 0.055 |
| Gaussian   | GD        | 0.068  | 0.059    | 0.056  | 0.055  | 0.051 | 0.050       | 0.063 | 0.056 |
| Gaussian   | ULA       | 0.067  | 0.058    | 0.056  | 0.055  | 0.051 | 0.050       | 0.063 | 0.055 |
| Laplacian  | MALA      | 0.188  | 0.174    | 0.167  | 0.170  | 0.160 | 0.160       | 0.186 | 0.172 |
| Laplacian  | GD        | 0.189  | 0.175    | 0.167  | 0.170  | 0.161 | 0.160       | 0.186 | 0.173 |
| Laplacian  | ULA       | 0.188  | 0.175    | 0.167  | 0.170  | 0.161 | 0.160       | 0.186 | 0.171 |

Table 1: Semi-synthetic experiment: average-case test risk for the best stopping iteration, for L = 2, W = 200.

An element $\varphi\in S_d$ can be identified with a permutation matrix $P_\varphi\in\mathbb{R}^{d\times d}$ s.t. the group action is the matrix-vector multiplication $\varphi\cdot v = P_\varphi v$, and clearly, the stabilizer of $x\in\partial C_d$ always has the form of a Cartesian product, $\prod_{j=1}^{m_x} S_{c_j}$, where $\{c_j\}$ is s.t. $\sum_{j=1}^{m_x} c_j = d$.[^9] Therefore, we have

$$W_x(v) = \sum_{\varphi\in\mathrm{Stab}\,x}\varphi\cdot v = \left(\sum_{\varphi\in\prod_{j=1}^{m_x} S_{c_j}} P_\varphi\right)v.$$

Note that $P_\varphi = \mathrm{blkdiag}(P_1, P_2, \ldots, P_{m_x})$,[^10] with each $P_j\in\mathbb{R}^{c_j\times c_j}$ being a permutation matrix, and the sum of all size-$c_j$ permutation matrices is $(c_j-1)!\,\mathbf{1}_{c_j\times c_j}$, where $\mathbf{1}$ denotes the all-ones matrix. Thus, by decomposing $W_x(v)\in\mathbb{R}^d$ into $\mathbb{R}^{c_1}\times\mathbb{R}^{c_2}\times\cdots\times\mathbb{R}^{c_{m_x}}$ we have

$$W_x(v) = \left(\frac{A_0}{c_1}\sum_{i=s_0+1}^{s_1} v_i\mathbf{1}_{c_1},\ \ \frac{A_0}{c_2}\sum_{i=s_1+1}^{s_2} v_i\mathbf{1}_{c_2},\ \ \ldots,\ \ \frac{A_0}{c_{m_x}}\sum_{i=s_{m_x-1}+1}^{s_{m_x}} v_i\mathbf{1}_{c_{m_x}}\right),$$

where $A_0 = \prod_{j=1}^{m_x} c_j!$ and $s_j = \sum_{l\le j} c_l$. Let $e^{(j)}\in\mathbb{R}^d$ be such that $e^{(j)}_k = 1$ if $s_{j-1} < k\le s_j$, and $e^{(j)}_k = 0$ otherwise. Then a sufficient condition for $W_x(v) = 0$ is that $\langle v, e^{(j)}\rangle = 0$ for all $j\in[m_x]$. Let $n_x\in N_x$ be an inward normal vector and fix $j\in[m_x]$. Since $x\pm\alpha_j e^{(j)}\in C_d$ for $\alpha_j = \min(x_{s_j} - x_{s_{j-1}}, x_{s_j+1} - x_{s_j}) > 0$, we conclude that $\langle n_x, \pm e^{(j)}\rangle\ge 0$ and hence $\langle n_x, e^{(j)}\rangle = 0$. Thus, $W_x(n_x) = 0$. (A numerical sanity check of this computation is sketched after Table 2 below.)

[^9]: For example, for $x\in C_5$ with $x_1 = x_2 < x_3 = x_4 < x_5$, the stabilizer is $S_2\times S_2\times S_1$.
[^10]: $\mathrm{blkdiag}(P_1, P_2, \ldots, P_{m_x})$ is the block diagonal matrix $\begin{pmatrix} P_1 & 0 & \cdots & 0 \\ 0 & P_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & P_{m_x} \end{pmatrix} \in \mathbb{R}^{d\times d}$.

## C Additional Results And Full Algorithm For Section 3.2

| Likelihood | Algorithm | boston | concrete | energy | kin8nm | naval | power plant | wine | yacht |
|------------|-----------|--------|----------|--------|--------|-------|-------------|-------|-------|
| Gaussian   | MALA      | 0.071  | 0.058    | 0.053  | 0.054  | 0.052 | 0.050       | 0.063 | 0.057 |
| Gaussian   | GD        | 0.071  | 0.058    | 0.053  | 0.054  | 0.051 | 0.050       | 0.063 | 0.057 |
| Gaussian   | ULA       | 0.070  | 0.058    | 0.053  | 0.054  | 0.051 | 0.050       | 0.063 | 0.057 |
| Laplacian  | MALA      | 0.194  | 0.172    | 0.170  | 0.167  | 0.161 | 0.160       | 0.186 | 0.167 |
| Laplacian  | GD        | 0.194  | 0.173    | 0.170  | 0.168  | 0.161 | 0.160       | 0.187 | 0.168 |
| Laplacian  | ULA       | 0.192  | 0.172    | 0.170  | 0.166  | 0.161 | 0.160       | 0.186 | 0.169 |

Table 2: Semi-synthetic experiment: average-case test risk for the best stopping iteration, for L = 2, W = 50.
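As promised above, the following is a minimal numerical sanity check of the stabilizer computation in the proof of Lemma B.1. It is our illustration, not part of the original paper; the helper names are ours, and permutations are enumerated by brute force for a small $d$.

```python
import itertools
import numpy as np

def stabilizer(x, tol=1e-12):
    """All permutations p of [d] (as index tuples) with p . x = x."""
    d = len(x)
    return [p for p in itertools.permutations(range(d))
            if np.allclose(x[list(p)], x, atol=tol)]

def W(x, v):
    """W_x(v) = sum_{phi in Stab x} phi . v  (as in Lemma B.1)."""
    return sum(v[list(p)] for p in stabilizer(x))

x = np.array([0.0, 0.0, 1.0])   # boundary point of C_3 (x_1 = x_2)
n = np.array([-1.0, 1.0, 0.0])  # inward normal at x: <n, y - x> = y_2 - y_1 >= 0 on C_3
v = np.array([0.3, -0.7, 2.0])  # generic direction

print(W(x, n))  # [0. 0. 0.]     -- normal directions are annihilated
print(W(x, v))  # [-0.4 -0.4 4.] -- per-block averages scaled by A_0, as in the formula
```

Here the stabilizer of $x = (0, 0, 1)$ is $S_2 \times S_1$ with $A_0 = 2!\,1! = 2$, so the generic direction is averaged block-wise exactly as in the displayed formula for $W_x(v)$, while the inward normal vanishes.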
| Likelihood | Algorithm | boston | concrete | energy | kin8nm | naval | power plant | wine | yacht |
|------------|-----------|--------|----------|--------|--------|-------|-------------|-------|-------|
| Gaussian   | MALA      | 0.069  | 0.059    | 0.055  | 0.056  | 0.051 | 0.052       | 0.068 | 0.053 |
| Gaussian   | GD        | 0.070  | 0.059    | 0.056  | 0.056  | 0.051 | 0.051       | 0.069 | 0.054 |
| Gaussian   | ULA       | 0.069  | 0.059    | 0.055  | 0.056  | 0.051 | 0.050       | 0.068 | 0.053 |
| Laplacian  | MALA      | 0.193  | 0.176    | 0.173  | 0.173  | 0.161 | 0.161       | 0.189 | 0.176 |
| Laplacian  | GD        | 0.197  | 0.176    | 0.172  | 0.173  | 0.161 | 0.161       | 0.190 | 0.175 |
| Laplacian  | ULA       | 0.194  | 0.176    | 0.172  | 0.172  | 0.161 | 0.161       | 0.190 | 0.174 |

Table 3: Semi-synthetic experiment: average-case test risk for the best stopping iteration, for L = 3, W = 50.

Algorithm 1 The algorithm for evaluating (6) with MALA

Require: A training set $\mathbf{X}_{\mathrm{train}}$ and a test set $\mathbf{X}_{\mathrm{test}}$; a BNN $f_{\mathrm{BNN}}(\cdot;\theta)$ and a prior $p_\theta$; a likelihood $p(\cdot\mid\cdot)$.
Ensure: An approximation of (6).

1: for $i = 1, \ldots, 8$ do
2: &nbsp;&nbsp;&nbsp;&nbsp; $\theta^{(i)}_* \sim p_\theta$ &nbsp;&nbsp; ▷ the ground truth
3: &nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{Y}^{(i)}_{\mathrm{train}} \sim p(\cdot \mid f_{\mathrm{BNN}}(\mathbf{X}_{\mathrm{train}}; \theta^{(i)}_*))$
4: &nbsp;&nbsp;&nbsp;&nbsp; $\mathbf{Y}^{(i)}_{\mathrm{test}} \sim p(\cdot \mid f_{\mathrm{BNN}}(\mathbf{X}_{\mathrm{test}}; \theta^{(i)}_*))$
5: &nbsp;&nbsp;&nbsp;&nbsp; for $j = 1, \ldots, 50$ do
6: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\theta^{(j)}_{\mathrm{init}} \sim p_\theta$ &nbsp;&nbsp; ▷ the initial state
7: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\theta^{(j)}_{\mathrm{MALA}} \leftarrow \mathrm{MALA}(\theta^{(j)}_{\mathrm{init}}, \mathbf{X}_{\mathrm{train}}, \mathbf{Y}^{(i)}_{\mathrm{train}})$ &nbsp;&nbsp; ▷ the posterior sample
8: &nbsp;&nbsp;&nbsp;&nbsp; end for
9: &nbsp;&nbsp;&nbsp;&nbsp; if the likelihood is Gaussian then
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\ell(\hat{y}, y) := (\hat{y} - y)^2$ &nbsp;&nbsp; ▷ the loss function derived from the likelihood
11: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\hat{f}^{(i)}(x) := \frac{1}{50}\sum_{k=1}^{50} f_{\mathrm{BNN}}(x; \theta^{(k)}_{\mathrm{MALA}})$ &nbsp;&nbsp; ▷ the predictive function
12: &nbsp;&nbsp;&nbsp;&nbsp; else if the likelihood is Laplacian then
13: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\ell(\hat{y}, y) := |\hat{y} - y|$
14: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\hat{f}^{(i)}(x) :=$ median of $\{y^{(k)} : y^{(k)} \sim p(\cdot \mid f_{\mathrm{BNN}}(x; \theta^{(k)}_{\mathrm{MALA}})),\ k = 1, \ldots, 50\}$
15: &nbsp;&nbsp;&nbsp;&nbsp; end if
16: &nbsp;&nbsp;&nbsp;&nbsp; $L^{(i)} \leftarrow \frac{1}{|\mathbf{X}_{\mathrm{test}}|}\sum_{(x,y)\in(\mathbf{X}_{\mathrm{test}},\, \mathbf{Y}^{(i)}_{\mathrm{test}})} \ell(\hat{f}^{(i)}(x), y)$
17: end for
18: $L \leftarrow \frac{1}{8}\sum_{i=1}^8 L^{(i)}$ &nbsp;&nbsp; ▷ the approximated average-case risk (6)

(A Python sketch of this loop is given at the end of this appendix.)

Figure 4: Additional visualizations in the setting of Fig. 1.

Figure 5: Semi-synthetic experiment: estimated loss (6) under different likelihoods, for L = 2, W = 50. (a) Gaussian likelihood / mean square error; (b) Laplace likelihood / mean absolute error.

Figure 6: Semi-synthetic experiment: estimated loss (6) under different likelihoods, for L = 3, W = 50.
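As referenced after Algorithm 1, here is a minimal Python sketch of the same evaluation loop. This is our paraphrase, not the authors' code: `f_bnn`, `sample_prior`, and `mala` are assumed, externally supplied callables standing in for the BNN forward pass, prior sampling, and a MALA sampler.

```python
import numpy as np

def average_case_risk(f_bnn, sample_prior, mala, X_train, X_test,
                      likelihood="gaussian", n_truths=8, n_chains=50, seed=0):
    """Monte-Carlo estimate of the average-case risk (6), following Algorithm 1.

    Assumed callables (illustrative, not from the paper):
      f_bnn(X, theta)          -> network outputs on inputs X
      sample_prior(rng)        -> one draw theta ~ p_theta
      mala(theta0, X, Y, rng)  -> one approximate posterior sample of theta
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal if likelihood == "gaussian" else rng.laplace
    risks = []
    for _ in range(n_truths):
        theta_star = sample_prior(rng)                       # line 2: ground truth
        Y_train = noise(f_bnn(X_train, theta_star))          # lines 3-4: synthetic labels
        Y_test = noise(f_bnn(X_test, theta_star))
        thetas = [mala(sample_prior(rng), X_train, Y_train, rng)
                  for _ in range(n_chains)]                  # lines 5-8: posterior samples
        if likelihood == "gaussian":                         # lines 9-11: posterior mean
            f_hat = np.mean([f_bnn(X_test, th) for th in thetas], axis=0)
            risks.append(np.mean((f_hat - Y_test) ** 2))     # square loss
        else:                                                # lines 12-14: predictive median
            draws = np.stack([noise(f_bnn(X_test, th)) for th in thetas])
            f_hat = np.median(draws, axis=0)
            risks.append(np.mean(np.abs(f_hat - Y_test)))    # absolute loss
    return float(np.mean(risks))                             # line 18
```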
Review 1:
Summary: The reviewed paper is concerned with inferential problems where (a) the model is over-parametrised (e.g. Bayesian neural networks); and (b) Langevin dynamics are used to sample from the corresponding posterior. It discusses in particular the situation where the Langevin algorithm seems to mix poorly in the original parameter space, yet it may mix sufficiently well in a lower-dimensional space, which may be sufficient for approximating the posterior predictive distribution. The main point of the paper seems to be to derive mathematical results to explain this phenomenon.
Strengths and Weaknesses: The paper is somewhat interesting, but it has the following limitations. First, the main scientific message (it's ok if your Langevin sampler does not visit the whole support of the parameter space, as it may have already visited a portion of the support which is sufficiently representative, as far as the predictive distribution is concerned) has been stated before in various recent papers (cited on the first page). Second, the mathematical results derived in the paper to support this message actually say much less than the authors want them to say:
* A minor point, but a function space is usually infinite-dimensional, and then it becomes non-trivial to define a probability distribution with respect to such a space, not to mention defining Langevin dynamics. But, in this paper, the "function space" is actually of dimension $k\leq d$, the dimension of the parameter space, see bottom of p. 2. It would be more sensible to say that we are considering over-parametrised models rather than "function spaces". (Incidentally, there are quite a few papers that already discuss MCMC and related algorithms for over-parametrised models, or more generally how parametrisation may affect MCMC, but they are not cited; see the section below).
* The mathematical results say essentially: if I generate a Langevin diffusion associated to the posterior of $\theta$ (the parameter in the original parametrisation), then, in certain special cases, the trajectory of $f$ (the parameter in the transformed space) is also a certain Langevin diffusion. Fine, but this is almost orthogonal to the point of interest: it's not because a trajectory follows Langevin dynamics that it necessarily mixes well, and vice-versa.
* If I know already that the model is over-parametrised, then what would make more sense practically would be to use some pre-conditioner for the WLD diffusion. Say my model is such that the parameter is $\theta=(\theta_1, \theta_2)$ and the likelihood depends only on the sum $\theta_1+\theta_2$. Then the corresponding posterior will be very elongated along the line $y=-x$, but, if I rotate the axes (i.e. consider parameters $\eta_1 = \theta_1+\theta_2$ and $\eta_2 = \theta_1 - \theta_2$), then I may get much better mixing, both for $\theta$ and for the parameter of interest, which is $f=\eta_1$ here.
* The first part of the results concerns the case where the transformation (from $\theta$ to $f$) is actually linear. Sorry, but this case is completely trivial.
* The second part of the results concerns group actions, and is more interesting, see in particular the torus in Example 2.2. Still, it looks like (a) the actual results are just pulled from other references (Villani, 2009); and (b) they may not be so relevant in practice, as Bayesian neural networks and similar over-parametrised models do not seem to have this particular group structure in most cases.
Requested Changes: 
Relevant references
===============
It might make sense to cite some of the papers below, and others cited therein. The following papers discuss the interplay between (over-)parametrisation and MCMC mixing in various ways:
* Papaspiliopoulos, Roberts, and Sköld (2007). A general framework for the parametrization of hierarchical models. Statist. Sci. 22, no. 1, 59–73.
* Yu and Meng (2011). To center or not to center: that is not the question—an ancillarity-sufficiency interweaving strategy (ASIS) for boosting MCMC efficiency. J. Comput. Graph. Statist. 20, no. 3, 531–570.
Mixture models are a class of model where group actions and invariance play an important role:
* Frühwirth-Schnatter (2001). Markov chain Monte Carlo estimation of classical and dynamic switching and mixture models. J. Amer. Statist. Assoc. 96, no. 453, 194–209.
* Celeux, Hurn, and Robert (2000). Computational and inferential difficulties with mixture posterior distributions. J. Amer. Statist. Assoc. 95, no. 451, 957–970.
Broader Impact Concerns: No broader concern.
==================================================
Review 2:
Summary: This work illustrates an equivalence between weight-space and function-space Langevin dynamics applied to Bayesian models. The authors explain the general intuition that we can expect weight-space LD to induce LD in function space. They theoretically derive this equivalence in a couple of concrete cases - the linear case and a simplified BNN model - and explain how the intuition from these cases can be expected to generalize to more complex settings. The authors then provide a pair of experiments, one on a 1D synthetic dataset which aims to validate the prediction made by this equivalence that the rate of convergence to the ground truth should not depend on the degree of overparameterization, and the second on a semi-synthetic dataset which aims to demonstrate that weight-space LD approximates the function-space posterior well.
Overall, I believe that the theoretical contributions of the work are interesting and novel as far as I am aware. I agree with the sentiment implied by the authors that the primary value of the work is not in its theoretical complexity, but rather in the high-level intuition that it provides for understanding and interpreting Langevin dynamics methods, and in the theoretical substantiation of that intuition. I find the experimental claims of the paper less compelling - the experiments provide some support to the authors' claims that weight-space LD correctly approaches the functional posterior and doesn't depend on the degree of overparameterization, but I don't think this is sufficient to convincingly demonstrate the broader equivalence between WLD and FLD in practice.
Strengths and Weaknesses: 
### Strengths
- The authors provide lots of intuition for how to interpret their theoretical results, which is much appreciated given that this is in my view the main contribution of the paper. The authors also make good reference to the previous literature and explain well how their results develop previous work.
- The work is very clear throughout and easy to follow. The authors' choice of illustrative examples in Section 2 is well made and significantly aids the communication of their overarching thesis. The detailed derivation of Example 2.3 is also enlightening.
### Weaknesses
- The gap between the assumptions made in Proposition 2.1 and the conditions that we expect to hold for real neural networks is quite large.
For example, most neural networks will have at least some continuous symmetries. Although the authors mention continuous symmetries in Remark 2.1, it is still quite unclear to me how large the gap between reality and the assumptions made in this paper is. The authors might consider further indicating either why they think their assumptions are nevertheless reasonable, or how large they consider the gap to be.
- I'm not convinced that the experiments given in Section 3 convincingly demonstrate the equivalence between WLD and FLD that the authors claim they do and which is explained from a theoretical perspective in Section 2. I think the authors equivocate in different places between claiming that (a) weight-space LD approaches the functional posterior at a rate that's independent of overparameterization and (b) WLD is equivalent to FLD. My interpretation of the results is that Section 3.1 shows that (in this synthetic case) WLD seems to converge in function space at a rate that's independent of the degree of overparameterisation, and that Section 3.2 shows that the FLD-approximated posterior is close to the true posterior.
- The experiment performed in Section 3.1 is extremely small (with only 3 datapoints in one version, as I understand it). I'm not sure this is big enough for us to learn anything meaningful, especially with regards to deep neural networks as they are used in practice.
- The conclusions the authors draw from the experiment in Section 3.2 seem to me to go via an argument that gradient descent performs well on this task and therefore must closely approximate the exact posterior mean in function space, so if MALA and GD are getting similar performance then this means that the MALA-approximated posterior must also be close to the function-space posterior. This argument seems a bit tenuous to me, and relies on unsubstantiated assumptions about the behavior of GD.
Requested Changes: The principal change I would like to see is a clarification of what the authors believe their experiments in Section 3 demonstrate. In particular, if they plan to claim it, then it would be beneficial to include an experiment that showed the dynamics of WLD and FLD (rather than just their endpoints) were equivalent in practice.
The rest of my suggested changes are minor, but would strengthen the work in my view:
- In Example 2.1(i), the authors say that the derivation is given in Appendix 2.1. However, this appendix only seems to include the proof of Proposition 2.1. Is this proof of Proposition 2.1 supposed to also indicate what happens in Example 2.1? If so, I'm not sure I understand the connection. If not, then it might be worth indicating where the results in Example 2.1(i) come from.
- I would consider providing more details of Example 2.1(ii), perhaps in an appendix, since the current explanation is quite brief, and this is the only place where continuous symmetries of the parameter space are explicitly dealt with. For instance, it might be worth making explicit that (as I understand it) the fact that the evolution factorizes into two subspaces is dependent on the Gaussian choice of prior.
- The choice of notation $T_p(H \cdot p)^\perp$ is a little confusing to me. Firstly, $H \cdot p$ is a set, and I'm not sure what is meant by the tangent plane at a set (as opposed to a particular point). It might also be worth being explicit about the definition of the symbols $^\perp$ and $\mathrm{d} \mathcal{A}|_p u$.
- I'm not sure what is meant by "let $\pi_\theta$ have constant density'' in Example 2.2, since it's defined on $\mathbb{R}^d$ with respect to the Lebesgue measure - where the constant measure isn't a probability measure.
- In Example 2.3, the section "The evolution of $\tilde p_t$ ... of equivalence classes." is quite vague. It's not clear what the phrase "closely related to'' refers to, or exactly what fast vs. slow convergence entails in this context. It would be nice to make more precise what is meant here.
- The authors could consider clarifying the argument in Section 3.2 for why this experiment provides evidence about the equivalence of WLD and FLD. (As mentioned above, this was a point of confusion for me.)
- In Appendix A.1 there is a clash of notation where $g$ is used to denote both the metric tensor and a test function.
- It might be worth defining the notation $\mathrm{blkdiag}(P_1, \dots, P_{m_x})$.
Broader Impact Concerns: I have no broader impact concerns.
==================================================
Review 3:
Summary: The paper shows that weight- and function-space inference of BNNs are equivalent in some sense.
Strengths and Weaknesses: 
S: The research topic is interesting and timely.
S: The results have theoretical significance.
W: The paper is sloppily written.
W: It's unclear how significant the results are.
W: The experiments are not convincing or particularly clear.
Requested Changes: This paper shows some theoretical results between weight- and function-space Langevin dynamics. The main result is a statement that they are "equivalent". This is a bold statement, and it does not come with many disclaimers. I feel that the paper's conclusions are too grand and not precise enough, and not supported by enough evidence. The paper is a bit vague about what equivalency means, and the exact contributions of this paper are not clear. I feel the paper is a bit roundabout: some results are shown, but there is no clear statement of what the theoretical contributions are. In the end I feel that this paper mostly states that parameter posteriors can be trivially turned into function posteriors, which feels like quite an obvious result. There are also some remarks about overparameterisation, which are interesting but felt incomplete.
I wonder how significant the analysis of equivalences between WLD and FLD is. Surely the main interest is in the sample-efficiency, and in convergence properties. WLD should be less sample-efficient due to the symmetries, permutations and invariances in the weight space. I wonder how this paper's claims then relate to these two aspects.
The experiments are small scale, and it was not clear what claims they are studying or demonstrating. The experiments show that MALA can sample small networks. Ok, but large-scale LD results are commonplace in the literature, so what is the added insight from this paper? The authors need to clarify what the precise claims of the paper are, and how the empirical analysis studies or verifies these claims.
The paper is also mathematically quite dense, but frustratingly sloppy in notation, and author clarifications are needed.
- I don't get the A notation. It maps the parameter space to F, which is a subset of R^n. What does this mean? Do we assume that outputs $y$ are real? Do we assume the function space contracts to training data only? Aren't functions infinite?
- I also don't understand what A_# \pi means. "A" should map parameters to functions, but now it maps densities into densities.
How can you apply an R^d → R^n operator to an R^d → R^1 function? It seems some notation overloading is happening; please define this.
- How does the push-forward handle change of volume? It would be good to give a precise definition of the push-forward, since $A_\# \pi_\theta$ is right now undefined.
- I'm confused by the size of F: at first it's defined as a subset of R^n, but later in FLD it's defined as less than d. So is F then up to R^d?
- What is "Riemannian manifold structure"? Please define. What is $k$? Why do we choose low-dimensional stuff here? This seems abrupt and even harmful: surely we want to cover the function space as-is, and not some contraction of it. The FLD defines tilde-f_t as one coordinate of the R^n-sized f, yet this "one coordinate" has size k.
- What is the size of G? What are the assumptions about G? What is g? What does the strange $g^{ij}_{ij}$ notation mean? What is a "coordinate matrix"? What is a coordinate (is it one datapoint..)? What is our metric? What is i? What is j? What is f(f)? What is partial_j? What is the "corresponding Riemann measure"?
- The math overall needs more precision: please define all symbols and terms precisely, and also give intuitive conceptual explanations and motivations. Right now parts of the story and technicalities are missing.
- It is not "intuitive" to me that WLD equals FLD. The paper cites Girolami 2011, but this does not contain any discussion about function-space sampling or pushforwards. If the WLD/FLD connection is known, can you add a citation; and if not, can you derive it yourself?
- What is $\cal{X}$? What is $|\cal{X}|$? How can you use $\cal{X}$ inside the definition of $\cal{X}$; isn't this circular? To me a natural definition for the input space would be $R^D$ or some subset of it. Here instead we define it as a finite set, which feels odd. Does this then refer to $n$ datapoints instead? Is this based on some kind of Dirac measure? Are you talking about the finite dataset or the input domain here? I'm pretty lost since I can't understand what your math means.
- "above vector representation".. err, where? I don't see any vector representation. What does "coordinate" of F mean? I hope F still means a subset of R^n, but at this point I'm unsure since it does not seem to align with the rest of the math. It might be a good idea to remind the reader of the definition of $\cal{F}$.
- What is the "coordinate matrix" of A? What is its size?
- What does it mean that "in this coordinate the metric has a coordinate AA"? I assume that coordinate refers to a single axis or dimension, but suddenly coordinate also means a matrix. I'm lost and don't really understand the high-level story anymore.
- Please define the symbols and their domains and their sizes in the Ex. 2.1 equations.
- Ex. 2.1 (ii) is quite technical, and there is no proof or derivation or citation for this result. Can you add it? Where does this result come from? "ran" and "ker" are undefined.
- Can you define $(\psi_1 \psi_2)$ and $\psi \cdot p$? Both seem unintuitive, and are undefined.
- $A|_p u$ is undefined. $T_{Ap} F$ is undefined. $T_p(H \cdot p)^{I}$ is undefined. Please also give conceptual explanations of every symbol in these (e.g. what do p, u, T, .. represent?).
- What is a discrete group? Please define. This sounds odd: surely parameters have continuous invariances?
- Prop 2.1 is strangely written. The first sentence assumes some terms with no connection to posteriors or parameters.
Then suddenly the second sentence talks about the prior and WLD and FLD, even though there is no connection between them and the A and H in the context of Prop 2.1. Perhaps a connecting phrase is missing. I don't really understand what this proposition tries to say.
- What does "equivalence" of WLD and FLD mean? This is vague wording.
- I see no relevance of Example 2.2 due to its artificiality. No ML function looks like this (mapping parameters to functions by taking decimals). What's then the point?
- The Example 2.2 italics continue for a long time. Is all of this intended to be part of the example? Does the v() equation have a plus? It looks like it should be equals.
- I don't agree with Example 2.3: one can produce the same function values without permutations. For instance, if we have a dataset of one datapoint, the function space (which is just the real line) can be surjective. This of course depends on how the function space is defined (this is a bit loose). I think the example is assuming that the function space is infinite, while the earlier definition assumed it to be $R^n$.
- It's unclear what $\theta_t$ refers to in Ex. 2.3. I assume it's the weight LD, but I am not sure. It's unclear what $\tilde{p}(\theta)$ means: how can you evaluate parameters in a function density? The dimensions won't match. I'm now confused about what tilde-p is trying to be.
- What is the "function space measure induced by WLD"? How do you compute or estimate it? What does "credible interval" mean? Please specify. What does "marginal" distribution mean here? What do you marginalise? What is the "marginal distribution q induced by WLD"? Please define. How do you obtain the ground truth?
- I don't understand what eq 1 means. Does this have something to do with KSD? In general I couldn't follow the KSD stuff, or what really happens. I get that you use a sampler to get the parameter posterior, but everything after that is unclear.
- Earlier H was a group; now it is a width. Please avoid this.
- The paper talks about LD, but the experiments use MALA. Is this ok? How can you say that the results from MALA transfer to LD? Why not use LD in the experiments if the paper is about LD? MALA is also undefined.
- It's a bit unclear what you are demonstrating in Figure 1. So it seems that we get better fits with more samples. Ok sure, but what's the point? What research question or claim are you studying here? I don't see a connection to the contributions of this paper.
- Fig 2: I don't know what "LD predictive distribution" or "approximate function space posterior" means. Can you define them?
- Fig 2: It seems that the MALA converges wrt something. What is the point of this figure, and what research question are you demonstrating or studying here? I can't follow.
- In 3.2 I can't follow what you do. It seems that you sample many $\theta_0$, which means that you sample a lot of ground-truth functions. Yet do you only do MALA once? I can't follow what's going on. What does 8 $\theta_0$'s mean? Are these the ground-truth functions, or draws from the prior? I think the text is saying that we sample a bunch of draws from the prior, but surely this does not fit the data in any way. What is then the point?
- It would be very helpful to have algorithm boxes describing the experiments.
- I also wonder about the mean arguments: surely in a multimodal posterior you should not average, since you can end up between peaks.
- In general, I'm having trouble seeing the research question studied in 3.2.
Suddenly we have ensembles entering the discussion here, which complicates matters and makes it difficult to see what hypothesis you are studying. The section concludes by saying that MALA is a good method for BNNs. Ok, but isn't this already known? For instance, in Izmailov et al. (2021) they used LD to sample large neural networks and showed good performance. What is the novelty of your demonstration?
Broader Impact Concerns: No issues
==================================================
Metareview: 
Recommendation: Reject
Comment: The reviewers were divided in their recommendation. More important than their recommendation, however, is that they largely agree that the paper is unclear about its claims and aims. First, the paper needed to be more precise about the nature of the equivalence between weight- and function-space Langevin dynamics that the paper is studying, how it is or is not related to efficiency, and what the equivalence allows us to do. Second, the paper needed to better explain why they can work with a function space that has dimensionality k smaller than the number of parameters. The examples in Section 2.1 may be important building blocks for the message of the paper, but currently they are actually more confusing than helpful because the function is assumed to only take on finitely many values, which is far removed from actual usage and the framing of the problem in the introduction. Third, the experimental evidence needed to better and more clearly support the main message of the paper.
==================================================
# EHI: End-to-End Learning of Hierarchical Index for Efficient Dense Retrieval

Anonymous authors

Paper under double-blind review

## Abstract

Dense embedding-based retrieval is widely used for semantic search and ranking. However, conventional two-stage approaches, involving contrastive embedding learning followed by approximate nearest neighbor search (ANNS), can suffer from misalignment between these stages. This mismatch degrades retrieval performance. We propose End-to-end Hierarchical Indexing (EHI), a novel method that directly addresses this issue by jointly optimizing embedding generation and the ANNS structure. EHI leverages a dual encoder for embedding queries and documents while simultaneously learning an inverted file index (IVF)-style tree structure. To facilitate the effective learning of this discrete structure, EHI introduces dense path embeddings that encode the path traversed by queries and documents within the tree. Extensive evaluations on standard benchmarks, including MS MARCO (Dev set) and TREC DL19, demonstrate EHI's superiority over traditional ANNS indices. Under the same computational constraints, EHI outperforms existing state-of-the-art methods by **+1.45%** in MRR@10 on MS MARCO (Dev) and **+8.2%** in nDCG@10 on TREC DL19, highlighting the benefits of our end-to-end approach.

## 1 Introduction

Semantic search (Johnson et al., 2019) aims to retrieve relevant or *semantically similar* documents/items for a given query. In the past few years, semantic search has been applied to numerous real-world applications like web search, product search, and news search (Nayak, 2019; Dahiya et al., 2021). The problem in its simplest form can be abstracted as: for a given query $q$, retrieve the relevant document(s) $d(q)$ from a static set of documents $\{d_1, d_2, \ldots, d_N\}$ s.t. $d(q) = \arg\max_{1\le j\le N} \mathrm{SIM}(q, d_j)$. Here $\mathrm{SIM}$ is a similarity function that has high fidelity to the training data $\mathcal{B} = \{(q_i, d_j, y_{ij})\}$. The tuple $(q_i, d_j, y_{ij})$ indicates whether document $d_j$ is relevant ($y_{ij} = 1$) or irrelevant ($y_{ij} = -1$) for a given query $q_i \in \mathcal{Q}$.

Dense embedding-based retrieval (Johnson et al., 2019; Jayaram Subramanya et al., 2019; Guo et al., 2020) is the state-of-the-art (SOTA) approach for semantic search and typically follows a two-stage process. In the first stage, it embeds the documents and the query using a deep network like BERT (Devlin et al., 2018); additional details about the encoder used in EHI are described in Section 3.2, and the training methodology is expanded in Section 3.5. That is, it defines the similarity $\mathrm{SIM}(q, d) := \langle E_\theta(q), E_\theta(d)\rangle$ as the inner product between the embeddings $E_\theta(q)$ and $E_\theta(d)$ of the query $q$ and the document $d$, respectively. $E_\theta(\cdot)$ is a dense embedding function learned using contrastive losses (Ni et al., 2021; Menon et al., 2022). In the second stage, approximate nearest neighbor search (ANNS) retrieves relevant documents for a given query. That is, all the documents are indexed offline and are then retrieved online for the input query. ANNS in itself has been extensively studied for decades, with techniques like ScaNN (Guo et al., 2020), IVF (Sivic & Zisserman, 2003), HNSW (Malkov & Yashunin, 2020), DiskANN (Jayaram Subramanya et al., 2019), and many others being used heavily in practice.
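To make the two-stage pipeline above concrete, the following minimal sketch (ours; `encoder` is a placeholder for any trained dual encoder $E_\theta$) implements the exact maximum inner-product search $d(q) = \arg\max_j \langle E_\theta(q), E_\theta(d_j)\rangle$ that ANNS structures approximate:

```python
import numpy as np

def embed(texts, encoder):
    """E_theta: map each text to a dense vector; L2-normalize for cosine-style scoring."""
    vecs = np.stack([encoder(t) for t in texts])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def exact_retrieve(query, documents, doc_emb, encoder, k=10):
    """d(q) = argmax_j <E(q), E(d_j)>, scored by brute force over all N documents.

    doc_emb = embed(documents, encoder) is computed once offline; ANNS methods
    (IVF, HNSW, ScaNN, DiskANN) approximate this by scoring only a subset."""
    q = embed([query], encoder)[0]
    scores = doc_emb @ q                       # inner-product similarity
    top = np.argsort(-scores)[:k]
    return [(documents[j], float(scores[j])) for j in top]
```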
## 1.1 Motivation

The prevailing two-stage approach to dense retrieval, involving separate training of the embedding encoder and the approximate nearest neighbor search (ANNS) structure, suffers from inherent limitations:

Embedding-ANNS Misalignment: The lack of joint optimization leads to a potential mismatch between the learned embedding space and the requirements of the ANNS structure. This misalignment can cause suboptimal clustering of data points, hindering the ANNS algorithm's ability to efficiently identify semantically similar examples. For example, the documents might be clustered in six clusters to optimize the encoder loss. However, due to computational constraints, ANNS might allow only five branches/clusters, thus splitting or merging clusters unnaturally and inaccurately. This is further depicted in Figure 7(a) and Appendix B.

Disregard for Query Distribution: Conventional ANNS methods focus on generic retrieval efficiency, potentially neglecting the specific characteristics of the query distribution. This can result in an indexing structure that is ill-suited to the patterns present in the actual queries encountered during deployment (Jaiswal et al., 2022). We further illustrate this phenomenon with the example of the MS MARCO dataset, as shown in Figure 7(b) and Appendix B.

## 1.2 Overview and Evaluation

Motivated by the aforementioned issues, we propose EHI (End-to-end learning of Hierarchical Index), which jointly trains both the encoder and the search data structure; see Figure 2. To the best of our knowledge, EHI is the *first* end-to-end learning method for dense retrieval. Similar to two-stage approaches, EHI employs a tree-structured inverted file-like index, but unlike DE+ANNS (the conventional two-stage approach where a standard Dual Encoder (DE) is trained independently, followed by an Approximate Nearest Neighbor Search (ANNS) using the learned embeddings), EHI integrates the training process to simultaneously learn the discrete document assignments within its index and refine the encoder parameters, without any warm-starting. We can learn balanced and roughly uniform clusters with just (query, document) pair information, which is generally available in the supervised training paradigm of most encoders, and we do not require any proxy information from KNN or other clustering algorithms for learning the IVF-style tree structure.

EHI's core innovation lies in the introduction of dense path embeddings. These embeddings represent the traversal paths of queries and documents within the tree index, enabling the learning of discrete document assignments. By optimizing the index such that semantically similar (query, document) pairs share similar path embeddings, EHI facilitates efficient retrieval by clustering relevant pairs within the same leaf nodes.

We conduct an extensive empirical evaluation of our method against SOTA techniques on standard benchmarks. Our experiments on the popular MS MARCO benchmark (Bajaj et al., 2016) demonstrate that EHI yields improvements of 0.6% in terms of nDCG@10 over dense retrieval with ScaNN baselines when only 10% of documents are searched. We attribute these improved embeddings to the fact that EHI provides an additional signal to the encoder from the indexer, as well as integrated hard negative mining, since it can retrieve irrelevant (negative) documents from a query's indexed leaf nodes. Here, the indexer parameters are always fresh, unlike in techniques akin to ANCE (Xiong et al., 2020).
Similarly, EHI provides up to **8.2%** higher nDCG@10 than state-of-the-art (SOTA) baselines on the MS MARCO TREC DL19 (Craswell et al., 2020) benchmark for the same compute budget. EHI also achieves SOTA exact-search performance on both the MRR@10 and nDCG@10 metrics with up to an 80% reduction in latency, indicating the effectiveness of the joint learning objective. Similarly, we outperform SOTA architectures such as NCI on NQ320k by **0.6%** and **1.8%** on the Recall@10 and Recall@100 metrics, respectively (see Section 4.2).

## 1.3 Contributions

Our work introduces several key advancements to the field of dense retrieval:

- **Novel End-to-End Paradigm.** We propose End-to-end Hierarchical Indexing (EHI), an end-to-end learning framework that jointly optimizes the embedding encoder and the search index structure. EHI addresses the inherent misalignment in conventional two-stage approaches, enabling more efficient and accurate retrieval (Section 3, Figure 1). To the best of our knowledge, we are the first to propose this without any warm starting or soft labels.
- **Rigorous Empirical Validation.** We conduct comprehensive evaluations on the industry-standard MS MARCO benchmark, demonstrating EHI's superiority over state-of-the-art ANNS techniques such as ScaNN and Faiss. Furthermore, we also showcase that EHI is competitive with other SOTA baselines such as ColBERT, SGPT, cpt-text, ANCE, and DyNNIBAL. Our experiments underscore EHI's focus on enhancing retrieval accuracy within a fixed computational budget (Section 4.2, Appendix D).
- **Framework Flexibility.** EHI is designed to be agnostic to specific encoder architectures, similarity metrics, and hard negative mining strategies. This flexibility underscores its potential for broad applicability within the dense retrieval domain.
While most modern-day systems use these learned representations as-is for large-scale ANNS, there is no guarantee that they align with the distance metrics or topology of the search data structures. Recent works have tried to address these concerns by warm-starting the learning with a clustering structure (Gupta et al., 2022) but fall short of learning jointly optimized representations alongside the search structure. We explicitly state the challenges of directly adapting methods like ELIAS, originally designed for classification, to the dense retrieval task. However, we include a comparison against ELIAS on a classification benchmark with label features in Appendix E.7. Other works such as RepCONC (Zhan et al., 2022) and SPLADE (Formal et al., 2022) also address the efficiency aspect of retrieval, focusing on quantizing the representations using regularizers that explicitly reduce FLOPS.

Approximate nearest neighbor search (ANNS). The goal of ANNS is to retrieve *almost* nearest neighbors without paying the exorbitant cost of retrieving true neighbors (Clarkson, 1994; Indyk & Motwani, 1998; Weber et al., 1998). The "approximate" nature comes from pruning-based search data structures (Sivic & Zisserman, 2003; Malkov & Yashunin, 2020; Beygelzimer et al., 2006) as well as from quantization-based cheaper distance computation (Jegou et al., 2010; Ge et al., 2013). This paper focuses on ANNS data structures and notes that compression is often complementary. Search data structures reduce the number of data points visited during the search. This is often achieved through hashing (Datar et al., 2004; Salakhutdinov & Hinton, 2009; Kusupati et al., 2021), trees (Friedman et al., 1977; Sivic & Zisserman, 2003; Bernhardsson, 2018; Guo et al., 2020), and graphs (Malkov & Yashunin, 2020; Jayaram Subramanya et al., 2019). ANNS data structures also carefully handle the systems considerations involved in a deployment, such as load balancing, disk I/O, and main-memory overhead, and tree-based data structures often prove highly performant owing to their simplicity and flexibility (Guo et al., 2020). For a more comprehensive review of ANNS structures, please refer to Cai (2021); Li et al. (2020); Wang et al. (2021). Works such as CCSA (Lassance et al., 2021) propose alternate ANN structures for efficient retrieval via constrained clustering. Other works such as TDM (Zhu et al., 2018) propose learning a tree-like ANNS, but the ANNS is learnt in a disjoint fashion, where the k-means algorithm is used to learn the clustering without any supervision. Works such as JTM (Zhu et al., 2019) and Deep Retrieval (Gao et al., 2020b) also attempt to learn the indexer with a neural network, but they require pre-computation of the optimal paths taken by the query and document for learning. Prior works including PQN (Yu et al., 2018) and GPQ (Jang & Cho, 2020) have explored end-to-end ANNS techniques in the visual domain; these primarily rely on hashing-based approaches, not a hierarchical tree-like structure. EHI differentiates itself through its novel tree structure and dense path embeddings, tailored for the unique characteristics of text data and semantic search. Furthermore, EHI handles a significantly larger number of buckets (e.g., 7000) compared to the 48-bit hashing used in approaches like GPQ, demonstrating its suitability for the scale of textual datasets with millions of documents.
Similarly, the scale of the experiments is also significantly larger: PQN and GPQ showcase their efficacy on ∼170K images to be indexed (where the overhead of exact search might not be very large), while we experiment on ∼10M documents, where exact search is a significant overhead during inference. EHI works in an end-to-end fashion without any pre-computation of optimal paths and is, to the best of our knowledge, the *first end-to-end* hierarchical indexer framework.

Generative Retrieval for Semantic Search. Recently, there have been some efforts towards modeling retrieval as a sequence-to-sequence problem. In particular, the Differentiable Search Index (DSI) (Tay et al., 2022) and the more recent Neural Corpus Indexer (NCI) (Wang et al., 2022) proposed encoding the query and then finding relevant documents by running a learned decoder. However, both these techniques, at their core, use a *separately* computed hierarchical k-means-based clustering of document embeddings for semantically assigning the document-id. That is, they also index the documents using an ad-hoc clustering method which might not be aligned with the end objective of improving retrieval accuracy. In contrast, EHI jointly learns both the representation and a k-ary tree-based search data structure end-to-end. This advantage is reflected on the MS MARCO dataset, where EHI is up to 7.12% more accurate (in terms of nDCG@10) compared to DSI. Recently, retrieval has also been used to augment LLMs (Guu et al., 2020; Izacard & Grave, 2020a;b; Izacard et al., 2022). We would like to stress that the goal with LLMs is language modeling, while retrieval's goal is precise document retrieval. However, retrieval techniques like EHI can be applied to improve the retrieval subcomponents in such LLMs.

## 3 End-To-End Hierarchical Indexing (EHI)

Problem definition and notation. Consider a problem with a corpus of $N$ documents $\mathcal{D} = \{d_1, \ldots, d_N\}$, a set of $Q$ training queries $\mathcal{Q} = \{q_1, \ldots, q_Q\}$, and training data $(q_i, d_k, y_{ik})$, where $y_{ik} \in \{-1, 1\}$ is the label for a given training (query, document) tuple and $y_{ik} = 1$ denotes that $d_k$ is relevant to $q_i$. Given these inputs, the goal is to learn a *retriever* that maps a given query to a set of relevant documents while minimizing the computation cost. While wall-clock time is the primary cost metric, comparing different methods against it is challenging due to very different setups (language, architecture, parallelism, etc.). Instead, we rely on recall vs. % searched curves, widely considered a reasonable proxy for wall-clock time modulo other setup/environment changes (Guo et al., 2020).

## 3.1 Overview Of EHI

![4_image_0.png](4_image_0.png)

Figure 2: EHI is an end-to-end hierarchical indexer comprising an encoder and a hierarchical tree as the indexer, where the entire pipeline is learnable and differentiable. Here, variables V98, V123, and V576 are dense representations (embeddings) of the text, and P98, P123, and P576 are path embeddings of the respective samples. To efficiently train EHI without any warm-starting, we use a combination of objectives: Lsiamese, Lindexing, and Lintra-leaf (see Section 3 for details).

At a high level, EHI has three key components: the Encoder Eθ, the Indexer Iφ, and the Retriever. The parameters θ of the query/document encoder and φ of the indexer are the trainable parameters of EHI. Unlike most existing techniques, which train the encoder and indexer in a two-step disjoint process, we train both the encoder and indexer parameters jointly with an appropriate loss function; see Section 3.5.
Learning the indexer, generally a discontinuous function, is a combinatorial problem that also requires multiple rounds of indexing the entire corpus. However, by modeling the indexer using a hierarchical tree and its "internal representation" as a compressed path embedding, we demonstrate that training and retrieval with the encoder+indexer can be executed efficiently and effectively. In the following sections, we provide details of the encoder and indexer components. In Section 3.4, we detail how the encoder+indexer can be used to retrieve specific documents for a given query, which is used for inference and for hard-negative mining (further elaborated in Appendix E.6) during training. Section 3.5 provides an overview of the training procedure. Finally, Section 3.6 summarizes how documents are ranked after retrieval.

## 3.2 Encoder Eθ: Dense Embedding Of Query/Documents

Our method is agnostic to the architecture used for the dual encoder, but for simplicity we use a standard dual encoder (Ni et al., 2021) to map input queries and documents to a common vector space. That is, the encoder Eθ, parameterized by θ, maps a query ($q \in \mathcal{Q}$) and a document ($d \in \mathcal{D}$) to a common vector space: $\mathcal{E}_\theta(q) \in \mathbb{R}^m$ and $\mathcal{E}_\theta(d) \in \mathbb{R}^m$, where $m$ is the embedding size of the model (768 here). While such an encoder can be multi-modal as well as multi-vector, for simplicity we mainly focus on standard textual data with a single embedding per query/document. We use the standard *BERT architecture* for the encoder Eθ and initialize the parameters θ using a pre-trained Sentence-BERT distilbert model (Reimers & Gurevych, 2019). Our base model has 6 layers, 768 dimensions, and 12 heads, with 66 million parameters. We then fine-tune the final layer of the model for the target downstream dataset.

## 3.3 Indexer Iφ: Indexing Of Query/Document In The Hierarchical Data Structure

EHI's indexer (Iφ) is a tree with height $H$ and branching factor $B$. Each tree node contains a *classifier* that provides a distribution over its children. So, given a query/document, we can find the leaf nodes that the query/document indexes into, as well as the *probabilistic* path taken in the tree. The final leaf nodes reached by the query are essential for retrieval, but we also propose to use the path taken by a query/document in the tree as an *embedding* of the query/document, which can be used in training through the loss function. However, the path a query/document takes is an object in an exponentially large (in height $H$) vector space, owing to the $B^H$ leaf nodes, making it computationally intractable even for small $H$ and $B$. For instance, with $B = 10$ and $H = 5$, a relatively small tree, there would be 100,000 ($10^5$) possible paths. Instead, below, we provide a significantly more compressed *path embedding*, denoted by $T(\cdot; \phi)$ and parameterized by $\phi$, which embeds any given query or document in a relatively low-dimensional ($B \cdot H$) vector space (just 50 dimensions in the above example). For simplicity, we denote the query and document path embeddings as $T_\phi(q) = T(\mathcal{E}_\theta(q); \phi)$ and $T_\phi(d) = T(\mathcal{E}_\theta(d); \phi)$, respectively. We construct the path embedding of a query/document as:

$$T_{\phi}(q)=T(\mathcal{E}_{\theta}(q);\phi)=[\mathbf{p}^{H};\mathbf{p}^{H-1};\ldots;\mathbf{p}^{1}],$$

where $\mathbf{p}^{h} \in [0, 1]^B$ denotes the probability distribution over children nodes for a parent at height $h$. For a given leaf $l$, the path from the root node is defined as $l = [i_l^1, i_l^2, \ldots, i_l^H]$, where $i_l^h \in [1 \ldots B]$ for $h \in [H]$.
The probability at a given height in a path is approximated using a height-specific simple feed-forward neural network parameterized by $\mathbf{W}_{h+1} \in \mathbb{R}^{(B\cdot h+m)\times B}$ and $\mathbf{U}_{h+1} \in \mathbb{R}^{(B\cdot h+m)\times(B\cdot h+m)}$ ($m$ is the embedding size). That is,

$$\mathbf{p}^{h+1}=\texttt{Softmax}\big(\mathbf{W}_{h+1}^{\top}\,\mathcal{F}(\mathcal{K}_{h};\mathbf{U}_{h+1})\big)\cdot\mathbf{p}^{h}[i_{l}^{h}]\tag{1}$$

where $\mathcal{K}_h = [o(i_l^h); o(i_l^{h-1}); \ldots; o(i_l^1); \mathcal{E}_\theta(q)]$, the one-hot vector $o(i)$ is the $i$-th canonical basis vector, and $\mathcal{F}$ is a non-linear transformation given by $\mathcal{F}(x; \mathbf{U}) = x + \texttt{ReLU}(\mathbf{U}^\top x)$. In summary, the path embedding for height 1 represents a probability distribution over the leaves. During training, we compute the path embedding at higher heights only for the most probable path, ensuring that the summation of leaf node logits remains a probability distribution. Also, the indexer and path embedding function $T(\cdot; \phi)$ have the following collection of trainable parameters: $\phi = \{\mathbf{W}_H, \ldots, \mathbf{W}_1, \mathbf{U}_H, \ldots, \mathbf{U}_1\}$, which we learn by optimizing a loss function based on the path embeddings; see Section 3.5.

## 3.4 Retriever: Indexing Items For Retrieving

Indexing and retrieval form the backbone of any search structure. EHI efficiently encodes the index path of the query and documents in a $(B \cdot H)$-dimensional embedding space. During retrieval for a query $q$, EHI explores the tree structure to find the "most relevant" leaves and retrieves the documents associated with those leaves. Retrieval requires the encoder and indexer parameters (θ, φ) along with the leaf-to-document hashmap M. The relevance of a leaf $l$ for a query $q$ is measured by the probability of the query reaching the leaf at height $H$, $P(q, l, H)$. Recall from the previous section that the path to a leaf $l$ is defined as $l = [i_l^1, i_l^2, \ldots, i_l^H]$, where $i_l^h \in [1 \ldots B]$ for $h \in [H]$. The probability of reaching an arbitrary leaf $l \in$ Leaves for a given query $q \in \mathcal{Q}$ can be computed as $P(q, l, H) = \mathbf{p}^H[i_l^H]$ using Equation 1. During inference, we only need to compute the most probable leaves for every query, which we obtain using the standard beam-search procedure summarized below:

1. For all parent nodes $p \in P$ at height $h - 1$, compute the probability of reaching their children: $\hat{S} = \cup_{p \in P} \cup_{c \in \mathrm{child}(p)} P(q, c, h)$.
2. Keep the top β children based on the scores $\hat{S}$ and designate them as the parents for the next height. Repeat steps 1 and 2 until the leaf nodes are reached.

Once we select β leaves, EHI retrieves the documents associated with each leaf, which are stored in the hashmap M. To compute this hashmap, EHI indexes each document $d \in \mathcal{D}$ (similar to a query) with β = 1. Here, β = 1 is a design choice considering memory and space requirements and is kept as a tunable parameter. The hashmap M stores the document-to-leaf associations, i.e., which documents reach which leaves in our tree; it is essentially the result of the indexing step and allows us to strategically focus the search on the most relevant subset of documents. Note that to add new ad-hoc documents after training EHI, one would need to follow the indexing step listed above to store the new documents in the appropriate buckets and update the hashmap M. During retrieval, we find the leaves the query reaches and only consider documents that have reached the same leaves. This is how we sample a few documents to search over instead of searching over the entire document space. Algorithm 2 in the appendix depicts the approach used by our indexer for better understanding.
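To make Equation 1 and the beam-search retrieval concrete, below is a minimal single-query sketch in PyTorch. The parameter shapes, the form of $\mathcal{F}$, and the parent-probability scaling follow the equations above; the class name, random initialization, and unbatched interface are our own illustrative assumptions rather than the actual implementation.

```python
import torch


class EHIIndexerSketch(torch.nn.Module):
    """Illustrative indexer: Eq. 1 forward pass and beam-search retrieval."""

    def __init__(self, emb_dim: int, branch: int, height: int):
        super().__init__()
        self.B, self.H = branch, height
        dims = [branch * h + emb_dim for h in range(height)]  # B*h + m
        # W_{h+1}: (B*h + m) x B and U_{h+1}: (B*h + m) x (B*h + m)
        self.W = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(d, branch)) for d in dims])
        self.U = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(d, d)) for d in dims])

    def child_probs(self, e: torch.Tensor, path: tuple) -> torch.Tensor:
        """Conditional B-way child distribution at the node reached via `path`."""
        h = len(path)
        # K_h = [o(i^h); ...; o(i^1); E(q)]: one-hot path history + embedding
        hist = [torch.eye(self.B)[i] for i in reversed(path)]
        k = torch.cat(hist + [e]) if hist else e
        f = k + torch.relu(self.U[h].T @ k)            # F(x; U) = x + ReLU(U^T x)
        return torch.softmax(self.W[h].T @ f, dim=-1)  # Softmax(W^T F(K_h; U))

    def path_embedding(self, e: torch.Tensor) -> torch.Tensor:
        """[p^H; ...; p^1] along the most probable path (Section 3.3)."""
        path, parent, ps = (), 1.0, []
        for _ in range(self.H):
            p = parent * self.child_probs(e, path)     # Eq. 1 parent-prob scaling
            i = int(p.argmax())
            path, parent = path + (i,), float(p[i])
            ps.append(p)
        return torch.cat(list(reversed(ps)))           # dimension B * H

    def beam_search(self, e: torch.Tensor, beam: int):
        """Top-`beam` most probable leaves with scores P(q, l, H) (Section 3.4)."""
        frontier = [(1.0, ())]
        for _ in range(self.H):
            cand = [(s * float(pc), path + (c,))
                    for s, path in frontier
                    for c, pc in enumerate(self.child_probs(e, path))]
            frontier = sorted(cand, reverse=True)[:beam]
        return frontier
```

Indexing then amounts to running `beam_search` with β = 1 for every document and recording the leaf-to-document lists in the hashmap M; at query time, the union of documents in the top-β leaves is scored against the query embedding (we use cosine similarity; see Section 3.6).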
## 3.5 Training EHI

Given the definition of all three EHI components (the encoder, indexer, and retriever), we are ready to present the training procedure. As mentioned earlier, the encoder and the indexer parameters (θ; φ) are optimized simultaneously with our proposed loss function, which is designed to have the following properties: (a) relevant documents and queries should be semantically similar, (b) documents and queries should be indexed together iff they are relevant, and (c) documents should be indexed together iff they are similar. Given the encoder and the indexer, we design one loss term for each of the properties mentioned above and combine them to get the final loss function. To this end, we first define the triplet loss as:

$$L(a,b,c)=[\langle a,c\rangle-\langle a,b\rangle+\gamma]_{+}\tag{2}$$

where we penalize the model if the similarity between a query $q$ and an *irrelevant* document $d_-$ (with $y(q, d_-) \neq 1$) is within a margin $\gamma$ of the corresponding similarity between $q$ and a relevant document $d_+$ (with $y(q, d_+) = 1$). We now define the following three loss terms:

1. **Semantic Similarity**: the first term is a standard dual-encoder contrastive loss between a relevant document $d_+$, i.e., $y(q, d_+) = +1$, and an *irrelevant* document with $y(q, d_-) \neq 1$:

$$L_{\mathrm{siamese}}=L(\mathcal{E}_{\theta}(q),\mathcal{E}_{\theta}(d_{+}),\mathcal{E}_{\theta}(d_{-});\theta)\tag{3}$$

2. **Indexing Similarity**: the second term is essentially a similar contrastive loss over the (query, relevant-doc, irrelevant-doc) triplet, but where the query and documents are represented using the path embedding $T_\phi(\cdot)$ given by the indexer Iφ:

$$L_{\mathrm{indexing}}=L(T_{\phi}(q),T_{\phi}(d_{+}),T_{\phi}(d_{-});\theta,\phi)\tag{4}$$

3. **Intra-leaf Similarity**: to spread out irrelevant documents, the third loss applies a penalty over the sampled relevant and irrelevant documents for a query $q$. Note that we apply the loss only if the two documents are semantically dissimilar according to the latest encoder, i.e., $\mathsf{SIM}(a, b, \tau) = \mathbb{1}\big[\frac{\mathcal{E}_\theta(a)^\top\mathcal{E}_\theta(b)}{\|\mathcal{E}_\theta(a)\|\,\|\mathcal{E}_\theta(b)\|} < \tau\big]$ for a pre-specified threshold $\tau = 0.9$:

$$L_{\mathrm{intra\text{-}leaf}}=\mathsf{SIM}(d_{+},d_{-},\tau)\,T_{\phi}(d_{+})^{\top}T_{\phi}(d_{-})\tag{5}$$

The final loss function $\mathcal{L}$ is given as the weighted sum of the above three losses:

$$\mathcal{L}=\lambda_{1}L_{\mathrm{siamese}}+\lambda_{2}L_{\mathrm{indexing}}+\lambda_{3}L_{\mathrm{intra\text{-}leaf}}\tag{6}$$

Here, γ is set to 0.3 for all loss components, and λ1, λ2, λ3 are tunable hyper-parameters. Our trainer (see Algorithm 1) learns θ and φ by optimizing $\mathcal{L}$ using standard techniques; our implementation uses AdamW (Loshchilov & Hutter, 2017). Note that the loss function only uses the in-batch documents' encoder embeddings and path embeddings, i.e., we are not required to index all the documents in the tree structure, thus allowing efficient joint training of both encoder and indexer. To ensure fast convergence, we use hard negatives mined from the indexed leaves of a given query $q$, for which we require documents to be indexed in the tree. But this procedure need only be done once every $r$ steps, where $r$ is a hyper-parameter set to 5 by default across our experiments.
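For concreteness, a minimal sketch of the combined objective on a batch of (query, d+, d−) triplets is given below, assuming `eq`/`ed_*` are encoder embeddings and `tq`/`td_*` the corresponding path embeddings; the inner product stands in for the similarity in Eq. 2, and the uniform λ defaults are placeholders (tuned values appear in Table 3).

```python
import torch


def triplet(a: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor,
            gamma: float = 0.3) -> torch.Tensor:
    """Eq. 2: hinge when sim(a, neg) comes within gamma of sim(a, pos)."""
    return torch.clamp((a * neg).sum(-1) - (a * pos).sum(-1) + gamma, min=0.0)


def ehi_loss(eq, ed_pos, ed_neg, tq, td_pos, td_neg,
             lam=(1.0, 1.0, 1.0), tau: float = 0.9) -> torch.Tensor:
    """Weighted sum of Eqs. 3-5 as in Eq. 6, averaged over the batch."""
    l_siamese = triplet(eq, ed_pos, ed_neg).mean()           # Eq. 3
    l_indexing = triplet(tq, td_pos, td_neg).mean()          # Eq. 4
    # Eq. 5: push apart path embeddings of doc pairs that the current
    # encoder deems dissimilar (cosine similarity below tau).
    gate = (torch.cosine_similarity(ed_pos, ed_neg, dim=-1) < tau).float()
    l_intra = (gate * (td_pos * td_neg).sum(-1)).mean()
    return lam[0] * l_siamese + lam[1] * l_indexing + lam[2] * l_intra
```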
We would like to stress that existing methods like DSI, NCI, or ANCE not only use stale indexing of documents, but also use stale or even fixed indexers; for example, DSI and NCI learn a fixed semantic structure over docids using one-time hierarchical clustering. In contrast, EHI jointly updates the indexer and the encoder in each iteration, and thus can better align the embeddings with the tree/indexer.

## 3.6 Re-Ranking And Evaluation

This section describes the re-ranking step and the test-time evaluation process after retrieval. In Section 3.4, we discussed how each document is indexed, and we now have a learned mapping of size $d \times l$, where $d$ is the corpus size and $l$ is the number of leaves. Given a query at test time, we perform a forward pass similar to the indexing pipeline presented in Section 3.4 and find the top-$b$ leaves ($b$ here is the beam size) that the given query reaches. We collate all the documents that reached these $b$ leaves (a set operation, to avoid any repetition of the same documents across multiple leaves) and rank them based on an appropriate similarity metric such as cosine similarity, dot product, or Manhattan distance. We use the cosine similarity metric for ranking throughout our experiments (see Section 4.2).

## 4 Experiments

We conduct a comprehensive empirical evaluation of EHI on established dense retrieval benchmarks. Our experiments have two primary objectives: (a) **Validate the End-to-end Paradigm:** We seek to demonstrate the superiority of EHI's joint optimization of the embedding encoder and indexer compared to the conventional disjoint training approach that utilizes off-the-shelf ANNS methods (e.g., ScaNN, Faiss-IVF) or alternative generative retrieval or sparse retrieval approaches. This comparison provides strong evidence for the benefits of our proposed paradigm shift. (b) **Justify Design Decisions:** Through systematic experimentation, we aim to elucidate the rationale behind the design choices within EHI. These insights will solidify the theoretical foundations of our method and guide future research in this domain. Detailed training hyperparameters for EHI are provided in Appendix C.

![7_image_0.png](7_image_0.png)

Figure 3: EHI is significantly more accurate than DE + ScaNN or Faiss-IVF, especially when restricted to visiting a small fraction of documents. See Figure 8 in the Appendix for results on Scifact and Fiqa.

## 4.1 Experimental Setup

Datasets. We evaluate EHI on four standard but diverse retrieval datasets of increasing size: SciFact (Wadden et al., 2020), FIQA (Maia et al., 2018), NQ320k (Kwiatkowski et al., 2019), and MS MARCO (Bajaj et al., 2016). Appendix A provides additional details about these datasets. We establish a robust set of baselines to comprehensively evaluate EHI's performance and isolate the impact of our core contributions:

State-of-the-Art (SOTA) Comparisons. We evaluate EHI against leading dense retrieval methods such as ColBERT-v2 (Santhanam et al., 2021), SPLADE (Formal et al., 2022), and many other baselines on the MS MARCO benchmark (see Table 4, Appendix D). These comparisons, while acknowledging architectural differences, demonstrate EHI's strong performance and its potential within the broader research landscape. Importantly, EHI achieves this competitiveness using a significantly smaller encoder (66M parameters), highlighting the effectiveness of our end-to-end index learning approach.
We note that re-ranking and other late-interaction methods employed in ColBERT-v2 and SPLADE currently outperform EHI in exact search scenarios on the MS MARCO benchmark. This presents a promising future research direction to explore how re-ranking losses and other techniques could be integrated into EHI. Note that methods like ColBERT-v2, due to their late-interaction similarity computation, are less readily adaptable to ANNS-based retrieval.

Controlled Ablation Studies. To directly assess the value of EHI's end-to-end indexer, we construct baselines using the same DistilBERT encoder paired with: (a) Dual Encoder (DE with ANCE hard negatives; Menon et al. (2022)) + Exact Search (for a theoretical upper bound), (b) DE + ScaNN (representative of popular ANNS libraries; Guo et al. (2020)), and (c) DE + Faiss-IVF (for comparison with another IVF-style index; Johnson et al. (2019)).

Generative Retrieval. We include baselines like DSI (Tay et al., 2022) and NCI (Wang et al., 2022) to provide context, acknowledging their potential limitations in this setting. We report DSI numbers on MS MARCO using an implementation validated by the authors. However, we note that NCI fails to scale to large datasets like MS MARCO.

Sparse Neural Retrieval. We include sparse neural IR baselines like CCSA (Lassance et al., 2021) and SPLADE (Formal et al., 2022) for a comprehensive comparison in Table 4. However, it is important to note key differences in our context. Methods like CCSA, with their two-step disjoint training, face inherent challenges similar to those discussed earlier. While SPLADE offers notable improvements by replacing ANNS with sparse representations and using a FLOP-minimizing regularization term, this approach necessitates training multiple models for the practitioner's different FLOP targets. In contrast, EHI achieves this flexibility and elasticity within a single model. Furthermore, as Lassance et al. (2023) observe, SPLADE models can exhibit slower inference speeds compared to dense IR models; this can be alleviated with static pruning, but the earlier limitation still holds. However, the distillation-based improvements achieved by SPLADE are promising, and exploring the integration of distillation techniques with EHI represents an exciting direction for future research.

## 4.2 Results

Evaluations and findings on the Scifact and Fiqa datasets are presented in Appendix E. We evaluate EHI on the MS MARCO passage retrieval task, a *widely recognized benchmark* for semantic search. Results are presented for both the standard dev set and the TREC DL-19 set (Craswell et al., 2020). We also report our findings on other datasets such as NQ320k (Kwiatkowski et al., 2019). We compare against the standard Sentence-BERT model (Huggingface, 2019), trained on the MS MARCO dataset, which is further fine-tuned on the various datasets compared in this paper.

Efficiency-Accuracy Trade-off: EHI demonstrates a compelling advantage in the efficiency-accuracy trade-off. For instance, on the standard MS MARCO dev set, EHI matches or exceeds the accuracy of Exact Search while visiting 80% fewer documents (see Figure 3(a)). This significantly outperforms DE+ScaNN and DE+Faiss-IVF, which require visiting nearly double the documents for comparable accuracy. Furthermore, on the TREC-DL 19 benchmark, EHI is able to match or surpass the nDCG@10 of baseline Exact Search with a 78% reduction in latency (see Figure 3(b)).
On the NQ320k dataset, EHI matches or surpasses the accuracy of baseline Exact Search with a 60% reduction in latency (see Figure 3(c)).

Gains in compute bottlenecks: We emphasize the significance of EHI's **3.6%** nDCG@10 improvement (with only 1% of documents visited) on the well-optimized MS MARCO benchmark. Such gains underscore the practical value of our approach. Similarly, when restricted to visiting 1% of the documents, EHI achieves 8.2% higher nDCG@10 than DE+ScaNN and DE+Faiss-IVF on the TREC-DL benchmark (see Appendix E).

Comparison with Generative Retrieval Methods: We note that the DSI base model, with 250M parameters, is almost *four times* the size of the current EHI model. After multiple weeks of DSI training with doc2query + atomic id + base model, DSI achieves an MRR@10 of 26% on the MS MARCO dev set, which is **8.58%** lower than EHI with just 1% of documents visited (see Table 4). Note that despite significant efforts, we could not scale the NCI code (Wang et al., 2022) to MS MARCO due to the dataset size; the NCI paper does not provide metrics on the MS MARCO dataset. To compare against NCI, we report our findings on the NQ320k dataset. Note that EHI significantly outperforms DSI and NCI (without query generation) despite NCI utilizing a 10× larger encoder. Furthermore, even with query generation, NCI is 0.6% and 1.8% less accurate than EHI on the Recall@10 and Recall@100 metrics, respectively (see Table 6). Our observations about EHI are statistically significant, as evidenced by p-value tests (Appendix E.3). Additional experiments, such as the number of documents per leaf, robustness to initialization, and a qualitative analysis of the leaves of the indexer learned by the EHI model, are presented in Appendix E.

![9_image_0.png](9_image_0.png)

Figure 4: Ablation studies depicting various properties of EHI's training paradigm.

## 4.3 Discussions

In the previous section, we demonstrated the effectiveness of EHI against multiple baselines on diverse benchmarks. In this section, we report results from multiple ablation studies to better understand the behavior of EHI. Additional experiments and properties, such as load balancing, are discussed in Appendix E.

Fraction of visited documents as a proxy for latency. Ideally, we would want to compare query throughput against recall/MRR, but obtaining head-to-head latency numbers is challenging, as different systems are implemented in different environments with different optimizations (for instance, off-the-shelf indexers such as ScaNN and Faiss work better with CPU optimization, while EHI takes advantage of GPUs). Thus, following standard practice in the ANNS community, we use the fraction of documents visited/searched as a proxy for latency (Jayaram Subramanya et al., 2019; Guo et al., 2020).

Stand-alone indexer comparisons. To showcase the alignment between the embeddings and the downstream clusters learned, we conduct an experiment where we initialize the encoders with the encoder trained via EHI. We find that the indexer trained by EHI is better aligned with the representations and leads to better performance when visiting a smaller fraction of documents (see Figure 5). This underscores the crucial alignment and efficiency gains achieved through EHI's joint optimization. To isolate the EHI indexer's impact, we fix the encoder embeddings (pre-trained EHI model) and compare EHI indexing against ScaNN and Faiss-IVF.
Figure 4(a) illustrates that EHI outperforms the other baselines by up to **4.6%** when visiting a limited number of documents, highlighting the EHI indexer's intrinsic efficiency.

Effect of branching factor. Figure 4(b) shows Recall@100 of EHI on SciFact with varying branching factors. We consider two versions of EHI: one with exact search, and another where we restrict EHI to visit about 10% of the documents. Interestingly, for EHI + Exact Search, the accuracy decreases with a higher branching factor, while it increases for the smaller beam size of 0.1. We attribute this to documents in a leaf node being very similar to each other for high branching factors (fewer points per leaf). We hypothesize that EHI samples highly relevant documents as hard negatives, leading to a lower exact search accuracy.

Ablation w.r.t. loss components. Next, on the FIQA dataset, we study the performance of EHI when one of the loss components in Equation 6 is turned off; see Figure 4(c). First, we observe that EHI outperforms the other three vanilla variants, implying that each loss term contributes non-trivially to the performance of EHI. Next, we observe that removing the document-similarity-based loss term (λ3), Eq. 5, has the least effect on the performance of EHI, as the other two loss terms already capture some of its desired consequences. However, turning off the contrastive loss on either the encoder embedding (λ1), Eq. 3, or the path embedding (λ2), Eq. 4, leads to a significant loss in accuracy. This also indicates the importance of jointly and accurately learning the encoder and indexer parameters.

![10_image_0.png](10_image_0.png)

Figure 5: The EHI indexer is significantly more accurate than the EHI encoder + off-the-shelf indexers such as ScaNN and Faiss-IVF, especially when restricted to visiting a small fraction of documents. Note that all the algorithms have been initialized with the same EHI encoder for computing embeddings.

## 4.4 Scaling To Deeper Trees

![10_image_1.png](10_image_1.png)

Figure 6: EHI upon extending to deeper trees.

In this section, we further motivate the need to extend to deeper trees and showcase why this extension is critical for deployment in production. We show that the hierarchical part of EHI can be tailored for large-scale tasks (>1B documents or otherwise) and is absolutely necessary for two reasons:

Sub-optimal convergence and blow-up in path embedding: When working with datasets at the scale of hundreds of billions of documents, as is common in industry, a single-height indexer would not work. Having more than a billion logits returned by a single classification layer would be suboptimal and lead to poor results. This is a common trend observed in other domains, such as extreme classification, and thus warrants hierarchy in our model design. Furthermore, the path embedding would blow up if we used a single-height indexer, and optimization and gradient alignment would become the bottleneck in this scenario.

Sub-linear scaling and improved latency: Increasing the height of EHI allows it to scale to very large numbers of leaves and documents. Consider a case where our encoder has $P$ parameters, and the indexer is a single-height tree that indexes each document of dimension $D$ into $L$ leaves. The complexity of a forward pass in this setting would be $O(P + DL)$. For very large values of $L$, this computation could blow up and lead to suboptimal latency and memory consumption. By extending the tree to multiple heights ($H$), one can reduce the complexity at each height to a sublinear scale: the complexity of a forward pass through the hierarchical k-ary tree is $O(P + H \cdot D \cdot L^{1/H})$. Since $L^{1/H} \ll L$, the hierarchical k-ary tree has better complexity and latency, which helps in scaling to documents on the order of billions, as observed in real-world applications. We illustrate this with a small back-of-the-envelope calculation below.
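As a concrete illustration of this analysis (our sketch; D = 768 and L = 7000 match the embedding size and the MS MARCO leaf count used in this paper):

```python
# Indexer-only forward-pass cost (multiply-accumulates, constants dropped)
# under the O(P + H * D * L^(1/H)) analysis above.
def indexer_cost(D: int, L: int, H: int) -> int:
    return round(H * D * L ** (1.0 / H))

D, L = 768, 7000  # embedding size and leaf count used for MS MARCO
for H in (1, 2, 3):
    print(f"H={H}: ~{indexer_cost(D, L, H):,} MACs")
# Prints roughly 5.4M (H=1), 0.13M (H=2), and 0.04M (H=3): deeper trees
# cut the indexer term by orders of magnitude at a fixed leaf count.
```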
For instance, for EHI trained on Scifact with an equal number of leaves, we notice a significant speedup with increasing height; for example, at (B = 64, H = 1), (B = 8, H = 2), and (B = 4, H = 3), we observe per-query latencies of 2.48 ms, 2.40 ms, and 1.99 ms, respectively, at the same computation budget.

We study the accuracy of EHI when extending to multiple heights of the tree structure to assess its effectiveness in accurately indexing extensive web-scale document collections. Traditional indexing methods that rely on a single-height approach can be computationally impractical and sub-optimal when dealing with billions or more documents. To address this challenge, EHI treats height as a hyperparameter and learns the entire tree structure end-to-end. This extension to a hierarchical k-ary tree is absolutely necessary for scalability. In the results presented in Figure 6, we showcase that one can increase the depth of the tree to allow multiple linear layers with fewer logits each, say 1000 clusters at each height, instead of doing so at a single level. This is precisely where the hierarchical part of the method is **absolutely necessary** for scalability. Furthermore, we also extend the EHI model to other datasets such as Scifact to showcase this phenomenon, as depicted in Table 1. A significance test (p-value of 0.05) confirms that the results across different permutations are nearly identical: the improvements on R@50 and R@100 over other permutations are not statistically significant (improvements < 0.5%), and the improvements of other (B, H) permutations on the R@10 and R@20 metrics over the H = 1 baseline are also statistically insignificant. Our current experiments on Scifact/MS MARCO not only show that our method scales to deeper trees, but also that EHI extends to deeper trees with little to no loss in performance and with improved latency. We also maintain roughly similar performance and a uniform splitting of documents even when extending to deeper trees, as depicted in Figure 6.

Table 1: Performance metrics (%) evaluated on the Scifact dataset across various permutations of branching factor (B) and height (H). The best value for each metric, corresponding to a specific number of visited documents, is indicated in **bold** font.

| Scifact | R@10 | R@20 | R@50 | R@100 |
|-------------------------------------|--------|--------|--------|---------|
| Two-tower model (with Exact Search) | 75.82 | 82.93 | 87.9 | 91.77 |
| EHI (B=40, H=1) | 84.73 | 87.57 | 93.87 | 95.27 |
| EHI (B=4, H=2) | 86.23 | 88.7 | 93.43 | 94.77 |
| EHI (B=6, H=2) | 83.8 | 88.63 | 92.77 | 95.27 |
| EHI (B=8, H=2) | 84.46 | 88.47 | 93.27 | 95.27 |
| EHI (B=4, H=3) | 84.96 | 88.7 | 93.6 | 94.77 |
| EHI (B=6, H=3) | 85.36 | 89.3 | 93.6 | 94.77 |
| EHI (B=8, H=3) | 83.86 | 87.47 | 92.27 | 95.27 |

## 5 Conclusions, Limitations, And Future Work

We presented EHI, a novel framework that fundamentally shifts the paradigm of dense retrieval by jointly optimizing the query/document encoder and the search indexer.
EHI's core components (the encoder, indexer, and retriever) work in tandem to generate compact path embeddings. These path embeddings, representing the paths taken by queries and documents within the index tree, are crucial for EHI's effective joint training. Extensive evaluations on standard benchmarks demonstrate EHI's superior efficiency and accuracy compared to traditional approaches. While the success of path embeddings highlights their potential, a deeper theoretical understanding and formal guarantees would be valuable. Additionally, exploring the integration of EHI with hierarchical representations like Matryoshka embeddings (Kusupati et al., 2022) or techniques like RGD (Kumar et al., 2023) could further enhance performance, particularly for complex and tail queries. We believe EHI lays a strong foundation for future innovations in dense retrieval, promising more efficient and accurate semantic search systems.

## Broader Impact Statement

Our proposed approach helps improve large-scale retrieval and has the potential to be applied to multiple industry-grade retrieval solutions. Improving the scale and recall quality of retrieval itself might have multiple societal consequences, none of which are specific to our method.

## References

Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. *arXiv preprint arXiv:1611.09268*, 2016.

Erik Bernhardsson. *Annoy: Approximate Nearest Neighbors in C++/Python*, 2018. URL https://pypi.org/project/annoy/. Python package version 1.13.0.

Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Scott Yih, Sebastian Riedel, and Fabio Petroni. Autoregressive search engines: Generating substrings as document identifiers. *Advances in Neural Information Processing Systems*, 35:31668–31683, 2022.

Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In *Proceedings of the 23rd International Conference on Machine Learning*, pp. 97–104, 2006.

Deng Cai. A revisit of hashing algorithms for approximate nearest neighbor search. *IEEE Transactions on Knowledge and Data Engineering*, 33(6):2337–2348, 2021. doi: 10.1109/TKDE.2019.2953897.

Kenneth L Clarkson. An algorithm for approximate closest-point queries. In *Proceedings of the Tenth Annual Symposium on Computational Geometry*, pp. 160–164, 1994.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. Overview of the trec 2019 deep learning track. *arXiv preprint arXiv:2003.07820*, 2020.

Kunal Dahiya, Deepak Saini, Anshul Mittal, Ankush Shaw, Kushal Dave, Akshay Soni, Himanshu Jain, Sumeet Agarwal, and Manik Varma. Deepxml: A deep extreme multi-label learning framework applied to short text documents. In *Proceedings of the 14th ACM International Conference on Web Search and Data Mining*, pp. 31–39, 2021.

Kunal Dahiya, Nilesh Gupta, Deepak Saini, Akshay Soni, Yajun Wang, Kushal Dave, Jian Jiao, Gururaj K, Prasenjit Dey, Amit Singh, et al. Ngame: Negative mining-aware mini-batching for extreme classification. In *Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining*, pp. 258–266, 2023.

Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In *Proceedings of the Twentieth Annual Symposium on Computational Geometry*, pp. 253–262, 2004.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Chantat Eksombatchai, Pranav Jindal, Jerry Zitao Liu, Yuchen Liu, Rahul Sharma, Charles Sugnet, Mark Ulrich, and Jure Leskovec. Pixie: A system for recommending 3+ billion items to 200+ million users in real-time. In *Proceedings of the 2018 World Wide Web Conference*, pp. 1775–1784, 2018.

Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. From distillation to hard negative sampling: Making sparse neural ir models more effective. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 2353–2359, 2022.

Jerome H Friedman, Jon Louis Bentley, and Raphael Ari Finkel. An algorithm for finding best matches in logarithmic expected time. *ACM Transactions on Mathematical Software (TOMS)*, 3(3):209–226, 1977.

Luyu Gao, Zhuyun Dai, Zhen Fan, and Jamie Callan. Complementing lexical retrieval with semantic residual embedding. *arXiv preprint arXiv:2004.13969*, 2020a.

Weihao Gao, Xiangjun Fan, Chong Wang, Jiankai Sun, Kai Jia, Wenzhi Xiao, Ruofan Ding, Xingyan Bin, Hui Yang, and Xiaobing Liu. Deep retrieval: Learning a retrievable structure for large-scale recommendations. *arXiv preprint arXiv:2007.07203*, 2020b.

Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 36(4):744–755, 2013.

Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerating large-scale inference with anisotropic vector quantization. In *International Conference on Machine Learning*, pp. 3887–3896. PMLR, 2020.

Nilesh Gupta, Patrick Chen, Hsiang-Fu Yu, Cho-Jui Hsieh, and Inderjit Dhillon. Elias: End-to-end learning to index and search in large output spaces. *Advances in Neural Information Processing Systems*, 35:19798–19809, 2022.

Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, pp. 297–304. JMLR Workshop and Conference Proceedings, 2010.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pp. 3929–3938. PMLR, 2020.

Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 113–122, 2021.

Huggingface. HuggingFace Sentence-transformers for MSMarco, 2019. URL https://huggingface.co/sentence-transformers/msmarco-distilbert-cos-v5.

Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In *Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing*, pp. 604–613, 1998.

Gautier Izacard and Edouard Grave. Distilling knowledge from reader to retriever for question answering. *arXiv preprint arXiv:2012.04584*, 2020a.

Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. *arXiv preprint arXiv:2007.01282*, 2020b.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Towards unsupervised dense information retrieval with contrastive learning. *arXiv preprint arXiv:2112.09118*, 2021.

Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Few-shot learning with retrieval augmented language models. *arXiv preprint arXiv:2208.03299*, 2022.

Himanshu Jain, Venkatesh Balasubramanian, Bhanu Chunduri, and Manik Varma. Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches. In *Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining*, pp. 528–536, 2019.

Shikhar Jaiswal, Ravishankar Krishnaswamy, Ankit Garg, Harsha Vardhan Simhadri, and Sheshansh Agrawal. Ood-diskann: Efficient and scalable graph anns for out-of-distribution queries. *arXiv preprint arXiv:2211.12850*, 2022.

Young Kyun Jang and Nam Ik Cho. Generalized product quantization network for semi-supervised image retrieval. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3420–3429, 2020.

Suhas Jayaram Subramanya, Fnu Devvrit, Harsha Vardhan Simhadri, Ravishankar Krishnawamy, and Rohan Kadekodi. Diskann: Fast accurate billion-point nearest neighbor search on a single node. *Advances in Neural Information Processing Systems*, 32, 2019.

Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 33(1):117–128, 2010.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. *IEEE Transactions on Big Data*, 7(3):535–547, 2019.

Omar Khattab and Matei Zaharia. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In *Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 39–48, 2020.

Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16*, pp. 491–507. Springer, 2020.

Ramnath Kumar, Kushal Majmundar, Dheeraj Nagaraj, and Arun Sai Suggala. Stochastic re-weighted gradient descent via distributionally robust optimization. *arXiv preprint arXiv:2306.09222*, 2023.

Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, and Ali Farhadi. Llc: Accurate, multi-purpose learnt low-dimensional binary codes. *Advances in Neural Information Processing Systems*, 34:23900–23913, 2021.

Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, et al. Matryoshka representation learning. *Advances in Neural Information Processing Systems*, 35:30233–30249, 2022.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:453–466, 2019.

Carlos Lassance, Thibault Formal, and Stéphane Clinchant.
Composite code sparse autoencoders for first stage retrieval. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 2136–2140, 2021.

Carlos Lassance, Simon Lupart, Hervé Dejean, Stéphane Clinchant, and Nicola Tonellotto. A static pruning study on sparse neural retrievers. In *Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval*, pp. 1771–1775, 2023.

W Li, Y Zhang, Y Sun, W Wang, W Zhang, and X Lin. Approximate nearest neighbor search on high dimensional data—experiments, analyses, and improvement. *IEEE Transactions on Knowledge and Data Engineering*, 2020.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017.

Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. Www'18 open challenge: financial opinion mining and question answering. In *Companion Proceedings of the The Web Conference 2018*, pp. 1941–1942, 2018.

Yu A Malkov and DA Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. *IEEE Transactions on Pattern Analysis & Machine Intelligence*, 42(04):824–836, 2020.

Yu A Malkov and Dmitry A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 42(4):824–836, 2018.

Aditya Menon, Sadeep Jayasumana, Ankit Singh Rawat, Seungyeon Kim, Sashank Reddi, and Sanjiv Kumar. In defense of dual-encoders for neural ranking. In *International Conference on Machine Learning*, pp. 15376–15400. PMLR, 2022.

Bhaskar Mitra, Nick Craswell, et al. An introduction to neural information retrieval. *Foundations and Trends in Information Retrieval*, 13(1):1–126, 2018.

Nicholas Monath, Manzil Zaheer, Kelsey Allen, and Andrew McCallum. Improving dual-encoder training through dynamic indexes for negative mining. In *International Conference on Artificial Intelligence and Statistics*, pp. 9308–9330. PMLR, 2023.

Niklas Muennighoff. Sgpt: Gpt sentence embeddings for semantic search. *arXiv preprint arXiv:2202.08904*, 2022.

Pandu Nayak. Understanding searches better than ever before. *Google AI Blog*, 2019. URL https://blog.google/products/search/search-language-understanding-bert/.

Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, et al. Text and code embeddings by contrastive pre-training. *arXiv preprint arXiv:2201.10005*, 2022.

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y Zhao, Yi Luan, Keith B Hall, Ming-Wei Chang, et al. Large dual encoders are generalizable retrievers. *arXiv preprint arXiv:2112.07899*, 2021.

Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. *arXiv preprint arXiv:1901.04085*, 2019.

Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document expansion by query prediction. *arXiv preprint arXiv:1904.08375*, 2019.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. Rocketqa: An optimized training approach to dense passage retrieval for open-domain question answering. *arXiv preprint arXiv:2010.08191*, 2020.

Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. *OpenAI*, 2018.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763, 2021.

Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. *arXiv preprint arXiv:1908.10084*, 2019.

Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. *Foundations and Trends in Information Retrieval*, 3(4):333–389, 2009.

Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. *International Journal of Approximate Reasoning*, 50(7):969–978, 2009.

Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. Colbertv2: Effective and efficient retrieval via lightweight late interaction. *arXiv preprint arXiv:2112.01488*, 2021.

Josef Sivic and Andrew Zisserman. Video google: A text retrieval approach to object matching in videos. In *Computer Vision, IEEE International Conference on*, volume 3, pp. 1470–1470. IEEE Computer Society, 2003.

Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. Transformer memory as a differentiable search index. *Advances in Neural Information Processing Systems*, 35:21831–21843, 2022.

Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. *arXiv preprint arXiv:2104.08663*, 2021.

David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. Fact or fiction: Verifying scientific claims. *arXiv preprint arXiv:2004.14974*, 2020.

Mengzhao Wang, Xiaoliang Xu, Qiang Yue, and Yuxiang Wang. A comprehensive survey and experimental comparison of graph-based approximate nearest neighbor search. *Proceedings of the VLDB Endowment*, 14(11):1964–1978, 2021.

Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, et al. A neural corpus indexer for document retrieval. *Advances in Neural Information Processing Systems*, 35:25600–25614, 2022.

Roger Weber, Hans-Jörg Schek, and Stephen Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In *VLDB*, volume 98, pp. 194–205, 1998.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. *arXiv preprint arXiv:2007.00808*, 2020.

Tan Yu, Junsong Yuan, Chen Fang, and Hailin Jin. Product quantization network for fast image retrieval. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 186–201, 2018.

Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. Learning discrete representations via constrained clustering for effective and efficient dense retrieval. In *Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining*, pp. 1328–1336, 2022.

Han Zhu, Xiang Li, Pengye Zhang, Guozheng Li, Jie He, Han Li, and Kun Gai. Learning tree-based deep model for recommender systems. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pp. 1079–1088, 2018.

Han Zhu, Daqing Chang, Ziru Xu, Pengye Zhang, Xiang Li, Jie He, Han Li, Jian Xu, and Kun Gai.
Joint optimization of tree-based index and deep model for recommender systems. *Advances in Neural Information Processing Systems*, 32, 2019.

## A Datasets

In this section, we briefly discuss the open-source datasets used in this work, adapted from the BEIR benchmark (Thakur et al., 2021).

Scifact: Scifact (Wadden et al., 2020) is a fact-checking benchmark that verifies scientific claims using evidence from the research literature, with a corpus of scientific paper abstracts. The dataset has ∼5000 documents and has a standard train-test split. We use the original publicly available dev split from the task, which has 300 test queries, and include all documents from the original dataset as our corpus. The scale of this dataset is smaller; it is included to demonstrate even splits and improvement over baselines even when the number of documents is on the order of 5000.

Fiqa: Fiqa (Maia et al., 2018) is an open-domain question-answering task in the financial domain, built by crawling StackExchange posts under the Investment topic from 2009–2017 as the corpus. It consists of 57,638 documents and a publicly available test split from Thakur et al. (2021). As the test set, we use the random sample of 500 queries provided by Thakur et al. (2021). The scale of this dataset is 10× that of Scifact, with almost 57,638 documents in our corpus.

MS Marco: The MS MARCO benchmark (Bajaj et al., 2016) has been included since it is widely recognized as the gold standard for evaluating and benchmarking large-scale information retrieval systems (Thakur et al., 2021; Ni et al., 2021). It is a collection of real-world search queries and corresponding documents carefully curated from the Microsoft Bing search engine. What sets MS MARCO apart from other datasets is its scale and diversity: it consists of approximately 9 million documents in its corpus and 532,761 query-passage pairs for fine-tuning the majority of retrievers. Due to the increased complexity in scale and missing labels, the benchmark is widely known to be challenging. The dataset has been extensively explored and used for fine-tuning dense retrievers in recent works (Thakur et al., 2021; Nogueira & Cho, 2019; Gao et al., 2020a; Qu et al., 2020). MS MARCO has gained significant attention and popularity in the research community due to its realistic and challenging nature. Its large-scale and diverse dataset reflects the complexities and nuances of real-world search scenarios, making it an excellent testbed for evaluating the performance of information retrieval algorithms.

NQ320k: The NQ320k benchmark (Kwiatkowski et al., 2019) has become a standard information retrieval benchmark used to showcase the efficacy of various SOTA approaches such as DSI (Tay et al., 2022) and NCI (Wang et al., 2022). In this work, we use the same NQ320k preprocessing steps as NCI. The queries are natural language questions, and the corpus is Wikipedia articles in HTML format. During preprocessing, we filter out tags and other special characters and extract the title, abstract, and content strings. Please refer to Wang et al. (2022) for additional details about this dataset.

Table 2 presents more information about our datasets.

| Dataset | # Train Pairs | # Test Queries | # Test Corpus | Avg. Test Doc per Query |
|-----------|-----------------|------------------|-----------------|---------------------------|
| Scifact | 920 | 300 | 5183 | 1.1 |
| Fiqa-2018 | 14,166 | 648 | 57,638 | 2.6 |
| MS Marco | 532,761 | 6,980 | 8,841,823 | 1.1 |

Table 2: Dataset details.
## B Motivation For EHI

Misalignment of representations: Figure 7(a) helps illustrate the issue with the disjoint training regime typically used in the dense retrieval literature. Consider that the learned embedding structure of the encoder is nearly symmetrical and not optimal for splitting into buckets, as shown in the left subfigure.

![18_image_0.png](18_image_0.png)

Figure 7: Motivation of EHI.

If the disjoint ANNS tries to cluster these documents into buckets, the resulting model will achieve suboptimal performance, since the encoder was never aware of the task of ANNS. Instead, EHI proposes learning both the encoder and the ANNS structure in a single pipeline. This leads to efficient clustering, as the embeddings being learned already share information about how many clusters the ANNS is hashing into.

Ignoring query distribution: To provide an illustrative instance, let us consider a scenario in which there exists a corpus of documents denoted by D, with two classes of documents categorized as [D1, D2], following a certain probability distribution [d1, d2]. Suppose $d_1 \gg d_2$. Then, under an unsupervised off-the-shelf ANNS indexer such as the ScaNN algorithm clustering into two partitions, the algorithm would distribute the two document classes in an approximately uniform manner across the resultant clusters. Concurrently, the documents of class D1 would be allocated to both of these clusters, as guided by the partitioning procedure, regardless of the query distribution. However, supposing a scenario in which an overwhelming majority of the queries target D2, the goal emerges to rigorously isolate the clusters associated with D1 and D2, since we would like to improve latency in the downstream evaluation and visit fewer documents for each query. In this context, EHI emerges as a viable solution: it takes the query distribution into consideration, effectively addressing this concern by facilitating a clear delineation between the clusters corresponding to the distinct domains. Figure 7(b) depicts the query distribution of the EHI model trained on the MS MARCO dataset. It is important to highlight that for tail queries, which are barely searched in practice, EHI is able to grasp the niche concepts and hashes few documents into these buckets, as indicated by the very low average number of documents visited. Moving towards more popular, or torso, queries, we note a rise in the average number of documents visited, as these queries are broader in general. Moving even further to the head queries, we again notice a drop in the average number of documents visited, which showcases the efficiency and improved latency of the EHI model.

## C Model And Hyperparameter Details

We followed a standard approach called grid search to determine the optimal hyperparameters for training the EHI model. This involved selecting a subset of the training dataset and systematically exploring different combinations of hyperparameter values that perform best on this held-out set. The key hyperparameters we focused on were the encoder (Eθ) and indexer (Iφ) learning rates. Other hyperparameters to tune involved the weights assigned to each component of the loss function, denoted λ1, λ2, λ3 in Equation 6. The specific hyperparameter values used in our experiments are detailed in Table 3, which provides a comprehensive overview of the exact settings for each hyperparameter.
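For concreteness, a hypothetical sketch of such a sweep is shown below; the candidate grids and the scoring stub are illustrative stand-ins, and only the tuned values reported in Table 3 come from our actual experiments.

```python
from itertools import product

def train_and_validate(enc_lr: float, cl_lr: float, lambdas: tuple) -> float:
    """Stand-in for training EHI with these settings and returning
    held-out retrieval accuracy; a real sweep would call the trainer."""
    return -abs(enc_lr - 4e-4) - abs(cl_lr - 1.6e-2)  # dummy objective

grid = {
    "enc_lr": [1e-4, 2e-4, 4e-4],
    "cl_lr": [9e-4, 8e-3, 1.6e-2],
    "lambdas": [(0.2, 0.8, 0.2), (0.03, 0.46, 0.54)],
}
best = max(product(*grid.values()), key=lambda cfg: train_and_validate(*cfg))
print("selected:", dict(zip(grid.keys(), best)))
```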
The transformer block used in this work is the Sentence Transformers distilBERT model, which has been trained on the MS MARCO dataset (https://huggingface.co/sentence-transformers/msmarco-distilbert-cos-v5). We fine-tune the encoder for our task and report our performance metrics.

| Description | Scifact | Fiqa | MSMarco | NQ320k | argument_name |
|---|---|---|---|---|---|
| **General settings** | | | | | |
| Batch size | 64 | 64 | 4096 | 1024 | batch_size |
| Number of leaves | 40 | 5000 | 7000 | 1000 | leaves |
| **Optimizer settings** | | | | | |
| Encoder learning rate | 4e-4 | 2e-4 | 2.5e-8 | 1e-4 | enc_lr |
| Classifier learning rate | 0.016 | 9e-4 | 8e-3 | 4e-4 | cl_lr |
| Loss factor 1 | 0.2 | 0.03 | 0.11 | 0.2 | λ1 |
| Loss factor 2 | 0.8 | 0.46 | 0.84 | 0.4 | λ2 |
| Loss factor 3 | 0.2 | 0.54 | 0.60 | 0.11 | λ3 |

Table 3: Hyperparameters used for training EHI on various datasets. The number of epochs and the refresh rate (r) were set to 100 and 5, respectively. EHI initializes with distilBERT, with an encoder embedding dimension of 768.

## D Comparisons To SOTA

In this section, we compare EHI with SOTA approaches such as ColBERT, SGPT, cpt-text, ANCE, and DyNNIBAL, which use different dot-product metrics, hard-negative mining mechanisms, etc. Note that EHI, as proposed in this paper, is a paradigm shift in training and can be combined with any of the above architectures. The main comparison is therefore not with SOTA architectures, but against the existing paradigm of two-stage dense retrieval, as highlighted in Section 4.2. Nevertheless, we showcase that EHI outperforms SOTA approaches on both the MS MARCO dev set and the MS MARCO TREC DL-19 setting, as highlighted in Table 4 and Table 5, respectively.

## E Additional Experiments

This section presents additional experiments conducted on our proposed model, EHI. The precise performance metrics of our model, as discussed in Section 4.2, are provided in Appendix E.1. This analysis demonstrates that our approach achieves state-of-the-art performance even with different initializations, highlighting its robustness (Appendix E.2). Furthermore, we delve into the document-to-leaves ratio concept, showcasing its adaptability to degrees greater than one and the advantages of doing so, provided that our computational requirements are met (Appendix E.8). This flexibility allows for the exploration of more nuanced clustering possibilities. We also examine load balancing in EHI and emphasize the utilization of all leaves in a well-distributed manner, indicating that our model learns efficient representations and enables the formation of finer-grained clusters (Appendix E.4). To shed light on the learning process, we present the expected-documents-per-leaf metric over the training regime in Appendix E.5. This analysis demonstrates how EHI learns to create more evenly distributed clusters as training progresses, further supporting its effectiveness. Finally, we provide additional insights into the semantic analysis of the Indexer in Appendix E.8, highlighting the comprehensive examination performed to better understand the inner workings of our model. Through these additional experiments and analyses, we reinforce the efficacy, robustness, and interpretability of our proposed EHI model, demonstrating its potential to advance the field of information retrieval.

SciFact. We first start with the small-scale SciFact dataset. Figure 8(a) and Table 7 compare EHI to three DE baselines. Clearly, EHI's recall-compute curve dominates those of DE+ScaNN and DE+Faiss-IVF. For example, when allowed to visit/search about 10% of documents, EHI obtains up to **+15.64%** higher Recall@100.
Furthermore, EHI can outperform DE+Exact Search with a 60% reduction in latency. Finally, representations from EHI's encoder with exact search can be as much as 4% more accurate (in terms of Recall@100) than the baseline dual encoder + Exact Search, indicating the effectiveness of EHI's integrated hard-negative mining.

FIQA. Here we observe a similar trend as on SciFact; see Figure 8(b) and Table 8. That is, when restricted to visiting only 15% of documents (on average), EHI outperforms ScaNN and Faiss-IVF on the Recall@100 metric by **5.46%** and **4.36%**, respectively. Furthermore, EHI outperforms exact search on FIQA with an 84% reduction in latency (documents visited). Finally, when allowed to visit about 50% of the documents, EHI is about 5% more accurate than Exact Search, which visits all the documents, indicating the better quality of the learned embeddings.

| Method | MRR@10 (Dev) | nDCG@10 (Dev) | # parameters |
|---|---|---|---|
| DPR Thakur et al. (2021) | - | 17.7 | 110M (BERT-base) |
| BM25 (official) Khattab & Zaharia (2020) | 16.7 | 22.8 | - |
| BM25 (Anserini) Khattab & Zaharia (2020) | 18.7 | - | - |
| cpt-text S Neelakantan et al. (2022) | 19.9 | - | 300M (cpt-text S) |
| cpt-text M Neelakantan et al. (2022) | 20.6 | - | 1.2B (cpt-text M) |
| cpt-text L Neelakantan et al. (2022) | 21.5 | - | 6B (cpt-text L) |
| cpt-text XL Neelakantan et al. (2022) | 22.7 | - | 175B (cpt-text XL) |
| DSI (Atomic Docid + Doc2Query + Base Model) Tay et al. (2022) | 26.0 | 32.28 | 250M (T5-base) |
| DSI (Naive String Docid + Doc2Query + XL Model) Tay et al. (2022) | 21.0 | - | 3B (T5-XL) |
| DSI (Naive String Docid + Doc2Query + XXL Model) Tay et al. (2022) | 16.5 | - | 11B (T5-XXL) |
| DSI (Semantic String Docid + Doc2Query + XL Model) Tay et al. (2022) | 20.3 | 27.86 | 3B (T5-XL) |
| CCSA Lassance et al. (2021) | 28.9 | - | 110M (BERT-base) |
| HNSW Malkov & Yashunin (2018) | 28.9 | - | - |
| RoBERTa-base + In-batch Negatives Monath et al. (2023) | 24.2 | - | 123M (RoBERTa-base) |
| RoBERTa-base + Uniform Negatives Monath et al. (2023) | 30.5 | - | 123M (RoBERTa-base) |
| RoBERTa-base + DyNNIBAL Monath et al. (2023) | 33.4 | - | 123M (RoBERTa-base) |
| RoBERTa-base + Stochastic Negatives Monath et al. (2023) | 33.1 | - | 123M (RoBERTa-base) |
| RoBERTa-base + Negative Cache Monath et al. (2023) | 33.1 | - | 123M (RoBERTa-base) |
| RoBERTa-base + Exhaustive Negatives Monath et al. (2023) | 34.5 | - | 123M (RoBERTa-base) |
| SGPT-CE-2.7B Muennighoff (2022) | - | 27.8 | 2.7B (GPT-Neo) |
| SGPT-CE-6.1B Muennighoff (2022) | - | 29.0 | 6.1B (GPT-J-6B) |
| SGPT-BE-5.8B Muennighoff (2022) | - | 39.9 | 5.8B (GPT-J) |
| doc2query Khattab & Zaharia (2020) | 21.5 | - | - |
| DeepCT Thakur et al. (2021) | 24.3 | 29.6 | 110M (BERT-base) |
| docTTTTquery Khattab & Zaharia (2020) | 27.7 | - | - |
| SPARTA Thakur et al. (2021) | - | 35.1 | 110M (BERT-base) |
| docT5query Thakur et al. (2021) | - | 33.8 | - |
| ANCE | 33.0 | 38.8 | - |
| EHI (distilbert-cos; Ours) | 33.8 | 39.4 | 66M (distilBERT) |
| RepCONC Zhan et al. (2022) | 34.0 | - | 123M (RoBERTa-base) |
| TAS-B Thakur et al. (2021) | - | 40.8 | 110M (BERT-base) |
| GenQ Thakur et al. (2021) | - | 40.8 | - |
| ColBERT (re-rank) Khattab & Zaharia (2020) | 34.8 | - | 110M (BERT-base) |
| ColBERT (end-to-end) Khattab & Zaharia (2020) | 36.0 | 40.1 | 110M (BERT-base) |
| BM25 + CE Thakur et al. (2021) | - | 41.3 | - |
| SPLADE (simple training) Formal et al. (2022) | 34.2 | - | 110M (BERT-base) |
| SPLADE + DistilMSE Formal et al. (2022) | 35.8 | - | 66M (distilBERT) |
| SPLADE + SelfDistil Formal et al. (2022) | 36.8 | - | 66M (distilBERT) |
| SPLADE + EnsembleDistil Formal et al. (2022) | 36.9 | - | 66M (distilBERT) |
| EHI (distilbert-dot; Ours) | 37.2 | 43.3 | 66M (distilBERT) |
| SPLADE + CoCondenser-SelfDistil Formal et al. (2022) | 37.5 | - | 66M (distilBERT) |
| SPLADE + CoCondenser-EnsembleDistil Formal et al. (2022) | 38.0 | - | 66M (distilBERT) |
| ColBERT-v2 Santhanam et al. (2021) | 39.7 | - | 110M (BERT-base) |

Table 4: Performance metrics (%) evaluated on the MS MARCO dev dataset. The best value for each metric is indicated in **bold** font.
| Method | MRR@10 | nDCG@10 | # parameters |
|---|---|---|---|
| BM25 Robertson et al. (2009) | 68.9 | 50.1 | - |
| CCSA Lassance et al. (2021) | - | 58.3 | 110M (BERT-base) |
| RepCONC Zhan et al. (2022) | 66.8 | - | 123M (RoBERTa-base) |
| Cross-attention BERT (12-layer) Menon et al. (2022) | 82.9 | 74.9 | 110M (BERT-base) |
| Dual Encoder BERT (6-layer) Menon et al. (2022) | 83.4 | 67.7 | 66M (distilBERT) |
| DistilBERT + MSE Menon et al. (2022) | 78.1 | 69.3 | 66M (distilBERT) |
| SPLADE (simple training) Formal et al. (2022) | - | 69.9 | 110M (BERT-base) |
| DistilBERT + Margin MSE Menon et al. (2022) | 86.7 | 71.8 | 66M (distilBERT) |
| DistilBERT + RankDistil-B Menon et al. (2022) | 85.2 | 70.8 | 66M (distilBERT) |
| DistilBERT + Softmax CE Menon et al. (2022) | 84.6 | 72.6 | 66M (distilBERT) |
| DistilBERT + M3SE Menon et al. (2022) | 85.2 | 71.4 | 66M (distilBERT) |
| SPLADE + DistilMSE Formal et al. (2022) | - | 72.9 | 66M (distilBERT) |
| SPLADE + SelfDistil Formal et al. (2022) | - | 72.3 | 66M (distilBERT) |
| SPLADE + EnsembleDistil Formal et al. (2022) | - | 72.1 | 66M (distilBERT) |
| SPLADE + CoCondenser-SelfDistil Formal et al. (2022) | - | 73.0 | 66M (distilBERT) |
| SPLADE + CoCondenser-EnsembleDistil Formal et al. (2022) | - | 73.2 | 66M (distilBERT) |
| EHI (distilbert-cos; Ours) | 97.7 | 88.4 | 66M (distilBERT) |

Table 5: Performance metrics (%) evaluated on the MS MARCO TREC-DL 2019 (Craswell et al., 2020) dataset. The best value for each metric is indicated in **bold** font.

| Method | R@10 | R@100 | # parameters |
|---|---|---|---|
| BM25 Robertson et al. (2009) | 32.48 | 50.54 | - |
| BERT + ANN (Faiss) Johnson et al. (2019) | 53.63 | 73.01 | 110M (BERT-base) |
| BERT + BruteForce Wang et al. (2022) | 53.42 | 73.16 | 110M (BERT-base) |
| BM25 + DocT5Query Nogueira et al. (2019) | 61.83 | 76.92 | - |
| ANCE (FirstP) Xiong et al. (2020) | 80.33 | 91.78 | - |
| ANCE (MaxP) Xiong et al. (2020) | 80.38 | 91.31 | - |
| SEAL (Base) Bevilacqua et al. (2022) | 79.97 | 91.39 | 139M (BART-base) |
| SEAL (Large) Bevilacqua et al. (2022) | 81.24 | 91.93 | 406M (BART-large) |
| RoBERTa-base + In-batch Negatives Monath et al. (2023) | 69.5 | 84.3 | 123M (RoBERTa-base) |
| RoBERTa-base + Uniform Negatives Monath et al. (2023) | 72.3 | 84.8 | 123M (RoBERTa-base) |
| RoBERTa-base + DyNNIBAL Monath et al. (2023) | 75.4 | 86.2 | 123M (RoBERTa-base) |
| RoBERTa-base + Stochastic Negatives Monath et al. (2023) | 75.8 | 86.2 | 123M (RoBERTa-base) |
| RoBERTa-base + Negative Cache Monath et al. (2023) | - | 85.6 | 123M (RoBERTa-base) |
| RoBERTa-base + Exhaustive Negatives Monath et al. (2023) | 80.4 | 86.6 | 123M (RoBERTa-base) |
| DSI (Base) Tay et al. (2022) | 56.6 | - | 250M (T5-base) |
| DSI (Large) Tay et al. (2022) | 62.6 | - | 3B (T5-XL) |
| DSI (XXL) Tay et al. (2022) | 70.3 | - | 11B (T5-XXL) |
| NCI (Base) Wang et al. (2022) | 85.2 | 92.42 | 220M (T5-base) |
| NCI (Large) Wang et al. (2022) | 85.27 | 92.49 | 770M (T5-L) |
| NCI w/ qg-ft (Base) Wang et al. (2022) | 88.48 | 94.48 | 220M (T5-base) |
| NCI w/ qg-ft (Large) Wang et al. (2022) | 88.45 | 94.53 | 770M (T5-L) |
| EHI (distilbert-cos; Ours) | 88.98 | 96.3 | 66M (distilBERT) |

Table 6: Performance metrics (%) evaluated on the NQ320k dataset (Kwiatkowski et al., 2019). The best value for each metric is indicated in **bold** font.
## E.1 Results On Various Benchmarks

In this section, we report the precise numbers achieved by the various baselines as well as EHI, as shown in Table 7, Table 8, and Table 9 for Scifact, Fiqa, and MS Marco, respectively. To understand the performance of the EHI model for smaller values of k in Recall@k, we report the Recall@10, Recall@20, and Recall@50 metrics on the Scifact dataset. Figure 9 depicts the performance of the EHI model in this setting. On the Recall@10, Recall@20, and Recall@50 metrics, EHI outperforms exact search by 8.91%, 4.64%, and 5.97%, respectively. Furthermore, we note that most prior works, including ScaNN, DSI, and NCI, report their numbers for a single seed. To show the robustness of our approach to initializations, we also report our numbers over multiple seeds and showcase that we significantly outperform our baselines across the board, as further described in Section E.2.
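For reference, the Recall@k metric reported throughout these tables follows the standard definition: the fraction of ground-truth relevant documents recovered in the top-k retrieved list, averaged over queries. A minimal sketch:

```python
# A minimal sketch of Recall@k; the inputs below are illustrative.
def recall_at_k(retrieved, relevant, k):
    """retrieved: {qid: ranked list of doc ids}; relevant: {qid: set of doc ids}."""
    scores = []
    for qid, rel in relevant.items():
        if not rel:
            continue  # skip queries with no relevant documents
        top_k = set(retrieved.get(qid, [])[:k])
        scores.append(len(top_k & rel) / len(rel))
    return 100.0 * sum(scores) / max(len(scores), 1)

# Example: one query with two relevant documents, one recovered in the top 2.
print(recall_at_k({"q1": ["d3", "d7", "d1"]}, {"q1": {"d7", "d9"}}, k=2))  # 50.0
```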
## E.2 Robustness To Initializations

In this section, we showcase the robustness of our proposed approach, EHI, to various initializations of the model. Note that analysis of this kind is uncommon in efficient retrieval works such as ScaNN, DSI, and NCI. However, such approaches worked with two disjoint processes: learning embeddings, followed by the ANNS. Since we attempt to solve the problem in an end-to-end framework, in a differentiable fashion through neural networks, it is worth understanding the effect of different initializations or seeds on our learning. Overall, we run the same model with the best hyperparameters over five different seeds and report the mean and standard deviation in Table 10.

| Metric | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R@10 | Exact Search | - | - | - | - | - | - | - | - | - | 82.73 |
| | ScaNN | 66.2 | 74.11 | 78.28 | 79.94 | 81.28 | 81.61 | 81.94 | 82.28 | 82.28 | 82.28 |
| | Faiss-IVF | 70.74 | 78.27 | 80.07 | 82.23 | 82.9 | 82.73 | 82.73 | 82.73 | 82.73 | 82.73 |
| | EHI | 76.97 | 81.93 | 83.33 | 83.67 | 84.73 | 85.07 | 84.73 | 84.73 | 84.73 | 84.73 |
| R@20 | Exact Search | - | - | - | - | - | - | - | - | - | 86.9 |
| | ScaNN | 68.32 | 77.97 | 82.47 | 84.63 | 85.97 | 86.3 | 86.63 | 86.97 | 86.97 | 86.97 |
| | Faiss-IVF | 73.26 | 81.43 | 84.07 | 86.73 | 87.4 | 87.23 | 86.9 | 86.9 | 86.9 | 86.9 |
| | EHI | 81.03 | 85.27 | 86.5 | 86.17 | 87.23 | 87.57 | 87.57 | 87.57 | 87.57 | 87.57 |
| R@50 | Exact Search | - | - | - | - | - | - | - | - | - | 92.53 |
| | ScaNN | 72.39 | 82.7 | 87.2 | 90.2 | 91.53 | 91.87 | 92.2 | 92.53 | 92.53 | 92.53 |
| | Faiss-IVF | 76.66 | 85.4 | 87.37 | 90.03 | 91.03 | 90.87 | 90.87 | 90.87 | 90.87 | 90.87 |
| | EHI | 82.87 | 88.67 | 91.73 | 92.07 | 93.2 | 93.53 | 93.53 | 93.53 | 93.87 | 93.87 |
| R@100 | Exact Search | - | - | - | - | - | - | - | - | - | 94.1 |
| | ScaNN | 72.62 | 83.03 | 88.53 | 91.53 | 92.87 | 93.2 | 93.53 | 93.87 | 93.87 | 93.87 |
| | Faiss-IVF | 77.16 | 87.3 | 89.93 | 92.6 | 93.27 | 93.43 | 93.43 | 93.77 | 94.1 | 94.1 |
| | EHI | 83.53 | 89.33 | 92.9 | 93.23 | 94.43 | 94.77 | 94.77 | 94.93 | 95.27 | 95.27 |

Table 7: Performance metrics (%) evaluated on the Scifact dataset across various ranges of visited documents. The best value for each metric, corresponding to a specific number of visited documents, is indicated in **bold** font. Exact Search always visits all documents, so its single value is reported under 100% (here and in Tables 8-10).

| Metric | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R@10 | Exact Search | - | - | - | - | - | - | - | - | - | 30.13 |
| | ScaNN | 23.95 | 29.08 | 29.69 | 29.98 | 29.83 | 30.13 | 30.08 | 30.08 | 30.13 | 30.13 |
| | Faiss-IVF | 26.8 | 29.67 | 29.9 | 29.93 | 29.93 | 29.93 | 29.98 | 30.13 | 30.13 | 30.13 |
| | EHI | 27.43 | 32.02 | 32.12 | 32.19 | 32.42 | 32.42 | 32.42 | 32.42 | 32.47 | 32.47 |
| R@20 | Exact Search | - | - | - | - | - | - | - | - | - | 36.0 |
| | ScaNN | 28.54 | 34.45 | 35.34 | 35.67 | 35.7 | 36.0 | 35.88 | 35.95 | 36.0 | 36.0 |
| | Faiss-IVF | 32.17 | 35.51 | 35.77 | 35.8 | 35.8 | 35.8 | 35.85 | 36.0 | 36.0 | 36.0 |
| | EHI | 33.47 | 39.01 | 39.36 | 39.45 | 39.68 | 39.68 | 39.68 | 39.68 | 39.73 | 39.73 |
| R@50 | Exact Search | - | - | - | - | - | - | - | - | - | 45.4 |
| | ScaNN | 35.93 | 43.38 | 44.51 | 45.11 | 45.02 | 45.12 | 45.07 | 45.48 | 45.5 | 45.4 |
| | Faiss-IVF | 39.78 | 44.45 | 44.86 | 45.19 | 45.19 | 45.19 | 45.24 | 45.4 | 45.4 | 45.4 |
| | EHI | 41.03 | 49.77 | 50.35 | 50.62 | 50.75 | 50.83 | 50.99 | 50.99 | 51.04 | 51.04 |
| R@100 | Exact Search | - | - | - | - | - | - | - | - | - | 53.78 |
| | ScaNN | 51.0 | 51.9 | 52.33 | 52.69 | 52.63 | 53.39 | 52.49 | 52.97 | 53.64 | 53.78 |
| | Faiss-IVF | 52.75 | 53.27 | 53.65 | 53.65 | 53.65 | 53.63 | 53.78 | 53.78 | 53.78 | 53.78 |
| | EHI | 57.36 | 58.42 | 58.97 | 59.18 | 59.13 | 59.29 | 59.29 | 59.34 | 59.34 | 59.39 |

Table 8: Performance metrics (%) evaluated on the Fiqa dataset across various ranges of visited documents. The best value for each metric, corresponding to a specific number of visited documents, is indicated in **bold** font.

![23_image_0.png](23_image_0.png)

Figure 8: (a), (b): Recall@100 of EHI and baselines on the Scifact and Fiqa datasets. EHI is significantly more accurate than dual encoder + ScaNN or Faiss-IVF baselines, especially when computationally restricted to visiting a small fraction of documents. For example, on the Scifact dataset, at a 10% document visit rate, EHI has 11.7% higher Recall@100 than the baselines.
![23_image_1.png](23_image_1.png)

Figure 9: Additional results for other Recall@k metrics of EHI trained on the Scifact dataset.

## E.3 Statistical Significance Tests

In this section, we aim to show that the improvements achieved by EHI over other baselines, as depicted in Section 4.2, are statistically significant. We confirm with hypothesis tests that all the results on the datasets considered in this work are statistically significant, with p-values less than 0.05 (Table 11).

## E.4 Load Balancing

Load balancing is a crucial objective that we aim to accomplish in this work. By achieving nearly uniform clusters, we can create clusters with finer granularity, ultimately reducing latency in our model. In Figure 10(a), we can observe the distribution of documents across different leaf nodes. It is noteworthy that the expected number of documents per leaf, calculated as $\sum_{i=1}^{\#\text{Leaf}} p_i c_i$, where $p_i$ represents the probability of a document falling in bucket index i and $c_i$ denotes the number of documents in bucket i, yields an optimal value of approximately 1263.12. Currently, our approach attains an expected number of documents per leaf of 1404.65. Despite a slight skew in the distribution, we view this as advantageous, since we can allocate some leaves to store infrequently accessed tail documents from our training dataset. Additionally, our analysis demonstrates that queries are roughly evenly divided, highlighting the successful load balancing achieved, as illustrated in Figure 10.
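A quick numerical check of this statistic is sketched below; with $p_i = c_i / N$, a perfectly uniform split over L leaves yields N / L, which for MS MARCO (8,841,823 documents, 7,000 leaves) is the ~1263.12 optimum quoted above. The leaf counts here are illustrative.

```python
# A minimal sketch of the expected-documents-per-leaf statistic,
# sum_i p_i * c_i with p_i = c_i / N; skewed splits inflate it.
import numpy as np

def expected_docs_per_leaf(counts):
    counts = np.asarray(counts, dtype=np.float64)
    p = counts / counts.sum()         # probability a document lies in each leaf
    return float((p * counts).sum())

n_docs, n_leaves = 8_841_823, 7_000
uniform = np.full(n_leaves, n_docs / n_leaves)
print(expected_docs_per_leaf(uniform))  # ~1263.1, the uniform-split optimum
```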
![24_image_1.png](24_image_1.png) ![24_image_2.png](24_image_2.png) ![24_image_0.png](24_image_0.png)

Figure 10: Leaf distribution of (a) documents and (b) queries of MS MARCO in our trained EHI model. Our model learns splits close to the ideal (uniform) split; this suggests that all the leaves are efficiently used and that our model does not learn very lopsided clusters.

This roughly uniform spread of both queries (Figure 10(b)) and documents (Figure 10(a)) provides a more fine-grained clustering structure which not only takes advantage of all the leaves but also shows state-of-the-art performance compared to baselines, as discussed extensively in Section 4.2.

| Metric | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| nDCG@1 | Exact Search | - | - | - | - | - | - | - | - | - | 22.41 |
| | ScaNN | 22.06 | 22.38 | 22.36 | 22.36 | 22.41 | 22.39 | 22.41 | 22.41 | 22.41 | 22.41 |
| | Faiss-IVF | 22.16 | 22.32 | 22.38 | 22.41 | 22.41 | 22.41 | 22.39 | 22.39 | 22.39 | 22.41 |
| | EHI | 22.39 | 22.48 | 22.46 | 22.46 | 22.46 | 22.46 | 22.46 | 22.46 | 22.46 | 22.46 |
| nDCG@3 | Exact Search | - | - | - | - | - | - | - | - | - | 32.5 |
| | ScaNN | 31.96 | 32.36 | 32.42 | 32.43 | 32.49 | 32.48 | 32.5 | 32.5 | 32.5 | 32.5 |
| | Faiss-IVF | 32.05 | 32.25 | 32.39 | 32.44 | 32.48 | 32.48 | 32.48 | 32.5 | 32.5 | 32.5 |
| | EHI | 32.45 | 32.61 | 32.61 | 32.6 | 32.6 | 32.6 | 32.6 | 32.6 | 32.6 | 32.6 |
| nDCG@5 | Exact Search | - | - | - | - | - | - | - | - | - | 36.03 |
| | ScaNN | 35.38 | 35.82 | 35.91 | 35.94 | 36.01 | 36.01 | 36.03 | 36.02 | 36.03 | 36.03 |
| | Faiss-IVF | 35.56 | 35.75 | 35.89 | 35.96 | 36.0 | 36.0 | 36.02 | 36.04 | 36.04 | 36.03 |
| | EHI | 35.9 | 36.08 | 36.08 | 36.08 | 36.08 | 36.08 | 36.08 | 36.08 | 36.08 | 36.08 |
| nDCG@10 | Exact Search | - | - | - | - | - | - | - | - | - | 39.39 |
| | ScaNN | 38.61 | 39.1 | 39.25 | 39.3 | 39.37 | 39.36 | 39.38 | 39.38 | 39.39 | 39.39 |
| | Faiss-IVF | 38.86 | 39.09 | 39.23 | 39.31 | 39.35 | 39.36 | 39.36 | 39.38 | 39.38 | 39.39 |
| | EHI | 39.19 | 39.42 | 39.42 | 39.42 | 39.43 | 39.43 | 39.43 | 39.43 | 39.43 | 39.43 |
| MRR@10 | Exact Search | - | - | - | - | - | - | - | - | - | 33.77 |
| | ScaNN | 33.16 | 33.58 | 33.67 | 33.7 | 33.76 | 33.75 | 33.77 | 33.77 | 33.77 | 33.77 |
| | Faiss-IVF | 33.34 | 33.54 | 33.66 | 33.72 | 33.75 | 33.75 | 33.74 | 33.76 | 33.76 | 33.77 |
| | EHI | 33.64 | 33.81 | 33.81 | 33.8 | 33.81 | 33.8 | 33.8 | 33.8 | 33.8 | 33.8 |

Table 9: Performance metrics (%) evaluated on the MS Marco dataset across various ranges of visited documents. The best value for each metric, corresponding to a specific number of visited documents, is indicated in **bold** font.
| Metric | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scifact - R@100 | Exact Search | - | - | - | - | - | - | - | - | - | 91.77 |
| | ScaNN | 67.89 | 80.76 | 83.76 | 87.49 | 89.82 | 90.6 | 91.6 | 91.27 | 91.27 | 91.77 |
| | Faiss-IVF | 71.81 | 79.42 | 83.93 | 86.93 | 89.6 | 91.1 | 91.77 | 91.77 | 91.77 | 91.77 |
| | EHI (n = 5) | 80.68 ± 1.51 | 87.05 ± 1.92 | 90.06 ± 0.97 | 91.75 ± 0.89 | 93.03 ± 0.77 | 93.26 ± 0.72 | 93.34 ± 0.73 | 93.44 ± 0.62 | 93.51 ± 0.62 | 93.51 ± 0.62 |
| Fiqa - R@100 | Exact Search | - | - | - | - | - | - | - | - | - | 53.78 |
| | ScaNN | 51.0 | 51.9 | 52.33 | 52.69 | 52.63 | 53.39 | 52.49 | 52.97 | 53.64 | 53.78 |
| | Faiss-IVF | 52.75 | 53.27 | 53.65 | 53.65 | 53.65 | 53.63 | 53.78 | 53.78 | 53.78 | 53.78 |
| | EHI (n = 5) | 57.23 ± 0.42 | 57.74 ± 0.27 | 57.91 ± 0.27 | 57.98 ± 0.26 | 58.07 ± 0.25 | 58.08 ± 0.24 | 58.11 ± 0.21 | 58.11 ± 0.21 | 58.1 ± 0.22 | 58.1 ± 0.22 |
| MS Marco - nDCG@10 | Exact Search | - | - | - | - | - | - | - | - | - | 39.39 |
| | ScaNN | 38.61 | 39.1 | 39.25 | 39.3 | 39.37 | 39.36 | 39.38 | 39.38 | 39.39 | 39.39 |
| | Faiss-IVF | 38.86 | 39.09 | 39.23 | 39.31 | 39.35 | 39.36 | 39.36 | 39.38 | 39.38 | 39.39 |
| | EHI (n = 5) | 39.17 ± 0.04 | 39.34 ± 0.03 | 39.39 ± 0.02 | 39.41 ± 0.01 | 39.41 ± 0.01 | 39.42 ± 0.01 | 39.42 ± 0.01 | 39.42 ± 0.01 | 39.42 ± 0.01 | 39.42 ± 0.01 |
| MS Marco - MRR@10 | Exact Search | - | - | - | - | - | - | - | - | - | 33.77 |
| | ScaNN | 33.16 | 33.58 | 33.67 | 33.7 | 33.76 | 33.75 | 33.77 | 33.77 | 33.77 | 33.77 |
| | Faiss-IVF | 33.34 | 33.54 | 33.66 | 33.72 | 33.75 | 33.75 | 33.74 | 33.76 | 33.76 | 33.77 |
| | EHI (n = 5) | 33.62 ± 0.03 | 33.74 ± 0.03 | 33.77 ± 0.02 | 33.78 ± 0.01 | 33.78 ± 0.01 | 33.79 ± 0.01 | 33.79 ± 0.01 | 33.79 ± 0.01 | 33.79 ± 0.01 | 33.79 ± 0.01 |

Table 10: Performance metrics (%) evaluated on the various datasets across various ranges of visited documents. The best value for each metric, corresponding to a specific number of visited documents, is indicated in **bold** font. Here, n denotes the number of seeds run for the given model. We report the mean and standard deviation of the metrics for EHI, showcasing robustness to initialization.

| Dataset | Scifact (R@100) | Fiqa (R@100) | MSMARCO (nDCG@10) | MSMARCO (MRR@10) |
|---|---|---|---|---|
| p-value | 0.0016 | 7.97E-07 | 0.0036 | 0.0018 |

Table 11: Statistical significance p-test.

## E.5 Learning To Balance Load Over Epochs

In the previous section, we showed that our learned clusters are almost uniformly distributed and help improve latency, as shown in Section 4.2. In this section, we study the progression of the expected documents per leaf, as defined above, over the course of training on the Scifact dataset. Figure 11 demonstrates the decline of the expected documents per leaf over the training regime.

![26_image_0.png](26_image_0.png)

Figure 11: Load balancing of documents over the training regime of EHI. EHI's learning objective encourages well-balanced leaf nodes as well as state-of-the-art accuracy.

## E.6 Negative Mining

In this section, we discuss additional details on the exact working of our negative mining approach. The idea is that, given a positive document (d+), we would like to sample other documents which reached the same bucket as d+ but are not considered relevant to the query (q) per the ground-truth data. Note that dynamic negative mining is something EHI achieves almost for free, as the built indexer is also used for the ANNS. Prior approaches, including Monath et al. (2023); Dahiya et al. (2023); Hofstätter et al. (2021), do study such dynamic negative sampling approaches as well. However, the difference lies in the fact that EHI does not need to build a new index for clustering every few steps, and the hierarchical tree index learned by EHI is used for the downstream ANNS task. This contrasts with works such as Monath et al. (2023); Dahiya et al. (2023); Hofstätter et al. (2021), where the index is used during training and hard-negative mining but is finally discarded. We leave this as a hyperparameter for the practitioner. Note that we fine-tune a distilBERT model which was trained on the MS Marco dataset with mined hard negatives. Thus, the addition of this negative mining feature in our experiments does not lead to any significant drops, as further shown in Table 12, and leads to efficient re-use of the same indexer learned during inference.

| Dataset | Metric | EHI's indexer negative mining | ANCE |
|---|---|---|---|
| Scifact | R@100 | 95.27 | 95.27 |
| MS Marco (distilbert-cos) | MRR@10 | 33.8 | 33.79 |
| MS Marco (distilbert-cos) | nDCG@10 | 39.43 | 39.4 |

Table 12: Performance metrics (%) evaluated on the various datasets, showcasing the effect of negative mining when fine-tuned from a distilbert-cos model trained on the MS Marco benchmark.
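A minimal sketch of this in-bucket negative mining is given below, assuming hypothetical helpers `leaf_of(...)` and `doc_ids_in(...)` that expose the learned indexer's leaf assignments; this is an illustration of the idea, not the authors' actual implementation.

```python
# In-bucket hard-negative mining: sample a document from the same leaf as the
# positive document d+ that is not relevant to the query per ground truth.
import random

def mine_hard_negative(positive_doc, relevant_ids, leaf_of, doc_ids_in):
    leaf = leaf_of(positive_doc)               # bucket of the positive document
    candidates = [d for d in doc_ids_in(leaf)
                  if d not in relevant_ids]     # same bucket, but not relevant
    return random.choice(candidates) if candidates else None
```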
## E.7 Comparisons Against Elias

In this section, we compare against other end-to-end training paradigms such as ELIAS (Gupta et al., 2022), which requires warm-starting a kNN index. It should be noted that ELIAS and BLISS are tailored for the domain of extreme classification, while we focus on the dense information retrieval domain (note that label classifiers and pre-computing a kNN structure for datasets the size of MS MARCO are indeed a significant overhead). Regardless, we report our findings on the LF-AmazonTitles-131K (XC) dataset below (see Table 13). Note that the comparison below is still not entirely fair, as ELIAS utilizes an additional million auxiliary parameters in its label classifiers, while EHI does not. Furthermore, EHI uses label features for encoding, while ELIAS does not. Although EHI outperforms ELIAS by 4.16% in P@1, it must be recognized that comparing against extreme-classification-based methods is inherently uneven.

| LF-AmazonTitles-131K | P@1 | P@3 | P@5 |
|---|---|---|---|
| ELIAS | 40.13 | 27.11 | 19.54 |
| EHI | 41.8 | 28.64 | 20.79 |

Table 13: Performance metrics (%) evaluated on the LF-AmazonTitles-131K dataset, showcasing the efficacy of EHI in comparison to ELIAS.

## E.8 Allowing Documents To Have Multiple Semantics

Note that throughout training, our model works with a beam size for both queries and documents and tries to send relevant documents and queries to the same leaves. During testing, we have so far only presented results where the beam size is used to send queries to multiple leaves, limiting the number of leaves per document to 1. We could instead treat the number of leaves per document as a hyperparameter and allow documents to reach multiple leaves, as long as our computational budget allows.

Accuracy: In this section, we extend our results by increasing this factor on the MS Marco dataset, as presented in Figure 12. Intuitively, each document could have multiple views, and simply sending it to only one leaf, while efficient for indexing, might not be ideal.

![27_image_0.png](27_image_0.png) ![27_image_1.png](27_image_1.png)

Figure 12: Ablation study of documents-to-leaves on the MS Marco benchmark. Overall, we notice that as long as computational power is available, increasing the number of leaves indexed per document aids retrieval performance.
Furthermore, increasing this hyperparameter from the default d2l = 1 setting shows strictly superior performance and improves our metrics on both MRR@10 and nDCG@10.

Advantages of Multiple Leaves per Document: While assigning a document to multiple leaf nodes incurs additional memory costs, we uncover intriguing semantic properties from this approach. For this analysis, we assign multiple leaves to a document (Doc), ranked in decreasing order of similarity, as shown in Figure 13. In Figure 13, we showcase three distinct examples from diverse domains, showing not only the semantic correlation of the documents which reach similar leaves but also of the queries which reach the same leaf as our example. It can be seen that each leaf captures slightly different semantics of the document. On further observation, queries mapped to the same leaf also share semantic similarity with the Doc. Note that semantic similarity decreases as the document's relevance to a leaf decreases (left to right). This shows that EHI's training objective (Parts 2 and 3) ensures that queries with semantic meaning similar to a document, as well as similar documents, are clustered together.

![28_image_0.png](28_image_0.png)

Figure 13: We deploy the concept of a word cloud to summarize the semantics of all queries and documents in a leaf. Every document and query can hold multiple semantics; therefore, for this analysis, each document (Doc) was assigned to multiple leaves in decreasing order of similarity. It can be seen that each leaf captures slightly different semantics of the document. On further observation, queries that were mapped to the same leaf also share semantic similarity with the Doc. Note that semantic similarity decreases as the relevance of the document to a leaf decreases (left to right). This shows that EHI's training objective (Parts 2 and 3) ensures that queries with similar semantic meaning to a document, as well as similar documents, are clustered together.

![29_image_0.png](29_image_0.png)

Figure 14: EHI's learning objective encourages leaf nodes to have different semantics. Here, the semantic information in each leaf is shown in the form of a word cloud. Seemingly similar leaves such as (1,1), (5,5) or (8,2) share similar words like "year"; however, they capture completely different meanings, such as financial, criminal, and sports, respectively.

![30_image_0.png](30_image_0.png)

Figure 15: The EHI indexer is a simple tree structure in which, at each height, a neural network decides which path a given query or document should take. Following the same paradigm as Figure 2, V98 denotes the dense representation (embedding) of the text of the query or document.

Diverse Concepts Across Leaf Nodes: Our analysis, as showcased in Figure 14, reveals a wide range of diverse concepts captured by the word clouds derived from the leaf nodes of EHI. For Figure 14, we visualize only a randomly sampled subset of leaves (54), rather than all leaf nodes, for clarity. Below, we highlight some intriguing findings from our analysis. These concepts span various domains such as phones, plants, healthcare, judiciary, and more. Moreover, within each leaf node, we observe a cohesive theme among the words. For instance, the word cloud at coordinates (1, 1) reflects an international finance theme, while the word cloud at (1, 2) pertains to fitness. This observation highlights the importance of grouping similar documents into the same leaf nodes and excluding irrelevant documents using our objective function. Note that seemingly similar leaves such as (1, 1), (5, 5), or (8, 2) share similar words like "year"; however, they capture completely different meanings such as financial, criminal, and sports, respectively.
## F Additional Information Of EHI's Indexer

In this section, we shed more light on the indexing process of EHI at both training and inference time. We expand upon Figure 2 and discuss only the indexer part, the tree segment of the image, in Figure 15. For a given query/document, the root node uses the embedding information (V98 in this case) alone, and we use a linear layer (U1) to transform the embeddings received from the encoder into another 768-dimensional embedding. Having these affine transformations at each height is essential, as we would like to focus on different aspects of the same query/document at each height when deciding which child node the item should be routed to. This is followed by another linear layer (W1) to predict logits for the children of the node (2 in this case). This, followed by a softmax, yields the probability distribution deciding which node to go towards: p¹ = [P1, P2]. The logic at the following height follows roughly the same process as the root node, with one significant difference: the affine transformation at this height takes both the embedding input (V98) and the one-hot version of the probabilities at the previous height to make its predictions. During training, we place a 1 at the index with the highest probability and 0 otherwise (assume P1 > P2, hence the one-hot version is [1, 0]). Once the conditional probabilities at the next height are computed, we multiply them by P1 to obtain the actual probabilities of reaching nodes 3 and 4 (these numberings follow the same logic as used in Figure 2). The final path embedding is the concatenation of these probability distributions, leading to [P1, P2, P1·P3, P1·P4], as depicted in Figure 15. During inference, the indexer follows a similar setup where, instead of doing only one forward pass with the highest probability, we do b forward passes at each height with the top-b highest probabilities. A minimal sketch of this forward pass is given below; the full training and indexing procedures are given in Algorithms 1 and 2 that follow.
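The following PyTorch sketch is an illustrative reconstruction of the forward pass described above, not the released implementation. It assumes no nonlinearity between U_h and W_h (following the text) and greedy one-hot routing during training; all class and variable names are hypothetical.

```python
# A minimal sketch of EHI's tree indexer: at height h, an affine map U_h over
# [one-hot path history; embedding], a linear head W_h producing B logits, a
# softmax, and running-product path probabilities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeIndexer(nn.Module):
    def __init__(self, dim=768, branch=2, height=2):
        super().__init__()
        self.B, self.H = branch, height
        # U_h consumes the embedding plus one-hot choices of all previous heights.
        self.U = nn.ModuleList(nn.Linear(dim + h * branch, dim) for h in range(height))
        self.W = nn.ModuleList(nn.Linear(dim, branch) for _ in range(height))

    def forward(self, u):
        """u: (batch, dim) encoder embedding -> concatenated path embedding."""
        batch = u.size(0)
        prefix = u.new_zeros(batch, 0)      # one-hot history, grows by B per height
        parent_prob = u.new_ones(batch, 1)  # probability of the chosen path so far
        path_embedding = []
        for h in range(self.H):
            logits = self.W[h](self.U[h](torch.cat([prefix, u], dim=-1)))
            p = F.softmax(logits, dim=-1)          # conditional probs at height h
            path_embedding.append(parent_prob * p)  # absolute probs, e.g. [P1P3, P1P4]
            choice = p.argmax(dim=-1)               # greedy branch (beam size 1)
            parent_prob = parent_prob * p.gather(-1, choice[:, None])
            prefix = torch.cat([F.one_hot(choice, self.B).float(), prefix], dim=-1)
        return torch.cat(path_embedding, dim=-1)    # e.g. [P1, P2, P1P3, P1P4]

indexer = TreeIndexer()
print(indexer(torch.randn(4, 768)).shape)  # torch.Size([4, 4]) for B=2, H=2
```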
Algorithm 1 Training step for EHI. u(q) & v(d) denote the query (q) & document (d) representations from the encoder Eθ. Similarly, the path embeddings from the indexer (Iφ) are denoted by T(q) & T(d). Please refer to Algorithm 2 for the definitions of TOPK-INDEXER and INDEXER. Note that the Update(.) function updates the encoder and indexer parameters through back-propagation of the loss, using AdamW (Loshchilov & Hutter, 2017) in our case, although other optimizers (e.g. SGD, Adam) could also be used.

```
Require: φ ← indexer parameters, θ ← encoder parameters, τ ← similarity threshold,
         B ← indexer branching factor, H ← indexer height, β ← beam size,
         M ← mapping of leaf to documents, Q̂ = {q1, q2, ..., qs} ~ Q (randomly sampled mini-batch),
         INDEXER ← returns path embedding, TOPK-INDEXER ← returns most relevant leaves

procedure RETRIEVER(u, φ, β)                        ▷ Retrieves the most relevant documents
    _, _, Leafs = TOPK-INDEXER(u, φ, B, H, β)       ▷ Refer to Algorithm 2
    return ∪_{l ∈ Leafs} M(l)
end procedure

procedure TRAINING(Q̂)                               ▷ Jointly update encoder/indexer parameters θ, φ
    loss = 0
    for qi ∈ Q̂ do
        di ~ {k | yik = 1}                                   ▷ Sample a relevant document
        hi ~ RETRIEVER(Eθ(qi), φ, β) \ {k | yik = 1}         ▷ Sample a hard negative from the retriever
        loss += L(Eθ(qi), Eθ(di), Eθ(hi))                    ▷ Triplet loss against the hard negative
        for qj ∈ Q̂, qi ≠ qj do
            dj ~ {k | yik = −1 ∩ yjk = 1}                    ▷ Sample an in-batch negative
            loss += L(Eθ(qi), Eθ(di), Eθ(dj))                ▷ Uses θ to encode documents and query
            loss += L(Tφ(qi), Tφ(di), Tφ(dj))                ▷ Uses INDEXER in Algorithm 2 for Tφ
            loss += 1{SIM(v(di), v(dj)) < τ} · L(Tφ(di), Tφ(di), Tφ(dj))
        end for
    end for
    return Update(θ, φ, loss)   ▷ Update encoder and indexer parameters with the gradient of the loss
end procedure
```

Algorithm 2 Indexer subroutines for EHI: INDEXER computes the path embedding Tφ for a query/document, and TOPK-INDEXER indexes a query/document into its β most relevant leaves.

```
Require: φ = {W_H, ..., W_1, U_H, ..., U_1} ← indexer parameters, θ ← encoder parameters,
         τ ← inter-document similarity threshold, B ← branching factor of the indexer,
         H ← height of the indexer, β ← beam size, M ← mapping of leaf to documents,
         Q̂ = {q1, q2, ..., qs} ~ Q (randomly sampled mini-batch)

procedure INDEXER(u, φ, B, H)                       ▷ Computes the indexer path embedding
    l = TOPK-INDEXER(u, φ, B, H, β = 1)
    Tφ = []
    pl = 1
    for h ∈ 1 ... H do
        p̂ = S([o(i_l^{h−1}); o(i_l^{h−2}); ...; o(i_l^1); u]; W_h, U_h)
        Tφ = [Tφ; p̂ · pl]
        pl = pl · p̂[i_l^h]
    end for
    return Tφ
end procedure

procedure TOPK-INDEXER(u, φ, B, H, β)               ▷ Indexes into the β most relevant leaves
    h ← 1, P = {<1, u, 0>}                           ▷ Tuples of (score, path, node ID)
    while h ≤ H do
        P̂ = Max-Heap(size = β)
        for (s, p, n) ∈ P.pop() do
            p̂ = S(p; W_h, U_h) × s
            for all i ∈ 1 ... B: P̂.push(<p̂[i], [o(i); p], n · B + i>)
        end for
        P = P̂; h ← h + 1
    end while
    return P
end procedure
```
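To make TOPK-INDEXER concrete, here is a plain-Python beam-search sketch over the tree; `branch_probs(node, u)` is a hypothetical stand-in for the per-node softmax S(·; W_h, U_h), and the node numbering follows the n·B+i convention of Algorithm 2.

```python
# Beam search down the indexer tree, keeping the beta highest-probability paths.
import heapq

def topk_indexer(u, branch_probs, B, H, beta):
    beams = [(1.0, 0)]                        # (path probability, node id), root = 0
    for _ in range(H):
        expanded = []
        for score, node in beams:
            probs = branch_probs(node, u)     # length-B conditional distribution
            for i in range(B):
                expanded.append((score * probs[i], node * B + i + 1))
        beams = heapq.nlargest(beta, expanded, key=lambda t: t[0])
    return beams                              # the beta most probable leaves

# Toy run: a fixed 70/30 split at every node, B=2, H=3, beam width 2.
print(topk_indexer(None, lambda n, u: [0.7, 0.3], B=2, H=3, beta=2))
```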
Review 1:
Summary: The paper presents a hierarchical indexing algorithm for document retrieval that trains the embedding and indexing components simultaneously. Empirical evaluations show the merit of the individual components of the architecture. Moreover, the proposed method is shown to outperform several baselines on standard benchmarks.
Strengths and Weaknesses: Several embedding ideas seem useful and interesting, and the combined method does appear to have strong empirical performance. The main weakness of the paper is poor presentation. Many components are not formally defined. The description of the encoder (BERT) is dismissed with a simple citation, without any further details on architecture and training. The motivations for the design are very superficial, e.g., the path is computationally intractable (why would a simple path be intractable?). The training and updating are treated very superficially (e.g. by having a call to Update() in Algorithm 2). While the experiments do appear to show improvement, there is no indication of statistical significance.
Requested Changes: The description and motivation of the algorithm should be considerably improved. There should be a detailed description of the experimental set-up, and a clear indication of the significance of the results. The current presentation of the experiments might be sufficient for a conference paper that highlights the promise of an algorithm, but it is insufficient for TMLR, which has a strong emphasis on the trustworthiness of the presented claims.
Broader Impact Concerns: none

==================================================

Review 2:
Summary: This paper introduces a novel method for dense retrieval that integrates embedding generation and approximate nearest neighbor search (ANNS) into a single, end-to-end learning framework. The proposed End-to-end Hierarchical Indexing (EHI) addresses the misalignment issues found in conventional two-stage approaches by jointly optimizing the embedding encoder and the search index structure. The authors highlight the method's capability to learn balanced and uniform clusters without needing proxy information or additional clustering algorithms. Extensive empirical evaluations demonstrate EHI's superiority over state-of-the-art techniques, showing significant improvements in retrieval accuracy on benchmarks.
Strengths and Weaknesses:
## Strengths
This paper is well-written, making it easy to understand the proposed method. The motivation for the proposed approach is clear, and the method of jointly learning the encoder and ANNS is highly intriguing. The novelty of the proposed method is emphasized through a wide range of experimental results. The paper holds significant value by offering a new paradigm in information retrieval.
## Weaknesses
- The introduction of a new paradigm is highly commendable. However, several questions remain unanswered upon reading this paper:
  - Is this method applicable to "real" scenarios at a billion-scale? (While the cost of extracting dense features may be comparable to other methods, I am curious about the costs incurred during the retrieval process.)
  - If applicable, what are the computational costs?
  - When adding multiple new search documents, what procedures must be followed with the trained indexer?
- A simple question remains: What are the costs involved in hard-negative mining?
- I am curious about the reason for using distilBERT pretrained on MS MARCO.
According to the main paper and the appendix, the main contribution is the introduction of a new paradigm, but the authors would also agree that performance is crucial. Therefore, for a fair comparison, it should be clear whether the other compared models also used distilBERT. According to Table 4 in the appendix, only the parameters of the encoder are presented, without a detailed description of the pretrained model used.
Requested Changes: It would be beneficial if the paper included content that addresses my opinions on its weaknesses. If such content is already included, please indicate where it can be found.
Broader Impact Concerns: There are no broader impact concerns with this work.

==================================================

Review 3:
Summary: This work proposes an end-to-end hierarchical indexing (EHI) method, which overcomes the limitations of previous approximate nearest neighbor search (ANNS) methods that require separate two-stage processes for semantic search tasks. The single-stage approach of the EHI method has the advantage of reducing potential mismatches between the embedding space and the requirements of the ANNS structure. It also allows for consideration of specific characteristics of the query distribution. EHI employs a tree-structured inverted index without any initialization of cluster points, utilizing significantly compressed path embeddings and a parameterized indexer. EHI is a model-agnostic method that can be integrated into various models with different encoder architectures or training schemes. Experiments on various benchmarks show that EHI achieves state-of-the-art results.
Strengths and Weaknesses:
Pros) This paper presents the first end-to-end approach for training discrete-indices-based ANNS models for text query-document retrieval. The novel tree-based architecture efficiently overcomes the issues of previous two-stage approaches. Sufficient experiments are conducted to prove the advantages of the proposals. Detailed discussions on ablation results and method components are provided.
Cons) In general, end-to-end approaches for ANNS already exist in visual domains, as mentioned in references [1-2]. It would be beneficial to clarify that this work represents the first approach specifically for the text domain.

[1] Yu, Tan, et al. "Product quantization network for fast image retrieval." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
[2] Jang, Young Kyun, and Nam Ik Cho. "Generalized product quantization network for semi-supervised image retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

Requested Changes: An explanation of what DE+ANNS stands for is needed in section 1.2. In Figure 1, it would be better to change "e2e" to "end-to-end" for clarity. Overall, the paper is well written and straightforward.
Broader Impact Concerns: I suppose there is nothing special for this paper.

==================================================

Metareview:
Recommendation: Accept with minor revision
Comment: This paper has mixed but mostly positive opinions.
The main concerns include:
- The end-to-end learning framework is not first proposed by this work (Reviewer Nywg)
  - This concern is addressed by revising Section 2 and focusing on "dense text retrieval"
- Comparison with SOTA -- because it uses distilBert (Reviewer y44s)
  - Table 4 was revised to include the backbone details to resolve the concern
- The presentation could be improved
  - The reason why the path is computationally intractable is now mentioned in Section 3.3.
  - In my opinion, the presentation quality can still be improved, but the current version looks okay. If the authors can update the manuscript, I slightly recommend revising the paper to make the motivation and experimental details more visible to novice readers. See my comment below.
- The lack of statistical significance for the experiments
  - The authors refuted this argument by citing Table 11
  - In my opinion, the current empirical evidence is somewhat okay, but the visibility of the results can be improved. See my comment below.

There were some additional comments, such as:
- Practicality of the proposed method on billion-scale DBs (Reviewer y44s)
  - As far as I understood, this discussion is not explicitly included in the revised paper. If my understanding is correct, I strongly recommend adding the related content in the revision.

In my opinion, considering that BERT is now a very famous architecture whose details can be omitted, the current manuscript has a weak problem with reproducibility (reproducibility can still be a problem because this submission does not include any implementation, but it is acceptable, IMO). Overall, I think this paper can satisfy the TMLR evaluation criteria with a minor revision. I request a minor revision by moving major contents from the Appendix to the main pages. Please note that there is no page limitation for TMLR papers; the page limitation is only used to set the initial review period for the reviewers (2 weeks for less than 12 pages, 4 weeks for more than 12 pages). In my opinion, the appendix should be an "appendix": namely, the TMLR criteria (claim and evidence) should be supported even when the appendix's contents are ignored. For example, dataset explanations or model hyperparameter details can be in the Appendix, but experiments that support the main claim should be in the main paper (unless an experiment duplicates the message of a previous experiment in the main paper). I strongly recommend moving Appendix B (Motivation for EHI) and many important experiments in Appendix D and E to the main paper. After moving the contents to the main paper, I think this paper will need a polishing stage (e.g., enhancing the clarity of the manuscript), which would be a minor modification.

==================================================
# Interpreting Global Perturbation Robustness Of Image Models Using Axiomatic Spectral Importance Decomposition

Róisín Luo (Jiaolin Luo) *j.luo2@universityofgalway.ie*
James McDermott *james.mcdermott@universityofgalway.ie*
Colm O'Riordan *colm.oriordan@universityofgalway.ie*
SFI Centre for Research Training in Artificial Intelligence
School of Computer Science, University of Galway
Galway, H91 TK33, Ireland

Reviewed on OpenReview: *https://openreview.net/forum?id=uQYomAuo7M*

## Abstract

Perturbation robustness evaluates the vulnerabilities of models, arising from a variety of perturbations, such as data corruptions and adversarial attacks. Understanding the mechanisms of perturbation robustness is critical for global interpretability. We present a model-agnostic, global mechanistic interpretability method to interpret the perturbation robustness of image models. This research is motivated by two key aspects. First, previous global interpretability works, in tandem with robustness benchmarks, *e.g.* mean corruption error (mCE), are not designed to directly interpret the mechanisms of perturbation robustness within image models. Second, we notice that the spectral signal-to-noise ratios (SNR) of perturbed natural images exponentially decay over the frequency. This power-law-like decay implies that: Low-frequency signals are generally more robust than high-frequency signals - yet high classification accuracy can not be achieved by low-frequency signals alone. By applying Shapley value theory, our method axiomatically quantifies the predictive powers of robust features and non-robust features within an information theory framework. Our method, dubbed **I-ASIDE** (Image Axiomatic Spectral Importance Decomposition Explanation), provides a unique insight into model robustness mechanisms. We conduct extensive experiments over a variety of vision models pre-trained on ImageNet, including both convolutional neural networks (e.g. AlexNet, VGG, GoogLeNet/Inception-v1, Inception-v3, ResNet, SqueezeNet, RegNet, MnasNet, MobileNet, EfficientNet, etc.) and vision transformers (e.g. ViT, Swin Transformer, and MaxViT), to show that **I-ASIDE** can not only **measure** the perturbation robustness but also **provide interpretations** of its mechanisms.

## 1 Introduction

Image modeling with deep neural networks has achieved great success (Li et al., 2021; Khan et al., 2022; Han et al., 2022). Yet, deep neural networks are known to be vulnerable to perturbations, arising, for example, from corruptions and adversarial attacks (Goodfellow et al., 2014; Hendrycks & Dietterich, 2019; Szegedy et al., 2013). Perturbation robustness, henceforth referred to as robustness, characterizes a crucial intrinsic property of models (Hendrycks & Dietterich, 2019; Bai et al., 2021; Goodfellow et al., 2014; Silva & Najafirad, 2020). Robustness mechanisms refer to the mechanisms which lead to robustness in models. The study of robustness mechanisms aims to answer the question '**why some models are more robust than others**' (Lipton, 2018; Zhang et al., 2021; Bereska & Gavves, 2024). The causes within this question can arise from multifarious
aspects, including architecture designs (Zhou et al., 2022), data annotations, training methods (Pang et al., 2020), inference, etc. For example, noisy labels can arise from data annotation (Frénay & Verleysen, 2013; Wei et al., 2021); adversarial attacks often take place at inference time. This research provides a unified view of the mechanisms of image model robustness on spectra. We focus only on the global interpretation of the robustness mechanisms of image models. Our method is primarily motivated by two key aspects, as detailed below.

![1_image_0.png](1_image_0.png)

Figure 1: Power-law-like energy spectral density (ESD) distribution of natural images over the frequency. The 2D spectrum is mapped into a 1D radial spectrum and further divided into M bands, denoted $I_0$ to $I_{M-1}$. Each spectral band is a robustness band.

Despite the substantial advances in previous global interpretability works (Covert et al., 2020; Kolek et al., 2022), in tandem with the wide adoption of empirical robustness benchmarks (Hendrycks & Dietterich, 2019; Zheng et al., 2016; Zhang et al., 2021), these methods are not designed to provide global interpretations regarding model robustness. For example, SAGE (Covert et al., 2020), a global interpretability work, can attribute the decisive pixel features in decision-making. Yet, the importance of decisive pixel features fails to interpret global robustness. Although the robustness of image models can be quantified by mean corruption errors (mCE) (Hendrycks & Dietterich, 2019) or by distances in feature spaces between clean and perturbed image pairs (Zheng et al., 2016; Zhang et al., 2021), these scalar metrics often fall short of interpreting the underlying 'why' question. This gap prompts us to provide global mechanistic interpretations of perturbation robustness.

Motivation also arises from the robustness characterization of the spectral signals within natural images. Images can be represented as spectral signals (Körner, 2022; Sherman & Butler, 2007). The signal-to-noise ratios (SNR) (Sherman & Butler, 2007) of spectral signals can be used to characterize signal robustness with respect to perturbations. Our later empirical study, as illustrated in Figure 2, suggests that the spectral SNRs of perturbed natural images decay over the frequency with a power-law-like distribution. We refer to the *spectral SNR* as the ratio of the point-wise energy spectral density (ESD) (Stoica et al., 2005) of the spectral signal to that of the noise (*i.e.* the perturbation):

$$SNR(r) := \frac{ESD_r(x)}{ESD_r(\Delta x)} \tag{1}$$

where r is a radial frequency point, x is an image, $\Delta x$ is the perturbation, and $ESD_r(\cdot)$ gives the point-wise energy density at r. The $ESD_r(\cdot)$ is defined as:

$$ESD_r(x) := \frac{1}{|L_r|} \cdot \sum_{(u,v) \in L_r} |\mathcal{F}(x)(u,v)|^2 \tag{2}$$

where $L_r$ denotes a circle as illustrated in Figure 1, $|L_r|$ denotes the circumference of $L_r$, and $\mathcal{F}(x)$ denotes the 2D Discrete Fourier Transform (DFT) of x. Readers can refer to further details in Appendix B.1.
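A minimal NumPy sketch of Eqs. (1) and (2) for a grayscale image is given below; binning pixels by radius serves as an illustrative discretization of the circle $L_r$, and the inputs are stand-ins for a natural image and its perturbation.

```python
# Radial ESD (Eq. 2) and spectral SNR (Eq. 1) for a grayscale image x and a
# perturbation dx of the same shape; radial binning approximates the rings L_r.
import numpy as np

def radial_esd(x, n_bins):
    power = np.abs(np.fft.fftshift(np.fft.fft2(x))) ** 2
    h, w = x.shape
    yy, xx = np.indices((h, w))
    r_norm = np.hypot(yy - h / 2, xx - w / 2)
    r_norm = r_norm / r_norm.max()                 # radial frequency in [0, 1]
    bins = np.minimum((r_norm * n_bins).astype(int), n_bins - 1)
    # Mean point-wise energy on each ring, i.e. (1/|L_r|) * sum over L_r.
    return np.array([power[bins == i].mean() for i in range(n_bins)])

def spectral_snr_db(x, dx, n_bins=32):
    snr = radial_esd(x, n_bins) / radial_esd(dx, n_bins)
    return 10.0 * np.log10(snr)                    # in decibels, as in Figure 2

rng = np.random.default_rng(0)
x = rng.random((64, 64))                           # stand-in for a natural image
dx = 0.1 * rng.standard_normal((64, 64))           # white noise: flat ESD
print(spectral_snr_db(x, dx)[:4])                  # SNR per radial band, in dB
```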
Why do the spectral SNRs of some corruptions and adversarial attacks exhibit a power-law-like decay over the frequency? We surmise that this is because the ESDs of many perturbations are often not power-law-like, while the ESDs of natural images are empirically power-law-like, as shown in Figure 1. For example, a spatial perturbation drawn from $\mathcal{N}(0, \sigma^2)$ (*i.e.* white noise) has a constant ESD: $ESD_r(\Delta x) = \sigma^2$. In Figure 2, we characterize the spectral SNR distributions of perturbed images. We set the energy of the perturbations to 10% of the energy of the clean image for a fair comparison. The perturbation sources include corruptions (Hendrycks & Dietterich, 2019) and adversarial attacks (Szegedy et al., 2013; Tsipras et al., 2018). We notice that the SNR distributions are also power-law-like over the frequency.

![3_image_0.png](3_image_0.png)

Figure 2: Spectral SNR characterization with multiple corruptions and adversarial attacks. The corruptions include white noise, Poisson noise, salt-and-pepper noise, and Gaussian blur. The adversarial attacks include FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2017), SparseFool (Modas et al., 2019) and Pixel (Pomponi et al., 2022). We set the perturbation energy to 10% of the energy of the reference image. The results are shown in decibels (dB) for better visualization. A dB value below zero indicates that the perturbations overwhelm the spectral features.

We refer to spectral signals as *spectral features*, or simply as *features* where unambiguous. This power-law-like SNR decay suggests an empirical feature robustness prior: **Low-frequency features are robust features (RFs), while high-frequency features are non-robust features (NRFs)**. The experiments in Figure 3 demonstrate that models trained with low-frequency signals exhibit higher robustness compared to models trained with high-frequency signals. This also echoes our earlier claim that "low-frequency signals are generally more robust than high-frequency signals - yet high classification accuracy can not be achieved by low-frequency signals alone".

Contributions. By applying the Shapley value theory framework (Shapley, 1997; Roth, 1988; Hart, 1989), we are able to assess the predictive powers (Zheng & Agresti, 2000) of RFs and NRFs. Further leveraging a specifically designed characteristic function within the Shapley value theory framework, the predictive powers are assessed on the basis of their information gains. **I-ASIDE** uses information theory within the Shapley value theory framework to interpret robustness mechanisms, as detailed in Section 3. We claim our major contributions as:

- We propose a model-agnostic, global interpretability method for interpreting the robustness mechanisms of image models, through the lens of the predictive powers of robust features and non-robust features;
- We analyze the robustness mechanisms of image models within information theory on spectra;
- We showcase a case study in which **I-ASIDE** has the potential to interpret how supervision noise levels affect model robustness.

## 2 Notations

Image classifier. The primary task of an image classifier is to predict the probability distributions over discrete classes for given images. We use $Q(y|x; \theta): (x, y) \mapsto [0, 1]$ to denote a classifier in the form of a conditional probability. The Q predicts the probability that an image x is of class y; the θ are the parameters. For brevity, we omit the parameter θ: for example, we denote $Q(y|x; \theta)$ as $Q(y|x)$.

Dataset and annotation. We use a tuple $\langle \mathcal{X}, \mathcal{Y} \rangle$ to denote an image classification dataset, where $\mathcal{X}$ is the image set and $\mathcal{Y}$ is the label set. We use $|\mathcal{Y}|$ to denote the number of classes (*i.e.* the cardinality of the set $\mathcal{Y}$). The annotation task of image classification datasets is to assign each image a discrete class probability distribution. We use $P(y|x)$ to denote the ground-truth probability that an image x is assigned the class y. We use $P(x)$ to denote the probability of x in the set $\mathcal{X}$, and $P(y)$ to denote the probability of y in the set $\mathcal{Y}$. In class-balanced datasets, $P(y) = \frac{1}{|\mathcal{Y}|}$.

## 3 Method

High-level overview. We apply Shapley value theory to axiomatically assign credits to spectral bands. Within this framework, the specially devised characteristic function measures the information gains of spectral bands. **I-ASIDE** interprets robustness mechanisms using this axiomatic framework together with information theory.

Problem formulation. Quantifying the predictive powers of features can be viewed as a *value* decomposition problem. In this research, the *value* is the information quantity that the features contribute to decisions.
![3_image_0.png](3_image_0.png)

Figure 2: Spectral SNR characterization with multiple corruptions and adversarial attacks. The corruptions include: white noise, Poisson noise, salt-and-pepper noise, and Gaussian blur. The adversarial attacks include: FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2017), SparseFool (Modas et al., 2019) and Pixle (Pomponi et al., 2022). We set the perturbation energy to 10% of the energy of the reference image. The results are shown in decibels (dB) for better visualization. Decibel values below zero indicate that the perturbations overwhelm the spectral features.

Specifically, the *value* is in the form of the log-likelihood expectation of predictions (*i.e.* the negative cross-entropy loss). The *value* decomposition aims to assign each robustness band a predictive power such that: (1) the sums of the predictive powers are equal to the *value*, and (2) the assigned predictive powers reflect their importance in the decisions. In coalitional game theory, this decomposition scheme is known as an *axiomatic fairness division problem* (Roth, 1988; Hart, 1989; Winter, 2002; Klamler, 2010; Han & Poor, 2009). The fairness division must satisfy four axioms: *efficiency*, *symmetry*, *linearity* and *dummy player* (Roth, 1988). We refer to the axiomatic fairness division as *axiomatic decomposition*. In this scheme, the axioms guarantee *uniqueness* and *fairness* (Aumann & Maschler, 1985; Yaari & Bar-Hillel, 1984; Aumann & Dombb, 2015; Hart, 1989; Roth, 1988). The property *fairness* refers to the principle of '*equal treatment of equals*' (Yokote et al., 2019; Navarro, 2019). Shapley value theory is the unique solution satisfying the above axioms. However, the theory merely provides an abstract framework. To employ it, we have to instantiate two abstracts: (1) players and coalitions, and (2) the characteristic function.

Abstract (1): players and coalitions. The spectral bands are dubbed *spectral players*. A subset of the spectral player set is dubbed a *spectral coalition*. The details are shown in Section 3.1. The M spectral players can forge 2^M spectral coalitions. We represent the presences and absences of the spectral players as the pass-bands and stop-bands in a multi-band-pass digital signal filter (Oppenheim, 1978; Roberts & Mullis, 1987; Pei & Tseng, 1998), as shown in Section 3.2.

Abstract (2): characteristic function. The characteristic function is designed to measure the contributions that the coalitions make to decisions. The contributions of the 2^M coalitions are then combined to compute their marginal contributions in decisions. We specially design the characteristic function, as shown in Section 3.3 and Appendix B.3, to measure the information gains. Figure 4 shows the framework of applying the Shapley value theory. Figure 5 shows the block diagram of the spectral coalition filtering. Figure 6 shows an example of the 2^M spectral coalitions.

![4_image_0.png](4_image_0.png)

| | full-freq | low-freq | high-freq |
|---|---|---|---|
| training frequency range | [0, 1] | [0, 0.3] | [0.3, 1] |
| train acc. | 98.05% | 97.13% | 96.18% |
| val. acc. w/ clean | 96.10% | 98.80% | 56.70% |
| generalization gap | −1.95 ↓ | +1.67 ↑ | −39.48 ↓ |
| val. acc. w/ Gaussian | 92.60% | 98.00% | 26.70% |
| generalization gap | −5.45 ↓ | +0.87 ↑ | −69.48 ↓ |

![4_image_1.png](4_image_1.png)

Figure 3: Understanding the role of spectral signals.
We train a *resnet18* on three datasets derived from STL10 (Coates et al., 2011): [0, 1] contains full-frequency signals; [0, 0.3] only contains low-frequency signals with a cut-off frequency of 0.3; and [0.3, 1] only contains high-frequency signals with a cut-off frequency of 0.3. In the derived datasets, the filtered-out high- or low-frequency signals are randomly replaced by the high- or low-frequency signals sampled from the full-frequency train set. We visualize the embeddings in the left figure with respect to clean samples and samples perturbed with Gaussian noise (σ = 0.1). The samples are from the validation set. The accuracy on both the train and validation sets is provided in the right table. We also provide generalization gaps, measured by the differences between validation accuracy and train accuracy (*i.e.* acc_val − acc_train). We have noted that: (1) both high-frequency and low-frequency signals contain sufficient discriminative information to achieve high training accuracy; (2) high-frequency signals alone are not robust signals because they fail to generalize well from train to validation; (3) overall, low-frequency signals are more robust signals because models trained with them alone generalize better and exhibit higher robustness. We summarize these empirical observations into Assumption 3.8.

We organize the implementation details of instantiating the aforementioned two abstracts from three aspects: (1) formulating a spectral coalitional game, (2) the implementation of spectral coalitions and (3) the design of the characteristic function.

## 3.1 Spectral Coalitional Game

Spectral player. We use I_i (where i ∈ [M] := {0, 1, ..., M−1}) to denote the i-th spectral player. The I_0 contains the most robust features and the I_{M−1} contains the most non-robust features. The M spectral players constitute a player set I := {I_i}_{i=0}^{M−1}. Figure 15 in Appendix C.1 shows two partition schemes for partitioning spectral bands (ℓ∞ and ℓ2). We empirically choose ℓ∞ (a sketch of this partition follows Figure 4 below).

Spectral coalition. A subset Ĩ ⊆ I is referred to as a *spectral coalition*. The player set I is often referred to as the *grand coalition*.

Characteristic function. A characteristic function v(Ĩ): Ĩ ↦ ℝ measures the contribution of a given coalition and satisfies v(∅) = 0. In this research, the contribution of Ĩ is measured in the form of the log-likelihood expectation of the predictions by the Q, in which the input images only contain the signals present in the Ĩ. We show that this design of v theoretically measures how much information the Q uses from the features in the Ĩ for decisions.

Shapley value. A spectral coalitional game (I, v) is defined on a spectral player set I equipped with a characteristic function v. The weighted marginal contribution of a spectral player I_i over all possible coalitions is referred to as the Shapley value of the spectral player I_i. We use ψ_i(I, v) to represent the Shapley value of the player I_i.

![5_image_0.png](5_image_0.png)

Figure 4: Framework of applying Shapley value theory. Spectral coalition filtering creates spectral coalitions over X. Each coalition contains a unique combination of spectral signals, in which some spectral bands are present and others are absent. The coalitions are fed into a classifier Q. For each coalition, Q outputs the predictions. The characteristic function v then uses the predictions to estimate the contributions of the features present in the coalitions. The results from v are combined to compute the marginal contributions of spectral bands, *i.e.* the spectral importance distribution of Q.
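Before stating the Shapley value formula, here is a minimal sketch of the ℓ∞ band partition of Section 3.1 (see Appendix C.1). The function name `band_index` and the normalization of the Chebyshev distance are our own assumptions, not part of any library.

```python
import numpy as np

def band_index(h: int, w: int, n_bands: int) -> np.ndarray:
    """Assign every point of a centred h-by-w spectrum to one of the
    n_bands spectral players: I_0 (lowest frequencies, most robust)
    through I_{M-1} (highest frequencies, most non-robust)."""
    yy, xx = np.indices((h, w))
    # l_inf (Chebyshev) distance to the zero-frequency centre, normalised to [0, 1]
    dist = np.maximum(np.abs(yy - h // 2) / (h / 2), np.abs(xx - w // 2) / (w / 2))
    return np.minimum((dist * n_bands).astype(int), n_bands - 1)
```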
The Shapley value ψ_i(I, v) is uniquely given by:

$$\psi_{i}(\mathcal{I},v)=\sum_{\tilde{\mathcal{I}}\subseteq\mathcal{I}\setminus\mathcal{I}_{i}}\frac{1}{M}\binom{M-1}{|\tilde{\mathcal{I}}|}^{-1}\left\{v(\tilde{\mathcal{I}}\cup\{\mathcal{I}_{i}\})-v(\tilde{\mathcal{I}})\right\}\tag{3}$$

where $\frac{1}{M}\binom{M-1}{|\tilde{\mathcal{I}}|}^{-1}$ gives the weight of the player I_i presenting in the coalition Ĩ.

Spectral importance distribution (SID). We use Ψ(v) := (ψ_i)_{i∈[M]} ∈ ℝ^M to denote the collection of ψ_i(I, v) over all players. We min-max normalize Ψ(v) by taking

$$\bar{\Psi}(v)=\frac{\Psi(v)-\min\Psi(v)}{||\Psi(v)-\min\Psi(v)||_{1}}.$$

The reason for normalizing the SIDs is that we want to scalarize the SIDs for numerical comparisons. The Ψ̄(v) is referred to as the *spectral importance distribution*. Figure 7 shows examples of the spectral importance distributions of trained and un-trained models.

Spectral robustness score (SRS). We can also summarize spectral importance distributions into scalar values. We refer to the summarized scalar values as *spectral robustness scores*. We use S(v): v ↦ [0, 1] to denote the summarizing function.

## 3.2 Spectral Coalition Filtering

We represent the *presences* and *absences* of the spectral players through the signal *pass-bands* and *stop-bands* using multi-band-pass digital signal filtering (Oppenheim, 1978; Pei & Tseng, 1998; Steiglitz, 2020), as shown in Figure 5. For example, the spectral coalition {I_0, I_2} signifies the signals present only in I_0 and I_2. With the spectral coalition filtering, we are able to evaluate the contributions of the combinations of various spectral features. Figure 6 shows an example of the 2^M spectral coalitions.

To implement the presences and absences of spectral signals, we define a mask map T(Ĩ): Ĩ ↦ {0, 1}^{M×N} on the 2D spectrum, where Ĩ is a spectral coalition and M × N denotes the 2D image dimensions. The mask map is point-wisely defined as:

$$\mathbb{T}(\tilde{\mathcal{I}})(m,n)=\begin{cases}1,&\text{if the frequency point }(m,n)\text{ is present in coalition }\tilde{\mathcal{I}},\\0,&\text{otherwise}\end{cases}\tag{4}$$

where (m, n) ∈ [M] × [N]. In the digital signal processing literature, this mask map is known as a transfer function (Steiglitz, 2020). In the 2D mask map, the frequency points are '1' in pass-bands (presences) and '0' in stop-bands (absences).

![6_image_0.png](6_image_0.png)

Figure 5: Spectral coalition filtering. In this example, the mask map T(Ĩ) (*i.e.* the transfer function) only allows passing the signals present in the spectral coalition {I_0, I_2}. The M is 4 and the absences are assigned to zeros. The images after spectral coalition filtering (x ⋈ Ĩ) are then fed into frozen classifiers to assess the contributions of spectral coalitions. The binary operation notation ⋈ denotes coalition filtering given by Definition 3.1. The notation ⊙ denotes the Hadamard product.

The process of spectral coalition filtering with the mask map for a given spectral coalition Ĩ is shown in Figure 5. We apply the transfer function T(Ĩ) on the spectra of images with the element-wise product (*i.e.* the Hadamard product (Horn, 1990; Horadam, 2012)). Let F be the Discrete Fourier Transform (DFT) operator and F^{-1} be the inverse DFT (IDFT) operator (Tan & Jiang, 2018). Readers can further refer to Appendix B.1.
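A sketch of the mask map T of Equation 4, built on the hypothetical `band_index` helper above; the band boundaries follow the ℓ∞ partition.

```python
import numpy as np

def coalition_mask(coalition: frozenset, idx: np.ndarray) -> np.ndarray:
    """Transfer function T of Equation 4: '1' on the pass-bands of the
    players present in the coalition, '0' on the stop-bands."""
    return np.isin(idx, list(coalition)).astype(float)

# Example: the coalition {I_0, I_2} of Figure 5, with M = 4 players.
mask = coalition_mask(frozenset({0, 2}), band_index(224, 224, 4))
```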
Definition 3.1 (Spectral coalition filtering). *We define a binary operator '⋈' to represent the signal filtering by:*

$$x\bowtie\tilde{\mathcal{I}}:=\mathcal{F}^{-1}\left[\underbrace{\mathcal{F}(x)\odot\mathbb{T}(\tilde{\mathcal{I}})}_{\text{Spectral presence}}+\underbrace{b\odot(\mathbb{1}-\mathbb{T}(\tilde{\mathcal{I}}))}_{\text{Spectral absence}}\right]\tag{5}$$

*where '⊙' denotes the Hadamard product (i.e. element-wise product), 𝟙 ∈ ℝ^{M×N} denotes an all-ones matrix and b ∈ ℂ^{M×N} represents the assignments of the absences of spectral players. In the context of attribution analysis, b is often referred to as the baseline. In our implementation, we empirically set b = 0.*

Remark 3.2 (Absence baseline). *Formally, in the context of attribution analysis, the term 'baseline' defines the absence assignments of players (Sundararajan et al., 2017; Shrikumar et al., 2017; Binder et al., 2016). For example, if we use 'zeros' to represent the absence of players, the 'zeros' are dubbed the 'baseline' in the attribution analysis. We discuss multiple baselines in Appendix B.2.*

Definition 3.3 (Spectral coalition filtering over set). Accordingly, we define the filtering over a set X as:

$$\mathcal{X}\bowtie\tilde{\mathcal{I}}:=\{x\bowtie\tilde{\mathcal{I}}\,|\,x\in\mathcal{X}\}.\tag{6}$$

## 3.3 Characteristic Function Design

The characteristic function is needed in order to measure the contributions of the features in Ĩ. We define the characteristic function as the gain of the negative cross-entropy loss values between feature presences and absences.

![7_image_0.png](7_image_0.png)

| 0000 | 0001 | 0010 | 0011 | 0100 | 0101 | 0110 | 0111 |
|------|------|------|------|------|------|------|------|
| 1000 | 1001 | 1010 | 1011 | 1100 | 1101 | 1110 | 1111 |

Figure 6: An example of the complete 2^M spectral coalitions. This example shows 16 spectral coalitions with M = 4. Each coalition provides various information relevant to decisions. Each image is a coalition. Each coalition contains a unique spectral signal combination. We use binary codes to index these coalitions. A '1' in the i-th position indicates the presence of the i-th player. For example, 1001 indicates the presence of two spectral players (I_0 and I_3) in the coalition.

Definition 3.4 (Characteristic function). *The characteristic function v(Ĩ): Ĩ ↦ ℝ is defined as:*

$$v(\tilde{\mathcal{I}}):=\underbrace{\mathbb{E}}_{x\sim\mathcal{X}}\sum_{y\in\mathcal{Y}}\left\{\underbrace{P(y|x)\cdot\log Q(y|x\bowtie\tilde{\mathcal{I}})}_{\text{Feature presence}}-\underbrace{P(y|x)\cdot\log Q(y|x\bowtie\varnothing)}_{\text{Absence baseline}}\right\}=\underbrace{\mathbb{E}}_{x\sim\mathcal{X}}\sum_{y\in\mathcal{Y}}P(y|x)\cdot\log Q(y|x\bowtie\tilde{\mathcal{I}})-C\tag{7}$$

where the constant term $C:=\mathbb{E}_{x\sim\mathcal{X}}\sum_{y\in\mathcal{Y}}P(y|x)\cdot\log Q(y|x\bowtie\varnothing)$ is used to fulfil v(∅) = 0. We refer to the C as the *dummy player constant*.

Remark 3.5. *If the labels are one-hot, then Equation 7 simplifies to:*

$$v(\tilde{\mathcal{I}}):=\mathbb{E}_{x,y\sim\langle\mathcal{X},\mathcal{Y}\rangle}\log Q(y|x\bowtie\tilde{\mathcal{I}})-C\tag{8}$$
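The operator of Definition 3.1 and the characteristic function of Remark 3.5 can be sketched as follows. `predict_proba` (any frozen classifier returning a class-probability vector) is an assumed interface, and we use the zero absence baseline b = 0 as in our implementation.

```python
import numpy as np

def coalition_filter(x: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """x ⋈ Ĩ of Equation 5 with the zero absence baseline: keep the
    spectral presences, zero out the stop-bands, transform back."""
    spectrum = np.fft.fftshift(np.fft.fft2(x))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

def char_value(predict_proba, images, labels, mask, empty_mask, eps=1e-12):
    """Characteristic function v (Equation 8, one-hot labels): the mean
    log-likelihood of the filtered images minus the dummy-player
    constant C, so that v(empty set) = 0."""
    def mean_loglik(m):
        logs = [np.log(predict_proba(coalition_filter(x, m))[y] + eps)
                for x, y in zip(images, labels)]
        return float(np.mean(logs))
    return mean_loglik(mask) - mean_loglik(empty_mask)  # subtract C
```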
Linking to information theory. The relationship between information theory and characteristic functions in the form of the negative log-likelihood of Bayes classifiers has been discussed in the literature (Covert et al., 2020; Aas et al., 2021; Lundberg & Lee, 2017). Following on from their discussions, we show that the v in Equation 7 profoundly links to information theory in terms of spectral signals. The maximal information of features relevant to labels in Ĩ is the mutual information I(X ⋈ Ĩ, Y). A classifier Q can merely utilize a proportion of this information. Theorem 3.6 states an information quantity identity regarding the I(X ⋈ Ĩ, Y) and the v. The term D_KL[P||Q] measures the point-wise (*i.e.* for an image x) KL-divergence between the predictions and the ground-truth labels. On the basis of the information quantity identity, the v can be interpreted as the information gain between the presences and absences of the features, which the Q utilizes in decisions from the Ĩ. By enumerating all coalitions, the information gains are then combined to compute the marginal information gains of features in decisions via Equation 3.

Theorem 3.6 (Spectral coalition information identity). *The information quantity relationship is given as:*

$$\mathbb{I}(\mathcal{X}\bowtie\tilde{\mathcal{I}},\mathcal{Y})=\mathbb{E}_{x\in\mathcal{X}\bowtie\tilde{\mathcal{I}}}D_{KL}[P(y|x)\,||\,Q(y|x)]+H(\mathcal{Y})+v(\tilde{\mathcal{I}})+C\tag{9}$$

where v is defined in Equation 7 and H(Y) is the Shannon entropy of the label set Y. The proof is provided in Appendix B.3.

![8_image_0.png](8_image_0.png)

Figure 7: Spectral importance distributions (SIDs) of trained models and un-trained models. The experimental models are pre-trained on *ImageNet*. We also include models with random weights as a control, marked by the blue box. We have noticed that: (1) the spectral importance distributions of trained models exhibit non-uniformity, and (2) the spectral importance distributions of un-trained models exhibit uniformity. We summarize these empirical observations as Assumption 3.7.

## 3.4 Spectral Robustness Score (SRS)

Although we are primarily interested in using the spectral importance distributions (SIDs) for robustness interpretations, they can also be summarized into scalar scores for purposes such as numerical comparisons and later correlation studies.

Assumption 3.7 (Spectral uniformity assumption of random decisions). The second row in Figure 7 shows the SIDs of various models with randomized weights. We randomize the model weights with Kaiming initialization (He et al., 2015). The measured SIDs exhibit spectral uniformity. This suggests: *Un-trained models do not have spectral preferences*. We refer to 'the models with randomized parameters' as *random decisions*. Therefore, we assume that the SIDs of random decisions are uniform: 𝟙/M.

Assumption 3.8 (Robustness prior). We assume: *Higher utilization of robust features in decisions implies robust models*. This is further substantiated by the experiments in Figure 3. To reflect this robustness prior, we empirically design a series β := (β^0, β^1, ..., β^{M−1})^T, where β ∈ (0, 1), as the summing weights of the SIDs. Empirically, we choose β = 0.75 because this choice achieves the best correlation with model robustness.
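Putting the pieces together, the following is a sketch of the exact Shapley computation of Equation 3 over all 2^M coalitions (feasible for our choice M = 8, *i.e.* 256 evaluations of v), together with the min-max normalized SID and the β-weighted SRS that Equations 10 and 11 formalize next; the closed-form normalization follows Appendix C.2. In practice, one caches v so that each coalition is filtered and evaluated only once.

```python
import numpy as np
from itertools import combinations
from math import comb

def shapley_values(v, M: int) -> np.ndarray:
    """Exact Shapley values (Equation 3); `v` maps a frozenset of
    player indices to its characteristic value."""
    psi = np.zeros(M)
    for i in range(M):
        others = [p for p in range(M) if p != i]
        for size in range(M):
            weight = 1.0 / (M * comb(M - 1, size))      # (1/M) * C(M-1, |S|)^-1
            for S in combinations(others, size):
                S = frozenset(S)
                psi[i] += weight * (v(S | {i}) - v(S))  # marginal contribution
    return psi

def sid(psi: np.ndarray) -> np.ndarray:
    """Min-max normalized spectral importance distribution."""
    shifted = psi - psi.min()
    return shifted / max(np.sum(np.abs(shifted)), 1e-12)

def srs(sid_bar: np.ndarray, beta: float = 0.75) -> float:
    """Spectral robustness score: the beta-weighted sum of the SID
    against the uniform random-decision baseline (Equations 10-11)."""
    M = len(sid_bar)
    w = beta ** np.arange(M)                       # the series of Assumption 3.8
    numerator = abs(w @ sid_bar - w.sum() / M)
    supremum = abs(np.linalg.norm(w) - w.sum() / M)  # closed form, Appendix C.2
    return numerator / supremum
```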
Summarizing with weighted sum. Let Ψ(v) be the measured spectral importance distribution (SID). Set

$$\bar{\Psi}(v)=\frac{\Psi(v)-\min\Psi(v)}{||\Psi(v)-\min\Psi(v)||_{1}}$$

with min-max normalization. The weighted sum of the Ψ̄(v) with the weights β is given by:

$$\left|\beta^{T}\bar{\Psi}(v)-\beta^{T}\frac{\mathbb{1}}{M}\right|\tag{10}$$

where β^T 𝟙/M serves as a random-decision baseline. Let S(v): v ↦ [0, 1] be the normalized result of Equation 10. The S(v) is given by:

$$S(v):=\frac{\left|\beta^{T}\bar{\Psi}(v)-\beta^{T}\frac{\mathbb{1}}{M}\right|}{\sup_{\bar{\Psi}}\left|\beta^{T}\bar{\Psi}(v)-\beta^{T}\frac{\mathbb{1}}{M}\right|}=\left|\frac{\bar{\beta}^{T}\bar{\Psi}(v)-\eta}{1-\eta}\right|\tag{11}$$

where β ∈ (0, 1), β̄ = β/||β||₂ and η = (1/M) · ||β||₁/||β||₂. Readers can refer to Appendix C.2 for the simplification deduction.

## 4 Experiments

We design experiments to show the dual functionality of **I-ASIDE**, which can not only **measure** robustness but also **interpret** robustness. We organize the experiments into three categories:

![9_image_0.png](9_image_0.png)

Figure 8: The spectral robustness scores (SRS), measured with I-ASIDE, correlate with the mean corruption errors (mCE) in the literature (Hendrycks & Dietterich, 2019).

- Section 4.1: Correlation to a variety of robustness metrics;
- Section 4.2: Studying architectural robustness;
- Section 4.3: A case study interpreting how supervision noise levels affect model robustness.

Section 4.1 shows that the scores obtained with **I-ASIDE** correlate with the model robustness scores measured with other methods. Section 4.2 and Section 4.3 show that **I-ASIDE** is able to interpret robustness by examining SIDs.

Reproducibility. We choose M = 8 and 200 samples to conduct the experiments. We choose 200 samples because of the experiments in Appendix C.3, which show that a small number of samples is sufficiently representative of the spectral signals.

## 4.1 Correlation To Robustness Metrics

Definition 4.1 (**Mean prediction error**). *In our experiments, we measure model perturbation robustness with mean prediction errors (mPE) besides the mean corruption errors (mCE). Let x be some clean image and x\* be the perturbed image. For a classifier Q, we define the mean prediction error (mPE) as:*

$$\Delta\mathcal{P}:=\mathbb{E}_{x,y\sim\langle\mathcal{X},\mathcal{Y}\rangle}|Q(y|x)-Q(y|x^{*})|.\tag{12}$$

We demonstrate that **I-ASIDE** is able to measure model robustness. The experiments are broken down into three aspects: (1) correlation to mCE scores, (2) correlation to adversarial robustness, and (3) correlation to corruption robustness.

Correlation to mCE scores. Figure 8 shows the correlation between the spectral robustness scores (SRS) and the mean corruption errors (mCE). The mCE scores are taken from the literature (Hendrycks & Dietterich, 2018). The mCE scores are measured on a corrupted *ImageNet* which is known as *ImageNet-C* in the literature (Hendrycks & Dietterich, 2019). The *ImageNet-C* includes 15 common visual corruptions with five levels of severity each (75 corrupted variants in total). This correlation suggests that the results measured with **I-ASIDE** correlate with the results measured with the robustness metric mCE.
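For completeness, a small sketch of the mPE metric of Definition 4.1; `predict_proba` and `perturb` are assumed interfaces (any frozen classifier and any perturbation x ↦ x*).

```python
import numpy as np

def mean_prediction_error(predict_proba, images, labels, perturb) -> float:
    """Mean prediction error (Equation 12): the expected absolute change
    of the ground-truth class probability under a perturbation."""
    errors = [abs(predict_proba(x)[y] - predict_proba(perturb(x))[y])
              for x, y in zip(images, labels)]
    return float(np.mean(errors))
```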
![10_image_0.png](10_image_0.png)

Figure 9: The spectral robustness scores (SRS), measured with I-ASIDE, correlate with the mean prediction errors (mPE) under adversarial attacks. The circle sizes in (b) are proportional to the SRS.

Correlation to adversarial robustness. Figure 9 shows the correlation between the spectral robustness scores (SRS) and the mean prediction errors (mPE) of the adversarial attacks FGSM and PGD. We vary the eps from 0.1 to 0.2. The results show that our scores correlate with the mean prediction errors in various eps settings. This suggests that the results measured by our method correlate with adversarial robustness.

Correlation to corruption robustness. Figure 10 shows the correlation between the spectral robustness scores (SRS) and the mean prediction errors (mPE) of the corruptions with white noise and Gaussian blurring. We vary the σ of white noise from 0.1 to 0.2. We vary the Gaussian blurring kernel sizes from 3 × 3 to 7 × 7. The results show that our scores correlate with the mean prediction errors in all cases. This suggests that the results measured by our method can reflect corruption robustness.

![11_image_0.png](11_image_0.png)

Figure 10: The spectral robustness scores (SRS), measured with I-ASIDE, correlate with the mean prediction errors (mPE) under corruptions. The circle sizes in (b) are proportional to the SRS.

## 4.2 Studying Architectural Robustness

I-ASIDE is able to answer questions such as:

- Does model parameter size play a role in robustness?
- Are vision transformers more robust than convolutional neural networks (ConvNets)?

Does model parameter size play a role in robustness? Figure 11 (a) shows that parameter counts do not correlate with model robustness. Thus, the tendency of a model to use robust features is not determined by parameter counts alone. We would like to carry out further investigation in future work.

![12_image_1.png](12_image_1.png)

![12_image_0.png](12_image_0.png)

Figure 11: How do architectural elements affect robustness? The left figure answers: "Does model parameter size play a role in robustness?". The right figure (panel (b), a t-SNE projection of the spectral importance distributions) answers: "Are vision transformers more robust than convolutional neural networks?". For the two questions, the answers suggested by **I-ASIDE** are yes; but the story is more complicated, and we provide a brief discussion of this case study in Section 4.2. We also note that *efficientnet* surprisingly exhibits perturbation robustness comparable to the architectures in the Transformer family. All experimental models are pre-trained on *ImageNet*. The circle sizes in (b) are proportional to SRS.

Are vision transformers more robust than ConvNets? Figure 11 (b) shows a t-SNE projection of the spectral importance distributions of a variety of models. The results show that vision transformers form a cluster (swin_b, *maxvit_t* and *vit_b_16*) and outperform ConvNets in terms of robustness. These results correlate with recent robustness research in the literature (Paul & Chen, 2022; Zhou et al., 2022; Shao et al., 2021). The interpretation is that: **Vision transformers tend to use more robust features than ConvNets.**

Discussion. Vision transformers generally outperform ConvNets; nevertheless, state-of-the-art ConvNets, *e.g.* efficientnet (Tan & Le, 2019), can achieve comparable robustness performance (*e.g.* by error rates on benchmark datasets) (Li & Xu, 2023). The literature (Devaguptapu et al., 2021) affirms that *efficientnet* is more robust than most ConvNets. But why is *efficientnet* unique? The *efficientnet* introduces an innovative concept in which the network sizes are controlled by scaling the width, depth, and resolution with a compound coefficient (Tan & Le, 2019). The base architecture is then searched with neural architecture search (NAS) (Ren et al., 2021) instead of hand-crafted design.
The NAS optimization objective is to maximize the network accuracy subject to arbitrary image resolutions. The search implicitly encourages the network structure of *efficientnet* to use more robust features. This is because: the low-frequency signals at various resolutions are robust signals while the high-frequency signals are not. The second column in Figure 7 shows the SID of *efficientnet* pre-trained on *ImageNet*. The SID shows that *efficientnet_v2_s* uses more robust features than *alexnet* and *resnet18*.

![13_image_0.png](13_image_0.png)

Figure 12: How do models respond to label noise? Our results show that models trained with higher label noise levels tend to use spectral signals uniformly, *i.e.* without a preference for robust (low-frequency) features.

## 4.3 Interpreting How Supervision Noise Levels Affect Model Robustness

The previous robustness benchmarks with mean corruption errors (mCE) are not able to answer the long-standing question: "**How and why do label noise levels affect robustness?**". We demonstrate that I-ASIDE is able to answer this question.

Learning with noisy labels. Supervision signals refer to the prior knowledge provided by labels (Sucholutsky et al., 2023; Zhang et al., 2020; Shorten & Khoshgoftaar, 2019; Xiao et al., 2020). There is a substantial line of previous research on the question of "how supervision noise affects robustness" (Gou et al., 2021; Frénay & Verleysen, 2013; Lukasik et al., 2020; Rolnick et al., 2017). This question is not completely answered yet. For example, Flatow & Penner add uniform label noise to *CIFAR-10* and study its impact on model robustness (Flatow & Penner, 2017). Their results show that classification test accuracy decreases as the training label noise level increases. However, empirical studies like this are not able to answer the underlying 'why' question.

Noisy-label dataset. We derive noisy-label datasets from a clean *Caltech101*. We randomly re-assign a proportion of the labels with a uniform distribution over the label classes to create a noisy-label dataset (a minimal sketch of this procedure follows at the end of this subsection). We refer to the randomly assigned proportion as the supervision noise level. We vary the noise level from 0.2 to 1.0 to derive five training datasets.

Experiment. We train three models (googlenet, *resnet18* and *mobilenet_v2*) on the clean dataset and the five noisy-label datasets for 120 epochs each. We then measure their SIDs. The results are visualized in Figure 12 with heat maps. The results show a pattern across the above three models: the SIDs are more uniform at higher supervision noise levels. The interpretation, regarding the learning dynamics in the presence of label noise, is that: Models tend to use more non-robust features in the presence of higher label noise within the training set.
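A minimal sketch of the label-corruption procedure described above; the helper name, the RNG choice, and the evenly spaced noise levels in the usage comment are our own assumptions.

```python
import numpy as np

def corrupt_labels(labels: np.ndarray, noise_level: float,
                   n_classes: int, seed: int = 0) -> np.ndarray:
    """Re-assign a `noise_level` proportion of the labels uniformly at
    random over the classes, as in the Section 4.3 noisy-label datasets."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n_flip = int(round(noise_level * len(labels)))
    flip = rng.choice(len(labels), size=n_flip, replace=False)  # which labels to re-assign
    noisy[flip] = rng.integers(0, n_classes, size=n_flip)       # uniform over classes
    return noisy

# Example (assuming five evenly spaced levels and 101 Caltech101 classes):
# noisy_sets = [corrupt_labels(y_clean, p, 101) for p in (0.2, 0.4, 0.6, 0.8, 1.0)]
```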
## 5 Related Work

We further conduct a literature investigation along three research lines: (1) global interpretability, (2) model robustness, and (3) frequency-domain research. This literature study shows that **I-ASIDE** provides unique insights into these research lines.

Global interpretability. Global interpretability summarizes the decision behaviours of models from a holistic view. In contrast, local interpretability merely provides explanations on the basis of instances (Sundararajan et al., 2017; Smilkov et al., 2017; Linardatos et al., 2020; Selvaraju et al., 2017; Arrieta et al., 2020; Zhou et al., 2016; Ribeiro et al., 2016; Lundberg & Lee, 2017; Lakkaraju et al., 2019; Guidotti et al., 2018; Bach et al., 2015; Montavon et al., 2019; Shrikumar et al., 2017). There are four major research lines for image models: (1) feature visualization, (2) network dissection, (3) concept-based methods, and (4) feature importance. Feature visualization seeks the ideal inputs for specific neurons or classes by maximizing activations (Olah et al., 2017; Nguyen et al., 2019; Zeiler et al., 2010; Simonyan et al., 2013; Nguyen et al., 2016a;b). This method provides intuitions regarding the question: "What inputs maximize the activations of specific neurons or classes?". Network dissection aims to connect the functions of network units (*e.g.* channels or *layers*) with specific concepts (*e.g.* eyes or *ears*) (Bau et al., 2017). Concept-based methods understand decisions by answering the question "how do models use a set of given concepts in decisions?" (Kim et al., 2018; Ghorbani et al., 2019; Koh et al., 2020; Chen et al., 2020). For example, TCAV explains model decisions by evaluating the importance of a given set of concepts (*e.g.* the textures *dotted*, *striped* and *zigzagged*) for a given class (*e.g.* the class *zebra*) (Kim et al., 2018). Global input feature importance analysis, often using the Shapley value theory framework, attempts to answer the question: "How do input features contribute to predictions?" (Altmann et al., 2010; Greenwell et al., 2018; Lundberg & Lee, 2017; Ribeiro et al., 2016; Simonyan et al., 2013; Sundararajan et al., 2017; Covert et al., 2020). However, few works fall within the scope of global interpretability with feature importance analysis. A related work, SAGE, applying the Shapley value theory framework, globally assigns spatial input features importance values for interpreting spatial feature contributions (Covert et al., 2020). Although the aforementioned global interpretability methods provide insights into understanding the decisions inside black-box models, they do not provide interpretations regarding robustness mechanisms. Our work fundamentally differs from them in that we provide interpretations regarding robustness mechanisms. We attempt to answer the fundamental question: "Why are some models more robust than others?".

Model robustness. Model robustness refers to the prediction sensitivity of models to perturbations. The perturbations can act in spaces such as the input space and the parameter space (Hendrycks & Dietterich, 2019; Drenkow et al., 2021). In this research, we focus on perturbations within the input space. The perturbations can stem from sources such as adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014), corruptions (Hendrycks & Dietterich, 2019), outliers (Hendrycks et al., 2018a; Pang et al., 2021) and supervision signal noise (Hendrycks et al., 2018b). Model robustness is often assessed using scalar metrics (Hendrycks & Dietterich, 2019; Krizhevsky et al., 2012; Laugros et al., 2019; Taori et al., 2020). For example, robustness can be measured by the distances between clean and perturbed pairs in feature spaces (Zheng et al., 2016).
Hendrycks & Dietterich benchmark corruption robustness with mean corruption errors (mCE) over a set of corrupted datasets like *ImageNet-C* (Hendrycks & Dietterich, 2019), using AlexNet as a normalization baseline. Despite their widespread adoption in the previous literature, these scalar metrics lack the ability to provide detailed insights into the robustness mechanisms. Our work not only serves as a robustness metric but also offers mechanistic interpretations, answering the 'why' question behind model robustness. This dual functionality distinguishes our approach, providing a deeper understanding of the mechanisms.

Frequency-domain research. Neural networks are non-linear parameterized signal processing filters. Investigating how neural networks respond to input signals in the frequency domain can provide a unique insight into understanding their functions. Xu et al. delve into the learning dynamics of neural networks in the frequency domain (Xu et al., 2019a;b). They present their findings as the 'F-Principle'. Their work suggests that the learning behaviours of neural networks exhibit spectral non-uniformity: neural networks fit low-frequency components first, then high-frequency components. In a related study, Tsuzuku & Sato show that convolutional neural networks have spectral non-uniformity with respect to Fourier bases (Tsuzuku & Sato, 2019). Later, Wang et al. connect model generalization behaviours and the image spectrum (Wang et al., 2020). They argue that: (1) the supervision signals provided by humans use more low-frequency signals in images and (2) models trained on them tend to use more low-frequency signals. Our showcase experiment in Figure 12 provides interpretations regarding their empirical findings. In the interpretability research line within the frequency domain, Kolek et al. propose 'CartoonX' based on the rate-distortion explanation (RDE) framework (Macdonald et al., 2019; Heiß et al., 2020). The RDE framework identifies decision-critical features by partially obfuscating the features. They refer to 'the prediction errors between clean inputs and the partially obfuscated inputs' as distortions. CartoonX pinpoints the decision-critical features within the wavelet domain to answer the query: "What features are crucial for decisions?" (Kolek et al., 2022). Our work differs from CartoonX in that: (1) our method aims to interpret model robustness mechanisms while CartoonX does not, (2) our method is a global interpretability method while CartoonX is a local interpretability method, (3) our method analyzes within an information theory framework while CartoonX uses the RDE framework, and (4) our method uses Fourier bases while CartoonX uses wavelet bases.

## 6 Limitations

I-ASIDE provides a unique insight into the perturbation robustness mechanisms. Yet, our method has two major limitations: (1) the spectral perspective can merely reflect one aspect of the holistic view of model robustness, and (2) the SID resolutions are low.

Limitation (1). For example, carefully crafted malicious adversarial perturbations on low-frequency components can fool neural networks (Luo et al., 2022; Liu et al., 2023; Maiya et al., 2021). Luo et al. demonstrate that attacking low-frequency signals can fool neural networks, resulting in attacks which are imperceptible to humans. This further implies the complexity of this research topic.

Limitation (2). The computation cost is imposed by O(2^M).
Fortunately, we do not need a high SID resolution to analyze the model robustness problem. For example, a choice of M = 8 is sufficient to interpret robustness mechanisms (as we have shown) while the computational cost remains reasonable.

## 7 Conclusions

On the solid ground provided by information theory and coalitional game theory, we present an axiomatic method to interpret model robustness mechanisms by leveraging the power-law-like decay of SNRs over the frequency. Our method addresses the limitation that scalar metrics fail to interpret robustness mechanisms. We carry out extensive experiments over a variety of architectures. The SIDs, when scalarized, can largely reproduce the results found with previous methods, but address their failure to answer the underlying 'why' questions. Our method goes beyond them with the dual functionality that **I-ASIDE** can not only measure robustness but also interpret its mechanisms. Our work provides a unique insight into the robustness mechanisms of image classifiers.

## Acknowledgments

This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 18/CRT/6223. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. We also thank the reviewers for their constructive comments, which have significantly improved the quality of this research. We thank the ICHEC (Irish Centre for High-End Computing) for its support. We also thank Prof. Dr. Michael Madden from the University of Galway, Ireland, for his help.

## References

Kjersti Aas, Martin Jullum, and Anders Løland. Explaining individual predictions when features are dependent: More accurate approximations to shapley values. *Artificial Intelligence*, 298:103502, 2021.

André Altmann, Laura Toloşi, Oliver Sander, and Thomas Lengauer. Permutation importance: a corrected feature importance measure. *Bioinformatics*, 26(10):1340–1347, 2010.

Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. *Information fusion*, 58:82–115, 2020.

Robert J Aumann and Michael Maschler. Game theoretic analysis of a bankruptcy problem from the talmud. *Journal of economic theory*, 36(2):195–213, 1985.

Yonatan Aumann and Yair Dombb. The efficiency of fair division with connected pieces. *ACM Transactions on Economics and Computation (TEAC)*, 3(4):1–16, 2015.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7):e0130140, 2015.

Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. Recent advances in adversarial training for adversarial robustness. *arXiv preprint arXiv:2102.01356*, 2021.

David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 6541–6549, 2017.

Leonard Bereska and Efstratios Gavves. Mechanistic interpretability for ai safety–a review. *arXiv preprint arXiv:2404.14082*, 2024.
Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. In *Artificial* Neural Networks and Machine Learning–ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II 25, pp. 63–71. Springer, 2016. Zhi Chen, Yijie Bei, and Cynthia Rudin. Concept whitening for interpretable image recognition. Nature Machine Intelligence, 2(12):772–782, 2020. Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011. Ian Covert, Scott M Lundberg, and Su-In Lee. Understanding global feature contributions with additive importance measures. *Advances in Neural Information Processing Systems*, 33:17212–17223, 2020. Chaitanya Devaguptapu, Devansh Agarwal, Gaurav Mittal, Pulkit Gopalani, and Vineeth N Balasubramanian. On adversarial robustness: A neural architecture search perspective. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 152–161, 2021. Nathan Drenkow, Numair Sani, Ilya Shpitser, and Mathias Unberath. A systematic review of robustness in deep learning for computer vision: Mind the gap? *arXiv preprint arXiv:2112.00639*, 2021. David Flatow and Daniel Penner. On the robustness of convnets to training on noisy labels. Technical report, Stanford University, 2017. Benoît Frénay and Michel Verleysen. Classification in the presence of label noise: a survey. IEEE transactions on neural networks and learning systems, 25(5):845–869, 2013. Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. *Advances in Neural Information Processing Systems*, 32, 2019. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819, 2021. Brandon M Greenwell, Bradley C Boehmke, and Andrew J McCarthy. A simple and effective model-based variable importance measure. *arXiv preprint arXiv:1805.04755*, 2018. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. Local rule-based explanations of black box decision systems. *arXiv preprint arXiv:1805.10820*, 2018. Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on vision transformer. IEEE transactions on pattern analysis and machine intelligence, 45(1):87–110, 2022. Zhu Han and H Vincent Poor. Coalition games with cooperative transmission: a cure for the curse of boundary nodes in selfish packet-forwarding wireless networks. *IEEE transactions on communications*, 57 (1):203–213, 2009. Sergiu Hart. Shapley value. In *Game theory*, pp. 210–216. Springer, 1989. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In *Proceedings of the IEEE international conference on* computer vision, pp. 1026–1034, 2015. Cosmas Heiß, Ron Levie, Cinjon Resnick, Gitta Kutyniok, and Joan Bruna. 
In-distribution interpretability for challenging modalities. *arXiv preprint arXiv:2007.00758*, 2020. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019. Dan Hendrycks and Thomas G Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. *arXiv preprint arXiv:1807.01697*, 2018. Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018a. Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. *Advances in neural information processing systems*, 31, 2018b. Kathy J Horadam. *Hadamard matrices and their applications*. Princeton university press, 2012. Roger A Horn. The hadamard product. In *Proc. Symp. Appl. Math*, volume 40, pp. 87–169, 1990. Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. *ACM computing surveys (CSUR)*, 54(10s):1–41, 2022. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International conference on machine learning, pp. 2668–2677. PMLR, 2018. Christian Klamler. Fair division. *Handbook of group decision and negotiation*, pp. 183–202, 2010. Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In *International Conference on Machine Learning*, pp. 5338–5348. PMLR, 2020. Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, and Gitta Kutyniok. Cartoon explanations of image classifiers. In *European Conference on Computer Vision*, pp. 443–458. Springer, 2022. Thomas William Körner. *Fourier analysis*. Cambridge university press, 2022. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25, 2012. Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. Faithful and customizable explanations of black box models. In *Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society*, pp. 131–138, 2019. Alfred Laugros, Alice Caplier, and Matthieu Ospici. Are adversarial robustness and common perturbation robustness independant attributes? In *Proceedings of the IEEE/CVF International Conference on Computer* Vision Workshops, pp. 0–0, 2019. Yanxi Li and Chang Xu. Trade-off between robustness and accuracy of vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7558–7568, 2023. Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: analysis, applications, and prospects. *IEEE transactions on neural networks and learning systems*, 33(12): 6999–7019, 2021. Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable AI: A review of machine learning interpretability methods. *Entropy*, 23(1):18, 2020. Zachary C Lipton. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. *Queue*, 16(3):31–57, 2018. Jiyuan Liu, Bingyi Lu, Mingkang Xiong, Tao Zhang, and Huilin Xiong. Low frequency sparse adversarial attack. 
*Computers & Security*, 132:103379, 2023. Michal Lukasik, Srinadh Bhojanapalli, Aditya Menon, and Sanjiv Kumar. Does label smoothing mitigate label noise? In *International Conference on Machine Learning*, pp. 6448–6458. PMLR, 2020. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017. Cheng Luo, Qinliang Lin, Weicheng Xie, Bizhu Wu, Jinheng Xie, and Linlin Shen. Frequency-driven imperceptible adversarial attack on semantic similarity. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 15315–15324, 2022. Jan Macdonald, Stephan Wäldchen, Sascha Hauch, and Gitta Kutyniok. A rate-distortion framework for explaining neural network decisions. *arXiv preprint arXiv:1905.11092*, 2019. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017. Shishira R Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, and Abhinav Shrivastava. A frequency perspective of adversarial robustness. *arXiv preprint arXiv:2111.00861*, 2021. Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Sparsefool: a few pixels make a big difference. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 9087–9096, 2019. Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, pp. 193–209, 2019. Florian Navarro. Necessary players, myerson fairness and the equal treatment of equals. Annals of Operations Research, 280:111–119, 2019. Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. *Advances in neural information* processing systems, 29, 2016a. Anh Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. *arXiv preprint arXiv:1602.03616*, 2016b. Anh Nguyen, Jason Yosinski, and Jeff Clune. Understanding neural networks via feature visualization: A survey. *Explainable AI: interpreting, explaining and visualizing deep learning*, pp. 55–76, 2019. Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. *Distill*, 2(11):e7, 2017. Alan V Oppenheim. Applications of digital signal processing. *Englewood Cliffs*, 1978. Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review. *ACM computing surveys (CSUR)*, 54(2):1–38, 2021. Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training. *arXiv* preprint arXiv:2010.00467, 2020. Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. In Proceedings of the AAAI conference on Artificial Intelligence, volume 36, pp. 2071–2081, 2022. Soo-Chang Pei and Chien-Cheng Tseng. A comb filter design using fractional-sample delay. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 45(5):649–653, 1998. Jary Pomponi, Simone Scardapane, and Aurelio Uncini. Pixle: a fast and effective black-box attack based on rearranging pixels. In *2022 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–7. IEEE, 2022. 
Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Computing Surveys (CSUR), 54(4):1–34, 2021. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135–1144, 2016. Richard A Roberts and Clifford T Mullis. *Digital signal processing*. Addison-Wesley Longman Publishing Co., Inc., 1987. David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017. Alvin E Roth. *The Shapley value: essays in honor of Lloyd S. Shapley*. Cambridge University Press, 1988. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618–626, 2017. Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of vision transformers. *arXiv preprint arXiv:2103.15670*, 2021. Lloyd S Shapley. A value for n-person games. *Classics in game theory*, 69, 1997. Charles H Sherman and John L Butler. *Transducers and arrays for underwater sound*, volume 4. Springer, 2007. Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. *Journal* of big data, 6(1):1–48, 2019. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In *International conference on machine learning*, pp. 3145–3153. PMLR, 2017. Samuel Henrique Silva and Peyman Najafirad. Opportunities and challenges in deep learning adversarial robustness: A survey. *arXiv preprint arXiv:2007.00753*, 2020. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. *arXiv preprint arXiv:1312.6034*, 2013. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. *arXiv preprint arXiv:1706.03825*, 2017. Kenneth Steiglitz. *Digital Signal Processing Primer*. Courier Dover Publications, 2020. Petre Stoica, Randolph L Moses, et al. *Spectral analysis of signals*, volume 452. Pearson Prentice Hall Upper Saddle River, NJ, 2005. Ilia Sucholutsky, Ruairidh M Battleday, Katherine M Collins, Raja Marjieh, Joshua Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, and Thomas L Griffiths. On the informativeness of supervision signals. In *Uncertainty in Artificial Intelligence*, pp. 2036–2046. PMLR, 2023. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pp. 3319–3328. PMLR, 2017. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013. Lizhe Tan and Jean Jiang. *Digital signal processing: fundamentals and applications*. Academic Press, 2018. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. 
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. *Advances in Neural Information* Processing Systems, 33:18583–18599, 2020. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. *arXiv preprint arXiv:1805.12152*, 2018. Yusuke Tsuzuku and Issei Sato. On the structural sensitivity of deep convolutional networks to the directions of fourier basis functions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 51–60, 2019. Haohan Wang, Xindi Wu, Zeyi Huang, and Eric P Xing. High-frequency component helps explain the generalization of convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8684–8694, 2020. Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, and Yang Liu. Learning with noisy labels revisited: A study using real-world human annotations. *arXiv preprint arXiv:2110.12088*, 2021. Eyal Winter. The shapley value. *Handbook of game theory with economic applications*, 3:2025–2054, 2002. Tete Xiao, Xiaolong Wang, Alexei A Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. *arXiv preprint arXiv:2008.05659*, 2020. Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. *arXiv preprint arXiv:1901.06523*, 2019a. Zhi-Qin John Xu, Yaoyu Zhang, and Yanyang Xiao. Training behavior of deep neural network in frequency domain. In *International Conference on Neural Information Processing*, pp. 264–274. Springer, 2019b. Menahem E Yaari and Maya Bar-Hillel. On dividing justly. *Social choice and welfare*, 1:1–24, 1984. Koji Yokote, Takumi Kongo, and Yukihiko Funaki. Relationally equal treatment of equals and affine combinations of values for tu games. *Social Choice and Welfare*, 53:197–212, 2019. Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In 2010 IEEE Computer Society Conference on computer vision and pattern recognition, pp. 2528–2535. IEEE, 2010. Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence, 2021. Zhenyu Zhang, Xiaobo Shu, Bowen Yu, Tingwen Liu, Jiapeng Zhao, Quangang Li, and Li Guo. Distilling knowledge from well-informed soft labels for neural relation extraction. In *Proceedings of the AAAI* Conference on Artificial Intelligence, volume 34, pp. 9620–9627, 2020. Beiyao Zheng and Alan Agresti. Summarizing the predictive power of a generalized linear model. Statistics in medicine, 19(13):1771–1781, 2000. Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. In *Proceedings of the ieee conference on computer vision and pattern* recognition, pp. 4480–4488, 2016. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In *Proceedings of the IEEE conference on computer vision and pattern* recognition, pp. 2921–2929, 2016. Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Animashree Anandkumar, Jiashi Feng, and Jose M Alvarez. Understanding the robustness in vision transformers. In International Conference on Machine Learning, pp. 27378–27394. PMLR, 2022. 
## A Appendix

## B Fairness Division Axioms

Symmetry **axiom**: Let Ĩ ∈ 2^I be some spectral player coalition. For all I_i, I_j ∈ I with I_i, I_j ∉ Ĩ, the statement v(Ĩ ∪ {I_i}) = v(Ĩ ∪ {I_j}) implies ψ_i(I, v) = ψ_j(I, v). This axiom restates the '*equal treatment of equals*' principle mathematically: the 'names' of players should have no effect on the 'treatments' by the characteristic function in coalition games (Roth, 1988).

Linearity **axiom**: Let u and v be two characteristic functions. Let (I, u) and (I, v) be two coalition games. Let (u + v)(Ĩ) := u(Ĩ) + v(Ĩ) where Ĩ ∈ 2^I. The divisions of the new coalition game (I, u + v) should satisfy: ψ_i(I, u + v) = ψ_i(I, u) + ψ_i(I, v). This axiom is also known as the '*additivity* axiom' and guarantees the uniqueness of the solution of dividing payoffs among players (Roth, 1988).

Efficiency **axiom**: This axiom states that the divisions of all players must sum to the worth of the player set (the grand coalition): ∑_{i=0}^{M−1} ψ_i(I, v) = v(I).

Dummy player **axiom**: A dummy player (null player) I_* is a player who has no contribution, such that: ψ_*(I, v) = 0 and v(Ĩ ∪ {I_*}) = v(Ĩ) for all I_* ∉ Ĩ with I_* ∈ I.

Remark B.1. In the literature (Roth, 1988), the efficiency axiom and the dummy player axiom are also combined and relabeled as the carrier axiom.

## B.1 Spectral Signal-To-Noise Ratio (SNR)

Discrete Fourier Transform. The notion of 'frequency' measures how 'fast' the outputs can change with respect to the inputs. High frequency implies that small variations in inputs can cause large changes in outputs. In terms of images, the 'inputs' are the pixel spatial locations while the 'outputs' are the pixel values. Let x: (i, j) ↦ ℝ be some 2D image with dimension M × N which sends every location (i, j) to some real pixel value, where (i, j) ∈ [M] × [N]. Let F: ℝ² ↦ ℂ² be some DFT functional operator. The DFT of x is given by:

$$\mathcal{F}(x)(u,v)=\sum_{j=0}^{N-1}\sum_{i=0}^{M-1}x(i,j)\,e^{-i\,2\pi\left(\frac{u}{M}i+\frac{v}{N}j\right)}.\tag{13}$$

Point-wise energy spectral density (ESD). The ESD measures the energy quantity at a frequency. To simplify discussions, we use the *radial frequency*, which is defined as the radius r with respect to the zero frequency point (*i.e.* the frequency center). The energy is defined as the square of the frequency magnitude according to Parseval's Power Theorem. Let L_r be a circle with radius r on the spectrum of image x, as illustrated in Figure 1. The r is referred to as the radial frequency. The point-wise ESD function is given by:

$$ESD_{r}(x):=\frac{1}{|L_{r}|}\cdot\sum_{(u,v)\in L_{r}}|\mathcal{F}(x)(u,v)|^{2}\tag{14}$$

where (u, v) is the spatial frequency point and |L_r| is the circumference of L_r.

Spectral signal-to-noise ratio (SNR). The SNR can quantify signal robustness. We define the spectral SNR at radial frequency r as:

$$SNR(r):=\frac{ESD_{r}(x)}{ESD_{r}(\Delta x)}\tag{15}$$

where Δx is some perturbation. We have characterized the SNRs of some corruptions and adversarial attacks in Figure 2.

## B.2 Absence Assignment Scheme

There exist multiple choices for the assignments of the absences of spectral players in the coalition filtering design: (1) assigning constant zeros (Zeroing), (2) assigning complex Gaussian noise (Complex Gaussian) and (3) assigning the corresponding frequency components randomly sampled from other images in the same dataset (Replacement).

Zeroing. The b in Equation 5 is set to zeros.
## B.2 Absence Assignment Scheme

There exist multiple choices for assigning the absences of spectral layers in coalition filtering design: (1) assigning constant zeros (Zeroing), (2) assigning complex Gaussian noise (Complex Gaussian), and (3) assigning the corresponding frequency components randomly sampled from other images in the same dataset (Replacement).

Zeroing. The $b$ in Equation 5 is set to zeros.

Complex Gaussian. The $b$ in Equation 5 is sampled from an *i.i.d.* complex Gaussian distribution: $\mathcal{N}(\mu, \frac{\sigma^2}{2}) + \mathrm{i}\,\mathcal{N}(\mu, \frac{\sigma^2}{2})$.

Replacement. The $b$ in Equation 5 is set to $b = \mathscr{F}(x^*)$, where $x^* \sim \mathcal{X}$ is a randomly sampled image from some set $\mathcal{X}$.

In our implementation, we simply choose 'Zeroing': $b = 0$. Figure 13 shows example images filtered with the above three strategies, together with examples of the measured spectral importance distributions. Empirically, the three strategies perform rather similarly. In this research, we do not unfold the discussion of masking strategy choices.

![24_image_0.png](24_image_0.png)

Figure 13: Three absence assignment strategies: (1) assigning the spectral absences constant zeros (Zeroing), (2) assigning the spectral absences Gaussian noise (Complex Gaussian), and (3) randomly sampling spectral components from the same image dataset (Replacement). The standard complex Gaussian distribution is given by $\mathcal{N}(0, \frac{1}{2}) + \mathrm{i}\,\mathcal{N}(0, \frac{1}{2})$. Figures (a), (b) and (c) show the coalition filtering results with the spectral coalition $\{\mathcal{I}_0\}$. Figures (d), (e) and (f) show the coalition filtering results with the spectral coalition $\{\mathcal{I}_1, \mathcal{I}_2, \mathcal{I}_3, \mathcal{I}_4, \mathcal{I}_5, \mathcal{I}_6, \mathcal{I}_7\}$. Figures (g) to (l) show examples of the measured spectral importance distributions of a *resnet18* and an *efficientnet_v2_s* (both pre-trained on *ImageNet*) under the three assignment strategies.
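Since Equation 5 is defined in the main text and not reproduced here, the following sketch reconstructs the coalition filtering pipeline under our own assumptions: equal-width ℓ∞ rings as the M spectral players (cf. Appendix C.1) and the 'Zeroing' absence assignment $b = 0$. All function names are ours.

```python
import numpy as np

def linf_band_masks(M_bands: int, shape: tuple) -> np.ndarray:
    """Partition a centred 2D spectrum into M_bands nested l-infinity
    rings ('spectral players'); assumed equal-width rings."""
    H, W = shape
    u = np.abs(np.arange(H) - H // 2)
    v = np.abs(np.arange(W) - W // 2)
    linf = np.maximum(u[:, None], v[None, :])  # l-inf distance from centre
    edges = np.linspace(0, linf.max() + 1, M_bands + 1)
    return np.stack([(linf >= edges[k]) & (linf < edges[k + 1])
                     for k in range(M_bands)])

def coalition_filter(x: np.ndarray, coalition: set, masks: np.ndarray,
                     b: complex = 0.0) -> np.ndarray:
    """Keep the spectral players in `coalition`; assign the absence
    value b (here: Zeroing) to all other bands, then invert the DFT."""
    spectrum = np.fft.fftshift(np.fft.fft2(x))
    keep = np.zeros(x.shape, dtype=bool)
    for k in coalition:
        keep |= masks[k]
    spectrum = np.where(keep, spectrum, b)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Example: keep only the lowest-frequency player of an 8-band partition.
img = np.random.default_rng(0).random((64, 64))
masks = linf_band_masks(8, img.shape)
filtered = coalition_filter(img, {0}, masks)
```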
## B.3 Proof For Spectral Coalition Information Identity

Proof for Spectral Coalition Information Identity. Suppose the probability measures $P(x)$, $P(x, y)$, $P(y|x)$, and $Q(y|x)$ are absolutely continuous with respect to $x$ on the domain $\mathcal{X} \circ \mathcal{I}_r$.

$$I(\mathcal{X}\circ\mathcal{I}_{r},\mathcal{Y})=\int_{\mathcal{X}\circ\mathcal{I}_{r}}\sum_{y\in\mathcal{Y}}P(x,y)\cdot\log\frac{P(x,y)}{P(x)\cdot P(y)}\,dx\tag{16}$$
$$=\int_{\mathcal{X}\circ\mathcal{I}_{r}}\sum_{y\in\mathcal{Y}}P(x,y)\cdot\log\left(\frac{P(y|x)\cdot P(x)}{P(y)\cdot P(x)}\cdot\frac{Q(y|x)}{Q(y|x)}\right)dx\tag{17}$$
$$=\int_{\mathcal{X}\circ\mathcal{I}_{r}}\sum_{y\in\mathcal{Y}}P(x,y)\cdot\log\left(\frac{P(y|x)}{Q(y|x)}\cdot\frac{1}{P(y)}\cdot Q(y|x)\right)dx\tag{18}$$
$$=\int_{\mathcal{X}\circ\mathcal{I}_{r}}P(x)\left(\sum_{y\in\mathcal{Y}}P(y|x)\cdot\log\frac{P(y|x)}{Q(y|x)}\right)dx\tag{19}$$
$$\quad-\sum_{y\in\mathcal{Y}}\left(\int_{\mathcal{X}\circ\mathcal{I}_{r}}P(x,y)\,dx\right)\log P(y)\tag{20}$$
$$\quad+\int_{\mathcal{X}\circ\mathcal{I}_{r}}\sum_{y\in\mathcal{Y}}P(x,y)\cdot\log Q(y|x)\,dx\tag{21}$$
$$=\underbrace{\mathbb{E}_{x\in\mathcal{X}\circ\mathcal{I}_{r}}\mathrm{KL}\big(P(y|x)\,\|\,Q(y|x)\big)}_{\text{point-wise}}+H(\mathcal{Y})+\int_{\mathcal{X}\circ\mathcal{I}_{r}}P(x)\left(\sum_{y\in\mathcal{Y}}P(y|x)\cdot\log Q(y|x)\right)dx\tag{22}$$
$$=\underbrace{\mathbb{E}_{x\in\mathcal{X}\circ\mathcal{I}_{r}}\mathrm{KL}\big(P(y|x)\,\|\,Q(y|x)\big)}_{\text{point-wise}}+H(\mathcal{Y})+\mathbb{E}_{x\in\mathcal{X}\circ\mathcal{I}_{r}}\sum_{y\in\mathcal{Y}}P(y|x)\cdot\log Q(y|x)\tag{23}$$
$$=\underbrace{\mathbb{E}_{x\in\mathcal{X}\circ\mathcal{I}_{r}}\mathrm{KL}\big(P(y|x)\,\|\,Q(y|x)\big)}_{\text{point-wise}}+H(\mathcal{Y})+v(\mathcal{I}_{r})+C\tag{24}$$

where $H(\mathcal{Y})$ is the Shannon entropy of the label set $\mathcal{Y}$.

## C Information Quantity Relationship In Spectral Coalitions

![26_image_0.png](26_image_0.png)

Figure 14: Information quantity relationship. This shows the theoretical information quantity relationship between what the characteristic function $v$ measures and the mutual information $I(\mathcal{X} \circ \mathcal{I}_r, \mathcal{Y})$. For a given coalition $\mathcal{I}_r$, a dataset $\langle \mathcal{X}, \mathcal{Y} \rangle$ and a classifier $Q$, $v$ measures how much information the classifier $Q$ utilises in its decisions. The measured results are then used to compute the marginal contributions of features. (Legend: constant term; variable term, KL divergence; variable term, characteristic function.)

## C.1 Partitioning The Spectrum With The ℓ∞ Ball Over The ℓ2 Ball

![27_image_0.png](27_image_0.png)

Figure 15: Two spectral band partitioning schemes: spectral players with the ℓ∞ partition, the pixel density of $\mathcal{I}_{M-1}$ with the ℓ2 ball, and the pixel density of $\mathcal{I}_{M-1}$ with the ℓ∞ ball.

This shows the motivation for choosing the ℓ∞ ball over the ℓ2 ball when partitioning the frequency domain into $M$ bands (i.e., $M$ 'spectral players') over the 2D Fourier spectrum. The frequency data density of the spectral players with ℓ∞ remains constant. However, the frequency data density of the spectral players with ℓ2 is not constant, since some frequency components are not present. This motivates us to empirically choose the ℓ∞ metric to form spectral players in our implementation.

## C.2 Normalizing Summarized SIDs

We normalize the above result and set:

$$S(v):=\frac{\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|}{\sup\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|}\tag{25}$$
$$=\frac{\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|}{\left|\sup\|\beta\|_{2}\cdot\|\bar{\Psi}(v)\|_{2}-\frac{\|\beta\|_{1}}{M}\right|}\tag{26}$$
$$=\frac{\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|}{\left|\|\beta\|_{2}-\frac{\|\beta\|_{1}}{M}\right|}\tag{27}$$
$$=\left|\frac{\bar{\beta}^{T}\bar{\Psi}(v)-\frac{1}{M}\frac{\|\beta\|_{1}}{\|\beta\|_{2}}}{1-\frac{1}{M}\frac{\|\beta\|_{1}}{\|\beta\|_{2}}}\right|\tag{28}$$

where $\bar{\beta} = \frac{\beta}{\|\beta\|_{2}}$ and $\sup\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|$ is derived by:

$$\sup\left|\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|=\left|\sup\beta^{T}\bar{\Psi}(v)-\frac{\|\beta\|_{1}}{M}\right|\tag{29}$$
$$=\left|\sup\|\beta\|_{2}\cdot\|\bar{\Psi}(v)\|_{2}-\frac{\|\beta\|_{1}}{M}\right|\quad\text{s.t.}\ \|\bar{\Psi}(v)\|_{1}=1\tag{30}$$
$$=\left|\|\beta\|_{2}-\frac{\|\beta\|_{1}}{M}\right|\quad\text{since}\ \|\bar{\Psi}(v)\|_{2}^{2}\leqslant\|\bar{\Psi}(v)\|_{1}^{2}.\tag{31}$$

Setting $\eta = \frac{1}{M}\frac{\|\beta\|_{1}}{\|\beta\|_{2}}$:

$$S(v)=\left|\frac{\bar{\beta}^{T}\bar{\Psi}(v)-\eta}{1-\eta}\right|.\tag{32}$$

Q.E.D.
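For reference, a direct transcription (ours, for illustration) of Equation (32) into code. Here `beta` is the weighting vector over the $M$ spectral players and `psi_bar` the normalized SID $\bar{\Psi}(v)$, both names assumed from the derivation above.

```python
import numpy as np

def summarized_sid_score(beta: np.ndarray, psi_bar: np.ndarray) -> float:
    """Normalized summary S(v) of a spectral importance distribution
    (Equation 32). `psi_bar` must be non-negative and sum to one."""
    M = len(psi_bar)
    beta_bar = beta / np.linalg.norm(beta, 2)
    eta = np.linalg.norm(beta, 1) / (M * np.linalg.norm(beta, 2))
    return float(abs((beta_bar @ psi_bar - eta) / (1.0 - eta)))

# Example: a SID concentrated on low frequencies, scored with
# monotonically decreasing weights over M = 8 spectral players.
psi_bar = np.array([0.4, 0.2, 0.1, 0.1, 0.08, 0.06, 0.04, 0.02])
beta = np.linspace(1.0, 0.0, num=8)
print(summarized_sid_score(beta, psi_bar))
```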
## C.3 How Many Samples Are Sufficient?

![29_image_0.png](29_image_0.png)

Figure 16: Convergence of relative estimation errors with respect to the number of samples $K$. The errors are measured by $\frac{1}{M}\|\Psi^{(i+1)}(v)-\Psi^{(i)}(v)\|_{1}$, where $\Psi^{(i)}(v)$ denotes the $i$-th measured spectral importance distribution with respect to the characteristic function $v$. The experiments are conducted with *resnet18* on (a) *CIFAR10*, (b) *CIFAR100* and (c) *ImageNet*.

Error bound analysis. Let $K$ be the number of samples of some baseline dataset. Let:

$$\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i}):=v(\tilde{\mathcal{I}}\cup\{\mathcal{I}_{i}\})-v(\tilde{\mathcal{I}})\tag{33}$$

and

$$\Delta v(\mathcal{I}_{i}):=\left(\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i})\right)_{\tilde{\mathcal{I}}\subseteq\mathcal{I}}\tag{34}$$

and

$$W:=\left(\frac{1}{M}\binom{M-1}{|\tilde{\mathcal{I}}|}^{-1}\right)_{\tilde{\mathcal{I}}\subseteq\mathcal{I}}.\tag{35}$$

Hence:

$$\psi_{i}(\mathcal{I},v)=W^{T}\Delta v(\mathcal{I}_{i})\tag{36}$$

where $\|W\|_{1}=1$ since $W$ is a probability distribution. Let $\bar{\psi}_{i}$, $\Delta\bar{v}(\mathcal{I}_{i})$ and $\Delta\bar{v}(\tilde{\mathcal{I}},\mathcal{I}_{i})$ be the estimations with $K$ samples using Monte Carlo sampling. The error bound in the $\ell_{1}$ norm is given by:

$$\epsilon\stackrel{{\text{def}}}{{=}}\sup_{i}\|\bar{\psi}_{i}(\mathcal{I},v)-\psi_{i}(\mathcal{I},v)\|_{1}=\sup_{i}\|W^{T}\Delta\bar{v}(\mathcal{I}_{i})-W^{T}\Delta v(\mathcal{I}_{i})\|_{1}\tag{37}$$
$$\leqslant\sup_{i}\|W\|_{1}\cdot\|\Delta\bar{v}(\mathcal{I}_{i})-\Delta v(\mathcal{I}_{i})\|_{\infty}\quad\text{(H\"older's inequality)}\tag{38}$$
$$=\sup_{i}\Big\|\sum_{\tilde{\mathcal{I}}\subseteq\mathcal{I}\setminus\mathcal{I}_{i}}\big(\Delta\bar{v}(\tilde{\mathcal{I}},\mathcal{I}_{i})-\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i})\big)\Big\|_{\infty}\tag{39}$$
$$\leqslant\sup_{i}2^{M-1}\cdot\sup_{\tilde{\mathcal{I}}}\|\Delta\bar{v}(\tilde{\mathcal{I}},\mathcal{I}_{i})-\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i})\|_{\infty}\tag{40}$$
$$=\sup_{i}2^{M-1}\cdot\sup_{\tilde{\mathcal{I}}}\|\Delta\bar{v}(\tilde{\mathcal{I}},\mathcal{I}_{i})-\Delta v(\tilde{\mathcal{I}},\mathcal{I}_{i})\|_{1}\tag{41}$$
$$\leqslant2^{M-1}\cdot\left[\frac{Var(\Delta\bar{v})}{K}\right]^{\frac{1}{2}}\tag{42}$$

where $Var(\Delta\bar{v})$ gives the upper bound of the variance of $\Delta\bar{v}(\tilde{\mathcal{I}},\mathcal{I}_{i})$.
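To make the quantities in Equations (33)-(36) concrete, here is a minimal sketch (ours) that computes the exact Shapley value $\psi_i = W^T \Delta v(\mathcal{I}_i)$ for a small $M$. In the paper's setting, $v$ would evaluate the classifier on coalition-filtered images, and the error bound above covers the Monte Carlo case where $v$ is only sampled; here $v$ is a toy black box.

```python
from itertools import combinations
from math import comb

def shapley_values(M: int, v) -> list:
    """Exact Shapley value psi_i = W^T dv(I_i) (Equation 36) for M
    players, where v maps a frozenset of player indices to a payoff."""
    players = range(M)
    psi = []
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(M):
            w = 1.0 / (M * comb(M - 1, size))  # entries of W (Eq. 35)
            for subset in combinations(others, size):
                s = frozenset(subset)
                total += w * (v(s | {i}) - v(s))  # dv (Eq. 33)
        psi.append(total)
    return psi

# Example with a toy additive characteristic function where low-frequency
# players (small indices) contribute more; the Shapley values then
# recover exactly the per-player weights.
weights = [0.4, 0.3, 0.2, 0.1]
v = lambda coalition: sum(weights[p] for p in coalition)
print(shapley_values(4, v))  # ~[0.4, 0.3, 0.2, 0.1]
```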
Review 1:

Summary: The authors apply Shapley value theory to quantify the predictive power of different image features. A decomposition of the image into spectral bands provides a set of *players* which are then used to form *spectral coalitions*. Each of those resulting coalitions can be used as input to a (frozen) classifier, by inverting the decomposing transformation to obtain a filtered image without the excluded spectral bands. The value assigned to a coalition is then defined as the gains in terms of the negative cross-entropy obtained when the spectral bands are included compared to when they are excluded. This information gain is established more directly by expressing the mutual information as a sum of the Shannon entropy and the value function, plus additional terms including a KL-divergence between the predicted and ground-truth labels. The proposed value decomposition can be used to derive a new spectral robustness score, which the authors proceed to apply in a series of experiments establishing its correlation with previously proposed robustness metrics, as well as its application to two case studies in robustness evaluation.

Strengths and Weaknesses:

Strengths:
- Overall, the paper is well presented and easy to read.
- Offers a formal approach to (robustness) attributions for each spectral band.
- Sheds new light on known phenomena such as the relative robustness of high/low frequency features. (Fig. 6)
- Sheds new light on relevant questions such as the relative robustness of ViT vs ConvNets and small/large models. (Fig. 10)

Weakness:
- There seems to be a bit of a gap in the presented study, which also leads to a bit of disconnect while reading through the paper, since the presentation (title + intro) focuses more heavily on the intended application rather than the new technique and formulations presented. It would have been nice to spend more time on the new techniques and showcase their applications to a wider range of problems, rather than just perturbation robustness. Indeed, the abstract makes a better case for the techniques developed, compared to the (intro + title), by explaining how they offer a formal tool to quantify the attributions of different spectral bands, which has a wider impact beyond robustness considerations.

Requested Changes:

Abstract:
=======
- "model perturbation robustness mechanisms" is a very complex string. Even the simpler "perturbation robustness" still needs to be defined, in contrast to the more familiar notion of adversarial robustness. [R1] Please clarify the scope of the paper by defining "perturbation robustness" first, before discussing the issue of understanding its mechanisms. In doing so, please use simpler sentence structures, e.g., the mechanisms of perturbation robustness of machine learning models.
- Given the confusion about the scope, and the complexity of the language in the beginning of the abstract, I find it extra distracting to prematurely switch attention to the specifics of the approach, let alone its name or acronyms. [R2] Please defer naming the method till after the scope/motivation and a summary of the technical ideas have been explained clearly.

Section 1:
=======
- The authors set out to study robustness "mechanisms" as the first thrust motivating this paper (i.e. existing robustness scores failing to provide further insights into the robustness mechanisms).
It would help to [R3] please elaborate on what is meant by robustness mechanisms, perhaps by enumerating different considerations over different stages of the model development cycle. It should be fine to focus on those aspects actually studied in the paper, where a reference to recent surveys or concurrent work may suffice for other such considerations. This would help make it clear how much progress is achieved along this direction through the presented contributions -- which is not very clear to me right now.
- I like the study of label-noise impact, presented in Section 4.3, as a more interesting example of a "mechanism" since it touches upon the varied scenarios encountered in the training stage. So, I wonder what other kinds of studies can be designed to probe more aspects of robustness mechanisms.

Section 3
=======
- Although the connection to information theory was advertised as early as the abstract, I see it only appears in the last part of Section 3.3. It may be fine to defer the full proof to the appendix, but [R4] please include at least a high level narrative or a walkthrough of a simple numerical example to help make the connection to information theory more clear and accessible.
- It would be nice to [R5] distinguish the game theory framework (coalitions and Shapley value) from the information theory connection, which arises due to the specific characteristic function used.
- [R6] please make sure to cite and discuss more of the works on interpretability that utilized Shapley values, rather than just SAGE which is mentioned very briefly. For example:
  - Kumar, Indra, Carlos Scheidegger, Suresh Venkatasubramanian, and Sorelle Friedler. "Shapley Residuals: Quantifying the limits of the Shapley value for explanations." Advances in Neural Information Processing Systems 34 (2021): 26598-26608.
- Some concurrent work also seems relevant:
  - Herbinger, Julia, Bernd Bischl, and Giuseppe Casalicchio. "Decomposing global feature effects based on feature interactions." arXiv preprint arXiv:2306.00541 (2023).
  - Huang, Xuanxiang, and Joao Marques-Silva. "The inadequacy of Shapley values for explainability." arXiv preprint arXiv:2302.08160 (2023).
- Spectral importance distribution (SID): It would also help to [R7] please clearly define $\Psi$ as a vector in $\mathbb{R}^M$. I would also recommend to reserve the asterisk to optimal values, rather than normalized values which typically use a bar.
- Spectral coalition filtering: I would recommend to [R8] please refer to $\mathbb{T}$ consistently as a mask map or an indicator function - I would personally prefer to call it a mask map.
- It would help to [R9] please clarify that spectral coalition images are "fed into" the classifier models (either trained or un-trained) for inference only, and that the proposed technique does not require training a dedicated model for each coalition.

Section 4:
=======
- The experiment with label noise is quite intriguing. In regards to its conclusion, i.e., the last sentence in bold at the end of Section 4.3, I'm not sure if we're looking at causation or correlation. I wonder if forcing the model to give more weight to the robust features, i.e., lower frequency bands, or completely suppress non-robust features, i.e., high frequency bands, would yield a more robust model. [R9] It would be nice to include such an experiment.
- More generally, following the previous point, it would be nice to consider using the proposed SID as a training objective.
That is, if we have a clear idea of what the SID for a robust model looks like, we can encourage the models we train to have a robust SID profile.

Nitpicking
========
- deploy -> apply

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The authors present an approach for robustness quantification for computer vision models. Building on work from information theory, eXplainable Artificial Intelligence (XAI), and game theory, the authors derive a model-agnostic workflow for perturbing images and evaluating robustness w.r.t. those perturbations in frequency space. Compared to other XAI approaches that investigate models in terms of single pixels, patches, concepts or entire images, adding the classical spatial frequency decomposition as semantic concepts for XAI is a great idea. And since many perturbations are used both in present-day training of computer vision models and for XAI purposes, a better understanding of these perturbations w.r.t. their impact on predictive performance is timely and relevant. While the structure of the manuscript is a bit unconventional (for instance separate subsections of the experiments have mirrored the classical manuscript structure with methods/results/discussion), this actually works well in parts; in other parts I sometimes found it difficult to understand whether this work is about XAI, about robustness, or about any of the (very relevant and well tested) empirical questions in the experimental sections. It feels a bit like there is a lot of work that went into this with a theoretical motivation and then the authors looked for an empirical use case. As in: we'd like to do something with information theory and spectral image decompositions, but computer vision models nowadays all work in decompositions induced by neural networks, so we need to somehow draw this connection and find an application of those pre-ImageNetMoment methods to state-of-the-art applications. One way of reducing potential confusion would be to highlight directly in the abstract some of these (I think) great empirical findings on the relative robustness of some models w.r.t. some types of perturbations.

Strengths and Weaknesses:

## Strengths
* Developing solutions for assessing robustness of computer vision models is a timely and highly relevant topic
* Comparing robustness of models empirically is a valuable contribution
* The authors do a great job at embedding their work in the existing literature
* I think it's great that the authors make an effort to explain their results. For instance the aspect that efficientnet seems to be special and how that is related to the frequency distribution
* In a way the study helps to connect the pre-ImageNetMoment research to post-ImageNetMoment research. Characterizing robustness in terms of spatial frequency decompositions is interesting and helpful
* I'm convinced that the proposed approach is a valuable contribution to a better understanding of robustness of computer vision models;

## Weaknesses
* Some of the figures could be improved w.r.t. detail and content
* I'm wondering whether the interpretability spin is needed / helpful for the narrative - I was at times confused about the goal of the work
* I'm not sure I understand how label noise is related to spectral perturbations of images. I mean, certainly, both affect robustness, but I wouldn't expect the image to change.
I think the finding that *"Models tend to use more non-robust features in the presence of higher label noise within training set."* is very interesting, but as the perturbation on labels is independent of the spectral perturbations, it would be helpful to comment on that relationship.

Requested Changes: For a better assessment of the pros and cons of the proposed method, it would be great if the authors could:
* Put a legend in the figures. In Fig. 9 it seems that the colors are not used consistently: ViT is pink in the upper two panels and green in the lower two. Ideally one would use the same colormap across all figures, e.g. also in Fig. 10, to enable readers to quickly understand which model is which. Also, the sizes of the circles in Figs. 8/9 seem to be relevant, but this is not explained in the figures.
* Generally I'd recommend trying to make the figures/captions as self-explanatory as possible; that will help to reduce cognitive friction when processing these central elements of the publication. More concretely, one could for instance guide the attention to the aspects that the authors deem relevant for the reader.
* Maybe one could elaborate on Assumption 3.7 as to where that assumption comes from. It reads a bit like this is an empirical finding that gets explained post-hoc, while it seems more intuitive that this is not a property of the model or the perturbations, but rather due to the empirical distribution of natural images in spatial frequency space.
* In Fig. 10b, what is shown on the x and y axes, respectively?
* Also Fig. 10b: it's great that the authors put the research question in that caption, but ideally the caption would contain the answer (or rather: the figure should contain the answer and the caption draws the attention of the reader to the relevant aspects in the figure). I know it's stated in the text clearly and in bold face, but redundancy always helps and having the relevant information at the relevant positions helps, too.
* The manuscript could be strengthened by being more specific / detailed at times. For instance when the authors write: *"efficientnet, can achieve comparable robustness performance"* it would be helpful to mention how robustness was measured in that case.

Broader Impact Concerns: No concerns, on the contrary: Testing for robustness and explaining potential shortcomings is key to responsible usage of ML models; I'm convinced this work can make a valuable contribution towards better assessment of model robustness.

==================================================

Review 3:

Summary: This paper considered interpreting global perturbation robustness through the understanding of spectral importance. Specifically, this paper considered the Shapley value in the context of image spectral space to analyze the importance of high-/low-frequency signals.

Strengths and Weaknesses: I feel like this paper proposed a very interesting and principled approach to understand robustness in the context of images. I really like the idea of incorporating the inductive bias on the image itself. Overall, I have no major concerns but some minor points to be addressed. **Please see the requested changes for details.**

Requested Changes:
- Page 1: I would think the title/abstract should highlight the context of the image modality. This paper proposed a compelling concept for images, but this does not imply it's insightful for all kinds of data modalities, such as tabular, graph and NLP.
The current title is too general and gives a wrong impression that it works for all kinds of data modalities. Indeed, the whole paper only focuses on images (I believe this is great! It's just to adjust the paper scope).
- Page 1, Sec 1, Introduction: I have similar concerns with the introduction section. It clearly states that the main scope is image modelling and robustness; therefore the abstract and title should be revised accordingly. In paragraph 3, the SNR is introduced in an unnatural manner, and I think it's better to put it into the background section rather than the intro. Besides, Appendix B3 here still seems quite awkward for new audiences. In the introduction section, I have a feeling that this is written for the reviewers who had already reviewed your paper and were checking it for a second round. However, for a new reviewer like me, I still find this section a bit hard to digest. I would recommend a smoother transition for new audiences.
- Page 5, Sec 3.1: I could not understand why we need to normalize the SID.
- Page 6, Sec 3.3: I feel like the information theoretical interpretation could not be counted as a novel contribution. In general, such a technique has been demonstrated in the literature, such as [1]. A clarification is required. [1] Understanding Global Feature Contributions With Additive Importance Measures. NeurIPS 2020.
- Page 8: Could you explain a bit more about the role of \beta?
- Page 14: I really like the discussion in the limitations section. Is it possible to elaborate a bit more on low-/high-frequency adversarial attacks?

Broader Impact Concerns: Not applicable

==================================================

Metareview:

Recommendation: Accept as is

Comment: In this paper, the authors proposed a method, I-ASIDE, to evaluate the robustness of image models against frequency distribution. I-ASIDE divides the input image's frequency into bins, masks them, and assesses the contribution of each frequency bin using the Shapley value. The authors revealed through experiments that image models tend to focus on low-frequency components. Additionally, they reported that image models with a higher bias towards frequency contribution are more vulnerable to adversarial noise and other such perturbations. The proposal of I-ASIDE for investigating frequency contributions and the insights into the relationship between these frequency components and the robustness of image models are highly intriguing. The authors appropriately revised the paper to clarify its message through communication with the reviewers. As a result, all three reviewers agreed to accept this paper.

==================================================
# A Survey Of Temporal Credit Assignment In Deep Reinforcement Learning

Eduardo Pignatelli *e.pignatelli@ucl.ac.uk* University College London

Laura Toni *l.toni@ucl.ac.uk* University College London

Johan Ferret *jferret@google.com* Google DeepMind

Hado van Hasselt *hado@google.com* Google DeepMind

Matthieu Geist *mfgeist@google.com* Google DeepMind

Thomas Mesnard *mesnard@google.com* Google DeepMind

Olivier Pietquin *pietquin@google.com* Google DeepMind

Reviewed on OpenReview: *https://openreview.net/forum?id=bNtr6SLgZf*

## Abstract

The Credit Assignment Problem (CAP) refers to the longstanding challenge of Reinforcement Learning (RL) agents to associate actions with their long-term consequences. Solving the CAP is a crucial step towards the successful deployment of RL in the real world since most decision problems provide feedback that is noisy, delayed, and with little or no information about the causes. These conditions make it hard to distinguish serendipitous outcomes from those caused by informed decision-making. However, the mathematical nature of credit and the CAP remains poorly understood and defined. In this survey, we review the state of the art of Temporal Credit Assignment (CA) in deep RL. We propose a unifying formalism for credit that enables equitable comparisons of state-of-the-art algorithms and improves our understanding of the trade-offs between the various methods. We cast the CAP as the problem of learning the influence of an action over an outcome from a finite amount of experience. We discuss the challenges posed by delayed effects, *transpositions*, and a lack of action influence, and analyse how existing methods aim to address them. Finally, we survey the protocols to evaluate a credit assignment method and suggest ways to diagnose the sources of struggle for different methods. Overall, this survey provides an overview of the field for new-entry practitioners and researchers, it offers a coherent perspective for scholars looking to expedite the starting stages of a new study on the CAP, and it suggests potential directions for future research.

## 1 Introduction

RL is poised to impact many real world problems that require sequential decision making, such as strategy (Silver et al., 2016; 2018; Schrittwieser et al., 2020; Anthony et al., 2020; Vinyals et al., 2019; Perolat et al., 2022) and arcade video games (Mnih et al., 2013; 2015; Badia et al., 2020; Wurman et al., 2022), climate control (Wang & Hong, 2020), energy management (Gao, 2014), car driving (Filos et al., 2020) and stratospheric balloon navigation (Bellemare et al., 2020), designing circuits (Mirhoseini et al., 2020), cybersecurity (Nguyen & Reddi, 2021), robotics (Kormushev et al., 2013), or physics (Degrave et al., 2022). One fundamental mechanism allowing RL agents to succeed in these scenarios is their ability to evaluate the *influence* of their actions over outcomes - e.g., a win, a loss, a particular event, a payoff. Often, these outcomes are consequences of isolated decisions taken in a very remote past: actions can have long-term effects. The problem of learning to associate actions with distant, future outcomes is known as the temporal Credit Assignment Problem (CAP): to distribute the credit of success among the multitude of decisions involved (Minsky, 1961). Overall, the *influence* that an action has on an outcome represents *knowledge* in the form of *associations* between actions and outcomes (Sutton et al., 2011; Zhang et al., 2020).
These associations constitute the scaffolding that agents can use to deduce, reason, improve and act to address decision-making problems and ultimately improve their data efficiency. Solving the CAP is paramount since most decision problems have two important characteristics: they take a long time to complete, and they seldom provide immediate feedback, but often *with delay* and little insight as to which actions caused it. These conditions produce environments where the feedback signal is weak, noisy, or deceiving, and the ability to separate serendipitous outcomes from those caused by informed decision-making becomes a hard challenge. Furthermore, as these environments grow in complexity with the aim to scale to real-world tasks (Rahmandad et al., 2009; Luoma et al., 2017), the actions taken by an agent affect an increasingly vanishing part of the outcome. In these conditions, it becomes challenging to learn value functions that accurately represent the *influence* of an action and to be able to distinguish and order the relative long-term values of different actions. In fact, canonical Deep Reinforcement Learning (Deep RL) solutions to *control* are often brittle to hyperparameter choices (Henderson et al., 2018), inelastic to generalise zero-shot to different tasks (Kirk et al., 2023), prone to overfitting (Behzadan & Hsu, 2019; Wang et al., 2022), and sample-inefficient (Ye et al., 2021; Kapturowski et al., 2023). Overall, building a solid foundation of knowledge that can unlock solutions to complex problems beyond those already solved calls for better CA techniques (Mesnard et al., 2021). In the current state of RL, *action values* are a key proxy for *action influence*. Values *actualise* a return by synthesising statistics of the *future* into properties of the *present*: they transform a signal dependent on the future into one dependent only on the present. Recently, the advent of Deep RL (Arulkumaran et al., 2017) granted access to new avenues to express credit through values, either by using memory (Goyal et al., 2019; Hung et al., 2019), associative memory (Hung et al., 2019; Ferret et al., 2021a; Raposo et al., 2021), counterfactuals (Mesnard et al., 2021), planning (Edwards et al., 2018; Goyal et al., 2019; van Hasselt et al., 2021) or by meta-learning (Xu et al., 2018; Houthooft et al., 2018; Oh et al., 2020; Xu et al., 2020; Zahavy et al., 2020). The research on the CAP is now fervent, with a rapidly growing corpus of works.

Motivation. Despite its central role, there is little discussion on the precise mathematical nature of credit. While these proxies are sufficient to unlock solutions to complex tasks, it remains unclear where to draw the line between a generic measure of action influence and *credit*. Existing works focus on partial aspects or sub-problems (Hung et al., 2019; Arjona-Medina et al., 2019; Arumugam et al., 2021) and not all works refer to the CAP explicitly in their text (Andrychowicz et al., 2017; Nota et al., 2021; Goyal et al., 2019), despite their findings providing relevant contributions to address the problem. The resulting literature is fragmented and lacks a space to connect recent works and put their efforts in perspective for the future. The field still holds open questions:

Q1. *What* is the *credit* of an action? How is it different from an *action value*? And what is the CAP? What in words, and what in mathematics?

Q2. How do agents learn to *assign* credit? What are the main methods in the literature and how can they be organised?

Q3.
How can we *evaluate* whether a method is improving on a challenge? How can we monitor advancements?

Goals. Here, we propose potential answers to these questions and set out to realign the fundamental issue raised by Minsky (1961) with the Deep RL framework. Our main goal is to provide an overview of the field to new-entry practitioners and researchers, and, for scholars looking to develop the field further, to put the heterogeneous set of works into a comprehensive, coherent perspective. Lastly, we aim to reconnect works whose findings are relevant for the CAP, but that do not refer to it directly. To the best of our knowledge, the work by Ferret (2022, Chapter 4) is the only effort in this direction, and the literature offers no explicit surveys on the temporal CA problem in Deep RL.

Scope. The survey focuses on temporal CA in single-agent Deep RL, and the problems of (i) quantifying the influence of an action mathematically and formalising a mathematical objective for the CA problem, (ii) defining its challenges and categorising the existing methods to learn the quantities above, and (iii) defining a suitable evaluation protocol to monitor the advancement of the field. We do not discuss *structural* CA in Deep Neural Networks (DNNs), that is, the problem of assigning credit or blame to individual parameters of a DNN (Schmidhuber, 2015; Balduzzi et al., 2015). We also do not discuss CA in multi-agent RL, that is, to ascertain which agents are responsible for creating good reinforcement signals (Chang et al., 2003; Foerster et al., 2018). When credit (assignment) is used without any preceding adjective, we always refer to *temporal* credit (assignment). In particular, with the adjective *temporal* we refer to the fact that "each ultimate success is associated with a vast number of internal decisions" (Minsky, 1961) and that these decisions, together with states and rewards, are arranged to form a *temporal* sequence. The survey focuses on Deep RL. In surveying existing formalisms and methods, we only look at the Deep RL literature, and when proposing new ones, we tailor them to Deep RL theories and applications. We exclude from the review methods specifically designed to solve decision problems with linear or tabular RL, as they do not bode well for scaling to complex problems.

Outline. We address Q1., Q2. and Q3. in the three major sections of the manuscript. Respectively:

- **Section** 4 addresses Q1., proposing a definition of credit and the CAP and providing a survey of action influence measures.
- **Section** 5 and **Section** 6 address Q2., discussing the key challenges to solving the CAP and the existing methods to assign credit, respectively.
- **Section** 7 answers Q3., reviewing the problem setup, the metrics, and the evaluation protocols to monitor advancements in the field.
- **Section** 8 summarises the main points of the manuscript and provides a critical discussion to highlight the open challenges.

For each question, we contribute by: (a) systematising *existing works* into a simpler, coherent space; (b) discussing it, and (c) synthesising our perspective into a unifying formalism. Table 1 outlines the suggested reading flow according to the type of reader.
| Reader type | Suggested Flow |
|-----------------------------------|---------------------------|
| Specialised CA scholar | 1 → 8 → 4 → 5 → 6 → 7 → 2 |
| RL researcher | 1 → 8 → 4 → 5 → 6 → 7 |
| Deep Learning researcher | 1 → 3 → 4 → 5 → 6 → 7 → 8 |
| Practitioner (applied researcher) | 6 → 4.4 → 3 |
| Proposing a new CA method | 8 → 7 → 6 → 2 → 4 |

Table 1: Suggested flow of reading by type of reader to support the outline in Section 1. Numbers represent section numbers.

## 2 Related Work

Three existing works stand out for proposing a better understanding of the CAP explicitly. Ferret (2022, Chapter 4) designs a conceptual framework to unify and study credit assignment methods. The chapter proposes a general formalism for a range of credit assignment functions and discusses their characteristics and general desiderata. Unlike Ferret (2022, Chapter 4), we survey potential formalisms for a mathematical definition of credit (Section 4); given the new formalism, we propose an alternative view of the methods to assign credit (Section 6), and an evaluation protocol to measure future advancements in the field. Arumugam et al. (2021) analyses the CAP from an information-theoretic perspective. The work focuses on the notion of information sparsity to clarify the role of credit in solving sparse reward problems in RL. Despite the work questioning what credit is mathematically, it does not survey existing material, and it does not provide a framework that can unify existing approaches to represent credit under a single formalism. Harutyunyan et al. (2019) propose a principled method to measure the credit of an action. However, the study does not aim to survey existing methods to *measure* credit, the methods to *assign* credit, and the methods to evaluate a credit assignment method, and does not aim to organise them into a cohesive synthesis. The literature also offers surveys on related topics. We discuss them in Appendix A to preserve the fluidity of the manuscript. As a result, none of these works position the CAP in a single space that enables thorough discussion, assessment and critique. Instead, we propose a formalism that unifies the existing *quantities* that represent the influence of an action (Section 4). Based on this, we can analyse the advantages and limitations of existing measures of action influence. The resulting framework provides a way to gather the variety of existing *methods* that learn these quantities from experience (Section 6), and to monitor the advancements in solving the CAP.

## 3 Notation And Background

Here we introduce the notation that we will follow in the rest of the paper and the required background. Notations. We use calligraphic characters to denote sets and the corresponding lowercases to denote their elements, for example, x ∈ X. For a measurable space (X, Σ), we denote the set of probability measures over X with ∆(X). We use an uppercase letter X to indicate a random variable, and the notation PX to denote its distribution over the sample set X, for example, PX : X → ∆(X). When we mention a random event X (for example, a *random action*) we refer to a random draw of a specific value x ∈ X from its distribution PX and we write X ∼ PX. When a distribution is clear from the context, we omit it from the subscript and write P(X) instead of PX(X). We use $\mathds{1}_{\mathcal{Y}}(x)$ for the indicator function that maps an element x ∈ X to 1 if x ∈ Y ⊂ X and 0 otherwise. We use R to denote the set of real numbers and B = {0, 1} to denote the Boolean domain.
We use $\ell_\infty(x) = \|x\|_\infty = \sup_i |x_i|$ to denote the ℓ-infinity norm of a vector x of components xi. We write the Kullback-Leibler divergence between two discrete probability distributions PP (X) and PQ(X) with sample space X as: $D_{KL}(P_P(X)\,\|\,P_Q(X)) = \sum_{x\in\mathcal{X}} \left[P_P(x)\log\left(P_P(x)/P_Q(x)\right)\right]$.

Reinforcement Learning. We consider the problem of learning by interacting with an environment. A program (the *agent*) interacts with an *environment* by making decisions (*actions*). The action is the agent's interface with the environment. Before each action, the agent may *observe* part of the environment and take suitable actions. The action changes the state of the environment. After each action, the agent may perceive a feedback signal (the *reward*). The goal of the agent is to learn a rule of behaviour (the *policy*) that maximises the expected sum of rewards.

Markov Decision Processes (MDPs). MDPs formalise decision-making problems. This survey focuses on the most common MDP settings for Deep RL. Formally, a discounted MDP (Howard, 1960; Puterman, 2014) is defined by a tuple M = (S, A, R, µ, γ). S is a finite set of states (the *state space*) and A is a finite set of actions (the *action space*). R : S × A → [rmin, rmax] is a deterministic, bounded reward function that maps a state-action pair to a scalar reward r. γ ∈ [0, 1] is a discount factor and µ : S × A → ∆(S) is a transition kernel, which maps a state-action pair to probabilities over states. We refer to an arbitrary state s ∈ S with s, an action a ∈ A with a and a reward r ∈ [rmin, rmax] with r. Given a state-action tuple (s, a), the probability of the next random state St+1 being s′ depends on a *state-transition* distribution: Pµ(St+1 = s′ | St = s, At = a) = µ(s′|s, a), ∀ s, s′ ∈ S. We refer to St as the *random state* at time t. The probability of the action a depends on the agent's policy, which is a stationary mapping π : S → ∆(A), from a state to a probability distribution over actions. These settings give rise to a discrete-time, stateless (Markovian), Random Process (RP) with the additional notions of *actions* to represent decisions and *rewards* for a feedback signal. Given an initial state distribution Pµ0 (S0), the process begins with a random state s0 ∼ Pµ0. Starting from s0, at each time t the agent interacts with the environment by choosing an action At ∼ Pπ(·|st), observing the reward rt ∼ R(st, at) and the next state st+1 ∼ Pµ. If a state st is also an *absorbing* state (st ∈ S̄ ⊂ S), the MDP transitions to the same state st with probability 1 and reward 0, and we say that the episode terminates. We refer to the union of each temporal transition (st, at, rt, st+1) as a trajectory or episode d = {st, at, rt : 0 ≤ t ≤ T}, where T is the *horizon* of the episode. We mostly consider episodic settings where the probability of ending in an absorbing state in finite time is 1, resulting in the random horizon T. We consider discrete action spaces A = {ai : 1 ≤ i ≤ n} only. A trajectory is also a random variable in the space of all trajectories D = (S × A × R)T, and its distribution is the joint of all of its components, PD(D) = PA,S,R(s0, a1, r1, . . . , sT). Given an MDP M = (S, A, R, µ, γ), fixing a policy π produces a Markov Process (MP) Mπ and induces a distribution over trajectories Pµ,π(D). We refer to the *return* random variable Zt as the sum of discounted rewards from time t to the end of the episode, $Z_t = \sum_{k=t}^{T} \gamma^{k-t} R(S_k, A_k)$.
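As a quick numerical companion (ours, for illustration only) to the return and the KL divergence just defined:

```python
import numpy as np

def discounted_return(rewards: np.ndarray, gamma: float, t: int = 0) -> float:
    """Z_t = sum_{k=t}^{T} gamma^(k-t) * r_k for one episode."""
    tail = rewards[t:]
    discounts = gamma ** np.arange(len(tail))
    return float(np.sum(discounts * tail))

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """D_KL(P || Q) for discrete distributions on a shared support."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Example: a sparse-reward episode of horizon T = 5.
print(discounted_return(np.array([0, 0, 0, 0, 1.0]), gamma=0.9))  # 0.9^4
print(kl_divergence(np.array([0.5, 0.5]), np.array([0.9, 0.1])))
```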
The *control* objective of an RL problem is to find a policy π∗ that maximises the expected return,

$$\pi^{*}\in\operatorname*{argmax}_{\pi}\mathbb{E}_{\mu,\pi}\left[\sum_{t=0}^{T}\gamma^{t}R(S_{t},A_{t})\right]=\mathbb{E}\left[Z_{0}\right].\tag{1}$$

Partially-Observable MDPs (POMDPs). POMDPs are MDPs in which agents do not get to observe a true state of the environment, but only a transformation of it, and are specified with an additional tuple ⟨O, µO⟩, where O is an observation space, and µO : S → ∆(O) is an observation kernel, which maps the true environment state to observation probabilities. Because transitioning between observations is not Markovian, policies are a mapping from partial *trajectories*, which we denote as *histories*, to actions. Histories are sequences of transitions $h_t = \{O_0\} \cup \{A_k, R_k, O_{k+1} : 0 < k < t - 1\} \in (\mathcal{O} \times \mathcal{A} \times \mathbb{R})^t = \mathcal{H}$.

Generalised Policy Iteration (GPI). We now introduce the concept of value functions. The state value function of a policy π is the expected return of the policy from state st: vπ(s) = Eπ,µ[Zt|St = s]. The action-value function (or Q-function) of a policy π is the expected return of the policy from state st if the agent takes at: qπ(s, a) = Eπ,µ[Zt|St = s, At = a]. Policy Evaluation (PE) is then the process that maps a policy π to its value function. A canonical PE procedure starts from an arbitrary value function V0 and iteratively applies the Bellman operator T, such that:

$$\hat{v}_{k+1}^{\pi}(S_{t})=\mathcal{T}^{\pi}[\hat{v}_{k}^{\pi}(S_{t})]:=\mathbb{E}_{\pi,\mu}\left[R(S_{t},A_{t})+\gamma\hat{v}_{k}(S_{t+1})\right],\tag{2}$$

where vˆk denotes the value approximation at iteration k, At ∼ Pπ(·|St), and St+1 ∼ Pµ,π(·|St, At). The Bellman operator is a γ-contraction in the ℓ∞ and the ℓ2 norms, and its fixed point is the value of the policy π. Hence, successive applications of the Bellman operator improve the prediction accuracy because the current value gets closer to the true value of the policy. We refer to PE as the *prediction* objective (Sutton & Barto, 2018). Policy improvement maps a policy π to an improved policy:

$$\pi_{k+1}(a|S)=\mathcal{G}[\pi_{k},S]=\mathds{1}_{\{a\}}(\operatorname*{argmax}_{u\in\mathcal{A}}\left[R(S,u)+\gamma v_{k}(S^{\prime})\right])=\mathds{1}_{\{a\}}(\operatorname*{argmax}_{u\in\mathcal{A}}\left[q_{k}(S,u)\right]).\tag{3}$$

We refer to GPI as a general method to solve the *control* problem (Sutton & Barto, 2018), deriving from the composition of PE and Policy Improvement (PI). In particular, we refer to the algorithm that alternates an arbitrary number k of PE steps and one PI step as Modified Policy Iteration (MPI) (Puterman & Shin, 1978; Scherrer et al., 2015). For k = 1, MPI recovers Value Iteration, while for k → +∞, it recovers Policy Iteration. For any value of k ∈ [1, +∞), and under mild assumptions, MPI converges to an optimal policy (Puterman, 2014). In Deep RL we parameterise a policy using a neural network with parameter set θ and denote the distribution over actions as π(a|s, θ). We apply the same reasoning for value functions, with parameter set φ, which leads to v(s, φ) and q(s, a, φ) for the state and action value functions respectively.
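As a minimal illustration (ours; the survey itself targets Deep RL, but the operators are easiest to see on a small tabular MDP with known R and µ) of the MPI scheme just described - k Bellman evaluation sweeps (Equation 2) followed by one greedy improvement step (Equation 3):

```python
import numpy as np

def modified_policy_iteration(R, mu, gamma=0.9, k=5, iters=100):
    """MPI: alternate k Bellman evaluation steps (Eq. 2) with one
    greedy improvement step (Eq. 3) on a known tabular MDP.
    R: (S, A) rewards; mu: (S, A, S) transition probabilities."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)   # deterministic policy
    v = np.zeros(n_states)
    for _ in range(iters):
        for _ in range(k):                   # policy evaluation (Eq. 2)
            q = R + gamma * mu @ v           # q[s, a]
            v = q[np.arange(n_states), policy]
        policy = np.argmax(R + gamma * mu @ v, axis=1)  # improvement (Eq. 3)
    return policy, v

# Two-state example: action 1 moves to the rewarding state 1.
R = np.array([[0.0, 0.0], [1.0, 1.0]])
mu = np.array([[[1, 0], [0, 1]], [[1, 0], [0, 1]]], dtype=float)
print(modified_policy_iteration(R, mu))  # policy [1, 1], v ~ [9, 10]
```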
## 4 Quantifying Action Influences

We start by answering Q1., which aims to address the problem of *what* to measure when referring to credit. Since Minsky (1961) raised the Credit Assignment Problem (CAP), a multitude of works paraphrased his words:

- "*The problem of how to incorporate knowledge*" and "given an outcome, how relevant were past decisions?" (Harutyunyan et al., 2019),
- "*Is concerned with identifying the contribution of past actions on observed future outcomes*" (Arumugam et al., 2021),
- "*The problem of measuring an action's influence on future rewards*" (Mesnard et al., 2021),
- "*An agent must assign credit or blame for the rewards it obtains to past states and actions*" (Chelu et al., 2022),
- "*The challenge of matching observed outcomes in the future to decisions made in the past*" (Venuto et al., 2022),
- "*Given an observed outcome, how much did previous actions contribute to its realization?*" (Ferret, 2022, Chapter 4.1).

These descriptions converge to Minsky's original question and show agreement in the literature on an informal notion of credit. In this introduction, we propose to reflect on the different metrics that exist in the literature to quantify it. We generalise the idea of *action value*, which often only refers to q-values, to that of action influence, which describes a broader range of metrics used to quantify the credit of an action. While we do not provide a definitive answer on what credit *should* be, we review how different works in the existing RL literature have characterised it. We now start by developing an intuition of the notion of credit. Consider Figure 1, inspired by both Figure 1 of Harutyunyan et al. (2019) and the *umbrella* problem in Osband et al. (2020). The action taken at x0 determines the return of the episode by itself. From the point of view of *control*, any policy that always takes a′ in x0 (i.e., π∗ ∈ Π∗ : π∗(a′|x0) = 1), and then any other action afterwards, is an optimal policy. From the CAP point of view, some optimal actions, namely those after the first one, do not *actually* contribute to optimal returns. Indeed, alternative actions still produce optimal returns and contribute equally to each other to achieve the goal, so their credit is equal. We can see that, in addition to optimality, credit not only identifies optimal actions but informs them of how *necessary* they are to achieve an outcome of interest. From the example, we can deduce that credit evaluates actions for their potential to influence an outcome. The resulting CAP is the problem of **estimating the influence** of an action over an outcome from experimental data, and describes a pure association between them.

Why solve the CAP? Action evaluation is a cornerstone of RL. In fact, solving a control problem often involves running a GPI scheme. Here, the influence of an action drives learning, for it suggests a possible direction to improve the policy. For example, the action-value plays that role in Equation (3). It follows that the quality of the measure of influence fundamentally impacts the quality of the policy improvement. Low quality evaluations can lead the policy to diverge from the optimal one, hinder learning, and slow down progress (Sutton & Barto, 2018; van Hasselt et al., 2018). On the contrary, high quality evaluations provide accurate, robust and reliable signals that foster convergence, sample-efficiency and low variance.

![6_image_0.png](6_image_0.png)

Figure 1: A simplified MDP to develop an intuition of credit. The agent starts in x0 and can choose between two actions, a′ and a′′, in each state; the reward is 1 when reaching the upper, solid red square, and 0 otherwise. The first action alone determines the outcome.
While simple evaluations are enough for specialised experiments, the real world is a complex blend of multiple, sometimes hierarchical tasks. In these cases, the optimal value changes from one task to another, and these simple evaluations do not bode well for adapting to general problem solving. Yet, the causal structure that underlies the real world is shared among all tasks, and the modularity of its causal mechanisms is often a valuable property to incorporate. In these conditions, learning to assign credit in one environment becomes a lever to assign credit in another (Ferret et al., 2021a), and ultimately makes learning faster, more accurate and more efficient. For these reasons, and because an optimal policy only requires discovering one single optimal trajectory, credit stores knowledge beyond that expressed by optimal behaviours alone, and solving the control problem is not sufficient to solve the CAP, with the former being an underspecification of the latter.

## 4.1 Are All Action Values, **Credit**?

As we stated earlier, most Deep RL algorithms use some form of *action influence* to evaluate the impacts of an action on an outcome. This is a fundamental requirement to rank actions and select the optimal one to solve complex tasks. For example, many model-free methods use the *state-action value* function qπ(s, a) to evaluate actions (Mnih et al., 2015; van Hasselt et al., 2016), where actions contribute as much as the expected return they achieve at termination of the episode. Advantage Learning (AL) (Baird, 1999; Mnih et al., 2016; Wang et al., 2016b, Chapter 5) uses the *advantage* function Aπ(st, at) = qπ(st, at) − vπ(st)¹ to measure credit, while other works study the effects of the *action-gap* (Farahmand, 2011; Bellemare et al., 2016; Vieillard et al., 2020b) on it, that is, the relative difference between the expected return of the best action and that of another action, usually the second best. Action influence is also a key ingredient of actor-critic and policy gradient methods (Lillicrap et al., 2015; Mnih et al., 2016; Wang et al., 2016a), where the policy gradient is proportional to Eµ,π[Aπ(s, a)∇ log π(A|s)], with Aπ(s, a) estimating the influence of the action A. These proxies are sufficient to select optimal actions and unlock solutions to complex tasks (Silver et al., 2018; Wang et al., 2016b; Kapturowski et al., 2019; Badia et al., 2020; Ferret et al., 2021b). However, while many works explicitly refer to the action influence as a measure of credit, the term is not formally defined and it remains unclear where to draw the line between *credit* and other quantities. Key questions arise: What is the difference between these quantities and credit? Do they actually represent credit as originally formulated by Minsky (1961)? If so, under what conditions do they?

Without a clear definition of *what* to measure, we do not have an appropriate quantity to target when designing an algorithm to solve the CAP. More importantly, we do not have an appropriate quantity to use as a single source of truth and term of reference to measure the accuracy of other metrics of action influence, and how well they approximate credit.

¹ To be consistent with the RL literature, we abuse notation and denote the advantage with the capital letter Aπ despite it not being random and sharing the symbol of the action At.
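As a toy companion (ours, for illustration) to the influence proxies listed above, the following sketch computes the q-values, advantages, and action gap from a single row of a tabular q-function; for simplicity, v(s) is taken under the greedy policy rather than an arbitrary π:

```python
import numpy as np

def action_influences(q_row: np.ndarray):
    """Three common action-influence proxies from one row q(s, .):
    q-values, advantages A(s, a) = q(s, a) - v(s), and the action gap."""
    v = np.max(q_row)                       # greedy state value
    advantage = q_row - v                   # <= 0, equals 0 for the greedy action
    top_two = np.sort(q_row)[-2:]
    action_gap = top_two[1] - top_two[0]    # best minus second best
    return q_row, advantage, action_gap

# Example: a state like x0 in Figure 1, where one action leads to
# return 1 and the other to return 0 (discounting ignored).
print(action_influences(np.array([1.0, 0.0])))
```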
To fill this gap, we proceed as follows: - Section 4.2 formalises what is a *goal* or an *outcome*: what we evaluate the action for; - Section 4.3 unifies existing functions under a common formalism; 1To be consistent with the RL literature we abuse notation and denote the advantage with a capital letter Aπ despite not being random and being the same symbol of the action At. - Section 4.4 formalises the CAP following this definition; - Section 4.5 analyses how different works interpreted and quantified *action influences* and reviews them; - Section 4.6 distils the properties that existing measures of action influence exhibit. We suggest the reader only interested in the final formalism to directly skip to Section 4.4, and to come back to the next sections to understand the motivation behind it. ## 4.2 What Is A **Goal**? Because credit measures the influence of an action upon achieving a certain goal, to define credit formally we must be able to describe *goals* formally, and without a clear understanding of what constitutes one, an agent cannot construct a learning signal to evaluate its actions. *Goal* is a synonym for *purpose*, which we can informally describe as a performance to meet or a prescription to follow. Defining a goal rigorously allows making the relationship between the action and the goal explicit (Ferret, 2022, Chapter 4) and enables the agent to decompose complex behaviour into elementary ones in a compositional (Sutton et al., 1999; Bacon et al., 2017), and possibly hierarchical way (Flet-Berliac, 2019; Pateria et al., 2021; Hafner et al., 2022). This idea is at the foundation of many CA methods (Sutton et al., 1999; 2011; Schaul et al., 2015a; Andrychowicz et al., 2017; Harutyunyan et al., 2019; Bacon et al., 2017; Smith et al., 2018; Riemer et al., 2018; Bagaria & Konidaris, 2019; Harutyunyan et al., 2018; Klissarov & Precup, 2021). We proceed with a formal definition of *goals* in the next paragraph, and review how these goals are *represented* in seminal works on CA in the one after. This will lay the foundation for a unifying notion of credit later in Sections 4.3. Defining goals. To define goals formally, we adopt the *reward hypothesis*, which posits: That all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward). (Sutton, 2004). Here, the goal is defined as the *behaviour* that results from the process of maximising the return. The reward hypothesis has been further advanced by later studies (Abel et al., 2021b; Pitis, 2019; Shakerinava & Ravanbakhsh, 2022; Bowling et al., 2023). In the following text, we employ the goal definition in Bowling et al. (2023), which we report hereafter: Definition 1 (Goal). Given a distribution of finite histories P(H), ∀H ∈ H*, we define a* goal as a partial ordering over P(H), and for all h, h′ ∈ H we write h ! h′ to indicate that h is preferred to h′ or that the two are indiff*erently preferred.* Here, H is a random history in the set of all histories H as described in Section 3, and P(H) is an unknown distribution over histories, different from that induced by the policy and the environment. An agent behaviour and an environment then induce a new distribution over histories, and we obtain Pµ,π(H) as described in Section 3. This in turn allows defining a partial ordering over policies, rather than histories, and we write analogously π ! π′ to indicate the preference. 
By the *Markov Reward Theorem* (Bowling et al., 2023, Theorem 4.1) and under mild conditions (Bowling et al., 2023), there exists a deterministic, Markov reward function² R : O × A → [0, 1] such that the maximisation of the expected sum of rewards is consistent with the preference relation over policies.

² We omit the transition-dependent discounting for the sake of conciseness and because it is not relevant to our problem. The reader can consult Pitis (2019); White (2017) for details.

Subjective and objective goals. The *Markov Reward Theorem* holds both if the preferences are defined internally by the agent itself - this is the case of *intrinsic motivation* (Piaget et al., 1952; Chentanez et al., 2004; Barto et al., 2004; Singh et al., 2009; Barto, 2013; Colas et al., 2022) - and in case they originate from an *external* entity, such as an agent-designer. In the first case, the agent doing the maximisation is the
This is important because among the CA methods that define goals explicitly (Sutton et al., 2011; Schaul et al., 2015a; Andrychowicz et al., 2017; Rauber et al., 2019; Harutyunyan et al., 2019; Tang & Kucukelbir, 2021; Arulkumaran et al., 2022; Chen et al., 2021), not many use the rigour of a general-purpose definition of goal such as that in Bowling et al. (2023). In these works, the *goal-representation space*, which we denote as g ∈ G, is arbitrarily chosen to represent specific features of a trajectory. It denotes an *object*, rather than a performance or a prescription to meet. For example, a *goal-representation* g can be a state (Sutton et al., 2011; Andrychowicz et al., 2017) and g ∈ G = S. It can be a specific observation (Nair et al., 2018) with g ∈ G = O. Alternatively, it can be an abstract feature vector (Mesnard et al., 2021) that reports on some characteristics of a history, and we have g ∈ G = Rd, where d is the dimensionality of the vector. A goal can even be represented by a natural language instruction (Luketina et al., 2019), with g ∈ G = Rd the embedding of that piece of text. A goal can be represented by a scalar g ∈ G = R (Chen et al., 2021) that indicates a specific return to achieve, or even a full command (Schmidhuber, 2019), that is, a return to achieve within a specific window of time. While these representations are all useful heuristics, they lack formal rigour and leave space for ambiguities. For example, saying that the goal is a state might mean that *visiting the state at the end of the trajectory* is the goal, or that visiting it in the *middle* of the trajectory is the goal. This is often not formally defined, and the reward function that corresponds to a specific goal representation is not always clear. In the following text, when surveying a method or a metric that specifies a goal, we refer to the specific goal representation used in the work and make an effort to detail the reward function that underpins it.

## 4.3 What Is An **Assignment**?

Having established a formalism for goals and outcomes, we are now ready to describe *credit* formally, and we proceed with a formalism that unifies the existing measures of action influence. We first describe a generic function that generalises most CAs, and then proceed to formalise the CAP. Overall, this formulation provides a term of reference for the quantities described in Section 4.5. We now formalise an *assignment*:

Definition 2 (Assignment). *Consider an action a ∈ A, a goal g ∈ G, and a context c ∈ C. We use the term* assignment function, *or simply* assignment, *to denote a function K that maps a context, an action, and a goal to a quantity y ∈ Y, which we refer to as the* influence *of the action:*

$$K : \mathcal{C} \times \mathcal{A} \times \mathcal{G} \to \mathcal{Y}. \tag{4}$$

Here, a context c ∈ C represents some input data and can be arbitrarily chosen depending on the assignment in question. For example, c can be a state s.
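As a concrete illustration of Definition 2, the hedged sketch below (Python, with hypothetical toy values) casts two familiar quantities - a tabular q-value and its advantage - as assignments sharing the signature of Equation (4).

```python
from typing import Callable

# Context, action, goal and influence types for a toy, tabular assignment.
Context, Action, Goal, Influence = str, str, float, float
Assignment = Callable[[Context, Action, Goal], Influence]

q_table = {("s0", "left"): 0.2, ("s0", "right"): 0.8}
actions = ("left", "right")

def q_value_assignment(c: Context, a: Action, g: Goal) -> Influence:
    # For plain q-values the goal (a return to achieve) is left implicit.
    return q_table[(c, a)]

def advantage_assignment(c: Context, a: Action, g: Goal) -> Influence:
    # The advantage under a uniform policy, with the same signature.
    baseline = sum(q_table[(c, b)] for b in actions) / len(actions)
    return q_table[(c, a)] - baseline

print(q_value_assignment("s0", "right", 0.0))    # 0.8
print(advantage_assignment("s0", "right", 0.0))  # 0.3 (0.8 - 0.5)
```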
A context must hold information about the present, for example, the current state or the current observation; it may contain information about the past, for example, the sequence of past decisions that occurred until now for a POMDP; to evaluate the current action, it can contain information about what future actions will be taken *in-potentia*, for example by specifying a policy to follow when a ∈ A is not taken, or a fixed trajectory, in which case the current action is evaluated in hindsight (Andrychowicz et al., 2017). In the general case, the action influence is a random variable Y ∈ Y ⊂ Rd. This is the case, for example, of the action-value distribution (Bellemare et al., 2017) as described in Equation (10), where the action influence is defined over the full distribution of returns. However, most methods extract some scalar measure of the full influence distribution, such as an expectation (Watkins, 1989), and the action influence becomes a scalar y ∈ R. In the following text, we mostly consider scalar forms of the influence, Y = R, as these represent the majority of the existing formulations. In practice, an *assignment* provides a single mathematical form to talk about the multitude of ways to quantify action influence that are used in the literature. It takes an action a ∈ A, some contextual data c ∈ C, and a goal g ∈ G, and maps them to some measure of *action influence*. While maintaining the same mathematical form, different assignments can return different values of action influence and steer the improvement in different directions. Equation (4) also resembles the General Value Function (GVF) (Sutton et al., 2011), where the influence y = qπ(s, a, g) is the expected return of the policy π when taking action a in state s, with respect to a goal g. However, in GVFs: (i) y is an *action value* and does not generalise other forms of action influence; (ii) the goal is an MDP state g ∈ S and does not generalise to our notion of goals in Section 4.2; (iii) the function only considers forward predictions and does not generalise to evaluating an action in hindsight (Andrychowicz et al., 2017). Table 2 contains further details on the comparison and further specifies the relationship between the most common functions and their corresponding assignment.

## 4.4 The Credit Assignment Problem

The generality of the assignment formalism reflects the great heterogeneity of action influence metrics, which we review later in Section 4.5. This heterogeneity shows that, even if most studies agree on an intuitive notion of credit, they diverge in practice on how to quantify credit mathematically. Having unified the existing assignments in the previous section, we now proceed to formalise the CAP analogously. This allows us to put the existing methods into a coherent perspective as a guarantee for a fair comparison, and to maintain the heterogeneity of the existing measures of action influence. We cast the CAP as the problem of approximating a measure of action influence from experience. We assume standard model-free, Deep RL settings and consider an assignment represented as a neural network k : C × A × G × Φ → R with parameters φ ∈ Φ = Rn that can be used to approximate the credit of the actions. This usually represents the critic or the value function of an RL algorithm. In addition, we admit a stochastic function to represent the policy, also in the form of a neural network f : S × Θ → ∆(A), with parameters set θ ∈ Θ = Rm.
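For illustration, the sketch below shows one possible (hypothetical) PyTorch parameterisation of the assignment network k and the policy network f just described; the dimensions and architectures are placeholders of our choosing, not prescriptions from the literature.

```python
import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, N_ACTIONS = 8, 2, 4  # illustrative sizes only

class AssignmentNet(nn.Module):
    """k : C x A x G x Phi -> R, with the action one-hot encoded."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS + GOAL_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, c, a, g):
        return self.net(torch.cat([c, a, g], dim=-1)).squeeze(-1)

class PolicyNet(nn.Module):
    """f : S x Theta -> Delta(A), a distribution over actions."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Linear(STATE_DIM, N_ACTIONS)

    def forward(self, s):
        return torch.distributions.Categorical(logits=self.logits(s))

k, f = AssignmentNet(), PolicyNet()
c = torch.randn(32, STATE_DIM)                    # a batch of contexts
a = nn.functional.one_hot(f(c).sample(), N_ACTIONS).float()
g = torch.randn(32, GOAL_DIM)
influence = k(c, a, g)  # solving the CAP means regressing this onto K(c,a,g)
```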
We assume that n ≪ |S| × |A| and m ≪ |S| × |A|, and note that subsets of parameters are often shared between the two functions. We further assume that the agent has access to a set of experiences D and that it can sample from it according to a distribution D ∼ PD. This can be a pre-compiled set of external demonstrations, where PD(D) = U(D), or an MDP, where PD = Pµ,π(D), or even a fictitious model of an MDP, PD = Pµ̃,π(D), where µ̃ is a function internal to the agent, of the same form as µ. These are also mild assumptions, as they correspond to, respectively, offline settings, online settings, and model-based settings where the model is learned. We detail these settings in Appendix B.

| Assignment | Action influence | Context | Action | Goal |
|---|---|---|---|---|
| State-action value | qπ(s, a) | s ∈ S | a ∈ A | g ∈ R |
| Advantage | qπ(s, a) − vπ(s) | s ∈ S | a ∈ A | g ∈ R |
| General q-value function | qπ,R(s, a) | s ∈ S | a ∈ A | g ∈ S |
| Distributional action-value | Qπ(s, a) | s ∈ S | a ∈ A | g ∈ {0, ..., n} |
| Distributional advantage | DKL(Qπ(s, a) || V π(s)) | s ∈ S | a ∈ A | g ∈ {0, ..., n} |
| Hindsight advantage | (1 − π(At|s)/PD(At|st, Zt))Zt | s ∈ S, hT ∈ H | a ∈ h | g ∈ R |
| Counterfactual advantage | PD(At = a|St = s, Ft = f)qπ(s, a, f) | s ∈ S | a ∈ h | g ∈ R |
| Posterior value | ∑Tt=0 Pµ,π(Ut = u|ht)vπ(ot, ut) | o ∈ O, u ∈ Rd, π | A ∼ π | g ∈ R |
| Policy-conditioned value | q(s, a, π) | s ∈ S, π ∈ Π | a ∈ A | g ∈ R |

Table 2: A list of the most common *action influences* and their assignment functions in the Deep RL literature analysed in this survey. For each function, the table specifies the influence, the context representation, the action, and the goal representation of the corresponding assignment function K ∈ K.

We now define the CAP formally.

Definition 3 (The credit assignment problem). *Consider an MDP M, a goal g ∈ G, and a set of experience D. Consider an arbitrary assignment K ∈ K as described in Equation (4). Given a parameterised function K̃ : C × A × G × Φ → R with parameters set φ ∈ Φ ⊂ Rn, we refer to the* Credit Assignment Problem *as the problem of finding the set of parameters φ ∈ Φ such that:*

$$\widetilde{K}(c, a, g, \phi) = K(c, a, g), \quad \forall c \in \mathcal{C}, a \in \mathcal{A}, g \in \mathcal{G}. \tag{5}$$

Different choices of action influence have a great impact on the hardness of the problem. In particular, there is a trade-off between: (a) how effective the chosen measure of influence is to inform the direction of the policy improvement, and (b) how easy it is to learn that function from experience. For example, using *causal influence* (Janzing et al., 2013) as a measure of action influence makes the CAP hard to solve in practice. In fact, discovering causal mechanisms from associations alone is notoriously challenging (Pearl, 2009; Bareinboim et al., 2022), and pure causal relationships are rarely observed in nature (Pearl et al., 2000) outside of specific experimental conditions. However, causal knowledge is reliable, robust to changes in the experience collected, and effective, and causal mechanisms can be invariant to changes in the goal. On the contrary, q-values are easier to learn as they represent a measure of statistical correlation between state-actions and outcomes, but their knowledge is limited to the bare minimum necessary to solve a control problem.
This makes them more brittle to sudden changes in the environment, for example, in open-ended settings (Abel et al., 2023). Which quantity to use in each specific instance or each specific problem is still the subject of investigation in the literature, as we show in the next sections. Ideally, we seek to use the most general measure of influence that can be learned with the least amount of experience.

## 4.5 Existing Assignment Functions

We now survey the most important assignment functions from the literature and their corresponding measure of action influence. The following list is not exhaustive, but rather representative of the limitations of existing credit formalisms. For brevity, and without loss of generality, we omit functions that do not explicitly evaluate actions (for example, state-values), but we note that it is still possible to reinterpret an assignment to a state as an assignment to the set of actions that led to that state.

State-action values (Shannon, 1950; Schultz, 1967; Michie, 1963; Watkins, 1989) are a hallmark of RL, and are described by the following expression:

$$q^{\pi}(s,a) = \mathbb{E}_{\mu,\pi}[Z_t | S_t = s, A_t = a]. \tag{6}$$

Here, the context c is a state s ∈ S in the case of MDPs, or a history h ∈ H for a POMDP. The q-function quantifies the credit of an action by the expected return of the action in the context. q-values are among the simplest ways to quantify credit and offer a basic mechanism to solve control problems. However, while q-functions offer solid theoretical guarantees in tabular RL, they can be unstable in Deep RL. When paired with bootstrapping and off-policy learning, q-values are well known to diverge from the optimal solution (Sutton & Barto, 2018). van Hasselt et al. (2018) provide empirical evidence of the phenomenon, investigating the relationship between divergence and performance, and how different variables affect divergence. In particular, the work shows that the Deep Q-Network (DQN) (Mnih et al., 2015) is not guaranteed to converge to the optimal q-function. The divergence rate on both evaluation and control problems increases depending on specific mechanisms, such as the amount of bootstrapping, or the amount of prioritisation of updates (Schaul et al., 2015b). An additional problem arises when employing GPI schemes to solve control problems. While during evaluation the policy is fixed, here the policy continuously changes. It becomes more challenging to track the target of the update while converging to it, as the change of policy makes the problem appear non-stationary from the point of view of the value estimation. In fact, even if the policy changes, there is no signal that informs the policy evaluation about the change. To mitigate the issue, many methods either use a fixed network as an evaluation target (Mnih et al., 2015), perform Polyak averaging of the target network (Haarnoja et al., 2018), or clip the gradient update to a maximum cap (Schulman et al., 2017). To further support the idea, theoretical and empirical evidence (Bellemare et al., 2016) shows that the q-function is *inconsistent*: for any suboptimal action a, the optimal value function q∗(s, a) describes the value of a *non-stationary* policy, which selects a different action π∗(s) (rather than a) at each visit of s. The non-stationarity of q-values for suboptimal actions has also been shown empirically. Schaul et al.
(2022) measure the per-state *policy change* W(π, π′|s) = ∑a∈A |π(a|s) − π′(a|s)| for several Atari 2600 games from the Arcade Learning Environment (ALE) (Bellemare et al., 2013), and show that the action-gap undergoes abrupt changes despite the agent maintaining a constant value of expected returns. In practice, Deep RL algorithms often use q-targets to approximate the q-value, for example, n-step targets (Sutton & Barto, 2018, Chapter 7), or λ-returns (Watkins, 1989; Jaakkola et al., 1993; Sutton & Barto, 2018, Chapter 12). However, we consider them as *methods*, rather than quantities to measure credit, since they all ultimately aim to converge to the q-value. For this reason, we discuss them in Section 6.1.

Advantage (Baird, 1999) measures, in a given state, the difference between the q-value of an action and the value of its state:

$$A^{\pi}(s,a) = q^{\pi}(s,a) - v^{\pi}(s). \tag{7}$$

Here, the context c is the same as in Equation (6). Because vπ(s) = ∑a∈A qπ(s, a)π(a|s) and Aπ(s, a) = qπ(s, a) − Eπ[qπ(s, a)], the advantage quantifies how much better an action is than average. As also shown in Bellemare et al. (2016), using the advantage to quantify credit can increase the *action-gap*. Empirical evidence has shown the consistent benefits of advantage over q-values (Baird, 1999; Wang et al., 2016b; Bellemare et al., 2016; Schulman et al., 2016), and the most likely hypothesis attributes them to its regularisation effects (Vieillard et al., 2020b;a; Ferret et al., 2021a). On the other hand, when estimated directly and not by composing state and state-action values, for example in Pan et al. (2022), the advantage does not permit bootstrapping. This is because the advantage lacks an absolute measure of action influence, and only maintains one that is relative to the other possible actions. Overall, in canonical benchmarks for both evaluation (Wang et al., 2016b) and control (Bellemare et al., 2013), the advantage has been shown to improve over q-values (Wang et al., 2016b). In particular, policy evaluation experiences faster convergence in large action spaces because the state-value vπ(s) can hold information that is shared between multiple actions. For control, it improves the score over several Atari 2600 games compared to both double q-learning (van Hasselt et al., 2016) and Prioritised Experience Replay (PER) (Schaul et al., 2015b).

General Value Functions (GVFs) (Sutton et al., 2011; Schaul et al., 2015a) are a set of q-value functions that predict returns for multiple reward functions:

$$q^{\pi,R}(s,a)=\left\{\mathbb{E}_{\mu,\pi}\left[\sum_{k=t}^{T}R(S_{k},A_{k})\,\Big|\,S_{t}=s,A_{t}=a\right]:\forall R\in\mathcal{R}\right\},\tag{8}$$

where R is a pseudo-reward function and R is an arbitrary, pre-defined set of reward functions. Notice that we omit the pseudo-termination and pseudo-discounting terms that appear in their original formulation (Sutton et al., 2011) to maintain the focus on credit assignment. The context c is the same as for q-values and the advantage, and the goal that the pseudo-reward represents is to reach a specific state g = s ∈ S. When first introduced (Sutton et al., 2011), the idea of GVFs stemmed from the observation that canonical value functions are limited to addressing only a single task at a time. Solving a new task would require learning a value function *ex-novo*. By maintaining multiple assignment functions at the same time, one for each goal, GVFs can instantly quantify the influence of an action for multiple goals simultaneously.
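To give a flavour of how GVFs maintain one assignment per goal, here is a hedged, tabular Python sketch (the toy environment and all names are hypothetical): each pseudo-reward encodes reaching a different state, and the same transition updates every GVF at once.

```python
import collections

# One pseudo-reward per goal state: 1 when the goal is reached, 0 otherwise.
goal_states = ("key", "door")
pseudo_reward = {g: (lambda s, g=g: 1.0 if s == g else 0.0)
                 for g in goal_states}
# One tabular q-function per goal, maintained side by side.
gvfs = {g: collections.defaultdict(float) for g in goal_states}

def td0_update(g, s, a, s_next, a_next, alpha=0.1, gamma=0.9):
    """One TD(0) step on the GVF for goal g under its own pseudo-reward."""
    target = pseudo_reward[g](s) + gamma * gvfs[g][(s_next, a_next)]
    gvfs[g][(s, a)] += alpha * (target - gvfs[g][(s, a)])

# A single transition simultaneously updates the assignment for every goal:
for g in goal_states:
    td0_update(g, s="corridor", a="forward", s_next="key", a_next="forward")
```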
However, while GVFs maintain multiple assignments, the goal is still not an explicit input of the value function. Instead, it is left implicit, and each assignment serves the ultimate goal of maximising a different pseudo-reward function (Sutton et al., 2011).

Universal Value Function Approximators (UVFAs) (Schaul et al., 2015a) scale GVFs to Deep RL and advance their idea further by conflating these multiple assignment functions into a single one, represented as a deep neural network. Here, unlike for state-action values and GVFs, the goal is an explicit input of the assignment:

$$q^{\pi}(s,a,g) = \mathbb{E}_{\mu,\pi}[Z_t | S_t = s, A_t = a, G_t = g]. \tag{9}$$

The action influence here is measured for a goal explicitly. This makes it possible to leverage the generalisation capacity of deep neural networks and to generalise not only over the space of states but also over that of goals.

Distributional values (Jaquette, 1973; Sobel, 1982; White, 1988; Bellemare et al., 2017) consider the full return distribution Zt instead of its expected value:

$$Q^{\pi}(s,a) = \mathbb{P}_{\mu,\pi}(Z_t | S_t = s, A_t = a), \tag{10}$$

where Pµ,π(Zt) is the distribution over returns. Notice that we use the uppercase Q to denote the value distribution and the lowercase q for its expectation (Equation (6)). To translate the idea into a practical algorithm, Bellemare et al. (2017) proposes a discretised version of the value distribution by projecting Pµ,π(Zt) on a finite support C = {ci : 0 ≤ i ≤ C}. The discretised value distribution then becomes Qπ(s, a) = PC(Zt|St = s, At = a), where PC is a categorical distribution that describes the probability that a return c ∈ C is achieved. Here, the context is the current MDP state and the goal is the expected return. Notice that while the optimal expected value function q∗(s, a) is unique, in general there are many optimal value distributions, since different optimal policies can induce different value distributions. Experimental evidence (Bellemare et al., 2017) suggests that distributional values provide a better quantification of the action influence, leading to superior results in well-known benchmarks for control (Bellemare et al., 2013). However, it is not yet clear why distributional values improve over their expected counterparts. One hypothesis is that predicting for multiple goals works as an auxiliary task (Jaderberg et al., 2017), which often leads to better performance. Another hypothesis is that the distributional Bellman optimality operator proposed in Bellemare et al. (2017) produces a smoother optimisation problem, but the evidence remains weak or inconclusive (Sun et al., 2022).

Distributional advantage (Arumugam et al., 2021) proposes a distributional equivalent of the advantage:

$$A^{\pi}(s,a) = D_{KL}(Q^{\pi}(s,a)\,||\,V^{\pi}(s)), \tag{11}$$

and borrows the properties of both distributional values and the expected advantage. Intuitively, Equation (11) shows how much knowing the action changes the value distribution. To do so, it measures the change of the value distribution, for a given state-action pair, relative to the distribution for the particular state only. The KL divergence between the two distributions can then be interpreted as the distributional analogue of Equation (7), where the two quantities appear in their expectation instead.
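The sketch below gives a hedged numerical reading of Equation (11), computing the KL divergence between toy categorical value distributions; the support size, probabilities, and policy are hypothetical placeholders.

```python
import numpy as np

# Toy categorical value distributions over a finite support of returns.
Q = {"left":  np.array([0.7, 0.2, 0.1]),   # P(Z | s, a), per action
     "right": np.array([0.1, 0.2, 0.7])}
pi = {"left": 0.5, "right": 0.5}           # the policy in state s

# P(Z | s) marginalises the action out: V(s) = sum_a pi(a|s) Q(s, a).
V = sum(pi[a] * Q[a] for a in Q)

def distributional_advantage(a):
    """D_KL(Q(s, a) || V(s)): how much knowing a changes the distribution."""
    return float(np.sum(Q[a] * np.log(Q[a] / V)))

print({a: round(distributional_advantage(a), 3) for a in Q})
```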
The biggest drawback of this measure of action influence is that it has only been treated in theory, and there is no empirical evidence that supports the distributional advantage as a useful proxy for credit in practice. Future works should consider providing empirical evidence on how this measure of action influence behaves compared to q-values and distributional values.

Hindsight advantage (Harutyunyan et al., 2019) stems from conditioning the action influence on future states or returns. The return-conditional hindsight advantage function can be written as follows:

$$A^{\pi}(s,a,z)=\left(1-\frac{\mathbb{P}_{\pi}(A_{t}=a|S_{t}=s)}{\mathbb{P}_{\mu,\pi}(A_{t}=a|S_{t}=s,Z_{t}=z)}\right)z.\tag{12}$$

Here Aπ(s, a, z) denotes the return-conditional advantage, and Pµ,π(At = a|St = s, Zt = z) is the return-conditional *hindsight distribution*, which describes the probability that an action a has been taken in s, given that we observed the return z at the end of the episode, after following π. The context is a state, and the goal is the expected return, which, in this case, also corresponds to the value of the return collected in the current trajectory. The idea of *hindsight* - initially presented in Andrychowicz et al. (2017) - is that even if the trajectory does not provide useful information for the main goal, it can be revisited as if the goal was the outcome just achieved. Hindsight advantage brings this idea to the extreme: rather than evaluating only for a pre-defined set of goals as in Andrychowicz et al. (2017), it evaluates for every experienced state or return. Here, the action influence is quantified by the proportion of the return determined by the ratio in Equation (12). To develop an intuition of it, if the action a leads to the return z with probability > 0, such that Pµ,π(At = a|St = s, Zt = z) > 0, but the behaviour policy π takes a with probability 0, then, following Equation (12), the full return z is credited to the action a. There exists also a state-conditional formulation rather than a return-conditional one, and we refer to Harutyunyan et al. (2019) for details on it to keep the description concise.

Future-conditional advantage (Mesnard et al., 2021) generalises the hindsight advantage to use an arbitrary property of the future:

$$A^{\pi}(s,a,f) = \mathbb{P}_{\mu,\pi}(A_t = a | S_t = s, F_t = f)\, q^{\pi}(s,a,f). \tag{13}$$

Here, F : DT → Rn is an n-dimensional feature of a trajectory d, and Ft is that feature for a trajectory that starts at time t and ends at the random horizon T. qπ(s, a, f) = Eµ,π[Zt | St = s, At = a, Ft = f] denotes the future-conditioned state-action value function. The context is a tuple of state and feature (s, f); the goal is the expected return observed at the end of the trajectory. Notice that one can derive the hindsight advantage by setting F = Z. To develop an intuition, F can represent, for example, whether a day is rainy, and the future-conditional advantage expresses the probability of an action a, given that the day will be rainy.

Counterfactual advantage (Mesnard et al., 2021) proposes a specific choice of F such that F is independent of the current action. This produces a future-conditional advantage that factorises the influence of an action in two components: the contribution deriving from the intervention itself (the action) and the luck, represented by all the components not under the control of the agent at time t, such as fortuitous outcomes of the state-transition dynamics, exogenous reward noise, or future actions.
The form is the same as that of Equation (13), with the additional condition that the feature Ft is independent of the action At, that is, EF[DKL(P(At|St = s) || P(At|St = s, Ft = f))] = 0. The main intuition behind the *counterfactual advantage* is the following. While computing counterfactuals requires access to a model of the environment, in model-free settings we can still compute all the relevant information Ft that does not depend on this model. Once learned, a model of F can then represent a valid baseline to compute counterfactuals in a model-free way. To stay in the scope of this section, we detail how to learn this quantity in Section 6.4.

Posterior value functions (Nota et al., 2021) reflect on partial observability and propose a characterisation of the hindsight advantage bespoke to POMDPs. The intuition behind Posterior Value Functions (PVFs) is that the evaluated action only accounts for a small portion of the variance of returns. The majority of the variance is often due to the part of the trajectory that still has to happen. For this reason, incorporating information about the future into the baseline could have a greater impact in reducing the variance of the policy gradient estimator. PVFs focus on the variance of a future-conditional baseline (Mesnard et al., 2021) caused by the partial observability. Nota et al. (2021) factorises a state s into an observable component o and a non-observable one u, and formalises the PVF as follows:

$$v_{t}^{\pi}(h_{t})=\sum_{u\in\mathcal{U}}\mathbb{P}_{\mu,\pi}(U_{t}=u|h_{t})\,v^{\pi}(o_{t},u_{t}),\tag{14}$$

where u ∈ U is the non-observable component of st such that s = {u, o}. Notice that this method does not take actions into account. However, it is trivial to derive the corresponding Posterior Action-Value Function (PAVF) as qπ(ht, a) = R(st, at) + γvπ(ht+1).

Policy-conditioned values (Harb et al., 2020; Faccio et al., 2021) are value functions that include the policy as an input. For example, a policy-conditioned state-action value has the form:

$$q(s, \pi, a) = q^{\pi}(s, a), \tag{15}$$

but a representation of the policy π is used as an explicit input of the influence function. Here, the context is the union of the current MDP state s and the policy π, and the goal is the expected return at termination. The main difference with state-action values is that, all else being equal, q(s, π, a) produces different values instantly when π varies, since π is now an explicit input. For this reason, q(s, π, a) can generalise over the space of policies, while qπ(s, a) cannot. Using the policy as an input raises the problem of *representing* a policy in a way that can be fed to a neural network. Harb et al. (2020) and Faccio et al. (2021) propose two methods to represent a policy. To keep our attention on the CAP, we refer to their works for further details on possible ways to represent a policy (Harb et al., 2020; Faccio et al., 2021). Here we limit ourselves to conveying that the problem of representing a policy has already been raised in the literature.

## 4.6 Discussion

The sheer variety of assignment functions described above leads to an equally broad range of metrics to quantify action influence, and what the best assignment function is for a specific problem remains an open question.
While we do not provide a definitive answer to the question of which properties are necessary or sufficient for an assignment function to output a satisfactory measure of credit, we set out to draw attention to the problem by abstracting out some of the properties that the metrics above share or lack. We identify the following properties of an assignment function and summarise our analysis in Table 3.

Explicitness. We use the term *explicitness* when the goal appears as an explicit input of the assignment and is not left implicit or inferred from experience. Using the goal as an input allows generalising CA over the space of goals. The decision problem can then more easily be broken down into subroutines that are both independent of each other and independently useful to achieve some superior goal g. Overall, explicitness allows incorporating more knowledge, because the assignment spans each goal without losing information about the others, limited only by the capacity of the function approximator. For example, UVFAs, hindsight advantages, and future-conditional advantages are explicit assignments. As discussed in the previous section, *distributional values* can also be interpreted as explicitly assigning credit for each atom of the quantised return distribution, which is why we mark them as only partially having this property in Table 3. Likewise, the hindsight and future-conditional advantages, while not conditioning on a goal explicitly, can be interpreted as conditioning the influence on sub-goals that are states or returns, and future statistics, respectively. For this reason, we consider them as partially explicit assignments.

| Name | Explicitness | Recursivity | Future-dependent | Causality |
|---|---|---|---|---|
| State-action value | ◦ | • | ◦ | ◦ |
| Advantage | ◦ | ◑ | ◦ | ◦ |
| GVFs/UVFAs | • | • | ◦ | ◦ |
| Distributional action-value | ◑ | • | ◦ | ◦ |
| Distributional advantage | ◑ | ◦ | ◦ | • |
| Hindsight advantage | ◑ | ◦ | ◑ | ◦ |
| Counterfactual advantage | ◑ | ◦ | ◑ | • |
| Posterior value | ◦ | ◦ | • | ◦ |
| Observation-action value | ◦ | ◦ | ◦ | ◦ |
| Policy-conditioned value | ◦ | • | • | ◦ |

Table 3: A list of the most common *action influences* and their assignment functions in the Deep RL literature analysed in this survey, and the properties they respect. Respectively, empty circles (◦), half circles (◑), and full circles (•) indicate that the property is not respected, that it is only partially respected, and that it is fully respected. See Sections 4.5 and 4.6 for details.

Recursivity. We use the term *recursivity* to characterise the ability of an assignment function to support bootstrapping (Sutton & Barto, 2018). When an assignment is recursive, it respects a relationship of the type K(ct, at, g) = f(K(ct+1, at+1, g)), where f projects the influence from time t + 1 back to time t. For example, goal-conditioned q-values can be written as qπ(st, at, g) = R(st, at, g) + γqπ(st+1, at+1, g), where R(st, at, g) is the reward function for the goal g. Recursivity provides key advantages when *learning* credit, which we discuss in more detail in Section 6. In theory, it reduces the variance of the estimation at the cost of a bias (Sutton & Barto, 2018): since the agent does not complete the trajectory, the return it observes is imprecise but varies less.
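To make the recursivity trade-off tangible, here is a minimal hedged sketch (toy rewards and names of our choosing) contrasting a full Monte-Carlo return, which needs the complete episode, with a bootstrapped one-step target, which is available immediately but inherits the bias of the current estimate.

```python
gamma = 0.99

def monte_carlo_return(rewards):
    """Unbiased sample of the return, but requires the whole episode."""
    z = 0.0
    for r in reversed(rewards):
        z = r + gamma * z
    return z

def bootstrapped_target(reward, q_next):
    """One-step target: lower variance, biased by the estimate q_next."""
    return reward + gamma * q_next

episode_rewards = [0.0, 0.0, 1.0]
print(monte_carlo_return(episode_rewards))     # 0.9801
print(bootstrapped_target(0.0, q_next=0.95))   # 0.9405
```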
In practice, bootstrapping is often necessary in Deep RL, where the length of the episode in certain environments makes full Monte-Carlo estimations intractable due to computational and memory constraints. When the influence function does not support bootstrapping, the agent must obtain complete episodes to have unbiased samples of the return. For example, Direct Advantage Estimation (DAE) (Pan et al., 2022) uses the advantage function as a measure of credit, but it does not decompose the advantage into the recursive components that support bootstrapping (q(s, a) and v(s)), and requires full Monte-Carlo returns to approximate it. This is often ill-advised, as it increases the variance of the estimate of the return. For this reason, we consider the advantage to only partially satisfy recursivity.

Future-dependent. We use the term *future-dependent* for assignments that take as input information about what actions will be or have been taken *after* the time t at which the action At is evaluated. This is key because the influence of the current action depends also on what happens *after* the action. For example, picking up a key is not meaningful if the policy does not lead to opening the door afterwards. Future actions can be specified *in-potentia*, for example, by specifying a policy to follow after the action. This is the case of policy-conditioned value functions, whose benefit is to explicitly condition on the policy such that, if the policy changes but the action remains the same, the influence of the action changes instantly. They can also be specified *in realisation*. This is the case, for example, of hindsight evaluations (Andrychowicz et al., 2017) such as the hindsight advantage, the counterfactual advantage, and the PVF, where the influence is conditioned on some features of the future trajectory. However, these functions only consider *features* of the future: the hindsight advantage considers only the final state or the final return of a trajectory; the counterfactual advantage considers some action-independent features of the future; the posterior value function considers only the non-observable components. Because futures are not considered fully, we consider these functions as only partially specifying the future. Furthermore, while state-action value functions, the advantage, and their distributional counterparts specify a policy in principle, that information is not an explicit input of the assignment, but is only left implicit. In practice, in Deep RL, if the policy changes, the output of these assignments does not change without retraining.

Causality. We refer to a *causal* assignment when the influence that it produces is also a measure of causal influence (Janzing et al., 2013). For example, the counterfactual advantage proposes an interpretation of the action influence closer to causality, by factorising the influence of an action in two. The first factor includes only the non-controllable components of the trajectory (e.g., exogenous reward noise, stochasticity of the state-transition dynamics, stochasticity in the observation kernel), or those not under direct control of the agent at time t, such as future actions. The second factor includes only the effects of the action alone. The interpretation is that, while the latter is due to causation, the former is only due to fortuitous correlations. This vicinity to causality theory exists despite the counterfactual advantage not being a satisfactory measure of causal influence as described in Janzing et al. (2013).
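The action-independence condition on F stated earlier can be probed numerically. The hedged sketch below (all distributions are toy placeholders of our choosing) evaluates E_F[D_KL(P(A|s) || P(A|s, F=f))], which vanishes exactly when F carries no information about the current action.

```python
import numpy as np

p_a_given_s = np.array([0.5, 0.5])               # P(A | s)
p_a_given_sf = {"rainy": np.array([0.5, 0.5]),   # P(A | s, F = f)
                "sunny": np.array([0.5, 0.5])}
p_f = {"rainy": 0.3, "sunny": 0.7}               # P(F = f)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

independence_gap = sum(p_f[f] * kl(p_a_given_s, p_a_given_sf[f])
                       for f in p_f)
print(independence_gap)  # 0.0: F is independent of the current action
```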
The distributional advantage in Equation (11) can also be interpreted as containing elements of causality. In fact, the expectation of the advantage over states and actions is the Conditional Mutual Information (CMI) between the policy and the return, conditioned on the state-transition dynamics: Eµ,π[DKL(Qπ(s, a) || V π(s))] = I(Pπ(A|S = s); Z | Pµ(S)). The CMI (with its limitations (Janzing et al., 2013)) is a known measure of causal influence.

Overall, these properties define some characteristics of an assignment, each one bringing positive and negative aspects. Explicitness allows maintaining the influence of an action for multiple goals at the same time, promoting the reuse of information and a compositional onset of behaviour. Recursivity ensures that the influence can be learned via bootstrapping. Future-dependency separates assignments by whether they include information about future actions. Finally, causality filters out the spurious correlations, evaluating the effects of the action alone.

## 4.7 Summary

In this section, we addressed Q1. and discussed the problem of how to quantify action influences. In Section 4.1 we formalised our questions: *"How do different works quantify action influences?"* and *"Are these quantities satisfactory measures of credit?"*. We proceeded to answer the questions. In Section 4.2 we formalised the concept of *outcome* as some arbitrary function of a given history. In Section 4.3 we defined the assignment function as a function that returns a measure of action influence. In Section 4.4 we used this definition to formalise the CAP as the problem of learning a measure of action influence from experience. We refer to the set of protocols of this learning process as a credit assignment *method*. In Section 4.5 we surveyed existing measures of action influence from the literature and detailed the intuition behind them, their advantages, and their drawbacks. Finally, in Section 4.6 we discussed how these measures of action influence relate to each other, the properties that they share, and those that are rarer in the literature but still promising for future advancements. In the next sections, we proceed to address Q2.: Section 5 describes the obstacles to solving the CAP and Section 6 surveys the methods to solve it.

## 5 The Challenges To Assign Credit In Deep RL

Having clarified what measures of action influence are available in the literature, we now look at the obstacles that arise in learning them and, together with Section 6, answer Q2.. We first survey the literature to identify known issues in assigning credit and then systematise the relevant issues into CA challenges. These challenges provide a perspective to understand the principal directions of development of CA methods and are largely independent of the choice of action influence. However, using one measure of influence over another can still impact the prominence of each challenge. We identify the following issues to assign credit: (a) **delayed rewards** (Raposo et al., 2021; Hung et al., 2019; Arjona-Medina et al., 2019; Chelu et al., 2022): reward collection happens long after the action that determined it, causing its influence to be perceived as faint; (b) **sparse rewards** (Arjona-Medina et al., 2019; Seo et al., 2019; Chen & Lin, 2020; Chelu et al., 2022): the reward function is zero everywhere, and rarely spikes, causing uninformative Temporal Difference (TD) errors; (c) **partial observability** (Harutyunyan et al., 2019): where the agent does not hold perfect information about the current state; (d) **high variance** (Harutyunyan et al., 2019; Mesnard et al., 2021; van Hasselt et al., 2021) of the optimisation process; (e) the resort to **time as a heuristic** to determine the credit of an action (Harutyunyan et al., 2019; Raposo et al., 2021); (f) the lack of **counterfactual** CA (Harutyunyan et al., 2019; Foerster et al., 2018; Mesnard et al., 2021; Buesing et al., 2019; van Hasselt et al., 2021); (g) **slow convergence** (Arjona-Medina et al., 2019).

Figure 2: Visual intuition of the three challenges to temporal CA and their respective sets of solutions, using the graph analogy. Nodes and arrows represent, respectively, MDP states and actions. Blue nodes and arrows denote the current episode. Black ones show states that could have potentially been visited, but have not. Square nodes denote goals. Forward arrows (pointing right) represent environment interactions, whereas backward arrows (pointing left) denote credit propagation via state-action back-ups. From top left: (a) the temporal distance between the accountable action and the target state requires propagating credit deep back in time; (b) considering any state as a target increases the density of possible associations and reduces information sparsity; and finally, (c) the breadth of possible pathways leading to the target state.

While these issues are all very relevant to the CAP, their classification is also tailored to control problems. Some of them are described by the use of a particular solution, such as (e), or the lack thereof, like (f), rather than by a characteristic of the decision or of the optimisation problem. Here, we systematise these issues and transfer them to the CAP. We identify three principal characteristics of MDPs, which we refer to as *dimensions* of the MDP: **depth**, **density** and **breadth** (see Figure 2). Challenges to CA emerge when pathological conditions on depth, density, and breadth produce specific phenomena that render the learning signal unreliable, inaccurate, or insufficient to correctly reinforce an action. We now detail these three dimensions and the corresponding challenges that arise.

## 5.1 Delayed Effects Due To High MDP Depth

We refer to the *depth* of an MDP as the number of temporal steps that intervene between a highly influential action and an outcome (Ni et al., 2023). When this number is high, we refer to the action as a *remote* action, and to the outcome as a *delayed* outcome. When outcomes are delayed, the increase of temporal distance often corresponds to a combinatorial increase of the possible alternative futures and of the paths to get to them. In these conditions, recognising which action was responsible for the outcome is harder, since the space of possible associations is very large. We identify two main reasons for an outcome to be delayed, depending on whether the decisions after the remote action influence the outcome or not. The first reason for delayed effects is that the success of the action is not immediate but requires a sequence of actions to be performed *afterwards*, which causes the causal chain leading to success to be long. This issue originates from the typical hierarchical structure of many MDPs, where the agent must first perform a sequence of actions to reach a subjective sub-goal, and then perform another sequence to reach another. The key-to-door task (Hung et al., 2019) is a good example of this phenomenon, where the agent must first collect a key to be able to open a door later.
The second reason is *delayed reinforcements*: outcomes are only *observed* after a long time horizon, and any decision taken *after* the remote action does not influence the outcome significantly. The phenomenon was first noted in behavioural psychology and is known as the *delayed reinforcement* problem (Lattal, 2010): Reinforcement is delayed whenever there is a period of time between the response producing the reinforcer and its subsequent delivery. (Lattal, 2010) The main challenge with *delayed reinforcements* is being able to ignore the series of irrelevant decisions that are encountered between the remote action and the delayed outcome, to focus on the actions that are responsible for the outcome, and to assign credit accordingly. This is a key requirement because most CA methods rely on temporal recency as a heuristic to assign credit (Klopf, 1972; Sutton, 1988; Mahmood et al., 2015; Sutton et al., 2016; Jiang et al., 2021a). When this is the case, the actions in the proximity of achieving the goal are reinforced, even when not actually responsible for the outcome (only the remote action is), just because they are temporally close to it. While recent works advance proposals on how to measure MDP depth, for example CA length (Ni et al., 2023), there is currently no formal agreement in the literature on how to diagnose the presence of delayed effects.

## 5.2 Low Action Influence Due To Low MDP Density

If delayed effects are characterised by a large temporal distance between an action and the outcome, MDP sparsity derives from a *lack of influence* between them. Even if the literature often confounds *sparse* and *delayed* rewards, there is a substantial difference between them. With delayed effects, actions can cause outcomes very frequently, only with delay. Here, instead, actions have little or no impact on the outcome, and outcomes do not vary with the actions taken except in a few, rare instances. We identify two main reasons. The first one is highly stochastic state-transition dynamics, which can be diagnosed by measuring the entropy of the state-transition distribution H(Pµ) and/or of the reward function H(P(R)). In highly stochastic MDPs, actions hardly affect the future states of the trajectory, the agent is unable to make predictions with high confidence, and therefore cannot select actions that are likely to lead to the goal. The second reason is low goal density. This is the canonical case of reward sparsity in RL, where the goal is only achievable in a small subset of the state space, or for a specific sequence of actions. Formally, we can measure the sparsity of an MDP using the notion of information sparsity (Arumugam et al., 2021).

Definition 4 (MDP sparsity). *An MDP is ε-information sparse if:*

$$\max_{\pi\in\Pi}\mathbb{E}_{\mu,\pi}[D_{KL}(\mathbb{P}_{\mu,\pi}(Z|s,a)\,||\,\mathbb{P}_{\mu,\pi}(Z|s))]\leq\varepsilon,\tag{16}$$

*where* Eµ,π *denotes the expectation over the stationary distribution induced by the policy and the state-transition dynamics.*

The information sparsity of an MDP is the maximum information gain that can be obtained by an agent. When this is low everywhere, and only concentrated in a small subset of decisions, CA methods often struggle to assign credit, because the probability of behaving optimally is lower (Abel et al., 2021a), and there is rarely a signal to propagate.
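For intuition, Definition 4 can be probed empirically for a fixed policy. The hedged sketch below (toy returns and a hypothetical binning scheme) estimates the information gain D_KL(P(Z|s,a) || P(Z|s)) from sampled, discretised returns; a value close to zero for every state-action pair signals an information-sparse MDP.

```python
import numpy as np
from collections import Counter

def empirical_dist(samples, bins):
    """Discretise returns into bins and estimate their distribution."""
    counts = Counter(np.digitize(samples, bins))
    probs = np.array([counts.get(i, 0) for i in range(len(bins) + 1)],
                     dtype=float) / len(samples)
    return np.clip(probs, 1e-8, None)  # avoid log(0) in the KL below

bins = np.array([0.5])              # two bins: fail (< 0.5) / succeed
z_given_sa = [0.0, 1.0, 1.0, 1.0]   # returns sampled after taking (s, a)
z_given_s = [0.0, 0.0, 1.0, 1.0]    # returns sampled from s under pi

p = empirical_dist(z_given_sa, bins)
q = empirical_dist(z_given_s, bins)
info_gain = float(np.sum(p * np.log(p / q)))
print(info_gain)  # > 0: knowing the action is informative about the return
```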
## 5.3 Low Action Influence Due To High MDP Breadth

We use the term *breadth* of an MDP to denote the number of alternative histories h that produce the same outcome g. We then use the term *dilution* of credit when many optimal pathways exist, and there is no bottleneck decision that the agent necessarily has to make to achieve the goal. We formalise the concept using the notion of the *null space* of a policy (Schaul et al., 2022):

$$\operatorname{Null}(\pi):=\{\Omega\,|\,v^{\pi}(s)=v^{\pi^{\prime}}(s)\}\quad\forall\pi,\pi^{\prime}\in\Omega\subseteq\Pi,\forall s\in{\mathcal{S}}.\tag{17}$$

Null(π) is the *null space* of a policy π, defined as the subset of the space of all policies Ω ⊆ Π such that two policies π, π′ ∈ Ω have the same expected state-value vπ(s) = vπ′(s) in all the states of the MDP s ∈ S. Credit dilution is often not a challenge for control, because optimal behaviours are more probable. However, it can be problematic for CA. Most of the common baselines, such as Advantage Actor Critic (A2C) (Mnih et al., 2016) or Proximal Policy Optimisation (PPO) (Schulman et al., 2017), stop exploring after a small subset of optimal histories is found (or after a certain amount of time). Indeed, when diam(Null(π∗)) is large, there are many optimal histories. Yet most of them are not included in the experience set D, since exploration stopped prematurely, and credit will not be improved for those. This is particularly relevant for assignments that measure the influence of an action relative to another. For example, the advantage Aπ(s, a) = qπ(s, a) − Ea′∼π[qπ(s, a′)] is inaccurate if Ea′∼π[qπ(s, a′)] is inaccurate, which requires accurately evaluating qπ(s, a′) for all a′ ∈ A. This often results in a low diversity of behaviours (Parker-Holder et al., 2020) and a poor robustness to changes in the environment (Eysenbach & Levine, 2022).

## 5.4 Relationship With The Exploration Problem

One additional challenge in practical experiments is that it is often hard to disentangle the impacts of CA from those of exploration. In fact, discerning the effects of the two is often only done qualitatively. Here, we discuss the connection between the two problems, whether they can be studied independently, and whether it is possible to find a way to diagnose and separate the effect of one from the other. We use the interpretation of exploration as the problem of acting in an unknown environment to discover temporal sequences of states, actions and rewards with the purpose of acquiring new information (Amin et al., 2021; Jiang et al., 2023). The acquired experiences then become part of the experience set D, which is used to solve the CAP as described in Equation (5). To visualise the difference between the exploration problem and the CAP, consider the usual key-to-door environment, where the agent needs to pick up a key, which opens a door, behind which lies a reward. While highly improbable (Abel et al., 2021a), this successful event is the result of chance and random behaviour.3 Nevertheless, it is the responsibility of exploration to *discover* for the **first time** an optimal history, and to keep feeding the set D with useful discoveries. Then, once the successful experience D∗ is in the set D, it becomes the responsibility of the CA method to consume that experience and extract a measure of influence from the relationship context-action-outcome (Equation (4)) that supports effective improvements.
This is a key difference, because the very same behaviour has a different cause depending on whether it comes from exploration or from CA. If due to exploration, it happens by chance, making it unlikely to occur again. If due to accurate CA, it is the result of informed decision-making, founded on the ability to forecast (Sutton et al., 2011) the effects of an action. Then, when assignments start to be accurate enough, policy improvement further increases the probability of visiting optimal trajectories, in a virtuous cycle that also improves CA. Many studies show how common RL baselines often struggle to extract a reliable signal from a small set of isolated successes. This is the case, for example, of A2C (Oh et al., 2018), DQN (Schaul et al., 2015b) or PPO (Arjona-Medina et al., 2019). To further support the claim, increasing the sampling probability of a success, for example through PER (Schaul et al., 2015b) or Self-Imitation Learning (SIL) (Oh et al., 2018), shows great improvements in CA. We can draw two conclusions from the arguments above. On one hand, if there is a *minimum* number of optimal trajectories D∗ ⊂ D in D, exploration has done its job and failures can be attributed to poor CA. On the other hand, a natural question arises: "What is the minimum rate of successes Gmin = |D∗|/|D| that a CA method requires to start converging to an optimal policy?". This is a fundamental open question in the current literature, and an answer to it would produce a valid tool to evaluate a CA method. All else being equal, the lower the ratio |D∗|/|D|, the better the method, because it requires exploration to randomly collect optimal histories at a lower rate, and can solve harder MDPs (Abel et al., 2021a).

3Or, rather, by the laws dictated by the exploration algorithm.

## 5.5 Summary

In this section, we surveyed the literature and discussed both the obstacles and the current limitations to solving the CAP. These include delayed rewards, sparse rewards, partial observability, high variance, the lack of counterfactual CA, and slow convergence. We then systematised these issues into challenges that emerge from specific properties of the decision problem, which we refer to as dimensions of the MDP: depth, density, and breadth. Challenges emerge when pathological conditions on these dimensions produce specific phenomena that render the learning signal unreliable, inaccurate, or insufficient to correctly reinforce an action: delayed effects, sparsity, and credit dilution. We provided an intuition of this classification with the aid of graphs and proceeded to detail each challenge. Finally, we discussed the connection between the CAP and the exploration problem, suggesting a way to diagnose when a failure is caused by one or the other, thereby disentangling exploration from CA. With these challenges in mind, we now proceed to review the state of the art in CA, and to discuss the methods that have been proposed to address these challenges.

## 6 Methods To Assign Credit In Deep RL

Following the definition of the CAP in Section 4.4, a *credit assignment method* is an algorithm that takes an initial guess K̃φ ∈ K and a finite set of experience D = (S × A × R)T and, by sampling and learning from transitions D ∼ PD,4 recursively produces a better approximation of the true assignment K. In this section, we present a list of the credit assignment methods focused on Deep RL. Our classification aims to identify the principal directions of development and to minimise the intersection between each class of methods.
We aim to understand the density around each set of approaches, to locate the branches suggesting the most promising results, and to draw a trend of the latest findings. This can be helpful to researchers on the CAP who want a bigger picture of the current state of the art, to general RL practitioners and research engineers who need to identify the most suitable methods for their applications, and to the part of the scientific community that focuses on different problems but can benefit from insights on CA. We define a CA method according to how it specifies three elements: (a) the measure of action influence via the assignment function K; (b) the protocol that the method uses to approximate K from the experience D; (c) the mechanism PD(d) to collect and sample from d ∈ D. This provides consistency with the framework just proposed and allows categorising each method by the mechanisms that it uses to assign credit. Therefore, for each method, we report the three elements described above. We identify the following categories:

1. Methods using **time contiguity** as a heuristic (Section 6.1).
2. Those **decomposing returns** into per-timestep utilities (Section 6.2).
3. Those conditioning on **predefined goals** explicitly (Section 6.3).
4. Methods conditioning the present on **future outcomes in hindsight** (Section 6.4).
5. Those modelling trajectories as **sequences** (Section 6.5).
6. Those **planning or learning backwards** from an outcome (Section 6.6).
7. Those **meta-learning** different proxies for credit (Section 6.7).

4To enhance the flow of the manuscript, we formalise *contextual distributions* in Appendix B, and since they are intuitive concepts, we describe them in words when surveying the methods.

Note that we do not claim that this list of methods is exhaustive. Rather, as in Section 4.5, this taxonomy is representative of the main approaches and a tool to understand the current state of the art in the field. We are keen to receive feedback on missing methods to improve further revisions of the manuscript. We now proceed to describe the methods, which we also summarise in Table 4.

| Publication | Method | Class | Depth | Density | Breadth |
|---|---|---|---|---|---|
| Baird (1999) | AL | Time | ◦ | ◦ | • |
| Wang et al. (2016b) | DDQN | Time | ◦ | ◦ | • |
| Pan et al. (2022) | DAE | Time | ◦ | ◦ | • |
| Klopf (1972) | ET | Time | • | ◦ | ◦ |
| Sutton et al. (2016) | ETD | Time | • | ◦ | ◦ |
| Bacon et al. (2017) | Option-critic | Time | • | ◦ | ◦ |
| Hung et al. (2019) | TVT | Return decomposition | • | ◦ | ◦ |
| Arjona-Medina et al. (2019) | RUDDER | Return decomposition | • | ◦ | ◦ |
| Ferret et al. (2021a) | SECRET | Return decomposition | • | ◦ | ◦ |
| Ren et al. (2022) | RRD | Return decomposition | • | ◦ | ◦ |
| Raposo et al. (2021) | SR | Return decomposition | • | ◦ | ◦ |
| Sutton et al. (2011) | GVF | Auxiliary goals | ◦ | • | ◦ |
| Schaul et al. (2015a) | UVFA | Auxiliary goals | ◦ | • | ◦ |
| Andrychowicz et al. (2017) | HER | Future-conditioning | ◦ | • | ◦ |
| Rauber et al. (2019) | HPG | Future-conditioning | ◦ | • | ◦ |
| Harutyunyan et al. (2019) | HCA | Future-conditioning | ◦ | • | ◦ |
| Schmidhuber (2019) | UDRL | Future-conditioning | ◦ | • | ◦ |
| Mesnard et al. (2021) | CCA | Future-conditioning | ◦ | • | • |
| Nota et al. (2021) | PPG | Future-conditioning | ◦ | • | ◦ |
| Janner et al. (2021) | TT | Sequence modelling | ◦ | • | ◦ |
| Chen et al. (2021) | DT | Sequence modelling | ◦ | • | ◦ |
| Goyal et al. (2019) | Recall traces | Backward planning | ◦ | • | • |
| Edwards et al. (2018) | FBRL | Backward planning | ◦ | • | • |
| Nair et al. (2020) | TRASS | Backward planning | ◦ | • | • |
| van Hasselt et al. (2021) | ET(λ) | Learning predecessors | • | ◦ | • |
| Xu et al. (2018) | MG | Meta-Learning | • | ◦ | ◦ |
| Yin et al. (2023) | Distr. MG | Meta-Learning | • | ◦ | ◦ |

Table 4: List of the most representative algorithms for CA, classified by the CA challenge they aim to address. For each method, we report the publication that proposed it, the class we assigned it to, and whether it is designed to address each challenge described in Section 5. Hollow circles (◦) mean that the method does not address the challenge; full circles (•) represent the opposite.

## 6.1 Time As A Heuristic

One common way to assign credit is to use time contiguity as a proxy for causality: an action is as influential as it is temporally close to the outcome. This means that, regardless of the action being an actual cause of the outcome, if the action and the outcome appear temporally close in the same trajectory, the action is assigned high credit. At the foundation of these methods is TD learning (Sutton, 1988), which we describe below.

TD learning (Sutton, 1984; 1988; Sutton & Barto, 2018) iteratively updates an initial guess of the value function according to the difference between expected and observed outcomes. More specifically, the agent starts with an initial guess of the values, acts in the environment, observes returns, and aligns the current guess to the observed return. The difference between the expected return and the observed one is the TD error δt:

$$\delta_{t}=R(s_{t},a_{t})+\gamma q^{\pi}(s_{t+1},a_{t+1})-q^{\pi}(s_{t},a_{t}),\tag{18}$$

with at+1 ∼ π and st+1 ∼ µ. When the temporal distance between the goal and the action is high - a premise at the base of the CAP - it is often improbable to observe temporally distant rewards. As time grows, so does the variance of the observed outcome, due to the intrinsic stochasticity of the environment dynamics and of the policy. To mitigate the issue, TD methods often replace the theoretical measure of influence with an approximation: the *TD target*. In TD learning, the value function is updated to approximate the *target*, and not the theoretical measure of action influence underneath it. Since policy improvement uses the current approximation of the value to update the policy, future behaviours are shaped according to it, and the *TD target* drives the learning process. We separate the methods in this category into three subgroups: those specifically designed around the advantage function, those re-weighing updates to stabilise learning, and those assigning credit to subsets of temporally extended courses of actions.

## 6.1.1 Advantage-Based Approaches

The first subset of methods uses the *advantage* (see Section 4.5) as a measure of action influence, but still uses time as a heuristic to learn it. Actor-Critic (AC) methods with a baseline function (Sutton & Barto, 2018, Chapter 13) approximate the action influence using some estimator of the *advantage* function (Equation (7)). In fact, the policy gradient is proportional to Eµ,π[(qπ(s, a) − b(s))∇ log π(a|s)], and if we choose vπ(s) as our baseline b(s), we get Eµ,π[Aπ(s, a)∇ log π(a|s)], because qπ(s, a) − vπ(s) = Aπ(s, a).
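The following hedged PyTorch sketch (toy tensors and values of our choosing) shows the baselined policy-gradient estimate just described: the advantage scales the score function ∇ log π(a|s).

```python
import torch

logits = torch.tensor([1.0, 0.5], requires_grad=True)  # policy parameters θ
dist = torch.distributions.Categorical(logits=logits)

action = torch.tensor(0)
q_sa, v_s = torch.tensor(1.2), torch.tensor(0.9)        # critic outputs
advantage = q_sa - v_s                                   # A(s,a) = q - v

# Surrogate loss whose gradient equals -A(s,a) ∇ log π(a|s):
loss = -(advantage.detach() * dist.log_prob(action))
loss.backward()
print(logits.grad)  # single-sample policy-gradient estimate
```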
The use of an action-independent baseline function usually helps to reduce the variance of the evaluation, and thus of the policy gradients, while maintaining an unbiased estimate of it (Sutton & Barto, 2018). What function to use as a baseline is the subject of major studies, and different choices of baselines often yield methods that go beyond using time as a heuristic (Harutyunyan et al., 2019; Mesnard et al., 2021; Nota et al., 2021; Mesnard et al., 2023).

**Advantage Learning (AL)** (Baird, 1999) also uses time as a proxy for causality. There are many instances of AL in the Deep RL literature. The Dueling Deep Q-Network (DDQN) (Wang et al., 2016b) improves on DQN by calculating the q-value as the sum of the state-value function and a normalised version of the advantage. Even if this results in using the q-value as a measure of action influence, with $K(s,a) = v^{\pi}(s) + \left(A^{\pi}(s,a) - \sum_{a'} A^{\pi}(s,a')/|\mathcal{A}|\right)$, approximating the advantage is a necessary step of it. DAE (Pan et al., 2022) follows Wang et al. (2016b) with the same specification of the advantage but provides better connections between the advantage and causality theory. In particular, for fully observable MDPs, the causal effect of an action a upon a scalar outcome G is defined as E[G|s, a] − E[G|s]. If we choose the return Z as the outcome, this actually corresponds to the advantage E[Z|s, a] − E[Z|s] = qπ(s, a) − vπ(s), which becomes an approximate expression for the causal influence of an action upon the random return, as discussed also in Arumugam et al. (2021). Here, the context is an MDP state, the action is the greedy action with respect to the current advantage estimation, and the goal is the expected return at termination.

As explained in Section 5.3, the advantage can be decomposed into two terms, Aπ(s, a) = qπ(s, a) − vπ(s). Since vπ(s) = Eπ[qπ(s, a)], it is clear that the accuracy of the advantage depends on the accuracy of the q-values of all actions. It has been shown that, because of this, estimating and incorporating the advantage in the q-value has a regularisation effect (Vieillard et al., 2020a). Another effect is increasing the action-gap (i.e. the difference in value between the best and second-best action), which facilitates value learning. Because evaluations are more accurate for a greater portion of the state-action space, AL-based methods contribute to addressing MDP breadth, as shown in Table 4.

## 6.1.2 Re-Weighing Updates And Compound Targets

The second subset of methods in this category re-weighs temporal updates according to some heuristics, which we detail below. Re-weighing updates can be useful to emphasise or de-emphasise important states or actions to stabilise learning in Deep RL (van Hasselt et al., 2018).

**Eligibility Traces (ET)** (Klopf, 1972; Singh & Sutton, 1996; Precup, 2000a; Geist et al., 2014; Mousavi et al., 2017) credit the long-term impact of actions by keeping track of the influence of past actions on the agent's future reward. Specifically, an eligibility trace (Sutton & Barto, 2018, Chapter 12) is a function that assigns a weight to each state-action pair, based on the recency of the last visit to it. A *trace* et(s) spikes every time a state (or state-action) is visited and decays exponentially over time until the next visit or until it extinguishes. At each update, the TD error, which determines the magnitude of the update, is scaled by the value of the trace at that state: $\delta_t^{ET} = \delta_t\, e_t(s)$.
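Before detailing the variants, the following is a minimal sketch of tabular TD(λ) with accumulating traces, assuming a generic `env` with `reset()`/`step()` and a fixed `policy`; both interfaces are hypothetical stand-ins for whatever environment is at hand.

```python
import numpy as np

def td_lambda_episode(env, policy, v, gamma=0.99, lam=0.9, alpha=0.1):
    """One episode of tabular TD(lambda) with accumulating traces."""
    e = np.zeros_like(v)                 # one trace per state
    s, done = env.reset(), False
    while not done:
        a = policy(s)
        s_next, r, done = env.step(a)
        delta = r + gamma * (0.0 if done else v[s_next]) - v[s]
        e[s] += 1.0                      # accumulating trace: spike on visit
        v += alpha * delta * e           # every state updated by delta * e(s)
        e *= gamma * lam                 # exponential decay until next visit
        s = s_next
    return v
```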
There are several types of eligibility traces, depending on the law of decay of the trace. For example, with accumulating traces (Klopf, 1972), every visit causes an increment of the trace. Replacing traces (Singh & Sutton, 1996) are instead capped at a specific value. Deep Q(λ)-Networks (DQ(λ)Ns) (Mousavi et al., 2017) implement eligibility traces on top of DQN (Mnih et al., 2015). Here, the eligibility trace is a vector e ∈ Rd with the same number of components d as the parameters of the DNN, and the action influence is measured by the q-value with parameter set θ ∈ Rd. The context is an MDP state, the action is an off-policy action in a transition arbitrarily chosen from the buffer; the goal is the expected return. The ET information is embedded in the parameters θ since they are updated according to θ ← θ + δe. Here e is the eligibility trace, incremented at each update by the value gradient (Sutton & Barto, 2018, Chapter 12): e ← γλe + ∇θqπ(s, a). Finally, successive works advanced on the idea of ETs and proposed different updates for the eligibility vector (Singh & Sutton, 1996; van Hasselt & Sutton, 2015; Precup, 2000a).

**Emphatic Temporal Differences (ETDs)** (Sutton et al., 2016; Mahmood et al., 2015; Jiang et al., 2021b) continue on the idea of ETs to weigh TD updates with a trace. They aim to address the issue that canonical ETs may suffer from early divergence when combined with non-linear function approximation and off-policy learning. The re-weighing in ETD is based on the *emphatic trace*, which encodes the degree of bootstrapping of a state. Originating from tabular and linear RL, the intuition behind ETDs is that states with high uncertainty - the states encountered long after the state-action pair of evaluation - are less reliable, and vice versa. The main adaptation of the algorithm to Deep RL is by Jiang et al. (2021b), who propose the Windowed Emphatic TD(λ) (WETD) algorithm. In this approach, ETD is adapted to incorporate update windows of length n, introducing a mixed update scheme where each state in the window is updated with a variable bootstrapping length, all bootstrapping on the last state in the window. The influence of an action in WETD is the same as for any other ET, but the trace itself is different and measures the amount of bootstrapping of the current estimate.

ETDs provide an additional mechanism to re-weigh updates: the interest function i : S → [0, ∞). By emphasising or de-emphasising the interest of a state, the interest function can be a helpful tool to encode the influence of the actions that led to that state. Because hand-crafting an interest function requires human intervention, allowing suboptimal and biased results, Klissarov et al. (2022) propose a method to learn and adapt the interest function at each update using meta-gradients. Improvements on both discrete control problems, such as the ALE, and continuous control problems, such as MuJoCo (Todorov et al., 2012), suggest that the interest function can be helpful to assign credit faster and more accurately.

Re-weighing updates encompasses a set of techniques to adjust the influence of past actions based on their temporal proximity to the current state. Such methods aim to mitigate the limitations of TD methods by dynamically adjusting the weight assigned to past actions, thereby emphasising or de-emphasising their contribution to future rewards.
For this reason, these methods can be seen as potential solutions to mitigate the impacts of delayed effects and improve credit assignment in settings with high MDP depth, as shown in Table 4.

## 6.1.3 Assigning Credit To Temporally Extended Actions

The third and last subset of methods in this category assigns credit to temporally extended actions rather than a single, atomic action. This is formalised in the *options framework* (Sutton et al., 1999; Precup, 2000b). For the purpose of CA, *options*, also known as *skills* (Haarnoja et al., 2017; Eysenbach et al., 2018), can be described in terms of the problem of achieving *sub-goals*, such that an optimal policy can be seen as the composition of elementary behaviours. For example, in a key-to-door environment, such as MiniGrid (Chevalier-Boisvert et al., 2018) or MiniHack (Samvelyan et al., 2021), the agent might select the option *pick up the key*, followed by *open the door*. Each of these macro-actions requires a lower-level policy to be executed. For example, *picking up the key* requires selecting the actions that lead to reaching the key before grabbing it. In the options framework, credit is assigned for each specific subgoal (the macro-action), with the benefits of explicitness already described in Section 4.6. The idea stems from the intuition that it is easier to assign credit to macro-actions, since a sequence of options is usually shorter than a sequence of atomic actions, reducing the overall temporal distance to the time of achieving the goal. However, since the option literature often does not explicitly condition on goals, but uses other devices to decompose the CA problem, we review works about learning options next, and dedicate a separate section to auxiliary goal-conditioning in Section 6.3.

The option-critic architecture (Bacon et al., 2017) scales options to Deep RL and mirrors the actor-critic architecture, but considering options rather than actions. The option-critic architecture allows learning both how to execute a specific option and which option to execute at each time, simultaneously and online. Options execute using the *call-and-return* model. Starting from a state s, the agent picks an option ω according to its policy over options πΩ. This option then determines the primitive action selection process through the intra-option policy πω, until the option termination function β signals to stop. Learning options, and assigning credit to their actions, is then possible using the *intra-option policy gradient* and the *termination gradient* theorems (Bacon et al., 2017), which define the gradient (and thus the corresponding update) for all three elements of the learning process: the options ω ∈ Ω, their termination function β(s), and the policy over options πΩ. Here, the context is a state s ∈ S, the actions to assign credit to are both the intra-option action a ∈ A and the option ω ∈ Ω, and the goal is to maximise the return. On the same lines, Riemer et al. (2018) propose *hierarchical option-critics*, which allow learning options at multiple hierarchical levels of resolution - nested options - but still only on a fixed number of pre-selected options. Klissarov & Precup (2021) further improve on this method by updating all options with a single batch of experience. In the context of the option-critic architecture, CA occurs at multiple levels of the hierarchy.
At the lower, intra-option level, where individual actions are taken, credit assignment involves determining the contribution of each action to the achievement of sub-goals. This is essential for learning effective policies for executing primitive actions within each option. At the higher level of the hierarchy, credit assignment involves attributing credit to options for achieving higher-level goals, and involves identifying the contribution of each option to achieving the overall task objective. The hierarchical structure of the option-critic architecture facilitates credit assignment by decomposing the learning problem into multiple levels of abstraction. For their ability to decompose a bigger task into smaller sub-problems, these methods naturally improve credit assignment when effects are delayed and in settings with high MDP depth (see Table 4).

The methods we covered in this section use the temporal distance between the context-action pair and a reward to measure the action influence. The closer the action, the higher its influence, and vice versa. While this may be a reasonable assumption when the policy is optimal, it is not the case for the early exploratory stages of learning. In fact, as described in Section 5.4, highly influential actions are often taken long before their rewards are collected while exploring. For example, in our usual key-to-door example, the agent would pick up the key, perform hundreds of random, unnecessary actions, and only reach the goal-tile after those. In these cases, the two events are separated by a "*multitude of* [random and non-influential] *decisions*" (Minsky, 1961). Because these non-influential actions are temporally closer to reaching the goal-tile than the action of picking up the key, these methods mistakenly assign them high influence and, in particular, a higher influence than to picking up the key. Today, methods that assign credit only by looking at the temporal distance between the action and the outcome usually underperform on tasks with delayed effects (Arjona-Medina et al., 2019). Nevertheless, some of the branches in this category improve assignments in conditions of high MDP depth by re-weighing updates, using the advantage, or breaking down the task into multiple, composable subtasks.

## 6.2 Decomposing Return Contributions

To improve CA in settings with high MDP depth, the line of research we describe next focuses on decomposing returns into per-timestep contributions. These works interpret the CAP as a *redistribution* problem: the return observed at termination is re-distributed to each time-step with an auxiliary mechanism that depends on each method and complements TD learning.

**Temporal Value Transport (TVT)** (Hung et al., 2019) uses an external long-term memory system to improve on delayed tasks. The memory mechanism is based on the Differentiable Neural Computer (DNC) (Grefenstette et al., 2015; Graves et al., 2016), a neural network that reads events from an external memory matrix, represented as the hidden state of a Long Short-Term Memory (LSTM). The agent decides when to read from and write into it. To write, state-action-reward triples are projected to a lower-dimensional space and processed by the DNC. During training, this works as a trigger: when a past state-action pair is read from memory, it gets associated with the current one, transporting the state-action value - credit - from the present to the remote state. To read, the state-action-reward triple is reconstructed from the latent code.
During inference, this acts as a proxy for credit. If a past state-action-reward triple is retrieved from the memory, it means that it is correlated with the current return. This allows using the retrieval score of a past transition as a measure of the influence of its action.

**Return Decomposition for Delayed Rewards (RUDDER)** (Arjona-Medina et al., 2019) stems from the intuition that, if we can construct a reward function that *redistributes* the rewards collected in a trajectory such that the expected future reward is zero, we obtain an instantaneous signal that immediately informs the agent about future rewards. The method proposes to learn a function f : (S × A)T → R that maps a sequence of state-action pairs to the sum of discounted rewards, including the past, present and future rewards. In practice, f is implemented as an LSTM, which is trained to fit a subset of the whole experience set Dr ⊂ D. Dr is constructed to contain only trajectories containing delayed rewards, and experience is sampled proportionally to the current prediction error. The underlying hypothesis is that, by fitting the return, the LSTM's hidden state holds useful information to redistribute the return to the most relevant transitions in the sequence. Once f represents a faithful model of the return, at each iteration of the RL algorithm, RUDDER uses the LSTM to infer the return for each (st, at) in d ∼ Pµ,π. It then uses the difference between the inferred returns (i.e., the redistributed returns) at two consecutive time steps as a reward to perform canonical TD learning. This quantity represents the credit of a state-action pair:

$$K(s_{t},a_{t})=f(s_{t+1},a_{t+1})-f(s_{t},a_{t})\tag{19}$$

Here, f(st+1, at+1) − f(st, at) = R∗(st, at) is the reward function of M∗ = (S, A, R∗, µ, γ), an MDP return-equivalent to M = (S, A, R, µ, γ): M and M∗ have the same set of optimal policies, but the reward function R∗ of M∗ is such that the sum of expected future rewards is zero for all states (all the future rewards are paid in the current state). The context is a history h = {ot, at, rt : 0 ≤ t ≤ T} from the assigned MDP, the action is an action from the trajectory a ∈ h, and the goal is the achieved return.

**Self-Attentional Credit Assignment for Transfer (SECRET)** (Ferret et al., 2021a) uses a causal Transformer-like architecture (Vaswani et al., 2017) with a self-attention mechanism (Lin et al., 2017) in the standalone supervised task of reconstructing the sequence of rewards from observations and actions. It then views attention weights over past state-action pairs as credit for the generated rewards. This was shown to help in settings of high MDP depth, in a way that transfers to novel tasks when trained over a distribution of tasks. We can write its measure of action influence as follows:

$$K(s,a)=\sum_{t=1}^{T}\mathds{1}\{S_{t}=s,A_{t}=a\}\sum_{i=t}^{T}\alpha_{t\gets i}R(s_{i},a_{i}).\tag{20}$$

Here, αt←i is the attention weight on (ot, at) when predicting the reward ri. Also, here the context is a history h, the action is an action from the trajectory a ∈ h, and the goal is the achieved return.

**Synthetic Returns (SR)** (Raposo et al., 2021) assume that a single state-action pair is responsible for the terminal reward. They propose a form of state-pair association where the earlier state (the *operant*) is a leading indicator of the reward obtained in the later one (the *reinforcer*). The association model is learned with a form of episodic memory.
Each entry in the memory buffer, which holds the states visited in the current episode, is associated with a reward - the *synthetic* reward - via supervised learning. At training time, this allows propagating credit *directly* from the reinforcer to the operant, bypassing the local temporal difference. When this reward model is accurately learned, each time the operant is observed, the synthetic reward model spikes, indicating a creditable state-action pair. Here the synthetic reward acts as a measure of causal influence, and we write:

$$K(s,a)=q^{\pi}(s,a)+f(s).\tag{21}$$

Here f(s) is the synthetic reward function, and it is trained with value regression on the loss $||r_{t}-u(s_{t})\sum_{k=0}^{t-1}f(s_{k})-b(s_{t})||^{2}$, where u(st) and b(st) are auxiliary neural networks optimised together with f. As for Arjona-Medina et al. (2019), the context c is a history h from the assigned MDP, the action is an action from the trajectory a ∈ h, and the goal is the achieved return. This method is, however, stable only within a narrow range of hyperparameters and assumes that only a single action is to be credited.

The methods in this section assign credit by decomposing returns into per-time-step contributions and then learning values from this new, clearer reward signal. For the purposes of this survey, they mainly differ by the method used to redistribute the contributions to each context-action pair. TVT uses an external memory system, RUDDER uses contribution analysis, SECRET exploits the Transformer's self-attention weights, and SR uses a gating function. Their motivation stems from improving on delayed effects, which they often state as an explicit goal, and for this reason, we report them as improving CA in settings of high MDP depth (Table 4). Indeed, the empirical evidence they provide suggests that improvements are consistent, and *redistribution* methods provide benefits over their TD learning baselines. On the other hand, these methods do not provide formal guarantees that the assignments improve over TD learning, and there is currently a gap to fill to justify these improvements also theoretically. This is the case, for example, of other methods (Harutyunyan et al., 2019; Wang et al., 2016b; Mesnard et al., 2021; van Hasselt et al., 2021) that provably reduce the variance of the evaluation, some of which we describe in later sections.

## 6.3 Conditioning On A Predefined Set Of Auxiliary Goals

The methods in this category evaluate actions for their ability to achieve multiple goals explicitly. They do so by conditioning the value function on a goal and then using the resulting value function to evaluate actions. The intuition behind them is that the agent's knowledge about the future can be decomposed into more elementary associations between states and goals. We now describe the two most influential methods in this category.

**General Value Functions (GVFs)** (Sutton et al., 2011), described in Section 4.5, stem from the idea that knowledge about the world can be expressed in the form of predictions. These predictions can then be organised hierarchically to solve more complex problems. While GVFs carry several modifications to the canonical value, we focus on their goal-conditioning for the purpose of this review, which is also their foundational idea. GVFs condition the action value on a goal to express the expected return with respect to the reward function that the goal induces.
In their original formulation (Sutton et al., 2011), GVFs are a set of value functions, one for each goal. The goal is any object in a predefined goal set of MDP states g ∈ S, and the resulting measure of action influence is the following:

$$K(s,a,g)=q^{\pi}(s,a,g),\tag{22}$$

that is, the q-function with respect to the goal-conditioned reward function R(s, a, g), which is 0 everywhere and 1 when a certain state is reached. Because GVFs evaluate an action for what is going to happen in the future, GVFs are forward methods, and interpret the CAP as a prediction problem: "What is the expected return of this action, given that g is the goal?".

**Universal Value Function Approximators (UVFAs)** (Schaul et al., 2015a) scale the idea of GVFs to a large set of goals by using a single value function to learn the whole space of goals. One major benefit of UVFAs over GVFs is that they are readily applicable to Deep RL by simply adding the goal as an input to the value function approximator. This allows the agent to learn end-to-end with bootstrapping and allows for exploiting a shared prediction structure across different states and goals. Since they derive from GVFs, UVFAs share most of their characteristics. The context is an MDP state s ∈ S; the goal is still any object in a predefined goal set of states, g ∈ S; and the credit of an action is the expected return of the reward function induced by the goal (see Equation (22)).

The methods in this category stand out for using an *explicit* goal to assign credit, as described in Section 4.6. What distinguishes these methods from those that follow in the next section (which also use goals explicitly) is their flexibility. While hindsight methods choose the goal after completing a trajectory, or based on information acquired during training, these methods do not. Instead, the set of goals of a GVF is predefined in Sutton et al. (2011). UVFAs, even if they can generalise to new goals in theory, are not designed with that purpose in mind, and their application is limited. This represents both a strong limitation of these methods and a gap to fill in the literature, since it limits both their flexibility and their autonomy to adapt to different tasks, requiring the human designer to specify the set of goals *ex-ante* and to provide the set of goals as an input at the start of training. Furthermore, their interpretation of credit is still linked to the idea of temporal contiguity described in Section 6.1. For this reason, they share many drawbacks and limitations with those methods and perform poorly when the MDP is deep, especially if not accompanied by more advanced techniques. To the best of our knowledge, there are no examples in the literature that pair these methods with more advanced CA techniques (e.g., *options*), which represents a gap to fill. On the other hand, by specifying a goal explicitly (GVFs) and by generalising over the goal space (UVFAs), conditioning on a predefined set of goals provides a way to extract signals from the environment even when the signal is sparse and action influence is low. In fact, even when the main task is complex, the set of auxiliary goals is designed to provide a useful signal for learning. This is the reason why we consider these methods as improving CA when the MDP is sparse (see Table 4).
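To illustrate the core architectural idea behind UVFAs - a single approximator over the joint state-goal space rather than one value function per goal - the following is a minimal sketch; the dimensions, hidden size, and architecture are illustrative assumptions, not the original implementation of Schaul et al. (2015a).

```python
import torch
import torch.nn as nn

class GoalConditionedQ(nn.Module):
    """A single q-network that takes the goal as an extra input,
    approximating q(s, a, g) as in Equation (22)."""

    def __init__(self, state_dim, goal_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, goal):
        # one q-value per action, conditioned on the goal
        return self.net(torch.cat([state, goal], dim=-1))

q_fn = GoalConditionedQ(state_dim=4, goal_dim=4, n_actions=3)
s, g = torch.randn(1, 4), torch.randn(1, 4)
print(q_fn(s, g).shape)  # torch.Size([1, 3])
```

Sharing one network across goals is what enables the generalisation over the goal space discussed above: nearby goals produce nearby value estimates.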
## 6.4 Conditioning In Hindsight

The methods in this category are characterised by the idea of re-evaluating the action influence according to what the agent achieved, rather than what it was supposed to achieve. This means that, given a trajectory h, we can choose some goal g ∈ G (*after* collecting h) and evaluate the influence of all the actions in h upon achieving g. We separate the methods in this category into three subgroups. (i) Those that re-label the past experience under a different perspective, such as achieving a different goal than the one the agent started with. (ii) Those that condition the action evaluation on some properties of the future during training, which become an explicit performance request at inference time. (iii) Those that condition on future factors that are independent of the evaluated action, but that still influence future returns.

## 6.4.1 Relabelling Experience

**Hindsight Experience Replay (HER)** (Andrychowicz et al., 2017) stems from the problem of learning in sparse-reward environments, which is an example of low action influence in our framework (see Section 5.2). The method exploits the fact that even if a trajectory is suboptimal for the overall implicit goal of maximising MDP returns, it can be viewed as optimal if the goal is to achieve its final state. In practice, HER brings together UVFAs and experience replay (Lin, 1992) to re-examine trajectories. After collecting a set of trajectories from the environment, the agent stores each transition in a replay buffer, together with both the state it sought to reach and the one that it actually did reach. This allows optimising $\hat{K}_{\varphi}(s,a,g)$ for both goals. We refer to this process of re-examining a trajectory collected with a prior goal in mind and evaluating it according to the actually realised outcome as *hindsight conditioning*, which is also the main innovation that HER brings to the CAP. Notice that the original goal is important because the trajectory is collected with a policy that aims to maximise the return for that specific goal. However, in HER, the goal set is still predefined, which requires additional specifications from the agent-designer and can limit the autonomy of the overall agent. HER uses the goal-conditioned q-values described in Section 6.3 to measure action influence:

$$K(s_{t},a_{t},s_{T})=q^{\pi}(s_{t},a_{t},s_{T}).\tag{23}$$

Here the context is a history from the MDP, the action is an action from the trajectory a ∈ h, and the goal g is to visit a state sT at the end of a trajectory.

Since HER is limited to off-policy learning with experience replay, **Hindsight Policy Gradients (HPGs)** (Rauber et al., 2019) transfer the findings of HER to Policy Gradient (PG) methods and extend them to online settings. Instead of updating the policy based on the actual reward received, HPG updates the policy based on the hindsight reward, which is calculated with respect to the new goals defined as in HER. The main difference with HER is that in HPGs, both the critic and the actor are conditioned on the additional goal. This results in a goal-conditioned policy π(·|S = s, G = g), describing the probability of taking an action, given the current state and a realised outcome.
The action influence used in HPG is the advantage formulation of the hindsight policy gradients:

$$K(s,a,g)=q^{\pi}(s,a,g)-v^{\pi}(s,g),\tag{24}$$

where qπ(s, a, g) and vπ(s, g) are the goal-conditioned value functions. Here the context c is a history h = {ot, at, rt : 0 ≤ t ≤ T}, and the goal is arbitrarily sampled from a goal set, g ∈ G. Like HER, HPG is tailored to tasks with low action influence due to low MDP density, and it is shown to be effective in sparse-reward settings. Overall, HER and HPG are the first complete works to frame *hindsight* as the re-examination of outcomes for CA. Their solution is not particularly interesting for the CAP, as they do not cast their problem as a CAP and do not connect their findings to the CAP explicitly. However, they are key precursors of the methods that we review next, which instead provide novel and reusable developments for the CAP specifically.

## 6.4.2 Conditioning On The Future

**Hindsight Credit Assignment (HCA)** Traditional reinforcement learning algorithms often struggle with credit assignment as they rely solely on foresight: they evaluate actions against a predetermined goal, selected *before* acting. These methods operate under the assumption that we lack knowledge of what occurs beyond a given time step, making accurate credit assignment challenging, especially in tasks with delayed effects. Harutyunyan et al. (2019), on the other hand, centre on utilising hindsight information, acknowledging that credit assignment and learning typically take place after the agent completes its current trajectory. This approach enables us to leverage this additional data to refine the learning of critical variables necessary for credit assignment. Harutyunyan et al. (2019) introduce a new family of algorithms known as Hindsight Credit Assignment (HCA). HCA algorithms explicitly assign credit to past actions based on the likelihood of those actions having been taken, given that a certain outcome has been observed. This is achieved by comparing a learned *hindsight distribution* over actions, conditioned on a future state or return, with the policy that generated the trajectory. More precisely, the hindsight distribution h(a|st, π, g) is the likelihood of an action a, given the outcome g experienced in the trajectory d ∼ Pµ,π(D|S0 = s, at ∼ π). In practice, Harutyunyan et al. (2019) consider two classes of outcomes: states and returns. We refer to the algorithms that derive from these two classes of goals as *state-HCA* and *return-HCA*. For state-HCA, the context c is the current state st at time t; the outcome is a future state in the trajectory st′ ∈ d where t′ > t; the credit is the ratio between the state-conditional hindsight distribution and the policy, $h_t(a|s_t, s_{t'})/\pi(a|s_t)$. For return-HCA, the context c is identical; the outcome is the observed return Zt; the credit is based on the ratio between the policy and the return-conditional hindsight distribution, $1 - \pi(a|s_t)/h_t(a|s_t, Z_t)$. The resulting ratios provide a measure of how crucial a particular action was in achieving the outcome. A ratio deviating further from 1 indicates a greater impact (positive or negative) of that action on the outcome. For example, return-HCA measures the influence of an action with the hindsight advantage described in Section 4:

$$K(s_{t},a_{t},z_{t})=\left(1-\frac{\pi(a_{t}|S_{t}=s_{t})}{\mathbb{P}_{\mu,\pi}(a_{t}|S_{t}=s_{t},Z_{t}=z_{t})}\right)z_{t}.\tag{25}$$
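Equation (25) is straightforward to evaluate once a hindsight distribution is available. The following minimal sketch computes the return-HCA credit for every action, assuming we already hold the policy probabilities and a learned hindsight model as plain arrays (both are illustrative stand-ins):

```python
import numpy as np

def return_hca_credit(pi_probs, hindsight_probs, z):
    """K(s, a, z) = (1 - pi(a|s) / h(a|s, z)) * z for every action a.

    pi_probs:        pi(a|s), shape (n_actions,)
    hindsight_probs: h(a|s, z), shape (n_actions,)
    z:               the observed return
    """
    return (1.0 - pi_probs / hindsight_probs) * z

pi_probs = np.array([0.7, 0.2, 0.1])
h_probs = np.array([0.5, 0.4, 0.1])   # action 1 more likely given outcome z
print(return_hca_credit(pi_probs, h_probs, z=1.0))
# -> [-0.4  0.5  0. ]: ratios far from 1 signal influential actions
```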
To compute the *hindsight distribution*, HCA algorithms employ a technique related to importance sampling. Importance sampling estimates the expected value of a function under one distribution (the *hindsight distribution*) using samples from another distribution (the policy distribution). In the context of HCA, importance sampling weights are determined based on the likelihood of the agent taking each action in the trajectory, given the hindsight state, compared to the likelihood of that same action under the policy. Once the hindsight distribution is computed, HCA algorithms can be used to update the agent's policy and value function. One approach involves using the hindsight distribution to reweight the agent's experience. This means the agent will learn more from actions that were more likely to have contributed to the observed outcome. Besides advancing the idea of hindsight, Harutyunyan et al. (2019) carry one further novelty: the possibility to drop the typical policy evaluation setting, where the goal is to learn a value function by the repeated application of the Bellman expectation backup. Instead, action values are defined as a measure of the likelihood that the action and the outcome appear together in the trajectory, and are a precursor of the sequence modelling techniques described in the next section (Section 6.5).

**Upside-Down RL (UDRL)** (Schmidhuber, 2019; Srivastava et al., 2019; Ashley et al., 2022; Štrupl et al., 2022) is another implementation of the idea to condition on the future. The intuition behind UDRL is that rather than conditioning returns on actions, which is the case of the methods in Section 6.1, we can invert the dependency and condition actions on returns instead. This allows using returns as an input and inferring the action distribution that would achieve that return. The action distribution is approximated using a neural network, the *behaviour policy*, which is trained via maximum likelihood estimation using trajectories collected online from the environment. In UDRL the context is a completed trajectory d; the outcome is a command that achieves the return Zk in H = T − k time-steps, which we denote as g = (Zk, H); the credit of an action a is its probability according to the behaviour function, π(a|s, g). In addition to what HCA does, UDRL also conditions on the return being achieved within a specific timespan.

**Posterior Policy Gradients (PPGs)** (Nota et al., 2021) further the idea of hindsight to provide lower-variance, future-conditioned baselines for policy gradient methods. At the base of PPG there is a novel value estimator, the PVF. The intuition behind PVFs is that in POMDPs the state value is not a valid baseline, because the true state is hidden from the agent and the observation cannot serve as a sufficient statistic for the return. However, after a full episode, the agent has more information to calculate a better, a posteriori guess of the state value at earlier states in the trajectory. Nota et al. (2021) refer to the family of possible *a posteriori* estimations of the state value as the PVF. Formally, a PVF decomposes a state into its current observation ot and some hidden component ut that is not observable and typically unknown. The value of a state can then be written as the expected observation-action value function over the possible non-observable components u ∈ U = Rd.
The action influence of a PPG is quantified by the expression:

$$K(o_{t})=\mathbb{E}_{u\in\mathcal{U}}\left[\mathbb{P}(u_{t}=u|h_{t})\,v(o_{t},u)\right].\tag{26}$$

Notice that, as explained in Section 4.5, the PVF does not depend on an action. However, we can derive the corresponding action-value formulation with qπ(ht, a) = R(st, at) + γvπ(ht+1). Here, the context is an observation, the action is the current action, and the goal is the observed return. In practice, PVFs advance HCA by learning which statistics of the trajectory ψ(d) are useful to assign credit, rather than fixing them a priori to be a state or a return.

## 6.4.3 Exposing Irrelevant Factors

**Counterfactual Credit Assignment (CCA)** To be data efficient, credit assignment methods need to disentangle the effects of a given action of the agent from the effects of external factors and subsequent actions. External factors in reinforcement learning are any factors that affect the state of the environment or the agent's reward but are outside the agent's control. This can include things like the actions of other agents in the environment, or changes in the environment state due to natural processes or events. These factors can make credit assignment difficult because they can obscure the relationship between the agent's actions and its rewards. Mesnard et al. (2021) propose to draw inspiration from counterfactuals in causality theory to improve credit assignment in model-free reinforcement learning. The key idea is to condition value functions on future events, and to learn to extract relevant information from a trajectory. Relevant information here corresponds to all information that is predictive of the return while being independent of the agent's action at time t. This allows the agent to separate the effect of its own actions, the *skill*, from the effect of external factors and subsequent actions, the *luck*, which enables refined credit assignment and therefore faster and more stable learning. The work shows that these algorithms have provably lower variance than vanilla policy gradient, and develops valid, practical variants that avoid the potential bias from conditioning on future information. One variant explicitly tries to remove information from the hindsight conditioning that depends on the current action, while the second variant avoids the potential bias from conditioning on future information thanks to a technique related to importance sampling. The empirical evidence in Mesnard et al. (2021) suggests that CCA offers great improvements in tasks with delayed effects.

The methods in this section bring many independent novelties to CA. The most relevant for our scope is the idea of hindsight conditioning, which can be summarised as evaluating past actions using additional information about the future, usually not available at the time the action was taken. They differ from those in Section 6.3, as they do not act on a pre-defined objective set of goals, but these are chosen *in hindsight*. One drawback of these methods is that they must be able to generalise to a large goal space to be effective, which is not a mild requirement, because the ability to generalise often correlates with the size of the network. This can limit the applicability of the method, especially in cases of low computation and memory budgets.
One of the greatest benefits of these methods is to always have a signal to learn from because, by construction, there is always a goal that has been achieved in the current trajectory, for example, the final state or the terminal return. This, in turn, produces a higher number of context-action-outcome associations, translates into additional training data that is often beneficial in supervised problems, and results in an overall denser signal. These improvements in MDPs with low density, which we report in Table 4, are supported by both empirical evidence and theoretical guarantees to reduce the variance of the evaluations (Harutyunyan et al., 2019; Wang et al., 2016b; Mesnard et al., 2021; van Hasselt et al., 2021). Incorporating information about the future (for example, future returns or states) is most likely one major reason why these algorithms outperform the others. In fact, when this information is designed to express particular features, such as action-independence or independence from irrelevant factors, as in Mesnard et al. (2021), the gap increases even further. Finally, some of these methods (Mesnard et al., 2021) also incentivise the discovery of multiple pathways to the same goal, by identifying decisions that are irrelevant to the outcome, resulting in the fact that any of them can be taken without affecting the outcome. The only requirement is to employ an actor-critic algorithm, which we consider a mild assumption, since transitioning from actor-critic to value-based settings is usually trivially achievable.

## 6.5 Modelling Trajectories As Sequences

The methods in this category are based on the observation that RL can be seen as a sequence modelling problem. Their main idea is to transfer the successes of sequence modelling in Natural Language Processing (NLP) to improve RL. On a high level, they all share the same assumption: a sequence in RL is a sequence of transitions (s, a, r), and they differ in either how to model the sequence, the problem they solve, or the specific method they transfer from NLP.

**Trajectory Transformers (TTs)** (Janner et al., 2021) implement a decoder-only (Radford et al., 2018; 2019) Transformer (Vaswani et al., 2017) to model the sequence of transitions. TTs learn from an observational stream of data, composed of expert demonstrations, resulting in an offline RL training protocol. The main idea of TTs is to model the next token in the sequence, which is composed of the next state, the next action, and the resulting reward. This enables planning, which TTs perform via beam search. Notice that, for any of these paradigms, even if the sequence model is autoregressive - the next prediction depends only on the past history - since a full episode is available, the future-conditioned probabilities are still well-defined, and TTs can also condition on the future. In TTs the action influence is the product between the action probability according to the demonstration dataset and its q-value:

$$K(s_{t},a_{t},z_{t})=\mathbb{P}_{\theta}(A_{t}=a_{t}|Z_{t}=z_{t})\,q^{\pi}(s_{t},a_{t}).\tag{27}$$

Here, the context c is an MDP state c = s ∈ S, the action is arbitrarily selected, and the goal is the return distribution P(Z).

**Decision Transformers (DTs)** (Chen et al., 2021) proceed on the same lines as TTs but ground the problem in learning, rather than planning. DTs interpret a sequence as a list of (st, at, Zt) triples, where Zt is the return-to-go.
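For concreteness, a minimal sketch of how the return-to-go sequence can be computed from a trajectory of rewards, $Z_t = \sum_{k \geq t} r_k$ (shown undiscounted here, as is common in the DT setup):

```python
import numpy as np

def returns_to_go(rewards):
    """Z_t = r_t + r_{t+1} + ... + r_T for every timestep t."""
    return np.flip(np.cumsum(np.flip(np.asarray(rewards, dtype=float))))

rewards = [0.0, 0.0, 1.0, 0.0, 2.0]
print(returns_to_go(rewards))  # [3. 3. 3. 2. 2.]
```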
They then use a Transformer to learn a model of the actor that takes the current state and the return as input and outputs a distribution over actions. In addition, they optionally learn a model of the critic as well, which takes the current state and each action in the distribution to output the value of each action. The sequences are sampled from expert or semi-expert demonstrations, and the model is trained to maximise the likelihood of the actions taken by the expert. From the perspective of CA, TTs and DTs are equivalent, and they share the same limitation in that they struggle to assign credit accurately to experience beyond that of the offline dataset. Furthermore, like HCA (Harutyunyan et al., 2019), DTs bring more than one novelty to RL. Besides modelling the likelihood of the next token, they also use returns as input to the model, resulting in a form of future conditioning. However, for CA and this section, we are only interested in their idea of sequence modelling and we will not discuss the other novelties. There exist further extensions of DTs, both to online settings (Zheng et al., 2022) and to modelling quantities beyond the return (Furuta et al., 2022). The former allows assigning credit by modelling transition sequences in online settings. The latter, instead, generalises sequence modelling to transitions with additional arbitrary information attached, in the same way that Future-Conditional Policy Gradients (FC-PGs) generalise HCA.

Sequence modelling in RL transfers the advances in sequence modelling for NLP to the Deep RL setting. The main idea is to measure credit by estimating the probability of the next action (or the next token), conditioned on the context and the goal defined in hindsight, according to an offline dataset of expert trajectories (Chen et al., 2021; Janner et al., 2021). While some works propose adaptations to online fine-tuning (Lee et al., 2022), these methods mostly learn from offline datasets, and the idea of applying sequence modelling online is underexplored. This represents a strong limitation, as it restricts the generalisation ability of these methods. For example, DTs often fail to generalise to returns outside the training distribution. The distribution that measures this likelihood, P(a|c, g), can be interpreted as the hindsight distribution (Harutyunyan et al., 2019) described in Section 6.4. Their development has a similar pattern to that of hindsight methods and progressively generalises to more complex settings, such as online learning (Zheng et al., 2022) and more general outcomes (Furuta et al., 2022). In practice, these two trends converge together to model the likelihood of actions, states and rewards, which hindsight methods call the *hindsight distribution*. Yet, this set of methods would benefit from a better connection to RL theory. This has been the case for hindsight methods, which leverage notions from causality and the policy gradient theorem (Sutton & Barto, 2018) to achieve better experimental results (Mesnard et al., 2021). For the same reasons explained for hindsight methods in Section 6.4, these methods improve CA when the MDP has low density and the action influence is low (see Table 4). Nevertheless, sequence modelling remains a promising direction for CA, especially for its ability to scale to large datasets (Reed et al., 2022). It is not clear how these methods position themselves with respect to the CA challenges described in Section 5, due to the lack of experimentation on tasks that explicitly stress the agent's ability to assign credit.
However, given their vicinity to future-conditioned methods, they bear some of the same advantages and also share some limitations. In particular, for their ability to define outcomes in hindsight, regardless of an objective learning signal, they bode well in tasks with low action influence.

## 6.6 Planning And Learning Backwards

The methods in this category extend CA to potential predecessor decisions that have not been taken, but could have led to the same outcome (Chelu et al., 2020). The main intuition is that, in environments with low action influence, highly influential actions are rare, and when a goal is achieved the agent should use that event to extract as much information as possible to assign credit to relevant decisions. We divide the section into two major sub-categories, depending on whether the agent identifies predecessor states by planning with an inverse model, or by learning relevant statistics without it.

## 6.6.1 Planning Backwards

**Recall traces** (Goyal et al., 2019) combine model-free updates from Section 6.1 with learning a backward model of the environment. A backward model µ−1(st−1|St = s, At−1 = a) describes the probability of a state st−1 being the predecessor of another state s, given that the action a was taken. The previous action is sampled from a *backward policy*, πb(at−1|st), and the predecessor state from a *backward dynamics* model. By autoregressively sampling from the backward policy and dynamics, the agent can cross the MDP backwards, starting from a final state, sT, up until a starting state, s0, to produce a new trajectory, called a *recall trace*. This allows the agent to collect experience that always leads to a certain state, sT, but that does so from different starting points, discovering multiple pathways to the same goal. Formally, the agent alternates between steps of GPI via model-free updates and steps of behaviour cloning on trajectories collected via the backward model. Trajectories are reversed to match the forward arrow of time before cloning. This is a key step towards solving the CAP, as it allows propagating credit to decisions that have not been taken but could have led to the same outcome, without interacting with the environment directly. Recall traces measure the influence of an action by its q-value, but differ from any other method using the same action influence because the contextual data is produced via backward crossing. The goal is to maximise the expected returns. The same paradigm has been presented in a concurrent work (Edwards et al., 2018) as Forward-Backward RL (FBRL). The benefits of a backward model have also been further investigated in other studies. Wang et al. (2021) investigate the problem in offline settings, and show that backward models enable better generalisation than forward ones. van Hasselt et al. (2019) provide empirical evidence suggesting that assigning credit from hypothetical transitions, that is, via planning, improves the overall efficiency in control problems. Chelu et al. (2020) and van Hasselt et al. (2019) further show that backward planning provides even greater benefits than forward planning when the state-transition dynamics are stochastic.

## 6.6.2 Learning Predecessors

**Expected Eligibility Traces (ET(λ))** (van Hasselt et al., 2021) provide a model-free alternative to backward planning that assigns credit to potential predecessor decisions of the outcome: decisions that have been taken in the past but not in the last episode.
The main idea is to weight the action value by its expected eligibility trace, that is, the instantaneous trace (see Section 6.1), but in expectation over the random trajectory, defined by the policy and the state-transition dynamics. The Deep RL implementation of ET(λ) considers the expected trace upon the action value representation - usually the last layer of a neural network value approximator. As for other ET algorithms, ET(λ) measures action influence using the q-value of the decision and encodes the information of the trace in the parameters of the function approximator. In this case, the authors interpret the value network as a composition of a non-linear representation function φ(s) and a linear value function v(s) = w⊤φ(s). The expected trace e(s) = Eφ(s) is then the result of applying a second linear operator E to the representation. e(s) is then trained to minimise the expected ℓ2 norm between the current estimation of e(s) and the instantaneous trace.

The methods in this section assign credit by considering the effects of decisions that have not been taken, but could have led to the same outcome. The intuition behind them is that, in tasks where the action influence is low due to low MDP density, creditable actions are rare findings. When this happens, the agent can use that occurrence to extract as much information as possible from it. One set of methods does so by learning inverse models of the state-transition dynamics and walking backwards from the outcome. Chelu et al. (2020) and van Hasselt et al. (2019) further analyse the conditions in which backward planning is beneficial. Another set of methods exploits the idea of eligibility traces and keeps a measure of the marginal state-action probability to assign credit to actions that could have led to the same outcome. Overall, these methods are designed to thrive in tasks where the action influence is low. Also, for their ability to start from a high-value state, backward planning methods can find a higher number of optimal transpositions, and therefore provide a less biased estimate of the credit of a state-action pair.

## 6.7 Meta-Learning Proxies For Credit

The methods in this category aim to meta-learn key hyperparameters of canonical TD methods. In fact, RL methods are often brittle to the choice of hyperparameters, for example, the number of look-ahead steps in bootstrapping, what discount factor to use, or meta-parameters specific to the method at hand. How to select these meta-parameters is a delicate balance that depends on the task, the algorithm, and the objective of the agent. For this reason, it is sometimes difficult to analyse them using the usual framework, and we present them differently, by describing their main idea and the way they are implemented in Deep RL.

**Meta-Gradient (MG) RL** (Xu et al., 2018) remarks on how different measures of action influence impact the performance on control problems, and proposes to answer the question: "Among the most common TD targets, which one results in the best performance?". The method interprets the target as a parametric, differentiable function that can be used and modified by the agent to guide its behaviour to achieve the highest returns. In particular, *Meta-Gradients* consider the λ-return (Sutton, 1988) target, for it can generalise the choice of many targets (Schulman et al., 2016). It then learns its meta-parameters: the bootstrapping parameter λ and the discount factor γ.
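As a reference for the target being meta-learnt, the following is a minimal sketch of the λ-return computed backwards with the standard recursion $G^{\lambda}_{t} = r_t + \gamma\left[(1-\lambda)\,v(s_{t+1}) + \lambda\, G^{\lambda}_{t+1}\right]$; the array layout is an illustrative assumption. Note that setting λ = 0 recovers one-step TD targets, while λ = 1 recovers Monte-Carlo returns, which is what makes (λ, γ) a useful knob to meta-learn.

```python
import numpy as np

def lambda_returns(rewards, values, gamma, lam):
    """Backward computation of the lambda-return.

    rewards[t] = r_t; values[t] = v(s_{t+1}), the bootstrap estimates.
    """
    T = len(rewards)
    g = np.zeros(T)
    g_next = values[-1]                  # bootstrap from the final state
    for t in reversed(range(T)):
        g[t] = rewards[t] + gamma * ((1 - lam) * values[t] + lam * g_next)
        g_next = g[t]
    return g

print(lambda_returns([0.0, 0.0, 1.0], [0.5, 0.5, 0.0], gamma=0.99, lam=0.95))
```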
The connection between MG and CA is that different pairs of meta-parameters evaluate actions differently. For example, changing the discount factor can move the focus of the assignment from early to late actions, with effects on policy improvements (Xu et al., 2018). In fact, adapting and learning the meta-parameters online effectively corresponds to meta-learning a measure of action influence, and profoundly affects credit. Meta-learning credit assignment strategies has been further extended to distributional (Yin et al., 2023) and continual (Zheng et al., 2020) settings. Badia et al. (2020) investigated the effects of meta-learning the discount factor and the exploration rate to balance out short- and long-term rewards. Overall, these methods assign credit to actions by applying canonical TD learning algorithms with a meta-learnt measure of action influence. The goal can come in the form of an update target (Xu et al., 2018; Zheng et al., 2018; Xu et al., 2020), a full return distribution (Yin et al., 2023), or a reward function (Zheng et al., 2020). This allows agents to adapt their influence function online, especially improving in conditions of high MDP depth.

## 7 Evaluating Credit

Just as accurate evaluation is fundamental for RL agents to improve their policy, an accurate evaluation of a CA method is fundamental for CA research to monitor if and how a method is advancing the field. The aim of this section is to survey the state of the art in evaluating a CA method. We discuss the main components of the evaluation procedure: the performance metrics, the tasks, and the evaluation protocols.

## 7.1 Metrics

We categorise existing metrics to evaluate a CA method into two main classes: (a) the metrics that are already used for control problems, which mostly aim to assess the agent's ability to make optimal decisions, but do not explicitly measure the accuracy of the action influence; and (b) the metrics that target the quality of an assignment directly, which usually aggregate metrics throughout the RL training procedure. We now proceed to describe the two classes of metrics.

## 7.1.1 Metrics Borrowed From Control

**Bias, variance and contraction rate.** The first and most intuitive proxy to assess the quality of a credit assignment method is its theoretical performance in suitable *control* problems: the bias, variance, and contraction rate of the policy improvement operator described in Rowland et al. (2020). Notice that these metrics are not formally defined for all the methods, either because some variables cannot be accessed or because the operators they act on are not formally defined for the method in question. For the evaluation operator described in Equation (2), we can specify these quantities as follows.

$$\Gamma=\sup_{s\in\mathcal{S}}\frac{||\mathcal{T}V^{\pi}(s)-\mathcal{T}V^{\prime\pi}(s)||_{\infty}}{||V^{\pi}(s)-V^{\prime\pi}(s)||_{\infty}}\tag{28}$$

is the contraction rate and describes how fast the assignment converges to its fixed point, if it does so, and thus how efficient it is. Here V π(s) and V ′π(s) are two estimates of the state-value, which highlights that this set of metrics is not suitable for evaluating methods that use other measures of action influence.
If T is contractive, then Γ < 1 for all V π and V ′π, and there exists a fixed-point bias of T given by:

$$\xi=||V^{\pi}(s)-\hat{V}^{\pi}(s)||_{2},\tag{29}$$

where Vˆ π(s) is the true, unique fixed point of T , whose existence is guaranteed by Γ < 1. For every evaluation operator T , there is an update rule Λ : R|S| × H → R|S| that takes as input the current estimation of the state-value function and a trajectory, and outputs the updated function. Λ has a variance:

$$\nu=\mathbb{E}_{\mu,\pi}[||\Lambda[V(s),D]-\mathcal{T}V(s)||_{2}^{2}].\tag{30}$$

These three quantities are usually in a trade-off (Rowland et al., 2020). Indeed, many (if not all) studies on credit assignment (Hung et al., 2019; Mesnard et al., 2021; Ren et al., 2022; Raposo et al., 2021) report the empirical return and its variance. Because the contraction rate is often harder to calculate, an alternative metric is the time-to-performance, which evaluates the number of interactions necessary to reach a given performance. These mostly aim at showing improvements in sample efficiency and/or asymptotic performance. While useful, this is often not enough to assess the quality of credit assignment, as superior returns can be the result of better exploration, better optimisation, better representation learning, luck (as per the stochasticity of the environment dynamics), or of a combination of such factors. Using empirical returns makes the evaluation method empirically viable for any measure of action influence described in Section 4, even if these metrics are not formally defined for them. Nonetheless, when the only difference between two RL algorithms lies in how credit is assigned, and this is not confounded by the aforementioned factors, it is generally safe to attribute improvements to superior credit, given that the improvements are statistically significant (Henderson et al., 2018; Agarwal et al., 2021).

**Task completion rate.** A related, but more precise, metric is the success rate. Given a budget of trials, the success rate measures the frequency of task completion, that is, the number of times the task was solved over the total number of episodes: G = |C∗|/|C|. Here, C∗ is the set of optimal histories experienced by the agent, and C is the full set of histories used to train it. Considering success rates instead of bias, variance, and their trade-off is useful as it alleviates another issue of these performance metrics: there is no distinction between easy-to-optimise rewards and hard-to-optimise rewards. This is evident in the key-to-door task with distractors (Hung et al., 2019), which we describe in detail later in Section 7.2. Due to the stochasticity from the apple phase (the distractors), it is generally impossible to distinguish performance on apple picking (easy-to-optimise rewards) from performance on door opening (hard-to-optimise rewards that superior credit assignment methods usually obtain). Furthermore, the minimum success rate Gmin could also be an effective metric to disentangle the effects of exploration from those of CA, as discussed in Section 5.4, despite never being employed for that purpose. However, notice that this clarity in reporting credit comes at a cost. In fact, even if these kinds of metrics are more precise than performance metrics, they require expert knowledge of the task. They often suffer from the same confounders as bias, variance, and contraction rate.
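A minimal sketch of the two success-rate metrics just mentioned, assuming the evaluator can flag each episode as solved or not (the flags and window size are illustrative assumptions):

```python
def success_rate(solved_flags):
    """G = |C*| / |C|: fraction of episodes in which the task was solved."""
    return sum(solved_flags) / len(solved_flags)

def min_success_rate(solved_flags, window=100):
    """Minimum success rate over a sliding window, a possible reading of
    G_min as a proxy to disentangle exploration effects from CA."""
    return min(success_rate(solved_flags[i:i + window])
               for i in range(len(solved_flags) - window + 1))

episodes = [False] * 60 + [True] * 40
print(success_rate(episodes))           # 0.4
print(min_success_rate(episodes, 50))   # 0.0
```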
## 7.1.2 Bespoke Metrics For Credit Assignments

We now review metrics that measure the quality of individual credit assignments, that is, how well actions are mapped to corresponding outcomes, or how well outcomes are redistributed to past actions. Usually, these metrics are calculated in hindsight, after outcomes have been observed.

Using knowledge about the causal structure. Suppose we have expert knowledge about the causal structure of the task at hand, i.e., which actions cause which outcomes. This is often the case, since as humans we frequently have an instinctive understanding of the tasks agents tackle. Given an observed outcome from an agent's trajectory, one can then compare credit assignments, which approximate such cause-and-effect relationships, to the ground truth represented by our causal model of the task. We give several examples from the literature. In Delayed Catch, Raposo et al. (2021) assess whether credit is assigned to the actions that lead to catches or to the end-of-episode reward, since they know that these actions cause the experienced rewards. They do the same on the Atari game Skiing, a more complex task that shares the property that only a subset of the agent's actions yield rewards. For example, in Skiing, going between ski poles is the only thing that grants rewards (with delay) at the end of an episode. Ferret et al. (2021a) adopt a similar approach and look at the influence attributed to the actions responsible for activating the triggers in the Triggers environment, which alone contribute to the end-of-episode reward. Similarly, Arjona-Medina et al. (2019) look at the redistributions of RUDDER on several tasks, including the Atari 2600 game Bowling.

Counterfactual simulation. A natural approach, which is nonetheless seldom explored in the literature, is counterfactual simulation. On a high level, it consists in asking what would have happened if the actions that are credited for particular outcomes had been replaced by another action. This is close to the notion of hindsight advantage.
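The following minimal sketch illustrates the idea, assuming a simulator that can be copied and reset to an arbitrary state; the `set_state` and `step` methods are hypothetical placeholders for whatever interface such a simulator exposes.

```python
import copy

def counterfactual_effect(env, state, credited_action, alternative_action,
                          policy, horizon, n_rollouts=100):
    """Estimate how much a credited action mattered by replacing it with an
    alternative and re-simulating from the same state. Assumes a simulator
    with hypothetical set_state(state) and step(action) -> (obs, r, done)."""

    def mean_return(first_action):
        total = 0.0
        for _ in range(n_rollouts):
            sim = copy.deepcopy(env)
            sim.set_state(state)          # reset to the decision point
            action, ret = first_action, 0.0
            for _ in range(horizon):
                obs, reward, done = sim.step(action)
                ret += reward
                if done:
                    break
                action = policy(obs)      # subsequent actions follow pi
            total += ret
        return total / n_rollouts

    # A large gap suggests the credited action truly influenced the outcome.
    return mean_return(credited_action) - mean_return(alternative_action)
```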
Comparing to actual values of the estimated quantity. This only applies to methods whose credit assignments are mathematically grounded, in the sense that they are empirical approximations of well-defined quantities. In general, one can leverage extra compute and the ability to reset a simulator to arbitrary states to obtain accurate estimations of the underlying quantity, and compare it to the actual, resource-constrained quantity estimated from experience.

## 7.2 Tasks

In what follows, we present the environments that we think are most relevant to evaluate credit assignment methods and individual credit assignments. The most significant tasks are those that present all three challenges to assigning credit: delayed rewards, transpositions, and sparsity of the influence. This often corresponds to experiments featuring reward delay, high marginal entropy of the reward, and partial observability. To benchmark explicit credit assignment methods, we additionally need to be able to recover the ground-truth influence of actions w.r.t. given outcomes, or we can use our knowledge of the environment to develop more subjective measures.

## 7.2.1 Diagnostic Tasks

Diagnostic tasks are useful as sanity checks for RL agents and present the advantage of running rather quickly, compared to complex environments with visual input that may require several million samples before agents manage to solve the task at hand. Notice that these tasks may not be representative of the performance of a method at scale, but they provide a useful signal to diagnose the behaviour of an algorithm with respect to the challenges described in Section 5. Sometimes, the same environment can serve both as a diagnostic task and as an experiment at scale, simply by changing the observation or action space. We first present chain-like environments, which can be represented graphically by a chain (environments a to c), and then a grid-like environment (environment d), which has more natural grid representations for both the environment and the state. A minimal implementation of environment (a) follows the list.

a) Aliasing chain. The aliasing chain (introduced in Harutyunyan et al. (2019) as Delayed Effect) is an environment whose outcome depends only on the first action. A series of perceptually aliased and zero-reward states follow this first action, and an outcome is observed at the end of the chain (+1 or −1 depending on the binary first action).

b) Discounting chain. The discounting chain (Osband et al., 2020) is an environment in which the first action leads to a chain of states of variable length with inconsequential decisions, followed by a final reward that is either 1 or 1 + ϵ. It highlights issues with the discounting horizon.

c) Ambiguous bandit. The ambiguous bandit (Harutyunyan et al., 2019) is a variant of a two-armed bandit problem. The agent is given two actions: one that transitions to a state with a slightly more advantageous Gaussian distribution over rewards with probability 1 − ϵ, and another that does so with probability ϵ.

d) Triggers. Triggers (Ferret et al., 2021a) is a family of environments and corresponding discrete control tasks suited for the quantitative analysis of the credit assignment abilities of RL algorithms. Each environment is a bounded, square-shaped 2D gridworld where the agent collects rewards that are conditioned on the previous activation of all the triggers on the map. Collecting all triggers turns the value of rewards from negative to positive, and this knowledge can be exploited to assess proper credit assignment: the actions of collecting triggers are the natural ones to be credited. The environments are procedurally generated: when requesting a new environment, a random layout is drawn according to the input specifications.
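As an illustration, here is a minimal sketch of environment (a), the aliasing chain; the `reset`/`step` interface and the observation encoding are our own simplifying assumptions rather than the original implementation.

```python
class AliasingChain:
    """Minimal sketch of the aliasing chain (Harutyunyan et al., 2019):
    only the first, binary action matters; it is followed by `length`
    perceptually aliased, zero-reward states and a terminal outcome of
    +1 or -1 depending on that first action."""

    def __init__(self, length=10):
        self.length = length
        self.reset()

    def reset(self):
        self.t = 0
        self.first_action = None
        return 0                             # distinguishable initial observation

    def step(self, action):
        if self.t == 0:
            self.first_action = action      # the only consequential decision
        self.t += 1
        done = self.t > self.length
        reward = (1.0 if self.first_action == 1 else -1.0) if done else 0.0
        return 1, reward, done              # all later states are aliased

env = AliasingChain(length=10)
env.reset()
_, reward, done = env.step(1)               # the decisive first action
while not done:
    _, reward, done = env.step(0)           # inconsequential filler actions
print(reward)                               # 1.0, credited to the first action
```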
## 7.2.2 Tasks At Scale

In the following, we present higher-dimensional benchmarks for agents equipped with credit assignment capabilities.

Atari. The Arcade Learning Environment (Bellemare et al., 2013) (ALE) is an emulator in which RL agents compete to reach the highest scores on 56 classic Atari games. We list the ones we deem interesting for assessing temporal credit assignment due to delayed rewards, which were first highlighted by Arjona-Medina et al. (2019). **Bowling**: like in real-life bowling, the agent must throw a bowling ball at pins, while ideally curving the ball so that it can clear all pins in one throw. The agent experiences rewards with a high delay, at the end of all rolls (between 2 and 4 depending on the number of strikes achieved). **Venture**: the agent must enter a room, collect a treasure and shoot monsters. Shooting monsters only gives rewards after the treasure has been collected, and there is no in-game reward for collecting it. **Seaquest**: the agent controls a submarine and must sink enemy submarines. To reach higher scores, the agent has to additionally rescue divers, who only provide reward once the submarine lacks oxygen and surfaces to replenish it. **Solaris**: the agent controls a spaceship that earns points by hunting enemy spaceships. These shooting phases are followed by the choice of the next zone to explore on a high-level map, which conditions future rewards. **Skiing**: the agent controls a skier who has to go between poles while going down the slope. The agent gets no reward until reaching the bottom of the slope, at which time it receives a reward proportional to the number of pairs of poles it went through, which makes for a long-term credit assignment problem.

VizDoom. VizDoom (Kempka et al., 2016) is a suite of partially observable 3D tasks based on the classic Doom video game, a first-person shooter. As mentioned before, it is an interesting sandbox for credit assignment because it optionally provides high-level information such as labelled game objects, depth, and a top-view minimap representation, all of which can be used by algorithms to approximate optimally efficient credit assignment.

BoxWorld. BoxWorld (Zambaldi et al., 2018) is a family of environments that shares similarities with Triggers, while being more challenging. Environments are also procedurally generated, square-shaped 2D gridworlds with discrete controls. The goal is to reach a gem, which requires going through a series of boxes protected by locks that can only be opened with keys of the same colour, while avoiding distractor boxes. The relations between keys and locks can be used to assess assigned credit, since the completion of the task (as well as intermediate rewards for opening locks) depends on the collection of the right keys.

Sokoban. Sokoban (Racanière et al., 2017) is a family of environments similar to the two previous ones. The agent must push boxes to intended positions on the grid while avoiding dead-end situations (for instance, if a box is stuck against walls on two sides, it cannot be moved anymore). While there is no definite criterion to identify decisive actions, actions that lead to dead-ends are known and can be exploited to assess the quality of credit assignment.

DeepMind Lab. DeepMind Lab (Beattie et al., 2016) (DMLab) is a suite of partially observable 3D tasks with rich visual input.
We identify several tasks that might be of interest to assess credit assignment capabilities, some of which were used in recent work. **Keys-Doors**: the agent navigates to keys that open doors (identified by their shared colour) so that it can get to an absorbing state represented by a cake. Ferret et al. (2021a) consider a harder variant of the task where collecting keys is no longer directly rewarded and feedback is delayed until opening doors. **Keys-Apples-Doors**: Hung et al. (2019) consider an extended version of the previous task. The agent still has to collect a key, but after a fixed duration a distractor phase begins in which it can only collect small rewards from apples; finally, the agent must find and open a door with the key it got in the initial phase. To solve the task, the agent has to learn the correlation or causation link between the key and the door, which is made hard by the extended temporal distance between the two events and by the distractor phase. **Deferred Effects**: the agent navigates between two rooms, the first of which contains apples that give low rewards, while the other contains cakes that give high rewards but is entirely dark. The agent can turn the light on by reaching the switch in the first room, but it gets an immediate negative reward for doing so. In the end, the most successful policy is to activate the switch regardless of the immediate cost, so that a maximum number of cakes can be collected in the second room before the time limit.

## 7.3 Protocol

Online evaluation. The most standard approach is to evaluate the quality of credit assignment methods and of individual credit assignments along the RL training procedure. As the policy changes, the credit assignments change, since the effect of an action depends on subsequent actions (which are dictated by the policy). One can dynamically track the quality of credit assignments and that of the credit assignment method using the metrics developed in the previous section. For the credit assignment method, since evaluation requires a dataset of interactions, one can consider using the most recent trajectories produced by the agent. An advantage of this approach is that it allows evaluating the evolution of the credit assignment quality along the RL training, with an evolving policy and the resulting dynamics. Also, since the goal of credit assignment is to help turn feedback into improvements, it makes sense to evaluate it in the context of said improvements. While natural, online evaluation means one has little control over the data distribution of the evaluation. This is problematic because it is generally hard to disentangle credit quality from the nature of the trajectories it is evaluated on. A corollary is that outcomes that necessitate precise exploration (which can be the outcomes for which agents would benefit most from accurate credit assignment) might not be explored.

Offline evaluation. An alternative is to consider offline evaluation. It requires a dataset of interactions, collected either before or during the RL training. Credit assignments and the credit assignment method then use the parameters learned during the RL training while being evaluated on the offline data. As the policy in the offline data is generally not the latest policy from the online training, offline evaluation is better suited for policy-conditioned credit assignment or (to some extent) trajectory-conditioned credit assignment. Indeed, other forms of credit assignment are specific to a single policy, and evaluating these on data generated from another policy would not be accurate. An important advantage of offline evaluation is that it alleviates the impact of exploration, as one controls the data distribution credit is evaluated on. A minimal sketch of such an offline evaluation loop follows.
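The loop below scores a learned influence model against a reference influence on a fixed dataset; the trajectory attributes (`outcome`, `transitions`) and both influence callables are hypothetical placeholders, not an interface from the literature.

```python
import numpy as np

def offline_credit_evaluation(dataset, k_hat, k_ref):
    """Evaluate credit assignments on a fixed, pre-collected dataset.
    `k_hat(s, a, g)` is the learned influence; `k_ref(s, a, g)` is a
    reference influence (ground truth or a high-fidelity estimate).
    Fixing the dataset fixes the evaluation distribution, decoupling
    the quality of credit from the quality of exploration."""
    errors = []
    for trajectory in dataset:            # hypothetical trajectory objects
        goal = trajectory.outcome         # the outcome, known in hindsight
        for s, a in trajectory.transitions:
            errors.append(abs(k_hat(s, a, goal) - k_ref(s, a, goal)))
    return float(np.mean(errors))
```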
## 8 Closing, Discussion And Open Challenges

The CAP is the problem of approximating the influence of an action from a finite amount of experience, and solving it is of critical importance to deploy RL agents into the real world that are effective, general, safe and interpretable. However, there is a misalignment in the current literature between what credit means in words and how it is formalised. In this survey, we laid the basis to reconcile this gap by reviewing the state of the art of the temporal CAP in Deep RL, focusing on three major questions.

## 8.1 Summary

Overall, we observed three major fronts of development around the CAP.

The first concerns the problem of *how to quantify action influence* (Q1.). We addressed Q1. in Section 4, and analysed the quantities that existing works use to represent the influence of an action. In Section 4.1 we unified these measures of action influence under the *assignment* definition. In Sections 4.3 and 4.5 we showed that the existing literature agrees on an intuition of credit as a measure of the influence of an action over an outcome, but that this intuition does not translate well into mathematics, and none of the current quantities fully aligns with the purpose. As a consequence, we proposed a set of principles that we suggest a measure of action influence should respect in order to represent credit.

The second front aims to address the question of *how to learn action influence from experience* and to describe the existing *methods* to assign credit. In Section 5 we looked at the challenges that arise from learning these measures of action influence and, together with Section 6, answered Q2. We first reviewed the most common obstacles to learning already identified in the literature and realigned them to our newly developed formalism. We identified three dimensions of an MDP (depth, breadth, and density) and described pathological conditions on each of them that hinder CA. In Section 6 we defined a CA method as an algorithm whose aim is to approximate a measure of action influence from a finite amount of experience. We categorised methods into those that: *(i)* use temporal contiguity as a proxy for causal influence; *(ii)* decompose the total return into smaller per-timestep contributions; *(iii)* condition the present on information about the future using the idea of hindsight; *(iv)* use sequence modelling and represent action influence as the likelihood of an action to follow a state and predict an outcome; *(v)* learn to imagine backward transitions that always start at a key state and propagate back to the states that could generate them; *(vi)* meta-learn action influence measures.

Finally, the third research front deals with *how to evaluate quantities and methods* to assign credit and aims to provide an unbiased estimation of the progress in the field. In Section 7 we addressed Q3. and analysed how current methods evaluate their performance and how we can monitor future advancements. We discussed the resources that each benchmark has to offer and their limitations. For example, diagnostic benchmarks do not isolate the specific CAP challenges identified in Section 5: delayed effects, transpositions, and sparsity.
Benchmarks at scale often cannot disentangle the CAP from the exploration problem, and it becomes hard to understand whether a method is advancing one problem or the other.

## 8.2 Discussion And Open Challenges

As this survey suggests, the work in the field is now fervent and the number of studies is growing rapidly, with many works showing substantial gains in control problems only by advancing, to the best of our current knowledge, on the CAP alone (Bellemare et al., 2017; van Hasselt et al., 2021; Edwards et al., 2018; Mesnard et al., 2021; 2023). We observed that the take-off of CA research within the broader area of RL research is only recent. The most probable reason for this is that the tasks considered in earlier Deep RL research were explicitly designed to be simple from the CA point of view. Using tasks where assigning credit is hard would have obfuscated (and probably still does, e.g., Küttler et al. (2020)) other problems that it was necessary to solve before solving the CAP. For example, adding the CAP on top of scaling RL to high-dimensional observations (Arulkumaran et al., 2017) or dealing with large action spaces (Dulac-Arnold et al., 2015; van Hasselt & Wiering, 2009) would, most likely, have concealed any evidence of progress on the underlying challenges. This is also why CA methods do not usually shine in classical benchmarks (Bellemare et al., 2013), and peer reviews are often hard on these works. Today, thanks to the advancements in other areas of RL, the field is in a state where improving on the CAP is a compelling challenge. Yet, the CAP still holds open questions, and much discussion is still required before the problem can be considered solved. In particular, the following observations describe our positions with respect to this survey.

Aligning future works to a common problem definition. The lack of a review since the problem's conception (Minsky, 1961) and the rapid advancements have produced a fragmented landscape of definitions for action influence, an ambiguity in the meaning of *credit assignment*, a misalignment between the general intuition and its practical quantification, and a general lack of coherence in the principal directions of the works. While this diversity is beneficial for the diversification of the research, it is also detrimental to comparing the methods. Future works aiming to propose a new CA method should clarify these preliminary concepts. Answers to "What is the chosen measure of action influence? Why that choice? What is the method for learning it from experience? How is it evaluated?" would be a good starting point.

Characterising credit. "*What is the minimum set of properties that a measure of action influence should respect to inform control? What are the more desirable ones?*" This question remains unanswered, with some ideas in Ferret (2022, Chapter 4), and we still need to understand what characterises a proper measure of credit.

Causality. The relationship between CA and causality is underexplored, except in a small subset of works (Mesnard et al., 2021; Pitis et al., 2020; Buesing et al., 2019). The literature lacks a clear and complete formalism that casts the CAP as a problem of causal discovery. Investigating this connection and formalising a measure of action influence that is also a satisfactory measure of causal influence would help better understand the effects of choosing one measure of action influence over another.
Overall, we need to better understand the connections between CA and causality: what happens when credit is a strict measure of causal influence? How do current algorithms perform with respect to this measure? Can we devise an algorithm that exploits a causal measure of influence?

Optimal credit. Many works refer to *optimal credit* or to *assigning credit optimally*, but it is unclear what that formally means. "*When is credit optimal?*" remains unanswered.

Combining benefits from different methods. Methods conditioning on the future currently show superior results compared to methods in other categories. These promising methods include hindsight (Section 6.4), sequence modelling (Section 6.5) and backward learning and planning methods (Section 6.6). However, while hindsight methods are advancing fast, sequence modelling and backward planning methods are under-investigated. We need a better understanding of the connection between these two worlds, which could potentially lead to even better ways of assigning credit. Could there be a connection between these methods? What are the effects of combining backward planning methods with more satisfactory measures of influence, for example, with CCA?

Benchmarking. The benchmarks currently used to review a CA method (Chevalier-Boisvert et al., 2018; Bellemare et al., 2013; Samvelyan et al., 2021) (see Section 7.2) are often borrowed from *control* problems, leading to the issues discussed in Section 7 and recalled in the summary above. On a complementary note, CA methods are often evaluated in actor-critic settings (Harutyunyan et al., 2019; Mesnard et al., 2021), which adds layers of complexity that are not necessary. This, together with the inclusion of other unnecessary accessories, can obfuscate the contributions of CA to the overall RL success. As a consequence, the literature lacks a fair comparison among all the methods, and it is not clear how the methods in Section 6 behave with respect to each other on the same set of benchmarks. This lack of understanding of the state of the art leads to a poor signal to direct future research. We call for a new, community-driven, single set of benchmarks that disentangles the CAP from the exploration problem and isolates the challenges described in Section 5. How can we disentangle the CAP from the exploration problem? How can we isolate each challenge? Shall we evaluate in value-based settings, and would the ranking between the methods be consistent with an evaluation in actor-critic settings? While we introduced some ideas in Section 5.4, these questions are still unanswered.

Reproducibility. Many works propose open-source code, but experiments are often not reproducible, and their code is hard to read, hard to run, and hard to understand. Making code public is not enough: code cannot be considered open-source if it is not easily usable. Beyond being public, open-source code should be accessible, documented, easy to run, and accompanied by continuous support for questions and issues that may arise from its later usage. We need future research to acquire more rigour in the way code accompanying scientific publications is published, presented, and supported. In particular, we need *(i)* a formalised, shared, and broadly agreed standard, which is not necessarily a new standard; *(ii)* new studies to adhere to this standard; and *(iii)* publishers to review the accompanying code at least as thoroughly as they review scientific manuscripts.
Monitoring advancements. The community lacks a database containing comprehensive, curated results for each baseline. Currently, baselines are often re-run when a new method is proposed. This can lead to unfair comparisons, both because the baselines could be suboptimal (e.g., in the choice of hyperparameters or the training regime) and because their reproduction could be unfaithful (e.g., in translating the mathematics into code). When optimality and faithfulness are not guaranteed, it is not clear whether a new method is advancing the field because it assigns credit better or because of misaligned baselines. We call for a new, community-driven database holding the latest evaluations of each baseline. The evaluation should be driven by the authors, and the authors should be responsible for its results. Once such a database is available, new methods should be tested against the same benchmarks and should not re-run previous baselines, but rather refer to the curated results stored in the database.

Peer reviewing CA works. As a consequence of the issues identified above, and because CA methods do not usually shine in classical benchmarks (Bellemare et al., 2013), peer reviewers often do not have the tools to capture the novelties of a method and its improvements. On one hand, we need a clear evaluation protocol, including a shared benchmark and leaderboard, to facilitate peer reviews. On the other hand, peer reviews must steer away from tools and metrics designed for control, and use those appropriate for the CAP instead.

Lack of priors and foundation models. Most CA methods start to learn credit from scratch, without any prior knowledge beyond that held in the initialisation of their underlying networks. This represents a major obstacle to making CA efficient because, at each new learning phase, even elementary associations must be learned from scratch. In contrast, when facing a new task, humans often rely on their prior knowledge to determine the influence of an action. In the current state of the art, the use of priors to assign credit more efficiently is overlooked. Vice versa, the relevance of the CAP and the use of more advanced methods for CA (Mesnard et al., 2021; 2023; Edwards et al., 2018; van Hasselt et al., 2021) is often underestimated in the development of foundation models for RL.

## 8.3 Conclusions

To conclude, in this survey we have set out to formally settle the CAP in Deep RL. The resulting material does not aim to solve the CAP, but rather proposes a unifying framework that enables a fair comparison among the methods that assign credit and organises existing material to expedite the starting stages of new studies. Where the literature lacks answers, we identify the gaps and organise them into a list of challenges. We kindly encourage the research community to join in solving these challenges in a shared effort, and we hope that the material collected in this manuscript can be a helpful resource both to inform future advancements in the field and to inspire new applications in the real world.

## References

David Abel, Cameron Allen, Dilip Arumugam, D Ellis Hershkowitz, Michael L Littman, and Lawson LS Wong. Bad-policy density: A measure of reinforcement learning hardness. *arXiv preprint arXiv:2110.03424*, 2021a.

David Abel, Will Dabney, Anna Harutyunyan, Mark K Ho, Michael Littman, Doina Precup, and Satinder Singh. On the expressivity of markov reward. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021b.
David Abel, Andre Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, and Satinder Singh. A definition of continual reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=ZZS9WEWYbD. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. *Advances in neural information processing* systems, 34:29304–29320, 2021. Mostafa Al-Emran. Hierarchical reinforcement learning: a survey. *International journal of computing and* digital systems, 4(02), 2015. Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup. A survey of exploration methods in reinforcement learning. *arXiv preprint arXiv:2109.00157*, 2021. Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In *Advances* in neural information processing systems, volume 30, 2017. URL https://proceedings.neurips.cc/ paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf. Thomas Anthony, Tom Eccles, Andrea Tacchetti, János Kramár, Ian Gemp, Thomas Hudson, Nicolas Porcel, Marc Lanctot, Julien Pérolat, Richard Everett, et al. Learning to play no-press diplomacy with best response policy iteration. *Advances in Neural Information Processing Systems*, 33:17987–18003, 2020. Jose A Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 32, 2019. Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. Deep reinforcement learning: A brief survey. *IEEE Signal Processing Magazine*, 34(6):26–38, 2017. Kai Arulkumaran, Dylan R Ashley, Jürgen Schmidhuber, and Rupesh K Srivastava. All you need is supervised learning: From imitation learning to meta-rl with upside down rl. *arXiv preprint arXiv:2202.11960*, 2022. Dilip Arumugam, Peter Henderson, and Pierre-Luc Bacon. An information-theoretic perspective on credit assignment in reinforcement learning. *CoRR*, abs/2103.06224, 2021. URL https://arxiv.org/abs/2103. 06224. Dylan R Ashley, Kai Arulkumaran, Jürgen Schmidhuber, and Rupesh Kumar Srivastava. Learning relative return policies with upside-down reinforcement learning. *arXiv preprint arXiv:2202.12742*, 2022. Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Proceedings of the AAAI conference on artificial intelligence, volume 31, 2017. Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Daniel Guo, and Charles Blundell. Agent57: Outperforming the atari human benchmark. In *International Conference on Machine Learning*, pp. 507–517. PMLR, 2020. Akhil Bagaria and George Konidaris. Option discovery using deep skill chaining. In International Conference on Learning Representations, 2019. Leemon C III Baird. *Reinforcement Learning Through Gradient Descent*. PhD thesis, US Air Force Academy, 1999. David Balduzzi, Hastagiri Vanchinathan, and Joachim Buhmann. Kickback cuts backprop's red-tape: Biologically plausible credit assignment in neural networks. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 29, 2015. Elias Bareinboim, Juan D. Correa, Duligur Ibeling, and Thomas Icard. 
*On Pearl's Hierarchy and the Foundations of Causal Inference*, pp. 507–556. Association for Computing Machinery, New York, NY, USA, 1 edition, 2022. ISBN 9781450395861. URL https://doi.org/10.1145/3501714.3501743.

Andrew G Barto. Intrinsic motivation and reinforcement learning. *Intrinsically motivated learning in natural and artificial systems*, pp. 17–47, 2013.

Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. *Discrete event dynamic systems*, 13(1):41–77, 2003.

Andrew G Barto, Satinder Singh, Nuttapong Chentanez, et al. Intrinsically motivated learning of hierarchical collections of skills. In *Proceedings of the 3rd International Conference on Development and Learning*, volume 112, pp. 19. Citeseer, 2004.

Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. Deepmind lab. *arXiv preprint arXiv:1612.03801*, 2016.

Vahid Behzadan and William Hsu. Adversarial exploitation of policy imitation. *arXiv preprint arXiv:1906.01121*, 2019.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research*, 47:253–279, 2013.

Marc G Bellemare, Georg Ostrovski, Arthur Guez, Philip Thomas, and Rémi Munos. Increasing the action gap: New operators for reinforcement learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 30, 2016.

Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In *International Conference on Machine Learning*, pp. 449–458. Proceedings of Machine Learning Research, 2017.

Marc G Bellemare, Salvatore Candido, Pablo Samuel Castro, Jun Gong, Marlos C Machado, Subhodeep Moitra, Sameera S Ponda, and Ziyu Wang. Autonomous navigation of stratospheric balloons using reinforcement learning. *Nature*, 588(7836):77–82, 2020.

Michael Bowling, John D Martin, David Abel, and Will Dabney. Settling the reward hypothesis. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 3003–3020. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/bowling23a.html.

Lars Buesing, Theophane Weber, Yori Zwols, Nicolas Heess, Sebastien Racaniere, Arthur Guez, and Jean-Baptiste Lespiau. Woulda, coulda, shoulda: Counterfactually-guided policy search. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=BJG0voC9YQ.

Yu-Han Chang, Tracey Ho, and Leslie Kaelbling. All learning is local: Multi-agent learning in global reward games. *Advances in neural information processing systems*, 16, 2003.

Veronica Chelu, Doina Precup, and Hado P van Hasselt. Forethought and hindsight in credit assignment. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 2270–2281. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/18064d61b6f93dab8681a460779b8429-Paper.pdf.

Veronica Chelu, Diana Borsa, Doina Precup, and Hado van Hasselt. Selective credit assignment. *arXiv preprint arXiv:2202.09699*, 2022.

Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch.
Decision transformer: Reinforcement learning via sequence modeling. In Advances in Neural Information Processing Systems, volume 34, pp. 15084–15097, 2021. URL https: //proceedings.neurips.cc/paper/2021/file/7f489f642a0ddb10272b5c31057f0663-Paper.pdf. Zhixin Chen and Mengxiang Lin. Self-imitation learning in sparse reward settings. *arXiv preprint* arXiv:2010.06962, 2020. Nuttapong Chentanez, Andrew Barto, and Satinder Singh. Intrinsically motivated reinforcement learning. Advances in neural information processing systems, 17, 2004. Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment for openai gym. https://github.com/maximecb/gym-minigrid, 2018. Cédric Colas, Tristan Karch, Olivier Sigaud, and Pierre-Yves Oudeyer. Autotelic agents with intrinsically motivated goal-conditioned reinforcement learning: a short survey. *Journal of Artificial Intelligence Research*, 74:1159–1199, 2022. Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. *Nature*, 602(7897):414–419, 2022. Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin. Deep reinforcement learning in large discrete action spaces. *arXiv preprint arXiv:1512.07679*, 2015. Ashley D Edwards, Laura Downs, and James C Davidson. Forward-backward reinforcement learning. arXiv preprint arXiv:1803.10227, 2018. Andrew J Elliot and James W Fryer. The goal construct in psychology. In *Handbook of motivation science*, volume 18, pp. 235–250, 2008. Benjamin Eysenbach and Sergey Levine. Maximum entropy RL (provably) solves some robust RL problems. In *International Conference on Learning Representations*, 2022. Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In *International Conference on Learning Representations*, 2018. Francesco Faccio, Louis Kirsch, and Jürgen Schmidhuber. Parameter-based value functions. In *International* Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=tV6oBfuyLTQ. Amir-massoud Farahmand. Action-gap phenomenon in reinforcement learning. *Advances in Neural Information Processing Systems*, 24, 2011. Johan Ferret. *On Actions that Matter: Credit Assignment and Interpretability in Reinforcement Learning*. PhD thesis, Université de Lille, 2022. Johan Ferret, Raphaël Marinier, Matthieu Geist, and Olivier Pietquin. Self-attentional credit assignment for transfer in reinforcement learning. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20, 2021a. ISBN 9780999241165. Johan Ferret, Olivier Pietquin, and Matthieu Geist. Self-imitation advantage learning. In *AAMAS 2021-20th* International Conference on Autonomous Agents and Multiagent Systems, 2021b. Angelos Filos, Panagiotis Tigkas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, and Yarin Gal. Can autonomous vehicles identify, recover from, and adapt to distribution shifts? In *International Conference* on Machine Learning, pp. 3145–3153. PMLR, 2020. Yannis Flet-Berliac. The promise of hierarchical reinforcement learning. *The Gradient*, 9, 2019. Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. 
Counterfactual multi-agent policy gradients. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018.

Hiroki Furuta, Yutaka Matsuo, and Shixiang Shane Gu. Generalized decision transformer for offline hindsight information matching. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=CAjxVodl_v.

Jim Gao. Machine learning applications for data center optimization, 2014.

Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. *Journal of Machine Learning Research*, 16(42):1437–1480, 2015. URL http://jmlr.org/papers/v16/garcia15a.html.

Matthieu Geist, Bruno Scherrer, et al. Off-policy learning with eligibility traces: a survey. *J. Mach. Learn. Res.*, 15(1):289–333, 2014.

Anirudh Goyal, Philemon Brakel, William Fedus, Soumye Singhal, Timothy Lillicrap, Sergey Levine, Hugo Larochelle, and Yoshua Bengio. Recall traces: Backtracking models for efficient reinforcement learning. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=HygsfnR9Ym.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. *Nature*, 538(7626):471–476, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. *Advances in neural information processing systems*, 28, 2015.

Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, Matthieu Geist, et al. There is no turning back: A self-supervised approach for reversibility-aware reinforcement learning. *Advances in Neural Information Processing Systems*, 34:1898–1911, 2021.

Arthur Guez, Fabio Viola, Theophane Weber, Lars Buesing, Steven Kapturowski, Doina Precup, David Silver, and Nicolas Heess. Value-driven hindsight modelling. *Advances in Neural Information Processing Systems*, 33:12499–12509, 2020.

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In *International conference on machine learning*, pp. 1352–1361. PMLR, 2017.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *International Conference on Machine Learning*, pp. 1861–1870. Proceedings of Machine Learning Research, 2018.

Danijar Hafner, Kuang-Huei Lee, Ian Fischer, and Pieter Abbeel. Deep hierarchical planning from pixels. *Advances in Neural Information Processing Systems*, 35:26091–26104, 2022.

Jean Harb, Tom Schaul, Doina Precup, and Pierre-Luc Bacon. Policy evaluation networks. *arXiv preprint arXiv:2002.11833*, 2020.

Anna Harutyunyan, Peter Vrancx, Pierre-Luc Bacon, Doina Precup, and Ann Nowe. Learning with options that terminate off-policy. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Anna Harutyunyan, Will Dabney, Thomas Mesnard, Mohammad Gheshlaghi Azar, Bilal Piot, Nicolas Heess, Hado P van Hasselt, Gregory Wayne, Satinder Singh, Doina Precup, et al. Hindsight credit assignment. *Advances in neural information processing systems*, 32, 2019.

Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018.
Donald D Hoffman. The interface theory of perception. *Current Directions in Psychological Science*, 25(3): 157–161, 2016. Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. *Advances in Neural Information Processing Systems*, 31, 2018. Ronald A Howard. *Dynamic programming and Markov processes.* John Wiley, 1960. Chia-Chun Hung, Timothy Lillicrap, Josh Abramson, Yan Wu, Mehdi Mirza, Federico Carnevale, Arun Ahuja, and Greg Wayne. Optimizing agent behavior over long time scales by transporting value. Nature Communications, 10(1):5223, 2019. ISSN 2041-1723. doi: 10.1038/s41467-019-13073-w. URL https: //doi.org/10.1038/s41467-019-13073-w. Tommi Jaakkola, Michael Jordan, and Satinder Singh. Convergence of stochastic iterative dynamic programming algorithms. *Advances in neural information processing systems*, 6, 1993. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In *International* Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJ6yPD5xg. Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. In *Advances in Neural Information Processing Systems*, 2021. Dominik Janzing, David Balduzzi, Moritz Grosse-Wentrup, and Bernhard Schölkopf. Quantifying causal influences. *The Annals Of Statistics*, pp. 2324–2358, 2013. Stratton C Jaquette. Markov decision processes with a new optimality criterion: Discrete time. The Annals of Statistics, 1(3):496–505, 1973. Minqi Jiang, Edward Grefenstette, and Tim Rocktäschel. Prioritized level replay. In *International Conference* on Machine Learning, pp. 4940–4950. Proceedings of Machine Learning Research, 2021a. Minqi Jiang, Tim Rocktäschel, and Edward Grefenstette. General intelligence requires rethinking exploration. *Royal Society Open Science*, 10(6):230539, 2023. Ray Jiang, Tom Zahavy, Zhongwen Xu, Adam White, Matteo Hessel, Charles Blundell, and Hado van Hasselt. Emphatic algorithms for deep reinforcement learning. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 5023–5033. PMLR, 18–24 Jul 2021b. URL https://proceedings.mlr. press/v139/jiang21j.html. Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. Recurrent experience replay in distributed reinforcement learning. In *International conference on learning representations*, 2019. Steven Kapturowski, Víctor Campos, Ray Jiang, Nemanja Rakicevic, Hado van Hasselt, Charles Blundell, and Adria Puigdomenech Badia. Human-level atari 200x faster. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=JtC6yOHRoJJ. Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. In *2016 IEEE conference on computational intelligence and games (CIG)*, pp. 1–8. IEEE, 2016. Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktäschel. A survey of zero-shot generalisation in deep reinforcement learning. *Journal of Artificial Intelligence Research*, 76:201–264, 2023. Martin Klissarov and Doina Precup. Flexible option learning. *Advances in Neural Information Processing* Systems, 34:4632–4646, 2021. 
Martin Klissarov, Rasool Fakoor, Jonas Mueller, Kavosh Asadi, Taesup Kim, and Alex Smola. Adaptive interest for emphatic reinforcement learning. In Decision Awareness in Reinforcement Learning Workshop at ICML 2022, 2022. URL https://openreview.net/forum?id=ZGi3bDRXkx. A Harry Klopf. Brain function and adaptive systems: a heterostatic theory. Technical Report 133, Air Force Cambridge Research Laboratories. Special Reports, Bedford, Massachusets, 1972. Petar Kormushev, Sylvain Calinon, and Darwin G Caldwell. Reinforcement learning in robotics: Applications and real-world challenges. *Robotics*, 2(3):122–148, 2013. Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The nethack learning environment. *Advances in Neural Information Processing Systems*, 33:7671–7684, 2020. Kennon A Lattal. Delayed reinforcement of operant behavior. *Journal of the Experimental Analysis of* Behavior, 93(1):129–139, 2010. Charline Le Lan, Stephen Tu, Adam Oberman, Rishabh Agarwal, and Marc G Bellemare. On the generalization of representations in reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 4132–4157. PMLR, 2022. Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, et al. Multi-game decision transformers. *arXiv preprint* arXiv:2205.15241, 2022. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *arXiv preprint* arXiv:1509.02971, 2015. Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3):293–321, 1992. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. In *International Conference on Learning Representations*, 2017. Minghuan Liu, Menghui Zhu, and Weinan Zhang. Goal-conditioned reinforcement learning: Problems and solutions. *arXiv preprint arXiv:2201.08299*, 2022. J Luketina, N Nardelli, G Farquhar, J Foerster, J Andreas, E Grefenstette, S Whiteson, and T Rocktäschel. A survey of reinforcement learning informed by natural language. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, August 10-16 2019, Macao, China., volume 57, pp. 6309–6317. AAAI Press (Association for the Advancement of Artificial Intelligence), 2019. Jukka Luoma, Sampsa Ruutu, Adelaide Wilcox King, and Henrikki Tikkanen. Time delays, competitive interdependence, and firm performance. *Strategic Management Journal*, 38(3):506–525, 2017. A Rupam Mahmood, Huizhen Yu, Martha White, and Richard S Sutton. Emphatic temporal-difference learning. *arXiv preprint arXiv:1507.01569*, 2015. Matheus RF Mendonca, Artur Ziviani, and André MS Barreto. Graph-based skill acquisition for reinforcement learning. *ACM Computing Surveys (CSUR)*, 52(1):1–26, 2019. Thomas Mesnard, Theophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Thomas S Stepleton, Nicolas Heess, Arthur Guez, et al. Counterfactual credit assignment in model-free reinforcement learning. In *International Conference on Machine Learning*, pp. 7654–7664. Proceedings of Machine Learning Research, 2021. 
Thomas Mesnard, Wenqi Chen, Alaa Saade, Yunhao Tang, Mark Rowland, Theophane Weber, Clare Lyle, Audrunas Gruslys, Michal Valko, Will Dabney, Georg Ostrovski, Eric Moulines, and Remi Munos. Quantile credit assignment. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 24517–24531. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/mesnard23a.html. Donald Michie. Experiments on the mechanization of game-learning part i. characterization of the model and its parameters. *The Computer Journal*, 6(3):232–236, 1963. Marvin Minsky. Steps toward artificial intelligence. *Proceedings of the IRE*, 49(1):8–30, 1961. ISSN 00968390. doi: 10.1109/JRPROC.1961.287775. Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Sungmin Bae, et al. Chip placement with deep reinforcement learning. arXiv preprint arXiv:2004.10746, 2020. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In Advances in Neural Information Processing Systems, Deep Learning Workshop, 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International conference on machine learning*, pp. 1928–1937. PMLR, 2016. Seyed Sajad Mousavi, Michael Schukat, Enda Howley, and Patrick Mannion. Applying q (λ)-learning in deep reinforcement learning to play atari games. In *AAMAS Adaptive Learning Agents (ALA) Workshop*, pp. 1–6, 2017. Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. *Advances in neural information processing systems*, 31, 2018. Suraj Nair, Mohammad Babaeizadeh, Chelsea Finn, Sergey Levine, and Vikash Kumar. Trass: Time reversal as self-supervision. In *2020 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 115– 121. IEEE, 2020. Thanh Thi Nguyen and Vijay Janapa Reddi. Deep reinforcement learning for cyber security. *IEEE Transactions on Neural Networks and Learning Systems*, 2021. Tianwei Ni, Michel Ma, Benjamin Eysenbach, and Pierre-Luc Bacon. When do transformers shine in RL? decoupling memory from credit assignment. In *Thirty-seventh Conference on Neural Information Processing* Systems, 2023. URL https://openreview.net/forum?id=APGXBNkt6h. Chris Nota, Philip Thomas, and Bruno C. Da Silva. Posterior value functions: Hindsight baselines for policy gradient methods. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International* Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 8238–8247. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/nota21a.html. Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. In *International Conference on Machine Learning*, pp. 3878–3887. PMLR, 2018. Junhyuk Oh, Matteo Hessel, Wojciech M. 
Czarnecki, Zhongwen Xu, Hado P van Hasselt, Satinder Singh, and David Silver. Discovering reinforcement learning algorithms. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 1060–1070. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/0b96d81f0494fde5428c7aea243c9157-Paper.pdf.

Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepesvári, Satinder Singh, Benjamin Van Roy, Richard Sutton, David Silver, and Hado van Hasselt. Behaviour suite for reinforcement learning. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=rygf-kSYwH.

Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, and Bernhard Schölkopf. Direct advantage estimation. *Advances in Neural Information Processing Systems*, 35:11869–11880, 2022.

Jack Parker-Holder, Aldo Pacchiano, Krzysztof M Choromanski, and Stephen J Roberts. Effective diversity in population based reinforcement learning. *Advances in Neural Information Processing Systems*, 33:18050–18062, 2020.

Shubham Pateria, Budhitama Subagdja, Ah-hwee Tan, and Chai Quek. Hierarchical reinforcement learning: A comprehensive survey. *ACM Computing Surveys (CSUR)*, 54(5):1–35, 2021.

Judea Pearl. Causal inference in statistics: An overview. *Statistics Surveys*, 3:96–146, 2009.

Judea Pearl et al. Models, reasoning and inference. *Cambridge, UK: Cambridge University Press*, 19(2):3, 2000.

Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, et al. Mastering the game of stratego with model-free multiagent reinforcement learning. *Science*, 378(6623):990–996, 2022.

Jean Piaget, Margaret Cook, et al. *The origins of intelligence in children*, volume 8. International Universities Press, New York, 1952.

Silviu Pitis. Rethinking the discount factor in reinforcement learning: A decision theoretic approach. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 7949–7956, 2019.

Silviu Pitis, Elliot Creager, and Animesh Garg. Counterfactual data augmentation using locally factored dynamics. *Advances in Neural Information Processing Systems*, 33:3976–3990, 2020.

Chetan Prakash, Kyle D Stephens, Donald D Hoffman, Manish Singh, and Chris Fields. Fitness beats truth in the evolution of perception. *Acta Biotheoretica*, 69:319–341, 2021.

Doina Precup. Eligibility traces for off-policy policy evaluation. *Computer Science Department Faculty Publication Series*, pp. 80, 2000a.

Doina Precup. *Temporal abstraction in reinforcement learning*. PhD thesis, University of Massachusetts Amherst, 2000b.

Martin L Puterman. *Markov decision processes: discrete stochastic dynamic programming*. John Wiley & Sons, 2014.

Martin L Puterman and Moon Chirl Shin. Modified policy iteration algorithms for discounted markov decision problems. *Management Science*, 24(11):1127–1137, 1978.

Sébastien Racanière, Théophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. Imagination-augmented agents for deep reinforcement learning. *Advances in neural information processing systems*, 30, 2017.

Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. *OpenAI blog*, 2018.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019.

Hazhir Rahmandad, Nelson Repenning, and John Sterman. Effects of feedback delay on learning. *System Dynamics Review*, 25(4):309–338, 2009.

David Raposo, Sam Ritter, Adam Santoro, Greg Wayne, Theophane Weber, Matt Botvinick, Hado van Hasselt, and Francis Song. Synthetic returns for long-term credit assignment. *CoRR*, 2021.

Paulo Rauber, Avinash Ummadisingu, Filipe Mutz, and Jürgen Schmidhuber. Hindsight policy gradients. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=Bkg2viA5FQ.

Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. *arXiv preprint arXiv:2205.06175*, 2022.

Zhizhou Ren, Ruihan Guo, Yuan Zhou, and Jian Peng. Learning long-term reward redistribution via randomized return decomposition. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=lpkGn3k2YdD.

Matthew Riemer, Miao Liu, and Gerald Tesauro. Learning abstract options. *Advances in neural information processing systems*, 31, 2018.

Mark Rowland, Will Dabney, and Rémi Munos. Adaptive trade-offs in off-policy learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 34–44. PMLR, 2020.

Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Kuttler, Edward Grefenstette, and Tim Rocktäschel. Minihack the planet: A sandbox for open-ended reinforcement learning research. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)*, 2021. URL https://openreview.net/forum?id=skFwlyefkWJ.

Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on Machine Learning*, volume 37 of *Proceedings of Machine Learning Research*, pp. 1312–1320, Lille, France, 07–09 Jul 2015a. PMLR. URL https://proceedings.mlr.press/v37/schaul15.html.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. *arXiv preprint arXiv:1511.05952*, 2015b.

Tom Schaul, André Barreto, John Quan, and Georg Ostrovski. The phenomenon of policy churn. *arXiv preprint arXiv:2206.00730*, 2022.

Bruno Scherrer, Mohammad Ghavamzadeh, Victor Gabillon, Boris Lesner, and Matthieu Geist. Approximate modified policy iteration and its application to the game of tetris. *Journal of Machine Learning Research*, 16(49):1629–1676, 2015.

Juergen Schmidhuber. Reinforcement learning upside down: Don't predict rewards–just map them to actions. *arXiv preprint arXiv:1912.02875*, 2019.

Jürgen Schmidhuber. Deep learning in neural networks: An overview. *Neural Networks*, 61:85–117, 2015.

Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. *Nature*, 588(7839):604–609, 2020.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation.
In *Proceedings of the International Conference* on Learning Representations (ICLR), 2016. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017. Donald G Schultz. State functions and linear control systems. *McGraw-Hill Book Company*, 1967. Minah Seo, Luiz Felipe Vecchietti, Sangkeum Lee, and Dongsoo Har. Rewards prediction-based credit assignment for reinforcement learning with sparse binary rewards. *IEEE Access*, 7:118776–118791, 2019. Mehran Shakerinava and Siamak Ravanbakhsh. Utility theory for sequential decision making. In International Conference on Machine Learning, pp. 19616–19625. PMLR, 2022. Claude E Shannon. Programming a computer for playing chess. *The London, Edinburgh, and Dublin* Philosophical Magazine and Journal of Science, 41(314):256–275, 1950. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. *nature*, 529(7587):484–489, 2016. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. *Science*, 362(6419):1140–1144, 2018. Satinder Singh, Richard L Lewis, and Andrew G Barto. Where do rewards come from. In *Proceedings of the* annual conference of the cognitive science society, pp. 2601–2606. Cognitive Science Society, 2009. Satinder P Singh and Richard S Sutton. Reinforcement learning with replacing eligibility traces. *Machine* learning, 22(1):123–158, 1996. Matthew Smith, Herke Hoof, and Joelle Pineau. An inference-based policy gradient method for learning options. In *International Conference on Machine Learning*, pp. 4703–4712. PMLR, 2018. Matthew J Sobel. The variance of discounted markov decision processes. *Journal of Applied Probability*, 19 (4):794–802, 1982. Rupesh Kumar Srivastava, Pranav Shyam, Filipe Mutz, Wojciech Jaskowski, and Jürgen Schmidhuber. Training agents using upside-down reinforcement learning. *CoRR*, abs/1912.02877, 2019. URL http: //arxiv.org/abs/1912.02877. Miroslav Štrupl, Francesco Faccio, Dylan R Ashley, Jürgen Schmidhuber, and Rupesh Kumar Srivastava. Upside-down reinforcement learning can diverge in stochastic environments with episodic resets. arXiv preprint arXiv:2205.06595, 2022. Ke Sun, Bei Jiang, and Linglong Kong. How does value distribution in distributional reinforcement learning help optimization? *arXiv preprint arXiv:2209.14513*, 2022. Richard S Sutton. *Temporal credit assignment in reinforcement learning*. PhD thesis, University of Massachusetts, 1984. Richard S Sutton. Learning to predict by the methods of temporal differences. *Machine learning*, 3:9–44, 1988. Richard S. Sutton. The reward hypothesis. http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/ rewardhypothesis.html, 2004. Richard S Sutton and Andrew G Barto. *Reinforcement Learning: an introduction*. MIT Press, 2nd edition, 2018. ISBN 9781626239777. URL http://incompleteideas.net/book/RLbook2020.pdf. Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. *Artificial intelligence*, 112(1-2):181–211, 1999. 
Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In *The 10th International Conference on Autonomous Agents and Multiagent SystemsVolume 2*, pp. 761–768, 2011. Richard S Sutton, A Rupam Mahmood, and Martha White. An emphatic approach to the problem of offpolicy temporal-difference learning. *The Journal of Machine Learning Research*, 17(1):2603–2631, 2016. Yunhao Tang and Alp Kucukelbir. Hindsight expectation maximization for goal-conditioned reinforcement learning. In *International Conference on Artificial Intelligence and Statistics*, pp. 2863–2871. PMLR, 2021. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In International conference on intelligent robots and systems, pp. 5026–5033. IEEE, 2012. Hado van Hasselt and Richard S Sutton. Learning to predict independent of span. arXiv preprint arXiv:1508.04582, 2015. Hado van Hasselt and Marco A Wiering. Using continuous action spaces to solve discrete problems. In *2009* International Joint Conference on Neural Networks, pp. 1149–1156. IEEE, 2009. Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016. Hado van Hasselt, Yotam Doron, Florian Strub, Matteo Hessel, Nicolas Sonnerat, and Joseph Modayil. Deep reinforcement learning and the deadly triad. *arXiv preprint arXiv:1812.02648*, 2018. Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver, André Barreto, and Diana Borsa. Expected eligibility traces. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 9997–10005, 2021. Hado P van Hasselt, Matteo Hessel, and John Aslanides. When to use parametric models in reinforcement learning? *Advances in Neural Information Processing Systems*, 32, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in neural information processing systems*, 30, 2017. David Venuto, Elaine Lau, Doina Precup, and Ofir Nachum. Policy gradients incorporating the future. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id= EHaUTlm2eHg. Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, Rémi Munos, and Matthieu Geist. Leverage the average: an analysis of kl regularization in reinforcement learning. Advances in Neural Information Processing Systems, 33:12163–12174, 2020a. Nino Vieillard, Olivier Pietquin, and Matthieu Geist. Munchausen reinforcement learning. *Advances in* Neural Information Processing Systems, 33:4235–4246, 2020b. Oriol Vinyals, Igor Babuschkin, Junyoung Chung, Michael Mathieu, Max Jaderberg, Wojtek Czarnecki, Andrew Dudzik, Aja Huang, Petko Georgiev, Richard Powell, Timo Ewalds, Dan Horgan, Manuel Kroiss, Ivo Danihelka, John Agapiou, Junhyuk Oh, Valentin Dalibard, David Choi, Laurent Sifre, Yury Sulsky, Sasha Vezhnevets, James Molloy, Trevor Cai, David Budden, Tom Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Toby Pohlen, Dani Yogatama, Julia Cohen, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Chris Apps, Koray Kavukcuoglu, Demis Hassabis, and David Silver. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. 
https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019. Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, and Chongjie Zhang. Offline reinforcement learning with reverse model-based imagination. *Advances in Neural Information Processing* Systems, 34:29420–29432, 2021. Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Joseph Miller, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, and Stuart Russell. Adversarial policies beat professional-level go ais. arXiv preprint arXiv:2211.00241, 2022. Zhe Wang and Tianzhen Hong. Reinforcement learning for building controls: The opportunities and challenges. *Applied Energy*, 269:115036, 2020. Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. In International Conference on Learning Representations, 2016a. Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling network architectures for deep reinforcement learning. In *International conference on machine learning*, pp. 1995– 2003. PMLR, 2016b. Christopher J C H Watkins. *Learning from delayed rewards*. PhD thesis, King's College, Cambridge, United Kingdom, 1989. Douglas J White. Mean, variance, and probabilistic criteria in finite markov decision processes: A review. Journal of Optimization Theory and Applications, 56:1–29, 1988. Martha White. Unifying task specification in reinforcement learning. In International Conference on Machine Learning, pp. 3742–3750. PMLR, 2017. Peter R Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, et al. Outracing champion gran turismo drivers with deep reinforcement learning. *Nature*, 602(7896):223–228, 2022. Zhongwen Xu, Hado P van Hasselt, and David Silver. Meta-gradient reinforcement learning. Advances in neural information processing systems, 31, 2018. Zhongwen Xu, Hado P van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, and David Silver. Metagradient reinforcement learning with an objective discovered online. *Advances in Neural Information* Processing Systems, 33:15254–15264, 2020. Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, and Yang Gao. Mastering atari games with limited data. *Advances in Neural Information Processing Systems*, 34:25476–25488, 2021. Haiyan Yin, Shuicheng YAN, and Zhongwen Xu. Distributional meta-gradient reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview. net/forum?id=LGkmUauBUL. Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. Self-tuning deep reinforcement learning. *arXiv preprint arXiv:2002.12928*, 2020. Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, et al. Relational deep reinforcement learning. *arXiv* preprint arXiv:1806.01830, 2018. Shangtong Zhang, Vivek Veeriah, and Shimon Whiteson. Learning retrospective knowledge with reverse reinforcement learning. *Advances in Neural Information Processing Systems*, 33:19976–19987, 2020. Qinqing Zheng, Amy Zhang, and Aditya Grover. Online decision transformer. 
In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pp. 27042–27059. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/zheng22c.html. Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On learning intrinsic rewards for policy gradient methods. Advances in Neural Information Processing Systems, 31, 2018. Zeyu Zheng, Junhyuk Oh, Matteo Hessel, Zhongwen Xu, Manuel Kroiss, Hado van Hasselt, David Silver, and Satinder Singh. What can learned intrinsic rewards capture? In International Conference on Machine Learning, pp. 11436–11446. PMLR, 2020.
Review 1:

Summary:
This paper presents a thorough and detailed survey of temporal credit assignment in Deep Reinforcement Learning. The paper first formally defines the MDP, the goal, and the credit assignment function. Then the paper discusses the assignment functions currently used in most reinforcement learning algorithms. After that, the paper introduces several challenges to assigning credit in deep RL: delayed effects due to MDP depth, low action influence due to low MDP density, and low action influence due to high MDP breadth, and discusses the relationship with the exploration problem. Then the paper classifies RL methods into "Time as a heuristic", "Decomposing return contributions", "Conditioning on a predefined set of goals", "Modeling transitions as sequences", "Planning and learning backwards", and "Meta-learning proxies for credit". Finally, the paper proposes some evaluation metrics to evaluate credit assignment in deep RL.

Strengths and Weaknesses:
Comments:
1. The paper clearly defines the concept of credit assignment.
2. The paper considers the action-value function to also be a form of credit assignment. But I think this is not true. Learning an immediate reward function that assigns the credit can be called a credit assignment function. Other forms cannot be viewed as a valid credit assignment function.
3. The paper does not provide a quantitative comparison or analysis of current credit assignment methods.
4. Some definitions are needed that measure the credit assignment function itself rather than the training performance or value errors.

Requested Changes:
Please clarify the following points:
1. The paper considers the action-value function to also be a form of credit assignment. But I think this is not true. Learning an immediate reward function that assigns the credit can be called a credit assignment function. Other forms cannot be viewed as a valid credit assignment function.
2. The paper does not provide a quantitative comparison or analysis of current credit assignment methods.
3. Some definitions are needed that measure the credit assignment function itself rather than the training performance or value errors.

Broader Impact Concerns: None

==================================================

Review 2:

Summary:
The paper surveys approaches to the credit assignment problem (CAP) in Deep RL and proposes a unifying framework. Three lines of research in CAP are described: (a) action influence quantification, (b) learning of action influence, and (c) evaluation. Challenges and open problems in the field are laid out together with future research directions.

Strengths and Weaknesses:
Strengths:
* The paper surveys a broad spectrum of material.
* It formulates fundamental questions and proposes a unified view of CAP. This is very useful both for researchers and for people entering the field of RL.
* The paper provides food for thought, identifies the gaps in the current state-of-the-art, and organizes them in a list of challenges.

Weaknesses:
* The paper has several technical errors that are misleading to various degrees, e.g.:
  * The sum in the definition of $Z_t$ (above eq. (1)) should iterate over $k$.
  * In eq. (8) the items under the curly braces should be swapped.
  * There seems to be an inconsistency in defining GVFs and UVFAs in Section 4.5 and Table 2.
  * Equations (12) and (26) lack the brace, i.e., $(1 - ...)z$.
  * The intuition at the end of the paragraph "Hindsight advantage" is not clear. Can we have $\pi(a|s)=0$ and $\mathbb{P}_D(a|s,z)>0$?
    (it is not clear that this is the case from reading Harutyunyan et al., 2019)
  * Eq. (17) is unclear.
  * It seems that in the sentence directly preceding eq. (30), it should read "for all $V$ and $V'$, $\Gamma<1$".
  * It seems that there should be $\hat{V}^\pi$ in eq. (30).
* The paper is long, at over 43 pages. Some information is duplicated, and the text, while often easy to read, is quite verbose.
* The paper includes an Acknowledgments section, where the initials of the main author together with the grant number are provided. Additionally, a 'thank you' is given to a person mentioned by first and last name.
* There are several typos and missing spaces.

Requested Changes:
See the above section.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary:
This paper surveys the area of temporal credit assignment in deep reinforcement learning. This paper seeks to organize and compare different types of credit assignment, challenges to credit assignment, methods for learning credit assignment, and credit assignment evaluation procedures from the literature. Toward this end, this paper formalizes notions of action influence and credit assignment, describes desirable properties for an action influence, and organizes contributions by building on these theoretical foundations.

Strengths and Weaknesses:
The first three sections of the paper are an effective setup for a survey on this topic. Section 4 is an interesting discussion and formalization of credit assignment. Section 5 thoughtfully teases apart different types of challenges for effective credit assignment in MDPs.

Unfortunately, Sections 6 and 7 do not make the best use of these foundations. For example, Table 4 contains some interesting claims about the types of challenges defined in Section 5 that different methods address, but it is unclear to me how this table was constructed, and I could not find any reference to the table in the text for further discussion.

The paper varies significantly in quality and attention to detail across sections. The paper is largely well written up to Section 4. Section 4 is careful about defining common terms such as "goal", "outcome", "(action) influence", and "(credit) assignment", while the sections that follow are more ambiguous in their language, particularly around the terms "value" and "return". It is often unclear to me if these "values" and "returns" are meant to reference only cumulative or discounted future rewards, and thus pertain only to state value, action value, or advantage credit assignments, or if they are referring to "values" and "returns" in a more general sense.

The method descriptions in Section 6 can be effective in that they describe how a method works and how it relates to previously discussed concepts. For example, the RUDDER subsection in Section 6.2 is effective in this way (in spite of some shortcomings; see items in Requested Changes on this subsection). These method descriptions can also be poor in that they do little more than present a reference and a high-level description of the algorithm, without highlighting its relationship to credit assignment. For example, the PGIF subsection in Section 6.4.2.

The summaries at the subsections in Section 6 are confusing. Relatedly, summaries or later parts of the paper may make strong declarative statements that are surprising given the previous content.
For example, in the summary of Section 6.2, it states that a measure of action influence is not a satisfactory quantification of credit and cites Section 4.3, but Section 4.3 defines a general notion of assignment that each quantity talked about so far fits within. Another example is in the third paragraph of Section 8.1, where it states that "we highlighted that current benchmarks are not fit for the purpose". While "the purpose" could be clarified, my sense of the discussion of benchmarks in the previous section was that each benchmark had something different to offer, not that all of them were critically lacking.

Requested Changes:
Please improve Section 6 to explain how each learning method relates to or could be combined with each type of assignment described in the previous section. Please improve the summaries in Section 6. Please make the paper more consistent across sections in terms of the claims that are made. Please edit the paper to correct typos, improve wording, add missing words, and clarify references. Some examples in no particular order:
1. At the end of Section 4.1, the last bullet point is missing an "s" and a word. I expect it should read "Section 4.6 distils the properties that existing measure*s* of action influence **exhibit**."
2. There appears to be a hat missing from $V$ in Equation (30).
3. In the summary of Section 6.2, it is unclear to what "their" is referring in the sentence, "Despite their measure of action influence not being a satisfactory quantification of credit . . ."
4. In the last sentence of the summary of Section 6.2, it is unclear to what "these models" refers.
6. In the RUDDER subsection in Section 6.2, just after Equation (20), what is $\mathcal{M}^*$?
7. In the RUDDER subsection, you have $g = z^* \in \mathbb{R}$ at the end of the sentence as if it's describing an optimal policy, but I think you meant to describe the maximum expected return. In addition to this confusion, I do not understand why the symbols $g$ and $z^*$ are used here, especially when $g$ is used for the function that returns the discounted sum of rewards along a trajectory, $d$. I also do not understand how $g$ can generate that output without being an explicit function of $d$.
8. Section 5.4 is particularly confusing. The second paragraph of Section 5.4 especially contains many typos and awkward wordings.
9. The first sentence in Section 8.1, "tree" -> "three".

All of these issues should be addressed for me to recommend acceptance.

Broader Impact Concerns: No broader impact concerns.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment:
This paper reviews the Credit Assignment Problem (CAP) in Reinforcement Learning (RL), focusing on Temporal Credit Assignment (CA) in deep RL. It proposes a unified formalism for credit, addresses challenges such as delayed effects, and suggests evaluation protocols. Overall, it provides insights for newcomers and suggests future research directions for the CAP. The paper has many strong points: clear definitions, a comprehensive survey, formulation of fundamental questions, identification of gaps and challenges, effective setup, interesting discussion, and thoughtful analysis. On the other hand, the reviewers raised several technical and presentation issues, which were partially addressed by the updated version provided by the authors.
After reading the new version of the paper, the reviewers feel that some points were not properly fixed (see the message by reviewer 3PaG posted on February 29th). The authors need to provide a new version of their paper addressing all the open issues. ==================================================
# Using Stochastic Gradient Descent To Smooth Nonconvex Functions: Analysis Of Implicit Graduated Optimization With Optimal Noise Scheduling

Anonymous authors
Paper under double-blind review

## Abstract

The graduated optimization approach is a heuristic method for finding globally optimal solutions for nonconvex functions and has been theoretically analyzed in several studies. This paper defines a new family of nonconvex functions for graduated optimization, discusses their sufficient conditions, and provides a convergence analysis of the graduated optimization algorithm for them. It shows that stochastic gradient descent (SGD) with mini-batch stochastic gradients has the effect of smoothing the function, the degree of which is determined by the learning rate and batch size. This finding provides theoretical insights on why large batch sizes fall into sharp local minima, why decaying learning rates and increasing batch sizes are superior to fixed learning rates and batch sizes, and what the optimal learning rate scheduling is. To the best of our knowledge, this is the first paper to provide a theoretical explanation for these aspects. Moreover, a new graduated optimization framework that uses a decaying learning rate and increasing batch size is analyzed and experimental results of image classification that support our theoretical findings are reported.

## 1 Introduction

## 1.1 Background

The amazing success of deep neural networks (DNN) in recent years has been based on optimization by stochastic gradient descent (SGD) (Robbins & Monro, 1951) and its variants, such as Adam (Kingma & Ba, 2015). These methods have been widely studied for their convergence (Moulines & Bach, 2011; Needell et al., 2014; Fehrman et al., 2020; Bottou et al., 2018; Scaman & Malherbe, 2020; Loizou et al., 2021; Zaheer et al., 2018; Zou et al., 2019; Chen et al., 2019; Zhou et al., 2020; Chen et al., 2021; Iiduka, 2022) and stability (Hardt et al., 2016; Lin et al., 2016; Mou et al., 2018; He et al., 2019) in nonconvex optimization. SGD updates the parameters as $x_{t+1} := x_t - \eta \nabla f_{S_t}(x_t)$, where $\eta$ is the learning rate and $\nabla f_{S_t}$ is the stochastic gradient estimated from the full gradient $\nabla f$ using a mini-batch $S_t$. Therefore, the search direction of SGD differs from the true steepest descent direction only by $\omega_t := \nabla f_{S_t}(x_t) - \nabla f(x_t)$. Some studies claim that this stochastic noise is crucial in nonconvex optimization. For example, it has been proven that noise helps the algorithm to escape local minima (Ge et al., 2015; Jin et al., 2017; Daneshmand et al., 2018; Harshvardhan & Stich, 2021), achieve better generalization (Hardt et al., 2016; Mou et al., 2018), and find a local minimum with a small loss value in polynomial time under some assumptions (Zhang et al., 2017). Kleinberg et al. (2018) also suggest that noise smooths the objective function. Here, at time $t$, let $y_t$ be the parameter updated by gradient descent (GD) and $x_{t+1}$ be the parameter updated by SGD, i.e.,

$$y_t := x_t - \eta \nabla f(x_t), \quad x_{t+1} := x_t - \eta \nabla f_{S_t}(x_t) = x_t - \eta(\nabla f(x_t) + \omega_t).$$

Then, we obtain the following update rule for the sequence $\{y_t\}$:

$$\mathbb{E}_{\omega_t}\left[y_{t+1}\right] = \mathbb{E}_{\omega_t}\left[y_t\right] - \eta \nabla \mathbb{E}_{\omega_t}\left[f(y_t - \eta\omega_t)\right], \tag{1}$$

where $f$ is Lipschitz continuous and differentiable.
Therefore, if we define a new function $\hat{f}(y_t) := \mathbb{E}_{\omega_t}\left[f(y_t - \eta\omega_t)\right]$, then $\hat{f}$ can be smoothed by convolving $f$ with noise (see Definition 2.1; also Wu (1996)), and its parameters $y_t$ can approximately be viewed as being updated by using gradient descent to minimize $\hat{f}$. In other words, simply using SGD with a mini-batch smooths the function to some extent and may enable escapes from local minima. (The derivation of equation (1) is in Section A.)

**Graduated Optimization.** Graduated optimization is a global optimization method that searches for the globally optimal solution of difficult multimodal optimization problems. The method generates a sequence of simplified optimization problems that gradually approach the original problem through different levels of local smoothing operations. It solves the easiest simplified problem first, as it should have nice properties such as convexity or strong convexity; after that, it uses that solution as the initial point for solving the second-simplest problem, then the second solution as the initial point for solving the third-simplest problem, and so on, as it attempts to escape from local optimal solutions of the original problem and reach a global optimal solution. This idea was first established as graduated non-convexity (GNC) by Blake & Zisserman (1987) and has since been studied in the field of computer vision for many years. Similar early approaches can be found in Witkin et al. (1987) and Yuille (1989), and the same concept has appeared in the fields of numerical analysis (Allgower & Georg, 1990) and optimization (Rose et al., 1990; Wu, 1996). Over the past 25 years, graduated optimization has been successfully applied to many tasks in computer vision, such as early vision (Black & Rangarajan, 1996), image denoising (Nikolova et al., 2010), optical flow (Sun et al., 2010; Brox & Malik, 2011), dense correspondence of images (Kim et al., 2013), and robust estimation (Yang et al., 2020; Antonante et al., 2022; Peng et al., 2023). In addition, it has been applied to certain tasks in machine learning, such as semi-supervised learning (Chapelle et al., 2006; Sindhwani et al., 2006; Chapelle et al., 2008), unsupervised learning (Smith & Eisner, 2004), and ranking (Chapelle & Wu, 2010). Moreover, score-based generative models (Song & Ermon, 2019; Song et al., 2021b) and diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021a; Rombach et al., 2022), which are currently state-of-the-art generative models, implicitly use the techniques of graduated optimization. A comprehensive survey on the graduated optimization approach can be found in (Mobahi & Fisher III, 2015b).

While graduated optimization is popular, there is not much theoretical analysis of it. Mobahi & Fisher III (2015a) performed the first theoretical analysis, but they did not provide a practical algorithm. Hazan et al. (2016) defined a family of nonconvex functions satisfying certain conditions, called σ-nice, and proposed a first-order algorithm based on graduated optimization. In addition, they studied the convergence and convergence rate of their algorithm to a global optimal solution for σ-nice functions. Iwakiri et al. (2022) proposed a single-loop method that simultaneously updates the variable that defines the noise level and the parameters of the problem and analyzed its convergence. Li et al. (2023) analyzed graduated optimization based on a special smoothing operation.
Note that Duchi et al. (2012) pioneered the theoretical analysis of optimizers using Gaussian smoothing operations for nonsmooth convex optimization problems. Their method of optimizing with a decreasing noise level is truly a graduated optimization approach.

## 1.2 Motivation

Equation (1) indicates that SGD smooths the function (Kleinberg et al., 2018), but it is not clear to what extent the function is smoothed or what factors are involved in the smoothing. Therefore, we decided to clarify these aspects and identify what parameters contribute to the smoothing. Although Hazan et al. (2016) proposed the σ-nice function, it is unclear how special a nonconvex function the σ-nice function is. In some cases, there may be no function that satisfies the σ-nice property. Here, we decided to try to define and analyze a new family of functions with clear sufficient conditions as replacements for the σ-nice function.

Figure 1: Conceptual diagram of the new σ-nice function and its smoothed versions (see also the notation in Table 1).

In graduated optimization, the noise level is gradually reduced, eventually arriving at the original function, but there are an infinite number of ways to reduce the noise. For better optimization, the choice of noise scheduling is a very important issue. Therefore, we also aimed to clarify the optimal noise scheduling theoretically. Once it is known what parameters of SGD contribute to smoothing and what the optimal noise scheduling is, an implicit graduated optimization can be achieved by varying the parameters so that the noise level is optimally reduced gradually. Our goal was thus to construct an implicit graduated optimization framework using the smoothing properties of SGD to achieve global optimization of deep neural networks.

## 1.3 Contributions

## 1.3.1 SGD's Smoothing Property

We show that the degree of smoothing by SGD depends on the ratio $\eta/\sqrt{b}$ of the learning rate to the square root of the batch size. Accordingly, the smaller the batch size $b$ and the larger the learning rate $\eta$ are, the more smoothed the function becomes (see Figure 1). Also, we can say that halving the learning rate is the same as quadrupling the batch size. Goyal et al. (2017), Smith et al. (2018), and Xie et al. (2021) also studied SGD dynamics and demonstrated how the ratio $\eta/b$ affects training dynamics. Note that our theory includes and does not conflict with their results.

## 1.3.2 Why The Use Of Large Batch Sizes Leads To Solutions Falling Into Sharp Local Minima

In other words, from a smoothing perspective, if we use a large batch size and/or a small learning rate, it is easy for the algorithm to fall into a sharp local minimum and experience a drop in generalization performance, since it will optimize a function that is close to the original multimodal function. As is well known, training with a large batch size leads to convergence to sharp local minima and poor generalization performance, as evidenced by the fact that several prior studies (Hoffer et al., 2017; Goyal et al., 2017; You et al., 2020) provided techniques that do not impair generalization performance even with large batch sizes. Keskar et al. (2017) showed this experimentally, and our results provide theoretical support for it.

## 1.3.3 Why Using Decaying Learning Rates And Increasing Batch Sizes Is Superior To Using Fixed Ones

Moreover, we can say that decreasing the learning rate and/or increasing the batch size during training is indeed an implicit graduated optimization.
Hence, using a decaying learning rate and increasing the batch size makes sense in terms of avoiding local minima. Our results provide theoretical support for the many positive findings on using decaying learning rates (Wu et al., 2014; Ioffe & Szegedy, 2015; Loshchilov & Hutter, 2017; Hundt et al., 2019; You et al., 2019; Lewkowycz, 2021) and increasing batch sizes (Byrd et al., 2012; Friedlander & Schmidt, 2012; Balles et al., 2017; De et al., 2017; Bottou et al., 2018; Smith et al., 2018).

## 1.3.4 New σ-Nice Function

We propose a new σ-nice function that generalizes the σ-nice function. All smoothed functions of the new σ-nice function are σ-strongly convex in a neighborhood $B(x^\star; d_m|\delta_m|)$ of the optimal solution $x^\star$ that is proportional to the noise level $|\delta_m|$ (see Figure 1). In contrast to Hazan et al. (2016), we show sufficient conditions for a certain nonconvex function $f$ to be a new σ-nice function, as follows:

$$\frac{2L_g \max\left\{\left\|x^\star_{\delta_m} - x^\star\right\|, \left\|x^\star_{\delta_{m+1}} - x^\star\right\|\right\}}{\sigma(1 - \gamma_m)} \le |\delta_m| = |\delta_m^-|,$$

where $|\delta_m^-| > 0$, $d_{m+1} > 1$, $\gamma_m \in \left(\frac{1}{d_{m+1}}, 1\right)$, and $m \in [M] \subset \mathbb{N}$. Here, $\delta_m$ is the noise level of $\hat{f}_{\delta_m}$, which is a smoothed version of $f$, and $x^\star$ is the global optimal solution of the original function $f$. Furthermore, we show that the graduated optimization algorithm for the $L_f$-Lipschitz new σ-nice function converges to an $\epsilon$-neighborhood of the globally optimal solution in $\mathcal{O}\left(1/\epsilon^{\frac{1}{p}+2}\right)$ ($p \in (0, 1]$) rounds.

## 1.3.5 Optimal Noise Scheduling

Let $|\delta_m|$ be the current noise level, and let the next noise level be determined by $|\delta_{m+1}| := \gamma_m|\delta_m|$, where $\gamma_m$ is the decay rate of the noise. We show theoretically that $\gamma_m$ should decay slowly from a value close to 1 for convergence to the globally optimal solution. To the best of our knowledge, ours is the first paper to provide theoretical results on optimal scheduling, both in terms of how to reduce the noise in graduated optimization and how to decay the learning rate and increase the batch size in general optimization. Noise scheduling also has an important role in score-based models (Song & Ermon, 2020), diffusion models (Chen, 2023), panoptic segmentation (Chen et al., 2023), etc., so our theoretical findings will contribute to these methodologies as well. Furthermore, since the decay rate of the noise in graduated optimization is equivalent to the decay rate of the learning rate and the rate of increase in the batch size, we can say that it is desirable to vary them gradually from a value close to 1. As for the schedule for decaying the learning rate, many previous studies have tried cosine annealing (without restart) (Loshchilov & Hutter, 2017), cosine power annealing (Hundt et al., 2019), or polynomial decay (Liu et al., 2015; Chen et al., 2018; Zhao et al., 2017; Chen et al., 2017), but it has remained unclear why they are superior to fixed rates. We provide theoretical support showing why they are experimentally superior. In particular, we show that a polynomial decay with a power less than or equal to 1 is the optimal learning rate schedule and demonstrate this in Section 4.

## 1.3.6 Implicit Graduated Optimization

We propose a new implicit graduated optimization algorithm. The algorithm decreases the learning rate of SGD and increases the batch size during training. We show that the algorithm for the $L_f$-Lipschitz new σ-nice function converges to an $\epsilon$-neighborhood of the globally optimal solution in $\mathcal{O}\left(1/\epsilon^{\frac{1}{p}}\right)$ ($p \in (0, 1]$) rounds.
In Section 4, we show experimentally that methods that reduce noise outperform methods that use a constant learning rate and constant batch size. We also find that methods which increase the batch size outperform those which decrease the learning rate when the decay rate of the noise is set at $1/\sqrt{2}$.

## 2 Preliminaries

## 2.1 Definitions And Notation

The notation used in this paper is summarized in Table 1.

Table 1: Notation List

| Notation | Description |
|---|---|
| $\mathbb{N}$ | The set of all nonnegative integers |
| $[N]$ | $[N] := \{1, 2, \ldots, N\}$ ($N \in \mathbb{N}\backslash\{0\}$) |
| $\mathbb{R}^d$ | A $d$-dimensional Euclidean space with inner product $\langle\cdot,\cdot\rangle$, which induces the norm $\|\cdot\|$ |
| $\mathbb{E}_\xi[X]$ | The expectation with respect to $\xi$ of a random variable $X$ |
| $S_t$ | Mini-batch of $b$ samples $z_i$ at time $t$ |
| $N(x^\star; \epsilon)$ | $\epsilon$-neighborhood of a vector $x^\star$, i.e., $N(x^\star; \epsilon) := \{x \in \mathbb{R}^d : \|x - x^\star\| < \epsilon\}$ |
| $B(x^\star; r)$ | The Euclidean closed ball of radius $r$ centered at $x^\star$, i.e., $B(x^\star; r) := \{x \in \mathbb{R}^d : \|x - x^\star\| \le r\}$ |
| $u \sim B(x^\star; r)$ | A random variable distributed uniformly over $B(x^\star; r)$ |
| $M$ | The number of smoothed functions, i.e., $M \in \mathbb{N}$ |
| $m$ | Counts from the smoothest function, i.e., $m \in [M]$ |
| $\delta$ | The degree of smoothing of the smoothed function, i.e., $\delta \in \mathbb{R}$ |
| $\delta_m$ | The degree of smoothing of the $m$-th smoothed function, i.e., $\delta_m \in \mathbb{R}$ |
| $\hat{f}_\delta$ | The function obtained by smoothing $f$ with a noise level $\delta$ |
| $\hat{f}_{\delta_m}$ | The $m$-th smoothed function obtained by smoothing $f$ with a noise level $\delta_m$ |
| $x_{m+1}$ | $x_{m+1}$ is defined by $\hat{f}_{\delta_m}(x_{m+1}) \le \hat{f}_{\delta_m}(\hat{x}_t)$, where $(\hat{x}_t)_{t=1}^{T_F+1}$ is generated by GD |
| $f_i(x)$ | A loss function for $x \in \mathbb{R}^d$ and $z_i$ |
| $f(x)$ | The total loss function for $x \in \mathbb{R}^d$, i.e., $f(x) := \|S\|^{-1}\sum_{i \in S} f_i(x)$ |
| $\xi$ | A random variable supported on $\Xi$ that does not depend on $x \in \mathbb{R}^d$ |
| $\xi_t$ | $\xi_0, \xi_1, \ldots$ are independent samples and $\xi_t$ is independent of $(x_k)_{k=0}^t \subset \mathbb{R}^d$ |
| $\xi_{t,i}$ | A random variable generated from the $i$-th sampling at time $t$ |
| $G_{\xi_t}(x)$ | The stochastic gradient of $f(\cdot)$ at $x \in \mathbb{R}^d$ |
| $\nabla f_{S_t}(x_t)$ | The mini-batch stochastic gradient of $f(x_t)$ for $S_t$, i.e., $\nabla f_{S_t}(x_t) := b^{-1}\sum_{i \in [b]} G_{\xi_{t,i}}(x_t)$ |

Definition 2.1 (Smoothed function). Given an $L_f$-Lipschitz function $f$, define $\hat{f}_\delta$ to be the function obtained by smoothing $f$ as

$$\hat{f}_\delta(x) := \mathbb{E}_{u \sim B(0;1)}\left[f(x - \delta u)\right], \tag{2}$$

where $\delta \in \mathbb{R}$ represents the degree of smoothing and $u$ is a random variable distributed uniformly over $B(0; 1)$. Also, $x^\star := \operatorname{argmin} f(x)$ and $x^\star_\delta := \operatorname{argmin} \hat{f}_\delta(x)$.

Remark: For a general smoothing as in Definition 2.1, the distribution followed by the random variable $u$ need not necessarily be uniform; it can be a normal distribution. In fact, several previous studies (Wu, 1996; Iwakiri et al., 2022) assumed that $u$ follows a normal distribution. Here, we assume that it follows a uniform distribution because this is necessary for the analysis of the new σ-nice function. This is also true for the analysis of the σ-nice function (Hazan et al., 2016). There are a total of $M$ smoothed functions in this paper. The largest noise level is $\delta_1$ and the smallest noise level is $\delta_{M+1} = 0$. Thus, $\hat{f}_{\delta_{M+1}} = f$.
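To make Definition 2.1 concrete, here is a minimal sketch of how $\hat{f}_\delta$ could be approximated by Monte Carlo sampling over the unit ball. The test function, dimension, and sample count are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_ball(d, n, rng):
    # Uniform samples from the d-dimensional unit ball B(0; 1): uniform
    # directions (normalized Gaussians) times radii distributed as U^(1/d).
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * (rng.random(n) ** (1.0 / d))[:, None]

def smoothed(f, x, delta, n=20_000):
    # Monte Carlo estimate of f_hat_delta(x) = E_{u ~ B(0;1)}[f(x - delta*u)]
    # as in equation (2).
    u = sample_unit_ball(x.shape[0], n, rng)
    return float(np.mean([f(x - delta * ui) for ui in u]))

# Illustrative multimodal test function (an assumption for this sketch).
f = lambda x: float(np.sum(x**2) + np.sum(np.sin(10.0 * x)))

x = np.array([0.3, -0.2])
for delta in (0.0, 0.5, 1.0, 2.0):
    print(delta, smoothed(f, x, delta))
# As |delta| grows, the oscillatory term averages out and f_hat_delta(x)
# deviates from f(x) by at most |delta| * L_f (cf. Lemma 2.4 below).
```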
Definition 2.2 (σ-nice function (Hazan et al., 2016)). A function $f : \mathbb{R}^d \to \mathbb{R}$ is said to be σ-nice if the following two conditions hold:

(i) For every $\delta > 0$ and every $x^\star_\delta$, there exists $x^\star_{\delta/2}$ such that

$$\left\|x^\star_\delta - x^\star_{\delta/2}\right\| \le \frac{\delta}{2}.$$

(ii) For every $\delta > 0$, let $r_\delta = 3\delta$; then, the function $\hat{f}_\delta(x)$ over $N(x^\star_\delta; r_\delta)$ is σ-strongly convex.

The σ-nice property implies that optimizing the smoothed function $\hat{f}_\delta$ is a good start for optimizing the next smoothed function $\hat{f}_{\delta/2}$, which has been shown to be sufficient for graduated optimization (Hazan et al., 2016).

## 2.2 Assumptions And Lemmas

We make the following assumptions:

Assumption 2.1.
(A1) $f : \mathbb{R}^d \to \mathbb{R}$ is continuously differentiable and $L_g$-smooth, i.e., for all $x, y \in \mathbb{R}^d$,
$$\|\nabla f(x) - \nabla f(y)\| \le L_g\|x - y\|.$$
(A2) $f : \mathbb{R}^d \to \mathbb{R}$ is an $L_f$-Lipschitz function, i.e., for all $x, y \in \mathbb{R}^d$,
$$|f(x) - f(y)| \le L_f\|x - y\|.$$
(A3) Let $(x_t)_{t\in\mathbb{N}} \subset \mathbb{R}^d$ be the sequence generated by SGD.
(i) For each iteration $t$,
$$\mathbb{E}_{\xi_t}\left[G_{\xi_t}(x_t)\right] = \nabla f(x_t).$$
(ii) There exists a nonnegative constant $C^2$ such that
$$\mathbb{E}_{\xi_t}\left[\|G_{\xi_t}(x_t) - \nabla f(x_t)\|^2\right] \le C^2.$$
(A4) For each iteration $t$, SGD samples a mini-batch $S_t \subset S$ and estimates the full gradient $\nabla f$ as
$$\nabla f_{S_t}(x_t) := \frac{1}{b}\sum_{i\in[b]} G_{\xi_{t,i}}(x_t) = \frac{1}{b}\sum_{\{i:\, z_i \in S_t\}} \nabla f_i(x_t).$$

Lemma 2.1. Suppose that (A3)(ii) and (A4) hold for all $t \in \mathbb{N}$; then,
$$\mathbb{E}_{\xi_t}\left[\|\nabla f_{S_t}(x_t) - \nabla f(x_t)\|^2\right] \le \frac{C^2}{b}.$$

The proof of Lemma 2.1 can be found in Appendix B.1. The following lemmas concern the properties of the smoothed functions $\hat{f}_\delta$. See Appendix B for their proofs.

Lemma 2.2. Suppose that (A1) holds; then, $\hat{f}_\delta$ defined by (2) is also $L_g$-smooth, i.e., for all $x, y \in \mathbb{R}^d$,
$$\left\|\nabla\hat{f}_\delta(x) - \nabla\hat{f}_\delta(y)\right\| \le L_g\|x - y\|.$$

Lemma 2.3. Suppose that (A2) holds; then $\hat{f}_\delta$ is also an $L_f$-Lipschitz function, i.e., for all $x, y \in \mathbb{R}^d$,
$$\left|\hat{f}_\delta(x) - \hat{f}_\delta(y)\right| \le L_f\|x - y\|.$$

Lemmas 2.2 and 2.3 imply that the Lipschitz constants $L_f$ of the original function $f$ and $L_g$ of $\nabla f$ are taken over by the smoothed function $\hat{f}_\delta$ and its gradient $\nabla\hat{f}_\delta$ for all $\delta \in \mathbb{R}$.

Lemma 2.4. Let $\hat{f}_\delta$ be the smoothed version of $f$; then, for all $x \in \mathbb{R}^d$,
$$\left|\hat{f}_\delta(x) - f(x)\right| \le |\delta|L_f.$$

Lemma 2.4 implies that the larger the degree of smoothing is, the further away the smoothed function is from the original function. Since the degree of smoothing is determined by the learning rate and batch size (see Section 3.3), we can say that the optimal value obtained by using a large learning rate and/or small batch size may be larger than the optimal value obtained by using a small learning rate and/or large batch size. When the learning rate is decreased or the batch size is increased, the sharp decrease in function values at that time depends on the change in the objective function (see also Figure 1), and this phenomenon is especially noticeable in schedules that use the same noise level for multiple epochs, such as the step decay learning rate (see Figures 7-9).
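As a sanity check on Lemma 2.1, the following sketch empirically measures the mini-batch gradient error $\mathbb{E}\left[\|\nabla f_{S_t}(x_t) - \nabla f(x_t)\|^2\right]$ on an assumed least-squares loss; the problem sizes and batch sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (an assumption for this sketch): a least-squares loss
# f(x) = (1/N) sum_i (a_i^T x - y_i)^2, so per-sample gradients are unbiased
# estimates of the full gradient, as in (A3)(i) and (A4).
N, d = 5_000, 10
A = rng.standard_normal((N, d))
y = rng.standard_normal(N)
x = rng.standard_normal(d)

grads = 2 * A * (A @ x - y)[:, None]      # per-sample gradients, shape (N, d)
full_grad = grads.mean(axis=0)

for b in (1, 4, 16, 64):
    # Empirical E||grad_f_S(x) - grad_f(x)||^2 over many random mini-batches.
    errs = []
    for _ in range(2_000):
        idx = rng.choice(N, size=b, replace=False)
        errs.append(np.sum((grads[idx].mean(axis=0) - full_grad) ** 2))
    print(f"b = {b:3d}: mean squared error = {np.mean(errs):.4f}")
# The printed variance shrinks roughly like C^2 / b, matching Lemma 2.1.
```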
## 3 Main Results

## 3.1 New σ-Nice Function

Since the definition of the σ-nice function is inappropriate for large noise levels (see Section 3.2), we generalize the σ-nice function and define a new σ-nice function that can be defined even when the noise level is large.

Definition 3.1. Let $\delta_1 \in \mathbb{R}$. A function $f : \mathbb{R}^d \to \mathbb{R}$ is said to be "new σ-nice" if the following two conditions hold:

(i) For all $m \in [M]$ and all $\gamma_m \in (0, 1)$, there exist $\delta_m \in \mathbb{R}$ with $|\delta_{m+1}| := \gamma_m|\delta_m|$ and $x^\star_{\delta_m}$ such that
$$\left\|x^\star_{\delta_m} - x^\star_{\delta_{m+1}}\right\| \le |\delta_m| - |\delta_{m+1}|.$$

(ii) For all $m \in [M]$ and all $\gamma_m \in (0, 1)$, there exist $\delta_m \in \mathbb{R}$ with $|\delta_{m+1}| := \gamma_m|\delta_m|$ and $d_m > 1$ such that the function $\hat{f}_{\delta_m}(x)$ is σ-strongly convex on $N(x^\star; d_m\delta_m)$.

The value $\delta_m \in \mathbb{R}$ in Definition 3.1 is the degree of smoothing or noise level (see Definition 2.1), and $\gamma_m \in (0, 1)$ is the decay rate of the noise, i.e., $\gamma_m := |\delta_{m+1}|/|\delta_m|$. In the definition of the σ-nice function (Definition 2.2), $\gamma_m$ is always 0.5. We have extended this condition to $\gamma_m \in (0, 1)$. We can show that, for the graduated optimization algorithm to be successful, $\gamma_m$ requires a certain lower bound, which provides important insights into the optimal noise scheduling (see Section 3.2). The next propositions provide sufficient conditions for the function $f$ to be a new σ-nice function. The proofs of Propositions 3.1 and 3.2 are in Sections D.5 and D.6, respectively.

Proposition 3.1. Let $a_m > \sqrt{2}$ for all $m \in [M]$. Suppose that the function $f : \mathbb{R}^d \to \mathbb{R}$ is σ-strongly convex on $B(x^\star; r)$ for sufficiently small $r > 0$ and the noise level $|\delta_m|$ satisfies $|\delta_m| = |\delta_m^-|$; then, the smoothed function $\hat{f}_{\delta_m}$ is σ-strongly convex on $N(x^\star; a_m r)$, where

$$|\delta_m^-| := \sup_{x \in N(x^\star;\, a_m r)\setminus\{x^\star\}} \mathbb{E}_{u_m \sim B(0;1)}\left[\|x^\star - x\|\|u_m\|\cos\theta - \sqrt{\|x^\star - x\|^2\|u_m\|^2\cos^2\theta - r^2(a_m^2 - 1)}\right],$$

and $\theta$ is the angle between $u_m$ and $x^\star - x$. Also, if we define $d_m$ as $d_m := a_m r/|\delta_m^-|$, then the smoothed function $\hat{f}_{\delta_m}$ is also σ-strongly convex on $N(x^\star; d_m|\delta_m|)$.

Note that $u_m \in \mathbb{R}^d$ is a random variable used to define the smoothed function $\hat{f}_{\delta_m}$, which we assume follows a uniform distribution (see Definition 2.1). In addition, $a_m \in \mathbb{R}$ is required only for the analysis, and $(a_m)_{m\in[M]}$ is monotone decreasing. Proposition 3.1 implies that the radius of the strongly convex region of the function $\hat{f}_{\delta_m}$ extends to $a_m r$ if the sequence of noise $(\delta_m)_{m\in[M]}$ added to the function $f$ satisfies $|\delta_m| = |\delta_m^-|$ for all $m \in [M]$. Thus, if $a_m r \ge d_m|\delta_m|$ holds, then the smoothed function $\hat{f}_{\delta_m}$ is also strongly convex in the neighborhood $N(x^\star; d_m|\delta_m|)$. Therefore, we define $d_m \in \mathbb{R}$ as $d_m := a_m r/|\delta_m^-|$. Now, let us discuss $d_m$. From the definition of $d_m$, the lower and upper bounds of $d_m$ can be expressed as

$$1 < d_m \le \frac{a_m}{\sqrt{a_m^2 - 1} - 1}. \tag{3}$$

Thus, the upper bound of $d_m$ gradually increases as $a_m$ decreases. The σ-nice function (Hazan et al., 2016) always assumes $d_m = 3$, but we see that this does not hold when $a_m$ is large (see Figure 2 in Section 3.2).
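The behavior of the bound (3) is easy to check numerically. The sketch below (the grid of $a_m$ values is an illustrative assumption) shows the upper bound $a_m/(\sqrt{a_m^2-1}-1)$ shrinking toward 1 as $a_m$ grows, which is why both $d_m$ and the admissible decay rate $\gamma_m \in (1/d_{m+1}, 1)$ are forced close to 1 when the noise level is large (see also Section 3.2).

```python
import numpy as np

# Upper bound on d_m from equation (3) as a function of a_m (> sqrt(2)).
for a_m in (1.5, 2.0, 3.0, 5.0, 10.0, 100.0):
    upper = a_m / (np.sqrt(a_m**2 - 1.0) - 1.0)
    print(f"a_m = {a_m:6.1f}: 1 < d_m <= {upper:8.3f}, so gamma_m > {1.0/upper:.3f}")
# As a_m grows, the bound tends to 1, so for large noise levels the decay
# rate gamma_m in (1/d_{m+1}, 1) can only be taken close to 1.
```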
Proposition 3.2. Let $d_m > 1$ for all $m \in [M]$. Suppose that the function $f : \mathbb{R}^d \to \mathbb{R}$ is σ-strongly convex and $L_g$-smooth on $B(x^\star; r)$ for sufficiently small $r > 0$; a sufficient condition for $f$ to be a new σ-nice function is that the noise level $|\delta_m|$ satisfies the following condition: for all $m \in [M]$, suppose that $x^\star_{\delta_{m-1}} \in N(x^\star; d_m|\delta_m|)$,

$$\frac{2L_g \max\left\{\left\|x^\star_{\delta_m} - x^\star\right\|, \left\|x^\star_{\delta_{m+1}} - x^\star\right\|\right\}}{\sigma(1 - \gamma_m)} \le |\delta_m| = |\delta_m^-|. \tag{4}$$

Proposition 3.2 shows that any function is a new σ-nice function if $|\delta_m|$ satisfies equation (4). Note that $\delta_m^-$ does not always exist. The probability $p(a_m)$ that $\delta_m^-$ exists depends on the direction of the random variable vector $u_m$ and can be expressed as

$$0 < p(a_m) := \frac{\arccos\left(\frac{r\sqrt{a_m^2 - 1}}{\|x^\star - x\|\|u_m\|}\right)}{\pi} < \frac{\arccos\left(\frac{\sqrt{a_m^2 - 1}}{a_m}\right)}{\pi} < 1,$$

where $r > 0$, $a_m > \sqrt{2}$, and $x \in N(x^\star; a_m r)\setminus\{x^\star\}$. Since the upper bound of $p(a_m)$ approaches 0 when $a_m$ is large, the probability $p(a_m)$ approaches 0 as $a_m$ gets larger, but it never reaches 0. Therefore, the success of Algorithm 1 depends on the random variable $u_m$, especially when $a_m$ is large, i.e., when $\delta_m$ is large.

The framework of graduated optimization for the new σ-nice function is shown in Algorithm 1. Algorithm 2 is used to optimize each smoothed function.

**Algorithm 1** Graduated Optimization

Require: $\epsilon > 0$, $r \in (0, 1)$, $p \in (0, 1]$, $\bar{d} > 0$, $x_1$, $B_2 > 0$
  $\delta_1 := \frac{2L_g}{\sigma r}$, $\alpha_0 := \min\left\{\frac{\sigma r}{8L_f^2(1+\bar{d})}, \frac{\sqrt{\sigma r}}{2\sqrt{2}L_g}\right\}$, $M^p := \frac{1}{\alpha_0\epsilon}$
  for $m = 1$ to $M + 1$ do
    if $m \ne M + 1$ then
      $\epsilon_m := \sigma\delta_m^2$, $T_F := 2B_2/(\sigma\epsilon_m)$, $\gamma_m := \frac{(M-m)^p}{\{M-(m-1)\}^p}$
    end if
    $x_{m+1} := \mathrm{GD}(T_F, x_m, \hat{f}_{\delta_m})$
    $\delta_{m+1} := \gamma_m\delta_m$
  end for
  return $x_{M+2}$

**Algorithm 2** GD with decaying learning rate

Require: $T_F$, $\hat{x}_1$, $F$
  for $t = 1$ to $T_F$ do
    $\eta_t := 2/(\sigma t)$
    $\hat{x}_{t+1} := \hat{x}_t - \eta_t\nabla F(\hat{x}_t)$
  end for
  return $\hat{x}_{T_F+1} = \mathrm{GD}(T_F, \hat{x}_1, F)$

The smoothed function $\hat{f}_{\delta_m}$ is σ-strongly convex in the neighborhood $N(x^\star; d_m|\delta_m|)$. Thus, we should now consider the convergence of GD for a σ-strongly convex function $F = \hat{f}_{\delta_m}$. Theorem 3.1 is a convergence analysis for when a decaying learning rate is used (the proof of Theorem 3.1 is in Section D.1).

Theorem 3.1 (Convergence analysis of Algorithm 2). Suppose that $F : \mathbb{R}^d \to \mathbb{R}$ is a σ-strongly convex and $L_g$-smooth function and $\eta_t := \frac{2}{\sigma t}$. Then, the sequence $(\hat{x}_t)_{t\in\mathbb{N}}$ generated by Algorithm 2 satisfies

$$\min_{t\in[T]}\left(F(\hat{x}_t) - F(x^\star)\right) \le \frac{2B_2}{\sigma T} = \mathcal{O}\left(\frac{1}{T}\right), \tag{5}$$

where $x^\star$ is the global minimizer of $F$, and $B_2 > 0$ is a nonnegative constant.

Theorem 3.1 is the convergence analysis of Algorithm 2 for any σ-strongly convex function $F$. It shows that the algorithm can reach an $\epsilon_m$-neighborhood of the optimal solution $x^\star_{\delta_m}$ of $\hat{f}_{\delta_m}$ in approximately $T_F := 2B_2/(\sigma\epsilon_m)$ iterations.
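Before the caveats in the remark below, here is a minimal one-dimensional sketch of the explicit scheme of Algorithms 1 and 2 on a toy problem. Since $\hat{f}_\delta$ and $\nabla\hat{f}_\delta$ are generally not available in closed form, the smoothed gradient is approximated by Monte Carlo sampling, the inner solver uses a small constant step size (as in Algorithm 4 of Section 3.4) rather than the $2/(\sigma t)$ schedule, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative nonconvex objective (an assumption for this sketch):
# global minimum near x = -0.30, with shallow local minima elsewhere.
f = lambda x: x**2 + 2.0 * np.sin(5.0 * x)
grad_f = lambda x: 2.0 * x + 10.0 * np.cos(5.0 * x)

def grad_smoothed(x, delta, n=2_000):
    # Monte Carlo estimate of the gradient of f_hat_delta (Definition 2.1);
    # in one dimension, B(0; 1) is simply the interval [-1, 1].
    u = rng.uniform(-1.0, 1.0, size=n)
    return float(np.mean(grad_f(x - delta * u)))

def gd(x, delta, T_F=200, eta=0.02):
    # Inner solver: GD with a small constant step size (cf. Algorithm 4).
    for _ in range(T_F):
        x -= eta * grad_smoothed(x, delta)
    return x

# Outer loop in the spirit of Algorithm 1: solve a sequence of smoothed
# problems while decaying the noise level polynomially (gamma_m near 1).
x, delta, M, p = 3.0, 0.6, 20, 0.8
for m in range(1, M + 1):
    x = gd(x, delta)
    delta *= ((M - m) / (M - (m - 1))) ** p   # gamma_m as in Algorithm 1
x = gd(x, 0.0)                                # final pass on the original f
print("graduated:", x, f(x))                  # near the global minimum (~ -1.9)
x_plain = gd(3.0, 0.0)
print("plain GD :", x_plain, f(x_plain))      # stuck in a poor local minimum
```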
Remark: Algorithms 1 and 2 represent explicit graduated optimization algorithms. Function smoothing is accomplished explicitly by convolving random variables as in Definition 2.1, and the smoothed strongly convex function is optimized by gradient descent (Algorithm 2). However, in general, the integral operation on the function $f$ is not possible, so optimization by Algorithms 1 and 2 is not feasible. If smoothing of the function $f$ by Definition 2.1 is possible and $\hat{f}_\delta$ is accessible, then Algorithm 2 may be an SGD-type optimizer. For example, Algorithm 2 can be the projected SGD generated by the sequence $(\hat{x}_t)$ with $\hat{x}_{t+1} = P_m(\hat{x}_t - \eta_t\nabla F_{S_t}(\hat{x}_t))$, where $P_m$ is the projection onto $B(x^\star; d_m\delta_m)$. Since $\hat{x}_0 = x^\star_{\delta_{m-1}} \in B(x^\star; d_m\delta_m)$ is guaranteed by Proposition 3.3(ii), the sequence $(\hat{x}_t)$ generated by the projected SGD is always in $B(x^\star; d_m\delta_m)$. Using the proof of Theorem 3.1 and the nonexpansivity of $P_m$ (i.e., $\|P_m(x) - P_m(y)\| \le \|x - y\|$), we can show that the projected SGD satisfies

$$\min_{t\in[T]} \mathbb{E}\left[F(\hat{x}_t) - F(x^\star)\right] \le \frac{2D_2}{\sigma T},$$

where $D_1 := \sup_{t\in\mathbb{N}} \mathbb{E}\left[\|\nabla F(\hat{x}_t)\|^2\right]$ and $D_2 := C^2 + D_1$.

The next theorem guarantees the convergence of Algorithm 1 for the new σ-nice function (the proof of Theorem 3.2 is in Section D.2).

Theorem 3.2 (Convergence analysis of Algorithm 1). Let $\epsilon \in (0, 1)$ and $f$ be an $L_f$-Lipschitz new σ-nice function. Suppose that we apply Algorithm 1; then, after $\mathcal{O}\left(1/\epsilon^{\frac{1}{p}+2}\right)$ rounds, the algorithm reaches an $\epsilon$-neighborhood of the global optimal solution $x^\star$.

Remark: In Algorithm 1 and Theorem 3.2, we assume that we can access the full gradient of the smoothed function $\nabla\hat{f}_{\delta_m}$. Thus, our explicit graduated optimization by Algorithms 1 and 2 is only valid for functions $f$ for which the computation of $\hat{f}_{\delta_m}$ by Definition 2.1 and access to its full gradient $\nabla\hat{f}_{\delta_m}$ are possible. Hence, Algorithms 1 and 2 are not applicable to DNNs. Note that Theorem 3.2 provides a total complexity that integrates Algorithm 1 and Algorithm 2, because Algorithm 1 uses Algorithm 2 at each $m \in [M]$. Theorem 3.2 implies that convergence is faster when the power $p$ of the polynomial decay is large, and when $p = 1$, it takes at least $\mathcal{O}(1/\epsilon^3)$ rounds for new σ-nice functions. Hazan et al. (2016) showed that their graduated optimization algorithm converges to a globally optimal solution in $\mathcal{O}(1/\epsilon^2)$ iterations for a σ-nice function. However, explicit graduated optimization, such as with our Algorithm 1 and Algorithm 1 in Hazan et al. (2016), is not applicable to DNNs due to the impossibility of computing the smoothed function $\hat{f}_{\delta_m}$.

## 3.2 Optimal Noise Scheduling

The next proposition is crucial to the success of Algorithm 1 (the proof is in Section D.7).

Proposition 3.3. Let $d_m > 1$ for all $m \in [M]$ and suppose that $f : \mathbb{R}^d \to \mathbb{R}$ is a new σ-nice function.

(i) Then, the following always holds for all $m \in [M]$:
$$\left\|x^\star_{\delta_m} - x^\star\right\| < d_m|\delta_m|.$$

(ii) If $\gamma_m$ satisfies $\gamma_m \in \left(\frac{1}{d_{m+1}}, 1\right)$ for all $m \in [M]$, then the following holds for all $m \in \{2, 3, \ldots, M\}$:
$$\left\|x^\star_{\delta_{m-1}} - x^\star\right\| < d_m|\delta_m|.$$

Proposition 3.3 (i) implies that if the objective function $f$ is a new σ-nice function, the optimal solution $x^\star_{\delta_m}$ of the smoothed function $\hat{f}_{\delta_m}$ is always contained in the σ-strongly convex region $N(x^\star; d_m|\delta_m|)$ of the function $\hat{f}_{\delta_m}$. Therefore, if the initial point of the optimization of the function $\hat{f}_{\delta_m}$ is contained in the σ-strongly convex region $N(x^\star; d_m|\delta_m|)$, the sequence generated by Algorithm 2 never leaves the σ-strongly convex region. Also, assuming that Algorithm 2 comes sufficiently close to the optimal solution $x^\star_{\delta_{m-1}}$ after $T_F$ or more iterations in the optimization of $\hat{f}_{\delta_{m-1}}$, $x^\star_{\delta_{m-1}}$ is the initial point of the optimization of the next function $\hat{f}_{\delta_m}$. Proposition 3.3 (ii) therefore implies that the initial point of the optimization of the function $\hat{f}_{\delta_m}$ is contained in the σ-strongly convex region of the function $\hat{f}_{\delta_m}$.
Hence, Proposition 3.3 guarantees that if the initial point $\bar{x}_1$ of Algorithm 1 is contained in the σ-strongly convex region $N(x^\star; d_1|\delta_1|)$ of the smoothest function $\hat{f}_{\delta_1}$, then the algorithm will always reach the globally optimal solution $x^\star$ of the original function $f$. Note that the decay rate $\gamma_m$ used in Algorithm 1 satisfies $\gamma_m \in (1/d_{m+1}, 1)$. See the following discussion.

Figure 2: The ranges of possible values for $d_m$ and $1/d_m$ are colored blue and green, respectively. Note that the vertical axis is logarithmic.

Figure 3: Existing decaying learning rate schedules versus number of epochs. The schedule definitions are included in equations (6) through (7).

According to Proposition 3.1, for a function to be a new σ-nice function, $d_m$ must satisfy equation (3). Thus, there is a range of possible values for $1/d_m$:

$$1 < d_m \le \frac{a_m}{\sqrt{a_m^2 - 1} - 1} \quad \text{and} \quad \frac{\sqrt{a_m^2 - 1} - 1}{a_m} \le \frac{1}{d_m} < 1.$$

The range is plotted in Figure 2. Recall that $a_m$ is a value that appears only in the theoretical analysis and becomes smaller as $m$ increases and $\delta_m$ decreases, since it satisfies $d_m|\delta_m| = a_m r$. $d_m$ is involved in the radius of the strongly convex region $N(x^\star; d_m|\delta_m|)$ of the smoothed function $\hat{f}_{\delta_m}$. According to Figure 2, when $a_m$ is large, i.e., when $m$ is small and $|\delta_m|$ is large, $d_m$ can only take values close to 1. From the definition of a σ-nice function (Hazan et al., 2016) (see Definition 2.2), a smoothed function $\hat{f}_{\delta_m}$ is strongly convex in a neighborhood $N(x^\star_{\delta_m}; 3\delta_m)$. Then, since $x^\star_{\delta_m}$ is always contained in $N(x^\star; d_m|\delta_m|)$ (see Proposition 3.3), we see that $d_m = 3$ does not always hold. That is, a σ-nice function cannot be defined when the noise level is large. From Proposition 3.3 and its proof, for Algorithm 1 to be successful, $1/d_{m+1}$ is required as a lower bound for $\gamma_m$, i.e., $\gamma_m \in \left(\frac{1}{d_{m+1}}, 1\right)$. Recall that $\gamma_m$ is the decay rate of the noise level $|\delta_m|$, i.e., $|\delta_{m+1}| := \gamma_m|\delta_m|$. According to Figure 2, when $a_m$ is large, i.e., when $m$ is small and $|\delta_m|$ is large, $1/d_m$ and $\gamma_m$ can only take values close to 1. Therefore, $\gamma_m$ should vary very gradually from a value close to 1. From the definition of a σ-nice function (see Definition 2.2), $\gamma_m$ is always 0.5. When the noise level is large, a small decay rate such as 0.5 cannot be used, so the definition of the σ-nice function is still not appropriate when the noise level is large. Even when the noise level is large, our new σ-nice function can satisfy the conditions (Proposition 3.3) necessary for the success of Algorithm 1, because the radius $d_m$ of the strongly convex region and the decay rate $\gamma_m$ vary with the noise level $\delta_m$.

Figure 4: Decay rates of the existing decaying learning rate schedules. The area colored green represents the values that the decay rate $\gamma_m$ must satisfy for the graduated optimization approach to succeed. Here, the green curve in this figure is a symmetric shift and parallel shift of the green curve in Figure 2 to zero at epoch 200.

Now let us see whether there is a decaying learning rate schedule that satisfies the condition on the decay rate $\gamma_m$. The existing decaying learning rate schedules are shown in Figure 3 (methods that include an increase in the learning rate, even partially, such as warm-up, are omitted). The following defines the update rules for all decaying learning rates $(\eta_t)_{t\in\{0,1,\ldots,T-1\}}$, where $T$ is the number of epochs.
cosine annealing (Loshchilov & Hutter, 2017):
$$\eta_t := \eta_{\min} + \frac{1}{2}\left(\eta_{\max} - \eta_{\min}\right)\left(1 + \cos\left(\frac{t}{T}\pi\right)\right) \tag{6}$$

cosine power annealing (Hundt et al., 2019):
$$\eta_t := \eta_{\min} + \left(\eta_{\max} - \eta_{\min}\right)\frac{w^{\frac{1}{2}\left(1+\cos\left(\frac{t}{T}\pi\right)\right)+1} - w}{w^2 - w} \quad (w > 0)$$

step decay (Lu, 2022):
$$\eta_t := \eta_{\max} d^{\lfloor t/n \rfloor} \quad (0 < d < 1,\ n < T)$$

exponential decay (Wu et al., 2014):
$$\eta_t := \eta_{\max}\exp(-kt) \quad (k > 0)$$

polynomial decay (Chen et al., 2018):
$$\eta_t := \left(\eta_{\max} - \eta_{\min}\right)\left(1 - \frac{t}{T}\right)^p + \eta_{\min} \quad (p > 0) \tag{7}$$

The curves in Figure 3 are plotted for $T = 200$, $\eta_{\min} = 0$, $\eta_{\max} = 0.1$, $d = 0.5$, $n = 40$, $k = 0.94$, $w = 10$, and $p = 0.5$. The decay rates of these schedules are plotted in Figure 4. Figures 5 and 6 are for polynomial decays with different powers $p$. Note that $\eta_{\min}$ is 0, but since $t \in \{0, 1, \ldots, T-1\}$, $\eta_t$ will never be 0 under any update rule. In Figures 4 and 6, only the first value is artificially set to 1. Also, the value shown at epoch $t$ represents the rate of decay from the learning rate used in epoch $t$ to the learning rate used in epoch $t+1$. Therefore, the graphs stop at 199 epochs.

Figure 5: Polynomial decay learning rate versus epoch. The update rule for polynomial decay is defined by (7).

Figure 6: Decay rate of the polynomial decay learning rate schedule. The area colored green represents the values that the decay rate $\gamma_m$ must satisfy for the graduated optimization approach to succeed.

According to Figure 4 and Figure 6, only a polynomial decay with small power $p$ satisfies the conditions that $\gamma_m$ must satisfy. Finally, we would like to mention something about warm-up techniques (Radford et al., 2018; Liu et al., 2018; Gotmare et al., 2019). Although warm-up techniques that increase the learning rate in the early stages of learning are very popular, they are a distraction from the discussion of the decay rates shown in Figures 4 and 6; hence, we have focused on monotonically decreasing learning rates in this paper. Since the learning rate determines the smoothing level of the function, increasing the learning rate in the early learning phase, with a fixed batch size, means temporarily smoothing the function significantly and exploring that function with a large learning rate. Therefore, we can say that the warm-up technique is a reasonable scheduling that, as conventionally understood, defines the best starting point. However, we should also note that, since Algorithm 3 assumes that the learning rate is monotonically decreasing, Theorem 3.4 may not hold if the warm-up technique is used.
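For reference, here is a minimal sketch implementing the schedules in equations (6)-(7) and their epoch-to-epoch decay rates $\eta_{t+1}/\eta_t$; the constants are those quoted above for Figure 3.

```python
import math

T, eta_max, eta_min = 200, 0.1, 0.0
d, n, k, w, p = 0.5, 40, 0.94, 10.0, 0.5   # constants quoted for Figure 3

def cosine(t):         # cosine annealing, equation (6)
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(t / T * math.pi))

def cosine_power(t):   # cosine power annealing
    num = w ** (0.5 * (1 + math.cos(t / T * math.pi)) + 1) - w
    return eta_min + (eta_max - eta_min) * num / (w**2 - w)

def step(t):           # step decay
    return eta_max * d ** (t // n)

def exponential(t):    # exponential decay
    return eta_max * math.exp(-k * t)

def polynomial(t):     # polynomial decay, equation (7)
    return (eta_max - eta_min) * (1 - t / T) ** p + eta_min

for name, sched in [("cosine", cosine), ("cosine power", cosine_power),
                    ("step", step), ("exponential", exponential),
                    ("polynomial", polynomial)]:
    # Decay rate at epoch t is eta_{t+1} / eta_t; for graduated optimization
    # it should stay close to 1 and decrease only gradually.
    rates = [sched(t + 1) / sched(t) for t in range(1, T - 1)]
    print(f"{name:13s} min decay rate = {min(rates):.3f}")
# Polynomial decay with small p keeps the decay rate closest to 1 throughout,
# consistent with Figures 4 and 6.
```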
## 3.3 SGD's Smoothing Property

This section discusses the smoothing effect of using stochastic gradients. From Lemma 2.1, we have

$$\mathbb{E}_{\xi_t}\left[\|\omega_t\|\right]\leq\frac{C}{\sqrt{b}},$$

where ω_t := ∇f_{S_t}(x_t) − ∇f(x_t). An ω_t satisfying this bound can be expressed as ω_t = (C/√b)u_t, where u_t ∼ N(0; (1/√d)I_d); here, N(0; (1/√d)I_d) is a normal distribution with mean 0 and variance-covariance matrix (1/√d)I_d, and I_d denotes the identity matrix in R^d. Note that, according to (Zhang et al., 2020), for some deep learning models and datasets the stochastic noise follows a normal distribution; based on this, we assume that the stochastic noise follows a normal distribution.

Then, using Definition 2.1, we further transform equation (1) as follows:

$$\mathbb{E}_{\omega_t}[y_{t+1}]=\mathbb{E}_{\omega_t}[y_t]-\eta\nabla\mathbb{E}_{\omega_t}\left[f(y_t-\eta\omega_t)\right]=\mathbb{E}_{\omega_t}[y_t]-\eta\nabla\mathbb{E}_{u_t\sim N\left(0;\frac{1}{\sqrt{d}}I_d\right)}\left[f\left(y_t-\frac{\eta C}{\sqrt{b}}u_t\right)\right]\approx\mathbb{E}_{\omega_t}[y_t]-\eta\nabla\mathbb{E}_{u_t\sim B(0;1)}\left[f\left(y_t-\frac{\eta C}{\sqrt{b}}u_t\right)\right]=\mathbb{E}_{\omega_t}[y_t]-\eta\nabla\hat{f}_{\frac{\eta C}{\sqrt{b}}}(y_t),\tag{8}$$

where we have used the fact that the standard normal distribution in high dimension d is close to a uniform distribution on a sphere of radius √d (Vershynin, 2018, Section 3.3.3). This shows that E_{ω_t}[f(y_t − ηω_t)] is a smoothed version of f with noise level ηC/√b, and that its parameter y_t can be approximately updated by using gradient descent to minimize f̂_{ηC/√b}. Therefore, we can say that the degree of smoothing performed by the stochastic gradients in SGD is determined by the learning rate η and the batch size b. This insight yields several important findings, which we discuss next.

## 3.3.1 Why The Use Of Large Batch Sizes Leads To Solutions Falling Into Sharp Local Minima

It is known that training with large batch sizes leads to a persistent degradation of model generalization performance. In particular, Keskar et al. (2017) showed experimentally that training with large batch sizes leads to sharp local minima and worsens generalization performance. According to equation (8), using a large learning rate and/or a small batch size makes the function smoother. Thus, when a small batch size is used, sharp local minima disappear through extensive smoothing, and SGD can reach a flat local minimum. Conversely, when a large batch size is used, the smoothing is weak and the function stays close to the original multimodal function, so the solution easily falls into a sharp local minimum. Thus, we have theoretical support for what Keskar et al. (2017) showed experimentally. In addition, equation (8) implies that halving the learning rate is equivalent to quadrupling the batch size. Note that Smith et al. (2018) argue that halving the learning rate is equivalent to doubling the batch size.

Remark: Note that our argument rests on the somewhat non-theoretical finding that flat local solutions generalize better than sharp local solutions (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Izmailov et al., 2018; Li et al., 2018). Since the function optimized by the optimizer is constructed from a limited training sample, there should be some deviation from the function constructed from unknown data, including the test data. Therefore, the intuitive explanation is that flatness around the local solution prevents this deviation from degrading generalizability.

## 3.3.2 Why Decaying Learning Rates And Increasing Batch Sizes Are Superior To Fixed Learning Rates And Batch Sizes

From equation (8), using a decaying learning rate or an increasing batch size during training is equivalent to decreasing the noise level of the smoothed function; hence, using a decaying learning rate or increasing the batch size is an implicit graduated optimization. Thus, we can say that using a decaying learning rate (Loshchilov & Hutter, 2017; Hundt et al., 2019; You et al., 2019; Lewkowycz, 2021) or an increasing batch size (Byrd et al., 2012; Friedlander & Schmidt, 2012; Balles et al., 2017; De et al., 2017; Bottou et al., 2018; Smith et al., 2018) makes sense in terms of avoiding local minima, and this provides theoretical support for their experimentally observed superiority.
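The noise-level expression ηC/√b behind these observations is easy to play with numerically. Below is a minimal sketch — C is the unknown variance bound from Lemma 2.1, set to 1 purely for illustration — of the equivalence implied by equation (8): halving the learning rate and quadrupling the batch size yield the same smoothing level, and both decaying η and increasing b shrink it.

```python
import math

def smoothing_level(eta: float, b: int, C: float = 1.0) -> float:
    """Implicit smoothing (noise) level delta = eta * C / sqrt(b) from equation (8)."""
    return eta * C / math.sqrt(b)

base       = smoothing_level(eta=0.1,  b=128)
half_lr    = smoothing_level(eta=0.05, b=128)  # halve the learning rate...
quad_batch = smoothing_level(eta=0.1,  b=512)  # ...or quadruple the batch size
print(base, half_lr, quad_batch)  # half_lr == quad_batch == base / 2
```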
## 3.3.3 Optimal Decay Rate Of Learning Rate

As indicated in Section 3.2, decreasing the noise gradually, with a decay rate starting from a value close to 1, is the optimal noise scheduling for graduated optimization. Therefore, we can say that the optimal update rule for a decaying learning rate and an increasing batch size is one that varies slowly from a value close to 1, as in cosine annealing (without restart) (Loshchilov & Hutter, 2017), cosine power annealing (Hundt et al., 2019), and polynomial decay (Liu et al., 2015; Chen et al., 2018; Zhao et al., 2017; Chen et al., 2017). Thus, we have a theoretical explanation for why these schedules are superior. In particular, a polynomial decay with small power between 0 and 1 satisfies the conditions that the decay rate must satisfy (see also Figures 4 and 6 in Section 3.2). Therefore, we argue that polynomial decay with power less than or equal to 1 is the optimal decaying learning rate schedule.

## 3.4 Implicit Graduated Optimization Algorithm

Algorithm 3 embodies the framework of implicit graduated optimization for the new σ-nice function. Algorithm 4 is used to optimize each smoothed function. The γ_m used in Algorithms 1 and 3 is a polynomial decay rate with power between 0 and 1, which satisfies the condition that γ_m must satisfy for Algorithm 3 to succeed (see Proposition 3.3). The smoothed function f̂_{δ_m} is σ-strongly convex in the neighborhood N(x⋆; d_m|δ_m|). Also, the learning rate used by Algorithm 4 to optimize f̂_{δ_m} is always constant. Therefore, let us now consider the convergence of GD with a constant learning rate for a σ-strongly convex function F = f̂_{δ_m}. The proof of Theorem 3.3 is in Section D.3.

Theorem 3.3 (Convergence analysis of Algorithm 4). *Suppose that F: R^d → R is a σ-strongly convex and L_g-smooth function and η < min{1/σ, 2/L_g}. Then, the sequence (x̂_t)_{t∈N} generated by Algorithm 4 satisfies*

$$\min_{t\in[T]}\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{H_3}{T}=\mathcal{O}\left(\frac{1}{T}\right),\tag{9}$$

*where x⋆ is the global minimizer of F, and H₃ is a nonnegative constant.*

Theorem 3.3 is a convergence analysis of Algorithm 4 for any σ-strongly convex and L_g-smooth function F. It shows that Algorithm 4 can reach an ε_m-neighborhood of the optimal solution x⋆_{δ_m} of f̂_{δ_m} in approximately T_F := H₃/ε_m iterations. Proposition 3.3 also holds for Algorithm 3. Therefore, if the initial point x̄₁ is contained in the σ-strongly convex region N(x⋆; d₁|δ₁|) of the smoothest function f̂_{δ_1}, then the algorithm always reaches the globally optimal solution x⋆ of the original function f.

Remark: Algorithms 3 and 4 represent implicit graduated optimization algorithms. Function smoothing is accomplished implicitly by the stochastic noise in SGD. From Section 3.3, SGD is run on the objective function f, but behind the scenes GD can be regarded as running on the function f̂_{ηC/√b}, a smoothed version of f, where η and b are hyperparameters of SGD. This is why our Algorithm 4 can be GD; the convergence analysis for this case is Theorem 3.3. On the other hand, another viewpoint is possible. The experiments in Section 4 simply run SGD with a decaying learning rate or increasing batch size, and GD is not used explicitly. In this case, since b data points are handled in each step, one may view f̂_{ηC/√b} ≈ (1/b)∑_{i=1}^{b} f_{ξ_i}. Then Algorithm 4 can be SGD, since f̂_{ηC/√b} varies depending on the data chosen. If a projection is computable that ensures the SGD sequence does not leave the strongly convex region of the function f̂_{ηC/√b}, then convergence can be guaranteed as in the Remark for Theorem 3.1.
The relationship between the loss function f_i for the i-th data point and the smoothed function f̂_{ηC/√b} is still unknown, so there may be some differences between theory and practice.

Algorithm 3 Implicit Graduated Optimization with SGD
Require: ϵ > 0, p ∈ (0, 1], d̄ > 0, x₁, η₁, b₁
  δ₁ := η₁C/√b₁, α₀ := min{√b₁/(4L_f η₁C(1 + d̄)), √b₁/(√2ση₁C)}, M^p := 1/(α₀ϵ)
  for m = 1 to M + 1 do
    if m ≠ M + 1 then
      ϵ_m := σ²δ_m², T_F := H₃/ϵ_m
      γ_m := (M − m)^p/{M − (m − 1)}^p
      κ_m/√λ_m = γ_m (κ_m ∈ (0, 1], λ_m ≥ 1)
    end if
    x_{m+1} := GD(T_F, x_m, f̂_{δ_m}, η_m)
    η_{m+1} := κ_mη_m, b_{m+1} := λ_mb_m
    δ_{m+1} := η_{m+1}C/√b_{m+1}
  end for
return x_{M+2}

The next theorem guarantees the convergence of Algorithm 3 for the new σ-nice function (the proof of Theorem 3.4 is in Section D.4).

Theorem 3.4 (Convergence analysis of Algorithm 3). *Let ϵ ∈ (0, 1) and f be an L_f-Lipschitz new σ-nice function. Suppose that we run Algorithm 3; then after O(1/ϵ^{1/p}) rounds, the algorithm reaches an ϵ-neighborhood of the global optimal solution x⋆.*

Algorithm 4 GD with a constant learning rate
Require: T_F, x̂₁, F, η
  for t = 1 to T_F do
    x̂_{t+1} := x̂_t − η∇F(x̂_t)
  end for
return x̂_{T_F+1} = GD(T_F, x̂₁, F, η)

Note that Theorem 3.4 provides a total complexity covering both Algorithm 3 and Algorithm 4, because Algorithm 3 calls Algorithm 4 at each m ∈ [M]. Theorem 3.4 implies that convergence is faster when the power p of the polynomial decay is high; when p = 1, it takes at least O(1/ϵ) rounds for new σ-nice functions.

## 4 Numerical Results

The experimental environment was as follows: two NVIDIA GeForce RTX 4090 GPUs and an Intel Core i9 13900KF CPU. The software environment was Python 3.10.12, PyTorch 2.1.0, and CUDA 12.2. The code is available at https://anonymous.4open.science/r/new-sigma-nice.

## 4.1 Implicit Graduated Optimization Of DNN

We compared four types of SGD for image classification:

1. constant learning rate and constant batch size,
2. decaying learning rate and constant batch size,
3. constant learning rate and increasing batch size,
4. decaying learning rate and increasing batch size.

We evaluated the performance of the four SGDs in training ResNet18 (He et al., 2016) on the CIFAR100 dataset (Krizhevsky, 2009) (Figure 7), WideResNet-28-10 (Zagoruyko & Komodakis, 2016) on the CIFAR100 dataset (Figure 8), and ResNet34 (He et al., 2016) on the ImageNet dataset (Deng et al., 2009) (Figure 9). All experiments were run for 200 epochs. In methods 2, 3, and 4, the noise was decreased every 40 epochs, with a common decay rate of 1/√2. That is, every 40 epochs, the learning rate of method 2 was multiplied by 1/√2, the batch size of method 3 was doubled, and the learning rate and batch size of method 4 were respectively multiplied by √3/2 and 1.5. The initial learning rate was 0.1 for all methods, determined by a grid search over [0.01, 0.1, 1.0, 10]. The noise reduction interval of 40 epochs was determined by a grid search over [10, 20, 25, 40, 50, 100]. The history of the learning rate or batch size for each method is provided in the caption of each figure. For methods 2, 3, and 4, the decay rates are all 1/√2 and the decay intervals are all 40 epochs, so throughout training the three methods should theoretically optimize exactly the same five smoothed functions in sequence. Nevertheless, the local solutions reached by the three methods are not exactly the same.
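As an illustration of how method 4 realizes Algorithm 3's schedule, the sketch below decays the learning rate by κ = √3/2 and grows the batch size by λ = 1.5 every 40 epochs, so the implicit noise level ηC/√b shrinks by κ/√λ = 1/√2 per stage. The training-loop skeleton and names are ours, not the released code, and the setup is deliberately minimal.

```python
import math
import torch
from torch.utils.data import DataLoader

def train_method4(model, dataset, loss_fn, epochs=200, interval=40,
                  eta=0.1, batch=32, kappa=math.sqrt(3) / 2, lam=1.5):
    optimizer = torch.optim.SGD(model.parameters(), lr=eta)
    for epoch in range(epochs):
        if epoch > 0 and epoch % interval == 0:
            eta *= kappa                    # decay the learning rate by sqrt(3)/2
            batch = round(batch * lam)      # grow the batch size by 1.5
            optimizer = torch.optim.SGD(model.parameters(), lr=eta)
        loader = DataLoader(dataset, batch_size=batch, shuffle=True)
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        # Implicit noise level up to the unknown constant C: it decreases
        # by kappa / sqrt(lam) = 1/sqrt(2) at every interval.
        print(epoch, eta / math.sqrt(batch))
```

Starting from batch = 32, the batch-size history is [32, 48, 72, 108, 162], matching the Figure 7 caption.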
All results indicate that method 3 is superior to method 2 and that method 4 is superior to method 3 in both test accuracy and training loss function value. This difference can be attributed to the different learning rates used to optimize each smoothed function. Among methods 2, 3, and 4, method 3, which does not decay the learning rate, maintains the highest learning rate, 0.1, followed by method 4 and method 2. In all graphs, the loss function values are always smallest in this order; i.e., the larger the learning rate, the lower the loss function value becomes. Therefore, we can say that the noise level |δ|, expressed as ηC/√b, needs to be reduced, while the learning rate η should remain as large as possible; alternatively, if the learning rate is small, then a large number of iterations is required. Thus, for the same rate of change and the same number of epochs, increasing the batch size is superior to decaying the learning rate, because it maintains a large learning rate and performs many iterations while the batch size is still small.

![15_image_0.png](15_image_0.png)

Figure 7: Accuracy score for testing and loss function value for training versus the number of epochs (**left**) and the number of parameter updates (**right**) in training ResNet18 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and the batch size were fixed at 0.1 and 128, respectively. In method 2, the learning rate decreased every 40 epochs as [0.1, 1/(10√2), 0.05, 1/(20√2), 0.025] and the batch size was fixed at 128. In method 3, the learning rate was fixed at 0.1, and the batch size was increased as [16, 32, 64, 128, 256]. In method 4, the learning rate was decreased as [0.1, √3/20, 0.075, 3√3/80, 0.05625] and the batch size was increased as [32, 48, 72, 108, 162].

![15_image_1.png](15_image_1.png)

Figure 8: Accuracy score for testing and loss function value for training versus the number of epochs (**left**) and the number of parameter updates (**right**) in training WideResNet-28-10 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and batch size were fixed at 0.1 and 128, respectively. In method 2, the learning rate was decreased every 40 epochs as [0.1, 1/(10√2), 0.05, 1/(20√2), 0.025] and the batch size was fixed at 128. In method 3, the learning rate was fixed at 0.1, and the batch size increased as [8, 16, 32, 64, 128]. In method 4, the learning rate decreased as [0.1, √3/20, 0.075, 3√3/80, 0.05625] and the batch size increased as [8, 12, 18, 27, 40].

![16_image_0.png](16_image_0.png)

Figure 9: Accuracy score for testing and loss function value for training versus the number of epochs (**left**) and the number of parameter updates (**right**) in training ResNet34 on the ImageNet dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and batch size were fixed at 0.1 and 256, respectively. In method 2, the learning rate was decreased every 40 epochs as [0.1, 1/(10√2), 0.05, 1/(20√2), 0.025] and the batch size was fixed at 256. In method 3, the learning rate was fixed at 0.1, and the batch size was increased as [32, 64, 128, 256, 512]. In method 4, the learning rate was decreased as [0.1, √3/20, 0.075, 3√3/80, 0.05625] and the batch size was increased as [32, 48, 72, 108, 162].
Theoretically, the noise level |δ_m| should gradually decrease and become zero at the end, so in our Algorithm 3 the learning rate η_m should reach zero at the end, or the batch size b_m should reach the full dataset size at the end. However, if the learning rate is 0, training cannot proceed, and if the batch size is close to the full batch, training is computationally infeasible. For this reason, the experiments described in this paper are not fully graduated optimizations; i.e., full global optimization is not achieved. In fact, the last batch size used by method 3 is 128 to 512, which is far from a full batch. Therefore, the solution reached in these experiments is the optimal solution of a function that has been smoothed to some extent; to achieve global optimization of the DNN, it would be necessary to increase the batch size until it eventually reaches a full batch, or to increase the number of iterations accordingly while increasing the batch size and decaying the learning rate. Finally, we should note that graduated optimization with Algorithm 1 is not applicable to DNNs. Our approach, Algorithm 3, enables implicit graduated optimization by exploiting the smoothing property of SGD; the experimental results in this section imply its success.

## 4.2 Experiments On Optimal Noise Scheduling

Section 3.3.3 shows that, in theory, the optimal decaying learning rate is a polynomial decay with small power between 0 and 1. To demonstrate this, we evaluated the performance of SGD with several decaying learning rate schedules in training ResNet18 and WideResNet-28-10 on the CIFAR100 dataset. Figures 10 and 11 plot the test accuracy and the training loss function value versus the number of epochs. All experiments were run for 200 epochs, and the batch size was fixed at 128. The learning rate was decreased per epoch; see Section 3.2 for the respective update rules.

Both results show that a polynomial decay with a power less than or equal to 1, i.e., the schedule satisfying the condition that γ_m must satisfy, is superior in both test accuracy and training loss function value. Furthermore, the loss function values and test accuracy worsen the further the decay rate curve lies from the green region it must satisfy (see Figure 4 and Figure 6), and this ordering agrees very well with the order in which lower loss function values are achieved. According to Theorem 3.4, Algorithm 3 reaches an ϵ-neighborhood of the globally optimal solution after O(1/ϵ^{1/p}) iterations; thus, theoretically, the closer p is to 1, the fewer iterations are required. This explains why p = 0.1 is not superior in the early stage, in either test accuracy or training loss function value, in Figures 10 and 11.

![17_image_0.png](17_image_0.png)

Figure 10: Accuracy score for testing and loss function value for training versus epochs in training of ResNet18 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. See Figure 15 in Appendix F for full results.

![17_image_1.png](17_image_1.png)

Figure 11: Accuracy score for testing and loss function value for training versus epochs in training of WideResNet-28-10 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. See Figure 16 in Appendix F for full results.
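The per-epoch polynomial decay used in these experiments appears to match PyTorch's built-in PolynomialLR scheduler, which implements the update rule (7) with η_min = 0. A minimal sketch follows; the stand-in linear model replaces ResNet18, and the actual training loop is elided.

```python
import torch

model = torch.nn.Linear(10, 2)                      # stand-in for ResNet18
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.PolynomialLR(
    optimizer, total_iters=200, power=0.5)          # p = 0.5, T = 200 epochs

for epoch in range(200):
    # ... one epoch of training at the current learning rate ...
    scheduler.step()                                # decay once per epoch
```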
## 5 Conclusion

We defined a new family of nonconvex functions, the new σ-nice functions, for which the graduated optimization approach provably converges to a globally optimal solution. We also provided sufficient conditions for a nonconvex function to be a new σ-nice function and performed a convergence analysis of the graduated optimization algorithm for new σ-nice functions. We proved that SGD with mini-batch stochastic gradients has the effect of smoothing the objective function, and that the degree of smoothing is greater with larger learning rates and smaller batch sizes. This shows theoretically that the weak smoothing provided by large batch sizes makes it easy to fall into sharp local minima; that using a decaying learning rate and/or an increasing batch size is an implicit graduated optimization, which is beneficial in that it avoids local solutions; and that the optimal learning rate scheduling rule is a gradual schedule with a decreasing rate, such as polynomial decay with small power. Based on these findings, we proposed a new graduated optimization algorithm that uses a decaying learning rate and increasing batch size, and we analyzed it. Finally, we conducted experiments whose results showed the superiority of our recommended framework for image classification tasks on CIFAR100 and ImageNet, and that polynomial decay with small power is an optimal decaying learning rate schedule.

## References

Eugene L. Allgower and Kurt Georg. Numerical continuation methods - an introduction, volume 13. Springer, 1990.

Pasquale Antonante, Vasileios Tzoumas, Heng Yang, and Luca Carlone. Outlier-robust estimation: Hardness, minimally tuned algorithms, and applications. IEEE Transactions on Robotics, 38(1):281–301, 2022.

Lukas Balles, Javier Romero, and Philipp Hennig. Coupling adaptive batch sizes with learning rates. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence, 2017.

Michael J. Black and Anand Rangarajan. On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. International Journal of Computer Vision, 19(1):57–91, 1996.

Andrew Blake and Andrew Zisserman. Visual Reconstruction. MIT Press, 1987.

Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.

Thomas Brox and Jitendra Malik. Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3):500–513, 2011.

Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127–155, 2012.

Olivier Chapelle and Mingrui Wu. Gradient descent optimization of smoothed information retrieval metrics. Information Retrieval, 13(3):216–235, 2010.

Olivier Chapelle, Mingmin Chi, and Alexander Zien. A continuation method for semi-supervised SVMs. In Proceedings of the 23rd International Conference on Machine Learning, volume 148, pp. 185–192, 2006.

Olivier Chapelle, Vikas Sindhwani, and S. Sathiya Keerthi. Optimization techniques for semi-supervised support vector machines. Journal of Machine Learning Research, 9:203–233, 2008.

Jinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu. Closing the generalization gap of adaptive gradient methods in training deep neural networks. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, volume 452, pp. 3267–3275, 2021.
Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. http://arxiv.org/abs/1706.05587, 2017.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.

Ting Chen. On the importance of noise scheduling for diffusion models. https://arxiv.org/abs/2301.10972, 2023.

Ting Chen, Lala Li, Saurabh Saxena, Geoffrey E. Hinton, and David J. Fleet. A generalist framework for panoptic segmentation of images and videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 909–919, 2023.

Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of Adam-type algorithms for non-convex optimization. Proceedings of the 7th International Conference on Learning Representations, 2019.

Hadi Daneshmand, Jonas Moritz Kohler, Aurélien Lucchi, and Thomas Hofmann. Escaping saddles with stochastic gradients. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 1163–1172, 2018.

Soham De, Abhay Kumar Yadav, David W. Jacobs, and Tom Goldstein. Automated inference with adaptive batches. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54, pp. 1504–1513, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.

John C. Duchi, Peter L. Bartlett, and Martin J. Wainwright. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization, 22(2):674–701, 2012.

Benjamin Fehrman, Benjamin Gess, and Arnulf Jentzen. Convergence rates for the stochastic gradient descent method for non-convex objective functions. Journal of Machine Learning Research, 21:1–48, 2020.

Michael P. Friedlander and Mark Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3), 2012.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points - online stochastic gradient for tensor decomposition. In Proceedings of the 28th Conference on Learning Theory, volume 40, pp. 797–842, 2015.

Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. In Proceedings of the 7th International Conference on Learning Representations, 2019.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. https://arxiv.org/abs/1706.02677, 2017.

Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 1225–1234, 2016.

Harshvardhan and Sebastian U. Stich. Escaping local minima with stochastic noise. In the 13th International OPT Workshop on Optimization for Machine Learning in NeurIPS 2021, 2021.

Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic non-convex problems. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp.
1833–1841, 2016.

Fengxiang He, Tongliang Liu, and Dacheng Tao. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 1141–1150, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proceedings of the 34th Conference on Neural Information Processing Systems, 2020.

Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.

Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1731–1741, 2017.

Andrew Hundt, Varun Jain, and Gregory D. Hager. sharpDARTS: faster and more accurate differentiable architecture search. https://arxiv.org/abs/1903.09900, 2019.

Hideaki Iiduka. Appropriate learning rates of adaptive learning rate optimization algorithms for training deep neural networks. IEEE Transactions on Cybernetics, 52(12):13250–13261, 2022.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 448–456, 2015.

Hidenori Iwakiri, Yuhang Wang, Shinji Ito, and Akiko Takeda. Single loop gaussian homotopy method for non-convex optimization. In Proceedings of the 36th Conference on Neural Information Processing Systems, 2022.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry P. Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence, pp. 876–885, 2018.

Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, and Michael I. Jordan. How to escape saddle points efficiently. http://arxiv.org/abs/1703.00887, 2017.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In Proceedings of the 5th International Conference on Learning Representations, 2017.

Jaechul Kim, Ce Liu, Fei Sha, and Kristen Grauman. Deformable spatial pyramid matching for fast dense correspondences. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2307–2314, 2013.

Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, pp. 1–15, 2015.

Robert Kleinberg, Yuanzhi Li, and Yang Yuan. An alternative view: When does SGD escape local minima? In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 2703–2712, 2018.

Alex Krizhevsky. Learning multiple layers of features from tiny images. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf, 2009.

Aitor Lewkowycz. How to decay your learning rate. https://arxiv.org/abs/2103.12682, 2021.

Da Li, Jingjing Wu, and Qingrun Zhang. Stochastic gradient descent in the viewpoint of graduated optimization. https://arxiv.org/abs/2308.06775, 2023.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets.
In Proceedings of the 31st Annual Conference on Neural Information Processing Systems, pp. 6391–6401, 2018.

Junhong Lin, Raffaello Camoriano, and Lorenzo Rosasco. Generalization properties and implicit regularization for multiple passes SGM. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 2340–2348, 2016.

Songtao Liu, Di Huang, and Yunhong Wang. Receptive field block net for accurate and fast object detection. In Proceedings of the 15th European Conference on Computer Vision, volume 11215, pp. 404–419, 2018.

Wei Liu, Andrew Rabinovich, and Alexander C. Berg. Parsenet: Looking wider to see better. http://arxiv.org/abs/1506.04579, 2015.

Nicolas Loizou, Sharan Vaswani, Issam Laradji, and Simon Lacoste-Julien. Stochastic polyak step-size for SGD: An adaptive learning rate for fast convergence. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130, 2021.

Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In Proceedings of the 5th International Conference on Learning Representations, 2017.

Jun Lu. Gradient descent, stochastic optimization, and other tales. https://arxiv.org/abs/2205.00832, 2022.

Hossein Mobahi and John W. Fisher III. A theoretical analysis of optimization by gaussian continuation. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pp. 1205–1211, 2015a.

Hossein Mobahi and John W. Fisher III. On the link between gaussian homotopy continuation and convex envelopes. In Proceedings of the 10th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition, volume 8932, pp. 43–56, 2015b.

Wenlong Mou, Liwei Wang, Xiyu Zhai, and Kai Zheng. Generalization bounds of SGLD for non-convex learning: Two theoretical viewpoints. In Proceedings of the 31st Annual Conference on Learning Theory, volume 75, pp. 605–638, 2018.

Eric Moulines and Francis Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems, volume 24, 2011.

Deanna Needell, Rachel A. Ward, and Nathan Srebro. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems, volume 27, pp. 1017–1025, 2014.

Mila Nikolova, Michael K. Ng, and Chi-Pan Tam. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Transactions on Image Processing, 19(12), 2010.

Liangzu Peng, Christian Kümmerle, and René Vidal. On the convergence of IRLS and its variants in outlier-robust estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17808–17818, 2023.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.

Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22:400–407, 1951.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.

Kenneth Rose, Eitan Gurewitz, and Geoffrey Fox.
A deterministic annealing approach to clustering. Pattern Recognition Letters, 11(9):589–594, 1990.

Kevin Scaman and Cedric Malherbe. Robustness analysis of non-convex stochastic gradient descent using biased expectations. In Proceedings of the 34th Conference on Neural Information Processing Systems, volume 33, pp. 16377–16387, 2020.

Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski. Lectures on Stochastic Programming - Modeling and Theory. MOS-SIAM Series on Optimization. SIAM, 2009.

Vikas Sindhwani, S. Sathiya Keerthi, and Olivier Chapelle. Deterministic annealing for semi-supervised kernel machines. In Proceedings of the 23rd International Conference on Machine Learning, volume 148, pp. 841–848, 2006.

Noah A. Smith and Jason Eisner. Annealing techniques for unsupervised statistical language learning. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pp. 486–493, 2004.

Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V. Le. Don't decay the learning rate, increase the batch size. In Proceedings of the 6th International Conference on Learning Representations, 2018.

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 2256–2265, 2015.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In Proceedings of the 9th International Conference on Learning Representations, 2021a.

Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 11895–11907, 2019.

Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In Proceedings of the 34th Conference on Neural Information Processing Systems, 2020.

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In Proceedings of the 9th International Conference on Learning Representations, 2021b.

Deqing Sun, Stefan Roth, and Michael J. Black. Secrets of optical flow estimation and their principles. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2432–2439, 2010.

Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Number 47. Cambridge University Press, 2018.

Andrew P. Witkin, Demetri Terzopoulos, and Michael Kass. Signal matching through scale space. International Journal of Computer Vision, 1(2):133–144, 1987.

Yuting Wu, Daniel J. Holland, Mick D. Mantle, Andrew Gordon Wilson, Sebastian Nowozin, Andrew Blake, and Lynn F. Gladden. A Bayesian method to quantifying chemical composition using NMR: application to porous media systems. In Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), pp. 2515–2519, 2014.

Zhijun Wu. The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation. SIAM Journal on Optimization, 6(3):748–768, 1996.

Zeke Xie, Issei Sato, and Masashi Sugiyama. A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima. In Proceedings of the 9th International Conference on Learning Representations, 2021.

Heng Yang, Pasquale Antonante, Vasileios Tzoumas, and Luca Carlone.
Graduated non-convexity for robust spatial perception: From non-minimal solvers to global outlier rejection. IEEE Robotics and Automation Letters, 5(2):1127–1134, 2020.

Kaichao You, Mingsheng Long, Jianmin Wang, and Michael I. Jordan. How does learning rate decay help modern neural networks? https://arxiv.org/abs/1908.01878, 2019.

Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In Proceedings of the 8th International Conference on Learning Representations, 2020.

A. L. Yuille. Energy functions for early vision and analog networks. Biological Cybernetics, 61(2):115–123, 1989.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference, 2016.

Manzil Zaheer, Sashank J. Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive methods for nonconvex optimization. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, volume 31, 2018.

Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, 2020.

Yuchen Zhang, Percy Liang, and Moses Charikar. A hitting time analysis of stochastic gradient langevin dynamics. In Proceedings of the 30th Conference on Learning Theory, volume 65, pp. 1980–2022, 2017.

Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 6230–6239, 2017.

Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. 12th Annual Workshop on Optimization for Machine Learning, 2020.

Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of adam and rmsprop. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11119–11127, 2019.

## A Derivation Of Equation (1)

Let y_t be the parameter updated by gradient descent (GD) and x_{t+1} be the parameter updated by SGD at time t, i.e.,

$$y_t:=x_t-\eta\nabla f(x_t),\qquad x_{t+1}:=x_t-\eta\nabla f_{S_t}(x_t)=x_t-\eta\left(\nabla f(x_t)+\omega_t\right).$$

Then, we have

$$x_{t+1}:=x_t-\eta\nabla f_{S_t}(x_t)=(y_t+\eta\nabla f(x_t))-\eta\nabla f_{S_t}(x_t)=y_t-\eta\omega_t,\tag{10}$$

from ω_t := ∇f_{S_t}(x_t) − ∇f(x_t). Hence,

$$y_{t+1}=x_{t+1}-\eta\nabla f(x_{t+1})=y_t-\eta\omega_t-\eta\nabla f(y_t-\eta\omega_t).$$

By taking the expectation with respect to ω_t on both sides, we have, from E_{ω_t}[ω_t] = 0,

$$\mathbb{E}_{\omega_t}[y_{t+1}]=\mathbb{E}_{\omega_t}[y_t]-\eta\nabla\mathbb{E}_{\omega_t}\left[f(y_t-\eta\omega_t)\right],$$

where we have used E_{ω_t}[∇f(y_t − ηω_t)] = ∇E_{ω_t}[f(y_t − ηω_t)], which holds by the Lipschitz continuity and differentiability of f (Shapiro et al., 2009, Theorem 7.49).
In addition, from (10) and E_{ω_t}[ω_t] = 0, we obtain

$$\mathbb{E}_{\omega_t}\left[x_{t+1}\right]=y_t.$$

Therefore, on average, the parameter x_{t+1} of the function f arrived at by SGD coincides with the parameter y_t of the smoothed function f̂(y_t) := E_{ω_t}[f(y_t − ηω_t)] arrived at by GD.

## B Proofs Of The Lemmas In Section 2.2

## B.1 Proof Of Lemma 2.1

Proof. (A3)(ii) and (A4) guarantee that

$$\begin{aligned}\mathbb{E}_{\xi_t}\left[\left\|\nabla f_{S_t}(x_t)-\nabla f(x_t)\right\|^2\right]&=\mathbb{E}_{\xi_t}\left[\left\|\frac{1}{b}\sum_{i=1}^{b}\mathsf{G}_{\xi_{t,i}}(x_t)-\nabla f(x_t)\right\|^2\right]\\&=\mathbb{E}_{\xi_t}\left[\left\|\frac{1}{b}\sum_{i=1}^{b}\mathsf{G}_{\xi_{t,i}}(x_t)-\frac{1}{b}\sum_{i=1}^{b}\nabla f(x_t)\right\|^2\right]\\&=\frac{1}{b^2}\mathbb{E}_{\xi_t}\left[\left\|\sum_{i=1}^{b}\left(\mathsf{G}_{\xi_{t,i}}(x_t)-\nabla f(x_t)\right)\right\|^2\right]\\&=\frac{1}{b^2}\mathbb{E}_{\xi_t}\left[\sum_{i=1}^{b}\left\|\mathsf{G}_{\xi_{t,i}}(x_t)-\nabla f(x_t)\right\|^2\right]\\&\leq\frac{C^2}{b}.\end{aligned}$$

This completes the proof.

## B.2 Proof Of Lemma 2.2

Proof. From Definition 2.1 and (A1), we have, for all x, y ∈ R^d,

$$\begin{aligned}\left\|\nabla\hat{f}_{\delta}(x)-\nabla\hat{f}_{\delta}(y)\right\|&=\left\|\nabla\mathbb{E}_u\left[f(x-\delta u)\right]-\nabla\mathbb{E}_u\left[f(y-\delta u)\right]\right\|\\&=\left\|\mathbb{E}_u\left[\nabla f(x-\delta u)\right]-\mathbb{E}_u\left[\nabla f(y-\delta u)\right]\right\|\\&=\left\|\mathbb{E}_u\left[\nabla f(x-\delta u)-\nabla f(y-\delta u)\right]\right\|\\&\leq\mathbb{E}_u\left[\left\|\nabla f(x-\delta u)-\nabla f(y-\delta u)\right\|\right]\\&\leq\mathbb{E}_u\left[L_g\left\|(x-\delta u)-(y-\delta u)\right\|\right]\\&=L_g\|x-y\|.\end{aligned}$$

This completes the proof.

## B.3 Proof Of Lemma 2.3

Proof. From Definition 2.1 and (A2), we have, for all x, y ∈ R^d,

$$\begin{aligned}\left|\hat{f}_{\delta}(x)-\hat{f}_{\delta}(y)\right|&=\left|\mathbb{E}_u\left[f(x-\delta u)\right]-\mathbb{E}_u\left[f(y-\delta u)\right]\right|\\&=\left|\mathbb{E}_u\left[f(x-\delta u)-f(y-\delta u)\right]\right|\\&\leq\mathbb{E}_u\left[\left|f(x-\delta u)-f(y-\delta u)\right|\right]\\&\leq\mathbb{E}_u\left[L_f\left\|(x-\delta u)-(y-\delta u)\right\|\right]\\&=L_f\|x-y\|.\end{aligned}$$

This completes the proof.

## B.4 Proof Of Lemma 2.4

Proof. From Definition 2.1 and (A2), we have, for all x ∈ R^d,

$$\begin{aligned}\left|\hat{f}_{\delta}(x)-f(x)\right|&=\left|\mathbb{E}_u\left[f(x-\delta u)\right]-f(x)\right|\\&=\left|\mathbb{E}_u\left[f(x-\delta u)-f(x)\right]\right|\\&\leq\mathbb{E}_u\left[\left|f(x-\delta u)-f(x)\right|\right]\\&\leq\mathbb{E}_u\left[L_f\left\|(x-\delta u)-x\right\|\right]\\&=\mathbb{E}_u\left[L_f|\delta|\|u\|\right]\\&\leq|\delta|L_f,\end{aligned}$$

where we have used ∥u∥ ≤ 1. This completes the proof.

## C Lemmas Used In The Proofs Of The Theorems

Lemma C.1. *Suppose that F: R^d → R is σ-strongly convex and x̂_{t+1} := x̂_t − η_tg_t. Then, for all t ∈ N,*

$$F(\hat{x}_t)-F(x^\star)\leq\frac{1-\sigma\eta_t}{2\eta_t}X_t-\frac{1}{2\eta_t}X_{t+1}+\frac{\eta_t}{2}\|g_t\|^2,$$

*where g_t := ∇F(x̂_t), X_t := ∥x̂_t − x⋆∥², and x⋆ is the global minimizer of F.*

Proof. Let t ∈ N.
The definition of x̂_{t+1} guarantees that

$$\left\|\hat{x}_{t+1}-x^\star\right\|^2=\left\|(\hat{x}_t-\eta_tg_t)-x^\star\right\|^2=\left\|\hat{x}_t-x^\star\right\|^2-2\eta_t\langle\hat{x}_t-x^\star,g_t\rangle+\eta_t^2\|g_t\|^2.$$

From the σ-strong convexity of F,

$$\|\hat{x}_{t+1}-x^\star\|^2\leq\|\hat{x}_t-x^\star\|^2+2\eta_t\left(F(x^\star)-F(\hat{x}_t)-\frac{\sigma}{2}\|\hat{x}_t-x^\star\|^2\right)+\eta_t^2\|g_t\|^2.$$

Hence,

$$F(\hat{x}_t)-F(x^\star)\leq\frac{1-\sigma\eta_t}{2\eta_t}\|\hat{x}_t-x^\star\|^2-\frac{1}{2\eta_t}\|\hat{x}_{t+1}-x^\star\|^2+\frac{\eta_t}{2}\|g_t\|^2.$$

This completes the proof.

Lemma C.2. *Suppose that F: R^d → R is L_g-smooth and x̂_{t+1} := x̂_t − η_tg_t. Then, for all t ∈ N,*

$$\eta_t\left(1-\frac{L_g\eta_t}{2}\right)\|\nabla F(\hat{x}_t)\|^2\leq F(\hat{x}_t)-F(\hat{x}_{t+1}),$$

*where g_t := ∇F(x̂_t).*

Proof. From the L_g-smoothness of F and the definition of x̂_{t+1}, we have, for all t ∈ N,

$$\begin{aligned}F(\hat{x}_{t+1})&\leq F(\hat{x}_t)+\langle\nabla F(\hat{x}_t),\hat{x}_{t+1}-\hat{x}_t\rangle+\frac{L_g}{2}\|\hat{x}_{t+1}-\hat{x}_t\|^2\\&=F(\hat{x}_t)-\eta_t\langle\nabla F(\hat{x}_t),g_t\rangle+\frac{L_g\eta_t^2}{2}\|g_t\|^2\\&\leq F(\hat{x}_t)-\eta_t\left(1-\frac{L_g\eta_t}{2}\right)\|\nabla F(\hat{x}_t)\|^2.\end{aligned}$$

Therefore, we have

$$\eta_t\left(1-\frac{L_g\eta_t}{2}\right)\|\nabla F(\hat{x}_t)\|^2\leq F(\hat{x}_t)-F(\hat{x}_{t+1}).$$

This completes the proof.

Lemma C.3. *Suppose that F: R^d → R is L_g-smooth, x̂_{t+1} := x̂_t − η_tg_t, and η_t := η < 2/L_g. Then,*

$$\frac{1}{T}\sum_{t=1}^{T}\|g_t\|^2\leq\frac{2\left(F(\hat{x}_1)-F(x^\star)\right)}{\eta\left(2-L_g\eta\right)T},$$

*where g_t := ∇F(x̂_t) and x⋆ is the global minimizer of F.*

Proof. According to Lemma C.2, we have

$$\eta\left(1-\frac{L_g\eta}{2}\right)\|\nabla F(\hat{x}_t)\|^2\leq F(\hat{x}_t)-F(\hat{x}_{t+1}).$$

Summing over t, we find that

$$\eta\left(1-\frac{L_g\eta}{2}\right)\frac{1}{T}\sum_{t=1}^{T}\|\nabla F(\hat{x}_t)\|^2\leq\frac{F(\hat{x}_1)-F(\hat{x}_{T+1})}{T}.$$

Hence, from η < 2/L_g and F(x̂_{T+1}) ≥ F(x⋆),

$$\frac{1}{T}\sum_{t=1}^{T}\|g_t\|^2\leq\frac{2\left(F(\hat{x}_1)-F(x^\star)\right)}{\eta\left(2-L_g\eta\right)T}.$$

This completes the proof.

Lemma C.4. *Suppose that F: R^d → R is L_g-smooth, x̂_{t+1} := x̂_t − η_tg_t, and η_t := 2/(σt). Then, for all t ∈ N,*

$$\|g_t\|^2\leq B_2,$$

*where g_t := ∇F(x̂_t) and B₂ ≥ 0 is a nonnegative constant.*

Proof. According to Lemma C.2, we have

$$\eta_t\left(1-\frac{L_g\eta_t}{2}\right)\|\nabla F(\hat{x}_t)\|^2\leq F(\hat{x}_t)-F(\hat{x}_{t+1}).$$
Summing over t from t = t₀ to t = T, we have

$$\sum_{t=t_0}^{T}\eta_t\left(1-\frac{L_g\eta_t}{2}\right)\|\nabla F(\hat{x}_t)\|^2\leq F(\hat{x}_{t_0})-F(\hat{x}_T),$$

where t₀ satisfies

$$\forall t\geq t_0\colon\ \eta_t\leq\eta_{t_0}<\frac{2}{L_g}.$$

Hence, we obtain

$$\left(1-\frac{L_g\eta_{t_0}}{2}\right)\sum_{t=t_0}^{T}\eta_t\|\nabla F(\hat{x}_t)\|^2\leq\underbrace{F(\hat{x}_{t_0})-F(x^\star)}_{=:B}<\infty.$$

Then,

$$\sum_{t=t_0}^{T}\eta_t\|\nabla F(\hat{x}_t)\|^2\leq\frac{2B}{2-L_g\eta_{t_0}}<\infty.$$

Therefore,

$$\sum_{t=1}^{T}\eta_t\|\nabla F(\hat{x}_t)\|^2\leq\underbrace{\frac{2B}{2-L_g\eta_{t_0}}+\sum_{t=1}^{t_0-1}\eta_t\|\nabla F(\hat{x}_t)\|^2}_{=:\hat{B}}<\infty.\tag{11}$$

From η_T ≤ η_t := 2/(σt),

$$\frac{2}{\sigma T}\sum_{t=1}^{T}\|\nabla F(\hat{x}_t)\|^2=\eta_T\sum_{t=1}^{T}\|\nabla F(\hat{x}_t)\|^2\leq\sum_{t=1}^{T}\eta_t\|\nabla F(\hat{x}_t)\|^2\leq\hat{B}.\tag{12}$$

Then, if ∥∇F(x̂_t)∥² is unbounded, we have

$$\forall\epsilon>0,\ \exists t_1\in\mathbb{N},\ \forall t\in\mathbb{N}\colon\ t\geq t_1\Rightarrow\|\nabla F(\hat{x}_t)\|^2\geq\epsilon.$$

Therefore, from (12),

$$\hat{B}\geq\frac{2}{\sigma T}\sum_{t=1}^{T}\|\nabla F(\hat{x}_t)\|^2\geq\frac{2}{\sigma T}\sum_{t=t_1}^{T}\|\nabla F(\hat{x}_t)\|^2\geq\frac{2}{\sigma T}(T-t_1+1)\epsilon,$$

where we have used ∑_{t=1}^{t₁−1}∥∇F(x̂_t)∥² ≥ 0. Hence, letting ϵ := σB̂, we have, for all T ≥ t₁,

$$2\left(1-\frac{t_1-1}{T}\right)\hat{B}\leq\hat{B}.$$

Taking the limit T → ∞, we have 2B̂ ≤ B̂. This is a contradiction. Hence, ∥∇F(x̂_t)∥² is bounded. Let its upper bound be B₂. This completes the proof.

## D Proofs Of The Theorems And Propositions

## D.1 Proof Of Theorem 3.1

Proof.
Lemma C.1, Lemma C.4, and η_t := 2/(σt) guarantee that

$$F(\hat{x}_t)-F(x^\star)\leq\frac{1-\sigma\eta_t}{2\eta_t}X_t-\frac{1}{2\eta_t}X_{t+1}+\frac{\eta_t}{2}\|g_t\|^2\leq\frac{1}{2\eta_t}\left\{(1-\sigma\eta_t)X_t-X_{t+1}\right\}+\frac{\eta_tB_2}{2}=\frac{\sigma(t-2)}{4}X_t-\frac{\sigma t}{4}X_{t+1}+\frac{B_2}{\sigma t}.$$

Therefore, we have

$$(t-1)\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{\sigma(t-2)(t-1)}{4}X_t-\frac{\sigma(t-1)t}{4}X_{t+1}+\frac{B_2(t-1)}{\sigma t}.$$

Summing over t, we find that

$$\sum_{t=1}^{T}(t-1)\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{\sigma\cdot(-1)\cdot0}{4}X_1-\frac{\sigma(T-1)T}{4}X_{T+1}+\frac{B_2}{\sigma}\sum_{t=1}^{T}\frac{t-1}{t}\leq\frac{B_2(T-1)}{\sigma}.$$

Then, we have

$$\frac{2}{(T-1)T}\sum_{t=1}^{T}(t-1)\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{2B_2}{\sigma T}.$$

From the convexity of F,

$$F\left(\frac{2}{T(T-1)}\sum_{t=1}^{T}(t-1)\hat{x}_t\right)\leq\frac{2}{T(T-1)}\sum_{t=1}^{T}(t-1)F(\hat{x}_t).$$

Hence,

$$F\left(\frac{2}{T(T-1)}\sum_{t=1}^{T}(t-1)\hat{x}_t\right)-F(x^\star)\leq\frac{2B_2}{\sigma T}=\mathcal{O}\left(\frac{1}{T}\right).$$

In addition, since the minimum value is smaller than the mean, we have

$$\min_{t\in[T]}\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{2B_2}{\sigma T}=\mathcal{O}\left(\frac{1}{T}\right).$$

This completes the proof.

## D.2 Proof Of Theorem 3.2

The following proof uses the proof technique of Hazan et al. (2016).

Proof. From M^p := 1/(α₀ϵ), δ₁ := 2L_f/(σr), and γ_m := (M − m)^p/{M − (m − 1)}^p, we have

$$\delta_M=\delta_1\left(\gamma_1\gamma_2\cdots\gamma_{M-1}\right)=\delta_1\cdot\frac{(M-1)^p}{M^p}\cdot\frac{(M-2)^p}{(M-1)^p}\cdot\frac{(M-3)^p}{(M-2)^p}\cdots\frac{1}{2^p}=\delta_1\cdot\frac{1}{M^p}=\delta_1\alpha_0\epsilon=\frac{2L_f\alpha_0\epsilon}{\sigma r}.$$

According to Theorem 3.1,

$$\hat{f}_{\delta_M}\left(x_{M+1}\right)-\hat{f}_{\delta_M}\left(x^\star_{\delta_M}\right)\leq\epsilon_M=\sigma\delta_M^2=\left(\frac{2L_f\alpha_0\epsilon}{\sqrt{\sigma}r}\right)^2.$$

From Lemmas 2.3 and 2.4,

$$\begin{aligned}f(x_{M+2})-f(x^\star)&=\left\{f(x_{M+2})-\hat{f}_{\delta_M}(x_{M+2})\right\}+\left\{\hat{f}_{\delta_M}(x^\star)-f(x^\star)\right\}+\left\{\hat{f}_{\delta_M}(x_{M+2})-\hat{f}_{\delta_M}(x^\star)\right\}\\&\leq\left\{f(x_{M+2})-\hat{f}_{\delta_M}(x_{M+2})\right\}+\left\{\hat{f}_{\delta_M}(x^\star)-f(x^\star)\right\}+\left\{\hat{f}_{\delta_M}(x_{M+2})-\hat{f}_{\delta_M}(x^\star_{\delta_M})\right\}\\&\leq\delta_ML_f+\delta_ML_f+\left\{\hat{f}_{\delta_M}(x_{M+2})-\hat{f}_{\delta_M}(x^\star_{\delta_M})\right\}\\&=2\delta_ML_f+\left\{\hat{f}_{\delta_M}(x_{M+2})-\hat{f}_{\delta_M}(x_{M+1})\right\}+\left\{\hat{f}_{\delta_M}(x_{M+1})-\hat{f}_{\delta_M}(x^\star_{\delta_M})\right\}\\&\leq2\delta_ML_f+L_f\|x_{M+2}-x_{M+1}\|+\left\{\hat{f}_{\delta_M}(x_{M+1})-\hat{f}_{\delta_M}(x^\star_{\delta_M})\right\}.\end{aligned}$$

Then, we have

$$f(x_{M+2})-f(x^\star)\leq2\delta_ML_f+2L_fd_M\delta_M+\epsilon_M\leq2\delta_ML_f+2L_f\bar{d}\delta_M+\epsilon_M=2L_f\delta_M\left(1+\bar{d}\right)+\epsilon_M,$$

where we have used ∥x_{M+2} − x_{M+1}∥ ≤ 2d_Mδ_M, since x_{M+2}, x_{M+1} ∈ N(x⋆; d_Mδ_M), and d_M ≤ d̄ < +∞.
Therefore,

$$f(x_{M+2})-f(x^\star)\leq2L_f\left(1+\bar{d}\right)\frac{2L_f\alpha_0\epsilon}{\sigma r}+\left(\frac{2L_f\alpha_0\epsilon}{\sqrt{\sigma}r}\right)^2=\frac{4L_f^2\left(1+\bar{d}\right)\alpha_0\epsilon}{\sigma r}+\left(\frac{2L_f\alpha_0\epsilon}{\sqrt{\sigma}r}\right)^2\leq\epsilon,$$

where we have used α₀ := min{σr/(8L_f²(1 + d̄)), √σr/(2√2L_g)}. Let T_total be the total number of queries made by Algorithm 1; then,

$$T_{\mathrm{total}}=\sum_{m=1}^{M+1}\frac{2B_2}{\sigma\epsilon_m}=\sum_{m=1}^{M+1}\frac{2B_2}{\sigma^2\delta_m^2}=\frac{2B_2}{\sigma^2\delta_1^2}\left(1+\frac{1}{\gamma_1^2}+\frac{1}{\gamma_1^2\gamma_2^2}+\cdots+\frac{2}{\gamma_1^2\gamma_2^2\cdots\gamma_{M-1}^2}\right).$$

From γ₁γ₂⋯γ_{M−1} = 1/M^p,

$$T_{\mathrm{total}}=\frac{2B_2}{\sigma^2\delta_1^2}\,M^{2p}\left(\frac{1}{M^{2p}}+\frac{1}{(M-1)^{2p}}+\frac{1}{(M-2)^{2p}}+\cdots+\frac{1}{2^{2p}}+2\right)\leq\frac{2B_2}{\sigma^2\delta_1^2}\,M^{2p}(M+1)=\frac{2B_2}{\sigma^2\delta_1^2}M^{2p+1}+\frac{2B_2}{\sigma^2\delta_1^2}M^{2p}=\frac{2B_2}{\sigma^2\delta_1^2}\left(\frac{1}{\alpha_0\epsilon}\right)^{\frac{1}{p}+2}+\frac{2B_2}{\sigma^2\delta_1^2}\left(\frac{1}{\alpha_0\epsilon}\right)^{2}=\mathcal{O}\left(\frac{1}{\epsilon^{\frac{1}{p}+2}}\right).$$

This completes the proof.

## D.3 Proof Of Theorem 3.3

Proof. Lemma C.1 guarantees that

$$F(\hat{x}_t)-F(x^\star)\leq\frac{1-\sigma\eta_t}{2\eta_t}X_t-\frac{1}{2\eta_t}X_{t+1}+\frac{\eta_t}{2}\|g_t\|^2=\frac{1-\sigma\eta}{2\eta}\left(X_t-X_{t+1}\right)-\frac{\sigma}{2}X_{t+1}+\frac{\eta}{2}\|g_t\|^2.$$

From η < min{1/σ, 2/L_g} and Lemma C.3, by summing over t we find that

$$\frac{1}{T}\sum_{t=1}^{T}\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{1-\sigma\eta}{2\eta T}\left(X_1-X_{T+1}\right)-\frac{\sigma}{2T}\sum_{t=1}^{T}X_{t+1}+\frac{\eta}{2T}\sum_{t=1}^{T}\|g_t\|^2\leq\underbrace{\frac{(1-\sigma\eta)X_1}{2\eta}}_{=:H_1}\frac{1}{T}+\underbrace{\frac{F(\hat{x}_1)-F(x^\star)}{2-L_g\eta}}_{=:H_2}\frac{1}{T}=\underbrace{(H_1+H_2)}_{=:H_3}\frac{1}{T}=\frac{H_3}{T},$$

where H₃ is a nonnegative constant. From the convexity of F,

$$F\left(\frac{1}{T}\sum_{t=1}^{T}\hat{x}_t\right)\leq\frac{1}{T}\sum_{t=1}^{T}F(\hat{x}_t).$$

Hence,

$$F\left(\frac{1}{T}\sum_{t=1}^{T}\hat{x}_t\right)-F(x^\star)\leq\frac{H_3}{T}=\mathcal{O}\left(\frac{1}{T}\right).$$

In addition, since the minimum value is smaller than the mean, we have

$$\min_{t\in[T]}\left(F(\hat{x}_t)-F(x^\star)\right)\leq\frac{H_3}{T}=\mathcal{O}\left(\frac{1}{T}\right).$$

This completes the proof.

## D.4 Proof Of Theorem 3.4

The following proof uses the proof technique of Hazan et al. (2016).

Proof. From δ_{m+1} := η_{m+1}C/√b_{m+1} and κ_m/√λ_m = γ_m, we have
$$\delta_{m+1}:=\frac{\eta_{m+1}C}{\sqrt{b_{m+1}}}=\frac{\kappa_m\eta_mC}{\sqrt{\lambda_m}\sqrt{b_m}}=\frac{\kappa_m}{\sqrt{\lambda_m}}\delta_m=\gamma_m\delta_m.$$

Therefore, from M^p := 1/(α₀ϵ), δ₁ := η₁C/√b₁, and γ_m := (M − m)^p/{M − (m − 1)}^p,

$$\delta_M=\delta_1\left(\gamma_1\gamma_2\cdots\gamma_{M-1}\right)=\delta_1\cdot\frac{(M-1)^p}{M^p}\cdot\frac{(M-2)^p}{(M-1)^p}\cdot\frac{(M-3)^p}{(M-2)^p}\cdots\frac{1}{2^p}=\delta_1\cdot\frac{1}{M^p}=\delta_1\alpha_0\epsilon=\frac{\eta_1C\alpha_0\epsilon}{\sqrt{b_1}}.$$

According to Theorem 3.3,

$$\hat{f}_{\delta_M}\left(x_{M+1}\right)-\hat{f}_{\delta_M}\left(x^\star_{\delta_M}\right)\leq\epsilon_M=\sigma^2\delta_M^2=\left(\frac{\sigma\eta_1C\alpha_0\epsilon}{\sqrt{b_1}}\right)^2.$$

As in the proof of Theorem 3.2, we have

$$f(x_{M+2})-f(x^\star)\leq2L_f\delta_M\left(1+\bar{d}\right)+\epsilon_M.$$

Therefore,

$$f(x_{M+2})-f(x^\star)\leq2L_f\left(1+\bar{d}\right)\frac{\eta_1C\alpha_0\epsilon}{\sqrt{b_1}}+\left(\frac{\sigma\eta_1C\alpha_0\epsilon}{\sqrt{b_1}}\right)^2\leq\epsilon,$$

where we have used α₀ := min{√b₁/(4L_fη₁C(1 + d̄)), √b₁/(√2ση₁C)}. Let T_total be the total number of queries made by Algorithm 3; then,

$$T_{\mathrm{total}}=\sum_{m=1}^{M+1}\frac{H_3}{\epsilon_m}=\sum_{m=1}^{M+1}\frac{H_3}{\sigma^2\delta_m^2}=\frac{H_3}{\sigma^2\delta_1^2}+\frac{H_3}{\sigma^2\delta_2^2}+\cdots+\frac{2H_3}{\sigma^2\delta_M^2}\leq\frac{H_3(M+1)}{\sigma^2\delta_M^2}=\frac{H_3(M+1)}{\sigma^2\delta_1^2\frac{1}{M^{2p}}}=\frac{H_3M^{2p}(M+1)}{\sigma^2\delta_1^2}=\frac{H_3M^{2p+1}}{\sigma^2\delta_1^2}+\frac{H_3M^{2p}}{\sigma^2\delta_1^2}.$$

From M^p := 1/(α₀ϵ),

$$T_{\mathrm{total}}=\frac{H_3}{\sigma^2\delta_1^2}\left(\frac{1}{\alpha_0\epsilon}\right)^{\frac{1}{p}+2}+\frac{H_3}{\sigma^2\delta_1^2}\left(\frac{1}{\alpha_0\epsilon}\right)^{2}\leq\frac{2H_3}{\sigma^2\delta_1^2}\left(\frac{1}{\alpha_0\epsilon}\right)^{\frac{1}{p}+2}=\frac{2H_3}{\sigma^2\delta_1^2(\alpha_0\epsilon)^{\frac{1}{p}+2}}\leq\frac{2H_3}{\sigma^2\delta_1^2(\alpha_0\epsilon)^{\frac{1}{p}}}=\mathcal{O}\left(\frac{1}{\epsilon^{\frac{1}{p}}}\right).$$

This completes the proof.

## D.5 Proof Of Proposition 3.1

Proof. For all x ∈ N(x⋆; a_mr)\{x⋆} (a_m ≥ 0) and all u_m ∼ B(0; 1), the quadratic equation

$$\delta_m^2-2\left\langle x^\star-x,u_m\right\rangle\delta_m+(a_m^2-1)r^2=0\tag{13}$$

for δ_m ∈ R has solutions with probability p(a_m) when a_m > 1 and always has solutions when 0 ≤ a_m ≤ 1. Let us derive p(a_m). When a_m > 1, the condition for the discriminant of (13) to be nonnegative is as follows:

$$-1\leq\cos\theta\leq-\frac{r\sqrt{a_m^2-1}}{\|x^\star-x\|\|u_m\|},\ \ \text{or}\ \ \frac{r\sqrt{a_m^2-1}}{\|x^\star-x\|\|u_m\|}\leq\cos\theta\leq1,\tag{14}$$

where θ is the angle between u_m ∼ B(0; 1) and x⋆ − x. Note that cos θ can be positive or negative because δ_m ∈ R. Since the random variable u_m is sampled uniformly from B(0; 1), the probability that u_m satisfies (14) is less than

$$p(a_m):=\frac{\arccos\left(\frac{r\sqrt{a_m^2-1}}{\|x^\star-x\|\|u_m\|}\right)}{\pi}$$

for δ_m > 0 and δ_m < 0, respectively.
Now let us consider the solutions of the quadratic inequality

$$\|u_m\|^2\delta_m^2-2\left\langle x^\star-x,u_m\right\rangle\delta_m+(a_m^2-1)r^2\leq0\tag{15}$$

for δ_m ∈ R.

(i) When a_m > 1, (15) has one or two solutions with probability p(a_m) or less. When r√(a_m² − 1)/(∥x⋆ − x∥∥u_m∥) ≤ cos θ ≤ 1, let the larger solution be D⁺_m > 0 and the smaller one be D⁻_m > 0; we can express these solutions as follows:

$$\begin{aligned}D_m^+(x,u_m)&:=\|x^\star-x\|\|u_m\|\cos\theta+\sqrt{\|x^\star-x\|^2\|u_m\|^2\cos^2\theta-r^2(a_m^2-1)},\\D_m^-(x,u_m)&:=\|x^\star-x\|\|u_m\|\cos\theta-\sqrt{\|x^\star-x\|^2\|u_m\|^2\cos^2\theta-r^2(a_m^2-1)}.\end{aligned}$$

Moreover, we define δ⁺_m and δ⁻_m as follows:

$$\delta_m^+:=\sup_{x\in N(x^\star;a_mr)\setminus\{x^\star\}}\mathbb{E}_{u_m\sim B(0;1)}\left[D_m^+(x,u_m)\right],\qquad\delta_m^-:=\sup_{x\in N(x^\star;a_mr)\setminus\{x^\star\}}\mathbb{E}_{u_m\sim B(0;1)}\left[D_m^-(x,u_m)\right].$$

Thus, the solution D_m(x, u_m) to (15) satisfies 0 < D⁻_m(x, u_m) < D_m(x, u_m) < D⁺_m(x, u_m) when r√(a_m² − 1)/(∥x⋆ − x∥∥u_m∥) ≤ cos θ ≤ 1, and −D⁺_m(x, u_m) < D_m(x, u_m) < −D⁻_m(x, u_m) < 0 when −1 ≤ cos θ ≤ −r√(a_m² − 1)/(∥x⋆ − x∥∥u_m∥). Hence, let

$$\delta_m:=\sup_{x\in N(x^\star;a_mr)\setminus\{x^\star\}}\mathbb{E}_{u_m\sim B(0;1)}\left[D_m(x,u_m)\right];$$

then we have |δ⁻_m| ≤ |δ_m| ≤ |δ⁺_m|.

(ii) When a_m ≤ 1, (15) always has one or two solutions. The two solutions are defined as in (i). Then, the solution to (15) satisfies D⁻_m(x, u_m) < D_m(x, u_m) < D⁺_m(x, u_m) when r√(a_m² − 1)/(∥x⋆ − x∥∥u_m∥) ≤ cos θ ≤ 1, and

$$-D_m^+(x,u_m)<D_m(x,u_m)<-D_m^-(x,u_m)\ \ (-D_m^+(x,u_m)<0,\ -D_m^-(x,u_m)>0)$$

when −1 ≤ cos θ ≤ −r√(a_m² − 1)/(∥x⋆ − x∥∥u_m∥). Hence, we have

$$|\delta_m|\leq|\delta_m^-|.$$

From (i) and (ii), (15) may have a solution for all a_m > 0 when |δ_m| = |δ⁻_m|. Therefore, suppose |δ_m| = |δ⁻_m|; then,

$$\begin{aligned}r^2&\geq a_m^2r^2-2\delta_m\langle x^\star-x,u_m\rangle+|\delta_m|^2\\&\geq a_m^2r^2-2\delta_m\langle x^\star-x,u_m\rangle+|\delta_m|^2\|u_m\|^2\\&>\|x-x^\star\|^2-2\delta_m\langle x^\star-x,u_m\rangle+|\delta_m|^2\|u_m\|^2\\&=\|x+\delta_mu_m-x^\star\|^2.\end{aligned}$$

This means that x + δ_mu_m ∈ N(x⋆; r) (δ_m ∈ R), where u_m ∼ B(0; 1). Hence, for all x, y ∈ N(x⋆; a_mr) ⊂ R^d (a_m > √2),

$$\begin{aligned}\left\langle\nabla\hat{f}_{\delta_m}(x)-\nabla\hat{f}_{\delta_m}(y),x-y\right\rangle&=\langle\nabla\mathbb{E}_u[f(x+\delta_mu)]-\nabla\mathbb{E}_u[f(y+\delta_mu)],x-y\rangle\\&=\langle\mathbb{E}_u[\nabla f(x+\delta_mu)]-\mathbb{E}_u[\nabla f(y+\delta_mu)],x-y\rangle\\&=\langle\mathbb{E}_u[\nabla f(x+\delta_mu)-\nabla f(y+\delta_mu)],x-y\rangle\\&=\mathbb{E}_u[\langle\nabla f(x+\delta_mu)-\nabla f(y+\delta_mu),x-y\rangle]\\&\geq\mathbb{E}_u[\sigma\|(x+\delta_mu)-(y+\delta_mu)\|^2]\\&=\mathbb{E}_u[\sigma\|x-y\|^2]\\&=\sigma\|x-y\|^2.\end{aligned}$$

This means that, if |δ_m| = |δ⁻_m| holds, then f̂_{δ_m} is σ-strongly convex on N(x⋆; a_mr) (a_m > √2) when f is σ-strongly convex on B(x⋆; r). Also, if we define d_m := a_mr/|δ⁻_m|, then d_m|δ_m| ≤ a_mr holds; i.e., f̂_{δ_m} is σ-strongly convex on N(x⋆; d_m|δ_m|). This completes the proof.
Remark: In the end, $|\delta_m|$ must be equal to $|\delta_m^-|$. We can show that $|\delta_m^-|$ is non-zero. Suppose that (14) holds; then

$$\begin{aligned}
|\delta_m^-| &:= \sup_{\mathbf{x}\in N(\mathbf{x}^\star;a_mr)\setminus\{\mathbf{x}^\star\}} \mathbb{E}_{\mathbf{u}_m\sim B(\mathbf{0};1)}\left[\|\mathbf{x}^\star-\mathbf{x}\|\|\mathbf{u}_m\|\cos\theta - \sqrt{\|\mathbf{x}^\star-\mathbf{x}\|^2\|\mathbf{u}_m\|^2\cos^2\theta - r^2(a_m^2-1)}\right]\\
&> \sup_{\mathbf{x}\in N(\mathbf{x}^\star;a_mr)\setminus\{\mathbf{x}^\star\}} \mathbb{E}_{\mathbf{u}_m\sim B(\mathbf{0};1)}\left[r\sqrt{a_m^2-1} - \sqrt{a_m^2r^2 - r^2(a_m^2-1)}\right]\\
&= \left|r\sqrt{a_m^2-1} - r\right| = r\left(\sqrt{a_m^2-1} - 1\right) > 0,
\end{aligned}$$

where we have used $\|\mathbf{x}^\star - \mathbf{x}\| < a_mr$, $\|\mathbf{u}_m\| \le 1$, $\frac{r\sqrt{a_m^2-1}}{\|\mathbf{x}^\star-\mathbf{x}\|\|\mathbf{u}_m\|} \le |\cos\theta| \le 1$, and $a_m > \sqrt{2}$.

## D.6 Proof Of Proposition 3.2

Proof. From Proposition 3.1, for all $|\delta_m| = |\delta_m^-|$, $\hat f_{\delta_m}$ is $\sigma$-strongly convex, i.e.,

$$\sigma\left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\|^2 \le \left\langle\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}, \nabla\hat f_{\delta_m}(\mathbf{x}^\star) - \nabla\hat f_{\delta_m}(\mathbf{x}^\star_{\delta_{m-1}})\right\rangle \le \left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\|\left\|\nabla\hat f_{\delta_m}(\mathbf{x}^\star) - \nabla\hat f_{\delta_m}(\mathbf{x}^\star_{\delta_{m-1}})\right\|,$$

where we have used the Cauchy-Schwarz inequality and $\nabla\hat f_{\delta_m}(\mathbf{x}^\star_{\delta_m}) = 0$. Accordingly, we have

$$\left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\|\left(\sigma\left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\| - \left\|\nabla\hat f_{\delta_m}(\mathbf{x}^\star) - \nabla\hat f_{\delta_m}(\mathbf{x}^\star_{\delta_{m-1}})\right\|\right) \le 0.$$

Because $\left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\| \ge 0$ and Lemma 2.2,

$$\left\|\mathbf{x}^\star - \mathbf{x}^\star_{\delta_{m-1}}\right\| \le \frac{\left\|\nabla\hat f_{\delta_m}(\mathbf{x}^\star) - \nabla\hat f_{\delta_m}(\mathbf{x}^\star_{\delta_{m-1}})\right\|}{\sigma} \le \frac{\left\|\mathbf{x}^\star_{\delta_{m-1}} - \mathbf{x}^\star\right\|}{\sigma}.$$

Hence,

$$\begin{aligned}
\left\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star_{\delta_{m+1}}\right\| &\le \left\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\right\| + \left\|\mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star\right\|\\
&\le \frac{\mathbb{I}_\alpha}{\sigma}\left(\left\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\right\| + \left\|\mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star\right\|\right)\\
&\le \frac{2\mathbb{I}_\alpha}{\sigma}\max\left\{\left\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\right\|, \left\|\mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star\right\|\right\}\\
&\le |\delta_m|(1-\gamma_m) = |\delta_m| - |\delta_{m+1}|.
\end{aligned}$$

This completes the proof.

## D.7 Proof Of Proposition 3.3

Proof. By using the triangle inequality, we have, for all $m \in [M]$,

$$\begin{aligned}
\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\| &= \|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star_{\delta_{m+1}} + \mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star\|\\
&\le \|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star_{\delta_{m+1}}\| + \|\mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star\|\\
&\le \|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star_{\delta_{m+1}}\| + \|\mathbf{x}^\star_{\delta_{m+1}} - \mathbf{x}^\star_{\delta_{m+2}}\| + \cdots + \|\mathbf{x}^\star_{\delta_M} - \mathbf{x}^\star_{\delta_{M+1}}\| + \|\mathbf{x}^\star_{\delta_{M+1}} - \mathbf{x}^\star\|\\
&\le (|\delta_m| - |\delta_{m+1}|) + (|\delta_{m+1}| - |\delta_{m+2}|) + \cdots + (|\delta_M| - |\delta_{M+1}|) + 0 = |\delta_m|,
\end{aligned}\tag{16}$$

where we have used $\mathbf{x}^\star_{\delta_{M+1}} = \mathbf{x}^\star$ and $\delta_{M+1} = 0$. Therefore, from $d_m > 1$, we have

$$\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\| < d_m|\delta_m|.$$

This completes the proof of Proposition 3.3(i). In addition, if $\gamma_m \in \left(\frac{1}{d_{m+1}}, 1\right)$ holds, then from (16), $\left\|\mathbf{x}^\star_{\delta_m} - \mathbf{x}^\star\right\| \le |\delta_m| < d_{m+1}|\delta_{m+1}|$. This completes the proof of Proposition 3.3(ii).

## E **Additional Experimental Results**

For the sake of fairness, we provide here a version of Figures 7-9 with the number of gradient queries on the horizontal axis. Since $b$ stochastic gradients are computed per step, the number of gradient queries is $Tb$, where $T$ is the number of steps and $b$ is the batch size.
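To make this bookkeeping concrete, here is a minimal sketch (the batch-size schedule is the one quoted in the Figure 12 caption below; the 40-epoch stage length and the 50,000-sample CIFAR100 training set are assumptions on our part) that counts gradient queries under an increasing batch-size schedule:

```python
import math

N = 50_000                               # assumed CIFAR100 training-set size
batch_schedule = [16, 32, 64, 128, 256]  # method 3 in Figure 12
epochs_per_stage = 40                    # assumed stage length

total_queries = 0
for b in batch_schedule:
    steps_per_epoch = math.ceil(N / b)   # T per epoch at batch size b
    total_queries += epochs_per_stage * steps_per_epoch * b  # queries = T * b

print(f"total gradient queries over {len(batch_schedule) * epochs_per_stage} "
      f"epochs: {total_queries:,}")
```

Note that, under this convention, the number of queries per epoch is approximately $N$ regardless of the batch size, which is consistent with the query-based plots below looking almost the same as the epoch-based ones.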
## F Full Experimental Results For Section 4.2

![37_image_0.png](37_image_0.png)

Figure 12: Accuracy score for testing and loss function value for training versus the number of gradient queries in training ResNet18 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and the batch size were fixed at 0.1 and 128, respectively. In method 2, the learning rate decreased every 40 epochs as $\left[0.1, \frac{1}{10\sqrt{2}}, 0.05, \frac{1}{20\sqrt{2}}, 0.025\right]$ and the batch size was fixed at 128. In method 3, the learning rate was fixed at 0.1, and the batch size was increased as [16, 32, 64, 128, 256]. In method 4, the learning rate was decreased as $\left[0.1, \frac{\sqrt{3}}{20}, 0.075, \frac{3\sqrt{3}}{80}, 0.05625\right]$ and the batch size was increased as [32, 48, 72, 108, 162]. This graph shows almost the same results as Figure 7.

![38_image_0.png](38_image_0.png)

Figure 13: Accuracy score for testing and loss function value for training versus the number of gradient queries in training WideResNet-28-10 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and batch size were fixed at 0.1 and 128, respectively. In method 2, the learning rate was decreased every 40 epochs as $\left[0.1, \frac{1}{10\sqrt{2}}, 0.05, \frac{1}{20\sqrt{2}}, 0.025\right]$ and the batch size was fixed at 128. In method 3, the learning rate was fixed at 0.1, and the batch size increased as [8, 16, 32, 64, 128]. In method 4, the learning rate decreased as $\left[0.1, \frac{\sqrt{3}}{20}, 0.075, \frac{3\sqrt{3}}{80}, 0.05625\right]$ and the batch size increased as [8, 12, 18, 27, 40]. This graph shows almost the same results as Figure 8.

![39_image_0.png](39_image_0.png)

Figure 14: Accuracy score for testing and loss function value for training versus the number of gradient queries in training ResNet34 on the ImageNet dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. In method 1, the learning rate and batch size were fixed at 0.1 and 256, respectively. In method 2, the learning rate was decreased every 40 epochs as $\left[0.1, \frac{1}{10\sqrt{2}}, 0.05, \frac{1}{20\sqrt{2}}, 0.025\right]$ and the batch size was fixed at 256. In method 3, the learning rate was fixed at 0.1, and the batch size was increased as [32, 64, 128, 256, 512]. In method 4, the learning rate was decreased as $\left[0.1, \frac{\sqrt{3}}{20}, 0.075, \frac{3\sqrt{3}}{80}, 0.05625\right]$ and the batch size was increased as [32, 48, 72, 108, 162]. This graph shows almost the same results as Figure 9.

![40_image_0.png](40_image_0.png)

Figure 15: Accuracy score for testing and loss function value for training versus epochs in training of ResNet18 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. This is the full version of Figure 10.

![40_image_1.png](40_image_1.png)

Figure 16: Accuracy score for testing and loss function value for training versus epochs in training of WideResNet-28-10 on the CIFAR100 dataset. The solid line represents the mean value, and the shaded area represents the maximum and minimum over three runs. This is the full version of Figure 11.
Review 1: Summary: The paper attempts to offer insights and guidelines for training by presenting an analysis of graduated optimization combined with SGD. Specifically, it introduces a relaxed notion of a $\sigma$-nice function and demonstrates the convergence of the graduated optimization method with specific step sizes and noise scheduling. The paper recommends increasing the batch size and decreasing the learning rate.

Strengths and Weaknesses: Strength: The paper aims at the fundamental question of selecting the learning rate and batch size during training. Some discoveries of the paper coincide with practical observations.

Weaknesses:
1. The sufficient condition for this new $\sigma$-nice function is not clear to me. I didn't understand the role of the random variable $u_m \sim B(0,1)$ in Proposition 3.1. The definition of $|\delta^-|$ includes $||u_m||$, $x$, and the angle between $u_m$ and $x^* - x$. Does $||u_m||$ denote the norm of the random variable or the norm of a realization of this random variable? If it is the norm of a random variable, which norm is applied here? If it is a realization, I found it hard to understand why being "$\sigma$"-nice would depend on such a realization. Also, do different $x$ give different $|\delta^-|$ values?
2. In Section 3.3, Equation (8), how does the noise $\omega_t$ reduce to the uniform distribution over a ball $u_t$? $\omega_t$ is the noise due to the minibatch, so this noise can follow different kinds of distributions. Also, I don't think the arguments in Sections 3.3.1 and 3.3.2 are rigorous.
3. In Algorithm 1, it is required to compute the stochastic gradient of $\hat f_{\delta}$. It should be explained how to compute that.
4. In Section A, it should be mentioned when the expectation and gradient can be exchanged.

Requested Changes:
1. In some sections of the paper, it is unclear whether the discussion pertains to SGD alone or to graduated optimization combined with SGD. This should be clarified.
2. The clarity of the writing in the paper needs significant improvement. For instance, in Definition 3.1, it is ambiguous whether "$\left(\gamma_m \in\left(\frac{1}{d_{m+1}}, 1\right)\right)$" refers to "for all" $\gamma$ or "there exists one". Also, the inputs for Algorithm 4 and Algorithm 2 differ; why is Algorithm 4 termed GD rather than SGD? The authors should verify the clarity of every theorem, proposition, and lemma.
3. I suggest that the authors also provide the total complexity of Algorithm 1, integrating Theorem 3.2 and Theorem 3.1, as well as for Algorithm 4.

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: This submission connects SGD to graduated optimization, a global optimization technique that works by iteratively minimizing smoothed versions of the objective function. The authors argue that the noise in SGD smooths the objective function and increasing the batch-size or decreasing the step-size reduces the degree of smoothing, allowing for control of the graduated optimizer. They establish a class of functions called "new $\sigma$-nice" for which they prove graduated optimization with SGD has global convergence to a minimizer of the objective. The authors then show that a polynomial decay schedule for the step-size maximizes the rate of this global convergence compared to other widely-used schedules. The paper concludes with experiments on CIFAR-100 and ImageNet.
Strengths and Weaknesses: This paper attempts to answer a widely studied question in machine learning: why does the noise in SGD seem to help find "good" solutions when training non-convex models? The authors use the graduated optimization framework to argue that SGD is actually performing global optimization. While I appreciate this fresh viewpoint, the paper is held back by the confused writing and several important theoretical issues. As it is, I cannot endorse accepting this work. To summarize: **Strengths**: - Graduated optimization provides a fresh and interesting perspective on SGD in the non-convex setting. - Experiments are provided for large-scale problems, including ImageNet. **Weaknesses**: - The global convergence theorems for graduated optimization with SGD implicitly require that the SGD iterates remain in a bounded neighborhood of the minimizer. Since this may not hold, I am skeptical that Theorems 3.2 and 3.4 are correct. - The writing is very difficult to understand. Moreover, since the various definitions and theorems are interconnected, it is difficult to track down the meanings and values of different variables. - The argument that SGD smooths the objective is not rigorously developed and distributional assumptions on the gradient noise (see Eq. 8) are not justified. - The experiments are given only for a single restart and plot epochs/iterations rather than the total number of gradient queries, leading to a biased presentation. ## Additional Details Theorem 3.2: - I believe the proof of this result has a serious issue. The proof of Theorem 3.1 requires strong convexity of $f$ to hold between $\hat x_t$ and the minimizer $x^*$ for every $t$ (see Lemma D.1). This is true if $f$ is strongly-convex globally, such as assumed in Theorem 3.1. However, Theorem 3.2 uses Theorem 3.1 to control sub-optimality when minimizing $\hat f_{\delta_M}$, which is only strongly convex over a neighbourhood according to the definition of a new $\sigma$-nice function. Since there is no guarantee that the iterates of SGD remain in this neighbourhood, it is not possible to apply Theorem 3.1 to minimization of $\hat f_{\delta_M}$. - I am also very skeptical that you can guarantee the iterates generated by SGD remain in $\mathcal{N}(x^*, d_m \delta_m)$ with probability $1$. For general finite-sum functions, it is straightforward to argue that there is a sequence of stochastic gradients for which the iterates must exit any neighbourhood of the minimizer. Equation 1: - It took me a bit of effort to see this equivalence. Perhaps you should mention that $y_{t+1}$ is a GD step from $x_{t+1}$, so that it is a random variable that depends on $w_t$. - It does not immediately follow that $\mathbb{E}[\nabla f(y_t - \eta \omega_t)] = \nabla \mathbb{E}[f(y_t - \eta \omega_t)]$. Additional work is required to exchange the order of expectation and differentiation since this requires exchanging integrals and limits. As a result, I don't think your argument that SGD is smoothing $f$ is fully rigorous. Section 3.3: - How do you argue that the gradient noise $\omega_t = \frac{C}{\sqrt{b}} u_t$, where $u_t$ is uniformly distributed on the norm ball? I see no reason why the gradient noise should be distributed so nicely, especially when $f$ is a finite sum function. - The appearance of $u_m$ in Prop. 3.1 makes it seem like this distributional assumption on the noise is being used here as well? This needs to be clearly explained in the text.
Experiments: - It's critical to average results for stochastic optimizers over multiple random restarts. Convergence guarantees for SGD and related methods are typically in-expectation and results for a single run may be misleading. I suggest running $5$ trials and plotting both the median and interquartile range. - I think it would be more fair to plot the total number of gradient queries rather than epochs or iterations. Plotting epochs biases the results towards small batch-sizes, while plotting iterations biases the results towards large batch-sizes. - It seems like increasing the batch-size generally outperforms the hybrid approach in terms of total number of iterations on three out of four problems. Why do you claim that the hybrid approach is superior? ### Minor Issues: Page 1: - "and its variants, such as Adam kingma & Ba (2015)" --- Use `\citep` here, rather than `citet`. Also, "kingma" should be capitalized since it's a proper noun. Page 2: - "Equation (1) indicates that SGD smooths the function Kleinberg et al. (2018)" --- again, you want `citep` here instead of `\citet`. Table 1: - This is a very awkward way to introduce the notation for the paper. Since notation is not introduced when first used, I am forced to scroll back through the paper to reference this table whenever new notation is used. Definition 3.1: - Calling the condition "new $\sigma$-nice" is quite confusing. For example, in Proposition 3.2 when you say "...sufficient condition for $f$ to be a new $\sigma$-nice function", it's not clear if $f$ is a new function which is $\sigma$-nice or if it is a new $\sigma$-nice function. I suggest coming up with a better name. - In part (ii), do you require $\hat f_{\delta_m}$ to be strongly convex on $N(x^*, d_m \delta_m)$ or on $N(x^*_{\delta_m}, d_m \delta_m)$? It seems quite strong to require the smoothed function to be strongly convex on a neighbourhood of $x^*$ instead of the smoothed minimizer $x_{\delta_m}^\star$ as required by the original $\sigma$-nice definition. Proposition 3.1: - It looks like $\delta^{-}$ is defined as a function of $x \in N(x^*, a_m r)$. Is $\delta^{-}$ supposed to be the supremum over this set? Otherwise, I cannot see how it is a fixed value. The same goes for $u_m \sim B(0,1)$. - What does it mean to say "if $d_m := a_m r / |\delta^{-}|$ holds"? This is the definition of $d_m$, so the expression always holds. - What is the definition of $a_m$? This isn't written anywhere. Proposition 3.2: - Isn't the definition of new $\sigma$-nice supposed to hold for any choice of noise level $\delta_m \in \mathbb{R}$? That is, the definition of new $\sigma$-nice is given independently of the smoothing regime --- it's only a condition on $f$ --- so now it is strange to prove $f$ is new $\sigma$-nice for some specific noise levels. - It's not at all clear what functions/noise levels satisfy Eq. (4). Theorem 3.1: - This is a standard result in the optimization literature and should not be presented as if new. See Rakhlin et al. [1] and the references therein. All Figures: - The font-sizes and line-widths are too small to be legible when printed. As a rule, the font-size of all figure text should be at least as large as the font-size of the text in the main paper. ### References: [1] Rakhlin, Alexander, Ohad Shamir, and Karthik Sridharan. "Making gradient descent optimal for strongly convex stochastic optimization." Proceedings of the 29th International Conference on Machine Learning. 2012.
Requested Changes: - The most important change is to fix the use of local strong convexity in Theorem 3.2 and 3.4. I'm not sure how the authors can argue that the iterates remain in the region where strong convexity holds almost surely, but something like this needs to hold to fix their proof. - The experiments should be repeated for several random restarts to account for stochasticity. - I strongly suggest improving the writing in Section 3.1. In particular, the large number of unexplained constants in Def. 3.1 and Prop. 3.1 (e.g. $d_{m}$, $\gamma_m$, $u_m$, $a_m$, $\delta^{-}$, etc.) should be addressed. Broader Impact Concerns: None. ================================================== Review 3: Summary: The manuscript studies the problem of training neural networks with graduated optimization. The authors generalize the notion of $\sigma$-nice functions by Hazan et al. (2016) and suggest approximating stochastic gradient noise as a uniform distribution over the ball. Then, this work proposes running SGD over a smoothed function with a gradually decreased smoothing level to find a global optimum supported by convergence analysis. The suggested framework is used to justify better learning rates and mini-batch size schedules, and it has also been validated via experiments on several computer vision problems and models. Strengths and Weaknesses: ## Strengths - The idea appears novel and not very well explored. The considered problem is important and deserves attention. The proposed approach may be of interest to the broader machine learning and optimization community from both theoretical and practical perspectives. - Theoretical results are generally accurately stated and seem to be correct. - The paper is well organized. The writing is detailed and mostly clear. - Experiments include several practical problems of different scales with various models. The authors also include an implementation hosted via an anonymous repository. ## Weaknesses 1. __Background__. While the literature overview describes many papers on graduated optimization and various applications of this technique, it seems to completely miss the relevant works on smoothing (especially in optimization [1]) and analysis of SGD. This makes it harder to judge the theoretical contributions as they are not put into the proper context of optimization literature. Moreover, it creates an impression that the authors are unfamiliar with some classical results, such as SGD convergence [3, 4]. The choice of references on optimization methods in the second sentence of the 1.1 Background subsection is questionable. - It is unclear why the authors present convergence theory for SGD on smooth and strongly convex functions instead of reusing state-of-the-art results. The suggested analysis appears too long and suboptimal (e.g., constant $B_2$ can be arbitrarily large). That is why I recommend referring to Section 5.5 (Bibliographic notes) of a recent manuscript [2] for a historical overview and more recent techniques. - In addition, please compare the obtained theoretical and experimental results to a recent work [5] that performed extensive experiments on learning rate scheduling. 2. __Theoretical results__. - I would like the authors to include more details about the optimization properties of the smoothed function (such as smoothness, Lipschitzness, and convexity). - I believe a more detailed discussion about Definition 2.2 is needed as it is not very common in optimization literature.
For instance, how restrictive is it, and what is the intuition behind it? - Lower bound for $a_m > \sqrt{2}$ (in Proposition 3.1) seems strange as it means that the smoothed function $\hat{f_{\delta_m}}$ can be strongly convex on an arbitrarily large ball around $x^\star$. At the same time, the "strong convexity area" of the same function is restricted to the ball of radius $d_m |\delta_m|$. Could you please clarify this point? - In addition, $d_m$ is inversely proportional to $|\delta^-|$, which can be equal to zero when $x = x^\star$. - Theorem 3.2 needs discussion about its implications, proof technique, and innovations upon previous analysis. - Assumption 2.1 (ii) is pretty restrictive as it does not hold even for subsampling of quadratic functions over an unbounded domain. While it is suitable for other situations, in the case of subsampling, it is less nuanced than existing approaches [6]. I find this issue significant as mini-batching is one of the focuses of this work. - Moreover, Assumption 2.1 (ii) serves as the basis for the central insights in Subsection 3.3. It is unclear how the transition from a general stochastic gradient error vector to a uniform distribution over a ball happens. If it is an approximation, it needs to be stated explicitly. Because of this issue, the paper gives the impression of suggesting a theory artificially tailored to explain some practical effects rather than developing a framework rigorously from first principles. One of the main statements

> We proved that SGD with a mini-batch stochastic gradient has the effect of smoothing the function, and the degree of smoothing is greater with larger learning rates and smaller batch sizes.

appears to be not supported enough. However, I am happy to change my opinion in case of misunderstanding. 3. __Writing__. Notation is heavy in some places due to the many constants introduced. I had a hard time understanding some of the technical details. It would be nice if it could be simplified for the main part of the manuscript. In some cases, the paper seems to create confusion between optimization complexity and generalization performance (e.g., in subsection 1.3.2). 4. __Experimental results__. - What does "the number of parameter updates" mean on the horizontal axis of the plots? Do the figures take into account the fact that the per-iteration cost of methods with variable batch size changes? - There are too many curves in Figure 10, which makes it hard to distinguish the results. ### Minor - I would like to ask the authors to comment on the parameters required to run the proposed graduated optimization. - I am also curious about how the constant $M$ is selected. - > This noise is worthless in convex optimization

  This claim is too bold or simply inaccurate.
___
[1] Duchi, John C., Peter L. Bartlett, and Martin J. Wainwright. "Randomized smoothing for stochastic optimization." SIAM Journal on Optimization 22.2 (2012): 674-701. [2] Garrigos, Guillaume, and Robert M. Gower. "Handbook of convergence theorems for (stochastic) gradient methods." arXiv preprint arXiv:2301.11235 (2023). [3] Moulines, Eric, and Francis Bach. "Non-asymptotic analysis of stochastic approximation algorithms for machine learning." Advances in neural information processing systems 24 (2011). [4] Needell, Deanna, Rachel Ward, and Nati Srebro. "Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm." Advances in neural information processing systems 27 (2014). [5] Defazio, Aaron, et al. "When, Why and How Much?
Adaptive Learning Rate Scheduling by Refinement." arXiv preprint arXiv:2310.07831 (2023). [6] Gower, Robert Mansel, et al. "SGD: General analysis and improved rates." International conference on machine learning. PMLR, 2019. Requested Changes: My concerns are listed in the Weaknesses part of the review. Next, I give a short overview of my major recommendations. 1. Revision of the related works section described in Weakness 1 by putting the findings into a modern context. 2. Adjustment of some claims based on Assumption 2.1. Broader Impact Concerns: No concerns about this submission. ================================================== Metareview: Recommendation: Reject Comment: This submission received a thorough review process thanks to the three reviewers who provided insightful comments. The reviewers also all engaged with the authors during the discussion period for further clarifications. Please see also the comments in the section "Claims and Evidence". Unfortunately, the applicability of both explicit and implicit graduated optimization approaches proposed in the paper and the relationship between SGD and Algorithm 3 are not clear. To summarize the final recommendations of reviewers: - All the reviewers and myself agree that the paper underwent a significant change between the initial version and the final version. In particular, the reviewers found many technical issues and inaccuracies, which are acknowledged by the authors. Since these caused critical parts of the paper to change significantly (in addition, some of the reviewers are still not convinced by the correctness of some parts, for example, Propositions 3.1 and 3.2), the reviewers do not have sufficient trust in the current version of the manuscript. - All the reviewers and myself do not find the discussions and explanations about the connection between SGD and the algorithms that are proposed and analyzed in the paper clear or convincing enough. - **Reviewer 9HcQ** argues that the main theme of the algorithm in establishing the connection between SGD and graduated optimization is undermined due to the lack of a clear connection and the fact that the new Algorithm 1&2 are not implementable in practice, due to Alg 2 being GD. I also agree with this view. - **Reviewer Kbek** still has concerns with the accuracy of Propositions 3.1 and 3.2; the reviewer already provided pointers for their argument. I agree with the reviewer's concerns. - In summary, all the reviewers and myself agree on the importance of the topic of the paper. We also agree that the approach the authors take is interesting. However, unfortunately the current version is not suitable for acceptance. I recommend that the authors incorporate the constructive feedback from the reviewers to improve readability and clarity of the submission. ==================================================
# Identifying Latent Distances With Finslerian Geometry

Alison Pouplin *alpu@dtu.dk* Technical University of Denmark

David Eklund david.eklund@ri.se Research Institutes of Sweden

Carl Henrik Ek *che29@cam.ac.uk* University of Cambridge

Søren Hauberg sohau@dtu.dk Technical University of Denmark

Reviewed on OpenReview: *https: // openreview. net/ forum? id= Q2Gi0TUAdS*

## Abstract

Riemannian geometry provides us with powerful tools to explore the latent space of generative models while preserving the underlying structure of the data. The latent space can be equipped with a Riemannian metric, pulled back from the data manifold. With this metric, we can systematically navigate the space relying on geodesics defined as the shortest curves between two points. Generative models are often stochastic, causing the data space, the Riemannian metric, and the geodesics, to be stochastic as well. Stochastic objects are at best impractical, and at worst impossible, to manipulate. A common solution is to approximate the stochastic pullback metric by its expectation. But the geodesics derived from this expected Riemannian metric do not correspond to the expected length-minimising curves. In this work, we propose another metric whose geodesics explicitly minimise the expected length of the pullback metric. We show this metric defines a Finsler metric, and we compare it with the expected Riemannian metric. In high dimensions, we prove that both metrics converge to each other at a rate of $\mathcal{O}\left(\frac{1}{D}\right)$. This convergence implies that the established expected Riemannian metric is an accurate approximation of the theoretically more grounded Finsler metric. This provides justification for using the expected Riemannian metric for practical implementations.

## 1 Introduction

Generative models provide a convenient way to learn low-dimensional latent variables $z$ corresponding to data observations $x$ through a smooth function $f : \mathcal{Z} \subset \mathbb{R}^q \to \mathcal{X} \subset \mathbb{R}^D$, such that $x = f(z)$. Through this learnt manifold, one can generate new data or compare observations by interpolating or computing distances. However, doing so by using the Euclidean distance in the latent space is misleading (Hauberg, 2018a), because the latent variables are not statistically identifiable. If our observations are lying near a manifold (Fefferman et al., 2016), we want to equip our latent space with a metric that preserves distance measures on it. Figure 1 (left panel) illustrates the need for defining geometric-aware distances on manifolds. Distances on a manifold can be precisely defined using a norm, which is a mathematical function that exhibits several desirable properties such as non-negativity, homogeneity, and the triangle inequality. In particular, a norm can be induced by an inner product (i.e., a quadratic function) that associates each pair of tangent vectors with a scalar value. To derive the standard Riemannian interpretation of the latent space, we first compute the infinitesimal Euclidean norm according to the data space. Using the Taylor expansion, we have:

$$\left\|f(z + \Delta z) - f(z)\right\|_2^2 \approx \left\|f(z) + J(z)\Delta z - f(z)\right\|_2^2 = \Delta z^\top J(z)^\top J(z)\Delta z.$$

As a first approximation, the norm defined in the latent space locally preserves the Euclidean norm defined in the data space. The curvature of our data manifold is condensed in the Riemannian metric tensor $G_z = J(z)^\top J(z)$, which serves as a proxy to define the Riemannian metric $g_z : (u, v) \mapsto u^\top G_z v$.
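To make the pullback construction concrete, the following is a minimal sketch (the small decoder `f` is a stand-in for any smooth generative map; all names and sizes are illustrative) that computes $G_z = J(z)^\top J(z)$ with automatic differentiation:

```python
import torch
from torch.autograd.functional import jacobian

q, D = 2, 3  # latent and data dimensions (illustrative)
f = torch.nn.Sequential(torch.nn.Linear(q, 16), torch.nn.Tanh(), torch.nn.Linear(16, D))

def pullback_metric(z):
    J = jacobian(f, z)   # D x q Jacobian of the decoder at z
    return J.T @ J       # G_z = J(z)^T J(z)

z, u = torch.randn(q), torch.randn(q)
G = pullback_metric(z)
print("local squared norm u^T G_z u:", (u @ G @ u).item())
```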
In mathematical jargon, we say that the Riemannian manifold $(\mathcal{Z}, g)$ is obtained by pulling back the Euclidean metric through the map $f$. Riemannian geometry enables the exploration of the latent space in precise geometric terms, and quantities of interest such as the length, the energy or the volume can be directly derived from the pullback metric. These geometric quantities are, by construction, known to be invariant to reparametrizations of the latent space $\mathcal{Z}$, and are thus statistically identifiable (Hauberg, 2018a). While this geometric framework exclusively handles deterministic objects, generative models are often stochastic. The learnt map $f$, that mathematically describes those models, is stochastic too. For example, in the celebrated Gaussian Process Latent Variable Model (GPLVM) (Lawrence, 2003), this function is a Gaussian process. The pullback metric is then stochastic, and standard differential geometric constructions no longer apply. Navigating the latent manifolds through geodesics (length-minimising curves) becomes practically infeasible. Previous research tried to circumvent this problem by approximating the stochastic pullback metric with the expected value of the Riemannian metric tensor, but the derived length is an unintuitive quantity that does not correspond to the expected length:

$$\mathcal{L}(\gamma)\big|_{\mathbb{E}[G]} := \int\left\|\dot\gamma(t)\right\|_{\mathbb{E}[G]}dt \quad\neq\quad \mathbb{E}\left[\mathcal{L}(\gamma)\big|_{G}\right] := \int\mathbb{E}\left\|\dot\gamma(t)\right\|_{G}dt.$$

Instead of taking the expectation of the metric tensor, which serves as a surrogate to define a norm, we propose to take the expectation of the norm directly. In this paper, we compare our expected norm with the norm induced by the commonly used expected metric tensor. The main findings are:

1. The expected norm defines a Finsler metric. Finsler geometry is a generalisation of Riemannian geometry.
2. For Gaussian processes, the stochastic norm obtained through the pullback metric follows a non-central Nakagami distribution, so our Finsler metric has a closed-form expression.
3. In high dimensions, for Gaussian processes, our Finsler metric and the previously studied Riemannian metric converge to each other at a rate of $\mathcal{O}\left(\frac{1}{D}\right)$, with $D$ the dimension of the data space.

We conclude that the Riemannian metric serves as a good approximation of the Finsler metric, which is theoretically more grounded. It further justifies the use of the Riemannian metric in practice when exploring stochastic manifolds.

## 1.1 Outline Of The Paper

The paper explores the geometry of latent spaces learned by generative models, which encode a latent low-dimensional manifold that represents observed high-dimensional data. The latent manifold is denoted $\mathcal{Z} \subset \mathbb{R}^q$ and the data manifold is denoted $\mathcal{X} \subset \mathbb{R}^D$. Assuming the manifold hypothesis holds, we need to define an infinitesimal norm in the latent manifold to compute distances that respect the underlying geometry of the data. Such a norm can be constructed by pulling back the Euclidean distance through the smooth function that maps the latent manifold to the data manifold. This map, $f : \mathcal{Z} \to \mathcal{X}$, mathematically describes the decoder of a trained generative model. Those models being often stochastic, we consider $f$ to be a stochastic process. It means that the pullback metric tensor, $G = J^\top J$, and its induced norm, $\|\cdot\|_G : u \mapsto \sqrt{u^\top G u}$, are also stochastic.
In section 2, we mathematically define the notion of stochastic pullback metric and stochastic manifolds.

![2_image_0.png](2_image_0.png)

Figure 1: In this illustration, we show the 2-dimensional representation $(\theta, \phi)$ of the sphere parametrised in $\mathbb{R}^3$ with $f : (\theta, \phi) \mapsto \left(\cos(\theta)\sin(\phi), \sin(\theta)\sin(\phi), \cos(\phi)\right)$. In practice, we don't have access to well-parametrised manifolds, but instead those manifolds are being shaped by data points in $\mathbb{R}^D$. The data points are depicted as tiny dots for illustrative purposes. On the left figure, the map $f$ is deterministic, and on the right figure, we imagine that $f$ is stochastic. When we pull back, through $f$, a circle from the sphere to the plane, the circle is deformed to an ellipse. Those ellipses, called indicatrices, are the fingerprints of the metric and they showcase the deformation of the plane. Left figure: In blue, the Euclidean distance is represented by the straight line in the plane. It is not identifiable and it does not represent a geodesic on the sphere. In orange, the length-minimising curve obtained through the pullback metric of the mapping $f$ leads to a great circle on the sphere. Following the great circle is the fastest way to go from one point to another. Effectively, the pullback metric leads to a geodesic, contrary to the Euclidean metric. Notice that, in the plane, the orange geodesic also follows the direction of the indicatrices, since it is length-minimising. Right figure: We imagine that a generative model maps the latent space to the data space using a stochastic process $f = \{f_1, f_2, \ldots\}$. This leads to a stochastic pullback metric, stochastic indicatrices, and stochastic paths in the latent space.

To circumvent all the challenges posed by this stochastic component, a deterministic approximation of the norm is needed. It can be defined by taking the expectation of the metric tensor. This norm, that we will note $\|\cdot\|_R : u \mapsto \|u\|_{\mathbb{E}[G]}$, has been studied before by Tosi et al. (2014), and is explained in section 2.2. In this paper, we propose instead to directly take the expectation of the stochastic norm. This expected norm, noted $\|\cdot\|_F : u \mapsto \mathbb{E}\left[\|u\|_G\right]$, is introduced in section 2.4. The norm $\|\cdot\|_R$ is defined by a Riemannian metric, and we show that the norm $\|\cdot\|_F$ defines a Finsler metric. We explain the general difference between Finsler and Riemannian geometry in section 3. The aim of this paper is to compare those two norms. We first draw absolute bounds in section 3.2, and then relative bounds in section 3.3. We also investigate the relative difference of the norms when the dimension of the data space increases, in section 3.4. Finally, we perform some experiments in Section 4, that illustrate the Riemannian and Finsler norm in the same latent space.

## 1.2 Related Works

Riemannian Geometry For Machine Learning When navigating a learnt latent space, the geodesics obtained through the pullback metric not only follow geometrically coherent paths, they also remain invariant under different representations. While two runs of the same model will produce distinct latent manifolds, the geodesics connecting the same chosen points should have the same length. We say that the pullback metric effectively solved the identifiability problem (Hauberg, 2018a). This has led to the growing adoption of Riemannian geometry in machine learning applications. In robotics, Beik-Mohammadi et al. (2021) used the pullback metric from a variational autoencoder to safely navigate the space of motion patterns, and Scannell et al.
(2021) use geodesics under the expected metric in a GPLVM to control a quadrotor robot. In protein modelling, Detlefsen et al. (2022) showed that the derived geodesics on the space of beta-lactamase follow their phylogenetic tree structure. In game content generation, González-Duque et al. (2022) generated more coherent and reliable game levels by interpolating data along the geodesics. Jørgensen & Hauberg (2021) used the expected Riemannian metric from observations of pairwise distances through a GPLVM to build a probabilistic dimensionality reduction model.

## Finsler Geometry In Machine Learning

Our work crucially relies on Finslerian geometry, which has been well-studied mathematically, but has only seen very limited use in machine learning and statistics. We point to two notable exceptions, which are quite distinct from our work. Lopez et al. (2021) use symmetric spaces to represent graphs and endow these with a Finsler metric to capture dissimilarity structure in the observational data. Ratliff et al. (2021) discuss the role of differential geometry in motion planning for robotics. Along the way, they touch upon Finslerian geometry, but mostly as a neat tool to allow for generalizations. To the best of our knowledge, no prior work has investigated the links between stochastic and Finslerian geometry.

## Strategies To Deal With Stochastic Riemannian Geometry

Tosi et al. (2014) and Arvanitidis et al. (2018) introduced approximations of the pullback metric by taking the expectation of the metric tensor. In those two cases, the map $f$ is respectively a trained Gaussian process, or the decoder of a VAE. In this paper, the derivations only hold if $f$ is a smooth stochastic process (Definition 2.2), which is not the case1 of the VAEs, and hence, our results are not directly applicable to those models. In addition to the work of Tosi et al. (2014), a solution to circumvent the randomness of the metric tensor is to consider that the data follows a specific probability distribution. Instead of looking at the shortest path on the data manifold, Arvanitidis et al. (2021) borrow tools from information geometry and consider the straightest paths on the manifold whose elements are probability distributions.

## 2 Expectation On Random Manifolds

The metric pulled back by a stochastic mapping is, de facto, stochastic and endows a random manifold. Unfortunately, we are not yet equipped to derive geometric objects on a random manifold. Instead, we dodge this problem by seeking a deterministic approximation of this stochastic metric. As mentioned above, a common solution is to approximate such a metric by its expectation. In section 2.2, we study the expected Riemannian metric. The solution suggested by this paper is to approximate the expectation of the lengths instead of the random metric itself. In section 2.4, we show that this new metric is not Riemannian but Finslerian (Proposition 2.2), and it has a closed-form expression when the map $f$ is a Gaussian process (Proposition 2.3).

## 2.1 Random Riemannian Geometry

The pullback metric is defined as a Riemannian metric if and only if the mapping $f$ is an immersion, which is a differentiable function whose derivatives are injective everywhere on the manifold (Lee, 2013, Proposition 13.9). A manifold equipped with a Riemannian metric is called a Riemannian manifold.

Definition 2.1. The pullback of the Euclidean metric through the immersion $f : \mathcal{Z} \to \mathcal{X}$ is a **Riemannian metric**.
It is defined as the inner product $g_z : \mathcal{T}_z\mathcal{Z} \times \mathcal{T}_z\mathcal{Z} \to \mathbb{R}_+ : (u, v) \mapsto u^\top G v$, at a specific point $z$ in the manifold $\mathcal{Z}$. $u$ and $v$ are vectors lying in the tangent plane $\mathcal{T}_z\mathcal{Z}$ (i.e., the set of all tangent vectors) of the manifold. $G = J^\top J$, with $J$ the Jacobian of $f$.

Since a Riemannian metric is an inner product, it induces a norm: $\|\cdot\|_G$. We can then define the **curve length** and **curve energy** on a manifold: $L_G(\gamma) = \int_0^1 \left\|\dot\gamma(t)\right\|_G dt$ and $E_G(\gamma) = \frac{1}{2}\int_0^1 \left\|\dot\gamma(t)\right\|_G^2 dt$, with $\gamma$ a curve defined on $\mathcal{Z}$, and $\dot\gamma$ its derivative. A locally length-minimising curve between two connecting points is a **geodesic**. To obtain a geodesic, we can minimise the curve length, but in practice minimising the curve energy is more efficient. On the manifold, we also define a **volume measure** in order to integrate probability functions: for $U \subset \mathcal{Z}$, $\int_{f(U)} h(x)\, dx = \int_U h(f(z)) V_R(z)\, dz$, with $V_R(z) = \sqrt{\det G_z}$ the volume measure.

1The decoder of a VAE, while it decodes to a Gaussian, cannot be considered as a differentiable stochastic process. One reason is the independence of the probabilities of the data: $p(x|z) = \prod_{i=1}^n p(x_i|z_i)$. Let us assume the opposite: the decoder is a Gaussian process. The covariance of the Gaussian process would be a diagonal matrix because of the independence of the probabilities of the data. The covariance would correspond to a Dirac distribution: $\operatorname{cov}(x_i, x_j) = \delta_{ij}$. However, a stochastic process is differentiable only if the covariance is differentiable, which is not the case for the Dirac distribution.

In addition, we are considering the case where the immersion $f$ is a stochastic process. The outputs of our trained model, $x \in \mathcal{X}$, which represent our data, are random variables.

Definition 2.2. A **stochastic process** is a collection of random variables $\{X(t, \omega), t \in T\}$ indexed by an index set $T$ defined on a sample space $\Omega$, which represents the set of all possible outcomes. An outcome in $\Omega$ is denoted by $\omega$, and a realisation of the stochastic process is the sequence of $X(\cdot, \omega)$ that depends on the outcome $\omega$.

In this framework, our index set is our latent manifold $T = \mathcal{Z}$, and our sample space $\Omega$ is defined as the set of the model evaluations. For every point $z \in \mathcal{Z}$, every time we execute our model, the output $x = f(z)$ is a random variable following a specific distribution. When the data $x$ follow a Gaussian distribution, the stochastic process is called a **Gaussian process**. A GP-LVM (Lawrence, 2003) is a model that learns how to map the data from a latent space to a data space through a Gaussian process.

When $f$ is a stochastic immersion, the metric tensor becomes a random matrix. In this paper, we call a manifold equipped with the stochastic pullback metric a **random manifold**, noted $(\mathcal{Z}, g)$. As a consequence of the stochastic aspect of the metric, all the functionals are stochastic themselves, and they are no longer trivial to manipulate.

Definition 2.3. A **random Riemannian metric tensor** is a matrix-valued random field (i.e., a collection of matrix-valued random variables $\{G(z, \omega), z \in \mathcal{Z}\}$), whose realisation for a specific evaluation $\omega \in \Omega$ is a Riemannian metric tensor. A **random Riemannian metric** is a metric induced by a random Riemannian metric tensor: $g_z : \mathcal{T}_z\mathcal{Z} \times \mathcal{T}_z\mathcal{Z} \to \mathbb{R}_+ : (u, v) \mapsto u^\top G v$.

For the rest of the paper, the associated **stochastic norm** is noted $\|\cdot\|_G : \mathcal{T}_z\mathcal{Z} \to \mathbb{R}_+ : u \mapsto \sqrt{g_z(u, u)} = \sqrt{u^\top G u}$. If this stochastic norm is induced by $f$ defined as a Gaussian process, then $\|\cdot\|_G$ follows a non-central Nakagami distribution. This is explained in the proof of Proposition 2.3.
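To illustrate how a geodesic can be obtained by minimising the curve energy in practice, here is a minimal sketch (a toy deterministic immersion onto a paraboloid; every choice below is illustrative) that optimises the interior points of a discretised curve:

```python
import torch

def f(z):  # toy immersion f : R^2 -> R^3 (a paraboloid)
    return torch.stack([z[..., 0], z[..., 1], (z ** 2).sum(-1)], dim=-1)

z0, z1 = torch.tensor([-1.0, -1.0]), torch.tensor([1.0, 1.0])
T = 20
inner = (torch.linspace(0, 1, T)[1:-1, None] * (z1 - z0) + z0).requires_grad_(True)
opt = torch.optim.Adam([inner], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    curve = torch.cat([z0[None], inner, z1[None]], dim=0)
    x = f(curve)
    # discrete energy of the pushed-forward curve; since
    # ||f(z_{t+1}) - f(z_t)||^2 ~ dz^T G dz, this matches the pullback energy
    energy = ((x[1:] - x[:-1]) ** 2).sum() * (T - 1)
    energy.backward()
    opt.step()

print("final discrete curve energy:", energy.item())
```

The design choice here mirrors the pullback construction of Section 1: because $\|f(z_{t+1}) - f(z_t)\|_2^2 \approx \Delta z^\top G \Delta z$, the ambient-space energy above coincides, to first order, with the pullback curve energy $E_G$.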
## 2.2 Norm Induced By The Expected Metric Tensor

One way to approximate a random metric tensor is to take its expectation with respect to the collection of random metrics induced by the stochastic process. This has been introduced before by Tosi et al. (2014) for GP-LVMs.

Definition 2.4. Let $G$ be a stochastic Riemannian metric tensor on the manifold $\mathcal{Z}$. We refer to $\mathbb{E}[G]$ as the **expected metric tensor**. It induces a Riemannian metric and a norm on $\mathcal{Z}$. We will note the norm induced by the expected metric tensor as:

$$\|\cdot\|_{R}:{\mathcal{T}}_{z}{\mathcal{Z}}\to\mathbb{R}_{+}:u\mapsto\|u\|_{\mathbb{E}[G]}:={\sqrt{u^{\top}\mathbb{E}[G]u}}.$$

Like any Riemannian metric, we can define the following functionals: $L_R(\gamma) = \int_0^1 \sqrt{\dot\gamma(t)^\top\mathbb{E}[G]\dot\gamma(t)}\, dt$, $E_R(\gamma) = \int_0^1 \dot\gamma(t)^\top\mathbb{E}[G]\dot\gamma(t)\, dt = \mathbb{E}\left[\int_0^1 \|\dot\gamma(t)\|_G^2\, dt\right]$, and $V_R(z) = \sqrt{\det\mathbb{E}[G]}$.

## 2.3 Expected Paths On Random Manifolds

Approximating the stochastic metric by its expectation seems a natural but also ad-hoc solution. If we want to explore a manifold, we might prefer to use a representative quantity, such as the lengths between data points. The expectation of the lengths can give us an idea about how, on average, two points are connected on a random manifold. The **expected curve length**, and its corresponding **curve energy**, on the random manifold $(\mathcal{Z}, g)$ are defined as: $L_F(\gamma) = \int_0^1 \mathbb{E}\left[\sqrt{\dot\gamma(t)^\top G\dot\gamma(t)}\right] dt = \mathbb{E}[L_G(\gamma)]$, and $E_F(\gamma) = \int_0^1 \mathbb{E}\left[\sqrt{\dot\gamma(t)^\top G\dot\gamma(t)}\right]^2 dt$.

One observation made by Eklund & Hauberg (2019) is that the length ($L_R$) derived from the expected Riemannian metric is not equal to the expected curve length ($L_F$), and their respective energy curves differ by a variance term:

$$E_{R}(\gamma)-E_{F}(\gamma)=\int_{0}^{1}\dot{\gamma}(t)^{\top}\mathbb{E}[G]\dot{\gamma}(t)-\mathbb{E}\left[\sqrt{\dot{\gamma}(t)^{\top}G\dot{\gamma}(t)}\right]^{2}dt=\int_{0}^{1}\mathbb{E}\left[\left\|\dot{\gamma}(t)\right\|_{G}^{2}\right]-\mathbb{E}\left[\left\|\dot{\gamma}(t)\right\|_{G}\right]^{2}dt=\int_{0}^{1}\operatorname{Var}\left[\left\|\dot{\gamma}(t)\right\|_{G}\right]dt.$$

This term can be regarded as a **regularisation term** for the Riemannian energy curve: the curve energy $E_R$ might be penalised when the curve goes through regions with high variance. In practice, for a Gaussian process with a stationary kernel, this variance term is upper bounded by the posterior variance that is relatively low next to the training points and is high outside of the support of the data. Later, we will also see that the functionals agree in high dimensions, leading to the same geodesics (Section 3). Eklund & Hauberg (2019) also noted that these quantities are bounded by the number of dimensions:

Proposition 2.1. (Eklund & Hauberg, 2019) Let $f : \mathbb{R}^q \to \mathbb{R}^D$ be a stochastic process such that the sequence $\{f_1', f_2', \ldots, f_D'\}$ has uniformly bounded moments. There is then a constant $C$ such that:

$$0\leq{\frac{L_{R}-L_{F}}{L_{R}}}\leq{\frac{C}{8D}}.$$

## 2.4 Expected Norm And Finsler Geometry

Our work builds on Eklund & Hauberg (2019)'s research. We are interested in approximating the stochastic norm instead of the metric tensor, and by doing so, the derived curve length and curve energy are the same ones studied by Eklund & Hauberg (2019). We go further as we not only compare curve lengths, but also the deterministic norms obtained with the stochastic metric.

Definition 2.5. Let $G$ be a stochastic Riemannian metric tensor on the manifold $\mathcal{Z}$. It induces a stochastic norm, $\|\cdot\|_G$, on $\mathcal{Z}$.
We will note the **expected norm** as:

$$\|\cdot\|_{F}:{\mathcal{T}}_{z}{\mathcal{Z}}\to\mathbb{R}_{+}:u\mapsto\mathbb{E}[\|u\|_{G}]:=\mathbb{E}\left[{\sqrt{u^{\top}G u}}\right].$$

While it cannot be induced by an inner product, it is sufficiently convex to be defined as a **Finsler metric**.

Definition 2.6. Let $F : \mathcal{T}\mathcal{Z} \to \mathbb{R}_+$ be a continuous non-negative function defined on the tangent bundle $\mathcal{T}\mathcal{Z}$ of a differentiable manifold $\mathcal{Z}$. We say that $F$ is a **Finsler metric** if, for each point $z$ of $\mathcal{Z}$ and $v$ on $\mathcal{T}_z\mathcal{Z}$, we have (1) **Positive homogeneity**: $\forall\lambda \in \mathbb{R}_+$, $F(\lambda v) = \lambda F(v)$. (2) **Smoothness**: $F$ is a $C^\infty$ function on the slit tangent bundle $\mathcal{T}\mathcal{Z} \setminus \{0\}$. (3) **Strong convexity criterion**: the Hessian matrix $g_{ij}(v) = \frac{1}{2}\frac{\partial^2 F^2}{\partial v^i\partial v^j}(v)$ is positive definite for non-zero $v$. A differentiable manifold equipped with a Finsler metric is called a Finsler manifold. Finsler geometry can be seen as an extension of Riemannian geometry, since the requirements for defining a metric are less restrictive.

Proposition 2.2. Let $G$ be a stochastic Riemannian metric tensor. Then, the function $F_z : \mathcal{T}_z\mathcal{Z} \to \mathbb{R} : u \mapsto \|u\|_F$ defines a Finsler metric, but it is not induced by a Riemannian metric.

Proof. If $F$ was induced by a Riemannian metric, then this metric would be defined as: $f_z : \mathbb{R}^q \times \mathbb{R}^q \to \mathbb{R}_+ : (v_1, v_2) \mapsto \mathbb{E}\left[\sqrt{v_1^\top G v_2}\right]^2$. Since a Riemannian metric is an inner product, it should be symmetric, positive, definite and bilinear. Here, we can see that $f_z$ is not bilinear, so $f_z$ is not a Riemannian metric. However, we can prove that $F_z : \mathbb{R}^q \to \mathbb{R} : v \mapsto \mathbb{E}[\sqrt{v^\top G v}]$ is positive, homogeneous, smooth and strongly convex, and so $F_z$ is a Finsler metric (Shen & Shen, 2016, Definition 2.1). For the full proof, see Section B.1.

So far, we have assumed that $f$ is an immersion and a stochastic process. If we consider $f$ to be a Gaussian process in particular, the Finsler norm can be rewritten in a closed form expression.

Proposition 2.3. Let $f$ be a Gaussian process and $J$ its Jacobian, with $J \sim \mathcal{N}(\mathbb{E}[J], \Sigma)$. The Finsler norm can be written as:

$$F_{z}:{\mathcal{T}}_{z}{\mathcal{Z}}\to\mathbb{R}_{+}:v\mapsto\|v\|_{F}:={\sqrt{2}}{\sqrt{v^{\top}\Sigma v}}\,{\frac{\Gamma({\frac{D}{2}}+{\frac{1}{2}})}{\Gamma({\frac{D}{2}})}}\,{}_{1}F_{1}\left(-{\frac{1}{2}},{\frac{D}{2}},-{\frac{\omega}{2}}\right),$$

with $_1F_1$ the confluent hypergeometric function of the first kind and $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$.

Proof. We suppose that $f$ is a Gaussian process, and so is its Jacobian. $G$ follows a non-central Wishart distribution: $G = J^\top J \sim \mathcal{W}_q(D, \Sigma, \Sigma^{-1}\mathbb{E}[J]^\top\mathbb{E}[J])$. $v^\top G v$ is a scalar and also follows a non-central Wishart distribution: $v^\top G v \sim \mathcal{W}_1(D, \sigma, \omega)$, with $\sigma = v^\top\Sigma v$ and $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$ (Kent & Muirhead, 1984, Definition 10.3.1). The square root of a non-central Wishart distribution follows a non-central Nakagami distribution (Hauberg, 2018b). Then, by construction, the stochastic norm $\|\cdot\|_G$ follows a non-central Nakagami distribution. The expectation of this distribution is known, and it has a closed-form expression.

The confluent hypergeometric function of the first kind, also known as the Kummer function, is a special function that is defined as the solution of a specific second-order linear differential equation. The term $\omega$ appears from the non-central Wishart distribution. When $\omega$ is non-zero, the distribution of the Jacobian shifts away from the origin, and $\omega$ represents the magnitude and the direction of this shift, balanced by the correlation between the variables.
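As a numerical sanity check of Proposition 2.3, the sketch below (assuming, as one reading of $J \sim \mathcal{N}(\mathbb{E}[J], \Sigma)$, that the rows of $J$ are independent Gaussians sharing the $q \times q$ covariance $\Sigma$; all names and sizes are illustrative) compares the closed form against a Monte-Carlo estimate of $\mathbb{E}[\sqrt{v^\top G v}]$, and against the Riemannian norm $\|v\|_R$ (under this row model, $\mathbb{E}[G] = \mathbb{E}[J]^\top\mathbb{E}[J] + D\Sigma$):

```python
import numpy as np
from scipy.special import gamma, hyp1f1

rng = np.random.default_rng(0)
q, D = 2, 5
mu = rng.standard_normal((D, q))           # E[J]
A = rng.standard_normal((q, q))
Sigma = A @ A.T + np.eye(q)                # shared row covariance
L = np.linalg.cholesky(Sigma)
v = rng.standard_normal(q)

# Monte-Carlo estimate of the Finsler norm E[ sqrt(v^T J^T J v) ]
norms = []
for _ in range(50_000):
    J = mu + rng.standard_normal((D, q)) @ L.T   # rows ~ N(mu_d, Sigma)
    norms.append(np.sqrt(v @ (J.T @ J) @ v))
mc = np.mean(norms)

# closed form of Proposition 2.3
vSv = v @ Sigma @ v
omega = (v @ mu.T @ mu @ v) / vSv
closed = np.sqrt(2 * vSv) * gamma(D / 2 + 0.5) / gamma(D / 2) \
         * hyp1f1(-0.5, D / 2, -omega / 2)

riem = np.sqrt(v @ (mu.T @ mu + D * Sigma) @ v)  # ||v||_R
print(f"Monte-Carlo: {mc:.4f}  closed form: {closed:.4f}  ||v||_R: {riem:.4f}")
```

The Monte-Carlo estimate and the closed form should agree, and both sit slightly below $\|v\|_R$, illustrating the Jensen gap discussed in Section 2.3.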
In Section 3.4, to prove our results in high dimensions, we will assume that our manifold $\mathcal{Z}$ is bounded, and so is $\omega$.

## 3 Comparison Of Riemannian And Finsler Metrics

## 3.1 Theoretical Comparison

In geometry, we need to define a norm to compute functionals, and the Riemannian metric is conveniently obtained by constructing an inner product. Because of its bilinearity, it greatly simplifies subsequent computations, but it is also restrictive. Relaxing this assumption and defining a metric as a more general2 norm has been studied by Finsler (1918), who gave his name to this discipline. Finsler geometry is similar to Riemannian geometry without the bilinear assumption: most of the functionals (curve length and curve energy) are defined similarly to those obtained in Riemannian geometry. However, the volume measure is different, and there are at least two definitions of volume measure used in Finsler geometry: the Busemann-Hausdorff volume and the Holmes-Thompson volume measure (Wu, 2011). In this paper, we decided to focus on the Busemann-Hausdorff definition (Definition 3.1), which is more intuitive and easier to derive. If the Finsler metric is a Riemannian metric, the definition of volume naturally coincides with the Riemannian volume measure.

2The norm actually needs to be strongly convex, but it is not necessarily symmetric. This means that, for a vector $v$, we can have a non-reversible Finsler metric: $F_z(v) \neq F_z(-v)$. Intuitively, this means that the path used to connect two points would be different depending on the starting point. This asymmetric property becomes valuable when studying the geometry of anisotropic media (Markvorsen, 2016), for example. In our case, our Finsler metric is reversible.

Definition 3.1. For a given point $z$ on the manifold, we define the **Finsler indicatrix** as the set of vectors in the tangent space such that the Finsler metric is equal to one: $\{v \in \mathcal{T}_z\mathcal{Z} \mid F_z(v) = 1\}$. We call $\mathbf{B}^n(1)$ the Euclidean unit ball, and $\operatorname{vol}(\cdot)$ the standard Euclidean volume. In local coordinates $(e^1, \cdots, e^d)$ on a Finsler manifold $M$, the **Busemann-Hausdorff volume** form is defined as $dV_F = V_F(z)\, e^1 \wedge \cdots \wedge e^d$, with:

$$V_F(z) = \frac{\operatorname{vol}\left(\mathbf{B}^n(1)\right)}{\operatorname{vol}\left(\{v \in \mathcal{T}_z\mathcal{Z} \mid F_z(v) \le 1\}\right)}.$$

In the definition above, we introduce the notion of *indicatrix*. An indicatrix is a way to represent the distortion induced by the metric on a unit circle. If our metric is Euclidean, we will only have a linear transformation between the latent and the observational spaces, and the indicatrix would still be a circle. Because the Riemannian metric is quadratic, it will always generate an ellipse in the latent space. The Finsler indicatrix, however, would have a convex, even asymmetrical, shape. This difference can be observed in the indicatrix field represented in Figure 2: the Finsler indicatrices in purple can have an almost rectangular shape, while the Riemannian indicatrices, in orange, are ellipses.

![7_image_0.png](7_image_0.png)

Figure 2: Indicatrix field over the latent space of the pinwheel data (in grey) representing the Riemannian (in orange) and Finslerian (in purple) metrics (see Section C). (A) The indicatrices are computed over a grid in the latent space. (B) The indicatrices are computed along a geodesic: the Riemannian and Finslerian metrics coincide.

There are also a few observations to note in Figure 2. First, in the area of low predictive variance (where data points lie in the latent space), the Finsler and Riemannian indicatrices are alike.
This follows from the preceding comment that the metrics diverge by a variance term. If our mapping $f$ was deterministic, both metrics would agree. Second, for every point, the Riemannian indicatrices are always contained by the Finslerian ones, illustrating Proposition 3.1 on our absolute bounds in the following section.

## 3.2 Absolute Bounds On The Finsler Metric

The Finsler norm is upper bounded by the Riemannian norm obtained from the expected metric tensor. It is also lower bounded:

Proposition 3.1. We define $\alpha = 2\left(\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\right)^2$. The Finsler norm $\|\cdot\|_F$ is bounded by two norms, $\|\cdot\|_{\alpha\Sigma}$ and $\|\cdot\|_R$, induced by the two respective Riemannian metric tensors: the covariance tensor $\alpha\Sigma_z$ and the expected metric tensor $\mathbb{E}[G_z]$:

$$\forall(z,v)\in{\mathcal{Z}}\times{\mathcal{T}}_{z}{\mathcal{Z}}:\ \|v\|_{\alpha\Sigma}\leq\|v\|_{F}\leq\|v\|_{R}.$$

Proof. The full proof is detailed in Section B.2.1, and it can be summarised the following way. The upper bound $\|v\|_F \le \|v\|_R$, also rewritten as $\mathbb{E}[\sqrt{v^\top G v}] \le \sqrt{v^\top\mathbb{E}[G]v}$, is obtained by applying Jensen's inequality, knowing that the square root $x \mapsto \sqrt{x}$ is a concave function. The lower bound $\|v\|_{\alpha\Sigma} \le \|v\|_F$, rewritten as $\sqrt{v^\top\alpha\Sigma v} \le \mathbb{E}[\sqrt{v^\top G v}]$, is obtained using the closed form expression of the Finsler function.

The result is illustrated in Figure 4 (lower right). Four metric tensors $(G_1, G_2, G_3, G_4)$, each following a non-central Wishart distribution with a specific mean and covariance matrix, have been computed. For each of them, we have drawn the indicatrices ($\{v \in \mathcal{T}_z\mathcal{Z} \mid \|v\| = 1\}$) induced by the norms $\|\cdot\|_F$, $\|\cdot\|_R$ and $\|\cdot\|_{\alpha\Sigma}$. As expected, we can notice that the $\alpha\Sigma$-indicatrix contains the Finsler indicatrix, itself containing the $R$-indicatrix.

By bounding the Finsler metric, we are able to bound their respective functionals:

Corollary 3.1. The length, the energy and the Busemann-Hausdorff volume of the Finsler metric are bounded respectively by the Riemannian length, energy and volume of the covariance tensor $\alpha\Sigma$ (noted $L_{\alpha\Sigma}$, $E_{\alpha\Sigma}$, $V_{\alpha\Sigma}$) and the expected metric $\mathbb{E}[G]$ (noted $L_R$, $E_R$, $V_R$):

$$\forall z \in \mathcal{Z},\quad L_{\alpha\Sigma}(z) \le L_F(z) \le L_R(z),\quad E_{\alpha\Sigma}(z) \le E_F(z) \le E_R(z),\quad V_{\alpha\Sigma}(z) \le V_F(z) \le V_R(z).$$

Proof. The full proof is detailed in Section B.2.1. From Proposition 3.1, we need to integrate each term of the inequality to obtain the length and the energy. The volume is less trivial, since we use the Busemann-Hausdorff definition for measuring $V_F$. We recast the problem in hyperspherical coordinates, and show that the Finsler indicatrix is still bounded.

## 3.3 Relative Bounds On The Finsler Metric

Proposition 3.2. Let $f$ be a stochastic immersion. $f$ induces the stochastic norm $\|\cdot\|_G$, defined in Section 2. The relative difference between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$$0\leq{\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}\leq{\frac{\mathrm{Var}\left[\|v\|_{G}^{2}\right]}{2\mathbb{E}\left[\|v\|_{G}^{2}\right]^{2}}}.$$

Proof. This proposition is a direct application of the sharpened Jensen's inequality (Liao & Berg, 2019).

The previous proposition is valid for any stochastic immersion. We can see that the metrics become equal when the ratio of the variance over the expectation shrinks to zero. This happens in two cases: when the variance converges to zero, which is similar to having a deterministic immersion, and when the number of dimensions increases. The latter case is investigated below for a Gaussian process3.
Proposition 3.3. Let $f$ be a Gaussian process. We note $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$, with $J$ the Jacobian of $f$, and $\Sigma$ the covariance matrix of $J$. The relative ratio between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$$0\leq{\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}\leq{\frac{1}{D+\omega}}+{\frac{\omega}{(D+\omega)^{2}}}.$$

Proof. $v^\top G v$ follows a one-dimensional non-central Wishart distribution: $v^\top G_z v \sim \mathcal{W}_1(D, \sigma, \omega)$, with $\sigma = v^\top\Sigma v$ and $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$. We use the theorem of the moments to obtain both the expectation and the variance, which leads us to the result.

As we have seen that the metrics are bounded, it is easy to show that the functionals derived from those metrics are also bounded:

Corollary 3.2. When $f$ is a Gaussian process, the relative ratio between the length, the energy and the volume of the Finsler norm (noted $L_F, E_F, V_F$) and the Riemannian norm (noted $L_R, E_R, V_R$) is:

$$0\leq\frac{L_{R}(z)-L_{F}(z)}{L_{R}(z)}\leq\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}$$
$$0\leq\frac{E_{R}(z)-E_{F}(z)}{E_{R}(z)}\leq\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{2}{D+\omega}+\frac{1+2\omega}{(D+\omega)^{2}}+\frac{2\omega}{(D+\omega)^{3}}+\frac{\omega^{2}}{(D+\omega)^{4}}\right\}$$
$$0\leq\frac{V_{R}(z)-V_{F}(z)}{V_{R}(z)}\leq1-\left(1-\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}\right)^{q}$$

Proof. We directly use Proposition 3.3. To obtain the inequalities with the lengths and the energies, we first multiply all the terms by the Riemannian metric, and we integrate every term. To obtain the inequality with the volume, similarly to Corollary 3.1, we place ourselves in hyperspherical coordinates and bound the radius of the Finsler indicatrix. The full proof is in Section B.2.2.

![9_image_0.png](9_image_0.png)

Figure 3: Difference of volume for data embedded in the latent space. (A) Riemannian volume measure, (B) Finslerian (Busemann-Hausdorff) volume measure, (C) Variance of the Gaussian process, (D) Ratio between the Riemannian and Finslerian volumes: $(V_R(z) - V_F(z))/V_R(z)$. All heatmaps are computed in logarithmic scale.

In Figure 3, we can compare the volume measures obtained from the Riemannian and Finsler metrics, and in particular their ratio in panel (D). When the metrics are computed next to the data points, in areas where the variance is very low, we can see that the ratio of the volume measures is of the order of magnitude $10^{-4}$. Further away from the data points, the variance increases and so does the difference between the Riemannian and Finsler volume measures.
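The closed form of the Finsler norm (Proposition 2.3) makes these quantities directly computable. The following sketch, assuming SciPy's `hyp1f1` for the confluent hypergeometric function and an arbitrary toy Jacobian, evaluates $\|v\|_F$ and $\|v\|_R$ and shows the relative gap shrinking as $D$ grows, as Proposition 3.3 predicts:

```python
import numpy as np
from scipy.special import gammaln, hyp1f1

def norms(v, EJ, Sigma):
    """Closed-form Finsler and Riemannian norms for a Gaussian-process
    Jacobian J ~ N(EJ, I_D (x) Sigma), following Propositions 2.3 and 3.3."""
    D = EJ.shape[0]
    sigma = v @ Sigma @ v
    omega = (EJ @ v) @ (EJ @ v) / sigma
    ratio = np.exp(gammaln(D / 2 + 0.5) - gammaln(D / 2))  # Gamma(D/2+1/2)/Gamma(D/2)
    finsler = np.sqrt(2 * sigma) * ratio * hyp1f1(-0.5, D / 2, -omega / 2)
    riemann = np.sqrt(sigma * (D + omega))                 # v^T E[G] v = sigma (D + omega)
    return finsler, riemann

rng = np.random.default_rng(0)
v, Sigma = np.array([1.0, -0.5]), np.array([[1.0, 0.2], [0.2, 0.5]])
for D in (3, 30, 300, 3000):
    # mean Jacobian scaled by 1/sqrt(D) so that omega stays O(1) ("bounded manifold")
    F, R = norms(v, rng.standard_normal((D, 2)) / np.sqrt(D), Sigma)
    print(D, (R - F) / R)  # decays roughly like 1/D
```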
## 3.4 Results In High Dimensions

Proposition 3.3 and Corollary 3.2 indicate that the metrics become similar when the dimension $D$ of the observational space increases. If we assume that the latent space is a bounded manifold, the metrics converge to each other at a rate of $\mathcal{O}\left(\frac{1}{D}\right)$, as do their functionals. Under this boundedness assumption, we can deduce that (1) the term $\omega$, which represents the non-centrality of the data, does not grow faster than the number of dimensions (see Lemma B.2.2, in Section B.2.3) and (2) the metrics are finite.

Corollary 3.3. Let $f$ be a Gaussian process. In high dimensions, we have:

$${\frac{L_{R}(z)-L_{F}(z)}{L_{R}(z)}}={\mathcal{O}}\left({\frac{1}{D}}\right),\quad{\frac{E_{R}(z)-E_{F}(z)}{E_{R}(z)}}={\mathcal{O}}\left({\frac{1}{D}}\right),\quad{\mathrm{and}}\quad{\frac{V_{R}(z)-V_{F}(z)}{V_{R}(z)}}={\mathcal{O}}\left({\frac{q}{D}}\right).$$

When $D$ converges toward infinity: $L_R \underset{+\infty}{\sim} L_F$, $E_R \underset{+\infty}{\sim} E_F$ and $V_R \underset{+\infty}{\sim} V_F$.

Proof. This result follows from Corollary 3.2, assuming the latent manifold is bounded. The full proof can be found in Section B.2.3.

Corollary 3.4. Let $f$ be a Gaussian process. In high dimensions, the relative ratio between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$${\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}={\mathcal{O}}\left({\frac{1}{D}}\right).$$

And, when $D$ converges toward infinity: $\forall v \in \mathcal{T}_z\mathcal{Z},\ \|v\|_R \underset{+\infty}{\sim} \|v\|_F$.

Proof. Similarly, from Proposition 3.3, in a bounded manifold, both metrics converge to each other in high dimensions.

![10_image_0.png](10_image_0.png)

Figure 4: Left: Ratio of volumes $(V_R - V_F)/V_R$ decreasing with respect to the number of dimensions. The results were obtained using a collection of matrices $\{G_i\}$ following a non-central Wishart distribution. Upper right: the Finsler and Riemannian indicatrices converge towards each other when increasing the number of dimensions. Lower right: illustration of the absolute bounds in Proposition 3.1 with the $\alpha\Sigma$-indicatrices, Riemannian indicatrices and Finsler indicatrices.

## 4 Experiments

We want to illustrate cases where these metrics differ in practice. For this, we use two synthetic datasets, consisting of a pinwheel and concentric circles mapped to a sphere, and four real-world datasets: a font dataset (Campbell & Kautz, 2014), a dataset representing single cells (Guo et al., 2010), MNIST (LeCun, 1998) and FashionMNIST (Xiao et al., 2017). We trained a GPLVM with and without stochastic active sets (Moreno-Muñoz et al., 2022) to learn a latent manifold. From the learnt model, we can access the Riemannian and Finsler metrics, and minimise their respective curve energies to obtain the corresponding geodesics. All the code has been built using Stochman (Detlefsen et al., 2021), a Python library to efficiently compute geodesics on manifolds.

## 4.1 Experiments With Synthetic Data Showing High Variance

The synthetic data correspond to simple patterns - a pinwheel and concentric circles - that have been projected onto a sphere. In those examples, we plot curves that do not follow the data points and therefore go through regions of high variance. Notably, the background of the latent manifold represents the variance of the posterior distribution in logarithmic scale. For both cases, we have the 3-dimensional data space and the corresponding learnt latent space. The curves are mapped from the latent space to the data space using the forward pass of the GPLVM.

![11_image_0.png](11_image_0.png)

Figure 5: The Riemannian (purple), Finslerian (orange) and Euclidean (dotted gray) geodesics obtained by pulling back the metric through the Gaussian processes of a trained GPLVM.
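The geodesics above come from minimising curve energy under each metric. As an illustration of that step only (a sketch, not the Stochman-based pipeline used in the experiments), the following discretises a curve between two latent points and runs gradient descent on the discrete energy; `metric_norm2` is a placeholder for either the squared Riemannian norm $v^\top\mathbb{E}[G]v$ or the squared Finsler norm $F(v)^2$ evaluated at the segment midpoints.

```python
import torch

def curve_energy(points, metric_norm2):
    """Discrete curve energy: sum_i ||gamma_{i+1} - gamma_i||^2 / dt,
    with the squared norm evaluated at each segment midpoint."""
    vel = points[1:] - points[:-1]
    mid = 0.5 * (points[1:] + points[:-1])
    dt = 1.0 / vel.shape[0]
    return (metric_norm2(mid, vel) / dt).sum()

def geodesic(z0, z1, metric_norm2, n=32, steps=500, lr=1e-2):
    """Gradient descent on the inner points; the endpoints stay fixed."""
    t = torch.linspace(0, 1, n)[1:-1, None]
    inner = ((1 - t) * z0 + t * z1).clone().requires_grad_(True)
    opt = torch.optim.Adam([inner], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pts = torch.cat([z0[None], inner, z1[None]])
        curve_energy(pts, metric_norm2).backward()
        opt.step()
    return torch.cat([z0[None], inner.detach(), z1[None]])

# Toy conformal Riemannian metric E[G] = (1 + ||z||^2) I as a stand-in:
riem2 = lambda z, v: (1 + (z ** 2).sum(-1)) * (v ** 2).sum(-1)
path = geodesic(torch.tensor([-1.0, 0.0]), torch.tensor([1.0, 0.5]), riem2)
```

Minimising the energy rather than the length sidesteps the reparametrisation invariance of the length (cf. Lemmas B.1.4 and B.1.5), so the optimiser has a unique target.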
The models, trained on the 3-dimensional synthetic data (Figures A.2, B.2), learnt their latent representations (Figures A.1, B.1). We can see that the Riemannian geodesics tend to avoid areas of high variance in both cases: they are attracted to the data, to the detriment of following longer paths in $\mathbb{R}^3$. The Finslerian geodesics, on the contrary, are not perturbed by the variance term, and will explore regions without any data, following shorter paths in $\mathbb{R}^3$. The geodesics have been plotted by computing a discretized manifold, obtained from a 10x10 grid with Detlefsen et al. (2021). This approach proved to be more effective than minimising the energy along a spline, which was prone to getting trapped in local minima. The GPLVM has been coded in Pyro (Bingham et al., 2019). All the implementation details can be found in Appendix C.

## 4.2 Experiments With A Font Dataset And Qpcr Dataset

The font dataset (Campbell & Kautz, 2014) represents the contour of letters in various fonts, and the qPCR dataset (Guo et al., 2010) represents single-cell stages. We trained a GPLVM model, also using Pyro (Bingham et al., 2019), to learn the latent space. From the optimised Gaussian process, we can access the Riemannian and Finsler metrics, and minimise their respective curve energies to obtain geodesics. Similar to the previous experiment, the background colour represents the variance of the posterior distribution in logarithmic scale. We can notice that the Riemannian and the Finsler geodesics agree with each other. This experiment also agrees with our finding that in high dimensions (the dimension of the single-cell data is 48; that of the font data, 256), both metrics converge to each other. This suggests that, in practice, the Riemannian metric is a good approximation of the Finsler metric, and Finsler geometry does not need to be used in high dimensions.

![12_image_0.png](12_image_0.png)

Figure 6: The Riemannian (plain line) and Finslerian (dotted line) geodesics obtained by pulling back the metric through the GPLVM. The models trained on single-cell data (which cannot be represented) and font data (Figure B.2) learnt their respective 2-dimensional latent representations (Figures A, B.1).

## 4.3 Experiments With Mnist And Fashionmnist

GPLVMs are often hard to train, but they can be scaled effectively using variational inference and inducing points. Using stochastic active sets has been shown to give reliable uncertainty estimates, and the model also learns more meaningful latent representations, as shown in the original paper by Moreno-Muñoz et al. (2022). For those experiments, we used those models on two well-known benchmarks: MNIST and FashionMNIST. In both experiments, because the variance of the posterior learnt by the GPLVM varies little across the latent manifold, as we can notice in the background of the plots on the right, and because the data is high dimensional, we cannot see any difference between the Finslerian and the Riemannian geodesics. Again, in practice, the Riemannian metric is a good approximation of the Finslerian metric.

## 5 Discussion

Generative models are often used to reduce data dimension in order to better understand the mechanisms behind the data generating process. We consider the general setting where the mapping from latent variables to observations is driven by a smooth stochastic process, and the sample mappings span Riemannian manifolds. The Riemannian geometry machinery has already been used in the past to explore the latent space.
In this paper, we have shown how curves and volumes can be identified by defining the length of a latent curve as its expected length measured in the observation space. This is a natural extension of classical differential geometric constructions to the stochastic realm. Surprisingly, we have shown that this does not give rise to a Riemannian metric over the latent space, even if sample mappings do. Rather, the latent representation naturally becomes equipped with a Finsler metric, implying that stochastic manifolds, such as those spanned by latent variable models (LVMs), are inherently more complex than their deterministic counterparts.

The Finslerian view of the latent representation gives us a suitable general solution to explore a random manifold, but it does not immediately translate into a practical computational tool. As Riemannian manifolds are better understood computationally than Finsler manifolds, we have raised the question: how good an approximation of the Finsler metric can be achieved by a Riemannian metric? The answer turns out to be: quite good. We have shown that as the data dimension increases, the Finsler metric becomes increasingly Riemannian. Since LVMs are most commonly applied to high-dimensional data (as this is where dimensionality reduction carries value), we have justification for approximating the Finsler metric with a Riemannian metric such that computational tools become more easily available. In practice we find that geodesics under the Finsler and the Riemannian metrics are near identical, except in regions of high uncertainty.

![13_image_0.png](13_image_0.png)

Figure 7: The Riemannian (purple) and the Finslerian (orange) geodesics are plotted. **Upper figure:** learnt latent space with the images (left) and data points (right) of FashionMNIST. **Lower figure:** learnt latent space with the images (left) and data points (right) of MNIST.

## Acknowledgments

The authors would like to thank Professor Steen Markvorsen for the initial discussions on Finsler geometry, and Dr. Cilie Feldager for her invaluable help in taming GP-LVMs. This work was funded in part by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). It also received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (757360). SH was supported in part by research grants (15334, 42062) from VILLUM FONDEN.

## Code Availability

The code used for the experiments in this study is available on GitHub at https://github.com/a-pouplin/latent_distances_finsler.

## Notations

- $\mathcal{Z}, \mathcal{X}$: smooth differentiable latent ($\mathcal{Z}$) and data ($\mathcal{X}$) manifolds,
- $f$: a stochastic immersion $f : \mathcal{Z} \subset \mathbb{R}^q \to \mathcal{X} \subset \mathbb{R}^D$,
- $J$: Jacobian of the stochastic function $f$,
- $G$: stochastic metric tensor defined as the pullback metric through $f$: $G = J^\top J$,
- $\mathcal{T}_z\mathcal{Z}$: tangent space of the manifold $\mathcal{Z}$ at a point $z$,
- $\Sigma$: if $f$ is a Gaussian process, then $J \sim \prod_{i=1}^{D} \mathcal{N}(\mu_i, \Sigma)$,
- $\|\cdot\|_G$: stochastic induced norm $\|v\|_G := \sqrt{v^\top G v}$,
- $\|\cdot\|_R$: Riemannian induced norm $\|v\|_R := \sqrt{v^\top \mathbb{E}[G] v} := \sqrt{g(v, v)}$,
- $\|\cdot\|_F$: Finsler norm $\|v\|_F := \mathbb{E}[\sqrt{v^\top G v}] = F(v)$,
- $L_R, E_R, V_R$: length, energy and volume obtained from the Riemannian induced norm $\|\cdot\|_R$,
- $L_F, E_F, V_F$: length, energy and Busemann-Hausdorff volume obtained from the Finsler norm $\|\cdot\|_F$.

## References

Sumon Ahmed, Magnus Rattray, and Alexis Boukouvalas.
GrandPrix: scaling up the Bayesian GPLVM for single-cell data. *Bioinformatics*, 35(1):47–54, 2019.

Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent space oddity: on the curvature of deep generative models. In *International Conference on Learning Representations (ICLR)*, 2018.

Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitris Kalatzis, and Søren Hauberg. Pulling back information geometry, 2021. URL https://arxiv.org/abs/2106.05367.

Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, and Leonel Rozo. Learning Riemannian manifolds for geodesic motion skills. *arXiv preprint arXiv:2106.04315*, 2021.

Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. Pyro: Deep universal probabilistic programming. *The Journal of Machine Learning Research*, 20(1):973–978, 2019.

Neill D. F. Campbell and Jan Kautz. Learning a manifold of fonts. *ACM Trans. Graph.*, 33(4), July 2014. ISSN 0730-0301. doi: 10.1145/2601097.2601212. URL https://doi.org/10.1145/2601097.2601212.

Nicki S. Detlefsen, Alison Pouplin, Cilie W. Feldager, Cong Geng, Dimitris Kalatzis, Helene Hauschultz, Miguel González Duque, Frederik Warburg, Marco Miani, and Søren Hauberg. Stochman. GitHub. Note: https://github.com/MachineLearningLifeScience/stochman/, 2021.

Nicki Skafte Detlefsen, Søren Hauberg, and Wouter Boomsma. Learning meaningful representations of protein sequences. *Nature Communications*, 13(1):1914, 2022.

David Eklund and Søren Hauberg. Expected path length on random manifolds. *arXiv preprint arXiv:1908.07377*, 2019.

Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. Testing the manifold hypothesis. *Journal of the American Mathematical Society*, 29(4):983–1049, 2016.

Paul Finsler. *Ueber Kurven und Flächen in allgemeinen Räumen*. Philos. Fak., Georg-August-Univ., 1918.

Miguel González-Duque, Rasmus Berg Palm, Søren Hauberg, and Sebastian Risi. Mario plays on a manifold: Generating functional content in latent space through differential geometry. In *2022 IEEE Conference on Games (CoG)*, pp. 385–392. IEEE, 2022.

Guoji Guo, Mikael Huss, Guo Qing Tong, Chaoyang Wang, Li Li Sun, Neil D. Clarke, and Paul Robson. Resolution of cell fate decisions revealed by single-cell gene expression analysis from zygote to blastocyst. *Developmental Cell*, 18(4):675–685, 2010. ISSN 1534-5807. doi: 10.1016/j.devcel.2010.02.012. URL https://www.sciencedirect.com/science/article/pii/S1534580710001103.

Søren Hauberg. Only Bayes should learn a manifold (on the estimation of differential geometric structure from data). *arXiv preprint arXiv:1806.04994*, 2018a.

Søren Hauberg. The non-central Nakagami distribution. Technical report, Technical University of Denmark, 2018b. URL http://www2.compute.dtu.dk/~sohau/papers/nakagami2018/nakagami.pdf.

Martin Jørgensen and Søren Hauberg. Isometric Gaussian process latent variable model for dissimilarity data. In *International Conference on Machine Learning*, pp. 5127–5136. PMLR, 2021.

John T. Kent and R. J. Muirhead. Aspects of multivariate statistical theory. *The Statistician*, 1984. ISSN 00390526. doi: 10.2307/2987858.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Neil Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. *Advances in Neural Information Processing Systems*, 16, 2003.

Yann LeCun.
The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.

John M Lee. Smooth manifolds. In *Introduction to Smooth Manifolds*, pp. 1–31. Springer, 2013.

J. G. Liao and Arthur Berg. Sharpening Jensen's inequality. *The American Statistician*, 73(3):278–281, 2019. doi: 10.1080/00031305.2017.1419145.

Federico Lopez, Beatrice Pozzetti, Steve Trettel, Michael Strube, and Anna Wienhard. Symmetric spaces for graph embeddings: A Finsler-Riemannian approach. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 7090–7101. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/lopez21a.html.

Steen Markvorsen. A Finsler geodesic spray paradigm for wildfire spread modelling. *Nonlinear Analysis: Real World Applications*, 28:208–228, 2016.

Pablo Moreno-Muñoz, Cilie Feldager, and Søren Hauberg. Revisiting active sets for Gaussian process decoders. *Advances in Neural Information Processing Systems*, 35:6603–6614, 2022.

Minoru Nakagami. The m-distribution, a general formula of intensity distribution of rapid fading. In *Statistical Methods in Radio Wave Propagation*, pp. 3–36. Elsevier, 1960.

Pyro. Gaussian process latent variable model, 2022. URL https://pyro.ai/examples/gplvm.html.

Carl Edward Rasmussen and Christopher K. I. Williams. *Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)*. The MIT Press, 2005. ISBN 026218253X.

Nathan D. Ratliff, Karl Van Wyk, Mandy Xie, Anqi Li, and Muhammad Asif Rana. Generalized nonlinear and Finsler geometry for robotics. In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 10206–10212, 2021. doi: 10.1109/ICRA48506.2021.9561543.

Aidan Scannell, Carl Henrik Ek, and Arthur Richards. Trajectory optimisation in learned multimodal dynamical systems via latent-ODE collocation. In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 12745–12751. IEEE, 2021.

Yi-Bing Shen and Zhongmin Shen. *Introduction to Modern Finsler Geometry*. Co-published with HEP, 2016. doi: 10.1142/9726. URL https://www.worldscientific.com/doi/abs/10.1142/9726.

Alessandra Tosi, Søren Hauberg, Alfredo Vellido, and Neil D Lawrence. Metrics for probabilistic geometries. *arXiv preprint arXiv:1411.7432*, 2014.

J. G. Wendel. Note on the gamma function. *The American Mathematical Monthly*, 55(9):563–564, 1948. ISSN 00029890, 19300972. URL http://www.jstor.org/stable/2304460.

Bingye Wu. Volume form and its applications in Finsler geometry. *Publicationes Mathematicae*, 78, 04 2011. doi: 10.5486/PMD.2011.4998.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.

## A A Primer On Geometry

The main purpose of the paper is to define and compare two legitimate metrics to compute the average length between random points. Before going further, it's important to formally define the two metrics (Riemannian and Finsler metrics, respectively), which we do in Sections A.2 and A.3. They are both constructed on topological manifolds, the definition of which is recalled in Section A.1. We finally introduce the notion of random manifold in Section A.4, which is the last notion needed to frame our problem of interest: which metric should we use to compute the average distance on a random manifold?
## A.1 Topological And Differentiable Manifolds

This section aims to define core concepts in differential geometry that will be used later to define Riemannian and Finsler manifolds. Recall that two topological spaces are called homeomorphic if there is a continuous bijection between them with continuous inverse.

Definition A.1. A $d$-dimensional **topological manifold** $M$ is a second-countable Hausdorff topological space such that every point has an open neighbourhood homeomorphic to an open subset of $\mathbb{R}^d$.

Let $M$ be a topological manifold. This means that for any $x \in M$ there is an open neighbourhood $U_x$ of $x$ and a homeomorphism $\phi_{U_x} : U_x \to \mathbb{R}^d$ onto an open subset of $\mathbb{R}^d$. Suppose that $x, y \in M$ are such that $U_x \cap U_y \neq \emptyset$; let $U = U_x$, $V = U_y$ and consider the so-called coordinate change map $\phi_V \circ \phi_U^{-1}|_{\phi_U(U \cap V)} : \phi_U(U \cap V) \to \mathbb{R}^d$. We call $M$ together with an open cover $\{U_x\}_{x \in M}$ as above a differentiable or **smooth** manifold if the coordinate maps are infinitely differentiable.

Beyond these technical definitions, one can imagine a differentiable manifold as a well-behaved smooth surface that possesses *locally* all the topological properties of a Euclidean space. All the manifolds in this paper are assumed to be differentiable and connected manifolds.

Definition A.2. We also define, for a differentiable manifold $M$, the **tangent space** $\mathcal{T}_xM$ as the set of all the tangent vectors at $x \in M$, and the **tangent bundle** $\mathcal{T}M$ as the disjoint union of all the tangent spaces: $\mathcal{T}M = \bigcup_{x \in M} \mathcal{T}_xM$.

So far, we have only defined topological and differential properties of manifolds. In order to compute geometric quantities, we need to equip those with a metric that helps us derive useful quantities such as lengths, energies and volumes. A metric is a scalar-valued function that is defined for each point on the topological manifold and takes as inputs one or two vectors (depending on the type of metric) from the tangent space at the specific point. Such a function can either be defined as a scalar product between two vectors, as is the case for a Riemannian metric, or, in the case of a Finsler metric, it is defined similarly to the norm of a vector. We will formally define these metrics and highlight their differences in the following sections.

## A.2 Riemannian Manifolds

Definition A.3. Let $M$ be a manifold. A **Riemannian metric** is a map assigning at each point $x \in M$ a scalar product $G(\cdot, \cdot) : \mathcal{T}_xM \times \mathcal{T}_xM \to \mathbb{R}$, with $G$ a positive definite bilinear map, which is smooth with respect to $x$. A smooth manifold equipped with a Riemannian metric is called a Riemannian manifold.

We usually express the metric as a symmetric positive definite matrix $G$, where for two vectors $u, v \in \mathcal{T}_xM$: $G(u, v) = \langle u, v \rangle_G = u^\top G v$. We further define the induced norm: for $v \in \mathcal{T}_xM$, $\|v\|_G = \sqrt{G(v, v)}$. The Riemannian metric here can either refer to the scalar product $G$ itself, or the associated metric tensor $G$.

![18_image_0.png](18_image_0.png)

Figure 8: $f$ is an immersion that maps a low-dimensional manifold to a high-dimensional manifold $M$. On $M$, a tangent plane $\mathcal{T}_xM$ is drawn at $x$. The indicatrix of the Euclidean metric is plotted in blue. When this metric is pulled back through $f$, the low-dimensional space is equipped with the pullback metric $g$, which is a Riemannian metric by definition. The vectors $df(u)$ and $df(v)$ are called the push-forwards of the vectors $u$ and $v$ through $f$.
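As a concrete illustration of the pullback construction in Figure 8, the following minimal sketch (using PyTorch autodiff; the immersion below is an arbitrary toy example, not one of the paper's models) computes the Jacobian of a deterministic immersion $f : \mathbb{R}^2 \to \mathbb{R}^3$ and the pullback metric $G = J_f^\top J_f$ at a latent point.

```python
import torch

def immersion(z):
    """Toy deterministic immersion f: R^2 -> R^3 (a curved sheet)."""
    x, y = z[..., 0], z[..., 1]
    return torch.stack([x, y, torch.sin(x) * torch.cos(y)], dim=-1)

def pullback_metric(f, z):
    """Pullback metric G = J_f(z)^T J_f(z) at a latent point z."""
    J = torch.autograd.functional.jacobian(f, z)   # shape (D, q)
    return J.T @ J                                  # (q, q), symmetric PD for an immersion

z = torch.tensor([0.3, -1.2])
G = pullback_metric(immersion, z)
v = torch.tensor([1.0, 0.0])
length_element = torch.sqrt(v @ G @ v)             # ||v||_G at z
```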
Definition A.4. We consider a curve $\gamma(t)$ and its derivative $\dot\gamma(t)$ on a Riemannian manifold $M$ equipped with the metric $g$. Then, we define the **length of the curve**:

$$L_{G}(\gamma)=\int\left\|{\dot{\gamma}}(t)\right\|_{G}dt=\int{\sqrt{g_{t}({\dot{\gamma}}(t),{\dot{\gamma}}(t))}}\,dt,$$

where $g_t = g_{\gamma(t)}$. Locally length-minimising curves between two connecting points are called **geodesics**.

Definition A.5. The **curve energy** is defined as:

$$E_{G}(\gamma)=\int\left\|{\dot{\gamma}}(t)\right\|_{G}^{2}dt=\int g_{t}({\dot{\gamma}}(t),{\dot{\gamma}}(t))\,dt.$$

There are two interesting properties to note about the length of a curve and the curve energy. First, the length is parametrisation invariant: for any bijective smooth function $\eta$ on the domain of $\gamma$ we have that $L_G(\gamma \circ \eta) = L_G(\gamma)$. We also say the Riemannian metric gives us intrinsic coordinates to compute the length. Secondly, for a given curve $\gamma$, we have: $L_G(\gamma)^2 \leq 2E_G(\gamma)$. Because of the reparametrisation invariance of the length, when we aim to minimise it, a solver can find an infinite number of solutions. On the other hand, the curve energy is convex and will lead to a unique solution. Thus, to obtain a geodesic, instead of solving the corresponding ODE equations, or directly minimising lengths, it is easier in practice to minimise the curve energy, as a minimal energy gives a minimal length.

The Riemannian metric also provides us with an infinitesimal volume element that relates our metric $G$ to an orthonormal basis, the same way the Jacobian determinant accommodates for a change of coordinates in the change of variables theorem.

Definition A.6. In local coordinates $(e^1, \cdots, e^d)$, the **volume form** of the Riemannian manifold $M$, equipped with the metric tensor $G$, is defined as $dV_G = V_G(x)\, e^1 \wedge \cdots \wedge e^d$, with:

$$V_{G}(x)={\sqrt{\operatorname*{det}(G)}}.$$

Remark. *The symbol $\wedge$ represents the wedge product and it is used to manipulate differential $k$-forms. Here, the basis vectors $(e^1, \cdots, e^d)$ form a $d$-dimensional parallelepiped $(e^1 \wedge \cdots \wedge e^d)$ with unit volume.*

![19_image_0.png](19_image_0.png)

Figure 9: Once the low-dimensional manifold is equipped with a metric that captures the inherent structure of the high-dimensional manifold, we can compute a geodesic $\gamma$ by minimising the energy functional between two points. The geodesic $f(\gamma)$ will be the shortest path between two points on the manifold $M$.

The volume measure $V_G$ can also be used to integrate functions over regions of the manifold, as we would do in Euclidean space. It can also be linked to the density of the data: if the data points are uniformly distributed over the high-dimensional manifold, then in the low-dimensional manifold, a low volume would correspond to a high density of data. It is a useful way to give more information about the distribution of the data.
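For intuition, the functionals of Definitions A.4 and A.5 can be evaluated by simple quadrature; the sketch below uses an arbitrary toy metric (a stand-in for a learnt $\mathbb{E}[G]$) and a straight-line curve.

```python
import numpy as np

def curve_functionals(gamma, metric, n=1000):
    """Approximate L_G and E_G for gamma: [0,1] -> R^q by midpoint quadrature."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = gamma(t)                               # (n+1, q)
    vel = (pts[1:] - pts[:-1]) * n               # finite-difference velocities
    mid = 0.5 * (pts[1:] + pts[:-1])
    sq = np.einsum("ti,tij,tj->t", vel, metric(mid), vel)   # g_t(vel, vel)
    return np.sqrt(sq).mean(), sq.mean()         # (length, energy), each dt = 1/n

# Toy conformal metric G(x) = (1 + ||x||^2) I_2 and a straight-line curve:
metric = lambda x: (1 + (x ** 2).sum(-1))[:, None, None] * np.eye(2)
gamma = lambda t: np.stack([t, 2 * t], axis=-1)
L, E = curve_functionals(gamma, metric)
print(L, E, L ** 2 <= E)   # on [0,1], Cauchy-Schwarz gives L^2 <= E (cf. Lemma B.1.5)
```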
## A.3 Finsler Manifolds

Finsler geometry is often described as an extension of Riemannian geometry, since the metric is defined in a more general way, lifting the quadratic constraint. In particular, the norm of a Riemannian metric is a Finsler metric, but the converse is not true.

Definition A.7. Let $F : \mathcal{T}M \to \mathbb{R}_+$ be a continuous non-negative function defined on the tangent bundle $\mathcal{T}M$ of a differentiable manifold $M$. We say that $F$ is a **Finsler metric** if, for each point $x$ of $M$ and $v$ in $\mathcal{T}_xM$, we have:

1. Positive homogeneity: $\forall \lambda \in \mathbb{R}_+$, $F(\lambda v) = \lambda F(v)$.
2. Smoothness: $F$ is a $C^\infty$ function on the slit tangent bundle $\mathcal{T}M \setminus \{0\}$.
3. Strong convexity criterion: the Hessian matrix $g_{ij}(v) = \frac{1}{2}\frac{\partial^2 F^2}{\partial v^i \partial v^j}(v)$ is positive definite for nonzero $v$.

A differentiable manifold $M$ equipped with a Finsler metric is called a **Finsler manifold**. Here, it is worth noting that, for a given point in the manifold, the Finsler metric is defined with only one vector in the tangent space, while the Riemannian metric is defined with two vectors. Moreover, from the previous definition, we can deduce that the metric is:

1. Positive definite: for all $x \in M$ and $v \in \mathcal{T}_xM$, $F(v) \geq 0$ and $F(v) = 0$ if and only if $v = 0$.
2. Subadditive: $F(v + w) \leq F(v) + F(w)$ for all $x \in M$ and $v, w \in \mathcal{T}_xM$.

We say that $F$ is a Minkowski norm on each tangent space $\mathcal{T}_xM$. Furthermore, if $F$ satisfies the reversibility property $F(v) = F(-v)$, it defines a norm on $\mathcal{T}_xM$ in the usual sense. Similarly to Riemannian geometry, lengths, energies and volumes can be defined directly from the Finsler metric:

Definition A.8. We consider a curve $\gamma$ and its derivative $\dot\gamma$ on a Finsler manifold $M$ equipped with the metric $F$. We define the **length of the curve** as follows:

$$L_{F}(\gamma)=\int F(\dot{\gamma}(t))\,dt.$$

![20_image_0.png](20_image_0.png)

Figure 10: $f$ is an immersion that maps a low-dimensional manifold to a high-dimensional manifold $M$. On $M$, a tangent plane $\mathcal{T}_xM$ is drawn at $x$. Compared to the Riemannian case, the Finsler indicatrix, which represents all the vectors $u \in \mathcal{T}_xM$ such that $F_x(u) = 1$, is not necessarily an ellipse. It can be asymmetric if the metric is asymmetric itself. It is always convex.

Definition A.9. The **curve energy** is defined as: $E_F(\gamma) = \int F(\dot\gamma(t))^2\,dt$.

Not only are the definitions strikingly similar, they also share the same properties. The curve length is also invariant under reparametrisation, and upper bounded by the curve energy. Computing geodesics on a manifold is reduced to a variational optimisation problem. These propositions are proved in detail in Lemmas B.1.4 and B.1.5 in the appendix.

In Riemannian geometry, the volume measure defined by the metric is unique. In Finsler geometry, different definitions of the volume exist, and they all coincide with the Riemannian volume element when the metric is Riemannian. The most common choices of volume forms are the Busemann-Hausdorff measure and the Holmes-Thompson measure. According to Wu (2011), depending on the Finsler metric and the topological manifold, some choices seem more legitimate than others. In this paper, we decided to only focus on the Busemann-Hausdorff volume, as its definition is the most commonly used and leads to easier derivations. We will later show that in high dimensions, our Finsler metric converges to a Riemannian metric, and thus the results obtained for the Busemann-Hausdorff volume measure are also valid for the Holmes-Thompson volume measure.

Definition A.10. For a given point $x$ on the manifold, we define the **Finsler indicatrix** as the set of vectors in the tangent space such that the Finsler metric is equal to one: $\{v \in \mathcal{T}_xM \mid F(v) = 1\}$. We denote the Euclidean unit ball in $\mathbb{R}^d$ by $\mathbb{B}^d(1)$ and for measurable subsets $S \subseteq \mathbb{R}^d$ we use $\mathrm{vol}(S)$ to denote the standard Euclidean volume of $S$. In local coordinates $(e^1, \cdots, e^d)$ on a Finsler manifold $M$, the **Busemann-Hausdorff volume** form is defined as $dV_F = V_F(x)\, e^1 \wedge \cdots \wedge e^d$, with:

$$V_{F}(x)={\frac{\mathrm{vol}(\mathbb{B}^{d}(1))}{\mathrm{vol}(\{v\in\mathcal{T}_{x}M\,|\,F(v)<1\})}}.$$

We can interpret the volume as the ratio between the Euclidean ball and a convex ball whose radius is defined by a unit Finsler metric.
If the Finsler metric is replaced by a Riemannian metric, the indicatrix will be an ellipsoid whose semi-axes are equal to the inverse of the square root of the metric's eigenvalues. The Finsler volume then reduces to the definition of the Riemannian volume.

## A.4 Random Manifolds

So far, we have only considered deterministic data points lying on a manifold. If we consider our data to be random variables, we will need to define the associated random metric and manifold.

![21_image_0.png](21_image_0.png)

Figure 11: Usually the immersion $f$ would be deterministic. In the case of most generative models, where $f$ is described by a GP-LVM or the decoder of a VAE, the immersion is stochastic. The pullback metric is stochastic de facto.

As said previously, if we have a function $f : \mathbb{R}^q \to \mathbb{R}^D$ that parametrises a manifold, then we can construct a Riemannian metric $G = J_f^\top J_f$, with $J_f$ the Jacobian of the function $f$. In the previous cases, we assumed $f$ to be a deterministic function, and so is the metric. We construct a stochastic Riemannian metric in the same way, with $f$ being a stochastic process. A stochastic process $f : \mathbb{R}^q \to \mathbb{R}^D$ is a random map in the sense that samples of the process are maps from $\mathbb{R}^q$ to $\mathbb{R}^D$ (the so-called sample paths of the process).

Definition A.11. A stochastic process $f : \mathbb{R}^q \to \mathbb{R}^D$ is smooth if the sample paths of $f$ are smooth. We call a smooth process $f$ a **stochastic immersion** if the Jacobian matrix of its sample paths has full rank everywhere. We can then define the **stochastic Riemannian metric** $G = J_f^\top J_f$. The terms *stochastic* and *random* are used interchangeably.

The definition of the stochastic immersion is fairly important, as it means that its Jacobian is full rank. Since the Jacobian is full rank, the random metric $G$ is positive definite, a necessary condition to define a Riemannian metric. Another definition of a stochastic Riemannian metric would be the following:

Definition A.12. A **stochastic Riemannian metric** on $\mathbb{R}^q$ is a matrix-valued random field on $\mathbb{R}^q$ whose sample paths are Riemannian metrics. A **stochastic manifold** is a differentiable manifold equipped with a stochastic Riemannian metric.

Any matrix drawn from this stochastic metric would be a proper Riemannian metric. When using the random Riemannian metric on two vectors $u, v \in \mathcal{T}_xM$, $G(u, v) = u^\top G v$ is a random variable, but both $u, v$ are deterministic vectors. From this definition, it follows that the length, the energy and also the volume are random variables.

| Object | Riemann | Finsler |
|---|---|---|
| metric | $g : \mathcal{T}_xM \times \mathcal{T}_xM \to \mathbb{R}$ | $F : \mathcal{T}M \to \mathbb{R}_+$ |
| length structure | $L_G(\gamma) = \int \sqrt{g_t(\dot\gamma(t), \dot\gamma(t))}\, dt$ | $L_F(\gamma) = \int F(\dot\gamma(t))\, dt$ |
| energy structure | $E_G(\gamma) = \int g_t(\dot\gamma(t), \dot\gamma(t))\, dt$ | $E_F(\gamma) = \int F(\dot\gamma(t))^2\, dt$ |
| volume element | $V_G(x) = \sqrt{\det(G)}$ | $V_F(x) = \mathrm{vol}(\mathbb{B}^n(1))/\mathrm{vol}(\{v \in \mathcal{T}_xM \mid F(x, v) < 1\})$ (Busemann-Hausdorff volume measure) |

Table 1: Comparison of Riemannian and Finsler metrics.

## B Proofs

One of the main challenges of this paper is to find coherent notations while respecting the traditions of two geometric fields. In Riemannian geometry, we principally use a metric, noted $g_p : \mathcal{T}_pM \times \mathcal{T}_pM \to \mathbb{R}$, that is defined as an inner product and thus can induce a norm, but is not a norm.
In Finsler geometry, we call interchangeably Finsler function, Finsler metric or Finsler norm the norm traditionally noted $F : \mathcal{T}M \to \mathbb{R}_+$, with $F_p(u) := F(p, u)$ defined at a point $p \in M$ for a vector $u \in \mathcal{T}_pM$. We will assume that all our metrics are always defined for a specific point $z$ (or $p$) on our manifold $\mathcal{Z}$ (or $M$), and so we will just drop this index. The following notations will be used:

- Stochastic pullback metric tensor: $G = J_f^\top J_f$,
- Stochastic pullback metric: $\tilde{g} : (u, v) \to u^\top G v$,
- Expected Riemannian metric: $g : (u, v) \to u^\top \mathbb{E}[G] v$,
- Stochastic pullback induced norm: $\|\cdot\|_G : u \to \sqrt{u^\top G u}$,
- Expected Riemannian induced norm: $\|\cdot\|_R : u \to \sqrt{u^\top \mathbb{E}[G] u} := \sqrt{g(u, u)}$,
- Finsler metric: $\|\cdot\|_F : u \to \mathbb{E}[\sqrt{u^\top G u}] := F(u)$.

## B.1 Finslerian Geometry Of The Expected Length

In this section, we will always let $f : \mathbb{R}^q \to \mathbb{R}^D$ be a stochastic immersion, $J_f$ its Jacobian, and $G = J_f^\top J_f$ a metric tensor. We will first prove that the function $F : \mathcal{T}M \to \mathbb{R} : v \to \mathbb{E}\left[\sqrt{v^\top G v}\right]$ is a Finsler metric. Then, for the specific case where $J_f$ follows a non-central normal distribution, the Finsler metric $F$ defined as the expected length follows a non-central Nakagami distribution and can be expressed in closed form.

To prove that the function $F$ is indeed a Finsler metric, we will need to verify the criteria above; among them, the strong convexity criterion is less trivial to prove than the others. It will be detailed in Lemma B.1.3. Strong convexity means that the Hessian matrix $\frac{1}{2}\mathrm{Hess}(F(v)^2) = \frac{1}{2}\frac{\partial^2 F^2}{\partial v^i \partial v^j}(v)$ is strictly positive definite for nonzero $v$. This matrix, when $F$ is a Finsler function, is also called the fundamental form and plays an important role in Finsler geometry. To prove the strong convexity criterion, we will need the full expression of the fundamental form, detailed in Lemma B.1.1.

Lemma B.1.1. *The Hessian matrix $\frac{1}{2}\mathrm{Hess}(F(v)^2)$ of the function $F(v) = \mathbb{E}\left[\sqrt{v^\top G v}\right]$ is given by*

$$\frac{1}{2}\mathrm{Hess}(F(v)^{2})=\mathbb{E}\left[(v^{\top}Gv)^{\frac{1}{2}}\right]\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}G-(v^{\top}Gv)^{-\frac{3}{2}}Gvv^{\top}G\right]+\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}Gv\right]\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}Gv\right]^{\top}.$$

Proof. Let $G$ be a random positive definite symmetric matrix and define $g : \mathbb{R}^q \to \mathbb{R} : v \mapsto \sqrt{v^\top G v}$, where $v$ is considered a column vector. We would like to know the different derivatives of $g$ with respect to $v$. We name by default $J_g$ and $H_g$ its Jacobian and Hessian matrix. Using the chain rule, we have: $J_g = (v^\top G v)^{-\frac{1}{2}} v^\top G$ and $H_g = (v^\top G v)^{-\frac{1}{2}} G - (v^\top G v)^{-\frac{3}{2}} (Gvv^\top G)$.

For the rest of the proof, we need to show that derivatives and expectation values commute. Using the Fubini theorem, we can show that the derivatives and the expectation values commute. For $F : \mathbb{R}^q \to \mathbb{R} : v \mapsto \mathbb{E}[\sqrt{v^\top G v}]$,

$$\nabla F=\mathbb{E}[J_{g}]=\mathbb{E}[(v^{\top}Gv)^{-{\frac{1}{2}}}Gv],\qquad\mathrm{Hess}(F)=\mathbb{E}[H_{g}]=\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}G-(v^{\top}Gv)^{-\frac{3}{2}}Gvv^{\top}G\right].$$

We now consider the function $h : \mathbb{R}^q \to \mathbb{R} : v \mapsto \mathbb{E}[\sqrt{v^\top G v}]^2 = F(v)^2$.
Using the chain rule and changing the order of expectation and derivatives, we have its Hessian

$$H_h = 2F\cdot\mathrm{Hess}(F) + 2\nabla F^\top \nabla F = 2\mathbb{E}[g]\,\mathbb{E}[H_g] + 2\mathbb{E}[J_g]^\top\mathbb{E}[J_g].$$

Finally, replacing $J_g$ and $H_g$ previously obtained in this expression, we conclude:

$$\frac{1}{2}H_{h}(x,v)=\mathbb{E}\left[(v^{\top}Gv)^{\frac{1}{2}}\right]\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}G-(v^{\top}Gv)^{-\frac{3}{2}}Gvv^{\top}G\right]+\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}Gv\right]\mathbb{E}\left[(v^{\top}Gv)^{-\frac{1}{2}}Gv\right]^{\top}.$$

Remark. *Before going further, it is important to note that $G = J_f^\top J_f$ is a random matrix that is positive definite: it is symmetric by definition and has full rank. The latter statement is justified by the assumption that the stochastic process $f : \mathbb{R}^q \to \mathbb{R}^D$ is an immersion; then $J_f$ is full rank.*

Lemma B.1.2. *The function $F(v) = \mathbb{E}\left[\sqrt{v^\top G v}\right]$ is:*

1. *positive homogeneous: $\forall \lambda \in \mathbb{R}_+$, $F(\lambda v) = \lambda F(v)$;*
2. *smooth: $F(v)$ is a $C^\infty$ function on the slit tangent bundle $\mathcal{T}M \setminus \{0\}$.*

Proof. 1) Let $\lambda \in \mathbb{R}_+$; then we have: $F(\lambda v) = \mathbb{E}\left[\sqrt{\lambda^2 v^\top G v}\right] = |\lambda|\,\mathbb{E}\left[\sqrt{v^\top G v}\right] = \lambda F(v)$.

2) The multivariate function $\mathbb{R}^q \setminus \{0\} \to \mathbb{R}^*_+ : v \to v^\top G v$ is $C^\infty$ and strictly positive, since $G = J_f^\top J_f$ is positive definite. The function $\mathbb{R}^*_+ \to \mathbb{R}^*_+ : x \to \sqrt{x}$ is also $C^\infty$. Finally, $x \to \mathbb{E}[x]$ is by definition differentiable. By composition, $F(v)$ is a $C^\infty$ function on the slit tangent bundle $\mathcal{T}M \setminus \{0\}$.

Lemma B.1.3. *The function $F(v) = \mathbb{E}\left[\sqrt{v^\top G v}\right]$ satisfies the strong convexity criterion.*

Proof. Proving that $F$ satisfies the strong convexity criterion is equivalent to showing that the Hessian matrix $H = \frac{1}{2}\mathrm{Hess}(F(v)^2)$ is strictly positive definite. Thus, we need to prove that $\forall w \in \mathbb{R}^q \setminus \{0\}$, $w^\top H w > 0$. According to Lemma B.1.1, because the expectation is a positive function, it is straightforward to see that $\forall w \in \mathbb{R}^q \setminus \{0\}$, $w^\top H w \geq 0$. The tricky part of this proof is to show that $w^\top H w > 0$. This can be obtained if one of the terms ($F \cdot \mathrm{Hess}(F)$ or $\nabla F^\top \nabla F$) is strictly positive. First, let's decompose $H$ as the sum of matrices: $H = F\,\mathrm{Hess}(F) + \nabla F^\top \nabla F$ (Lemma B.1.1), with:

$$F\cdot\operatorname{Hess}(F)=\mathbb{E}\left[(v^{\top}Gv)^{\frac{1}{2}}\right]\mathbb{E}\left[(v^{\top}Gv)^{-{\frac{3}{2}}}\left((v^{\top}Gv)G-Gv(Gv)^{\top}\right)\right],\qquad\nabla F^{\top}\nabla F=\mathbb{E}\left[(v^{\top}Gv)^{-{\frac{1}{2}}}Gv\right]\mathbb{E}\left[(v^{\top}Gv)^{-{\frac{1}{2}}}Gv\right]^{\top}.$$

We will study two cases: when $w \in \mathrm{span}(v)$, and when $w \notin \mathrm{span}(v)$. We will always assume that $v \neq 0$, and so by definition: $F(v) > 0$.

Let $w \in \mathrm{span}(v)$. We will show that $w^\top \nabla F^\top \nabla F w > 0$. We have $w = \alpha v$, $\alpha \in \mathbb{R}$. Because $F$ is 1-homogeneous and using Euler's theorem, we have: $\nabla F(v) v = F(v)$. Then $(\alpha v)^\top \nabla F^\top \nabla F (\alpha v) = \alpha^2 F^2$, and $\alpha^2 F(v)^2 > 0$.

Let $w \notin \mathrm{span}(v)$. $F$ being a scalar function, we have: $w^\top F\,\mathrm{Hess}[F] w = F\, w^\top \mathrm{Hess}[F] w$. We would like to show that $w^\top \mathrm{Hess}[F] w > 0$. The strategy is the following: if we prove that the kernel of $\mathrm{Hess}[F]$ is equal to $\mathrm{span}(v)$, then $w \notin \mathrm{span}(v)$ is equivalent to saying that $w \notin \ker(\mathrm{Hess}[F])$, and we can conclude that $w^\top \mathrm{Hess}[F] w > 0$.

Let's prove $\mathrm{span}(v) \subseteq \ker(\mathrm{Hess}(F))$. We know that $\mathrm{Hess}(F) v = 0$, since $F$ is 1-homogeneous, so we have $\mathrm{span}(v) \subseteq \ker(\mathrm{Hess}(F))$. To obtain the equality, we just need to prove that the dimension of the kernel is equal to 1. Let $z \in \mathrm{span}(Gv)^\perp$, that is, $(Gv)^\top z = 0$. We have $\dim(\mathrm{span}(Gv)) = 1$, and thus $\dim(\mathrm{span}(Gv)^\perp) = q - 1$.
Furthermore, $z^\top \mathrm{Hess}[F] z = z^\top \mathbb{E}\left[G\,(v^\top G v)^{-\frac{1}{2}}\right] z > 0$, so we can deduce that $\dim(\mathrm{im}(\mathrm{Hess}[F])) = q - 1$. Using the rank-nullity theorem, we conclude that $\dim(\ker(\mathrm{Hess}(F))) = q - \dim(\mathrm{im}(\mathrm{Hess}[F])) = 1$, which concludes the proof.

In conclusion, $\forall w \in \mathbb{R}^q \setminus \{0\}$, $w^\top \frac{1}{2}\mathrm{Hess}(F(v)^2)\, w > 0$. The function $F$ satisfies the strong convexity criterion.

Proposition 2.2. Let $G$ be a stochastic Riemannian metric tensor. Then, the function $F_z : \mathcal{T}_z\mathcal{Z} \to \mathbb{R} : u \to \|u\|_F$ defines a Finsler metric, but it is not induced by a Riemannian metric.

Proof. Let's define $F$ as if it were a Riemannian metric: $F : \mathbb{R}^q \times \mathbb{R}^q \to \mathbb{R} : (v_1, v_2) \to \mathbb{E}\left[\sqrt{v_1^\top G v_2}\right]$. If $F$ were a Riemannian metric, then it would be bilinear, which is clearly not the case. Thus, $F$ is not a Riemannian metric. According to Lemmas B.1.2 and B.1.3, $F$ is a Finsler metric.

Proposition 2.3. Let $f$ be a Gaussian process and $J$ its Jacobian, with $J \sim \mathcal{N}(\mathbb{E}[J], \Sigma)$. The Finsler norm can be written as:

$$F_{z}:{\mathcal{T}}_{z}{\mathcal{Z}}\to\mathbb{R}_{+}:v\to\|v\|_{F}={\sqrt{2}}{\sqrt{v^{\top}\Sigma v}}\,{\frac{\Gamma\left({\frac{D}{2}}+{\frac{1}{2}}\right)}{\Gamma\left({\frac{D}{2}}\right)}}\,{}_{1}F_{1}\left(-{\frac{1}{2}},{\frac{D}{2}},-{\frac{\omega}{2}}\right),$$

with $_1F_1$ the confluent hypergeometric function of the first kind and $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$.

Proof. The objective of the proof is to show that, if the Jacobian $J_f$ follows a non-central normal distribution, then, $\forall v \in \mathbb{R}^q$, $\sqrt{v^\top J_f^\top J_f v}$ follows a non-central Nakagami distribution. This is a particular case of the derivation of moments of non-central Wishart distributions, previously shown and studied by Kent & Muirhead (1984); Hauberg (2018b).

By hypothesis, $J_f$ follows a non-central normal distribution: $J_f \sim \mathcal{N}(\mathbb{E}[J], I_D \otimes \Sigma)$. Then, $G = J_f^\top J_f$ follows a non-central Wishart distribution: $G \sim \mathcal{W}_d(D, \Sigma, \Sigma^{-1}\mathbb{E}[J]^\top\mathbb{E}[J])$. According to (Kent & Muirhead, 1984, Theorem 10.3.5), $v^\top G v$ will also follow a non-central Wishart distribution: $v^\top G v \sim \mathcal{W}_1(D, v^\top\Sigma v, \omega)$, with $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$.

To compute $\mathbb{E}[\sqrt{v^\top G v}]$, we shall look at the derivation of moments. (Kent & Muirhead, 1984, Theorem 10.3.7) states that if $X \sim \mathcal{W}_q(D, \Sigma, \Omega')$, with $q \leq D$, then $\mathbb{E}[(\det(X))^k] = (\det\Sigma)^k\, 2^{qk}\, \frac{\Gamma_q(\frac{D}{2}+k)}{\Gamma_q(\frac{D}{2})}\, {}_1F_1\left(-k, \frac{D}{2}, -\frac{1}{2}\Omega'\right)$. We directly apply the theorem to our case, knowing that $v^\top G v$ is a scalar, so $\det(v^\top G v) = v^\top G v$, $q = 1$, and $k = \frac{1}{2}$:

$$\|v\|_{F}:=\mathbb{E}[{\sqrt{v^{\top}Gv}}]={\sqrt{2}}{\sqrt{v^{\top}\Sigma v}}\,{\frac{\Gamma({\frac{D}{2}}+{\frac{1}{2}})}{\Gamma({\frac{D}{2}})}}\,{}_{1}F_{1}\left(-{\frac{1}{2}},{\frac{D}{2}},-{\frac{1}{2}}\omega\right).$$

Lemma B.1.4. *The length of a curve using a Finsler metric is invariant by reparametrisation.*

Proof. The proof is similar to the one obtained on a Riemannian manifold (Lee (2013), Proposition 13.25), where we make use of the homogeneity property of the Finsler metric. Let $(M, F)$ be a Finsler manifold and $\gamma : [a, b] \to M$ a piecewise smooth curve segment. We call $\tilde\gamma$ a reparametrisation of $\gamma$, such that $\tilde\gamma = \gamma \circ \phi$ with $\phi : [c, d] \to [a, b]$ a diffeomorphism. We want to show that $L_F(\gamma) = L_F(\tilde\gamma)$.

$$L_{F}(\tilde{\gamma})=\int_{c}^{d}F(\dot{\tilde{\gamma}}(t))\,dt=\int_{c}^{d}F\Big(\frac{d}{dt}(\gamma\circ\phi(t))\Big)\,dt=\int_{\phi^{-1}(a)}^{\phi^{-1}(b)}|\dot{\phi}(t)|\,F(\dot{\gamma}\circ\phi(t))\,dt=\int_{a}^{b}F(\dot{\gamma}(t))\,dt=L_{F}(\gamma).$$
Lemma B.1.5. *If a curve globally minimises its energy on a Finsler manifold, then it also globally minimises its length, and the Finsler function $F$ of the velocity vector along the curve is constant.*

Proof. The curve energy and the curve length are defined as $E_F(\gamma) = \int_0^1 F^2(\dot\gamma(t))\,dt$ and $L_F(\gamma) = \int_0^1 F(\dot\gamma(t))\,dt$, with $\gamma : [0, 1] \to \mathbb{R}^d$. Let's define two real-valued functions $f : \mathbb{R} \to \mathbb{R} : t \mapsto F(\dot\gamma(t))$ and $g : \mathbb{R} \to \mathbb{R} : t \mapsto 1$. Applying the Cauchy-Schwarz inequality, we directly obtain:

$$\left(\int_{0}^{1}F(\dot{\gamma}(t))\,dt\right)^{2}\leq\int_{0}^{1}F(\dot{\gamma}(t))^{2}\,dt\cdot\int_{0}^{1}1^{2}\,dt,\quad\mathrm{which~means}\quad L_{F}(\gamma)^{2}\leq E_{F}(\gamma).$$

The equality is obtained exactly when the functions $f$ and $g$ are proportional, hence when the Finsler function is constant.

## B.2 Comparison Of Riemannian And Finsler Metrics

We have defined both a Riemannian ($g : (v_1, v_2) \to v_1^\top \mathbb{E}[G] v_2$) and a Finsler ($F : (x, v) \to \mathbb{E}[\sqrt{v^\top G v}]$) metric, in the hope of computing the average length between two points on a random manifold created by the random field $f$: $G = J_f^\top J_f$. The main idea of this section is to better compare those two metrics and to what extent they differ in terms of length, energy and volume.

From now on, $f : \mathbb{R}^q \to \mathbb{R}^D$ will always be defined as a stochastic non-central Gaussian process. Its Jacobian $J_f$ also follows a non-central Gaussian distribution, $G = J_f^\top J_f$ a non-central Wishart distribution, and $F(x, v) = \mathbb{E}[\sqrt{v^\top G v}]$ a non-central Nakagami distribution (Proposition 2.3). The Finsler metric can be written in closed form.

In Section B.2.1, we will see that the Finsler metric is upper and lower bounded by two Riemannian tensors (Proposition 3.1), and we can deduce an upper and lower bound for the length, the energy and the volume (Corollary 3.1). Then, in Section B.2.2, we will show that the relative difference between the Finsler norm and the Riemannian induced norm is always positive and upper bounded by a term that is inversely proportional to the number of dimensions $D$ (Proposition 3.3). Similarly, we will deduce the same for the length, the energy and the volume (Corollary 3.2). From these last results, we can directly conclude in Section B.2.3 that both metrics are equal in high dimensions (Corollary 3.4). A possible interpretation is that in high dimensions the data distribution obtained on those manifolds becomes more and more concentrated around the mean, reducing the variance term to zero. The manifold becoming deterministic, both metrics become equal.

Remark. *Most of the following proofs will be a bit technical, as they rely on the closed-form expression of the non-central Nakagami distribution. Once the main propositions are proved, obtaining the corollaries is straightforward. While we do not have a closed-form expression of the indicatrix, we will show that it is a monotonic function which can be upper and lower bounded.*

## B.2.1 Bounds On The Finsler Metric

Proposition 3.1. We define $\alpha = 2\left(\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\right)^2$. The Finsler norm $\|\cdot\|_F$ is bounded by two norms, $\|\cdot\|_{\alpha\Sigma}$ and $\|\cdot\|_R$, induced by the two respective Riemannian metric tensors: the covariance tensor $\alpha\Sigma_z$ and the expected metric tensor $\mathbb{E}[G_z]$:

$$\forall(z,v)\in{\mathcal{Z}}\times{\mathcal{T}}_{z}{\mathcal{Z}}:\ \|v\|_{\alpha\Sigma}\leq\|v\|_{F}\leq\|v\|_{R}.$$

Proof.
Let's first recall that the Finsler function can be written as:

$$\|v\|_{F}:=F(v)=\sqrt{2}\sqrt{v^{\top}\Sigma v}\,\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\cdot{}_{1}F_{1}\left(-\frac{1}{2},\frac{D}{2},-\frac{1}{2}\omega\right).$$

The confluent hypergeometric function is defined as ${}_1F_1(a, b, z) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k} \frac{z^k}{k!}$, with $(a)_k$ and $(b)_k$ being the Pochhammer symbols. Note that, despite their confusing notation, they are defined as rising factorials. By definition, we have: $\frac{(a)_k}{(b)_k} = \frac{\Gamma(a+k)}{\Gamma(b+k)} \frac{\Gamma(b)}{\Gamma(a)}$. We can use the Kummer transformation to obtain: ${}_1F_1(a, b, -z) = e^{-z}\, {}_1F_1(b - a, b, z)$. Replacing $a = -\frac{1}{2}$, $b = \frac{D}{2}$ and $z = \frac{1}{2}\omega$, we finally get:

$$F(v)={\sqrt{2}}{\sqrt{v^{\top}\Sigma v}}\cdot e^{-z}\sum_{k=0}^{\infty}{\frac{\Gamma({\frac{D}{2}}+{\frac{1}{2}}+k)}{\Gamma({\frac{D}{2}}+k)}}{\frac{z^{k}}{k!}}.$$

1) Let's show that $\forall v \in \mathcal{T}_xM : \sqrt{v^\top \alpha\Sigma v} \leq F(v)$, with $\alpha = 2\left(\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\right)^2$.

The Pochhammer symbol is defined as $(x)_k = x(x+1)\cdots(x+k-1) = \frac{\Gamma(x+k)}{\Gamma(x)}$. For $x \in \mathbb{R}^*_+$, we have: $(x)_k \leq (x + \frac{1}{2})_k$. Thus, $\frac{\Gamma(\frac{D}{2}+k)}{\Gamma(\frac{D}{2})} \leq \frac{\Gamma(\frac{D}{2}+\frac{1}{2}+k)}{\Gamma(\frac{D}{2}+\frac{1}{2})}$. The Gamma function being strictly positive on $\mathbb{R}_+$, we obtain:

$$\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})} \leq \frac{\Gamma(\frac{D}{2}+\frac{1}{2}+k)}{\Gamma(\frac{D}{2}+k)},$$

and therefore

$$\sqrt{2}\sqrt{v^\top\Sigma v}\,\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\cdot e^{-z}\sum_{k=0}^{\infty}\frac{z^k}{k!} \leq \sqrt{2}\sqrt{v^\top\Sigma v}\cdot e^{-z}\sum_{k=0}^{\infty}\frac{\Gamma(\frac{D}{2}+\frac{1}{2}+k)}{\Gamma(\frac{D}{2}+k)}\frac{z^k}{k!},$$

which, since $e^{-z}\sum_{k=0}^{\infty} z^k/k! = 1$, reads $\sqrt{v^\top \alpha\Sigma v} \leq F(v)$.

2) Let's show that $\forall v \in \mathcal{T}_xM : F(v) \leq \sqrt{v^\top\mathbb{E}[G]v}$. Wendel (1948) proved: $\frac{\Gamma(x+y)}{\Gamma(x)} \leq x^y$, for $x > 0$ and $y \in [0, 1]$. With $x = \frac{D}{2} + k$, $y = \frac{1}{2}$, we obtain $\frac{\Gamma(\frac{D}{2}+\frac{1}{2}+k)}{\Gamma(\frac{D}{2}+k)} \leq \sqrt{\frac{D}{2}+k}$, which leads to: $F(v) \leq \sqrt{2 v^\top\Sigma v}\cdot e^{-z}\sum_{k=0}^{\infty}\sqrt{\frac{D}{2}+k}\,\frac{z^k}{k!}$. Furthermore, $\sum_{k=0}^{\infty} e^{-z}\frac{z^k}{k!} = 1$ and the function $x \to \sqrt{\frac{D}{2}+x}$ is concave. Then by Jensen's inequality: $e^{-z}\sum_{k=0}^{\infty}\sqrt{\frac{D}{2}+k}\,\frac{z^k}{k!} \leq \sqrt{\frac{D}{2} + e^{-z}\sum_{k=0}^{\infty}\frac{z^k}{k!}k}$. Knowing that $\sum_{k=0}^{\infty}\frac{z^k}{k!}k = ze^z$, we have: $e^{-z}\sum_{k=0}^{\infty}\sqrt{\frac{D}{2}+k}\,\frac{z^k}{k!} \leq \sqrt{\frac{D}{2}+z}$. And with $z = \frac{\omega}{2}$, we obtain: $F(v) \leq \sqrt{\sigma(D+\omega)}$, with $\sigma = v^\top\Sigma v$. From (Kent & Muirhead, 1984, p. 442), the expectation of a non-central Wishart distribution ($G \sim \mathcal{W}_q(D, \Sigma, \Omega)$) is: $\mathbb{E}[G] = D\Sigma + \Sigma\Omega$. This finally leads to:

$$F(v)\leq{\sqrt{v^{\top}\mathbb{E}[G]v}}.$$

Remark. *As a side note, the second part of the inequality, $F(v) \leq \sqrt{v^\top\mathbb{E}[G]v}$, can be obtained directly using Proposition 3.2.*

Corollary 3.1. The length, the energy and the Busemann-Hausdorff volume of the Finsler metric are bounded respectively by the Riemannian length, energy and volume of the covariance tensor $\alpha\Sigma$ (noted $L_{\alpha\Sigma}, E_{\alpha\Sigma}, V_{\alpha\Sigma}$) and the expected metric $\mathbb{E}[G]$ (noted $L_R, E_R, V_R$):

$$\begin{array}{c}{{\forall z\in{\mathcal{Z}},\ L_{\alpha\Sigma}(z)\leq L_{F}(z)\leq L_{R}(z)}}\\ {{E_{\alpha\Sigma}(z)\leq E_{F}(z)\leq E_{R}(z)}}\\ {{V_{\alpha\Sigma}(z)\leq V_{F}(z)\leq V_{R}(z)}}\end{array}$$

Proof. From Proposition 3.1, we have $\forall (x, v) \in M \times \mathcal{T}_xM : \sqrt{h(v)} \leq F(v) \leq \sqrt{g(v)}$, with $h : v \to v^\top \alpha\Sigma v$ and $g : v \to v^\top \mathbb{E}[G] v$ Riemannian metrics. We also define the parametric curve $\gamma$ with, $\forall t \in \mathbb{R}$, $\gamma(t) = x$ and $\dot\gamma(t) = v$.

1) Let's show that $L_{\alpha\Sigma}(x) \leq L_F(x) \leq L_R(x)$. Because of the monotonicity of the Lebesgue integral, we directly have: $\int\sqrt{h(\dot\gamma(t))}\,dt \leq \int F(\dot\gamma(t))\,dt \leq \int\sqrt{g(\dot\gamma(t))}\,dt$.

2) Let's show that $E_{\alpha\Sigma}(x) \leq E_F(x) \leq E_R(x)$. Since all the functions are positive, we can raise them to the power two, and again, by the monotonicity of the Lebesgue integral, we have: $\int h(\dot\gamma(t))\,dt \leq \int F^2(\dot\gamma(t))\,dt \leq \int g(\dot\gamma(t))\,dt$.
3) Let's show that $V_{\alpha\Sigma}(x) \leq V_F(x) \leq V_R(x)$. We write the vectors $v \in \mathcal{T}_xM$ in hyperspherical coordinates: $v = re$, with $r = \|v\|$ the radial distance and $e$ the angular coordinates. With $v = re$, we have: $r\sqrt{h(e)} \leq r\,F(e) \leq r\sqrt{g(e)} \iff \sqrt{h(e)}^{-1} \geq F(e)^{-1} \geq \sqrt{g(e)}^{-1}$. We want to identify an inequality between the indicatrices, noted $\mathrm{vol}(I_h)$, $\mathrm{vol}(I_g)$, $\mathrm{vol}(I_F)$, formed by the functions $h$, $g$ and $F$. Let's define: $r_h\sqrt{h(e)} = r_g\sqrt{g(e)} = r_F\,F(e) = 1$. For every angular coordinate $e$, we obtain: $r_h \geq r_F \geq r_g$. Intuitively, this means that the Finsler indicatrix will always be bounded by the indicatrices formed by $h$ and $g$. The Busemann-Hausdorff volume of a function $f$ is defined as: $\sigma_B(f) = \mathrm{vol}(\mathbb{B}^n(1))/\mathrm{vol}(I_f)$, with $\mathrm{vol}(\mathbb{B}^n(1))$ the volume of the unit ball and $\mathrm{vol}(I_f)$ the volume of the indicatrix formed by $f$. The previous inequality and the definition of the Busemann-Hausdorff volume imply that: $\mathrm{vol}(I_h) \geq \mathrm{vol}(I_F) \geq \mathrm{vol}(I_g) \Rightarrow \sigma_B(h) \leq \sigma_B(F) \leq \sigma_B(g)$. The functions $g$ and $h$ being Riemannian, we have: $\sigma_B(h) = \sqrt{\det(\alpha\Sigma)}$ and $\sigma_B(g) = \sqrt{\det(\mathbb{E}[G])}$, which concludes the proof.

## B.2.2 Relative Bounds Between The Finsler And The Riemannian Metric

Proposition 3.2. Let $f$ be a stochastic immersion. $f$ induces the stochastic norm $\|\cdot\|_G$, defined in Section 2. The relative difference between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$$0\leq{\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}\leq{\frac{\mathrm{Var}\left[\|v\|_{G}^{2}\right]}{2\mathbb{E}\left[\|v\|_{G}^{2}\right]^{2}}}.$$

Proof. We will directly use a sharpened version of Jensen's inequality obtained by Liao & Berg (2019): let $X$ be a one-dimensional random variable with mean $\mu$ and $P(X \in (a, b)) = 1$, where $-\infty \leq a \leq b \leq +\infty$. Let $\phi$ be a twice-differentiable function on $(a, b)$. We further define: $h(x, \mu) = \frac{\phi(x)-\phi(\mu)}{(x-\mu)^2} - \frac{\phi'(\mu)}{x-\mu}$. Then:

$$\inf_{x\in(a,b)}\{h(x,\mu)\}\mathrm{Var}[X]\leq\mathbb{E}[\phi(X)]-\phi(\mathbb{E}[X])\leq\sup_{x\in(a,b)}\{h(x,\mu)\}\mathrm{Var}[X].$$

In our case, we will choose $\phi : z \to \sqrt{z}$ with $z$ a one-dimensional random variable defined as $z = v^\top G v$, $a = 0$, $b = +\infty$ and $\mu = \mathbb{E}[z]$. Then $h(z, \mu) = (\sqrt{z} - \sqrt{\mu})(z-\mu)^{-2} - (2(z-\mu)\sqrt{\mu})^{-1}$. Because its first derivative $\phi'$ is convex, the function $x \to h(x, \mu)$ is monotonically increasing. Thus:

$$\operatorname*{inf}_{z\in(0,+\infty)}\{h(z,\mu)\}=\operatorname*{lim}_{z\to0}h(z,\mu)=-{\frac{\sqrt{\mu}}{2\mu^{2}}}\quad{\mathrm{and}}\quad\operatorname*{sup}_{z\in(0,+\infty)}\{h(z,\mu)\}=\operatorname*{lim}_{z\to+\infty}h(z,\mu)=0.$$

It finally gives:

$$-{\frac{\sqrt{\mu}}{2\mu^{2}}}\mathrm{Var}[z]\leq\mathbb{E}[{\sqrt{z}}]-{\sqrt{\mathbb{E}[z]}}\leq0.$$

Replacing $\|v\|_F := F(v) = \mathbb{E}[\sqrt{z}]$ and $\|v\|_R := \sqrt{g(v)} = \sqrt{\mathbb{E}[z]} = \sqrt{\mu}$ concludes the proof.

Lemma B.2.1. Let $z \sim \mathcal{W}_1(D, \sigma, \omega)$ follow a one-dimensional non-central Wishart distribution. Then:

$$\frac{\mathrm{Var}[z]}{2\mathbb{E}[z]^{2}}=\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}.$$

Proof. (Kent & Muirhead, 1984, Theorem 10.3.7) states that if $z \sim \mathcal{W}_1(D, \sigma, \omega)$ then $\mathbb{E}[z^k] = \sigma^k\, 2^k\, \frac{\Gamma(\frac{D}{2}+k)}{\Gamma(\frac{D}{2})}\, {}_1F_1\left(-k, \frac{D}{2}, -\frac{1}{2}\omega\right)$. In particular, for $k = 1$ and $k = 2$, we have ${}_1F_1(-1, b, c) = 1 - \frac{c}{b}$ and ${}_1F_1(-2, b, c) = 1 - \frac{2c}{b} + \frac{c^2}{b(b+1)}$. We also have $\frac{\Gamma(\frac{D}{2}+1)}{\Gamma(\frac{D}{2})} = \frac{D}{2}$ and $\frac{\Gamma(\frac{D}{2}+2)}{\Gamma(\frac{D}{2})} = \frac{D}{2}\left(\frac{D}{2}+1\right)$, which leads to: $\mathbb{E}[z] = \sigma(D+\omega)$ and $\mathbb{E}[z^2] = \sigma^2(2\omega + 2(D+\omega) + (D+\omega)^2)$. Finally, we conclude:

$${\frac{\mathrm{Var}[z]}{\mathbb{E}[z]^{2}}}={\frac{\mathbb{E}[z^{2}]}{\mathbb{E}[z]^{2}}}-1={\frac{2\omega}{(D+\omega)^{2}}}+{\frac{2}{D+\omega}}.$$

Proposition 3.3. Let $f$ be a Gaussian process.
We note $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$, with $J$ the Jacobian of $f$, and $\Sigma$ the covariance matrix of $J$. The relative ratio between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$$0\leq{\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}\leq{\frac{1}{D+\omega}}+{\frac{\omega}{(D+\omega)^{2}}}.$$

Proof. The result is directly obtained using Proposition 3.2 and Lemma B.2.1.

Corollary 3.2. When $f$ is a Gaussian process, the relative ratio between the length, the energy and the volume of the Finsler norm (noted $L_F, E_F, V_F$) and the Riemannian norm (noted $L_R, E_R, V_R$) is:

$$0\leq\frac{L_{R}(z)-L_{F}(z)}{L_{R}(z)}\leq\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}$$
$$0\leq\frac{E_{R}(z)-E_{F}(z)}{E_{R}(z)}\leq\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{2}{D+\omega}+\frac{1+2\omega}{(D+\omega)^{2}}+\frac{2\omega}{(D+\omega)^{3}}+\frac{\omega^{2}}{(D+\omega)^{4}}\right\}$$
$$0\leq\frac{V_{R}(z)-V_{F}(z)}{V_{R}(z)}\leq1-\left(1-\max_{v\in\mathcal{T}_{z}\mathcal{Z}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}\right)^{q}$$

Proof. Let's write $M = \max_{v\in\mathcal{T}_xM}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^2}\right\}$. From Proposition 3.3, we have:

$$0\leq\|v\|_{R}-\|v\|_{F}\leq M\,\|v\|_{R}.$$

1) By the monotonicity of the Lebesgue integral, we can directly integrate the previous norms along a curve $\gamma$, which immediately leads to: $0 \leq L_R(x) - L_F(x) \leq M L_R(x)$.

2) Since all the functions are positive, $0 \leq \|v\|_F \leq \|v\|_R \leq M\|v\|_R + \|v\|_F$ leads to: $\|v\|_F^2 \leq \|v\|_R^2 \leq M^2\|v\|_R^2 + 2M\|v\|_F\|v\|_R + \|v\|_F^2$, and replacing $\|v\|_F \leq \|v\|_R$ in the right-hand term: $\|v\|_F^2 \leq \|v\|_R^2 \leq (M^2 + 2M)\|v\|_R^2 + \|v\|_F^2$, and finally: $0 \leq \|v\|_R^2 - \|v\|_F^2 \leq (M^2 + 2M)\|v\|_R^2$. Again, by the monotonicity of the Lebesgue integral, we directly obtain: $0 \leq E_R(x) - E_F(x) \leq (M^2 + 2M)E_R(x)$.

3) In order to compare the volumes of the Finsler and the Riemannian metrics, we need to compare the volumes of their indicatrices, noted $\mathrm{vol}(I_g)$ and $\mathrm{vol}(I_F)$ respectively. We write the vectors $v \in \mathcal{T}_xM$ in hyperspherical coordinates, with $v = re$, $r = \|v\|$ the radial distance and $e$ the angular coordinates. The volume element in dimension $q$ (the dimension of the latent space) can be written as: $d^qV = r^{q-1}\,dr\,d\Phi$, with $\Phi$ defining the different angles. We will note $r_F$ and $r_g$ the radial distances of the Finsler and Riemannian metrics such that $\|v\|_F = r_F\|e\|_F = 1$ and $\|v\|_R = r_g\|e\|_R = 1$ for a specific angle $e$.

$$\mathrm{vol}(I_{F})-\mathrm{vol}(I_{g})=\int_{\Phi}\left(\int_{0}^{r_{F}}r^{q-1}\,dr-\int_{0}^{r_{g}}r^{q-1}\,dr\right)d\Phi=\int_{\Phi}\frac{r_{F}^{q}}{q}\left(1-\left(\frac{r_{g}}{r_{F}}\right)^{q}\right)d\Phi\leq\int_{\Phi}\frac{r_{F}^{q}}{q}\,d\Phi\cdot\left(1-\inf_{e}\left(\frac{r_{g}}{r_{F}}\right)^{q}\right),$$

and by definition: $\mathrm{vol}(I_F) = \int_\Phi (r_F^q/q)\,d\Phi$. Furthermore, for a specific angle $e$, we have: $r_g/r_F = F(e)/\sqrt{g(e)} \geq 1 - M$, from Proposition 3.3. We have:

$$0\leq{\frac{\mathrm{vol}(I_{F})-\mathrm{vol}(I_{g})}{\mathrm{vol}(I_{F})}}\leq 1-\left(1-M\right)^{q},$$

and by the definition of the Busemann-Hausdorff volume, $\frac{V_R(x)-V_F(x)}{V_R(x)} = \frac{\mathrm{vol}(I_F)-\mathrm{vol}(I_g)}{\mathrm{vol}(I_F)}$, which concludes the proof.

## B.2.3 Implications In High Dimensions

In this section, we want to show that the difference between the Finsler norm and the Riemannian induced norm, as well as their respective functionals, tends to zero at a rate of $\mathcal{O}(\frac{1}{D})$.
We need to be sure that $\omega$ does not grow faster than $D$; in other words, $\omega = \mathcal{O}(D)$. This can be obtained if we assume that every element of the expected Jacobian is upper bounded ($\exists m \in \mathbb{R}_+^*$, $\forall i,j$: $\mathbb{E}[J_{ij}] \leq m$). This happens in at least two cases: (1) $\mathbb{E}[f]$ is Lipschitz continuous; or (2) $f$ is a Gaussian process and its covariance is upper bounded. The latter case happens when the process is defined over a bounded domain.

Lemma B.2.2. Our Finsler metric $v \to \mathbb{E}[\sqrt{v^\top G v}]$ is defined with $v^\top G v \sim \mathcal{W}_1(D, v^\top\Sigma v, \omega)$ and $\omega = (v^\top\Sigma v)^{-1}(v^\top\mathbb{E}[J]^\top\mathbb{E}[J]v)$. If the Finsler manifold is bounded, then $\omega \leq DM$, with $M \in \mathbb{R}_+$.

Proof. By definition, $\Sigma$ does not depend on $D$. We assume the manifold is bounded, which means that every element of the expected Jacobian is upper bounded: $\mathbb{E}[J]_{ij} \leq m$, with $m \in \mathbb{R}_+^*$. We call $\sigma = v^\top\Sigma v \in \mathbb{R}_+^*$. We have:

$$\omega=\sigma^{-1}\sum_{k=1}^{D}\sum_{i=1}^{q}\sum_{j=1}^{q}v_{i}\mathbb{E}[J]_{ki}\mathbb{E}[J]_{kj}v_{j}\leq\sigma^{-1}\sum_{k=1}^{D}m^{2}\|v\|^{2}\leq DM,$$

with $M = \sigma^{-1}m^2\|v\|^2 \in \mathbb{R}_+^*$, and $M$ does not depend on $D$.

Corollary 3.3. Let $f$ be a Gaussian process. In high dimensions, we have:

$${\frac{L_{R}(z)-L_{F}(z)}{L_{R}(z)}}={\mathcal{O}}\left({\frac{1}{D}}\right),\quad{\frac{E_{R}(z)-E_{F}(z)}{E_{R}(z)}}={\mathcal{O}}\left({\frac{1}{D}}\right),\quad{\mathrm{and}}\quad{\frac{V_{R}(z)-V_{F}(z)}{V_{R}(z)}}={\mathcal{O}}\left({\frac{q}{D}}\right).$$

When $D$ converges toward infinity: $L_R\underset{+\infty}{\sim}L_F$, $E_R\underset{+\infty}{\sim}E_F$, and $V_R\underset{+\infty}{\sim}V_F$.

Proof. From Corollary 3.2, we directly obtain the results in high dimensions. We assume that our latent space is bounded; then, by Lemma B.2.2, we have $0 \leq \omega \leq MD$, with $M \in \mathbb{R}_+$. For the length, we have:

$$\frac{L_{R}(x)-L_{F}(x)}{L_{R}(x)}\leq\operatorname*{max}_{v\in\mathcal{T}_{x}\mathcal{M}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}\leq\frac{1+M}{D}.$$

For the energy functional, we have:

$$\frac{E_{R}(x)-E_{F}(x)}{E_{R}(x)}\leq\max_{v\in\mathcal{T}_{x}\mathcal{M}}\left\{\frac{2}{D+\omega}+\frac{1+2\omega}{(D+\omega)^{2}}+\frac{2\omega}{(D+\omega)^{3}}+\frac{\omega^{2}}{(D+\omega)^{4}}\right\}\leq\frac{2+2M}{D}+\frac{1+2M+M^{2}}{D^{2}},$$
$$\limsup_{D\to\infty}D\times\frac{E_{R}(x)-E_{F}(x)}{E_{R}(x)}\leq\lim_{D\to\infty}\left(2(1+M)+\frac{M^{2}+2M+1}{D}\right)=2(1+M).$$

For the volume, we have:

$$\frac{V_{R}(x)-V_{F}(x)}{V_{R}(x)}\leq1-\left(1-\max_{v\in\mathcal{T}_{x}\mathcal{M}}\left\{\frac{1}{D+\omega}+\frac{\omega}{(D+\omega)^{2}}\right\}\right)^{q}\leq1-\left(1-\frac{1+M}{D}\right)^{q}.$$

Using a Taylor series expansion, when $x \sim 0$, we have $1-(1-x)^q = qx + \mathcal{O}(x^2)$. Let us call $\varepsilon \ll 1$ and rewrite the Taylor series:

$$\frac{V_{R}(x)-V_{F}(x)}{V_{R}(x)}\leq q\frac{1+M}{D}+\varepsilon q\frac{1+M}{D},$$
$$\lim_{D\to\infty}\frac{D}{q}\times\frac{V_{R}(x)-V_{F}(x)}{V_{R}(x)}\leq(1+M)(1+\varepsilon).$$

The relative difference between the functionals can converge to zero if they are similar in high dimensions, or if they all diverge to infinity. This latter case does not happen since we assume the latent manifold to be bounded, so the metrics are finite, which concludes the proof.

Corollary 3.4. Let $f$ be a Gaussian process. In high dimensions, the relative ratio between the Finsler norm $\|\cdot\|_F$ and the Riemannian norm $\|\cdot\|_R$ is:

$${\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}={\mathcal{O}}\left({\frac{1}{D}}\right).$$

And, when $D$ converges toward infinity: $\forall v \in \mathcal{T}_z\mathcal{Z}$, $\|v\|_R \underset{+\infty}{\sim} \|v\|_F$.

Proof.
Similarly to Corollary 3.3, assuming that our latent space is bounded, from Lemma B.2.2 we have $0 \leq \omega \leq MD$. From Proposition 3.3, we deduce:

$$0\leq{\frac{\|v\|_{R}-\|v\|_{F}}{\|v\|_{R}}}\leq{\frac{1}{D+\omega}}+{\frac{\omega}{(D+\omega)^{2}}}\leq{\frac{1+M}{D}}.$$

In a bounded manifold, the metrics are finite. We can deduce that they converge to each other in high dimensions.

## C Experiments

## C.1 Datasets

## C.1.1 Font Data

The dataset represents 46 different fonts for each letter (upper and lower case), whose contour is parametrised by a spline (or two splines, depending on the letter used) obtained from at least 500 points (Campbell & Kautz, 2014). In our case, we choose to learn the manifold of the letter f. The dataset is composed of 46 different fonts, each letter being drawn with 1024 points. We reduce this number from 1024 to 256 by keeping one point in every four. The dimension of the observational space is then 256.

## C.1.2 qPCR

The qPCR data, gathered from Guo et al. (2010), was used to illustrate the training of a GP-LVM in Pyro (2022) and is available at the Open Data Science repository (Ahmed et al., 2019). It consists of 437 single-cell qPCR measurements for which the expression of 48 genes has been recorded during 10 different cell stages. We then have 437 data points, 48 observations, and 10 classes. Before training the GP-LVM, the data is grouped by capture time, as illustrated in the Pyro documentation.

## C.1.3 Pinwheel On A Sphere

A two-dimensional pinwheel is created and then projected onto a sphere using a stereographic projection. The final dataset is composed of 1000 points with their coordinates in three dimensions.

## C.2 GP-LVM Training

We learn our two-dimensional latent space by training a GP-LVM (Lawrence, 2003) with Pyro (Bingham et al., 2019). The Gaussian process used is a sparse GP, defined with a kernel (RBF or Matérn) composed of a one-dimensional lengthscale and variance. The parameters are learnt with the Adam optimiser (Kingma & Ba, 2014). The number of steps and the initialisation of the latent space vary with the dataset.

| datasets | pinwheel | font data | qPCR |
|------------------------|------------|-------------|----------|
| Number of data points | 500 | 46 | 437 |
| Number of observations | 3 | 256 | 48 |
| initialisation | PCA | PCA | custom |
| kernel | RBF | Matern52 | Matern52 |
| steps | 17000 | 5000 | 5000 |
| learning rate | 1e-3 | 1e-4 | 1e-4 |
| lengthscale | 0.24 | 0.88 | 0.15 |
| variance | 0.95 | 0.30 | 0.75 |
| noise | 1e-4 | 1e-3 | 1e-3 |

Table 2: Description of the datasets trained with a GP-LVM.

## C.3 Computing Indicatrices

An indicatrix of a function $g$ at a point $x$ is defined as the set $\{v \in \mathcal{T}_x\mathcal{M} \mid g_x(v) < 1\}$. In other words, the indicatrix is the representation of a unit ball in our latent space. If we used a Euclidean metric, the indicatrix in our two-dimensional latent space would be a unit disc, as we would need to solve $\{v \in \mathcal{T}_x\mathcal{M} \mid \|v\| < 1\}$. For a Riemannian metric, the indicatrix is necessarily an ellipse, whose semi-axes are the inverse square roots of the eigenvalues of the metric tensor $G$: $\{v \in \mathcal{T}_x\mathcal{M} \mid v^\top G v < 1\}$. For our Finsler metric, we do not have an analytical solution, so it is difficult to predict the shape of the convex polygon. In this paper, the indicatrices are drawn in the following way: for a single point in our latent space, we compute the values of $v^\top G v$ and $F(x,v)$ for $v$ varying over the tangent space. We then extract the contour where $v^\top G v$ and $F(x,v)$ are equal to 1.
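As a concrete illustration, here is a minimal sketch of this contour-based procedure, combined with the polygon-area step used for the volume measures in Section C.4 (helper names and grid settings are illustrative, not the paper's implementation):

```python
import numpy as np
import matplotlib.pyplot as plt

def indicatrix_polygon(norm_fn, extent=3.0, res=200):
    """Vertices of the level-1 contour of a norm on a 2D tangent plane.

    norm_fn maps an (N, 2) array of tangent vectors v to norm values,
    e.g., v -> sqrt(v^T G v) for the Riemannian case, or the Finsler F.
    """
    s = np.linspace(-extent, extent, res)
    V1, V2 = np.meshgrid(s, s)
    vs = np.stack([V1.ravel(), V2.ravel()], axis=1)
    Z = norm_fn(vs).reshape(res, res)
    cs = plt.contour(V1, V2, Z, levels=[1.0])
    plt.close()                       # only the path vertices are needed
    return cs.allsegs[0][0]           # (k, 2) vertices of the contour

def polygon_area(poly):
    """Shoelace formula for the area of a closed 2D polygon."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Busemann-Hausdorff volume in 2D: vol(B^2(1)) / vol(indicatrix) = pi / area
G = np.array([[2.0, 0.3], [0.3, 0.5]])
riem_norm = lambda vs: np.sqrt(np.einsum('ni,ij,nj->n', vs, G, vs))
V_num = np.pi / polygon_area(indicatrix_polygon(riem_norm))
V_ana = np.sqrt(np.linalg.det(G))     # sanity check for the Riemannian case
```

For the Riemannian case, the numerical volume should match $\sqrt{\det G}$, since the ellipse $v^\top G v < 1$ has area $\pi/\sqrt{\det G}$.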
Computing the area of the indicatrices will be used in Section C.4 to compute the volume measures.

## C.4 Computing The Volume Forms

For the figures used in this paper, by default, the background of the latent space represents the volume measure of the expected Riemannian metric ($V_G = \sqrt{\det\mathbb{E}[G]}$) on a logarithmic scale. In Figure 3, the volume measure of the Finsler metric is also computed.

## C.4.1 Finsler Metric

To compute the volume measure of our Finsler metric, we choose the Busemann-Hausdorff definition, which is the ratio of the volume of a unit ball over the volume of the indicatrix: $V = \mathrm{vol}(B^n(1))/\mathrm{vol}(\{v \in \mathcal{T}_x\mathcal{M} \mid F(x,v) < 1\})$. While our Finsler function has an analytical form, its expression does not allow us to directly solve the inequality $F(x,v) < 1$ for $v \in \mathcal{T}_x\mathcal{M}$. Instead, we approximate the indicatrix as described in Section C.3, using a contour plot and extracting the path vertices. We can then compute the area of the obtained polygon and divide the volume of the unit ball, $\mathrm{vol}(B^2(1)) = \pi$, by this area. The volume measure can then be computed for each point over a grid (32 × 32 in Figure 3), and we interpolate all the other points. Note that this method can only be used when our latent space is of dimension 2.

## C.4.2 Expected Riemannian Metric

There are two ways to compute the volume measure of the expected Riemannian metric. One way is to directly use the metric tensor: $V_G = \sqrt{\det\mathbb{E}[G]}$. Another one is to remember that any Riemannian metric is a Finsler metric, and thus the Busemann-Hausdorff definition also applies to our metric: $V_G = \mathrm{vol}(B^n(1))/\mathrm{vol}(\{v \in \mathcal{T}_x\mathcal{M} \mid v^\top\mathbb{E}[G]v < 1\})$. Solving $v^\top\mathbb{E}[G]v < 1$ for $v \in \mathcal{T}_x\mathcal{M}$ is equivalent to computing the area of an ellipse. For the first method, we can either sample the metric multiple times, which is computationally expensive, or use the fact that our metric tensor is a non-central Wishart matrix: $G = J^\top J \sim \mathcal{W}_q(D, \Sigma, \Sigma^{-1}\mathbb{E}[J]^\top\mathbb{E}[J])$, with $\Sigma$ the covariance of the Jacobian $J$ and $D$ the dimension of the observational space. In this case, its expectation is $\mathbb{E}[G] = \mathbb{E}[J]^\top\mathbb{E}[J] + D\Sigma$. We can access the derivatives of the function $f$ (detailed in Section D.2) and compute both quantities $\mathbb{E}[J]$ and $\Sigma$ needed to estimate the expected metric and its determinant. For the second method, we can compute the area of the ellipse in the same way we compute the Finsler volume measure.

## C.5 Experiments When Increasing The Number Of Dimensions

In Figure 4, we computed the volume ratio and drew indicatrices while varying the number of dimensions, to illustrate that both our Finsler metric and the expected Riemannian metric seem to converge when $D$ increases. The main issue with this experiment is to vary only one factor, the number of dimensions $D$, while keeping the other factors unchanged. This is difficult for two reasons: 1) even with a very low-dimensional observation space, both metrics are already very similar to each other, so it would be difficult to illustrate a convergence while increasing the number of dimensions; 2) the function $f$ needs to be learnt again each time we increase the number of dimensions of the observational space, and the parameters of the Gaussian process will change too. Instead, we try to illustrate our results by computing the stochastic metric tensor $G = J^\top J$ empirically, using its Jacobian $J \sim \mathcal{N}(\mathbb{E}[J], \Sigma)$, a $D \times q$ matrix. The number of dimensions is modified by simply truncating the Jacobian $J$. In Figure 4, the volume ratio is computed for 12 Jacobians obtained with different random parameters $\mathbb{E}[J]$ and $\Sigma$.
The Finsler and Riemannian indicatrices (lower right) are drawn for only one Jacobian, selected randomly.

## D Computations

## D.1 Computing Geodesics With Stochman And Minimising The Curve Energy Functionals

An essential task is to compute shortest paths, or geodesics, between data points in the latent space. Those shortest paths can be obtained in two ways: either by solving a corresponding system of ODEs, or by minimising the curve energy in the latent space. The former being computationally expensive, we favour the second approach, which consists in optimising a parameterised spline on the manifold. This method is already implemented in Stochman (Detlefsen et al., 2021), where we can easily optimise splines by redefining the curve energy function of a manifold class. We need two curve energy functionals: one for the expected Riemannian metric and one for the Finsler metric.

## D.1.1 Curve Energy For The Riemannian Metric

We know that the stochastic metric tensor $G_t$ defined at a point $t$ follows a non-central Wishart distribution. Thus, we can compute its expectation $\mathbb{E}[G_t]$ knowing the Jacobian expectation and covariance, $\mathbb{E}[J_t]$ and $\Sigma$. Section D.2 explains how to compute those quantities. Assuming the spline is discretized into $N$ points, we can compute the curve energy with:

$$E_{G}(\gamma(t))=\int_{0}^{1}\dot{\gamma}(t)^{\top}\mathbb{E}[G_{t}]\dot{\gamma}(t)\,dt\approx\sum_{i=1}^{N}\dot{\gamma}_{i}^{\top}\left(\mathbb{E}[J_{i}]^{\top}\mathbb{E}[J_{i}]+D\Sigma_{i}\right)\dot{\gamma}_{i}.$$

## D.1.2 Curve Energy For The Finsler Metric

In order to compute the curve energy $\mathcal{E}(\gamma)$, we must first derive the expectation $\mathbb{E}[J_t]$ and covariance $\Sigma$ of the Jacobian of $f$, which should follow a normal distribution: $J_i = \partial f_i \sim \mathcal{N}(\mathbb{E}[J], \Sigma)$. We assume the points $\partial f_i$ are independent samples with the same variance drawn from a normal distribution. We can then compute the Finsler metric, which follows a non-central Nakagami distribution (see Proposition 2.2):

$${\mathcal{E}}(\gamma(t))=\int_{0}^{1}F(t,{\dot{\gamma}}(t))^{2}\,dt\approx\sum_{i=1}^{N}2{\dot{\gamma}}_{i}^{\top}\Sigma_{i}{\dot{\gamma}}_{i}\left(\frac{\Gamma(\frac{D}{2}+\frac{1}{2})}{\Gamma(\frac{D}{2})}\right)^{2}{}_{1}F_{1}\left(-\frac{1}{2},\frac{D}{2},-\frac{\omega_{i}}{2}\right)^{2},$$

with ${}_1F_1$ the confluent hypergeometric function of the first kind, $\omega_i = (\dot{\gamma}_i^\top\Sigma_i\dot{\gamma}_i)^{-1}(\dot{\gamma}_i^\top\Omega_i\dot{\gamma}_i)$, and $\Omega_i = \Sigma_i^{-1}\mathbb{E}[J_i]^\top\mathbb{E}[J_i]$. This function has been implemented in PyTorch using the known gradient of the hypergeometric function: $\frac{\partial}{\partial x}{}_1F_1(a,b,x) = \frac{a}{b}\,{}_1F_1(a+1,b+1,x)$.
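For reference, a minimal NumPy/SciPy sketch of this discretized Finsler energy (the paper's actual implementation is in PyTorch with the custom hypergeometric gradient above; array shapes and names here are illustrative assumptions):

```python
import numpy as np
from scipy.special import gammaln, hyp1f1

def finsler_energy(gamma_dots, EJs, Sigmas, D):
    """Discretized Finsler curve energy, instantiating the formula above.

    gamma_dots: (N, q) curve velocities; EJs: (N, D, q) expected Jacobians;
    Sigmas: (N, q, q) Jacobian covariances; D: observation-space dimension.
    """
    # (Gamma(D/2 + 1/2) / Gamma(D/2))^2, computed in log-space for stability
    c = np.exp(2.0 * (gammaln(D / 2 + 0.5) - gammaln(D / 2)))
    energy = 0.0
    for dg, EJ, Sig in zip(gamma_dots, EJs, Sigmas):
        s = dg @ Sig @ dg                          # scalar gamma_dot^T Sigma gamma_dot
        Om = np.linalg.solve(Sig, EJ.T @ EJ)       # Omega_i = Sigma_i^{-1} E[J_i]^T E[J_i]
        w = (dg @ Om @ dg) / s                     # non-centrality omega_i
        energy += 2.0 * s * c * hyp1f1(-0.5, D / 2, -w / 2) ** 2
    return energy
```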
## D.2 Accessing The Posterior Derivatives

We assume that the probabilistic mapping $f$ from the latent variables $X$ to the observational variables $Y$ follows a normal distribution. We would like to obtain the posterior kernel $\Sigma_*$ and expectation $\mu_*$ such that $p(\partial_t f \mid Y, X) \sim \mathcal{N}(\mu_*, \Sigma_*)$. We make the hypothesis that the observed variables are modelled with Gaussian noise whose variance is the same in every dimension. In particular, for the $n$th latent ($x$) and observed ($y$) variable in the $j$th dimension: $y_{n,j} = f_j(x_{n,:}) + \epsilon_n$. Thus, the output variables have the same variance, and the posterior kernel $\Sigma_*$ is then isotropic with respect to the output dimensions: $\Sigma_* = \sigma_*^2 \cdot I_D$.

There are two ways of obtaining the posterior variance and expectation:

- We use the Gaussian process to predict the derivative $\partial_c f$ of the mapping function $f$, and we multiply the obtained posterior kernel by the curve derivative $\partial_t c$, following the chain rule: $\frac{df(c(t))}{dt} = \frac{df}{dc}\cdot\frac{dc}{dt}$ (Section D.2.1).
- We discretize the derivative of the mapping function as the difference of this function evaluated at two close points, and use a linear operation to obtain the posterior variance and expectation: $\partial_t f(c(t)) \sim f(c(t_{i+1})) - f(c(t_i))$ (Section D.2.2).

## D.2.1 Closed-Form Expressions

We assume that $f$ is a Gaussian process. Hence, because differentiation is a linear operation, the derivative of a Gaussian process is also a Gaussian process (Rasmussen & Williams, 2005). The data $Y \in \mathbb{R}^{N\times D}$ follows a normal distribution, so we can infer the partial derivative of one data point $(J^\top)_{ji} = \frac{\partial y_i}{\partial x_j}$, with $i = 1 \dots D$ and $j = 1 \dots d$. We have:

$$\left[\begin{array}{c}{{Y}}\\ {{J^{\top}}}\end{array}\right]=\prod_{i=1}^{D}{\mathcal{N}}\left(\left[\begin{array}{c}{{\mu_{y}}}\\ {{\mu_{\partial y}}}\end{array}\right],\left[\begin{array}{cc}{{K(x,x)}}&{{\partial K(x,x_{*})}}\\ {{\partial K(x_{*},x)}}&{{\partial^{2}K(x_{*},x_{*})}}\end{array}\right]\right),$$

and $J^\top$ can be predicted:

$$p(J^{\top}|Y,X)=\prod_{i=1}^{D}{\mathcal{N}}\left(\mu_{*},\Sigma_{*}\right),$$

with:

$$\mu_{*}=\partial K(x_{*},x)\cdot K(x,x)^{-1}\cdot(y-\mu_{y})+\mu_{\partial y},$$
$$\Sigma_{*}=\partial^{2}K(x_{*},x_{*})-\partial K(x_{*},x)\cdot K(x,x)^{-1}\cdot\partial K(x,x_{*}).$$

Finally, $\partial_t f$ is obtained:

$$p(\partial_{t}f(c(t))|f(x),x)=\prod_{i=1}^{D}{\mathcal{N}}\left(\dot{c}\,\mu_{*},\;\dot{c}^{\top}\Sigma_{*}\dot{c}\cdot I_{D}\right).$$

## D.2.2 Discretization

One can notice that $\partial_t f(c(t)) \sim f(c(t_{i+1})) - f(c(t_i))$. We know that $f(c(t_{i+1}))$ and $f(c(t_i))$ both follow a normal distribution:

$$\left[\begin{array}{c}{{f(c(t_{i}))}}\\ {{f(c(t_{i+1}))}}\end{array}\right]=\prod_{j=1}^{D}{\mathcal{N}}\left(\left[\begin{array}{c}{{\mu_{i}}}\\ {{\mu_{i+1}}}\end{array}\right],\left[\begin{array}{cc}{{\sigma_{ii}^{2}}}&{{\sigma_{i,i+1}^{2}}}\\ {{\sigma_{i+1,i}^{2}}}&{{\sigma_{i+1,i+1}^{2}}}\end{array}\right]\right).$$

If $Y = AX$ is an affine transformation of a multivariate Gaussian $X \sim \mathcal{N}(\mu, \sigma^2)$, then $Y$ is also a multivariate Gaussian with $Y \sim \mathcal{N}(A\mu, A^\top\sigma^2 A)$. In our case, we choose $A^\top = [-1, 1]$. We have:

$$f(c(t_{i+1}))-f(c(t_{i}))\sim{\mathcal{N}}(\mu_{*},\sigma_{*}^{2}\cdot I_{D}),$$

with:

$$\mu_{*}=\mu_{i+1}-\mu_{i},$$
$$\sigma_{*}^{2}=\sigma_{ii}^{2}+\sigma_{i+1,i+1}^{2}-2\sigma_{i,i+1}^{2}.$$

![35_image_0.png](35_image_0.png)

Figure 12: Illustration of the derivatives obtained with a trained GP on a simple parametrised function: both methods give the correct derivatives if enough points are sampled.
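A minimal sketch of this discretized derivative posterior, per output dimension (variable names are illustrative):

```python
import numpy as np

def derivative_posterior(mu_i, mu_ip1, cov2):
    """Posterior of f(c(t_{i+1})) - f(c(t_i)) as an affine map of a Gaussian.

    mu_i, mu_ip1: posterior means of f at the two nearby curve points;
    cov2: 2x2 posterior covariance [[s_ii, s_i_ip1], [s_i_ip1, s_ip1_ip1]].
    """
    A = np.array([-1.0, 1.0])
    mu_star = A @ np.array([mu_i, mu_ip1])   # = mu_{i+1} - mu_i
    var_star = A @ cov2 @ A                  # = s_ii + s_{i+1,i+1} - 2 s_{i,i+1}
    return mu_star, var_star

# Example: nearby curve points are highly correlated, so the difference has small variance
mu, var = derivative_posterior(0.10, 0.12, np.array([[1.0, 0.98], [0.98, 1.0]]))
```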
Review 1: Summary: The work studies tools for investigating the latent space geometry of generative models. It first notes that traditionally, one would like to investigate said geometry by pulling back a Riemannian metric (say, the Euclidean metric) from the data manifold. Since generative models are often stochastic, previous work approximates the stochastic pullback metric by its expectation. However, this is not fully principled since the geodesics derived from the expected Riemannian metric do not correspond to the expected length-minimizing curves. To resolve this, the work proposes a Finsler metric whose geodesics explicitly minimize the expected length of the pullback metric. The paper introduces bounds to compare the expected Riemannian metric with this Finsler metric and further prove that the metrics converge to each other at a rate of $\mathcal{O}(1/D)$. Several experiments are given in Section 4; both synthetic test tasks that demonstrate these metrics can behave quite differently for low dimensions, as well as test tasks that demonstrate these metrics behave almost identically for high dimensions. The ramifications of the paper are that it provides a justification for using the expected Riemannian metric (used in papers such as Tosi et al. (2014) and Arvanitidis et al. (2018)) for practical implementations when the data dimension is quite high (e.g. more than $100$). Strengths and Weaknesses: ## Strengths 1. The paper is novel in investigating the metric whose geodesics explicitly minimize the expected length of the pullback metric via the Finsler geometry framework. It does indeed prove that this metric converges to the expected Riemannian metric for high dimensions, providing a practical justification for using the Riemannian metric for high dimensional data. 2. Ideas in the paper are clearly presented, Figures 2-4 provide great visual aids for the presented material. ## Weaknesses 1. The impact of the paper is relatively marginal given the fact that the largest result is to provide a justification for a previous practice in a niche subfield. The test tasks seem very simple/toy, but are ultimately sufficient to demonstrate what the paper wanted. I would have liked to see more investigation into the low-dimensional cases where the Riemannian and Finsler metrics are very different, particularly for low dimensions. Are there any (common) cases for which the traditional Riemannian approximation completely fails/misrepresents geometry, but the Finsler metric does well (in essence, a deeper investigation of what was done in Figure 5). 2. In terms of writing, the paper could be more formal in presenting certain topics. For example, in Section 3.1 writing the parenthetical in "...define a metric (a norm)" is a bit confusing. A metric and a norm are very different, yet the parenthetical seems to imply they are interchangeable. Additionally, language such as "We have to place ourselves in hyperspherical coordinates...", present in the proof for Corollary 3.1, is rather informal and should be altered. For example, "We recast the problem in hyperpsherical coordinates...". ## Verdict The paper presents a novel investigation of the Finsler metric whose geodesics explicitly minimize the expected length of the pullback metric in the context of existing papers such as Tosi et al. (2014), and shows that this metric converges to the expected Riemannian metric for high dimensions, thereby providing a theoretical justification for using this approximation for high dimensions. 
It also empirically investigates the differences between the Finsler and expected Riemannian metric, showing them to be quite similar for high dimensions. Although the impact is a bit limited due to the narrow scope and the writing could use some work, I believe the claims made in the submission are well supported and the findings of the paper would be interesting to some portion of TMLR's audience. As such, both criteria for acceptance to TMLR are satisfied; ergo, I recommend an accept rating for the paper. Requested Changes: Content-wise, I think the work is complete as is. In terms of writing, I gave some parts of the paper that I think should be rewritten/clarified above, in point 2 of the Weaknesses section. I would like to suggest some minor writing corrections that will improve the writing via the following non-exhaustive list: - Abstract, second paragraph: "Generative models are often stochastic causing the data space, the Riemannian metric, and the geodesics to be stochastic as well" -> "Generative models are often stochastic, causing the data space, the Riemannian metric, and the geodesics, to be stochastic as well" - Introduction (Section 1), fifth paragraph: "The pullback metric is then stochastic, and standard differential geodesic constructions no longer applies." -> "The pullback metric is then stochastic, and standard differential geodesic constructions no longer apply." - Introduction (Section 1), sixth paragraph: "...but the derived length are unintuitive quantities that do not correspond to the expected length" -> "...but the derived length is an unintuitive quantity that does not correspond to the expected length" - Section 1.1, fourth paragraph: "...when the dimension of the data space increase" -> "...when the dimension of the data space increases" - Section 1.1, fourth paragraph: "...we perform some experiments in Section 4, that illustrates" -> "...we perform some experiments in Section 4, that illustrate" - Footnote on page 4: "the decoder..." -> "The decoder..." - Section 2.1, first paragraph: "...metric is defined as an Riemannian metric" -> "...metric is defined as a Riemannian metric" - Section 3, third paragraph: "...euclidean" -> "...Euclidean" - Section 4, first paragraph: "...consisting of of pinwheel and concentric circles" -> "...consisting of pinwheel and concentric circles" - Section 4.2, second paragraph: "...Finler geometry" -> "...Finsler geometry" - Section 4.2, first paragraph: "Using stochastic active sets has been shown to give reliable undertainty estimates, and the model also learn more meaning latent representations" -> "Using stochastic active sets has been shown to give reliable uncertainty estimates, and the model also learns more meaningful latent representations" - Discussion (Section 5), third paragraph: "In practice we find that geodesics under the Finsler the Riemannian metric are near identical" -> "In practice we find that geodesics under the Finsler and the Riemannian metrics are nearly identical" Broader Impact Concerns: The paper is of a fairly theoretical nature; as such, I have no ethical concerns and do not believe the paper requires a broader impact statement. ================================================== Review 2: Summary: This paper studies the pull back metric associated with a stochastic map. Pull back metrics are defined by the Jacobian of the map. However, if the map is stochastic, then the Jacobian of the map is stochastic. The paper states computing the expected metric tensor results in Riemannian metric. 
However, if we delay computing the expectation to after computing the quadratic form, then this is no longer Riemannian, but is now instead Finslerian. The paper then derives various properties of this geometry in relation to the Riemannian one obtained by taking the expectation earlier. The paper supports the theoretical results with numerical simulations and even provides an example where the Finserlerian geometry is more useful. Strengths and Weaknesses: **Strengths** 1) The proposed method is interesting and has nice connections to Finslerian geometry 2) Paper does a thorough comparison of the two types of geometry. In particular, the paper does a very good job of connecting the data density, the Finselerian norm and the Riemannian norm. The paper bounds the Finserlerian quantities in terms of the Riemannian ones. The strength of the bound depends on the data density. 3) The theoretical results are supported by experimental validation. The figures are well illustrated and help the reader understand the relevant concepts. **Weaknesses** See questions. See questions **Questions** 0) What is the Jacobian of a stochastic map? Why do we expect in general realizations of the stochastic map to be differentiable? I think there are assumptions missing in definition 2.3. 1) Bottom of page 5, it says $L_F(\gamma) = \int\mathbb{E}\sqrt{\dot{\gamma}^T G \dot{\gamma}}dt = \mathbb{E}[L_R(\gamma)]$. But that second equality does not seem correct. We need to replace $G$ by $\mathbb{E}[G]$ for that to hold. This equality not being true *seems to be the main point of the paper*. 2) What is a Gaussian process on a manifold? Is that for any finite subset of points on the manifold, the marginal distribution is multivariate Gaussian in the standard sense? Requested Changes: In the introduction's first paragraph. It would be nice to distinguish between the maps from $\mathcal{X}$ to $\mathcal{Z}$. Generative models learn $\mathcal{Z} \to \mathcal{X}$. However, the introduction wants to learn the inverse. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper analyzes differential-geometric based metrics used in some analyses of generative models like VAEs or GPs. Strengths and Weaknesses: Strengths ----------- * The method is motivated through a natural geometric framework. * The theory provides a justification for previous methods. * The experiments showcase that the correspondence between the Finslerian and Riemannian metrics does emerge for real-world datasets (MNIST and fashion MNIST). Weaknesses -------------- * The general setting of the paper is rather narrow (proving that a **specific** approximation to a **specific** Riemannian metric on a **specific** type of generative model is justified). I am concerned that this nicheness, when coupled with the rather uncommon background required to understand the paper, can leave this paper out-of-scope of the TMLR audience. Requested Changes: N/A Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept as is Comment: Mentioned above. Claims are justified. All the reviewers believe the paper is now in a better shape than the earlier versions. Please take care of the changes asked in the camera-ready version. Note: this was a major revision of an earlier submission id=bm2XSzY6o7. ==================================================
# Identifying Causal Structure In Dynamical Systems

Dominik Baumann *dominik.baumann@it.uu.se* Department of Information Technology Uppsala University Uppsala, Sweden

Friedrich Solowjow *friedrich.solowjow@dsme.rwth-aachen.de* Institute for Data Science in Mechanical Engineering RWTH Aachen University Aachen, Germany

Karl H. Johansson *kallej@kth.se* Division of Decision and Control Systems and Digital Futures KTH Royal Institute of Technology Stockholm, Sweden

Sebastian Trimpe *trimpe@dsme.rwth-aachen.de* Institute for Data Science in Mechanical Engineering RWTH Aachen University Aachen, Germany

Reviewed on OpenReview: *https://openreview.net/forum?id=X2BodlyLvT*

## Abstract

Mathematical models are fundamental building blocks in the design of dynamical control systems. As control systems are becoming increasingly complex and networked, approaches for obtaining such models based on first principles reach their limits. Data-driven methods provide an alternative. However, without structural knowledge, these methods are prone to finding spurious correlations in the training data, which can hamper generalization capabilities of the obtained models. This can significantly lower control and prediction performance when the system is exposed to unknown situations. A preceding causal identification can prevent this pitfall. In this paper, we propose a method that identifies the causal structure of control systems. We design experiments based on the concept of controllability, which provides a systematic way to compute input trajectories that steer the system to specific regions in its state space. We then analyze the resulting data leveraging powerful techniques from causal inference and extend them to control systems. Further, we derive conditions that guarantee the discovery of the true causal structure of the system. Experiments on a robot arm demonstrate reliable causal identification from real-world data and enhanced generalization capabilities.

## 1 Introduction

When learning models for dynamical control systems, we ideally would like to obtain models that (i) generalize well to new domains, (ii) are interpretable, and (iii) are computationally efficient. However, generalization to new domains and interpretability are particular weaknesses of current black-box machine learning methods (Schölkopf, 2022; Rudin, 2019). We address this problem by first identifying a control system's causal structure and subsequently using this structural knowledge for model learning. As the understanding of causality differs depending on the domain, we first provide some intuition of what kind of structure we seek to identify. For this, following Eichler (2012), we consider two notions. The first is temporal precedence: causes precede their effects. Temporal precedence is the typical causality notion used in systems theory (Hespanha, 2018, p. 31). However, here, we mainly focus on the second notion, physical influence: manipulating causes changes the effects. In other words, we seek experimental routines and tests that enable control systems to learn (i) what is the influence of their internal states on one another, and (ii) which of their inputs influence which internal states. Incorporating the causal structure into the model learning problem targets the three characteristics of ideal models outlined above. A key reason for the shortcomings of most current machine learning algorithms concerning those characteristics is that they mostly rely on a pure statistical analysis of the data (Pearl, 2018).
Thus, when considering stochastic systems and finite sample sizes, these algorithms will, due to spurious correlations, typically find connections between variables, although those variables do not influence one another based on the underlying physical equations. This can lead to catastrophic errors when extrapolating outside the training data, thus, diminishing generalization capabilities. Further, such models are less interpretable since it is unclear whether connections between variables in the model indicate a causal influence or merely spurious correlation found in the training data. A causal analysis mitigates both problems since it identifies which variables actually have a causal influence on each other. Incorporating this knowledge into the model learning problem yields more interpretable models that generalize better to new domains (Pearl & Mackenzie, 2018). Lastly, when only connections between variables that actually have a causal influence on one another are considered, this also reduces the model's parameter space. Therefore, the causal analysis also helps make the model computationally more efficient. In this paper, we propose an algorithm that automatically identifies the causal structure of a control system. The problem of inferring causal structure from observational data, i.e., given data for which we cannot influence the data generating process, has been addressed using various methods, see Spirtes et al. (2000); Peters et al. (2017) for an overview. For control systems, solely relying on observational data is not necessary. Such systems are equipped with an input, which they can use to actively conduct experiments for causal inference. Causal inference from experiments, or interventions, has also been studied, most prominently in the context of the do-calculus (Pearl, 1995). However, there it is assumed that state variables can be directly influenced by the input, which is often not possible in control systems. In control systems, it is essential to consider a proper notion of *controllability*, i.e., how the system can be steered to particular regions in the state space through appropriate input trajectories. To the best of our knowledge, such notions have not been considered yet in the causality literature. In the literature on model learning and system identification, we find many techniques for learning mathematical models for control systems (Ljung, 1999; Nguyen-Tuong & Peters, 2011; Schoukens & Ljung, 2019). In system identification, the problem of the exploding number of parameters for black-box methods has, for instance, been addressed using regularization techniques (Schoukens & Ljung, 2019). While these methods can reduce the number of parameters, they may exclude parameters representing a causal influence or include parameters representing spurious correlation depending on the regularization parameter. Thus, the obtained models may fall short on generalization and interpretation capabilities. Contributions. In summary, existing methods from causal inference do not consider a proper notion of controllability, while existing methods from system identification cannot give formal guarantees on finding the true causal structure. In this paper, we bridge this gap and present an algorithm that identifies a control system's causal structure through an experimental design based on a suitable controllability notion and a subsequent causal analysis, for which we can provide formal guarantees. 
For the causal analysis, we leverage powerful kernel-based statistical tests based on the maximum mean discrepancy (MMD) (Gretton et al., 2012). Since the MMD has been developed for independent and identically distributed (i.i.d.) data, we extend it by deriving conditions under which the MMD still yields valid results and by coming up with a test statistic for hypothesis testing, despite non-i.i.d. data. In terms of controllability, we investigate three different settings: (i) exact controllability, where we can exactly steer the system to a desired position, (ii) stochastic controllability, where we can only steer the system to an $\epsilon$-region around the desired position, and (iii) the special case of linear systems with Gaussian noise that are controllable in the sense of Kalman (Kalman, 1960a), as they represent a widely studied class of systems. We demonstrate the proposed method's applicability by automatically identifying the causal structure of a robotic system and a simulated, nonlinear quadruple tank system. Further, we show improved generalization capabilities for the robotic system and reduced computational complexity for the quadruple tank system, both inherited through the causal identification.

## 2 Related Work

To the best of our knowledge, no other algorithm seeks to identify the causal structure of a dynamical control system through experiments based on a suitable controllability notion. However, several works in causal inference aim to infer the causal structure of dynamical systems, and several methods in system identification seek to reduce the parameter space or identify structural properties of control systems. In this section, we discuss those works.

Causal inference for dynamical systems. Causal inference in dynamical systems or time series has been studied in Demiralp & Hoover (2003); Eichler (2010); Moneta et al. (2011); Malinsky & Spirtes (2018) using vector autoregression, in Peters et al. (2013) based on structural equation models, in Entner & Hoyer (2010), using the fast causal inference algorithm (Spirtes et al., 2000), and in Salvi et al. (2021) and Quinn et al. (2011), applying kernel mean embeddings and directed information, respectively. In Pfister et al. (2019), the authors develop a procedure for learning the causal structure of kinetic systems. A more extensive overview of causal inference methods that can be applied to time-series data, with a particular focus on Earth system sciences, is provided in Runge et al. (2019a). While all these methods make causal inference for dynamical systems, none of them investigates experimental design. Instead, they aim to infer the causal structure from observational data and, thus, need additional assumptions to arrive at statements about the causal structure. Dynamical systems, as considered in this work, can be actively influenced through a control input. Hence, we can design experiments and do not need to rely on the data being sufficiently rich.

Experimental design. A well-known concept for causal inference from experiments is the do-calculus. In the basic setting, a variable is clamped to a fixed value, and the distribution of the other variables conditioned on this intervention is studied (Pearl, 1995). Extensions to more general classes of interventions exist, see, e.g., Yang et al. (2018); Shanmugam et al. (2015), but they consider static models, which is different from the dynamical systems studied herein.
Causal inference in dynamical systems or time series with interventions has been investigated in Eichler (2012); Peters et al. (2022); Mooij et al. (2013); Rubenstein et al. (2018); Sokol & Hansen (2014). However, therein it is assumed that one can directly manipulate the variables, e.g., by setting them to fixed values or forcing them to follow a trajectory, which is typically impossible in practice. None of those works considers various degrees of controllability, as we do in this paper. Thus, they are not readily applicable to control systems.

Model selection and regularization. As an alternative to directly testing causal relations between variables, several methods exist that identify a dynamic model, trading off model complexity and accuracy. Typically, this is done by letting the algorithm select from a set of candidate models. In system identification, the Akaike information criterion (Akaike, 1973) and the Bayesian information criterion (Schwarz, 1978) are two well-known examples of such methods. In neuroimaging, there are dynamic causal models (Friston et al., 2003; Stephan et al., 2010). A third family of methods is symbolic regression techniques (Bongard & Lipson, 2007; Schmidt & Lipson, 2009; Brunton et al., 2016). In all cases, the true causal structure of the system can only be revealed if a model representing this structure is part of the candidate models. Further, they typically use a regularization parameter to find a trade-off between model complexity and accuracy. This parameter penalizes model complexity (e.g., the number of parameters) and rewards goodness of fit. Thus, it also depends on the specific choice of this regularization parameter whether or not these algorithms find a model representing the system's true causal structure.

Structure detection in dynamical systems. Revealing causal relations in a dynamical system can be interpreted as identifying its structure. Related ideas exist in the identification of hybrid and piecewise affine systems (Roll et al., 2004; Lauer & Bloch, 2018). These approaches try to find a trade-off between model complexity and fit but cannot guarantee to find the true causal structure. Further methods that identify structural properties of dynamical systems can be found in topology identification (Materassi & Innocenti, 2010; Shahrampour & Preciado, 2014; van den Hof et al., 2013) and complex dynamic networks (Boccaletti et al., 2006; Liu et al., 2009; Yu, 2010). Those works seek to find interconnections between subsystems instead of identifying a system's inner structure as done herein. Moreover, while the mentioned works often rely on restrictive assumptions such as known interconnections or linear dynamics, our approach can deal with nonlinear systems and does not require prior knowledge.

Kernel mean embeddings. For the causal analysis, we will leverage concepts based on kernel mean embeddings, which have been widely used for causal inference (Peters et al., 2017; Chen et al., 2014; Lopez-Paz et al., 2015). A downside of those methods is that they typically assume that data has been drawn i.i.d. from the underlying probability distributions. Extensions to non-i.i.d. data exist (Chwialkowski & Gretton, 2014; Chwialkowski et al., 2014), but rely on mixing time arguments. Dynamical systems, as investigated in this work, often have large mixing times or do not mix at all (Simchowitz et al., 2018). Therefore, these types of analyses are not sufficient in this case.
## 3 Problem Setting And Main Idea

We consider dynamical control systems of the form

$$x(t)=f(x(0),u(0{:}t),v(0{:}t))\qquad(1)$$

with discrete time index $t \in \mathbb{N}$, dynamics function $f$, state $x(t) \in \mathcal{X} \subset \mathbb{R}^n$, state space $\mathcal{X}$, input $u(t) \in \mathcal{U} \subset \mathbb{R}^m$, input space $\mathcal{U}$, and $v(t) \in \mathbb{R}^n$ an independent random variable sequence. The notation $(0{:}t)$ here denotes the sequence $u(0), u(1), \dots, u(t)$. In the following, we will omit this notation and simply write $u$ or $v$ if we consider the whole trajectory. The description of the system in equation 1 is different from the standard, incremental version $x(t+1) = \tilde{f}(x(t), u(t), v(t))$. Specifically, equation 1 emphasizes the dependence of $x$ at time $t$ on the initial state. Equation 1 can readily be obtained by iterating $\tilde{f}$ starting from $x(0)$. We will lend notation from the do-calculus (Pearl, 1995) for causal inference. In this notation, $P(x(t) \mid \mathrm{do}(x_i(0) = x_i^{\mathrm{I}}(0)))$ defines the conditional probability distribution of $x(t)$ given that we set the initial condition of the $i$th component of the state vector $x$ to a specific value $x_i^{\mathrm{I}}(0)$ while all other components of $x$ and the inputs $u$ are left unaffected. In this article, we show how such do-conditional distributions can be realized in dynamical systems when considering a suitable notion of controllability and how we can use them for causal inference.

## 3.1 Problem Setting

The system description from equation 1 is stochastic and, hence, induces a probability distribution $P(x)$ over trajectories $x$. Based on this, we define non-causality, adapting standard notation from the do-calculus (Pearl, 1995) to dynamical systems as in equation 1:

Definition 1 (Global non-causality). The state variable $x_j$ does not cause $x_i$ if $P(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{I}}(0))) = P(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{II}}(0)))$ for all $x_j^{\mathrm{I}}(0)$ and $x_j^{\mathrm{II}}(0)$. The superscripts I and II denote two experimental designs, i.e., different choices of $x_j(0)$. Similarly, $u_j$ does not cause $x_i$ if $P(x_i \mid \mathrm{do}(u_j = u_j^{\mathrm{I}})) = P(x_i \mid \mathrm{do}(u_j = u_j^{\mathrm{II}}))$ for all $u_j^{\mathrm{I}}$ and $u_j^{\mathrm{II}}$.

The main objective of this article is to develop an algorithm to test for non-causality in the sense of definition 1 with theoretical guarantees. That is, we seek to test whether the initial condition of $x_j$ respectively the input trajectory $u_j$ changes the probability distribution of the resulting $x_i$ trajectory. The do-operator denotes that we force $x_j$ to a specific initial condition and $u_j$ to be a specific input trajectory, while all other initial conditions and input trajectories remain fixed between experiments. While this is a common assumption of the do-calculus, in dynamical systems, we cannot just set state variables to specific values. Instead, we need to consider a proper notion of *controllability*. In the following, we discuss how the do-operator can be applied to dynamical systems, i.e., how we can, based on an appropriate controllability notion, *steer* state variables toward specific values as required to test for non-causality. Subsequently, we develop an algorithm that identifies for each pair $x_i$ and $x_j$ and for each pair $x_i$ and $u_j$ of a control system as in equation 1, whether or not both variables are non-causal according to definition 1.

![4_image_0.png](4_image_0.png)

Figure 1: Experimental design for causal inference. We design two experiments, where all initial conditions and input trajectories are constant except for $x_j(0)$. If the resulting trajectories of $x_i$ are different in both experiments, we have evidence that the change in $x_j(0)$ caused this change.
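To make the experimental design concrete, here is a minimal simulation sketch of the two-experiment comparison on a toy system (the dynamics, dimensions, and noise level are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, u):
    """Toy linear dynamics in which x2 drives x1 but x3 does not (illustrative)."""
    A = np.array([[0.9, 0.5, 0.0],
                  [0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.7]])
    b = np.array([0.0, 1.0, 1.0])
    return A @ x + b * u + 0.01 * rng.standard_normal(3)

def rollout(x0, us):
    xs = [np.asarray(x0, dtype=float)]
    for u in us:
        xs.append(step(xs[-1], u))
    return np.array(xs)

us = np.sin(0.3 * np.arange(20))            # shared open-loop input trajectory
x_I = rollout([0.0, -1.0, 0.0], us)         # experiment I:  x_2(0) = -1
x_II = rollout([0.0, 1.0, 0.0], us)         # experiment II: x_2(0) = +1
# Repeating both experiments and comparing the distributions of the x_1
# trajectories tests whether x_2 causes x_1 (here it does; x_3 would not).
```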
## 3.2 Main Idea

For causal inference, we will exploit that we can influence equation 1 through $u$. To make the notion of how we can influence the system precise, we adopt controllability definitions for stochastic systems (Sunahara et al., 1974; Bashirov & Kerimov, 1997):

Definition 2. The system described by equation 1 is said to be *completely $\epsilon$-controllable in probability $\eta$ in the normed square sense* in the time interval $[0, t_{\mathrm{f}}]$ if for all desired states $x_{\mathrm{des}}$ and initial states $x(0)$ from $\mathcal{X}$, there exists an input sequence $u$ from $\mathcal{U}$ such that $\Pr\{\|x(t_{\mathrm{f}}) - x_{\mathrm{des}}\|_2^2 \geq \epsilon\} \leq 1 - \eta$, with $0 < \eta < 1$.

A variety of methods exist to identify or learn models for systems as in equation 1, e.g., Gaussian process regression (Williams & Rasmussen, 2006) or fitting linear state space models using least squares (Ljung, 1999). In the following, we assume that we can obtain an estimate $\hat{f}$ of the investigated system in equation 1 (including an estimate of the distribution of $v(t)$) using techniques from system identification and model learning. The concrete method that is used to obtain $\hat{f}$ is independent of the developed causal identification procedure. However, in later sections, we will specify requirements the model estimate $\hat{f}$ needs to satisfy. This model estimate $\hat{f}$, obtained without further assumptions or physical insights, will, due to the stochasticity of equation 1, almost surely entail spurious correlation and suggest causal influences that are actually not present in the physical system. Nevertheless, it will allow us to (approximately) steer the system to specific initial conditions and start experiments from there. We propose two types of experiments to test for causal relations. For the first type, we investigate whether $x_j$ causes $x_i$. We conduct two experiments (denoted by I and II) with different initial conditions $x_j^{\mathrm{I}}(0) \neq x_j^{\mathrm{II}}(0)$, while all others are kept the same (cf. figure 1). This can be formalized as

$$x_{\ell}^{\mathrm{I}}(0)=x_{\ell}^{\mathrm{II}}(0)\ \mathrm{for\ all}\ \ell\neq j,\quad x_{j}^{\mathrm{I}}(0)\neq x_{j}^{\mathrm{II}}(0),\quad u_{\ell}^{\mathrm{I}}(t)=u_{\ell}^{\mathrm{II}}(t)\ \mathrm{for\ all}\ \ell,t.\qquad(2\mathrm{a})$$

By comparing the resulting trajectories of $x_i^{\mathrm{I}}$ and $x_i^{\mathrm{II}}$, we can check whether the change in $x_j(0)$ caused a different behavior. For checking the similarity of trajectories, we will use the MMD, whose mathematical definition we provide in the next section. The second type of experiments is analogous to the first, but instead of varying initial conditions, we consider different input trajectories $u_j^{\mathrm{I}} \neq u_j^{\mathrm{II}}$,

$$x_{\ell}^{\mathrm{I}}(0)=x_{\ell}^{\mathrm{II}}(0)\ \mathrm{for\ all}\ \ell,\quad u_{\ell}^{\mathrm{I}}(t)=u_{\ell}^{\mathrm{II}}(t)\ \mathrm{for\ all}\ \ell\neq j,\ \mathrm{for\ all}\ t,\quad u_{j}^{\mathrm{I}}(t)\neq u_{j}^{\mathrm{II}}(t)\ \mathrm{for\ all}\ t.\qquad(2\mathrm{b})$$

Remark 1. Note that for the testing experiments, we design open-loop trajectories. That is, the input cannot depend on the system's current state. This is essential since it creates independent trajectories for which we can leverage the MMD as a similarity measure.

## 3.3 Road-Map

We propose an algorithm that consists of two steps: (i) we design experiments, and (ii) we analyze the resulting data using the MMD. The data obtained from these experiments must fulfill specific requirements to allow for proper causal inference.
In the next section, we provide the mathematical definition of the MMD and deduce the requirements on the experimental data. We then subsequently develop a suitable experimental design based on the controllability notion stated in definition 2. We state conditions under which this experimental design yields data from which the MMD can provably infer the true causal structure in the infinite sample limit and derive a hypothesis test for finitely many samples. Up to this point, we focus on a rigorous convergence analysis. However, this requires us to do many experiments on the real system. In section 5, we propose a heuristic algorithm that is more efficient in terms of the number of experiments and computations but forgoes some of the guarantees. Lastly, in section 6, we demonstrate the method's applicability on a real robotic system and a quadruple tank process and present comparisons with a sparse identification and a causal discovery method on a synthetic linear example.

## 4 Causal Identification For Dynamical Systems

We will now develop the causality testing procedure. First, we introduce the MMD (Gretton et al., 2012), which we shall use as a similarity measure. The MMD can be used to check whether two probability distributions $\mathbb{P}$ and $\mathbb{Q}$ are equal based on samples drawn from these distributions. Let $X$ and $Y$ be samples drawn i.i.d. from $\mathbb{P}$ and $\mathbb{Q}$, respectively. Further, let $\mathcal{H}$ be a *reproducing kernel Hilbert space* (Sriperumbudur et al., 2010), with canonical feature map $\phi: \mathcal{X} \to \mathcal{H}$. The MMD is defined as

$$\operatorname{MMD}(\mathbb{P},\mathbb{Q})=\|\mathbb{E}_{X\sim\mathbb{P}}[\phi(X)]-\mathbb{E}_{Y\sim\mathbb{Q}}[\phi(Y)]\|_{\mathcal{H}}.\qquad(3)$$

The feature map $\phi$ can be expressed in terms of a kernel function $k(\cdot,\cdot)$, where $k(x,y) = \langle\phi(x),\phi(y)\rangle_{\mathcal{H}}$. If the kernel is characteristic, we have $\operatorname{MMD}(\mathbb{P},\mathbb{Q}) = 0$ if, and only if, $\mathbb{P} = \mathbb{Q}$ (Fukumizu et al., 2008; Sriperumbudur et al., 2011). In the remainder of the paper, we always assume a characteristic kernel (e.g., the Gaussian kernel).

Remark 2. In general, also other measures that compare probability distributions may be used for our algorithm. An overview of such methods is provided in Sriperumbudur et al. (2012). A popular example would be the Kullback-Leibler divergence (Kullback & Leibler, 1951). In this paper, we propose to use the MMD since it allows us to compare probability distributions without actually estimating them, provides theoretical guarantees, and can be computed efficiently.

In the following, we derive conditions that allow one to provably identify causal relations. We investigate three settings. First, we discuss the case where we can precisely steer the system to desired initial conditions (i.e., $\epsilon = 0$ in definition 2). We then extend this to $\epsilon > 0$, which requires a stricter controllability definition. Finally, we show that for linear systems with additive Gaussian noise, a widely studied class of systems, the conditions stated by Kalman (Kalman, 1960a) are sufficient, and the identification is substantially easier.

## 4.1 Exact Controllability

When considering control systems, instead of obtaining single i.i.d. samples from stationary distributions, we receive sequences of random variables sampled from a stochastic process as in equation 1. This data is often non-i.i.d. The objects of interest, whose distributions we want to compare, are then the $x_i$ trajectories obtained from two different experimental settings.
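For later reference, a minimal sketch of the standard unbiased finite-sample estimator of $\operatorname{MMD}^2$ with a Gaussian kernel, treating each trajectory as a vector in $\mathbb{R}^T$ (the bandwidth choice and names are illustrative; the paper's own finite-sample statistic is introduced in section 4.4):

```python
import numpy as np

def gaussian_gram(X, Y, bw=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 bw^2)) for row-stacked samples."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bw ** 2))

def mmd2_unbiased(X, Y, bw=1.0):
    """Unbiased estimate of MMD^2 (Gretton et al., 2012) for samples X (m, T), Y (n, T)."""
    m, n = len(X), len(Y)
    Kxx = gaussian_gram(X, X, bw)
    Kyy = gaussian_gram(Y, Y, bw)
    np.fill_diagonal(Kxx, 0.0)   # unbiased: exclude i == j terms
    np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_gram(X, Y, bw)
    return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) - 2.0 * Kxy.mean()
```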
To simplify notation, we denote a trajectory obtained from the first setting by $x_i^{\mathrm{I}}$ and the joint probability distribution over the trajectory states by $\mathbb{P}^{\mathrm{I}}$, and equivalently for $x_i^{\mathrm{II}}$ and $\mathbb{P}^{\mathrm{II}}$. We sample from $\mathbb{P}^{\mathrm{I}}$ and $\mathbb{P}^{\mathrm{II}}$ by designing two experiments as in equation 2 with fixed length $T$ and repeating each experiment $m$ times. That is, we obtain $2m$ sequences of $T$ random variables sampled at discrete intervals of fixed length (i.e., $t \to t+1$ always has the same length). These samples we denote by $\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}} \in \mathbb{R}^{m\times T}$. Note that (i) all $m$ sequences are sampled from the same distributions $\mathbb{P}^{\mathrm{I}}$ and $\mathbb{P}^{\mathrm{II}}$, (ii) while the distributions are non-stationary along time, the distributions for multiple experiments of fixed length $T$ are identical, and (iii) all $m$ sequences are independent of each other. The MMD then reads

$$\operatorname{MMD}(\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}})=\|\mathbb{E}_{\mathbf{x}_{i}^{\mathrm{I}}\sim\mathbb{P}^{\mathrm{I}}}[\phi(\mathbf{x}_{i}^{\mathrm{I}})]-\mathbb{E}_{\mathbf{x}_{i}^{\mathrm{II}}\sim\mathbb{P}^{\mathrm{II}}}[\phi(\mathbf{x}_{i}^{\mathrm{II}})]\|_{\mathcal{H}}.\qquad(4)$$

If we now design experiments using equation 2, we can check the similarity of $x_i$ trajectories with equation 4. Following Szabó & Sriperumbudur (2018), if we use a characteristic kernel in equation 4, the general properties of the MMD test are still valid given that the initial conditions $x^{\mathrm{I}}(0)$ and $x^{\mathrm{II}}(0)$, respectively the input trajectories $u^{\mathrm{I}}$ and $u^{\mathrm{II}}$, are i.i.d., which is the case for our design. That is, $\operatorname{MMD} > 0$ suggests that the distributions of the trajectories are different, so we can conclude that there is a causal influence. However, the other direction is less straightforward: for a system as in equation 1, the MMD may be zero even though variables are dependent, as can be seen in the following example:

Example 1. Assume a control system with $x_1(t+1) = x_1(t)x_2(t)$ and $x_2(t+1) = u(t)$, and an input signal $u(t)$ that is different from 0. If we choose $x_1(0) = 0$, the trajectory of $x_1$ will, despite the fact that $x_2$ clearly has a causal influence on $x_1$, always be 0 no matter the initial condition $x_2(0)$.

To address this, we define the concept of local non-causality:

Definition 3 (Local non-causality). Let $\mathcal{X}_{\mathrm{nc}} \subset \mathcal{X}$ and $\mathcal{U}_{\mathrm{nc}} \subset \mathcal{U}$ with $\mathcal{X}_{\mathrm{nc}} \cup \mathcal{U}_{\mathrm{nc}} \neq \emptyset$. The state variable $x_j$ does *locally not cause* $x_i$ if $P(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{I}}(0))) = P(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{II}}(0)))$ for all $x_j^{\mathrm{I}}(0)$ and $x_j^{\mathrm{II}}(0)$, given that the sequence $x$ is entirely in $\mathcal{X}_{\mathrm{nc}}$ and the sequence $u$ is entirely in $\mathcal{U}_{\mathrm{nc}}$.¹ Similarly, the input variable $u_j$ does *locally not cause* $x_i$ if $P(x_i \mid \mathrm{do}(u_j = u_j^{\mathrm{I}})) = P(x_i \mid \mathrm{do}(u_j = u_j^{\mathrm{II}}))$ for all $u_j^{\mathrm{I}}$ and $u_j^{\mathrm{II}}$, given that $x$ is in $\mathcal{X}_{\mathrm{nc}}$ and $u$ in $\mathcal{U}_{\mathrm{nc}}$. The non-causality becomes global if $\mathcal{X}_{\mathrm{nc}} = \mathcal{X}$ and $\mathcal{U}_{\mathrm{nc}} = \mathcal{U}$.

To properly test for causal relations, we need to ensure that the experimental design in equation 2 yields initial conditions and input trajectories that are not inside $\mathcal{X}_{\mathrm{nc}}$ and $\mathcal{U}_{\mathrm{nc}}$. For this, we propose to design experiments based on the estimated model $\hat{f}$. In particular, we utilize simulated trajectories based on the model $\hat{f}$, which we denote by $(\hat{x} \mid \hat{f})$ and where for each $t$ we have $\hat{x}(t) = \hat{f}(\hat{x}(0), u, \hat{v})$. We then need to make the following assumption about the system described by equation 1 and the estimated model $\hat{f}$:

Assumption 1. Consider a dynamical system as in equation 1 for which $\mathcal{X}_{\mathrm{nc}} \cup \mathcal{U}_{\mathrm{nc}} \neq \emptyset$.
Consider further two independent experimental designs following equation 2 with initial conditions $x^{\mathrm{I}}(0)$, $x^{\mathrm{II}}(0)$ and input trajectories $u^{\mathrm{I}}$, $u^{\mathrm{II}}$ for the first, and initial conditions $x^{\mathrm{III}}(0)$, $x^{\mathrm{IV}}(0)$ and input trajectories $u^{\mathrm{III}}$, $u^{\mathrm{IV}}$ for the second experiment. Assume that all individual inputs $u(t)$ that are part of $u^{\mathrm{III}}$ or $u^{\mathrm{IV}}$ are in $\mathcal{U}_{\mathrm{nc}}$ and that all states $x(t)$ that are part of the simulated trajectories $(\hat{x}^{\mathrm{III}} \mid \hat{f})$ or $(\hat{x}^{\mathrm{IV}} \mid \hat{f})$ are inside $\mathcal{X}_{\mathrm{nc}}$. Further, assume that for the first experiment there exists some $u(t)$ or some $x(t)$ that is not part of $\mathcal{U}_{\mathrm{nc}}$ respectively $\mathcal{X}_{\mathrm{nc}}$. For such cases, we assume $\operatorname{MMD}(\hat{x}_i^{\mathrm{III}}, \hat{x}_i^{\mathrm{IV}} \mid \hat{f}) < \operatorname{MMD}(\hat{x}_i^{\mathrm{I}}, \hat{x}_i^{\mathrm{II}} \mid \hat{f})$.

This assumption *does not* require that $\hat{f}$ captures the causal structure. That is, if variable $x_j$ does not cause variable $x_i$, the model $\hat{f}$ may still suggest that they are causally related; such modeling errors will then be accounted for through the identification procedure we propose in this paper. However, we require that $\hat{f}$ captures that the influence of $x_j$ on $x_i$ is lower in regions in which they are locally non-causal. Intuitively, the assumption says that if $x_i$ does not influence $x_j$ in certain parts of the state/action space, our model may, due to spurious correlation, still assign some influence of $x_i$ on $x_j$ in those regions. Nevertheless, we expect the model to assign a stronger influence in regions where the influence stems not only from spurious correlations but also from an actual physical influence of $x_i$ on $x_j$. Given a reasonable choice of system identification or model learning technique and a sufficiently rich excitation signal, this assumption will typically be satisfied in practice, as shown in the example below. It further follows that simulated trajectories in regions of local non-causality have a smaller MMD than trajectories in other regions.

Example 1 (cont). Given assumption 1, if we simulate experiments using equation 2a and compute the MMD for the resulting $\hat{x}_1$, the MMD will be *lower* if we choose $\hat{x}_1(0) = 0$ than for any other choice of $\hat{x}_1(0)$. To validate assumption 1, we identify the system using a regularization-based nonlinear system identification algorithm called sparse identification of nonlinear dynamics (SINDy) (Brunton et al., 2016). By exciting the system for 500 time steps and using the SINDy implementation from de Silva et al. (2020), we receive a model. Simulating the MMD for different initial conditions $x_1(0)$ for $x_2^{\mathrm{I}}(0) = -10$ and $x_2^{\mathrm{II}}(0) = 10$ then reveals that the MMD has a minimum at $x_1(0) = 0$ (cf. figure 2), i.e., the identified model satisfies assumption 1. We will introduce the finite sample approximation for the MMD used in figure 2 in section 4.4, and we provide a further example to empirically verify assumption 1 for a system with a hysteresis in appendix B.

We now specify the experimental design. To avoid regions of local non-causality, we propose to maximize the MMD given the model estimate.

¹ In general, there may exist different $\mathcal{X}_{\mathrm{nc}}^{ij}$ for each combination of $x_i$ and $x_j$ (and likewise for $\mathcal{U}_{\mathrm{nc}}$). We can also cover this case. However, we omit it here to simplify notation.

![7_image_0.png](7_image_0.png)

Figure 2: MMD of simulated experiments with different initial conditions $x_1(0)$ for the system from example 1. The MMD has a minimum at $x_1(0) = 0$, i.e., it reflects the local non-causality. The finite sample approximation $\widehat{\operatorname{MMD}}{}^2$ of the MMD used in this figure will be introduced in section 4.4.
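This validation step can be sketched with the pysindy package as follows (a minimal, illustrative reconstruction for the toy system of example 1; the excitation signal and library settings are assumptions, not the paper's exact configuration):

```python
import numpy as np
import pysindy as ps

rng = np.random.default_rng(0)

# Excite the toy system x1(t+1) = x1(t) x2(t), x2(t+1) = u(t) for 500 steps
T = 500
u = rng.uniform(-1.0, 1.0, size=T)
x = np.zeros((T, 2))
x[0] = [0.5, 0.5]
for t in range(T - 1):
    x[t + 1, 0] = x[t, 0] * x[t, 1] + 0.01 * rng.standard_normal()
    x[t + 1, 1] = u[t]

# Fit a discrete-time SINDy model with polynomial candidate features
model = ps.SINDy(discrete_time=True,
                 feature_library=ps.PolynomialLibrary(degree=2))
model.fit(x, u=u)
model.print()  # inspect which terms (e.g., x1 * x2) were selected
```

The fitted model $\hat{f}$ can then be rolled out from different $\hat{x}_1(0)$ to reproduce a curve like the one in figure 2.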
$$\max_{x^{\mathrm{I}}(0),\,x^{\mathrm{II}}(0),\,u^{\mathrm{I}},\,u^{\mathrm{II}}}\ \mathrm{MMD}(\hat{x}_{i}^{\mathrm{I}},\hat{x}_{i}^{\mathrm{II}}\mid\hat{f})\quad\text{subject to}\ x_{\ell}^{\mathrm{I}}(0)=x_{\ell}^{\mathrm{II}}(0)\ \text{for all}\ \ell\neq j,\quad u_{\ell}^{\mathrm{I}}(t)=u_{\ell}^{\mathrm{II}}(t)\ \text{for all}\ \ell,t.\tag{5a}$$

We will discuss how to handle this optimization problem in practice in section 5.2. In particular, in practice, we typically do not require a global solution to equation 5a. If we want to check whether $u_j$ causes $x_i$, we choose input trajectories and initial conditions by solving

$$\max_{x^{\mathrm{I}}(0),\,x^{\mathrm{II}}(0),\,u^{\mathrm{I}},\,u^{\mathrm{II}}}\ \mathrm{MMD}(\hat{x}_{i}^{\mathrm{I}},\hat{x}_{i}^{\mathrm{II}}\mid\hat{f})\quad\text{subject to}\ x_{\ell}^{\mathrm{I}}(0)=x_{\ell}^{\mathrm{II}}(0)\ \text{for all}\ \ell,\quad u_{\ell}^{\mathrm{I}}(t)=u_{\ell}^{\mathrm{II}}(t)\ \text{for all}\ \ell\neq j,t.\tag{5b}$$

Before stating our main theorem, we need one further assumption. Given the system model from equation 1, we could also have systems for which the influence of $x_j$ on $x_i$ becomes apparent only after a certain number of time steps. To guarantee that we identify the true causal structure, we need to assume that this delay is at most equal to the length of an experiment T.

Assumption 2. Consider a pair $(x_i, x_j)$ for which $x_j$ has a causal influence on $x_i$ *as per definition 3. We assume that changes of* $x_j$, when outside of regions of local non-causality, cause a change in $x_i$ *after at most* T *discrete time steps.*

We can now state the main theorem:

Theorem 1. Consider a completely *ε-controllable system described by equation 1 with* ε = 0 *that fulfills* assumptions 1 and 2. Let experiments be designed according to equation 5 for a fixed experiment length T and repeated infinitely often (i.e., we have $\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}} \in \mathbb{R}^{m \times T}$ with $m \to \infty$*). Then:* $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$ *if, and only if,* $x_j$ respectively $u_j$ does not cause $x_i$ *as per definition 1.*

Proof. Let variables be non-causal. Then, we have by definition 1, $\mathbb{P}(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{I}}(0))) = \mathbb{P}(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{II}}(0)))$. That is, the distribution of $x_i$ in both experiments is equal. Thus, $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$ follows from Gretton et al. (2012). Now, assume $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$. This implies that the distribution of $x_i$ is equal in both experiments (Gretton et al., 2012), i.e., $\mathbb{P}(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{I}}(0))) = \mathbb{P}(x_i \mid \mathrm{do}(x_j(0) = x_j^{\mathrm{II}}(0)))$. This could be the case because (i) $x_i$ and $x_j$ are non-causal or *(ii)* we have that $x \in \mathcal{X}_{\mathrm{nc}}$ and $u \in \mathcal{U}_{\mathrm{nc}}$. Due to assumption 1 and equation 5, there exists a t for which either $x \notin \mathcal{X}_{\mathrm{nc}}$ or $u \notin \mathcal{U}_{\mathrm{nc}}$. Thus, we are in case (i) and $x_j$ does not cause $x_i$. The proof for $u_j$ and $x_i$ follows analogously.
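As an illustration of how the design problem in equation 5a might be tackled without a global solver, the following sketch performs a simple random search over the free initial condition. The helpers `simulate` (rolling out m noisy trajectories under the model $\hat{f}$) and `mmd2_hat` (an estimate of the squared MMD, e.g., as sketched in section 4.4) are assumed to exist; the names, search ranges, and random-search strategy are illustrative and not the paper's implementation.

```python
import numpy as np

def design_experiment_5a(f_hat, simulate, mmd2_hat, i, j, x_nom, u_nom,
                         delta_1, n_candidates=100, rng=None):
    """Random-search sketch of equation 5a: only x_j(0) differs between
    the two designs; all other initial conditions and all inputs coincide."""
    rng = np.random.default_rng() if rng is None else rng
    best_score, best_design = -np.inf, None
    for _ in range(n_candidates):
        x0_I, x0_II = x_nom.copy(), x_nom.copy()
        x0_I[j], x0_II[j] = rng.uniform(-10.0, 10.0, size=2)  # free variables
        # Simulated x_i trajectories under the model f_hat, shape (m, T)
        xi_I = simulate(f_hat, x0_I, u_nom)[:, :, i]
        xi_II = simulate(f_hat, x0_II, u_nom)[:, :, i]
        score = mmd2_hat(xi_I, xi_II)
        if score > best_score:
            best_score, best_design = score, (x0_I, x0_II, u_nom)
        if best_score > delta_1:  # confident we avoid local non-causality
            break
    return best_score, best_design
```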
Remark 3. *Assumption 1 ensures that, if we design experiments using the optimization algorithm in equation 5, we are not inside a region of local non-causality. However, even if we weaken the assumption, the claim of theorem 1 would still hold. Since equation 5 finds the experiment design that maximizes the MMD, the algorithm only fails if our model estimate assumes the influence of one variable on the other to be strongest in regions where there is actually no causal influence. Given a reasonable model class and excitation signal for the initial model estimation, this should not occur in practice. We still keep assumption 1 in its stronger form as it allows us to also make causal inference in practical settings where we may not be able to globally solve equation 5.*

Remark 4. Instead of doing multiple experiments of fixed length, one might consider doing one long experiment. This design has several disadvantages. First, if the process is stable, the probability distribution can become independent of the initial conditions over time. Then, causal influences are only visible during the transient. Second, if we deal with non-ergodic systems, where time and spatial averages are not the same, we only have a valid testing procedure if we do multiple runs of each experiment. Lastly, doing multiple experiments ensures that the different runs are independent of each other. Data generated by a single long experiment can be correlated, and obtaining a valid test is far more involved (Solowjow et al., 2020).

## 4.2 ε-Controllability

For a stochastic system as in equation 1, it is in general impossible to steer the system exactly to the initial conditions suggested by equation 5; i.e., we need to resort to controllability with ε > 0 (cf. definition 2). Nevertheless, even in such cases, it is still possible to guarantee the consistency of the causality testing procedure. However, we need a stricter definition of controllability.

Definition 4. Let the system given in equation 1 be *ε-controllable according to definition 2, and consider some arbitrary, but fixed* $x^*_{\ell,\mathrm{des}}$*. Then, the system given in equation 1 is said to be* completely ε-controllable in distribution *if, for any* x(0) *and any* $x_{\mathrm{des}}$ *with* $x_{\ell,\mathrm{des}} = x^*_{\ell,\mathrm{des}}$*, there exists an input sequence* $u(0\!:\!t_{\mathrm{f}})$ *such that* $x_\ell(t_{\mathrm{f}})$ *always follows the same distribution; i.e.,* $\mathbb{P}(x_\ell(t_{\mathrm{f}})) = \mathbb{P}^*$ *for some* $\mathbb{P}^*$ *that does not depend on* x(0) *or any component of* $x_{\mathrm{des}}$ *except* $x_{\ell,\mathrm{des}} = x^*_{\ell,\mathrm{des}}$, and $\Pr\{\|x(t_{\mathrm{f}}) - x_{\mathrm{des}}\|_2^2 \geq \varepsilon\} \leq 1 - \eta$, with 0 *< η <* 1.

In other words, the definition states that, for any x(0), we can generate input trajectories that guarantee that the fixed component $x^*_{\ell,\mathrm{des}}$ of $x_{\mathrm{des}}$ is matched in distribution. Linear systems with additive Gaussian noise that are controllable following Kalman (1960a) are also controllable in the sense of definition 4, as we show in the appendix. We further need to make an assumption about the initial conditions suggested by equation 5. To steer the system to those initial conditions without ending up in a region of local non-causality, we need that states that are ε-close to those initial conditions are not in a region of local non-causality.

Assumption 3. *For the initial conditions suggested by equation 5, we assume that there exist open balls with radius* $\sqrt{\varepsilon}$ *centered around each element of* $x^{\mathrm{I}}(0)$ and $x^{\mathrm{II}}(0)$ that are outside of the local non-causality sets.

This assumption is not very strong since equation 5 suggests initial conditions for which the influence of $x_j$ respectively $u_j$ on $x_i$ is maximal. Thus, it is unlikely that these initial conditions are close to regions of local non-causality. The main reason why this assumption is needed is to exclude corner cases in which the causal influence only exists in single points of the state and input space. In such cases, the probability of successfully steering the system to those points is zero. However, considering the variables as non-causal may then anyway be reasonable. Equipped with these three assumptions, we can now state:
Corollary 1. Consider a system as in equation 1 that is completely *ε-controllable in distribution according to definition 4 and fulfills assumptions 1, 2, and 3. Let experiments be designed as in equation 5 for a fixed experiment length* T*, trajectories that steer the system to the initial conditions of the experiments be chosen such that* $\mathbb{P}(x_\ell^{\mathrm{I}}(0)) = \mathbb{P}(x_\ell^{\mathrm{II}}(0))$ for all $\ell \neq j$*, and experiments be repeated infinitely often. Then:* $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$ if, and only if, $x_j$ respectively $u_j$ does not cause $x_i$ according to definition 1.

Proof. Let variables be non-causal. Then, we have $\mathbb{P}(x_\ell^{\mathrm{I}}(0)) = \mathbb{P}(x_\ell^{\mathrm{II}}(0))$ for all $\ell \neq j$; thus, also the distribution of the obtained $x_i$ trajectories is equal and we have $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$ (Gretton et al., 2012). Now, assume $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$. This implies that the distribution of $x_i$ is equal in both experiments (Gretton et al., 2012). By assumption 1, existing local non-causalities are reflected by the model and, thus, equation 5 will suggest experiments outside of such regions. Assumption 3 ensures that we can steer the system to those regions. Thus, if distributions are equal, non-causality must be global as in definition 1.

## 4.3 Linear Systems

Local non-causality as in definition 3 is a nonlinear phenomenon. If we assume equation 1 to be linear time-invariant (LTI) with Gaussian noise v(t), we can reveal the true causal structure without the optimization procedure in equation 5, making this case substantially easier. For an LTI system, equation 1 reads

$$x(t)=A^{t}x(0)+\sum_{i=0}^{t-1}A^{i}\bigl(Bu(t-1-i)+v(t-1-i)\bigr),\tag{6}$$

with state transition matrix $A \in \mathbb{R}^{n \times n}$, input matrix $B \in \mathbb{R}^{n \times m}$, and $v(t) \sim \mathcal{N}(0, \Sigma_{\mathrm{v}})$. The system in equation 6 is controllable as per definition 4 if it satisfies the classical controllability condition from Kalman (1960a), i.e., if the matrix $\bigl(B\ AB\ \ldots\ A^{n-1}B\bigr)$ has full row rank, as we show in lemma 3 in the appendix. We can then state the following theorem, whose proof is provided in the appendix:

Theorem 2. Assume an LTI system as in equation 6, whose A and B *matrices satisfy Kalman's controllability condition. Let experiments be designed as in equation 2a and equation 2b, respectively. Then:* $\mathrm{MMD}(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) = 0$ if, and only if, $x_j$ respectively $u_j$ does not cause $x_i$ *as per definition 1.*

## 4.4 Test With Finite Samples

Until here, we derived guarantees in the infinite sample limit. In practice, we can only carry out finitely many experiments, i.e., we have finitely many samples of the random variable sequence $x_i$. Thus, we need a finite sample approximation of the MMD.

Lemma 1. Consider m experiments with fixed length T*, i.e.,* $\mathbf{x}_i \in \mathbb{R}^{m \times T}$, but now we also have $m < \infty$*. An unbiased empirical estimate of the squared population MMD can be computed as*

$$\widehat{\mathrm{MMD}}^{2}(\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}})=\frac{1}{m(m-1)}\sum_{r\neq s}^{m}\bigl(k({}^{r}x_{i}^{\mathrm{I}},{}^{s}x_{i}^{\mathrm{I}})+k({}^{r}x_{i}^{\mathrm{II}},{}^{s}x_{i}^{\mathrm{II}})-k({}^{r}x_{i}^{\mathrm{I}},{}^{s}x_{i}^{\mathrm{II}})-k({}^{s}x_{i}^{\mathrm{I}},{}^{r}x_{i}^{\mathrm{II}})\bigr),\tag{7}$$

where ${}^{r}x_i^{\mathrm{I}}$ denotes element r of the $x_i$ trajectories from experiment I.

Proof. The m trajectories for $x^{\mathrm{I}}$ and $x^{\mathrm{II}}$ are independent and follow $\mathbb{P}^{\mathrm{I}}$ and $\mathbb{P}^{\mathrm{II}}$, respectively. Thus, we are in the same setting as in (Gretton et al., 2012, lem. 6) and the proof follows as shown therein.
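The estimator in equation 7 is straightforward to implement. Below is a minimal sketch with a Gaussian kernel; the kernel bandwidth `sigma` is a free choice on our part, not prescribed by the paper.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)); a characteristic kernel.
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def mmd2_hat(X, Y, kernel=gaussian_kernel):
    """Unbiased estimate of the squared MMD, cf. equation 7.
    X, Y: arrays of shape (m, T) holding the m sampled x_i trajectories
    from experiments I and II. The estimate may be slightly negative."""
    m = X.shape[0]
    total = 0.0
    for r in range(m):
        for s in range(m):
            if r == s:
                continue
            total += (kernel(X[r], X[s]) + kernel(Y[r], Y[s])
                      - kernel(X[r], Y[s]) - kernel(X[s], Y[r]))
    return total / (m * (m - 1))
```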
For a finite sample approximation, we in general have $\widehat{\mathrm{MMD}}^2(\mathbf{x}_i^{\mathrm{I}}, \mathbf{x}_i^{\mathrm{II}}) > 0$ even if $\mathbb{P}^{\mathrm{I}} = \mathbb{P}^{\mathrm{II}}$.² Thus, we need to do a hypothesis test and derive a test statistic. We assume the null hypothesis

$$H_{0}\colon\ \mathbb{P}^{\mathrm{I}}=\mathbb{P}^{\mathrm{II}}\tag{8}$$

and obtain the test statistic from the following result.

Lemma 2. *Assume* $0 \leq k({}^{r}x_i^{\mathrm{I}}, {}^{s}x_i^{\mathrm{II}}) \leq K$*. Then*

$$\mathrm{Pr}_{\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}}}\Bigl\{\widehat{\mathrm{MMD}}^{2}(\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}})-\mathrm{MMD}^{2}(\mathbb{P}^{\mathrm{I}},\mathbb{P}^{\mathrm{II}})>\gamma\Bigr\}\leq\exp\Bigl(\frac{-\gamma^{2}m_{2}}{8K^{2}}\Bigr),$$

where $m_2 := \lfloor m/2 \rfloor$. The hypothesis test of level α for the null hypothesis in equation 8 has the acceptance region

$$\widehat{\mathrm{MMD}}^{2}(\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}})<\frac{4K}{\sqrt{m}}\sqrt{\log(\alpha^{-1})}.$$

Proof. The m trajectories for $x^{\mathrm{I}}$ and $x^{\mathrm{II}}$ are independent and follow $\mathbb{P}^{\mathrm{I}}$ and $\mathbb{P}^{\mathrm{II}}$, respectively. Thus, the proof follows from theorem 10 and corollary 11 in Gretton et al. (2012).

²Note that the unbiased estimate in lemma 1 can even become negative, cf. the discussion after lemma 6 in Gretton et al. (2012).

## 5 Implementation

The results in section 4 show that we are able to detect whether the variables $x_j$ or $u_j$ have a causal influence on $x_i$. In practical implementations, two challenges remain: first, we want to minimize the number of experiments we need to carry out on the physical platform. Second, we may be unable to obtain a global solution to equation 5.

## 5.1 Heuristic Test With Finite Samples

The test statistic provided in lemma 2 enjoys a theoretical foundation. However, the threshold decreases as $m^{-1/2}$, i.e., we need many experiments to not be overly conservative. While more efficient test statistics exist (e.g., achieved through subsampling as discussed in section 6 of Gretton et al. (2012)), generating all this data through experiments on real-world systems is often undesirable, e.g., because it is time-consuming and may cause excessive wear and tear on the hardware. Thus, we propose an alternative test statistic that can be obtained from the model estimate $\hat{f}$. This alternative test statistic is heuristic and forgoes the theoretical properties but is efficient to implement and yields good results in practice, as we show in section 6. We estimate a model $\hat{f}_{i,\mathrm{nc}}$ that assumes $x_i$ and $x_j$ respectively $u_j$ are non-causal (i.e., we do not use the data of $x_j$ respectively $u_j$ when estimating $\hat{f}_{i,\mathrm{nc}}$). We propose to use this model to decide whether to accept the null hypothesis of $x_i$ and $x_j$ respectively $u_j$ being non-causal. That is, we replace our current model $\hat{f}_i$ for state component $x_i$ with $\hat{f}_{i,\mathrm{nc}}$ if

$$\widehat{\mathrm{MMD}}^{2}(\mathbf{x}_{i}^{\mathrm{I}},\mathbf{x}_{i}^{\mathrm{II}})<\mathbb{E}[\widehat{\mathrm{MMD}}^{2}(\hat{\mathbf{x}}_{i}^{\mathrm{I}},\hat{\mathbf{x}}_{i}^{\mathrm{II}}\mid\hat{f}_{i,\mathrm{nc}})]+\nu\sqrt{\mathrm{Var}[\widehat{\mathrm{MMD}}^{2}(\hat{\mathbf{x}}_{i}^{\mathrm{I}},\hat{\mathbf{x}}_{i}^{\mathrm{II}}\mid\hat{f}_{i,\mathrm{nc}})]}.\tag{9}$$

Expected value and variance in equation 9 can be estimated through Monte Carlo simulations. For these simulations, we use the true initial conditions $x^{\mathrm{I}}(0)$ and $x^{\mathrm{II}}(0)$. That way, we account for uncertainty due to unequal initial conditions between experiments. The significance level of the test can be adjusted through ν using Chebyshev's inequality (Chebyshev, 1867).
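A sketch of this heuristic decision rule, reusing the `mmd2_hat` sketch from section 4.4 and assuming a hypothetical helper `simulate_nc()` that produces one Monte Carlo pair of noisy trajectory sets under $\hat{f}_{i,\mathrm{nc}}$ from the true initial conditions:

```python
import numpy as np

def heuristic_test(mmd2_experiment, simulate_nc, nu=1.0, n_mc=100):
    """Sketch of the heuristic test in equation 9. `simulate_nc()` returns
    one pair of simulated x_i trajectory sets (each of shape (m, T)) under
    the non-causal model f_hat_{i,nc}."""
    samples = []
    for _ in range(n_mc):  # Monte Carlo estimate of mean and variance
        x_hat_I, x_hat_II = simulate_nc()
        samples.append(mmd2_hat(x_hat_I, x_hat_II))
    threshold = np.mean(samples) + nu * np.std(samples)
    # Accept the null hypothesis (non-causal) if the experimental MMD is
    # below the threshold; the model f_hat_i is then replaced by f_hat_{i,nc}.
    return mmd2_experiment < threshold
```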
## 5.2 Experimental Design

The framework for testing causality between state variables is summarized in algorithm 1 (and works analogously for inputs). After having obtained an initial model (see ll. 1–2 in algorithm 1), we run equation 5a to find initial conditions and input trajectories for testing non-causality of one specific combination of $x_\ell$ and $x_j$ (l. 6). The optimization in equation 5a may be arbitrarily complex or even intractable, depending on the chosen model class. However, finding a global optimum of equation 5a is not necessary. The goal of the optimization procedure is to avoid regions of local non-causality. We thus optimize equation 5a until it is above a threshold $\delta_1$ to be confident that we are not in a region of local non-causality. In practical applications, we can often already achieve this through a reasonable initialization of the optimization problem, i.e., by choosing initial conditions for $x_j$ as far apart as possible. We then run the designed experiment and collect the data (l. 7). Ideally, we would like to use data from this single experiment to test for the causal influence of $x_j$ on all other state components. Thus, we check for which $x_i$ the experiment yields an expected MMD that is higher than a second threshold $\delta_2$ and do the hypothesis test for all of those (ll. 9–13).

Algorithm 1 Pseudocode of the proposed framework.
1: Excite system with input signal, collect data
2: Obtain $\hat{f}$ through black-box system identification
3: for $x_j$ in x do
4:   $x_{\mathrm{test}} = [x_1, \ldots, x_n]$  ▷ states to be tested for non-causality
5:   for $x_\ell$ in $x_{\mathrm{test}}$ do  ▷ design experiment to test whether $x_j$ causes $x_\ell$
6:     Run equation 5a until $\mathbb{E}[\widehat{\mathrm{MMD}}^2(\hat{x}_\ell^{\mathrm{I}}, \hat{x}_\ell^{\mathrm{II}} \mid \hat{f})] > \delta_1$
7:     Run causal experiments, collect data
8:     for $x_i$ in $x_{\mathrm{test}}$ do  ▷ the experiment designed for $x_j$ and $x_\ell$ might also yield a valid test for other $x_i$
9:       if $\mathbb{E}[\widehat{\mathrm{MMD}}^2(\hat{x}_i^{\mathrm{I}}, \hat{x}_i^{\mathrm{II}} \mid \hat{f})] > \delta_2$ then  ▷ if the empirical MMD is above the threshold for some other $x_i$, also test that $x_i$
10:        Obtain $\hat{f}_{i,\mathrm{nc}}$
11:        Obtain test statistic via Monte Carlo simulations
12:        if equation 9 holds then  ▷ independence test
13:          $\hat{f}_i = \hat{f}_{i,\mathrm{nc}}$
14:          Delete $x_i$ from $x_{\mathrm{test}}$

## 6 Evaluation

We evaluate the framework on three systems. First, we identify the causal structure of one arm of the robot Apollo (Kappler et al., 2018), shown in figure 3, in section 6.1. Then, we demonstrate the causal identification of a simulated quadruple tank process (cf. figure 5) in section 6.2. In both cases, we present the general setup and results in the following, while we defer implementation details to the appendix. Lastly, we discuss a synthetic linear toy example. We use this example to highlight once more the importance of considering a notion of controllability and to compare our method to a sparse identification and a causal discovery algorithm³. For all experiments, we use a Gaussian kernel to compute the MMD.

³Code for both simulation examples is available at https://github.com/baumanndominik/identifying_causal_structure.

Figure 3: The robot showing initial postures for two experiments.

## 6.1 Robot Experiments

We consider kinematic control of the robot; that is, we can command desired angular velocities to the joints, which are then tracked by low-level controllers (taking care, among other things, of the robot dynamics) (Berenz et al., 2021). As measurements, we receive the joint angles. The goal of the causal identification is to learn which joints influence each other and which joints are influenced by which inputs. We consider four joints of the robot arm in the following experiments.
From the robot arm's design, we know its kinematic structure, which is described by $\dot{\varphi}_i = u_i$ for each joint, where $\varphi_i$ denotes the angle of joint i and $u_i$ the control input. That is, we expect each joint velocity $\dot{\varphi}_i$ to depend only on the local input $u_i$ and not on other variables. While the dynamics are approximately linear, we do not rely on this information and are thus in the setting discussed in section 4.2. In the following, we will investigate whether our proposed causal identification can automatically reveal this structure.

Following algorithm 1, we start by identifying a model $\hat{f}$. In this experiment, we estimate a linear state-space model. As expected, the initial model suggests that all joints are linked to each other and to all inputs due to spurious correlations. We then design experiments for causality testing; example trajectories of such experiments are shown in figure 4 (left). The empirical squared MMD of the resulting trajectories is compared with the test statistic. The trajectories in figure 4 (left) already suggest that the experiments are in line with the kinematic model: while the two trajectories of joint 1 for different initial conditions of joint 3 are essentially equal (blue dashed and green dotted lines overlap), the trajectories of joint 3 for different choices of the third input are fundamentally different. This is also revealed through the proposed causality test. The middle plots of figure 4 show the empirical squared MMD (left-hand side of equation 9) and the test threshold (right-hand side of equation 9) for the experiments that were conducted to test the influence of the initial conditions of the third joint (top) and of the third input signal (bottom) on all joints. As can be seen, the causal identification reveals that the third joint does not influence any other joint, and the third input only affects the third joint. Note that the third joint's trajectories are obviously different when choosing different initial conditions for the third joint. However, since this is expected, we subtract the initial condition in this case to investigate whether the movement starting from these distinct initial conditions differs. The remaining experiments (results are contained in the appendix) yield similar results. In summary, the causal identification successfully reveals the expected causal structure.

The experiment design in equation 5 requires us to solve an optimization problem. Nevertheless, algorithm 1 introduces two parameters, $\delta_1$ and $\delta_2$. These can be used to stop the optimization early in case the predicted MMD is high enough for us to be confident that we are not in a region of local non-causality ($\delta_1$) and that we can use the design to test for all influences of a joint or input signal ($\delta_2$). To design $\delta_1$ and $\delta_2$, we can look at the system's noise level and choose them some orders of magnitude higher. As discussed in section 5, a high predicted MMD can often already be achieved through sophisticated guesses, i.e., by choosing initial conditions far apart from each other and diverse input signals. We follow this approach. When testing for the influence of the third joint on others and choosing initial conditions that are far apart, we predict MMDs of around 0.5. The model we estimate initially for the robot arm has a noise standard deviation below 1 × 10⁻⁴. That is, the predicted MMD is well above the noise level of the system and well above the MMDs we find in experiments. Thus, for any choice of $\delta_1$ and $\delta_2$ below 0.5, we can confidently accept this experiment design.
If we were even more conservative and chose them above 0.5, we would need to optimize the experiment design further.

To investigate the generalization capability, we compare predictions of the model $\hat{f}_{\mathrm{init}}$ obtained from the initial system identification and the model $\hat{f}_{\mathrm{caus}}$ that was learned after revealing the causal structure. We use the same training data to estimate the model parameters in both cases. However, for $\hat{f}_{\mathrm{caus}}$, we leverage the obtained knowledge of the causal structure when estimating parameters. In contrast, for $\hat{f}_{\mathrm{init}}$ we do not take any prior knowledge into account. As test data, we use an experiment that was conducted to investigate the influence of the initial condition of joint 3 on the other joints and let both models predict the trajectory of joint 1. For this experiment, the initial angle of joint 3 is close to its maximum value, a case that is not contained in the training data. As can be seen in figure 4 (right), the predictions of $\hat{f}_{\mathrm{caus}}$ (blue) are very close to the true data (green, dashed), i.e., the model can generalize well, while the predictions of $\hat{f}_{\mathrm{init}}$ deviate significantly.

## 6.2 Quadruple Tank Process

The experimental demonstration in the previous section showed that the presented algorithm can successfully identify a real-world robotic system's causal structure. However, the causal structure is relatively straightforward, and the dynamics are approximately linear. To stress the method's ability to perform similarly well on more complex structures and with nonlinear dynamics, we now consider the quadruple tank system from Johansson (2000), illustrated in figure 5. Its continuous-time dynamics are given by

$$\begin{aligned}\dot{x}_{1}&=-\frac{a_{1}}{A_{1}}\sqrt{2gx_{1}}+\frac{a_{3}}{A_{1}}\sqrt{2gx_{3}}+\frac{\zeta_{1}}{A_{1}}u_{1}\\ \dot{x}_{2}&=-\frac{a_{2}}{A_{2}}\sqrt{2gx_{2}}+\frac{a_{4}}{A_{2}}\sqrt{2gx_{4}}+\frac{\zeta_{2}}{A_{2}}u_{2}\\ \dot{x}_{3}&=-\frac{a_{3}}{A_{3}}\sqrt{2gx_{3}}+\frac{1-\zeta_{2}}{A_{3}}u_{2}\\ \dot{x}_{4}&=-\frac{a_{4}}{A_{4}}\sqrt{2gx_{4}}+\frac{1-\zeta_{1}}{A_{4}}u_{1},\end{aligned}\tag{10}$$

where $x_i$ denotes the water level of tank i, $u_i$ the flow rate of pump i, g the gravitational constant (9.81 m s⁻²), and the remaining constants are as in figure 5. The dynamics of the quadruple tank process are thus clearly nonlinear. That is, we cannot expect good performance if we approximate them using a linear state-space model.

Figure 4: Causality tests and model evaluation for the robotic system. Plots on the left show example trajectories of two experiments, in the middle the experimental MMD and the test threshold for joint 3, and on the right predictions based on the initial model and on the refined model after the causal identification.

Figure 5: Schematic of the quadruple tank process from Johansson (2000).

Therefore, we discretize the system and use Gaussian processes (GPs) to model the dynamics; see Williams & Rasmussen (2006) for an introduction and details. In particular, we model each state of the system with a GP, where each GP uses all states and inputs to predict its assigned state variable at the next time step.
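For concreteness, here is a simulation sketch of the discretized tank dynamics. The forward-Euler scheme and unit conventions are our assumptions (the paper only states that the system is discretized), while the parameter values match those reported in appendix D.

```python
import numpy as np

# Parameter values as in appendix D: A_i = 50 cm^2, a_i = 0.242 cm^2,
# zeta_1 = zeta_2 = 0.5; g in cm/s^2 and a 100 ms Euler step.
A = np.full(4, 50.0); a = np.full(4, 0.242); zeta = np.array([0.5, 0.5])
g, dt = 981.0, 0.1

def tank_step(x, u):
    # One Euler step of the quadruple tank dynamics in equation 10.
    x = np.maximum(x, 0.0)  # water levels cannot become negative
    dx = np.array([
        -a[0]/A[0]*np.sqrt(2*g*x[0]) + a[2]/A[0]*np.sqrt(2*g*x[2]) + zeta[0]/A[0]*u[0],
        -a[1]/A[1]*np.sqrt(2*g*x[1]) + a[3]/A[1]*np.sqrt(2*g*x[3]) + zeta[1]/A[1]*u[1],
        -a[2]/A[2]*np.sqrt(2*g*x[2]) + (1 - zeta[1])/A[2]*u[1],
        -a[3]/A[3]*np.sqrt(2*g*x[3]) + (1 - zeta[0])/A[3]*u[0],
    ])
    return x + dt * dx
```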
We proceed similarly as for the robot experiments. We select initial conditions through sophisticated guesses, i.e., far apart from each other. In a simulation, we can set initial conditions and do not need to steer the system there. However, to be more realistic, we sample initial conditions from a normal distribution with mean equal to the selected initial conditions and variance of 1 × 10⁻². Thus, we only consider ε-controllability.

Figure 6: Causality tests for the quadruple tank system. *The plots show the results of the experiments testing the influence of the third tank on all others (left) and the influence of* $u_1$ *on all tanks (right).*

Results of the testing procedure are shown in figure 6. The left plot shows the test statistic and experimental MMD for the third tank's influence on all others. It reveals that the third tank influences itself and the first tank, which is in line with the schematic in figure 5 and the dynamics in equation 10. The right plot illustrates the test statistic and experimental MMD for the influence of $u_1$ on the tanks. These results are also in accordance with figure 5, as the experiments suggest that $u_1$ influences all but the third tank. The remaining results are collected in section D in the appendix. Similarly to the ones in figure 6, they reveal the causal structure that can be inferred from figure 5. In this case, the causal identification results in a reduction of computational complexity. Especially for the third and the fourth tanks, which are only influenced by $u_2$ respectively $u_1$ and by themselves, the input dimension of their corresponding GPs decreases from 10 to 3. The complexity of standard GP regression grows with O(n³) in the number of data points n. Thus, if we can reduce the dimensions that need to be considered and, with that, the number of data points that we use for the regression by 70 % as in this example, we can considerably reduce the computational complexity.

## 6.3 Synthetic Example And Comparison

Lastly, we present a synthetic, linear example. We consider an LTI system as in equation 6, with

$$A=\begin{pmatrix}0.9&-0.75&1.2\\ 0&0.9&-1.1\\ 0&0&0.7\end{pmatrix}\quad B=\begin{pmatrix}0.03&0&0\\ 0&0.06&0\\ 0.07&0&0.05\end{pmatrix}\tag{11}$$

and Gaussian noise with standard deviation 1 × 10⁻⁴. For this example, we apply algorithm 1 without the need for the optimization procedure since the example is linear. Again, we want to stress the importance of an appropriate notion of controllability. That is, instead of assuming that we can set the system to initial conditions, we always *steer* the system to the initial conditions required for each experiment. For this, we employ an approach to set-point tracking that has, for instance, been discussed in Pannocchia et al. (2005). Given a desired state $x_{\mathrm{des}}$, we seek a feedback control law of the form $u = Mx_{\mathrm{des}} + Fx$, i.e., a control law that depends both on the desired state and the current state. We obtain the gain matrix F using standard methods from linear optimal control (Anderson & Moore, 2007), in particular, the linear quadratic regulator (LQR). Thus, we can rewrite the incremental dynamics of the system as

$$x(t+1)=\tilde{A}x(t)+BMx_{\mathrm{des}}+v(t),\tag{12}$$

where $\tilde{A} := A + BF$. We now choose the feedforward term M such that the reference is matched in stationarity, i.e., we want to achieve

$$x=(I-\tilde{A})^{-1}BMx_{\mathrm{des}}.\tag{13}$$

Thus, we need

$$M=((I-\tilde{A})^{-1}B)^{-1}\tag{14}$$

to track the reference point. To compute M, we use the matrices $\hat{A}$ and $\hat{B}$ of the estimated model $\hat{f}$.

Figure 7: Comparison of prediction capabilities. In blue predictions of the true model obtained after causal identification, in green the model obtained from SINDy with parameter choices from Brunton et al. (2016).
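A sketch of this set-point tracking design using the estimated matrices: the LQR weights Q and R are free choices not specified here (identity for illustration), and equation 14 additionally assumes that $(I - \tilde{A})^{-1}B$ is invertible, which holds for the square B in equation 11.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def setpoint_tracking_gains(A_hat, B_hat, Q=None, R=None):
    """Sketch of the design around equations 12-14 with the estimated
    matrices A_hat, B_hat of the model f_hat."""
    n, m = B_hat.shape
    Q = np.eye(n) if Q is None else Q
    R = np.eye(m) if R is None else R
    # LQR feedback gain F (with the u = F x convention, hence the minus sign)
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    F = -np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    A_tilde = A_hat + B_hat @ F
    # Feedforward gain M from equation 14
    M = np.linalg.inv(np.linalg.inv(np.eye(n) - A_tilde) @ B_hat)
    return F, M
```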
We start the experiment once $\|x(t) - x_{\mathrm{des}}\|_2 < 0.01$. Analyzing the resulting data with the MMD lets us, similarly as before, infer the true causal structure. In particular, the causal analysis reveals that $x_1$ does not cause $x_2$ nor $x_3$, $x_2$ does not cause $x_3$, and $u_2$ does not cause $x_3$.

In addition to those results, we compare our method with a sparse identification method and a causal discovery algorithm for this linear example. (We do not compare to PCMCI in the numerical experiment of figure 7, since PCMCI only reveals which causal influences exist, but not how strong they are; i.e., different from our algorithm and SINDy, it does not return a full system model that can be used in simulations.) In both cases, we excite the system for 100 000 time steps with a chirp as an input signal.

First, we use SINDy, as in example 1, to identify the underlying discrete-time system⁴. Sparse identification algorithms seek to find a trade-off between model complexity and goodness of fit. Thus, for such algorithms, a parameter needs to be selected to indicate how important accuracy is compared to complexity. When applying SINDy to the synthetic system, its success in finding the true causal structure depends on the choice of this parameter. We start with the parameter settings that were used for a three-dimensional linear example in Brunton et al. (2016). For those settings, the algorithm does not recognize the influence of $u_1$ on $x_1$, i.e., the first row of the B matrix consists of zeros. In figure 7, we show the effects that this error can have by comparing predictions of the true model obtained after the causal identification with predictions of the model obtained from SINDy. For this, we set $u_1 = 10$, $u_2 = 0$, and $u_3 = -14$ and simulate both models for 100 time steps. As the SINDy model does not reflect the influence of $u_1$ on $x_1$, it assumes that $x_1$ does not move. However, the model obtained after the causal identification reflects this influence and, thus, correctly predicts the movement of $x_1$. Only when lowering the threshold parameter can SINDy recover the true causal structure of the system. This stresses the general shortcoming of sparse identification methods when identifying the causal structure of a control system. Depending on the parameter settings, they may or may not recover the true causal structure. However, their general purpose is different. They seek a trade-off between model complexity and accuracy. Thus, neglecting a causal link with a comparably weak influence might be the desired outcome. In contrast, we seek the true causal structure, independent of how strong the influence is.

⁴We use the implementation provided in de Silva et al. (2020).
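For reference, a minimal sketch of how such a discrete-time SINDy fit can be set up with the PySINDy package of de Silva et al. (2020); the random excitation and the threshold value shown are illustrative of the sparsity parameter discussed above, not the exact settings used in the experiments.

```python
import numpy as np
import pysindy as ps

A = np.array([[0.9, -0.75, 1.2], [0.0, 0.9, -1.1], [0.0, 0.0, 0.7]])
B = np.array([[0.03, 0.0, 0.0], [0.0, 0.06, 0.0], [0.07, 0.0, 0.05]])
rng = np.random.default_rng(0)

# Excite the system from equation 11 (a chirp in the paper; random here)
N = 5000
u = rng.uniform(-1.0, 1.0, size=(N, 3))
x = np.zeros((N, 3))
for k in range(N - 1):
    x[k + 1] = A @ x[k] + B @ u[k] + rng.normal(0.0, 1e-4, size=3)

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.025),             # sparsity parameter
    feature_library=ps.PolynomialLibrary(degree=1),  # linear terms suffice
    discrete_time=True,                              # x(k+1) = f(x(k), u(k))
)
model.fit(x, u=u)
model.print()  # zeroed coefficients are the links SINDy deems absent
```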
Second, we compare our algorithm to the PCMCI⁵ algorithm proposed in Runge et al. (2019b)⁶. This algorithm focuses on detecting causal influences from time-series data. That is, it does not yield an identified system matrix but discovers which variables have a causal influence on which others. When again exciting the linear system with a chirp signal and running the PCMCI algorithm, we receive an output that we can interpret as

$$\begin{aligned}x_{1}(k+1)&=a_{1}x_{1}(k)+a_{2}x_{2}(k)+a_{3}x_{3}(k)+b_{1}u_{1}(k)\\ x_{2}(k+1)&=a_{4}x_{1}(k-1)+a_{5}x_{2}(k)+a_{6}x_{3}(k)+a_{7}x_{3}(k-2)+b_{2}u_{2}(k)+b_{3}u_{3}(k-1)\\ x_{3}(k+1)&=a_{8}x_{1}(k-4)+a_{9}x_{3}(k)+b_{4}u_{1}(k)+b_{5}u_{3}(k),\end{aligned}\tag{15}$$

where all weights $a_i$ and $b_i$ are non-zero. Besides, the algorithm discovers an influence of $x_1$ on $u_3$. However, since such links were ruled out by design for the other algorithms, we neglect this here. Nevertheless, also the equations as stated above are not in line with the actual matrices in equation 11. For instance, following equation 15, the variable $x_1$ has a causal influence on $x_2$ and $x_3$ through the non-zero factors $a_4$ and $a_8$. However, given the upper-triangular structure of the A matrix, $x_3$ is not influenced by any other state variable apart from itself. Similarly, also $a_4$ should be zero. Thus, the algorithm does not reveal the true structure of the system.

⁵An adaptation of the PC algorithm, named after its creators Peter Spirtes and Clark Glymour (Spirtes et al., 2000), which uses a momentary conditional independence (MCI) test to deal with auto-correlations in time-series data.

⁶For our simulations, we use the implementation and parameter settings provided in https://github.com/jakobrunge/tigramite.

## 7 Conclusion

We presented a method that identifies the causal structure of dynamical control systems by conducting experiments and analyzing the generated data with MMD-based techniques. It differs from prior approaches to causal inference in that it uses a controllability notion that is suitable for designing experiments for control systems. We evaluated the method on a real-world robotic system and a simulated quadruple tank system. Our algorithm successfully identified the underlying causal structure of both systems, which in turn allowed us to learn a model that accurately generalizes to previously unseen states while reducing computational complexity.

## Acknowledgments

The authors would like to thank Manuel Wüthrich and Alonso Marco Valle for insightful discussions, Steve Heim for valuable feedback, and Vincent Berenz for support with the robot experiments. This work has received funding from the German Research Foundation within the SPP 1914 (grant TR 1433/1-1), the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the German State of North Rhine-Westphalia (MKW) under the Excellence Strategy of the Federal Government and the Länder, the Cyber Valley Initiative, the Max Planck Society, the Knut and Alice Wallenberg Foundation, and the Swedish Research Council.

## References

Hirotogu Akaike. Information theory and an extension of the maximum likelihood principle. In *International Symposium on Information Theory*, pp. 267–281, 1973.

Brian DO Anderson and John B Moore. *Optimal Control: Linear Quadratic Methods*. Courier Corporation, 2007.

Agamirza E Bashirov and Kerim R Kerimov. On controllability conception for stochastic systems. *SIAM Journal on Control and Optimization*, 35(2):384–398, 1997.

Vincent Berenz, Maximilien Naveau, Felix Widmaier, Manuel Wüthrich, Jean-Claude Passy, Simon Guist, and Dieter Büchler. The o80 C++ templated toolbox: Designing customized Python APIs for synchronizing realtime processes. *Journal of Open Source Software*, 6(66):2752, 2021.

Stefano Boccaletti, Vito Latora, Yamir Moreno, Martin Chavez, and Dong-Uk Hwang. Complex networks: Structure and dynamics. *Physics Reports*, 424(4-5):175–308, 2006.

Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. *Proceedings of the National Academy of Sciences*, 104(24):9943–9948, 2007.

Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems.
*Proceedings of the National Academy of Sciences*, 113(15):3932–3937, 2016. Pafnutii Lvovich Chebyshev. Des valeurs moyennes. *Journal de Mathématiques Pures et Appliquées*, 2(12): 177–184, 1867. Zhitang Chen, Kun Zhang, Laiwan Chan, and Bernhard Schölkopf. Causal discovery via reproducing kernel Hilbert space embeddings. *Neural Computation*, 26(7):1484–1517, 2014. Kacper Chwialkowski and Arthur Gretton. A kernel independence test for random processes. In *International* Conference on Machine Learning, pp. 1422–1430, 2014. Kacper Chwialkowski, Dino Sejdinovic, and Arthur Gretton. A wild bootstrap for degenerate kernel tests. In *Advances in Neural Information Processing Systems*, pp. 3608–3616, 2014. Brian de Silva, Kathleen Champion, Markus Quade, Jean-Christophe Loiseau, J. Kutz, and Steven Brunton. PySINDy: A Python package for the sparse identification of nonlinear dynamical systems from data. Journal of Open Source Software, 5(49):2104, 2020. Selva Demiralp and Kevin D Hoover. Searching for the causal structure of a vector autoregression. *Oxford* Bulletin of Economics and Statistics, 65:745–767, 2003. Michael Eichler. Graphical Gaussian modelling of multivariate time series with latent variables. In *International Conference on Artificial Intelligence and Statistics*, pp. 193–200, 2010. Michael Eichler. Causal inference in time series analysis. In Carlo Berzuini, Philip Dawid, and Luisa Bernardinell (eds.), *Causality: Statistical Perspectives and Applications*, chapter 22, pp. 327–354. John Wiley & Sons, Ltd, 2012. Abbas Emami-Naeini and G Franklin. Deadbeat control and tracking of discrete-time systems. IEEE Transactions on Automatic Control, 27(1):176–181, 1982. Doris Entner and Patrik O Hoyer. On causal discovery from time series data using FCI. *Probabilistic* Graphical Models, pp. 121–128, 2010. Karl J Friston, Lee Harrison, and Will Penny. Dynamic causal modelling. *Neuroimage*, 19(4):1273–1302, 2003. Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel measures of conditional dependence. In *Advances in Neural Information Processing Systems*, pp. 489–496, 2008. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(Mar):723–773, 2012. Joao P Hespanha. *Linear Systems Theory*. Princeton University Press, 2018. Karl Henrik Johansson. The quadruple-tank process: A multivariable laboratory process with an adjustable zero. *IEEE Transactions on Control Systems Technology*, 8(3):456–465, 2000. Rudolf Emil Kalman. Contributions to the theory of optimal control. *Boletin de la Sociedad Matematica* Mexicana, 5(2):102–119, 1960a. Rudolf Emil Kalman. On the general theory of control systems. In *International Conference on Automatic* Control, pp. 481–492, 1960b. Daniel Kappler, Franziska Meier, Jan Issac, Jim Mainprice, Cristina Garcia Cifuentes, Manuel Wüthrich, Vincent Berenz, Stefan Schaal, Nathan Ratliff, and Jeannette Bohg. Real-time perception meets reactive motion generation. *IEEE Robotics and Automation Letters*, 3(3):1864–1871, 2018. Solomon Kullback and Richard A Leibler. On information and sufficiency. *The Annals of Mathematical* Statistics, 22(1):79–86, 1951. Fabien Lauer and Gérard Bloch. Hybrid System Identification: Theory and Algorithms for Learning Switching Models. Springer, 2018. Hui Liu, Jun-An Lu, Jinhu Lü, and David J Hill. Structure identification of uncertain general complex dynamical networks with time delay. 
*Automatica*, 45(8):1799–1807, 2009. Lennart Ljung. *System Identification: Theory for the User*. Prentice Hall PTR, 1999. David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, and Ilya Tolstikhin. Towards a learning theory of cause-effect inference. In *International Conference on Machine Learning*, pp. 1452–1461, 2015. Daniel Malinsky and Peter Spirtes. Causal structure learning from multivariate time series in settings with unmeasured confounding. In *ACM SIGKDD Workshop on Causal Discovery*, pp. 23–47, 2018. Donatello Materassi and Giacomo Innocenti. Topological identification in networks of dynamical systems. IEEE Transactions on Automatic Control, 55(8):1860–1871, 2010. Alessio Moneta, Nadine Chlaß, Doris Entner, and Patrik Hoyer. Causal search in structural vector autoregressive models. In *NIPS Mini-Symposium on Causality in Time Series*, pp. 95–114, 2011. Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. From ordinary differential equations to structural causal models: The deterministic case. In *Conference on Uncertainty in Artificial Intelligence*, pp. 440–448, 2013. Duy Nguyen-Tuong and Jan Peters. Model learning for robot control: A survey. *Cognitive Processing*, 12 (4):319–340, 2011. Gabriele Pannocchia, Nabil Laachi, and James B Rawlings. A candidate to replace PID control: SISOconstrained LQ control. *AIChE Journal*, 51(4):1178–1189, 2005. Judea Pearl. Causal diagrams for empirical research. *Biometrika*, 82(4):669–688, 1995. Judea Pearl. Theoretical impediments to machine learning with seven sparks from the causal revolution. arXiv preprint arXiv:1801.04016, 2018. Judea Pearl and Dana Mackenzie. *The Book of Why: The New Science of Cause and Effect*. Basic Books, 2018. Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Causal inference on time series using restricted structural equation models. In *Advances in Neural Information Processing Systems*, pp. 154–162, 2013. Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press, 2017. Jonas Peters, Stefan Bauer, and Niklas Pfister. Causal models for dynamical systems. In Hector Geffner, Rina Dechter, and Joseph Y. Halpern (eds.), Probabilistic and Causal Inference: The Works of Judea Pearl, pp. 671–690. 2022. Niklas Pfister, Stefan Bauer, and Jonas Peters. Learning stable and predictive structures in kinetic systems. Proceedings of the National Academy of Sciences, 116(51):25405–25411, 2019. Christopher J Quinn, Todd P Coleman, Negar Kiyavash, and Nicholas G Hatsopoulos. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. *Journal of* Computational Neuroscience, 30(1):17–44, 2011. Jacob Roll, Alberto Bemporad, and Lennart Ljung. Identification of piecewise affine systems via mixedinteger programming. *Automatica*, 40(1):37–50, 2004. Paul K. Rubenstein, Stephan Bongers, Bernhard Schölkopf, and Joris M. Mooij. From deterministic ODEs to dynamic structural causal models. In *Conference on Uncertainty in Artificial Intelligence*, 2018. Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine Intelligence*, 1(5):206–215, 2019. 
Jakob Runge, Sebastian Bathiany, Erik Bollt, Gustau Camps-Valls, Dim Coumou, Ethan Deyle, Clark Glymour, Marlene Kretschmer, Miguel D Mahecha, Jordi Muñoz-Marí, Egbert Nes, Jonas Peters, Rick Quax, Markus Reichstein, Marten Scheffer, Bernhard Schölkopf, Peter Spirtes, George Sugihara, Jie Sun, and Jako Zscheischler. Inferring causation from time series in Earth system sciences. *Nature Communications*, 10(1):1–13, 2019a. Jakob Runge, Peer Nowack, Marlene Kretschmer, Seth Flaxman, and Dino Sejdinovic. Detecting and quantifying causal associations in large nonlinear time series datasets. *Science Advances*, 5(11):eaau4996, 2019b. Cristopher Salvi, Maud Lemercier, Chong Liu, Blanka Horvath, Theodoros Damoulas, and Terry Lyons. Higher order kernel mean embeddings to capture filtrations of stochastic processes. *Advances in Neural* Information Processing Systems, 2021. Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. *Science*, 324 (5923):81–85, 2009. Bernhard Schölkopf. Causality for machine learning. In Hector Geffner, Rina Dechter, and Joseph Y. Halpern (eds.), *Probabilistic and Causal Inference: The Works of Judea Pearl*, pp. 765–804. 2022. Johan Schoukens and Lennart Ljung. Nonlinear system identification: A user-oriented road map. IEEE Control Systems Magazine, 39(6):28–99, 2019. Gideon Schwarz. Estimating the dimension of a model. *The Annals of Statistics*, 6(2):461–464, 1978. Shahin Shahrampour and Victor M Preciado. Topology identification of directed dynamical networks via power spectral analysis. *IEEE Transactions on Automatic Control*, 60(8):2260–2265, 2014. Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G Dimakis, and Sriram Vishwanath. Learning causal graphs with small interventions. In *Advances in Neural Information Processing Systems*, pp. 3195–3203, 2015. Max Simchowitz, Horia Mania, Stephen Tu, Michael I. Jordan, and Benjamin Recht. Learning without mixing: Towards a sharp analysis of linear system identification. In *Conference on Learning Theory*, pp. 439–473, 2018. Alexander Sokol and Niels Richard Hansen. Causal interpretation of stochastic differential equations. *Electronic Journal of Probability*, 19(100):1–24, 2014. Friedrich Solowjow, Dominik Baumann, Christian Fiedler, Andreas Jocham, Thomas Seel, and Sebastian Trimpe. A kernel two-sample test for dynamical systems. *arXiv preprint arXiv:2004.11098*, 2020. Peter Spirtes, Clark N Glymour, Richard Scheines, David Heckerman, Christopher Meek, Gregory Cooper, and Thomas Richardson. *Causation, Prediction, and Search*. MIT Press, 2000. Bharath K Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert RG Lanckriet. Hilbert space embeddings and metrics on probability measures. *Journal of Machine Learning Research*, 11(Apr):1517–1561, 2010. Bharath K Sriperumbudur, Kenji Fukumizu, and Gert RG Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. *Journal of Machine Learning Research*, 12(7), 2011. Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On the empirical estimation of integral probability metrics. *Electronic Journal of Statistics*, 6:1550–1599, 2012. Klaas Enno Stephan, Will D Penny, Rosalyn J Moran, Hanneke EM den Ouden, Jean Daunizeau, and Karl J Friston. Ten simple rules for dynamic causal modeling. *Neuroimage*, 49(4):3099–3109, 2010. Yoshifumi Sunahara, Teruo Kabeuchi, Yoshiharu Asada, Shin Ichi Aihara, and Kiyotaka Kishino. 
On stochastic controllability for nonlinear systems. *IEEE Transactions on Automatic Control*, 19(1):49–54, 1974.

Zoltán Szabó and Bharath Sriperumbudur. Characteristic and universal tensor product kernels. *Journal of Machine Learning Research*, 18:233, 2018.

Paul MJ van den Hof, Arne Dankers, Peter SC Heuberger, and Xavier Bombois. Identification of dynamic models in complex networks with prediction error methods—basic methods for consistent module estimates. *Automatica*, 49(10):2994–3006, 2013.

Christopher KI Williams and Carl Edward Rasmussen. *Gaussian Processes for Machine Learning*, volume 2. MIT Press, Cambridge, MA, 2006.

Karren Yang, Abigail Katcoff, and Caroline Uhler. Characterizing and learning equivalence classes of causal DAGs under interventions. In *International Conference on Machine Learning*, pp. 5541–5550, 2018.

Dongchuan Yu. Estimating the topology of complex dynamical networks by steady state control: Generality and limitation. *Automatica*, 46(12):2035–2040, 2010.

## A Proof Of Theorem 2

An LTI system with Gaussian noise follows a normal distribution, whose mean and variance are given by

$$\mathbb{E}[x(t)]=A^{t}x(0)+\sum_{i=0}^{t-1}A^{i}Bu(t-1-i)\tag{16a}$$
$$\mathrm{Var}[x(t)]=\sum_{i=0}^{t-1}A^{i}\mathrm{Var}[v(t-1-i)],\tag{16b}$$

where we assume $\mathrm{Var}[v(t)] = \Sigma_{\mathrm{v}}$ for all t. For such systems, we will first show that if equation 16a obeys the controllability conditions stated by Kalman, the system is also controllable according to definitions 2 and 4.

Lemma 3. The system in equation 6 is completely ε-controllable in distribution if the deterministic part obeys the controllability condition stated in Kalman (1960a).

Proof. The expected value in equation 16a represents the deterministic part of the system. Thus, according to Kalman (1960a), we can design an input trajectory that steers equation 16a to any point in the state space. Since we do not assume constraints on the input or the state variables, the desired state can be reached with a trajectory within $q < \infty$ time steps (cf. deadbeat control (Kalman, 1960b; Emami-Naeini & Franklin, 1982)). That is, starting from any $x(0) \in \mathcal{X}$, we can steer the system state such that x(q) matches $x^{\mathrm{I}}$ in expectation within q time steps and obtain the distribution

$$\mathbb{E}[x(q)]=x^{\mathrm{I}},\qquad\mathrm{Var}[x(q)]=\sum_{i=0}^{q-1}A^{i}\Sigma_{\mathrm{v}}.$$

The probability of $\|x(q) - x^{\mathrm{I}}\|_2^2$ being larger than ε is given by the cumulative distribution function of the normal distribution.

Proof of theorem 2. We can now prove theorem 2. Since the variance of equation 6 solely depends on the number of time steps, which is equal for all experiments, distributions can only be different because of their means. We start with experiments that are designed according to equation 2a. In this case, for distributions to be equal, and, thus, for variables to be non-causal, we need

$$e_{i}\Bigl(A^{t}x^{\mathrm{I}}(0)+\sum_{i=0}^{t-1}A^{i}Bu(t-1-i)\Bigr)=e_{i}\Bigl(A^{t}x^{\mathrm{II}}(0)+\sum_{i=0}^{t-1}A^{i}Bu(t-1-i)\Bigr),$$

where $e_i \in \mathbb{R}^n$ is the unit vector (i.e., a vector with zeros and a single 1 at the ith entry). Since input trajectories are equal, this boils down to

$$e_{i}A^{t}x^{\mathrm{I}}(0)=e_{i}A^{t}x^{\mathrm{II}}(0).$$

Essentially, this means that the component ij of $A^t$ needs to be 0. This is clearly the case if there is no influence of $x_i$ on $x_j$, i.e., in case variables are non-causal, we have MMD = 0.
The event of component ij of $A^t$ being 0 by chance, even though $x_j$ has a causal influence on $x_i$, has probability 0. Thus, we have that variables are non-causal if MMD = 0. For experiments that are designed according to equation 2b, initial states are equal and, in case variables are non-causal, we have

$$e_{i}\sum_{i=0}^{t-1}A^{i}Bu^{\mathrm{I}}(t-1-i)=e_{i}\sum_{i=0}^{t-1}A^{i}Bu^{\mathrm{II}}(t-1-i).$$

Similar as before, we have equal distributions and, thus, MMD = 0 if the entries in the $A^iB$ matrices relating $x_i$ and $u_j$ are 0, i.e., if there is no causal influence. The other direction holds since the event of the relevant entries being 0 by chance has probability 0.

## B Further Example For Assumption 1

We provide a further example to empirically validate assumption 1. In practice, regions of local non-causality are often due to hysteresis effects. For instance, a resting body first needs to overcome static friction. That is, when the velocity is zero, the input signal needs to overcome a threshold to actually influence the velocity. In particular, we consider the system

$$x(k+1)=\begin{cases}\begin{pmatrix}1&0.01\\ 0&1\end{pmatrix}x(k)+\begin{pmatrix}0\\ 0.1\end{pmatrix}u(k)+v(k)&\text{if}\ x_{2}(k)\neq0\ \vee\ |u(k)|>0.1\\ x(k)&\text{else,}\end{cases}\tag{17}$$

where the state is 2-dimensional (with $x_2$ the velocity), the input is scalar, and v(k) is a normally distributed random variable with standard deviation 1 × 10⁻⁴. We excite the system with a ramp function starting at zero and going until two and learn a GP model. For the GP model, we use a Gaussian kernel for which we optimize the hyperparameters. We then assume the initial condition of both state variables to be zero and compute the MMD of simulated trajectories for 100 step inputs, with $u^{\mathrm{I}}$ ranging from 0.05 to 5 and $u^{\mathrm{II}} = -u^{\mathrm{I}}$. The results confirm our findings from example 1, since also here assumption 1 is satisfied (see figure 8).

Figure 8: Simulated MMD for the hysteresis system in equation 17. Similar as for example 1, also here we see that assumption 1 is satisfied.
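A minimal simulation sketch of the hysteresis system in equation 17; the rollout length and input values are illustrative choices.

```python
import numpy as np

A = np.array([[1.0, 0.01], [0.0, 1.0]])
B = np.array([0.0, 0.1])
rng = np.random.default_rng(0)

def hysteresis_step(x, u):
    # One step of equation 17: the input only acts once it exceeds the
    # static-friction threshold or the velocity x_2 is already nonzero.
    if x[1] != 0.0 or abs(u) > 0.1:
        return A @ x + B * u + rng.normal(0.0, 1e-4, size=2)
    return x

def rollout(u_const, T=100):
    x = np.zeros(2)  # both state variables start at zero
    traj = []
    for _ in range(T):
        x = hysteresis_step(x, u_const)
        traj.append(x.copy())
    return np.array(traj)

# Step inputs below the threshold leave the state at rest; larger ones move it.
print(np.allclose(rollout(0.05), 0.0))  # True: |u| <= 0.1, x stays at zero
print(np.allclose(rollout(0.5), 0.0))   # False: input overcomes the friction
```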
## C Further Results Of The Robot Experiments

We first provide implementation details for the experiments presented in section 6. The initial model estimate is obtained by exciting the system with a chirp signal for 30 s and using the generated data to learn a linear state-space model (cf. equation 6) with least squares. The obtained matrices are

$$A_{\mathrm{init}}\approx\begin{pmatrix}0.868&-0.132&0.754&-0.491\\ -0.132&0.868&0.754&-0.491\\ -0.132&-0.132&1.754&-0.491\\ -0.134&-0.134&0.76&0.508\end{pmatrix}\quad B_{\mathrm{init}}\approx\begin{pmatrix}0.075&-0.056&-0.031&0.022\\ 0.074&-0.055&-0.031&0.022\\ 0.074&-0.056&-0.03&0.022\\ 0.075&-0.056&-0.032&0.022\end{pmatrix}.$$

Initial conditions and input trajectories for the causality testing experiments are obtained through sophisticated guesses, as discussed in section 5.2. The found initial conditions and input trajectories yield expected MMDs orders of magnitude above the system's noise level for all joints. Thus, we need only eight experiments to identify the causal structure. We design input trajectories of 100 time steps for each experiment, repeat the experiment ten times, and use the collected data from all experiments for hypothesis testing. While the squared MMD is always positive, the empirical approximation in equation 7 can become negative since it is an unbiased estimate. For the test statistic in equation 9, we estimate the variance using 100 Monte Carlo simulations and obtain the expected value through a noiseless simulation. We use ν = 1, but as we will see in the results, the empirical MMD is, in all cases, orders of magnitude below or above the threshold. Thus, more conservative choices of ν would yield the same outcome.

In tables 1 and 2, we present the results of all causality testing experiments conducted on the robotic platform shown in figure 3. As for the results discussed in section 6, we always have a clear decision on whether to accept or reject the null hypothesis: the MMD found in experiments is always orders of magnitude larger or smaller than the test statistic. Also here, we find that all joints can be moved independently of each other and are affected by exactly one input. When exploiting the revealed causal structure for identifying the system matrices, we obtain

$$A_{\mathrm{caus}}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\quad B_{\mathrm{caus}}\approx\begin{pmatrix}0.013&0&0&0\\ 0&0.007&0&0\\ 0&0&0.01&0\\ 0&0&0&0.01\end{pmatrix}.$$

Table 1: Results of the causal structure identification for a robot arm. Causal influences of joints on each other.

| Joint → Joint | Experimental MMD | Test statistic |
|---|---|---|
| x1 → x1 | 0 | 1.65 × 10⁻⁴ |
| x1 → x2 | 0 | 1.79 × 10⁻⁴ |
| x1 → x3 | 0 | 2.39 × 10⁻⁴ |
| x1 → x4 | 2.43 × 10⁻¹³ | 1.61 × 10⁻⁴ |
| x2 → x1 | 0 | 5.6 × 10⁻⁷ |
| x2 → x2 | −2.8 × 10⁻¹⁸ | 4.58 × 10⁻⁷ |
| x2 → x3 | 0 | 3.56 × 10⁻⁷ |
| x2 → x4 | 1.82 × 10⁻¹³ | 6.91 × 10⁻⁷ |
| x3 → x1 | 0 | 5.81 × 10⁻⁷ |
| x3 → x2 | 0 | 4.54 × 10⁻⁷ |
| x3 → x3 | −1.38 × 10⁻¹⁸ | 6.16 × 10⁻⁷ |
| x3 → x4 | 1.29 × 10⁻¹⁶ | 5.2 × 10⁻⁷ |
| x4 → x1 | 0 | 4.99 × 10⁻⁷ |
| x4 → x2 | 0 | 4.66 × 10⁻⁷ |
| x4 → x3 | 9.63 × 10⁻¹⁵ | 5.8 × 10⁻⁷ |
| x4 → x4 | −5.44 × 10⁻¹⁵ | 5.8 × 10⁻⁷ |

Table 2: Results of the causal structure identification for a robot arm. *Causal influences of inputs on joints.*

| Input → Joint | Experimental MMD | Test statistic |
|---|---|---|
| u1 → x1 | 0.04 | 1.18 × 10⁻⁵ |
| u1 → x2 | 0 | 5.38 × 10⁻⁷ |
| u1 → x3 | 0 | 6.51 × 10⁻⁷ |
| u1 → x4 | 0 | 4.47 × 10⁻⁷ |
| u2 → x1 | 0 | 5.09 × 10⁻⁵ |
| u2 → x2 | 0.04 | 1.15 × 10⁻⁵ |
| u2 → x3 | 3.51 × 10⁻¹⁴ | 5.95 × 10⁻⁷ |
| u2 → x4 | 3.51 × 10⁻¹⁴ | 4.67 × 10⁻⁷ |
| u3 → x1 | 2.31 × 10⁻¹³ | 5.26 × 10⁻⁵ |
| u3 → x2 | 1.4 × 10⁻¹³ | 4.28 × 10⁻⁵ |
| u3 → x3 | 0.04 | 1.18 × 10⁻⁵ |
| u3 → x4 | 2.25 × 10⁻¹² | 4.36 × 10⁻⁷ |
| u4 → x1 | 0 | 4.47 × 10⁻⁵ |
| u4 → x2 | 0 | 5.11 × 10⁻⁵ |
| u4 → x3 | 0 | 5.69 × 10⁻⁷ |
| u4 → x4 | 0.04 | 6.58 × 10⁻⁴ |

## D Further Results Of The Quadruple Tank Experiments

We discretize the quadruple tank system with a time step of 100 ms. We choose $A_i$ = 50 cm² for all tanks, $a_{1,2}$ = 0.242 cm², $a_{3,4}$ = 0.242 cm², and the valve parameters $\zeta_{1,2}$ = 0.5. For the initial model learning, we excite the system for 5000 time steps. During excitation, the input is sampled uniformly from the interval [0, 60]. We use the generated data to learn a GP with a Gaussian kernel for each state variable. Having identified a model, we follow algorithm 1 to identify the causal structure. As for the robot experiments, we repeat each experiment 10 times and use 50 simulations to estimate the standard deviation. We choose ν = 10 in equation 9 to avoid false positives. The results are collected in tables 3 and 4.

Table 3: Results of the causal structure identification for a quadruple tank system. Causal influences of the tanks on each other.
| Tank → Tank | Experimental MMD | Test statistic |
|---|---|---|
| x1 → x1 | 2.78 × 10⁻¹ | 9.81 × 10⁻⁴ |
| x1 → x2 | 7.39 × 10⁻⁶ | 9.55 × 10⁻⁵ |
| x1 → x3 | 1.47 × 10⁻⁵ | 9.78 × 10⁻⁵ |
| x1 → x4 | 9.45 × 10⁻⁵ | 1.01 × 10⁻³ |
| x2 → x1 | 3.57 × 10⁻⁶ | 2.35 × 10⁻⁵ |
| x2 → x2 | 2.74 × 10⁻¹ | 4.58 × 10⁻⁴ |
| x2 → x3 | 2.32 × 10⁻⁶ | 4.31 × 10⁻⁶ |
| x2 → x4 | 3.54 × 10⁻⁵ | 1.41 × 10⁻⁴ |
| x3 → x1 | 1.62 × 10⁻¹ | 1.29 × 10⁻⁴ |
| x3 → x2 | 5.36 × 10⁻⁶ | 3.28 × 10⁻⁴ |
| x3 → x3 | 3.44 × 10⁻¹ | 2.75 × 10⁻⁴ |
| x3 → x4 | 9.59 × 10⁻⁶ | 3.46 × 10⁻⁵ |
| x4 → x1 | 4.28 × 10⁻⁶ | 4.04 × 10⁻⁵ |
| x4 → x2 | 1.45 × 10⁻¹ | 8.74 × 10⁻⁵ |
| x4 → x3 | 2.35 × 10⁻⁶ | 6.56 × 10⁻⁶ |
| x4 → x4 | 3.57 × 10⁻¹ | 1.42 × 10⁻⁴ |

Table 4: Results of the causal structure identification for a quadruple tank system. Causal influences of inputs on tanks.

| Input → Tank | Experimental MMD | Test statistic |
|---|---|---|
| u1 → x1 | 1.8 × 10⁻¹ | 1.01 × 10⁻⁴ |
| u1 → x2 | 8.04 × 10⁻³ | 8.74 × 10⁻⁵ |
| u1 → x3 | 2.46 × 10⁻⁸ | 3.8 × 10⁻⁵ |
| u1 → x4 | 1.56 × 10⁻¹ | 8.61 × 10⁻⁵ |
| u2 → x1 | 7.55 × 10⁻³ | 9.86 × 10⁻⁵ |
| u2 → x2 | 1.78 × 10⁻¹ | 1.12 × 10⁻⁴ |
| u2 → x3 | 1.58 × 10⁻¹ | 3.77 × 10⁻⁵ |
| u2 → x4 | 5.44 × 10⁻⁶ | 4.46 × 10⁻⁵ |
Review 1:

Summary: The paper presents an algorithm for identifying the causal structure of dynamical systems, predicting which state/control dimensions are causally related and which are not. Determining causality is increasingly important in controlling dynamical systems, as it provides a powerful inductive bias that would be hard to determine automatically from data via deep ML. The algorithm presented by the authors is based on a non-causality definition related to the one from Pearl 1995. The algorithm comes in two different variants: (a) one where the initial conditions vary whereas the controls are the same, and (b) one where the initial conditions are the same, but the controls differ. The method proposed by the authors uses maximum mean discrepancy (MMD) based metrics (measuring the discrepancy between trajectories) to identify causal relationships, and the authors discuss how to apply them in the control setting, where states can be reached only by applying a sequence of controls. In order to scale up the approach, the authors leverage a kernel-based unbiased estimator of the squared MMD with the use of characteristic kernels. The presented theoretical results are accompanied by an empirical evaluation on: (a) kinematic control of a robot, (b) the quadruple tank process (to test the setting with highly nonlinear dynamics), and (c) some synthetic tasks.

Strengths and Weaknesses: The proposed algorithm (formulations 5a, 5b) is an elegant application of MMD methods, and the authors do a good job of explaining how to make this approach scalable. What I found very important is the discussion of how to reach the desired states for the presented algorithm by controlling the system, which is crucial from a practical point of view (Section 4.2 on controllability). The presented empirical results show that the method is very effective in practice at determining causal relationships. The weakness of the approach is that, even though elegant, the presented algorithm is fairly straightforward given the used definition of non-causality combined with Assumption 1 (the MMD formulation is almost tautologically implied by Definition 1 and Assumption 1). One of my concerns is that it is not clear at all which dynamical systems f satisfy Assumption 1 or how to test this efficiently. Given the questions regarding how constraining Assumption 1 is, it is not clear to me how widely applicable the presented methods are. The paper would also benefit from adding a few more difficult, high-dimensional dynamical-systems settings, in particular to test the scalability of the algorithm and its data efficiency.

Requested Changes: Strengthening the experimental section would be very welcome. The authors could also work on improving the presentation of the paper. For instance: (a) the authors refer to the "do-" notation in Definition 1 without properly introducing it earlier; (b) while providing an introduction to MMDs, the authors refer to characteristic kernels as useful when working with the empirical MMD but do not provide their definition (only the Gaussian kernel as an example). Most importantly, the authors should comment on Assumption 1, which is critical for the correctness of the algorithm (the correctness proof of the proposed method is almost immediate if we use it).

Broader Impact Concerns: I do not see any concerns regarding ethical aspects of this work.
================================================== Review 2: Summary: This paper presents an experimental design method for causal structure identification in general, possibly non-linear dynamical systems which are not necessarily fully controllable. The method is based on comparing the trajectory distributions under different settings for initial conditions and control signals. Theoretical results are derived relating the maximum mean discrepancy (MMD) of the trajectory distributions and causal relations within controllability limitations. A set of experiments demonstrating the method on a physical robot and two synthetic problems is also presented. Strengths and Weaknesses: The paper addresses an important problem on causality in the context of control systems. As the paper discusses, most causal identification algorithms are either entirely based on observational data or interventions where one has full control over the variables of interest. Therefore, I believe that an approach that takes controllability restrictions into account is interesting for the machine learning community working on causality. However, the paper also comes with a number of issues which make me believe it's not ready yet for publication.

#### Major issues:

- 1) **Insufficient theoretical background.** The proposed method is based on Judea Pearl's do-calculus, but no (minimal) background on do-calculus is provided. Although one could argue that it is possible to understand most of the paper without this background, the problem setting in Sec. 3.1 gives the impression that it is actually a somewhat necessary background.
- 2) **Notation issues.** The notation in the paper can be quite confusing sometimes, as some of the symbols have their definitions come late or are left implicit/unexplained. For instance, the subscripts and superscripts in Definition 1 have no prior definition. Later on, it can be *assumed* that the subscript $i$ on a state variable $x_i$ denotes the $i$th coordinate of that variable, or perhaps an individual vector-valued variable within a group of vector-valued variables composing the state of the system (the paper does not make it clear). Nowhere in the paper have I seen a formal specification for the control and state spaces. They're only left to be assumed as some possibly arbitrary vector space of some dimensionality. It is also not clear what the superscripts "I" and "II" represent in Definition 1 in a general context. Furthermore, $\mathcal{X}_{nc}$ and the corresponding $\mathcal{U}$ are only loosely explained, but not formally defined.
- 3) **Confusing theoretical statements.** Assumption 1 sounds more like a theorem than an assumption due to the last sentence in its statement. It might need to be split into assumption and theorem/lemma/remark statements, depending on whether it requires a proof or the proof happens to be trivial.
- 4) **Hyper-parameters.** The proposed method has a few hyper-parameters, such as the MMD thresholds $\delta_1$ and $\delta_2$, but the paper apparently does not discuss how these hyper-parameters can be tuned or set accordingly in practice for different applications.
- 5) **Lack of experimental baselines.** The experiments section does not provide comparisons against other suitable algorithmic baselines, other than SINDy in the last synthetic experiment. Although existing work is mostly based on observational data or full interventions, it would be interesting to see how the proposed algorithm fares against other methods in settings where they can be applied.
As far as I understand, at least the methods based on purely observational data can be applied to the same problems explored in the paper. An interesting question would then be whether they can identify the same causal relationships.

Minor issues:

- I believe there's a typo in the definition of $m_2$ in Lemma 2.
- Kinematic model in experiment 1. Torque is proportional to angular acceleration, not speed, as in the equation $\dot{\phi}_i = \tau_i$, if $\phi_i$ indeed represents angular position/angle.

Requested Changes: Considering the issues presented above, I suggest:

- 1. Adding a brief theoretical background on do-calculus in the context of causal identification
- 2. Revising the notation in the main theoretical results
- 3. Revising theoretical statements, making sure that notation is properly explained and formally defined
- 4. Adding a discussion on hyper-parameters tuning
- 5. Ablation experiments showing how the method behaves as a function of hyper-parameter settings
- 6. Adding further experimental baselines to experiments in Sec. 6.

Broader Impact Concerns: No major broader impact concern, other than perhaps adding a brief discussion on the possible consequences of inferring causality in experiments outside of robotic settings, which may involve human agents, for example, in case that's a suitable and relevant application. ================================================== Review 3: Summary: In this paper, the authors study the problem of identifying causality in dynamical systems. To this end, they introduce an algorithm based on measuring maximum mean discrepancy. Implicitly, the method considers controllability, making it a viable method to identify causality in control systems. The method is evaluated in simulations consisting of a robotic arm and a quadruple tank process. Strengths and Weaknesses: Strengths. This is a work of possible interest to the community. Writing and presentation are clear and, for the most part, concepts and statements are clearly introduced and treated in a rigorous manner. The idea introduced in the paper is simple yet well founded. Weaknesses. (i) I think that knowledge of do-calculus might not be that common among readers. Doing a soft introduction to it might be beneficial to the paper. The do-operator appears in Definition 1 without much introduction (even though it is explained afterwards). Maybe a "Preliminaries" section would help the reader more unfamiliar with do-calculus. (ii) The related work section and the overall framing of this work with respect to the state of the art is somewhat odd. Most of the works cited, while important works, are quite old. While reading the manuscript I had a hard time placing this work with respect to more recent works addressing the problem of identifying causality. Overall, it feels like a work disconnected from recent work. I'd suggest a better reframing of the Related Work section to put better emphasis on how this work relates to recent state of the art. (iii) Why is the maximum mean discrepancy used as the measure of similarity (instead of something like the KL divergence)? This is not clearly explained/motivated. A somewhat vague justification is given for the case of reproducing kernel Hilbert spaces, but I'd suggest clarifying this further. (iv) Overall, the information presented in the numerical results is simplistic. For example, a motivation for learning causality is to reduce model complexity, but this is not really evaluated in the numerical results.
In the same sense, for the quadruple tank process, only the causality tests are shown (and extended in the Appendix). However, it would be interesting to see how a reduced model learned from the causality learning process behaves. I'd suggest extending the numerical results with more insightful figures and discussion. Requested Changes: Addressing the previously mentioned weaknesses should be sufficient. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Accept with minor revision Comment: Learning the causal structure of an unknown dynamical system and exploiting it for data-efficient control is a very well motivated and important problem, with many applications of cross-disciplinary interest. This paper develops a class of MMD based methods to compare trajectory distributions under varying initial conditions/control sequences to infer causality, which implicitly accounts for controllability of the underlying system -- the reviewers agree this is an elegant approach overall, presented clearly and rigorously in the paper. Several concerns and issues were raised around comprehensiveness of the experimental results, but they were diligently addressed during the review period and the AE believes the expansions described in the author responses are sufficient. For final revision, all reviewers requested a more complete and self-contained introduction to "do-calculus", related work that better covers more recent works, and a stronger commentary around what classes of nonlinear systems may or may not be captured by Assumption 1. ==================================================
# The Foundation Model Transparency Index

Anonymous authors
Paper under double-blind review

## Abstract

Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g. social media). Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance. To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the **Foundation Model Transparency Index**. The 2023 Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the *upstream* resources used to build a foundation model (e.g. data, labor, compute), details about the model itself (e.g. size, capabilities, risks), and the *downstream* use (e.g. distribution channels, usage policies, affected geographies). We score 10 major foundation model developers (e.g. OpenAI, Google, Meta) against the 100 indicators to assess their transparency. To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g. GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm. Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.

## 1 Introduction

Foundation models (FMs) like LLaMA and DALL-E 3 are an emerging class of digital technology that has transformed artificial intelligence (Bommasani et al., 2021). These resource-intensive models are often built by processing trillions of bytes of data, with some of the most capable systems, like OpenAI's GPT-4, costing hundreds of millions of dollars to build.1 Foundation models power some of the fastest-growing consumer technologies in history,2 including myriad generative AI applications,3 bringing immense commercial investment and public awareness to AI.
Simultaneously, these models have captured the interest of policymakers around the world: the United States,4 China,5 Canada,6 the European Union,7 the United Kingdom,8 India,9 Japan,10 the G7,11 and a wide range of other governments have already taken action on foundation models and generative AI. Foundation models are positioned to be the defining digital technology of the decade ahead.

1https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/ 2https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ 3https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier 4https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf 5http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm 6https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems 7https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai 8https://www.gov.uk/cma-cases/ai-foundation-models-initial-review 9https://indiaai.s3.ap-south-1.amazonaws.com/docs/generative-ai-report.pdf 10https://english.kyodonews.net/news/2023/10/3b83adf1e28d-japans-ai-draft-guidelines-ask-for-measures-to-address-overreliance.html 11https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf

Transparency is an essential precondition for public accountability, scientific innovation, and effective governance of digital technologies. Without adequate transparency, stakeholders cannot understand foundation models, who they affect, and the impact they have on society. Historically, digital technologies often follow a familiar pattern: a new technology provides opportunities and benefits, but companies are not transparent in how they develop and deploy the technology, and this opacity eventually leads to harm. In the case of social media, companies have not been transparent about the ways in which they moderate content and share user data, contributing to massacres like the Rohingya genocide in Myanmar12 and gross violations of privacy like the Cambridge Analytica scandal.13 Consequently, a chorus of academics, civil society organizations, firms, and governments have called for foundation model developers to improve transparency.14 Groups such as the Partnership on AI, Mozilla, and Freedom House have noted that increased transparency is a crucial intervention.15 UN Secretary-General António Guterres has proposed that the international community should "make transparency, fairness and accountability the core of AI governance ... [and] Consider the adoption of a declaration on data rights that enshrines transparency."16

Foundation models appear to be on track to replicate the opacity of social media. Consider OpenAI's GPT-4, one of the most influential foundation models today. OpenAI states plainly its intention to be nontransparent in the GPT-4 technical report, which "contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar" (OpenAI, 2023).
Companies often claim that such information is proprietary or that sharing it would undermine their market position and pose a danger to society as a whole, but this does not negate the enormous risks stemming from foundation models that these same companies openly acknowledge, nor the value of greater transparency. While the downsides of opacity are clear, transparency in the foundation model ecosystem today remains minimal. Little to no evidence exists about which foundation model developers are transparent about which matters, and where there are blind spots in the industry. How best to improve transparency remains an open question despite rising concerns. To determine the status quo and track how it evolves over time, we introduce the Foundation Model Transparency Index (FMTI).17 A composite *index* measures a complex construct (e.g. transparency) as the basis for scoring/ranking entities (e.g. foundation model developers) by aggregating many low-level quantifiable indicators of transparency. Indexes are not common in AI18 but are a standard methodology in the social sciences: iconic examples include the United Nations Development Programme's Human Development Index (UNDP, 2022), which ranks countries, and Ranking Digital Rights' Corporate Accountability Index, which ranks companies (RDR, 2020). We score the transparency of foundation model developers in an effort to promote responsible business practices and greater public accountability. We deconstruct the concept of transparency into 3 high-level domains: the *upstream* (e.g. the data, labor, and compute resources used to build a foundation model), *model*-level (e.g. the capabilities, risks, and evaluations of the foundation model), and *downstream* (e.g. the distribution channels, usage policies, and affected geographies) practices of the foundation model developer.

The 2023 Foundation Model Transparency Index. For the 2023 index, each domain is further broken down into 32–35 indicators: these are concrete, specific, and decidable aspects of transparency (e.g. does the foundation model developer disclose the size of its model?). Ultimately, the index consists of 100 indicators (see Appendix A) that comprehensively codify what it means for a foundation model developer to be transparent, building upon formative works on transparency for AI and other digital technologies (Gebru et al., 2021; Bender & Friedman, 2018b; Mitchell et al., 2018; Raji & Buolamwini, 2019; Gray & Suri, 2019a; Crawford, 2021; Vogus & Llansó, 2021; Keller, 2022).19 We score 10 major foundation model developers on each of the 100 indicators to determine how transparent each company is in the development and deployment of its models.

12https://about.fb.com/wp-content/uploads/2018/11/bsr-facebook-myanmar-hria_final.pdf 13https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html 14See Appendix C for further discussion. 15http://partnershiponai.org/wp-content/uploads/2021/08/PAI-Responsible-Sourcing-of-Data-Enrichment-Services.pdf 16https://indonesia.un.org/sites/default/files/2023-07/our-common-agenda-policy-brief-gobal-digi-compact-en.pdf 17URL to project website is anonymized for submission. 18We note that the AI Index from the Stanford Institute for Human-Centered AI (Zhang et al., 2022a; Maslej et al., 2023) is a related effort, but the AI Index tracks broader trends in AI, rather than scoring specific entities or aggregating to a single value.
In particular, we score developers based on their practices in relation to their flagship foundation models: we assess OpenAI (GPT-4), Anthropic (Claude 2), Google (PaLM 2), Meta (Llama 2), Inflection (Inflection-1), Amazon (Titan Text), Cohere (Command), AI21 Labs (Jurassic-2), Hugging Face (BLOOMZ; as host of BigScience),20 and Stability AI (Stable Diffusion 2). In addition, for downstream indicators we consider the flagship or in-house distribution channel: OpenAI (OpenAI API), Anthropic (Claude API), Google (PaLM API), Meta (Microsoft Azure),21 Inflection (Pi), Amazon (Bedrock), Cohere (Cohere API), AI21 Labs (AI21 Studio), Hugging Face (Hugging Face Model Hub), and Stability AI (Stability API). We assess developers on the basis of publicly-available information to make our findings reproducible and encourage transparency vis-à-vis the public on the whole. To ensure our scoring is consistent, we identify information using a rigorous search protocol (see Appendix B). To ensure our scoring is accurate, we notified developers and provided them the opportunity to contest any scores prior to the release of this work (all 10 responded and 8 of the 10 explicitly contested some scores). We summarize our core findings, recommendations, and contributions below and make all core materials (e.g. indicators, scores, justifications, visuals) publicly available.22 ## 1.1 Findings On the basis of conducting the index, we extensively catalogue 35 empirical findings in §7: results spanning overarching trends, domain-level analyses, breakdowns for open vs. closed developers, and similarities in developer practices. We summarize the 10 most critical findings. Significant but obtainable headroom in overall transparency scores (Figure 7). Given that the highest overall score is 54 out of 100 and the mean overall score is 37, all developers have significant room for improvement. In many cases, such improvement is already feasible: 82 of the indicators are achieved by some developer, and 71 are achieved by multiple developers. Significant unevenness in overall scores with three major clusters (Figure 7). Overall scores vary significantly given a range of 42 between the highest-scoring developer, Meta, at 54 and the lowest-scoring developer, Amazon, at 12. Relative to the mean overall score of 37, organizations group into three clusters: four well-above the mean (Meta, Hugging Face, OpenAI, Stability AI), three around the mean (Google, Anthropic, Cohere), and three well-below the mean (AI21 Labs, Inflection, Amazon). Upstream resource transparency scores the worst (Figure 8). Breaking down the trends by domain, scores are consistently the worst for the upstream domain, particularly the Data, Data Labor, and Compute subdomains. Several developers (AI21 Labs, Inflection, Amazon) receive 0 points across the entire set of 32 indicators for the upstream domain. Several upstream matters surrounding data creation are fully opaque (Figure 10). Within the upstream domain, no company scores points for indicators about data creators, the copyright and license status of data, and mitigations related to copyright. The industry-wide lack of transparency on these issues relates directly to pressing societal concerns related to copyright and intellectual property, which are the subject of ongoing litigation. 19See https://transparency.dsa.ec.europa.eu/ and https://www.tspa.org/curriculum/ts-fundamentals/ transparency-report/ 20Our objective is to assess Hugging Face as a company that can be tracked over time. 
BLOOMZ, however, was not built unilaterally by Hugging Face, but instead through the BigScience open collaboration. As a result, we refer to Hugging Face in the prose but include the BigScience logo in visuals; we provide further discussion in §5.2: model-selection. 21Meta announced Microsoft as the "preferred partner" for Llama 2 via Azure: https://about.fb.com/news/2023/07/llama-2/ 22URL to data is anonymized for submission.

Transparency is highest, but still imperfect, for very basic model information and downstream distribution (Figure 9). Breaking down the trends by major dimensions of transparency, the highest-scoring dimensions are Methods, Model Basics, Capabilities, and Distribution. However, even when considering indicators for these high-scoring dimensions of transparency, most companies do not reveal basic information like model size, nor do they explain how or why they made certain release decisions.

Developer transparency on Capabilities does not translate to transparency on Limitations, Risks, and Model Mitigations (Figure 11). Within the model domain, we consider Capabilities, Limitations, Risks, and Model Mitigations as four tightly-related subdomains that characterize a model's potential societal impact. While many developers score well on Capabilities by describing, demonstrating, and evaluating capabilities, which reflect their models' strengths, the same cannot be said for the other three subdomains. Instead, transparency is significantly worse: just two developers demonstrate limitations, none evaluate multiple intentional harms their models could facilitate, and none provide either externally reproducible or third-party assessments of mitigation efficacy.

There is virtually no transparency about the downstream impact of foundation models (Figure 12). Within the downstream domain, no developer provides any transparency into the affected market sectors, affected individuals, affected geographies, or any form of usage reporting. Overall, the average score on the Impact subdomain is the worst in the entire index at 11%; only three developers provide even the most minimal characterization of the number of downstream applications, and no developer provides a mechanism for users to seek redress.

Open developers are consistently more transparent than closed developers (Figure 13). Breaking down trends by how developers release their models, open developers (i.e. those that release model weights and, potentially, data) show a clear edge in transparency over closed counterparts (e.g. API providers). Two of the three open developers (Meta and Hugging Face) score better than all other developers, while the third (Stability AI) scores one point below the highest-performing closed developer (OpenAI). Open developers have higher average scores on 17 of the 23 subdomains.

Open developers are much more transparent on upstream resources and comparably transparent on downstream use when compared to closed developers (Figure 13). The average score for open developers on upstream indicators is 53% compared to a paltry 9% for closed developers. However, while closed developers have greater control over the downstream use of their foundation models, this does not translate to greater downstream transparency as the average for open developers on downstream indicators is 49% compared to 43% for closed developers.

Some companies have highly correlated scores (Figure 14).
Considering pairs of companies, we analyze the extent to which they agree on the indicators where they do and do not score points. In particular, the three members of the Frontier Model Forum (Anthropic, Google, OpenAI) exhibit high indicator-level similarity, as do the two companies that release both model weights and data (Hugging Face, Stability AI) and the four lowest-scoring companies (Cohere, AI21 Labs, Inflection, Amazon). This leaves Meta as the sole outlier in terms of developer-developer indicator-level similarity. ## 1.2 Recommendations On the basis of our findings, we make specific recommendations aimed at foundation model developers, foundation model deployers, and policymakers in §8: recommendations. We highlight our top-most recommendation for each stakeholder group. Foundation model developers should improve transparency by drawing on the practices of their competitors. By assessing developers directly, we clarify for each developer the indicators where they lack transparency. In itself, this provides a clear diagnostic on where they stand relative to their competitors and, given our justifications for why transparency on these matters is valuable, why improving transparency would be beneficial for society. Given that 82 indicators are all satisfied by some developer, developers can directly consult the practices of their competitors to provide a clear example of how they might improve their transparency. There is a tremendous gap between the 82 already-feasible indicators and the current top score of 54 and the mean score of 37, meaning there are many areas of low-hanging fruit where developers can readily improve transparency today. Foundation model deployers should push for greater transparency from developers. Foundation models intermediate a growing supply chain: deployers of foundation models (e.g. cloud service providers and companies that license developers' models) as well as other downstream actors are influenced by, and can influence, the transparency of foundation model developers. In particular, deployers should push developers for greater transparency when making the decision, and potentially negotiating the contract, to deploy a developer's model. Deployers and other downstream actors wield leverage collectively: it is their downstream use that generates users and revenue for foundation model developers, meaning they should use this leverage to acquire the necessary transparency from foundation model developers. Policymakers should prioritize transparency with sufficient precision. Given the importance of transparency, policymakers should make transparency a top priority in legislative proposals and regulatory enforcement related to foundation models. While transparency is already broadly recognized in most regulatory frameworks for AI, policymakers should be more precise about what they mean by transparency and the areas in which they hope to reduce opacity via transparency requirements or other measures. In particular, policymakers should understand the status quo for transparency (e.g. via the scores we provide) and use this evidence to inform interventions in the areas where transparency is most urgently needed (e.g. on Data Labor and Impact, given these are lowest-scoring dimensions of transparency across the entire supply chain). ## 1.3 Contributions To summarize, our contributions are: 1. 
**Taxonomy.** We taxonomize the vast conceptual space of transparency in the context of foundation models, following on widespread calls for transparency (see Appendix C). In particular, we structure the space hierarchically into 3 domains (i.e. upstream, model, downstream), 23 subdomains (e.g. data, compute, capabilities, risks, distribution, feedback), and 100 decidable and actionable indicators.
2. **Scoring of major foundation model developers.** We score 10 major foundation model developers and their flagship foundation models with a standardized protocol. These developers vary in their company status (e.g. startups, Big Tech), release strategy (e.g. open weights, restricted API), modalities (e.g. text-to-text, text-to-image), and involvement in global policy efforts (e.g. White House voluntary commitments, Frontier Model Forum). We allow developers to directly contest scores: all 10 developers engaged in correspondence and 8 contested specific scores.
3. **Empirical findings.** Our extensive evaluation yields 35 findings, which ground existing discourse and sharpen our understanding of the lack of transparency in the foundation model ecosystem. In many cases, these findings directly bear on critical global AI policy efforts (e.g. the EU AI Act) and provide the basis for clear recommendations on how developers may improve their practices (e.g. by creating centralized documentation artifacts). Our scores offer ample opportunities for further analysis.
4. **Legibility and reproducibility.** We provide a public website that presents our findings and recommendations in a form that is broadly legible to a general audience.23 To facilitate further research, and reproduce our scoring and analyses, we make all core materials (e.g. indicators, scores, justifications, visuals) publicly available.24
5. **Theory of change and future versions.** Our objective is to simultaneously articulate the status quo and increase transparency over time. To this end, we make very explicit our theory of change: we view our work, which compiles the transparency practices across companies, as an instrument for driving change (see §9.1: change), as well as the limitations/risks of our work (see §9.2: limitations). Critically, we will conduct additional iterations of the index to track progress over time to work towards a more transparent foundation model ecosystem.

23URL to project website is anonymized for submission. 24URL to data is anonymized for submission.

## 2 Background

To begin, we provide a brief primer on the three core concepts underlying this work: foundation models, transparency, and indexes.

## 2.1 Foundation Models

Foundation models are the defining paradigm of modern AI, reflecting a broad shift in the field from bespoke models for individual tasks to more general models that can be adapted for a wide range of use cases (Bommasani et al., 2021). In this sense, foundation models belong to the broader class of general-purpose technologies that have restructured society such as electricity, the Internet, and smartphones (Bresnahan & Trajtenberg, 1995; Brynjolfsson et al., 2021; Bommasani et al., 2021; Eloundou et al., 2023). Building foundation models requires significant resources: immense volumes of data are processed using immense amounts of computation to yield the foundation model. Using foundation models often requires substantially fewer resources in comparison: models can be adapted, often in lightweight fashion (e.g. through a simple textual interface), for an increasingly wide range of use cases.
The disparity in resource requirements between development and deployment has yielded a market where a small set of companies build the most prominent foundation models that are then adopted by thousands of companies and millions of consumers (Bommasani et al., 2023b; Vipra & Korinek, 2023; Widder et al., 2023). The structure of the foundation model paradigm implicates a broader ecosystem and supply chain (Bommasani et al., 2023b; Cen et al., 2023; Jones, 2023). We depict a conceptualized view of this supply chain in Figure 1. The supply chain begins with the *upstream* resources that are used to build a foundation model: data, computational hardware, energy, labor, and code. For each of these resources, a further supply chain exists: for example, data to build foundation models is often sourced from the Internet, but this data can only come to be on the Internet as a result of human data-generating processes (e.g. publishing news articles, authoring personal blogs, uploading videos to YouTube, creating music) along with Internet infrastructure (e.g. networking protocols). Alongside these upstream resources and supply chains, foundation models are then used as the foundation for supply chains that derive from the model. In particular, foundation models are made available for downstream use through *distribution channels* (e.g. an API to access the model or a host that facilitates inference using the model). By way of these distribution channels, foundation models power *downstream* applications (e.g. commercial products and services) across a range of market sectors and geographies. For instance, OpenAI's GPT-4 powers applications in education (e.g. Khan Academy's Khanmigo tutor), finance (e.g. Stripe's fraud detection tool), banking (e.g. Morgan Stanley's internal chatbot), and government (e.g. Iceland's language preservation system).25 Overall, a comprehensive account of the societal impact of foundation models, and their transparency in particular, requires consideration of the different parts of the foundation model ecosystem (Bommasani et al., 2021, §1.2).

Foundation models have fueled the recent wave of generative AI technologies: these models can be used to generate fluent text, useful code, photorealistic images, and compelling audio. New research efforts have built foundation models in an even broader array of domains: biology (Lin et al., 2023), climate change (Lacoste et al., 2023), weather,26 astronomy (Nguyen et al., 2023), radiology (Chambon et al., 2022), and robotics (Open X-Embodiment Collaboration et al., 2023). Nevertheless, much of the present public and commercial interest centers on language models (e.g. Anthropic's Claude 2, Meta's Llama 2) and multimodal models with language interfaces (e.g. Stability AI's Stable Diffusion 2, OpenAI's GPT-4). Alongside their significant capabilities, researchers have highlighted a large number of potential risks posed by these foundation models, spanning from malicious uses like generating disinformation to unintended harms like generating text that reinforces societal biases (Bender et al., 2021; Bommasani et al., 2021; Abid et al., 2021; Weidinger et al., 2022). There have also been recent demonstrations of many concrete harms from language models.27 25See https://openai.com/gpt-4 for a list of several applications built upon OpenAI's GPT-4.
26https://www.earthdata.nasa.gov/news/weather-ai-fm-workshop 27Partnership on AI's AI Incident database (https://incidentdatabase.ai/) and the AI, Algorithmic, and Automation Incidents and Controversies database (https://www.aiaaic.org/aiaaic-repository) collect incidents of harm caused by AI. For a concrete example, see https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/.

Figure 1: Foundation Model Supply Chain. A conceptual depiction of the foundation model supply chain, beginning with the primary upstream resources (i.e. data, compute) and transitioning to the foundation model, subsequent hosts (or distribution channels), and ending with downstream applications. Image taken with permission from Jones (2023).

## 2.2 Transparency

Transparency is broadly understood as the property of being visible and easily understood (Aristotle, 350 B.C.E; Kalderon, 2015), and is often a fundamental prerequisite of social responsibility and accountability (Florini, 2007; Robinson & Acemoglu, 2012). Transparency is desirable from a variety of standpoints. For example, transparently disclosing information makes that information available, shareable, legible, and verifiable. Transparency when conducting a complex process can make clear the process's scope, stakes, and pitfalls (Lathrop & Ruma, 2010). Similarly, transparency in decision-making can help those who are not involved in the decision assess the motivations behind the decision, the evidence used to justify it, as well as its costs and benefits. Various philosophers, political theorists, scientists, and journalists have emphasized the importance of transparency across these and other domains (Johnston, 2006; Florini, 2007; Benkler, 2013; Schudson, 2015). Civil society, grassroots organizations, and consumers also regularly call for transparency as a mechanism for fact finding, accountability, and holding organizations responsible for harm (Heikkilä, 2023; DiResta et al., 2022).28 For our purposes, we consider transparency as it relates to the development and use of digital technologies, with a specific focus on the transparency of the practices of foundation model developers as measured by the information they share regarding their models.29

Why transparency matters for digital technologies. Transparency in digital technologies is particularly relevant for three reasons. First, new digital technologies, such as AI, are not well understood by society, often appearing as a black box (Castelvecchi, 2016). Second, digital technologies are easily rendered invisible, meaning it is difficult for nonexperts to understand when processes like algorithmic decision-making are taking place (Ng et al., 2021). Third, these technologies can have a profound influence on billions of users across society. And yet these technologies are built by a small cadre of industry actors who do not represent society as a whole. Under these conditions, transparency functions as a prerequisite for public accountability and responsible innovation (Klyman, 2023). Shared visibility engenders public trust and facilitates interventions in the public interest (Hardin, 2002). Without sufficient understanding of industry practices, researchers cannot characterize the societal impact of digital technologies, let alone propose concrete actions to improve business practices (Pasquale, 2015).
While the effects of transparency are often difficult to measure as they are diffuse and indirect, transparency helps to expose malpractice and enables the public to respond to such malpractice.

Limitations of transparency. Transparency is far from sufficient on its own and it may not always bring about the desired change (Corbett & Denton, 2023). Salient critiques of transparency include:

- Transparency does not equate to responsibility. Without broad based grassroots movements to exert public pressure or concerted government scrutiny, organizations often do not change bad practices (Boyd, 2016; Ananny & Crawford, 2018).
- Transparency-washing provides the illusion of progress. Some organizations may misappropriate transparency as a means for subverting further scrutiny. For instance, major technology companies that vocally support transparency have been accused of *transparency-washing*, whereby "a focus on transparency acts as an obfuscation and redirection from more substantive and fundamental questions about the concentration of power, substantial policies and actions of technology behemoths" (Zalnieriute, 2021).
- Transparency can be gamified. Digital platforms have been accused of performative transparency, offering less insightful information in the place of useful and actionable visibility (Ghosh & Faxon, 2023; Mittelstadt, 2019). As with other metrics, improving transparency can be turned into a game, the object of which is not necessarily to share valuable information.30
- Transparency can inhibit privacy and promote surveillance. Transparency is not an apolitical concept and is often instrumentalized to increase surveillance and diminish privacy (Han, 2015; Mohamed et al., 2020; Birchall, 2021). For foundation models, this critique underscores a potential tension between adequate transparency with respect to the data used to build foundation models and robust data privacy.
- Transparency may compromise competitive advantage or intellectual property rights. Protection of competitive advantage plays a central role in providing companies with the incentives to innovate, thereby yielding competition in the marketplace that benefits consumers. Consequently, work in economics and management studies has studied the interplay and potential trade-off between competitive advantage and transparency (Bloomfield & O'Hara, 1999; Granados & Gupta, 2013; Liu et al., 2023), especially in the discourse on corporate social responsibility.

28See Appendix C for additional details on calls for transparency. 29Note that the term "transparency" is at times also used to describe efforts to make AI more explainable or interpretable at the level of specific AI-based predictions or decisions (Liao & Vaughan, 2023; Zou et al., 2023). Such transparency is not the subject of our work. 30According to Goodhart's Law, "when a measure becomes a target, it ceases to be a good measure" (Goodhart, 1984).

Transparency is not a panacea. In isolation, more information about foundation models will not necessarily produce a more just or equitable digital world. But if transparency is implemented through engagement with third-party experts, independent auditors, and communities who are directly affected by digital technologies, it can help ensure that foundation models benefit society.

Transparency in practice for prior digital technologies. Digital technologies are marked by a long track record of poor transparency.
While each major new technology has dramatically restructured society, the powerful corporations that build these technologies have wielded outsized influence and maintained opacity to advance their commercial interests. Consider the following examples of digital technologies that suffer from a lack of transparency as well as associated interventions/studies to reduce opacity: the fight for net neutrality for internet service providers like Comcast (Service, 2021), web cookies for online advertising like Google Ads (Englehardt et al., 2015; Englehardt & Narayanan, 2016; Narayanan & Reisman, 2017), labor practices for crowd-sourcing platforms like Amazon Mechanical Turk (Gray & Suri, 2019a; Crawford, 2021), wage schemes for ride sharing platforms like Uber (Rosenblat & Stark, 2016), and dark patterns for game companies like Epic Games (Commission, 2023).

Stepping through these examples, efforts like the Princeton Web Transparency Project (Englehardt et al., 2015; Englehardt & Narayanan, 2016; Narayanan & Reisman, 2017) have unveiled the ecosystem of online third-party tracking using cookies, which "led to greater public awareness, the cessation of some privacy-infringing practices, and the creation of new consumer privacy tools." Similarly, Rosenblat & Stark (2016) empirically demonstrated that Uber drivers were the subject of a severely asymmetric power dynamic given the control exerted by Uber over their drivers, to the detriment of the ride sharing market. In the context of crowd-sourcing, Gray & Suri (2019a) and Crawford (2021) demonstrated exploitation of the "ghost" workers powering AI, such as on Amazon Mechanical Turk, who are made invisible on these platforms. More recently, these efforts have prompted scrutiny from lawmakers aimed at improving transparency and, thereby, labor conditions. As a final example, dark patterns have been a pervasive practice across myriad technologies, leading to mismanaged consumer expectations and overall opacity. To this end, the FTC's recent inquiry into Epic Games for dark patterns used to deceive gamers, and particularly children, amounted to a $245M fine on Epic Games (Commission, 2023).

Building on these prior examples, we consider social media more specifically. Social media platforms provide a vivid example of transparency challenges in recent years, and the increasing level of acknowledgement among some technology companies that a baseline level of transparency is a necessity. Given the profound impact of social media in mediating how humans form relationships, communicate with each other, buy goods and services, and access information, a broad body of work argues for greater transparency (see Keller, 2022). Social media platforms have slowly begun to adopt transparency reporting practices. For example, Facebook now hosts its own Ad Library31, Content Library32, and a transparency center33 that reports on content enforcement, widely viewed content, regulatory transparency, government data requests, and intellectual property, among other pieces of mostly voluntary transparency. In parallel, transparency requirements have been enshrined in laws like the EU Digital Services Act (Commission, 2022) and legislative proposals like the U.S. Platform Accountability and Transparency Act (Coons et al., 2021).

31https://www.facebook.com/ads/library/ 32https://transparency.fb.com/researchtools/meta-content-library 33https://transparency.fb.com/

Transparency for AI.
With the rise of AI in the past 10 years, its societal impact has received much greater attention (Barocas & Selbst, 2016; Abebe et al., 2020; Hutchinson et al., 2021; Bender et al., 2021). Transparency is often referenced as a core ethical principle undergirding responsible AI (Fjeld et al., 2020; Hagendorff, 2020).34 Jobin et al. (2019) find that transparency is the most frequently cited principle in AI ethics guidelines, appearing in 85% of the assessed 84 guidelines. Given that the standard machine learning pipeline is divided into several stages, transparency efforts often target different stages.35 Documentation efforts are most common at the level of data (Gebru et al., 2021; Bender & Friedman, 2018b; Pushkarna et al., 2022) and models (Mitchell et al., 2018; Crisan et al., 2022), with evaluations providing further insight into models (Deng et al., 2009; Ribeiro et al., 2020; Perez et al., 2022; Liang et al., 2022c; Bommasani et al., 2023c). More recently, several efforts have studied the broader ecosystem-wide transparency of AI and its supply chains (Bommasani et al., 2023b; Cen et al., 2023), though transparency on the downstream impacts of AI is comparatively understudied (Narayanan & Kapoor, 2023). The Foundation Model Transparency Index advances this view, assessing transparency of foundation models with a comprehensive ecosystem-level approach that spans the data and broader upstream resources, the foundation models themselves, and the downstream use and impact.

## 2.3 Indexes

A (composite) index is a standard methodology (OECD et al., 2008; Greco et al., 2019) for assessing entities (e.g. companies, countries) in relation to a specific construct (e.g. transparency, responsibility). Methodologically, the score on an index for a specific entity is the aggregate of multiple low-level indicators that can be more directly quantified. Composite indexes as a methodology have seen broad adoption across the social sciences, including to directly address major political, economic, and societal concerns such as public corruption (e.g. Transparency International's Corruption Perceptions Index; Transparency International, 2023), environmental welfare (e.g. the World Economic Forum's Environmental Sustainability Index; Whitford & Wong, 2009) and living standards (e.g. the United Nations Development Programme's Human Development Index; Hopkins, 1991). However, indexes have not played a major role in mainstream AI discourse.36

Indexes are designed to further several objectives and have certain characteristic strengths (Commission et al., 2008; Saisana & Tarantola, 2002). Most fundamentally, indexes can transform complex and amorphous constructs into straightforward and concrete scores. Indexes and the aggregate quantitative metrics they provide can therefore allow for broad engagement on certain topics, furthering public understanding as well as providing a strong basis for various forms of decision-making such as regulatory intervention. In addition, when indexes are maintained over time, they encourage a long-term focus and can be vital in fostering improvement over time. In this way, while operating at a different level of abstraction and involving a different set of design decisions, indexes are analogous to model benchmarks that are commonplace in AI (Deng et al., 2009; Wang et al., 2019; Liang et al., 2023) and appeal to a similar theory of change (Donoho, 2017; Ethayarajh & Jurafsky, 2020; Raji et al., 2021; Bommasani, 2023).
Indexes also have shortcomings: namely, they can be reductive and overly subjective (Saisana & Tarantola, 2002; OECD et al., 2008; Greco et al., 2019).

34See UNESCO's Recommendation on the Ethics of Artificial Intelligence, which was adopted by its 193 member states and constitutes the first global normative instrument on AI ethics. Our conceptualization of transparency covers several of UNESCO's 10 principles, namely Transparency and Explainability. See https://www.unesco.org/en/artificial-intelligence/recommendation-ethics 35As mentioned previously, the term "transparency" is also sometimes used in AI to refer to explainability/interpretability, referring to understanding how a specific model makes predictions (Zou et al., 2023). In part, the emphasis on this topic is due to the inscrutability of the deep neural networks that have powered AI's rise. However, we focus on structural forms of transparency, taking a more macroscopic perspective. 36We highlight the AI Index from the Stanford Institute for Human-Centered AI (Maslej et al., 2023; Zhang et al., 2022a), which tracks global progress of AI across a variety of quantitative indicators. In contrast to the composite indexes here, the AI Index neither directly scores specific entities nor does it aggregate individual indicators into a singular aggregate. We also highlight the Generative AI Accountability Scorecard from Ranking Digital Rights as a forthcoming effort that targets the generative AI services downstream of foundation models: https://rankingdigitalrights.org/mini-report/introducing-rdrs-preliminary-standards-for-generative-ai/.
We also discuss methodological shortcomings relating to each of these decisions in §9.2: limitations. To guard against unnecessary simplification, we provide discussion and analysis at several levels of abstraction in §7: results. Overall, the Foundation Model Transparency Index captures the key dimensions of transparency that are relevant to foundation models at present. As the foundation model ecosystem and AI policy evolves over time, the central questions regarding the transparency of foundation models will evolve as well. Consequently, we will conduct future versions of the index that adjust the indicators to reflect these changes. We more expansively discuss our intended impact (including our theory of change and associated limitations and risks) in §9: impact. ## 4 Indicators We define 100 indicators that comprehensively characterize transparency for foundation model developers. To select these indicators, we compiled relevant concepts raised across past scientific literature as well as concerns animated by public discourse on foundation models and other digital technologies. In Appendix A we provide specific references for each indicator, and these references advocate for increased transparency and information sharing related to the indicator in question. We derived a concrete set of indicators from this literature, engaging external researchers to converge on the final list of 100 (see Figure 2). These indicators cover each dimension of the foundation model supply chain, from the data, compute, and labor required to build foundation models to model evaluations and developers' policies to restrict their use. We divide our indicators into three broad domains as described in Figure 1: indicators that are *upstream* of the model, indicators that relate to the *model* itself, and indicators that are *downstream* of the model. ## 4.1 Upstream Indicators The upstream indicators identify the *ingredients and processes* involved in building a foundation model. There are 32 upstream indicators, which we further taxonomize into the following 6 subdomains: - **Data (10 indicators).** Assesses transparency regarding the size and composition of the data used to build the model; the creators whose content is present in the data; and any steps to curate or augment the data. These indicators also address transparency regarding the inclusion of personal, copyrighted, or licensed data. - **Data Labor (7 indicators).** Assesses transparency regarding the use of human labor in producing the data used to build the model, including the wages, labor protections, employer, and geographic distribution of workers who contributed to data annotation and curation. These indicators also address transparency regarding the third parties that foundation model developers partnered with to construct their models. - **Data Access (2 indicators).** Assesses the scope of data access given to external parties. - **Compute (7 indicators).** Assesses transparency regarding the hardware and computation used to build the model, as well as the resulting energy use and environmental impacts. - **Methods (4 indicators).** Assesses basic technical specifications for the model's training stages and objectives, as well as the software frameworks and dependencies used. - **Data Mitigations (2 indicators).** Assesses transparency regarding steps taken to mitigate data privacy and copyright concerns. We depict the upstream indicators in Figure 3. 
Researchers have widely advocated for greater transparency in relation to Data and Data Access (Bender & Friedman, 2018a; Gebru et al., 2018; Hutchinson et al., 2021; Dodge et al., 2021; Bandy & Vincent, 2021) as a means for contextualizing model capabilities (Sambasivan et al., 2021; Longpre et al., 2023) and risks related to privacy, bias, and copyright (Buolamwini & Gebru, 2018; Bender et al., 2021; Kandpal et al., 2022; Sobel, 2017). Data Labor indicators uplift concerns related to labor practices, include irresponsible or exploitative use of human labor (Gray & Suri, 2019a; Crawford, 2021; Hao & Seetharaman, 2023; Kittur et al., 2013; Dzieza, 2023; West, 2019). Compute indicators relate to concerns around the high computational cost and energy expenditure associated with building foundation models, which can result in environmental harm (Lacoste et al., 2019; Strubell et al., 2019; Schwartz et al., 2020; Patterson et al., 2021; Bender et al., 2021; Henderson et al., 2020; Luccioni & Hernández-García, 2023; Vipra & West, 2023). Data Mitigations indicators also relate to the growing legal and sociotechnical concerns over data privacy, copyright, and licensing (Henderson et al., 2023; Brown et al., 2022; Lee et al., 2023a; Cooper et al., 2023; Saveri et al., 2023). | 2023 Foundation Model Transparency Index Indicators Downstream | | | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Upstream | Model | | | Data size | Input modality | Release decision-making | | Data sources | Output modality | Release process | | Data creators | Model components | Distribution channels | | Data source selection | Model size | Products and services | | Data curation | Model architecture | Detection of machine-generated content | | Data augmentation | Centralized model documentation | Model License | | Harmful data ltration | External model access protocol | Terms of service | | Copyrighted data | Blackbox external model access | Permitted and prohibited users | | Data license | Full external model access | Permitted, restricted, and prohibited uses | | Personal information in data | Capabilities description | Usage policy enforcement | | Use of human labor | Capabilities demonstration | Justication for enforcement action | | Employment of data laborers | Evaluation of capabilities | Usage policy violation appeals mechanism | | External reproducibility of capabilities evaluation | | | | Geographic distribution of data laborers Wages Instructions for creating data | Third party capabilities evaluation Limitations description | | | Labor protections | Limitations demonstration | Permitted, restricted, and prohibited model behaviors Model behavior policy 
[The full three-column table of the 100 indicators is not reproduced here; the upstream, model, and downstream indicators are enumerated, with their definitions, in Figures 3, 4, and 5.]

Figure 2: **Indicators.** The 100 indicators of the Foundation Model Transparency Index spanning the 3 domains: upstream, model, and downstream.

## Upstream Indicators For The 2023 Foundation Model Transparency Index

**Data size:** For the data used in building the model, is the data size disclosed?
**Data sources:** For all data used in building the model, are the data sources disclosed?
**Data creators:** For all data used in building the model, is there some characterization of the people who created the data?
**Data source selection:** Are the selection protocols for including and excluding data sources disclosed?
**Data curation:** For all data sources, are the curation protocols for those data sources disclosed?
**Data augmentation:** Are any steps the developer takes to augment its data sources disclosed?
**Harmful data filtration:** If data is filtered to remove harmful content, is there a description of the associated filter?
**Copyrighted data:** For all data used in building the model, is the associated copyright status disclosed?
**Data license:** For all data used in building the model, is the associated license status disclosed?
**Personal information in data:** For all data used in building the model, is the inclusion or exclusion of personal information in that data disclosed?
**Use of human labor:** Are the phases of the data pipeline where human labor is involved disclosed?
**Employment of data laborers:** Is the organization that directly employs the people involved in data labor disclosed for each phase of the data pipeline?
**Geographic distribution of data laborers:** Is geographic information regarding the people involved in data labor disclosed for each phase of the data pipeline?
**Wages:** Are the wages for people who perform data labor disclosed?
**Instructions for creating data:** Are the instructions given to people who perform data labor disclosed?
**Labor protections:** Are the labor protections for people who perform data labor disclosed?
**Third party partners:** Are the third parties who were or are involved in the development of the model disclosed?
**Queryable external data access:** Are external entities provided with queryable access to the data used to build the model?
**Direct external data access:** Are external entities provided with direct access to the data used to build the model?
**Compute usage:** Is the compute required for building the model disclosed?
**Development duration:** Is the amount of time required to build the model disclosed?
**Compute hardware:** For the primary hardware used to build the model, is the amount and type of hardware disclosed?
**Hardware owner:** For the primary hardware used in building the model, is the owner of the hardware disclosed?
**Energy usage:** Is the amount of energy expended in building the model disclosed?
**Carbon emissions:** Is the amount of carbon emitted (associated with the energy used) in building the model disclosed?
**Broader environmental impact:** Are any broader environmental impacts from building the model besides carbon emissions disclosed?
**Model stages:** Are all stages in the model development process disclosed?
**Model objectives:** For all stages that are described, is there a clear description of the associated learning objectives or a clear characterization of the nature of this update to the model?
**Core frameworks:** Are the core frameworks used for model development disclosed?
**Additional dependencies:** Are any dependencies required to build the model disclosed besides data, compute, and code?
**Mitigations for privacy:** Are any steps the developer takes to mitigate the presence of PII in the data disclosed?
**Mitigations for copyright:** Are any steps the developer takes to mitigate the presence of copyrighted information in the data disclosed?

Figure 3: **Upstream Indicators.** The 32 upstream indicators that span Data, Data Labor, Data Access, Compute, Methods, and Data Mitigations.

## 4.2 Model Indicators

The model indicators identify the *properties and function* of the foundation model. There are 33 model indicators, which we further taxonomize into the following 8 subdomains:

- **Model Basics (6 indicators).** Assesses transparency regarding fundamental information about the model such as modalities, size, and architecture as well as the presence of centralized model documentation.
- **Model Access (3 indicators).** Assesses the scope of model access given to external entities.
- **Capabilities (5 indicators).** Assesses transparency regarding the capabilities of the model, including evaluations.
- **Limitations (3 indicators).** Assesses transparency regarding the limitations of the model, including evaluations.
- **Risks (7 indicators).** Assesses transparency regarding the risks of the model, including evaluations, with specific focus on both unintentional harm (e.g. bias) and intentional harm (e.g. fraud).
- **Model Mitigations (5 indicators).** Assesses transparency regarding model-level mitigations, including evaluations of their efficacy.
- **Trustworthiness (2 indicators).** Assesses transparency regarding the trustworthiness of the model, including evaluations.
- **Inference (2 indicators).** Assesses transparency regarding standardized inference with the model.

We depict the model indicators in Figure 4.
Model Basics indicators refer to fundamental information that is expected by model documentation standards (Mitchell et al., 2019; Crisan et al., 2022; Bommasani et al., 2023b) and, historically, have been reliably reported in the release of machine learning models. Model Access indicators reflect literature tied to the spectrum of model release and the associated differences in external access (Solaiman et al., 2019; Sastry, 2021; Shevlane, 2022; Liang et al., 2022b; Solaiman, 2023). The indicators on Capabilities, Limitations, Risks and Model Mitigations are motivated by a common understanding that these factors jointly influence the societal impact of machine learning models and AI systems (Tabassi, 2023b; Weidinger et al., 2023). For these subdomains, the description and demonstration indicators gauge whether there is some non-technical articulation and legibility of these concepts, primed by concerns surrounding public understanding of foundation models.38 To make these assessments more rigorous, the evaluation indicators build on the extensive tradition of evaluation in AI spanning iconic benchmarks like ImageNet (Deng et al., 2009), broader benchmarks like SuperGLUE (Wang et al., 2019), and extensive meta-benchmarks like LM-Harness, BIG-bench, HELM and BEHAVIOR (Gao et al., 2021b; Srivastava et al., 2022; Liang et al., 2023; Srivastava et al., 2021). Indicators assessing evaluations also highlight the importance of reproducibility (Lipton & Steinhardt, 2019; Kapoor et al., 2023; Kapoor & Narayanan, 2023)39 and independent assessment (Sandvig et al., 2014; Raji & Buolamwini, 2019; Metaxa et al., 2021; Costanza-Chock et al., 2022; Raji et al., 2022b; Raji, 2022; Lam et al., 2022; Weidinger et al., 2023), which enable open science and external verification of developers' claims about their models. In the case of risks, finer distinctions between unintentional harms (e.g. biases, toxicity) and intentional harms (e.g. disinformation, fraud) build on harm taxonomies (Bender et al., 2021; Bommasani et al., 2021; Weidinger et al., 2021; Tabassi, 2023a; Weidinger et al., 2023). Indicators on trustworthiness and inference are especially motivated by the Trustworthy ML Initiative40 and MLPerf (Reddi et al., 2020) respectively, among other works (Brundage et al., 2020; Cammarota et al., 2020; Kumar et al., 2020; Liu et al., 2022; Shneiderman, 2020; Patterson et al., 2021; Narayanan et al., 2023).

38See https://www.gov.uk/government/publications/public-perceptions-towards-the-use-of-foundation-models-in-the-public-sector.
39See the ML Reproducibility challenge: https://paperswithcode.com/rc2022, CodaLab worksheets for reproducible ML: https://worksheets.codalab.org/, and Joelle Pineau's reproducibility checklist: https://www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf.
40https://www.trustworthyml.org/

## Model Indicators For The 2023 Foundation Model Transparency Index

**Input modality:** Are the input modalities for the model disclosed?
**Output modality:** Are the output modalities for the model disclosed?
**Model components:** Are all components of the model disclosed?
**Model size:** For all components of the model, is the associated model size disclosed?
**Model architecture:** Is the model architecture disclosed?
**Centralized model documentation:** Is key information about the model included in a centralized artifact such as a model card?
**External model access protocol:** Is a protocol for granting external entities access to the model disclosed?
**Blackbox external model access:** Is black box model access provided to external entities?
**Full external model access:** Is full model access provided to external entities?
**Capabilities description:** Are the model's capabilities described?
**Capabilities demonstration:** Are the model's capabilities demonstrated?
**Evaluation of capabilities:** Are the model's capabilities rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model?
**External reproducibility of capabilities evaluation:** Are the evaluations of the model's capabilities reproducible by external entities?
**Third party capabilities evaluation:** Are the model's capabilities evaluated by third parties?
**Limitations description:** Are the model's limitations disclosed?
**Limitations demonstration:** Are the model's limitations demonstrated?
**Third party evaluation of limitations:** Can the model's limitations be evaluated by third parties?
**Risks description:** Are the model's risks disclosed?
**Risks demonstration:** Are the model's risks demonstrated?
**Unintentional harm evaluation:** Are the model's risks related to unintentional harm rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model?
**External reproducibility of unintentional harm evaluation:** Are the evaluations of the model's risks related to unintentional harm reproducible by external entities?
**Intentional harm evaluation:** Are the model's risks related to intentional harm rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model?
**External reproducibility of intentional harm evaluation:** Are the evaluations of the model's risks related to intentional harm reproducible by external entities?
**Third party risks evaluation:** Are the model's risks evaluated by third parties?
**Mitigations description:** Are the model mitigations disclosed?
**Mitigations demonstration:** Are the model mitigations demonstrated?
**Mitigations evaluation:** Are the model mitigations rigorously evaluated, with the results of these evaluations reported?
**External reproducibility of mitigations evaluation:** Are the model mitigation evaluations reproducible by external entities?
**Third party mitigations evaluation:** Can the model mitigations be evaluated by third parties?
**Trustworthiness evaluation:** Is the trustworthiness of the model rigorously evaluated, with the results of these evaluations disclosed?
**External reproducibility of trustworthiness evaluation:** Are the trustworthiness evaluations reproducible by external entities?
**Inference duration evaluation:** Is the time required for model inference disclosed for a clearly-specified task on a clearly-specified set of hardware?
**Inference compute evaluation:** Is the compute usage for model inference disclosed for a clearly-specified task on a clearly-specified set of hardware?

Figure 4: **Model Indicators.** The 33 model indicators that span Model Basics, Model Access, Capabilities, Limitations, Risks, Model Mitigations, Trustworthiness, and Inference.

## 4.3 Downstream Indicators

The downstream indicators identify the use of the foundation model, including details about its *release*.
There are 35 downstream indicators, which we further taxonomize into the following 9 subdomains:

- **Distribution (7 indicators).** Assesses transparency regarding the release process, the distribution channels for the model, and the products and services that arise through internal use. Additionally, this subdomain assesses the presence of model licenses, terms of service, and mechanisms for detecting model-generated content.
- **Usage Policy (5 indicators).** Assesses transparency regarding the developer's acceptable use policy such as restrictions on specific uses or users, as well as transparency regarding how it enforces such policies.
- **Model Behavior Policy (3 indicators).** Assesses transparency regarding the developer's policy on acceptable and unacceptable model behavior as well as transparency regarding enforcement of this policy and expectations in the event of usage policy violations.
- **User Interface (2 indicators).** Assesses transparency in the user interface for the developer's flagship distribution channel, if the channel includes a user interface.
- **User Data Protection (3 indicators).** Assesses transparency regarding the developer's policies with respect to user data protection, such as how data is stored, shared, and accessed.
- **Model Updates (3 indicators).** Assesses transparency regarding the developer's versioning protocol, change log, and deprecation policy.
- **Feedback (3 indicators).** Assesses transparency regarding mechanisms for reporting feedback on the model, summaries of feedback received, and related government inquiries.
- **Impact (7 indicators).** Assesses transparency regarding the downstream impact of the model on society, such as affected market sectors, individuals, and geographies. Additionally, this subdomain assesses transparency regarding downstream applications, usage statistics, and mechanisms for monitoring usage as well as providing redress in the event of harm to users.
- **Downstream Documentation (2 indicators).** Assesses the presence of centralized documentation for downstream use and documentation for responsible downstream use.

We depict the downstream indicators in Figure 5.

Given that foundation models are the basis for a downstream supply chain (Bommasani et al., 2021), the distribution indicators are informed by the literature on AI supply chains (Bommasani et al., 2023b; Vipra & Korinek, 2023; Cen et al., 2023; Cobbe et al., 2023; Widder & Wong, 2023; Brown, 2023) and release practices (Liang, 2022; Solaiman, 2023; Henderson et al., 2023; Kirchenbauer et al., 2023; Kuditipudi et al., 2023; Liesenfeld et al., 2023). Usage policy indicators draw from company publications on responsible model deployment (Cohere, 2022) as well as precedents from social media. Model behavior policy indicators are rooted in literature that discusses AI behavior and trustworthiness, risks, mitigation and refusal (Kumar et al., 2022; Weidinger et al., 2021; Brundage et al., 2020; Cammarota et al., 2020; Kumar et al., 2020; Liu et al., 2022; Reuter & Schulze, 2023). User interface indicators are derived from research on safety by design and human-centered user interfaces (Wang et al., 2023b; Nakao et al., 2022). User data protection indicators are inspired by policy recommendations on user data minimization, privacy, preservation, protection and contextual integrity (EU, 2016; Brown et al., 2022; Vipra & Myers West, 2023; Winograd, 2023; Nissenbaum, 2024; King, 2020; Mulligan et al., 2016).
Model updates indicators stem from work focused on adequately updating systems and version control of AI systems (Sathyavageesran et al., 2022; Hashesh, 2023; Chen et al., 2023b). For feedback, impact and downstream documentation, the indicators were motivated by the literature on algorithmic auditing (Liang, 2022; Solaiman, 2023; Raji et al., 2022b) as well as transparency reporting practices for social media.41

41See https://www.tspa.org/curriculum/ts-fundamentals/transparency-report/, https://transparencyreport.google.com/ and https://transparency.fb.com/reports/.

## Downstream Indicators For The 2023 Foundation Model Transparency Index

**Release decision-making:** Is the developer's protocol for deciding whether or not to release a model disclosed?
**Release process:** Is a description of the process of how the model was released disclosed?
**Distribution channels:** Are all distribution channels disclosed?
**Products and services:** Does the developer disclose whether any products and services offered by the developer are dependent on the model?
**Detection of machine-generated content:** Are any mechanisms for detecting content generated by this model disclosed?
**Model License:** Is a license for the model disclosed?
**Terms of service:** Are terms of service disclosed for each distribution channel?
**Permitted and prohibited users:** Is a description of who can and cannot use the model disclosed?
**Permitted, restricted, and prohibited uses:** Are permitted, restricted, and prohibited uses of the model disclosed?
**Usage policy enforcement:** Is the enforcement protocol for the usage policy disclosed?
**Justification for enforcement action:** Do users receive a justification when they are subject to an enforcement action for violating the usage policy?
**Usage policy violation appeals mechanism:** Is a mechanism for appealing potential usage policy violations disclosed?
**Permitted, restricted, and prohibited model behaviors:** Are model behaviors that are permitted, restricted, and prohibited disclosed?
**Model behavior policy enforcement:** Is the enforcement protocol for the model behavior policy disclosed?
**Interoperability of usage and model behavior policies:** Is the way that the usage policy and the model behavior policy interoperate disclosed?
**User interaction with AI system:** For distribution channels with user-facing interfaces, are users notified (i) that they are interacting with an AI system, (ii) of the specific foundation model they are interacting with, and (iii) that outputs are machine-generated?
**Usage disclaimers:** For distribution channels with user-facing interfaces, are users provided with disclaimers involving model use?
**User data protection policy:** Are the protocols for how the developer stores, accesses, and shares user data disclosed?
**Permitted and prohibited use of user data:** Are permitted and prohibited uses of user data disclosed?
**Usage data access protocol:** Is a protocol for granting external entities access to usage data disclosed?
**Versioning protocol:** Is there a disclosed version and versioning protocol for the model?
**Change log:** Is there a disclosed change log for the model?
**Deprecation policy:** Is there a disclosed deprecation policy for the developer?
**Feedback mechanism:** Is a feedback mechanism disclosed?
**Feedback summary:** Is a report or summary disclosed regarding the feedback the developer received or, alternatively, the way the developer responded to that feedback?
**Government inquiries:** Is a summary of government inquiries related to the model received by the developer disclosed?
**Monitoring mechanism:** For each distribution channel, is a monitoring mechanism for tracking model use disclosed?
**Downstream applications:** Across all forms of downstream use, is the number of applications dependent on the foundation model disclosed?
**Affected market sectors:** Across all downstream applications, is the fraction of applications corresponding to each market sector disclosed?
**Affected individuals:** Across all forms of downstream use, is the number of individuals affected by the foundation model disclosed?
**Usage reports:** Is a usage report that gives usage statistics describing the impact of the model on users disclosed?
**Geographic statistics:** Across all forms of downstream use, are statistics of model usage across geographies disclosed?
**Redress mechanism:** Is any mechanism to provide redress to users for harm disclosed?
**Centralized documentation for downstream use:** Is documentation for downstream use consolidated in a centralized artifact?
**Documentation for responsible downstream use:** Is documentation for responsible downstream use disclosed?

Figure 5: **Downstream Indicators.** The 35 downstream indicators that span Distribution, Usage Policy, Model Behavior Policy, User Interface, User Data Protection, Model Updates, Feedback, Impact, and Downstream Documentation.

**Note on assessment of indicators.** We assess each indicator based on the information that developers share publicly about their flagship foundation models and their practices that apply to these models. Our standard for awarding points on an indicator is that the developer must explicitly state the information related to the indicator in its documentation, or it must explicitly point to the information in its documentation. This implies that if developers are overly vague or do not link to a key external document for a particular indicator, then they do not receive a point. In addition, if developers explicitly state in their documentation that they *do not* carry out a specific action related to an indicator (e.g. they do not have a mechanism for users to provide feedback) then we generally award a point for that indicator. We note that this is exceedingly rare and that, in general, developers share little information about the actions they do or do not take in the process of developing and deploying foundation models.

**Note on inclusion of deployment.** Our view of transparency is expansive, considering the broader supply chain beyond just foundation models. As we discuss in §2.2: background-transparency, existing conceptualizations of transparency in AI often consider upstream resources (especially data) in addition to machine learning models. But these works and broader public discourse usually do not foreground the downstream use and impact of AI, even though this is the most direct way in which AI affects society. To this end, we include the entire downstream domain to bring greater attention to this vital topic. In particular, while we are assessing foundation model developers, we assess them in relation to distribution channels and other factors that determine their downstream impact. At present, we recognize that characterizing the downstream impact of foundation models may be challenging, especially for open model developers. By releasing a model openly, developers may cede the ability to easily monitor the model's downstream use and impact.
Open model developers can be fully transparent by being clear about the ways in which they do or do not monitor downstream use and impact. In addition, we believe in the potential for greater coordination between foundation model developers and distribution channels to increase transparency; for example, distribution channels could supply information about how the model is used to the foundation model developer. Partnerships with distribution channels that promote transparency provide a promising means for all foundation model developers to share more information about the impact their models have on society. ## 5 Foundation Model Developers | Name | Flagship | Release | Input | Output | Status | Headquarters | WH1 | WH2 | WH3 | FMF | |--------------|--------------------|-----------------------------|--------------|----------|----------|--------------------|-------|-------|-------|-------| | AI21 Labs | Jurassic-2 | API | Text | Text | Startup | Tel Aviv, Israel | ✗ | ✗ | ✗ | ✗ | | Amazon | Titan Text | API | Text | Text | Big Tech | Seattle, USA | ✗ | ✓ | ✗ | ✗ | | Anthropic | Claude 2 | API | Text | Text | Startup | San Francisco, USA | ✓ | ✓ | ✗ | ✓ | | Cohere | Command | API | Text | Text | Startup | Toronto, Canada | ✓ | ✗ | ✓ | ✗ | | Google | PaLM 2 | API | Text | Text | Big Tech | Mountain View, USA | ✓ | ✓ | ✗ | ✓ | | Hugging Face | BLOOMZ | Open weights, open data | Text | Text | Startup | Brooklyn, USA | ✓ | ✗ | ✗ | ✗ | | Inflection | Inflection-1 | No access (API forthcoming) | Text | Text | Startup | Palo Alto, USA | ✗ | ✓ | ✗ | ✗ | | Meta | Llama 2 | Open weights | Text | Text | Big Tech | Menlo Park, USA | ✗ | ✓ | ✗ | ✗ | | OpenAI | GPT-4 | API | Text, Images | Text | Startup | San Francisco, USA | ✓ | ✓ | ✗ | ✓ | | Stability AI | Stable Diffusion 2 | Open weights, open data | Text | Images | Startup | London, UK | ✓ | ✗ | ✓ | ✗ | Table 1: **Selected foundation model developers.** Information on the 10 selected foundation model developers: the developer name, their flagship model, the release strategy for the model (see Figure 6), the input and output modalities for the model, the developer's status as either Big Tech or Startup, and the developer's headquarters. We note which of the developers were involved in the White House's initiative for public evaluation of AI systems announced in May 2023 (WH1), voluntary commitments for the management of risks posed by AI announced in July 2023 (WH2), and commitments by additional organizations on the same matters of risks by AI announced in September 2023 (WH3). Additionally, we note which of the developers are founding members of the Frontier Model Forum, announced in July 2023. Transparency initiatives in AI (e.g. datasheets and model cards) often introduce frameworks that support machine learning developers in achieving greater transparency in their own work. In contrast, we proactively assess foundation model developers for their transparency using the 100 indicators we specify. By conducting the assessment ourselves, we sidestep concerns of uneven uptake that have arisen with past transparency initiatives (e.g. Gebru et al., 2018; Mitchell et al., 2018) and provide greater consistency in the scoring of each indicator across developers. Most importantly, scoring many developers allows for the comparison of their scores, which provides a rich context for how to improve transparency in the foundation model ecosystem. 
Efforts like Ecosystem Graphs (Bommasani et al., 2023b) and the UK Competition and Markets Authority (CMA) report on the foundation model market42 track the organizations that develop foundation models. At the time of writing in September 2023, the CMA report documented 160 foundation models (based on data drawn from Ecosystem Graphs) built by more than 50 organizations.43 However, as the CMA report states, a small number of developers control the majority of the market at present (Vipra & Korinek, 2023). Due to this intense level of market concentration, we decided to assess 10 major foundation model developers.

42https://www.gov.uk/government/publications/ai-foundation-models-initial-report
43https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1185508/Full_report_.pdf#page=22

## 5.1 Selecting Developers

We considered a variety of selection criteria in choosing the 10 developers to assess, arriving at the following three principles:

1. **Impact.** We selected developers that have built the most influential foundation models.
2. **Diversity.** We selected developers that, when considered collectively, represent many axes of variation in the foundation model ecosystem. For example, developers that release models along different points on the release gradient (e.g. open vs. closed, Solaiman, 2023), build models with different modalities (e.g. text-to-text vs. text-to-image), and occupy different positions in the market (e.g. startups vs. Big Tech).
3. **Companies.** We selected developers that are established companies as enduring targets for longitudinal improvement. This, to some extent, parallels current regulatory initiatives that explicitly focus on companies as the target of policy for foundation models.44

On this basis, we chose 10 companies that are all influential foundation model developers: AI21 Labs, Amazon, Anthropic, Cohere, Google, Hugging Face, Inflection, Meta, OpenAI, and Stability AI. These 10 provide significant diversity in terms of release strategy (e.g. Anthropic, Meta, and Hugging Face all release flagship models with different levels of openness; see Figure 6), modality (e.g. Cohere, OpenAI, and Stability AI all provide different input-output modalities), and market position (e.g. Google, Inflection, and OpenAI occupy different market positions). Additionally, in parallel to our research, the White House made three announcements involving companies that develop foundation models: a red-teaming exercise announced in May 2023,45 a set of voluntary commitments announced in July 2023,46 and another set of voluntary commitments announced in September 2023.47 Separately, three of the companies we assess jointly announced the formation of the Frontier Model Forum in July 2023.48 When taken together, these announcements name 16 companies: Adobe, Amazon, Anthropic, Cohere, Google, Hugging Face, IBM, Inflection, Meta, Microsoft, NVIDIA, OpenAI, Palantir, Salesforce, Scale AI, and Stability AI. We note that 9 of the 10 companies we selected are within this set of 16 (all but AI21 Labs).

**The gradient of release strategies.** The strategies for releasing foundation models differ widely (see Figure 6). Some developers release the weights of the model as well as the data used, which allows independent researchers and developers to use the models on their own and investigate the data. For example, EleutherAI released the weights of its Neo-X model (Black et al., 2022) along with The Pile, which Neo-X was trained on (Gao et al., 2021a).
Meta released the weights to its OPT model (Zhang et al., 2022b), but did not release the associated training data. For our purposes, we will often refer to any release where model weights are made broadly available as "open," which includes the flagship models of Hugging Face, Meta, and Stability AI. In contrast, other developers do not release the weights of their flagship model, retaining greater control over who has access to the model and the extent to which it may be used externally (if at all). The majority of the developers we assess provide a programmatic API to query their flagship model as a black box. Other developers in the ecosystem do not provide a programmatic API but do allow for some forms of black box access, as Midjourney does for its text-to-image models that it makes available via a Discord server.49 Still other developers provide no external access to their models as is the case for Google's Chinchilla model (Hoffmann et al., 2022a) and Meta's Make-A-Video model (Singer et al., 2022). For our purposes, we will often refer to any release where model weights are not made externally available as "closed," which includes the flagship models of AI21 Labs, Amazon, Anthropic, Cohere, Google, Inflection, and OpenAI. 44See https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf. 45https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/ 46https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ 47https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ 48https://blogs.microsoft.com/on-the-issues/2023/07/26/anthropic-google-microsoft-openai-launch-frontier-model-forum/ 49See https://docs.midjourney.com/docs/midjourney-discord. ![23_image_0.png](23_image_0.png) Figure 6: **The gradient of release of foundation models.** Foundation models can be fully closed (e.g. only used internally within the company, without public release), released gradually as their risks and benefits are better understood (e.g. via a staged rollout involving initial testers), released via a web or app interface (e.g. users need to visit a website or join a Discord server to access the model's outputs), released via a programmatic API (e.g. users can query the model and receive outputs programmatically), released via downloadable model weights (e.g. users can access and adapt the model), or released with the training data alongside downloadable model weights (i.e. ostensibly maximal openness). For the ten models we consider, one falls under the fully closed category at the time of writing (Inflection-1), though Inflection plans to make it available via an API; six are available via an API (GPT-4, Claude 2, PaLM 2, Jurassic-2, Command, Titan Text); one is downloadable (Llama 2), and two are released with their model weights as well as underlying training data downloadable (Stable Diffusion 2 and BLOOMZ). For simplicity, we at times binarize these distinctions into models with downloadable weights ("open") and models without downloadable weights ("closed"). 
Image taken with permission from Solaiman (2023).

The overall approach to release is informed by a developer's business strategy and perspective on its model's utility and risks. In particular, many organizations may adopt different release approaches for different foundation models. For example, when releasing GPT-4, OpenAI did not disclose many details about the modeling architecture and training data, citing competition and safety as the two main reasons.50 On the other hand, when releasing the Whisper speech recognition model (Radford et al., 2022), OpenAI disclosed many details and released the model weights openly. For other developers, the release decision may directly relate to their purpose for building a foundation model in the first place. For example, the BigScience collaboration led by Hugging Face that produced the BLOOM model (Le Scao et al., 2022) was explicitly designed to democratize access to multilingual large language models with capabilities in traditionally underrepresented languages. As a result, the initiative released model weights and data.

50Interview with OpenAI's chief scientist and co-founder: https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview

## 5.2 Selecting Flagship Models

Almost all major foundation model developers release multiple foundation models over time and, even at the time of writing, many have multiple salient foundation models (often across different modalities). For example, OpenAI has developed GPT, GPT-2, GPT-3, GPT-4, InstructGPT, WebGPT, Codex, CLIP, DALL-E, DALL-E 2, DALL-E 3, Jukebox, and Whisper among other models. Given that developers are not guaranteed to provide uniform transparency for each foundation model (e.g. OpenAI releases the weights openly for some of these models but not others), we decide to assess developers in relation to their *flagship* foundation model. By flagship foundation model, we mean the foundation model that is most salient and/or capable from the developer based on our judgment, which is directly informed by the company's public description of the model. We provide basic information about each of the developers and their flagship model in Table 1.51

**Note on Hugging Face.** In the case of Hugging Face, we are assessing the company in general as an enduring target over time. However, for this version of the index, we assess BLOOMZ (Muennighoff et al., 2022), which was collaboratively developed through the year-long BigScience initiative that was initiated and led by Hugging Face from May 2021 to May 2022. As a result, we refer to Hugging Face throughout the prose, but include the BigScience logo in visuals (which may also be distributed absent the context we provide in this paper) to highlight this nuance.

51For OpenAI, we evaluate GPT-4, which was released in March 2023, not GPT-4V, a model OpenAI released in September 2023 after we completed our analysis. With respect to input and output modality, OpenAI (2023) states that GPT-4 is "a large multimodal model capable of processing image and text inputs and producing text outputs."

## 6 Scoring

By selecting the indicators and companies, we abstractly specify the form of the index. By defining each indicator and designating the flagship foundation model to be assessed for each developer, we move to a more precise operationalization. To make the index fully precise, we describe how we sourced the information that was used to assess each developer on each indicator, resulting in the final scores.

**Search protocol.**
To source information that we use to score developers, we exclusively use publicly available information provided by developers themselves. We recognize that this information may be incomplete (e.g. clients or governments may have greater access to information from the developer), but given that our focus includes public accountability, and we are academic researchers, we choose to consider only publicly available information. Given that public information may change, we use information available as of September 15, 2023. For each developer, we initially compile a basic set of resources disclosed by the developer about their model development practices and their flagship foundation model. To gather information for a specific indicator, we perform a structured search to identify all relevant information that is public. The exact details of how we execute this search are provided in Appendix B.

**Initial scoring.** Having identified the information basis for scoring an indicator, 2 researchers on the team independently scored the developer on the indicator. This entails specifying a *score* (i.e. 0 or 1), the *source* used in arriving at that score (e.g. one or more webpages), and a textual *justification* for how the evidence from sources is weighed against the criteria for the indicator in determining the score. Given these initial score assignments, the researchers reviewed their scores to identify any errors.

Binary scoring provided several advantages. First, it simplified the scoring process by allowing researchers to focus on the sharp distinction between 0 and 1 point for each indicator. Second, a narrow criterion for making a binary scoring decision for each indicator reduced subjectivity in the initial scoring. Third, by reducing the level of complexity of each indicator we were able to reduce overlap between indicators, ensuring that we assess distinct dimensions of transparency. At the same time, binary scoring limits the level of complexity of each indicator, potentially leaving out valuable information that can be captured by more complex scoring schemes (cf. Bommasani et al., 2023a).52

In some instances, the researchers responsible for the same (indicator, developer) pair arrived at different scores, indicating disagreement. Given the systematic information gathering process, the iterative refinement of indicator definitions, and the binary scoring scheme, we found that disagreements were fairly infrequent. Disagreements generally related to relevant information being erroneously neglected by one researcher or differences in the fine-grained interpretation of how to score an indicator. Overall, across all 100 × 10 (indicator, developer) pairs, the agreement rate was 85.2% (Cohen's κ = 0.67, indicating substantial agreement; Landis & Koch, 1977). To resolve disagreements, the researchers discussed and jointly came to a resolution. Following the disagreement resolution, the scores were finalized and sources and justifications were merged to yield an initial set of 1000 (score, source, justification) triples for all 1000 (indicator, developer) pairs.

52See §9.2: limitations for further discussion.
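To make the agreement analysis concrete, the following minimal sketch computes the raw agreement rate and Cohen's κ from two researchers' parallel lists of binary scores. The function and variable names are illustrative assumptions rather than our actual tooling, but the formula is the standard one underlying the statistics reported above.

```python
from collections import Counter

def agreement_and_kappa(scores_a, scores_b):
    """Raw agreement rate and Cohen's kappa for two parallel lists of
    binary (0/1) scores, one entry per (indicator, developer) pair."""
    assert len(scores_a) == len(scores_b)
    n = len(scores_a)
    # Observed agreement: fraction of pairs where the researchers agree.
    p_o = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # Chance agreement, from each researcher's marginal rate of 0s and 1s.
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in (0, 1))
    return p_o, (p_o - p_e) / (1 - p_e)

# Toy example: 10 hypothetical (indicator, developer) pairs.
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(agreement_and_kappa(a, b))  # (0.8, 0.5833...)
```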
**Company feedback.** Given that these scores constitute a direct assessment of specific companies, we engaged these companies to provide them with the opportunity to review, respond, and potentially rebut or contest the scores we assigned. Concretely, we contacted leaders at each of the companies with (i) a description of the Foundation Model Transparency Index, (ii) the 100 indicators and their definitions, and (iii) their 100 (score, source, justification) triples. We encouraged each company to review our scores, provide any general feedback and, especially, to directly contest any scores the company viewed as incorrect (by referencing public information available as of September 15, 2023). Companies were provided two business weeks to respond with clear assurance that all correspondence would be strictly private. Of the 10 companies, all 10 responded. Of these, 8 companies (Amazon, Anthropic, Cohere, Hugging Face, Inflection, Meta, OpenAI, Stability AI) provided rebuttals for specific scores, which we extensively reviewed. In most cases, we did not change scores, though some rebuttals led to improvements in the scores (an average increase of 1.25 points across the 8 developers that contested on average 8.75 scores). Rather than improving developers' scores, these rebuttals often revealed misunderstandings regarding definitions of indicators or our justifications for scores, leading to more robust definitions and justifications. Beyond the scores, several companies scheduled calls with us or provided broader forms of feedback, which provided insight regarding how they conceptualize best practices for transparency and responsible AI. Following company feedback, we again verified all scores, sources, and justifications that constitute the finalized materials used throughout this paper and made publicly available. We also notified the companies prior to the release of this paper, responding to their feedback. In addition, we encouraged companies to provide a public written response regarding their perspective on this initiative, their specific scores, and their broader approach as an organization to transparency and responsible AI as it relates to foundation models. Moving forward, we hope these organizations implement more transparent practices and we provide specific recommendations to that effect in §8.1: recommendations-developers.

![27_image_0.png](27_image_0.png)

Figure 7: **Overall Scores.** The overall Foundation Model Transparency Index score and ranking across all 100 indicators.

## 7 Analysis

The finalized results of the Foundation Model Transparency Index are the scores for each of the 100 indicators across all 10 companies. These results are accessible at [URL to data anonymized for submission] to facilitate subsequent analyses. Here, we specifically consider overarching trends in the results, along with more specific trends based on the structure of the index. Namely, we analyze along the rows/indicators (e.g. domains), the columns/companies (e.g. release strategy), as well as data-driven trends (e.g. correlations).

## 7.1 Overarching Results

We begin our analysis by first establishing the broad trends when viewing the index as a whole. We consider results aggregated at the level of a single overall score per company (Figure 7) as well as the scores broken down into the 3 domains (upstream, model, downstream; Figure 8). We supplement our findings on these overarching trends with a more granular consideration of the *major dimensions of transparency* in the index in Figure 9.53
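Since every indicator is scored 0 or 1, the aggregate views in this section reduce to sums and means over the score matrix. The sketch below makes this bookkeeping explicit; the nested-mapping layout and all names are hypothetical stand-ins rather than the schema of our released scoring data.

```python
# A minimal sketch of the aggregation behind Figures 7 and 8, assuming
# scores are stored as {developer: {indicator: 0 or 1}} and each indicator
# belongs to exactly one domain (upstream, model, or downstream).
def overall_score(scores, developer):
    """Total points across all indicators (the Figure 7 view)."""
    return sum(scores[developer].values())

def domain_score(scores, developer, domain, indicator_domains):
    """Points and percentage within one domain (the Figure 8 view)."""
    indicators = [i for i, d in indicator_domains.items() if d == domain]
    points = sum(scores[developer][i] for i in indicators)
    return points, 100 * points / len(indicators)

# Toy example with 3 indicators and 2 developers (values hypothetical).
indicator_domains = {"data size": "upstream", "model size": "model",
                     "model license": "downstream"}
scores = {"dev_a": {"data size": 1, "model size": 0, "model license": 1},
          "dev_b": {"data size": 0, "model size": 1, "model license": 1}}
print(overall_score(scores, "dev_a"))                                # 2
print(domain_score(scores, "dev_a", "upstream", indicator_domains))  # (1, 100.0)
```

The same computation over subdomains, rather than domains, yields the per-dimension percentages shown in Figure 9.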
**All developers have significant room for improvement. But most transparency indicators are very obtainable, having been implemented by at least one developer.** Based on Figure 7, the highest-scoring developer scores points for 54 of the 100 indicators, and the average score across all developers is 37. This establishes a pervasive lack of transparency across major foundation model developers. With that said, for 82 of the 100 indicators, there exists some developer that scores points, and of these there are 71 where multiple developers score points. Consequently, there is clear reason to believe that across all developers, the necessary change to become more transparent is feasible. That companies' competitors are more transparent in certain issue areas suggests that such transparency, even if not fully costless, is unlikely to cause serious damage to their business. Companies can emulate the higher level of transparency their competitors exhibit on certain indicators, providing a precedent and a starting point for improving transparency in the foundation model ecosystem.

53The major dimensions of transparency we highlight are 13 large subdomains among the 23 subdomains.

![28_image_0.png](28_image_0.png)

Figure 8: **Scores by Domain.** The aggregate score of each developer broken down by the three domains: upstream, model, and downstream.

| Major Dimension | Llama 2 | BLOOMZ | GPT-4 | Stable Diffusion 2 | PaLM 2 | Claude 2 | Command | Jurassic-2 | Inflection-1 | Titan Text | Average |
|-----------------|---------|--------|-------|--------------------|--------|----------|---------|------------|--------------|------------|---------|
| Data            | 40%     | 60%    | 20%   | 40%                | 20%    | 0%       | 20%     | 0%         | 0%           | 0%         | 20%     |
| Data Labor      | 29%     | 86%    | 14%   | 14%                | 0%     | 29%      | 0%      | 0%         | 0%           | 0%         | 17%     |
| Compute         | 57%     | 14%    | 14%   | 57%                | 14%    | 0%       | 14%     | 0%         | 0%           | 0%         | 17%     |
| Methods         | 75%     | 100%   | 50%   | 100%               | 75%    | 75%      | 0%      | 0%         | 0%           | 0%         | 48%     |
| Model Basics    | 100%    | 100%   | 50%   | 83%                | 67%    | 67%      | 50%     | 33%        | 50%          | 33%        | 63%     |
| Model Access    | 100%    | 100%   | 67%   | 100%               | 33%    | 33%      | 67%     | 33%        | 0%           | 33%        | 57%     |
| Capabilities    | 60%     | 80%    | 100%  | 40%                | 80%    | 80%      | 60%     | 60%        | 40%          | 20%        | 62%     |
| Risks           | 57%     | 0%     | 57%   | 14%                | 29%    | 29%      | 29%     | 29%        | 0%           | 0%         | 24%     |
| Mitigations     | 60%     | 0%     | 60%   | 0%                 | 40%    | 40%      | 20%     | 0%         | 20%          | 20%        | 26%     |
| Distribution    | 71%     | 71%    | 57%   | 71%                | 71%    | 57%      | 57%     | 43%        | 43%          | 43%        | 59%     |
| Usage Policy    | 40%     | 20%    | 80%   | 40%                | 60%    | 60%      | 40%     | 20%        | 60%          | 20%        | 44%     |
| Feedback        | 33%     | 33%    | 33%   | 33%                | 33%    | 33%      | 33%     | 33%        | 33%          | 0%         | 30%     |
| Impact          | 14%     | 14%    | 14%   | 14%                | 14%    | 0%       | 14%     | 14%        | 14%          | 0%         | 11%     |
| Average         | 57%     | 52%    | 47%   | 47%                | 41%    | 39%      | 31%     | 20%        | 20%          | 13%        |         |

Figure 9: **Scores by Major Dimensions of Transparency.** The fraction of achieved indicators in each of the 13 major dimensions of transparency. Major dimensions of transparency are large subdomains within the 23 subdomains.

**Developers show significant variance in overall transparency scores.** While all developers have significant room for improvement, the current transparency of developers is strikingly uneven.
Namely, the range in overall scores is 42 between the highest-scoring Meta at 54 and the lowest-scoring Amazon at 12. Even excluding Amazon's score as especially low, we still see an effective range of 30 points between Meta and the next lowest Inflection. Overall, with respect to the mean of 37, the standard deviation is 14.2, which is quite substantial. The four top-scoring developers (Meta, Hugging Face, OpenAI, Stability AI) all cluster well above the mean, the next three are very close to the mean (Google, Anthropic, Cohere), and the three lowest-scoring developers (AI21 Labs, Inflection, Amazon) are well below the mean. In many cases, the lowest-scoring developers have clear opportunities for improvement through straightforward changes related to some of the least challenging indicators. Examples include improved documentation (e.g. change logs, versioning protocols, model cards, centralized documentation for downstream use), clearer language in corporate policies (e.g. usage policies, model behavior policies, deprecation policies), and disclosing additional information that is unlikely to have implications for business competitiveness or safety (e.g. basic details on methods, dependencies, feedback).

**The Upstream domain sees the worst transparency scores.** To gain additional insight beyond developers' basic overall scores, we consider scores broken down by the 3 top-level domains in Figure 8. On this basis, we see clear evidence that developers are, on average, least transparent with respect to the upstream resources required to build their models, such as data, labor, and compute. Concretely, the mean score on upstream indicators is 7.2 out of 32 (22.5%), compared to 14.1 out of 33 (42.7%) for model indicators and 15.7 out of 35 (44.9%) for downstream indicators. To confirm this is not overly biased by outliers, we note that the medians show the same trend: the median score on upstream indicators is 3.5, compared to 12.5 for model indicators and 16 for downstream indicators. We specifically highlight that the four lowest-scoring developers overall (Figure 7) also fare the worst on the upstream domain (Figure 8), with Cohere receiving 3 points and all of AI21 Labs, Inflection, and Amazon receiving 0 points. In contrast, for both the model and downstream domains, all 10 companies receive at least 6 points.

**Domain-level discrepancies explain some of the differences between companies with similar overall scores.** We partition the 10 companies into three groups based on whether their overall score (Figure 7) is well-above (Meta, Hugging Face, OpenAI, Stability AI), around (Google, Anthropic, Cohere), or well-below (AI21 Labs, Inflection, Amazon) the mean. Within these groups, while companies receive somewhat similar scores, we find that their domain-level scores clarify discrepancies between them. Among the highest scorers, OpenAI is considerably less transparent on upstream matters (7) as compared to the other three high-scoring companies (Meta with 14, Hugging Face with 21, Stability AI with 16). In particular, OpenAI and Stability AI receive nearly the same overall score, with OpenAI making up the deficit to Stability AI on upstream transparency mostly through better model-level transparency (and, specifically, many of the indicators on evaluations and risks). For the middle category of Google, Anthropic, and Cohere, the discrepancies are less stark, but we do see that Cohere is at 3 in the upstream category compared to Google with 6 and Anthropic with 5.
Given the broadly similar scores for these three developers across all of the domains, we revisit the extent to which they are correlated at a finer-grained level in §7.6: correlations. Among the three lowest-scoring developers, we see that AI21 Labs and Inflection are differentiated by the model domain, with both scoring a zero on the upstream domain and similarly on the downstream domain.

**Data, Data Labor, and Compute are pervasive blind spots across developers.** While the overall and domain-level results provide a basic lay of the land, we find that the major dimensions of transparency provide the Goldilocks region for clear and incisive analysis as shown in Figure 9. In particular, these dimensions of transparency are subdomains with several indicators (so the subdomain scores are more reliable) that are tied to broadly-understandable concepts like labor and capabilities. We home in on the following major dimensions of transparency: Data, Data Labor, Compute, Methods, Model Basics, Model Access, Capabilities, Risks, Model Mitigations, Distribution, Usage Policy, Model Behavior Policy, Model Updates, User Data Protection, Feedback, and Impact. Analysis at this level reveals actionable insight into what types of transparency or opacity lead to many of our top findings. For example, we find that the poor upstream transparency stems from low performance on the Data, Data Labor, and Compute subdomains; developers average just 20%, 17%, and 17% for Data, Data Labor, and Compute respectively. In terms of smaller subdomains, developers on average score 25% of the available points on Data Mitigations.

**Model Basics, Capabilities, Limitations, and User Data Protection are the most transparent subdomains at present, but still short of the ideal.** Developers score the highest proportion of points on indicators related to the following subdomains: User Interface (85%), Downstream Documentation (70%), User Data Protection (67%), Model Basics (63%), and Model Updates (63%). This reflects some baseline level of transparency across developers with respect to notifying users they are interacting with AI systems, providing centralized documentation for downstream use, publishing data protection policies, and disclosing the modalities associated with their model. Still, there are gaps even for these subdomains. No developer provides a protocol for accessing usage data. Most developers (8 of 10) do not disclose the size of their model. And only half of the developers provide any form of deprecation policy.

## 7.2 Upstream Results

Upstream indicators assess transparency regarding the ingredients that go into the foundation model including data, labor, compute, methods, and code. These ingredients are important predictors of the capabilities and risks of the foundation model they produce, as well as externalities of the model development process (e.g. impacts on human laborers and the environment). As we show in Figure 8, the upstream indicators are the most sparsely awarded (22.5% coverage on average). Here, we analyze at the level of subdomains and indicators based on Figure 10.

**The Upstream domain shows the greatest spread.** Building on the fact that developers score worst on the upstream domain, with several developers scoring exactly or nearly 0 points, we find the range in scores is the greatest for this domain.
Namely, only one developer (Hugging Face) scores more than half of the indicators (21 of the available 32 indicators; 65.6%), yielding a range of 21 when compared to the lowest-scoring developers: AI21 Labs, Inflection, and Amazon (0 of the available 32 indicators; 0%). We emphasize this striking disparity given that many of the fundamental societal issues in connection with foundation models relate to upstream resources: bias, copyright, and privacy in relation to data, worker protections and fair compensation in relation to labor, environmental impact and energy expenditure in relation to compute, reproducibility in relation to methods, and cybersecurity in relation to code.

**The Methods subdomain is the most transparent in aggregate, while Data Labor is the least transparent.** Among the upstream subdomains, only Methods shows some degree of coverage, with six of the developers giving some description of training stages, training objectives, and dependencies. On the other end of the spectrum, Data Labor sees little to no coverage with the exception of BLOOMZ, which involved volunteers providing data. Developers generally share no information about the use of human labor in their data pipeline, the employer, wages, and geographical distribution of these workers, instructions they give to data annotators, or any labor protections they implement. This industry norm of being nontransparent with respect to data labor is in tension with the fact that such information is critical to reinforcement learning with human feedback (Ziegler et al., 2019; Ouyang et al., 2022; Casper et al., 2023). That data labor is one of the two least transparent subdomains is consistent with prior work documenting widespread ethical challenges with data labor (Gray & Suri, 2019a; Crawford, 2021; Hao & Seetharaman, 2023).

**The Compute subdomain shows major discrepancies among developers.** Meta and Stability AI document some aspects of compute, energy, and hardware usage, as well as the carbon footprint of model development, whereas many developers do not. Given the significant compute expenditure required to build many foundation models, the practice of documenting energy use and environmental impact is well-established along with associated tooling to measure these quantities (Lacoste et al., 2019; Strubell et al., 2019; Schwartz et al., 2020; Luccioni & Hernández-García, 2023). In spite of this, most developers disclose minimal, or sometimes no, details related to compute usage, particularly with respect to energy usage, carbon footprint, and environmental impact. The broader environmental impact of building foundation models is also essential to consider; although there has been significant public attention concerning energy expenditure, other matters such as water usage may be of similar consequence environmentally (Luccioni & Hernández-García, 2023). Luccioni et al. (2022) provides an excellent example, documenting the embodied emissions, dynamic consumption, and idle consumption associated with BLOOM (Le Scao et al., 2022). Given that BLOOMZ is derived from BLOOM, we note the potential for *documentation transience*, where prior documentation is not updated to reflect substantial changes and, therefore, does not correctly persist to the new asset. In particular, the additional broader environmental impact of deriving BLOOMZ from BLOOM is not disclosed.

**Widespread lack of upstream transparency on data creators, data license, copyrighted data and associated mitigations, and broader environmental impact.**
Of the 32 indicators, no company scores points on six of them. These are the indicators for data creators, data license status, copyrighted data, copyright mitigations, compute usage, and broader environmental impact. For data creators, we believe this in part reflects the nascent status of methods for providing web-scale understanding of who created the data (e.g. text, images) scraped from the Internet. However, we recognize that Hugging Face in particular has taken important steps to characterize aspects of who created the data, along with associated metadata for copyright, license, and personal information, for the ROOTS corpus used to build BLOOM (though not the additional data involved in building BLOOMZ). With respect to the copyrighted data and data license status indicators, we emphasize that information related to these indicators is at issue in ongoing litigation. In particular, Stability AI has explicitly argued that training foundation models on copyrighted data is protected by fair use doctrine in the U.S.54 Closed developers may also view information related to their data as a key competitive advantage, or be disincentivized to share this information due to a perception of legal risk. Additionally, we are surprised that no developer directly discloses compute usage in FLOPs to sufficient precision, though several disclose information that could be used to compute an estimate or upper bound.

54See https://www.judiciary.senate.gov/imo/media/doc/2023-07-12_pm_-_testimony_-_brooks.pdf and https://www.documentcloud.org/documents/23589439-openai-motion-to-dismiss as well as Lemley & Casey (2020).

## Foundation Model Transparency Index Indicator-Level Scores For Upstream, 2023

Source: 2023 Foundation Model Transparency Index

[Table: indicator-level scores (0 or 1) for each of the 32 upstream indicators, grouped into the Data, Data Labor, Data Access, Compute, Methods, and Data Mitigations subdomains. Indicators: Data size; Data sources; Data creators; Data source selection; Data curation; Data augmentation; Harmful data filtration; Copyrighted data; Data license; Personal information in data; Use of human labor; Employment of data laborers; Geographic distribution of data laborers; Wages; Instructions for creating data; Labor protections; Third party partners; Queryable external data access; Direct external data access; Compute usage; Development duration; Compute hardware; Hardware owner; Energy usage; Carbon emissions; Broader environmental impact; Model stages; Model objectives; Core frameworks; Additional dependencies; Mitigations for privacy; Mitigations for copyright. Upstream subtotals: Llama 2 (14 points; 44%), BLOOMZ (21; 66%), GPT-4 (7; 22%), Stable Diffusion 2 (16; 50%), PaLM 2 (6; 19%), Claude 2 (5; 16%), Command (3; 9%), Jurassic-2 (0; 0%), Inflection-1 (0; 0%), Titan Text (0; 0%).]

Figure 10: **Upstream Scores by Indicator.** The scores for each of the 32 upstream indicators.
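To make the arithmetic behind these subtotals concrete, the following is a minimal sketch in Python of how each model's subtotal and percentage follow from the binary indicator scores; the score matrix excerpt below is hypothetical, not the actual index data.

```python
# Minimal sketch: per-model subtotals as column sums over binary indicator
# scores. The 0/1 values below are hypothetical placeholders.
import pandas as pd

scores = pd.DataFrame(
    {
        "Llama 2": [1, 1, 0, 1],
        "BLOOMZ": [1, 1, 1, 1],
        "GPT-4": [0, 1, 0, 0],
    },
    index=["Data size", "Data sources", "Data creators", "Data curation"],
)

subtotals = scores.sum()           # points scored per model
percentages = 100 * scores.mean()  # share of indicators satisfied

for model in scores.columns:
    print(f"{model}: {subtotals[model]} points ({percentages[model]:.0f}%)")
```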
## Foundation Model Transparency Index Indicator-Level Scores For Model, 2023

Source: 2023 Foundation Model Transparency Index

[Table: indicator-level scores (0 or 1) for each of the 33 model indicators, grouped into the Model Basics, Model Access, Capabilities, Limitations, Risks, Model Mitigations, Trustworthiness, and Inference subdomains. Indicators: Input modality; Output modality; Model components; Model size; Model architecture; Centralized model documentation; External model access protocol; Blackbox external model access; Full external model access; Capabilities description; Capabilities demonstration; Evaluation of capabilities; External reproducibility of capabilities evaluation; Third party capabilities evaluation; Limitations description; Limitations demonstration; Third party evaluation of limitations; Risks description; Risks demonstration; Unintentional harm evaluation; External reproducibility of unintentional harm evaluation; Intentional harm evaluation; External reproducibility of intentional harm evaluation; Third party risks evaluation; Mitigations description; Mitigations demonstration; Mitigations evaluation; External reproducibility of mitigations evaluation; Third party mitigations evaluation; Trustworthiness evaluation; External reproducibility of trustworthiness evaluation; Inference duration evaluation; Inference compute evaluation. Model subtotals: Llama 2 (22 points; 67%), BLOOMZ (16; 48%), GPT-4 (20; 61%), Stable Diffusion 2 (13; 39%), PaLM 2 (15; 45%), Claude 2 (15; 45%), Command (16; 48%), Jurassic-2 (11; 33%), Inflection-1 (7; 21%), Titan Text (6; 18%).]

Figure 11: **Model Scores by Indicator.** The scores for each of the 33 model indicators.

No upstream indicators are satisfied by all developers. At the indicator level, there is no upstream indicator for which every developer receives points. Of course, this is guaranteed by the presence of (multiple) developers that score 0 points on the entire upstream domain. Even putting these 3 developers aside, there is no indicator that is satisfied by all of the remaining 7. The indicators where the greatest number of developers score points are data curation (all but Anthropic) and model stages (all but Cohere), which both suggest that developers are generally willing to describe the basics of the overall pipeline of model development. With that said, we take the absence of any upstream indicator where all companies score points, and the fact that 5 or more developers score no points on 30 of the 32 upstream indicators, as strong evidence that upstream transparency is the domain with the broadest room for improvement.

## 7.3 Model Results

Model indicators assess transparency regarding the function of foundation models, spanning model access, capabilities, risks, limitations, mitigations, trustworthiness, and inference efficiency, as well as basic information about the model. The indicators in this domain comprehensively characterize the foundation model as a standalone artifact: what tasks the model can and cannot perform, what the model's basic structure is, who has access to the model, and more. Here, we analyze developers at the level of subdomains and indicators based on Figure 11.

Model subdomains are some of the highest-scoring across the index. Overall, the mean score on model indicators is 14.1 out of 33 (42.7%) and the median developer receives 12.5 points (37.9%). With this in mind, several of the highest-scoring subdomains belong to the model domain.
Developers score best on Model Basics (63%), Capabilities (62%), Limitations (60%), and Model Access (57%) within the domain. These scores arise partially because of very generous indicators within these subdomains (e.g. input modality, output modality, description of capabilities, description of limitations).

Transparency on capabilities does not translate to transparency on limitations, risks, or mitigations. Of the 33 model indicators, 20 are in the Capabilities, Limitations, Risks, and Model Mitigations subdomains. Within these subdomains, Capabilities is clearly the most transparent subdomain: nearly all developers provide descriptions (9 of 10) and demonstrations (8 of 10) of multiple model capabilities, with the majority reporting evaluations (6 of 10), half reporting reproducible evaluations (5 of 10), and few providing third party evaluations (3 of 10). In general, we see a decline in the number of developers who score the point from the most rudimentary (i.e. description) to the most substantive (i.e. third party evaluations) across these four subdomains. With respect to Capabilities, while we assume most or all developers conduct internal evaluations, they may not score points on evaluation indicators because (i) they do not disclose sufficient details about internal evaluations for these evaluations to be externally reproducible, (ii) they do not assess multiple capabilities, or (iii) they do not report the results of the evaluations, perhaps due to a concern that a model may underperform competitors' models. With this in mind, developers consistently score worse on Limitations, Risks, and Model Mitigations indicators than on Capabilities. For example, only Cohere receives points for demonstrating limitations, while 8 developers score points for demonstrating capabilities. These asymmetries, where companies are more willing to share information about capabilities than limitations, risks, and mitigations, are concerning, as they may lead to an inflated sense of trust in companies' foundation models. In fact, these asymmetries are especially pronounced for Risks (average score of 24%) and Model Mitigations (average score of 26%), given that these scores are considerably worse than the average scores for Capabilities (62%) and Limitations (60%).

Developers score poorly on Trustworthiness, largely in line with Risks and Model Mitigations. With respect to the Trustworthiness subdomain, only OpenAI, Cohere, and AI21 Labs provide information about rigorous evaluations of their flagship model related to robustness, reliability, hallucinations, calibration, or explainability. Of those developers, only Cohere and AI21 Labs provide sufficient detail for their evaluations to be deemed externally reproducible, due to their use of the HELM benchmark (Liang et al., 2023), compared to OpenAI's unclear description of their evaluations of model calibration. Given the previous asymmetry we establish around greater disclosure of capabilities as compared to limitations, risks, and mitigations, the absence of trustworthiness evaluations exacerbates these concerns. Put together, the lack of sufficient public information on limitations, risks, mitigations, and trustworthiness makes it more likely that consumers will not have well-calibrated expectations.
In turn, this could lead to undesirable overreliance on foundation models because not enough is done to calibrate consumers on the appropriate levels of trust (Parasuraman & Manzey, 2010).55 With this said, we do acknowledge that developers may take other routes towards improving trustworthiness, including methods like reinforcement learning from human feedback (RLHF; Ziegler et al., 2019; Ouyang et al., 2022) and constitutional AI (Bai et al., 2022), though transparency is lacking on these approaches (Casper et al., 2023).

Model Access reveals slight differences beyond just release strategy. In aggregate, companies score 17 of the 30 points (57%) in the Model Access subdomain across the 3 indicators and 10 companies. On the external model access protocol indicator, Meta, Hugging Face, OpenAI, and Stability AI are the only developers to score points. We find this particularly interesting given that Meta, Hugging Face, and Stability AI release their models openly in terms of both model weights and data, whereas OpenAI is considerably more closed, providing only API access. However, in particular, OpenAI has a clear researcher access program with a form to request access, criteria it discloses for granting access, and a period of 4–6 weeks disclosed as the expected turnaround for a decision. This demonstrates that developers across the release spectrum (Solaiman, 2023) may achieve transparency on some indicators while taking substantively different approaches. In practice, we find that several closed developers have access forms that allow external entities greater access to the model, but these forms often lack key components of transparency that clarify the specific steps the developer will take to assess and grant applications (e.g. in comparison to OpenAI's process). With that said, the indicator for full external model access is exclusively achieved by the three open developers, though every developer other than Inflection provides black box access to its model.

Model Mitigations are a weak point for most developers. Developers on average score just 26% of the total available points on the five Model Mitigations indicators. Hugging Face, Stability AI, and AI21 Labs score 0 points, while Cohere, Inflection, and Amazon score only the point on mitigations description, which is the most lightweight of these indicators. In general, we highlight an important mismatch between the many risks that are enumerated and the relatively few mitigations that are described, implemented, and/or evaluated. Even when mitigations are described, in scoring we find the mapping between stated risks and stated mitigations is often vague or nonexistent. Moving forward, we hope developers will directly aim mitigations at addressing specific risks, with appropriate evaluations to confirm the efficacy of mitigations in achieving the stated goals.

55See https://www.theverge.com/2023/5/30/23741996/openai-chatgpt-false-information-misinformation-responsibility as an example.
## Foundation Model Transparency Index Indicator-Level Scores For Downstream, 2023

Source: 2023 Foundation Model Transparency Index

[Table: indicator-level scores (0 or 1) for each of the 35 downstream indicators, grouped into the Distribution, Usage Policy, Model Behavior Policy, User Interface, User Data Protection, Model Updates, Feedback, Impact, and Downstream Documentation subdomains. Indicators: Release decision-making protocol; Release process; Distribution channels; Products and services; Machine-generated content; Model License; Terms of service; Permitted and prohibited users; Permitted, restricted, and prohibited uses; Usage policy enforcement; Justification for enforcement action; Usage policy violation appeals mechanism; Permitted, restricted, and prohibited model behaviors; Model behavior policy enforcement; Interoperability of usage and model behavior policies; User interaction with AI system; Usage disclaimers; User data protection policy; Permitted and prohibited use of user data; Usage data access protocol; Versioning protocol; Change log; Deprecation policy; Feedback mechanism; Feedback summary; Government inquiries; Monitoring mechanism; Downstream applications; Affected market sectors; Affected individuals; Usage reports; Geographic statistics; Redress mechanism; Centralized documentation for downstream use; Documentation for responsible downstream use. Downstream subtotals: Llama 2 (18 points; 51%), BLOOMZ (16; 46%), GPT-4 (21; 60%), Stable Diffusion 2 (18; 51%), PaLM 2 (19; 54%), Claude 2 (16; 46%), Command (15; 43%), Jurassic-2 (14; 40%), Inflection-1 (14; 40%), Titan Text (6; 17%).]
Figure 12: **Downstream Scores by Indicator.** The scores for each of the 35 downstream indicators.

Most model indicators are scored by some developer, though most developers score poorly on indicators related to evaluating intentional harms, mitigations, and inference efficiency. Of the 33 indicators in the model domain, at least one developer scores a point on 29 of them. Further, multiple developers score points on 27 model indicators. The 4 indicators for which no developer scores points are (i) intentional harm evaluation, (ii) external reproducibility of mitigations evaluations, (iii) third party mitigations evaluations, and (iv) inference compute evaluation. The 2 additional indicators for which only one developer scores points are limitations demonstration (Cohere) and external reproducibility of intentional harm evaluation (OpenAI). While many companies describe risks (including the risk of intentional harms), they do not share sufficient information related to evaluations of intentional harm or the reproducibility of evaluations of mitigations. In the case of inference, we believe standards akin to MLPerf (Reddi et al., 2020) are needed to rigorously benchmark the inference of foundation models (Narayanan et al., 2023), given the key role of efficient inference and low latency in the usability of models (Lee et al., 2023b). We see that BLOOMZ in particular provides a potential benchmark for language models by tracking the time spent on a fixed task (generating 100 tokens given a 7-token prefix) on fixed hardware (an NVIDIA A100-80GB GPU), though compute is not measured.56
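As a concrete illustration, a fixed-task latency benchmark of the kind described above can be sketched as follows. This is a minimal, assumption-laden sketch using the Hugging Face transformers API: the checkpoint is a small stand-in rather than the 176B-parameter BLOOMZ, the prompt is arbitrary rather than the exact 7-token prefix, and the hardware is not pinned. It is not the protocol used in the BLOOMZ documentation.

```python
# A minimal sketch of a fixed-task inference benchmark: time the generation
# of exactly 100 new tokens from a short prefix. A faithful benchmark would
# also fix the hardware (e.g. an NVIDIA A100-80GB GPU) and the exact prefix.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-560m"  # small stand-in for BLOOMZ-176B
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")

with torch.no_grad():
    start = time.perf_counter()
    model.generate(**inputs, min_new_tokens=100, max_new_tokens=100,
                   do_sample=False)
    elapsed = time.perf_counter() - start

print(f"100 tokens in {elapsed:.2f}s ({100 / elapsed:.1f} tokens/s)")
```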
## 7.4 Downstream Results

Downstream indicators assess transparency regarding the use of foundation models, spanning subdomains related to distribution, policies constraining the use and behavior of the model, user interfaces, user data protection, model updates, feedback, impact, and documentation. Indicators in these subdomains characterize transparency related to how the foundation model is deployed and its downstream effects on the ecosystem and society. Our analysis is based on publicly available information about how the foundation model is distributed, how it can and cannot be used, how users can give feedback and seek redress, broader societal impacts, and how the model affects actors downstream of the developer in the supply chain. Here, we conduct a fine-grained analysis at the level of subdomains and indicators based on Figure 12.

Downstream scores show less spread across developers. Total scores on downstream indicators are tightly clustered around the mean of 15.7 out of 35, which corresponds to 44.9% of the 35 downstream indicators. With the exception of Amazon (6 out of 35; 17.1%), the other nine developers all score between 14 and 21 points. The highest-scoring developer on the downstream domain is OpenAI at 21 points and the lowest-scoring (barring Amazon) are AI21 Labs and Inflection at 14 points. In §7.6: correlations, we clarify the extent to which these smaller scoring margins in the downstream domain reflect high agreement in indicator-level scores across companies.

Impact is the least transparent subdomain in the entire index. To clarify the downstream impact of a given foundation model, the Impact subdomain includes indicators on monitoring mechanisms, affected market sectors, affected individuals, usage reports, geographic statistics, and redress mechanisms. Strikingly, the mean score across all developers on this subdomain is just 11%, with 8 developers scoring points on just 1 of the possible 7 indicators and the remaining 2 scoring none of the indicators. No developer scores points on affected market sectors, affected individuals, usage reports, geographic statistics, or redress mechanism. This means that there is essentially no information about how many people, sectors, and regions foundation models are impacting. OpenAI, Google, Cohere, AI21 Labs, and Inflection are the only developers to disclose a potential monitoring mechanism for tracking model use. And only open foundation model developers share limited information about downstream applications, whereas the rest provide no information.57

Developers are significantly more transparent about Distribution than other major dimensions of (downstream) transparency. Across the four major dimensions of transparency in the downstream domain (Distribution, Usage Policy, Feedback, Impact), mean scores are on the higher end only for Distribution at 59%, with the other three all below 50%. Every developer shares information about distribution channels, or the pathways by which the model is made available to entities beyond the model developer organization. Every developer provides terms of service that cover the distribution of its foundation model.58 Most developers share information about their process for releasing their flagship model (8 of 10) as well as the developer's products and services that use the foundation model (6 of 10). Half of developers share information about the license under which the model is distributed. In spite of broad transparency on the Distribution subdomain, developers are highly opaque around release decisions.

56See https://huggingface.co/blog/habana-gaudi-2-bloom.

57We score the downstream applications indicator quite generously: all of the open developers score points because they disclose which Hugging Face "Spaces" are also using the model via Hugging Face's platform. However, we emphasize that this is still a poor proxy for the number of applications dependent on the foundation model.
Within the Distribution subdomain, developers score poorly on the release decision-making protocol indicator; Hugging Face is the only developer that shares information about its decision-making protocol for release. Although there has been an extensive focus on release strategies in the literature on foundation models (Solaiman et al., 2019; Sastry, 2021; Shevlane, 2022; Liang et al., 2022a; Liang & Reich, 2022; Solaiman, 2023; Widder et al., 2023; Seger et al., 2023), developers across the release spectrum share very little information about how and why they release their flagship models. In particular, we highlight that many of the companies we assess have written about the broader topic of release, but not in a way that speaks precisely to their specific decisions for their flagship models.59

Usage Policy and Model Behavior Policy subdomain scores are uneven across developers. Scores on the Usage Policy subdomain are uneven, with all developers scoring points on the indicator for permitted, restricted, and prohibited uses, but only two (OpenAI and Inflection) scoring points on the usage policy violation appeals mechanism indicator. This reflects the lack of industry standards regarding precisely how foundation model developers should restrict the use of their models. We found that different developers provide this information in different types of documentation, ranging from standalone Acceptable Use Policies to Content Policies to terms in the model license, and that many developers share some of this information in several different documents. While developers did provide some transparency on usage policies related to a user's obligations, they did not provide a similar level of transparency on the restrictions they place on their model's behavior. Scores on indicators in the Model Behavior Policy subdomain were relatively weaker, with a mean across the 3 indicators of 23% compared to 44% for the 5 usage policy indicators. OpenAI, Anthropic, and Inflection are the only developers who provide information about permitted, restricted, and prohibited model behaviors, while only Inflection and Stability AI provide information about how they might enforce such restrictions. OpenAI and Anthropic are the only developers who make clear how their models are expected to behave in the event that a user violates the usage policy. In part, we believe the norms and standards around model behavior are rather immature, meaning that developers do not provide a clear conceptualization of if and how they impose a model behavior policy. For example, the role of modeling decisions (e.g. the use of reinforcement learning from human feedback or constitutional AI) in behaviors (e.g. model refusals of specific requests) is not made clear.

Identical scores on the User Data Protection subdomain across all developers. For the User Data Protection subdomain, scores are uniform across developers, with every developer scoring points on user data protection policy, as well as permitted and prohibited uses of user data. However, no developer scores points on usage data access protocol. This may reflect that few, if any, companies actually share usage data externally, meaning companies may perceive that the need to develop protocols for sharing such data is limited. However, developers' data protection policies include many provisions that would allow them to share such usage data, and specific protocols for how and when they do so are not transparent.

Developers lack transparency on the Feedback subdomain.
Developers score relatively poorly on Feedback indicators, scoring only 30% of the available points. While every developer but Amazon has a public mechanism for collecting feedback on its model, none provide information such as a feedback summary or details on government inquiries, such as requests for user data (which social media companies disclose). This is likely a function of how nascent the foundation model ecosystem is: companies have only been collecting feedback for a few years, and it took social media companies several years to respond to public calls for transparency around the feedback they receive from users and governments. Moving forward, more robust transparency reporting practices that provide the public with more information regarding these forms of feedback will likely be necessary.60

Developers are fairly transparent on the Model Updates subdomain. 5 of 10 developers provide clear information about their versioning protocol, change log, and deprecation policy. Inflection and Amazon, however, score zero points on these indicators, which may be due in part to the fact that Inflection-1 and Titan Text are at an earlier stage of release than some other flagship models. While there is wide variation in the type, specificity, and quality of documentation provided related to Model Updates, as with other indicators, we assess these metrics generously and allocate points on the basis of transparency alone.

Developers score well on the User Interface subdomain, though this may change due to deployments on mobile phones. Developers scored highly on User Interface indicators (average score of 85%), with more than half of developers scoring points on both indicators, which assess if users are told they are interacting with an AI system and if users are provided appropriate disclaimers. Developers frequently disclose to users that they are interacting with a specific foundation model by including the name of the foundation model somewhere in the user interface, while they give usage disclaimers upon sign-up for the user interface via a link to the terms of service or usage policy. Unlike all other indicators, we generally had to make use of step 7 of our search process and directly interact with developers' models via a user interface to assess these indicators. However, Amazon did not have a publicly available user interface in advance of September 15, 2023, meaning that it could not receive these points. We initially assessed transparency of deployments on mobile devices in some cases, though we ultimately did not consider these deployments for scoring. With that said, we highlight that the same standard for transparency of user interfaces does not currently appear to be met by mobile deployments from OpenAI and Inflection. Overall, we believe in the importance of providing transparency through user interfaces, as it can help foundation models avoid the formation of the "dark patterns" we have seen develop with other digital technologies (Mathur et al., 2019).

58As with several downstream indicators, we assessed the terms of service of the primary distribution channel. For example, this meant that we assessed Microsoft Azure's terms of service for Meta.

59We note that following September 15, 2023, Anthropic released information about its approach to responsible scaling: https://www.anthropic.com/index/anthropics-responsible-scaling-policy.

60For example, consider the EU's DSA Transparency Database, implemented on the basis of the Digital Services Act to provide transparency on content moderation decisions: https://transparency.dsa.ec.europa.eu/.
For example, we highlight that Anthropic does not make clear that a user is interacting with an AI system, except for the textual description "Message Claude."

## 7.5 Results For Open And Closed Developers

Foundation models are released by different developers using a variety of release strategies (Liang et al., 2022a; Solaiman, 2023). In particular, we deliberately chose several developers that are more *open* (e.g. release the weights of their model, perhaps along with the data used to build the model) and others that are more *closed* (e.g. only provide access via an API). The topic of release and the (reductive) dichotomy of open vs. closed has emerged as a primary topic of technical and policy research on foundation models (Solaiman et al., 2019; Sastry, 2021; Shevlane, 2022; Liang et al., 2022a; Liang & Reich, 2022; Solaiman, 2023; Widder et al., 2023; Seger et al., 2023). To clarify how transparency differs between the open developers we assess (i.e. Meta, Hugging Face, Stability AI) and the closed developers (i.e. OpenAI, Google, Anthropic, Cohere, AI21 Labs, Inflection, Amazon), we emphasize the distinction in Figure 13.

Figure 13: **Open vs. Closed by Subdomains.** The mean score for the 3 open developers (Meta, Hugging Face, Stability AI) and the 7 closed developers (OpenAI, Anthropic, Google, Cohere, AI21 Labs, Inflection, Amazon) across each of the 23 subdomains. Note: the number of indicators per subdomain varies widely.

Open developers score higher in aggregate and on every domain. We establish a clear trend that the open developers score higher overall, with all three being among the four highest-scoring developers (see Figure 7). In particular, every open developer is at least nearly as transparent, in terms of aggregate score, as the highest-scoring closed developer (OpenAI): Meta and Hugging Face are at least 5 points higher, and Stability AI is within a point of OpenAI. Further, this trend is established more strongly through domain-level analysis, where open developers score higher on average than closed developers across all domains (i.e. upstream, model, downstream). The mean score of open developers on upstream indicators is 53% compared to 9% for closed developers; 51% for open developers on model indicators compared to 39% for closed developers; and 49% on downstream indicators compared to 43% for closed developers. To ensure these trends are robust to outliers, we highlight that the trends hold even when considering medians instead of means (upstream: 50% to 9%, model: 48% to 45%, downstream: 51% to 43%). We emphasize that our findings confirm common hypotheses that open developers will in general be more transparent with respect to the upstream resources required to build their models (which also aligns with some making the data they use publicly available), but our findings dispute hypotheses that open developers will be less transparent on downstream matters due to their weaker control over downstream use. While we believe that closed developers providing APIs are better positioned to collect information on the downstream use of their models, in practice these developers do not disclose this information to provide greater public transparency.

Open developers score higher on most subdomains.
Open developers score higher than closed developers on 15 of the 23 subdomains, which account for 68 of the 100 indicators. The mean score of closed developers is higher than that of open developers on indicators in the subdomains of Capabilities, Risks, Model Mitigations, Trustworthiness, Usage Policy, Model Behavior Policy, and Downstream Documentation. We highlight that these seven subdomains point to two broader themes: closed developers in some cases may be higher-resourced or face stronger incentives to proactively address certain matters around responsible AI (e.g. Risks, Model Mitigations, Trustworthiness). In addition, closed developers often have a closer coupling between the foundation model we assessed and downstream services, meaning that certain user-related aspects of transparency are potentially of higher priority (namely the Usage Policy). For example, many closed developers provide products built on top of their flagship foundation model, providing users of their platforms and clients who license their proprietary foundation models with an opportunity to push for transparency.

The mean score of open developers is higher than that of closed developers on every upstream subdomain, with major score differentials especially for the Data, Compute, and Methods subdomains. Looking at the difference in average scores by release strategy, we see large disparities in favor of open models in each domain, with the largest gaps for Data Access (67% to 0%), Methods (92% to 29%), and Data Mitigations (67% to 7%). We also observe similarly large differentials (40%+) for Model Basics, Model Access, and Model Updates. While less stark, we highlight open developers' superior average transparency on the Distribution subdomain as especially surprising given that closed developers maintain greater control over distribution by virtue of being closed.

Indicator-level analysis further demonstrates the disparity between open and closed developers. At the indicator level, the median open developer outscores the median closed developer on 28 indicators (18 upstream, 7 model, 3 downstream), while the median closed developer scores higher on just 6 indicators (0 upstream, 2 model, 4 downstream). The median open developer and the median closed developer both score points on 22 indicators and neither scores points on 44 indicators.

The open developers we assessed provide greater transparency than their closed counterparts. Overall, each level of analysis points in the same direction: open developers are reliably more transparent. In particular, we highlight that the release of assets (e.g. model weights, data, code) may be significantly underweighted in terms of its broader transparency effects. Our findings dispel the belief that closed developers are more likely to be transparent about downstream matters due to their greater control over deployment, while emphasizing that both open and closed developers continue to be extremely opaque in terms of the downstream impact of their foundation models. With this in mind, we caution that our assessment is necessarily based on the practices of some of the highest-resourced open and closed developers, so these trends should not be taken as sufficient evidence to claim that all open developers are more transparent than closed developers. And we believe there is ample opportunity for closed developers to address these gaps in transparency, as we discuss in §8.1: recommendations-developers.
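The group comparisons above reduce to simple aggregation over per-developer domain scores. As a minimal sketch, the upstream comparison can be reproduced from the per-model upstream subtotals reported in Figure 10 (using the rounded percentages as reported there); the median-based robustness check follows the same pattern.

```python
# Minimal sketch: reproduce the open vs. closed upstream comparison from the
# per-model upstream subtotals in Figure 10 (rounded percentages).
import pandas as pd

df = pd.DataFrame(
    {
        "developer": ["Meta", "Hugging Face", "Stability AI", "OpenAI",
                      "Google", "Anthropic", "Cohere", "AI21 Labs",
                      "Inflection", "Amazon"],
        "release": ["open"] * 3 + ["closed"] * 7,
        "upstream_pct": [44, 66, 50, 22, 19, 16, 9, 0, 0, 0],
    }
)

grouped = df.groupby("release")["upstream_pct"]
print(grouped.mean().round())  # closed ~9, open ~53, as reported in the text
print(grouped.median())        # closed 9, open 50: the robustness check
```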
## 7.6 Correlations Between Companies

Measuring correlations. The 100 × 10 matrix of scores introduces data-driven structure. In particular, it clarifies relationships that arise in practice between different regions of the index. Here, we consider the *correlations* in scores, focusing on company-to-company similarity for simplicity. For example, if two companies receive similar aggregate scores, is this because they satisfy all the same indicators, or do they score points on two very different sets of indicators? In Figure 14, we plot the correlation between every pair of companies. To measure correlation, we report the simple matching coefficient (SMC), or the agreement rate. The SMC is the fraction of the 100 indicators for which both companies receive the same score (i.e. both receive a zero or both receive a 1). As a result, an SMC of 0 indicates that there is no indicator for which both companies receive the same score, and an SMC of 1 indicates that for all indicators both companies receive the same score. For this reason, the correlation matrix is symmetric and guaranteed to be 1 on the diagonal.

Figure 14: Correlations between Companies. The correlation between the scores for pairs of companies across all indicators. Correlation is measured using the simple matching coefficient (i.e. agreement rate), which is the fraction of all indicators for which both companies receive the same score (i.e. both receive the point or both do not receive the point).

To systematically analyze the results, we consider three patterns in the correlation matrix: (i) individual cells with very small or very large values (i.e. highly similar or highly dissimilar company pairs), (ii) individual rows with consistently small, consistently large, or highly varied values (i.e. unusual companies), and (iii) structural patterns across the correlation matrix.
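Before turning to the results, a minimal sketch of the SMC computation may help fix ideas; the score vectors below are illustrative stand-ins, not the actual index data.

```python
# Minimal sketch of the simple matching coefficient (SMC): the fraction of
# indicators on which two binary score vectors agree. Values are stand-ins.
import numpy as np

def smc(a: np.ndarray, b: np.ndarray) -> float:
    """Agreement rate between two 0/1 score vectors."""
    return float(np.mean(a == b))

scores = {
    "Company A": np.array([1, 0, 1, 1, 0]),
    "Company B": np.array([1, 0, 0, 1, 0]),
}
print(smc(scores["Company A"], scores["Company B"]))  # 0.8

# The full matrix is symmetric with ones on the diagonal.
names = list(scores)
matrix = np.array([[smc(scores[x], scores[y]) for y in names] for x in names])
print(matrix)
```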
Strongly correlated company practices. In terms of the most correlated company pairs, we identify a few regions of the correlation matrix. First, we identify the three most correlated pairs: (Cohere, AI21 Labs; SMC = 0.87), (AI21 Labs, Amazon; SMC = 0.85), and (Inflection, Amazon; SMC = 0.85). These pairs are all among the four lowest-scoring companies, though we note the inclusion of Cohere is interesting given that Cohere's overall score (34) is closer to the average (37) and the middle-scoring group of companies (i.e. including Google and Anthropic). In addition to these pairs, if we consider the other highly-correlated pairs (SMC ≥ 0.8), we identify: (Hugging Face, Stability AI; SMC = 0.80), (OpenAI, Anthropic; SMC = 0.80), and (Google, Anthropic; SMC = 0.80). In particular, we observe that the company correlations identify clear structure: Hugging Face and Stability AI are the only two developers to release both data and models openly, and the trio of OpenAI, Google, and Anthropic are the three members of the Frontier Model Forum that we assess.

Weakly correlated company practices. In contrast, we see that the least correlated pairs (SMC < 0.6) are pairs involving Meta and the three lowest-scoring developers, as well as pairs involving Hugging Face and five of the seven closed developers (OpenAI, Cohere, AI21 Labs, Inflection, Amazon). These are all pairings between an open and a closed developer. More broadly, we highlight that Meta is the sole developer whose SMC with every other developer is below 0.80; its most similar counterpart is Google at 0.78 (see below for further analysis). This means Meta is rather unique in terms of the indicators where it scores points; it is the sole developer that is not strongly correlated with any other company, even including the two other open developers. Nevertheless, the least correlated pair of companies still agrees on over half the indicators (SMC = 0.54), which is not surprising given that all the companies are opaque (e.g. if all the companies scored 0 on every indicator, they would necessarily be perfectly correlated with SMC = 1).

Upstream correlations. In Figure 15, we plot the correlation between every pair of companies when considering only indicators from the upstream domain. Since the four lowest-scoring companies overall also score zero (or near-zero in the case of Cohere) points on the upstream indicators, they are necessarily extremely correlated. For the same reason, the extent to which the remaining six companies are correlated with the three lowest-scoring companies is precisely proportional to their own opacity on the upstream domain. Looking at the three companies that score in the middle overall (Google, Anthropic, Cohere), we see their indicator-level transparency is reasonably correlated. We also see a similar trend where OpenAI, Google, and Anthropic are correlated, though in this case OpenAI and Cohere are even more correlated with an SMC of 0.81. Interestingly, while the three open developers score much higher overall than any of the seven closed developers for the upstream domain, the correlations between them are somewhat different than in the other domains: there is a weaker correlation between Hugging Face and Stability AI, and Meta's correlation with OpenAI and Stability AI is stronger than its correlation with Hugging Face. Despite the fact that Meta and Hugging Face are the two highest-scoring companies on upstream, they are not especially correlated (SMC = 0.53) in that domain. These discrepancies coincide with the indicators where Hugging Face scores points and the other two open developers (Meta, Stability AI) do not, namely those relating to data sources and data labor. Given the large spread in scores across developers in the upstream domain, we see the related effect that the correlations can be quite variable, with some at or near 1 and others well below 0.5 (minimum upstream SMC = 0.34).

| | Meta | Hugging Face | OpenAI | Stability AI | Google | Anthropic | Cohere | AI21 Labs | Inflection | Amazon |
|---|---|---|---|---|---|---|---|---|---|---|
| Meta | 1 | 0.53 | 0.72 | 0.75 | 0.69 | 0.66 | 0.66 | 0.56 | 0.56 | 0.56 |
| Hugging Face | 0.53 | 1 | 0.44 | 0.66 | 0.47 | 0.5 | 0.38 | 0.34 | 0.34 | 0.34 |
| OpenAI | 0.72 | 0.44 | 1 | 0.66 | 0.78 | 0.75 | 0.81 | 0.78 | 0.78 | 0.78 |
| Stability AI | 0.75 | 0.66 | 0.66 | 1 | 0.69 | 0.53 | 0.59 | 0.5 | 0.5 | 0.5 |
| Google | 0.69 | 0.47 | 0.78 | 0.69 | 1 | 0.78 | 0.84 | 0.81 | 0.81 | 0.81 |
| Anthropic | 0.66 | 0.5 | 0.75 | 0.53 | 0.78 | 1 | 0.75 | 0.84 | 0.84 | 0.84 |
| Cohere | 0.66 | 0.38 | 0.81 | 0.59 | 0.84 | 0.75 | 1 | 0.91 | 0.91 | 0.91 |
| AI21 Labs | 0.56 | 0.34 | 0.78 | 0.5 | 0.81 | 0.84 | 0.91 | 1 | 1 | 1 |
| Inflection | 0.56 | 0.34 | 0.78 | 0.5 | 0.81 | 0.84 | 0.91 | 1 | 1 | 1 |
| Amazon | 0.56 | 0.34 | 0.78 | 0.5 | 0.81 | 0.84 | 0.91 | 1 | 1 | 1 |

Figure 15: Correlations between Companies (Upstream Indicators). The correlation between the scores for pairs of companies across all indicators when only considering upstream indicators. Correlation is measured using the simple matching coefficient (i.e. agreement rate), which is the fraction of all indicators for which both companies receive the same score (i.e. both receive the point or both do not receive the point).
Model correlations. In Figure 16, we plot the correlation between every pair of companies when considering only indicators from the model domain. In contrast to the upstream correlations, we see a much more varied picture. First, much like the overall correlations, we see strong correlations for (Cohere, AI21 Labs; SMC = 0.85) and (Inflection, Amazon; SMC = 0.85) but not necessarily for the other pairs between these four companies. Among the three Frontier Model Forum companies, we see a very strong correlation of 0.88 between Google and Anthropic, a fairly high correlation of 0.79 between OpenAI and Anthropic, but a considerably lower correlation for the third pair of OpenAI and Google at 0.67. These trends, where Anthropic is highly correlated with both, but OpenAI and Google are not necessarily correlated, mirror what we observe for the overall correlations. Similar to what we observed for the overall correlations, Hugging Face and Stability AI are quite correlated as well, with a correlation of 0.85, and Meta is not particularly correlated with any company (the highest is Hugging Face at 0.76).

| | Meta | Hugging Face | OpenAI | Stability AI | Google | Anthropic | Cohere | AI21 Labs | Inflection | Amazon |
|---|---|---|---|---|---|---|---|---|---|---|
| Meta | 1 | 0.76 | 0.64 | 0.67 | 0.73 | 0.67 | 0.45 | 0.36 | 0.48 | 0.45 |
| Hugging Face | 0.76 | 1 | 0.58 | 0.85 | 0.73 | 0.73 | 0.58 | 0.55 | 0.67 | 0.64 |
| OpenAI | 0.64 | 0.58 | 1 | 0.61 | 0.67 | 0.79 | 0.76 | 0.67 | 0.61 | 0.58 |
| Stability AI | 0.67 | 0.85 | 0.61 | 1 | 0.7 | 0.76 | 0.73 | 0.7 | 0.7 | 0.73 |
| Google | 0.73 | 0.73 | 0.67 | 0.7 | 1 | 0.88 | 0.61 | 0.58 | 0.76 | 0.73 |
| Anthropic | 0.67 | 0.73 | 0.79 | 0.76 | 0.88 | 1 | 0.73 | 0.7 | 0.76 | 0.73 |
| Cohere | 0.45 | 0.58 | 0.76 | 0.73 | 0.61 | 0.73 | 1 | 0.85 | 0.67 | 0.7 |
| AI21 Labs | 0.36 | 0.55 | 0.67 | 0.7 | 0.58 | 0.7 | 0.85 | 1 | 0.64 | 0.79 |
| Inflection | 0.48 | 0.67 | 0.61 | 0.7 | 0.76 | 0.76 | 0.67 | 0.64 | 1 | 0.85 |
| Amazon | 0.45 | 0.64 | 0.58 | 0.73 | 0.73 | 0.73 | 0.7 | 0.79 | 0.85 | 1 |

Figure 16: Correlations between Companies (Model Indicators). The correlation between the scores for pairs of companies across all indicators when only considering model indicators. Correlation is measured using the simple matching coefficient (i.e. agreement rate), which is the fraction of all indicators for which both companies receive the same score (i.e. both receive the point or both do not receive the point).

Downstream correlations. In Figure 17, we plot the correlation between every pair of companies when considering only indicators from the downstream domain. The downstream correlations surface considerably different trends from the overall correlations or those for the other two domains. In particular, we first highlight that Meta is strongly correlated with Google in their scores on downstream indicators.
Given that several of the downstream indicators relate to broader corporate practices, the similarities between these companies may contribute to this result, though neither company is strongly correlated with Amazon, the other Big Tech company we assess. Relatedly, we see fairly strong correlations between OpenAI and Anthropic, which again may relate to their fairly similar business practices mapping onto specific downstream indicators (e.g. indicators in the Model Behavior Policy subdomain). On the other hand, akin to the upstream domain, we see that Inflection is especially dissimilar from all of the open model developers (Meta, Hugging Face, Stability AI). And, unlike the other correlation matrices, OpenAI and Amazon are more dissimilar than usual. Overall, while we do not observe it as clearly in the other correlation analyses, here we see that all three pairs of open developers are highly correlated: (Meta, Hugging Face; SMC = 0.89), (Meta, Stability AI; SMC = 0.83), (Hugging Face, Stability AI; SMC = 0.89). This may reflect that all open developers have shared transparency challenges on specific indicators within the downstream domain (e.g. monitoring mechanism and model behavior policy enforcement), perhaps stemming from the weaker control they have over downstream use. Overall, we find the complex structure and heterogeneity in the correlations for the downstream domain especially intriguing, given that the aggregate scores for this domain are the most tightly clustered (see §7.4: downstream-results). That is to say, disaggregated indicator-level analysis is especially revealing for this domain compared to domain-level analysis.

| | Meta | Hugging Face | OpenAI | Stability AI | Google | Anthropic | Cohere | AI21 Labs | Inflection | Amazon |
|---|---|---|---|---|---|---|---|---|---|---|
| Meta | 1 | 0.89 | 0.8 | 0.83 | 0.91 | 0.77 | 0.86 | 0.83 | 0.6 | 0.66 |
| Hugging Face | 0.89 | 1 | 0.69 | 0.89 | 0.8 | 0.66 | 0.74 | 0.83 | 0.6 | 0.71 |
| OpenAI | 0.8 | 0.69 | 1 | 0.69 | 0.83 | 0.86 | 0.77 | 0.8 | 0.69 | 0.57 |
| Stability AI | 0.83 | 0.89 | 0.69 | 1 | 0.74 | 0.66 | 0.69 | 0.77 | 0.6 | 0.66 |
| Google | 0.91 | 0.8 | 0.83 | 0.74 | 1 | 0.74 | 0.89 | 0.86 | 0.69 | 0.63 |
| Anthropic | 0.77 | 0.66 | 0.86 | 0.66 | 0.74 | 1 | 0.74 | 0.77 | 0.66 | 0.71 |
| Cohere | 0.86 | 0.74 | 0.77 | 0.69 | 0.89 | 0.74 | 1 | 0.86 | 0.74 | 0.69 |
| AI21 Labs | 0.83 | 0.83 | 0.8 | 0.77 | 0.86 | 0.77 | 0.86 | 1 | 0.71 | 0.77 |
| Inflection | 0.6 | 0.6 | 0.69 | 0.6 | 0.69 | 0.66 | 0.74 | 0.71 | 1 | 0.71 |
| Amazon | 0.66 | 0.71 | 0.57 | 0.66 | 0.63 | 0.71 | 0.69 | 0.77 | 0.71 | 1 |

Figure 17: Correlations between Companies (Downstream Indicators). The correlation between the scores for pairs of companies across all indicators when considering only downstream indicators. Correlation is measured using the simple matching coefficient (i.e. agreement rate), which is the fraction of all indicators for which both companies receive the same score (i.e. both receive the point or both do not receive the point).

## 8 Recommendations

The design of our indicators, execution of our assessment, and analysis of our results provide a rich supply of ideas for how to improve transparency. We center our attention on foundation model developers and deployers, along with policymakers.
For each group of stakeholders, we provide concrete recommendations on the basis of this research. Additionally, we encourage researchers to scrutinize our overall approach in order to clarify how transparency for foundation models can be better understood in the future.

## 8.1 Recommendations For Foundation Model Developers

By directly scoring foundation model developers, we provide explicit feedback on where developers are and are not transparent. In itself, this provides immediate guidance for these 10 companies in the context of their current flagship foundation models. Moreover, the Foundation Model Transparency Index provides valuable insights for these companies to consider related to their other models and future releases; it also has bearing on how foundation model developers that we did not assess can promote transparency. We provide 10 specific recommendations for foundation model developers.

## 1. Increase Transparency For Existing Foundation Models.

- As our results show, the development and use of major foundation model developers' current flagship models are opaque. Developers should remedy this situation by releasing more information about the systems that are central to today's foundation model ecosystem. Increasing the transparency of future releases will not resolve this issue as there are hundreds of products, services, and other models that are built on top of today's flagship models.61

- Developers should begin by focusing on low-hanging fruit, such as clarifying ambiguous language in their documentation, centralizing existing information sources, and sharing information about models that poses minimal concerns related to market competitiveness or legal risk.62 Developers should also be clear about why they will not release certain information about their foundation models; developers should explicitly state the subdomains where they do not release information and explain why they do not do so.

## 2. Increase Transparency For Future Foundation Model Releases.

- Developers should substantially increase the transparency of future foundation model releases. Wherever possible, they should publicly disclose information related to the 100 indicators we outline as well as additional information they feel is important to share with the industry, the public, and governments. This might look like taking a transparency-first approach in which the developer prioritizes transparency throughout the model development process and includes transparency as an important performance metric for research teams.63

61 Developers that signed on to the White House's first round of voluntary commitments (including Amazon, Anthropic, Google, Inflection, Meta, and OpenAI) have pledged only to improve transparency "for all new significant model public releases within scope," where the scope is defined as "generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2)." Developers that signed on to the White House's second round of voluntary commitments (including Cohere and Stability AI) have pledged only to improve transparency for "generative models that are overall more powerful than the current most advanced model produced by the company making the commitment."
See https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf and https://www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf

62 For example, Anthropic released significantly more information about Claude 2 than its previous flagship model, Claude, including in the form of a model card.

63 One relevant analogy is to the development of open foundation models. Much as some developers begin the process of building a foundation model with the intention of making all model assets openly available, then subsequently decide if the risks of making a model asset openly available outweigh the potential benefits, developers could begin the development process with the assumption of maximum transparency and remove only some items along the way (Klyman, 2023).

- Profit-oriented developers commonly argue that certain forms of transparency can endanger their competitive advantage. Nevertheless, developers have a basic responsibility to weigh this concern against the risks posed by their technology to society and the benefits of increasing societal understanding of this technology via transparency. These risks should be determined not only by the developer but also by the assessment of third-party experts. Voluntary access for *independent* third-party audits (i.e. auditors not selected by the developer itself) can achieve a greater degree of transparency while safeguarding competition concerns through non-disclosure agreements. We would also argue that audits are not always a good substitute for public transparency, and developers' arguments around competitive advantage should be carefully assessed for each indicator of transparency. These arguments are a common refrain used to avoid meaningful community discussion about widespread practices that do not in actuality endanger competitive advantages.

## 3. Follow Industry Best Practices With Respect To Transparency.

- Our findings suggest that every developer could significantly improve transparency by drawing on different approaches to transparency from across the industry. At least one developer scores points on 82 of our 100 indicators: where developers are struggling to increase transparency in a specific issue area, they should look to developers that have already done so.

- While the foundation model ecosystem is nascent, some developers have outlined best practices for responsible development that relate to transparency. For example, in their "Joint Recommendations for Language Model Development," OpenAI, Cohere, and AI21 Labs state that developers should "publish usage guidelines and terms of use ... document known weaknesses and vulnerabilities ... [and] model and use-case-specific safety best practices." (See Appendix C for additional examples of calls from developers for transparency.)

## 4. Work With Deployers To Increase Transparency.

- In cases where a developer is not the sole deployer of a foundation model, the developer should partner with deployers to increase transparency. For example, developers should attempt to require that deployers disclose usage statistics and provide usage disclaimers. Developers might do so through legal agreements that they sign with deployers that grant deployers the right to offer the foundation model. If a developer has little leverage over larger deployers, it should consider partnering with similarly situated developers to increase their collective bargaining power.
Without such efforts, it may be difficult for a developer to assess the downstream impact of its foundation models.

## 5. Work With Downstream Developers To Increase Transparency.

- Foundation model developers should make it easy for downstream developers to be transparent in their release of fine-tuned models. In addition to increasing transparency for their own models, foundation model developers should release documentation to help downstream developers be more transparent and actively encourage them to do so.

## 6. Work With Regulators To Increase Transparency.

- While we believe that the public is entitled to information about each of the indicators of transparency that we examine, we recognize that it is unlikely that every foundation model developer will publicly release all of this information. In some cases, foundation model developers may argue the risks of disclosing such information are too great to justify public release. In many such cases, developers should still share this information with regulators such that governments have sufficient information to adequately scrutinize developers in the public interest.

## 7. Use Transparency To Improve Trust, Safety And Reliability.

- Sharing internal practices, documentation, and details about risks can lead to short-term criticism and negative media coverage, but in the long term it can foster greater community trust than is possible with a more opaque approach. Investigative journalists will eventually expose practices that lead to systemic harms, and these harms are often exacerbated the longer they remain hidden, as illustrated by the Facebook Files (Hagey & Horwitz, 2021). Foundation models are technologies that could cause widespread harm, and the evidence suggests that safety and reliability will require dedicated and strong forms of transparency from foundation model developers.

## 8. Dedicate Resources To Continue Improving Transparency Over Time.

- As technologies and risks rapidly evolve, the varieties of and baselines for meaningful transparency will also change. Well-resourced developers should dedicate personnel to adapting their documentation and releases to take account of this shifting landscape, rather than adhering to static benchmarks. Low-resourced developers should seek out funding in order to similarly improve transparency.

## 9. Work To Improve Transparency In The Foundation Model Ecosystem.

- There are many areas where transparency is sorely needed, ranging from the downstream impact of foundation model releases to the use of human labor in producing the data used to build foundation models. One cross-cutting issue is the fact that developers do not exist in a vacuum: the foundation models a developer releases depend on and significantly affect other parts of the ecosystem. Taking this into account, developers should increase transparency as a means of improving the health of the overall ecosystem.

- Developers should use semantic versioning for their models (as is the norm in software engineering) such that there is no ambiguity as to the version of the model that is being distributed. Developers should also give as much notice as is practicable (e.g. 3 months' notice) in advance of deprecating models in order to give any downstream dependencies adequate time to migrate to a new version (see the sketch after this list).

- Developers should release an artifact alongside their foundation models that includes information about models' upstream and downstream dependencies (Bommasani et al., 2023b). Information about the datasets, software frameworks, and applications the model depends upon, as well as products, services, and other models that depend upon the model, is essential for effective supply chain monitoring.
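To make the versioning recommendation concrete, here is a minimal sketch in Python; the version string, dates, and 90-day minimum below are our own illustrative choices, not a policy any developer has adopted.

```python
# Minimal sketch: semantic versioning for model releases and a check that a
# deprecation announcement gives downstream users adequate notice.
# All names, dates, and the 90-day window are illustrative assumptions.
from datetime import date, timedelta

def parse_semver(version: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' version string into integers."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def adequate_notice(announced: date, retired: date,
                    minimum: timedelta = timedelta(days=90)) -> bool:
    """True if the gap between announcement and retirement meets the minimum
    (roughly the '3 months' notice' suggested above)."""
    return retired - announced >= minimum

print(parse_semver("2.1.0"))                                  # (2, 1, 0)
print(adequate_notice(date(2023, 10, 1), date(2024, 1, 15)))  # True: 106 days
```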
## 10. Use The Foundation Model Transparency Index To Increase Transparency.

- The Foundation Model Transparency Index provides an extensive taxonomy of the key elements of transparency in the field. We encourage developers to score their non-flagship models on the index and see where they have room for improvement.

- Each indicator contains significant detail that developers can utilize to increase transparency in specific issue areas. For quantitative metrics, indicators include information regarding the appropriate unit of measurement and level of precision. For qualitative metrics, indicators often provide de facto instructions for how to clearly share information about a specific subdomain with the public.

## 8.2 Recommendations For Foundation Model Deployers

Foundation model developers are not the only actors with a responsibility to promote transparency: deployers of foundation models, such as cloud service providers and companies that license foundation models from developers, also have a significant role to play. Although deployers cannot unilaterally increase transparency, as they are not the party responsible for building a foundation model, there are still some tools at their disposal for doing so, and they should think seriously about the implications of relying on systems for which there is little publicly available information.

## 1. Assess The Risks Of Deploying A Foundation Model Without Adequate Transparency.

- Deployers that make use of a developer's foundation model in their products and services should conduct pre-deployment risk assessments that include specific assessments of risks stemming from a lack of transparency. These risks may include increased legal liability for difficult-to-explain model behaviors, reduced trust from users due to the product's opacity, and lower product performance without adequate information about the data used to build the model.

## 2. Require Sufficient Transparency In Working With Foundation Model Developers.

- Foundation model deployers should work with developers to increase the level of transparency regarding their models. It is not only in a deployer's interest for developers to share information bilaterally, but also for developers to be transparent with the public about the risks and limitations of their models. Deployers themselves can help developers increase transparency by sharing usage statistics.

- Deployers should go beyond information sharing requests to improve transparency. For example, deployers should aim to negotiate contracts with developers that require developers to publicly share information that is relevant to the developers' customers as well as the broader public, such as information regarding Model Updates, changes in Usage Policy, and Impact. In cases where deployers have little leverage over larger developers, they should consider partnering with similarly situated deployers to increase their collective bargaining power.

## 3. Do Not Put Undue Trust In Opaque Foundation Models.

- Some deployers may take a foundation model from a reputable company at face value, assuming that all of the relevant information about that system is available to deployers and regulators.
This could be a serious misjudgment: as our findings show, developers are overwhelmingly not transparent about the development and use of their foundation models. Assuming that a model complies with regulatory requirements regarding information sharing could come with substantial legal risk; for example, if new regulations primarily place information sharing requirements on deployers, they may face legal exposure related to their deployment of opaque foundation models. While developers are presumably more transparent in their relationships with deployers than in their public-facing documentation, this is no guarantee that relevant information is shared across the 23 subdomains we identify.

## 8.3 Recommendations For Policymakers

Policymakers across the United States, China, Canada, the European Union, the United Kingdom, India, Japan, the G7, and many other governments have already taken specific actions on foundation models and generative AI (see Appendix C). Evidence-driven policy that is grounded in a rich and sophisticated understanding of the current foundation model market is likely to achieve the best outcomes. As a result, our extensive characterization of transparency provides three core insights: (i) what aspects of transparency are present in the status quo absent regulatory intervention, (ii) if mandated and enforced, what aspects of transparency would change relative to the status quo, and (iii) what substantive requirements beyond transparency would be most appropriate given the newfound transparency? We hope that lawmakers will draw on the information we aggregate in the Foundation Model Transparency Index to better inform policy initiatives. To be clear, our intent is not to make a claim about whether specific governments should or should not regulate foundation models at this time, though some policy intervention is likely needed. Nor is our intent to recommend broad disclosure requirements, which could cause substantial harm if they are implemented without regard for differences in developers' business models and their level of financial resources, or without adequate government support for regulatory compliance. Our view is that a better understanding of the status quo will lead to smarter policy, which leads to the following recommendations.

## 1. Transparency Should Be A Top Priority For AI Legislation.

- Mechanisms to promote transparency should be among the suite of policy tools that lawmakers use to encourage responsible development of foundation models (Engler, 2023; Hacker et al., 2023). Unlike many other policy tools, transparency can be relatively low cost: where developers already possess the relevant information, sharing it does not require data collection. Another advantage of pro-transparency policies is that they can help solve collective action problems with respect to sharing information with the public. If one developer shares much more information about its foundation model with the public, then it could theoretically be penalized by investors or scrutinized by regulators for having more information about the model's risks and limitations publicly available than its competitors. As a result, a developer may be hesitant to be a first mover on transparency if its competitors are steadfast in maintaining opacity. By contrast, if that developer's peers must also share information about the risks and limitations of their foundation models, there is much less potential for transparency to represent a competitive disadvantage.
- Transparency is a fundamental prerequisite for accountability, robust science, continuous innovation, and effective regulation. With additional information about companies' business practices, the impact of their foundation models, the resources used to build models, and the AI supply chain, governments would be much better positioned to enact comprehensive AI regulations.

- Policymakers have a responsibility to ensure that the public has adequate information about extremely powerful AI systems that hundreds of millions of people use.

## 2. Regulators Should Enforce Existing Regulation To Promote Transparency For Foundation Model Developers.

- Governments already have substantial authority to require companies to share information about their business practices (Ho, 2012; Hess, 2019; Irion, 2022). For example, in recent years data protection authorities have increased their efforts to regulate the development and use of AI (Zanfir-Fortuna, 2023); they should consider using these authorities to solicit additional information from foundation model developers regarding the data they use to build foundation models and the labor that goes into producing that data. Similarly, sectoral regulators should consider scrutinizing the deployment of foundation models within their purview and require transparency where appropriate.

## 3. Policymakers Should Be Realistic About The Limits Of Transparency.

- Transparency is not an end in itself. While having more information about companies' business practices and the foundation model ecosystem will undoubtedly be helpful, the most significant benefits from transparency will stem from the ways in which it elicits changes in business practices and promotes responsible development and use of foundation models.

- Transparency is not a viable alternative to substantive change. Some interest groups and policymakers have nonetheless pushed for transparency requirements as a form of "light-touch" AI regulation. Rather than mandating that companies change their policies and practices, this approach would merely require some level of information sharing with the government. But transparency is only useful insofar as the information it yields is actionable. Increased transparency can give policymakers sufficient information about the state of an industry that many governments seek to regulate.

- While transparency requirements may appear more feasible and even-handed than other policy interventions in the near term, policymakers should recognize that they are likely insufficient to reduce harm in many areas. Even if companies share more information about the impacts of their models on workers and the environment, that may not lead them to improve working conditions or reduce emissions. Policymakers should consider measures beyond transparency requirements in a wide variety of areas while balancing other important equities related to competition and algorithmic justice.

## 4. Governments Should Craft A Policy Architecture That Enables Responsible Development Of Open Foundation Models, Which Will In Turn Promote Transparency.

- Open foundation models are more transparent than closed foundation models, often by a significant margin. This means that policymakers with an interest in transparency should be hesitant to impose regulations on foundation model developers or deployers that make it considerably more difficult to build open foundation models.
Measures that substantially increase the legal risk of developing open foundation models, whether by holding foundation model developers liable for model outputs or by requiring comprehensive monitoring of downstream use, may ultimately undermine transparency.

- Pro-competitive policies, such as those that encourage a variety of business models in the foundation model ecosystem, can promote transparency. If there are only a few major technology companies that develop flagship foundation models, it will be easier for those companies to circumvent transparency rules by coordinating their activities. For instance, a handful of major closed developers could agree that a certain level of transparency is sufficient to satisfy their goals and to meet regulatory requirements, leading them to obfuscate their business practices in similar ways. If the foundation model ecosystem is dominated by a few incumbents, it will also be easier for those incumbents to jointly engage in regulatory capture, as there will be no countervailing narrative from other developers in the ecosystem. By contrast, policies that result in a diverse array of open and closed foundation model developers could create a positive feedback loop for transparency. The higher level of transparency of open developers can help draw attention to the lack of information available about the resources required to build closed foundation models. Some closed developers in this environment may see it as in their interest to share more information about their models in order to engender more trust in their products and services, which can in turn push less transparent closed developers to alter their business practices.

## 9 Impact

The Foundation Model Transparency Index characterizes the transparency of foundation model developers at present. While this descriptive work already yields significant insights and value, our ambition for this work is to drive change. In short, our objective is to *improve* transparency in the foundation model ecosystem: we believe improved transparency will result in better science, more innovation, greater accountability, and ultimately give society greater collective confidence that this promising technology can truly advance the public interest. Achieving these lofty goals requires changing the conduct of powerful organizations. As with many similar efforts to drive change, we have conceptualized a specific theory of change and have considered specific limitations and risks of our work. Consistent with the work's spirit of transparency, we describe both matters plainly below.

## 9.1 Theory Of Change

Assessment of any kind naturally characterizes the status quo. However, our intent is for our assessment to drive change, especially given that our most fundamental finding is that there is insufficient transparency in the foundation model ecosystem. Bommasani (2023) argues that evaluation and assessment can drive change if there is sufficient uptake: we specifically articulate how assessment can motivate improvement through different forms of uptake.

Assessment motivates improvement. By quantifying transparency and simultaneously scoring many developers, we hope that these organizations will improve their transparency scores over time. We aim to provide a characterization of the status quo that is broadly legible across the ecosystem by directly comparing organizations' transparency scores.
Furthermore, specific areas where there is pervasive opacity, or where specific companies are less transparent than their counterparts, are prime targets for public pressure and scrutiny. With respect to AI companies, we believe that certain teams and employees will play an outsized role in shaping transparency practices. Namely, responsible AI teams, along with other teams that address ethics and safety, are likely to shape many company-wide practices on transparency, including for their flagship foundation models. For this key group of individuals, we hope the Foundation Model Transparency Index provides a concrete list of indicators to proactively consider in making decisions. At present, we believe that some companies are not transparent about certain issue areas not because of specific countervailing concerns (e.g. profits, privacy, safety) but because they have not explicitly considered whether they should be transparent on these issues. In these cases, we believe the index provides a structured, well-argued resource that responsible AI teams can directly consider in making decisions around transparency. The index also provides an extensive account of why these specific indicators are valuable, which could help responsible AI teams advocate for greater transparency within their organizations. To be concrete, linking outward-facing transparency reporting with internal-facing company tracking could be a natural outcome through which our index brings about desired change while adding minimal overhead for these companies.

Indexes draw power from their subsequent iterations, allowing for improvements to be clearly measured and acknowledged over time. In the fast-moving foundation model ecosystem, subsequent versions could be motivated by (i) changes in the indicators, (ii) changes in the key companies to assess, (iii) changes in the flagship foundation models of those companies, and (iv) changes in the underlying materials for a specific company. As a result, we believe the maintenance and subsequent versions of the Foundation Model Transparency Index will be necessary for its sustained impact. We have not yet determined an exact cadence and strategy, though we will conduct future versions of the index.64

Assessment guides standards and mandates. Fundamentally, the Foundation Model Transparency Index assesses companies on metrics of transparency that are selected and evaluated based on our judgments as experts in this domain. With this in mind, the indicators selected as well as the results could directly inform more formal processes. For instance, policymakers around the world are considering a spectrum of voluntary commitments, industry standards, and mandatory requirements for foundation model developers.

64 See the project website (URL anonymized for submission) for the latest details.

Many current policy efforts, across the varying levels of voluntary and mandatory requirements, explicitly name transparency as a top-level priority and directly identify specific indicators and subdomains covered by our index (see Appendix C). Our index is likely to be more comprehensive, more fine-grained, and more empirically grounded than most ongoing policy initiatives. As a result, our index provides direct value to policymakers. In selecting requirements, policymakers can use the index to explore the broader universe of potential transparency and disclosure requirements.
In defining requirements, policymakers can use the index to explore specific definitions as well as edge cases that may complicate a requirement. And, in ultimately deciding which organizations to regulate and how to enforce regulations, policymakers can look at the current status quo to efficiently allocate resources. As a brief example, consider the European Parliament's position for the EU AI Act,65 which was adopted by the Parliament on June 14, 2023 by a vote of 499 in favour, 28 against and 93 abstentions.66 Bommasani et al. (2023a) provide an initial analysis of potential compliance with the Act as proposed in the context of foundation model developers. Given this legislative proposal, European lawmakers might recognize that topics related to upstream labor and downstream impact, which are covered in the Foundation Model Transparency Index, are not adequately addressed in the draft AI Act. Policymakers might also acknowledge that requirements to disclose a summary of any copyrighted training data are too vague and that a more specific definition, such as the definition we provide in Appendix A, may be desirable to improve compliance. And, finally, policymakers might consult the results of how open and closed developers fare when deciding which requirements are best targeted at which developers along the release spectrum. Overall, much as transparency is instrumental for key societal objectives like public trust, we believe the Foundation Model Transparency Index can be similarly instrumental for key societal processes like sound policy-making.

## 9.2 Limitations And Risks

Equating transparency and responsibility. Because we foreground transparency in our assessment of developers and their flagship models, it is likely that some will misinterpret the Foundation Model Transparency Index as a measure of the responsibility of companies. This is not the case for a number of reasons; most importantly, we award points on the basis of whether a developer is transparent about each indicator, not whether it has responsible business practices tied to that indicator. Concretely: if a developer discloses that it pays data laborers just one cent per hour, it would score points on the wages indicator under our methodology, while a developer that pays data laborers $20 an hour but does not make that information publicly available would score no points. This means that one risk of our approach is that it could incentivize developers to be transparent in performative ways that merely increase the amount of information available about their flagship models but do not reflect an effort on the part of the developer to substantively improve its business practices. Nevertheless, we believe that additional information about each of these indicators is an invaluable first step towards understanding how developers build and use foundation models. This will in turn allow many other evaluations of responsible business practices, in which the level of transparency should be but one factor.

Transparency-washing. There is no guarantee that improved transparency of foundation models will result in more responsible development. As critics of transparency-washing have persuasively argued (Zalnieriute, 2021), major technology companies have used transparency to create the illusion that they are responsible players with the public's best interest at heart. In this way, transparency can be a shield against further scrutiny, helping to convince the public that foundation models are safe and trustworthy when they may not be.
Similarly, companies may use transparency as a shield against comprehensive regulation. Companies could face substantial costs if they were required to increase pay for data laborers or forego certain risky use cases for their foundation models, leading some to argue that governments should simply require transparency in these verticals. However, transparency alone will not change a business' fundamental incentives and, if used to water down regulation, can perpetuate harm. Notwithstanding this risk, transparency may be a more appropriate regulatory option for many of the indicators we consider given the early stage of the foundation model ecosystem and the risk that substantive requirements will disproportionately damage small and open developers.

65 https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf
66 https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai

Gaming the index. Moving forward, developers might attempt to game the Foundation Model Transparency Index without actually improving transparency. They could do this by clarifying that they do not share information about certain practices and giving a justification for doing so. Developers might also exploit the fact that indicators are relatively generous, meaning that they could share minor additional information on indicators that are comparatively easy to satisfy without meaningfully improving transparency. Since scores could theoretically be gamed in this way, it is important to consider the Foundation Model Transparency Index in conjunction with other metrics of companies' business practices.

Binary scoring. The fact that each indicator is binary limits the amount of information that each score can reflect when compared with more expressive scoring schemes. For the same indicator, it is often the case that several developers share much less information than others but they all score one point nonetheless as they cross the threshold for receiving points. Conversely, in certain instances developers disclose some information related to a particular indicator but it is insufficient to receive points, yet they are grouped alongside developers who disclose no information whatsoever about an indicator. We attempt to address this limitation by breaking complex indicators into discrete chunks, meaning that each indicator assesses one key dimension of transparency and can more easily be made binary.
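To illustrate the information loss this limitation describes, here is a minimal sketch in Python; the disclosure levels and the threshold are invented for illustration and are not the index's actual rubric.

```python
# Minimal sketch of how binary scoring compresses information. The disclosure
# "levels" (0.0 to 1.0) and the 0.5 threshold are illustrative assumptions,
# not the index's actual rubric.

def binary_point(disclosure_level: float, threshold: float = 0.5) -> int:
    """Award the point iff disclosure crosses the threshold."""
    return int(disclosure_level >= threshold)

# Developers with very different disclosure quality can receive the same score.
print(binary_point(0.55))  # 1: barely crosses the threshold
print(binary_point(0.95))  # 1: far more thorough, but the same point
print(binary_point(0.45))  # 0: grouped with developers that disclose nothing
```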
The models we assessed are predominantly language models. For the developers we assess, their associated flagship models are predominantly text-to-text language models (8 of the 10). Of the remaining two, only one includes images as an input (GPT-4) and only one outputs images (Stable Diffusion 2). None of the flagship models we considered include modalities beyond text and images, though these modalities may become more common in the coming years. With this in mind, in principle the indicators are chosen and defined in a largely modality-agnostic fashion to facilitate future assessment as the flagship models in the ecosystem diversify in terms of modalities.

Most companies we assessed are headquartered in the U.S. Of the 10 developers we assess, 7 are headquartered in the United States. Although this reflects the disproportionate global reach of U.S. technology companies, there are salient foundation model developers in other parts of the world that we did not assess in this work. For instance, the index excludes foundation model developers in East Asia that we believe are sufficiently important to evaluate, but that often did not share enough information publicly for us to even attempt an evaluation. We also did not consider Falcon-180B from the Technology Innovation Institute in Abu Dhabi,67 as we had already finalized our evaluations when the model was released in September. We hope that researchers will use the Foundation Model Transparency Index and our fully transparent methodology to assess the transparency of these developers as well as others around the world.

Low bar for awarding points. We were generally quite generous in the scoring process. When we determined that a developer scored some version of a half-point, we usually rounded up. Since we assess transparency, we award developers points if they explicitly disclose that they do not share information about a particular indicator. We also read developers' documents with deference where possible, meaning that we often awarded points where there are grey areas. This means that developers' scores may actually be higher than their documentation warrants in certain cases, as we had a low bar for awarding points on many indicators.

## 10 Conclusion

Our research establishes the extent to which foundation model developers are transparent, set against the backdrop of decreasing transparency. Our findings show that the status quo is characterized by a widespread lack of transparency across developers, with significant unevenness in how individual developers fare and where they have room for improvement. We take this as a serious indictment of the overall ecosystem. Transparency is a broadly necessary condition for other, more substantive societal progress, and without improvement opaque foundation models are likely to contribute to harm. Foundation models are being developed, deployed, and adopted at a frenetic pace: for this technology to advance the public interest, real change must be made to rectify the fundamental lack of transparency in the ecosystem.

## References

Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G Robinson. Roles for computing in social change. In *Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency*, pp. 252–260, 2020.

Abubakar Abid, Maheen Farooqi, and James Zou. Persistent anti-muslim bias in large language models. *arXiv preprint arXiv:2101.05783*, 2021.

Mike Ananny and Kate Crawford. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. *New Media & Society*, 20(3):973–989, 2018. doi: 10.1177/1461444816676645. URL https://doi.org/10.1177/1461444816676645.

Aristotle. *De Anima: On the Soul*. 350 B.C.E.

Amanda Askell, Miles Brundage, and Gillian Hadfield. The role of cooperation in responsible ai development, 2019.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R.
Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional ai: Harmlessness from ai feedback, 2022.

Jack Bandy and Nicholas Vincent. Addressing "documentation debt" in machine learning research: A retrospective datasheet for bookcorpus. *arXiv preprint arXiv:2105.05241*, 2021.

Solon Barocas and Andrew D. Selbst. Big data's disparate impact. *104 California Law Review*, 3:671–732, 2016.

J. Bates, H. Kennedy, and I. et al. Medina Perea. Socially meaningful transparency in data-based systems: reflections and proposals from practice. *Journal of Documentation*, 2023. ISSN 0022-0418. doi: 10.1108/JD-01-2023-0006.

Luca Belli and Walter Gaspar. *The Quest for AI Sovereignty, Transparency and Accountability*. UN Internet Governance Forum Data and Artificial Intelligence Governance Coalition, 2023.

Emily M. Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. *Transactions of the Association for Computational Linguistics*, 6:587–604, 2018a. doi: 10.1162/tacl_a_00041. URL https://aclanthology.org/Q18-1041.

Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. *Transactions of the Association for Computational Linguistics (TACL)*, 6:587–604, 2018b.

Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, pp. 610–623, 2021.

Yochai Benkler. Practical anarchism: Peer mutualism, market power, and the fallible state. *Politics & Society*, 41(2):213–251, 2013.

Clare Birchall. *Radical secrecy: The ends of transparency in datafied America*, volume 60. U of Minnesota Press, 2021.

Sid Black, Stella Rose Biderman, Eric Hallahan, Quentin G. Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, M. Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, J. Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B: An open-source autoregressive language model. *arXiv*, 2022.

Robert Bloomfield and Maureen O'Hara. Market transparency: Who wins and who loses? *The Review of Financial Studies*, 12(1):5–35, 1999. ISSN 08939454, 14657368. URL http://www.jstor.org/stable/2645985.

Rishi Bommasani. Evaluation for change. In *Findings of the Association for Computational Linguistics: ACL 2023*, pp. 8227–8239, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.522. URL https://aclanthology.org/2023.findings-acl.522.

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E.
Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021.

Rishi Bommasani, Kevin Klyman, Daniel Zhang, and Percy Liang. Do foundation model providers comply with the eu ai act?, 2023a. URL https://crfm.stanford.edu/2023/06/15/eu-ai-act.html.

Rishi Bommasani, Dilara Soylu, Thomas Liao, Kathleen A. Creel, and Percy Liang. Ecosystem graphs: The social footprint of foundation models. *ArXiv*, abs/2303.15772, 2023b. URL https://api.semanticscholar.org/CorpusID:257771875.

Rishi Bommasani, Daniel Zhang, Tony Lee, and Percy Liang. Improving transparency in ai language models: A holistic evaluation. *Foundation Model Issue Brief Series*, 2023c. URL https://hai.stanford.edu/foundation-model-issue-brief-series.

Danah Boyd. Algorithmic accountability and transparency. Open Transcripts, Nov 2016. URL http://opentranscripts.org/transcript/danah-boyd-algorithmic-accountability-transparency/. Presented by danah boyd in Algorithmic Accountability and Transparency in the Digital Economy.

Timothy F. Bresnahan and M. Trajtenberg. General purpose technologies 'engines of growth'? *Journal of Econometrics*, 65(1):83–108, 1995. ISSN 0304-4076. doi: https://doi.org/10.1016/0304-4076(94)01598-T. URL https://www.sciencedirect.com/science/article/pii/030440769401598T.

Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr. What does it mean for a language model to preserve privacy? In *Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency*, pp. 2280–2292, 2022.

Ian Brown. Expert explainer: Allocating accountability in ai supply chains. *The Ada Lovelace Institute*, 2023. URL https://www.adalovelaceinstitute.org/resource/ai-supply-chains/.

Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, et al. Toward trustworthy ai development: mechanisms for supporting verifiable claims. *arXiv preprint arXiv:2004.07213*, 2020.

Erik Brynjolfsson, Daniel Rock, and Chad Syverson. The productivity j-curve: How intangibles complement general purpose technologies. *American Economic Journal: Macroeconomics*, 13(1):333–72, January 2021. doi: 10.1257/mac.20180386. URL https://www.aeaweb.org/articles?id=10.1257/mac.20180386.

Joy Buolamwini and Timnit Gebru.
Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on Fairness, Accountability and Transparency*, pp. 77–91, 2018.

Rosario Cammarota, Matthias Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, Amos Treiber, Christian Weinert, Thomas Schneider, Emmanuel Stapf, Ahmad-Reza Sadeghi, et al. Trustworthy ai inference systems: An industry research view. *arXiv preprint arXiv:2008.04449*, 2020.

Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. Open problems and fundamental limitations of reinforcement learning from human feedback, 2023.

Davide Castelvecchi. Can we open the black box of ai? *Nature*, 10 2016. URL https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731. Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.

Sarah H. Cen, Aspen Hopkins, Andrew Ilyas, Aleksander Madry, Isabella Struckman, and Luis Videgaray. Ai supply chains and why they matter. *AI Policy Substack*, 2023. URL https://aipolicy.substack.com/p/supply-chains-2.

Pierre Chambon, Christian Blüthgen, Jean-Benoit Delbrouck, Rogier van der Sluijs, Malgorzata Polacin, Juan Manuel Zambrano Chaves, T. Abraham, Shivanshu Purohit, Curt P. Langlotz, and Akshay Chaudhari. Roentgen: Vision-language foundation model for chest x-ray generation. *ArXiv*, abs/2211.12737, 2022. URL https://api.semanticscholar.org/CorpusID:253801600.

Chien-Lun Chen, Leana Golubchik, and Ranjan Pal. Achieving transparency report privacy in linear time, 2021.

Junyu Chen, Norihiro Yoshida, and Hiroaki Takada. An investigation of licensing of datasets for machine learning based on the gqm model, 2023a.

Lingjiao Chen, Matei Zaharia, and James Zou. How is chatgpt's behavior changing over time?, 2023b.

Dorothy Chou. Transparency report: Government requests on the rise. URL https://blog.google/technology/safety-security/transparency-report-government-requests/.

Hyung Won Chung, Le Hou, S. Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Wei Yu, Vincent Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc Le, and Jason Wei. Scaling instruction-finetuned language models. *ArXiv*, abs/2210.11416, 2022.

Jennifer Cobbe, Michael Veale, and Jatinder Singh. Understanding accountability in algorithmic supply chains. In *2023 ACM Conference on Fairness, Accountability, and Transparency*. ACM, jun 2023. doi: 10.1145/3593013.3594073. URL https://doi.org/10.1145%2F3593013.3594073.

Cohere. Best practices for deploying language models, Jul 2022. URL https://txt.cohere.com/best-practices-for-deploying-language-models/.

European Commission. The digital services act: ensuring a safe and accountable online environment. *European Commission*, 2022. URL https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act-ensuring-safe-and-accountable-online-environment_en.
Federal Trade Commission. Ftc finalizes order requiring fortnite maker epic games to pay $245 million for tricking users into making unwanted charges, Mar 2023. URL https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-finalizes-order-requiring-fortnite-maker-epic-games-pay-245-million-tricking-users-making. FTC will use the money to provide refunds to consumers.

Joint Research Centre-European Commission et al. *Handbook on constructing composite indicators: methodology and user guide*. OECD publishing, 2008.

Chris Coons, Bill Cassidy, Amy Klobuchar, Richard Blumenthal, and Mitt Romney. The platform accountability and transparency act. *Congressional Bill*, 2021. URL https://www.coons.senate.gov/imo/media/doc/pata_one_pager_118th_congress_june_2023.pdf.

A. Feder Cooper, David Mimno, Madiha Choksi, and Katherine Lee. Machine learning and artificial intelligence: Legal concepts. Generative AI and Law Workshop at the International Conference of Machine Learning, 2023. URL https://genlaw.github.io/glossary.html#legal-concepts.

Eric Corbett and Emily Denton. Interrogating the t in facct. In *Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency*, pp. 1624–1634, 2023.

Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. Who audits the auditors? recommendations from a field scan of the algorithmic auditing ecosystem. In *Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, pp. 1571–1583, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533213. URL https://doi.org/10.1145/3531146.3533213.

Kate Crawford. *The atlas of AI: Power, politics, and the planetary costs of artificial intelligence*. Yale University Press, 2021.

Anamaria Crisan, Margaret Drouhard, Jesse Vig, and Nazneen Rajani. Interactive model cards: A human-centered approach to model documentation. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, pp. 427–439, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533108. URL https://doi.org/10.1145/3531146.3533108.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255, 2009.

Renée DiResta, Laura Edelson, Brendan Nyhan, and Ethan Zuckerman. It's time to open the black box of social media. *Scientific American*, Apr 2022. URL https://www.scientificamerican.com/article/its-time-to-open-the-black-box-of-social-media/. Social media companies need to give their data to independent researchers to better understand how to keep users safe.

Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pp. 1286–1305, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.98. URL https://aclanthology.org/2021.emnlp-main.98.

David Donoho. 50 years of data science. *Journal of Computational and Graphical Statistics*, 26(4):745–766, 2017. doi: 10.1080/10618600.2017.1384734.
URL https://doi.org/10.1080/10618600.2017.1384734.

Josh Dzieza. Ai is a lot of work: As the technology becomes ubiquitous, a vast tasker underclass is emerging — and not going anywhere. *The Verge*, Jun 2023. URL https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots. Illustrations by Richard Parry.

Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. Gpts are gpts: An early look at the labor market impact potential of large language models, 2023.

Steven Englehardt and Arvind Narayanan. Online tracking: A 1-million-site measurement and analysis. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pp. 1388–1401, 2016.

Steven Englehardt, Dillon Reisman, Christian Eubank, Peter Zimmerman, Jonathan Mayer, Arvind Narayanan, and Edward W Felten. Cookies that give you away: The surveillance implications of web tracking. In *Proceedings of the 24th International Conference on World Wide Web*, pp. 289–299, 2015.

Alex Engler. A comprehensive and distributed approach to ai regulation, 2023. URL https://www.brookings.edu/articles/a-comprehensive-and-distributed-approach-to-ai-regulation/.

Kawin Ethayarajh and Dan Jurafsky. Utility is in the eye of the user: A critique of NLP leaderboards. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pp. 4846–4853, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.393. URL https://aclanthology.org/2020.emnlp-main.393.

EU. Official journal of the european union 2016. *Official Journal of the European Union*, L 119/1, Apr 2016. URL https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1552662547490&uri=CELEX%3A32016R0679.

Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for ai. *Berkman Klein Center Research Publication*, (2020-1), January 15 2020. doi: 10.2139/ssrn.3518482. URL https://ssrn.com/abstract=3518482.

Ann Florini. *The right to know: transparency for an open world*. Columbia University Press, 2007.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. *arXiv preprint arXiv:2101.00027*, 2021a. URL https://arxiv.org/abs/2101.00027.

Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation. Version v0.0.1. Sept, 2021b.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. *arXiv preprint arXiv:1803.09010*, 2018.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. *Communications of the ACM*, 64(12):86–92, 2021.

Ritwick Ghosh and Hilary Oliva Faxon. Smart corruption: Satirical strategies for gaming accountability. *Big Data & Society*, 10(1):20539517231164119, 2023. doi: 10.1177/20539517231164119. URL https://doi.org/10.1177/20539517231164119.

Charles AE Goodhart.
Problems of monetary management: the uk experience. In *Monetary theory and practice*, pp. 91–121. Springer, 1984.

Nelson Granados and Alok Gupta. Transparency strategy: Competing with information in a digital world. *MIS Quarterly*, 37(2):637–641, 2013. ISSN 02767783. URL http://www.jstor.org/stable/43825928.

Mary L Gray and Siddharth Suri. *Ghost work: How to stop Silicon Valley from building a new global underclass*. Eamon Dolan Books, 2019a.

M.L. Gray and S. Suri. *Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass*. Houghton Mifflin Harcourt, 2019b. ISBN 978-1-328-56624-9. URL https://books.google.com/books?id=u10-uQEACAAJ.

Salvatore Greco, Alessio Ishizaka, Menelaos Tasiou, and Gianpiero Torrisi. On the methodological framework of composite indices: A review of the issues of weighting, aggregation, and robustness. *Social Indicators Research*, 141:1–34, 01 2019. doi: 10.1007/s11205-017-1832-9.

Sam Gregory. The need for transparency in artificial intelligence. Testimony before the U.S. Senate Committee on Commerce, Science and Transportation, Subcommittee on Consumer Protection, Product Safety and Data Security, Sep 2023. URL https://www.commerce.senate.gov/services/files/DAD2163A-EF02-41B5-B7BA-2BA8B568C977. Executive Director, WITNESS.

Philipp Hacker, Andreas Engel, and Marco Mauer. Regulating chatgpt and other large generative ai models. In *Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '23, pp. 1112–1123, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400701924. doi: 10.1145/3593013.3594067. URL https://doi.org/10.1145/3593013.3594067.

Thilo Hagendorff. The ethics of ai ethics: An evaluation of guidelines. *Minds and Machines*, 30(1):99–120, March 2020. ISSN 1572-8641. doi: 10.1007/s11023-020-09517-8. URL https://doi.org/10.1007/s11023-020-09517-8.

Keach Hagey and Jeff Horwitz. Facebook tried to make its platform a healthier place. It got angrier instead, 2021. URL https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215.

Byung-Chul Han. *The transparency society*. Stanford University Press, 2015.

Karen Hao and Deepa Seetharaman. Cleaning up chatgpt takes heavy toll on human workers. *The Wall Street Journal*, July 2023. URL https://www.wsj.com/articles/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483. Photographs by Natalia Jidovanu.

Russell Hardin. *Trust and trustworthiness*. Russell Sage Foundation, 2002.

Woodrow Hartzog. Oversight of a.i.: Legislating on artificial intelligence. Prepared Testimony and Statement for the Record before the U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, Sep 2023. URL https://www.judiciary.senate.gov/imo/media/doc/2023-09-12_pm_-_testimony_-_hartzog.pdf.

Stefanus Agus Haryono, Ferdian Thung, Hong Jin Kang, Lucas Serrano, Gilles Muller, Julia Lawall, David Lo, and Lingxiao Jiang. Automatic android deprecated-api usage update by learning from single updated example, 2020.

Ahmed Hashesh. Version control for ml models: Why you need it, what it is, how to implement it, 2023. URL https://neptune.ai/blog/version-control-for-ml-models.

Melissa Heikkilä. It's high time for more ai transparency. *MIT Technology Review*, Jul 2023. URL https://www.technologyreview.com/2023/07/25/1076698/its-high-time-for-more-ai-transparency/.

Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau.
Towards the systematic reporting of the energy and carbon footprints of machine learning. *Journal of Machine Learning Research*, 21(248):1–43, 2020. Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A Lemley, and Percy Liang. Foundation models and fair use. *arXiv preprint arXiv:2303.15715*, 2023. David Hess. The transparency trap: Non-financial disclosure and the responsibility of business to respect human rights. *American Business Law Journal*, 56(1):5–53, 2019. doi: https://doi.org/10.1111/ablj.12134. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/ablj.12134. Daniel E. Ho. Fudging the nudge: Information disclosure and restaurant grading. *Yale Law Journal*, 122:574–583, 2012. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and L. Sifre. Training compute-optimal large language models. ArXiv, abs/2203.15556, 2022a. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models, 2022b. Michael Hopkins. Human development revisited: A new undp report. *World Development*, 19(10):1469–1473, 1991. Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, and Danqi Chen. Catastrophic jailbreak of open-source llms via exploiting generation, 2023. Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, pp. 560–575, 2021. Kristina Irion. Algorithms off-limits? if digital trade law restricts access to source code of software then accountability will suffer. In *Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, pp. 1561–1570, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533212. URL https://doi.org/10.1145/3531146.3533212. Anna Jobin, Marcello Ienca, and Effy Vayena. The global landscape of ai ethics guidelines. *Nature Machine Intelligence*, 1(9):389–399, September 2019. doi: 10.1038/s42256-019-0088-2. URL https://doi.org/10.1038/s42256-019-0088-2. Michael Johnston. Good governance: Rule of law, transparency, and accountability. New York: United Nations Public Administration Network, pp. 1–32, 2006. Elliot Jones. Explainer: What is a foundation model? *Ada Lovelace Institute*, 2023. URL https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/. Mark Eli Kalderon. Transparency. In *Form without Matter: Empedocles and Aristotle on Color Perception*. Oxford University Press, 01 2015. ISBN 9780198717904. doi: 10.1093/acprof:oso/9780198717904.003.0003. URL https://doi.org/10.1093/acprof:oso/9780198717904.003.0003. Margot E. Kaminski. *Understanding Transparency in Algorithmic Accountability*, pp. 121–138. Cambridge Law Handbooks. Cambridge University Press, 2020.
doi: 10.1017/9781108680844.006. Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in language models. *arXiv preprint arXiv:2202.06539*, 2022. Sayash Kapoor and Arvind Narayanan. Leakage and the reproducibility crisis in machine-learning-based science. *Patterns*, 4(9):100804, 2023. ISSN 2666-3899. doi: https://doi.org/10.1016/j.patter.2023.100804. URL https://www.sciencedirect.com/science/article/pii/S2666389923001599. Sayash Kapoor, Emily Cantrell, Kenny Peng, Thanh Hien Pham, Christopher A. Bail, Odd Erik Gundersen, Jake M. Hofman, Jessica Hullman, Michael A. Lones, Momin M. Malik, Priyanka Nanayakkara, Russell A. Poldrack, Inioluwa Deborah Raji, Michael Roberts, Matthew J. Salganik, Marta Serra-Garcia, Brandon M. Stewart, Gilles Vandewiele, and Arvind Narayanan. Reforms: Reporting standards for machine learning based science, 2023. Daphne Keller. Hearing on platform transparency: Understanding the impact of social media. Technical report, United States Senate Committee on the Judiciary, Subcommittee on Privacy, Technology and the Law, May 2022. URL https://www.judiciary.senate.gov/imo/media/doc/Keller%20Testimony1. pdf. Statement of Daphne Keller, Stanford University Cyber Policy Center. Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, and Seong Joon Oh. Propile: Probing privacy leakage in large language models, 2023. Jen King. Redesigning data privacy: Reimagining notice and consent for human technology interaction. Technical report, The Center for Internet and Society, Stanford Law School, 2020. John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. A watermark for large language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of *Proceedings of Machine Learning Research*, pp. 17061–17084. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/kirchenbauer23a.html. Aniket Kittur, Jeffrey V Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. The future of crowd work. In Proceedings of the 2013 conference on Computer supported cooperative work, pp. 1301–1318, 2013. Kevin Klyman. How to promote responsible open foundation models, 2023. URL https://hai.stanford. edu/news/how-promote-responsible-open-foundation-models. Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and Percy Liang. Robust distortion-free watermarks for language models. *ArXiv*, abs/2307.15593, 2023. URL https://api.semanticscholar.org/ CorpusID:260315804. Abhishek Kumar, Tristan Braud, Sasu Tarkoma, and Pan Hui. Trustworthy ai in the age of pervasive computing and big data. In *2020 IEEE International Conference on Pervasive Computing and Communications* Workshops (PerCom Workshops), pp. 1–6. IEEE, 2020. Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, and Yulia Tsvetkov. Language generation models can cause harm: So what can we do about it? an actionable survey. arXiv preprint arXiv:2210.07700, 2022. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. *arXiv preprint arXiv:1910.09700*, 2019. Alexandre Lacoste, Nils Lehmann, Pau Rodríguez López, Evan D. Sherwin, Hannah Rae Kerner, Bjorn Lutjens, Jeremy A. 
Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vázquez, Dava Newman, Yoshua Bengio, Stefano Ermon, and Xiao Xiang Zhu. Geo-bench: Toward foundation models for earth monitoring. *ArXiv*, abs/2306.03831, 2023. URL https: //api.semanticscholar.org/CorpusID:259088736. LAION. Towards a transparent ai future: The call for less regulatory hurdles on open-source ai in europe, Sep 2023. URL https://laion.ai/blog/transparent-ai/. Following our previous open letter to the European Parliament on the significance of open-source AI, LAION, backed by European Laboratory for Learning and Intelligent Systems (ELLIS) and a long list of very impactful AI researchers, we submit this new open letter to the European Parliament. Michelle S. Lam, Mitchell L. Gordon, Danaë Metaxa, Jeffrey T. Hancock, James A. Landay, and Michael S. Bernstein. End-user audits: A system empowering communities to lead large-scale investigations of harmful algorithmic behavior. *Proc. ACM Hum.-Comput. Interact.*, 6(CSCW2), nov 2022. doi: 10.1145/3555625. URL https://doi.org/10.1145/3555625. Patrick Lam, Jens Dietrich, and David J. Pearce. Putting the semantics into semantic versioning. In Proceedings of the 2020 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. ACM, nov 2020. doi: 10.1145/3426428.3426922. URL https: //doi.org/10.1145%2F3426428.3426922. J. Richard Landis and Gary G. Koch. The measurement of observer agreement for categorical data. *Biometrics*, 33(1):159–174, 1977. ISSN 0006341X, 15410420. URL http://www.jstor.org/stable/2529310. Issie Lapowsky. How cambridge analytica sparked the great privacy awakening, 2018. URL https://www. wired.com/story/cambridge-analytica-facebook-privacy-awakening/. Daniel Lathrop and Laurel Ruma. Open government: Collaboration, transparency, and participation in practice. " O'Reilly Media, Inc.", 2010. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillanMajor, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. 
Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. 
Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Karen Fort, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, and Thomas Wolf. Bloom: A 176b-parameter open-access multilingual language model. 2022. doi: 10.48550/ARXIV.2211.05100. URL https://arxiv.org/abs/2211.05100. Katherine Lee, A Feder Cooper, and James Grimmelmann. Talkin' 'bout ai generation: Copyright and the generative-ai supply chain. *arXiv preprint arXiv:2309.08133*, 2023a. Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, Rose E Wang, Minae Kwon, Joon Sung Park, Hancheng Cao, Tony Lee, Rishi Bommasani, Michael S. Bernstein, and Percy Liang. Evaluating human-language model interaction. *Transactions on Machine Learning Research*, 2023b. ISSN 2835-8856. URL https://openreview.net/forum?id=hjDYJUn9l1. Mark A Lemley and Bryan Casey. Fair learning. *Tex. L. Rev.*, 99:743, 2020. Daoyuan Li, Li Li, Dongsun Kim, Tegawendé F. Bissyandé, David Lo, and Yves Le Traon. Watch out for this commit! a study of influential software changes, 2016. Percy Liang. The time is now to develop community norms for the release of foundation models, May 2022. URL https://hai.stanford.edu/news/time-now-develop-community-norms-release-foundation-models. Percy Liang and Rob Reich. Condemning the deployment of gpt-4chan, 2022. URL https://docs.google.com/forms/d/e/1FAIpQLSdh3Pgh0sGrYtRihBu-GPN7FSQoODBLvF7dVAFLZk2iuMgoLw/viewform. Percy Liang, Rishi Bommasani, Kathleen A. Creel, and Rob Reich. The time is now to develop community norms for the release of foundation models, 2022a.
URL https://crfm.stanford.edu/2022/05/17/community-norms.html. Percy Liang, Rishi Bommasani, Kathleen A. Creel, and Rob Reich. The time is now to develop community norms for the release of foundation models, 2022b. URL https://crfm.stanford.edu/2022/05/17/community-norms.html. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, E. Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan S. Kim, Neel Guha, Niladri S. Chatterji, O. Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas F. Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models. *ArXiv*, abs/2211.09110, 2022c. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Alexander Cosgrove, Christopher D Manning, Christopher Re, Diana Acosta-Navas, Drew Arad Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue WANG, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Andrew Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=iO4LZibEqW. Featured Certification, Expert Certification. Qingzi Vera Liao and Jennifer Wortman Vaughan. Ai transparency in the age of llms: A human-centered research roadmap. *ArXiv*, abs/2306.01941, 2023. URL https://api.semanticscholar.org/CorpusID:259075521. Andreas Liesenfeld, Alianda Lopez, and Mark Dingemanse. Opening up chatgpt: Tracking openness, transparency, and accountability in instruction-tuned text generators. In *Proceedings of the 5th International Conference on Conversational User Interfaces*, CUI '23, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400700149. doi: 10.1145/3571884.3604316. URL https://doi.org/10.1145/3571884.3604316. Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Salvatore Candido, and Alexander Rives. Evolutionary-scale prediction of atomic-level protein structure with a language model. *Science*, 379(6637):1123–1130, 2023. doi: 10.1126/science.ade2574. URL https://www.science.org/doi/abs/10.1126/science.ade2574. Zachary C. Lipton and Jacob Steinhardt. Troubling trends in machine learning scholarship: Some ml papers suffer from flaws that could mislead the public and stymie future research. *Queue*, 17(1):45–77, feb 2019. ISSN 1542-7730. doi: 10.1145/3317287.3328534. URL https://doi.org/10.1145/3317287.3328534.
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil Jain, and Jiliang Tang. Trustworthy ai: A computational perspective. *ACM Transactions on Intelligent Systems and Technology*, 14(1):1–59, 2022. Xingyu Liu, Annabel Sun, and Jason I. Hong. Identifying terms and conditions important to consumers using crowdsourcing, 2021. Yeyi Liu, Martin Heinberg, Xuan Huang, and Andreas B. Eisingerich. Building a competitive advantage based on transparency: When and why does transparency matter for corporate social responsibility? *Business Horizons*, 66(4):517–527, 2023. ISSN 0007-6813. doi: https://doi.org/10.1016/j.bushor.2022.10.004. URL https://www.sciencedirect.com/science/article/pii/S0007681322001306. Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, et al. A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity. *arXiv preprint arXiv:2305.13169*, 2023. Alexandra Sasha Luccioni and Alex Hernández-García. Counting carbon: A survey of factors influencing the emissions of machine learning. *ArXiv*, abs/2302.08476, 2023. Alexandra Sasha Luccioni, Sylvain Viguier, and Anne-Laure Ligozat. Estimating the carbon footprint of bloom, a 176b parameter language model. *ArXiv*, abs/2211.02001, 2022. URL https://api.semanticscholar.org/CorpusID:253265387. Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, and Santiago Zanella-Béguelin. Analyzing leakage of personally identifiable information in language models, 2023. Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, et al. The ai index 2023 annual report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, 2023. Arunesh Mathur, Gunes Acar, Michael J. Friedman, Eli Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. Dark patterns at scale: Findings from a crawl of 11k shopping websites. *Proc. ACM Hum.-Comput. Interact.*, 3(CSCW), nov 2019. doi: 10.1145/3359183. URL https://doi.org/10.1145/3359183. Meta. Meta platform terms, 2023. URL https://developers.facebook.com/terms/#datause. Danaë Metaxa, Joon Sung Park, Ronald E. Robertson, Karrie Karahalios, Christo Wilson, Jeff Hancock, and Christian Sandvig. Auditing algorithms: Understanding algorithmic systems from the outside in. *Foundations and Trends® in Human–Computer Interaction*, 14(4):272–344, 2021. ISSN 1551-3955. doi: 10.1561/1100000083. URL http://dx.doi.org/10.1561/1100000083. Katharine Miller. Radical proposal: Third-party auditor access for ai accountability, Oct 2021. URL https://hai.stanford.edu/news/radical-proposal-third-party-auditor-access-ai-accountability. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 2018. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pp. 220–229, 2019. Brent Mittelstadt. Principles alone cannot guarantee ethical ai. *Nature Machine Intelligence*, 1(11):501–507, November 2019.
ISSN 2522-5839. doi: 10.1038/s42256-019-0114-4. URL https://doi.org/10.1038/s42256-019-0114-4. Shakir Mohamed, Marie-Therese Png, and William Isaac. Decolonial ai: Decolonial theory as sociotechnical foresight in artificial intelligence. *Philosophy & Technology*, 33(4):659–684, December 2020. ISSN 2210-5441. doi: 10.1007/s13347-020-00405-8. URL https://doi.org/10.1007/s13347-020-00405-8. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Rose Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. Crosslingual generalization through multitask finetuning. *ArXiv*, abs/2211.01786, 2022. Deirdre K. Mulligan, Colin Koopman, and Nick Doty. Privacy is an essentially contested concept: a multidimensional analytic for mapping privacy. *Philosophical Transactions Royal Society A*, 374, 2016. Yuri Nakao, Lorenzo Strappelli, Simone Stumpf, Aisha Naseer, Daniele Regoli, and Giulia Del Gamba. Towards responsible ai: A design space exploration of human-centered artificial intelligence user interfaces to investigate fairness, 2022. Arvind Narayanan and Sayash Kapoor. Generative ai companies must publish transparency reports, 2023. URL https://knightcolumbia.org/blog/generative-ai-companies-must-publish-transparency-reports. Arvind Narayanan and Dillon Reisman. The princeton web transparency and accountability project. *Transparent data mining for big and small data*, pp. 45–67, 2017. Deepak Narayanan, Keshav Santhanam, Peter Henderson, Rishi Bommasani, Tony Lee, and Percy Liang. Cheaply evaluating inference efficiency metrics for autoregressive transformer APIs. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https://openreview.net/forum?id=RJpAz15D0S. Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. Conceptualizing AI literacy: An exploratory review. *Computers and Education: Artificial Intelligence*, 2:100041, 2021. ISSN 2666-920X. doi: https://doi.org/10.1016/j.caeai.2021.100041. URL https://www.sciencedirect.com/science/article/pii/S2666920X21000357. Tuan Dung Nguyen, Yuan-Sen Ting, Ioana Ciucă, Charlie O'Neill, Ze-Chang Sun, Maja Jabłońska, Sandor Kruk, Ernest Perkowski, Jack W. Miller, Jason Li, Josh Peek, Kartheik Iyer, Tomasz Różański, Pranav Khetarpal, Sharaf Zaman, David Brodrick, Sergio J. Rodríguez Méndez, Thang Bui, Alyssa Goodman, Alberto Accomazzi, Jill P. Naiman, Jesse Cranney, Kevin Schawinski, and UniverseTBD. Astrollama: Towards specialized foundation models in astronomy. *ArXiv*, abs/2309.06126, 2023. URL https://api.semanticscholar.org/CorpusID:261696577. Helen Nissenbaum. Privacy as contextual integrity. *Washington Law Review*, 79(1), 2004. URL https://digitalcommons.law.uw.edu/wlr/vol79/iss1/10. OECD, European Union, and Joint Research Centre European Commission. Handbook on Constructing Composite Indicators: Methodology and User Guide. 2008. doi: https://doi.org/10.1787/9789264043466-en. URL https://www.oecd-ilibrary.org/content/publication/9789264043466-en.
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jaehyung Kim, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Keyvan Majd, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, and Zichen Jeff Cui. Open X-Embodiment: Robotic learning datasets and RT-X models. https://robotics-transformer-x.github.io, 2023. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, J. Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, P. Welinder, P. Christiano, J. Leike, and Ryan J. Lowe. Training language models to follow instructions with human feedback. *arXiv*, 2022. Raja Parasuraman and Dietrich H. Manzey. Complacency and bias in human use of automation: An attentional integration. *Human Factors*, 52(3):381–410, 2010. doi: 10.1177/0018720810376055. URL https://doi.org/10.1177/0018720810376055. PMID: 21077562. Frank Pasquale. *The black box society: The secret algorithms that control money and information*. Harvard University Press, 2015. 
David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. *arXiv preprint* arXiv:2104.10350, 2021. Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nathan McAleese, and Geoffrey Irving. Red teaming language models with language models. *ArXiv*, abs/2202.03286, 2022. Jason Phang, Herbie Bradley, Leo Gao, Louis Castricato, and Stella Biderman. Eleutherai: Going beyond "open science" to "science in the open", 2022. Aleksandra Piktus, Christopher Akiki, Paulo Villegas, Hugo Laurençon, Gérard Dupont, Alexandra Sasha Luccioni, Yacine Jernite, and Anna Rogers. The roots search tool: Data transparency for llms, 2023. David Piorkowski, John Richards, and Michael Hind. Evaluating a methodology for increasing ai transparency: A case study, 2022. Giada Pistilli, Carlos Muñoz Ferrandis, Yacine Jernite, and Margaret Mitchell. Stronger together: on the articulation of ethical charters, legal tools, and technical documentation in ML. In 2023 ACM Conference on Fairness, Accountability, and Transparency. ACM, jun 2023. doi: 10.1145/3593013.3594002. URL https://doi.org/10.1145%2F3593013.3594002. Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. Data cards: Purposeful and transparent dataset documentation for responsible ai. In *2022 ACM Conference on Fairness, Accountability, and* Transparency, pp. 1776–1826, 2022. Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Finetuning aligned language models compromises safety, even when users do not intend to!, 2023. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. *ArXiv*, abs/2212.04356, 2022. URL https://api. semanticscholar.org/CorpusID:252923993. Deb Raji. Mozilla open source audit tooling (oat) project, Feb 2022. Project documentation or description at Mozilla. Inioluwa Deborah Raji and Joy Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, pp. 429–435, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450363242. doi: 10.1145/3306618.3314244. URL https://doi.org/ 10.1145/3306618.3314244. Inioluwa Deborah Raji, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. AI and the everything in the whole wide world benchmark. In *Thirty-fifth Conference on Neural Information* Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/ forum?id=j6NxpQbREA1. Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. The fallacy of ai functionality. In *2022 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '22, pp. 959–972, New York, NY, USA, 2022a. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533158. URL https://doi.org/10.1145/3531146.3533158. Inioluwa Deborah Raji, Peggy Xu, Colleen Honigsberg, and Daniel Ho. Outsider oversight: Designing a third party audit ecosystem for ai governance. In *Proceedings of the 2022 AAAI/ACM Conference on* AI, Ethics, and Society, AIES '22, pp. 557–571, New York, NY, USA, 2022b. Association for Computing Machinery. ISBN 9781450392471. doi: 10.1145/3514094.3534181. URL https://doi.org/10.1145/ 3514094.3534181. 
Bogdana Rakova, Megan Ma, and Renee Shelby. Terms-we-serve-with: a feminist-inspired social imaginary for improved transparency and engagement in ai, 2022. RDR. 2020 ranking digital rights corporate accountability index, 2020. URL https://rankingdigitalrights.org/index2020/. Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, et al. Mlperf inference benchmark. In *2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)*, pp. 446–459. IEEE, 2020. Max Reuter and William Schulze. I'm afraid i can't do that: Predicting prompt refusal in black-box generative language models, 2023. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy: Behavioral testing of NLP models with CheckList. In *Association for Computational Linguistics (ACL)*, pp. 4902–4912, 2020. James A Robinson and Daron Acemoglu. *Why nations fail: The origins of power, prosperity and poverty*. Profile London, 2012. Alex Rosenblat and Luke Stark. Algorithmic labor and information asymmetries: A case study of uber's drivers. *International Journal Of Communication*, 10:27, Jul 2016. URL https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2686227. Posted: 5 Nov 2015; Last revised: 25 Jul 2017. Michaela Saisana and Stefano Tarantola. State-of-the-art report on current methodologies and practices for composite indicator development, volume 214. 2002. Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. "everyone wants to do the model work, not the data work": Data cascades in high-stakes ai. In proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–15, 2021. Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cédric Langbort. Auditing algorithms: Research methods for detecting discrimination on internet platforms. 2014. URL https://api.semanticscholar.org/CorpusID:15686114. Girish Sastry. Beyond "release" vs. "not release", 2021. URL https://crfm.stanford.edu/commentary/2021/10/18/sastry.html. Nitya Sathyavageesran, Roy D. Yates, Anand D. Sarwate, and Narayan Mandayam. Privacy leakage in discrete time updating systems, 2022. Joseph R. Saveri, Cadio Zirpoli, Christopher K.L. Young, and Kathleen J. McMahon. Paul tremblay, mona awad vs. openai, inc., et al., 2023. URL https://storage.courtlistener.com/recap/gov.uscourts.cand.414822/gov.uscourts.cand.414822.1.0_1.pdf. Case 3:23-cv-03223-AMO Document 1 Filed 06/28/23, UNITED STATES DISTRICT COURT, NORTHERN DISTRICT OF CALIFORNIA, SAN FRANCISCO DIVISION. Michael Schudson. *The rise of the right to know: Politics and the culture of transparency, 1945-1975*. Harvard University Press, 2015. Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. Green ai. *Communications of the ACM*, 63(12):54–63, 2020. Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K. Wei, Christoph Winter, Mackenzie Arnold, Seán Ó hÉigeartaigh, Anton Korinek, Markus Anderljung, Ben Bucknall, Alan Chan, Eoghan Stafford, Leonie Koessler, Aviv Ovadya, Ben Garfinkel, Emma Bluemke, Michael Aird, Patrick Levermore, Julian Hazell, and Abhishek Gupta. Open-Sourcing Highly Capable Foundation Models: An Evaluation of Risks, Benefits, and Alternative Methods for Pursuing Open-Source Objectives. 2023. Congressional Research Service. The federal net neutrality debate: Access to broadband networks.
Technical report, Congressional Research Service, Feb 2021. URL https://crsreports.congress.gov/product/pdf/R/R40616. Report Number: R40616. Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos. Compute trends across three eras of machine learning, 2022. Toby Shevlane. Structured access: an emerging paradigm for safe ai deployment, 2022. URL https://arxiv.org/abs/2201.05159. Ben Shneiderman. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered ai systems. *ACM Trans. Interact. Intell. Syst.*, 10(4), oct 2020. ISSN 2160-6455. doi: 10.1145/3419764. URL https://doi.org/10.1145/3419764. Uriel Singer, Adam Polyak, Thomas Hayes, Xiaoyue Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. Make-a-video: Text-to-video generation without text-video data. *ArXiv*, abs/2209.14792, 2022. Aviya Skowron and Stella Biderman. Eleutherai's thoughts on the eu ai act, Jul 2023. URL https://blog.eleuther.ai/eu-aia/. How we are supporting open source and open science in the EU AI Act. Benjamin LW Sobel. Artificial intelligence's fair use crisis. *Colum. JL & Arts*, 41:45, 2017. Irene Solaiman. The gradient of generative ai release: Methods and considerations. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 111–122, 2023. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. Release strategies and the social impacts of language models. *ArXiv*, abs/1908.09203, 2019. Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Hal Daumé III, Jesse Dodge, Ellie Evans, Sara Hooker, Yacine Jernite, Alexandra Sasha Luccioni, Alberto Lusoli, Margaret Mitchell, Jessica Newman, Marie-Therese Png, Andrew Strait, and Apostol Vassilev. Evaluating the social impact of generative ai systems in systems and society, 2023. Aaron Springer and Steve Whittaker. Progressive disclosure: Designing for effective transparency, 2018. Aarohi Srivastava, Abhinav Rastogi, Abhishek B Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Annasaheb Rahane, Anantharaman S. Iyer, Anders Johan Andreassen, Andrea Santilli, Andreas Stuhlmuller, Andrew M. Dai, Andrew D. La, Andrew Kyle Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, Bridget R. Roberts, Bao Sheng Loe, Barret Zoph, Bartlomiej Bojanowski, Batuhan Ozyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Stephen Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D.
Manning, Christopher Potts, Cindy Tatiana Ramirez, Clara Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Daniel H Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel González, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, D. Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth P. Donoway, Ellie Pavlick, Emanuele Rodolà, Emma FC Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fan Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, François Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Han Sol Kim, Hannah Rashkin, Hanna Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hubert Wong, Ian Aik-Soon Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, John Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, J. Brooker Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Narain Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jenni Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Oluwadara Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Jane W Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jorg Frohberg, Jos Rozen, José Hernández-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Ochieng' Omondi, Kory Wallace Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Luca Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lutfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Madotto Andrea, Maheen Saleem Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, M Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew Leavitt, Matthias Hagen, Mátyás Schubert, Medina Baitemirova, Melissa Arnaud, Melvin Andrew McElrath, Michael A. Yee, Michael Cohen, Mi Gu, Michael I. Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Monica Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, T MukundVarma, Nanyun Peng, Nathan Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas S. Roberts, Nicholas Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter W.
Chang, Peter Eckersley, Phu Mon Htut, Pi-Bei Hwang, P. Milkowski, Piyush S. Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan Le Bras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib J. Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Sam Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Sameh Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo hwan Lee, Spencer Bradley Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Rose Biderman, Stephanie C. Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq A. Ali, Tatsuo Hashimoto, Te-Lin Wu, Theo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, T. N. Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler O. Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay V. Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, W Vossen, Xiang Ren, Xiaoyu F Tong, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yang Song, Yasaman Bahri, Ye Ji Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yu Hou, Yushi Bai, Zachary Seid, Zhao Xinran, Zhuoye Zhao, Zi Fu Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *ArXiv*, abs/2206.04615, 2022. Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, and Li Fei-Fei. Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In *Conference on Robot Learning (CoRL)*, 2021. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. *arXiv preprint arXiv:1906.02243*, 2019. Elham Tabassi. Artificial intelligence risk management framework (ai rmf 1.0), 2023a. URL https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=936225. Elham Tabassi. Artificial intelligence risk management framework (ai rmf 1.0), 2023b. URL https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=936225. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023. Transparency International. Corruption perception index 2022, 2023. URL http://www.transparency.org/cpi. UNDP.
Human development report 2021-22. 2022. URL http://report.hdr.undp.org. Jai Vipra and Anton Korinek. Market concentration implications of foundation models: The invisible hand of chatgpt. *The Brookings Institution*, 2023. URL https://www.brookings.edu/articles/ market-concentration-implications-of-foundation-models-the-invisible-hand-of-chatgpt. Jai Vipra and Sarah Myers West. Computational power and ai, Sep 2023. URL https://ainowinstitute. org/publication/policy/compute-and-ai. Jai Vipra and Sarah Myers West. Comments on the business practices of cloud computing providers. Technical report, AI Now Institute, June 2023. URL https://ainowinstitute.org/wp-content/uploads/ 2023/06/Cloud-RFI-Submission_06222023.pdf. Filed before the Federal Trade Commission, Docket ID FTC-2023-0028. Caitlin Vogus and Emma Llansó. Making transparency meaningful: A framework for policymakers. *Center for Democracy and Technology*, 2021. URL https://cdt.org/insights/ report-making-transparency-meaningful-a-framework-for-policymakers/. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019. Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, and Bo Li. Decodingtrust: A comprehensive assessment of trustworthiness in gpt models, 2023a. Qiaosi Wang, Michael Madaio, Shaun Kane, Shivani Kapania, Michael Terry, and Lauren Wilcox. Designing responsible ai: Adaptations of ux practice to meet responsible ai challenges. In *Proceedings of the 2023 CHI* Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023b. Association for Computing Machinery. ISBN 9781450394215. doi: 10.1145/3544548.3581278. URL https://doi.org/ 10.1145/3544548.3581278. WEF. The presidio recommendations on responsible generative ai, 2023. URL https://www3.weforum. org/docs/WEF_Presidio_Recommendations_on_Responsible_Generative_AI_2023.pdf. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021. Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, and Iason Gabriel. Taxonomy of risks posed by language models. In *2022* ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, pp. 214–229, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533088. URL https://doi.org/10.1145/3531146.3533088. Laura Weidinger, Maribeth Rauh, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Juan MateosGarcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, Iason Gabriel, Verena Rieser, and William S. Isaac. Sociotechnical safety evaluation of generative ai systems. 2023. URL https://arxiv. org/abs/2310.11986. Sarah Myers West. Data capitalism: Redefining the logics of surveillance and privacy. 
*Business & society*, 58(1):20–41, 2019. Andrew B Whitford and Karen Wong. Political and social foundations for environmental sustainability. Political Research Quarterly, 62(1):190–204, 2009. David Gray Widder and Richmond Wong. Thinking upstream: Ethics and policy opportunities in ai supply chains, 2023. David Gray Widder, Sarah West, and Meredith Whittaker. Open (for business): Big tech, concentrated power, and the political economy of open ai. 2023. Amy Winograd. Loose-lipped large language models spill your secrets: The privacy implications of large language models. *Harvard Journal of Law and Technology*, 36(2), 2023. Monika Zalnieriute. "transparency-washing" in the digital age: A corporate agenda of procedural fetishism. Technical report, 2021. URL http://hdl.handle.net/11159/468588. Gabriela Zanfir-Fortuna. How data protection authorities are de facto regulating generative ai, 2023. URL https://fpf.org/blog/how-data-protection-authorities-are-de-facto-regulating-generative-ai/. Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie Sakhaee, et al. The ai index 2022 annual report. ai index steering committee. *Stanford Institute for Human-Centered AI, Stanford University*, pp. 123, 2022a. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. OPT: Open pre-trained transformer language models. *arXiv*, 2022b. Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. *ArXiv*, abs/1909.08593, 2019. URL https://api.semanticscholar.org/CorpusID:202660943. Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks. Representation engineering: A top-down approach to ai transparency, 2023.

## A Indicators

1. Upstream → Data → **Data size**
- Definition: For the data used in building the model, is the data size disclosed?
- Notes: Data size should be reported in appropriate units (e.g. bytes, words, tokens, images, frames) and broken down by modality. Data size should be reported to a precision of one significant figure (e.g. 4 trillion tokens, 200 thousand images). No form of decomposition into data phases is required.
- References: Bender & Friedman (2018a), Gebru et al. (2021)

2. Upstream → Data → **Data sources**
- Definition: For all data used in building the model, are the data sources disclosed?
- Notes: To receive this point, a meaningful decomposition of sources must be listed in an understandable way (e.g. named URLs/domains/databases/data providers). It does not suffice to say data is "sourced from the Internet" or comes from "licensed sources".
- References: Gebru et al. (2021), Hutchinson et al. (2021)

3. Upstream → Data → **Data creators**
- Definition: For all data used in building the model, is there some characterization of the people who created the data?
- Notes: While information about data creators may not be easily discernible for some data scraped from the web, the general sources (URLs/domains) should be listed, and, for other data that is bought, licensed, or collected, a reasonable attempt at characterizing the underlying people who provided the data is required to receive this point. The relevant properties of people can vary depending on context: for example, relevant properties could include demographic information like fraction of Black individuals contributing to the dataset, geographic information like fraction of European individuals contributing to the dataset, language information like fraction of L1 English speakers, or occupational information like the fraction of professional artists.
- References: Gebru et al. (2021), Hutchinson et al. (2021)

4. Upstream → Data → **Data source selection**
- Definition: Are the selection protocols for including and excluding data sources disclosed?
- Notes: Selection protocols refer to procedures used to choose which datasets or subsets of datasets will be used to build a model. We will award this point even if the selection protocols are non-exhaustive.
- References: Gebru et al. (2021), Hutchinson et al. (2021)

5. Upstream → Data → **Data curation**
- Definition: For all data sources, are the curation protocols for those data sources disclosed?
- Notes: Curation protocols refer to steps taken to further modify data sources, such as procedures to manage, annotate, and organize data. The aims of curation might include improving the quality, relevance, and representativeness of the data. We will award this point if the developer reports that it does not perform any further curation beyond the data sources.
- References: Gebru et al. (2021), Hutchinson et al. (2021)

6. Upstream → Data → **Data augmentation**
- Definition: Are any steps the developer takes to augment its data sources disclosed?
- Notes: Such steps might include augmenting data sources with synthetic data. We will award this point if the developer reports that it does not take any steps to augment its data.
- References: Gebru et al. (2021), Hutchinson et al. (2021)

7. Upstream → Data → **Harmful data filtration**
- Definition: If data is filtered to remove harmful content, is there a description of the associated filter?
- Notes: Such harmful content might relate to violence or child sexual abuse material. We will award this point if the developer reports that it does not perform any harmful data filtration.
- References: Dodge et al. (2021), Longpre et al. (2023)

8. Upstream → Data → **Copyrighted data**
- Definition: For all data used in building the model, is the associated copyright status disclosed?
- Notes: To receive this point, the copyright status (e.g. copyrighted, public domain) must relate to some decomposition of the data. We will award this point if there is some meaningful decomposition of the data, even if the decomposition is insufficient to receive the Data Creators point or if the disclosure is not comprehensive relative to legal copyright standards.
- References: Bandy & Vincent (2021), Cooper et al. (2023)

9. Upstream → Data → **Data license**
- Definition: For all data used in building the model, is the associated license status disclosed?
- Notes: To receive this point, the license status must relate to some decomposition of the data. We will award this point if there is some meaningful decomposition of the data, even if the decomposition is insufficient to receive the Data Creators point.
- References: Bandy & Vincent (2021), Cooper et al. (2023) ## 10. Upstream → Data → Personal Information In Data - Definition: For all data used in building the model, is the inclusion or exclusion of personal information in that data disclosed? - Notes: To receive this point, the disclosure of personal information must relate to some decomposition of the data. We will award this point if there is some meaningful decomposition of the data, even if the decomposition is insufficient to receive the Data Creators point. Additionally, we will award this point if the developer reports the inclusion of personal information, independent of if and how they mitigate related privacy concerns. - References: West (2019), Brown et al. (2022) ## 11. Upstream → Data Labor → Use Of Human Labor - Definition: Are the phases of the data pipeline where human labor is involved disclosed? - Notes: Phases of the data pipeline that involve human labor include activities and tasks performed by people to collect, annotate, clean, or validate data. This indicator is inclusive of all data that is created by or on behalf of the developer. We will award this point if the developer gives a reasonable best-effort description of the use of human labor in their data pipeline. - References: Kittur et al. (2013), Dzieza (2023) ## 12. Upstream → Data Labor → Employment Of Data Laborers - Definition: Is the organization that directly employs the people involved in data labor disclosed for each phase of the data pipeline? - Notes: Phases of the data pipeline that involve human labor include activities and tasks performed by people to collect, annotate, clean, or validate data. This indicator is inclusive of all data that is created by or on behalf of the developer. We will award this point if the developer provides the name of the organization that employs data laborers, even if other details about the employment relationship are not disclosed. - References: Kittur et al. (2013), Dzieza (2023) 13. Upstream → Data labor → **Geographic distribution of data laborers** - Definition: Is geographic information regarding the people involved in data labor disclosed for each phase of the data pipeline? - Notes: This indicator is inclusive of all data that is created by or on behalf of the developer. We will award this point if the developer gives a reasonable best-effort description of the geographic distribution of labor at the country-level. - References: Hao & Seetharaman (2023), Gray & Suri (2019a) ## 14. Upstream → Data Labor → Wages - Definition: Are the wages for people who perform data labor disclosed? - Notes: This indicator is inclusive of data labor at all points of the model development process, such as training data annotation or red teaming data used to control the model, and of all data that is created by or on behalf of the developer. We will award this point if the developer reports that it does not compensate workers. - References: Kittur et al. (2013), Dzieza (2023) 15. Upstream → Data labor → **Instructions for creating data** - Definition: Are the instructions given to people who perform data labor disclosed? - Notes: This indicator is inclusive of all data that is created by or on behalf of the developer. We will award this point if the developer makes a reasonable best-effort attempt to disclose instructions given to people who create data used to build the model for the bulk of the data phases involving human labor. - References: Sambasivan et al. (2021), Kittur et al. (2013) 16.
Upstream → Data labor → **Labor protections** - Definition: Are the labor protections for people who perform data labor disclosed? - Notes: This indicator is inclusive of data labor at all points of the model development process, such as training data annotation or red teaming data used to control the model. It is also inclusive of all data that is created by or on behalf of the developer. As an example, labor protections might include protocols to reduce the harm to workers' mental health stemming from exposure to violent content when annotating training data. We will award this point if the developer reports that it does not protect workers or if it does not use data laborers and therefore has no labor protections. - References: Crawford (2021), Gray & Suri (2019a) 17. Upstream → Data labor → **Third party partners** - Definition: Are the third parties who were or are involved in the development of the model disclosed? - Notes: This indicator is inclusive of partnerships that go beyond data labor as there may be third party partners at various stages in the model development process. We will award this point if the developer reports that it was the sole entity involved in the development of the model. - References: Crawford (2021), Gray & Suri (2019a) 18. Upstream → Data access → **Queryable external data access** - Definition: Are external entities provided with queryable access to the data used to build the model? - Notes: We will award this point for any reasonable mechanism for providing access: direct access to the data, an interface to query the data, a developer-mediated access program where developers can inspect requests, etc. Developers may receive this point even if there are rate-limits on the number of queries permitted to an external entity and restrictions on which external entities are given access, insofar as these limits and restrictions are transparent and ensure a reasonable amount of external access. We may accept justifications for prohibiting queries of specific parts of the data. - References: Gebru et al. (2021), Piktus et al. (2023) 19. Upstream → Data access → **Direct external data access** - Definition: Are external entities provided with direct access to the data used to build the model? - Notes: We will award this point if external entities can directly access the data without any form of gating from the developer. With that said, we may award this point if the developer provides justifications for prohibiting access to specific parts of the data or to unauthorized external entities. - References: Gebru et al. (2021), Piktus et al. (2023) ## 20. Upstream → Compute → Compute Usage - Definition: Is the compute required for building the model disclosed? - Notes: Compute should be reported in appropriate units, which most often will be floating point operations (FLOPS). Compute should be reported to a precision of one significant figure (e.g. 5 × 10^25 FLOPS). We will award this point even if there is no decomposition of the reported compute usage into compute phases, but it should be clear whether the reported compute usage is for a single model run or includes additional runs, or hyperparameter tuning, or training other models like reward models, or other steps in the model development process that necessitate compute expenditure. - References: Henderson et al. (2020), Strubell et al. (2019) 21. Upstream → Compute → **Development duration** - Definition: Is the amount of time required to build the model disclosed?
- Notes: The continuous duration of time required to build the model should be reported in weeks, days, or hours to a precision of one significant figure (e.g. 3 weeks). No form of decomposition into phases of building the model is required for this indicator, but it should be clear what the duration refers to (e.g. training the model, training and subsequent evaluation and red teaming). - References: Sevilla et al. (2022), Hoffmann et al. (2022b) ## 22. Upstream → Compute → Compute Hardware - Definition: For the primary hardware used to build the model, is the amount and type of hardware disclosed? - Notes: In most cases, this indicator will be satisfied by information regarding the number and type of GPUs or TPUs used to train the model. The number of hardware units should be reported to a precision of one significant figure (e.g. 800 NVIDIA H100 GPUs). We will not award this point if (i) the training hardware generally used by the developer is disclosed, but the specific hardware for the given model is not, or (ii) the training hardware is disclosed, but the amount of hardware is not. We will award this point even if information about the interconnects between hardware units is not disclosed. - References: Sevilla et al. (2022), Hoffmann et al. (2022b) 23. Upstream → Compute → **Hardware owner** - Definition: For the primary hardware used in building the model, is the owner of the hardware disclosed? - Notes: For example, the hardware owner may be the model developer in the case of a self-owned cluster, a cloud provider like Microsoft Azure, Google Cloud Platform, or Amazon Web Services, or a national supercomputer. In the event that hardware is owned by multiple sources or is highly decentralized, we will award this point if a developer makes a reasonable effort to describe the distribution of hardware owners. - References: Sevilla et al. (2022), Hoffmann et al. (2022b) ## 24. Upstream → Compute → Energy Usage - Definition: Is the amount of energy expended in building the model disclosed? - Notes: Energy usage should be reported in appropriate units, which most often will be megawatt-hours (MWh). Energy usage should be reported to a precision of one significant figure (e.g. 500 MWh). No form of decomposition into compute phases is required, but it should be clear whether the reported energy usage is for a single model run or includes additional runs, or hyperparameter tuning, or training other models like reward models, or other steps in the model development process that necessitate energy usage. - References: Lacoste et al. (2019), Patterson et al. (2021) 25. Upstream → Compute → **Carbon emissions** - Definition: Is the amount of carbon emitted (associated with the energy used) in building the model disclosed? - Notes: Emissions should be reported in appropriate units, which most often will be tons of carbon dioxide emitted (tCO2). Emissions should be reported to a precision of one significant figure (e.g. 500 tCO2). No form of decomposition into compute phases is required, but it should be clear whether the reported emissions are for a single model run or include additional runs, or hyperparameter tuning, or training other models like reward models, or other steps in the model development process that generate emissions. - References: Lacoste et al. (2019), Patterson et al. (2021) 26. Upstream → Compute → **Broader environmental impact** - Definition: Are any broader environmental impacts from building the model besides carbon emissions disclosed?
- Notes: While the most direct environmental impact of building a foundation model is the energy used and, therefore, the potential carbon emissions, there may be other environmental impacts. For example, these may include the use of other resources such as water for cooling data centers or metals for producing specialized hardware. We recognize that there does not exist an authoritative or consensus list of broader environmental factors. For this reason, we will award this point if there is a meaningful, though potentially incomplete, discussion of broader environmental impact. - References: Luccioni & Hernández-García (2023), Strubell et al. (2019) 27. Upstream → Methods → **Model stages** - Definition: Are all stages in the model development process disclosed? - Notes: Stages refer to each identifiable step that constitutes a substantive change to the model during the model building process. We recognize that different developers may use different terminology for these stages, or conceptualize the stages differently. We will award this point if there is a clear and complete description of these stages. - References: Mitchell et al. (2019), Chung et al. (2022) 28. Upstream → Methods → **Model objectives** - Definition: For all stages that are described, is there a clear description of the associated learning objectives or a clear characterization of the nature of this update to the model? - Notes: We recognize that different developers may use different terminology for these stages, or conceptualize the stages differently. We will award this point if there is a clear description of the update to the model related to each stage, whether that is the intent of the stage (e.g. making the model less harmful), a mechanistic characterization (e.g. minimizing a specific loss function), or an empirical assessment (e.g. evaluation results conducted before and after the stage). - References: Mitchell et al. (2019), Chung et al. (2022) 29. Upstream → Methods → **Core frameworks** - Definition: Are the core frameworks used for model development disclosed? - Notes: Examples of core frameworks include TensorFlow, PyTorch, JAX, Hugging Face Transformers, Seqio, T5X, Keras, SciKit, and Triton. If there are significant internal frameworks, there should be some description of their function and/or a reasonably similar publicly-available analogue. We recognize that there does not exist an authoritative or consensus list of core frameworks. For this reason, we will award this point if there is a meaningful, though potentially incomplete, list of major frameworks for the first version of the index. - References: Mitchell et al. (2019), Chung et al. (2022) ## 30. Upstream → Methods → Additional Dependencies - Definition: Are any dependencies required to build the model disclosed besides data, compute, and code? - Notes: For example, if the model depends on an external search engine, programmable APIs, or tools, this should be disclosed. We recognize that there is not widespread consensus regarding what constitutes key dependencies beyond the data, compute, and code. We will award this point only if developers give a reasonable best-effort description of any additional dependencies or make clear that no additional dependencies are required. - References: Lukas et al. (2023), Kim et al. (2023) 31. Upstream → Data Mitigations → **Mitigations for privacy** - Definition: Are any steps the developer takes to mitigate the presence of PII in the data disclosed?
- Notes: Such steps might include identifying personal information in the training data, filtering specific datasets to remove personal information, and reducing the likelihood that models will output personal information. We will award this point if the developer reports that it does not take steps to mitigate the presence of PII in the data. - References: Kandpal et al. (2022), Cooper et al. (2023) 32. Upstream → Data Mitigations → **Mitigations for copyright** - Definition: Are any steps the developer takes to mitigate the presence of copyrighted information in the data disclosed? - Notes: Such steps might include identifying copyrighted data, filtering specific datasets to remove copyrighted data, and reducing the likelihood that models will output copyrighted information. We will award this point if the developer reports that it does not take steps to mitigate the presence of copyrighted information in the data. - References: Bandy & Vincent (2021), Cooper et al. (2023) 33. Model → Model basics → **Input modality** - Definition: Are the input modalities for the model disclosed? - Notes: Input modalities refer to the types or formats of information that the model can accept as input. Examples of input modalities include text, image, audio, video, tables, graphs. - References: Mitchell et al. (2019), Crisan et al. (2022) 34. Model → Model basics → **Output modality** - Definition: Are the output modalities for the model disclosed? - Notes: Output modalities refer to the types or formats of information that the model can produce as output. Examples of output modalities include text, image, audio, video, tables, graphs. - References: Mitchell et al. (2019), Crisan et al. (2022) 35. Model → Model basics → **Model components** - Definition: Are all components of the model disclosed? - Notes: Model components refer to distinct and identifiable parts of the model. We recognize that different developers may use different terminology for model components, or conceptualize components differently. Examples include: (i) For a text-to-image model, components could refer to a text encoder and an image encoder, which may have been trained separately. (ii) For a retrieval-augmented model, components could refer to a separate retriever module. - References: Mitchell et al. (2019), Crisan et al. (2022) 36. Model → Model basics → **Model size** - Definition: For all components of the model, is the associated model size disclosed? - Notes: This information should be reported in appropriate units, which generally is the number of model parameters, broken down by named component. Model size should be reported to a precision of one significant figure (e.g. 500 billion parameters for text encoder, 20 billion parameters for image encoder). - References: Mitchell et al. (2019), Crisan et al. (2022) 37. Model → Model basics → **Model architecture** - Definition: Is the model architecture disclosed? - Notes: Model architecture is the overall structure and organization of a foundation model, which includes the way in which any disclosed components are integrated and how data moves through the model during training or inference. We recognize that different developers may use different terminology for model architecture, or conceptualize the architecture differently. We will award this point for any clear, though potentially incomplete, description of the model architecture. - References: Mitchell et al. (2019), Crisan et al. (2022) ## 38.
Model → Model Basics → Centralized Model Documentation - Definition: Is key information about the model included in a centralized artifact such as a model card? - Notes: We recognize that different developers may share this information through different types of documentation, such as a system card or several clearly interrelated documents. We will award this point for the disclosure of any such centralized artifact that provides key information typically included in a model card, though the artifact may be longer-form than a standard model card (e.g. a technical report). - References: Mitchell et al. (2019), Crisan et al. (2022) ## 39. Model → Model Access → External Model Access Protocol - Definition: Is a protocol for granting external entities access to the model disclosed? - Notes: A model access protocol refers to the steps, requirements, and considerations involved in granting authorized model access to external entities. We will award this point if the developer discloses key details of its protocol, including (i) where external entities can request access (e.g. via an access request form); (ii) explicit criteria for selecting external entities; and (iii) a transparent decision on whether access has been granted within a specified, reasonable period of time. - References: Solaiman (2023), Shevlane (2022) ## 40. Model → Model Access → Blackbox External Model Access - Definition: Is black box model access provided to external entities? - Notes: Black box model access refers to the ability to query the model with inputs and receive outputs, potentially without further access. Examples of external entities that might be granted access include researchers, third-party auditors, and regulators. We will award this point for any reasonable access level: direct access to the model weights, an interface to query the model, a developer-mediated access program where developers can inspect requests, etc. Developers may receive this point even if there are rate-limits on the number of queries permitted to an external entity and restrictions on the external entities that are permitted access, insofar as these limits and restrictions are transparent. - References: Solaiman (2023), Shevlane (2022) 41. Model → Model access → **Full external model access** - Definition: Is full model access provided to external entities? - Notes: Full model access refers to the ability to access the model via the release of model weights. Developers may receive this point even if there are some restrictions on the external entities that are permitted access (e.g. geographic restrictions), insofar as these restrictions are transparent (e.g. via some high-level description of who has been granted access to the foundation model). - References: Solaiman (2023), Shevlane (2022) ## 42. Model → Capabilities → Capabilities Description - Definition: Are the model's capabilities described? - Notes: Capabilities refer to the specific and distinctive functions that the model can perform. We recognize that different developers may use different terminology for capabilities, or conceptualize capabilities differently. We will award this point for any clear, but potentially incomplete, description of the multiple capabilities. - References: Srivastava et al. (2022), Liang et al. (2022c) ## 43. Model → Capabilities → Capabilities Demonstration - Definition: Are the model's capabilities demonstrated? 
- Notes: Demonstrations refer to illustrative examples or other forms of showing the model's capabilities that are legible or understandable for the general public, without requiring specific technical expertise. We recognize that different developers may use different terminology for capabilities, or conceptualize capabilities differently. We will award this point for clear demonstrations of multiple capabilities. - References: Srivastava et al. (2022), Liang et al. (2022c) ## 44. Model → Capabilities → Evaluation Of Capabilities - Definition: Are the model's capabilities rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model? - Notes: Rigorous evaluations refer to precise quantifications of the model's behavior in relation to its capabilities. We recognize that capabilities may not perfectly align with evaluations, and that different developers may associate capabilities with evaluations differently. We will award this point for clear evaluations of multiple capabilities. For example, this may include evaluations of world knowledge, reasoning, state tracking or other such proficiencies. Or it may include the measurement of average performance (e.g. accuracy, F1) on benchmarks for specific tasks (e.g. text summarization, image captioning). We note that evaluations on standard broad-coverage benchmarks are likely to suffice for this indicator, though they may not if the model's capabilities are presented as especially unusual such that standard evaluations will not suffice. - References: Srivastava et al. (2022), Liang et al. (2022c) 45. Model → Capabilities → **External reproducibility of capabilities evaluation** - Definition: Are the evaluations of the model's capabilities reproducible by external entities? - Notes: For an evaluation to be reproducible by an external entity, we mean that the associated data is either (i) publicly available or (ii) described sufficiently such that a reasonable facsimile can be constructed by an external entity. In addition, the evaluation protocol should be sufficiently described such that if the evaluation is reproduced, any discrepancies with the developer's results can be resolved. We recognize that there does not exist an authoritative or consensus standard for what is required for an evaluation to be deemed externally reproducible. Evaluations on standard benchmarks are assumed to be sufficiently reproducible for the purposes of this index. We will award this point for reproducibility of multiple disclosed evaluations. In the event that an evaluation is not reproducible, a justification by the model developer for why it is not possible for the evaluation to be made reproducible may be sufficient to score this point. - References: Kapoor & Narayanan (2023), Liang et al. (2022c) ## 46. Model → Capabilities → Third Party Capabilities Evaluation - Definition: Are the model's capabilities evaluated by third parties? - Notes: By third party, we mean entities that are significantly or fully independent of the developer. We will award this point if (i) a third party has conducted an evaluation of model capabilities, (ii) the results of this evaluation are publicly available, and (iii) these results are disclosed or referred to in the developer's materials. - References: Raji et al. (2022b), Liang et al. (2022c) 47. Model → Limitations → **Limitations description** - Definition: Are the model's limitations disclosed? 
- Notes: Limitations refer to the specific and distinctive functions that the model cannot perform (e.g. the model cannot answer questions about current events as it only contains data up to a certain time cutoff, the model is not very capable when it comes to a specific application). We recognize that different developers may use different terminology for limitations, or conceptualize limitations differently. We will award this point for any clear, but potentially incomplete, description of multiple limitations. - References: Raji et al. (2022a), Liang et al. (2022c) ## 48. Model → Limitations → Limitations Demonstration - Definition: Are the model's limitations demonstrated? - Notes: Demonstrations refer to illustrative examples or other forms of showing the limitations that are legible or understandable for the general public, without requiring specific technical expertise. We recognize that different developers may use different terminology for limitations, or conceptualize the limitations differently. We will award this point for clear demonstrations of multiple limitations. - References: Raji et al. (2022a), Liang et al. (2022c) 49. Model → Limitations → **Third party evaluation of limitations** - Definition: Can the model's limitations be evaluated by third parties? - Notes: By third parties, we mean entities that are significantly or fully independent of the model developers. In contrast to the third party evaluation indicators for capabilities and risks, we will award this point if third party evaluations are possible even if no third party has yet conducted them. Such evaluations are possible if, for example, the model is deployed via an API (or with open weights) and there are no restrictions on evaluating limitations (e.g. in the usage policy). - References: Raji et al. (2022b), Liang et al. (2022c) 50. Model → Risks → **Risks description** - Definition: Are the model's risks disclosed? - Notes: Risks refer to possible negative consequences or undesirable outcomes that can arise from the model's deployment and usage. This indicator requires disclosure of risks that may arise in the event of both (i) intentional (though possibly careless) use, such as bias or hallucinations and (ii) malicious use, such as fraud or disinformation. We recognize that different developers may use different terminology for risks, or conceptualize risks differently. We will award this point for any clear, but potentially incomplete, description of multiple risks. - References: Solaiman et al. (2023), Weidinger et al. (2021) ## 51. Model → Risks → Risks Demonstration - Definition: Are the model's risks demonstrated? - Notes: Demonstrations refer to illustrative examples or other forms of showing the risks that are legible or understandable for the general public, without requiring specific technical expertise. This indicator requires demonstration of risks that may arise in the event of both (i) intentional (though possibly careless) use, such as biases or hallucinations and (ii) malicious use, such as fraud or disinformation. We recognize that different developers may use different terminology for risks, or conceptualize risks differently. We will award this point for clear demonstrations of multiple risks. - References: Solaiman et al. (2023), Weidinger et al. (2021) ## 52. 
Model → Risks → Unintentional Harm Evaluation - Definition: Are the model's risks related to unintentional harm rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model? - Notes: Rigorous evaluations refer to precise quantifications of the model's behavior in relation to such risks. Unintentional harms include bias, toxicity, and issues relating to fairness. We recognize that unintentional harms may not perfectly align with risk evaluations, and that different developers may associate risks with evaluations differently. We will award this point for clear evaluations of multiple such risks. We note that evaluations on standard broad-coverage benchmarks are likely to suffice for this indicator, though they may not if the model's risks related to unintentional harm are presented as especially unusual or severe. - References: Solaiman et al. (2023), Weidinger et al. (2021) ## 53. Model → Risks → External Reproducibility Of Unintentional Harm Evaluation - Definition: Are the evaluations of the model's risks related to unintentional harm reproducible by external entities? - Notes: For an evaluation to be reproducible by an external entity, we mean that the associated data is either (i) publicly available or (ii) described sufficiently such that a reasonable facsimile can be constructed by the external entity. In addition, the evaluation protocol should be sufficiently described such that if the evaluation is reproduced, any discrepancies with the developer's results can be resolved. We recognize that there does not exist an authoritative or consensus standard for what is required for an evaluation to be deemed externally reproducible. Evaluations on standard benchmarks are assumed to be sufficiently reproducible for the purposes of this index. We will award this point for reproducibility of multiple disclosed evaluations. In the event that an evaluation is not reproducible, a justification by the developer for why it is not possible for the evaluation to be made reproducible may suffice. - References: Kapoor & Narayanan (2023), Weidinger et al. (2021) ## 54. Model → Risks → Intentional Harm Evaluation - Definition: Are the model's risks related to intentional harm rigorously evaluated, with the results of these evaluations reported prior to or concurrent with the initial release of the model? - Notes: Rigorous evaluations refer to precise quantifications of the model's behavior in relation to such risks. Intentional harms include fraud, disinformation, scams, cybersecurity attacks, designing weapons or pathogens, and uses of the model for illegal purposes. We recognize that intentional harms may not perfectly align with risk evaluations, and that different developers may associate risks with evaluations differently. We will award this point for clear evaluations of multiple such risks. We note that evaluations on standard broad-coverage benchmarks are likely to suffice for this indicator, though they may not if the model's risks related to intentional harm are presented as especially unusual or severe. - References: Solaiman et al. (2023), Weidinger et al. (2021) ## 55. Model → Risks → External Reproducibility Of Intentional Harm Evaluation - Definition: Are the evaluations of the model's risks related to intentional harm reproducible by external entities?
- Notes: For an evaluation to be reproducible by an external entity, we mean that the associated data is either (i) publicly available or (ii) described sufficiently such that a reasonable facsimile can be constructed by the external entity. In addition, the evaluation protocol should be sufficiently described such that if the evaluation is reproduced, any discrepancies with the developer's results can be resolved. We recognize that there does not exist an authoritative or consensus standard for what is required for an evaluation to be deemed externally reproducible. Evaluations on standard benchmarks are assumed to be sufficiently reproducible for the purposes of this index. We will award this point for reproducibility of multiple disclosed evaluations. In the event that an evaluation is not reproducible, a justification by the model developer for why it is not possible for the evaluation to be made reproducible may suffice. - References: Kapoor & Narayanan (2023), Weidinger et al. (2021) 56. Model → Risks → **Third party risks evaluation** - Definition: Are the model's risks evaluated by third parties? - Notes: By third party, we mean entities that are significantly or fully independent of the developer. A third party risk evaluation might involve the developer allowing a third party to choose a methodology for evaluating risks that differs from that of the developer. We will award this point if (i) a third party has conducted an evaluation of model risks, (ii) the results of this evaluation are publicly available, and (iii) these results are disclosed or referred to in the developer's materials. If the results are not made public (but are disclosed to have been conducted) and/or the results are not discoverable in the developer's materials, we will not award this point. We may accept a justification from either the third party or the developer for why part of the evaluation is not disclosed in relation to risks. - References: Raji et al. (2022b), Weidinger et al. (2021) 57. Model → Model Mitigations → **Mitigations description** - Definition: Are the model mitigations disclosed? - Notes: By model mitigations, we refer to interventions implemented by the developer at the level of the model to reduce the likelihood and/or the severity of the model's risks. We recognize that different developers may use different terminology for mitigations, or conceptualize mitigations differently. We will award this point for any clear, but potentially incomplete, description of multiple mitigations associated with the model's risks. Alternatively, we will award this point if the developer reports that it does not mitigate risk. - References: Solaiman et al. (2023), Weidinger et al. (2021) 58. Model → Model Mitigations → **Mitigations demonstration** - Definition: Are the model mitigations demonstrated? - Notes: Demonstrations refer to illustrative examples or other forms of showing the mitigations that are legible or understandable for the general public, without requiring specific technical expertise. We recognize that different developers may use different terminology for mitigations, or conceptualize mitigations differently. We will award this point for clear demonstrations of multiple mitigations. We will also award this point if the developer reports that it does not mitigate the risks associated with the model. - References: Solaiman et al. (2023), Weidinger et al. (2021) ## 59. 
Model → Model Mitigations → Mitigations Evaluation - Definition: Are the model mitigations rigorously evaluated, with the results of these evaluations reported? - Notes: Rigorous evaluations refer to precise quantifications of the model's behavior in relation to the mitigations associated with its risks. We will award this point for clear evaluations of multiple mitigations. - References: Huang et al. (2023), Weidinger et al. (2021) 60. Model → Model Mitigations → **External reproducibility of mitigations evaluation** - Definition: Are the model mitigation evaluations reproducible by external entities? - Notes: For an evaluation to be reproducible by an external entity, we mean that the associated data is either (i) publicly available or (ii) described sufficiently such that a reasonable facsimile can be constructed by the external entity. In addition, the evaluation protocol should be sufficiently described such that if the evaluation is reproduced, any discrepancies with the developer's results can be resolved. In the case of mitigations evaluations, this will usually involve details about a comparison to some baseline, which may be a different, unmitigated version of the model. We recognize that there does not exist an authoritative or consensus standard for what is required for an evaluation to be deemed externally reproducible. We will award this point for reproducibility of multiple disclosed evaluations. In the event that an evaluation is not reproducible, a justification by the model developer for why it is not possible for the evaluation to be made reproducible may suffice. - References: Kapoor & Narayanan (2023), Weidinger et al. (2021) 61. Model → Model Mitigations → **Third party mitigations evaluation** - Definition: Can the model mitigations be evaluated by third parties? - Notes: By third party, we mean entities that are significantly or fully independent of the model developers. This indicator assesses whether it is possible for third parties to assess mitigations, which is not restricted to the methods the developer uses to assess mitigations. In contrast to the third party evaluation indicators for capabilities and risks, we will award this point if third party evaluations are possible even if no third party has yet conducted them. - References: Raji et al. (2022b), Weidinger et al. (2021) ## 62. Model → Trustworthiness → Trustworthiness Evaluation - Definition: Is the trustworthiness of the model rigorously evaluated, with the results of these evaluations disclosed? - Notes: Rigorous evaluations refer to precise quantifications of the model's behavior in relation to its trustworthiness. For example, this may include evaluations of the model's robustness or reliability, its uncertainty, calibration, or causality, or its interpretability or explainability. We recognize that trustworthiness may not perfectly align with evaluations, and that different developers may associate trustworthiness with evaluations differently. We will award this point for a clear evaluation of the trustworthiness of the model. - References: Brundage et al. (2020), Wang et al. (2023a) 63. Model → Trustworthiness → **External reproducibility of trustworthiness evaluation** - Definition: Are the trustworthiness evaluations reproducible by external entities? - Notes: For an evaluation to be reproducible by an external entity, we mean that the associated data is either (i) publicly available or (ii) described sufficiently such that a reasonable facsimile can be constructed by the external entity. 
In addition, the evaluation protocol should be sufficiently described such that if the evaluation is reproduced, any discrepancies with the developer's results can be resolved. We recognize that there does not exist an authoritative or consensus standard for what is required for an evaluation to be deemed externally reproducible. Evaluations on standard benchmarks are assumed to be sufficiently reproducible for the purposes of this index. We will award this point for reproducibility of at least one evaluation. In the event that an evaluation is not reproducible, we may accept a justification by the model developer for why it is not possible for the evaluation to be made reproducible. - References: Kapoor & Narayanan (2023), Shneiderman (2020) ## 64. Model → Inference → Inference Duration Evaluation - Definition: Is the time required for model inference disclosed for a clearly-specified task on a clearly-specified set of hardware? - Notes: The duration should be reported in seconds to a precision of one significant figure (e.g. 0.002 seconds). We recognize that no established standard exists for the standardized reporting of inference evaluation. Therefore, we permit the developer to specify the task and hardware setup, as long as both are disclosed. For example, the specific task might be generating 100,000 tokens as 5,000 sequences of length 20 and the fixed set of hardware might be 8 NVIDIA A100s. The hardware in this evaluation need not be the hardware the developer uses for inference if it in fact does any inference itself. - References: Reddi et al. (2020), Narayanan et al. (2023) ## 65. Model → Inference → Inference Compute Evaluation - Definition: Is the compute usage for model inference disclosed for a clearly-specified task on a clearly-specified set of hardware? - Notes: Compute usage for inference should be reported in FLOPS to a precision of one significant figure (e.g. 5 × 10^25 FLOPS). We recognize that no established standard exists for the standardized reporting of inference evaluation. Therefore, we permit the developer to specify the task and hardware setup, as long as both are clear. For example, the specific task might be generating 100k tokens as 5k sequences of length 20 and the fixed set of hardware might be 8 NVIDIA A100s. The hardware in this evaluation need not be the hardware the developer uses for inference if it in fact does any inference itself. - References: Reddi et al. (2020), Narayanan et al. (2023) 66. Downstream → Distribution → **Release decision-making** - Definition: Is the developer's protocol for deciding whether or not to release a model disclosed? - Notes: We recognize that the release of a foundation model falls along a spectrum, with many forms of partial release, and that different developers may conceptualize release differently. We will award this point for any clear protocol that discusses the decision-making process, including if the protocol is more general to the developer rather than the specific foundation model under consideration. - References: Solaiman (2023), Liang et al. (2022a) 67. Downstream → Distribution → **Release process** - Definition: Is a description of the process of how the model was released disclosed? - Notes: A description of the release process might include information about who received access to the model at what stage of the release of the model.
For example, a developer might conduct a staged release where it releases the model to a select group at first and subsequently makes the model more widely available. We recognize that the release of a foundation model falls along a spectrum, with many different forms of release, and that different developers may conceptualize release differently. We will award this point for any detailed discussion of the release process, including if the discussion is more general to the developer rather than the specific foundation model under consideration. - References: Solaiman (2023), Liang et al. (2022a) 68. Downstream → Distribution → **Distribution channels** - Definition: Are all distribution channels disclosed? - Notes: By distribution channel, we mean any pathway by which the model is made accessible to entities beyond the developer. We recognize that distribution channels may arise without the knowledge of the model developer. For example, the weights of a model may be released through one distribution channel and then be distributed through other channels. We will award this point if the developer discloses all of the distribution channels of which it is aware. - References: Cobbe et al. (2023), Widder & Wong (2023) 69. Downstream → Distribution → **Products and services** - Definition: Does the developer disclose whether any products and services offered by the developer are dependent on the model? - Notes: We recognize that a developer may provide many products and services that depend on a foundation model or internal derivatives of the model. We will award this point for a reasonable best-effort description of any ways the developer makes internal use of the model in its products or services. - References: Cobbe et al. (2023), Cen et al. (2023) 70. Downstream → Distribution → **Detection of machine-generated content** - Definition: Are any mechanisms for detecting content generated by this model disclosed? - Notes: Such a mechanism might include storing a copy of all outputs generated by the model to compare against, implementing a watermark when generating content using the model, or training a detector post-hoc to identify such content. We will award this point if any such mechanism is disclosed or if the developer reports that it has no such mechanism. - References: Kirchenbauer et al. (2023), Kuditipudi et al. (2023) 71. Downstream → Distribution → **Model License** - Definition: Is a license for the model disclosed? - Notes: In the event that licenses are written more generally, it should be clear which assets they apply to. We recognize that different developers may adopt different business models and therefore have different types of model licenses. Examples of model licenses include responsible AI licenses, open-source licenses, and licenses that allow for commercial use. - References: Pistilli et al. (2023), Chen et al. (2023a) 72. Downstream → Distribution → **Terms of service** - Definition: Are terms of service disclosed for each distribution channel? - Notes: We will award this point if there are terms of service that appear to apply to the bulk of the model's distribution channels. - References: Rakova et al. (2022), Liu et al. (2021) 73. Downstream → Usage policy → **Permitted and prohibited users** - Definition: Is a description of who can and cannot use the model disclosed? - Notes: Such restrictions may relate to countries (e.g. US-only), organizations (e.g. no competitors), industries (e.g. no weapons industry users) or other relevant factors.
These restrictions on users are often contained in multiple policies; we group them here for simplicity. We will award this point for a clear description of permitted, restricted, and prohibited users of the model. - References: Cohere (2022), Meta (2023) 74. Downstream → Usage policy → **Permitted, restricted, and prohibited uses** - Definition: Are permitted, restricted, and prohibited uses of the model disclosed? - Notes: We will award this point if at least two of the following three categories are disclosed: (i) permitted uses, (ii) restricted uses, and (iii) prohibited uses. By restricted uses, we mean uses that require a higher level of scrutiny (such as permission from or a separate contract with the developer) to be permitted. These uses are generally included in an acceptable use policy, model license, or usage policy. - References: Cohere (2022), Meta (2023) 75. Downstream → Usage policy → **Usage policy enforcement** - Definition: Is the enforcement protocol for the usage policy disclosed? - Notes: By enforcement protocol, we refer to (i) mechanisms for identifying permitted and prohibited users, (ii) mechanisms for identifying permitted/restricted/prohibited uses, (iii) steps the developer takes to enforce its policies related to such uses, and (iv) the developer's procedures for carrying out these steps. We will award this point for a reasonable best-effort attempt to provide the bulk of this information, though one line indicating the developer reserves the right to terminate accounts is insufficient. Alternatively, we will award this point if the developer reports that it does not enforce its usage policy. - References: Cohere (2022), Meta (2023) ## 76. Downstream → Usage Policy → Justification For Enforcement Action - Definition: Do users receive a justification when they are subject to an enforcement action for violating the usage policy? - Notes: For example, does the developer disclose a protocol for telling users which part of the usage policy they violated, when they did so, and what specifically was violative? Enforcement actions refer to measures to limit a user's ability to use the model, such as banning a user or restricting their ability to purchase tokens. We will award this point if the developer discloses that it gives justification for enforcement actions or, alternatively, if it discloses that it does not provide justification for enforcement actions or that it does not enforce its usage policy. - References: Cohere (2022), Meta (2023) 77. Downstream → Usage policy → **Usage policy violation appeals mechanism** - Definition: Is a mechanism for appealing potential usage policy violations disclosed? - Notes: We will award this point if the developer provides a usage policy violation appeals mechanism, regardless of whether it is provided via a user interface or distribution channel. - References: Cohere (2022), Meta (2023) 78. Downstream → Model behavior policy → **Permitted, restricted, and prohibited model behaviors** - Definition: Are model behaviors that are permitted, restricted, and prohibited disclosed? - Notes: We refer to a policy that includes this information as a model behavior policy, or a developer's policy on what the foundation model can and cannot do (e.g. such a policy may prohibit a model from generating child sexual abuse material). We recognize that different developers may adopt different business models and that some business models may make enforcement of a model behavior policy more or less feasible.
We will award this point if at least two of the three categories (i.e. permitted, restricted, and prohibited model behaviors) are disclosed. Alternatively, we will award this point if the developer reports that it does not impose any restrictions on its model's behavior. - References: Reuter & Schulze (2023), Qi et al. (2023) 79. Downstream → Model behavior policy → **Model behavior policy enforcement** - Definition: Is the enforcement protocol for the model behavior policy disclosed? - Notes: By enforcement protocol, we refer to mechanisms for identifying whether model behavior is permitted or prohibited and actions that may arise in the event the model behavior policy is violated. For example, the developer may make updates to the model in response to issues with the model's adherence to the model behavior policy. We will award this point if there is a clear description of the enforcement protocol, or if the developer reports that it does not enforce its model behavior policy or that it has no such restrictions on the model's behavior. - References: Brundage et al. (2020), Qi et al. (2023) 80. Downstream → Model behavior policy → **Interoperability of usage and model behavior policies** - Definition: Is the way that the usage policy and the model behavior policy interoperate disclosed? - Notes: For example, if a user attempts to use the model for a prohibited use such as spam, how does the model behavior policy apply if at all? We will also award this point if the developer reports that it does not impose any restrictions on its model's behavior in the event of usage policy violation. - References: Reuter & Schulze (2023), Qi et al. (2023) 81. Downstream → User Interface → **User interaction with AI system** - Definition: For distribution channels with user-facing interfaces, are users notified (i) that they are interacting with an AI system, (ii) of the specific foundation model they are interacting with, and (iii) that outputs are machine-generated? - Notes: A user-facing interface refers to the means by which the user interacts with the foundation model, including how the user can observe outputs from the foundation model and other notifications. We will award this point if, for all distribution channels with user-facing interfaces, the user is provided adequate transparency as to the foundation model being distributed and the potential presence of any model outputs. - References: Wang et al. (2023b), Nakao et al. (2022) 82. Downstream → User Interface → **Usage disclaimers** - Definition: For distribution channels with user-facing interfaces, are users provided with disclaimers involving model use? - Notes: A user-facing interface refers to the means by which the user interacts with the foundation model, including how the user can observe outputs from the foundation model and other notifications. Usage disclaimers could include information about what constitutes a usage policy violations or how users should interpret model outputs. We will award this point if, for all distribution channels with user-facing interfaces, the user is provided with usage disclaimers. - References: Wang et al. (2023b), Nakao et al. (2022) 83. Downstream → User data protection → **User data protection policy** - Definition: Are the protocols for how the developer stores, accesses, and shares user data disclosed? - Notes: We will also award this point if the developer reports that it has no user data protection policy. - References: Nissenbaum (2024), King (2020) 84. 
Downstream → User data protection → **Permitted and prohibited use of user data** - Definition: Are permitted and prohibited uses of user data disclosed? - Notes: Developers use user data for a range of purposes such as building future models, updating existing models, and evaluating both existing and future models. We will award this point if a developer discloses its policy on the use of user data from interactions associated with this model, including both permitted and prohibited uses. This may span different distribution channels if multiple channels supply user data to the developer. Alternatively, we will award this point if the developer reports it does not impose any limits on its use of user data. - References: Nissenbaum (2024), King (2020) 85. Downstream → User data protection → **Usage data access protocol** - Definition: Is a protocol for granting external entities access to usage data disclosed? - Notes: Usage data refers to the data created through user interaction with the model, such as user inputs to the model and associated metadata such as the duration of the interaction. A usage data access protocol refers to the steps, requirements, and considerations involved in granting external entities access to usage data; this goes beyond stating the conditions under which related personal information may be shared with external entities. We will award this point for a clear description of the usage data access protocol or if the developer reports it does not share usage data with external entities. - References: Lapowsky (2018), King (2020) 86. Downstream → Model Updates → **Versioning protocol** - Definition: Is there a disclosed version and versioning protocol for the model? - Notes: By versioning, we mean that each instance of the model is uniquely identified and that the model is guaranteed to not change when referring to a fixed version number; alternatively, a version identifier may refer to a changing instance of the model if it clearly signals this by noting that it is the "latest" or an "unstable" version. We recognize that different developers may adopt different versioning practices that may differ from standard semantic versioning practices used elsewhere in software engineering. - References: Chen et al. (2023b), Lam et al. (2020) 87. Downstream → Model Updates → **Change log** - Definition: Is there a disclosed change log for the model? - Notes: By change log, we mean a description associated with each change to the model (which should be indicated by a change in version number). We recognize that different developers may adopt different practices for change logs that may differ from practices used elsewhere in software engineering. We will award this point if the change log provides a clear description of changes that is legible to a technical audience. - References: Chen et al. (2023b), Li et al. (2016) 88. Downstream → Model Updates → **Deprecation policy** - Definition: Is there a disclosed deprecation policy for the developer? - Notes: By deprecation policy, we refer to a description of what it means for a model to be deprecated and how users should respond to the deprecation (e.g. instructions to migrate to a newer version). We will award this point for a clear disclosure of a deprecation policy or if there is no risk of deprecation (e.g. if the developer openly releases model weights). - References: Chen et al. (2023b), Haryono et al. (2020) 89. Downstream → Feedback → **Feedback mechanism** - Definition: Is a feedback mechanism disclosed?
- Notes: By feedback mechanism, we refer to a means for external entities to report feedback or issues that arise in relation to the foundation model. Such entities may include but are not necessarily limited to users. We will award this point if the developer discloses a feedback mechanism that has been implemented. - References: Bommasani et al. (2023b), Raji et al. (2022b) 90. Downstream → Feedback → **Feedback summary** - Definition: Is a report or summary disclosed regarding the feedback the developer received or, alternatively, the way the developer responded to that feedback? - Notes: We recognize that there does not exist an authoritative or consensus standard for what is required in a feedback report. For this reason, we will award this point if there is a meaningful, though potentially vague or incomplete, summary of feedback received. - References: Chen et al. (2021), Piorkowski et al. (2022) ## 91. Downstream → Feedback → Government Inquiries - Definition: Is a summary of government inquiries related to the model received by the developer disclosed? - Notes: Such government inquiries might include requests for user data, requests that certain content be banned, or requests for information about a developer's business practices. We recognize that there does not exist an authoritative or consensus standard for what is required for such a summary of government inquiries. For this reason, we will award this point if (i) there is a meaningful, though potentially vague or incomplete, summary of government inquiries, or (ii) a summary of government inquiries related to user data. - References: Chou, Bommasani et al. (2023b) ## 92. Downstream → Impact → Monitoring Mechanism - Definition: For each distribution channel, is a monitoring mechanism for tracking model use disclosed? - Notes: By monitoring mechanism, we refer to a specific protocol for tracking model use that goes beyond an acknowledgement that usage data is collected. We will also award this point for a reasonable best-effort attempt to describe monitoring mechanisms, or if a developer discloses that a distribution channel is not monitored. - References: Springer & Whittaker (2018), Bommasani et al. (2023b) 93. Downstream → Impact → **Downstream applications** - Definition: Across all forms of downstream use, is the number of applications dependent on the foundation model disclosed? - Notes: We recognize that there does not exist an authoritative or consensus standard for what qualifies as an application. We will award this point if there is a meaningful estimate of the number of downstream applications, along with some description of what it means for an application to be dependent on the model. - References: Vipra & Korinek (2023), Bommasani et al. (2023b) 94. Downstream → Impact → **Affected market sectors** - Definition: Across all downstream applications, is the fraction of applications corresponding to each market sector disclosed? - Notes: By market sector, we refer to an identifiable part of the economy. While established standards exist for describing market sectors, we recognize that developers may provide vague or informal characterizations of market impact. We will award this point if there is a meaningful, though potentially vague or incomplete, summary of affected market sectors. - References: Vipra & Korinek (2023), Bommasani et al. (2023b) 95. 
95. Downstream → Impact → **Affected individuals**
- Definition: Across all forms of downstream use, is the number of individuals affected by the foundation model disclosed?
- Notes: By affected individuals, we principally mean the number of potential users of applications. We recognize that there does not exist an authoritative or consensus standard for what qualifies as an affected individual. We will award this point if there is a meaningful estimate of the number of affected individuals along with a clear description of what it means for an individual to be affected by the model.
- References: Vipra & Korinek (2023), Bommasani et al. (2023b)

96. Downstream → Impact → **Usage reports**
- Definition: Is a usage report that gives usage statistics describing the impact of the model on users disclosed?
- Notes: We recognize that there does not exist an authoritative or consensus standard for what is required in a usage report. Usage statistics might include, for example, a description of the major categories of harm that has been caused by use of the model. We will award this point if there is a meaningful, though potentially vague or incomplete, summary of usage statistics.
- References: Brown (2023), Bommasani et al. (2023b)

97. Downstream → Impact → **Geographic statistics**
- Definition: Across all forms of downstream use, are statistics of model usage across geographies disclosed?
- Notes: We will award this point if there is a meaningful, though potentially incomplete or vague, disclosure of geographic usage statistics at the country level.
- References: Brown (2023), Bommasani et al. (2023b)

98. Downstream → Impact → **Redress mechanism**
- Definition: Is any mechanism to provide redress to users for harm disclosed?
- Notes: We will also award this point if the developer reports it does not have any such redress mechanism.
- References: Vipra & Myers West (2023), Bommasani et al. (2023b)

99. Downstream → Documentation for Deployers → **Centralized documentation for downstream use**
- Definition: Is documentation for downstream use consolidated into a centralized artifact?
- Notes: Centralized documentation for downstream use refers to an artifact, or closely-linked artifacts, that consolidate relevant information for making use of or repurposing the model. Examples of these kinds of artifacts include a website with dedicated documentation information, a GitHub repository with dedicated documentation information, and an ecosystem card. We recognize that different developers may take different approaches to centralizing information. We will award this point if there is a clearly-identified artifact(s) that contains the majority of substantive information (e.g. capabilities, limitations, risks, evaluations, distribution channels, model license, usage policies, model behavior policies, feedback and redress mechanisms, dependencies).
- References: Gebru et al. (2021), Mitchell et al. (2019)

100. Downstream → Documentation for Deployers → **Documentation for responsible downstream use**
- Definition: Is documentation for responsible downstream use disclosed?
- Notes: Such documentation might include details on how to adjust API settings to promote responsible use, descriptions of how to implement mitigations, or guidelines for responsible use. We will also award this point if the developer states that it does not provide any such documentation. For example, the developer might state that the model is offered as is and downstream developers are accountable for using the model responsibly.
- References: Bommasani et al. (2023b), Brown (2023)

## B Search Protocol

In this section, we outline the search process we used to look for evidence that a foundation model developer satisfies our requirements for a given indicator.

## B.1 General Search Process

## B.1.1 Keyword Definitions

Each item under review has associated search keywords in our GitHub repository.68

## B.1.2 Model-Item Pair Searches

For every model-item pair, we conduct a search using the defined keywords within the centralized resources associated with the respective models listed below.

## B.1.3 Search Methodology

We employ the following format for every model-item-keyword tuple while using Google search, and read through the first 10 search results: `site:[developer's website from the list below] [model name from the list below] [keyword]`. For example, for GPT-4's energy efficiency item, the searches would be `site:openai.com gpt-4 energy` and `site:openai.com gpt-4 efficiency`.

## B.1.4 Justification

We note the source (e.g., website, company blog post, paper) for each piece of evidence that helped confirm an item is present, alongside the justification. We link to an archive.org URL that contains the justification (instead of linking to developers' pages directly), to maintain records.

## B.1.5 Avoid Search Personalization

To minimize the influence of personalized search results, we perform all searches in a private or incognito browser tab.

## B.1.6 Determination Criteria

If we find one piece of evidence that fully justifies 1 point - or, in rarer cases, 0 points - for an item, we don't perform other searches.

## B.1.7 Distribution Channels

In certain limited cases where the above steps fail to generate any information for indicators related to distribution channels, we interact with the developer's intended distribution channel (if disclosed), such as its API or its preferred deployment partner's API, or the documentation related to this API. We search for the required information via this distribution channel to the extent possible. We also use proxies, such as model playgrounds, if enterprise access is otherwise required.
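To make the query construction concrete, the following is a minimal illustrative sketch of how such query strings could be generated programmatically; the site and keyword values here are placeholders, not the study's actual keyword lists from the GitHub repository:

```python
# Illustrative sketch of the query format described above; the keyword
# lists are placeholders, not the study's actual GitHub keyword sets.
SITES = {"GPT-4": "openai.com", "Llama 2": "ai.meta.com"}
KEYWORDS = {"energy efficiency": ["energy", "efficiency"]}

def build_queries(model: str, item: str) -> list[str]:
    """Return one Google query per model-item-keyword tuple."""
    site = SITES[model]
    return [f"site:{site} {model.lower()} {kw}" for kw in KEYWORDS[item]]

print(build_queries("GPT-4", "energy efficiency"))
# ['site:openai.com gpt-4 energy', 'site:openai.com gpt-4 efficiency']
```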
## B.2 Developer Website

- AI21 Labs (Jurassic-2): ai21.com
- Amazon (Titan Text): aws.amazon.com/bedrock/titan/
- Anthropic (Claude): anthropic.com
- Cohere (Command): cohere.com
- Google (PaLM 2): ai.google
- Hugging Face (BLOOMZ): bigscience.huggingface.co
- Inflection (Inflection-1): inflection.ai
- Meta (Llama 2): ai.meta.com
- OpenAI (GPT-4): openai.com
- StabilityAI (Stable Diffusion 2): stability.ai

## B.3 Centralized Resources For All Models

## B.3.1 AI21 Labs (Jurassic-2)

- https://docs.ai21.com/docs/jurassic-2-models
- https://docs.ai21.com/docs/responsible-use
- https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf
- https://www.ai21.com/blog/introducing-j2
- https://docs.ai21.com/docs/responsible-use\#usage-guidelines
- https://studio.ai21.com/terms-of-use
- https://studio.ai21.com/privacy-policy
- https://docs.ai21.com/changelog

## B.3.2 Amazon (Titan Text)

- https://aws.amazon.com/bedrock/titan/
- https://docs.aws.amazon.com/pdfs/bedrock/latest/APIReference/bedrock-api.pdf\#API_ListFoundationModels
- https://aws.amazon.com/aup/

## B.3.3 Anthropic (Claude 2)

- https://legal.anthropic.com/\#aup
- https://vault.pactsafe.io/s/9f502c93-cb5c-4571-b205-1e479da61794/legal.html\#aup
- https://console.anthropic.com/docs/api/supported-regions
- https://legal.anthropic.com/\#terms
- https://legal.anthropic.com/\#privacy
- https://docs.anthropic.com/claude/docs
- https://www.anthropic.com/index/claude-2
- https://www.anthropic.com/earlyaccess
- https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf
- https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety

## B.3.4 Cohere (Command)

- https://docs.cohere.com/docs/
- https://cohere.com/security
- https://dashboard.cohere.ai/playground/generate
- https://cohere.com/terms-of-use
- https://cloud.google.com/blog/products/ai-machine-learning/accelerating-language-model-training-with-cohere-and-google-cloud-tpus
- https://cohere.com/data-usage-policy
- https://cohere.com/privacy
- https://cohere-inc.secureframetrust.com/

## B.3.5 Google (PaLM 2)

- https://ai.google/static/documents/palm2techreport.pdf
- https://developers.generativeai.google/models/language
- https://policies.google.com/terms/generative-ai/use-policy
- https://developers.generativeai.google/guide/safety_guidance
- https://developers.generativeai.google/products/palm
- https://developers.generativeai.google/available_regions
- https://developers.generativeai.google/terms\#content_license_and_data_use

## B.3.6 Hugging Face (BLOOMZ)

- https://arxiv.org/abs/2211.01786
- https://huggingface.co/docs/transformers/model_doc/bloom
- https://huggingface.co/bigscience/bloom
- https://arxiv.org/abs/2303.03915
- https://arxiv.org/abs/2211.05100
- https://proceedings.neurips.cc/paper_files/paper/2022/file/ce9e92e3de2372a4b93353eb7f3dc0bd-Paper-Datasets_and_Benchmarks.pdf

## B.3.7 Inflection (Inflection-1)

- https://inflection.ai/assets/Inflection-1.pdf
- https://inflection.ai/inflection-1
- https://inflection.ai/assets/MMLU-Examples.pdf
- https://heypi.com/policy\#privacy
- https://inflection.ai/safety

## B.3.8 Meta (Llama 2)

- https://arxiv.org/pdf/2307.09288.pdf
- https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md
- https://ai.meta.com/static-resource/responsible-use-guide/

## B.3.9 OpenAI (GPT-4)

- https://openai.com/research/gpt-4
- https://openai.com/policies/usage-policies
- https://openai.com/form/chat-model-feedback
- https://platform.openai.com/docs
- https://openai.com/customer-stories
- https://status.openai.com/
- https://openai.com/policies/terms-of-use
- https://cdn.openai.com/policies/employee-data-privacy-notice.pdf
- https://cdn.openai.com/papers/gpt-4-system-card.pdf
- https://arxiv.org/pdf/2303.08774.pdf
- https://openai.com/research/triton
- https://openai.com/pricing
- https://platform.openai.com/docs/deprecations
- https://openai.com/waitlist/gpt-4-api
- https://openai.com/our-structure
- https://openai.com/api-data-privacy

## B.3.10 StabilityAI (Stable Diffusion 2)

- https://huggingface.co/stabilityai/stable-diffusion-2
- https://openreview.net/forum?id=M3Y74vmsMcY
- https://huggingface.co/terms-of-service
- https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL
- https://platform.stability.ai/legal/terms-of-service
- https://stability.ai/use-policy

## C Calls For Transparency

In recent years, transparency has been a rallying cry for activists, a boon to researchers, and a tangible first step for governments interested in regulating foundation models. Here we outline some of the salient calls for transparency to illustrate the different stakeholders with an interest in a more transparent foundation model ecosystem.

## Calls For Transparency From Governments.

A wide variety of governments have made transparency in the development of foundation models a top priority in their wider agenda for AI regulation. In the U.S., the White House has secured voluntary commitments from 16 companies that include a commitment "to publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use" in the form of "transparency reports."69 The AI Risk Management Framework from the U.S. National Institute for Standards and Technology outlines the U.S. federal government's current approach to transparency for foundation models and other AI systems.70 The AI Risk Management Framework states "Trustworthy AI depends upon accountability. Accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system ... Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system." The SAFE framework for regulating AI proposed by Senate Majority Leader Schumer aims to ensure that "AI is developed and deployed in a responsible and transparent manner" and to "support US-led innovation in AI technologies—including innovation in security, transparency and accountability."71 Transparency is also one of the five pillars of the bipartisan framework for a U.S. AI Act proposed by Senators Hawley and Blumenthal; their framework specifically suggests "requiring transparency from the companies developing and deploying A.I. systems" as it relates to training data, limitations, accuracy, safety, and user interaction with an AI system.72 A variety of other draft legislation in the U.S. would require a higher level of transparency for foundation model developers, such as the Algorithmic Accountability Act73 at the federal level and California's Safety in Artificial Intelligence Act.74

In the EU, transparency and information sharing have become a central focus of the draft EU AI Act. For instance, Article 52 of the Act imposes "transparency obligations" for some types of AI systems.
The European Parliament's draft of the AI Act included specific obligations for foundation model developers: "foundation models should have information obligations and prepare all necessary technical documentation for potential downstream providers to be able to comply with their obligations under this Regulation. Generative foundation models should ensure transparency about the fact the content is generated by an AI system, not by humans."75 Developers of high-risk AI systems may also be required to provide additional transparency about their systems such that deployers have adequate information about risks and how to mitigate them.

China has gone a step further, with the central government adopting regulations that impose transparency requirements on foundation model deployers. China's "Interim Measures for the Management of Generative Artificial Intelligence Services" state that organizations deploying foundation models, including via an API, must "employ effective measures to increase transparency in generative AI services."76 The law further specifies that "providers shall formulate clear, specific, and feasible tagging rules" for data and that "providers shall establish and complete mechanisms for making complaints and reports, setting up easy complaint and reporting portals, disclosing the process for handling them and the time limits for giving responses."

69 See https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ and https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf and https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/ and https://www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf
70 https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
71 https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf
72 https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf
73 See https://www.congress.gov/bill/118th-congress/house-bill/5628/all-info?s=2&r=1 and https://docs.google.com/document/d/1A1bJ1mkIfE3eZuSbDmz3HGVtOvQDegHl53q3ArO7m44/
74 https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB294
75 https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf
76 http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm

Many other governments have also highlighted the importance of transparency in the development and use of foundation models. Canada has released a "Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems," which has been signed by Cohere, the Montreal Institute for Learning Algorithms, and the Vector Institute among other organizations.77 Canada's Voluntary Code of Conduct states that signatories commit to achieve transparency such that "sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed." It further specifies that "developers of advanced generative systems available for public use" are required to "Publish information on capabilities and limitations of the system...
Develop and implement a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking). ... Publish a description of the types of training data used to develop the system, as well as measures taken to identify and mitigate risks." Japan is reportedly in the process of adopting its own code of conduct, which may go beyond voluntary commitments.78 India's report on "Impact, Opportunity, and Challenges of Generative AI," coauthored by India's Ministry of Electronics and Information Technology, states that transparency should be a central feature of India's regulatory framework for ensuring responsible use of generative AI.79 The United Arab Emirates' generative AI guide, published by the Office of the Minister for Artificial Intelligence, Digital Economy, and Remote Work Applications, highlights the importance of transparency for generative AI in terms of data protection: "Transparency is crucial to data privacy because it enables individuals to know how their data is collected, processed, and used by organizations. By being transparent, organizations can provide clear and concise information about their data privacy practices, policies, and procedures."80 Data protection authorities around the world are "de facto regulating generative AI" by using their existing authorities, including those related to information sharing; for example, data protection authorities in Brazil, Japan, and South Korea launched investigations into OpenAI's ChatGPT in 2023.81

Some governments have highlighted the fact that existing transparency requirements already apply to foundation model developers and ought to be enforced as such. The UK Competition and Markets Authority notes that transparency requirements are already in place under consumer protection law, and that foundation model developers must comply with the transparency provisions of the UK Consumer Rights Act.82 The U.S. Federal Trade Commission has stated that "we take note–and can take action–if companies aren't upfront about what consumers are buying, who made it, how it was made, or what rights people have in their own creations. ... When offering a generative AI product, [companies] may need to tell customers whether and the extent to which the training data includes copyrighted or otherwise protected material."83

It is also worth noting that many governments have emphasized the importance of transparency in the development and use of AI systems outside of the context of foundation models. The national AI strategies of Colombia,84 Egypt,85 Indonesia,86 and India87 highlight the importance of transparency, as do the national AI strategies of other countries.88

77 https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems
78 https://english.kyodonews.net/news/2023/10/3b83adf1e28d-japans-ai-draft-guidelines-ask-for-measures-to-address-overreliance.html
79 https://indiaai.s3.ap-south-1.amazonaws.com/docs/generative-ai-report.pdf
80 https://ai.gov.ae/wp-content/uploads/2023/04/406.-Generative-AI-Guide_ver1-EN.pdf
81 https://fpf.org/blog/how-data-protection-authorities-are-de-facto-regulating-generative-ai/
82 https://www.gov.uk/government/publications/ai-foundation-models-initial-report
83 https://www.ftc.gov/business-guidance/blog/2023/08/cant-lose-what-you-never-had-claims-about-digital-ownershipcreation-age-generative-ai
84 https://colaboracion.dnp.gov.co/CDT/Conpes/Económicos/3975.pdf
85 https://mcit.gov.eg/Upcont/Documents/Publications_672021000_Egypt-National-AI-Strategy-English.pdf
86 https://ai-innovation.id/images/gallery/ebook/stranas-ka.pdf
87 https://www.niti.gov.in/sites/default/files/2019-01/NationalStrategy-for-AI-Discussion-Paper.pdf
88 https://oecd.ai/en/dashboards/overview

**Calls for transparency from international organizations.** The UN High Commissioner for Human Rights, Volker Türk, has argued that existing rules for businesses squarely apply to foundation model developers. In a speech in July 2023, Türk stated that generative AI "companies must live up to their responsibilities to respect human rights in line with the Guiding Principles on Business and Human Rights."89 In addition to requiring human rights due diligence, the UN Guiding Principles on Business and Human Rights explicitly refer to transparency as it relates to a company's obligation to (i) transparently communicate the human rights impact of its products and (ii) be transparent in administering grievance processes.90 Türk further argued that without adequate guarantees of transparency, generative AI and other types of AI systems should be banned or suspended. He said "regulations need to require assessment of the human rights risks and impacts of AI systems before, during, and after their use. Transparency guarantees, independent oversight, and access to effective remedies are needed, particularly when the State itself is using AI technologies. AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until such adequate safeguards are in place."

UN Secretary-General António Guterres has foregrounded transparency as well. The UN's digital agenda, summarized in Guterres' Global Digital Compact, makes three key proposals related to transparency: (i) the international community should "make transparency, fairness and accountability the core of AI governance," (ii) governments should "consider the adoption of a declaration on data rights that enshrines transparency," and (iii) researchers and companies should be responsible for transparently communicating the risks of AI systems.91 The G7 Hiroshima AI Process, which was launched in May 2023 and focuses on generative AI, makes "promotion of transparency" one of its core aims.92 A September 2023 joint statement on the Hiroshima AI Process by G7 Digital and Technology Ministers committed the G7 to "develop guiding principles for organizations developing, deploying, and using advanced AI systems, in particular foundation models and generative AI," and stated that one such guiding principle could be "publicly report models' capabilities, limitations and domains of appropriate and inappropriate use, ensuring sufficient transparency."93

More broadly, international organizations have long noted that transparency is essential for responsible development of AI systems. The OECD AI Principles, adopted in 2019, include transparency as one of five principles for trustworthy AI. The principle on "transparency and explainability" reads: "AI Actors should commit to transparency and responsible disclosure regarding AI systems.
To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art: (i) to foster a general understanding of AI systems; (ii) to make stakeholders aware of their interactions with AI systems, including in the workplace; (iii) to enable those affected by an AI system to understand the outcome; and, (iv.) to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision."94 The G20 AI Principles, also adopted in 2019, include this OECD principle on transparency verbatim.95 A number of other countries have committed to the OECD AI Principles, including Argentina, Brazil, Egypt, and Singapore.96

89 https://www.ohchr.org/en/statements/2023/07/artificial-intelligence-must-be-grounded-human-rights-says-high-commissioner
90 For instance, the UN Guiding Principles on Business and Human Rights state, "The responsibility to respect human rights requires that business enterprises have in place policies and processes through which they can both know and show that they respect human rights in practice. Showing involves communication, providing a measure of transparency and accountability to individuals or groups who may be impacted and to other relevant stakeholders, including investors." See https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf
91 https://indonesia.un.org/sites/default/files/2023-07/our-common-agenda-policy-brief-gobal-digi-compact-en.pdf
92 https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/
93 https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
94 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
95 https://wp.oecd.ai/app/uploads/2021/06/G20-AI-Principles.pdf
96 https://oecd.ai/en/ai-principles

**Calls for transparency from foundation model developers.** Foundation model developers have also called for greater transparency and touted the benefits of transparency in their own business practices. For example, in June 2022 AI21 Labs, Cohere, and OpenAI published "Joint Recommendation for Language Model Deployment" that advocated for increased transparency (Cohere, 2022). Their recommendations stated that developers should "Publish usage guidelines and terms of use of LLMs ... Document known weaknesses and vulnerabilities, such as bias or ability to produce insecure code ... Documentation should also include model and use-case-specific safety best practices."

Individual developers have highlighted the importance of transparency as well. Anthropic ties the importance of transparency to interpretability in its paper on Constitutional AI and in describing the company's "Core Views on AI Safety" (Bai et al., 2022).97 Inflection prioritizes transparency in its decision-making about the choices it makes with regard to safety. Inflection's Safety Policy states "Safety at its heart is a question of values. Companies choose what risks to prioritize, and how to address them. We believe the best principle is to be deliberate about these choices, and transparent with our users about the specific values we build into our AIs. We may prioritize values that you disagree with. That's OK. We think that there is room for many perspectives ...
We commit to sharing publicly what positions we aim to take in our AIs."98 OpenAI has argued that transparency can help companies work together to mitigate safety concerns regarding foundation models.99 Askell et al. (2019) argue "information that companies provide about their intentions and actions—how transparent they are—can play an important role in whether other companies will cooperate with them." OpenAI also requires transparency from its suppliers: OpenAI's Supplier Code of Conduct states that "OpenAI expects all Suppliers to adhere to the highest standards of integrity, transparency, honesty, and ethical conduct in all their business dealings."100

Cohere states that transparency is important for its responsible development of large language models, noting that it has "invested in technical and non-technical measures to mitigate potential harm and make our development processes transparent."101 Cohere's Usage Guidelines prohibit users from using Cohere's platform for applications with "no transparency," meaning those that "do not disclose that the content is generated through automated means."102

Stability AI has called for transparency in connection with its advocacy for open foundation models. In a May 2023 report submitted to the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law, Stability AI wrote "Models like Stable Diffusion and StableLM demonstrate our commitment to AI technology that is transparent, accessible, and human-centric: ... We develop open models for transparency. Researchers can 'look under the hood' to verify performance, identify potential risks, and help develop safeguards. Organizations across the public and private sector can customize these models for their own needs without exposing sensitive data or ceding control of their AI capabilities."103 The report further argues "These principles can help to advance important policy objectives. Transparent models promote safety and security. ... open models enable the transparent identification, assessment, and management of risks consistent with the National Institute of Standards and Technology AI Risk Management Framework."

Hugging Face has also called for transparency as part of its push for open foundation models. In written testimony before the U.S. House Committee on Science, Space, and Technology, Hugging Face CEO Clement Delangue stated "Rigorous documentation practices for AI systems, with transparent reporting that follows well-defined protocols, serves three main goals: incentivizing responsible development; ensuring researchers and developers consider values and priorities that may otherwise be overlooked; and creating a paper trail for review. ... transparency from entities about how and where they deploy AI systems to understand what evaluations are most urgently needed."104 Hugging Face has, along with various partners, released a number of artifacts that advance transparency such as tools for exploring datasets (Piktus et al., 2023).

In articulating Meta's position with respect to Llama 2, Touvron et al. (2023) state that "It is important to understand what is in the pretraining data both to increase transparency and to shed light on root causes of potential downstream issues, such as potential biases. ... open releases promote transparency and allow more people to access AI tools, democratizing the technology and decentralizing AI expertise." Meta's Responsible Use Guide for Llama 2 encourages downstream developers to "build transparency and reporting mechanisms in user interactions ... consider ways to provide transparency to end users regarding potential risks and limitations of the system prior to or at the time of user interaction."105

Amazon makes clear that transparency is important with respect to the way in which it communicates its policies to users. Amazon Web Services' Data Privacy Center states that "Our contracts are written in plain, straightforward language to be transparent and help you understand the data privacy protections that we offer. We also provide ongoing data transparency reporting."106 Google highlights transparency in its AI principles, writing "For datasets and models, the consistent outcome is to create and publish detailed documentation of datasets and models in the form of structured transparency artifacts known as data and model cards (see the following section for details), which function like nutrition labels, providing information such as the provenance of the data (if a data card) and model performance when tested for fairness (if a model card)."107 Google's AI principles also detail the "Transparency Artifacts" that Google researchers have built, such as Healthsheets and a Data Cards Playbook. Microsoft has also produced such artifacts, namely in the form of "Transparency Notes," which "are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment."108

A large number of developers and deployers that we do not assess have also expressed the importance of transparency (Jobin et al., 2019; Fjeld et al., 2020; WEF, 2023). Notable among them is EleutherAI, a non-profit research group that is a leading developer of open foundation models (Skowron & Biderman, 2023). Phang et al. (2022) write that "EleutherAI's approach to research goes beyond transparency: by doing research entirely in public, anyone in the world can observe and contribute at every stage," adding that such public-facing research fosters a highly collaborative, diverse, and innovative research community.

97 As the blog post summarizing the paper states, "Constitutional AI is also helpful for transparency: we can easily specify, inspect, and understand the principles the AI system is following." See https://www.anthropic.com/index/claudes-constitution and https://www.anthropic.com/index/core-views-on-ai-safety
98 https://inflection.ai/safety
99 https://openai.com/research/cooperation-on-safety
100 https://openai.com/policies/supplier-code
101 https://cohere.com/responsibility
102 https://docs.cohere.com/docs/usage-guidelines
103 https://stability.ai/blog/stability-ai-letter-us-senate-ai-oversight
104 https://republicans-science.house.gov/_cache/files/5/5/551f066b-4483-4efd-b960-b36bc02d4b66/B82DBAFFA56F31799E058FB2755C2348.2023-06-22-mr.-delangue-testimony.pdf

**Calls for transparency from researchers, civil society, and labor.** While governments and companies have consistently underscored the value of transparency, less powerful actors have banded together to push public and private entities to meaningfully improve transparency along with the business practices that transparency uncovers.
Researchers have driven much of the improvement in transparency for foundation model developers, with innovations like model cards, datasheets, and data statements leading to substantial gains (Mitchell et al., 2018; Gebru et al., 2018; Bender & Friedman, 2018a). Some have sought to solidify these improvements in transparency by strengthening the field of algorithmic auditing (Costanza-Chock et al., 2022). Mozilla's Open Source Audit Tooling project calls for better infrastructure to evaluate and audit AI systems (Raji, 2022). Another proposal to bolster the auditing ecosystem is for governments to conduct third-party audits of AI systems under their existing authority to protect consumers and data subjects (Miller, 2021). Recently, coalitions of researchers led by organizations like LAION have come together to call for greater transparency in the foundation model ecosystem (LAION, 2023). In recent congressional hearings, expert testimony has expressed "The Need for Transparency in Artificial Intelligence" (Gregory, 2023). Belli & Gaspar (2023) detail the central importance of transparent foundation models from the perspective of experts across Asia, Africa, and Latin America. Other researchers still have argued that transparency, while necessary, is far from sufficient to regulate AI (Hartzog, 2023).

Data workers employed as contractors by foundation model developers have also mobilized for increased transparency (Gray & Suri, 2019b).109 For example, in July 2023 members of the African Content Moderators Union filed a petition with Kenya's parliament requesting an investigation into OpenAI, Meta, Google, and other multinational technology companies that employ content moderators in Kenya.110 The petition states that OpenAI used a vendor, Sama, to hire the petitioners as contractors who "trained the ChatGPT algorithm," and alleges that "the contracts did not sufficiently describe the nature of the job ... we were not properly informed of the nature of the work we would be undertaking." The petition further alleges that although this data labor included "reading and viewing material that depicted sexual and graphic violence and categorizing it accordingly so that ChatGPT's artificial intelligence could learn it for the purposes of its future interactions with people ... throughout the contract of training ChatGPT we were not afforded psychosocial support."111 The Partnership on AI has advocated for transparency with respect to the employment of data enrichment workers, writing "While shifting how the broader field approaches data enrichment is not a trivial task, increasing transparency regarding current practices and developing more practical guidance can move the field towards improved conditions for data enrichment workers. Greater transparency can help emphasize the central role of data enrichment workers, create the basis for a rich public dialogue of how to improve conditions for workers, and increase confidence in AI models themselves."112

Civil society groups with a range of different focus areas agree that transparency is a pressing priority for policymakers and foundation model developers. For instance, 123 civil society organizations, including AccessNow, Algorithm Watch, and the European Center for Not-for-Profit Law, released a statement advocating for the prioritization of more serious transparency requirements in the EU AI Act.113 The statement advocates that "mandatory impact assessments are a crucial measure to ensure foresight and accountability for potential AI-related harms," and that "information on all uses of AI systems by public authorities, regardless of the systems' risk level, should be made public in the EU database." Additionally, they call for "an obligation for providers and/or users to include information regarding the environmental impact of AI systems," which is not a provision in the EU AI Act.

Freedom House has also warned that "AI has allowed governments to refine their online censorship," and that foundation models threaten to exacerbate the decline in global internet freedom.114 Freedom House points to transparency requirements as a mechanism to identify and combat evolving and subtle censorship pressures.

In October 2023, the U.S. Federal Trade Commission convened a workshop on the "Creative Economy and Generative AI," where creators from across different industries demanded increased transparency. In the words of one participant, "The creative economy only works when the basic tenets of consent, credit, compensation, and transparency are followed. ... Without transparency, we can't even know the extent of how much these companies have taken. They took our work and data to train for-profit technologies that then directly compete against us in our own markets using generative media that is meant to mimic us."115

Despite its limits, transparency is a necessary and broadly popular first step towards accountability for harm caused by AI systems (Kaminski, 2020; Bates et al., 2023). In the context of the rapid rollout of extremely powerful AI systems such as foundation models, transparency is all the more urgent. Companies developing and deploying foundation models should heed the call.

105 https://ai.meta.com/static-resource/responsible-use-guide/
106 https://aws.amazon.com/compliance/data-privacy/Privacy_at_AWS_
107 https://ai.google/static/documents/ai-principles-2022-progress-update.pdf
108 https://learn.microsoft.com/en-us/legal/cognitive-services/language-service/transparency-note
109 Some policymakers have focused on the importance of transparency with respect to data labor. For example, in a letter to the CEOs of major foundation model developers, eight members of the U.S. Congress wrote "Tech companies also must be more transparent about the role data workers play in their AI, so that consumers can make informed choices about the products they use. Unfortunately, many companies have sidestepped these duties, and that must change. ... Please share any plans your company has to be more transparent about the role its data workers play and their working conditions." See https://www.markey.senate.gov/imo/media/doc/letter_to_artificial_intelligence_companies_on_data_worker_labor_conditions_-_091323pdf1.pdf
110 The African Content Moderators Union has also sued Meta, alleging that it unlawfully fired workers for their union organizing. See https://techcrunch.com/2023/08/23/meta-and-moderators-agree-to-mediation/
111 https://x.com/mercymutemi/status/1678984336996028416?s=46
112 In addition to conducting a case study in partnership with Google DeepMind exploring how to increase transparency regarding data labor, the Partnership on AI has separately published a white paper recommending that developers increase transparency in wages and pay structure for data enrichment workers. See https://partnershiponai.org/wp-content/uploads/2022/11/case-study_deepmind.pdf and http://partnershiponai.org/wp-content/uploads/2021/08/PAI-Responsible-Sourcing-of-Data-Enrichment-Services.pdf
113 https://www.fairtrials.org/app/uploads/2022/05/Civil-society-reacts-to-EP-AI-Act-draft-report_FINAL.pdf
114 https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
115 https://www.ftc.gov/system/files/ftc_gov/pdf/creative-economy-and-generative-ai-transcript-october-4-2023.pdf
# [Re] On The Reproducibility Of Post-Hoc Concept Bottleneck Models

Nesta Midavaine∗, Gregory Hok Tjoan Go∗, Diego Cánez Ildefonso∗, Ioana Simion∗, Satchit Chatterji†
{nesta.midavaine, gregory.go, diego.canez.ildefonso, ioana.simion, satchit.chatterji}@student.uva.nl
Graduate School of Informatics, University of Amsterdam

Reviewed on OpenReview: *https://openreview.net/forum?id=8UfhCZjOV7*

## Abstract

To obtain state-of-the-art performance, many deep artificial intelligence models sacrifice human explainability in their decision-making. One solution proposed for achieving top performance while retaining explainability is the Post-Hoc Concept Bottleneck Model (PCBM) (Yuksekgonul et al., 2023), which can convert the embeddings of any deep neural network into a set of human-interpretable concept weights. In this work, we reproduce and expand upon the findings of Yuksekgonul et al. (2023), showing that while their claims and results do generally hold, some of them could not be sufficiently replicated. Specifically, the claims relating to PCBM performance preservation and its non-requirement of labeled concept datasets were generally reproduced, whereas the one claiming its model editing capabilities was not. Beyond these results, our contributions to their work include evidence that PCBMs may work for audio classification problems, verification of the interpretability of their methods, and updates to their code for missing implementations. The code for our implementations can be found at https://github.com/dgcnz/FACT.

## 1 Introduction

There is an increasing demand within society to make artificially intelligent systems more explainable due to concerns of algorithmic bias (Pessach & Shmueli, 2023; Suresh & Guttag, 2021). One method of doing so involves Concept Bottleneck Models (CBMs), which train a model in an end-to-end fashion via concept prediction (Koh et al., 2020). These concepts are then used to predict the label attached to the data point, thus improving model explainability and allowing for the detection of concept prediction-related mistakes through model-based interpretability. However, despite their benefits, CBMs have been widely criticized for being unable to deliver on their goals of interpretability, predictability, and intervenability (Raman et al., 2024; Havasi et al., 2022; Furby et al., 2024; Margeloiu et al., 2021). For example, Margeloiu et al. (2021) have found that when the concept predictor and classifier of a CBM are trained jointly, extra information about the targets is learned in the concept predictor. As such, interpretability and intervenability become lost as factors beyond the concepts influence the decisions made.

By combining model-based and post-hoc paradigms, Yuksekgonul et al. (2023) proposed Post-hoc Concept Bottleneck Models (PCBMs). Through these, they address the CBM's limitations of data usage, decreased performance, and lack of model editing capabilities. The limitation of data usage is tackled by the implementation of multimodal models to obtain concepts automatically, and decreased performance is addressed through the introduction of a residual modeling step. This paper attempts to reproduce and extend upon their main findings through the following:

∗Equal contribution. †Supervisor.

- **Reproducing their results using their provided codebase**. This was done to identify which of the results supporting their claims can be reproduced, and the resource costs involved (i.e., computational cost and development effort).
- **Reproducing their user study**. This relates to their claim stating that human-guided editing is fast and improves classification accuracy, which was supported using a survey with the participants being machine learning practitioners and researchers. We investigate this by replicating said survey with a different group of participants who come from a similar demographic to that of the original.
- **Verifying the interpretability of their approach**. The original paper assesses the interpretability of PCBMs by highlighting crucial concepts in specific classes for some of the used datasets. It also argues that the performance increase from model editing stems from the PCBM's interpretability. To inspect this, we examine two directions. First, we analyze what happens when the original authors' concept collection process is replaced with meaningless concepts, which we then evaluate based on interpretability and performance. This serves as a baseline for our other interpretability experiments and also for the performance of PCBMs, as it answers the question: how much of the performance retained by the bottleneck can be attributed to the usefulness of concepts? Furthermore, we evaluate the interpretability of PCBMs by examining whether the concepts used by the model correctly correspond to objects in the input space. Similar to the work of Margeloiu et al. (2021) criticizing CBMs for not being truly interpretable, we utilize saliency maps (Simonyan et al., 2014; Smilkov et al., 2017) to visualize the correspondence between the concepts used and parts of the image. Additionally, we conduct another experiment to show the correspondence between concepts and the input space more generally, for entire datasets instead of single images.
- **Extending their work through an additional experiment**. This experiment examines how the original implementation can extend to another modality, namely audio. This was performed using AudioCLIP (Guzhov et al., 2022), a multimodal model we can utilize to automatically obtain concepts for audio. It was utilized in the same way Yuksekgonul et al. (2023) did for images.
- **Improving the original code by implementing additional scripts**. This includes, for example, Bash and Python scripts which allow for easier reproduction, and master Jupyter Notebooks containing essentially all the setup and code needed to run the experiments.

After performing the above, we successfully verified their claims on model performance and the usability of CLIP concepts, though we encountered issues replicating the claim that PCBMs enable global model editing. In addition, we discovered limitations in the meaningfulness of concepts and their correspondence to objects in input images, and found that PCBMs can be applied to inputs that are not in the form of image data. As such, our replication efforts yielded mixed results due to potential implementation mistakes and incomplete experiment setup details. While some performance similarities were observed, inconsistencies emerged, particularly with the COCO-Stuff and SIIM-ISIC datasets. While PCBM-h exhibited similar performance to the original model, the PCBMs showed mixed results. Also, our interpretability analysis revealed that despite performing well, random concepts lack interpretability. There were also limitations in interpretability due to concept feature values not being solely derived from the concept's presence in the image.
## 2 Scope Of Reproducibility

PCBMs were introduced to imbue interpretability into any neural network while preserving both its performance and flexibility. They build upon the idea of concept analysis (Kim et al., 2018), which aims to comprehend how such networks generate and leverage high-level, human-understandable features. This inspired the development of Concept Activation Vectors (CAVs), forming the foundation of PCBMs. Yuksekgonul et al. (2023) proposed three claims in their paper which we aim to investigate:

1. **Claim 1: PCBMs achieve comparable performance to the original model.** The authors claim in their paper that after applying the Post-Hoc Concept Bottlenecks to various models, classification performance was comparable and in some cases even identical to the original baselines in all scenarios. It must be noted, however, that this claim only holds for their hybrid implementation, which they call "PCBM-h." An explanation for it can be found in subsection 3.1.
2. **Claim 2: PCBMs do not require labeled concept datasets.** Another claim made is that multimodal representations can be used in the absence of labeled data for concept generation. The original authors demonstrated this using CLIP for concept generation and showed that it improved the performance of the PCBM due to the concepts generated being more expressive.
3. **Claim 3: PCBMs allow for global model editing.** The claim states that by simply adjusting the concept weights, the PCBMs can be adapted to new distributions. In addition, it is shown through a user study that pruning the least human-logical concepts always results in improved classification performance.

## 3 Methodology

## 3.1 Model Descriptions

The original paper introduces a post-hoc method aimed at enhancing the interpretability of neural networks. It demonstrates this approach using three different pre-trained backbone models: ResNet18 (He et al., 2016), CLIP-ResNet50 (Radford et al., 2021), and Inception (Szegedy et al., 2015). These models are freely available online, with the weights specifically for the Inception model trained on HAM10000 accessible from Daneshjou et al. (2022a). All our experiments involve classification tasks, focusing on the performance of PCBM and PCBM-h. As a baseline, we employ a linear probe at the output layer for model-dataset combinations lacking a classification head. This involves using logistic regression with $\ell_2$ regularization.

To convert the backbone model into a PCBM, the original authors projected the final embeddings of the backbone model onto a concept subspace. This projection is then used to train an interpretable linear classifier. The concept subspace is defined using a concept library which can be denoted as $I = \{i_1, i_2, \dots, i_{N_c}\}$, where $N_c$ denotes the number of concepts. Each concept can be learned directly from the data or selected by a domain expert (Ghorbani et al., 2019; Yeh et al., 2020).

The original authors used two different methods to learn concept representations, with the first being through CAVs (Kim et al., 2018). To obtain a CAV $c_i$ for each concept $i$, we need two image embedding sets $P_i$ and $N_i$. The former comprises the embeddings of $N_p = 50$ images containing the concept, which we call the positive image examples $x_p$. Meanwhile, the latter comprises the embeddings of $N_n = 50$ random images not containing said concept, which we refer to as the negative image examples $x_n$. This gives us $P_i = \{f(x_{p_1}), \dots, f(x_{p_{N_p}})\}$ and $N_i = \{f(x_{n_1}), \dots, f(x_{n_{N_n}})\}$, which we use to train a linear SVM that returns the CAV via its vector normal to the linear classification boundary. Note that obtaining these CAVs would require a densely annotated dataset with positive examples for each concept.
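As a concrete illustration of this procedure, the following is a minimal sketch (not the authors' exact code) of learning one CAV with scikit-learn; the embeddings here are random placeholders standing in for real positive and negative image embeddings:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Minimal sketch of CAV learning, not the authors' exact implementation.
# pos_emb / neg_emb: (50, d) arrays of backbone embeddings for images
# with / without the concept (placeholder random data for illustration).
rng = np.random.default_rng(0)
d = 512
pos_emb = rng.normal(size=(50, d))
neg_emb = rng.normal(size=(50, d))

X = np.vstack([pos_emb, neg_emb])
y = np.array([1] * 50 + [0] * 50)

svm = LinearSVC(C=0.1).fit(X, y)   # C would be tuned on the grid from subsection 3.3
cav = svm.coef_.ravel()            # normal vector to the decision boundary
```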
Meanwhile, the second approach involves utilizing the image and text encoders of a multimodal model (i.e., CLIP (Radford et al., 2021)) to generate embeddings for each modality. These encoders map to a shared embedding space, meaning that we can use the vector representations of the concepts' natural language descriptions as the concept vectors. As such, we have for the concept "stripes" that $c^{\text{text}}_{\text{stripes}} = f_{\text{text}}(\text{stripes})$.

The vector representations of concepts are collected in a projection matrix $\mathbf{C} \in \mathbb{R}^{N_c \times d}$, such that each row represents a concept vector of dimensionality $d$. After we obtain $\mathbf{C}$, the final embeddings of the backbone are projected onto the concept subspace. This matrix is then used to compute $f_{\mathbf{C}}(x) = \mathrm{proj}_{\mathbf{C}} f(x) \in \mathbb{R}^{N_c}$ in a way such that $f^{(i)}_{\mathbf{C}}(x) = \frac{f(x) \cdot c_i}{\|c_i\|_2^2}$. A way to interpret $f^{(i)}_{\mathbf{C}}(x)$ is as the concept feature value of concept $i$ for image $x$, which we further examine in subsection 3.4. This means the concept feature value says something about the correspondence between concept $i$ and image $x$. Thus, the vector $f_{\mathbf{C}}(x)$ can be used as the feature matrix of some interpretable model. Using this function, the original authors minimized the following loss:

$$\min_{g}\; \mathbb{E}_{(x,y)\sim D}\left[\mathcal{L}(g(f_{\mathbf{C}}(x)), y)\right] + \frac{\lambda}{N_c K}\,\Omega(g), \qquad (1)$$

where $\mathcal{L}$ represents the cross-entropy loss and the function $g$ represents a sparse linear model as determined by the authors, denoted as $g(f_{\mathbf{C}}(x)) = f_{\mathbf{C}}(x)W + b$. Here $W$ is the vector of concept weights; these concept weights give the importance of a concept for a given class. This means the concept weights are determined at the class level, while concept feature values are different for each image. Additionally, $\Omega(g)$ is the (multiclass) elastic-net penalty, defined as $\Omega(g) = \alpha\|W\|_1 + (1-\alpha)\|W\|_2^2$ with $\alpha$ as a regularization factor. Furthermore, if $g$ is a classification problem, we apply softmax to its output. Moreover, $K$ denotes the number of classes in the classification problem and $\lambda$ gives the regularization strength.

There are cases where the concept set is not expressive enough for the PCBMs alone to recover the original model's performance. To solve this, the original authors reintroduce the embeddings to "fit the residuals." They thus introduce Hybrid Post-Hoc CBMs (PCBM-h) to solve the following optimization problem:

$$\min_{r}\; \mathbb{E}_{(x,y)\sim D}\left[\mathcal{L}(g(f_{\mathbf{C}}(x)) + r(f(x)), y)\right], \qquad (2)$$

where $r: \mathbb{R}^d \to \mathcal{Y}$ represents the residual predictor mapping embeddings to the classes $\mathcal{Y}$ in our classification task. Also, minimizing the loss in Equation 2 is a sequential process where both the concept bank and the interpretable predictor $g$ remain fixed, while a linear layer is employed to fit the residuals.

For the claim regarding model editing, Yuksekgonul et al. (2023) again use both PCBM and PCBM-h, with the model editing performed on the sparse linear model $g$. Both the user study and the controlled experiment use the CLIP-ResNet50 backbone model from Radford et al. (2021).
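To make the projection and sparse-model step concrete, here is a minimal sketch approximating the objective in Equation 1; the embedding matrix, concept matrix, and labels are placeholders, and scikit-learn's elastic-net parameterization only approximates the penalty $\Omega(g)$:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Sketch of the concept projection and sparse linear model g; placeholder
# data stands in for real backbone embeddings and learned concept vectors.
rng = np.random.default_rng(0)
n, d, n_concepts = 200, 512, 170
F = rng.normal(size=(n, d))            # backbone embeddings f(x)
C = rng.normal(size=(n_concepts, d))   # concept matrix (rows are CAVs)
y = rng.integers(0, 5, size=n)         # class labels

# Concept feature values: f_C(x)^(i) = f(x) . c_i / ||c_i||_2^2
F_c = F @ C.T / (np.linalg.norm(C, axis=1) ** 2)

# Elastic-net-regularized linear classifier approximating Eq. (1)
g = SGDClassifier(loss="log_loss", penalty="elasticnet",
                  alpha=1e-3, l1_ratio=0.99).fit(F_c, y)
```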
## 3.2 Datasets

In total, the original authors used seven different datasets for experimentation, either to evaluate the performance of PCBMs across different domains, the quality of generated CLIP concepts, or the results of global model editing. All datasets used for binary classification were evaluated using the Area-Under-Curve (AUC), the multi-class binary classification-based COCO-Stuff using the mAP, and the rest using accuracy. An overview of each dataset and its purpose can be found in Table 8. For COCO-Stuff and SIIM-ISIC, we followed the original paper to create subsets for each to reduce the required disk space for experimentation.1 The specifications for how they were created can be found in our repository. Meanwhile, for the model editing experiments and the survey, multiple datasets were generated using Metashift with the Visual Genome dataset.2

The concepts used for evaluating the performance across different domains were taken from 3 different datasets. For CIFAR-10, CIFAR-100, and COCO-Stuff, the concept set was taken from BRODEN (Fong & Vedaldi, 2018), which contains visual concepts including for objects, settings, and image qualities, among others. Meanwhile, the version of the CUB dataset used from Koh et al. (2020) is annotated with preprocessed bird-related concepts, and the medical concepts for the HAM10000 and SIIM-ISIC originated from the Derm7pt dataset (Kawahara et al., 2019).

For the experiments that evaluated the use of CLIP-generated concepts, ConceptNet was employed to generate the utilized concepts (Speer et al., 2017). As an open knowledge graph, it can be used to find concepts with particular relations for any given query concept, such as "wheel" and "engine" for "car." Similar to the original authors, only five relation sets were used to build the concept space for each class, which are the "hasA," "isA," "partOf," "HasProperty," and "MadeOf" relations.

1 The trimmed-down datasets can be found here: COCO-Stuff, SIIM-ISIC.
2 The generated datasets can be found here: Model editing, Survey.

## 3.3 Hyperparameters

For a comparable replication, we used the same hyperparameters specified in the original paper whenever they were given. This was the case for everything apart from the regularization parameters $C_{\text{svm}}$ and $\lambda_{\text{logistic regression}}$. $C_{\text{svm}}$ is used by the SVM for CAV computation. The open-source repository supplies the majority of the necessary code, including an example grid for fine-tuning $C$ values, which is the following: [0.001, 0.01, 0.1, 1.0, 10.0]. Meanwhile, $\lambda_{\text{logistic regression}}$ is employed when investigating the original models for CIFAR10, CIFAR100, and COCO-Stuff. The original model is CLIP-ResNet50 for these three datasets, thus we determine the hyperparameter in the same way utilized by Radford et al. (2021). As such, we conduct a hyperparameter sweep on validation sets over a range from $10^{-6}$ to $10^{6}$, with 96 logarithmically spaced steps.

A hyperparameter warranting some attention is the regularization strength $\lambda$. It determines the weight decay for the parameters of our sparse linear model $g$ as defined in subsection 3.1, influencing the interpretability of the model and thus the PCBM. Appendix A of the original paper specifies the values used per dataset, tuned on a validation set, but omits the metric used for tuning. Additionally, the original code notes a trade-off between interpretability and accuracy for this parameter, which is discussed in subsection 4.1.
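A minimal sketch of such a validation sweep (with placeholder data standing in for real embeddings and labels; this is not the exact script we used) could look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the sweep described above: 96 logarithmically spaced
# regularization values between 1e-6 and 1e6, scored on a validation set.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 64)), rng.integers(0, 10, 500)
X_val, y_val = rng.normal(size=(100, 64)), rng.integers(0, 10, 100)

grid = np.logspace(-6, 6, 96)
scores = []
for c in grid:
    clf = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    scores.append(clf.score(X_val, y_val))
best_c = grid[int(np.argmax(scores))]
```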
## 3.4 Experimental Setup And Code

To test the first two main claims outlined in section 2, we reproduce the original paper's results for CAVs and CLIP concepts using the authors' code repository, as well as the same datasets, parameters, backbone models, and number of training epochs used for the PCBMs outlined in their paper. An overview of the backbones and datasets used can be found in Appendix A.

In addition, we test the third claim by evaluating their techniques for the model-editing experiments, using an unedited PCBM as a baseline. As such, we replicate their methodology by utilizing six scenarios in which we artificially introduce a spurious correlation between a class and a concept. For example, in one scenario we use a dataset where all training images of a bed contain dogs, and test the resulting model on a dataset where they contain cats instead. A description of these scenarios can be found in Appendix D. The first technique we evaluate is called "pruning," where, following the above example, we set the weight connecting the concept "dog" to the class "bed" to zero in the PCBM layer, ideally resulting in the model assigning less importance to "dog" when predicting "bed." Moreover, we evaluate "pruning with normalization," which is pruning combined with a rescaling of the weight vector corresponding to the affected class, thus making the new L1 norm match the previous one (a minimal sketch of both edits is given in Appendix F). We evaluate these methods alongside "fine-tuning," which involves first training a PCBM on the training domain, then fine-tuning it in the test domain, and finally testing on a held-out set of the test domain.

Another way we evaluate their third claim is through a survey. Here, we use a dataset similar to the one mentioned previously, with the primary distinction being the absence of spurious correlations in the test class. For example, we use images of beds containing dogs for training and then only use bed images without dogs during testing. In addition, we reproduce the survey in a setting comparable to the one presented in the original paper. Using a variant of the base PCBM, we identify the top 10 ranked concept weights for each class of interest and guide the users through the model editing process. Most participants (94.11%) were machine learning students or practitioners with a median age of 24, alongside a male-to-female ratio of 76.2% to 23.8%. Additionally, informed consent was obtained from all participants. We follow the original baselines to obtain an accurate benchmark: random pruning (randomly select a subset of the top ten concepts to prune, matching the number of concepts pruned by the user for a fair comparison) and greedy pruning (greedily select, from the top 10 concepts, those that improve model accuracy when pruned, again matching the number of concepts pruned by users). Further details of the setup and survey questions can be found in Appendix F.

## 3.4.1 Assumptions Made

Unfortunately, due to missing implementations and details, we had to update the repository to include additional components such as dataset installation scripts, code for the missing implementations, and an environment file. Moreover, we had to make several assumptions in our experimentation:

1. For the COCO-Stuff experiments, binary cross-entropy loss minimization was approached as 20 binary classifications. Performance was averaged across runs, treating the target class as positive and the remaining classes as negative.
2. For the SIIM-ISIC experiments, we implemented our data selection method based on the limited details provided by the authors. These details state that they utilized 2000 images (400 malignant, 1600 benign) for training and evaluated the model on a held-out set of 500 images (100 malignant, 400 benign).

3. For the global model editing experiments, design flexibility existed for the dataset, training set size, class names, optimizers, and hyperparameters due to conflicting information. Examples of this are $\frac{\lambda}{170 \times 5} = 0.05$ vs. $\frac{\lambda}{170 \times 5} = 0.002$, CLIP-ResNet50 vs. ResNet18, and the Adam vs. the SGD optimizer. After testing many configurations, we decided to proceed using $\lambda = 1.7$, an L1 ratio of $\alpha = 0.99$, BRODEN concepts, a CLIP-ResNet50 encoder, and the SGD optimizer.

4. For the user study experiments, insights from the Metashift study guided us in selecting hyperparameters and matching the reported weight magnitudes. We experimented with the regularization and settled on $\lambda = 0.002$, which made our results consistent with the original. Also, the architecture, which was built on existing models, now includes adjustments and an additional implementation of the three pruning methodologies.

## 3.5 Additional Experiments

To further assess the efficacy of PCBMs, we explore three research directions. The first investigates PCBMs' interpretability and performance using randomly generated concept vectors. Meanwhile, the second involves the interpretability of the concepts used in PCBMs. Lastly, the third direction aims to verify the original authors' claim that any neural network can be converted into a PCBM by testing it on audio classification.

## 3.5.1 Random Projection

Random projections, known for their desirable dimensionality reduction properties, preserve input data distances and similarities (Dasgupta, 2000; Bingham & Mannila, 2001; Cannings, 2021). For this experiment, we substitute the concept matrix $\mathbf{C}$ with a random one, where each row is a normally distributed and normalized vector. In other words, we replace the original concept matrix, obtained by the original authors' methods, with a random and meaningless concept matrix.

The exact details of this random projection experiment are as follows. We train a PCBM and PCBM-h on CIFAR10, CIFAR100, COCO-Stuff, and HAM10000. For the first three datasets, our new random concept matrix has the same dimensionality as the CLIP concept matrix. Meanwhile, for HAM10000, we use eight random concepts, matching the number of concepts used in the original paper. The results of this experiment will determine whether meaningful concepts are important for performance only, interpretability only, or both. Additionally, this experiment gives a baseline for the performance of PCBMs.

## 3.5.2 Object-Concept Correspondence

We examine whether the concepts used in PCBMs correctly correspond to objects in the input space. To see what this means in practice, we give the following general interpretation of the interpretability of PCBMs. When we refer to the PCBM as interpretable, we implicitly rely on the following two assumptions:

1. The concept weight of concept $i$ for class $k$ is high when $i$ is important for correctly classifying $k$.

2. The concept feature value of concept $i$ for image $j$ reflects **ONLY** the visibility of $i$ in $j$.

The first assumption immediately holds for the used class of interpretable models. However, the second one is less trivial. Assumption two relates specifically to the interpretability of the concepts used.
In practice, it means that when we have an image of a bicycle, the concept feature value for the concept "green" should be high only when the bicycle is green. Issues thus arise when representations of different concepts become entangled, since entangled concept vectors will yield similar projections for the same image. Kim et al. (2018) experimentally show that higher-ranked images represent a concept better when ranking images based on the concept feature value. However, they use a different projection method called the TCAV score, meaning their findings do not directly carry over to our scenario. Also, Kim et al. (2023) show that CAVs are sensitive to the chosen negative and positive samples, leading to unintended entanglement of concept representations. They find that this happens when the positive images for a concept almost always include a different concept, such as how images of a bicycle frame will almost always include a handlebar.

For CLIP concepts, some potential issues exist relating to assumption two. Firstly, CLIP is trained for single-label classification, whereas we are trying to identify multiple parts of an object in an image. Secondly, a type of entanglement has also been found in CLIP concept representations, which involves CLIP treating images as bags of concepts, where any one concept from the bag can be used to explain an entire part of the image (Tang et al., 2023; Lewis et al., 2024). Also, Tang et al. (2023) specifically discovered that there exists a Concept Association Bias (CAB) within CLIP between objects and their attributes, showing that this holds for part-whole relationships and colour. As such, the textual representation of only one of these two can be used to explain the occurrence of both in the image.

Our first method to test object-concept correspondence is by testing assumption two for PCBMs. We do so by evaluating the concept feature values $f_{\mathbf{C}}(x)$ for the positive and negative concept images in BRODEN and CUB. We do this for CAVs, CLIP concepts, and random concept vectors, with the latter having the same shape as the CLIP concepts. For each method, we have a set of concepts $I = \{i_1, i_2, \ldots, i_{N_c}\}$, where $N_c$ is the number of concepts we obtained using this method. For each concept $i$, we have two image sets, $P_i = \{\mathbf{x}_{p_1}, \ldots, \mathbf{x}_{p_{50}}\}$ and $N_i = \{\mathbf{x}_{n_1}, \ldots, \mathbf{x}_{n_{50}}\}$, containing 50 positive and 50 negative images for the concept respectively. We report the following statistic for each method:

$$gap=\frac{1}{|I|}\sum_{i\in I}\bigg(\frac{1}{50}\sum_{\mathbf{pos}\in P_{i}}f_{\mathbf{C}}(\mathbf{pos})-\frac{1}{50}\sum_{\mathbf{neg}\in N_{i}}f_{\mathbf{C}}(\mathbf{neg})\bigg).\tag{3}$$

This means that if the assumption holds, we should see generally higher concept feature values for positive concept images than for negative ones, alongside a positive average gap, when using the interpretable concept vectors (CAVs and CLIP concepts). Meanwhile, for random concepts, we should not see a difference.

Meanwhile, our second method involves using saliency maps (Simonyan et al., 2014; Smilkov et al., 2017) to see exactly where in the image our model "sees" the concept. We construct saliency maps from the concept projection back to the input; consequently, our saliency maps visualize the gradient of the concept feature value with respect to the input. Below we define the saliency map of concept $c$ with respect to the input $\mathbf{x}$ as $M_c(\mathbf{x})$:

$$M_{c}(\mathbf{x})=\partial f_{c}(\mathbf{x})/\partial\mathbf{x}.\tag{4}$$
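As an illustration of Equation 4, the following PyTorch sketch computes such a concept saliency map. It is a minimal illustration rather than our exact implementation: `backbone` is assumed to be any differentiable encoder mapping an image batch to embeddings $f(x)$, and with `n_samples > 1` and `sigma > 0` the loop becomes the noisy-gradient average described next.

```python
import torch

def concept_saliency(backbone, concept_vector, image, n_samples=1, sigma=0.0):
    """Gradient of the concept feature value w.r.t. the input (Equation 4).

    backbone:       module mapping an image batch (B, C, H, W) to embeddings (B, d).
    concept_vector: one row c of the concept matrix, shape (d,).
    With n_samples > 1 and sigma > 0 the gradients are averaged over noisy
    copies of the input, as in SmoothGrad.
    """
    c = concept_vector / concept_vector.norm() ** 2   # projection uses c / ||c||_2^2
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        x = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        feature_value = backbone(x) @ c               # concept feature value per image
        feature_value.sum().backward()
        grads += x.grad
    return grads / n_samples
```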
Additionally, we employ the SmoothGrad implementation of saliency maps (Smilkov et al., 2017), which enhances these maps by averaging gradients over multiple forward passes while adding noise to the input $\mathbf{x}$. Consequently, our saliency maps depict the sensitivity of concept feature values to input changes, with brighter regions indicating higher importance.

We investigate whether the most crucial parts of an image for a given concept visually align with it. For instance, the saliency map for the concept "green" should highlight the green parts of a bicycle in an image. To conduct this experiment, we utilize CIFAR100 images and the CLIP backbone, meaning the model from which we compute gradients comprises the CLIP-ResNet50 backbone followed by the concept projection described in subsection 3.1, where the concept matrix $\mathbf{C}$ contains the concept vector of a single concept. We generate saliency maps for ten different concepts, with five using CAV concept vectors and the remaining five using CLIP ones. For both methods, we select the four concepts with the highest concept weight for the class, along with one hand-chosen concept corresponding to a co-occurring object in the image. This hand-chosen concept aids in examining the less fine-grained behavior of concept vectors.

## 3.5.3 Audio Classification

The third experiment aims to verify the original authors' claim that any neural network can be converted into a PCBM. We therefore evaluate its performance in audio classification, a domain that has not yet been tested. For this purpose, we utilized AudioCLIP, an implementation of CLIP that integrates a compatible and well-performing audio encoder (Guzhov et al., 2022). This encoder is an ESResNeXt instance that has been jointly trained alongside the other CLIP encoders (Guzhov et al., 2021). We assess the audio model performance on ESC-50 (Piczak, 2015) and UrbanSound8K (Salamon et al., 2014). Similar to the models used in the main reproduction study, we create a baseline for AudioCLIP's performance via linear probing.

To obtain the CLIP-based concept matrix, we collect textual concepts and feed them through the AudioCLIP text encoder, similar to how it was done with standard CLIP. We derive the textual concepts as follows: for ESC-50, ConceptNet generates 146 concepts from the ground-truth labels, following the original authors' method. UrbanSound8K provides 31 concepts from the non-terminal nodes with depths 1 to 4 in its taxonomy, excluding the terminal nodes that carry data point labels. The final set of concepts is derived from the AudioSet ontology, totalling 610 concepts after excluding class-label-containing nodes. Thus, the concept matrix has a total of 787 concept vectors.

To obtain the CAV-based concept matrix, we perform the following weakly supervised approach: for each label in a dataset, we generated its associated concepts using ConceptNet. Then, we gathered 50 audio clips for each concept whose labels were associated with the given concept and treated them as positive examples, using a similar process for negative examples. Afterwards, we trained an SVM to separate the positive and negative examples and took the vector normal to the linear boundary as the concept's vector. This approach led to 31 concepts for UrbanSound8K and 171 for ESC-50.

We evaluate PCBM and PCBM-h on both datasets using the CAV- and CLIP-based concept matrices. The AudioCLIP audio head's final embedding dimension has a size of 2048, making the residual classifier of PCBM-h a function $r: \mathbb{R}^{2048} \rightarrow \mathbb{R}^{N_{\text{classes}}}$.
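The CAV construction itself reduces to fitting a linear SVM on the positive and negative embeddings and reading off the normal vector of its decision boundary. Below is a minimal sketch of this step; the `LinearSVC` call and the default `C_svm` value are illustrative stand-ins, not the exact configuration of our pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

def compute_cav(pos_embeddings, neg_embeddings, C_svm=0.01):
    """Return a concept activation vector from encoder embeddings.

    pos_embeddings, neg_embeddings: (n, d) arrays of encoder outputs,
    e.g. AudioCLIP audio-head embeddings with d = 2048. C_svm is the
    SVM regularization parameter, tuned over a small grid in practice.
    """
    X = np.concatenate([pos_embeddings, neg_embeddings])
    y = np.concatenate([np.ones(len(pos_embeddings)),
                        np.zeros(len(neg_embeddings))])
    svm = LinearSVC(C=C_svm).fit(X, y)
    return svm.coef_.ravel()  # normal vector of the separating hyperplane
```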
For UrbanSound8K, we set $\lambda_{\text{CLIP}} = 2 \cdot 10^{-4}$ and $\lambda_{\text{CAV}} = \frac{1}{34 \times 10}$. Similarly, for ESC-50, we set $\lambda_{\text{CLIP}} = 2 \cdot 10^{-10}$ and $\lambda_{\text{CAV}} = \frac{1}{171 \times 50}$. Early stopping was applied to the ESC-50 models due to observed overfitting.

## 3.6 Computational Requirements

All of our experiments were conducted using Google Colab in region "europe-west4," which has a carbon efficiency of 0.57 kgCO2eq/kWh. However, most experiments were CPU-based: since pre-trained models were used for the backbones, almost all training and evaluation served only to assess PCBM performance. As such, only the PCBM-h instances required GPU computation, as they are neural networks. We utilized a T4 GPU and an Intel(R) Xeon(R) CPU for these experiments, resulting in a total computational cost of roughly 30 CPU and 30 GPU hours for all experiments. This amounts to 2.39 kgCO2eq in emissions from GPU usage, which was entirely offset by the cloud provider. These estimates were made using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

## 4 Results

## 4.1 Results Reproducing Original Paper

Claim 1 - One of the main claims originally made is that PCBMs achieve comparable, or sometimes even identical, performance to the original baselines across various models. Our results (shown in Table 1) mostly support this claim, as we observed results similar to the original for all datasets besides CUB, which returned a performance that was 10 percentage points higher, aligning more with the performance of the pre-trained CUB ResNet18 model.3 This suggests a potential mistake in the authors' evaluation code. In any case, our results show that the performances achieved and the trends found match those of the original study quite closely.

| Original | CIFAR10 | CIFAR100 | COCO-Stuff | CUB | HAM10000 | SIIM-ISIC |
|----------------|---------------|---------------|---------------|---------------|---------------|---------------|
| Original Model | 0.888 | 0.701 | 0.770 | 0.612 | 0.963 | 0.821 |
| PCBM | 0.777 ± 0.003 | 0.520 ± 0.005 | 0.741 ± 0.002 | 0.588 ± 0.008 | 0.947 ± 0.001 | 0.736 ± 0.012 |
| PCBM-h | 0.871 ± 0.001 | 0.680 ± 0.001 | 0.768 ± 0.01 | 0.610 ± 0.01 | 0.962 ± 0.002 | 0.801 ± 0.056 |
| Reproduction | CIFAR10 | CIFAR100 | COCO-Stuff | CUB | HAM10000 | SIIM-ISIC |
| Original Model | 0.885 | 0.699 | 0.838 | 0.744 | 0.963 | 0.761 |
| PCBM | 0.773 ± 0.001 | 0.511 ± 0.002 | 0.796 ± 0.005 | 0.577 ± 0.007 | 0.926 ± 0.004 | 0.511 ± 0.002 |
| PCBM-h | 0.883 ± 0.002 | 0.688 ± 0.002 | 0.797 ± 0.001 | 0.595 ± 0.005 | 0.957 ± 0.004 | 0.751 ± 0.021 |

Table 1: Original (Yuksekgonul et al., 2023) and reproduction results of the claim that PCBMs achieve comparable performance to the original model. The reported metrics are AUROC for HAM10000 and SIIM-ISIC, mAP for COCO-Stuff, and accuracy for CIFAR and CUB.

3The pre-trained model and its performance can be found here: https://github.com/osmr/imgclsmob

Claim 2 - We evaluate whether PCBMs can achieve at least the same, if not higher, performance using generated concepts. However, our findings (shown in Table 2) contradict this claim: for both CIFAR10 and CIFAR100, the PCBM results are approximately 10 percentage points lower than originally reported. Notably, the PCBM for CIFAR10 performed worse when using CLIP concepts compared to BRODEN concepts, suggesting potential limitations in the expressiveness of CLIP concepts. Moreover, our results indicate that the performance reported in the original paper could be replicated using a lower $\lambda$ value than mentioned.
This is further explained in Appendix B. It is noteworthy that PCBM-h with CLIP concepts achieves performance comparable to CAVs, which is expected given the sequential training procedure and the reintroduction of the embeddings when training PCBM-h. Because of this, the performance of PCBM-h depends only minimally on the performance of the PCBM and, by extension, on the concepts used.

Furthermore, Figure 3 (in Appendix C) displays example concept weights for the same classes and datasets as in the original paper. While the HAM10000 concept weights are quite similar, the weight values for CIFAR100 are one order of magnitude higher (though the important weights are similar). This aligns with the potential use of a lower $\lambda$, as lower weight decay typically results in larger concept weights.

| Original | CIFAR10 | CIFAR100 | COCO-Stuff |
|---------------------------|---------------|---------------|---------------|
| Original Model | 0.888 | 0.701 | 0.770 |
| PCBM & labeled concepts | 0.777 ± 0.003 | 0.520 ± 0.005 | 0.741 ± 0.002 |
| PCBM-h & labeled concepts | 0.871 ± 0.001 | 0.680 ± 0.001 | 0.768 ± 0.01 |
| PCBM & CLIP concepts | 0.833 ± 0.003 | 0.600 ± 0.003 | 0.755 ± 0.001 |
| PCBM-h & CLIP concepts | 0.874 ± 0.001 | 0.691 ± 0.006 | 0.769 ± 0.001 |
| Reproduction | CIFAR10 | CIFAR100 | COCO-Stuff |
| Original Model | 0.885 | 0.699 | 0.838 |
| PCBM & labeled concepts | 0.773 ± 0.001 | 0.511 ± 0.002 | 0.796 ± 0.005 |
| PCBM-h & labeled concepts | 0.883 ± 0.002 | 0.688 ± 0.002 | 0.797 ± 0.001 |
| PCBM & CLIP concepts | 0.736 ± 0.005 | 0.535 ± 0.003 | 0.801 ± 0.002 |
| PCBM-h & CLIP concepts | 0.881 ± 0.002 | 0.688 ± 0.004 | 0.797 ± 0.001 |

Table 2: Original (Yuksekgonul et al., 2023) and reproduction results of the claim that PCBMs do not require labeled concept datasets, alongside comparisons to the original performances.

| Original | Unedited | Prune | Prune + Normalize | Fine-tune (Oracle) |
|----------------|---------------|---------------|---------------------|----------------------|
| PCBM Accuracy | 0.656 ± 0.025 | 0.686 ± 0.026 | 0.750 ± 0.019 | 0.859 ± 0.028 |
| PCBM Edit Gain | - | 0.029 | 0.093 | 0.202 |
| Reproduction | Unedited | Prune | Prune + Normalize | Fine-tune (Oracle) |
| PCBM Accuracy | 0.864 ± 0.033 | 0.874 ± 0.025 | 0.873 ± 0.025 | 0.939 ± 0.004 |
| PCBM Edit Gain | - | 0.010 ± 0.009 | 0.009 ± 0.008 | 0.075 ± 0.031 |

Table 3: Original (Yuksekgonul et al., 2023) and reproduction results of the claim that PCBMs allow for global model editing.

Claim 3 - While we see that global model editing (both pruning and pruning with normalization) results in improvements relative to the baseline, the performance increase is half that of the original results, as seen in Table 3. Note that the results are averaged over 10 seeds, such that the PCBM edit gain is the average gain across those seeds. For a more detailed description of the per-class edit gain, refer to Appendix E.

With regards to the user study, we find that users demonstrated improvements in 4 of the 9 scenarios, as illustrated in Table 15 (replicating the original paper's Table 9). However, the extent of these improvements does not align with the initial claim (comparing with the original paper's Table 4), as the performance increases are generally modest. In fact, we observed declines in performance for most scenarios, especially when the number of pruned concepts increased (see Table 4).
| Original | Layer | Unedited | Random Prune | User Prune | Greedy Prune (Oracle) |
|----------------|------------------|---------------|----------------|---------------|-------------------------|
| Spurious Class | PCBM Accuracy | 0.620 ± 0.035 | 0.604 ± 0.039 | 0.719 ± 0.042 | 0.740 ± 0.041 |
| | PCBM Edit Gain | - | -0.016 | 0.099 | 0.120 |
| | PCBM-h Accuracy | 0.642 ± 0.034 | 0.622 ± 0.037 | 0.736 ± 0.034 | 0.766 ± 0.034 |
| | PCBM-h Edit Gain | - | 0.020 | 0.094 | 0.124 |
| Reprod. | Layer | Unedited | Random Prune | User Prune | Greedy Prune (Oracle) |
| Spurious Class | PCBM Accuracy | 0.731 ± 0.172 | 0.217 ± 0.270 | 0.199 ± 0.249 | 0.276 ± 0.291 |
| | PCBM Edit Gain | - | -0.514 | -0.532 | -0.455 |
| | PCBM-h Accuracy | 0.740 ± 0.145 | 0.736 ± 0.144 | 0.735 ± 0.144 | 0.733 ± 0.145 |
| | PCBM-h Edit Gain | - | -0.004 | -0.005 | -0.007 |
| All Classes | PCBM Accuracy | 0.822 ± 0.038 | 0.737 ± 0.059 | 0.733 ± 0.577 | 0.747 ± 0.066 |
| | PCBM Edit Gain | - | -0.085 | -0.089 | -0.075 |
| | PCBM-h Accuracy | 0.854 ± 0.056 | 0.857 ± 0.052 | 0.856 ± 0.052 | 0.855 ± 0.053 |
| | PCBM-h Edit Gain | - | 0.003 | 0.002 | 0.001 |

Table 4: Original (Yuksekgonul et al., 2023) and reproduction results of the claim that human-guided editing improves accuracy.

In our study, users pruned 3.11 ± 2.12 concepts on average, showing larger variation compared to the original average of 3.65 ± 0.39. This variance affected both the greedy and random methodologies, as they rely on the number of pruned concepts. Further details on how these techniques were employed, and on the design decisions regarding the initial selection of classes in the multimodal approach, can be found in Appendix F and Appendix H respectively. Unfortunately, we could not provide a detailed breakdown of the number of scenarios improved per user, as the observed improvements were mostly minor. Moreover, some scenarios that showed enhancements in one instance also demonstrated declines in performance across many runs. Nonetheless, the initial claim that users can complete the pruning task faster than the greedy approach with only a relatively minor decrease in performance does hold. Users achieved comparable outcomes (a slight decrease in PCBM score and an increase for PCBM-h) while maintaining a mean time of 47.48 ± 31 seconds per scenario, in contrast to the greedy approach's average time of 52.49 ± 3.58 seconds.

## 4.2 Results Beyond Original Paper

## 4.2.1 Random Projection

The experiment results of using random projections to assess the necessity of meaningful concepts (e.g., CLIP and CAV concepts) for performance and interpretability are displayed in Table 5. The PCBM retains substantial performance when using random concepts for CIFAR10, CIFAR100, and COCO-Stuff, indicating that information from the backbone embeddings is preserved after projection. However, the HAM10000 performance is not retained, possibly because a random projection onto only eight concepts cannot maintain enough information about the original dataset.

| | CIFAR10 | CIFAR100 | COCO-Stuff | HAM10000 |
|----------------|---------------|---------------|---------------|---------------|
| Original Model | 0.885 | 0.699 | 0.838 | 0.963 |
| PCBM | 0.746 ± 0.002 | 0.490 ± 0.003 | 0.810 ± 0.002 | 0.500 ± 0.000 |
| PCBM-h | 0.882 ± 0.002 | 0.685 ± 0.004 | 0.797 ± 0.001 | 0.961 ± 0.004 |

Table 5: Extension results of the random projection experiment.

The concept weights for two classes in CIFAR100 and HAM10000 are shown in Figure 4 and Figure 5 (Appendix C).
For CIFAR100, the concept weights for random concepts lack intuitiveness. Meanwhile, all concept weights are zero for HAM10000. This is consistent with the earlier finding that PCBMs fail to maintain the original model's performance on this dataset, suggesting that collecting meaningful concepts is necessary for straightforward interpretability. Comparing the performance results with those obtained using meaningful concepts, we find that the latter do outperform this baseline in all cases.

## 4.2.2 Object-Concept Correspondence

This section contains our findings from testing the assumption that a concept feature value reflects the visibility of a concept in the image. The results can be found in Table 6, which presents the average concept feature value gaps for BRODEN and CUB. Looking at the results, CAVs have the greatest separation between positive and negative images, with gaps of 0.274 and 1.694 respectively. Meanwhile, the CLIP concept vectors return a notably weaker signal for a concept's appearance, with gaps of 0.053 and 0.022 for BRODEN and CUB respectively. The signal is especially weak for the latter, since incorrect, negative signals lie within one standard deviation. We also see that random concept vectors have an average gap of around zero in both cases, as expected, given that they encode no relevant information about the images. Thus, we find that CAV and CLIP concept feature values do indeed carry a signal for whether a concept is in the image.

| | BRODEN | CUB |
|----------------------------|---------------|----------------|
| Concept Activation Vectors | 0.274 ± 0.124 | 1.694 ± 0.037 |
| CLIP Concept Vectors | 0.053 ± 0.046 | 0.022 ± 0.037 |
| Random Concept Vectors | 0.002 ± 0.018 | -0.006 ± 0.088 |

Table 6: Extension results of the concept feature value experiment. Reported here are the means and standard deviations of the concept feature value gaps between 50 positive and 50 negative images, averaged across concepts.

Both Figure 1 and Appendix I depict the outcomes of our saliency experiment, revealing the most relevant parts of the image for each concept via saliency maps for two CIFAR100 images. The former illustrates saliency maps for the four most significant concepts of the class "bicycle," along with maps for the concept "grass," chosen due to its co-occurrence with bicycles in the image corners. Notably, most saliency maps mainly emphasize the primary object, regardless of the part of the image the concept references. For example, both the "chain wheel" and "handlebar" saliency maps for CAVs are bicycle-shaped, consistent with the findings of Kim et al. (2023), which highlight the unintended entanglement in CAVs. In addition, the bicycle-shaped saliency map for the concept "book" likely stems from entanglement, as it is a top-4 concept for the bicycle class despite its apparent irrelevance. Notably, CAV saliency maps exhibit greater diversity compared to CLIP ones, highlighting different, incorrect parts of the bicycle, possibly due to CAVs' sensitivity to the sampled positive and negative images, as discussed in Kim et al. (2023). A similar entanglement is present for the CLIP concepts, as the saliency maps for "bicycle wheel," "coaster brake," "two wheels," and "bicycle seat" are all bicycle-shaped. This matches previous findings showing that CLIP suffers from CAB and thus cannot distinguish well between an object and its attributes (Tang et al., 2023).
A related possible reason why our saliency maps do not correctly highlight the regions of the image corresponding to the concepts is that CLIP was not trained for such fine-grained classification. A related finding by Zhong et al. (2021) is that CLIP does not transfer well to region detection tasks. They hypothesize that this is because CLIP was trained to match whole images to their text descriptions, without capturing the fine-grained alignment between image regions and text spans. Further evidence that our saliency map findings are due to entanglement rather than other factors (e.g., our CLIP-ResNet50 backbone encoder only extracting features related to the main object in a scene) is provided by Chen et al. (2023), who discovered that CLIP-ResNet50 channels exhibit more noise compared to a ResNet50 trained solely on ImageNet. Using various saliency map types, they demonstrated that this backbone attends to more than just the main object within a scene. Despite this, a positive observation for PCBMs is that concepts such as "green" and "greenness" successfully highlight the top-right green patch of the image, indicating that concept feature values sometimes reflect broader relevant parts of the image.

![11_image_0.png](11_image_0.png)

Figure 1: Extension results of the saliency map experiment. Shown here are ten saliency maps for an image from the class "bicycle" of CIFAR100. Five CAV and five CLIP concepts were used.

## 4.2.3 Audio Classification

An overview of the audio classification extension results is provided in Table 7. PCBM struggles to match the original model's performance when using UrbanSound8K with CLIP concepts, though PCBM-h narrows this gap. This pattern does not hold, however, for the other dataset-method combinations. For example, PCBM-h fails to reach the original model's performance when using UrbanSound8K with CAVs. Meanwhile, CLIP concepts perform poorly for ESC-50, and the CAV-based approach falls short of the original model despite improving over CLIP. The discrepancy in CLIP-based performances may stem from the number of CLIP concepts not scaling with the number of classes in ESC-50, as discussed by Oikarinen et al. (2023). This means that PCBMs are harder to scale to datasets with many labels. The inadequacy of CLIP's concept space is evident in Table 12 (Appendix D), where top concepts like "heavy metal," "motorcycle," and "whir" are mismatched with their class label "cow." Conversely, CAVs' consistency across datasets may arise from the customized concepts tailored to each dataset. The results in Table 11 and Table 12 (in Appendix D) support the interpretability of CAVs over CLIP concepts: each class has multiple fitting concepts within its top three most important concepts. Thus, extending PCBMs to different domains can yield mixed results, heavily influenced by the chosen concept subspace.

| | UrbanSound8K | ESC-50 |
|---------------------------|----------------|---------------|
| Original Model | 0.613 | 0.670 |
| PCBM & CLIP concepts | 0.558 ± 0.002 | 0.280 ± 0.035 |
| PCBM-h & CLIP concepts | 0.603 ± 0.006 | 0.280 ± 0.035 |
| PCBM & labeled concepts | 0.411 ± 0.001 | 0.400 ± 0.006 |
| PCBM-h & labeled concepts | 0.462 ± 0.009 | 0.410 ± 0.006 |

Table 7: Extension results of the audio classification experiment.

## 5 Discussion

Our study aimed to replicate the findings of Yuksekgonul et al. (2023). We reproduced their first two claims with reasonable success, though we had challenges doing so for the third.
The first claim, asserting that Post-Hoc Concept Bottlenecks do not reduce model performance, aligns with our results. Our PCBM experiments show performance similar to that in the original paper, and PCBM-h retrieves a performance similar to the baseline, with negligible differences observed in certain scenarios. As outlined before, some of these differences are caused by potential implementation mistakes by the original authors, relating to the conflicting results obtained for CUB. Furthermore, other differences stem from missing details regarding the experimental setup, which is the case for the experiments related to COCO-Stuff and, to an extent, SIIM-ISIC.

The second claim, suggesting the usability and potential advisability of CLIP concepts, is partially supported. While PCBM-h here performs similarly to the original model, the PCBM results are mixed and not consistently superior to the CAV-based approach. Meanwhile, the third claim, regarding PCBMs enabling global model editing, is not truly supported by our results. In the Metashift experiment, the observed increase was significantly smaller than in the original results, and the user study indicated only marginal performance improvements and, in many cases, declines. Similar to some of the other experiments performed, this discrepancy is due to incomplete implementation details provided by the authors, especially relating to how the Metashift PCBMs were trained.

Beyond these three claims, we evaluated the interpretability of PCBMs by first testing whether meaningful concepts are necessary for both performance and interpretability, discovering that random, meaningless concepts perform well but are not easily interpretable. We then examined whether the concepts used by the model correctly correspond to objects in the input image using saliency maps and observed that the concept projection generally signals an object's presence within an image correctly, though this signal is not based only on the part of the image containing said object. Based on this, we conclude that the interpretability of PCBMs is limited, because the concept feature values used by the interpretable model are not obtained only by looking at the concept in the image. Instead, many concepts return high feature values only because they are correlated with the main object being classified.

Furthermore, another of our extensions partially supports the claim that PCBMs apply to any neural network. We find that the results when applying PCBMs to audio are mixed. The CAV-based approach is consistent and gives interpretable concepts, but does not perform as well as the original model. The CLIP-based approach matches the original model for one dataset but does poorly on the other. Additionally, the CLIP concepts are harder to interpret in combination with the classes.

Given these results, we recommend further research focusing on the interpretability of both types of concept vectors discussed. Additionally, exploring more suitable projection methods for CLIP concepts is crucial to enhance their responsiveness to the presence of specific concepts in images. Another recommendation is to evaluate additional audio datasets and architectures to investigate whether other, non-CLIP-based audio classifiers still function with the PCBM architecture. Lastly, given how frequently PCBMs are utilized (Oikarinen et al., 2023; Yang et al., 2023; Panousis et al., 2023; Daneshjou et al.,
2022b), we suggest that future researchers be more careful when using them, as our findings show that proper interpretability is not fully guaranteed. It would be especially beneficial for further research utilizing PCBMs to examine object-attribute entanglement in concepts, to see whether they suffer from the same issues covered here.

## 5.1 What Was Easy, And What Was Difficult?

An appendix with all used hyperparameters and implementation details was provided by the original paper. This made it simple to start with the experiments, especially when coupled with the publicly available repository and the complete model implementations.

That aside, the repository provided was still incomplete, meaning that several code parts, such as the COCO-Stuff implementations, model editing files, and Metashift concept sources, were unavailable. The explanation of some of the experimental setups is also insufficient for producing accurate reproductions, resulting in the assumptions outlined in subsection 3.4.

## 5.2 Communication With Original Authors

We attempted to contact the original authors for clarification on how some of the experiments were set up due to missing details/implementations, such as how the COCO-Stuff binary classification and Metashift experiments were performed. However, as of writing, they have not responded to any of our inquiries.

## References

Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In *Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining*, pp. 245–250, San Francisco, California, August 2001. ACM. ISBN 978-1-58113-391-2. doi: 10.1145/502512.502546. URL https://dl.acm.org/doi/10.1145/502512.502546.

Timothy I. Cannings. Random projections: Data perturbation for classification problems. *WIREs Computational Statistics*, 13(1):e1499, 2021. ISSN 1939-0068. doi: 10.1002/wics.1499. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/wics.1499.

Peijie Chen, Qi Li, Saad Biaz, Trung Bui, and Anh Nguyen. gScoreCAM: What Objects Is CLIP Looking At? In Lei Wang, Juergen Gall, Tat-Jun Chin, Imari Sato, and Rama Chellappa (eds.), *Computer Vision - ACCV 2022*, volume 13844, pp. 588–604. Springer Nature Switzerland, Cham, 2023. ISBN 978-3-031-26315-6. doi: 10.1007/978-3-031-26316-3_35. URL https://link.springer.com/10.1007/978-3-031-26316-3_35. Series Title: Lecture Notes in Computer Science.

Roxana Daneshjou, Kailas Vodrahalli, Weixin Liang, Roberto A. Novoa, Melissa Jenkins, Veronica Rotemberg, Justin Ko, Susan M. Swetter, Elizabeth E. Bailey, Olivier Gevaert, Pritam Mukherjee, Michelle Phung, Kiana Yekrang, Bradley Fong, Rachna Sahasrabudhe, James Zou, and Albert Chiou. Disparities in Dermatology AI: Assessments Using Diverse Clinical Images. *Science Advances*, 8(32):eabq6147, August 2022a. ISSN 2375-2548. doi: 10.1126/sciadv.abq6147. URL http://arxiv.org/abs/2111.08006. arXiv:2111.08006 [cs, eess].

Roxana Daneshjou, Mert Yuksekgonul, Zhuo Ran Cai, Roberto Novoa, and James Y. Zou. SkinCon: A skin disease dataset densely annotated by domain experts for fine-grained debugging and analysis. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 18157–18167. Curran Associates, Inc., 2022b.
URL https://proceedings.neurips.cc/paper_files/paper/2022/file/7318b51b52078e3af28197e725f5068a-Paper-Datasets_and_Benchmarks.pdf.

Sanjoy Dasgupta. Experiments with Random Projection. In *Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence*, pp. 143–151, 2000.

Ruth Fong and Andrea Vedaldi. Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks. In *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8730–8738, Salt Lake City, UT, June 2018. IEEE. ISBN 978-1-5386-6420-9. doi: 10.1109/CVPR.2018.00910. URL https://ieeexplore.ieee.org/document/8579008/.

Jack Furby, Daniel Cunnington, Dave Braines, and Alun Preece. Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?, February 2024. URL http://arxiv.org/abs/2402.00912. arXiv:2402.00912 [cs].

Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. Audio Set: An ontology and human-labeled dataset for audio events. In *2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 776–780, New Orleans, LA, March 2017. IEEE. ISBN 978-1-5090-4117-6. doi: 10.1109/ICASSP.2017.7952261. URL http://ieeexplore.ieee.org/document/7952261/.

Amirata Ghorbani, James Wexler, James Y. Zou, and Been Kim. Towards Automatic Concept-based Explanations. *Advances in Neural Information Processing Systems*, 32, 2019.

Andrey Guzhov, Federico Raue, Jörn Hees, and Andreas Dengel. ESResNe(X)t-fbsp: Learning Robust Time-Frequency Transformation of Audio, April 2021. URL http://arxiv.org/abs/2104.11587. arXiv:2104.11587 [cs, eess].

Andrey Guzhov, Federico Raue, Jörn Hees, and Andreas Dengel. AudioCLIP: Extending CLIP to Image, Text and Audio. In *ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 976–980, May 2022. doi: 10.1109/ICASSP43922.2022.9747631. URL https://ieeexplore.ieee.org/document/9747631. ISSN: 2379-190X.

Marton Havasi, Sonali Parbhoo, and Finale Doshi-Velez. Addressing Leakage in Concept Bottleneck Models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 23386–23397. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/944ecf65a46feb578a43abfd5cddd960-Paper-Conference.pdf.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, Las Vegas, NV, USA, June 2016. IEEE. ISBN 978-1-4673-8851-1. doi: 10.1109/CVPR.2016.90. URL http://ieeexplore.ieee.org/document/7780459/.

Jeremy Kawahara, Sara Daneshvar, Giuseppe Argenziano, and Ghassan Hamarneh. Seven-Point Checklist and Skin Lesion Classification Using Multitask Multimodal Neural Nets. *IEEE Journal of Biomedical and Health Informatics*, 23(2):538–546, March 2019. ISSN 2168-2208. doi: 10.1109/JBHI.2018.2824327. URL https://ieeexplore.ieee.org/document/8333693.

Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In *Proceedings of the 35th International Conference on Machine Learning*, pp. 2668–2677. PMLR, July 2018.
URL https://proceedings.mlr.press/v80/kim18d.html. ISSN: 2640-3498.

Siwon Kim, Jinoh Oh, Sungjin Lee, Seunghak Yu, Jaeyoung Do, and Tara Taghavi. Grounding Counterfactual Explanation of Image Classifiers to Textual Concept Space. In *2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 10942–10950, Vancouver, BC, Canada, June 2023. IEEE. ISBN 9798350301298. doi: 10.1109/CVPR52729.2023.01053. URL https://ieeexplore.ieee.org/document/10203370/.

Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept Bottleneck Models. *Proceedings of the 37th International Conference on Machine Learning*, 119, 2020.

Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the Carbon Emissions of Machine Learning, November 2019. URL http://arxiv.org/abs/1910.09700. arXiv:1910.09700 [cs].

Martha Lewis, Nihal Nayak, Peilin Yu, Jack Merullo, Qinan Yu, Stephen Bach, and Ellie Pavlick. Does CLIP Bind Concepts? Probing Compositionality in Large Image Models. In Yvette Graham and Matthew Purver (eds.), *Findings of the Association for Computational Linguistics: EACL 2024*, pp. 1487–1500, St. Julian's, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-eacl.101.

Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, and Adrian Weller. Do Concept Bottleneck Models Learn as Intended?, May 2021. URL http://arxiv.org/abs/2105.04289. arXiv:2105.04289 [cs].

Tuomas Oikarinen, Subhro Das, Lam M. Nguyen, and Tsui-Wei Weng. Label-Free Concept Bottleneck Models, June 2023. URL http://arxiv.org/abs/2304.06129. arXiv:2304.06129 [cs].

Konstantinos P. Panousis, Dino Ienco, and Diego Marcos. Sparse Linear Concept Discovery Models. In *2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)*, pp. 2759–2763, Paris, France, October 2023. IEEE. ISBN 9798350307443. doi: 10.1109/ICCVW60793.2023.00292. URL https://ieeexplore.ieee.org/document/10350660/.

Dana Pessach and Erez Shmueli. A Review on Fairness in Machine Learning. *ACM Computing Surveys*, 55(3):1–44, March 2023. ISSN 0360-0300, 1557-7341. doi: 10.1145/3494672. URL https://dl.acm.org/doi/10.1145/3494672.

Karol J. Piczak. ESC: Dataset for Environmental Sound Classification. In *Proceedings of the 23rd ACM International Conference on Multimedia*, pp. 1015–1018, Brisbane, Australia, October 2015. ACM. ISBN 978-1-4503-3459-4. doi: 10.1145/2733373.2806390. URL https://dl.acm.org/doi/10.1145/2733373.2806390.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision, February 2021. URL http://arxiv.org/abs/2103.00020. arXiv:2103.00020 [cs].

Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, and Mateja Jamnik. Do Concept Bottleneck Models Obey Locality?, January 2024. URL http://arxiv.org/abs/2401.01259. arXiv:2401.01259 [cs].

Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. A Dataset and Taxonomy for Urban Sound Research. In *Proceedings of the 22nd ACM International Conference on Multimedia*, pp. 1041–1044, Orlando, Florida, USA, November 2014. ACM. ISBN 978-1-4503-3063-3. doi: 10.1145/2647868.2655045. URL https://dl.acm.org/doi/10.1145/2647868.2655045.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, April 2014. URL http://arxiv.org/abs/1312.6034. arXiv:1312.6034 [cs].

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise, June 2017. URL http://arxiv.org/abs/1706.03825. arXiv:1706.03825 [cs, stat].

Robyn Speer, Joshua Chin, and Catherine Havasi. ConceptNet 5.5: an open multilingual graph of general knowledge. In *Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence*, AAAI'17, pp. 4444–4451, San Francisco, California, USA, February 2017. AAAI Press.

Harini Suresh and John V. Guttag. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In *Equity and Access in Algorithms, Mechanisms, and Optimization*, pp. 1–9, October 2021. doi: 10.1145/3465416.3483305. URL http://arxiv.org/abs/1901.10002. arXiv:1901.10002 [cs, stat].

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 1–9. IEEE Computer Society, June 2015. ISBN 978-1-4673-6964-0. doi: 10.1109/CVPR.2015.7298594. URL https://www.computer.org/csdl/proceedings-article/cvpr/2015/07298594/12OmNyOq4YE. ISSN: 1063-6919.

Yingtian Tang, Yutaro Yamada, Yoyo Zhang, and Ilker Yildirim. When are Lemons Purple? The Concept Association Bias of Vision-Language Models. In *Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing*, pp. 14333–14348, Singapore, 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.886. URL https://aclanthology.org/2023.emnlp-main.886.

Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, and Mark Yatskar. Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification. In *2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 19187–19197, Vancouver, BC, Canada, June 2023. IEEE. ISBN 9798350301298. doi: 10.1109/CVPR52729.2023.01839. URL https://ieeexplore.ieee.org/document/10204225/.

Chih-Kuan Yeh, Been Kim, Sercan Ö. Arık, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On Completeness-aware Concept-Based Explanations in Deep Neural Networks. *Advances in Neural Information Processing Systems*, 33:20554–20565, 2020.

Mert Yuksekgonul, Maggie Wang, and James Zou. Post-Hoc Concept Bottleneck Models. *The Eleventh International Conference on Learning Representations*, February 2023.

Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, and Jianfeng Gao. RegionCLIP: Region-based Language-Image Pretraining, December 2021. URL http://arxiv.org/abs/2112.09106. arXiv:2112.09106 [cs].

## A Specifications Of Datasets Used

Table 8 contains an overview of all datasets used by the original authors, which were experimented on for reproduction.

| Task | Dataset | Backbone model | Concepts used | Number of concepts used | Classes |
|------|---------|----------------|---------------|--------------------------|---------|
| Evaluating performance across different domains | CIFAR-10 | CLIP-ResNet50 | BRODEN (Fong & Vedaldi, 2018) | 170 | 10 |
| | CIFAR-100 | CLIP-ResNet50 | BRODEN (Fong & Vedaldi, 2018) | 170 | 100 |
| | COCO-Stuff | CLIP-ResNet50 | BRODEN (Fong & Vedaldi, 2018) | 170 | 20 |
| | CUB | ResNet18 | From Koh et al. (2020) | 112 | 200 |
| | HAM10000 | Inception | Derm7pt (Kawahara et al., 2019) | 8 | 2 |
| | SIIM-ISIC | Inception | Derm7pt (Kawahara et al., 2019) | 8 | 2 |
| Evaluating generated concepts | CIFAR-10 | CLIP-ResNet50 | ConceptNet-generated concepts | - | 10 |
| | CIFAR-100 | CLIP-ResNet50 | ConceptNet-generated concepts | - | 100 |
| | COCO-Stuff | CLIP-ResNet50 | ConceptNet-generated concepts | - | 20 |
| Controlled Metashift experiments for model editing | Metashift | CLIP-ResNet50 | BRODEN (Fong & Vedaldi, 2018) | 170 | 5 |
| User study | Metashift | CLIP-ResNet50 | ConceptNet-generated concepts | 440 | 5 |

Table 8: Overview of datasets used from the original study.

As part of the audio classification extension, the ESC-50, UrbanSound8K, and AudioSet datasets were utilized (Piczak, 2015; Salamon et al., 2014; Gemmeke et al., 2017). All datasets were created to tackle the issue of data scarcity in automatic sound classification and are used to further fine-tune the AudioCLIP audio encoder (Guzhov et al., 2022). An overview of them can be found in Table 9.

| Dataset | Backbone model | Concepts used | Number of concepts used |
|--------------|----------------|----------------------------|--------------------------|
| ESC-50 | AudioCLIP | Generated by ConceptNet | 146 |
| UrbanSound8K | AudioCLIP | From Salamon et al. (2014) | 31 |
| AudioSet | AudioCLIP | From Gemmeke et al. (2017) | 610 |

Table 9: Overview of the datasets used for audio classification.

## B Accuracy/Interpretability Trade-Off

Table 10 shows the different PCBM accuracies when varying the regularization strength. We see that for both CIFAR10 and CIFAR100, the accuracy is highest when the regularization strength is $\frac{0.1}{KN_c}$, after which it decreases. To examine the trade-off between accuracy and interpretability that arises when tuning the regularization strength, we plot the regularization strength against the number of non-zero parameters and the absolute sum of weights; for both metrics, lower is better. Figure 2a and Figure 2b show these plots for CIFAR10, where we see that a lower regularization strength leads to the trained PCBM doing worse on the interpretability metrics. As such, the regularization strength cannot be decreased to obtain higher accuracies without harming interpretability. Figure 2c and Figure 2d show similar trends for CIFAR100.

| Regularization strength | $\frac{10.0}{KN_c}$ | $\frac{1.0}{KN_c}$ | $\frac{0.1}{KN_c}$ | $\frac{0.01}{KN_c}$ | $\frac{0.001}{KN_c}$ |
|-------------------------|-------|-------|-------|-------|-------|
| CIFAR10 | 0.318 | 0.552 | 0.559 | 0.507 | 0.461 |
| CIFAR100 | 0.648 | 0.793 | 0.839 | 0.824 | 0.820 |

Table 10: PCBM accuracies for different regularization strengths.

![18_image_0.png](18_image_0.png) ![18_image_1.png](18_image_1.png) ![18_image_2.png](18_image_2.png) ![18_image_3.png](18_image_3.png)

Figure 2: Trade-offs between accuracy, class weights, and regularization strength for selected scenarios. (a) The absolute sum of all class weights against accuracy for a PCBM trained on CIFAR10. (b) The number of non-zero weights against accuracy for a PCBM trained on CIFAR10. (c) The absolute sum of all class weights against accuracy for a PCBM trained on CIFAR100. (d) The number of non-zero weights against accuracy for a PCBM trained on CIFAR100.

## C Example Weights Obtained

Below are examples of weights obtained for certain classes from the HAM10000, CIFAR100, and audio experiments.

![19_image_0.png](19_image_0.png)
As mentioned in section 4, we see that the obtained concept weights are indeed intuitive to humans, which supports the usability of PCBMs for interpretability.

![19_image_2.png](19_image_2.png)

Figure 3: Example weights for classes of HAM10000 (CAVs) and CIFAR100 (CLIP concepts).

We also attach examples of weights obtained from some of the random projection experiments and for CIFAR100. Both figures can be found below:

![19_image_1.png](19_image_1.png)

Figure 4: Example weights for classes of CIFAR100 (CLIP concepts).

Figure 5: Example weights for classes of HAM10000 (Random Concept Vectors).

## D Preprocessing Metashift For Model Editing Experiments

To generate the 10 scenarios used in the model editing experiments, the original authors used Metashift. They first defined two 5-way classification tasks and specified different independent scenarios for each. Furthermore, every scenario contains one class that is spuriously correlated with a different concept in its train and test datasets, and each split consists of 100 images per class. For instance, the first scenario of the first task in Table 13 consists of a training set where all images corresponding to the class bed contain a dog, and a test set where all images of beds contain cats, along with 100 unconstrained images for each remaining class. Table 13 lists each task with its 5 classes and corresponding scenarios. Although there is conflicting information on the number of images per class (either 50 or 100), it is worth noting that of the 10 scenarios present, four do not have 100 images. We considered 250 images per split too little, so we discarded the four problematic scenarios.

| Class | CLIP Concepts (weights) | Labeled Concepts (weights) |
|-------|-------------------------|----------------------------|
| air_conditioner | Doorbell (17.85) | machine (10.698) |
| | Bicycle bell (17.81) | automobile (10.238) |
| | Cat communication (16.87) | air (4.116) |
| car_horn | Shofar (14.49) | honk (4.858) |
| | Sonar (12.40) | bullet (0.334) |
| | Harmonic (11.56) | bang (0.297) |
| children_playing | Babbling (20.03) | toy (6.711) |
| | Hubbub, speech noise, speech babble (19.47) | school (5.524) |
| | Chatter (18.26) | fun (5.484) |
| dog_bark | Growling (29.54) | cry (12.929) |
| | Yip (27.51) | dog (10.003) |
| | Bow-wow (21.07) | woof (8.170) |
| drilling | Cattle, bovinae (22.48) | construction (4.338) |
| | Honk (17.71) | machine (0.000) |
| | Canidae, wolves (13.47) | air (0.000) |
| engine_idling | Purr (19.06) | automobile (7.059) |
| | Meow (17.67) | machine (6.176) |
| | Claws (16.95) | motorcycle (5.037) |
| gun_shot | Silence (14.86) | bang (4.549) |
| | Arrow (13.78) | bullet (4.545) |
| | Chainsaw (12.55) | honk (1.769) |
| jackhammer | Snake (23.94) | rock (5.604) |
| | Steelpan (19.34) | drill (4.969) |
| | Tambourine (18.31) | construction (0.166) |
| siren | Ice cream truck, ice cream van (28.41) | wail (16.096) |
| | Air horn, truck horn (27.81) | alarm (9.479) |
| | Ambulance (27.61) | noise (5.034) |
| street_music | Shofar (16.56) | entertainment (8.744) |
| | Water tap, faucet (16.43) | automobile (4.411) |
| | Clarinet (14.77) | rap (3.860) |

Table 11: Comparison of the top three concept weights between CAVs (labeled concepts) and concepts obtained using CLIP for UrbanSound8K.
Furthermore, it is unclear to us how the domain cow(cat) could even be generated, as the Metashift domain lookup table4 does not register any such image and has not been changed since the creation of the Metashift GitHub repository. We speculate that the original authors may have used a different base dataset (such as COCO) for Metashift, but we chose to stick with the Visual Genome dataset, as it was the default option and no custom base datasets were mentioned in the original paper. We make the chosen six scenarios available on HuggingFace, along with the entirety of our deterministic pre-processing pipeline, for full transparency. Although we report all of our results using a cherry-picked dataset (meaning that we manually pick the best images representing the domain shift), we also provide a randomly selected dataset.

4https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/full-candidate-subsets.pkl

| Class | CLIP Concepts (weights) | Labeled Concepts (weights) |
|-------|-------------------------|----------------------------|
| dog | Growling (3260.187) | canine (9.666) |
| | Yip (2679.440) | woof (8.258) |
| | Bow-wow (2495.401) | four_legged_animal (7.040) |
| rooster | Power windows, electric windows (2253.037) | crowing (12.931) |
| | male chicken (1976.757) | chicken (10.502) |
| | Crowing, cock-a-doodle-doo (1950.169) | farm_animal (10.376) |
| pig | Ping (2314.484) | engine (4.299) |
| | Chewing, mastication (2257.323) | amphibian (3.932) |
| | Snort (2160.123) | oink_oink (2.580) |
| cow | Heavy metal (2201.455) | moo (12.866) |
| | Motorcycle (2009.351) | living_creature (10.481) |
| | Whir (1978.380) | nature (7.279) |
| frog | Pig (2828.164) | amphibian (18.596) |
| | Ringtone (2783.966) | ribbit (17.952) |
| | Frog (2761.661) | windshield (4.473) |
| cat | Cat (2598.395) | feline (10.581) |
| | Cat communication (2558.293) | meow (8.726) |
| | Caterwaul (2385.926) | scratch_furniture (7.948) |
| hen | Pigeon, dove (1894.772) | chirps (7.573) |
| | Dishes, pots, and pans (1873.768) | bird (4.638) |
| | Chink, clink (1797.078) | farm_animal (3.759) |
| insects | Strum (1400.697) | move (7.170) |
| | Whale vocalization (1386.632) | bugs (6.840) |
| | Croak (1372.076) | animals (4.380) |

Table 12: Comparison of the top three concept weights between CAVs (labeled concepts) and concepts obtained using CLIP for ESC-50.

| Task | Train domain | Test domain | Availability |
|------|--------------|-------------|--------------|
| Task 1 (airplane, bed, car, cow, keyboard) | bed(dog) | bed(cat) | Enough images |
| | bed(cat) | bed(dog) | Enough images |
| | car(dog) | car(cat) | 50 < car(cat) < 100 |
| | car(cat) | car(dog) | 50 < car(cat) < 100 |
| | cow(dog) | cow(cat) | cow(cat) = 0 |
| | keyboard(dog) | keyboard(cat) | 0 < keyboard(dog) < 50 |
| Task 2 (beach, computer, motorcycle, stove, table) | table(cat) | table(dog) | Enough images |
| | table(dog) | table(cat) | Enough images |
| | table(books) | table(dog) | Enough images |
| | table(books) | table(cat) | Enough images |

Table 13: Metashift task splits and their availability.

## E Per Scenario Performance On Model Editing Experiments

Similar to Table 6 of the original paper, we compute the accuracies and standard errors of each scenario and method as described in Appendix D.
As shown in Table 14, neither pruning nor pruning with normalization shows the improvements seen in the original study, even when averaging over 10 seeds and trying all the ambiguous configurations of regularization strengths and backbones mentioned (see Figures 6a and 6b for more information). For Table 14 we use CLIP's pre-trained ResNet50 with λ = 1.7 (which results in an overall regularization strength of 0.002), a learning rate of 0.5, α = 0.99, and the SGDClassifier. In Figure 6, we see that no matter the regularization strength or the backbone used, the accuracy change does not compare to the original paper. In addition, the ResNet18 accuracy change is negligible at best.

| Train | Test | Original | Prune | Prune + Normalize | Finetune |
|--------------|------------|---------------|---------------|---------------------|---------------|
| bed(cat) | bed(dog) | 0.928 ± 0.001 | 0.930 ± 0.001 | 0.930 ± 0.001 | 0.946 ± 0.001 |
| bed(dog) | bed(cat) | 0.926 ± 0.001 | 0.925 ± 0.001 | 0.925 ± 0.001 | 0.943 ± 0.001 |
| table(books) | table(cat) | 0.761 ± 0.002 | 0.795 ± 0.004 | 0.792 ± 0.003 | 0.924 ± 0.001 |
| table(books) | table(dog) | 0.765 ± 0.002 | 0.805 ± 0.002 | 0.804 ± 0.002 | 0.948 ± 0.001 |
| table(cat) | table(dog) | 0.923 ± 0.002 | 0.918 ± 0.003 | 0.918 ± 0.003 | 0.946 ± 0.001 |
| table(dog) | table(cat) | 0.883 ± 0.001 | 0.874 ± 0.001 | 0.872 ± 0.002 | 0.927 ± 0.002 |

Table 14: Accuracy and standard error for Metashift model editing tasks over 10 seeds.

To make sure that our implementation does not differ substantially from the intended one, we build two more: first, one that is as close as possible to the existing authors' code, which we name Strict ResNet50, and second, an implementation that uses the Adam optimizer for the training procedure (as stated in the original paper), which we name Adam ResNet50. In Figures 7a and 7b, we can observe that both implementations perform considerably worse than ours.

![22_image_0.png](22_image_0.png)

(a) Boxplots of ResNet18's accuracy gain compared to the unedited model for different regularization strengths (b) Boxplots of ResNet50's accuracy gain compared to the unedited model for different regularization strengths

Figure 6: Boxplots for the accuracy gain of pruning and pruning with normalization over the unedited model for a given model and λ value.

## F Human-Guided Editing Experiment - Setup

To replicate the human-guided editing experiment, we used the following setup and design assumptions (a sketch of the pruning operation itself follows this list):

- We based our dataset on the same source as the Metashift experiment. The training dataset consisted of 100 samples of a class with a spurious correlation, while the test dataset comprised 100 samples of the same class with correlations to any concepts except the one used in training.
- We utilized the same model and training methods as in the original repository, with minor modifications (i.e., to the dataloaders and pruning capabilities) to align with initial results.
- Greedy (oracle) and random pruning used the same number of simultaneously pruned concepts as the users. For this, we assumed that for each task, each unique number of concepts pruned by users results in a model (e.g., if users pruned between 1 and 6 concepts for a task, we would have one greedy/random model for 1 concept pruned, 2 concepts pruned, and so on).
- We reproduced the user study following the details provided in the original paper by introducing the terminology to the users (Figure 8) and presenting a similar task of choosing concepts to prune per scenario (Figure 9).
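As a reference for the editing operations compared above, the following is a minimal sketch of how we understand pruning and pruning with normalization on a PCBM's linear head. The exact rescaling in the original code may differ, so this is an assumption rather than a definitive implementation:

```python
import numpy as np

def prune(coef, class_idx, concept_idxs, normalize=False):
    """Zero the weights of the given concepts for one class.

    With normalize=True, rescale the remaining weights of that class so the
    row keeps its original L2 norm (our reading of 'Prune + Normalize').
    """
    w = coef.copy()
    original_norm = np.linalg.norm(w[class_idx])
    w[class_idx, concept_idxs] = 0.0
    remaining_norm = np.linalg.norm(w[class_idx])
    if normalize and remaining_norm > 0:
        w[class_idx] *= original_norm / remaining_norm
    return w
```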
![23_image_0.png](23_image_0.png)

(a) Boxplots of Adam ResNet50's accuracy gain compared to the unedited model for different regularization strengths (b) Boxplots of Strict ResNet50's accuracy gain compared to the unedited model for different regularization strengths

Figure 7: Boxplots for the accuracy gain of pruning and pruning with normalization over the unedited model for a given model and λ value for the other implementations.

![23_image_1.png](23_image_1.png)

Figure 8: First page of the survey introducing the task.

Figure 9: Example scenario with choice of concepts.

## G User Study For Model Editing

Replicating Table 8 of the original paper, Table 15 contains a full overview of the results obtained from the user study. The original authors used only the PCBM-h results for their breakdown. We attempt to replicate these results, and while we also observe increases and decreases in performance across scenarios, the magnitude of the increases does not align with the original claim. Moreover, using PCBM-h for reporting the final results does not provide reliable insight into the method's performance. By visualizing the PCBM results instead (shown in Table 16), we can see that pruning (in the settings defined, where we allow a larger number of simultaneously pruned concepts than the original paper) reduces the per-class accuracy considerably.
| Scenario | Unedited | Random Prune | User Prune | Greedy Prune |
|-----------------------|-----------------------------|----------------|---------------|----------------|
| | Shifted Class Test Accuracy | | | |
| bed(dog) | 0.732 ± 0.018 | 0.727 ± 0.019 | 0.738 ± 0.027 | 0.729 ± 0.021 |
| keyboard(cat) | 0.918 ± 0.003 | 0.906 ± 0.009 | 0.903 ± 0.010 | 0.911 ± 0.010 |
| bed(cat) | 0.864 ± 0.006 | 0.863 ± 0.008 | 0.854 ± 0.009 | 0.911 ± 0.010 |
| couch(cat) | 0.965 ± 0.005 | 0.962 ± 0.005 | 0.961 ± 0.007 | 0.964 ± 0.004 |
| painting(lamp) | 0.534 ± 0.021 | 0.509 ± 0.012 | 0.509 ± 0.012 | 0.511 ± 0.012 |
| pillow(clock) | 0.764 ± 0.010 | 0.762 ± 0.005 | 0.759 ± 0.007 | 0.763 ± 0.002 |
| television(fireplace) | 0.701 ± 0.012 | 0.686 ± 0.019 | 0.678 ± 0.015 | 0.685 ± 0.020 |
| fork(tomato) | 0.565 ± 0.015 | 0.580 ± 0.016 | 0.579 ± 0.014 | 0.575 ± 0.019 |
| car(snow) | 0.618 ± 0.008 | 0.625 ± 0.015 | 0.620 ± 0.016 | 0.617 ± 0.012 |
| | Overall Test Accuracy | | | |
| bed(dog) | 0.903 ± 0.002 | 0.903 ± 0.003 | 0.905 ± 0.006 | 0.903 ± 0.003 |
| keyboard(cat) | 0.901 ± 0.002 | 0.898 ± 0.002 | 0.898 ± 0.002 | 0.899 ± 0.002 |
| bed(cat) | 0.853 ± 0.002 | 0.852 ± 0.002 | 0.851 ± 0.002 | 0.853 ± 0.002 |
| couch(cat) | 0.898 ± 0.001 | 0.898 ± 0.002 | 0.897 ± 0.002 | 0.898 ± 0.002 |
| painting(lamp) | 0.840 ± 0.002 | 0.840 ± 0.002 | 0.841 ± 0.003 | 0.841 ± 0.002 |
| pillow(clock) | 0.877 ± 0.002 | 0.879 ± 0.002 | 0.878 ± 0.002 | 0.879 ± 0.002 |
| television(fireplace) | 0.888 ± 0.003 | 0.886 ± 0.004 | 0.884 ± 0.003 | 0.886 ± 0.005 |
| fork(tomato) | 0.719 ± 0.003 | 0.726 ± 0.004 | 0.726 ± 0.003 | 0.725 ± 0.004 |
| car(snow) | 0.808 ± 0.002 | 0.809 ± 0.003 | 0.807 ± 0.003 | 0.807 ± 0.003 |

Table 15: Reproduction results of the user study pruning with PCBM-h.

## H Multimodal Performance Based On Initial Choice Of Classes

In designing our user study experiment, we explored different approaches to the dataset and multimodal representations. Notably, we had to consider the choice of initial elements (classes) used as a basis for querying CLIP and ConceptNet, as it can significantly influence the results obtained. When determining the classes for concept retrieval in our user study scenarios, we compared several approaches:

- Using the 5 main classes on which we are conducting classification.
- Incorporating these 5 classes along with the class exhibiting spurious correlations.
- Including all classes and spurious correlation classes used across the scenarios.
- Employing the CIFAR100 classes, which were part of the original code.

These different configurations allowed us to assess how class selection impacts the study's outcomes. Even for a simple 5-class classification task, the difference in accuracy (shown in Table 17) suggests that more classes/initial objects lead to better accuracy. In addition, the results exhibit better interpretability of the data and predictions, with more suitable high-weighted concepts: for the class "bed" with spurious context "cat", we see terms such as "bedroom furniture" and "feline" when using CIFAR100 classes (Figure 10d), whereas with limited initial categories, terminology from other classes spills over, such as "car seat" (Figure 10a).
| Scenario | Unedited | Random Prune | User Prune | Greedy Prune |
|-----------------------|-----------------------------|----------------|---------------|----------------|
| | Shifted Class Test Accuracy | | | |
| bed(dog) | 0.614 ± 0.166 | 0.099 ± 0.163 | 0.051 ± 0.107 | 0.208 ± 0.228 |
| keyboard(cat) | 0.843 ± 0.049 | 0.199 ± 0.274 | 0.051 ± 0.107 | 0.236 ± 0.293 |
| bed(cat) | 0.886 ± 0.029 | 0.415 ± 0.318 | 0.239 ± 0.279 | 0.389 ± 0.341 |
| couch(cat) | 0.979 ± 0.010 | 0.390 ± 0.370 | 0.310 ± 0.339 | 0.461 ± 0.385 |
| painting(lamp) | 0.467 ± 0.061 | 0.132 ± 0.143 | 0.132 ± 0.128 | 0.188 ± 0.165 |
| pillow(clock) | 0.707 ± 0.035 | 0.264 ± 0.234 | 0.279 ± 0.252 | 0.364 ± 0.271 |
| television(fireplace) | 0.539 ± 0.059 | 0.117 ± 0.167 | 0.153 ± 0.201 | 0.219 ± 0.225 |
| fork(tomato) | 0.790 ± 0.035 | 0.174 ± 0.229 | 0.156 ± 0.236 | 0.216 ± 0.269 |
| car(snow) | 0.762 ± 0.061 | 0.175 ± 0.024 | 0.256 ± 0.279 | 0.224 ± 0.269 |
| | Overall Test Accuracy | | | |
| bed(dog) | 0.874 ± 0.025 | 0.774 ± 0.030 | 0.764 ± 0.021 | 0.795 ± 0.044 |
| keyboard(cat) | 0.839 ± 0.012 | 0.714 ± 0.055 | 0.718 ± 0.043 | 0.722 ± 0.058 |
| bed(cat) | 0.820 ± 0.014 | 0.735 ± 0.064 | 0.698 ± 0.057 | 0.739 ± 0.068 |
| couch(cat) | 0.852 ± 0.010 | 0.750 ± 0.070 | 0.736 ± 0.065 | 0.764 ± 0.073 |
| painting(lamp) | 0.806 ± 0.013 | 0.756 ± 0.024 | 0.758 ± 0.021 | 0.766 ± 0.028 |
| pillow(clock) | 0.854 ± 0.005 | 0.781 ± 0.042 | 0.784 ± 0.045 | 0.798 ± 0.049 |
| television(fireplace) | 0.825 ± 0.009 | 0.755 ± 0.029 | 0.762 ± 0.034 | 0.772 ± 0.039 |
| fork(tomato) | 0.738 ± 0.012 | 0.661 ± 0.036 | 0.661 ± 0.034 | 0.668 ± 0.040 |
| car(snow) | 0.800 ± 0.007 | 0.699 ± 0.050 | 0.714 ± 0.054 | 0.707 ± 0.055 |

Table 16: Performance of PCBMs in the context of user study pruning.

| | Scenario classes | Scenario and spurious classes | All scenarios' classes | CIFAR100 classes |
|-----------------------------|------------------|-------------------------------|------------------------|------------------|
| Overall Test Accuracy | 0.797 ± 0.007 | 0.798 ± 0.015 | 0.802 ± 0.008 | 0.819 ± 0.014 |
| Shifted Class Test Accuracy | 0.864 ± 0.036 | 0.82 ± 0.051 | 0.852 ± 0.037 | 0.875 ± 0.024 |

Table 17: Accuracy based on the initial choice of classes used for concept retrieval.

![26_image_0.png](26_image_0.png)

(a) Concepts extracted using the 5 scenario classes explained above.

![26_image_1.png](26_image_1.png)

(b) Concepts extracted using the 5 scenario and spurious classes.

![26_image_2.png](26_image_2.png)

(c) Concepts extracted using all scenario classes.

![26_image_3.png](26_image_3.png)

(d) Concepts extracted using CIFAR100 classes.

Figure 10: Concepts extracted from the various scenario configurations established.

## I Additional Examples Of Obtained Saliency Maps

Below, we attach an example of the saliency maps for a television image. The concepts used are the four most important concepts for the class "television" and the concept "floor", which co-occurs with the television at the bottom of the image. The results obtained here align with the "bike" saliency map results.

![27_image_0.png](27_image_0.png)

Figure 11: Extension results of the saliency map experiment. Shown here are ten saliency maps for an image from the class "television" of CIFAR100. Five CAV concepts and five CLIP concepts are used.
Review 1: Summary: The work is a reproduction effort of [1]. As such, it

- checks the reproducibility of experimental results in support of [1]'s claims, using [1]'s released code as well as the authors' own investigation and implementation for the missing parts.
- conducts an independent user study mostly using the conditions of [1]'s study.
- investigates the importance and validity of concepts in the concept-bottleneck interpretability of [1] through an ablation study of generated concepts as well as a semantic evaluation of the discovered concepts.
- evaluates the method on audio data, which is beyond [1]'s evaluation setup.
- improves the codebase to be more user-friendly.

[1] Yuksekgonul, Mert, Maggie Wang, and James Zou. "Post-hoc Concept Bottleneck Models." The Eleventh International Conference on Learning Representations. 2022.

Strengths and Weaknesses: **Strengths.**

- It identifies the missing information required for reproduction and makes a plausible effort, either by looking in the authors' codebase or prior work, to fill in the gap, implement, and experiment accordingly.
- The list of required assumptions corresponding to the missing information is further provided.
- The addition of a new benchmark of a different modality (audio) is interesting.
- The conduct of a new user study helps with understanding the reliability of the original work.
- There are informative findings regarding the reproducibility of [1]:
  - the main claim, regarding the matching performance, is generally reproducible, including on the new audio modality;
  - underperformance of the reported CUB baseline in [1];
  - smaller improvement using manual editing of concepts;
  - nonexistent improvement, and sometimes degradation of results, with the user study, especially when it comes to non-hybrid PCBM.

**Weaknesses.**

- Presentation can be significantly improved; some examples are given in the requested changes.
- It is not clear how the use of "random projection" for the concepts is relevant to the PCBM study.
- It is not clear how the object-concept correspondence values in Table 6 are produced.
- It is not clear how the concepts are formed for the audio classification experiments in Section 4.2.3.

Requested Changes: Are the reported improvements in Table 3 ("Edit Gain") statistically significant? Can this be repeated to make sure the differences, especially in the case of pruning, are reliable?

- p3. <.,.> should be defined as the dot product.
- eq1. The closing parenthesis for the loss should come after the label y.
- eq1. Sampling (~) or membership ($\in$) are usually not represented by the minus sign (-).
- p2. If $\mathbf{\omega}$ is a vector, then g() can only represent a binary classifier; is that the intention?
- eq2. Parentheses need to be corrected.
- Check when the differences are meant to be percentage points and use percentage points in the text instead of percent (%).
- The Claim 2 discussions on p.8 do not refer to the relevant table. Related to that, it is better to have the results in Table 1 repeated in Table 2 and named differently, to compare with and without provided concepts.
- Similarly, the results for the user study in Claim 3 do not refer to the corresponding tables.
- For Table 5, it would be good to have the results from "meaningful" concepts repeated here for comparison.

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: _Note: To my understanding, this is a submission for the reproducibility challenge (MLRC) 2023.
Please correct me if this is not._ This report aims to reproduce the results of Yuksekgonul et al. (2023), which proposed post-hoc concept bottleneck models (PCBM). The authors mainly focus on validating three core claims in the original paper: (1) PCBMs closely achieve the original model's performance; (2) PCBMs can be trained without labeled concept datasets; (3) PCBMs enable global model editing. This report successfully validates the first claim, but finds that the second claim can only be partially supported. The third claim, unfortunately, has not been reproduced well. Stepping further, the paper also conducts several additional experiments on PCBMs. One of the experiments, interestingly, suggests that the interpretability of PCBMs may be more limited than it looks.

Strengths and Weaknesses: **Strengths**

- The authors have successfully identified and distilled the key claims of the original paper, and systematically validated the claims through experiments. In particular, the categorization of the main points into three claims has been very effective and adds clarity to the overall manuscript.
- The paper has meticulously reported small experimental details, even including the hardware details, for future reference.
- The paper accurately points out the details that are missing in the original paper, and clearly explains how the authors approached filling the gap.

**Weaknesses**

- Writing could have been a bit clearer. I could not locate the main messages of the manuscript very easily. In many cases, the key sentences appeared in the middle of a lengthy section or paragraph, instead of at the beginning.
- The motivation behind the additional "random projection" experiment was not very clear to me. Is this merely a sanity check that the concept matrix actually plays a significant role in PCBMs? Also, it is not super clear to me why the authors brought up the J-L lemma; it adds some mathematical fanciness, but I do not think the lemma adds any concreteness to the discussion.
- The choice to reduce the COCO-Stuff and SIIM-ISIC datasets to a smaller scale is totally understandable, but nevertheless undermines the credibility of the reproducibility study.

Requested Changes: The main issues that I have are about the clarity of the manuscript. In particular, I recommend:

- adding several sentences to the introduction that summarize the findings: whether the results were satisfactory, what the key differences were, etc.
- slightly more explanation of the intentions behind the random projection experiment.

Other than these, I am generally happy with the manuscript.

Broader Impact Concerns: I believe that the current version is already good in this respect.

==================================================

Review 3: Summary: In this paper, the authors reproduced Post-hoc Concept Bottleneck Models (PCBMs) to validate the experimental results presented in the original paper. This validation includes confirming whether PCBMs can 1) achieve performance similar to the original model, 2) operate without the need for labeled concept datasets, and 3) enable global model editing. Additionally, the interpretability and scalability of PCBMs were validated through a sanity check of concept vectors, an assessment of alignment with actual concepts, and extension experiments to audio classification. Through these validations, this paper demonstrates that, to some extent, the experimental results of the previous literature can be reproduced, with the exception of global model editing.
Additionally, the paper shows that PCBMs possess interpretability and scalability.

Strengths and Weaknesses: Strengths:

- The paper provides clear explanations of what is clear and unclear in order to reproduce the results.
- The validation of three scenarios to verify the operation of post-hoc CBM, beyond the results shown in the paper, is highly valuable. In particular, the validation of object-concept correspondence appears crucial.

Weaknesses:

- The analysis of experimental results is overly simplistic. Additional discussion seems necessary.
- Verification in other domains has been limited to performance aspects only, which is regrettable. More diverse examples and experimental validations are needed in this regard.

Requested Changes: 1) In this paper, the authors compared the performances of their trained model with the reported performances in the original paper to initially verify if the performance is reproducible. To facilitate a proper comparison, it would be beneficial to include the performance reported in the original paper in the tables of the paper, allowing readers to compare effectively. 2) The interpretability aspect is crucial, and Section 3.5.2 touches on important points. However, there are considerations to be made regarding the following: [a] mentions that CAVs are sensitive to negative samples, suggesting that issues may arise not only due to the method of projection but also inherent to CAVs themselves. [b] demonstrates issues with concept association bias in vision-language models like CLIP, which could potentially enhance the discussion in Section 3.5.2.

[a] Kim, Siwon, et al. "Grounding Counterfactual Explanation of Image Classifiers to Textual Concept Space." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[b] Tang, Yingtian, et al. "When are Lemons Purple? The Concept Association Bias of Vision-Language Models." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.

3) In Figure 1, the discrepancy between CLIP concepts and CAV concepts in textual representation begs further explanation. Why are they different? The observation that saliency maps highlight primary objects rather than fine-grained parts relevant to concepts is closely related to findings in various papers analyzing CLIP. Citing these papers could enrich the discussion. 4) Expanding the validation of audio classification would be beneficial. Currently, the paper only covers the use of multimodal models like AudioCLIP. Exploring the application of techniques like CAVs to audio data could provide a broader validation of post-hoc CBM across different data domains. More examples are needed besides Figure 4, and the results presented in Section 4.2.3 only cover UrbanSound8K, despite the various audio datasets mentioned in Section 3.5.3. Additionally, detailed explanations of the settings for PCBM-h (e.g., dimensions for the residual path) are lacking. Overall, there is a shortage of explanation and validation for these experiments. 5) Further minor comments include:

- Ensuring consistent terminology for the SIIM-ISIC dataset.
- Properly indicating in the main text when referring to figures and tables included in the appendix, for readability.

Broader Impact Concerns: There is no section for a Broader Impact Statement. The significance of this research lies in verifying the reproducibility of post-hoc CBM and conducting its validation in various scenarios, under the assumption that post-hoc CBM holds significance.
It would be beneficial to include a Broader Impact Statement section based on these considerations.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper was reviewed by three expert reviewers. During the rebuttal period, the authors addressed the concerns raised by the reviewers well. After the rebuttal, all the reviewers were willing to accept the paper. Following the TMLR official acceptance criteria (https://jmlr.org/tmlr/acceptance-criteria.html), I mainly focused on whether this paper has sufficient generalized insights and actionable lessons for the TMLR audience. Note that all the experiments of this reproducibility report are based on the authors' code repository. I don't think showing the re-run results of the original code repository itself provides sufficient generalized insight. Thus, my decision is based on the additional contributions of the authors, namely the random projection experiments, object-concept correspondence, and audio classifier results. After carefully reading the reviewers' comments and the revised paper, I think the contribution of this paper is slightly above the bar of the TMLR criterion. Specifically, I think the conclusion of this paper would be interesting for audiences interested in applying PCBM (or any other CBM variant) to their own task rather than to the well-known benchmarks (such as CUB). We still have a large gap to close in improving CBMs in terms of their interpretability and generalizability.

==================================================
# Learning Optimal Policies Through Contact In Differentiable Simulation

Anonymous authors Paper under double-blind review

## Abstract

Model-Free Reinforcement Learning (MFRL) has garnered significant attention for its effectiveness in continuous motor control tasks. However, its limitations become apparent in high-dimensional problems, often leading to suboptimal policies even with extensive training data. Conversely, First-Order Model-Based Reinforcement Learning (FO-MBRL) methods harnessing differentiable simulation offer more accurate gradients but are plagued by instability due to exploding gradients arising from the contact approximation model. We propose Adaptive Horizon Actor Critic (AHAC), a massively parallel FO-MBRL approach that truncates trajectory gradients upon encountering stiff contact, resulting in more stable and accurate gradients. We experimentally show this on a variety of simulated locomotion tasks, where our method achieves up to 64% higher asymptotic episodic reward than state-of-the-art MFRL algorithms and less hyper-parameter sensitivity than prior FO-MBRL methods. Moreover, our method scales to high-dimensional motor control tasks while maintaining better wall-clock-time efficiency. https://adaptive-horizon-actor-critic.github.io/

## 1 **Introduction**

Reinforcement Learning (RL) has achieved remarkable success in complex tasks, such as Atari games (Mnih et al., 2013), Minecraft (Hafner et al., 2023) and Go (Silver et al., 2017). Combined with the Policy Gradients Theorem (Sutton et al., 1999), we can derive approaches for solving continuous motor control tasks. Although some of these Model-Free Reinforcement Learning (MFRL) approaches have achieved impressive results (Hwangbo et al., 2017; Akkaya et al., 2019; Hwangbo et al., 2019), they suffer from subpar sample efficiency, limiting their practical utility (Amos et al., 2021). Consequently, addressing this limitation has been a central research focus for the past few years.

An alternative approach, Model-Based Reinforcement Learning (MBRL), seeks to model the environment's dynamics for improved efficiency. However, most MBRL methods still rely on experience data to learn dynamics and often produce suboptimal policies compared to MFRL. To enhance sample efficiency, high-performance physics simulators can be employed, offering abundant training data in ideal scenarios. When combined with computationally efficient MFRL methods, these simulators enable quick training of robots for tasks such as walking (Rudin et al., 2022). However, a question remains: even with extensive data, can MFRL effectively tackle high-dimensional motor control problems?

Given the substantial effort put into producing accurate and efficient simulators, one should naturally ask: why don't we use them as models for MBRL? In turn, this makes it tempting to learn the policy using first-order methods, which are theoretically more efficient (Berahas et al., 2022). This has been explored in the model-based control literature, where we differentiate the model in order to plan trajectories for applications such as autonomous driving (Kabzan et al., 2019) and agile quadrotor maneuvers (Kaufmann et al., 2020). However, using first-order methods to learn feedback policies in standard RL settings has received limited attention.
Non-differentiable contact point discontinuities in available simulation models have been a major hurdle, leading to the development of differentiable simulators (Hu et al., 2019a; Freeman et al., 2021; Heiden et al., 2021; Xu et al., 2021).

![1_image_0.png](1_image_0.png)

Figure 1: We find that FO-MBRL methods suffer from high dynamics gradients ($\|\nabla f(s, a)\| \gg 0$), which often arise from stiff contact approximation. Our proposed method, AHAC, truncates model-based trajectories at the point of and during stiff contact, thus avoiding both the gradient bias and learning instability exhibited by previous methods using differentiable simulation.

Whereas the model-based control literature hand-designs bespoke models for each problem, differentiable simulation aims to create a physics engine that is fully differentiable. Thus, applying it to a different problem is as easy as defining the structure of the problem (e.g., joints and links) and leaving the physics to be calculated by the engine. These simulators have enabled a new family of FO-MBRL algorithms that can efficiently learn complex control tasks. Short Horizon Actor Critic (SHAC) (Xu et al., 2022) is such an approach, which utilises the popular actor-critic paradigm (Konda & Tsitsiklis, 1999). The actor is trained in a first-order fashion, while the critic is trained model-free. This allows SHAC to learn through the highly non-convex landscape by using the critic as a smooth surrogate of the cumulative reward objective, and to avoid exploding gradients by employing short-horizon rollouts. While SHAC boasts incredible sample efficiency when compared against MFRL, it is also brittle, exhibits higher learning instability, and requires extensive hyper-parameter tuning (Suh et al., 2022).

In this study, we attempt to address those issues and shift our focus from sample efficiency to the asymptotic performance of FO-MBRL methods in massively parallel differentiable simulations. We aim to answer the following questions:

1. What causes learning instability in FO-MBRL approaches such as SHAC? Our analysis reveals that first-order methods exhibit high empirical bias when estimating gradients through sample-limited Monte-Carlo approximation, hindering efficiency and resulting in suboptimal policies. This bias is primarily driven by the high-magnitude dynamical gradients ($\|\nabla f(s, a)\| \gg 0$) arising from stiff contact approximation.

2. Can FO-MBRL methods outperform MFRL in finding optimal policies? We introduce Adaptive Horizon Actor Critic (AHAC), a first-order model-based algorithm that mitigates gradient issues during stiff contact by adapting its trajectory rollout horizon (Figure 1). Experimentally, we show that it is capable of achieving up to 64% more asymptotic reward in comparison to model-free approaches across complex locomotion tasks.

3. Which methods are suitable for scaling to high-dimensional motor control tasks? We find that AHAC exhibits lower variance during training, offering stability and gradient accuracy, allowing it to scale effectively to high-dimensional motor control tasks with action dimension $\mathcal{A} = \mathbb{R}^{152}$.

## 2 **Preliminaries**

In this paper, we study discrete-time, finite-horizon, fully-observable reinforcement learning problems where the state of the system is defined as $s \in \mathbb{R}^n$, actions are defined as $a \in \mathbb{R}^m$, and the dynamics are governed by the function $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$. Unlike traditional RL formulations, here we assume that the dynamics (i.e., the transition function) are deterministic.
At each timestep t, we sample an action from a stochastic policy $a_t \sim \pi_{\theta}(\cdot|s_t)$, parameterised by $\theta \in \mathbb{R}^d$, and in turn we receive a reward from $r : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$. We can define the H-step return as:

$$R_{H}(\mathbf{s}_{1},\mathbf{\theta})=\sum_{h=1}^{H}r(\mathbf{s}_{h},\mathbf{a}_{h})\qquad s.t.\quad\mathbf{s}_{h+1}=f(\mathbf{s}_{h},\mathbf{a}_{h})\quad\mathbf{a}_{h}\sim\pi_{\mathbf{\theta}}(\cdot|\mathbf{s}_{h})$$

As is typical in RL, the objective of the policy is to maximise the cumulative reward:

$$\max_{\mathbf{\theta}}J(\mathbf{\theta}):=\max_{\mathbf{\theta}}\ \mathbb{E}_{\substack{\mathbf{s}_1\sim\rho\\ \mathbf{a}_h\sim\pi(\cdot|\mathbf{s}_h)}}[R_{H}(\mathbf{s}_{1},\mathbf{\theta})]\tag{1}$$

where ρ is the initial state distribution. Without loss of generality, we make our work easier to follow by making the following assumption:

Assumption 2.1. We assume that ρ is a Dirac delta distribution.

Similar to prior work (Duchi et al., 2012; Berahas et al., 2022; Suh et al., 2022), we are trying to exploit the smoothing properties of stochastic optimisation on the landscape of our optimisation objective. Following recent successful deep-learning approaches to MFRL (Schulman et al., 2017; Haarnoja et al., 2018), we assume:

Assumption 2.2. We assume that our policy is a stochastic one, parameterised by θ and expressed as $\pi_{\theta}(\cdot|s)$.

In order to solve our main optimisation problem in Equation 1, we consider using stochastic gradient estimates of J(θ). These can be obtained via zeroth-order and first-order methods. To guarantee the existence of ∇J(θ), we need to make certain assumptions:

Definition 2.3. A function $g : \mathbb{R}^d \to \mathbb{R}^d$ has *polynomial growth* if there exist constants a, b such that $\forall z \in \mathbb{R}^d$, $\|g(z)\| \leq a(1 + \|z\|^b)$.

Assumption 2.4. To ensure gradients are well defined, we assume that the policy $\pi_{\theta}(\cdot|s)$ is continuously differentiable $\forall s \in \mathbb{R}^n, \forall \theta \in \mathbb{R}^d$. Furthermore, the system dynamics f and reward r have polynomial growth.

## 2.1 **Zeroth-Order Batch Gradient (ZOBG) Estimates**

These weak assumptions are sufficient to make J(θ) differentiable in expectation by simply taking samples of the function value in a zeroth-order fashion. This gives us estimates of ∇J(θ) via the stochasticity introduced by π, as first shown in (Williams, 1992), and commonly referred to as the *Policy Gradient Theorem* (Sutton et al., 1999).

Definition 2.5. Given a sample of the H-step return $R_H(s_1) = \sum_{h=1}^{H} r(s_h, a_h)$ following the policy π, we can estimate zeroth-order policy gradients via:

$$\nabla^{[0]}_{\mathbf{\theta}}J(\mathbf{\theta}):=\mathbb{E}_{\mathbf{a}_h\sim\pi_{\mathbf{\theta}}(\cdot|\mathbf{s}_h)}\Bigg[R_H(\mathbf{s}_1)\sum_{h=1}^{H}\nabla_{\mathbf{\theta}}\log\pi_{\mathbf{\theta}}(\mathbf{a}_h|\mathbf{s}_h)\Bigg]\tag{2}$$

Lemma 2.6. Under Assumptions 2.1 and 2.4, the ZOBG is an unbiased estimator of the stochastic objective: $\mathbb{E}\big[\bar{\nabla}^{[0]}J(\theta)\big] = \nabla J(\theta)$, where $\bar{\nabla}^{[0]}J(\theta)$ is the sample mean of N Monte Carlo estimates of Eq. 2.

These zeroth-order policy gradients are known to have high variance, and one way to reduce their variance is by subtracting a baseline from the function estimates.
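To make Definition 2.5 concrete before discussing variance reduction, here is a minimal Monte-Carlo sketch of the ZOBG estimator for the additive Gaussian policies used in our toy examples. `rollout_return` is a hypothetical stand-in for an H-step simulator rollout, and the optional baseline anticipates the variance-reduced form introduced next:

```python
import numpy as np

def zobg(theta, rollout_return, sigma=0.1, n=1024, use_baseline=True):
    """ZOBG (Eq. 2) for a = theta + w, w ~ N(0, sigma^2 I).

    For this policy, grad_theta log pi(a) = w / sigma^2, so each sample
    contributes (R_H - baseline) * w / sigma^2.
    """
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    baseline = rollout_return(theta) if use_baseline else 0.0  # noise-free return
    grad = np.zeros_like(theta)
    for _ in range(n):
        w = sigma * np.random.randn(*theta.shape)
        grad += (rollout_return(theta + w) - baseline) * w / sigma**2
    return grad / n
```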
Similar to (Suh et al., 2022), we do that by subtracting the return given by the noise-free policy rollout:

$$\nabla_{\mathbf{\theta}}^{[0]}J(\mathbf{\theta})=\mathbb{E}_{\mathbf{a}_h\sim\pi_{\mathbf{\theta}}(\cdot|\mathbf{s}_h)}\Bigg[\Big(R_H(\mathbf{s}_1)-R_H^{*}(\mathbf{s}_1)\Big)\sum_{h=1}^{H}\nabla_{\mathbf{\theta}}\log\pi_{\mathbf{\theta}}(\mathbf{a}_h|\mathbf{s}_h)\Bigg]\qquad R_H^{*}(\mathbf{s}_1):=\sum_{h=1}^{H}r(\mathbf{s}_h,\mathbb{E}[\pi_{\mathbf{\theta}}(\cdot|\mathbf{s}_h)])$$

## 2.2 **First-Order Batch Gradient (FOBG) Estimates**

Given access to a differentiable simulator (with contact approximation), one can directly compute the analytic gradients of $\nabla_{\theta}R_H(s_1)$ induced by the policy π:

$$\nabla_{\mathbf{\theta}}^{[1]}J(\mathbf{\theta}):=\mathbb{E}_{\mathbf{a}_{h}\sim\pi_{\mathbf{\theta}}(\cdot|\mathbf{s}_{h})}[\nabla_{\mathbf{\theta}}R_{H}(\mathbf{s}_{1})]\tag{3}$$

However, for these gradients to be well-defined, we need to make further assumptions:

Assumption 2.7. The system dynamics f(s, a) and the reward r(s, a) are continuously differentiable $\forall s \in \mathbb{R}^n, \forall a \in \mathbb{R}^m$.

## 3 **Learning Through Contact**

First-order gradients, as demonstrated in prior research, are asymptotically unbiased when N → ∞ (Schulman et al., 2015). However, this ideal scenario is often impractical in real-world applications, leading to observed empirical bias, as indicated by (Suh et al., 2022). To illustrate this bias, we use the soft Heaviside function, a common tool for studying discontinuous functions in physics simulations, as it is an approximation of the Coulomb friction model:

![3_image_0.png](3_image_0.png)

$${\bar{H}}(x)={\begin{cases}1&x>\nu/2\\ 2x/\nu&|x|\leq\nu/2\\ -1&x<-\nu/2\end{cases}}\tag{4}$$

Figure 2: Soft Heaviside of Eq. 4.

We take expectations over an additive Gaussian policy $a \sim \pi_{\theta}(\cdot) = \theta + \mathcal{N}(0, \sigma^2)$. As shown in Appendix A, $\mathbb{E}_{\pi}\big[\bar{H}(a)\big]$ is a sum of error functions whose derivative $\nabla_{\theta}\,\mathbb{E}_{\pi}\big[\bar{H}(a)\big] \neq 0$ at θ = 0. However, using FOBG, we obtain $\nabla_{\theta}\bar{H}(a) = 0$ in samples where $|a| > \nu/2$, which occurs with probability at least $1 - \frac{\nu}{\sigma\sqrt{2\pi}}$. Since in practice we are limited in sample size, this translates to an empirical bias that is inversely proportional to sample size, as shown in Figure 3. Notably, when ν → 0, we achieve a more accurate approximation of the underlying discontinuous function, but we also increase the likelihood of obtaining incorrect FOBG, thus amplifying bias in stochastic scenarios. We use this particular example to showcase empirical bias because the differentiable simulator used in Section 5 is based on the Coulomb friction model.

![3_image_1.png](3_image_1.png)

Figure 3: Empirical bias of ZOBG and FOBG when approximating the gradients of the Heaviside function.

By analysing the empirical bias through the lens of bias and variance, we derive a practical upper bound:

Lemma 3.1. For an H-step stochastic optimisation problem under Assumption 2.7, with Lipschitz-smooth policies $\|\nabla\pi_{\theta}(a|s)\| \leq B_{\pi}$ and Lipschitz-smooth and bounded rewards $r(s, a) \leq \|\nabla r(s, a)\| \leq B_r$ $\forall s \in \mathbb{R}^n; a \in \mathbb{R}^m; \theta \in \mathbb{R}^d$, zeroth-order estimates remain unbiased. However, first-order gradients exhibit bias which is bounded by:

$$\left\|\mathbb{E}\left[\nabla_{\mathbf{\theta}}^{[1]}J(\mathbf{\theta})\right]-\mathbb{E}\left[\nabla_{\mathbf{\theta}}^{[0]}J(\mathbf{\theta})\right]\right\|\leq H^{4}B_{r}^{2}B_{\pi}^{2}\,\mathbb{E}_{\mathbf{a}\sim\pi}\left[\prod_{t=1}^{H}\left\|\nabla f(\mathbf{s}_{t},\mathbf{a}_{t})\right\|^{2}\right]\tag{5}$$

The proof can be found in Appendix B.
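As a concrete illustration of the sample-size-dependent bias shown in Figure 3, the following sketch estimates both gradient types for the soft Heaviside of Equation 4 at θ = 0. With small n, a single call frequently returns an FOBG of exactly zero because no sample lands inside the linear band; all names here are illustrative:

```python
import torch

def soft_heaviside(x, nu=0.1):
    """Piecewise-linear soft Heaviside of Eq. 4 (slope 2/nu inside [-nu/2, nu/2])."""
    return torch.clamp(2.0 * x / nu, -1.0, 1.0)

def heaviside_gradients(theta=0.0, sigma=0.1, nu=0.1, n=100):
    th = torch.tensor(theta, requires_grad=True)
    w = sigma * torch.randn(n)
    y = soft_heaviside(th + w, nu)               # samples of H(a), a ~ N(theta, sigma^2)
    fobg = torch.autograd.grad(y.mean(), th)[0]  # zero unless some |a| <= nu/2
    with torch.no_grad():
        baseline = soft_heaviside(th, nu)        # noise-free baseline R*
        zobg = ((y - baseline) * w / sigma**2).mean()
    return fobg.item(), zobg.item()
```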
In practice, we can ensure that the rewards are designed to meet the condition $r(s, a) \leq \|\nabla r(s, a)\| \leq B_r$. The assumption over the policy, $\|\nabla\pi_{\theta}(\cdot|s)\| \leq B_{\pi}$, is less straightforward if we are using high-capacity models such as neural networks, but can be tackled via gradient normalisation techniques. However, giving any bound over the dynamics $\|\nabla f(s_t, a_t)\|$ is difficult, yet this term is influential in Equation 5. The Lemma also suggests that longer horizons result in exponential growth in bias.

To investigate the implications of Lemma 3.1, we constructed a simple scenario involving a ball rebounding off a wall and aiming to reach a target location, as illustrated in Figure 5. In this setup, the initial position $s_1 = [x_1, y_1]$ and velocity of the ball are fixed. The objective is for the policy to learn the optimal initial orientation θ in order to reach a target position $s_T$ at the end, with reward defined as $R_H(s_1) = \|s_H - s_T\|_2^{-1}$. Similar to before, we use an additive Gaussian policy $a = \theta + w$ where $w \sim \mathcal{N}(0, \sigma^2)$; equivalently, $a \sim \pi_{\theta}(\cdot) = \mathcal{N}(\theta, \sigma^2)$.

![4_image_0.png](4_image_0.png)

Figure 4: **Results from the toy ball problem**. Fig. (a) shows the empirical bias of FOBG, measured by comparing it to ZOBG. Fig. (b) shows how the variance of the two gradient types evolves over time. Both Figs. (a) and (b) use different shades to show different stiffness configurations of the simulation, with darker shades designating stiffer (and more realistic) simulation. Fig. (c) shows attempts at optimising the initial angle of the ball to hit the target.

With this, zeroth-order gradients from Equation 2 can be expressed as:

$$\nabla_{\mathbf{\theta}}^{[0]}J(\mathbf{\theta})=\mathbb{E}_{\mathbf{a}_{h}\sim\pi(\mathbf{\theta})}\big[\big(R_{H}(\mathbf{s}_{1})-R_{H}^{*}(\mathbf{s}_{1})\big)\nabla\log\pi_{\mathbf{\theta}}(a)\big]\approx\frac{1}{N\sigma^{2}}\sum_{i=1}^{N}\big(R_{H}^{i}(\mathbf{s}_{1})-R_{H}^{*}(\mathbf{s}_{1})\big)\,w_{i}$$

We collect N = 1024 samples of each gradient type for each timestep with H = 40, starting from a randomly sampled starting angle $\theta \sim \mathcal{U}(-\pi, \pi)$ for each environment. Figure 4a shows how the empirical bias of FOBG grows as H increases, validating the proposed lemma. The bias remains low until the ball encounters contact, at which point it starts growing exponentially. We also examined the variance of the gradients in Figure 4b, observing that the ZOBG follow $\mathrm{Var}\big[\nabla^{[0]}J(\theta)\big] \leq \frac{H B_r^2 B_{\pi}^2}{\sigma^2}$, first proposed by (Suh et al., 2022); most importantly, they scale linearly with H. However, the FOBG variance behaves similarly to the empirical bias bound described in Lemma 3.1, exhibiting lower variance at the beginning of the rollout but growing exponentially. This is due mostly to the stiff dynamics $\|\nabla f\| \gg 0$, which can be clearly seen at timestep h = 6 of Figure 4b, where the ball first makes contact with the ground. Towards the end of the rollout, the FOBG variance can be up to five orders of magnitude higher than that of the ZOBG, which remain unaffected by stiff dynamics. Both the bias and variance issues become even more pronounced as the contact stiffness increases, indicated by the darker shades in the figures.

Figure 5: The toy problem where the ball is shot against a wall to reach the blue box.

This high empirical bias and the resulting high variance have significant consequences for optimisation and policy learning. We find that the biased FOBG fail to find a solution, as shown in Figure 4c. In contrast, the unbiased ZOBG have lower variance and slowly make progress towards a solution. This situation leads to a critical question: Is it possible to leverage the efficiency of FOBG in the presence of high-bias gradients?
Inspired by a common practice in deep learning, we attempt to normalise the gradient norms of the dynamics:

$$\begin{array}{l l}{{\tilde{\nabla}_{\mathbf{s}_{h}}f(\mathbf{s}_{h},\mathbf{a}_{h})=gc(\nabla_{\mathbf{s}_{h}}f(\mathbf{s}_{h},\mathbf{a}_{h}))}}&{{\forall h\in[0,H]}}\\ {{\tilde{\nabla}_{\mathbf{a}_{h}}f(\mathbf{s}_{h},\mathbf{a}_{h})=gc(\nabla_{\mathbf{a}_{h}}f(\mathbf{s}_{h},\mathbf{a}_{h}))}}&{{\forall h\in[0,H]}}\end{array}\qquad gc(\mathbf{x}):={\begin{cases}{\frac{\mathbf{x}}{\|\mathbf{x}\|_{2}}}&{{\mathrm{if~}}\|\mathbf{x}\|_{2}>1}\\ {\mathbf{x}}&{{\mathrm{otherwise}}}\end{cases}}$$

Employing this approach in Figure 4c shows that FOBG are now able to converge to a solution at a much faster rate than ZOBG.

## 4 **Adaptive Horizon Actor Critic (AHAC)**

With a clearer understanding of stiff contact in differentiable simulators, we aim to develop a model-based algorithm employing FOBG for effective learning in infinite-horizon robotics tasks. Although we can apply the gradient clipping technique as above, in a multi-step problem we can avoid the stiff gradients altogether. Consider a scenario where we have a 3-step trajectory (H = 3), and contact occurs at timestep h = 3, as shown in Figure 6. If we took the gradient normalisation approach, the gradient of $r(s_3, a_3)$ with respect to θ, for a θ-parameterised policy with $a_h \sim \pi_{\theta}(\cdot|s_h)$, would be:

$$\begin{aligned}\nabla_{\theta}r(s_{3},a_{3})&=\nabla_{a_{3}}r(s_{3},a_{3})\nabla_{\theta}\pi_{\theta}(a_{3}|s_{3})\\ &\quad+\nabla_{s_{3}}r(s_{3},a_{3})\tilde{\nabla}_{a_{2}}f(s_{2},a_{2})\nabla_{\theta}\pi_{\theta}(a_{2}|s_{2})\\ &\quad+\nabla_{s_{3}}r(s_{3},a_{3})\tilde{\nabla}_{s_{2}}f(s_{2},a_{2})\tilde{\nabla}_{a_{1}}f(s_{1},a_{1})\nabla_{\theta}\pi_{\theta}(a_{1}|s_{1})\end{aligned}$$

By the definition of gc(x), we will be losing gradient information. An alternative would be to cut gradients at h = 2. This would render the gradient of $r(s_3, a_3)$ through the dynamics zero, preventing us from learning in a scenario such as our toy example in Section 3. However, in a multi-step decision-making problem, the rewards still yield gradients:

$$\begin{aligned}\nabla_{\theta}\sum_{h=1}^{3}r(s_{h},a_{h})&=\nabla_{a_{3}}r(s_{3},a_{3})\nabla_{\theta}\pi_{\theta}(a_{3}|s_{3})+\nabla_{a_{2}}r(s_{2},a_{2})\nabla_{\theta}\pi_{\theta}(a_{2}|s_{2})\\ &\quad+\nabla_{s_{2}}r(s_{2},a_{2})\nabla_{a_{1}}f(s_{1},a_{1})\nabla_{\theta}\pi_{\theta}(a_{1}|s_{1})+\nabla_{a_{1}}r(s_{1},a_{1})\nabla_{\theta}\pi_{\theta}(a_{1}|s_{1})\end{aligned}$$

We call this technique *contact truncation*.

![5_image_0.png](5_image_0.png)

Figure 6: **H=3 trajectory with truncated gradients.** The green arrows indicate the back-propagated gradients for *AHAC-1*. Note how the gradients of the dynamics in contact do not appear above.
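A schematic sketch of contact truncation in an autodiff framework, alongside the clipping operator gc defined above: at steps where the contact Jacobian is stiff, the next state is detached so that no gradients flow through the offending dynamics, while rewards at and after the contact still contribute gradients through the policy. The `step` function and its signature are hypothetical stand-ins for a differentiable simulator:

```python
import torch

def gc(x):
    """The clipping operator gc(x): rescale to unit norm only when ||x||_2 > 1."""
    norm = x.norm()
    return x / norm if norm > 1.0 else x

def rollout_with_truncation(policy, step, s0, horizon, stiffness_threshold):
    """H-step differentiable rollout with gradients cut at stiff contact.

    step(s, a) -> (s_next, reward, contact_jac_norm) is assumed to be
    differentiable through s_next and reward.
    """
    s, total_reward = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        s_next, reward, contact_jac_norm = step(s, a)
        total_reward = total_reward + reward
        if contact_jac_norm > stiffness_threshold:
            # contact truncation: later timesteps no longer backpropagate
            # through the stiff dynamics at this step
            s_next = s_next.detach()
        s = s_next
    return total_reward  # calling backward() yields the truncated FOBG
```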
## 4.1 **Learning Through Contact In A Single Environment**

We present an FO-MBRL algorithm with an actor-critic architecture similar to SHAC (Xu et al., 2022). The critic, denoted as $V_{\psi}(s)$, is model-free and trained using TD(λ) over an H-step horizon from timestep t:

$$R_{h}(\mathbf{s}_{t}):=\sum_{n=t}^{t+h-1}\gamma^{n-t}r(\mathbf{s}_{n},\mathbf{a}_{n})+\gamma^{h}V_{\mathbf{\psi}}(\mathbf{s}_{t+h})\qquad\quad\hat{V}(\mathbf{s}_{t}):=(1-\lambda)\bigg[\sum_{h=1}^{H-t-1}\lambda^{h-1}R_{h}(\mathbf{s}_{t})\bigg]+\lambda^{H-t-1}R_{H}(\mathbf{s}_{t})$$

The critic loss becomes $\mathcal{L}_{V}(\psi)$ (Eq. 6), while the actor is trained using FOBG as in Equation 3, with the addition of the critic value estimate (Eq. 7):

$$\mathcal{L}_{V}(\psi):=\sum_{h=t}^{t+H}\left\|V_{\psi}(\mathbf{s}_{h})-\hat{V}(\mathbf{s}_{h})\right\|_{2}^{2}\tag{6}\qquad\qquad J(\mathbf{\theta}):=\sum_{h=t}^{t+H-1}\gamma^{h-t}r(\mathbf{s}_{h},\mathbf{a}_{h})+\gamma^{H}V_{\psi}(\mathbf{s}_{t+H})\tag{7}$$

Unlike the fixed-horizon model-based rollouts in (Xu et al., 2022), our policy is rolled out until stiff contact is encountered, which can be detected in simulation. This results in a dynamic FO-MBRL algorithm that adjusts its horizon to avoid exploding gradients. However, not all contact results in high bias; therefore, we want to truncate only on stiff contact $\|\nabla f(s_t, a_t)\| > C$, where C is a contact stiffness threshold that we set. We refer to this algorithm as Adaptive Horizon Actor Critic 1 (AHAC-1), which is designed for single environments but is not suitable for vectorised environments (see Appendix D).

Using this approach, we can investigate whether truncating gradients on contact yields a better policy than naively cutting at fixed short horizons. We compare SHAC and AHAC-1 on a simple contact-rich locomotion task, Hopper, a single-legged agent that obtains a reward for forward velocity (Figure 1). Both algorithms share the same hyper-parameters, except for those related to horizons. SHAC uses a fixed H = 32, while AHAC-1 uses a maximum horizon of H = 64 with a contact threshold of C = 500. From Figure 7, we observe that AHAC-1 achieves a higher reward while exhibiting lower variance. Although difficult to analyse, we believe that our approach avoids local minima by adapting its horizon to avoid stiff gradients. On the other hand, SHAC gets pushed into local minima, which eventually results in policy collapse, as seen in Figure 7. Unfortunately, AHAC-1 cannot be applied to parallel vectorised environments due to the challenge of asynchronously truncating trajectories, which leads to infinitely long compute graphs.

![6_image_0.png](6_image_0.png)

Figure 7: Comparison between SHAC and AHAC-1 on the Hopper task with only a single environment. The left plot shows rewards achieved by the algorithms over five different random seeds, with the mean and std. dev. plotted. The right plot is the moving-window-averaged mean horizon of both approaches. Note that even though SHAC has a fixed horizon, it can still vary if the episode is terminated early or ends.
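For reference, a minimal sketch of the TD(λ) targets $\hat{V}(s_t)$ used by the critic loss (Eq. 6) in both AHAC-1 and the full AHAC of the next section, given the rewards and bootstrapped values of a single H-step rollout; a real implementation would vectorise this over parallel environments:

```python
import numpy as np

def td_lambda_targets(rewards, values, gamma=0.99, lam=0.95):
    """TD(lambda) value targets for one H-step rollout.

    rewards[i] = r(s_{t+i}, a_{t+i}) for i = 0..H-1
    values[i]  = V_psi(s_{t+i+1})    for i = 0..H-1 (critic bootstrap values)
    """
    H = len(rewards)
    targets = []
    for t in range(H):
        disc, running, h_step_returns = 1.0, 0.0, []
        for h in range(1, H - t + 1):
            running += disc * rewards[t + h - 1]
            # R_h(s_t): discounted rewards plus bootstrap gamma^h V(s_{t+h})
            h_step_returns.append(running + disc * gamma * values[t + h - 1])
            disc *= gamma
        K = len(h_step_returns)
        weights = [(1 - lam) * lam ** (h - 1) for h in range(1, K)] + [lam ** (K - 1)]
        targets.append(float(np.dot(weights, h_step_returns)))
    return targets
```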
## 4.2 **AHAC: A Scalable Approach For Learning Through Contact**

A straightforward solution to asynchronous truncation in AHAC-1 might involve adopting the short-horizon approach of SHAC and truncating the graph on stiff contact. Unfortunately, this did not yield any performance improvements. Instead, we investigate the impact of the horizon length on policy optimality. We find that contact-based tasks have an inherent optimal solution, for instance, an optimal gait pattern in a locomotion task. We observe that a SHAC agent converges to a particular solution, often suboptimal, depending on the horizon parameter H. Importantly, we find that asymptotic performance is maximised when the horizon H matches the optimal (though unknown) gait period.

![6_image_1.png](6_image_1.png)

Figure 8: An ablation of short horizons H for the SHAC algorithm applied to Ant. Each run is trained until convergence for 5 random seeds.

To empirically demonstrate this, we conducted an experiment parameterising SHAC with different H values for the Ant locomotion task, where a quadruped robot seeks to maximise forward velocity rewards. As seen from the results in Figure 8, the gait period aligns with the horizon length until H = 28, after which the agent attempts to fit two gait patterns within a single H-timestep rollout. Moreover, we noticed that the asymptotic reward peaks as the horizon length H approaches what we believe to be the optimal gait period, and displays the least variance across runs. From these observations, we glean two insights: (1) each task has an optimal model-based horizon length H that corresponds to the gait period, and (2) the associated optimal horizon results in the lowest variance between runs, supported by Lemma 3.1.

We leverage these insights to generalise the AHAC-1 algorithm into a GPU-parallelisable approach, which we call AHAC. The critic training formulation remains the same as in Equation 6, but we introduce a new constrained objective for the actor:

$$J(\mathbf{\theta}):=\sum_{h=t}^{t+H-1}\gamma^{h-t}r(\mathbf{s}_{h},\mathbf{a}_{h})+\gamma^{H}V_{\mathbf{\psi}}(\mathbf{s}_{t+H})\quad\text{s.t.}\quad\|\nabla f(\mathbf{s}_{t},\mathbf{a}_{t})\|\leq C\quad\forall t\in\{0,..,H\}\tag{8}$$

In simple terms, this objective seeks to maximise the reward while ensuring that all contact stiffness remains below a predefined threshold. Building on the inspiration from AHAC-1, we can progressively increase the horizon as long as the constraint is satisfied. Using the Lagrangian formulation, we derive the dual problem:

$$\mathcal{L}_{\pi}(\mathbf{\theta},\mathbf{\phi})=\sum_{h=t}^{t+H-1}\gamma^{h-t}r(\mathbf{s}_{h},\mathbf{a}_{h})+\gamma^{H}V_{\mathbf{\psi}}(\mathbf{s}_{t+H})+\mathbf{\phi}^{T}\left(\begin{bmatrix}\|\nabla f(\mathbf{s}_{t},\mathbf{a}_{t})\|\\ \vdots\\ \|\nabla f(\mathbf{s}_{t+H},\mathbf{a}_{t+H})\|\end{bmatrix}-C\right)\tag{9}$$

By definition, $\phi_i = 0$ if the constraint is met and $\phi_i > 0$ otherwise. Thus, we can use φ to adapt the horizon, resulting in the full AHAC algorithm shown in Algorithm 1. We train the critic until convergence, defined as a sufficiently small change in the loss over the last 5 critic training iterations, $\sum_{i=n-5}^{n}\mathcal{L}_V(\psi)$, where we take mini-batch samples from the buffer $(\mathbf{s}, \hat{V}(\mathbf{s})) \sim \mathcal{D}$.

In practice, we find that truncating on $\nabla f(s_t, a_t)$ is limiting, since different tasks involve varying contact forces, which often change throughout the learning process. Instead, we normalise the contact forces with a modified acceleration per state dimension, $\hat{\mathbf{q}}_t = \max(\mathbf{q}_t, 1)$, where the max is applied element-wise, resulting in normalised contact forces $\hat{\nabla} f(\mathbf{s}_t, \mathbf{a}_t) = \mathrm{diag}(\hat{\mathbf{q}}_t)\nabla f(\mathbf{s}_t, \mathbf{a}_t)$. This allows us to use a single C parameter across different tasks. Additionally, as contact approximation forces are computed separately in differentiable simulators, we do not need to utilise the full Jacobian of the dynamics. Instead, we can use the Jacobian derived only from contact.
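Concretely, the horizon adaptation in Algorithm 1 below amounts to a dual-ascent step on φ followed by shrinking H in proportion to the total multiplier mass. A sketch with illustrative learning rates and bounds; how H grows again while the constraint stays slack is a design choice we leave out here:

```python
import torch

def adapt_horizon(contact_norms, C, phi, H, lr_phi=1e-3, lr_H=0.1,
                  H_min=8, H_max=64):
    """One update of the dual variables phi and the rollout horizon H (Eq. 9)."""
    with torch.no_grad():
        gaps = contact_norms - C                         # ||grad f(s_t, a_t)|| - C per step
        phi = torch.clamp(phi + lr_phi * gaps, min=0.0)  # phi_t > 0 only when violated
        H = H - lr_H * phi.sum().item()                  # shrink H on constraint violation
    return phi, int(min(max(round(H), H_min), H_max))
```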
The differences between SHAC and AHAC are summarised in Appendix C.

Algorithm 1: Adaptive Horizon Actor-Critic

  Given: γ: discount rate; α: learning rate; C: contact threshold
  Initialise learnable parameters θ, ψ, H; ϕ = 0; t ← 0
  while episode not done do
    /* rollout policy */
    Initialise buffer D
    for h = 0, 1, .., H do
      a_{t+h} ∼ π_θ(·|s_{t+h})
      s_{t+h+1} = f(s_{t+h}, a_{t+h})
      D ← D ∪ {(s_{t+h}, a_{t+h}, r_{t+h}, V_ψ(s_{t+h+1}))}
    /* train actor with Eq. 9 */
    θ ← θ + α ∇_θ L_π(θ, ϕ)
    ϕ ← ϕ + α ∇_ϕ L_π(θ, ϕ)
    H ← H − α Σ_{t=0}^{H} ϕ_t
    /* train critic with Eq. 6 */
    while not converged do
      sample (s, V̂(s)) ∼ D
      ψ ← ψ − α ∇_ψ L_V(ψ)
    t ← t + H

## 5 **Experiments**

In this section, we aim to address the following key questions experimentally:

1. Can first-order model-based (FO-MBRL) policies outperform zeroth-order model-free (MFRL) policies in terms of asymptotic episodic reward?
2. Does AHAC remain as sample- and wall-clock-time efficient as prior FO-MBRL algorithms?
3. Does AHAC scale to high-dimensional environments?
4. Which components of AHAC contribute to its improved asymptotic reward compared to SHAC?

Previous FO-MBRL algorithms utilising differentiable simulators, such as SHAC (Xu et al., 2022), face challenges related to instability arising from stiff contact, which empirically results in worse asymptotic performance. The previous section, and Figure 7 specifically, suggest that the single-environment version of AHAC manages to avoid local minima and continues to progressively obtain a higher-performance policy. We now investigate whether these benefits persist when scaling AHAC to N = 512 parallel environments and applying it to a more complex task: Ant, a quadruped with symmetrical legs, $\mathcal{S} = \mathbb{R}^{37}$ and $\mathcal{A} = \mathbb{R}^{8}$. The simulator used throughout this section is dflex, introduced by (Xu et al., 2022) and described in more detail in Appendix E.

We compare AHAC to SHAC, its predecessor; PPO, a state-of-the-art on-policy MFRL algorithm (Schulman et al., 2017); SAC, an off-policy MFRL algorithm (Haarnoja et al., 2018); and SVG, an FO-MBRL method that does not utilise a differentiable simulator but instead learns its model of the dynamics¹. A more explicit comparison of all of these approaches and more can be found in Section 6.

¹To make performance comparable, we attempted to vectorise SVG and found that it did not scale well with an increased number of parallel environments. Therefore, the results presented in this paper are from the original single-environment version.
Remarkably, the MFRL algorithms PPO and SAC get worse episodic rewards over time compared to AHAC, even when they are trained for 3B timesteps. To answer questions (2) and (3) regarding efficiency and scalability, we conducted experiments across a variety of locomotion tasks. We again compare AHAC against SHAC, PPO, SAC, and SVG. We tune our model-free baselines sufficiently per-task to maximise performance. Due to long training times, SVG utilises similar hyper-parameters as in the original work (Amos et al., 2021). SHAC is tuned per-task but uses H = 32 across all tasks (Xu et al., 2022). AHAC uses the same shared hyper-parameters as SHAC and only has a tuned horizon learning rate of αψ per task. The full hyper-parameter details can be found in Appendix F. The following tasks share the same basic reward of maximising forward velocity with action penalties: 1. **Hopper**, a single-legged robot jumping only in one axis with S = R 11 and A = R 3. 2. **Anymal**, a sophisticated quadruped with S = R 49 and A = R 12 modelled after (Hutter et al., 2016). 3. **Humanoid**, a classic contact-rich environment with S = R 76 and A = R 21 which requires extensive exploration to find a good policy. 4. **SNU Humanoid**, a version of Humanoid lower body where instead of joint torque control, the ![8_image_1.png](8_image_1.png) robot is controlled via A = R 152 muscles intended to challenge the scaling capabilities of algorithms. Figure 10: Locomotion environments (left to right): Hopper, Ant, Anymal, Humanoid and SNU Humanoid. ![9_image_0.png](9_image_0.png) Figure 11: **Reward curves for all tasks against both simulation steps and training time**. The reward axis are normalised by the highest mean reward achieved by PPO. We further apply exponentially weighted smoothing with α = 0.98 to increase legibility. We compare the approaches against the number of simulation steps but also acknowledge that MFRL methods are computationally simpler and thus also provide results against wall-clock time. From the results in Figure 11, we observe that AHAC significantly outperforms the other methods while maintaining the sample efficiency of SHAC. On the simpler Hopper task, all algorithms achieve similar performance when compared against wall-clock time. However, with more complex tasks, we observe that AHAC not only learns at a faster rate but is also capable of achieving higher asymptotic performance across all tasks. This gap only becomes larger as we turn to more high-dimensional environments, showcasing the remarkable scalability of our approach, where AHAC achieves a 64% higher asymptotic reward than PPO on the SNU Humanoid task. Figure 12 shows aggregate summary statistics across all tasks, where AHAC outperforms all baselines but exhibits a larger confidence interval. Regardless, even at the tail end of the worst runs, AHAC achieves higher asymptotic rewards than our baselines, as seen in the score distributions. We also include tabular results in Appendix G. ![9_image_1.png](9_image_1.png) ![9_image_2.png](9_image_2.png) Figure 12: **Aggregate statistics comparing AHAC and our chosen baselines across all tasks.** The left figure shows 50% IQM of PPO-normalised reward with 95% CI. The right figure shows the score distributions of the algorithms as recommended in (Agarwal et al., 2021), which allows us to gauge the performance variability across tasks. Ablation study. To understand the factors contributing to the performance demonstrated by AHAC, we investigate its key modifications, as detailed in Appendix C. 
Starting from our tuned baseline SHAC with H = 32, we add only one change per ablation:

1. SHAC H=29: using the H converged to by AHAC.
2. Adapt. Obj.: SHAC with Eq. 9 and fixed H = 32.
3. Adapt. Horizon: SHAC with Eq. 9 and adapting H.
4. Iterative critic: SHAC with iterative critic training.
5. Dual critic: SHAC with a dual critic.

Figure 13: Ablations of AHAC on the Ant task. All of the experiments use the same hyper-parameters tuned for SHAC, except for the horizon learning rate αϕ, which was tuned for AHAC.

It is worth noting that SHAC with adaptive horizon (3) is equivalent to AHAC with a single critic and no critic training until convergence. The outcomes presented in Figure 13 elucidate that the adaptive horizon objective notably enhances asymptotic reward. Rather surprisingly, SHAC with H = 29 achieves a higher reward than the baseline but fails to match the performance of the adaptive horizon mechanism. Similar to Section 4.1, we hypothesise that even when SHAC uses the converged optimal H = 29, it is still prone to getting stuck in local minima, as the horizon might not be optimal throughout training. Simultaneously, the dual critic strategy substantially mitigates variance between runs compared to the target critic employed in SHAC. Further ablation details and figures are provided in Appendix H.

## 6 **Related Work**

In this section, we provide an overview of recent continuous control reinforcement learning (RL) methods, all of which follow the actor-critic paradigm (Konda & Tsitsiklis, 1999). The critic estimates the value of state-action pairs $Q(s, a)$, and the actor learns optimal actions via $\max_a Q(s, a)$. While this section is intended as a review of related work, we also attempt to classify methods by their method of policy (actor) training, value estimator (critic) training, and their dynamics model $f(s, a)$.

When no dynamics model is assumed, we are restricted to Model-Free Reinforcement Learning (MFRL) methods. We can take Monte-Carlo samples of the Policy Gradients Theorem to find $\nabla_\theta J(\theta)$ using Equation 2. This allows MFRL methods to learn a feedback policy that predicts the distribution of actions given the state. On-policy methods, like Proximal Policy Optimisation (PPO) (Schulman et al., 2017), learn using only the most recent samples following the policy. In contrast, off-policy approaches, such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018), can use all previously collected data at the expense of memory requirements.

Alternatively, Model-Based Reinforcement Learning (MBRL) methods aim to leverage a model for learning. This model can be learned from experience data or assumed a priori. In a basic scenario, it serves as an additional source of return estimates for the critic, which can still be trained in a model-free manner (Janner et al., 2019). Alternatively, the model can be used to obtain simulated returns for the critic, which can be first-order back-propagated through, known as Model-based Value Expansion (MVE) (Feinberg et al., 2018). Actor training is more intricate in this context. It can be done using Policy Gradients augmented by model-generated data (Janner et al., 2019) or as part of a gradient-free planning actor (Hafner et al., 2019b). This family of approaches is termed Zeroth-Order MBRL (ZO-MBRL). Alternatively, the returns of trajectories can be used to backpropagate through the model (Hafner et al., 2019a; Byravan et al., 2020), and we refer to these methods as First-Order MBRL (FO-MBRL). Key recent work is summarised in Table 1.
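Before turning to that comparison, it may help to make the zeroth-order versus first-order distinction concrete. The sketch below contrasts the two estimators for a single rollout in PyTorch; `policy`, `step`, and `reward` are hypothetical placeholders, and the code is a minimal illustration of the estimators in Equations 2 and 3 rather than any paper's exact implementation.

```python
import torch

def zeroth_order_grad(policy, step, reward, s0, H):
    """Score-function (REINFORCE-style) estimate: no dynamics gradients needed."""
    s, log_probs, ret = s0, [], 0.0
    for _ in range(H):
        dist = policy(s)                      # e.g., a torch.distributions.Normal
        a = dist.sample()                     # sampling blocks dynamics gradients
        log_probs.append(dist.log_prob(a).sum())
        s = step(s, a).detach()               # dynamics treated as a black box
        ret = ret + float(reward(s, a))       # reward used only as a scalar weight
    loss = -ret * torch.stack(log_probs).sum()
    loss.backward()                           # gradient w.r.t. policy parameters

def first_order_grad(policy, step, reward, s0, H):
    """First-order estimate: back-propagate the return through the simulator."""
    s, ret = s0, 0.0
    for _ in range(H):
        dist = policy(s)
        a = dist.rsample()                    # reparameterised sample keeps the graph
        s = step(s, a)                        # step must be differentiable
        ret = ret + reward(s, a)
    (-ret).backward()                         # gradients flow through f and r
```

The only structural difference is whether the computation graph is kept through the dynamics: the first-order estimator inherits the dynamics Jacobians, which is precisely where the stiffness-induced bias discussed in Section 3 enters.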
Recent interest in differentiable simulation has given rise to several works that employ FOBG to optimise objectives by back-propagating through the dynamics model (Hu et al., 2019b; Liang et al., 2019; Huang et al., 2021; Du et al., 2021). Although not explicitly addressing RL, these works follow the idea of rolling out a trajectory under a policy and iteratively optimising it until convergence. This approach can be reformulated as a FO-MBRL algorithm, referred to as Back-Propagation-Through-Time (BPTT). When employed for typical long episodic RL tasks, BPTT performs poorly due to a noisy optimisation landscape and exploding gradients. Xu et al. (2022) propose Short Horizon Actor-Critic (SHAC) to address these issues by (1) introducing a model-free critic that acts as a smooth surrogate of the reward-maximisation objective and (2) employing short rollouts to avoid high and unstable policy gradients. When run in a massively parallel fashion, SHAC stands out as one of the few MBRL approaches that achieves comparable asymptotic performance to MFRL methods while also demonstrating significantly better sample efficiency.

![10_image_0.png](10_image_0.png)

| Category | Algorithm | Policy Learning | Value Learning | Dynamics Model |
|----------|-----------|-----------------|----------------|----------------|
| MFRL | PPO (Schulman et al., 2017) | Zeroth-order | Model-free | - |
| MFRL | SAC (Haarnoja et al., 2018) | Zeroth-order | Model-free | - |
| ZO-MBRL | MBPO (Janner et al., 2019) | Zeroth-order | Model-free | Ensemble NN |
| ZO-MBRL | PlaNet (Hafner et al., 2019b) | Gradient-free | - | Probabilistic NN |
| ZO-MBRL | MVE (Feinberg et al., 2018) | Zeroth-order | Model-based | Deterministic NN |
| ZO-MBRL | STEVE (Buckman et al., 2018) | Zeroth-order | Model-based | Probabilistic NN |
| FO-MBRL | Dreamer (Hafner et al., 2019a) | First-order | Model-based | Probabilistic NN |
| FO-MBRL | IVG (Byravan et al., 2020) | First-order | Model-free | Deterministic NN |
| FO-MBRL | SAC-SVG (Amos et al., 2021) | First-order | Model-free | Deterministic NN |
| FO-MBRL | BPTT | First-order | - | Differentiable sim. |
| FO-MBRL | SHAC (Xu et al., 2022) | First-order | Model-free | Differentiable sim. |
| FO-MBRL | AHAC (this paper) | First-order | Model-free | Differentiable sim. |

Table 1: Comparison between recent influential RL algorithms for continuous control. We classify these approaches into the MFRL, ZO-MBRL and FO-MBRL categories predominantly by the way the policy (actor) is learned. Zeroth-order (policy gradient) methods harness the gradient estimates following Equation 2 without the need to take dynamics model gradients, while first-order methods differentiate the whole trajectory following Equation 3. Model-based value learning refers to methods that fall under the Model-based Value Expansion (MVE) approach (Feinberg et al., 2018).

## 7 **Conclusion**

Our study aimed to compare the asymptotic performance of conventional Zeroth-order Model-Free RL (MFRL) methods with First-Order Model-Based (FO-MBRL) methods in differentiable simulators. We assessed the difference between both types of gradients and derived Lemma 3.1, showing that first-order batch gradient (FOBG) empirical bias is upper-bounded by the stiffness of the dynamics. Unfortunately, contact-rich tasks exhibit such properties, translating to FOBG with high bias and unstable learning.
We explored this issue in a toy problem and then introduced an algorithm designed to mitigate the accumulation of gradient bias stemming from stiff dynamics by truncating trajectories upon contact. When applied to high-dimensional locomotion tasks, our proposed approach, Adaptive Horizon Actor-Critic (AHAC), achieved up to a 64% increase in asymptotic episodic rewards compared to state-of-the-art MFRL methods. Surprisingly, we found that even with near-infinite data, MFRL methods cannot solve these tasks with an asymptotic reward similar to our proposed method. Additionally, AHAC retained the advantages commonly observed in FO-MBRL approaches, including exceptional sample efficiency and scalability to higher-dimensional tasks. Notably, our approach demonstrated the ability to learn complex locomotion policies for a quadruped robot in as little as 10 minutes on a single GPU, paving the way for RL scalability.

While AHAC outperforms MFRL methods in asymptotic rewards, it necessitates the development of differentiable simulators, requiring substantial engineering effort. Thus, we cannot help but admire simple yet capable model-free algorithms such as PPO. Despite this, the performance of our proposed method renders it promising for robotic applications. However, it is essential to acknowledge the sim2real gap, which requires further exploration. Our vision for the next phase involves applying FO-MBRL approaches to real robots in a closed-loop manner where simulations aid policy learning but continually adapt to match the real environment. Furthermore, we believe that our proposed approach, AHAC, still has room for improvement, in particular by truncating each parallel environment independently instead of learning a uniform horizon.

## References

Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. *Advances in neural information processing* systems, 34:29304–29320, 2021.

Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, et al. Solving rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.

Brandon Amos, Samuel Stanton, Denis Yarats, and Andrew Gordon Wilson. On the model-based stochastic value gradient for continuous reinforcement learning. In *Learning for Dynamics and Control*, pp. 6–20. PMLR, 2021.

Albert S Berahas, Liyuan Cao, Krzysztof Choromanski, and Katya Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. *Foundations of Computational* Mathematics, 22(2):507–560, 2022.

Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. *Advances in neural information processing* systems, 31, 2018.

Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Siegel, Nicolas Heess, and Martin Riedmiller. Imagined value gradients: Model-based policy optimization with transferable latent dynamics models. In *Conference on Robot Learning*, pp. 566–589. PMLR, 2020.

Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. Diffpd: Differentiable projective dynamics. *ACM Transactions on Graphics (TOG)*, 41(2):1–21, 2021.

John C Duchi, Peter L Bartlett, and Martin J Wainwright. Randomized smoothing for stochastic optimization.
SIAM Journal on Optimization, 22(2):674–701, 2012.

Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I Jordan, Joseph E Gonzalez, and Sergey Levine. Model-based value expansion for efficient model-free reinforcement learning. In *Proceedings of the 35th International* Conference on Machine Learning (ICML 2018), 2018.

C Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax–a differentiable physics engine for large scale rigid body simulation. *arXiv preprint arXiv:2106.13281*, 2021.

Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. *arXiv* preprint arXiv:1812.05905, 2018.

Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. *arXiv preprint arXiv:1912.01603*, 2019a.

Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555–2565. PMLR, 2019b.

Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. *arXiv preprint arXiv:2301.04104*, 2023.

Eric Heiden, Miles Macklin, Yashraj S Narang, Dieter Fox, Animesh Garg, and Fabio Ramos. DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting. *Robotics: Science and Systems*, 2021.

Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand. Difftaichi: Differentiable programming for physical simulation. *arXiv preprint arXiv:1910.00935*, 2019a.

Yuanming Hu, Jiancheng Liu, Andrew Spielberg, Joshua B Tenenbaum, William T Freeman, Jiajun Wu, Daniela Rus, and Wojciech Matusik. Chainqueen: A real-time differentiable physical simulator for soft robotics. In *2019 International conference on robotics and automation (ICRA)*, pp. 6265–6271. IEEE, 2019b.

Zhiao Huang, Yuanming Hu, Tao Du, Siyuan Zhou, Hao Su, Joshua B Tenenbaum, and Chuang Gan. Plasticinelab: A soft-body manipulation benchmark with differentiable physics. *arXiv preprint arXiv:2104.03311*, 2021.

Marco Hutter, Christian Gehring, Dominic Jud, Andreas Lauber, C Dario Bellicoso, Vassilios Tsounis, Jemin Hwangbo, Karen Bodie, Peter Fankhauser, Michael Bloesch, et al. Anymal-a highly mobile and dynamic quadrupedal robot. In *2016 IEEE/RSJ international conference on intelligent robots and systems (IROS)*, pp. 38–44. IEEE, 2016.

Jemin Hwangbo, Inkyu Sa, Roland Siegwart, and Marco Hutter. Control of a quadrotor with reinforcement learning. *IEEE Robotics and Automation Letters*, 2(4):2096–2103, 2017.

Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, and Marco Hutter. Learning agile and dynamic motor skills for legged robots. *Science Robotics*, 4(26):eaau5872, 2019.

Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In *Advances in Neural Information Processing Systems*, 2019.

Juraj Kabzan, Lukas Hewing, Alexander Liniger, and Melanie N Zeilinger. Learning-based model predictive control for autonomous racing. *IEEE Robotics and Automation Letters*, 4(4):3363–3370, 2019.

Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, and Davide Scaramuzza. Deep drone acrobatics. *arXiv preprint arXiv:2006.05768*, 2020.
Vijay Konda and John Tsitsiklis. Actor-critic algorithms. *Advances in neural information processing systems*, 12, 1999.

Junbang Liang, Ming Lin, and Vladlen Koltun. Differentiable cloth simulation for inverse problems. *Advances in Neural Information Processing Systems*, 32, 2019.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *arXiv preprint arXiv:1509.02971*, 2015.

Miles Macklin. Warp: A high-performance python framework for gpu simulation and graphics. https://github.com/nvidia/warp, March 2022. NVIDIA GPU Technology Conference (GTC).

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013.

Nikita Rudin, David Hoeller, Philipp Reist, and Marco Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. In *Conference on Robot Learning*, pp. 91–100. PMLR, 2022.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. *Advances in neural information processing systems*, 28, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. *Nature*, 550(7676):354–359, 2017.

Hyung Ju Suh, Max Simchowitz, Kaiqing Zhang, and Russ Tedrake. Do differentiable simulators give better policy gradients? In *International Conference on Machine Learning*, pp. 20668–20696. PMLR, 2022.

Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. *Advances in neural information processing systems*, 12, 1999.

Joel A Tropp et al. An introduction to matrix concentration inequalities. *Foundations and Trends® in Machine Learning*, 8(1-2):1–230, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Reinforcement learning*, pp. 5–32, 1992.

Jie Xu, Tao Chen, Lara Zlokapa, Michael Foshey, Wojciech Matusik, Shinjiro Sueda, and Pulkit Agrawal. An end-to-end differentiable framework for contact-aware robot design. *arXiv preprint arXiv:2107.07501*, 2021.

Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, and Miles Macklin. Accelerated policy learning with parallel differentiable simulation. *arXiv preprint arXiv:2204.07137*, 2022.

## A **Heaviside Example**

This appendix provides additional details on the Heaviside example used to obtain intuition regarding FOBG bias in Section 3.

$${\bar{H}}(x)={\begin{cases}1&x>\nu/2\\ 2x/\nu&|x|\leq\nu/2\\ -1&x<-\nu/2\end{cases}}$$

Under stochastic input $x \sim \pi_{\theta}(\cdot)$, i.e. $x = \theta + w$ where $w \sim \mathcal{N}(0, \sigma)$, we can obtain the expected value:

$$\mathbb{E}_{w}\left[{\bar{H}}(x)\right]=\int_{-\infty}^{\infty}{\bar{H}}(x)\pi_{\theta}(x)dx=-\int_{-\infty}^{-\nu/2}\pi_{\theta}(x)dx+\int_{-\nu/2}^{\nu/2}{\frac{2x}{\nu}}\pi_{\theta}(x)dx+\int_{\nu/2}^{\infty}\pi_{\theta}(x)dx$$

$$=-{\frac{1}{2}}\operatorname{erfc}\left({\frac{\nu+2\theta}{2{\sqrt{2}}\sigma}}\right)+{\frac{1}{2}}\operatorname{erfc}\left({\frac{\nu-2\theta}{2{\sqrt{2}}\sigma}}\right)+{\frac{\theta}{\nu}}\operatorname{erf}\left({\frac{\nu-2\theta}{2{\sqrt{2}}\sigma}}\right)+{\frac{\theta}{\nu}}\operatorname{erf}\left({\frac{\nu+2\theta}{2{\sqrt{2}}\sigma}}\right)+{\frac{{\sqrt{2}}\sigma}{\nu{\sqrt{\pi}}}}\left[\exp\left(-{\frac{(\nu+2\theta)^{2}}{8\sigma^{2}}}\right)-\exp\left(-{\frac{(\nu-2\theta)^{2}}{8\sigma^{2}}}\right)\right]$$

From the expectation, we can obtain the gradient w.r.t.
the parameter of interest:

$$\begin{aligned}\nabla_{\theta}\,\mathbb{E}_{w}\left[{\bar{H}}(x)\right]&={\frac{1}{{\sqrt{2\pi}}\sigma}}\exp\left(-{\frac{(\nu+2\theta)^{2}}{8\sigma^{2}}}\right)+{\frac{1}{{\sqrt{2\pi}}\sigma}}\exp\left(-{\frac{(\nu-2\theta)^{2}}{8\sigma^{2}}}\right)+{\frac{1}{\nu}}\operatorname{erf}\left({\frac{\nu-2\theta}{2{\sqrt{2}}\sigma}}\right)+{\frac{1}{\nu}}\operatorname{erf}\left({\frac{\nu+2\theta}{2{\sqrt{2}}\sigma}}\right)\\&\quad-{\frac{{\sqrt{2}}\theta}{{\sqrt{\pi}}\nu\sigma}}\exp\left(-{\frac{(\nu-2\theta)^{2}}{8\sigma^{2}}}\right)+{\frac{{\sqrt{2}}\theta}{{\sqrt{\pi}}\nu\sigma}}\exp\left(-{\frac{(\nu+2\theta)^{2}}{8\sigma^{2}}}\right)-{\frac{\nu-2\theta}{{\sqrt{2\pi}}\sigma\nu}}\exp\left(-{\frac{(\nu-2\theta)^{2}}{8\sigma^{2}}}\right)-{\frac{\nu+2\theta}{{\sqrt{2\pi}}\sigma\nu}}\exp\left(-{\frac{(\nu+2\theta)^{2}}{8\sigma^{2}}}\right)\end{aligned}$$

As seen from the equation above, the true gradient $\nabla_{\theta}\,\mathbb{E}_{w}\left[{\bar{H}}(x)\right]\neq 0$ at $\theta=0$. However, using FOBG, we obtain $\nabla_{\theta}{\bar{H}}(a)=0$ in samples where $|a|>\nu/2$, which occurs with probability at least $1-\frac{\nu}{\sqrt{2\pi}\sigma}$. Even though both ZOBG and FOBG are theoretically unbiased as $N\to\infty$, both exhibit empirical bias, as shown in Figure 14.

![15_image_0.png](15_image_0.png)

(a) Gradient estimates at N = 1000. (b) Change in gradient bias for different sample sizes N.

Figure 14: Gradient bias study for the Soft Heaviside function shown in Eq. 4. Both ZOBG and FOBG exhibit bias at low sample sizes; however, FOBG is especially susceptible to the empirical bias phenomenon.

## B **Proof Of Lemma 3.1**

Assumption B.1. As well as being continuously differentiable (Assumption 2.7), the policy is 1-Lipschitz-smooth, $\|\nabla_{\theta}\pi_{\theta}(a|s)\| \leq B_{\pi} \leq 1$, and the reward function is 1-Lipschitz-smooth with bounded rewards, $r(s, a) \leq \|\nabla r(s, a)\| \leq B_{r} \leq 1$, for all $s \in \mathbb{R}^{n}$, $a \in \mathbb{R}^{m}$, $\theta \in \mathbb{R}^{d}$.

Proof. First, we expand our definition of bias and define a random variable for a single Monte-Carlo sample:

$$\left\|\widehat{\nabla}_{\mathbf{\theta}}^{[1]}J(\mathbf{\theta})-\widehat{\nabla}_{\mathbf{\theta}}^{[0]}J(\mathbf{\theta})\right\|=\left\|\frac{1}{N}\sum_{i=1}^{N}\widehat{\nabla}_{\mathbf{\theta}}^{[1]}J_{i}(\mathbf{\theta})-\frac{1}{N}\sum_{i=1}^{N}\widehat{\nabla}_{\mathbf{\theta}}^{[0]}J_{i}(\mathbf{\theta})\right\|=\frac{1}{N}\left\|\sum_{i=1}^{N}\left(\widehat{\nabla}_{\mathbf{\theta}}^{[1]}J_{i}(\mathbf{\theta})-\widehat{\nabla}_{\mathbf{\theta}}^{[0]}J_{i}(\mathbf{\theta})\right)\right\|\tag{10}$$

Define $X_{i}=\widehat{\nabla}_{\theta}^{[1]}J_{i}(\theta)-\widehat{\nabla}_{\theta}^{[0]}J_{i}(\theta)$ and bound it:

$$\begin{aligned}X_{i}&=\sum_{h=1}^{H}\left[\nabla_{a_{h}}r(s_{h},a_{h})\nabla_{\theta}\pi_{\theta}(a_{h}|s_{h})+\sum_{h'=1}^{h-1}\nabla_{s_{h}}r(s_{h},a_{h})\prod_{t=1}^{h'}\nabla_{s_{t}}f(s_{t},a_{t})\,\nabla_{\theta}\pi_{\theta}(a_{h'}|s_{h'})\right]-\sum_{h=1}^{H}r(s_{h},a_{h})\nabla_{\theta}\log\pi_{\theta}(a_{h}|s_{h})\\&=\sum_{h=1}^{H}\left[\nabla_{a_{h}}r(s_{h},a_{h})\nabla_{\theta}\pi_{\theta}(a_{h}|s_{h})-r(s_{h},a_{h})\nabla_{\theta}\log\pi_{\theta}(a_{h}|s_{h})\right]+\sum_{h=1}^{H}\sum_{h'=1}^{h-1}\nabla_{s_{h}}r(s_{h},a_{h})\prod_{t=1}^{h'}\nabla_{s_{t}}f(s_{t},a_{t})\,\nabla_{\theta}\pi_{\theta}(a_{h'}|s_{h'})\\\|X_{i}\|&\leq\sum_{h=1}^{H}\sum_{h'=1}^{h-1}B_{r}B_{\pi}\prod_{t=1}^{h'}\|\nabla f(s_{t},a_{t})\|\qquad\text{(Assumption B.1)}\end{aligned}$$

We can now sum up these random variables as $Z=\sum_{i=1}^{N}X_{i}$ and create an upper concentration bound.
As these RVs are difficult to bound, we can apply a Chebyshev inequality (Tropp et al., 2015):

$$P(\|Z-\mathbb{E}[Z]\|>\epsilon)\leq{\frac{\operatorname{Var}[Z]}{\epsilon^{2}}}$$

Since the gradient samples are assumed to be i.i.d., we can expand this variance using the definition of each gradient type, where all expectations are taken over the action distributions $a_h$ for each step:

$$\begin{aligned}\operatorname{Var}[Z]&=\operatorname{Var}\left[\sum_{i=1}^{N}X_{i}\right]=\operatorname{Var}\left[\sum_{i=1}^{N}\widehat{\nabla}_{\theta}^{[1]}J_{i}(\theta)-\widehat{\nabla}_{\theta}^{[0]}J_{i}(\theta)\right]=\sum_{i=1}^{N}\operatorname{Var}\left[\widehat{\nabla}_{\theta}^{[1]}J_{i}(\theta)-\widehat{\nabla}_{\theta}^{[0]}J_{i}(\theta)\right]\\&\leq\sum_{i=1}^{N}\mathbb{E}\left[\left\|\widehat{\nabla}_{\theta}^{[1]}J_{i}(\theta)-\widehat{\nabla}_{\theta}^{[0]}J_{i}(\theta)\right\|^{2}\right]\leq\sum_{i=1}^{N}\mathbb{E}\left[\left\|\sum_{h=1}^{H}\sum_{h'=1}^{h-1}B_{r}B_{\pi}\prod_{t=1}^{h'}\|\nabla f(s_{t},a_{t})\|\right\|^{2}\right]\leq NH^{4}B_{r}^{2}B_{\pi}^{2}\,\mathbb{E}\left[\prod_{t=1}^{H}\|\nabla f(s_{t},a_{t})\|^{2}\right]\end{aligned}$$

With this result we can return to Equation 10 and obtain

$$\left\|\widehat{\nabla}_{\theta}^{[1]}J(\theta)-\widehat{\nabla}_{\theta}^{[0]}J(\theta)\right\|\leq\frac{1}{N}NH^{4}B_{r}^{2}B_{\pi}^{2}\mathbb{E}\left[\prod_{t=1}^{H}\left\|\nabla f(s_{t},a_{t})\right\|^{2}\right]=H^{4}B_{r}^{2}B_{\pi}^{2}\mathbb{E}\left[\prod_{t=1}^{H}\left\|\nabla f(s_{t},a_{t})\right\|^{2}\right]$$

## C **Summary Of Modifications**

To develop the Adaptive Horizon Actor-Critic (AHAC) algorithm, we used the Short Horizon Actor-Critic (SHAC) algorithm (Xu et al., 2022) as a starting point. This section details all modifications applied to SHAC in order to derive AHAC and achieve the results reported in this paper. We also note that some of these are not exclusive to either approach.

1. **Adaptive horizon objective** - instead of optimising for the short-horizon rollout return, we introduce the new constrained objective shown in Equation 8. To optimise it and adapt the horizon H, we introduced the dual problem in Equation 9 and optimised it directly for the policy parameters θ and the Lagrangian coefficients ϕ.

$$\underbrace{J(\mathbf{\theta}):=\sum_{h=t}^{t+T-1}\gamma^{h-t}r(s_{h},\mathbf{a}_{h})+\gamma^{T}V_{\mathbf{\psi}}(\mathbf{s}_{t+T})}_{\text{SHAC objective}}\quad s.t.\quad\|\nabla f(\mathbf{s}_{t},\mathbf{a}_{t})\|\leq C\quad\forall t\in\{0,..,T\}$$

2. **Dual critic** - the original implementation of SHAC struggled with more complex tasks such as Humanoid due to its highly non-convex value landscape. The authors of (Xu et al., 2022) solved that by introducing a delayed target critic similar to prior work in deep RL (Lillicrap et al., 2015). We found that approach brittle and requiring more hyper-parameter tuning. Instead, we replaced it with a dual critic similar to (Haarnoja et al., 2018), which has been shown to stabilise on-policy algorithms (Amos et al., 2021). For our work, we found that it reduced the variance of asymptotic rewards achieved by AHAC while removing a hyper-parameter.

3. **Critic training until convergence** - empirically, we found that different problems present different value landscapes. The more complex the landscape, the more training the critic required, and the critic often failed to fit the data with the limited number of critic iterations done in SHAC (16). Instead of training the critic for a fixed number of iterations, we trained the (dual) critic of AHAC until convergence, defined by $\sum_{i=n-5}^{n} L_i(\psi) - L_{i-1}(\psi) < 0.5$, where $L_i(\psi)$ is the critic loss for mini-batch iteration $i$. We allowed the critic to be trained for a maximum of 64 iterations. We found that this resulted in asymptotic performance improvements on more complex tasks such as Humanoid and SNU Humanoid while removing yet another hyper-parameter.
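To make modifications (2) and (3) concrete, below is a minimal sketch of a dual-critic training loop with the convergence-based stopping rule. All object names are hypothetical placeholders; taking the minimum over the two critics for bootstrapping follows the clipped double-critic idea of Haarnoja et al. (2018) and is our assumption here, and we read the stated stopping criterion as an absolute plateau test.

```python
import torch

def train_critics_until_convergence(critics, optimizer, sample_batch,
                                    max_iters=64, tol=0.5):
    """Train a pair of critics until the loss plateaus (at most 64 iterations).

    `critics` is a pair of value networks and `sample_batch` yields
    (states, value_targets) mini-batches; both are assumed placeholders.
    """
    losses = []
    for i in range(max_iters):
        states, value_targets = sample_batch()
        loss = sum(torch.nn.functional.mse_loss(c(states), value_targets)
                   for c in critics)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
        # Plateau test: the paper sums consecutive loss differences, which
        # telescopes to the change over the window. Taking the absolute
        # value here is our interpretation, not the stated rule.
        if i >= 6 and abs(sum(losses[j] - losses[j - 1]
                              for j in range(i - 5, i + 1))) < tol:
            break
    return losses

def bootstrap_value(critics, states):
    # Pessimistic value estimate: the element-wise minimum of the two critics
    # (an assumption, following the double-critic construction of SAC).
    return torch.minimum(critics[0](states), critics[1](states))
```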
## D **AHAC-1 Algorithm**

Algorithm 2: Adaptive Horizon Actor-Critic (Single environment)
1 **Given**: γ: discount rate
2 **Given**: α: learning rate
3 **Given**: H: maximum trajectory length
4 **Given**: C: contact threshold
5 **Initialise learnable parameters** θ, ψ
6 t ← 0
7 **while** *episode not done* do
/* rollout policy */
8 Initialise buffer D
9 h ← 0
10 **while** ∥∇f(st, at)∥ ≤ C and h ≤ H do
11 at ∼ πθ(·|st)
12 st+1 = f(st, at)
13 D ← D ∪ {(st, at, rt, Vψ(st+1))}
14 t ← t + 1; h ← h + 1
/* train actor with Eq. 7 */
15 θ ← θ + α∇θJ(θ)
/* train critic with Eq. 6 */
16 **while** *not converged* do
17 sample (s, Vˆ(s)) ∼ D
18 ψ ← ψ − α∇ψL(ψ)

## E **Simulation Details**

The experimental simulator, dflex (Xu et al., 2022), employed in Section 5, is a GPU-based differentiable simulator utilizing the Featherstone formulation for forward dynamics. It employs a spring-damper contact model with Coulomb friction. The dynamics function f is modeled by solving the forward dynamics equations:

$$M\ddot{q}=J^{T}F(q,\dot{q})+c(q,\dot{q})+\tau(q,\dot{q},a)$$

where $q$, $\dot{q}$, $\ddot{q}$ are joint coordinates, velocities, and accelerations, respectively. $F$ represents external forces, $c$ includes Coriolis forces, and $\tau$ denotes joint-space actuation. The mass matrix $M$ and Jacobian $J$ are computed concurrently using one thread per environment. The composite rigid body algorithm (CRBA) is employed for articulation dynamics, enabling caching of the matrix factorization for reuse in the backward pass through parallel Cholesky decomposition. After determining joint accelerations $\ddot{q}$, a semi-implicit Euler integration step updates the system state $s = (q, \dot{q})$. Torque-based control is employed for simple environments, where the policy outputs τ at each time-step. For further details, see (Xu et al., 2022). It is noted that dflex is no longer actively developed and has been succeeded by warp (Macklin, 2022).

The rewards used across all experiments are designed to maximise the forward velocity $v_x$:

| Environment | Reward |
|---|---|
| Hopper | $v_x + R_{height} + R_{angle} - 0.1\|a\|_2^2$ |
| Ant | $v_x + R_{height} + 0.1R_{angle} + R_{heading} - 0.01\|a\|_2^2$ |
| Anymal | $v_x + R_{height} + 0.1R_{angle} + R_{heading} - 0.01\|a\|_2^2$ |
| Humanoid | $v_x + R_{height} + 0.1R_{angle} + R_{heading} - 0.002\|a\|_2^2$ |
| SNU Humanoid | $v_x + R_{height} + 0.1R_{angle} + R_{heading} - 0.002\|a\|_2^2$ |

Table 2: Reward functions used for each environment.

We additionally use auxiliary rewards: $R_{height}$ to incentivise the agent to maintain its height, $R_{angle}$ to keep the agent's normal vector pointing up, $R_{heading}$ to keep the agent's heading pointing towards the direction of running, and a norm over the actions to incentivise energy-efficient policies. For most algorithms, none of these rewards apart from the last one are crucial to succeed in the task. However, all of them aid learning policies faster.

$$R_{height}=\begin{cases}h-h_{term}&\text{if}\ h\geq h_{term}\\ -200(h-h_{term})^{2}&\text{if}\ h<h_{term}\end{cases}$$

$$R_{angle}=1-\left(\frac{\theta}{\theta_{term}}\right)^{2}$$

$R_{heading} = \|q_{forward} - q_{agent}\|_2^2$ is the difference between the heading of the agent $q_{agent}$ and the forward vector $q_{forward}$. $h$ is the height of the CoM of the agent and $\theta$ is the angle of its normal vector. $h_{term}$ and $\theta_{term}$ are parameters that we set for each environment depending on the robot morphology.
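For illustration, here is a compact sketch of these shaping terms using the Ant-style weights from Table 2; the termination parameters `h_term` and `theta_term` below are made-up values, not the tuned per-environment settings.

```python
import numpy as np

def locomotion_reward(v_x, h, theta, q_forward, q_agent, a,
                      h_term=0.27, theta_term=0.5):
    """Illustrative Ant-style shaped reward; constants are assumptions."""
    # Height term: linear above the threshold, steep quadratic penalty below.
    r_height = (h - h_term) if h >= h_term else -200.0 * (h - h_term) ** 2
    # Uprightness term: 1 when perfectly upright, 0 at the termination angle.
    r_angle = 1.0 - (theta / theta_term) ** 2
    # Heading term, added as in Table 2.
    r_heading = np.sum((np.asarray(q_forward) - np.asarray(q_agent)) ** 2)
    # Energy-efficiency penalty on the action norm.
    action_penalty = 0.01 * np.sum(np.asarray(a) ** 2)
    return v_x + r_height + 0.1 * r_angle + r_heading - action_penalty
```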
Similar to other high-performance RL applications in simulation, we find it crucial to terminate episodes early if the agent exceeds these termination parameters. However, it is worth noting that AHAC is still capable of solving all tasks described in the paper without these termination conditions, albeit more slowly.

## F **Hyper-Parameters**

This section details all hyper-parameters used in the main experiments of Section 5. PPO and SAC, as our MFRL baselines, have been tuned to perform well across all tasks, including task-specific hyper-parameters. SVG has not been specifically tuned for all benchmarks due to time limitations but instead uses the hyper-parameters presented in (Amos et al., 2021).2 SHAC is tuned to perform well across all tasks using a fixed H = 32 as in the original work (Xu et al., 2022). AHAC shares all of its common hyper-parameters with SHAC and only has its horizon learning rate αϕ tuned per-task. The contact threshold C and iterative critic training criteria did not benefit from tuning. Note that the dual critic employed by AHAC uses the same hyper-parameters as the SHAC critic. Therefore, we have left AHAC under-tuned in comparison to SHAC in order to highlight the benefits of the adaptive horizon mechanism presented in this work.

Table 3 shows common hyper-parameters shared between all tasks, while Table 4 shows hyper-parameters specific to each problem. Where possible we attempted to use the hyper-parameters suggested by the original works; however, we also attempted to share hyper-parameters between algorithms to ease comparison. If a specific hyper-parameter is not mentioned, then it is the one used in the original work behind the specific algorithm.

2Tuning SVG proved difficult as we were unable to vectorise the algorithm, resulting in up to 2-week training times. This made it difficult to tune for our benchmarks.

| | AHAC | SHAC | PPO | SAC | SVG |
|---------------------|--------|--------|-------|-------|------|
| Mini-epochs | 16 | 5 | 4 | | |
| Batch size | 8 | 8 | 8 | 32 | 1024 |
| λ | 0.95 | 0.95 | 0.95 | | |
| γ | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| H - horizon | 32 | 32 | 3 | | |
| C - contact thresh. | 500 | | | | |
| Grad norm | 1.0 | 1.0 | 1.0 | | |
| ϵ | 0.2 | | | | |
| Actor log(σ) bounds | (-5,2) | (-5,2) | | | |
| α - temperature | 0.2 | 0.1 | | | |
| λα | 10⁻⁴ | 10⁻⁴ | | | |
| \|D\| - buffer size | 10⁶ | 10⁶ | | | |
| Seed steps | 0 | 0 | 0 | 10⁴ | 10⁴ |

| | Hopper | Ant | Anymal | Humanoid | SNU Humanoid |
|---------------|---------------|---------------|------------|------------|----------------|
| Actor layers | (128, 64, 32) | (128, 64, 32) | (256, 128) | (256, 128) | (512, 256) |
| Actor αθ | 2 × 10⁻³ | 2 × 10⁻³ | 2 × 10⁻³ | 2 × 10⁻³ | 2 × 10⁻³ |
| Horizon αϕ | 2 × 10⁻⁴ | 1 × 10⁻⁵ | 1 × 10⁻⁵ | 1 × 10⁻⁵ | 1 × 10⁻⁵ |
| Critic layers | (64, 64) | (64, 64) | (256, 128) | (256, 128) | (256, 256) |
| Critic αψ | 4 × 10⁻³ | 2 × 10⁻³ | 2 × 10⁻³ | 5 × 10⁻⁴ | 5 × 10⁻⁴ |
| Critic τ | 0.2 | 0.2 | 0.2 | 0.995 | 0.995 |

Table 3: Hyper-parameters for all algorithms benchmarked in Section 5. These are shared across all tasks.

Table 4: Task-specific hyper-parameters. All benchmarked algorithms share the same actor and critic network hyper-parameters with ELU activation functions. AHAC and PPO do not have target critic networks and as such do not have τ as a hyper-parameter.
## G **Tabular Experimental Results**

Below we present the asymptotic results of Section 5 in tabular form.

| | Hopper | Ant | Anymal | Humanoid | SNU Humanoid |
|------|---|---|---|---|---|
| PPO | 1.00 ± 0.11 | 1.00 ± 0.12 | 1.00 ± 0.03 | 1.00 ± 0.05 | 1.00 ± 0.09 |
| SAC | 0.87 ± 0.16 | 0.95 ± 0.08 | 0.98 ± 0.06 | 1.04 ± 0.04 | 0.88 ± 0.11 |
| SVG | 0.84 ± 0.08 | 0.83 ± 0.13 | 0.84 ± 0.19 | 1.06 ± 0.16 | 0.75 ± 0.23 |
| SHAC | 1.02 ± 0.03 | 1.16 ± 0.13 | 1.26 ± 0.04 | 1.15 ± 0.04 | 1.44 ± 0.08 |
| AHAC | 1.10 ± 0.00 | 1.41 ± 0.08 | 1.46 ± 0.06 | 1.35 ± 0.07 | 1.64 ± 0.07 |

Table 5: Tabular results of the asymptotic (end of training) rewards achieved by each algorithm across all tasks. The results presented are the 50% IQM and standard deviation across 10 random seeds. All algorithms have been trained until convergence. The rewards presented are PPO-normalised.

## H **Ablation Study Details**

In Section 5, we provided an ablation study of the individual contributions of our proposed approach, AHAC, as summarised in Appendix C. In this section, we provide further details on the conducted experiments. The aim of the study is to understand which changes contribute to the asymptotic performance of AHAC. To best achieve that, we started from SHAC as the baseline, using the tuned version detailed in Appendix F above. Afterwards, we added the individual components that contribute to AHAC using the hyper-parameters from the section above. Note that only hyper-parameters particular to AHAC have been tuned to achieve the results presented in this paper; all other hyper-parameters are the ones tuned for our baseline SHAC with H = 32. In particular, we have only tuned the adaptive horizon learning rate αϕ, the contact threshold C, and the criteria for early stopping while doing iterative critic training.

| Type | Ablation | H | Actor objective | Critic | Iterative critic training |
|---|---|---|---|---|---|
| Baseline | SHAC H=32 | 32 | Eq. 7 | Single w/ target | |
| Actor | SHAC H=29 | 29 | Eq. 7 | Single w/ target | |
| Actor | Adapt. Objective | 32 | Eq. 9 | Single w/ target | |
| Actor | Adapt. Horizon | adaptive | Eq. 9 | Single w/ target | |
| Critic | Iterative critic | 32 | Eq. 7 | Single w/ target | ✓ |
| Critic | Dual critic | 32 | Eq. 7 | Dual | |
| Full | AHAC | adaptive | Eq. 9 | Dual | ✓ |

Table 6: Differences between the ablations studied, split into actor and critic ablations. All ablations only introduce one component to the baseline, SHAC.

Table 6 shows the detailed differences between the ablations presented in Section 5. The ablations include:

1. SHAC H=32 - our baseline with most hyper-parameters tuned to it.
2. SHAC H=29 - SHAC using the horizon H which AHAC converges to asymptotically.
3. Adapt. Objective - SHAC using the adaptive horizon objective introduced in Eq. 9 but without using it to adapt the horizon.
4. Adapt. Horizon - SHAC using the objective in Eq. 9 and adapting the horizon. This is equivalent to AHAC without the dual critic with iterative training.
5. Iterative critic - SHAC with a single target critic utilising iterative critic training until convergence.
6. Dual critic - SHAC with a dual critic and no target.
We provide ablation results for these changes on the Ant task in Figure 13. We also provide the learning curves for the same experiments in Figure 15 and tabular results in Table 7.

| Ablation | Asymptotic reward |
|--------------------|---------------------|
| SHAC H=32 | 1.16 ± 0.14 |
| SHAC H=29 | 1.23 ± 0.17 |
| Adaptive Objective | 1.18 ± 0.18 |
| Adaptive Horizon | 1.35 ± 0.12 |
| Iterative Critic | 1.17 ± 0.13 |
| Dual Critic | 1.20 ± 0.07 |
| AHAC | 1.41 ± 0.08 |

Table 7: Results of asymptotic performance of our ablation study, showing the 50% IQM and standard deviation.

![22_image_0.png](22_image_0.png)

Figure 15: Standalone ablation results for the Ant task. These results are the same as in Figure 13 but presented in a different format for improved legibility.
Review 1:
Summary: The manuscript introduces a new first order model-based reinforcement learning (MBRL) algorithm called Adaptive Horizon Actor-Critic (AHAC). This algorithm builds on top of a previous algorithm named Short-Horizon Actor-Critic (SHAC). The core idea behind these two algorithms is to carry out first order estimations of the gradient of the cumulative reward by means of differentiable dynamics and reward models, also termed differentiable simulators. Backpropagating through the dynamics and reward models appears to have a positive effect on the sample complexity, and even asymptotic performance, of these algorithms when compared with zeroth order model-free reinforcement learning (MFRL) algorithms. The problem with naive backpropagation through dynamics and reward models is that first order approximations of the gradient are highly biased and present high variability. This bias is shown in the paper to be bounded by the dynamics model's gradients. The dynamics gradients can be particularly large in states that are informally called contacts. SHAC does this type of naive backpropagation and AHAC attempts to solve this by means of two different elements: a gradient clipping over the dynamics gradients, analogous to the gradient clipping observed in MFRL algorithms like PPO, and making adaptive a hyperparameter that is fixed in SHAC and PPO called the horizon, which determines how long the subtrajectories of an episode used to carry out backpropagation are. The motivation behind gradient clipping is that a smaller gradient will result in smaller bias and variance, which ideally would result in more stable deep reinforcement learning dynamics. Similarly, the idea behind having an adaptive horizon is that in this way backpropagation through transitions with high-normed gradients (contacts) would be eliminated altogether. The way in which the horizon is adapted lies in the observation that, for continuous control environments with the main goal being locomotion, optimal policies display oscillatory behaviour, referred to as gait, with some natural period. This natural period coincides with the encounter of contacts and so with the desirable horizon to consider for backpropagation. To instantiate the adaptive horizon, AHAC introduces a Lagrangian formulation of the cumulative reward maximization problem that requires gradients to have a small norm. Correspondingly, the horizon is updated by subtracting the dual variables associated to the dynamics gradients at each step. Intuitively, the dual variables will be 0 only for the natural period of gaits, and so, ideally, the adaptive horizon will correspond to this period after enough iterations. Lastly, the experiments included in the manuscript suggest that AHAC outperforms alternative MBRL and MFRL methods in diverse locomotion tasks with high dimensional state and action spaces. In summary, the manuscript proposes and motivates an MBRL algorithm that, based on the experiments, outperforms the sample efficiency and asymptotic cumulative reward of alternative MBRL and MFRL algorithms by improving naive backpropagation through contacts of a previous MBRL algorithm.
Strengths and Weaknesses: Strengths: 1. Overall, the paper does seem to introduce an interesting idea: first order backpropagation of the cumulative reward should consider an adaptive finite horizon that avoids large gradients. While not commented on in the paper, this idea resembles the n-step TD and TD($\lambda$) estimates that can be used for the zeroth order methods. 2.
The paper is very didactic in spirit, analyzing step-by-step the pitfalls of SHAC and the possible solutions. In principle, this allows gaining an intuition of AHAC. 3. Although 5 seeds are a small number to make strong statistical conclusions, the experiments do seem to support positive answers to the questions 2-4 raised on Section 5 (for question 1, I see evidence for better sample efficiency, but not for "enhanced stability").
Weaknesses: 1. All being said, the imprecision (or straight errors) in terms and equations in the paper is so frequent that the motivations for AHAC end up being more confusing than enlightening. 2. Similarly, the use of so many techniques on top of SHAC makes it unclear what exactly is making the algorithm perform so well. Especially since most of the techniques are heuristic in nature and tangential to the paper's line of argumentation. 3. A small number of seeds is being used per environment.
Requested Changes: Proposed adjustments: 1. Make modifications to the Introduction to make it more precise. What follows are some suggestions of points that could be detailed or modified (not critical): - Make clear what is the distinction or relationship between simulators and models. - Remove comments to the sim-to-real gap. This is not part of the motivation the paper develops later and raises questions about having perfectly deterministic dynamics. - The use of first-order methods seems unmotivated when first mentioned. - It is not clear what is the relationship between first-order methods and either "trajectory generation" in model-based control or "non-differentiable contact point discontinuities". - What is the definition of brittle? Is it unrelated to "learning instability" or "hyperparameter tuning"? If not, it seems unnecessary to mention it. 2. Correct technical imprecisions or mistakes in Sections 2-5 (critical): - $\hat{\nabla}^{[0]}J$ is not explicitly defined. - Differentiable simulators are not defined. - Contacts are not explicitly defined. - In the Heaviside example, the policy should be a normal distribution with parameters, not with fixed mean 0 and deviation 1. - In the same example, it is understandable but not clear what is the relationship between $R_1$, $\bar{H}$, $\theta$, $a$, $\sigma$, and $\nu$. Why not define $r(s,a)$ and $\theta$ explicitly? - In the same example, the Heaviside function goes from 0 to 1, but in Figure 2 it goes from -1 to 1. It is not clear which one is incorrect since changing the 0 to -1 results in an expected value of 0, as opposed to what was stated. If -1 is changed to 0, then I do not get the same expected value either. - In the same example, it is stated that "we achieve more realistic behaviour". What does this mean? - In Figure 2, what is the orange line and the x-axis? The expected value is a scalar. How come it is represented by a curve? - In Lemma 3.1, correct the first $s$ in $\|\|\nabla\pi_{\theta}(s|s)\|\|$. - In Figure 4, the same color is being used for different things. In particular, FOBG clipped in c) has the same color as a high stiffness FOBG for a) and b), while there are no legends for a). I suggest changing the color of this curve in c) and possibly adding legends to a) to avoid confusions. - I find it unnecessarily imprecise to state that the policy is a sum, right after Figure 4: $\pi(\theta)=\theta+w$. It is the action random variable that is equal to the sum of the mean and the noise, and the policy is a Gaussian with mean $\theta$ and deviation $\sigma$.
- In the Equation presented right after introducing the additive policy $\pi(\theta)=\theta+w$, the logarithm of the policy gradient disappeared, it is not clear where the approximation is coming from (or what it even means -- is this for large $N$?), and to me it makes no sense to have $w$ on the right hand side, since this is precisely the random variable with respect to which the expected value would be taken.
- In this same ball example, it is not clear what is the relationship between $\psi$ and $a$. It is also not clear what it means to sample $\psi$ uniformly, given that this is, from what I understand, $a_1$.
- The reward in the ball example is introduced too late.
- At the end of Section 3 it is stated that "this preserves most of the gradient information". What does information mean in this setting? In order to say "most", the notion of information used should be measurable in some way. Is this the case?
- In subsection 4.1, I find the chain rule being employed confusing. It is correct, but not straightforward what it means to take the gradient of the policy provided that the action is not directly a function of the parameters. Why not explicitly define the random variable for action as a function of $\theta$?
- When defining the H-step TD($\lambda$) return, it probably should be $V_{\psi}$ instead of $V_{\phi}$.
- In Algorithm 1, since it is a maximization problem, the signs should be positive when updating $\theta$ and $\phi$.
- When defining the normalized "contact forces" (what is this?), what does it mean to divide 2 vectors that have different shapes?
- Stability (in question 1 of Section 5) is not defined.
3. Include ablation experiments (critical): There are at least 6 characteristics of AHAC that make it different with respect to SHAC and the rest of the baseline models. The line of argumentation seems to suggest that 2 of them are central for the observed performance, but there is no evidence that the other 4 are not more important. Taking this into consideration, I consider it necessary to include ablation experiments that are conclusive regarding the relevance of the two main elements introduced by the paper. More specifically, the 6 elements I am referring to, 4 of which you already detail in the Appendix, are the following: - adaptive horizon objective, - clipped dynamics gradient norms, - dual critic, - critic training until convergence, - dynamics gradient normalization ("normalised contact forces"), - and TD($\lambda$) value estimation. Then, I suggest making use of any experiment that shows that the performance difference between AHAC and SHAC lies mainly in elements 1 and 2. Regarding the 6th one, the comparison would be mainly with the other methods like PPO and SAC, where TD($\lambda$) is typically not used for the critic targets. 4. Add more seeds to all experiments (not critical)
Broader Impact Concerns: I have no concerns.
==================================================
Review 2:
Summary: First, the paper shows first-order model-based reinforcement learning (FO-MBRL) approaches suffer from high empirical bias and identifies "stiff contact" as the cause of this bias. Then, the paper introduces Adaptive Horizon Actor Critic (AHAC) as a solution to this. AHAC is a FO-MBRL method that optimizes for the horizon to avoid stiff contact. This is accomplished by bounding the norm of the gradient of the dynamics function along the sequence, which is optimized using a Lagrange multiplier.
Finally, in five simulated control tasks AHAC achieves higher asymptotic reward than model-free methods and lower variance/sensitivity than FO-MBRL methods. The largest of the tasks has a 152-dimensional action space demonstrating to some extent that AHAC can scale to high dimensional control problems. The authors have made code available on GitHub.
Strengths and Weaknesses: Strengths: Section 3, though a tad on the long side, does a good job of demonstrating the bias in the first-order gradient estimates and its effect on learning. In particular, lemma 3.1 theoretically justifies the norm constraint imposed by AHAC.
Weaknesses: AHAC is not really adaptive in a strong sense. The horizon does change while learning, but it is not a function of, e.g., the starting state. It is not clear (either intuitively or from evidence in the paper) why it should perform better than SHAC with a parameter sweep over the horizon. The need for the dynamics and reward functions to be deterministic, known ahead-of-time, and differentiable seems prohibitive, limiting the approach to certain simulated environments only.
Some of the claims in the paper are not fully supported: - Re: FO-MBRL "This bias is primarily driven by the high magnitude dynamical gradients"--the particular example demonstrates this, but this does not imply this is the case in all domains. - Re: Lemma 3.1 "However, giving any bounds over the dynamics ||grad f||**2 is difficult, yet is the biggest contributor to the variance in Equation 5"--There is an H**4 term in equation 5 as well. It is not at all obvious the norm term is always the biggest contributor. - "We believe our approach avoids local minima by adapting its horizon to stiff gradients"--what is the evidence that supports this?
The paper is not self-consistent and loose/under-specified in places: - In the preliminaries it is stated the paper discusses finite-horizon tasks, but upon introducing the algorithm it switches to infinite-horizon tasks with discounted reward. - H is defined as a natural number, but in the optimization it is a real. Despite this, the real H is used to, e.g., index into the state sequence. - hat V depends on H, but this dependence is not explicitly spelled out. It is not obvious in the algorithm what H to use as, e.g., values are stored in the replay buffer that depend on the H at the time they are stored.
Requested Changes: The caption of table 1 implies that H converges (at least experimentally). One would expect that short horizon actor/critic with that particular H should perform at least as well, if not better, than AHAC. This means some combination of: (a) H in AHAC has not converged, (b) The SHAC experiments are undertuned, (c) There are some helpful learning dynamics of AHAC to speed convergence. It would be beneficial to investigate this.
Broader Impact Concerns: None.
==================================================
Review 3:
Summary: This paper proposes a way to exploit differentiable simulators in RL. It does so by truncating exploding gradients during contact, resulting in more stable gradient estimates and sample efficient learning. The algorithm is tested in locomotion tasks in MuJoCo, where it outperforms some other RL algorithms.
Strengths and Weaknesses: Strengths: - Overall approach is intuitive: truncate high-variance gradients that happen during contact. - Evaluation is not limited to reward/learning curves, the paper specifically demonstrates high-mag gradients and that they can be mitigated.
- Learning curves show that the method is useful and promising for contact-rich tasks. - I appreciated the effort behind the Related Work in Table 2.
Questions: - How do you get access to a differentiable sim for the mujoco problems? Is that something you have to learn? I couldn't find that in the paper, apologies if I missed it. - In Section 2, you assume that the transition function and start-state are deterministic. Are these assumptions necessary? I could not figure out how they are exploited in the main algorithm. - In Section 3 Pg 4, you say that the sq norm of the gradient of f is the biggest contributor to the variance in Eq 5; can you spell out how Eq 5 is a statement about variance? - In the equation on top of Pg 5 (simplification of the "0th order" PG), what happened to the log in gradient? If $\pi(\theta)=\theta + w$, shouldn't the gradient wrt theta be $(1+w)$? Does $w$ depend on $\theta$? - In Sec 4.1, you say that "SHAC gets pushed to local minima, which eventually results in policy collapse." Do you have evidence for this claim? - Can you update the SVG results in Table 1 so that it is a fair comparison?
Requested Changes: I think the paper does a good job at following up intuitive explanation with thorough empirical evaluation. But, I think that the paper can be improved with some writing/presentation changes: - The math in the paper was hard for me to follow; there were some abrupt jumps going from one step to the other. - I was not familiar with the terminology of 1st order RL; it would be better to define that before talking about it in the Introduction. - The SVG results were hard to make sense of, but I am sure you will update them when all your runs finish :) - The biggest confusion I have is how the differentiable simulator is obtained and how you actually compute the gradient wrt state-action.
Broader Impact Concerns: n/a
==================================================
Metareview:
Recommendation: Reject
Comment: In the end, one reviewer voted weak reject, one reviewer voted weak accept (but acknowledged the weaknesses raised above during the discussion and that many of the AC's suggestions should be addressed). The third voted accept, but did not participate in the discussion and their review was not detailed. Again, I thank the authors for their engagement in improving the work; it just needs a couple more iterations. Certainly a good candidate for resubmission as a major revision.
==================================================
# PAITS: Pretraining And Augmentation For Irregularly-Sampled Time Series

Anonymous authors Paper under double-blind review

## Abstract

Real-world time series data that commonly reflect sequential human behavior are often uniquely irregularly sampled and sparse, with highly nonuniform sampling over time and entities. Yet, commonly-used pretraining and augmentation methods for time series are not specifically designed for such scenarios. In this paper, we present PAITS (Pretraining and Augmentation for Irregularly-sampled Time Series), a framework for identifying suitable pretraining strategies for sparse and irregularly sampled time series datasets. PAITS leverages a novel combination of NLP-inspired pretraining tasks and augmentations, and a random search to identify an effective strategy for a given dataset. We demonstrate that different datasets benefit from different pretraining choices. Compared with prior methods, our approach is better able to consistently improve pretraining across multiple datasets and domains. Our code is attached and will be publicly available.

## 1 Introduction

Time series data appear in many areas ranging from healthcare to retail, and play an important role in tasks ranging from forecasting to classification. Despite the abundance of time series data in a variety of fields, there is often a relative scarcity of labeled data, due to the fact that generating annotations often requires additional effort or expertise. In other domains, such as computer vision and natural language processing (NLP), large unlabeled datasets have motivated the use of unsupervised pre-training methods which have led to great improvements in downstream supervised tasks with smaller labeled datasets Chen et al. (2020); Devlin et al. (2018). While pretraining strategies for time series data have been relatively less explored, recent works have shown promise, for example using contrastive learning Eldele et al. (2021); Zhang et al. (2022); Yue et al. (2022). However, many such approaches have been developed for cases in which data appear at frequent and regular intervals, with repeating signals. In real-world scenarios (such as medical data in healthcare settings), features may be collected at irregular intervals, and particularly for multivariate datasets, features may be collected at varying rates. When using a traditional approach of representing a time series as a matrix of features' values across regularly spaced time intervals (which we refer to as "discretization", as described by Shukla & Marlin (2020)), such irregularity can lead to challenges of extreme sparsity in the discretized data setting (i.e., high missingness for pre-defined intervals, as illustrated in Figure 1). Some recent studies have instead cast time series data as a set of events, which are each characterized by the time, the feature that was observed, and its value at that time Horn et al. (2020); Tipirneni & Reddy (2022). Such a representation is more flexible, requiring minimal preprocessing, and avoids the need to explicitly represent "missingness," as the data only contains events that were observed (e.g., Figure 1). This data representation also has parallels to natural language: while NLP casts text as a sequence of words (tokens), this approach for time-series data represents a time series as a sequence of events (with an associated feature, time, and value), and thus we hypothesize that pretraining and augmentation approaches developed for NLP may be particularly advantageous for sparse and irregularly sampled time series data.
In particular, we consider a forecasting task (previously explored by Tipirneni & Reddy (2022)) along with a sequence reconstruction task inspired by the pretraining strategies used in BERT Devlin et al. (2018).

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of discretized vs. sequence-based representation of irregularly sampled time series data.

We experimented with multiple pretraining tasks and related augmentations and found that there was not a one-size-fits-all approach that consistently worked best across multiple datasets. For example, when considering the same mortality prediction task in different datasets, we found that some benefited from the combination of two pretext tasks, whereas another was only helped by a single pretraining task; and notably, each dataset was best suited by differing augmentations. Because datasets benefit from different pretraining strategies, we present a framework, Pretraining and Augmentation for Irregularly-sampled Time Series (PAITS), to identify the best pretraining approach for a given dataset using a systematic search. Applied to multiple healthcare and retail datasets, we find consistent improvements over previously proposed pretraining approaches. In summary, the main contributions we provide are the following:

- PAITS introduces a novel combination of NLP-inspired pretext tasks and augmentations for self-supervised pretraining in irregularly sampled time series datasets.
- By leveraging a random search, PAITS consistently identifies effective pretraining strategies, leading to improvements in performance over previous approaches across healthcare and retail datasets.

## 2 Related Work

## 2.0.1 Self-Supervised Pretraining In Other Domains

In computer vision (CV) and natural language processing (NLP), where vast quantities of unlabeled data are available, researchers have explored an array of approaches for leveraging self-supervised learning, where pseudo-labels are automatically generated and used for pretraining models before finetuning with smaller labeled datasets. In CV, the current state of the art approaches involve contrastive learning, in which models are encouraged to be invariant to differently-augmented versions of the same input image. Thus, such methods, such as MoCo He et al. (2020) and SimCLR Chen et al. (2020), rely on the choice of image-related augmentations, such as color jitter and rotation. In NLP, where text data is represented by sequences of tokens, recent architectures such as transformers Vaswani et al. (2017), paired with self-supervised pretraining with large unlabeled datasets, have led to great improvements in NLP tasks by capturing contextual information about elements in a sequence given the other elements. In particular, two of the most common pretraining tasks are (1) "language modeling": predicting the next element given the previous elements of the sequence Peters et al. (2018); Radford & Narasimhan (2018), and (2) "masked language modeling": masking elements of a sequence and predicting the masked elements from the unmasked ones Devlin et al. (2018).

## 2.0.2 Self-Supervised Pretraining For Time Series Data

Inspired by progress in CV and NLP in recent years, researchers have also begun to adapt self-supervised pretraining approaches for time series data. For example, several methods developed for dense time series data (particularly signal data that follow cyclical patterns, such as brainwaves and electricity usage) have involved contrastive learning-based approaches.
The approaches introduced for time series data have often relied on new augmentations that reflect invariances expected specifically for time series data. For example, the TS-TCC method involves augmentations such as adding random jitter to the signals, scaling magnitudes within a specific feature, and shuffling chunks of signals (which is most appropriate for repeating patterns) Eldele et al. (2021). Similarly, TF-C incorporates the frequency domain for its contrastive learning approach Zhang et al. (2022). While these approaches have shown promise, they rely on an underlying expectation of dense and regularly sampled time series data (often with perceptible repeating patterns). In practice, when data are sparse and irregularly sampled (e.g., Appendix Table 4), the use of such methods is limited by extreme missingness in the discretized representation and the need to impute the vast majority of data. Beyond the more common contrastive learning-based methods, another recent pretraining approach by Zerveas et al. (2021) introduced an input denoising pretext task for regularly sampled time series data inspired by advances in NLP. In particular, they masked values for randomly sampled times and features, and as a self-supervised pretraining task, reconstructed the masked values, which led to improved downstream classification performance.

## 2.0.3 Data Augmentations And Augmentation Selection

While pretraining tasks often involve the use of augmentations (e.g., masking parts of an input in order for them to be reconstructed as a task), the use of augmentation alone has been explored as an approach for improving the stability and generalizability of representations in supervised training regimes. For example, in computer vision, augmentation methods are routinely used in training neural networks for supervised tasks in a variety of domains Krizhevsky et al. (2012); He et al. (2016); Esteva et al. (2017). Similar approaches have also been explored in time series data, ranging in complexity from simple noise addition and scaling to complex neural network-based approaches such as generative adversarial networks Harada et al. (2018); Lou et al. (2018). However, as found in CV, choosing augmentations is not always straightforward, and the choice of appropriate augmentation may vary across datasets and settings. While some work has explored complex procedures for selecting augmentation strategies (e.g., reinforcement learning-based Cubuk et al. (2019)), recent work has demonstrated that randomly selecting augmentations can sometimes perform equally well Cubuk et al. (2020). In our work, we explore the use of multiple imputation strategies, and employ a random search to identify appropriate solutions for each dataset.

## 2.0.4 Irregularly Sampled Time Series Data: Alternative Data Representations And Pretraining

When time series data are sparse and irregularly sampled, the traditional approach of representing these data as discretized matrices may become impractical due to high sparsity and the imputation that would be required (Figure 1). These challenges have motivated the recent use of a sequence-based representation Horn et al. (2020); Tipirneni & Reddy (2022), in which time series data are encoded by a sequence of observed events, with parallels to NLP's representation of text data as a sequence of tokens. In this context, inspiration may be drawn from self-supervised pretraining advances in NLP.
For example, STraTS uses a sequence-based representation of time series, and introduces a forecasting pretraining task (similar to language modeling) where the goal is to predict feature values at a future time point given an input sequence of observations Tipirneni & Reddy (2022). In this work, we draw on inspiration from NLP pretraining tasks, existing augmentation methods, and the previously proposed sequence-based representations of time series data. We uniquely combine pretext tasks and augmentations in the context of irregularly sampled time series data and present a systematic search for appropriate pretraining strategies across multiple datasets and domains.

Figure 2: PAITS illustration: We sample and select pretraining and augmentation strategies (details in Appendix Algorithm 1).

## 3 Methods

## 3.1 Notation And Data Representation

As illustrated in Figure 1, each time series is represented by a set of observation triplets which each encode a feature, the time it was measured, and the observed value at that time. We use a similar data representation to the one proposed by Tipirneni & Reddy (2022), which we describe below:

## 3.1.1 Labeled Dataset Notation

Consider a labeled dataset of $N_L$ samples: $D_L = \{(d_i, S_i, y_i)\}_{i=1}^{N_L}$. For each sample $i$, we have static features ($d_i$) that do not vary with time, a time series ($S_i$) which we define next, and an associated label ($y_i$). More specifically, the time series for instance $i$, where $M_i$ observations occurred within a given observation window, is given by $S_i = (s_1, \ldots, s_{M_i})$, where each observation $s_j$ is represented by a triplet $(t_j, v_j, f_j)$: $f_j \in \{1, \ldots, V\}$ indicates which feature was observed (among $V$ total available features), $t_j$ is the time that the feature was measured, and $v_j \in \mathbb{R}$ is the observed value of the feature $f_j$ at time $t_j$. Often, $S_i$ may be restricted to observations occurring in a specific period from which predictions will be made (e.g., the first 48 hours of a hospital stay). Given $d_i$ and $S_i$, the goal is to predict the accompanying label $y_i$.

## 3.1.2 "Unlabeled" Dataset For Self-Supervised Pretraining

For self-supervised pretraining, we consider an additional set of available data (which may overlap with the labeled dataset, but does not require labels $y$). We have an "unlabeled" dataset with $N_U$ (where usually $N_U \geq N_L$) samples: $D_U = \{(d_i, S_i)\}_{i=1}^{N_U}$. Here, each time series $S_i$ may contain additional observations outside of the times expected for the supervised task (e.g., an entire hospital stay, rather than the first 48 hours), and we do not expect access to a label. As described in the following sections, we generate pretraining input data along with pseudo-labels for our pretext tasks from unlabeled time series samples $S_i$.

From our unlabeled dataset $D_U$, we generate input data and pseudo-labels for our pretext tasks. In particular, we define an observation length ($l_o$), forecasting length ($l_f$), and stride length ($l_s$) which we use to generate a larger set of processed samples used for pretraining. We then define a collection of starting points, $W$, for observation windows beginning at the first possible start time for the data, and then at intervals of the stride length ($W = \{0, l_s, 2l_s, \ldots\}$). For a given time series sample $S_i$ and observation start time $t_w$, we thus have a pretraining input sample $S_{iw} = \{(t_j, v_j, f_j) \in S_i : t_w \leq t_j < t_w + l_o\}$.
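To make the windowing concrete, the following is a minimal Python sketch of carving pretraining samples $S_{iw}$ out of a single unlabeled series; the function and variable names are ours and not from the paper, and the handling of edge cases (e.g., skipping empty windows) is one reasonable choice among several.

```python
# Minimal sketch (names are ours, not the paper's) of extracting
# pretraining windows S_iw from one series S_i of (t, v, f) triplets.

def make_pretraining_windows(S_i, l_o, l_s, t_min, t_max):
    """Return (t_w, S_iw) pairs, one per observation-window start t_w.

    S_i: list of (time, value, feature) triplets, sorted by time;
    l_o: observation-window length; l_s: stride between window starts.
    """
    windows = []
    t_w = t_min
    while t_w + l_o <= t_max:
        # Keep only the events falling inside [t_w, t_w + l_o).
        S_iw = [(t, v, f) for (t, v, f) in S_i if t_w <= t < t_w + l_o]
        if S_iw:  # we choose to skip empty windows
            windows.append((t_w, S_iw))
        t_w += l_s
    return windows

# Toy usage: three features observed at irregular times.
S_i = [(0.5, 1.2, 0), (3.0, 0.7, 2), (7.5, -0.3, 1), (11.0, 2.1, 0)]
print(make_pretraining_windows(S_i, l_o=6.0, l_s=4.0, t_min=0.0, t_max=12.0))
```

In a full pipeline, the same window boundaries would also define the follow-up window of length $l_f$ from which forecasting targets are drawn.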
We note that unlike with a discretized representation for regularly sampled time series, elements in $S_{iw}$ are not regularly spaced observations from time $t_w$ to $t_w + l_o$; instead, we have a variable-length sequence of triplets depending on the true number of observations that occurred during that time period.

## 3.2 Self-Supervised Pretraining And Finetuning Tasks

Consistent with Tipirneni & Reddy (2022), we use a neural network architecture in which these triplets are passed through learned embedding layers and a transformer to generate a low-dimensional representation of the time series. Built upon this base encoder architecture, we add additional layers to encode two pretext tasks and a final target task, which we describe in the following subsections.

Forecasting pretraining. The first of our pretext tasks, which was originally proposed by Tipirneni & Reddy (2022), is a forecasting task, in which for each sample, we predict the observed value of each feature in a pre-determined follow-up window, based on the events from an observation window. We thus have a $V$-dimensional regression task. In this work, we propose incorporating augmentations to the input data, which we describe in the following subsections. Due to the irregularly sampled nature of the time series data, a feature may be observed multiple times during the follow-up window or, more commonly, not at all. Thus, for a given sample $i$ with observation window start $t_w$, our inputs are $d_i$ and $S'_{iw}$ (an augmented version of $S_{iw}$), and the goal is to predict the first observed value of each feature (if such a value exists). Thus, our masked mean squared error loss (such that predictions for missing targets are ignored) is given by:

$$\mathcal{L}_{\mathcal{F}} = \frac{1}{N_U} \sum_{i=1}^{N_U} \sum_{w \in W} \sum_{j=1}^{V} m_{iw,j} \big(\hat{z}_{iw,j} - z_{iw,j}\big)^2$$

where $\hat{z}_{iw}$ is the model's prediction vector for sample $i$ with an observation window starting at $w$, $z_{iw}$ is the true $V$-dimensional array of observed values in the follow-up window, and $m_{iw,j}$ is a mask with value 1 if feature $j$ was observed in the follow-up window for sample $i$, and 0 otherwise.

Reconstruction pretraining. A second pretext task which we propose to add, inspired by masked language modeling, is to reconstruct the original time series $S_{iw}$ from augmented inputs $S'_{iw}$. In particular, for most of our datasets, consistent with the forecasting task, we predict the values observed in the original time series. To that end, we include a decoder module which takes the contextual triplet embeddings produced by the transformer layers and generates a $seqlen$-dimensional output indicating predictions of the original value associated with each event. The loss for reconstructing time series $S_{iw}$ is given by:

$$\mathcal{L}_{\mathcal{R}} = \frac{1}{N_U} \sum_{i=1}^{N_U} \sum_{w \in W} \sum_{k=1}^{seqlen} p_{iw,k}\, c_{iw,k} \big(\hat{v}_{iw,k} - v_{iw,k}\big)^2$$

where $p_{iw} \in \{0, 1\}^{seqlen}$ is a padding mask indicating whether index $k$ represents a real observation (1) or padding (0), and $c_{iw} \in \{0, 1\}^{seqlen}$ is a reconstruction mask indicating whether the element should be reconstructed. An all-ones vector would indicate that the goal is to reconstruct the entire sequence, whereas alternative masks (which we describe in relation to augmentations below) can be used to encode the goal of reconstructing only parts of the sequence.
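As a concrete reference point, below is a minimal NumPy sketch of the two masked squared-error losses above for a single batch of windows; the array shapes and the per-batch normalization are our simplifying assumptions (the paper's losses sum over all samples and windows with a $1/N_U$ factor).

```python
import numpy as np

def forecasting_loss(z_hat, z, m):
    """Masked MSE for the V-dimensional forecasting task.

    z_hat, z: (batch, V) predicted / true first observed follow-up values.
    m:        (batch, V) 1 if feature j was observed in the follow-up
              window for that sample, 0 otherwise (missing targets ignored).
    """
    return np.sum(m * (z_hat - z) ** 2) / len(z)

def reconstruction_loss(v_hat, v, p, c):
    """Masked MSE for per-event value reconstruction.

    v_hat, v: (batch, seqlen) predicted / original event values.
    p:        (batch, seqlen) padding mask, 1 = real observation.
    c:        (batch, seqlen) reconstruction mask, 1 = reconstruct this event.
    """
    return np.sum(p * c * (v_hat - v) ** 2) / len(v)
```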
For pretraining, we jointly optimize for both pretext tasks, and allow different weightings of the tasks (e.g., Appendix Table 5), given by hyperparameters $\lambda_{\mathcal{F}}$ and $\lambda_{\mathcal{R}}$:

$$\mathcal{L}_{\mathcal{P}} = \lambda_{\mathcal{F}} \mathcal{L}_{\mathcal{F}} + \lambda_{\mathcal{R}} \mathcal{L}_{\mathcal{R}}$$

Supervised finetuning. After convergence on the jointly pretrained tasks above, we discard the pretraining-specific modules and fine-tune the model on the target task of predicting $y_i$ from $d_i$ and $S_i$. The prediction layers for this final target are built on the base encoder described above (which has been optimized for the pretext tasks). Thus, the final model is initialized with the pretrained encoder weights (followed by randomly initialized prediction layers), and during finetuning, model parameters are updated to minimize the supervised task's loss $\mathcal{L}_{\mathcal{S}}$ (e.g., cross-entropy loss for a classification task). We illustrate the full pretraining and finetuning process in Figure 2.

## 3.3 Augmentations

The goal of our self-supervised pretraining strategy is to generate robust representations that reflect natural variations in irregularly sampled data (e.g., dropped observations) as well as general variation in real-world data (e.g., measurement noise). We expect that invariance to such perturbations may lead to more effective downstream classification. To that end, we propose combining two classes of augmentations (which we illustrate in toy examples in Figure 2):

Adding noise. In particular, we augment samples by applying Gaussian noise to the continuous-value elements of the triplets: time and value. Here, $\sigma \in \mathbb{R}_{\geq 0}$ is a hyperparameter controlling the amount of Gaussian noise used to perturb the true values, and our augmented view for sample $i$, $s'_i = (t'_i, v'_i, f_i)$, is given by:

$$t'_{i,j} = t_{i,j} + \epsilon_{i,j}, \text{ where } \epsilon_{i,j} \sim N(0, \sigma^2)$$
$$v'_{i,j} = v_{i,j} + \epsilon_{i,j}, \text{ where } \epsilon_{i,j} \sim N(0, \sigma^2)$$

where each observation $j$ is independently perturbed across sample $i$'s time series. In our experiments, we also consider $\sigma := 0$, implying no augmentation to the input.

Masking. Here, because our data represent sparse and irregularly sampled time series, we apply additional augmentations to further down-sample the time series. For the masking augmentation, we consider several hyperparameters to cover a wide range of datasets and scenarios: mask rate (probability of a specific observation being masked), mask sampling strategy (probability distribution for sampling missingness sequences), which parts of the triplet to mask, and the masked values themselves. First, we consider the mask rate $r$ and mask sampling strategy, which together determine the probability distributions governing sampled binary masks. For the mask sampling strategy we consider (1) simply sampling from a Bernoulli distribution, where each observation across each time series is randomly masked with probability $r$, and (2) sampling masks that follow a geometric distribution, as proposed by Zerveas et al. (2021). In particular, in the approach described by Zerveas et al. for regularly sampled time series, state transition probabilities from masked to unmasked and vice versa follow geometric distributions, leading to longer alternating stretches of masked vs. unmasked observations. We adapt their approach by selecting time increments at which to mask, such that masks are consistently applied within a given interval (described in depth in the Appendix).
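A minimal sketch of the two mask-sampling strategies is given below; the convention that 1 means "keep" and 0 means "mask" matches the formulas that follow, while the mean masked run length and the clamping of the geometric parameter are our assumptions, not values from the paper (the paper's time-increment adaptation is described in its Appendix).

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_mask(seqlen, r):
    # Each observation is independently masked (set to 0) with probability r.
    return (rng.random(seqlen) > r).astype(int)

def geometric_mask(seqlen, r, mean_masked_len=3.0):
    # Alternating masked/unmasked runs with geometric lengths, in the
    # spirit of Zerveas et al. (2021); a fraction r is masked on average.
    # Assumes 0 < r < 1.
    mean_unmasked_len = mean_masked_len * (1 - r) / r
    mask = np.ones(seqlen, dtype=int)
    i, masked = 0, rng.random() < r
    while i < seqlen:
        mean_len = mean_masked_len if masked else mean_unmasked_len
        run = rng.geometric(min(1.0, 1.0 / mean_len))
        if masked:
            mask[i:i + run] = 0
        i += run
        masked = not masked
    return mask
```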
Given a sampled binary mask vector $m_i$ for time series $s_i$ as described above, we can then generate augmented (masked) versions of $t_i$, $v_i$, $f_i$ as follows:

$$t'_{i,j} = t_{i,j} m_{i,j} + a_t (1 - m_{i,j})$$
$$v'_{i,j} = v_{i,j} m_{i,j} + a_v (1 - m_{i,j})$$
$$f'_{i,j} = f_{i,j} m_{i,j} + a_f (1 - m_{i,j})$$

where $a = (a_t, a_v, a_f)$ is the masked value with which we replace the original values. Finally, we consider masking different portions of the triplets within each time series, so our final masked series $s'_i$ is:

$$s'_i = \big(\mathbb{1}_{t \in E}\, t'_i + \mathbb{1}_{t \notin E}\, t_i,\;\; \mathbb{1}_{v \in E}\, v'_i + \mathbb{1}_{v \notin E}\, v_i,\;\; \mathbb{1}_{f \in E}\, f'_i + \mathbb{1}_{f \notin E}\, f_i\big)$$

where $E$ is the set of elements within the triplet that will be masked. We apply the augmentations one after another (noise → masking) and note that these augmentations are used during pretraining and then optionally also during finetuning, as shown in Figure 2 and Appendix Table 5.

## 3.4 PAITS Strategy Search

We hypothesize that datasets with varying distributions and sparsity patterns will benefit from different pretraining approaches; thus, we propose using a random search strategy to explore the space of possible pretraining tasks and augmentations, which we outline in Appendix Algorithm 1. In Figure 2 and Appendix Table 5, we summarize and illustrate the pretraining and finetuning search space we considered. Finally, to identify an appropriate pretraining and finetuning strategy within each dataset, we randomly sample strategies from the search space defined in Appendix Table 5, perform pretraining and finetuning on the training sets (described in the next section), and select the strategy with the best validation performance obtained during finetuning.

## 4 Experimental Settings

## 4.1 Datasets And Pre-Processing

We apply PAITS to four datasets with sparse and irregularly sampled time series: three commonly used real-world medical datasets (with the goal of predicting in-hospital mortality), and a retail dataset (with the aim of predicting purchases). We summarize them below and in Appendix Table 4:

PhysioNet 2012: The PhysioNet Challenge 2012 dataset Silva et al. (2012) is a publicly available dataset of patients in intensive care units (ICUs). Each patient's sample includes 48 hours of time series data, from which the goal is to predict in-hospital death.

MIMIC III: The MIMIC III dataset Johnson et al. (2016) provides a large collection of medical records from Beth Israel Deaconess Medical Center ICU stays, and we similarly perform 48h mortality prediction. However, for this dataset, we have access to longer time series beyond the first 48h, which we leverage as unlabeled pretraining data.

eICU: The eICU Collaborative Research Database Pollard et al. (2018) is a large multi-center database consisting of data from ICU stays, for which the primary task is to predict mortality from the first 24h of data, although additional time series data are available for pretraining.

H&M: We use the data from the "H&M Personalized Fashion Recommendations" competition hosted on Kaggle, which consists of purchases made by H&M customers over the course of two years, from September 2018 to September 2020, with the goal of predicting a user's purchases based on past purchases. Further details about the data are provided in the Appendix.
## 4.1.1 Medical Data Preprocessing

Consistent with the approach used by Tipirneni & Reddy (2022), and described above, we represent time series data as a sequence of times, features, and values. Our labeled datasets consist of samples with time series data available for the first 24 or 48 hours in the ICU, along with binary mortality labels, which we randomly split into training, validation, and test splits (described further in the Appendix). When available, we generate a larger unlabeled dataset consisting of time series data outside of the supervised task windows, as described earlier. For stability and efficiency of neural network training, we normalize times and values (within features) to have a mean of 0 and variance of 1, and set a maximum sequence length for each dataset as the 99th percentile across samples (Appendix Table 4). Additional details may be found in the Appendix.

## 4.1.2 Retail Data Preprocessing

To evaluate our pretraining strategies on an alternative data type with even higher sparsity, we apply the same approach as for the medical datasets, with slight modifications, to the H&M dataset. We restrict our focus to customers with at least 50 purchases throughout the full two-year period (top 12.5% of customers), and items that were purchased at least 1000 times (7804 items). For our time series representations, each triplet $(t, f, v)$ in the sequence of events represents a date, price, and article ID for an item that the customer purchased.

| Dataset (approx. training dataset size: labeled / unlabeled) | Method | 10% | 20% | 50% | 100% |
|---|---|---|---|---|---|
| PhysioNet 2012 (6.4K / 6.2K) | STraTS | 0.4097±0.0279 | 0.4460±0.0128 | 0.4735±0.0132 | 0.5019±0.0046 |
| | TST | 0.2871±0.0332 | 0.3433±0.0486 | 0.4411±0.0106 | 0.4818±0.0064 |
| | TS-TCC | 0.3076±0.0222 | 0.3709±0.0368 | 0.4504±0.0076 | 0.4961±0.0077 |
| | CL (PAITS augs) | 0.3191±0.0098 | 0.3436±0.0271 | 0.4509±0.0147 | 0.4879±0.0034 |
| | No pretraining | 0.2787±0.0289 | 0.3472±0.0430 | 0.4356±0.0162 | 0.4762±0.0103 |
| | PAITS | 0.4201±0.0213 | 0.4422±0.0150 | 0.4862±0.0111 | 0.5104±0.0046 |
| MIMIC III (29K / 422K) | STraTS | 0.5170±0.0111 | 0.5469±0.0136 | 0.5801±0.0069 | 0.5872±0.0034 |
| | TST | 0.4751±0.0255 | 0.5076±0.0129 | 0.5505±0.0107 | 0.5655±0.0136 |
| | TS-TCC | 0.5342±0.0181 | 0.5487±0.0066 | 0.5652±0.0068 | 0.5768±0.0037 |
| | CL (PAITS augs) | 0.4089±0.0212 | 0.4658±0.0105 | 0.5236±0.0130 | 0.5574±0.0073 |
| | No pretraining | 0.4737±0.0176 | 0.5111±0.0136 | 0.5424±0.0093 | 0.5665±0.0037 |
| | PAITS | 0.5394±0.0177 | 0.5632±0.0083 | 0.5868±0.0094 | 0.5975±0.0088 |
| eICU (85K / 1.07M) | STraTS | 0.3288±0.0084 | 0.3356±0.0143 | 0.3528±0.0043 | 0.3639±0.0049 |
| | TST | 0.2650±0.0200 | 0.3072±0.0119 | 0.3339±0.0058 | 0.3520±0.0050 |
| | TS-TCC | 0.3116±0.0072 | 0.3294±0.0083 | 0.3427±0.0052 | 0.3610±0.0044 |
| | CL (PAITS augs) | 0.2986±0.0118 | 0.3196±0.0075 | 0.3375±0.0052 | 0.3528±0.0042 |
| | No pretraining | 0.2716±0.0175 | 0.2961±0.0182 | 0.3244±0.0071 | 0.3462±0.0041 |
| | PAITS | 0.3280±0.0199 | 0.3380±0.0078 | 0.3523±0.0073 | 0.3603±0.0113 |

Table 1: After pretraining and finetuning, we compare test AUROCs across methods for three healthcare datasets; columns show the percentage of labeled data used for finetuning. We provide additional details about datasets and test AUPRC metrics in Appendix Tables 4 and 6, respectively.
We considered our supervised task to be predicting users' purchases in September 2020 from their purchases in August 2020, but leverage the earlier months' data to generate a much larger pretraining dataset. As shown in Appendix Table 4, we note that these time series are extremely sparse: on average, there is only about one purchase in a given month.

## 4.2 PAITS Implementation Details

For our healthcare data experiments, we use the encoder architecture proposed by Tipirneni & Reddy (2022), which takes the time series and demographic features as input, and consists of the following components: (1) separate embedding modules for time, values, and features, whose three embeddings are summed to obtain a final triplet embedding for each event; (2) a standard transformer architecture Vaswani et al. (2017) to incorporate contextual information across triplet embeddings; (3) a fusion self-attention layer that generates a weighted average over the triplets' embeddings to produce a final embedding over the entire time series; and (4) a static features embedding module, whose output is concatenated with the time series embedding to obtain the final embedding (more details in the Appendix).

Built on the encoder architecture, we have three task-specific modules: (1) the forecasting module, consisting of a single dense layer used for the pretraining forecasting task; (2) a reconstruction module, consisting of three dense layers used for the reconstruction pretraining task; and finally (3) a prediction module consisting of two dense layers for the final supervised task. While the architecture is flexible, we held it constant for our PAITS strategy search experiments for simpler comparison across datasets and methods. For the strategy search, we considered the pretraining, augmentation, and finetuning settings outlined in Appendix Table 5, and sampled 100 distinct strategies in our search.

| Dataset | $(\lambda_F, \lambda_R)$ | Aug. noise $\sigma$ | Mask sampling, rate | Mask elements → values | Finetuning aug. |
|---|---|---|---|---|---|
| MIMIC III | (1, 1) | 0 | random, 0.5 | (t,f,v) → (0, 0, V+1) | None |
| PhysioNet 2012 | (1, 0) | 0.1 | geometric, 0.5 | (t,f,v) → (0, 0, V+1) | Same |
| eICU | (1, 1) | 0 | geometric, 0.8 | (t,f,v) → (-100, -100, V+1) | None |
| H&M | (1, 0) | 0.1 | random, 0.3 | (t,f,v) → (-100, -100, V+1) | Same |

Table 2: Strategies for pretraining, augmentation, and finetuning selected by PAITS across datasets.

| Methods | 10% | 20% | 50% | 100% |
|---|---|---|---|---|
| STraTS | 0.0147±0.0005 | 0.0152±0.0003 | 0.0153±0.0004 | 0.0157±0.0003 |
| TST | 0.0129±0.0004 | 0.0132±0.0002 | 0.0133±0.0001 | 0.0134±0.0002 |
| TST (random mask) | 0.0130±0.0003 | 0.0132±0.0002 | 0.0133±0.0001 | 0.0131±0.0003 |
| CL (PAITS augs) | 0.0147±0.0003 | 0.0148±0.0003 | 0.0152±0.0004 | 0.0151±0.0008 |
| No pretraining | 0.0130±0.0003 | 0.0132±0.0002 | 0.0131±0.0005 | 0.0132±0.0003 |
| PAITS | 0.0148±0.0006 | 0.0154±0.0002 | 0.0158±0.0006 | 0.0161±0.0003 |

Table 3: After pretraining and finetuning, we compare purchase prediction effectiveness (MAP@12; columns show the percentage of labeled data) across methods in the H&M dataset. In Appendix Table 7, we additionally provide test binary cross-entropy loss values.
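The search loop itself is simple; the following is a minimal sketch of the PAITS random search, where the search-space dictionary is an illustrative abbreviation of Appendix Table 5 and the function names are ours.

```python
import random

# Illustrative, abbreviated search space (see Appendix Table 5 for the
# full one used in the paper).
SEARCH_SPACE = {
    "lambda_f": [0, 1],                 # forecasting loss weight
    "lambda_r": [0, 1],                 # reconstruction loss weight
    "noise_sigma": [0.0, 0.1, 0.5],
    "mask_sampling": ["random", "geometric"],
    "mask_rate": [0.3, 0.5, 0.8],
    "finetune_aug": ["none", "same"],
}

def paits_search(pretrain_and_finetune, n_trials=100, seed=0):
    """pretrain_and_finetune(strategy) -> validation score (higher is better)."""
    rng = random.Random(seed)
    best_score, best_strategy = float("-inf"), None
    for _ in range(n_trials):
        strategy = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = pretrain_and_finetune(strategy)  # user-supplied training run
        if score > best_score:
            best_score, best_strategy = score, strategy
    return best_strategy, best_score
```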
## 4.3 Baselines

We compare our approach to related methods for time series pretraining, and provide details in the Appendix:

- **STraTS**: A related approach developed for irregularly sampled time series data, with the same data representation and base architecture Tipirneni & Reddy (2022). STraTS represents one strategy within our search space: forecasting alone ($\lambda_R = 0$) and no augmentations.
- **TST**: This reconstruction pretraining method was developed for regularly sampled time series Zerveas et al. (2021), and we modify their approach for our data representation. This approach is similar to masked value reconstruction alone ($\lambda_F = 0$) with geometric masking.
- **TS-TCC**: A contrastive learning-based pretraining approach Eldele et al. (2021) which we have adapted to our data representation. TS-TCC learns joint forecasting and contrastive tasks, alongside augmentations including scaling, noise, and permuting time blocks.
- **Contrastive learning with PAITS augmentations**: Here, we consider the same set of augmentations used in PAITS; however, we replace the reconstruction and forecasting tasks with a contrastive task (i.e., InfoNCE loss Oord et al. (2018)) for the strategy search.
- **No pretraining**: Random initialization, to show the minimum performance expected without any pretraining.

## 5 Experimental Results And Discussion

In this section, we evaluate PAITS alongside alternative pretraining approaches across multiple datasets and goals.

## 5.0.1 ICU Mortality Prediction

As described above, our ultimate goal for the healthcare datasets is mortality prediction based on a patient's initial stay in the ICU. Separately for each dataset, we apply our strategy search approach and sample 100 strategies for pretraining and finetuning as outlined in Appendix Table 5. Within each run, pretraining is done with a larger unlabeled dataset containing data collected from the first five days in the ICU. After convergence, finetuning is applied to only the labeled samples from the initial 24h or 48h window. We select the strategy for which the performance is best on the validation set. In Table 1, we report test results of PAITS and related methods. To demonstrate the effectiveness of our approach when labeled datasets are smaller, which is often the motivation for leveraging self-supervised pretraining, we provide test metrics as we vary the amount of labeled data available during finetuning (while keeping pretraining constant).

As shown in Table 1, PAITS systematically finds combinations of pretraining tasks and augmentations, improving prediction accuracy compared to previous methods. We also note that differences among methods tend to be more pronounced as labeled dataset sizes decrease, highlighting the advantage of more robust pretraining in the smaller data regime. Furthermore, we note that the relative performance of baseline methods varies across datasets, highlighting the need for a systematic search among candidate pretraining approaches. Indeed, in Table 2, we show that different strategies were selected for each dataset, possibly due to underlying differences in their distributions. One notable observation was that when comparing the two pretext tasks we considered in our strategy search, forecasting tended to provide more gain than reconstruction. In fact, for PhysioNet 2012, where there were no additional unlabeled samples available for pretraining (i.e., we pretrain and finetune with the same samples), we found that forecasting alone was selected by PAITS.
This may indicate that the reconstruction task relies on larger sets of unlabeled data to provide robust improvements to the pretrained representations, and thus was more helpful in the larger MIMIC III and eICU datasets.

## 5.0.2 Retail Purchase Prediction

Next, to evaluate our approach in a different domain, we consider the problem of forecasting a user's purchases based on their prior purchases using the H&M dataset described above. We leverage the PAITS method in largely the same way as with the healthcare data, with some modifications for the specific retail forecasting task: (1) rather than binary classification (mortality), our supervised task is multilabel classification (i.e., for each article, whether or not it was purchased in the prediction window), and (2) for both the reconstruction and forecasting pretext tasks, we consider the feature (i.e., purchased item) as the key element of the triplet, and thus we use a binary cross-entropy loss for each article (rather than the mean squared error used when predicting values in the healthcare datasets). As described in Methods, we leverage the large set of previous months' purchases for pretraining, and set a final goal of predicting September 2020 purchases from August 2020 purchases. Consistent with the evaluation metric in the Kaggle competition, we use a MAP@12 metric (but over the course of one month) to evaluate the relative rankings of predicted articles purchased.

As shown in Table 3, PAITS is able to identify a pretraining and finetuning strategy that most effectively predicts purchased items in the following month. Interestingly, similarly to the PhysioNet 2012 dataset above, we also found here that the forecasting task without reconstruction was selected by our search. This could be due to the forecasting pretraining task being identical to the supervised task in this regime, which may highlight the importance of alignment between pretraining and finetuning tasks. However, the additional augmentations selected by PAITS lead to improvements over the non-augmented forecasting introduced by STraTS, highlighting the utility of considering a range of tasks and augmentations.

## 6 Conclusion

In this work, we present PAITS, a systematic approach for identifying appropriate pretraining and finetuning strategies for sparse and irregularly sampled time series data. We found that different datasets do indeed benefit from different combinations of pretext tasks, alongside different augmentation types and strengths, even when downstream tasks are similar. Thus, a fixed strategy may not always be relied on for consistent gains during pretraining. Furthermore, we found that the use of NLP-inspired pretext tasks for the sequence-based representation of time series data was more effective than a contrastive learning pretext task, which has been more effective in the context of dense and regularly sampled time series data. While PAITS was developed with the goal of improving pretraining for irregularly sampled time series, such a framework could be similarly applied to dense and regularly sampled time series; however, future work will be needed to assess whether similar gains are seen in that regime. Finally, while our strategy search was restricted to a limited set of pretraining tasks and augmentations, the approach could be arbitrarily extended to include a wider variety of tasks and augmentations as they are developed, which may open the door to even greater gains in prediction for irregularly sampled time series data.
## References

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, pp. 1597–1607. PMLR, 2020.

Ekin Dogus Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation policies from data. 2019. URL https://arxiv.org/pdf/1805.09501.pdf.

Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. RandAugment: Practical automated data augmentation with a reduced search space. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 18613–18624. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/d85b63ef0ccb114d0a3bb7b7d808028f-Paper.pdf.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, C. Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. In *International Joint Conference on Artificial Intelligence*, 2021.

Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. *Nature*, 542(7639):115–118, 2017.

Shota Harada, Hideaki Hayashi, and Seiichi Uchida. Biosignal data augmentation based on generative adversarial networks. volume 2018, pp. 368–371, 07 2018. doi: 10.1109/EMBC.2018.8512396.

Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. Multitask learning and benchmarking with clinical time series data. *Scientific Data*, 6(1):96, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016. doi: 10.1109/CVPR.2016.90.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9729–9738, 2020.

Max Horn, Michael Moor, Christian Bock, Bastian Rieck, and Karsten Borgwardt. Set functions for time series. In *International Conference on Machine Learning*, pp. 4353–4363. PMLR, 2020.

Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. MIMIC-III, a freely accessible critical care database. *Scientific Data*, 3(1):1–9, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (eds.), *Advances in Neural Information Processing Systems*, volume 25. Curran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.

Huan Lou, Zong feng Qi, and Jian xun Li. One-dimensional data augmentation using a Wasserstein generative adversarial network with supervised signal. *2018 Chinese Control And Decision Conference (CCDC)*, pp. 1896–1901, 2018. URL https://api.semanticscholar.org/CorpusID:49654696.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding.
*arXiv preprint arXiv:1807.03748*, 2018.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pp. 2227–2237, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1202. URL https://aclanthology.org/N18-1202.

Tom J Pollard, Alistair EW Johnson, Jesse D Raffa, Leo A Celi, Roger G Mark, and Omar Badawi. The eICU collaborative research database, a freely available multi-center database for critical care research. *Scientific Data*, 5(1):1–13, 2018.

Alec Radford and Karthik Narasimhan. Improving language understanding by generative pre-training. 2018.

Satya Narayan Shukla and Benjamin M Marlin. A survey on principles, models and methods for learning from irregularly sampled time series: From discretization to attention and invariance. *ArXiv*, abs/2012.00168, 2020. URL https://api.semanticscholar.org/CorpusID:227239385.

Ikaro Silva, George Moody, Daniel J Scott, Leo A Celi, and Roger G Mark. Predicting in-hospital mortality of ICU patients: The PhysioNet/Computing in Cardiology Challenge 2012. In *2012 Computing in Cardiology*, pp. 245–248. IEEE, 2012.

Sindhu Tipirneni and Chandan K. Reddy. Self-supervised transformer for sparse and irregularly sampled multivariate clinical time-series. *ACM Trans. Knowl. Discov. Data*, 16(6), jul 2022. ISSN 1556-4681. doi: 10.1145/3516367. URL https://doi.org/10.1145/3516367.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017.

Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. TS2Vec: Towards universal representation of time series. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 8980–8987, 2022.

George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, and Carsten Eickhoff. A transformer-based framework for multivariate time series representation learning. In *Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining*, KDD '21, pp. 2114–2124, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383325. doi: 10.1145/3447548.3467401. URL https://doi.org/10.1145/3447548.3467401.

Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, and Marinka Zitnik. Self-supervised contrastive pre-training for time series via time-frequency consistency. *Advances in Neural Information Processing Systems*, 35:3988–4003, 2022.
Review 1:

Summary: This paper proposes a framework for identifying suitable pretraining strategies for sparse and irregularly sampled time series datasets. Specifically, the proposed PAITS method combines forecasting and reconstruction training objectives with multiple data augmentation options, and employs a random search approach to find the most suitable design of the pretraining and finetuning strategy given the dataset.

Strengths and Weaknesses:

## Strength
1. This paper focuses on an important problem of pretraining models for sparse and irregularly sampled time series. The paper provides a clear introduction to the importance of such a data type and the difficulty it induces for pretraining.
2. The paper identifies the impact of different training objectives and data augmentation strategies on the final outcome of the pretraining, and shows that this impact differs across datasets. The idea of searching through multiple combinations is helpful in finding the best setting.

## Weakness
1. The pretraining method description is not clear; from the current formulation it is hard to tell whether the pretraining is done by using all the observed features to forecast or reconstruct unseen features, or whether only the selected feature is used to predict itself. A figure illustrating the input and target of the pretraining objective would be helpful.
2. Performance-wise, it seems that the performance improvement brought by the proposed method diminishes with a larger unlabeled dataset, such as on the eICU dataset. Further analysis is needed on whether this limitation is caused by the proposed pretraining objective or by the difficulty of finding an optimal option with random search for a larger dataset.
3. The impact of the random search on the result is not well-studied. It is unclear what the cost of the search is, e.g., how many trials are needed. It is also unclear what the impact of different options is on the final performance. Further ablation study is needed to verify whether the random search is finding an optimal result.

Requested Changes: More discussion is needed on the impact of the random search: What is the variance between the performance of different choices? How many trials are needed to find the optimal result? Is this random selection scalable to a larger search space?

Broader Impact Concerns: No broader impact concern.

==================================================

Review 2:

Summary: The paper introduces PAITS (Pretraining and Augmentation for Irregularly-sampled Time Series), a framework designed to optimize pretraining strategies for irregularly sampled and sparse time series data. The techniques are mainly inspired by the pretraining methods widely used in NLP. Some empirical studies have been performed to support their claims.

Strengths and Weaknesses:

## Strengths
1. I am not an expert in this area. However, it is interesting to see the application of techniques that originated in NLP tasks to other time-series tasks.

## Weakness
1. The use of a random search algorithm, while effective, might introduce significant computational overhead, especially with very large datasets, which could limit its applicability in resource-constrained environments. I think the authors could provide more details on the algorithm's complexity and the computational cost in the presented tasks.
2. While there is a list of techniques proposed, the lack of ablation studies makes it difficult to see their effectiveness.
Besides, there is no clear guidance on how to apply them according to the properties of the time series data.
3. Many important details should have been included in the main text instead of the appendix.
4. There are also some obvious formatting problems and typos.

Requested Changes:
1. [Critical] Add necessary discussion of the complexity of the random search algorithm and the computational cost in the presented tasks.
2. [Critical] Add necessary ablation studies to provide a clearer view of the effectiveness of the proposed techniques.
3. [Minor] Move important details from the appendix to the main text and fix formatting problems and typos.

Broader Impact Concerns: No broader impact concerns.

==================================================

Review 3:

Summary: This work focuses on pretraining strategies for irregularly sampled and sparse time-series data, a challenging problem in the time-series analysis field. This work is motivated by the fact that existing pretraining and augmentation strategies do not necessarily focus on irregular time series. This work proposes a new method called PAITS that searches for the best combination of pretraining and augmentation techniques for self-supervised pretraining on time-series data.

Strengths and Weaknesses:

Strengths:
- This work is motivated by irregular time-series data, which is an interesting area of work, as irregular sampling is a common challenge in real-world time-series applications.
- This work goes beyond simply proposing a one-size-fits-all solution, accommodating the diversity in time-series datasets.
- This work conducts a diverse empirical analysis to showcase the performance of PAITS.

Weaknesses:
- This work's main motivation is pretraining approaches for irregular time-series data. However, the proposed algorithm does not consider the sample irregularity in its search algorithm. Or at least it is unclear how irregularly sampled data is approached in this method. The suggested pretext pretraining is designed to be a general method that can work on irregular time-series datasets, and not to consider the limitations of irregular samples [R1]. The motivation for this purpose should be clarified.
- In addition to the previous point, this work does not consider the synchronization challenge of the observation time-steps of irregular time series. There is no specific reason why PAITS is motivated specifically for irregular time-series data.
- There is a marginal empirical improvement shown in the conducted experiments, especially in Table 1. According to the motivation of this paper, one would expect better results in the low-label regime. The impact of PAITS should be further investigated.
- PAITS is described as a random search process that picks the right algorithms for pretraining/augmentation based on validation performance. This is a naive approach, as the algorithms considered in pretraining/augmentation are not novel or designed for irregular time series. A more in-depth analysis should be provided to correlate the characteristics of each dataset with the combination of algorithms chosen to pretrain the model on it. There is a lack of theoretical or empirical insight into the choice of the right pretraining strategy.

[R1]: Chen, Yuqi, et al. "ContiFormer: Continuous-time transformer for irregular time series modeling." Advances in Neural Information Processing Systems 36 (2024).

Requested Changes: See the review comments.

Broader Impact Concerns: N/A

==================================================
# Active & Passive Causal Inference: Introduction

Anonymous authors
Paper under double-blind review

## Abstract

This paper serves as a starting point for machine learning researchers, engineers and students who are interested in but not yet familiar with causal inference. We start by laying out an important set of assumptions that are collectively needed for causal identification, such as exchangeability, positivity, consistency and the absence of interference. From these assumptions, we build out a set of important causal inference techniques, which we do by categorizing them into two buckets: active and passive approaches. We describe and discuss randomized controlled trials and bandit-based approaches from the active category. We then describe classical approaches, such as matching and inverse probability weighting, in the passive category, followed by more recent deep learning based algorithms. By closing the paper with a discussion of aspects of causal inference not covered here, such as collider biases, we expect this paper to provide readers with a diverse set of starting points for further reading and research in causal inference and discovery.

## 1 Introduction

Human curiosity and the desire to understand how things work lead to the study of causality (Kidd & Hayden, 2015; Zheng et al., 2020; Ferraro et al., 2019; Bender, 2020). Causality is about discovering the causal relationship between variables. Causality delineates an asymmetric relationship, where it can only say *"A causes B"* or *"B causes A"*, while correlation expresses a symmetric relationship that measures the co-occurrence of A and B. Both notions extend to more than two variables. Being able to depict causal relationships makes causality an ideal framework for humans to explain how a system works.

An important concept in causality, and one we are particularly interested in, is the causal effect. It refers to the impact of a choice of action on an outcome. For example, what is the impact of receiving a COVID-19 vaccine on catching COVID-19? In order to know the effect of COVID-19 vaccination, we must be able to predict the outcomes of both taking and not taking the vaccine, respectively. That is, we must know the potential outcome of taking an arbitrary action. We call this process of inferring the potential outcome given an action causal inference (CI) (Rubin, 1974).

Correlation does not imply causation, and at the same time, it is possible to have causation with no correlation. CI is tricky like that, because observation is not the same as intervention. The former is about passively observing what happens, while the latter is about observing what happens when one actively intervenes in the world. Taking the CI approach allows us to distinguish between correlation and causation by clearly defining the two probabilities. To infer causal effects we must measure intervention probabilities. The intervention probability is the probability of a particular outcome resulting from our intervention in the system by imposing a specific action. This is different from conditioning, as we actively alter the action. We often have access to conditional and joint probabilities, but not intervention probabilities directly. It has thus been a major research topic in many disciplines to infer the intervention probability from conditional and joint probabilities (Rubin, 1974; Berry & Fristedt, 1985; Heckman et al., 1997; Weitzen et al., 2004; Hernán & Robins, 2006; Breslow et al., 2009; Graepel et al., 2010; Abhijit V.
& Esther, 2012; Yazdani & Boerwinkle, 2015; Bouneffouf et al., 2020).

There are two main frameworks for estimating causal effects: Rubin (1974)'s potential outcome framework and Pearl (2009)'s do-calculus framework. The potential outcome framework focuses on the outcomes that would have been observed under different treatment conditions. Do-calculus revolves around a set of formal rules for reasoning about interventions in a causal model. This introductory paper diverges from conventional teaching methods in causal inference by combining both Rubin's framework of potential outcomes and Judea Pearl's framework of do-calculus. We adopt a holistic approach, drawing upon concepts from both paradigms to construct a foundational understanding of causal inference rooted in first principles. Our choice to introduce potential outcomes initially stems from its intuitive appeal, particularly when illustrating treatment effects through familiar examples from domains such as medicine or the sciences. However, as we delve deeper into the formalization of causal models, the incorporation of intervention probabilities becomes essential, necessitating a shift towards joint and conditional probability distributions. By incorporating aspects of both frameworks, we aim to present a unified perspective on causal inference that facilitates a smoother transition between the intuitive conceptualization of potential outcomes and the more formalized treatment of intervention probabilities.

A causal graph is a graphical representation of causal relationships among variables in a system. It visually depicts how variables influence each other, helping us to understand and analyze the causal structure of a phenomenon. Figure 1 shows the graph representation of causal relationships for a variety of CI methods. Depending on the data collection process and experimental setup, certain methods, such as RCTs and the difference-in-differences method, do not require knowing the causal graph a priori, while other methods require knowing the structure of the causal graph.

In this paper, we consider a problem setup in which we have a covariate X, an action A, and an outcome Y. The action A is a variable of interest and has a causal effect on the outcome. The outcome Y is a variable that is affected by the treatment and is what we often want to maximize. Covariates X are all the other variables that may affect and be affected by the action A and the outcome Y. We are particularly interested in the case where these covariates are confounders, i.e., affect the action and the outcome together. We measure how treatments affect outcomes by looking at both the average effect across groups and how each person's treatment affects them personally.

There is an enormous body of work on various assumptions and conditions that allow us to infer causal effects (Rubin, 1974). The most fundamental assumptions are i) exchangeability, ii) positivity, iii) consistency, and iv) no interference. These assumptions must be satisfied at the time of data collection rather than at the time of causal inference. When these assumptions are met, we can then convert statistical quantities, estimated from collected data, into causal quantities, including the causal effect of the action on the outcome (Hernán et al., 2019; Musci & Stuart, 2019; Zheng & Kleinberg, 2019). One way to satisfy all of these assumptions is to collect data actively by observing the outcome after randomly assigning an action independent of the covariate.
Such an approach, often referred to as a randomized controlled trial (RCT), is used in clinical trials, where patients are assigned randomly to an actual treatment or a placebo (Chalmers et al., 1981; Kendall, 2003; P. M. et al., 2021). An RCT is deliberately designed to prevent confounding between the treatment and the outcome, so that the conditional probabilities estimated from collected data approximate the intervention probabilities as well.

Randomized data collection is not always feasible and often suffers in efficiency when running large-scale experiments. There has been an enormous amount of work from various disciplines on estimating causal effects without such randomized data collection (Rubin, 1977; 1979; Chalmers et al., 1981; Lu & Rosenbaum, 2004b). As alternatives, different approaches have been proposed, including figuring out how to work with non-randomized datasets and finding more efficient ways to collect data than the fully randomized approach. In this paper, we organize these CI methods into passive and active learning categories. In the passive CI category are methods that work *given* a dataset which was *passively* collected by experts. In contrast, the active CI category includes methods that may actively intervene in the data collection process. RCT, for instance, belongs to the active CI category, as it actively collects data by randomizing the treatment assignment. There are, however, other methods in the same category that aim also to maximize the outcome by making a trade-off between exploration and exploitation.

The organization of this literature review paper is as follows. In §2, we introduce the definitions and metrics for estimating causal effects and discuss in depth the assumptions necessary for the identification of causal effects. We then cover the naive conditional mean estimator and the ordinary least squares estimator, both of which are widely used with randomized datasets (Rubin, 1974; Pearl, 2010). In this paper, we do not consider collider bias, and we assume a stationary conditional probability distribution.

In §3, we describe RCTs and move on to bandit approaches in the active CI category. While bandits are used in many practical applications, the research community has been placing more emphasis on theoretical analysis of minimizing the regret bounds of different policy algorithms (Berry & Fristedt, 1985; Langford & Zhang, 2007). We look at bandits through the lens of CI, where many of the bandit algorithms can be seen as learning the classic causal graph in Figure 2b with different exploration and exploitation rates. We examine different constrained contextual bandit problems that correspond to different causal graphs, respectively. We also compare passive CI learning methods to bandits on naive causal graphs. We furthermore review causal bandits, which consider graphs with unknown confounding variables (Bareinboim et al., 2015; Lattimore et al., 2016; Sachidananda & Brunskill, 2017). In this survey, we limit our scope to bandits and do not consider causal reinforcement learning, which we leave for the future.

In §4, we start with classical approaches in the passive CI category, such as matching (Rubin & Thomas, 1992; Gu & Rosenbaum, 1993), inverse probability weighting (Rosenbaum & Rubin, 1983; Rubin & Thomas, 1992; Hirano et al., 2003) and doubly robust methods (Heejung & James M., 2005; Shardell et al., 2014; Seaman & Vansteelandt, 2018).
We then discuss deep learning based CI methods (Zhihong & Manabu, 2012; Pearl, 2015; Johansson et al., 2016; Wang et al., 2016; Louizos et al., 2017a). Deep learning is particularly useful when we need to conduct causal inference on high-dimensional data with a very complicated mapping from input to output, as deep neural networks can learn a compact representation of the action as well as the covariate that captures the intrinsic and semantic similarities underlying the data (Kingma & Welling, 2014; Danilo Jimenez & Shakir, 2014). Deep learning is applied to CI in order to infer causal effects by learning the hidden/unknown confounder representations from complicated data and causal graph relationships. Such a capability of learning a compact representation from a high-dimensional input allows it to work with challenging problems such as those involving raw medical images and complex treatments (Castro et al., 2020; jiwoong Im et al., 2021; Puli et al., 2022; van Amsterdam et al., 2022).

CI is an important topic in various disciplines, including statistics, epidemiology, economics, and social sciences, and is receiving an increasingly higher level of interest from machine learning and natural language processing due to the recent advances and interest in large-scale language models and, more generally, generative artificial intelligence. In this paper, we cover various CI algorithms and categorize them into active and passive CI families. The goal of this paper is to serve as a concise and readily-available resource for those who are just starting to grow their interest in causal inference.

Figure 1: Examples of passive and active causal inference methods. Dark gray nodes correspond to observed variables while light gray nodes correspond to latent variables. A square node corresponds to a deterministic variable while a circle corresponds to stochastic variables.

Figure 2: Condition versus intervention ((b) Causal Graph).

## 2 Background

## 2.1 Preliminary

Let X, A and Y be the *covariate*, *action*, and *outcome* variables, respectively. We define the *joint*, *conditional* and *intervention* probabilities as

$$\text{Joint: } p(Y=y, A=a, X=x) = p(X=x)\, p(A=a|X=x)\, p(Y=y|A=a, X=x),$$

$$\text{Conditional: } p(Y=y|A=a) = \frac{\sum_{x} p(X=x)\, p(A=a|X=x)\, p(Y=y|A=a, X=x)}{\sum_{x', y'} p(X=x')\, p(A=a|X=x')\, p(Y=y'|A=a, X=x')}, \text{ and}$$

$$\text{Intervention: } p(Y=y|do(A=a)) = \sum_{x} p(X=x)\, p(Y=y|A=a, X=x),$$

respectively. We marginalize the covariate variable to obtain the conditional and intervention probabilities. We observe that the conditional probability is often different from the intervention probability, $p(Y|A=a) \neq p(Y|do(A=a))$. The conditional probability takes into account the prevalence of a particular action $a$ in the population and checks how often a particular outcome $y$ is associated with it. On the other hand, the intervention probability does not consider the prevalence of the action $a$ and only considers what the outcome would be had the action been forced to be $a$. The distribution $p(Y|A=a, X=x)$ tells us the effect of A on Y given X = x, since A = a is assigned directly. $p(Y|do(A=a))$ is the marginal distribution of $p(Y|A=a, X=x)$ over X, where A = a is directly assigned. As soon as one incorporates $p(A=a|X)$ or $p(A=a)$, it is not an intervention but a joint.
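To make the distinction concrete, here is a toy numeric sketch (all probabilities are made up) in which X confounds A and Y, so that the conditional and intervention probabilities of the outcome differ.

```python
# Toy check that p(Y=1|A=a) and p(Y=1|do(A=a)) differ under confounding.
p_x = {0: 0.5, 1: 0.5}                      # p(X = x)
p_a1_given_x = {0: 0.9, 1: 0.1}             # p(A = 1 | X = x)
p_y1_given_ax = {(0, 0): 0.2, (0, 1): 0.4,  # p(Y = 1 | A = a, X = x)
                 (1, 0): 0.5, (1, 1): 0.7}

def p_a_given_x(a, x):
    return p_a1_given_x[x] if a == 1 else 1 - p_a1_given_x[x]

def conditional(a):
    # p(Y=1|A=a) = sum_x p(x) p(a|x) p(Y=1|a,x) / p(A=a)
    num = sum(p_x[x] * p_a_given_x(a, x) * p_y1_given_ax[(a, x)] for x in p_x)
    den = sum(p_x[x] * p_a_given_x(a, x) for x in p_x)
    return num / den

def intervention(a):
    # p(Y=1|do(A=a)) = sum_x p(x) p(Y=1|a,x)
    return sum(p_x[x] * p_y1_given_ax[(a, x)] for x in p_x)

print(conditional(1), intervention(1))  # 0.52 vs. 0.6: they differ
```

Because the treated units (A = 1) here are drawn mostly from the X = 0 sub-population, conditioning mixes in the covariate distribution of the treated, while the do-expression averages over the population distribution of X.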
See Figure 2a for a graphical illustration. Intervening on A can be understood as removing all the incoming edges to the action variable, i.e. X → A, as in Figure 2b. The intervention probability formula above reflects this by ignoring the probability of a particular action. The intervention probability p(Y|do(A)) describes the causal effect of the action A. The corresponding causal graph A ⇒ Y is a graphical representation used in causal inference to depict the causal relationships between variables in a system (see Figure 2b, right).

Consider the following example that shows how intervention and conditional probabilities are sometimes different and sometimes the same, given two variables.

Example 1. *Let C indicate whether coffee is hot or cold and T be the thermometer reading that measures the temperature of the coffee. Based on our everyday observation, T and C are highly correlated. For instance, p(C = hot|T = 70°) and p(T = 70°|C = hot) are both high. However, it is clear that forcing the thermometer's reading to be high does not cause the temperature of the coffee to go up; that is, p(C = hot|do(T = 70°)) is low despite high p(C = hot|T = 70°). On the other hand, boiling the coffee would indeed cause the thermometer's reading to go up; that is, both p(T = 70°|C = hot) and p(T = 70°|do(C = hot)) are high.*

Another example, demonstrating the discrepancy between the intervention and conditional probabilities, is Simpson's paradox, in which different assumptions about outcome, treatment, and confounder variables lead to different conclusions on causation:

Example 2 (Simpson's paradox illustration (Carlson, 2019)). *You are studying sex bias in graduate school admission. According to the data, men were more likely to be admitted to graduate school than women: 40% of male applicants and 25% of female applicants were admitted. In other words, there was a strong association between being a man and being admitted. You however found that this association varies across sub-populations. For example, 80% of men and 46% of women who applied to natural science were admitted, while 4% of men and 20% of women who applied to social science were admitted. It turns out that the social science department has a much lower acceptance rate than the natural science department, while women were more likely to apply to social science and men were more likely to apply to natural science. In summary, we can derive different conclusions about the association between sex and admission rate by either combining or separating sub-populations. Notice how adding the department to the covariate variable or not changes the result. If the department is not part of the covariate, then the conditional and intervention probabilities coincide with each other, since there is no confounding via the department choice. Otherwise, these two probabilities deviate from each other due to p(S = s|X = x). This demonstrates that CI is inherently dependent upon our modelling assumptions.*

A *potential outcome* is an outcome given a covariate under a potential action (Rubin, 1974; 2005). Assuming a binary outcome variable Y ∈ {0, 1} and a discrete action variable A ∈ A, the potential outcome (Rubin, 1974) is defined as

$$Y_X(a) = Y|do(A = a).$$

For instance, there are two potential outcomes, Y_X(1) and Y_X(0), for a binary action A ∈ {0, 1}.
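Example 2 can be replayed numerically. The per-department admission rates below follow the example, while the application counts are hypothetical values we reverse-engineered so that the aggregate rates land near the quoted 40% and 25%. Holding the department mix fixed, a crude stand-in for intervening on the confounder, nearly erases the aggregate gap.

```python
# Admission rates per (sex, department) from Example 2.
rate = {("man", "nat"): 0.80, ("woman", "nat"): 0.46,
        ("man", "soc"): 0.04, ("woman", "soc"): 0.20}
# Hypothetical application counts, reverse-engineered so the aggregates
# come out near the quoted 40% (men) and 25% (women).
n = {("man", "nat"): 474, ("man", "soc"): 526,
     ("woman", "nat"): 192, ("woman", "soc"): 808}

for sex in ("man", "woman"):
    admitted = sum(n[sex, d] * rate[sex, d] for d in ("nat", "soc"))
    total = sum(n[sex, d] for d in ("nat", "soc"))
    print(f"aggregate p(admit | {sex}) = {admitted / total:.2f}")

# Hold the department mix fixed at the overall applicant distribution,
# mimicking the removal of confounding via department choice.
total_apps = sum(n.values())
p_dept = {d: sum(n[s, d] for s in ("man", "woman")) / total_apps
          for d in ("nat", "soc")}
for sex in ("man", "woman"):
    adjusted = sum(p_dept[d] * rate[sex, d] for d in ("nat", "soc"))
    print(f"department-adjusted rate for {sex} = {adjusted:.2f}")
```

With these hypothetical counts, the aggregate rates are 0.40 versus 0.25, while the department-adjusted rates are both approximately 0.29: most of the apparent gap was confounding through department choice.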
It is often impossible to compute these potential outcomes directly for a fixed covariate due to the fundamental problem of causal inference; that is, we cannot perform two distinct actions for a given covariate X simultaneously and observe both outcomes (Saito & Yasui, 2019). We can only observe the outcome of the one action that has been performed, to which we refer as the *factual outcome*, but cannot observe that of the other action, to which we refer as the *counterfactual outcome*. We instead consider the potential outcome averaged over the entire population, represented by the covariate distribution p(X). We call it the *expected potential outcome*, E_{Y,X}[Y_X(A = a)], in which we marginalize out the covariate X, as opposed to the *conditional potential outcome*. The goal of causal inference is to estimate the potential outcomes that each individual would have experienced under different treatment conditions, as well as the average or expected outcomes across the population, based on observed data $(y_i|do(A=a_i), x_i, a_i) \overset{iid}{\sim} p$, where p is the data distribution and a_i ∈ A is the action performed for the i-th covariate x_i.

We often hear about the *treatment effect* of an experimental drug or a new surgical procedure in the medical literature. The treatment effect measures whether and how much the treatment caused the difference in the outcome (or in the potential outcome). The treatment effect is most often discussed for binary actions in medical research. Unless it causes confusion, we will use treatment effect and causal effect interchangeably throughout this paper.

In general, the treatment effect is defined as the difference between two potential outcomes, Y_X(1) − Y_X(0). The *average treatment effect* (ATE) is then the difference between the potential outcomes averaged over the covariate distribution (Rubin, 1974; Imbens, 2004):

$$ATE := \mathbb{E}_{X,Y}[Y_X(1) - Y_X(0)] = \mathbb{E}_{X,Y}[Y_X(1)] - \mathbb{E}_{X,Y}[Y_X(0)]. \tag{1}$$

We may be interested in the average treatment effect over a subpopulation, defined by a subset X′ ⊆ X of covariates. We then compute the *conditional average treatment effect* (CATE), which averages the treatment effect for an individual characterized by X′ (Radcliffe, 2007; Athey et al., 2015):

$$CATE(x') := \mathbb{E}_{Y,X\setminus X'}[Y_X(1) - Y_X(0)|X'=x'] = \mathbb{E}_{Y,X\setminus X'}[Y_X(1)|X'=x'] - \mathbb{E}_{Y,X\setminus X'}[Y_X(0)|X'=x'], \tag{2}$$

where X\X′ is the remainder of the covariate over which we compute the expectation. There are a few alternatives to the ATE. The first one is the *expected precision in the estimation of heterogeneous effect* (PEHE) (Imbens, 2004):

$$PEHE := \mathbb{E}_{X,Y}[(Y_X(1) - Y_X(0))^2].$$

Another alternative is the *population average treatment effect for the treated* (PATT), i.e. a = 1 (Rubin, 1977; Heckman & Robb, 1985):

$$PATT(a) := \mathbb{E}_{X,Y|A=a}[Y_X(1) - Y_X(0)],$$

which is a helpful quantity when a particular, treated sub-population is more relevant, as in the context of narrowly targeted experiments. All of these metrics have their own places. For instance, it is more common to see PEHE in medical research, while PATT can be used to study the effect of programs on the treated group (e.g. individuals disadvantaged in the labour market (Heckman & Robb, 1985)).
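In a simulation, unlike in observed data, we can generate both potential outcomes for every unit and evaluate the definitions above directly. The following sketch uses a synthetic data-generating process of our own choosing to compute finite-sample versions of Equations 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: both potential outcomes are visible only because
# we simulate them; observed data would reveal one of the two per unit.
x = rng.binomial(1, 0.4, size=n)             # binary covariate, p(X=1) = 0.4
y0 = rng.binomial(1, 0.2 + 0.1 * x)          # Y_X(0)
y1 = rng.binomial(1, 0.5 + 0.3 * x)          # Y_X(1)

ate = np.mean(y1 - y0)                       # Equation 1
cate0 = np.mean(y1[x == 0] - y0[x == 0])     # Equation 2 with X' = {x = 0}
cate1 = np.mean(y1[x == 1] - y0[x == 1])

print(f"ATE ~ {ate:.3f}  (truth: 0.3 + 0.2 * 0.4 = 0.38)")
print(f"CATE(x=0) ~ {cate0:.3f}, CATE(x=1) ~ {cate1:.3f}  (truths 0.3, 0.5)")
```

The rest of this paper is, in effect, about recovering these quantities when only one of y0 and y1 is observed per unit.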
## 2.2 Assumptions For Causal Inference

Unfortunately, we cannot simply average the factual outcomes for each action to estimate the average treatment effect, because this corresponds to measuring the expected conditional outcome, which is a biased estimate of the expected potential outcome. This bias is usually due to confounders. For instance, socioeconomic status can be a confounder in a study of the treatment effect of a medication. Socioeconomic status often affects both patients' access to medication and their general health, which makes it a confounder between the treatment and the health outcome. In this case, the expected conditional outcome estimate is biased, because the patients who receive the medication are also likely to receive better healthcare, resulting in a better outcome. In order to properly estimate the expected treatment effect, we must isolate the effect of the medication on the health outcome by separating out the effect of having better access to healthcare due to patients' higher socioeconomic status.

In this section, we review the assumptions required for us to obtain an unbiased estimator of the average potential outcome. The main strategy for estimating the (average) potential outcome is to compute causal quantities, such as intervention probabilities, from statistical quantities that are readily estimated from a set of samples, i.e., a dataset. In this context, we say that a causal quantity is *identifiable* if we can compute it from statistical quantities. In doing so, there are a number of assumptions that must be satisfied.

Positivity/Overlap. The first step in estimating the potential outcome is to estimate the conditional probabilities from data. In particular, we need to compute

$$p(Y|X,A) = \frac{p(Y,A|X)}{p(A|X)}$$

for X with p(X) > 0. This implies that p(A = a|X), for the action a for which we are interested in computing the potential outcome, must be positive. We call this *positivity*. Positivity is a necessary condition for computing ATE, as we need p(A|X = x) > 0 in the denominator for every data point x. The *overlap* assumption is similar to the positivity assumption but applies to the covariate. It requires that the distributions p(X|A = 0) and p(X|A = 1) have common support. Partial overlap occurs when a particular action is missing for a certain region of the covariate space. For example, we may have treated units from certain patient groups but no control units, or vice versa.

Ignorability/Exchangeability. Even if we can estimate the statistical quantities, such as the conditional probability p(Y|X, A), we need an additional set of assumptions in order to turn them into causal quantities. The first such assumption is *exchangeability*, which states that the potential outcome Ŷ(a) must be preserved even if the assignment of actions to covariate configurations, p(A|X), changes. That is, the causal effect of A on Y does not depend on how we assign an action A to each sample X. This is also called *ignorability*, as it is equivalent to ignoring the associated covariate when assigning an action to a sample, i.e., A ⊥⊥ X. This enables us to turn the conditional probabilities into intervention probabilities, eventually allowing us to estimate the potential outcome, which we describe in more detail later. Exchangeability is a strict condition that may not be easily met in practice. Violations can arise from the existence of confounding variables, from selection bias, or from time-dependency between action selections (a violation of the Markov assumption).
We can relax this by assuming *conditional exchangeability*. As we did for defining the CATE above, we partition the covariate into X and X′ and condition the latter on a particular value, i.e., X′ = x′. If exchangeability holds conditioned on X′ = x′, we say that conditional exchangeability is satisfied. This however implies that we are only able to estimate the potential outcome given a particular configuration of X′. To meet the ignorability assumption, we need to achieve *unconfoundedness*, which refers to the absence of confounding variables in a causal relationship. This involves careful design, data collection, and measurement to control potential confounders and isolate the effect of interest. These two assumptions together allow us to turn statistical quantities into causal quantities, just as when (unconditional) exchangeability alone is satisfied.

Consistency and Markov assumption. Data for causal inference is often collected in a series of batches rather than all at once in parallel. Each t-th batch has an associated potential outcome Ŷ(a_t) for action a. If the potential outcome changes over these batches, we cannot concatenate all these batches and use them as one dataset to estimate the potential outcome. That is, we must ensure that Ŷ(a_t) = Ŷ(a_{t′}) for all t, t′ where a_t = a_{t′}. This condition is called *consistency*. Furthermore, we must ensure that the past batches do not affect the future batches in terms of the potential outcome, i.e., Ŷ(a_1, . . . , a_t) = Ŷ(a_t), as a violation would effectively increase the action space dramatically and make it impossible to satisfy positivity. We call this condition a *Markov assumption*.

In practice, the potential outcomes are estimated from data, which is a statistical estimation. Hence, all of these assumptions are required to turn a causal estimand into a statistical estimand. We show step-by-step how each assumption is used to compute ATE:

$$\begin{aligned} ATE &= \mathbb{E}_{X}[\mathbb{E}_{X'}[Y(1) - Y(0)|X']] \\ &= \mathbb{E}_{X}[\mathbb{E}_{X'}[Y(1)|X'] - \mathbb{E}_{X'}[Y(0)|X']] && \text{(Conditional exchangeability)} \\ &= \mathbb{E}_{X}[\mathbb{E}_{X'}[Y(1)|A=1,X']] - \mathbb{E}_{X}[\mathbb{E}_{X'}[Y(0)|A=0,X']] && \text{(Ignorability)} \\ &= \mathbb{E}_{X}[\mathbb{E}[Y|A=1,X]] - \mathbb{E}_{X}[\mathbb{E}[Y|A=0,X]] && \text{(Consistency)} \\ &= \mathbb{E}[Y|A=1] - \mathbb{E}[Y|A=0]. \end{aligned}$$

Figure 3: Generalization of the propensity score in two scenarios where the positivity assumption is violated. (a) requires extrapolation and (b) requires interpolation for generalizing to unseen counterfactual examples, respectively.

## 2.3 Discussion On The Assumptions In Practice

The assumptions above, or some of their combinations, enable us to derive causal quantities from statistical quantities. It is thus important to carefully consider these assumptions when faced with causal inference and how they may be violated, as most of them are often impossible to verify in practice. We discuss a few important points regarding these assumptions in practice.

Unconfoundedness & Conditional Exchangeability. We face the challenge of verifying whether the potential outcome remains the same across all possible confounder configurations, because we cannot enumerate all possible confounders.
In practice, we often condition X′ = x′ on an individual data point (e.g., x′ being a patient) and rely on conditional exchangeability, which is then frequently taken for granted for that data point. The conditional exchangeability assumption is however impossible to test and verify in practice: estimating the potential outcome given a particular configuration of X′ amounts to removing all the existing confounders for X′, yet we cannot know every possible confounder out there.

Unconfoundedness vs. Overlap. In order to estimate ATE, one must assume both unconfoundedness and positivity. It is however difficult to satisfy both of them in practice, because there is a natural trade-off between them. We are more likely to satisfy unconfoundedness by adding more features to the covariate, which in turn increases the dimensionality of the data. This in turn increases the chance of violating the overlap assumption due to the curse of dimensionality. Similarly, we can satisfy the overlap assumption by choosing only a minimal number of features as the covariate, but we may unintentionally create unobserved confounders along the way.

Positivity via generalization. Positivity is hard to satisfy in a strict sense, as we must have as many data points as there are actions for each and every possible x with p(x) > 0. This is largely impossible when x is continuous, and even when x is discrete, it is difficult if the support of p(x), i.e., {x ∈ X | p(x) > 0}, is large. Instead, we can fit a parametric or non-parametric model of p(A = a|X) that can generalize to an unseen combination of (X, A). In such a case, the success of generalization depends on the support of the data points we have access to. Figure 3 depicts two cases where the fitted model must respectively extrapolate and interpolate. In the latter case of interpolation, even without positivity, we may be successful at correctly inferring the causal effect, but in the former case, it would be much more challenging. This suggests we must be careful about relying on generalization to overcome the issue of positivity (or the lack thereof).

Algorithm 1 Active CI protocol
Require: A actions, T rounds (both known); potential outcome Y(a) for each action a (unknown *a priori*).
for each round t ∈ [T] do
    Observe a covariate x_t.
    Pick an action according to the policy, a_t ∼ π(x_t).
    Observe the outcome y_t ∈ [0, 1], sampled given X = x_t, A = a_t.
    if the action was sampled from π(x) = p(A|X = x) then
        Update the potential outcome E_{p(X)}[Y_X(a)] ← t^{-1} Σ_{t'=1}^{t} [I(a_{t'} = a)/p(a|x_{t'})] y_{t'}
    else
        Update the potential outcome E_{p(X)}[Y_X(a)] ← t^{-1} Σ_{t'=1}^{t} [I(a_{t'} = a)/p(a)] y_{t'}
    end if
    [Optional] Update the policy π.
end for

## 3 Active Causal Inference Learning

Because counterfactual information is rarely available together with factual information in observed data (Graepel et al., 2010; Li et al., 2010; Chapelle et al., 2015), one approach to causal inference (CI) is to design an online algorithm that actively explores and acquires data. We introduce and describe an active CI framework that combines causal inference and data collection, inspired by the literature on contextual bandits (Slivkins, 2019; Bouneffouf et al., 2020). In this framework, an algorithm estimates the expected potential outcome Y_X(A) of each action and collects further data for underexplored counterfactual actions. A general active CI protocol is presented in Algorithm 1.
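A minimal Python rendering of Algorithm 1 might look as follows. The environment, the covariate distribution, and the logging policy are all hypothetical stand-ins of ours; the point is the inverse-probability update of the expected potential outcome under a covariate-dependent policy.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_actions = 50_000, 2

# Hypothetical environment: p(Y=1 | A=a, X=x), unknown to the learner.
p_y1 = [[0.3, 0.6],   # x = 0
        [0.5, 0.4]]   # x = 1

def policy(x):        # covariate-dependent policy pi(a | x), probabilities known
    return np.array([0.8, 0.2]) if x == 0 else np.array([0.3, 0.7])

sums = np.zeros(n_actions)
for t in range(T):
    x = int(rng.integers(0, 2))               # observe covariate x_t
    p = policy(x)
    a = int(rng.choice(n_actions, p=p))       # a_t ~ pi(x_t)
    y = rng.binomial(1, p_y1[x][a])           # observe outcome y_t
    sums[a] += y / p[a]                       # I(a_t = a) / pi(a | x_t) * y_t

print("E[Y_X(a)] estimates:", (sums / T).round(3))  # truth here: [0.4, 0.5]
```

Dividing each observed outcome by the probability with which its action was chosen undoes the covariate-dependent selection, so the running averages converge to the expected potential outcomes even though the policy is far from uniform.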
We denote by (x_t, a_t, y_t) the observed covariate, action, and outcome at time t, and by Y_X(A) the potential outcome. We update the estimated expected potential outcome E_X[Y_X(A)] based on each newly observed data point (x_t, a_t, y_t). Although active CI algorithms are inspired by contextual bandits, there is a major difference between the two: CI focuses much more on estimating the expected potential outcomes, while bandit algorithms focus on finding an optimal policy function that maximizes the expected outcome values.1 Most of the time, CI researchers and practitioners face the challenge of being limited in data exploration not by choice, but by real-world constraints that make A/B testing inaccessible. Active CI learning benefits from a policy function π(x) that can compromise between exploration and exploitation, for understanding the causal effect and for optimizing decisions in real-world applications, respectively. In this section, we review the active CI literature by first examining RCT and then expanding to contextual bandit methods.

## 3.1 Randomized Controlled Trial

The randomized controlled trial (RCT) is the most well-known and widely practiced CI method. It is the *de facto* standard in, e.g., clinical trials, where participants are randomly divided into treatment and control groups, and the outcomes of these two groups are used to compute the average treatment effect (ATE). RCT falls under the active CI method group, since it collects data proactively by randomly assigning an action to each data point (Rubin, 1974). Although it is typical to choose a uniform distribution over the action set, π = U[A], as a policy function in RCT, we can in principle pick any random distribution as long as it is independent of the covariate, π = P[A], as shown in Algorithm 2.2

Algorithm 2 Randomized Controlled Trial
Observe a covariate x_t.
Pick an action according to a policy, a_t ∼ π = P[A].
Observe the outcome y_t ∈ [0, 1] ∼ Y_X(A)|X = x_t, A = a_t.
Estimate E_{p(X)}[Ŷ_X(a)] for all a.

A set of points {(x_t, a_t, y_t)}_{t=0}^{T} collected by RCT automatically satisfies the exchangeability assumption. Because actions are selected independently of the covariate X, as shown in Figure 2b, the potential outcome Y_X(A) estimated from this data must be preserved even if the action distribution p(A|X) changes. With this, we can show that the conditional and intervention probabilities coincide with each other, allowing us to perform causal inference from data collected by RCT:

$$\begin{aligned} p(Y=y|do(A=a)) &= \sum_{x} p(Y=y|A=a,X=x)\,p(x) \\ &= \sum_{x} \frac{p(Y=y|A=a,X=x)\,p(A=a|X=x)\,p(x)}{p(A=a|X=x)} \\ &= \sum_{x} \frac{p(Y=y,A=a,X=x)}{p(A=a|X=x)} \\ &= \sum_{x} \frac{p(Y=y,A=a,X=x)}{p(A=a)} \quad (\text{since } A \perp\!\!\!\perp X) \\ &= \sum_{x} p(Y=y,X=x|A=a) \\ &= p(Y=y|A=a). \end{aligned} \tag{3}$$

Let us consider Simpson's paradox (Example 2 in §2.1). Although there was an overall strong association between being a man and being more likely to be admitted, we found different associations when we considered different sub-populations, due to the uneven sex distribution among applicants and the uneven acceptance rates across departments. With RCT using the uniform action distribution, we would end up with an even number of each sex independent of the department choice, i.e., p(A = a|S = man) = p(A = a|S = woman). This would allow us to verify whether men are more likely to get admitted to graduate school without being confounded by the department choice.

1 We use *outcome* and *reward* interchangeably.
2 Existing literature often conflates having an equal chance of sampling each action with independence, but this is not true.
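The identity in Equation 3 is easy to check numerically: when actions are assigned independently of X, the empirical conditional probability matches the intervention probability computed from the (here, known) model. The probability tables below are again hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
p_x1 = 0.3                                   # p(X=1)
p_y1 = np.array([[0.2, 0.5],                 # p(Y=1 | A=a, X=0)
                 [0.4, 0.9]])                # p(Y=1 | A=a, X=1)

n = 200_000
x = rng.binomial(1, p_x1, size=n)
a = rng.binomial(1, 0.5, size=n)             # RCT: A independent of X
y = rng.binomial(1, p_y1[x, a])

for act in (0, 1):
    cond = y[a == act].mean()                               # empirical p(Y=1 | A=a)
    do = (1 - p_x1) * p_y1[0, act] + p_x1 * p_y1[1, act]    # p(Y=1 | do(A=a))
    print(f"a={act}: conditional {cond:.3f} vs interventional {do:.3f}")
```

Under any covariate-dependent assignment, the two columns would diverge, exactly as in the toy example of §2.1.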
It is often tedious and difficult to plan and design an RCT due to the many biases that are difficult to mitigate. One such example is control bias, which arises when the control group behaves differently from, or is treated differently from, the treatment group. For instance, participants in the control group may be more likely to drop out of the study than those in the treatment group, due to the lack of progress they perceive in themselves. In order to avoid control bias, one must carefully consider eligibility criteria, the selection of study groups, baseline differences in the available population, variability in indications in covariates, and also the management of intermediate outcomes (Simon, 2001; Jane-wit et al., 2010). There are techniques that help assess the quality of RCT studies, with an emphasis on measuring control bias in the planning and implementation of experimental designs (Chalmers et al., 1981; 1989; Stewart & Parmar, 1996; Moreira & Susser, 2002; Olivo et al., 2008).

RCT is widely used in clinical trials in medicine (Concato et al., 2000; Rothwell, 2005; Green & Glasgow, 2006; Frieden, 2017), policy research in public health (Sibbald & Roland, 1998; García & Wantchekon, 2010; Deaton & Cartwright, 2016; Choudhry, 2017), econometrics (Heckman & Robb, 1985; LaLonde, 1986; Abhijit V. & Esther, 2012) and advertising in marketing (Graepel et al., 2010; Chapelle et al., 2015; Gordon et al., 2019). In these real-life applications, we often cannot afford to blindly assign actions, but must determine the action based on the covariate X, for ethical, legal and economic reasons. Because it is challenging to apply RCT in practice (Saturni et al., 2014), some studies explore combining RCT with observational study data (Rubin, 1974; Concato et al., 2000; Hannan, 2008). For example, one can use inverse probability weighting or matching techniques to de-bias the potential outcome estimation, as we will discuss in §4. We thus review active CI methods with a covariate-dependent data collection policy in the rest of this section.

## 3.2 Causal Inference With Contextual Bandits

It is often impractical to use RCT in real-life settings due to (but not limited to) the following three limitations (Rubin, 1974; Olivo et al., 2008; Schafer & Kang, 2009; Saturni et al., 2014). First, the sample size must be large enough to detect a meaningful difference between the outcomes of actions, due to the high variance of RCT. Second, complete randomization, i.e. complete independence from the covariate, is often neither feasible nor ethical in practice. Lastly, complete randomization often goes against the real-world objective of maximizing the outcome, which is different from correctly inferring the causal effect. For example, doctors should not randomly assign different treatments to patients in order to test the causal effects on the patients, because this could end up harming many patients. Instead, a doctor makes the best decision for each patient given their expertise (i.e. their own policy), and adjusts their policy online in order to maximize the outcome of each patient:

$$\arg\max_{\pi_e} \mathbb{E}_{x \sim p(X)} \mathbb{E}_{a \sim \pi_e(x)} \left[ Y_X(a) \right].$$

RCT, on the other hand, does not maximize the outcome at all, which makes it less desirable in many real-world scenarios.
In this section, we review different ways of performing both tasks together, that is, finding an optimal policy and estimating causal effects. In particular, we examine various ways to intervene and actively collect data under the framework of contextual bandits.3 There are two primary approaches to applying contextual bandit algorithms to CI. The first approach actively gathers interventional data and then estimates ATE from this actively collected randomized dataset.4 The second approach uses a contextual bandit method to learn a causal model utilizing all the collected data points, including both randomized and non-randomized actions.

## 3.2.1 Estimating Ate From Interventional Data Collected With A Bandit Method

The general idea is to keep track of the data points for which intervention, i.e. randomization, happened. It is no different from RCT except that random intervention happens only occasionally, based on your choice of bandit algorithm. For the purpose of illustration, we use the ϵ-greedy strategy together with the expert's policy as an example in Algorithm 3. The ϵ-greedy algorithm is an easy way to add exploration to the basic greedy algorithm: we greedily choose an action based on the estimated highest outcome values, but once in a while, with probability ϵ, we randomly select an action independently of the covariate. We use a *randomized set* to refer to the collection of these randomized actions together with the associated outcomes and covariates. We then estimate the average treatment effect (ATE) directly using the randomized dataset only (see Appendix A.1). The efficiency in estimating the causal effect is determined solely by how often we randomize the action, i.e., the probability ϵ. In contrast to RCT, as we select actions based on the covariate-aware policy with probability 1 − ϵ, we fulfill both outcome maximization and ATE estimation, although the efficiency of the ATE estimation is typically worse than that of RCT.

There are other, more sophisticated methods, such as high-confidence elimination and upper-confidence bound (UCB) algorithms (Auer et al., 2002; Slivkins, 2019).5 Unlike the ϵ-greedy strategy, these approaches choose when to randomize the action based on the policy learned so far. A *high-confidence elimination method* alternates between two actions a and a′ until the confidence bounds of the two actions' potential outcomes no longer overlap, where the confidence bounds are defined as

$$\begin{aligned} UCB_t(a) &= \mathbb{E}[Y(a)] + r_t(a) \\ LCB_t(a) &= \mathbb{E}[Y(a)] - r_t(a) \end{aligned}$$

with the confidence radius $r_t(a) = \sqrt{2\log(T)/n_t(a)}$,6 where n_t(a) is the number of rounds in which the action a was chosen (Slivkins, 2019).

3 See Appendix A.3.1 for a basic description of contextual bandits.
4 A randomized dataset refers to a dataset consisting of tuples collected using actions chosen independently of covariates.
5 See Appendix A.3.1 for more details.
6 There are ways to estimate a tighter radius based on different assumptions; see previous research (Slivkins, 2019).

Algorithm 3 ϵ-greedy protocol
Require: A actions, T rounds (both known); potential outcome Y(a) for each action a (unknown).
while each round t ∈ [T] do
    Toss a coin with the exploration probability ϵ_t.
    if explore then
        Choose an action a_t ∼ U[A].
    else
        Observe a covariate x_t.
        Pick an action according to the expert, a_t ∼ π_e(x).
    end if
    Observe the outcome y_t ∼ Y|X = x_t, A = a_t, with y_t ∈ [0, 1].
    Store y_t in the set D if explore.
    Update the expected potential outcome E_{p(X)}[Y_X(a)] ← |D|^{-1} Σ_{d∈D} I[A_d = a] y_d for all a.
end while
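A sketch of the ϵ-greedy protocol of Algorithm 3 is given below, with a hypothetical two-action, two-covariate environment and a hypothetical expert policy of our choosing. Only the explored (randomized) rounds enter the set D used for the ATE estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
T, eps = 50_000, 0.1
p_y1 = [[0.5, 0.7],   # p(Y=1 | A=a, X=0) for a = 0, 1
        [0.6, 0.4]]   # p(Y=1 | A=a, X=1)

explored = {0: [], 1: []}                    # the randomized set D
for t in range(T):
    x = int(rng.integers(0, 2))              # observe covariate x_t
    if rng.random() < eps:                   # explore: a_t ~ U[A]
        a = int(rng.integers(0, 2))
        y = int(rng.binomial(1, p_y1[x][a]))
        explored[a].append(y)                # only randomized rounds enter D
    else:                                    # exploit: hypothetical expert pi_e(x)
        a = 1 if x == 0 else 0
        y = int(rng.binomial(1, p_y1[x][a])) # observed, but not stored in D

ate_hat = np.mean(explored[1]) - np.mean(explored[0])
print(f"ATE from randomized subset: {ate_hat:.3f} (truth: 0.55 - 0.55 = 0.0)")
```

Because only a fraction ϵ of the rounds are randomized, the estimate uses roughly ϵT samples, which is the efficiency loss relative to RCT mentioned above.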
Once the lower-confidence bound of the potential outcome of one action exceeds the upper-confidence bound of that of the other action, i.e., UCB_t(a) < LCB_t(a′), the action a′ is selected indefinitely from then on, because the abandoned action cannot be the best action in terms of maximizing the outcome value. The rationale behind this method is to make sure to explore until we are confident about the expected potential outcome of each action. In short, the high-confidence elimination method fully explores until the best action is determined with a high level of confidence, after which it switches to exploiting the discovered best action only. The first phase of exploration is thus similar to performing RCT. This approach is interesting because, as soon as the lower bound of a′ and the upper bound of a separate, we automatically get some level of assurance about the potential outcome estimates.

The *UCB algorithm*, on the other hand, picks the action that maximizes UCB_t(a) at every round t. This choice automatically balances exploration and exploitation: a′ is selected over another action a if UCB_t(a′) > UCB_t(a), which can happen for one of two reasons; the uncertainty is high (exploration) or the actual reward is high (exploitation). For the purpose of estimating ATE, we only use the randomized (explored) data points for which the confidence radius r_t(a) was large. There are other methods, such as the Thompson-sampling causal forest (Dimakopoulou et al., 2017) and the random-forest bandit (Féraud et al., 2016), which share a similar flavour with the UCB algorithm.

## 3.2.2 Learning A Causal Graph Using A Bandit Algorithm

So far in this section, we have discussed how to estimate ATE directly by gathering an interventional dataset using bandit methods. While such an approach allows us to measure the expected potential outcomes and ATE, we cannot infer conditional potential outcomes. One of the goals of CI is to measure the causal effect using ATE, but a bigger and more ambitious goal is to answer counterfactual questions. The latter is only possible if we can infer individual potential outcomes. Here, we discuss how to learn an underlying graphical model using a contextual bandit method in order to infer individual potential outcomes.

At each trial t, we approximate the potential outcome Ŷ_X(a_t) using a parametrized model g_θ(x_t, a_t) that computes the outcome based on the context x_t and action a_t,

$$\hat{Y}_X(a_t) = g_\theta(x_t, a_t) + \epsilon_t, \tag{4}$$

where ϵ_t ∼ N(0, σ²) is noise from unknown confounders at time t. Although the data was collected according to the generic probabilistic graph that includes a policy π, shown in Figure 4b, g_θ learns the causal graph in Figure 4a, because the parameters θ are estimated from the intervention dataset alone. This allows us to use g_θ to infer the potential outcome of a counterfactual action and/or of unseen covariates.

Figure 4: Bandit methods for Active CI.

Contextual bandits can also handle more complicated causal graphs. For instance, in Fig. 4c, there is an extra variable Z = f(X) that mediates the effect of the covariate X on the outcome Y but does not affect
In this case, the outcome function is in the form of $${\hat{Y}}_{X}(a_{t})=g_{\theta}$$ YˆX(at) = gθ(xt, at) + fϕ(xt) + ϵt, where gθ(xt, at) takes into account the confounder X as well as policy function π(X) and maps them to an output Y . fϕ(X) learns to isolate the effect of the covariates that affect the outcome independent of the action. ϵt is the noise. Having fϕ(X) to learn covariates that are not confounders can reduce the complexity of learning for gθ(xt, at). When both gθ and fϕ are non-parametric, it is often computationally intractable to tackle this problem. In order to avoid these issues of computational tractability and undesirable regret bound, *semiparametric contextual bandits* consider parametric policies (Krishnamurthy et al., 2018; Greenewald et al., 2017; Peng et al., 2019) and learn a linear bandit model using regression oracles to estimate potential outcome (Swaminathan et al., 2017) $${\hat{Y}}_{X}(a_{t})=w^{\top}x_{t}+f_{\phi}(x_{t})+\epsilon_{t},$$ where w ⊤ is a parameter vector and ϵt is noise. They have a nice property where the regret bound is the same as the regular contextual bandit's regret. The action-independent features fϕ(xt) get cancelled out in the regret formulation together with the noise term ϵt, because they are independent of the action choice. ## 3.2.3 Correcting A Bias From Unobserved Confounders It is unrealistic to observe all confounders in practice, nor for us to verify whether all confounders have been observed. Any confounding factor not included in the covariate leads to difficulties in accurately estimating the causal effect. Here, we consider a way to detect and correct such a bias that arises from having unobserved confounders (see Figure 4d). Consider a policy function π ∗(x ′) that is optimized for maximizing arg maxa *CAT E*(x ′, a): 7 $$\pi^{*}(x^{\prime})=\arg\max_{a}C A T E(x^{\prime},a)=\arg\max_{a}\mathbb{E}_{Y,X\setminus X^{\prime}}\left[Y_{X}(a)-\max_{a}Y_{X}(\bar{a})|X^{\prime}=x^{\prime}\right].$$ This π ∗(x ′) is estimated without considering unobserved confounders like in Figure 4b, and yet, with the realization of unobserved confounders shown in Figure 4d, we now correct the estimation. Let a¯ be the counteraction to π ∗(x ′) that returns a larger CATE than the originally selected action π ∗(x ′). Suppose we found x ′ where *CAT E*(x ′, π∗(x ′)) *< CAT E*(x ′, a¯) where a¯ ̸= π ∗(x ′). Since the policy function π ∗is optimal, this can only happen if there are unobserved confounders. This illustrates that our learned policy might not be the best in such a situation. We then need to search for a new policy function π ′that can handle this situation better. 7Unlike *CAT E*(x) with a binary action in Equation 2, we define CAT E(*x, a*) for a set of more than two actions. Because our counteraction could not be better than the action π ∗(x ′) without unobserved confounders, we compare the potential outcomes of the action and the counteraction a¯; $$\mathbb{E}_{Y,X\setminus X^{\prime}}\big{[}Y_{X}(\pi^{*}(x^{\prime}))|X^{\prime}=x^{\prime}\big{]}\text{and}$$ $$\mathbb{E}_{Y,X\setminus X^{\prime}}\big{[}Y_{X}(\bar{a})|A=\pi^{*}(x^{\prime})|X^{\prime}=x^{\prime}\big{]},$$ (5) (6) $\frac{1}{2}$ respectively. In order to compute the outcome of the counteraction, we actively intervene with the counteraction and collect a new sample (x ′, y′, *a, a* ¯ ). 
We also count the frequency with which the expected potential outcome of the counteraction is higher than that of the action π*(x′),

$$f(\pi^*) = \sum_i^N \mathbb{I}\left[\mathbb{E}_{Y,X\setminus X'}\left[Y_X(\pi^*(x_i'))\,|\,X'=x_i'\right] > \mathbb{E}_{Y,X\setminus X'}\left[Y_X(\bar{a})\,|\,A=\pi^*(x_i'),\,X'=x_i'\right]\right],$$

where f(π*)/N tells us how often π* is wrong. Our new policy function π′(x′) samples a new action according to the Bernoulli distribution with probability f(π*)/N. This strategy copes with the fact that our policy is biased and incorrect with probability f(π*)/N due to unobserved confounders. Bareinboim et al. (2015) show that, while this method requires collecting enough data for f to converge, we can optimize a new policy function faster by weighting the samples from the Beta distribution B(f, 1 − f) by the bias, as shown in Algorithm 4. The bias from the unobserved confounder is defined as one minus the absolute treatment effect (Bareinboim et al., 2015):

$$\text{bias} = 1 - \left|\mathbb{E}_{Y,X\setminus X'}[Y_X(\bar{a})\,|\,A=\pi^*(x'),\,X'=x'] - \mathbb{E}_{Y,X\setminus X'}[Y_X(\pi^*(x'))\,|\,X'=x']\right|. \tag{7}$$

Equation 7 quantifies how effective applying the action π*(x′) over the counteraction a¯ is. If this quantity is large, the re-weighted probability of choosing the action π*(x′) remains high. Otherwise, the re-weighted probability of choosing the counteraction a¯ remains high. Overall, this encourages faster convergence when re-estimating the potential outcome (see the last three lines of Algorithm 4). This specific algorithm is called causal Thompson sampling.

Algorithm 4 Causal Thompson Sampling (TSC) (Bareinboim et al., 2015)
Let a = π*(x′) be the intuitive action.
Let a¯ be the counteraction to a.
Let E_{Y,X\X′}[Y_X(a)|A = a, X′ = x′] be the expected payout for the intuitive action.
Let E_{Y,X\X′}[Y_X(a¯)|A = a, X′ = x′] be the expected payout for the counter-intuitive action.
while t = 1 · · · T do
    Let w = [1, 1]
    Sample a ∼ intuition(x_t, t)
    // Estimate the potential outcomes and bias
    Compute bias = 1 − |E[Y_X(a¯)|A = a, X′ = x′] − E[Y_X(a)|X′ = x′]| (Equation 7)
    if E[Y_X(a¯)|A = a, X′ = x′] > E[Y_X(a)|X′ = x′] then
        Set w[a] = bias
    else
        Set w[a¯] = bias
    end if
    // Choose a new policy π′(x′) based on the new weighting
    β1 = B(f(a), 1 − f(a))
    β2 = B(f(a¯), 1 − f(a¯))
    Set π′(x′) ← max(β1 · w[a], β2 · w[a¯])
    Set y′ = simulate(π′(x′))
    Update E[Y_X(π′(x′))|A = a, X′ = x′] with (x′, y′, π′(x′), a)
end while

## 4 Passive Causal Inference

Unlike in active causal inference (CI), in passive CI we must estimate ATE given a non-randomized dataset that was gathered in advance, with the actions chosen based on covariates (Rubin, 1974; Holland, 1986). As discussed earlier in §2.3, we assume that the dataset satisfies positivity, although we discuss how to relax this with deep learning later in this section. In this section, we present and examine passive approaches to CI, which are gaining popularity. Most of them are grounded in one of the following three general approaches: matching, inverse probability weighting, and doubly robust methods. We first review these basic approaches and then introduce some of the more recent ones.

## 4.1 A Naive Estimator

Before we begin, we start with a naive estimator µ_A(X) of the outcome variable Y. Let µ_A : X → Y be a generic estimator that predicts the outcome.
For example, this estimator can be simple empirical averaging:

$$\mu_a(x) = \frac{\sum_{i=1}^{N} y_i\,\mathbb{I}[a_i = a, x_i = x]}{\sum_{i'=1}^{N} \mathbb{I}[a_{i'} = a, x_{i'} = x]},$$

where D = {(x_i, a_i, y_i)}_{i=1}^{N} is a dataset consisting of N data points and I is an indicator function. This estimator looks at the average outcome of the action a given a particular covariate x. Another example would be a parametrized estimator µ_A(X; θ), where θ is a parameter. Such a naive estimator µ_A(X), which often maximizes the log-likelihood, is a biased estimator of the potential outcome Y, due to the discrepancy between the conditional and intervention probabilities. In the subsequent sections, we introduce and discuss modifications, either to the estimator or to the dataset, that allow us to obtain an unbiased estimate of the potential outcome.

## 4.2 Matching

ATE estimation is hard when working with a real-life dataset due to confounding. A common approach is to construct a randomized dataset, free of confounding, from a non-randomized one. The matching method achieves this by pairing each treated instance with a control instance and ignoring any unmatched instances. This process balances the treatment and control groups (Scotina & Gutman, 2019). Ideally, we would have both factual and counterfactual outcomes for every data point, i.e., ⟨x_i, y_i(1), y_i(0)⟩ for all i. This is impossible due to the fundamental problem of CI: we can only observe the outcome of a single action given a particular covariate. We instead aim for approximate one-to-one matching, where we pair treated data points with control data points that are similar enough. That is, we form pairs {⟨(x_i, 1, y_i(1)), (x_j, 0, y_j(0))⟩ | D(x_i, x_j) < ϵ, i ≠ j} for some small ϵ, where D(·, ·) is a problem-specific distance metric. We then compute ATE using our new, balanced dataset.

Matching methods remove confoundedness by creating comparable groups based on observed covariates: they select individuals from different treatment groups that have similar characteristics or covariate distributions. This ensures that the treatment and control groups are balanced with respect to the potential confounders. The advantages and disadvantages of matching lie in the bias-variance tradeoff: matching reduces the confounding bias, but it increases the variance, because we removed (potentially many) unmatched data points.

There are many standard metrics of *closeness* that are widely used. One such metric is the Mahalanobis distance:

$$D_{ij} = (x_i - x_j)^\top \Sigma^{-1} (x_i - x_j),$$

where Σ is the covariance matrix. Many alternative metrics have been proposed over the past decades, such as those relying on a coarsened data space or other feature spaces (Cochran & Rubin, 1973; Rubin, 1979; Rosenbaum & Rubin, 1983; Rubin & Thomas, 1992; Rubin & Stuart, 2006; Stuart, 2010; Iacus et al., 2012; Zubizarreta, 2012; Zhao, 2004; Resa & Zubizarreta, 2016). The choice of metric must be determined for each problem separately.

Matching methods have evolved from greedy algorithms (heuristic-based search) to optimal/full matching (Kim & Steiner, 2016). The greedy method can end up with a sub-optimal solution, since the ordering of the pairing matters (Weitzen et al., 2004). Moving away from greedy matching, one can use an optimal non-bipartite matching algorithm that runs in polynomial time (Lu & Rosenbaum, 2004a; Dehejia & Wahba, 1999). Such an algorithm generates a series of matched sets that consist of at least one treated individual and at least one control individual. It is optimal in terms of minimizing the average distance between each treated individual and each control individual within each matched set (Hansen, 2004).
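As an illustration of §4.2, here is a deliberately simple greedy one-to-one matching sketch with a Mahalanobis distance and a caliper ϵ, on a synthetic confounded dataset of our own construction; production implementations would use optimal rather than greedy matching.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=(n, 2))                       # 2-dimensional covariate
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))   # confounded treatment assignment
y = rng.normal(0.5 * a + x.sum(axis=1))           # true treatment effect = 0.5

cov_inv = np.linalg.inv(np.cov(x.T))              # for the Mahalanobis distance
def dist(i, j):
    d = x[i] - x[j]
    return float(d @ cov_inv @ d)

treated = list(np.where(a == 1)[0])
control = list(np.where(a == 0)[0])
eps, pairs = 0.1, []
for i in treated:                                 # greedy pairing, order-dependent
    if not control:
        break
    dists = [dist(i, j) for j in control]
    if min(dists) < eps:                          # caliper: keep only close pairs
        pairs.append((i, control.pop(int(np.argmin(dists)))))

ate_hat = float(np.mean([y[i] - y[j] for i, j in pairs]))
print(f"{len(pairs)} pairs kept; matched ATE ~ {ate_hat:.2f} (truth 0.5)")
```

The unmatched units are discarded, which is exactly the bias-variance trade-off noted above: the paired differences are nearly free of confounding, but the effective sample size shrinks.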
There are other approaches, such as weighting methods, where one adjusts the importance of the distance between data points (Heckman et al., 1997; Hirano et al., 2003; Imbens, 2004). These methods can be helpful when the data samples are unevenly spread out over the domain.

There however remain challenges with matching. First, it is difficult to find exact matches for all data points with a non-randomized or imbalanced dataset (a dataset that is unevenly distributed w.r.t. actions). We thus end up eliminating a significant number of unpaired data points after matching between treated and control groups, resulting in a loss of information and statistical power. There are however algorithms that enable many-to-one matching, the simplest being the K-nearest-neighbour method (Karp, 1972; Schafer & Kang, 2009; Zubizarreta, 2012). Second, matching works well when the dimensionality of the covariate is low, but it easily fails when the dimensionality is high, due to the curse of dimensionality (Gu & Rosenbaum, 1993). It can be helpful to use deep learning to obtain a more concise and dense representation, similar to what we will discuss in §4.5.

## 4.3 Inverse Probability Weighting

Inverse probability weighting (IPW) removes the effect of confounders by weighting the potential outcome of each action by its inverse probability weight (Rosenbaum & Rubin, 1983; Robins et al., 1994; Hirano et al., 2003):

$$\mathbb{E}\left[\frac{Y\,\mathbb{I}[A=a]}{p(A|X)}\right] = \mathbb{E}_{p(X)}\left[\mathbb{E}\left[\frac{Y(a)\,\mathbb{I}[A=a]}{p(A|X)}\,\middle|\,X\right]\right] = \mathbb{E}\left[\frac{\mathbb{E}[Y(a)|X]\;\mathbb{E}[\mathbb{I}[A=a]|X]}{p(A|X)}\right] = \mathbb{E}[Y(a)]. \tag{8}$$

Equation 8 illustrates that we can take the subset of data that corresponds to a particular action a, as long as we divide by the so-called propensity score, in order to compute E[Y(a)]. p(A = a|X = x) is known as the *propensity score* e(x).

The propensity score theorem (Rosenbaum & Rubin, 1983) tells us that if ignorability is satisfied given X, then ignorability conditioned on e(X) is also satisfied (Imbens & Rubin, 2015):

$$(Y(1), Y(0)) \perp\!\!\!\perp A\,|\,X \implies (Y(1), Y(0)) \perp\!\!\!\perp A\,|\,e(X).$$

The premise states that the potential outcomes are independent of A given the confounder X, which holds because conditioning on the confounder X blocks the back-door path (Pearl, 2009). Similarly, the potential outcomes are independent of A given the propensity score e(X): conditioning on the propensity score has the same effect of removing the back-door path in the causal graph through the edge between X and A. This illustrates that a 1-dimensional score function, the propensity score, is enough to compress the high-dimensional confounder X.
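Equation 8 is easy to verify by simulation. In the sketch below the propensity is known by construction; the naive conditional mean is biased, while the inverse-probability-weighted average recovers E[Y(1)].

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.binomial(1, 0.5, size=n)
e = np.where(x == 1, 0.8, 0.2)                       # e(x) = p(A=1 | X=x)
a = rng.binomial(1, e)
y1 = rng.binomial(1, np.where(x == 1, 0.9, 0.3))     # potential outcome Y(1)

naive = y1[a == 1].mean()        # E[Y | A=1]: biased, over-represents x = 1
ipw = np.mean(y1 * a / e)        # E[Y I[A=1] / p(A|X)], cf. Equation 8
print(f"truth {y1.mean():.3f} | conditional mean {naive:.3f} | IPW {ipw:.3f}")
```

Note that the IPW average only ever touches factual outcomes (the product y1 * a zeroes out the untreated units); the weighting, not counterfactual access, is what removes the bias.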
While the simple finite-sample average $\widehat{ATE}_{AGG}$ is a biased estimator of ATE,

$$\widehat{ATE}_{AGG} = \frac{1}{N}\sum_i^n \left[\frac{\mathbb{I}[A_i=1]\,y_i(1)}{\eta(x_i)}\right] - \frac{1}{N}\sum_i^n \left[\frac{\mathbb{I}[A_i=0]\,y_i(0)}{1-\eta(x_i)}\right],$$

$\widehat{ATE}_{IPW}$ is an unbiased estimator of ATE,

$$\widehat{ATE}_{IPW} = \frac{1}{N}\sum_i^n \left[\frac{\mathbb{I}[A_i=1]\,y_i(1)}{e(x_i)}\right] - \frac{1}{N}\sum_i^n \left[\frac{\mathbb{I}[A_i=0]\,y_i(0)}{1-e(x_i)}\right],$$

where η(x) = N_{x1}/N and N_{x1} is the number of treated samples. Note that $\widehat{ATE}_{AGG}$ uses the naive estimator from §4.1 to estimate ATE. $\widehat{ATE}_{IPW}$ is known as *an oracle IPW estimator*, since the propensity score e(·) is known. The estimation error, defined as $\sqrt{N}|ATE - \widehat{ATE}_{IPW}|$, follows a Normal distribution with zero mean and variance

$$\text{VAR}_{IPW} = \text{Var}[ATE_{AGG}(X)] + \mathbb{E}_{p(X)}\left[\frac{\sigma^2(X)}{e(X)(1-e(X))} + \frac{c(X)^2}{e(X)(1-e(X))}\right], \tag{9}$$

where c(X) is a function that satisfies

$$\begin{aligned} Y(0) &= c(X) - (1 - e(X))\,ATE(X) + \epsilon(X) \\ Y(1) &= c(X) + e(X)\,ATE(X) + \epsilon(X), \end{aligned}$$

and ϵ(X) is Gaussian noise with variance σ²(X). $\widehat{ATE}_{AGG}$ can be seen as a version of the IPW estimator with an imperfect propensity score ê(x) = N_{x1}/N. VAR_IPW demonstrates that, even when using the true propensity score e(x), $\widehat{ATE}_{IPW}$ has a worse asymptotic variance than $\widehat{ATE}_{AGG}$, which can be thought of as inverse probability reweighting with an incorrect propensity score ê(x). This is an example of the bias-variance trade-off.

8 Additionally, propensity scores can be used for matching methods, where one compares two covariates with similar propensity values using the distance D_ij = |ê(x_i) − ê(x_j)| for i ≠ j, where ê(x_i) and ê(x_j) are the estimated propensity scores for the data points x_i and x_j, respectively. The propensity score is a popular method, as it summarizes the entire covariate into a single scalar, and has been shown to be effective in theory (Rosenbaum & Rubin, 1983; Rubin & Thomas, 1992; Rubin & Stuart, 2006; Zubizarreta, 2012; Diamond & Sekhon, 2013; Resa & Zubizarreta, 2016; Abadie & Imbens, 2016) and in practice (Jalan & Ravallion, 2001; Dehejia & Wahba, 2002; Monahan et al., 2011; Amusa, 2018).

A plethora of methods have since been proposed that are unbiased and exhibit lower variance than the oracle IPW above, by replacing the propensity score with other weighting schemes ê′. For example, one can reduce the variance of an IPW estimator by normalizing the weights (Hirano et al., 2003):

$$\widehat{ATE}_{SW} = \frac{\sum_i^n \mathbb{I}[A_i=1]\,y_i(1)\,w_1(x_i)}{\sum_i^n \mathbb{I}[A_i=1]\,w_1(x_i)} - \frac{\sum_i^n \mathbb{I}[A_i=0]\,y_i(0)\,w_0(x_i)}{\sum_i^n \mathbb{I}[A_i=0]\,w_0(x_i)},$$

where w_1(x_i) = 1/e(x_i) and w_0(x_i) = 1/(1 − e(x_i)). This leads to a lower variance in the estimate, and these weights are thus called *stabilized weights* (Robins et al., 2000). According to Hirano et al. (2003), $\widehat{ATE}_{SW}$ outperforms $\widehat{ATE}_{IPW}$ in terms of the asymptotic convergence rate. Lunceford & Davidian (2004) review various versions of the IPW estimator and suggest a way to take into account the uncertainty in estimating the propensity score using a closed-form sandwich estimator (M-estimator (Stefanski & Boos, 2002)).
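The following sketch contrasts the oracle IPW estimator with its self-normalized (stabilized-weight) variant on synthetic data with a known propensity score; both are consistent, but the normalized version is typically less variable when some propensities are extreme.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-1.5 * x))            # known (oracle) propensity score
a = rng.binomial(1, e)
y = rng.normal(1.0 * a + x)               # true ATE = 1.0

w1, w0 = a / e, (1 - a) / (1 - e)
ate_ipw = np.mean(w1 * y) - np.mean(w0 * y)                       # oracle IPW
ate_sw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()    # stabilized

print(f"IPW {ate_ipw:.3f} vs stabilized {ate_sw:.3f} (truth 1.0)")
```

Units with propensities near 0 or 1 receive huge weights; dividing by the realized weight total rather than N keeps those weights from dominating the estimate.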
## 4.4 Doubly Robust Methods

It is often hard to obtain the propensity score in advance, or to guarantee that our propensity estimate ê(x) is accurate. Furthermore, even with the oracle propensity score, we have just shown that IPW has a high variance. On the other hand, the naive estimator (without IPW) from §4.1 is unbiased only when the actions in the dataset were sampled independently of the associated covariates. In other words, it is often not enough to rely on either of these approaches alone to perform causal inference (Belloni et al., 2011). We can do better by combining IPW with the naive estimator µ_A(X), which gives a *doubly robust* estimator. In doubly robust estimation, a bias from one method is addressed by the other method, and vice versa (Robins et al., 1994; Lunceford & Davidian, 2004; Kang & Schafer, 2007; Chernozhukov et al., 2017). Such a method is both consistent and unbiased as long as at least one of the IPW and the naive estimator is consistent and unbiased. The doubly robust estimator for ATE in the case of two actions is

$$\begin{aligned} ATE_{DR} &= \mathbb{E}_{Y,X}\left[\mu_1(X) - \mu_0(X)\right] + \mathbb{E}_{Y,X,A}\left[A\frac{Y-\mu_1(X)}{e(X)} - (1-A)\frac{Y-\mu_0(X)}{1-e(X)}\right] \\ &= \mathbb{E}_{Y,X,A}\left[A\frac{Y-\mu_1(X)}{e(X)} + \mu_1(X)\right] - \mathbb{E}_{Y,X,A}\left[(1-A)\frac{Y-\mu_0(X)}{1-e(X)} + \mu_0(X)\right]. \end{aligned} \tag{10, 11}$$

If the estimated potential outcomes µ_A(X) are correct, we do not need to worry about propensity score estimation, since Y − µ̂_1(X) and Y − µ̂_0(X) will be zero in Equation 11; $\widehat{ATE}_{DR}$ then reduces to approximating E[µ̂_1(X) − µ̂_0(X)]. In contrast, it is acceptable for our estimated potential outcomes to be wrong if the propensity score estimate is consistent. If the propensity score is correctly estimated, then $\mathbb{E}_{Y,X,A}\left[AY/\hat{e}(X)\right]$ and $\mathbb{E}_{Y,X,A}\left[(1-A)Y/(1-\hat{e}(X))\right]$ will be weighted correctly as well, and we recover the IPW estimator (Robins et al., 1994; Robins & Rotnitzky, 1995; Scharfstein et al., 1999). Consequently, such a doubly robust method is consistent (Hahn, 1998; Heejung & James M., 2005; Shardell et al., 2014; Farrell, 2015). Re-arranging terms and expressing ATE_DR as a Monte Carlo estimate, we get

$$\widehat{ATE}_{DR} \approx \frac{1}{N}\sum_i^N \left[\frac{a_i y_i}{\hat{e}(x_i)} - \frac{a_i - \hat{e}(x_i)}{\hat{e}(x_i)}\hat{\mu}_1(x_i)\right] - \frac{1}{N}\sum_i^N \left[\frac{(1-a_i)y_i}{1-\hat{e}(x_i)} - \frac{\hat{e}(x_i) - a_i}{1-\hat{e}(x_i)}\hat{\mu}_0(x_i)\right]. \tag{12}$$

The empirical estimate converges to the true ATE, $\widehat{ATE}_{DR} \to ATE$, with an asymptotic variance of

$$\text{Var}[ATE_{DR}(X)] = \text{Var}[ATE_{AGG}(X)] + \mathbb{E}\left[\frac{\sigma_1^2(X)}{e(X)}\right] + \mathbb{E}\left[\frac{\sigma_0^2(X)}{1-e(X)}\right],$$

where σ_a²(X) = Var[Y_i(a)|X]. Despite its greater variance, the doubly robust method often exhibits greater efficiency and robustness to model misspecification. Multiple studies have shown that doubly robust methods for ATE estimation with missing data perform better than their non-doubly-robust baselines, and have theoretically been shown to converge faster to the true ATE than the individual methods (Robins et al., 1994; Mayer et al., 2020).
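A Monte Carlo version of Equation 12 can be exercised under the two failure modes that double robustness protects against. The data-generating process below is hypothetical; either a deliberately wrong outcome model with the true propensity, or correct outcome models with a wrong propensity, still recovers the true ATE.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300_000
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-x))                      # true propensity score
a = rng.binomial(1, e)
y = rng.normal(2.0 * a + x)                   # true ATE = 2.0

def aipw(mu1, mu0, e_hat):
    """Monte Carlo form of the doubly robust estimator (Equation 12)."""
    t1 = a * (y - mu1) / e_hat + mu1
    t0 = (1 - a) * (y - mu0) / (1 - e_hat) + mu0
    return np.mean(t1) - np.mean(t0)

mu1_true, mu0_true = 2.0 + x, x               # correct outcome models
bad_mu = np.zeros(n)                          # misspecified outcome model
bad_e = np.full(n, 0.5)                       # misspecified propensity

print(f"wrong mu, true e : {aipw(bad_mu, bad_mu, e):.3f}")           # ~2.0
print(f"true mu, wrong e : {aipw(mu1_true, mu0_true, bad_e):.3f}")   # ~2.0
```

Only when both nuisance models are wrong does the estimate break, which is the "doubly robust" guarantee stated above.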
A naive doubly robust estimator is asymptotically optimal among non-parametric estimators, meaning that the semiparametric variances are bounded and asymptotically convergent if either the propensity score or the estimator µ_A(X) is correct (Robins & Rotnitzky, 1995; Kang & Schafer, 2007). Subsequently, other estimators with bounded asymptotic variances have been proposed, even for cases in which IPW exhibits a high variance (Robins et al., 2007; Tan, 2010; Waernbaum & Pazzagli, 2017).

## 4.5 Causal Inference With Representation Learning

While matching, IPW, and doubly robust methods have their own merits for estimating potential outcomes, it may be necessary to utilize a powerful model that can express highly complex functions of high-dimensional data given a limited number of samples. For instance, working with non-linear, high-dimensional data such as X-ray images may require a non-linear parametric model that transforms the data from its original space into a better representation that facilitates CI. The goal is to extract a causal representation from high-dimensional data (covariate and action) and to predict potential outcomes more accurately from the extracted causal representation. In this section, we review methods that extend the previous CI approaches in this way via (deep) representation learning (Schölkopf et al., 2021; Wang & Jordan, 2021).

Representation learning involves automatically learning features, or representations, of raw data. Rather than relying on manual feature engineering, representation learning enables models to extract high-level features from raw, high-dimensional data such as images, audio, and text. Here, the potential outcome estimator u_A(X; ϕ) is a deep neural network with parameters ϕ and m hidden layers H = {h^{(i)}}_{i=1}^{m}. Each hidden representation at the i-th layer (i < k) is a function of all the previous layers, h_i(x) = f_i(h_{i−1}; ϕ) with h_0 = x. Each hidden representation at layer j ≥ k, h_j(x) = f_j(h_{j−1}, a; ϕ), depends also on the action. The model is trained to minimize the factual loss l_Factual, which is typically the negative log-likelihood of the dataset with inverse probability weighting. u_A(X; ϕ) learns to predict the potential outcome well, however, only on observed covariate-action pairs, due to the challenges of generalization. We thus need to add a regularization term to the loss function in order to encourage the model to generalize better to (unseen) counterfactual actions:

$$\mathcal{L} = \mathbb{E}_{Y,X,A}[w\,l_{\text{Factual}}(X, Y, A; u_A(X;\phi)) + \lambda\,\mathcal{R}(H)]. \tag{13}$$

The loss function is weighted by the inverse probability weight w = A/(2e(X)) + (1−A)/(2(1−e(X))) (see §4.3). The regularization often imposes certain properties on the hidden layers, which is why we write it as R(H), with regularization coefficient λ (Johansson et al., 2016; Uri et al., 2017; Yao et al., 2020; Wu & Fukumizu, 2021). Regularization reduces the hypothesis space in a way that encourages µ_A(X; ϕ) to capture a causal relationship, rather than a spurious relationship, between the action and the outcome.

Deep-representation-learning-based CI is a vast and fast-growing area (Nabi & Shpitser, 2017; Yoon et al., 2018; Veitch et al., 2020; Zhang et al., 2020; Wang & Jordan, 2021; Zhang et al., 2021). We characterize the majority of these approaches as minimizing a combination of a factual regression loss and a regularizer, as in Equation 13, and we discuss four representative ones in this section.
Among these methods, we separately discuss counterfactual smoothing regularization and deep latent variable models (see Figure 5), with a focus on how both of these approaches lead to better generalization of ATE.

## 4.5.1 Counterfactual Smoothing Regularization

The objective in counterfactual smoothing is to train a model that generalizes to counterfactual potential outcomes even though it only saw factual data during training (see Figure 5a). Although the model u_A(X; ϕ) is trained to estimate the potential outcome using the inverse-probability-weighted factual loss function, it may still underperform on unseen data or on data paired with an unseen action (see Figure 3). In such a case, the variance of the potential outcome estimate over counterfactual actions is high, implying that an individual model's prediction may be inaccurate.
Zeng et al. (2020) extend this approach to use a doubly robust estimator instead of IPW (see §4.4) and minimize the Jensen-Shannon divergence between the treated and control groups instead of an IPM. Hassanpour & Greiner (2019) view the ITE estimation problem from a domain adaptation perspective, where factual data is assumed to come from a source distribution and counterfactual data from a different target distribution. They use importance sampling to re-weight the loss for each factual data point so that it looks as if it were sampled from the target distribution instead of the source distribution. Instead of globally balancing the treated and control posterior distributions, Yao et al. (2018) propose a local similarity preserved individual treatment effect (SITE) estimation method based on deep representation learning. Their method simultaneously preserves local similarity and balances the factual and counterfactual distributions over the set of actions. Furthermore, there is a line of work that separately extracts the representations of confounders and non-confounders and re-weights the confounder representation only (Kuang et al., 2017; Wu et al., 2020).

Domain invariance, or equivalently full overlap between the factual and counterfactual distributions, can be an overly restrictive criterion, as it may remove information from input variables (covariates and sometimes the action) that is necessary for accurately estimating the treatment effect (Stojanov et al., 2021). Yao et al. (2020) demonstrate why it is not ideal to use a distributional divergence to balance the treated and control representations. Instead, they propose to minimize the counterfactual variance and to make the hidden representation invertible by adding a reconstruction loss, which, they claim, is enough to obtain sufficient overlap between the factual and counterfactual supports. Deep kernel learning for individual treatment effect (DKLITE) is a representative passive CI method that uses variance reduction (Yao et al., 2020). Unlike the methods above, this algorithm manipulates the action-dependent hidden representation. More formally, it uses kernel regression to estimate the potential outcome $\hat{y}_i = W_a h_{m-1}(x_i) + \epsilon_{i,a}$ on top of the final layer $h_{m-1}(x)$ of the deep neural network, where $W_a$ is the parameter for action a and $\epsilon_{i,a}$ is an action-dependent noise variable. The posterior distribution is $\mathcal{N}(m_a h_{m-1}(x), \sigma^2(x;\mathcal{X},\Theta_a))$, where $\sigma^2(x;\mathcal{X},\Theta_a)=h_{m-1}^{T}K_a^{-1}h_{m-1}$ is the variance. The DKLITE objective minimizes both the negative log-likelihood and the posterior variance given the counterfactual actions. Although DKLITE uses kernel regression to derive the posterior distribution, kernel regression is not essential. The objective function for training a causal inference model with posterior variance reduction can be written as

$$\mathcal{L}_{VR}=w\mathcal{L}_{\mathrm{Factual}}(X,Y,A;u_{A}(X;\phi))+\lambda\mathbb{E}_{p(X,A)}[g(\mathrm{VAR}[h_{\theta}(X,1-A)])],\tag{15}$$

where $\mathrm{VAR}[h(X,1-A)]$ is the posterior variance given a data point X = x and a counterfactual action. The function $g:\mathbb{R}^{D}\to\mathbb{R}$ aggregates the covariance of the high dimensional representation into a single scalar; for example, g(·) can be the sum of the element-wise variances of the hidden representation. By reducing the variance of the representations h(X, 1 − A), we also reduce the variance of the ATE.
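As a rough illustration of Equation 15, the posterior variance can be approximated by Monte Carlo when the model's forward pass is stochastic (e.g., dropout kept active at prediction time). This sketch, with g chosen as the sum of element-wise variances, is an assumption-laden stand-in for DKLITE's closed-form kernel posterior:

```python
import torch

def counterfactual_variance_penalty(model, x, a, n_samples=10):
    # E_{p(X,A)}[ g(VAR[h(X, 1 - A)]) ] with g = sum of element-wise variances,
    # estimated from n_samples stochastic forward passes on counterfactual actions.
    preds = torch.stack([model(x, 1 - a) for _ in range(n_samples)])  # (S, N, D)
    elementwise_var = preds.var(dim=0)         # variance over samples, shape (N, D)
    return elementwise_var.sum(dim=-1).mean()  # g(.) sums over D, then average over data
```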
## 4.5.2 Deep Latent Variable CI Models

In deep latent variable models for causal inference, we assume a particular data-generating process described by a probabilistic graphical model that contains latent (or hidden) stochastic variables (Louizos et al., 2017a; Rakesh et al., 2018; Vowels et al., 2020; Wu & Fukumizu, 2021; jiwoong Im et al., 2021; Kumor et al., 2021; Lu et al., 2022). The inclusion of such latent variables enables us to model the potential outcome with a much richer distribution. Without latent variables, it is challenging to build a function approximator, such as a deep neural network, that captures a multimodal distribution of the potential outcome, regardless of how complex a form such a function takes.

Causal effects are, however, not identifiable in general when there are latent variables. Identifiability requires that the model parameters can be uniquely estimated from the data. Since the latent variables do not directly measure the unobserved variable but are inferred from the observed variables, they introduce ambiguity that violates the assumption of unconfoundedness. In order to overcome this issue, one has to make two assumptions: (1) X is a proxy variable, that is, a noisy version of a hidden confounder, and (2) this unknown confounder can be modelled by the latent variables (see Figure 5b). While these extra assumptions are a major drawback of deep latent variable-based CI (Louizos et al., 2017b; Kocaoglu et al., 2017), we nevertheless review this literature as these methods are increasingly widely used in practice (Pearl et al., 2016; Wu & Fukumizu, 2021; Trifunov et al., 2020; Kumor et al., 2021; Rissanen & Marttinen, 2021).

With these assumptions, we can consider the latent variable Z a hidden, i.e. unobserved, confounder, to which we have access via its noisy realization X. We can recover the joint distribution p(Z, X, Y, A) from observational data (X, Y, A). We can then compute p(Y|X = x, do(A = 1)) and p(Y|X = x, do(A = 0)), which allows us to compute the ITE, by

$$p(Y|X=x,do(A=a))=\int_{z}p(Y|X=x,do(A=a),Z)p(Z|X=x,do(A=a))dZ$$
$$=\int_{z}p(Y|X=x,A=a,Z)p(Z|X=x)dZ.$$

The second equality follows from the rules of do-calculus. Therefore, we can estimate p(Y|X = x, do(A = a)) as long as we can approximate p(Y|X = x, A = a, Z) and p(Z|X = x); under the causal graph in Figure 5b, Y is independent of X given (A, Z), so the former reduces to p(Y|A = a, Z).

The causal effect variational autoencoder (CEVAE) is a particular type of variational inference framework that allows us to estimate p(Y|A, Z) and p(Z|X) using deep neural networks (Kingma & Welling, 2014; Louizos et al., 2017a). With this VAE, the true posterior distribution is defined as $p_{\theta}(Z|X,Y,A)\propto p_{\theta}(Y,A|Z)p_{\theta}(X|Z)p(Z)$, where both $p_{\theta}(Y,A|Z)$ and $p_{\theta}(X|Z)$ are modelled using a deep neural network parametrized by θ, and p(Z) is a prior distribution that is not parameterized by θ. We approximate the posterior distribution with a variational posterior $q_{\phi}(Z|X,Y,A)$, which is modelled by a deep neural network with variational parameters ϕ. We infer the hidden confounder Z from the observation (X, Y, A) using this variational posterior network. We estimate p(Y|A, Z) and p(Z|X) directly by training both the generative and inference networks on observational data. Training is done by maximizing the following variational lower bound with respect to the parameters θ and ϕ:

$$\mathcal{L}_{\mathrm{CEVAE}}=\mathbb{E}_{p_{\mathrm{data}}(X,Y,A)}\big[\mathbb{E}_{q_{\phi}(Z|X,Y,A)}[\log p_{\theta}(X,Y,A|Z)]-\mathrm{KL}\left[q_{\phi}(Z|X,Y,A)\,\|\,p(Z)\right]\big].$$
The first term is the reconstruction of the observable variables from the inferred confounder Z, and the second term is a regularizer which forces the approximate posterior to be close to the prior and maximizes the entropy of the posterior distribution over the confounder Z. We jointly update both generative and inference network parameters using backpropagation and the re-parameterization trick (Rezende & Mohamed, 2014; Kingma & Welling, 2014).

Similar to CEVAE, the linked causal variational autoencoder (LCVA) treats the latent attributes directly as confounders, with the assumption that these confounders affect both the treatment and the outcome of units (Rakesh et al., 2018). The main difference is that the authors want to measure the causal effect when there exists *a spillover effect*⁹ between pairs of two covariates through the confounders. Another variant is Causal Effect estimation using the Variational Information Bottleneck (CEVIB) (Lu et al., 2022). Just like any other variational latent model, it fits the model to observational data and learns the confounders that affect treatments and outcomes using the variational information bottleneck (Alemi et al., 2016). CEVIB does this in a way that allows the model to forget latent variables that are not confounders and to extract only the confounding information from the covariate. Deep entire space cross networks for individual treatment effect estimation (DESCN) attempt to learn the latent confounders ("the hidden treatment effect") through a cross-network in a multi-task learning manner (Zhong et al., 2022). It reduces treatment bias, which favours one treatment over another, by learning from multiple tasks and overcoming sample imbalance.

In CEVAE, the conditional distribution pθ(X, Y, A|Z) is learned from data sampled from p(A|Z), p(X|Z) and p(Y|A, Z), where Z ∼ p(Z). This conditional distribution, which is used for computing the treatment effect, is however applied at inference time with an action A sampled from p(A) rather than p(A|Z). This discrepancy is known as covariate shift, or more generally distribution shift, and is detrimental to generalization in deep learning (Shimodaira, 2000; Ioffe & Szegedy, 2015; Jeong & Namkoong, 2020; Louizos et al., 2017b), which in turn degrades the quality of the ATE estimate. Replacing the observational distribution with a uniform treatment distribution, which is independent of the covariate, provides randomized treatment samples for training a CEVAE. A uniform treatment selection process decouples Z and A, thereby making A independent of the covariate X, i.e. p(A|X) = p(A). This is similar to a randomized clinical trial over treatment A in Section 3.1. For this reason, it may be beneficial to train a CEVAE using a uniform treatment distribution. Here, the observational distribution is p(X, Y, A) = p(A|X)p(X)p(Y|X, A) and the corresponding uniform treatment distribution is r(X, Y, A) = r(A)p(X)p(Y|X, A).

⁹A spillover happens when something in one situation affects something else in a different situation, even though they may not seem related.
jiwoong Im et al. (2021) use importance weighting to write the variational lower bound objective under the uniform treatment distribution:

$${\mathcal{L}}_{\mathrm{UTVAE}}=\mathbb{E}_{p(X,Y,A)}\left[w(X,A)\mathbb{E}_{q_{\phi}(Z|X,Y,A)}\left[\log{\frac{p_{\theta}(X,Y,A|Z)p(Z)}{q_{\phi}(Z|X,Y,A)}}\right]\right],$$

where $w(X,A)=\frac{r(A|X)}{p(A|X)}=\frac{1}{2p(A|X)}$ is the importance weight. Note that $\frac{r(A|X)}{p(A|X)}=\frac{r(X,Y,A)}{p(X,Y,A)}$ and $r(A|X)=r(A)=\frac{1}{2}$, because of the independence between X and A in the causal graph and the uniformly distributed treatment selection procedure. UTVAE generalizes better than CEVAE, especially when there is a distribution shift between training and inference. See jiwoong Im et al. (2021) for the details of the training procedure of CEVAE and UTVAE for the generative and inference networks, respectively.
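In implementation terms, the only difference between $\mathcal{L}_{\mathrm{CEVAE}}$ and $\mathcal{L}_{\mathrm{UTVAE}}$ is the per-example weight applied to the evidence lower bound. A minimal sketch, where `elbo` is assumed to hold the per-example bound $\mathbb{E}_{q_{\phi}}[\log p_{\theta}(X,Y,A|Z)p(Z)/q_{\phi}(Z|X,Y,A)]$ and `propensity` an estimate of p(A = 1|X), both computed elsewhere:

```python
import torch

def weighted_elbo(elbo, a, propensity, uniform_treatment=True):
    # w = 1 recovers L_CEVAE; w = r(A|X)/p(A|X) = 1/(2 p(A|X)) gives L_UTVAE.
    if uniform_treatment:
        p_a_given_x = a * propensity + (1 - a) * (1 - propensity)  # p(A = a | X)
        w = 1.0 / (2.0 * p_a_given_x)
    else:
        w = torch.ones_like(elbo)
    # Weights are treated as constants; the returned quantity is maximized.
    return (w.detach() * elbo).mean()
```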
## 4.5.3 Combining Active And Passive Methods

So far, we have discussed active and passive CI learning separately. It is, however, often necessary to combine active and passive approaches in order to mitigate the bias arising from non-randomized data. Here, we review some of the methods that combine the two. Sawant et al. (2018) use a bandit method to collect data online and estimate the ATE offline using a potential outcome model. At each time step, they sample data using a bandit algorithm such as Thompson sampling and aggregate a dataset (see Algorithm 5). Using this dataset, they update the model and re-estimate the potential outcome (see Algorithm 6). This approach combines active and passive learning, since the model is updated in a batch training setting while the dataset can be collected asynchronously. Similarly, Ye et al. (2020) propose a framework that combines the two methods. They use inverse probability weighting and matching algorithms for passive learning, and UCB and LinUCB (Slivkins, 2019) as active CI algorithms. At inference time, Ye et al. (2020) estimate the potential outcomes using a passive method given a new unseen example X = x_t for every action. If the new data point (X = x_t, A = a_t) with an action a_t is not in the non-randomized dataset, then the algorithm resorts to exploring the action for X = x_t and adds the new data point (X = x_t, A = a_t, Y = y_t) to the non-randomized dataset.

Algorithm 5 Online Scoring and Batch Training
Iteration t = 0; log L = {}; contextual distribution parameters θ0 and θ1.
for i = 1, 2, · · · , T do
  for t = 1, 2, · · · , T do
    Sample data x_t
    Predict Ŷ(1) = f(x_t, θ1)
    Predict Ŷ(0) = f(x_t, θ0)
    Choose action a_t = argmax_{a∈{0,1}} Ŷ(a)
    Compute p_t = p(a_t|x_t)
    L = L ∪ (x_t, a_t, p_t)
  end for
  Update θ0 and θ1 using Algorithm 6
end for

Algorithm 6 Offline Batch Training
Dataset D = {}.
for i = 1, 2, · · · , T do
  Sample (x_i, a_i, p_i, y_i(a_i)) from L
  ŷ_i(a_i) = y_i(a_i)/p_i (bias correction)
  ŷ_i(¬a_i) = 0
  for m = 1, 2, · · · , M do
    Sample (x_m, ¬a_i, p_m, y_m(¬a_i)) from L
    ŷ_i(¬a_i) ← ŷ_i(¬a_i) + (1/M) y_m(¬a_i)/p_m
  end for
  CATE_i = ŷ_i(a_i) − ŷ_i(¬a_i)
  D = D ∪ (x_i, a_i, CATE_i)
end for
Update θ0 and θ1 by maximizing the likelihood on D.

## 5 Conclusion

The objective of this paper is to introduce various algorithms and frameworks in causal inference by categorizing them into passive and active algorithms. We have particularly focused on estimating the average treatment effect (ATE), after outlining the standard assumptions necessary for the identification of causal effects: positivity, ignorability, conditional exchangeability, consistency and the Markov assumption. We first present the randomized controlled trial (RCT) as a representative example of an active causal inference algorithm. We then delve into bandit approaches that aim to balance the outcome itself and the quality of estimating the treatment effect. We explore different contextual bandit algorithms by considering various causal graph scenarios, such as taking account of non-confounding variables or dealing with unknown confounding variables. We then move on to discussing passive CI methods, including matching, inverse probability weighting and doubly robust methods, and we also touch upon deep learning-based approaches. A majority of these studies focus on converting a causal estimand into a statistical estimand, and subsequently, estimating the statistical estimand in order to obtain the causal estimate. These classical methods unfortunately do not perform well on high dimensional data. In order to overcome this challenge, deep learning has been proposed as a way to learn a compact representation suitable for estimating the ATE. We thus discussed several deep learning-based approaches that learn to infer causal effects by automatically extracting representations of hidden or unknown confounders in problems with high dimensional data.

After reading the main part of this paper, readers should notice a clear resemblance between offline policy evaluation and passive causal inference methods. This resemblance is due to the similarity between estimating the reward in policy evaluation and estimating the potential outcomes in causal inference (Swaminathan & Joachims, 2015; Li, 2015). For example, some of the methods discussed in Section 4.5.3 have been used for offline policy evaluation in contextual bandit algorithms (Li et al., 2012; Sawant et al., 2018). Although we do not explore this connection further here, it is an important avenue for pursuing both causal inference and reinforcement learning.

This review of causal inference is limited in two ways. First, we do not consider collider bias, and second, we assume a stationary conditional probability distribution. We discuss these two limitations briefly before ending the paper. Collider bias happens when there is an extra variable, caused by both the action and outcome variables, that is observed to be (conditioned on) a particular value. For example, suppose we investigate the effect of a new versus an old medication on patient outcomes, and we gather data from a hospital. We divide the patients into two groups: those who received the new medication and those who received the old one. The study finds that the patients who received the old medication had better outcomes than those who received the new one, and the hospital concludes that the old medication is more effective.
It turns out this conclusion suffers from collider bias. The decision of which medication to give is based on the patient's recovery rate, which encourages doctors to prescribe the more potent drug: patients who are doing poorly are more likely to receive the old, stronger medication. The patient's recovery rate thus becomes a collider variable, because it is affected by both the medication decision and the patient's outcome, and conditioning on it induces a spurious association that is distinct from confounder bias.

Besides collider bias, we have assumed that all conditional distributions are stationary (i.e., they do not change over the course of causal inference and data collection). The question is what happens if these conditional distributions shift over time. One can introduce temporal dependencies to a causal graph to describe such shifts. Still, it is challenging to work with such a graph because data points collected over time are correlated with each other. This violates the no-interference/Markov assumption we discussed earlier in Section 2.3. To address this problem, various time series methods have been developed that take into account the temporal dependence of the data. For instance, deep sequential weighting (DSW) and the sequential causal effect variational autoencoder (SEVAE) estimate the ITE with time-varying confounders (Trifunov et al., 2022; Liu et al., 2020; Kumor et al., 2021).

## References

Alberto Abadie and Guido W. Imbens. Matching on the estimated propensity score. *Econometrica*, 84(2):781–807, 2016.

Banerjee Abhijit V. and Duflo Esther. Poor economics: A radical rethinking of the way to fight global poverty. *Public Affairs*, 2012.

Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In *Proceedings of the 30th International Conference on Machine Learning - Volume 28*, ICML'13, pp. III–1220–III–1228. JMLR.org, 2013.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. 2016.

Lateef Amusa. Reducing bias in observational studies: An empirical comparison of propensity score matching methods. *Turkiye Klinikleri Journal of Biostatistics*, 10:13–26, 01 2018. doi: 10.5336/biostatic.2017-58633.

Susan Athey, Guido Imbens, and Vikas Ramachandra. Machine learning methods for estimating heterogeneous causal effects. 04 2015.

Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. *Machine Learning*, 47(2-3):235–256, 2002.

Elias Bareinboim and Judea Pearl. Causal inference and the data-fusion problem. *Proceedings of the National Academy of Sciences*, 113(27):7345–7352, 2016. doi: 10.1073/pnas.1510507113. URL https://www.pnas.org/doi/abs/10.1073/pnas.1510507113.

Elias Bareinboim, Andrew Forney, and Judea Pearl. Bandits with unobserved confounders: A causal approach. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/795c7a7a5ec6b460ec00c5841019b9e9-Paper.pdf.

Alexandre Belloni, Victor Chernozhukov, and Christian Hansen. Inference on treatment effects after selection amongst high-dimensional controls. *Operations Research eJournal*, 2011.

Andrea Bender. What is causal cognition? *Frontiers in Psychology*, 11, 01 2020. doi: 10.3389/fpsyg.2020.00003.

Donald A Berry and Bert Fristedt.
Bandit problems: sequential allocation of experiments (monographs on statistics and applied probability). *London: Chapman and Hall*, 5(71-87):7–7, 1985.

Djallel Bouneffouf, Irina Rish, and Charu Aggarwal. Survey on applications of multi-armed and contextual bandits. In *2020 IEEE Congress on Evolutionary Computation (CEC)*, pp. 1–8, 2020. doi: 10.1109/CEC48606.2020.9185782.

Norman E. Breslow, Thomas Lumley, Christie M. Ballantyne, Lloyd E. Chambless, and Michal Kulich. Using the whole cohort in the analysis of case-cohort data. *American journal of epidemiology*, 169:1398–1405, 2009.

Bruce W. Carlson. Simpson's paradox. *Encyclopedia Britannica*, 2019. URL https://www.britannica.com/topic/Simpsons-paradox.

Daniel C. Castro, Ian Walker, and Ben Glocker. Causality matters in medical imaging. *Nature Communication*, 3673, 2020. doi: 10.1038/s41467-020-17478-w.

T. C. Chalmers, H. Smith, Jr., B. Blackburn, B. Silverman, B. Schroeder, D. Reitman, and A. Ambroz. A method for assessing the quality of a randomized control trial. *Controlled clinical trials*, 2:31–49, 1981.

Thomas C. Chalmers, Paul Hewett, Dinah Reitman, and Henry S Sacks. Selection and evaluation of empirical research in technology assessment. *International Journal of Technology Assessment in Health Care*, 5:521 - 536, 1989.

Olivier Chapelle, Eren Manavoglu, and Romer Rosales. Simple and scalable response prediction for display advertising. *ACM Transactions on Intelligent Systems and Technology*, 5(4), 2015.

Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James M. Robins. Double/debiased machine learning for treatment and structural parameters. *Econometrics: Econometric & Statistical Methods - Special Topics eJournal*, 2017.

Niteesh K. Choudhry. Randomized, controlled trials in health insurance systems. *The New England Journal of Medicine*, 377:957–964, 2017.

William G. Cochran and Donald B. Rubin. Controlling bias in observational studies: A review. *The Indian Journal of Statistics, Series A*, 35:417–446, 1973. URL http://www.jstor.org/stable/25049893.

John Concato, Nirav Shah, and Ralph I. Horwitz. Randomized, controlled trials, observational studies, and the hierarchy of research designs. *New England Journal of Medicine*, 342(25):1887–1892, 2000.

Marco Cuturi and A. Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, 2013.

Danilo Jimenez Rezende and Shakir Mohamed. Stochastic backpropagation and approximate inference in deep generative models. In *arXiv preprint arXiv:1401.4082*, 2014.

Angus Deaton and Nancy Cartwright. Understanding and misunderstanding randomized controlled trials. *Behavioral & Experimental Economics eJournal*, 2016.

Rajeev Dehejia and Sadek Wahba. Propensity score matching methods for non-experimental causal studies. *The Review of Economics and Statistics*, 84:151–161, 02 2002. doi: 10.1162/003465302317331982.

Rajeev H. Dehejia and Sadek Wahba. Causal effects in nonexperimental studies: Reevaluating the evaluation of training programs. *Journal of the American Statistical Association*, 94(448):1053–1062, 1999. doi: 10.1080/01621459.1999.10473858. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1999.10473858.

Alexis Diamond and Jasjeet S. Sekhon. Genetic Matching for Estimating Causal Effects: A General Multivariate Matching Method for Achieving Balance in Observational Studies. *The Review of Economics and Statistics*, 95(3):932–945, 2013.
Maria Dimakopoulou, Susan Athey, and Guido Imbens. Estimation considerations in contextual bandits. 11 2017. Max H. Farrell. Robust inference on average treatment effects with possibly more covariates than observations. *Journal of Econometrics*, 189(1):1–23, 2015. ISSN 0304-4076. doi: https://doi.org/10.1016/j.jeconom.2015.06.017. URL https://www.sciencedirect.com/science/article/pii/S0304407615001864. Paul J. Ferraro, James N. Sanchirico, and Martin D. Smith. Causal inference in coupled human and natural systems. *Proceedings of the National Academy of Sciences*, 116(12):5311–5318, 2019. doi: 10.1073/pnas.1805563115. URL https://www.pnas.org/doi/abs/10.1073/pnas.1805563115. Thomas R. Frieden. Evidence for health decision making - beyond randomized, controlled trials: The changing face of clinical trials. *The New England Journal of Medicine*, 377:465–475, 2017. Raphaël Féraud, Robin Allesiardo, Tanguy Urvoy, and Fabrice Clérot. Random forest for the contextual bandit problem. In Arthur Gretton and Christian C. Robert (eds.), *Proceedings of the 19th International Conference on Artificial Intelligence and Statistics*, volume 51 of Proceedings of Machine Learning Research, pp. 93–101, Cadiz, Spain, 09–11 May 2016. PMLR. Fernando Martel García and Léonard Wantchekon. Theory, external validity, and experimental inference: Some conjectures. *The ANNALS of the American Academy of Political and Social Science*, 628:132 - 147, 2010. Brett R. Gordon, Florian Zettelmeyer, Neha Bhargava, and Dan Chapsky. A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook. *Marketing Science*, 38(2): 193–225, 2019. Thore Graepel, Joaquin Quiñonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale bayesian click-through rate prediction for sponsored search advertising in microsoft's bing search engine. In Proceedings of the 27th International Conference on Machine Learning ICML 2010, Invited Applications Track (unreviewed, to appear), June 2010. Invited Applications Track. L. W. Green and Russell E. Glasgow. Evaluating the relevance, generalization, and applicability of research. Evaluation & the Health Professions, 29:126 - 153, 2006. Kristjan Greenewald, Ambuj Tewari, Susan Murphy, and Predag Klasnja. Action centered contextual bandits. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/4fa177df22864518b2d7818d4db5db2d-Paper.pdf. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13(25):723–773, 2012. URL http://jmlr.org/papers/v13/gretton12a.html. Xing Gu and Paul R. Rosenbaum. Comparison of multivariate matching methods: Structures, distances, and algorithms. *Journal of Computational and Graphical Statistics*, 2:405–420, 1993. Jinyong Hahn. On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects. *Econometrica*, 66(2):315–332, March 1998. Edward L Hannan. Randomized clinical trials and observational studies: guidelines for assessing respective strengths and limitations. *JACC. Cardiovascular interventions*, 1:211–217, 2008. Ben Hansen. Full matching in an observational study of coaching for the sat. Journal of the American Statistical Association, 99:609–618, 02 2004. doi: 10.1198/016214504000000647. 
Negar Hassanpour and Russell Greiner. Counterfactual regression with importance sampling weights. In International Joint Conference on Artificial Intelligence, 2019. James Heckman, Hidehiko Ichimura, and Petra Todd. Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. *Review of Economic Studies*, 64:605–54, 02 1997. doi: 10.2307/2971733. James J. Heckman and Richard Robb. Alternative methods for evaluating the impact of interventions: An overview. *Journal of Econometrics*, 30(1):239–267, 1985. ISSN 0304-4076. doi: https://doi.org/10.1016/0304-4076(85)90139-3. URL https://www.sciencedirect.com/science/article/pii/0304407685901393. Bang Heejung and Robins James M. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 07 2005. doi: 10.1111/j.1541-0420.2005.00377.x. Miguel A Hernán and James M Robins. Estimating causal effects from epidemiological data. Journal of epidemiology and community health, 60:578–586, 2006. Miguel A. Hernán, John Hsu, and Brian Healy. A second chance to get causal inference right: A classification of data science tasks. *CHANCE*, 32(1):42–49, 2019. doi: 10.1080/09332480.2019.1579578. URL https://doi.org/10.1080/09332480.2019.1579578. Keisuke Hirano, Guido Imbens, and Geert Ridder. Efficient estimation of average treatment effects using the estimated propensity score. *Econometrica*, 71:1161–1189, 02 2003. doi: 10.1111/1468-0262.00442. Paul W. Holland. Statistics and causal inference. *Journal of the American Statistical Association*, 81(396):945–960, 1986. doi: 10.1080/01621459.1986.10478354. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1986.10478354. Stefano M. Iacus, Gary King, and Giuseppe Porro. Causal inference without balance checking: Coarsened exact matching. *Political Analysis*, 20(1):1–24, 2012. doi: 10.1093/pan/mpr013. Guido W Imbens. Nonparametric estimation of average treatment effects under exogeneity: A review. *Review* of Economics and statistics, 86(1):4–29, 2004. Guido W. Imbens and Donald B. Rubin. Causal inference for statistics, social, and biomedical sciences. Cambridge University Press, 2015. Jyotsna Jalan and Martin Ravallion. Estimating the benefit incidence of an antipoverty program by propensity-score matching. *Journal of Business and Economic Statistics*, 21, 12 2001. doi: 10.1198/073500102288618720. Dan Jane-wit, Ralph I Horwitz, and John Concato. Variation in results from randomized, controlled trials: stochastic or systematic? *Journal of clinical epidemiology*, 63 1:56–63, 2010. Sookyo Jeong and Hongseok Namkoong. Robust causal inference under covariate shift via worst-case subpopulation treatment effects. *ArXiv*, abs/2007.02411, 2020. Daniel jiwoong Im, Kyunghyun Cho, and Narges Razavian. Causal effect variational autoencoder with uniform treatment for overcoming covariate shifts. In *arXiv preprint arXiv:2111.08656*, 2021. Fredrik D. Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In *arXiv preprint arXiv:1605.03661*, 2016. Fredrik D. Johansson, Nathan Kallus, Uri Shalit, and David A. Sontag. Learning weighted representations for generalization across designs. *arXiv: Machine Learning*, 2018. Joseph Kang and Joseph L. Schafer. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. *Statistical Science*, 22:523–539, 2007. Richard Karp. Reducibility among combinatorial problems. volume 40, pp. 85–103, 01 1972. 
ISBN 978-3540-68274-5. doi: 10.1007/978-3-540-68279-0-8.

J M Kendall. Designing a research project: randomised controlled trials and their principles. *Emergency Medicine Journal*, 20(2):164–168, 2003. ISSN 1472-0205. doi: 10.1136/emj.20.2.164. URL https://emj.bmj.com/content/20/2/164.

Celeste Kidd and Benjamin Y. Hayden. The psychology and neuroscience of curiosity. *Neuron*, 88(3):449–460, November 2015. ISSN 0896-6273. doi: 10.1016/j.neuron.2015.09.010.

Yongnam Kim and Peter Steiner. Quasi-experimental designs for causal inference. *Educ Psychol*, 51:395–405, 2016. doi: 10.1080/00461520.2016.1207177.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *arXiv preprint arXiv:1312.6114*, 2014.

Murat Kocaoglu, Karthikeyan Shanmugam, and Elias Bareinboim. Experimental design for learning causal graphs with latent variables. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/291d43c696d8c3704cdbe0a72ade5f6c-Paper.pdf.

Akshay Krishnamurthy, Zhiwei Steven Wu, and Vasilis Syrgkanis. Semiparametric contextual bandits. arXiv, 2018. doi: 10.48550/ARXIV.1803.04204. URL https://arxiv.org/abs/1803.04204.

Kun Kuang, Peng Cui, B. Li, Meng Jiang, Shiqiang Yang, and Fei Wang. Treatment effect estimation with data-driven variable decomposition. In *AAAI Conference on Artificial Intelligence*, 2017.

Daniel Kumor, Junzhe Zhang, and Elias Bareinboim. Sequential causal imitation learning with unobserved confounders. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 14669–14680. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/7b670d553471ad0fd7491c75bad587ff-Paper.pdf.

Robert J. LaLonde. Evaluating the econometric evaluations of training programs with experimental data. *The American economic review*, pp. 604–620, 1986.

John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), *Advances in Neural Information Processing Systems*, volume 20. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/file/4b04a686b0ad13dce35fa99fa4161c65-Paper.pdf.

Finnian Lattimore, Tor Lattimore, and Mark D. Reid. Causal bandits: Learning good interventions via causal inference, 2016. URL https://arxiv.org/abs/1606.03203.

Lihong Li. Offline evaluation and optimization for interactive systems. In Proceedings of the 8th ACM International Conference on Web Search and Data Mining. ACM - Association for Computing Machinery, February 2015. URL https://www.microsoft.com/en-us/research/publication/offline-evaluation-and-optimization-for-interactive-systems/.

Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, pp. 661–670, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605587998. doi: 10.1145/1772690.1772758.
URL https://doi.org/10.1145/1772690.1772758. Lihong Li, Wei Chu, John Langford, Taesup Moon, and Xuanhui Wang. An unbiased offline evaluation of contextual bandit algorithms with generalized linear models. In Dorota Glowacka, Louis Dorard, and John Shawe-Taylor (eds.), *Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2*, volume 26 of *Proceedings of Machine Learning Research*, pp. 19–36, Bellevue, Washington, USA, 02 Jul 2012. PMLR. Ruoqi Liu, Changchang Yin, and Ping Zhang. Estimating individual treatment effects with time-varying confounders, 2020. Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, and Max Welling. Causal effect inference with deep latent-variable models. In *arXiv preprint arXiv:1705.08821*, 2017a. Christos Louizos, Uri Shalit, Joris M. Mooij, David A. Sontag, Richard S. Zemel, and Max Welling. Causal effect inference with deep latent-variable models. In *NIPS*, 2017b. Bo Lu and Paul Rosenbaum. Optimal pair matching with two control groups. Journal of Computational and Graphical Statistics - J COMPUT GRAPH STAT, 13:422–434, 06 2004a. doi: 10.1198/1061860043470. Bo Lu and Paul R. Rosenbaum. Optimal pair matching with two control groups. *Journal of Computational* and Graphical Statistics, 13:422 - 434, 2004b. Zhenyu Lu, Yurong Cheng, Mingjun Zhong, George Stoian, Ye Yuan, and Guoren Wang. *Causal Effect* Estimation Using Variational Information Bottleneck, pp. 288–296. 12 2022. ISBN 978-3-031-20308-4. doi: 10.1007/978-3-031-20309-1-25. Jared K Lunceford and Marie Davidian. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. *Statistics in Medicine*, 23, 2004. Imke Mayer, Erik Sverdrup, Tobias Gauss, Jean-Denis Moyer, Stefan Wager, and Julie Josse. Doubly robust treatment effect estimation with missing attributes. *The Annals of Applied Statistics*, 14(3):1409 - 1431, 2020. doi: 10.1214/20-AOAS1356. URL https://doi.org/10.1214/20-AOAS1356. Kathryn Monahan, Joanna Williams, and Laurence Steinberg. Revisiting the impact of part-time work on adolescent adjustment: Distinguishing between selection and socialization using propensity score matching. Child development, 82:96–112, 01 2011. doi: 10.1111/j.1467-8624.2010.01543.x. Edson Duarte Moreira and Ezra S Susser. Guidelines on how to assess the validity of results presented in subgroup analysis of clinical trials. *Revista do Hospital das Clinicas*, 57 2:83–8, 2002. Rashelle J. Musci and Elizabeth A. Stuart. Ensuring causal, not casual, inference. *Prevention Science*, 20: 452–456, 2019. Razieh Nabi and Ilya Shpitser. Semi-parametric causal sufficient dimension reduction of high dimensional treatments. *arXiv: Methodology*, 2017. Susan Armijo Olivo, Luciana Gazzi Macedo, Inae Caroline Gadotti, Jorge Fuentes, Tasha R. Stanton, and D J Magee. Scales to assess the quality of randomized controlled trials: A systematic review. *Physical* Therapy, 88:156 - 175, 2008. Aronow P. M., Robins James M., Saarinen Theo, Sävje Fredrik, and Sekhon Jasjeet. Nonparametric identification is not enough, but randomized controlled trials are. In *arXiv preprint arXiv:2108.11342*, 2021. Judea Pearl. *Causality: Models, Reasoning and Inference*. Cambridge University Press, 2nd edition, 2009. Judea Pearl. An introduction to causal inference. *The international journal of biostatistics*, 6:1557–4679, 2010. Judea Pearl. Detecting latent heterogeneity. In *Sociological Methods & Research*, 2015. 
Judea Pearl, M Maria Glymour, and Nicholas P Jewell. Causal inference in statistics: A primer. 2016. Yi Peng, Miao Xie, Jiahao Liu, Xuying Meng, Nan Li, Cheng Yang, Tao Yao, and Rong Jin. A practical semi-parametric contextual bandit. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI'19, pp. 3246–3252. AAAI Press, 2019. ISBN 9780999241141. Aahlad Puli, Nitish Joshi, He He, and Rajesh Ranganath. Nuisances via negativa: Adjusting for spurious correlations via data augmentation, 2022. URL https://arxiv.org/abs/2210.01302. Nicholas Radcliffe. Using control groups to target on predicted lift: Building and assessing uplift model. 2007. Vineeth Rakesh, Ruocheng Guo, Raha Moraffah, Nitin Agarwal, and Huan Liu. Linked causal variational autoencoder for inferring paired spillover effects. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, pp. 1679–1682, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450360142. María Resa and José Zubizarreta. Evaluation of subset matching methods and forms of covariate balance. Statistics in medicine, 35, 07 2016. doi: 10.1002/sim.7036. Severi Rissanen and Pekka Marttinen. A critical look at the consistency of causal estimation with deep latent variable models. 2021. James Robins and A G Rotnitzky. Semiparametric efficiency in multivariate regression models with missing data. *Journal of The American Statistical Association - J AMER STATIST ASSN*, 90:122–129, 03 1995. doi: 10.1080/01621459.1995.10476494. James M. Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. *Journal of the American Statistical Association*, 89(427):846–866, 1994. doi: 10.1080/01621459.1994.10476818. URL https://doi.org/10.1080/01621459.1994.10476818. James M. Robins, Miguel A. Hernán, and Babette A. Brumback. Marginal structural models and causal inference in epidemiology. *Epidemiology*, 11:550–560, 2000. James M. Robins, Mariela Sued, Quanhong Lei-Gomez, and Andrea Rotnitzky. Comment: Performance of double-robust estimators when "inverse probability" weights are highly variable. *Statistical Science*, 22: 544–559, 2007. Paul R. Rosenbaum and Donald B. Rubin. The central role of the propensity score in observational studies for causal effects. *Biometrika*, 70(1):41–55, 04 1983. ISSN 0006-3444. doi: 10.1093/biomet/70.1.41. Peter M. Rothwell. External validity of randomised controlled trials: "to whom do the results of this trial apply?". *The Lancet*, 365:82–93, 2005. Donald Rubin and Elizabeth Stuart. Affinely invariant matching methods with discriminant mixtures of proportional ellipsoidally symmetric distributions. *The Annals of Statistics*, 34, 12 2006. doi: 10.1214/009053606000000407. Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. *Journal* of educational Psychology, 66(5):688, 1974. Donald B Rubin. Assignment to treatment group on the basis of a covariate. *Journal of educational Statistics*, 2(1):1–26, 1977. Donald B. Rubin. Using multivariate matched sampling and regression adjustment to control bias in observational studies. *Journal of the American Statistical Association*, 74:318–328, 1979. Donald B Rubin. Causal inference using potential outcomes: Design, modeling, decisions. *Journal of the* American Statistical Association, 100(469):322–331, 2005. Donald B. Rubin and Neal Thomas. 
Characterizing the effect of matching using linear propensity score methods with normal distributions. *Biometrika*, 79(4):797–809, 12 1992. ISSN 0006-3444. doi: 10.1093/biomet/79.4.797. URL https://doi.org/10.1093/biomet/79.4.797.

Vin Sachidananda and Emma Brunskill. Online learning for causal bandits. In *Advances in Neural Information Processing Systems*, 2017.

Yuta Saito and Shota Yasui. Counterfactual cross-validation: Effective causal model selection from observational data. In *arXiv preprint arXiv:1909.05299*, 2019.

Sara Saturni, Federico Bellini, Fulvio Braido, Pierluigi Paggiaro, Alessandro Sanduzzi, Nicola Scichilone, Pierachille Santus, Luca Morandi, and Alberto Papi. Randomized controlled trials and real life studies. approaches and methodologies: a clinical point of view. *Pulmonary pharmacology & therapeutics*, 27 2:129–38, 2014.

Neela Sawant, Chitti Babu Namballa, Narayanan Sadagopan, and Houssam Nassif. Contextual multi-armed bandits for causal marketing. *CoRR*, abs/1810.01859, 2018. URL http://arxiv.org/abs/1810.01859.

Joseph Schafer and Joseph Kang. Average causal effects from nonrandomized studies: A practical guide and simulated example. *Psychological methods*, 13:279–313, 01 2009. doi: 10.1037/a0014268.

Daniel Scharfstein, A. G. Rotnitzky, and James Robins. Adjusting for nonignorable drop-out using semiparametric nonresponse models. *Journal of the American Statistical Association*, 94, 12 1999. doi: 10.2307/2669923.

Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Towards causal representation learning, 2021. URL https://arxiv.org/abs/2102.11107.

Anthony D. Scotina and Roee Gutman. Matching algorithms for causal inference with multiple treatments. arXiv preprint arXiv:1809.00269, 2019.

Shaun Seaman and Stijn Vansteelandt. Introduction to double robust methods for incomplete data. *Statistical Science*, 33:184–197, 05 2018. doi: 10.17863/CAM.23913.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *arXiv preprint arXiv:1502.03167*, 2015.

Uri Shalit, Fredrik D. Johansson, and David Sontag. Estimating individual treatment effect: Generalization bounds and algorithms. ICML'17, pp. 3076–3085. JMLR.org, 2017.

Michelle Shardell, Gregory E. Hicks, and Luigi Ferrucci. Doubly robust estimation and causal inference in longitudinal studies with dropout and truncation by death. *Biostatistics*, 16(1):155–168, 07 2014. doi: 10.1093/biostatistics/kxu032. URL https://doi.org/10.1093/biostatistics/kxu032.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. *Journal of Statistical Planning and Inference*, 90:227–244, 2000.

Bonnie Sibbald and Martin Roland. Understanding controlled trials: Why are randomised controlled trials important? BMJ, 316:201, 1998.

Stephen D. Simon. Is the randomized clinical trial the gold standard of research? *Journal of andrology*, 22 6:938–43, 2001.

Aleksandrs Slivkins. Introduction to multi-armed bandits. *CoRR*, abs/1904.07272, 2019. URL http://arxiv.org/abs/1904.07272.

Leonard A. Stefanski and Dennis D. Boos. The calculus of m-estimation. *The American Statistician*, 56:29 - 38, 2002.

Lesley A Stewart and Mahesh K. B. Parmar. Bias in the analysis and reporting of randomized controlled trials. *International Journal of Technology Assessment in Health Care*, 12:264 - 275, 1996.
Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, and Kun Zhang. Domain adaptation with invariant representation learning: What transformations to learn? In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 24791–24803. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/cfc5d9422f0c8f8ad796711102dbe32b-Paper.pdf.

Elizabeth A. Stuart. Matching Methods for Causal Inference: A Review and a Look Forward. *Statistical Science*, 25(1):1 - 21, 2010. doi: 10.1214/09-STS313. URL https://doi.org/10.1214/09-STS313.

Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. *CoRR*, abs/1502.02362, 2015. URL http://arxiv.org/abs/1502.02362.

Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudík, John Langford, Damien Jose, and Imed Zitouni. Off-policy evaluation for slate recommendation. NIPS'17, pp. 3635–3645, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.

Zhiqiang Tan. Bounded, efficient and doubly robust estimation with inverse weighting. *Biometrika*, 97:661–682, 2010.

Violeta Teodora Trifunov, Maha Shadaydeh, Jakob Runge, Veronika Eyring, Markus Reichstein, and Joachim Denzler. Causal link estimation under hidden confounding in ecological time series. 01 2020.

Violeta Teodora Trifunov, Maha Shadaydeh, and Joachim Denzler. Time series causal link estimation under hidden confounding using knockoff interventions, 2022.

Wouter A. C. van Amsterdam, Pim A. de Jong, Joost J. C. Verhoeff, Tim Leiner, and Rajesh Ranganath. Decision making in cancer: Causal questions require causal answers, 2022. URL https://arxiv.org/abs/2209.07397.

Victor Veitch, Dhanya Sridhar, and David Blei. Adapting text embeddings for causal inference. In Jonas Peters and David Sontag (eds.), *Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)*, volume 124 of *Proceedings of Machine Learning Research*, pp. 919–928. PMLR, 03–06 Aug 2020.

Cédric Villani. *Optimal transport: old and new*, volume 338. Springer, 2009.

Matthew James Vowels, Necati Cihan Camgöz, and R. Bowden. Targeted vae: Structured inference and targeted learning for causal parameter estimation. *ArXiv*, abs/2009.13472, 2020.

Ingeborg Waernbaum and Laura Pazzagli. Model misspecification and bias for inverse probability weighting and doubly robust estimators. *arXiv: Statistics Theory*, 2017.

Wang Miao, Zhi Geng, and Eric Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. In *arXiv preprint arXiv:1705.08821*, 2016.

Yixin Wang and Michael I. Jordan. Desiderata for representation learning: A causal perspective, 2021.

Sherry Weitzen, Kate Lapane, Alicia Toledano, Anne Hume, and Vincent Mor. Principles for modeling propensity scores in medical research: A systematic literature review. *Pharmacoepidemiology and drug safety*, 13:841–53, 12 2004. doi: 10.1002/pds.969.

Anpeng Wu, Kun Kuang, Junkun Yuan, Bo Li, Pan Zhou, Jianrong Tao, Qiang Zhu, Yueting Zhuang, and Fei Wu. Learning decomposed representation for counterfactual inference. *ArXiv*, abs/2006.07040, 2020.

Pengzhou (Abel) Wu and Kenji Fukumizu. Intact-vae: Estimating treatment effects under unobserved confounding.
*ArXiv*, abs/2101.06662, 2021.

Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, and Aidong Zhang. Representation learning for treatment effect estimation from observational data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/a50abba8132a77191791390c3eb19fe7-Paper.pdf.

Zhang Yao, Bellot Alexis, and Schaar Mihaela van der. Learning overlapping representations for the estimation of individualized treatment effects. In *International Conference of Machine Learning*, 2020.

A Yazdani and E Boerwinkle. Causal inference in the age of decision medicine. *Journal of data mining in genomics & proteomics*, 6, 2015.

Li Ye, Yishi Lin, Hong Xie, and John C. S. Lui. Combining offline causal inference and online bandit learning for data driven decisions. 01 2020. URL http://arxiv.org/abs/2001.05699.

Jinsung Yoon, James Jordon, and Mihaela van der Schaar. Ganite: Estimation of individualized treatment effects using generative adversarial nets. In *International Conference on Learning Representations*, 2018.

Shuxi Zeng, Serge Assaad, Chenyang Tao, Shounak Datta, Lawrence Carin, and Fan Li. Double robust representation learning for counterfactual prediction. *ArXiv*, abs/2010.07866, 2020.

Junzhe Zhang, Daniel Kumor, and Elias Bareinboim. Causal imitation learning with unobserved confounders. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*, 2020.

Yao Zhang, Jeroen Berrevoets, and Mihaela van der Schaar. Identifiable energy-based representations: An application to estimating heterogeneous causal effects. *ArXiv*, abs/2108.03039, 2021.

Zhong Zhao. Using matching to estimate treatment effects: Data requirements, matching metrics, and monte carlo evidence. *The Review of Economics and Statistics*, 86:91–107, 02 2004. doi: 10.1162/003465304323023705.

Min Zheng and Samantha Kleinberg. Using domain knowledge to overcome latent variables in causal inference from time series. In Finale Doshi-Velez, Jim Fackler, Ken Jung, David Kale, Rajesh Ranganath, Byron Wallace, and Jenna Wiens (eds.), *Proceedings of the 4th Machine Learning for Healthcare Conference*, volume 106 of *Proceedings of Machine Learning Research*, pp. 474–489. PMLR, 09–10 Aug 2019. URL https://proceedings.mlr.press/v106/zheng19a.html.

Min Zheng, Jessecae K. Marsh, Jeffrey V. Nickerson, and Samantha Kleinberg. How causal information affects decisions. *Cognitive Research: Principles and Implications*, 5, 2020.

Cai Zhihong and Kuroki Manabu. On identifying total effects in the presence of latent variables and selection bias. In *arXiv preprint arXiv:1206.3239*, 2012.

Kailiang Zhong, Fengtong Xiao, Yan Ren, Yaorong Liang, Wenqing Yao, Xiaofeng Yang, and Ling Cen. Descn: Deep entire space cross networks for individual treatment effect estimation. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.

José Zubizarreta. Using mixed integer programming for matching in an observational study of kidney failure after surgery. *Journal of the American Statistical Association*, 107, 12 2012. doi: 10.1080/01621459.2012.703874.

## A Appendix

## A.1 Naive ATE Estimation

Having an RCT dataset by default gives us

$$\{Y(0),Y(1)\}\perp\!\!\!\perp A\,|\,X=x,\quad\text{for all }x\in\mathcal{X}.$$
We can compute the ATE by aggregating differences in mean estimators of the treatment and control groups:

$$ATE=\mathbb{E}_{p(x)}[\mathbb{E}[Y(1)-Y(0)|X=x]]=\mathbb{E}_{p(x)}\left[\mathbb{E}[Y(1)|X=x]-\mathbb{E}[Y(0)|X=x]\right].$$

Taking the Monte Carlo approximation, we have

$$\widehat{ATE}=\sum_{x}\frac{N_{x}}{N}\left[\frac{1}{N_{x1}}\sum_{i:t_{i}=1}y_{i}-\frac{1}{N_{x0}}\sum_{j:t_{j}=0}y_{j}\right],$$

where $N_{xt}$ is the number of data points with covariate x and treatment assignment t, $N_x$ is the number of data points with covariate x, and N is the total number of data points. For simplicity, let $\bar{e}(x)=N_{x1}/N_{x}$ and $1-\bar{e}(x)=N_{x0}/N_{x}$. We can re-express $\widehat{ATE}(x)$ as

$$\widehat{ATE}_{\mathrm{AGG}}(x)=\frac{1}{\bar{e}(x)N_{x}}\sum_{i:t_{i}=1}y_{i}-\frac{1}{(1-\bar{e}(x))N_{x}}\sum_{j:t_{j}=0}y_{j}.$$

$\widehat{ATE}_{\mathrm{AGG}}(x)$ is an unbiased estimator and its estimation error satisfies

$$\sqrt{N_{x}}(\widehat{ATE}_{\mathrm{AGG}}(x)-ATE(x))\to\mathcal{N}\left(0,\frac{\mathrm{Var}[Y(1)|X=x]}{\bar{e}(x)}+\frac{\mathrm{Var}[Y(0)|X=x]}{1-\bar{e}(x)}\right).$$

Under the assumption that $\mathrm{Var}[Y(A)|X=x]=\sigma^{2}(x)$ does not depend on A, we have

$$\sqrt{N_{x}}(\widehat{ATE}_{\mathrm{AGG}}(x)-ATE(x))\to\mathcal{N}\left(0,\frac{\sigma^{2}(x)}{\bar{e}(x)(1-\bar{e}(x))}\right).$$

We can re-write our estimator by decomposing it in terms of the true ATE and the approximation error:

$$\widehat{ATE}_{\mathrm{AGG}}=\sum_{x}\frac{N_{x}}{N}\widehat{ATE}_{\mathrm{AGG}}(x)$$
$$=\sum_{x}p(x)ATE(x)+\underbrace{\sum_{x}p(x)(ATE(x)-\widehat{ATE}(x))}_{\approx\mathcal{N}(0,\sum_{x}p(x)^{2}\mathrm{Var}[\widehat{ATE}(x)])}+\underbrace{\sum_{x}\left(p(x)-\frac{N_{x}}{N}\right)ATE(x)}_{\approx\mathcal{N}(0,N^{-1}\mathrm{Var}[ATE(x)])}+\underbrace{\sum_{x}\left(p(x)-\frac{N_{x}}{N}\right)(ATE(x)-\widehat{ATE}(x))}_{=\mathcal{O}(N^{-1})}.$$

This makes the error of $\widehat{ATE}_{\mathrm{AGG}}$ distributed as $\mathcal{N}(0,\mathrm{Var}_{\mathrm{AGG}})$, where

$$N\mathrm{Var}_{\mathrm{AGG}}=\mathrm{Var}[ATE(x)]+\mathbb{E}_{p(x)}\left[\frac{\sigma^{2}(x)}{\bar{e}(x)(1-\bar{e}(x))}\right].\tag{16}$$

## A.2 ATE Ordinary Least-Squares Estimator

Rather than taking the difference of direct estimates of the expected potential outcomes, we can model the potential outcome using linear regression,

$$Y_{i}(t)=c_{t}+X_{i}\beta_{t}+\epsilon_{i}(t),$$

where $\mathbb{E}[\epsilon_{i}(t)|X_{i}]=0$ and $\mathrm{Var}[\epsilon_{i}(t)|X_{i}]=\sigma^{2}$. The mean of the data is set to zero, $\mathbb{E}[X]=0$, by normalizing the dataset, and the variance of the data is $\mathrm{Var}[X]=\sigma_{X}$. Because we are in the RCT setting, $p(T=1)=p(T=0)=\frac{1}{2}$. In this setup, we can write the ATE as

$$ATE=\mathbb{E}[Y(1)-Y(0)]=c_{1}-c_{0}+\mathbb{E}[X](\beta_{1}-\beta_{0}).$$

Now we can run the ordinary least-squares (OLS) estimator to estimate $c_t$ and $\beta_t$,

$$\hat{\tau}_{OLS}=\hat{c}_{1}-\hat{c}_{0}+\bar{X}(\hat{\beta}_{1}-\hat{\beta}_{0}),$$

where $\bar{X}=\frac{1}{n}\sum_{i=1}^{n}x_{i}$. The estimation errors for $c_1$ and $c_0$ are distributed as

$$\hat{c}_{1}-c_{1}\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{n_{1}}\right)\quad\text{and}\quad\hat{c}_{0}-c_{0}\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{n_{0}}\right),$$

respectively. Since $\mathbb{E}[X]=0$ and $\bar{X}$ asymptotically approaches 0 with standard error $\|\beta_{1}-\beta_{0}\|^{2}\sigma_{X}/N$, we have

$$\hat{\tau}_{OLS}-\tau=(\hat{c}_{1}-c_{1})-(\hat{c}_{0}-c_{0})+\bar{X}(\beta_{1}-\beta_{0})+\bar{X}(\hat{\beta}_{1}-\beta_{1}-\hat{\beta}_{0}+\beta_{0}),$$

where the last term has an $\mathcal{O}(1/n)$ error rate. This makes the error of $\hat{\tau}_{OLS}$ distributed as $\mathcal{N}(0,\mathrm{Var}_{OLS})$, where $\mathrm{Var}_{OLS}=4\sigma^{2}+\|\beta_{0}-\beta_{1}\|^{2}\sigma_{X}$.
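The two estimators are easy to compare in a small simulation. The following numpy sketch, with an arbitrary scalar data-generating process chosen purely for illustration, computes both the difference-in-means estimate of §A.1 and $\hat{\tau}_{OLS}$ of §A.2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                    # normalized covariate, E[X] = 0
t = rng.integers(0, 2, size=n)            # RCT: p(T=1) = p(T=0) = 1/2
y = 1.0 + 2.0 * t + 0.5 * x + rng.normal(scale=0.1, size=n)   # true ATE = 2

# Difference in means (Section A.1)
ate_agg = y[t == 1].mean() - y[t == 0].mean()

# OLS estimator (Section A.2): fit c_t + beta_t * x separately per arm
coef = {}
for arm in (0, 1):
    design = np.column_stack([np.ones((t == arm).sum()), x[t == arm]])
    coef[arm], *_ = np.linalg.lstsq(design, y[t == arm], rcond=None)  # [c_t, beta_t]
ate_ols = (coef[1][0] - coef[0][0]) + x.mean() * (coef[1][1] - coef[0][1])

print(ate_agg, ate_ols)   # both approximately 2
```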
## A.3 Active CI Learning

## A.3.1 Contextual Bandit Algorithm

The contextual bandit is an online method for making decisions in given contexts. It finds optimal decisions by maximizing the total reward over a period of time. In the contextual bandit setup, we receive an observation $x_t$ at time t, the algorithm picks an action from a finite set $a_t\in\mathcal{A}$, and then executes action $a_t$ on observation $x_t$. The reward $y_t\in[0,1]$ is given by the world according to some distribution parameterized by the (X, A) variables, and the samples are drawn independently and identically. The reward depends on both the context $x_t$ and the chosen action $a_t$ at each round. The reward distribution can change over time, but this change is explained by the stream of context data.¹⁰ The action is chosen based on the choice of algorithm, such as the UCB1 and Thompson sampling methods (Agrawal & Goyal, 2013).¹¹ Examples of contextual bandit policy algorithms are given below.

Algorithm 7 Epsilon-Greedy
Toss a coin with success probability ϵ_t.
if success then
  explore: choose an arm uniformly at random.
else
  exploit: choose the arm with the highest average reward so far.
end if

Algorithm 8 High-Confidence Elimination
Alternate between two arms until UCB_t(a) < LCB_t(a′) after some even round t.
Abandon arm a, and use arm a′ from then on.

Algorithm 9 UCB1
Pick the arm a which maximizes UCB_t(a).

Algorithm 10 Thompson Sampling
Sample a mean reward vector E[Y(a)] for each action a from the posterior distribution p(a|X).
Choose the best arm a_t according to E[Y(a_t)].

Upper bound on ATE. Both the high-confidence elimination method and the UCB algorithm give ATE estimates with assured upper bounds, since the potential outcomes are distanced enough that the two confidence intervals do not overlap:

$$ATE=|\mathbb{E}[Y(A=a)]-\mathbb{E}[Y(A=a')]|\leq2(r_{t}(a)+r_{t}(a'))\leq4\sqrt{2\log(T)/\lfloor t/2\rfloor}\leq\mathcal{O}(\sqrt{\log(T)/t}),$$

where T is the total number of iterations and $n_{t}(a)=\lfloor\frac{t}{2}\rfloor$ since a and a′ have been alternated. In this case, we can aggregate data into an intervention dataset until the confidence intervals do not overlap and use them to estimate the ATE.

¹⁰The reward and outcome can be viewed the same and used interchangeably.
¹¹We will not review the details of each algorithm in this paper but refer the reader to Bouneffouf et al. (2020).

## A.4 Passive CI Learning

## A.4.1 Inverse Probability Weighting

Since the propensity score e(X) is unknown in practice, we have to estimate $\hat{e}(X)$ via parametric or non-parametric regression. We use non-parametric regression to estimate $\hat{e}(X)=\frac{N_{1}}{N}$, where $N_{1}=\sum_{i=1}^{N}\mathbb{I}[A_{i}=1]$. The ATE estimate is

$$\widehat{ATE}_{\mathrm{IPW}}=\frac{1}{N}\sum_{i}^{n}\left[\frac{\mathbb{I}[A_{i}=1]y_{i}(1)}{\hat{e}(x_{i})}\right]-\frac{1}{N}\sum_{i}^{n}\left[\frac{\mathbb{I}[A_{i}=0]y_{i}(0)}{1-\hat{e}(x_{i})}\right].$$

Suppose that $\hat{e}(x)\to e(x)$ as $N\to\infty$ (i.e., $\sup_{x\in\mathcal{X}}|e(x)-\hat{e}(x)|\to\mathcal{O}(a_{n})$). Then the ATE error becomes

$$|ATE-\widehat{ATE}_{\mathrm{IPW}}|=\mathcal{O}\left(\frac{a_{n}}{\eta}\right),$$

where $\eta\leq e(x)\leq1-\eta$ for all $x\in\mathcal{X}$ and $|Y_{i}|\leq1$. Therefore, $\widehat{ATE}_{\mathrm{IPW}}$ concentrates at the rate $\frac{1}{\sqrt{n}}$.
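A minimal numpy sketch of $\widehat{ATE}_{\mathrm{IPW}}$; clipping the estimated propensities is an illustrative way of enforcing the overlap condition $\eta\leq e(x)\leq1-\eta$ assumed above:

```python
import numpy as np

def ate_ipw(a, y, e_hat, eta=0.01):
    # Inverse-probability-weighted ATE with estimated propensity scores e_hat(x_i).
    e_hat = np.clip(e_hat, eta, 1.0 - eta)   # enforce eta <= e(x) <= 1 - eta
    treated = np.mean(a * y / e_hat)
    control = np.mean((1 - a) * y / (1 - e_hat))
    return treated - control
```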
The weighted population outcomes of the two actions, $A=0$ and $A=1$, give an unbiased estimate of the average treatment effect,
$$ATE=\mathbb{E}\left[\frac{\mathbb{I}[A=1]Y(1)}{e(X)}\right]-\mathbb{E}\left[\frac{\mathbb{I}[A=0]Y(0)}{1-e(X)}\right].\tag{17}$$
In order to express the variance of $\widehat{ATE}$ in Equation 17, let us re-express $Y(0)$ and $Y(1)$,
$$\begin{array}{l}{{Y(0)=c(X)-(1-e(X))\,ATE(X)+\epsilon(0)}}\\ {{Y(1)=c(X)+e(X)\,ATE(X)+\epsilon(1)}}\end{array}$$
where $c(X)$ is a function that makes the expressions above hold, and $\mathbb{E}[\epsilon(0)|X]=0$ and $\mathbb{E}[\epsilon(1)|X]=0$. We assume that $\mathrm{Var}(\epsilon(A)|X)=\sigma^{2}(X)$ does not depend on $A$. Then
$$N\,\text{Var}_{IPW}[ATE(X)]=\text{Var}\left[\frac{AY}{e(X)}-\frac{(1-A)Y}{1-e(X)}\right]$$
$$=\text{Var}\left[\frac{Ac(X)}{e(X)}-\frac{(1-A)c(X)}{1-e(X)}\right]+\text{Var}[ATE(X)]+\mathbb{E}\left[\frac{\sigma^{2}(X)}{e(X)(1-e(X))}\right]$$
$$=\mathbb{E}\left[\frac{c(X)^{2}}{e(X)(1-e(X))}\right]+\text{Var}[ATE(X)]+\mathbb{E}\left[\frac{\sigma^{2}(X)}{e(X)(1-e(X))}\right].$$
Note that we can express the variance of the IPW estimator in terms of the variance of the aggregated difference-in-means estimator $\mathrm{Var}_{AGG}$ in Equation 16, which is the naive ATE estimator for an RCT dataset:
$$N\,\mathrm{Var}_{IPW}[ATE(X)]=N\,\mathrm{Var}_{AGG}[ATE(X)]+\mathbb{E}\left[\frac{c(X)^{2}}{e(X)(1-e(X))}\right].$$
This means that IPW has higher variance than the AGG estimator. Surprisingly, we can conclude that the true propensity score performs worse than the empirical propensity score, since the AGG estimator used $\bar{e}(x)=N_{x1}/N_{x}$ (see Section A.1).

## A.4.2 Domain Invariance Regularization

Intuitively, the point of inducing the treated and control representational distributions to be the same is that it induces the two learned prediction functions $p_{\theta}(y|t=0,x)$ and $p_{\theta}(y|t=1,x)$ to generalize better across the treated and control populations.

![37_image_0.png](37_image_0.png)

Figure 6: The red distribution (left) has large overlap in the tails and the green distribution (right) has small overlap.

![37_image_1.png](37_image_1.png)

Figure 7: (a) Results using IPMs - Wasserstein distance and MMD, (b) Results using counterfactual variance.

Indeed, Shalit et al. (2017) show that the CFR objective function upper-bounds the PEHE generalization error Bareinboim & Pearl (2016),
$$PEHE\leq2({\mathcal{L}}_{\mathrm{F}}+{\mathcal{L}}_{\mathrm{CF}}-2\sigma_{Y}^{2})\leq2\left({\mathcal{L}}_{\mathrm{F}}^{T=0}+{\mathcal{L}}_{\mathrm{F}}^{T=1}+B_{h}\,\mathrm{IPM}_{F}(\hat{p}^{T=1}(h(X)),\hat{p}^{T=0}(h(X)))-2\sigma_{Y}^{2}\right)$$
where the expected factual and counterfactual losses are defined as
$$\mathcal{L}_{\mathrm{F}}=\mathbb{E}_{p(X,T)}[l(X,T)]=u\mathcal{L}_{\mathrm{F}}^{T=1}+(1-u)\mathcal{L}_{\mathrm{F}}^{T=0}$$
$$\mathcal{L}_{\mathrm{CF}}=\mathbb{E}_{p(X,1-T)}[l(X,T)]=(1-u)\mathcal{L}_{\mathrm{CF}}^{T=0}+u\mathcal{L}_{\mathrm{CF}}^{T=1}$$
respectively. The expected loss for an individual data point over $p(Y_{T}|X=x)$ is $l(X=x,T=t)$, and $u=p(T=1)$ is the proportion of treated in the population. The expected factual/counterfactual treated and control losses become
$$\begin{array}{l l}{{{\mathcal L}_{\mathrm{F}}^{T=1}=\mathbb{E}_{p(X,1)}[l(X,1)],}}&{{\quad{\mathcal L}_{\mathrm{F}}^{T=0}=\mathbb{E}_{p(X,0)}[l(X,0)]}}\\ {{{\mathcal L}_{\mathrm{CF}}^{T=1}=\mathbb{E}_{p(X,0)}[l(X,1)],}}&{{\quad{\mathcal L}_{\mathrm{CF}}^{T=0}=\mathbb{E}_{p(X,1)}[l(X,0)]}}\end{array}$$
respectively. Here $\sigma_{Y}^{2}:=\min\{\sigma_{0}^{2},\sigma_{1}^{2}\}$ and $\sigma_{t}^{2}=\mathbb{E}_{p(X,T)}[(Y-f_{\phi}(X))^{2}]$ is the expected variance of $Y_{T}$. The full proof can be found in the original paper (Shalit et al., 2017), but the key idea is that $\mathcal{L}_{\mathrm{CF}}\leq u\mathcal{L}_{\mathrm{F}}^{T=0}+(1-u)\mathcal{L}_{\mathrm{F}}^{T=1}+B_{h}\,\mathrm{IPM}_{F}$.
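To make the domain-invariance penalty concrete, below is a minimal PyTorch-style sketch of a CFR-like training loss with an RBF-kernel MMD as the IPM. The representation network `phi`, the two hypothesis heads, the kernel bandwidth `sigma`, and the weight `alpha` are our own illustrative assumptions (the sketch also assumes each mini-batch contains both treated and control units); this is not the exact CFR implementation of Shalit et al. (2017).

```python
import torch

def mmd_rbf(h_treated, h_control, sigma=1.0):
    """Squared RBF-kernel MMD between treated and control representations."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2)).mean()
    return k(h_treated, h_treated) + k(h_control, h_control) - 2 * k(h_treated, h_control)

def cfr_loss(phi, heads, x, t, y, alpha=1.0):
    """Factual loss plus an IPM (here: MMD) penalty on the representations."""
    h = phi(x)  # shared representation h(X)
    y_hat = torch.where(t.bool(), heads[1](h).squeeze(-1), heads[0](h).squeeze(-1))
    factual = torch.nn.functional.mse_loss(y_hat, y)
    ipm = mmd_rbf(h[t == 1], h[t == 0])
    return factual + alpha * ipm
```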
## A.5 Posterior Variance Reduction Regularization

Yao et al. (2020) demonstrate why using a distributional distance to balance the treated and controlled representations is not ideal, using a toy example in Figure 6. The red population comes from two truncated normal distributions having large overlap in the tails, and the green population comes from two normal distributions having small overlap in the tails. Figure 7(a) illustrates that both the MMD Gretton et al. (2012) and Wasserstein distances Villani (2009) are smaller in the green population compared to the red population, even though sufficient support is satisfied in the red population and not the green population. In contrast, the counterfactual variance perfectly describes the lack of support in the green population, as shown in Figure 7(b). Minimizing the counterfactual variance can lead to better generalization error Yao et al. (2020). The following theorem shows that the counterfactual Gibbs risk $R_{p(1-t)}$ is upper bounded by two terms that correspond to domain invariance and counterfactual variance,
$$R_{p(1-t)}\leq\operatorname*{sup}_{x}{\frac{p(x,1-t)}{p(x,t)}}{\mathcal{L}}_{\mathrm{Factual}}+{\frac{1}{2}}\mathbb{E}_{p(x)}[\sigma^{2}(x|{\mathcal{X}},\theta)].$$
We observe that the first term consists of the factual loss and the distribution mismatch. Minimizing the factual loss and making the posterior distribution invariant will lead to a lower counterfactual Gibbs risk. The second term corresponds to the counterfactual variance. This illustrates that minimizing the counterfactual variance is an indispensable regularization term as well.

## A.6 Deep Latent-Variable Model: UTVAE

In the CEVAE, there are two conditional distributions that depend on the treatment $T$: $p_{\theta}(Y|T,Z)$ and $q_{\phi}(Z|T,X,Y)$. Both of these distributions can be estimated using samples drawn from a treatment distribution that is either dependent on or independent of the confounding factor. In doing so, we have the option of using the observational or the uniform treatment distribution for estimating the generative and inference distributions, respectively:
$${\mathcal{L}}(\theta;\phi)={\mathcal{L}}_{\mathrm{CEVAE}}(\theta;{\bar{\phi}})+{\mathcal{L}}_{\mathrm{UTVAE}}(\phi;{\bar{\theta}})$$
where $\bar{\theta}$ and $\bar{\phi}$ are fixed parameters - the gradients with respect to these variables are blocked in the computational graph. We do so in order to isolate the impact of the choice of treatment distribution on the associated conditional distributions.
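The gradient blocking above is what a stop-gradient implements in practice. Below is a minimal PyTorch-style sketch, assuming loss functions `loss_cevae` and `loss_utvae` that each consume both parameter sets; all names here are illustrative, not the original implementation.

```python
import torch

def combined_loss(theta_modules, phi_modules, batch, loss_cevae, loss_utvae):
    """L(theta; phi) = L_CEVAE(theta; phi_bar) + L_UTVAE(phi; theta_bar).

    Gradient blocking is implemented by toggling requires_grad on the "bar"
    parameter set while the corresponding term is evaluated; backward() then
    respects the flags recorded at forward time.
    """
    # L_CEVAE updates theta only: freeze phi while computing it.
    for p in phi_modules.parameters():
        p.requires_grad_(False)
    l_cevae = loss_cevae(theta_modules, phi_modules, batch)
    for p in phi_modules.parameters():
        p.requires_grad_(True)

    # L_UTVAE updates phi only: freeze theta while computing it.
    for p in theta_modules.parameters():
        p.requires_grad_(False)
    l_utvae = loss_utvae(theta_modules, phi_modules, batch)
    for p in theta_modules.parameters():
        p.requires_grad_(True)

    return l_cevae + l_utvae
```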
Review 1:
Summary: This paper is presented as an introduction to the field of causal inference for newcomers. It adopts the framework of potential outcomes from Donald Rubin, and presents a range of existing methods for estimating causal effects in contextual bandits, categorized into active and passive methods. There is no new knowledge presented in the paper, the contribution is mainly to present a collection of existing works on causal inference in a unified framework, for a broad audience.
Strengths and Weaknesses: While the intent of the paper is noble, I find it rather weak in achieving its purpose: introducing newcomers to the field of causal inference. First, it only considers a single framework, that of potential outcomes, and disregards other existing frameworks without even a mention. Then, I found several inconsistent statements and notations (see detailed comments), which made the presentation confusing even for me, someone with expertise in the field. It is also unclear when reading the paper what are contributions from the authors, and what are existing frameworks and algorithms (we introduce, we estimate etc. are used freely at several places), making the paper further ambiguous. The scope of the paper is also rather restricted (contextual bandits and potential outcomes), while the title is very general. Overall, I do not find this paper to bring anything of value to the field or the community, especially in light of the plethora of existing introductory works in the literature [1-5].
[1] An Introduction to Causal Inference, Judea Pearl 2010
[2] Introduction to Causal Inference, Spirtes 2010
[3] Introduction to Causal Inference, Neal 2020
[4] Observation and experiment: An introduction to causal inference, Rosenbaum 2017
[5] Causal Inference, Ding & Li 2018
Detailed comments:
- p1: "potential outcome" -> the concept of potential outcomes is well-established, here it seems the authors come up with it. A reference is missing.
- p2: "In this paper [...] under such a causal graph." -> The introduction starts getting confusing here. Is it absolutely necessary to assume the availability of a causal graph to do CI? What does a causal graph allow for, which its absence does not allow? Until now it has not been said if the problem of interest was CI from interventional data, observational data, or a mix of both. The challenges of CI are not the same in each of these settings, and knowledge of the causal graph is not required in the purely interventional setting.
- p2: "2b condition" -> typo?
- p2: "The action A [...] outcome Y." -> I find the terms employed a bit confusing. Earlier we can read "what is the impact", now we have "causal effect", "outcome" and "affected". Do these terms refer to the same thing? I'd suggest sticking to a unique terminology in the paper to avoid confusion.
- p2: "We measure [...] treatment effects." -> What are average and individual treatment effects? This paper claims to be a starting point for people interested in but not familiar with causal inference, yet here it assumes right-away that the reader is familiar with non-trivial concepts.
- p2: There exist other frameworks for CI, not mentioned here, such as the do-calculus framework of Judea Pearl in which these assumptions all boil down to a simpler condition, identifiability. If this paper only covers the potential outcomes framework of Donald Rubin, then it should be clearly stated.
Unfamiliar readers (the target audience of the paper) might be lured to believe potential outcomes is the one and only framework for CI.
- p2: the classic causal graph in Figure 1 -> Figure 1 exposes several graphs, which one is referred to here? Also, is it meant here that bandit algorithms are learning a causal graph?
- p2: RUBIN & THOMAS -> typesetting
- p3: Figure 1 -> these are graphs, not CI methods. What is the meaning of squares vs circles, and dark grey vs light grey nodes? What is the difference between the "deep latent models" and "deep causal models" graphs? Moreover, it is not clear to me what the methods "causal bandit" or "deep causal method" refer to here. References are missing.
- p4: "We define the joint, conditional and intervention probabilities" -> Where does the intervention probability come from? This formula is valid only for specific graphs, which is not made clear from the text. Also, the do-calculus notation is used here, while only the potential outcome framework was referred to in the text earlier. This is inconsistent and confusing, especially for an introductory paper. Other information is missing, like p which is introduced without context or definition. Following what is said earlier in the introduction, which setting are we in here? Observational? Interventional? Passive? Active?
- p5: where we marginalize out the covariate X -> X is also marginalized out in your expected potential outcome formula.
- p5: individual as well as expected -> What is meant here by individual? A definition is missing.
- p5: P -> is this capital P different from the lowercase p used earlier?
- p5: This dataset does not, or perhaps more correctly cannot -> does not or can not? Your data distribution being iid, I do not see why counterfactual outcomes as defined here could not be obtained in a large enough dataset.
- p8: $Y_{X}(A)$ -> This is inconsistent with Algorithm 1, where potential outcome is $Y(A)$. To this point, I am still unsure which setting is studied here. Are we in a bandit setting, where we are looking for an unconditional policy for $A$? Or are we in a contextual setting, where we want a policy for $A | X$? Why is Algorithm 1 sampling actions from a contextual policy, but then marginalizing out the covariate $X$ when computing the potential outcomes? I am very confused.
- p8: there is a major difference -> I believe this is an overstatement. Most bandit algorithms keep track of the expected outcome of each action, in order to find the best policy. And, as stated in the introduction, CI models potential outcomes so that it can pick the action that maximizes the outcome. I hardly see any difference here.
- p10: We then estimate the average treatment effect (ATE) -> I find it very confusing that the authors frequently say "We do X", "We propose X", "We introduce X", while they also claim this paper to be an introduction to existing methods. At times it feels like the presented methods and algorithms are the author's own contribution.
- p10: appendix ?? -> broken ref
- p23: We discuss these two limitations briefly before ending the paper -> These limitations, or rather assumptions, should be stated clearly at the beginning of the paper to give a proper context, not at the end.
Requested Changes: Improve the overall consistency and mathematical soundness of the paper. It is possible to be clear, concise and appealing to newcomers without sacrificing correctness.
Clarify and acknowledge other frameworks such as do-calculus, in particular if you use its notations like $do(A=a)$. Make it very clear, from the start, the context, the assumptions and the limitations of the paper, and of each of the presented methods. There is no room for ambiguity in an introductory paper. The field of causal inference has ambiguity enough already, efforts are much needed in the other direction.
Broader Impact Concerns: No concern
==================================================
Review 2:
Summary: This paper aims at being an introduction to algorithms that solve active and passive causal inference. In both cases, the algorithm tries to estimate the effect of a given action $A\in\{0,1\}$. The passive case corresponds to offline data where the actions were not chosen by the algorithm. The active case corresponds to a situation where the algorithm *actively* chooses the actions as a function of what it wants to estimate. After recalling major definitions, the authors first cover the active case, which seems a relatively mature field, and expose the link with contextual bandit problems. The second part then covers the topic of passive causal inference, which seems highly dependent on the assumptions that are made on data. This part is more heuristic.
Strengths and Weaknesses: (I must admit that I am not an expert of the field, so this review should be taken as the view of an outsider. Since this paper is supposed to be an 'introduction', this might reflect the feeling of some of the readers of the paper). The paper is relatively easy to read but below are some comments (in order of appearance):
Section 2 (background): this part sets a few definitions that are going to be used. I think that it is relatively clear but I think that the authors should be more precise and consistent concerning the vocabulary that is used. In particular, there seems to be a confusion in the paper between "confoundedness, confoundness and ignorability/exchangeability". To me, this seems to be a central notion in causal inference and I think that it should be explained more clearly. Also, in general the paper should be more precise about the definitions and assumptions that are used. For instance: it is not clear what the "big formula" in page 7 is: ATE is "computed as follows" if we assume the above assumptions but why and when should we assume them?
Section 3 (active inference): I quite liked this part and I found it clear and relatively well documented. Most of the presented material seems classical and not very recent but the link with contextual bandits is interesting.
Section 4 (passive inference): This seems to be the major part of the paper but I must say that I found this part very confusing. For instance:
- in Section 4.2 and 4.3, it is very unclear to me how the effect of confounders can be ignored (or not) and under which assumptions the methods presented here work. For instance: is the matching robust to the existence of confounder variables? (I would say not but the writing seems to indicate the opposite).
- in Section 4.3, I do not get why one estimator is biased and the next is not. Also, at some point a variable "follows a Normal distribution" (page 16). Why? Is it an assumption or a consequence?
- The "modern" part of the paper seems to be in section 4.5. For an outsider, I found this part very hard to read and I am not sure what information I should gain from this.
Minor remarks:
- there are a few references to an Appendix that are "??"
- Figures 1 and 4 are (almost) identical. Could this be better used?
Requested Changes: The "weakness" part above indicates a lot of clarifications that could be made.
Broader Impact Concerns: I do not foresee direct impact of the work.
==================================================
Review 3:
Summary: The paper titled "Active & Passive Causal Inference" provides a comprehensive introduction to causal inference, particularly aimed at machine learning researchers, engineers, and students who are new to this field. It categorizes causal inference methods into two main groups: active and passive. The paper discusses several crucial assumptions necessary for causal identification, such as exchangeability, positivity, consistency, and the absence of interference, and explains how these assumptions underpin different causal inference techniques. In active causal inference, the paper covers methods like randomized controlled trials (RCTs) and bandit-based approaches. These methods actively intervene in the data collection process, with RCT being a standard approach in fields like clinical trials. The paper also reviews bandit methods for causal inference, which offer a more sophisticated way to estimate average treatment effects (ATE) by occasionally randomizing actions based on the chosen bandit algorithm. In contrast, passive causal inference methods work with datasets that have been collected without active intervention. The paper discusses classical approaches like matching, inverse probability weighting, and doubly robust methods, as well as more recent developments involving deep learning.
Strengths and Weaknesses:
Strengths:
1. **Comprehensiveness:** The paper provides a thorough overview of both active and passive approaches to causal inference, making it a valuable resource for beginners in this field.
2. **Clarity in Assumptions:** It clearly outlines the fundamental assumptions required for causal inference, offering a solid foundation for understanding the subject.
Weaknesses:
1. Although it is targeted at beginners, some parts may be too specialized or assume prior knowledge, which can be challenging for the intended audience. For example, the article directly gives the calculation formula $p(Y = y| do(A=a)) = \sum_x p(X = x)p(Y = y | A = a, X = x)$, but does not explain why this formula calculates the probability of $Y = y$ when $A = a$ is enforced;
2. Limited exploration of advanced topics: The paper seems to focus more on traditional methods, rather than on newer or more advanced causal inference techniques beyond basic deep learning methods.
3. Linguistic and Typographical Errors: The paper contains several errors that detract from its overall quality, such as:
a. "There are however other methods in the same category that **aims** ..."
b. "The first such assumption is **exchangeabilty** which ..."
c. "... the expected potential outcome of counteraction being higher **than of** action ..."
d. "Such an algorithm generates a series of matched sets that **consists** of ..."
e. "The propensity score theorem tells us that if **ignoreability** is satisfied ..."
f. "... either the propensity score or the estimator µA(X) **are** correct ..."
Requested Changes:
1. Clarification and simplification for beginners: To better cater to its target audience of beginners, the paper should include explanations of new concepts and of skipped steps about causality, such as the calculation of $p(Y = y| do(A=a))$ and the definition of a causal graph.
The paper seems to use causal graphs without introducing their definition.
2. Expansion on Advanced Topics: While the paper does an excellent job of covering traditional methods in causal inference, it falls short in exploring more advanced techniques, particularly those beyond basic deep learning methods. To provide a more rounded perspective, it would be beneficial to include a section or a discussion on recent advancements and trends in causal inference, highlighting how these new methods compare or integrate with the traditional approaches.
3. Correction of Linguistic and Typographical Errors: The paper currently contains several linguistic and typographical errors that need to be addressed to improve its readability and professionalism.
Broader Impact Concerns: There is no ethical concern as far as I know.
==================================================
Metareview:
Recommendation: Reject
Comment: All the reviewers acknowledged that the authors indeed tried to provide a comprehensive introduction. However, unfortunately, they failed to do so. The AE checked the background of the three reviewers: one does not know causal inference, and the other two are experts, as they have published several papers in this area. All of them raised concerns about the inconsistencies, restricted scope, and confusing notations of the paper. After reading through the paper, the AE feels the same. So, the paper is not friendly for beginners in causal inference, as it does not develop new notation and scope to better illustrate and explain the insights of causality. The AE recommends rejection.
==================================================
# On The Robustness Of Neural Collapse And The Neural Collapse Of Robustness

Jingtong Su js12196@nyu.edu
Center for Data Science, New York University

Ya Shi Zhang ysz23@cam.ac.uk
Statistical Laboratory, University of Cambridge

Nikolaos Tsilivis nt2231@nyu.edu
Center for Data Science, New York University

Julia Kempe kempe@nyu.edu
Center for Data Science and Courant Institute of Mathematical Sciences, New York University

Reviewed on OpenReview: https://openreview.net/forum?id=OyXS4ZIqd3

## Abstract

Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometrical arrangement (a simplex). While it has been observed empirically in various cases and has been theoretically motivated, its connection with crucial properties of neural networks, like their generalization and robustness, remains unclear. In this work, we study the stability properties of these simplices. We find that the simplex structure disappears under small adversarial attacks, and that perturbed examples "leap" between simplex vertices. We further analyze the geometry of networks that are optimized to be robust against adversarial perturbations of the input, and find that Neural Collapse is a pervasive phenomenon in these cases as well, with clean and perturbed representations forming aligned simplices, and giving rise to a robust simple nearest-neighbor classifier. By studying the propagation of the amount of collapse inside the network, we identify novel properties of both robust and non-robust machine learning models, and show that earlier, unlike later, layers maintain reliable simplices on perturbed data. Our code is available at https://github.com/JingtongSu/robust_neural_collapse.

## 1 Introduction

Deep Neural Networks are nowadays the de facto choice for a vast majority of Machine Learning applications. Their success is often attributed to their ability to jointly learn rich feature functions from the data and to predict accurately based on these features. While the exact reasons for their generalization abilities still remain elusive despite a profusion of active research, they can, at least partially, be explained by implicit properties of the training algorithm, i.e., some variant of gradient descent, that specifically biases the solutions towards having certain geometric properties (such as the maximum possible separation of the training points) (Neyshabur et al., 2015; Soudry et al., 2018). Reinforcing arguments about the simplicity of the networks found by stochastic gradient descent in classification settings, Papyan et al. (2020) made the surprising empirical observation that both the feature representations in the penultimate layer (grouped by their corresponding class) and the weights of the final layer form a simplex equiangular tight frame (ETF) with C vertices, where C is the number of classes. Curiously, such a geometric arrangement becomes more pronounced well beyond the point of (effectively) zero loss on the training data, motivating the common tendency of practitioners to optimize a network for as long as the computational budget allows. The collection of these empirical phenomena was termed *Neural Collapse*. While the results of Papyan et al. (2020) fueled much research in the field, many questions remain regarding the connection of Neural Collapse with properties like generalization and robustness of Neural Networks.
In particular with regards to *adversarial robustness*, the ability of a model to withstand adversarial modifications of the input without effective drops in performance, it was originally claimed that the instantiation of Neural Collapse has a positive effect on defending against adversarial attacks (Papyan et al., 2020; Han et al., 2022). However, this seems to at least superficially contradict the fact that neural networks are not a priori adversarially robust (Szegedy et al., 2014; Carlini and Wagner, 2017). Our contributions. Through an extensive empirical investigation with computer vision datasets, we study the robustness of the formed simplices for several converged neural networks. Specifically, we find that gradient-based adversarial attacks with standard hyperparameters alter the feature representations, resulting in neither variability collapse nor simplex formation. To decouple our analysis from label dependencies that accompany untargeted attacks, we further perform *targeted attacks* that maintain class balance, and show that perturbed points end up almost exactly in the target class-mean vertex of the original simplex, meaning that even this optimal simplex arrangement is quite fragile. Moving forward, we pose the question of whether Neural Collapse can appear in other settings in Deep Learning, in particular in robust training settings. Interestingly, we find that Neural Collapse also happens during adversarial training (Goodfellow et al., 2015; Madry et al., 2018), a worst-case version of empirical risk minimization (Madry et al., 2018), and, in fact, simplices form both for the "clean" original samples and the adversarially perturbed training data. Curiously, the amount of collapse and simplex formation is much less prevalent when alternative robust training methods (Zhang et al., 2019) are deployed (with the same ability to fit the training data). Finally, we study the geometry of the inner layers of the network through the lens of Neural Collapse, and analyze the propagation of representations of both natural and adversarial data. We observe two novel phenomena inside standardly trained (non-robust) networks: (a) feature representations of adversarial examples in the earlier layers show some form of collapse which disappears later in the network, and (b) the nearest-neighbors classifiers formed by the class-centers of feature representations that correspond to early layers have significant accuracy on adversarial examples of the network.

![1_image_0.png](1_image_0.png)

Figure 1: Visualisation of our findings. Sticks represent clean class-means. Small dots correspond to the representation of an individual datum. The color represents the ground-truth label, and the dotted lines represent the predicted class-means. **Left to Right:** clean representations with standardly-trained (ST) networks; perturbed representations with standardly-trained networks; clean representations with adversarially-trained (AT) networks; perturbed representations with adversarially-trained networks. With ST nets, the adversarial perturbations push the representation to "leap" towards another cluster with slight angular deviation. AT makes the simplex resilient to such adversarial attacks, albeit with higher intra-class variance.
To summarize, our contributions and findings are the following, partly illustrated in Figure 1:

- **Is NC robust?** We initiate the study of the neural collapse phenomenon in the context of adversarial robustness, both for standardly trained networks under adversarial attacks and for adversarially trained robust networks, to investigate the stability and prevalence of the NC phenomenon. Our work exposes considerable additional fundamental and, we think, surprising geometrical structure:
- No! For standardly trained networks we find that small, imperceptible adversarial perturbations of the training data remove any simplicial structure at the representation layer: neither variance collapse nor simplex representations appear under standard metrics. Further analysis through class-targeted attacks that preserve class-balance shows a "cluster-leaping" phenomenon: representations of adversarially perturbed data jump to the (angular) vicinity of the original class means.
- **Yes for AT networks! Two identical simplices emerge.** Adversarially trained, robust, networks exhibit a simplex structure both on original clean and adversarially perturbed data, albeit of higher variance. These two simplices turn out to be the same. We find that the simple nearest-neighbor classifiers extracted from such models also exhibit robustness.
- **Early layers are more robust.** Analyzing NC metrics in the representations of the inner layers, we observe that initial layers exhibit a higher degree of collapse on adversarial data. The resulting simplices, when used for Nearest Neighbor clustering, give surprisingly robust classifiers. This phenomenon disappears in later layers.

The outline of this paper is as follows: In Section 2 we summarize previous work, in Section 3 we describe the measures and methods we use, Section 4 gives our experimental results and the insights we derive from them, and Section 5 concludes.

## 2 Related Work

Neural Collapse & Geometric properties of Optimization in Deep Learning. The term Neural Collapse was coined by Papyan et al. (2020) to describe phenomena about the feature representations of the last layer and the classification weights of a deep neural network at *convergence*. It collectively refers to the onset of variability collapse of within-class representations (NC1), the formation of two simplices (NC2) - one from the class-mean representations and another from the classification weights - that are actually dual (NC3), and, finally, the underlying simplicity of the prediction rule of the network, which becomes nothing but a simple nearest-neighbor classifier (NC4) (see Section 3 for formal definitions). Papyan et al. (2020), using ideas from Information Theory, showed that the formation of a simplex is optimal in the presence of vanishing within-class variability. Mixon et al. (2022) introduced the *unconstrained features* model (independently proposed by Fang et al. (2021) as the Layer-Peeled model), a model where the feature representations are considered as free optimization variables, and showed that a global optimizer of this problem (for the MSE loss) exhibits Neural Collapse. Many derivative works have proven Neural Collapse by modifying this model, either considering other loss functions or trying to incorporate more deep learning elements into it (Fang et al., 2021; Zhu et al., 2021; Ji et al., 2022; E and Wojtowytsch, 2022; Zhou et al., 2022; Tirer and Bruna, 2022; Han et al., 2022).
The notion of maximum separability dates back to Support Vector Machines (Cortes and Vapnik, 1995), while the bias of gradient-based optimization algorithms towards such solutions has been used to explain the success of boosting methods (Schapire et al., 1997), and, more recently, to motivate the generalization properties of neural networks (Neyshabur et al., 2015; Soudry et al., 2018; Lyu and Li, 2020). The connection between Neural Collapse and generalization of neural networks (on in-distribution and transfer-learning tasks) has been explored in Galanti et al. (2021); Hui et al. (2022). Finally, the propagation of Neural Collapse inside the network has been studied by Ben-Shaul and Dekel (2022); He and Su (2022); Hui et al. (2022); Li et al. (2022); Tirer et al. (2022); Rangamani et al. (2023).

Adversarial Examples & Robustness. Neural Networks are famously susceptible to adversarial perturbations of their inputs, even of very small magnitude (Szegedy et al., 2014). Most of the attacks that drive the performance of networks to zero are gradient-based (Goodfellow et al., 2015; Carlini and Wagner, 2017). These perturbations are surprisingly consistent between different architectures and hyperparameters, they are in many cases transferable between models (Papernot et al., 2017), and they can also be made universal (one perturbation for all inputs) (Moosavi-Dezfooli et al., 2017). For training robust models, one can resort to algorithms from robust optimization (Xu et al., 2009; Goodfellow et al., 2015; Madry et al., 2018). In particular, the most effective algorithm used in deep learning is called *Adversarial Training* (Madry et al., 2018). During adversarial training one alternates steps of generating adversarial examples and training on this data instead of the original one. Several variations of this approach have been proposed in the literature (e.g. Zhang et al. (2019); Shafahi et al. (2019); Wong et al. (2020)), modifying either the attack used for data generation or the loss used to measure mistakes. However, models produced by this algorithm, despite being relatively robust, still fall behind in terms of absolute performance (Croce et al., 2021), while there are still many unresolved conceptual questions about adversarial training (Rice et al., 2020a). In terms of the geometrical properties of the solutions, Li et al. (2020); Lv and Zhu (2022) showed that in some cases (either in the presence of separable data and/or homogeneous networks) adversarial training converges to a solution that maximally separates the *adversarial* points.

## 3 Background & Methodology

In this section, we proceed with formal definitions of Neural Collapse (NC), adversarial attacks, and Adversarial Training (AT), together with the variants we study in this paper.

## 3.1 Notation

Let $\mathcal{X}$ be an input space, and $\mathcal{Y}$ be an output space, with $|\mathcal{Y}|=C$. Denote by $S$ a given class-balanced dataset that consists of $C$ classes and $n$ data points per class. Let $f:\mathcal{X}\to\mathcal{Y}$ be a neural network, with its final linear layer denoted as $W$. For each class $c$, the corresponding classifier - i.e. the $c$-th row of $W$ - is denoted as $w_c$, and the bias is called $b_c$. Denote the representation of the $i$-th sample within class $c$ as $h_{i,c}\in\mathbb{R}^{p}$, and the union of such representations $H(S)$. We define the global-mean vector $\mu_G\in\mathbb{R}^{p}$ and class-mean vectors $\mu_c\in\mathbb{R}^{p}$ associated with $S$ as $\mu_G\triangleq\frac{1}{nC}\sum_{i,c}h_{i,c}$ and $\mu_c\triangleq\frac{1}{n}\sum_{i}h_{i,c}$, $c=1,\dots,C$.
For brevity, we refer in the text to the globally-centered class-means, $\{\mu_c-\mu_G\}_{c=1}^{C}$, as just *class-means*, since these vectors are the constituents of the simplex. We denote by $\tilde{\mu}_c=(\mu_c-\mu_G)/\|\mu_c-\mu_G\|_2$ the normalized class-means. Unless otherwise specified, the term "representation" refers to the penultimate layer of the network. Additionally, we will refer to class-means of representations of adversarial examples (see Section 3.3) as 'perturbed class-means.' This is because adversarially robust networks can still achieve 100% training accuracy on 'adversarial' perturbations of the original dataset $S$.

## 3.2 Neural Collapse Concepts

Papyan et al. (2020) demonstrate the prevalence of NC on networks optimized by SGD in the Terminal Phase of Training (TPT) - the phase beyond the point of zero training error and towards zero training loss - by tracing the following quantities. Throughout our paper, we closely follow Papyan et al. (2020) and Han et al. (2022) in the formalization of NC1-NC4. Please refer to Appendix A for exact definitions.

(NC1) Variability collapse: For all classes $c$, the within-class variation collapses to zero: $h_{i,c}\to\mu_c$ $\forall\,i\in[n],\,c\in[C]$.

(NC2, Equiangular) Class-means converge to equal, maximally-separated pairwise angles: For every pair of distinct labels $c\neq c'$,
$$\langle\tilde{\mu}_{c},\tilde{\mu}_{c^{\prime}}\rangle\to-\frac{1}{C-1}.$$

(NC2, Equinorm) Class-means converge to equal length: $\left|\|\mu_c-\mu_G\|_2-\|\mu_{c'}-\mu_G\|_2\right|\to0$ $\forall\,c,c'$.

(NC3) Convergence to self-duality: The linear classifier $W$ and the class-means converge to each other1:
$$\frac{w_{c}}{\|w_{c}\|_{2}}-\frac{\mu_{c}-\mu_{G}}{\|\mu_{c}-\mu_{G}\|_{2}}\to0\quad\forall\ c.$$

1Using the definition of NC3 from Han et al. (2022).

(NC4) Simplification to Nearest Class Center (NCC) classifier: The prediction of the network is equivalent to that of the NCC classifier formed by the non-centered class-means on $S$:
$$\arg\max_{c'}\langle w_{c'},h\rangle+b_{c'}\to\arg\min_{c'}\|h-\mu_{c'}\|_{2}\quad\forall\ h\in H(S).$$

In this work, we compare representations of original and perturbed data, which imposes ambiguity on which class-mean vectors $\mu_c,\mu_G$ to use (from $S$ or $S'$). In the spirit of the original definitions, for NC1-4 we will use the class means induced by the dataset $S'$, even if different from the training set. NC4 studies the predictive power of the NCC classifier on $S'$ by comparing it to the network classification output, which at TPT is equivalent to the ground-truth label. For the study of reference data $S'$ outside the TPT, we introduce two quantities, which we use in Section 4 to study Neural Collapse in intermediate layers:

NCC-Network Matching Rate: It measures the rate at which the NCC classifier defined in NC4, trained on $S$, coincides with the output of the network on dataset $S'$. Note that we use $\mu_c$ calculated on $S$.
$$\arg\max_{c'}\langle w_{c'},h\rangle+b_{c'}\ \stackrel{?}{=}\ \arg\min_{c'}\|h-\mu_{c'}\|_{2},\quad h\in H(S').$$

NCC Accuracy: It measures the accuracy on dataset $S'$ of the NCC classifier defined in NC4, trained on $S$. Note that we use $\mu_c$ calculated on $S$; $c_h$ denotes the ground-truth label of the input.
$$c_{h}\ \stackrel{?}{=}\ \arg\min_{c'}\|h-\mu_{c'}\|_{2},\quad h\in H(S').$$

Note that when $S=S'$, both the NCC-Network Matching Rate and the NCC Accuracy stem from (NC4).
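As an illustration of how these quantities can be computed from a matrix of penultimate-layer features, below is a minimal NumPy sketch; the function name and the array layout are our own assumptions, not the paper's released code.

```python
import numpy as np

def nc_metrics(H, labels):
    """Class means, NC2 (equiangularity/equinorm) gaps, and NCC accuracy.

    H: (N, p) penultimate-layer features; labels: (N,) integer class labels.
    """
    classes = np.unique(labels)
    mu = np.stack([H[labels == c].mean(axis=0) for c in classes])  # class means
    mu_g = H.mean(axis=0)                                          # global mean
    centered = mu - mu_g
    tilde = centered / np.linalg.norm(centered, axis=1, keepdims=True)

    C = len(classes)
    cos = tilde @ tilde.T                       # pairwise cosines of class means
    off_diag = cos[~np.eye(C, dtype=bool)]
    nc2_equiangular = np.abs(off_diag + 1.0 / (C - 1)).mean()  # gap to -1/(C-1)
    norms = np.linalg.norm(centered, axis=1)
    nc2_equinorm = norms.std() / norms.mean()

    # NCC prediction: nearest (non-centered) class mean, as in NC4.
    ncc_pred = np.argmin(np.linalg.norm(H[:, None, :] - mu[None], axis=-1), axis=1)
    ncc_accuracy = (classes[ncc_pred] == labels).mean()
    return nc2_equiangular, nc2_equinorm, ncc_accuracy
```

Evaluating the same function on features of a reference set $S'$, with the means computed on $S$, yields the NCC Accuracy used in the intermediate-layer study.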
We also introduce the following measures to quantify the proximity of two simplices over $C$ classes:

Simplex Similarity: We define the similarity measure between two $C$-class simplices with normalized class means $\tilde{\mu}_c,\tilde{\mu}'_c$ as
$$\mathrm{AVG}_{c}\operatorname{arccos}\langle{\tilde{\mu}}_{c},{\tilde{\mu}}_{c}^{\prime}\rangle.$$

Non-centered Angular Distance: Similarly, given two simplices, without taking the global means $\mu_G$ and $\mu'_G$ into account, we can calculate the angular distance with the non-centered class-means directly:
$$\operatorname{AVG}_{c}\arccos\left\langle\frac{\mu_{c}}{\|\mu_{c}\|_{2}},\frac{\mu_{c}'}{\|\mu_{c}'\|_{2}}\right\rangle.$$
Note that the similarity and angular distance between a simplex and itself are zero.

## 3.3 Gradient-Based Adversarial Attack, Adversarial Training (AT), and TRADES

Given a deep neural network $f$ with parameters $\theta$, a clean example $(\mathbf{x},y)$ and cross-entropy loss $\mathcal{L}(\cdot,\cdot)$, an untargeted adversarial perturbation is crafted by running multiple steps of projected gradient descent (PGD) to maximize the CE loss (Kurakin et al., 2017; Madry et al., 2018) (in what follows, we focus on the $\ell_\infty$ adversary, with $\ell_2$ deferred to the appendix):
$$\mathbf{x}^{k+1}=\Pi_{\mathcal{B}_{\mathbf{x}^{0}}^{\epsilon}}\left(\mathbf{x}^{k}+\alpha\cdot\operatorname{sign}(\nabla_{\mathbf{x}^{k}}{\mathcal{L}}(f(\mathbf{x}^{k}),y))\right),\tag{1}$$
where $\mathbf{x}^{0}=\mathbf{x}$ is the original example, $\alpha$ is the step size, $\tilde{\mathbf{x}}=\mathbf{x}^{N}$ is the final adversarial example, and $\Pi$ is the projection on the valid $\epsilon$-constraint set, $\mathcal{B}_{\mathbf{x}}^{\epsilon}$, of the data. $\mathcal{B}_{\mathbf{x}}^{\epsilon}$ is usually taken as either an $\ell_\infty$ or $\ell_2$ ball centered at $\mathbf{x}^{0}$. Further, to control the predicted label of $\tilde{\mathbf{x}}$, a variant called *targeted attack* minimizes the CE loss w.r.t. a target label $y_t\neq y$:
$$\mathbf{x}^{k+1}=\Pi_{\mathcal{B}_{\mathbf{x}^{0}}^{\epsilon}}\left(\mathbf{x}^{k}-\alpha\cdot\operatorname{sign}(\nabla_{\mathbf{x}^{k}}{\mathcal{L}}(f(\mathbf{x}^{k}),y_{t}))\right).\tag{2}$$
With a standardly-trained network, both of these methods can effectively reduce the accuracy to 0%. To combat this phenomenon, robust optimization algorithms have been proposed. The most representative methodology, adversarial training (Madry et al., 2018), generates $\tilde{\mathbf{x}}$ *on-the-fly* with Equation (1) for each epoch from $\mathbf{x}$, and takes the model-gradient update on $\tilde{\mathbf{x}}$ only. An alternative robust training variant, TRADES (Zhang et al., 2019), is of particular interest as it aims to address both robustness and clean accuracy. Thus the gradient steps of TRADES directly involve both $\mathbf{x}$ and $\bar{\mathbf{x}}$, where $\bar{\mathbf{x}}$ is also obtained by PGD, but under the KL-divergence loss:
$$\bar{\mathbf{x}}^{k+1}=\Pi_{\mathcal{B}_{\mathbf{x}^{0}}^{\epsilon}}\left(\bar{\mathbf{x}}^{k}+\alpha\cdot\operatorname{sign}(\nabla_{\bar{\mathbf{x}}^{k}}{\mathcal{L}}_{KL}(f(\mathbf{x}),f(\bar{\mathbf{x}}^{k})))\right).\tag{3}$$
The total TRADES loss is a summation of the CE loss on the clean data and a KL-divergence (KLD) loss between the predicted probabilities of $\mathbf{x}$ and $\bar{\mathbf{x}}$, with a regularization constant $\beta$:
$$\mathcal{L}_{CE}(f(\mathbf{x}),y)+\beta\cdot\mathcal{L}_{KL}(f(\mathbf{x}),f(\bar{\mathbf{x}})).\tag{4}$$
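For reference, Equation (1) (with the sign flip of Equation (2) for targeted attacks) translates directly into a few lines of PyTorch. This is a generic $\ell_\infty$-PGD sketch rather than the authors' exact attack code; in particular, the final clamp assumes inputs in $[0,1]$, whereas the experiments below standardize the data.

```python
import torch

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10, targeted=False):
    """L-inf PGD (Eq. 1; Eq. 2 when targeted=True, with y the target label)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        step = -alpha if targeted else alpha   # descend on target / ascend on truth
        x_adv = x_adv.detach() + step * grad.sign()
        # Projection onto the eps-ball around x, then onto the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv
```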
## 4 Experiments

In this section, we present our main experimental results measuring neural collapse in standardly (ST) and adversarially (AT) trained models. When collecting feature representations for adversarially perturbed data we always compute the epoch-relevant perturbations: for ST models throughout training we compute NC metrics for data perturbed relative to the model at the current training epoch. For AT models, at each epoch we use the current (adversarially perturbed) training data.

Datasets We consider image classification tasks on CIFAR-10 and CIFAR-100 in our main text. Both datasets are balanced (in terms of images per class), so we comply with the original experimental setup of Papyan et al. (2020). We preprocess the images by subtracting their global (train) mean and dividing by the standard deviation. For the completeness of our experiments, we also consider a 10-class subset of ImageNet: ImageNette2. We pick the 160px variant. The results are presented in Appendix F.

Models We train two large convolutional networks, a standard VGG and a Pre-Activation ResNet18, from a random initialization. Both models have sufficient capacity to fit the entire training data. We launch 3 independent runs and report the mean and standard deviation throughout our paper.

Algorithms We train the networks using stochastic gradient descent, either optimizing the cross-entropy loss (standard training - ST) or the worst-case loss, bounded by either an $\ell_2$ or $\ell_\infty$ perturbation (adversarial training - AT). For the CIFAR-10/100 datasets, we adopt community-wide standard values for the perturbations following Rice et al. (2020a): for the $\ell_\infty$ adversary, we use radius $\epsilon=8/255$ and step size $\alpha=2/255$ in Equation 1. For the $\ell_2$ adversary, we use radius $\epsilon=128/255$ and step size $\alpha=15/255$. We perform 10 PGD iterations. All networks are trained for 400 epochs in order to reach the terminal phase of training (post zero-error), with batch size 128 and initial learning rate 1e-1. We drop the learning rate by a factor of 0.1 at the 100th and again at the 150th epoch. We also consider the TRADES algorithm (Zhang et al., 2019) with Equation (4), setting $\beta=6$ following Zhang et al. (2019). For ImageNette, we pick $\ell_2$ radius $\epsilon=1536/255$ and step size $\alpha=360/255$, and the same $\ell_\infty$ hyperparameters as for the CIFAR family. We perform 5 PGD iterations due to the larger image size. For full experimental details, please refer to Appendix B.

Gaussian perturbation benchmark To disentangle the effect of the size of the perturbations and their adversarial nature, we benchmark our NC analysis with "Gaussian" perturbations randomly drawn from $\mathcal{N}(0,(8/255)^{2})$, i.e. of the same variance as the adversarial perturbation. We present results for standardly trained models and $\ell_\infty$-adversarially trained ones on CIFAR-10 in the main text and defer the rest of the results to Appendix C and D, with similar conclusions. In Appendix E, we also study NC phenomena under smaller adversarial $\ell_\infty$ perturbations and smaller radii in adversarial training (2/255, 4/255 and 6/255); the results interpolate as one would expect.

Previous research has observed that the neural collapse phenomenon under standard training does not hold on the *test set* (e.g., Hui et al. (2022)). We investigated the evolution of accuracy, loss, and neural collapse metrics for both clean and adversarially perturbed test-set data. We not only reproduce the non-collapsed dynamic with ST, but also observe an even more severe non-collapse phenomenon on the test set under robust optimization. Please refer to Appendix G for the results.

2https://github.com/fastai/imagenette

## 4.1 Standardly Trained Neural Nets

Instability of Simplices The first and third columns of Figure 2 show the evolution of the NC quantities described in Section 3 for standardly trained models. We use both adversarially perturbed and Gaussian reference data to study the stability of the original simplices.
As expected, NC metrics converge on the clean training data. For Gaussian reference data, Neural Collapse is only slightly attenuated and is hardly distinguishable from the clean curve, thus we choose to omit it. Strikingly, NC disappears for adversarially perturbed data. These findings suggest that the simplex formed by clean training data is robust against random perturbations, but fragile to adversarial attacks. The results certainly corroborate the conclusion that the representation class-means of perturbed points with ground-truth label $c$ do not form any geometrically meaningful structure at all.

![6_image_0.png](6_image_0.png)

Figure 2: Accuracy, Loss, and NC evolution for standardly (ST) and adversarially (AT) trained VGG and ResNet. For AT models, the clean and Gaussian curves coincide. Setting: CIFAR-10, $\ell_\infty$ adversary.

Re-emergence of Simplices: Cluster Leaping To gain more insight into the geometric structure of the representations of adversarial examples, we propose modifications to our metrics to understand whether any simplex-like structure on adversarial data disappears or if we could retrieve simplicial remnants. We thus ask whether labeling adversarial data with *predicted* labels instead of ground-truth class labels might lead to the re-emergence of a geometrical structure. Note that classes formed by predicted labels are in general not balanced anymore (see left column of Figure 3) and in fact might have vanishing classes, especially for larger datasets like CIFAR-100. Since the elegant simplex structure of NC is predicated on balanced classes, NC metrics as defined will be hampered by this imbalance. To gain some coarse insight into the geometry - leveraging the existence of a simplex ETF with ST when centering with the global-mean vector $\mu_G$ - in Figure 3 we study perturbed representations *relative to* the original training-data (clean) simplex, by measuring how the predicted class-means deviate from the corresponding clean class-means when centering them with the same clean global-mean vector $\mu_G$. We outline two key insights: First, the predicted class-means have varying norms. Second, the angular distance between each pair of clean and predicted class-means is in general small, as conceptually illustrated in Figure 1. There, all correctly predicted clean data are wrongly classified after the adversarial perturbation is applied. Further, the predicted class-means have varying norms whilst keeping a small angular distance to the clean class-means, which are represented by the dotted lines and sticks, respectively. We conclude that adversarial attacks manipulate the feature space in an intriguing way, by pushing the representation from the ground-truth class-mean to the (wrongly) predicted class-mean with very small angular deviation ("cluster leaping"). This observation is non-trivial, since adversarial attacks are performed by maximizing the CE loss w.r.t. the true label. Intuitively, a successful attack should push the representation to not be aligned with the last layer's true-label weight vector, but whether it would align or not with a wrong class's weight vector is not a priori clear. Our experiments answer this question in the affirmative.

![7_image_0.png](7_image_0.png)

Figure 3: Illustration of untargeted adversarial attacks on standardly trained, converged, models that correspond to one random seed (CIFAR-10, $\ell_\infty$). *Left:* Number of examples with a certain predicted label. *Inner Left:* The norms of clean class-means. *Inner Right:* The norms of predicted class-means with perturbed data. *Right:* Angular distance between clean and predicted class-means with perturbed data. *Upper:* ResNet18; *Lower:* VGG11. For 10 classes, the between-class angular distance is $\arccos(-\frac{1}{9})=1.68$ rad $=96.38$ degrees, while 0.2 rad is only 11.4 degrees.
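The deviation measurements behind Figure 3 reduce to comparing two sets of class-means centered with the same clean global mean. A minimal NumPy sketch (our own illustrative code, assuming clean and perturbed feature matrices with the corresponding ground-truth and predicted labels) follows:

```python
import numpy as np

def cluster_leap_stats(H_clean, y_clean, H_adv, y_pred):
    """Norms and angular distances of clean vs. predicted class-means.

    Both sets of means are centered with the *clean* global mean, as in Figure 3.
    """
    mu_g = H_clean.mean(axis=0)
    stats = {}
    for c in np.unique(y_clean):
        if not np.any(y_pred == c):
            continue  # class absent among predictions (possible under untargeted attacks)
        mu_clean = H_clean[y_clean == c].mean(axis=0) - mu_g
        mu_pred = H_adv[y_pred == c].mean(axis=0) - mu_g
        cos = mu_clean @ mu_pred / (np.linalg.norm(mu_clean) * np.linalg.norm(mu_pred))
        stats[int(c)] = {
            "clean_norm": float(np.linalg.norm(mu_clean)),
            "pred_norm": float(np.linalg.norm(mu_pred)),
            "angle_rad": float(np.arccos(np.clip(cos, -1.0, 1.0))),
        }
    return stats
```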
Targeted perturbations: merging simplices: To further understand the geometry of representations of perturbed data and circumvent the class-imbalance issue, we perform *targeted attacks* (Equation (2)) in a circular way (samples of class $i$ are perturbed to resemble class $(i+1) \bmod C$). Note that targeted attacks still result in a 100% attack success rate. NC metrics for targeted perturbations are shown in red in Figure 2. As already hinted at by the results from the untargeted attack, the NC2 Equiangular term collapses, but NC2 Equinorm does not. This illustrates that the targeted class-means also form a maximally-separated geometrical structure, but with varying norms on CIFAR-10. Also, the non-converging NC1 indicates that the within-class predicted representations have oscillating positions. Further, from the vanishing NC3, we can infer that for each class $c$, these non-centered predicted class-mean vectors are very close in angular space to the non-centered clean class-means $\mu_c$, as they both approach $W_c/\|W_c\|$, implying that the clean and predicted simplices are close. To provide additional verification beyond NC3, in Figure 4 (left subplots) we measure the Simplex Similarity and non-centered Angular Distance of the simplices formed by targeted adversarial examples and by clean examples, as described in Section 3. These results give us a full glimpse of how standardly trained networks are non-robust and fail under adversarial attacks: adversarial perturbations break the simplex ETF by "leaping" the representation from one class-mean to another, forming a norm-imbalanced, less concentrated structure around the original simplex.

![8_image_0.png](8_image_0.png)

Figure 4: Angular distance. *Left and Inner Left:* Average between targeted-attack class-means and clean class-means on the ST network. *Inner Right and Right:* Average between perturbed class-means and clean class-means on the AT network. Setting: CIFAR-10, $\ell_\infty$ adversary.

## 4.2 Neural Collapse During Adversarial Training

We train neural nets adversarially to full convergence, with perfect clean and robust training accuracy, and measure NC metrics for clean (original) and perturbed (epoch-wise) training data in Figure 2 (columns 2 and 4). Interestingly, we find that Neural Collapse *qualitatively* occurs in this setting as well, both for clean and perturbed data, and two simplices emerge. In particular, we find that for both ResNet18 and VGG, as the robust training loss is driven to zero, the NC metrics decrease on both the original (clean) train images and the perturbed points that we use at each epoch to update the parameters. Notice, however, that the extent of variability collapse (NC1) on the perturbed points is smaller than on the "clean" data or the Gaussian-noise benchmark, indicating that clean examples are more concentrated around the vertices. To understand the relative positioning of the two simplices, we investigate the Simplex Similarity and Angular Distance between non-centered class-means in Figure 4 (right). The vanishing distance confirms these two simplices are exactly the same. We notice that the angular distance with AT grows initially and then gradually drops. We explain this phenomenon as follows.
At the initial phase, the network's accuracy is only slightly higher than its robustness (see e.g., Figure 2), with both terms relatively low. Thus, untargeted attacks are not yet capable of modifying the features away from the clean class-means. Gradually, as clean accuracy rises and clean clusters start to emerge, adversarial attacks become more and more successful, and thus we observe an increase in the angular distance between the classes. These results suggest that Adversarial Training nudges the network to learn simple representational structures (namely, a simplex ETF) not only on clean examples but also on perturbed examples, to achieve robustness against adversarial perturbations. Equivalently, the simplices induced by robust networks are *not fragile* anymore, but *resilient*. Note also that the NC4 results imply that there is a simple nearest-neighbor classifier that is robust against adversarial perturbations generated from the network.

## 4.3 No Neural Collapse Under TRADES Objective

One could conjecture that the formation of two very close simplices in robust models is necessary for robustness. Curiously, this is not the case for all training algorithms that produce robust models. In particular, Figure 5 demonstrates that a state-of-the-art algorithm that aims to balance clean and robust accuracy, TRADES (Zhang et al., 2019), shows fundamentally different behavior. Even though both terms of the loss (see Equation 4) are driven to zero, we do not observe Neural Collapse either qualitatively or quantitatively; the amount of collapse on both clean and perturbed data is roughly one order of magnitude larger than for adversarial training, and the feature representations do not approach the ETF formation, even well past the onset of the terminal phase. We view this as evidence that the prevalence of Neural Collapse is not necessary for robust classification.

![9_image_0.png](9_image_0.png)

Figure 5: Accuracy, Loss and NC evolution with TRADES-trained networks. *Upper:* ResNet18; *Lower:* VGG11. No simplices are formed with TRADES training. Setting: CIFAR-10, $\ell_\infty$ adversary. Note that we plot the KLD loss here to showcase optimization convergence, avoiding the effect of the regularization constant $\beta$.

## 4.4 Neural Collapse In Early Layers And Early-Stage Robustness

While originally variability collapse and simplex formation were observed for the last-layer representations, follow-up studies extended the analysis to the intermediate layers of the neural network. In particular, He and Su (2022) found that the amount of variability collapse measured at different layers (at convergence) decreases smoothly as a function of the index of the layer. Further, Hui et al. (2022) coined the term Cascading Neural Collapse to describe the phenomenon of cascading variability collapse; starting from the end of the network, the collapse of one layer seemed to signal the collapse of the previous layers (albeit to a lesser extent). Here, we replicate this study of the intermediate-layer computations, while also studying the representations of the perturbed points (both in standard and adversarial training). In particular, we collect the inputs of either the convolutional or linear layers of the network *at convergence*, order them by depth index, and compute the NC quantities of Section 3. The results are presented in Figure 6.
Both for ST and AT models, we reproduce the power-law behavior observed in He and Su (2022) for clean data; the feature variability collapses progressively, and, interestingly, undergoes a slower decrease in the case of adversarial training. The adversarial data representations for ST models, however, while failing to collapse at the final layer (as already established in Figure 2), exhibit the same extent of Neural Collapse as those of the original data for the earlier layers. This hints that from the viewpoint of the earlier layers, clean and adversarial data are indistinguishable. And this is indeed the case! Looking at the first and third columns of Figure 7, we observe that the simple classifier formed by the centers of the early layers is quite robust ($\sim$40%) to these adversarial examples (both train and test). Curiously, this robustness is higher than that of the simple classifiers defined by layers of an adversarially trained model (although the two numbers are not directly comparable). This is, undeniably, a peculiar phenomenon of standardly trained models that is worth more exploration; could it be that the lesser variability exhibited in the earlier layers is actually beneficial for robustness, or is it just the stability of the feature space that makes prediction more robust?

![10_image_0.png](10_image_0.png)

Figure 6: Layerwise evolution of NC1, NC2 and NC4 for ST and AT networks. NC metrics for perturbed data tend to undergo some amount of clustering in the earlier layers. For AT, collapse undergoes a slower decrease through layers than for ST. Setting: CIFAR-10, $\ell_\infty$ adversary.

![10_image_1.png](10_image_1.png)

Figure 7: Layerwise NCC classifier. We measure the performance of the NCC classifier obtained from (training) class means on both train and test data. NCC Robustness refers to NCC Accuracy on perturbed data. Note that, on training data, the NCC Robustness and the perturbed NCC-Net Matching Rate curves overlap. Early layers give a surprisingly robust NCC classifier (NCC Robustness) for both train and test data. Setting: CIFAR-10, $\ell_\infty$ adversary.

## 5 Conclusion

Neural Collapse is an interesting phenomenon displayed by classification Neural Networks. We present experiments that quantify the sensitivity of this geometric arrangement to input perturbations, and, further, display that Neural Collapse can appear (but not always does!) in Neural Networks trained to be robust. Specifically, we find Adversarial Training (Madry et al., 2018) induces severe Neural Collapse not only on clean data but also for perturbed data, while TRADES (Zhang et al., 2019) leads to no Neural Collapse at all. Interestingly, simple nearest-neighbor classifiers defined by feature representations (either final or earlier ones) from either standardly or adversarially trained Neural Networks can exhibit remarkable accuracy and robustness, suggesting robustness is maintained in early layers in both situations, while it diminishes quickly across layers for standardly trained networks. We conclude that Neural Collapse is an elegant phenomenon, prevalent in many deep learning settings, with a yet unclear connection to the generalization and robustness of Neural Networks: it need not appear in adversarially-robust optimization. Our findings on robust networks call for a theoretical analysis of Neural Collapse, possibly moving beyond the unconstrained feature model, which by construction doesn't seem to be able to reason about perturbations in the data.
## Acknowledgments This work was supported by the National Science Foundation under NSF Award 1922658, the Dean's Undergraduate Research Fund from the NYU College of Arts and Science, and in part through the NYU IT High Performance Computing resources, services, and staff expertise. ## References Ido Ben-Shaul and Shai Dekel. Nearest class-center simplification through intermediate layers. *CoRR*, abs/2201.08924, 2022. Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017, pages 39–57. IEEE Computer Society, 2017. Corinna Cortes and Vladimir Vapnik. Support-vector networks. *Mach. Learn.*, 20(3):273–297, 1995. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark. In *Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks* Track (Round 2), 2021. Weinan E and Stephan Wojtowytsch. On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers. In Joan Bruna, Jan Hesthaven, and Lenka Zdeborova, editors, *Proceedings* of the 2nd Mathematical and Scientific Machine Learning Conference, volume 145 of Proceedings of Machine Learning Research, pages 270–290. PMLR, 16–19 Aug 2022. Cong Fang, Hangfeng He, Qi Long, and Weijie J. Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences of the United States of America, 118, 2021. Tomer Galanti, András György, and Marcus Hutter. On the role of neural collapse in transfer learning. CoRR, abs/2112.15121, 2021. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. X.Y. Han, Vardan Papyan, and David L. Donoho. Neural collapse under MSE loss: Proximity to and dynamics on the central path. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=w1UbdvWH_R3. Hangfeng He and Weijie J. Su. A law of data separation in deep learning. 2022. Like Hui, Mikhail Belkin, and Preetum Nakkiran. Limitations of neural collapse for understanding generalization in deep learning. *CoRR*, abs/2202.08384, 2022. Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, and Weijie J Su. An unconstrained layer-peeled perspective on neural collapse. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=WZ3yjh8coDg. Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings, 2017. Xiao Li, Sheng Liu, Jinxin Zhou, Xinyu Lu, Carlos Fernandez-Granda, Zhihui Zhu, and Qing Qu. Principled and efficient transfer learning of deep models via neural collapse. *arXiv preprint arXiv:2212.12206*, 2022. Yan Li, Ethan X. Fang, Huan Xu, and Tuo Zhao. Implicit bias of gradient descent based adversarial training on separable data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. Bochen Lv and Zhanxing Zhu. 
Implicit bias of adversarial training for deep neural networks. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. In *International Conference on Learning Representations*, 2018. Dustin G. Mixon, Hans Parshall, and Jianzong Pi. Neural collapse with unconstrained features. Sampling Theory, Signal Processing, and Data Analysis, 20:11, 2022. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 1765–1773, 2017. Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6614. Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Ramesh Karri, Ozgur Sinanoglu, AhmadReza Sadeghi, and Xun Yi, editors, Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, AsiaCCS 2017, Abu Dhabi, United Arab Emirates, April 2-6, 2017, pages 506–519. ACM, 2017. Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. *Proceedings of the National Academy of Sciences*, 117(40):24652–24663, 2020. Akshay Rangamani, Marius Lindegaard, Tomer Galanti, and Tomaso Poggio. Feature learning in deep classifiers through intermediate neural collapse. Technical report, Center for Brains, Minds and Machines (CBMM), 2023. Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning. In *Proceedings* of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine Learning Research*, pages 8093–8104. PMLR, 2020a. Leslie Rice, Eric Wong, and Zico Kolter. Overfitting in adversarially robust deep learning. In *International* Conference on Machine Learning, pages 8093–8104. PMLR, 2020b. Robert E. Schapire, Yoav Freund, Peter Barlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In Douglas H. Fisher, editor, Proceedings of the Fourteenth International Conference on Machine Learning (ICML 1997), Nashville, Tennessee, USA, July 8-12, 1997, pages 322–330. Morgan Kaufmann, 1997. Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, *Advances in Neural* Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3353–3364, 2019. 
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. *J. Mach. Learn. Res.*, 19:70:1–70:57, 2018.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks, 2014.

Tom Tirer and Joan Bruna. Extended unconstrained features model for exploring deep neural collapse. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pages 21478–21505. PMLR, 17–23 Jul 2022.

Tom Tirer, Haoxiang Huang, and Jonathan Niles-Weed. Perturbation analysis of neural collapse. *arXiv preprint arXiv:2210.16658*, 2022.

Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.

Huan Xu, Constantine Caramanis, and Shie Mannor. Robustness and regularization of support vector machines. *J. Mach. Learn. Res.*, 10:1485–1510, 2009.

Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pages 7472–7482. PMLR, 2019.

Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, and Zhihui Zhu. On the optimization landscape of neural collapse under MSE loss: Global optimality with unconstrained features. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pages 27179–27202. PMLR, 17–23 Jul 2022.

Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu. A geometric analysis of neural collapse with unconstrained features. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, *Advances in Neural Information Processing Systems*, 2021.

## Appendix A Exact Definitions Of Nc Metrics

Simplex ETF: A *standard simplex ETF* composed of C points is a set of points in $\mathbb{R}^C$, each point belonging to a column of

$$\sqrt{\frac{C}{C-1}}\left(I-\frac{1}{C}\mathbf{1}_{C}\mathbf{1}_{C}^{\top}\right),$$

where $I \in \mathbb{R}^{C\times C}$ is the identity matrix and $\mathbf{1}_{C} = [1\ \cdots\ 1]^{\top} \in \mathbb{R}^{C}$ is the all-ones vector. In our discussion, a *simplex* can be thought of as a standard simplex ETF up to partial rotations, reflections, and rescaling.

Between-class and within-class covariance: Using terminology developed in Section 3, we define the between-class covariance $\Sigma_B \in \mathbb{R}^{p\times p}$ as

$$\Sigma_{B}\triangleq\mathrm{AVG}_{c}(\mu_{c}-\mu_{G})(\mu_{c}-\mu_{G})^{\top}$$

and the within-class covariance $\Sigma_W \in \mathbb{R}^{p\times p}$ as

$$\Sigma_{W}\triangleq\mathrm{AVG}_{i,c}(\mathbf{h}_{i,c}-\mu_{c})(\mathbf{h}_{i,c}-\mu_{c})^{\top}.$$

(NC1) Variability Collapse:

$$\Sigma_{B}^{\dagger}\Sigma_{W}\rightarrow\mathbf{0},$$

where † denotes the Moore-Penrose inverse. The NC1 curve corresponds to $\mathrm{Tr}(\Sigma_{B}^{\dagger}\Sigma_{W})$.
(NC2) Convergence to Simplex ETF:

$$\langle\tilde{\mu}_{c},\tilde{\mu}_{c^{\prime}}\rangle\to-\frac{1}{C-1}\quad\forall\;c\neq c^{\prime}$$

$$\big|\,||\mu_{c}-\mu_{G}||_{2}-||\mu_{c^{\prime}}-\mu_{G}||_{2}\,\big|\to0\quad\forall\;c,c^{\prime}$$

The NC2 Equinorm curve corresponds to the variation of $||\mu_{c}-\mu_{G}||_{2}$ across all labels c, i.e., the standard deviation of these C quantities: $\mathrm{std}(||\mu_{c}-\mu_{G}||_{2})$. The NC2 Equiangular curve corresponds to $\mathrm{AVG}_{c\neq c^{\prime}}\,\mathrm{abs}\!\left(\langle\tilde{\mu}_{c},\tilde{\mu}_{c^{\prime}}\rangle+\frac{1}{C-1}\right)$, where abs is the absolute value operator.

(NC3) Convergence to self-duality:

$$\frac{\mathbf{w}_{c}}{||\mathbf{w}_{c}||_{2}}-\frac{\mu_{c}-\mu_{G}}{||\mu_{c}-\mu_{G}||_{2}}\to0\quad\forall\;c.$$

The NC3 curve corresponds to:

$$\sqrt{\sum_{c}\left\|\frac{\mathbf{w}_{c}}{||\mathbf{w}_{c}||_{2}}-\frac{\mu_{c}-\mu_{G}}{||\mu_{c}-\mu_{G}||_{2}}\right\|_{2}^{2}}.$$

(NC4) Simplification to NCC classifier:

$$\arg\max_{c^{\prime}}\,\langle\mathbf{w}_{c^{\prime}},\mathbf{h}\rangle+b_{c^{\prime}}\to\arg\min_{c^{\prime}}\,\|\mathbf{h}-\mu_{c^{\prime}}\|_{2}\quad\forall\;\mathbf{h}\in H(S).$$

The NC4 curve corresponds to the mismatch ratio of these two quantities.

NCC-Network Matching Rate:

$$\arg\max_{c^{\prime}}\,\langle\mathbf{w}_{c^{\prime}},\mathbf{h}\rangle+b_{c^{\prime}}\stackrel{?}{=}\arg\min_{c^{\prime}}\,\|\mathbf{h}-\mu_{c^{\prime}}\|_{2},\quad\mathbf{h}\in H(S^{\prime}).$$

This quantity matches NC4 when S′ = S, where S is the dataset that the classifier was trained on and S′ is the dataset of interest. When we study the presence of Neural Collapse in intermediate layers of an already converged network, we first collect its intermediate representations at this layer. The centers of these representations can be used to form a simple NCC classifier. This allows us to compute the alignment of the predictions of the intermediate-layer NCC and the entire network over S′. In our experiments, we calculate the NC statistics with the code provided by Han et al. (2022)3.

3https://colab.research.google.com/github/neuralcollapse/neuralcollapse/blob/main/neuralcollapse.ipynb

## B Experimental Details

Code. For ℓ∞ and ℓ2 PGD attacks with ST and AT, we used the code from Rice et al. (2020a)4. For TRADES, we adopted the original implementation5. For the 160px ImageNette, all original images have the shortest side resized to 160px. To fulfill the requirement of NC, we use a CenterCrop of 160 to fix the train and test set. We have attached the code for reproducing NC results with ST, AT, and TRADES within a zip file.

Plotting. Throughout our paper, we plot all quantities per 5 epochs in all figures.

Layerwise NC. We study the layerwise NC1, NC2 and NC4 quantities for both PreActResNet18 (ResNet18) and VGG11. With ResNet18, which consists of one convolutional layer, four residual blocks, and the final linear layer, we use the features after every block for the first five blocks (one convolutional layer and four residual blocks) as representations. With VGG, which consists of eight convolutional blocks (convolutional layer + batch-normalization + max-pooling) and the final linear layer, we use the features after each convolutional block as representations. We apply average-pooling subsampling on representations that are too large for feasible computation of NC1's pseudo-inverse.

## C Complementary Results On Cifar-10, ℓ2 **Adversary**

Here we complement our main text with robust network experiments on CIFAR-10 for ℓ2 adversarial perturbations. Figure 8 illustrates NC results of Adversarial Training and TRADES training with the ℓ2 adversary.
All plots are consistent with our findings in the main text: Adversarial Training alters Neural Collapse such that the clean representation simplex overlaps with the perturbed representation simplex, whereas TRADES does not lead to any simplex ETF.

## D Complementary Results On Cifar-100

In this section, we reproduce our experiments on CIFAR-100. We illustrate results with (ℓ∞, ℓ2) adversaries and obtain the same conclusions as those on CIFAR-10. This suggests the universality of the intrinsic adversarial perturbation dynamics that we have detailed in the main text.

## D.1 Cifar-100 ℓ∞ **Standard And Adversarial Training Results**

All results are summarized within Figure 9. Similar to the main text, we plot the untargeted attack illustration in Figure 10. Notably, on CIFAR-100 with ST, adversarial perturbations also push the representation to leap toward the predicted class's simplex cluster with very small angular deviation.

## D.2 Cifar-100 ℓ∞ **Trades Results**

For CIFAR-100 ℓ∞ trained with TRADES, Figure 11 depicts the results, and we observe that no simplex exists, consistent with previous results.

## D.3 Cifar-100 ℓ2 **At And Trades Results**

These results are shown in Figure 12. All observations are consistent with previous results.

4https://github.com/locuslab/robust_overfitting
5https://github.com/yaodongyu/TRADES

![16_image_0.png](16_image_0.png)

Figure 8: Accuracy, Loss and NC evolution with adversarially trained and TRADES trained networks. Setting: CIFAR-10, ℓ2 adversary.

## D.4 Cifar-100 Simplex Similarity Results

The Simplex Similarity and non-centered Angular Distance of the simplices formed by targeted adversarial and clean examples with ST, and the simplices generated by clean and perturbed examples with AT, are depicted in Figure 13. The result is the same as the one for CIFAR-10 in the main text, Figure 4.

## D.5 Cifar-100 Layerwise Results

Here, in Figure 14 and Figure 15, we perform the same computations as those in Figure 6 and Figure 7. We arrive at the same conclusions.

![17_image_0.png](17_image_0.png)

Figure 9: Accuracy, Loss and NC evolution with standardly trained networks. Setting: CIFAR-100, ℓ∞ adversary.

## E Small Epsilon Results

Here, we illustrate how AT indeed progressively induces more robust NC metrics and simplex ETFs with respect to the perturbation radius ϵ. Figure 16 shows the NC metrics over 8/255-perturbed data. Conversely, using an ST model, the NC metrics when evaluating on (2/255, 4/255, 8/255)-perturbed data also increase monotonically with adversarial strength. This is illustrated in Figure 17.6

## F Imagenette Results

To further corroborate our experimental conclusions, we append ImageNette results here. Results are illustrated in Figures 18, 19, 20 and 21.

6For small radius AT and small radius adversarial attacks for ST, we scale the PGD step size α linearly with ϵ to ensure that PGD works properly.

![18_image_1.png](18_image_1.png) ![18_image_0.png](18_image_0.png)

Figure 10: Illustration of untargeted adversarial attacks on standardly trained, converged models that correspond to one random seed. (CIFAR-100, ℓ∞). *Left:* Number of examples with a certain predicted label. *Inner Left:* The norms of clean class-means. *Inner Right:* The norms of predicted class-means with perturbed data. *Right:* Angular distance between clean and predicted class-mean with perturbed data. *Upper:* ResNet18; *Lower:* VGG11. For 100 classes, the between-class angular distance is arccos(−1/99) ≈ 1.58 rad ≈ 90.58 degrees, while 0.2 rad is only 11.4 degrees.
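For reference, the caption's numbers follow directly from the simplex ETF geometry: for C classes, the pairwise cosine between centered, normalized class-means is $-1/(C-1)$, so

$$\theta_{C=100}=\arccos\!\left(-\tfrac{1}{99}\right)\approx 1.581\ \mathrm{rad}\approx 90.58^{\circ},\qquad \theta_{C=10}=\arccos\!\left(-\tfrac{1}{9}\right)\approx 1.682\ \mathrm{rad}\approx 96.38^{\circ}.$$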
## G Test Set Neural Collapse Results

For the completeness of our investigation, we illustrate the test set loss, accuracy, and NC evolution in Figures 22, 23, 24, 25, 26 and 27. We put TRADES and AT results together for a straightforward comparison between these two robust optimization algorithms. Still, TRADES leads to a non-collapsed dynamic, and, clearly, representations on perturbed data of robust networks no longer form a simplex ETF, because the robust test accuracy is less than 40%. An interesting observation is that, though both robust optimization algorithms exhibit significant robust overfitting (Rice et al., 2020b), AT's robust accuracy *increases* in the final stage, alongside an increase in the NC2 quantity.

![19_image_0.png](19_image_0.png)

Figure 11: Accuracy, Loss and NC evolution with TRADES trained networks. *Upper:* ResNet18; *Lower:* VGG11. No simplices are formed with TRADES training. Setting: CIFAR-100, ℓ∞ adversary.

![20_image_0.png](20_image_0.png)

Figure 12: Accuracy, Loss and NC evolution with ℓ2 robust models on CIFAR-100. Setting: CIFAR-100, ℓ2 adversary.

![20_image_1.png](20_image_1.png)

Figure 13: Angular distance. *Left and Inner Left:* Average between targeted attack class-means and clean class-means on ST network. *Inner Right and Right:* Average between perturbed class-means and clean class-means on AT network. Setting: CIFAR-100, ℓ∞ adversary.

![21_image_0.png](21_image_0.png)

Figure 14: Layerwise evolution of NC1, NC2 and NC4 for ST and AT networks. NC metrics for perturbed data tend to undergo some amount of clustering in the earlier layers. For AT, collapse undergoes a slower decrease through layers than for ST. Setting: CIFAR-100, ℓ∞ adversary.

![21_image_1.png](21_image_1.png)

Figure 15: Layerwise NCC classifier. We measure the performance of the NCC classifier obtained from (training) class means on both train and test data. NCC Robustness refers to NCC Accuracy on perturbed data. Note that, on training data, the NCC Robustness and the perturbed NCC-Net Matching Rate curves overlap. Early layers give a surprisingly robust NCC classifier (NCC Robustness) for both train and test data. Setting: CIFAR-100, ℓ∞ adversary.

![22_image_0.png](22_image_0.png)

Figure 16: Progressive NC evolution, AT with varying strength. The color indicates the epsilon used for training. Setting: CIFAR-10, ℓ∞ adversary.

![22_image_1.png](22_image_1.png)

Figure 17: Progressive NC evolution, ST with varying attacking strength. The color indicates the epsilon used for **evaluation**. Setting: CIFAR-10, ℓ∞ adversary.

![23_image_0.png](23_image_0.png)

Figure 18: Accuracy, Loss and NC evolution with standardly-trained and adversarially-trained networks. Setting: ImageNette, ℓ∞ adversary.

![24_image_0.png](24_image_0.png)

Figure 19: Accuracy, Loss and NC evolution with TRADES trained networks. *Upper:* ResNet18; *Lower:* VGG11. No simplices are formed with TRADES training. Setting: ImageNette, ℓ∞ adversary.

![24_image_1.png](24_image_1.png)

Figure 20: Angular distance. *Left and Inner Left:* Average between targeted attack class-means and clean class-means on ST network. *Inner Right and Right:* Average between perturbed class-means and clean class-means on AT network. Setting: ImageNette, ℓ∞ adversary.

![25_image_0.png](25_image_0.png)

Figure 21: Accuracy, Loss and NC evolution with standardly-trained and adversarially-trained networks. Setting: ImageNette, ℓ2 adversary.
![26_image_0.png](26_image_0.png)

Figure 22: Test set accuracy, Loss and NC evolution with standardly-trained and adversarially-trained networks. Setting: CIFAR-10, ℓ∞ adversary.

![27_image_1.png](27_image_1.png) ![27_image_0.png](27_image_0.png)

Figure 23: Test set accuracy, Loss and NC evolution with adversarially-trained and TRADES-trained networks. Setting: CIFAR-10, ℓ∞ adversary.

![28_image_0.png](28_image_0.png)

Figure 24: Test set accuracy, Loss and NC evolution with standardly-trained and adversarially-trained networks. Setting: CIFAR-100, ℓ∞ adversary.

![29_image_0.png](29_image_0.png) ![29_image_1.png](29_image_1.png)

Figure 25: Test set accuracy, Loss and NC evolution with adversarially-trained and TRADES-trained networks. Setting: CIFAR-100, ℓ∞ adversary.

![30_image_0.png](30_image_0.png)

Figure 26: Test set accuracy, Loss and NC evolution with standardly-trained and adversarially-trained networks. Setting: ImageNette, ℓ∞ adversary.

![31_image_0.png](31_image_0.png)

Figure 27: Test set accuracy, Loss and NC evolution with adversarially-trained and TRADES-trained networks. Setting: ImageNette, ℓ∞ adversary.
Review 1:

Summary: This paper empirically studied several properties regarding Neural Collapse for the robustness of NNs and for NNs with adversarial training. The paper found that adversarial attacks tend to break Neural Collapse; Neural Collapse also happens in adversarial training but not necessarily in all robust training methods (e.g., TRADES); earlier layers tend to have stronger Neural Collapse.

Strengths and Weaknesses:

Strengths:
- This paper analyzed Neural Collapse on NNs under adversarial attacks and NNs with adversarial training. The paper has presented several new and interesting findings along this line.
- Experimental findings sound reasonable and the experiments are detailed.

Weaknesses:
- Section 3 (methodology) seems to be mostly from previous works. If that's the case, I'd suggest the authors make this section a background section while focusing on the experiments as the major contributions.

Requested Changes: See above. Possibly make Section 3 a background section rather than a methodology section.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The paper investigates neural collapse phenomena in the context of adversarially modified examples and adversarially trained networks. It presents an empirical investigation based on the CIFAR-10 and CIFAR-100 datasets, ResNet18 and VGG11 neural networks, and PGD attacks with ℓ∞ and ℓ2 norms. With the experimental results, it is conjectured that non-robust networks have a simplex of final-layer representations that is fragile under adversarial attacks, while networks trained with the standard adversarial training approach organize adversarial samples into a simplex as well. Additionally, it was experimentally analyzed whether earlier (shallower) layers exhibit more/less neural collapse behavior on clean and adversarial examples, with the conclusion that early layer representations are more robust.

Strengths and Weaknesses: The paper provides a large number of experiments and different perspectives on the results obtained. The investigated topic can advance the understanding of adversarial robustness for interpolating networks. Nevertheless, the paper is rather hard to read and it seems that the claimed contributions are not supported by the experimental results, i.e.:

I do not understand how the conclusion about samples making a "class leap" was made. According to the experimental results description on page 6 and the illustration in Figure 1, the adversarial samples move only a small angle away from the correct class. Why is it called a leap?

The main conjecture about the connection between neural collapse and adversarial robustness is confusing: while it is interesting that the emergence of neural collapse does not help robustness, it follows from the experiments that robust networks can both have and not have simplex structure (Madry AT vs TRADES). What is the authors' conclusion here? Can the answer be traced to some properties of TRADES or standard AT?

It is unclear to me how the conclusions about more neural collapse in the earlier layers for adversarial samples were made: the experiments presented in the paper show that neural collapse is equally absent both in early and late layers for non-robust networks. Correspondingly, the experiments showing robustness of early layer representations seem to be disconnected from the overall line of work. Also, it is claimed in section 4.4 that "earlier layers show same amount of clustering as later on the perturbed data" - but there is no clustering, as was claimed before?
And finally, NCC accuracy on clean data matches NCC accuracy on perturbed data according to the experiments - but how high is this accuracy? Is it significant?

Two additional criteria of neural collapse introduced on pages 4-5 are not used in the first group of experiments and seem to be needed only for the layer-wise analysis (or at least this was not clearly stated). Also, why does NCC-Accuracy stem from NC4? NC4 states that classification becomes essentially NCC, but it does not necessarily mean that its accuracy is high (or low).

I do not see a discussion of Figure 3 in the text.

There are some formulations in the text that are hard to understand, e.g.:
- "To decouple our analysis from label dependencies that accompany standard attacks" - please explain which dependencies you have in mind. Also please explain why targeted attacks that allow for balanced classes should produce more interesting results.
- What is meant by the "classifier of a class c"? Is it a neuron that outputs exactly this class value?
- NC3 means that the weights are becoming equal to representations?
- The definition of the NCC-Network matching rate is very strange: the text states that it is a rate, but the formula looks like it is either true or false.
- The criterion of NC should be clearly stated for adversarial samples - currently the text is confusing, using the term "perturbed class mean" without defining it
- It is unclear whether, for experiments with AT networks, adversarial samples are generated at each epoch for the state of the network at that epoch, or only for the final network. This can severely affect the observed results.

Minor:
- The last sentence of the caption for Figure 1 is broken
- S' is not defined before its use on page 4
- The TPT abbreviation is not introduced
- What is the difference between \mu'_c and \mu_{c'}?

Requested Changes: The central claim of the paper should be explained better. Is it that adversarial samples form a separate simplex and that in some cases AT networks also form a simplex? Is there anything more? Currently I cannot see a conclusive result that is confirmed throughout the paper and can serve as a contribution. While the experiments are definitely interesting, they mostly contradict each other.

The presentation should be adjusted to be clearer and to draw a visible connection between the experimental results and the claims. Currently, tracking the discussion of experiments back to the original claims is hard. In case this can be clarified and all the claims are indeed confirmed, I would recommend acceptance.

The figures are very hard to read. Maybe it is worth considering leaving fewer figures in the main text, but making them larger and discussing them in more detail (additionally referencing all the other experiments that lead to the same conclusion).

Broader Impact Concerns: The work is theoretical and analyses neural collapse for adversarial samples/training. Currently it does not require any impact statement.

==================================================

Review 3:

Summary: The authors investigate the relationship between neural collapse and adversarial robustness. Through experiments on standard vision datasets, they find that neural collapse is not robust on standard networks, but is robust for adversarially trained networks. Furthermore, the authors find that earlier layers of the network have more collapse on adversarially perturbed data.

Strengths and Weaknesses:

**Strengths**

This paper on the Robustness of Neural Collapse and the Neural Collapse of Robustness has several strengths.
Firstly, it investigates a novel connection between neural collapse and robustness, shedding light on the stability properties of simplices and their behavior under adversarial attacks. Secondly, the well-chosen experiments provide comprehensive empirical results that offer valuable insights into the properties of robust and non-robust machine learning models. Notably, the authors consider both TRADES and adversarial training in their experiments and include code for their experiments, which is valuable. Thirdly, the descriptions and illustrations are clear, making the paper accessible to a wide audience. Overall, this paper makes a significant contribution to the field, providing a deeper understanding of the behavior of neural networks and their connection to generalization and robustness.

**Weaknesses**

The main weakness of the paper in my view is that experiments are only performed on CIFAR-10 and CIFAR-100. It would strengthen the paper to also conduct experiments on a larger dataset such as ImageNet or in a different domain (outside vision).

Requested Changes:

**Would strengthen work**
* Some of the numbers in the Figures are small and hard to read; it would be great to have them enlarged
* It is not clear what the margins in some of the figures indicate (e.g. Figure 2); it would be helpful to note this in the caption
* Experiments on a larger dataset or different domain

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: The authors might want to consider improving the presentation of their conclusions, as multiple reviewers found it difficult to connect some of the experimental results to the conclusions. While the authors made progress towards improving the presentation by explicitly defining technical terms and improving the font-size and layout of figures, I think the authors can clarify their results further in the presentation and writing.

==================================================
# TOFU: Towards Obfuscated Federated Updates By Encoding Weight Updates Into Gradients From Proxy Data

Anonymous authors Paper under double-blind review

## Abstract

Advances in Federated Learning and an abundance of user data have enabled rich collaborative learning between multiple clients, without sharing user data. This is done via a central server that aggregates learning in the form of weight updates. However, this comes at the cost of repeated expensive communication between the clients and the server, and concerns about compromised user privacy. The inversion of gradients into the data that generated them is termed data leakage. Encryption techniques can be used to counter this leakage, but at added expense. To address these challenges of communication efficiency and privacy, we propose TOFU, a novel algorithm which generates proxy data that encodes the weight updates for each client in its gradients. Instead of weight updates, this proxy data is now shared. Since input data is far lower in dimensional complexity than weights, this encoding allows us to send much less data per communication round. Additionally, the proxy data resembles noise, and even perfect reconstruction from data leakage attacks would invert the decoded gradients into unrecognizable noise, enhancing privacy. We show that TOFU enables learning with less than 1% and 7% accuracy drops on the MNIST and CIFAR-10 datasets, respectively. This drop can be recovered via a few rounds of expensive encrypted gradient exchange. This enables us to learn to near-full accuracy in a federated setup, while being 4× and 6.6× more communication efficient than the standard Federated Averaging algorithm on MNIST and CIFAR-10, respectively.

## 1 Introduction

Federated learning is the regime in which many devices have access to localized data and communicate with each other, either directly or through a central node. The goal is to improve their learning abilities collaboratively, without sharing data. Here, we focus on the centralized setting, in which each device or 'client' learns on the data available to it and communicates the weight updates to a central node or 'server', which aggregates the updates it receives from all the clients. The server propagates the aggregated update back to each client, thus enabling collaborative learning from data available to all devices, without actually sharing the data. The abundance of user data has enabled rich complex learning. However, this comes at the cost of increased computational or communication costs between the clients and the server, and with increasing concerns about compromised user privacy. Privacy of user data is a growing concern, and standard federated averaging techniques have been shown to be vulnerable to data leakage by inverting gradients into the data that generated them (Zhu & Han, 2020; Geiping et al., 2020; Yin et al., 2021; Fowl et al., 2021; 2022). Gradients can be encrypted to preserve privacy, but this incurs further communication overhead (Bonawitz et al., 2017). In this work, we focus on the communication between the clients and the server, a critical point for both communication and data leakage. Traditionally, in every communication round, each client shares its weight updates with the server. To enable complex learning, the models are getting larger, growing to many millions of parameters (Simonyan & Zisserman, 2014; He et al., 2016).
To put things in context, a VGG13 model has 9.4 million parameters, resulting in 36 MB of data being shared per communication round, per device. Since each device only has limited data, the number of rounds needed for the server to reach convergence is orders of magnitude more than the number needed by the individual clients, further heightening the communication cost and the opportunities for privacy leaks. This cost quickly grows prohibitive in resource constrained settings with limited bandwidth. To address these concerns, we propose TOFU, a novel algorithm that works Towards Obfuscated Federated Updates, outlined pictorially in Figure 1. Here, each client generates synthetic proxy data whose combined gradient captures the weight update, and communicates this data instead of the weights. This mitigates two issues simultaneously. Data

![1_image_0.png](1_image_0.png)

Figure 1: A pictorial representation of our encoding. The loss landscape shown in blue is taken from Li et al. (2017), with the starting point marked with a red circle. Each client learns on some minibatches of real data, shown on the top. The updates from these minibatches are marked with red arrows. The final weight update to be encoded and communicated, $U_{real}$, is shown in white. We construct a limited set of synthetic data that generates gradients $g_{syn}$ on the loss landscape, a weighted combination of which results in $U_{syn}$. The reconstruction algorithm optimizes these images and weights (denoted by α) to maximize the cosine similarity between $U_{syn}$ and $U_{real}$. The synthetic images are visualized on the left and resemble noise, obfuscating user data.

is much lower dimensional than gradients. For context, CIFAR-10 images only have 3072 pixels, and we show that TOFU needs under 100 images to capture the weight updates well. Sending these images instead of the weight updates for VGG13 results in an order of magnitude reduction in communication costs per round. Additionally, the synthetic data resembles noise, and existing techniques would invert the gradients to this noise rather than the true data, thus enhancing privacy. To further improve communication efficiency and encourage the synthetic images to differ from the true data distribution, we show that our method can approximate the gradient well even with images that are downsampled by 4×, or reduced to a single channel. The synthetic images are visualized in Figure 2. Since our method approximates gradients to reduce communication costs and enhance privacy, it results in a slight accuracy drop. We exchange proxy data for most of the communication rounds, which are tolerant to noisy updates. Closer to convergence, updates are more precise and approximations are harmful. In these few communication rounds, we recover any accuracy drop by sharing the true full weight updates. In this phase, care needs to be taken to ensure privacy via expensive encryption techniques. Since this sensitive phase consists of far fewer communication rounds than the non-sensitive learning phase, the overhead resulting from it is countered by the communication efficiency achieved by sharing synthetic data instead of weight updates for most of the communication rounds. We show that we need only 3 and 15 full weight update rounds for MNIST and CIFAR-10, respectively, to recover any drop in accuracy. This proposed hybrid approach provides both communication efficiency and privacy, without any loss in accuracy.
We demonstrate TOFU on the CIFAR-10 dataset in single device setups and show that we can learn with a 3% accuracy drop while communicating 17× fewer parameters on average. We extend this to a federated setup, with data distributed in an IID (Independent and Identically Distributed) fashion. We show that with a few additional rounds of full weight update exchange, we can learn to accuracies comparable to FedAvg while achieving up to 4.6× and 6.8× better communication efficiency on MNIST and CIFAR-10, respectively. We emphasize that TOFU will result in increasing gains with increasing complexity of the models, since the size of weight updates will grow but the image size stays constant.

## 2 Background

McMahan et al. (2017) pioneered the field of Federated Learning and proposed the first algorithm for distributed private training, called FedAvg, which serves as our baseline. Here, only weight updates are shared with the server, which aggregates updates from all clients and shares them back with each client. There are two research thrusts, depending on whether the local data present at any client is distributed in an IID fashion or not. We focus on the IID

![2_image_0.png](2_image_0.png)

Figure 2: (a) True images sampled from the CIFAR-10 dataset. (b) Full-sized (32 × 32 × 3) synthetic images generated by TOFU. (c) Synthetic images generated by downsampling the number of channels (32 × 32 × 1). (d) Synthetic images generated by downsampling the height and width (16 × 16 × 3). The synthetic images encode the first round of weight exchange, corresponding to the learning from 200 minibatches.

setting in this work and direct readers to Kairouz et al. (2019) for a broader survey of non-IID methods. We focus on three key aspects in the IID setting: efficiency, privacy and accuracy, and discuss relevant works in each.

Efficiency Federated learning has two key areas of inefficiency: communication cost, both from client to server (up-communication) and from server to client (down-communication), and computational cost. The most potential for impact comes with decreasing client to server communication (Kairouz et al., 2019). In our work, we target both up- and down-communication efficiency. Related works include quantization or sparsification of the weight updates (Konečný et al., 2016; Horvath et al., 2019; Basu et al., 2019; Alistarh et al., 2016). While they significantly improve communication efficiency, concerns have been raised (Kairouz et al., 2019) about their compatibility with secure aggregation (Bonawitz et al., 2017) and differential privacy techniques (Abadi et al., 2016). Our method can be thought of as an indirect compression, by encoding updates into proxy inputs. Our proxy images are amenable to encryption, and can potentially be further quantized, resulting in additional savings. In this work, we focus on showing that it is possible to encode gradients into synthetic fake-looking data and still enable learning. Other methods restrict the structure of updates, such as to a low rank or a sparse matrix (Konečný et al., 2016), or split the final network between the client and the server (He et al., 2020). We impose no constraints on learning, and focus on the standard case where each client has a synchronized model and equal accuracy on queries from any other client's dataset.
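Since FedAvg is the baseline throughout, a minimal sketch of its server-side aggregation step may help fix ideas. The function name `fedavg_round` is illustrative, and the equal-weight averaging assumes the IID setting with equally sized local datasets; FedAvg proper weights clients by local dataset size.

```python
import torch

def fedavg_round(global_params, client_updates):
    """One FedAvg-style aggregation round (a sketch).
    global_params: list of server-held parameter tensors.
    client_updates: list over clients; each entry is a list of weight-update
    tensors with the same shapes as global_params."""
    num_clients = len(client_updates)
    with torch.no_grad():
        for i, p in enumerate(global_params):
            # Equal-weight average of the clients' updates for parameter i.
            avg_update = sum(u[i] for u in client_updates) / num_clients
            p.add_(avg_update)  # apply the aggregated update in place
    return global_params
```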
Privacy Recent methods such as Inverting Gradients (IG) (Geiping et al., 2020) and Deep Leakage from Gradients (DLG) (Zhu & Han, 2020) have shown that gradients can be inverted into the training images that generated them, violating user data privacy. GradInversion (Yin et al., 2021) improved upon IG by introducing better fidelity metrics in the objective to recover images from federated scenarios with larger batch sizes, and more complex models and datasets such as ResNets and ImageNet. This is cause for concern, and we circumvent it by showing that our proxy data looks like noise, and hence even perfect inversion by these techniques would only resemble noise. Fowl et al. (2021) show that altering the model architectures minimally allows the server to obtain user data without solving complex optimization problems. They also show that modifying only the larger linear layers can help recover user data. Methods to secure gradients from attack involve encryption and differential privacy techniques that add additional computational expense (Bonawitz et al., 2017; Abadi et al., 2016). These methods are compatible with our proxy data, should the need for extra encryption arise. Additionally, encrypting our proxy data will be less costly, since standard encryption costs are proportional to the size of the vector being encrypted (Bonawitz et al., 2017).

Accuracy Efforts to increase accuracy often focus on variance-reduced Stochastic Gradient Descent (SGD) (Karimireddy et al., 2019; Yu et al., 2019) or on adaptive optimization and aggregation techniques (Reddi et al., 2020; Karimireddy et al., 2020). Astraea (Duan et al., 2019) reschedules client participation based on the KL divergence of their data distributions in order to overcome data distribution imbalances and improve accuracy in federated settings. Our method is orthogonal to and compatible with such techniques.

Distillation Recently, there has been interest in one-shot federated learning, wherein there is only one communication round. An approach that is similar to ours, called DOSFL (Zhou et al., 2020), focuses on this setting. It is based on the dataset distillation (Wang et al., 2018) technique, in which the entire dataset is distilled into synthetic data. DOSFL uses this to distill the local data of each client and share that for one-shot learning. There are a few key differences between our method for synthetic data generation and dataset distillation. We generate proxy data that aligns its gradients to a desired weight update, whereas dataset distillation optimizes data for accuracy after learning on it. Dataset distillation shows very large drops in accuracy on the CIFAR-10 dataset (~26%) versus our single device results (Section 4.1), which show an average drop of 3%. DOSFL gets impressive results on MNIST, especially for a single round of communication, but does not show results on larger datasets like CIFAR-10, presumably due to the significant drop in the baseline technique of dataset distillation. In parallel with the development of this work, Cazenavette et al. (2022) proposed an extension of dataset distillation that precomputes training trajectories from an expert and saves them to guide the synthetic image creation. While that work overlaps with ours in concept, they focus on creating coresets to make training efficient, whereas we focus on improving communication efficiency in federated learning.
## 3 Methodology

This section outlines the desired properties of the synthetic dataset, the algorithm to create it, and the tradeoff between communication efficiency, privacy and accuracy.

## 3.1 Desiderata Of The Synthetic Dataset

The generated synthetic dataset should have two properties: (a) it must be small in size in order to ensure communication efficiency, and (b) it should not resemble the true data, to ensure that data leakage attacks are unable to invert the proxy gradients into real data, thus ensuring privacy of the real data. We discuss these in more detail next.

Communication Efficiency: Input data is much lower in dimensional complexity than gradients (for instance, 3072 parameters per image in CIFAR-10 compared to 9.4 million parameters for sending VGG13 weight updates). This allows us to attain the first goal of efficient communication. We experimentally show that 64 images give us good results, which allows us to send ~50× less data per communication round as compared to the weight updates of VGG13. For improving efficiency, we also experiment with images that have (a) the 3 channels reduced to a single channel, replicated across all 3 channels, and (b) the height and width reduced by 2×, which are then upsampled with their nearest neighbor values as part of pre-processing. This yields additional savings of 3× and 4×, respectively.

Enhanced Privacy from Data Leakage: To attain the goal of privacy, we rely on the high dimensional, non-linear nature of neural networks to generate images that resemble noise to the human eye. We distill the weight update of a client after learning on many minibatches into a single minibatch of synthetic images. Combining weighted gradients is not the same as combining inputs, and we observe that condensing the learning from many images into a smaller set results in images that visually do not conform to the true data distribution. The resulting weighted gradient we generate is an approximation of the true weight update. This lossy compression buffers our gradient from data leakage, as can be observed from the results of performing the Inverting Gradients (IG) attack (Geiping et al., 2020) on our images, shown in Figure 3. Even if IG attacks were to invert the images perfectly, the inverted images would still look like noise, circumventing data leakage. We employ some additional tricks to encourage obscurity of the generated images. IG assumes availability of one-hot labels, or reconstructs one label per image. Instead, we use soft labels to further discourage reconstruction. Additionally, we weigh the gradients differently into the combined final gradient so that no true gradient is well represented. We distill the updates from a large number of minibatches (to the tune of

![4_image_0.png](4_image_0.png)

Figure 3: The top row visualizes actual synthetic images ($x_{syn}$) generated by our algorithm. We show 5 randomly picked images from a set of 32 images encoding a weight update at the 200th communication round of VGG13 on CIFAR-10, Synfreq = 1 epoch. The bottom row visualizes images recovered with the IG attack (Geiping et al., 2020) on the decoded weight update. Neither set of images resembles CIFAR-10 images, visually obfuscating the user data and protecting it from leakage.

a whole epoch) down to just a handful of images. While this incurs an accuracy loss, it results in images that do not resemble the real data distribution at all.
Additionally, forcing a downsampling prior on the images, in either channels or height and width, allows the image distribution to differ from the original one even more. These synthetic images are visualized in Figure 2 at the beginning of training. We also visualize the synthetic images generated by TOFU in the middle of the training process (epoch 200) and at maximum accuracy (epoch 400) in Supplementary section A.2.

Tradeoffs: The associated tradeoff cost is two-fold: (a) the clients have added computational complexity to create the synthetic data, and (b) the communication rounds needed for convergence increase, since we introduce some error in the weight updates. We emphasize that our method is better suited to use-cases where the clients have computational resources but are limited in communication bandwidth or cost. Furthermore, communication efficiency has been identified as the major efficiency bottleneck, with the potential for most impact (Kairouz et al., 2019). We account for the latter tradeoff of increased communication rounds when reporting our final efficiency ratios.

## 3.2 Creating The Synthetic Dataset

We now detail the algorithm that distills the change in model parameters into a synthetic dataset during training. Taking inspiration from the IG attack, we optimize synthetic data to align the resulting gradient direction with the true weight update. To formalize, let this true weight update, attained after a client learns on its true data, be referred to as $U_{real}$. We want to generate a synthetic dataset,

$$D_{syn}=\{(x_{syn_{i}},y_{syn_{i}},\alpha_{syn_{i}});\;i=1\ldots N\},$$

where N is the number of images in the synthetic dataset, and $x_{syn_i}$ and $y_{syn_i}$ refer to the i-th image and its soft label, respectively. The goal of reconstruction is that the combined gradient obtained upon forward and back-propagating all $\{x_{syn_i}\}$ is aligned with the true weight update, $U_{real}$. Each synthetic datapoint generates a single gradient direction, and with N datapoints in our synthetic dataset, we generate N different gradient directions. Traditionally, if we were to treat these N images as a minibatch, we would average the N gradients. However, we take a weighted average of the gradients, allowing us to span a larger space. We jointly optimize these weights, referred to as $\alpha_{syn_i}$ with $\sum_i \alpha_{syn_i} = 1$, along with the images and soft labels. Next, we show that weighing the gradients of each image in the backward pass is the same as weighing the losses from each image on the forward pass, since derivative and summation can be interchanged. Formally, let θ be the weights of the model, and θ(x) the output of the model. Let the loss per synthetic datapoint, denoted by $L(\theta(x_{syn_i}), y_{syn_i})$, be weighted by the respective $\alpha_{syn_i}$ and summed into $L_{syn}$, the overall loss of the synthetic dataset. Let the resulting gradient from backpropagating $L_{syn}$ be $U_{syn}$. Backpropagating $L_{syn}$ results in the desired weighted average of the individual gradients per datapoint, as shown:

$$L_{syn}=\sum_{i}\alpha_{i}L(\theta(x_{syn_{i}}),y_{syn_{i}})\tag{1}$$

$$U_{syn}=\frac{\nabla L_{syn}}{\nabla\theta}=\sum_{i}\alpha_{i}\frac{\nabla L(\theta(x_{syn_{i}}),y_{syn_{i}})}{\nabla\theta}\tag{2}$$

$$R_{loss}=\left(1-\frac{\langle U_{real}\cdot U_{syn}\rangle}{||U_{real}||_{2}\,||U_{syn}||_{2}}\right)\tag{3}$$

Standard cross entropy loss is used to calculate gradients from both the true and the synthetic data.
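The interchange of weighting and differentiation in Equations (1)-(2) can be checked numerically; below is a small PyTorch sketch of this check, using an illustrative toy model of our choosing.

```python
import torch

# Toy check that backpropagating the alpha-weighted sum of per-sample losses
# (Eq. 1) equals the alpha-weighted sum of per-sample gradients (Eq. 2).
torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
x, y = torch.randn(5, 8), torch.randint(0, 3, (5,))
alpha = torch.softmax(torch.randn(5), dim=0)   # weights summing to 1
ce = torch.nn.CrossEntropyLoss(reduction="none")

# Left: gradient of the weighted total loss.
lhs = torch.autograd.grad((alpha * ce(model(x), y)).sum(), model.weight)[0]

# Right: weighted sum of per-sample gradients.
rhs = sum(a * torch.autograd.grad(ce(model(x[i:i+1]), y[i:i+1]).squeeze(),
                                  model.weight)[0]
          for i, a in enumerate(alpha))
print(torch.allclose(lhs, rhs, atol=1e-6))  # True
```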
The synthetic data is optimized by minimizing the reconstruction loss, $R_{loss}$ (Equation 3), which is one minus the cosine similarity between the true update $U_{real}$ and the synthetic update $U_{syn}$. Since minimizing $R_{loss}$ only aligns the directions of gradients, we additionally send scaling values for each layer from $U_{real}$ to scale up $U_{syn}$. To avoid cluttering notation, we leave this out, since it only adds extra parameters equal to the number of layers. In addition, we use soft labels instead of the hard labels used in the real dataset. This provides more flexibility for the optimization algorithm to create a better alignment between synthetic and true updates, and discourages attacks like IG, which rely on one-hot labels. We use Adam (Kingma & Ba, 2015) to optimize the randomly initialized images to generate a gradient that aligns with the true weight updates. We use a learning rate of 0.1 for the images, labels and αs, for 1000 iterations, decayed by a factor of 0.1 at the 375th, 625th, and 875th iterations. For the downsampling experiments, since the network expects inputs of size 32 × 32 × 3, we replicate the single channel along all dimensions in the grayscale case, and perform nearest neighbor upsampling before feeding in the images for the case with reduced image size.

## 3.3 TOFU: The Federated Learning Algorithm

We now put all the parts together and describe how we utilize the synthesized dataset to enable communication efficient and private federated learning. All clients and the server have the same model initialization before the learning phase starts. Every client first trains on its private local data for a few minibatches and determines the true weight update, $U_{real}$, as the difference between the starting and ending weights. This true weight update is encoded using synthetic data ($D_{syn}$) as described in Section 3.2. $D_{syn}$ is then communicated in lieu of weight updates to the server. The server decodes the information by performing a single forward and backward pass to get the encoded weight update. The server repeats this for the proxy data received from all clients and averages the decoded updates. To ensure efficiency during down-communication as well, the server encodes its own weight update due to aggregation into proxy data, and sends this back to all clients. The clients then update their local models by decoding this information. The process is repeated until convergence. This is summarized in Algorithm 3 in Supplementary section A.1.

## 3.4 Efficiency-Privacy-Accuracy Tradeoff

The weight update statistics change with accumulation over different numbers of batches and with convergence progress. We now introduce the various hyperparameters that need to be tuned, and their effect on accuracy, privacy and efficiency.

Number of Synthetic Datapoints (Nimgs): The size of the synthetic dataset transmitted per communication round has a direct impact on privacy, communication efficiency, and accuracy. While a larger synthetic dataset provides better accuracy, as the encoding will be closer to the true weight update, the communication efficiency of the algorithm decreases, since we have to communicate more data. We note that using 64-128 datapoints gives us the best empirical results. We see larger approximation errors with smaller numbers of datapoints, buffering us more against attacks.

Synthesis Frequency (Synfreq): This denotes how many minibatches of weight updates should be accumulated by the client before communicating with the server. In FedAvg, this is usually one epoch.
A very large Synfreq results in larger accuracy drops, since a large accumulated weight update cannot be well represented by few synthetic images, but allows for enhanced privacy due to larger approximations. Conversely, a small Synfreq degrades efficiency, since we have to communicate more often per epoch.

Phases to Improve Accuracy (switch1, switch2): In the initial phase of learning, the gradient step is quite error tolerant, since there is a strong direction of descent. For the single device experiments, after a few communication rounds of warmup, we empirically see better results by scaling the learning rate by the reconstruction error. This is implemented by scaling $U_{syn}$ by $(1-R_{loss})$, capturing the cosine similarity between the true and the synthetic update. This enables small steps to be taken if the synthetic data could not approximate the true update well. For single-device experiments, we switch from warm-up to this scaled learning rate phase at 200 communication rounds, and refer to this point as switch1. In the Federated setting, we did not notice much improvement from this switch empirically, and thus leave it out for simplicity. However, we note that two sets of encoding are required now, for both up-communication and down-communication, and hence we see a larger accuracy drop than in the single device case. To counter this, we end with a few communication rounds of full weight update exchange to regain any accuracy loss. To ensure privacy, we recommend expensive encryption of the weight updates here. Since this consists of very few rounds (under 15), we do not sacrifice efficiency. We call the communication round where we switch to this final phase switch2, mark it by a star in the learning curves shown in Supplementary section A.3, and mention it in the corresponding hyperparameter sections. The efficiency savings we report take into account these rounds of expensive full gradient exchange as well. To summarize, for single device experiments, we have a brief warm-up phase of a few hundred communication rounds, and then switch to scaling the learning rate by the reconstruction error. For the federated setup, we do not scale the learning rate by communication round, and instead have a brief full weight update exchange phase of up to 15 communication rounds at the end of training.

Experimenting with Image Sizes: We want to encourage the synthetic data to resemble the true data distribution even less, while relying on our optimization algorithm to tune it to get a good match between the target gradient and the synthetic gradient. To do this, and to further enhance efficiency, we experiment with enforcing two downsampling priors on the synthetic images. The CIFAR-10 images are of size 32 × 32 × 3, and this is also the size of our synthetic data. In one experiment, we downsample the number of channels in the synthetic images from 3 to 1. The single channel is replicated across the 3 dimensions before being fed into the network for synthetic gradient calculation. In the other experiment, the synthetic images are sized 16 × 16 × 3, with the height and width upsampled to the correct size by nearest neighbor upsampling before being fed into the network. We show a comparison of what the true images, the synthetic full size images, the grayscale images, and the downsampled images look like in Figure 2. In Section 4.1, we show that we can successfully learn with all three kinds of synthetic images, with an average accuracy drop of 3% for full sized synthetic images, 5% for single-channeled images, and 3.5% for images with width and height halved.
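To make Sections 3.2-3.3 concrete, the sketch below shows a possible client-side encoder and the corresponding server-side decoder. This is a simplified illustration under our own assumptions, not the authors' released code: the function names (`encode_update`, `decode_update`), the softmax parameterizations used to keep the soft labels valid and the α weights summing to one, and the `layer_scales` interface for per-layer magnitudes are all ours.

```python
import torch
import torch.nn.functional as F

def encode_update(model, u_real, n_imgs=64, img_shape=(3, 32, 32),
                  n_classes=10, steps=1000, lr=0.1):
    """Client side: optimize (x, y, alpha) so that the weighted gradient they
    induce aligns with the true update u_real (a list of detached tensors, one
    per model parameter). Per-layer rescaling is omitted for brevity."""
    device = next(model.parameters()).device
    x = torch.randn(n_imgs, *img_shape, device=device, requires_grad=True)
    y_logits = torch.randn(n_imgs, n_classes, device=device, requires_grad=True)
    a_logits = torch.zeros(n_imgs, device=device, requires_grad=True)

    params = [p for p in model.parameters() if p.requires_grad]
    flat_real = torch.cat([g.flatten() for g in u_real])
    opt = torch.optim.Adam([x, y_logits, a_logits], lr=lr)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[375, 625, 875], gamma=0.1)

    model.eval()  # freeze batch-norm statistics during synthesis
    for _ in range(steps):
        opt.zero_grad()
        alpha = torch.softmax(a_logits, dim=0)         # weights sum to 1
        y_soft = torch.softmax(y_logits, dim=1)        # valid soft labels
        log_p = F.log_softmax(model(x), dim=1)
        l_syn = (alpha * (-(y_soft * log_p).sum(dim=1))).sum()         # Eq. (1)
        u_syn = torch.autograd.grad(l_syn, params, create_graph=True)  # Eq. (2)
        flat_syn = torch.cat([g.flatten() for g in u_syn])
        r_loss = 1 - F.cosine_similarity(flat_syn, flat_real, dim=0)   # Eq. (3)
        r_loss.backward()
        opt.step()
        sched.step()
    return (x.detach(), torch.softmax(y_logits, 1).detach(),
            torch.softmax(a_logits, 0).detach())

def decode_update(model, x_syn, y_soft, alpha, layer_scales=None):
    """Server side: one forward/backward pass over the proxy data reproduces
    the encoded update; layer_scales restores per-layer magnitudes."""
    params = [p for p in model.parameters() if p.requires_grad]
    log_p = F.log_softmax(model(x_syn), dim=1)
    l_syn = (alpha * (-(y_soft * log_p).sum(dim=1))).sum()
    u_syn = torch.autograd.grad(l_syn, params)
    if layer_scales is not None:
        u_syn = [g * s for g, s in zip(u_syn, layer_scales)]
    return u_syn
```

In the federated loop of Section 3.3, the server would call `decode_update` on each client's proxy data, average the results, and encode its aggregated update the same way for down-communication.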
## 4 Experimental Results And Discussion

In this section, we first demonstrate TOFU on a single device setup to show that privacy preserved learning with only synthetic data is possible. This setup can be thought of as a federated setup with only 1 client and no down-communication. We initialize two copies of the same network with the same weights. Network 1 represents the client, and learns on real data for Synfreq batches, generating Nimgs synthetic datapoints to send to Network 2, which emulates the server. Network 2 only learns on the synthetic data. Post communication, both networks have the same weights, since Network 1 knows how the synthetic data is going to update Network 2 and resets its own weights accordingly. For the single device experiments, we focus on getting the maximum accuracy from purely synthetic data, and hence we do not employ the final phase of full weight update exchanges. We first show the results of learning with similarly sized images as the real data, and then introduce priors that recreate the gradient via single channel images and images with width and height downsampled by 2. We then extend this to multiple clients in a federated setup. This has two encoding phases, the first carried out by each client to transmit their updates to the server (up-communication), and the second carried out by the server after aggregation from all clients (down-communication). Down-communication ensures that the weights of all clients and the server remain in sync after the end of each communication round. The experiments for the federated setup include a few rounds of full gradient exchange at the end in order to circumvent any accuracy drop, and the emphasis in these experiments is on efficiency. All learning curves are shown in Supplementary section A.3.

| Synthetic Image Size | Original (32×32×3) | | Grayscale (32×32×1) | | Downsampled (16×16×3) | |
|----------------------|-------------|----------|-------------|----------|-------------|----------|
| | Max Acc (%) | Comm Eff | Max Acc (%) | Comm Eff | Max Acc (%) | Comm Eff |
| *Baseline accuracy without using synthetic data: 88.6%* | | | | | | |
| **Varying Nimgs @ Synfreq = 200** | | | | | | |
| Nimgs = 32 | 84.05 | 32× | 81.24 | 97× | 82.61 | 121× |
| Nimgs = 64 | 85.78 | 18× | 83.39 | 45× | 84.14 | 60× |
| Nimgs = 96 | 86.05 | 10× | 83.63 | 31× | 85.29 | 46× |
| Nimgs = 128 | **86.81** | 8× | **84.09** | 23× | **85.74** | 30× |
| **Varying Synfreq @ Nimgs = 64** | | | | | | |
| Synfreq = 50 | 85.22 | 16× | 82.00 | 55× | 83.57 | 75× |
| Synfreq = 100 | 84.73 | 16× | 82.73 | 45× | 84.18 | 64× |
| Synfreq = 200 | **85.78** | 18× | **83.39** | 45× | 84.14 | 60× |
| Synfreq = 400 | 85.76 | 15× | 83.19 | 47× | 84.61 | 64× |
| Synfreq = 1 epoch | 84.79 | 19× | 81.3 | 51× | **84.62** | 64× |

Table 1: Single device accuracies and efficiency ratios for a VGG13 model on the CIFAR-10 dataset with synthetic data. Baseline accuracy for learning on real data with the same hyperparameters is 88.6%, as shown in grey. The best accuracy setting in each set of experiments using only synthetic data is highlighted in bold. Communication rounds required to reach maximum accuracy are shown in Supplementary section A.3.2. The network is trained with a batch size of 64, with 782 minibatches making up an epoch.

## 4.1 Single Device Experiments

Setup We demonstrate our results on the CIFAR-10 dataset. It comprises 50,000 training samples and 10,000 validation samples across 10 classes. For all experiments, we use a VGG-13 (Simonyan & Zisserman, 2014) network.
We use an SGD optimizer with learning rate 0.02, decayed by 0.2 at the 250th and 400th epochs, with a mini-batch size of 64 for a total of 500 epochs. The maximum baseline accuracy we achieve by training on real data is 88.6%. In this section, all non-baseline results are shown for learning on purely synthetic data, with varying Synfreq, Nimgs and synthetic image sizes. More details are given in Supplementary section A.3. The results are tabulated in Table 1. The efficiency shown is calculated as the ratio of the parameters sent when exchanging full gradients to the parameters sent with synthetic data, for the corresponding Synfreq and Nimgs setting. We perform the warm-up phase for all experiments for 200 communication rounds and then start to scale the learning rate by the reconstruction error. The hyperparameters are tuned for the case of Synfreq = 200 and Nimgs = 64 and are held constant for all the other single device experiments. We show additional results for the MNIST dataset in Supplementary section A.3.2.

Efficiency Calculation The number of parameters of each synthetic image is 3072 (32 × 32 × 3) if no prior is enforced on the image, 1024 (32 × 32 × 1) if single-channeled images are synthesized, and 768 (16 × 16 × 3) if downsampled images are synthesized. The size of each datapoint (an image, label, α trio) in the synthetic dataset is the sum of the size of the synthetic image + 10 (number of soft labels) + 1 (one α per image). Hence, the total size of the synthetic dataset ($D_{size}$) is the size of each datapoint multiplied by Nimgs. The communication efficiency (η) is then calculated as:

$$\eta=\frac{\text{Number of Model Parameters}\times\text{Number of Baseline Communication Rounds}}{D_{size}\times\text{Number of Synthetic Communication Rounds}}$$

Varying Nimgs Table 1 shows that increasing the synthetic dataset size improves accuracy, but reduces communication efficiency, as we need to send more parameters per communication round. We achieve good accuracies by learning on only synthetic data, with an average accuracy drop of 3%, 5.5% and 4% for synthetic images of the original size, grayscale images, and downscaled images, respectively, across all considered Nimgs. For further experiments, we fix Nimgs = 64 as a good trade-off point. The exchange frequency for both the baseline and the synthetic case for all Nimgs is set as 200. Baseline training reaches full accuracy sooner than learning with synthetic data, as expected, but even after accounting for that, we are able to achieve up to 121× more communication efficiency. The communication rounds vary between experiments and are shown in Appendix section A.3.2. We also show the corresponding results for MNIST in Appendix section A.3.2.

Varying Synfreq Here we show that whether we generate synthetic images to match a gradient as often as every 50 minibatches or as late as once an epoch (updates from 782 minibatches), we can converge to a reasonable accuracy. All simulations take a similar number of epochs to converge, and are able to converge to very similar accuracies. In Table 1, we compare efficiency when the synthetic images are communicated instead of the gradient, and we assume that both of these are communicated as often as the mentioned Synfreq. However, as we see in the federated setup in the next section, the frequency of communication can vary between the synthetic data and the real data. In those cases, a lower Synfreq will result in a requirement for more communication rounds and hence achieve smaller communication savings.
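As a concrete illustration of the efficiency calculation above, here is a minimal sketch; the model parameter count is an illustrative assumption, not the exact VGG-13 figure:

```python
def comm_efficiency(model_params, img_params, n_imgs,
                    baseline_rounds, synthetic_rounds):
    """Communication efficiency eta: parameters sent as full gradients
    divided by parameters sent as synthetic data. Each synthetic datapoint
    costs img_params + 10 (soft labels) + 1 (alpha)."""
    d_size = n_imgs * (img_params + 10 + 1)
    return (model_params * baseline_rounds) / (d_size * synthetic_rounds)

# Illustrative: ~9.4M model parameters, 64 full-size (32x32x3) images,
# equal numbers of baseline and synthetic communication rounds.
print(comm_efficiency(9_400_000, 32 * 32 * 3, 64, 1, 1))  # ~47.6x per round
```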
The corresponding rounds for convergence for all experiments are mentioned in Supplementary section A.3.2.

Forcing a Prior on Synthetic Image Sizes Here, we experiment with forcing the images not to follow the same distribution as the real dataset by constraining their size. In the first experiment, the synthetic images are constrained to have a single channel (results in column 2 of Table 1). In the second experiment, we constrain the images to be half the width and height (results in column 3 of Table 1). To be compatible with the expected image size before being fed into the network for gradient calculation, the grayscale images are duplicated across the 3 channels and the smaller images are upsampled to the correct size by copying the nearest neighbor's pixel value. The results show that we get a very small drop in accuracy relative to full sized images. The average accuracy drop from full sized synthetic images to images with height and width halved is only 0.5% across all experiments, and 2% to grayscale images. However, we get approximately 4× and 3× improvements in communication efficiency as a result of reducing the number of parameters to be communicated and learned per image.

Discussion We successfully show that synthetic data can be used to learn CIFAR-10 with small accuracy drops, using only synthetic data. This drop is later recovered in the federated setup. For full size images, the average accuracy drop from baseline is 3% at 17× communication efficiency. For grayscale images, we get an average accuracy drop of 5% at 49× communication efficiency, and a 3.5% drop at 65× savings for images with width and height halved. We also wish to reiterate that the larger the network, the greater the savings that can be achieved with our method, since the size of the updates grows while the size of the images remains constant.

## 4.2 Federated Learning Experiments

Setup We now discuss the federated experiments conducted on 5 and 10 clients for CIFAR-10 and MNIST, respectively, with an IID distribution of data. This results in each client having 157 minibatches of 64 image-label pairs for CIFAR-10 and 97 minibatches of size 64 for MNIST. We compare with FedAvg (McMahan et al., 2017) as our baseline, with exchanges happening once per epoch. We assume a participation rate of 1. The results are shown in Table 2 for MNIST and Table 3 for CIFAR-10.

Efficiency Calculation In the case where full gradients are exchanged at the end in order to recover the accuracy, the communication efficiency (η) is calculated by including the parameters used for exchanging the full gradients ($P_{full}$):

$$P_{full}=\text{Number of Model Parameters}\times\text{Number of Full Gradient Exchanges}$$

$$\eta=\frac{\text{Number of Model Parameters}\times\text{Number of Baseline Communication Rounds}}{\left(D_{size}\times\text{Number of Synthetic Communication Rounds}\right)+P_{full}}$$

Varying Synfreq and Nimgs We get the best results at a Synfreq equal to one local epoch for both datasets, similar to FedAvg. We note that there is more variation in efficiency with varying Synfreq settings than in the single device experiments. That is because, in this case, our federated baseline is fixed at a Synfreq of 1 epoch, irrespective of the Synfreq of the synthetic images. Increasing the synthetic dataset size (Nimgs) results in better or comparable accuracy but sends more parameters per communication round. We get the best accuracy for Nimgs = 96 for both datasets, seeing drops of 2% and 12% for MNIST and CIFAR-10, with 3.6× and 14.7× communication efficiency, respectively.
We note that the drop is larger than what we see in the single device case, since each round incurs approximations from both up- and down-communication. We show that we can recover this drop almost completely in just 3 communication rounds for MNIST and 15 rounds for CIFAR-10, still allowing for 3–6.6× communication efficiency for MNIST and 5.4–8.8× for CIFAR-10, as seen in the last columns of Tables 2 and 3. Additionally, we realize that reporting maximum accuracy might skew communication efficiency in our favor, since FedAvg reaches higher accuracy, which will naturally take more communication rounds. To account for this, we also report communication efficiency at iso-accuracy in Tables 7 and 8 for MNIST and CIFAR-10, respectively, in Supplementary section A.3. When we consider 72% as the baseline accuracy for CIFAR-10, our efficiency of 5.4–8.8× reduces to 2.4–4.7×. This shows that our method is still more efficient under stricter evaluation conditions. We found that the combination of Synfreq = 1 epoch and Nimgs = 64 provides a good trade-off point between communication efficiency and accuracy drop.

| MNIST, LeNet5, 10 Clients with IID distribution | Using Only Synthetic Data | | + 3 Additional Rounds of FedAvg | |
|------------------|---------------|------------|---------------|------------|
| | Max. Acc. (%) | Comm. Eff. | Max. Acc. (%) | Comm. Eff. |
| FedAvg | 98.91 | 1.0× | - | - |
| **Varying Nimgs (Synfreq = 1 local epoch)** | | | | |
| 32 | 95.39 | 6.9× | 98.06 | 6.6× |
| 64 | 96.06 | 4.2× | 98.14 | 4.1× |
| 96 | **96.77** | 3.6× | 98.01 | 3.5× |
| 128 | 95.23 | 4.2× | 98.07 | 4.1× |
| **Varying Synfreq (Nimgs = 64)** | | | | |
| 25 | 92.00 | 3.4× | 98.29 | 3.3× |
| 50 | 91.71 | 7.9× | 98.19 | 7.5× |
| 1 epoch | **96.06** | 4.2× | 98.14 | 4.1× |
| 2 epochs | 95.52 | 3.1× | 97.91 | 3.1× |

Table 2: Accuracy and efficiency of the federated platform on MNIST. The baseline of FedAvg is shown in grey. The best accuracy setting in each set of experiments using only synthetic data is highlighted in bold.

Enforcing a Prior on Synthetic Image Sizes We also experimented with restricting the optimization to generate single-channeled images in the federated setup. Table 3 shows that generating 1-channeled images instead of 3-channeled images provides communication efficiency at the cost of accuracy. Using only synthetic images provides a communication efficiency of 46–117×, however with an accuracy drop of 17–25% from the FedAvg baseline. This drop can be reduced to less than 1% with 15 additional rounds of full weight update exchange, while still achieving a communication efficiency of 8.9–10.1×.

Cost of Full Weight Update Exchange TOFU shares full weight updates for the last few epochs to regain full accuracy, and these need to be encrypted to ensure privacy. For a conservative estimate while ensuring privacy, we assume that we need to encrypt all parameters sent during all communication rounds for both methods, including synthetic data and full weight updates. Secure aggregation (Bonawitz et al., 2017), a commonly used protocol, has a communication cost of $O(n + k)$ for the client and $O(nk + n^2)$ for the server, where k is the dimension of the vector being encrypted and n is the number of clients. Comparing the encryption cost between FedAvg and TOFU for the same number of clients therefore reduces to the ratio of the total parameters sent. This means that encryption retains the efficiency benefits of our method.
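For completeness, here is a sketch of the federated efficiency calculation that accounts for the final full weight update exchanges; all round counts and the parameter count are illustrative assumptions:

```python
def federated_efficiency(model_params, d_size,
                         baseline_rounds, synthetic_rounds, full_rounds):
    """Efficiency eta when the final full_rounds exchange full weight updates."""
    p_full = model_params * full_rounds
    return (model_params * baseline_rounds) / \
           (d_size * synthetic_rounds + p_full)

# E.g. 500 FedAvg rounds vs. 485 synthetic rounds plus 15 full exchanges,
# with Nimgs = 64 full-size images: d_size = 64 * (3072 + 10 + 1).
print(federated_efficiency(9_400_000, 64 * (3072 + 10 + 1), 500, 485, 15))
```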
The results show that TOFU can learn both MNIST and CIFAR-10, distributed in an IID setup, with an average of ~4.6× and ~6.8× communication efficiency and less than a 1% average accuracy drop.

| CIFAR-10, VGG13, 5 Clients with IID distribution | Using Only Synthetic Data | | + 15 Additional Rounds of FedAvg | |
|------------------|---------------|------------|---------------|------------|
| | Max. Acc. (%) | Comm. Eff. | Max. Acc. (%) | Comm. Eff. |
| FedAvg | 88.73 | 1.0× | - | - |
| **3 Channel Synthetic Images, varying Nimgs (Synfreq = 1 local epoch)** | | | | |
| 32 | 67.12 | 44.9× | 87.29 | 8.8× |
| 64 | 75.00 | 19.8× | 88.30 | 7.1× |
| 96 | **76.02** | 14.7× | 88.39 | 6.3× |
| 128 | 76.00 | 10.6× | 87.86 | 5.4× |
| **3 Channel Synthetic Images, varying Synfreq (Nimgs = 64)** | | | | |
| 50 | 63.03 | 18.0× | 87.69 | 6.9× |
| 100 | 70.07 | 14.5× | 87.26 | 6.3× |
| 1 epoch | **75.00** | 19.8× | 88.30 | 7.1× |
| **1 Channel Synthetic Images, varying Nimgs (Synfreq = 1 local epoch)** | | | | |
| 32 | 63.26 | 117.0× | 87.17 | 10.1× |
| 64 | 69.04 | 58.9× | 87.34 | 9.3× |
| 96 | **71.15** | 39.3× | 87.60 | 8.6× |
| 128 | 71.04 | 46.1× | 86.88 | 8.9× |
| **1 Channel Synthetic Images, varying Synfreq (Nimgs = 64)** | | | | |
| 50 | 54.23 | 47.2× | 87.57 | 8.9× |
| 100 | 65.08 | 47.2× | 87.38 | 8.9× |
| 1 epoch | **69.04** | 58.9× | 87.34 | 9.3× |

Table 3: Accuracy and efficiency on the federated platform on CIFAR-10. The baseline of FedAvg is shown in grey. Results for an additional 5 & 10 rounds are in Table 9, Supplementary section A.3.4. The best accuracy setting in each set of experiments using only synthetic data is highlighted in bold.

## 5 Conclusion

In the standard federated learning algorithm, clients carry out local learning on their private datasets for some minibatches and communicate their weight updates to a central server. The central server aggregates the weight updates received from all clients and communicates this update back to all clients. There are two major bottlenecks in this procedure: it is communication inefficient, and it has been shown that gradients and weight updates can be inverted to recover the data that generated them, violating user privacy. In this work, we introduce TOFU, a federated learning algorithm for communication efficiency and enhanced protection against data leakage via gradients. We encode the weight updates to be communicated into a weighted summation of the gradients of a much smaller set of proxy data. The proxy data resembles noise, and thus even perfect inversion by data leakage attacks will reveal this noise rather than user data. Additionally, data is far lower in dimensional complexity than gradients, improving communication efficiency. We also show that this proxy data can be downsampled in size from the original data that generated the target gradients without much drop in accuracy, making the method even more efficient. Since proxy data only approximates gradients, we observe a small drop in accuracy when learning only from this synthetic data. We show that the accuracy can be recovered by a few communication rounds of full weight updates. To ensure privacy in this phase, we recommend encrypting the updates. Since these rounds are very few in comparison to the number of rounds where we exchange synthetic data, we are still able to maintain communication efficiency.
We show that we can learn the MNIST dataset, distributed between 10 clients, and the CIFAR-10 dataset, distributed between 5 clients, to accuracies comparable to FedAvg, with an average of ~4.6× and ~6.8× communication efficiency and less than an average 1% accuracy drop, respectively. The availability of more data and compute capabilities has encouraged network sizes to grow. Since input data is usually of fixed dimensions, the communication efficiency advantages of TOFU are expected to grow with network size.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pp. 308–318, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450341394. doi: 10.1145/2976749.2978318. URL https://doi.org/10.1145/2976749.2978318.

Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: randomized quantization for communication-optimal stochastic gradient descent. CoRR, abs/1610.02132, 2016. URL http://arxiv.org/abs/1610.02132.

Debraj Basu, Deepesh Data, Can Karakus, and Suhas Diggavi. Qsparse-local-sgd: Distributed sgd with quantization, sparsification, and local computations, 2019.

Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, pp. 1175–1191, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450349468. doi: 10.1145/3133956.3133982. URL https://doi.org/10.1145/3133956.3133982.

George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.

Moming Duan, Duo Liu, Xianzhang Chen, Yujuan Tan, Jinting Ren, Lei Qiao, and Liang Liang. Astraea: Self-balancing federated learning for improving classification accuracy of mobile deep learning applications. In 2019 IEEE 37th International Conference on Computer Design (ICCD), pp. 246–254. IEEE, 2019.

Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. Robbing the fed: Directly obtaining private data in federated learning with modified models. arXiv preprint arXiv:2110.13057, 2021.

Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. Decepticons: Corrupted transformers breach privacy in federated learning for language models. arXiv preprint arXiv:2201.12675, 2022.

Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients - how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.

Chaoyang He, Murali Annavaram, and Salman Avestimehr. Group knowledge transfer: Collaborative training of large cnns on the edge. CoRR, abs/2007.14513, 2020. URL https://arxiv.org/abs/2007.14513.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, and Peter Richtárik. Natural compression for distributed deep learning. CoRR, abs/1905.10988, 2019.
URL http://arxiv.org/abs/1905.10988.

Jinwoo Jeon, Kangwook Lee, Sewoong Oh, Jungseul Ok, et al. Gradient inversion with generative image prior. Advances in Neural Information Processing Systems, 34:29898–29908, 2021.

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, K. A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G.L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. 2019. URL https://arxiv.org/abs/1912.04977.

Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. SCAFFOLD: stochastic controlled averaging for on-device federated learning. CoRR, abs/1910.06378, 2019. URL http://arxiv.org/abs/1910.06378.

Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in federated learning. CoRR, abs/2008.03606, 2020. URL https://arxiv.org/abs/2008.03606.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.

Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtarik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. In NIPS Workshop on Private Multi-Party Machine Learning, 2016. URL https://arxiv.org/abs/1610.05492.

Hao Li, Zheng Xu, Gavin Taylor, and Tom Goldstein. Visualizing the loss landscape of neural nets. CoRR, abs/1712.09913, 2017. URL http://arxiv.org/abs/1712.09913.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. PMLR, 2017.

Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H. Brendan McMahan. Adaptive federated optimization. CoRR, abs/2003.00295, 2020. URL https://arxiv.org/abs/2003.00295.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A. Efros. Dataset distillation. CoRR, abs/1811.10959, 2018. URL http://arxiv.org/abs/1811.10959.

Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M Alvarez, Jan Kautz, and Pavlo Molchanov. See through gradients: Image batch recovery via GradInversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16337–16346, 2021.

Hao Yu, Rong Jin, and Sen Yang.
On the linear speedup analysis of communication efficient momentum sgd for distributed non-convex optimization, 2019.

Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, and Dapeng Wu. Distilled one-shot federated learning. arXiv preprint arXiv:2009.07999, 2020.

Ligeng Zhu and Song Han. Deep leakage from gradients. In Federated Learning, pp. 17–31. Springer, 2020.
Review 1:

Summary: The authors propose a way to improve communication efficiency in federated learning by learning synthetic data such that the synthetic data produce a gradient similar to that produced by the true training data. Empirical evidence shows that the method can compress the message being communicated while preserving model utility.

Strengths and Weaknesses:

Strengths:
- The empirical evidence seems solid.
- The idea of using synthetic data is interesting.

Weaknesses:
- This work seems to be very similar to the following work [1]. At first glance, the two works similarly use synthetic data for gradient compression. A comparison between the two works, both intuitive and empirical, is missing.
- Comparison with prior baselines on compression in federated learning is missing. To name a few baselines: Random Masking, Top-k Masking [3], Quantization [2], etc.
- If one of the motivations of this work is to provide BOTH communication efficiency and privacy, I found the privacy aspect of the proposed method largely unexplored in this work. For example, empirical evidence on how the method performs compared to FedAvg under common inversion attacks; theoretical evidence on how the proposed method could bound the Fisher Information, which can be viewed as a way to measure data leakage [4]. Discussion around these points would be helpful.
- Related to the privacy concern, it is unclear how the differentially private version of the proposed method compares to the DP version of FedAvg, if privacy is one of the concerns of this paper.
- How do the authors deal with the labels? Are the labels fixed or are they also trained?
- While the method significantly reduces the communication cost, it seems that the computation process is significantly more costly, since it requires computing a second order derivative (taking the gradient w.r.t. the synthetic data, where the quantity being differentiated is itself the cosine dissimilarity between two gradients). Could the authors discuss the physical running time of this method?
- The scale of federated learning seems relatively small. Could the authors evaluate the method in cross-device settings, where the number of clients is large and one would need partial participation to train the model scalably?
- The method seems to work on larger models. I encourage the authors to evaluate the method on large NLP models, where the method could potentially gain more advantages.

[1] Hu, S., Goetz, J., Malik, K., Zhan, H., Liu, Z., & Liu, Y. (2022). Fedsynth: Gradient compression via synthetic data in federated learning. arXiv preprint arXiv:2204.01273.
[2] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic. QSGD: Communication-efficient sgd via gradient quantization and encoding. Advances in Neural Information Processing Systems, 30, 2017.
[3] A. Beznosikov, S. Horváth, P. Richtárik, and M. Safaryan. On biased compression for distributed learning. arXiv preprint arXiv:2002.12410, 2020.
[4] Hannun, A., Guo, C., & van der Maaten, L. (2021, December). Measuring data leakage in machine-learning models with fisher information. In Uncertainty in Artificial Intelligence (pp. 760-770). PMLR.

Requested Changes: As stated in the weaknesses section, some of the changes I suggest include:
- Adding citations and comparisons to related works (e.g. [1,2,3]).
- Discussing the privacy aspect of this algorithm.
- Experimenting in the cross-device setting and with larger models. (A good example would be Reddit.)
- Demonstrate the computation cost of the proposed method and compare it with prior works as well.

Broader Impact Concerns: NA

==================================================

Review 2:

Summary: The authors propose a novel method, TOFU, to address two common challenges in Federated Learning (FL) pipelines: (1) expensive client-to-server communication of model updates and (2) privacy concerns, with gradient inversion attacks allowing recovery of (a part of the) local training samples. The main idea is to encode the model update (the delta between the server model and the model after training locally for, say, one epoch) as a weighted sum of gradients of mock images that resemble noise. Once a client has designed the mock samples to encode their model update, they send the mock samples to the server, and the server is able to reconstruct (approximately) the model update by computing the weighted sum of the gradients of the mock samples with respect to the previous central model. The authors claim two main benefits of TOFU:
- Reduced communication cost (4 to 6x on FL LEAF benchmarks)
- Enhanced privacy (Figure 3 in particular shows that the IG attack (gradient inversion) fails to recover the original images).

Strengths and Weaknesses: Let us address the two main challenges raised by the authors in FL:

**Communication**

Here, the claim is matched with empirical evidence mentioned in the abstract and in Tables 1, 2 and 3. However, we would like to point out important aspects that deserve clarification:
- Compute: with TOFU, the authors implicitly trade off less communication for more compute: (1) on the client side, since each client has to perform approximately a thousand steps of gradient descent to design the mock samples (end of Section 3.2) after performing the local standard training, and (2) on the server side, since the server has to recompute the gradients of all mock samples (roughly 64) coming from all the clients participating in the round (realistically, a few hundred rather than the proposed 5 or 10 clients of Section 4.2). It would strengthen the impact of the paper if the authors could clearly present and benchmark the additional compute needed, in particular on the client side. Indeed, the ultimate metric in FL is the wall time needed to reach a certain model utility or accuracy. If client compute time doubles, then wall time to accuracy would essentially double.
- Full model updates: the authors mention the necessity to still send over full model updates without using TOFU (a few of them over the course of the training), especially as the model gets closer to convergence. Could the authors please elaborate on this point by providing loss curves (flagging when full model updates were performed) to help the reader quantify how much we need to recover when using only TOFU? It might indeed be that most of the loss incurred by TOFU is recovered with the full model updates, hence it would be important to substantiate this claim more.

**Privacy**

These experiments deserve more clarification and investigation from the authors to claim privacy protection. The high-level concern is the following. The server is able to recover (a good approximation of) the individual model update of each client (as demonstrated by the decent accuracies reached after training, assuming non-TOFU full model updates are not a concern, as highlighted above).
One might think that there is therefore enough information in the individual model updates to recover the *original*, non-mock samples (or at least some of them); otherwise it would be impossible to learn a model performing well on original images. In other words, the fact that a single generic gradient inversion attack fails (as demonstrated in Figure 3) does not mean that the same attack with a more aggressive setup, or another slightly modified gradient inversion attack, would also fail. Below are some thoughts that may help strengthen this aspect of the paper by putting more effort into proof-testing TOFU:
- [Sanity check] Compute the analogue of Figure 3 (gradient inversion on the individual TOFU model updates) for the actual model updates (no TOFU involved). This important baseline would answer the question: is the used gradient inversion attack able to recover some original samples when we omit TOFU?
- [Push attack setup] Focus on extreme setups: vary the client batch size (smaller might be easier to attack) and the number of iterations (fewer might also be easier to attack), and test extreme cases. For instance, in the single-batch model update setting, does the gradient inversion attack still fail? Same question in the (imaginary) case of a single-batch, single-sample model update?
- [Redesign attack] Non-injectivity of the image $\mapsto$ gradient function. In other words, many (potentially non-image-looking) samples might produce (approximately) the same gradient. This is why most gradient reconstruction attacks guide the reconstruction with a prior on what the image is supposed to look like. Some other attacks might succeed here, such as [1] or [2]. Finally, one could imagine a novel attack in which the adversary knows this obfuscation method (namely TOFU) is being used, and uses this knowledge to guide the reconstruction.

[1] "Gradient Inversion with Generative Image Prior", Jeon et al.
[2] "See through Gradients: Image Batch Recovery via GradInversion", Yin et al.

Requested Changes: The main requested changes are highlighted in the "Strengths and Weaknesses" section, namely:
- [high] Privacy claims: sanity check, push attack setup and redesign attack.
- [medium] Communication reduction claim: elaborate on compute and full model updates.
- [small] Typo (nit): "communicate with each, other" in the first sentence of the introduction.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: The paper proposes a new federated learning scheme to reduce communication cost by generating synthetic data. Specifically, to generate such data, it uses a loss function that encourages the gradient produced by the synthetic data to match the model updates. To get a better match, it further utilizes soft labels instead of hard labels and learns a parameter alpha to determine the importance of each sample. The experimental results show that, for one client, the proposed method can still maintain similar performance (around a 2-3% drop) with over 5X savings in communication cost. For multiple clients with an IID data distribution, it still suffers from around 10% performance loss when only using synthetic data.

Strengths and Weaknesses:

Strengths:
1. The proposed new scheme is interesting and novel. It is also interesting to apply data condensation to the federated learning framework for better communication efficiency.
2. The experimental results on one client look promising.
However, I would like to see more experimental results in more practical settings, such as non-IID data.

Weaknesses:
1. The proposed method should include some formal proofs to show that it still keeps the clients' data confidential. When directly sharing data with the server, it is necessary to show that the synthetic data does not leak information about the original data. Although the generated synthetic data does not appear to contain personal data, this alone is not enough to justify confidentiality. In short, there is not enough evidence that the privacy requirements of FL are satisfied.
2. The experiments are all conducted on IID datasets. It is necessary to show the proposed method's performance on non-IID distributions as well.

Requested Changes: Please refer to the weaknesses.

Broader Impact Concerns: I don't see any ethical concerns in this paper.

==================================================

Metareview:

Recommendation: Reject

Comment: In agreement with the reviewers, I encourage the authors to rework the privacy statements in their paper, ideally by strengthening these statements in establishing rigorous resilience guarantees to substantial families of attacks.

==================================================
# Logistic Regression Through The Veil Of Imprecise Data Anonymous authors Paper under double-blind review ## Abstract Logistic regression is a popular supervised learning algorithm used to assess the probability of a variable having a binary label based on some predictive features. Standard methods can only deal with precisely known data; however, many datasets have interval uncertainties due to measurement error, censoring or missing data that traditional methods either reduce to a single point or completely disregard. This paper shows that it is possible to include these intervals by considering an imprecise logistic regression model using the set of possible models obtained from values within the intervals. This method can express the epistemic uncertainty neglected by traditional methods. ## 1 Introduction Logistic regression is used to predict the probability of a binary outcome as a function of some predictive variable. In medicine, for example, logistic regression can be used to predict the probability of an individual having a disease where the values of risk factors are known. While logistic regression is most commonly used for binary outcomes, multinomial logistic regression extends this into events with any number of labels (Menard, 2010, Chapter 1). However, many decisions and events are binary (yes/no, passed/failed, alive/dead, etc.), and for the sake of simplicity, we will restrict our discussion and examples to binary outcome logistic regression. Additionally, unlike discriminant function analysis, logistic regression does not require predictor variables to be normally distributed, linearly related or have equal variance (Press & Wilson, 1978). There are many practical applications for logistic regression across many different fields. For example, in the medical domain, risk factors in the form of continuous data–such as age–or categorical data–such as gender–may be used to fit a model to predict the probability of a patient surviving an operation (Bagley et al., 2001; Neary et al., 2003). In engineering systems, logistic regression can be used to determine whether a mineshaft is safe (Palei & Das, 2009); to predict the risk of lightning strikes (Lambert & Wheeler, 2005) or landslides (Ohlmacher & Davis, 2003). In the arts, it can explore how education impacts museum attendance or watching a performing arts performance (Kracman, 1996). It is also possible to predict the probability of sports results using logistic regression (Li et al., 2021). Due to its wide range of applications, logistic regression is considered a fundamental machine learning algorithm with many modern programming languages having packages for users to experiment with, such as *Scikit-learn* (Pedregosa et al., 2011) in Python, which has been used for the analysis within this paper. Traditionally it has been assumed that all of the values of the features and labels used in logistic regression are *precisely* known. However, in practice, there can be considerable imprecision in the features and labels used in the regression analysis and the application of the regression model. Analyses using data from combined studies with inconsistent measurement methods can even result in datasets with varying degrees of uncertainty. Likewise, the outcome data can be uncertain if there is ambiguity in the classification scheme (good/bad). However, even relatively straightforward classifications (alive/dead) can yield uncertainty when a subject leaves a study, and the outcome is unknown. 
Within statistics, censored data is sometimes of this form (Rabinowitz et al., 1995; Lindsey & Ryan, 1998). In the case of continuous variables, the interval reflects the measurement uncertainty, while in the binary outcome, the interval is the vacuous [0, 1] because the correct classification is unclear. There are multiple methods of dealing with interval data within the features of a logistic regression model. The problem may also be considered a subproblem within symbolic data analysis (Billard & Diday, 2003; Whitaker et al., 2021; Beranger et al., 2022). These methods often require approximations to be made to simplify the process and allow the use of standard logistic regression techniques. The most straightforward approach is to discard the interval data, assuming that the epistemic uncertainty that the intervals represent is small compared to the sampling uncertainty or natural variability in the data, or that values are missing at random or missing completely at random (Rubin, 1976; Ferson et al., 2007; Kreinovich, 2009; Hosmer Jr et al., 2013). These assumptions are likely to be untenable in practice. Another approach is to invoke the "equidistribution hypothesis", which assumes each possible value within an interval to be equally likely; the intervals are thus modelled as uniform distributions (Bertrand, 2000; Billard & Diday, 2000; Bock & Diday, 2000; Beranger et al., 2022). This idea has its roots in the principle of insufficient reason, first described by both Bernoulli and Laplace, and more recently known as the principle of indifference (Keynes, 1921). Alternatively, the interval is commonly represented by its midpoint, which is the mean and median of a uniform distribution, or by a random value from within the interval (Osler et al., 2010). While these approaches are computationally expedient, as will be shown in Section 3.3, they underrepresent the imprecision by presenting a single middle-of-the-road logistic regression. Similar methods include performing a conjoint logistic regression using the interval endpoints or averaging separate regressions performed on the endpoints of the intervals (de Souza et al., 2008; 2011). While these various methods make different assumptions about the data within the interval ranges, ultimately they still transform interval data such that the final results can be represented by a single binary logistic regression (de Souza et al., 2011).

The approach proposed within this paper for dealing with interval data in logistic regression is based on imprecise probabilities and considers the set of models rather than a single one (Walley, 1991; Manski, 2003; Ferson et al., 2007; Nguyen et al., 2012). This is similar to approaches proposed for dealing with interval uncertainty within linear regression (Utkin & Coolen, 2011; Wiencierz, 2013; Schollmeyer, 2021; Tretiak et al., 2022). These approaches, alongside other methods for interval linear regression (Gioia et al., 2005; Fagundes et al., 2013), are not directly translatable to logistic regression, as they require the use of least squares methods that are ill-suited to dichotomous problems (Hosmer Jr et al., 2013). If separate logistic regressions are generated via maximum likelihood estimation from the interval data and displayed graphically, the envelope of these models can be considered as an imprecise logistic regression model.
The 'true' model–the model that would have been fitted if there was no epistemic uncertainty associated with the sample–would always be contained within the bounds of the best possible imprecise model. The primary benefit of such an approach is that it represents the epistemic uncertainty removed by traditional methods. Additionally, this method can also handle the case of intervals in discrete risk factors. This imprecise approach makes the fewest assumptions but can be computationally challenging for large datasets (Ferson et al., 2002; 2007; Kreinovich, 2009).

In the case of uncertainty in the outcome status used within logistic regression, traditionally there is little that can be done but to discard these data points, as they cannot be used as part of the analysis, or to use a semi-supervised learning methodology (Amini & Gallinari, 2002; Chapelle et al., 2006; Chi et al., 2019). However, the proposed imprecise logistic regression technique can be used to include unlabelled examples within the dataset. Again, the imprecise approach does not require making the assumptions required by other methods to fit a model.

This paper continues as follows: in Section 2, precise logistic regression is reviewed; Section 3 and Section 4 introduce imprecise logistic regression for data with intervals within the features and labels, respectively. In both these sections, a 1-dimensional synthetic dataset is used to demonstrate the methodology before it is compared to alternative methods on both the synthetic data and a real-world example. The purpose of the comparison with existing techniques is not to show that one approach is superior, but to highlight how the imprecise approach expresses epistemic uncertainty that the alternatives conceal. Finally, in Section 5 the method is used on a dataset which contains intervals in both the features and labels and is contrasted against a dataset from the literature.

## 2 Precise Logistic Regression

Let x be an m-dimensional covariate with a binary label, y ∈ {1, 0}. Logistic regression can be used to model the probability that y = 1 using:

$$\Pr(y=1|\mathbf{x})=\pi(\mathbf{x})=\frac{1}{1+\exp\left(-\left(\beta_{0}+\beta_{1}x_{1}+\cdots+\beta_{m}x_{m}\right)\right)}\tag{1}$$

where β0, β1, . . . , βm are a set of unknown regression coefficients. If dataset D contains n samples,

$$D=\left\{\begin{array}{c}\left(\left(x_{1}^{(1)},x_{2}^{(1)},\ldots,x_{m}^{(1)}\right),y_{1}\right)\\ \left(\left(x_{1}^{(2)},x_{2}^{(2)},\ldots,x_{m}^{(2)}\right),y_{2}\right)\\ \vdots\\ \left(\left(x_{1}^{(n)},x_{2}^{(n)},\ldots,x_{m}^{(n)}\right),y_{n}\right)\end{array}\right\},\tag{2}$$

a logistic regression model can be trained on D, denoted LR(D), by finding optimal values of β0, β1, . . . , βm for the observed data. This is often done using *maximum likelihood estimation* (Menard, 2010; Myung, 2003), although other techniques exist, for instance Bayesian analysis (Jaakkola, 1997; O'Brien & Dunson, 2004). A classification, ŷ, can be made from the logistic regression model by selecting a threshold value, C, and then defining

$$\hat{y}=\begin{cases}1&\text{if}\ \pi(\mathbf{x})\geq C\\ 0&\text{if}\ \pi(\mathbf{x})<C.\end{cases}\tag{3}$$

The simplest case is when C = 0.5, implying that ŷ = 1 whenever the outcome is more likely to be 1 than 0. However, this value could differ depending on the use of the model and the risk appetite of the analyst. For example, in medicine, a small threshold value may be used in order to produce a conservative classification and therefore reduce the number of false negative results. Where predictions are made within this paper, C = 0.5 unless otherwise stated.
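To make the classification rule concrete, the following minimal sketch fits a scikit-learn model and applies Equation 3 with an analyst-chosen threshold C. The data here are an illustrative stand-in, not the dataset D1 used in the next section:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative 1-dimensional dataset: positive cases (y = 1) tend to
# have larger x values.
x = rng.normal(loc=np.repeat([0.0, 2.0], 25)).reshape(-1, 1)
y = np.repeat([0, 1], 25)

model = LogisticRegression().fit(x, y)  # maximum likelihood fit of beta_0, beta_1

pi = model.predict_proba(x)[:, 1]       # pi(x) from Equation 1

C = 0.5                                 # analyst-chosen threshold
y_hat = (pi >= C).astype(int)           # classification rule of Equation 3
```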
Where predictions are made within this paper, C = 0.5 unless otherwise stated. ## 2.1 Demonstration To demonstrate, a synthetic 1-dimensional dataset (D1) with a sample size of fifty was used to train a logistic regression model, LR(D1), as shown in Figure 1. After training the model, it is useful to ask the question "how good is the model?" For logistic regression there are several ways in which that can be done, see Hosmer Jr et al. (2013, pp. 157–169) or Kleinbaum & Klein (2010, pp.318–326). For the analysis in this paper, we will consider the receiver operating characteristic graph and area under curve statistic, discriminatory performance visualisations (Royston & Altman, 2010) as well as the sensitivity and specificity of the classifications made by the algorithm. ![3_image_0.png](3_image_0.png) Figure 1: Logistic regression curve, LR (D1), created by fitting the model on dataset D1 (shown with scattered points). Royston & Altman (2010) introduced visualisations to assess the discriminatory performance of the model by considering a scatter plot of the true outcome (jittered for clarity) vs the estimated probability. Such a plot is shown in Figure 2a. A perfectly discriminating model would have two singularities with all the points with outcome = 1 at (1,1) and all the points with outcome = 0 at (0,0). In general, the better the classifier, the more clustered the points would be towards these values, with the points on the upper band having larger probabilities and the points on the lower band having lower probabilities. From Figure 2a we can see that there is significant clustering towards the endpoints, showing that the model has excellent discriminatory performance. ![3_image_2.png](3_image_2.png) (a) Scatter plot of jittered outcome vs estimated probability. ![3_image_1.png](3_image_1.png) Figure 2: Two plots to show the discriminatory performance of the logistic regression model shown in Figure 1. We can make and compare the predictions made from the logistic regression model using a larger dataset that has been generated using the same method as above. Tabulating these results in a confusion matrix for the base predictions is shown in Table 1. | | 1 | 0 | Total | |-------------|-----|-----|---------| | Predicted 1 | 34 | 14 | 48 | | Predicted 0 | 10 | 42 | 52 | | Total | 44 | 56 | 100 | Table 1: Confusion matrix for 100 test data points using predictions from LR(X) shown in Figure 1. There are numerous statistics than can be derived from confusion matrices to express the performance of classifiers. For this analysis, we shall consider the sensitivity s, the fraction of positive individuals correctly identified as such and the specificity t, the fraction of negative individuals correctly identified. Mathematically $s=\dfrac{\text{True Positive}}{\text{Total Number of Positives}}\\[0.5em]$ $t=\dfrac{\text{True Negative}}{\text{Total Number of Negatives}}$. and From Table 1 we can calculate that s = 0.773 and t = 0.750. As confusion matrices and statistics are calculated from them depending on the cutoff value chosen (C from Equation 3), a complete way of determining the classification performance of models is by considering the receiver operating characteristic (ROC) curve of the model (Kleinbaum & Klein, 2010; Hosmer Jr et al., 2013). ROC curves can be compared graphically and by considering the area under the curve (AUC). The better the model is, the closer the AUC would be to 1. 
The worst possible AUC would be 0.5, as, again, anything lower than that would be improved by simply switching the classification. For the ROC curve shown in Figure 2b, AUC = 0.887.

## 3 Uncertainty In Features

If there is interval uncertainty within dataset D,

$$D=\left\{\begin{array}{c}\left(\left(\left[\underline{x_{1}^{(1)}},\overline{x_{1}^{(1)}}\right],\ldots,\left[\underline{x_{m}^{(1)}},\overline{x_{m}^{(1)}}\right]\right),y_{1}\right)\\ \left(\left(\left[\underline{x_{1}^{(2)}},\overline{x_{1}^{(2)}}\right],\ldots,\left[\underline{x_{m}^{(2)}},\overline{x_{m}^{(2)}}\right]\right),y_{2}\right)\\ \vdots\\ \left(\left(\left[\underline{x_{1}^{(n)}},\overline{x_{1}^{(n)}}\right],\ldots,\left[\underline{x_{m}^{(n)}},\overline{x_{m}^{(n)}}\right]\right),y_{n}\right)\end{array}\right\},\tag{6}$$

and we have no more information about the true value, $x_{j}^{(i)\dagger}$, nor are willing to make further assumptions about it, other than $x_{j}^{(i)\dagger}\in[\underline{x_{j}^{(i)}},\overline{x_{j}^{(i)}}]$, then it is only possible to partially identify an imprecise logistic regression model for the data, ILR(D):

$$\mathcal{ILR}(D)=\left\{\mathcal{LR}\left(D^{\prime}\right):D^{\prime}=\left\{\begin{array}{c}\left(\left(x_{1}^{\prime(1)},\ldots,x_{m}^{\prime(1)}\right),y_{1}\right)\\ \vdots\\ \left(\left(x_{1}^{\prime(n)},\ldots,x_{m}^{\prime(n)}\right),y_{n}\right)\end{array}\right\},\;x_{j}^{\prime(i)}\in\left[\underline{x_{j}^{(i)}},\overline{x_{j}^{(i)}}\right]\;\forall i,j\right\},\tag{7}$$

i.e. ILR(D) is the set of all possible logistic regression models that can be created from all possible datasets that can be constructed from the interval data; this ensures that the true logistic regression model, LR†, is contained within the set. This set is infinitely large for continuous data, so estimates must be used to find it. Predictions can be made by sampling the possible models contained within ILR(D) and forming an interval from the maximum and minimum values, $\pi(\mathbf{x})=[\underline{\pi}(\mathbf{x}),\overline{\pi}(\mathbf{x})]$. When calculating the probability of a value being 1 under the imprecise model, there is therefore an interval probability $[\underline{\pi}(\mathbf{x}),\overline{\pi}(\mathbf{x})]$, where $\underline{\pi}(\mathbf{x})$ and $\overline{\pi}(\mathbf{x})$ are the minimum and maximum values of π(x) calculated from models within the set. As such, when using the model to perform classifications, this interval means that Equation 3 becomes

$$\hat{y}=\begin{cases}1&\text{if}\ \underline{\pi}(\mathbf{x})>C\\ 0&\text{if}\ \overline{\pi}(\mathbf{x})<C\\ \left[0,1\right]&\text{if}\ C\in\left[\underline{\pi}(\mathbf{x}),\overline{\pi}(\mathbf{x})\right].\end{cases}\tag{8}$$

The final line of this equation returns the *dunno* interval, meaning there is uncertainty in determining whether the datum should be predicted 0 or 1, and the model should therefore abstain from providing a classification. It is left up to the analyst to decide what to do with such a result. Under such a scenario, there are two approaches to characterising the classifier using a confusion matrix. The first is to consider the intervals directly within the confusion matrix. Calculating statistics derived from the confusion matrix requires careful handling for interval calculations (Gray et al., 2022). To demonstrate, let a be the number of true positives, b the number of false positives, c the number of false negatives, and d the number of true negatives, where a, b, c and d are all intervals. Since the number of positive individuals is fixed, a and c are oppositely dependent on each other (as $a\to\overline{a}$, $c\to\underline{c}$), as are b and d. These dependencies imply that care must be taken when calculating sensitivity and specificity. Sensitivity needs to be calculated as

$$s=\left[\frac{\underline{a}}{\underline{a}+\overline{c}},\ \frac{\overline{a}}{\overline{a}+\underline{c}}\right]\tag{9}$$
and specificity as

$$t=\left[\frac{\underline{d}}{\underline{d}+\overline{b}},\ \frac{\overline{d}}{\overline{d}+\underline{b}}\right].\tag{10}$$

In this instance, s and t are themselves both intervals ($s=[\underline{s},\overline{s}]$ and $t=[\underline{t},\overline{t}]$). $\overline{s}$ and $\overline{t}$ would be the sensitivity and specificity of a system that first perfectly classified all the uncertain predictions.

Of course, analysts could use various alternative statistics to describe the classifier's performance, some of which require special care when using imprecise numbers. For instance, *precision* and *recall* are often used within the machine learning literature to assess classifiers. Precision is the fraction of positive predictions that are true positives,

$$\text{precision}=\frac{\text{True Positives}}{\text{Total Number of Positive Predictions}},\tag{11}$$

and recall is analogous to sensitivity. Often quoted alongside these statistics is the F1 score, the harmonic mean of these values,

$$F_{1}=2\,\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}.\tag{12}$$

With intervals, precision needs to be calculated using a single-use expression to ensure that there is no artifactual uncertainty within the calculation,

$$\text{precision}=\frac{1}{1+\frac{b}{a}},\tag{13}$$

and recall calculated using Equation 9. The F1 score is again best calculated through a single-use expression,

$$F_{1}=\frac{2}{\frac{TP+b}{a}+1},\tag{14}$$

where TP is the total number of positive individuals (TP = a + c).

An alternative approach to constructing a confusion matrix with uncertain predictions is to tabulate the dunno predictions in a separate row. If the model returned u true positives, v false positives, w false negatives and x true negatives, but did not make a prediction for y positives and z negatives, then the confusion matrix shown in Table 2 can be created.

| | 1 | 0 | Total |
|---------------|-----|-----|-------|
| Predicted 1 | u | v | P+ |
| Predicted 0 | w | x | P− |
| No Prediction | y | z | P× |
| Total | T+ | T− | N |

Table 2: Alternative confusion matrix where uncertain predictions are tabulated separately.

From this confusion matrix, some useful statistics can be calculated to account for the uncertainty produced by these uncertain classifications. The traditional definitions of sensitivity and specificity can be re-imagined by defining the *predictive sensitivity* s′ as the sensitivity among the points for which a prediction was made,

$$s^{\prime}=\frac{u}{u+w},\tag{15}$$

and similarly the *predictive specificity* t′ as the specificity among the points for which a prediction was made,

$$t^{\prime}=\frac{x}{v+x}.\tag{16}$$

Two other statistics are useful to describe the data in Table 2. We can define the *positive incertitude* σ to be the fraction of positive cases for which the model could not make a prediction,

$$\sigma=\frac{y}{u+w+y},\tag{17}$$

and similarly the *negative incertitude* τ to be the fraction of negative cases for which the model could not make a prediction,

$$\tau=\frac{z}{v+x+z}.\tag{18}$$
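These interval statistics can be evaluated with simple interval arithmetic. Here is a minimal sketch; the interval counts are made-up placeholders, and the pairing of bounds follows the dependencies noted above:

```python
# Intervals are (lo, hi) pairs; a, b, c, d are interval counts from the
# confusion matrix. Values below are illustrative, not paper results.

def interval_classify(pi_lo, pi_hi, C=0.5):
    """Equation 8: returns 1, 0, or the dunno interval [0, 1]."""
    if pi_lo > C:
        return 1
    if pi_hi < C:
        return 0
    return (0, 1)

def interval_sensitivity(a, c):
    """Equation 9: a and c are oppositely dependent, so bounds pair up."""
    return (a[0] / (a[0] + c[1]), a[1] / (a[1] + c[0]))

def interval_specificity(d, b):
    """Equation 10: the same opposite dependence holds for d and b."""
    return (d[0] / (d[0] + b[1]), d[1] / (d[1] + b[0]))

def interval_precision(a, b):
    """Equation 13: the single-use form 1 / (1 + b/a) avoids repeating a."""
    return (1.0 / (1.0 + b[1] / a[0]), 1.0 / (1.0 + b[0] / a[1]))

def interval_f1(a, b, total_positives):
    """Equation 14: F1 = 2 / ((TP + b)/a + 1), with TP = a + c fixed."""
    return (2.0 / ((total_positives + b[1]) / a[0] + 1.0),
            2.0 / ((total_positives + b[0]) / a[1] + 1.0))

a, c = (30, 34), (10, 14)   # 44 positives in total
b, d = (12, 16), (40, 44)   # 56 negatives in total
print(interval_sensitivity(a, c), interval_specificity(d, b))
print(interval_precision(a, b), interval_f1(a, b, total_positives=44))
```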
## 3.1 Approaches To Estimating Equation 7

As the set described in Equation 7 is infinitely large, approaches need to be taken to estimate it. Since any analysis of ILR(D) only requires knowledge of the maximum and minimum π(x) values ($[\underline{\pi}(\mathbf{x}),\overline{\pi}(\mathbf{x})]$), the best possible estimate would only need to contain the models that, for some x, produce either endpoint of the interval.

## 3.1.1 Systematic Search

The ideal method of estimating the set is to search values from within the intervals systematically. This approach requires specifying N as the number of steps within each interval and then producing a logistic regression model for every combination, as shown in Algorithm F0. This approach has complexity $O(N^P)$, where P is the number of intervals within the dataset. So whilst it would be the method most faithful to the best possible way of computing ILR(D), for large datasets this algorithm would be exceedingly time expensive and thus impractical.

Algorithm F0: Systematic search to find ILR(D) if D has interval uncertainty within its features.
Input: D = {(([x̲_j^{(i)}, x̄_j^{(i)}] ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n}, N steps
    ILR(D) ← {};
    ∆ ← {(i − 1)/(N − 1) ∀i = 1, . . . , N};
    for all combinations {δ_j^{(i)}} ∈ ∆ do
        D′ ← D;
        for all i ∈ {0, . . . , m} do
            for all j ∈ {0, . . . , n} do
                D′_j^{(i)} ← x̲_j^{(i)} + δ_j^{(i)}(x̄_j^{(i)} − x̲_j^{(i)});
            end
        end
        ILR(D) ← ILR(D) ∪ {LR(D′)};
    end
Output: ILR(D)
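The following is a minimal Python sketch of this search for a single-feature toy dataset, using scikit-learn; the intervals, labels and step count are illustrative choices rather than values from the paper:

```python
# Minimal sketch of Algorithm F0 for a single feature: fit one logistic
# regression model per grid combination of values from the intervals.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

def systematic_ilr(intervals, y, n_steps=3):
    """intervals: list of (lo, hi) pairs, one per sample."""
    grids = [np.linspace(lo, hi, n_steps) for lo, hi in intervals]
    models = []
    for combo in itertools.product(*grids):  # n_steps ** len(intervals) fits
        X = np.array(combo).reshape(-1, 1)
        models.append(LogisticRegression().fit(X, y))
    return models

models = systematic_ilr([(0.5, 1.5), (1.0, 2.0), (3.0, 4.0), (3.5, 4.5)],
                        y=np.array([0, 0, 1, 1]))
print(len(models))  # 3**4 = 81 models already, for just four intervals
```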
## 3.1.2 Method Of Minimum And Maximum Coefficients

Since it is only necessary to find the logistic regression models that correspond to extreme values of π(x), it is possible to reduce the number of models that need to be contained within ILR(D) to only those that make up the envelope of the set. These models can be estimated as

$$\mathcal{ILR}(D)=\left\{\mathcal{LR}\left(D^{\prime}_{\underline{\beta_{i}}}\right),\mathcal{LR}\left(D^{\prime}_{\overline{\beta_{i}}}\right)\ \forall i=0,1,\ldots,m\right\}\cup\left\{\mathcal{LR}\left(\underline{D}\right),\mathcal{LR}\left(\overline{D}\right)\right\}\tag{19}$$

where D′_β̲i is the dataset constructed from points within the intervals such that the value of βi is minimised, D′_β̄i is the dataset constructed such that βi is maximised, and so on. D̲ corresponds to the dataset created by taking the lower bound of every interval within D, and D̄ similarly for the upper bounds. For a dataset with m features, there are 2(m + 1) + 2 models that are needed to find the bounds of the set. Algorithm F1 can be used to find this set. An illustration of the validity of this estimation can be found in Appendix A.

Figure 3 shows all six lines for the six intervals shown compared to the systematic method, with the black dashed lines representing the envelope of the set. To measure how good an estimate the method is, we can let A be the area of the systematic bounds that lies outside the bounds produced by the proposed algorithm, as a fraction of the total area found systematically. Ideally, we would see A = 0, implying that the set perfectly covered the systematic bounds. The envelope of the models shown within Figure 3 has A = 0.0160.

![8_image_0.png](8_image_0.png)

Figure 3: Comparison of systematic approach against Algorithm F1.

Algorithm F1: Method to find ILR(D) if D has interval uncertainty within its features by calculating the minimum and maximum intercept and coefficients.
Input: D = {(([x̲_j^{(i)}, x̄_j^{(i)}] ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n}
    D̲ ← {((x̲_j^{(i)} ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n};
    D̄ ← {((x̄_j^{(i)} ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n};
    ILR(D) ← {LR(D̲), LR(D̄)};
    for i ← 0 to m do
        Using stochastic optimisation, find D′_βi such that LR(D′_βi) has the minimum value of βi;
        ILR(D) ← ILR(D) ∪ {LR(D′_βi)};
        Using stochastic optimisation, find D′_βi such that LR(D′_βi) has the maximum value of βi;
        ILR(D) ← ILR(D) ∪ {LR(D′_βi)};
    end
Output: ILR(D)

## 3.1.3 Minimum And Maximum Spread Approach

If the dataset contains a large number of intervals then, due to the increasing complexity of the optimisation, Algorithm F1 may take a prohibitively long time to compute. In such a situation, Algorithm F2 can be used to find the imprecise model. This algorithm uses the heuristic that the extreme bounds are likely to be associated with the minimum and maximum spread of points around particular values; a discussion of this heuristic can be found in Appendix B. The complexity of this approach is O(2(1 + P)). The bounds produced by this method for the same six intervals from Figure 3 are shown in Figure 4. This plot has A = 0.0351 compared to the systematic search method.

![9_image_0.png](9_image_0.png)

Figure 4: Comparison of systematic approach against Algorithm F2 with P = 100.

Algorithm F2: Minimum and maximum spread approach to estimate ILR(D) if D has interval uncertainty within its features. The procedures for finding the minimum and maximum spread of points around a particular value can be found in Procedures 1 and 2 in Appendix B.
Input: D = {(([x̲_j^{(i)}, x̄_j^{(i)}] ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n}
    D̲ ← {((x̲_j^{(i)} ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n};
    D̄ ← {((x̄_j^{(i)} ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n};
    ILR(D) ← {LR(D̲), LR(D̄)};
    for all p ∈ {k/(P + 1) ∀k = 1, . . . , P} do
        Find T̲ such that π_LR(D̲)(T̲) = p;
        Find T̄ such that π_LR(D̄)(T̄) = p;
        d_min ← minimum spread of points around T̲;
        d_max ← maximum spread of points around T̄;
        ILR(D) ← ILR(D) ∪ {LR(d_min), LR(d_max)};
    end
Output: ILR(D)

## 3.1.4 Monte Carlo

A straightforward and computationally expedient way of estimating the set is to use Monte Carlo sampling to find valid models. This approach, as shown in Algorithm F3, requires the analyst to specify a desired number of iterations and then, for every iteration, make a new dataset by sampling a random value from every interval within the dataset and fitting a model on this new dataset. Often this sampling assumes the equidistribution hypothesis and thus models the intervals as uniform distributions. This approach is likely to underestimate the bounds, as Monte Carlo methods are unlikely to find the extreme models (Ferson, 1996).

![10_image_0.png](10_image_0.png)

Figure 5: Comparison of systematic approach against a Monte Carlo sample with 10^6 iterations.

Figure 5 shows a comparison between a Monte Carlo search with 10^6 iterations and a systematic search for the six intervals shown. As expected, the bounds from the Monte Carlo method fail to enclose the total area revealed by the systematic search. In this case, A = 0.166. A higher number of iterations would have led to a smaller A value.

Algorithm F3: Monte Carlo search to find ILR(D) if D has interval uncertainty within its features.
Input: D = {(([x̲_j^{(i)}, x̄_j^{(i)}] ∀i = 0, . . . , m), y_j) ∀j = 0, . . . , n}, P iterations
    ILR(D) ← {};
    for P iterations do
        D′ ← D;
        for all i ∈ {0, . . . , m} do
            for all j ∈ {0, . . . , n} do
                D′_j^{(i)} ← random value in [x̲_j^{(i)}, x̄_j^{(i)}];
            end
        end
        ILR(D) ← ILR(D) ∪ {LR(D′)};
    end
Output: ILR(D)
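As a minimal sketch (using scikit-learn, with the interval endpoints stored as two arrays), Algorithm F3's sampling loop and the envelope of predicted probabilities can be written as:

```python
# Minimal sketch of Algorithm F3: sample each interval uniformly (the
# equidistribution assumption) and fit one model per iteration. The
# envelope of predictions over the sampled models gives [pi_lo, pi_hi],
# which tends to underestimate the true bounds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def monte_carlo_ilr(lo, hi, y, n_iter=1000):
    """lo, hi: arrays of interval endpoints, shape (n_samples, n_features)."""
    return [LogisticRegression().fit(rng.uniform(lo, hi), y)
            for _ in range(n_iter)]

def interval_pi(models, x):
    """Envelope of predicted probabilities of class 1 at a single point x."""
    probs = [m.predict_proba(np.atleast_2d(x))[0, 1] for m in models]
    return min(probs), max(probs)
```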
## 3.2 Alternative Methods

Several authors have suggested different approaches to compute logistic regression models with interval uncertainty. This section will consider three methods to compare with the methods presented in this paper.

## 3.2.1 Midpoint

The most straightforward approach to dealing with interval data is to produce a precise dataset by replacing the intervals with their midpoints and then fitting a model on this midpoint data, i.e.

$$D_{m}=\left\{\left(\left(\frac{\underline{x_{j}^{(i)}}+\overline{x_{j}^{(i)}}}{2}\right),y_{j}\right)\ \forall\left[\underline{x_{j}^{(i)}},\overline{x_{j}^{(i)}}\right]\in D\right\}\tag{20}$$

and then

$$\mathcal{LR}_{m}\left(D\right)=\mathcal{LR}\left(D_{m}\right).\tag{21}$$

This midpoint dataset can then be used to fit a precise logistic regression model as described in Section 2.

## 3.2.2 De Souza

de Souza et al. (2008; 2011) introduce several methods for characterising the uncertainty with interval features. They conclude that the best method is to perform two separate logistic regressions on the lower and upper bounds of the intervals and average the posterior probabilities to obtain a pooled posterior probability. They find

$$\mathcal{LR}_{dS}\left(D\right)=\left\{\mathcal{LR}\left(\underline{D}\right),\mathcal{LR}\left(\overline{D}\right)\right\}\tag{22}$$

and then reduce this to a single logistic regression model based on the average of the outputted probabilities,

$$\pi_{\mathcal{LR}_{dS}(D)}(\mathbf{x})=\frac{\pi_{\mathcal{LR}(\underline{D})}(\mathbf{x})+\pi_{\mathcal{LR}(\overline{D})}(\mathbf{x})}{2}.\tag{23}$$

## 3.2.3 Billard-Diday

Billard & Diday (2000) propose a method, based upon Bertrand (2000), for characterising interval uncertainties within linear regression that can be easily extended to logistic regression. Their method assumes that each value from within the interval is equally likely and therefore constructs the logistic regression model as the uniform mixture of N logistic regression models fitted from random samples,

$$\mathcal{LR}_{BD}\left(D\right)=\left\{\mathcal{LR}\left(D_{k}\right):D_{k}=\left\{\left(r_{j}^{(i)},y_{j}\right):r_{j}^{(i)}\in\left[\underline{x_{j}^{(i)}},\overline{x_{j}^{(i)}}\right]\right\},\ k=0,\ldots,N\right\}.\tag{24}$$

Like de Souza, they then average the probability from all models,

$$\pi_{\mathcal{LR}_{BD}(D)}(\mathbf{x})=\frac{1}{N}\sum_{\forall l\in\mathcal{LR}_{BD}(D)}\pi_{l}(\mathbf{x}).\tag{25}$$

This method is computationally the same as Algorithm F3 but takes the average of the found models to produce a precise final model instead of taking the envelope to produce an imprecise model.
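As a minimal sketch of the de Souza et al. method (Equations 22 and 23), assuming the interval endpoints are stored as two arrays:

```python
# Minimal sketch of the de Souza et al. approach: fit one model on the
# all-lower-bound dataset and one on the all-upper-bound dataset, then
# pool the posterior probabilities by averaging.
import numpy as np
from sklearn.linear_model import LogisticRegression

def de_souza_fit(lo, hi, y):
    """lo, hi: interval endpoints, shape (n_samples, n_features)."""
    return LogisticRegression().fit(lo, y), LogisticRegression().fit(hi, y)

def de_souza_predict_proba(model_lo, model_hi, X):
    """Pooled posterior probability of class 1 (Eq. 23)."""
    return (model_lo.predict_proba(X)[:, 1]
            + model_hi.predict_proba(X)[:, 1]) / 2
```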
## 3.3 Comparison Of Methods

Dataset D1 from Section 2 has been intervalised into dataset D2 using the transformation x′_i = [m − ϵ, m + ϵ], where m is a number drawn from the triangular distribution T(x_i − ϵ, x_i + ϵ/6, x_i + ϵ) with ϵ = 0.375 for all x_i ∈ X. With this dataset we can use Algorithms F3, F1 and F2 to construct ILR(D2), as is shown in Figure 6. It would have been too computationally expensive to perform a systematic search using Algorithm F0. Figure 6 shows that Algorithm F1 produces the most comprehensive bounds and is, therefore, most likely to represent the entire set described by Equation 7. The Monte Carlo approach in Algorithm F3 produces the narrowest bounds, with Algorithm F2 falling between the two. For comparison, logistic regression models have also been fitted using the methods described in Section 3.2; these models are shown in Figure 7. Whilst there are subtle differences between the different approaches, it is clear that they are all approximately equal. This equivalence is unsurprising, as they all implicitly make the equidistribution assumption that a uniform distribution can represent the intervals.

It is also possible to consider situations where data is intervalised differently. Figure 8 shows four different intervalisations of dataset X, as described in Table 3. In plot (a), the intervalisation has occurred by placing the true value at the left edge of the interval; similarly, in (b), the true value is at the right edge of the interval. In (c), the value of x impacts the interval's width, and in (d), the label impacts the intervalisation. In this figure, imprecise models have been fitted on the datasets using Algorithms F3, F1 and F2, alongside

![12_image_0.png](12_image_0.png)

Figure 6: Imprecise logistic regression models fitted using Algorithms F3 (with 10^4 iterations), F1 and F2.

![12_image_1.png](12_image_1.png)

Figure 7: Imprecise logistic regression model fitted using Algorithm F1 and logistic regression models fitted using the alternative methods in Section 3.2 for the interval dataset (jittered for clarity).

the 'true' model (from Figure 1) and the midpoint model. The de Souza and Billard-Diday methods are not shown due to their similarity with the midpoint model. Looking at all these figures, we see that the imprecise models produced by Algorithms F1 and F2 always bound the base model. As a result, any interval regression analysis performed would be guaranteed to bound the true model. The model produced by Algorithm F3 fails to do so in (a), (b) and (d). The fact that in (a) and (b) the models produced by Algorithm F1 and Algorithm F2 still bound the base model shows the advantage of estimating the full envelope rather than sampling. The figure also shows that there can be significant differences between the base and midpoint models. The alternative approaches provide a good approximation of the true value if the equidistribution hypothesis can be justified, as in plot (c). If the intervalisation depends on the outcome, then the approaches appear inadequate, as is shown in plot (d). This implies that the alternative approaches, and Algorithm F3, should only be used in cases within which one can assume that the data has been intervalised independently of both the true underlying value and the outcome status, and that each value within the interval is equally likely. In many real-world datasets, the assumptions that the alternative methods rely on fail to hold, and in those scenarios only the complete imprecise method would guarantee coverage of the true model.

## 3.4 Red Wine Example

In order to demonstrate the methodology on a real-world dataset, we can use the red wine dataset from Cortez et al. (2009). This dataset contains 11 covariates¹ that can be used to predict the quality of the wine sample on a scale from 0 to 10. In order to provide a binary classification, we define wine as good if it has quality ≥ 6. The dataset contains 1599 samples, of which 855 have been classified as good wine. The dataset with added uncertainties is denoted as R. In order to fit the logistic regression model, the dataset has been split into training and test subsets containing half the data each. To intervalise the data, values have been turned into intervals based on the number of significant figures within the data (for example, 0.5 would become [0.45, 0.55]). An imprecise model can then be fitted on the dataset using Algorithm F2.
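A minimal sketch of this intervalisation; the assumption here is that the reporting precision is a fixed number of decimal places, which is one way of reading "significant figures" for these data:

```python
# Minimal sketch: replace each recorded value by the interval of true
# values that would round to it, e.g. 0.5 -> [0.45, 0.55]. Assumes the
# reporting precision is a fixed number of decimal places.

def intervalise(value, decimals=1):
    half_step = 0.5 * 10.0 ** (-decimals)
    return value - half_step, value + half_step

print(intervalise(0.5))       # approximately (0.45, 0.55)
print(intervalise(7.4))       # approximately (7.35, 7.45)
print(intervalise(0.076, 3))  # approximately (0.0755, 0.0765)
```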
It is helpful to consider visualisations when discussing the classifier's performance (Figure 9). The simplest of these are the scatter plots shown in Figure 9a. From the plot, most of the wines rated as good were given a high π value, and vice versa for the bad wines, although no wine was given a very low π. There is, however, substantial overlap between the two groups. The plot also shows the size of the intervals for π; in this instance, all intervals are reasonably consistent and not overly wide.

We can also construct ROC plots and calculate their AUCs, as shown in Figure 9b. In this plot, we can see that the LR(R†) and LRm(R) curves are only subtly distinguishable from each other. The same is also true of LRdS(R) and LRBD(R). The imprecise model bounds all these models. Additionally, for the imprecise model, we can plot a ROC curve for when the model abstains from making a prediction; in this instance s′ is plotted against fpr′ (= 1 − t′). This ROC curve outperforms the others. The AUCs for the curves are AUC[ILR(R)] = [0.742, 0.872], AUC[ILR(R)(Abstain)] = 0.834, AUC[LR(R†)] ≈ AUC[LRm(R)] = 0.818 and AUC[LRdS(R)] ≈ AUC[LRBD(R)] = 0.813.

¹fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, alcohol

![14_image_0.png](14_image_0.png)

Figure 8: Four different intervalisations of dataset X, as shown in Table 3. In all plots, LR(D1) is the precise model shown in Figure 1.

| Plot | Intervalisation |
|------|-----------------|
| (a) | D2a = {([x_i, x_i + 2], y_i) ∀(x_i, y_i) ∈ X} |
| (b) | D2b = {([x_i − 2, x_i], y_i) ∀(x_i, y_i) ∈ X} |
| (c) | D2c = {(I_i, y_i) ∀(x_i, y_i) ∈ X}, where I_i = x_i if x_i < 2.5; [m − 0.25, m + 0.25], m ∈ U(x_i ± 0.25) if 2.5 ≤ x_i < 5.0; [m − 0.75, m + 0.75], m ∈ U(x_i ± 0.75) if 5.0 ≤ x_i < 7.5; [x_i, x_i + 1.5] if 7.5 < x_i |
| (d) | D2d = {([x_i, x_i + 1.5] if y_i = 1, [x_i − 1.5, x_i] otherwise, y_i) ∀(x_i, y_i) ∈ X} |

Table 3: Intervalisations used in Figure 8.

![15_image_0.png](15_image_0.png)

![15_image_1.png](15_image_1.png)

(a) Scatter plots of probability vs outcome for ILR(R). (b) Receiver operating characteristic curves for the red wine example with added uncertain classifications.

Figure 9: Plots to show the discriminatory performance of the logistic regression models for the red wine example.

Classifications about whether a wine is good or not can be made. Selecting a threshold value of 0.5 gives the confusion matrices shown in Table 4. There are two possible interpretations of how the intervals can be expressed within the confusion matrices. In (a), the intervals are tabulated directly within the matrix, giving the following statistics: s = [0.652, 0.769] and t = [0.707, 0.817]. Allowing the model to abstain, as in (b), implies that there are 91 wines for which a prediction could not be made solely as a result of the imprecision of the model.

(a) Tabulating inconclusive results as [0, 1] intervals.

| | Good Wine | Bad Wine | Total |
|----------------|------------|-----------|-----------|
| Predicted Good | [279,329] | [68,109] | [347,438] |
| Predicted Bad | [99,149] | [263,304] | [362,453] |
| Total | 428 | 372 | 800 |
The summary statistics from this confusion matrix are: s′ = 0.738, t′ = 0.795, σ = 0.117 and τ = 0.110.

(b) Tabulating inconclusive results separately.

| | Good Wine | Bad Wine | Total |
|----------------|-----------|----------|-------|
| Predicted Good | 279 | 68 | 347 |
| Predicted Bad | 99 | 263 | 362 |
| No Prediction | 50 | 41 | 91 |
| Total | 428 | 372 | 800 |

Table 4: Two possible confusion matrices for the 800 test samples from the imprecise logistic regression model for the red wine example.

## 4 Uncertainty In Labels

This set-based approach can be extended to the situation where there is uncertainty about the outcome status, meaning there are some points for which we do not know the binary classification; these can be represented by the dunno interval [0, 1]. In this situation the dataset D contains p samples with corresponding labels, (x1, y1), (x2, y2), · · · , (xp, yp), but also q samples for which the label is unknown, (x_{p+1}, ·), (x_{p+2}, ·), · · · , (x_{p+q}, ·). For simplicity, we shall refer to the points with labels as being in set d and those without labels as being in set u, with D = d ∪ u. It is important to note that, whilst formally all possible unobserved, and therefore unlabelled, x values could be considered to be in set u, this should not be the case. The datapoints in u only contain x values that have been observed but for which the label is unidentified for some reason. This may be due to a participant dropping out of a medical trial before it concludes, or because there is disagreement between expert labellers about the true classification. Thus it is not the case that one can increase the number of points in u by introducing arbitrary new x values.

Traditional analysis may ignore all the points in u. However, they can be included within the analysis by considering the set of possible logistic regression models trained on all the datasets that could be possible given the uncertainty. This set of datasets can be created by giving all unlabelled values the value 0, all unlabelled values the value 1, and all combinations thereof, i.e.

$$\mathcal{ILR}(D)=\left\{\mathcal{LR}(D^{\prime})\;\forall D^{\prime}\in\left\{\begin{array}{c}d\cup\{(\mathbf{x}_{p+1},0),\cdots,(\mathbf{x}_{p+q},0)\}\\ d\cup\{(\mathbf{x}_{p+1},0),\cdots,(\mathbf{x}_{p+q},1)\}\\ \vdots\\ d\cup\{(\mathbf{x}_{p+1},1),\cdots,(\mathbf{x}_{p+q},0)\}\\ d\cup\{(\mathbf{x}_{p+1},1),\cdots,(\mathbf{x}_{p+q},1)\}\end{array}\right\}\right\}.\tag{26}$$

There are $2^q$ possible logistic regression models within this set. An imprecise logistic regression model can then be created by finding the envelope of the set, as shown in Algorithm L1.
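A minimal sketch of Algorithm L1 using scikit-learn; the enumeration makes the $O(2^q)$ cost explicit:

```python
# Minimal sketch of Algorithm L1: fit one logistic regression model for
# each of the 2**q completions of the unlabelled points; the imprecise
# model is the envelope of this set of models.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_uncertain_ilr(X_labelled, y_labelled, X_unlabelled):
    X = np.vstack([X_labelled, X_unlabelled])
    models = []
    for labels in itertools.product([0, 1], repeat=len(X_unlabelled)):
        y = np.concatenate([y_labelled, labels])
        models.append(LogisticRegression().fit(X, y))
    return models
```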
As the computational time for this algorithm increases as $O(2^q)$, finding the bounds by calculating the envelope of all possible combinations can become computationally expensive as q increases. Algorithm L2 reduces the number of models that need to be fitted to find an *estimate* of the imprecise bounds. This algorithm first finds the logistic regression model assuming all uncertain labels are 0 and the logistic regression model assuming all uncertain labels are 1. The uncertain points are then split into three groups: G1 contains points which have a low π value under both models, G2 contains points which have a high π value under both models, and all other points go in G3. The algorithm assumes that the most extreme models can be found by giving all the points in G1 the same label, giving all the points in G2 the same label, and only enumerating all possible combinations of labels for the points within G3. This reduces the number of logistic regression models fitted to $2 + 2^{2+q'}$, where q′ is the number of points in G3.

## 4.1 Alternative Methods

## 4.1.1 Exclude Uncertain Results

The most straightforward approach is to remove the uncertain results from D to produce D× and fit a precise logistic regression model LR×(D) = LR(D×). This approach may be valid if the missing data is small compared to the total dataset size and if it is missing at random or completely at random.

## 4.1.2 Semi-Supervised Logistic Regression

Semi-supervised learning methods extend supervised learning techniques to cope with additional unlabelled data. Numerous authors present semi-supervised logistic regression methods based on a variety of different techniques: Amini & Gallinari (2002) use Classification Expectation Maximisation; Krishnapuram et al. (2004) and Chi et al. (2019) use Bayesian methods; Bzdok et al. (2015) use an autoencoder and a factored logistic regression model; Chai et al. (2018) combine active learning and semi-supervised learning to "achieve better performances compared to the widely used semi-supervised learning and active learning methods." In all cases, it is important that the smoothness, clustering and manifold assumptions are valid in order to use semi-supervised learning techniques (Chapelle et al., 2006). For this analysis, we have used scikit-learn's semi-supervised learning algorithm, which uses Yarowsky's algorithm to enable the logistic regression to learn from the unlabelled data (Yarowsky, 1995; Pedregosa et al., 2011).

Algorithm L1: Algorithm to find ILR(D) if D has interval uncertainty within its labels.
Input: d = {(x_i, y_i) ∀i = 0, . . . , p}; u = {(x_j, [0, 1]) ∀j = 0, . . . , q}; D = d ∪ u
    ILR(D) ← {};
    for all combinations C ∈ {(0, . . . , 0), (0, . . . , 1), . . . , (1, . . . , 0), (1, . . . , 1)} do
        D′ ← d ∪ {(x_j, C_j) ∀x_j ∈ u};
        ILR(D) ← ILR(D) ∪ {LR(D′)};
    end
Output: ILR(D)

Algorithm L2: Algorithm to find ILR(D) if D has interval uncertainty within its labels, using heuristics to reduce the number of iterations needed.
Input: d = {(x_i, y_i) ∀i = 0, . . . , p}; u = {(x_j, y_j = [0, 1]) ∀j = 0, . . . , q}; D = d ∪ u
    D(1,...,1) ← d ∪ {(x_j, 1) ∀x_j ∈ u};
    D(0,...,0) ← d ∪ {(x_j, 0) ∀x_j ∈ u};
    Find LR(D(1,...,1)) and LR(D(0,...,0));
    G1 ← {}; G2 ← {}; G3 ← {};
    for all u_j = (x_j, y_j) ∈ u do
        if π_LR(D(1,...,1))(x_j) < 0.5 and π_LR(D(0,...,0))(x_j) < 0.5 then G1 ← G1 ∪ {u_j}
        else if π_LR(D(1,...,1))(x_j) > 0.5 and π_LR(D(0,...,0))(x_j) > 0.5 then G2 ← G2 ∪ {u_j}
        else G3 ← G3 ∪ {u_j}
        end
    end
    for all A ∈ {0, 1} do
        for all B ∈ {0, 1} do
            for all combinations C ∈ {(0, . . . , 0), (0, . . . , 1), . . . , (1, . . . , 1)} do
                D′ ← d ∪ {(x_j, A) ∀(x_j, y_j) ∈ G1} ∪ {(x_j, B) ∀(x_j, y_j) ∈ G2} ∪ {(x_j, C_j) ∀(x_j, y_j) ∈ G3};
                ILR(D) ← ILR(D) ∪ {LR(D′)};
            end
        end
    end
Output: ILR(D)
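For the semi-supervised alternative of Section 4.1.2, a minimal sketch using scikit-learn's self-training wrapper is shown below; the data values are illustrative:

```python
# Minimal sketch of the semi-supervised alternative of Section 4.1.2:
# scikit-learn's self-training wrapper around a logistic regression,
# with unlabelled samples marked by -1 in the label vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[0.5], [1.0], [1.5], [2.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, -1, -1, 1, 1, 1])  # -1 marks an unknown label

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict_proba(np.array([[2.0]]))[:, 1])
```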
## 4.2 Demonstration

Dataset D3 has been created from dataset D1 by replacing five labels with the [0, 1] interval. The labels that have been changed are around the point at which the data goes from 0 to 1. This dataset is shown in Figure 10, with the uncertain labels plotted as vertical lines. The figure shows all the logistic regression models that have been fitted using both Algorithm L1 (grey lines) and Algorithm L2 (coloured lines). ILR(D3) is the envelope of these sets and is shown with the black dashed lines. Since the black lines always correspond to the extremum of the coloured lines, Algorithm L2 has correctly estimated the imprecise bounds, and any interval π value found from the imprecise model is guaranteed to contain the true value.

Figure 11 shows the imprecise logistic regression model trained on this uncertain dataset and, for comparison, the model trained on the dataset with the dunno labels removed, LR×(D3), and the semi-supervised model, LRss(D3). From the figure, it is notable that LR×(D3) and LRss(D3) are similar.

As in Section 3.3, it is helpful to consider different scenarios within which the labels have been removed. Figure 12 shows four different scenarios within which the data has been made uncertain, and the imprecise logistic regression models have been fitted using Algorithm L2. Using this algorithm allowed plot (b) to be computed, since Algorithm L1 would have required $2^{20}$ models to be fitted and would have been computationally prohibitive. In all four plots, the imprecise model bounds the base model. It is also notable that the semi-supervised and discarded-data approaches are similar in all the plots. This demonstrates that if it can be assumed that the labels are missing at random, as in (a) and (b), the two alternative approaches are reasonably close to the true model. However, if this is not the case, then there can be significant differences between the approaches, and the imprecise method must be used to obtain a model that is guaranteed to bound the true model.

## 4.3 White Wine Example

As in Section 3.4, it is useful to consider this methodology on a real dataset. In this instance, we will use the white wine dataset from Cortez et al. (2009). This dataset contains the same covariates as the red wine dataset used in Section 3.4 but contains many more samples (4898). Again, good wine has been defined as having quality ≥ 6. This data has been split into training and test samples, with 1618 and 3281 samples respectively. In order to simulate sommeliers being unsure about the classification of marginally good wine, 100 samples with quality = 6 have had their labels removed. Let W be the uncertain dataset. Algorithm L2 can be used to fit the imprecise logistic regression model on this dataset. For comparison, LRss(W) and LR×(W) have also been found.

The discrimination plots for these models are shown in Figure 13. Figure 13a shows that very few points have been given a low π value. Most of the bad wine has π ≈ 0.5, whereas good wine has a high probability (π ≈ 0.9). This plot suggests that, when making classifications from the model, selecting a threshold value of C = 0.7 would be an appropriate choice to distinguish between the two classes. ROC curves can also be plotted. As with the previous examples, the precise models all have very similar curves and AUC values, which the abstaining model 'beats'. (In this case AUC[ILR(W)] = [0.716, 0.826], AUC[ILR(W)(Abstain)] = 0.794, AUC[LRss(W)] ≈ AUC[LR×(W)] ≈ AUC[LR(W†)] = 0.774.)

![19_image_0.png](19_image_0.png)

Figure 10: Imprecise logistic regression model with the uncertain labels represented by the vertical lines. The grey lines represent all possible models found using Algorithm L1. The blue lines are the bounds found using Algorithm L2.

![19_image_1.png](19_image_1.png)

Figure 11: Bounds for the imprecise logistic regression for all 50 points with 5 points made uncertain, compared with LR×(D3) and LRss(D3).
![20_image_0.png](20_image_0.png)

Figure 12: Four different scenarios within which dataset D1 has had labels removed. D3a has 5 labels missing at random. D3b has 10 labels missing at random. D3c has 8 '1' labels missing. D3d has 8 '0' labels missing.

![20_image_2.png](20_image_2.png)

![20_image_1.png](20_image_1.png)

(a) Scatter plots of probability vs outcome for ILR(W). (b) Receiver operating characteristic curves for the white wine example with added uncertain classifications.

Figure 13: Plots to show the discriminatory performance of the logistic regression models for the white wine example.

## 5 Uncertainty In Both Features And Labels

The imprecise approach can also be used when there is uncertainty about both the features and the labels. Such situations are present in numerous real-world datasets. An imprecise logistic regression model can be found in this scenario through a combination of the algorithms in Sections 3 and 4.

## 5.1 Example

Osler et al. (2010) use a logistic regression model to predict the probability of death for a patient after a burn injury. The model they use is based upon a subset of data from the American Burn Association's National Burn Database². The dataset has a mix of discrete (gender, race, flame involved in injury, inhalation injury) and continuous variables (age, percentage burn surface area) that can be used to model the probability that a person dies (outcome 1) after suffering a burn injury.

Osler et al. exclude some patients from the dataset before training their model. They remove patients if their age or presence of inhalation injury was not recorded. Additionally, as patients older than 89 were assigned to a single age category in the original dataset, they gave them a random age between 90 and 100 years. Osler et al. did not need to exclude these patients merely because of epistemic uncertainty about the values. The proposed approach can be used with the original data. For instance, patients for whom the outcome was unknown could have been included within their analysis as described in Section 4. Similarly, patients whose inhalation injury or age was unknown could have been included with the method described in Section 3. Patients with unknown inhalation injury could have been included as the [0, 1] interval. Patients whose age was completely unknown could have been represented by an interval between the minimum and maximum age, whereas if there was uncertainty because they were over 90 years old, they could be intervalised as [90, 100].

Other interval uncertainties may be present within the dataset. It is unlikely to be the case that all the people used within the study fit neatly into the discrete variables given. For instance, the variable race is valued at 0 for "non-whites" and 1 for "whites". However, it goes without saying that the diversity of humanity does not simply fall into such overly simplified categories; there are likely to be many people who could not be given a value of 0 or 1 and should instead have a [0, 1] value. The same is true for gender: not everyone can be defined as male or female. Also, there is almost certainly some measurement uncertainty associated with calculating the burn surface area that may also be best expressed as intervals. For simplicity, these uncertainties have not been addressed below. For this analysis, the subsample of the dataset used by Osler et al., made available by Hosmer Jr et al. (2013, p. 27), has been used.
This version of the dataset includes 1000 patients from the 40,000 within the entire study and has a much higher prevalence of death than the original dataset. Because access to the original data is prohibitively expensive, the values in this dataset have been re-intervalised to replicate some of the removed uncertainty, creating a hypothetical dataset, B, for this exposition. As there are no individuals older than 90 within the dataset, that particular re-intervalisation has not been possible, so all patients older than 80 have had their ages intervalised as [80, 90]. Similarly, for 20 patients, the censored inhalation injury has been restored to the dunno interval. Ten patients, who had been dropped because their outcome status was unknown, have been restored with status represented as [0, 1].

There are two possible routes for an analyst to proceed when faced with such a dataset. They could follow the original methodology of Osler et al. and randomly assign patients with interval ages a precise value and then discard all other patients for which there is some uncertainty. Alternatively, the analyst could include the uncertainty within the model by creating an imprecise logistic regression model. As there is uncertainty within both the features and the labels, the model can be estimated by finding the values within the intervals that correspond to the minimum and maximum for β0, β1, etc. ILR(B) is the imprecise logistic regression fitted from this burn data. For comparison, LR(B×) has also been fitted by removing the uncertainty in B using the same methodology as Osler et al.

²http://ameriburn.org/research/burn-dataset/

Regarding the performance of the two models, we can again turn to visualisations, as shown in Figure 14. Firstly, looking at Figure 14c, we can see that the vast majority of patients who were given a low probability of death (π) did indeed survive, and patients who were given a high probability of death did sadly die. The ROC plots are shown in Figure 14a; Figure 14b shows the upper right corner of the plot in more detail. ILR(B) has an AUC = [0.955, 0.974], the no-prediction model has AUC = 0.972 and LR(B×) has AUC = 0.966.

![22_image_1.png](22_image_1.png)

![22_image_0.png](22_image_0.png)

![22_image_2.png](22_image_2.png)

(a) Receiver operating characteristic curves for the burn example. (b) Receiver operating characteristic curves for the upper right corner of the plot. (c) Scatter plots of probability vs outcome. The two outcomes have been separated into different plots for clarity.

Figure 14: Plots to show the discriminatory performance of the various logistic regression models for the burn survivability example.

It is pertinent to consider how a model is likely to be used and how uncertainty about the predicted probability of death impacts the classification. One method of dealing with this uncertainty, used in Sections 3 and 4, is not making a prediction when the interval for π straddles C. This method may not be appropriate in this example. What should happen when the model is unable to make a prediction should depend on what deciding that a patient has a high risk of death means clinically. If the model was being used to triage patients that need to go to a major trauma centre because the probability of death is considered high, then, out of an abundance of caution, one might prefer that if any part of the interval probability was greater than some threshold, the patient should be considered high risk.
This is equivalent to taking the probabilities from the upper bound of the range,

$$\text{high risk}=\begin{cases}1&\text{if}\ \overline{\pi}(\mathbf{x})\geq C\\ 0&\text{otherwise.}\end{cases}\tag{27}$$

However, if patients who are considered high risk then undergo some life-altering treatment that is perhaps only preferable to death, then under the foundational medical aphorism of "first do no harm", it may be preferable to consider a patient high risk only if the whole interval is greater than the decision threshold. This is equivalent to taking the probabilities from the lower bound of the range,

$$\text{high risk}=\begin{cases}1&\text{if}\ \underline{\pi}(\mathbf{x})\geq C\\ 0&\text{otherwise.}\end{cases}\tag{28}$$

Using the imprecise model in these scenarios would lead to better outcomes, as the epistemic uncertainty would not be ignored. It is also the case that, for a patient with a wide interval (as is the case for some in Figure 14c), implying that there is large epistemic uncertainty about the prediction, the medics would be aware of the uncertainty associated with the model and may therefore prefer to decide another way.
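A minimal sketch of these two decision rules, assuming the interval probability is available as a (lower, upper) pair:

```python
# Minimal sketch of the decision rules in Equations 27 and 28 applied to
# an interval probability [pi_lo, pi_hi].

def high_risk_optimistic(pi_lo, pi_hi, C):
    """Eq. 27: flag as high risk if any part of the interval reaches C."""
    return 1 if pi_hi >= C else 0

def high_risk_cautious(pi_lo, pi_hi, C):
    """Eq. 28: flag as high risk only if the whole interval reaches C."""
    return 1 if pi_lo >= C else 0

print(high_risk_optimistic(0.4, 0.7, C=0.5))  # 1: err towards treatment
print(high_risk_cautious(0.4, 0.7, C=0.5))    # 0: first do no harm
```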
## 6 Discussion And Conclusion

Analysts face uncertainties in the measurement values of their data. Often this uncertainty is either assumed away or completely ignored. However, it may be better to compute with what we know rather than to make assumptions that may need to be revised later. Many uncertainties are naturally expressed as intervals arising from measurement errors and missing or censored values. In the case of logistic regression, when faced with interval uncertainties, samples are often dropped from analyses (assuming that they are missing at random) or reduced down to a single value. Analysts should not simply throw this uncertainty away to make subsequent calculations easier.

Interval uncertainties can be included within logistic regression models by considering the set of possible models as an imprecise structure, including in situations where there is uncertainty about the binary outcome status. The present analysis showed that it is not reasonable to throw away data when the status is unknown if the reason the data has gone missing depends on the value or status of the missing samples. The case studies showed that previous methods used to handle interval uncertainties are ill-suited to situations where the narrow assumptions that they rely upon are unjustified or untenable. The methods based upon imprecise probabilities described in this paper work whenever there are interval uncertainties in the data, regardless of how they happened to arise.

The imprecise approach presented within this paper introduces two distinct uncertainties within logistic regression models. The first is the uncertainty about what the expected label for a particular x should be, expressed in an aleatoric way by considering π(x) as a probability. The second type of uncertainty, added by imprecise logistic regression models, is the epistemic uncertainty expressed by the interval π(x) values. A reviewer of this paper noted that, in the case of label uncertainty, it seems counterintuitive that adding unlabelled examples yields an important uncertainty, despite the fact that the unlabelled examples appear uninformative and thus useless. However, it is essential to consider that there is a critical distinction between not knowing a label for some arbitrary value and knowing that we do not know a label for a directly observed value. In the latter case, there is information in the fact that it is known that the label is unknown. As shown within Figure 12, if an analyst cannot assume that the labels are missing (completely) at random, they cannot say that they are uninformative. Additionally, it is not the case that these uncertain points are being 'added'. They are not being removed or assumed away, as would have been the case in traditional analyses. Previously, analysts did not have methods to characterise this epistemic uncertainty, whereas the presented methods enable analysts to keep their interval data and not neglect them.

The situation may occur where one dataset contains precisely known values and another contains interval data. Whilst it may seem obvious to pool these data into one big dataset, it may not be the case that bigger data is better data, especially if there is more uncertainty in the pooled dataset. For a full discussion about combining datasets with different levels of imprecision, see Tretiak & Ferson (2022).

If the uncertainty within the dataset can be assumed to be missing (completely) at random, if the intervals are small compared to the underlying size of the data, or if the equidistribution hypothesis holds, then analysts may be justified in not performing an imprecise logistic regression and instead using one of the alternative methods reviewed within this paper. However, if there is doubt about whether these assumptions hold, then the methods presented within this paper must be used to characterise this uncertainty. It has always been easy to get wrong answers; thus, whilst the algorithms presented within this paper are computationally expensive, only they will account for the full uncertainty within the dataset.

When using the new approach to classify, each new sample gets an interval probability of belonging to one of the binary classifications. Therefore, there are likely to be samples for which a definitive prediction cannot be made. If an analyst is happy to accept a *don't know* result, then the regression's performance as a classifier may be improved for the samples for which a prediction is made. It may seem counterproductive or unhelpful for a model to return a don't know result. However, this can be desirable behaviour; saying "I don't know" is perfectly valid in situations where the uncertainty is large enough that a different decision could have been reached. Uncertainty in the output can allow decisions made by algorithms to be more humane by requiring further interrogation to make a classification. Alternatively, depending on the use case, other ways of making decisions based on uncertain predictions could be adopted, such as being conservative or cost-minimising, although deciding indeterminate predictions at random would be capricious.

Within many active learning systems, the model is already forced to abstain from predicting labels for samples when the probability is close to the decision boundary (π(x) ≈ C) so that a human can provide a classification (Schein & Ungar, 2007; Chai et al., 2018). If the imprecise logistic regression model presented in this paper were used in such a system, it would have the advantage of clearly providing a criterion for which abstentions are preferred, as opposed to a post-hoc decision based upon an arbitrary definition of how close to the boundary is too close.
Additionally, this method exposes samples for which the model returns broad interval probabilities even if their centre is not close to the decision boundary and which would previously have been considered a clear decision. If an abstention region is provided, then any interval probability produced by the model that straddles the region would be indeterminate.

We have shown that it is possible to include interval uncertainty in both outcome status and predictor variables within logistic regression analysis by considering the set of possible models as an imprecise structure. Such a method can clearly express the epistemic uncertainties within the dataset that are removed by traditional methods. Future work should be invested in finding improved algorithms that are less computationally expensive for large-scale datasets.

## A Illustration That Equation 19 Approximates The Envelope Of Equation 7

To illustrate, it is useful to first consider a linear regression model with intervals in the dependent variables. Consider the two interval datapoints (x1, y1) = ([1, 2], 2) and (x2, y2) = ([3, 4], 3).

![25_image_0.png](25_image_0.png)

Figure A1: Interval points (x1, y1) = ([1, 2], 2) and (x2, y2) = ([3, 4], 3).

Whilst there are infinitely many linear regression models (y = β0 + β1x) that could be drawn by selecting points from x1 and x2, there are four extreme models, as shown in Figure A2. In this instance, the four regression lines are:

$$y=x\tag{29a}$$
$$y=\frac{1}{3}x+\frac{5}{3}\tag{29b}$$
$$y=\frac{1}{2}x+\frac{3}{2}\tag{29c}$$
$$y=\frac{1}{2}x+1\tag{29d}$$

![25_image_1.png](25_image_1.png)

Figure A2: Four extreme regression models that can be fitted from the two intervals.

If we consider these four lines as a set and consider the imprecise regression model as the envelope of this set, then we get the band shown in Figure A3. Six segments make up this band (AB, BC, CD, EF, FG, GH). These lines correspond to the lines with the minimum and maximum β0 and β1 values plus the lines drawn with the left-most and right-most values from within the intervals.

![26_image_0.png](26_image_0.png)

Figure A3: Envelope of the extreme lines shown in Figure A2.

In theory, there are six possible models; it just so happens to be the case that the line which contains the minimum β0 also has the maximum β1, and vice versa. Figure A4 shows 100 regression models fitted using 100 Monte Carlo samples for values within the intervals. The whole band between the black lines would have been filled with enough samples or with systematic sampling.

![26_image_1.png](26_image_1.png)

Figure A4: 100 regression models fitted on Monte Carlo samples for values within the intervals.
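These four lines can be recovered by pairing the interval endpoints; a minimal sketch:

```python
# Minimal sketch: the four candidate extreme regression lines from the two
# interval points (x1, y1) = ([1, 2], 2) and (x2, y2) = ([3, 4], 3) in
# Equation 29, obtained by pairing the interval endpoints.
import itertools

for x1, x2 in itertools.product([1, 2], [3, 4]):
    beta1 = (3 - 2) / (x2 - x1)  # slope of the line through (x1, 2), (x2, 3)
    beta0 = 2 - beta1 * x1       # intercept
    print(f"y = {beta1:.3f} x + {beta0:.3f}")
# y = 0.500 x + 1.500   (left endpoints)
# y = 0.333 x + 1.667   (minimum slope, maximum intercept)
# y = 1.000 x + 0.000   (maximum slope, minimum intercept)
# y = 0.500 x + 1.000   (right endpoints)
```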
If there are four instead of two intervals, we can test how well the method finds the minimum and maximum coefficient values alongside the all-left-bound and all-right-bound models. Figure A5 shows four intervals and the band bounded by the six suggested lines. As before, we can sample values from within the intervals to check the performance of the band.

![27_image_0.png](27_image_0.png)

Figure A5

Again we can test whether this imprecise model bounds all possible regression models for values within the intervals. If we do a systematic sample of points within the four intervals, we can find all valid linear regression models consistent with the intervals. The envelope of these models is shown with the grey band in Figure A6. As can be seen, this band is not entirely within the black lines. Computing A as before for the model shown in Figure A6 gives a value A = 0.0311.

![27_image_1.png](27_image_1.png)

For logistic regression, Equation 1 can be written as

$$\pi(x)=\frac{1}{1+\exp(-r(x))}=\sigma(r(x))\tag{30}$$

where r(x) = β0 + β1x and σ is the sigmoid function. Since we know that the six-line method is suitable for estimating the envelope of r(x), and the sigmoid function is monotonic, we can use this method for logistic regression. This is the estimate of the set described by Equation 7 produced by Algorithm F1.

If we consider two intervals with opposite binary labels, (x1, y1) = ([1, 2], 0) and (x2, y2) = ([3, 4], 1), shown in Figure A7, we can again consider that the endpoints of the intervals will produce the minimum and maximum logistic regression models, as shown in Figure A8. As before, these lines represent the minimum and maximum coefficient values possible.

![28_image_0.png](28_image_0.png)

Figure A7: Two intervals with binary labels.

![28_image_1.png](28_image_1.png)

Figure A8: Four extreme regression models that can be fitted from the two intervals.

Again we can test whether the envelope of these four models (in this case, just the two lines that represent the models fitted on the endpoints of the intervals) will contain all possible models for values within the intervals via systematic sampling. This is shown in Figure A9. If we move to an imprecise logistic regression model fitted on four intervals using Algorithm F1, as shown in Figure A10, we can again test, using a systematic sampling of the intervals, whether the model fully covers all possible models (shown within the grey bounds). As before, the imprecise model does not perfectly cover the systematic plot. In this instance, we have a value of A = 0.0252.

![29_image_0.png](29_image_0.png)

Figure A9: Envelope of the logistic regression models shown in Figure A8 and models produced by systematically sampling the intervals.

![29_image_1.png](29_image_1.png)

Figure A10: Imprecise logistic regression model for the four intervals.

## B Illustration That Algorithm F2 Approximates The Envelope Of Equation 7

If we reconsider Figure A8, then instead of considering that the four lines correspond to the minimum and maximum intercept and coefficient values, note that, as shown in Figure B1, the lines produced also represent the minimum and maximum spread of values around x = 1, x = 2.5 and x = 4.

![30_image_0.png](30_image_0.png)

Figure B1: Reproduction of Figure A8 showing that the extreme models can be considered as the lines that correspond to the minimum and maximum spread of values around x = 1, x = 2.5 and x = 4 (shown with grey lines).

This leads to another way of approximating Equation 7. If x = 1 and x = 4 are the minimum and maximum values from within the dataset, the minimum and maximum spread around these points represent the all-left-bound and all-right-bound models. The value x = 2.5 corresponds to the minimum and maximum slope of the logistic regression curve. Finding the x value that corresponds to this extreme case is trivial when there are only two intervals but is challenging for more complicated datasets. One approach is to select P values between the minimum and maximum values from the dataset, plus the minimum and maximum values themselves, giving 2 + P values to sample. For a dataset with m features, this leads to (2 + P)^m models that would need to be found. An alternative approach is first to fit the all-left and all-right models (LR(D̲) and LR(D̄)) and then to select P π values ({p/(P + 1) ∀p = 1, . . . , P}). For each of these P values, we can find T̲ such that π_LR(D̲)(T̲) = p and T̄ such that π_LR(D̄)(T̄) = p; we can then find the datasets that correspond to the minimum and maximum spread of values using Procedures 1 and 2 respectively. This approach has complexity 2(1 + P) and is the approach described within Algorithm F2.
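A minimal sketch of the two spread selections formalised as Procedures 1 and 2 below, assuming single-feature intervals stored as (lo, hi) pairs:

```python
# Minimal sketch of the spread procedures used by Algorithm F2: pick, from
# each interval, the value closest to (minimum spread) or furthest from
# (maximum spread) a pivot value T.

def min_spread(intervals, T):
    """Closest-to-T selection from each interval [lo, hi]."""
    return [T if lo <= T <= hi else (lo if T < lo else hi)
            for lo, hi in intervals]

def max_spread(intervals, T):
    """Furthest-from-T selection from each interval [lo, hi]."""
    return [hi if abs(hi - T) > abs(lo - T) else lo
            for lo, hi in intervals]

ivals = [(0.5, 1.5), (2.0, 4.0)]
print(min_spread(ivals, T=2.5))  # [1.5, 2.5]
print(max_spread(ivals, T=2.5))  # [0.5, 4.0]
```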
Procedure 1: Procedure to find the minimum spread of the intervals in D around T.
Data: D, T
    for all intervals I = [i̲, ī] in D do
        if T ∈ I then D_min(I) = T;
        else if T < i̲ then D_min(I) = i̲;
        else D_min(I) = ī;
    end
Output: D_min

Procedure 2: Procedure to find the maximum spread of the intervals in D around T.
Data: D, T
    for all intervals I = [i̲, ī] in D do
        if T ∈ I then
            if |i̲ − T| > |ī − T| then D_max(I) = i̲;
            else D_max(I) = ī;
        else if T < i̲ then D_max(I) = ī;
        else D_max(I) = i̲;
    end
Output: D_max

## References

Massih-Reza Amini and Patrick Gallinari. Semi-Supervised Logistic Regression. In *15th European Conference on Artificial Intelligence*, pp. 5, Lyon, France, 2002.

Steven C Bagley, Halbert White, and Beatrice A Golomb. Logistic regression in the medical literature: Standards for use and reporting, with particular attention to one medical domain. *Journal of Clinical Epidemiology*, 54:979–985, 2001.

Boris Beranger, Huan Lin, and Scott Sisson. New models for symbolic data analysis. *Advances in Data Analysis and Classification*, September 2022. ISSN 1862-5347, 1862-5355. doi: 10.1007/s11634-022-00520-8.

Patrice Bertrand. Descriptive Statistics for Symbolic Data. In Hans-Hermann Bock and Edwin Diday (eds.), *Analysis of Symbolic Data: Exploratory Methods for Extracting Statistical Information from Complex Data*, pp. 106–124. Springer Berlin Heidelberg, 2000.

L Billard and E Diday. Regression Analysis for Interval-Valued Data. In Henk A. L. Kiers, Jean-Paul Rasson, Patrick J. F. Groenen, and Martin Schader (eds.), *Data Analysis, Classification, and Related Methods*, pp. 369–374. Springer Berlin Heidelberg, 2000.

L Billard and E Diday. From the Statistics of Data to the Statistics of Knowledge: Symbolic Data Analysis. *Journal of the American Statistical Association*, 98(462):470–487, June 2003. ISSN 0162-1459, 1537-274X. doi: 10.1198/016214503000242.

H.-H. Bock and E. Diday (eds.). *Analysis of Symbolic Data: Exploratory Methods for Extracting Statistical Information from Complex Data*. Springer Berlin Heidelberg, Berlin, Germany, 2000.

Danilo Bzdok, Michael Eickenberg, Olivier Grisel, Bertrand Thirion, and Gael Varoquaux. Semi-Supervised Factored Logistic Regression for High-Dimensional Neuroimaging Data. In *Advances in Neural Information Processing Systems 28 (NIPS 2015)*, pp. 9, Montreal, Canada, 2015.

Hua Chai, Yong Liang, Sai Wang, and Hai-wei Shen. A novel logistic regression model combining semi-supervised learning and active learning for disease classification. *Scientific Reports*, 8(1):13009, December 2018. ISSN 2045-2322. doi: 10.1038/s41598-018-31395-5.

Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. *Semi-Supervised Learning*. MIT Press, Cambridge, MA, USA, ebook edition, 2006.

Shengqiang Chi, Xinhang Li, Yu Tian, Jun Li, Xiangxing Kong, Kefeng Ding, Chunhua Weng, and Jingsong Li. Semi-supervised learning to improve generalizability of risk prediction models. *Journal of Biomedical Informatics*, 92:103117, April 2019. ISSN 15320464. doi: 10.1016/j.jbi.2019.103117.

Paulo Cortez, António Cerdeira, Fernando Almeida, Telmo Matos, and José Reis. Modeling wine preferences by data mining from physicochemical properties.
*Decision Support Systems*, 47(4):547–553, 2009. doi: 10.1016/j.dss.2009.05.016.

Renata M. C. R. de Souza, Francisco José A. Cysneiros, Diego C. F. Queiroz, and Roberta A. De A. Fagundes. A Multi-Class Logistic Regression Model for Interval Data. In *IEEE International Conference on Systems, Man and Cybernetics*, pp. 1253–1258, Singapore, Singapore, 2008. doi: 10.1109/ICSMC.2008.4811455.

Renata M. C. R. de Souza, Diego C. F. Queiroz, and Francisco José A. Cysneiros. Logistic regression-based pattern classifiers for symbolic interval data. *Pattern Analysis and Applications*, 14(3):273–282, 2011. doi: 10.1007/s10044-011-0222-1.

Roberta A. A. Fagundes, Renata M. C. R. de Souza, and Francisco José A. Cysneiros. Robust regression with application to symbolic interval data. *Engineering Applications of Artificial Intelligence*, 26(1):564–573, 2013. doi: 10.1016/j.engappai.2012.05.004.

Scott Ferson. What Monte Carlo methods cannot do. *Human and Ecological Risk Assessment: An International Journal*, 2(4):990–1007, December 1996. ISSN 1080-7039, 1549-7860. doi: 10.1080/10807039609383659.

Scott Ferson, Lev Ginzburg, Vladik Kreinovich, Luc Longpré, and Monica Aviles. Computing Variance for Interval Data is NP-Hard. *ACM SIGACT News*, 33(2):108–118, 2002.

Scott Ferson, Vladik Kreinovich, Janos Hajagos, William Oberkampf, and Lev Ginzburg. Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty. Technical report, Sandia National Laboratories, Albuquerque, NM, USA, 2007.

Federica Gioia and Carlo N. Lauro. Basic statistical methods for interval data. *Statistica Applicata*, 17:1–29, 2005.

Nicholas Gray, Scott Ferson, Marco De Angelis, Ander Gray, and Francis Baumont de Oliveira. Probability bounds analysis for Python. *Software Impacts*, 12:100246, May 2022. ISSN 26659638. doi: 10.1016/j.simpa.2022.100246.

David W. Hosmer Jr, Stanley Lemeshow, and Rodney X. Sturdivant. *Applied Logistic Regression*. John Wiley & Sons, Ltd, Hoboken, NJ, USA, 3rd edition, 2013.

Tommi S Jaakkola. A variational approach to Bayesian logistic regression models and their extensions. In *Proceedings of the Sixth International Workshop on Artificial Intelligence and Statistics*, pp. 12, Fort Lauderdale, FL, USA, 1997. PMLR.

JM Keynes. *A Treatise on Probability*. Macmillan and Co., London, UK, 1921.

David G. Kleinbaum and Mitchel Klein. *Logistic Regression: A Self-Learning Text*. Springer, New York, NY, USA, 3rd edition, 2010.

Kimberly Kracman. The effect of school-based arts instruction on attendance at museums and the performing arts. *Poetics*, 24:203–218, 1996.

Vladik Kreinovich. Interval Computations and Interval-Related Statistical Techniques: Tools for Estimating Uncertainty of the Results of Data Processing and Indirect Measurements. In Franco Pavese and Alistair B. Forbes (eds.), *Data Modeling for Metrology and Testing in Measurement Science*, pp. 1–29. Birkhäuser Boston, Boston, 2009. ISBN 978-0-8176-4592-2 978-0-8176-4804-6. doi: 10.1007/978-0-8176-4804-6_4.

Balaji Krishnapuram, David Williams, Ya Xue, Lawrence Carin, Mário Figueiredo, and Alexander J Hartemink. On Semi-Supervised Classification. In *Advances in Neural Information Processing Systems 17 (NIPS 2004)*, pp. 8, Vancouver/Whistler, Canada, 2004.

Winifred Lambert and Mark Wheeler. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station. Technical report, National Aeronautics and Space Administration, Hanover, MD, USA, 2005.

Yongjun Li, Lizheng Wang, and Feng Li.
A data-driven prediction approach for sports team performance and its application to the National Basketball Association. *Omega*, 98:102123, 2021. doi: 10.1016/j.omega.2019.102123.

Jane C. Lindsey and Louise M. Ryan. Methods for interval-censored data. *Statistics in Medicine*, 17(2):219–238, January 1998. ISSN 0277-6715, 1097-0258. doi: 10.1002/(SICI)1097-0258(19980130)17:2<219::AID-SIM735>3.0.CO;2-O.

CF Manski. *Partial Identification of Probability Distributions*. Springer, New York, NY, USA, 2003. ISBN 0-387-00454-8.

Scott Menard. *Logistic Regression: From Introductory to Advanced Concepts and Applications*. SAGE Publications, Inc, Thousand Oaks, California, 2010. ISBN 978-1-4129-7483-7. doi: 10.4135/9781483348964.

In Jae Myung. Tutorial on maximum likelihood estimation. *Journal of Mathematical Psychology*, 47(1):90–100, 2003. doi: 10.1016/S0022-2496(02)00028-7.

W D Neary, B P Heather, and J J Earnshaw. The Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (POSSUM). *British Journal of Surgery*, 30(2):157–165, 2003. doi: 10.1002/bjs.4041.

Hung T. Nguyen, Vladik Kreinovich, Berlin Wu, and Gang Xiang. *Computing Statistics under Interval and Fuzzy Uncertainty*. Springer, Heidelberg, Germany, 2012. ISBN 978-3-642-22829-2.

Sean M. O'Brien and David B. Dunson. Bayesian Multivariate Logistic Regression. *Biometrics*, 60(3):739–746, September 2004. ISSN 0006341X. doi: 10.1111/j.0006-341X.2004.00224.x.

Gregory C Ohlmacher and John C Davis. Using multiple logistic regression and GIS technology to predict landslide hazard in northeast Kansas, USA. *Engineering Geology*, 69:331–343, 2003. doi: 10.1016/S0013-7952(03)00069-3.

Turner Osler, Laurent G Glance, and David W Hosmer. Simplified Estimates of the Probability of Death After Burn Injuries: Extending and Updating the Baux Score. *Journal of Trauma Injury, Infection and Critical Care*, 68(3), 2010. doi: 10.1097/TA.0b013e3181c453b3.

Sanjay Kumar Palei and Samir Kumar Das. Logistic regression model for prediction of roof fall risks in bord and pillar workings in coal mines: An approach. *Safety Science*, 47(1):88–96, 2009. doi: 10.1016/j.ssci.2008.01.002.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011.

S James Press and Sandra Wilson. Choosing Between Logistic Regression and Discriminant Analysis. *Journal of the American Statistical Association*, 73(364):699–705, 1978.

Daniel Rabinowitz, Anastasios Tsiatis, and Jorge Aragon. Regression with interval-censored data. *Biometrika*, 82(3), 1995.

Patrick Royston and Douglas G Altman. Visualizing and assessing discrimination in the logistic regression model. *Statistics in Medicine*, 29, 2010. doi: 10.1002/sim.3994.

Donald B Rubin. Inference and missing data. *Biometrika*, 63(3):581–592, 1976.

Andrew I. Schein and Lyle H. Ungar. Active learning for logistic regression: An evaluation. *Machine Learning*, 68(3):235–265, August 2007. ISSN 0885-6125, 1573-0565. doi: 10.1007/s10994-007-5019-5.

Georg Schollmeyer. Computing Simple Bounds for Regression Estimates for Linear Regression with Interval-valued Covariates. In *Proceedings of the Twelfth International Symposium on Imprecise Probabilities: Theories and Applications*, pp. 7, Virtual Conference, 2021. Proceedings of Machine Learning Research.
Krasymyr Tretiak and Scott Ferson. Measuring uncertainty when pooling interval-censored data sets with different precision. http://arxiv.org/abs/2210.13863, October 2022.

Krasymyr Tretiak, Georg Schollmeyer, and Scott Ferson. Neural network model for imprecise regression with interval dependent variables. http://arxiv.org/abs/2206.02467, June 2022.

Lev V. Utkin and Frank P. A. Coolen. Interval-valued regression and classification models in the framework of machine learning. In *7th International Symposium on Imprecise Probability: Theories and Applications*, pp. 11, Innsbruck, Austria, 2011.

Peter Walley. *Statistical Reasoning with Imprecise Probabilities*. Chapman and Hall, London, UK, 1991.

T. Whitaker, B. Beranger, and S. A. Sisson. Logistic Regression Models for Aggregated Data. *Journal of Computational and Graphical Statistics*, 30(4):1049–1067, October 2021. ISSN 1061-8600, 1537-2715. doi: 10.1080/10618600.2021.1895816.

Andrea Wiencierz. *Regression analysis with imprecise data*. PhD thesis, Ludwig-Maximilians-Universität München, 2013.

David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In *Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics*, pp. 189–196, Cambridge, Massachusetts, 1995. Association for Computational Linguistics. doi: 10.3115/981658.981684.
Review 1:

Summary: The paper proposes a new method to handle interval data both at the input and at the output of logistic regression. The idea of the approach is to construct the set of all possible logistic regression models that can be learned when input and output values are drawn from these intervals and then to provide predictions in the form of intervals as well (constructed from all possible models). Experiments are conducted on several artificial and real datasets.

Strengths and Weaknesses: The idea of constructing the set of all possible models that can be learned from instances of the dataset compatible with the intervals is very interesting. The fact that it is possible to reduce this set using (8) is a very nice result. I appreciate the fact that exact and approximate algorithms are provided in both cases (input and output uncertainty). The paper is well written and the authors have put a lot of effort into the assessment and the visualization of their results.

The paper has several important weaknesses however at this stage.

(1) I'm not convinced by the approach in the case of label uncertainty. The problem in this setting is that adding unlabeled examples increases uncertainty with respect to not using them, and where these examples lie in the input space can have a huge impact on this uncertainty (as shown in Figure 9). In front of a new dataset with unlabeled examples, one thus has the strange choice between not using these unlabeled examples, which will give no uncertainty about the model, or using them, which will yield an important uncertainty. As, formally, every $x$ without a label can be considered as an unlabeled example, one can thus arbitrarily increase uncertainty by adding new unlabeled examples, which does not seem right to me. I think that the problem comes from the fact that the authors do not assume a particular data model. Not knowing at all the label of an example and not making any assumption about the data distribution should make the unlabeled examples uninformative, and thus useless, to me.

(2) To a smaller extent, there is a similar problem in the case of input intervals. Not having interval data gives the feeling that there is no uncertainty about the model, while adding examples with uncertain inputs gives more uncertain outputs, which is contradictory to me (having more data, even uncertain, should reduce the uncertainty about the model, not increase it). I think the problem is that the original epistemic uncertainty of logistic regression, with non-interval data, is not integrated into the equation. To me, this makes the interpretation of the interval predictions provided by the model very hazardous. This problem is overlooked in the paper I think.

(3) In the case of uncertainty in features, the paper is lacking a proof that it is indeed possible to reduce the set of models to (8). Maybe it's trivial, but I was not able to convince myself that the models in (8) are enough to provide minimum and maximum values for $\pi(x)$ for all $x$. A formal proof should be given.

(4) The description of the algorithms is not detailed enough. In Algorithm 1, what does "using stochastic optimization" mean? How is it implemented in practice? What is the computational cost of this step? Algorithm 2 also needs more explanation. What does "many $P$" mean? What is $P$ actually? A point or a prediction interval? The first step of the loop makes $T$ an input to $\pi$ and $P$ an output of $\pi$ but later in the code $P$ is compared to $T$.
So, I'm unable to understand Algorithm 2. It needs to be explained further in the text. There is also no discussion about the hyper-parameters of the algorithms and how they have been set in the experiments. Experiments are not reproducible.

(5) Experimental results are nicely presented but the experiments however leave me wanting more. In both scenarios, only a single purely manually constructed dataset and a single real dataset, but with manually constructed intervals, are used to illustrate the method and compare it against competitors. Only the problem in Section 5.1 is a real dataset with interval data. This is a bit short. In addition, these experiments are purely illustrative. I'm missing for example more systematic experiments that compare Algorithm 1 with Algorithm 2, as well as Algorithm 3 with Algorithm 4, and study the impact of their hyperparameters. How well does Algorithm 2 (resp. 4) approximate the result of Algorithm 1 (resp. 3), and how much does it improve computing times?

(6) The comparison with state-of-the-art methods is not totally convincing. None of the competitors provide interval predictions and thus I don't know what conclusions can be drawn from these comparisons. But I'm not aware of other methods that take into account interval data and provide intervals at the output. Maybe the authors could propose a way to provide a pointwise prediction from their intervals (e.g., taking the midpoint of the interval) and compare quantitatively this prediction against other methods (using AUC).

(7) Some typos/errors in the text:
- Algorithm 1: $D'$ is used instead of $D'_{\beta_i}$ (twice)
- Algorithm 2: $cud_{min}$ instead of $d_{min}$
- In (9), $\in$ is missing in the third case
- Page 8, the first sentence in the description of the de Souza method is unfinished
- Page 9, "constructs consider"
- Page 9, Section 3.2: "Dataset $X$ from 2": It took me some time to understand that you were referring to the dataset used in Section 2.1. The use of $X$ and $Y$ as dataset names in the different figures is confusing at first. I would have preferred $D_1$ and $D_2$ for example.
- Page 11, last sentence: I would add a table with the AUC values.
- Page 13, top: "shown in a the intervals are plotted" ?
- Page 13, Section 4, "contains p variables" => "p examples"
- Pages 14: "contains points for which have" (twice)

Requested Changes: All the issues raised above require changes in the paper. If (1) cannot be addressed, the paper could maybe refocus on uncertainty of the features only, although (2) is still a problem to me. Other points can be more easily addressed I think.

Broader Impact Concerns: I have no concern about the ethical implications of the work.

==================================================

Review 2:

Summary: The paper explores the probabilistic nature of classifiers, namely, of the logistic regression. It is proposed to estimate a number of logistic regression models to construct interval uncertainties.

Strengths and Weaknesses: Strengths. The method is described in detail. The evaluation metrics are also well explained.

Weaknesses. I still do not understand why $\beta$ should be minimised or maximised, if it is the function which is supposed to be minimised/maximised. (I reviewed the previous version of the paper.) In the de Souza method, "present a method for characterising the uncertainty with ." Please finish the sentence. The paper was carefully re-written compared to its previous version.
However, in general, the description and explanations of the results are still quite incremental and straightforward. The approach seems to be a heuristic, since there are a number of claims without any proofs. It is not discussed in detail in the current submission, but I suppose that the model is rather complex and I wonder whether it is applicable even to a moderate-size task. Also, Section 4 introduces a very complex model with an exponential number of logistic regressions ($2^q$ logistic regression models), and the motivation for such a complex model is lacking. There is also a lack of numerical experiments on realistic data sets. I also wonder whether bootstrap can be used as a baseline method for the proposed approach.

Requested Changes: The algorithm lacks theoretical foundations; however, if it could be shown that it performs well in practice (i.e., add many more results of numerical experiments on realistic data and compare them to the state-of-the-art), it would be interesting.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The submission proposes algorithms for two-class logistic regression on interval-valued data. It also considers how to adapt evaluation measures such as precision and recall when intervals are generated, rather than point estimates of class probabilities.

Strengths and Weaknesses: Learning a logistic regression model from interval-valued data appears to be a subproblem considered in the area of symbolic data analysis, see, for example,

Beranger, B., Lin, H., & Sisson, S. (2022). New models for symbolic data analysis. Advances in Data Analysis and Classification, 1-41.

but the submission fails to present it in this context. There is also relevant work that is not discussed or compared to:

de Barros, A. P., de Carvalho, F. D. A. T., & Neto, E. D. A. L. (2012, October). A pattern classifier for interval-valued data based on multinomial logistic regression model. In 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 541-546). IEEE.

Whitaker, T., Beranger, B., & Sisson, S. A. (2021). Logistic regression models for aggregated data. Journal of Computational and Graphical Statistics, 30(4), 1049-1067.

Whitaker, T. (2019). Innovative methods for the analysis of complex and non-standard data (Doctoral dissertation, UNSW Sydney).

Moreover, according to

Pełka, M., & Rybicka, A. (2019). Identification of factors that can cause mobile phone customer churn with application of symbolic interval-valued logistic regression and conjoint analysis. In The 13th Professor Aleksander Zelias International Conference on Modelling and Forecasting of Socio-Economic Phenomena (pp. 187-195).

the reference de Souza (2011) actually discussed three different methods for using interval-valued data in logistic regression, not just the one method discussed in the submission. The submission should also discuss

Fagundes, R. A., De Souza, R. M., & Cysneiros, F. J. A. (2014). Interval kernel regression. Neurocomputing, 128, 371-388.

and explain why this cannot be applied to kernel logistic regression.

Other weak points of the submission:
- Only one real-world dataset with interval-valued data is used for the experiments (intervals are introduced artificially in the other datasets).
- The algorithms proposed are either brute-force and not computationally feasible in realistic scenarios or apply heuristics without any performance guarantees.
- Two of the competitors considered in the study can be trivially adapted to output an interval rather than a single estimate based on the mean.

Requested Changes: Please address all the comments above as well as the (mainly) smaller issues listed below.

=== Abstract

First sentence of abstract: what about polytomous logistic regression?

"many datasets" - this seems hard to believe, it would be useful to give at least one example here already

Introduction

First sentence: what about polytomous logistic regression?

"where the time of the event is not essential." - delete?

"linearly related or equal variance" - grammar

"This assumption is valid when the sampling uncertainty or natural variability in the data is significant compared to the epistemic uncertainty or if values are missing at random (Ferson et al., 2007)." - it is not clear to me why this is the case and Ferson et al. is just a tech report; please explain in the paper

"Measurement uncertainty is sometimes best represented as intervals, sometimes called "censored data"." - this is not what censoring normally refers to

"to Simplify"

"While these approaches are computationally expedient, they underrepresent the imprecision by presenting a single middle-of-the-road logistic regression." - please explain why this applies to the approach based on the equidistribution hypothesis

"The use ... has been used ..."

"The imprecise probabilities approach makes the fewest assumptions, but some statistics can be computationally challenging for large datasets (Ferson et al., 2007)." - more detail is required here; also, the reference to a tech report does not provide strong support

"If there is interval uncertainty ... . Then" -> "If there is interval uncertainty ... , then"

Section 3

There is no discussion of the optimality or runtime of the algorithms that are presented.

"The first is to consider that the traditional definitions of sensitivity and specificity can be could be re-imagined by defining what the predictive sensitivity" - grammar

"for characterising the uncertainty with . Their"

"therefore constructs consider"

It seems the methods by de Souza and Billard-Diday could be trivially adapted to output intervals by simply not taking the mean. How does that compare to the proposed approach?

Section 4

"to find an esitmate"

Broader Impact Concerns: N/A

==================================================

Review 4:

Summary: The paper proposes novel methods to quantify the predictive uncertainty of logistic regression in scenarios where the training data have uncertainty about feature values, labels, or both. Feature uncertainty is assumed to be represented as intervals that must contain the true value of the feature. Label uncertainty is considered through some data being unlabelled, i.e. using the semi-supervised setting. The methods are demonstrated on various small datasets, showing the resulting predictive intervals visually as prediction bands.

Strengths and Weaknesses: Strengths:
* The paper provides clear motivation and highlights the shortcomings in existing methods;
* The existing literature and methods are covered in sufficient detail;
* The methods are demonstrated on several datasets visually, helping readers to develop intuition about the matter.

Weaknesses:
* In the task of finding an interval $\pi(x)$ of minimum and maximum predictions across the set of all models within ILR(D) as defined in Eq.(7), the authors claim that it is only necessary to consider a small number of models as defined in Eq.(8).
However, no proof is given for this claim, and it has not been checked empirically either whether this is true. This is a critical shortcoming because it leaves it unclear whether the bounds obtained by Eq.(8) or Algorithm 1 are valid or not. In other words, is it possible to have a model which belongs to ILR(D) as defined by Eq.(7) but which provides predictions that are occasionally out of the bounds determined by ILR(D) of Eq.(8) or by ILR(D) of Algorithm 1? Figure 3 shows an example where 60 models randomly drawn from the interval (i.e. from the set of Eq.(7)) are shown all to fall between the extremes defined by Algorithm 1 (which approximates Eq.(8)). However, 60 is a small number; did the authors check with thousands of models as well? Can it be proved that all models of Eq.(7) fall between the extremes of Eq.(8)? If not, then have the authors been able to find a counterexample? Figure 3 is about a dataset with a single feature, but are the models of Eq.(7) within the bounds defined by Eq.(8) when there are more features? Can this be proved? If not, can the authors demonstrate a counterexample? The methods of this paper can still be valuable if the bounds are not entirely valid; in this case, they are just a heuristic approximation and might still be useful in practice. However, this has to be then validated by extensive experiments, exploring how strongly and how often the bounds are violated. Otherwise, it would be very hard to trust a model for which there is no understanding of its capabilities and limitations. In such experiments, it would also be important to compare to simpler baseline methods, such as randomly drawing a fixed number of models from within ILR(D) of Eq.(7) and using the set of models trained on these datasets (essentially the same as the 60 models in Figure 3). Would the method of Algorithm 1 typically provide wider intervals, given a comparable amount of computational resources? As a slightly more advanced baseline, what if the models are drawn from within ILR(D) of Eq.(7) not entirely randomly but such that each feature has the value of one or the other extreme (randomly chosen which one)?

* Related to the previous shortcoming, the paper currently makes unjustified claims as if there were a guarantee, e.g. "only the imprecise method guarantees coverage of the true model" at the bottom of page 9. It only becomes a guarantee if it has been proved. Even if the feature coefficients beta of the true model fall between the minimal and maximal coefficients learned by stochastic optimisation, it is not obvious that the predictions also always fall into the bounds, because the combined effect of multiple betas has not been considered in Eq.(8).

* The paper has not quantified how expensive the computations in the proposed methods are in practice. This is fundamentally important because the goal of the proposed method is to save computation by considering fewer models of Eq.(8) instead of all models of Eq.(7). Instead of stochastic optimisation to find extreme values of beta coefficients, perhaps it is better to consider a larger random sample of models from Eq.(7)? Maybe not, but this needs demonstrating.

Requested Changes: In my opinion, all the weaknesses highlighted above are critical and need addressing before acceptance can be considered. As a result, it must become clear (or at least clearer) to what extent the intervals provided by the models are valid and how much better the proposed methods are compared to existing methods and simpler baselines.
Broader Impact Concerns: I do not have any concerns about the ethical implications of this work.

==================================================

Metareview:

Recommendation: Reject

Comment: On top of the several significant weaknesses pointed out in most of the reviews, I am quite concerned by the lack of engagement from the authors: not all reviews have been addressed by rebuttals, and several reviewers expressed their disappointment that their time and effort in reviewing have not been acknowledged and used by the authors. The authors did not respond directly to each reviewer, who wrote extensive and detailed reviews, or indicate what changes were made. The paper does not reach the acceptance bar for TMLR in my opinion, and I do not recommend a resubmission.

***This meta-review was edited by the Editors in Chief to improve the explanation of the decision.

==================================================
## Response To Reviewers

We greatly appreciate the constructive comments and the opportunity we were given to improve our paper. Following your comments and advice, we have revised our manuscript to include the missing details and make our study more sound and complete. We have also addressed all of the reviewers' detailed comments to the best of our ability. We refer to the specific location of each change using **Review X-XX**. You can click the page link ("on page X") to jump to the modified location. We are currently on revision R2.3.

Reviewer \#SPN2: A.1. I appreciate the clarification. To enhance understanding for readers, I suggest incorporating the explanations you provided for the items below into the paper:
- *The discrepancy between layer-by-layer accuracy (early exit accuracy?) and final accuracy.*
- *The fact that the upper bound (Base) does not apply to other methods.*

Thanks for the suggestions. We have incorporated them at Review 1-1 **on page 20** and Review 1-2 **on page 22**, respectively.

Reviewer \#SPN2: A.2. Thanks for addressing the baseline request. However, there is still an unaddressed part, which is defining the formulation of the problem setup. For a more concrete example, since you are extending the work of (Kaya et al., 2019), refer to their 'Setting' subsection (first paragraph of Section 3). It is essential to formally define your unsupervised setting so that it can be easily compared with other settings.

Thanks for the suggestion. We added a Setting paragraph that refers to the figure of the proposed method. The content marked with Review 1-3 **on page 9** is the location of the change.

Reviewer \#SPN2: A.3. I believe there is a misunderstanding regarding the original request. The request had two parts: (1) The results in Section 4.6.6 reference some values for the figures at an undefined confidence interval. For example, the statement "we achieved a substantial 52.2% and 32.4% reduction in latencies, with a modest 6.9% and 1.6% decrease in mean accuracy when compared to the BranchyNet and Gati method" doesn't define the confidence interval at which the observation was made from Figure 3. If the same confidence interval is used for all observations, this can be simply addressed by stating the confidence value at the beginning of Section 4.6.6. (2) The current presentation of the results in Figures 3 and 4 compares methods across three metrics, making it difficult to compare them directly. Table 3 of (Kaya et al., 2019) reports accuracy at inference costs of < 25%, < 50%, and < 75%, allowing for a direct comparison of cost to accuracy. In contrast, reporting accuracy versus the confidence interval does not directly reflect the trade-off between inference cost and accuracy. Additionally, the table in Appendix D only reports Early Exit Enabled and omits BranchyNet and Gati. I believe generating the table I'm requesting does not require running new experiments; you can simply extract the inference costs of < 25%, < 50%, and < 75% from Figures 3 and 4.

Thank you for the notes. For the first part, we measured the average accuracy and latencies across all confidence levels greater than 0.25. We've added this explanation to all corresponding reports, highlighted in blue. Regarding the second point, it was indeed a misunderstanding on our part. We have now provided the requested table. The content marked with Review 1-4 **on page 29** is the location of the change.

Reviewer \#t26e: 1.
In the candidate layer selection step (Section 3.1), the authors mention several static rules to select candidate layers (e.g., last layers not useful, layers with parallel connections not useful). I find these static rules difficult to generalize across the numerous existing backbone architectures. How should one decide how many last layers to discard? Is the method generalizable to recent popular complex architectures like MoEs, where there are multiple parallel experts? From requested changes: Also, additional comments related to adaptation for modern architectures with stacks of transformers will be helpful, given that the candidate layer selection is heuristic-based.

Thank you for your insightful comment. Regarding the first issue: Our approach includes a rule that discards the last two layers when selecting candidate layers for early exits. The rationale behind this decision is that each early exit model typically consists of at least two layers. If the main path has only two layers remaining before the final result and softmax, adding an early exit at that point would not result in a significant inference improvement. Therefore, the last two layers (e.g., L15 and L16 in our model) are excluded from candidate consideration. We have clarified this point in the revised manuscript. Regarding the second issue: We acknowledge that our method has not been tested on more complex architectures, such as Mixture of Experts (MoE) models. In the original text, we mentioned L7-L11 as an example of layers with parallel connections that are not suitable for early exiting. In response to your suggestion, we have added further notes to address this limitation and have emphasized the need for future work to explore the generalizability of our method on more complex architectures like MoE. The content marked with Review 1-5 **on page 10** and Review 1-6 **on page 11** are the locations of the changes.

Reviewer \#t26e: 2. Have the authors thought about using these early exit approaches for modern LLMs, where early exits would add a lot of value given the gigantic memory cost? Recent works like SkipDecode (https://arxiv.org/abs/2307.02628), ShortGPT (https://arxiv.org/pdf/2403.03853), and FFN-SkipLLM (https://arxiv.org/abs/2404.03865) explore layer-skipping or early-exit strategies. What are the authors' thoughts on these techniques, given their simplicity in identifying comparatively less useful layers and discarding them in limited-compute settings?

The three papers you referenced (SkipDecode, ShortGPT, and FFN-SkipLLM) explore dynamic layer-skipping techniques in large language models (LLMs) and Transformer-based architectures. These methods focus on reducing computational overhead by selectively skipping layers based on input complexity or model confidence. Unlike our early exit approach, which aims to terminate computation entirely when a confident prediction is reached, these methods allow the model to continue processing with fewer layers, thereby conserving computational resources. Another distinction lies in the application domain. While the methods mentioned are focused on LLMs, our approach has primarily been tested on vision-based tasks such as classification and detection. However, we recognize the potential for adapting our early exit strategy to LLMs, especially given the shared goal of reducing unnecessary computations. This represents an exciting area for future research, and we have included a discussion on this topic in the manuscript at Review 1-7 **on page 30**.

Reviewer \#t26e: 3.
In Section 3.2.1, the authors briefly discuss the complexity of the task (CIFAR10 vs. CIFAR100) in deciding the early-exit model capacity. I see further potential here to explore a more task-difficulty-specific evaluation, to see how it relates to the speedup achieved for an easy task vs. a difficult task.

Thank you for your valuable comment. The manuscript discusses how task complexity, particularly the difference between CIFAR10 and CIFAR100, influences the capacity of early-exit models. Specifically, the larger number of target classes in CIFAR100 necessitated early-exit models with greater learning capacity compared to CIFAR10. This adjustment is reflected in our design, where the early-exit models for CIFAR100 have been configured accordingly. To further explore the impact of task complexity on model capacity and speedup, we have also evaluated our approach on the ImageNet dataset, which contains 1,000 classes and presents a more challenging task. The results from ImageNet further demonstrate that our early-exit strategy remains adaptable, balancing accuracy and speedup effectively across varying levels of task difficulty. We have updated the manuscript to include details regarding these adjustments for more complex datasets like ImageNet (Review 1-8 **on page 12**).

Reviewer \#t26e: 4. A comparison with a dense baseline with a similar FLOP/compute count is required to validate the performance difference of the early exit model against a model with a similar compute cost.

We respectfully acknowledge this comment and are awaiting the reviewer's response to our clarification question before providing a detailed response.

Reviewer \#t26e: 6. I am also curious about the thresholding, and the expense of update and maintenance of early exit models. An ablation of this thresholding is required to show the impact it can have.

Thank you for raising this important point. The update and maintenance of early exit models, as discussed in the manuscript, is designed to adapt the early exit models to new trends in inference data. In practical applications, users may decide to update the early exit layers if they receive a significant amount of new, unlabeled data. This process involves retraining the exit layers and adjusting their confidence thresholds to maintain accuracy. However, it is important to clarify that in our experiments, we did not perform these updates, as our focus was on evaluating the performance of the initial early exit models. The recommendation for updating layers and thresholds was included to provide guidance for users who may deploy this tool in dynamic environments where the data distribution changes over time. To ensure transparency, we have added further clarification in the manuscript that these updates are optional and application-specific in Review 1-13 **on page 17**. An ablation study on update and maintenance was not conducted, as it falls outside the scope of our current experiments.

Reviewer \#t26e: 5. The well-advertised benefit of the work is speedup; however, a clean speedup/latency evaluation setting is missing. How is the speedup measured, specifically across CPUs and GPUs? What is the hardware/software stack? A fine-grained evaluation of compute/latency/memory for the backbone and early-exit modules is a must. The authors also need to provide an idea of the compute additionally involved in training these early exit models, NAS, and thresholding.
From requested changes: The critical requirements include a thorough investigation of training- and inference-time compute benefits, and details of the hardware/software stack are required. The latency increase with increasing batch size (Table 6) is also a negative point, given that modern GPUs are fast and efficient at large-batch processing.

Regarding the hardware/software stack, we have provided information about the hardware setup used in our experiments in the "Environment Setup" subsection of the manuscript. Additional details about the software environment have now been included in Review 1-9 **on page 35**. Regarding the GPU/CPU analysis, our primary focus was on inference performance, as explored in Research Question 3 (RQ3). We measured inference on both GPU and CPU, and analyzed memory and FLOPs costs, which are crucial for real-time applications. For NAS, training layers, and update/maintenance tasks, these are one-time processes, and as such, we did not focus heavily on them. We conducted these tasks using a GPU. The time cost for training layers depends on the number of epochs chosen. Since our early exit method typically involves training a limited number of layers (e.g., 3 or 4), and we train them while the backbone remains frozen, the resource requirements are similar to those during testing. The time consumed is roughly equivalent to the maximum latency for each batch. For update and maintenance, as mentioned in a previous response, these tasks may not even be required unless new, unlabeled data is received and further training is needed. In such cases, the time required scales proportionally with the amount of new data; adding 20% more unlabeled data would typically require 20% more time for training. However, to provide readers with a general idea of the resource costs, we have added details in the manuscript: Review 1-10 **on page 37**, Review 1-11 **on page 18**. We emphasize that our focus was on real-world applications that use pre-trained models over extended periods, rather than on the costs of one-time processes.

Thank you for noting the increase in latency with larger batch sizes, as shown in Table 6. While this is a valid point, we would like to clarify that in many real-world applications, such as evaluating individual images or scenes, smaller batch sizes are typically used. Consequently, the latency impact observed with large batch sizes may be less relevant in practical deployments. Additionally, our experiments with large batch sizes demonstrate that GPUs can still effectively reduce inference time by utilizing their cores for large matrix computations. This highlights that early exit approaches are more suitable for small batch sizes, especially when GPU acceleration is not available, rather than this being a limitation of the method.

Reviewer \#t26e: From requested changes: I also highly encourage the authors to explain their key method with a good diagram to make it easy to understand; at this point, it requires a lot of effort to understand the nuances.

We respectfully acknowledge this comment and are awaiting the reviewer's response to our clarification question before providing a detailed response.

# An Unsupervised Early Exit Mechanism For Deep Neural Networks

Anonymous authors

Paper under double-blind review

## Abstract

Deep Neural Networks (DNNs) have become an essential component in many application domains, including web-based services.
A variety of these services require high throughput and (close to) real-time features, for instance, to respond or react to users' requests or to process a stream of incoming data on time. However, the trend in DNN design is towards larger models with many layers and parameters to achieve more accurate results. Although these models are often pre-trained, the computational complexity of such large models can still be relatively significant, hindering low inference latency. In this paper, we propose an end-to-end automated early exiting solution to improve the performance of DNN-based services in terms of computational complexity and inference latency. Our method adopts the ideas of self-distillation of DNN models and early exits. The proposed solution is an automated unsupervised early exiting mechanism that allows early exiting of a large model during inference time if the early exit model in one of the early exits is confident enough for the final prediction. One of the main contributions of this paper is that we have implemented the idea as unsupervised early exiting, meaning that the early exit models do not need access to training data and perform solely based on the incoming data at run-time, making it suitable for applications using pre-trained models. The results of our experiments on two downstream tasks (image classification and object detection) show that, on average, early exiting can reduce the computational complexity of these services by up to 58% (in terms of FLOP count) and improve their inference latency by up to 46% with low to zero reduction in accuracy. Our approach also outperforms existing methods, particularly on complex models and larger datasets. It achieves a remarkable reduction in latency of 51.6% and 30.4% on CIFAR100-Resnet50, with an accompanying increase in accuracy of 2.31% and 0.72%, on average, compared to Gati and BranchyNet.

## 1 Introduction

Deep Neural Networks (DNNs) are incorporated in real-world applications used by a wide spectrum of industry sectors including healthcare (Shorten et al., 2021; Fink et al., 2020), finance (Huang et al., 2020; Culkin, 2017), self-driving vehicles (Swinney & Woods, 2021), and cybersecurity (Ferrag et al., 2020). These applications utilize DNNs in various fields such as computer vision (Hassaballah & Awad, 2020; Swinney & Woods, 2021), audio signal processing (Arakawa et al., 2019; Tashev & Mirsamadi, 2017), and natural language processing (Otter et al., 2021). Many services in large companies such as Google and Amazon have DNN-based back-end software (e.g., Google Lens and Amazon Rekognition) with a tremendous volume of queries per second (QPS). For instance, Google processes over 99,000 searches every second (Mohsin, 2022) and spends a substantial amount of computation power and time on the run time of their models (Xiang & Kim, 2019). These services are often time-sensitive and resource-intensive and require high availability and reliability. Now the question is how fast the current state-of-the-art (SOTA) DNN models are at inference time and to what extent they can provide low-latency responses to queries. The SOTA model depends on the application domain and the problem at hand. However, the trend in DNN design is indeed toward pre-trained large-scale models due to their reduced training cost (only fine-tuning is needed) while providing dominating results (since they are huge models trained on extensive datasets). One of the downsides of large-scale models (pre-trained or not) is their high inference latency.
Although the inference latency is usually negligible per instance, as discussed, a relatively slow inference can jeopardize a service's performance in terms of throughput when the QPS is high. In general, in a DNN-based software development and deployment pipeline, the inference stage is part of the so-called "model serving" process, which enables the model to serve inference requests or jobs (Xiang & Kim, 2019) by directly loading the model in the process or by employing serving frameworks such as TensorFlow Serving (Olston et al., 2017) or Clipper (Crankshaw et al., 2017). The inference phase is an expensive stage in the life cycle of a deep neural model in terms of time and computation costs (Desislavov et al., 2021). Therefore, efforts towards decreasing the inference cost in production have increased rapidly over the past few years.

From a system engineering perspective, caching is a standard practice to improve the performance of software systems, helping to avoid redundant computations. Caching is the process of storing recently observed information to be reused when needed in the future, rather than re-computing it (Wessels, 2001; Maddah-Ali & Niesen, 2014). Caching is usually orthogonal to the underlying procedure, meaning that it is applied by observing the inputs and outputs of the target procedure and does not engage with the internal computations of the cached function. Caching effectiveness is best observed when the cached procedure often receives duplicated input while in a similar internal state, for instance, accessing a particular memory block, loading a web page, or fetching the books listed in a specific category in a library database. It is also possible to adopt a standard caching approach with DNNs (e.g., some work caches a DNN's output solely based on its input values (Crankshaw et al., 2017)). However, it would most likely provide a meager improvement due to the high dimension and size of the data (such as images, audio, and text) and low duplication among the requests. However, due to the feature extraction nature of deep neural networks, we can expect inputs with similar outputs (e.g., images of the same person or the same object) to have a pattern in the intermediate layers' activation values. Therefore, we exploit the opportunity to cache a DNN's output based on the intermediate layer activation values. In this way, we can cache the results not by looking at the raw inputs but by looking at their extracted features in the intermediate layers within the model's forward pass. The intermediate layers often have dimensions even higher than the input data. Therefore, we use shallow classifiers (Kaya et al., 2019) to replace the classic cache-storing and look-up procedures. A shallow classifier is a supplementary model attached to an intermediate layer in the base model that uses the intermediate layer's activation values to infer a prediction. In the caching method, training a shallow classifier on a set of samples mimics the procedure of storing those samples in cache storage, and inferring for a new sample using the shallow classifier mimics the look-up procedure. In this paper, we propose early exiting the predictions made by standard classification models using shallow classifiers trained using the samples and information collected at inference time.
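To make the shallow-classifier idea concrete, the following is a minimal PyTorch sketch of a classifier head attached to an intermediate layer of a frozen backbone. The split point, head architecture, and dimensions are illustrative assumptions, not the exact models used in our experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Frozen pre-trained backbone; its weights are never updated.
backbone = resnet18(weights="IMAGENET1K_V1").eval()
for p in backbone.parameters():
    p.requires_grad = False

# The backbone's prefix up to an intermediate layer (here: the end of layer2).
stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                     backbone.maxpool, backbone.layer1, backbone.layer2)

# A shallow classifier that maps the intermediate activations directly
# to a class distribution of the same size as the backbone's output.
shallow_head = nn.Sequential(
    nn.Conv2d(128, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),
)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = stem(x)           # intermediate activations ("cache key")
logits = shallow_head(feats)  # early prediction ("cache look-up")
```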
We first evaluate the rationality of our method in our first research question by measuring how it affects the final accuracy of the given base models and assessing the effectiveness of the parameters we introduce (tolerance and confidence thresholds) as a knob to control the early exiting certainty. We further evaluate the method in terms of computational complexity and inference latency improvements in the second and third research questions. We measure these improvements by comparing the FLOPs count, memory consumption, and inference latency for the original model vs. the early exit-enabled version that we build throughout this experiment. We observed a reduction of up to 58% in FLOPs and an acceleration of up to 46% in inference latency while inferring on the CPU and up to 18% on GPU, with less than a 2% drop in accuracy. Our method demonstrates remarkable performance across a range of models and datasets. By averaging across all confidence levels greater than 0.25, for the simplest model and dataset, CIFAR10-Resnet50, it offers a substantial reduction of 52.2% and 32.4% in latency, with only a minor decrease of 6.9% and 1.6% in accuracy compared to the BranchyNet and Gati methods, respectively. Furthermore, in the case of the more complex CIFAR100-Resnet50 model and dataset, our method achieves a significant reduction in latency of 51.6% and 30.4%, while simultaneously enhancing the accuracy by 2.31% and 0.72% compared to the Gati and BranchyNet methods.

In summary, the contributions of this paper are:

- Proposing an early exiting method for the predictions made by off-the-shelf image classifiers and object detection models, which only uses unlabelled samples collected at inference time.
- Automating the process of designing the supplementary models used for early exiting and tuning their parameters used for determining the early exit hits, by AutoML methods.
- Empirically evaluating the proposed early exiting method using six publicly available off-the-shelf models on five datasets (CIFAR-10, CIFAR-100, LFW, Cityscapes, Criteo), in terms of computational complexity and inference time reduction.

In the rest of the paper, we discuss the background and related works in Section 2, details of the method in Section 3, and the design and evaluation of the study in Section 4, and we conclude the discussion in Section 5.

## 2 Related Works

In this section, we briefly review the topics related to the model inference optimization problem. Following this, we introduce the techniques used to build the early exiting procedure.

## 2.1 Inference Optimization

There are two perspectives that address the model inference optimization problem. The first perspective focuses on optimizing the model deployment platform and covers a broad range of optimization goals (Yu et al., 2021). These studies often target deployment environments in resource-constrained edge devices (Liu et al., 2021; Zhao et al., 2018) or resourceful cloud-based devices (Li et al., 2020a). Others focus on hardware-specific optimizations (Zhu & Jiang, 2018) and inference job scheduling (Wu et al., 2020). The second perspective is focused on minimizing the model's inference compute requirements by compressing the model. Among model compression techniques, model pruning (Han et al., 2015; Zhang et al., 2018; Liu et al., 2019b), model quantization (Courbariaux et al., 2015; Rastegari et al., 2016; Nagel et al., 2019), and model distillation (Bucila et al., 2006; Polino et al., 2018; Hinton et al., 2015) are widely used.
These ideas alleviate the model's computational complexity by pruning the weights, computing the floating-point calculations at lower precision, and distilling the knowledge from a teacher (more complex) model into a student (less complex) model, respectively. These techniques modify the original model and often cause a fixed amount of loss in test accuracy. In (Liu et al., 2019a) and (Li et al., 2020b), the authors use search algorithms for channel pruning, and they suggest compressed networks with lighter models, which lead to lower accuracy but faster inference. In fact, in most of these studies, they offer either great computational (latency) improvements with poor accuracy, or reasonable accuracy with poor latency. The main disadvantage of these methods is that the accuracy/computation trade-off cannot be tuned by any parameter. In practice, though, to what extent one wants to sacrifice accuracy for faster inference is project-dependent and must be tunable. As we mentioned in the motivation, accelerating inference is crucial for sensitive real-time approaches such as autonomous vehicles. Most of the methods focus on partitioning and offloading calculations to the edge (Mohammed et al., 2020). However, achieving faster decisions for a vehicle to detect a pedestrian requires a more immediate reaction than outsourcing data to the edge. We have applied our method to state-of-the-art object detection approaches, such as Mask R-CNN (He et al., 2017), using popular urban datasets, showing a significant improvement even when using unlabeled data.

## 2.2 Early Exits in DNNs

"Early exit" generally refers to an alternative path in a DNN model which can be taken by a sample instead of proceeding to the next layers of the model. Many previous works have used the concept of early exit for different purposes (Xiao et al., 2021; Scardapane et al., 2020; Matsubara et al., 2022; Haseena Rahmath et al., 2023). (Panda et al., 2016) is one of the early works in this area. They tried to terminate classification by cascading a linear network of output neurons for each convolutional layer and monitoring the output of the linear network to decide about the difficulty of input instances and conditionally activate the deeper layers of the network. However, they did not address inference time or the accuracy/time trade-off. BranchyNet (Teerapittayanon et al., 2016), (Pacheco et al., 2021), and (Ebrahimi et al., 2022) also build on the observation that features learned in the early layers of a network can be used to make an early exit. However, they require labeled data to train their models, rendering them unsuitable for use with unlabeled data. Shallow Deep Networks (SDN) (Kaya et al., 2019) points out the "overthinking" problem in deep neural networks. "Overthinking" refers to models spending a fixed amount of computational resources on any query sample, regardless of its complexity (i.e., how deep the neural network should be to infer the correct prediction for the sample). Their research proposes attaching shallow classifiers to the intermediate layers in the model to form the early exits. Each shallow classifier in SDN provides a prediction based on the values of the intermediate layer to which it is attached. On the other hand, (Xiao et al., 2021) incorporates the shallow classifiers to obtain multiple predictions for each sample. In their method, they use early exits as an ensemble of models to increase the base model's accuracy.
The functionality of the shallow classifiers in our proposed method is similar to that of SDN. However, the SDN method trains the shallow classifier using ground truth data from the training set and ignores the available knowledge in the original model. This constraint renders the proposed method useless when using a pre-trained model without access to the original training data, which is commonly the case for practitioners.

## 2.3 DNN Distillation and Self-Distillation

Among machine learning tasks, the classification category is one of the significant use cases where DNNs have been successful in recent years. Classification is applied to a wide range of data, such as classification of images (Bharadi et al., 2017; Xia et al., 2021), text (Varghese et al., 2020), audio (Lee et al., 2009), and time series (Zheng et al., 2014). Knowledge distillation (KD) (Bucila et al., 2006; Polino et al., 2018; Hinton et al., 2015) is a model compression method that trains a relatively small (less complex) model known as the student to mimic the behavior of a larger (more complex) model known as the teacher. Classification models usually provide a probability distribution (PD) representing the probability of the input belonging to each class. KD trains the student model to provide similar PDs (i.e., soft labels) to the teacher model rather than training it with just a class label for each sample (i.e., hard labels). KD uses specialized loss functions in the training process, such as the Kullback-Leibler divergence (Joyce, 2011), to measure how one PD differs from another. KD is usually a two-step process consisting of training a large complex model to achieve high accuracy and then distilling its knowledge into a smaller model. An essential challenge in KD is to choose the right teacher and student models. Self-distillation (Zhang et al., 2021) addresses this challenge by introducing a single-step method to train the teacher model along with multiple shallow classifiers. Each shallow classifier in self-distillation is a candidate student model, which is trained by distilling the knowledge from one or more of the deeper classifiers. In contrast to SDN, self-distillation utilizes knowledge distillation to train shallow classifiers. However, it still trains the base model from scratch along with the shallow classifiers, using the original training set. This training procedure conflicts with our objectives in both aspects. Specifically, we use a pre-trained model and keep it unchanged throughout the experiment and only use inference data to train the shallow classifiers. In (Leontiadis et al., 2021), the authors present a method for enhancing CNN efficiency through early exits. Using supervision, self-supervision, and self-distillation, their approach personalizes the model on the device, using both labeled and unlabeled data. This allows for dynamic adaptation under varying data availability, focusing on training enhancements. Our work differs in that it does not alter the main model, but instead utilizes early exit layers updated solely at inference time. This is done to improve latency without the need for labeled data during these updates, offering a modular solution with minimal modifications to existing systems. Our work modifies the methods presented in SDN and self-distillation and puts them in the context of early exiting the final predictions of pre-trained DNN models.
The method trains the shallow classifiers using only the unlabeled samples collected at run-time and measures the improvement in inference compute costs achieved by the early exits throughout the forward passes.

## 2.4 DNN Prediction Early Exiting

Clipper (Crankshaw et al., 2017) is a serving framework that incorporates early exiting DNN predictions based on their inputs. Freeze inference (Kumar et al., 2019) investigates the use of traditional ML models such as K-NN and K-Means to predict based on intermediate layer values. They show that the size and computation complexity of those ML models grow proportionally with the number of available samples, and their computational overheads by far exceed any improvement. In Learned Early Exits, (Balasubramanian et al., 2021) extend Freeze Inference by replacing the ML models with a pair of DNN models: a predictor model that predicts the results and a binary classifier that predicts whether the result should be used as the final prediction. Their method uses ground-truth data in the process of training the predictor and selector models. In contrast, our method 1) only uses unlabeled inference data, 2) automates the process of early exit-enabling, 3) uses a confidence-based early exit hit determination, and 4) handles batch processing by batch shrinking.

## 3 Methodology

In this section, we explain the method to convert a pre-trained deep neural model (which we call the backbone) to its extended version with our early exiting method (called the early exit-enabled model). The early exiting method adds one or more early-exit paths to the backbone, controlled by the shallow classifiers (which we call the early exit models), allowing the model to infer a decision faster at run-time for some test data samples (early exit hits). Faster decisions for some queries will result in a reduced mean response time. An "early exit model" is a supplementary model that we attach to an intermediate layer in the backbone, which, given the layer's values, provides a prediction (along with a confidence value) for the backbone's output. As a reminder of our principal motivation, we assume that the original training data is unavailable to the user, as is the case for most large-scale pre-trained models used in practice. Therefore, in the rest of the paper, unless we explicitly mention otherwise, the terms dataset, training set, validation set, and test set all refer to the whole available data at run-time or a respective subset. Our procedure for enabling a pre-trained model to early exit is derived primarily from the self-distillation method (Zhang et al., 2021). However, we adapt this method to early exit-enable pre-trained models using only their recorded outputs through unsupervised learning, without access to the ground truth (GT) labels. The novel aspects of our approach consist of selecting the best early exit model through Neural Architecture Search (NAS) followed by inference-time early exit training. This approach helps us refine our early exit strategy based on performance feedback, with the aim of optimizing both accuracy and inference speed across all stages of the network. Since we employ early exit models in a sequence, if an earlier early exit model wrongly triggers an incorrect early exit, later early exit models will not have a chance to engage even if they would have been successful.
Therefore, an individual early exit model's accuracy/hit rate does not guarantee overall good results, since the final outcomes depend on the earlier early exit models' performance as well.

Review 1-3 [[ **Setting.** Our approach is built on a pre-trained model, and before starting, we use a NAS algorithm to select the top candidate layers with the potential to cache what the base model has learned. After this selection, the inference process begins by passing an input $i$ through a sequence of layers $l_1, l_2, \ldots, l_M$. Each candidate layer has an associated cache model that performs an early exit, producing an output with the same shape as the base model's output. The base model is frozen during both the training and testing phases. For training, at each selected layer $L$, we train the corresponding cache model using the probability distribution $P = B(i)$ generated by the base model as the reference distribution and the probability distribution $Q = B_L(i)$ provided by the cache model for the KL divergence. After training these early exit layers, during the testing phase, the model evaluates at each candidate layer whether the current representation $B_L(i)$ is sufficient to make a confident prediction and potentially terminate the inference process early. This decision is based on the confidence measure calculated at each layer. The process continues through the layers until either an early exit is triggered or the final layer $l_M$ is reached, where the final prediction is made. The model determines the output by $\arg\max B_L(i)$. ]]

A step-by-step guide on early exit-enabling an off-the-shelf pre-trained model from a user perspective contains the following steps:

1. **Identify the candidate layers to early exit** This step involves analyzing each selected model to identify potential positions for early exit layers. Layers whose outputs are independent of other layers' states are chosen as candidates, enabling training possibilities.
2. **Build an early exit model for each candidate** Using Neural Architecture Search (NAS), we evaluate all possible subsets for these early exit layers. NAS also scores the different architectures defined for our early exit models.
3. **Assign confidence thresholds to built models to determine early exit hits** Each early exit model features a softmax as the final layer. In this step, we assign confidence thresholds to the constructed models to determine early exit hits. The probability value associated with the predicted output reflects the model's confidence in its prediction. This confidence level for a given input determines whether we accept that prediction as an early exit hit or continue processing through the rest of the backbone model.
4. **Evaluate and optimize the early exit-enabled model** In this step, we evaluate and optimize the early exit-enabled model in the inference mode and also present our algorithm to update all early exit layers.
5. **Early-Exit Optimization Implementation** In this step, we implement the algorithm and train all the early exit models using unsupervised learning (without using any ground truth) on the datasets we have mentioned.
6. **Update and maintenance** In this step, we periodically update early exit models by retraining them with newly collected inference samples and adjusting their confidence thresholds.

In subsections 3.1 to 3.6, we further discuss the procedure and design decisions in each step outlined above.
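Before detailing the steps, here is a minimal sketch of the two phases described in the Setting above, with hypothetical names and single-sample inference for clarity: fitting each cache model to the frozen base model's output distribution with a KL-divergence loss, and exiting at the first sufficiently confident candidate layer.

```python
import torch
import torch.nn.functional as F

def train_exit(stem, exit_head, base_model, loader, epochs=5):
    """Fit one cache model to the base model's soft labels; no GT labels used."""
    opt = torch.optim.Adam(exit_head.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in loader:                                 # labels are ignored
            with torch.no_grad():
                p = F.softmax(base_model(x), dim=1)         # P = B(i)
                feats = stem(x)                             # frozen backbone prefix
            log_q = F.log_softmax(exit_head(feats), dim=1)  # log Q with Q = B_L(i)
            loss = F.kl_div(log_q, p, reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def predict_with_exits(x, stages, exit_heads, thresholds):
    """stages: backbone segments ending at candidate layers (plus the tail);
    exit_heads[j] is the cache model after stage j, or None for the tail."""
    h = x
    for j, stage in enumerate(stages):
        h = stage(h)
        if exit_heads[j] is not None:
            probs = F.softmax(exit_heads[j](h), dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= thresholds[j]:                # early exit hit
                return pred.item(), j
    return h.argmax(dim=1).item(), len(stages) - 1          # reached final layer
```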
## 3.1 Identifying Candidate Layers

Choosing which layers to early exit is the first step towards early exit-enabling a model. A candidate layer is a layer whose values we examine for correlation with the final predictions by training an early exit model on them. One can simply list all the layers in the backbone as candidates. However, since we launch a search for an early exit model per candidate layer in the next step, we suggest narrowing the list by filtering out some layers with the following criteria:

- Some layers, such as dropouts and batch normalizations, are disabled at inference time and do not modify their input values. Therefore, we cross them off the candidate list.

- Some of the last layers in the model (close to the output layer, such as L15 in Figure 1) might not be valuable candidates for early exiting, since the remaining layers might not have heavy computations to reach the output. This is particularly important as each early exit model typically consists of at least two layers. If the main path has only two layers remaining before the final result and softmax, adding an early exit at that point would not significantly improve inference time. Therefore, we discard the last two layers (e.g., L15 and L16) from consideration.

![10_image_0.png](10_image_0.png)

Figure 1: Early Exit-enabling procedure, candidate layers, and data paths.

- DNN models are usually composed of multiple components (i.e., first-level modules) consisting of multiple layers, such as the residual blocks in ResNet models (He et al., 2016). We narrow down the search space to the output layers of those components.

- We only consider the layers for which, given their activation values, the backbone's output is uniquely determined without any other layer's state involved (i.e., the backbone's output is a function of the layer's output). In other words, a layer with other layers or connections in parallel (such as L7-L11 and L13 in Figure 1) is not suitable for early exiting, since the backbone's output does not solely depend on the layer's output. While this approach has proven effective for the architectures we tested, we acknowledge that it may not generalize well to more complex architectures such as Mixture of Experts (MoE) models, where parallel experts are common. We note this as a limitation of our method and suggest future exploration in this area.

Having the initial set of candidate layers, we next build and associate an early exit model with each one.

## 3.2 Building Early Exit Models

Building an early exit model to be associated with an intermediate layer in the backbone consists of finding a suitable architecture for the early exit model and training a model with that architecture. The details of the architecture search (search space, search method, and evaluation method) and the training procedure (training data extraction and loss function) are discussed in the following two subsections.

## 3.2.1 Early Exit Models Architecture

An early exit model can have an architecture of any depth and breadth, as long as it provides more computational improvement than its overhead. In other words, it must have substantially less complexity (i.e., fewer parameters and connections) than the layers in the backbone that come after the corresponding intermediate layer.
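For concreteness, a hypothetical exit head satisfying this constraint might look like the sketch below. The widths, kernel sizes, and strides here are illustrative; in our method they are chosen by the architecture search described next:

```python
import torch.nn as nn

class ExitHead(nn.Module):
    """A deliberately small classifier attached to an intermediate layer.

    Illustrative only: the actual number of layers, kernel/stride sizes,
    and widths are selected by the architecture search of Section 3.2.1.
    """
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)  # logits; softmax applied by the caller
```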
The search space for such models contains architectures with different numbers and types of layers (e.g., a stack of dense and/or convolution layers). However, all models in the search space must produce a probability distribution identical to the output of the backbone in terms of size (i.e., the number of classes) and activation (e.g., SoftMax or LogSoftMax). In our experiments, the search space consists of architectures with a stack of (up to 2) convolution layers followed by another stack of (up to 2) linear layers, with multiple choices of kernel and stride sizes for the convolutions and neuron counts for the linear layers. However, users can modify or expand the search space according to their specific needs and budget. The objective of the search is to find a minimal architecture that converges and predicts the backbone's output with acceptable accuracy. Note that any accuracy given by an early exit model (better than random) can be helpful, as we will have a proper selection mechanism later in the process to only use the early exit predictions that are (most likely) correct, and also to discard the early exit models yielding low computational improvement. The user can conduct the search by empirically sampling the search space or by using an automated Neural Architecture Search (NAS) tool such as Auto-Keras (Jin et al., 2019), Auto-PyTorch (Zimmer et al., 2021), Neural Network Intelligence (NNI) (Microsoft, 2022), or NASLib (Ruchte et al., 2020). We used NNI to conduct the search and customized the evaluation process to account for both the models' accuracy and their computational complexity. We used the floating point operations (FLOPs) count as the estimate of the models' computational complexity at this stage.

Several factors influence the architecture of an early exit model for a given intermediate layer. These factors include the dimensions of the intermediate target layer, its position in the backbone, and the dataset specifications, such as its number of target classes. For example, the first early exit models in the CIFAR100-Resnet50 and CIFAR10-Resnet18 experiments (shown as early exit 1 in Figure 6) have the same input size, but since CIFAR100 has more target classes, it reasonably requires an early exit model with more learning capacity. The architecture of each early exit model is further adjusted based on the specific requirements of the dataset, particularly the number of output classes. For example, while the early exit models for CIFAR10 and CIFAR100 share similar input sizes, the early exit models for CIFAR100 are designed with increased capacity to handle its larger number of classes. This adaptation is guided by the suggestions provided by NAS, ensuring that each early exit model is optimized for the dataset it is applied to. This approach is consistent across different datasets, such as ImageNet, where the early exit models are similarly adjusted to accommodate the 1,000 output classes. These architectural adjustments ensure that the models remain efficient and capable of handling the varying complexities of different datasets. Therefore, using NAS to design the early exit models helps automate the process and alleviates the need for deep learning expert supervision. NAS aims to keep the total accuracy drop within the given tolerance, so across all possible subsets there is a bounded range of acceptable configurations. This method does not guarantee the best possible score, but it provides a good solution in practice.
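To make the search space concrete, the following tool-agnostic sketch enumerates configurations of the kind described above. The `evaluate` function is a placeholder standing in for training the candidate head and scoring it by accuracy and computational complexity, as in our customized evaluation:

```python
from itertools import product

# Tool-agnostic sketch of the search space described above: up to 2 conv
# layers followed by up to 2 linear layers (all values are illustrative).
SEARCH_SPACE = {
    "n_conv":   [0, 1, 2],
    "kernel":   [3, 5],
    "stride":   [1, 2],
    "n_linear": [1, 2],
    "hidden":   [64, 128, 256],
}

def enumerate_candidates(space):
    """Yield every architecture configuration in the (finite) search space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def evaluate(config) -> float:
    """Placeholder: build an exit head from `config`, train it with the
    KL-divergence objective of Section 3.2.2, and return a score that
    rewards early exit accuracy and penalizes the FLOPs estimate."""
    raise NotImplementedError

# best = max(enumerate_candidates(SEARCH_SPACE), key=evaluate)
```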
The subsets with the best scores are selected as our early exit system; their models are added inside the backbone, trained individually on its predictions, and used to perform early exits. Regardless of the search method, evaluating a nominated architecture requires training a model with the given architecture, whose procedure we discuss in the next section. Moreover, since the search space is limited in depth, it is possible that, for some intermediate layers, none of the early exit models converges (i.e., the model provides nearly random results). In such cases, we discard the candidate layer as non-suitable for early exiting.

## 3.2.2 Training An Early Exit Model

Figure 1 illustrates the schema of an early exit-enabled model consisting of the backbone (the dashed box) and the associated early exit models. The objective of an early exit model is to predict the output of the backbone model, given the corresponding intermediate layer's output, per input sample. Similarly to the backbone, early exit models are classification models. However, their inputs are the activation values of the intermediate layers. As suggested in self-distillation (Zhang et al., 2021), training an early exit model is essentially similar to distilling the knowledge from the backbone (final classifier) into the early exit model. Therefore, to distill the knowledge from the backbone into the early exit models, we need a medial dataset (MD) based on the collected inference data (ID). The medial dataset for training an early exit model associated with an intermediate layer $L$ in the backbone $B$ consists of the activation values in the layer $L$ and the probability distributions given by $B$ per sample in the given ID, formally annotated as follows:

$$MD_{L}=[\,i\in ID:\langle B_{L}(i),B(i)\rangle\,]\tag{1}$$

where:

$MD_L$ : Medial dataset for the early exit model associated with the layer $L$
$ID$ : The collected inference data consisting of unlabelled samples
$B_L(i)$ : Activation values in layer $L$ given the sample $i$ to the backbone $B$
$B(i)$ : The backbone's probability distribution output for the sample $i$

Note that the labels in MDs are the backbone's outputs and not the GT labels, as we assume the GT labels to be unavailable. We split $MD_L$ into three splits ($MD_L^{Train}$, $MD_L^{Val}$, $MD_L^{Test}$) and use them, respectively, following common deep learning training and testing practices. Similarly to the distillation method (Hinton et al., 2015), we use the Kullback-Leibler divergence (KLDiv) (Joyce, 2011) loss function in the training procedure. KLDiv measures how different two given probability distributions are. Thus, minimizing the value of the KLDiv loss on $MD_L^{Train}$ trains the early exit model to estimate the prediction of the backbone ($B(i)$). Unlike self-distillation, where Zhang et al. (2021) train the backbone and shallow classifiers simultaneously, in our method, while training an early exit model, it is crucial to freeze the rest of the model, including the backbone and the other early exit models (if any) in the collection, to ensure that the training process does not modify any parameter not belonging to the current early exit model.

## 3.3 Assigning Confidence Threshold

The probability value associated with the predicted class (the one with the highest probability) is known as the confidence of the model in the prediction.
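Concretely, the confidence is simply the largest softmax probability; a minimal sketch:

```python
import torch

def prediction_with_confidence(exit_logits: torch.Tensor):
    """Return predicted classes and confidences for a batch of exit outputs."""
    probs = torch.softmax(exit_logits, dim=1)
    confidence, predicted = probs.max(dim=1)  # confidence = top-1 probability
    return predicted, confidence  # early exit hit iff confidence >= threshold
```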
The early exit model's prediction confidence for a particular input indicates whether we stick with that prediction (early exit hit) or proceed with the rest of the backbone to the next, or possibly final, exit (early exit miss). Confidence calibration means enhancing the model to provide accurate confidence values. In other words, a well-calibrated model's confidence accurately represents the likelihood of that prediction being correct (Guo et al., 2017). An overconfident early exit model will lead the model to prematurely exit for some samples based on incorrect predictions, whereas an underconfident early exit model will bear a low early exit hit rate. Therefore, after building an early exit model, we also calibrate its confidence using $MD_L^{Val}$ to better distinguish the predictions that are more likely to be correct. Several confidence calibration methods are discussed in (Guo et al., 2017), among which temperature scaling (in the output layer) has been shown to be practical and easy to implement. Having calibrated the model, we next assign it a confidence threshold value, which is used at inference time to determine the early exit hits and misses. When an early exit model identifies an early exit hit, its prediction is considered the final prediction. However, when needed for validation and test purposes, we obtain the predictions from both the early exit model and the backbone.

Confidence calibration involves adjusting the predictive confidence of our early exit models to more accurately reflect their true performance, particularly how likely they are to be correct. This is crucial for making reliable decisions during inference, especially when early exits from the model are considered based on these confidence scores. To calibrate the confidence levels of our early exit models, we employ a threshold that measures confidence based on the model's output during the validation phase. Specifically, each early exit model functions as a classifier: during both the validation and testing phases, it generates an output that carries an associated confidence value. This confidence value indicates the probability that the model's prediction is correct.

Table 1: Early Exit prediction confusion matrix, C: Early Exit predicted class, B: Backbone's predicted class, GT: Ground Truth label

| Category | B = C | B = GT | C = GT |
|------------|---------|----------|----------|
| $BC$ | ✓ | ✓ | ✓ |
| $\overline{BC}$ | ✓ | X | X |
| $B\overline{C}$ | X | ✓ | X |
| $\overline{B}C$ | X | X | ✓ |
| $\overline{B}\ \overline{C}$ | X | X | X |

An early exit model's prediction ($C$) for an input to the backbone falls into one of the five correctness categories listed in Table 1 with respect to the ground truth labels (GT) and the backbone's prediction ($B$) for the input. Among the cases where the early exit model and the backbone disagree, the $B\overline{C}$ predictions negatively affect the final accuracy, whereas the $\overline{B}C$ predictions positively affect it. Equation 2 formulates an early exit model's actual effect on the final accuracy.

$$F_{\Delta}(\theta)=\overline{B}C_{\Delta}(\theta)-B\overline{C}_{\Delta}(\theta)\tag{2}$$

Where:

$\Delta$ : The early exit model
$F_\Delta$ : The actual accuracy effect $\Delta$ causes given $\theta$ as threshold
$\overline{B}C_\Delta$ : Ratio of $\overline{B}C$ predictions by $\Delta$ given $\theta$ as threshold
$B\overline{C}_\Delta$ : Ratio of $B\overline{C}$ predictions by $\Delta$ given $\theta$ as threshold

However, since we use the unlabeled inference data to form the MDs, we can only estimate an upper bound for the effect of an early exit model on the final accuracy.
The estimation assumes that an incorrect early exit always leads to an incorrect classification of the sample ($B\overline{C}$). We estimate the upper bound on the accuracy change an early exit model causes, given a certain confidence threshold, by its hit rate and early exit accuracy:

$$F_{\Delta}(\theta)\leq HR_{\Delta}(\theta)\times(1-CA_{\Delta}(\theta))\tag{3}$$

Where:

$\Delta$ : The early exit model
$F_\Delta$ : The expected accuracy drop $\Delta$ causes given $\theta$ as threshold
$HR_\Delta$ : Hit rate provided by $\Delta$ given $\theta$ as threshold
$CA_\Delta$ : Early exit accuracy provided by $\Delta$ given $\theta$ as threshold

For instance, an early exit model with a 40% hit rate and 95% early exit accuracy can reduce the final accuracy by at most 0.4 × 0.05 = 2%. Given the tolerance $T$ for the drop in final accuracy, we assign a confidence threshold to each early exit model that yields an expected accuracy drop of no more than $T/2^n$% on $MD_L^{Val}$ according to Equation 3, where $n$ is the 1-based index of the early exit model in the setup; e.g., with $T = 2\%$, the first early exit model is budgeted at most a 1% expected drop, the second at most 0.5%, and so on. It is important to note that there are alternative methods to distribute the accuracy drop budget among the early exit models. For example, one can distribute the budget equally. However, as we show in the evaluations later in Section 4.6.1, we find it reasonable to assign more budget to the early exit models at shallower positions in the backbone.

## 3.4 Evaluation And Optimization Of The Early Exit-Enabled Model

So far, we have a set of early exit layers and their corresponding early exit models ready for deployment. Algorithm 1 gives a Python-style pseudo-implementation of the inference process for the early exit-enabled model. When the early exit-enabled model receives a batch of samples, it proceeds layer by layer, similar to the standard forward pass. Once an early exit layer's activation values are available, it passes the values to the corresponding early exit model and obtains an early prediction with a confidence value per sample in the batch. For each sample, if the corresponding confidence value exceeds the specified threshold, we consider it an early exit hit. Hence, we have the final prediction for the sample without passing it through the rest of the backbone. At this point, the prediction can be sent to the procedure awaiting the results (e.g., an API, a socket connection, a callback). We shrink the batch by discarding the early exit hit items at each exit and proceed with a smaller batch to the next (or the final) exit.

**Algorithm 1** Early Exit-enabled model inference

Require: Backbone ▷ The original model
Require: EarlyExitLayers ▷ List of early exit layers
Require: Layer ▷ As part of Backbone, including the associated early exit model and threshold
1: **procedure** ForwardPass(X, callback) ▷ X: Input batch
2: **for** Layer in Backbone.Layers **do** ▷ In order of presence¹
3: X ← Layer(X)
4: **if** Layer in EarlyExitLayers **then**
5: EarlyExit ← Layer.EarlyExitModel
6: T ← EarlyExit.Threshold
7: earlyExitProbs ← EarlyExit(X)
8: confidences ← max(earlyExitProbs, axis=1)
9: callback(earlyExitProbs[confidences ≥ T]) ▷ Resolve early exit hits
10: X ← X[confidences < T] ▷ Shrink the batch
11: **end if**
12: **end for**
13: **end procedure**

¹The loop indicates that each early exit model receives the early exit layer's activation values immediately when available, before proceeding to the next layer in the base model.

So far in the method, we have only evaluated the early exit models individually, but to gain the highest improvement, we must also evaluate their collaborative performance within the early exit-enabled model. Once the early exit-enabled model is deployed, each early exit model affects the hit rates of the following early exit models by narrowing the set of samples for which they infer; a concrete sketch of the batch-shrinking forward pass is given below.
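The following is a minimal PyTorch-style rendering of Algorithm 1, assuming illustrative data structures for the exit heads and thresholds:

```python
import torch

def forward_with_early_exits(backbone_layers, exit_heads, thresholds,
                             x, callback):
    """Batched forward pass with batch shrinking (cf. Algorithm 1).

    backbone_layers: ordered list of the backbone's layers
    exit_heads:      dict mapping layer index -> trained early exit model
    thresholds:      dict mapping layer index -> confidence threshold
    callback:        consumer of resolved predictions (e.g., an API handler)
    """
    for i, layer in enumerate(backbone_layers):
        x = layer(x)
        if i in exit_heads:
            probs = torch.softmax(exit_heads[i](x), dim=1)
            confidence, _ = probs.max(dim=1)
            hits = confidence >= thresholds[i]
            if hits.any():
                callback(probs[hits])   # resolve early exit hits immediately
            x = x[~hits]                # shrink the batch
            if x.shape[0] == 0:         # every sample has already exited
                return
    callback(torch.softmax(x, dim=1))   # final exit for the remaining samples
```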
Even if an early exit model shows a promising hit rate and accuracy in individual evaluation, its performance in deployment can be degraded by the early exit hits made by earlier early exit models (those connected to shallower layers in the backbone). Therefore, we need to choose the optimal subset of early exit models that infers the predictions with the minimum computation. A brute-force approach to finding the optimal subset would require evaluating the early exit-enabled model with each subset of the early exit models. However, we implement a more efficient method that avoids multiple executions of the early exit-enabled model. First, for each early exit model, we record its prediction and confidence value per sample in $MD_L^{Val}$. We also record two FLOPs counts per early exit model: the early exit model's own FLOPs count ($C_1$), and the fallback FLOPs count, which denotes the FLOPs in the remaining layers of the backbone ($C_2$). For example, for the layer L12 in Figure 1, $C_1$ is the FLOPs count of the corresponding early exit model, and $C_2$ is the FLOPs count of the layers L13 through L16. For each subset $S$, we process the lists of predictions recorded for each model in $S$ to generate the lists of samples that they actually receive when deployed along with the other early exit models in $S$. The processing consists of keeping only the samples in each list for which there have been no early exit hits by the previous early exit models in the subset. Further, we divide each list into two parts according to each early exit model's confidence threshold: one consisting of the early exit hits, and the other consisting of the early exit misses. Finally, we score each subset using the processed lists and recorded values for each early exit model in $S$ as follows:

$$K(S)=\sum_{\Delta\in S}|H_{\Delta}|\times(C_{2,\Delta}-C_{1,\Delta})-|M_{\Delta}|\times C_{1,\Delta}\tag{4}$$

Where:

$K$ : The early exiting score for subset $S$
$\Delta$ : An early exit model in $S$
$H_\Delta$ : The generated list of early exit hits for $\Delta$
$M_\Delta$ : The generated list of early exit misses for $\Delta$
$C_{1,\Delta}$ : FLOPs count recorded for $\Delta$
$C_{2,\Delta}$ : Fallback FLOPs count recorded for $\Delta$

The score equation accounts for both the improvement an early exit model provides through its early exit hits within the subset and the overhead it produces for its early exit misses. Utilizing additional early exit layers may allow a class to be detected earlier with high probability (given appropriate confidence thresholds for the early exit layers) before moving to the next layers, so the total accuracy can increase at the cost of memory consumption. Final schemas after applying the method to MobileFaceNet, EfficientNet, ResNet18, and ResNet50 are discussed in the main text, with detailed illustrations provided in the Appendix (see Figure 6). This figure demonstrates the chosen subsets and their associated early exit models for each backbone and dataset.

## 3.5 Early-Exit Optimization Implementation

In this section, we present the implementation and application of early exit models for efficient and timely predictions in three distinct tasks: image classification, object detection, and recommendation systems.
We begin by incorporating our proposed early-exit approach into architectures widely used for image classification tasks, namely MobileFaceNet, EfficientNet, ResNet18, and ResNet50, on the benchmark datasets CIFAR10, CIFAR100, ImageNet, and LFW. Encouraged by the promising results obtained in image classification, we further extend our methodology to address a critical real-world scenario: pedestrian detection in urban environments. For this purpose, we adopt the state-of-the-art Mask R-CNN model, renowned for its exceptional object detection capabilities. By integrating our early-exit strategy into Mask R-CNN, we enable the model to detect pedestrians at an earlier stage during inference, thus significantly reducing the processing time and providing timely warnings to autonomous vehicles about the presence of pedestrians within a given scene. The significance of our contributions lies in the potential to enhance the safety and responsiveness of autonomous vehicles in urban settings, where pedestrian detection plays a pivotal role in avoiding accidents and ensuring seamless interaction between vehicles and pedestrians.

In our approach, while prioritizing accuracy, we forgo an important aspect: the exact coordinates of the detected objects. By implementing early-exit models triggered upon the detection of pedestrians or humans, we aim to achieve faster processing and response times. However, we acknowledge that in certain cases, reacting within the required time window is of utmost importance. The trade-off between accuracy and response time is a crucial consideration in our methodology, and we recognize the significance of timely actions, especially in scenarios where immediate responses are critical for ensuring optimal outcomes.

In the context of the Mask R-CNN model, various options are available for selecting different backbones and settings, allowing flexibility in performance evaluation and adaptation to specific tasks. While numerous configurations are possible, we opted to utilize a publicly available, pre-trained backbone to ensure that our experiments are standardized and well established. This choice allows us to focus on the effectiveness of our proposed approach, taking advantage of the robustness and generalization capabilities of the chosen backbone. Additionally, using a pre-trained model helps to mitigate potential biases in training data and enables fair comparisons with other methods that have adopted similar backbones. The final schema of the early exit models for Mask R-CNN with a ResNet50 backbone is illustrated in the appendix (see Figure 7).

We extend our early exit classification model to implement pedestrian object detection. The object detection model scans an input image and detects multiple objects within it, assigning each detected object a probability distribution over possible classes. The challenge therein lies in determining an effective method for updating the convolutional and dense early exit layers in this context.

Finally, we explore the application of our early-exit strategy in recommendation systems. Specifically, we integrate early-exit nodes into the Deep Learning Recommendation Model (DLRM), as suggested by the Microsoft Neural Architecture Search (NAS) algorithm. This integration aims to optimize the inference time while maintaining high recommendation accuracy.
The schema of the DLRM architecture with early exits is detailed in Figure 8.

## 3.5.1 Updating Early Exits For Pedestrian Detection

In object detection, pedestrians are one of the most common classes. To optimize the performance of our early exit framework for pedestrian detection, we explore three update strategies for the layers:

- Updating with the most confident pedestrians: In this approach, we selectively update the layers with features extracted from regions confidently classified as pedestrians. By focusing on the most confident detections, we aim to enhance the early exit memory's relevance to crucial features associated with pedestrians in the scene.

- Updating with the most confident object: We investigate updating the layers with features from regions classified as the most confident object, regardless of whether it is a person or another class. This strategy is designed to ensure that the early exit memory reflects critical features representative of the dominant object class in the scene.

- Updating with all detected objects: In this method, we update the layers with features from all detected objects in the scene. While this approach may provide a broader context, it may introduce redundancy and bias towards the more prevalent classes.

After testing these three early exit-updating approaches, the results supported updating with the most confident single object as the best-performing method. Training early exit layers with a sole focus on individual objects, such as pedestrians, leads to non-convergence and a lack of meaningful learning. Even prior to testing, it became evident that training layers exclusively with a single predefined class introduces bias, impeding effective learning. Meanwhile, training early exit layers using a diverse set of objects results in model confusion, manifesting as reduced accuracy in our testing outcomes. Thus, our selected approach updates the model with the most confident detection while avoiding the issues of bias and multi-object confusion.

## 3.6 Updates And Maintenance

Similar to conventional caching, layer early exiting also requires recurring updates to the early exit space to adapt to the trend in inference data. However, unlike conventional caching, we cannot update the early exit models in real time. Therefore, to update the early exit models using the extended set of collected inference samples, we retrain them and re-adjust their confidence thresholds. It should be noted that in our experiments, we did not perform these updates, as our focus was on the initial setup of the early exit models. The guidance provided here is intended for users who may deploy this tool in dynamic environments where the data distribution changes over time. Retraining adapts the early exit models to the trend in the incoming queries and maintains their early exit accuracy. We consider two triggers for the updates: I) when the size of the recently collected data reaches a threshold (e.g., 20% of the collected samples are new), and II) when the backbone is modified or retrained. However, users should adapt the recommended triggers to their requirements and budget. The costs associated with updating and maintaining the early exit models will depend on the amount of new data and the specific requirements of the application. For instance, adding 20% more unlabeled data would typically require 20% more training time.
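The two triggers above can be expressed as a simple run-time check. The following sketch is illustrative; the exact ratio and the retraining routine itself are user-chosen:

```python
def should_update(n_new_samples: int, n_total_samples: int,
                  backbone_changed: bool, new_data_ratio: float = 0.2) -> bool:
    """Trigger I: enough newly collected inference samples (default 20%).
    Trigger II: the backbone itself was modified or retrained."""
    enough_new_data = n_new_samples / max(n_total_samples, 1) >= new_data_ratio
    return backbone_changed or enough_new_data

# When triggered, the early exit models are retrained on the extended medial
# datasets and their confidence thresholds are re-calibrated (Section 3.3).
```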
This is our summarized evaluation process:

- **Pretraining Models:** We use pretrained models, typically trained with both training and validation data. After pretraining, we integrate our early exit layers into the models. Initially, these new blocks and layers are not trained.

- **Test Data Splitting:** To test our method, we split the test dataset into two separate parts. We then freeze the main model and update the early exit layers by performing inference on the first portion of the test data. During this phase, the main model's weights are not updated; only the new early exit layers are trained. Importantly, we do not use the labels from the test data for training or optimization, thus avoiding overfitting concerns.

- **Performance Evaluation:** After training the early exit layers, we test the performance of the entire model using the remaining part of the test dataset. This approach may result in a slight loss of accuracy, but it significantly reduces the total inference time due to the early exits.

## 4 Empirical Evaluation

In this section, we explain the objective of our experiment, the research questions, the implementation of the tool, and the design of the experiment, including the backbones and datasets, the evaluation metrics, and the configuration of the environment.

## 4.1 Objectives And Research Questions

The high-level objective of this experiment is to assess the ability of the automated layer early exiting mechanism to improve compute requirements and inference time for DNN-based services. To address the above objective, we designed the following research questions (RQ):

RQ1 To what extent can the early exit models accurately predict the backbone's output and the ground truth data? This RQ investigates the core idea of early exiting as a mechanism to estimate the final output earlier in the model. The assessments in this RQ consider the early exit models' accuracy in predicting the backbone's output (early exit accuracy) and in predicting the correct labels (GT accuracy).

RQ2 To what extent can early exit-enabling improve compute requirements? In this RQ, we are interested in how early exit-enabling affects the models' computation requirements. We measure FLOPs counts and memory usage as metrics for the models' compute consumption.

RQ3 How much acceleration does early exit-enabling provide on CPU/GPU? In this RQ, we are interested in the actual end-to-end speed-up that an early exit-enabled model can achieve. We break this result down into CPU and GPU accelerations, since they address different types of computation during the inference phase and thus may be affected differently.

RQ4 How does the early exit-enabled model's accuracy/latency trade-off compare with other early exit methods? In this research question, our aim is to assess and compare the performance of our early exit-enabled model against other existing early exit methods concerning the trade-off between accuracy and latency in practical, real-world scenarios.

## 4.2 Tasks And Datasets

Among the diverse set of real-world classification tasks that are implemented by solutions using DNN models, we have selected two representatives: face recognition and object classification.
Both tasks are commonly addressed by DNNs and often used in large-scale services that have non-functional requirements such as high throughput (due to the nature of the service and the large volume of input data) and time sensitivity. Face recognition models are originally trained on larger datasets such as MS-Celeb-1M (Guo et al., 2016) and are usually tested with different, smaller, datasets such as LFW (Huang et al., 2008), CPLFW (Zheng et al., 2017), RFW (Wang et al., 2019), AgeDB30 (Moschoglou et al., 2017), and MegaFace (Kemelmacher-Shlizerman et al., 2016) to test the models against specific challenges, such as age/ethnicity biases and recognizing mask-covered faces. We used the Labeled Faces in the Wild (LFW) dataset for face recognition, which contains 13,233 images of 5,749 individuals. We used the images of the 127 identities who have at least 11 images in the set, so that we could split them for training, validation, and testing. We also used the CIFAR10, CIFAR100, and ImageNet test sets (Krizhevsky, 2009; Russakovsky et al., 2015) for object classification, each containing 10,000 images distributed equally among 10 and 100 classes for the CIFARs, and 100,000 images among 1,000 classes for ImageNet. The CIFAR images have a size of 32 × 32 pixels, while the ImageNet dataset originally contains images of different sizes. To standardize the input size for ImageNet, we used the common method of resizing images to 256×256 pixels and then cropping them to 224×224 pixels. As a reminder, we do not use the training data; rather, we only use the test sets to simulate incoming queries at run-time. Specifically, we used only the test splits of the CIFAR and ImageNet datasets. However, we used the whole LFW dataset, as it has not been used to train the face recognition models. Moreover, we do not use the labels in these test sets in the training and optimization process; rather, we only use them in the evaluation step to provide GT accuracy statistics. Each dataset mentioned above represents an inference workload for the models. Thus, we split each one into training, validation, and test partitions with 50%, 20%, and 30% proportions, respectively. However, we augmented the test sets using flips and rotations to improve the statistical significance of our testing measurements.

We used the CityScape dataset to assess the presence of pedestrians (Cordts et al., 2016). This dataset is valuable for our research on the comprehension of urban scenes, as it offers meticulously annotated images, with pixel-level labels, depicting various urban environments from the perspective of a vehicle. To evaluate the accuracy of our model, we needed labeled data for testing purposes. Upon observing that the test subset of the CityScape dataset contained dummy labels, we opted to utilize the validation subset instead.

To demonstrate the applicability of our early exit model to non-image-based tasks, we integrated it into a recommendation system. The dataset we use is the Criteo dataset from Kaggle (Jean-Baptiste Tien, 2014), used for the Display Advertising Challenge, which is a comprehensive resource designed to benchmark click-through rate (CTR) prediction algorithms. It includes data collected over a seven-day period, consisting of feature values and click feedback for millions of display ads. The dataset is divided into training and testing parts, with each part containing both clicked and non-clicked examples that have been sub-sampled to manage dataset size.
The dataset consists of 39 features per sample: 13 integer features, primarily count data, and 26 hashed categorical features for anonymization. The details of these features are not disclosed, and they may include missing values. This application shows the ability of the model to significantly improve inference speed while managing large-scale data structures.

## 4.3 Backbones And Models

The proposed early exit enabler method is applicable to any deep classifier model. However, the results will vary for different models depending on their complexity. Among the available face recognition models, we have chosen the well-known MobileFaceNet and EfficientNet models to evaluate the method, and we experiment with ResNet18 and ResNet50 for object classification. The object classification models are typical classifier models out of the box. However, face recognition models are feature extractors that provide embedding vectors for each image based on facial features. They can still be used to classify a face-identity dataset. Therefore, we attached a classifier block to these models and trained them (with the feature extractor layers frozen) to classify the images of the 127 identities with the highest number of images in the LFW dataset (above 10). It is important to note that since the added classifier block is a part of the pre-trained model under study, we discarded the data portion used to train the classifier block to ensure we still satisfy the constraint of working with pre-trained models without access to the original training dataset.

As stated previously, our pedestrian detection approach required the selection of an object detection technique capable of identifying pedestrians within images. We adopted the Mask R-CNN framework. This method encompasses a backbone component (for which we employed ResNet50) and two additional sections that consume significant time and memory resources. However, for our specific use case of providing early warnings to autonomous vehicles regarding the presence of pedestrians, the precise localization of pedestrians is not essential. Consequently, we chose to disregard the other resource-intensive sections, resulting in substantial time savings while still achieving the necessary level of awareness of pedestrian presence.

The Deep Learning Recommendation Model (DLRM) (Naumov et al., 2019) is a neural network architecture designed for use in recommendation systems. It efficiently handles categorical and numerical data, making it highly effective for personalized recommendation tasks. DLRM uses a combination of embedding tables for categorical features and multi-layer perceptrons (MLPs) for numerical features. These components interact through a specialized dot-product interaction operation, which enables the model to learn and predict complex patterns from user-item interactions.

## 4.4 Metrics And Measurements

Our evaluation metrics for RQ1 are ground truth (GT) accuracy and early exit accuracy. Early exit accuracy measures how accurately an early exit model predicts the backbone's output (regardless of its correctness). The GT accuracy applies to both the early exit-enabled model and each individual early exit model, whereas early exit accuracy only applies to early exit models. Individual early exit layers' accuracies might differ from the final early exit-enabled model's accuracy.
This discrepancy arises because layer-wise accuracies are calculated only for the cases where the input is classified at that specific layer (hit cases), which typically results in higher accuracies. For example, out of 5000 inputs, a particular layer may serve as an early exit for only 5 inputs at a confidence level of 99%. Even though this layer may exhibit high accuracy in these cases, it would have only a slight impact on the overall final accuracy of the model.

In RQ2, we compare the original models and their early exit-enabled versions in terms of the average FLOPs count per inference and their memory usage. We only measure the resources used in inference. Specifically, we exclude the training-specific layers (e.g., batch normalization and dropout) and computations (e.g., gradient operations) from the analysis. The FLOPs count takes into account the model architecture and input size and estimates the computations required by the model to infer the input (Desislavov et al., 2021). In other words, the fewer FLOPs used for inference, the more efficient the model is in terms of compute and energy consumption. On the other hand, we report two aspects of memory usage for the models. The first is the total space used to load the models in memory (i.e., model size). This metric is essentially agnostic to the performance of the early exit models and only considers the memory cost of loading them along with the backbone. In addition to the memory required for their weights, DNNs also allocate a sizeable amount of temporary memory for buffers (also referred to as tensors) that correspond to intermediate results produced during the evaluation of the DNN's layers (Levental, 2022). Therefore, our second metric is the live tensor memory allocations (LTMAs) during inference. LTMA measures the total memory allocated to load, move, and transform the input tensor through the model's layers to form the output tensor while executing the model.

In RQ3, we compare the average inference latency of the original model with its early exit-enabled counterpart. Inference latency measures the time spent from passing the input to the model until it exits the model (through either an early exit or the final classifier in the backbone). Various factors affect inference latency, including hardware-specific optimizations (e.g., asynchronous computation), the framework, and the model implementation. In our measurements, the framework and model implementations are fixed, as discussed in the Appendix. However, to account for other factors, we repeat each measurement 100 times and report the average inference latency recorded for each experiment. Further, to also account for the effects of asynchronous computations on GPU inference latency, we repeated the experiments with different batch sizes. Please refer to the Appendix for details of the implementation and setup.

RQ4 aims to evaluate the trade-offs between accuracy, latency, and computational cost of an early exit-enabled model compared to other early exit methods. This inquiry is crucial to understanding the efficiency and effectiveness of different approaches. To address this question, we have selected two prominent and easy-to-implement early exit methods for comparison. These methods are evaluated on classification tasks with ResNet18 and ResNet50 on the CIFAR datasets.

## 4.5 Baselines

In this subsection, we discuss the baselines used to evaluate the proposed unsupervised early exit method.
Specifically, we consider two prominent methods: BranchyNet and GATI, both of which aim to reduce the latency of deep neural network (DNN) inference while maintaining high accuracy. These baselines typically require access to labeled training data, which contrasts with our unsupervised approach.

BranchyNet is designed to improve the efficiency of DNN inference by allowing certain samples to exit the network early through additional side branch classifiers. These branches are strategically placed at various depths in the network to enable predictions at different levels of abstraction. During the training phase, BranchyNet jointly optimizes the loss functions of both the main branch and the side branches, effectively regularizing the network and mitigating vanishing gradients. The inference phase utilizes entropy-based thresholds to determine whether a sample should exit early or continue through the deeper layers of the network. In our implementation of BranchyNet as a baseline, we maintain the original supervised setup, where labeled training data is used to train both the main branch and the side branches. For evaluation, we applied BranchyNet to the entire test set, allowing us to directly compare its performance against our unsupervised method.

GATI introduces a learned caching mechanism to accelerate DNN inference by exploiting temporal locality in prediction serving workloads. GATI caches the hidden layer outputs of the DNN, enabling faster predictions for inputs that exhibit similar characteristics to previously processed data. The system employs simple machine learning models as learned caches, which are continuously updated based on incoming data. Unlike BranchyNet, GATI does not modify the DNN architecture but instead optimizes the inference process by dynamically skipping layers when a cache hit is detected. This approach also relies on supervised training to establish the initial caches, using a validation dataset to optimize the architecture and hit rate of the caches. During evaluation, GATI was applied to the entire test set, as it requires labeled data to train and update its learned caches.

While both baselines rely on supervised training, our method is designed to function in an unsupervised manner, particularly in scenarios where a base pre-trained model is accessible but labeled data may not be available. This flexibility allows our approach to be deployed in contexts where labeled data is scarce or unavailable, leveraging the test set itself for training early exits. To ensure a fair comparison with our method, we adapted both baselines to this unsupervised framework by evaluating them on the entire test set. However, it is important to note that these baselines do not inherently support unsupervised training, which is a key contribution of our work. An important point to note is that our base model represents the upper bound of our work, as our approach is built on a pre-trained model. However, this does not necessarily apply to other baselines, such as BranchyNet and GATI, as they may use different approaches for training. While our base model demonstrates the best possible performance within the constraints of our approach, the same cannot be directly inferred for the other baselines, which may not achieve the same upper bound due to their reliance on labeled training data and other training techniques.

## 4.6 Experiment Results

In this subsection, we evaluate the results of applying the method to the baseline backbones and discuss the answers to the RQs.
For a more comprehensive understanding, further details about our implementation, including access to the code repository, are provided in the Appendix.

## 4.6.1 RQ1. How Accurate Are The Early Exit Models In Predicting The Backbone Output And The Ground Truth Labels?

In this RQ, we are interested in the built early exit models' performance in terms of their hit rate, GT accuracy, and early exit accuracy. We break down the measurements into two parts. The first part covers the individual performance of the early exit models over the whole test set without any other early exit model involved. The second part covers their collaborative performance within the early exit-enabled model.

## 4.6.2 Early Exit Models' Individual Performance

Figure 2 portrays each early exit model's individual performance against any confidence threshold value in the CIFAR100-Resnet50 experiment. Figures demonstrating the same measurements for the other experiments are available in the Appendix. We make three key observations here. First, deeper early exit models are more confident and accurate in their predictions. For example, early exit 1 in Figure 2 has 33.36% GT accuracy and 35.74% early exit accuracy, while these metrics increase to 78.60% and 95.38% for early exit 3, respectively. This observation agrees with the generally acknowledged feature extraction pattern in DNNs: deeper layers convey more detailed information. The second key observation is the inverse correlation between the early exit models' accuracy (both GT and early exit) and their hit rates. This observation highlights the reliability of confidence thresholds in distinguishing the predictions that are more likely to be correct. For example, early exit 1 in Figure 2, with a confidence threshold of 20%, produces a hit rate of 35.24% but also a drop of 8.99% in the final accuracy. However, with a confidence threshold of 60%, it produces a hit rate of 4% and does not reduce the final accuracy by more than 0.1%. The third observation is that the early exit accuracy is higher than the GT accuracy in all cases. This difference arises because we have trained the early exit models to mimic the backbone only by observing its activation values in the intermediate layers and its outputs. Since we have not assumed access to the GT labels (which is the case for inference data collected at run-time) while training the early exit models, they have learned to make correct predictions only by predicting the backbone's output, which might have been incorrect in the first place. On the other hand, we observed that the early exit models predict the correct labels for a portion of the samples that the backbone misclassifies. For example, for 0.92% of the samples, early exit 3 (in Figure 2) correctly predicted the GT labels while the backbone failed ($\overline{B}C$ predictions). This shows the potential of the early exit models to partially compensate for their incorrect early exits ($B\overline{C}$ predictions) by correcting the backbone's predictions for some samples ($\overline{B}C$). This agrees with the overthinking concept in SDN (as discussed in Section 2.3), since for this set of samples, the early exit models have been able to predict correctly in the shallower layers of the backbone.

![22_image_0.png](22_image_0.png)

Figure 2: Individual accuracy and hit rate of the early exit models vs. confidence threshold per early exit model in the CIFAR100-Resnet50 experiment.
## 4.6.3 Early Exit Models' Collaborative Performance

Table 2: Early exit models' collaborative performance in terms of hit rate (HR), early exit accuracy (A_earlyexit), GT accuracy (A_GT), and their effect on the final accuracy (↓A_effect) in the early exit-enabled model with a confidence threshold of 0.9. LFW: Labeled Faces in the Wild, MFN: MobileFaceNet, EFN: EfficientNet

| Data | Model | Base acc. | EE-enabled acc. | Exit# | HR | A_earlyexit | A_GT | ↓A_effect |
|---|---|---|---|---|---|---|---|---|
| CIFAR10 | Resnet18 | 88.71% | 86.49% | 1 | 67.21% | 92.29% | 88.91% | 1.31% |
| | | | | 2 | 10.33% | 89.76% | 76.63% | 0.56% |
| | | | | 3 | 11.24% | 85.71% | 51.43% | 0.25% |
| | | | | 4 | 8.32% | 91.37% | 35.71% | 0.1% |
| CIFAR10 | Resnet50 | 87.92% | 85.88% | 1 | 61.41% | 89.12% | 86.19% | 1.12% |
| | | | | 2 | 15.73% | 93.01% | 77.84% | 0.58% |
| | | | | 3 | 10.29% | 82.22% | 53.33% | 0.3% |
| | | | | 4 | 6.1% | 97.47% | 42.65% | 0.04% |
| CIFAR100 | Resnet18 | 75.92% | 74.47% | 1 | 11.96% | 99.29% | 82.11% | 0.94% |
| | | | | 2 | 58.26% | 99.62% | 85.41% | 0.1% |
| | | | | 3 | 7.26% | 93.81% | 59.29% | 0.3% |
| | | | | 4 | 5.36% | 55.56% | 38.89% | 0.11% |
| CIFAR100 | Resnet50 | 78.98% | 77.04% | 1 | 11.92% | 76.34% | 80.2% | 1.32% |
| | | | | 2 | 61.98% | 98.56% | 84.55% | 0.34% |
| | | | | 3 | 11.5% | 97.85% | 63.69% | 0.27% |
| | | | | 4 | 7.38% | 73.68% | 52.63% | 0.1% |
| ImageNet | Resnet18 | 69.76% | 68.12% | 1 | 8.21% | 93.55% | 76.42% | 0.59% |
| | | | | 2 | 14.09% | 93.24% | 84.23% | 1.1% |
| | | | | 3 | 38.13% | 83.53% | 88.76% | 1.13% |
| | | | | 4 | 8.78% | 75.23% | 79.72% | 0.61% |
| ImageNet | Resnet50 | 76.13% | 74.09% | 1 | 9.13% | 91.41% | 92.19% | 0.48% |
| | | | | 2 | 13.89% | 84.78% | 81.77% | 0.89% |
| | | | | 3 | 42.65% | 78.58% | 72.03% | 1.37% |
| | | | | 4 | 3.14% | 73.12% | 71.12% | 0.19% |
| LFW | MFN | 97.78% | 96.91% | 1 | 37.35% | 98.63% | 97.88% | 0.51% |
| | | | | 2 | 41.02% | 99.71% | 99.71% | 0% |
| | | | | 3 | 55.95% | 93.44% | 96.18% | 0.24% |
| LFW | EFN | 97.29% | 95.35% | 1 | 63.73% | 96.82% | 96.24% | 1.67% |
| | | | | 2 | 14.52% | 99.12% | 98.76% | 0.02% |
| CityScape | Mask R-CNN | 91.0% | 83.4% | 1 | 34.3% | 58.3% | 57.9% | 0.1% |
| | | | | 2 | 36.34% | 79.2% | 79.1% | 0.24% |
| | | | | 3 | 21.12% | 87.31% | 86.16% | 0.81% |
| Criteo | DLRM | 78.88% | 73.54% | 1 | 12.3% | 74.3% | 70.9% | 0.4% |
| | | | | 2 | 14.3% | 69.5% | 69.3% | 0.23% |
| | | | | 3 | 25.41% | 94.31% | 90.70% | 0.51% |

Table 2 describes the collaborative performance of the early exit models with a confidence threshold of 0.9. In the table, we also report how each early exit model's hits have affected the final accuracy. Here, we observe that when evaluating the early exit models on the subset of samples missed by the previous early exit models (i.e., the relatively more complex samples), the measured hit rate and GT accuracy are substantially lower compared to the evaluation on the whole dataset. This is because the simpler samples (less detailed and easier to classify) are resolved earlier in the model.
More specifically, the hit rate decreases since the early exit models are less confident in their predictions for the more complex samples, and the GT accuracy also decreases since the backbone is also less accurate for such samples. However, we observe that the early exit models still have high early exit accuracy, with a low impact on the overall accuracy. This observation shows how the confidence-based early exiting method has effectively enabled the early exit models to provide early predictions while keeping the overall accuracy drop within the given tolerance.

**Summary for RQ1.** Early exit models show lower hit rates and GT accuracy for complex samples, but maintain the overall accuracy thanks to the effective use of a confidence-based early exiting method.

## 4.6.4 RQ2. To What Extent Can Early Exit-Enabling Improve Compute Requirements?

In this RQ, we showcase the amount of computation early exiting can save in terms of FLOPs count and analyze the memory usage of the models, together with the corresponding accuracy drops.

Table 3: Original and early exit-enabled models' FLOPs (M: Mega, 10^6)

| Dataset (input size) | Model | Original FLOPs | Early Exit-enabled (0.9 Confidence) FLOPs | ↓Ratio | ↓Accuracy |
|---|---|---|---|---|---|
| CIFAR10 (3 × 32 × 32) | Resnet18 | 765M | 414M | 45.88% | 2.51% |
| | Resnet50 | 1303M | 601M | 53.87% | 2.32% |
| CIFAR100 (3 × 32 × 32) | Resnet18 | 766M | 374M | 51.17% | 1.91% |
| | Resnet50 | 1304M | 547M | 58.05% | 2.46% |
| ImageNet (3 × 224 × 224) | Resnet18 | 2343M | 1673M | 28.6% | 2.35% |
| | Resnet50 | 2783M | 2020M | 27.4% | 2.68% |
| LFW (3 × 112 × 112) | MobileFaceNet | 474M | 296M | 37.55% | 0.91% |
| | EfficientNet | 272M | 182M | 33.08% | 1.99% |
| CityScape (3 × 2048 × 1024) | Mask R-CNN | 4950M | 2730M | 44.84% | 8.61% |
| Criteo (39) | DLRM | 153M | 99M | 35.29% | 6.52% |

Table 3 shows the average number of FLOPs computed for inference per sample. Here we observe that shrinking the batches proportionally decreases the FLOPs count required for inference. In addition, Table 4 shows the memory used to load the models (i.e., the model size) and the total LTMA during inference over the test set. As expected, the early exit-enabled models are larger than the original models in all cases, since they include the backbone plus the additional early exit models. However, the decrease in LTMA in all cases shows a reduced amount of memory allocation during inference. Generally, lower LTMA indicates smaller tensor dimensions (e.g., batch size, input and operator dimensions) (Ren et al., 2021). However, in our case, since we do not change either of these dimensions, the lower LTMA is due to avoiding the computations in the remaining layers after early exit hits, which would require further memory allocations. A noteworthy observation from this table is the substantial memory usage of our object detection approach due to the sizeable model employed. This underscores that implementing early exiting for this purpose does not significantly amplify the memory requirements. Although the FLOPs count and memory usage indicate the model's computational requirements for inference, the decreased FLOPs and LTMA do not necessarily lead to a proportional reduction in the models' inference latency, which we investigate further in the next RQ.
Table 4: Original and early exit-enabled (0.9 confidence) models' memory usage

| Dataset (input size) | Model | Model Size (Original) | LTMA (Original) | Model Size (EE-enabled) | LTMA (EE-enabled) | ↓Accuracy |
|---|---|---|---|---|---|---|
| CIFAR10 (3 × 32 × 32) | Resnet18 | 43MB | 102MB | 97MB | 88MB | 2.51% |
| | Resnet50 | 91MB | 235MB | 243MB | 201MB | 2.32% |
| CIFAR100 (3 × 32 × 32) | Resnet18 | 43MB | 104MB | 383MB | 93MB | 1.91% |
| | Resnet50 | 91MB | 235MB | 552MB | 189MB | 2.46% |
| ImageNet (3 × 224 × 224) | Resnet18 | 43MB | 110MB | 403MB | 101MB | 2.35% |
| | Resnet50 | 91MB | 237MB | 572MB | 198MB | 2.68% |
| LFW (3 × 112 × 112) | MobileFaceNet | 286MB | 567MB | 350MB | 515MB | 0.91% |
| | EfficientNet | 147MB | 371MB | 297MB | 349MB | 1.99% |
| CityScape (3 × 2048 × 1024) | Mask R-CNN | 3680MB | 3925MB | 4171MB | 4216MB | 8.61% |
| Criteo (39) | DLRM | 320MB | 450MB | 332MB | 411MB | 6.52% |

**Summary for RQ2.** Early exit-enabled models are larger due to the additional early exit layers, but reduce memory use during inference by avoiding computations after early exit hits, demonstrating efficient memory management without significantly increasing overall memory requirements.

## 4.6.5 RQ3. How Much Acceleration Does Early Exit-Enabling Provide On The CPU/GPU?

In this RQ, we investigate the end-to-end improvement that early exit-enabling offers. The results of this measurement clearly depend on multiple deployment factors, such as the underlying hardware and framework and, as we discuss later in this section, their asynchronous computation capabilities.

Table 5: End-to-end evaluation of early exit-enabled models' improvement in average inference latency, batch size = 32. MFN: MobileFaceNet, EFN: EfficientNet

| Dataset | Model | Original CPU | Original GPU | EE-enabled CPU | EE-enabled GPU | ↓Ratio CPU | ↓Ratio GPU |
|---|---|---|---|---|---|---|---|
| CIFAR10 | Resnet18 | 13.4 ms | 1.08 ms | 10.11 ms | 0.98 ms | 24.55% | 10.2% |
| | Resnet50 | 18.73 ms | 1.81 ms | 14.62 ms | 1.51 ms | 31.08% | 16.57% |
| CIFAR100 | Resnet18 | 14.23 ms | 1.39 ms | 9.39 ms | 1.25 ms | 34.01% | 10.08% |
| | Resnet50 | 19.59 ms | 2.05 ms | 9.02 ms | 1.84 ms | 46.08% | 16.75% |
| ImageNet | Resnet18 | 38.19 ms | 3.26 ms | 30.23 ms | 2.79 ms | 20.84% | 14.42% |
| | Resnet50 | 47.21 ms | 3.49 ms | 38.74 ms | 2.42 ms | 17.94% | 30.65% |
| LFW | MFN | 25.34 ms | 8.22 ms | 16.91 ms | 7.30 ms | 33.23% | 11.19% |
| | EFN | 39.41 ms | 17.63 ms | 27.98 ms | 14.38 ms | 29.01% | 18.44% |
| CityScape | Mask R-CNN | 895 ms | 145.2 ms | 562.3 ms | 108.7 ms | 45.12% | 35.32% |
| Criteo | DLRM | 9.0 ms | 1.7 ms | 7.67 ms | 1.38 ms | 14.7% | 18.82% |

Table 5 shows the average latency of the base models on CPU and GPU versus their early exit-enabled counterparts, evaluated on the test set. The first key observation is the improvement on the CPU. This improvement is due to the low parallelism of the CPU architecture. Essentially, the computation volume on the CPU is proportional to the number of samples. Therefore, when a sample takes an early exit, the remaining computation required to finish the batch proportionally decreases. The FLOPs of all our early exit models, incurred by each input in the batch, are aggregated, so the latency is dictated by the longest time observed in the batch.
Consequently, a larger batch size tends to worsen latency, as it may lead to more frequent early exit misses. The second observation is the relatively lower latency improvement on the GPU. This observation shows that shrinking a batch does not proportionally reduce the inference time on the GPU, which is due to the high parallelism in the hardware. Shrinking the batch on the GPU introduces a certain overhead, since it interrupts the on-chip parallelism and hardware optimizations. This interruption forces the hardware to re-plan its computations, which can be time-consuming. Thus, batch-shrinking improvements can be insignificant on the GPU. The third observation pertains to the time savings in the pedestrian detection task relative to the original model. This significant gain in efficiency is attributed to skipping the remaining layers of Mask R-CNN through our early exit strategy.

Table 6: Inference latency improvement on GPU vs. batch size in Resnet18 and Resnet50 trained on CIFAR100

| Model    | Batch Size | Original Latency | Early Exit-enabled (0.9 Confidence) Latency | ↓ Ratio |
|----------|------------|------------------|---------------------------------------------|---------|
| Resnet18 | 16         | 1.34 ms          | 1.18 ms                                     | 11.83%  |
| Resnet18 | 32         | 1.39 ms          | 1.25 ms                                     | 10.08%  |
| Resnet18 | 64         | 1.43 ms          | 1.77 ms                                     | -24.28% |
| Resnet18 | 128        | 1.61 ms          | 2.11 ms                                     | -31.05% |
| Resnet50 | 16         | 1.98 ms          | 1.71 ms                                     | 13.68%  |
| Resnet50 | 32         | 2.05 ms          | 1.84 ms                                     | 16.75%  |
| Resnet50 | 64         | 2.19 ms          | 1.98 ms                                     | 9.21%   |
| Resnet50 | 128        | 2.7 ms           | 3.22 ms                                     | -19.43% |

Table 6 further demonstrates how the batch size affects the improvement provided by early exiting. The key observation here is that increasing the batch size can negate the early exiting effect on the inference latency, which, as discussed, is because fewer batches are fully resolved by the early exit models before reaching the last layers. In conclusion, the latency improvement here highly depends on the hardware used for inference and must be specifically analyzed per hardware environment and computation parameters such as batch size. However, the method can still be useful when the model is not performing batch inferences (batch size = 1). One can also use the tool to get the best prediction so far within the forward-pass process by disabling batch shrinking. Doing so will generate multiple predictions per input sample, one per exit (early and final).

Summary for RQ3 Increasing batch size in early exit models can worsen latency due to more frequent early exit misses and does not proportionally reduce inference time on GPUs due to the disruption of hardware parallelism. In particular, significant time savings are achieved in pedestrian detection by bypassing extra layers with an early exit strategy, although the benefits of early exiting on latency heavily depend on the specific hardware and batch size used.
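To make the batch shrinking discussed in this RQ concrete, the sketch below shows one way a forward pass could retire confident samples at each exit and continue only with the misses. It reuses `confidence_gate` from the earlier sketch; `backbone_blocks` and `exit_heads` are illustrative names rather than the tool's actual interface, and the last block is assumed to end in the backbone's classifier.

```python
import torch

@torch.no_grad()
def forward_with_early_exits(x, backbone_blocks, exit_heads, threshold=0.9):
    """Batch shrinking sketch: confident samples leave the batch early,
    the rest continue through the remaining backbone blocks.

    `backbone_blocks` is the backbone split at the candidate layers;
    `exit_heads` maps a block index to its early exit model.
    """
    outputs = [None] * x.size(0)
    alive = torch.arange(x.size(0))          # indices still in flight
    for i, block in enumerate(backbone_blocks):
        x = block(x)
        head = exit_heads.get(i)
        if head is None:
            continue
        hit, pred = confidence_gate(head(x), threshold)
        for j, p in zip(alive[hit].tolist(), pred[hit].tolist()):
            outputs[j] = p                   # early prediction served
        alive, x = alive[~hit], x[~hit]      # shrink the batch
        if alive.numel() == 0:
            return outputs                   # everything exited early
    # the last block is assumed to produce the backbone's final logits
    for j, p in zip(alive.tolist(), x.argmax(dim=-1).tolist()):
        outputs[j] = p
    return outputs
```

On a CPU, the work saved by each shrink translates almost directly into latency; on a GPU, the re-planning overhead discussed above can outweigh it for large batches.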
## 4.6.6 Rq4. How Does The Early Exit-Enabled Model Accuracy / Latency And Computational Cost Trade-Off Compare With Other Early Exit Methods?

In this RQ, we conducted a comprehensive evaluation of three distinct methods, including BranchyNet (Teerapittayanon et al., 2016), Gati (Balasubramanian et al., 2021), and our proposed method, in the same system configuration. This evaluation included two different Resnets with two different datasets to thoroughly assess the performance of each method. Figures 3 and 4 demonstrate a comprehensive analysis of the accuracy/latency/computational cost trade-off at various confidence levels for all methods, evaluated on CIFAR10 and CIFAR100.

![27_image_0.png](27_image_0.png)

Figure 3: CIFAR10 accuracy/latency/computational cost comparison between the approaches with different confidence.

![27_image_1.png](27_image_1.png)

Figure 4: CIFAR100 accuracy/latency/computational cost comparison between the approaches with different confidence.

Our findings reveal a remarkable adaptability of our proposed method to different confidence levels, particularly excelling in low-confidence scenarios. This adaptability underscores the effectiveness of our approach's training, showcasing its ability to perform exceptionally well in situations where traditional methods might falter. As confidence levels increase, our method exhibits only a slight latency increase, reflecting its ability to fine-tune confidence settings to suit various application requirements. This feature demonstrates the method's adaptability and its potential for accommodating diverse use cases.

Table 7 presents the comparison of inference costs and accuracy across different methods (BranchyNet, GATI, and Ours) using ResNet18 and ResNet50 models on the CIFAR-10 and CIFAR-100 datasets. The table reports accuracy at various inference cost thresholds (≤25%, ≤50%, and ≤75%) relative to the original model's inference cost. The "Max" column indicates the highest accuracy achieved by each method with early exits. This comparison highlights the trade-offs between inference cost and accuracy for the evaluated methods and demonstrates that our method has a better impact for more complex base models and datasets.

Table 7: Comparing the inference costs of Resnet18 and Resnet50 models on CIFAR-10 and CIFAR-100 across different methods. ≤25%, ≤50%, and ≤75% report the early exit accuracy when we limit the average inference cost to at most 25%, 50%, and 75% that of the original models of each method. Max reports the highest accuracy early exits can achieve.

| Method               | ≤25%   | ≤50%   | ≤75%   | Max   |
|----------------------|--------|--------|--------|-------|
| CIFAR-10 - ResNet18  |        |        |        |       |
| BranchyNet           | 86.4   | 87.7   | 91.54  | 95.5  |
| GATI                 | 69.8   | 77.4   | 85.7   | 91.6  |
| Ours                 | 81.6   | 82.8   | 83.5   | 89.4  |
| CIFAR-10 - ResNet50  |        |        |        |       |
| BranchyNet           | 82.0   | 84.3   | 88.4   | 90.1  |
| GATI                 | 74.5   | 80.3   | 86.0   | 88.5  |
| Ours                 | 80.7   | 81.5   | 83.7   | 86.6  |
| CIFAR-100 - ResNet18 |        |        |        |       |
| BranchyNet           | 70.8   | 72.8   | 73.2   | 77.4  |
| GATI                 | 71.5   | 72.5   | 74.7   | 76.4  |
| Ours                 | 62.2   | 63.0   | 67.8   | 77.5  |
| CIFAR-100 - ResNet50 |        |        |        |       |
| BranchyNet           | 65.7   | 69.5   | 72.3   | 74.1  |
| GATI                 | 66.2   | 66.8   | 69.7   | 73.3  |
| Ours                 | 67.6   | 70.0   | 73.9   | 79.4  |

The primary observation is that our method achieves the best latencies among the evaluated methods at different confidence levels. Another significant observation is that our method's performance scales positively with the complexity of the model or dataset, leading to superior outcomes in terms of both accuracy and latency. Since we add only lightweight layers to each model, Resnet50 exhibits more pronounced improvements in our experiments. This observation is also visible in Figure 3. Averaging across all confidence levels greater than 0.25, for the simplest model and dataset, namely CIFAR10-Resnet18, we achieved substantial 52.2% and 32.4% reductions in latency, with modest 6.9% and 1.6% decreases in accuracy compared to the BranchyNet and Gati methods, respectively, while improving computational costs by 7.1% and 0.5%.
In the case of a more complex model and dataset, such as CIFAR100-Resnet50, we achieved a substantial reduction in latency of 51.6% and 30.4% while simultaneously improving the accuracy by 2.31% and 0.72% compared to the Gati and BranchyNet methods, respectively, while improving computational costs by 14.3% and 8.2%. BranchyNet, on the other hand, exhibits relatively consistent results at different confidence levels. Although this stability might be desirable in some scenarios, it lacks the adaptability and dynamic response that our method offers. Gati, while featuring learned early exits, is marked by the complexity of its training process, which can be challenging to implement effectively. Moreover, Gati consumes more memory resources during implementation, in contrast to the efficiency of our approach. Indeed, our method stands out as a dynamic and adaptable solution, requiring less extensive data and model preparation compared to approaches such as Gati. The ability to configure confidence levels further underscores its versatility for a broad spectrum of applications. However, it should be noted that for simpler models and datasets, our method may not demonstrate a marked improvement over others, as they employ additional dense layers for early exiting, which can provide competitive results. Crucially, the most significant aspect of our work lies in the capability to update the early exit layers effectively without reliance on ground-truth labels, offering a substantial advantage in practical applications where such labels may not be readily available.

Summary for RQ4 Our method demonstrates exceptional adaptability across different confidence levels, particularly excelling in low-confidence scenarios, which highlights its robust training and ability to fine-tune settings for varied applications. It achieves the best latencies and scales positively with the complexity of the model or dataset, showing substantial improvements in both accuracy and computational costs, particularly with complex configurations like CIFAR100-Resnet50.

To evaluate the impact of implementing NAS and varying model depths, we conducted an ablation study, detailed in Appendix D.

## 4.7 Limitation And Future Directions

The first limitation of this study is that the proposed method is limited to classification models, since it would be more complicated for early exit models to predict the output of a regression model due to its continuous values. This limitation is strongly tied to the effectiveness of knowledge distillation in the case of regression models. The method also does not take the internal state of the backbone (if any) into account, such as the hidden states in recurrent neural networks. Therefore, the effectiveness of the method for such models still needs to be further assessed. Moreover, practitioners should take the underlying hardware and the backbone structure into account, as they directly affect the final performance. On this note, as shown in Section 4.6.5, different models provide different performance in terms of inference latency in the first place; therefore, choosing the right model for the task comes first, and early exiting can then be helpful in improving the performance.
As the main goal of our proposed approach is to reduce the inference time, a potential slight degradation of accuracy may not be acceptable in some application domains (e.g., in a safety-critical system), in which case our approach might not be a good fit. While our current comparison has yielded valuable results, we can explore the applicability of our approach to other large models, particularly on non-vision-based datasets, to assess its effectiveness in different domains. Given the growing importance of reducing latency and inference time, especially in Large Language Models (LLMs), our future research can focus on methods to further optimize and reduce costs for large-scale industries.

## 5 Conclusion

In this paper, we have shown that our automated early exiting approach is able to extend a pretrained classification DNN to an early exit-enabled version using a relatively small and unlabeled dataset. The required training datasets for the early exiting models are collected just by recording the input items and their corresponding backbone outputs at inference time. We have also shown that the early exiting method can introduce significant improvement in the model's computing requirements and inference latency, especially when the inference is performed on the CPU. We discussed the parameters, design choices, and the procedure of early exit-enabling a pre-trained off-the-shelf model, as well as the required updates and maintenance. In conclusion, while traditional input-level caching might not be beneficial for DNN models due to the diversity, size, and dimensions of the inputs, early exiting based on the features in the hidden layers of the DNNs using early exit models can achieve significant improvement in the model's inference computational complexity and latency. As shown in Sections 4.6.4 and 4.6.5, early exiting reduces the average inference FLOPs by up to 58% and the latency by up to 46.08% on CPU and 18.44% on GPU for classification purposes. For pedestrian detection, we could reduce latency by up to 45.12% on CPU and 35.32% on GPU. In summary, our method consistently outperforms alternative approaches across different models and datasets. Considering average values across all confidence levels greater than 0.25, in the case of the simplest model, CIFAR10-Resnet18, we observe a remarkable reduction of 52.2% and 32.4% in latency, with accuracy only experiencing a minor decrease of 6.9% and 1.6% compared to the BranchyNet and Gati methods, respectively. For the more intricate CIFAR100-Resnet50 model and dataset, our method excels with a significant 51.6% and 30.4% reduction in latency, and a notable 2.31% and 0.72% improvement in accuracy compared to the Gati and BranchyNet methods. These findings underscore the adaptability and superior performance of our method in diverse and complex scenarios.

## References

Riku Arakawa, Shinnosuke Takamichi, and Hiroshi Saruwatari. Implementation of dnn-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device. *10th ISCA Workshop on Speech Synthesis (SSW 10)*, 2019.

Arjun Balasubramanian, Adarsh Kumar, Yuhan Liu, Han K. Cao, Shivaram Venkataraman, and Aditya Akella. Accelerating deep learning inference via learned caches. *ArXiv*, abs/2101.07344, 2021.

Vinayak Ashok Bharadi, Arusa Irfan Mukadam, Misbah N Panchbhai, and Nikita N Rode. Image classification using deep learning. *International Journal of Engineering Research and Technology*, 6, 2017.

Nandita Bhaskhar.
Intermediate activations - the forward hook. *Blog: Roots of my Equation (web.stanford.edu/~nanbhas/blog/)*, 2020. URL https://web.stanford.edu/~nanbhas/blog/forward-hooks-pytorch/.

Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In *KDD '06*, 2006.

Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. *ArXiv*, abs/1812.00332, 2019.

Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In *Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2016.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In *NIPS*, 2015.

Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph E. Gonzalez, and Ion Stoica. Clipper: A low-latency online prediction serving system. In *NSDI*, 2017.

Robbie Culkin. Machine learning in finance: The case of deep learning for option pricing. 2017.

Radosvet Desislavov, Fernando Martínez-Plumed, and José Hernández-Orallo. Compute and energy consumption trends in deep learning inference. *ArXiv*, abs/2109.05472, 2021.

Maryam Ebrahimi, Alexandre da Silva Veith, Moshe Gabel, and Eyal de Lara. Combining dnn partitioning and early exit. In *Proceedings of the 5th International Workshop on Edge Systems, Analytics and Networking*, pp. 25–30, 2022.

Mohamed Amine Ferrag, L. Maglaras, Sotiris K. Moschoyiannis, and Helge Janicke. Deep learning for cyber security intrusion detection: Approaches, datasets, and comparative study. *J. Inf. Secur. Appl.*, 50, 2020.

Olga Fink, Qin Wang, Markus Svensén, Pierre Dersin, Wan-Jui Lee, and Mélanie Ducoffe. Potential, challenges and future directions for deep learning in prognostics and health management applications. *Eng. Appl. Artif. Intell.*, 92:103678, 2020.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1321–1330. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/guo17a.html.

Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In *ECCV*, 2016.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. *arXiv preprint arXiv:1510.00149*, 2015.

P Haseena Rahmath, Vishal Srivastava, and Kuldeep Chaurasia. A strategy to accelerate the inference of a complex deep neural network. In *Proceedings of Data Analytics and Management: ICDAM 2022*, pp. 57–68. Springer, 2023.

M. Hassaballah and Ali Ismail Awad. Deep learning in computer vision. 2020.

Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 2961–2969, 2017.

Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. *ArXiv*, abs/1503.02531, 2015.

Gary B. Huang, Marwan A. Mattar, Tamara L.
Berg, and Eric Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. 2008.

Jian Huang, Junyi Chai, and Stella Cho. Deep learning in finance and banking: A literature review and classification. *Frontiers of Business Research in China*, 14:1–24, 2020.

Jean-Baptiste Tien, joycenv, and Olivier Chapelle. Display advertising challenge, 2014. URL https://kaggle.com/competitions/criteo-display-ad-challenge.

Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search system. *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 2019.

James M. Joyce. *Kullback-Leibler Divergence*, pp. 720–722. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. ISBN 978-3-642-04898-2. doi: 10.1007/978-3-642-04898-2_327. URL https://doi.org/10.1007/978-3-642-04898-2_327.

Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitras. Shallow-deep networks: Understanding and mitigating network overthinking. In *ICML*, 2019.

Ira Kemelmacher-Shlizerman, Steven M. Seitz, Daniel Miller, and Evan Brossard. The megaface benchmark: 1 million faces for recognition at scale. *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 4873–4882, 2016.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Adarsh Kumar, Arjun Balasubramanian, Shivaram Venkataraman, and Aditya Akella. Accelerating deep learning inference via freezing. *ArXiv*, abs/2002.02645, 2019.

Honglak Lee, Peter T. Pham, Yan Largman, and A. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In *NIPS*, 2009.

Ilias Leontiadis, Stefanos Laskaridis, Stylianos I Venieris, and Nicholas D Lane. It's always personal: Using early exits for efficient on-device cnn personalisation. In *Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications*, pp. 15–21, 2021.

Maksim Levental. Memory planning for deep neural networks. *ArXiv*, abs/2203.00448, 2022.

Y. Li, Zhenhua Han, Quanlu Zhang, Zhenhua Li, and Haisheng Tan. Automating cloud deployment for deep learning inference of real-time online services. *IEEE INFOCOM 2020 - IEEE Conference on Computer Communications*, pp. 1668–1677, 2020a.

Yawei Li, Shuhang Gu, Kai Zhang, Luc Van Gool, and Radu Timofte. Dhp: Differentiable meta pruning via hypernetworks. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VIII 16*, pp. 608–624. Springer, 2020b.

Renju Liu, Luis Garcia, Zaoxing Liu, Botong Ou, and Mani B. Srivastava. Secdeep: Secure and performant on-device deep learning inference framework for mobile and iot devices. *Proceedings of the International Conference on Internet-of-Things Design and Implementation*, 2021.

Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3296–3305, 2019a.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. *ArXiv*, abs/1810.05270, 2019b.

Mohammad Ali Maddah-Ali and Urs Niesen. Fundamental limits of caching. *IEEE Transactions on Information Theory*, 60(5):2856–2867, 2014. doi: 10.1109/TIT.2014.2306938.

TorchVision maintainers and contributors. Torchvision: Pytorch's computer vision library. https://github.com/pytorch/vision, 2016.
Sebastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. Proceedings of the 18th ACM international conference on Multimedia, 2010. Yoshitomo Matsubara, Marco Levorato, and Francesco Restuccia. Split computing and early exiting for deep learning applications: Survey and research challenges. *ACM Computing Surveys (CSUR)*, 2022. Microsoft. Neural Network Intelligence, 1 2022. URL https://github.com/microsoft/nni. Thaha Mohammed, Carlee Joe-Wong, Rohit Babbar, and Mario Di Francesco. Distributed inference acceleration with adaptive dnn partitioning and offloading. In IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pp. 854–863. IEEE, 2020. Maryam Mohsin. 10 google search statistics you need to know in 2022, Feb 2022. URL https://www. oberlo.ca/blog/google-search-statistics. Stylianos Moschoglou, A. Papaioannou, Christos Sagonas, Jiankang Deng, Irene Kotsia, and Stefanos Zafeiriou. Agedb: The first manually collected, in-the-wild age database. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1997–2005, 2017. Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. *2019 IEEE/CVF International Conference on Computer Vision* (ICCV), pp. 1325–1334, 2019. Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. *arXiv preprint arXiv:1906.00091*, 2019. Christopher Olston, Noah Fiedel, Kiril Gorovoy, Jeremiah Harmsen, Li Lao, Fangwei Li, Vinu Rajashekhar, Sukriti Ramesh, and Jordan Soyke. Tensorflow-serving: Flexible, high-performance ml serving. *ArXiv*, abs/1712.06139, 2017. Dan Otter, Julian Richard Medina, and Jugal Kumar Kalita. A survey of the usages of deep learning for natural language processing. *IEEE Transactions on Neural Networks and Learning Systems*, 32:604–624, 2021. Roberto G Pacheco, Kaylani Bochie, Mateus S Gilbert, Rodrigo S Couto, and Miguel Elias M Campista. Towards edge computing using early-exit convolutional neural networks. *Information*, 12(10):431, 2021. Priyadarshini Panda, Abhronil Sengupta, and Kaushik Roy. Conditional deep learning for energy-efficient and enhanced pattern recognition. In 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 475–480. IEEE, 2016. Huy Phan. huyvnphan/pytorch_cifar10, January 2021. URL https://doi.org/10.5281/zenodo.4431043. Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. ArXiv, abs/1802.05668, 2018. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In *ECCV*, 2016. Jie Ren, Jiaolin Luo, Kai Wu, Minjia Zhang, and Hyeran Jeon. Sentinel: Efficient tensor migration and allocation on heterogeneous memory systems for deep learning. *2021 IEEE International Symposium on* High-Performance Computer Architecture (HPCA), pp. 598–611, 2021. Michael Ruchte, Arber Zela, Julien Siems, Josif Grabocka, and Frank Hutter. Naslib: A modular and flexible neural architecture search library. https://github.com/automl/NASLib, 2020. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 
ImageNet Large Scale Visual Recognition Challenge. *International Journal of Computer Vision (IJCV)*, 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Simone Scardapane, Michele Scarpiniti, Enzo Baccarelli, and Aurelio Uncini. Why should we add early exits to neural networks? *Cognitive Computation*, pp. 1–13, 2020.

Connor Shorten, Taghi M. Khoshgoftaar, and Borko Furht. Deep learning applications for covid-19. *Journal of Big Data*, 8, 2021.

Carolyn J. Swinney and John O. Woods. Unmanned aerial vehicle operating mode classification using deep residual learning feature extraction. 2021.

Ivan Tashev and Seyedmahdad Mirsamadi. Dnn-based causal voice activity detector. 2017.

Surat Teerapittayanon, Bradley McDanel, and Hsiang-Tsung Kung. Branchynet: Fast inference via early exiting from deep neural networks. In *2016 23rd International Conference on Pattern Recognition (ICPR)*, pp. 2464–2469. IEEE, 2016.

Arun Varghese, George Agyeman-Badu, and Michelle Cawley. Deep learning in automated text classification: a case study using toxicological abstracts. *Environment Systems and Decisions*, pp. 1–15, 2020.

Jun Wang, Yinglu Liu, Yibo Hu, Hailin Shi, and Tao Mei. Facex-zoo: A pytorch toolbox for face recognition. *Proceedings of the 29th ACM International Conference on Multimedia*, 2021.

Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. Racial faces in the wild: Reducing racial bias by information maximization adaptation network. *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 692–702, 2019.

Weiaicunzai. Practice on cifar-100 using pytorch. https://github.com/weiaicunzai/pytorch-cifar100, 2020.

Duane Wessels. *Web caching*. O'Reilly Media, Inc., 2001.

Xiaorui Wu, Hong Xu, and Yi Wang. Irina: Accelerating dnn inference with efficient online scheduling. *4th Asia-Pacific Workshop on Networking*, 2020.

Wenhan Xia, Hongxu Yin, Xiaoliang Dai, and Niraj K Jha. Fully dynamic inference with deep neural networks. *IEEE Transactions on Emerging Topics in Computing*, 10(2):962–972, 2021.

Yecheng Xiang and Hyoseung Kim. Pipelined data-parallel cpu/gpu scheduling for multi-dnn real-time inference. *2019 IEEE Real-Time Systems Symposium (RTSS)*, pp. 392–405, 2019.

Yan Xiao, Ivan Beschastnikh, David S. Rosenblum, Changsheng Sun, Sebastian G. Elbaum, Yun Lin, and Jin Song Dong. Self-checking deep neural networks in deployment. *2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)*, pp. 372–384, 2021.

Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, Chenchen Liu, and Xiang Chen. A survey of large-scale deep learning serving system optimization: Challenges and opportunities. *ArXiv*, abs/2111.14247, 2021.

Linfeng Zhang, Chenglong Bao, and Kaisheng Ma. Self-distillation: Towards efficient and compact neural networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PP:1–1, 03 2021. doi: 10.1109/TPAMI.2021.3067100.

Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, and Yanzhi Wang. A systematic dnn weight pruning framework using alternating direction method of multipliers. In *ECCV*, 2018.

Zhuoran Zhao, Kamyar Mirzazad Barijough, and Andreas Gerstlauer. Deepthings: Distributed adaptive deep learning inference on resource-constrained iot edge clusters. *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, 37:2348–2359, 2018.

Tianyue Zheng, Weihong Deng, and Jiani Hu. Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments.
*ArXiv*, abs/1708.08197, 2017.

Yi Zheng, Qi Liu, Enhong Chen, Yong Ge, and J. Leon Zhao. Time series classification using multi-channels deep convolutional neural networks. In *WAIM*, 2014.

Keqian Zhu and Jingfei Jiang. Research on parallel acceleration for deep learning inference based on many-core arm platform. In *ACA*, 2018.

Lucas Zimmer, Marius Thomas Lindauer, and Frank Hutter. Auto-pytorch: Multi-fidelity metalearning for efficient and robust autodl. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43:3079–3090, 2021.

## Appendix A. Implementation

## .1 Implementation Details

We developed the early exiting tool using PyTorch; it is accessible through the GitHub repository². The software environment for our experiments included Ubuntu 20.04, Python 3.7, and PyTorch 1.1. Figure 5 shows the overall system design. The tool provides a NAS module, an optimizer module, and a deployment module. The NAS module provides the architectures to be used per early exit model. The optimizer assigns the confidence thresholds, finds the best subset of the early exit models, and provides evaluation reports. Lastly, the deployment module launches a web server with the early exit-enabled model ready to serve queries.

²https://anonymous.4open.science/r/AutoCacheLayer-CBB4/

![35_image_0.png](35_image_0.png)

Figure 5: Overall system design.

## .1.1 Nas Module

Existing NAS tools typically define different search spaces according to different tasks, which constrains their applicability to certain input types and sizes. Using such tools with input constraints defeats our method's generalization and automation purpose, since the early exit models' inputs can have any dimension and size. For example, ProxylessNAS (Cai et al., 2019) specializes in optimizing the performance of a neural architecture for a target hardware. However, it is only applicable to image classification tasks and requires certain input specifications (e.g., 3×H×W images normalized using given values). Similarly, Auto-PyTorch (Zimmer et al., 2021) and Auto-Keras (Jin et al., 2019) are only applicable to tabular datasets, text, and images. We chose NNI from Microsoft (Microsoft, 2022) as it does not constrain the input of the model in terms of type, size, and dimensions. NNI also provides an extensible search space definition with support for a variable number of layers and nested choices (e.g., choosing among different layer types, each with different layer-specific parameters). Given the backbone implementation, the dataset, and the search space, the module launches an NNI experiment per candidate layer to find an optimum early exit model for the layer. Each experiment provides a web GUI for progress reports and results. We aim for end-to-end automation in the tool. However, currently, the user still needs to manually export the architecture specifications when using the NAS module and convert them to a proper Python implementation (i.e., a PyTorch module implementing the architecture). The specifications are available to the user through the experiments' web GUI and also in the trial output files. This shortcoming is due to the NNI implementation, which does not currently provide access to the model objects within the experiments. We have created an enhancement suggestion on the NNI repository to support model object access (issue #4910).
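For illustration, a single candidate from such a search space might look like the plain-PyTorch head below; the depth and width are the kind of hyperparameters the NNI experiments choose per candidate layer, and the two-dense-layer default matches the setup used in Appendix D. This is a sketch under those assumptions, not the exact architecture exported by the tool.

```python
import torch.nn as nn

class EarlyExitHead(nn.Module):
    """One candidate early exit model: pooled activations followed by a
    small stack of dense layers (two by default, matching our setup).
    `hidden` and `n_dense` are the kind of choices NNI searches over."""

    def __init__(self, in_channels, num_classes, hidden=256, n_dense=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial dimensions
        layers, width = [], in_channels
        for _ in range(n_dense - 1):
            layers += [nn.Linear(width, hidden), nn.ReLU(inplace=True)]
            width = hidden
        layers.append(nn.Linear(width, num_classes))
        self.classifier = nn.Sequential(*layers)

    def forward(self, x):
        return self.classifier(self.pool(x).flatten(1))
```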
## .1.2 Optimizer And Deployment Modules

Given the implementation of the backbone and the early exit models, the optimizer evaluates the early exit models, assigns their confidence thresholds, finds the best subset of the early exit models, disables the rest, and finally reports the relevant performance metrics for the early exit-enabled model and each early exit model. We used DeepSpeed by Microsoft and the PyTorch profiler to profile the FLOP counts, memory usage, and latency values for the early exit models and the backbones. The user can use each module independently. Specifically, the user can skip the architecture search via the NAS module and provide the architectures manually to the optimizer, and the module trains them before proceeding to the evaluation. NAS with a random search strategy can take approximately 1 hour for simpler models like ResNet18 and up to 3 hours for more complex models like ResNet50. The tool also offers an extensive set of configurations. More specifically, the user can configure the tool to use one device (e.g., GPU) for training processes and the other (e.g., CPU) for evaluation and deployment. The deployment module launches a web server and exposes a WebSocket API to the early exit-enabled model. The query batches passed to the socket will receive one response per item, as soon as the prediction is available through any of the (early or final) exits.

## .1.3 Backbone Implementation

We used the backbone implementations and weights provided by the FaceX-Zoo (Wang et al., 2021) repository to conduct the experiments with the LFW dataset on the MobileFaceNet and EfficientNet models. For experimenting with CIFAR10 and CIFAR100, we used the implementations provided by torchvision (Marcel & Rodriguez, 2010) and the weights provided by (Phan, 2021) and (Weiaicunzai, 2020). For the ImageNet pre-trained ResNet weights, we used the newer torchvision version (maintainers & contributors, 2016). All backbone implementations were modified to implement an interface that handles the interactions with the early exit models, controls the exits (early exit hits and misses), and provides the relevant reports and metrics. We document the use of the interface in the repository, so that users can experiment with new backbones and datasets. We refer interested readers to a blog post on how to extract intermediate activations in PyTorch (Bhaskhar, 2020), which introduces three methods to access the activation values. The introduced forward hooks method in PyTorch is very convenient for read-only purposes. However, our method requires performing actions based on the activation values, specifically, early exit lookup and batch shrinking, while avoiding further computation through the next layers. Therefore, we used the so-called "hacker" method to access the activation values and perform these actions, and provided the interface for easy replication on different backbones.

## .2 Environment Setup

The hardware used for inference substantially affects the results due to hardware-specific optimizations such as computation parallelism. In our experiments, we used an "Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz" to measure on-CPU inference times and an "NVIDIA GeForce RTX 3070" GPU to measure on-GPU inference time.
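Since GPU kernels execute asynchronously, naive wall-clock timing can be misleading; measurements of the kind reported in Section 4.6.5 need explicit synchronization. The sketch below is a minimal illustration of such a measurement loop, not the tool's actual profiling code; `average_latency_ms` is an illustrative name.

```python
import time
import torch

def average_latency_ms(model, batch, device, warmup=10, runs=100):
    """Average per-batch inference latency in milliseconds."""
    model = model.to(device).eval()
    batch = batch.to(device)
    with torch.no_grad():
        for _ in range(warmup):          # warm up caches and kernels
            model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()     # flush pending async kernels
        start = time.perf_counter()
        for _ in range(runs):
            model(batch)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3
```

For example, `average_latency_ms(model, x, torch.device("cuda"))` would report the GPU numbers of Table 5 under this scheme.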
## Appendix B. Final Schema Of Early Exit Models

This appendix shows key figures that provide a deeper visual understanding of the methodologies and results highlighted in the main text. Figure 6 illustrates the final schema of the early exit models used in our experiments with CIFAR10-Resnet18, CIFAR100-Resnet50, LFW-EfficientNet, and LFW-MobileFaceNet. Figure 7 presents the final schema of the early exit models specifically designed for our CityScapes experiments using the Mask R-CNN framework. Figure 8 shows the final schema of the early exit models designed for our recommendation experiments with the Criteo dataset using DLRM.

![37_image_0.png](37_image_0.png)

Figure 6: Final schema of the early exit models, for the experiments CIFAR10-Resnet18, CIFAR100-Resnet50, LFW-EfficientNet, and LFW-MobileFaceNet

![37_image_1.png](37_image_1.png)

Figure 7: Final schema of the early exit models, for the experiments CityScapes - Mask R-CNN

![38_image_0.png](38_image_0.png)

Figure 8: Final schema of the early exit models, for the experiments Criteo - DLRM

## Appendix C. Early Exit Models' Individual Performance For All Experiments

The following figures demonstrate the hit rate, GT accuracy, and early exit accuracy of each early exit model vs. the confidence threshold, per experiment dataset and backbone.

![39_image_0.png](39_image_0.png)

Figure 9: Experiment: CIFAR10-Resnet18

![40_image_0.png](40_image_0.png)

Figure 10: Experiment: CIFAR10-Resnet50

![41_image_0.png](41_image_0.png)

Figure 11: Experiment: CIFAR100-Resnet18

![42_image_0.png](42_image_0.png)

Figure 12: Experiment: ImageNet-Resnet18

![43_image_0.png](43_image_0.png)

Figure 13: Experiment: ImageNet-Resnet50

![44_image_0.png](44_image_0.png)

Figure 14: Experiment: LFW-MobileFaceNet

![45_image_0.png](45_image_0.png)

Figure 15: Experiment: LFW-EfficientNet

![46_image_0.png](46_image_0.png)

Figure 16: Experiment: Mask RCNN-CityScape

![47_image_0.png](47_image_0.png)

Figure 17: Experiment: DLRM-Criteo

## Appendix D. Ablation Study

To investigate the need for using NAS in our work, we ran an ablation study. The alternative to using NAS is to insert early exit models at every possible node, which can be memory-intensive and may even degrade the final performance and accuracy. Additionally, the number of early exit layers may also impact the results. Our default setup uses 2 dense layers per early exit model. This part of the ablation study assesses the impact of increasing this number to 3, 4, and 5 dense layers. The following tables compare latency (GPU and CPU), accuracy, model size, and FLOPs across these modes.
Table 8: Different early exit models' accuracy. MFN: MobileFaceNet, EFN: EfficientNet, R18: Resnet18, R50: Resnet50, C10: CIFAR10, C100: CIFAR100, IN: ImageNet, CS: CityScapes, MRCNN: Mask R-CNN

| Model Type    | R18-C10 | R50-C10 | R18-C100 | R50-C100 | R18-IN | R50-IN | MFN-LFW | EFN-LFW | MRCNN-CS | DLRM-Criteo |
|---------------|---------|---------|----------|----------|--------|--------|---------|---------|----------|-------------|
| With NAS      | 86.49%  | 85.88%  | 74.47%   | 77.04%   | 68.12% | 74.09% | 96.91%  | 95.35%  | 83.4%    | 73.54%      |
| Without NAS   | 86.79%  | 85.16%  | 74.32%   | 77.19%   | 67.31% | 72.91% | 96.19%  | 95.71%  | 83.14%   | 73.96%      |
| 1 more layer  | 87.15%  | 86.12%  | 74.29%   | 77.72%   | 67.56% | 73.19% | 96.07%  | 95.92%  | 83.89%   | 73.91%      |
| 2 more layers | 86.20%  | 86.09%  | 74.82%   | 77.02%   | 67.61% | 73.21% | 95.42%  | 95.14%  | 83.01%   | 73.62%      |
| 3 more layers | 86.17%  | 86.01%  | 74.78%   | 77.03%   | 67.60% | 73.31% | 95.43%  | 95.14%  | 82.99%   | 73.48%      |

Table 9: Different early exit models' latency on GPU. Abbreviations as in Table 8.

| Model Type    | R18-C10 | R50-C10 | R18-C100 | R50-C100 | R18-IN  | R50-IN  | MFN-LFW | EFN-LFW  | MRCNN-CS  | DLRM-Criteo |
|---------------|---------|---------|----------|----------|---------|---------|---------|----------|-----------|-------------|
| With NAS      | 0.98 ms | 1.51 ms | 1.25 ms  | 1.84 ms  | 2.79 ms | 2.42 ms | 7.30 ms | 14.38 ms | 108.7 ms  | 1.38 ms     |
| Without NAS   | 1.29 ms | 1.75 ms | 1.47 ms  | 2.27 ms  | 3.91 ms | 3.64 ms | 9.26 ms | 17.87 ms | 153.1 ms  | 1.89 ms     |
| 1 more layer  | 1.11 ms | 1.63 ms | 1.32 ms  | 2.19 ms  | 3.27 ms | 3.33 ms | 7.91 ms | 16.10 ms | 123.1 ms  | 1.52 ms     |
| 2 more layers | 1.21 ms | 1.71 ms | 1.39 ms  | 2.26 ms  | 3.34 ms | 3.37 ms | 7.97 ms | 16.70 ms | 131.27 ms | 1.69 ms     |
| 3 more layers | 1.27 ms | 1.81 ms | 1.48 ms  | 2.37 ms  | 3.61 ms | 3.47 ms | 8.83 ms | 17.51 ms | 138.27 ms | 1.74 ms     |

Table 10: Different early exit models' latency on CPU. Abbreviations as in Table 8.

| Model Type    | R18-C10  | R50-C10  | R18-C100 | R50-C100 | R18-IN   | R50-IN   | MFN-LFW  | EFN-LFW  | MRCNN-CS  | DLRM-Criteo |
|---------------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|-------------|
| With NAS      | 10.11 ms | 14.62 ms | 9.39 ms  | 9.02 ms  | 30.23 ms | 38.74 ms | 16.91 ms | 27.98 ms | 562.3 ms  | 7.67 ms     |
| Without NAS   | 13.53 ms | 19.17 ms | 13.19 ms | 13.01 ms | 41.29 ms | 51.04 ms | 22.53 ms | 38.77 ms | 724.42 ms | 11.89 ms    |
| 1 more layer  | 11.87 ms | 18.12 ms | 12.19 ms | 12.12 ms | 33.31 ms | 45.41 ms | 21.25 ms | 34.68 ms | 689.11 ms | 10.63 ms    |
| 2 more layers | 12.84 ms | 19.32 ms | 14.01 ms | 13.89 ms | 34.91 ms | 47.02 ms | 25.94 ms | 35.78 ms | 713.71 ms | 11.93 ms    |
| 3 more layers | 13.17 ms | 22.12 ms | 16.71 ms | 15.89 ms | 36.06 ms | 48.68 ms | 26.41 ms | 37.52 ms | 743.71 ms | 13.31 ms    |

Table 11: Different early exit models' FLOPs. Abbreviations as in Table 8.

| Model Type    | R18-C10 | R50-C10 | R18-C100 | R50-C100 | R18-IN | R50-IN | MFN-LFW | EFN-LFW | MRCNN-CS | DLRM-Criteo |
|---------------|---------|---------|----------|----------|--------|--------|---------|---------|----------|-------------|
| With NAS      | 414M    | 601M    | 374M     | 547M     | 1673M  | 2020M  | 296M    | 182M    | 2730M    | 99M         |
| Without NAS   | 671M    | 942M    | 545M     | 719M     | 1941M  | 2430M  | 371M    | 282M    | 3211M    | 128M        |
| 1 more layer  | 614M    | 921M    | 534M     | 643M     | 1897M  | 2321M  | 351M    | 261M    | 3139M    | 121M        |
| 2 more layers | 701M    | 1011M   | 709M     | 773M     | 1970M  | 2410M  | 404M    | 290M    | 3310M    | 138M        |
| 3 more layers | 791M    | 1125M   | 771M     | 823M     | 2018M  | 2512M  | 514M    | 368M    | 3589M    | 153M        |
Regarding the use of NAS, the results indicate that inserting early exit models at every possible node (Without NAS) increases accuracy by less than 2% on average, while latency, FLOPs, and model size increase by an average of 16%. When using additional dense layers, the results showed an increase in accuracy of less than 1%, but approximately 24% worse latency and model size and about a 30% increase in FLOPs usage on average.

Table 12: Different early exit models' size (MB). Abbreviations as in Table 8.

| Model Type    | R18-C10 | R50-C10 | R18-C100 | R50-C100 | R18-IN | R50-IN | MFN-LFW | EFN-LFW | MRCNN-CS | DLRM-Criteo |
|---------------|---------|---------|----------|----------|--------|--------|---------|---------|----------|-------------|
| With NAS      | 97MB    | 243MB   | 383MB    | 552MB    | 403MB  | 572MB  | 350MB   | 297MB   | 4171MB   | 332MB       |
| Without NAS   | 137MB   | 324MB   | 442MB    | 822MB    | 731MB  | 801MB  | 514MB   | 411MB   | 5231MB   | 451MB       |
| 1 more layer  | 113MB   | 273MB   | 401MB    | 622MB    | 591MB  | 609MB  | 372MB   | 320MB   | 4319MB   | 362MB       |
| 2 more layers | 130MB   | 301MB   | 420MB    | 691MB    | 682MB  | 725MB  | 395MB   | 351MB   | 4480MB   | 390MB       |
| 3 more layers | 159MB   | 340MB   | 451MB    | 751MB    | 705MB  | 789MB  | 421MB   | 393MB   | 4680MB   | 427MB       |

The ablation experiments indicate that using additional dense layers, or ignoring NAS and inserting and training all possible early exit layers, may result in slightly better accuracy. However, this is accompanied by increased latency and, in some cases, performance worse than the original model without early exits. This is because having, for example, 10 early exit layers and processing all batches through them significantly increases the time required and also enlarges the model size. In addition, memory consumption is negatively affected.
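The FLOPs and memory figures in these comparisons were collected with DeepSpeed and the PyTorch profiler (Section .1.2). As a minimal sketch of the PyTorch-profiler side, assuming a vision model and a dummy input of the right shape (the function name is illustrative):

```python
import torch
from torch.profiler import profile, ProfilerActivity

def profile_forward(model, input_shape=(1, 3, 224, 224)):
    """Profile one forward pass for operator time, memory, and FLOPs."""
    x = torch.randn(*input_shape)
    model.eval()
    with torch.no_grad():
        with profile(activities=[ProfilerActivity.CPU],
                     profile_memory=True, with_flops=True) as prof:
            model(x)
    # per-operator table; FLOPs appear as a column when with_flops=True
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```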
Review 1: Summary: This paper proposes an end-to-end automated early exiting solution to improve the performance of DNN-based services in terms of computational complexity and inference latency. The paper's methodology involves: (a) identify the candidate layers to be early exited, (b) build an early exit model for each candidate, (c) assign confidence thresholds to the built models to determine early exit hits, (d) evaluate and optimize the early exit-enabled model, (e) implement the early exit optimization, and (f) update and maintain the early exit models.

Strengths and Weaknesses: Strengths: 1. The draft is well written, with proper inclusion of past related work so that new readers can easily follow the related concepts. 2. The idea is developed in an unsupervised setting, so early exit models do not need access to training data and perform solely based on the incoming data at run-time. 3. During the training of an early exit model, the proposed work freezes the model backbone to ensure that the training process does not modify any parameter not belonging to the current early exit model, which encourages adoption. 4. The authors extend their proposed work to evaluate a critical real-world scenario - pedestrian detection in urban environments.

Weaknesses: While the paper is an interesting read, I have several concerns with the algorithm/method developed, given its complexity. 1. In the candidate layer selection step (Section 3.1), the authors mention several static rules to select candidate layers (e.g., last layers are not useful, layers with parallel connections are not useful). I find these static rules difficult to generalize across the numerous existing backbone architectures. How should one decide how many last layers to discard? Is the method generalizable to recent popular complex architectures like MoEs, where there are multiple parallel experts? 2. Have the authors thought about using these early exit approaches for modern LLMs, where early exits would add a lot of value given the gigantic memory cost? Recent works like SkipDecode (https://arxiv.org/abs/2307.02628), ShortGPT (https://arxiv.org/pdf/2403.03853), and FFN-SkipLLM (https://arxiv.org/abs/2404.03865) explore layer-skipping or early-exit strategies. What are the authors' thoughts about these techniques, given their simplicity in identifying comparatively less useful layers and discarding them in limited-compute settings? 3. In Section 3.2.1, the authors briefly discuss the complexity of the task (CIFAR10 vs. CIFAR100) in deciding the early exit model capacity. I see further potential here to explore a more task-difficulty-specific evaluation, to see how task difficulty relates to the speedup achieved for an easy task vs. a difficult task. 4. A comparison with a dense baseline of similar FLOP/compute count is required to validate the performance difference between the early exit model and a model with similar compute cost. 5. The well-advertised benefit of the work is speedup; however, a clean speedup/latency evaluation setting is missing. How is the speedup measured, specifically across CPUs and GPUs? What is the hardware/software stack? A fine-grained evaluation of compute/latency/memory for the backbone and the early exit modules is a must. The authors need to also provide an idea of the additional compute involved in training these early exit models, NAS, and thresholding. 6. I am also curious about the thresholding and the expense of updating and maintaining the early exit models. An ablation of the thresholding is required to show the impact it can have.
Requested Changes: The critical requirements include a thorough investigation of training and inference time compute benefits, and details of the hardware/software stack are required. The latency increase with increasing batch size (Table 6) is also a negative point, given that modern GPUs are fast and efficient at large-batch processing. I also highly encourage the authors to explain their key method with a good diagram to make it easy to understand; at this point, it requires a lot of effort to understand the nuances. Also, additional comments related to adaptation for modern architectures with stacks of transformer layers will be helpful, given that the candidate layer selection is heuristic-based. Broader Impact Concerns: None. ================================================== Review 2: Summary: This work presents an approach to accelerate inference for deep neural networks by introducing early exit paths that are trained using test data without requiring labels. The training method works through self-distillation, where the output of each newly added early exit layer mimics the model's output probability distribution (soft labels). The authors also use Neural Architecture Search (NAS) to identify candidate layers for adding early exits, making the approach adaptable to different architectures such as MobileFaceNet, EfficientNet, ResNet18, and ResNet50. Overall, this work extends early exiting approaches to be used in an unsupervised fashion, which brings practical value to applications with limited labeled data. Strengths and Weaknesses: **Strengths:** 1. The proposed method works in an unsupervised manner without requiring labeled data, making it useful for applications with limited labeled data. The authors also use Neural Architecture Search (NAS) to dynamically add early exits across different architectures, demonstrating great flexibility and adaptability. 2. The related works section is comprehensive and clearly differentiates this work's contribution from the existing literature. 3. The proposed approach shows significant speed-ups in inference compared to other methods at the cost of a slight drop in accuracy. This can be useful in applications where the inference cost is prioritized over accuracy. **Weaknesses:** 1. The accuracy metrics are not clearly defined and show inconsistent results across the different experiments. (Refer to Requested Changes A.1 for details) 2. The unsupervised problem setup and evaluation are not clearly defined. (Refer to Requested Changes A.2 for details) 3. The comparison with other methods is not performed in a standardized way. (Refer to Requested Changes A.3 for details) **Disclaimer:** I skimmed the math related to Equation 3 and am not able to evaluate its theoretical correctness. Requested Changes: **A. Recommendation for acceptance (critical comments):** A.1. The accuracy metrics are not clearly defined and show inconsistent results across the different experiments. For example, the final accuracy of the Base (non-early exit enabled) on CIFAR100-ResNet50 is 78.98% in Table 2, which I would expect to be an approximate upper bound on accuracy based on Table 3 from (Kaya et al., 2019). However, the early exits 1-4 in Figure 2 achieve 100% GT Accuracy at 100% confidence, while the early-exit enabled model achieves roughly 86% accuracy at 95% confidence in Figure 4. In both experiments, I expected these accuracies to be closer and roughly upper bounded by the Base model (78.98%).
**Recommendation:** Similar to the tables of (Kaya et al., 2019), I recommend adding the accuracy of the original model (non-exit enabled) across tables and figures for reference. A.2. The unsupervised problem setup and evaluation are not clearly defined. - Given that the training of the early exits is done on the test set, does this mean the model is not deployed yet? Is this training phase on the test set considered a warm-up? Or are predictions being provided to the target application while also training early exits? - How were other methods trained given that, to my knowledge, they require access to the training data? Were they also trained in an unsupervised way? - Were other methods evaluated on the entire test set or just the second half of the test data, similar to your method? **Recommendation:** To address these comments, I recommend adding a formulation of the problem setup in the methodology section since the proposed unsupervised training is part of your contribution and may not be clear to the reader. I also recommend discussing how other methods, which may originally train on labeled data, are adapted to this framework. As a suggestion, this could be under a "Baselines" subsection in Section 4. A.3. The comparison with other methods is not performed in a standardized way. - At which confidence % were the results reported in section 4.5.6? If these were reported at fixed confidence %, this may make the evaluation subjective and could be dataset-specific. While it is great to show a trend of trade-offs between performance and latency, a more standardized approach for benchmarking would be preferable. **Recommendation:** Since this work is based on (Kaya et al., 2019) and to be consistent with previous works, I recommend showing the comparison results at defined points of inference costs similar to Table 3 of (Kaya et al., 2019). **B. Comments to strengthen the work (minor comments):** B.1. Section 3.5 seems to be incomplete or out of context. At the beginning of the section, the discussion appears to be about the implementation of two tasks: image classification and object detection. However, the rest of the section only covers image classification for pedestrian detection. Then, at the end of the section, a recommendation system task is introduced abruptly without sufficient context. I recommend revising Section 3.5 to clarify its objective. B.2. Writing clarity: I highly recommend revising the document thoroughly in terms of language and grammar. For example: - There are several mentions of the term “a early exit”, which should be “an early exit”. - The phrase “acceleration inference” in Section 2.1 should be “accelerating inference”. B.3. Terminology consistency: There are a few mentions of “mean precision” and “final precision”. It seems precision and accuracy are used interchangeably in the document, even though they are different metrics. Do these references actually indicate the precision metric? If so, I suggest defining them in the metrics. Broader Impact Concerns: Although this is obvious, it may be worth mentioning that deploying this approach in safety-critical applications (e.g., autonomous cars) to improve inference speed may result in accuracy degradation, which could lead to potential accidents. ================================================== Review 3: Summary: In this paper, the authors propose to use early exits in deep neural networks to return a prediction quickly when these early exits are already able to make an accurate prediction. 
In addition, the authors propose to update these early exits at runtime, mimicking a caching mechanism. Strengths and Weaknesses: Strengths: - This paper addresses an interesting topic. Adaptive inference in neural networks is gaining popularity and the idea of using test-time information to improve the efficiency of future predictions is promising. - While most similar works only report results for image classification, this paper also includes object detection and recommendation systems. Weaknesses: - I believe the choice of the word "cache" in the title and throughout the paper is misleading. After reading the abstract, I expected that this paper would introduce some sort of key-value storage in the model that would allow the model to return a prediction early if the input is similar to a previously processed input, stored in the cache. Instead, the authors propose to add trained early exit classifiers. I suppose you could interpret these classifiers as implementing some sort of caching mechanism since they are updated at test-time, but I believe that the use of the word "cache" is distracting. Instead, I would encourage the authors to stick to more conventional terminology (e.g., early exit, adaptive computation, ...). I believe this would make it easier for potential readers to find this paper. - It is not clear to me how the confidence calibration works. I understand the need for calibration, but how is this performed without ground truth labels? I suppose you could use the final model's outputs as ground truth, but then you should probably also take this model's confidence into account? - The authors put a lot of emphasis on the fact that it is not required to have access to the training data to train the early exits. Does that mean that they are initialized from scratch? Why not pretrain them on the training data? - It is not clear to me how the updating procedure works. Section 3.6 gives two obvious approaches (update every X samples, update the early exits when the model is updated). For a paper that claims "online" in the title, I would have expected a more in-depth discussion of the actual online updating procedure. As far as I can see, there is no online aspect in the provided experiments. I would expect at least a graph that shows the accuracy/runtime of the model as a function of the observed test samples. This way you can see that the model becomes more efficient as more samples have been observed. - In Section 4.2 the authors explain the test procedure. This is not clear to me. For example, for the CIFAR10 dataset, the authors state that "we only use the test splits" and "we do not use the labels in these test sets in the training and optimization process". Does that mean that the early exits are trained using test data? This seems like an obvious opportunity for overfitting. In addition, it is stated that the test data is augmented with flips and rotations, which makes it hard to compare to other approaches that just use the conventional test data. Requested Changes: - I feel that the paper should include results on the ImageNet dataset as well. I understand that it is annoying when reviewers ask for additional results on other datasets, especially since the paper already includes experiments on large datasets such as CityScape, but the ImageNet dataset is still the most commonly used image classification benchmark, and I believe that the large number of classes and the higher resolution make it a more interesting benchmark than CIFAR10/CIFAR100.
In addition, it is likely that follow-up work will be evaluated on ImageNet, and this would allow those authors to compare to this work. - The writing could be improved. There are still quite a few typos, and numbers in the text have a space after the decimal point. - Overall, the paper is very verbose, which hides a lot of the interesting aspects. I would encourage the authors to focus on what is really new (the online adaptation). Broader Impact Concerns: - ==================================================
# Boosting Visual-Language Models By Exploiting Hard Pairs

Anonymous authors Paper under double-blind review

## Abstract

Contrastive Language-Image Pre-training (CLIP) has emerged as the industry standard for aligning images with their corresponding textual descriptions. However, to enhance zero-shot recognition, current methods often demand additional data collection and retraining with newly introduced loss functions. Such additional demands for data collection and retraining impose substantial constraints on their practical deployment. In this work, we present Helip, a low-cost strategy tailored to enhance the performance of pre-trained CLIP models. This is achieved by further training them with challenging text-image pairs selected from their training dataset. Our proposed Hard Pair Mining (HPM) method treats a text-image pair as a single point in the joint Vision-Language space and identifies those in close proximity to a given pair as its hard pairs. By incorporating these challenging data, we refine pretrained CLIP models using both the traditional contrastive alignment loss and the newly introduced Hard Negative Margin Loss (HNML). This approach ensures the optimal harnessing of insights from challenging data. Notably, Helip is designed to be seamlessly integrated with existing models, providing an enhancement without the need for training a model from scratch or collecting additional data. On a comprehensive zero-shot and retrieval benchmark, Helip consistently boosts existing models to achieve leading performance. In particular, for ImageNet zero-shot accuracy, Helip boosts CC3M and CC12M pretrained SLIP by 3.05 and 4.47, respectively. In addition, systematic evaluations of zero-shot and linear probing experiments across fine-grained classification datasets demonstrate a consistent performance improvement and validate the efficacy of Helip. Specifically, Helip boosts the zero-shot performance of pretrained CLIP and SLIP by an average of 8.4% and 18.6%, respectively, and improves their linear probe performance by an average of 9.5% and 3.0%.

## 1 Introduction

Contrastive Language-Image Pretraining (CLIP) (Radford et al., 2021) is quickly becoming the standard for foundation models in computer vision due to its effectiveness on a variety of vision-language tasks without task-specific finetuning (i.e., zero-shot), such as image classification (Li et al., 2021) and image-text retrieval (Baldrati et al., 2022). Nevertheless, the web-crawled image-text pairs used for training CLIP models are often loosely connected, leading to multiple plausible matches beyond the assigned ones (Wu et al., 2022). Hence, a number of strategies (Li et al., 2022a; 2021; Mu et al., 2022; Radenovic et al., 2023) have been presented to investigate appropriate matches and take advantage of the widespread supervision among the image-text pairs to improve language-image pretraining. Efforts to improve foundational contrastive language-image pretraining have largely followed two distinct approaches: 1) the incorporation of a multitask objective to bolster the efficiency of single-modality supervision (Li et al., 2022a; Mu et al., 2022); and 2) the use of intra/inter-modality similarities to identify and retrain with sample-level challenging data (Li et al., 2021; Radenovic et al., 2023). Notably, techniques that depend on intra/inter-modality similarities for intensive negative data mining within a batch frequently fail to pinpoint data beneficial for contrastive learning.
This is not solely due to batch size constraints, but also because the contrastive loss in CLIP is defined over pairs; thus, sample-level hard data may not always be effective. This issue is further compounded when imprecise matching of image-text caption pairs leads to inaccurate hard pairs. Given these issues, a natural question arises: Can the performance of pre-trained CLIP models be improved more efficiently and broadly using hard data from the original pretraining dataset? In response to this question, we introduce the training framework Helip as a tool to enhance CLIP models. This framework boosts CLIP models by effectively using hard pairs derived from the original training sets. Contrary to traditional methods that select hard samples based on intra/inter-modality similarity calculated within a batch, Helip first defines hard pairs as nearby pairs within a joint Vision-Language space. Following this, the challenge of mining nearby pairs is transformed into a feasible proxy task focused on maximizing the text-image agreement of the target pair. In this proxy task, the target pair is treated as a matching pair, with the optimization objective being to curate a pair set that enables the proxy model to assign it a higher probability of matching. To optimize this agreement, the **Hard Pair Mining (HPM)** component of Helip is designed to explicitly represent the target text-image pair using the remaining dataset. It then selects the subset that most effectively identifies the target as a matching pair, designating this as its set of hard pairs, as visualized in Figure 1. Additionally, rather than further training CLIP models solely with the original text-image contrastive loss (Radford et al., 2021), which uniformly pushes all negative samples away from their positive counterpart, Helip integrates the Hard Negative Margin Loss (HNML) into the loss function. As illustrated in Figure 2, the intrinsic similarity between pairs should be reflected in the learned representations. Therefore, during training, Helip imposes additional geometric structure on the learned representation space by using HNML as a regularizer. This allows hard negatives to situate themselves closer to the positive pair than ordinary negatives, so the valuable information embodied within hard data can be harnessed more effectively. Empirical tests underscore the efficacy of Helip. When fine-tuning established CLIP models (such as CLIP, SLIP, and DECLIP) with hard examples and the hard negative margin loss, Helip consistently enhances CLIP checkpoints on zero-shot classification, text-image retrieval, and fine-grained linear probe benchmarks. For zero-shot classification on ImageNet, CIFAR-10, and CIFAR-100, Helip consistently boosts the performance of 6 pretrained models. In particular, using Helip to boost SLIP models pretrained on CC3M, CC12M, and YFCC15M results in ImageNet zero-shot accuracy gains of 3.05, 4.47, and 10.14, respectively. Further, after finetuning with hard pairs and the hard negative margin loss, those pretrained models achieve better zero-shot and linear probe performance on 7 fine-grained image classification datasets. Specifically, the average zero-shot accuracy of CC3M pretrained CLIP and SLIP is improved from 14.45 to 15.67 (+8.4%) and from 16.96 to 20.12 (+18.6%), respectively.
The average linear probe accuracy of CC3M pretrained CLIP and SLIP is improved from 53.29 to 58.34 (+9.5%) and from 64.89 to 66.81 (+3.0%), respectively. Additionally, the performance gain also holds for zero-shot retrieval, with gains of 1.1 R@1 on Flickr30K and 2.2 R@1 on COCO for SLIP-Helip. Our contributions can be summarized as follows:

- To the best of our knowledge, Helip stands out as the first method aimed at enhancing already well-trained CLIP models with a low-cost strategy. It accomplishes this by further training them using their original datasets and is distinctively designed for seamless integration with various CLIP methodologies.

- We propose a technique for selecting hard pairs, specifically targeting the identification of challenging negative pairs. Complementing this, we introduce the hard negative margin loss, an approach that accounts for representation distances, ensuring the successful incorporation of hard pairs during finetuning.

- Through empirical analysis across zero-shot classification, image-text retrieval, and linear probe benchmarks, we demonstrate that Helip is able to consistently improve CLIP model checkpoints.

## 2 Related Work

Vision-Language pre-training. Vision-Language Pretraining (VLP) is a technique that leverages large-scale image-text datasets to learn a strong joint representation between the two modalities that can be transferred to various downstream vision-language tasks. VLP models can generally be divided into single-stream and dual-stream models. Dual-stream models (Jia et al., 2021; Li et al., 2022b; Mu et al., 2022; Radford et al., 2021; Yao et al., 2022), which typically consist of two separate encoders for image and text and perform cross-modality interactions on top, are becoming increasingly popular because of their flexibility in transferring pre-trained knowledge to downstream tasks. CLIP (Radford et al., 2021) uses a simple contrastive objective to learn visual features from natural language supervision and achieves remarkable zero-shot recognition performance using 400M web-crawled image-text pairs. Recent works boost the performance of CLIP by applying self-supervision within the visual modality (Mu et al., 2022) or additional nearest-neighbor supervision (Li et al., 2022b). These methods effectively perform data augmentation to increase data efficiency and thus bring additional computational costs.

Contrastive learning with hard negative samples. Contrastive learning learns a representation of input data that maps semantically comparable examples close together and semantically dissimilar examples far apart (Chen et al., 2020a;b; Wang & Isola, 2020). Recent works include hard negative samples in the loss function and achieve better empirical performance (Cai et al., 2020; Huynh et al., 2022; Kalantidis et al., 2020; Li et al., 2021; Radenovic et al., 2023; Robinson et al., 2021; Shah et al., 2022). For language-image contrastive learning, current approaches (Li et al., 2021; Radenovic et al., 2023) mine multimodal hard negative examples using intra/inter-modality similarity. Li et al. (2021) choose in-batch hard negative samples with the image-text contrastive loss. The hard-negative noise contrastive multimodal alignment loss of Radenovic et al. (2023) up-weights the loss term for in-batch hard samples.
For previous intra/inter-modality hard sample mining methods, two text-image pairs are considered hard samples if the cosine similarity between their visual/textual features is high (Li et al., 2021; Radenovic et al., 2023). However, due to the loose assignment inherent in web-crawled image-caption data, high intra- or inter-modality similarity does not imply that the two pairs are difficult to tell apart. Contrary to prior works, we design a hard sample mining method that discovers similar pairs defined in the joint vision-language space and efficiently selects samples challenging enough to improve learning.

## 3 Hard Pairs For Visual-Language Models

In this section, we first define the notation and revisit CLIP for zero-shot recognition in the preliminary section. Next, we introduce the Hard Pair Mining method, denoted as HPM, along with the associated Hard Negative Margin Loss, **HNML**, specifically designed to leverage hard pairs identified by HPM.

## 3.1 Preliminaries

We consider the task of contrastive image-text pretraining. Given an image-caption dataset $\mathcal{D} = \{z_i\}_{i=1}^{N} = \{(x_i^I, x_i^T)\}_{i=1}^{N}$ with $(x_i^I, x_i^T) \in \mathcal{I} \times \mathcal{T}$, the $x_i^I$ and $x_i^T$ denote an image and its corresponding caption, $\mathcal{I}$ and $\mathcal{T}$ indicate the visual and textual spaces respectively, and $\mathcal{I} \times \mathcal{T}$ indicates the joint Vision-Language space. Our goal is to learn a dual-encoder model $\phi = \{\phi_{image}, \phi_{text}\}$, where $\phi_{image}$ represents the image encoder and $\phi_{text}$ denotes the text encoder. We use the shorthand $I_i = \phi_{image}(x_i^I)$ and $T_i = \phi_{text}(x_i^T)$ to denote the encoded representations of an image and its caption, respectively. The contrastive objective of CLIP is formulated as

$$\ell_{CLIP}=-\frac{1}{|B|}\sum_{i\in B}\log\frac{\exp\left(sim(I_{i},T_{i})/\sigma\right)}{\sum_{j\in B}\exp\left(sim(I_{i},T_{j})/\sigma\right)},\tag{1}$$

where $sim(\cdot, \cdot)$ is the cosine similarity function, $B$ is a batch of samples, and $\sigma$ is a trainable parameter controlling the temperature. Intuitively, the above formulation explicitly aligns the representations of the image and text from one pair.

## 3.2 HPM: Hard Pair Mining

In the context of vision-language contrastive learning, we define "hard pairs" as the pairs that lie near a specified target pair within the joint vision-language space $\mathcal{I} \times \mathcal{T}$. The hard pair mining problem is formally defined in Equation 2, where $z_i$ signifies a target pair, $\mathcal{H}_i$ denotes a set of pairs chosen from the dataset $\mathcal{D}_i = \mathcal{D} \setminus z_i$, and the metric $\mathbf{S}(\cdot,\cdot)$ quantifies the similarity between the target pair and a set of pairs:

$$\mathcal{H}_{i}^{\star}=\arg\max_{\mathcal{H}_{i}}\mathbf{S}(z_{i},\mathcal{H}_{i}).\tag{2}$$

Figure 1: **Hard Pair Mining (HPM).** Choose hard pairs by optimizing the support set to maximize the agreement prediction of the target pair.

However, a key challenge arises in defining the similarity metric $\mathbf{S}$ for pairs. Existing methods (Radford et al., 2021; Li et al., 2022b;a) primarily focus on aligning an image with its caption (Radford et al., 2021; Li et al., 2022a) within an image-text pair. They rarely emphasize bringing similar pairs closer while distancing dissimilar ones, which makes current methods fall short in gauging the similarity between two pairs. For instance, the cosine similarity between two pairs is ill-defined within the context of current methods.

Selecting hard pairs by maximizing pair agreement.
To identify nearby pairs in the joint Vision-Language space, we introduce the idea of text-image pair agreement maximization, which can be viewed as a proxy task for selecting hard pairs. To illustrate why text-image pair agreement serves as an effective proxy for hard pair selection, we begin with the assumption that, for a machine learning model, the similarity between a test sample and the training data is a crucial factor affecting its prediction on the test sample. This assumption is supported by recent empirical and theoretical studies of model memorization (Chen et al., 2009; Zhang et al., 2021; Stephenson et al., 2021; Brown et al., 2021). Intuitively, if a pair agreement prediction model, trained on a set of pairs, predicts a particular target pair as a matching pair with high probability, then the target pair is likely similar to the matching pairs the model was trained on. With this in mind, the challenge of selecting hard pairs is reshaped into an optimization task centered on the text-image pair agreement, formally represented as

$$\arg\max_{\mathcal{H}_{i}}\mathbf{S}(z_{i},\mathcal{H}_{i})=\arg\max_{\mathcal{H}_{i}}P_{\mathcal{M}}(z_{i}|\mathcal{H}_{i}),\tag{3}$$

where $P_{\mathcal{M}}(z_i|\mathcal{H}_i)$ denotes the prediction of a pair agreement model $\mathcal{M}$ for the pair $z_i$ based on a pair set $\mathcal{H}_i \subseteq \mathcal{D}_i$. In this framework, the goal of selecting hard pairs is transformed into identifying a training set $\mathcal{H}_i$ such that the model $\mathcal{M}$ predicts the target pair as a matching pair. Designing a suitable pair agreement prediction model for this proxy task is nontrivial, because the model must not only predict the pair matching probability but also allow optimization of the training set, as indicated in Equation 3. Consequently, a conventional deep neural network design is unviable due to the impracticality of retraining across all possible sets $\mathcal{H}_i$ from $\mathcal{D}_i$. Taking inspiration from recent work (Norelli et al., 2022), we propose a data-centric design for the agreement prediction model $\mathcal{M}$. As illustrated in Figure 1, the model leverages two pretrained single-modal encoders, $f_{image}$ and $f_{text}$, to align representations of images and texts in a unified Vision-Language space. Specifically, the model encodes the target pair $z_i$ into $(I_i, T_i)$ using these single-modal encoders. For the visual modality, we determine a similarity vector between the target pair $z_i$ and the dataset $\mathcal{D}_i$, defined as $\vec{S}^I(x_i^I, \mathcal{D}_i) = [\ldots, sim(I_i, I_j), \ldots]^\top \in \mathbb{R}^{N-1}$. Here $I_j = f_{image}(x_j^I)$ with $(x_j^I, x_j^T)$ an element of $\mathcal{D}_i$, and $sim(\cdot, \cdot)$ denotes cosine similarity. To counteract noise, values in the vector $\vec{S}^I(x_i^I, \mathcal{D}_i)$ are set to zero if $sim(I_i, I_j) < \tau$; this cleaned-up vector is denoted $\widetilde{S}^I$. The procedure for the textual modality is analogous, producing a vector denoted $\widetilde{S}^T$. Note that the representations in this shared space are intuitively interpretable: each dimension corresponds to the visual/textual similarity of the input to a unique pair in the multimodal dataset. This interpretable characteristic enables us to directly optimize the supporting set to maximize the pair matching probability:

$$\mathcal{H}_{i}^{\star}=\arg\max_{|\mathcal{H}_{i}|=k}\widetilde{S}^{I}(x_{i}^{I},\mathcal{H}_{i})^{\top}\widetilde{S}^{T}(x_{i}^{T},\mathcal{H}_{i}),\tag{4}$$

where $\mathcal{H}_i^{\star}$ is the hard pair set and $k$ is the number of selected pairs, which is much smaller than $|\mathcal{D}|$. This problem can be solved efficiently by greedily choosing the dimensions that maximize the inner product; owing to the interpretable construction, the selected dimensions correspond directly to the desired pairs.
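Because each dimension of the thresholded similarity vectors corresponds to one candidate pair, the greedy solution of Equation 4 reduces to a top-$k$ over an elementwise product. Below is a minimal NumPy sketch of this selection rule, not the authors' released implementation: it assumes the DINO image features and SentenceT text features have been precomputed and L2-normalized, the function name and arguments are ours, and the final check anticipates the noisy-pair removal rule described next.

```python
import numpy as np

def mine_hard_pairs(img_feats, txt_feats, i, k=500, tau=0.5):
    """Greedy solution of Eq. 4 for target pair i.

    img_feats, txt_feats: (N, d) arrays of L2-normalized features,
    so the dot products below are cosine similarities."""
    sim_img = img_feats @ img_feats[i]        # \vec{S}^I(x_i^I, D_i)
    sim_txt = txt_feats @ txt_feats[i]        # \vec{S}^T(x_i^T, D_i)
    sim_img[sim_img < tau] = 0.0              # denoising threshold -> \tilde{S}^I
    sim_txt[sim_txt < tau] = 0.0              # -> \tilde{S}^T
    score = sim_img * sim_txt                 # per-dimension contribution to Eq. 4
    score[i] = -np.inf                        # exclude the target pair itself
    if (score > 0).sum() < k:                 # fewer than k supporting pairs:
        return None                           # flag z_i as a mismatched/noisy pair
    return np.argpartition(score, -k)[-k:]    # indices of the k hard pairs
```

In practice the two matrix-vector products would be computed in blocks on GPU and the outer loop over target pairs parallelized, as noted for Algorithm 1 in Appendix A.1.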
Mitigation of noisy data impact. The method above assumes the target pair $z_i$ to be a suitable matching pair. However, in inherently noisy datasets, such as web-crawled ones like LAION (Schuhmann et al., 2022), mismatched pairs may be present. The potential adverse effects of hard pairs generated from these mismatched pairs necessitate a strategy for their identification and elimination. To counteract the detrimental effect of such negative hard pairs, we employ a pair removal strategy based on the availability of hard pairs. The strategy proceeds as follows: a target pair $z_i$ is deemed unsuitable and thus removed if there is a non-empty subset $\mathcal{H}_i^{sub} \subseteq \mathcal{H}_i^{\star}$ such that $\widetilde{S}^I(x_i^I, \mathcal{H}_i^{sub})^\top \widetilde{S}^T(x_i^T, \mathcal{H}_i^{sub}) = 0$. This condition indicates that the number of matching pairs affirming $z_i$ as a genuine matching pair is less than $k$. For a dataset $\mathcal{D} \setminus z_i$, if there does not exist a small subset of size $k$ supporting $z_i$ as a matching pair, the target pair is likely an outlier, possibly resulting from a mismatch. Such outliers can degrade dataset quality, and thus they are removed to ensure the reliability of the hard data.

Fast hard pair mining (FastHPM). It is natural to infer that, for a dataset collected from a single source, the number of intrinsic hard pairs, *i.e.*, those robust enough to enhance the learned representation, will increase proportionally with the size of the dataset originating from that source. Intuitively, then, to identify $k$ (much less than $|\mathcal{D}|$) "qualified" hard pairs, a subset of the dataset $\mathcal{D}$ is enough. Therefore, we present the Fast Hard Pair Mining (FastHPM) approach, devised to circumvent the time complexity of hard pair mining over the entire dataset. FastHPM's objective can be formalized as

$$\mathcal{H}_{i}^{\star}\approx\arg\max_{|\mathcal{H}_{i}|=k}\widetilde{S}^{I}(x_{i}^{I},\mathcal{H}_{i})^{\top}\widetilde{S}^{T}(x_{i}^{T},\mathcal{H}_{i}),\tag{5}$$

where $\mathcal{H}_i \subseteq \widetilde{\mathcal{D}}_i$ and $\widetilde{\mathcal{D}}_i$, with $|\widetilde{\mathcal{D}}_i| = C$, is sampled uniformly from the set $\mathcal{D}_i$. In this equation, it is noteworthy that the choice of $C$ depends solely on the number of hard pairs $k$, not on the size of $\mathcal{D}_i$. Consequently, this optimization reduces the time complexity of FastHPM to $O(N)$. The detailed procedure of the hard pair mining algorithm is consolidated and presented in Appendix A.1.
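Continuing the NumPy sketch above, FastHPM only changes the candidate pool: Equation 5 scores a uniform subsample of fixed size $C$ rather than the whole dataset, so the per-target cost no longer grows with $|\mathcal{D}|$. The function name and the illustrative value of `C` below are our own assumptions.

```python
def mine_hard_pairs_fast(img_feats, txt_feats, i, k=500, C=3_000_000,
                         tau=0.5, seed=0):
    """FastHPM (Eq. 5): run the greedy selection of Eq. 4 on a uniformly
    sampled candidate pool of size C instead of the full dataset."""
    rng = np.random.default_rng(seed)
    n = img_feats.shape[0]
    pool = rng.choice(n, size=min(C, n), replace=False)  # \tilde{D}_i
    pool = pool[pool != i]                               # drop the target pair
    sim_img = img_feats[pool] @ img_feats[i]
    sim_txt = txt_feats[pool] @ txt_feats[i]
    sim_img[sim_img < tau] = 0.0
    sim_txt[sim_txt < tau] = 0.0
    score = sim_img * sim_txt                            # contribution to Eq. 5
    top = np.argpartition(score, -k)[-k:]
    return pool[top]                                     # indices into the full dataset
```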
## 3.3 HNML: Hard Negative Margin Loss

The image-text contrastive loss $\ell_{CLIP}$, as illustrated in the preliminary section, aligns the true image-text pairs, but it poses no constraints on the overall geometry among data pairs (Goel et al., 2022). After hard data are involved in the finetuning stage, equally maximizing the distance to normal negative pairs and hard negative pairs is an undesirable way to utilize the information provided by hard negatives. The intuition follows directly from Figure 2: in a desirable representation space, the similarity between the positive and a hard negative, $S_1$, should be greater than the similarity between the positive and ordinary negatives, $S_2$, $S_3$. Therefore, to impose this additional geometric structure, we introduce the Hard Negative Margin Loss (HNML):

$$\ell_{margin}=\frac{1}{|B|}\sum_{j\in B}\max\big(0,\,sim(I_{i},T_{j})-\min_{j^{\prime}\in\mathcal{H}_{i}^{p}}\{sim(I_{i},T_{j^{\prime}})\}\big),\tag{6}$$

where $\mathcal{H}_i^p \subseteq \mathcal{H}_i^{\star}$ is the set of hard negative pairs for the target $z_i$ involved in one training batch.

Figure 2: **Hard Negative Margin Loss (HNML).** Hard negative pairs are closer to the positive than the normal negative pairs.

Figure 3: **Further training CLIP with hard pairs.** For text-image pairs within a batch, we sample corresponding hard data from the preprocessed hard pair set.

Note that HNML is computationally efficient: no extra inner product computation is required, since the geometric regularization is applied over the inner product matrix already computed in the original CLIP loss, Equation 1. The well-trained model is then finetuned with the following loss, where $\gamma$ is a hyperparameter balancing the two terms:

$$\ell_{finetune}=\ell_{CLIP}+\gamma\,\ell_{margin}.\tag{7}$$

To boost the performance of well-trained CLIP models without introducing extra data or extra parameters, we adopt a further-training strategy that incorporates the preprocessed hard pairs into the batch composition during training. As shown in Figure 3, for text-image pairs within the batch $B$, we randomly sample a subset $B'$ as seeds. Then, for each $z_i \in B'$, we randomly select $|\mathcal{H}_i^p| = p$ pairs from $\mathcal{H}_i^{\star}$. The actual training batch is $B \cup \bigcup_{z_i \in B'} \mathcal{H}_i^p$. We summarize the training pipeline in Appendix A.1.
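A minimal PyTorch sketch of the combined objective in Equation 7 is given below. It is our reading of Equations 6 and 7 rather than the authors' code: `hard_mask[i, j]` is an assumed bookkeeping tensor marking batch entries $j$ injected as hard negatives of pair $i$ (the set $\mathcal{H}_i^p$ assembled as in Figure 3), the symmetric image-to-text/text-to-image form of the CLIP term follows common practice, and the temperature is fixed here although it is trainable in the paper.

```python
import torch
import torch.nn.functional as F

def helip_loss(img_emb, txt_emb, hard_mask, sigma=0.07, gamma=1.0):
    """CLIP contrastive loss (Eq. 1) plus Hard Negative Margin Loss (Eq. 6).

    img_emb, txt_emb: (B, d) embeddings of the composed training batch.
    hard_mask: (B, B) bool, True where column j is a hard negative of row i.
    The margin term reuses the similarity matrix of the CLIP term, so it
    adds no extra inner products."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    sim = img_emb @ txt_emb.t()                                # (B, B) cosine sims
    labels = torch.arange(sim.size(0), device=sim.device)
    loss_clip = 0.5 * (F.cross_entropy(sim / sigma, labels)
                       + F.cross_entropy(sim.t() / sigma, labels))
    # Eq. 6: ordinary negatives must not be more similar to the positive
    # than the least-similar hard negative of the same row.
    hard_min = sim.masked_fill(~hard_mask, float("inf")).amin(dim=1)  # min over H_i^p
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    ordinary = ~hard_mask & ~eye                               # ordinary negatives only
    loss_margin = (torch.clamp(sim - hard_min[:, None], min=0) * ordinary).mean()
    return loss_clip + gamma * loss_margin                     # Eq. 7
```

Rows whose seed pair received no hard negatives get `hard_min = inf`, so their margin term vanishes, matching the sampling scheme where only the seeds in $B'$ carry hard data.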
## 4 Experiments

In the experiments, detailed in Section 4.2, we showcase the potential of the Helip approach to boost zero-shot classification, linear probing, and zero-shot image-text retrieval performance for vision-language models. In Section 4.3, the efficacy and robustness of Helip in handling noisy datasets are explored. Section 4.4 offers comparative insights, highlighting the superiority of the Hard Pair Mining (HPM) method over other techniques that rely solely on information at the sample level. Lastly, in Section 4.5, we delve into the impact of the Hard Negative Margin Loss (HNML), emphasizing its role in maximizing the information derived from challenging pairs.

## 4.1 Experimental Setup

Training datasets. In our experiments, we employed open-source datasets from multiple sources, including Conceptual Captions 3M (CC3M) (Sharma et al., 2018) and Conceptual Captions 12M (CC12M) (Changpinyo et al., 2021). Additionally, we utilized two distinct 15M subsets of the YFCC100M (Thomee et al., 2016) dataset: the first, v1, collected by Radford et al. (2021), and the second, v2, collected by Li et al. (2022b). The combined datasets of CC3M, CC12M, and YFCC15M v1, which we denote as Open29M following the term used in previous work (Li et al., 2022b), were not completely obtained due to expired URLs. Additionally, we independently sampled 7.5M and 8M subsets from the noisier data source, LAION-5B (Schuhmann et al., 2022), labeled as LAION7.5M and LAION8M, respectively. While these datasets are smaller than the 400 million pair dataset used in CLIP's original study (Radford et al., 2021), they are well-suited to the data and computational resources we have and have been frequently employed in benchmark evaluations for numerous studies on language-image pretraining (Goel et al., 2022; Li et al., 2022b; Mu et al., 2022).

Downstream datasets. We mainly verify the effectiveness of our methods through zero-shot image classification, linear probing, and zero-shot image-text retrieval. For zero-shot classification, in addition to the commonly used ImageNet (Deng et al., 2009), CIFAR10, and CIFAR100 (Krizhevsky et al., 2009), we also verify performance on 7 fine-grained classification datasets: Caltech101 (Fei-Fei et al., 2004), Food101 (Bossard et al., 2014), Sun397 (Xiao et al., 2010), Flowers102 (Nilsback & Zisserman, 2008), CUB (Wah et al., 2011), Stanford Cars (Krause et al., 2013), and FGVC Aircraft (Maji et al., 2013). For the zero-shot image-text retrieval task, MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015) are adopted.

Implementation details. Our experiments are conducted across three distinct architectures: ResNet-50, ViT-B/16, and ViT-B/32, tailored to the various datasets and pretrained models. Specifically, when pretraining on CC3M and CC12M, we utilize the ResNet-50 architecture with the CLIP model. For experiments involving the SLIP model on CC3M and CC12M, we employ ViT-B/16 to align with the framework established in Mu et al. (2022). In contrast, for pretraining on the YFCC15M v1, v2, and Open29M datasets, we consistently use ViT-B/32 to ensure a fair comparison with the results reported in Li et al. (2022b). Furthermore, for the SLIP and DECLIP models, we adapt the pretrained parameters from the publicly available resources (https://github.com/facebookresearch/SLIP, https://github.com/Sense-GVT/DeCLIP). The input resolution of the image encoder is 224 × 224 and the maximum context length of the text encoder is 77. All of our experiments are conducted on 8 V100 GPUs, with a batch size of 128 for ViT-B/16 models and a batch size of 512 for ResNet-50 and ViT-B/32 models. The dimension of the image and text embeddings is 1024 for ResNet-50 models and 512 for ViT-B/16 and ViT-B/32 models. We set τ = 0.5, γ = 1, and p = 1 for all experiments by default. Automatic mixed precision is used to save GPU memory. To prevent the model from overfitting to a potentially harmful distribution induced by the batches of hard data, we use early stopping if there is no performance gain within 5 consecutive epochs. Following the pre-training configurations established by Ilharco et al. (2021) and Mu et al. (2022), we set the training epochs to 32 for the baseline models that need to be pre-trained from scratch; the best performance is typically achieved within these 32 epochs. The checkpoint demonstrating the highest performance is then chosen and saved for further use with Helip. Unless otherwise specified, for the preparation of hard negative pairs, and as suggested in previous work (Norelli et al., 2022), we use encoders pretrained on single-modality sources rather than multimodality-pretrained ones, reflecting that our method is designed to work with minimal assumptions regarding the encoders. Specifically, we employ the unsupervised pretrained vision transformer DINO VITs8 (Caron et al., 2021) as the image encoder and SentenceT (Reimers & Gurevych, 2019), a transformer trained on a dataset comprising over 1 billion sentences gathered from the internet, as the text encoder. The embedding sizes are 384 for DINO VITs8 and 768 for SentenceT. For more details, we refer the reader to the appendix.

## 4.2 Main Results And Discussion

Zero-shot classification.
We compare the zero-shot performance of CLIP, SLIP, DECLIP, and the models finetuned by Helip on CC3M, CC12M, YFCC15M, and Open29M. We denote the models finetuned by Helip as CLIP-Helip, SLIP-Helip, and DECLIP-Helip, respectively. As shown in Table 1, models further finetuned by Helip consistently obtain significant improvements compared with their counterparts. Specifically, on CC3M, with the help of Helip, the zero-shot classification accuracy of the CLIP model on ImageNet improved from 19.04% to 19.86%, while the SLIP model gained over 13% relative to its original number, achieving 26.05% accuracy on ImageNet. We additionally include two baseline methods, CYCLIP (Goel et al., 2022) and CLOOB (Fürst et al., 2021), for reference. For pretraining on CC12M, we directly adopted the checkpoints released by SLIP (Mu et al., 2022); SLIP-Helip outperforms its counterpart by 4.47% in zero-shot accuracy on ImageNet. Due to the absence of openly accessible parameters for DECLIP on the CC3M and CC12M datasets, our analysis focused on comparing DECLIP with DECLIP-Helip over the YFCC15M v2 dataset. In this context, we present the performance of the SLIP and DECLIP models, as pretrained and released by Li et al. (2022b), using their designated evaluation pipeline (denoted with ∗), and we include the corresponding results derived from our own evaluation pipeline for a fair comparison. Notably, both SLIP and DECLIP improved with Helip, with average increases of 15.49% and 6.74%, respectively. Further, to demonstrate Helip's sustained efficacy across larger datasets, we assessed CLIP and CLIP-Helip on Open29M. The best performance of CLIP on Open29M was recorded at the 18th epoch, achieving 42.32% zero-shot performance on ImageNet; extending the training by two more epochs resulted in a slight decrease to 42.25%. Remarkably, Helip was able to enhance CLIP's performance from 42.32% to 46.33% with just *one additional training epoch*.

| Dataset | Method | ImageNet | CIFAR10 | CIFAR100 |
|---------|--------|----------|---------|----------|
| CC3M | CYCLIP (Goel et al., 2022) | 22.08 | 51.45 | 23.15 |
| | CLOOB (Fürst et al., 2021) | 23.97 | - | - |
| | CLIP† (Radford et al., 2021) | 19.04 | 33.06 | 13.77 |
| | CLIP†-Helip | 19.86 | 34.05 | 14.13 |
| | SLIP (Mu et al., 2022) | 23.00 | 65.61 | 34.69 |
| | SLIP-Helip | 26.05 | 68.18 | 37.77 |
| CC12M | CLIP† (Radford et al., 2021) | 30.27 | 51.07 | 21.94 |
| | CLIP†-Helip | 32.05 | 52.27 | 24.51 |
| | SLIP (Mu et al., 2022) | 41.17 | 81.30 | 53.68 |
| | SLIP-Helip | 45.64 | 82.31 | 53.79 |
| YFCC15M | SLIP (Mu et al., 2022) | 25.29 (34.30∗) | 60.19 | 26.80 |
| | SLIP-Helip | 35.43 | 75.49 | 47.84 |
| | DECLIP (Li et al., 2022b) | 36.05 (43.20∗) | 78.12 | 50.60 |
| | DECLIP-Helip | 43.80 | 84.88 | 56.31 |
| Open29M | CLIP† (Radford et al., 2021) | 42.32 | 71.98 | 42.73 |
| | CLIP†-Helip | 46.33 | 77.97 | 48.33 |

Table 1: **Zero-shot performance on ImageNet, CIFAR10 and CIFAR100.** The † indicates baselines pre-trained by us. For all other baselines, publicly available pre-trained parameters were used. Specifically for SLIP and DECLIP on YFCC15M, we report results from two sources: our evaluation using OpenCLIP's framework with pre-trained parameters released by Li et al. (2022b), and the performance originally reported in Li et al. (2022b), marked with ∗.
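The zero-shot numbers above follow the standard CLIP evaluation protocol. A small illustrative sketch of that protocol is given below; the single prompt template and the `encode_image`/`encode_text` interface (accepting raw inputs, with tokenization and preprocessing omitted) are assumptions on our part, not the exact evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_predict(model, images, class_names,
                      template="a photo of a {}."):
    """Embed one prompted caption per class, then assign each image to
    the class with the most similar (cosine) text embedding."""
    text = model.encode_text([template.format(c) for c in class_names])
    image = model.encode_image(images)
    text = F.normalize(text, dim=-1)
    image = F.normalize(image, dim=-1)
    return (image @ text.t()).argmax(dim=1)   # predicted class index per image
```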
Zero-shot fine-grained classification. By leveraging challenging image-text pairs in contrastive learning, Helip amplifies the discriminative capability of the CLIP model's visual embedding. This improvement proves valuable in classification, particularly for fine-grained datasets. Our evaluation on 7 fine-grained classification datasets (Table 2) reveals that SLIP-Helip boosts the zero-shot accuracy of CC3M and CC12M pretrained SLIP on Caltech101 by 12.88% and 3.95%, respectively. Both CLIP and SLIP models witness consistent improvements with their Helip counterparts.

Linear probing. The linear probing task trains a randomly initialized linear classifier on the features extracted from the frozen image encoder on the downstream dataset. To accomplish this, we train a logistic regression classifier using scikit-learn's L-BFGS implementation (Pedregosa et al., 2011), with a maximum of 1,000 iterations, on those 7 datasets. For each dataset, we search for the best regularization strength factor on the validation set over 45 logarithmically spaced steps within the range 1e-6 to 1e+5 (see the sketch below). Experimental results in Table 3 demonstrate that both CLIP-Helip and SLIP-Helip have consistent improvements over their counterparts on almost all 7 datasets. Note that on CC12M, SLIP-Helip performs only marginally better on 5 out of 7 datasets. This is probably because the self-supervision of SLIP (Mu et al., 2022) within the visual modality can be beneficial for learning fine-grained visual embeddings, while SLIP-Helip does not include image self-supervision during training. In addition, we did not match the training batch size of SLIP (Mu et al., 2022) because of resource limitations. A combination of Helip, image self-supervision, and a larger training batch size may be a promising direction for achieving better linear probe performance.
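The probe protocol described above can be sketched as follows, assuming the frozen image features have already been extracted; function and variable names are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(train_x, train_y, val_x, val_y):
    """Fit an L-BFGS logistic regression on frozen image features,
    selecting the regularization strength on the validation set."""
    best_acc, best_clf = -1.0, None
    for C in np.logspace(-6, 5, 45):          # 1e-6 ... 1e+5, 45 log-spaced steps
        clf = LogisticRegression(C=C, solver="lbfgs", max_iter=1000)
        clf.fit(train_x, train_y)
        acc = clf.score(val_x, val_y)
        if acc > best_acc:
            best_acc, best_clf = acc, clf
    return best_clf                           # then evaluated on the test split
```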
| |------------|----------|--------------|-----------|----------|--------------|-------|-----------------|-----------------|--------| | CC3M | CYCLIP | 80.88 | 54.95 | - | 83.74 | - | 22.72 | 28.02 | - | | CLIP | 80.11 | 53.82 | 56.40 | 84.07 | 40.30 | 22.70 | 35.61 | 53.29 | | | CLIP-Helip | 82.49 | 59.79 | 59.56 | 87.84 | 46.19 | 30.01 | 42.48 | 58.34 | | | SLIP | 87.96 | 72.50 | 66.96 | 91.91 | 49.77 | 39.25 | 45.87 | 64.89 | | | SLIP-Helip | 89.64 | 73.09 | 67.67 | 93.02 | 53.16 | 42.44 | 48.66 | 66.81 | | | CC12M | CLIP | 85.35 | 68.00 | 64.45 | 87.88 | 48.75 | 57.80 | 40.32 | 64.65 | | CLIP-Helip | 85.87 | 68.89 | 64.95 | 88.36 | 49.41 | 58.55 | 40.17 | 65.17 | | | SLIP | 92.89 | 83.63 | 74.34 | 94.87 | 60.99 | 73.43 | 52.23 | 76.05 | | | SLIP-Helip | 92.85 | 84.25 | 74.74 | 95.09 | 60.53 | 74.23 | 52.36 | 76.29 | | | Pretraining Dataset | Method | COCO | Flickr30K | | | |-----------------------|----------|--------|-------------|-------|------| | | R@1 ↑ | R@5 ↑ | R@1 ↑ | R@5 ↑ | | | CC3M | CLIP | 14.4 | 34.1 | 31.7 | 56.0 | | CLIP-Helip | 17.8 | 39.8 | 35.4 | 61.0 | | | SLIP | 22.3 | 45.6 | 39.6 | 68.6 | | | SLIP-Helip | 23.4 | 48.3 | 41.8 | 69.6 | | | CC12M | CLIP | 26.9 | 52.6 | 47.2 | 74.3 | | CLIP-Helip | 27.8 | 54.3 | 48.2 | 75.4 | | | SLIP | 39.0 | 66.0 | 65.4 | 90.1 | | | SLIP-Helip | 39.4 | 67.2 | 66.2 | 89.7 | | et al., 2022) because of resource limitations. A combination of Helip and image self-supervision and larger training batch size may be a potential direction for achieving better linear probe performance. Table 3: **Linear probe performance on Fine-grained Image Classification.** We report the linear probe performance on a variety of classification benchmarks. On average, both the CLIP and SLIP pretrained on CC3M and CC12M are improved. Table 4: **Zero-shot image-text retrieval results on MSCOCO and Flickr.** ↑ indicates higher is better. Combining with Helip, CLIP and SLIP show better performance. Zero-shot retrieval. We evaluate Helip on zero-shot image-to-text retrieval tasks on MS-COCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015). We compare CLIP, SLIP, and their counterparts trained on CC3M and CC12M respectively in Table 4. As shown in the table, both CLIP and SLIP benefit from Helip. ## 4.3 Performance Of Helip On Noisy Dataset We extend our investigation by analyzing the efficacy of the HELIP model on subsets of LAION7.5M and 8M, respectively, which are randomly sampled from LAION (Schuhmann et al., 2022). The encoders in the Hard Positive Mining (HPM) utilized pre-trained VITs8 and SentenceT, while the CLIP model was executed with ViT-B/16. The outcomes are compiled in Table 5. Upon analysis of the presented data in Table 5, it | ImageNet | CIFAR10 | CIFAR100 | Caltech | Food | Sun | Avg. | | |------------|-----------|------------|-----------|--------|-------|--------|------| | CLIP-7.5M | 23.5 | 34.6 | 14.5 | 58.9 | 28.6 | 25.3 | 30.8 | | HELIP-7.5M | 25.8 | 39.9 | 16.7 | 61.9 | 34.1 | 28.2 | 34.4 | | CLIP-8M | 25.1 | 31.1 | 12.9 | 60.9 | 29.5 | 27.5 | 31.2 | | HELIP-8M | 26.5 | 38.8 | 14.6 | 62.3 | 33.1 | 26.6 | 33.7 | Table 5: **Zero-shot performance of HELIP on two LAION subsets.** HELIP counterparts witness consistent improvements over almost all the datasets. can be discerned that the Helip model, in general, outperforms the CLIP model on both subsets for the majority of the datasets used for evaluation, namely ImageNet, CIFAR10, CIFAR100, Caltech, and Food. The average performance of Helip is also higher in both subsets. 
In the 7.5M subset, Helip yields better performance across all datasets, including Sun, with an average improvement of 3.6%. For the 8M subset, while the CLIP model scores slightly higher on the Sun dataset, the overall performance of Helip remains superior, with an average increase of 2.5%. These results underscore the improved performance of Helip relative to CLIP on a noisy data source, showing its potential for boosting larger-scale pretraining that involves noisier data.

## 4.4 Comparison With Other Hard Data Selection Methods

We evaluate the efficacy of the proposed method in enhancing the discriminative capacity of learned representations by comparing its zero-shot classification performance with that of other hard data mining strategies. As described in Section 2, a common way to define hard data is through intra-modality similarity. Hence, we introduce hard data mining baselines based on image similarity and text similarity, denoted IM and TM, respectively. For a given target pair, we compute the cosine similarity between its image/text representation and those of the remaining dataset. The image and text representations are encoded using a pretrained ResNet-50 and BERT, respectively. As a global preprocessing step, both the IM and TM methods mine hard negatives. Subsequently, we integrate the mined hard negative pairs into the training pipeline of the CLIP+IM and CLIP+TM methods and optimize the original contrastive loss to finetune the model. Additionally, we employ the hard negative contrastive loss, HN-NCE, proposed by Radenovic et al. (2023) as a baseline; HN-NCE up-weights hard negatives identified by the current model. As shown in Table 6, when the CC3M pretrained CLIP model is combined with Helip, our pair-level hard data mining method significantly outperforms the sample-level techniques. We visualize the mined hard data obtained from the three different preprocessing methods, namely hard pair mining (HPM), image similarity mining (IM), and text similarity mining (TM), in Figure 4. The image-text pairs selected by HPM are displayed in the first row, while the second and third rows show the pairs selected by IM and TM, respectively. We observe that the captions of the hard pairs mined with image similarity are only loosely connected to the image of the target pair. For samples mined by TM, the images are even mismatched with the caption of the target pair. The fact that pairs mined by TM are easier than those mined by IM is also reflected in Table 6, where the zero-shot performance of CLIP+IM consistently outperforms CLIP+TM across the three datasets.

## 4.5 Impact Of Hard Negative Margin Loss

We investigate the impact of the hard negative margin loss (HNML) on the performance of the SLIP model.
In particular, we analyze the performance of the SLIP model, pre-trained on the CC3M dataset, when it is further trained with HPM both with and without HNML. Our approach involves a comparative analysis of the model's zero-shot classification performance across multiple datasets, including ImageNet, CIFAR100, CIFAR10, Caltech101, Food101, and Sun397. The results of our evaluation are detailed in Table 7. They demonstrate that the SLIP model supplemented with HPM and HNML exhibits superior performance, with boosts of 4.51 and 3.27 points over the SLIP and SLIP + HPM models, respectively. Interestingly, the model achieved superior performance on the CIFAR10 dataset without HNML. We postulate that this may be attributed to HNML enhancing the discriminative power of the learned representations by employing the class distance as a cost metric. In light of this, our findings suggest that for classification datasets consisting of a larger number of subclasses, employing HNML during training can increase classification performance.

| Method | ImageNet | CIFAR10 | CIFAR100 |
|---|---|---|---|
| CLIP + Helip | 19.86 | 34.05 | 14.13 |
| CLIP + TM | 16.70 | 28.71 | 9.67 |
| CLIP + IM | 16.93 | 29.22 | 10.42 |
| CLIP + HN-NCE | 19.47 | 29.88 | 11.83 |

Table 6: **Comparison of zero-shot performance for CC3M-finetuned CLIP with hard pairs (by Helip) and hard samples (by other methods).** Helip shows superior performance, consistently outperforming local/global hard sample mining techniques by a significant margin.

Figure 4: **Hard negative data selected by different methods.** Compared to text-image data mined using the visual or textual modality, hard pairs mined by HPM are more difficult.

Figure 5: **HPM and fastHPM.** We show the hard pairs mined by HPM and fastHPM. The quality of hard pairs mined by fastHPM is competitive with the pairs mined by HPM.

| Method | ImageNet | CIFAR10 | CIFAR100 | Caltech101 | Food101 | Sun397 | Avg. |
|---|---|---|---|---|---|---|---|
| SLIP | 23.00 | 65.61 | 34.69 | 54.01 | 16.03 | 29.20 | 37.09 |
| w/o HNML | 24.94 | 69.44 | 36.35 | 64.07 | 16.51 | 30.91 | 40.37 |
| w/ HNML | 26.05 | 68.18 | 37.77 | 66.89 | 17.05 | 33.68 | 41.60 |

Table 7: **SLIP finetuned with and without hard negative margin loss.** When finetuned with hard pairs, the zero-shot performance of CC3M pretrained SLIP can be further enhanced using HNML.

| Encoders in HPM | ImageNet | CIFAR10 | CIFAR100 | Avg. |
|---|---|---|---|---|
| CLIP Encoders | 19.57 | 33.28 | 13.53 | 22.12 |
| VITs8 + SentenceT | 19.86 | 34.05 | 14.13 | 22.68 |
| VITb16 + SentenceT | 19.62 | 35.53 | 14.67 | 23.27 |
| VITs8 + T5 | 19.61 | 33.99 | 13.82 | 22.47 |

Table 8: **Zero-shot performance of Helip with different encoders in HPM.** HPM's performance is insensitive to the choice of encoders.

## 4.6 Delving Into Hard Pair Mining

Impact of different encoders in HPM. We explored the effect of different pretrained encoders on HPM's performance by alternating the image and text encoders. Initially, the unsupervised pretrained DINO VITs8 (Caron et al., 2021) was paired with the SentenceT (Reimers & Gurevych, 2019) transformer, trained on over a billion internet-based sentences.
This combination was later swapped for the SWAG VITb16 (Singh et al., 2022) and the T5 (Raffel et al., 2020). Additionally, experiments using the multimodal encoders of OpenAI's CLIP model (Radford et al., 2021) were conducted. Interestingly, as Table 8 suggests, the encoder choice has a negligible impact on HPM's performance, likely due to the proficiency of current pretrained models in modeling intra-modal similarities. Moreover, the ability to use single-modal pretrained models and still achieve competitive or superior performance implies that there is no assumption of having access to a high-quality CLIP model, such as OpenAI's CLIP-400M.

Performance comparison between HPM and FastHPM. A comparison was made between the zero-shot performances of SLIP models further trained with hard pairs obtained from both HPM and fastHPM. This comparison, summarized in Table 9, was conducted under three different settings, each maintaining the hyperparameter k = 500. We established candidate subsets $\widetilde{\mathcal{D}}_i$ of sizes 3M and 6M, and accordingly denote Helip with these subset sizes as Helip-3M and Helip-6M. Table 9 shows that the zero-shot performances of Helip-3M and Helip-6M remain competitive with the global HPM hard pair mining approach. These findings suggest that fastHPM offers an efficient strategy for hard pair mining without compromising performance. Additionally, they hint at fastHPM's potential to scale up hard pair mining in larger pre-training datasets, a promising direction for future exploration.

| Method | ImageNet | CIFAR10 | CIFAR100 |
|---|---|---|---|
| SLIP | 41.17 | 81.30 | 53.68 |
| Helip-3M | 45.07 | 82.42 | 55.22 |
| Helip-6M | 44.98 | 81.64 | 56.62 |
| Helip-Full | 45.64 | 82.31 | 53.79 |

Table 9: **Zero-shot performance for SLIP + Helip on CC12M with hard samples mined by HPM and fastHPM.** Compared with hard samples mined by the full HPM, the fast versions are competitive.

Visual insights into HPM and FastHPM. We visualize the hard pairs identified by the aforementioned three methods. Within Figure 5, the leftmost image-text pair is the target. The pairs in the first row are those selected via HPM; the second and third rows present the image-text pairs identified by the 6M fastHPM and the 3M fastHPM methods, respectively. Through comparative visualization, it is evident that the hard pairs pinpointed by fastHPM bear a significant resemblance to the target pair. For readers keen on delving deeper, we provide an extended set of visualization results in Appendix A.2.

Computational time analysis. Table 10 compares the computational time required by HPM and fastHPM. The hard negative pair preparation times listed were measured on 8 V100 GPUs, with the exception of the entry marked ∗, which was measured on a single V100 GPU. Given its efficiency and the performance similarities observed in Table 9, fastHPM emerges as a compelling alternative to the full HPM method.

| Method | CC3M | CC12M | YFCC15M |
|---|---|---|---|
| Helip-3M | - | 2h18min | 3h27min |
| Helip-6M | - | 5h3min | 6h19min |
| Helip-Full | 1h9min∗ | 9h11min | 17h41min |

Table 10: **Preparation time for hard pairs.** FastHPM speeds up the hard negative pair mining process.

## 5 Conclusion

In this work, we explored the potential of boosting pre-trained CLIP models' performance by more effectively leveraging their original training dataset.
This endeavor stemmed from recognizing the challenges posed by the loosely connected nature of web-crawled image-text pairs, which lead to suboptimal utilization of the available data by the standard CLIP loss. Our framework, Helip, introduces a practical and efficient solution for boosting model performance without requiring extensive retraining or additional datasets. It utilizes challenging data from the models' original training datasets to improve contrastive learning. Specifically, Helip defines hard negative data as the nearby pairs within the Vision-Language space. The Hard Pair Mining (HPM) module efficiently identifies challenging negative pairs, operating under the premise of access to single-modality pretrained models. Furthermore, the Hard Negative Margin Loss (HNML) effectively utilizes the information provided by those hard pairs during the fine-tuning phase. Our empirical results highlight the efficacy of the Helip approach and emphasize the significance of pair-level hard data, as demonstrated by notable improvements across various benchmarks such as zero-shot classification and image-text retrieval.

## 6 Future Work

Moving forward, several avenues for future research present themselves. First, we aim to explore composition-aware fine-tuning for VLMs, which could potentially enable more effective utilization of multimodal information. Moreover, we are intrigued by the prospect of combining parameter-efficient tuning (He et al., 2022) with Helip, potentially further enhancing performance. Another area of interest is scaling up the dataset size and examining the applicability of the scaling law to our method. We also intend to investigate how the integration of our boosting algorithm might alter multimodal dataset curation algorithms (Gadre et al., 2023). Ultimately, we hope our work will serve as a catalyst for additional research in the fine-tuning of pre-trained, large-scale multimodal models.

## References

Alberto Baldrati, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. Conditioned and composed image retrieval combining and partially fine-tuning clip-based features. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2022, New Orleans, LA, USA, June 19-20, 2022*, pp. 4955–4964. IEEE, 2022. doi: 10.1109/CVPRW56347.2022.00543. URL https://doi.org/10.1109/CVPRW56347.2022.00543.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In *European conference on computer vision*, pp. 446–461. Springer, 2014.

Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar. When is memorization of irrelevant training data necessary for high-accuracy learning? In *Proceedings of the 53rd annual ACM SIGACT symposium on theory of computing*, pp. 123–132, 2021.

Tiffany Tianhui Cai, Jonathan Frankle, David J. Schwab, and Ari S. Morcos. Are all negatives created equal in contrastive instance discrimination? *ArXiv preprint*, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021*, pp. 9630–9640. IEEE, 2021. doi: 10.1109/ICCV48922.2021.00951. URL https://doi.org/10.1109/ICCV48922.2021.00951.

Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut.
Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 3558– 3568. Computer Vision Foundation / IEEE, 2021. doi: 10.1109/CVPR46437.2021.00356. URL https://openaccess.thecvf.com/content/CVPR2021/html/Changpinyo_Conceptual_12M_Pushing_ Web-Scale_Image-Text_Pre-Training_To_Recognize_Long-Tail_Visual_CVPR_2021_paper.html. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In *Proc. of ICML*, volume 119 of *Proceedings of Machine Learning Research*, pp. 1597–1607. PMLR, 2020a. URL http://proceedings.mlr.press/v119/chen20j.html. Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. *ArXiv preprint*, 2020b. Yihua Chen, Eric K Garcia, Maya R Gupta, Ali Rahimi, and Luca Cazzanti. Similarity-based classification: Concepts and algorithms. *Journal of Machine Learning Research*, 10(3), 2009. Yufeng Cui, Lichen Zhao, Feng Liang, Yangguang Li, and Jing Shao. Democratizing contrastive languageimage pre-training: A clip benchmark of data, model, and supervision, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248–255. IEEE Computer Society, 2009. doi: 10.1109/CVPR.2009.5206848. URL https://doi.org/10.1109/CVPR.2009.5206848. Li Fei-Fei, Fergus Rob, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In *Computer Vision and Pattern* Recognition Workshop, 2004. CVPRW'04. Conference on. IEEE, 2004. Andreas Fürst, Elisabeth Rumetshofer, Viet Tran, Hubert Ramsauer, Fei Tang, Johannes Lehner, David P. Kreil, Michael Kopp, Günter Klambauer, Angela Bitto-Nemling, and Sepp Hochreiter. CLOOB: modern hopfield networks with infoloob outperform CLIP. *ArXiv preprint*, 2021. Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. *ArXiv preprint*, 2023. Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A Rossi, Vishwa Vinay, and Aditya Grover. Cyclip: Cyclic contrastive language-image pretraining. *ArXiv preprint*, 2022. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. In *Proc. of ICLR*. OpenReview.net, 2022. URL https: //openreview.net/forum?id=0RDcd5Axok. Tri Huynh, Simon Kornblith, Matthew R. Walter, Michael Maire, and Maryam Khademi. Boosting contrastive self-supervised learning with false negative cancellation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, 2022. Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, July 2021. URL https://doi.org/10.5281/zenodo.5143773. If you use this software, please cite it as below. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. 
Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In Marina Meila and Tong Zhang (eds.), *Proc. of ICML*, volume 139 of *Proceedings of Machine* Learning Research, pp. 4904–4916. PMLR, 2021. URL http://proceedings.mlr.press/v139/jia21b. html. Yannis Kalantidis, Mert Bülent Sariyildiz, Noé Pion, Philippe Weinzaepfel, and Diane Larlus. Hard negative mixing for contrastive learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ f7cade80b7cc92b991cf4d2806d6bd78-Abstract.html. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *Proceedings of the IEEE International Conference on Computer Vision Workshops*, pp. 554–561, 2013. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven Chu-Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 9694–9705, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ 505259756244493872b7709a8a01b536-Abstract.html. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings* of Machine Learning Research, pp. 12888–12900. PMLR, 2022a. URL https://proceedings.mlr.press/ v162/li22n.html. Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In *Proc. of ICLR*. OpenReview.net, 2022b. URL https://openreview.net/forum?id=zq1iJkNk3uN. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In *Proc. of ECCV*, 2014. S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013. Norman Mu, Alexander Kirillov, David A. Wagner, and Saining Xie. SLIP: self-supervision meets languageimage pre-training. In *Proc. of ECCV*, 2022. M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In *Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing*, Dec 2008. Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, and Francesco Locatello. Asif: Coupled data turns unimodal models to multimodal without training. *ArXiv preprint*, 2022. 
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. *the Journal of machine Learning research*, 2011. Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In *2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015*, pp. 2641–2649. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.303. URL https://doi.org/10.1109/ICCV.2015.303. Filip Radenovic, Abhimanyu Dubey, Abhishek Kadian, Todor Mihaylov, Simon Vandenhende, Yash Patel, Yi Wen, Vignesh Ramanathan, and Dhruv Mahajan. Filtering, distillation, and hard negatives for visionlanguage pre-training. *CoRR*, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), Proc. of ICML, volume 139 of *Proceedings of Machine Learning Research*, pp. 8748–8763. PMLR, 2021. URL http://proceedings.mlr.press/v139/radford21a.html. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. URL http://jmlr.org/papers/v21/20-074.html. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In *Proc. of EMNLP*, pp. 3982–3992, Hong Kong, China, 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1410. URL https://aclanthology.org/D19-1410. Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. In *Proc. of ICLR*. OpenReview.net, 2021. URL https://openreview.net/forum? id=CR1XOQ0UTh-. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open largescale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. Anshul Shah, Suvrit Sra, Rama Chellappa, and Anoop Cherian. Max-margin contrastive learning. In ThirtySixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pp. 8220–8230. AAAI Press, 2022. URL https://ojs.aaai.org/index.php/AAAI/article/view/20796. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proc. of ACL*, pp. 2556–2565, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1238. URL https://aclanthology.org/P18-1238. Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 804–814, 2022.

Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, and SueYeon Chung. On the geometry of generalization and memorization in deep neural networks. In *Proc. of ICLR*. OpenReview.net, 2021. URL https://openreview.net/forum?id=V8jrrnwGbuc.

Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. *Communications of the ACM*, 2016.

Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.

Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *Proc. of ICML*, volume 119 of *Proceedings of Machine Learning Research*, pp. 9929–9939. PMLR, 2020. URL http://proceedings.mlr.press/v119/wang20k.html.

Bichen Wu, Ruizhe Cheng, Peizhao Zhang, Tianren Gao, Joseph E. Gonzalez, and Peter Vajda. Data efficient language-supervised zero-shot recognition with optimal transport distillation. In *Proc. of ICLR*. OpenReview.net, 2022. URL https://openreview.net/forum?id=G89-1yZLFHk.

Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, pp. 3485–3492. IEEE Computer Society, 2010. doi: 10.1109/CVPR.2010.5539970. URL https://doi.org/10.1109/CVPR.2010.5539970.

Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. FILIP: fine-grained interactive language-image pre-training. In *Proc. of ICLR*. OpenReview.net, 2022. URL https://openreview.net/forum?id=cpDhcsEDC2.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021.

## A Appendix

## A.1 Algorithm

We summarize Hard Pair Mining (HPM), fast Hard Pair Mining (fastHPM), and the training pipeline of Helip in Algorithms 1, 2, and 3 respectively.

Algorithm 1: Hard Pair Mining (HPM)
Input: hard pairs number per sample k; pretrained unimodal text model f_text; pretrained unimodal vision model f_image; dataset D = {(x_1^I, x_1^T), (x_2^I, x_2^T), ..., (x_N^I, x_N^T)}; thresholds for the visual and textual modality τ_I and τ_T
Output: hard samples H = [H_1, H_2, ..., H_N]
for i ∈ [1, N] do
    s ← [0, 0, ..., 0]^T ∈ R^N
    I_i ← f_image(x_i^I); T_i ← f_text(x_i^T)
    for j ∈ [1, N] do
        I_j ← f_image(x_j^I); T_j ← f_text(x_j^T)
        S_j^I ← I_i · I_j / (∥I_i∥_2 ∥I_j∥_2) if I_i · I_j / (∥I_i∥_2 ∥I_j∥_2) > τ_I else 0
        S_j^T ← T_i · T_j / (∥T_i∥_2 ∥T_j∥_2) if T_i · T_j / (∥T_i∥_2 ∥T_j∥_2) > τ_T else 0
        s_j ← S_j^I · S_j^T
    end
    H_i ← Argmax(s, k)
    if ∃ j ∈ H_i, s_j = 0 then H_i ← ∅  (indicates a noise sample)
end

Note that in the inner loop of Algorithm 1 as written, the image and caption representations would be computed repeatedly. To accelerate hard pair mining and avoid this unnecessary computational overhead, we compute and cache the encoded image and text features once. In addition, the outer loop is parallelized in the implementation.
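Building on the caching note above, the following is a minimal NumPy sketch of the scoring step over precomputed features; the function and variable names are ours rather than those of the released implementation, and the O(N²) similarity matrices are only practical for modest N (fastHPM below avoids them by subsampling a candidate pool).

```python
import numpy as np

def mine_hard_pairs(img_feats, txt_feats, tau_img, tau_txt, k):
    """Score all candidate pairs for every sample and keep the top-k.

    img_feats: (N, d_i) cached image features; txt_feats: (N, d_t) cached
    text features. Returns a list H where H[i] holds the indices of the k
    hard pairs of sample i, or an empty array when i is flagged as noise.
    """
    # Normalize once so cosine similarity becomes a plain dot product.
    I = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    T = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    S_img = I @ I.T                      # image-image cosine similarities
    S_txt = T @ T.T                      # text-text cosine similarities
    S_img[S_img <= tau_img] = 0.0        # per-modality thresholding
    S_txt[S_txt <= tau_txt] = 0.0
    s = S_img * S_txt                    # both modalities must agree
    np.fill_diagonal(s, -np.inf)         # a sample is not its own hard pair
    H = []
    for i in range(s.shape[0]):
        top = np.argpartition(-s[i], k)[:k]          # k largest scores
        # If any selected score is zero, flag sample i as noise: H_i = empty.
        H.append(top if np.all(s[i, top] > 0) else np.empty(0, dtype=int))
    return H
```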
## A.2 More Visualization Results

We offer further visualization results pertaining to the hard samples mined by various methods. As depicted in Figure 6, the hard samples sourced by HPM closely resemble the target sample (seen at the top left). Interestingly, for samples with fewer objects, image- and text-based mining can identify a reasonably challenging counterpart, as in the case of "the harbor in a small village". For intricate scenes, however, only HPM is capable of yielding sufficiently challenging samples, such as the scenario "people touring and enjoying the public park during summer". The dataset acquired from the web encompasses a myriad of such intricate cases, and we posit that this is why training with hard samples unearthed by HPM yields more proficient outcomes.

Algorithm 2: fast Hard Pair Mining (fastHPM)
Input: hard pairs number per sample k; pretrained unimodal text model f_text; pretrained unimodal vision model f_image; dataset D = {(x_1^I, x_1^T), (x_2^I, x_2^T), ..., (x_N^I, x_N^T)}; thresholds for the visual and textual modality τ_I and τ_T; candidate pool size C
Output: hard samples H = [H_1, H_2, ..., H_N]
for i ∈ [1, N] do
    Uniformly sample C pairs from dataset D: D_i = {(x_1^I, x_1^T), ..., (x_C^I, x_C^T)}
    s ← [0, 0, ..., 0]^T ∈ R^C
    I_i ← f_image(x_i^I); T_i ← f_text(x_i^T)
    for j ∈ [1, C] do
        I_j ← f_image(x_j^I); T_j ← f_text(x_j^T)
        S_j^I ← I_i · I_j / (∥I_i∥_2 ∥I_j∥_2) if I_i · I_j / (∥I_i∥_2 ∥I_j∥_2) > τ_I else 0
        S_j^T ← T_i · T_j / (∥T_i∥_2 ∥T_j∥_2) if T_i · T_j / (∥T_i∥_2 ∥T_j∥_2) > τ_T else 0
        s_j ← S_j^I · S_j^T
    end
    H_i ← Argmax(s, k)
    if ∃ j ∈ H_i, s_j = 0 then H_i ← ∅  (indicates a noise sample)
end

## A.3 Performance Of Helip With Scaled Training Data

In order to study the impact of expanding training dataset sizes on the effectiveness of Helip, we trained CLIP on the YFCC15M dataset collected by Radford et al. (2021). This training resulted in a zero-shot classification accuracy of 25.46 on ImageNet; applying Helip raised the performance to 26.45 after one epoch. Summarizing the zero-shot performance on ImageNet of both standard CLIP and its enhanced counterpart, CLIP-Helip, across varying data scales, we present those results in Figure 7. The results clearly demonstrate that Helip consistently boosts CLIP's performance. Notably, the most significant improvement was observed with the Open29M dataset, where Helip delivered a performance increase of 3.06%. Extending this observation, we anticipate that Helip will offer immediate performance improvements for well-trained CLIP models on larger datasets, such as the private 400M dataset referenced in Radford et al. (2021).

## A.4 Discussion About Baselines

In our experiments, we utilized CLIP, SLIP, and DECLIP as baseline models on the CC3M, CC12M, YFCC15M, and Open29M datasets.
To ensure our results are both compelling and reproducible, we primarily employed publicly available checkpoints as our baselines and rigorously tested the effectiveness of Helip against these checkpoints. On CC3M, the checkpoint of the SLIP model is publicly released†. We enhanced its performance by applying Helip, which notably improved the zero-shot performance on ImageNet from 23.00 to 26.05. However, we noticed that a CLIP with ResNet50 trained on CC3M is missing. To address this, we undertook the pretraining ourselves. Our results were encouraging: the performance of our pretrained CLIP with ResNet50 achieved a score of 19.86, surpassing the 17.10 achieved by SLIP's CLIP with ViT-B/32 as reported in Mu et al. (2022). This outcome suggests the robustness of our implementation. Besides, consistent with several prior studies, we found that on smaller pretraining datasets, CLIP with ResNet50 outperforms CLIP with ViT-B.

†https://github.com/facebookresearch/SLIP#results-and-pre-trained-models

Algorithm 3: Hard samplE for boosting contrastive Language-Image Pretrained models (Helip)
Input: dataset D = {(x_1^I, x_1^T), (x_2^I, x_2^T), ..., (x_N^I, x_N^T)}; hard pair mining algorithm HPM() (or fastHPM()); pretrained unimodal text model f_text; pretrained unimodal vision model f_image; pretrained contrastive language-image model {φ_image, φ_text}; hyperparameters: hard pairs number k, hard negative margin strength γ, sampled hard negatives number p, learning rate η, batch size b, training iteration number E, visual and textual modality thresholds τ_I and τ_T
Output: CLIP model {φ_image, φ_text}
H ← HPM(D, f_text, f_image, k, τ_I, τ_T)
for iter ∈ [1, E] do
    B ← {z_1, ..., z_b} drawn i.i.d. from Uniform(D)
    for z_i ∈ B do
        H_i^p ← p hard pairs drawn i.i.d. from Uniform(H_i)
        B ← B ∪ H_i^p
    end
    Compute the loss ℓ_finetune of Equation (6) with the samples in B
    φ_image ← φ_image − η · ∂ℓ_finetune/∂φ_image
    φ_text ← φ_text − η · ∂ℓ_finetune/∂φ_text
end

On the CC12M dataset, a similar situation arose: while the SLIP checkpoint was available, the CLIP model was absent, leading us to undertake its pretraining. On the YFCC15M (v1) dataset collected by Radford et al. (2021), we trained the CLIP model. This resulted in a 25.46 score in ImageNet zero-shot classification, closely aligning with the 26.10 outcome reported by Cui et al. (2022). Additionally, for the YFCC15M (v2) dataset referenced in Li et al. (2022b), both SLIP and DECLIP pretrained parameters were made available by Li et al. (2022b), which we utilized directly as our baselines. On the larger Open29M dataset, there was a lack of open-source pretrained checkpoints, prompting us to conduct the pretraining ourselves. Notably, the performance of our reimplementation (42.32) closely aligns with the results reported by Li et al. (2022b), indicating the effectiveness of our approach.

Figure 6: Hard samples selected by different methods.

Figure 7: Zero-shot performance on ImageNet for models trained over different sizes of dataset.
Review 1: Summary: This article explores how to enhance the performance of pre-trained vision-language models by effectively utilizing hard pairs. Specifically, the article introduces a method for extracting hard text-image pairs through hard pair mining. Based on these hard pairs, the article proposes a hard negative margin loss to aid in the training of vision-language models. The effectiveness of this method is validated on multiple pre-trained models and various downstream tasks. Strengths and Weaknesses: Strengths: 1. The topic addressed in this paper is significant. With the widespread application of pre-trained vision-language models in various downstream tasks, designing better pre-trained vision-language models is an important research area. 2. To the best of my knowledge, the use of hard pairs to enhance vision-language models has not been extensively studied, making this paper potentially the first of its kind. 3. In order to leverage hard pairs to improve vision-language models, the paper introduces a method called hard pair mining, which utilizes the similarity between the vision and language domains to identify these hard pairs. Additionally, a hard negative margin loss is proposed to complement the traditional contrastive loss. 4. The paper extensively validates the effectiveness of the proposed approach through experiments conducted on different pre-training datasets and various downstream tasks. Weaknesses: 1. One major concern is related to the experimental validation of the paper. The pre-training datasets used in this study are relatively small, such as 'CC3M' and 'CC12M,' resulting in lower baseline results (e.g., approximately 30% accuracy on zero-shot ImageNet). In other words, the exploration in this paper is based on a relatively low baseline. However, current widely used vision-language models are based on larger datasets, such as LAION-2B, and achieve higher results (e.g., approximately 70% accuracy on zero-shot ImageNet). Therefore, it would be more valuable if the authors could validate the effectiveness of their method based on larger pre-training datasets and more practical baselines. 2. The writing could be improved. Some sentences are difficult to understand clearly, such as “Intuitively, pairs that most effectively support a model to make the decision that the target pair is matching are deemed its hard pairs“on Page Two. Requested Changes: See weaknesses. Broader Impact Concerns: NA. ================================================== Review 2: Summary: The paper shows that contrastive self-supervised image-text pre-training (CLIP, SLIP) can be improved by using external models during hard negative mining. Authors propose to use this as an additional training step applied to already pre-trained models, but using the original data. They use it to select hard pairs from the entire corpus (rather than just from within a batch, as several other works are doing) and introduce Hard Negative Margin Loss (HNML) to modify the loss function within a batch to improve representation learning. Results show improvements on several zero-shot classification tasks, such as ImageNet, CIFAR as well as retrieval. Strengths and Weaknesses: Strengths The proposed framework is largely intuitive and elegant. It solves an interesting and timely problem, given that CLIP and similar methods are widely used for image, video and multimodal analysis. The paper is generally well written and accessible. 
Weaknesses The paper's language is often imprecise (see below) and the experimental results leave open several key questions. The most significant is that the CLIP training data is not available, so that the basic idea (better using the available training data) cannot be fully utilized, IIUC correctly. This should be explained better. Requested Changes: I identified several critical issues that I would like to see addressed before publication: - Please clean up some of the language: "plug-and-play" -- I don't know what that means; "pioneering plug-and-play", "revolutionary technique" -- I think it is too early to say that (and I am not sure tbh about the novelty of using external models to mine the corpus); "strategic approach to assimilate", "strategically utilizing the intrinsic data" -- again, what does that mean? - Please clearly explain the significance and role of your baselines. The CLIP zero-shot performance on ImageNet is listed as 19.04% in Table 1, but the cited paper does not show that number. IIUC, this is a re-implementation of CLIP using different training data? Please show this in the Table. - What happens if you scale up the training data, e.g. to all of the LAION data, instead of <10M sets? Do the benefits of hard mining diminish or disappear? It would be important to understand if and how the benefits of the proposed method translate to larger training datasets - Please present the results of your work not in "chronological" but "logical" order. I.e. in Section 4.1 you explain that you use DINO ViTs8 and SentenceT as external image and text encoders (and explain why you pick those). In Table 8, you ablate and demonstrate that there does not seem to be much of a difference between which encoder is being used. It would seem more logical to start with CLIP (which matches the pre-training criterion), before ablating other methods? Broader Impact Concerns: n/a ================================================== Review 3: Summary: This paper proposes a novel approach that finetunes the vision-language model on the pre-trained datasets by exploiting the hard pairs in the pre-trained datasets to boost the zero-shot performance of the model on the downstream benchmarks. Strengths and Weaknesses: Strengths: the overall improvement on various benchmarks shows that the proposed method, HELIP, is quite helpful. Weakness: 1. However, the experiment comparison is not very convincing. The paper directly compares the pre-trained models and fine-tuned models. Although the training data are the same, it is still hard to identify if it is another round of training that achieves the improvements. 2. If the approach is so effective, why can't it be directly used in the pre-training? In other words, the motivation for applying the approach to the fine-tuning stage is unclear. Others: The proposed approaches contain hard sample mining and margin loss, which are actually not very novel in the field of visual perception. However, the simplicity of the proposed approach seems to be an advantage since it could be easier to be leveraged by other works. Overall, my review is positive. Requested Changes: 1. The paper might need more fair comparisons between the experiments with and w/o the proposed approach 2. The paper needs clearer clarification of the motivation for doing another round of fine-tuning on the pre-trained dataset. 
Broader Impact Concerns: none ================================================== Metareview: Recommendation: Reject Comment: Although the paper shows significant improvements in some settings across different experiments, a common concern of the reviewers is that there is not sufficient evidence to prove that the performance improvement comes from the proposed loss. The rebuttal addresses some of the reviewers' concerns, but others remain unresolved. For example, Reviewer sCbG pointed out that the proposed method relies on full access to the pretraining data; what if the pretraining data is not available? Furthermore, the current results are based on relatively small-scale data, so the baseline is not a well-pretrained model. One suggestion would be to fine-tune the original CLIP model with the proposed method and view it as a "domain adaptation" method, i.e., taking an 80% checkpoint and seeing if it improves with the proposed hard mining, even when using only a small amount of data, or more data. In conclusion, the paper is interesting and generally well written; it simply lacks the evidence to prove the usefulness of the proposed method. I would suggest that the authors revise the paper according to the reviewers' comments and resubmit. I am happy to supervise it in the next round. ==================================================
# Adaptive Multiple Optimal Learning Factors For Neural Network Training

Anonymous authors Paper under double-blind review

## Abstract

This paper presents the Adapt-MOLF algorithm, which merges the strengths of second-order algorithms while addressing their limitations. The Adapt-MOLF algorithm dynamically adjusts the number of weight groups per hidden unit to maximize the error change per multiplication, optimizing computational efficiency. Leveraging curvature-based grouping and Gauss-Newton updates, it efficiently interpolates the required Hessian and negative gradients. The two-stage algorithm alternately determines the output weights and employs multiple learning factors to train the input weights of a multilayer perceptron. This adaptive adjustment of learning factors maximizes the error decrease per multiplication, showing superior performance over OWO-MOLF and Levenberg-Marquardt (LM) across diverse datasets. Extensive experiments demonstrate competitive or superior results compared to state-of-the-art algorithms, particularly excelling in reducing testing errors. This research represents a promising advancement in second-order optimization methods for neural network training, offering scalability, efficiency, and strong performance across heterogeneous datasets.

## 1 Introduction

A neural network is a computational cornerstone for function approximation and pattern recognition tasks. Its significance stems from inherent capabilities enabling universal approximation (1) and the approximation of Bayes discriminants (1; 2). Within the realm of neural network optimization, techniques like Adam (3), AdaDelta (4), AdaGrad (5), RmsProp (6), and Nesterov momentum (7) predominantly align with first-order training algorithms. However, a conspicuous gap persists in the literature concerning advanced optimization strategies for second-order training algorithms. The crux of the challenge with second-order algorithms lies in their scalability issues, predominantly rooted in the resource-intensive computation of Hessian matrices. As network size expands, the computational demands of Hessian operations grow rapidly (quadratically in the number of weights for storage and cubically for inversion), presenting substantial scalability hurdles.

Neural networks have seen a rebirth following their recent successes. Their capacity for universal approximation (1) challenges the No-Free-Lunch theorem (8), implying that multilayer perceptrons (MLPs) can effectively approximate diverse discriminants. While Adafactor (9) combines Adam optimization and memory efficiency to reduce memory overhead, its adaptability across various neural network architectures and datasets remains underexplored. AdaSmooth (10) introduces adaptable window sizes for past gradient accumulation but lacks investigation into its efficacy on dynamic or noisy datasets. Similarly, LAMB (11) showcases layer-wise weight updating, enabling faster training of models like BERT (12), yet its scalability across diverse architectures and datasets demands deeper examination. Techniques like SLAMB (13), integrating LAMB with sparse GPU communication, show promise in accelerating large-batch training but require further exploration regarding practical implementation challenges and hardware requirements. Additionally, the Symbolic Discovery of Optimization Algorithms (LION) (14) offers a comprehensive exploration of optimization algorithms but lacks comparative analyses against existing strategies, limiting insights into their relative effectiveness.
Shifting focus to second-order optimization techniques, Shampoo (15) introduces efficient gradient preconditioning using matrices instead of full Hessian computations. However, Shampoo's scalability concerns and its performance in scenarios with highly non-convex functions or noisy gradients remain areas that necessitate further investigation to ascertain its generalizability. Moreover, Scalable Second Order Optimization (16) efficiently implements the Shampoo algorithm; yet a comprehensive examination of its computational complexity and comparison with other scalable second-order techniques is lacking, hindering a complete understanding of its advantages over alternative methodologies. AdaHessian (17) innovatively utilizes only the diagonal elements of the Hessian matrix, reducing memory requirements significantly. However, the trade-offs between computational efficiency and optimization accuracy in complex, highly non-linear optimization landscapes warrant deeper analysis to comprehend its viability across diverse neural network architectures and training scenarios.

The algorithm proposed in this paper addresses this challenge by optimizing neural network training operations more efficiently. The methodology involves segmenting the input weights of a multilayer perceptron (MLP) into clusters, followed by the application of Newton's method to compute a vector of learning factors, one for each cluster.

Figure 1: Fully Connected Multilayer Perceptron

## 2 Background

We start by describing the multilayer perceptron (MLP), a nonlinear signal processor with good approximation and classification properties. The MLP has basis functions that adapt during the training process using example inputs and desired outputs.

## 2.1 Structure And Notation

The architecture of a fully connected feedforward multilayer perceptron (MLP) is shown in Figure 1. The input weights w(k, n) connect the n-th input to the k-th hidden unit. Output weights w_oh(m, k) connect the k-th hidden unit's nonlinear activation O_p(k) to the m-th output y_p(m), which has a linear activation. The bypass weights w_oi(m, n) connect the n-th input to the m-th output. The training data, described by the set of independent, identically distributed input-output pairs {x_p, t_p}, consist of N-dimensional input vectors x_p and M-dimensional desired output vectors t_p. The pattern number p varies from 1 to N_v, where N_v denotes the number of training vectors in the dataset. Let N_h denote the number of hidden units. An input bias is added by augmenting the input units with an extra element x_p(N+1), where x_p(N+1) = 1. For each training pattern p, the hidden-layer net function vector n_p can be written as n_p = W · x_p. The k-th element of the hidden unit activation vector O_p is calculated as O_p(k) = f(n_p(k)), where f(·) denotes the sigmoid activation function. The network output vector y_p can be written as

$$\mathbf{y}_{p}=\mathbf{W}_{oi}\cdot\mathbf{x}_{p}+\mathbf{W}_{oh}\cdot\mathbf{O}_{p}\tag{1}$$

The expression for the actual outputs given in equation (1) can be rewritten as y_p = W_o · X_a, where X_a = [x_p^T : O_p^T]^T is the augmented input column vector with N_u basis functions, N_u = 1 + N + N_h. Similarly, W_o is the M by N_u dimensional augmented weight matrix defined as W_o = [W_oi : W_oh]. The training process for an MLP involves minimizing the mean squared error of equation (2) between the desired and actual network outputs.
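To make the notation concrete, the following is a minimal NumPy sketch of the forward computation of equation (1), assuming the sigmoid for f(·); the shapes follow the notation above and the function name is ours.

```python
import numpy as np

def mlp_forward(W, W_oi, W_oh, x):
    """Forward pass of the bypass-connected MLP of equation (1).

    W: (N_h, N+1) input weights, W_oi: (M, N+1) bypass weights,
    W_oh: (M, N_h) output weights, x: (N+1,) augmented input, x[-1] = 1.
    """
    n = W @ x                        # hidden-layer net function n_p
    O = 1.0 / (1.0 + np.exp(-n))     # sigmoid activations O_p
    return W_oi @ x + W_oh @ O       # y_p = W_oi x_p + W_oh O_p

# Tiny example: N = 4 inputs, N_h = 3 hidden units, M = 2 outputs.
rng = np.random.default_rng(0)
x = np.append(rng.normal(size=4), 1.0)           # augmented input vector
y = mlp_forward(rng.normal(size=(3, 5)), rng.normal(size=(2, 5)),
                rng.normal(size=(2, 3)), x)
```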
This optimization requires fine-tuning the network's weights and biases to ensure a close match between computed and desired outputs. To train an MLP effectively, we reformulate the learning problem as an optimization task within a structural risk minimization framework (18; 19). This framework aims to minimize the error function E, represented by equation (2), acting as a surrogate for non-smooth classification errors. From a Bayesian perspective (18), the approach involves maximizing the likelihood function, or equivalently minimizing the mean squared error (MSE) in a least-squares sense. The MSE between the desired and actual outputs is defined as

$$E=\frac{1}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}\left[t_{p}(i)-y_{p}(i)\right]^{2}\tag{2}$$

The nonlinearity within y_p introduces non-convexity into the error function E, potentially leading to local minima in practice. We assume t_p is Gaussian-distributed given the input x_p. Our aim is to determine the optimal weights within the MLP structure. Employing the empirical risk minimization framework (20), we design learning algorithms that benefit from transforming MLP training into an optimization problem; this conversion allows us to leverage various optimization algorithms to enhance MLP learning.

## 2.2 MLP Initialization

Following (20), the input weight matrix W is initialized from a zero-mean Gaussian random number generator. The training of the input weights depends strongly on the gradients of the hidden unit activation functions. Training of an input weight will cease if the hidden unit it feeds into has an activation function derivative of zero for all patterns. In order to remove the dominance of large-variance inputs, we divide the input weights by the input's standard deviation, thereby adjusting the mean and standard deviation of all the hidden unit net functions. This is called net control, as in (21). At this point we have determined the initial input weights and are ready to initialize the output weights. To initialize the output weight matrix W_o, we use output weight optimization (OWO) (20). OWO minimizes the error function of equation (2) with respect to W_o by solving the M sets of N_u equations in N_u unknowns given by

$$\mathbf{C}=\mathbf{R}\cdot\mathbf{W}_{o}^{T}\tag{3}$$

where the cross-correlation matrix is $\mathbf{C}=\frac{1}{N_v}\sum_{p=1}^{N_v}\mathbf{X}_a\cdot\mathbf{t}_p^T$ and the auto-correlation matrix is $\mathbf{R}=\frac{1}{N_v}\sum_{p=1}^{N_v}\mathbf{X}_a\cdot\mathbf{X}_a^T$. In terms of optimization theory, solving equation (3) is merely Newton's algorithm for the output weights (22). After initialization of W, W_oi, and W_oh, we begin a two-step procedure in which we modify W and then perform OWO to modify W_o. The MLP network is now initialized and ready to be trained with first- or second-order algorithms. Training an MLP can be seen as an unconstrained optimization problem that usually involves first-order gradient methods such as backpropagation (BP) and conjugate gradient (CG), or second-order methods such as Levenberg-Marquardt (LM) and Newton's method. Training algorithms can be classified as one-stage, in which all the weights of the network are updated simultaneously, or two-stage, in which input and output weights are trained alternately.
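As a concrete illustration, here is a minimal sketch of the OWO solve of equation (3); we use a NumPy least-squares solve in place of an orthogonal least squares routine, which is an implementation substitution on our part, and the names are ours.

```python
import numpy as np

def owo(Xa, t):
    """Output weight optimization: solve equation (3) for W_o.

    Xa: (N_v, N_u) augmented basis vectors [x_p ; O_p] per pattern,
    t:  (N_v, M) desired outputs. Returns W_o with shape (M, N_u).
    """
    Nv = Xa.shape[0]
    R = Xa.T @ Xa / Nv          # auto-correlation matrix, (N_u, N_u)
    C = Xa.T @ t / Nv           # cross-correlation matrix, (N_u, M)
    # Least squares stands in for OLS, so a singular R is handled safely.
    Wo_T, *_ = np.linalg.lstsq(R, C, rcond=None)
    return Wo_T.T
```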
## 2.3 Optimization Algorithms

First-order optimization algorithms rely solely on the first derivative to iteratively improve convergence. Contrasting these, second-order methods enhance first-order algorithms by incorporating both the first and second derivatives (20). In our study, we compare our work with scaled conjugate gradient, an optimization algorithm positioned between first- and second-order techniques. Alongside it, we briefly review two second-order algorithms: the single-stage Levenberg-Marquardt (LM) algorithm (23) and the two-stage Output Weight Optimization-Multiple Optimal Learning Factors (OWO-MOLF) algorithm.

## 2.3.1 Scaled Conjugate Gradient

In a steepest descent algorithm, the weights are updated in the negative gradient direction. Although the error function decreases most rapidly along the negative gradient, this does not necessarily produce fast convergence. The conjugate gradient algorithm (24) performs a line search in the conjugate direction and converges faster than backpropagation. Although scaled conjugate gradient (SCG) is a general unconstrained optimization technique, its use in efficiently training an MLP is well documented in (25). To train an MLP using the conjugate gradient algorithm, we use a direction vector obtained from the gradient g as p ← −g + B_1 · p. Here p = vec(P, P_oh, P_oi), where P, P_oi and P_oh are the direction vectors, and B_1 is the ratio of the gradient energies from two consecutive iterations. This direction vector, in turn, updates all the weights simultaneously as w ← w + z · p.

The conjugate gradient algorithm possesses several advantages. Firstly, its convergence rate scales with the number of unknowns. Secondly, it outperforms the steepest descent method and can handle non-quadratic error functions. Thirdly, as it avoids matrix inversion involving Hessians, its computational cost remains O(N_w), where N_w is the size of the weight vector. For detailed pseudocode, refer to (20). In neural network training, the learning factor z determines the convergence speed. Typically, a small positive z works but leads to slow convergence, while a large z may increase the error E (20). To counter this, heuristic scaling methods adjust learning factors between iterations to hasten convergence. However, an optimal learning factor (OLF) for OWO-BP (output weight optimization backpropagation) can be derived non-heuristically using a Taylor series for the error E, as discussed in (20).

## 2.3.2 Levenberg-Marquardt Algorithm

The LM algorithm is a compromise between Newton's method, which converges rapidly near local or global minima but may diverge, and gradient descent, which has assured convergence through a proper selection of the step size parameter but converges slowly. The LM algorithm is a sub-optimal method: since H is usually singular in Newton's method, an alternative is to modify the Hessian matrix as in the LM algorithm (26) or to use a two-step method such as layer-by-layer training (27). In LM, we modify the Hessian as H_LM = H + λ · I. Here I is the identity matrix of the same dimensions as H, and λ is a regularizing parameter that forces the sum matrix (H + λ · I) to be positive definite and safely well conditioned throughout the computation. We calculate the second-order direction d, as in Newton's method, by solving H_LM · d = g; upon acquiring d, the model's weights undergo an update. The regularization parameter λ significantly influences the LM algorithm's behavior; for optimal λ selection, (28) propose an excellent *Marquardt recipe*. However, in practice, computing H_LM can be computationally demanding, especially for high-dimensional weight vectors w. Consequently, due to scalability limitations, LM proves more suitable for smaller networks. A detailed exploration of the LM algorithm can be found in (20).
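The sketch below shows one LM step; the accept/reject rule with a multiplicative damping factor is one common reading of the Marquardt recipe of (28), not a transcription of it, and the callables (E_fn, neg_grad_fn, hess_fn) are placeholders for the network-specific routines.

```python
import numpy as np

def lm_step(w, E_fn, neg_grad_fn, hess_fn, lam, factor=10.0):
    """One Levenberg-Marquardt step with a simple accept/reject lambda rule.

    w is the flattened weight vector; lam is the current damping parameter.
    Returns the (possibly unchanged) weights and the updated lambda.
    """
    g = neg_grad_fn(w)                  # negative gradient of E at w
    H = hess_fn(w)                      # (Gauss-Newton) Hessian of E at w
    H_lm = H + lam * np.eye(len(w))     # force positive definiteness
    d = np.linalg.solve(H_lm, g)        # second-order direction, H_lm d = g
    w_new = w + d
    if E_fn(w_new) < E_fn(w):
        return w_new, lam / factor      # success: accept step, relax damping
    return w, lam * factor              # failure: reject step, damp harder
```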
## 2.3.3 OWO-MOLF

An alternative to LM, the OWO-MOLF algorithm adopts a two-stage "layer by layer" approach, leveraging output weight optimization (OWO) and multiple optimal learning factors (MOLF), one per hidden unit (29). Unlike heuristic methods relying on various learning rates or momentum terms (30; 31), OWO-MOLF targets enhanced learning speed and convergence. The algorithm handles input weight updates via the negative gradient matrix G and a vector z of learning factors of size (N_h, 1), while the output weights are trained using OWO. A key idea is that in each epoch the single optimal learning factor z is replaced by a vector z estimated using Newton's method (23). This vector solves H_molf · z = g_molf through orthogonal least squares (OLS), where H_molf and g_molf are, respectively, the Hessian and negative gradient of the error with respect to z. Further implementation details are available in (20).

## 2.4 Challenges And Objectives

This paper addresses the following challenges posed by second-order algorithms. Algorithms such as Levenberg-Marquardt incur significant computational costs due to their reliance on optimizing all weights using second-order information. In contrast, algorithms like OWO-MOLF, Shampoo, and AdaHessian attempt to mitigate these computational demands either by optimizing only a subset of parameters via second-order methods or by employing a lower-rank approximation of the Hessian matrix, which reduces the need to compute and invert the full Hessian for all parameters. However, a common limitation of these algorithms is that, for a given model architecture and dataset, they optimize the same fraction of parameters with second-order methods in every training iteration. This uniform strategy leads to an unnecessary escalation in computational requirements.

The objective of this paper is to formulate a novel second-order training algorithm that dynamically adjusts the number of parameters optimized using second-order information. This adjustment is specific to each training iteration and depends on the dataset and model architecture involved. The goal is to enhance computational efficiency without an increase in optimization time. To address these objectives, we introduce Adapt-MOLF, a second-order algorithm that creates groups of weights for each hidden unit and computes a single learning factor for each group.

## 3 Adapt-MOLF

The OWO-Newton algorithm often has excellent error convergence characteristics, but its performance decreases when there are linear dependencies in the input signals or hidden unit activations. Additionally, when the error function deviates from a quadratic form, the algorithm's performance is compromised. Moreover, OWO-Newton is computationally more expensive than first-order algorithms. In contrast, the OWO-MOLF algorithm, while not matching the per-iteration convergence speed of OWO-Newton, is robust against linearly dependent input signals and operates with lower computational overhead (22). The main objective of our paper is to integrate the strengths of OWO-MOLF and OWO-Newton while addressing their limitations.
Adapt-MOLF dynamically adapts between OWO-MOLF and OWO-Newton, aiming to retain their respective advantages. In OWO-MOLF, there are N_h learning factors, one per hidden unit, whereas in the Adapt-MOLF algorithm the number of computed learning factors ranges from N_h to N_h · (N+1) in each iteration. Adapt-MOLF thus dynamically tailors its use of second-order information, optimizing computational efficiency across various scenarios.

## 3.1 Adaptive Grouping Of The Input Weights

In Adapt-MOLF, a *group* refers to a collection of input weights updated using a shared learning factor. For instance, in the steepest descent method, a single learning factor z updates all network weights within one group (20). In OWO-MOLF, the input weights linked to a hidden unit are updated using a learning factor specific to that unit, forming groups of size N+1, with one group per hidden unit (N_g = 1) (22). In Adapt-MOLF, however, group sizes vary between N+1 and 1, so the number of groups per hidden unit varies from 1 to N+1. Here, the input weights W are grouped based on the curvature L of the error function with respect to these weights. The elements of the matrix L are

$$l(k,n)=\frac{\partial^{2}E}{\partial w(k,n)^{2}}=\frac{2}{N_{v}}\sum_{i=1}^{M}w_{oh}(i,k)^{2}\sum_{p=1}^{N_{v}}f'(n_{p}(k))^{2}\,x_{p}(n)^{2}\tag{4}$$

## 3.2 Adaptive Grouping For Error-Optimized Multiplication

Another adaptive approach we study is to optimize the number of groups per hidden unit to maximize the error change per multiplication. In each iteration, we compute the error change by evaluating the difference between the current and previous iteration errors. This error change is then divided by the number of multiplications required in the current iteration, defining the *error per multiply* (EPM) for that iteration:

$$EPM(i_{t})=\frac{E(i_{t}-1)-E(i_{t})}{M(i_{t})}\tag{5}$$

Here, M(i_t) denotes the number of multiplications in iteration i_t, while EPM(i_t) represents the error per multiply for that iteration. As the error change per multiply fluctuates, the number of groups per hidden unit (N_g) dynamically adjusts: an increase in the error change per multiply increases N_g, and conversely, a decrease reduces N_g. This key aspect of Adapt-MOLF yields substantial error reduction during the initial iterations, operating akin to the OWO-Newton algorithm. As convergence to a local minimum occurs, the algorithm dynamically shifts towards behavior resembling the OWO-MOLF algorithm, reducing computational complexity while fine-tuning convergence.
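A minimal sketch of this update rule is shown below; it mirrors step 11 of Algorithm 1 later in the paper, and the clipping to the valid range [1, N+1] is our addition for safety.

```python
def update_group_count(Ng, epm_curr, epm_prev, N):
    """EPM-driven adjustment of the number of groups per hidden unit.

    N_g grows while the error decrease per multiply is improving and
    shrinks once it degrades, staying within [1, N+1].
    """
    if epm_curr > epm_prev:
        Ng += 1      # second-order detail is paying off: split groups
    elif epm_curr < epm_prev:
        Ng -= 1      # diminishing returns: coarsen toward OWO-MOLF
    return max(1, min(Ng, N + 1))
```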
## 3.3 Adaptive Learning Factor Using Compact Hessians

Consider an MLP in which the input weights are trained using negative gradients. Rather than a single learning factor, envision a vector of learning factors z, with elements z_{k,c} employed to update all weights w(k, n) belonging to group c of hidden unit k. The error function to be minimized is E from equation (2). The predicted output y_p(i) is given by

$$y_{p}(i)=\sum_{n=1}^{N+1}x_{p}(n)\,w_{oi}(i,n)+\sum_{k=1}^{N_{h}}w_{oh}(i,k)\,f\!\left(\sum_{c=1}^{N_{g}}\sum_{a\in c}x_{p}(i_{k}(a))\left[w(k,i_{k}(a))+z_{k,c}\,g(k,i_{k}(a))\right]\right)\tag{6}$$

where N_g is the number of groups per hidden unit and c is the group index. I_k is the vector of input indices of weights connected to hidden unit k, sorted in descending order of the curvature computed with equation (4): I_k = [n_1, n_2, ..., n_{N+1}], where n_1, n_2, ..., n_{N+1} are input indices such that l(k, n_1) ≥ l(k, n_2) ≥ ... ≥ l(k, n_{N+1}). The learning factor z_{k,c} updates all input weights belonging to group c of hidden unit k. The total number of learning factors to be computed is L = N_h · N_g, where L varies from N_h to N_h · (N+1) as the number of groups per hidden unit N_g varies from 1 to N+1. The gradient of the loss E with respect to z_{k,c} is

$$g_{Amolf}(k,c)=\frac{\partial E}{\partial z_{k,c}}=-\frac{2}{N_{v}}\sum_{p=1}^{N_{v}}\sum_{i=1}^{M}\left(t_{p}(i)-y_{p}(i)\right)\frac{\partial y_{p}(i)}{\partial z_{k,c}}\tag{7}$$

$$\frac{\partial y_{p}(i)}{\partial z_{k,c}}=w_{oh}(i,k)\,f'(n_{p}(k))\,\frac{\partial n_{p}(k,c)}{\partial z_{k,c}}\tag{8}$$

$$n_{p}(k,c)=\sum_{a\in c}x_{p}(i_{k}(a))\left[w(k,i_{k}(a))+z_{k,c}\,g(k,i_{k}(a))\right]\quad\text{and}\quad n_{p}(k)=\sum_{c=1}^{N_{g}}n_{p}(k,c)\tag{9}$$

$$\frac{\partial n_{p}(k,c)}{\partial z_{k,c}}=\sum_{a\in c}x_{p}(i_{k}(a))\,g(k,i_{k}(a))\tag{10}$$

Using Gauss-Newton updates, the elements of the 4-dimensional Hessian H_Amolf are computed as

$$h_{Amolf}(k,c_{1},j,c_{2})=\frac{2}{N_{v}}\sum_{i=1}^{M}w_{oh}(i,k)\,w_{oh}(i,j)\sum_{p=1}^{N_{v}}f'(n_{p}(k))\,f'(n_{p}(j))\,\frac{\partial n_{p}(k,c_{1})}{\partial z_{k,c_{1}}}\,\frac{\partial n_{p}(j,c_{2})}{\partial z_{j,c_{2}}}\tag{11}$$

The 4-dimensional Hessian H_Amolf is mapped to a 2-dimensional matrix as

$$h_{Amolf}\big((k-1)N_{g}+c_{1},\,(j-1)N_{g}+c_{2}\big)=h_{Amolf}(k,c_{1},j,c_{2})\tag{12}$$

and the 2-dimensional gradient matrix g_Amolf is mapped to a column vector as

$$g_{Amolf}\big((k-1)N_{g}+c\big)=g_{Amolf}(k,c)\tag{13}$$

The Gauss-Newton update guarantees that H_Amolf is non-negative definite. Given the negative gradient vector

$$\mathbf{g}_{Amolf}=\left[-\frac{\partial E}{\partial z_{1,c_{1}}},\,-\frac{\partial E}{\partial z_{1,c_{2}}},\,\ldots,\,-\frac{\partial E}{\partial z_{N_{h},c_{N_{g}}}}\right]^{T}\tag{14}$$

we minimize E with respect to the vector z using Newton's method, so the learning factors z can be computed as

$$\mathbf{z}=\mathbf{H}_{Amolf}^{-1}\cdot\mathbf{g}_{Amolf}\tag{15}$$

The input weight matrix W is then updated as

$$w(k,i_{k}(a))=w(k,i_{k}(a))+z_{k,c}\,g(k,i_{k}(a))\quad\text{where}\quad a\in c\tag{16}$$

Note that H_Amolf is quite compact, which streamlines computations compared to conventional Hessian computations.
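A minimal sketch of this Newton step follows; the equal-size splitting of the sorted indices matches step (c) of Algorithm 1 later in the paper, while the function name and the least-squares solve in place of an explicit inverse are our choices.

```python
import numpy as np

def adapt_molf_step(W, G, H_amolf, g_amolf, I, Ng):
    """Newton step on the learning factors of equation (15) followed by
    the grouped weight update of equation (16).

    W, G: (N_h, N+1) input weights and negative gradients; I: (N_h, N+1)
    per-unit input indices sorted by descending curvature; H_amolf: (L, L)
    and g_amolf: (L,) with L = N_h * N_g.
    """
    Nh, Np1 = W.shape
    # Least squares tolerates a singular or ill-conditioned H_amolf.
    z, *_ = np.linalg.lstsq(H_amolf, g_amolf, rcond=None)
    size = -(-Np1 // Ng)                 # ceiling division: group size
    W = W.copy()
    for k in range(Nh):
        for c in range(Ng):
            cols = I[k, c * size:(c + 1) * size]
            W[k, cols] += z[k * Ng + c] * G[k, cols]   # equation (16)
    return W
```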
## 3.4 Adapt-MOLF Initialization

The initial stage of Adapt-MOLF selects the number of groups per hidden unit by exhaustively experimenting across the range of possibilities, from 1 to N+1; the configuration yielding the lowest error is chosen as the starting point. This initialization technique significantly enhances algorithmic performance, notably outperforming random initialization. In our experiments, periodic re-application, once every 50 iterations during training, further refines the algorithm's performance. Evaluating errors for all potential group counts, however, incurs considerable computational expense. To avoid this, Adapt-MOLF interpolates H_Amolf and g_Amolf from the Gauss-Newton Hessian H and the negative gradients g of the input weights. Expanding equations (7) and (11), we get

$$g_{Amolf}(k,c)=\sum_{a\in c}g(k,i_{k}(a))^{2}\tag{17}$$

$$h_{Amolf}(k,c_{1},j,c_{2})=\sum_{a\in c_{1}}\sum_{b\in c_{2}}h(k,i_{k}(a),j,i_{j}(b))\,g(k,i_{k}(a))\,g(j,i_{j}(b))\tag{18}$$

From equations (17) and (18), we can obtain the Hessian and negative gradients of Adapt-MOLF for any number of groups from Newton's Hessian and the negative gradients of the input weights. This avoids recalculating the Hessian and negative gradients for every possible number of groups when determining the initial point of the algorithm.

## 3.5 Mathematical Treatment

**Lemma 1**: Assume E(w) is a quadratic function of the input weight column vector w, which is divided into k partitions w_k such that w = [w_1^T, w_2^T, ..., w_k^T]^T, and let g_k = −∂E/∂w_k. If a training algorithm minimizes E with respect to the k-dimensional vector z, producing an error E_k = E(w_1 + z_1 g_1, w_2 + z_2 g_2, ..., w_k + z_k g_k), and k can only increase by splitting one of the existing partitions, then E_{k+1} ≤ E_k.

**Proof**: The error E(w) after updating the input weights can be modeled as

$$E(\mathbf{w}+\mathbf{e})=E_{0}-\mathbf{e}^{T}\mathbf{g}+\frac{1}{2}\mathbf{e}^{T}\mathbf{H}\mathbf{e}\tag{19}$$

where E_0 is the error before updating the input weights, g is g_k for k = 1, H is the Hessian, and e is the input weight change vector. If e is found optimally using Newton's method, then e = H^{−1} · g. The input weight change vector for k groups is

$$\mathbf{e}_{k}=[z_{1}\mathbf{g}_{1}^{T},\,z_{2}\mathbf{g}_{2}^{T},\,\ldots,\,z_{k}\mathbf{g}_{k}^{T}]^{T}\tag{20}$$

Given z = argmin_z E(w + e_k), increase k by one by splitting the last partition, so that

$$\mathbf{e}_{k+1}=[z_{1}\mathbf{g}_{1}^{T},\,z_{2}\mathbf{g}_{2}^{T},\,\ldots,\,z_{ka}\mathbf{g}_{ka}^{T},\,z_{kb}\mathbf{g}_{kb}^{T}]^{T}\tag{21}$$

If z_{ka} = z_{kb} = z_k, then e_{k+1} = e_k and E_{k+1} = E_k. However, since all k+1 elements of z can change, we get E_{k+1} ≤ E_k. Lemma 1 presents a clear justification for increasing the number of second-order groups or unknowns.

**Lemma 2**: If E(w) is quadratic in each iteration, and if E − E_molf and E − E_Amolf denote the error decreases due to the Newton steps of OWO-MOLF and Adapt-MOLF respectively, then E − E_molf ≤ E − E_Amolf.

**Proof**: The k groups of unknowns for Adapt-MOLF can be formed by splitting the N_h groups of OWO-MOLF. The lemma then follows from Lemma 1.

**Lemma 3**: OWO-Newton is a limiting case of the Adapt-MOLF algorithm, obtained when the k groups of Adapt-MOLF are split until k = N_h · (N+1).

**Proof**: We have

$$\mathbf{e}_{Newton}=\left[z_{1}\mathbf{g}_{1}^{T},\,z_{2}\mathbf{g}_{2}^{T},\,\ldots,\,z_{N_{h}(N+1)}\mathbf{g}_{N_{h}(N+1)}^{T}\right]^{T}\tag{22}$$

In each iteration's Newton step, the resulting errors satisfy E_Newton ≤ E_Amolf, which again follows from Lemma 1.

Lemmas 2 and 3 show that the Adapt-MOLF algorithm interpolates between OWO-MOLF and OWO-Newton. The pseudo-code for the Adapt-MOLF procedure is given in Algorithm 1. Within the training process of the Adapt-MOLF algorithm, the grouping of the weights associated with a hidden unit is determined by the second derivative of the loss with respect to each weight, referred to as "curvature" in this context. Notably, this second derivative corresponds to the diagonal elements of the Hessian matrix, so Adapt-MOLF offers an intriguing means of streamlining model complexity.
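A vectorized sketch of this curvature computation, equation (4), and of the descending sort used for grouping (steps (a) and (b) of Algorithm 1) is shown below; the names are ours.

```python
import numpy as np

def curvature_groups(W_oh, fprime_n, X):
    """Curvature matrix of equation (4) and per-unit sorted index lists.

    W_oh: (M, N_h) output weights, fprime_n: (N_v, N_h) values f'(n_p(k)),
    X: (N_v, N+1) augmented inputs. Returns L of shape (N_h, N+1) and the
    column indices of each row sorted in descending order of curvature.
    """
    Nv = X.shape[0]
    a = (W_oh ** 2).sum(axis=0)          # sum_i w_oh(i, k)^2, shape (N_h,)
    b = (fprime_n ** 2).T @ (X ** 2)     # sum_p f'(n_p(k))^2 x_p(n)^2
    L = (2.0 / Nv) * a[:, None] * b      # element-wise curvature l(k, n)
    I = np.argsort(-L, axis=1)           # descending sort per hidden unit
    return L, I
```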
## 3.6 Computational Burden

The computational burden measures the time complexity of each algorithm: it counts the number of multiplications that a particular algorithm requires per iteration as a function of the numbers of inputs, hidden units, and outputs. We calculate the computational burden for Adapt-MOLF, OWO-MOLF, SCG, and LM. The proposed Adapt-MOLF algorithm involves calculating a Hessian; however, compared to Newton's method or LM, the size of this Hessian is much smaller. Updating the input weights using Newton's method or LM requires a Hessian with N_w rows and columns, whereas the Hessian used in Adapt-MOLF has only L = N_h · N_g rows and columns. The total number of weights in the network is N_w = M · N_u + (N+1) · N_h, with N_u = N + N_h + 1. The number of multiplies required to solve for the output weights using OLS is

$$M_{ols}=N_{u}(N_{u}+1)\left[M+\frac{1}{6}N_{u}(2N_{u}+1)+\frac{3}{2}\right]$$

Therefore, the total numbers of multiplications per training iteration for the LM, SCG, OWO-MOLF and Adapt-MOLF algorithms are given as

$$M_{lm}=\left[MN_{u}+2N_{h}(N+1)+M(N+6N_{h}+4)+MN_{u}\big(N_{u}+3N_{h}(N+1)\big)+4N_{h}^{2}(N+1)^{2}\right]+N_{w}^{3}+N_{w}^{2}\tag{23}$$

$$M_{scg}=\left[MN_{u}+M(N+6N_{h}+4)+MN_{u}\big(N_{u}+3N_{h}(N+1)\big)+4N_{h}^{2}(N+1)^{2}\right]+N_{w}^{3}+N_{w}^{2}\tag{24}$$

$$M_{owo\text{-}molf}=M_{ols}+N_{v}N_{h}\left[2M+N+2+\frac{M(N_{h}+1)}{2}\right]\tag{25}$$

$$M_{adapt\text{-}molf}=M_{owo\text{-}molf}+N_{v}\left[N_{h}(N+4)-M(N+6N_{h}+4)\right]+N_{h}^{3}\tag{26}$$

Note that M_adapt-molf consists of M_owo-molf plus the multiplies required for calculating the adaptive learning factors. The computational cost of the Adapt-MOLF algorithm varies between the computational costs of OWO-MOLF and OWO-Newton.
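As a quick numerical check of these counts, the following transcribes equations (25) and (26) directly; the example numbers use the Twod configuration from Section 4, and the expressions inherit any approximations made in the equations above.

```python
def m_ols(M, Nu):
    """Multiplies for the OWO solve via OLS."""
    return Nu * (Nu + 1) * (M + Nu * (2 * Nu + 1) / 6.0 + 1.5)

def m_owo_molf(M, N, Nh, Nv):
    """Per-iteration multiplies of OWO-MOLF, equation (25)."""
    Nu = N + Nh + 1
    return m_ols(M, Nu) + Nv * Nh * (2 * M + N + 2 + M * (Nh + 1) / 2.0)

def m_adapt_molf(M, N, Nh, Nv):
    """Per-iteration multiplies of Adapt-MOLF, equation (26)."""
    return (m_owo_molf(M, N, Nh, Nv)
            + Nv * (Nh * (N + 4) - M * (N + 6 * Nh + 4)) + Nh ** 3)

# Example with the Twod configuration (N=8, M=7, N_h=27, N_v=1768).
print(m_owo_molf(7, 8, 27, 1768), m_adapt_molf(7, 8, 27, 1768))
```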
## Algorithm 1 Adapt-MOLF Algorithm

1: Read the training data. Initialize W, W_oi, W_oh, N_it; N_g ← N_h, it ← 0
2: **while** it < N_it **do**
3: Compute G and do the **Adapt-MOLF steps**:
4: (a) Compute L, the curvature of the error w.r.t. the input weights, from equation (4); size(L) = (N_h, N+1)
5: (b) I ← argsort(L, axis = 1): input weight indices sorted in descending order of curvature; size(I) = (N_h, N+1)
6: (c) Divide the sorted indices of the weights connected to each hidden unit into N_g groups of equal size.
7: (d) Compute the Adapt-MOLF learning factors z_Amolf from equation (15); size(z_Amolf) = (N_h · N_g, 1)
8: (e) Create the learning factor column vector z by applying the same learning factor to all the weights in a group; size(z) = (N_h · (N+1), 1)
9: Update the input weights: W ← W + diag(z) · G
10: **OWO step**: Solve equation (3) to obtain W_o and compute EPM_it from equation (5)
11: N_g ← N_g + sign(EPM_it − EPM_{it−1}); it ← it + 1
12: **end while**

## 4 Experimental Results

The Adapt-MOLF algorithm is empirically evaluated against OWO-MOLF, SCG and LM using the *Twod* (32), *Single2* (32), *Oh7* (32) and *Mattrn* (32) datasets. The datasets used for the simulations are listed in Table 1.

| Data Set Name | No. of Inputs | No. of Outputs | No. of Patterns |
|-----------------|-----------------|------------------|-------------------|
| Twod | 8 | 7 | 1768 |
| Single2 | 16 | 3 | 10000 |
| Oh7 | 20 | 3 | 15000 |
| Mattrn | 4 | 4 | 2000 |

Table 1: Description of the datasets used in the experiments

The datasets are normalized to zero mean before training. The number of hidden units used in the MLP is determined by network pruning using the method of (33); through this process the complexity of each dataset is analyzed and an appropriate number of hidden units is found. Training is done on the entire dataset 10 times with 10 different initial networks, and the average mean squared error (MSE) from this 10-fold training is shown in the plots referenced below. The average training error and the number of multiplies are calculated at every iteration on each dataset for the different training algorithms. These results are then plotted to provide a graphical representation of the efficiency and quality of the different training algorithms.

For the *Twod* data file, the MLP is trained with 27 hidden units. In Figure 2, the average MSE from 10-fold training is plotted versus the number of iterations for each algorithm; in Figure 3, it is plotted versus the required number of multiplies (shown on a log scale).

Figure 2: Twod.tra data set: average error vs. number of iterations
Figure 3: Twod.tra data set: average error vs. number of multiplies

From these plots, Adapt-MOLF performs far better than the other three algorithms, both in terms of the number of iterations and the number of multiplies.

For the *Single2* dataset, the MLP is trained with 23 hidden units. In Figure 4, the average MSE from 10-fold training is plotted versus the number of iterations for each algorithm, and in Figure 5 versus the required number of multiplies (both on a log scale).

Figure 4: Single2.tra data set: average error vs. number of iterations
Figure 5: Single2.tra data set: average error vs. number of multiplies

For this dataset the performance of Adapt-MOLF is very close to that of OWO-MOLF, which indicates that the number of groups per hidden unit N_g is 1 in most iterations.

For the *Oh7* dataset, the MLP is trained with 23 hidden units. In Figure 6, the average training MSE from 10-fold training is plotted versus the number of iterations for each algorithm, and in Figure 7 versus the required number of multiplies (both on a log scale).

Figure 6: Oh7.tra data set: average error vs. number of iterations
Figure 7: Oh7.tra data set: average error vs. number of multiplies

On this dataset the proposed algorithm performs better than OWO-MOLF in terms of iterations, while in terms of multiplies its performance is very close to that of OWO-MOLF.

For the matrix inversion (*Mattrn*) data file (32), the MLP is trained with 30 hidden units. In Figure 8, the average MSE from 10-fold training is plotted versus the number of iterations for each algorithm, and in Figure 9 versus the required number of multiplies (both on a log scale).

Figure 8: Matrix inversion data set: average error vs. number of iterations
Figure 9: Matrix inversion data set: average error vs. number of multiplies

For this dataset the proposed algorithm dominates the other three algorithms.
## 4.1 K-Fold Validation And Testing

The k-fold validation procedure is used to obtain the average training and testing errors. In k-fold validation, a given dataset is randomly split into k non-overlapping parts of equal size, of which k−2 parts are used for training, one part for validation, and the remaining part for testing. Training is stopped when a satisfactory validation error is reached, and the resulting network is evaluated on the testing data to obtain the test error. This procedure is repeated k times to obtain average training and testing errors. For the simulations, k is chosen as 10.

Table 2 shows Adapt-MOLF's stronger performance compared to the other methods in terms of iteration count and operations to convergence. SCG, and to a lesser extent LM, exhibit comparatively higher errors, suggesting potential limitations in handling these specific datasets. OWO-MOLF shows low training but higher testing errors, indicating potential overfitting; while it maintains stable performance across datasets, it does not perform best in specific scenarios. Adapt-MOLF achieved performance comparable to OWO-MOLF using nearly the same number of operations, and performance comparable to LM with less than 50% of the operations required for LM.

Table 2: 10-fold training (Etrn) and testing (Etst) errors. Best-performing results are in bold

| DataSet | Adapt-MOLF | OWO-MOLF | SCG | LM |
|-----------|-----------------|-----------------|-----------------|-----------------|
| Twod | **0.0888** / **0.1172** | 0.1554 / 0.1731 | 1.0985 / 1.0945 | 0.2038 / 0.2205 |
| Single2 | **0.0042** / **0.0175** | 0.0151 / 0.1689 | 3.5719 / 3.6418 | 0.0083 / 0.0178 |
| Mattrn | **0.0011** / **0.0013** | 0.0027 / 0.0032 | 4.2400 / 4.3359 | 0.0022 / 0.0027 |
| Oh7 | 1.2507 / 1.4738 | 1.3205 / 1.4875 | 4.1500 / 4.1991 | **1.1602** / **1.4373** |

The 10-fold training and testing errors in Table 2 showcase the Adapt-MOLF algorithm's consistent and competitive performance. Across the datasets, Adapt-MOLF is either the top-performing algorithm or closely competes with the other methods, demonstrating its robustness and effectiveness. Notably, on the *Twod* dataset, Adapt-MOLF achieves significantly lower errors in both training and testing compared to OWO-MOLF, SCG, and LM. On the *Single2* dataset, Adapt-MOLF yields the lowest testing error among all algorithms, showcasing remarkable generalization. Moreover, on the *Mattrn* dataset, Adapt-MOLF consistently produces the lowest errors in both the training and testing phases. On the *Oh7* dataset, Adapt-MOLF attains a testing error close to that of LM while clearly improving on OWO-MOLF and SCG. Overall, the results strongly suggest that the Adapt-MOLF algorithm consistently competes favorably with or outperforms the other algorithms, particularly excelling in testing errors, signifying its potential for better generalization across diverse datasets.

## 5 Conclusion And Future Work

The Adaptive Multiple Optimal Learning Factors (Adapt-MOLF) algorithm presented in this research successfully mitigates the scalability challenges inherent in second-order training algorithms. This is achieved through an adaptive mechanism that modulates the computation of learning factors using Newton's method, thereby enhancing the efficiency of error reduction per multiplication operation. Empirical evaluations indicate that the Adapt-MOLF algorithm exhibits superior performance compared to the OWO-MOLF algorithm, both in terms of error reduction per iteration and often in error decrease per multiply.
## 5 Conclusion And Future Work

The Adaptive Multiple Optimal Learning Factors (Adapt-MOLF) algorithm presented in this research successfully mitigates the scalability challenges inherent in second-order training algorithms. This is achieved through an adaptive mechanism that modulates the computation of learning factors using Newton's method, thereby enhancing the efficiency of error reduction per multiplication operation. Empirical evaluations indicate that the Adapt-MOLF algorithm exhibits superior performance compared to the OWO-MOLF algorithm, both in terms of error reduction per iteration and often in error decrease per multiply. Furthermore, the algorithm demonstrates a unique capacity to interpolate between the OWO-MOLF and OWO-Newton methodologies. The scope of this study is intentionally focused, applying the concept of Adaptive Multiple Optimal Learning Factors exclusively to the input weights of a Multilayer Perceptron with a single hidden layer. This concentration allows for a detailed exploration and elucidation of the algorithm's derivation and operational specifics. Looking forward, subsequent research will expand the application of the Adapt-MOLF algorithm to more complex deep neural network architectures. This will facilitate a comprehensive comparison of its performance against established first-order neural network optimization techniques. This forthcoming analysis is anticipated to provide further insights into the efficacy and applicability of the Adapt-MOLF algorithm within the broader context of neural network optimization.

## References

[1] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. *Neural Networks*, 2:359–366, 1989.

[2] D. W. Ruck, S. K. Rogers, M. Kabrisky, M. E. Oxley, and B. W. Suter. The multilayer perceptron as an approximation to a Bayes optimal discriminant function. *IEEE Transactions on Neural Networks*, 1(4), 1990.

[3] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In *International Conference on Learning Representations (ICLR)*, 2015.

[4] Matthew D. Zeiler. Adadelta: An adaptive learning rate method. *arXiv preprint arXiv:1212.5701*, 2012.

[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12:2121–2159, 2011.

[6] Sebastian Ruder. An overview of gradient descent optimization algorithms. 2017.

[7] Aleksandar Botev, Guy Lever, and David Barber. Nesterov's accelerated gradient and momentum as approximations to regularised update descent. 2016.

[8] Richard O Duda, Peter E Hart, and David G Stork. *Pattern Classification*. John Wiley & Sons, 2012.

[9] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In *International Conference on Machine Learning*, pages 4596–4604. PMLR, 2018.

[10] Jun Lu. Adasmooth: An adaptive learning rate method based on effective ratio. In *Sentiment Analysis and Deep Learning: Proceedings of ICSADL 2022*, pages 273–293, Singapore, 2023. Springer Nature Singapore.

[11] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. *arXiv preprint arXiv:1904.00962*, 2019.

[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

[13] Hang Xu, Wenxuan Zhang, Jiawei Fei, Yuzhe Wu, TingWen Xie, Jun Huang, Yuchen Xie, Mohamed Elhoseiny, and Panos Kalnis. SLAMB: accelerated large batch training with sparse communication. In *International Conference on Machine Learning*, pages 38801–38825. PMLR, 2023.

[14] Xiangning Chen, Chen Liang, Da Huang, Esteban Real, Kaiyuan Wang, Yao Liu, Hieu Pham, et al. Symbolic discovery of optimization algorithms. *arXiv preprint arXiv:2302.06675*, 2023.

[15] Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochastic tensor optimization. In *International Conference on Machine Learning*, pages 1842–1850. PMLR, 2018.
[16] Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer. Scalable second order optimization for deep learning. *arXiv preprint arXiv:2002.09018*, 2020.

[17] Zhewei Yao, Amir Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, and Michael Mahoney. Adahessian: An adaptive second order optimizer for machine learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 10665–10673, 2021.

[18] Christopher M Bishop. *Pattern Recognition and Machine Learning*. Springer, 2006.

[19] Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In *Proceedings of the 24th International Conference on Machine Learning*, pages 473–480. ACM, 2007.

[20] Kanishka Tyagi, Chinmay Rane, and Michael Manry. Supervised learning. In *Artificial Intelligence and Machine Learning for EDGE Computing*, pages 3–22. Elsevier, 2022.

[21] Kanishka Tyagi. *Automated multistep classifier sizing and training for deep learners*. PhD thesis, Department of Electrical Engineering, The University of Texas at Arlington, Arlington, TX, 2018.

[22] Melvin Deloyd Robinson and Michael Thomas Manry. Two-stage second order training in feedforward neural networks. In *FLAIRS Conference*, 2013.

[23] Kanishka Tyagi, Son Nguyen, Rohit Rawat, and Michael Manry. Second order training and sizing for the multilayer perceptron. *Neural Processing Letters*, 51(1):963–991, 2020.

[24] Magnus Rudolph Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. *Journal of Research of the National Bureau of Standards*, 49:409–436, 1952.

[25] Christakis Charalambous. Conjugate gradient algorithm for efficient training of artificial neural networks. *IEE Proceedings G - Circuits, Devices and Systems*, 139(3):301–310, 1992.

[26] Kenneth Levenberg. A method for the solution of certain non-linear problems in least squares. *Quarterly of Applied Mathematics*, 2(2):164–168, 1944.

[27] Régis Lengellé and Thierry Denoeux. Training MLPs layer by layer using an objective function for internal representations. *Neural Networks*, 9(1):83–97, 1996.

[28] William H Press, Saul A Teukolsky, William T Vetterling, and Brian P Flannery. *Numerical Recipes in C*, 2nd edition. Cambridge University Press, 1996.

[29] Rohit Rawat, Jignesh K Patel, and Michael T Manry. Minimizing validation error with respect to network size and number of training epochs. In *The 2013 International Joint Conference on Neural Networks (IJCNN)*, pages 1–7. IEEE, 2013.

[30] Robert A Jacobs. Increased rates of convergence through learning rate adaptation. *Neural Networks*, 1(4):295–307, 1988.

[31] Simon Haykin. *Neural Networks and Learning Machines*, volume 3. Pearson Education, 2009.

[32] Regression data files. Image Processing and Neural Networks Lab, The University of Texas at Arlington. https://ipnnl.uta.edu/training-data-files/regression/, 2022.

[33] S. S. Malalur, M. T. Manry, and P. Jesudhas. Multiple optimal learning factors for the multilayer perceptron. *Neurocomputing*, 149:1490–1501, 2015.
Review 1:

Summary: The paper proposed Adapt-MOLF, a second-order optimization algorithm that dynamically calculates the number of weight groups for efficiently computing the Hessian matrix. The proposed method is an extension of the OWO-MOLF algorithm: instead of keeping the group size fixed, it adaptively calculates the group size based on the curvature of the error function. With the proposed method, the paper claims a faster convergence rate than the OWO-MOLF algorithm and a lower computational cost than LM. The paper also empirically shows that the proposed algorithm can achieve a faster or comparable convergence rate with an MLP on different datasets.

Strengths and Weaknesses:

Strengths: The idea is simple and, based on the experiments, the proposed method is effective.

Weaknesses:
1. The idea is not novel enough.
2. The paper is not well written. The plots are not well presented and lack intuition and analysis for the experimental results.
3. I am concerned about the practical effectiveness, since the paper only shows that the proposed method works on a 2-layer MLP.

Requested Changes:
1. The plots should be clearer and at least of the same size.
2. For Figures 2 and 3, why is LM worse than the proposed method and OWO-MOLF? Can we have some intuition for that?
3. For Figures 4 and 5: OWO-MOLF and Adapt-MOLF actually have similar convergence rates in terms of iteration numbers, and based on the paper, OWO-MOLF should be cheaper in computational cost. Then why is convergence much faster for Adapt-MOLF in Figure 5?
4. LM seems to achieve a smaller MSE in the figures plotted against iteration numbers, but why is it worse when plotted against multiplies? Can we have a fuller plot, e.g., training longer?
5. Can we have more experiments on different widths of the MLP to see the trade-offs between cost and convergence? In addition, can we have the training time for all these experiments?
6. Is there a chance that the proposed methods can be applied to other deep learning architectures?

Broader Impact Concerns: NONE

==================================================

Review 2:

Summary: The paper proposes Adapt-MOLF, a second-order optimization algorithm that clusters the input weights of a multilayer perceptron (MLP) and only computes the relevant statistics within each cluster. This clustering is dynamically adjusted throughout training, which the authors claim is advantageous compared to existing structured second-order techniques. Empirically, the authors investigate the effectiveness of Adapt-MOLF in simplistic settings (e.g., an MLP with a single hidden layer), and show improved performance against baseline techniques such as scaled conjugate gradient (SCG).

Strengths and Weaknesses:
- The writing in the paper can be significantly improved. There are various formatting (e.g., omitting equations or references) and notation issues, which make it extremely challenging to follow the paper. Apart from these problems, it is also difficult to follow the overall flow of the paper (which I will further elaborate on in the section below).
- Many claims in the paper are not well justified (which I detail in the section below).
- The authors study a limited setting: a multilayer perceptron (MLP) with sigmoid activation, a single hidden layer, and MSE loss. The setting also assumes a residual connection for the MLP architecture (which the authors term "fully connected MLP"). However, this is not clearly communicated in the abstract or introduction and is not justified.
For example, the authors refer to the above specific architecture and setting as an MLP, which is technically incorrect.
- The authors do not describe the limitations of their approach, such as scalability and limited algorithmic applicability.
- The empirical study has several limitations. Firstly, the authors do not compare their algorithm to other second-order methods mentioned before, such as Shampoo. Given that the network size is small (e.g., 30 hidden units), it would be helpful to include results using the exact Hessian (without block structure). Especially in the limited setting the authors explore, I would expect a more detailed analysis and comparison to other existing approaches. Secondly, a detailed description of the experimental setup is missing from the Appendix (e.g., the hyperparameters used), and the code is not provided, which makes the algorithm difficult to reproduce.
- At this stage, it is challenging to understand the applicability of Adapt-MOLF and its performance relative to other optimizers.

Requested Changes:
- In the introduction, the authors motivate the work by describing (1) the challenges of second-order approximation for neural networks and (2) the limitations of existing optimization techniques such as AdaFactor [1] and LAMB [2]. However, I find that some claims, such as the applicability of AdaFactor being not well understood, are unreasonable. This can be said of any neural network optimizer, such as stochastic gradient descent. The last sentence briefly describes the algorithm proposed in the paper, but it does not connect to the arguments the authors made in the previous paragraphs. I recommend revising the introduction to make the contribution clearer.
- Figure 1, describing the fully connected MLP, does not provide much information. I suggest removing it or adding more content that describes the setup the paper studies.
- \citep and \citet are not correctly used. I understand this was done to reduce the space in the paper, but it makes it extremely difficult to follow. For example, does (3) refer to equation 3 or reference 3?
- The descriptions of the proposed algorithm in Section 3 can be improved.
- I recommend that the authors provide more details of their experimental setup in the Appendix and compare the algorithm with other optimizers. It would be helpful to include a plot where the x-axis is the wall-clock time, to see the relative computational comparisons.

Broader Impact Concerns: I do not believe the work requires a Broader Impact Statement.

==================================================

Review 3:

Summary: The authors introduce a second-order optimization method that uses second-order information to limit the number of parameters that are optimized, using curvature-based heuristics to group parameters for optimization and Gauss-Newton updates. The authors introduce an "error per multiply" (EPM) figure of merit and use it to decide the number of groups for optimization, N_g.

Strengths and Weaknesses:

Strengths:
* The authors introduce Algorithm 1, which is able to outperform the baselines on several tasks (e.g., in Figures 2 and 3).
* The idea of using curvature to decide which parameters should be optimized is sound.

Weaknesses:
* The main text is extremely hard to follow due to the dense notation. The intuition and motivations behind the mathematical manipulations are hidden across the text, and not emphasized at all. An extra complication comes from the fact that there are many typos, e.g.,
a parenthesis is missing in Equation (22).
* The significance and the semantics of the datasets in the experiments are not explained at all. The reader is expected to be familiar with these datasets. I am not sure the claim "The Adaptive Multiple Optimal Learning Factors (Adapt-MOLF) algorithm presented in this research successfully mitigates the scalability challenges inherent in second-order training algorithms" is validated in the paper.
* Figures 4-7: Adapt-MOLF is very close to OWO-MOLF, and LM is clearly winning in Figure 6. Are these results statistically significant? Error bars are missing.
* Any analysis of how EPM and N_g change over time is missing, which is a pity, since these variables are introduced by the algorithm.

Requested Changes:

Please explain:
* What does it mean for a convergence rate to "align with the number of unknowns"?
* Regarding focusing on a two-layer MLP, you say, "This concentration allows for a detailed exploration and elucidation of the algorithm's derivation and operational specifics". I am not sure why it is so hard to extend your analysis to deeper models.
* What does "No. of patterns" identify in Table 1?

Please modify:
* A lot of the equations in Section 3.3 can be moved to the appendix.
* The quality of Figure 1 is really poor. Currently, it's just a screenshot of a figure.
* It is not clear what the background is used for. Why do we consider all the methods in Section 2.3? We need some proper motivation; the introduction in Section 3 could come in handy for that.

Broader Impact Concerns: N/A

==================================================
# Local Advantage Networks For Multi-Agent Reinforcement Learning In Dec-POMDPs

Raphaël Avalos raphael.avalos@vub.be
Vrije Universiteit Brussel

Mathieu Reymond mathieu.reymond@vub.be
Vrije Universiteit Brussel

Ann Nowé *ann.nowe@vub.be*
Vrije Universiteit Brussel

Diederik M. Roijers diederik.roijers@vub.be
Vrije Universiteit Brussel
City of Amsterdam

Reviewed on OpenReview: *https://openreview.net/forum?id=adpKzWQunW*

## Abstract

Many recent successful off-policy multi-agent reinforcement learning (MARL) algorithms for cooperative partially observable environments focus on finding factorized value functions, leading to convoluted network structures. Building on the structure of independent Q-learners, our LAN algorithm takes a radically different approach, leveraging a dueling architecture to learn for each agent a decentralized best-response policy via individual advantage functions. The learning is stabilized by a centralized critic whose primary objective is to reduce the moving target problem of the individual advantages. The critic, whose network size is independent of the number of agents, is cast aside after learning. Evaluation on the StarCraft II multi-agent challenge benchmark shows that LAN reaches state-of-the-art performance and is highly scalable with respect to the number of agents, opening up a promising alternative direction for MARL research.

## 1 Introduction

Reinforcement learning (RL) (Sutton & Barto, 1998) is the branch of machine learning dedicated to learning through trial and error via interaction between an agent and an environment. Research in RL has successfully managed to exceed human performance in many tasks, including Atari games (Mnih et al., 2015) and the challenging game of Go (Silver et al., 2016). While single-agent RL has been highly successful, many real-world tasks - sensor networks (Mihaylov et al., 2010), wildlife protection (Xu et al., 2020), and space debris cleaning (Klima et al., 2018) - require multiple agents. When these agents need to act on local observations, or the problem is too large to centralize due to the exponential growth of the joint action space in the number of agents, an explicitly multi-agent approach is required. As such, *Multi-Agent Reinforcement Learning (MARL)* (Buşoniu et al., 2008; Hernandez-Leal et al., 2019; Shoham et al., 2007) introduces additional layers of complexity over single-agent RL. In this paper, we focus on partially observable cooperative MARL, where the agents optimize the same team reward. This setting introduces two main challenges that do not exist in single-agent RL. 1) The *moving target problem* (Tuyls & Weiss, 2012): the presence of multiple learners in an environment makes it impossible for an agent to infer the conditional probability of future states. This invalidates most single-agent approaches, as the Markovian property no longer holds. 2) The *multi-agent credit assignment problem*: to learn a policy, each agent needs to determine which actions contribute to obtaining the maximum reward. While in single-agent RL this problem is only temporal, as the reward can be sparse and delayed, the shared reward increases the complexity of this problem as the agents also need to determine their individual contributions. Centralized Training with Decentralized Execution (CTDE) (Oliehoek et al., 2008a; Foerster et al., 2018; Lowe et al., 2017) has become a popular learning paradigm for MARL.
The core idea behind CTDE is that even though decentralized execution is required, the learning is allowed to be centralized. Specifically, during training it is often possible to access the global state of the environment and the observations and actions of all agents, which breaks partial observability and mitigates both the moving target problem and the credit assignment problem. Most of the research in off-policy CTDE MARL for collaborative partially observable environments focuses on factorizing the joint Q-value into local agent utilities, as in QMIX (Rashid et al., 2018) and QPLEX (Wang et al., 2021). In this paper, we take a radically different approach. Our *Local Advantage Networks (LAN)* algorithm learns for every agent the advantage of the best response policy to the other agents' policies. These local advantages, which are solely conditioned on the agent's observation-action history, are sufficient to build a decentralized policy. In this sense, the architecture of LAN resembles independent Q-learners more than other CTDE approaches such as QMIX or QPLEX. A key element of our solution is to derive a proxy of the local Q-value that leverages CTDE to stabilize the learning of the local advantages. For each agent, the Q-value proxy is composed of the sum of the local advantage and the centralized value of the joint policy. Compared to the local Q-value, LAN's proxy is able to mitigate the moving target problem, by integrating the changes of the other agents' policies faster, and to reduce the multi-agent credit assignment problem, by learning a local advantage function for each agent. LAN is also highly scalable, as the centralized value network reuses the hidden states of the local advantages to represent the joint observation-action history, and the number of parameters of the centralized value does not depend on the number of agents. Finally, compared to QMIX and QPLEX, which factorize the joint Q-value into individual utilities, LAN learns individual best-response Q-value proxies. This allows LAN to have no restriction on the family of decentralized functions it can represent, as opposed to QMIX. Indeed, in cooperative environments the optimal policies are best response policies. We empirically evaluate LAN against independent Q-learners (Tan, 1993; Tampuu et al., 2015) and state-of-the-art algorithms for deep MARL, i.e., VDN (Sunehag et al., 2018), QMIX and QPLEX, on the StarCraft Multi-Agent Challenge (SMAC) benchmark (Samvelyan et al., 2019). We show that on the 14 maps that compose the benchmark, LAN reaches performance similar to the SOTA on 11, surpasses the other algorithms by a large margin on 2, and underperforms on 1. On the maps with the most agents, LAN's centralized network uses up to 7 times fewer parameters than QPLEX, demonstrating the scalability of our algorithm. Furthermore, on two super hard maps, LAN learns a complex strategy based on an agent sacrificing itself to lure the enemies far from its teammates, showcasing LAN's capacity to mitigate the temporally extended multi-agent credit assignment problem. This strategy allows LAN to obtain success rates of respectively 40% and 90% on two maps where the current state-of-the-art - QPLEX - struggles to obtain any wins. By improving performance on these two maps, LAN achieves an average final performance over all 14 maps that is 10% better than QPLEX's score.
Importantly, the objective of this new method is not to improve performance over the SOTA, but rather to present an alternative research direction to factorizing the joint Q-value.

## 2 Background

The setting considered in this paper is the Dec-POMDP (Oliehoek & Amato, 2016; Oliehoek et al., 2008a) $G = \langle A, S, U, P, R, O, O, \gamma \rangle$. At each time-step, every agent $a \in A$ selects an action $u_a \in U_a$ to form the joint action $\mathbf{u} \in U$, where $U = \prod_a U_a$, that is processed by the environment to produce: a unique reward $r$ common to all agents; the next state $s' \in S$; and the agents' joint observation $\mathbf{o} \in O$, where $O = \prod_a O_a$, with $o_a \in O_a$ the observation of agent $a$. $\gamma \in [0, 1)$ is the discount factor. As the agents cannot access the real state of the environment, they condition their policies on their observation-action histories $\tau_a \in T_a = (O_a \times U_a)^*$, with $\boldsymbol{\tau} \in T$, where $T = \prod_a T_a$ is the space of joint observation-action histories. We refer to the observation-action history of an agent as its history, and to the joint observation-action history as the joint history. To simplify the notation in this paper, we assume that the observation function is deterministic; the extension to stochastic observations is straightforward. With this setting, the next joint history $\boldsymbol{\tau}'$ is defined entirely by the current joint history, the joint action and the state $\langle \boldsymbol{\tau}, \mathbf{u}, s' \rangle$. The value, Q-value and advantage functions of the joint policy $\boldsymbol{\pi}$, which can be centralized or decentralized, are defined as:

$$V^{\boldsymbol{\pi}}(s,\boldsymbol{\tau})=\sum_{\mathbf{u}}\boldsymbol{\pi}(\mathbf{u}\mid\boldsymbol{\tau})\big[R(s,\mathbf{u})+\gamma\sum_{s'}P(s'\mid s,\mathbf{u})V^{\boldsymbol{\pi}}(s',\boldsymbol{\tau}')\big]$$

$$Q^{\boldsymbol{\pi}}(s,\boldsymbol{\tau},\mathbf{u})=R(s,\mathbf{u})+\gamma\sum_{s'}P(s'\mid s,\mathbf{u})V^{\boldsymbol{\pi}}(s',\boldsymbol{\tau}')\qquad A^{\boldsymbol{\pi}}(s,\boldsymbol{\tau},\mathbf{u})=Q^{\boldsymbol{\pi}}(s,\boldsymbol{\tau},\mathbf{u})-V^{\boldsymbol{\pi}}(s,\boldsymbol{\tau})$$

We note that, if there is only a single agent, a Dec-POMDP is a POMDP, and if this agent can observe the full state $s$, the POMDP is an MDP. DQN (Mnih et al., 2013) is a popular algorithm for MDPs that learns an approximation of $Q^* = \max_{\pi} Q^{\pi}$ with a neural network parametrized by $\theta$. This $\theta$ is learned through gradient descent by minimizing $\big(Q(s, u \mid \theta) - y^{DQN}\big)^2$ with $y^{DQN} = r + \gamma \max_{u'} Q(s', u' \mid \theta)$. DQN uses a replay buffer to improve sample efficiency and to stabilize the learning. Dueling DQN (Wang et al., 2016) is a variant of DQN that learns both the value and the advantage, and then produces the Q-value as the sum of both instead of learning Q directly. This alternative architecture is motivated by the fact that having one part of the neural network learn the general value of the state, and a second part learn the effect of the actions - represented by the advantage - can be easier than learning both in the same network. DRQN uses a Recurrent Neural Network (RNN), such as a Gated Recurrent Unit (GRU) (Cho et al., 2014) or an LSTM (Hochreiter & Schmidhuber, 1997), to extend DQN to partially observable settings (POMDPs).
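For illustration, here is a minimal PyTorch sketch of a dueling recurrent Q-network combining the Dueling DQN decomposition with DRQN's recurrent history encoding; the layer sizes and the use of a GRUCell fed with the previous action are assumptions on our part, not the paper's specification.

```python
import torch
import torch.nn as nn

class DuelingDRQN(nn.Module):
    """A GRU summarizes the observation-action history into a hidden state;
    separate value and advantage heads are recombined into Q, following the
    zero-mean formulation of Wang et al. (2016)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim + n_actions, hidden)
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, obs, prev_action_onehot, h):
        # h is the recurrent belief over the history; None means zeros.
        h = self.gru(torch.cat([obs, prev_action_onehot], dim=-1), h)
        a = self.advantage(h)
        q = self.value(h) + a - a.mean(dim=-1, keepdim=True)
        return q, h
```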
## 3 Related Work

Applying single-agent RL algorithms to Dec-POMDPs, such as Independent Q-Learners (IQL) and Independent Actor-Critic, results in poor performance due to the moving target and multi-agent credit assignment problems (Tan, 1993; Tampuu et al., 2015; Foerster et al., 2018) - with the exception of stateless normal-form games (Nowé et al., 2012). The replay buffer, fundamental to DQN, worsens the moving target problem, as the sampled transitions are quickly outdated and off-environment as the policies evolve. Indeed, as all the agents are learning, transitions saved in the replay buffer might no longer be reachable once the other agents' policies have changed. As removing the replay buffer does not lead to good policies, alternatives such as importance sampling and the use of fingerprints have been explored, leading to small improvements (Foerster et al., 2017). In contrast, LAN's centralized value function mitigates the moving target problem sufficiently, which enables it to take advantage of the replay buffer and to reach state-of-the-art performance.

COMA (Foerster et al., 2018) and MADDPG (Lowe et al., 2017) introduced CTDE to deep MARL by building on single-agent actor-critic algorithms but replacing the local critic with a centralized one to improve the quality of the value estimates guiding the updates. In comparison, our method, LAN, is a value-based algorithm, making it more sample-efficient. While LAN's joint value is also a centralized critic, it plays an intrinsically different role, as it fosters learning coordination between the local advantage functions.

Centralized Q-Learning (CQL) and Independent Q-Learners (IQL) form the two extremes of value-based methods for MARL. On the one hand, CQL learns a unique Q-value conditioned on the full joint action space and the joint history. While in this setting the optimal performance is better than or equal to the decentralized one due to the reduction of partial observability, the agents are no longer autonomous, as they rely on a central entity for execution. In addition, this algorithm does not scale well due to the exponential growth of the joint action space in the number of agents. On the other hand, IQL learns for each agent a local Q-value conditioned on its local observation-action history. This algorithm is heavily affected by the moving target problem. However, in settings with limited interactions between agents, the moving target problem is not as intense and IQL can show good performance.

Figure 1: Comparison diagram between Centralized Q-Learning, Independent Q-Learning, value factorization methods and LAN.

Value factorization (VDN, QMIX, QPLEX) emerged as the main alternative in recent years. Closer to CQL than to IQL, those algorithms learn a unique Q-value over the joint action space. Their factorized architecture allows recovering, for each agent, a utility function for action selection. To ensure that the agents select the same action during training with the centralized component and during decentralized execution, the factorization follows the individual global max (IGM) principle: the maximizing joint action of the joint Q-value must be equal to the joint action that results from maximizing the local utilities. The factorization usually enforces a monotonicity constraint to ensure IGM, i.e., for each agent the derivative of the joint Q-value with respect to the agent's local utility is positive. VDN is the first algorithm of this kind and decomposes the joint Q-value into a simple sum. QMIX extends VDN by learning state-dependent positive weights. The state dependency broadens the family of Q-value functions that can be learned, and the positive-weight constraint ensures IGM. While QMIX achieves good performance and improves over VDN, the monotonicity constraint still limits the family of learnable functions. QATTEN (Yang et al., 2020) extends QMIX by using multi-head attention (Vaswani et al., 2017) to compute the mixing weights.
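To make the monotonic-factorization idea concrete, a simplified sketch of a QMIX-style mixer follows. The actual QMIX mixing network is a two-layer MLP whose weights are generated by hypernetworks; this single-layer version only illustrates how positive, state-dependent weights enforce monotonicity (and hence IGM), with VDN recovered when all weights equal 1.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Combine per-agent utilities with state-conditioned positive weights,
    so that dQ_tot/dQ_a >= 0 for every agent, a sufficient condition for the
    IGM principle."""
    def __init__(self, n_agents, state_dim):
        super().__init__()
        self.hyper_w = nn.Linear(state_dim, n_agents)  # weight hypernetwork
        self.hyper_b = nn.Linear(state_dim, 1)         # state-dependent bias

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        w = torch.abs(self.hyper_w(state))  # positivity enforces monotonicity
        return (w * agent_qs).sum(dim=-1, keepdim=True) + self.hyper_b(state)
```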
More recently, QPLEX extends QATTEN by transferring the IGM principle from the Q-value to the advantage function. At the cost of twice as many parameters on average and a more complex mixing network, QPLEX outperforms QMIX on SMAC. In contrast to those algorithms, LAN does not factorize the joint Q-value into individual agent utilities but learns an individual Q-value proxy for every agent. This results in LAN's architecture being able to represent any decentralized policy, as opposed to QMIX and VDN. Figure 1 presents a visual comparison of the structural differences between CQL, IQL, value factorization and LAN. This figure highlights the fact that while value factorization and LAN are both CTDE methods, LAN is closer to IQL, as it learns for each agent a Q-value over its local action space.

Multi-agent exploration and scalability with respect to the action space in Dec-POMDPs have been successfully explored by MAVEN (Mahajan et al., 2019) and RODE (Wang & Dong, 2020). Both works are orthogonal to ours, and while they use QMIX as a base algorithm, they could also be applied to LAN. For this reason we do not include them as baselines. Recently, MAPPO (Yu et al., 2021) and IPPO (de Witt et al., 2020) showed that actor-critic-based algorithms can achieve good performance in cooperative MARL. However, they require significantly more interactions, 10 million timesteps instead of 2 million, and more computing power. Comparison with those two algorithms is also harder because MAPPO changed the state space, IPPO changed the difficulty of the enemy team, and they do not use the same version of the environment. Also, they both use different hyperparameters per map, whereas the other algorithms use one set of hyperparameters for the full benchmark challenge. However, just like LAN, both MAPPO and IPPO propose an alternative to off-policy value factorization. While comparing the three methods might not be straightforward, due to the need to retune the three algorithms on a fixed version of SMAC, a further comparative study would help to better understand the strengths and weaknesses of each algorithm.

## 4 Method

In this section, we present **Local Advantage Networks (LAN)**, a novel value-based algorithm for collaborative partially observable MARL. LAN goes in the opposite direction of the current state-of-the-art in MARL, which focuses on factorizing the Q-value of the joint policy $Q^{\boldsymbol{\pi}}$ into individual utilities. Instead, LAN learns for each agent the advantage of the best response policy to the other agents' policies. The local advantages are conditioned only on the agent's own history, allowing for decentralized execution. The main contribution of LAN is to stabilize the learning of those advantages by leveraging CTDE and using the value of the joint policy $V^{\boldsymbol{\pi}}$ to coordinate their learning. The centralized nature of $V^{\boldsymbol{\pi}}$ allows LAN to reduce partial observability and to mitigate both the moving target problem and the multi-agent credit assignment problem. By combining the local advantages with the centralized value, LAN derives a proxy of the individual Q-value of each agent and can simultaneously learn all components with DQN. Two key differences with a factorized Q-function are: (1) LAN does not learn the Q-value of the joint policy (which is in fact more difficult to learn than the value $V$) and its factorization, but proxies of the individual Q-values; and (2) in contrast to VDN and QMIX, LAN's architecture does not limit the family of decentralized policies it can represent.
We note that QPLEX can also represent all these policies, at the cost of a more complex architecture.

**Best response policies.** We start from the observation that when the agents of a Dec-POMDP reach an optimal joint policy, their individual policies are best responses to the other agents' policies. Indeed, if one agent could improve its policy while the other agents' policies are fixed, the joint policy could not be optimal, as the agents share the same reward. Based on this observation, LAN focuses on learning best response policies. To better understand how to learn best response policies, we first focus on a single agent $a \in A$ and assume that the joint policy of the other agents $\boldsymbol{\pi}_{-a}$ is fixed. As in (Foerster et al., 2017), we derive from the Dec-POMDP $G$ a POMDP $G_a = \langle \tilde{S}, U_a, P_a, O_a, O_a, R_a, \gamma \rangle$, with $\tilde{S} = \langle S, T_{-a} \rangle$ being the original state space extended with the observation-action histories of the other agents. $P_a$ and $R_a$ are defined as follows:

$$P_{a}(\langle s',\boldsymbol{\tau}'_{-a}\rangle\mid\langle s,\boldsymbol{\tau}_{-a}\rangle,u_{a})=\sum_{\mathbf{u}_{-a}}\boldsymbol{\pi}_{-a}(\mathbf{u}_{-a}\mid\boldsymbol{\tau}_{-a})P(s'\mid s,\langle u_{a},\mathbf{u}_{-a}\rangle)\,p(\boldsymbol{\tau}'_{-a}\mid\boldsymbol{\tau}_{-a},s,s',\mathbf{u}_{-a})$$

$$R_{a}(\langle s,\boldsymbol{\tau}_{-a}\rangle,u_{a})=\sum_{\mathbf{u}_{-a}}\boldsymbol{\pi}_{-a}(\mathbf{u}_{-a}\mid\boldsymbol{\tau}_{-a})R(s,\langle u_{a},\mathbf{u}_{-a}\rangle)$$

The value, Q-value and advantage of $G_a$ can then be derived as follows, with $p(\tilde{s}\mid\tau_a)$ the probability of being in an extended state $\tilde{s} \in \tilde{S}$ when $\tau_a$ is agent $a$'s local history:

$$V^{\pi_{a}}(\tau_{a})=\sum_{u_{a}}\pi_{a}(u_{a}\mid\tau_{a})\sum_{\tilde{s}}p(\tilde{s}\mid\tau_{a})\sum_{\mathbf{u}_{-a}}\boldsymbol{\pi}_{-a}(\mathbf{u}_{-a}\mid\boldsymbol{\tau}_{-a})\big[R(s,\langle u_{a},\mathbf{u}_{-a}\rangle)+\gamma\sum_{s'}P(s'\mid s,\langle u_{a},\mathbf{u}_{-a}\rangle)V^{\pi_{a}}(\tau'_{a})\big]$$

$$Q^{\pi_{a}}(\tau_{a},u_{a})=\sum_{\tilde{s}}p(\tilde{s}\mid\tau_{a})\sum_{\mathbf{u}_{-a}}\boldsymbol{\pi}_{-a}(\mathbf{u}_{-a}\mid\boldsymbol{\tau}_{-a})\big[R(s,\langle u_{a},\mathbf{u}_{-a}\rangle)+\gamma\sum_{s'}P(s'\mid s,\langle u_{a},\mathbf{u}_{-a}\rangle)V^{\pi_{a}}(\tau'_{a})\big]$$

$$Q^{\pi_{a}}(\tau_{a},u_{a})=V^{\pi_{a}}(\tau_{a})+A^{\pi_{a}}(\tau_{a},u_{a})$$

**Partial observability.** Due to the partial observability, agent $a$ needs to disambiguate the state of $G_a$, which corresponds to the original state $s$ and the joint history of the other agents $\boldsymbol{\tau}_{-a}$. As the environment is no longer Markovian, the agent needs to base its policy on a belief over the extended state. The most straightforward way to compute this belief is to keep the full history of the agent. However, this strategy does not scale well in the number of time-steps or in the size of the state space. As analyzed in the work on influence-based abstractions (Oliehoek et al., 2012), in a Dec-POMDP it is sufficient to maintain a belief over the subset of features that allows locally regaining the Markovian property, using the property of d-separation. This belief is much more compact than keeping track of the entire action-observation history, and therefore offers the possibility of keeping a fully sufficient representation that remains tractable. In the ideal case, the RNN's history representation will capture the belief over the d-separating features, enabling the reinforcement learning agent to learn an optimal Dec-POMDP policy. In practice, of course, we aim to closely approximate such a representation, but we are often uncertain of its existence, or of its size if it does exist.

Applying DQN to the single-agent POMDP $G_a$ learns, for each agent $a$, the best response policy to $\boldsymbol{\pi}_{-a}$, as the probability distribution over the relevant features in $P_a$ results from executing fixed policies for the other agents. A naive solution to learn good decentralized policies would therefore be to improve each agent successively. However, this approach fails if the environment requires the agents to explore simultaneously to find the optimal policy.
On the other hand, optimizing $Q^{\pi_a}$ for all the agents simultaneously, i.e., Independent Q-Learning (IQL) (Tan, 1993; Tampuu et al., 2015), also has key downsides. While IQL allows agents to explore together, it does not perform well in more complicated tasks due to the moving target problem, as it ignores that the environment $G_a$ perceived by agent $a$ shifts as $\boldsymbol{\pi}_{-a}$ evolves. So while we need agents that learn together, they need to do so in a coordinated manner.

**Q-value proxy.** LAN simultaneously learns best response policies and mitigates the moving target problem. These best response policies are expressed as *local advantage functions* that are solely conditioned on the agent's observation-action history, $A^{\pi_a}(\tau_a, u_a)$, allowing for decentralized execution. To coordinate the learning of those local advantage functions, following the CTDE paradigm, LAN leverages full information about the state and the other agents' observation-action histories at training time via a centralized value function $V^{\boldsymbol{\pi}}$. More specifically, LAN derives $\tilde{Q}^{\boldsymbol{\pi}}_a$, a proxy of the local Q-value $Q^{\pi_a}$, for each agent $a \in A$:

$$\tilde{Q}_{a}^{\boldsymbol{\pi}}(s,\boldsymbol{\tau},u_{a})=V^{\boldsymbol{\pi}}(s,\boldsymbol{\tau})+A^{\pi_{a}}(\tau_{a},u_{a})\tag{1}$$

The proxy is constructed by summing the local advantage $A^{\pi_a}$ with the centralized value of the joint policy $V^{\boldsymbol{\pi}}$. While $\tilde{Q}^{\boldsymbol{\pi}}_a$ is not a real Q-value and is conditioned on the full state and the joint history $\boldsymbol{\tau}$, it can be used to extract decentralized policies, as the maximizing actions only depend on the agent's history $\tau_a$, as shown by equation 2. We obtain this equation by remarking that, in both decompositions, of $Q^{\pi_a}$ and of $\tilde{Q}^{\boldsymbol{\pi}}_a$, the local and centralized values are not conditioned on the agent's actions:

$$\arg\max_{u_{a}}\tilde{Q}_{a}^{\boldsymbol{\pi}}(s,\boldsymbol{\tau},u_{a})=\arg\max_{u_{a}}A^{\pi_{a}}(\tau_{a},u_{a})=\arg\max_{u_{a}}Q^{\pi_{a}}(\tau_{a},u_{a})\tag{2}$$

LAN uses DQN to learn the individual Q-value proxies $\tilde{Q}^{\boldsymbol{\pi}}_a$ for all agents $a \in A$ simultaneously. This allows LAN to learn the local advantages $A^{\pi_a}$ and the centralized value $V^{\boldsymbol{\pi}}$ in parallel by optimizing a unique loss, resulting in an efficient learning scheme. LAN's DQN target for agent $a$ is defined as follows, with the subscript $t$ referring to a delayed copy of the networks used to increase learning stability (van Hasselt et al., 2015). Appendix E contains the pseudo-code of LAN.

$$y_{a}=r+\gamma\tilde{Q}_{t,a}^{\boldsymbol{\pi}}\big(s',\boldsymbol{\tau}',\arg\max_{u'_{a}}\tilde{Q}_{a}^{\boldsymbol{\pi}}(s',\boldsymbol{\tau}',u'_{a})\big)=r+\gamma\big[V^{\boldsymbol{\pi}}_{t}(s',\boldsymbol{\tau}')+A_{t}^{\pi_{a}}\big(\tau'_{a},\arg\max_{u'_{a}}A^{\pi_{a}}(\tau'_{a},u'_{a})\big)\big]\tag{3}$$

The following theorem shows that our Q-value proxy is an unbiased estimator of the local Q-value it approximates.

Theorem 4.1. *For any agent $a \in A$, any realisable local history $\tau_a \in T_a$, and any action $u_a \in U_a$, the Q-value proxy $\tilde{Q}_a$ is an unbiased estimator of the local Q-value $Q^{\pi_a}$:*

$$\mathbb{E}_{s,\boldsymbol{\tau}_{-a}\sim p(\cdot\mid\tau_{a})}\,\tilde{Q}_{a}(s,\langle\boldsymbol{\tau}_{-a},\tau_{a}\rangle,u_{a})=Q^{\pi_{a}}(\tau_{a},u_{a})\tag{4}$$

We prove the theorem in Appendix F. In a nutshell, it shows that by optimizing the Q-value proxy we are optimizing in the same direction as the local Q-value.
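To illustrate equations 1 and 3, a minimal sketch of how the targets could be assembled from precomputed network outputs follows; the tensor shapes, names, and the episode-termination mask are our assumptions, not details from the paper.

```python
import torch

def lan_targets(r, gamma, adv_next, adv_next_tgt, v_next_tgt, done):
    """Per-agent DQN targets in the spirit of eq. 3.

    adv_next / adv_next_tgt: A^{pi_a}(tau'_a, .) from the online and delayed
        networks, shape (batch, n_agents, n_actions).
    v_next_tgt: V^pi_t(s', tau') from the delayed network, shape (batch, 1).
    r, done: shape (batch, 1)."""
    greedy = adv_next.argmax(dim=-1, keepdim=True)       # decentralized argmax
    a_tgt = adv_next_tgt.gather(-1, greedy).squeeze(-1)  # (batch, n_agents)
    q_next = v_next_tgt + a_tgt                          # Q-proxy of eq. 1
    return r + gamma * (1.0 - done) * q_next             # y_a for every agent
```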
Compared to the local Q-value $Q^{\pi_a}$, the learning of LAN's proxy $\tilde{Q}^{\boldsymbol{\pi}}_a$ has two interesting properties that help stabilize and coordinate the learning, and give an intuition on how LAN solves the task as a whole. We note that these properties result from applying DQN to LAN's Q-value proxies for all agents in parallel, and cannot be tested independently.

Figure 2: Architecture of LAN.

**Property 1:** $\tilde{Q}^{\boldsymbol{\pi}}_a$ mitigates the moving target problem, which results from all the agents learning at the same time. This simultaneous learning allows the agents to explore together, which is necessary to find an optimal strategy in non-monotonic environments, but because of it the environment is constantly changing and locally loses its Markovian property. To provide meaningful updates and prevent the learning from plateauing prematurely as in IQL, the updates need to reflect as closely as possible the ever-changing environment. LAN achieves this thanks to the centralized value, which coordinates the learning of all the local advantages. This happens in two steps. First, as an update of $\tilde{Q}^{\boldsymbol{\pi}}_a$ results in an update of both the centralized value and the local advantage with the same transitions, a modification of a local advantage function results in a change of the centralized value. Second, as the centralized value is part of the target update of every agent's Q-value (eq. 3), the change is then propagated to the other agents' advantages.

**Property 2:** $\tilde{Q}^{\boldsymbol{\pi}}_a$ mitigates the multi-agent credit assignment problem. As the centralized value function approximates the expected return of the joint policy, the agents can easily evaluate the effect of their actions on the effective return simply by subtracting the centralized value from it. This difference is learned by the local advantages. Indeed, by applying DQN to $\tilde{Q}^{\boldsymbol{\pi}}_a$, the *induced update* of the local advantage network of agent $a$ (eq. 5) is similar to the one used by COMA (Foerster et al., 2018) to reduce the multi-agent credit assignment problem. We stress that we learn all the networks in parallel with Equation 3.

$$y_{A_{a}}=r+\gamma\tilde{Q}_{t,a}^{\boldsymbol{\pi}}\big(s',\boldsymbol{\tau}',\arg\max_{u'_{a}}\tilde{Q}_{a}^{\boldsymbol{\pi}}(s',\boldsymbol{\tau}',u'_{a})\big)-V^{\boldsymbol{\pi}}(s,\boldsymbol{\tau})\tag{5}$$

Additionally, we also have two intuitions regarding LAN's performance. While we were not able to prove them, we believe that they are still valuable leads to explore.

**Intuition 1:** $\tilde{Q}^{\boldsymbol{\pi}}_a$ allows providing better update targets by breaking the partial observability. In a POMDP, the same observation-action history can be linked to different states, forcing the agent to learn a Q-value that marginalizes over the possible states. In a Dec-POMDP this aspect is even more apparent, as all agents $a \in A$ need to marginalize not only over the possible states but also over the possible joint histories of the other agents $\langle s, \boldsymbol{\tau}_{-a} \rangle$, as shown by the derivation of $G_a$. Through its conditioning on the next state and the joint history $\langle s', \boldsymbol{\tau}' \rangle$, LAN's DQN target does not suffer from the partial observability and can therefore provide updates that take this information into account. As highlighted by (Lyu et al., 2021), using a centralized target to learn a decentralized object might lead to high-variance updates. The authors mention that the choice of a centralized versus decentralized critic is a bias-variance trade-off. In LAN, the value is centralized while the advantage is decentralized. This means that LAN, by using the Q-value proxy (not as precise as the real Q-values) to compute the targets, induces a bias which in turn reduces the variance of the updates.
**Intuition 2:** $\tilde{Q}^{\boldsymbol{\pi}}_a$ reduces the learning complexity associated with decentralized policy optimization. Typically, extracting a policy from a value-based algorithm involves selecting the action that maximizes the Q-value or the advantage, as they have the same action ordering. However, advantage and value functions exhibit different learning complexities, depending on the characteristics of the environment. While the advantage function learns the impact of each action on the overall return, the value function learns the expected cumulative return, necessitating more marginalization over different states and other agents' histories. This distinction motivated the introduction of Dueling DQN in MDPs (Wang et al., 2016). Nonetheless, learning the advantage function in isolation is not feasible; it requires learning the corresponding value function, which suffers from both the partial observability and the moving target problem. Therefore, LAN's proxy provides a straightforward and efficient approach to learn local advantages without relying on local values.

## Architecture

To overcome the partial observability, the local advantage networks use a GRU which learns to represent the observation-action history as a hidden state $h_a$, with the aim of capturing the features necessary to locally regain the Markov property, as stated above. This hidden state is then used to compute the local advantages. LAN leverages the work done at the agent level to represent $\tau_a$ in order to build a representation of $\boldsymbol{\tau}$. For each agent $a$, the centralized value network combines the id $a$ of the agent with its hidden state $h_a$, its last observation $o_a$ and its last action $u_a$ into a vector $\tilde{h}_a = [h_a, o_a, u_a, a]$. To represent $\boldsymbol{\tau}$ efficiently, we first embed $\tilde{h}_a$ into $\hat{h}_a$ for all agents with a shared network and sum those embeddings. The embedding limits the potential information loss of the summation, and this combination performs better than concatenation. Finally, the value is computed from $\boldsymbol{\tau}$ using an MLP. LAN's architecture, represented in Figure 2, provides two main benefits. First, the centralized value network does not need to learn a second recurrent network, which are notoriously difficult to train. Second, as the embeddings of all agents are computed with the same weights, the number of parameters of the centralized value network does not depend on the number of agents.

As the policies are deterministic, the local advantages should be negative, with the maximizing value equal to 0. However, as (Wang et al., 2016) studies, even when computing the real Q-value in a single-agent MDP, enforcing this constraint has a negative impact on the learning. Their experiments showed that applying the following transformation to the output of the neural network provides better stability:

$$A^{\pi_{a}}(\tau_{a},u_{a})\leftarrow A^{\pi_{a}}(\tau_{a},u_{a})-\frac{1}{|U_{a}|}\sum_{u\in U_{a}}A^{\pi_{a}}(\tau_{a},u)\tag{6}$$

In the single-agent case, this results in the learned advantage differing from the real advantage by a fixed offset. In LAN, as the centralized value is shared between all the agents, enforcing the local advantages to have zero mean means that the offset is shared between all the agents. As in (Wang et al., 2016), we investigated enforcing negative advantages and observed that learning in LAN was also highly impacted by it. While sharing the offset between the agents can have a positive impact on collaboration, it can also hinder the learning by adding an additional constraint on both networks. Appendix D reports LAN's performance with the mean constraint (eq. 6). Therefore, in LAN we do not apply any constraint on the output of the advantage network.
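A minimal PyTorch sketch of this centralized value network follows; hidden sizes and the exact way the SMAC state is injected (here, concatenated after the pooling, cf. Section 5) are assumptions on our part.

```python
import torch
import torch.nn as nn

class CentralValue(nn.Module):
    """Each agent's summary h~_a = [h_a, o_a, u_a, id_a] is embedded with
    *shared* weights and the embeddings are summed, so the parameter count
    is independent of the number of agents; an MLP then maps the pooled
    representation (plus the state) to V^pi."""
    def __init__(self, summary_dim, state_dim, embed=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(summary_dim, embed), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(embed + state_dim, embed), nn.ReLU(), nn.Linear(embed, 1)
        )

    def forward(self, summaries, state):
        # summaries: (batch, n_agents, summary_dim), state: (batch, state_dim)
        pooled = self.embed(summaries).sum(dim=1)  # permutation-invariant sum
        return self.head(torch.cat([pooled, state], dim=-1))
```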
## 5 Experiments

To benchmark LAN we use the StarCraft Multi-Agent Challenge¹ (SMAC) (Samvelyan et al., 2019), a set of environments that runs in the popular video game StarCraft II. SMAC does not focus on the full game but rather on micromanagement tasks where two teams of agents - possibly heterogeneous and imbalanced - fight. A match is considered won if the other team is eliminated within the time limit. The time limits differ per task. Each agent only observes its surroundings and receives a team reward proportional to the damage done to the other team, plus bonuses for killing an enemy and for winning. The action space of each agent consists of a move action for each cardinal direction, a no-op action, and an attack action for each enemy, which is replaced by a heal action for each team member for the Medivac units. The attack/heal action only affects units within range. As the agent's observation and action spaces are linearly dependent on the number of agents, scalability is a key issue for performing well. SMAC also provides the real state of the environment, which we use as input for the centralized value. The benchmark is composed of 14 different maps that are designed to assess different aspects of cooperation. They are ranked into 3 categories: easy, hard, and super hard maps.

¹We use version SC2.4.6.2.69232 and not SC2.4.10. Performances are not comparable between versions.

Figure 3: Median battle won rate during learning on 9 maps of SMAC. Each algorithm is run on at least 10 different seeds per map. Following the evaluation method of Samvelyan et al. (2019), we train the agents on 2 million steps and plot the median, 1st and 3rd quantiles. IQL (no fog) is introduced in 5.4.

## 5.1 Configuration

To ensure a fair comparison, the decentralized network architecture, the version of the game, the ε-annealing parameters, the batch size, the replay buffer size, the use of a single environment, and the use of a unique set of parameters across all maps are consistent with the QMIX and QPLEX papers. Appendix B lists the hyperparameters used, and Appendix D reports the results of a variation of LAN where we force the advantage to have a zero mean as in Dueling DQN (Wang et al., 2016). The training and evaluation follow the procedure described in Samvelyan et al. (2019), namely 2 million training timesteps, and evaluation of the decentralized greedy policies over 32 episodes every 10k timesteps. We train LAN on at least 10 different random seeds and report the median of the battle win rate over the learning time as well as the first and third quantiles.
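For concreteness, a sketch of this evaluation loop follows; `env` and `agents` are hypothetical wrappers around SMAC and the local advantage networks, and the `battle_won` flag is assumed to be exposed by the environment's episode info.

```python
import numpy as np

def evaluate_greedy(env, agents, n_episodes=32):
    """Run the decentralized greedy policies for 32 episodes (as done every
    10k training timesteps) and report the battle win rate."""
    wins = 0
    for _ in range(n_episodes):
        obs, hidden = env.reset(), [None] * len(agents)
        done, info = False, {}
        while not done:
            actions = []
            for i, agent in enumerate(agents):
                adv, hidden[i] = agent(obs[i], hidden[i])
                actions.append(int(np.argmax(adv)))  # greedy = argmax of A
            obs, done, info = env.step(actions)
        wins += int(info.get("battle_won", False))
    return wins / n_episodes
```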
## 5.2 Results

We compare LAN to IQL, VDN, QMIX and QPLEX. For the first three algorithms we used the implementation of QMIX, and we used the official implementation of QPLEX. In the following, we present LAN's performance on 9 maps (Figure 3); the other maps are presented in Appendix C. The first row features fights between marines with an increasing number of agents, with the enemy controlling more units. The second row is composed, from left to right, of a balanced map with 24 heterogeneous units per team, a map where 2 powerful units fight a swarm of 64 smaller enemies, and an unbalanced heterogeneous map with a medic unit that, as a side effect, increases the action space. The last row shows the results on two super hard maps where the baselines do not reach any wins, and a map where LAN seems to underperform. Finally, we discuss LAN's average performance across all maps (Figure 4).

Table 1: Number of parameters (×1000) of the value function in LAN vs. the mixing network in QPLEX/QMIX for the first 4 maps of Figure 3. See Appendix A for the other maps. The dependency of the dimensions of the observation and action spaces on the number of agents is the only cause of the difference in the number of parameters of LAN's centralized value network across maps.

| | 5m_vs_6m | 10m_vs_11m | 27m_vs_30m | bane_vs_bane |
|-------|----------|------------|------------|--------------|
| LAN | 56 | 68 | 111 | 125 |
| QPLEX | 43 | 106 | 709 | 555 |
| QMIX | 32 | 70 | 283 | 241 |

In the maps of the first row of Figure 3, two unbalanced teams with homogeneous units fight against each other, with our team composed of fewer units than the enemy: in 5m_vs_6m 5 agents fight 6 enemies, in 10m_vs_11m 10 agents fight 11 enemies, and in 27m_vs_30m 27 agents fight 30 enemies. The ratio between the number of agents and the number of enemies makes the map 10m_vs_11m easier than the other two. In the map 27m_vs_30m, both the number of agents and the dimensions of the observation and action spaces constitute a real challenge for MARL. In those three maps, LAN dominates IQL and performs on par with the SOTA. First, as IQL is a natural ablation of LAN, we deduce from this experiment that the centralized value introduced by LAN does indeed help to coordinate the learning of the agents and that LAN addresses the shortcomings of IQL. Second, not only does LAN perform on par with the SOTA, slightly outperforming the other algorithms on the most difficult map, it is also more scalable than QMIX and QPLEX in terms of the number of parameters of its centralized component with respect to the number of agents (Table 1). Indeed, between 5m_vs_6m and 27m_vs_30m the number of agents is multiplied by 5.4, and the number of parameters of LAN's centralized value is only multiplied by a factor of 2, while for the centralized components of QMIX and QPLEX this factor is respectively 8.8 and 16.5.

The second row of Figure 3 is composed of two hard maps and one super hard map. The first one, bane_vs_bane, opposes two large and balanced teams of 24 heterogeneous units. We observe that while IQL easily reaches a 100% win rate, VDN struggles to learn and QMIX fails to learn. This hints at a limitation of both monotonic mixing strategies regarding scaling to a large number of agents, supporting our claim that an alternative research direction to value factorization is needed. QPLEX is able to learn the perfect strategy at the cost of doubling the number of parameters compared to QMIX. LAN also learns to consistently eliminate the opposing team and reaches a perfect score with 5 times fewer parameters than QPLEX. The second map, 2c_vs_64zg, matches two powerful agents against 64 weaker agents. The numerous enemies make the action space very large, with 70 actions, which is a known challenge in RL (Zahavy et al., 2018). In this map, QPLEX reaches a final performance of 83% win rate, followed closely by LAN with 80%, while QMIX, VDN and IQL score around 50%, 20% and 15% win rate respectively. The third map, MMM2, features two unbalanced heterogeneous teams, with the enemy team having 2 additional units, and is the only map including medical units.
While IQL and VDN do not obtain any wins, QMIX and QPLEX score 60% and 80% respectively. LAN obtains the same final performance as QPLEX.

The last row of Figure 3 presents LAN's performance on 2 super hard maps, alongside the easier version of one of those maps. In the super hard map corridor, 6 agents of type 'zealot' fight a team of 24 enemies of type 'zergling'. While the SMAC paper claimed that the only solution for this map was to take advantage of the terrain (a spawning zone connected to a second zone by a corridor) to limit the number of enemies that can attack our agents, LAN discovered another solution. One agent lures part of the enemies to a remote location while the rest fights the remaining enemies. After killing the bait, a fraction of the enemies attack our agents while the majority go through the corridor to reach the second zone. Our agents defeat their attackers and, after regenerating part of their shields, move to the second zone to finish off the enemies. While the current SOTA flattens to zero, LAN obtains an almost perfect score, with around a 90% success rate. On the next super hard map, 3s5z_vs_3s6z, LAN learns good decentralized policies with a performance of around 40%. The only other algorithm that achieves any wins is QPLEX, with less than 10%. The strategy is similar to the one learned in corridor: a stalker (long-range unit) baits most of the enemy's zealots (close-combat units) into targeting it. It then flees far away from its teammates and sacrifices itself so that the other agents can kill the stalkers and remaining zealots. The agents can then easily kill the remaining enemies, as they are no longer protected by any long-range support. The last map of Figure 3, 3s5z, is the balanced version of the previous map and therefore easier. In this map, LAN reaches an 87% median battle win rate, whereas VDN only scores 80%, and QMIX and QPLEX obtain 97%. This underperformance is intriguing, as LAN performs better than the other algorithms on the harder version of this map. By visualizing the learned policies in 3s5z, we discovered that LAN converges to two different policies: a) a basic confrontation policy, which is the policy learned by QMIX and QPLEX; b) a baiting strategy identical to the one learned in 3s5z_vs_3s6z. We also remark that LAN appears to still be learning and might converge to the same performance as QPLEX if given more time.

LAN's performance in the last two super hard maps can be attributed to its ability to train an agent to lure the enemies and to sacrifice itself for the team's survival. We believe that this behavior is easier to discover with LAN than with the mixing algorithms because of the shared value network, as it allows dead agents to benefit directly from the rewards scored by the other agents after their death. LAN, by focusing on learning best response policies instead of factorizing a joint Q-value, learns for each agent the policy that maximizes the team return. On the other hand, QMIX and QPLEX introduce individual rewards through factorization, which the agents learn to maximize. However, if these individual rewards do not align with the team reward, as is the case in baiting strategies, mixing algorithms struggle to learn effectively. The complex strategy learned by LAN demonstrates its capacity to effectively mitigate the multi-agent credit assignment problem.

Figure 4: (Left) Averaged median test win rate on the 14 maps during learning. Shaded area denotes the average first and third quantiles. (Right) Number of maps where an algorithm is first by at least 1/32 during learning.
(Right) Number of maps where each algorithm is first by at least 1/32 during learning.

As in the SMAC benchmark and QPLEX papers, Figure 4 shows, on the left plot, LAN's average performance on the 14 maps that compose the SMAC benchmark, and, on the right plot, the number of maps where each algorithm outperforms the others by a margin of at least 1/32. IQL only achieves 30% averaged median test wins and is the best on 0 maps. This under-performance was expected, as it is the only fully decentralized learning algorithm and because it is highly vulnerable to the moving target problem. At the beginning of the learning, VDN and QMIX show similar performance, but QMIX takes the lead, obtaining 60% and beating VDN by 8%. QPLEX learns faster than the other algorithms and reaches the final performance of QMIX in just a million timesteps, obtaining 67% at the end of the learning. Finally, LAN learns faster than the baselines except QPLEX, which it exceeds at around 1.25 × 10^6 timesteps. LAN finishes first with 77% wins. The right plot shows that LAN bests the other algorithms on 3 maps, namely corridor, 3s5z_vs_3s6z, and 5m_vs_6m.

## 5.3 Credit Assignment Analysis

In the most difficult maps of SMAC the enemy teams have more units, and the contribution of all the agents is required to win. The difference in performance between 3s5z and 3s5z_vs_3s6z (same team of agents but one more enemy) is a good example of that. The baiting strategy discovered in 3s5z_vs_3s6z and corridor showcases the credit assignment of LAN. Indeed, while the agent that serves as bait acts at the beginning of the episode, the correct behavior is reinforced even though the rewards for killing the enemies and for defeating the enemy team arrive later.

![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png)

Figure 5: The Checkers environment. (a) Initial state of the environment. (b) Final state of the environment reached by LAN's policy, with the first agent having eaten all the apples. The green boxes are the apples; they yield +10 reward when eaten by the first agent and +1 when eaten by the second agent. The yellow boxes are the lemons, which yield −10 and −1 to the first and second agent respectively. The supplementary material contains a gif of the policy.

To further emphasize this, we performed an additional experiment on Checkers, an environment introduced with VDN and designed to assess credit assignment. In Checkers the red agent gets +10 reward for eating apples (green) and −10 reward for eating lemons (yellow), while the second agent gets +1 and −1 respectively. The agents receive the sum of both rewards. Each agent receives as observation its location in the map and a 3x3 window around it. The environment finishes when there are no more apples or after 100 steps. Agent 2 needs to eat the lemons (−1 reward) that block the way for agent 1 to eat the apples (+10 reward), as shown by the initial state of the environment (Figure 5a). While the agents get the same team reward, they have distinctive roles, as the second agent needs to learn that negative immediate rewards lead to a better team return. LAN converges to the policies described above, with the 3 lemons on the top row left uneaten (Figure 5b). As this environment was designed to assess the credit assignment problem, this shows that LAN mitigates it.

## 5.4 Moving Target Problem Analysis

IQL serves as a natural ablation of LAN, wherein the shared centralized value component of our Q-value proxy is removed.
As discussed in the preceding section, the primary drawback of IQL lies in its susceptibility to the moving target problem, as it disregards the learning of other agents. Consequently, IQL lacks any mitigation strategy against this issue. In scenarios such as bane_vs_bane, where coordination is unnecessary or when agents have no mutual influence, IQL can exhibit satisfactory performance. However, the notable superiority of LAN over IQL across all maps demonstrates that LAN effectively addresses the limitations of IQL, including the challenge posed by the moving target problem.

Since the centralized value of LAN makes it possible to break partial observability, we carried out an additional experiment to make sure that the increased performance of LAN was not only due to targets with increased observability. In this experiment, we trained IQL without the fog of war so that all the agents could observe the entire map. While the RNN is no longer needed, we kept the same architecture and training procedure of replaying full episodes. This experiment is labelled as "IQL (no fog)" in Figures 3 and 4. In all the maps, IQL performs better than IQL without the fog of war. This shows that LAN's performance is not only due to the increased observability of its centralized component and strengthens our claim that our Q-value proxy mitigates the moving target problem.

In summary, LAN performs on par with the SOTA on the easy and hard maps while dominating the other methods on the super-hard maps, even the ones where the other methods did not achieve any wins. LAN outperforms QPLEX by 10% in averaged performance. These results showcase LAN's performance and scalability potential, and its capacity to handle many agents and large observation and action spaces.

## 6 Conclusion

In this paper, we proposed Local Advantage Networks (LAN), a novel value-based MARL algorithm for Dec-POMDPs. LAN leverages the CTDE approach by building, for each agent, a proxy of the local Q-value composed of the local advantage and the joint value. LAN trains both networks by applying DQN to this Q-value proxy. Centralized learning makes it possible to condition the joint value on the real state to overcome partial observability during training. In parallel, it learns the advantages together with the joint value, to synchronize all value functions to the ever-changing policies. This results in more accurate DQN targets and mitigates the moving target problem. Conditioning the local advantages solely on each agent's observation-action history ensures decentralized execution. To ensure scalability, LAN's joint value efficiently summarizes the hidden states produced by the GRUs of the local advantages to represent the joint history. Therefore, the number of parameters of this value function is independent of the number of agents. We evaluated LAN on the challenging SMAC benchmark, where it performs significantly better than or on par with state-of-the-art methods, while its architecture is significantly more scalable in the number of agents. In the two most complex maps, LAN was able to learn a complex strategy where one agent would sacrifice itself for the survival of the team, thereby experimentally demonstrating LAN's ability to mitigate the multi-agent credit assignment problem. We believe that the lean architecture of LAN for learning decentralized policies in a Dec-POMDP is key to learning efficiently in decentralized partially observable settings.
Most of the recent work in value-based deep MARL for Dec-POMDPs has focused on improving the value factorization of QMIX. The need for a different research direction is therefore real, and LAN, by moving away from value factorization, offers such an alternative. LAN is not only able to achieve better performance than value factorization but is also more scalable parameter-wise.

Future work In future work, we aim to explore how the history representation of the centralized value can be improved through the use of Attention (Vaswani et al., 2017) or Graph Neural Networks (Kipf & Welling, 2017). We also aim to investigate how explicit communication (Oliehoek et al., 2008b; Messias et al., 2011; Wang et al., 2020; Das et al., 2019) can be added to LAN to further improve the coordination between the agents and the robustness of the learned policies. We also plan to investigate how LAN's architecture might benefit MARL algorithms in settings with continuous action spaces.

## Acknowledgements

R. Avalos is supported by the Research Foundation - Flanders (FWO), under grant number 11F5721N. We thank Florent Delgrange and the anonymous reviewers for their valuable feedback.

## References

Lucian Buşoniu, Robert Babuška, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. *IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews*, 38(2):156–172, 3 2008. doi: 10.1109/TSMCC.2007.913919.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. *EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference*, pp. 1724–1734, 6 2014. URL https://arxiv.org/abs/1406.1078v3.

Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, and Joelle Pineau. TarMAC: Targeted multi-agent communication. In *36th International Conference on Machine Learning, ICML 2019*, volume 2019-June, pp. 2776–2784, 10 2019. ISBN 9781510886988. URL http://arxiv.org/abs/1810.11187.

Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, and Shimon Whiteson. Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? 11 2020. URL http://arxiv.org/abs/2011.09533.

Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos Afouras, Philip H.S. Torr, Pushmeet Kohli, and Shimon Whiteson. Stabilising experience replay for deep multi-agent reinforcement learning. In *34th International Conference on Machine Learning, ICML 2017*, volume 3, pp. 1879–1888. International Machine Learning Society (IMLS), 2 2017. ISBN 9781510855144. URL http://arxiv.org/abs/1702.08887.

Jakob N. Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. In *32nd AAAI Conference on Artificial Intelligence, AAAI 2018*, pp. 2974–2982, 5 2018. ISBN 9781577358008. URL http://arxiv.org/abs/1705.08926.

Pablo Hernandez-Leal, Bilal Kartal, and Matthew E. Taylor. A survey and critique of multiagent deep reinforcement learning. *Autonomous Agents and Multi-Agent Systems*, 33(6):750–797, 10 2019. ISSN 1573-7454. doi: 10.1007/S10458-019-09421-1. URL https://link.springer.com/article/10.1007/s10458-019-09421-1.

Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. *Neural Computation*, 9(8):1735–1780, 11 1997.
ISSN 0899-7667. doi: 10.1162/NECO.1997.9.8.1735. URL http://direct.mit.edu/neco/article-pdf/9/8/1735/813796/neco.1997.9.8.1735.pdf.

Bojun Huang. Steady state analysis of episodic reinforcement learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/69bfa2aa2b7b139ff581a806abf0a886-Abstract.html.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In *5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings*, 9 2017. URL http://arxiv.org/abs/1609.02907.

Richard Klima, Daan Bloembergen, Rahul Savani, Karl Tuyls, Alexander Wittig, Andrei Sapera, and Dario Izzo. Space debris removal: Learning to cooperate and the price of anarchy. *Frontiers Robotics AI*, 5(JUN), 2018. doi: 10.3389/FROBT.2018.00054/FULL.

Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In *Advances in Neural Information Processing Systems*, volume 2017-December, pp. 6380–6391, 6 2017. URL http://arxiv.org/abs/1706.02275.

Xueguang Lyu, Yuchen Xiao, Brett Daley, and Christopher Amato. Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning. *Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021)*, 2 2021. URL http://arxiv.org/abs/2102.04402.

Anuj Mahajan, Tabish Rashid, Mikayel Samvelyan, and Shimon Whiteson. MAVEN: Multi-Agent Variational Exploration. *Advances in Neural Information Processing Systems*, 32, 10 2019. URL https://arxiv.org/abs/1910.07483v2.

João V. Messias, Matthijs T.J. Spaan, and Pedro U. Lima. Efficient offline communication policies for factored Multiagent POMDPs. In *Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011*, 2011.

Mihail Mihaylov, Karl Tuyls, and Ann Nowé. Decentralized Learning in Wireless Sensor Networks. In Matthew Taylor and Karl Tuyls (eds.), *Adaptive and Learning Agents*, pp. 60–73, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg. ISBN 978-3-642-11814-2.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. 12 2013. URL http://arxiv.org/abs/1312.5602.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2 2015. ISSN 14764687. doi: 10.1038/nature14236.

Ann Nowé, Peter Vrancx, and Yann Michaël De Hauwere. Game theory and multi-agent reinforcement learning. In *Adaptation, Learning, and Optimization*. 2012. doi: 10.1007/978-3-642-27645-3_14.

Frans A Oliehoek and Christopher Amato. *A Concise Introduction to Decentralized POMDPs*. 2016. ISBN 978-3-319-28927-4.
URL http://link.springer.com/10.1007/978-3-319-28929-8.

Frans A. Oliehoek, Matthijs T.J. Spaan, and Nikos Vlassis. Optimal and approximate Q-value functions for decentralized POMDPs. *Journal of Artificial Intelligence Research*, 32:289–353, 10 2008a. ISSN 10769757. doi: 10.1613/jair.2447. URL http://dx.doi.org/10.1613/jair.2447.

Frans A. Oliehoek, Matthijs T.J. Spaan, Shimon Whiteson, and Nikos Vlassis. Exploiting locality of interaction in factored Dec-POMDPs. In *Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS*, volume 1, 2008b.

Frans A. Oliehoek, Stefan J. Witwicki, and Leslie P. Kaelbling. Influence-based abstraction for multiagent systems. In *Proceedings of the National Conference on Artificial Intelligence*, 2012. ISBN 9781577355687.

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. *Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80*, 3 2018. URL https://arxiv.org/abs/1803.11485v2.

Mikayel Samvelyan, Tabish Rashid, Christian Schroeder de Witt, Gregory Farquhar, Nantas Nardelli, Tim G. J. Rudner, Chia-Man Hung, Philip H. S. Torr, Jakob Foerster, and Shimon Whiteson. The StarCraft Multi-Agent Challenge. *Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS*, 4:2186–2188, 2 2019. URL https://arxiv.org/abs/1902.04043v5.

Yoav Shoham, Rob Powers, and Trond Grenager. If multi-agent learning is the answer, what is the question? *Artificial Intelligence*, 2007. ISSN 00043702. doi: 10.1016/j.artint.2006.02.006.

David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, and Thore Graepel. Value-decomposition networks for cooperative multi-agent learning based on team reward. In *Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS*, volume 3, pp. 2085–2087, 6 2018. ISBN 9781510868083. URL http://arxiv.org/abs/1706.05296.

R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. *IEEE Transactions on Neural Networks*, 1998. ISSN 1045-9227. doi: 10.1109/tnn.1998.712192.

Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, and Raul Vicente. Multiagent Cooperation and Competition with Deep Reinforcement Learning. *PLoS ONE*, 12(4), 11 2015. URL http://arxiv.org/abs/1511.08779.

Ming Tan. Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents. In *Machine Learning Proceedings 1993*. 1993. doi: 10.1016/b978-1-55860-307-3.50049-6.

Karl Tuyls and Gerhard Weiss. Multiagent learning: Basics, challenges, and prospects. In *AI Magazine*, 2012. doi: 10.1609/aimag.v33i3.2426.

Hado van Hasselt, Arthur Guez, and David Silver.
Deep Reinforcement Learning with Double Q-learning. *30th AAAI Conference on Artificial Intelligence, AAAI 2016*, pp. 2094–2100, 9 2015. URL https://arxiv.org/abs/1509.06461v3.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 2017-December, pp. 5999–6009, 2017.

Jianhao Wang, Zhizhou Ren, Terry Liu, Yang Yu, and Chongjie Zhang. QPLEX: Duplex Dueling Multi-Agent Q-Learning. *International Conference on Learning Representations*, 8 2021. URL https://arxiv.org/abs/2008.01062.

Rose E. Wang, Michael Everett, and Jonathan P. How. R-MADDPG for Partially Observable Environments and Limited Communication. 2 2020. ISSN 2331-8422. URL http://arxiv.org/abs/2002.06684.

Tonghan Wang and Heng Dong. ROMA: Multi-Agent reinforcement learning with emergent roles. In *37th International Conference on Machine Learning, ICML 2020*, volume PartF16814, pp. 9818–9828, 3 2020. ISBN 9781713821120. URL http://arxiv.org/abs/2003.08039.

Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In *33rd International Conference on Machine Learning, ICML 2016*, volume 4, pp. 2939–2947, 11 2016. ISBN 9781510829008. URL https://arxiv.org/abs/1511.06581.

Lily Xu, Shahrzad Gholami, Sara Mc Carthy, Bistra Dilkina, Andrew Plumptre, Milind Tambe, Rohit Singh, Mustapha Nsubuga, Joshua Mabonga, Margaret Driciru, Fred Wanyama, Aggrey Rwetsiba, Tom Okello, and Eric Enyel. Stay ahead of poachers: Illegal wildlife poaching prediction and patrol planning under uncertainty with field test evaluations (Short Version). *Proceedings - International Conference on Data Engineering*, 2020-April:1898–1901, 4 2020. doi: 10.1109/ICDE48307.2020.00198.

Yaodong Yang, Jianye Hao, Ben Liao, Kun Shao, Guangyong Chen, Wulong Liu, and Hongyao Tang. Qatten: A General Framework for Cooperative Multiagent Reinforcement Learning. 2 2020. URL https://arxiv.org/abs/2002.03939v2.

Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, and Yi Wu. The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games. 3 2021. URL http://arxiv.org/abs/2103.01955.

Tom Zahavy, Matan Haroush, Nadav Merlis, Daniel J. Mankowitz, and Shie Mannor. Learn what not to learn: Action elimination with deep reinforcement learning. In *Advances in Neural Information Processing Systems*, volume 2018-December, pp. 3562–3573, 9 2018. URL http://arxiv.org/abs/1809.02121.

## A StarCraft Multi-Agent Challenge

The complete information about the SMAC benchmark can be found in the introductory paper (Samvelyan et al., 2019). Table 2 lists the 14 different maps of the challenge with the units in each team. Table 3 lists the number of parameters of the centralized component of LAN, QMIX and QPLEX for the 14 maps.
| Map Name | Ally Units | Enemy Units |
|--------------|------------------------------------|------------------------------------|
| 2s3z | 2 Stalkers & 3 Zealots | 2 Stalkers & 3 Zealots |
| 3s5z | 3 Stalkers & 5 Zealots | 3 Stalkers & 5 Zealots |
| 1c3s5z | 1 Colossus, 3 Stalkers & 5 Zealots | 1 Colossus, 3 Stalkers & 5 Zealots |
| 5m_vs_6m | 5 Marines | 6 Marines |
| 10m_vs_11m | 10 Marines | 11 Marines |
| 27m_vs_30m | 27 Marines | 30 Marines |
| 3s5z_vs_3s6z | 3 Stalkers & 5 Zealots | 3 Stalkers & 6 Zealots |
| MMM2 | 1 Medivac, 2 Marauders & 7 Marines | 1 Medivac, 3 Marauders & 8 Marines |
| 2s_vs_1sc | 2 Stalkers | 1 Spine Crawler |
| 3s_vs_5z | 3 Stalkers | 5 Zealots |
| 6h_vs_8z | 6 Hydralisks | 8 Zealots |
| bane_vs_bane | 20 Zerglings & 4 Banelings | 20 Zerglings & 4 Banelings |
| 2c_vs_64zg | 2 Colossi | 64 Zerglings |
| corridor | 6 Zealots | 24 Zerglings |

Table 2: The different maps of SMAC.

| | LAN | QPLEX | QMIX |
|--------------|-----|-------|------|
| 2s3z | 62 | 50 | 36 |
| 3s5z | 74 | 90 | 60 |
| 1c3s5z | 83 | 113 | 73 |
| 5m_vs_6m | 56 | 43 | 32 |
| 10m_vs_11m | 68 | 106 | 70 |
| 27m_vs_30m | 111 | 709 | 283 |
| 3s5z_vs_3s6z | 76 | 95 | 63 |
| MMM2 | 86 | 136 | 85 |
| 2s_vs_1sc | 46 | 18 | 12 |
| 3s_vs_5z | 54 | 31 | 22 |
| 6h_vs_8z | 61 | 59 | 42 |
| bane_vs_bane | 125 | 555 | 241 |
| 2c_vs_64zg | 119 | 116 | 72 |
| corridor | 79 | 109 | 69 |

Table 3: Number of parameters (×1000) of the value function in LAN vs. the mixing network in QPLEX/QMIX.

## B Implementation Details

We use neural networks with ReLU activation functions to approximate the local advantages and the centralized value. To increase the learning speed and reduce the number of parameters, we share the neural network weights of the local advantages between all the agents. The input of the advantage network is conditioned on the agent ID so that the policy can differ per agent. The advantage network is composed of 2 hidden layers, a 64-unit feed-forward layer followed by a 64-unit GRU, which is consistent with the architecture used in the SOTA algorithms to represent the decentralized utilities (Rashid et al., 2018; Wang et al., 2021). The centralized value network (Figure 2, left) first computes an embedding of the hidden state h_a of each agent using a feed-forward network of 128 units. The agents' embeddings are then merged together by summing them, resulting in a joint history embedding of fixed size. This joint history embedding is then concatenated with the real state provided by the environment to create a state-history embedding. Finally, this state-history embedding goes through a feed-forward network of two hidden layers of 128 units to compute the value.

We train LAN for 2 million timesteps using a replay buffer of 5k episodes. During training we use an ε-greedy exploration strategy over the local advantages, with ε decaying from 1 to 0.05 over the first 50k timesteps. After every episode we optimize both networks twice using Adam with a learning rate of 5e−4 and without TD(λ). For each update we sample a batch of 32 episodes from the replay buffer. The DQN targets are computed with a target network that is updated every 200 gradient updates. We clip the norm of the gradient to 10. We note that LAN does not require parameter sharing, and that each type of agent could have its own model. In that case, every agent type also needs its own embedding network to compute the embedding of h_a.
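To make the description of the centralized value network concrete, the following is a minimal PyTorch-style sketch. The layer sizes follow the text above, but the class interface, the placement of the activation inside the embedding, and the scalar output head are our assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class CentralizedValue(nn.Module):
    """Sketch of LAN's centralized value network (Appendix B): per-agent
    embeddings of the GRU hidden states are summed into a fixed-size joint
    history embedding, concatenated with the real state, and passed through
    a feed-forward network with two hidden layers of 128 units."""

    def __init__(self, state_dim, hidden_dim=64, embed_dim=128):
        super().__init__()
        self.embed = nn.Linear(hidden_dim, embed_dim)  # shared across agents
        self.head = nn.Sequential(
            nn.Linear(embed_dim + state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, hidden_states, state):
        # hidden_states: (n_agents, hidden_dim) GRU states of the local
        # advantage networks; state: (state_dim,) real state from the env.
        joint = torch.relu(self.embed(hidden_states)).sum(dim=0)
        return self.head(torch.cat([joint, state], dim=-1))
```

Summing the per-agent embeddings keeps the input of the head fixed-size, which is what makes the number of parameters of the value function independent of the number of agents.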
## C Remaining maps of SMAC

![17_image_0.png](17_image_0.png)

Figure 6: Median battle win rate during learning on the last 5 SMAC maps.

Figure 6 includes the 5 SMAC maps that are not included in the main paper. The first map, 2s_vs_1sc, is an easy map, and LAN learns the perfect strategy as the other algorithms do. In the second and third maps, 2s3z and 1c3s5z, all the algorithms but IQL learn near-optimal policies. In 3s_vs_5z, LAN and QPLEX learn the optimal policy, followed closely by QMIX and VDN, which both reach around 85%. Finally, in the last map, 6h_vs_8z, no algorithm is able to score any wins. We note that the difference in performance between IQL and IQL (no fog) is consistent with the other maps: removing the fog of war does not increase performance.

## D Discussion regarding the advantage

![18_image_0.png](18_image_0.png)

Figure 7: Median battle win rate during learning on all the SMAC maps.

Figure 7 shows the performance on all the SMAC maps of a variation of LAN, called LAN mean, which applies Equation 6. While in two maps (3s5z and 27m_vs_30m) the mean version of LAN improves over the classical version, it degrades the performance in other maps such as 5m_vs_6m, 2c_vs_64zg, and MMM2, and prevents learning in corridor and 3s5z_vs_3s6z. This empirically shows that while Equation 6 stabilizes learning in the single-agent case, this might not hold when multiple agents are involved.

## E Algorithm

Algorithm 1: Local Advantage Networks (LAN)

Input: Agent set N; replay memory capacity MC; frequency of target update C; exploration rate ε

1: Initialize replay memory D with capacity MC
2: Initialize centralized value function V with random weights
3: Initialize, for each agent a ∈ N, the local advantage function A_a (containing RNN_a) with random weights
4: Initialize target value function V_t with the weights of V, and for each agent a the target local advantage A_a,t with the weights of A_a
5: Initialize epsilon decay ε_decay and minimum epsilon ε_min
6: for each episode do
7:   Initialize empty episode memory E   // interaction with environment
8:   Reset environment; observe state s and joint observation o
9:   Reset hidden states: ∀a ∈ N, τ_a = 0
10:  Reset last actions: ∀a ∈ N, u_a = 0
11:  while episode is not finished do
12:    for each agent a ∈ N do
13:      Update hidden state: τ_a ← RNN_a(τ_a, u_a, o_a)
14:      Select action: u_a ← π_ε(A_a(τ_a))
15:    Execute joint action u
16:    Observe next state s', next joint observation o', reward r
17:    Store transition (s, o, u, r, s', o') in episode memory E
18:    Update current state: s ← s'
19:    Update current joint observation: o ← o'
20:  Store episode memory E in replay memory D
21:  Sample random batch B of episodes from D   // perform learning step
22:  for each episode e in the batch B do
23:    for each timestep t = 1 to the last step of the episode T(e) do
24:      Unroll the RNNs of the current and target networks
25:      For each agent a, compute the current Q̃ estimate using Equation 1: Q̃^π_a(s, τ, u_a) = V^π(s, τ) + A^{π_a}(τ_a, u_a)
26:      For each agent a, compute the TD target with the target networks using Equation 3: y_a = r + γ[V^π_t(s', τ') + A^{π_a}_t(τ'_a, argmax_{u'_a} A^{π_a}(τ'_a, u'_a))]
27:      For each agent a, compute TD_{a,e,t}, the temporal difference error
28:  Update V and the local advantages using gradient descent on the mean squared temporal difference error
29:  Every C steps, update the target networks: ∀a ∈ N, A_a,t ← A_a; V_t ← V
30:  Update exploration: ε ← max(ε · ε_decay, ε_min)
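To complement the pseudocode, here is a short Python sketch of the learning-step computations in lines 25 to 27 for a single transition. The function signatures and data layout are illustrative assumptions; the equations are the paper's Equations 1 and 3:

```python
import numpy as np

def lan_td_errors(V, V_t, A, A_t, s, s_next, tau, tau_next, u, r, gamma=0.99):
    """Sketch of lines 25-27 of Algorithm 1 for one transition.

    V, V_t: online and target centralized value functions mapping
    (state, joint history) -> float. A, A_t: dicts mapping each agent to
    its online and target advantage functions, which map a local history
    to a vector of advantages (one entry per action)."""
    td = {}
    for a in A:
        # Line 25: current estimate of the Q-value proxy (Equation 1).
        q_tilde = V(s, tau) + A[a](tau[a])[u[a]]
        # Line 26: target (Equation 3). Greedy action from the online
        # advantage, evaluated with the target networks (double-DQN style).
        u_star = int(np.argmax(A[a](tau_next[a])))
        y = r + gamma * (V_t(s_next, tau_next) + A_t[a](tau_next[a])[u_star])
        # Line 27: temporal difference error for this agent.
        td[a] = y - q_tilde
    return td
```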
## F Proof

Episodic process. A POMDP P is episodic if it includes a special *reset state* that is fully observable by the agent, and if under any policy the environment is almost surely eventually reset. Furthermore, when the environment is reset, it transitions to the initial state. For this proof, we consider an agent a with policy π_a and the induced POMDP G_a obtained by fixing the policy of the other agents π_{−a} (defined in Section 4). Without any loss of generality, we augment G_a with an observable reset state so that G_a is episodic. This ensures the ergodicity of G_a, as every episodic process is ergodic or can be made ergodic without loss of generality (Huang, 2020), and consequently the existence of a stationary distribution p_π(s, ⟨τ_{−a}, τ_a⟩) = p_π(s, τ). As LAN learns greedy policies, we consider only deterministic policies.

## F.1 Warm-Up

By decomposing the next joint history τ′ as a tuple containing the new joint observation o′, the joint action u and the joint history τ, we obtain the following equality:

$$p\big(\tau' = \langle \mathbf{o}', \mathbf{u}, \tilde{\boldsymbol{\tau}} \rangle \mid s', \boldsymbol{\pi}(\boldsymbol{\tau}), \boldsymbol{\tau}\big) = \delta_{\tilde{\boldsymbol{\tau}}}(\boldsymbol{\tau})\, \delta_{\mathbf{u}}\big(\boldsymbol{\pi}(\boldsymbol{\tau})\big)\, O\big(\mathbf{o}' \mid s', \boldsymbol{\pi}(\boldsymbol{\tau})\big) \tag{7}$$

where δ_y(x) is the Kronecker delta symbol: it is equal to 1 if x = y and 0 otherwise. We can obtain a similar result for the next local history τ′_a:

$$p\big(\tau_a' = \langle o_a', u_a, \tilde{\tau}_a \rangle \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle\big) = \delta_{\tilde{\tau}_a}(\tau_a)\, \delta_{u_a}\big(\pi_a(\tau_a)\big)\, O_a\big(o_a' \mid s', \pi_a(\tau_a)\big) \tag{8}$$

For any local history τ_a of agent a that is realisable under the policy π_a, we can define the following conditional probability:

$$p(s, \tau_{-a} \mid \tau_a) = \frac{p_{\pi}(s, \langle \tau_{-a}, \tau_a \rangle)}{p(\tau_a)} = \frac{p_{\pi}(s, \langle \tau_{-a}, \tau_a \rangle)}{\mathbb{E}_{s', \langle \tau'_{-a}, \tau'_a \rangle \sim p_{\pi}}\, \delta_{\tau'_a}(\tau_a)} \tag{9}$$

For any history τ_a of agent a that is realisable under the policy π_a, and any next history τ′_a, we have:

$$p(\tau_a' \mid \tau_a) = \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))} p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) \tag{10}$$

Proof:

$$\begin{aligned} p(\tau_a' \mid \tau_a) &= \int_{s} \int_{s'} \int_{\tau_{-a}} p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle), s\big)\, p\big(s', \tau_{-a}, \pi(\langle \tau_{-a}, \tau_a \rangle), s \mid \tau_a\big)\, \mathrm{d}\tau_{-a}\, \mathrm{d}s'\, \mathrm{d}s && \text{(law of total probability)} \\ &= \int_{s} \int_{s'} \int_{\tau_{-a}} p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle), s\big)\, p(s, \tau_{-a} \mid \tau_a)\, P\big(s' \mid s, \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, \mathrm{d}\tau_{-a}\, \mathrm{d}s'\, \mathrm{d}s && \text{(chain rule)} \\ &= \int_{s} \int_{\tau_{-a}} p(s, \tau_{-a} \mid \tau_a) \int_{s'} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle), s\big)\, \mathrm{d}s'\, \mathrm{d}s\, \mathrm{d}\tau_{-a} && \text{(linearity)} \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))} p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle), s\big) && \text{(definition of expectation)} \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))} p\big(\tau_a' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) && \text{(conditional independence of } \tau_a' \text{ and } s \text{ given } s') \end{aligned}$$

For any history τ_a that is realisable under the policy π_a, and any next state s′ and next joint history τ′, we have:
$$p(s', \tau' \mid \tau_a) = \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, p\big(\tau' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) \tag{11}$$

Proof:

$$\begin{aligned} p(s', \tau' \mid \tau_a) &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} p\big(s', \tau' \mid s, \langle \tau_{-a}, \tau_a \rangle\big) && \text{(law of total probability)} \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} p\big(s' \mid s, \langle \tau_{-a}, \tau_a \rangle\big)\, p\big(\tau' \mid s', s, \langle \tau_{-a}, \tau_a \rangle\big) && \text{(chain rule)} \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, p\big(\tau' \mid s', s, \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) && \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, p\big(\tau' \mid s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) && \text{(cond. indep. of } \tau' \text{ and } s \text{ given } s', \langle \tau_{-a}, \tau_a \rangle, \pi(\langle \tau_{-a}, \tau_a \rangle)) \end{aligned}$$

## F.2 Unbiased Estimator

Theorem F.1. *For any agent a ∈ A, any realisable local history τ_a ∈ T_a, and any action u_a ∈ U_a, the Q-value proxy Q̃_a is an unbiased estimator of the local Q-value Q^{π_a}:*

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} \tilde{Q}_a(s, \langle \tau_{-a}, \tau_a \rangle, u_a) = Q^{\pi_a}(\tau_a, u_a) \tag{12}$$

Proof. We fix a ∈ A, u_a ∈ U_a, and τ_a ∈ T_a. Since Q̃_a(s, ⟨τ_{−a}, τ_a⟩, u_a) = V^π(s, ⟨τ_{−a}, τ_a⟩) + A^{π_a}(τ_a, u_a) and Q^{π_a}(τ_a, u_a) = V^{π_a}(τ_a) + A^{π_a}(τ_a, u_a), the advantage terms cancel and

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\tilde{Q}_a(s, \langle \tau_{-a}, \tau_a \rangle, u_a) - Q^{\pi_a}(\tau_a, u_a)\big] = \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)\big] \tag{13}$$

By definition we have:

$$V^{\pi}(s, \boldsymbol{\tau}) = r(s, \boldsymbol{\pi}(\boldsymbol{\tau})) + \gamma \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \boldsymbol{\pi}(\boldsymbol{\tau}))}\ \operatorname*{\mathbb{E}}_{\boldsymbol{\tau}' \sim p(\cdot \mid s', \boldsymbol{\pi}(\boldsymbol{\tau}), \boldsymbol{\tau})} V^{\pi}(s', \boldsymbol{\tau}')$$

$$V^{\pi_a}(\tau_a) = \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} r(s, \pi(\langle \tau_{-a}, \tau_a \rangle)) + \gamma \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle)} V^{\pi_a}(\tau_a')$$

We define Δ_r and Δ_p as follows:

$$\Delta_r(s, \langle \tau_{-a}, \tau_a \rangle) = r(s, \pi(\langle \tau_{-a}, \tau_a \rangle)) - \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)} r(\tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))$$

$$\Delta_p(s, \langle \tau_{-a}, \tau_a \rangle) = \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau' \sim p(\cdot \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle)} V^{\pi}(s', \tau') - \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}' \sim P(\cdot \mid \tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tilde{s}', \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle), \langle \tilde{\tau}_{-a}, \tau_a \rangle)} V^{\pi_a}(\tau_a')$$

This allows us to rewrite Equation 13 as:

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\tilde{Q}_a(s, \langle \tau_{-a}, \tau_a \rangle, u_a) - Q^{\pi_a}(\tau_a, u_a)\big] = \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\Delta_r(s, \langle \tau_{-a}, \tau_a \rangle)\big] + \gamma \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\Delta_p(s, \langle \tau_{-a}, \tau_a \rangle)\big] \tag{14}$$

Let us first focus on the first part of the RHS of Equation 14:

$$\begin{aligned} \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\Delta_r(s, \langle \tau_{-a}, \tau_a \rangle)\big] &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\Big[r(s, \pi(\langle \tau_{-a}, \tau_a \rangle)) - \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)} r(\tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))\Big] \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} r(s, \pi(\langle \tau_{-a}, \tau_a \rangle)) - \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)} r(\tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle)) && \text{(linearity of expectation)} \\ &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} r(s, \pi(\langle \tau_{-a}, \tau_a \rangle)) - \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)} r(\tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle)) && \text{(second part does not depend on } s, \tau_{-a}) \\ &= 0 \end{aligned}$$

Let us now focus on the second part of the RHS of Equation 14.
$$\begin{aligned} \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\Delta_p(s, \langle \tau_{-a}, \tau_a \rangle)\big] &= \underbrace{\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s' \sim P(\cdot \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau' \sim p(\cdot \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle)} V^{\pi}(s', \tau')}_{A} \\ &\quad - \underbrace{\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}' \sim P(\cdot \mid \tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tilde{s}', \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle), \langle \tilde{\tau}_{-a}, \tau_a \rangle)} V^{\pi_a}(\tau_a')}_{B} && \text{(linearity of expectation)} \end{aligned}$$

$$\begin{aligned} A &= \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} \int_{s'} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big) \int_{\tau'} p\big(\tau' \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle\big)\, V^{\pi}(s', \tau')\, \mathrm{d}\tau'\, \mathrm{d}s' && \text{(definition of expectation)} \\ &= \int_{s'} \int_{\tau'} \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)} P\big(s' \mid s, \pi(\langle \tau_{-a}, \tau_a \rangle)\big)\, p\big(\tau' \mid s', \pi(\langle \tau_{-a}, \tau_a \rangle), \langle \tau_{-a}, \tau_a \rangle\big)\, V^{\pi}(s', \tau')\, \mathrm{d}s'\, \mathrm{d}\tau' && \text{(linearity)} \\ &= \int_{s'} \int_{\tau'} p\big(s', \tau' \mid \tau_a\big)\, V^{\pi}(s', \tau')\, \mathrm{d}s'\, \mathrm{d}\tau' && \text{(see Eq. 11)} \\ &= \operatorname*{\mathbb{E}}_{s', \tau' \sim p(\cdot \mid \tau_a)} V^{\pi}(s', \tau') && \text{(definition of expectation)} \end{aligned}$$

$$\begin{aligned} B &= \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}' \sim P(\cdot \mid \tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))}\ \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tilde{s}', \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle), \langle \tilde{\tau}_{-a}, \tau_a \rangle)} V^{\pi_a}(\tau_a') && \text{(does not depend on } s, \tau_{-a}) \\ &= \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}' \sim P(\cdot \mid \tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))} \int_{\tau_a'} p\big(\tau_a' \mid \tilde{s}', \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle), \langle \tilde{\tau}_{-a}, \tau_a \rangle\big)\, V^{\pi_a}(\tau_a')\, \mathrm{d}\tau_a' && \text{(definition of expectation)} \\ &= \int_{\tau_a'} \operatorname*{\mathbb{E}}_{\tilde{s}, \tilde{\tau}_{-a} \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tilde{s}' \sim P(\cdot \mid \tilde{s}, \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle))} p\big(\tau_a' \mid \tilde{s}', \pi(\langle \tilde{\tau}_{-a}, \tau_a \rangle), \langle \tilde{\tau}_{-a}, \tau_a \rangle\big)\, V^{\pi_a}(\tau_a')\, \mathrm{d}\tau_a' && \text{(linearity)} \\ &= \int_{\tau_a'} p\big(\tau_a' \mid \tau_a\big)\, V^{\pi_a}(\tau_a')\, \mathrm{d}\tau_a' && \text{(see Eq. 10)} \\ &= \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)} V^{\pi_a}(\tau_a') && \text{(definition of expectation)} \end{aligned}$$

By using the values of A and B we get:

$$\begin{aligned} \operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\Delta_p(s, \langle \tau_{-a}, \tau_a \rangle)\big] &= \operatorname*{\mathbb{E}}_{s', \tau' \sim p(\cdot \mid \tau_a)} V^{\pi}(s', \tau') - \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)} V^{\pi_a}(\tau_a') \\ &= \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s', \tau_{-a}' \sim p(\cdot \mid \tau_a, \tau_a')} V^{\pi}(s', \langle \tau_{-a}', \tau_a' \rangle) - \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)} V^{\pi_a}(\tau_a') && \text{(chain rule)} \\ &= \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s', \tau_{-a}' \sim p(\cdot \mid \tau_a')} V^{\pi}(s', \langle \tau_{-a}', \tau_a' \rangle) - \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)} V^{\pi_a}(\tau_a') && (\tau_a' \text{ contains } \tau_a) \\ &= \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s', \tau_{-a}' \sim p(\cdot \mid \tau_a')}\big[V^{\pi}(s', \langle \tau_{-a}', \tau_a' \rangle) - V^{\pi_a}(\tau_a')\big] && \text{(linearity)} \end{aligned}$$

Therefore we obtain:

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)\big] = \gamma \operatorname*{\mathbb{E}}_{\tau_a' \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{s', \tau_{-a}' \sim p(\cdot \mid \tau_a')}\big[V^{\pi}(s', \langle \tau_{-a}', \tau_a' \rangle) - V^{\pi_a}(\tau_a')\big] \tag{15}$$

By applying Equation 15 recursively n times we obtain:

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)\big] = \gamma^n \operatorname*{\mathbb{E}}_{\tau_a^1 \sim p(\cdot \mid \tau_a)}\ \operatorname*{\mathbb{E}}_{\tau_a^2 \sim p(\cdot \mid \tau_a^1)} \cdots \operatorname*{\mathbb{E}}_{\tau_a^n \sim p(\cdot \mid \tau_a^{n-1})}\ \operatorname*{\mathbb{E}}_{s^n, \tau_{-a}^n \sim p(\cdot \mid \tau_a^n)}\big[V^{\pi}(s^n, \langle \tau_{-a}^n, \tau_a^n \rangle) - V^{\pi_a}(\tau_a^n)\big] \tag{16}$$

We then define $R_{\max} = \max_{s \in \mathcal{S}} \max_{\mathbf{u} \in \mathcal{U}} |R(s, \mathbf{u})|$.
This allows us to bound the difference between the centralized value and the local value:

$$\forall s \in \mathcal{S}, \boldsymbol{\tau} \in \mathcal{T}: \quad |V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)| \leq |V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle)| + |V^{\pi_a}(\tau_a)| \leq \frac{R_{\max}}{1-\gamma} + \frac{R_{\max}}{1-\gamma} = \frac{2R_{\max}}{1-\gamma}$$

where the first inequality is the triangle inequality and the second uses the standard upper bound on the value. This allows us to bound the LHS of Equation 16:

$$\begin{aligned} \Big|\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)\big]\Big| &= \Big|\gamma^n \operatorname*{\mathbb{E}}_{\tau_a^1 \sim p(\cdot \mid \tau_a)} \cdots \operatorname*{\mathbb{E}}_{s^n, \tau_{-a}^n \sim p(\cdot \mid \tau_a^n)}\big[V^{\pi}(s^n, \langle \tau_{-a}^n, \tau_a^n \rangle) - V^{\pi_a}(\tau_a^n)\big]\Big| \\ &\leq \gamma^n \operatorname*{\mathbb{E}}_{\tau_a^1 \sim p(\cdot \mid \tau_a)} \cdots \operatorname*{\mathbb{E}}_{s^n, \tau_{-a}^n \sim p(\cdot \mid \tau_a^n)}\big|V^{\pi}(s^n, \langle \tau_{-a}^n, \tau_a^n \rangle) - V^{\pi_a}(\tau_a^n)\big| && \text{(Jensen's inequality)} \\ &\leq \gamma^n\, \frac{2R_{\max}}{1-\gamma} && \text{(bound above)} \end{aligned}$$

As γ ∈ (0, 1), letting n → ∞ we obtain:

$$\Big|\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[V^{\pi}(s, \langle \tau_{-a}, \tau_a \rangle) - V^{\pi_a}(\tau_a)\big]\Big| \leq 0$$

And finally, using Equation 13:

$$\operatorname*{\mathbb{E}}_{s, \tau_{-a} \sim p(\cdot \mid \tau_a)}\big[\tilde{Q}_a(s, \langle \tau_{-a}, \tau_a \rangle, u_a) - Q^{\pi_a}(\tau_a, u_a)\big] = 0$$

## G Additional Experiment

We conducted an evaluation of LAN and the selected baseline algorithms within a modified version of the simple spread environment from the Multi-Agent Particle Environment suite (MPE) (Lowe et al., 2017). In the original environment, three agents are tasked to spread efficiently across three landmarks while avoiding collisions with one another. The reward structure combines two components: a) the cumulative negative distance between each landmark and its closest agent; b) penalties for collisions between agents. Both agents and landmarks are randomly spawned on the map at the beginning of an episode. Notably, we introduced partial observability into the environment, restricting agents to observe only those agents and landmarks within a fixed radius. Additionally, modifications were made to the environment, allowing for any number of agents while maintaining a constant ratio between the environment size and the agent count.

![25_image_0.png](25_image_0.png)

Figure 8: Median return during learning on the simple spread environment of MPE with 3, 6 and 9 agents. Each algorithm is run on 10 different seeds. We train the agents for 2 million steps and plot the median as well as the first and third quartiles.

Figure 8 depicts the median return obtained by LAN and the baseline algorithms with 3, 6, and 9 agents. The results are averaged over 10 runs. The hyper-parameters from SMAC were adopted without further tuning. The results consistently demonstrate LAN's accelerated learning compared to the other algorithms in all three instances. With 3 agents, both QPLEX and QMIX exhibit slightly inferior performance relative to LAN. In contrast, VDN significantly underperforms in comparison to LAN, while IQL appears to struggle to learn. With 6 and 9 agents, the learning curves of LAN, QPLEX, QMIX, and VDN align closely, eventually reaching similar performance levels. However, we note that LAN consistently achieves quicker convergence. IQL fails to learn a good policy in all the instances.
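For reference, a minimal NumPy sketch of the partial-observability modification described above. The exact encoding of unseen entities in the authors' environment is an assumption; here we simply zero them out:

```python
import numpy as np

def masked_observation(agent_pos, entity_pos, radius):
    """An agent observes the relative position of an entity (other agents
    and landmarks) only if it lies within a fixed radius; entities outside
    the radius are hidden (zeroed out)."""
    rel = entity_pos - agent_pos                       # relative positions, shape (n, 2)
    visible = np.linalg.norm(rel, axis=-1) <= radius   # boolean mask, shape (n,)
    return np.where(visible[:, None], rel, 0.0)
```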
Review 1: Summary: This paper studies a new training scheme for multi-agent deep reinforcement learning (MARL). Specifically, instead of learning a decomposable centralized Q function, the authors proposed to learn a centralized value function and, for each agent, a decentralized local advantage function. Each agent's Q-value proxy is then constructed by summing the centralized value function and the decentralized local advantage function. The authors claim that the proposed approach mitigates the moving target and credit assignment issues in MARL. The proposed approach is evaluated on a subset of the StarCraft II multi-agent challenge benchmark.

Strengths and Weaknesses: Strength: 1. This paper proposed a new direction for estimating the Q functions of agents in CTDE settings. The idea of using an individual advantage function with a centralized value function to approximate the Q function is less explored in our MARL community. 2. The paper is generally well-organized and most of the technical contents are presented clearly. 3. The related work section adequately discusses and compares with existing works on MARL.

Weakness: 1. The reviewer has some concerns about the experiments. Firstly, the proposed approach could be considered as an improved version of an independent learner-based approach. However, there is no comparison with independent approaches such as Independent PPO. The reviewer noted that the authors mentioned some difficulty in comparing with IPPO in Section 3. However, without a comparison with the SOTA independent approach, it is hard to evaluate the contribution and effectiveness of the proposed approach. 2. Could you comment on why QPLEX is still on par with or outperforming the proposed approach? 3. The authors highlighted addressing the credit assignment problem as one of the main contributions. However, based on (5), it seems quite similar to the counterfactual baseline proposed in COMA.

Requested Changes: 1. Compare with IPPO and MAPPO. 2. For scalability, the multi-agent particle environments (MPE), which could easily be scaled to hundreds of agents, could be considered.

Broader Impact Concerns: No additional Broader Impact Statement is needed.

==================================================

Review 2: Summary: The authors are introducing a simple approach to the MARL problem: learn a centralized value function shared by all agents and individual advantage functions. They perform experiments on the standard SCII micromanagement benchmark tasks and show, in particular, better results on the "super-hard" maps. I believe that the authors have done a comprehensive job of addressing the critique from the first submission.

Strengths and Weaknesses: The simplicity and efficiency of the approach is appealing.

Requested Changes: I have no further requests.

Broader Impact Concerns: No particular concerns.

==================================================

Review 3: Summary: This paper proposes a modification to the centralized training and decentralized execution (CTDE) based approach in partially observable cooperative multi-agent reinforcement learning (MARL) environments. The provided local advantage network (LAN) framework maintains a central state value function (unlike CTDE, where a central state-action value function is maintained), which is only used during training (as in the CTDE approach) and can use the centralized global state variable during training (as in the CTDE approach).
There are distinct local actors (trained via local advantage estimates) for each agent that are learned in a decentralized fashion during training and used for execution (similar to actors in CTDE); however, the local advantage estimates use a proxy Q function for training (unlike CTDE). The paper contains detailed experimentation in the StarCraft Multi-Agent Challenge (SMAC) that shows better performance for LAN compared to several strong state-of-the-art baselines in multiple SMAC settings.

Strengths and Weaknesses: I have gone through all the reviews and comments in the previous submission of this paper to TMLR. I appreciate that the authors have meticulously addressed almost all the concerns of the previous set of reviewers (still, some important concerns are not addressed; see below). I will add my review of the paper, trying to avoid repetition with the previous set of reviews as much as possible.

Strengths: 1. The approach is intuitive and the paper is clearly written. 2. Claims for why the proposed method is useful are detailed and supported by experiments. 3. Mostly, the experiments support the claims made by the paper.

Major Weaknesses: 1. Claims as intuitions: The authors note that they have separated four claims into two claims and two intuitions from their previous submission. The previous reviewers have mentioned that LAN's performance regarding better update targets (BT) and reducing learning complexity (LC) are unsupported. The authors have moved these two claims to intuitions since they mentioned that these two claims cannot be supported by experiments. I think the intuitions are an extremely hand-wavy way of resolving this problem. It is definitely possible to come up with experiments to support these claims. For example, regarding the claim of better update targets, one can think of a plot where the episodes are on the x-axis and the distance (such as the max-norm) between the target Q estimates and the evaluated Q estimates is plotted for LAN and other baselines in any cooperative MARL testbed. If such experiments cannot be performed, I would recommend that the intuitions be removed from the paper (since these are just unsupported claims).

2. I am uncomfortable with the proxy Q function in Equation 1. This function seems to be an approximation of the multi-agent real Q function, but there are no approximation bounds provided in the paper. If the approximation is unbounded, I am not sure how this Q function helps in learning better policies as claimed in the paper.

3. The paper is proposing a new algorithm but has no pseudo-code either in the main paper or the appendix. I am not sure in which order the updates for the different networks happen and what exactly the target values are for each network. This point was raised by the previous action editor and I think the authors must add a complete pseudo-code of their algorithm to address this (while the authors only have a passing reference to the target value in their current submission).

4. All the experimental results are provided in SMAC. The authors must either include strong justifications for why good performance in SMAC is sufficient to establish the strengths of their method as compared to other baselines, or provide results in at least one other cooperative multi-agent testbed.

5. I am not able to understand the term "learning complexity". Hence, I am unable to follow Intuition 2 in the paper. Please provide a precise definition of this term in the paper.
The previous action editor also commented on this (does this refer to sample complexity or something else?).

6. What is meant by an algorithm being able to represent all policies? On page 5 the authors mention "We note that QPLEX can also represent all these policies at the cost of a more complex architecture". I am not sure what particular advantage of LAN over QPLEX is being presented here.

Minors: 1. Section 2: an agents -> an agent 2. Section 2: learning directly Q -> learning Q directly 3. Section 2: to extends -> to extend 4. Section 3: off-environment -> 'please revise, I am not sure what the meaning is here' 5. Section 6: embodies this alternative -> 'awkward, please consider revising'

Requested Changes: Please address all my comments in the Weaknesses above.

Broader Impact Concerns: N/A

==================================================

Metareview: Recommendation: Accept with minor revision

Comment: I don't think the issue of comparison with simple independent PPO-based learning methods should be dismissed as easily as in the authors' discussion at the end of Section 3. I would like to see the authors add a further concluding sentence to that paragraph, clarifying something to the effect that this work does not make any claims about the effectiveness of its approach in comparison to such methods, and maybe that such a comparison is deserving of further attention, or something like that. I'll leave it to the authors to determine exactly how to word such a disclaimer.

==================================================
# Numerically Robust Fixed-Point Smoothing Without State Augmentation

Anonymous authors Paper under double-blind review

## Abstract

Practical implementations of Gaussian smoothing algorithms have received a great deal of attention in the last 60 years. However, almost all work focuses on estimating complete time series ("fixed-*interval* smoothing", O(K) memory) through variations of the Rauch–Tung–Striebel smoother, rarely on estimating the initial states ("fixed-*point* smoothing", O(1) memory). Since fixed-point smoothing is a crucial component of algorithms for dynamical systems with unknown initial conditions, we close this gap by introducing a new formulation of a Gaussian fixed-point smoother. In contrast to prior approaches, our perspective admits a numerically robust Cholesky-based form (without downdates) and avoids state augmentation, which would needlessly inflate the state-space model and reduce the numerical practicality of any fixed-point smoother code. The experiments demonstrate how a JAX implementation of our algorithm matches the runtime of the fastest methods and the robustness of the most robust techniques, while existing implementations must always sacrifice one for the other. Code: [Link redacted for anonymity; code in supplement]

## 1 Introduction

Bayesian filtering and smoothing enjoy numerous applications in areas like tracking, navigation, or control (Grewal and Andrews, 2014; Särkkä and Svensson, 2023). They also serve as the computational backbone of popular machine learning methods, including message passing, sequence modelling, Gaussian processes, and probabilistic numerics (Grewal and Andrews, 2014; Särkkä and Solin, 2019; Murphy, 2023; Hennig et al., 2022). Recent developments in Bayesian smoothing focus on subjects like linearisation (e.g. García-Fernández et al., 2016), temporal parallelisation (e.g. Särkkä and García-Fernández, 2020), or numerically robust implementations (e.g. Yaghoobi et al., 2022). However, they all exclusively focus on fixed-*interval* smoothing, which targets the full time series p(x0:K | y1:K) in O(K) memory, never on fixed-*point* smoothing, which only yields the initial value p(x0 | y1:K) but does so in O(1) memory. Even though fixed-point smoothing is a crucial component of, for example, estimating past locations of a spacecraft (Meditch, 1969), it has yet to receive much attention in the literature on state estimation in dynamical systems. The experiments in Section 4 demonstrate fixed-point smoothing applications in probabilistic numerics and in a tracking problem.

Contributions To close this gap, we introduce an implementation of numerically robust fixed-point smoothing. Our approach avoids the typical construction of fixed-point smoothers via state augmentation (Biswas and Mahalanabis, 1972; Smith and Roberts, 1982), which increases the dimension of the state-space model and thus makes estimation needlessly expensive. And unlike previous work on fixed-point smoothing (Meditch, 1967a; Sarkka and Hartikainen, 2010; Rauch, 1963; Meditch, 1967b; 1976; Meditch and Hostetter, 1973; Nishimura, 1969), our perspective is not tied to any particular parametrisation of Gaussian variables. Instead, our proposed algorithm enjoys numerical robustness and compatibility with data streams, while maintaining minimal complexity. There exist tools for each of those desiderata, but our algorithm is the first to deliver them all at once.
We demonstrate the algorithm's efficiency and robustness on a sequence of test problems, including a probabilistic numerical method for boundary value problems (Krämer and Hennig, 2021), and show how to use fixed-point smoothing for parameter estimation in Gaussian state-space models.

Notation Enumerated sets are abbreviated with subscripts, for example $x_{0:K} := \{x_0, ..., x_K\}$ and $y_{1:K} := \{y_1, ..., y_K\}$. For sequential conditioning of Gaussian variables (like in the Kalman filter equations), we indicate the most recently visited data points with subscripts in the parameter vectors, for example, $p(x_k \mid y_{1:k-1}) = \mathcal{N}(m_{k|k-1}, C_{k|k-1})$ or $\mathcal{N}(m_{K|K}, C_{K|K}) = p(x_K \mid y_{1:K})$. $I_n$ is the identity matrix with $n$ rows and columns. All covariance matrices shall be symmetric and positive semidefinite. Like existing work on numerically robust state-space model algorithms (e.g. Grewal and Andrews, 2014; Krämer, 2024), we define the *generalised Cholesky factor* $L_\Sigma$ of a covariance matrix $\Sigma$ as any matrix that satisfies $\Sigma = L_\Sigma (L_\Sigma)^\top$. This definition includes the "true" Cholesky factor if $\Sigma$ is positive definite but applies to semidefinite $\Sigma$; for instance, the zero matrix is the generalised Cholesky factor of the zero matrix (which does not admit a Cholesky decomposition). There are numerous ways of parametrising multivariate Gaussian distributions, but we exclusively focus on two kinds. Like Yaghoobi et al. (2022), we distinguish:

- *Covariance-based parametrisations:* Parametrise multivariate Gaussian distributions with means and covariance matrices. Covariance-based parametrisations are the standard approach.
- *Cholesky-based parametrisations:* Parametrise multivariate Gaussian distributions with means and generalised Cholesky factors of covariance matrices instead of covariance matrices. Manipulating Gaussian variables in Cholesky-based parametrisations replaces the addition and subtraction of covariance matrices with QR decompositions, which improves the numerical robustness at the cost of a slightly increased runtime; it leads to methods like the square-root Kalman filter (Bennett, 1965).

Other forms, such as the information or canonical form of a multivariate Gaussian distribution (Murphy, 2022), are not directly relevant to this work.

## 2 Problem Statement: Fixed-Point Smoothing

## 2.1 Background On Filtering And Smoothing

Linear Gaussian state-space models This work only discusses linear, discrete state-space models with additive Gaussian noise because such models are the starting point for Bayesian filtering and smoothing. Nonlinear extensions are future work. For integers $d$ and $D$, let $A_1, ..., A_K \in \mathbb{R}^{D \times D}$ and $H_1, ..., H_K \in \mathbb{R}^{d \times D}$ be linear operators and $C_{0|0}, B_1, ..., B_K \in \mathbb{R}^{D \times D}$ and $R_1, ..., R_K \in \mathbb{R}^{d \times d}$ be covariance matrices. Introduce vectors $m_{0|0}, b_1, ..., b_K \in \mathbb{R}^D$ and $r_1, ..., r_K \in \mathbb{R}^d$. Assume that observations $y_{1:K}$ are available according to the state-space model (and potentially as a data stream)

$$x_0 = \theta, \quad x_k = A_k x_{k-1} + b_k, \quad y_k = H_k x_k + r_k, \quad k = 1, ..., K, \tag{1}$$

with pairwise independent Gaussian random variables

$$\theta \sim \mathcal{N}(m_{0|0}, C_{0|0}), \quad b_k \sim \mathcal{N}(0, B_k), \quad r_k \sim \mathcal{N}(0, R_k), \quad k = 1, ..., K. \tag{2}$$

In order to simplify the notation, Equations 1 and 2 assume that the Gaussian variables $b_{1:K}$ and $r_{1:K}$ have a zero mean and that there is no observation of the initial state $x_0 = \theta$. The experiments in Section 4 demonstrate successful fixed-point smoothing even when violating those two assumptions.
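As a concrete reference for the model above, here is a minimal NumPy sketch that samples a trajectory from Equations 1 and 2 (time-invariant operators A, H, B, R for brevity; the function name is ours):

```python
import numpy as np

def simulate_lgssm(A, H, B, R, m0, C0, K, seed=0):
    """Draw x_{0:K} and y_{1:K} from the linear Gaussian state-space
    model of Equations 1-2 with time-invariant A_k = A, H_k = H, etc."""
    rng = np.random.default_rng(seed)
    D, d = A.shape[0], H.shape[0]
    x = rng.multivariate_normal(m0, C0)  # x_0 = theta ~ N(m_{0|0}, C_{0|0})
    xs, ys = [x], []
    for _ in range(K):
        x = A @ x + rng.multivariate_normal(np.zeros(D), B)           # x_k = A x_{k-1} + b_k
        ys.append(H @ x + rng.multivariate_normal(np.zeros(d), R))    # y_k = H x_k + r_k
        xs.append(x)
    return np.asarray(xs), np.asarray(ys)
```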
Estimating $x_{1:K}$ from observations $y_{1:K}$ is a standard setup for filtering and smoothing (Särkkä and Svensson, 2023).

Kalman filtering Different estimators target different conditional distributions (Table 1). For example, the Kalman filter (Kalman, 1960) computes $p(x_K \mid y_{1:K})$ by initialising $p(x_0 \mid y_{1:0}) = p(x_0)$ and alternating

$$\text{Prediction:} \quad p(x_{k-1} \mid y_{1:k-1}) \longmapsto p(x_k \mid y_{1:k-1}), \quad k = 1, ..., K \tag{3a}$$

$$\text{Update:} \quad p(x_k \mid y_{1:k-1}) \longmapsto p(x_k \mid y_{1:k}), \quad k = 1, ..., K. \tag{3b}$$

Since all operations are linear and all distributions Gaussian, the recursions are available in closed form. Detailed iterations are in Appendix A.

Table 1: *Estimation in Gaussian state-space models.* Other estimation tasks exist, for example, fixed-lag smoothing. However, this article focuses on fixed-point smoothing and its relationship with filtering and fixed-interval smoothing. "RTS smoother": "Rauch–Tung–Striebel smoother".

| Task | Target |
|----------------------------------------------|----------------------------------------|
| Filtering (e.g. Kalman filter) | Real-time updates $\{p(x_k \mid y_{1:k})\}_{k=1}^{K}$ |
| Fixed-interval smoothing (e.g. RTS smoother) | Full time series $p(x_{0:K} \mid y_{1:K})$ |
| Fixed-point smoothing | Initial state $p(x_0 \mid y_{1:K})$ |

![2_image_0.png](2_image_0.png)

Figure 1: Fixed-point (left) versus fixed-interval smoothing problem (right) as factor graphs. The shaded variables are observed. "QoI": "Quantity of interest".

An essential extension of the Kalman filter is the square-root Kalman filter (Bennett, 1965; Andrews, 1968) (see also Grewal and Andrews (2014)), which yields the same distributions as the Kalman filter but manipulates Cholesky factors of covariance matrices instead of covariance matrices. The advantage of the Cholesky-based implementation of the Kalman filter over the covariance-based version is that covariance matrices are guaranteed to remain symmetric and positive semidefinite, whereas accumulated round-off errors can make the covariance-based Kalman filter break down. In this work, we introduce Cholesky-based implementations for fixed-point smoothing, among other things. Both the Kalman filter and its Cholesky-based extension cost $O(KD^3)$ runtime and $O(D^2)$ memory, assuming $D \geq d$ (otherwise, exchange $D$ for $d$). The Cholesky-based filter is slightly more expensive because it uses QR decompositions instead of matrix multiplication; Appendix A contrasts Cholesky-based and covariance-based Kalman filtering.

Fixed-interval smoothing While the filter computes the terminal state $p(x_K \mid y_{1:K})$, the *fixed-interval* smoother targets the entire trajectory $p(x_{0:K} \mid y_{1:K})$. For linear Gaussian state-space models, fixed-interval smoothing is implemented by the (fixed-interval) Rauch–Tung–Striebel smoother (Rauch et al., 1965): Relying on independent noise in Equation 2, the Rauch–Tung–Striebel smoother factorises the conditional distribution backwards in time according to (interpret $y_{1:0} = \emptyset$ for notational brevity)

$$p(x_{0:K} \mid y_{1:K}) = p(x_K \mid y_{1:K}) \prod_{k=1}^{K} p(x_{k-1} \mid x_k, y_{1:k-1}). \tag{4}$$

The first term, $p(x_K \mid y_{1:K})$, is the filtering distribution at the final state and usually computed with a Kalman filter. The remaining terms, $p(x_{k-1} \mid x_k, y_{1:k-1})$, can be assembled with the prediction step in the filtering pass (Equation 3a), which preserves the Kalman filter's $O(KD^3)$ runtime complexity but increases the memory consumption from $O(D^2)$ to $O(KD^2)$.
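To make the alternation in Equation 3 concrete, here is one predict/update cycle of the covariance-based Kalman filter in NumPy. These are the standard textbook recursions; the paper's Appendix A, which we do not reproduce here, contains the detailed iterations:

```python
import numpy as np

def kalman_step(m, C, y, A, H, B, R):
    """One iteration of Equation 3 in covariance-based form."""
    # Prediction (3a): p(x_{k-1} | y_{1:k-1}) -> p(x_k | y_{1:k-1}).
    m_pred = A @ m
    C_pred = A @ C @ A.T + B
    # Update (3b): p(x_k | y_{1:k-1}) -> p(x_k | y_{1:k}).
    S = H @ C_pred @ H.T + R               # innovation covariance
    K = np.linalg.solve(S, H @ C_pred).T   # Kalman gain C_pred H^T S^{-1}
    m_new = m_pred + K @ (y - H @ m_pred)
    C_new = C_pred - K @ S @ K.T
    return m_new, C_new
```

The subtraction in the final covariance step is exactly where round-off can destroy symmetry and positive semidefiniteness, which motivates the Cholesky-based variants discussed above.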
Like the Kalman filter, the Rauch–Tung–Striebel smoother allows covariance- and Cholesky-based parametrisations (Park and Kailath, 1995) in roughly the same complexity. In practice, Cholesky-based arithmetic costs slightly more than covariance-based arithmetic but enjoys an increase in numerical robustness. Appendix B contrasts both implementations.

Towards fixed-point smoothing Finally, we turn to the *fixed-point smoothing* problem: computing the conditional distribution of the initial state p(x0 | y1:K) conditioned on all observations (Figure 1). The central difficulty for fixed-point smoothing is that we ask for O(K) algorithms even though all observations y1:K are in the "future" of x0, which lacks an immediate sequential formulation. That said, one algorithm for sequentially solving the fixed-point smoothing problem involves Rauch–Tung–Striebel smoothing: The fixed-point smoothing problem could be solved by assembling p(x0:K | y1:K) with the Rauch–Tung–Striebel smoother, which yields a parametrisation of p(x0 | y1:K) in closed form. Unfortunately, this solution is not very efficient: the Rauch–Tung–Striebel smoother stores parametrisations of all conditional distributions {p(xk−1 | xk, y1:k−1)}, k = 1, ..., K, which is problematic for long time-series because it consumes O(KD^2) memory. In contrast, state-of-the-art fixed-point smoothers require only O(D^2) memory.

## 2.2 Existing Approaches To Fixed-Point Smoothing

The state of the art in fixed-point smoothing does not use fixed-interval smoothers to compute p(x0 | y1:K). There are two more efficient approaches: Either we solve the fixed-point smoothing problem by computing the solution to a specific high-dimensional filtering problem (Biswas and Mahalanabis, 1972), or we derive recursions for how p(x0 | y1:k) evolves with k = 1, ..., K (Meditch, 1967b;a; 1969). Both solutions have advantages and disadvantages. We discuss both before explaining how our algorithm fixes their shortcomings.

Fixed-point smoothing via state-augmented filtering Fixed-point smoothing can be implemented with a Kalman filter, more precisely, by running a filter on the state-space model (Biswas and Mahalanabis, 1972)

$$\begin{pmatrix}x_{k}\\ x_{0}\end{pmatrix}=\begin{pmatrix}A_{k}&0\\ 0&I_{D}\end{pmatrix}\begin{pmatrix}x_{k-1}\\ x_{0}\end{pmatrix}+\begin{pmatrix}I_{D}\\ 0\end{pmatrix}b_{k},\quad y_{k}=\begin{pmatrix}H_{k}&0\end{pmatrix}\begin{pmatrix}x_{k}\\ x_{0}\end{pmatrix}+r_{k},\quad k=1,...,K\tag{5}$$

with initial condition

$$\begin{pmatrix}x_{0}\\ x_{0}\end{pmatrix}\sim\mathcal{N}\left(\begin{pmatrix}m_{0|0}\\ m_{0|0}\end{pmatrix},\begin{pmatrix}C_{0|0}&C_{0|0}\\ C_{0|0}&C_{0|0}\end{pmatrix}\right)=\mathcal{N}\left(\begin{pmatrix}m_{0|0}\\ m_{0|0}\end{pmatrix},\begin{pmatrix}L_{C_{0|0}}&0\\ L_{C_{0|0}}&0\end{pmatrix}\begin{pmatrix}L_{C_{0|0}}&0\\ L_{C_{0|0}}&0\end{pmatrix}^{\top}\right).\tag{6}$$

The covariance of the initial condition is rank-deficient. However, the generalised Cholesky factor remains well-defined according to Equation 6. The difference between the state-space models in Equation 5 and Equation 1 is that Equation 5 tracks the augmented state (xk, x0) instead of xk. As a result, the Kalman filter applied to Equations 5 and 6 yields parametrisations of the conditional distributions {p(xk, x0 | y1:k)}, k = 1, ..., K, in O(K(2D)^3) runtime and O((2D)^2) memory. The factor "2D" stems from doubling the state-space dimension. The initial condition p(x0 | y1:K) emerges from the augmented state p(xK, x0 | y1:K) in closed form.
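The construction of the augmented model is mechanical; the following sketch (an illustration, not the released code) assembles the system matrices of Equations 5 and 6, assuming time-invariant A and H for brevity:

```python
import jax.numpy as jnp

def augment(A, H, m0, L_C0):
    """Build the augmented model of Equations 5 and 6. The state dimension
    doubles, so the filter's runtime grows roughly 8x and its memory 4x."""
    D = A.shape[0]
    A_aug = jnp.block([[A, jnp.zeros((D, D))],
                       [jnp.zeros((D, D)), jnp.eye(D)]])
    G_aug = jnp.vstack([jnp.eye(D), jnp.zeros((D, D))])   # noise enters only x_k
    H_aug = jnp.hstack([H, jnp.zeros((H.shape[0], D))])
    m_aug = jnp.concatenate([m0, m0])
    L_aug = jnp.block([[L_C0, jnp.zeros((D, D))],
                       [L_C0, jnp.zeros((D, D))]])        # generalised Cholesky factor (rank-deficient)
    return A_aug, G_aug, H_aug, m_aug, L_aug
```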
Relying on the Kalman filter formulation inherits all computational benefits of Kalman filters, most importantly, the Kalman filter's Cholesky-based form with its attractive numerical robustness. Nonetheless, doubling the state-space dimension is unfortunate because it increases the runtime roughly by a factor of 8 (because (2D)^3 = 8D^3) and the memory by a factor of 4 (because (2D)^2 = 4D^2). However, as long as one commits to covariance-based arithmetic, this increase can be avoided:

Fixed-point recursions Suppose we temporarily ignore Cholesky-based forms and commit to covariance-based parametrisations of Gaussian distributions. In that case, we can improve the efficiency of the fixed-point smoother. It turns out that during the forward pass, the mean and covariance of p(x0 | y1:k) follow a recursion that can be run parallel to the Kalman filter pass (Meditch, 1969; Särkkä and Hartikainen, 2010; Meditch, 1967b). Särkkä and Hartikainen (2010) explain how to implement this recursion: start by initialising m0|0 and C0|0 according to Equation 2 and set G0|0 = I. Then, iterate from (m0|k−1, C0|k−1, G0|k−1) to (m0|k, C0|k, G0|k) using the recursion (Särkkä and Hartikainen, 2010; Särkkä, 2013)

$$\begin{aligned}G_{0|k}&=G_{0|k-1}G_{k-1|k},\\ m_{0|k}&=m_{0|k-1}+G_{0|k}(m_{k|k}-m_{k|k-1}),\\ C_{0|k}&=C_{0|k-1}+G_{0|k}(C_{k|k}-C_{k|k-1})(G_{0|k})^{\top}.\end{aligned}\tag{7}$$

Next to (m0|k−1, C0|k−1, G0|k−1), these formulas only depend on the output of the prediction step as well as the smoothing gains Gk−1|k (smoothing gains: Appendix B). The recursion in Equation 7 can be implemented to run simultaneously with the forward filtering pass. Unfortunately, even though this implementation avoids doubling the size of the state-space model and enjoys O(D^2) memory and O(KD^3) runtime, it does not have a Cholesky-based formulation. Thus, it cannot be applied to problems where numerical robustness is critical, like the probabilistic numerical simulation of differential equations (Krämer, 2024). The main contribution of this paper is to generalise Equation 7 in a way that enables Cholesky-based parametrisations:

Contribution. *Compute the solution of the fixed-point smoothing problem p(x0 | y1:K) in O(KD^3) runtime, O(D^2) memory, using any parametrisation of Gaussian variables, and without state augmentation; Table 2.*

Table 2: Existing approaches to the fixed-point smoothing problem. The "fixed-point recursion" represents Meditch (1969; 1967b) through Equation 7. "O(1) memory" stands in contrast to the O(K) memory of a fixed-interval smoother, and the "low-dimensional state" refers to avoiding state-augmentation, which affects both runtime and memory because it doubles the dimension of the state-space model.

| Method | O(1) memory | Low-dimensional state | Cholesky-based |
|------------------------------------|---------------|-------------------------|------------------|
| Via Rauch–Tung–Striebel smoothing | ✗ | ✓ | ✓ |
| Via filtering with augmented state | ✓ | ✗ | ✓ |
| Fixed-point recursion (Equation 7) | ✓ | ✓ | ✗ |
| This work (Algorithms 1 to 3) | ✓ | ✓ | ✓ |

## 3 The Method: Numerically Robust Fixed-Point Smoothing

Our approach to numerically robust fixed-point smoothing involves two steps: First, we derive a recursion for the conditional distribution p(x0 | xk, y1:k) instead of one for the joint distribution p(x0, xk | y1:k), k = 1, ..., K. Second, we implement the recursion in Cholesky-based parametrisations without losing closed-form, constant-memory updates.
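As a reference point for the derivation that follows, one step of the classical covariance-based recursion in Equation 7 can be sketched as follows (illustrative code, not the released implementation):

```python
import jax.numpy as jnp

def fixed_point_step(G0_prev, m0_prev, C0_prev, G_smooth, m_filt, C_filt, m_pred, C_pred):
    """One step of Equation 7, run alongside the Kalman filter pass.
    G_smooth is the smoothing gain G_{k-1|k}; (m_filt, C_filt) parametrise
    p(x_k | y_{1:k}) and (m_pred, C_pred) parametrise p(x_k | y_{1:k-1})."""
    G0k = G0_prev @ G_smooth
    m0k = m0_prev + G0k @ (m_filt - m_pred)
    C0k = C0_prev + G0k @ (C_filt - C_pred) @ G0k.T
    return G0k, m0k, C0k
```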
As a byproduct of the derivation in this section, other parametrisations of Gaussian distributions (like the information form) become possible, too. However, we focus on Cholesky-based implementations for their numerical robustness and leave other choices to future work.

## 3.1 A New Recursion For Fixed-Point Smoothing

A derivation of a new fixed-point smoothing recursion follows. Similar to the state-augmented filtering perspective, the target distribution p(x0 | y1:K) can be written as the marginal of the augmented state

$$p(x_{0}\mid y_{1:K})=\int p(x_{0},x_{K}\mid y_{1:K})\,\mathrm{d}x_{K}.\tag{8}$$

Computing p(x0 | y1:K) becomes a byproduct of computing p(x0, xK | y1:K), and the latter admits a sequential formulation. This perspective was essential to implementing fixed-point smoothing with state-augmented filtering. The augmented state factorises into the conditional

$$p(x_{0},x_{K}\mid y_{1:K})=p(x_{0}\mid x_{K},y_{1:K})\,p(x_{K}\mid y_{1:K}).\tag{9}$$

Since p(xK | y1:K) is already computed in the forward pass, it suffices to derive a recursion for the fixed-point conditional p(x0 | xK, y1:K). The backward factorisation of the smoothing distribution in Equation 4 implies

$$p(x_{0}\mid x_{K},y_{1:K})=\int\prod_{k=1}^{K}p(x_{k-1}\mid x_{k},y_{1:k-1})\,\mathrm{d}x_{1:K-1}.\tag{10}$$

Marginalising over all intermediate states x1:K−1 like in Equation 10 turns a Rauch–Tung–Striebel smoother into a fixed-point smoother. An O(K) runtime implementation now emerges from observing how the integral in Equation 10 rearranges to a nested sequence of single-variable integrals,

$$\begin{aligned}p(x_{0}\mid x_{K},y_{1:K})&=\int\prod_{k=1}^{K}p(x_{k-1}\mid x_{k},y_{1:k-1})\,\mathrm{d}x_{1:K-1}\\ &=\int\ldots\left(\int\left[\int p(x_{0}\mid x_{1})p(x_{1}\mid x_{2},y_{1:1})\,\mathrm{d}x_{1}\right]p(x_{2}\mid x_{3},y_{1:2})\,\mathrm{d}x_{2}\right)\ldots\mathrm{d}x_{K-1}\\ &=\int\ldots\left(\int p(x_{0}\mid x_{2},y_{1})p(x_{2}\mid x_{3},y_{1:2})\,\mathrm{d}x_{2}\right)\ldots\mathrm{d}x_{K-1}.\end{aligned}\tag{11}$$

This rearranging of the integrals is essential to the derivation because it implies a forward-in-time recursion:

Algorithm 1 (Fixed-point smoother). *To compute the solution to the fixed-point smoothing problem, assemble p(xK | y1:K) with a Kalman filter and evaluate p(x0 | xK, y1:K) as follows. (To simplify the index-related notation in this algorithm, read y1:−1 = y1:0 = ∅.)*

1. *Initialise p(x0 | x0, y1:−1) = N(G0|0 x0 + p0|0, P0|0) with G0|0 = I_D, p0|0 = 0, and P0|0 = 0.*

2. *For k = 1, ..., K, iterate from k − 1 to k,*

$$p(x_{0}\mid x_{k},y_{1:k-1})=\int p(x_{0}\mid x_{k-1},y_{1:k-2})\,p(x_{k-1}\mid x_{k},y_{1:k-1})\,\mathrm{d}x_{k-1}.\tag{12}$$

*The conditionals p(xk−1 | xk, y1:k−1) are from the Rauch–Tung–Striebel smoother (Equation 4).*

3. *Marginalise p(x0 | y1:K) from p(xK | y1:K) and p(x0 | xK, y1:K−1) via Equation 8.*

Equation 12 turns a Rauch–Tung–Striebel smoother into a fixed-point smoother: The recursion requires access to p(xk−1 | xk, y1:k−1), computed by the forward pass of the Rauch–Tung–Striebel smoother. Therefore, Equation 12 runs concurrently to the forward filtering pass.

## 3.2 Cholesky-Based Implementation

The missing link for Algorithm 1 is the merging of two affinely related conditionals in Equation 12.
Since both conditionals in Equation 12 are affine and Gaussian, their combination p(x0 | xk, y1:k) is Gaussian and available in closed form for all k = 1, ..., K. Its parametrisation depends on the parametrisation of each input distribution. For covariance- and Cholesky-based arithmetic, it looks as follows:

Algorithm 2 (Covariance-based implementation of Equation 12). *Recall the initialisation of Algorithm 1 and the convention y1:−1 = y1:0 = ∅. For any k = 1, ..., K, if the fixed-point and smoothing conditionals are*

$$p(x_{0}\mid x_{k-1},y_{1:k-2})=\mathcal{N}(G_{0|k-1}x_{k-1}+p_{0|k-1},P_{0|k-1})\tag{13a}$$

$$p(x_{k-1}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{k-1|k}x_{k}+p_{k-1|k},P_{k-1|k}),\tag{13b}$$

*for given Gi|j ∈ R^{D×D}, pi|j ∈ R^D, and Pi|j ∈ R^{D×D}, then the next fixed-point conditional is*

$$p(x_{0}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{0|k}x_{k}+p_{0|k},P_{0|k}),\tag{14a}$$

$$G_{0|k}:=G_{0|k-1}G_{k-1|k},\tag{14b}$$

$$p_{0|k}:=G_{0|k-1}p_{k-1|k}+p_{0|k-1},\tag{14c}$$

$$P_{0|k}:=G_{0|k-1}P_{k-1|k}(G_{0|k-1})^{\top}+P_{0|k-1}.\tag{14d}$$

*Implementing this operation costs O(D^3) floating-point operations.*

Algorithm 3 (Cholesky-based implementation of Equation 12). *For any k = 1, ..., K (and interpreting y1:−1 = y1:0 = ∅), if the previous fixed-point conditional and smoothing transition are*

$$p(x_{0}\mid x_{k-1},y_{1:k-2})=\mathcal{N}(G_{0|k-1}x_{k-1}+p_{0|k-1},L_{P_{0|k-1}}(L_{P_{0|k-1}})^{\top})\tag{15a}$$

$$p(x_{k-1}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{k-1|k}x_{k}+p_{k-1|k},L_{P_{k-1|k}}(L_{P_{k-1|k}})^{\top}),\tag{15b}$$

*for given Gi|j ∈ R^{D×D}, pi|j ∈ R^D, and LPi|j ∈ R^{D×D}, then the next fixed-point conditional is*

$$p(x_{0}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{0|k}x_{k}+p_{0|k},L_{P_{0|k}}(L_{P_{0|k}})^{\top}),\tag{16a}$$

$$G_{0|k}:=G_{0|k-1}G_{k-1|k},\tag{16b}$$

$$p_{0|k}:=G_{0|k-1}p_{k-1|k}+p_{0|k-1},\tag{16c}$$

$$L_{P_{0|k}}:=\mathfrak{R}^{\top},\tag{16d}$$

*where R is the upper triangular matrix returned by the QR-decomposition*

$$\Omega\,\mathfrak{R}=\begin{pmatrix}(L_{P_{k-1|k}})^{\top}(G_{0|k-1})^{\top}\\ (L_{P_{0|k-1}})^{\top}\end{pmatrix}.\tag{17}$$

*Implementing this operation costs O(D^3) floating-point operations.*

Algorithm 2 follows from the rules for manipulating affinely related Gaussian distributions (e.g. Särkkä, 2013, Appendix A.1).
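The QR-based update of Algorithm 3 is only a few lines; the following sketch (our illustration, not the released implementation) realises Equations 16 and 17 with jax.numpy:

```python
import jax.numpy as jnp

def merge_cholesky(G0_prev, p0_prev, LP0_prev, G_sm, p_sm, LP_sm):
    """Cholesky-based merge of the two conditionals in Equation 12 (Algorithm 3).
    Returns the parameters of the next fixed-point conditional p(x_0 | x_k, y_{1:k-1})."""
    G0 = G0_prev @ G_sm                                    # Equation 16b
    p0 = G0_prev @ p_sm + p0_prev                          # Equation 16c
    stacked = jnp.vstack([LP_sm.T @ G0_prev.T, LP0_prev.T])
    _, R = jnp.linalg.qr(stacked)                          # QR of Equation 17
    LP0 = R.T                                              # Equation 16d
    return G0, p0, LP0
```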
Algorithm 3 is indeed the Cholesky-based version of Algorithm 2: The expression in Equation 16d is the generalised Cholesky factor of the expression in Equation 14d because

$$L_{P_{0|k}}(L_{P_{0|k}})^{\top}=\mathfrak{R}^{\top}\Omega^{\top}\Omega\mathfrak{R}=\begin{pmatrix}G_{0|k-1}L_{P_{k-1|k}}&L_{P_{0|k-1}}\end{pmatrix}\begin{pmatrix}(L_{P_{k-1|k}})^{\top}(G_{0|k-1})^{\top}\\(L_{P_{0|k-1}})^{\top}\end{pmatrix}=P_{0|k}\tag{18}$$

holds. The combination of Algorithms 1 and 3 is a novel, Cholesky-based implementation of a fixed-point smoother. Similarly, the combination of Algorithms 1 and 2 is a novel, covariance-based implementation of a fixed-point smoother. With a small modification to Algorithm 1, combining Algorithm 2 with the covariance-based formulas in Algorithm 1 recovers Equation 7:

Proposition 1. *If the combination of Algorithms 1 and 2 computes the marginals*

$$p(x_{0}\mid y_{1:k})=\int p(x_{0}\mid x_{k},y_{1:k-1})\,p(x_{k}\mid y_{1:k})\,\mathrm{d}x_{k}\tag{19}$$

*at every k = 1, ..., K instead of only at the final time-step, the recursion reduces to Equation 7.*

Proof. Induction; Appendix C. □

Computational complexity Both Algorithms 2 and 3 cost O(D^3) floating-point operations per step. The k-th iteration in Algorithm 1 needs access to the same quantities as the k-th iteration of the forward pass of a Rauch–Tung–Striebel smoother, which can be implemented in O(D^2) memory. Unlike the Rauch–Tung–Striebel smoother, Algorithm 1 does not store intermediate results, which is why it consumes O(D^2) memory in total (instead of O(KD^2)). Both Algorithm 2 and Equation 7 require three matrix-matrix and one matrix-vector multiplication per step, so they are equally efficient (Algorithm 2 uses fewer matrix additions; however, the addition of matrices is fast). Therefore, nothing is lost by always implementing covariance-based fixed-point smoothing via Algorithm 1 instead of Equation 7 in our experiments.

## 4 Experiments

The experiments serve two purposes. To start with, they investigate whether the proposed Cholesky-based implementation (Algorithm 3) of the fixed-point smoother recursion (Algorithm 1) holds its promises about memory, runtime, and numerical robustness. According to the theory in Section 3, we should observe:

- *Memory:* Slightly lower memory demands than a state-augmented Kalman filter; drastically lower memory demands than a Rauch–Tung–Striebel smoother.

- *Wall-time:* Faster than a state-augmented, Cholesky-based Kalman filter; roughly as fast as a Rauch–Tung–Striebel smoother.

- *Numerical robustness:* Combining Algorithm 1 with Cholesky-based parametrisations (Algorithm 3) is significantly more robust than combining it with covariance-based parametrisations (Algorithm 2); comparable to a Cholesky-based implementation of a Kalman filter.

These three phenomena will be studied in two experiments, one for runtime/memory and one for numerical robustness (outline: Figure 2). The problem set for these two experiments includes a toy problem for the former and an application in probabilistic numerics for the latter. Afterwards, a case study in tracking shows how to use the fixed-point smoother for estimating the initial parameter in a state-space model.

Hardware and code All experiments run on the CPU of a consumer-grade laptop and finish within a few minutes.
Our JAX implementation (Bradbury et al., 2018) of Kalman filters, Rauch–Tung–Striebel smoothers, and fixed-point smoothers is at

[Link redacted for anonymity; code in supplement]

We implement the existing fixed-point smoother recursions in Equation 7 by combining Algorithm 1 with covariance-based parametrisations; recall Proposition 1.

| | Covariance-based | Cholesky-based |
|--------------------------|------------------------|------------------|
| Via filter | Known | Known |
| Via Rauch–Tung–Striebel | Known | Known |
| Fixed-point recursion | Known, but see Prop. 1 | Our contribution |

Figure 2: Outline of the memory-, runtime-, and robustness-related demonstrations. Experiment I (efficiency) compares the three approaches in Cholesky-based arithmetic; Experiment II (robustness) compares the covariance- and Cholesky-based implementations of the fixed-point recursion.

| | d = 2 | d = 5 | d = 10 | d = 20 | d = 50 | d = 100 |
|-------------------------|------------|------------|------------|------------|------------|------------|
| Via Rauch–Tung–Striebel | 5.8 × 10^−3 | 1.8 × 10^−2 | 6.5 × 10^−2 | 4.5 × 10^−1 | 2.9 × 10^0 | 1.2 × 10^1 |
| Via filter | 5.0 × 10^−3 | 2.1 × 10^−2 | 8.0 × 10^−2 | 7.1 × 10^−1 | 4.6 × 10^0 | 1.6 × 10^1 |
| Algorithm 1 | 6.4 × 10^−3 | 1.8 × 10^−2 | 6.4 × 10^−2 | 4.4 × 10^−1 | 2.9 × 10^0 | 1.2 × 10^1 |

Table 3: Runtime in seconds (wall-time, best of three runs). Lower is better. The column-wise lowest entries are bold and shaded. Randomly populated model. K = 1,000 steps.

## 4.1 Experiment I: How Efficient Is The Fixed-Point Recursion?

Motivation Sections 2.2 and 3 mention three approaches to fixed-point smoothing: a detour via Rauch–Tung–Striebel smoothing, state-augmented Kalman filtering, and our recursion in Algorithm 1. In theory, Algorithm 1 should be the most efficient: it does not inflate the state-space model like a state-augmented Kalman filter does and requires O(1) instead of O(K) memory, unlike the Rauch–Tung–Striebel smoother. This first experiment demonstrates that these effects are visible in practice.

Problem setup In this first example, we only measure the execution time and memory requirements of the three approaches to fixed-point smoothing. Both memory and runtime depend only on the size of a state-space model and not on the difficulty of the estimation task. Therefore, we consider a state-space model where all system matrices and vectors are populated with random values.

Implementation We choose K = 1,000, vary d, set the size of the hidden state to D = 2d, and use Cholesky-based arithmetic for all estimators. We take Equation 1 and introduce a nonzero bias in all noises in Equation 2. Then, we randomly populate all system matrices in the state-space model with independent samples from N(0, 1/K^2). Afterwards, we sample y1:K from this state-space model to generate toy data. The K-dependent covariance ensures that the samples y1:K remain finite in 32-bit floating-point arithmetic. However, the precise values of the model and data do not matter for this experiment - only their size does. Finally, we vary d and measure the runtime and memory requirements of each of the three methods. To measure runtime, we time the execution of the fixed-point smoothers: wall time in seconds, choosing the best of three runs. To measure memory consumption, we count the number of floating-point values the estimators carry from step to step, multiply this count by 32 (because we use 32-bit arithmetic), and translate bits into bytes.

Evaluation The runtime results are in Table 3 and the memory results in Table 4.
The runtime data shows that except for d = 2, the Rauch–Tung–Striebel smoother and Algorithm 1 are faster than the state-augmented filter. Algorithm 1 is marginally faster than the Rauch–Tung–Striebel smoother code, even though it computes strictly more at every iteration. However, the Rauch–Tung–Striebel smoother executes two loops, one forwards and one backwards, whereas Algorithm 1 only executes a forward loop. This discrepancy might explain the performance improvement from Rauch–Tung–Striebel smoothing to Algorithm 1.

The memory data shows that the Rauch–Tung–Striebel smoothing solution and Algorithm 1 have identical memory requirements per step. Both consume less storage than the state-augmented filter (per step). As expected, the Rauch–Tung–Striebel smoother requires exactly K times the memory of Algorithm 1, which would be infeasible for long time series over high-dimensional states. In general, this experiment underlines the claimed efficiency of our new fixed-point smoother recursion in a Cholesky-based parametrisation. The following experiment will answer the question of whether Cholesky-based arithmetic is necessary.

| | d = 2 | d = 5 | d = 10 | d = 20 | d = 50 | d = 100 |
|-------------------------|-----------|-----------|-----------|-----------|-----------|-----------|
| Via Rauch–Tung–Striebel | 2.2 × 10^5 | 1.2 × 10^6 | 4.9 × 10^6 | 1.9 × 10^7 | 1.2 × 10^8 | 4.8 × 10^8 |
| Via filter | 2.8 × 10^2 | 1.6 × 10^3 | 6.5 × 10^3 | 2.5 × 10^4 | 1.6 × 10^5 | 6.4 × 10^5 |
| Algorithm 1 | 2.2 × 10^2 | 1.2 × 10^3 | 4.9 × 10^3 | 1.9 × 10^4 | 1.2 × 10^5 | 4.8 × 10^5 |

Table 4: Memory in bytes (we use 32-bit arithmetic). Lower is better. The column-wise lowest entries are bold and shaded. Randomly populated model. K = 1,000 steps.

## 4.2 Experiment II: How Much More Robust Is The Cholesky-Based Code?

Motivation Cholesky-based implementations replace matrix addition and matrix multiplication with QR decompositions. They are more expensive than covariance-based implementations because the decomposition is more expensive than matrix multiplication. However, there are numerous situations where the gains in numerical robustness are worth the increase in runtime. One example is the probabilistic numerical simulation of differential equations, which uses integrated Wiener processes (Schober et al., 2014; 2019) with high order and small time-steps. For these applications, Cholesky-based implementations are the only feasible approach (Krämer and Hennig, 2024). This second experiment shows that Algorithm 1 unlocks fixed-point smoothing for probabilistic numerical simulation through Algorithm 3.

Problem setup We solve a boundary value problem based on an ordinary differential equation. More specifically, we solve the 15th problem in the collection of test problems by Mazzia and Cash (2015) (Figure 3),

$$10^{-3}\cdot\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}u(t)=t\,u(t),\quad u(-1)=u(1)=1.\tag{20}$$

Figure 3: The 15th boundary value problem (Mazzia and Cash, 2015).

We follow the procedure for solving boundary value problems with a probabilistic numerical method by Krämer and Hennig (2021) for the most part. However, we skip the iterated Kalman smoother because the differential equation is linear, and we skip mesh refinement and expectation maximisation to keep this demonstration simple. The focus lies on numerical robustness. We choose a twice-integrated Wiener process prior and discretise it on K equispaced points in [−1, 1], which yields the latent transitions (let ∆t := 1/K)
$$A_{k}:=\begin{pmatrix}1&\Delta t&\frac{(\Delta t)^{2}}{2}\\ 0&1&\Delta t\\ 0&0&1\end{pmatrix},\quad B_{k}:=\begin{pmatrix}\frac{(\Delta t)^{5}}{20}&\frac{(\Delta t)^{4}}{8}&\frac{(\Delta t)^{3}}{6}\\ \frac{(\Delta t)^{4}}{8}&\frac{(\Delta t)^{3}}{3}&\frac{(\Delta t)^{2}}{2}\\ \frac{(\Delta t)^{3}}{6}&\frac{(\Delta t)^{2}}{2}&\Delta t\end{pmatrix},\quad k=1,...,K.\tag{21}$$

The dynamics are constant because the grid points are evenly spaced. The means of the process noise are zero, like in Equation 2. The hidden state

$$x_{k}:=\left(u(t_{k}),\;\frac{\mathrm{d}}{\mathrm{d}t}u(t_{k}),\;\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}u(t_{k})\right),\quad k=1,...,K,\tag{22}$$

tracks the differential equation solution u and its derivatives, including d²/dt² u, which means that the residual 10^−3 d²/dt² u(t) − t u(t) is a linear function of xk. We introduce the model for the constraints (βk shall be the nonzero mean of the observation noise rk in Equations 1 and 2)

$$H_{k}=\begin{pmatrix}-t_{k}&0&10^{-3}\end{pmatrix},\quad\beta_{k}=0,\quad R_{k}=0,\quad k=1,...,K-1,\tag{23a}$$

$$H_{K}=\begin{pmatrix}1&0&0\end{pmatrix},\quad\beta_{K}=-1,\quad R_{K}=0.\tag{23b}$$

| Parametrisation | K = 10 | K = 20 | K = 50 | K = 100 | K = 200 | K = 500 | K = 1,000 |
|------------------|--------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Covariance-based | 2.9 × 10^−3 | 3.4 × 10^−1 | 1.1 × 10^0 | 1.0 × 10^1 | 2.1 × 10^1 | NaN | NaN |
| Cholesky-based | 2.0 × 10^−10 | 5.0 × 10^−8 | 4.2 × 10^−7 | 7.9 × 10^−8 | 1.3 × 10^−7 | 6.1 × 10^−8 | 3.4 × 10^−8 |

Table 5: Deviation from the Cholesky-based, state-augmented Kalman filter on the boundary value problem. Lower is better, and values close to machine precision are desirable. The column-wise lowest values are bold and coloured. "K": number of grid points. Double precision (64-bit arithmetic).

Note how the constraint at the final time-point encodes the right-hand side boundary condition, not the differential equation. We choose the initial mean m0|0 = (1, 0, 0) and initial covariance C0|0 = diag(0, 1, 1) to represent the left-hand side boundary condition. Estimating x0:K from y1:K := 0 solves the boundary value problem (Krämer and Hennig, 2021); Figure 3 displays the mean of the fixed-interval smoothing solution. Krämer and Hennig (2021) explain how estimating the initial condition p(x0 | y1:K) is important for parameter estimation problems. In other words, fixed-point smoothers are relevant for boundary value problem simulation. We are the first to use them for this task.

Implementation For this demonstration, we consider the Cholesky-based implementation of the Kalman filter as the gold standard for numerical robustness because it has been used successfully in numerous similar problems (Krämer and Hennig, 2024; Bosch et al., 2021; Krämer and Hennig, 2021; Krämer, 2024). The Cholesky-based, state-augmented Kalman filter provides a reference solution of the fixed-point equations. We run Cholesky-based (Algorithm 3) and covariance-based code (Algorithm 2) for the fixed-point smoother recursions (Algorithm 1) and measure how much the estimated initial means deviate from the Kalman-filter reference in terms of the absolute root-mean-square error. In infinite precision, the results would be identical. In finite precision, all deviations should be due to a loss of stability.
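For illustration, the discretisation in Equations 21 and 23a translates into code as follows. This sketch follows the reconstruction above; the names `iwp2_transition` and `residual_operator` are ours, not the paper's:

```python
import jax.numpy as jnp

def iwp2_transition(dt):
    """Transition matrix and process-noise covariance (Equation 21) of a
    twice-integrated Wiener process on a grid with spacing dt."""
    A = jnp.array([[1.0, dt, dt**2 / 2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    B = jnp.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                   [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                   [dt**3 / 6,  dt**2 / 2, dt]])
    return A, B

def residual_operator(t_k):
    # Observation model of Equation 23a: zero-noise constraint on the ODE residual
    return jnp.array([[-t_k, 0.0, 1e-3]])
```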
We vary K to investigate how the step-size ∆t affects the results, with the intuition that smaller steps lead to worse conditioning in Bk and that this effect makes the estimation task more difficult (Krämer and Hennig, 2024).

Evaluation The results are in Table 5. They indicate how the Cholesky-based code is significantly more robust than the covariance-based code. The covariance-based implementation delivers only three meaningful digits for K = 10 points, diverges for more grid points, and breaks for K ≥ 500. In contrast, the Cholesky-based code delivers meaningful approximations across all grids. This consistency underlines how much more robust our Cholesky-based code is and how it unlocks fixed-point smoothing for probabilistic numerics.

## 4.3 Case Study: Estimating Parameters Of A State-Space Model

Motivation The previous two experiments have emphasized the practicality of our proposed method. We conclude by applying fixed-point smoothing to parameter estimation in a tracking model. This study aims to emulate applications of fixed-point smoothing in navigation tasks (Meditch, 1969) in a setup that is easy to reproduce. However, the demonstration also links to the previous boundary value problem experiment through expectation maximisation (Krämer and Hennig, 2021).

Problem setup The task is to estimate the mean parameter of an unknown initial condition in a car tracking example. Define the Wiener velocity model on K = 10 equispaced points (∆t = 1/10),

$$A_{k}:=\begin{pmatrix}I_{2}&\Delta t\cdot I_{2}\\ 0&I_{2}\end{pmatrix},\quad B_{k}:=\begin{pmatrix}\frac{(\Delta t)^{3}}{3}\cdot I_{2}&\frac{(\Delta t)^{2}}{2}\cdot I_{2}\\ \frac{(\Delta t)^{2}}{2}\cdot I_{2}&\Delta t\cdot I_{2}\end{pmatrix},\quad H_{k}=\begin{pmatrix}I_{2}&0\end{pmatrix},\quad R_{k}=0.1^{2}\cdot I_{2}.\tag{24}$$

All biases are zero; that is, the noises bk and rk have zero mean. Textbooks on Bayesian filtering and smoothing (Särkkä, 2013; Särkkä and Svensson, 2023) use this Wiener velocity model to estimate the trajectory of a car. We populate m0|0 and LC0|0 with samples from a standard normal distribution and sample an initial condition

$$x_{0}=\begin{pmatrix}\theta_{1}&\theta_{2}&\dot{\theta}_{1}&\dot{\theta}_{2}\end{pmatrix}\sim\mathcal{N}(m_{0|0},L_{C_{0|0}}(L_{C_{0|0}})^{\top}).\tag{25}$$

We sample artificial observations y1:K. From here on, the initial mean m0|0 will be treated as unknown.

Figure 4: Initial distributions p(θ1, θ2 | y1:K) of the car tracking model after running the fixed-point smoother (recall x0 := (θ1, θ2, θ̇1, θ̇2)). Left to right: After three iterations, the combination of expectation maximisation with the fixed-point smoother finds the correct initial mean θ = (θ1, θ2) of the state-space model. Top: first coordinate p(θ1 | y1:K). Bottom: second coordinate p(θ2 | y1:K). "PDF": "Probability density function".

Implementation We combine fixed-point smoothing with expectation maximisation (Dempster et al., 1977) to calibrate m0|0 using the data y1:K. The expectation maximisation update for the initial mean in a linear Gaussian state-space model is mnew := m0|K (Särkkä and Svensson, 2023), and this update repeats until convergence. Here, m0|K is the mean of p(x0 | y1:K). We implement the fixed-point smoother recursion in Cholesky-based arithmetic and run expectation maximisation for three iterations. We initialise the mean guess by sampling all entries independently from a centred normal distribution with a variance of 100.
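A sketch of the model construction in Equation 24 follows (illustrative code; the variable names are ours). The expectation maximisation update itself is a single assignment: given the fixed-point smoothing mean m0|K, set the mean guess to m0|K and repeat:

```python
import jax.numpy as jnp

dt = 1 / 10                     # K = 10 equispaced points
I2, O2 = jnp.eye(2), jnp.zeros((2, 2))
A = jnp.block([[I2, dt * I2], [O2, I2]])                  # Equation 24
B = jnp.block([[dt**3 / 3 * I2, dt**2 / 2 * I2],
               [dt**2 / 2 * I2, dt * I2]])
H = jnp.hstack([I2, O2])
R = 0.1**2 * I2
# EM loop (conceptually): m_guess = m_0K after each fixed-point smoothing pass.
```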
We track the data's marginal likelihood ("evidence"), computing it online during the forward filtering pass.

Evaluation The results are in Figure 4. They show how the combination of expectation maximisation with fixed-point smoothing recovers the initial mean already after three iterations. The evidence increases at every iteration, as is expected for expectation maximisation (Wu, 1983). In conclusion, fixed-point smoothing is a viable parameter estimation technique in a state-space model.

## 5 Discussion

Limitations and future work Our new approach to fixed-point smoothing strictly generalises existing techniques, but inherits some of their limitations: Even though the new recursion in Algorithm 1 is independent of the type of state-space model, implementing the fixed-point smoother in closed form assumes a linear Gaussian setup. Future work should explore robust fixed-point smoothing in nonlinear state-space models, for example, through posterior linearisation (García-Fernández et al., 2016). Like all Gaussian smoothing algorithms, our methods have cubic complexity in the state-space dimension because of the matrix-matrix arithmetic or QR decompositions, respectively. Finally, while other parametrisations of Gaussian variables may be used instead of covariance- or Cholesky-based parametrisations, such directions are future work.

Conclusion This paper presented a new recursion for fixed-point smoothing in Gaussian state-space models: Algorithm 1. It has lower memory consumption than existing approaches because it avoids state-augmentation and foregoes storing all intermediate results of a Rauch–Tung–Striebel smoother. It also allows arbitrary parametrisations of Gaussian distributions, and we use this perspective to derive a Cholesky-based form of fixed-point smoothers. As a result, our method matches the speed of the fastest and the robustness of the most robust methods, as has been demonstrated in three simulations of varying difficulty. Through these successes, our contribution hopefully revives fixed-point smoothing as part of the literature on Gaussian filters and smoothers. We anticipate notable performance improvements for algorithms at the interface of smoothing and dynamical systems due to the newfound ability to reliably use non-standard smoothing algorithms.

## References

Mohinder S Grewal and Angus P Andrews. *Kalman Filtering: Theory and Practice with MATLAB*. John Wiley & Sons, 2014.

Simo Särkkä and Lennart Svensson. *Bayesian Filtering and Smoothing*. Cambridge University Press, 2023.

Simo Särkkä and Arno Solin. *Applied Stochastic Differential Equations*, volume 10. Cambridge University Press, 2019.

Kevin P Murphy. *Probabilistic Machine Learning: Advanced Topics*. MIT Press, 2023.

Philipp Hennig, Michael A Osborne, and Hans P Kersting. *Probabilistic Numerics: Computation as Machine Learning*. Cambridge University Press, 2022.

Ángel F García-Fernández, Lennart Svensson, and Simo Särkkä. Iterated posterior linearization smoother. *IEEE Transactions on Automatic Control*, 62(4):2056–2063, 2016.

Simo Särkkä and Ángel F García-Fernández. Temporal parallelization of Bayesian smoothers. *IEEE Transactions on Automatic Control*, 66(1):299–306, 2020.

Fatemeh Yaghoobi, Adrien Corenflos, Sakira Hassan, and Simo Särkkä. Parallel square-root statistical linear regression for inference in nonlinear state space models. *Preprint on ArXiv:2207.00426*, 2022.

James S Meditch. *Stochastic Optimal Linear Estimation and Control*. McGraw-Hill, 1969.

KK Biswas and AK Mahalanabis.
An approach to fixed-point smoothing problems. *IEEE Transactions on Aerospace and Electronic Systems*, (5):676–682, 1972.

MWA Smith and AP Roberts. The robustness of the fixed point smoothing algorithm. *Journal of Information and Optimization Sciences*, 3(2):87–104, 1982.

JS Meditch. Optimal fixed-point continuous linear smoothing. In *Joint Automatic Control Conference*, number 5, pages 249–257, 1967a.

Simo Särkkä and Jouni Hartikainen. On Gaussian optimal smoothing of non-linear state space models. *IEEE Transactions on Automatic Control*, 55(8):1938–1941, 2010.

H Rauch. Solutions to the linear smoothing problem. *IEEE Transactions on Automatic Control*, 8(4):371–372, 1963.

James S Meditch. Orthogonal projection and discrete optimal linear smoothing. *SIAM Journal on Control*, 5(1):74–89, 1967b.

James S Meditch. Initial- and lagging-state observers. *Information Sciences*, 11(1):55–67, 1976.

JS Meditch and GH Hostetter. Estimation of initial conditions in linear systems. In *Proceedings of the 11th Annual Allerton Conference on Circuit and System Theory*, pages 226–235, 1973.

T Nishimura. A new approach to estimation of initial conditions and smoothing problems. *IEEE Transactions on Aerospace and Electronic Systems*, (5):828–836, 1969.

Nicholas Krämer and Philipp Hennig. Linear-time probabilistic solution of boundary value problems. *Advances in Neural Information Processing Systems*, 34:11160–11171, 2021.

Peter Nicholas Krämer. *Implementing probabilistic numerical solvers for differential equations*. PhD thesis, Universität Tübingen, 2024.

John M Bennett. Triangular factors of modified matrices. *Numerische Mathematik*, 7:217–221, 1965.

Kevin P Murphy. *Probabilistic Machine Learning: An Introduction*. MIT Press, 2022.

Rudolph E Kalman. A new approach to linear filtering and prediction problems. *Journal of Basic Engineering*, 82(1):35–45, 1960.

Angus Andrews. A square root formulation of the Kalman covariance equations. *AIAA Journal*, 6(6):1165–1166, 1968.

Herbert E Rauch, F Tung, and Charlotte T Striebel. Maximum likelihood estimates of linear dynamic systems. *AIAA Journal*, 3(8):1445–1450, 1965.

Poogyeon Park and Thomas Kailath. Square-root RTS smoothing algorithms. *International Journal of Control*, 62(5):1049–1060, 1995.

Simo Särkkä. *Bayesian Filtering and Smoothing*. Cambridge University Press, 2013. (Note: This is the first edition. The second edition (Särkkä and Svensson, 2023) does not discuss fixed-point smoothing.)

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Michael Schober, David K Duvenaud, and Philipp Hennig. Probabilistic ODE solvers with Runge–Kutta means. *Advances in Neural Information Processing Systems*, 27, 2014.

Michael Schober, Simo Särkkä, and Philipp Hennig. A probabilistic model for the numerical solution of initial value problems. *Statistics and Computing*, 29(1):99–122, 2019.

Nicholas Krämer and Philipp Hennig. Stable implementation of probabilistic ODE solvers. *Journal of Machine Learning Research*, 25(111):1–29, 2024.

Francesca Mazzia and Jeff R Cash. A Fortran test set for boundary value problem solvers. In *AIP Conference Proceedings*, volume 1648. AIP Publishing, 2015.

Nathanael Bosch, Philipp Hennig, and Filip Tronarp. Calibrated adaptive probabilistic ODE solvers.
In *International Conference on Artificial Intelligence and Statistics*, pages 3466–3474. PMLR, 2021.

Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. *Journal of the Royal Statistical Society: Series B (Methodological)*, 39(1):1–22, 1977.

CF Jeff Wu. On the convergence properties of the EM algorithm. *The Annals of Statistics*, pages 95–103, 1983.

Stuart Gibson and Brett Ninness. Robust maximum-likelihood estimation of multivariable dynamic systems. *Automatica*, 41(10):1667–1682, 2005.

Matthias Seeger. Low rank updates for the Cholesky decomposition. 2004.

## A Kalman Filter Recursions

The Kalman filter (Kalman, 1960) computes p(xK | y1:K) by initialising p(x0 | y1:0) = p(x0) and alternating

$$\text{Prediction:}\quad p(x_{k-1}\mid y_{1:k-1})\longmapsto p(x_{k}\mid y_{1:k-1}),\quad k=1,...,K,\tag{26a}$$

$$\text{Update:}\quad p(x_{k}\mid y_{1:k-1})\longmapsto p(x_{k}\mid y_{1:k}),\quad k=1,...,K.\tag{26b}$$

Since all operations are linear and all distributions are Gaussian, the recursions are available in closed form. Their parametrisation depends on the parametrisation of Gaussian variables as follows.

## A.1 Covariance-Based Parametrisation

Let k = 1, ..., K. The prediction step computes the predicted distribution p(xk | y1:k−1) = N(mk|k−1, Ck|k−1) from the previous filtering distribution p(xk−1 | y1:k−1) = N(mk−1|k−1, Ck−1|k−1) by

$$m_{k|k-1}=A_{k}m_{k-1|k-1},\quad C_{k|k-1}=A_{k}C_{k-1|k-1}(A_{k})^{\top}+B_{k}.\tag{27}$$

The update step takes the predicted distribution, forms the joint distribution

$$p(x_{k},y_{k}\mid y_{1:k-1})=p(y_{k}\mid x_{k})\,p(x_{k}\mid y_{1:k-1})=\mathcal{N}(H_{k}x_{k},R_{k})\,\mathcal{N}(m_{k|k-1},C_{k|k-1}),\tag{28}$$

and computes the update as p(xk | y1:k) = N(mk|k, Ck|k),

$$\begin{aligned}s_{k|k-1}&=H_{k}m_{k|k-1},\\ S_{k|k-1}&=H_{k}C_{k|k-1}(H_{k})^{\top}+R_{k},\\ Z_{k}&=C_{k|k-1}(H_{k})^{\top}(S_{k|k-1})^{-1},\\ m_{k|k}&=m_{k|k-1}+Z_{k}(y_{k}-s_{k|k-1}),\\ C_{k|k}&=C_{k|k-1}-Z_{k}S_{k|k-1}(Z_{k})^{\top}.\end{aligned}\tag{29}$$

Iterating these two steps from k = 1, ..., K yields p(xK | y1:K).

## A.2 Cholesky-Based Parametrisation

The Cholesky-based parametrisation predicts the mean like the covariance-based parametrisation. The covariance prediction is replaced by the QR decomposition

$$\Omega\mathfrak{R}=\begin{pmatrix}(L_{C_{k-1|k-1}})^{\top}(A_{k})^{\top}\\ (L_{B_{k}})^{\top}\end{pmatrix}\tag{30}$$

followed by setting LCk|k−1 = R^⊤ (Krämer and Hennig, 2024; Grewal and Andrews, 2014). The logic mirrors that in Algorithm 3.
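The prediction step just described is compactly expressed in code; the following sketch (ours, not the released implementation) realises Equation 30 with jax.numpy:

```python
import jax.numpy as jnp

def predict_cholesky(m, L_C, A, L_B):
    """Cholesky-based prediction: mean as usual, covariance via the QR
    decomposition of Equation 30."""
    m_pred = A @ m
    stacked = jnp.vstack([L_C.T @ A.T, L_B.T])
    _, R = jnp.linalg.qr(stacked)      # R^T R = A C A^T + B
    return m_pred, R.T                 # L_{C_{k|k-1}} = R^T
```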
The update step in Cholesky-based arithmetic amounts to a QR-decomposition of (Gibson and Ninness, 2005)

$$\Omega\begin{pmatrix}\mathfrak{R}_{1}&\mathfrak{R}_{2}\\ 0&\mathfrak{R}_{3}\end{pmatrix}=\begin{pmatrix}(L_{R_{k}})^{\top}&0\\ (L_{C_{k|k-1}})^{\top}(H_{k})^{\top}&(L_{C_{k|k-1}})^{\top}\end{pmatrix}\tag{31}$$

followed by setting

$$L_{S_{k|k-1}}=(\mathfrak{R}_{1})^{\top},\quad Z_{k}=((\mathfrak{R}_{1})^{-1}\mathfrak{R}_{2})^{\top},\quad L_{C_{k|k}}=(\mathfrak{R}_{3})^{\top}.\tag{32}$$

Then, use Zk to evaluate mk|k and iterate k ↦ k + 1. If p(yk | y1:k−1) is needed, evaluate sk|k−1 like in the covariance-based implementation. Choosing the QR decomposition by Gibson and Ninness (2005) avoids implementing Gaussian updates via Cholesky downdates (e.g. Yaghoobi et al., 2022), which are sometimes numerically unstable (Seeger, 2004). We refer to Krämer (2024, Chapter 4) for why the above assignments yield the correct distributions.

## B Rauch–Tung–Striebel Smoother Recursions

The Rauch–Tung–Striebel smoother proceeds similarly to the Kalman filter.

## B.1 Covariance-Based Parametrisation

The prediction step in the smoother involves computing p(xk | y1:k−1) like in the filter. It further evaluates

$$p(x_{k-1}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{k-1|k}x_{k}+p_{k-1|k},P_{k-1|k}),\tag{33a}$$

$$G_{k-1|k}=C_{k-1|k-1}(A_{k})^{\top}(C_{k|k-1})^{-1},\tag{33b}$$

$$p_{k-1|k}=m_{k-1|k-1}-G_{k-1|k}m_{k|k-1},\tag{33c}$$

$$P_{k-1|k}=C_{k-1|k-1}-G_{k-1|k}C_{k|k-1}(G_{k-1|k})^{\top}.\tag{33d}$$

This conditional distribution is stored after every step. The rest of the step proceeds like in the Kalman filter.

## B.2 Cholesky-Based Parametrisation

The Cholesky-based parametrisation computes the same conditional distribution as the covariance-based parametrisation but replaces matrix multiplication and matrix addition with another QR decomposition; specifically, the QR decomposition (Gibson and Ninness, 2005)

$$\Omega\begin{pmatrix}\mathfrak{R}_{1}&\mathfrak{R}_{2}\\ 0&\mathfrak{R}_{3}\end{pmatrix}=\begin{pmatrix}(L_{B_{k}})^{\top}&0\\ (L_{C_{k-1|k-1}})^{\top}(A_{k})^{\top}&(L_{C_{k-1|k-1}})^{\top}\end{pmatrix}.\tag{34}$$

This QR decomposition is followed by setting

$$L_{C_{k|k-1}}=(\mathfrak{R}_{1})^{\top},\quad G_{k-1|k}=((\mathfrak{R}_{1})^{-1}\mathfrak{R}_{2})^{\top},\quad L_{P_{k-1|k}}=(\mathfrak{R}_{3})^{\top}.\tag{35}$$

This step computes the Cholesky factors of Ck|k−1 and Pk−1|k, as well as the smoothing gain Gk−1|k, in a single sweep. It replaces the prediction step of the Cholesky-based Kalman filter. Compute mk|k−1 and pk−1|k like in the covariance-based parametrisation. Store these quantities and proceed with the update step of the Cholesky-based Kalman filter. Again, refer to Krämer (2024, Chapter 4) for why these assignments yield the correct distributions.

## C Proof Of Proposition 1

Recall the notation from Algorithm 2, most importantly, G0|0 = I, p0|0 = 0, and P0|0 = 0. Algorithm 1 computes the transition

$$p(x_{0}\mid x_{k},y_{1:k-1})=\mathcal{N}(G_{0|k}x_{k}+p_{0|k},P_{0|k}).\tag{36}$$

A forward pass with a Kalman filter gives

$$p(x_{k}\mid y_{1:k})=\mathcal{N}(m_{k|k},C_{k|k}).\tag{37}$$

Then, due to the rules of manipulating Gaussian distributions,

$$p(x_{0}\mid y_{1:k})=\mathcal{N}(G_{0|k}m_{k|k}+p_{0|k},\,G_{0|k}C_{k|k}(G_{0|k})^{\top}+P_{0|k})\tag{38}$$

follows.
To show that this matches the result of Equation 7, it suffices to show

$$p_{0|k}=m_{0|k-1}-G_{0|k}m_{k|k-1}\tag{39a}$$

$$P_{0|k}=C_{0|k-1}-G_{0|k}C_{k|k-1}(G_{0|k})^{\top}\tag{39b}$$

because the remaining terms already coincide. We use induction to show Equation 39.

Initialisation For k = 1, we get

$$p_{0|1}=G_{0|0}p_{0|1}+p_{0|0}=p_{0|1}=m_{0|0}-G_{0|1}m_{1|0},\tag{40}$$

which shows Equation 39a (compare the smoothing recursion in Appendix B for the last step), as well as

$$P_{0|1}=G_{0|0}P_{0|1}(G_{0|0})^{\top}+P_{0|0}=P_{0|1}=C_{0|0}-G_{0|1}C_{1|0}(G_{0|1})^{\top},\tag{41}$$

where the last step again uses the smoothing recursion in Appendix B; this gives Equation 39b.

Step Now, let Equation 39 hold for some k. Then (abbreviate "RTSS": "Rauch–Tung–Striebel smoother"),

$$\begin{aligned}p_{0|k+1}&=G_{0|k}p_{k|k+1}+p_{0|k}&&\text{(Algorithm 2)}\\ &=G_{0|k}p_{k|k+1}+m_{0|k-1}-G_{0|k}m_{k|k-1}&&\text{(Equation 39a holds for }k\text{)}\\ &=G_{0|k}(m_{k|k}-G_{k|k+1}m_{k+1|k})+m_{0|k-1}-G_{0|k}m_{k|k-1}&&\text{(expand }p_{k|k+1}\text{ as in the RTSS)}\\ &=m_{0|k-1}-G_{0|k}(m_{k|k-1}-m_{k|k})-G_{0|k+1}m_{k+1|k}&&(G_{0|k}G_{k|k+1}=G_{0|k+1})\\ &=m_{0|k}-G_{0|k+1}m_{k+1|k}.&&\text{(Equation 7)}\end{aligned}$$

Equation 39a is complete. Similarly,

$$\begin{aligned}P_{0|k+1}&=G_{0|k}P_{k|k+1}(G_{0|k})^{\top}+P_{0|k}&&\text{(Algorithm 2)}\\ &=G_{0|k}P_{k|k+1}(G_{0|k})^{\top}+C_{0|k-1}-G_{0|k}C_{k|k-1}(G_{0|k})^{\top}&&\text{(Equation 39b holds for }k\text{)}\\ &=G_{0|k}\big(C_{k|k}-G_{k|k+1}C_{k+1|k}(G_{k|k+1})^{\top}\big)(G_{0|k})^{\top}+C_{0|k-1}-G_{0|k}C_{k|k-1}(G_{0|k})^{\top}&&\text{(expand }P_{k|k+1}\text{ as in the RTSS)}\\ &=C_{0|k-1}-G_{0|k}(C_{k|k-1}-C_{k|k})(G_{0|k})^{\top}-G_{0|k+1}C_{k+1|k}(G_{0|k+1})^{\top}&&(G_{0|k}G_{k|k+1}=G_{0|k+1})\\ &=C_{0|k}-G_{0|k+1}C_{k+1|k}(G_{0|k+1})^{\top},&&\text{(Equation 7)}\end{aligned}$$

which establishes the covariance recursion. Equation 39 holds for all k ≥ 1, and the statement is complete.
# Improving Predictor Reliability With Selective Recalibration

Thomas P. Zollo *tpz2105@columbia.edu*
Columbia University

Zhun Deng *zhun.d@columbia.edu*
Columbia University

Jake C. Snell *js2523@princeton.edu*
Princeton University

Toniann Pitassi *toni@cs.columbia.edu*
Columbia University

Richard Zemel *zemel@cs.columbia.edu*
Columbia University

Reviewed on OpenReview: *https://openreview.net/forum?id=Aoj9H6jl6F*

## Abstract

A reliable deep learning system should be able to accurately express its confidence with respect to its predictions, a quality known as calibration. One of the most effective ways to produce reliable confidence estimates with a pre-trained model is by applying a post-hoc recalibration method. Popular recalibration methods like temperature scaling are typically fit on a small amount of data and work in the model's output space, as opposed to the more expressive feature embedding space, and thus usually have only one or a handful of parameters. However, the target distribution to which they are applied is often complex and difficult to fit well with such a function. To this end we propose *selective recalibration*, where a selection model learns to reject some user-chosen proportion of the data in order to allow the recalibrator to focus on regions of the input space that can be well-captured by such a model. We provide theoretical analysis to motivate our algorithm, and test our method through comprehensive experiments on difficult medical imaging and zero-shot classification tasks. Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.

## 1 Introduction

In order to build user trust in a machine learning system, it is important that the system can accurately express its confidence with respect to its own predictions. Under the notion of calibration common in deep learning (Guo et al., 2017; Minderer et al., 2021), a confidence estimate output by a model should be as close as possible to the expected accuracy of the prediction. For instance, a prediction assigned 30% confidence should be correct 30% of the time. This is especially important in risk-sensitive settings such as medical diagnosis, where binary predictions alone are not useful, since a 30% chance of disease must be treated differently than a 1% chance. While advancements in neural network architecture and training have brought improvements in calibration as compared to previous methods (Minderer et al., 2021), neural networks still suffer from miscalibration, usually in the form of overconfidence (Guo et al., 2017; Wang et al., 2021). In addition, these models are often applied to complex data distributions, possibly including outliers, and may have different calibration error within and between different subsets in the data (Ovadia et al., 2019; Perez-Lebel et al., 2023). We illustrate this setting in Figure 1a with a Reliability Diagram, a tool for visualizing
calibration by plotting average confidence against accuracy for bins of datapoints with similar confidence estimates. To address this calibration error, the confidence estimates of a pre-trained model can be refined using a post-hoc recalibration method like Platt scaling (Platt, 1999), temperature scaling (Guo et al., 2017), or histogram binning (Zadrozny & Elkan, 2001). Given existing empirical evidence (Guo et al., 2017) and the fact that they are typically fit on small validation sets (on the order of hundreds to a few thousand examples), these "recalibrators" usually reduce the input space to the model's logits (e.g., temperature scaling) or predicted class scores (e.g., Platt scaling, histogram binning), as opposed to the high-dimensional and expressive feature embedding space. Accordingly, they are generally inexpressive models, having only one or a handful of parameters (Platt, 1999; Guo et al., 2017; Zadrozny & Elkan, 2001; Kumar et al., 2019). But the complex data distributions to which neural networks are often applied are difficult to fit well with such simple functions, and calibration error can even be exacerbated for some regions of the input space, especially when the model has only a single scaling parameter (see Figure 1b). Motivated by these observations, we contend that these popular recalibration methods are a natural fit for use with a selection model. Selection models (El-Yaniv & Wiener, 2010; Geifman & El-Yaniv, 2017) are used alongside a classifier, and may reject a portion of the classifier's predictions in order to improve some performance metric on the subset of accepted (i.e., unrejected) examples. While selection models have typically been applied to improve classifier accuracy (Geifman & El-Yaniv, 2017), they have also been used to improve calibration error by rejecting the confidence estimates of a fixed model (Fisch et al., 2022). However, selection alone cannot fully address the underlying miscalibration because it does not alter the confidence output of the model (see Figure 1c), and the connection between selection and post-hoc recalibration remains largely unexplored. In this work we propose *selective recalibration*, where a selection model and a recalibration model are jointly optimized in order to produce predictions with low calibration error. By rejecting some portion of the data, the system can focus on a region that can be well-captured by a simple recalibration model, leading to a set of predictions with a lower calibration error than under recalibration or selection alone (see Figure 1d). This approach is especially important when a machine learning model is deployed for decision-making in risk-sensitive domains such as healthcare, finance, and law, where a predictor must be well-calibrated if a human expert is to use its output to improve outcomes and avoid causing active harm. To summarize our contributions: - We formulate selective recalibration, and offer a new loss function for training such a system, Selective Top-Label Binary Cross Entropy (S-TLBCE), which aligns with the typical notion of loss under smooth recalibrators like Platt or temperature scaling models. - We test selective recalibration and S-TLBCE in real-world medical diagnosis and image classification experiments, and find that selective recalibration with S-TLBCE consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines. - We provide theoretical insight to support our motivations and algorithm. 
## 2 Related Work

Making well-calibrated predictions is a key feature of a reliable statistical model (Guo et al., 2017; Hébert-Johnson et al., 2018; Minderer et al., 2021; Fisch et al., 2022). Popular methods for improving calibration error given a pretrained model and labeled validation dataset include Platt (Platt, 1999) and temperature scaling (Guo et al., 2017), histogram binning (Zadrozny & Elkan, 2001), and Platt binning (Kumar et al., 2019) (as well as others like those in Naeini et al. (2015); Zhang et al. (2020)). Loss functions have also been proposed to improve the calibration error of a neural network in training, including Maximum Mean Calibration Error (MMCE) (Kumar et al., 2018), S-AvUC, and SB-ECE (Karandikar et al., 2021). Calibration error is typically measured using quantities such as Expected Calibration Error (Naeini et al., 2015; Guo et al., 2017), Maximum Calibration Error (Naeini et al., 2015; Guo et al., 2017), or Brier Score (Brier, 1950) that measure whether prediction confidence matches expected outcomes. Previous research (Roelofs et al., 2022) has demonstrated that calibration measures calculated using binning have lower bias when computed using equal-mass bins.

Another technique for improving ML system reliability is selective classification (Geifman & El-Yaniv, 2017; El-Yaniv & Wiener, 2010), wherein a model is given the option to abstain from making a prediction on certain examples (often based on confidence or out-of-distribution measures). Selective classification has been well-studied in the context of neural networks (Geifman & El-Yaniv, 2017; 2019; Madras et al., 2018). It has been shown to increase disparities in accuracy across groups (Jones et al., 2021), although work has been done to mitigate this effect in both classification (Jones et al., 2021) and regression (Shah et al., 2022) tasks by enforcing calibration across groups. Recent work by Fisch et al. (2022) introduces the setting of calibrated selective classification, in which predictions from a pre-trained model are rejected for the sake of improving selective calibration error. The authors propose a method for training a selective calibration model using an S-MMCE loss function derived from the work of Kumar et al. (2018). Our work differs from this and other previous work by considering the *joint* training and application of selection and recalibration models. While Fisch et al. (2022) apply selection directly to a frozen model's outputs, we contend that the value in our algorithm lies in this joint optimization. Also, instead of using S-MMCE, we propose a new loss function, S-TLBCE, which more closely aligns with the objective function for Platt and temperature scaling.

Besides calibration and selection, there are other approaches to quantifying and addressing the uncertainty in modern neural networks. One popular approach is the use of ensembles, where multiple models are trained and their joint outputs are used to estimate uncertainty. Ensembles have been shown to both improve accuracy and provide a means to estimate predictive uncertainty without the need for Bayesian modeling (Lakshminarayanan et al., 2017). Bayesian neural networks (BNNs) offer an alternative by explicitly modeling uncertainty through distributions over the weights of the network, thus providing a principled uncertainty estimation (Blundell et al., 2015). Dropout can also be viewed as approximate Bayesian inference (Gal, 2016).
Another technique which has received interest recently is conformal prediction, which uses past data to determine a prediction interval or set in which future points are predicted to fall with high probability (Shafer & Vovk, 2008; Vovk et al., 2005). Such distribution-free guarantees have been extended to cover a wide set of risk measures (Deng et al., 2023; Snell et al., 2023) and applications such as robot planning (Ren et al., 2023) and prompting a large language model (Zollo et al., 2024).

## 3 Background

Consider the multi-class classification setting with K classes and data instances (x, y) ∼ D, where x is the input and y ∈ {1, 2, ..., K} is the ground truth class label. For a black box predictor f, f(x) ∈ R^K is a vector where f(x)_k is the predicted probability that input x has label k; we denote the confidence in the top predicted label as ˆf(x) = max_k f(x)_k. Further, we may access the unnormalized class scores f_s(x) ∈ R^K (which may be negative) and the feature embeddings f_e(x) ∈ R^d. The predicted class is ŷ = arg max_k f(x)_k, and the correctness of a prediction is y^c = 1{y = ŷ}.

## 3.1 Selective Classification

In selective classification, a *selection model* g produces binary outputs, where 0 indicates rejection and 1 indicates acceptance. A common goal is to decrease some error metric by rejecting no more than a 1 − β proportion of the data for a given target coverage level β. One popular choice for input to g is the feature embedding f_e(x), although other representations may be used. Often, a soft selection model ĝ : R^d → [0, 1] is trained, and g is produced at inference time by choosing a threshold τ on ĝ to achieve coverage level β (i.e., E[1{ĝ(X) ≥ τ}] = β).

## 3.2 Calibration

The model f is said to be top-label calibrated if E_D[y^c | ˆf(x) = p] = p for all p ∈ [0, 1] in the range of ˆf(x). To measure deviation from this condition, we calculate expected calibration error (ECE):

$$\text{ECE}_{q}=\left(\mathbb{E}_{\hat{f}(x)}\bigg[\Big(\big|\mathbb{E}_{\mathcal{D}}[y^{c}\mid\hat{f}(x)]-\hat{f}(x)\big|\Big)^{q}\bigg]\right)^{\frac{1}{q}},\tag{1}$$

where q is typically 1 or 2. A *recalibrator model* h can be applied to f to produce outputs in the interval [0, 1] such that h(f(x)) ∈ R^K is the recalibrated prediction confidence for input x and ĥ(f(x)) = max_k h(f(x))_k. See Section 4.3 for details on some specific forms of h(·).

## 3.3 Selective Calibration

Under the notion of calibrated selective classification introduced by Fisch et al. (2022), a predictor is selectively calibrated if E_D[y^c | ˆf(x) = p, g(x) = 1] = p for all p ∈ [0, 1] in the range of ˆf(x) where g(x) = 1. To interpret this statement: for the subset of examples that are accepted (i.e., g(x) = 1), for a given confidence level p the predicted label should be correct for p proportion of instances. Selective expected calibration error is then calculated as:

$$\text{S-ECE}_{q}=\left(\mathbb{E}_{\hat{f}(x)}\bigg[\Big(\big|\mathbb{E}_{\mathcal{D}}[y^{c}\mid\hat{f}(x),g(x)=1]-\hat{f}(x)\big|\Big)^{q}\;\Big|\;g(x)=1\bigg]\right)^{\frac{1}{q}}.\tag{2}$$

It should be noted that selective calibration is a separate goal from selective accuracy, and enforcing it may in some cases decrease accuracy. For example, a system may reject datapoints with ˆf(x) = 0.7 and p(y^c = 1 | x) = 0.99 (which will be accurate 99% of the time) in order to retain datapoints with ˆf(x) = 0.7 and p(y^c = 1 | x) = 0.7 (which will be accurate 70% of the time).
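To make the metric concrete, here is a small sketch (our illustration, not the paper's code) of an equal-mass-binned estimate of ECE_q from Equation 1; restricting the inputs to examples with g(x) = 1 yields the selective variant in Equation 2. Equal-mass binning follows the recommendation of Roelofs et al. (2022):

```python
import jax.numpy as jnp

def expected_calibration_error(conf, correct, n_bins=10, q=1):
    """Binned estimate of ECE_q (Equation 1) with equal-mass bins.
    `conf` holds top-label confidences; `correct` holds y^c in {0, 1}."""
    order = jnp.argsort(conf)
    conf, correct = conf[order], correct[order].astype(jnp.float32)
    n = conf.shape[0]
    err = 0.0
    for idx in jnp.array_split(jnp.arange(n), n_bins):
        gap = jnp.abs(correct[idx].mean() - conf[idx].mean())  # |accuracy - confidence|
        err += (idx.shape[0] / n) * gap**q
    return err ** (1 / q)
```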
## 4 Selective Recalibration

In order to achieve lower calibration error than existing approaches, we propose jointly optimizing a selection model and a recalibration model. Expected calibration error under both selection and recalibration is equal to

$$\text{SR-ECE}_{q}=\left(\mathbb{E}_{\hat{h}(f(x))}\Big[\big|\mathbb{E}_{\mathcal{D}}[y^{c}\mid\hat{h}(f(x)),g(x)=1]-\hat{h}(f(x))\big|^{q}\;\Big|\;g(x)=1\Big]\right)^{\frac{1}{q}}.\tag{3}$$

Taking SR-ECE$_q$ as our loss quantity of interest, our goal in selective recalibration is to solve the optimization problem:

$$\min_{g,h}\;\text{SR-ECE}_{q}\quad\text{s.t.}\quad\mathbb{E}_{\mathcal{D}}[g(x)]\geq\beta,\tag{4}$$

where $\beta$ is our desired coverage level.

There are several different ways one could approach optimizing the quantity in Eq. 4 through selection and/or recalibration. One could apply only $h$ or $g$, first train $h$ and then $g$ (or vice versa), or jointly train $g$ and $h$ (i.e., selective recalibration). In Fisch et al. (2022), only $g$ is applied; however, as our experiments will show, much of the possible reduction in calibration error comes from $h$. While $h$ can be effective alone, typical recalibrators are inexpressive, and thus may benefit from rejecting some difficult-to-fit portion of the data (as we find to be the case in experiments on several real-world datasets in Section 5). Training the models sequentially is also sub-optimal, as the benefits of selection with regards to recalibration can only be fully realized if the two models can interact in training, since fixing the first model constrains the available solutions. Selective recalibration, where $g$ and $h$ are trained together, admits any solution available to these approaches, and can produce combinations of $g$ and $h$ that are unlikely to be found via sequential optimization (we formalize this intuition theoretically via an example in Section 6).

Since Eq. 4 cannot be directly optimized, we instead follow Geifman & El-Yaniv (2019) and Fisch et al. (2022) and define a surrogate loss function $L$ including both a selective error quantity and a term to enforce the coverage constraint (weighted by $\lambda$):

$$\min\;L=L_{sel}+\lambda L_{cov}(\beta).\tag{5}$$

We describe choices for $L_{sel}$ (selection loss) and $L_{cov}$ (coverage loss) in Sections 4.1 and 4.2, along with recalibrator models in Section 4.3. Finally, we specify an inference procedure in Section 4.4, and explain how the model trained with the soft constraint in Eq. 5 is used to satisfy Eq. 4. A sketch of the resulting joint training loop is given below.
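This is a minimal PyTorch-style sketch of jointly optimizing $\hat{g}$ and $h$ under the surrogate objective of Eq. 5; it is our own illustration (not the authors' released code), and `selection_loss` and `coverage_loss` stand in for the concrete choices defined in Sections 4.1 and 4.2.

```python
import torch
import torch.nn as nn

# Soft selector g_hat: feature embedding -> acceptance score in [0, 1].
g_hat = nn.Sequential(nn.Linear(512, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())

# Recalibrator h: here a single temperature parameter (Section 4.3),
# parameterized by its log so that T stays positive.
log_T = torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam(list(g_hat.parameters()) + [log_T], lr=1e-4)
lam, beta = 32.0, 0.8  # coverage weight lambda and target coverage beta

def train_step(feats, logits, labels, selection_loss, coverage_loss):
    """One gradient step on L = L_sel + lambda * L_cov(beta)."""
    g = g_hat(feats).squeeze(-1)                          # soft acceptance
    probs = torch.softmax(logits / log_T.exp(), dim=-1)   # recalibrated h(f(x))
    conf, pred = probs.max(dim=-1)
    correct = (pred == labels).float()                    # y^c
    loss = selection_loss(g, conf, correct) + lam * coverage_loss(g, beta)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```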
## 4.1 Selection Loss

In selective recalibration, the selection loss term measures the calibration of selected instances. Its general form for a batch of data $D = \{(x_i, y_i)\}_{i=1}^{n}$ with $n$ examples is

$$L_{sel}=\frac{l(\hat{g},h;f,D)}{\frac{1}{n}\sum_{i}\hat{g}(x_{i})}\tag{6}$$

where $l$ measures the loss on selected examples and the denominator scales the loss according to the proportion preserved. We consider 3 forms of $l$: S-MMCE of Fisch et al. (2022), a selective version of multi-class cross entropy, and our proposed selective top label binary cross entropy loss.

## 4.1.1 Maximum Mean Calibration Error

We apply the S-MMCE loss function proposed in Fisch et al. (2022) for training a selective calibration system. For a batch of training data, this loss function is defined as

$$l_{\text{S-MMCE}}(\hat{g},h;f,D)=\left[\frac{1}{n^{2}}\sum_{i,j}\left|y_{i}^{c}-\hat{h}(f(x_{i}))\right|^{q}\left|y_{j}^{c}-\hat{h}(f(x_{j}))\right|^{q}\hat{g}(x_{i})\hat{g}(x_{j})\,\phi\big(\hat{h}(f(x_{i})),\hat{h}(f(x_{j}))\big)\right]^{\frac{1}{q}}\tag{7}$$

where $\phi$ is some similarity kernel, such as a Laplacian kernel. On a high level, this loss penalizes pairs of instances that have similar confidence and are both far from the true label $y^c$ (which denotes prediction correctness, 0 or 1). Further details and motivation for such an objective can be found in Fisch et al. (2022).

## 4.1.2 Top Label Binary Cross Entropy

Consider a selective version of a typical multi-class cross entropy loss:

$$l_{\text{S-MCE}}(\hat{g},h;f,D)=\frac{-1}{n}\sum_{i}\hat{g}(x_{i})\log h(f(x_{i}))_{y_{i}}.\tag{8}$$

In the case that the model is incorrect ($y^c = 0$), this loss will penalize based on under-confidence in the ground truth class. However, our goal is calibration according to the *predicted* class. Thus we propose a loss function for training a selective recalibration model based on the typical approach to optimizing a smooth recalibration model, Selective Top Label Binary Cross Entropy (S-TLBCE):

$$l_{\text{S-TLBCE}}(\hat{g},h;f,D)=\frac{-1}{n}\sum_{i}\hat{g}(x_{i})\Big[y_{i}^{c}\log\hat{h}(f(x_{i}))+(1-y_{i}^{c})\log\big(1-\hat{h}(f(x_{i}))\big)\Big].\tag{9}$$

In contrast to S-MCE, in the case of an incorrect prediction S-TLBCE penalizes based on over-confidence in the predicted label. This aligns with the established notion of top-label calibration error, as well as the typical Platt or temperature scaling objectives, and makes this a natural loss function for training a selective recalibration model. We compare S-TLBCE and S-MCE empirically in our experiments, and note that in the binary case these losses are the same.

## 4.2 Coverage Loss

When the goal of selection is improving accuracy, there exists an ordering under $\hat{g}$ that is optimal for any choice of $\beta$, namely that where $\hat{g}$ is greater for all correctly labeled examples than it is for any incorrectly labeled example. Accordingly, coverage losses used to train these systems will often only enforce that no more than a $1-\beta$ proportion is to be rejected. Unlike selective accuracy, selective calibration is not monotonic with respect to individual examples, and a mismatch in coverage between training and deployment may hurt test performance. Thus in selective recalibration we assume the user aims to select exactly a $\beta$ proportion of the data, and employ a coverage loss that targets a specific $\beta$,

$$L_{cov}(\beta)=\left(\beta-\frac{1}{n}\sum_{i}\hat{g}(x_{i})\right)^{2}.\tag{10}$$

Such a loss will be an asymptotically consistent estimator of $(\beta-\mathbb{E}[\hat{g}(x)])^{2}$. Alternatively, Fisch et al. (2022) use a logarithmic regularization approach for enforcing the coverage constraint without a specific target $\beta$, computing cross entropy between the output of $\hat{g}$ and a target vector of all ones. However, we found this approach to be unstable and sensitive to the choice of $\lambda$ in initial experiments, while our coverage loss enabled stable training at any sufficiently large choice of $\lambda$, similar to the findings of Geifman & El-Yaniv (2019). Concrete implementations of these losses are sketched below.
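The following sketch (ours, using the same hypothetical signatures as the training-loop sketch above) implements the S-TLBCE, S-MCE, and coverage losses of Eqs. 8–10; the S-MMCE kernel loss of Eq. 7 is omitted for brevity. Note that, as stated in Appendix A, the denominator of Eq. 6 is dropped in training.

```python
import torch

def s_tlbce(g, conf, correct, eps=1e-7):
    """Selective top-label binary cross entropy (Eq. 9).

    g:       soft selection scores g_hat(x_i) in [0, 1]
    conf:    recalibrated top-label confidences h_hat(f(x_i))
    correct: top-label correctness indicators y_i^c
    """
    conf = conf.clamp(eps, 1 - eps)
    bce = correct * torch.log(conf) + (1 - correct) * torch.log(1 - conf)
    return -(g * bce).mean()

def s_mce(g, probs, labels, eps=1e-7):
    """Selective multi-class cross entropy (Eq. 8): penalizes
    under-confidence in the ground-truth class on selected points."""
    p_true = probs.gather(1, labels.view(-1, 1)).squeeze(1).clamp(eps, 1.0)
    return -(g * torch.log(p_true)).mean()

def coverage_loss(g, beta):
    """Squared deviation of soft empirical coverage from target (Eq. 10)."""
    return (beta - g.mean()) ** 2
```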
## 4.3 Recalibration Models

We consider two differentiable and popular calibration models, Platt scaling and temperature scaling, both of which attempt to fit a function between model confidence and output correctness. The main difference between the models is that Platt scaling works in the output probability space, whereas temperature scaling is applied to model logits before a softmax is taken. A Platt recalibrator (Platt, 1999) produces output according to

$$h^{\text{Platt}}(f(x))=\frac{1}{1+\exp(wf(x)+b)}\tag{11}$$

where $w, b$ are learnable scalar parameters. On the other hand, a temperature scaling model (Guo et al., 2017) produces output according to

$$h^{\text{Temp}}(f_{s}(x))=\text{softmax}\left(\frac{f_{s}(x)}{T}\right)\tag{12}$$

where $f_s(x)$ is the vector of logits (unnormalized scores) produced by $f$ and $T$ is the single learned (scalar) parameter. Both models are typically trained with a binary cross-entropy loss, where the labels 0 and 1 denote whether an instance is correctly classified.

## 4.4 Inference

Once we have trained $\hat{g}$ and $h$, we can flexibly account for $\beta$ by selecting a threshold $\tau$ on unlabeled test data (or some other representative tuning set) such that $\mathbb{E}[\mathbf{1}\{\hat{g}(X) \geq \tau\}] = \beta$. The model $g$ is then simply $g(x) := \mathbf{1}\{\hat{g}(x) \geq \tau\}$.

## 4.4.1 High Probability Coverage Guarantees

Since $\mathbf{1}\{\hat{g}(x) \geq \tau\}$ is a random variable with a Bernoulli distribution, we may also apply the Hoeffding bound (Hoeffding, 1963) to guarantee that with high probability empirical target coverage $\hat{\beta}$ (the proportion of the target distribution where $\hat{g}(x) \geq \tau$) will be in some range. Given a set $V$ of $n_u$ i.i.d. unlabeled examples from the target distribution, we denote empirical coverage on $V$ as $\tilde{\beta}$. With probability at least $1 - \delta$, $\hat{\beta}$ will be in the range $[\tilde{\beta} - \epsilon, \tilde{\beta} + \epsilon]$, where $\epsilon = \sqrt{\frac{\log(2/\delta)}{2n_u}}$. For some critical coverage level $\beta$, $\tau$ can be decreased until $\tilde{\beta} - \epsilon \geq \beta$.
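The recalibrators, the thresholding step, and the Hoeffding half-width are short enough to sketch directly; as before this is our own illustrative code, not the released implementation (which, per Appendix A, is adapted from Guo et al. (2017) and Kumar et al. (2019)).

```python
import numpy as np

def platt(conf, w, b):
    """Platt recalibrator of Eq. 11 applied to a top-label confidence."""
    return 1.0 / (1.0 + np.exp(w * conf + b))

def temperature(logits, T):
    """Temperature scaling of Eq. 12 applied to unnormalized scores."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_threshold(g_scores, beta):
    """Pick tau so a beta fraction of points satisfies g_hat(x) >= tau."""
    return np.quantile(g_scores, 1.0 - beta)

def coverage_epsilon(n_u, delta=0.05):
    """Hoeffding half-width for the Section 4.4.1 coverage guarantee."""
    return np.sqrt(np.log(2.0 / delta) / (2.0 * n_u))

tau = select_threshold(np.random.rand(5000), beta=0.8)
print(tau, coverage_epsilon(n_u=5000))
```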
## 5 Experiments

In this section we examine the performance of selective recalibration and baselines when applied to models pre-trained on real-world datasets and applied to a target distribution possibly shifted from the training distribution. In Section 5.1 we investigate whether, given a small validation set of labeled examples drawn i.i.d. from the target distribution, joint optimization consistently leads to a lower empirical selective calibration error than selection or recalibration alone or sequential optimization. Subsequently, in Section 5.2 we study multiple out-of-distribution prediction tasks and the ability of a single system to provide decreasing selective calibration error across a range of coverage levels when faced with a further distribution shift from validation data to test data. We also analyze the trade-off between selective calibration error and accuracy in this setting.

Since we are introducing the objective of selective recalibration here, we focus on high-level design decisions, in particular the choice of selection and recalibration method, loss function, and optimization procedure (joint vs. sequential). For selective recalibration models, the input to $g$ is the feature embedding. Temperature scaling is used for multi-class examples and Platt scaling is applied in the binary cases (following Guo et al. (2017) and initial results on validation data). Calibration error is measured using ECE$_1$ and ECE$_2$ with equal-mass binning. For the selection loss, we use $l_{\text{S-TLBCE}}$ and $l_{\text{S-MMCE}}$ for binary tasks, and include a selective version of typical multi-class cross-entropy ($l_{\text{S-MCE}}$) for multi-class tasks. Pre-training is performed where $h$ is optimized first in order to reasonably initialize the calibrator parameters before beginning to train $g$. Models are trained both with $h$ fixed after this pre-training (denoted as "sequential" in results) and when it is jointly optimized throughout training (denoted as "joint" in results).

Our selection baselines include confidence-based rejection ("Confidence") and multiple out-of-distribution (OOD) detection methods ("Iso. Forest", "One-class SVM"), common techniques when rejecting to improve accuracy. The confidence baseline rejects examples with the smallest $\hat{f}(x)$ (or $\hat{h}(f(x))$), while the OOD methods are measured in the embedding space of the pre-trained model. All selection baselines are applied to the recalibrated model in order to make the strongest comparison. We make further comparisons to recalibration baselines, including the previously described temperature and Platt scaling as well as binning methods like histogram binning and Platt binning. See Appendix A for more experiment details including calibration error measurement and baseline implementations.

## 5.1 Selective Recalibration With I.I.D. Data

First, we test whether selective recalibration consistently produces low ECE in a setting where there is a validation set of labeled training data available from the same distribution as test data, using outputs of pretrained models on the Camelyon17 and ImageNet datasets. Camelyon17 (Bandi et al., 2018) is a task where the input $x$ is a 96x96 patch of a whole-slide image of a lymph node section from a patient with potentially metastatic breast cancer and the label $y$ is whether the patch contains a tumor. Selection and recalibration models are trained with 1000 samples, and we apply a Platt scaling $h$ since the task is binary. ImageNet is a well-known large scale image classification dataset, where we use 2000 samples from a supervised ResNet34 model for training selection and recalibration models, and a temperature scaling $h$ since the task is multi-class. Our soft selector $\hat{g}$ is a shallow fully-connected network (2 hidden layers with dimension 128), and we report selective calibration error for coverage level $\beta \in \{0.75, 0.8, 0.85, 0.9\}$. Full experiment details, including model specifications and training parameters, can be found in Appendix A.

Figure 2: Selective calibration error on ImageNet and Camelyon17 for coverage level $\beta \in \{0.75, 0.8, 0.85, 0.9\}$. Left: Various recalibration methods are trained using labeled validation data. Middle: Selection baselines including confidence-based rejection and various OOD measures. Right: Selective recalibration with different loss functions.

Results are displayed in Figure 2. They show that by jointly optimizing the selector and recalibration models, we are able to achieve improved ECE at the given coverage level $\beta$ compared to first training $h$ and then $g$. We also find selective recalibration achieves the lowest ECE in almost every case in comparison to recalibration alone.
While the confidence-based selection strategy performs well in these experiments, this is not a good approach to selective calibration in general, as it is a heuristic strategy and may fail in cases where a model's confident predictions are in fact poorly calibrated (see Section 5.2 for examples). In addition, the S-TLBCE loss shows more consistent performance than S-MMCE, as it reduces ECE in every case, whereas training with S-MMCE increases calibration error in some cases.

To be sure that the lower calibration error as compared to recalibration is not because of the extra parameters in the selector model, we also produce results for a recalibration model with the same architecture as our selector. Once again the input is the feature embedding, and the model $h$ has 2 hidden layers with dimension 128. Results for Camelyon17 are included in Figure 2; for clarity of presentation of ImageNet results, we omit the MLP recalibrator results from the plot as they were an order of magnitude worse than all other methods (ECE$_1$ 0.26, ECE$_2$ 0.30). In neither case does this baseline reach the performance of the best low-dimensional recalibration model.

## 5.2 Selective Recalibration Under Distribution Shift

In this experiment, we study the various methods applied to genetic perturbation classification with RxRx1, as well as zero-shot image classification with CLIP and CIFAR-100-C. RxRx1 (Taylor et al., 2019) is a task where the input $x$ is a 3-channel image of cells obtained by fluorescent microscopy, the label $y$ indicates which of 1,139 genetic treatments (including no treatment) the cells received, and there is a domain shift across the batch in which the imaging experiment was run. CIFAR-100 is a well-known image classification dataset, and we perform zero-shot image classification with CLIP. We follow the setting of Fisch et al. (2022) where the test data is drawn from a shifted distribution with respect to the validation set and the goal is not to target a specific $\beta$, but rather to train a selector that works across a range of coverage levels. In the case of RxRx1 the strong batch processing effect leads to a 9% difference in pretrained model accuracy between validation (18%) and test (27%) output, and we also apply a selective recalibration model trained on validation output from zero-shot CLIP and CIFAR-100 to test examples drawn from the OOD CIFAR-100-C test set. Our validation sets have 1000 (RxRx1) or 2000 (CIFAR-100) examples, $\hat{g}$ is a small network with 1 hidden layer of dimension 64, and we set $\beta = 0.5$ when training the models.

For our results we report the area under the curve (AUC) for the coverage vs. error curve, a typical metric in selective classification (Geifman & El-Yaniv, 2017; Fisch et al., 2022) that reflects how a model can reduce the error on average at different levels of $\beta$. We measure AUC in the range $\beta = [0.5, 1.0]$, with measurements taken at intervals of 0.05 (i.e., $\beta \in [0.5, 0.55, 0.6, \ldots, 0.95, 1.0]$). Additionally, to induce robustness to the distribution shift we noise the selector/recalibrator input. See Appendix A for full specifications.
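For reference, the AUC can be computed as in the sketch below (our own helper; we assume the AUC is normalized by the width of the $\beta$ range so that it reads as an average error over coverage levels, which matches the scale of the reported values, though the exact convention is an assumption).

```python
import numpy as np

def coverage_error_auc(ece_at_beta, betas=None):
    """Trapezoidal AUC of the coverage-vs-error curve over beta in [0.5, 1.0].

    ece_at_beta: selective calibration errors measured at each coverage level,
                 ordered to match `betas` (a 0.05-spaced grid by default).
    """
    if betas is None:
        betas = np.arange(0.5, 1.0001, 0.05)
    e, b = np.asarray(ece_at_beta), np.asarray(betas)
    area = np.sum((e[1:] + e[:-1]) / 2.0 * np.diff(b))  # trapezoidal rule
    return area / (b[-1] - b[0])

# Example: a selector whose error grows as coverage approaches 1.0.
errs = np.linspace(0.03, 0.07, 11)  # ECE at beta = 0.5, 0.55, ..., 1.0
print(coverage_error_auc(errs))
```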
Table 1: RxRx1 and CIFAR-100-C AUC in the range β = [0.5, 1.0].

| Selection | Opt. of h, g | RxRx1 ECE1 | RxRx1 ECE2 | RxRx1 Acc. | CIFAR-100-C ECE1 | CIFAR-100-C ECE2 | CIFAR-100-C Acc. |
|---------------|-------------|-------|-------|-------|-------|-------|-------|
| Confidence | - | 0.071 | 0.081 | 0.353 | 0.048 | 0.054 | 0.553 |
| One-class SVM | - | 0.058 | 0.077 | 0.227 | 0.044 | 0.051 | 0.388 |
| Iso. Forest | - | 0.048 | 0.061 | 0.221 | 0.044 | 0.051 | 0.379 |
| S-MCE | sequential | 0.059 | 0.075 | 0.250 | 0.033 | 0.041 | 0.499 |
| S-MCE | joint | 0.057 | 0.073 | 0.249 | 0.060 | 0.068 | 0.484 |
| S-MMCE | sequential | 0.036 | 0.045 | 0.218 | 0.030 | 0.037 | 0.503 |
| S-MMCE | joint | 0.036 | 0.045 | 0.218 | 0.043 | 0.051 | 0.489 |
| S-TLBCE | sequential | 0.036 | 0.045 | 0.219 | 0.030 | 0.037 | 0.507 |
| S-TLBCE | joint | 0.039 | 0.049 | 0.218 | 0.026 | 0.032 | 0.500 |
| Recalibration | Temp. Scale | 0.055 | 0.070 | 0.274 | 0.041 | 0.047 | 0.438 |
| (β = 1.0) | None | 0.308 | 0.331 | 0.274 | 0.071 | 0.079 | 0.438 |

Results are shown in Table 1. First, these results highlight that even in this OOD setting, the selection-only approach of Fisch et al. (2022) is not enough and recalibration is a key ingredient in improving selective calibration error. Fixing $h$ and then training $g$ performs better than joint optimization for RxRx1, likely because the distribution shift significantly changed the optimal temperature for the region of the feature space where $g(x) = 1$. Joint optimization performs best for CIFAR-100-C, and does still significantly improve ECE on RxRx1, although it is outperformed by fixing $h$ first in that case. The confidence baseline performs quite poorly on both experiments and according to both metrics, significantly increasing selective calibration error in all cases.

## 5.2.1 Trade-Offs Between Calibration Error And Accuracy

While accurate probabilistic output is the only concern in some domains and should be of at least some concern in most domains, discrete label accuracy can also be important in some circumstances. Table 1 shows accuracy results under selection, and Figure 3 shows the selective accuracy curve and confidence histogram for our selective recalibration model trained with S-TLBCE for RxRx1 and CIFAR-100 (and applied to shifted distributions). Together, these results illustrate that under different data and prediction distributions, selective recalibration may increase or decrease accuracy. For RxRx1, the model tends to reject examples with higher confidence, which also tend to be more accurate. Thus, while ECE@β may improve with respect to the full dataset, selective accuracy at β is worse. On the other hand, for CIFAR-100-C, the model tends to reject examples with lower confidence, which also tend to be less accurate. Accordingly, both ECE@β and selective accuracy at β improve with respect to the full dataset.

Figure 3: Plots illustrating 1) the distribution of confidence among the full distribution and those examples accepted for prediction (i.e., where $g(x) = 1$) at coverage level β = 0.8 and 2) selective accuracy in the range β = [0.8, 1.0].

## 6 Theoretical Analysis

To build a deeper understanding of selective recalibration (and its alternatives), we consider a theoretical situation where a pre-trained model is applied to a target distribution different from the distribution on which it was trained, mirroring both our experimental setting and a common challenge in real-world deployments. We show that with either selection or recalibration alone there will still be calibration error, while selective recalibration can achieve ECE = 0. We also show that joint optimization of $g$ and $h$, as opposed to sequentially optimizing each model, is necessary to achieve zero calibration error.
## 6.1 Setup

We consider a setting with two classes, and without loss of generality we set $y \in \{-1, 1\}$. We are given a classifier pre-trained on a mixture model, a typical way to view the distribution of objects in images (Zhu et al., 2014; Thulasidasan et al., 2019). The pre-trained classifier is then applied to a target distribution containing a portion of outliers from each class unseen during training. Our specific choices of classifier and training and target distributions are chosen for ease of interpretation and analysis; however, the intuitions built can be applied more generally, for example to neural network classifiers, which are too complex for such analysis but are often faced with outlier data on which calibration is poor (Ovadia et al., 2019).

Figure 4: A classifier pre-trained on a mixture model is applied to a target distribution with outliers.

## 6.1.1 Data Generation Model

Definition 1 (Target Distribution) *The target distribution is defined as a $(\theta^*, \sigma, \alpha)$-perturbed mixture model over $(x, y) \in \mathbb{R}^p \times \{1, -1\}$:*

$$x \mid y, z \sim zJ_1 + (1 - z)J_2,$$

*where $y$ follows the Bernoulli distribution $\mathbb{P}(y = 1) = \mathbb{P}(y = -1) = 1/2$, $z$ follows a Bernoulli distribution $\mathbb{P}(z = 1) = \beta$, and $z$ is independent of $y$.*

Our data model considers a mixture of two distributions with disjoint and bounded supports, $J_1$ and $J_2$, where $J_2$ is considered to be an outlier distribution. Specifically, for $y \in \{-1, 1\}$, $J_1$ is supported in the balls with centers $y\theta^*$ and radius $r_1$, $J_2$ is supported in the balls with centers $-y\alpha\theta^*$ and radius $r_2$, and both $J_1$ and $J_2$ have standard deviation $\sigma$. See Figure 4 for an illustration of our data models, and Appendix C.1 for a full definition of the distribution.

## 6.1.2 Classifier Algorithm

Recall that in our setting $f$ is a pre-trained model, where the training distribution is unknown and we only have samples from some different target distribution. We follow this setting in our theory by considering a (possibly biased) estimator $\hat{\theta}$ of $\theta^*$, which is the output of a training algorithm $\mathcal{A}(S^{tr})$ that takes the i.i.d. training data set $S^{tr} = \{(x_i^{tr}, y_i^{tr})\}_{i=1}^{m}$ as input. The distribution from which $S^{tr}$ is drawn is *different* from the target distribution from which we have data to train the selection and recalibration models. We only impose one assumption on the model $\hat{\theta}$: that $\mathcal{A}$ outputs a $\hat{\theta}$ that will converge to $\theta_0$ if training data is abundant enough, and $\theta_0$ should be aligned with $\theta^*$ with respect to direction (see Assumption 3, Appendix C.2 for a formal statement).

For ease of analysis and explanation, we consider a simple classifier defined by $\hat{\theta} = \sum_{i=1}^{m} x_i^{tr} y_i^{tr} / m$ when the training distribution is set to be an unperturbed Gaussian mixture $x^{tr} \mid y^{tr} \sim \mathcal{N}(y^{tr} \cdot \theta^*, \sigma^2 I)$ and $y^{tr}$ follows a Bernoulli distribution $\mathbb{P}(y^{tr} = 1) = 1/2$.¹ This form of $\hat{\theta}$ is closely related to Fisher's rule in linear discriminant analysis for Gaussian mixtures (see Appendix C.2.1 for further discussion). Having obtained $\hat{\theta}$, our pretrained classifier aligns with the typical notion of a softmax response in neural networks. We first obtain the confidence vector $f(x) = (f_1(x), f_{-1}(x))^\top$, where

$$f_{-1}(x)=\frac{1}{e^{2\hat{\theta}^{\top}x}+1},\quad f_{1}(x)=\frac{e^{2\hat{\theta}^{\top}x}}{e^{2\hat{\theta}^{\top}x}+1},\tag{13}$$

and then output $\hat{y} = \arg\max_{k \in \{-1, 1\}} f_k(x)$. For $k \in \{-1, 1\}$, the confidence score $f_k(x)$ represents an estimator of $\mathbb{P}(y = k \mid x)$ and the final classifier is equivalent to $\hat{y} = \text{sgn}(\hat{\theta}^{\top}x)$.
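For readers who prefer code, the following NumPy sketch (ours; it ignores the ball truncation of Appendix C.1 for simplicity, and all constants are arbitrary) simulates the perturbed mixture, fits $\hat{\theta}$ as the averaged class-signed mean, and produces the softmax-response confidences of Eq. 13.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, alpha, beta_mix = 2, 0.1, 0.4, 0.9
theta_star = np.array([1.0, 0.0])

def sample_target(n):
    """Draw (x, y) from the (theta*, sigma, alpha)-perturbed mixture:
    with prob. beta_mix, x ~ N(y * theta*, sigma^2 I) (inliers, J1);
    otherwise x ~ N(-y * alpha * theta*, sigma^2 I) (outliers, J2)."""
    y = rng.choice([-1, 1], size=n)
    z = rng.uniform(size=n) < beta_mix
    centers = np.where(z, 1.0, -alpha)[:, None] * y[:, None] * theta_star
    return centers + sigma * rng.standard_normal((n, p)), y

# Pretrained classifier: theta_hat = sum_i x_i y_i / m on unperturbed data.
m = 5000
y_tr = rng.choice([-1, 1], size=m)
x_tr = y_tr[:, None] * theta_star + sigma * rng.standard_normal((m, p))
theta_hat = (x_tr * y_tr[:, None]).mean(axis=0)

x, y = sample_target(1000)
v = x @ theta_hat
f1 = 1.0 / (1.0 + np.exp(-2.0 * v))  # f_1(x) of Eq. 13
y_pred = np.sign(v)                  # equivalent to argmax_k f_k(x)
print("accuracy:", (y_pred == y).mean())
```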
## 6.2 Main Theoretical Results

Having established our data and classification models, we now analyze why selective recalibration (i.e., joint training of $g$ and $h$) can outperform recalibration and selection performed alone or sequentially. To measure calibration error, we consider ECE$_q$ with $q = 1$ (and drop the subscript $q$ for notational simplicity below). For the clarity of theorem statements and proofs, we will restate definitions of calibration error to make them explicitly dependent on selection model $g$ and temperature $T$ and tailored for the binary case we are studying. We want to emphasize that we are not *introducing new concepts*, but instead offering different surface forms of the same quantities introduced earlier. First, we notice that under our data generating model and pretrained classifier, ECE can be expressed as

$$\text{ECE}=\mathbb{E}_{\hat{\theta}^{\top}x}\Bigg[\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v]-\frac{1}{1+e^{-2v}}\Big|\Bigg].$$

By studying such population quantities, our analysis is not dependent on any binning methods that are commonly used in empirically calculating expected calibration errors.

¹The in-distribution case also works under our data generation model.

## 6.2.1 Selective Recalibration vs. Recalibration or Selection

We study the following ECE quantities according to our data model for recalibration alone (R-ECE), selection alone (S-ECE), and selective recalibration (SR-ECE). For recalibration, we focus on studying the popular temperature scaling model, although the analysis is nearly identical for Platt scaling.

$$\text{R-ECE}(T)=\mathbb{E}_{\hat{\theta}^{\top}x}\Bigg[\Big|\mathbb{P}[y=1\mid\frac{\hat{\theta}^{\top}x}{T}=v]-\frac{1}{1+e^{-2v}}\Big|\Bigg].$$

$$\text{S-ECE}(g):=\mathbb{E}_{\hat{\theta}^{\top}x}\Bigg[\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v,g(x)=1]-\frac{1}{1+e^{-2v}}\Big|\;\Big|\;g(x)=1\Bigg].$$

$$\text{SR-ECE}(g,T):=\mathbb{E}_{\hat{\theta}^{\top}x}\Bigg[\Big|\mathbb{P}[y=1\mid\frac{\hat{\theta}^{\top}x}{T}=v,g(x)=1]-\frac{1}{1+e^{-2v}}\Big|\;\Big|\;g(x)=1\Bigg].$$

Our first theorem proves that under our data generation model, S-ECE and R-ECE can never reach 0, but SR-ECE can reach 0 by choosing appropriate $g$ and $T$.

Theorem 1 *Under Assumption 3, for any $\delta \in (0, 1)$ and $\hat{\theta}$ output by $\mathcal{A}$, there exist thresholds $M \in \mathbb{N}^+$ and $\tau > 0$ such that if $\max\{r_1, r_2, \sigma, \|\theta^*\|\} < \tau$ and $m > M$, there exists a positive lower bound $L$, with probability at least $1 - \delta$ over $S^{tr}$:*

$$\min\Big\{\min_{g:\mathbb{E}[g(x)]\geq\beta}\text{S-ECE}(g),\;\min_{T\in\mathbb{R}}\text{R-ECE}(T)\Big\}>L.$$

*However, there exist $T_0$ and $g_0$ satisfying $\mathbb{E}[g_0(x)] \geq \beta$, such that SR-ECE$(g_0, T_0) = 0$.*

Intuition and interpretation. Here we give some intuition for understanding our results. Under our perturbed mixture model, R-ECE is calculated as

$$\text{R-ECE}(T)=\mathbb{E}_{v=\hat{\theta}^{\top}x}\Bigg|\frac{\mathbf{1}\{v\in\mathcal{A}\}}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}+\frac{\mathbf{1}\{v\in\mathcal{B}\}}{1+\exp\big(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}-\frac{1}{e^{-2v/T}+1}\Bigg|$$

for disjoint sets $\mathcal{A}$ and $\mathcal{B}$, which correspond to points on the support of $J_1$ and $J_2$ respectively. In order to achieve zero R-ECE, when $v \in \mathcal{A}$, we need $T = \hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$. However, for $v \in \mathcal{B}$ we need $T = -\alpha\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$. These clearly cannot be achieved simultaneously.
Thus the presence of the outlier data makes it impossible for the recalibration model to properly calibrate the confidence for the whole population. A similar expression can be obtained for S-ECE. As long as $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$ and $-\alpha\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$ are far from 1 (i.e., miscalibration exists), no choice of $g$ can reach zero S-ECE. In other words, no selection rule alone can lead to calibrated predictions, since no subset of the data is calibrated under the pre-trained classifier. However, by setting $g_0(x) = 0$ for all $x \in \mathcal{B}$ and $g_0(x) = 1$ otherwise, and choosing $T_0 = \hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$, SR-ECE $= 0$. Thus we can conclude that achieving ECE $= 0$ on eligible predictions is only possible under selective recalibration, while selection or recalibration alone induce positive ECE. See Appendix C for more details and analysis.

## 6.2.2 Joint Learning Versus Sequential Learning

We can further demonstrate that jointly learning a selection model $g$ and temperature scaling parameter $T$ can outperform sequential learning of $g$ and $T$. Let us first denote $\tilde{g} := \arg\min_g \text{S-ECE}(g)$ such that $\mathbb{E}[\tilde{g}(x)] \geq \beta$ and $\tilde{T} := \arg\min_T \text{R-ECE}(T)$. We denote two types of expected calibration error under sequential learning of $g$ and $T$, depending on which is optimized first:

$$\text{ECE}^{R\to S}:=\min_{g:\mathbb{E}[g(x)]\geq\beta}\text{SR-ECE}(g,\tilde{T});\qquad\text{ECE}^{S\to R}:=\min_{T\in\mathbb{R}}\text{SR-ECE}(\tilde{g},T).$$

Our second theorem shows these two types of expected calibration error for sequential learning are lower bounded, while jointly learning $g, T$ can reach zero calibration error.

Theorem 2 *Under Assumption 3, if $\beta > 2(1-\beta)$, for any $\delta \in (0, 1)$ and $\hat{\theta}$ output by $\mathcal{A}$, there exist thresholds $M \in \mathbb{N}^+$ and $\tau_2 > \tau_1 > 0$: if $\max\{r_1, r_2, \sigma\} < \tau_2$, $\tau_1 < \sigma$, and $m > M$, then there exists a positive lower bound $L$, with probability at least $1 - \delta$ over $S^{tr}$:*

$$\min\{\text{ECE}^{R\to S},\;\text{ECE}^{S\to R}\}>L.$$

*However, there exist $T_0$ and $g_0$ satisfying $\mathbb{E}[g_0(x)] \geq \beta$, such that SR-ECE$(g_0, T_0) = 0$.*

Intuition and interpretation. If we first optimize the temperature scaling model to obtain $\tilde{T}$, $\tilde{T}$ will not be equal to $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$. Then, when applying selection, there exists no $g$ that can reach 0 calibration error since the temperature is not optimal for data in $\mathcal{A}$ or $\mathcal{B}$. On the other hand, if we first optimize the selection model and obtain $\tilde{g}$, $\tilde{g}$ will reject points in $\mathcal{A}$ instead of those in $\mathcal{B}$ because points in $\mathcal{A}$ incur higher calibration error, and thus data from both $\mathcal{A}$ and $\mathcal{B}$ will be selected (i.e., not rejected). In that case, temperature scaling will not be able to push calibration error to zero because, similar to the case in the earlier R-ECE analysis, the calibration error in $\mathcal{A}$ and $\mathcal{B}$ cannot reach 0 simultaneously using a single temperature scaling model. On the other hand, the optimal jointly-learned solution yields a set of predictions with zero expected calibration error.

## 7 Conclusion

We have shown both empirically and theoretically that combining selection and recalibration is a potent strategy for producing a set of well-calibrated predictions. Eight pairs of distribution and $\beta$ were tested when i.i.d. validation data is available; selective recalibration with our proposed S-TLBCE loss function outperforms every single recalibrator tested in 7 cases, and always reduces S-ECE with respect to the calibrator employed by the selective recalibration model itself.
Taken together, these results show that while many popular recalibration functions are quite effective at reducing calibration error, they can often be better fit to the data when given the opportunity to ignore a small portion of difficult examples. Thus, in domains where calibrated confidence is critical to decision making, selective recalibration is a practical and lightweight strategy for improving outcomes downstream of deep learning model predictions.

## Broader Impact Statement

While the goal of our method is to foster better outcomes in settings of societal importance like medical diagnosis, as mentioned in Section 2, selective classification may create disparities among protected groups. Future work on selective recalibration could focus on analyzing and mitigating any unequal effects of the algorithm.

## Acknowledgments

We thank Mike Mozer and Jasper Snoek for their very helpful feedback on this work. This research was supported by funds provided by the National Science Foundation and by DoD OUSD (R&E) under Cooperative Agreement PHY-2229929 (ARNI: The NSF AI Institute for Artificial and Natural Intelligence). JCS gratefully acknowledges financial support from the Schmidt DataX Fund at Princeton University made possible through a major gift from the Schmidt Futures Foundation. TP gratefully acknowledges support by NSF AF:Medium 2212136 and by the Simons Collaboration on the Theory of Algorithm Fairness grant.

## References

Peter Bandi, Oscar Geessink, Quirine Manson, Marcory van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, Quanzheng Li, Farhad Ghazvinian Zanjani, Sveta Zinger, Keisuke Fukuta, Daisuke Komura, Vlado Ovtcharov, Shenghua Cheng, Shaoqun Zeng, Jeppe Thagaard, and Geert Litjens. From detection of individual metastases to classification of lymph node status at the patient level: The Camelyon17 challenge. *IEEE Transactions on Medical Imaging*, PP:1–1, 08 2018. doi: 10.1109/TMI.2018.2867350.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In *International Conference on Machine Learning*, 2015.

Glenn W. Brier. Verification of forecasts expressed in terms of probability. *Monthly Weather Review*, 78:1–3, 1950.

Zhun Deng, Thomas P. Zollo, Jake C. Snell, Toniann Pitassi, and Richard Zemel. Distribution-free statistical dispersion control for societal applications. In *Advances in Neural Information Processing Systems*, 2023.

Ran El-Yaniv and Yair Wiener. On the foundations of noise-free selective classification. *Journal of Machine Learning Research*, 11(53):1605–1641, 2010.

Adam Fisch, Tommi S. Jaakkola, and Regina Barzilay. Calibrated selective classification. *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856.

Yarin Gal. *Uncertainty in Deep Learning*. PhD thesis, University of Cambridge, 2016.

Yonatan Geifman and Ran El-Yaniv. Selective classification for deep neural networks. In *Advances in Neural Information Processing Systems*, 2017.

Yonatan Geifman and Ran El-Yaniv. Selectivenet: A deep neural network with an integrated reject option. In *International Conference on Machine Learning*, 2019.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *International Conference on Machine Learning*, 2017.

Ursula Hebert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (Computationally-identifiable) masses.
In *International Conference on Machine Learning*, 2018.

Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In *International Conference on Learning Representations*, 2020.

Wassily Hoeffding. Probability inequalities for sums of bounded random variables. *Journal of the American Statistical Association*, 58(301):13–30, 1963. ISSN 01621459.

Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, and Percy Liang. Selective classification can magnify disparities across groups. In *International Conference on Learning Representations*, 2021.

Archit Karandikar, Nicholas Cain, Dustin Tran, Balaji Lakshminarayanan, Jonathon Shlens, Michael C. Mozer, and Becca Roelofs. Soft calibration objectives for neural networks. In *Advances in Neural Information Processing Systems*, 2021.

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, 2021.

Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. In *Advances in Neural Information Processing Systems*, 2019.

Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In *International Conference on Machine Learning*, 2018.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In *Advances in Neural Information Processing Systems*, 2017.

David Madras, Toniann Pitassi, and Richard Zemel. Predict responsibly: Improving fairness and accuracy by learning to defer. In *Advances in Neural Information Processing Systems*, 2018.

Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Ann Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In *Advances in Neural Information Processing Systems*, 2021.

Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using Bayesian binning. In *AAAI Conference on Artificial Intelligence*, 2015.

Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In *Advances in Neural Information Processing Systems*, 2019.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12(85):2825–2830, 2011.

Alexandre Perez-Lebel, Marine Le Morvan, and Gaël Varoquaux. Beyond calibration: estimating the grouping loss of modern neural networks. In *International Conference on Learning Representations*, 2023.

J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in Large Margin Classifiers*, 10(3), 1999.
Allen Z. Ren, Anushri Dixit, Alexandra Bodrova, Sumeet Singh, Stephen Tu, Noah Brown, Peng Xu, Leila Takayama, Fei Xia, Jake Varley, Zhenjia Xu, Dorsa Sadigh, Andy Zeng, and Anirudha Majumdar. Robots that ask for help: Uncertainty alignment for large language model planners. In *Conference on Robot Learning*, 2023.

Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C. Mozer. Mitigating bias in calibration error estimation. In *International Conference on Artificial Intelligence and Statistics*, 2022.

Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. *Journal of Machine Learning Research*, 9(12):371–421, 2008.

Abhin Shah, Yuheng Bu, Joshua K Lee, Subhro Das, Rameswar Panda, Prasanna Sattigeri, and Gregory W Wornell. Selective regression under fairness criteria. In *International Conference on Machine Learning*, 2022.

Jake Snell, Thomas P. Zollo, Zhun Deng, Toniann Pitassi, and Richard Zemel. Quantile risk control: A flexible framework for bounding the probability of high-loss predictions. In *International Conference on Learning Representations*, 2023.

James Taylor, Berton Earnshaw, Ben Mabey, Mason Victors, and Jason Yosinski. RxRx1: An image set for cellular morphological variation across many experimental batches. In *International Conference on Learning Representations*, 2019.

Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. On mixup training: improved calibration and predictive uncertainty for deep neural networks. In *International Conference on Neural Information Processing Systems*, 2019.

Vladimir Vovk, Ilia Nouretdinov, Akimichi Takemura, and Glenn Shafer. Defensive forecasting for linear protocols. In *International Conference on Algorithmic Learning Theory*, 2005.

Deng-Bao Wang, Lei Feng, and Min-Ling Zhang. Rethinking calibration of deep neural networks: Do not be afraid of overconfidence. In *Advances in Neural Information Processing Systems*, 2021.

Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In *International Conference on Machine Learning*, 2001.

Jize Zhang, Bhavya Kailkhura, and T. Yong-Jin Han. Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning. In *International Conference on Machine Learning*, 2020.

Linjun Zhang, Zhun Deng, Kenji Kawaguchi, and James Zou. When and how mixup improves calibration. In *International Conference on Machine Learning*, 2022.

Xiangxin Zhu, Dragomir Anguelov, and Deva Ramanan. Capturing long-tail distributions of object subcategories. In *Conference on Computer Vision and Pattern Recognition*, 2014.

Thomas P. Zollo, Todd Morrill, Zhun Deng, Jake C. Snell, Toniann Pitassi, and Richard Zemel. Prompt risk control: A rigorous framework for responsible deployment of large language models. In *International Conference on Learning Representations*, 2024.

## Appendix A Additional Experiment Details

In training we follow Fisch et al. (2022) and drop the denominator in $L_{sel}$, as the coverage loss suffices to keep $\hat{g}$ from collapsing to 0. Recalibration model code is taken from the accompanying code releases from Guo et al. (2017)² (temperature scaling) and Kumar et al. (2019)³ (Platt scaling, histogram binning, Platt binning).

## A.1 Calibration Measures

We calculate ECE$_q$ for $q \in \{1, 2\}$ using the Python library released by Kumar et al. (2019).⁴
ECE$_q$ is calculated as:

$$\text{ECE}_{q}=\left(\frac{1}{|B|}\sum_{j=1}^{|B|}\left|\frac{\sum_{i\in B_{j}}\mathbf{1}\{y_{i}=\hat{y}_{i}\}}{|B_{j}|}-\frac{\sum_{i\in B_{j}}\hat{f}(x_{i})}{|B_{j}|}\right|^{q}\right)^{\frac{1}{q}}\tag{14}$$

where $B = B_1, \ldots, B_m$ are a set of $m$ equal-mass prediction bins, and predictions are sorted and binned based on their maximum confidence $\hat{f}(x)$. We set $m = 15$.

## A.2 Baselines

Next we describe how baseline methods are implemented. Our descriptions are based on creating an ordering of the test set such that at a given coverage level $\beta$, a $1 - \beta$ proportion of examples from the end of the ordering are rejected.

## A.2.1 Confidence-Based Rejection

Confidence-based rejection is performed by ordering instances in decreasing order of $\hat{f}(x)$, the maximum confidence the model has in any class for that example.

## A.2.2 Out Of Distribution Scores

The sklearn Python library (Pedregosa et al., 2011) is used to produce the One-Class SVM and Isolation Forest models. Anomaly scores are oriented such that more typical datapoints are given higher scores; instances are ranked in decreasing order. A minimal sketch of these baselines is given below.
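This sketch is our own and uses only standard scikit-learn calls; the hyperparameters of the models used in the paper are not specified here, so the defaults below are an assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
val_embed = rng.standard_normal((1000, 512))   # validation feature embeddings
test_embed = rng.standard_normal((5000, 512))  # test feature embeddings

# Fit OOD scorers on validation embeddings; higher score = more typical.
iso = IsolationForest(random_state=0).fit(val_embed)
svm = OneClassSVM().fit(val_embed)
iso_scores = iso.score_samples(test_embed)
svm_scores = svm.score_samples(test_embed)

def select_top(scores, beta):
    """Keep the beta proportion of test points with the highest scores."""
    tau = np.quantile(scores, 1.0 - beta)
    return scores >= tau

keep = select_top(iso_scores, beta=0.8)
print(keep.mean())  # approximately 0.8
```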
## A.3 In-Distribution Experiments

Our selector $g$ is a shallow fully-connected network with 2 hidden layers of dimension 128 and ReLU activations.

## A.3.1 Camelyon17

Camelyon17 (Bandi et al., 2018) is a task where the input $x$ is a 96x96 patch of a whole-slide image of a lymph node section from a patient with potentially metastatic breast cancer, the label $y$ is whether the patch contains a tumor, and the domain $d$ specifies which of 5 hospitals the patch was from. We pre-train a DenseNet-121 model on the Camelyon17 train set using the code from Koh et al. (2021).⁵ The validation set has 34,904 examples and accuracy of 91%, while the test set has 84,054 examples and accuracy of 83%. Our selector $g$ is trained with a learning rate of 0.0005, the coverage loss weight $\lambda$ is set to 32 (following Geifman & El-Yaniv (2019)), and the model is trained with 1000 samples for 1000 epochs with a batch size of 100.

²https://github.com/gpleiss/temperature_scaling
³https://github.com/p-lambda/verified_calibration
⁴https://github.com/p-lambda/verified_calibration
⁵https://github.com/p-lambda/wilds

## A.3.2 ImageNet

ImageNet is a large scale image classification dataset. We extract the features, scores, and labels from the 50,000 ImageNet validation samples using a pre-trained ResNet34 model from the torchvision library. Our selector $g$ is trained with a learning rate of 0.00001, the coverage loss weight $\lambda$ is set to 32 (following Geifman & El-Yaniv (2019)), and the model is trained with 2000 samples for 1000 epochs with a batch size of 200.

## A.4 Out-Of-Distribution Experiments

Our selector $g$ is a shallow fully-connected network (1 hidden layer with dimension 64 and ReLU activation) trained with a learning rate of 0.0001, the coverage loss weight $\lambda$ is set to 8, and the model is trained for 50 epochs (to avoid overfitting since this is an OOD setting) with a batch size of 256.

## A.4.1 RxRx1

RxRx1 (Taylor et al., 2019) is a task where the input $x$ is a 3-channel image of cells obtained by fluorescent microscopy, the label $y$ indicates which of the 1,139 genetic treatments (including no treatment) the cells received, and the domain $d$ specifies the batch in which the imaging experiment was run. The validation set has 9,854 examples and accuracy of 18%, while the test set has 34,432 examples and accuracy of 27%. 1000 samples are drawn for model training. Gaussian noise with mean 0 and standard deviation 1 is added to training examples in order to promote robustness.

## A.4.2 CIFAR-100

CIFAR-100 is a well-known image classification dataset, and we perform zero-shot image classification with CLIP. We draw 2000 samples for model training, and test on 50,000 examples drawn from the 750,000 examples in CIFAR-100-C. Data augmentation in training is performed using AugMix (Hendrycks et al., 2020) with a severity level of 3 and a mixture width of 3.

## B Additional Experiment Results

## B.1 Brier Score Results

While our focus in this work is Expected Calibration Error, for completeness we also report results with respect to Brier score. Figure 5 shows Brier score results for the experiments in Section 5.1. Selective recalibration reduces Brier score in both experiments and outperforms recalibration. The OOD selection baselines perform well, although they show increasing error as more data is rejected, illustrating their poor fit for the task. Further, Brier score results for the experiments in Section 5.2 are included in Table 2. Selective recalibration reduces error, and confidence-based rejection increases error, which is surprising since Brier score favors predictions with confidence near 1.

Figure 5: Selective Brier score on ImageNet and Camelyon17 for coverage level β ∈ {0.75, 0.8, 0.85, 0.9}. Left: Various recalibration methods are trained using labeled validation data. Middle: Selection baselines including confidence-based rejection and various OOD measures. Right: Selective recalibration with different loss functions.

Table 2: RxRx1 and CIFAR-100-C AUC in the range β = [0.5, 1.0].

| Selection | Opt. of h, g | RxRx1 | CIFAR-100-C |
|---------------|--------------|-------|-------------|
| Confidence | - | 0.169 | 0.180 |
| One-class SVM | - | 0.077 | 0.051 |
| Iso. Forest | - | 0.061 | 0.051 |
| S-MCE | sequential | 0.138 | 0.166 |
| S-MCE | joint | 0.138 | 0.166 |
| S-MMCE | sequential | 0.126 | 0.165 |
| S-MMCE | joint | 0.126 | 0.164 |
| S-TLBCE | sequential | 0.126 | 0.166 |
| S-TLBCE | joint | 0.126 | 0.165 |
| Recalibration | Temp. Scale | 0.140 | 0.164 |
| (β = 1.0) | None | 0.252 | 0.170 |

## C Theory: Additional Details And Proofs

## C.1 Details On Data Generation Model

Definition 2 (Formal version of Definition 1) *For $\theta^* \in \mathbb{R}^p$, a $(\theta^*, \sigma, \alpha, r_1, r_2)$-perturbed truncated-Gaussian model is defined as the following distribution over $(x, y) \in \mathbb{R}^p \times \{1, -1\}$:*

$$x \mid y \sim zJ_1 + (1 - z)J_2.$$

*Here, $J_1$ and $J_2$ are two truncated Gaussian distributions, i.e.,*

$$J_{1}\sim\rho_{1}\,\mathcal{N}(y\cdot\theta^{*},\sigma^{2}I)\,\mathbf{1}\{x\in\mathbb{B}(\theta^{*},r_{1})\cup\mathbb{B}(-\theta^{*},r_{1})\},$$
$$J_{2}\sim\rho_{2}\,\mathcal{N}(-y\cdot\alpha\theta^{*},\sigma^{2}I)\,\mathbf{1}\{x\in\mathbb{B}(\alpha\theta^{*},r_{2})\cup\mathbb{B}(-\alpha\theta^{*},r_{2})\},$$

*where $\rho_1, \rho_2$ are normalization coefficients to make $J_1$ and $J_2$ properly defined; $y$ follows the Bernoulli distribution $\mathbb{P}(y = 1) = \mathbb{P}(y = -1) = 1/2$; and $z$ follows a Bernoulli distribution $\mathbb{P}(z = 1) = \beta$. For simplicity, throughout this paper, we set $\rho_1 = \rho_2$, and this is always achievable by setting $r_1/r_2$ appropriately. We also set $\alpha \in (0, 1/2)$.*
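To make Definition 2 concrete, a small rejection sampler (our own illustrative code; the ball radii and other constants are arbitrary) can draw from the truncated mixture:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, alpha, beta_mix, r1, r2 = 2, 0.05, 0.4, 0.9, 0.15, 0.15
theta_star = np.array([1.0, 0.0])

def sample_truncated(n):
    """Rejection-sample (x, y) from the Definition 2 mixture: Gaussians
    centered at y*theta* (J1) or -y*alpha*theta* (J2), truncated to the
    union of the two corresponding balls."""
    out_x, out_y = [], []
    while len(out_x) < n:
        y = rng.choice([-1, 1])
        z = rng.uniform() < beta_mix
        center = (y if z else -alpha * y) * theta_star
        r = r1 if z else r2
        x = center + sigma * rng.standard_normal(p)
        # Truncation: keep only draws inside the union of the two balls.
        c = theta_star if z else alpha * theta_star
        if min(np.linalg.norm(x - c), np.linalg.norm(x + c)) <= r:
            out_x.append(x)
            out_y.append(y)
    return np.array(out_x), np.array(out_y)

x, y = sample_truncated(500)
print(x.shape, y[:10])
```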
## C.2 Details On $\hat{\theta}$

Recall that we consider the $\hat{\theta}$ that is the output of a training algorithm $\mathcal{A}(S^{tr})$ that takes the i.i.d. training data set $S^{tr} = \{(x_i^{tr}, y_i^{tr})\}_{i=1}^{m}$ as input. We imposed the following assumption on $\hat{\theta}$.

Assumption 3 *For any given $\delta \in (0, 1)$, there exists $\theta_0 \in \mathbb{R}^p$ with $\|\theta_0\| = \Theta(1)$, such that with probability at least $1 - \delta$,*

$$\|\hat{\theta}-\theta_{0}\|<\phi(\delta,m),$$

*and $\phi(\delta, m)$ goes to 0 as $m$ goes to infinity. Also, there exists a threshold $M \in \mathbb{N}^+$ such that if $m > M$, $\phi(\delta, m)$ is a decreasing function of $\delta$ and $m$. Moreover,*

$$\min\left\{\frac{\theta_{0}^{\top}\theta^{*}}{\|\theta_{0}\|^{2}},\;\theta_{0}^{\top}\theta^{*},\;\|\theta_{0}\|\right\}>0.$$

We will prove the following lemma as a cornerstone for our future proofs.

Lemma 1 *Under Assumption 3, for any $\delta \in (0, 1)$, there exist a threshold $M \in \mathbb{N}^+$ and constants $0 < I_1 < I_2$, $0 < I_3 < I_4 < \alpha^{-1}I_3$, $0 < I_5 < I_6$, such that if $m > M$, with probability at least $1 - \delta$ over the randomness of $S^{tr}$,*

$$\frac{\hat{\theta}^{\top}\theta^{*}}{\|\hat{\theta}\|^{2}}\in[I_{1},I_{2}],\quad\hat{\theta}^{\top}\theta^{*}\in[I_{3},I_{4}],\quad\|\hat{\theta}\|\in[I_{5},I_{6}].$$

Proof 1 *Under Assumption 3, we know $m \to \infty$ leads to $\hat{\theta} \to \theta_0$. In addition, for any $\delta \in (0, 1)$ there exists a threshold $M \in \mathbb{N}^+$ such that if $m > M$, $\phi(\delta, m)$ is a decreasing function of $\delta$ and $m$, which leads to*

$$\frac{\hat{\theta}^{\top}\theta^{*}}{\|\hat{\theta}\|^{2}}\in\Big[\frac{\theta_{0}^{\top}\theta^{*}}{\|\theta_{0}\|^{2}}-\varepsilon,\frac{\theta_{0}^{\top}\theta^{*}}{\|\theta_{0}\|^{2}}+\varepsilon\Big],\quad\hat{\theta}^{\top}\theta^{*}\in[\theta_{0}^{\top}\theta^{*}-\varepsilon,\theta_{0}^{\top}\theta^{*}+\varepsilon],\quad\|\hat{\theta}\|\in[\|\theta_{0}\|-\varepsilon,\|\theta_{0}\|+\varepsilon]$$

*for some small $\varepsilon > 0$ that makes the left end of the above intervals larger than 0 and $\theta_{0}^{\top}\theta^{*}+\varepsilon<\alpha^{-1}(\theta_{0}^{\top}\theta^{*}-\varepsilon)$ hold for all $r_1, r_2, \sigma, m$ as long as $m > M$. Then, we set the $I_i$'s accordingly to each value above.*

## C.2.1 Example Training $\hat{\theta}$

In this section, we provide one example to justify Assumption 3, i.e., $\hat{\theta} = \sum_{i=1}^{m} x_i^{tr} y_i^{tr} / m$, where the training set is drawn from an unperturbed Gaussian mixture, i.e., $x^{tr} \mid y^{tr} \sim \mathcal{N}(y^{tr} \cdot \theta^*, \sigma^2 I)$ and $y^{tr}$ follows a Bernoulli distribution $\mathbb{P}(y^{tr} = 1) = 1/2$. Directly following the analysis of Zhang et al. (2022), we have

$$\hat{\theta}^{\top}\theta^{*}=O_{\mathbb{P}}\Big(\frac{1}{\sqrt{m}}\Big)\|\theta^{*}\|+\|\theta^{*}\|^{2}.$$

For $\|\hat{\theta}\|^2$, notice that

$$\hat{\theta}=\theta^{*}+\epsilon_{m},$$

where $\epsilon_m \sim \mathcal{N}(0, \frac{\sigma^{2}I}{m})$. Then, we have

$$\|\hat{\theta}\|^{2}=\|\theta^{*}\|^{2}+2\epsilon_{m}^{\top}\theta^{*}+\|\epsilon_{m}\|^{2}=\|\theta^{*}\|^{2}+\frac{p}{m}+O_{\mathbb{P}}\Big(\frac{\sqrt{p}}{m}\Big)+O_{\mathbb{P}}\Big(\frac{1}{\sqrt{m}}\Big)\|\theta^{*}\|.$$

Given $p/m = O(1)$, combined with the form of classic concentration inequalities, one can verify this example satisfies Assumption 3.
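As a quick numerical sanity check of this example (our own snippet, not part of the original analysis), one can verify that $\hat{\theta}$ concentrates around $\theta^*$ as $m$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma = 10, 0.5
theta_star = np.ones(p) / np.sqrt(p)  # unit-norm direction

for m in [100, 1000, 10000]:
    y = rng.choice([-1, 1], size=m)
    x = y[:, None] * theta_star + sigma * rng.standard_normal((m, p))
    theta_hat = (x * y[:, None]).mean(axis=0)  # sum_i x_i y_i / m
    # Error should shrink on the order of sigma * sqrt(p / m).
    print(m, np.linalg.norm(theta_hat - theta_star))
```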
## C.3 Background: ECE Calculation

Recall we denote $\hat{f}(x) = \max\{\hat{p}_{-1}(x), \hat{p}_{1}(x)\}$ and denote the prediction result $\hat{y} = \hat{C}(x)$. The definition of ECE is:

$$\text{ECE}=\mathbb{E}_{\hat{f}(x)}\big|\mathbb{P}[y=\hat{y}\mid\hat{f}(x)=p]-p\big|.$$

Notice that there are two cases.

- Case 1: $\hat{f}(x) = \hat{p}_1(x)$. By reparameterization, we have

$$\big|\mathbb{P}[y=\hat{y}\mid\hat{f}(x)=p]-p\big|=\Big|\mathbb{P}[y=1\mid\hat{f}(x)=\tfrac{e^{2v}}{1+e^{2v}}]-\frac{e^{2v}}{1+e^{2v}}\Big|=\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v]-\frac{e^{2v}}{1+e^{2v}}\Big|.$$

- Case 2: $\hat{f}(x) = \hat{p}_{-1}(x)$. By reparameterization, we have

$$\big|\mathbb{P}[y=\hat{y}\mid\hat{f}(x)=p]-p\big|=\Big|\mathbb{P}[y=-1\mid\hat{f}(x)=\tfrac{1}{1+e^{2v}}]-\frac{1}{1+e^{2v}}\Big|$$
$$=\Big|\mathbb{P}[y=-1\mid\hat{\theta}^{\top}x=v]-\frac{1}{1+e^{2v}}\Big|$$
$$=\Big|1-\mathbb{P}[y=-1\mid\hat{\theta}^{\top}x=v]-\Big(1-\frac{e^{2v}}{1+e^{2v}}\Big)\Big|$$
$$=\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v]-\frac{e^{2v}}{1+e^{2v}}\Big|.$$

To summarize,

$$\text{ECE}=\mathbb{E}_{\hat{f}(x)}\big|\mathbb{P}[y=\hat{y}\mid\hat{f}(x)=p]-p\big|=\mathbb{E}_{\hat{\theta}^{\top}x}\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v]-\frac{1}{1+e^{-2v}}\Big|.$$

Temperature scaling.

$$p_{-1}^{T}(x)=\frac{1}{e^{2\hat{\theta}^{\top}x/T}+1},\quad p_{1}^{T}(x)=\frac{e^{2\hat{\theta}^{\top}x/T}}{e^{2\hat{\theta}^{\top}x/T}+1}.\tag{15}$$

Thus,

$$\text{R-ECE}=\mathbb{E}_{\hat{\theta}^{\top}x}\Big|\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=vT]-\frac{1}{1+e^{-2v}}\Big|.$$

Platt scaling.

$$p_{-1}^{w,b}(x)=\frac{1}{e^{2w\cdot\hat{\theta}^{\top}x+2b}+1},\quad p_{1}^{w,b}(x)=\frac{1}{e^{-2w\cdot\hat{\theta}^{\top}x-2b}+1},\tag{16}$$

$$\text{ECE}_{w,b}=\mathbb{E}_{\hat{\theta}^{\top}x}\Big|\mathbb{P}[y=1\mid w\cdot\hat{\theta}^{\top}x+b=v]-\frac{1}{1+e^{-2v}}\Big|.$$

ECE calculation. The distribution of $\hat{\theta}^{\top}x$ has the following properties:

- $\hat{\theta}^{\top}x \mid y=1 \sim z\rho_{1}\,\mathcal{N}(\hat{\theta}^{\top}\theta^{*},\sigma^{2}\|\hat{\theta}\|^{2})\,\mathbf{1}\{\hat{\theta}^{\top}x\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\cup\mathbb{B}(-\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\}+(1-z)\rho_{2}\,\mathcal{N}(-\alpha\cdot\hat{\theta}^{\top}\theta^{*},\sigma^{2}\|\hat{\theta}\|^{2})\,\mathbf{1}\{\hat{\theta}^{\top}x\in\mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)\cup\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)\}$;

- $\hat{\theta}^{\top}x \mid y=-1 \sim z\rho_{1}\,\mathcal{N}(-\hat{\theta}^{\top}\theta^{*},\sigma^{2}\|\hat{\theta}\|^{2})\,\mathbf{1}\{\hat{\theta}^{\top}x\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\cup\mathbb{B}(-\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\}+(1-z)\rho_{2}\,\mathcal{N}(\alpha\cdot\hat{\theta}^{\top}\theta^{*},\sigma^{2}\|\hat{\theta}\|^{2})\,\mathbf{1}\{\hat{\theta}^{\top}x\in\mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)\cup\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)\}$.

Now, we are ready to calculate ECE. For notational simplicity, we denote $\mathcal{A} = \mathbb{B}(\hat{\theta}^{\top}\theta^{*}, r_{1}\|\hat{\theta}\|) \cup \mathbb{B}(-\hat{\theta}^{\top}\theta^{*}, r_{1}\|\hat{\theta}\|)$ and $\mathcal{B} = \mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*}, r_{2}\|\hat{\theta}\|) \cup \mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*}, r_{2}\|\hat{\theta}\|)$. Meanwhile, for simplicity, we choose $r_1, r_2$ such that $\rho_1 = \rho_2 = \rho$. This is always manageable and there exist infinitely many choices; we only require $S_1 \cap S_2 = \emptyset$ for any $S_1 \neq S_2$, $S_1, S_2 \in \{\mathbb{B}(\hat{\theta}^{\top}\theta^{*}, r_{1}\|\hat{\theta}\|), \mathbb{B}(-\hat{\theta}^{\top}\theta^{*}, r_{1}\|\hat{\theta}\|), \mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*}, r_{2}\|\hat{\theta}\|), \mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*}, r_{2}\|\hat{\theta}\|)\}$. Achieving $\rho_1 = \rho_2 = \rho$ depends only on $r_1/r_2$. Moreover, there exists a threshold $\phi > 0$ such that if $r_1, r_2$ are both smaller than $\phi$ (one can choose $r_1, r_2$ as functions of $\sigma$ with appropriately chosen $\sigma$), then $\mathcal{A} \cap \mathcal{B} = \emptyset$ can be achieved.
$$\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v]=\frac{\mathbb{P}(\hat{\theta}^{\top}x=v\mid y=1)}{\mathbb{P}(\hat{\theta}^{\top}x=v\mid y=1)+\mathbb{P}(\hat{\theta}^{\top}x=v\mid y=-1)}=\frac{\mathbf{1}\{v\in\mathcal{A}\}}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}+\frac{\mathbf{1}\{v\in\mathcal{B}\}}{1+\exp\big(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}.$$

## C.4 Proof Of Theorem 1

## C.4.1 Temperature Scaling Only

A simple reparameterization leads to:

$$\text{R-ECE}(T)=\mathbb{E}_{v=\hat{\theta}^{\top}x}\Bigg|\frac{\mathbf{1}\{v\in\mathcal{A}\}}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}+\frac{\mathbf{1}\{v\in\mathcal{B}\}}{1+\exp\big(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}-\frac{1}{e^{-2v/T}+1}\Bigg|.$$

The lower bound contains two parts. We choose the threshold $\phi$ mentioned previously small enough such that $I_3 > \max\{r_1, r_2\}$. This can be achieved because $I_3$ is independent of $r_1, r_2$.

Part I. Consider $v \in \mathbb{B}(\hat{\theta}^{\top}\theta^{*}, r_1\|\hat{\theta}\|) \subset \mathcal{A}$, recalling that $\mathcal{A} \cap \mathcal{B} = \emptyset$. Let us choose a threshold $\min\{I_1, I_3\}/\sigma^2 > \tau > 0$. Then any $T > 0$ must fall into one of the following three cases.

Case 1: $T^{-1}$ and $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$ are far apart, with $T^{-1}-\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})>\tau$. Recalling $v = \hat{\theta}^{\top}x$, then

$$\mathbb{E}_{x\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\Bigg|\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}-\frac{1}{1+e^{-2v/T}}\Bigg|$$
$$\geq\Bigg|\frac{1}{1+\exp\big(-2T^{-1}(\hat{\theta}^{\top}\theta^{*}-r_{1})\big)}-\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}(\hat{\theta}^{\top}\theta^{*}+r_{1})\big)}\Bigg|\,\mathbb{P}\big(x\in\mathbb{B}(\theta^{*},r_{1})\big)$$
$$\geq\frac{\beta}{2}\Bigg|\frac{1}{1+\exp\big(-2(\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})+\tau)(\hat{\theta}^{\top}\theta^{*}-r_{1})\big)}-\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}(\hat{\theta}^{\top}\theta^{*}+r_{1})\big)}\Bigg|$$
$$\geq\frac{\beta}{2}\min_{c\in[I_{1},I_{2}],\,d\in[I_{3},I_{4}]}\Bigg|\frac{1}{1+\exp\big(-2(c/\sigma^{2}+\tau)(d-r_{1})\big)}-\frac{1}{1+\exp\big(-2c/\sigma^{2}(d+r_{1})\big)}\Bigg|:=\beta_{1}.$$

Case 2: $T^{-1}$ and $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$ are far apart, with $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})-T^{-1}>\tau$. Then

$$\mathbb{E}_{x\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\Bigg|\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}-\frac{1}{1+e^{-2v/T}}\Bigg|$$
$$\geq\Bigg|\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}(\hat{\theta}^{\top}\theta^{*}-r_{1})\big)}-\frac{1}{1+\exp\big(-2T^{-1}(\hat{\theta}^{\top}\theta^{*}+r_{1})\big)}\Bigg|\cdot\frac{\beta}{2}$$
$$\geq\Bigg|\frac{1}{1+\exp\big(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}(\hat{\theta}^{\top}\theta^{*}-r_{1})\big)}-\frac{1}{1+\exp\big(-2(\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})-\tau)(\hat{\theta}^{\top}\theta^{*}+r_{1})\big)}\Bigg|\cdot\frac{\beta}{2}$$
$$\geq\frac{\beta}{2}\min_{c\in[I_{1},I_{2}],\,d\in[I_{3},I_{4}]}\Bigg|\frac{1}{1+\exp\big(-2c/\sigma^{2}(d-r_{1})\big)}-\frac{1}{1+\exp\big(-2(c/\sigma^{2}-\tau)(d+r_{1})\big)}\Bigg|:=\beta_{2}.$$

Case 3: $T^{-1}$ and $\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})$ are close, i.e., $|T^{-1}-\hat{\theta}^{\top}\theta^{*}/(\sigma^{2}\|\hat{\theta}\|^{2})|\leq\tau$. Then consider $v \in \mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*}, r_2\|\hat{\theta}\|) \subset \mathcal{B}$. For small enough $\tau$ satisfying $\tau \leq 0.2(1-\alpha)I_1/\sigma^2$,

$$\mathbb{E}_{v=\hat{\theta}^{\top}x\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\Bigg|\frac{1}{1+\exp\big(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\big)}-\frac{1}{1+e^{-2v/T}}\Bigg|$$
$$\geq\min_{a\in[\frac{-\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}},\,\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}+\tau]}\;\min_{v\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\frac{2v\exp(2va)}{(1+\exp(2av))^{2}}\Bigg(\frac{2(1-\alpha)\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}-\tau\Bigg)\frac{1-\beta}{2}$$
$$\geq\min_{a\in[-\frac{\alpha I_{2}}{\sigma^{2}},\,\frac{I_{2}}{\sigma^{2}}+\tau]}\;\min_{v\in[\alpha I_{3}-r_{2}I_{6},\,\alpha I_{4}+r_{2}I_{6}]}\frac{2v\exp(2va)}{(1+\exp(2av))^{2}}\cdot1.8(1-\alpha)\frac{I_{1}}{\sigma^{2}}\cdot\frac{1-\beta}{2}:=\beta_{3}.$$

Combining the three cases together, we have R-ECE $\geq \min\{\beta_1, \beta_2, \beta_3\}$. Finally, we take $r_1 \leq \min\{0.1, \tau\sigma^2/I_1\}I_3$, which ensures $\beta_i > 0$ for all $i = 1, 2, 3$.

## C.4.2 Selective Calibration Only

We require $\mathbb{E}[g(x)] \geq \beta$. Let us first define $\mathcal{G} = \{\hat{\theta}^{\top}x : g(x) = 1\}$.
Then, for any $g$, we have

$$\begin{aligned}
\mathbb{P}[y=1\mid\hat{\theta}^{\top}x=v,\,v\in\mathcal{G}]&=\frac{\mathbb{P}(\hat{\theta}^{\top}x=v,\,v\in\mathcal{G}\mid y=1)}{\mathbb{P}(\hat{\theta}^{\top}x=v,\,v\in\mathcal{G}\mid y=1)+\mathbb{P}(\hat{\theta}^{\top}x=v,\,v\in\mathcal{G}\mid y=-1)}\\
&=\left(\frac{\mathbf{1}\{v\in\mathcal{A}\}}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}+\frac{\mathbf{1}\{v\in\mathcal{B}\}}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}\right)\mathbf{1}\{v\in\mathcal{G}\}\\
&=\frac{\mathbf{1}\{v\in\mathcal{A}\cap\mathcal{G}\}}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}+\frac{\mathbf{1}\{v\in\mathcal{B}\cap\mathcal{G}\}}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}.
\end{aligned}$$

Then, by choosing $\sigma$ small enough such that $I_{1}/\sigma^{2}>1$, the corresponding ECE is

$$\begin{aligned}
\mathrm{S\text{-}ECE}&=\mathbb{E}_{v=\hat{\theta}^{\top}x\mid\hat{\theta}^{\top}x\in\mathcal{G}}\left|\frac{\mathbf{1}\{v\in\mathcal{A}\cap\mathcal{G}\}}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}+\frac{\mathbf{1}\{v\in\mathcal{B}\cap\mathcal{G}\}}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|\\
&\geq\mathbb{E}\left[\left|\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|\mathbf{1}\{v\in\mathcal{A}\cap\mathcal{G}\}\right]+\mathbb{E}\left[\left|\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|\mathbf{1}\{v\in\mathcal{B}\cap\mathcal{G}\}\right]\\
&\geq\lambda_{1}\,\mathbb{P}(v\in\mathcal{A}\cap\mathcal{G}\mid v\in\mathcal{G})+\lambda_{2}\,\mathbb{P}(v\in\mathcal{B}\cap\mathcal{G}\mid v\in\mathcal{G}).
\end{aligned}$$

Since $\mathbb{P}(v\in\mathcal{A}\cap\mathcal{G}\mid v\in\mathcal{G})+\mathbb{P}(v\in\mathcal{B}\cap\mathcal{G}\mid v\in\mathcal{G})=1$, it is not hard to verify that

$$\mathrm{S\text{-}ECE}\geq\min\{\lambda_{1},\lambda_{2}\},$$

where

$$\lambda_{1}=\min_{a\in[1,\frac{I_{2}}{\sigma^{2}}]}\min_{v\in\mathcal{A}}\frac{2v\exp(2va)}{(1+\exp(2av))^{2}}\left|\frac{I_{1}}{\sigma^{2}}-1\right|,\qquad\lambda_{2}=\min_{a\in[-\frac{\alpha I_{2}}{\sigma^{2}},1]}\min_{v\in\mathcal{B}}\frac{2v\exp(2va)}{(1+\exp(2av))^{2}}\left|\frac{\alpha I_{1}}{\sigma^{2}}-1\right|.$$

## C.4.3 Selective Re-Calibration

We choose $\mathcal{G}=\mathcal{B}$ and set $T^{-1}=\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}$; then SR-ECE $=0$. Thus, there exists an appropriate choice of $g$ and $T$ such that SR-ECE$(g,T)=0$.

## C.5 Proof Of Theorem 2

Usually, $\beta$ is much larger than $1-\beta$; for example, $\beta=90\%$. In this section, we impose the following assumption.

**Assumption 4** *The selector* $g$ *will retain most of the probability mass in the sense that*

$$\beta>2(1-\beta).$$

Let us denote $\xi=\beta/2-(1-\beta)$, which is a positive constant. First, we have the following claim.

**Claim 5** *Under Assumption 4, if we further have*

$$\min_{v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|>\max_{v\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|,$$

*then for* $g_{1}=\arg\min_{g:\,\mathbb{P}[g(x)=1]\geq\beta}$ S-ECE*, we have* $\mathbb{P}(x\in\mathbb{B}(-\alpha\theta^{*},r_{2}),g_{1}(x)=1)=\mathbb{P}(x\in\mathbb{B}(-\alpha\theta^{*},r_{2}))$.

**Proof 2** The proof is straightforward. We denote $O=\{x:x\in\mathbb{B}(-\alpha\theta^{*},r_{2}),g_{1}(x)=0\}$ and will prove that $\mathbb{P}(x\in O)=0$. Suppose not, and denote $P=\mathbb{P}(x\in O)>0$. Since $\beta>2(1-\beta)$, even if we "throw away" all of the probability mass $1-\beta$ by only setting points in $\mathbb{B}(\theta^{*},r_{1})$ to have $g$ value equal to 0, there will still be remaining probability mass in $\mathbb{B}(\theta^{*},r_{1})$ with $g$ value equal to 1. Then, there exists $g_{2}$ such that $g_{2}(x)=1$ for all $x\in\mathbb{B}(-\alpha\theta^{*},r_{2})$, which leads to $\mathbb{P}(x\in\mathbb{B}(-\alpha\theta^{*},r_{2}),g_{2}(x)=1)=\mathbb{P}(x\in\mathbb{B}(-\alpha\theta^{*},r_{2}),g_{1}(x)=1)+\xi$ (enabled by the fact that $\beta>2(1-\beta)$) and $\mathbb{P}(g_{1}(x)=1)=\mathbb{P}(g_{2}(x)=1)$ for $x\in\mathbb{B}(\theta^{*},r_{1})\cup\mathbb{B}(-\alpha\theta^{*},r_{2})$.
Since

$$\min_{v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|>\max_{v\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|,$$

"throwing away" points in $\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)$ lowers the calibration error more effectively than throwing away points in $\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)$, and we must have S-ECE$(g_{2})<$ S-ECE$(g_{1})$, contradicting the optimality of $g_{1}$.

Next, we state how to set the parameters such that the condition in Claim 5 holds. As long as we choose $\sigma,r_{1},r_{2}$ small enough such that

$$\frac{1}{1+\exp(-2I_{1}/\sigma^{2}(I_{4}+r_{1}I_{6}))}-\frac{1}{1+\exp(-2I_{1}(I_{3}-r_{1}I_{6}))}<\frac{1}{1+\exp(-2(I_{4}+r_{2}I_{6}))}-\frac{1}{1+\exp(2\alpha I_{2}/\sigma^{2}(I_{4}+r_{2}I_{6}))},$$

then

$$\max_{v\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|<\min_{v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left|\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v}+1}\right|,$$

i.e., the condition in Claim 5 holds. Then, following a derivation similar to Section C.4.1, we can prove that with suitably chosen parameters $r_{1},r_{2},\sigma$, we have $\mathrm{ECE}_{S\to T}>0$.

Lastly, let us further prove $\mathrm{ECE}_{T\to S}>0$. We choose $r_{1}$ and $r_{2}$ small enough such that $v>0$ for all $v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\cup\mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)$ and $v<0$ for all $v\in\mathbb{B}(-\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)\cup\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)$. For $1/T\in\left[\frac{-\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}},\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\right]$, we can write the R-ECE as:

$$\begin{aligned}
\mathrm{R\text{-}ECE}&=\mathbb{E}_{v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left[\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v/T}+1}\right]+\mathbb{E}_{v\in\mathbb{B}(-\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left[-\frac{1}{1+\exp\left(\frac{-2\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}+\frac{1}{e^{-2v/T}+1}\right]\\
&\quad+\mathbb{E}_{v\in\mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left[-\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}+\frac{1}{e^{-2v/T}+1}\right]+\mathbb{E}_{v\in\mathbb{B}(-\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left[\frac{1}{1+\exp\left(\frac{2\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\cdot v\right)}-\frac{1}{e^{-2v/T}+1}\right].
\end{aligned}$$

Next, we take the derivative with respect to $x=1/T$ for $x\in\left[\frac{-\alpha\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}},\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}\right]$, which leads to

$$\frac{d\,\mathrm{R\text{-}ECE}}{dx}=-2\mathbb{E}_{v\in\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)}\left[\frac{2ve^{2vx}}{(e^{2vx}+1)^{2}}\right]+2\mathbb{E}_{v\in\mathbb{B}(\alpha\hat{\theta}^{\top}\theta^{*},r_{2}\|\hat{\theta}\|)}\left[\frac{2ve^{2vx}}{(e^{2vx}+1)^{2}}\right].$$

Consider the two values

$$\frac{2ve^{2vx}}{(e^{2vx}+1)^{2}},\qquad\frac{2\alpha ve^{2\alpha vx}}{(e^{2\alpha vx}+1)^{2}};$$

their ratio satisfies

$$\frac{2\alpha ve^{2\alpha vx}}{(e^{2\alpha vx}+1)^{2}}\Big/\frac{2ve^{2vx}}{(e^{2vx}+1)^{2}}\;\longrightarrow\;\alpha\quad\text{as }v\to0.$$

That means that if we take suitably small $r_{1},r_{2}$ and let $\sigma\in[c_{1},c_{2}]$ with appropriately chosen $c_{1},c_{2}$, then

$$\left.\frac{d\,\mathrm{R\text{-}ECE}}{dx}\right|_{x=\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}}<0.$$

Thus, the best choice of $1/T$ cannot be equal to $\frac{\hat{\theta}^{\top}\theta^{*}}{\sigma^{2}\|\hat{\theta}\|^{2}}$. Then, notice that $\beta>2(1-\beta)$, which means the probability mass in $\mathbb{B}(\hat{\theta}^{\top}\theta^{*},r_{1}\|\hat{\theta}\|)$ cannot all be "thrown away"; following a derivation similar to Section C.4.1, we can prove that with suitably chosen parameters $r_{1},r_{2},\sigma$, we have $\mathrm{ECE}_{T\to S}>0$.
Review 1:

Summary: This paper proposes a new framework, called “selective recalibration”, to improve the calibration of a pre-trained classifier. Post-hoc recalibration and selection approaches have been developed separately for this purpose, but the authors show that combining these two approaches offers a unique advantage and yields further improvements. The authors demonstrate this advantage theoretically, under simple assumptions, and also provide empirical evidence through extensive experiments on popular benchmarks.

Strengths and Weaknesses:

### Pros

1. **Clarity**. Overall, the writing is clear and easy to follow. In addition, the organization of the main draft is well-established.
2. **Well-motivated problem and principled method**. Improving the reliability of pre-trained classifiers is an important problem for many practical scenarios. In addition, while the proposed method is simple and can be viewed as a naive combination of two existing approaches, the motivation is clear and is supported by theoretical justification.

### Cons

1. **Limited empirical improvement**. The improvement from the proposed method is not significant compared to a simple baseline (”Selection with Confidence”), even though it requires an additional training process. For example, this baseline outperforms all the variants of the proposed method on the ImageNet dataset and shows comparable performance on the Camelyon17 dataset. While the proposed method outperforms this baseline under distribution shift (Section 5.2), this baseline still significantly improves the calibration of the pre-trained model (e.g., ECE-1 on RxRx1: 0.308 (None) > Confidence (0.071) > Proposed (0.036)). The advantage of this simple baseline is more noticeable given that the effectiveness of the proposed framework varies depending on the configurations and tested datasets.
2. **Limited justification of current design choices**. Currently, there are some fixed design choices, such as the choice of feature extractor and the architecture of the selector g. However, these choices are not yet justified; for example, it is unclear whether the proposed framework generalizes across different feature extractors on the same dataset.

### Editorial comments

1. Typo - ECE_q in Eq.1, while ECE-q in Section 5 and Tables.
2. To help the reader's understanding, it would be better to provide more details about each dataset in the main text, including the statistics, the backbone used, and figures of examples.

Requested Changes: More empirical demonstrations for the current design choices and various feature extractors.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper studies the problem of calibrating the predictions of a pre-trained model. To do so, the authors employ simultaneous selection and recalibration. While this combination on its own is not novel, the authors propose to jointly optimize the parameters of the selection and recalibration models rather than sequentially optimizing them, as has been done in prior works. Methodologically, this modification is made quite simply by pushing the recalibrator model inside the cross-entropy term. The authors validate this intuitive modification with a selection of experiments on RxRx1 and CIFAR-100-C and show strong results indicating that their method is strictly better than prior approaches with respect to ECE.
However, in the current presentation of the paper, it is a little less clear exactly how the accuracy compares between the methods. This will be a necessary addition to fully recommend acceptance (more on this below). The authors finish the paper by providing theoretical justification for their method in a toy setting. While I feel this gives a good intuition for why the approach works and may be useful to readers, I do not see it as a strong contribution on its own, more of an aspect that strengthens the paper's presentation.

Strengths and Weaknesses: The paper and setting are well presented. I am not fully up to date on the latest research in the area, but from a cursory search and reading it seems the authors have covered the important related papers. The methodological improvement is straightforward and easy to understand, and the results that are generated are solid. On the basis of these strengths I am inclined to vote to accept this work.

One critical missing piece of this work is the final accuracy of the models reported in Table 1 and Table 2. It does not seem necessary that the accuracy of the model would drop when the proposed method is applied, but having the concrete accuracy numbers would allow readers to better understand what trade-off exists, if any. I feel this is my strongest suggestion.

There are some minor presentation issues that ought to be fixed as well. Before equation (1) it is not made clear what $\hat{f}$ is precisely. In Equation (6) the authors are missing indices, which makes the equation improperly defined.

Requested Changes: Addition of accuracy statistics in Table 1 and Table 2. Adjustments to the preamble of equation (1) and a re-write of equation (6).

Broader Impact Concerns: I do not have broader impact concerns about this work.

==================================================

Review 3:

Summary: This paper uses several concepts which I will begin by briefly summarizing. Recalibration refers to techniques which modify the output of a classifier to ensure it is properly calibrated. Selective classification aims to classify only points where the classifier is confident enough (while obeying some coverage constraints) so as to increase classification accuracy; this is achieved by training a selection model which, given an input, determines whether it should be classified or not. The recently proposed selective calibration uses a selection model for calibration purposes, i.e., to ensure that the classifier is properly calibrated on the selected points. This paper proposes selective recalibration, which combines recalibration with selective calibration. In other words, the authors simultaneously use a selection model along with a small head which changes the output of a classifier to improve its calibration on selected datapoints. The authors propose a sensible objective to jointly train the recalibration and the selection networks given a pretrained classifier. The authors also argue that this simultaneous optimization is desirable by constructing an example where perfect calibration is unachievable when using only recalibration, only selective calibration, or even selective recalibration where the two networks are trained sequentially; yet perfect calibration is achievable when doing simultaneous optimization in selective recalibration. Overall this is a good submission which is clearly written, well motivated, and which has good empirical results.

Strengths and Weaknesses:

**Strengths**

1. The paper is well written and easy to follow.
2. The problem addressed in the paper is relevant to the community.
3. The proposed approach is simple, effective, works well in practice, and is convincingly motivated.

**Weaknesses**

4. Although the paper is mostly clear, I did find Section 4.1 to be a bit ambiguous. For starters, eq 6 is stated without defining that $(x, y)$ is the batch. Then, the losses $l(f, \hat{g}, h, x, y)$ are not written in a way that makes it clear to the reader what is fixed and what is being optimized; e.g., I think $l(\hat{g}, h; f, x, y)$ would be better notation. More importantly, I do not believe that $L_{set}$ as defined in eq 7, along with the choice of $l$ in eq 9, recovers the loss used by Fisch et al., as claimed in the current manuscript; both because the scaling changes to $\sum_{i,j}\hat{g}(x_i)\hat{g}(x_j)$ and because $q$ is not used in the numerator in eq 6 (compare to eq 13 in Fisch et al.). Also, should the equation be multiplied by $1/n^2$ instead of $1/n$? This makes it ambiguous whether, in the experiments, the authors compare against the method as described in Fisch et al., or against the slightly different version they describe in Section 4.1.2.
5. I do not believe that eq 10 provides an unbiased estimate of $(\beta - \mathbb{E}[\hat{g}(x)])^2$. I am aware that at test time you set the threshold so as to ensure coverage, but I still believe this should be discussed.
6. While the analysis in Section 6 is very interesting and convincingly motivates the need for joint training, the classifier used remains very simple. It would be good to discuss what might change in the setting where a neural network classifier is used instead. To be clear, I'm not asking for theoretical results in that setting, just a discussion.
7. This paper mostly focuses on calibration, which can be understood as attempting to properly quantify aleatoric uncertainty. I think it would be appropriate to discuss methods that also aim to quantify epistemic uncertainty in the related work section, e.g., Bayesian neural networks, deep ensembles, and conformal prediction.
8. The organization of the paper, where the theoretical analysis in Section 6 comes after the experiments in Section 5, is bizarre.
9. Finally, a list of typos:
- First paragraph: "1% chance" -> "a 1% chance"
- Last paragraph of Section 2: cite Kumar et al. 2018 using \citet
- Equations are sometimes missing punctuation, e.g., a missing period at the end of eq 2 and a missing comma at the end of eq 4
- The sentence "Intuitively, to optimize..." after eq 4 is rather unclear, please rephrase
- eq 12: missing s subindex for f
- Missing indicator in $\hat{g}(x) \geq \tau$ in Section 4.4.1
- Top of page 7: "ECE-1 and ECE-2" -> ECE_1 and ECE_2 for notational consistency
- Definition 1, while understandable, should use more formal notation: $x|y$ as written is $x|y,z$, and the dependence on $y$ should be made clear in the notation rather than in the paragraph following the definition
- Please double-check that you cite the correct version of papers. I noticed you cite Fisch et al. (2022) as an arXiv preprint, but I believe that's a TMLR paper. I did not go over all the citations, but please check them.

Requested Changes: Please see the weaknesses section of my review.

Broader Impact Concerns: I have no broader impact concerns.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper addresses the problem of calibrating the predictions of a pre-trained classifier.
The authors introduce a new technique, selective recalibration, which combines recalibration with selective calibration by jointly optimizing the parameters of both approaches. Several methods for this joint optimization are proposed, and they are easy to implement. The authors demonstrate its effectiveness through theoretical justification in a toy setting and empirical evidence on two benchmarks. All three reviewers acknowledged the clarity of this paper, the significance of the problem, and the soundness of the proposed method. During the revision process, the authors successfully addressed the reviewers’ comments; consequently, all of the reviewers now agree on acceptance. Therefore, the AE recommends a clear acceptance of this paper.

==================================================
# Can AI-Generated Text Be Reliably Detected?

Anonymous authors

Paper under double-blind review

## Abstract

The rapid progress of Large Language Models (LLMs) has made them capable of performing astonishingly well on various tasks, including document completion and question answering. The unregulated use of these models, however, can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI-generated text can be critical to ensure the responsible use of LLMs. Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques that imprint specific patterns onto them. In this paper, we show that these detectors are not reliable in several practical scenarios. In particular, we develop a *recursive paraphrasing* attack to apply to AI text, which can break a whole range of detectors, including those using watermarking schemes as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors. Our experiments include passages around 300 tokens in length, showing the sensitivity of the detectors even in the case of relatively long passages. We also observe that our *recursive paraphrasing* only degrades text quality slightly, as measured via human studies and metrics such as perplexity scores and accuracy on text benchmarks. Additionally, we show that even LLMs protected by watermarking schemes can be vulnerable to spoofing attacks aimed at misleading detectors into classifying human-written text as AI-generated, potentially causing reputational damage to the developers. In particular, we show that an adversary can infer hidden AI text signatures of the LLM outputs without having white-box access to the detection method. Finally, we provide a theoretical connection between the AUROC of the best possible detector and the Total Variation distance between human and AI text distributions that can be used to study the fundamental hardness of the reliable detection problem for advanced language models.

## 1 Introduction

Artificial Intelligence (AI) has made tremendous advances in recent years, from generative models in computer vision (Rombach et al., 2022; Saharia et al., 2022) to generative models in natural language processing (NLP) (Brown et al., 2020; Zhang et al., 2022; Raffel et al., 2019). Large Language Models (LLMs) can now generate text of remarkable quality with potential uses in many applications. For example, the recent model ChatGPT (OpenAI, 2022) can generate human-like texts for various tasks such as writing code for computer programs, lyrics for songs, completing documents, and question answering; its applications are endless. The trend in NLP shows that these LLMs will only get better with time. However, this comes with a significant challenge in terms of authenticity and regulation. AI tools have the potential to be misused by users for unethical purposes such as plagiarism, generating fake news, spamming, generating fake product reviews, and manipulating web content for social engineering in ways that can have negative impacts on society (Adelani et al., 2020; Weiss, 2019). Some news articles rewritten by AI have been found to contain fundamental errors (Christian, 2023). Hence, there is a need to ensure the responsible use of these generative AI tools. To aid this, much recent research focuses on detecting AI-generated texts.
Several detection works study this problem as a binary classification problem (OpenAI, 2019; Jawahar et al., 2020; Mitchell et al., 2023; Bakhtin et al., 2019; Fagni et al., 2020) and use **neural network-based detectors**. For example, OpenAI fine-tunes RoBERTa-based (Liu et al., 2019) GPT-2 detector models to distinguish between non-AI-generated and GPT-2-generated texts (OpenAI, 2019). This requires such a detector to be fine-tuned with supervision on each new LLM for reliable detection.

Figure 1: An illustration of vulnerabilities of existing AI-text detectors. We consider both watermarking-based and non-watermarking-based detectors and show that they are not reliable in practical scenarios. Colored arrow paths show the potential pipelines for adversaries to avoid detection. In red: an attacker can use a paraphraser to remove the LLM signatures from an AI-generated text to avoid detection. In blue: an adversary can query the watermarked LLM multiple times to learn its watermarking scheme. This information can be used to spoof the watermark detector.

Another stream of work focuses on zero-shot AI text detection without any additional training overhead (Solaiman et al., 2019; Ippolito et al., 2019; Gehrmann et al., 2019). These works evaluate the expected per-token log probability of texts and perform thresholding to detect AI-generated texts. Mitchell et al. (2023) observe that AI-generated passages tend to lie in regions of negative curvature of the log probability of texts. They propose DetectGPT, a zero-shot LLM text detection method, to leverage this observation. Since these approaches rely on a neural network for their detection, they can be vulnerable to adversarial and poisoning attacks (Goodfellow et al., 2014; Sadasivan et al., 2023; Kumar et al., 2022; Wang et al., 2022). Another line of work aims to **watermark** AI-generated texts to ease their detection (Atallah et al., 2001; Wilson et al., 2014; Kirchenbauer et al., 2023a; Zhao et al., 2023). Watermarking eases the detection of LLM output text by imprinting specific patterns on it. Soft watermarking proposed in Kirchenbauer et al. (2023a) partitions tokens into "green" and "red" lists to help create these patterns. A watermarked LLM samples a token, with high probability, from the green list determined by a pseudo-random generator seeded by its prefix token. The watermarking detector would classify a passage with a large number of tokens from the green list as AI-generated. These watermarks are often imperceptible to humans. Krishna et al. (2023) introduce an information retrieval-based detector that stores the outputs of the LLM in a database. For a candidate passage, their algorithm searches this database for semantically similar matches for detection. However, storing user-LLM conversations might cause serious privacy concerns.

In this paper, through several experiments, we show that these state-of-the-art AI-text detectors are unreliable in several practical scenarios (Wolff, 2020; Aaronson, 2022; Liang et al., 2023; Pu et al., 2023; Wang et al., 2023). In §2, we develop a *recursive paraphrasing attack* that uses neural network-based paraphrasing to recursively paraphrase the source LLM's output text. Our experiments show that this automated paraphrasing attack can drastically reduce the accuracy of various detectors, including those using soft watermarking (Kirchenbauer et al., 2023a), and increase the *type-II error* (detecting AI text as human text).
**For instance, our recursive paraphrasing attack on watermarked texts, even over relatively long passages of 300 tokens in length, can drop the detection rate (true positive rate at 1% false positive rate, or TPR@1%FPR) from 99.3% to 9.7% with a degradation of only 2.2 in perplexity score.** We note that Kirchenbauer et al. (2023a) consider a relatively weak paraphrasing attack in their experiments, where they perform span replacement by replacing random tokens (in-place) using an LLM. Our experiments, however, show the vulnerability of the watermarking scheme against the stronger paraphrasing attacks that we use. After paraphrasing, the area under the receiver operating characteristic (AUROC) curve of zero-shot detectors (Mitchell et al., 2023) drops from 96.5% to 25.2%. We also observe that the performance of neural network-based trained detectors (OpenAI, 2019) deteriorates significantly after our paraphrasing attack. For instance, the TPR@1%FPR of the RoBERTa-Large-Detector from OpenAI drops from 100% to 60% after paraphrasing. In addition, we show that the retrieval-based detector by Krishna et al. (2023), designed to evade paraphrase attacks, is vulnerable to our recursive paraphrasing: the accuracy of their detector falls from 100% to below 60% under this attack. We also observe that the quality of the paraphrased passages degrades, but only slightly, compared to the original ones. We quantify this via MTurk human evaluation studies and metrics such as perplexity and text benchmark accuracy. In particular, **our human evaluation study shows that 77% of the recursively paraphrased passages are rated high quality in terms of content preservation, and 89% of them are rated high quality in terms of grammar or text quality.** We also show that our recursive paraphrasing, when applied to a text benchmark such as a question-answering dataset, does not affect the performance, providing additional evidence that recursive paraphrasing does not hurt the content of the original text.

Moreover, we show the possibility of **spoofing attacks** on various AI text detectors in §3. In this setting, an attacker generates a non-AI text that is detected to be AI-generated, thus increasing the *type-I error* (falsely detecting human text as AI text). An adversary can potentially launch spoofing attacks to produce derogatory texts that are detected to be AI-generated in order to affect the reputation of the target LLM's developers. In particular, we show that an adversary can infer hidden AI text signatures without having white-box access to the detection method. For example, though the pseudo-random generator used for generating watermarked text is private, we develop an attack that adaptively queries the target LLM multiple times to learn its watermarking scheme. An *adversarial human* can then use this information to compose texts that are detected to be watermarked. Figure 1 shows an illustration of some of the vulnerabilities of the existing AI-text detectors.

Finally, in §4, we present a theoretical result regarding the hardness of AI-text detection. Our main result in Theorem 1 states that the AUROC of the best possible detector differentiating two distributions H (e.g., human text) and M (e.g., AI-generated text) reduces as the total variation distance TV(M, H) between them decreases. Note that this result is true for any two arbitrary distributions H and M.
For example, H could be the text distribution for a person or group, and M could be the output text distribution of a general LLM or an LLM trained by an adversary to mimic the text of a particular set of people. Essentially, adversaries can train LLMs to mimic human text as they get more sophisticated, potentially reducing the TV distance between human and AI text and leading to an increasingly more difficult detection problem according to our Theorem 1. Although estimating the exact TV between text distributions from a finite set of samples is a challenging problem, we provide some empirical evidence, over simulated data or via TV estimations, showing that more advanced LLMs can potentially lead to smaller TV distances. Thus, **our Theorem 1 would indicate an increasingly more difficult reliable detection problem in such cases.** Our theory also indicates that if a detector becomes more robust to type-I errors, type-II errors will increase, revealing a fundamental tradeoff between type-I and type-II errors for the AI text detection problem. Similar tradeoffs have been explored in other domains as well. For example, Khajavi & Kuh (2016) study the relationship between detection performance and KL divergence between input distributions in the context of covariance selection. Thapliyal & Hwang (2022) show that undetectable cyberattacks can be generated by mimicking the input-output data distribution of network control systems. Although not a surprising result, Theorem 1 is, to our knowledge, the first to link this tradeoff to the detection of AI-generated content.

Identifying AI-generated text is a critical problem to avoid its misuse by users for unethical purposes such as plagiarism, generating fake news, and spamming. However, deploying vulnerable detectors may not be the right solution to tackle this issue, since it can cause its own damage, such as falsely accusing a human of plagiarism. Our results highlight the sensitivities of a wide range of detectors to both evasion and spoofing attacks and indicate the difficulty of developing reliable detectors in several practical scenarios: to maintain reliable detection performance, LLMs would have to trade off their performance. We hope that these findings can help the ethical and dependable utilization of AI-generated text. In summary, we make the following contributions in this work.

- Our work is the *first to comprehensively analyze* the performance of four different classes of detectors, including watermarking-based, neural network-based, zero-shot, and retrieval-based detectors, and reveal their reliability issues (in §2). In particular, the *recursive paraphrasing attack* that we develop is the first method that can break watermarking (Kirchenbauer et al., 2023a) and retrieval-based (Krishna et al., 2023) detectors with only a small degradation in text quality.
- Our work is the first to show that existing detectors are vulnerable to *spoofing attacks*, in which an adversarial human aims to write a (potentially derogatory) passage falsely detected as AI-generated without having white-box access to the detection methods (in §3). For instance, as proof of concept, we show that an adversary can infer the watermarking signatures by probing the watermarked LLM and analyzing the statistics of the generated tokens.
- Our work is the first to establish a theoretical connection between the AUROC of the best possible detector and the TV distance between human and AI-text distributions that can be used to study the hardness of the reliable text detection problem (in §4). Our theory also reveals a fundamental tradeoff between type-I and type-II errors for the AI text detection problem.

## 2 Evading AI-Detectors Using Paraphrasing Attacks

In this section, we first present the experimental setup for our paraphrasing attacks in §2.1. We then provide experiments in §2.2 showing that our novel recursive paraphrasing attacks lead to only a slight degradation in text quality. §2.3 and §2.4 show the effect of the paraphrasing attacks on watermarking and non-watermarking detectors, respectively.

For evasion attacks, we consider a scenario where an adversary takes an AI response generated by a target model A and then modifies it in an automated and scalable fashion to evade detection. In this work, we propose that the adversary modify the AI text from A using an AI paraphraser B to evade detection. Note that the adversary might be incentivized to use a detectable model A if, say, A is powerful or has been fine-tuned for specific applications. In these cases where A could answer a user prompt better, an adversary would prefer to use A to generate a response and then use a less detectable AI model B for paraphrasing to evade detection. We quantify the text quality using automated metrics such as perplexity as well as human studies. As shown in §2.2, our evasion attacks lead to only a slight degradation in text quality while successfully evading detectors most of the time. Note that the amount of degradation that can be tolerated is application-specific; for example, an adversary could tolerate more quality degradation when generating a social media post than when generating a news article.

## 2.1 Setup And Paraphrasing Methods

We use the "document" feature of the XSum dataset (Narayan et al., 2018), containing 1000 long news articles (∼300 tokens in length), for our experiments. In Appendix A, we perform experiments with additional datasets: a medical text dataset, PubMedQA (Jin et al., 2019), and a dataset with articles from 10 different domains, Kafkai (Kafkai, 2020). As target LLMs, we use the OPT-1.3B and OPT-13B (Zhang et al., 2022) language models with 1.3B and 13B parameters, respectively. In Appendix A, we also evaluate our attacks with GPT-2 Medium (Radford et al., 2019) as the target model. We use three different neural network-based paraphrasers: DIPPER with 11B parameters (Krishna et al., 2023), LLaMA-2-7B-Chat with 7B parameters (Touvron et al., 2023), and a T5-based paraphraser (Damodaran, 2021) with 222M parameters.

Figure 2: Recursive paraphrasing.
| | pp_i | i=1 | i=2 | i=3 | i=4 | i=5 | All pp_i |
|---|---|---|---|---|---|---|---|
| Content preservation | Avg. rating | 4.0 ± 0.8 | 4.1 ± 0.8 | 3.9 ± 0.9 | 4.2 ± 0.9 | 3.7 ± 1.1 | 4.0 ± 0.9 |
| Content preservation | Ratings 5&4 | 70.2% | 77.2% | 63.2% | 80.0% | 61.4% | 70.4% |
| Grammar or text quality | Avg. rating | 4.28 ± 0.67 | 4.12 ± 0.50 | 4.12 ± 0.53 | 4.11 ± 0.64 | 4.07 ± 0.53 | 4.14 ± 0.58 |
| Grammar or text quality | Ratings 5&4 | 87.72% | 92.98% | 91.23% | 84.21% | 89.47% | 89.12% |

Table 1: Summary of the MTurk human evaluation study on content preservation and grammar or text quality of the recursive paraphrases with DIPPER that we use for our attacks. Ratings are on a Likert scale of 1 to 5. See Appendix B.1 for details.

| | pp_i | i=1 | i=2 | i=3 | i=4 | i=5 | All pp_i |
|---|---|---|---|---|---|---|---|
| Content preservation | Avg. rating | 4.37 ± 0.63 | 4.18 ± 0.67 | 3.93 ± 0.71 | 3.9 ± 0.75 | 3.85 ± 0.78 | 4.05 ± 0.2 |
| Content preservation | Ratings 5&4 | 91.67% | 85.0% | 80.0% | 78.3% | 80.0% | 83.0% |
| Grammar or text quality | Avg. rating | 4.62 ± 0.55 | 4.28 ± 0.73 | 4.26 ± 0.65 | 4.22 ± 0.64 | 4.17 ± 0.74 | 4.31 ± 0.35 |
| Grammar or text quality | Ratings 5&4 | 96.67% | 83.33% | 88.33% | 88.3% | 83.33% | 88.0% |

Table 2: Summary of the MTurk human evaluation study of the recursive paraphrases with LLaMA-2-7B-Chat.

| Paraphraser | Evaluation | AI text | pp1 | pp2 | pp3 | pp4 | pp5 |
|---|---|---|---|---|---|---|---|
| DIPPER | Mean perplexity | 5.2 | 7.7 | 7.8 | 8.5 | 7.7 | 8.7 |
| DIPPER | QA performance | 97% | 97% | 96% | 96% | 96% | 95.5% |
| LLaMA-2-7B-Chat | Mean perplexity | 5.2 | 8.1 | 9.3 | 9.0 | 10.3 | 10.5 |
| LLaMA-2-7B-Chat | QA performance | 97% | 97% | 97% | 97% | 97% | 97% |

Table 3: Automated evaluation of the text quality of recursive paraphrases using perplexity measures with respect to OPT-13B and question-answering benchmark accuracy.

Suppose a passage S = (s1, s2, ..., sn), where si is the i-th sentence. DIPPER and LLaMA-2-7B-Chat paraphrase S into S′ = f_strong(S) in one shot, while the lightweight T5-based paraphraser outputs S′ = (f_weak(s1), f_weak(s2), ..., f_weak(sn)); that is, it can only paraphrase sentence by sentence. DIPPER also has the ability to take a context prompt text C as input to generate higher-quality paraphrases S′ = f_strong(S, C). We can also vary two different hyperparameters of DIPPER to generate a diverse set of paraphrases for a single input passage. We use DIPPER and LLaMA-2-7B-Chat for our recursive paraphrasing attacks since they provide higher-quality paraphrasing compared to the 222M-parameter T5 model.

Let an LLM L generate AI text output S = L(C) for an input prompt C. DIPPER or LLaMA-2-7B-Chat can be used to generate a paraphrase pp1 = f_strong(S, C). This paraphrasing can be performed recursively (see Figure 2); that is, ppi = f_strong(pp(i-1), C). While DIPPER is explicitly trained to be a paraphraser, LLaMA-2-7B-Chat is an instruction-tuned model for chat purposes; we design a system prompt (see Appendix B.2) to use the LLaMA-2-7B-Chat model as a paraphraser. In §2.3 and §2.4, we show that recursive paraphrasing is effective in evading the strong watermark-based and retrieval-based detectors when compared to a single round of paraphrasing. Using human and other automated evaluation techniques in §2.2, we show that our recursive paraphrasing method only degrades the text quality slightly.

Figure 3: ROC plots for soft watermarking with recursive paraphrasing attacks, for (a) recursive paraphrasing with DIPPER and (b) recursive paraphrasing with LLaMA-2-7B-Chat. AUROC, TPR@1%FPR, and perplexity scores measured using OPT-13B are given in the legend. The target LLM OPT-13B is used to generate watermarked outputs that are 300 tokens in length.
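To make the recursion ppi = f_strong(pp(i-1), C) concrete, the sketch below shows the structure of the attack loop. This is a minimal illustration: the `paraphrase` callable is an assumed wrapper around whichever neural paraphraser is used (DIPPER or LLaMA-2-7B-Chat in our experiments, with the context C passed along), not an interface those models expose, and the toy stand-in at the bottom exists only so the sketch runs end to end.

```python
from typing import Callable

def recursive_paraphrase(
    text: str,
    context: str,
    paraphrase: Callable[[str, str], str],  # assumed wrapper around a neural paraphraser
    rounds: int = 5,
) -> list[str]:
    """Return [pp1, ..., pp_rounds], where pp_i = f_strong(pp_{i-1}, context)."""
    outputs = []
    current = text
    for _ in range(rounds):
        current = paraphrase(current, context)
        outputs.append(current)
    return outputs

# Toy stand-in paraphraser so the sketch runs end to end; a real attack would
# call a neural paraphraser here instead.
toy = lambda s, c: s.replace("quick", "fast")
print(recursive_paraphrase("The quick fox.", "", toy, rounds=2))
```

The "Best of ppi" variant discussed in §2.3 simply adds a detector query inside this loop and keeps the round with the lowest detection score.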
## 2.2 Quality Of The Paraphrases

In order to reliably study the quality of the recursive paraphrases that we use in our experiments with DIPPER and LLaMA-2-7B-Chat, we perform human evaluations using MTurk along with other automated techniques. The AI text used in this study is generated using a watermarked OPT-13B model. Tables 1 and 2 provide a summary of the study. We investigate the content preservation and the text quality or grammar of the recursive paraphrases with respect to the AI-generated texts (see Tables 6-9 in Appendix B.1 for more details). In terms of content preservation with DIPPER, 70% of the paraphrases were rated high quality and 23% somewhat equivalent. In terms of text quality or grammar, **89% of the paraphrases were rated high quality.** On a Likert scale of 1 to 5, the DIPPER paraphrases that we use received an average rating of 4.14 ± 0.58 for text quality or grammar and 4.0 ± 0.9 for content preservation. **Similarly, 83% of the recursive paraphrases we obtain with LLaMA-2-7B-Chat were rated high quality.** See Appendix B.1 for more details on the human study.

For automated text quality evaluations, we use perplexity measures and a question-answering (QA) benchmark in Table 3. We measure the perplexity scores using OPT-13B. As shown in the table, the perplexity scores degrade only slightly, from 5.2 to 8.7 and 10.5, respectively, for DIPPER and LLaMA-2-7B-Chat after 5 rounds of paraphrasing. We also use the QA benchmark SQuAD-v2 (Rajpurkar et al., 2016) to evaluate the effect of recursive paraphrasing. For this, we use two hundred data points from SQuAD-v2. Each data point consists of a context passage, a question, and an answer. We evaluate a QA model on the SQuAD-v2 benchmark and observe that it achieves 97% accuracy on the QA task. For the QA model, we use the LLaMA-2-13B-Chat model with a carefully written system prompt (see Appendix B.2). To evaluate the quality of the paraphrases, we paraphrase the context passages recursively and use these to evaluate the QA accuracy with the QA model. If the QA model can answer the question correctly based on the paraphrased context, then the information is preserved in the paraphrase. As we observe, the QA performance with recursive paraphrasing is similar to that with the clean context passages.

These results confirm that AI text detectors can be effectively attacked using recursive paraphrasing with only a slight degradation in text quality. We note that the amount of acceptable quality degradation can be application-specific. For example, an adversary might accept a larger quality drop when writing a social media post than when writing a fake news article. Our human studies rate our recursive paraphrases to have a score of either 5 or 4 almost 70% of the time. Though this might be acceptable for some adversaries, others might not tolerate a score of 4 for their applications. Since a score of 4 denotes minor degradation, we presume that adversaries could manually edit such paraphrases for their attacks. Nevertheless, our paraphrases get a perfect score 35% of the time, indicating that it is still practical for adversaries to use recursive paraphrasing to perform their attacks successfully. However, we believe this tradeoff between text quality degradation and detection evasion will diminish as paraphrasers improve in the future.
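For reference, the perplexity metric used above can be computed with standard Hugging Face utilities. The sketch below is an illustrative, simplified version: it scores a single passage with facebook/opt-125m as a small stand-in for the OPT-13B scorer used in our experiments, and it omits batching, device placement, and long-sequence handling.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in for the OPT-13B scorer used in the paper's experiments.
name = "facebook/opt-125m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    # With labels = input_ids, the model shifts internally and returns the
    # mean per-token negative log-likelihood; perplexity is its exponential.
    loss = model(input_ids=ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

In our evaluation, this score is computed for the original AI text and for each recursive paraphrase ppi, and the differences reported in Table 3 are the quality trade-off of the attack.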
Figure 4: ROC plots for soft watermarking with recursive paraphrasing attacks, for (a) watermarked text with mean token length 300 and (b) watermarked text with varying token lengths. AUROC, TPR@1%FPR, and perplexity scores measured using OPT-13B are given in the legend. The target LLM is OPT-1.3B. (a) Even for watermarked passages that are 300 tokens long, recursive paraphrasing is effective: as the paraphrasing rounds proceed, detection rates degrade significantly with a slight trade-off in text quality. (b) Attacking watermarked passages becomes easier as their length reduces.

## 2.3 Paraphrasing Attacks On Watermarked AI Text

In this section, we evaluate our recursive paraphrasing attacks on the soft watermarking scheme proposed in Kirchenbauer et al. (2023a). Soft watermarking encourages an LLM to output a token s(t) at time-step t that belongs to a "green list". The green list for s(t) is created using a private pseudo-random generator seeded by the prior token s(t-1). A watermarked output from the LLM is designed to have tokens that are predominantly selected from the green list. Hence, a watermark detector with access to the pseudo-random generator checks the number of green tokens in a candidate passage to detect whether it is watermarked. Here, we target a watermarked OPT-13B with 13B parameters in Figure 3 and a watermarked OPT-1.3B in Figure 4 for our experiments. In Appendix A.1, we also evaluate our attacks on GPT-2 Medium (Radford et al., 2019) and on other datasets: PubMedQA (Jin et al., 2019) and Kafkai (Kafkai, 2020).

Dataset. We perform our experiments on 2000 text passages that are around 300 tokens in length (1000 passages each for the human and AI text classes). We pick 1000 long news articles from the XSum "document" feature. For each article, the first ∼300 tokens are input to the target OPT-1.3B to generate 1000 watermarked AI text passages that are each ∼300 tokens in length. The second 300 tokens from the 1000 news articles in the dataset are treated as baseline human text. We note that our considered dataset has more and longer passages compared to the experiments in Kirchenbauer et al. (2023a).

Detection results after paraphrasing attack. The weaker paraphrasing attacks discussed in Kirchenbauer et al. (2023a) are not effective in removing watermarks; they perform "span replacement" by replacing random tokens (in-place) using a language model. However, after a single round of paraphrasing (pp1) with a watermarked OPT-13B as the target LLM, the TPR@1%FPR of the watermark detector degrades from 99.8% to 80.7% and 54.6%, respectively, with the DIPPER and LLaMA-2-7B-Chat paraphrasers. We also show that our stronger recursive paraphrasing attacks can effectively evade watermark detectors with only a slight degradation in text quality. As shown in Figures 3-4, the recursive paraphrase attack further degrades the detection rate of the detector to below 20% after 5 rounds of paraphrasing (pp5). Note that in all the settings, pp2, or 2 rounds of paraphrasing, is sufficient to degrade the TPR@1%FPR to below 50%. As shown in Figure 3, DIPPER shows a clearer and more consistent trend of improving attack performance over recursions of paraphrasing in comparison to LLaMA-2. This is because DIPPER is trained explicitly to be a paraphraser, with hyperparameters that can control the quality of paraphrasing. Therefore, we mainly employ DIPPER for our recursive paraphrase attacks.

Best of ppi in the figures refers to the method where, for each passage, we select the paraphrase out of all the ppi's that has the worst detector score. For Best of ppi with OPT-1.3B, the detection rate reduces drastically from 99.8% to 4.0% with a trade-off of only 1.5 in the perplexity score (Figure 4a). Best of ppi, unlike the ppi attacks, assumes black-box query access to the detector. Figure 4b shows that the watermarking detector becomes weaker as the length of the watermarked text reduces. Note that for watermarked texts that are 50 or 100 tokens long, the detection performance after the recursive paraphrasing attack is similar to that of a random detector. We provide examples of the paraphrased texts that we use for our attacks in Appendix B.3.
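For intuition, the counting test underlying soft watermark detection can be sketched as follows. The green-list rule below is a toy stand-in for the detector's private pseudo-random partition (the actual scheme seeds the generator with a hash of the prefix token), but the green-token counting and z-score mirror the detection principle described above; a detector would flag a passage as watermarked when the z-score exceeds a chosen threshold (e.g., z > 4).

```python
import numpy as np

GAMMA = 0.5      # assumed fraction of the vocabulary placed in each green list
VOCAB = 50_000   # assumed vocabulary size

def is_green(prev_token: int, token: int) -> bool:
    # Toy stand-in for the private pseudo-random partition: seed an RNG with
    # the prefix token and mark a GAMMA fraction of the vocabulary green.
    rng = np.random.default_rng(prev_token)
    return bool(rng.random(VOCAB)[token] < GAMMA)

def z_score(tokens: list) -> float:
    """Green-token count vs. the null (human text hits green ~GAMMA of the time)."""
    hits = sum(is_green(p, t) for p, t in zip(tokens[:-1], tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA))

tokens = list(np.random.default_rng(7).integers(0, VOCAB, 300))
print(z_score(tokens))   # approximately N(0, 1) for non-watermarked token streams
```

Paraphrasing evades this test precisely because it replaces tokens without knowledge of the partition, pushing the green-token fraction back toward GAMMA and the z-score back toward the null distribution.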
In the plot legend - perturbation refers to the zero-shot methods in Mitchell et al. (2023); threshold refers to the zero-shot methods in Solaiman et al. (2019); Gehrmann et al. (2019); Ippolito et al. (2019); roberta refers to OpenAI's trained detectors (OpenAI, 2019). The TPR@1%FPR scores of different detectors before the attack, after the attack, and after the attack with multiple queries, respectively, are provided in the plot legend. DIPPER shows a clearer and more consistent trend in improving attack performance over recursions of paraphrasing in comparison to LLaMA-2. This is because DIPPER is trained explicitly to be a paraphraser with hyperparameters that can control the quality of paraphrasing. Therefore, we mainly employ DIPPER for our recursive paraphrase attacks. Best of ppi in the figure refers to the method where, for each passage, we select the paraphrase out of all the ppi's that has the worst detector score. For Best of ppi with OPT-1.3B, the detection rate reduces drastically from 99.8% to 4.0% with only a trade-off of 1.5 in the perplexity score (Figure 4a). Best of ppi, unlike the ppi attacks, assume black box query access to the detector. Figure 4b shows that the watermarking detector becomes weaker as the length of the watermarked text reduces. Note that for watermarked texts that are 50 or 100 tokens long, the detection performance after the recursive paraphrasing attack is similar to that of a random detector. We provide examples of paraphrased text that we use for our attacks in Appendix B.3. ## 2.4 Paraphrasing Attacks On Non-Watermarked Ai Text Neural network-based trained detectors such as RoBERTa-Large-Detector from OpenAI (OpenAI, 2019) are trained or fine-tuned for binary classification with datasets containing human and AI-generated texts. Zero-shot classifiers leverage specific statistical properties of the source LLM outputs for their detection. Retrieval-based methods search for a candidate passage in a database that stores the LLM outputs. Here, we perform experiments on these non-watermarking detectors to show they are vulnerable to our paraphrasing attack. Trained and Zero-shot detectors. We use a pre-trained GPT-2 Medium model (Radford et al., 2019) with 355M parameters as the target LLM to evaluate our attack on 1000 long passages from the XSum dataset (Narayan et al., 2018). We use the T5-based paraphrasing model (Damodaran, 2021) with 222M parameters to rephrase the 1000 output Figure 6: Recursive paraphrasing breaks the retrievalbased detector (Krishna et al., 2023) with only slight ![7_image_1.png](7_image_1.png) degradation in text quality. ppi refers to i recursion(s) of paraphrasing. Numbers next to markers denote the perplexity scores of the paraphraser output. texts generated using the target GPT-2 Medium model. Figure 5 shows the effectiveness of the paraphrasing attack over these detectors. The AUROC scores of DetectGPT (Mitchell et al., **2023) drop from** 96.5% (before the attack) to 59.8% **(after the attack).** Note that AUROC of 50% corresponds to a random detector. The rest of the zero-shot detectors (Solaiman et al., 2019; Gehrmann et al., 2019; Ippolito et al., 2019) also perform poorly after our attack. Though the performance of the trained neural network-based detectors (OpenAI, 2019) is better than that of zero-shot detectors, they are also not reliable. For example, TPR@1%FPR of OpenAI's RoBERTa-Large-Detector drops from 100% to around 92% after our attack. 
In another setting, we assume the attacker has multiple query access to the detector. That is, the attacker can query the detector with an input AI text passage, and the detector reveals the detection score to the attacker. For this scenario, we generate ten different paraphrases for an input passage and query the detector for their detection scores. For each AI text passage, we then select the paraphrase with the worst detection score for evaluating the ROC curves. As shown in Figure 5, **with multiple queries to the detector, an adversary can paraphrase more efficiently to bring down the TPR@1%FPR of the RoBERTa-Large-Detector from 100% to 80%.** In Appendix A.2, we show more experiments with more datasets and target LLMs.

Retrieval-based detectors. The detector in Krishna et al. (2023) is designed to be robust against paraphrase attacks. However, we show that it can suffer from the recursive paraphrase attacks that we develop using DIPPER. We use 2000 passages (1000 generated by OPT-1.3B and 1000 human passages) from the XSum dataset. The AI outputs are stored in the AI database by the detector. As shown in Figure 6, this detector detects almost all of the AI outputs even after a round of paraphrasing. However, **the detection accuracy drops below ∼60% after five rounds of recursive paraphrasing.** As marked in the plot, the perplexity score of the paraphrased text only degrades by 1.7 at a detection accuracy of ∼60%. Moreover, retrieval-based detectors are concerning since they might lead to **serious privacy issues** from storing users' LLM conversations. In Appendix A.3, we show more experiments with more datasets and target LLMs.

## 3 Spoofing Attacks On Generative AI-Text Models

An AI language detector without a low type-I error can cause harm, as it might wrongly accuse a human of plagiarizing using an LLM. Moreover, an attacker (an *adversarial human*) can generate a non-AI text that is detected to be AI-generated. This is called a *spoofing attack*. An adversary can potentially launch spoofing attacks to produce derogatory texts detected as AI-generated in order to damage the reputation of the target LLM's developers. In this section, as a proof of concept, we show that current text detectors can be spoofed to detect texts composed by adversarial humans as AI-generated. More details on the spoofing experiments are presented in Appendix D.

Soft watermarking. As discussed in §2, soft watermarked LLMs (Kirchenbauer et al., 2023a) generate tokens from "green lists" determined by a pseudo-random generator seeded by the prefix token. Though the pseudo-random generator is private, an attacker can estimate the green lists by observing multiple token pairs in the watermarked texts from the target LLM. An adversarial human can then leverage the estimated green lists to compose texts by themselves that are detected to be watermarked. In our experiments, we estimate the green lists for the 181 most commonly used words in the English vocabulary. We query the target watermarked OPT-1.3B model one million times to observe the token pair distributions within this smaller vocabulary subset that we select. Note that this attack on a watermarked model only needs to be performed once to learn the watermarking pattern, i.e., the proxy green lists, which can be used to spoof the detector thereafter. Based on the frequency of tokens that follow a prefix token in the observed generated outputs, we estimate a green list for each of the 181 common words.
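A minimal sketch of this estimation step is shown below. It is purely illustrative: the whitespace tokenization, the tiny vocabulary, and the count threshold are simplifications we assume for exposition, whereas the actual experiment operates on one million watermarked generations restricted to the 181 common words.

```python
from collections import Counter, defaultdict

def estimate_green_lists(watermarked_texts, vocab, min_count=25):
    """For each prefix word, collect successor words that occur unusually often.

    A green-list-biased sampler over-represents green successors, so a high
    count of a successor after a fixed prefix is evidence that the pair is
    'green'. The threshold here is an illustrative choice, not a tuned value.
    """
    pair_counts = defaultdict(Counter)
    for text in watermarked_texts:
        words = text.lower().split()
        for prev, nxt in zip(words[:-1], words[1:]):
            if prev in vocab and nxt in vocab:
                pair_counts[prev][nxt] += 1
    return {
        prev: {w for w, c in counts.items() if c >= min_count}
        for prev, counts in pair_counts.items()
    }

# Tiny usage example; a real run would feed ~1M sampled continuations from
# the watermarked LLM, restricted to the common-word vocabulary.
vocab = {"the", "of", "and", "a", "in", "to", "is"}
texts = ["the cat sat in the house", "the dog ran to the park"]
print(estimate_green_lists(texts, vocab, min_count=1))
```

Once such proxy green lists are available, an adversarial human only needs to prefer estimated-green successors while composing text in order to inflate the detector's green-token count.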
We build a tool that helps adversarial humans create watermarked sentences by providing them with the proxy green lists that we learn with access only to a watermarked text corpus obtained from the target watermarked LLM. We observe that the soft watermarking scheme can be spoofed to degrade its detection AUROC from 99.8% to 1.3% (see Figure 7).

Figure 7: ROC curve of a soft watermarking-based detector (Kirchenbauer et al., 2023a) after our spoofing attack.

Retrieval-based detectors. Krishna et al. (2023) use a database to store LLM outputs to detect AI text by retrieval. We find in our experiments (see Figure 13) that an adversary can spoof this detector 100% of the time, even if the detector maintains a private database. Suppose an adversary, say a teacher, has access to a human-written document S, say a student's essay. The adversary can prompt the target LLM to paraphrase S to get S′. This results in the LLM, by design, storing its output S′ in its private database for detection purposes. Now, the detector would classify the original human text S as AI-generated, since a semantically similar copy S′ is present in its database. In this manner, a teacher could falsely accuse an innocent student of plagiarism using the retrieval-based detector. Note that manipulating retrieval-based detectors is easier using this approach compared to watermarking techniques. This observation implies a practical tradeoff between type-I and type-II errors: when a detector is strengthened against type-II errors, its performance in terms of type-I errors tends to deteriorate.

Zero-shot and neural network-based detectors. In this setting, a malicious adversary could write a short text in a collaborative work, which may lead to the entire text being classified as AI-generated. To simulate this, we prepend a human-written text marked as AI-generated by the detector to all the other human-generated texts for spoofing. In other words, from 200 long passages in the XSum dataset, we pick the human text with the worst detection score for each detector considered in §2.4. We then prepend this text to all the other human texts, ensuring that the length of the prepended text does not exceed the length of the original text. Our experiments show that the **AUROC of all these detectors drops after spoofing** (see plots in Appendix D). After this naïve spoofing attack, the TPR@1%FPR of most of these detectors drops significantly.

## 4 Hardness Of Reliable AI Text Detection

In this section, we formally upper bound the AUROC of an arbitrary detector in terms of the TV between the distributions of M (e.g., AI text) and H (e.g., human text) over the set of all possible text sequences Ω. We note that this result holds for any two arbitrary distributions H and M. For example, H could be the text distribution of a person or group, while M could be the output text distribution of a general LLM or of an LLM trained by an adversary to mimic the text of a particular set of people. We use TV(M, H) to denote the TV between these two distributions and model a detector as a function D : Ω → R that maps every sequence in Ω to a real number. Sequences are classified as AI-generated or human-generated by applying a threshold γ to this number. By adjusting the parameter γ, we can tune the sensitivity of the detector to AI- and human-generated texts to obtain an ROC curve.
Figure 8: Comparing the performance, in terms of AUROC, of the best possible detector to the baseline performance corresponding to a random classifier.

**Theorem 1.** *The area under the ROC of any detector* D *is bounded as*

$$\mathrm{AUROC}(D)\leq\frac{1}{2}+\mathrm{TV}(\mathcal{M},\mathcal{H})-\frac{\mathrm{TV}(\mathcal{M},\mathcal{H})^{2}}{2}.$$

The proof is deferred to Appendix C.1. Figure 8 shows how the above bound grows as a function of the TV distance. This theorem states that as the TV distance between AI and human text distributions reduces, the AUROC of the best possible detector decreases. Based on our theory, an adversary can use advanced LLMs to mimic human text, reducing the TV distance between human and AI text distributions in order to evade text detection systems. For a detector to have good performance (say, AUROC > 0.9), the distributions of human and AI-generated texts must be very different from each other (TV > 0.5 based on the figure). As M gets more similar to H (say, TV < 0.2), the performance of even the best possible detector becomes unreliable (AUROC < 0.7). For some applications, say AI-text plagiarism detection, reliable detection requires a low false positive rate (say, < 0.01) and a high true positive rate (say, > 0.9). Based on our theory, this cannot be achieved even when the overlap between the distributions is relatively low, say 11% (or TV = 0.9 − 0.01 = 0.89, based on equation 1 in Appendix C.1). Note that, for a watermarked model, the above bound can be close to one, as the TV between the watermarked distribution and the human-generated distribution can be high. Corollary 1 in Appendix C.2 discusses how paraphrasing attacks can be effective in evading watermarks using Theorem 1. In Appendix C.3, we also present a tightness analysis of the bound in Theorem 1, where we show that for any distribution H there exist an M and a detector D for which the bound holds with equality. We also discuss general trade-offs between the true positive and false positive rates of detection in Corollaries 2 and 3 in Appendix C.2. Theorem 2 in Appendix C.4 extends Theorem 1 to bound the AUROC of the best possible detector by a function of the TV distance between LLM outputs generated using pseudorandomness and human text distributions.
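The bound in Theorem 1 is a one-line computation. The snippet below simply tabulates it for a few TV values and reproduces the operating points discussed above; it is an illustration of the bound, not part of the proof.

```python
def auroc_upper_bound(tv: float) -> float:
    """Theorem 1: AUROC(D) <= 1/2 + TV - TV^2 / 2 for any detector D."""
    return 0.5 + tv - tv**2 / 2

for tv in (0.1, 0.2, 0.5, 0.89, 1.0):
    print(f"TV = {tv:.2f} -> best possible AUROC <= {auroc_upper_bound(tv):.3f}")
```

For example, at TV = 0.2 the bound is 0.68 (below the 0.7 threshold cited above), while at TV = 0.5 it is 0.875, still short of the AUROC > 0.9 regime.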
In all the experiments, we consistently observe that the TV distance estimates between human and AI text distributions shrink as language models become more advanced, indicating the increasing difficulty of AI text detection.

(i) Using synthetic text data. We perform experiments on a toy synthetic text dataset where the exact TV distance can be calculated. We use the Markov assumption to generate synthetic text data with sequence length 3, using a randomly generated token transition matrix for varying vocabulary sizes. We train single-layer LSTMs of different hidden-unit sizes on a dataset of 20,000 sequences sampled from this synthetic data distribution, using a default AdamW optimizer (Loshchilov & Hutter, 2017). We compute the learned token transition matrix for the LSTM output distribution from the softmax logit values of the trained model. Using the transition matrices of both distributions, we compute the exact TV. Figure 9 shows that the exact TV distance between the learned and true synthetic distributions decreases as the LSTM model size increases.

(ii) Using projection. For discrete distributions, the TV distance can be computed as half the sum of the pointwise absolute differences between their probability mass functions (PMFs). While this is mathematically simple, since texts can be considered token sequences of bounded length, it is not practical to compute true TV distances directly by estimating PMFs: the size of the sample space is approximately the size of the token set raised to the power of the *sequence length*. To tackle this issue, we split the original token set into five roughly equal partitions and assign a meta-token to each partition. Given a sequence of tokens from the original set, we construct a new sequence by replacing each token with its corresponding meta-token. We estimate the PMFs of the meta-token sequences created from texts in the WebText and GPT-2 output datasets. Since the set of meta-tokens is significantly smaller than the original token set, estimating the PMFs becomes much more tractable. We then use these PMFs to estimate the total variation distances of the output distributions of different GPT-2 models (GPT-2-Small, GPT-2-Medium, GPT-2-Large, and GPT-2-XL) from the WebText dataset. Figure 10 plots these TV estimates for different sequence lengths, averaged over 30 runs of the experiment. We observe that the TV distance consistently decreases with increasing model size for all sequence lengths.

These experiments provide empirical evidence that more advanced LLMs can lead to smaller TV distances. Thus, based on Theorem 1, reliable AI text detection would become increasingly difficult.

## 5 Conclusion

In this paper, we analyze the performance of four different classes of detectors, including watermarking-based, neural network-based, zero-shot, and retrieval-based detectors, and show their reliability issues. In particular, we develop a strong attack called *recursive paraphrasing* that can break recently proposed watermarking and retrieval-based detectors. Using perplexity score computations as well as various MTurk human studies, we observe that our recursive paraphrasing degrades text quality only slightly. We also show that adversaries can spoof these detectors to increase their type-I errors. Spoofing attacks can lead to derogatory passages being detected as AI-generated, which might affect the reputation of the LLM detectors' developers.
Finally, we establish a theoretical connection between the AUROC of the best possible detector and the TV distance between human and AI-text distributions, which can be used to study the fundamental hardness of the reliable detection problem for more advanced LLMs. In the future, adversaries could train LLMs to explicitly mimic a specific group of people, minimizing the TV distance and, based on our theory, easily evading detection. It would be interesting to see more work on this aspect. Though the existing paraphrasers we use are powerful, they might not perform as well in specific technical domains such as clinical text data. However, stronger paraphrasers in the future might overcome these issues. By showing empirical evidence that larger models have smaller TV distance estimates, we hypothesize that reliable detection will get harder as LLMs get more powerful. A precise analysis of this assumption is quite difficult because estimating the TV of the text distributions from a finite set of samples is extremely challenging. Hence, we provide empirical experiments on synthetic data showing that bigger language models can potentially lead to smaller TV distances.

A detector should ideally be helpful in reliably flagging AI-generated texts to prevent the misuse of LLMs. However, the cost of misidentification by a detector can be huge. If the false positive rate of the detector is not low enough, humans (e.g., students) could be falsely accused of AI plagiarism. Moreover, a disparaging passage falsely detected to be AI-generated could affect the reputation of the LLM's developers. As a result, the practical applications of AI-text detectors can become unreliable and invalid. Security methods need not be foolproof; however, we need to ensure that breaking these security defenses is not an easy task for an attacker. Thus, analyzing the risks of using current and future detectors is vital to avoid creating a false sense of security.

## References

Scott Aaronson. My ai safety lecture for ut effective altruism. November 2022. URL https://scottaaronson.blog/?p=6823.

David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. Generating sentiment-preserving fake online reviews using neural language models and their human- and machine-based detection. In Advanced Information Networking and Applications: Proceedings of the 34th International Conference on Advanced Information Networking and Applications (AINA-2020), pp. 1341–1354. Springer, 2020.

Mikhail J Atallah, Victor Raskin, Michael Crogan, Christian Hempelmann, Florian Kerschbaum, Dina Mohamed, and Sanket Naik. Natural language watermarking: Design, analysis, and a proof-of-concept implementation. In Information Hiding: 4th International Workshop, IH 2001 Pittsburgh, PA, USA, April 25–27, 2001 Proceedings 4, pp. 185–200. Springer, 2001.

Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, and Arthur Szlam. Real or fake? learning to discriminate machine from human generated text. *arXiv preprint arXiv:1906.03351*, 2019.

Lenore Blum, Manuel Blum, and Mike Shub. Comparison of two pseudo-random number generators. In Advances in Cryptology: Proceedings of CRYPTO '82, pp. 61–78. Plenum, 1982.

Manuel Blum and Silvio Micali. How to generate cryptographically strong sequences of pseudorandom bits. SIAM Journal on Computing, 13(4):850–864, 1984. doi: 10.1137/0213053. URL https://doi.org/10.1137/0213053.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, and Furong Huang. On the possibilities of ai-generated text detection, 2023.

Jon Christian. Cnet secretly used ai on articles that didn't disclose that fact, staff say. January 2023. URL https://futurism.com/cnet-ai-articles-label.

Prithiviraj Damodaran. Parrot: Paraphrase generation for nlu, 2021.

T Fagni, F Falchi, M Gambini, A Martella, and M Tesconi. Tweepfake: About detecting deepfake tweets. *arXiv preprint arXiv:2008.00036*, 2020.

Sebastian Gehrmann, Hendrik Strobelt, and Alexander M Rush. Gltr: Statistical detection and visualization of generated text. *arXiv preprint arXiv:1906.04043*, 2019.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*, 2014.

Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. Automatic detection of generated text is easiest when humans are fooled. *arXiv preprint arXiv:1911.00650*, 2019.

Ganesh Jawahar, Muhammad Abdul-Mageed, and Laks VS Lakshmanan. Automatic detection of machine generated text: A critical survey. *arXiv preprint arXiv:2011.01314*, 2020.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. PubMedQA: A dataset for biomedical research question answering. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2567–2577, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1259. URL https://aclanthology.org/D19-1259.

Kafkai. "kafkai: Ai writer & ai content generator". 2020. URL https://kafkai.com/.

Navid Tafaghodi Khajavi and Anthony Kuh. The quality of the covariance selection through detection problem and AUC bounds. *CoRR*, abs/1605.05776, 2016. URL http://arxiv.org/abs/1605.05776.

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. A watermark for large language models. *arXiv preprint arXiv:2301.10226*, 2023a.

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, and Tom Goldstein. On the reliability of watermarks for large language models, 2023b.

Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, and Mohit Iyyer. Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense, 2023.

Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Certifying model accuracy under distribution shifts. *arXiv preprint arXiv:2201.12440*, 2022.

Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou. Gpt detectors are biased against non-native english writers, 2023.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*, 2019.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017.
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D Manning, and Chelsea Finn. Detectgpt: Zero-shot machine-generated text detection using probability curvature. *arXiv preprint arXiv:2301.11305*, 2023.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. *ArXiv*, abs/1808.08745, 2018.

OpenAI. Gpt-2: 1.5b release. November 2019. URL https://openai.com/research/gpt-2-1-5b-release.

OpenAI. Chatgpt: Optimizing language models for dialogue. November 2022. URL https://openai.com/blog/chatgpt/.

OpenAI. Gpt-4 technical report. March 2023. URL https://cdn.openai.com/papers/gpt-4.pdf.

J. Pu, Z. Sarwar, S. Abdullah, A. Rehman, Y. Kim, P. Bhattacharya, M. Javed, and B. Viswanath. Deepfake text detection: Limitations and opportunities. In *2023 IEEE Symposium on Security and Privacy (SP)*, pp. 1613–1630, Los Alamitos, CA, USA, May 2023. IEEE Computer Society. doi: 10.1109/SP46215.2023.10179387. URL https://doi.ieeecomputersociety.org/10.1109/SP46215.2023.10179387.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019. URL https://arxiv.org/abs/1910.10683.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. *arXiv e-prints*, art. arXiv:1606.05250, 2016.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.

Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, and Soheil Feizi. Cuda: Convolution-based unlearnable datasets. *arXiv preprint arXiv:2303.04278*, 2023.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022.

Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. Release strategies and the social impacts of language models. *arXiv preprint arXiv:1908.09203*, 2019.

Omanshu Thapliyal and Inseok Hwang. Data-driven cyberattack synthesis against network control systems, 2022.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. *arXiv preprint arXiv:2307.09288*, 2023.

Wenxiao Wang, Alexander J Levine, and Soheil Feizi. Improved certified defenses against data poisoning with (deterministic) finite aggregation. In *International Conference on Machine Learning*, pp. 22769–22783. PMLR, 2022.

Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Chenxi Whitehouse, Osama Mohammed Afzal, Tarek Mahmoud, Alham Fikri Aji, and Preslav Nakov. M4: Multi-generator, multi-domain, and multi-lingual black-box machine-generated text detection, 2023.

Max Weiss.
Deepfake bot submissions to federal public comment websites cannot be distinguished from human submissions. *Technology Science*, 2019121801, 2019.

Alex Wilson, Phil Blunsom, and Andrew D Ker. Linguistic steganography on twitter: hierarchical language modeling with manual interaction. In *Media Watermarking, Security, and Forensics 2014*, volume 9028, pp. 9–25. SPIE, 2014.

Max Wolff. Attacking neural text detectors. *CoRR*, abs/2002.11768, 2020. URL https://arxiv.org/abs/2002.11768.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models, 2022. URL https://arxiv.org/abs/2205.01068.

Xuandong Zhao, Yu-Xiang Wang, and Lei Li. Protecting language generation models via invisible watermarking. *arXiv preprint arXiv:2302.03162*, 2023.

## A Experiments With More Datasets And Models

In this section, we consider multiple datasets (XSum (Narayan et al., 2018), PubMedQA (Jin et al., 2019), and Kafkai (Pu et al., 2023)) and target LLMs (OPT-1.3B (Zhang et al., 2022) and GPT-2-Medium (Radford et al., 2019)) for analyzing our attacks.

Datasets. As discussed in §2.1, we use 2000 text passages (1000 passages each for the human and AI-generated text classes) of ∼300 tokens in length from the XSum dataset for analyzing our attacks. For the rest of the datasets, we use 1000 text passages (500 passages each for the human and AI-generated text classes) of ∼200 tokens in length. XSum contains long news articles in its "document" feature. To evaluate the robustness of our attacks to distribution shifts, we include more datasets. We use PubMedQA, which is a medical text dataset. The Kafkai dataset (Pu et al., 2023) contains real and fake articles (generated using privately fine-tuned OpenAI models) from 10 different domains, such as cybersecurity, SEO, and marketing. It is generated using the Kafkai text generation service (Kafkai, 2020).

## A.1 Watermark-Based Detectors

In this section, we analyze the soft watermarking scheme of Kirchenbauer et al. (2023a). We use the powerful 11B-parameter DIPPER paraphraser from Krishna et al. (2023) for our recursive paraphrasing attack on the watermarking detector. On average, five rounds of our recursive paraphrase attack take around 36 seconds per text passage of 300 tokens in length. OPT-13B is used to measure the perplexity scores in all the settings. As shown in Table 1 and Appendix B.1, we perform a human study on the XSum dataset to evaluate the semantic drift in our recursive paraphrasing framework. The MTurk human evaluation reveals that 70% of our recursive paraphrases maintain high-quality content preservation, and 89% of our recursive paraphrases have high-quality text or grammar. Figure 11 shows the performance of the soft watermarking detector in multiple settings. In all the settings, the detection performance drops as the rounds of recursive paraphrasing proceed, with only a slight degradation in perplexity scores. After two rounds of paraphrasing (pp2), the detection performance (TPR@1%FPR) in all the settings drops below 50%. Best of ppi, which selects the paraphrase with the worst detection score, significantly degrades the detection performance to below 10% in all the settings, with only slight degradations of 1.5, 0.5, 2.0, and 2.7 in perplexity measures.
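For concreteness, a minimal sketch of the attack loop follows (our illustration, not the paper's code). Here `paraphrase` stands in for a call to DIPPER and `detector_score` for a detector that returns higher values for text it believes is AI-generated; ppi is obtained by paraphrasing pp(i−1), and "Best of ppi" keeps the round with the worst detection score.

```python
# Minimal sketch of recursive paraphrasing with "Best of ppi" selection.
# `paraphrase` (e.g., DIPPER) and `detector_score` are hypothetical stand-ins.

def best_of_recursive_paraphrases(text, paraphrase, detector_score, rounds=5):
    candidates, current = [], text
    for _ in range(rounds):
        current = paraphrase(current)  # pp_i = paraphrase(pp_{i-1})
        candidates.append(current)
    # Keep the round the detector is least confident about.
    return min(candidates, key=detector_score)
```

The black-box "i pp attack" used against the zero-shot and trained detectors in §A.2 replaces the recursion with i independent paraphrases of the same passage, selected in the same way.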
## A.2 Zero-Shot And Trained Detectors

In this section, we analyze the zero-shot and trained detectors from prior literature (Mitchell et al., 2023; Solaiman et al., 2019; Gehrmann et al., 2019; Ippolito et al., 2019; OpenAI, 2019). We use the T5-based paraphraser Parrot (Damodaran, 2021) to paraphrase the AI-generated text and use OPT-13B to measure the perplexity scores in all the settings. We perform our experiments on the XSum (Narayan et al., 2018), PubMedQA (Jin et al., 2019), and Kafkai (Kafkai, 2020) datasets with GPT-2-Medium and OPT-1.3B as the target generative models. In Figure 12 (ROC curves) and Tables 4 (TPR@1%FPR values) and 5 (AUROC scores), we present our results. 1d, 1z, 10d, and 10z in Tables 4 and 5 refer to different variants of DetectGPT (Mitchell et al., 2023).

Figure 12 shows the performance of various zero-shot and trained detectors in multiple settings. The performance of these detectors drops significantly when the AI-generated text is paraphrased, and when given 5 queries to the detector, an adversary can fool most detectors effectively. Some detectors, such as OpenAI's RoBERTa-based models, are more resilient on datasets like XSum, but are not reliable on other datasets like Kafkai. The perplexity scores of the GPT-2-generated text before any paraphrasing were 15.58 for XSum, 12.80 for PubMedQA, and 19.11 for Kafkai, while the perplexity of the OPT-1.3B-generated text was 9.31. After paraphrasing, the perplexity scores are 20.06, 16.45, 20.01, and 13.96, respectively.

## A.3 Retrieval-Based Detectors

In this section, we analyze the retrieval-based detector proposed in Krishna et al. (2023). We show that our recursive paraphrasing attack is effective in breaking their detector. We use the 11B-parameter DIPPER paraphraser (Krishna et al., 2023) for our attack, and OPT-13B to measure the perplexity scores in all the settings. Figure 13 shows the performance of the retrieval-based detector in multiple settings. In all the settings, the detection accuracy drops as the rounds of recursive paraphrasing proceed, with only a slight degradation in perplexity scores. We observe that the detector works well after a single round of paraphrasing (pp1). However, after five rounds of paraphrasing, Best of ppi reduces the detector's accuracy to close to 50%, again with only a slight degradation in perplexity scores. We also find that we can easily spoof the retrieval-based detector, as discussed in §3, to degrade the detector's performance to 0%. Note that retrieval-based detectors are concerning since they might lead to serious privacy issues by storing users' LLM conversations.
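The perplexity numbers reported in this appendix could be reproduced along the following lines; this is a sketch of standard Hugging Face usage (assuming OPT-13B fits in memory), not the paper's evaluation script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-13b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b")
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean token-level
        # negative log-likelihood; perplexity is its exponential.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()
```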
| Setting | 1d | 1z | 10d | 10z | Likelihood | Rank | Log Rank | Entropy | RoBERTa-Base | RoBERTa-Large |
|---|---|---|---|---|---|---|---|---|---|---|
| **OPT-1.3B on XSum** | | | | | | | | | | |
| No attack | 0.079 | 0.079 | 0.083 | 0.125 | 0.237 | 0.382 | 0.288 | 0.017 | 0.694 | 0.956 |
| pp attack | 0.014 | 0.014 | 0.018 | 0.006 | 0.006 | 0.006 | 0.006 | 0.326 | 0.025 | 0.479 |
| 5 pp attack | 0.0 | 0.0 | 0.005 | 0.0 | 0.004 | 0.001 | 0.002 | 0.202 | 0.003 | 0.244 |
| **GPT-2 on PubMedQA** | | | | | | | | | | |
| No attack | 0.05 | 0.05 | 0.598 | 0.481 | 0.085 | 0.379 | 0.144 | 0.029 | 0.748 | 0.902 |
| pp attack | 0.052 | 0.052 | 0.19 | 0.054 | 0.015 | 0.042 | 0.017 | 0.202 | 0.181 | 0.606 |
| 5 pp attack | 0.0 | 0.0 | 0.031 | 0.002 | 0.008 | 0.01 | 0.012 | 0.135 | 0.088 | 0.452 |
| **GPT-2 on Kafkai** | | | | | | | | | | |
| No attack | 0.077 | 0.077 | 0.669 | 0.625 | 0.088 | 0.352 | 0.085 | 0.0 | 0.048 | 0.006 |
| pp attack | 0.056 | 0.056 | 0.125 | 0.081 | 0.004 | 0.021 | 0.004 | 0.002 | 0.01 | 0.0 |
| 5 pp attack | 0.0 | 0.0 | 0.023 | 0.006 | 0.002 | 0.004 | 0.002 | 0.0 | 0.0 | 0.0 |
| **GPT-2 on XSum** | | | | | | | | | | |
| No attack | 0.169 | 0.169 | 0.599 | 0.326 | 0.114 | 0.444 | 0.186 | 0.026 | 0.881 | 1.0 |
| pp attack | 0.038 | 0.038 | 0.084 | 0.015 | 0.003 | 0.005 | 0.003 | 0.411 | 0.105 | 0.925 |
| 10 pp attack | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.058 | 0.02 | 0.792 |

Table 4: TPR@1%FPR for trained and zero-shot detectors in different settings. For all attacks, we use the T5-based paraphraser. Here, "pp attack" refers to the paraphrasing attack where the AI output is paraphrased by the T5-based model. "i pp attack" refers to the setting where the attacker has black-box access to the detector. Here, the paraphraser generates "i" paraphrases for every passage, and the attacker selects the passage that has the worst detection score after "i" queries to the detector. Columns 1d, 1z, 10d, and 10z are DetectGPT variants; Likelihood, Rank, Log Rank, and Entropy threshold the corresponding statistic; the last two columns are RoBERTa-based detectors.

| Setting | 1d | 1z | 10d | 10z | Likelihood | Rank | Log Rank | Entropy | RoBERTa-Base | RoBERTa-Large |
|---|---|---|---|---|---|---|---|---|---|---|
| **OPT-1.3B on XSum** | | | | | | | | | | |
| No attack | 0.769 | 0.769 | 0.9 | 0.859 | 0.918 | 0.844 | 0.943 | 0.482 | 0.974 | 0.998 |
| pp attack | 0.487 | 0.487 | 0.453 | 0.41 | 0.241 | 0.387 | 0.282 | 0.868 | 0.562 | 0.945 |
| 5 pp attack | 0.162 | 0.162 | 0.244 | 0.182 | 0.153 | 0.216 | 0.181 | 0.821 | 0.316 | 0.9 |
| **GPT-2 on PubMedQA** | | | | | | | | | | |
| No attack | 0.816 | 0.816 | 0.973 | 0.955 | 0.804 | 0.796 | 0.892 | 0.615 | 0.982 | 0.998 |
| pp attack | 0.671 | 0.671 | 0.796 | 0.743 | 0.4 | 0.497 | 0.494 | 0.798 | 0.823 | 0.98 |
| 5 pp attack | 0.33 | 0.33 | 0.601 | 0.541 | 0.327 | 0.314 | 0.409 | 0.752 | 0.712 | 0.967 |
| **GPT-2 on Kafkai** | | | | | | | | | | |
| No attack | 0.814 | 0.814 | 0.976 | 0.971 | 0.865 | 0.86 | 0.89 | 0.394 | 0.817 | 0.86 |
| pp attack | 0.661 | 0.661 | 0.757 | 0.742 | 0.497 | 0.719 | 0.515 | 0.651 | 0.486 | 0.629 |
| 5 pp attack | 0.353 | 0.353 | 0.532 | 0.513 | 0.412 | 0.627 | 0.426 | 0.576 | 0.358 | 0.53 |
| **GPT-2 on XSum** | | | | | | | | | | |
| No attack | 0.837 | 0.837 | 0.976 | 0.949 | 0.879 | 0.868 | 0.93 | 0.617 | 0.993 | 1.0 |
| pp attack | 0.566 | 0.566 | 0.587 | 0.524 | 0.171 | 0.277 | 0.23 | 0.916 | 0.726 | 0.995 |
| 10 pp attack | 0.115 | 0.115 | 0.202 | 0.177 | 0.075 | 0.104 | 0.108 | 0.744 | 0.464 | 0.983 |

Table 5: AUROC for trained and zero-shot detectors in different settings.
Here, "pp attack" refers to the paraphrasing attack where the AI output is paraphrased by the T5-based model. "i pp attack" refers to the setting where the attacker has black-box access to the detector. Here, the paraphraser generates "i" paraphrases for every passage, and the attacker selects the passage that has the worst detection score after "i" queries to the detector. ## B More Details On Ai Paraphrasers B.1 Human Evaluation Study On Paraphrases Apart from measuring the perplexity scores of the paraphrases using OPT-13B and performance on the SQUaD-v2 benchmark, we perform two human evaluation studies to investigate the quality of the paraphrases from DIPPER and LLaMA-2-7B-Chat we use for the paraphrasing attack. We pick 20 random watermarked passages and their corresponding five rounds of recursive paraphrases (pp1 to pp5) for each of the paraphrasers for human evaluation. Each paraphrase is evaluated by 3 unique MTurk workers. We use the same setup as Krishna et al. (2023) for our human study. As shown in Figure 14, users are given a source text with some highlighted portion. The non-highlighted portion of the source text is input into the target OPT-13B model that generates watermarked text which is highlighted for the user's reference. DIPPER or LLaMA-2 paraphrases of the highlighted text are provided as the paraphrasing. The user is supposed to evaluate the quality of the paraphrases with respect to the highlighted watermarked text. They are supposed to rate it on a Likert scale of 1 to 5. See Tables 6,7 for the evaluation summary on content preservation of DIPPER paraphrasers based on the user study. Tables 8,9 show the summary of the evaluation of text quality/grammar of the paraphrases. For the content preservation study, we use the following Likert scale: 5 - preserves the meaning of the source but differs in words and/or structure. 4 - preserves most information in the source but differs in some minor factual details. 3 - reserves some information in the source but differs in certain significant ways. 2 - topically related to the source but most information in the source is not preserved. 1 – not topically related. For the text quality or grammar quality study, we use the following Likert scale: 5 – the paraphrase has excellent grammar/quality with respect to the highlighted source. 4 - the paraphrase is clear and correct with minor grammatical errors. 3 - the paraphrase has few grammatical errors, but remains clear and comparable to highlighted source text. 2 - the paraphrase has significant number of grammatical errors, but remains understandable. 1 - the paraphrase is inferior to the highlighted source text with a lot of grammatical errors, may be difficult to comprehend. Based on the evaluations, 70% and 83% of the paraphrases are rated high quality in terms of content preservation for DIPPER and LLaMA-2, respectively. 89% and 88% of the paraphrases are rated to have high-quality text/grammar for DIPPER and LLaMA-2, respectively. Hence, our human study indicates that watermarking detectors can be evaded using recursive paraphrases with only a slight degradation in text quality. ## B.2 Llama-2 Chat Template Below, we provide the chat template we use to employ LLaMA-2-7B-Chat as a paraphraser. System Prompt: You are a paraphraser. You are given an input passage 'INPUT'. You should paraphrase 'INPUT' to print 'OUTPUT'. 'OUTPUT' shoud be diverse and different as much as possible from 'INPUT' and should not copy any part verbatim from 'INPUT'. 
'OUTPUT' should preserve the meaning and content of 'INPUT' while maintaining text quality and grammar. 'OUTPUT' should not be much longer than 'INPUT'. You should print 'OUTPUT' and nothing else so that it's easy for me to parse.

User Prompt: INPUT: [Add input passage here]

Below, we provide the chat template we use to employ LLaMA-2-13B-Chat as a question-answering model.

System Prompt: You are given a context 'C: [Add context passage here]' and a question 'Q: [Add question here]'. Let 'A' be the answer to question 'Q' solely based on context 'C'. You are given the true answer 'A1: [Add ground truth answer here]' for question 'Q'.

User Prompt: INPUT: Do answers 'A1' and 'A' match? You SHOULD only be printing either 'YES' or 'NO'.

Figure 11: ROC plots for soft watermarking (Kirchenbauer et al., 2023a) with our recursive paraphrasing attacks. Panels: (a) XSum with OPT-13B paraphrased with DIPPER; (b) XSum with OPT-13B paraphrased with LLaMA-2; (c) XSum with OPT-1.3B paraphrased with DIPPER; (d) XSum with GPT-2-Medium paraphrased with DIPPER; (e) PubMedQA with OPT-1.3B paraphrased with DIPPER; (f) Kafkai with OPT-1.3B paraphrased with DIPPER. AUROC, TPR@1%FPR, and perplexity scores measured using OPT-13B are given in the legend. Detection performance on the XSum dataset using three different LLMs - OPT-13B, OPT-1.3B, and GPT-2-Medium - is evaluated in (a), (c), and (d), respectively. (b) shows the performance of the detector with recursive paraphrases from LLaMA-2-7B-Chat. (e) and (f), respectively, show the performance of the detector on two datasets - PubMedQA and Kafkai - with distribution shifts using OPT-1.3B. In all the settings, we observe that the detection performance of the watermarking-based detector reduces drastically with only a slight degradation in perplexity measures.

Figure 12: ROC curves for the zero-shot and trained detectors on different datasets: (Left) before attack, (Middle) after paraphrasing attack, (Right) paraphrasing attack with multiple queries to the detector.

Figure 13: ROC plots for the retrieval-based detector (Krishna et al., 2023) with our recursive paraphrasing attacks. Panels: (a) XSum with OPT-1.3B; (b) XSum with GPT-2-Medium; (c) PubMedQA with OPT-1.3B; (d) Kafkai with OPT-1.3B. AUROC, TPR@1%FPR, and perplexity scores measured using OPT-13B are given in the legend. Detection performance on the XSum dataset using two different LLMs - OPT-1.3B and GPT-2-Medium - is evaluated in (a) and (b), respectively. (c) and (d), respectively, show the performance of the detector on two datasets - PubMedQA and Kafkai - with distribution shifts using OPT-1.3B. In all the settings, we observe that the detection performance of the retrieval-based detector reduces drastically with only a slight degradation in perplexity scores.

Figure 14: MTurk user interface for human evaluation of paraphrases for content preservation. The task page shows detailed instructions, a source text with the highlighted watermarked continuation, the paraphrase of the highlighted text, the five rating options (from "5 - Approximately equivalent" to "1 - Not topically related"), and a free-text box for a short justification.
| ppi | Average rating | Sum of 5 & 4 (%) | 5 - Approx. equivalent (%) | 4 - Nearly equivalent (%) | 3 - Somewhat equivalent (%) | 2 - Topically related (%) | 1 - Topically unrelated (%) |
|---|---|---|---|---|---|---|---|
| i=1 | 4.0 ± 0.8 | 70.2 | 29.8 | 40.4 | 29.8 | 0.0 | 0.0 |
| i=2 | 4.1 ± 0.8 | 77.2 | 33.3 | 43.9 | 19.3 | 3.5 | 0.0 |
| i=3 | 3.9 ± 0.9 | 63.2 | 33.3 | 29.8 | 33.3 | 3.5 | 0.0 |
| i=4 | 4.2 ± 0.9 | 80.0 | 49.1 | 30.9 | 14.5 | 5.5 | 0.0 |
| i=5 | 3.7 ± 1.1 | 61.4 | 29.8 | 31.6 | 21.1 | 17.5 | 0.0 |
| All ppi | 4.0 ± 0.9 | 70.4 | 35.1 | 35.3 | 23.6 | 6.0 | 0.0 |

Table 6: MTurk human evaluation of recursive paraphrases with DIPPER for content preservation. ppi represents the i-th round of recursive paraphrasing.

| ppi | Average rating | Sum of 5 & 4 (%) | 5 - Approx. equivalent (%) | 4 - Nearly equivalent (%) | 3 - Somewhat equivalent (%) | 2 - Topically related (%) | 1 - Topically unrelated (%) |
|---|---|---|---|---|---|---|---|
| i=1 | 4.37 ± 0.63 | 91.67 | 45.0 | 46.67 | 8.3 | 0.0 | 0.0 |
| i=2 | 4.18 ± 0.67 | 85.0 | 33.33 | 51.67 | 15.0 | 0.0 | 0.0 |
| i=3 | 3.93 ± 0.71 | 80.0 | 21.67 | 58.33 | 16.67 | 3.3 | 0.0 |
| i=4 | 3.85 ± 0.78 | 78.33 | 21.67 | 56.67 | 16.67 | 5.0 | 0.0 |
| i=5 | 3.7 ± 1.1 | 80.0 | 18.33 | 61.67 | 11.67 | 8.33 | 0.0 |
| All ppi | 4.05 ± 0.2 | 83.0 | 28.0 | 55.0 | 13.67 | 3.33 | 0.0 |

Table 7: MTurk human evaluation of recursive paraphrases with LLaMA-2-7B-Chat for content preservation. ppi represents the i-th round of recursive paraphrasing.

| ppi | Average rating | Sum of 5 & 4 (%) | 5 - Excellent (%) | 4 - Good (%) | 3 - Fair (%) | 2 - Adequate (%) | 1 - Poor (%) |
|---|---|---|---|---|---|---|---|
| i=1 | 4.28 ± 0.67 | 87.72 | 40.35 | 47.37 | 12.28 | 0.00 | 0.00 |
| i=2 | 4.12 ± 0.50 | 92.98 | 19.30 | 73.68 | 7.02 | 0.00 | 0.00 |
| i=3 | 4.12 ± 0.53 | 91.23 | 21.05 | 70.18 | 8.77 | 0.00 | 0.00 |
| i=4 | 4.11 ± 0.64 | 84.21 | 26.32 | 57.89 | 15.79 | 0.00 | 0.00 |
| i=5 | 4.07 ± 0.53 | 89.47 | 17.54 | 71.93 | 10.53 | 0.00 | 0.00 |
| All ppi | 4.14 ± 0.58 | 89.12 | 24.91 | 64.21 | 10.88 | 0.0 | 0.0 |

Table 8: MTurk human evaluation of recursive paraphrases with DIPPER for text quality/grammar. ppi represents the i-th round of recursive paraphrasing.

| ppi | Average rating | Sum of 5 & 4 (%) | 5 - Excellent (%) | 4 - Good (%) | 3 - Fair (%) | 2 - Adequate (%) | 1 - Poor (%) |
|---|---|---|---|---|---|---|---|
| i=1 | 4.62 ± 0.55 | 96.67 | 65.0 | 31.67 | 3.33 | 0.00 | 0.00 |
| i=2 | 4.28 ± 0.73 | 83.33 | 45.0 | 38.33 | 16.67 | 0.00 | 0.00 |
| i=3 | 4.26 ± 0.65 | 88.33 | 43.33 | 45.0 | 11.67 | 0.00 | 0.00 |
| i=4 | 4.22 ± 0.64 | 88.33 | 38.33 | 50.0 | 11.67 | 0.00 | 0.00 |
| i=5 | 4.17 ± 0.74 | 83.33 | 40.0 | 43.33 | 15.0 | 1.67 | 0.00 |
| All ppi | 4.32 ± 0.35 | 88.0 | 46.33 | 41.67 | 11.67 | 0.33 | 0.0 |

Table 9: MTurk human evaluation of recursive paraphrases with LLaMA-2-7B-Chat for text quality/grammar. ppi represents the i-th round of recursive paraphrasing.
## B.3 Example Paraphrases

**Input:** The draw was conducted by former Premier League referee Mark Clattenburg, who had been involved in the draw for the first match of the season. He is employed by Premier League broadcast partner Channel 5 as a television match official. The draw for the first round was conducted on live television at 10.30pm Thai time, with the match to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament. The fourth round was drawn on live television at 10pm Thai time, with matches to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament. The draw for the fifth round was conducted on live television at 10.25pm Thai time, with matches to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament. The sixth round was drawn on live television at 10.15pm Thai time, with matches to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament. The seventh round was drawn on live television at 10.15pm Thai time, with matches to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament. The draw for the eighth round was conducted on live television at 10.15pm Thai time, with matches to be shown on Thailand's Channel 5 as part of the BBC's coverage of the tournament.
The draw for the ninth round was conducted on live television at 10.15pm Thai time, with matches to be

**Paraphrase:** Mark Clattenburg, a former Premier League referee, who officiated at the opening game of the season, was in charge of the draw. He is now employed by the FA's broadcast partner, Channel 5, as a television match official. The draw for the first round was made on live TV at 10.30pm local time, with Channel 5 in Thailand broadcasting the event as part of the BBC's coverage of the tournament. The fourth round was drawn live on Thai TV at 10pm local time, with matches to be screened on Channel 5 in Thailand as part of the BBC's coverage of the competition. The draw for the fifth round took place live on Thai TV at 10.25pm local time, with Channel 5 in Thailand to broadcast the ties as part of the BBC's coverage. The sixth round was drawn live on Thai TV at 10.15pm. Channel 5 in Thailand to show matches as part of BBC's coverage. The draw for the eighth round was made live on Thai TV at 10.15pm. Channel 5 in Thailand to show matches as part of BBC's coverage of the competition.

**Input:** "We had to cancel his operation and put off on him, he was supposed to have it on Christmas Eve." He said Josh and his family had been in touch with some people from the US and Australia who are in Belfast right now. He is one of the first people from the US to come to Belfast to meet and speak with Josh, and the rest of the family will visit from the US in the next few days. "Josh has become a worldwide man's man, and it has been incredible to see how everybody has taken to him. He's an ambassador for Irish people and for Ireland, and the people are really behind him. He's a great ambassador for Ireland and for his country and it has been really good." Josh was taken to Queen's University Hospital for surgery and they discovered and cut down his appendix, which had been causing him pain for more than a year. The surgery took place on January 11. He is still on painkillers, and the family are encouraging people to keep praying for his recovery. The hashtag #prayforjosh and the post and the message he messages are now being shared widely through social media by people from the US and Australia. The most recent tweets from his father's account are about his recovery in Belfast. He is also being accompanied by a "pray warriors" team and the hashtag #prayforjosh is now being used by other campaigners to urge people to send support to Josh. The church is also asking people to help

**Paraphrase:** "His operation was cancelled and put back - it was due to be on Christmas Eve." He said that the family and friends of Josh have been in touch with people from the United States and Australia who are currently in Belfast. He himself was one of the first to arrive in the city to meet and talk to the teenager, and more of the family will arrive in a few days. "Josh has become a 'global' man and it's been amazing to see how people have taken to him. He's a real Irishman and an ambassador for Ireland and it's been really good for him." He said that on January 11th, when he was taken to Queen's University Hospital in Belfast for the operation, a large primary cancer and five smaller growths were removed. He is still recovering and the family ask people to keep praying for his recovery. The post with the hashtag #prayforjosh and the messages it contains have now been shared all over social media by people in the US and Australia.
The last few posts on his father's Twitter account are about his recovery in Belfast. He has also been surrounded by a team of prayer warriors and the campaign has now widened, with people using the tag #prayforjosh to urge others to show their support for him. The church is also calling on people to help in other ways.

**Input:** I've enjoyed my time with the team." Wales, who won 11 medals in total in Brazil, failed to make the podium at three consecutive Games from 1996 to 2008 before Doull stepped into the team and helped to end the long wait for a Welsh team pursuit gold. The Welsh team were in fine form on Saturday as they captured their 11th medal, equalling the record of 14 won in 1972. The Welsh team's run to gold this year was well-documented. Geraint Thomas, the 2008 and 2012 Olympic champion, set an Olympic record of 31 minutes 59.4 seconds to help Great Britain to gold. Born in Cardiff but living in Aberystwyth, Anthony Iwan Thomas was selected as one of the two Welsh track riders for Rio and earned a silver medal in the men's road race, which he won in August. "It's a dream come true to come back and win Olympic gold here," said Thomas. "To do it in this stadium, it's just surreal. I can't explain it. Let me get home and I'll be really grateful to everyone." He missed the 2012 Games through injury but returned to gold-winning form in 2016 when he won his third consecutive world championship in the men's road race, finishing second in Rio. "It's a real honour to be on the podium today," added the 35-year-old. "There's a lot of times when you think 'this is it' and you think

**Paraphrase:** I've loved every minute with the team. " Wales, who won 11 medals in Brazil, had not finished on the podium at three consecutive Olympic games from 1996 to 2008 before Doull helped end the country's long wait for a team pursuit gold medal. The nation's 11th medal on the last day equalled the record of 14 set in 1972. So many stories of success for the Welsh riders in Rio have already been written. Born in Cardiff but now based in the Ceredigion resort of Aberystwyth, the son of Geraint Thomas won silver in the men's road race. Thomas said: "It's just a dream come true to come back and win gold here and it's even more special to do it in this stadium. " If I get home I'll thank everybody. " The 38-year-old had to miss the 2012 Olympic games with injury but was back to winning ways in 2016 when he retained his world title and finished second in Rio. "There's so many times when you think, 'This is the day' and it never comes."

Table 10: Examples of paraphrased passages from the XSum dataset. The paraphrasing is performed using DIPPER (Krishna et al., 2023).
**Input:** The force has said it has no intention of changing policy or use of force, and will continue to use the weapons as part of its role policing the public. A spokesman for Scotland Yard said: "Officers in Scotland will continue to have a range of weapons on-hand, including tasers, pepper spray and a handgun and they will be used in accordance with Taser International guidelines. "We also have well established policies governing the use of these weapons and the use of force, and the use of force is judged to be proportionate when appropriate." The move was welcomed by a leading rights campaign group. John O'Donnell, the head of the High Court, said Tasers should have been compulsory in Scotland in 2015 following the death of a journalist in custody in Newcastle and the deaths of two more while in police custody.
He said: "The use of these weapons by police in Scotland has been widely condemned and the High Court ruled they should be used to protect the public." A spokesman for the Department for Transport said: "No police force should decide for itself what constitutes necessary force and the use of these weapons by police in Scotland has been widely condemned. "The use of these weapons by officers must be subject to strict rules which safeguard the public and the officers' rights." The London police said they "do not comment on intelligence matters".? | | | Paraphrase | The Met said it had no plans to change its policy on the use of force and said the Tasers would be used "when it is appropriate and proportionate". A spokesman said: "Officers in Scotland will continue to carry a range of weapons including Tasers, pepper spray and handguns. "We have well-established policies on the use of these weapons and the use of force in general and we will continue to ensure these are followed." It was welcomed by the Scottish Human Rights Commission. The high court judge, Lord O'Donnell, said in February that the use of Tasers by police in Scotland should have been compulsory in the wake of the death of a journalist in police custody in Newcastle, and the subsequent deaths of two more people in police custody. He said: "The use of these weapons by police in Scotland has been widely condemned and the high court has ruled they must be used to protect the public." The Department for Transport said police should not be able to decide for themselves when it was appropriate to use force. "The use of these weapons by officers must be subject to strict rules which protect the public and officers' rights." The Met said it did not comment on intelligence matters. Would you like to book an appointment with the British Transport Police? "He gets on with me. I have no doubt about his motives." The move is also an enormous gamble but Koukash, a self-made millionaire who made his fortune as a real estate mogul in Dubai, is determined to make sure the gamble pays off. He is willing, he says, to let the man who guided Salford to seven Grand Final appearances head a great project. "He has an incredible track record of creating people and businesses," said Noble. "He has done it in Qatar, in Dubai, in America, here." He is also one of the game's most ruthless businessmen. Koukash, who moved to the UK from Sudan with his parents as a boy and has spent the past 30 years building his empire of clubs and businesses, has seen the game of rugby league decline dramatically over the past 10 years. He claims that the game has never been more popular than it is today. He is also a fervent supporter of the game and this makes his interest in rugby league even more compelling. Salford have been in dire straits and Koukash, who has ploughed much of his own money in the club, has promised to help them become one of the great clubs of the game. The club has all the right qualities and Koukash wants to make sure it happens. He has spent the past two weeks scouting for players to recruit and has already seen the arrival of two promising youngsters. The Reds need players to make them competitive | | | Input | | | | Paraphrase | He added: "I know him. I have no doubt about his intentions." It's a huge gamble for the self-made millionaire who has made his money in property in Dubai. He is happy to entrust the man who took Salford to seven Challenge Cup finals with his great plan. 
" He's got an incredible record of turning around people and businesses," Noble said. "He's done it in Qatar, in Dubai, in America, here." He's also one of the most ruthless businessmen in the game. Having arrived in Britain as a boy from Sudan with his parents, he's spent the past 30 years building an empire of clubs and businesses. He's also seen the game decline in popularity over the past decade, but insists it's now more popular than ever. He is a huge supporter of the game and that's why he's interested in the sport. Salford have been in crisis and he's promised to help them become one of the great clubs. The club has all the attributes and Koukash is determined to make it happen. He has spent the past fortnight looking at players and has already recruited two. He has to make the Reds a more competitive side and has already brought in a couple of players who have impressed. She said he would not stop attacking and asked for help. She said she wanted "peace" but "not death". Henderson-McCarroll, of St Nicholas Drive, Newry, admitted manslaughter while in charge of a dangerous drug. She said her actions were a "blip in my mind" as a result of a "bad decision" to take drugs. Justice Treacy said he would not impose a custodial sentence on Henderson-McCarroll, but instead sentenced her to three years' imprisonment. The judge said he would not impose the maximum sentence for manslaughter given the circumstances, but felt he would not impose the minimum of two and a half years. He told Henderson-McCarroll: "He (Mr Girvan) would not be in his right mind if he would not have let his guard down. If there was one thing the jury should have heard - it was that your actions were a blip in my mind. You didn't intend to kill him. You were acting in self-defence. You poked him and your actions were a blip and a bit of a lapse in judgment." The judge said the maximum sentence for manslaughter given Henderson's previous convictions would have amounted to between five and seven years. The judge said it was "not an uncommon crime" to kill someone in self-defence. He said sentencing Henderson was an "ugly case of drug-induced madness." He added: "He (Mr Girvan) must have suffered terribly." | | | Input | | | | Paraphrase | She said he had not stopped attacking her and she called for help. She said she wanted "peace" but not death. HendersonMcCarroll, of St Nicholas Drive in Newry, admitted manslaughter while under the influence of a sedative. She said her actions were the result of a "mistake" after taking drugs. Mr Justice Treacy said he would not grant a suspended sentence, but instead would sentence her to three years in prison. He said he would not impose the maximum sentence for manslaughter, in the circumstances, but did not feel he should impose the minimum term of two and a half years. He said to the defendant: "You could not in your right mind have left your guard down, you did not intend to kill him. If there was one thing the jury ought to have heard, it was that your actions were a momentary lapse of reason. You acted in self defence, you poked him with a knife, your actions were momentary and a lapse of reason." The judge said that given the defendant's previous record the maximum sentence for manslaughter, with a minimum term of a year, would have been five to seven years. He said it was not an uncommon crime for someone to kill in self defence. " But in this case, it was an ugly case of drug induced madness. " He added: "Mr Girvan must have suffered horribly." 
Table 11: Examples of paraphrased passages from the XSum dataset. The paraphrasing is performed using DIPPER (Krishna et al., 2023).

Input: In the year 2014-2015, Prison Link Cymru made 1,099 referrals, and said some offenders waited up to a year before finding suitable accommodation. Staff from the charitable organization say that the investment in housing would be cheaper than having to re-lock homeless offenders. The Welsh government said that more people than ever are getting help to deal with their housing problems. Changes to the Welsh Housing Act in 2015 removed priority rights for ex-offenders. Prison Link Cymru, which helps people to find accommodation after they have been released, said that women generally had a good time, as they had to deal with children and other problems.

pp1: In the year 2014-2015, Prison Link Cymru made 1,099 referrals, and said some offenders waited up to a year before finding suitable accommodation. Staff from the charitable organization say that the investment in housing would be cheaper than having to re-lock homeless offenders. The Welsh government said that more people than ever are getting help to deal with their housing problems. Changes to the Welsh Housing Act in 2015 removed priority rights for ex-offenders. Prison Link Cymru, which helps people to find accommodation after they have been released, said that women generally had a good time, as they had to deal with children and other problems.

pp2: In 2015-16, Prison Link Cymru had 1,099 referrals and said some offenders were living rough for up to a year before finding suitable accommodation. Workers at the charity claim investment in housing would be cheaper than locking up homeless repeat offenders. The Welsh Government said more people than ever before were getting help to address housing problems. Prison Link Cymru, which helps people find accommodation after they have left prison, said things were generally good for women because issues such as children or domestic violence were often the main reasons they returned to prison.

pp3: In fact, it was one of the main reasons why they came back to prison. In the year 2015-16, Prison Link Cymru made 1,099 referrals and said that offenders had lived for up to a year before finding suitable accommodation. The workers at the charitable organization say that the investment in housing would be cheaper than re-locking homeless offenders. The government of Wales says that more people than ever are being helped to deal with their housing problems. Prison Link Cymru, which helps people to find accommodation after being released, says that women are generally in a good position, as children and other problems are often the main reason why they are returned to prison.
pp4: In the year to the end of March, Prison Link Cymru had 1,099 referrals and said offenders had been living rough for up to a year before finding suitable accommodation. Workers at the charity say investment in housing would be cheaper than re-imprisoning homeless repeat offenders. The Welsh Government says more people than ever before are getting help to address their housing problems. Prison Link Cymru, which helps people find accommodation after they have been released from prison, said generally things were good for women because issues such as children and other problems were often the main reasons they returned to prison.

pp5: The government of Wales says that more people than ever before are being helped to deal with their housing problems. In the year 2015-16, Prison Link Cymru referred 1,099 people and said that homeless people had lived up to a year before finding suitable accommodation. The workers at the charitable organization say that the investment in housing would be cheaper than imprisoning homeless offenders again. Prison Link Cymru, which helps people find accommodation after being released, says that women are generally well, because children and other problems are often the main reason why they return to prison.

Table 12: Example of a recursively paraphrased passage from the XSum dataset. The paraphrasing is performed using DIPPER (Krishna et al., 2023). ppi refers to the output after i rounds of recursive paraphrasing.

Input: The final round of fixtures will be
played in Belfast on Tuesday. Elsewhere, Ireland's U19 women face Wales on Saturday in their Euro U19 women qualifier. The Irish women are already assured of a third-placed finish and a place at next year's finals in Northern Ireland after wins over Bulgaria and Greece. If they win against Wales they will finish second and qualify for a play-off for the finals. The Irish U19 women face Wales in Aberystwyth on Saturday. The match kicks off at 1.15pm. The Irish U19 women are in excellent form having won their last four matches. The winning margins have been 4-0, 4-0, 3-0, 3-0 and 3-0. The Irish U19 women sit fourth in the qualifying group and could qualify for the finals if they beat Wales on Saturday. The Irish women are in impressive form having won their last four games and qualified for next year's finals. Saturday's game against Wales is one of the most important of the campaign as a win will ensure they finish second and qualify for a play-off for the finals. The Irish will come up against whoever comes top of the group - Denmark, Portugal, Germany and Sweden - with the second place finisher going into a play-off for a final spot. The Irish U19 women took down Bulgaria 3-0, won 3-0 away to Greece and hammered home five against a young Spanish team.

pp1: On Tuesday, the final round of matches will take place in Belfast. Meanwhile, Ireland's U19 women's team faces Wales in their Euro U19 women's qualifier on Saturday in Aberystwyth. The Irish are already assured of a third-placed finish and a spot in next year's finals after their wins over Bulgaria and Greece. A victory against Wales would see them finish second and qualify for a playoff for the finals. The Irish U19 women have been in impressive form, winning their last four matches by margins of 4-0, 4-0, 3-0, 3-0, and 3-0. They sit fourth in the qualifying group and could qualify for the finals if they beat Wales on Saturday. The Irish team will face the top finisher from the group - Denmark, Portugal, Germany, or Sweden - in a playoff for a final spot. The Irish U19 women have had a strong campaign so far, defeating Bulgaria 3-0, winning 3-0 away to Greece, and scoring five goals against a young Spanish team.

pp2: On the eve of the final round of matches in Belfast, the Irish U19 women's team prepares to face Wales in their Euro U19 women's qualifier on Saturday in Aberystwyth. While Ireland has already secured a third-placed finish and a spot in next year's finals with their wins over Bulgaria and Greece, a victory against Wales would see them finish second and qualify for a playoff for the finals. The Irish U19 women have been in impressive form, winning their last four matches by margins of 4-0, 4-0, 3-0, 3-0, and 3-0. They currently sit fourth in the qualifying group and could potentially qualify for the finals if they beat Wales on Saturday. The team will face the top finisher from the group - Denmark, Portugal, Germany, or Sweden - in a playoff for a final spot. Ireland has had a strong campaign so far, defeating Bulgaria 3-0, winning 3-0 away to Greece, and scoring five goals against a young Spanish team.

pp3: On the eve of the final round of matches in Belfast, the Northern Irish U19 women's team prepares to face Wales in their Euro U19 women's qualifier on Saturday in Aberystwyth.
While Northern Ireland has already secured a third-placed finish and a spot in next year's finals with their wins over Spain and Scotland, a victory against Wales would see them finish second and qualify for a playoff for the finals. The Northern Irish U19 women have been in impressive form, winning their last four matches by margins of 4-0, 4-0, 3-0, and 3-0, and currently sit fourth in the qualifying group. They could potentially qualify for the finals if they beat Wales on Saturday, and face the top finisher from the group - Denmark, Portugal, Germany, or Sweden - in a playoff for a final spot. Northern Ireland has had a strong campaign so far, defeating Spain 2-1, winning 3-0 away to Scotland, and scoring five goals against a young Irish team.

pp4: On the eve of the final round of matches in Northern Ireland, the local U19 women's team prepares to face Wales in their Euro U19 women's qualifier on Saturday in Cardiff. While Northern Ireland has already secured a third-placed finish and a spot in next year's finals with their recent wins, a victory against Wales would see them finish second and qualify for a playoff for the finals. The Northern Irish U19 women have been in impressive form, winning their last four matches by margins of 4-0, 4-0, 3-0, and 3-0, and currently sit fifth in the qualifying group. They could potentially qualify for the finals if they beat Wales on Saturday, and face the top finisher from the group - Denmark, Portugal, Germany, or Sweden - in a playoff for a final spot. Northern Ireland has had a strong campaign so far, defeating Spain 2-1, winning 3-0 away to Scotland, and scoring five goals against a young Irish team.

pp5: On the eve of the crucial final match in Cardiff, the Northern Ireland U19 women's team prepares to face Wales in their Euro U19 women's qualifier on Saturday. With their recent string of victories, the Northern Irish have already secured a third-placed finish and a spot in next year's finals, but a win against Wales would see them finish second and qualify for a playoff for the finals. The team has been in impressive form, winning their last four matches by margins of 4-0, 4-0, 3-0, and 3-0, and currently sit fifth in the qualifying group. They could potentially qualify for the finals if they beat Wales on Saturday, and face the top finisher from the group - Denmark, Portugal, Germany, or Sweden - in a playoff for a final spot. Despite their strong campaign so far, the Northern Irish will face a tough challenge against Wales, who have also had a impressive run of form in the qualifiers.

Table 13: Example of a recursively paraphrased passage from the XSum dataset. The paraphrasing is performed using LLaMA-2-7B-Chat (Touvron et al., 2023). ppi refers to the output after i rounds of recursive paraphrasing.
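The pp_i rows in Tables 12 and 13 are produced by feeding each round's output back to the paraphraser as the next round's input. A minimal sketch of this loop follows; the `paraphrase` function is a hypothetical stand-in for an actual paraphrase model such as DIPPER, and its interface is illustrative only, not the authors' code.

```python
def paraphrase(text: str) -> str:
    # Placeholder: plug in a real paraphraser (e.g., DIPPER) here.
    # Returned unchanged so the sketch runs without any model download.
    return text

def recursive_paraphrase(text: str, rounds: int = 5) -> list[str]:
    """Return [pp1, ..., pp_rounds], where pp_i paraphrases pp_{i-1}."""
    outputs = []
    current = text
    for _ in range(rounds):
        current = paraphrase(current)  # hypothetical model call
        outputs.append(current)
    return outputs
```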
## C Proofs And Corollaries

## C.1 Proof Of Theorem 1

Theorem 1. The area under the ROC of any detector D *is bounded as*

$$\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}.$$

Proof. The ROC is a plot between the true positive rate (TPR) and the false positive rate (FPR), which are defined as follows:

$$\mathsf{TPR}_{\gamma}=\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma] \quad \text{and} \quad \mathsf{FPR}_{\gamma}=\mathbb{P}_{s\sim\mathcal{H}}[D(s)\geq\gamma],$$

where γ is some classifier parameter. We can bound the difference between the TPR_γ and the FPR_γ by the total variation between M and H:

$$|\mathsf{TPR}_{\gamma}-\mathsf{FPR}_{\gamma}|=|\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma]-\mathbb{P}_{s\sim\mathcal{H}}[D(s)\geq\gamma]|\leq\mathsf{TV}(\mathcal{M},\mathcal{H})\tag{1}$$

$$\mathsf{TPR}_{\gamma}\leq\mathsf{FPR}_{\gamma}+\mathsf{TV}(\mathcal{M},\mathcal{H}).\tag{2}$$

Since the TPR_γ is also bounded by 1, we have:

$$\mathsf{TPR}_{\gamma}\leq\min(\mathsf{FPR}_{\gamma}+\mathsf{TV}(\mathcal{M},\mathcal{H}),\,1).\tag{3}$$

Denoting FPR_γ, TPR_γ, and TV(M, H) with x, y, and tv for brevity, we bound the AUROC as follows:

$$\begin{aligned}
\mathsf{AUROC}(D)=\int_{0}^{1}y\;dx &\leq\int_{0}^{1}\min(x+tv,1)\,dx\\
&=\int_{0}^{1-tv}(x+tv)\,dx+\int_{1-tv}^{1}dx\\
&=\left[\frac{x^{2}}{2}+tv\,x\right]_{0}^{1-tv}+\left[x\right]_{1-tv}^{1}\\
&=\frac{(1-tv)^{2}}{2}+tv(1-tv)+tv\\
&=\frac{1}{2}+\frac{tv^{2}}{2}-tv+tv-tv^{2}+tv\\
&=\frac{1}{2}+tv-\frac{tv^{2}}{2}. \qquad\square
\end{aligned}$$
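As a quick sanity check on the last step, the integral of the envelope min(x + tv, 1) over [0, 1] can be compared numerically against the closed form ½ + tv − tv²/2. A minimal sketch (ours, not from the paper):

```python
import numpy as np

# Numerically check that min(x + tv, 1) integrates over [0, 1]
# to 1/2 + tv - tv^2/2, matching the closed form derived above.
for tv in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = np.linspace(0.0, 1.0, 200_001)
    y = np.minimum(x + tv, 1.0)
    numeric = ((y[:-1] + y[1:]) / 2 * np.diff(x)).sum()  # trapezoid rule
    closed_form = 0.5 + tv - tv**2 / 2
    assert abs(numeric - closed_form) < 1e-8, (tv, numeric, closed_form)
```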
## C.2 General Trade-Offs For Detection

Paraphrasing to Evade Detection. Although our analysis considers general distributions, it can also be applied to specific scenarios, such as particular writing styles or sentence paraphrasing, by defining M and H appropriately. For example, M can be the outputs from an LLM trained to mimic a particular set of people, or H can be the text distribution of a specific person. Similarly, Corollary 1 shows that if a paraphraser's goal is to lower the TV between paraphrased AI text and human text, then detection gets harder. Set M = R_M(s) and H = R_H(s) to be the distributions of sequences with similar meanings to s produced by the paraphraser and humans, respectively.

Corollary 1. The area under the ROC of the detector D *is bounded as*

$$\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s))-\frac{\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s))^{2}}{2}.$$

Another way to understand the limitations of AI-generated text detectors is directly through the characterization of the trade-offs between true positive rates and false positive rates. Adapting inequality 2, we have the following corollaries:

Corollary 2. *For any watermarking scheme* W,

$$\mathbb{P}_{s_{w}\sim\mathcal{R}_{\mathcal{M}}(s)}[s_{w}\text{ is watermarked using }W]\leq\mathbb{P}_{s_{w}\sim\mathcal{R}_{\mathcal{H}}(s)}[s_{w}\text{ is watermarked using }W]+\mathsf{TV}(\mathcal{R}_{\mathcal{M}}(s),\mathcal{R}_{\mathcal{H}}(s)),$$

where R_M(s) and R_H(s) are the distributions of rephrased sequences for s produced by the paraphrasing model and humans, respectively.

Humans may have different writing styles. Corollary 2 indicates that if a rephrasing model resembles certain human text distribution H (i.e. TV(R_M(s), R_H(s)) is small), then either certain people's writing will be detected falsely as watermarked (i.e. P_{s_w∼R_H(s)}[s_w is watermarked using W] is high) or the paraphrasing model can remove the watermark (i.e. P_{s_w∼R_M(s)}[s_w is watermarked using W] is low).

Corollary 3. *For any AI-text detector* D,

$$\mathbb{P}_{s\sim\mathcal{M}}[s\text{ is detected as AI-text by }D]\leq\mathbb{P}_{s\sim\mathcal{H}}[s\text{ is detected as AI-text by }D]+\mathsf{TV}(\mathcal{M},\mathcal{H}),$$

where M and H *denote text distributions by the model and by humans, respectively.*

Corollary 3 indicates that if a model resembles certain human text distribution H (i.e. TV(M, H) is small), then either certain people's writing will be detected falsely as AI-generated (i.e. P_{s∼H}[s is detected as AI-text by D] is high) or the AI-generated text will not be detected reliably (i.e. P_{s∼M}[s is detected as AI-text by D] is low).

A recent work (Chakraborty et al., 2023) shows a trade-off on the detection problem with respect to the availability of the number of data samples for detection. They show a TV upper bound for the detector's AUROC using an information theoretic approach. However, the underlying assumption of their result is that several *independent* samples are available to the detector from either human or text distribution, which might not be a practical assumption since sentences in a document are often correlated with each other. Also, a large number of data samples need not be available for pragmatic scenarios. For example, it may not be practical for a text detector to ask a student to write multiple essays for an assignment or to assume that a Twitter bot would publish longer tweets that are completely written by the AI without any human intervention.

## C.3 Tightness Analysis For Theorem 1

In this section, we show that the bound in Theorem 1 is tight. For a given distribution of human-generated text sequences H, we construct an AI-text distribution M and a detector D such that the bound holds with equality. Define sublevel sets of the probability density function of the distribution of human-generated text pdf_H over the set of all sequences Ω as follows:

$$\Omega_{\mathcal{H}}(c)=\{s\in\Omega\mid\operatorname{pdf}_{\mathcal{H}}(s)\leq c\},$$

where c ∈ R. Assume that Ω_H(0) is not empty. Now, consider a distribution M, with density function pdf_M, which has the following properties:

1. The probability of a sequence drawn from M falling in Ω_H(0) is TV(M, H), i.e., P_{s∼M}[s ∈ Ω_H(0)] = TV(M, H).
2. pdf_M(s) = pdf_H(s) for all s ∈ Ω_H(τ) − Ω_H(0), where τ > 0 is such that P_{s∼H}[s ∈ Ω_H(τ)] = 1 − TV(M, H).
3. pdf_M(s) = 0 for all s ∈ Ω − Ω_H(τ).

Define a hypothetical detector D that maps each sequence in Ω to the negative of the probability density function of H, i.e., D(s) = −pdf_H(s). Using the definitions of TPR_γ and FPR_γ, we have:

$$\mathsf{TPR}_{\gamma}=\mathbb{P}_{s\sim\mathcal{M}}[D(s)\geq\gamma]=\mathbb{P}_{s\sim\mathcal{M}}[-\operatorname{pdf}_{\mathcal{H}}(s)\geq\gamma]=\mathbb{P}_{s\sim\mathcal{M}}[\operatorname{pdf}_{\mathcal{H}}(s)\leq-\gamma]=\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)].$$

Similarly,

$$\mathsf{FPR}_{\gamma}=\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(-\gamma)].$$

For γ ∈ [−τ, 0],

$$\begin{aligned}
\mathsf{TPR}_{\gamma}&=\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)]\\
&=\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(0)]+\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{\mathcal{H}}(0)]\\
&=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{M}}[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{\mathcal{H}}(0)] && \text{(using property 1)}\\
&=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(-\gamma)-\Omega_{\mathcal{H}}(0)] && \text{(using property 2)}\\
&=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(-\gamma)]-\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(0)] && (\Omega_{\mathcal{H}}(0)\subseteq\Omega_{\mathcal{H}}(-\gamma))\\
&=\mathsf{TV}(\mathcal{M},\mathcal{H})+\mathsf{FPR}_{\gamma}. && (\mathbb{P}_{s\sim\mathcal{H}}[s\in\Omega_{\mathcal{H}}(0)]=0)
\end{aligned}$$

For γ ∈ [−∞, −τ], TPR_γ = 1, by property 3. Also, as γ goes from 0 to −∞, FPR_γ goes from 0 to 1. Therefore, TPR_γ = min(FPR_γ + TV(M, H), 1), which is similar to Equation 3.
Calculating the AUROC in a similar fashion as in the previous section, we get the following:

$$\mathsf{AUROC}(D)=\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}.$$
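This construction can be checked end-to-end on a small discrete example. The sketch below (ours, not from the paper) builds a toy M from a toy H following properties 1–3, uses the detector D(s) = −pdf_H(s), and verifies that the resulting AUROC attains the bound; the specific probability values are made up for illustration.

```python
import numpy as np

# Discrete toy instance of the tightness construction: Omega = {0,...,4}.
pdf_H = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # state 0 has zero H-mass
tv = 0.7                                       # target TV(M, H)
# Properties 1-3: put mass tv on Omega_H(0), copy pdf_H on the low-density
# states {1, 2} (whose H-mass is 0.3 = 1 - tv), and zero out the rest.
pdf_M = np.array([tv, 0.1, 0.2, 0.0, 0.0])
assert np.isclose(pdf_M.sum(), 1.0)
assert np.isclose(0.5 * np.abs(pdf_M - pdf_H).sum(), tv)  # TV(M, H) = tv

scores = -pdf_H  # detector D(s) = -pdf_H(s)
# Exact AUROC with the usual tie correction:
auroc = sum(
    pdf_M[m] * pdf_H[h] * ((scores[m] > scores[h]) + 0.5 * (scores[m] == scores[h]))
    for m in range(5) for h in range(5)
)
assert np.isclose(auroc, 0.5 + tv - tv**2 / 2)  # bound attained: 0.955
```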
## C.4 Pseudorandomness In LLMs

Most machine learning models, including LLMs, use pseudorandom number generators in one form or another to produce their outputs. For example, an LLM may use a pseudorandom number generator to sample the next token in the output sequence. In discussing our hardness result, Kirchenbauer et al. (2023b) in a more recent work argue that this pseudorandomness makes the AI-generated text distribution very different from the human-generated text distribution. This is because the pseudorandom AI-generated distribution is a collection of Dirac delta function distributions, and a human is exorbitantly unlikely to produce a sample corresponding to any of the delta functions. In our framework, this means that the TV between the human and pseudorandom AI-generated distributions is almost one, making the bound in Theorem 1 vacuous.

We argue that although the true TV between the human and pseudorandom AI-generated distributions is high and there exists (in theory) a detector function that can separate the distributions almost perfectly, this function may not be efficiently computable. Any polynomial-time computable detector can only achieve a negligible advantage from the use of pseudorandomness instead of true randomness. If we had knowledge of the seed used for the pseudorandom number generator, we would be able to predict the pseudorandom samples. However, an individual seeking to evade detection could simply randomize this seed, making it computationally infeasible to predict the samples.

We modify the bound in Theorem 1 to include a negligible correction term ϵ to account for the use of pseudorandomness. We prove that the performance of a polynomial-time computable detector D on a pseudorandom version of the AI-generated distribution M̂ is bounded by the total variation for the truly random distribution M (resulting from the LLM using true randomness) as follows:

$$\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}+\epsilon.$$

The term ϵ represents the gap between the probabilities assigned by M and M̂ to any polynomial-time computable {0, 1}-function f, i.e.,

$$\left|\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]-\mathbb{P}_{s\sim\widehat{\mathcal{M}}}[f(s)=1]\right|\leq\epsilon.\tag{4}$$

This term is orders of magnitude smaller than any of the terms in the bound and can be safely ignored. For example, commonly used pseudorandom generators¹ can achieve an ϵ that is bounded by a negligible function 1/b^t of the number of bits b used in the seed of the generator, for a positive integer t² (Blum et al., 1982; Blum & Micali, 1984). From a computational point of view, the TV for the pseudorandom distribution is almost the same as the truly random AI-generated distribution. Thus, our framework provides a reasonable approximation for real-world LLMs, and the hardness result holds even in the presence of pseudorandomness.

¹Cryptographic PRNGs: https://en.wikipedia.org/wiki/Pseudorandom_number_generator
²Negligible function: https://en.wikipedia.org/wiki/Negligible_function

Computational Total Variation Distance. Just as the total variation distance TV between two probability distributions is defined as the difference in probabilities assigned by the two distributions to any {0, 1}-function, we define a computational version of this distance TV_c for polynomial-time computable functions:

$$\mathsf{TV}_{c}(A,B)=\operatorname*{max}_{f\in\mathcal{P}}\big|\mathbb{P}_{s\sim A}[f(s)=1]-\mathbb{P}_{s\sim B}[f(s)=1]\big|,$$

where P represents the set of polynomial-time computable {0, 1}-functions. P could also be defined as the set of all polynomial-size circuits, which could be more appropriate for deep neural network-based detectors. The function f could be thought of as the indicator function for the detection parameter being above a certain threshold, i.e., D(s) ≥ γ, as in the proof of Theorem 1. The following lemma holds for the performance of a polynomial-time detector D:

Lemma 1. The area under the ROC of any polynomial-time computable detector D *is bounded as*

$$\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}_{c}(\widehat{\mathcal{M}},\mathcal{H})-\frac{\mathsf{TV}_{c}(\widehat{\mathcal{M}},\mathcal{H})^{2}}{2}.$$

This lemma can be proved in the same way as Theorem 1 by replacing the truly random AI-generated distribution M with its pseudorandom version M̂ and the true total variation TV with its computational variant TV_c. Next, we relate the computational total variation TV_c between H and the pseudorandom distribution M̂ with the total variation TV between H and the truly random distribution M.

Lemma 2. For human distribution H, truly random AI-generated distribution M and its pseudorandom version M̂, TV_c(M̂, H) ≤ TV(M, H) + ϵ.

Proof.

$$\begin{aligned}
\mathsf{TV}_{c}(\widehat{\mathcal{M}},\mathcal{H})&=\max_{f\in\mathcal{P}}\big|\mathbb{P}_{s\sim\mathcal{H}}[f(s)=1]-\mathbb{P}_{s\sim\widehat{\mathcal{M}}}[f(s)=1]\big| && \text{(from definition of }\mathsf{TV}_{c}\text{)}\\
&=\max_{f\in\mathcal{P}}\big|\mathbb{P}_{s\sim\mathcal{H}}[f(s)=1]-\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]+\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]-\mathbb{P}_{s\sim\widehat{\mathcal{M}}}[f(s)=1]\big| && \text{(adding and subtracting }\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]\text{)}\\
&\leq\max_{f\in\mathcal{P}}\Big(\big|\mathbb{P}_{s\sim\mathcal{H}}[f(s)=1]-\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]\big|+\big|\mathbb{P}_{s\sim\mathcal{M}}[f(s)=1]-\mathbb{P}_{s\sim\widehat{\mathcal{M}}}[f(s)=1]\big|\Big) && \text{(using }|a+b|\leq|a|+|b|\text{)}\\
&\leq\mathsf{TV}(\mathcal{M},\mathcal{H})+\epsilon. && \text{(from definition of }\mathsf{TV}\text{ and bound 4)} \qquad\square
\end{aligned}$$

We now use this to prove the modified version of our computational hardness result.

Theorem 2 (**Computational Hardness Result**). *The AUROC of any polynomial-time computable detector* D *for* H *and the pseudorandom distribution* M̂ *is bounded using the* TV *for the truly random distribution* M *as*

$$\mathsf{AUROC}(D)\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}+\epsilon.$$

Proof.

$$\begin{aligned}
\mathsf{AUROC}(D)&\leq\frac{1}{2}+\mathsf{TV}_{c}(\widehat{\mathcal{M}},\mathcal{H})-\frac{\mathsf{TV}_{c}(\widehat{\mathcal{M}},\mathcal{H})^{2}}{2} && \text{(from Lemma 1)}\\
&\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})+\epsilon-\frac{(\mathsf{TV}(\mathcal{M},\mathcal{H})+\epsilon)^{2}}{2} && \text{(from Lemma 2, since }\tfrac{1}{2}+x-\tfrac{x^{2}}{2}\text{ is increasing in }[0,1]\text{)}\\
&=\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})+\epsilon-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}+\epsilon^{2}+2\epsilon\,\mathsf{TV}(\mathcal{M},\mathcal{H})}{2}\\
&\leq\frac{1}{2}+\mathsf{TV}(\mathcal{M},\mathcal{H})-\frac{\mathsf{TV}(\mathcal{M},\mathcal{H})^{2}}{2}+\epsilon. && \text{(dropping negative terms containing }\epsilon\text{)} \qquad\square
\end{aligned}$$

## C.5 Estimating TV Distance

In §4, we show experiments supporting the assumption that more advanced LLMs lead to smaller TV distance between human and machine text distributions. We present two experimental settings - (i) using synthetic text data and (ii) using projection. In Figure 15, we show the TV distances computed with varying vocabulary sizes and sequence lengths. In all the experiments, we consistently find that the TV distances reduce as the network size increases.

![31_image_0.png](31_image_0.png)

Figure 15: TV distances between synthetic toy data distributions and LSTM model generation distributions, (a) varying vocabulary size and (b) varying sequence length. TV distances are computed for multiple settings, varying the vocabulary size and sequence length of the training dataset and varying the size of the LSTM network used for training.
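When both distributions are discrete with known probabilities, the TV distance reported in such experiments can be estimated by sampling, using TV(P, Q) = Q(S) − P(S) for S = {x : Q(x) ≥ P(x)}. A minimal sketch of this estimator (ours, with made-up toy distributions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy categorical distributions over a small vocabulary (illustrative only).
P = np.array([0.5, 0.3, 0.1, 0.1])      # e.g., "human" distribution
Q = np.array([0.25, 0.25, 0.25, 0.25])  # e.g., "model" distribution

# Exact TV for reference: half the L1 distance.
tv_exact = 0.5 * np.abs(P - Q).sum()

# Monte Carlo estimate: TV(P, Q) = E_Q[1(Q >= P)] - E_P[1(Q >= P)].
n = 100_000
xs_q = rng.choice(len(Q), size=n, p=Q)
xs_p = rng.choice(len(P), size=n, p=P)
indicator = (Q >= P).astype(float)
tv_mc = indicator[xs_q].mean() - indicator[xs_p].mean()
print(tv_exact, tv_mc)  # the two values agree closely for large n
```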
## D More Details On Spoofing

## D.1 Soft Watermark Detector

In Kirchenbauer et al. (2023a), they watermark LLM outputs by steering the model to output tokens with some specific pattern that can be easily detected with meager error rates. Soft watermarked texts are majorly composed of *green list* tokens. If an adversary can learn the green lists for the soft watermarking scheme, they can use this information to compose texts by hand that are detected as watermarked. Our experiments show that the soft watermarking scheme can be spoofed efficiently. Though the soft watermarking detector can detect the presence of a watermark very accurately, it cannot be certain whether this pattern is actually generated by a human or an LLM. An *adversarial human* can compose derogatory watermarked texts in this fashion that are detected to be watermarked, which might cause reputational damage to the developers of the watermarked LLM. Therefore, it is important to study *spoofing attacks* to avoid such scenarios.

In watermarking, the prefix word s^(t−1) determines the green list for selecting the word s^(t). The attacker's objective is to compute a proxy of the green lists for the N most commonly used words in the vocabulary. We use a small value of N = 181 for our experiments. The attacker queries the watermarked OPT-1.3B (Zhang et al., 2022) 10^6 times to observe pair-wise token occurrences in its output, and uses these to estimate a *green list score* for the N tokens. We find that inputting nonsense sentences composed of the N common words encourages the LLM to output text mostly composed of these words. This makes the querying more efficient. A token with a high green list score for a prefix s^(t) might be in its green list (see Figure 16).

![32_image_0.png](32_image_0.png)

Figure 16: Inferred *green list score* for the token "the". The plot shows the top 50 words from our set of common words that are likely to be in the green list. The word "first" occurred ∼ 25% of the time as suffix to "the".

We build a tool that helps adversarial humans create watermarked sentences by providing them with the proxy green list. In this manner, we can spoof watermarking models easily. See Table 14 for example sentences created by an adversarial human. Figure 7 shows that the performance of watermark-based detectors degrades significantly in the presence of paraphrasing and spoofing attacks, showing that they are not reliable.
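The green list scores themselves reduce to simple pair counting over the attacker's query outputs. A minimal sketch of this counting step, assuming tokenized outputs are available as lists of words (our illustration, not the authors' tool):

```python
from collections import Counter, defaultdict

def green_list_scores(outputs, common_words):
    """Estimate P(next word = w | prev word = v) over watermarked outputs.

    outputs: iterable of tokenized sequences (lists of words) sampled from
    the watermarked LLM; common_words: the N tracked common words.
    """
    tracked = set(common_words)
    pair_counts = defaultdict(Counter)  # prev word -> Counter of next words
    prefix_counts = Counter()
    for seq in outputs:
        for prev, nxt in zip(seq, seq[1:]):
            if prev in tracked and nxt in tracked:
                pair_counts[prev][nxt] += 1
                prefix_counts[prev] += 1
    return {
        prev: {w: c / prefix_counts[prev] for w, c in counter.items()}
        for prev, counter in pair_counts.items()
    }

# High-scoring suffixes of a prefix are candidate green-list tokens:
scores = green_list_scores(
    [["the", "first", "thing", "the", "best"]], ["the", "first", "best", "thing"]
)
print(scores["the"])  # {'first': 0.5, 'best': 0.5}
```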
| Human text | % tokens in green list | z-score | Detector output |
|---|---|---|---|
| the first thing you do will be the best thing you do. this is the reason why you do the first thing very well. if most of us did the first thing so well this world would be a lot better place. and it is a very well known fact. people from every place know this fact. time will prove this point to the all of us. as you get more money you will also get this fact like other people do. all of us should do the first thing very well. hence the first thing you do will be the best thing you do. | 42.6 | 4.36 | Watermarked |

Table 14: Proof-of-concept human-generated texts flagged as watermarked by the soft watermarking scheme. A sentence composed by an *adversarial human* contains 42.6% tokens from the green list. The z-test threshold for watermark detection is 4, the same as the default hyperparameter in Kirchenbauer et al. (2023a).

## D.2 Zero-Shot And Trained Detectors

We report the false positive rate fixed at a true positive rate of 90% and the true positive rate at a false positive rate of 1% in Table 15. The ROC curves before and after spoofing the detectors are provided in Figure 17. Our experiments demonstrate that most of these detection methods show a significant increase in false positive rates at a fixed true positive rate of 90% after spoofing. After this naïve spoofing attack, the true positive rate at a false positive rate of 1% and the AUROC scores of these detectors drop significantly.

![33_image_0.png](33_image_0.png)

Figure 17: ROC curves before (left) and after (right) spoofing attack (§3). Most detectors exhibit quality degradation after our spoofing attack.

| Detection Methods | T@F | F@T |
|---|---|---|
| Entropy threshold (Gehrmann et al., 2019) | 0.025 (0.045) | 0.995 (0.845) |
| Likelihood threshold (Solaiman et al., 2019) | 0.050 (0.075) | 0.995 (0.310) |
| Logrank threshold | 0.165 (0.155) | 0.690 (0.190) |
| Rank threshold (Gehrmann et al., 2019) | 0.530 (0.335) | 0.655 (0.590) |
| Roberta (base) OpenAI detector (OpenAI, 2019) | 0.900 (0.765) | 0.010 (0.035) |
| Roberta (large) OpenAI detector (OpenAI, 2019) | 0.985 (0.990) | 0.000 (0.000) |
| DetectGPT (Mitchell et al., 2023) | 0.055 (0.240) | 0.555 (0.145) |

Table 15: True positive rates at 1% false positive rate (T@F) and false positive rates at 90% true positive rate (F@T) after (before the attack in parentheses) the spoofing attack described in §3. Bolded numbers show successful attacks where T@F decreases, or F@T increases after spoofing.
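The T@F and F@T numbers in Table 15 are standard operating points read off the ROC. A minimal sketch of how such numbers can be computed from raw detector scores (ours; the Gaussian toy scores are for illustration only):

```python
import numpy as np

def tpr_at_fpr(scores_pos, scores_neg, target_fpr=0.01):
    """True positive rate at the threshold giving `target_fpr` (T@F)."""
    # Threshold = the (1 - target_fpr) quantile of negative (human) scores.
    thresh = np.quantile(scores_neg, 1.0 - target_fpr)
    return float((scores_pos >= thresh).mean())

def fpr_at_tpr(scores_pos, scores_neg, target_tpr=0.90):
    """False positive rate at the threshold giving `target_tpr` (F@T)."""
    thresh = np.quantile(scores_pos, 1.0 - target_tpr)
    return float((scores_neg >= thresh).mean())

rng = np.random.default_rng(0)
ai_scores = rng.normal(1.0, 1.0, 10_000)     # detector scores on AI text
human_scores = rng.normal(0.0, 1.0, 10_000)  # detector scores on human text
print(tpr_at_fpr(ai_scores, human_scores), fpr_at_tpr(ai_scores, human_scores))
```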
Review 1:

Summary: The paper considers an interesting and important problem: can we differentiate AI-generated text from human-generated text? The paper argues that we cannot. This is achieved by proposing the so-called (recursive) paraphrasing attacks. Through extensive experimentation, the authors show that (1) text generated by LLMs, watermarked or not, is not in general detectable as AI-generated; (2) the proposed recursive paraphrasing method only degrades text quality slightly, measured via human studies and metrics such as perplexity scores and accuracy on text benchmarks. The authors also propose another attack, called the spoofing attack, aimed at misleading detectors into classifying human-written text as AI-generated (i.e. increasing the type-I error). The authors also have theoretical results.

Strengths and Weaknesses:

Strengths: I find both recursive paraphrasing and spoofing attacks to be novel. The experimental results are solid. Overall the paper is written well and the methods are explained well.

Weaknesses: My major issue with the recursive paraphrasing attack is the following: the authors use another LLM to do the paraphrasing. They also argue that the paraphrasing used in Kirchenbauer et al is weak (as it only changes a small portion of the text and it's a token-wise paraphrasing). But if the recursive paraphrasing is changing the majority of the text, then one may need to treat the output of this process as a new text (which is not watermarked). So I have a few questions here:

- Is there a way to measure/report how much the paraphrased text differs from the original text?
- If we are using another non-watermarked LLM to paraphrase the text in multiple rounds (and maybe change it significantly), then why don't we just use a strong non-watermarked LLM to generate the text that we want?

My point here is that there should be a limit to how much we can alter the text. If we change it significantly then obviously we'll remove the effect of the watermark. This is similar to the scenario where the student uses ChatGPT to generate an essay, but then edits/paraphrases the generated text significantly so that it does not look like the original AI-generated text -- but then the question is whether the final text (which is heavily edited by the human) can still be considered AI-generated text.

My other question, about the attack on the type-I error, is: quoting from the paper, the authors write "... attacker can estimate the green lists by observing multiple token pairs in the watermarked texts from the target LLM. An adversarial human can then leverage the estimated green lists to compose texts by themselves that are detected to be watermarked...". So the authors are assuming that the watermark is exactly the one used by Kirchenbauer et al and they assume knowledge of the green/red list (by estimating it through many trials). Is this a realistic assumption? One can change the watermarking scheme of Kirchenbauer so that the generation of the green/red lists depends on the past k tokens (k>=2); in this way estimating the green lists will be very hard. Also, one can do simple tweaks to this scheme (e.g. longer secret keys, etc.) to make it very hard to estimate the green list. Further, assuming that a very specific watermark has been used is itself a big assumption.

Requested Changes: Please see the questions above.

Broader Impact Concerns: No concerns.
==================================================

Review 2:

Summary: This paper studies an important problem, that of evaluating the reliability of current methods for detecting AI-generated text. The paper evaluates a number of popular approaches, ranging from ML classifiers to watermarking methods and retrieval-based approaches. The paper empirically demonstrates the unreliability of current AI detectors -- it develops attacks that can both increase the rate of false positives (spoofing) and the rate of false negatives. The paper also provides theoretical arguments towards why this task may be impossible.

Strengths and Weaknesses:

Strengths:
1. Studies an important and relevant problem
2. Clearly written, and experiments are thorough (for the things they're measuring)

Weaknesses:
1. The claims are sometimes misleading. I list out a few below.
a) "only degradation of 2.2" -- This increase needs to be contextualized. Absolute changes in perplexity mean little. Also, an absolute degradation of 2.2 in perplexity sounds like a significant impact on quality. Maybe the authors could look at some scaling laws, and comment on the size of the model with comparable perplexity (for the same number of training tokens)? I think it may even be a factor of 2 or 3 smaller.
b) The authors reduce the accuracy at 300 tokens. https://arxiv.org/abs/2306.04634 argues that for paraphrased text, you would still be able to detect it at longer lengths and the watermarks are still very reliable. The two findings seem to align, but the narrative in one is "watermarking is broken" and the other is "watermarking is reliable". 300 tokens is still small (~10 sentences) -- is it worthwhile studying the settings described in https://arxiv.org/abs/2306.04634?
c) "Can AI-Generated Text be Reliably Detected" -- the answer seems to be generally yes. There are plenty of use cases for detecting AI-generated text even in non-adversarial settings, and that problem seems to be reliably solved at this point in the literature. Maybe the title should be "Can detectors for AI-generated text be evaded?"? A more nuanced take on this topic is currently missing in existing literature, and this paper could be a good contribution in that direction if the weaknesses are addressed.
d) Assuming the AI-generated text is rated 5, a degradation to 4 is quite high (Table 2) -- if a student cheating on an exam were to use one of these paraphrasing tools, this would impact their grade versus directly using LLM text (assuming no AI detectors are in place). So the AI detectors seem to have a negative impact on the adversary, and can serve as a deterrent. This conflicts with the narrative of the paper, and the title. This problem can perhaps be solved by better attacks, and concrete threat models.
2. A precise threat model is lacking. What amount of quality drop is acceptable? What amount of semantic shift is acceptable? What amount of extra compute is acceptable for the adversary? What amount of query access is acceptable?
3. The theoretical analysis does not seem very insightful. Particularly in the context of watermarking -- for example (https://www.scottaaronson.com/talks/watermark.ppt) -- watermarking is supposed to behave as a valid pseudorandom sampler, and the detector is recovering the source of randomness that is provided to the pseudorandom sampler. In light of this, the distributions of human and AI-generated text converging will have no impact. This brings into question the relevance of the theoretical result.
4. Spoofing attacks can simply be prevented with signatures or permalinks. Why does watermarking make sense as a defense against spoofing attacks? In my opinion, watermarking is only a tool for preventing AI pretending to be human, but not vice versa. The authors should compare other solutions for preventing humans pretending to be AI, highlight why watermarking is the best solution currently available, and then demonstrate that they can attack watermarking schemes. The result on its own seems to have little practical relevance.

Requested Changes:
1. Define a precise threat model that the authors think is practically relevant, and demonstrate that current AI detection tools do not work well under this threat model. I think it would strengthen the paper to focus on a single detection methodology and a concrete threat model that is practically grounded, and do a convincing study as opposed to making a broad argument against all possible AI detection strategies.
2. Revise the theoretical result to account for the fact that watermarking does not rely on being able to separate distributions. Or, caveat the theoretical result appropriately in that it only applies to approaches trying to delineate distributions (e.g., classifiers).
3. 'All PPI' seems to be a bit misleading to present. How does a random baseline do in this setting (e.g., sample five times directly and pick the lowest scoring one)?
4. Will the spoofing attack work for instruction-tuned models? They are unlikely to ramble on with the most common tokens even if prompted with those. Demonstrating the feasibility of the attack with instruction-tuned models (assuming the authors are able to provide evidence why signatures aren't a solution for preventing humans pretending to be a certain model) would also be a good contribution.
5. The authors attribute a classifier being studied to OpenAI, but I do not think they're studying a classifier that OpenAI officially released (https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text). It's one developed by the open-source community as far as I can tell.
6. Why is some paraphrasing done with an 11B T5 model and some with a 220M model? Why not consistently use the 220M model everywhere?
7. The authors describe serious privacy issues with retrieval-based methods. It would be great to see attacks on retrieval systems that leak private information, which would also strengthen the paper.
8. Instead of measuring the average rating for grammar/quality, it would be nicer to see side-by-side evaluations, and look at the win rates.

Broader Impact Concerns: No concerns on the ethical front.

==================================================

Review 3:

Summary: The paper considers the problem of detecting whether a particular piece of data is generated from a large language model, and proposes attacks that can break some of the existing systems. It has three parts: the first part considers recursive paraphrasing attacks and shows that they can break watermarking schemes, neural network based detectors, zero-shot classifiers, etc. The second part considers spoofing attacks where an adversary can mimic LLM behavior and hence can cause reputational damage to the LLM detector. The final part proposes a bound on the AUROC of the detector in terms of the total variation distance between the human-generated and machine-generated datasets.
Strengths and Weaknesses:

Strengths: The paper addresses an important and timely topic and provides a detailed experimental analysis of different types of attacks, and also introduces a new "spoofing" idea, which I hadn't seen before.

Weaknesses: I have concerns about each part of the paper and have explained them in the requested changes section.

Requested Changes:

1. Recursive paraphrasing attack: My biggest concern about the recursive paraphrasing attack is the threat model considered. To elaborate, let's consider the watermarking algorithm (Kirchenbauer et al., 2023). This algorithm works by modifying the probability distributions outputted by the LLM. Hence, the underlying assumption is that the person who is generating text has full control over the final generated text. This can be useful when the LLM is offered as a service, e.g., ChatGPT. Now, the paper shows that paraphrasing this output with another model (Figure 2) breaks the watermark. This raises the question of where this model is being run. Clearly, it is not in the best interest of OpenAI to run the paraphraser itself. So this model makes sense when the paraphraser is run by a user themselves. This raises an important question: if the user can run the paraphraser, then why can't they use an open-sourced model such as Llama-X themselves and not use watermarking? This question about threat models and incentives doesn't seem to be addressed by the paper. In my opinion this still makes sense if the LLM (e.g., ChatGPT) is too big and cannot be run by the user themselves, but they have enough hardware to run the paraphraser. In this scenario, one needs to demonstrate that the new accuracy of ChatGPT + paraphraser is much better than the accuracy of a smaller model (e.g., OPT-13B). The paper doesn't seem to have evidence supporting this claim. I might be missing something obvious here, but without a threat model, it is hard to put the results and the papers in perspective. I would encourage the authors to add a detailed description of the threat model and put the experimental results in perspective in this threat model.

2. Spoofing attacks: This is a very cool idea and I hadn't seen others consider it. But when I dug into details, it seems like the paper assumes that the attacker has access to the red and green list: Appendix D reads "We build a tool that helps adversarial humans create watermarked sentences by providing them with proxy green list". This seems like a big assumption. I was under the assumption that when watermarking is used with LLMs, e.g., ChatGPT, the red and green list is not public. If this is not the case, then I would encourage the authors to add a citation. If it is the case that this is not public, then I believe more discussion is needed. If it is not done already, I think this knowledge and the threat model need to be discussed in the main paper rather than the Appendix.

3. AUROC bound: I would request the authors to estimate the TV bound directly on the datasets without any projection. Note that this can be done in a sample-efficient way. Let $P$ and $Q$ be the two distributions. Let $S = \{x : Q(x) \geq P(x)\}$. Then the total variation distance is $Q(S) - P(S)$. This can be estimated by taking n random samples from $Q$ and $P$ and estimating the expectation of the indicator value $1(Q(x) \geq P(x))$. Since this random variable is bounded by $1$, with $10,000$ or so samples, one can easily get a good estimate of the total variation bound.
To summarize, it would be great if the authors describe the threat models and the assumptions in detail in the main paper.

Broader Impact Concerns: Given that the paper considers breaking LLM detectors and introduces a new concept of spoofing, I would strongly request the authors to add a broader impact section.

==================================================

Review 4:

Summary: The paper examines whether LLM-generated text can be reliably detected (using GPT-2, OPT-13B, and OPT-1.3B). It provides an overview of different ways of detecting generated text (through watermarking and detection, neural network-based detectors, and retrieval-based detectors). It shows that there are two ways in which this can fail: 1. through (repeated) paraphrasing of generated text, the false negative rate can be increased; 2. through spoofing (making otherwise written text appear as generated by a specific LLM), the false positive rate can be increased. The paper provides detailed evaluations of simple methods that work well for both cases and across different detection mechanisms, showing that AI detectors might not be as reliable as one would like. The paper verifies using human evaluators that paraphrased text is nearly as correct and of similar quality as the original text. Finally, the paper also provides a theoretical result about the feasibility of detecting LLM-generated text by looking at the total variation as a measure of distribution divergence and by bounding the possible performance of detectors using the total variation between human-generated text and LLM-generated text.

Strengths and Weaknesses: The paper is well-written and provides a thorough examination of its claims. It is clear in the importance of its contributions and grounds them well. They call their method "recursive paraphrasing". I believe repeated paraphrasing would be more apt. (Obviously, recursive also fits repeated through tail recursion, but repeated would be a better name because it would be immediately apparent what happens, whereas recursive sounds more like divide & conquer and some tree-based paraphrasing.) The theory fits in neatly, but it would be great to ground it better in the literature. Detectors can be viewed as performing a statistical test to see which distribution a sample belongs to. I shall spend a bit more time with the proof, but it would seem one could also create similar bounds with Fano's inequality or using mutual information. Great paper overall!

Requested Changes:
1. Recursive -> repeated paraphrasing
2. Maybe a bit more literature in §4 "Hardness of Reliable AI Text Detection" (see above).
3. Add a sentence on limitations and future work. While the results look great, the paper already raises the question of examining stronger models like GPT-4.
4. Add a random baseline to Figure 5 (because of the log x scale, it's hard for me to know where the diagonal is).

Broader Impact Concerns: None. Actually, this paper is really great because it draws attention to the fallibility of AI detectors and rightly points out that false positives are very problematic as they can easily lead to students being accused of misconduct.

==================================================

Metareview:

Recommendation: Reject

Comment: This paper studies reliable detection of AI-generated text, and derives several interesting mathematical results. The paper was reviewed by four expert reviewers whose expertise captured both theoretical and practical aspects of AI text detection.
All reviewers appreciated the technical developments of the paper and think this could be an exciting paper that addresses a very important and timely challenge. However, after extensive discussion between reviewers and authors as well as internal discussions, all reviewers and the AE agree that the paper still needs to be revised to address the points raised by the reviewers and to update the narrative of the paper to highlight the nuances of the technical results. This requires revisions to the title, abstract, introduction, and the entire narrative of the paper, and hence it is deemed a major revision. For example, there was a suggestion by Reviewer 4dLu on what the title could be ("Limitations of AI Text Detectors: Unreliability after (Recursive) Quality-Preserving Paraphrasing"), and we think the rest of the narrative should follow suit to explicitly discuss these nuances of the results in the paper as well and explicitly reflect the responses provided to the reviewers in the discussion period, many of which are still missing from the narrative of the paper. Below, I further quote from the decision letters of the reviewers, all of whom unanimously agree that the required changes are necessary to make the paper publishable. As such, the paper has to be recommended to be rejected at this time, but we look forward to receiving a revision of the paper along these lines, which we think would make a nice addition to the literature around AI-generated text detection.

Reviewer kL8C: "The paper can be significantly strengthened by adding nuance to the claims, and focusing on grounding the practical relevance of the attacks discussed in the paper."

Reviewer UBC6: "After the discussion period, the paper was updated. However, I feel this does not capture the nuances of the results. I still feel that the main body of the paper seems to highlight the positives and all the nuances are in the Appendix or need to be inferred by the reader."

Reviewer mAWJ: "as the other reviewers have pointed out, the writing should be revised to reflect the points mentioned by the reviewers."

Reviewer 4dLu: "I also believe that the paper would benefit from a (major) revision to incorporate the reasonable change requests by the other reviewers that the authors seem reluctant to address in the paper text while acknowledging them in their responses."

==================================================
# A Portfolio Approach To Massively Parallel Bayesian Optimization

Anonymous authors
Paper under double-blind review

## Abstract

One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one-at-a-time. For expensive-to-evaluate black-boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black-box to simultaneously select multiple designs via an infill criterion. Still, despite the increased availability of computing resources that enable large-scale parallelism, the strategies that work for selecting a few tens of parallel designs for evaluations become limiting due to the complexity of selecting more designs. It is even more crucial when the black-box is noisy, necessitating more evaluations as well as repeating experiments. Here we propose a scalable strategy that can keep up with massive batching natively, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on noisy functions, for mono and multi-objective optimization tasks. These experiments show orders of magnitude speed improvements over existing methods with similar or better performance.

## 1 Introduction

Current trends in improving the speed or accuracy of individual computations are on exploiting parallelization on highly concurrent computing systems. These computer models (a.k.a. simulators) are prevalent in many fields, ranging from physics to biology and engineering. Still, increasing the parallelization for individual simulators often comes with diminishing returns and model evaluation time remains limiting. A strategy is then to conduct several evaluations simultaneously, in batches, to optimize (here minimize) quantities of interest. See, e.g., (Haftka et al., 2016) for a review.

For fast simulators, evolutionary algorithms (EAs) are amenable to parallelization by design, see e.g., the review by Emmerich and Deutz (2018). But they require a prohibitive number of evaluations for more expensive-to-evaluate simulators. For these, Bayesian optimization (BO), see e.g., (Shahriari et al., 2016; Frazier, 2018; Garnett, 2022), is preferred, with its ability to carefully select the next evaluations. Typically, BO relies on a Gaussian process (GP) model of the simulator, or any black-box, by using a probabilistic surrogate model to efficiently perform the so-called exploration/exploitation trade-off. Exploitation refers to areas where the prediction is low (for minimization), while exploration is for areas of large predictive variance. An infill criterion, or acquisition function, balances this trade-off to select evaluations, such as the expected improvement (EI) (Mockus et al., 1978) in the efficient global optimization algorithm (Jones et al., 1998). Alternatives include upper confidence bound (UCB) (Srinivas et al., 2010), knowledge gradient (Frazier, 2018), and entropy based criteria (Villemonteix et al., 2009b; Hennig and Schuler, 2012; Wang and Jegelka, 2017). Parallelization is then enabled by the definition of batch versions of the corresponding infill criteria, selecting several designs to evaluate at once. Noisy simulators have their own set of challenges, see e.g., (Baker et al., 2022), and raise questions about selecting the right amount of replication.
While not necessary per se, repeating experiments is the best option to separate signal from noise, and is beneficial in terms of computational speed by limiting the number of unique designs, see e.g., (Binois et al., 2019; Zhang et al., 2020). Here, we also consider multi-objective optimization (MOO) where the goal is to find the set of best compromise solutions, the Pareto front, since there is rarely a solution minimizing all objectives at once. We refer to, e.g., (Hunter et al., 2017) for a review of MOO options for black-boxes and (Emmerich et al., 2020) for multi-objective (MO) BO infill criteria. MO versions of batch algorithms have also been proposed, taking different scalarization weights (Zhang et al., 2010), or relying on an additional notion of diversity (Lukovic et al., 2020).

Our motivating example is the calibration of a large-scale agent-based model (ABM) of COVID-19 run on a supercomputer (Ozik et al., 2021) with the added goal of reducing the *time to solution* for meaningful, i.e., timely, decision-making support. The targeted setup is as follows: a massively parallel system (e.g., HPC cluster or supercomputer) with the ability to run hundreds of simulation evaluations in parallel over several iterations for the purpose of reducing the overall time to solution of the optimization to support rapid turnaround of analyses for high consequence decision making (e.g., public health (Ozik et al., 2021), meteorological (Goubier et al., 2020), and other emergency response (Mandel et al., 2019)). This is sometimes called the high throughput regime (Hernández-Lobato et al., 2017). Hence the time dedicated to select the batch points should be minimal (and not considered negligible), as well as amenable to parallelization (to use the available computing concurrency).

The method we propose is to directly identify candidates realizing different exploration/exploitation trade-offs. This amounts to approximating the GP predictive mean vs. variance Pareto front, which is orders of magnitude faster than optimizing most existing batch infill criteria. In doing so, we shift the paradigm of optimizing (or sampling) acquisition functions over candidate batches to quickly finding a set of desirable candidates to choose from. In the noisy case, possibly with input-dependent variance, the variance reduction makes an additional objective to further encourage replication. Then, to actually select batches, we follow the approach proposed by (Guerreiro and Fonseca, 2016) with the hypervolume Sharpe ratio indicator (HSRI) in the context of evolutionary algorithms. In the MO version, the extension is to take the mean and variance of each objective, and still select batch-candidates based on the HSRI.

The contributions of this work are:

1) The use of a portfolio allocation strategy for batch-BO, defined on the exploration/exploitation trade-off. It extends directly to the multi-objective setup;
2) An approach independent of the size of the batch, removing limitations of current batch criteria for large q;
3) The potential for flexible batch sizes and asynchronous allocation via the portfolio approach;
4) The ability to natively take into account replication and to cope with input-dependent noise variance.

In Section 2 we briefly present GPs, batch BO and MO BO as well as their shortcomings for massive batches. In Section 3, the proposed method is described. It is then tested and compared empirically with alternatives in Section 4. A conclusion is given in Section 5.
## 2 Background And Related Works

We consider the minimization problem of the black-box function f: find x* ∈ argmin_{x∈X⊆R^d} f(x), where X is typically a hypercube. Among various options for surrogate modeling of f, see e.g., Shahriari et al. (2016), GPs are prevalent.

## 2.1 Gaussian Process Regression

Consider a set of n ∈ N* design-observation couples (x_i, y_i) with y_i = f(x_i) + ε_i, ε_i ∼ N(0, τ(x_i)), often obtained with a Latin hypercube sample as design of experiments (DoE). The idea of GP regression, or kriging, is to assume that f follows a multivariate normal distribution, characterized by an arbitrary mean function m(x) and a positive semi-definite covariance kernel function k : X × X → R. Unless prior information is available to specify a mean function, m is assumed to be zero for simplicity. As for k, parameterized families of covariance functions such as Gaussian or Matérn ones are preferred, whose hyperparameters (process variance σ², lengthscales) can be inferred in many ways, such as maximum likelihood estimation. Conditioned on observations y := (y_1, . . . , y_n), zero mean GP predictions at any set of q designs in X, X_q := (x′_1, . . . , x′_q)^⊤, are still Gaussian, Y_n(X_q)|y ∼ N(m_n(X_q), s²_n(X_q)):

$$m_{n}(\mathcal{X}_{q})=\mathbf{k}_{n}(\mathcal{X}_{q})^{\top}\mathbf{K}_{n}^{-1}\mathbf{y}_{n},$$

$$s_{n}^{2}(\mathcal{X}_{q})=k(\mathcal{X}_{q},\mathcal{X}_{q})-\mathbf{k}_{n}(\mathcal{X}_{q})^{\top}\mathbf{K}_{n}^{-1}\mathbf{k}_{n}(\mathcal{X}_{q})+\tau(\mathcal{X}_{q}),$$

where k_n(X_q) = (k(x_i, x′_j))_{1≤i≤n,1≤j≤q} and K_n = (k(x_i, x_j) + δ_{i=j} τ(x_i))_{1≤i,j≤n}. We refer to (Rasmussen and Williams, 2006; Forrester et al., 2008; Ginsbourger, 2018; Gramacy, 2020) and references therein for additional details on GPs and associated sequential design strategies.

Noise variance, τ(x), if present, is seldom known and must be estimated. With replication, stochastic kriging (Ankenman et al., 2010) relies on empirical variance estimates. Otherwise, estimation methods have been proposed, building on the Markov chain Monte Carlo method of (Goldberg et al., 1998), as discussed, e.g., by (Binois et al., 2018). Not only is replication beneficial in terms of variance estimation, it also has an impact on the computational speed of using GPs, where the costs scale with the number of unique designs rather than the total number of evaluations.

## 2.2 Batch Bayesian Optimization

Starting from the initial DoE used to build the starting GP model, (batch-) BO sequentially selects one (q) new design(s) to evaluate based on the optimization of an acquisition function that balances exploration and exploitation. The GP model is updated every time new evaluation results are available. The generic batch BO loop is illustrated in Algorithm 1.

Algorithm 1 Pseudo-code for batch BO
Input: N_max (total budget), q (batch size), GP model trained on initial DoE (x_i, y_i)_{1≤i≤n}
1: **while** n ≤ N_max do
2: Choose x_{n+1}, . . . , x_{n+q} ∈ arg max_{X_q∈X} α(X_q)
3: Update the GP model by conditioning on {x_{n+i}, y_{n+i}}_{1≤i≤q}.
4: n ← n + q
5: **end while**

Among acquisition functions α, we focus on EI, with its analytical expression, compared with, say, entropy criteria. EI (Mockus et al., 1978) is defined as α_EI(x) := E[max(0, T − Y_n(x))], where T is the best value observed so far in the deterministic case. In the noisy case, taking T as the best mean estimation over sampled designs (Villemonteix et al., 2009a) or over the entire space (Gramacy and Lee, 2011) are alternatives. Integrating out noise uncertainty is done by Letham et al. (2018), losing analytical tractability.
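For concreteness, the prediction equations and EI above can be transcribed in a few lines of R, the language of the paper's supplementary code. This is a minimal sketch assuming a Gaussian kernel with fixed, arbitrary hyperparameters; the names (gauss_kernel, gp_predict, ei) are illustrative and not the paper's implementation, which relies on the hetGP and DiceKriging packages.

```r
## Gaussian kernel between rows of X1 and X2, with lengthscale theta.
gauss_kernel <- function(X1, X2, theta = 0.2, sigma2 = 1) {
  D <- as.matrix(dist(rbind(X1, X2)))[1:nrow(X1), nrow(X1) + 1:nrow(X2), drop = FALSE]
  sigma2 * exp(-(D / theta)^2 / 2)
}

## Predictive mean m_n and standard deviation s_n at candidates Xq, given the
## DoE Xn, observations y, and (possibly zero) noise variances tau.
gp_predict <- function(Xq, Xn, y, tau = rep(0, nrow(Xn))) {
  Kn <- gauss_kernel(Xn, Xn) + diag(tau, nrow(Xn))  # K_n with noise on the diagonal
  kq <- gauss_kernel(Xn, Xq)                        # n x q cross-covariances k_n(Xq)
  Ki <- solve(Kn)
  m  <- drop(crossprod(kq, Ki %*% y))               # m_n(Xq)
  s2 <- diag(gauss_kernel(Xq, Xq)) - diag(crossprod(kq, Ki %*% kq))
  list(mean = m, sd = sqrt(pmax(s2, 0)))            # clip tiny negatives
}

## Analytical EI with best value T: EI = (T - m) Phi(u) + s phi(u), u = (T - m)/s.
ei <- function(m, s, T) {
  u <- (T - m) / s
  s * (u * pnorm(u) + dnorm(u))
}
```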
The EI acquisition function can be extended to take into account the addition of q new points, e.g., with the batch (q in short) EI,

$$\alpha_{qEI}(\mathcal{X}_{q}):=\mathbb{E}[\max(0,T-Y_{n}(\mathbf{x}'_{1}),\ldots,T-Y_{n}(\mathbf{x}'_{q}))],$$

which has an expression amenable to computation (Chevalier and Ginsbourger, 2013), and also for its gradient (Marmin et al., 2015). A much faster approximation of the batch EI (qEI) is described in Binois (2015), relying on nested Gaussian approximations of the maximum of two Gaussian variables from Clark (1961). Otherwise, stochastic approximation (Wang et al., 2020) or sampling methods by Monte Carlo, e.g., with the reparameterization trick (Wilson et al., 2018), are largely used, but may be less precise as the batch size increases.

Many batch versions of infill criteria have been proposed, such as (Kandasamy et al., 2018; Hernández-Lobato et al., 2017) for Thompson sampling. For EI, rather than just looking at its local optima (Sóbester et al., 2004), some heuristics propose to select batch points iteratively, replacing unknown values at selected points by pseudo-values (Ginsbourger et al., 2010). This was coined "hallucination" in the UCB version of (Desautels et al., 2014). More generally, Rontsis et al. (2020) use an optimistic bound on EI for all possible distributions compatible with the same first two moments as a GP, which requires solving a semi-definite problem, limiting the scaling up to large batches. Gonzalez et al. (2016) reduce batch selection cost by not modeling the joint probability distribution of the batch nor using a hallucination scheme. Their idea is to select batch members sequentially by penalizing proximity to the previously selected ones. Taking different infill criteria is an option to select different trade-offs, as by (Tran et al., 2019). This idea of a portfolio of acquisition functions is also present in (Hoffman et al., 2011), but limited to a few options and not intended as a mechanism to select batch candidates. Using local models is another way to select batches efficiently, up to several hundreds in (Wang et al., 2018). The downside is a lack of coordination in the selection and the need for an *ad hoc* selection procedure. For entropy or stepwise uncertainty reduction criteria, see, e.g., (Chevalier et al., 2014a), batching would increase their already intensive computational burden. Another early work by (Azimi et al., 2010) attempts to match the expected sequential performance, via approximations and sampling.

## 2.3 Multi-Objective BO

The multi-objective optimization problem (MOOP) is to find x* ∈ argmin_{x∈X⊂R^d} (f_1(x), . . . , f_p(x)), p ≥ 2. The case p > 4 is often called the *many-objective* setup, with its own set of challenges for BO, see e.g., (Binois et al., 2020). The solutions of a MOOP are the best compromise solutions between objectives, in the Pareto dominance sense. A solution x is said to be dominated by another x′, denoted x′ ≺ x, if f_i(x′) ≤ f_i(x) for all i and f_i(x′) < f_i(x) for at least one i. The Pareto set is the set of solutions that are not dominated by any other design in X: {x ∈ X : ∄ x′ ∈ X such that x′ ≺ x}; the Pareto front is its image in the objective space. In the noisy case, we consider the noisy MOOP defined on expectations over objectives (Hunter et al., 2017). Measured in the objective space, the hypervolume refers to the volume dominated by a set of points relative to a reference point, see Figure 1. It serves both as a performance metric in MOO, see e.g., Audet et al. (2020) and references therein, and to measure improvement in extending EI (Emmerich et al., 2006). The corresponding expected hypervolume improvement (EHI) can be computed in closed form for two or three objectives (Emmerich et al., 2011; Yang et al., 2017), or by sampling, see e.g., Svenson (2011). A batch version of EHI, qEHI, is proposed by Daulton et al. (2020; 2021).

The operations research community has seen works dealing with low signal-to-noise ratios and heteroscedasticity, where replication is key. Generally, the idea is to first identify good points before defining the number of replicates, see, e.g., (Zhang et al., 2017; Gonzalez et al., 2020) or (Rojas-Gonzalez and Van Nieuwenhuyse, 2020) for a review on stochastic MO BO. Still, the batch aspect is missing in the identification of candidates.
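To make the dominance relation above concrete, the following base-R sketch flags the non-dominated rows of an objective matrix (minimization). The function name non_dominated is illustrative, and the O(n²) double loop favors readability over efficiency.

```r
## Flag the non-dominated rows of an objective matrix Y (one row per design,
## one column per objective, all to be minimized), per the definition above.
non_dominated <- function(Y) {
  n <- nrow(Y)
  keep <- rep(TRUE, n)
  for (i in 1:n) {
    for (j in 1:n) {
      if (i != j && all(Y[j, ] <= Y[i, ]) && any(Y[j, ] < Y[i, ])) {
        keep[i] <- FALSE  # row i is dominated by row j
        break
      }
    }
  }
  keep
}

## Example: the Pareto front of random points in [0, 1]^2.
Y <- matrix(runif(200), ncol = 2)
front <- Y[non_dominated(Y), ]
```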
## 2.4 Challenges With Massive Batches

There are several limitations in the works above. First, while it is generally assumed that the cost of one evaluation is sufficiently high to consider the time to select new points negligible, this may not be the case in the large batch setup. Parallel infill criteria are more expensive to evaluate, and even computational costs increasing linearly in the batch size (q) become impractical for hundreds or thousands of batch points. For instance, the exact qEI expression uses multivariate normal probabilities, whose computation does not scale well with the batch size. There are also many approximated criteria for batch EI, or similar criteria. However, in all current approaches, the evaluation costs increase with the batch size, at best linearly in q for existing criteria, which remains too costly for the regime we target.

This is already troublesome when optimization iterations must be conducted quickly, but is amplified by the difficulty of optimizing the acquisition function itself. While earlier works used branch and bound (Jones et al., 1998) to guarantee optimality with q = 1, multi-start gradient based optimization or EAs are predominantly used. In the batch setting, the size of the optimization problem becomes q × d, a real challenge, even with the availability of gradients. Wilson et al. (2018) showed that greedily optimizing batch members one-by-one is sensible, which still requires solving q d-dimensional global optimization problems. Both become cumbersome for large batches and, presumably, far from the global optimum since the acquisition function landscape is multimodal, with flat regions and symmetry properties. Moreover, as we showcase, this results in some batch members being less pertinent. Relying on discrete search spaces bypasses part of the issue, even though finding the best batch becomes a combinatorial search. In between the greedy and joint options is the work by Daxberger and Low (2017), optimizing an approximated batch-UCB criterion as a distributed constraint problem. As a result, only sub-optimal solutions are reachable in practice for batch acquisition function optimization.

Rather than optimizing, a perhaps even more computationally intensive option is to consider the density under EI. That is, to either find local optima and adapt the batch size as in Nguyen et al. (2016), or to sample uniformly from the EI density with slice sampling and clustering as with Groves and Pyzer-Knapp (2018). Thompson sampling for mono-objective batch BO, see e.g., Kandasamy et al. (2018), also bypasses the issues of optimizing the acquisition function, but batch members are independently obtained, which can be wasteful for large batches.
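As an illustration of the sampling-based batch criteria discussed above, a Monte Carlo estimate of qEI draws joint samples of the GP posterior at the batch. This is a sketch assuming the posterior mean mq and covariance Sq at the q batch points are available (e.g., from the prediction equations of Section 2.1); qEI_mc is an illustrative name. Note that its cost grows with q, which is precisely the scaling issue at stake here.

```r
## Monte Carlo qEI: expected improvement of the batch minimum over T.
qEI_mc <- function(mq, Sq, T, nsim = 1e4) {
  q <- length(mq)
  L <- chol(Sq + diag(1e-10, q))               # jitter for numerical stability
  Z <- matrix(rnorm(nsim * q), nsim, q) %*% L  # rows are draws from N(0, Sq)
  Y <- sweep(Z, 2, mq, "+")                    # posterior samples at the batch
  mean(pmax(0, T - apply(Y, 1, min)))
}
```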
Lastly, adaptive batch sizes might be more efficient than a fixed number of parallel evaluations, see e.g., (Desautels et al., 2014). Similarly, asynchronous evaluation is another angle to exploit when the simulation evaluation times vary, see e.g., (Gramacy and Lee, 2009; Janusevskis et al., 2012; Alvi et al., 2019). Replication adds another layer of decision: whether or not adding a new design is worth the future extra computational cost, compared to the perhaps marginally worse option of replicating based on the acquisition function value. With high noise, choosing the amount of replication becomes important, as individual evaluations contain almost no information. But even fixing the number of replicates per batch, selecting batch design locations plus replication degree makes a hard dynamic programming optimization problem.

## 3 Batch Selection As A Portfolio Problem

We propose an alternative to current BO methods to handle large batches by returning to the roots of BO, with the exploration/exploitation trade-off. Specifically, we focus on a portfolio selection criterion to select a batch balancing risk and return.

## 3.1 Exploration/Exploitation Trade-Off

At the core of BO is the idea that regions of interest have either a low mean or a large variance. This is the BO exploration/exploitation trade-off, see e.g., Garnett (2022). From a multi-objective point of view, acquisition functions resolve this trade-off by selecting a solution on the corresponding mean vs. standard deviation (m_n/s_n) Pareto front P. With UCB (Srinivas et al., 2010), α_UCB(x) := m_n(x) − √β s_n(x), the tuning parameter β is a way to select one solution on the convex parts of this Pareto front. EI can be interpreted this way as well, as noticed by Jones et al. (1998); De Ath et al. (2021), who show that

$$\frac{\partial EI}{\partial m_{n}}(\mathbf{x})=-\Phi\left(\frac{T-m_{n}(\mathbf{x})}{s_{n}(\mathbf{x})}\right)<0\quad\text{and}\quad\frac{\partial EI}{\partial s_{n}}(\mathbf{x})=\phi\left(\frac{T-m_{n}(\mathbf{x})}{s_{n}(\mathbf{x})}\right)>0,$$

where φ (resp. Φ) is the Gaussian pdf (resp. cdf). Hence EI also selects a specific solution on the corresponding Pareto front. Navigating the Pareto front can be done by taking expectations of powers of the improvement, i.e., the generalized EI (GEI) (Schonlau et al., 1998; Wang et al., 2017), for which higher powers of EI reward larger variance and make it more global, as in Ponweiser et al. (2008). Note that the probability of improvement (PI), α_PI(x) = P(Y_n(x) < T), which corresponds to the zeroth-order EI, is not on the trade-off Pareto front P, explaining why PI is often discarded as being too exploitative (higher variance is detrimental as soon as the predicted mean is below T).

Our main point is that rather than having to define a specific trade-off between exploration and exploitation a priori, before considering batching, it is better to find the set of optimal trade-offs P and select batch points from it *a posteriori*.
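For illustration, approximating P can be delegated to NSGA-II from the mco package used in Section 4. The sketch below assumes a fitted GP whose predictions are exposed by a function gp_mean_sd, an illustrative stand-in for, e.g., calling predict on a hetGP fit (which returns fields mean and sd2); the dimension, bounds, population size, and generations are placeholders.

```r
library(mco)  # provides nsga2 (Deb et al., 2002)

## Stand-in predictor: replace with predict() on an actual GP fit, assumed
## available as gp_fit (hetGP-style output with fields mean and sd2).
gp_mean_sd <- function(x) {
  p <- predict(gp_fit, matrix(x, nrow = 1))
  c(p$mean, sqrt(p$sd2))
}

## Bi-objective trade-off of Section 3.1: minimize the predictive mean and
## maximize the predictive standard deviation (negated for minimization).
trade_off <- function(x) {
  ms <- gp_mean_sd(x)
  c(ms[1], -ms[2])
}

d <- 6
res <- nsga2(trade_off, idim = d, odim = 2,
             lower.bounds = rep(0, d), upper.bounds = rep(1, d),
             popsize = 500, generations = 100)
Xs <- res$par[res$pareto.optimal, , drop = FALSE]    # candidate assets on P
A  <- res$value[res$pareto.optimal, , drop = FALSE]  # their (mean, -sd) coordinates
```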
## 3.2 Portfolio Selection With HSRI

We propose to use a criterion to select solutions on the exploration-exploitation Pareto front, rather than doing so randomly as in Gupta et al. (2018). Yevseyeva et al. (2014) defined the hypervolume Sharpe ratio indicator (HSRI) to select individuals in MO EAs, with further study of its properties in Guerreiro and Fonseca (2016; 2020). Here, we borrow from this portfolio-selection approach, where individual performance is related to expected return, while diversity is related to the return covariance.

Let A = {a^(1), . . . , a^(l)} be a non-empty set of assets, a^(i) ∈ R^s, s ≥ 2, let the vector r ∈ R^l denote their expected return and the matrix Q ∈ R^{l×l} denote the return covariance between pairs of assets. Let z ∈ [0, 1]^l be the investment vector where z_i denotes the investment in asset a^(i). The Sharpe-ratio maximization problem is defined as

$$\operatorname*{max}_{\mathbf{z}\in[0,1]^{l}}h(\mathbf{z})={\frac{\mathbf{r}^{\top}\mathbf{z}-r_{f}}{\sqrt{\mathbf{z}^{\top}\mathbf{Q}\mathbf{z}}}}\quad s.t.\quad\sum_{i=1}^{l}z_{i}=1,\tag{1}$$

with r_f the return of a riskless asset and h the Sharpe ratio. This problem, restated as a convex quadratic programming problem (QP), see e.g., Guerreiro and Fonseca (2016), can be solved efficiently, only once per iteration. The outcome is a set of weights, corresponding to the allocation to each asset.

![5_image_0.png](5_image_0.png)

Figure 1: Hypervolume dominated by two assets a^(1) (light gray) and a^(2) (gray) with respect to the reference point R, corresponding to the expected return. The return covariance is given by the volume jointly dominated by both points (dark gray).

HSRI (Guerreiro and Fonseca, 2016) is an instance of portfolio selection where the expected return and return covariance are based on the hypervolume improvement: r_i = p_ii and Q_ij = p_ij − p_ii p_jj, where

$$p_{ij}=\prod_{1\leq t\leq p}\left(R_{t}-\operatorname*{max}(a_{t}^{(i)},a_{t}^{(j)})\right)\Big/\prod_{1\leq t\leq p}\left(R_{t}-f_{t}^{*}\right);$$

see Figure 1 for an illustration. Note that this hypervolume computation scales linearly with the number of objectives. Importantly, as shown in Guerreiro and Fonseca (2016), if a set of assets is dominated by another set, its Sharpe ratio is lower. Furthermore, no allocation is made on dominated points: they are all on P. Finally, they show that only the reference point R needs to be set in practice.

## 3.3 Proposition With qHSRI

From a BO viewpoint, the goal is to obtain the q candidates leading to the highest return in terms of HSRI:

$$\alpha_{qHSRI}(\mathcal{X}_{q})=h(\mathbf{z}(\mathcal{X}_{q}))=\frac{\mathbf{r}(\mathcal{X}_{q})^{\top}\mathbf{1}_{q}-r_{f}q}{\sqrt{\mathbf{1}_{q}^{\top}\mathbf{Q}(\mathcal{X}_{q})\mathbf{1}_{q}}},$$

with 1_q a vector of q ones. Here, instead of using actual objective values as in MO EAs, an asset a^(i), corresponding to candidate design x^(i), is characterized by its GP predictive mean(s) and standard deviation(s), i.e., with p = 1, a^(i)_1 = m_n(x^(i)) and a^(i)_2 = −s_n(x^(i)). Also, here r_f = 0 since risk-less assets (noiseless observations) bring no improvement. Since the optimal solutions will belong to P thanks to the properties of HSRI, the search can be decomposed into three steps: approximating P, solving the QP problem (1) over this approximation of P, and then selecting q evaluations. In the noiseless case, these are the q largest weights. When replication is possible, the allocation becomes proportional to the weights: find γ s.t. ∑_{i=1}^{l} ⌊γ × z*_i⌉ = q, by bisection (randomly resolving ties), where ⌊·⌉ denotes rounding to the nearest integer. The adaptation to the asynchronous setup is detailed in Appendix A.

**Extension to the multi-objective setup** To extend to MO, we propose to search trade-offs on the objective means, m^(1)_n(x), . . . , m^(p)_n(x), and the averaged predictive standard deviation, σ̄_n(x) = p^{−1} ∑_{i=1}^{p} s^(i)_n(x)/σ^(i)_n, with (σ^(i)_n)² the i-th objective GP variance hyperparameter, hence a Pareto front in a (p + 1)-dimensional space. Taking all p standard deviations is possible, but the corresponding objectives are correlated since they increase with the distance to observed design points. In the case where the GP hyperparameters are the same and evaluations of objectives are coupled, the objectives would be perfectly correlated. Even with different hyperparameters, preliminary tests showed that using the averaged predictive standard deviation does not degrade the performance, compared to the additional difficulty of handling more objectives.
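A compact sketch of the HSRI computation and of the QP reformulation of problem (1) follows, using the quadprog package for the quadratic program. Asset coordinates follow the convention above (means and negated standard deviations); the names hsri_weights, R_ref and f_star are illustrative. With r_f = 0, maximizing h(z) is equivalent to minimizing y^⊤Qy subject to r^⊤y = 1 and y ≥ 0, then normalizing z = y/∑y, a standard transformation.

```r
library(quadprog)  # provides solve.QP

## A: l x s matrix of assets, e.g., cbind(m_n(x), -s_n(x)); R_ref: reference
## point (componentwise above all assets); f_star: ideal point (below them).
hsri_weights <- function(A, R_ref, f_star) {
  l <- nrow(A)
  norm <- prod(R_ref - f_star)
  P <- matrix(NA_real_, l, l)
  for (i in 1:l) for (j in 1:l)
    P[i, j] <- prod(R_ref - pmax(A[i, ], A[j, ])) / norm
  r <- diag(P)                    # expected returns r_i = p_ii
  Q <- P - tcrossprod(r)          # return covariance Q_ij = p_ij - p_ii p_jj
  Dmat <- 2 * Q + diag(1e-10, l)  # jitter: Q is only positive semi-definite
  sol <- solve.QP(Dmat, dvec = rep(0, l),
                  Amat = cbind(r, diag(l)),      # r'y = 1 (equality), y >= 0
                  bvec = c(1, rep(0, l)), meq = 1)
  y <- sol$solution
  y / sum(y)                      # optimal investment vector z*
}
```

In the noiseless case, the batch is then simply the q designs with the largest entries of z*.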
**Replication** When noise is present, we include an additional objective of variance reduction. That is, for two designs with the same mean and variance, the one for which adding an observation will decrease the predictive variance the most is preferred. This decrease is given by the GP update equations, see e.g., Chevalier et al. (2014b), and does not depend on the value at the future q designs:

$$s_{n+q}^{2}(\mathcal{X}_{q})=s_{n}^{2}(\mathcal{X}_{q})-s_{n}^{2}(\mathcal{X}_{q},\mathbf{x}_{1:(n+q)})\left(s_{n}^{2}(\mathbf{x}_{1:(n+q)})\right)^{-1}s_{n}^{2}(\mathbf{x}_{1:(n+q)},\mathcal{X}_{q}),$$

with x_{1:(n+q)} defined as the current DoE augmented by the future q designs. It does depend on the noise variance and the degree of replication, see, e.g., Binois et al. (2019), which may be used to define a minimal degree of replication at candidate designs to ensure a sufficient decrease. Similarly, it is possible to limit the replication degree when the decrease of variance from further replication is too low.

The pseudo-code of the approach is given in Algorithm 2. The first step is to identify s designs on the mean vs. standard deviation Pareto set. In the deterministic case s must be at least equal to q, while with noise and replication it can be lower. Population based evolutionary algorithms can be used here, with a sufficiently large population. Once these s candidates are identified, dominated points can be filtered out, as HSRI only selects non-dominated solutions. Points with a low probability of improvement (or with a low probability of being non-dominated) can be removed as well. In the noisy case, the variance reduction serves as an additional objective for the computation of HSRI. It is integrated afterwards as it is not directly related to the identification of the exploration-exploitation trade-off surface. Computing qHSRI then involves computing r and Q before solving the corresponding QP problem (1). Designs from X_s are selected based on z*: either by taking the q designs having the largest weights z_i, or by computing γ to obtain an allocation, which can include replicates.

Algorithm 2 Pseudo-code for batch BO with qHSRI
Input: N_max (total budget), q (batch size), p GP model(s) fitted on initial DoE (x_i, y_i)_{1≤i≤n}
1: **while** n ≤ N_max do
2: Find X_s ∈ arg min (m^(1)_n(x), . . . , m^(p)_n(x), σ̄_n(x))
3: Filter dominated solutions in X_s
4: if p = 1 **then**
5: Filter points with low PI in X_s
6: **else**
7: Filter points with low PND in X_s
8: **end if**
9: If τ(x) > 0: add variance reduction to objectives
10: Compute return r and covariance matrix Q
11: Compute optimal Sharpe ratio weights z*
12: Allocate q points based on the weights
13: Update the GP model(s) with {x_{n+i}, y_{n+i}}_{1≤i≤q}.
14: n ← n + q
15: **end while**

In terms of complexity, the main task is to find the assets, i.e., candidate designs on P. Evaluating m_n and s_n costs O(n²) after an O(n³) matrix inversion operation that only needs to be done once. An order of magnitude can be gained with approximations, see for instance Wang et al. (2019). Then r and Q are computed on the assets, before maximizing the Sharpe ratio, whose optimal weights provide the best q designs. Filtering solutions reduces the size of the QP problem to be solved, either with PI or the probability of non-domination (PND) in the MO case. Crucially, the complexity does not depend on q with replication.
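To complement step 9 of Algorithm 2, here is a sketch of the variance-reduction objective above under simplifying assumptions: a zero-mean GP, a vectorized kernel k returning covariance matrices, and a function tau returning noise variances (all names illustrative, and not the hetGP implementation). The decrease at the candidates X_q themselves follows from standard Gaussian conditioning.

```r
## Decrease of the predictive variance at candidates Xq after adding one
## noisy observation at each of them (a sketch under the stated assumptions).
var_reduction <- function(Xq, Xn, k, tau) {
  Kn  <- k(Xn, Xn) + diag(tau(Xn), nrow(Xn))    # covariance of the current DoE
  Kqn <- k(Xq, Xn)                              # q x n cross-covariances
  Sn  <- k(Xq, Xq) - Kqn %*% solve(Kn, t(Kqn))  # posterior covariance at Xq
  Snoisy <- Sn + diag(tau(Xq), nrow(Xq))        # noise on the new observations
  diag(Sn %*% solve(Snoisy, Sn))                # s_n^2(Xq) - s_{n+q}^2(Xq), per design
}
```

Candidates whose potential decrease is too small can then be dropped, or their replication degree capped, as discussed above.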
We illustrate the proposed method (qHSRI) on an example in Figure 2, comparing the outcome of optimizing qEI, its fast approximation qAEI (Binois, 2015) and qHSRI in the mean vs. standard deviation (m_n vs. s_n) space. Additional candidates are shown, either randomly sampled in X or on the estimated exploration/exploitation Pareto set. The negation of the standard deviation is taken for minimization. While all qHSRI selected designs are on P, this is not the case for the qEI version, particularly so when q is larger, where none of the selected designs are, possibly due to the much higher difficulty of solving the corresponding optimization problem. Designs corresponding to powers of EI also appear on P, showing a richer exploration/exploitation trade-off than with EI only. We also observe that points with large variance may not be of interest if they have a negligible PI (e.g., < 0.1).

![7_image_0.png](7_image_0.png)

Figure 2: Comparison of qHSRI with qEI and qAEI acquisition functions on the noiseless repeated Branin function (d = 6, n = 60). The first four GEI optimal solutions are depicted as well.

It could be argued that the qHSRI approach ignores distances in the input space and could form clusters. While this is the case, depending on the exploration-exploitation Pareto set, since the batch points cover P, the approach automatically adapts to the latter's range of optimal values, depending on the iteration and problem. This is harder to adapt *a priori* in the input space, and it avoids having to define minimal distances manually, as in Gonzalez et al. (2016). Still, for numerical stability and to favor replication, a minimal distance can be set as well.

## 4 Experiments

Except for Wang et al. (2018), which uses disconnected local GP models, and Gupta et al. (2018), which also uses the exploration-exploitation PF, existing batch BO methods mostly report results with low q, e.g., q ≤ 20. With the implementations we could test, these criteria take more than a few seconds per evaluation with q ≈ 100, while, in our approach, predicting for a given design takes less than a millisecond. Consequently, comparisons with qHSRI are not feasible for massive q. We provide scaling results for larger q values with qHSRI in Appendix B.

The R package hetGP (Binois and Gramacy, 2021) is used for noisy GP modeling. Anisotropic Matérn covariance kernels are used throughout the examples, whose hyperparameters are estimated by maximum likelihood. As we use the same GP model, the comparison shows the effect of the acquisition function choice: qEI or qEHI vs. qHSRI. qEI is either the exact version (Chevalier and Ginsbourger, 2013) in DiceOptim (Picheny et al., 2021), or the approximated one from Binois (2015), qAEI. qEI is available neither for q > 20 nor in the noisy case. In the mono-objective case, we also include Thompson sampling, qTS, implemented by generating GP realisations on discrete sets of size 200d. qEHI is from GPareto (Binois and Picheny, 2019), estimated by Monte Carlo. PF denotes the method from Gupta et al. (2018), where the batch members are randomly selected on the exploration-exploitation Pareto front. Random search (RS) is added as a baseline. All methods start with the same space-filling designs of size 5d, replicated five times each to help the heteroscedastic GP modeling with low signal-to-noise ratios.

Optimization of the acquisition functions is performed by combining random search, local optimization and EAs. That is, for qEI, n_u = 100d designs are uniformly sampled in the design space before computing their univariate EI.
Then n_b = 100d candidate batches are created by sampling these designs with weights given by EI. The corresponding best batch for qEI is then optimized locally with L-BFGS-B. A global search with pso (Bendtsen, 2012) (population of size 200) is conducted too, to directly optimize qEI, and the overall best qEI batch is selected. The same principle is applied for qEHI. As for qHSRI and PF, in addition to the same uniform sampling strategy with n_u designs, NSGA-II (Deb et al., 2002) from mco (Mersmann, 2020) is used to find P, the exploration/exploitation compromise surface (with a population of size 500). The reference point R for HSRI computations is obtained from the range over each component, extended by 20%, as is the default in GPareto (Binois and Picheny, 2019). The R code (R Core Team, 2023) of the approach is available as supplementary material.

For one objective, the optimality gap, i.e., the difference to a reference solution, is monitored. With noise, the optimality gap is computed both on noiseless values (when known) and on the estimated minimum by each method over iterations, which is the only element accessible in real applications. In fact, the optimality gap only assesses whether a design has been sampled close to an optimal one, not whether it has been correctly predicted. The hypervolume metric is used in the MO case, from a reference Pareto front computed using NSGA-II and all designs found by the different methods. Similar to the mono-objective case, the hypervolume difference is also computed on the estimated Pareto set by each method, over iterations, to have a more reliable and realistic performance monitoring.

As a first step, to validate qHSRI, we compared it with alternatives for relatively low q values. These preliminary experiments on noiseless functions are provided in Appendix B. The outcome is that qHSRI gives results on par with qEI and qEHI when looking at the performance over iterations, at a much lower computational cost. These results also motivate the use of qAEI as a proxy for qEI when the latter is not available. We notice that qTS performed poorly in these experiments, possibly because using discretized realisations is insufficient for the relatively large input dimension (d = 12). There, the use of random or Fourier features may help (Mutny and Krause, 2018). As for PF, it requires more samples than qHSRI, even though it runs as fast. This indicates that the portfolio allocation strategy is beneficial compared to randomly sampling on the exploration-exploitation Pareto front.

## 4.1 Mono-Objective Results

We first consider the standard Branin and Hartmann6 test functions, see e.g., (Roustant et al., 2012). For the noisy Branin (resp. Hartmann6), we take the first objective of the P1 test function (Parr, 2012) (resp. the repeated Hartmann3 (Roustant et al., 2012)) as input standard deviation τ(x)^{1/2}, hence with heteroscedastic noise (denoted by het below).

![8_image_0.png](8_image_0.png)

Figure 3: Mono-objective results over iterations and over time. Optimality gap and estimated optimality gap for noisy tests over 40 macro-runs are given. Thin dashed lines are 5% and 95% quantiles.

Figure 3 highlights that qHSRI is orders of magnitude faster than qTS and qAEI at decreasing the optimality gap (see also Table 1 for all timing results). It also improves over random selection on P as with PF.
In these noisy problems, looking at the true optimality gap for observed designs shows good performance of RS since, especially in small dimensions like for Branin (d = 2), there is a high probability of sampling close to one of its three global minima. Also, replicating incurs less exploration, penalizing qHSRI on this metric. Nevertheless, the actual metric of interest is the optimality gap of the estimated best solution at each iteration. It is improved with qHSRI, especially for Branin, while the performances of the various acquisition functions are similar iteration-wise in the Hartmann6 case, with qTS being slightly better initially. Concerning speed, part of the speed-up is due to the ability to replicate. Indeed, as apparent in supplementary Figure 9, for qHSRI the number of unique designs remains low, less than 20% of the total number of observations, without degrading the sample efficiency. Notice that taking larger batches can be faster since the batch selection is independent of q with qHSRI. Also, there are fewer iterations for the same simulation budget, hence less time is spent in fitting GPs and finding P.

Next we tackle the more realistic 12d Lunar lander problem (Eriksson et al., 2019). We take N_max = 2000 with q = 100, where a single evaluation is taken as the average over 100 runs to cope with the low signal-to-noise ratio (rather than fixing 50 seeds to make it deterministic as in Eriksson et al. (2019)). The solution found (predicted by the GP) is −205.32, while the reference handcrafted solution gives −101.13, see Figure 5. The Lunar lander problem with qHSRI took 5 hours; it did not complete with qAEI even within 24 hours, due to q = 100.

## 4.2 Multi-Objective Results

We consider the P1 (Parr, 2012) and P2 (Poloni et al., 2000) problems. For the noisy setup, one problem serves as the other one's noise standard deviation function (taking absolute values for positiveness). The results are shown in Figure 4, where the leftmost panels show the beneficial performance of the qHSRI approach in terms of time to solution. While RS performs relatively well looking solely at the hypervolume difference for the evaluated designs (rightmost panels), this does not relate to the quality of the Pareto front estimation. Indeed, the Pareto front estimated with RS, that is, using only the non-dominated noisy observations, is far from the actual Pareto front, due to noise realizations. In contrast, the Pareto fronts estimated by the GP models do show convergence to the reference Pareto front, indicating that their estimation improves over iterations (middle panels). Finally, as in the mono-objective results, we demonstrate in Appendix B that for the noiseless case the sample efficiency of qHSRI is at least on par with that of qEHI, and even slightly better in some cases.

![9_image_0.png](9_image_0.png)

Figure 4: Multi-objective results over time and iterations. Hypervolume difference over a reference Pareto front and its counterpart for the estimated Pareto set for noisy tests over 40 macro-runs are given. Thin dashed lines are 5% and 95% quantiles.

## 4.3 CityCOVID Data Set

We showcase a motivating application example for massive batching: calibrating the CityCOVID ABM of the population of Chicago in the context of the COVID-19 pandemic, presented in Ozik et al. (2021) and built on the ChiSIM framework (Macal et al., 2018). It models the 2.7 million residents of Chicago as they move between 1.2 million places based on their hourly activity schedules.
The synthetic population of agents extends an existing general-purpose synthetic population (Cajka et al., 2010) and statistically matches Chicago's demographic composition. Agents colocate in geolocated places, which include households, schools, workplaces, etc. The agent hourly activity schedules are derived from the American Time Use Survey and the Panel Study of Income Dynamics and assigned based on agent demographic characteristics. CityCOVID includes COVID-19 disease progression within each agent, including differing symptom severities, hospitalizations, and age-dependent probabilities of transitions between disease stages.

The problem is formulated as a nine variable bi-objective optimization problem: the aggregated difference for two target quantities. It corresponds to the calibration of the CityCOVID parameters θ listed in Table 3, each normalized to [0, 1]. Model outputs are compared against two empirical data sources obtained through the City of Chicago data portal (City of Chicago, 2022): H, the daily census of hospital beds occupied by COVID-19 patients, and D, the COVID-19 attributed death data in and out of hospitals, both for residents of Chicago. We used an exponentially weighted error function L(θ, T_i, T̃_i, d), i ∈ {H, D}, with daily discount rate d tuned to 98% and 95% for H and D, with the corresponding observed (resp. simulated) time series denoted by T (resp. T̃), giving the objectives.

To inform public health policy, many simulations are necessary in a short period of time, which can only be achieved by running many concurrently. One simulation takes ≈ 10 min, with a low signal-to-noise ratio. A data set of 217,078 simulations (over 8,368 unique designs, with a degree of replication between 1 and 1000) has been collected by various strategies: IMABC (Rutter et al., 2019), qEHI with a fixed degree of replication, and replicating non-dominated solutions. This data set is available in the supplementary material. Contrary to the previous test cases that were defined over continuous domains, for testing qHSRI we use this existing data set. The initial design is a random subset of the data: 25,000 simulation results over 2,500 unique designs with a degree of replication of 10, out of 50,585 simulations over 5,075 unique designs with a degree of replication between 3 and 10. These correspond to results given by IMABC (akin to a non-uniform sampling). qHSRI is used to select candidates among the remaining designs, up to q = 2500 if enough replicates are available, hence with a flexible batch size. To speed up prediction and to benefit from a parallel architecture, local GPs are built from 20 nearest neighbors rather than relying on a single global GP. We show in Figure 5 the progress in terms of the symmetric difference to the final estimate of the Pareto front, thus penalizing both under- and over-confident predictions. qHSRI quickly converges to the reference Pareto front, compared to RS.

![10_image_0.png](10_image_0.png)

Figure 5: Left: optimality gap for the Lunar lander problem (d = 12, q = 100, one single run) with the evaluated values and estimated minimum found. Right: results on the CityCOVID data set over 5 macro-runs; thin dashed lines are 5% and 95% quantiles.

## 5 Conclusions And Perspectives

Massive batching for BO comes with the additional challenge that batch selection must happen at a fast pace. Here we demonstrate qHSRI as a flexible and light-weight option, independent of the batch size.
It also handles replication natively, resulting in additional speed-up for noisy simulators without fixing a degree of replication. Hence, this proposed approach makes a sensible, simple, yet efficient, baseline for massive batching. Possible extensions could take into account global effects (e.g., on entropy, integrated variance, etc.) of candidate designs to be less myopic. A more dynamic Sharpe ratio allocation could be beneficial, to improve replication. Finally, while the integration of a few constraints is straightforward, handling more could rely on the use of copulas as in Eriksson and Poloczek (2021) to alleviate the increase of the dimension of the exploration/exploitation trade-off surface. The study of the convergence, e.g., based on results for UCB with various βs, is left for future work. ## References Alvi, A., Ru, B., Calliess, J.-P., Roberts, S., and Osborne, M. A. (2019). Asynchronous batch Bayesian optimisation with improved local penalisation. In *International Conference on Machine Learning*, pages 253–262. Ankenman, B., Nelson, B. L., and Staum, J. (2010). Stochastic kriging for simulation metamodeling. Operations research, 58(2):371–382. Audet, C., Bigeon, J., Cartier, D., Le Digabel, S., and Salomon, L. (2020). Performance indicators in multiobjective optimization. *European journal of operational research*. Azimi, J., Fern, A., and Fern, X. Z. (2010). Batch Bayesian optimization via simulation matching. In Advances in Neural Information Processing Systems, pages 109–117. Citeseer. Baker, E., Barbillon, P., Fadikar, A., Gramacy, R. B., Herbei, R., Higdon, D., Huang, J., Johnson, L. R., Ma, P., Mondal, A., et al. (2022). Analyzing stochastic computer models: A review with opportunities. Statistical Science, 37(1):64–89. Bendtsen, C. (2012). *pso: Particle Swarm Optimization*. R package version 1.0.3. Binois, M. (2015). Uncertainty quantification on Pareto fronts and high-dimensional strategies in Bayesian optimization, with applications in multi-objective automotive design. PhD thesis, Mines Saint-Etienne, EMSE. Binois, M. and Gramacy, R. B. (2021). hetGP: Heteroskedastic Gaussian process modeling and sequential design in R. *Journal of Statistical Software*, 98(13):1–44. Binois, M., Gramacy, R. B., and Ludkovski, M. (2018). Practical heteroscedastic Gaussian process modeling for large simulation experiments. *Journal of Computational and Graphical Statistics*, 27(4):808–821. Binois, M., Huang, J., Gramacy, R. B., and Ludkovski, M. (2019). Replication or exploration? Sequential design for stochastic simulation experiments. *Technometrics*, 61(1):7–23. Binois, M. and Picheny, V. (2019). GPareto: An R package for Gaussian-process-based multi-objective optimization and analysis. *Journal of Statistical Software*, 89(8):1–30. Binois, M., Picheny, V., Taillandier, P., and Habbal, A. (2020). The Kalai-Smorodinsky solution for many-objective Bayesian optimization. *Journal of Machine Learning Research*, 21(150):1–42. Cajka, J. C., Cooley, P. C., and Wheaton, W. D. (2010). Attribute assignment to a synthetic population in support of agent-based disease modeling. *Methods report (RTI Press)*, 19(1009):1–14. Chevalier, C., Bect, J., Ginsbourger, D., Vazquez, E., Picheny, V., and Richet, Y. (2014a). Fast parallel kriging-based stepwise uncertainty reduction with application to the identification of an excursion set. Technometrics, 56(4):455–465. Chevalier, C. and Ginsbourger, D. (2013). 
Fast computation of the multi-points expected improvement with applications in batch selection. In *International Conference on Learning and Intelligent Optimization*, pages 59–69. Springer. Chevalier, C., Ginsbourger, D., and Emery, X. (2014b). Corrected kriging update formulae for batch-sequential data assimilation. In *Mathematics of Planet Earth*, pages 119–122. Springer. City of Chicago (2022). Data Portal. Clark, C. E. (1961). The greatest of a finite set of random variables. *Operations Research*, 9(2):145–162. Daulton, S., Balandat, M., and Bakshy, E. (2020). Differentiable expected hypervolume improvement for parallel multi-objective Bayesian optimization. *Advances in Neural Information Processing Systems*, 33. Daulton, S., Balandat, M., and Bakshy, E. (2021). Parallel bayesian optimization of multiple noisy objectives with expected hypervolume improvement. *arXiv preprint arXiv:2105.08195*. Daxberger, E. A. and Low, B. K. H. (2017). Distributed batch gaussian process optimization. In *International* Conference on Machine Learning, pages 951–960. PMLR. De Ath, G., Everson, R. M., Rahat, A. A., and Fieldsend, J. E. (2021). Greed is good: Exploration and exploitation trade-offs in Bayesian optimisation. ACM Transactions on Evolutionary Learning and Optimization, 1(1):1–22. Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. *Evolutionary Computation, IEEE Transactions on*, 6(2):182–197. Desautels, T., Krause, A., and Burdick, J. W. (2014). Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. *The Journal of Machine Learning Research*, 15(1):3873–3923. Emmerich, M., Giannakoglou, K., and Naujoks, B. (2006). Single-and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. *Evolutionary Computation, IEEE Transactions on*, 10(4):421– 439. Emmerich, M. T. and Deutz, A. H. (2018). A tutorial on multiobjective optimization: fundamentals and evolutionary methods. *Natural computing*, 17(3):585–609. Emmerich, M. T., Deutz, A. H., and Klinkenberg, J. W. (2011). Hypervolume-based expected improvement: Monotonicity properties and exact computation. In *Evolutionary Computation (CEC), 2011 IEEE Congress* on, pages 2147–2154. IEEE. Emmerich, M. T., Yang, K., and Deutz, A. H. (2020). Infill criteria for multiobjective Bayesian optimization. In *High-Performance Simulation-Based Optimization*, pages 3–16. Springer. Eriksson, D., Pearce, M., Gardner, J., Turner, R. D., and Poloczek, M. (2019). Scalable global optimization via local Bayesian optimization. *Advances in Neural Information Processing Systems*, 32:5496–5507. Eriksson, D. and Poloczek, M. (2021). Scalable constrained Bayesian optimization. In *International Conference* on Artificial Intelligence and Statistics, pages 730–738. PMLR. Forrester, A., Sobester, A., and Keane, A. (2008). Engineering design via surrogate modelling: a practical guide. John Wiley & Sons. Frazier, P. I. (2018). Bayesian optimization. In *Recent Advances in Optimization and Modeling of Contemporary* Problems, pages 255–278. INFORMS. Garnett, R. (2022). Bayesian optimization. Ginsbourger, D. (2018). *Sequential Design of Computer Experiments*, pages 1–9. American Cancer Society. Ginsbourger, D., Le Riche, R., and Carraro, L. (2010). Kriging is well-suited to parallelize optimization. In Computational Intelligence in Expensive Optimization Problems, pages 131–162. Springer. Goldberg, P. W., Williams, C. K., and Bishop, C. M. 
(1998). Regression with input-dependent noise: A Gaussian process treatment. *Advances in neural information processing systems*, pages 493–499. Gonzalez, J., Dai, Z., Hennig, P., and Lawrence, N. (2016). Batch Bayesian optimization via local penalization. In *Proceedings of the 19th International Conference on Artificial Intelligence and Statistics*, pages 648–657. Gonzalez, S. R., Jalali, H., and Van Nieuwenhuyse, I. (2020). A multiobjective stochastic simulation optimization algorithm. *European Journal of Operational Research*, 284(1):212–226. Goubier, T., Rakowsky, N., and Harig, S. (2020). Fast tsunami simulations for a real-time emergency response flow. In *2020 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)*, pages 21–26. Gramacy, R. B. (2020). Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences. CRC Press. Gramacy, R. B. and Lee, H. K. (2009). Adaptive design and analysis of supercomputer experiments. Technometrics, 51(2):130–145. Gramacy, R. B. and Lee, H. K. H. (2011). Optimization under unknown constraints. In Bernardo, J., Bayarri, S., Berger, J. O., Dawid, A. P., Heckerman, D., Smith, A. F. M., and West, M., editors, Bayesian Statistics 9, pages 229–256. Oxford University Press. Groves, M. and Pyzer-Knapp, E. O. (2018). Efficient and scalable batch Bayesian optimization using K-means. arXiv preprint arXiv:1806.01159. Guerreiro, A. P. and Fonseca, C. M. (2016). Hypervolume Sharpe-ratio indicator: Formalization and first theoretical results. In *International Conference on Parallel Problem Solving from Nature*, pages 814–823. Springer. Guerreiro, A. P. and Fonseca, C. M. (2020). An analysis of the Hypervolume Sharpe-Ratio Indicator. *European* Journal of Operational Research, 283(2):614–629. Gupta, S., Shilton, A., Rana, S., and Venkatesh, S. (2018). Exploiting strategy-space diversity for batch bayesian optimization. In *International conference on artificial intelligence and statistics*, pages 538–547. PMLR. Haftka, R. T., Villanueva, D., and Chaudhuri, A. (2016). Parallel surrogate-assisted global optimization with expensive functions–a survey. *Structural and Multidisciplinary Optimization*, 54(1):3–13. Hennig, P. and Schuler, C. J. (2012). Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 98888:1809–1837. Hernández-Lobato, J. M., Requeima, J., Pyzer-Knapp, E. O., and Aspuru-Guzik, A. (2017). Parallel and distributed Thompson sampling for large-scale accelerated exploration of chemical space. In International conference on machine learning, pages 1470–1479. PMLR. Hoffman, M. D., Brochu, E., and de Freitas, N. (2011). Portfolio allocation for Bayesian optimization. In UAI, pages 327–336. Citeseer. Hunter, S. R., Applegate, E. A., Arora, V., Chong, B., Cooper, K., Rincón-Guevara, O., and Vivas-Valencia, C. (2017). An introduction to multi-objective simulation optimization. *Optimization Online*. Janusevskis, J., Le Riche, R., Ginsbourger, D., and Girdziusas, R. (2012). Expected improvements for the asynchronous parallel global optimization of expensive functions: Potentials and challenges. In *Learning* and Intelligent Optimization, pages 413–418. Springer. Jones, D., Schonlau, M., and Welch, W. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492. Kandasamy, K., Krishnamurthy, A., Schneider, J., and Póczos, B. (2018). Parallelised Bayesian optimisation via Thompson sampling. 
In *International Conference on Artificial Intelligence and Statistics*, pages 133–142. PMLR. Letham, B., Karrer, B., Ottoni, G., Bakshy, E., et al. (2018). Constrained Bayesian optimization with noisy experiments. *Bayesian Analysis*. Lukovic, M. K., Tian, Y., and Matusik, W. (2020). Diversity-guided multi-objective Bayesian optimization with batch evaluations. *Advances in Neural Information Processing Systems*, 33:6–12. Macal, C. M., Collier, N. T., Ozik, J., Tatara, E. R., and Murphy, J. T. (2018). ChiSIM: An Agent-Based Simulation Model of Social Interactions in a Large Urban Area. In 2018 Winter Simulation Conference (WSC), pages 810–820. Mandel, J., Vejmelka, M., Kochanski, A., Farguell, A., Haley, J., Mallia, D., and Hilburn, K. (2019). An interactive data-driven hpc system for forecasting weather, wildland fire, and smoke. In 2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC), pages 35–44. Marmin, S., Chevalier, C., and Ginsbourger, D. (2015). Differentiating the multipoint expected improvement for optimal batch design. In *International Workshop on Machine Learning, Optimization and Big Data*, pages 37–48. Springer. Mersmann, O. (2020). *mco: Multiple Criteria Optimization Algorithms and Related Functions*. R package version 1.15.6. Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of Bayesian methods for seeking the extremum. *Towards Global Optimization*, 2(117-129):2. Mutny, M. and Krause, A. (2018). Efficient high dimensional bayesian optimization with additivity and quadrature fourier features. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, *Advances in Neural Information Processing Systems 31*, pages 9005–9016. Curran Associates, Inc. Nguyen, V., Rana, S., Gupta, S. K., Li, C., and Venkatesh, S. (2016). Budgeted batch Bayesian optimization. In *2016 IEEE 16th International Conference on Data Mining (ICDM)*, pages 1107–1112. IEEE. Oh, C., Gavves, E., and Welling, M. (2018). Bock: Bayesian optimization with cylindrical kernels. In International Conference on Machine Learning, pages 3868–3877. PMLR. Ozik, J., Wozniak, J. M., Collier, N., Macal, C. M., and Binois, M. (2021). A population data-driven workflow for COVID-19 modeling and learning. *The International Journal of High Performance Computing* Applications, 35(5):483–499. Parr, J. M. (2012). *Improvement Criteria for Constraint Handling and Multiobjective Optimization*. PhD thesis, University of Southampton. Picheny, V., Green, D. G., and Roustant, O. (2021). *DiceOptim: Kriging-Based Optimization for Computer* Experiments. R package version 2.1.1. Poloni, C., Giurgevich, A., Onesti, L., and Pediroda, V. (2000). Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics. Computer Methods in Applied Mechanics and Engineering, 186(2):403–420. Ponweiser, W., Wagner, T., and Vincze, M. (2008). Clustered multiple generalized expected improvement: A novel infill sampling criterion for surrogate models. In *Proc. (IEEE World Congress Computational* Intelligence). IEEE Congress Evolutionary Computation CEC 2008, pages 3515–3522. R Core Team (2023). *R: A Language and Environment for Statistical Computing*. R Foundation for Statistical Computing, Vienna, Austria. Rasmussen, C. E. and Williams, C. (2006). *Gaussian Processes for Machine Learning*. MIT Press. Rojas-Gonzalez, S. and Van Nieuwenhuyse, I. (2020). 
A survey on kriging-based infill algorithms for multiobjective simulation optimization. *Computers & Operations Research*, 116:104869. Rontsis, N., Osborne, M. A., and Goulart, P. J. (2020). Distributionally robust optimization techniques in batch Bayesian optimization. *Journal of Machine Learning Research*, 21(149):1–26. Roustant, O., Ginsbourger, D., and Deville, Y. (2012). DiceKriging, DiceOptim: Two R packages for the analysis of computer experiments by kriging-based metamodeling and optimization. *Journal of Statistical* Software, 51(1):1–55. Rutter, C. M., Ozik, J., DeYoreo, M., and Collier, N. (2019). Microsimulation model calibration using incremental mixture approximate Bayesian computation. *The Annals of Applied Statistics*, 13(4):2189–2212. Schonlau, M., Welch, W. J., and Jones, D. R. (1998). Global versus local search in constrained optimization of computer models. *Lecture Notes-Monograph Series*, pages 11–25. Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and de Freitas, N. (2016). Taking the human out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175. Sóbester, A., Leary, S. J., and Keane, A. J. (2004). A parallel updating scheme for approximating and optimizing high fidelity computer simulations. *Structural and multidisciplinary optimization*, 27(5):371–383. Srinivas, N., Krause, A., Kakade, S., and Seeger, M. (2010). Gaussian process optimization in the bandit setting: no regret and experimental design. In Proceedings of the 27th International Conference on International Conference on Machine Learning, pages 1015–1022. Svenson, J. D. (2011). *Computer Experiments: Multiobjective Optimization and Sensitivity Analysis*. PhD thesis, The Ohio State University. Tran, A., Sun, J., Furlan, J. M., Pagalthivarthi, K. V., Visintainer, R. J., and Wang, Y. (2019). pBO-2GP-3B: A batch parallel known/unknown constrained Bayesian optimization with feasibility classification and its applications in computational fluid dynamics. *Computer Methods in Applied Mechanics and Engineering*, 347:827–852. Villemonteix, J., Vazquez, E., Sidorkiewicz, M., and Walter, E. (2009a). Global optimization of expensiveto-evaluate functions: an empirical comparison of two sampling criteria. *Journal of Global Optimization*, 43(2):373–389. Villemonteix, J., Vazquez, E., and Walter, E. (2009b). An informational approach to the global optimization of expensive-to-evaluate functions. *Journal of Global Optimization*, 44(4):509–534. Wang, H., van Stein, B., Emmerich, M., and Back, T. (2017). A new acquisition function for Bayesian optimization based on the moment-generating function. In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 507–512. IEEE. Wang, J., Clark, S. C., Liu, E., and Frazier, P. I. (2020). Parallel Bayesian global optimization of expensive functions. *Operations Research*, 68(6):1850–1865. Wang, K., Pleiss, G., Gardner, J., Tyree, S., Weinberger, K. Q., and Wilson, A. G. (2019). Exact Gaussian processes on a million data points. *Advances in Neural Information Processing Systems*, 32:14648–14659. Wang, Z., Gehring, C., Kohli, P., and Jegelka, S. (2018). Batched large-scale Bayesian optimization in high-dimensional spaces. In *International Conference on Artificial Intelligence and Statistics*. Wang, Z. and Jegelka, S. (2017). Max-value entropy search for efficient Bayesian optimization. In International Conference on Machine Learning, pages 3627–3635. PMLR. Wilson, J., Hutter, F., and Deisenroth, M. (2018). 
Maximizing acquisition functions for Bayesian optimization. In *Advances in Neural Information Processing Systems*, pages 9884–9895.

Yang, K., Emmerich, M., Deutz, A., and Fonseca, C. M. (2017). Computing 3-D expected hypervolume improvement and related integrals in asymptotically optimal time. In *International Conference on Evolutionary Multi-Criterion Optimization*, pages 685–700. Springer.

Yevseyeva, I., Guerreiro, A. P., Emmerich, M. T., and Fonseca, C. M. (2014). A portfolio optimization approach to selection in multiobjective evolutionary algorithms. In *International Conference on Parallel Problem Solving from Nature*, pages 672–681. Springer.

Zhang, B., Gramacy, R. B., Johnson, L., Rose, K. A., and Smith, E. (2020). Batch-sequential design and heteroskedastic surrogate modeling for delta smelt conservation. *arXiv preprint arXiv:2010.06515*.

Zhang, J., Ma, Y., Yang, T., and Liu, L. (2017). Estimation of the Pareto front in stochastic simulation through stochastic kriging. *Simulation Modelling Practice and Theory*, 79:69–86.

Zhang, Q., Liu, W., Tsang, E., and Virginas, B. (2010). Expensive multiobjective optimization by MOEA/D with Gaussian process model. *Evolutionary Computation, IEEE Transactions on*, 14(3):456–474.

## A Adaptation To Asynchronous Batch Bayesian Optimization

The standard batch (or multi-points) BO loop, where evaluations are conducted in parallel while waiting for all of them to finish, is presented in Algorithm 1. Next we briefly discuss the adaptations required for the asynchronous setup. The main feature of the portfolio approach is to give a weight vector corresponding to Pareto optimal points on the mean/standard deviation Pareto front P. In the synchronous setting, q points are selected based on the weights: the q best ones in the noiseless setting, while replicates can occur in the noisy setting. If q′ additional points can be evaluated without new data becoming available (or while waiting for new candidates to evaluate), the change in the asynchronous setup amounts to considering that q + q′ batch points (assets) can be selected according to the weights. The first q ones will remain unchanged while q′ additional ones can be run. An example is provided in Figure 6.

![17_image_0.png](17_image_0.png)

Figure 6: Batch allocation on indices according to weights w (black points) for 10 candidates. q = 5 designs are initially allocated, as represented by the black segments. If q′ = 2 additional designs can be evaluated, then the allocation is recomputed with the same weights and leads to the red dashed segments.
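For concreteness, a sketch of the weight-based allocation of Section 3.3 and of the asynchronous recomputation illustrated in Figure 6; allocate_batch and zstar are illustrative names, and ties in the rounding, resolved randomly in the paper, may make the sum over- or undershoot q by a few units.

```r
## Find gamma such that sum(round(gamma * zstar)) = q, by bisection, then
## return the number of evaluations (possibly replicates) per candidate.
## Assumes zstar is the Sharpe-ratio weight vector with sum(zstar) = 1.
allocate_batch <- function(zstar, q, tol = 1e-9) {
  lo <- 0; hi <- 2 * q / max(zstar)  # the sum is 0 at lo and at least 2q at hi
  while (hi - lo > tol) {
    gam <- (lo + hi) / 2
    if (sum(round(gam * zstar)) < q) lo <- gam else hi <- gam
  }
  round(hi * zstar)
}

## Asynchronous extension: if qprime extra evaluations become available before
## new data arrive, recompute the allocation for q + qprime with the same
## weights; the first q allocations remain (essentially) unchanged.
# alloc  <- allocate_batch(zstar, q = 5)
# alloc2 <- allocate_batch(zstar, q = 5 + 2)
```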
## B Additional Experimental Results

We complement the results of Section 4 on noisy problems with results on noiseless ones. The R package DiceKriging (Roustant et al., 2012) is used for deterministic GP modeling. In this deterministic setup, d is increased to accommodate larger batches via repeated versions of these problems, as used, e.g., in Oh et al. (2018). The timing results of all synthetic benchmarks are presented in Tables 1 and 2, with best results in bold. qHSRI and PF are always much faster than the alternatives, even the approximated qEI criterion (qAEI) or qTS. The R code (R Core Team, 2023) of the approach and the CityCOVID data are available as supplementary material. Results have been obtained in parallel on dual-Xeon Skylake SP Silver 4114 @ 2.20GHz (20 cores) and 192 GB RAM (or similar nodes). Lunar lander tests have been run on a laptop.

The detailed results of the performance over time are given in Figure 7, where qHSRI shows better results than the alternatives, closely matched by PF except on Branin, where it does worse. qTS performs badly as the discretization is insufficient given the problem dimension. The results over iterations in Figure 8 show that the performance of qHSRI matches that of qEI and improves over qEHI. qTS underperforms but is still better than RS. PF shows results similar to qHSRI on the multi-objective problems and Hartmann6 with q = 25, but is worse in the other test cases. For the smallest batch size, q = 10, qAEI matches the performance of qEI, while being faster.

| f-d-q | qEI | qAEI | qHSRI | qPF | qTS |
|-----------|-------------|-------------|------------|------------|------------|
| B-12-10 | 24725 (133) | 5901 (54) | 1103 (104) | 1024 (160) | 1835 (5) |
| B-12-25 | − | 5856 (61) | 451 (46) | 423 (66) | 1760 (3) |
| H-12-10 | 24258 (115) | 4957 (131) | 969 (38) | 1266 (144) | 1938 (22) |
| H-12-25 | − | 4585 (129) | 419 (24) | 510 (76) | 1816 (9) |
| B(h)-2-25 | − | 2470 (93) | 57 (3) | 342 (36) | 305 (46) |
| H(h)-6-25 | − | 22012 (717) | 288 (33) | 4714 (702) | 5327 (503) |

Table 1: Averaged timing results (in s), with standard deviations in parentheses. B is for Branin, H for Hartmann6, (h) for heteroscedastically noisy problems, and '−' indicates when not applicable.

| f-d-q | qEHI | qHSRI | qPF |
|------------|--------------|----------|------------|
| P1-6-50 | 16308 (2118) | 678 (25) | 580 (26) |
| P1(h)-2-50 | 35350 (1363) | 567 (59) | 5110 (407) |
| P2-6-50 | 14022 (1818) | 635 (24) | 553 (20) |
| P2(h)-2-50 | 37386 (1123) | 608 (44) | 5148 (414) |

Table 2: Averaged timing results (in s), with standard deviations in parentheses. (h) is for heteroscedastically noisy problems.

To highlight that replication occurs, Figure 9 represents the number of unique designs over iterations for the mono-objective noisy problems. Notice that replication is present in the initial design and that it can happen for all methods when sampling vertices of the hypercube.

We complement the synthetic experiments in Section 4 with results for larger values of q using qHSRI. The mono-objective results are in Figure 10 and the multi-objective ones in Figure 11. Increasing q shows degrading results sample-wise, as there are fewer updates of the GP model. But increasing q decreases the time to solution.

## C Details On CityCOVID Calibration Parameters

The nine variables of the simulator are given in Table 3.
![19_image_0.png](19_image_0.png)

Figure 7: Noiseless results over time. Optimality gap or hypervolume difference over 20 macro-runs is given. Thin dashed lines are 5% and 95% quantiles.

![19_image_1.png](19_image_1.png)

![19_image_2.png](19_image_2.png)

Figure 8: Noiseless results over iterations. Optimality gap or hypervolume difference over 20 macro-runs is given. Thin dashed lines are 5% and 95% quantiles.

![19_image_3.png](19_image_3.png)

Figure 9: Number of unique designs over iterations for the mono-objective test problems. Thin dashed lines are 5% and 95% quantiles.

![20_image_0.png](20_image_0.png)

Figure 10: Mono-objective results over iterations and over time. Optimality gap and estimated optimality gap for noisy tests over 60 macro-runs are given.

![20_image_1.png](20_image_1.png)

Figure 11: Multi-objective results over time and iterations. Hypervolume difference over a reference Pareto front and its counterpart for the estimated Pareto set for noisy tests over 40 macro-runs are given.
Review 1:

Summary: In each iteration, Bayesian optimization samples new candidates via an acquisition function, which is based on a probabilistic model of the objective function. The acquisition function quantifies the trade-off between exploration in the regions of the search space where the model is uncertain and exploitation where the predictive mean of the model is low. This trade-off can be viewed as a multi-objective problem, and popular acquisition functions, such as Expected Improvement or UCB, can be seen as scalarised formulations of this problem. The main contribution of the paper is a new batch selection method for synchronous parallel Bayesian optimization, where, in each iteration, candidates are sampled in batches rather than sequentially. The computational cost of selecting a batch of candidates for existing methods increases with the batch size, which can become the computational bottleneck if the objective function is not too expensive to evaluate. The paper proposes to use the hypervolume Sharpe ratio indicator (HSRI) to select a batch, where the computational cost increases linearly with the number of objectives instead.

Strengths and Weaknesses:

## Strengths

It seems quite sensible to explicitly model the exploration / exploitation trade-off via the predictive mean and standard deviation of the model and to select the next batch by sampling Pareto optimal points.

## Weaknesses

### Motivation of the paper

I am a bit sceptical about the motivation of the paper. Bayesian optimization seems to be particularly useful when the objective function is expensive to evaluate, since it is often more sample efficient than other approaches, such as random-search based methods or evolutionary algorithms. With this in mind, I am wondering how relevant the scenario of such a large batch setting is, given that it only requires a few seconds per evaluation with q=100. Also, it seems that the large body of work on asynchronous Bayesian optimization does not suffer from this problem and might be more suitable for this setting.

### Writing

Overall, I found the paper rather confusingly written and I do not think that it is very accessible for readers not deeply familiar with the subject. For example, while Gaussian processes are described in Section 2, Bayesian optimization is not formally introduced: the acquisition function is not properly defined, and the Bayesian optimization loop is not described. Instead, the paper immediately jumps into batch Bayesian optimization. Furthermore, the paper contains a lot of jargon (hypervolume) and many terms are not mathematically well defined (probability of improvement, Pareto set, etc).

### Weak empirical evaluation

I have several concerns about the empirical evaluation of the paper:
- The choice to use the same hyperparameters for the GP for each objective seems questionable. I expect especially the length scale to change with each objective function.
- The paper presents only a single run on lunar lander, which seems to exhibit a substantial amount of noise. This seems insufficient to draw any reliable conclusions.
- Except for the CityCOVID dataset, the paper only considers synthetic problems, which makes it hard to judge how relevant the proposed gains are in practice.
Requested Changes:
- Extend Section 2 to introduce Bayesian optimization and give a high-level overview of the basic algorithm.
- I'd recommend evaluating the proposed method on surrogate benchmarks from the literature, which require the same computational cost as synthetic benchmarks but resemble practical use-cases. See for example: HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter.
- Consider asynchronous Bayesian optimization methods as baselines.
- Further clarifications:
  - What is the difference between estimated HV difference and HV difference?
  - Revise Section 3.3: it's hard to understand the mapping of different components of HSRI to batch Bayesian optimization.

Broader Impact Concerns: I do not see any broader ethical concerns of this work.

==================================================

Review 2:

Summary: This paper tackles large batch (or parallel) multi-objective Bayesian optimization (BO), a problem that is known to be computationally challenging. The authors propose an approach based on solving a "portfolio optimization" problem once a set of points on the exploration-exploitation Pareto front (where the objectives are posterior means and posterior variances) are identified. Using this method, the authors show large (order of magnitude) improvements over existing methods.

Strengths and Weaknesses:

Strengths:
- This is an important problem, and currently there is, to my knowledge, no satisfactory method for performing extremely large batch BO, besides optimizing sequentially.

Weaknesses:
- The major weakness, in my view, is that the paper needs improved clarity and a more precise description of the method. There are quite a few points that were unclear to me, listed in the next few bullet points.
- In Algorithm 1, what is `s` in X_s? It seems like this is the number of points to use to approximate the exploration-exploitation Pareto frontier. But should `s` be smaller or larger than `q`? If `q` is really large, and we want to find unique points, then `s` would also have to be really large. In that case, the first line of Algorithm 1 becomes a difficult MO problem. I would be interested in reading some discussion on this point.
- I did not fully comprehend why we should not use all of the posterior standard deviations, especially for the MO case (the text in the paper mentions that there is correlation). Could this be explained more clearly?
- The paper describes a two-step approach of first finding a Pareto frontier and then solving a Sharpe ratio optimization problem. Is there a way to write this as a single joint optimization problem that sheds more light on exactly what the end goal is? In other words, it would be nice to be able to write down the acquisition function using some simple equations that illustrate the main idea.
- The novelty of this paper beyond existing work (e.g., Guerreiro and Fonseca (2016), Gupta et al. (2018), Yevseyeva et al. (2014)) should be more clearly discussed. My sense is that the current paper adapts these previous methods to the batch BO setting.

Requested Changes: I think all of the clarity related issues above are necessary for publication of this paper.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: Bayesian Optimization (BO) is a well-known and sample-efficient technique for optimizing expensive black-box functions.
This is important in many applications where no closed form of the optimization problem is available, including physics and biology, but also hyperparameter optimization of ML algorithms. The authors of the paper at hand address the problem of parallel BO, i.e., the efficiency of the optimization process can be improved by querying more than one solution candidate (aka point) at the same time. In particular, they propose BO with a large number of parallel evaluations, e.g., 25, 50 or even 2500, as the authors use in their experiments. Prior parallel BO approaches often do not scale well to this setting because of the complexity of selecting several points. To this end, the authors propose a new variant of the hypervolume Sharpe ratio indicator (HSRI) whose overhead scales independently of the number of parallel points to be evaluated. It can further be extended to multi-objective BO. In their experiments, they study the performance of qHSRI on artificial functions and on a real-world benchmark related to COVID.

Strengths and Weaknesses:

### Strengths

* I agree with the authors regarding the problem setting and that BO is not well equipped for settings with many parallel evaluations. Furthermore, parallelizing many evaluations can be important in some scenarios.
* The scalability of qHSRI regarding overhead is impressive and shows that the authors achieved this goal.
* In several experiments, the authors show that qHSRI is very fast and performs reasonably well (see also weaknesses); in particular, it performs regret-wise fairly well while generating much less overhead.

### Weaknesses

* First of all, the paper was hard for me to read. In particular, the related work is mixed into the main story all over the place. For example, in the introduction, I wondered for quite some time what the idea and contributions of the paper are.
* Although Section 2 and Subsections 3.1 + 3.2 provide a very nice introduction to the topic, I was lost in Subsection 3.3. It uses a new notation with assets, returns, and so on, and fails to make the connection to the original BO literature. Furthermore, it is not even coherent on its own, e.g., "the outcome is a set of weights" refers to z without being explicit about it. It took me quite some time to understand this subsection.
* The pseudo-code in Algorithm 1 is not well explained. In fact, the text only says "The pseudo-code of the approach is given in Algorithm 1."
* I'm missing practical experiments (besides the artificial functions) with some reasonable baselines.
* Artificial functions such as Branin and Hartmann are known to be a bad proxy for real-world optimization landscapes.
* On the CityCOVID benchmark, only RO and qHSRI are compared with each other.
* I missed a comparison against Eriksson et al. 2019, which was mentioned several times and can also be parallelized.
* Why is RO missing in the "Time (s)" plots? It should obviously be the fastest of all methods.
* Although qHSRI is the fastest method, it is not the best w.r.t. the optimality gap; in fact, qTS seems to be better and also decently fast.

### Further Comments

* I don't fully understand the point: "Taking all p standard deviations is possible, but the corresponding objectives are correlated since they increase with the distance to observed design points". In principle, I agree, but in the assumed setting with heteroscedastic noise, I would argue that one should also assume that the noise level is different for different objectives. Therefore, the std devs would also be different and not perfectly correlated.
* "It could be argued that the qHSRI approach ignores distances in the input space …". Would it be possible to show or quantify that in some of the experiments?
* I have never read "random search optimization" before. I believe "random search" (w/o optimization) is much more common.
* Please explain why the "Iteration" curves are so bumpy. I would have expected monotonically decreasing curves, similar to the "n" plots.
* Why is the heteroscedastic noise in Subsection 4.1 a reasonable choice?
* The x-axis label in Figure 5 (right) is missing.

Requested Changes: Based on the points raised above, I see the following points as being critical:
* Add a related work section that discusses the differences (pros and cons) of qHSRI and other approaches -- cleaning up the introduction.
* Add a better introduction to Subsection 3.3 explaining how HSRI relates to BO.
* Add further practical (i.e., non-artificial) benchmarks; e.g., you could use HPO benchmarks from HPOBench.

Points to further improve the paper include:
* Add an empirical comparison against Eriksson et al. 2019.
* Explain Algorithm 1 step by step.
* Add a scaling study on q (e.g., this can be done on one of the artificial functions that allow an arbitrary number of dimensions).

Broader Impact Concerns: None

==================================================
[1] HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter [2] YAHPO Gym - An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder, Bernd Bischl [3] Scalable Global Optimization via Local Bayesian Optimization D Eriksson, M Pearce, J Gardner, RD Turner, M Poloczek ==================================================
# Multiple Kronecker RLS Fusion-Based Link Propagation For Drug-Side Effect Prediction

Yuqing Qian ustsyuqingqian@gmail.com
Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, P.R.China

Ziyu Zheng *smyzz16@nottingham.edu.cn*
Department of Mathematical Sciences, University of Nottingham Ningbo, P.R.China

Prayag Tiwari* *prayag.tiwari@ieee.org*
School of Information Technology, Halmstad University, Sweden

Yijie Ding* wuxi_dyj@163.com
Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, P.R.China

Quan Zou *zouquan@nclab.net*
Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, P.R.China

*Corresponding author.

Reviewed on OpenReview: *https://openreview.net/forum?id=LCPzaR9mML*

## Abstract

Drug-side effect prediction has become an essential area of research in the field of pharmacology. As the use of medications continues to rise, so does the importance of understanding and mitigating the potential risks associated with them. At present, researchers have turned to data-driven methods to predict drug-side effects. Drug-side effect prediction is a link prediction problem, and the related data can be described from various perspectives. To process these kinds of data, a multi-view method, called Multiple Kronecker RLS fusion-based link propagation (MKronRLSF-LP), is proposed. MKronRLSF-LP extends Kron-RLS by finding consensus partitions and applying multiple graph Laplacian constraints in the multi-view setting. Both of these multi-view settings contribute to a higher-quality result. Extensive experiments have been conducted on drug-side effect datasets, and our empirical results provide evidence that our approach is effective and robust.

## 1 Introduction

Pharmacovigilance is critical to drug safety and surveillance. The field of pharmacovigilance plays a crucial role in public health by continuously monitoring and evaluating the safety profile of drugs. Pharmacovigilance involves collecting and analyzing data from various sources, including health care professionals (Yang et al., 2016), patients, regulatory authorities, and pharmaceutical companies. These data are then used to identify possible side effects and assess their severity and frequency (Da Silva & Krishnamurthy, 2016; Galeano et al., 2020). Traditionally, drug-side effects were primarily identified through spontaneous reporting systems, where health care professionals and patients reported adverse events to regulatory authorities. However, this approach has limitations, such as underreporting and delayed detection. To overcome these limitations, researchers have turned to data-driven methods to find drug-side effects. With the advent of electronic health records, large-scale databases containing valuable information on medication usage and patient outcomes have become available. These databases have allowed researchers to analyze vast amounts of data to identify patterns between drugs and side effects. One of the most commonly used approaches to drug-side effect prediction is model-based methods. Model-based methods involve the use of advanced statistical and machine learning techniques to extract knowledge from large datasets. By analyzing patterns in the data, researchers can identify potential drug-side effects and their associated risk factors.
In their work, Pauwels et al. (2011) predicted the side effects of drugs (Pau's method) by applying K-nearest neighbor (KNN), support vector machine (SVM), ordinary canonical correlation analysis (OCCA) and sparse canonical correlation analysis (SCCA) to drug chemical substructures; their experimental results suggest that SCCA performs best. Sayaka et al. (2012) utilized SCCA to associate targeted proteins with side effects (Miz's method). Liu et al. (2012) predicted drug side effects (Liu's method) using SVM and multivariate information, such as the phenotypic characteristics, chemical structures, and biological properties of the drug. Cheng et al. (2013) proposed a phenotypic network inference classifier to associate drugs with side effects (Cheng's method). NDDSA (Shabani-Mashcool et al., 2020) models the drug-side effect prediction problem using a bipartite graph and applies a resource allocation method to find new links. MKL-LGC (Ding et al., 2018) integrates multiple kernels to describe the diversified information of drugs and side effects. These kernels are then combined using an optimized linear weighting algorithm, and the Local and Global Consistency algorithm (LGC) is used to estimate new potential associations based on the integrated kernel information. Deep learning techniques (Xu et al., 2022) have been increasingly used to predict drug side effects in recent years. These methods leverage the power of neural networks to analyze complex relationships between drugs, genes, and proteins. In SDPred (Zhao et al., 2022), chemical-chemical associations, chemical substructures, drug target information, word representations of drug molecular substructures, semantic similarity of side effects, and drug-side effect associations are integrated. To learn drug-side effect pair representation vectors from different interaction maps, SDPred uses a CNN module; drug interaction profile similarity (DIPA) made the largest contribution. GCRS (Xuan et al., 2022) builds a complex deep-learning structure to fuse and learn the specific topologies, common topologies and pairwise attributes from multiple drug-side effect heterogeneous graphs, which are constructed using drug-side effect associations, drug-disease associations and drug chemical substructures. Based on a graph attention network, Zhao et al. (2021) developed a prediction model for drug-side effect frequencies that integrated information on similarity, known drug-side effect frequencies, and word embeddings. The above deep learning-based methods are a form of pairwise learning. To keep the samples balanced, they select positive samples from trusted databases and negative samples by random sampling. Such a treatment results in a certain loss of information and introduces noise into the labels. Drug-side effect prediction is a classic link prediction problem (Yuan et al., 2019). To solve this kind of problem, many multi-view methods have been proposed in recent years (Ding et al., 2021; 2016; Cichonska et al., 2018). Based on when information fusion happens during the training process, multi-view methods can roughly be divided into three categories: early fusion, late fusion and fusion during the training phase. Fig. 1 illustrates our taxonomy of the multi-view learning literature. In early fusion techniques, the views are combined before the training process is performed.
Multiple kernel learning (MKL) (Wang et al., 2023b; Cichonska et al., 2018; Nascimento et al., 2016) is a typical early fusion technique. For each view, it computes one or more kernels and then learns the optimal kernel from the base kernels. For example, MKL-KroneckerRLS (Ding et al., 2019) combines diversified information using Centered Kernel Alignment-based Multiple Kernel Learning (CKA-MKL). Based on the optimal kernel, Kronecker regularized least squares (Kron-RLS) was used to classify drug-side effect pairs. It must be noted that the performance of these methods relies heavily on the learned optimal kernel, which may be redundant or miss some key information. In late fusion techniques, a different model is trained separately for each view, and a weighted combination is later taken as the final model. For instance, in Zhang et al. (2016), an ensemble model was constructed by integrating multiple methods, each providing a unique view. The model incorporates Liu et al. (2012), Cheng et al. (2013), an Integrated Neighbour-based Method (INBM), and a Restricted Boltzmann Machine-based Method (RBMBM). Each model is trained independently, and the final partition is the weighted average of the base partitions. Late fusion allows for individual modeling of inherently different views, providing flexibility and an advantage when dealing with diverse data. However, its drawback is the delayed coupling of information, limiting the extent to which each model can benefit from the information provided by other views.

![2_image_0.png](2_image_0.png)

Figure 1: Taxonomy of multi-view learning framework literature. Note: "Partition" commonly refers to the learned result. This concept is more commonly found in classification and clustering tasks (Liu et al., 2023; Bruno & Marchand-Maillet, 2009; Wang et al., 2019). (a) Early fusion: the views are combined before the training process is performed; (b) Late fusion: a different model for each view is separately trained and then a combination is taken as the final partition; (c) Fusion during the training phase: it has some degree of freedom to model the views differently but also ensures that information from other views is exploited during the training phase.

A third category is fusion during the training phase, which combines the benefits of both fusion types. It fuses multiple views at the partition level and enables the model to explore all views while being allowed to model each view differently. This framework has been applied to classification models (Houthuys & Suykens, 2021; Houthuys et al., 2018; Qian et al., 2022b; Xie & Sun, 2020) and clustering models (Lv et al., 2021; Houthuys et al., 2018; Wang et al., 2023a). By exploring consensus or complementary information from multiple views, multi-view methods can achieve better performance than single-view methods. The consensus principle seeks to achieve agreement among views. For instance, Wang et al. (2019) maximized the alignment between the consensus partition (clustering matrix) and the weighted combination of base partitions. In this work, we apply this technique to the Kron-RLS algorithm due to its fast and scalable nature. The proposed method is named Multiple Kronecker RLS fusion-based link propagation (MKronRLSF-LP). Our work's main contributions are listed as follows: (1) We extend Kron-RLS to the multiple information fusion setting by finding the consensus partition and multiple graph Laplacian constraint.
Specifically, we generate multiple partitions by normal Kron-RLS and adaptively learn a weight for each partition to control its contribution to the shared partition. This design fuses the partitions while still allowing for some flexibility in modeling each single view. Furthermore, multiple graph Laplacian regularization is adopted to boost the performance of semi-supervised learning. Both settings co-evolve toward better performance. (2) To fuse the features of multiple views more reasonably, we design an iterative optimization algorithm to effectively fuse multiple Kron-RLS submodels and obtain the final predictive model of drug-side effects. Throughout the optimization, we avoid explicit computation of any pairwise matrices, which makes our method suitable for solving problems in large pairwise spaces. (3) The proposed method can address the general link prediction problem; it is empirically tested on four real drug-side effect datasets, which are highly sparse. The results show that MKronRLSF-LP can achieve excellent classification results and outperform other competitive methods.

The rest of this paper is organized as follows. Section 2 provides a description of the drug-side effect prediction problem. Section 3 reviews work related to MKronRLSF-LP. Section 4 comprehensively presents the proposed MKronRLSF-LP. After reporting the experimental results in Section 5, we conclude this paper and mention future work in Section 6.

## 2 Problem Description

Identification of drug-side effects is an example of the link prediction problem, which has the aim of predicting how likely it is that there is a link between two arbitrary nodes in a network. This problem can also be seen as a recommendation system (Jiang et al., 2019; Fan et al., 2021) task. Let the drug nodes and side effect nodes of a network be $D = \{d_1, d_2, \ldots, d_N\}$ and $S = \{s_1, s_2, \ldots, s_M\}$, respectively, where N and M denote the numbers of drug and side effect nodes. We define an adjacency matrix $\mathbf{F} \in \mathbb{R}^{N \times M}$ to represent the associations between drugs and side effects. Each element of $\mathbf{F}$ is defined as $\mathbf{F}_{i,j} = 1$ if the node pair $(d_i, s_j)$ is linked and $\mathbf{F}_{i,j} = 0$ otherwise. Link prediction has the aim of predicting whether a link exists for a node pair $(d_i, s_j) \in D \times S$ with unknown state; thus, it is a classification problem. Most methods use regression algorithms to predict a score (ranging from 0 to 1), which we call the link confidence. Then, a class of 0 or 1 is assigned to the predicted score by thresholding. Higher link confidence indicates a greater probability of the link existing, while lower values indicate the opposite. We define a new matrix $\hat{\mathbf{F}}$, which is estimated by the prediction model; each element $\hat{\mathbf{F}}_{i,j}$ represents the predicted link confidence for the node pair $(d_i, s_j)$. Figure 5 summarizes the link prediction problem discussed in this paper.

## 3 Related Work

## 3.1 Regularized Least Squares

The objective function of Regularized Least Squares (RLS) regression is:

$$\operatorname*{arg\,min}_{f}\ \frac{1}{2}\left\|\mathbf{F}-f(\mathbf{K})\right\|_{F}^{2}+\frac{\lambda}{2}\left\|f\right\|_{K}^{2},\tag{1}$$

where λ is a regularization parameter and $\left\|f\right\|_{K}$ denotes the RKHS norm (Kailath, 1971) of f(·).
f(·) is the prediction function and is defined as:

$$f(\mathbf{K})=\mathbf{K}\mathbf{a},\tag{2}$$

where $\mathbf{a}$ is the solution of the model and $\mathbf{K}$ is a kernel matrix with elements

$$\mathbf{K}_{i,j}=k\left(d_{i},d_{j}\right)\ \ (i,j=1,\ldots,N),\tag{3}$$

where k represents the kernel function. By formulating the stationary points of Equation 1 and eliminating the unknown parameters $\mathbf{a}$, the following solution is obtained:

$$\hat{\mathbf{F}}=\mathbf{K}(\mathbf{K}+\lambda\mathbf{I}_{N})^{-1}\mathbf{F}.\tag{4}$$

Only one feature space is considered in this model. In the drug-side effect identification problem, there are two feature spaces: the drug space and the side effect space.

## 3.2 Kronecker Regularized Least Squares

Combining the kernels of the two spaces into a single large kernel that directly relates drug-side effect pairs is a better option. The Kronecker product kernel (Hue & Vert, 2010) is used for this. Given the drug kernel $\mathbf{K}_D$ and the side effect kernel $\mathbf{K}_S$, the Kronecker product kernel is

$$\mathbf{K}=\mathbf{K}_{S}\otimes\mathbf{K}_{D},\tag{5}$$

where ⊗ denotes the Kronecker product (Laub, 2004). By applying the Kronecker product kernel to RLS, the objective function of Kronecker Regularized Least Squares (Kron-RLS) is obtained:

$$\operatorname*{arg\,min}_{f}\ \frac{1}{2}\left\|\operatorname{vec}(\mathbf{F})-f(\mathbf{K})\right\|_{F}^{2}+\frac{\lambda}{2}\left\|f\right\|_{K}^{2},\tag{6}$$

where vec(·) is the vectorization operator. By setting the derivative of Equation 6 w.r.t. $\mathbf{a}$ to zero, we obtain:

$$\mathbf{a}=\left(\mathbf{K}+\lambda\mathbf{I}_{NM}\right)^{-1}\operatorname{vec}(\mathbf{F}).\tag{7}$$

Obviously, this requires calculating the inverse of $(\mathbf{K}+\lambda\mathbf{I}_{NM})$, a matrix of size $NM \times NM$, whose time complexity is $O(N^3M^3)$. Thus, a well-known theorem (Raymond & Kashima, 2010; Laub, 2004) is used to obtain the inverse efficiently. Since kernel matrices (Liu et al., 2023; Pekalska & Haasdonk, 2008) are positive semidefinite, they can be eigendecomposed: $\mathbf{K}_D=\mathbf{V}_D\mathbf{\Lambda}_D\mathbf{V}_D^{T}$ and $\mathbf{K}_S=\mathbf{V}_S\mathbf{\Lambda}_S\mathbf{V}_S^{T}$. According to the theorem (Raymond & Kashima, 2010; Laub, 2004), the eigenvector matrix of the Kronecker product kernel $\mathbf{K}$ is $\mathbf{V}=\mathbf{V}_S\otimes\mathbf{V}_D$. Define the matrix $\mathbf{\Lambda}\in\mathbb{R}^{N\times M}$ by $\mathbf{\Lambda}_{i,j}=[\mathbf{\Lambda}_D]_{i,i}\times[\mathbf{\Lambda}_S]_{j,j}$; the eigenvalues of $\mathbf{K}$ are then $\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}))$. The matrix $\mathbf{K}+\lambda\mathbf{I}_{NM}$ has the same eigenvectors $\mathbf{V}$ and eigenvalues $\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}+\lambda\mathbf{1}))$.
Then, we can rewrite Equation 7 as:

$$\mathbf{K}(\mathbf{K}+\lambda\mathbf{I}_{NM})^{-1}\operatorname{vec}(\mathbf{F})=\mathbf{V}\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}))\mathbf{V}^{T}\mathbf{V}\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}+\lambda\mathbf{1}))^{-1}\mathbf{V}^{T}\operatorname{vec}(\mathbf{F}).\tag{8}$$

Since $\mathbf{V}^{T}\mathbf{V}=\mathbf{I}_{NM}$ and $\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}))\operatorname{diag}(\operatorname{vec}(\mathbf{\Lambda}+\lambda\mathbf{1}))^{-1}$ is also a diagonal matrix, we further simplify Equation 8 and get

$$\mathbf{K}(\mathbf{K}+\lambda\mathbf{I}_{NM})^{-1}\operatorname{vec}(\mathbf{F})=\mathbf{V}\operatorname{diag}(\operatorname{vec}(\mathbf{J}))\mathbf{V}^{T}\operatorname{vec}(\mathbf{F}),\tag{9}$$

where the matrix $\mathbf{J}$ is defined by

$$\mathbf{J}_{i,j}=\frac{\mathbf{\Lambda}_{i,j}}{\mathbf{\Lambda}_{i,j}+\lambda}.\tag{10}$$

Using the vec-trick $(\mathbf{A}\otimes\mathbf{B})\operatorname{vec}(\mathbf{C})=\operatorname{vec}(\mathbf{B}\mathbf{C}\mathbf{A}^{T})$, we further simplify Equation 9 and get

$$\hat{\mathbf{F}}=\mathbf{V}_{D}\left(\mathbf{J}\odot\left(\mathbf{V}_{D}^{T}\mathbf{F}\mathbf{V}_{S}\right)\right)\mathbf{V}_{S}^{T},\tag{11}$$

where ⊙ represents the Hadamard product. The computational cost of this approach is $O(N^3+M^3)$, which is much less than $O(N^3M^3)$.
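To make Equations 8–11 concrete, the following is a minimal NumPy sketch of this shortcut: it eigendecomposes the two small kernels and assembles $\hat{\mathbf{F}}$ without ever forming the $NM \times NM$ Kronecker kernel. The toy data and the function name are ours for illustration, not part of the authors' released implementation.

```python
import numpy as np

def kron_rls(K_D, K_S, F, lam):
    """Kron-RLS prediction (Equation 11) without forming the NM x NM kernel."""
    lam_D, V_D = np.linalg.eigh(K_D)      # K_D = V_D diag(lam_D) V_D^T
    lam_S, V_S = np.linalg.eigh(K_S)      # K_S = V_S diag(lam_S) V_S^T
    Lam = np.outer(lam_D, lam_S)          # Lam[i, j] = [Lam_D]_ii * [Lam_S]_jj
    J = Lam / (Lam + lam)                 # filter factors of Equation 10
    return V_D @ (J * (V_D.T @ F @ V_S)) @ V_S.T

# Toy usage: N = 4 drugs, M = 3 side effects.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((4, 5)), rng.standard_normal((3, 5))
K_D, K_S = X @ X.T, Y @ Y.T                       # PSD drug / side effect kernels
F = (rng.random((4, 3)) < 0.3).astype(float)      # sparse adjacency matrix
F_hat = kron_rls(K_D, K_S, F, lam=1.0)            # link confidences, shape (4, 3)
```

The two eigendecompositions cost $O(N^3 + M^3)$, matching the complexity statement above.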
## 3.3 Kronecker Regularized Least Squares With Multiple Kernel Learning

Kron-RLS is a kind of kernel method, and it can be difficult for nonexpert users to choose an appropriate kernel. To address this limitation, Multiple Kernel Learning (MKL) (Gönen & Alpaydın, 2011) was proposed. Since kernels in MKL can naturally correspond to different views, MKL has been applied with great success to multi-view data (Wang et al., 2021; Xu et al., 2021; Guo et al., 2021; Qian et al., 2022a; Wang et al., 2023b) by combining kernels appropriately. Suppose we are given predefined base kernels $\{\mathbf{K}_D^{i}\}_{i=1}^{P}$ and $\{\mathbf{K}_S^{j}\}_{j=1}^{Q}$ from the drug and side effect feature spaces, respectively. These kernels can be built from different types or views. The optimal kernel can be constructed as a linear combination of the base kernels:

$$\mathbf{K}_{D}^{opt}=\sum_{i=1}^{P}w^{i}\mathbf{K}_{D}^{i}.\tag{12}$$

Usually, an additional constraint is imposed on the combination coefficients $w$ to control their structure:

$$\sum_{i=1}^{P}w^{i}=1,\ w^{i}\geq0,\ i=1,\ldots,P.\tag{13}$$

![5_image_0.png](5_image_0.png)

Figure 2: Framework diagram of MKronRLSF-LP. MKronRLSF-LP allows the multiple partitions a degree of freedom to model single-view information and introduces a multiple graph Laplacian regularization into the consensus partition. The optimal side effect kernel $\mathbf{K}_S^{opt}$ is defined analogously and omitted.

Based on the MKL method, Ding et al. (2019) and Nascimento et al. (2016) developed Kron-RLS based MKL methods, called Kron-RLS with CKA-MKL and Kron-RLS with selfMKL, respectively. Kron-RLS with CKA-MKL combines diversified information using Centered Kernel Alignment-based Multiple Kernel Learning (CKA-MKL). In Kron-RLS with selfMKL, the weights indicating the importance of individual kernels are calculated automatically to select the more relevant kernels. The final decision function of both methods is given by:

$$\operatorname{vec}\left(\hat{\mathbf{F}}\right)=\left(\mathbf{K}_{S}^{opt}\otimes\mathbf{K}_{D}^{opt}\right)\left(\mathbf{K}_{S}^{opt}\otimes\mathbf{K}_{D}^{opt}+\lambda\mathbf{I}_{NM}\right)^{-1}\operatorname{vec}(\mathbf{F}).\tag{14}$$

## 4 Proposed Method

Existing multi-view fusion methods based on Kron-RLS all follow the MKL framework. These methods optimize the pairwise kernel as a linear combination of a set of base kernels. All views are fused prior to training, and no information is shared during the training phase; this is a typical early fusion technique. Our proposal addresses this limitation by fusing multi-view information in a consensus partition. Compared with the MKL framework, the advantage of the proposed method is that it allows the sub-partitions a certain degree of freedom to model single-view information. Further, multiple graph Laplacian regularization is introduced into the consensus partition to boost performance. Fig. 2 illustrates the main procedure of MKronRLSF-LP.

## 4.1 The Construction Of Kernel Matrix

Kron-RLS is a kind of kernel method. We construct drug kernels using five different kinds of functions.

Gaussian Interaction Profile (GIP):

$$\left[\mathbf{K}_{GIP,D}\right]_{i,j}=\exp\left(-\gamma\|d_{i}-d_{j}\|^{2}\right),\tag{15}$$

where γ is the Gaussian kernel bandwidth and γ = 1.

Cosine Similarity (COS):

$$\left[\mathbf{K}_{COS,D}\right]_{i,j}=\frac{d_{i}^{T}d_{j}}{|d_{i}||d_{j}|}.\tag{16}$$

Correlation coefficient (Corr):

$$\left[\mathbf{K}_{Corr,D}\right]_{i,j}=\frac{\operatorname{Cov}\left(d_{i},d_{j}\right)}{\sqrt{\operatorname{Var}\left(d_{i}\right)\operatorname{Var}\left(d_{j}\right)}}.\tag{17}$$

Normalized Mutual Information (NMI):

$$\left[\mathbf{K}_{NMI,D}\right]_{i,j}=\frac{\operatorname{Q}\left(d_{i},d_{j}\right)}{\sqrt{\operatorname{H}\left(d_{i}\right)\operatorname{H}\left(d_{j}\right)}},\tag{18}$$

where $\operatorname{Q}(d_i, d_j)$ is the mutual information of $d_i$ and $d_j$, and $\operatorname{H}(d_i)$ and $\operatorname{H}(d_j)$ are the entropies of $d_i$ and $d_j$, respectively.

Neural Tangent Kernel (NTK):

$$\left[\mathbf{K}_{NTK,D}\right]_{i,j}=\mathbb{E}_{\theta\sim w}\left[f_{NTK}\left(\theta,d_{i}\right),f_{NTK}\left(\theta,d_{j}\right)\right],\tag{19}$$

where $f_{NTK}$ is a fully connected neural network and θ is the collection of parameters in this network. Similarly, we construct the side effect kernels ($\mathbf{K}_{GIP,S}$, $\mathbf{K}_{COS,S}$, $\mathbf{K}_{Corr,S}$, $\mathbf{K}_{NMI,S}$, $\mathbf{K}_{NTK,S}$) in the side effect space.
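As an illustration of Section 4.1, the sketch below computes three of the five drug kernels (GIP, COS and Corr) from drug interaction profiles; the NMI and NTK kernels are omitted for brevity, and treating the rows of the adjacency matrix $\mathbf{F}$ as the drug descriptors is our assumption for the example.

```python
import numpy as np

def gip_kernel(X, gamma=1.0):
    """Gaussian interaction profile kernel (Equation 15); gamma = 1 as in the paper."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def cos_kernel(X):
    """Cosine similarity kernel (Equation 16)."""
    Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return Xn @ Xn.T

def corr_kernel(X):
    """Correlation coefficient kernel (Equation 17)."""
    return np.corrcoef(X)

# Drug interaction profiles: here, rows of the adjacency matrix (6 drugs, 10 side effects).
profiles = (np.random.default_rng(1).random((6, 10)) < 0.3).astype(float)
drug_kernels = [gip_kernel(profiles), cos_kernel(profiles), corr_kernel(profiles)]
```

The side effect kernels are obtained the same way from the columns of $\mathbf{F}$ (or from side effect features), mirroring the construction above.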
## 4.2 The MKronRLSF-LP Model

Let us define two sets of base kernels:

$$\mathbb{K}_{D}=\left\{\mathbf{K}_{D}^{1},\ldots,\mathbf{K}_{D}^{P}\right\},\tag{20a}$$
$$\mathbb{K}_{S}=\left\{\mathbf{K}_{S}^{1},\ldots,\mathbf{K}_{S}^{Q}\right\},\tag{20b}$$

where P and Q represent the numbers of drug and side effect kernels, respectively. Based on $\mathbb{K}_D$ and $\mathbb{K}_S$, we can get a set of pairwise kernels:

$$\mathbb{K}=\left\{\mathbf{K}^{1}=\mathbf{K}_{S}^{1}\otimes\mathbf{K}_{D}^{1},\ldots,\mathbf{K}^{V}=\mathbf{K}_{S}^{Q}\otimes\mathbf{K}_{D}^{P}\right\},\tag{21}$$

where V denotes the number of base pairwise kernels; obviously, V is equal to P × Q. By using multiple partitions, we can manipulate multiple views in a partition space, which enhances the robustness of the model. The following ensemble Kron-RLS model is obtained:

$$\arg\min_{\mathbf{a}^{v}}\ \sum_{v=1}^{V}\left(\frac{1}{2}\left\|\operatorname{vec}(\mathbf{F})-\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\frac{\lambda_{v}}{2}\mathbf{a}^{v^{T}}\mathbf{K}^{v}\mathbf{a}^{v}\right).\tag{22}$$

In multi-view methods, the consensus principle establishes consistency between partitions from different views. However, it is essential to recognize that these partitions contribute varying degrees of importance to the final prediction, unlike fusion without discrimination. To facilitate this, we introduce a consensus partition, denoted by $\hat{\mathbf{F}}$, which is a weighted linear combination of the partitions from multiple distinct views. A variable $\mathbf{w}_v$ is introduced for view v to characterize its importance, calculated based on the training error. To prevent sparse weight vectors, we employ the squared $\ell_2$-norm to smooth the weights. Then, we have the following optimization problem:

$$\begin{split}\arg\min_{\hat{\mathbf{F}},\mathbf{a}^{v},\mathbf{w}}\ &\frac{1}{2}\left\|\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\sum_{v=1}^{V}\mathbf{w}_{v}\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\mu\sum_{v=1}^{V}\left(\frac{\mathbf{w}_{v}}{2}\left\|\operatorname{vec}(\mathbf{F})-\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\frac{\lambda_{v}}{2}\mathbf{a}^{v^{T}}\mathbf{K}^{v}\mathbf{a}^{v}\right)+\frac{\beta}{2}\left\|\mathbf{w}\right\|_{2}^{2}\\ &\text{s.t.}\ \sum_{v=1}^{V}\mathbf{w}_{v}=1,\ \mathbf{w}_{v}\geq0,\ v=1,\ldots,V.\end{split}\tag{23}$$

In Equation 23, we observe that the consensus partition $\hat{\mathbf{F}}$ fits the adjacency matrix $\mathbf{F}$ through an indirect path. As described in Section 2, false zeros represent unobserved links in the network; hence, we must avoid overfitting the observed matrix $\mathbf{F}$. Inspired by manifold scenarios, the Laplacian operator adeptly mitigates overfitting and noise, preserving the original data structure and keeping nodes with common labels closely associated. This approach is simple, and empirical evidence confirms its effectiveness (Pang & Cheung, 2017; Chao & Sun, 2019; Jiang et al., 2023). Here, we apply multiple graph Laplacian regularization to Equation 23, which can effectively explore multiple different views and boost the performance of $\hat{\mathbf{F}}$. Specifically, the Kronecker product Laplacian matrix is calculated from the optimal drug and side effect similarity matrices, which are weighted linear combinations of multiple related kernel matrices. The weight of each kernel can be adaptively optimized during the training process, reducing the impact of noisy or less relevant graphs. The optimization problem of MKronRLSF-LP can be formulated as:

$$\begin{split}\arg\min_{\hat{\mathbf{F}},\mathbf{a}^{v},\mathbf{w},\boldsymbol{\theta}_{D},\boldsymbol{\theta}_{S}}\ &\frac{1}{2}\left\|\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\sum_{v=1}^{V}\mathbf{w}_{v}\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\mu\sum_{v=1}^{V}\left(\frac{\mathbf{w}_{v}}{2}\left\|\operatorname{vec}(\mathbf{F})-\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\frac{\lambda_{v}}{2}\mathbf{a}^{v^{T}}\mathbf{K}^{v}\mathbf{a}^{v}\right)\\ &+\frac{\beta}{2}\left\|\mathbf{w}\right\|_{2}^{2}+\frac{\sigma}{2}\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\mathbf{L}\operatorname{vec}\left(\hat{\mathbf{F}}\right)\\ \text{s.t.}\ &\sum_{v=1}^{V}\mathbf{w}_{v}=1,\ \mathbf{w}_{v}\geq0,\ v=1,\ldots,V,\\ &\mathbf{L}=\mathbf{I}_{NM}-\left(\mathbf{H}_{S}^{-0.5}\mathbf{K}_{S}^{*}\mathbf{H}_{S}^{-0.5}\right)\otimes\left(\mathbf{H}_{D}^{-0.5}\mathbf{K}_{D}^{*}\mathbf{H}_{D}^{-0.5}\right),\\ &\mathbf{K}_{S}^{*}=\sum_{i=1}^{Q}\left[\boldsymbol{\theta}_{S}\right]_{i}^{\varepsilon}\mathbf{K}_{S}^{i},\quad\mathbf{K}_{D}^{*}=\sum_{i=1}^{P}\left[\boldsymbol{\theta}_{D}\right]_{i}^{\varepsilon}\mathbf{K}_{D}^{i},\\ &\sum_{i=1}^{Q}\left[\boldsymbol{\theta}_{S}\right]_{i}=1,\ \left[\boldsymbol{\theta}_{S}\right]_{i}\geq0,\ i=1,\ldots,Q,\quad\sum_{i=1}^{P}\left[\boldsymbol{\theta}_{D}\right]_{i}=1,\ \left[\boldsymbol{\theta}_{D}\right]_{i}\geq0,\ i=1,\ldots,P,\end{split}\tag{24}$$

where $\mathbf{L}$ is a normalized Laplacian matrix, and $\mathbf{H}_S$ and $\mathbf{H}_D$ are diagonal matrices whose j-th diagonal elements are $\sum_{k}\left[\mathbf{K}_{S}^{*}\right]_{j,k}$ and $\sum_{k}\left[\mathbf{K}_{D}^{*}\right]_{j,k}$, respectively. The exponent ε > 1 guarantees that each graph makes a particular contribution to the Laplacian matrix. Due to lack of space, we present the optimization algorithm for Equation 24 in Appendix Section A.1.
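The following sketch assembles the graph-Laplacian ingredients of Equation 24: the weighted combination $\mathbf{K}^*$ of base kernels, its symmetric normalization $\mathbf{H}^{-1/2}\mathbf{K}^*\mathbf{H}^{-1/2}$, and the smoothness term $\operatorname{vec}(\hat{\mathbf{F}})^T\mathbf{L}\operatorname{vec}(\hat{\mathbf{F}})$ evaluated with the same vec-trick as in Section 3.2, so the $NM \times NM$ Laplacian is never materialized. All function names are ours, and this is only one building block of the full alternating optimization.

```python
import numpy as np

def normalized_similarity(kernels, theta, eps=2.0):
    """One Kronecker factor of L: H^{-1/2} K* H^{-1/2}, with
    K* = sum_i theta_i^eps K_i (theta on the simplex, eps > 1)."""
    K_star = sum((t ** eps) * K for t, K in zip(theta, kernels))
    h = np.maximum(K_star.sum(axis=1), 1e-12)   # diagonal of H (row sums)
    h_is = 1.0 / np.sqrt(h)
    return h_is[:, None] * K_star * h_is[None, :]

def laplacian_term(S_D, S_S, F_hat):
    """vec(F_hat)^T L vec(F_hat) with L = I - S_S kron S_D, computed via
    (A kron B) vec(C) = vec(B C A^T) so that L is never formed explicitly."""
    return np.sum(F_hat * F_hat) - np.sum(F_hat * (S_D @ F_hat @ S_S.T))
```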
## 5 Experiments

In this section, the performance of MKronRLSF-LP is shown, and we make comparisons with baseline methods and other drug-side effect predictors.

## 5.1 Dataset

Table 1: Summary of the real drug-side effect datasets.

| Name | Drug | Side effect | Associations | Sparsity | Reference |
|------|------|-------------|--------------|----------|------------------------|
| Liu | 832 | 1385 | 59205 | 94.86% | (Cheng et al., 2013) |
| Pau | 888 | 1385 | 61102 | 95.03% | (Pauwels et al., 2011) |
| Miz | 658 | 1339 | 49051 | 94.43% | (Sayaka et al., 2012) |
| Luo | 708 | 4192 | 80164 | 97.30% | (Luo et al., 2017) |

Four real drug-side effect datasets are used to assess the effectiveness of our proposed method. The Pau dataset is derived from the SIDER database (Kuhn et al., 2010), which contains information about drugs and their recorded side effects. The Miz dataset includes information about drug-protein interactions and drug-side effect interactions, obtained from the DrugBank (Wishart et al., 2006) and SIDER databases, respectively; there were 658 drugs with both targeted protein and side effect information. Additionally, Liu et al. mapped drugs in SIDER to DrugBank 3.0 (Knox et al., 2010), resulting in a final dataset of 832 drugs and 1385 side effects. The Luo dataset has a large number of side effects and was extracted from SIDER 2.0. Table 1 summarizes the datasets. We can see that all four datasets are sparse; in other words, there are far fewer positive samples than negative ones. Thus, drug-side effect prediction can be viewed as a classification problem with extremely imbalanced data.

## 5.2 Parameter Setting

In this paper, the objective function 24 contains the following regularization parameters: µ, β, σ, ε and $\lambda_v$, v = 1, ..., V. To find the combination of regularization parameters that gives MKronRLSF-LP the best performance, a grid search is performed on the Pau dataset, and the parameters with the best AUPR are selected. We first select each $\lambda_v$ using the corresponding pairwise kernel in a single-view Kron-RLS model, searching the range from $2^{-5}$ to $2^{5}$ with step $2^{1}$; the optimal values of $\lambda_v$ are shown in Table 4. According to a previous study (Shi et al., 2019), the performance is not affected by the parameter ε, so it is set to 2. Then, we fix $\lambda_v$, v = 1, ..., V, at their best values and tune µ, β and σ within the range $2^{-10}$ to $2^{0}$ with step $2^{1}$. The optimal regularization parameters are $\mu = 2^{-7}$, $\beta = 2^{0}$ and $\sigma = 2^{-8}$.

## 5.3 Baseline Methods

In this work, we compare MKronRLSF-LP with the following baseline methods: BSV, Comm Kron-RLS (Perrone & Cooper, 1995), Kron-RLS+CKA-MKL (Ding et al., 2019), Kron-RLS+pairwiseMKL (Cichonska et al., 2018), Kron-RLS+self-MKL (Nascimento et al., 2016), MvGRLP (Ding et al., 2021) and MvGCN (Fu et al., 2022). Due to lack of space, we present details of these baseline methods in Appendix Section A.3. For a fair comparison, the same input as for our method is fed into these baseline methods. To achieve the best performance, we also adopt 5-fold CV on the Pau dataset to tune their parameters.

## 5.4 Threshold Finding

Because MKronRLSF-LP and the baseline methods only output regression values, we apply a threshold-finding operation. For a given validation set in the five-fold cross-validation (5-fold CV) procedure, we collect the labels and their corresponding predicted scores. Then, we obtain the optimal threshold by maximizing the F-score on the predicted scores and labels from this validation set. The trends of F-score, Recall and Precision with different thresholds over the four datasets are shown in Fig. 6. While the prediction threshold rises, Recall rises and, conversely, Precision falls. The F-score is the harmonic mean of Recall and Precision and thus symmetrically represents both in one metric; we therefore choose the threshold that maximizes the F-score. Table 5 summarizes the thresholds of the different baseline methods on the four datasets.
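A minimal sketch of the threshold-finding step of Section 5.4: sweep every cut-off over the predicted link confidences of a validation fold and keep the one maximizing the F-score. This is an illustrative reimplementation under our own naming, not the authors' code.

```python
import numpy as np

def best_threshold(scores, labels):
    """Return the score cut-off (and its F-score) maximizing the F-score
    on a validation fold, evaluating all cut-offs in one descending sweep."""
    order = np.argsort(-scores)
    y = labels[order]
    tp = np.cumsum(y)                              # true positives at each cut-off
    precision = tp / np.arange(1, len(y) + 1)
    recall = tp / y.sum()
    f_score = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    k = int(np.argmax(f_score))
    return scores[order][k], f_score[k]

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0, 1])
thr, f = best_threshold(scores, labels)            # thr = 0.6, f = 0.75 here
```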
## 5.5 Comparison With Baseline Methods

We conduct 5-fold CV to evaluate the performance of our method against the baseline methods. To provide a fair and comprehensive comparison, each algorithm is run 10 times with different cross-validation indices, and the mean values and standard deviations are reported in Table 3. The best single view is $K_{GIP,D} \otimes K_{NTK,S}$, selected by 5-fold CV on the Pau dataset. First, we observe that the proposed method has the best AUPR and F-score on all datasets. In particular, the proposed method has a higher AUPR and F-score than BSV on all datasets, which indicates the improvement gained from using multiple views. The simple coupling frameworks BSV and Comm perform well on the Pau dataset. However, they cannot perform as well on the other datasets, which indicates that simple fusion schemes are sensitive to the dataset and not robust. Furthermore, Kron-RLS+pairwiseMKL achieves the highest AUC of 95.01%, 95.02% and 94.70% on the Liu, Pau and Miz datasets, respectively; these are slight improvements of 0.23%, 0.21% and 0.23% over our method. As discussed in Section 5.1, drug-side effect prediction is an extremely imbalanced classification problem. The AUC can be interpreted as the probability that the classifier ranks a randomly chosen positive instance higher than a randomly chosen negative instance; therefore, the AUC is not the most important metric for predicting drug side effects. Another interesting observation is that MKronRLSF-LP outperforms the other MKL strategy methods in the comparison. For example, it exceeds the best MKL method (CKA-MKL) by 2.1%, 2.32%, 1.43% and 2.51% in terms of AUPR on the Liu, Pau, Miz and Luo datasets, respectively. These results verify the effectiveness of the consensus partition and the multiple graph Laplacian constraint. For a more thorough analysis and reliable conclusions, we use post-hoc test statistics to statistically assess the different metrics shown in Table 3. Fig. 3 shows the results of these tests visualized as critical difference diagrams (the computation behind such diagrams is sketched below). These results show that MKronRLSF-LP is ranked significantly better than all other methods in terms of AUPR, Recall and F-score. In addition, MKronRLSF-LP is inferior only to Kron-RLS+pairwiseMKL and Kron-RLS+CKA-MKL in terms of AUC and Precision, respectively. Moreover, MvGCN is ranked worse than our method. Another point worth mentioning is that there is no sufficient statistical evidence to support that MvGCN performs better than the model-based methods. MvGCN uses a shallow GCN to avoid over-smoothing; a shallow GCN (Miao et al., 2021) can only capture local neighbourhood information of nodes, while the global features of the network are not fully explored, resulting in inaccurate embedding vectors.
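For readers unfamiliar with critical difference diagrams, here is a short sketch of the underlying computation: average ranks per method across datasets (ties averaged) and the Nemenyi critical difference below which two methods share a crossbar. The critical values are the standard two-tailed Nemenyi constants at α = 0.05; treating higher scores as better is our convention here, and the function names are ours.

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(scores):
    """scores: (n_datasets, n_methods), higher is better; rank 1 is best."""
    ranks = np.apply_along_axis(lambda row: rankdata(-row), 1, scores)
    return ranks.mean(axis=0)

def nemenyi_cd(n_methods, n_datasets):
    """Nemenyi critical difference at alpha = 0.05 (standard q_alpha constants)."""
    q = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850, 7: 2.949, 8: 3.031}
    return q[n_methods] * np.sqrt(n_methods * (n_methods + 1) / (6.0 * n_datasets))

# Two methods whose average ranks differ by less than the CD are connected
# by a crossbar in the diagram, i.e., not significantly different.
```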
In summary, the above experimental results demonstrate the superior prediction performance of MKronRLSF-LP over the other baseline methods. We attribute the superiority of MKronRLSF-LP to three aspects: (1) the consensus partition is derived through joint fusion of weighted multiple partitions; (2) MKronRLSF-LP utilizes multiple graph Laplacian regularization to constrain the consensus predicted value $\hat{\mathbf{F}}$, which makes the consensus partition robust; (3) unlike existing MKL methods, the proposed MKronRLSF-LP fuses multiple pairwise kernels at the partition level. It is these three factors that contribute to the improvement in prediction performance.

## 5.6 Ablation Study

To validate the benefits of jointly applying the consensus partition and the multiple graph Laplacian constraint, we conduct an ablation study by excluding each particular component. First, we construct a Kron-RLS model based on each pairwise kernel separately. Each partition learns independently, so this can be regarded as an ensemble Kron-RLS, whose objective function is Equation 22. The results should be consistent across views, and heterogeneous views have varying degrees of importance in the final prediction. Therefore, we set a consensus partition $\hat{\mathbf{F}}$, which is a weighted linear combination of base partitions (as shown in Equation 23). To further improve the performance and robustness of the model, we apply multiple graph Laplacian constraints to the consensus partition, yielding the final objective function 24 of MKronRLSF-LP. The results of the ablation study are shown in Fig. 4. It can be observed that the consensus partition and the multiple graph Laplacian constraint are both helpful for MKronRLSF-LP to achieve the best results.

## 5.7 Comparisons Of Computational Speed

To demonstrate the efficiency of MKronRLSF-LP, we compare it with the baseline methods in terms of computational speed. Except for MvGCN, all methods are run on a PC equipped with an Intel Core i7-13700 and 16 GB RAM; because MvGCN is a deep learning-based method, it is run on a workstation equipped with an NVIDIA GeForce RTX 3090 GPU. Each baseline method is run 10 times and the mean running time is reported in Table 2. The results do not include the kernel calculation time. As expected, learning from multiple views takes more time than learning from only one view (BSV). Also, since MKronRLSF-LP fuses multiple views at the partition level, it requires more running time than Kron-RLS+CKA-MKL and Kron-RLS+self-MKL. Another observation is that MKronRLSF-LP is much faster than Kron-RLS+pairwiseMKL. This can be explained by looking at the time complexity of the two methods: the inverse of the pairwise kernels dominates the time complexity of both. In our optimization algorithm, we use eigendecomposition techniques to compute this inverse, so the time complexity of our method is $O((P+Iter)N^{3}+(Q+Iter)M^{3})$.

![10_image_0.png](10_image_0.png)

Figure 3: Critical difference diagram of average score ranks. A crossbar is over each group of methods that do not show a statistically significant difference among themselves.

![10_image_1.png](10_image_1.png)

Figure 4: Ablation study of the consensus partition and multiple graph Laplacian constraint on four datasets.

Table 2: Mean running time (in seconds) of baseline methods on the four datasets.
| Methods | Pau | Liu | Miz | Luo |
|----------------------|---------|---------|---------|--------|
| BSV | 0.79 | 0.83 | 0.68 | 5.84 |
| Comm Kron-RLS | 19.38 | 20.95 | 18.39 | 148.60 |
| Kron-RLS+CKA-MKL | 2.69 | 2.18 | 2.36 | 13.13 |
| Kron-RLS+pairwiseMKL | 1583.67 | 1483.26 | 1364.21 | - |
| Kron-RLS+self-MKL | 12.21 | 13.05 | 12.09 | 155.85 |
| MvGRLP | 8.94 | 8.37 | 7.23 | 58.53 |
| MvGCN | 305.44 | 329.43 | 343.50 | - |
| MKronRLSF-LP | 50.55 | 43.9 | 35.83 | 280 |

In contrast, Kron-RLS+pairwiseMKL solves the system with the conjugate gradient approach, which iteratively improves the result by performing matrix-vector products; hence, Kron-RLS+pairwiseMKL runs in $O(IterPQ(N^{2}M+M^{2}N))$. When MvGCN deals with the Luo dataset, its running time exceeds 2 hours. This is because MvGCN utilizes a self-supervised learning strategy based on deep graph infomax (DGI) to initialize node embeddings; whenever there are many nodes in a bipartite network, DGI takes a very long time to run.

## 5.8 Comparison With Other Drug-Side Effect Predictors

A comparison of the proposed method with state-of-the-art drug-side effect predictors is also provided. Tables 6, 7, 8 and 9 present the results of 5-fold CV in terms of AUPR, AUC, Recall, Precision and F-score on the four datasets, respectively. We highlight the best results in bold and underline the second-best results. Obviously, MKronRLSF-LP achieves the highest AUPR and F-score on all datasets. In the drug-side effect prediction problem, AUPR and F-score are the more desirable metrics (Ezzat et al., 2017; Li et al., 2021); therefore, we conclude that our method outperforms the other assessed methods. GCRS (Xuan et al., 2022) and SDPred (Zhao et al., 2022) are deep learning-based methods. GCRS constructs multiple heterogeneous graphs and multi-layer convolutional neural networks with attribute-level attention to predict drug-side effect pair nodes. SDPred fuses multiple sources of side information (including drug chemical structures, drug targets, drug words, side effect semantic similarity and side effect words) by feature concatenation and adopts CNN and MLP modules for the prediction task. However, on the Luo dataset, GCRS and SDPred perform poorly; this is probably because they are pairwise learning methods that rely on random negative sampling to construct the training set. Random negative sampling cannot guarantee the reliability and quality of negative sample pairs, which results in a certain loss of information (Zhang et al., 2015; Ali & Aittokallio, 2019). The ensemble model (Zhang et al., 2016) combines Liu's method (Liu et al., 2012), Cheng's method (Cheng et al., 2013), INBM and RBMBM by an average scoring rule. It is obvious that the results of the ensemble model are significantly better than those of its sub-models on the four datasets.
It is found that the use of Laplacian regularization enhances semi-supervised learning performance, so a term of multiple graph Laplacian regularization is added to the objective function. Finally, we present an efficient alternating optimization algorithm. The results of our experiments indicate that our proposed methods are superior in terms of their classification results to other baseline algorithms and current drug-side effect predictors. ## Acknowledgments This work is supported in part by the National Natural Science Foundation of China (NSFC 62172076, 62250028 and U22A2038), the Zhejiang Provincial Natural Science Foundation of China (Grant No. LY23F020003), and the Municipal Government of Quzhou (Grant No. 2023D036). ## References Mehreen Ali and Tero Aittokallio. Machine learning and feature selection for drug response prediction in precision oncology applications. *Biophysical reviews*, 11(1):31–39, 2019. Eric Bruno and Stéphane Marchand-Maillet. Multiview clustering: a late fusion approach using latent models. In *Proceedings of the 32nd international ACM SIGIR conference on Research and development in* information retrieval, pp. 736–737, 2009. Richard H Byrd, Mary E Hribar, and Jorge Nocedal. An interior point algorithm for large-scale nonlinear programming. *SIAM Journal on Optimization*, 9(4):877–900, 1999. Guoqing Chao and Shiliang Sun. Semi-supervised multi-view maximum entropy discrimination with expectation laplacian regularization. *Information Fusion*, 45:296–306, 2019. F. Cheng, W. Li, X. Wang, Y. Zhou, Z. Wu, J. Shen, and Y. Tang. Adverse drug events: database construction and in silico prediction. *Journal of Chemical Information & Modeling*, 53(4):744–752, 2013. Anna Cichonska, Tapio Pahikkala, Sandor Szedmak, Heli Julkunen, Antti Airola, Markus Heinonen, Tero Aittokallio, and Juho Rousu. Learning with multiple pairwise kernels for drug bioactivity prediction. Bioinformatics, 34(13):i509–i518, 2018. Brianna A Da Silva and Mahesh Krishnamurthy. The alarming reality of medication error: a patient case and review of pennsylvania and national data. *Journal of community hospital internal medicine perspectives*, 6(4):31758, 2016. Yijie Ding, Jijun Tang, and Fei Guo. Predicting protein-protein interactions via multivariate mutual information of protein sequences. *BMC bioinformatics*, 17(1):1–13, 2016. Yijie Ding, Jijun Tang, and Fei Guo. Identification of drug-side effect association via semisupervised model and multiple kernel learning. *IEEE journal of biomedical and health informatics*, 23(6):2619–2632, 2018. Yijie Ding, Jijun Tang, and Fei Guo. Identification of drug-side effect association via multiple information integration with centered kernel alignment. *Neurocomputing*, 325:211–224, 2019. Yijie Ding, Jijun Tang, and Fei Guo. Identification of drug-target interactions via multi-view graph regularized link propagation model. *Neurocomputing*, 461:618–631, 2021. Ali Ezzat, Peilin Zhao, Min Wu, Xiao-Li Li, and Chee-Keong Kwoh. Drug-target interaction prediction with graph regularized matrix factorization. *IEEE/ACM Transactions on Computational Biology and* Bioinformatics, 14(3):646–656, 2017. doi: 10.1109/TCBB.2016.2530062. Haoyi Fan, Fengbin Zhang, Yuxuan Wei, Zuoyong Li, Changqing Zou, Yue Gao, and Qionghai Dai. Heterogeneous hypergraph variational autoencoder for link prediction. *IEEE Transactions on Pattern Analysis* and Machine Intelligence, 44(8):4125–4138, 2021. Haitao Fu, Feng Huang, Xuan Liu, Yang Qiu, and Wen Zhang. 
Mvgcn: data integration through multi-view graph convolutional network for predicting links in biomedical bipartite networks. *Bioinformatics*, 38(2): 426–434, 2022. Diego Galeano, Shantao Li, Mark Gerstein, and Alberto Paccanaro. Predicting the frequencies of drug side effects. *Nature communications*, 11(1):1–14, 2020. Mehmet Gönen and Ethem Alpaydın. Multiple kernel learning algorithms. *The Journal of Machine Learning* Research, 12:2211–2268, 2011. Xiaoyi Guo, Wei Zhou, Bin Shi, Xiaohua Wang, Aiyan Du, Yijie Ding, Jijun Tang, and Fei Guo. An efficient multiple kernel support vector regression model for assessing dry weight of hemodialysis patients. *Current* Bioinformatics, 16(2):284–293, 2021. Lynn Houthuys and Johan AK Suykens. Tensor-based restricted kernel machines for multi-view classification. Information Fusion, 68:54–66, 2021. Lynn Houthuys, Rocco Langone, and Johan AK Suykens. Multi-view kernel spectral clustering. *Information* Fusion, 44:46–56, 2018. Martial Hue and Jean-Philippe Vert. On learning with kernels for unordered pairs. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 463–470, 2010. Bingbing Jiang, Chenglong Zhang, Yan Zhong, Yi Liu, Yingwei Zhang, Xingyu Wu, and Weiguo Sheng. Adaptive collaborative fusion for multi-view semi-supervised classification. *Information Fusion*, 96:37–50, 2023. Shuhui Jiang, Zhengming Ding, and Yun Fu. Heterogeneous recommendation via deep low-rank sparse collective factorization. *IEEE transactions on pattern analysis and machine intelligence*, 42(5):1097–1111, 2019. Thomas Kailath. Rkhs approach to detection and estimation problems–i: Deterministic signals in gaussian noise. *IEEE Transactions on Information Theory*, 17(5):530–549, 1971. Craig Knox, Vivian Law, Timothy Jewison, Philip Liu, Son Ly, Alex Frolkis, Allison Pon, Kelly Banco, Christine Mak, Vanessa Neveu, et al. Drugbank 3.0: a comprehensive resource for 'omics' research on drugs. *Nucleic acids research*, 39(suppl_1):D1035–D1041, 2010. Michael Kuhn, Monica Campillos, Ivica Letunic, Lars Juhl Jensen, and Peer Bork. A side effect resource to capture phenotypic effects of drugs. *Molecular systems biology*, 6(1):343, 2010. Alan J Laub. *Matrix analysis for scientists and engineers*. SIAM, 2004. Tianjiao Li, Xing-Ming Zhao, and Limin Li. Co-vae: Drug-target binding affinity prediction by co-regularized variational autoencoders. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44(12):8861– 8873, 2021. Jiyuan Liu, Xinwang Liu, Yuexiang Yang, Qing Liao, and Yuanqing Xia. Contrastive multi-view kernel learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2023. Mei Liu, Yonghui Wu, Yukun Chen, Jingchun Sun, Zhongming Zhao, Xue-wen Chen, Michael Edwin Matheny, and Hua Xu. Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs. *Journal of the American Medical Informatics Association*, 19(e1):e28–e35, 2012. Yunan Luo, Xinbin Zhao, Jingtian Zhou, Jinglin Yang, Yanqing Zhang, Wenhua Kuang, Jian Peng, Ligong Chen, and Jianyang Zeng. A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information. *Nature communications*, 8(1):1–13, 2017. Juncheng Lv, Zhao Kang, Boyu Wang, Luping Ji, and Zenglin Xu. Multi-view subspace clustering via partition fusion. *Information Sciences*, 560:410–423, 2021. Xupeng Miao, Wentao Zhang, Yingxia Shao, Bin Cui, Lei Chen, Ce Zhang, and Jiawei Jiang. 
Lasagne: A multi-layer graph convolutional network framework via node-aware deep architecture. *IEEE Transactions on Knowledge and Data Engineering*, 35(2):1721–1733, 2021.

André CA Nascimento, Ricardo BC Prudêncio, and Ivan G Costa. A multiple kernel learning algorithm for drug-target interaction prediction. *BMC bioinformatics*, 17:1–16, 2016.

Jorge Nocedal and Stephen J Wright. Quadratic programming. *Numerical optimization*, pp. 448–492, 2006.

Jiahao Pang and Gene Cheung. Graph laplacian regularization for image denoising: Analysis in the continuous domain. *IEEE Transactions on Image Processing*, 26(4):1770–1785, 2017.

E. Pauwels, V. Stoven, and Y. Yamanishi. Predicting drug side-effect profiles: a chemical fragment-based approach. *BMC Bioinformatics*, 12(1):169, 2011.

Elżbieta Pekalska and Bernard Haasdonk. Kernel discriminant analysis for positive definite and indefinite kernels. *IEEE transactions on pattern analysis and machine intelligence*, 31(6):1017–1032, 2008.

Michael P Perrone and Leon N Cooper. When networks disagree: Ensemble methods for hybrid neural networks. In *How We Learn; How We Remember: Toward An Understanding Of Brain And Neural Systems: Selected Papers of Leon N Cooper*, pp. 342–358. World Scientific, 1995.

Yuqing Qian, Yijie Ding, Quan Zou, and Fei Guo. Identification of drug-side effect association via restricted boltzmann machines with penalized term. *Briefings in Bioinformatics*, 23(6):bbac458, 2022a.

Yuqing Qian, Yijie Ding, Quan Zou, and Fei Guo. Multi-view kernel sparse representation for identification of membrane protein types. *IEEE/ACM Transactions on Computational Biology and Bioinformatics*, 20(2):1234–1245, 2022b.

Rudy Raymond and Hisashi Kashima. Fast and scalable algorithms for semi-supervised link prediction on static and dynamic graphs. In *Joint european conference on machine learning and knowledge discovery in databases*, pp. 131–147. Springer, 2010.

M. Sayaka, P. Edouard, S. Véronique, G. Susumu, and Y. Yoshihiro. Relating drug–protein interaction network with drug side effects. *Bioinformatics*, 2012.

S. Shabani-Mashcool, S. A. Marashi, and S. Gharaghani. Nddsa: A network- and domain-based method for predicting drug-side effect associations. *Information Processing & Management*, 57(6):102357, 2020.

Caijuan Shi, Changyu Duan, Zhibin Gu, Qi Tian, Gaoyun An, and Ruizhen Zhao. Semi-supervised feature selection analysis with structured multi-view sparse regularization. *Neurocomputing*, 330:412–424, 2019.

Dexian Wang, Tianrui Li, Wei Huang, Zhipeng Luo, Ping Deng, Pengfei Zhang, and Minbo Ma. A multi-view clustering algorithm based on deep semi-nmf. *Information Fusion*, pp. 101884, 2023a.

Siwei Wang, Xinwang Liu, En Zhu, Chang Tang, Jiyuan Liu, Jingtao Hu, Jingyuan Xia, and Jianping Yin. Multi-view clustering via late fusion alignment maximization. In *IJCAI*, pp. 3778–3784, 2019.

Tinghua Wang, Lin Zhang, and Wenyu Hu. Bridging deep and multiple kernel learning: A review. *Information Fusion*, 67:3–13, 2021.

Yizheng Wang, Yixiao Zhai, Yijie Ding, and Quan Zou. Sbsm-pro: Support bio-sequence machine for proteins. *arXiv preprint arXiv:2308.10275*, 2023b. doi: 10.48550/arXiv.2308.10275.

David S Wishart, Craig Knox, An Chi Guo, Savita Shrivastava, Murtaza Hassanali, Paul Stothard, Zhan Chang, and Jennifer Woolsey. Drugbank: a comprehensive resource for in silico drug discovery and exploration. *Nucleic acids research*, 34(suppl_1):D668–D672, 2006.

Xijiong Xie and Shiliang Sun.
General multi-view semi-supervised least squares support vector machines with multi-manifold regularization. *Information Fusion*, 62:63–72, 2020.

Lixiang Xu, Lu Bai, Jin Xiao, Qi Liu, Enhong Chen, Xiaofeng Wang, and Yuanyan Tang. Multiple graph kernel learning based on gmdh-type neural network. *Information Fusion*, 66:100–110, 2021.

Xianyu Xu, Ling Yue, Bingchun Li, Ying Liu, Yuan Wang, Wenjuan Zhang, and Lin Wang. Dsgat: predicting frequencies of drug side effects by graph attention networks. *Briefings in Bioinformatics*, 23(2):bbab586, 2022.

Ping Xuan, Meng Wang, Yong Liu, Dong Wang, Tiangang Zhang, and Toshiya Nakaguchi. Integrating specific and common topologies of heterogeneous graphs and pairwise attributes for drug-related side effect prediction. *Briefings in Bioinformatics*, 23(3):bbac126, 2022.

Bo Yang, Hongbin Pei, Hechang Chen, Jiming Liu, and Shang Xia. Characterizing and discovering spatiotemporal social contact patterns for healthcare. *IEEE transactions on pattern analysis and machine intelligence*, 39(8):1532–1546, 2016.

Weiwei Yuan, Kangya He, Donghai Guan, Li Zhou, and Chenliang Li. Graph kernel based link prediction for signed social networks. *Information Fusion*, 46:1–10, 2019.

Zheng-Jun Zha, Tao Mei, Jingdong Wang, Zengfu Wang, and Xian-Sheng Hua. Graph-based semi-supervised learning with multiple labels. *Journal of Visual Communication and Image Representation*, 20(2):97–103, 2009.

Ping Zhang, Fei Wang, Jianying Hu, and Robert Sorrentino. Label propagation prediction of drug-drug interactions based on clinical side effects. *Scientific reports*, 5(1):12339, 2015.

Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: a comprehensive review. *Computational Social Networks*, 6(1):1–23, 2019.

Wen Zhang, Hua Zou, Longqiang Luo, Qianchao Liu, Weijian Wu, and Wenyi Xiao. Predicting potential side effects of drugs by recommender methods and ensemble learning. *Neurocomputing*, 173:979–987, 2016.

Haochen Zhao, Kai Zheng, Yaohang Li, and Jianxin Wang. A novel graph attention model for predicting frequencies of drug–side effects from multi-view data. *Briefings in Bioinformatics*, 22(6):bbab239, 2021.

Haochen Zhao, Shaokai Wang, Kai Zheng, Qichang Zhao, Feng Zhu, and Jianxin Wang. A similarity-based deep learning approach for determining the frequencies of drug side effects. *Briefings in Bioinformatics*, 23(1):bbab449, 2022.

## A Appendix

## A.1 Optimization

Equation 24 is difficult and time-consuming to solve directly because it contains multiple variables and large pairwise matrices. In this section, we divide the original problem into five subproblems and develop an iterative algorithm to optimize them. Moreover, we avoid explicitly computing any pairwise matrices throughout the optimization, which makes our method suitable for solving problems in large pairwise spaces.
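Before deriving each update, it may help to see the overall shape of the alternating scheme. The skeleton below is our own illustration, not the authors' released code; every name is hypothetical, and each callable in `updates` stands in for one of the five subproblem solvers derived in the remainder of this section.

```python
import numpy as np

def alternate(F, updates, n_views, max_iter=100, tol=1e-5):
    """Generic alternating scheme over the five subproblems (cf. Algorithm 1).

    `updates` is a dict of caller-supplied solvers; their derivations follow
    below. Convergence is checked on the change of the fused prediction F_hat.
    """
    state = updates["init"](F, n_views)  # a^v via single-view Kron-RLS, uniform w, theta_D, theta_S
    for _ in range(max_iter):
        F_prev = state["F_hat"].copy()
        state["F_hat"] = updates["F_hat"](state)        # F-hat subproblem
        state["w"] = updates["w"](state)                # w subproblem
        state["theta_D"] = updates["theta_D"](state)    # theta_D subproblem
        state["theta_S"] = updates["theta_S"](state)    # theta_S subproblem
        state["a"] = [updates["a"](state, v) for v in range(n_views)]  # a^v subproblems
        if np.max(np.abs(state["F_hat"] - F_prev)) < tol:
            break
    return state["F_hat"]
```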
**$\hat{F}$-subproblem**: we fix $a^v$, $w$, $\theta_D$ and $\theta_S$ and optimize the variable $\hat{F}$. Let $A = H_S^{-0.5} K_S^* H_S^{-0.5}$, $B = H_D^{-0.5} K_D^* H_D^{-0.5}$ and $\operatorname{vec}(\hat{F}^v) = K^v a^v$. Then the optimization model for $\hat{F}$ is as follows:

$$\arg\min_{\hat{\mathbf{F}}}\frac{1}{2}\left\|\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\sum_{v=1}^{V}\mathbf{w}_{v}\operatorname{vec}\left(\hat{\mathbf{F}}^{v}\right)\right\|_{2}^{2}+\frac{1}{2}\sigma\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\mathbf{L}\operatorname{vec}\left(\hat{\mathbf{F}}\right)\quad s.t.\ \mathbf{L}=\mathbf{I}_{NM}-\mathbf{A}\otimes\mathbf{B}.\tag{25}$$

Setting the derivative of Equation 25 w.r.t. $\hat{F}$ to zero, the solution of $\hat{F}$ can be obtained:

$$\operatorname{vec}\left(\hat{\mathbf{F}}\right)=\left(\left(1+\sigma\right)\mathbf{I}_{NM}-\sigma\mathbf{A}\otimes\mathbf{B}\right)^{-1}\left(\sum_{v=1}^{V}\mathbf{w}_{v}\operatorname{vec}\left(\hat{\mathbf{F}}^{v}\right)\right).\tag{26}$$

Notice that computing the inverse matrix on the right-hand side of Equation 26 directly requires too much time and memory. Therefore, we use eigendecomposition techniques to compute the inverse efficiently. Let $V_A\Lambda_A V_A^T$ and $V_B\Lambda_B V_B^T$ be the eigendecompositions of the matrices $A$ and $B$, respectively. Define the matrix $U$ by $U_{i,j} = [\Lambda_A]_{i,i}\times[\Lambda_B]_{j,j}$. By the theorem of Raymond & Kashima (2010), the Kronecker product matrix $A\otimes B$ can be eigendecomposed as $(V_A\otimes V_B)\operatorname{diag}(\operatorname{vec}(U))(V_A\otimes V_B)^T$. Substituting this into Equation 26, we can write the inverse matrix in Equation 26 as

$$\left(\left(1+\sigma\right)\mathbf{I}_{NM}-\sigma\mathbf{A}\otimes\mathbf{B}\right)^{-1}=\left(\left(1+\sigma\right)\mathbf{I}_{NM}-\sigma(\mathbf{V}_{A}\otimes\mathbf{V}_{B})\operatorname{diag}\left(\operatorname{vec}\left(\mathbf{U}\right)\right)\left(\mathbf{V}_{A}\otimes\mathbf{V}_{B}\right)^{T}\right)^{-1}.\tag{27}$$

Since it holds that $(V_A\otimes V_B)(V_A\otimes V_B)^T = I_{NM}$, Equation 27 can be transformed into

$$\left(\left(1+\sigma\right)\mathbf{I}_{NM}-\sigma\mathbf{A}\otimes\mathbf{B}\right)^{-1}=(\mathbf{V}_{A}\otimes\mathbf{V}_{B})\left(\left(1+\sigma\right)\mathbf{I}_{NM}-\sigma\operatorname{diag}\left(\operatorname{vec}\left(\mathbf{U}\right)\right)\right)^{-1}\left(\mathbf{V}_{A}\otimes\mathbf{V}_{B}\right)^{T}.\tag{28}$$

Notice that the inverse matrix in Equation 28 is a diagonal matrix, whose values can be collected into a matrix $W$ with

$$\mathbf{W}_{i,j}=\left(1+\sigma-\sigma\mathbf{U}_{i,j}\right)^{-1}.\tag{29}$$

So we can further rewrite Equation 26 as

$$\operatorname{vec}\left(\hat{\mathbf{F}}\right)=(\mathbf{V}_{A}\otimes\mathbf{V}_{B})\operatorname{diag}\left(\operatorname{vec}\left(\mathbf{W}\right)\right)\left(\mathbf{V}_{A}\otimes\mathbf{V}_{B}\right)^{T}\left(\sum_{v=1}^{V}\mathbf{w}_{v}\operatorname{vec}\left(\hat{\mathbf{F}}^{v}\right)\right).\tag{30}$$

Applying the vec-trick operation, we obtain the solution

$$\hat{\mathbf{F}}=\mathbf{V}_{B}\left(\mathbf{W}\odot\left(\mathbf{V}_{B}^{T}\left(\sum_{v=1}^{V}\mathbf{w}_{v}\hat{\mathbf{F}}^{v}\right)\mathbf{V}_{A}\right)\right)\mathbf{V}_{A}^{T}.\tag{31}$$
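To make this step concrete, Equation 31 can be implemented with two small eigendecompositions, never materializing the $NM \times NM$ Kronecker matrix. Below is a minimal NumPy sketch under the assumptions that $A$ ($N \times N$) and $B$ ($M \times M$) are symmetric and that each $\hat{F}^v$ is stored as an $M \times N$ array; all names are our own illustration rather than the authors' released code.

```python
import numpy as np

def update_F_hat(A, B, F_hat_views, w, sigma):
    """F-hat update of Eq. (31) via eigendecomposition and the vec-trick."""
    lamA, VA = np.linalg.eigh(A)        # A = VA diag(lamA) VA^T, A is N x N
    lamB, VB = np.linalg.eigh(B)        # B = VB diag(lamB) VB^T, B is M x M
    U = np.outer(lamB, lamA)            # M x N grid of eigenvalue products
    W = 1.0 / (1.0 + sigma - sigma * U) # diagonal of the inverse, stored as a matrix
    # Weighted combination of the per-view predictions (each M x N)
    C = sum(wv * Fv for wv, Fv in zip(w, F_hat_views))
    # vec-trick: (VA kron VB) diag(vec(W)) (VA kron VB)^T vec(C)
    return VB @ (W * (VB.T @ C @ VA)) @ VA.T
```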
**$w$-subproblem**: we fix all the variables except $w$. The formula is as follows:

$$\begin{array}{ll}\arg\min_{\mathbf{w}}&\frac{1}{2}\left\|\hat{\mathbf{F}}-\sum_{v=1}^{V}\mathbf{w}_{v}\hat{\mathbf{F}}^{v}\right\|_{F}^{2}+\mu\sum_{v=1}^{V}\left(\frac{\mathbf{w}_{v}}{2}\left\|\mathbf{F}-\hat{\mathbf{F}}^{v}\right\|_{F}^{2}\right)+\frac{1}{2}\beta\left\|\mathbf{w}\right\|_{2}^{2}\\ s.t.&\sum_{v=1}^{V}\mathbf{w}_{v}=1,\ \mathbf{w}_{v}\geq0,\ v=1,\ldots,V.\end{array}\tag{32}$$

Problem 32 can be simplified into a standard quadratic programming problem (Nocedal & Wright, 2006):

$$\begin{array}{ll}\arg\min_{\mathbf{w}}&\mathbf{w}^{T}\mathbf{G}\mathbf{w}-\mathbf{w}^{T}\mathbf{h}\\ s.t.&\sum_{v=1}^{V}\mathbf{w}_{v}=1,\ \mathbf{w}_{v}\geq0,\ v=1,\ldots,V,\end{array}\tag{33}$$

where $G \in \mathbb{R}^{V\times V}$ with elements

$$\mathbf{G}_{i,j}=\begin{cases}\frac{1}{2}\operatorname{trace}\left(\left(\hat{\mathbf{F}}^{i}\right)^{T}\hat{\mathbf{F}}^{j}\right),&\text{if }i\neq j,\\ \frac{1}{2}\operatorname{trace}\left(\left(\hat{\mathbf{F}}^{i}\right)^{T}\hat{\mathbf{F}}^{j}\right)+\frac{1}{2}\beta,&\text{if }i=j,\end{cases}\tag{34}$$

and $h$ is a vector with

$$\mathbf{h}_{i}=\operatorname{trace}\left(\hat{\mathbf{F}}^{T}\hat{\mathbf{F}}^{i}\right)-\frac{\mu}{2}\left\|\mathbf{F}-\hat{\mathbf{F}}^{i}\right\|_{F}^{2}.\tag{35}$$

Equation 33 is optimized with the interior-point optimization algorithm (Byrd et al., 1999).
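For illustration, the simplex-constrained QP in Equation 33 can be solved with any off-the-shelf solver. The sketch below uses SciPy's SLSQP routine as a stand-in for the interior-point method of Byrd et al. (1999); the function name and interface are our own.

```python
import numpy as np
from scipy.optimize import minimize

def solve_w(G, h):
    """Solve min_w w^T G w - w^T h  s.t.  sum(w) = 1, w >= 0  (Eq. 33)."""
    V = len(h)
    w0 = np.full(V, 1.0 / V)  # start from the uniform weighting
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    res = minimize(
        lambda w: w @ G @ w - w @ h,
        w0,
        jac=lambda w: (G + G.T) @ w - h,  # gradient of the quadratic objective
        bounds=[(0.0, None)] * V,
        constraints=constraints,
        method="SLSQP",
    )
    return res.x
```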
**$\theta_D$-subproblem**: with all the variables except $\theta_D$ fixed, the formula can be written as

$$\begin{array}{ll}\arg\min_{\boldsymbol{\theta}_{D}}&\frac{1}{2}\sigma\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\mathbf{L}\operatorname{vec}\left(\hat{\mathbf{F}}\right)\\ s.t.&\mathbf{L}=\mathbf{I}_{NM}-\left(\mathbf{H}_{S}^{-0.5}\mathbf{K}_{S}^{*}\mathbf{H}_{S}^{-0.5}\right)\otimes\left(\mathbf{H}_{D}^{-0.5}\mathbf{K}_{D}^{*}\mathbf{H}_{D}^{-0.5}\right),\\ &\mathbf{K}_{D}^{*}=\sum_{i=1}^{P}[\boldsymbol{\theta}_{D}]_{i}^{\varepsilon}\,\mathbf{K}_{D}^{i},\ \sum_{i=1}^{P}[\boldsymbol{\theta}_{D}]_{i}=1,\ [\boldsymbol{\theta}_{D}]_{i}\geq0,\ i=1,\ldots,P.\end{array}\tag{36}$$

Let $A = H_S^{-0.5} K_S^* H_S^{-0.5}$ and $B^i = H_D^{-0.5} K_D^i H_D^{-0.5}$. Then, substituting $L$ in Equation 36 with $A$ and $B^i$, the objective function 36 can be written as

$$\begin{array}{ll}\arg\min_{\boldsymbol{\theta}_{D}}&-\frac{1}{2}\sigma\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\sum_{i=1}^{P}[\boldsymbol{\theta}_{D}]_{i}^{\varepsilon}\left(\mathbf{A}\otimes\mathbf{B}^{i}\right)\operatorname{vec}\left(\hat{\mathbf{F}}\right)\\ s.t.&\sum_{i=1}^{P}\left[\boldsymbol{\theta}_{D}\right]_{i}=1,\ \left[\boldsymbol{\theta}_{D}\right]_{i}\geq0,\ i=1,\ldots,P.\end{array}\tag{37}$$

Further, introducing the Lagrange multiplier $\xi$, the objective function 37 can be converted into a Lagrange function:

$$\operatorname{Lag}\left(\boldsymbol{\theta}_{D},\xi\right)=-\frac{1}{2}\sigma\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\sum_{i=1}^{P}[\boldsymbol{\theta}_{D}]_{i}^{\varepsilon}\left(\mathbf{A}\otimes\mathbf{B}^{i}\right)\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\xi\left(\sum_{i=1}^{P}\left[\boldsymbol{\theta}_{D}\right]_{i}-1\right).\tag{38}$$

Setting the derivatives of Equation 38 w.r.t. $\theta_D$ and $\xi$ to zero, respectively, we have the following solution:

$$[\boldsymbol{\theta}_{D}]_{i}=\left(\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\left(\mathbf{A}\otimes\mathbf{B}^{i}\right)\operatorname{vec}\left(\hat{\mathbf{F}}\right)\right)^{\frac{1}{1-\varepsilon}}\Bigg/\sum_{j=1}^{P}\left(\operatorname{vec}\left(\hat{\mathbf{F}}\right)^{T}\left(\mathbf{A}\otimes\mathbf{B}^{j}\right)\operatorname{vec}\left(\hat{\mathbf{F}}\right)\right)^{\frac{1}{1-\varepsilon}}.\tag{39}$$

By using the vec-trick operation, we can write the solution as

$$[\boldsymbol{\theta}_{D}]_{i}=\operatorname{trace}\Bigl(\hat{\mathbf{F}}^{T}\mathbf{B}^{i}\hat{\mathbf{F}}\mathbf{A}^{T}\Bigr)^{\frac{1}{1-\varepsilon}}\Bigg/\sum_{j=1}^{P}\operatorname{trace}\Bigl(\hat{\mathbf{F}}^{T}\mathbf{B}^{j}\hat{\mathbf{F}}\mathbf{A}^{T}\Bigr)^{\frac{1}{1-\varepsilon}}.\tag{40}$$

**$\theta_S$-subproblem**: the solution for $\theta_S$ is similar to that for $\theta_D$. Here, the derivation is omitted and we directly give the solution

$$[\boldsymbol{\theta}_{S}]_{i}=\operatorname{trace}\Bigl(\hat{\mathbf{F}}^{T}\mathbf{B}\hat{\mathbf{F}}(\mathbf{A}^{i})^{T}\Bigr)^{\frac{1}{1-\varepsilon}}\Bigg/\sum_{j=1}^{Q}\operatorname{trace}\Bigl(\hat{\mathbf{F}}^{T}\mathbf{B}\hat{\mathbf{F}}(\mathbf{A}^{j})^{T}\Bigr)^{\frac{1}{1-\varepsilon}},\tag{41}$$

where $B = H_D^{-0.5} K_D^* H_D^{-0.5}$ and $A^i = H_S^{-0.5} K_S^i H_S^{-0.5}$.
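As a sketch, the closed-form update in Equation 40 reduces to computing one trace per base kernel and normalizing. The snippet below assumes $\hat{F}$ is an $M \times N$ array, each $B^i$ is $M \times M$, $A$ is $N \times N$, and $\varepsilon \neq 1$; the small floor guarding against non-positive traces is our own safeguard rather than part of the derivation, and all names are hypothetical.

```python
import numpy as np

def update_theta_D(F_hat, A, B_list, eps):
    """Multiplicative closed-form update for the drug kernel weights (Eq. 40)."""
    # trace(F_hat^T B^i F_hat A^T) for each base drug kernel B^i
    scores = np.array([np.trace(F_hat.T @ Bi @ F_hat @ A.T) for Bi in B_list])
    scores = np.maximum(scores, 1e-12) ** (1.0 / (1.0 - eps))
    return scores / scores.sum()  # weights lie on the probability simplex
```

The $\theta_S$ update of Equation 41 is identical in form, with the roles of the drug and side effect kernels swapped.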
**$a^v$-subproblem**: by dropping all terms irrelevant to $a^v$, we have

$$\arg\min_{\mathbf{a}^{v}}\,\frac{1}{2}\,\left\|\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\sum_{i=1}^{V}\mathbf{w}_{i}\mathbf{K}^{i}\mathbf{a}^{i}\right\|_{2}^{2}+\mu\left(\frac{\mathbf{w}_{v}}{2}\left\|\operatorname{vec}\left(\mathbf{F}\right)-\mathbf{K}^{v}\mathbf{a}^{v}\right\|_{2}^{2}+\frac{\lambda^{v}}{2}\mathbf{a}^{v^{T}}\mathbf{K}^{v}\mathbf{a}^{v}\right).\tag{42}$$

It can be observed from the objective function 42 that when training the parameter $a^v$, the other views $K^i$ with weights $w_i$ are taken into consideration. Therefore, each partition's training is not completely separate, but involves information sharing. Setting the derivative of problem 42 w.r.t. $a^v$ to zero, we get

$$\left(\mathbf{K}^{v}+\frac{\lambda^{v}}{1+\mu\mathbf{w}_{v}}\mathbf{I}_{NM}\right)\mathbf{a}^{v}=\frac{1}{1+\mu\mathbf{w}_{v}}\left(\operatorname{vec}\left(\hat{\mathbf{F}}\right)-\sum_{i=1,i\neq v}^{V}\mathbf{w}_{i}\mathbf{K}^{i}\mathbf{a}^{i}+\mu\mathbf{w}_{v}\operatorname{vec}\left(\mathbf{F}\right)\right).\tag{43}$$

Let $\mathbf{W}=\hat{\mathbf{F}}-\sum_{i=1,i\neq v}^{V}\mathbf{w}_{i}\hat{\mathbf{F}}^{i}+\mu\mathbf{w}_{v}\mathbf{F}$; then Equation 43 can be written as

$$\mathbf{a}^{v}=\frac{1}{1+\mu\mathbf{w}_{v}}\left(\mathbf{K}^{v}+\frac{\lambda^{v}}{1+\mu\mathbf{w}_{v}}\mathbf{I}_{NM}\right)^{-1}\operatorname{vec}\left(\mathbf{W}\right).\tag{44}$$

We can observe that the form of Equation 44 is similar to Equation 7. Therefore, we use eigendecomposition techniques and the vec-trick operation to compute $a^v$ efficiently. We summarize the complete optimization process for problem 24 in Algorithm 1.

Algorithm 1: Optimization for MKronRLSF-LP.
Input: The link matrix F; the regularization parameters µ, β, σ, ε and λ^v, v = 1, . . . , V ;
Output: The predicted link matrix F̂;
1 Compute the two sets of base kernels KD and KS by Equations 20a and 20b;
2 Initialize a^v, v = 1, . . . , V by single-view Kron-RLS; w_v = 1/V, v = 1, . . . , V ; θ^i_D = 1/P, i = 1, . . . , P; θ^i_S = 1/Q, i = 1, . . . , Q;
3 while not converged do
4   Update F̂ by solving subproblem 25;
5   Update w by solving subproblem 32;
6   Update θD by solving subproblem 36;
7   Update θS by Equation 41;
8   for i = 1 to V do
9     Update a^i by solving subproblem 42;
10  end
11 end

## A.2 Measurements

Considering that drug-side effect prediction is an extremely imbalanced classification problem and we do not want incorrect predictions to be recommended by the prediction model, we utilize the following evaluation metrics:

$$Recall=\frac{TP}{TP+FN},\tag{45a}$$
$$Precision=\frac{TP}{TP+FP},\tag{45b}$$
$$F_{score}=2\times\frac{Precision\times Recall}{Precision+Recall},\tag{45c}$$

where TP, FN, FP and TN are the numbers of true-positive, false-negative, false-positive and true-negative samples, respectively. The area under the ROC curve (AUC) and the area under the precision-recall curve (AUPR) are also used to measure predictive accuracy, because they are the most commonly used evaluation metrics in biomedical link prediction. The precision-recall curve shows the tradeoff between precision and recall at different thresholds. Fscore is calculated from Precision and Recall. The highest possible value of an Fscore is 1, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall is zero. AUC can be considered as the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance (Li et al., 2021). Therefore, we consider AUPR and Fscore the more desirable metrics (Ezzat et al., 2017; Li et al., 2021).
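For reference, the evaluation protocol above is straightforward to reproduce. The following sketch is our own, with scikit-learn supplying AUC and AUPR; it computes all five metrics from binary labels and predicted scores at a given threshold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(y_true, y_score, threshold):
    """Recall, Precision and F-score at a threshold (Eqs. 45a-45c), plus AUC/AUPR."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    y_true = np.asarray(y_true)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"Recall": recall, "Precision": precision, "Fscore": f,
            "AUC": roc_auc_score(y_true, y_score),
            "AUPR": average_precision_score(y_true, y_score)}
```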
## A.3 Baseline Methods

- Best single view (BSV): Applying Kron-RLS to the best single view; the view with the maximum AUPR is chosen here.
- Committee Kron-RLS (Comm Kron-RLS) (Perrone & Cooper, 1995): Each view is trained by Kron-RLS separately, and the final classifier is a weighted average.
- Kron-RLS with Centered Kernel Alignment-based Multiple Kernel Learning (Kron-RLS+CKA-MKL) (Ding et al., 2019): Multiple kernels from the drug space and the side effect space are linearly weighted by the optimized CKA-MKL. Finally, Kron-RLS is employed on the optimal kernels.
- Kron-RLS with pairwise Multiple Kernel Learning (Kron-RLS+pairwiseMKL) (Cichonska et al., 2018): First, it constructs multiple pairwise kernels. Then, the mixture weights of the pairwise kernels are determined by CKA-MKL. Finally, it learns the Kron-RLS function based on the optimal pairwise kernel.
- Kron-RLS with self-weighted multiple kernel learning (Kron-RLS+self-MKL) (Nascimento et al., 2016): The optimal drug and side effect kernels are linearly weighted combinations of the multiple base kernels. The proper weight assignment to each kernel is performed automatically.
- Multi-view graph regularized link propagation model (MvGRLP) (Ding et al., 2021): This is an extension of the graph model (Zha et al., 2009). To fuse multi-view information, multi-view Laplacian regularization is introduced to constrain the predicted values.
- Multi-view graph convolution network (MvGCN) (Fu et al., 2022): This extends the GCN (Zhang et al., 2019) from a single view to multiple views by combining the embeddings of multiple neighborhood information aggregation layers in each view.

## A.4 Code And Data Availability

The code and data are available at https://github.com/QYuQing/MKronRLSF-LP.

## A.5 Figures

Figure 5: Visualization of drug-side effect association problems (the adjacency matrix F and the output matrix F̂).

Figure 6: Results (MKronRLSF-LP) for Fscore, Recall and Precision at different thresholds.

## A.6 Tables

Table 3: Prediction performance comparison of baseline methods on four datasets.
| Dataset | Methods | AUPR(%) | AUC(%) | Recall(%) | Precision(%) | Fscore(%) |
|---|---|---|---|---|---|---|
| Liu | BSV | 60.12±1.12 | 93.22±1.63 | 58.77±0.33 | 59.09±0.49 | 58.52±0.23 |
| | Comm Kron-RLS | 65.63±1.95 | 94.11±1.45 | 61.63±0.33 | 61.9±1.37 | 61.57±1.65 |
| | Kron-RLS+CKA-MKL | 65.92±0.43 | 92.51±0.08 | 62.11±0.43 | 63.09±0.56 | 62.59±0.41 |
| | Kron-RLS+pairwiseMKL | 62.03±0.44 | 95.01±0.06 | 65.39±0.24 | 54.46±0.30 | 59.43±0.21 |
| | Kron-RLS+self-MKL | 65.02±0.47 | 92.1±0.10 | 60.97±0.57 | 63.12±0.61 | 62.03±0.52 |
| | MvGRLP | 66.32±0.45 | 94.29±0.08 | 63.56±0.46 | 60.87±0.62 | 62.18±0.39 |
| | MvGCN | 62.69±1.81 | 94.01±0.87 | 60.81±0.37 | 60.33±1.31 | 60.48±1.15 |
| | MKronRLSF-LP | 68.02±0.44 | 94.78±0.13 | 65.18±0.93 | 61.27±1.08 | 63.02±0.43 |
| Pau | BSV | 65.26±0.98 | 94.57±0.34 | 62.54±0.5 | 60.77±1.27 | 60.65±0.73 |
| | Comm Kron-RLS | 65.63±0.36 | 94.78±0.13 | 64.01±0.38 | 60.05±0.49 | 61.01±0.27 |
| | Kron-RLS+CKA-MKL | 65.49±0.37 | 92.39±0.13 | 61.65±0.40 | 63.22±0.51 | 62.42±0.27 |
| | Kron-RLS+pairwiseMKL | 63.48±0.39 | 95.02±0.07 | 78.1±0.26 | 45.01±0.48 | 57.11±0.36 |
| | Kron-RLS+self-MKL | 64.11±1.75 | 91.94±0.25 | 62.37±0.29 | 60.97±1.57 | 61.65±0.79 |
| | MvGRLP | 66.17±0.32 | 94.42±0.07 | 62.18±0.38 | 61.95±0.45 | 62.06±0.22 |
| | MvGCN | 63.51±1.43 | 94.08±0.49 | 63.21±0.69 | 57.94±1.34 | 60.4±1.78 |
| | MKronRLSF-LP | 67.81±0.37 | 94.81±0.18 | 65.72±3.58 | 60.65±3.75 | 62.87±0.48 |
| Miz | BSV | 56.58±2.33 | 90.71±2.06 | 62.76±0.69 | 53.94±2.31 | 55.39±2.33 |
| | Comm Kron-RLS | 58.08±1.07 | 91.36±1.25 | 62.37±0.81 | 55.16±1.99 | 56.54±1.77 |
| | Kron-RLS+CKA-MKL | 66.92±0.44 | 92.58±0.14 | 62.62±0.52 | 64.3±0.46 | 61.45±0.44 |
| | Kron-RLS+pairwiseMKL | 62.13±0.29 | 94.70±0.11 | 63.78±0.47 | 56.26±0.42 | 59.79±0.30 |
| | Kron-RLS+self-MKL | 65.84±0.43 | 92.06±0.16 | 63.63±0.48 | 61.77±0.52 | 60.68±0.43 |
| | MvGRLP | 66.68±0.35 | 94.10±0.12 | 63.46±0.43 | 61.82±0.30 | 62.63±0.29 |
| | MvGCN | 62.17±1.90 | 93.35±1.73 | 59.54±0.43 | 60.74±1.78 | 59.76±1.95 |
| | MKronRLSF-LP | 68.35±0.38 | 94.47±0.09 | 65.15±2.77 | 62.10±3.19 | 63.45±0.53 |
| Luo | BSV | 60.40±0.40 | 94.40±0.11 | 58.28±0.41 | 58.68±0.46 | 58.48±0.39 |
| | Comm Kron-RLS | 54.19±1.36 | 91.92±4.01 | 57.64±2.46 | 53.16±1.97 | 52.99±1.54 |
| | Kron-RLS+CKA-MKL | 60.87±0.36 | 92.03±0.15 | 55.55±0.34 | 64.15±0.46 | 59.54±0.36 |
| | Kron-RLS+pairwiseMKL | 50.29±0.29 | 94.37±0.10 | 55.66±0.39 | 45.97±0.39 | 50.35±0.31 |
| | Kron-RLS+self-MKL | 22.29±1.57 | 79.74±1.62 | 56.62±1.47 | 20.91±1.64 | 28.23±1.15 |
| | MvGRLP | 61.76±0.45 | 94.08±0.07 | 58.70±0.40 | 60.05±0.61 | 58.37±0.42 |
| | MvGCN | 61.18±0.41 | 94.54±0.1 | 57.94±0.37 | 61.26±0.48 | 51.07±0.38 |
| | MKronRLSF-LP | 63.32±0.58 | 94.07±0.14 | 59.43±0.95 | 61.58±1.22 | 60.47±0.39 |

Table 4: The optimal parameters λ^v obtained with the single view Kron-RLS model (based on the relative

| ⊗ | KGIP,S | KGIP,S | KGIP,S | KGIP,S | KGIP,S |
|---------|----------|----------|----------|----------|----------|
| 0 | 2 2 | 2 2 | 2 1 | 2 1 | |
| KGIP,D | 2 | −2 | | | |
| KCOS,D | 2 2 | 2 3 | 2 3 | 2 2 | 2 |
| KCorr,D | 2 3 | 2 3 | 2 4 | 2 2 | 2 4 −1 |
| KMI,D | 2 0 | 2 1 | 2 1 | 2 0 | 2 |
| KNTK,D | 2 2 | 2 4 | 2 3 | 2 1 | 2 1 |

Table 6: Prediction performance comparison of other drug-side effect predictors on the Liu dataset.
| Methods | AUPR(%) | AUC(%) | Recall(%) | Precision(%) | Fscore(%) |
|---|---|---|---|---|---|
| Liu's method | 28.0 | 90.7 | 67.5 | 34.0 | 45.2 |
| Cheng's method | 59.2 | 92.2 | 59.0 | 55.7 | 56.9 |
| RBMBM | 61.6 | 94.1 | 61.5 | 57.4 | 59.4 |
| INBM | 64.1 | 93.4 | 60.7 | 60.4 | 60.6 |
| Ensemble model | 66.1 | 94.8 | 62.3 | 61.1 | 61.7 |
| MKL-LGC^a | 67.0 | 95.1 | - | - | - |
| NDDSA with sschem^c | 60.5 | 94.1 | 57.9 | 56.4 | 57.1 |
| NDDSA without sschem^c | 60.4 | 94.0 | 57.4 | 56.8 | 57.1 |
| MKronRLSF-LP | 68.2 | 94.7 | 63.8 | 62.5 | 63.1 |

Table 5: Summary of the thresholds of baseline methods on four datasets.

| Methods | Liu | Pau | Miz | Luo |
|---|---|---|---|---|
| BSV | 0.145 | 0.146 | 0.142 | 0.128 |
| Comm Kron-RLS | 0.205 | 0.204 | 0.192 | 0.183 |
| Kron-RLS+CKA-MKL | 0.100 | 0.106 | 0.099 | 0.102 |
| Kron-RLS+pairwiseMKL | 0.149 | 0.159 | 0.101 | 0.107 |
| Kron-RLS+self-MKL | 0.119 | 0.116 | 0.113 | 0.129 |
| MvGRLP | 0.090 | 0.091 | 0.094 | 0.085 |
| MvGCN | 0.225 | 0.237 | 0.208 | 0.197 |
| MKronRLSF-LP | 0.177 | 0.168 | 0.179 | 0.149 |

Table 7: Prediction performance comparison of other drug-side effect predictors on Pau datasets.

| Methods | AUPR(%) | AUC(%) | Recall(%) | Precision(%) | Fscore(%) |
|---|---|---|---|---|---|
| Pau's method^a | 38.9 | 89.7 | 51.7 | 36.1 | 42.5 |
| Liu's method | 34.7 | 92.1 | **64.6** | 40.0 | 49.5 |
| Cheng's method | 58.8 | 82.3 | 58.3 | 55.0 | 56.6 |
| RBMBM | 61.3 | 94.1 | 60.8 | 57.7 | 59.2 |
| INBM | 64.1 | 93.4 | 60.8 | 60.5 | 60.7 |
| Ensemble model | 66.0 | 94.9 | 62.4 | 61.2 | 61.6 |
| MKL-LGC^b | 66.8 | **95.2** | - | - | - |
| NDDSA with sschem^c | 60.3 | 94.2 | 59.3 | 54.9 | 57.0 |
| NDDSA without sschem^c | 60.3 | 94.1 | 58.2 | 55.9 | 57.0 |
| MKronRLSF-LP | **67.9** | 94.7 | 63.4 | **62.9** | **63.2** |

"-" indicates not available; the bold and underlined values represent the best and second-best performance in each column, respectively; ^a, ^b and ^c indicate results derived from (Zhang et al., 2016), (Ding et al., 2018) and (Shabani-Mashcool et al., 2020), respectively.

Table 8: Prediction performance comparison of other drug-side effect predictors on Miz datasets.

| Methods | AUPR(%) | AUC(%) | Recall(%) | Precision(%) | Fscore(%) |
|---|---|---|---|---|---|
| Miz's method^a | 41.2 | 89.0 | 52.7 | 38.7 | 44.6 |
| Liu's method | 36.3 | 91.8 | **64.0** | 41.5 | 50.5 |
| Cheng's method | 56.0 | 92.3 | 58.4 | 56.8 | 57.6 |
| RBMBM | 61.7 | 93.9 | 60.5 | 58.8 | 59.6 |
| INBM | 64.6 | 93.2 | 61.6 | 60.5 | 61.1 |
| Ensemble model | 66.6 | 94.6 | 62.4 | 61.9 | 62.2 |
| MKL-LGC^b | 67.3 | **94.8** | - | - | - |
| NDDSA with sschem^c | 60.6 | 93.9 | 58.8 | 56.3 | 57.5 |
| NDDSA without sschem^c | 60.7 | 93.6 | 60.0 | 55.5 | 57.6 |
| MKronRLSF-LP | **68.5** | 94.5 | 63.0 | **64.2** | **63.6** |

"-" indicates not available; the bold and underlined values represent the best and second-best performance in each column, respectively; ^a, ^b and ^c indicate results derived from (Zhang et al., 2016), (Ding et al., 2018) and (Shabani-Mashcool et al., 2020), respectively.

Table 9: Prediction performance comparison of other drug-side effect predictors on Luo datasets.

| Methods | AUPR(%) | AUC(%) | Recall(%) | Precision(%) | Fscore(%) |
|---|---|---|---|---|---|
| Liu's method | 39.4 | 93.5 | **59.6** | 48.3 | 53.3 |
| Cheng's method | 53.2 | 90.9 | 53.1 | 52.3 | 52.7 |
| RBMBM | 55.1 | 93.5 | 56.1 | 54.3 | 55.1 |
| INBM | 57.3 | 91.7 | 55.8 | 56.7 | 56.2 |
| Ensemble model | 58.6 | 93.9 | 46.1 | **68.4** | 55.1 |
| MKL-LGC | 61.7 | 94.6 | - | - | - |
| NDDSA with sschem^a | 53.1 | 94.2 | 47.6 | 57.3 | 52.0 |
| NDDSA without sschem^a | 44.5 | 93.7 | 44.7 | 47.8 | 46.2 |
| GCRS^b | 27.2 | **95.7** | - | - | - |
| SDPred | 22.6 | 94.6 | - | - | - |
| MKronRLSF-LP | **63.5** | 94.1 | 59.2 | 61.9 | **60.5** |

"-" indicates not available; the bold and underlined values represent the best and second-best performance in each column, respectively; ^a and ^b indicate results derived from (Shabani-Mashcool et al., 2020) and (Xuan et al., 2022), respectively.
Review 1: Summary: The authors propose to extend the Kronecker regularized least squares (Kron-RLS) technique to classify drug-side effects. Their proposal focuses on late fusion by finding a consensus partition, applying multiple graph Laplacian regularizations. To accomplish this, they introduce an optimization algorithm and subsequently contrast their approach with various baseline methods. Strengths and Weaknesses: ## Strengths - Introduces a novel late fusion technique, including various optimization steps, and contrasts it with early fusion methods. - The methodology and rationale are presented comprehensively, facilitating understanding. - Comparative analysis with multiple Kron-based techniques and other multi-view graph approaches, as well as deep learning based approaches, strengthens the study's contextualization. ## Weaknesses - There's a lack of clarity on how potential overfitting is mitigated and how the approach is fairly evaluated. - Certain claims may be somewhat exaggerated or require more substantiation. - A thorough proofreading pass would enhance the manuscript's clarity and readability. Requested Changes: ## Major requests - The proposed technique aims to determine the optimal threshold by maximizing the F-score, which is also one of the main metrics used for comparing with alternative approaches. However, the absence of a validation or test split raises concerns about potential overfitting. - Various methods listed in Tables 4, 5, and 6, such as "RBMBM," "MKL-LGC," and "NDDSA," are not previously introduced in the manuscript. Providing brief descriptions for these methods would enhance the reader's understanding and facilitate a more informed assessment. - In Section 6, the final paragraph appears disconnected from the preceding content: "MKronRLSF-LP uses L2 loss to measure the quality of approximation. However, the L2 loss function measure is sensitive to outliers and the interaction matrix contains false zeros. Thus, feature work contain replaces the quadratic form of residues by robust loss, such as LINEX lossTang et al. (2021) and p-power lossChen et al. (2017)". - In Section 5.6, the following sentence seems to be exaggerated given that AUC results are reported yet not considered: "Obviously, MKronRLSF-LP achieves the best prediction results for all datasets (the highest AUPR and Fscore score on all datasets)". ## Minor requests - There are numerous typos and oversights present in the manuscript: "Based on MKL method, Ding et al. Ding et al. (2019) and Nascimento et al. Nascimento et al. (2016)", "Psecision". - The description of the datasets is nearly absent, which could hinder readers' understanding of the data used in the study. - In Table 1, "Zero of rates" appears to be nonsensical and may require clarification or correction. Broader Impact Concerns: There are no ethical concerns associated with the release of this work. However, providing a model card would facilitate the method's proper utilization. ================================================== Review 2: Summary: The authors propose to tackle the problem of drug-side effect prediction, which is a link prediction problem or a recommendation problem. They propose to use a multiple kernel approach. They start from a Kronecker regularised least squares formulation and add two contributions: a notion of consensus partition and a regularisation via a multiple graph Laplacian constraint. Strengths and Weaknesses: The main issue with the current paper is in the 1) presentation, 2) justification and 3) validation.
1. the authors never explain the notion of partition or consensus, which is the main contribution of the approach. This prevents the reader from understanding what the gist of the proposal is and why this particular approach should yield an advantage. 2. The authors very briefly mention at page 7 that "the node pair sharing the same labels should be kept close together. This means any two pair nodes with same labels have a higher similarity. That is, the pair node labels with stronger correlations are assumed to be closer to each other in the original space. This is analogous to the Laplacian operator on manifolds". This is the only justification for the introduction of the laplacian regularisation offered to the reader. No justification is offered for the partition notion (which is not explained). 3. a more thorough empirical validation should be offered: a) an ablation experimental design should be employed to show the relative importance of the partition notion and the laplacian regularisation; b) a significance test is missing, so we do not know if the improvement reported is statistically significant (please use a post-hoc analysis and a critical differences diagram); c) the authors say that "we avoid explicit computation of any pairwise matrices, which makes our method suitable for solving problems in large pairwise spaces", but do not offer an efficiency comparison with other approaches; d) the authors should use an artificial problem where the difficulty of the problem is controlled by design and show the regime when the proposed approach works and when it stops working (e.g. when the number of missing links is above a certain threshold w.r.t. the size of the network, etc) Requested Changes: Please address the previous issues: 1. explain the notion of partition and consensus (perhaps adding visual support) in plain English before using any formal notation 2. justify intuitively the reasoning on why these changes (partition and Laplace smoothing) should in principle help here and how these ideas could be useful in related problems (explain which characteristics these problems should have to benefit from your approach) 3. provide a) an ablation study, b) critical differences diagrams, c) efficiency comparisons, d) an artificial problem Broader Impact Concerns: NA ================================================== Review 3: Summary: This paper proposes a new multi-view learning method called MKronRLSF-LP for predicting drug-side effects, which is an important problem in pharmacovigilance. The method extends the Kronecker regularized least squares (Kron-RLS) framework by learning a consensus partition from multiple views. It also incorporates multiple graph Laplacian regularization to enhance the performance. An efficient optimization algorithm is developed to fuse the multiple Kron-RLS submodels and make predictions without explicitly computing large pairwise matrices. Experiments on datasets demonstrate the good performance of MKronRLSF-LP compared to various baseline methods and state-of-the-art approaches. Strengths and Weaknesses: Strengths: 1. This paper focuses on an important problem. 2. The method is computationally efficient. 3. This paper achieves good performance on the given benchmarks. Weaknesses: 1. This work seems to combine previous methods such as Kronecker regularized least squares (Kron-RLS) and the multi-view graph regularized link propagation model (MvGRLP). Based on the results, it offers incremental improvements over those methods. 2.
The presentation and citations should be improved. Requested Changes: 1. The paper should carefully compare its results with previous graph neural network-based methods. 2. The authors should improve the overall presentation of the paper. This includes ensuring proper citation formatting throughout the manuscript and improving the grammar and sentence structure. 3. The discussion of the experiments is also weak. The authors should provide more analysis and ablation studies of the proposed methods. Broader Impact Concerns: I didn't see any potential concerns. ================================================== Metareview: Recommendation: Accept as is Comment: All reviewers were supportive of accepting the paper. The authors improved the work significantly during the rebuttal phase, in particular by adding several clarifications and the ablation study. The emphasized strengths of the paper included the novelty of the model and the comprehensive evaluation. ==================================================
# Optimized Tradeoffs For Private Prediction With Majority Ensembling Anonymous authors Paper under double-blind review ## Abstract We study a classical problem in private prediction, the problem of computing an (*mϵ, δ*)- differentially private majority of K (ϵ, ∆)-differentially private algorithms for 1 ≤ m ≤ K and 1 > δ ≥ ∆ ≥ 0. Standard methods such as subsampling or randomized response are widely used, but do they provide optimal privacy-utility tradeoffs? To answer this, we introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm. It is parameterized by a data-dependent noise function γ, and enables efficient utility optimization over the class of all private algorithms, encompassing those standard methods. We show that maximizing the utility of an (*mϵ, δ*)-private majority algorithm can be computed tractably through an optimization problem for any m ≤ K by a novel structural result that reduces the infinitely many privacy constraints into a polynomial set. In some settings, we show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility. Lastly, we demonstrate the strong empirical effectiveness of our first-of-its-kind privacy-constrained utility optimization for ensembling labels for private prediction from private teachers in image classification. Notably, our DaRRM framework with an optimized γ exhibits substantial utility gains when compared against several baselines. ## 1 Introduction Differential privacy (DP) is a widely applied framework for formally reasoning about privacy leakage when releasing statistics on a sensitive database Erlingsson et al. (2014); Cormode et al. (2018). Differential privacy protects data privacy by obfuscating algorithmic output, ensuring that query responses look similar on adjacent datasets while preserving utility as much as possible Dwork et al. (2006). Privacy in practice often requires aggregating or composing multiple private procedures that are distributed for data or training efficiency. For example, it is common to aggregate multiple private algorithmic or model outputs in methods such as boosting or calibration (Sagi & Rokach, 2018). In federated learning, model training is distributed across multiple edge devices. Those devices need to send local information, such as labels or gradients Konečn`y et al. (2016), to an aggregating server, which is often honest but curious about the local training data. Hence, the output from each model at an edge device needs to be privatized locally before being sent to the server. When translating from a local privacy guarantee to a centralized one, one needs to reason about the composition of the local privacy leakage Naseri et al. (2020). Therefore, we formally ask the following: Problem 1.1 (Private Majority Ensembling (Illustrated in Figure 1)). *Consider* K ≥ 1 (ϵ, ∆)-differentially private mechanisms M1, . . . , MK for K odd. Given a dataset D*, each mechanism outputs a binary answer —* that is, Mi: D → {0, 1}, ∀i ∈ [K]. Given a privacy **allowance** 1 ≤ m ≤ K, m ∈ R *and a failure probability* δ ≥ ∆ ≥ 0, δ, ∆ ∈ [0, 1), how can one maximize the utility of an (mϵ, δ)*-differentially private mechanism* A to compute the majority function g(S1, S2, . . . , SK)*, where* Si ∼ Mi(D)? 
The majority function g is often used in private prediction, where one studies the privacy cost of releasing one prediction Dwork & Feldman (2018) and exploits the fact that releasing only the aggregated output on sharded models is significantly more private than releasing each prediction. For example, this occurs in ![1_image_0.png](1_image_0.png) Figure 1: An illustration of the problem setting. The inputs are the dataset D and K (ϵ, ∆)-differentially private mechanisms M1*, . . . , M*K. One draws samples Si ∼ Mi(D) and computes an aggregated output g(S1*, . . . , S*K) based on all observed samples. Our goal is to design a randomized algorithm A that approximately computes g and is (*mϵ, δ*)-differentially private for 1 ≤ m ≤ K and δ ≥ ∆ ≥ 0. We focus on g being the majority function . semi-supervised knowledge transfer with private aggregated teacher ensembles (PATE) Papernot et al. (2017; 2018), in ensemble learning algorithms Jia & Qiu (2020); Xiang et al. (2018), machine unlearning Bourtoule et al. (2021), private distributed learning algorithms such as Stochastic Sign-SGD Xiang & Su (2023), and in ensemble feature selection Liu et al. (2018). Private prediction is also shown to be a competitive technique in data-adaptive settings, where the underlying dataset is changing slowly over time, to quickly adjust to online dataset updates Zhu et al. (2023). Furthermore, to address the large privacy loss of private prediction under the many-query regime, there has been recent works in everlasting private prediction that extends privacy guarantees with repeated, possibly infinite, queries without suffering a linear increase in privacy loss Naor et al. (2023); Stemmer (2024). These works, however, rely often on the standard sensitivity analysis of g to provide a private output and thus generally provide limited utility guarantees. This is because the maximum sensitivity of g can be too pessimistic in practice, as observed in the problem of private hyperparameter optimization (Liu & Talwar, 2019). On the other hand, for private model ensembling, a naive way to bound privacy loss without restrictive assumptions is to apply simple composition (Theorem 2.2) or optimal composition (Theorem 2.3) to reason about the final privacy loss after aggregation. A black-box application of the simple composition theorem to compute g would incur a Kϵ privacy cost in the pure differential privacy setting, that is, δ = 0, or if one is willing to tolerate some failure probability δ, optimal composition would yield a O( √Kϵ) privacy cost Dwork et al. (2014). Thus, a natural baseline algorithm A that is (*mϵ, m*∆)-differentially private applies privacy amplification by subsampling and randomly chooses m of the K mechanisms to aggregate and returns the majority of the subsampled mechanisms. This technique is reminiscent of the subsampling procedure used for the maximization function g (Liu & Talwar, 2019) or some general techniques for privacy amplification in the federated setting via shuffling (Erlingsson et al., 2019). However, standard composition analysis and privacy amplication techniques can be suboptimal for computing a private majority, in terms of both utility and privacy. Observe that if there is a clear majority among the outputs of M1(D)*, . . . , M*K(D), one can add less noise, since the majority outcome is unlikely to change based on single isolated changes in D. 
Furthermore, composition theorems make two pessimistic assumptions: 1) the worst-case function g and dataset D are considered, and 2) all intermediate mechanism outputs M1(D), . . . , MK(D) are released, rather than just the final aggregate. Based on these observations, is it then possible to improve the utility of computing a private majority, under a fixed privacy loss?

## 1.1 Our Contributions

We give a (perhaps surprising) affirmative answer to the above question. Using our novel data-dependent randomized response framework (DaRRM), which captures all private majority algorithms, we introduce a tractable noise optimization procedure that maximizes the privacy-utility tradeoff. Furthermore, we can provably achieve a constant-factor improvement in utility over simple subsampling by applying data-dependent noise injection when the Mi's are i.i.d. and δ = 0. To our knowledge, this is the first work of its kind that gives a tractable utility optimization over the possibly infinite set of privacy constraints.

Data-dependent Randomized Response Majority (**DaRRM**). We generalize the classical Randomized Response (RR) mechanism and the commonly used subsampling baseline for solving Problem 1.1 and propose a general randomized response framework, DaRRM (see Algorithm 1), which comes with a customizable noise function γ. We show that DaRRM in fact captures all algorithms computing the majority whose outputs are at least as good as a random guess (see Lemma 3.2), by choosing different γ functions.

Designing γ **with Provable Privacy Amplification.** The choice of the γ function in DaRRM allows us to explicitly optimize the noise while trading off privacy and utility. Using structural observations, we show privacy amplification by a factor of 2, under mild conditions, over applying simple composition in the pure differential privacy setting when the mechanisms Mi are i.i.d. (see Theorem 4.1).

Finding the Best γ **through Dimension-Reduced Optimization.** We further exploit the generality of DaRRM by applying a novel optimization-based approach that uses constrained optimization to find a data-dependent γ that maximizes some measure of utility. One challenge is that there are infinitely many privacy constraints that DaRRM with the optimized γ must satisfy to meet the given privacy loss. We show that we can reformulate the privacy constraints, which are infinite-dimensional, into a finite polynomial-sized constraint set, allowing us to efficiently constrain the optimization problem to find the best γ, even for approximate differential privacy (see Lemma 5.1). Empirically, we show that for small m and ϵ, the optimized γ (see γopt in Figure 2) achieves the best utility among all γ functions, even compared to the subsampling and data-independent baselines. To our knowledge, this is the first utility maximization algorithm that optimizes over all private algorithms by constrained optimization with dimension reduction.

Experiments. In downstream tasks, such as semi-supervised knowledge transfer for private image classification, we compare our DaRRM with an optimized γ, which computes the private label majority from private teachers, against PATE Papernot et al. (2018), which computes the private label majority from non-private teachers.
We fix the privacy loss of the output of both algorithms to be the same and find that when the number of teachers K is small, DaRRM indeed has higher utility than PATE, achieving 10%-15% and 30% higher accuracy on the MNIST and Fashion-MNIST datasets, respectively.

## 2 Background

## 2.1 Related Work

Private Composition. Blackbox privacy composition analysis often leads to pessimistic utility guarantees. In the blackbox composition setting, one can do no better than the O(Kϵ) privacy analysis for pure differential privacy Dwork et al. (2014). For approximate differential privacy, previous work has found optimal constants for advanced composition by reducing to the binary case of hypothesis testing with randomized response, and optimal tradeoffs between ϵ and δ for black-box composition are given in Kairouz et al. (2015), where the improvement can be a modest 20%. Thus, for specific applications, previous work has turned to white-box composition analysis for improved utility. This includes, for example, the moments accountant for private SGD Abadi et al. (2016) and the application of contractive maps in stochastic convex optimization Feldman et al. (2018). For the specific case of model ensembles, Papernot et al. (2018) show a data-dependent privacy bound that vanishes as the probability of disagreement goes to 0. Their method provides no utility analysis, but they empirically observed less privacy loss when there is greater ensemble agreement. When g is the maximization function, previous work shows that an approximately maximum value can be outputted with high probability while incurring only O(ϵ) privacy loss, independently of K. Liu & Talwar (2019) proposed a random stopping mechanism for m = 1 that draws samples uniformly at random from Mi(D) at each iteration. In any given iteration, the sampling halts with probability γ and the final output is computed based on the samples collected until that time. This leads to a final privacy cost of only 3ϵ for the maximization function g, which can be improved to 2ϵ (Papernot & Steinke, 2022). In addition to the aforementioned works, composing top-k and exponential mechanisms also enjoys slightly improved composition analysis via a bounded-range analysis Durfee & Rogers (2019); Dong et al. (2020).

Bypassing the Global Sensitivity. To ensure differential privacy, it is usually assumed that the query function g has bounded global sensitivity, that is, the output of g does not change much on any adjacent input datasets differing in one entry. The noise added to the output is then proportional to the global sensitivity of g. If the sensitivity is large, the output utility will be poor due to the large amount of noise added. However, the worst-case global sensitivity can be rare in practice, and this observation has inspired a line of work on designing private algorithms with data-dependent sensitivity bounds to reduce the amount of noise added. Instead of using the maximum global sensitivity of g on any dataset, the classical Propose-Test-Release framework of Dwork & Lei (2009) uses a local sensitivity value for robust queries that is tested privately; if the sensitivity value is too large, the mechanism is halted before the query release. The halting mechanism incurs some failure probability but deals with the worst-case sensitivity situations, while allowing for lower noise injection in most average cases.
One popular way to estimate average-case sensitivity is to use the Subsample-and-Aggregate framework by introducing the notion of *perturbation stability*, also known as the *local sensitivity* of a function g on a dataset D Thakurta & Smith (2013); Dwork et al. (2014), which represents the minimum number of entries in D that need to be changed to change g(D). One related concept is *smooth sensitivity*, a measure of the variability of g in the neighborhood of each dataset instance. To apply the framework under *smooth sensitivity*, one needs to privately estimate a function's local sensitivity $L_s$ and adapt the noise injection to be on the order of $O(L_s/\epsilon)$, where $L_s$ can often be as small as $O(e^{-n})$, with n = |D| the total dataset size Nissim et al. (2007). Generally, the private computation of the smooth sensitivity of a blackbox function is nontrivial but is aided by the Subsample-and-Aggregate approach for certain functions. These techniques hinge on the observation that a function with higher stability on D requires less noise to ensure worst-case privacy. Such techniques are also applied to answer multiple online functions/queries in model-agnostic learning Bassily et al. (2018). However, we highlight two key differences in our setting, which makes a weaker stability assumption. First, in order to estimate the *perturbation stability* of g on D, one needs to downsample or split D into multiple blocks $\hat{D}_1, \ldots, \hat{D}_B$ Thakurta & Smith (2013); Dwork et al. (2014); Bassily et al. (2018), and estimate the *perturbation stability* based on the mode of $g(\hat{D}_1), \ldots, g(\hat{D}_B)$. This essentially reduces the amount of change in the output of g due to a single entry in D, with high probability, and replaces the hard-to-estimate *perturbation stability* of g with the easy-to-compute *perturbation stability* of the mode. Such a notion of stability has also been successfully applied, along with the sparse vector technique, in model-agnostic private learning to handle an exponential number of queries to a model Bassily et al. (2018). Note that in these cases, since a private stochastic test is applied, one cannot achieve pure differential privacy Dwork et al. (2014). In practice, e.g., in federated learning, however, one does not have direct access to D, and thus it is impractical to draw samples from or split D. Second, to ensure good utility, one relies on a key assumption, i.e., the *subsampling stability* of g, which requires $g(\hat{D}) = g(D)$ with high probability over the draw of subsamples $\hat{D}$. Although our intuition in designing DaRRM also relies on the stability of the mode function g, previous uses of stability to improve privacy-utility tradeoffs, e.g., propose-test-release Vadhan (2017); Dwork et al. (2014), require testing such stability, based on which one adds a larger (constant) noise γ. This can still lead to adding redundant noise in our case.

Optimal Randomized Response. Holohan et al. (2017) and Kairouz et al. (2015) show that the classical Randomized Response (RR) mechanism with a constant probability of faithfully revealing the true answer is optimal in certain private estimation problems. Our proposed DaRRM framework and our problem setting are a generalized version of the ones considered in both Holohan et al. (2017) and Kairouz et al. (2015), which not only subsumes RR but also enables a data-dependent probability, or noise addition.
While RR with a constant probability can be shown optimal in problems such as private count queries or private estimation of trait possession in a population, it is not optimal in other problems, such as private majority ensembling, since, unlike in the former problems, changing one response of the underlying mechanisms does not necessarily change the output of the majority. To explicitly compute the minimum amount of noise required, one needs the output distributions of the underlying mechanisms, but these are unknown. To resolve this, our proposed DaRRM framework adds an amount of noise dependent on the set of observed outcomes from the underlying private mechanisms, S, which is a random variable of the dataset and hence serves as a proxy. This enables DaRRM to calibrate the amount of noise based on whether the majority output is likely to change. The amount of noise is automatically reduced when the majority output is not likely to change. Second, Holohan et al. (2017) and Kairouz et al. (2015) both consider a special case of our setting where all K private mechanisms are i.i.d., while our approach focuses on the more general setting where each private mechanism can have a different output distribution.

Learning A Good Noise Distribution. There have been limited works that attempt to derive or learn a good noise distribution that improves the utility. For deep neural network inference, Mireshghallah et al. (2020) attempt to learn the best noise distribution by maximizing utility subject to an entropy Lagrangian, but no formal privacy guarantees were derived. For queries with bounded sensitivity, Geng & Viswanath (2015) demonstrate that the optimal noise distribution is in fact a staircase distribution that approaches the Laplacian distribution as ϵ → 0.

Private Prediction. Instead of releasing a privately trained model as in private learning, private prediction hides the models and only releases private outputs. Private prediction has been shown to be a practical alternative, as performing private prediction is much easier than private learning on a wide range of tasks Dwork & Feldman (2018); Naor et al. (2023); van der Maaten & Hannun (2020). Although a privately trained model can make infinitely many predictions at inference time without incurring additional privacy loss, since differential privacy is closed under post-processing, it has been shown recently that it is indeed possible to make infinitely many private predictions Naor et al. (2023) with a finite privacy loss for specific problems.

## 2.2 Preliminaries

We first introduce the definitions of differential privacy, simple composition, and optimal composition as follows. Optimal composition Kairouz et al. (2015) gives an exact and tight bound on the privacy loss under adaptive composition, which improves upon advanced composition Dwork et al. (2014).

Definition 2.1 (Differential Privacy (DP) Dwork et al. (2014)). A randomized mechanism M : D → R with domain D and range R satisfies (ϵ, δ)-differential privacy for ϵ, δ ≥ 0 if for any two **adjacent datasets** D, D′ and for any subset of outputs S ⊆ R it holds that $\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta$. The case δ = 0 is often called pure differential privacy, while δ > 0 is often called approximate differential privacy.

Theorem 2.2 (Simple Composition Dwork et al. (2014)).
We then formalize the error and utility metrics in our problem as follows:

**Definition 2.4** (Error Metric and Utility Metric). *For the problem setting in Definition 1.1, let the observed (random) outcomes set be $\mathcal{S} = \{S_1, \ldots, S_K\}$, where $S_i \sim M_i(\mathcal{D})$. For a fixed $\mathcal{D}$, we define the error of an algorithm $\mathcal{A}$, i.e., $\mathcal{E}(\mathcal{A})$, in computing the majority function g as the Total Variation (TV) distance between $g(\mathcal{S})$ and $\mathcal{A}(\mathcal{D})$. Specifically,*

$$\mathcal{E}(\mathcal{A})=\mathcal{D}_{TV}(g(\mathcal{S})\parallel\mathcal{A}(\mathcal{D}))=|\Pr[\mathcal{A}(\mathcal{D})=1]-\Pr[g(\mathcal{S})=1]|$$

*and the utility is defined as $1 - \mathcal{E}(\mathcal{A})$.*

**Notation.** Throughout the paper, we use the same notations defined in Problem 1.1 and Definition 2.4. Furthermore, let $\mathcal{D}$ and $\mathcal{D}'$ denote a pair of adjacent datasets with one entry being different. Also, let $p_i = \Pr[M_i(\mathcal{D}) = 1]$ and $p'_i = \Pr[M_i(\mathcal{D}') = 1]$, $\forall i \in [K]$. We omit the subscript i when all $p_i$'s or $p'_i$'s are equal. $\mathbb{I}\{\cdot\}$ denotes the indicator function and $[K] = \{1, 2, \ldots, K\}$. For the purpose of analysis, let $\mathcal{L}(\mathcal{D}) = \sum_{i=1}^{K} M_i(\mathcal{D}) \in \{0, 1, \ldots, K\}$, i.e., the (random) sum of all observed outcomes on dataset $\mathcal{D}$; $\mathcal{D}$ is omitted when the context is clear. Unless specified, we use the noise function $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$ as input to our algorithms to calibrate the probabilistic noise injection. Unless specified, the privacy allowance is $m \in \mathbb{R}$.

## 3 Private Majority Algorithms

The very first approach to consider when solving private majority ensembling (Problem 1.1), since the output is binary, is the classical Randomized Response (RR) mechanism Dwork et al. (2014), where one flips a biased coin with a *constant* probability $p_{const} \in [0, 1]$. If the coin lands on heads, with probability $p_{const}$, output the true majority based on the K samples; if not, simply output a noisy random answer. However, to make the output $(m\epsilon, \delta)$-differentially private, the success probability $p_{const}$ can be at most $O(\frac{m}{K})$ (or $O(\frac{m}{\sqrt{K}})$) when $\delta = 0$ (or $\delta > 0$) (see Appendix A.1), which is too small for any reasonable utility.

The key observation for improved utility is that the probability of success should not be a *constant*, but should depend on the *unpublished* set of observed outcomes from the mechanisms $\mathcal{S}$. If we see many 1's or 0's in $\mathcal{S}$, then there should be a clear majority even on adjacent datasets. On the other hand, if we see about half 1's and half 0's, the majority is highly volatile to data changes, which implies we need more noise to ensure privacy. In summary, if we can calibrate the success probability based on $\mathcal{S}$ to smoothly increase when there is a clear majority, we can improve the utility without affecting privacy.
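As a point of reference, here is a minimal sketch (ours; the function name and the way votes are passed in are illustrative assumptions) of the constant-probability RR majority described above, which Appendix A.1 analyzes as Algorithm 2:

```python
import random

def rr_majority(outcomes, p_const):
    # Classical RR majority: with constant probability p_const, release the
    # true majority of the K observed outcomes; otherwise release a fair coin.
    K = len(outcomes)
    if random.random() < p_const:
        return int(sum(outcomes) >= K / 2)
    return random.randint(0, 1)
```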
**Subsampling.** One natural baseline is outputting the majority of m out of K randomly subsampled mechanisms (without replacement), given a privacy allowance $m \in [K]$. Suppose $\delta \ge m\Delta$; the privacy loss of the aggregated output can be reasoned through simple composition or optimal composition.¹ Interestingly, we show that outputting the majority of m out of K subsampled mechanisms corresponds to RR with a *non-constant* probability $p_\gamma = \gamma_{Sub}(\mathcal{L}(\mathcal{D}))$, which is set by a polynomial function $\gamma_{Sub}: \{0, \ldots, K\} \to [0, 1]$ based on the sum of observed outcomes $\mathcal{L}(\mathcal{D})$, in Lemma 3.1 (see a full proof in Appendix A.2). Intuitively, subsampling may be seen as implicitly adding noise by only outputting based on a randomly chosen subset of the mechanisms; this implicit noise is therefore inherently *data-dependent* through $\mathcal{L}(\mathcal{D})$.

**Lemma 3.1.** *Consider Problem 1.1, with the privacy allowance $m \in [K]$. Consider the data-dependent algorithm that computes $\mathcal{L}(\mathcal{D})$ and then applies RR with probability $p_\gamma$. If $p_\gamma = \gamma_{Sub}(l)$, where $l \in \{0, 1, \ldots, K\}$ is the value of $\mathcal{L}(\mathcal{D})$, i.e., the (random) sum of observed outcomes on dataset $\mathcal{D}$, and $\gamma_{Sub}: \{0, 1, \ldots, K\} \to [0, 1]$ is*

$$\gamma_{Sub}(l)=\gamma_{Sub}(K-l)=\begin{cases}1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}&\text{if }m\text{ is odd}\\ 1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}&\text{if }m\text{ is even}\end{cases}$$

*then the majority of m out of K subsampled mechanisms without replacement and the output of our data-dependent RR algorithm have the same distribution.*

¹Deciding on which composition theorem to apply depends on m and δ. When δ = 0, only simple composition applies. For moderate δ > 0 and small m, simple composition indicates less privacy loss; for larger m or larger δ, optimal composition is clearly better.

**Algorithm 1** DaRRM(·): Data-dependent Randomized Response Majority
1: **Input:** K $(\epsilon, \Delta)$-DP mechanisms $\{M_i\}_{i=1}^K$, noise function $\gamma: \{0,1\}^{K+1} \to [0,1]$ (in our specific setting $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$), dataset $\mathcal{D}$, privacy allowance $1 \le m \le K$, failure probability $\delta \ge \Delta \ge 0$
2: **Output:** $(m\epsilon, \delta)$-DP majority vote of $\{M_i\}_{i=1}^K$
3: $\mathcal{S} = \{S_1, \ldots, S_K\}$, where $S_i \sim M_i(\mathcal{D})$
4: $\mathcal{L} = \sum_{i=1}^K S_i$
5: Set probability $p_\gamma \leftarrow \gamma(\mathcal{S})$ (in our setting $p_\gamma \leftarrow \gamma(\mathcal{L})$)
6: Flip the $p_\gamma$-biased coin
7: **if** Heads (with probability $p_\gamma$) **then**
8: Output $\mathbb{I}\{\frac{1}{K}\mathcal{L} \ge \frac{1}{2}\}$
9: **else**
10: Output 0/1 with equal probability
11: **end if**

**Data-dependent Randomized Response (DaRRM).** Does subsampling give optimal utility? Inspired by the connection between RR and subsampling, we propose Data-dependent Randomized Response Majority (DaRRM) in Algorithm 1 to study optimizing privacy-utility tradeoffs in private majority ensembling. In particular, DaRRM has a *non-constant* success probability $p_\gamma$ that is set by a parameterized noise function γ, which in turn depends on the set of observed outcomes $\mathcal{S} = \{S_1, \ldots, S_K\}$. In fact, we can show that DaRRM is general: any *reasonable* algorithm $\mathcal{A}$, namely one whose output is at least as good as a random guess, can be captured by the DaRRM framework, as stated in Lemma 3.2 (see a full proof in Appendix A.3). We denote DaRRM instantiated with a specific noise function γ by DaRRM$_\gamma$.
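The following is a minimal runnable sketch (ours) of Algorithm 1 together with the $\gamma_{Sub}$ of Lemma 3.1. Helper names are our own; the symmetry $\gamma_{Sub}(l) = \gamma_{Sub}(K-l)$ is applied explicitly by folding l onto $\min(l, K-l)$:

```python
import random
from math import comb

def gamma_sub(l, K, m):
    # gamma_Sub from Lemma 3.1; symmetric around K/2, so fold l onto min(l, K-l).
    l = min(l, K - l)
    tail = 2 * sum(comb(l, j) * comb(K - l, m - j)
                   for j in range(m // 2 + 1, m + 1)) / comb(K, m)
    if m % 2 == 1:
        return 1 - tail
    return 1 - tail - comb(l, m // 2) * comb(K - l, m // 2) / comb(K, m)

def darrm(outcomes, gamma):
    # Algorithm 1: release the true majority w.p. gamma(L), else a fair coin.
    K, L = len(outcomes), sum(outcomes)
    if random.random() < gamma(L):
        return int(L >= (K + 1) / 2)
    return random.randint(0, 1)

K, m = 11, 3
votes = [random.randint(0, 1) for _ in range(K)]   # stand-ins for S_i ~ M_i(D)
print(darrm(votes, lambda l: gamma_sub(l, K, m)))
```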
**Lemma 3.2** (Generality of DaRRM). *Let $\mathcal{A}$ be any randomized algorithm to compute the majority function g on $\mathcal{S}$ such that for all $\mathcal{S}$, $\Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})] \ge 1/2$ (i.e., $\mathcal{A}$ is at least as good as a random guess). Then, there exists a general function $\gamma: \{0,1\}^{K+1} \to [0, 1]$ such that if one sets $p_\gamma$ by $\gamma(\mathcal{S})$ in DaRRM, the output distribution of DaRRM$_\gamma$ is the same as the output distribution of $\mathcal{A}$.*

**Designing the γ Function.** With the DaRRM framework, we ask: how do we design a good γ function that maximizes the utility? First, we introduce two characteristics of γ that do not affect the utility, while simplifying the analysis and the empirical optimization:

(a) **A function of the sum of observed samples**: Since the observed samples set $\mathcal{S}$ is a permutation-invariant set, a sufficient statistic that captures the full state of $\mathcal{S}$ is $\mathcal{L} = \sum_{i=1}^K S_i$, the sum of observed outcomes. This allows us to reduce $\gamma(\mathcal{S}) = \gamma(\mathcal{L})$. *Hence, in the rest of the paper, we focus on $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$.*

(b) **Symmetric around $\frac{K}{2}$**: If γ is asymmetric, we can symmetrize it by reflecting one region about $\frac{K}{2}$ and achieve better or equal expected utility, where the utility is summed over symmetric distributions of the $p_i$'s. Note that $\gamma_{Sub}$ satisfies both characteristics.

Now, recall $\mathcal{L}(\mathcal{D})$ and $\mathcal{L}(\mathcal{D}')$ are the sums of observed outcomes on adjacent datasets $\mathcal{D}$ and $\mathcal{D}'$. Also, recall $p_i = \Pr[M_i(\mathcal{D}) = 1]$ and $p'_i = \Pr[M_i(\mathcal{D}') = 1]$ are the output probabilities of the mechanism $M_i$ on $\mathcal{D}, \mathcal{D}'$. To design a good noise function γ in DaRRM, we start by deriving conditions on γ such that DaRRM$_\gamma$ is $(m\epsilon, \delta)$-differentially private, in Lemma 3.3 (see a full proof in Appendix A.4).

**Lemma 3.3** (γ privacy condition). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, and let $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, where $\mathcal{D}$ and $\mathcal{D}'$ are adjacent datasets and $l \in \{0, \ldots, K\}$. For a noise function $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$ such that $\gamma(l) = \gamma(K - l)$, $\forall l$, DaRRM$_\gamma$ is $(m\epsilon, \delta)$-differentially private if and only if for all $\alpha_l, \alpha'_l$, the following holds,*

$$f(p_1, \ldots, p_K, p'_1, \ldots, p'_K; \gamma) \le e^{m\epsilon} - 1 + 2\delta \tag{1}$$

*where f is called the **privacy cost objective** and*

$$f(p_{1},\ldots,p_{K},p_{1}^{\prime},\ldots,p_{K}^{\prime};\gamma):=\sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha_{l}^{\prime}-\alpha_{l})\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha_{l}^{\prime})\cdot\gamma(l)$$
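For intuition, the privacy cost objective f is easy to evaluate numerically for any candidate γ and any fixed output probabilities. Below is a minimal sketch (ours; `poisson_binomial_pmf` and `privacy_cost` are illustrative names), using the fact that $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(p_1, \ldots, p_K)$:

```python
import math

def poisson_binomial_pmf(ps):
    # pmf of L = sum of independent Bernoulli(p_i), by dynamic programming.
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for l, q in enumerate(pmf):
            new[l] += q * (1 - p)
            new[l + 1] += q * p
        pmf = new
    return pmf

def privacy_cost(ps, ps_prime, gamma, m, eps):
    # Privacy cost objective f from Lemma 3.3: DaRRM_gamma is (m*eps, delta)-DP
    # iff f <= exp(m*eps) - 1 + 2*delta over all adjacent-dataset probabilities.
    K = len(ps)
    e = math.exp(m * eps)
    a, a_p = poisson_binomial_pmf(ps), poisson_binomial_pmf(ps_prime)
    low = sum((e * a_p[l] - a[l]) * gamma(l) for l in range((K - 1) // 2 + 1))
    high = sum((a[l] - e * a_p[l]) * gamma(l) for l in range((K + 1) // 2, K + 1))
    return low + high

# Example: i.i.d. mechanisms with p = 0.4 on D and p' = 0.45 on adjacent D'.
K, m, eps = 11, 3, 0.1
print(privacy_cost([0.4] * K, [0.45] * K, lambda l: 1.0, m, eps))
```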
## 4 Provable Privacy Amplification

We theoretically demonstrate that privacy is provably amplified under improved design of γ in our DaRRM framework. Specifically, we show that when the mechanisms are i.i.d. and δ = 0, we gain privacy amplification by a factor of 2 compared to the naïve subsampling baseline by carefully designing γ.

**Theorem 4.1** (Provable Privacy Amplification by 2). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, with i.i.d. mechanisms $\{M_i\}_{i=1}^K$, i.e., $p_i = p$, $p'_i = p'$, $\forall i \in [K]$, the privacy allowance $m \in [K]$ and $\delta = \Delta = 0$. Let the noise function $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$ be such that: if $m \ge \frac{K+1}{2}$, $\gamma(l) = 1$, and if $m \le \frac{K-1}{2}$,*

$$\gamma(l)=\begin{cases}1-2h(l)&\forall l\le\frac{K-1}{2}\\ 2h(l)-1&\forall l\ge\frac{K+1}{2}\end{cases}\quad\text{where }h(l)=\sum_{i=m}^{2m-1}\frac{\binom{l}{i}\binom{K-l}{2m-1-i}}{\binom{K}{2m-1}},$$

*then DaRRM$_\gamma$ is $m\epsilon$-differentially private.*

**Interpretation.** First, when $m \le \frac{K-1}{2}$ is small, the γ(l) in Theorem 4.1 corresponds to outputting the majority based on subsampling $2m - 1$ outcomes, by Lemma 3.1. However, the subsampling baseline, whose privacy loss is reasoned through simple composition, would have indicated that one can only output the majority based on m outcomes, therefore implying a 2x privacy gain. When $m \ge \frac{K+1}{2}$, the above theorem indicates that we can set a constant γ = 1, which implies we optimally output the true majority with no noise while still, surprisingly, ensuring $m\epsilon$ privacy.

**Intuition.** This 2x privacy gain is intuitively possible because the majority only depends on half of the mechanisms' outputs, so the privacy leakage is also halved. To see this, we start by analyzing the privacy cost objective in Eq. 31, where, through a careful analysis of its gradient, we show that the maximum indeed occurs at $(p^*, p'^*) = (0, 0)$ when γ satisfies certain conditions. Now, when $(p^*, p'^*) \to 0$, note that the probability ratio of outputting 1 with $2m - 1$ outcomes is approximately $e^{m\epsilon}$, where the dependence on m follows because the probability of outputting 1 is dominated by the probability that exactly m mechanisms output 1. To make this rigorous, we derive sufficient conditions on γ that ensure $\max_{(p, p')} f(p, p'; \gamma) = f(0, 0; \gamma) \le e^{m\epsilon} - 1$, as indicated by Lemma 3.3, so that DaRRM is $m\epsilon$-differentially private; a more detailed overview and the full proof can be found in Appendix B.
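The correspondence between Theorem 4.1's γ and subsampling $2m - 1$ of the K mechanisms can be checked directly; the following sketch (ours, with illustrative names) asserts the two noise functions agree on the whole support:

```python
from math import comb

def gamma_theorem41(l, K, m):
    # gamma from Theorem 4.1: h(l) is a hypergeometric tail over 2m-1 draws.
    h = sum(comb(l, i) * comb(K - l, 2 * m - 1 - i)
            for i in range(m, 2 * m)) / comb(K, 2 * m - 1)
    return 1 - 2 * h if 2 * l <= K - 1 else 2 * h - 1

def gamma_sub_odd(l, K, s):
    # gamma_Sub from Lemma 3.1 for an odd subsample size s, folded by symmetry.
    l = min(l, K - l)
    return 1 - 2 * sum(comb(l, j) * comb(K - l, s - j)
                       for j in range((s + 1) // 2, s + 1)) / comb(K, s)

# Theorem 4.1's gamma coincides with subsampling 2m-1 (not just m) of the K
# mechanisms, which is the source of the factor-2 amplification.
K, m = 11, 3
assert all(abs(gamma_theorem41(l, K, m) - gamma_sub_odd(l, K, 2 * m - 1)) < 1e-12
           for l in range(K + 1))
```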
## 5 Optimizing The Noise Function γ In DaRRM

Theoretically designing γ and extending the privacy amplification results to the δ > 0 case is difficult, and it is likely that our crafted γ is far from optimal. On the other hand, one can optimize for the γ* that maximizes the utility, but this involves solving a "Semi-infinite Programming" problem, due to the infinitely many privacy constraints, i.e., the constraints in the optimization problem necessary to ensure DaRRM with the optimized γ satisfies a given privacy loss. Solving a "Semi-infinite Programming" problem is in general intractable, but we show that in our specific setting it is in fact tractable, and we propose a novel learning approach based on DaRRM that optimizes the noise distribution to maximize the utility. To the best of our knowledge, such an optimization, presented as follows, is the first of its kind:

$$\min_{\gamma\in[0,1]^{K+1}}\ \mathbb{E}_{p_{1},p_{2},\ldots,p_{K}\sim\mathcal{T}}\left[\mathcal{E}(\text{DaRRM}_{\gamma})\right] \tag{2}$$

$$\text{s.t.}\quad\max_{\{(p_{i},p_{i}^{\prime})\in\mathcal{F}_{i}\}_{i=1}^{K}}f(p_{1},\ldots,p_{K},p_{1}^{\prime},\ldots,p_{K}^{\prime};\gamma)\le e^{m\epsilon}-1+2\delta \tag{3}$$

$$\gamma(l)=\gamma(K-l),\ \forall l\in\{0,1,\ldots,K\}$$

where f is the privacy cost objective as defined in Lemma 3.3 and $\mathcal{F}_i$ is the feasible region in which $(p_i, p'_i)$ lies, due to each mechanism $M_i$ being ϵ-differentially private. Observe that since γ is symmetric around $\frac{K}{2}$, we only need to optimize $\frac{K+1}{2}$ variables instead of K + 1 variables. $\mathcal{T}$ is the distribution from which $p_1, \ldots, p_K$ are drawn. We want to stress that no prior knowledge about the dataset or the amount of consensus among the private mechanisms is required to use our optimization framework. When there is no prior knowledge about $p_1, \ldots, p_K$, $\mathcal{T}$ is set to be the uniform distribution for maximizing the expected utility. Note the above optimization problem also enables the flexibility of incorporating prior knowledge about the mechanisms by choosing a prior distribution $\mathcal{T}$ to further improve the utility.

**Optimizing Over All Algorithms.** We want to stress that by solving the above optimization problem, we are indeed optimizing over all algorithms for maximal utility, since we show in Lemma 3.2 that DaRRM captures *all reasonable* algorithms computing a private majority.

**Linear Optimization Objective.** Perhaps surprisingly, it turns out that optimizing for γ* is a Linear Programming (LP) problem! Indeed, after expanding the optimization objective in Eq. 2 by the utility definition (see Definition 2.4), optimizing the above objective is essentially the same as optimizing:

$$\min_{\gamma\in[0,1]^{K+1}}-\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\mathbb{E}_{p_{1},p_{2},\ldots,p_{K}\sim\mathcal{T}}\left[(\alpha_{l}-\alpha_{K-l})\right]\cdot\gamma(l)$$

where $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$, $\forall l \in \{0, 1, \ldots, K\}$, and observe $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(p_1, \ldots, p_K)$. The above objective is linear in γ. See a full derivation in Appendix C.1. Although taking the expectation over $p_1, \ldots, p_K$ involves integrating over K variables, which can be computationally expensive, we discuss how to formulate a computationally efficient approximation of the objective in Appendix C.2, which we later use in the experiments. Note that the objective is only for maximizing the utility, and hence approximating the objective does not affect the privacy guarantee.

**Reducing Infinitely Many Constraints to a Polynomial Set.** The constraints in the optimization problem (Eq. 3) are what ensure the output of DaRRM$_\gamma$ is $m\epsilon$-differentially private. We thus call them the privacy constraints. Note that the privacy constraints are linear in γ. Though it appears we need to satisfy infinitely many such privacy constraints, since the $p_i$'s and $p'_i$'s are continuous, we show that through a structural understanding of DaRRM, the number of privacy constraints can be reduced from infinitely many to exponentially many, and further to a polynomial set. First, we observe that the privacy cost objective f is linear in each independent pair $(p_i, p'_i)$, fixing all $(p_j, p'_j)$, $\forall j \ne i$; hence, finding the worst case probabilities in $(p_i, p'_i)$ given any γ, $(p_i^*, p_i'^*) = \arg\max_{(p_i, p'_i)} f(p_1, \ldots, p_K, p'_1, \ldots, p'_K; \gamma)$, is a linear programming (LP) problem. Furthermore, since $p_i$ and $p'_i$ are the probabilities of outputting 1 from the i-th $(\epsilon, \Delta)$-differentially private mechanism $M_i$ on adjacent datasets, they are by definition close and lie in a feasible region $\mathcal{F}_i$, which we show has 8 corners if δ > 0 (and only 4 corners if δ = 0). This implies $(p_i^*, p_i'^*)$ only occurs at one of the corners of $\mathcal{F}_i$, and hence the number of constraints reduces to $8^K$ (and $4^K$ if δ = 0). Second, observe that $\alpha_l$ and $\alpha'_l$ in the privacy cost objective f are the pmfs of two Poisson Binomial distributions at $l \in \{0, \ldots, K\}$. Notice that the Poisson Binomial is invariant under permutation of its parameters, i.e., $\text{PoissonBinomial}(p_1, \ldots, p_K)$ has the same distribution as $\text{PoissonBinomial}(\pi(p_1, \ldots, p_K))$ under any permutation π. Based on this observation, we show the number of constraints can be further reduced to $O(K^7)$ if δ > 0 (and $O(K^3)$ if δ = 0). We formalize the two-step reduction of the number of privacy constraints in Lemma 5.1 as follows; see a full proof in Appendix C.3.²

**Lemma 5.1.** *Consider using DaRRM (Algorithm 1) to solve Problem 1.1 and let f be the privacy cost objective as defined in Lemma 3.3. Given an arbitrary noise function γ, let the worst case probabilities be*

$$(p_{1}^{*},\ldots,p_{K}^{*},p_{1}^{\prime*},\ldots,p_{K}^{\prime*})=\arg\max_{\{(p_{i},p_{i}^{\prime})\}_{i=1}^{K}}f(p_{1},\ldots,p_{K},p_{1}^{\prime},\ldots,p_{K}^{\prime};\gamma).$$

*Then, each pair $(p_i^*, p_i'^*)$, $\forall i \in [K]$, satisfies*

$$(p_{i}^{*},p_{i}^{\prime*})\in\Big\{(0,0),(1,1),(0,\Delta),(\Delta,0),(1-\Delta,1),(1,1-\Delta),\Big(\frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1},\frac{1-\Delta}{e^{\epsilon}+1}\Big),\Big(\frac{1-\Delta}{e^{\epsilon}+1},\frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1}\Big)\Big\}$$

*Furthermore, when δ > 0, there exists a finite vector set $\mathcal{P}$ of size $O(K^7)$ such that if $\beta = \max_{\{(p_i, p'_i)\}_{i=1}^K \in \mathcal{P}} f(p_1, \ldots, p_K, p'_1, \ldots, p'_K; \gamma)$, then $f(p_1^*, \ldots, p_K^*, p_1'^*, \ldots, p_K'^*; \gamma) \le \beta$. When δ = 0, the size of $\mathcal{P}$ can be reduced to $O(K^3)$.*

²**Practical Limitation.** Although the number of constraints is polynomial in K and optimizing γ in DaRRM is an LP, $O(K^7)$ can still make the number of constraints intractably large when K is large. In practice, we observe that with the Gurobi optimizer, one can optimize γ for $K \le 41$ on a laptop if δ > 0. But if δ = 0, since the number of privacy constraints is $O(K^3)$, one can optimize for K over 100.
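The corner reduction in Lemma 5.1 can be checked numerically. Below is a small sketch (ours; names and parameters are illustrative) for the δ = 0, ∆ = 0 case: since f is affine in each $(p_i, p'_i)$ pair and permutation-invariant, its maximum over corner multisets upper-bounds f at random feasible points:

```python
import itertools
import math
import random

def pb_pmf(ps):
    # Poisson Binomial pmf via dynamic programming.
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for l, q in enumerate(pmf):
            new[l] += q * (1 - p)
            new[l + 1] += q * p
        pmf = new
    return pmf

def f_cost(ps, ps_p, gamma, m, eps):
    # Privacy cost objective f from Lemma 3.3.
    K, e = len(ps), math.exp(m * eps)
    a, a_p = pb_pmf(ps), pb_pmf(ps_p)
    return (sum((e * a_p[l] - a[l]) * gamma(l) for l in range((K + 1) // 2))
            + sum((a[l] - e * a_p[l]) * gamma(l) for l in range((K + 1) // 2, K + 1)))

K, m, eps = 5, 2, 0.3
gamma = lambda l: abs(2 * l - K) / K          # an arbitrary symmetric gamma
E = math.exp(eps)
corners = [(0.0, 0.0), (1.0, 1.0),
           (E / (E + 1), 1 / (E + 1)), (1 / (E + 1), E / (E + 1))]  # delta = 0
# Max over corner assignments; by permutation invariance, only multisets matter.
corner_max = max(f_cost([c[0] for c in combo], [c[1] for c in combo], gamma, m, eps)
                 for combo in itertools.combinations_with_replacement(corners, K))
# Random feasible (p_i, p_i') pairs never exceed the corner maximum.
for _ in range(2000):
    ps, ps_p = [], []
    for _ in range(K):
        p = random.random()
        lo, hi = max(p / E, 1 - E * (1 - p)), min(E * p, 1 - (1 - p) / E)
        ps.append(p)
        ps_p.append(random.uniform(lo, hi))
    assert f_cost(ps, ps_p, gamma, m, eps) <= corner_max + 1e-9
```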
## 6 Experiments

We empirically solve the above optimization problem (Eq. 2)³ using the Gurobi⁴ solver and first present the shape of the optimized γ function, which we call $\gamma_{opt}$, and its utility in Section 6.1. Then, we demonstrate the compelling effectiveness of DaRRM with an optimized γ function, i.e., DaRRM$_{\gamma_{opt}}$, in ensembling labels for private prediction from private teachers, through the application of semi-supervised knowledge transfer for private image classification in Section 6.2.

## 6.1 Optimized γ In Simulations

[Figure 2: Plots of the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different γ functions: the optimized $\gamma_{opt}$, and the baselines $\gamma_{Sub}$ (corresponding to subsampling) and $\gamma_{const}$ (corresponding to RR). Here, K = 11, m ∈ {1, 3, 5, 7}, ϵ = 0.1, ∆ = $10^{-5}$ and δ = m∆.]

We compare the shape and the error $\mathcal{E}(\text{DaRRM}_\gamma)$ of different γ functions: an optimized $\gamma_{opt}$ and the subsampling $\gamma_{Sub}$ as in Lemma 3.1.⁵ We also compare against $p_{const}$ in the classical baseline RR (see Section A.1) and $\mathcal{E}(\text{RR})$. Here, $p_{const}$ can be viewed as a constant noise function $\gamma_{const}(l) = p_{const}$, $\forall l \in \{0, 1, \ldots, K\}$, and $\mathcal{E}(\text{RR})$ is the same as $\mathcal{E}(\text{DaRRM}_{\gamma_{const}})$.

We present the results with K = 11, ϵ = 0.1, ∆ = $10^{-5}$ and m ∈ {1, 3, 5, 7}. We assume there is no prior knowledge about the mechanisms $\{M_i\}_{i=1}^K$, and set $\mathcal{T}$, the prior distribution from which the $p_i$'s are drawn in the optimization objective (Eq. 2) when searching for $\gamma_{opt}$, to be the uniform distribution. Note that when K and m are small, as in this setting, optimal composition indeed has the same privacy loss as simple composition (see Appendix D.1.1 for a comparison of different composition bounds). Recall that we want the majority output to be $(m\epsilon, \delta)$-differentially private.
Since the failure probability of privacy composition composes linearly in this case, to ensure a fair comparison against the subsampling baseline, we set δ = m · ∆. We plot each γ function over the support {0, 1, . . . , K} and the corresponding error of each algorithm in Figure 2.

**Discussion.** In summary, the optimized noise function $\gamma_{opt}$ has a higher probability of outputting the true majority over the support than the γ functions corresponding to the baselines. This implies DaRRM$_{\gamma_{opt}}$ has the lowest error (and hence, highest utility), which is verified in the bottom set of plots. More results comparing the DaRRM$_{\gamma_{opt}}$ optimized under the uniform $\mathcal{T}$ against the baselines by optimal composition, and in pure differential privacy settings (i.e., ∆ = δ = 0) for large K and m, can be found in Appendix D.1.2 and D.1.3. Furthermore, we include results optimizing γ using a non-uniform $\mathcal{T}$ prior in Appendix D.1.4.

³All code for the experiments can be found at https://anonymous.4open.science/r/OptimizedPrivateMajority-CF50
⁴https://www.gurobi.com/
⁵Note the subsampling mechanism from Section 4, which enjoys a privacy amplification by a factor of 2, only applies to pure differential privacy settings (i.e., when ∆ = δ = 0). However, we focus on the more general approximate differential privacy settings (with ∆ > 0) in the experiments, and hence, the subsampling baseline we consider throughout this section is the basic version without privacy amplification. To see how the subsampling mechanism from Section 4 with privacy amplification compares against the other algorithms, please refer to Appendix D.1.3.

## 6.2 Private Semi-Supervised Knowledge Transfer

Table 1: Accuracy of the predicted labels of Q query samples on the datasets MNIST (top) and Fashion-MNIST (bottom). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset. Note each prediction on a query sample is $(\epsilon_{query}, \delta_{query})$-differentially private. With the same per-query privacy loss (and hence the same total privacy loss over Q samples), DaRRM$_{\gamma_{opt}}$ achieves the highest accuracy compared to the other two baselines.

**MNIST**

| # Queries | GNMax (Baseline) | DaRRM$_{\gamma_{Sub}}$ (Baseline) | DaRRM$_{\gamma_{opt}}$ (Ours) |
|-----------|------------------|-----------------------------------|-------------------------------|
| Q = 20    | 0.63 (0.09)      | 0.76 (0.09)                       | 0.79 (0.09)                   |
| Q = 50    | 0.66 (0.06)      | 0.75 (0.06)                       | 0.79 (0.05)                   |
| Q = 100   | 0.64 (0.04)      | 0.76 (0.04)                       | 0.80 (0.04)                   |

**Fashion-MNIST**

| # Queries | GNMax (Baseline) | DaRRM$_{\gamma_{Sub}}$ (Baseline) | DaRRM$_{\gamma_{opt}}$ (Ours) |
|-----------|------------------|-----------------------------------|-------------------------------|
| Q = 20    | 0.65 (0.11)      | 0.90 (0.07)                       | 0.96 (0.03)                   |
| Q = 50    | 0.59 (0.06)      | 0.94 (0.03)                       | 0.96 (0.02)                   |
| Q = 100   | 0.64 (0.04)      | 0.93 (0.02)                       | 0.96 (0.02)                   |

**Semi-supervised Knowledge Transfer.** We apply our DaRRM framework to the application of semi-supervised knowledge transfer for private image classification. We follow a setup similar to that of PATE Papernot et al. (2017; 2018), where one trains K teachers, each on a subset of a sensitive dataset, and at inference time queries the teachers for the majority of their votes, i.e., the predicted labels, on a test sample. Each time the teachers are queried, there is a privacy loss, and we focus on this private prediction subroutine in this section. To limit the total privacy loss over all queries, the student model is also trained on a public dataset without labels. The student model queries the labels of a small portion of the samples in this dataset from the teachers and is then trained using semi-supervised learning algorithms on both the labeled and unlabeled samples from the public dataset.

**Baselines.** We want the privacy loss per query of a test sample to the teachers to be $(\epsilon_{query}, \delta_{query})$. This can be achieved in two ways: 1) Train K non-private teachers, add Gaussian noise to the number of predicted labels from the teachers in each output class, and output the majority of the noisy votes. This is exactly the GNMax algorithm from PATE Papernot et al. (2018).
2) Train K $(\epsilon, \Delta)$-differentially private teachers and output the majority of the teachers' votes, adding a smaller amount of noise. This can be computed using DaRRM with an appropriate noise function γ. We compare the performance of GNMax and DaRRM with two γ functions: $\gamma_{opt}$ (i.e., the optimized γ) and $\gamma_{Sub}$ (i.e., the subsampling baseline). The overall privacy loss over Q queries to the teachers can be computed by optimal composition.

**Experiment Setup.** We use samples from two randomly chosen classes, class 5 and class 8, from the MNIST and Fashion-MNIST datasets to form our training and testing datasets. Our MNIST subset has a total of 11272 training samples and 1866 testing samples; our Fashion-MNIST subset has 10000 training samples and 2000 testing samples. We train K = 11 teachers on equally divided subsets of the training datasets. Each teacher is a CNN model. The non-private and private teachers are trained using SGD and DP-SGD Abadi et al. (2016), respectively, for 5 epochs.

*DaRRM Setup:* The Gaussian noise in DP-SGD has zero mean and std. $\sigma_{dpsgd} = 12$; the gradient norm clipping threshold is C = 1. This results in each private teacher, trained on MNIST and Fashion-MNIST, being $(\epsilon, \Delta) = (0.0892, 10^{-4})$- and $(0.0852, 10^{-4})$-differentially private, respectively, after 5 epochs. We set the privacy allowance m = 3⁶, and the privacy loss per query is then computed using optimal or simple composition under m-fold composition, which give the same privacy loss in this high-privacy regime, resulting in $(\epsilon_{query}, \delta_{query}) = (0.2676, 0.0003)$ on MNIST and $(0.2556, 0.0003)$ on Fashion-MNIST.

*GNMax Setup:* We now compute the std. σ of the Gaussian noise used by GNMax to achieve a per-query privacy loss of $(m\epsilon, m\Delta)$, as in the DaRRM setup. We optimize σ according to the Renyi differential privacy loss bound of Gaussian noise. Although Papernot et al. (2018) give a potentially tighter data-dependent privacy loss bound for majority ensembling of *non-private* teachers, we found that when K and the number of output classes are small, as in our case, even if all teachers agree on a single output class, the condition of the data-dependent bound is not satisfied. Hence, we only use the privacy loss bound of Gaussian noise here to set σ in GNMax. See Appendix D.2.1 for more details, including the σ values and other parameters.

Finally, the per-query privacy loss and the total privacy loss over Q queries, the latter computed by optimal composition, are reported in Table 2. The testing dataset is treated as the public dataset on which one trains a student model. Papernot et al. (2018) empirically show that querying Q = 1% · N samples from a public dataset of size N suffices to train a student model with good performance. Therefore, we pick Q ∈ {20, 50, 100}. We repeat the selection of the Q samples 10 times and report the mean test accuracy with one std. in parentheses in Table 1. The Q queries serve as the labeled samples in training the student model; the higher the accuracy of the labels from the queries, the better the final performance of the student model. We skip the actual training of the student model using semi-supervised learning algorithms here.
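For completeness, the per-query privacy loss above is just m-fold simple composition of each teacher's guarantee; a tiny sketch (ours) reproducing the reported MNIST numbers:

```python
# Per-query privacy loss for DaRRM with privacy allowance m, via m-fold
# simple composition over each teacher's (eps, Delta) guarantee.
eps, Delta, m = 0.0892, 1e-4, 3           # private MNIST teachers after 5 epochs
eps_query, delta_query = m * eps, m * Delta
print(eps_query, delta_query)              # 0.2676 and 0.0003, as reported above
```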
| Dataset       | # Queries | Privacy loss per query $(\epsilon_{query}, \delta_{query})$ | Total privacy loss over Q queries $(\epsilon_{total}, \delta_{total})$ |
|---------------|-----------|--------------------------------------------------------------|-------------------------------------------------------------------------|
| MNIST         | Q = 20    | (0.2676, 0.0003)                                             | (5.352, 0.006)                                                          |
| MNIST         | Q = 50    | (0.2676, 0.0003)                                             | (9.901, 0.015)                                                          |
| MNIST         | Q = 100   | (0.2676, 0.0003)                                             | (15.044, 0.030)                                                         |
| Fashion-MNIST | Q = 20    | (0.2556, 0.0003)                                             | (5.112, 0.006)                                                          |
| Fashion-MNIST | Q = 50    | (0.2556, 0.0003)                                             | (9.382, 0.015)                                                          |
| Fashion-MNIST | Q = 100   | (0.2556, 0.0003)                                             | (14.219, 0.030)                                                         |

Table 2: The privacy loss per query to the teachers and the total privacy loss over Q queries. Note the total privacy loss is computed by optimal composition (see Theorem 2.3), where we set δ′ = 0.0001.

**Discussion.** Table 1 shows that DaRRM$_{\gamma_{opt}}$ achieves the highest accuracy (i.e., utility) compared to the two baselines on both datasets. First, compared to DaRRM$_{\gamma_{Sub}}$, we verify that subsampling does not achieve a tight privacy-utility tradeoff, and that we can optimize the noise function γ in DaRRM to maximize the utility given a target privacy loss. Second, compared to GNMax, the result shows there are regimes where ensembling private teachers gives higher utility than directly ensembling non-private teachers, assuming the outputs in both settings have the same privacy loss. Intuitively, this is because ensembling private teachers adds fine-grained noise both when training the teachers and when aggregating the teachers' votes, while ensembling non-private teachers adds a coarser amount of noise only to the teachers' outputs. This further motivates private prediction from private teachers and the practical usage of DaRRM, in addition to the need to aggregate private teachers in federated learning settings with an honest-but-curious server.

⁶Here, we present results with privacy allowance m = 3 because we think this is a more interesting case. m = 1 is less interesting, since one cannot get improvement compared to the subsampling baseline. m close to $\frac{K}{2} \approx 5$ is also less interesting, as this case seems too easy for our proposed method (the optimized γ function is very close to 1, meaning very little noise needs to be added in this case). Hence, we pick m = 3, which is a case where improvement is possible, and which is also potentially challenging for our optimization framework. This is also realistic, as most applications would only want to tolerate a constant privacy overhead. See more results with different privacy allowances m in this setting in Appendix D.2.2.

## 7 Conclusion

In computing a private majority from K private mechanisms, we propose the DaRRM framework, which is provably general, with a customizable γ function. We show a privacy amplification by a factor of 2 in the i.i.d. mechanisms and pure differential privacy setting. For the general setting, we propose a tractable optimization algorithm that maximizes utility while ensuring privacy guarantees. Furthermore, we demonstrate the empirical effectiveness of DaRRM with an optimized γ. We hope that this work inspires more research on the intersection of privacy frameworks and optimization.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pp. 308–318, 2016.

Raef Bassily, Om Thakkar, and Abhradeep Thakurta. Model-agnostic private learning via stability. *arXiv preprint arXiv:1803.05101*, 2018.
Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In *2021 IEEE Symposium on Security and Privacy (SP)*, pp. 141–159. IEEE, 2021.

Graham Cormode, Somesh Jha, Tejas Kulkarni, Ninghui Li, Divesh Srivastava, and Tianhao Wang. Privacy at scale: Local differential privacy in practice. In *Proceedings of the 2018 International Conference on Management of Data*, pp. 1655–1658, 2018.

Jinshuo Dong, David Durfee, and Ryan Rogers. Optimal differential privacy composition for exponential mechanisms. In *International Conference on Machine Learning*, pp. 2597–2606. PMLR, 2020.

David Durfee and Ryan M Rogers. Practical differentially private top-k selection with pay-what-you-get composition. *Advances in Neural Information Processing Systems*, 32, 2019.

Cynthia Dwork and Vitaly Feldman. Privacy-preserving prediction. In *Conference On Learning Theory*, pp. 1693–1702. PMLR, 2018.

Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In *Proceedings of the forty-first annual ACM symposium on Theory of computing*, pp. 371–380, 2009.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In *Theory of cryptography conference*, pp. 265–284. Springer, 2006.

Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. *Found. Trends Theor. Comput. Sci.*, 9(3-4):211–407, 2014.

Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. Rappor: Randomized aggregatable privacy-preserving ordinal response. In *Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security*, CCS '14, pp. 1054–1067, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450329576. doi: 10.1145/2660267.2660348. URL https://doi.org/10.1145/2660267.2660348.

Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta. Amplification by shuffling: From local to central differential privacy via anonymity. In *Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms*, pp. 2468–2479. SIAM, 2019.

Vitaly Feldman, Ilya Mironov, Kunal Talwar, and Abhradeep Thakurta. Privacy amplification by iteration. In *2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)*, pp. 521–532. IEEE, 2018.

Quan Geng and Pramod Viswanath. The optimal noise-adding mechanism in differential privacy. *IEEE Transactions on Information Theory*, 62(2):925–951, 2015.

Naoise Holohan, Douglas J. Leith, and Oliver Mason. Optimal differentially private mechanisms for randomised response. *IEEE Transactions on Information Forensics and Security*, 12(11):2726–2735, November 2017. ISSN 1556-6021. doi: 10.1109/tifs.2017.2718487. URL http://dx.doi.org/10.1109/TIFS.2017.2718487.

Junjie Jia and Wanyong Qiu. Research on an ensemble classification algorithm based on differential privacy. *IEEE Access*, 8:93499–93513, 2020. doi: 10.1109/ACCESS.2020.2995058.

Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential privacy. In *International conference on machine learning*, pp. 1376–1385. PMLR, 2015.

Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. *arXiv preprint arXiv:1610.05492*, 2016.

Jingcheng Liu and Kunal Talwar. Private selection from private candidates.
In *Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing*, STOC 2019, pp. 298–309, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367059. doi: 10.1145/3313276.3316377. URL https://doi.org/10.1145/3313276.3316377.

Zhongfeng Liu, Yun Li, and Wei Ji. Differential private ensemble feature selection. In *2018 International Joint Conference on Neural Networks (IJCNN)*, pp. 1–6, 2018. doi: 10.1109/IJCNN.2018.8489308.

Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Ali Jalali, Dean Tullsen, and Hadi Esmaeilzadeh. Shredder: Learning noise distributions to protect inference privacy. In *Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems*, pp. 3–18, 2020.

Moni Naor, Kobbi Nissim, Uri Stemmer, and Chao Yan. Private everlasting prediction. *arXiv preprint arXiv:2305.09579*, 2023.

Mohammad Naseri, Jamie Hayes, and Emiliano De Cristofaro. Local and central differential privacy for robustness and privacy in federated learning. *arXiv preprint arXiv:2009.03561*, 2020.

Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In *Proceedings of the thirty-ninth annual ACM symposium on Theory of computing*, pp. 75–84, 2007.

Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=-70L8lpp9DF.

Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In *Proceedings of the International Conference on Learning Representations*, 2017. URL https://arxiv.org/abs/1610.05755.

Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson. Scalable private learning with pate. *arXiv preprint arXiv:1802.08908*, 2018.

Omer Sagi and Lior Rokach. Ensemble learning: A survey. *Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery*, 8(4):e1249, 2018.

Uri Stemmer. Private truly-everlasting robust-prediction. *arXiv preprint arXiv:2401.04311*, 2024.

Abhradeep Guha Thakurta and Adam Smith. Differentially private feature selection via stability arguments, and the robustness of the lasso. In *Conference on Learning Theory*, pp. 819–850. PMLR, 2013.

Salil Vadhan. *The Complexity of Differential Privacy*, pp. 347–450. Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-57048-8_7. URL https://doi.org/10.1007/978-3-319-57048-8_7.

Laurens van der Maaten and Awni Hannun. The trade-offs of private prediction, 2020.

Ming Xiang and Lili Su. β-stochastic sign SGD: A byzantine resilient and differentially private gradient compressor for federated learning, 2023. URL https://openreview.net/forum?id=oVPqFCI1g7q.

Tao Xiang, Yang Li, Xiaoguo Li, Shigang Zhong, and Shui Yu. Collaborative ensemble learning under differential privacy. *Web Intelligence*, 16:73–87, 03 2018. doi: 10.3233/WEB-180374.

Ying-Ying Zhang, Teng-Zhong Rong, and Man-Man Li. Expectation identity for the binomial distribution and its application in the calculations of high-order binomial moments. *Communications in Statistics - Theory and Methods*, 48(22):5467–5476, 2019. doi: 10.1080/03610926.2018.1435818. URL https://doi.org/10.1080/03610926.2018.1435818.

Yuqing Zhu, Xuandong Zhao, Chuan Guo, and Yu-Xiang Wang.
" private prediction strikes back!"private kernelized nearest neighbors with individual renyi filter. *arXiv preprint arXiv:2306.07381*, 2023. ## A Details Of Section 3 A.1 Randomized Response With Constant Probability Pconst Algorithm 2 Randomized Response Majority (RR) 1: Input: K (ϵ, ∆)-DP mechanisms {Mi} K i=1, noise function γ : {0, . . . , K} → [0, 1], dataset D, privacy allowance 1 ≤ m ≤ K, failure probability δ ≥ ∆ ≥ 0 2: Output: (*mϵ, δ*)-DP majority vote of {Mi} K i=1 3: Compute a *constant* probability p*const* ∈ [0, 1] 4: Flip the p*const*- biased coin 5: if Head (with probability p*const*) **then** 6: S = {S1*, .., S*k}, where Si ∼ Mi(D) 7: L =PK i=1 Si 8: Output I{ 1 K L ≥ 12 } 9: **else** 10: Output 0/1 with equal probability 11: **end if** We show the magnitude of p*const* in RR (Algorithm 2) to solve Problem 1.1, such that the output is (*mϵ, δ*)-DP, in Lemma A.1. Lemma A.1. Consider using RR *(Algorithm 2) to solve Problem 1.1. Let the majority of* K (ϵ, ∆)- differentially private mechanisms be (τ ϵ, λ)-differentially private, where τ ∈ [1, K] and λ ∈ [0, 1) *are computed* by simple composition (Theorem 2.2) or optimal *composition (Theorem 2.3). If* $$p_{c o n s t}\leq\frac{e^{m\epsilon}-1+2\delta}{\frac{2(e^{\tau\epsilon}-e^{m\epsilon}+(1+e^{m\epsilon})\lambda)}{e^{\tau\epsilon}+1}+e^{m\epsilon}-1}$$ $$\left(4\right)$$ then RR is (mϵ, δ)*-differentially private.* Proof of Lemma A.1. Let x ∈ {0, 1} denote the output of RR. Let qx = Pr[L(D) = x] and q ′ x = Pr[L(D′) = x], where L(D) = PK i=1 Mi(D), L(D′) = PK i=1 Mi(D′) and D, D′ are adjacent datasets. Recall each mechanism Miis (ϵ, ∆)-differentially private, and the majority of the outputs of {Mi} K i=1 is (*τ ϵ, λ*)-differentially private. When ∆ = 0, using simple composition, τ = K and λ = 0. When ∆ > 0, using optimal composition τ ≈ √K and λ ≈ K∆. By definition of differential privacy (Definition 2.1), all of the following four constraints on qx, q′x apply: $$\begin{array}{l l}{{q_{x}\leq e^{\tau\epsilon}q_{x}^{\prime}+\lambda,}}&{{\mathrm{~and~}}}&{{1-q_{x}^{\prime}\leq e^{\tau\epsilon}(1-q_{x})+\lambda}}\\ {{q_{x}^{\prime}\leq e^{\tau\epsilon}q_{x}+\lambda,}}&{{\mathrm{~and~}}}&{{1-q_{x}\leq e^{\tau\epsilon}(1-q_{x}^{\prime})+\lambda}}\end{array}$$ To ensure RR is (*mϵ, δ*)-differentially private, p*const* needs to be such that for all possible qx, q′x ∈ [0, 1], $$\Pr[\mathrm{RR}(\mathcal{D})=x]\leq e^{m e}\Pr[\mathrm{RR}(\mathcal{D}^{\prime})=x]+\delta$$ $$p_{c o n s t}\cdot q_{x}+\frac{1}{2}(1-p_{c o n s t})\leq e^{m e}(p_{c o n s t}\cdot q_{x}^{\prime}+\frac{1}{2}(1-p_{c o n s t}))+\delta$$ $$(q_{x}-e^{m e}q_{x}^{\prime}+\frac{1}{2}e^{m e}-\frac{1}{2})\cdot p_{c o n s t}\leq\frac{1}{2}e^{m e}-\frac{1}{2}+\delta$$ Let h(qx, q′x ) := qx − e mϵq ′ x + 1 2 e mϵ − 1 2 . The above inequality of p*const* (Eq. 7) needs to hold for worst case output probabilities q ∗ x , q′∗ x that cause the maximum privacy loss. 
*Proof of Lemma A.1.* Let $x \in \{0, 1\}$ denote the output of RR. Let $q_x = \Pr[\mathbb{I}\{\frac{1}{K}\mathcal{L}(\mathcal{D}) \ge \frac{1}{2}\} = x]$ and $q'_x = \Pr[\mathbb{I}\{\frac{1}{K}\mathcal{L}(\mathcal{D}') \ge \frac{1}{2}\} = x]$ denote the output probabilities of the (noiseless) majority, where $\mathcal{L}(\mathcal{D}) = \sum_{i=1}^K M_i(\mathcal{D})$, $\mathcal{L}(\mathcal{D}') = \sum_{i=1}^K M_i(\mathcal{D}')$ and $\mathcal{D}, \mathcal{D}'$ are adjacent datasets. Recall each mechanism $M_i$ is $(\epsilon, \Delta)$-differentially private, and the majority of the outputs of $\{M_i\}_{i=1}^K$ is $(\tau\epsilon, \lambda)$-differentially private. When $\Delta = 0$, using simple composition, $\tau = K$ and $\lambda = 0$. When $\Delta > 0$, using optimal composition, $\tau \approx \sqrt{K}$ and $\lambda \approx K\Delta$. By definition of differential privacy (Definition 2.1), all of the following four constraints on $q_x, q'_x$ apply:

$$q_{x}\leq e^{\tau\epsilon}q_{x}^{\prime}+\lambda,\quad\text{and}\quad1-q_{x}^{\prime}\leq e^{\tau\epsilon}(1-q_{x})+\lambda \tag{5}$$
$$q_{x}^{\prime}\leq e^{\tau\epsilon}q_{x}+\lambda,\quad\text{and}\quad1-q_{x}\leq e^{\tau\epsilon}(1-q_{x}^{\prime})+\lambda \tag{6}$$

To ensure RR is $(m\epsilon, \delta)$-differentially private, $p_{const}$ needs to be such that for all possible $q_x, q'_x \in [0, 1]$,

$$\Pr[\text{RR}(\mathcal{D})=x]\leq e^{m\epsilon}\Pr[\text{RR}(\mathcal{D}^{\prime})=x]+\delta$$
$$p_{const}\cdot q_{x}+\frac{1}{2}(1-p_{const})\leq e^{m\epsilon}\Big(p_{const}\cdot q_{x}^{\prime}+\frac{1}{2}(1-p_{const})\Big)+\delta$$
$$\Big(q_{x}-e^{m\epsilon}q_{x}^{\prime}+\frac{1}{2}e^{m\epsilon}-\frac{1}{2}\Big)\cdot p_{const}\leq\frac{1}{2}e^{m\epsilon}-\frac{1}{2}+\delta \tag{7}$$

Let $h(q_x, q'_x) := q_x - e^{m\epsilon}q'_x + \frac{1}{2}e^{m\epsilon} - \frac{1}{2}$. The above inequality on $p_{const}$ (Eq. 7) needs to hold for the worst case output probabilities $q_x^*, q_x'^*$ that cause the maximum privacy loss. That is, $p_{const}$ needs to satisfy

$$p_{const}\cdot\max_{q_{x},q_{x}^{\prime}}h(q_{x},q_{x}^{\prime})\leq\frac{1}{2}e^{m\epsilon}-\frac{1}{2}+\delta \tag{8}$$

To find the worst case output probabilities, we solve the following Linear Programming (LP) problem:

$$\text{Objective:}\quad\max_{q_{x},q_{x}^{\prime}}\ h(q_{x},q_{x}^{\prime}):=q_{x}-e^{m\epsilon}q_{x}^{\prime}+\frac{1}{2}e^{m\epsilon}-\frac{1}{2} \tag{9}$$
$$\text{Subject to:}\quad0\leq q_{x}\leq1,\ 0\leq q_{x}^{\prime}\leq1 \tag{10}$$
$$q_{x}\leq e^{\tau\epsilon}q_{x}^{\prime}+\lambda,\quad1-q_{x}^{\prime}\leq e^{\tau\epsilon}(1-q_{x})+\lambda \tag{11}$$
$$q_{x}^{\prime}\leq e^{\tau\epsilon}q_{x}+\lambda,\quad1-q_{x}\leq e^{\tau\epsilon}(1-q_{x}^{\prime})+\lambda \tag{12}$$

[Figure 3: A visualization of the above LP problem.]

The optimum of any LP problem is at the corners of the feasible region, which is bounded by the optimization constraints. We plot the feasible region $\mathcal{F}$ and the objective of the above LP problem in Figure 3. Here, $(q_x^*, q_x'^*) = \arg\max_{q_x, q'_x} h(q_x, q'_x) \in \{(0, 0), (1, 1), (0, \lambda), (\lambda, 0), (1-\lambda, 1), (1, 1-\lambda), (\frac{1-\lambda}{e^{\tau\epsilon}+1}, \frac{e^{\tau\epsilon}+\lambda}{e^{\tau\epsilon}+1}), (\frac{e^{\tau\epsilon}+\lambda}{e^{\tau\epsilon}+1}, \frac{1-\lambda}{e^{\tau\epsilon}+1})\}$. The optimum of the LP problem, that is, the worst case probabilities $q_x^*, q_x'^*$, is

$$q_{x}^{*}=\frac{e^{\tau\epsilon}+\lambda}{e^{\tau\epsilon}+1},\quad q_{x}^{\prime*}=\frac{1-\lambda}{e^{\tau\epsilon}+1} \tag{13}$$

By Eq. 8,

$$p_{const}\cdot\left(\frac{e^{\tau\epsilon}+\lambda}{e^{\tau\epsilon}+1}-e^{m\epsilon}\frac{1-\lambda}{e^{\tau\epsilon}+1}+\frac{1}{2}e^{m\epsilon}-\frac{1}{2}\right)\leq\frac{1}{2}(e^{m\epsilon}-1)+\delta \tag{14}$$
$$p_{const}\cdot\left(\frac{e^{\tau\epsilon}-e^{m\epsilon}+(1+e^{m\epsilon})\lambda}{e^{\tau\epsilon}+1}+\frac{1}{2}(e^{m\epsilon}-1)\right)\leq\frac{1}{2}(e^{m\epsilon}-1)+\delta \tag{15}$$
$$p_{const}\leq\frac{e^{m\epsilon}-1+2\delta}{\frac{2(e^{\tau\epsilon}-e^{m\epsilon}+(1+e^{m\epsilon})\lambda)}{e^{\tau\epsilon}+1}+e^{m\epsilon}-1} \tag{16}$$

For small $m, \epsilon, K$, using the approximation $e^y \approx 1 + y$ and that $\tau\epsilon < 2$,

$$p_{const}\approx\frac{m\epsilon+2\delta}{\frac{2(\tau\epsilon-m\epsilon+(2+m\epsilon)\lambda)}{\tau\epsilon+2}+m\epsilon}\approx\frac{m\epsilon+2\delta}{\tau\epsilon+(2+m\epsilon)\lambda} \tag{17}$$

In the pure differential privacy setting, $\delta = 0$, $\lambda = 0$, $\tau = K$, and so $p_{const} \approx \frac{m}{K}$; in the approximate differential privacy setting, $\lambda \approx 0$, $\delta \approx 0$, $\tau \approx \sqrt{K}$, and so $p_{const} \approx \frac{m}{\sqrt{K}}$.

**Algorithm 3** Subsampling Majority (SubMaj)
1: **Input:** K $(\epsilon, \Delta)$-DP mechanisms $\{M_i\}_{i=1}^K$, noise function $\gamma: \{0, \ldots, K\} \to [0, 1]$, dataset $\mathcal{D}$, privacy allowance $1 \le m \le K$, failure probability $\delta \ge \Delta \ge 0$
2: **Output:** $(m\epsilon, \delta)$-DP majority vote of $\{M_i\}_{i=1}^K$
3: $\mathcal{S} = \{S_1, \ldots, S_K\}$, where $S_i \sim M_i(\mathcal{D})$
4: $\mathcal{J}_m \leftarrow$ m indices chosen uniformly at random from $[K]$ without replacement
5: $\hat{\mathcal{L}} = \sum_{j \in \mathcal{J}_m} S_j$
6: Output $\mathbb{I}\{\frac{1}{m}\hat{\mathcal{L}} \ge \frac{1}{2}\}$
## A.2 Proof Of Lemma 3.1

**Lemma A.2** (Restatement of Lemma 3.1). *Consider Problem 1.1, with the privacy allowance $m \in [K]$. Consider the data-dependent algorithm that computes $\mathcal{L}(\mathcal{D})$ and then applies RR with probability $p_\gamma$. If $p_\gamma = \gamma_{Sub}(l)$, where $l \in \{0, 1, \ldots, K\}$ is the value of $\mathcal{L}(\mathcal{D})$, i.e., the (random) sum of observed outcomes on dataset $\mathcal{D}$, and $\gamma_{Sub}: \{0, 1, \ldots, K\} \to [0, 1]$ is*

$$\gamma_{Sub}(l)=\gamma_{Sub}(K-l)=\begin{cases}1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}&\text{if }m\text{ is odd}\\ 1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}&\text{if }m\text{ is even}\end{cases}$$

*then the majority of m out of K subsampled mechanisms without replacement and the output of our data-dependent RR algorithm have the same distribution.*

*Proof of Lemma 3.1.* Let $\mathcal{L} = \sum_{i=1}^K S_i$ be the sum of observed outcomes from the K mechanisms. Following Algorithm 3, $\mathcal{J}_m$ denotes the m indices chosen uniformly at random from $[K]$ without replacement. Conditioned on $\mathcal{L}$, the output of SubMaj follows a hypergeometric distribution. The output probability of SubMaj is

$$\Pr[\text{SubMaj}(\mathcal{D})=1]=\sum_{l=0}^{K}\Pr[\text{SubMaj}(\mathcal{D})=1\mid\mathcal{L}=l]\cdot\Pr[\mathcal{L}=l] \tag{18}$$
$$=\sum_{l=0}^{K}\Pr\Big[\sum_{j\in\mathcal{J}_{m}}S_{j}\geq\frac{m}{2}\ \Big|\ \mathcal{L}=l\Big]\cdot\Pr[\mathcal{L}=l] \tag{19}$$
$$=\begin{cases}\sum_{l=0}^{K}\Big(\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}\Big)\cdot\Pr[\mathcal{L}=l]&\text{if }m\text{ is odd}\\ \sum_{l=0}^{K}\Big(\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}+\frac{1}{2}\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}\Big)\cdot\Pr[\mathcal{L}=l]&\text{if }m\text{ is even}\end{cases} \tag{20}$$

Consider an arbitrary noise function $\gamma_{Sub}: \{0, 1, \ldots, K\} \to [0, 1]$. Let RR-d$(\mathcal{D})$ denote the output of the data-dependent RR-d on dataset $\mathcal{D}$, where RR-d has the *non-constant* probability set by $\gamma_{Sub}$. The output probability of RR-d is

$$\Pr[\text{RR-d}(\mathcal{D})=1]=\sum_{l=0}^{K}\Pr[\text{RR-d}(\mathcal{D})=1\mid\mathcal{L}=l]\cdot\Pr[\mathcal{L}=l] \tag{21}$$
$$=\sum_{l=0}^{K}\Big(\gamma_{Sub}(l)\cdot\mathbb{I}\{l\geq\tfrac{K+1}{2}\}+\frac{1}{2}(1-\gamma_{Sub}(l))\Big)\cdot\Pr[\mathcal{L}=l] \tag{22}$$

We want $\Pr[\text{RR-d}(\mathcal{D})=1] = \Pr[\text{SubMaj}(\mathcal{D})=1]$. If m is odd, for any $l \le \frac{K-1}{2}$, this requires

$$\frac{1}{2}(1-\gamma_{Sub}(l))=\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}\ \Rightarrow\ \gamma_{Sub}(l)=1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}} \tag{23}$$

and for any $l \ge \frac{K+1}{2}$,

$$\frac{1}{2}+\frac{1}{2}\gamma_{Sub}(l)=\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}\ \Rightarrow\ \gamma_{Sub}(l)=2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-1 \tag{24}$$

Similarly, if m is even, for any $l \le \frac{K-1}{2}$,

$$\frac{1}{2}(1-\gamma_{Sub}(l))=\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}+\frac{1}{2}\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}\ \Rightarrow\ \gamma_{Sub}(l)=1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}} \tag{25}$$

and for any $l \ge \frac{K+1}{2}$,

$$\frac{1}{2}+\frac{1}{2}\gamma_{Sub}(l)=\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}+\frac{1}{2}\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}\ \Rightarrow\ \gamma_{Sub}(l)=2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}+\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}-1 \tag{26}$$

Next, we show that the above $\gamma_{Sub}$ is indeed symmetric around $\frac{K}{2}$. For any $l \le \frac{K-1}{2}$, there is $K - l \ge \frac{K+1}{2}$.
If m is odd,

$$\gamma_{Sub}(K-l)=2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}-1=2\Big(1-\sum_{j=0}^{\frac{m-1}{2}}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}\Big)-1$$
$$=1-2\sum_{j=0}^{\frac{m-1}{2}}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}=1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}=\gamma_{Sub}(l) \tag{27}$$

where the second equality uses Vandermonde's identity $\sum_{j=0}^{m}\binom{K-l}{j}\binom{l}{m-j}=\binom{K}{m}$, and the second-to-last equality follows by the change of variable $j \mapsto m - j$. Similarly, if m is even,

$$\gamma_{Sub}(K-l)=2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}+\frac{\binom{K-l}{m/2}\binom{l}{m/2}}{\binom{K}{m}}-1$$
$$=2\Big(1-\sum_{j=0}^{\frac{m}{2}-1}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}-\frac{\binom{K-l}{m/2}\binom{l}{m/2}}{\binom{K}{m}}\Big)+\frac{\binom{K-l}{m/2}\binom{l}{m/2}}{\binom{K}{m}}-1$$
$$=1-2\sum_{j=0}^{\frac{m}{2}-1}\frac{\binom{K-l}{j}\binom{l}{m-j}}{\binom{K}{m}}-\frac{\binom{K-l}{m/2}\binom{l}{m/2}}{\binom{K}{m}}=1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}=\gamma_{Sub}(l) \tag{28}$$

Now, combining Eq. 23, Eq. 24 and Eq. 27, if m is odd, setting $\gamma_{Sub}$ as

$$\gamma_{Sub}(l)=\gamma_{Sub}(K-l)=1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}} \tag{29}$$

makes RR-d have the same output distribution as SubMaj. Similarly, combining Eq. 25, Eq. 26 and Eq. 28, if m is even, setting $\gamma_{Sub}$ as

$$\gamma_{Sub}(l)=\gamma_{Sub}(K-l)=1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}} \tag{30}$$

makes RR-d have the same output distribution as SubMaj.
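As a quick empirical sanity check of this equivalence (a Monte-Carlo sketch of ours, not from the paper's released code), the subsampled majority and the data-dependent RR driven by $\gamma_{Sub}$ produce matching output frequencies:

```python
import random
from math import comb
from collections import Counter

def submaj(outcomes, m):
    # Majority of m outcomes subsampled without replacement (odd m, so no ties).
    sub = random.sample(outcomes, m)
    return int(sum(sub) >= m / 2)

def rr_d(outcomes, m):
    # Data-dependent RR with gamma_Sub(L) from Lemma 3.1 (odd-m branch).
    K, L = len(outcomes), sum(outcomes)
    l = min(L, K - L)
    g = 1 - 2 * sum(comb(l, j) * comb(K - l, m - j)
                    for j in range((m + 1) // 2, m + 1)) / comb(K, m)
    if random.random() < g:
        return int(L >= (K + 1) / 2)
    return random.randint(0, 1)

random.seed(0)
outcomes, m, T = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1], 3, 100_000
print(Counter(submaj(outcomes, m) for _ in range(T)))
print(Counter(rr_d(outcomes, m) for _ in range(T)))   # empirically matching
```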
## A.3 Proof Of Lemma 3.2

**Lemma A.3** (Restatement of Lemma 3.2). *Let $\mathcal{A}$ be any randomized algorithm to compute the majority function g on $\mathcal{S}$ such that for all $\mathcal{S}$, $\Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})] \ge 1/2$ (i.e., $\mathcal{A}$ is at least as good as a random guess). Then, there exists a general function $\gamma: \{0,1\}^{K+1} \to [0, 1]$ such that if one sets $p_\gamma$ by $\gamma(\mathcal{S})$ in DaRRM, the output distribution of DaRRM$_\gamma$ is the same as the output distribution of $\mathcal{A}$.*

*Proof of Lemma 3.2.* For some $\mathcal{D}$ and conditioned on $\mathcal{S}$, we see that by definition $\Pr[\text{DaRRM}_\gamma(\mathcal{S}) = g(\mathcal{S})] = \gamma(\mathcal{S}) + \frac{1}{2}(1 - \gamma(\mathcal{S}))$. We want to set γ such that $\Pr[\text{DaRRM}_\gamma(\mathcal{S}) = g(\mathcal{S})] = \Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})]$. Therefore, we set $\gamma(\mathcal{S}) = 2\Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})] - 1$. Lastly, we need to justify that $\gamma \in [0, 1]$. Clearly, $\gamma(\mathcal{S}) \le 2 - 1 = 1$ since $\Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})] \le 1$, and non-negativity follows from the assumption $\Pr[\mathcal{A}(\mathcal{S}) = g(\mathcal{S})] \ge 1/2$.

## A.4 Proof Of Lemma 3.3

**Lemma A.4** (Restatement of Lemma 3.3). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, and let $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, where $\mathcal{D}$ and $\mathcal{D}'$ are adjacent datasets and $l \in \{0, \ldots, K\}$. For a noise function $\gamma: \{0, 1, \ldots, K\} \to [0, 1]$ such that $\gamma(l) = \gamma(K - l)$, $\forall l$, DaRRM$_\gamma$ is $(m\epsilon, \delta)$-differentially private if and only if for all $\alpha_l, \alpha'_l$, the following holds,*

$$f(p_1, \ldots, p_K, p'_1, \ldots, p'_K; \gamma) \le e^{m\epsilon} - 1 + 2\delta \tag{1}$$

*where f is called the **privacy cost objective** and*

$$f(p_{1},\ldots,p_{K},p_{1}^{\prime},\ldots,p_{K}^{\prime};\gamma):=\sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha_{l}^{\prime}-\alpha_{l})\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha_{l}^{\prime})\cdot\gamma(l) \tag{31}$$

*Proof of Lemma 3.3.* By the definition of differential privacy (Definition 2.1), DaRRM$_\gamma$ is $(m\epsilon, \delta)$-differentially private if and only if

$$\Pr[\text{DaRRM}_\gamma(\mathcal{D})=1]\le e^{m\epsilon}\Pr[\text{DaRRM}_\gamma(\mathcal{D}')=1]+\delta,\ \text{and}\ \Pr[\text{DaRRM}_\gamma(\mathcal{D})=0]\le e^{m\epsilon}\Pr[\text{DaRRM}_\gamma(\mathcal{D}')=0]+\delta \tag{32}$$

for all adjacent datasets $\mathcal{D}, \mathcal{D}'$. Let the random variables $\mathcal{L}(\mathcal{D}) = \sum_{i=1}^K S_i(\mathcal{D})$ and $\mathcal{L}(\mathcal{D}') = \sum_{i=1}^K S_i(\mathcal{D}')$ be the sums of observed outcomes on adjacent datasets $\mathcal{D}$ and $\mathcal{D}'$, based on which one sets $p_\gamma$ in DaRRM, and let $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, $\forall l \in \{0, 1, \ldots, K\}$. Consider the output being 1:

$$\Pr[\text{DaRRM}_\gamma(\mathcal{D})=1]\le e^{m\epsilon}\Pr[\text{DaRRM}_\gamma(\mathcal{D}')=1]+\delta \tag{33}$$
$$\iff\sum_{l=0}^{K}\Pr[\text{DaRRM}_\gamma(\mathcal{D})=1\mid\mathcal{L}(\mathcal{D})=l]\cdot\alpha_l\le e^{m\epsilon}\sum_{l=0}^{K}\Pr[\text{DaRRM}_\gamma(\mathcal{D}')=1\mid\mathcal{L}(\mathcal{D}')=l]\cdot\alpha'_l+\delta \tag{34}$$
$$\iff\sum_{l=0}^{K}\Big(\gamma(l)\cdot\mathbb{I}\{l\ge\tfrac{K}{2}\}+\tfrac{1}{2}(1-\gamma(l))\Big)\alpha_l\le e^{m\epsilon}\sum_{l=0}^{K}\Big(\gamma(l)\cdot\mathbb{I}\{l\ge\tfrac{K}{2}\}+\tfrac{1}{2}(1-\gamma(l))\Big)\alpha'_l+\delta \tag{35}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}\Big(\gamma(l)+\tfrac{1}{2}(1-\gamma(l))\Big)\alpha_l+\sum_{l=0}^{\frac{K-1}{2}}\tfrac{1}{2}(1-\gamma(l))\alpha_l\le e^{m\epsilon}\Big[\sum_{l=\frac{K+1}{2}}^{K}\Big(\gamma(l)+\tfrac{1}{2}(1-\gamma(l))\Big)\alpha'_l+\sum_{l=0}^{\frac{K-1}{2}}\tfrac{1}{2}(1-\gamma(l))\alpha'_l\Big]+\delta \tag{36}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}\tfrac{1}{2}\gamma(l)\alpha_l-\sum_{l=0}^{\frac{K-1}{2}}\tfrac{1}{2}\gamma(l)\alpha_l+\tfrac{1}{2}\le e^{m\epsilon}\Big[\sum_{l=\frac{K+1}{2}}^{K}\tfrac{1}{2}\gamma(l)\alpha'_l-\sum_{l=0}^{\frac{K-1}{2}}\tfrac{1}{2}\gamma(l)\alpha'_l\Big]+\tfrac{1}{2}e^{m\epsilon}+\delta \tag{37}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{38}$$

Similarly, consider the output being 0. By the analogous chain of equivalences (Eqs. 39–43, mirroring Eqs. 33–37 with the indicator $\mathbb{I}\{l < \frac{K}{2}\}$), the second condition in Eq. 32 is equivalent to

$$\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{44}$$

Therefore, plugging Eq. 38 and Eq. 44 into Eq. 32, DaRRM$_\gamma$ is $(m\epsilon, \delta)$-differentially private if and only if both

$$\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{45}$$

and

$$\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{46}$$

hold, where $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, $\forall l \in \{0, 1, \ldots, K\}$, and $\mathcal{D}, \mathcal{D}'$ are any adjacent datasets. Next, we show that if γ is symmetric around $\frac{K}{2}$, i.e., $\gamma(l) = \gamma(K-l)$, satisfying either one of Eq. 45 or Eq. 46 implies satisfying the other one:

$$\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{47}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{K-l}-e^{m\epsilon}\alpha'_{K-l})\gamma(K-l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_{K-l}-e^{m\epsilon}\alpha'_{K-l})\gamma(K-l)\le e^{m\epsilon}-1+2\delta \tag{48}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{K-l}-e^{m\epsilon}\alpha'_{K-l})\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_{K-l}-e^{m\epsilon}\alpha'_{K-l})\gamma(l)\le e^{m\epsilon}-1+2\delta\quad\text{(since }\gamma(l)=\gamma(K-l)\text{)} \tag{49}$$
$$\iff\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_l-e^{m\epsilon}\alpha'_l)\gamma(l)\le e^{m\epsilon}-1+2\delta \tag{50}$$

where Eq. 48 follows from reindexing $l \mapsto K - l$, and the last equivalence follows due to the following. Recall $p_i = \Pr[M_i(\mathcal{D}) = 1]$ and $p'_i = \Pr[M_i(\mathcal{D}') = 1]$. Observe $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(\{p_i\}_{i=1}^K)$ and $\mathcal{L}(\mathcal{D}') \sim \text{PoissonBinomial}(\{p'_i\}_{i=1}^K)$. Let $F_l = \{\mathcal{A}: |\mathcal{A}| = l, \mathcal{A} \subseteq [K]\}$, for any $l \in \{0, \ldots, K\}$, denote the set of all subsets of l integers that can be selected from $[K]$, and let $\mathcal{A}^c = [K] \setminus \mathcal{A}$ be $\mathcal{A}$'s complement set. Notice $F_{K-l} = \{\mathcal{A}^c: \mathcal{A} \in F_l\}$.
Since $\alpha_l$ denotes the pmf of the Poisson Binomial distribution at l, it follows that

$$\alpha_{l}=\Pr[\mathcal{L}(\mathcal{D})=l]=\sum_{\mathcal{A}\in F_{l}}\prod_{i\in\mathcal{A}}p_{i}\prod_{j\in\mathcal{A}^{c}}(1-p_{j}) \tag{51}$$

Consider $\beta_i = 1 - p_i$, $\forall i \in [K]$, and a new random variable $\mathcal{L}^\beta \sim \text{PoissonBinomial}(\{\beta_i\}_{i=1}^K)$:

$$\Pr[\mathcal{L}^{\beta}=l]=\sum_{\mathcal{A}\in F_{l}}\prod_{i\in\mathcal{A}}\beta_{i}\prod_{j\in\mathcal{A}^{c}}(1-\beta_{j})=\sum_{\mathcal{A}\in F_{l}}\prod_{i\in\mathcal{A}}(1-p_{i})\prod_{j\in\mathcal{A}^{c}}p_{j}=\sum_{\mathcal{A}^{c}\in F_{K-l}}\prod_{j\in\mathcal{A}^{c}}p_{j}\prod_{i\in\mathcal{A}}(1-p_{i})=\sum_{\mathcal{A}\in F_{K-l}}\prod_{i\in\mathcal{A}}p_{i}\prod_{j\in\mathcal{A}^{c}}(1-p_{j})=\alpha_{K-l} \tag{52}$$

This implies that for any $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(\{p_i\}_{i=1}^K)$, there is another Poisson Binomial distribution $\text{PoissonBinomial}(\{1-p_i\}_{i=1}^K)$, such that for $\mathcal{L}^\beta \sim \text{PoissonBinomial}(\{1-p_i\}_{i=1}^K)$, $\Pr[\mathcal{L}(\mathcal{D}) = K-l] = \Pr[\mathcal{L}^\beta = l]$. Similarly, for any $\mathcal{L}(\mathcal{D}') \sim \text{PoissonBinomial}(\{p'_i\}_{i=1}^K)$, there is $\mathcal{L}'^\beta \sim \text{PoissonBinomial}(\{1-p'_i\}_{i=1}^K)$, such that $\Pr[\mathcal{L}(\mathcal{D}') = K-l] = \Pr[\mathcal{L}'^\beta = l]$. Since Eq. 49 holds for all possible $\alpha_{K-l} = \Pr[\mathcal{L}(\mathcal{D}) = K-l]$ and $\alpha'_{K-l} = \Pr[\mathcal{L}(\mathcal{D}') = K-l]$, Eq. 49 also holds for $\Pr[\mathcal{L}^\beta = l]$ and $\Pr[\mathcal{L}'^\beta = l]$. Now, Eq. 50 follows by relabeling $\Pr[\mathcal{L}^\beta = l]$ as $\alpha_l$ and $\Pr[\mathcal{L}'^\beta = l]$ as $\alpha'_l$. The above implies Eq. 45 $\iff$ Eq. 46. Therefore,

$$\text{DaRRM}_\gamma\text{ is }(m\epsilon,\delta)\text{-differentially private}\iff\underbrace{\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha_{l}^{\prime})\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_{l}-e^{m\epsilon}\alpha_{l}^{\prime})\gamma(l)}_{:=f(p_{1},\ldots,p_{K},p_{1}^{\prime},\ldots,p_{K}^{\prime};\gamma)}\leq e^{m\epsilon}-1+2\delta \tag{53}$$

## B Details Of Section 4: Provable Privacy Amplification

In this section, we consider Problem 1.1 in the pure differential privacy and i.i.d. mechanisms setting. That is, $\delta = \Delta = 0$ and $p = p_i = \Pr[M_i(\mathcal{D}) = 1]$, $p' = p'_i = \Pr[M_i(\mathcal{D}') = 1]$, $\forall i \in [K]$. Our goal is to search for a good noise function γ such that: 1) DaRRM$_\gamma$ is $m\epsilon$-DP, and 2) DaRRM$_\gamma$ achieves higher utility than that of the baselines (see Section 3) under a fixed privacy loss. Our main finding of such a γ function is presented in Theorem 4.1, which states that given a privacy allowance $m \in [K]$, one can indeed output the majority of $2m - 1$ subsampled mechanisms, instead of just m as indicated by simple composition. Later, we formally verify in Lemma B.11, Section B.3, that taking the majority of more mechanisms strictly increases the utility.

To start, by Lemma 3.3, for any noise function γ, satisfying goal 1) is equivalent to satisfying

$$f(p, p'; \gamma) \le e^{m\epsilon} - 1 \tag{54}$$

where $f(p, p'; \gamma) = \sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha'_l - \alpha_l)\cdot\gamma(l) + \sum_{l=\frac{K+1}{2}}^{K}(\alpha_l - e^{m\epsilon}\alpha'_l)\cdot\gamma(l)$ refers to the privacy cost objective (see Lemma 3.3) in the i.i.d. mechanisms setting, and recall $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, $\forall l \in \{0, 1, \ldots, K\}$. Notice that in this setting, $\mathcal{L}(\mathcal{D}) \sim \text{Binomial}(K, p)$ and $\mathcal{L}(\mathcal{D}') \sim \text{Binomial}(K, p')$.

**Monotonicity Assumption.** For analysis, we restrict our search for a γ function with good utility to the class with a mild monotonicity assumption: $\gamma(l) \ge \gamma(l+1)$, $\forall l \le \frac{K-1}{2}$, and $\gamma(l) \le \gamma(l+1)$, $\forall l \ge \frac{K+1}{2}$. This matches our intuition that as $\mathcal{L}(\mathcal{D}) = \sum_{i=1}^K S_i$, i.e., the number of mechanisms outputting 1, approaches 0 or K, there is a clearer majority and so not much noise is needed to ensure privacy, which implies a larger value of γ.
**Roadmap of the Proof of Theorem 4.1.** Since $\gamma$ needs to enable Eq. 54 to be satisfied for all $p, p' \in [0, 1]$, we begin by showing characteristics of **the worst case probabilities**, i.e., $(p^*, p'^*) = \arg\max_{(p,p')} f(p, p'; \gamma)$, given any $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ that is symmetric around $\frac{K}{2}$ and that satisfies the above monotonicity assumption, in Lemma B.1, Section B.1. We call $(p^*, p'^*)$ the worst case probabilities, since they incur the largest privacy loss. Later, in Section B.2, we present the main proof of Theorem 4.1, where we focus on searching for a good $\gamma$ that enables $f(p^*, p'^*; \gamma) \le e^{m\epsilon} - 1$, based on the characteristics of $(p^*, p'^*)$ in Lemma B.1, to ensure DaRRM$_\gamma$ is $m\epsilon$-differentially private.

## B.1 Characterizing The Worst Case Probabilities

First, note that $(p, p')$ are close to each other and lie in a feasible region $\mathcal{F}$, due to each mechanism $M_i$ being $\epsilon$-differentially private; and so does $(p^*, p'^*)$. The feasible region, as illustrated in Figure 4, is bounded by (a) $p' \le e^\epsilon p$, (b) $p \le e^\epsilon p'$, (c) $1 - p' \le e^\epsilon(1 - p)$, and (d) $1 - p \le e^\epsilon(1 - p')$, where the four boundaries are derived from the definition of differential privacy. Therefore, we only need to search for $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma)$. Next, we show that given $\gamma$ satisfying certain conditions, $(p^*, p'^*)$ can only be on two of the four boundaries of $\mathcal{F}$ in Lemma B.1 — that is, either $p^* = e^\epsilon p'^*$, i.e., on the blue line in Figure 4, or $1 - p'^* = e^\epsilon(1 - p^*)$, i.e., on the orange line in Figure 4.

Figure 4: The feasible region $\mathcal{F}$ is plotted as the blue area. The four boundaries are implied by $p, p'$ satisfying $\epsilon$-differential privacy.

Lemma B.1 (Characteristics of worst case probabilities). *For any noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ that 1) is symmetric around $\frac{K}{2}$, 2) satisfies the monotonicity assumption, and 3) has $\gamma(\frac{K-1}{2}) > 0$ and $\gamma(\frac{K+1}{2}) > 0$, the worst case probabilities given $\gamma$, $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma)$, must satisfy one of the following two equalities:*

$$p^* = e^\epsilon p'^*, \quad \forall p^* \in [0, \tfrac{1}{e^{-\epsilon}+1}], \ p'^* \in [0, \tfrac{1}{1+e^\epsilon}]$$

$$\text{or} \quad 1 - p'^* = e^\epsilon(1 - p^*), \quad \forall p^* \in [\tfrac{1}{1+e^{-\epsilon}}, 1], \ p'^* \in [\tfrac{1}{1+e^\epsilon}, 1]$$

To show Lemma B.1, we first show in Lemma B.2 that the search for $(p^*, p'^*)$ can be refined to one of the four boundaries of $\mathcal{F}$, via a careful gradient analysis of $f(p, p'; \gamma)$ in $\mathcal{F}$, and then show in Lemma B.3 that the search can be further refined to two of the four boundaries, due to symmetry of $p, p'$. Lemma B.1 directly follows from the two.

Lemma B.2. *For any noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ that 1) is symmetric around $\frac{K}{2}$, 2) satisfies the monotonicity assumption, and 3) has $\gamma(\frac{K-1}{2}) > 0$ and $\gamma(\frac{K+1}{2}) > 0$, the worst case probabilities given $\gamma$, $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma)$, must satisfy one of the following four equalities:*

$$p'^* = e^\epsilon p^*, \quad \forall p^* \in [0, \tfrac{1}{1+e^\epsilon}], \ p'^* \in [0, \tfrac{1}{1+e^{-\epsilon}}]$$
$$p^* = e^\epsilon p'^*, \quad \forall p^* \in [0, \tfrac{1}{e^{-\epsilon}+1}], \ p'^* \in [0, \tfrac{1}{1+e^\epsilon}]$$
$$1 - p^* = e^\epsilon(1 - p'^*), \quad \forall p^* \in [\tfrac{1}{1+e^\epsilon}, 1], \ p'^* \in [\tfrac{1}{1+e^{-\epsilon}}, 1]$$
$$1 - p'^* = e^\epsilon(1 - p^*), \quad \forall p^* \in [\tfrac{1}{1+e^{-\epsilon}}, 1], \ p'^* \in [\tfrac{1}{1+e^\epsilon}, 1]$$

Proof of Lemma B.2.
Recall the privacy cost objective (as defined in Lemma 3.3) is now

$$f(p,p';\gamma)=\sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha'_{l}-\alpha_{l})\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha'_{l})\cdot\gamma(l)$$

where $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, $\forall l \in \{0, 1, \dots, K\}$. Since $\mathcal{L}(\mathcal{D}) \sim \text{Binomial}(K, p)$ and $\mathcal{L}(\mathcal{D}') \sim \text{Binomial}(K, p')$ in the i.i.d. mechanisms setting, and using the pmf of the Binomial distribution, $f$ can be written as

$$f(p,p';\gamma)=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}-\binom{K}{l}p^{l}(1-p)^{K-l}\Big)\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}p^{l}(1-p)^{K-l}-e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}\Big)\cdot\gamma(l)$$

The gradients w.r.t. $p$ and $p'$ are

$$\nabla_{p}f(p,p';\gamma)=\underbrace{\sum_{l=0}^{\frac{K-1}{2}}-\binom{K}{l}\gamma(l)\cdot\big(lp^{l-1}(1-p)^{K-l}-p^{l}(K-l)(1-p)^{K-l-1}\big)}_{:=A}+\underbrace{\sum_{l=\frac{K+1}{2}}^{K}\binom{K}{l}\gamma(l)\cdot\big(lp^{l-1}(1-p)^{K-l}-p^{l}(K-l)(1-p)^{K-l-1}\big)}_{:=B} \tag{55}$$

and

$$\nabla_{p'}f(p,p';\gamma)=\sum_{l=0}^{\frac{K-1}{2}}e^{m\epsilon}\binom{K}{l}\gamma(l)\cdot\big(lp'^{l-1}(1-p')^{K-l}-p'^{l}(K-l)(1-p')^{K-l-1}\big)+\sum_{l=\frac{K+1}{2}}^{K}-e^{m\epsilon}\binom{K}{l}\gamma(l)\cdot\big(lp'^{l-1}(1-p')^{K-l}-p'^{l}(K-l)(1-p')^{K-l-1}\big) \tag{56}$$

We show in the following that $\forall p \in (0, 1)$, $\nabla_p f(p, p'; \gamma) > 0$ and $\nabla_{p'} f(p, p'; \gamma) < 0$. This implies there is no local maximum inside $\mathcal{F}$, and so $(p^*, p'^*) = \arg\max_{p,p'} f(p, p'; \gamma)$ must be on one of the four boundaries of $\mathcal{F}$. Also, if $p = 0$, then $p' = 0$, and $(0, 0)$ is a corner point at the intersection of two boundaries. Similarly, if $p = 1$, then $p' = 1$, and $(1, 1)$ is also a corner point. This concludes that $\forall p \in [0, 1]$, $(p^*, p'^*) = \arg\max_{p,p'} f(p, p'; \gamma)$ must be on one of the four boundaries of $\mathcal{F}$.

To show $\nabla_p f(p, p'; \gamma) > 0$ for $p \in (0, 1)$, we write $\nabla_p f(p, p'; \gamma) = A + B$ as in Eq. 55, and show that $A > 0$ and $B > 0$. To show $A > 0$, first note

$$A := \sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K}{l}\cdot\big(p^{l}(K-l)(1-p)^{K-l-1}-lp^{l-1}(1-p)^{K-l}\big) > 0 \tag{57}$$

$$\iff \sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K}{l}\cdot p^{l}(K-l)(1-p)^{K-l-1} > \sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K}{l}\cdot lp^{l-1}(1-p)^{K-l} \tag{58}$$

$$\iff \sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K-1}{l}\frac{K}{K-l}\cdot p^{l}(K-l)(1-p)^{K-l-1} > \sum_{l=1}^{\frac{K-1}{2}} \gamma(l)\binom{K-1}{l-1}\frac{K}{l}\cdot lp^{l-1}(1-p)^{K-l} \tag{59}$$

$$\iff K\sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K-1}{l}p^{l}(1-p)^{K-l-1} > K\sum_{l=1}^{\frac{K-1}{2}} \gamma(l)\binom{K-1}{l-1}p^{l-1}(1-p)^{K-l} \tag{60}$$

$$\iff \sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K-1}{l}p^{l}(1-p)^{K-l-1} > \sum_{l=0}^{\frac{K-1}{2}-1} \gamma(l+1)\binom{K-1}{l}p^{l}(1-p)^{K-l-1} \tag{61}$$

Since $\gamma(l) \ge \gamma(l+1)$, $\forall l \le \frac{K-1}{2}$, and $p \in (0, 1)$, there is, for $l \in \{0, \dots, \frac{K-1}{2}-1\}$,

$$\gamma(l)\binom{K-1}{l}p^{l}(1-p)^{K-l-1}\geq\gamma(l+1)\binom{K-1}{l}p^{l}(1-p)^{K-l-1} \tag{62}$$

Furthermore, since $\gamma(\frac{K-1}{2}) > 0$ and $p \in (0, 1)$,

$$\gamma\Big(\frac{K-1}{2}\Big)\binom{K-1}{\frac{K-1}{2}}p^{\frac{K-1}{2}}(1-p)^{\frac{K-1}{2}}>0 \tag{63}$$

Eq. 62 and Eq. 63 combined imply

$$\gamma\Big(\frac{K-1}{2}\Big)\binom{K-1}{\frac{K-1}{2}}p^{\frac{K-1}{2}}(1-p)^{\frac{K-1}{2}}+\sum_{l=0}^{\frac{K-1}{2}-1}\gamma(l)\binom{K-1}{l}p^{l}(1-p)^{K-l-1}>\sum_{l=0}^{\frac{K-1}{2}-1}\gamma(l+1)\binom{K-1}{l}p^{l}(1-p)^{K-l-1} \tag{64}$$

and hence Eq. 61 holds. This further implies $A > 0$.
Next, to show $B > 0$, note that

$$B:=\sum_{l=\frac{K+1}{2}}^{K}\binom{K}{l}\gamma(l)\cdot\big(lp^{l-1}(1-p)^{K-l}-p^{l}(K-l)(1-p)^{K-l-1}\big)>0 \tag{65}$$

$$\iff \sum_{l=\frac{K+1}{2}}^{K}\binom{K}{l}\gamma(l)\cdot lp^{l-1}(1-p)^{K-l}>\sum_{l=\frac{K+1}{2}}^{K}\binom{K}{l}\gamma(l)\cdot p^{l}(K-l)(1-p)^{K-l-1} \tag{66}$$

$$\iff \sum_{l=\frac{K+1}{2}}^{K}\gamma(l)\binom{K-1}{l-1}\frac{K}{l}\cdot lp^{l-1}(1-p)^{K-l}>\sum_{l=\frac{K+1}{2}}^{K-1}\gamma(l)\binom{K-1}{l}\frac{K}{K-l}\cdot p^{l}(K-l)(1-p)^{K-l-1} \tag{67}$$

$$\iff K\sum_{l=\frac{K+1}{2}}^{K}\gamma(l)\binom{K-1}{l-1}\cdot p^{l-1}(1-p)^{K-l}>K\sum_{l=\frac{K+1}{2}}^{K-1}\gamma(l)\binom{K-1}{l}\cdot p^{l}(1-p)^{K-l-1} \tag{68}$$

$$\iff \sum_{l=\frac{K+1}{2}}^{K}\gamma(l)\binom{K-1}{l-1}\cdot p^{l-1}(1-p)^{K-l}>\sum_{l=\frac{K+1}{2}+1}^{K}\gamma(l-1)\binom{K-1}{l-1}\cdot p^{l-1}(1-p)^{K-l} \tag{69}$$

Since $\gamma(l) \ge \gamma(l-1)$, $\forall l \ge \frac{K+1}{2}$, and $p \in (0, 1)$, there is, for $l \in \{\frac{K+1}{2}+1, \dots, K\}$,

$$\gamma(l)\binom{K-1}{l-1}p^{l-1}(1-p)^{K-l}\geq\gamma(l-1)\binom{K-1}{l-1}p^{l-1}(1-p)^{K-l} \tag{70}$$

Furthermore, since $\gamma(\frac{K+1}{2}) > 0$ and $p \in (0, 1)$,

$$\gamma\Big(\frac{K+1}{2}\Big)\binom{K-1}{\frac{K-1}{2}}p^{\frac{K-1}{2}}(1-p)^{\frac{K-1}{2}}>0 \tag{71}$$

Eq. 70 and Eq. 71 combined imply

$$\gamma\Big(\frac{K+1}{2}\Big)\binom{K-1}{\frac{K-1}{2}}p^{\frac{K-1}{2}}(1-p)^{\frac{K-1}{2}}+\sum_{l=\frac{K+1}{2}+1}^{K}\gamma(l)\binom{K-1}{l-1}\cdot p^{l-1}(1-p)^{K-l}>\sum_{l=\frac{K+1}{2}+1}^{K}\gamma(l-1)\binom{K-1}{l-1}\cdot p^{l-1}(1-p)^{K-l} \tag{72}$$

and hence Eq. 69 holds. This further implies $B > 0$.

Following Eq. 55, for $p \in (0, 1)$ and $\gamma$ satisfying the three assumptions,

$$\nabla_{p}f(p,p';\gamma)=A+B>0 \tag{73}$$

Following similar techniques, one can show that for $p \in (0, 1)$ and $\gamma$ satisfying the three conditions,

$$\nabla_{p'}f(p,p';\gamma)<0 \tag{74}$$

This implies there are no local minima or local maxima inside the feasible region $\mathcal{F}$. Also recall $(p, p') \in \{(0,0), (1,1)\}$ are two special cases where $(p, p')$ is at the intersection of two boundaries. Hence, we conclude the worst case probabilities $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma)$ are on one of the four boundaries of $\mathcal{F}$ — that is, $(p^*, p'^*)$ satisfy one of the following:

$$p'^* = e^\epsilon p^*, \quad \forall p \in [0, \tfrac{1}{1+e^\epsilon}], \ p' \in [0, \tfrac{1}{1+e^{-\epsilon}}]$$
$$p^* = e^\epsilon p'^*, \quad \forall p \in [0, \tfrac{1}{e^{-\epsilon}+1}], \ p' \in [0, \tfrac{1}{1+e^\epsilon}]$$
$$1 - p^* = e^\epsilon(1 - p'^*), \quad \forall p \in [\tfrac{1}{1+e^\epsilon}, 1], \ p' \in [\tfrac{1}{1+e^{-\epsilon}}, 1]$$
$$1 - p'^* = e^\epsilon(1 - p^*), \quad \forall p \in [\tfrac{1}{1+e^{-\epsilon}}, 1], \ p' \in [\tfrac{1}{1+e^\epsilon}, 1]$$

Lemma B.3. *For any noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ that 1) is symmetric around $\frac{K}{2}$ and 2) satisfies the monotonicity assumption, the privacy cost objective $f(p, p'; \gamma)$ is maximized when $p \ge p'$.*

Proof of Lemma B.3. Following Eq. 33 and Eq. 38 in the proof of Lemma 3.3, and that $\delta = 0$,

$$\Pr[\text{DaRRM}_{\gamma}(\mathcal{D})=1]\leq e^{m\epsilon}\Pr[\text{DaRRM}_{\gamma}(\mathcal{D}')=1] \tag{75}$$

$$\iff \underbrace{\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha_{l}')\gamma(l)-\sum_{l=0}^{\frac{K-1}{2}}(\alpha_{l}-e^{m\epsilon}\alpha_{l}')\gamma(l)}_{=f(p,p';\gamma)}\leq e^{m\epsilon}-1 \tag{76}$$
where $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$, $\forall l \in \{0, 1, \dots, K\}$. This implies

$$f(p,p';\gamma)=\frac{\Pr[\text{DaRRM}_{\gamma}(\mathcal{D})=1]}{\Pr[\text{DaRRM}_{\gamma}(\mathcal{D}')=1]}-1 \tag{77}$$

Hence, $f(p, p'; \gamma)$ is maximized when $\Pr[\text{DaRRM}_\gamma(\mathcal{D}) = 1] \ge \Pr[\text{DaRRM}_\gamma(\mathcal{D}') = 1]$. Observe that

$$\Pr[\text{DaRRM}_\gamma(\mathcal{D}) = 1] = \sum_{l=0}^{K} \Pr[\text{DaRRM}_\gamma(\mathcal{D}) = 1 \mid \mathcal{L}(\mathcal{D}) = l] \cdot \Pr[\mathcal{L}(\mathcal{D}) = l] \tag{78}$$

$$= \sum_{l=0}^{K} \Big(\gamma(l) \cdot \mathbb{I}\{l \ge \tfrac{K}{2}\} + \tfrac{1}{2}(1 - \gamma(l))\Big) \cdot \Pr[\mathcal{L}(\mathcal{D}) = l] \tag{79}$$

$$= \sum_{l=0}^{\frac{K-1}{2}} \tfrac{1}{2}(1 - \gamma(l)) \cdot \alpha_l + \sum_{l=\frac{K+1}{2}}^{K} \Big(\gamma(l) + \tfrac{1}{2}(1 - \gamma(l))\Big) \cdot \alpha_l \tag{80}$$

$$= \frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K} \gamma(l)\binom{K}{l}p^{l}(1-p)^{K-l} - \frac{1}{2}\sum_{l=0}^{\frac{K-1}{2}} \gamma(l)\binom{K}{l}p^{l}(1-p)^{K-l} + \frac{1}{2} \tag{81}$$

where the last line follows from the observation that in the i.i.d. mechanisms setting, $\mathcal{L}(\mathcal{D}) \sim \text{Binomial}(K, p)$ and $\alpha_l$ is hence the pmf of the Binomial distribution at $l$. Similarly,

$$\Pr[\text{DaRRM}_{\gamma}(\mathcal{D}')=1]=\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\gamma(l)\binom{K}{l}p'^{l}(1-p')^{K-l}-\frac{1}{2}\sum_{l=0}^{\frac{K-1}{2}}\gamma(l)\binom{K}{l}p'^{l}(1-p')^{K-l}+\frac{1}{2} \tag{82}$$

Now define the objective

$$h(\beta)=\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\gamma(l)\binom{K}{l}\beta^{l}(1-\beta)^{K-l}-\frac{1}{2}\sum_{l=0}^{\frac{K-1}{2}}\gamma(l)\binom{K}{l}\beta^{l}(1-\beta)^{K-l}+\frac{1}{2} \tag{83}$$

for $\beta \in [0, 1]$, and it follows that $\Pr[\text{DaRRM}_\gamma(\mathcal{D}) = 1] = h(p)$ and $\Pr[\text{DaRRM}_\gamma(\mathcal{D}') = 1] = h(p')$. We now analyze the monotonicity of $h(\beta)$ in $\beta$. For ease of presentation, define

$$g(l) := \begin{cases} -\frac{1}{2}\gamma(l) & \forall l \le \frac{K}{2} \\ \frac{1}{2}\gamma(l) & \forall l \ge \frac{K}{2} \end{cases}$$

Since $\gamma(l) \ge \gamma(l+1)$, $\forall l \le \frac{K}{2}$, and $\gamma(l+1) \ge \gamma(l)$, $\forall l \ge \frac{K}{2}$, there is $g(l+1) \ge g(l)$, $\forall l \in \{0, \dots, K\}$. Replacing $\gamma(l)$ with $g(l)$ in Eq. 83,

$$h(\beta)=\sum_{l=0}^{K}g(l)\binom{K}{l}\beta^{l}(1-\beta)^{K-l}+\frac{1}{2} \tag{84}$$

and so

$$\nabla_\beta h(\beta) = \sum_{l=0}^{K} g(l)\binom{K}{l}\Big(l\beta^{l-1}(1-\beta)^{K-l} - (K-l)\beta^{l}(1-\beta)^{K-l-1}\Big) \tag{85}$$

$$= \sum_{l=1}^{K} g(l)\binom{K-1}{l-1}\frac{K}{l}\cdot l\beta^{l-1}(1-\beta)^{K-l} - \sum_{l=0}^{K-1} g(l)\binom{K-1}{l}\frac{K}{K-l}\cdot(K-l)\beta^{l}(1-\beta)^{K-l-1} \tag{86}$$

$$= K\sum_{l=1}^{K} g(l)\binom{K-1}{l-1}\beta^{l-1}(1-\beta)^{K-l} - K\sum_{l=0}^{K-1} g(l)\binom{K-1}{l}\beta^{l}(1-\beta)^{K-l-1} \tag{87}$$

$$= K\sum_{l=0}^{K-1} g(l+1)\binom{K-1}{l}\beta^{l}(1-\beta)^{K-l-1} - K\sum_{l=0}^{K-1} g(l)\binom{K-1}{l}\beta^{l}(1-\beta)^{K-l-1} \tag{88}$$

$$= K\sum_{l=0}^{K-1} \big(g(l+1)-g(l)\big)\binom{K-1}{l}\beta^{l}(1-\beta)^{K-l-1} \tag{89}$$

Since $g(l+1) \ge g(l)$ and $\binom{K-1}{l}\beta^{l}(1-\beta)^{K-l-1} \ge 0$, we have $\nabla_\beta h(\beta) \ge 0$. This implies $h(\beta)$ is monotonically non-decreasing in $\beta$, and hence

$$\Pr[\text{DaRRM}_\gamma(\mathcal{D}) = 1] \ge \Pr[\text{DaRRM}_\gamma(\mathcal{D}') = 1] \iff p \ge p' \tag{90}$$

Therefore, $f(p, p'; \gamma)$ is maximized when $p \ge p'$. $\square$

## B.2 Proof Of Privacy Amplification (Theorem 4.1)

Theorem B.4 (Restatement of Theorem 4.1). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, with i.i.d. mechanisms $\{M_i\}_{i=1}^K$, i.e., $p_i = p$, $p'_i = p'$, $\forall i \in [K]$, the privacy allowance $m \in [K]$, and $\delta = \Delta = 0$.*
*Let the noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ be such that: if $m \ge \frac{K+1}{2}$,*

$$\gamma(l)=1$$

*and if $m \le \frac{K-1}{2}$,*

$$\gamma(l)={\begin{cases}1-2h(l)&\forall l\leq{\frac{K-1}{2}}\\ 2h(l)-1&\forall l\geq{\frac{K+1}{2}}\end{cases}}$$

*where $h(l)=\sum_{i=m}^{2m-1}\frac{\binom{l}{i}\binom{K-l}{2m-1-i}}{\binom{K}{2m-1}}$; then DaRRM$_\gamma$ is $m\epsilon$-differentially private.*

**Roadmap.** Theorem 4.1 consists of two parts: $\gamma$ under a large privacy allowance $m \ge \frac{K+1}{2}$, and $\gamma$ under a small privacy allowance $m \le \frac{K-1}{2}$. We first show in Lemma B.5, Section B.2.1, that if $m \ge \frac{K+1}{2}$, setting $\gamma = 1$ suffices to ensure DaRRM$_\gamma$ is $m\epsilon$-differentially private, and hence one can always output the true majority of $K$ mechanisms. In contrast, simple composition indicates that only when $m = K$ can one output the true majority of $K$ mechanisms. Next, we show in Lemma B.10, Section B.2.2, that if $m \le \frac{K-1}{2}$, one can set $\gamma$ to be $\gamma_{DSub}$, which corresponds to outputting the majority of $2m - 1$ subsampled mechanisms (and hence the name "Double Subsampling", or DSub). In contrast, simple composition indicates one can only output the majority of $m$ subsampled mechanisms to make sure the output is $m\epsilon$-differentially private. Theorem 4.1 follows directly from combining Lemma B.5 and Lemma B.10.
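To make the noise function concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the $\gamma$ from Theorem 4.1; note that `math.comb` returns 0 when the top index is smaller than the bottom one, matching the usual convention for out-of-range binomial coefficients:

```python
from math import comb

def h(l, K, m):
    # h(l) = sum_{i=m}^{2m-1} C(l,i) C(K-l, 2m-1-i) / C(K, 2m-1): the probability
    # that at least m out of 2m-1 mechanisms subsampled without replacement
    # output 1, given that l of the K observed outcomes are 1.
    return sum(comb(l, i) * comb(K - l, 2 * m - 1 - i)
               for i in range(m, 2 * m)) / comb(K, 2 * m - 1)

def gamma_theorem41(l, K, m):
    # Noise function of Theorem 4.1 (K odd, privacy allowance m in [K]).
    if m >= (K + 1) / 2:
        return 1.0
    return 1 - 2 * h(l, K, m) if l <= (K - 1) / 2 else 2 * h(l, K, m) - 1
```

For example, with $K = 11$ and $m = 3$, `gamma_theorem41(0, 11, 3)` returns 1 (a clear majority needs no extra noise), and the value decreases toward $l = \frac{K \pm 1}{2}$, where the vote is closest to a tie.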
## B.2.1 Privacy Amplification Under A Large Privacy Allowance $m \ge \frac{K+1}{2}$

The proof of Lemma B.5 is straightforward. We show that given the constant $\gamma_{max}(l) = 1$, if $m \ge \frac{K+1}{2}$, the worst case probabilities are $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma_{max}) = (0, 0)$, and notice that $f(0, 0; \gamma_{max}) = e^{m\epsilon} - 1$, which satisfies the condition in Lemma 3.3. Hence, DaRRM$_{\gamma_{max}}$ is $m\epsilon$-differentially private.

Lemma B.5 (Privacy amplification, $m \ge \frac{K+1}{2}$). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, with i.i.d. mechanisms $\{M_i\}_{i=1}^K$, i.e., $p_i = p$, $p'_i = p'$, $\forall i \in [K]$, the privacy allowance $m \ge \frac{K+1}{2}$, $m \in \mathbb{Z}$, and $\delta = \Delta = 0$. Let the noise function be the constant $\gamma_{max}(l) = 1$, $\forall l \in \{0, 1, \dots, K\}$. Then, DaRRM$_{\gamma_{max}}$ is $m\epsilon$-differentially private.*

Proof of Lemma B.5. First, notice that $\gamma_{max}(l) = 1$, $\forall l \in \{0, 1, \dots, K\}$: 1) is symmetric around $\frac{K}{2}$, 2) satisfies the monotonicity assumption, and 3) has $\gamma_{max}(\frac{K-1}{2}) > 0$ and $\gamma_{max}(\frac{K+1}{2}) > 0$. Therefore, by Lemma B.1, the worst case probabilities given $\gamma_{max}$, i.e., $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma_{max})$, are on one of the two boundaries of $\mathcal{F}$, satisfying

$$p^* = e^\epsilon p'^*, \quad \forall p^* \in [0, \tfrac{1}{e^{-\epsilon}+1}], \ p'^* \in [0, \tfrac{1}{1+e^\epsilon}]$$
$$\text{or} \quad 1 - p'^* = e^\epsilon(1 - p^*), \quad \forall p^* \in [\tfrac{1}{1+e^{-\epsilon}}, 1], \ p'^* \in [\tfrac{1}{1+e^\epsilon}, 1]$$

We now find the local maxima on the two possible boundaries, i.e.,

$$(p_{local}^{*},p_{local}'^{*})=\arg\max_{(p,p'):p=e^{\epsilon}p',\,p\in[0,\frac{1}{e^{-\epsilon}+1}]}f(p,p';\gamma_{max})$$

and

$$(p_{local}^{*},p_{local}'^{*})=\arg\max_{(p,p'):1-p'=e^{\epsilon}(1-p),\,p\in[\frac{1}{1+e^{-\epsilon}},1]}f(p,p';\gamma_{max})$$

separately.

**Part I: Local worst case probabilities on the boundary $p = e^\epsilon p'$.** Plugging $p = e^\epsilon p'$ into the privacy cost objective $f(p, p'; \gamma_{max})$, one gets

$$f(p';\gamma_{max})=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}-\binom{K}{l}(e^{\epsilon}p')^{l}(1-e^{\epsilon}p')^{K-l}\Big)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}(e^{\epsilon}p')^{l}(1-e^{\epsilon}p')^{K-l}-e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}\Big) \tag{91}$$

The gradient w.r.t. $p'$ is

$$\nabla_{p'}f(p';\gamma_{max})=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}\big(lp'^{l-1}(1-p')^{K-l}-p'^{l}(K-l)(1-p')^{K-l-1}\big)-e^{\epsilon}\binom{K}{l}\big(l(e^{\epsilon}p')^{l-1}(1-e^{\epsilon}p')^{K-l}-(e^{\epsilon}p')^{l}(K-l)(1-e^{\epsilon}p')^{K-l-1}\big)\Big)+\sum_{l=\frac{K+1}{2}}^{K}\Big(e^{\epsilon}\binom{K}{l}\big(l(e^{\epsilon}p')^{l-1}(1-e^{\epsilon}p')^{K-l}-(e^{\epsilon}p')^{l}(K-l)(1-e^{\epsilon}p')^{K-l-1}\big)-e^{m\epsilon}\binom{K}{l}\big(lp'^{l-1}(1-p')^{K-l}-p'^{l}(K-l)(1-p')^{K-l-1}\big)\Big) \tag{92}$$

Rearranging via the identities $\binom{K}{l}l = K\binom{K-1}{l-1}$ and $\binom{K}{l}(K-l) = K\binom{K-1}{l}$, all terms in the two sums telescope except those at $l = \frac{K\pm1}{2}$, leaving

$$\nabla_{p'}f(p';\gamma_{max})=-\underbrace{2Ke^{m\epsilon}\binom{K-1}{\frac{K-1}{2}}p'^{\frac{K-1}{2}}(1-p')^{\frac{K-1}{2}}}_{:=A}+\underbrace{2Ke^{\epsilon}\binom{K-1}{\frac{K-1}{2}}(e^{\epsilon}p')^{\frac{K-1}{2}}(1-e^{\epsilon}p')^{\frac{K-1}{2}}}_{:=B} \tag{94}$$

Notice that

$$\frac{A}{B}=\frac{e^{m\epsilon}\binom{K-1}{\frac{K-1}{2}}p'^{\frac{K-1}{2}}(1-p')^{\frac{K-1}{2}}}{e^{\epsilon}\binom{K-1}{\frac{K-1}{2}}(e^{\epsilon}p')^{\frac{K-1}{2}}(1-e^{\epsilon}p')^{\frac{K-1}{2}}}=\frac{e^{m\epsilon}}{e^{\frac{K+1}{2}\epsilon}}\cdot\Big(\frac{1-p'}{1-e^{\epsilon}p'}\Big)^{\frac{K-1}{2}} \tag{95}$$

Since $\frac{1-p'}{1-e^{\epsilon}p'} \ge 1$ and $m \ge \frac{K+1}{2}$, we have $\frac{A}{B} \ge 1$. This implies $\nabla_{p'}f(p'; \gamma_{max}) \le 0$. Hence, $f(p'; \gamma_{max})$ is monotonically non-increasing on this boundary, for $p' \in [0, \frac{1}{1+e^\epsilon}]$. Therefore, $\arg\max_{p' \in [0, \frac{1}{1+e^\epsilon}]} f(p'; \gamma_{max}) = 0$. Since $p = e^\epsilon p'$, $p' = 0$ implies $p = 0$. Hence,

$$(p_{local}^{*},p_{local}'^{*})=\arg\max_{(p,p'):p=e^{\epsilon}p',\,p\in[0,\frac{1}{e^{-\epsilon}+1}]}f(p,p';\gamma_{max})=(0,0)$$

and

$$\max_{(p,p'):p=e^{\epsilon}p',\,p\in[0,\frac{1}{e^{-\epsilon}+1}]}f(p,p';\gamma_{max})=f(0,0;\gamma_{max})=e^{m\epsilon}-1$$

**Part II: Local worst case probabilities on the boundary $1 - p' = e^\epsilon(1-p)$.** For simplicity, let $q = 1 - p$ and $q' = 1 - p'$. Note on this boundary $p \in [\frac{1}{1+e^{-\epsilon}}, 1]$ and $p' \in [\frac{1}{1+e^\epsilon}, 1]$, and hence $q \in [0, \frac{1}{1+e^\epsilon}]$ and $q' \in [0, \frac{1}{1+e^{-\epsilon}}]$. Plugging $q$ and $q'$ into the privacy cost objective $f(p, p'; \gamma_{max})$, one gets a new objective in $q, q'$:

$$f(q,q';\gamma_{max})=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}(1-q')^{l}q'^{K-l}-\binom{K}{l}(1-q)^{l}q^{K-l}\Big)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}(1-q)^{l}q^{K-l}-e^{m\epsilon}\binom{K}{l}(1-q')^{l}q'^{K-l}\Big) \tag{97}$$

Since on this boundary $1 - p' = e^\epsilon(1-p)$, writing this in $q, q'$ gives $q' = e^\epsilon q$. Plugging $q' = e^\epsilon q$ into $f(q, q'; \gamma_{max})$, one gets

$$f(q;\gamma_{max})=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}(1-e^{\epsilon}q)^{l}(e^{\epsilon}q)^{K-l}-\binom{K}{l}(1-q)^{l}q^{K-l}\Big)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}(1-q)^{l}q^{K-l}-e^{m\epsilon}\binom{K}{l}(1-e^{\epsilon}q)^{l}(e^{\epsilon}q)^{K-l}\Big) \tag{98}$$

The gradient w.r.t. $q$ is
computed analogously: applying the same binomial identities as in Part I and telescoping, all terms cancel except those at $l = \frac{K\pm1}{2}$, leaving

$$\nabla_{q}f(q)=2Ke^{(m+1)\epsilon}\binom{K-1}{\frac{K-1}{2}}(1-e^{\epsilon}q)^{\frac{K-1}{2}}(e^{\epsilon}q)^{\frac{K-1}{2}}-2K\binom{K-1}{\frac{K-1}{2}}(1-q)^{\frac{K-1}{2}}q^{\frac{K-1}{2}} \tag{102}$$

Recall $q \in [0, \frac{1}{1+e^\epsilon}]$, and so $(1-e^{\epsilon}q)(e^{\epsilon}q) \ge (1-q)q$. Furthermore, since $e^{(m+1)\epsilon} \ge 1$, there is $\nabla_q f(q) \ge 0$. This implies $f(q)$ is monotonically non-decreasing in $q$, and so the local maximum on this boundary is

$$(q_{local}^{*},q_{local}'^{*})=\arg\max_{(q,q'):q'=e^{\epsilon}q,\,q\in[0,\frac{1}{1+e^{\epsilon}}]}f(q,q';\gamma_{max})=\Big(\frac{1}{1+e^{\epsilon}},\frac{1}{1+e^{-\epsilon}}\Big) \tag{103}$$

That is,

$$(p^{*}_{local},p'^{*}_{local})=\arg\max_{(p,p'):1-p'=e^{\epsilon}(1-p),\,p\in[\frac{1}{1+e^{-\epsilon}},1]}f(p,p';\gamma_{max})=(1-q^{*}_{local},1-q'^{*}_{local})=\Big(\frac{1}{1+e^{-\epsilon}},\frac{1}{1+e^{\epsilon}}\Big) \tag{104}$$

**Part III: The global worst case probabilities.** Notice that $(\frac{1}{1+e^{-\epsilon}}, \frac{1}{1+e^{\epsilon}})$, the maximum on the second boundary $1 - p' = e^\epsilon(1-p)$, $\forall p \in [\frac{1}{1+e^{-\epsilon}}, 1]$, is indeed the minimum on the first boundary $p = e^\epsilon p'$, $\forall p \in [0, \frac{1}{e^{-\epsilon}+1}]$. Therefore, the global maximum given $\gamma_{max}$ is

$$(p^{*},p'^{*})=\arg\max_{(p,p')\in\mathcal{F}}f(p,p';\gamma_{max})=\arg\max_{(p,p'):p=e^{\epsilon}p',\,p\in[0,\frac{1}{1+e^{-\epsilon}}]}f(p,p';\gamma_{max})=(0,0) \tag{105}$$

and recall that $f(0, 0; \gamma_{max}) = e^{m\epsilon} - 1$. Hence, if $m \ge \frac{K+1}{2}$, by Lemma 3.3, DaRRM$_{\gamma_{max}}$ is $m\epsilon$-differentially private. $\square$

## B.2.2 Privacy Amplification Under A Small Privacy Allowance $m \le \frac{K-1}{2}$

The proof of Lemma B.10 is slightly more involved. First, recall by Lemma 3.1 that $\gamma_{Sub}$, the noise function that makes the output of DaRRM$_{\gamma_{Sub}}$ and the subsampling baseline the same, is

$$\gamma_{Sub}(l)=\gamma_{Sub}(K-l)=\begin{cases}1-2\sum_{j=\frac{m+1}{2}}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}&\text{if $m$ is odd}\\ 1-2\sum_{j=\frac{m}{2}+1}^{m}\frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}}-\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}}&\text{if $m$ is even}\end{cases}$$

for $l \in \{0, 1, \dots, K\}$, where $m \in \mathbb{Z}$ is the privacy allowance.
If we define

$$h(l) := \begin{cases} \sum_{j=\frac{m+1}{2}}^{m} \frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}} & \text{if $m$ is odd} \\ \sum_{j=\frac{m}{2}+1}^{m} \frac{\binom{l}{j}\binom{K-l}{m-j}}{\binom{K}{m}} + \frac{1}{2}\cdot\frac{\binom{l}{m/2}\binom{K-l}{m/2}}{\binom{K}{m}} & \text{if $m$ is even} \end{cases}$$

then $\gamma_{Sub}(l)$ can be written as

$$\gamma_{Sub}(l) = \begin{cases} 1 - 2h(l) & \text{if } l \le \frac{K-1}{2} \\ 2h(l) - 1 & \text{if } l \ge \frac{K+1}{2} \end{cases}$$

This can be generalized to a broader class of $\gamma$ functions — which we call the "symmetric form family" — as follows.

Definition B.6. *$\gamma : \{0, 1, \dots, K\} \to [0, 1]$ is a member of the "symmetric form family" if $\gamma$ follows*

$$\gamma(l)=\begin{cases}1-2h(l)&\text{if }l\leq\frac{K-1}{2}\\ 2h(l)-1&\text{if }l\geq\frac{K+1}{2}\end{cases} \tag{106}$$

*where $h : \{0, 1, \dots, K\} \to [0, 1]$ and*

$$h(l)+h(K-l)=1,\quad h(l+1)\geq h(l),\quad\forall l\in\{0,1,\dots,K\},\quad\text{and}\quad\gamma(\tfrac{K-1}{2})>0,\ \gamma(\tfrac{K+1}{2})>0$$

It is easy to verify that any $\gamma$ function belonging to the "symmetric form family" satisfies: 1) symmetry around $\frac{K}{2}$, and 2) the monotonicity assumption. Hence, Lemma B.1 can be invoked to find the worst case probabilities given such a $\gamma$, i.e., $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma)$, which in turn gives us the guarantee of DaRRM$_\gamma$ being $m\epsilon$-differentially private.

**Roadmap.** In this section, we restrict our search for a good $\gamma$ that maximizes the utility of DaRRM$_\gamma$ to the "symmetric form family". To show the main privacy amplification result under a small $m$ in Lemma B.10, Section B.2.4, we need a few building blocks, shown in Section B.2.3. We first show in Lemma B.7, Section B.2.3, two clean sufficient conditions — in terms of the expectation of the $h$ function applied to Binomial random variables — such that if a "symmetric form family" $\gamma$ satisfies them, then DaRRM$_\gamma$ is $m\epsilon$-differentially private. The Binomial random variables appear in the lemma because, recall, the sum of the observed outcomes on a dataset $\mathcal{D}$, $\mathcal{L}(\mathcal{D})$, follows a Binomial distribution in the i.i.d. mechanisms setting. Next, we show a recurrence relationship that connects the expectation of Binomial random variables to Hypergeometric random variables in Lemma B.9. This is needed because, for $\gamma$ functions that make DaRRM$_\gamma$ have the same output as the majority of subsampled mechanisms, the $h$ function is a sum of pmfs of the Hypergeometric random variable. Finally, the proof of the main result under a small $m$ (Lemma B.10) is presented in Section B.2.4, based on Lemma B.7 and Lemma B.9. We show in Lemma B.10 that $\gamma_{DSub}$, i.e., the $\gamma$ function that makes the output of DaRRM$_{\gamma_{DSub}}$ the same as outputting the majority of $2m - 1$ subsampled mechanisms, belongs to the "symmetric form family" and satisfies the sufficient conditions stated in Lemma B.7, implying DaRRM$_{\gamma_{DSub}}$ is $m\epsilon$-differentially private.

## B.2.3 Building Blocks

Lemma B.7 (Privacy conditions of the "symmetric form family" functions). *Let random variables $X \sim \text{Binomial}(K-1, p')$, $Y \sim \text{Binomial}(K-1, e^\epsilon p')$, $\hat{X} \sim \text{Binomial}(K-1, 1 - e^\epsilon(1-p))$ and $\hat{Y} \sim \text{Binomial}(K-1, p)$. For a function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$ that belongs to the "symmetric form family" (Definition B.6), if $\gamma$ also satisfies both conditions as follows:*

$$e^{m\epsilon}\mathbb{E}_{X}[h(X+1)-h(X)]\geq e^{\epsilon}\mathbb{E}_{Y}[h(Y+1)-h(Y)],\quad\forall p'\in[0,\tfrac{1}{1+e^{\epsilon}}] \tag{107}$$

$$e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}}[h(\hat{X}+1)-h(\hat{X})]\geq\mathbb{E}_{\hat{Y}}[h(\hat{Y}+1)-h(\hat{Y})],\quad\forall p\in[\tfrac{1}{1+e^{-\epsilon}},1] \tag{108}$$

*then Algorithm DaRRM$_\gamma$ is $m\epsilon$-differentially private.*

Proof of Lemma B.7. Since $h(l+1) \ge h(l)$ on $l \in \{0, \dots, K\}$, we have $\gamma(l) \ge \gamma(l+1)$, $\forall l \le \frac{K}{2}$, and $\gamma(l+1) \ge \gamma(l)$, $\forall l \ge \frac{K}{2}$.
Furthermore, since h(l)+h(K −l) = 1, γ( K−1 2) = 1−2h( K−1 2) = 1−2(1−h( K+1 2)) = 2h( K+1 2)−1. Hence, any γ that belongs to the "symmetric form family" satisfies: 1) symmetric around K 2 , 2) the monotonicity assumption, and 3) γ( K−1 2) = γ( K+1 2) > 0. Therefore, by Lemma B.1, the worst case probabilities (p ∗, p′∗) = arg max(p,p′)∈F f(*p, p*′; γ) are on one of the two boundaries of F, satisfying $$p^{*}=e^{\epsilon}p^{\prime*},$$ or $1-p^{\prime*}=e^{\epsilon}(1-p^{*})$, ] (109) $$\begin{array}{l}{{\forall p^{*}\in[0,\frac{1}{e^{-\epsilon}+1}],p^{\prime*}\in[0,\frac{1}{1+e^{\epsilon}}]}}\\ {{\forall p^{*}\in[\frac{1}{1+e^{-\epsilon}},1],p^{\prime*}\in[\frac{1}{1+e^{\epsilon}},1]}}\end{array}$$ , 1] (110) We now derive the sufficient conditions that if any γ from the "symmetric form family" satisfy, then DaRRMγ is mϵ-differentially private, from the two boundaries as in Eq. 109 and Eq. 110 separately. Part I: Deriving a sufficient condition from Eq. 109 for "symmetric form family" γ. Consider the boundary of F, p = e ϵp ′, ∀p ∈ [0,1 1+e−ϵ ], p′ ∈ [0,1 1+e ϵ ]. $$(109)$$ $$(110)$$ Given any γ, plugging p = e ϵp ′into the privacy cost objective f(*p, p*′; γ), one gets $$f(p^{\prime};\gamma)=\sum_{l=0}^{K-1}(e^{\prime\prime\prime}\binom{K}{l}p^{\prime l}(1-p^{\prime})^{K-l}-\binom{K}{l}(e^{\prime}p^{\prime})^{l}(1-e^{\prime}p^{\prime})^{K-l})\cdot\gamma(l)$$ $$+\sum_{l=\frac{R+1}{2}}^{K}\binom{K}{l}(e^{\prime}p^{\prime})^{l}(1-e^{\prime}p^{\prime})^{K-l}-e^{\prime\prime\prime}\binom{K}{l}p^{\prime l}(1-p^{\prime})^{K-l})\cdot\gamma(l)$$ The gradient w.r.t. p ′is ∇p′f(p ′); γ K= e mϵ K−1 X2 −1 K − 1 l p ′l(1 − p ′) K−l−1γ(l + 1) − γ(l) − 2e mϵK − 1 K−1 2 p ′ K−1 2 (1 − p ′) K−1 2 γ( K − 1 2) l=0 (112) + e mϵ K X−1 K − 1 l p ′l(1 − p ′) K−l−1γ(l) − γ(l + 1) l= K+1 2 + e ϵ K−1 X2 −1 K − 1 l (e ϵp ′) l(1 − e ϵp ′) K−l−1γ(l) − γ(l + 1)+ 2e ϵ K − 1 K−1 2 (e ϵp ′) K−1 2 (1 − e ϵp ′) K−1 2 γ( K − 1 2) l=0 + e ϵ K X−1 K − 1 l (e ϵp ′) l(1 − e ϵp ′) K−l−1γ(l + 1) − γ(l) l= K+1 2 Consider l ∈ {0, 1*, . . . , K*} in the above Eq. 112. For any function γ that belongs to the "symmetric form family", 1. If $l\leq\frac{K}{2}$, $\gamma(l)-\gamma(l+1)=(1-2h(l))-(1-2h(l+1))=2h(l+1)-2h(l)$ 2. If $l\geq\frac{K}{2}$, $\gamma(l+1)-\gamma(l)=(2h(l+1)-1)-(2h(l)-1)=2h(l+1)-2h(l)$ 3. Since $\gamma(\frac{K-1}{2})=\gamma(\frac{K+1}{2})$, $$2\gamma(\frac{K-1}{2})=\left(\gamma(\frac{K-1}{2})+\gamma(\frac{K+1}{2})\right)$$ $$=\left(1-2h(\frac{K-1}{2})+2h(\frac{K+1}{2})-1\right)$$ $$=2h(\frac{K+1}{2})-2h(\frac{K-1}{2})$$ $$\left(116\right)$$ $$\left(117\right)$$ $$\left(117\right)$$ Hence, following Eq. 112, the gradient, ∇p′f(p ′; γ), given a "symmetric form family" γ can be written as $$\frac{\nabla_{Y^{\prime}}f(y^{\prime};\gamma)}{K}=-e^{\mu\epsilon}\sum_{l=0}^{K-1}\binom{K-1}{l}p^{\prime l}(1-p^{\prime})^{K-l}\Big{(}2h(l+1)-2h(l)\Big{)}$$ $$+e^{\epsilon}\sum_{l=0}^{K-1}\binom{K-1}{l}(e^{\epsilon}p^{\prime})^{l}(1-e^{\epsilon}p^{\prime})^{K-l-1}\Big{(}2h(l+1)-2h(l)\Big{)}$$ $$=-2e^{\mu\epsilon}\mathbb{E}_{X}\big{[}h(X+1)-h(X)\big{]}+2e^{\mu}\mathbb{E}_{Y}\big{[}h(Y+1)-h(Y)\big{]}$$ where $X\sim\text{Binomial}(K-1,p^{\prime})$ and $Y\sim\text{Binomial}(K-1,e^{\prime}p^{\prime})$. The above implies ∇p′f(p ′; γ) ≤ 0 ⇐⇒ e ϵEY [h(Y + 1) − h(Y )] ≤ e $$\Xi_{X}[h(X+1)-h(X)]$$ $$(118)^{3}$$ If ∇p′f(p ′; γ) ≤ 0, then we know the local worst case probabilities on the boundary p = e ϵp ′, ∀p ∈ [0,1 1+e−ϵ ] given any γ is (p ∗ local, p′∗ local) = arg max(p,p′):p=e ϵp′,p∈[0,1 1+e−ϵ ] f(*p, p*′; γ) = (0, 0). 
Furthermore, recall that the privacy cost objective given any $\gamma$ is

$$f(p,p';\gamma)=\sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha'_{l}-\alpha_{l})\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha'_{l})\cdot\gamma(l)=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}-\binom{K}{l}p^{l}(1-p)^{K-l}\Big)\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}p^{l}(1-p)^{K-l}-e^{m\epsilon}\binom{K}{l}p'^{l}(1-p')^{K-l}\Big)\cdot\gamma(l)$$

and so, for any $\gamma$,

$$f(0,0;\gamma)=(e^{m\epsilon}-1)\cdot\gamma(0)\leq e^{m\epsilon}-1 \tag{119}$$

Also, notice the local minimum on this boundary is

$$(p_{min},p'_{min})=\arg\min_{(p,p'):p=e^{\epsilon}p',\,p\in[0,\frac{1}{1+e^{-\epsilon}}]}f(p,p';\gamma)=\Big(\frac{1}{1+e^{-\epsilon}},\frac{1}{1+e^{\epsilon}}\Big) \tag{120}$$

**Part II: Deriving a sufficient condition from Eq. 110 for "symmetric form family" $\gamma$.** Consider the boundary of $\mathcal{F}$: $1 - p' = e^\epsilon(1-p)$, $\forall p \in [\frac{1}{1+e^{-\epsilon}}, 1]$, $p' \in [\frac{1}{1+e^\epsilon}, 1]$. For simplicity, let $q = 1 - p \in [0, \frac{1}{1+e^\epsilon}]$ and $q' = 1 - p' \in [0, \frac{1}{1+e^{-\epsilon}}]$; on this boundary, $q' = e^\epsilon q$. Plugging $q' = e^\epsilon q$ into the privacy cost objective, one gets, given any $\gamma$,

$$f(q;\gamma)=\sum_{l=0}^{\frac{K-1}{2}}\Big(e^{m\epsilon}\binom{K}{l}(1-e^{\epsilon}q)^{l}(e^{\epsilon}q)^{K-l}-\binom{K}{l}(1-q)^{l}q^{K-l}\Big)\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}\Big(\binom{K}{l}(1-q)^{l}q^{K-l}-e^{m\epsilon}\binom{K}{l}(1-e^{\epsilon}q)^{l}(e^{\epsilon}q)^{K-l}\Big)\cdot\gamma(l) \tag{121}$$

For any $\gamma$ in the "symmetric form family", the same case analysis and telescoping as in Eq. 112 give

$$\frac{\nabla_{q}f(q;\gamma)}{K}=e^{(m+1)\epsilon}\sum_{l=0}^{K-1}\binom{K-1}{l}(1-e^{\epsilon}q)^{l}(e^{\epsilon}q)^{K-l-1}\cdot\big(2h(l+1)-2h(l)\big)-\sum_{l=0}^{K-1}\binom{K-1}{l}(1-q)^{l}q^{K-l-1}\cdot\big(2h(l+1)-2h(l)\big) \tag{123}$$

$$=2e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}}[h(\hat{X}+1)-h(\hat{X})]-2\mathbb{E}_{\hat{Y}}[h(\hat{Y}+1)-h(\hat{Y})] \tag{124}$$

where $\hat{X} \sim \text{Binomial}(K-1, 1-e^{\epsilon}(1-p))$ and $\hat{Y} \sim \text{Binomial}(K-1, p)$. The above implies

$$\nabla_{q}f(q;\gamma)\geq0\iff e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}}[h(\hat{X}+1)-h(\hat{X})]\geq\mathbb{E}_{\hat{Y}}[h(\hat{Y}+1)-h(\hat{Y})] \tag{125}$$

If $\nabla_q f(q; \gamma) \ge 0$, then since $q \in [0, \frac{1}{1+e^\epsilon}]$, the local maximum given any such $\gamma$ is $(q^*_{local}, q'^*_{local}) = \arg\max_{(q,q'):q'=e^{\epsilon}q,\,q\in[0,\frac{1}{1+e^{\epsilon}}]} f(q, q'; \gamma) = (\frac{1}{1+e^\epsilon}, \frac{1}{1+e^{-\epsilon}})$. That is,

$$(p^*_{local},p'^*_{local})=\arg\max_{(p,p'):1-p'=e^{\epsilon}(1-p),\,p\in[\frac{1}{1+e^{-\epsilon}},1]}f(p,p';\gamma)=(1-q^*_{local},1-q'^*_{local})=\Big(\frac{1}{1+e^{-\epsilon}},\frac{1}{1+e^{\epsilon}}\Big)$$

Notice that by Eq. 120, the above $(\frac{1}{1+e^{-\epsilon}}, \frac{1}{1+e^\epsilon})$ is the local minimum on the first boundary $p = e^\epsilon p'$, $\forall p \in [0, \frac{1}{1+e^{-\epsilon}}]$. Therefore, given an arbitrary $\gamma$ function, if it satisfies both of the following:

1. On the boundary $p = e^\epsilon p'$, $\forall p \in [0, \frac{1}{1+e^{-\epsilon}}]$: $\nabla_{p'} f(p'; \gamma) \le 0$;
2. On the boundary $1 - p' = e^\epsilon(1-p)$, $\forall p \in [\frac{1}{1+e^{-\epsilon}}, 1]$: $\nabla_q f(q; \gamma) \ge 0$, where $q = 1 - p$;

then the global worst case probabilities given this $\gamma$ are $(p^*, p'^*) = \arg\max_{(p,p') \in \mathcal{F}} f(p, p'; \gamma) = (0, 0)$. Furthermore, since by Eq. 119, $f(0, 0; \gamma) \le e^{m\epsilon} - 1$ for any $\gamma$, this implies DaRRM$_\gamma$ is $m\epsilon$-differentially private by Lemma 3.3. Now, if $\gamma$ belongs to the "symmetric form family", by Eq. 118 and Eq. 125, the sufficient conditions on $\gamma$ that make DaRRM$_\gamma$ $m\epsilon$-differentially private are hence

$$e^{\epsilon}\mathbb{E}_{Y}[h(Y+1)-h(Y)]\leq e^{m\epsilon}\mathbb{E}_{X}[h(X+1)-h(X)],\quad\forall p'\in[0,\tfrac{1}{1+e^{\epsilon}}]$$

and

$$e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}}[h(\hat{X}+1)-h(\hat{X})]\geq\mathbb{E}_{\hat{Y}}[h(\hat{Y}+1)-h(\hat{Y})],\quad\forall p\in[\tfrac{1}{1+e^{-\epsilon}},1]$$

where $X \sim \text{Binomial}(K-1, p')$, $Y \sim \text{Binomial}(K-1, e^\epsilon p')$, $\hat{X} \sim \text{Binomial}(K-1, 1-e^\epsilon(1-p))$, and $\hat{Y} \sim \text{Binomial}(K-1, p)$. $\square$

Lemma B.8 (Binomial Expectation Recurrence Relationship (Theorem 2.1 of Zhang et al. (2019))). *Let $X_{(K-1)} \sim \text{Binomial}(K-1, p)$ and $X_{(K)} \sim \text{Binomial}(K, p)$. Let $g(x)$ be a function with $-\infty < \mathbb{E}[g(X_{(K-1)})] < \infty$ and $-\infty < g(-1) < \infty$; then*

$$Kp\,\mathbb{E}_{X_{(K-1)}}[g(X_{(K-1)})]=\mathbb{E}_{X_{(K)}}[X_{(K)}\,g(X_{(K)}-1)] \tag{126}$$

Lemma B.9. *Given $i, m, K \in \mathbb{Z}$, $K \ge 1$, $0 \le i \le m \le K$, let $X_{(K)} \sim \text{Binomial}(K, p)$ for some $p \in [0, 1]$. There is*

$$\frac{1}{\binom{K}{m}}\mathbb{E}_{X_{(K)}}\Big[\binom{X}{i}\binom{K-X}{m-i}\Big]=\binom{m}{i}p^{i}(1-p)^{m-i} \tag{127}$$

Proof of Lemma B.9. We show the statement in Eq. 127 by induction on $K$ and $m$.

**Base case: $K = 1$.**

1. If $m = 0$, then $i = 0$: $\frac{1}{\binom{1}{0}}\mathbb{E}_{X_{(1)}}[\binom{X}{0}\binom{1-X}{0}] = \mathbb{E}_{X_{(1)}}[1] = 1$, and $\binom{0}{0}p^0(1-p)^0 = 1$.
2. If $m = 1$:
   (a) $i = 0$: $\frac{1}{\binom{1}{1}}\mathbb{E}_{X_{(1)}}[\binom{X}{0}\binom{1-X}{1}] = \mathbb{E}_{X_{(1)}}[1-X] = 1-p$, and $\binom{1}{0}p^0(1-p)^1 = 1-p$.
   (b) $i = 1$: $\frac{1}{\binom{1}{1}}\mathbb{E}_{X_{(1)}}[\binom{X}{1}\binom{1-X}{0}] = \mathbb{E}_{X_{(1)}}[X] = p$, and $\binom{1}{1}p^1(1-p)^0 = p$.

Hence, Eq. 127 holds for the base case.

**Induction step.** Suppose the statement holds for some $K \ge 1$ and all $0 \le i \le m \le K$. Consider $1 \le i \le m \le K+1$:

$$\frac{1}{\binom{K+1}{m}}\mathbb{E}_{X_{(K+1)}}\Big[\binom{X}{i}\binom{K+1-X}{m-i}\Big]=\frac{1}{\binom{K+1}{m}\,i!\,(m-i)!}\,\mathbb{E}_{X_{(K+1)}}\Big[X\cdot\frac{(X-1)!}{((X-1)-(i-1))!}\cdot\frac{(K-(X-1))!}{(K-(X-1)-((m-1)-(i-1)))!}\Big] \tag{128}$$

$$=\frac{(K+1)p}{\binom{K+1}{m}\,i!\,(m-i)!}\,\mathbb{E}_{X_{(K)}}\Big[\frac{X!}{(X-(i-1))!}\cdot\frac{(K-X)!}{(K-X-((m-1)-(i-1)))!}\Big]\quad\text{(by Lemma B.8)} \tag{131}$$

$$=\frac{(i-1)!\,(m-i)!\,(K+1)p}{\binom{K+1}{m}\,i!\,(m-i)!}\,\mathbb{E}_{X_{(K)}}\Big[\binom{X}{i-1}\binom{K-X}{(m-1)-(i-1)}\Big] \tag{132}$$

$$=\frac{(i-1)!\,(K+1)p}{\binom{K+1}{m}\,i!}\binom{K}{m-1}\binom{m-1}{i-1}p^{i-1}(1-p)^{m-i}\quad\text{(by the induction hypothesis)} \tag{133}$$

$$=\frac{m}{i}\binom{m-1}{i-1}p^{i}(1-p)^{m-i}=\binom{m}{i}p^{i}(1-p)^{m-i} \tag{135}$$

where the last line uses $\frac{\binom{K}{m-1}}{\binom{K+1}{m}} = \frac{m}{K+1}$.

Now we consider the edge cases when $0 = i \le m$. If $i = 0$ and $m = 0$,

$$\frac{1}{\binom{K+1}{0}}\mathbb{E}_{X_{(K+1)}}\Big[\binom{X}{0}\binom{K+1-X}{0}\Big]=\mathbb{E}_{X_{(K+1)}}[1]=1=\binom{0}{0}p^{0}(1-p)^{0} \tag{136}$$
If $i = 0$ and $m > 0$,

$$\frac{1}{\binom{K+1}{m}}\mathbb{E}_{X_{(K+1)}}\Big[\binom{K+1-X}{m}\Big]=\frac{1}{\binom{K+1}{m}}\sum_{x=0}^{K+1}\binom{K+1-x}{m}\binom{K+1}{x}p^{x}(1-p)^{K+1-x} \tag{138}$$

Using $\binom{K+1}{x} = \binom{K}{x} + \binom{K}{x-1}\mathbb{I}\{x \ge 1\}$, and dropping the $x = K+1$ term (which vanishes since $m > 0$),

$$=\frac{1}{\binom{K+1}{m}}\sum_{x=0}^{K}\binom{K+1-x}{m}\binom{K}{x}p^{x}(1-p)^{K+1-x}+\frac{1}{\binom{K+1}{m}}\sum_{x=1}^{K+1}\binom{K+1-x}{m}\binom{K}{x-1}p^{x}(1-p)^{K+1-x} \tag{140}$$

Then, using $\binom{K+1-x}{m} = \binom{K-x}{m} + \binom{K-x}{m-1}$ in the first sum, and reindexing the second sum ($x \mapsto x+1$),

$$=\frac{1}{\binom{K+1}{m}}\Big((1-p)\,\mathbb{E}_{X_{(K)}}\Big[\binom{K-X}{m}\Big]+(1-p)\,\mathbb{E}_{X_{(K)}}\Big[\binom{K-X}{m-1}\Big]+p\,\mathbb{E}_{X_{(K)}}\Big[\binom{K-X}{m}\Big]\Big) \tag{142}$$

$$=\frac{1}{\binom{K+1}{m}}\Big(\mathbb{E}_{X_{(K)}}\Big[\binom{K-X}{m}\Big]+(1-p)\,\mathbb{E}_{X_{(K)}}\Big[\binom{K-X}{m-1}\Big]\Big) \tag{143}$$

$$=\frac{1}{\binom{K+1}{m}}\Big(\binom{K}{m}(1-p)^{m}+(1-p)\binom{K}{m-1}(1-p)^{m-1}\Big)\quad\text{(by the induction hypothesis)} \tag{144}$$

$$=\frac{1}{\binom{K+1}{m}}\binom{K+1}{m}(1-p)^{m}=(1-p)^{m} \tag{147}$$

Hence, Eq. 127 holds for all $K \ge 1$ and $0 \le i \le m \le K$. $\square$
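Lemma B.9 is also easy to check numerically. The following Python sketch (our own, not code from the paper) verifies Eq. 127 for a few parameter choices by brute-force summation over the Binomial pmf:

```python
from math import comb

def lhs(K, m, i, p):
    # (1 / C(K, m)) * E_{X ~ Binomial(K, p)}[ C(X, i) * C(K - X, m - i) ]
    return sum(comb(K, x) * p**x * (1 - p)**(K - x)
               * comb(x, i) * comb(K - x, m - i)
               for x in range(K + 1)) / comb(K, m)

def rhs(m, i, p):
    # C(m, i) * p^i * (1 - p)^(m - i)
    return comb(m, i) * p**i * (1 - p)**(m - i)

for (K, m, i, p) in [(9, 4, 2, 0.3), (11, 5, 0, 0.8), (7, 7, 7, 0.5)]:
    assert abs(lhs(K, m, i, p) - rhs(m, i, p)) < 1e-12
```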
## B.2.4 Main Result: Privacy Amplification Under A Small $m$

Lemma B.10 (Privacy amplification, $m \le \frac{K-1}{2}$). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, with i.i.d. mechanisms $\{M_i\}_{i=1}^K$, $p_i = p$, $p'_i = p'$, $\forall i \in [K]$, the privacy allowance $1 \le m \le \frac{K-1}{2}$, $m \in \mathbb{Z}$, and $\delta = \Delta = 0$. Let the noise function be*

$$\gamma_{DSub}(l)=\begin{cases}1-2h(l)&\forall l\in\{0,1,\dots,\frac{K-1}{2}\}\\ 2h(l)-1&\forall l\in\{\frac{K+1}{2},\dots,K\}\end{cases} \tag{148}$$

*where $h : \{0, 1, \dots, K\} \to [0, 1]$ and $h(l) = \sum_{i=m}^{2m-1}\frac{\binom{l}{i}\binom{K-l}{2m-1-i}}{\binom{K}{2m-1}}$, $\forall l \in \{0, 1, \dots, K\}$. Then Algorithm DaRRM$_{\gamma_{DSub}}$ is $m\epsilon$-differentially private.*

Proof of Lemma B.10. First, note $\gamma_{DSub}$ belongs to the "symmetric form family". We show $\gamma_{DSub}$ satisfies the two sufficient conditions in Lemma B.7, and hence, by Lemma B.7, DaRRM$_{\gamma_{DSub}}$ is $m\epsilon$-differentially private. Specifically, we consider $h(l) = \sum_{i=m}^{2m-1}\frac{\binom{l}{i}\binom{K-l}{2m-1-i}}{\binom{K}{2m-1}}$, $\forall l \in \{0, 1, \dots, K\}$, and $1 \le m \le K$.

To show the first condition is satisfied, let $X_{(K-1)} \sim \text{Binomial}(K-1, p)$ and $Y_{(K-1)} \sim \text{Binomial}(K-1, e^\epsilon p)$, and consider $p \in [0, \frac{1}{1+e^\epsilon}]$.

$$\mathbb{E}_{X_{(K-1)}}[h(X+1)]=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\mathbb{E}_{X_{(K-1)}}\Big[\binom{X+1}{i}\binom{K-X-1}{2m-1-i}\Big] \tag{149}$$

$$=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\mathbb{E}_{X_{(K-1)}}\Big[\Big(\binom{X}{i}+\binom{X}{i-1}\Big)\binom{K-1-X}{2m-1-i}\Big]\quad\Big(\text{since }\binom{X+1}{i}=\binom{X}{i}+\binom{X}{i-1}\mathbb{I}\{i\geq1\}\Big) \tag{150}$$

$$=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\mathbb{E}_{X_{(K-1)}}\Big[\binom{X}{i}\binom{K-1-X}{2m-1-i}\Big]+\mathbb{E}_{X_{(K-1)}}\Big[\binom{X}{i-1}\binom{K-1-X}{(2m-2)-(i-1)}\Big]\Big) \tag{151}$$

$$=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\binom{K-1}{2m-1}\binom{2m-1}{i}p^{i}(1-p)^{2m-1-i}+\binom{K-1}{2m-2}\binom{2m-2}{i-1}p^{i-1}(1-p)^{2m-1-i}\Big)\quad\text{(by Lemma B.9)} \tag{152}$$

Similarly,

$$\mathbb{E}_{X_{(K-1)}}[h(X)]=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\mathbb{E}_{X_{(K-1)}}\Big[\binom{X}{i}\binom{K-X}{2m-1-i}\Big] \tag{153}$$

$$=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\mathbb{E}_{X_{(K-1)}}\Big[\binom{X}{i}\binom{K-1-X}{2m-1-i}\Big]+\mathbb{E}_{X_{(K-1)}}\Big[\binom{X}{i}\binom{K-1-X}{2m-2-i}\Big]\mathbb{I}\{i\leq2m-2\}\Big)\quad\Big(\text{since }\binom{K-X}{2m-1-i}=\binom{K-1-X}{2m-1-i}+\binom{K-1-X}{2m-2-i}\Big) \tag{154}$$

$$=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\binom{K-1}{2m-1}\binom{2m-1}{i}p^{i}(1-p)^{2m-1-i}+\binom{K-1}{2m-2}\binom{2m-2}{i}p^{i}(1-p)^{2m-2-i}\mathbb{I}\{i\leq2m-2\}\Big)\quad\text{(by Lemma B.9)} \tag{155}$$

Hence, subtracting Eq. 155 from Eq. 152,

$$\mathbb{E}_{X_{(K-1)}}[h(X+1)-h(X)]=\frac{\binom{K-1}{2m-2}}{\binom{K}{2m-1}}\Big(\sum_{i=m}^{2m-1}\binom{2m-2}{i-1}p^{i-1}(1-p)^{2m-1-i}-\sum_{i=m}^{2m-2}\binom{2m-2}{i}p^{i}(1-p)^{2m-2-i}\Big) \tag{156}$$

$$=\frac{\binom{K-1}{2m-2}}{\binom{K}{2m-1}}\Big(\sum_{i=m-1}^{2m-2}\binom{2m-2}{i}p^{i}(1-p)^{2m-2-i}-\sum_{i=m}^{2m-2}\binom{2m-2}{i}p^{i}(1-p)^{2m-2-i}\Big) \tag{157}$$

$$=\frac{2m-1}{K}\binom{2m-2}{m-1}p^{m-1}(1-p)^{m-1} \tag{158}$$

Similarly,

$$\mathbb{E}_{Y_{(K-1)}}[h(Y+1)-h(Y)]=\frac{2m-1}{K}\binom{2m-2}{m-1}(e^{\epsilon}p)^{m-1}(1-e^{\epsilon}p)^{m-1} \tag{159}$$

Since $p \in [0, \frac{1}{1+e^\epsilon}]$, there is $p(1-p) \ge e^{-\epsilon}\cdot e^{\epsilon}p(1-e^{\epsilon}p)$. Hence,

$$e^{(m-1)\epsilon}\mathbb{E}_{X_{(K-1)}}[h(X+1)-h(X)]=\frac{2m-1}{K}\binom{2m-2}{m-1}e^{(m-1)\epsilon}p^{m-1}(1-p)^{m-1} \tag{161}$$

$$\geq\frac{2m-1}{K}\binom{2m-2}{m-1}e^{(m-1)\epsilon}\big(e^{-\epsilon}e^{\epsilon}p(1-e^{\epsilon}p)\big)^{m-1} \tag{162}$$

$$=\frac{2m-1}{K}\binom{2m-2}{m-1}(e^{\epsilon}p)^{m-1}(1-e^{\epsilon}p)^{m-1}=\mathbb{E}_{Y_{(K-1)}}[h(Y+1)-h(Y)] \tag{163}$$

implying

$$e^{m\epsilon}\mathbb{E}_{X_{(K-1)}}[h(X+1)-h(X)]\geq e^{\epsilon}\mathbb{E}_{Y_{(K-1)}}[h(Y+1)-h(Y)] \tag{165}$$

and the first condition is satisfied.

To show the second condition is satisfied, let $\hat{X}_{(K-1)} \sim \text{Binomial}(K-1, 1-e^{\epsilon}(1-p))$ and $\hat{Y}_{(K-1)} \sim \text{Binomial}(K-1, p)$, and consider $p \in [\frac{1}{1+e^{-\epsilon}}, 1]$. By the same computation as in Eq. 149–152, with $p$ replaced by $1 - e^{\epsilon}(1-p)$,

$$\mathbb{E}_{\hat{X}_{(K-1)}}[h(\hat{X}+1)]=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\binom{K-1}{2m-1}\binom{2m-1}{i}(1-e^{\epsilon}(1-p))^{i}(e^{\epsilon}(1-p))^{2m-1-i}+\binom{K-1}{2m-2}\binom{2m-2}{i-1}(1-e^{\epsilon}(1-p))^{i-1}(e^{\epsilon}(1-p))^{2m-1-i}\Big) \tag{167}$$

and, as in Eq. 153–155,

$$\mathbb{E}_{\hat{X}_{(K-1)}}[h(\hat{X})]=\frac{1}{\binom{K}{2m-1}}\sum_{i=m}^{2m-1}\Big(\binom{K-1}{2m-1}\binom{2m-1}{i}(1-e^{\epsilon}(1-p))^{i}(e^{\epsilon}(1-p))^{2m-1-i}+\binom{K-1}{2m-2}\binom{2m-2}{i}(1-e^{\epsilon}(1-p))^{i}(e^{\epsilon}(1-p))^{2m-2-i}\mathbb{I}\{i\leq2m-2\}\Big) \tag{169}$$

Hence, following Eq. 167 and Eq. 169, the same cancellation as in Eq. 156–158 gives

$$\mathbb{E}_{\hat{X}_{(K-1)}}[h(\hat{X}+1)-h(\hat{X})]=\frac{2m-1}{K}\binom{2m-2}{m-1}(1-e^{\epsilon}(1-p))^{m-1}(e^{\epsilon}(1-p))^{m-1} \tag{173}$$

Similarly,

$$\mathbb{E}_{\hat{Y}_{(K-1)}}[h(\hat{Y}+1)-h(\hat{Y})]=\frac{2m-1}{K}\binom{2m-2}{m-1}p^{m-1}(1-p)^{m-1} \tag{174}$$

Hence,

$$e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}_{(K-1)}}[h(\hat{X}+1)-h(\hat{X})]=e^{(m+1)\epsilon}\frac{2m-1}{K}\binom{2m-2}{m-1}(1-e^{\epsilon}(1-p))^{m-1}(e^{\epsilon}(1-p))^{m-1} \tag{175}$$

$$\geq\frac{2m-1}{K}\binom{2m-2}{m-1}(1-e^{\epsilon}(1-p))^{m-1}e^{(m-1)\epsilon}(1-p)^{m-1} \tag{176}$$

$$=\frac{2m-1}{K}\binom{2m-2}{m-1}\big(e^{\epsilon}-e^{2\epsilon}(1-p)\big)^{m-1}(1-p)^{m-1} \tag{177}$$

Note that

$$e^{\epsilon}-e^{2\epsilon}(1-p)\geq p\iff(e^{\epsilon}+1)(e^{\epsilon}-1)p\geq e^{\epsilon}(e^{\epsilon}-1)\iff p\geq\frac{e^{\epsilon}}{e^{\epsilon}+1}=\frac{1}{1+e^{-\epsilon}}$$

and the condition needs to hold for $p \in [\frac{1}{1+e^{-\epsilon}}, 1]$. Therefore, following Eq. 177,
$$e^{(m+1)\epsilon}\mathbb{E}_{\hat{X}_{(K-1)}}[h(\hat{X}+1)-h(\hat{X})]\geq\frac{2m-1}{K}\binom{2m-2}{m-1}p^{m-1}(1-p)^{m-1}=\mathbb{E}_{\hat{Y}_{(K-1)}}[h(\hat{Y}+1)-h(\hat{Y})] \tag{178}$$

implying the second condition is satisfied. Therefore, by Lemma B.7, DaRRM$_{\gamma_{DSub}}$ is $m\epsilon$-differentially private. $\square$

## B.3 Comparing The Utility Of Subsampling Approaches

Intuitively, if we subsample $2m - 1$ mechanisms, the utility is higher than that of the naïve subsampling approach, which outputs the majority based on only $m$ mechanisms. To complete the story, we formally compare the utility of outputting the majority of $2m - 1$ subsampled mechanisms (Theorem 4.1) and outputting the majority of $m$ subsampled mechanisms (simple composition, Theorem 2.2) in the i.i.d. mechanisms and pure differential privacy setting, fixing the output privacy loss to be $m\epsilon$.

Lemma B.11. *Consider Problem 1.1 with i.i.d. mechanisms $\{M_i\}_{i=1}^K$, i.e., $p = p_i = \Pr[M_i(\mathcal{D}) = 1]$, $p' = p'_i = \Pr[M_i(\mathcal{D}') = 1]$, $\forall i \in [K]$. Let $\gamma_1 : \{0, 1, \dots, K\} \to [0, 1]$ and $\gamma_2 : \{0, 1, \dots, K\} \to [0, 1]$ be two functions that are both symmetric around $\frac{K}{2}$. If $1 \ge \gamma_1(l) \ge \gamma_2(l) \ge 0$, $\forall l \in \{0, \dots, K\}$, then $\mathcal{E}(\text{DaRRM}_{\gamma_1}) \le \mathcal{E}(\text{DaRRM}_{\gamma_2})$.*

Proof. Recall $\mathcal{S} = \{S_1, \dots, S_K\}$, where $S_i \sim M_i(\mathcal{D})$, is the set of observed outcomes from the mechanisms $\{M_i\}_{i=1}^K$. By Definition 2.4, for any $\gamma$ that is symmetric around $\frac{K}{2}$, the error of DaRRM$_\gamma$ is

$$\mathcal{E}(\text{DaRRM}_\gamma)=\big|\Pr[\text{DaRRM}_\gamma(\mathcal{D})=1]-\Pr[g(\mathcal{S})=1]\big| \tag{183}$$

$$=\Big|\sum_{l=\frac{K+1}{2}}^{K}\Big(\gamma(l)+\tfrac{1}{2}(1-\gamma(l))\Big)\cdot\alpha_l+\sum_{l=0}^{\frac{K-1}{2}}\tfrac{1}{2}(1-\gamma(l))\cdot\alpha_l-\sum_{l=\frac{K+1}{2}}^{K}\alpha_l\Big| \tag{184}$$

$$=\Big|\sum_{l=\frac{K+1}{2}}^{K}\Big(\tfrac{1}{2}\gamma(l)-\tfrac{1}{2}\Big)\cdot\alpha_l+\sum_{l=0}^{\frac{K-1}{2}}\Big(\tfrac{1}{2}-\tfrac{1}{2}\gamma(l)\Big)\cdot\alpha_l\Big| \tag{185}$$

$$=\frac{1}{2}\Big|\sum_{l=\frac{K+1}{2}}^{K}(1-\gamma(l))\cdot(\alpha_l-\alpha_{K-l})\Big| \tag{186}$$

where $\alpha_l = \binom{K}{l}p^{l}(1-p)^{K-l}$, $\forall l \in \{0, 1, \dots, K\}$, and recall $p = \Pr[M_i(\mathcal{D}) = 1]$, $\forall i \in [K]$. For any $l \ge \frac{K+1}{2}$:

1. If $p \in \{0, 1\}$, the comparisons below hold trivially, since each $\alpha_l \in \{0, 1\}$.
2. Otherwise, for $p \in (0, 1)$:

(a) If $p \ge \frac{1}{2}$,

$$\frac{\alpha_{l}}{\alpha_{K-l}}=\frac{p^{l}(1-p)^{K-l}}{p^{K-l}(1-p)^{l}}=p^{2l-K}(1-p)^{K-2l}=\Big(\underbrace{\frac{p}{1-p}}_{\geq1}\Big)^{\overbrace{2l-K}^{\geq0}}\geq1,\quad\Rightarrow\alpha_{l}\geq\alpha_{K-l} \tag{187}$$

(b) If $p < \frac{1}{2}$,

$$\frac{\alpha_{l}}{\alpha_{K-l}}=\Big(\underbrace{\frac{p}{1-p}}_{\leq1}\Big)^{\overbrace{2l-K}^{\geq0}}\leq1,\quad\Rightarrow\alpha_{l}\leq\alpha_{K-l} \tag{188}$$

Hence, if $p \ge \frac{1}{2}$, then $\alpha_l \ge \alpha_{K-l}$, $\forall l \ge \frac{K+1}{2}$. Since $\gamma_1(l) \ge \gamma_2(l)$, $\forall l \in \{0, \dots, K\}$, we have $1 - \gamma_1(l) \le 1 - \gamma_2(l)$, and so

$$\mathcal{E}(\text{DaRRM}_{\gamma_{1}})=\sum_{l=\frac{K+1}{2}}^{K}\frac{1}{2}(1-\gamma_{1}(l))\cdot(\alpha_{l}-\alpha_{K-l})\leq\sum_{l=\frac{K+1}{2}}^{K}\frac{1}{2}(1-\gamma_{2}(l))\cdot(\alpha_{l}-\alpha_{K-l})=\mathcal{E}(\text{DaRRM}_{\gamma_{2}}) \tag{189}$$

Similarly, if $p < \frac{1}{2}$, then $\alpha_l \le \alpha_{K-l}$, $\forall l \ge \frac{K+1}{2}$, and

$$\mathcal{E}(\text{DaRRM}_{\gamma_{1}})=\sum_{l=\frac{K+1}{2}}^{K}\frac{1}{2}(1-\gamma_{1}(l))\cdot(\alpha_{K-l}-\alpha_{l})\leq\sum_{l=\frac{K+1}{2}}^{K}\frac{1}{2}(1-\gamma_{2}(l))\cdot(\alpha_{K-l}-\alpha_{l})=\mathcal{E}(\text{DaRRM}_{\gamma_{2}}) \tag{190}$$

Therefore,

$$\mathcal{E}(\text{DaRRM}_{\gamma_{1}})\leq\mathcal{E}(\text{DaRRM}_{\gamma_{2}}) \tag{191}$$

$\square$

Since $\gamma_{DSub}(l) \ge \gamma_{Sub}(l)$, $\forall l \in \{0, 1, \dots, K\}$, by Lemma B.11, $\mathcal{E}(\text{DaRRM}_{\gamma_{DSub}}) \le \mathcal{E}(\text{DaRRM}_{\gamma_{Sub}})$ — that is, outputting the majority of $2m - 1$ subsampled mechanisms yields higher utility than outputting the majority of $m$ subsampled mechanisms.
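As a quick numerical illustration of Lemma B.11 (a sketch of our own, not code from the paper, taking $m$ odd for simplicity), one can evaluate the error of Eq. 186 for $\gamma_{Sub}$ and $\gamma_{DSub}$ and check the pointwise dominance $\gamma_{DSub}(l) \ge \gamma_{Sub}(l)$:

```python
from math import comb

def h_generic(l, K, n_sub):
    # Pr[majority of n_sub (odd) mechanisms subsampled without replacement
    # output 1 | l of the K observed outcomes are 1]
    return sum(comb(l, j) * comb(K - l, n_sub - j)
               for j in range((n_sub + 1) // 2, n_sub + 1)) / comb(K, n_sub)

def gamma_from_h(l, K, n_sub):
    h = h_generic(l, K, n_sub)
    return 1 - 2 * h if l <= (K - 1) / 2 else 2 * h - 1

def error(K, p, gamma):
    # E(DaRRM_gamma) of Eq. 186 in the i.i.d. setting, alpha_l = Binomial(K, p) pmf
    alpha = [comb(K, l) * p**l * (1 - p)**(K - l) for l in range(K + 1)]
    return 0.5 * abs(sum((1 - gamma(l)) * (alpha[l] - alpha[K - l])
                         for l in range((K + 1) // 2, K + 1)))

K, m, p = 11, 3, 0.8
g_sub  = lambda l: gamma_from_h(l, K, m)          # majority of m subsampled
g_dsub = lambda l: gamma_from_h(l, K, 2 * m - 1)  # majority of 2m-1 subsampled
assert all(g_dsub(l) >= g_sub(l) for l in range(K + 1))
assert error(K, p, g_dsub) <= error(K, p, g_sub)  # Lemma B.11
```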
## C Details Of Section 5: Optimizing The Noise Function γ In DaRRM

## C.1 Deriving The Optimization Objective

For any $\gamma$ function that is symmetric around $\frac{K}{2}$,

$$\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}[\mathcal{E}(\text{DaRRM}_\gamma)]=\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\big[\big|\Pr[\text{DaRRM}_\gamma(\mathcal{D})=1]-\Pr[g(\mathcal{S})=1]\big|\big] \tag{193}$$

$$=\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\Big|\sum_{l=\frac{K+1}{2}}^{K}\Big(\alpha_l\cdot\Big(\gamma(l)+\tfrac{1}{2}(1-\gamma(l))\Big)-\alpha_l\Big)+\sum_{l=0}^{\frac{K-1}{2}}\alpha_l\cdot\tfrac{1}{2}(1-\gamma(l))\Big|\Big] \tag{194}$$

$$=\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\Big|\sum_{l=\frac{K+1}{2}}^{K}\alpha_l\Big(\tfrac{1}{2}\gamma(l)-\tfrac{1}{2}\Big)+\sum_{l=0}^{\frac{K-1}{2}}\alpha_l\Big(\tfrac{1}{2}-\tfrac{1}{2}\gamma(l)\Big)\Big|\Big] \tag{195}$$

The above follows by conditioning on $\mathcal{L} = l \in \{0, 1, \dots, K\}$, i.e., the sum of observed outcomes in $\mathcal{S}$, and noting $\Pr[g(\mathcal{S}) = 1] = \sum_{l=\frac{K+1}{2}}^{K}\alpha_l$. Then,

$$=\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\Big|\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-\alpha_{K-l})(1-\gamma(l))\Big|\Big] \tag{196}$$

The above follows by symmetry of $\gamma$. Furthermore, notice that the expression inside the absolute value is symmetric around 0, so the optimization objective can be written as

$$\frac{1}{2}\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\sum_{l=\frac{K+1}{2}}^{K}(\alpha_l-\alpha_{K-l})(1-\gamma(l))\Big] \tag{197}$$

$$=\underbrace{\frac{1}{2}\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-\alpha_{K-l})\Big]}_{:=A}-\underbrace{\frac{1}{2}\mathbb{E}_{p_1,p_2,\dots,p_K\sim\mathcal{T}}\Big[\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-\alpha_{K-l})\gamma(l)\Big]}_{:=B} \tag{199}$$

Since expression $A$ in Eq. 199 does not involve $\gamma$, we only need to optimize expression $B$ in Eq. 199. That is,

$$-\frac{1}{2}\mathbb{E}_{p_{1},p_{2},\dots,p_{K}\sim\mathcal{T}}\Big[\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-\alpha_{K-l})\gamma(l)\Big] \tag{200}$$

$$=-\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\mathbb{E}_{p_{1},p_{2},\dots,p_{K}\sim\mathcal{T}}\big[\alpha_{l}-\alpha_{K-l}\big]\cdot\gamma(l) \tag{201}$$

Eq. 201 is the optimization objective we use in the experiments. We see the optimization objective is linear in $\gamma$. Note that in the general setting, $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(p_1, p_2, \dots, p_K)$, where recall $\mathcal{L}(\mathcal{D})$ is the sum of observed outcomes on dataset $\mathcal{D}$, and hence $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ is the pmf of the Poisson Binomial distribution at $l \in \{0, 1, \dots, K\}$.

## C.2 Practical Approximation Of The Objective

Since the optimization objective in Eq. 200 requires taking an expectation over $p_1, \dots, p_K$, which involves integrating over $K$ variables and can be slow in practice, we propose the following approximations to efficiently compute the objective. We start with a simple idea in Section C.2.1: sample the $p_i$'s from $[0, 1]$ and take an empirical average of the objective value over all subsampled sets of $p_1, \dots, p_K$ as the approximation of the expectation. However, we found this approach is less numerically stable. We then propose a second approach in Section C.2.2, which approximates the integration over the $p_i$'s using the rectangular rule instead of directly approximating the objective value. We use the second approach in our experiments and empirically demonstrate its effectiveness. **Note** that approximating the optimization objective does not affect the privacy guarantee.

## C.2.1 Approximation Via Direct Sampling Of p_i's

One straightforward way of efficiently computing an approximation to the optimization objective is as follows:

Algorithm 4 Straightforward Approximation of the Optimization Objective
1: Input: # mechanisms $K \in \mathbb{N}$, # iterations $T \in \mathbb{N}$, noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$
2: for $t = 1, 2, \dots, T$ do
3:   Sample $\hat{p}_1, \hat{p}_2, \dots, \hat{p}_K \sim \mathcal{T}$
4:   $\hat{\mathcal{L}} \leftarrow \text{PoissonBinomial}(\hat{p}_1, \dots, \hat{p}_K)$
5:   $\hat{\alpha}_l \leftarrow \Pr[\hat{\mathcal{L}} = l]$, $\forall l \in \{0, \dots, K\}$
6:   $g_t \leftarrow -\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}(\hat{\alpha}_l - \hat{\alpha}_{K-l})\cdot\gamma(l)$
7: end for
8: Return $\frac{1}{T}\sum_{t=1}^{T} g_t$
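The following is a minimal, self-contained Python rendering of Algorithm 4 (our own illustration; `sample_prior` is a hypothetical stand-in for drawing $p_1, \dots, p_K$ from the prior $\mathcal{T}$):

```python
import numpy as np

def poisson_binomial_pmf(ps):
    # pmf of PoissonBinomial(p_1, ..., p_K) via iterated convolution
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def approx_objective_direct(K, T, gamma, sample_prior, seed=0):
    # Algorithm 4: Monte Carlo estimate of the objective in Eq. 200
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(T):
        ps = sample_prior(rng, K)                      # step 3
        alpha = poisson_binomial_pmf(ps)               # steps 4-5
        total += -0.5 * sum((alpha[l] - alpha[K - l]) * gamma(l)
                            for l in range((K + 1) // 2, K + 1))  # step 6
    return total / T                                   # step 8

# Example: under a uniform prior on [0, 1] (symmetric around 1/2), the
# estimate is close to 0 for any gamma, since E[alpha_l] = E[alpha_{K-l}].
est = approx_objective_direct(11, 1000, lambda l: 1.0,
                              lambda rng, K: rng.uniform(0.0, 1.0, K))
```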
However, we found this approximation is not very numerically stable, even for $T = 10000$, in the experiments, and so we propose to adopt the second approximation as follows.

## C.2.2 Approximating The Integration Over p_i's

Consider the following surrogate objective:

$$-\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\int_{0.5}^{1}\int_{0.5}^{1}\cdots\int_{0.5}^{1}(\alpha_{l}-\alpha_{K-l})\,dp_{1}\,dp_{2}\dots dp_{K}\cdot\gamma(l) \tag{202}$$

where we approximate the integration instead of directly approximating the objective value. The approximation of the integration is based on the rectangular rule and the fact that the Poisson Binomial distribution is invariant to the order of its probability parameters.

First, we discretize the integration over the $p_i$'s: pick $\tau = 50$ points representing probabilities between $[0.5, 1)$ with equal distance $\theta = \frac{0.5}{\tau}$. Denote this set of points as $\mathcal{W}$. We pick only $\tau = 50$ samples to ensure the distance between each pair of adjacent samples, i.e., $\theta$, is not too small; otherwise, this can cause numerical instability. For each $l \in \{\frac{K+1}{2}, \frac{K+1}{2}+1, \dots, K\}$, we want to compute an approximated coefficient for $\gamma(l)$ as follows:

$$\int_{0.5}^{1}\int_{0.5}^{1}\cdots\int_{0.5}^{1}(\alpha_{l}-\alpha_{K-l})\,dp_{1}\,dp_{2}\dots dp_{K}\approx\sum_{p_{1}\in\mathcal{W}}\sum_{p_{2}\in\mathcal{W}}\cdots\sum_{p_{K}\in\mathcal{W}}(\alpha_{l}-\alpha_{K-l}) \tag{203}$$

which approximates integration over a $K$-dimensional grid $\mathcal{W}^K$. The idea is then to sample points from this $K$-dimensional grid $\mathcal{W}^K$ and compute an empirical mean of the integration based on the sampled probabilities for $p_1, \dots, p_K$ from $\mathcal{W}^K$ as the approximation of the integration in the objective.

Let $(s_1, s_2, \dots, s_K)$ be randomly sampled probability values from $\mathcal{W}^K$, and we compute $(\alpha_l - \alpha_{K-l})$ for all $l$ based on $(p_1, \dots, p_K) = (s_1, \dots, s_K)$. To apply the rectangular rule, since the grid of probabilities is $K$-dimensional, the weight of $(\alpha_l - \alpha_{K-l})$ in the approximate integration is $\theta^K$. Furthermore, observe that $\alpha_l$ is the pmf at $l$ of a Poisson Binomial distribution in our case, and $\text{PoissonBinomial}(p_1, \dots, p_K) \overset{dist.}{\sim} \text{PoissonBinomial}(\pi(p_1, \dots, p_K))$, where $\pi$ denotes a permutation of $p_1, \dots, p_K$ and $\overset{dist.}{\sim}$ denotes equality in distribution. Hence, with a single probability sample $(s_1, \dots, s_K)$, we can indeed compute $\alpha_l - \alpha_{K-l}$ for each $l$ at $K!$ points of the grid $\mathcal{W}^K$, since they all have the same value. Therefore, we should set the weight of $\alpha_l - \alpha_{K-l}$ in the approximate integration to be $w = \theta^K \cdot K!$. Furthermore, since the order of $(p_1, \dots, p_K)$ does not affect the objective value, there is a total of $(\tau \text{ choose } K \text{ with replacement}) = \binom{\tau+K-1}{K} := P$ different (unordered) points in the grid $\mathcal{W}^K$.

In summary, the integration-based approximation of the objective proceeds as follows:

Algorithm 5 Integration-Based Approximation of the Optimization Objective
1: Input: # mechanisms $K \in \mathbb{N}$, # iterations $T = 10000 \in \mathbb{N}$, noise function $\gamma : \{0, 1, \dots, K\} \to [0, 1]$, $\tau = 50$: # samples between $[0.5, 1)$ forming the set $\mathcal{W}$
2: $\theta \leftarrow 0.5/\tau$ (distance between samples)
3: $w \leftarrow \theta^K \cdot K!$
4: $P \leftarrow \binom{\tau+K-1}{K}$
5: for $t = 1, 2, \dots, T$ do
6:   Sample probabilities $(s_1, s_2, \dots, s_K) \sim \mathcal{W}^K$
7:   $\hat{\mathcal{L}} \sim \text{PoissonBinomial}(s_1, s_2, \dots, s_K)$
8:   $\hat{\alpha}_l \leftarrow \Pr[\hat{\mathcal{L}} = l]$, $\forall l \in \{0, 1, \dots, K\}$
9:   $g_t \leftarrow -\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K} w\cdot(\hat{\alpha}_l - \hat{\alpha}_{K-l})\cdot\gamma(l)$
10: end for
11: Return $\frac{P}{T}\sum_{t=1}^{T} g_t$
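A minimal Python rendering of Algorithm 5 follows (our own sketch, mirroring the steps above; the final $\frac{P}{T}$ scaling assumes each weighted sample $g_t$ stands for one of the $P$ unordered grid points):

```python
import numpy as np
from math import comb, factorial

def poisson_binomial_pmf(ps):
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def approx_objective_integration(K, gamma, T=10000, tau=50, seed=0):
    # Algorithm 5: rectangular-rule approximation of the surrogate objective (Eq. 202)
    rng = np.random.default_rng(seed)
    theta = 0.5 / tau
    W = 0.5 + theta * np.arange(tau)        # grid of tau points in [0.5, 1)
    w = theta**K * float(factorial(K))      # weight per unordered grid point
    P = comb(tau + K - 1, K)                # number of unordered grid points
    total = 0.0
    for _ in range(T):
        s = rng.choice(W, size=K)           # step 6: sample from W^K
        alpha = poisson_binomial_pmf(s)     # steps 7-8
        total += -0.5 * sum(w * (alpha[l] - alpha[K - l]) * gamma(l)
                            for l in range((K + 1) // 2, K + 1))  # step 9
    return P / T * total                    # step 11
```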
## C.3 Reducing # Constraints From ∞ To A Polynomial Set

Lemma C.1 (Restatement of Lemma 5.1). *Consider using DaRRM (Algorithm 1) to solve Problem 1.1, and let $f$ be the privacy cost objective as defined in Lemma 3.3. Given an arbitrary noise function $\gamma$, let the worst case probabilities be*

$$(p_{1}^{*},\dots,p_{K}^{*},p_{1}'^{*},\dots,p_{K}'^{*})=\arg\max_{\{(p_{i},p_{i}')\}_{i=1}^{K}}f(p_{1},\dots,p_{K},p_{1}',\dots,p_{K}';\gamma)$$

*Then, each pair $(p_i^*, p_i'^*)$, $\forall i \in [K]$, satisfies*

$$(p_{i}^{*},p_{i}'^{*})\in\Big\{(0,0),(1,1),(0,\Delta),(\Delta,0),(1-\Delta,1),(1,1-\Delta),\Big(\frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1},\frac{1-\Delta}{e^{\epsilon}+1}\Big),\Big(\frac{1-\Delta}{e^{\epsilon}+1},\frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1}\Big)\Big\}$$

*Furthermore, when $\delta > 0$, there exists a finite vector set $\mathcal{P}$ of size $O(K^7)$ such that if $\beta = \max_{\{(p_i,p_i')\}_{i=1}^K \in \mathcal{P}} f(p_1, \dots, p_K, p_1', \dots, p_K'; \gamma)$, then $f(p_1^*, \dots, p_K^*, p_1'^*, \dots, p_K'^*; \gamma) \le \beta$. When $\delta = 0$, the size of $\mathcal{P}$ can be reduced to $O(K^3)$.*

Figure 5: An illustration of the feasible region $\mathcal{F}_i$.

Proof. **Part I: Reducing # privacy constraints from ∞ to exponentially many.** Consider $(p_i, p_i')$ for an arbitrary $i \in [K]$, fixing $(p_j, p_j')$, $\forall j \ne i$. Given any noise function $\gamma$, recall the privacy cost objective $f(p_1, \dots, p_K, p_1', \dots, p_K'; \gamma)$ (see Lemma 3.3) is

$$f(p_{1},\dots,p_{K},p'_{1},\dots,p'_{K};\gamma)=\sum_{l=0}^{\frac{K-1}{2}}(e^{m\epsilon}\alpha'_{l}-\alpha_{l})\cdot\gamma(l)+\sum_{l=\frac{K+1}{2}}^{K}(\alpha_{l}-e^{m\epsilon}\alpha'_{l})\cdot\gamma(l)$$

and the privacy constraints are of the form

$$f(p_1, \dots, p_K, p_1', \dots, p_K'; \gamma) \le e^{m\epsilon} - 1 + 2\delta$$

where recall that $\alpha_l = \Pr[\mathcal{L}(\mathcal{D}) = l]$ is a function of $\{p_i\}_{i=1}^K$ and $\alpha'_l = \Pr[\mathcal{L}(\mathcal{D}') = l]$ is a function of $\{p'_i\}_{i=1}^K$, $\forall l \in \{0, 1, \dots, K\}$, and $\mathcal{L}(\mathcal{D}), \mathcal{L}(\mathcal{D}')$ are the sums of observed outcomes on neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$. By Lemma 3.3, $\gamma$ needs to make the above privacy constraint hold for all possible $\{(p_i, p_i')\}_{i=1}^K$ to make DaRRM$_\gamma$ $(m\epsilon, \delta)$-differentially private. This is equivalent to saying that $\gamma$ needs to ensure $\max_{\{(p_i, p_i')\}_{i=1}^K} f(p_1, \dots, p_K, p_1', \dots, p_K'; \gamma) \le e^{m\epsilon} - 1 + 2\delta$.

Notice that the sum of observed outcomes follows a Poisson Binomial distribution, i.e., $\mathcal{L}(\mathcal{D}) \sim \text{PoissonBinomial}(p_1, \dots, p_K)$ and $\mathcal{L}(\mathcal{D}') \sim \text{PoissonBinomial}(p'_1, \dots, p'_K)$. Hence, by the pmf of the Poisson Binomial distribution (see, e.g., https://en.wikipedia.org/wiki/Poisson_binomial_distribution), the privacy cost objective $f$ is linear in each $p_i$ and $p'_i$, fixing all $(p_j, p_j')$, $\forall j \ne i$. Since each mechanism $M_i$ is $(\epsilon, \Delta)$-differentially private, by definition, $(p_i, p_i')$ satisfies all of the following:

$$p_i \le e^\epsilon p_i' + \Delta, \quad p_i' \le e^\epsilon p_i + \Delta, \quad 1 - p_i \le e^\epsilon(1 - p_i') + \Delta, \quad 1 - p_i' \le e^\epsilon(1 - p_i) + \Delta$$

That is, $(p_i, p_i')$ lies in a feasible region $\mathcal{F}_i$ (see Figure 5). Note that the constraints on $(p_i, p_i')$ — that is, the boundaries of $\mathcal{F}_i$ — are linear in $p_i$ and $p'_i$. And so the optimization problem $(p_i^*, p_i'^*) = \arg\max_{(p_i, p_i')} f(p_1, \dots, p_K, p_1', \dots, p_K'; \gamma)$, which finds the worst case probabilities in $(p_i, p_i')$, is a Linear Programming (LP) problem in $(p_i, p_i')$ for each $i \in [K]$.
This implies $(p_i^*, p_i'^*)$ has to be on one of the eight corners of $\mathcal{F}_i$, that is, $(p_i^*, p_i'^*) \in \{(0, 0), (1, 1), (0, \Delta), (\Delta, 0), (1-\Delta, 1), (1, 1-\Delta), (\frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1}, \frac{1-\Delta}{e^{\epsilon}+1}), (\frac{1-\Delta}{e^{\epsilon}+1}, \frac{e^{\epsilon}+\Delta}{e^{\epsilon}+1})\} := \mathcal{C}$. Since all $(p_i, p_i')$ and $(p_j, p_j')$, for $i \neq j$, are independent, we can search for the worst case probabilities by searching for $(p_i^*, p_i'^*) \in \mathcal{C}$, instead of searching for $(p_i, p_i') \in \mathcal{F}_i$, $\forall i \in [K]$. Therefore, the infinitely many privacy constraints are now reduced to only $8^K$ when optimizing for the best $\gamma$ function that maximizes the utility of DaRRM$_\gamma$, while ensuring the output is $m\epsilon$-differentially private.

**Part II: Reducing # privacy constraints from exponentially many to a polynomial set.** To further reduce the number of privacy constraints in the optimization, observe that the Poisson Binomial distribution is invariant under permutation of its parameters. That is, $\text{PoissonBinomial}(p_1, \ldots, p_K) \overset{dist.}{\sim} \text{PoissonBinomial}(\pi(p_1, \ldots, p_K))$ for any permutation $\pi$, where $\overset{dist.}{\sim}$ means "follows the same distribution". Similarly, $\text{PoissonBinomial}(p_1', \ldots, p_K') \overset{dist.}{\sim} \text{PoissonBinomial}(\pi(p_1', \ldots, p_K'))$.

7See, e.g., https://en.wikipedia.org/wiki/Poisson_binomial_distribution for the pmf of the Poisson Binomial distribution.

The above observation implies that if we have one privacy constraint $f(p_1 = v_1, \ldots, p_K = v_K, p_1' = v_1', \ldots, p_K' = v_K'; \gamma) \le e^{m\epsilon} - 1 + 2\delta$ for some $\{(v_i, v_i')\}_{i=1}^{K} \in \mathcal{C}^K$, then any privacy constraint $f(p_1 = s_1, \ldots, p_K = s_K, p_1' = s_1', \ldots, p_K' = s_K'; \gamma) \le e^{m\epsilon} - 1 + 2\delta$, where $(s_1, \ldots, s_K) = \pi_1(v_1, \ldots, v_K)$ and $(s_1', \ldots, s_K') = \pi_2(v_1', \ldots, v_K')$ for permutations $\pi_1$ and $\pi_2$, is redundant. Therefore, there is a vector set $\mathcal{P}$, where each probability vector $(p_1, \ldots, p_K, p_1', \ldots, p_K')$ in $\mathcal{P}$ is constructed by setting $((p_1, p_1'), (p_2, p_2'), \ldots, (p_K, p_K')) = (v_1, v_2, \ldots, v_K)$, with $v_i \in \mathcal{C}$, $\forall i \in [K]$, such that vectors constructed by $((p_1, p_1'), (p_2, p_2'), \ldots, (p_K, p_K')) = \pi(v_1, v_2, \ldots, v_K)$ are not in $\mathcal{P}$. Note $|\mathcal{P}| = $ (8 choose $K$ with replacement) $= \binom{K+8-1}{K} = O(K^7)$. If we can restrict our search for the worst case probabilities to this set $\mathcal{P}$, that is, solve for $\beta := \max_{\{(p_i, p_i')\}_{i=1}^{K} \in \mathcal{P}} f(p_1, \ldots, p_K, p_1', \ldots, p_K'; \gamma)$, then $f(p_1^*, \ldots, p_K^*, p_1'^{*}, \ldots, p_K'^{*}; \gamma) \le \beta$. This implies we only need $O(K^7)$ privacy constraints to optimize for the best noise function $\gamma$ in DaRRM, while making sure DaRRM$_\gamma$ is $m\epsilon$-differentially private.

Note if $\Delta = 0$, i.e., the mechanisms $M_i$ are pure differentially private, the feasible region $\mathcal{F}_i$ in which $(p_i, p_i')$ lies has only 4 corners instead of 8. This implies $(p_i^*, p_i'^*) \in \mathcal{C} = \{(0, 0), (1, 1), (\frac{e^{\epsilon}}{e^{\epsilon}+1}, \frac{1}{e^{\epsilon}+1}), (\frac{1}{e^{\epsilon}+1}, \frac{e^{\epsilon}}{e^{\epsilon}+1})\}$. Hence, in this case, $|\mathcal{P}| = $ (4 choose $K$ with replacement) $= \binom{K+4-1}{K} = O(K^3)$, which implies we only need $O(K^3)$ privacy constraints to optimize for the best noise function $\gamma$ in DaRRM.
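As a sanity check on the constraint counts above, the reduced set $\mathcal{P}$ can be enumerated directly. The following is a small Python sketch (function names are ours); it lists one representative per permutation class of corner assignments and verifies that $|\mathcal{P}| = \binom{K+8-1}{K}$.

```python
import math
from itertools import combinations_with_replacement

def corner_set(eps, Delta):
    """The 8 corners of the feasible region F_i from Lemma C.1."""
    a = (math.exp(eps) + Delta) / (math.exp(eps) + 1)
    b = (1 - Delta) / (math.exp(eps) + 1)
    return [(0, 0), (1, 1), (0, Delta), (Delta, 0),
            (1 - Delta, 1), (1, 1 - Delta), (a, b), (b, a)]

def reduced_constraint_set(K, eps, Delta):
    """One representative ((p_1, p_1'), ..., (p_K, p_K')) per permutation class."""
    return list(combinations_with_replacement(corner_set(eps, Delta), K))

P = reduced_constraint_set(11, 0.1, 1e-5)
assert len(P) == math.comb(11 + 8 - 1, 11)  # O(K^7) constraints instead of 8^K
```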
![48_image_0.png](48_image_0.png)

Figure 6: $\epsilon_{total}$ and $\delta_{total}$ under m-fold adaptive composition using different composition bounds.

## D Full Experiment Results

## D.1 Optimal γ Function In Simulations

## D.1.1 Comparing Composition Bounds

In the setting we consider in simulations, where $K = 11$, $\epsilon = 0.1$, $\Delta = 10^{-5}$ and the privacy allowance $m \in \{1, 3, 5, 7\}$, we want to use the tightest privacy composition bound to ensure the strongest baselines. We compare the total privacy loss $\epsilon_{total}$ and the total failure probability $\delta_{total}$ under m-fold adaptive composition using simple composition (Theorem 2.2), optimal composition (Theorem 2.3), as well as the classical advanced composition (Theorem D.1). For advanced composition, we set $\delta' = \Delta = 10^{-5}$. For optimal composition, we first observe that for small $m$, as in this case, the optimal total privacy loss $\epsilon_{total}$ falls in the regime where $\epsilon_{total} = k\epsilon$. Hence, unlike in other regimes, $\epsilon_{total}$ here does not depend on $1/\delta'$ and we can set $\delta' = 0$ to minimize $\delta_{total}$. The results comparing different composition bounds are presented in Figure 6. We see that in the regime of interest, where $m$ is small, simple composition and optimal composition have exactly the same total privacy loss $\epsilon_{total}$, and simple composition has a slightly lower $\delta_{total}$ than optimal composition (the gap is $10^{-5}$).

Theorem D.1 (Advanced Composition (Dwork et al., 2014)). *For all* $\epsilon, \delta, \delta' \ge 0$*, the class of* $(\epsilon, \delta)$*-differentially private mechanisms satisfies* $(\epsilon', k\delta + \delta')$*-differential privacy under* k*-fold adaptive composition for*

$$\epsilon^{\prime}=\sqrt{2k\ln(1/\delta^{\prime})}\,\epsilon+k\epsilon(e^{\epsilon}-1)$$

## D.1.2 Comparison Using Optimal Composition

The optimal composition (Theorem 2.3) indicates less total privacy loss than simple composition (Theorem 2.2) when the number of folds, $m$, is large, or when the failure probability $\delta$ is large. To enable a meaningful comparison against optimal composition, we consider a larger $K$ and a larger failure probability $\delta$. Consider $K = 35$, $\epsilon = 0.1$, $\Delta = 10^{-5}$. By optimal composition, if one outputs the majority of $M$ subsampled mechanisms for some $M < K$, the majority output is $(\epsilon_{\text{opt}}, \delta_{\text{opt}})$-differentially private, where

$$\epsilon_{\text{opt}}=\min\Big\{M\epsilon,\ \frac{(e^{\epsilon}-1)\epsilon M}{e^{\epsilon}+1}+\epsilon\sqrt{2M\log\Big(e+\frac{\sqrt{M\epsilon^{2}}}{\delta^{\prime}}\Big)},\ \frac{(e^{\epsilon}-1)\epsilon M}{e^{\epsilon}+1}+\epsilon\sqrt{2M\log\frac{1}{\delta^{\prime}}}\Big\},\quad\delta_{\text{opt}}=1-(1-\delta)^{M}(1-\delta^{\prime})$$

for some $\delta' > 0$. We set this as the privacy guarantee of all majority ensembling algorithms. That is, if we want the majority output to be $(m\epsilon, \delta)$-differentially private, we set

$$m=\frac{\epsilon_{\text{opt}}}{\epsilon}=\min\Big\{M,\ \frac{(e^{\epsilon}-1)M}{e^{\epsilon}+1}+\sqrt{2M\log\Big(e+\frac{\sqrt{M\epsilon^{2}}}{\delta^{\prime}}\Big)},\ \frac{(e^{\epsilon}-1)M}{e^{\epsilon}+1}+\sqrt{2M\log\frac{1}{\delta^{\prime}}}\Big\}$$

and $\delta = 1-(1-\delta)^{M}(1-\delta^{\prime})$ accordingly. The parameters $\tau$ and $\lambda$ to compute $p_{const}$ in RR (see Section A.1) are set to be

$$\tau=\min\Big\{K,\ \frac{(e^{\epsilon}-1)K}{e^{\epsilon}+1}+\sqrt{2K\log\Big(e+\frac{\sqrt{K\epsilon^{2}}}{\delta^{\prime}}\Big)},\ \frac{(e^{\epsilon}-1)K}{e^{\epsilon}+1}+\sqrt{2K\log\frac{1}{\delta^{\prime}}}\Big\}$$

and $\lambda = 1-(1-\delta)^{K}(1-\delta^{\prime})$. In the experiments, we consider $M \in \{10, 13, 15, 20\}$ and $\delta' = 0.1$; and $\gamma_{opt}$ is computed using a uniform prior $\mathcal{T}$.
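The privacy allowance values in Table 3 below can be reproduced from the formula for $m$ above. A small Python sketch (the function name is ours):

```python
import math

def m_from_optimal_composition(M, eps, delta_prime):
    """Privacy allowance m = eps_opt / eps under optimal composition (Thm 2.3)."""
    a = (math.exp(eps) - 1) * M / (math.exp(eps) + 1)
    t1 = M
    t2 = a + math.sqrt(2 * M * math.log(math.e + math.sqrt(M * eps**2) / delta_prime))
    t3 = a + math.sqrt(2 * M * math.log(1 / delta_prime))
    return min(t1, t2, t3)

for M in (10, 13, 15, 20):
    print(M, round(m_from_optimal_composition(M, 0.1, 0.1), 4))
# expected (cf. Table 3): 6.4521, 7.5742, 8.2708, 9.8823
```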
All values of the parameters of the private ensembling algorithms we use in the experiment are listed in the table:

| # Subsampled mechanisms | M | 10 | 13 | 15 | 20 |
|-----------------------------|-----|---------|---------|---------|---------|
| Privacy allowance | m | 6.4521 | 7.5742 | 8.2708 | 9.8823 |
| Parameter of constant γ | τ | 14.0328 | 14.0328 | 14.0328 | 14.0328 |
| Parameter of constant γ | λ | 0.1003 | 0.1003 | 0.1003 | 0.1003 |
| Overall privacy loss | mϵ | 0.6452 | 0.7574 | 0.8271 | 0.9882 |
| Overall failure probability | δ | 0.1001 | 0.1001 | 0.1001 | 0.1002 |

Table 3: All parameter values. Note that all the private ensembling algorithms we compare in the experiment are required to be $(m\epsilon, \delta)$-differentially private. Here, $K = 35$, $\epsilon = 0.1$, $\Delta = 10^{-5}$ and $\delta' = 0.1$.

![49_image_0.png](49_image_0.png)

Figure 7: Plots of the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different $\gamma$ functions: the optimized $\gamma_{opt}$, and the baselines $\gamma_{Sub}$ (corresponding to subsampling) and $\gamma_{const}$ (corresponding to RR). Here, $K = 35$, $M \in \{10, 13, 15, 20\}$, $\Delta = 10^{-5}$, $\epsilon = 0.1$, $\delta' = 0.1$.

## D.1.3 Comparison In Pure Differential Privacy Settings

Consider the pure differential privacy setting, where $\Delta = \delta = 0$. Note that in this setting, it is known that simple composition is tight. To compute an optimized $\gamma_{opt}$ in DaRRM, since we have shown the number of constraints is $O(K^3)$ if $\Delta = \delta = 0$ (see Lemma 5.1), we can set $K$ to be larger. Here, we present results for $K \in \{11, 101\}$ and $\epsilon = 0.1$. Again, we compare the shape of different $\gamma$ functions and the corresponding $\mathcal{E}(\text{DaRRM}_\gamma)$ under those $\gamma$ functions, fixing the total privacy loss to be $m\epsilon$. $\gamma_{opt}$ is computed using a uniform prior $\mathcal{T}$. Since the subsampling mechanism from Section 4 with privacy amplification applies to this setting, we compare four different $\gamma$ noise functions here:

1. $\gamma_{opt}$ (Ours): the optimized $\gamma$ function using our optimization framework
2. $\gamma_{Sub}$ (Baseline): the $\gamma$ function that corresponds to outputting the majority of $m$ out of $K$ subsampled mechanisms
3. $\gamma_{DSub}$ (Baseline): the $\gamma$ function that corresponds to outputting the majority of $2m-1$ subsampled mechanisms from Theorem 4.1, a.k.a. Double Subsampling (DSub)
4. $\gamma_{const}$ (Baseline): the constant $\gamma$ function that corresponds to the classical Randomized Response (RR) algorithm

Setting 1. $K = 11$, $m \in \{1, 3, 5, 7, 9, 11\}$.

![50_image_0.png](50_image_0.png)

![50_image_1.png](50_image_1.png)

Figure 8: Plots of the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different $\gamma$ functions: the optimized $\gamma_{opt}$, the baselines $\gamma_{Sub}$ and $\gamma_{DSub}$ (Theorem 4.1), and the constant $\gamma_{const}$ (corresponding to RR). Here, $K = 11$, $m \in \{1, 3, 5, 7, 9, 11\}$, $\epsilon = 0.1$ and $\delta = \Delta = 0$. Note that when $m \in \{7, 9\}$, the cyan line ($\gamma_{DSub}$) and the red line ($\gamma_{opt}$) overlap. When $m = 11$, all lines overlap.

Observe that when $m \ge \frac{K+1}{2}$, that is, $m \in \{7, 9, 11\}$ in this case, the above plots suggest both $\gamma_{opt}$ and $\gamma_{DSub}$ achieve the minimum error at 0. This is consistent with our theory.

Setting 2. $K = 101$, $m \in \{10, 20, 30, 40, 60, 80\}$.

![51_image_0.png](51_image_0.png)

Figure 9: Plots of the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different $\gamma$ functions: the optimized $\gamma_{opt}$, the baselines $\gamma_{Sub}$ and $\gamma_{DSub}$ (Theorem 4.1), and the constant $\gamma_{const}$ (corresponding to RR). Here, $K = 101$, $m \in \{10, 20, 30, 40, 60, 80\}$, $\epsilon = 0.1$ and $\delta = \Delta = 0$.

## D.1.4 Comparison Using Different Prior Distributions

When optimizing the $\gamma$ that maximizes the utility in DaRRM, recall that the objective takes an expectation over the $p_i$'s for $p_i \sim \mathcal{T}$, where $\mathcal{T}$ is some distribution and $p_i = \Pr[M_i(D) = 1]$.
The previous experiments assume we do not have access to any prior knowledge about the $p_i$'s and hence $\mathcal{T}$ is the uniform distribution, i.e., Uniform([0, 1]). However, when one has knowledge about the mechanisms, one can set a proper prior $\mathcal{T}$ to further maximize the utility of DaRRM. In this section, let $\mathcal{T}_U$ denote Uniform([0, 1]) and we present results considering a different prior distribution, which we call $\mathcal{T}_P$, as follows. Suppose our prior belief is that each mechanism $M_i$ has a clear tendency towards voting 0 or 1, i.e., $p_i$ is far from 0.5. Let $\mathcal{T}_P$ be Uniform($[0, 0.3] \cup [0.7, 1]$). To optimize $\gamma$ under $\mathcal{T}_P$, we change the approximate optimization objective in Eq. 202, which optimizes $\gamma$ under $\mathcal{T}_U$, to the following:

$$-\frac{1}{2}\sum_{l=\frac{K+1}{2}}^{K}\int_{0.7}^{1}\int_{0.7}^{1}\cdots\int_{0.7}^{1}(\alpha_{l}-\alpha_{K-l})d p_{1}d p_{2}\ldots d p_{K}\cdot\gamma(l)\tag{204}$$

Setting. $K = 11$, $m \in \{3, 5\}$, $\epsilon = 0.1$, $\delta = \Delta = 0$. We compare the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different $\gamma$ functions:

1. $\gamma_{opt-U}$: the $\gamma$ function optimized under $p_i \sim \mathcal{T}_U$
2. $\gamma_{opt-P}$: the $\gamma$ function optimized under $p_i \sim \mathcal{T}_P$
3. $\gamma_{Sub}$, corresponding to the subsampling baseline
4. $\gamma_{const}$, corresponding to the RR baseline

![52_image_0.png](52_image_0.png)

Figure 10: Comparison of the shape and $\mathcal{E}(\text{DaRRM}_\gamma)$ of different $\gamma$ functions: 1) $\gamma$ optimized under prior $\mathcal{T}_U$, 2) $\gamma$ optimized under prior $\mathcal{T}_P$, 3) $\gamma_{Sub}$ (corresponding to the subsampling baseline) and 4) $\gamma_{const}$ (corresponding to the RR baseline). Here, $K = 11$, $m \in \{3, 5\}$, $\epsilon = 0.1$.

Observe that if the prior $\mathcal{T}_P$ used in optimizing $\gamma$ is closer to the actual distribution of the $p_i$'s, there is an additional utility gain (i.e., decreased error); otherwise, we suffer a slight utility loss (i.e., increased error), compared to optimizing $\gamma$ under the $\mathcal{T}_U$ prior. Furthermore, regardless of the choice of the prior distribution $\mathcal{T}$ in optimizing $\gamma$, DaRRM$_\gamma$ with an optimized $\gamma$ achieves a lower error compared to the baselines. Note that when we compute the error, we take the expectation w.r.t. the actual $p_i$ distributions, regardless of the prior used to optimize $\gamma$. In the experiments, we consider three different actual $p_i$ distributions:

1. "Actual: Uniform([0, 1])": $p_i \sim \mathcal{T}_U$, $\forall i \in [K]$
2. "Actual: $p_i = 0.5$": $p_i = 0.5$, $\forall i \in [K]$. This setting implies the mechanisms do not have a clear majority.
3. "Actual: Uniform([0, 0.1])": $p_i \sim$ Uniform([0, 0.1]), $\forall i \in [K]$. This setting implies the mechanisms have a clear majority (i.e., 0).

Since our prior $\mathcal{T}_P$ is closer to Uniform([0, 0.1]) (i.e., there is a clear majority), we would expect $\mathcal{E}(\text{DaRRM}_{\gamma_{opt-P}})$ to be the lowest when $p_i \sim$ Uniform([0, 0.1]), but to be higher than $\mathcal{E}(\text{DaRRM}_{\gamma_{opt-U}})$ when $p_i \sim$ Uniform([0, 1]) or $p_i = 0.5$. The results are presented in Figure 10.

## D.2 Private Semi-Supervised Knowledge Transfer

## D.2.1 More Details About The Baseline GNMax (Papernot et al., 2018)

The GNMax aggregation mechanism for majority ensembling of *non-private* teachers proceeds as follows (Section 4.1 of Papernot et al. (2018)): on input $x$,

$$M_{\sigma}(x)=\arg\operatorname*{max}_{i}\{n_{i}(x)+{\mathcal{N}}(0,\sigma^{2})\}\tag{205}$$

where $n_i(x)$ is the number of teachers who vote for class $i$.
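A minimal Python sketch of the GNMax aggregator in Eq. 205 (the function name and the use of NumPy are ours):

```python
import numpy as np

def gnmax(votes, sigma, rng=None):
    """GNMax aggregation (Eq. 205): argmax over per-class teacher vote
    counts n_i(x) perturbed with independent Gaussian noise N(0, sigma^2)."""
    rng = np.random.default_rng(rng)
    counts = np.asarray(votes, dtype=float)  # counts[i] = # teachers voting class i
    return int(np.argmax(counts + rng.normal(0.0, sigma, size=counts.shape)))
```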
## How To Set σ In GNMax?

Section 4.1 of Papernot et al. (2018) states that the GNMax mechanism is $(\lambda, \lambda/\sigma^2)$-Rényi differentially private (RDP), for all $\lambda \ge 1$. RDP bounds can be converted to DP bounds as follows:

Theorem D.2 (RDP to DP (Theorem 5 of Papernot et al. (2018))). *If a mechanism M guarantees* $(\lambda, \epsilon)$*-RDP, then M guarantees* $(\epsilon + \frac{\log 1/\delta}{\lambda-1}, \delta)$*-differential privacy for* $\delta \in (0, 1)$.

Therefore, GNMax with parameter $\sigma^2$ guarantees $(\frac{\lambda}{\sigma^2} + \frac{\log 1/\delta}{\lambda-1}, \delta)$-differential privacy, $\forall \lambda \ge 1$. Given $m, \epsilon, \Delta$, we want to choose $\lambda$ and $\sigma^2$ here so that the output of GNMax is $(m\epsilon, m\Delta)$-differentially private. Here, $\delta = m\Delta$. We first obtain a valid range of $\lambda$: setting $\frac{\lambda}{\sigma^2} + \frac{\log 1/\delta}{\lambda-1} = m\epsilon$ and requiring $\frac{\lambda}{\sigma^2} \geq 0$ gives $\lambda \ge \frac{\log 1/\delta}{m\epsilon} + 1 := \lambda_{min}$, and $\sigma^2 = \frac{\lambda}{m\epsilon - \frac{\log 1/\delta}{\lambda-1}}$. Since the smaller $\sigma^2$ is, the higher the utility, we perform a grid search over $\lambda \in [\lambda_{min}, 500]$, with discretized $\lambda$ values at equal distance 0.5, to find the minimum $\sigma^2_{min}$. For the $(m\epsilon, m\Delta)$ values used in the experiments, we observe that $\sigma^2$ decreases first and then increases as $\lambda$ increases, as shown in Figure 11. The $\lambda$ and $\sigma_{min}$ values in the RDP bound of Gaussian noise used to compute the privacy loss of GNMax's output in the experiments are presented in Table 4.

![53_image_0.png](53_image_0.png)

![53_image_1.png](53_image_1.png)

Figure 11: Plots of $\lambda$ vs. $\sigma^2$ in the Gaussian RDP privacy bound. The goal is to choose a $\lambda$ value that minimizes $\sigma^2$. It is not hard to see that the value of $\sigma^2$ decreases at first and then increases as $\lambda$ increases.

| Dataset | Privacy Loss Per Query (mϵ, m∆) | λ | σmin |
|---------------|-----------------------------------|-------|-------|
| MNIST | (0.2676, 0.0003) | 34.31 | 21.46 |
| Fashion-MNIST | (0.2556, 0.0003) | 35.74 | 22.46 |

Table 4: Parameters of the RDP bound of Gaussian noise used to compute the privacy loss of GNMax's output.

## A Note On The Data-Dependent Privacy Loss Bound

Papernot et al. (2018) give a potentially tighter data-dependent bound on the privacy loss of using GNMax to output the majority of non-private teachers' votes. We give clean pseudo-code for computing the data-dependent privacy loss bound in Algorithm 6, based on the lemmas and theorems in Papernot et al. (2018). Given privacy parameters $\sigma, \lambda$ and the teacher votes per class $\{n_i\}_{i=1}^{C}$ for $C$ classes, the data-dependent bound can be empirically evaluated and compared against the Gaussian privacy loss bound. The smaller one is the final privacy loss. We empirically find that the condition of the data-dependent bound (line 8 in Algorithm 6) is not satisfied when $K$ and the number of classes $C$ are small, e.g., $K = 11$, $C = 2$ as in our case, even if all teachers agree on the same output. And so in the experiments, we can only apply the Gaussian privacy loss bound (line 14).

Algorithm 6 Compute Tighter Privacy Loss
1: Input: std. of Gaussian noise $\sigma$, privacy parameter $\lambda$, # teachers $K$, # classes $C$, # votes per class $\{n_i\}_{i=1}^{C}$
2: $\mathcal{B} \leftarrow \{\}$ (bound candidates)
3: for $i = 1, 2, \ldots, K$ do
4: $q^{(i)} \leftarrow \frac{1}{2}\sum_{i \neq i^*} \text{erfc}\big(\frac{n_{i^*}-n_i}{2\sigma}\big)$
5: $\mu_2^{(i)} \leftarrow \sigma \cdot \sqrt{\log 1/q^{(i)}}$, $\mu_1^{(i)} \leftarrow \mu_2^{(i)} + 1$
6: $\epsilon_1^{(i)} \leftarrow \frac{\mu_1^{(i)}}{\sigma^2}$, $\epsilon_2^{(i)} \leftarrow \frac{\mu_2^{(i)}}{\sigma^2}$
7: $q_{ub}^{(i)} \leftarrow \exp((\mu_2^{(i)}-1)\epsilon_2^{(i)}) / \big(\frac{\mu_1^{(i)}}{\mu_1^{(i)}-1} \cdot \frac{\mu_2^{(i)}}{\mu_2^{(i)}-1}\big)^{\mu_2^{(i)}}$
8: if $q^{(i)} < 1$ and $\mu_1^{(i)} \ge \lambda$ and $\mu_2^{(i)} > 1$ and $q^{(i)} \le q_{ub}^{(i)}$ then
9: $A^{(i)} \leftarrow (1-q^{(i)}) \big/ \big(1 - (q^{(i)} \cdot \exp(\epsilon_2^{(i)}))^{\frac{\mu_2^{(i)}-1}{\mu_2^{(i)}}}\big)$
10: $B^{(i)} \leftarrow \exp(\epsilon_1^{(i)}) / (q^{(i)})^{\frac{1}{\mu_1^{(i)}-1}}$
11: DataDependentBound $\leftarrow \frac{1}{\lambda-1} \cdot \log\big((1-q^{(i)}) \cdot (A^{(i)})^{\lambda-1} + q^{(i)} \cdot (B^{(i)})^{\lambda-1}\big)$
12: $\mathcal{B} \leftarrow \mathcal{B} \cup \{\text{DataDependentBound}\}$
13: else
14: GaussianBound $\leftarrow \frac{\lambda}{\sigma^2}$
15: $\mathcal{B} \leftarrow \mathcal{B} \cup \{\text{GaussianBound}\}$
16: end if
17: end for
18: Return $\min \mathcal{B}$
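A Python sketch of the per-candidate computation in Algorithm 6 (the function name is ours, `erfc` comes from SciPy, `n` is assumed to be the vote histogram for a single query, and small guards for degenerate values of $q$ are added; the `log` in the data-dependent candidate follows line 11 as written above):

```python
import math
import numpy as np
from scipy.special import erfc

def data_dependent_bound(n, sigma, lam):
    """One candidate bound following Algorithm 6: the data-dependent bound
    when its condition (line 8) holds, else the Gaussian bound lam / sigma^2."""
    n = np.asarray(n, dtype=float)
    i_star = int(np.argmax(n))
    q = 0.5 * sum(erfc((n[i_star] - n[i]) / (2.0 * sigma))
                  for i in range(len(n)) if i != i_star)
    if not (0.0 < q < 1.0):
        return lam / sigma**2
    mu2 = sigma * math.sqrt(math.log(1.0 / q))
    mu1 = mu2 + 1.0
    eps1, eps2 = mu1 / sigma**2, mu2 / sigma**2
    if mu1 >= lam and mu2 > 1.0:
        q_ub = math.exp((mu2 - 1.0) * eps2) / ((mu1 / (mu1 - 1.0)) * (mu2 / (mu2 - 1.0)))**mu2
        if q <= q_ub:
            A = (1.0 - q) / (1.0 - (q * math.exp(eps2))**((mu2 - 1.0) / mu2))
            B = math.exp(eps1) / q**(1.0 / (mu1 - 1.0))
            return math.log((1.0 - q) * A**(lam - 1.0) + q * B**(lam - 1.0)) / (lam - 1.0)
    return lam / sigma**2
```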
## D.2.2 Additional Results For Private Semi-Supervised Knowledge Transfer

m = 1.

| Dataset | Privacy loss per query (ϵquery, δquery) | # Queries | Total privacy loss over Q queries (ϵtotal, δtotal) |
|---------------|------------------|---------|----------------|
| MNIST | (0.0892, 0.0001) | Q = 20 | (1.704, 0.002) |
| | | Q = 50 | (2.837, 0.005) |
| | | Q = 100 | (4.202, 0.010) |
| Fashion-MNIST | (0.0852, 0.0001) | Q = 20 | (1.620, 0.002) |
| | | Q = 50 | (2.695, 0.005) |
| | | Q = 100 | (3.988, 0.010) |

Table 5: The privacy loss per query to the teachers and the total privacy loss over Q queries. Note the total privacy loss is computed by optimal composition, where we set δ′ = 0.0001.

Dataset: MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.54 (0.11) | 0.68 (0.07) | 0.74 (0.08) |
| Q = 50 | 0.51 (0.07) | 0.67 (0.05) | 0.66 (0.05) |
| Q = 100 | 0.57 (0.03) | 0.71 (0.03) | 0.69 (0.04) |

Dataset: Fashion-MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.56 (0.10) | 0.92 (0.05) | 0.89 (0.06) |
| Q = 50 | 0.52 (0.05) | 0.89 (0.04) | 0.92 (0.03) |
| Q = 100 | 0.56 (0.04) | 0.89 (0.04) | 0.91 (0.04) |

Table 6: Accuracy of the predicted labels of Q query samples on datasets MNIST (top) and Fashion-MNIST (bottom). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset. Note each prediction on the query sample is (ϵtotal, δtotal)-differentially private. With the same per query privacy loss (and hence the same total privacy loss over Q samples), DaRRMγopt achieves the highest accuracy compared to the other two baselines.

m = 5.

| Dataset | Privacy loss per query (ϵquery, δquery) | # Queries | Total privacy loss over Q queries (ϵtotal, δtotal) |
|---------------|------------------|---------|-----------------|
| MNIST | (0.4460, 0.0005) | Q = 20 | (8.920, 0.010) |
| | | Q = 50 | (18.428, 0.025) |
| | | Q = 100 | (28.926, 0.049) |
| Fashion-MNIST | (0.4260, 0.0005) | Q = 20 | (8.520, 0.010) |
| | | Q = 50 | (17.398, 0.025) |
| | | Q = 100 | (27.223, 0.049) |

Table 7: The privacy loss per query to the teachers and the total privacy loss over Q queries. Note the total privacy loss is computed by optimal composition, where we set δ′ = 0.0001.

Dataset: MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.73 (0.11) | 0.76 (0.09) | 0.84 (0.07) |
| Q = 50 | 0.75 (0.07) | 0.82 (0.04) | 0.83 (0.04) |
| Q = 100 | 0.72 (0.04) | 0.79 (0.05) | 0.83 (0.03) |

Dataset: Fashion-MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.72 (0.10) | 0.96 (0.04) | 0.97 (0.04) |
| Q = 50 | 0.72 (0.08) | 0.96 (0.02) | 0.97 (0.02) |
| Q = 100 | 0.72 (0.06) | 0.97 (0.01) | 0.97 (0.01) |

Table 8: Accuracy of the predicted labels of Q query samples on datasets MNIST (top) and Fashion-MNIST (bottom). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset. Note each prediction on the query sample is (ϵtotal, δtotal)-differentially private. With the same per query privacy loss (and hence the same total privacy loss over Q samples), DaRRMγopt achieves the highest accuracy compared to the other two baselines.

m = 7.

| Dataset | Privacy loss per query (ϵquery, δquery) | # Queries | Total privacy loss over Q queries (ϵtotal, δtotal) |
|---------------|------------------|---------|-----------------|
| MNIST | (0.6244, 0.0007) | Q = 20 | (12.488, 0.014) |
| | | Q = 50 | (28.392, 0.035) |
| | | Q = 100 | (45.683, 0.068) |
| Fashion-MNIST | (0.5964, 0.0007) | Q = 20 | (11.928, 0.014) |
| | | Q = 50 | (26.738, 0.035) |
| | | Q = 100 | (42.873, 0.068) |

Table 9: The privacy loss per query to the teachers and the total privacy loss over Q queries. Note the total privacy loss is computed by optimal composition, where we set δ′ = 0.0001.

Dataset: MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.79 (0.07) | 0.80 (0.09) | 0.85 (0.08) |
| Q = 50 | 0.80 (0.05) | 0.82 (0.05) | 0.85 (0.04) |
| Q = 100 | 0.80 (0.04) | 0.80 (0.04) | 0.83 (0.03) |

Dataset: Fashion-MNIST
| # Queries | GNMax (Baseline) | DaRRMγSub (Baseline) | DaRRMγopt (Ours) |
|-----------|------------------|----------------------|------------------|
| Q = 20 | 0.79 (0.07) | 0.95 (0.04) | 0.96 (0.04) |
| Q = 50 | 0.79 (0.05) | 0.96 (0.03) | 0.97 (0.03) |
| Q = 100 | 0.79 (0.03) | 0.96 (0.02) | 0.96 (0.02) |

Table 10: Accuracy of the predicted labels of Q query samples on datasets MNIST (top) and Fashion-MNIST (bottom). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset. Note each prediction on the query sample is (ϵtotal, δtotal)-differentially private. With the same per query privacy loss (and hence the same total privacy loss over Q samples), DaRRMγopt achieves the highest accuracy compared to the other two baselines.
Review 1:
Summary:
The paper considers the problem of releasing a majority vote of $K$ $(\epsilon, \Delta)$-DP algorithms such that the release satisfies $(m\epsilon, \delta)$-DP for $1 \le m \le K$ and $0 \le \Delta \le \delta < 1$. The authors propose a new randomized response (RR) variant, which uses a parameterized noise function. The main novelty in the formulation is that the noise function can be optimized for different queries, which allows RR to add differing amounts of noise depending on the input (i.e., it allows adding less or no noise when there is a clear enough majority). The authors show that the proposed formulation is quite general, and can be in some sense optimal. The paper also includes some empirical experiments showing that the proposed method tends to work better than some existing baselines.

Strengths and Weaknesses:
### Strengths
i) The proposed RR variant seems interesting: it seems to have a nice theoretical motivation (at least when looking at it without properly going through all of the proofs), and the reported performance is good under specific parameter ranges.
ii) General DP mechanism optimality results are very interesting and important, both in theory and for practical use cases.
### Weaknesses
i) The paper is generally not easy to read and could be clearly improved by clarifying the writing (e.g., there are plenty of breaks in the flow of thought [such as the first sentence of Sec 1.1: what question is this talking about?], parts of the related work and many basic defs etc. are only in the appendix, and in several places it is claimed that something is "hard", and then in the next sentence it is claimed that the hard problem can be easily solved). While shortening the paper by pushing most content into the Appendix is somewhat understandable, if lamentable, in a short-format conference paper, TMLR does not have such limits.
ii) I am not sure if all the claims about the optimality and efficiency of the proposed method are well-founded.
iii) There is not much discussion about the scalability of the proposed approach beyond some mentions, or on how the quality of the approximations depends on the available compute. Given that this seems to be maybe the most important bottleneck for the proposed method, this seems a bit odd.

Requested Changes:
In more or less decreasing order of importance:
1) On the optimality and efficiency of the proposed method: in several places (e.g., in the abstract, Sec 1.1 on p2 and p3, Sec 5 on p7, Sec 7 on p11) it is claimed that the noise function $\gamma$ can be optimized efficiently, and as a result, the proposed method achieves optimal utility. However, looking at the comments in Lemma 5.1 about the optimization (Linear Optimization Objective on p7, Appendix E.2), finding the optimal value involves calculating a multidimensional integral, and since this is not really doable with larger $K$, an approximation is discussed in the Appendix and used in the experiments. To me this looks like the proposed method with any larger $K$ is not actually tractable, and hence, when the noise function is only approximately optimal, the proposed method has no optimality guarantees (furthermore, the non-optimality does not seem to be only a theoretical possibility, as under some settings, e.g., in Fig. 8 and in Table 6, the proposed method can perform worse than the baselines).
2) Please move at least the entire Related work discussing the closely-related existing methods to the main body of the text.
3) Please add missing existing RR optimality results in the related work and when discussing your results (at least Kairouz et al. 2015, Holohan et al. 2016).
4) On composition: besides basic and advanced composition, there is also the exact bound from Kairouz et al. 2015. Is there some reason for not using it in the experiments? Please also use the exact composition bound when stating results on the provable privacy amplification (Sec 4).
5) Can you characterize how your optimality results relate to optimal count query release (which also seems like a natural baseline, i.e., directly release the (subsampled) noisy count with the geometric mechanism; see e.g. Kairouz et al. 2015 for an optimality discussion)?
6) Please state the neighborhood relation (e.g. replace?) explicitly, or state that your results do not depend on the exact relation, to avoid misunderstandings.

### References:
Holohan et al. 2016: Optimal Differentially Private Mechanisms for Randomised Response.
Kairouz et al. 2015: The Composition Theorem for Differential Privacy.

Broader Impact Concerns:
I have no broader impact concerns for this paper.

==================================================

Review 2:
Summary:
This manuscript proposes a randomized response framework that optimizes utility for differentially private majority ensembling algorithms. The proposed approach employs a data-dependent noise function, and its utility optimization is evaluated in the context of differentially private label ensembling for image classification applications.

Strengths and Weaknesses:
Strengths
- Studying privacy-utility trade-offs and developing an optimization framework for it is a compelling effort.

Weaknesses
- Limited experimental evaluation
- Some sections require clarification and revision

Requested Changes:
1) Although the authors provide a thorough overview of the background on private aggregation of predictions, the motivation for applying private prediction in data-adaptive settings needs to be better explained. Including a real-world example involving a data-adaptive setting where the prediction ensembling should preserve privacy would significantly enhance the audience's understanding of the importance of the problem.
2) The first sentence of the section "1.1 Our Contributions" needs clarification: "We give a (perhaps surprising) affirmative answer to this question: by using ..". It is not clear which question the authors are referring to.
3) Although this work is theoretically supported, and the authors claim they "demonstrate the strong empirical effectiveness" of the proposed framework, the only datasets used for empirical evaluation are MNIST and Fashion-MNIST. I strongly suggest the authors extend their experiments by including more complex datasets such as SVHN.
4) The "Introduction" section mentions that this work focuses on "data-adaptive settings where the underlying dataset is changing slowly over time." However, it is unclear to me how this setting is incorporated into the experiments conducted on MNIST and Fashion-MNIST. Could you please clarify how the experiments address this online dataset scenario?
5) As a minor point, the readability of the sentence "We emphasize in data-adaptive settings where the underlying.." (on page 2, Introduction section) would be improved by either removing "in" or replacing it with "on".
Broader Impact Concerns:
No concerns about the ethical implications of the work.

==================================================

Review 3:
Summary:
This paper studies a classical problem: computing a differentially private majority from K differentially private algorithms. The authors introduce a Data-dependent Randomized Response Majority (DaRRM) algorithm. They show that an (mϵ, δ)-private majority algorithm with maximal utility can be computed tractably for any m ≤ K by a novel structural result that reduces the privacy constraints. In some settings, they show DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines. They demonstrate the strong empirical effectiveness of the algorithm when compared against several baselines.

Strengths and Weaknesses:
#### Strengths:
They give an effective algorithm for the differentially private majority problem.
#### Weaknesses:
1. The contributions should be summarized briefly and to the point.
2. The notations $\gamma$ and $\gamma_{sub}$ are used inconsistently; in some cases they denote the same thing.
3. Theorem 4.1 says it amplifies privacy by a factor of 2, but the baseline is not clear here.
4. There is no error bound analysis for the algorithm.
5. I have a question about Theorem 4.1. It says that when $m \ge \frac{K+1}{2}$, one can just set $\gamma(l)=1$ to output the true majority with no noise. Can this preserve $m\epsilon$-differential privacy? I tried to understand this. It may be because of Lemma 3.1, which says "the majority of m out of K subsampled mechanisms without replacement and the output of the data-dependent RR algorithm have the same distribution." So the algorithm's output distribution is the same as that of m samples; then how does it achieve $e^{m\epsilon}$?
6. The title says "optimized tradeoff for ...", but the paper does not show or prove that the results are optimal.

Requested Changes:
1. In Algorithm 1 Step 5, it should be $\gamma(L)$ rather than $\gamma(S)$. What is the function for $\gamma(L)$? Is it the $\gamma_{sub}$ in Lemma 3?
2. It says DaRRM has a non-constant success probability $p_\gamma$; the success probability should be at least $p_\gamma$, since Step 10 also contributes $\frac{1}{2}(1-p_\gamma)$ success probability.
3. Why does the probability use some combinations? Please explain intuitively.

Overall, the contribution of this paper is that it gives a better algorithm for the private majority problem compared with traditional algorithms. But the comparisons are not clear; for example, the authors could show the error bound for this algorithm and for previous algorithms.

Broader Impact Concerns:
This paper works on privacy and does not raise ethical problems.

==================================================
# Bigger Is Not Always Better: Scaling Properties Of Latent Diffusion Models

Anonymous authors

Paper under double-blind review

## Abstract

We study the scaling properties of latent diffusion models (LDMs) with an emphasis on their sampling efficiency. While improved network architectures and inference algorithms have been shown to effectively boost the sampling efficiency of diffusion models, the role of model size, a critical determinant of sampling efficiency, has not been thoroughly examined. Through empirical analysis of established text-to-image diffusion models, we conduct an in-depth investigation into how model size influences sampling efficiency across varying sampling steps. Our findings unveil a surprising trend: when operating under a given inference budget, smaller models frequently outperform their larger equivalents in generating high-quality results. Moreover, we extend our study to demonstrate the generalizability of these findings by applying various diffusion samplers, exploring diverse downstream tasks, evaluating post-distilled models, as well as comparing performance relative to training compute. These findings open up new pathways for the development of LDM scaling strategies which can be employed to enhance generative capabilities within limited inference budgets.

## 1 Introduction

Latent diffusion models (LDMs) (Rombach et al., 2022), and diffusion models in general, trained on large-scale, high-quality data (Lin et al., 2014; Schuhmann et al., 2022) have emerged as a powerful and robust framework for generating impressive results in a variety of tasks, including image synthesis and editing (Rombach et al., 2022; Podell et al., 2023; Delbracio & Milanfar, 2023; Ren et al., 2023; Qi et al., 2023), video creation (Mei & Patel, 2023; Mei et al., 2023; Wu et al., 2023; Singer et al., 2022), audio production (Liu et al., 2023a), and 3D synthesis (Lin et al., 2023; Liu et al., 2023b). Despite their versatility, the major barrier against wide deployment in real-world applications (Du et al., 2023; Choi et al., 2023) comes from their low *sampling efficiency*. The essence of this challenge lies in the inherent reliance of LDMs on multi-step sampling (Song et al., 2021b; Ho et al., 2020) to produce high-quality outputs, where the total cost of sampling is the product of the number of sampling steps and the cost of each step. Specifically, the go-to approach involves using 50-step DDIM sampling (Song et al., 2021a; Rombach et al., 2022), a process that, despite ensuring output quality, still requires a relatively long latency for completion on modern mobile devices with post-quantization. In contrast to single-shot generative models (e.g., generative adversarial networks (GANs) (Goodfellow et al., 2020)), which bypass the need for iterative refinement (Goodfellow et al., 2020; Karras et al., 2019), the operational latency of LDMs creates a pressing need for efficiency optimization to further facilitate their practical applications.

Recent advancements in this field (Li et al., 2023; Zhao et al., 2023; Peebles & Xie, 2023; Kim et al., 2023b;a; Choi et al., 2023) have primarily focused on developing faster network architectures with comparable model size to reduce the inference time per step, along with innovations in improving sampling algorithms that allow for using fewer sampling steps (Song et al., 2021a; Dockhorn et al., 2022; Karras et al., 2022; Lu et al., 2022a; Liu et al., 2023c; Xu et al., 2023).
Further progress has been made through diffusion-distillation techniques (Luhman & Luhman, 2021; Salimans & Ho, 2022; Song et al., 2023; Sauer et al., 2023b; Gu et al., 2023; Mei et al., 2024), which simplify the process by learning the multi-step sampling result in a single forward pass, and then broadcast this single-step prediction multiple times. These distillation techniques leverage the redundant learning capability in LDMs, enabling the distilled models to assimilate additional distillation knowledge. Despite these efforts to improve diffusion models, the sampling efficiency of smaller, less redundant models has not received adequate attention. A significant barrier to this area of research is the scarcity of available modern accelerator clusters (Jouppi et al., 2023), as training high-quality text-to-image (T2I) LDMs from scratch is both time-consuming and expensive, often requiring several weeks and hundreds of thousands of dollars.

In this paper, we empirically investigate the scaling properties of LDMs, with a particular focus on understanding how their scaling properties impact sampling efficiency across various model sizes. We trained a suite of 12 text-to-image LDMs from scratch, ranging from 39 million to 5 billion parameters, under a constrained budget. Example results are depicted in Fig. 1. All models were trained on TPUv5 using internal data sources with about 600 million aesthetically-filtered text-to-image pairs. Our study reveals that there exists a scaling trend within LDMs, notably that smaller models may have the capability to surpass larger models under an equivalent sampling budget. Furthermore, we investigate how the size of pre-trained text-to-image LDMs affects their sampling efficiency across diverse downstream tasks, such as real-world super-resolution (Saharia et al., 2022; Sahak et al., 2023) and subject-driven text-to-image synthesis (i.e., DreamBooth) (Ruiz et al., 2023).

## 1.1 Summary

Our key findings for scaling latent diffusion models in text-to-image generation and various downstream tasks are as follows:

Pretraining performance scales with training compute. We demonstrate a clear link between compute resources and LDM performance by scaling models from 39 million to 5 billion parameters. This suggests potential for further improvement with increased scaling. See Section 3.1 for details.

Downstream performance scales with pretraining. We demonstrate a strong correlation between pretraining performance and success in downstream tasks. Smaller models, even with extra training, cannot fully bridge the gap created by the pretraining quality of larger models. This is explored in detail in Section 3.2.

Smaller models sample more efficiently. Smaller models initially outperform larger models in image quality for a given sampling budget, but larger models surpass them in detail generation when computational constraints are relaxed. This is further elaborated in Section 3.3.1 and Section 3.3.2.

Sampler does not change the scaling efficiency. Smaller models consistently demonstrate superior sampling efficiency, regardless of the diffusion sampler used. This holds true for deterministic DDIM (Song et al., 2021a), stochastic DDPM (Ho et al., 2020), and higher-order DPM-Solver++ (Lu et al., 2022b). For more details, see Section 3.4.

Smaller models sample more efficiently on downstream tasks with fewer steps. The advantage of smaller models in terms of sampling efficiency extends to downstream tasks when using fewer than 20 sampling steps.
This is further elaborated in Section 3.5.

Diffusion distillation does not change scaling trends. Even with diffusion distillation, smaller models maintain competitive performance against larger distilled models when sampling budgets are constrained. This suggests distillation does not fundamentally alter scaling trends. See Section 3.6 for in-depth analysis.

## 2 Related Work

Scaling laws. Recent Large Language Models (LLMs), including GPT (Brown et al., 2020), PaLM (Anil et al., 2023), and LLaMa (Touvron et al., 2023), have dominated language generative modeling tasks. The foundational works (Kaplan et al., 2020; Brown et al., 2020; Hoffmann et al., 2022) investigating their scaling behavior have shown the capability of predicting performance from model size. They also investigated the factors that affect the scaling properties of language models, including training compute, dataset size and quality, learning rate schedule, etc. Those experimental clues have effectively guided later language model development, which has led to the emergence of several parameter-efficient LLMs (Hoffmann et al., 2022; Touvron et al., 2023; Zhou et al., 2023; Alabdulmohsin et al., 2024). However, scaling generative text-to-image models is relatively unexplored, and existing efforts have only investigated the scaling properties on small datasets or small models, like scaling UNet (Nichol & Dhariwal, 2021) to 270 million parameters and DiT (Peebles & Xie, 2023) on ImageNet (14 million), or less-efficient autoregressive models (Chen et al., 2020). Different from these attempts, our work investigates the scaling properties by scaling down the efficient and capable diffusion models, i.e., LDMs (Rombach et al., 2022), on internal data sources that contain about 600 million aesthetics-filtered text-to-image pairs, featuring the sampling efficiency of scaled LDMs. We also scale LDMs in various scenarios, such as finetuning LDMs on downstream tasks (Wang et al., 2021; Ruiz et al., 2023) and distilling LDMs (Mei et al., 2024) for faster sampling, to demonstrate the generalizability of the scaled sampling-efficiency.

| Params | 39M | 83M | 145M | 223M | 318M | 430M | 558M | 704M | 866M | 2B | 5B |
|-------------|-------|-------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Filters (c) | 64 | 96 | 128 | 160 | 192 | 224 | 256 | 288 | 320 | 512 | 768 |
| GFLOPS | 25.3 | 102.7 | 161.5 | 233.5 | 318.5 | 416.6 | 527.8 | 652.0 | 789.3 | 1887.5 | 4082.6 |
| Norm. Cost | 0.07 | 0.13 | 0.20 | 0.30 | 0.40 | 0.53 | 0.67 | 0.83 | 1.00 | 2.39 | 5.17 |
| FID ↓ | 25.30 | 24.30 | 24.18 | 23.76 | 22.83 | 22.35 | 22.15 | 21.82 | 21.55 | 20.98 | 20.14 |
| CLIP ↑ | 0.305 | 0.308 | 0.310 | 0.310 | 0.311 | 0.312 | 0.312 | 0.312 | 0.312 | 0.312 | 0.314 |

Table 1: We scale the baseline LDM (i.e., 866M Stable Diffusion v1.5) by changing the base number of channels c that controls the rest of the U-Net architecture as [c, 2c, 4c, 4c] (see Fig. 2). GFLOPS are measured for an input latent of shape 64 × 64 × 4 with FP32. We also show the normalized running cost with respect to the baseline model. The text-to-image performance (FID and CLIP scores) of our scaled LDMs is evaluated on the COCO-2014 validation set with 30k samples, using 50-step DDIM sampling. It is worth noting that all the model sizes, and the training and inference costs reported in this work, refer only to the denoising UNet in the latent space, and do not include the 1.4B text encoder or the 250M latent encoder and decoder.
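To make Table 1's width scheme concrete, the following is a minimal sketch using the Hugging Face diffusers library; this is our illustration rather than the paper's internal training code, and the cross-attention width of 768 is an assumption matching SD v1.5's CLIP text encoder.

```python
# assuming the Hugging Face `diffusers` library
from diffusers import UNet2DConditionModel

def scaled_unet(c):
    """Build a denoising UNet whose block widths follow [c, 2c, 4c, 4c]."""
    return UNet2DConditionModel(
        sample_size=64, in_channels=4, out_channels=4,
        block_out_channels=(c, 2 * c, 4 * c, 4 * c),
        cross_attention_dim=768,  # assumption: CLIP ViT-L text width, as in SD v1.5
    )

unet = scaled_unet(320)  # c = 320 recovers the SD v1.5 width pattern (cf. Table 1)
print(sum(p.numel() for p in unet.parameters()))
```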
Efficient diffusion models. Nichol et al. (Nichol & Dhariwal, 2021) show that the generative performance of diffusion models improves as the model size increases. Based on this preliminary observation, the model size of widely used LDMs, e.g., Stable Diffusion (Rombach et al., 2022), has been empirically increased to billions of parameters (Ramesh et al., 2022; Podell et al., 2023). However, such a large model makes it impossible to fit into the common inference budget of practical scenarios. Recent work on improving the sampling efficiency focuses on improving network architectures (Li et al., 2023; Zhao et al., 2023; Peebles & Xie, 2023; Kim et al., 2023b;a; Choi et al., 2023) or the sampling procedures (Song et al., 2021a; Dockhorn et al., 2022; Karras et al., 2022; Lu et al., 2022a; Liu et al., 2023c; Xu et al., 2023). We explore sampling efficiency by training smaller, more compact LDMs. Our analysis involves scaling down the model size, training from scratch, and comparing performance at equivalent inference cost.

Efficient non-diffusion generative models. Compared to diffusion models, other generative models such as Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende & Mohamed, 2015; Makhzani et al., 2015; Vahdat & Kautz, 2020), Generative Adversarial Networks (GANs) (Goodfellow et al., 2020; Mao et al., 2017; Karras et al., 2019; Reed et al., 2016; Miyato et al., 2018), and Masked Models (Devlin et al., 2019; Raffel et al., 2020; He et al., 2022; Chang et al., 2022; 2023) are more efficient, as they rely less on an iterative refinement process. Sauer et al. (Sauer et al., 2023a) recently scaled up StyleGAN (Karras et al., 2019) to 1 billion parameters and demonstrated the effectiveness of single-step GANs in modeling text-to-image generation. Chang et al. (Chang et al., 2023) scaled up masked transformer models for text-to-image generation. These non-diffusion generative models can generate high-quality images with less inference cost, requiring fewer sampling steps than diffusion models and autoregressive models, but they need more parameters, i.e., 4 billion parameters.
![4_image_1.png](4_image_1.png) Figure 3: In text-to-image generation, we observe consistent trends across various model sizes in how quality metrics (FID and CLIP scores) relate to training compute (i.e., the total GFLOPS spend on training). Under moderate training resources, training compute is the most relevant factor dominating quality. ## 3 Scaling Ldms We developed a family of powerful Latent Diffusion Models (LDMs) built upon the widely-used 866M Stable Diffusion v1.5 standard (Rombach et al., 2022) 1. The denoising UNet of our models offers a flexible range of sizes, with parameters spanning from 39M to 5B. We incrementally increase the number of filters in the residual blocks while maintaining other architecture elements the same, enabling a predictably controlled scaling. Table 1 shows the architectural differences among our scaled models. We also provide the relative cost of each model against the baseline model. Fig. 2 shows the architectural differences during scaling. Models were trained using the web-scale aesthetically filtered text-to-image dataset, i.e., WebLI (Chen et al., 2022). All the models are trained for 500K steps, batch size 2048, and learning rate 1e-4. This allows for all the models to have reached a point where we observe diminishing returns. Fig. 1 demonstrates the consistent generation capabilities across our scaled models. We used the common practice of 50 sampling steps with the DDIM sampler, 7.5 classifier-free guidance rate, for text-to-image generation. The visual quality of the results exhibits a clear improvement as model size increases. In order to evaluate the performance of the scaled models, we test the text-to-image performance of scaled models on the validation set of COCO 2014 (Lin et al., 2014) with 30k samples. For downstream performance, 1We adopted SD v1.5 since it is among the most popular diffusion models https://huggingface.co/models?sort=likes. ![5_image_0.png](5_image_0.png) Figure 4: In 4× real image super-resolution, FID and LPIPS scores reveal an interesting divergence. Model size drives FID score improvement, while training compute most impacts LPIPS score. Despite this, visual assessment (Fig. 5) confirms the importance of model size for superior detail recovery (similarly as observed in the text-to-image pretraining). specifically real-world super-resolution, we test the performance of scaled models on the validation of DIV2K with 3k randomly cropped patches, which are degraded with the RealESRGAN degradation (Wang et al., 2021). ## 3.1 Training Compute Scales Text-To-Image Performance We find that our scaled LDMs, across various model sizes, exhibit similar trends in generative performance relative to training compute cost, especially after training stabilizes, which typically occurs after 200K iterations. These trends demonstrate a smooth scaling in learning capability between different model sizes. To elaborate, Fig. 3 illustrates a series of training runs with models varying in size from 39 million to 5 billion parameters, where the training compute cost is quantified as the product of relative cost shown in Table 1 and training iterations. Model performance is evaluated by using the same sampling steps and sampling parameters. In scenarios with moderate training compute (i.e., < 1G, see Fig. 3), the generative performance of T2I models scales well with additional compute resources. 
## 3.2 Pretraining Scales Downstream Performance Using scaled models based on their pretraining on text-to-image data, we finetune these models on the downstream tasks of real-world super-resolution (Saharia et al., 2022; Sahak et al., 2023) and DreamBooth (Ruiz et al., 2023). The performance of these pretrained models is shown in Table. 1. In the left panel of Fig. 4, we present the generative performance FID versus training compute on the super-resolution (SR) task. It can be seen that the performance of SR models is more dependent on the model size than training compute. Our results demonstrate a clear limitation of smaller models: they cannot reach the same performance levels as larger models, regardless of training compute. While the distortion metric LPIPS shows some inconsistencies compared to the generative metric FID (Fig. 4), Fig. 5 clearly demonstrates that larger models excel in recovering fine-grained details compared to smaller models. The key takeaway from Fig. 4 is that large super-resolution models achieve superior results even after short finetuning periods compared to smaller models. This suggests that pretraining performance (dominated by the pretraining model sizes) has a greater influence on the super-resolution FID scores than the duration of finetuning (i.e., training compute for finetuning). Furthermore, we compare the visual results of the DreamBooth finetuning on the different models in Fig. 6. We observe a similar trend between visual quality and model size. Please see our supplement for more discussions on the other quality metrics. ![6_image_0.png](6_image_0.png) Figure 5: In 4× super-resolution, visual quality directly improves with increased model size. As these scaled models vary in pretraining performance, the results clearly demonstrate that pretraining boosts superresolution capabilities in both quantitative (Fig 4) and qualitative ways. *Additional results are given in* supplementary material. ## 3.3 Scaling Sampling-Efficiency 3.3.1 Analyzing The Effect Of Cfg Rate. Text-to-image generative models require nuanced evaluation beyond single metrics. Sampling parameters are vital for customization, with the Classifier-Free Guidance (CFG) rate (Ho & Salimans, 2022) directly influencing the balance between visual fidelity and semantic alignment with text prompt. Rombach et al. (Rombach et al., 2022) experimentally demonstrate that different CFG rates result in different CLIP and FID scores. In this study, we find that CFG rate as a sampling parameter yields inconsistent results across different model sizes. Hence, it is interesting to quantitatively determine the *optimal* CFG rate for each model size and sampling steps using either FID or CLIP score. We demonstrate this by sampling the scaled models using different CFG rates, i.e., (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and comparing their quantitative and qualitative results. In Fig. 7, we present visual results of two models under varying CFG rates, highlighting the impact on the visual quality. We observed that changes in CFG rates impact visual quality more significantly than prompt semantic accuracy and therefore opted to use the FID score for quantitative determination of the optimal CFG rate. performance. Fig. 8 shows how different classifier-free guidance rates affect the FID scores in text-to-image generation (see figure caption for more details). ## 3.3.2 Scaling Efficiency Trends. 
Using the optimal CFG rates established for each model at various number of sampling steps, we analyze the optimal performance to understand the sampling efficiency of different LDM sizes. Specifically, in Fig. 9, we present a comparison between different models and their optimal performance given the sampling cost (normalized cost × sampling steps). By tracing the points of optimal performance across various sampling cost—represented by the dashed vertical line—we observe a consistent trend: smaller models frequently outperform larger models across a range of sampling cost in terms of FID scores. Furthermore, to visually ![7_image_0.png](7_image_0.png) Figure 6: Visualization of the Dreambooth results shows two distinct tiers based on model size. Smaller models (83M-223M) perform similarly, as do larger ones (318M-2B), with a clear quality advantage for the larger group. *Additional results are given in supplementary material.* ![7_image_1.png](7_image_1.png) (b) 50-step sampling results of the 866M model Figure 7: Visualization of text-to-image results with different CFG rates (from left to right in each row: (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)). The prompt used is "*A raccoon wearing formal clothes, wearing a top hat* and holding a cane. Oil painting in the style of Rembrandt.". We observe that changes in CFG rates impact visual quality more significantly than the prompt semantic accuracy. We use the FID score for quantitative determination of optimal sampling performance (Fig. 8) because it directly measures visual quality, unlike the CLIP score, which focuses on semantic similarity. substantiate better-quality results generated by smaller models against larger ones, Fig. 10 compares the results of different scaled models, which highlights that the performance of smaller models can indeed match their larger counterparts under similar sampling cost conditions. *Please see our supplement for more visual* comparisons. ![8_image_0.png](8_image_0.png) Figure 8: The impact of the CFG rate on text-to-image generation depends on the model size and sampling steps. As demonstrated in the left and center panels, the optimal CFG rate changes as the sampling steps increased. To determine the optimal performance (according to the FID score) of each model and each sampling steps, we systematically sample the model at various CFG rates and identify the best one. As a reference of the optimal performance, the right panel shows the CFG rate corresponding to the optimal performance of each model for a given number of sampling steps. ![8_image_1.png](8_image_1.png) Figure 9: Comparison of text-to-image performance of models with varying sizes. The left figure shows the relationship between sampling cost (normalized cost × sampling steps) and sampling steps for different model sizes. The right figure plots the text-to-image FID score as a function of the sampling cost for the same models. Key Observation: Smaller models achieve better FID scores than larger models for a fixed sampling cost. For instance, at a cost of 3, the 83M model achieves the best FID compared to the larger models. This suggests that smaller models can be more efficient in achieving good results with lower costs. ## 3.4 Scaling Sampling-Efficiency In Different Samplers To assess the generalizability of observed scaling trends in sampling efficiency, we compared scaled LDM performance using different diffusion samplers. 
In addition to the default DDIM sampler, we employed two representative alternatives: the stochastic DDPM sampler (Ho et al., 2020) and the high-order DPMSolver++ (Lu et al., 2022b). Experiments illustrated in Fig. 11 reveal that the DDPM sampler typically produces lower-quality results than DDIM with fewer sampling steps, while the DPM-Solver++ sampler generally outperforms DDIM in image quality (see the figure caption for details). Importantly, we observe consistent sampling-efficiency trends with the DDPM and DPM-Solver++ sampler as seen with the default DDIM: smaller models tend to achieve better performance than larger models under the same sampling cost. Since the DPM-Solver++ sampler is not designed for use beyond 20 steps, we focused our testing within this range. This finding demonstrates that the scaling properties of LDMs remain consistent regardless of the diffusion sampler used. ![9_image_0.png](9_image_0.png) Figure 10: Text-to-image results of the scaled LDMs under approximately the same inference cost (normalized cost × sampling steps). Smaller models can produce comparable or even better visual results than larger models under similar sampling cost. ## 3.5 Scaling Downstream Sampling-Efficiency Here, we investigate the scaling sampling-efficiency of LDMs on downstream tasks, specifically focusing on the super-resolution task. Unlike our earlier discussions on optimal sampling performance, there is limited literature demonstrating the positive impacts of SR performance without using classifier-free guidance. Thus, our approach directly uses the SR sampling result without applying classifier-free guidance. Inspired from Fig. 4, where the scaled downstream LDMs have significant performance difference in 50-step sampling, we investigate sampling efficiency from two different aspects, i.e., fewer sampling steps [4, 20] and more ![10_image_0.png](10_image_0.png) Figure 11: *Left*: Text-to-image performance FID as a function of the sampling cost (normalized cost × sampling steps) for the DDPM sampler (solid curves) and the DDIM sampler (dashed curves). *Right*: Textto-image performance FID as a function of the sampling cost for the second-order DPM-Solver++ sampler (solid curves) and the DDIM sampler (dashed curves). Suggested by the trends shown in Fig. 9, we only show the sampling steps ≤ 50 as using more steps does not improve the performance. ![10_image_1.png](10_image_1.png) Figure 12: Super-resolution performance vs. sampling cost for different model sizes. *Left:* FID scores of super-resolution models under limited sampling steps (less than or equal to 20). Smaller models tend to achieve lower (better) FID scores within this range. *Right:* FID scores of super-resolution models under a larger number of sampling steps (greater than 20). Performance differences between models become less pronounced as sampling steps increase. sampling steps (20, 250]. As shown in the left part of Fig. 12, the scaling sampling-efficiency still holds in the SR tasks when the number of sampling steps is less than or equal to 20 steps. Beyond this threshold, however, larger models demonstrate greater sampling-efficiency than smaller models, as illustrated in the right part of Fig. 12. This observation suggests the consistent sampling efficiency of scaled models on fewer sampling steps from text-to-image generation to super-resolution tasks. ## 3.6 Scaling Sampling-Efficiency In Distilled Ldms. 
## 3.5 Scaling Downstream Sampling-Efficiency

Here, we investigate the scaling sampling-efficiency of LDMs on downstream tasks, specifically focusing on the super-resolution (SR) task. Unlike our earlier discussions of optimal sampling performance, there is limited literature demonstrating that classifier-free guidance benefits SR performance; we therefore use the SR sampling results directly, without applying classifier-free guidance. Inspired by Fig. 4, where the scaled downstream LDMs show significant performance differences in 50-step sampling, we investigate sampling efficiency in two regimes, i.e., fewer sampling steps [4, 20] and more sampling steps (20, 250]. As shown in the left part of Fig. 12, the scaling sampling-efficiency still holds for the SR task when the number of sampling steps is less than or equal to 20. Beyond this threshold, however, larger models demonstrate greater sampling-efficiency than smaller models, as illustrated in the right part of Fig. 12. This observation suggests that the sampling efficiency of scaled models at fewer sampling steps carries over from text-to-image generation to super-resolution.

![10_image_1.png](10_image_1.png)

Figure 12: Super-resolution performance vs. sampling cost for different model sizes. *Left:* FID scores of super-resolution models under limited sampling steps (less than or equal to 20). Smaller models tend to achieve lower (better) FID scores within this range. *Right:* FID scores of super-resolution models under a larger number of sampling steps (greater than 20). Performance differences between models become less pronounced as sampling steps increase.

## 3.6 Scaling Sampling-Efficiency In Distilled LDMs

So far, we have characterized the scaling sampling-efficiency of latent diffusion models, showing that smaller models exhibit higher sampling efficiency. A notable caveat, however, is that smaller models typically imply reduced modeling capability. This poses a challenge for recent diffusion distillation methods (Luhman & Luhman, 2021; Salimans & Ho, 2022; Song et al., 2023; Sauer et al., 2023b; Gu et al., 2023; Mei et al., 2024; Luo et al., 2023; Lin et al., 2024) that heavily depend on modeling capability. One might therefore expect the opposite conclusion, namely that distilled large models sample more efficiently than distilled small models. To assess the sampling efficiency of scaled models after distillation, we distill our previously scaled models with conditional consistency distillation (Song et al., 2023; Mei et al., 2024) on text-to-image data and compare the distilled models at their optimal performance. *Please see our supplement for more distillation details.*

![11_image_0.png](11_image_0.png)

Figure 13: Distillation improves text-to-image performance and scalability. *Left:* Distilled Latent Diffusion Models (LDMs) consistently exhibit lower (better) FID scores compared to their undistilled counterparts across varying model sizes. The consistent acceleration factor (approx. 5×) indicates that the benefits of distillation scale well with model size. *Right:* Distilled models using only 4 sampling steps achieve FID scores comparable to undistilled models using significantly more steps. Interestingly, at a sampling cost of 7, the distilled 866M model performs similarly to the smaller, undistilled 83M model, suggesting improved efficiency. See the supplement for visual examples demonstrating the quality improvements achieved through distillation.

To elaborate, we test all distilled models with the same 4-step sampling, which is shown to achieve the best sampling performance; we then compare each distilled model with its undistilled counterpart at the same normalized sampling cost. We follow the same practice discussed in Section 3.3.1 for selecting the optimal CFG rate and compare the models under the same relative inference cost. The results shown in the left part of Fig. 13 demonstrate that distillation significantly improves the generative performance of all models in 4-step sampling, with FID improvements across the board. By comparing these distilled models with the undistilled models in the right part of Fig. 13, we find that distilled models outperform undistilled models at the same sampling cost. However, at a specific sampling cost (≈ 8), the smaller undistilled 83M model still achieves performance similar to the larger distilled 866M model. This observation further supports the scaling sampling-efficiency of LDMs, which continues to hold under diffusion distillation.
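As a concrete illustration of this cost-matched comparison, the sketch below computes how many sampling steps an undistilled model can afford at the same normalized cost as a distilled model running few-step sampling. The `NORMALIZED_COST` values are the same hypothetical placeholders used in the earlier sketch, not our measured costs.

```python
# Hypothetical per-step inference costs, normalized to the largest model
# (placeholders, as in the earlier sketch; not measured values).
NORMALIZED_COST = {"83M": 0.07, "223M": 0.16, "318M": 0.33, "558M": 0.65, "866M": 1.0}

def cost_matched_steps(distilled_model: str, undistilled_model: str,
                       distilled_steps: int = 4) -> int:
    """Sampling steps an undistilled model can take at the same normalized
    sampling cost as a distilled model using few-step sampling."""
    budget = NORMALIZED_COST[distilled_model] * distilled_steps
    return max(1, int(budget / NORMALIZED_COST[undistilled_model]))

# Example: with these placeholder costs, a distilled 866M model sampled in
# 4 steps (cost 4.0) matches the budget of ~57 steps of the undistilled 83M
# model, the kind of pairing compared in the right part of Fig. 13.
print(cost_matched_steps("866M", "83M"))  # -> 57
```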
## 4 Conclusion

In this paper, we investigated the scaling properties of Latent Diffusion Models (LDMs), specifically by scaling the model size from 39 million to 5 billion parameters. We trained these scaled models from scratch on a web-scale text-to-image dataset and then finetuned the pretrained models for downstream tasks. Our findings reveal that, under identical sampling costs, smaller models frequently outperform larger models, suggesting a promising direction for accelerating LDMs in terms of model size. We further show that this sampling efficiency is consistent across multiple axes: for example, it is invariant to the choice of diffusion sampler (stochastic or deterministic), and it also holds for distilled models. We believe this analysis of scaling sampling-efficiency will be instrumental in guiding future developments of LDMs, specifically for balancing model size against performance and efficiency in a broad spectrum of practical applications.

Limitations. This work relies on visual quality inspection alongside established metrics such as FID and CLIP scores. We opted to avoid human evaluations due to the immense number of combinations required for the more than 1000 variants considered in this study. However, it is important to acknowledge the potential discrepancy between visual quality and quantitative metrics, which is actively discussed in recent works (Zhang et al., 2021; Jayasumana et al., 2024; Cho et al., 2023). Our claims regarding the scalability of latent diffusion models are made specifically for the particular model family studied in this work. Extending this analysis to other model families, particularly those incorporating transformer-based backbones (Peebles & Xie, 2023; Mei et al., 2023; Ma et al., 2024; Esser et al., 2024), would be a valuable direction for future research.

## References

Ibrahim M Alabdulmohsin, Xiaohua Zhai, Alexander Kolesnikov, and Lucas Beyer. Getting vit in shape: Scaling laws for compute-optimal model design. *Advances in Neural Information Processing Systems*, 2024.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. *ArXiv preprint*, 2023.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.

Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman. Maskgit: Masked generative image transformer. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022*, 2022.

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. *ArXiv preprint*, 2023.

Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, Proceedings of Machine Learning Research, 2020.

Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. *ArXiv preprint*, 2022.

Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, and Su Wang. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation. *ArXiv preprint*, 2023.
Jiwoong Choi, Minkyu Kim, Daehyun Ahn, Taesu Kim, Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Jae-Joon Kim, and Hyungjun Kim. Squeezing large-scale diffusion models for mobile. *ArXiv preprint*, 2023.

Mauricio Delbracio and Peyman Milanfar. Inversion by direct iteration: An alternative to denoising diffusion for image restoration. *Transactions on Machine Learning Research*, 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, 2019.

Tim Dockhorn, Arash Vahdat, and Karsten Kreis. Genie: Higher-order denoising diffusion solvers. *Advances in Neural Information Processing Systems*, 2022.

Hongyang Du, Ruichen Zhang, Dusit Niyato, Jiawen Kang, Zehui Xiong, Dong In Kim, Xuemin Sherman Shen, and H Vincent Poor. Exploring collaborative distributed diffusion-based ai-generated content (aigc) in wireless networks. *IEEE Network*, (99), 2023.

Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In *Forty-first International Conference on Machine Learning*, 2024.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, (11), 2020.

Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Lingjie Liu, and Joshua M Susskind. Boot: Data-free distillation of denoising diffusion models with bootstrapping. In *ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling*, 2023.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022*, 2022.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *ArXiv preprint*, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. *ArXiv preprint*, 2022.

Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, and Sanjiv Kumar. Rethinking fid: Towards a better evaluation metric for image generation. *ArXiv preprint*, 2024.

Norm Jouppi, George Kurian, Sheng Li, Peter Ma, Rahul Nagarajan, Lifeng Nai, Nishant Patil, Suvinay Subramanian, Andy Swing, Brian Towles, et al. Tpu v4: An optically reconfigurable supercomputer for machine learning with hardware support for embeddings. In *Proceedings of the 50th Annual International Symposium on Computer Architecture*, 2023.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *ArXiv preprint*, 2020.

Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019*, 2019.

Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. *Advances in Neural Information Processing Systems*, 2022.

Bo-Kyeong Kim, Hyoung-Kyu Song, Thibault Castells, and Shinkook Choi. On architectural compression of text-to-image diffusion models. *ArXiv preprint*, 2023a.

Bo-Kyeong Kim, Hyoung-Kyu Song, Thibault Castells, and Shinkook Choi. Bk-sdm: Architecturally compressed stable diffusion for efficient text-to-image generation. In *Workshop on Efficient Systems for Foundation Models @ ICML2023*, 2023b.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In *2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings*, 2014.

Yanyu Li, Huan Wang, Qing Jin, Ju Hu, Pavlo Chemerys, Yun Fu, Yanzhi Wang, Sergey Tulyakov, and Jian Ren. Snapfusion: Text-to-image diffusion model on mobile devices within two seconds. *ArXiv preprint*, 2023.

Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2023.

Shanchuan Lin, Anran Wang, and Xiao Yang. Sdxl-lightning: Progressive adversarial diffusion distillation. *ArXiv preprint*, 2024.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *ECCV*, 2014.

Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley. Audioldm: Text-to-audio generation with latent diffusion models. *ArXiv preprint*, 2023a.

Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023b.

Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, and Qiang Liu. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. *ArXiv preprint*, 2023c.

Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. *Advances in Neural Information Processing Systems*, 2022a.

Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. *ArXiv preprint*, 2022b.

Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. *ArXiv preprint*, 2021.

Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. *ArXiv preprint*, 2023.

Nanye Ma, Mark Goldstein, Michael S Albergo, Nicholas M Boffi, Eric Vanden-Eijnden, and Saining Xie. Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. *ArXiv preprint*, 2024.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. *ArXiv preprint*, 2015.

Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In *IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017*, 2017.

Kangfu Mei and Vishal Patel. Vidm: Video implicit diffusion models. In *Proceedings of the AAAI Conference on Artificial Intelligence*, number 8, 2023.

Kangfu Mei, Mo Zhou, and Vishal M Patel. T1: Scaling diffusion probabilistic fields to high-resolution on unified visual modalities. *ArXiv preprint*, 2023.

Kangfu Mei, Mauricio Delbracio, Hossein Talebi, Zhengzhong Tu, Vishal M Patel, and Peyman Milanfar. Codi: Conditional diffusion distillation for higher-fidelity and faster image generation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2024.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings*, 2018.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In *Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event*, Proceedings of Machine Learning Research, 2021.

William Peebles and Saining Xie. Scalable diffusion models with transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023.

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. *ArXiv preprint*, 2023.

Chenyang Qi, Zhengzhong Tu, Keren Ye, Mauricio Delbracio, Peyman Milanfar, Qifeng Chen, and Hossein Talebi. Tip: Text-driven image processing with semantic and restoration instructions. *ArXiv preprint*, 2023.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 2020.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *ArXiv preprint*, 2022.

Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In *Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016*, JMLR Workshop and Conference Proceedings, 2016.

Mengwei Ren, Mauricio Delbracio, Hossein Talebi, Guido Gerig, and Peyman Milanfar. Multiscale structure guided diffusion for image deblurring. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015*, JMLR Workshop and Conference Proceedings, 2015.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2022.

Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2023.

Hshmat Sahak, Daniel Watson, Chitwan Saharia, and David Fleet. Denoising diffusion probabilistic models for robust image super-resolution in the wild. *ArXiv preprint*, 2023.

Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, (4), 2022.

Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In *The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022*, 2022.

Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, and Timo Aila. Stylegan-t: Unlocking the power of gans for fast large-scale text-to-image synthesis. *ArXiv preprint*, 2023a.

Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation. *ArXiv preprint*, 2023b.

Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. *Advances in Neural Information Processing Systems*, 2022.

Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. *ArXiv preprint*, 2022.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*, 2021a.

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*, 2021b.

Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. 2023.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. *ArXiv preprint*, 2023.

Arash Vahdat and Jan Kautz. NVAE: A deep hierarchical variational autoencoder. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.

Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In *IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021, Montreal, BC, Canada, October 11-17, 2021*, 2021.

Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, 2023.
Yanwu Xu, Yang Zhao, Zhisheng Xiao, and Tingbo Hou. Ufogen: You forward once large scale text-to-image generation via diffusion gans. *ArXiv preprint*, 2023.

Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. *ArXiv preprint*, 2022.

Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In *IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021*, 2021.

Yang Zhao, Yanwu Xu, Zhisheng Xiao, and Tingbo Hou. Mobilediffusion: Subsecond text-to-image generation on mobile devices. *ArXiv preprint*, 2023.

Yanqi Zhou, Nan Du, Yanping Huang, Daiyi Peng, Chang Lan, Da Huang, Siamak Shakeri, David So, Andrew M Dai, Yifeng Lu, et al. Brainformers: Trading simplicity for efficiency. In *International Conference on Machine Learning*. PMLR, 2023.